I've been posting about the UltraZed-EV roadtest and realized that I should have provided an overview first to put my posts in context.


I've wanted to design a home security/video surveillance system for a while.  I have a mix of different IP cameras that I use for monitoring and an NVR that I use for recording.  I prefer not to have a Cloud-based system because of poor WAN reliability, high latency, unknown data security, and high subscription cost.  I also have a conglomeration of sensors and miscellaneous home automation components that I'd like to incorporate.  I'd like a system that can do some intelligent processing: detection and classification on the camera video streams, and correlation between camera inputs and sensors.  One of the key constraints is having enough video processing bandwidth to handle multiple video streams simultaneously with low latency.  I've tried designs using software codecs and have been disappointed with the performance on even a single full HD stream at high frame rate (1920x1080p@60fps); in fact, I've only been able to achieve frame rates below 30fps just transcoding a single stream.


The UltraZed-EV Starter Kit seems like an ideal fit for my application.  It has a hardened VCU (Video Codec Unit) that is capable of simultaneous decode and encode of a 4K@60fps video stream in H.264/H.265 formats.  The VCU bandwidth can be partitioned across up to 32 video streams; for example, it can handle four streams at 1920x1080p@60fps or eight streams at 1920x1080p@30fps.  The VCU has an interesting feature where you can define regions of interest (ROI) within the video frame and specify a higher quality of service (QoS) within those regions.  The carrier card provides connectors for HDMI in/out and DisplayPort output, and it also has a SATA port that can be used for external storage.  It's the perfect platform for an intelligent NVR.  I'll describe my roadtest project in more detail in another post.
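As a quick sanity check on how the VCU budget partitions across streams, here's a minimal sketch using simple pixel-rate arithmetic.  This is just my back-of-the-envelope model of the budget, not a description of the VCU's actual internal scheduling:

```python
# Rough model: treat the VCU's 4K@60 capability as a total pixels-per-second
# budget and see how many streams of a given format fit within it.
VCU_BUDGET = 3840 * 2160 * 60  # total pixels/second at 4K@60fps

def max_streams(width: int, height: int, fps: int) -> int:
    """Number of streams of the given format that fit in the VCU budget."""
    return VCU_BUDGET // (width * height * fps)

print(max_streams(1920, 1080, 60))  # -> 4 streams at 1080p60
print(max_streams(1920, 1080, 30))  # -> 8 streams at 1080p30
```

The results match the figures above: the 4K@60 budget covers four 1080p60 streams or eight 1080p30 streams.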


Roadtest Plan

  • Initial Hardware Test
    • Verify hardware setup using the prebuilt image and test procedures for the 2018.3 VCU TRD port (completed)
  • Evaluate tool options for development
    • 2018.3 (this is the base configuration that I could use to build my project; I didn't realize when I started that a 2019.2 version had become available)
      • Build Vivado project from TCL scripts and generate HDF with bitstream (completed)
      • Build PetaLinux image from HDF (completed)
      • Verify the reference VCU test suite (completed)
    • 2019.1 (this would allow me to port PYNQv2.5 for prototyping - not sure what kind of issues I might encounter)
      • Port PYNQv2.5 using the uz7ev_evcc_2019_1.bsp and verify (completed)
      • Upgrade Vivado project from 2018.3 and generate HDF with bitstream (completed)
      • Build PetaLinux image from HDF and verify (completed)
      • Package vcu_uz7ev.bsp (completed)
      • Port PYNQv2.5 using the new vcu_uz7ev.bsp and verify using VCU tests (failed)
    • 2019.2 (this has the advantage of Vitis integration which eases the implementation of AI capability)
      • Verify the setup using the prebuilt image and verify using VCU tests (completed)
      • Build the Vivado project from TCL scripts and generate the DSA with bitstream (completed)
      • Build PetaLinux image from DSA and verify using VCU tests (completed)
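For reference, the PetaLinux build-from-hardware-export flow I'm following looks roughly like the sketch below.  The BSP name is the one from the 2019.1 steps above; the project name and paths are placeholders from my setup, so adjust them for your environment:

```shell
# Create a PetaLinux project from the Avnet BSP (BSP name from the 2019.1 flow)
petalinux-create -t project -s uz7ev_evcc_2019_1.bsp -n vcu_uz7ev
cd vcu_uz7ev

# Import the hardware description exported from Vivado (directory containing
# the HDF/XSA) and accept or adjust the configuration in the menu that opens
petalinux-config --get-hw-description=../hw_export

# Build the full image (kernel, rootfs, device tree)
petalinux-build

# Package the boot image (FSBL, bitstream, U-Boot)
petalinux-package --boot --fsbl images/linux/zynqmp_fsbl.elf \
    --fpga images/linux/system.bit --u-boot
```

This is only an outline of the standard flow; the VCU TRD documentation adds project-specific configuration on top of it.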


  • Evaluate functional elements for roadtest project
    • Integrate camera RTSP streams into a GStreamer pipeline (I am using multiple camera brands with different proprietary APIs, so I'll need to verify each camera)
    • Modify design to handle multiple input streams into VCU (start with just two)
    • Integrate local storage capability (I could use USB but I'd prefer to use SATA if I can get that working)
    • Investigate using network (LAN) storage (optional)
    • Explore and select an AI processing element (I've used the DPU with the Ultra96v2 so this is probably the right choice)
      • Integrate DPU element with single stream (look at using Vitis-AI)
      • The devil is always in the details - I want to run inference simultaneously on multiple streams, which seems to imply multiple DPU instances; I'm not sure yet whether that will work
      • Search for examples of parallel processing with AI elements on the Zynq UltraScale+ MPSoC
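To make the RTSP integration step concrete, here's a small sketch of how I might generate a per-camera GStreamer pipeline description feeding the VCU hardware decoders.  The camera names and URLs are hypothetical placeholders; `omxh265dec`/`omxh264dec` are the OMX decoder elements provided in the VCU-enabled PetaLinux image, and the sink choice is just an assumption for display testing:

```python
# Build GStreamer pipeline description strings, one per camera, selecting the
# depayloader, parser, and VCU hardware decoder based on the camera's codec.
CAMERAS = {
    "front_door": ("rtsp://192.168.1.10/stream1", "h265"),  # placeholder URL
    "driveway":   ("rtsp://192.168.1.11/stream1", "h264"),  # placeholder URL
}

ELEMENTS = {
    "h265": ("rtph265depay", "h265parse", "omxh265dec"),
    "h264": ("rtph264depay", "h264parse", "omxh264dec"),
}

def pipeline_for(name: str) -> str:
    """Return a gst-launch-style pipeline string for the named camera."""
    url, codec = CAMERAS[name]
    depay, parse, decode = ELEMENTS[codec]
    return (f"rtspsrc location={url} latency=100 ! "
            f"{depay} ! {parse} ! {decode} ! queue ! kmssink")

for cam in CAMERAS:
    print(pipeline_for(cam))
```

Since each camera brand has its own quirks, I expect to tune the `rtspsrc` properties (latency, protocols) per camera once I start testing.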
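On the multi-stream inference question, one structure I want to try is a worker per stream, each feeding frames to its own inference context.  The sketch below uses a placeholder `run_inference` function standing in for a real DPU runner (on the target this would be a Vitis-AI runner instance); it just tags frames so the threading structure can be exercised off-target:

```python
import queue
import threading

def run_inference(frame):
    # Placeholder: a real implementation would hand the frame to a DPU runner.
    return f"detections for {frame}"

def worker(stream_name, frames, results):
    """Run inference on every frame of one stream, posting results to a queue."""
    for frame in frames:
        results.put((stream_name, run_inference(frame)))

results = queue.Queue()
streams = {"cam0": ["f0", "f1"], "cam1": ["f0", "f1"]}  # stand-in frame data

threads = [threading.Thread(target=worker, args=(name, frames, results))
           for name, frames in streams.items()]
for t in threads:
    t.start()
for t in threads:
    t.join()

collected = []
while not results.empty():
    collected.append(results.get())
for item in sorted(collected):
    print(item)
```

Whether this maps to one shared DPU instance with queued requests or multiple DPU instances in the fabric is exactly the open question above; the thread-per-stream structure should work either way.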


  • Roadtest Project
    • I intend to implement my system in multiple phases, as I realize that the complete project will take much more time than is available for a roadtest.  The first phase will be my roadtest project.  I'll elaborate my project plan in a separate post, but I do realize that how much I can accomplish in Phase One will depend on my success in integrating the functional elements described above.
    • I am making a long-term commitment to this system design, so I'll continue to post after the roadtest completes; however, I'll review the starter kit and design ecosystem based on what I'm able to achieve within the roadtest window.



Links to previous posts for this roadtest:

  1. Avnet UltraZed-EV Starter Kit Road Test- the adventure begins.....
  2. Avnet UltraZed-EV Starter Kit Road Test - VCU TRD
  3. Avnet UltraZed-EV Starter Kit Road Test - VCU TRD continued
  4. Avnet UltraZed-EV Starter Kit Road Test - Port PYNQv2.5
  5. Avnet UltraZed-EV Starter Kit Road Test - Port PYNQv2.5 continued
  6. Avnet UltraZed-EV Starter Kit Road Test - Vitis AI