
Avnet UltraZed-EV Starter Kit - Review

Scoring

Product Performed to Expectations: 10
Specifications were sufficient to design with: 10
Demo Software was of good quality: 10
Product was easy to use: 8
Support materials were available: 8
The price to performance ratio was good: 10
Total Score: 56 / 60
  • RoadTest: Avnet UltraZed-EV Starter Kit
  • Evaluation Type: Development Boards & Tools
  • Was everything in the box required?: Yes
  • Comparable Products/Other parts you considered: Xilinx ZCU104 and ZCU106 boards with the same ZU7EV UltraScale+ MPSoC with VCU.
  • What were the biggest problems encountered?: 1) Most of the reference designs using the VCU target the ZCU104 and ZCU106, 2) Had difficulty getting GStreamer to work with the VCU and RTSP streams, 3) The usual issues with design examples and tutorials using different tool versions, 4) No license voucher is included for the HDMI and some of the video IP

  • Detailed Review:

    Introduction

    The UltraZed-EV SOM + Carrier Card included in the UltraZed-EV Starter Kit represents a high-performance, full-featured embedded video processing system.  My roadtest covers only a subset of those features, but it still takes quite a bit of detail to document, so I've decided to write it as a series of blog posts, which I'll summarize here.  Please refer to the links for the test details.  My main focus for this roadtest was to exercise the VCU (hardware video codec) and add some machine learning using the DPU (Deep Learning Processor) with Vitis-AI.

    Links to posts for this roadtest:

     

    the adventure begins.....

    The kit arrived the last week of June after a short shipping delay caused by a train derailment.  The kit included the UltraZed-EV SOM, the Carrier Card, a 12V power supply, an Ethernet cable, a micro USB cable, a 16GB SD card, and the SOM mounting and heatsink/fan assembly hardware.  What the kit did not include: a license voucher and an RTC battery.  I found out later that the kit no longer ships with a license voucher, but you can get a 30-day evaluation license on request.  Licenses are needed for the HDMI input/output IP and various video subsystem IP elements.  I'll just use the PS DisplayPort interface, which uses one of the GTH transceivers and does not require an additional license.  I also won't use the RTC in this roadtest, although I'll probably get a battery later.

     

    The out-of-box (OOB) demo comes pre-loaded in the onboard flash memory and provides simple GPIO tests using the user LEDs and switches.  The demo ran without issue, so we're off to a good start.  The fan is a bit annoying if you're sitting next to the board, but I'm used to the noise since it sounds similar to the Ultra96v2 fan.

     

    Overview

    Chronologically I wrote this as the seventh blog, but it probably should have been the second.  You can read the blog for details, but at a high level here is the plan:

     

    Roadtest Plan

    • Initial Hardware Test
    • Evaluate tool options for development
    • Evaluate functional elements for roadtest project
    • Roadtest Project

     

     

    VCU TRD

    What got me interested in this roadtest was an Avnet demo of Jason Moss's port of the VCU reference design (VCU TRD) for the ZCU106 to the UltraZed-EV.  The initial port used the 2018.3 tools.  Jason provided excellent documentation describing the porting process and the test procedure to validate the design using GStreamer pipelines.  The source code and a pre-built image are available on GitHub.  I initially started with the pre-built image and ran through the GStreamer test suite.

     

    1. Test features of the DisplayPort interface (configuration, alpha blending, and test pattern)
    2. Use video files as input to the VCU decoder and output to the DisplayPort
    3. Use a USB webcam with the GStreamer v4l2src plugin and output to the DisplayPort
    4. Use a USB webcam with the VCU encoder to output to both the DisplayPort and an RTP network stream simultaneously
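    For reference, pipelines along these lines exercise tests 2 and 3 (element names and paths here are my assumptions based on the typical Xilinx VCU GStreamer stack; the actual TRD test scripts may differ):

```shell
# Test 2 (sketch): decode an H.264 file with the VCU and display on the DisplayPort.
# The kmssink bus-id for the ZynqMP DisplayPort is an assumption -- check your device tree.
gst-launch-1.0 filesrc location=/media/card/test.h264 ! h264parse ! \
    omxh264dec ! kmssink bus-id="fd4a0000.display"

# Test 3 (sketch): raw USB webcam capture straight to the DisplayPort
gst-launch-1.0 v4l2src device=/dev/video0 ! \
    "video/x-raw,width=640,height=480" ! videoconvert ! kmssink
```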

     

    I had a minor issue with the RTP test because I didn't realize that the RTP client had to be running when the server started (I'm used to starting the server first when using RTSP).
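    The ordering quirk makes sense once you look at the pipelines: plain RTP over UDP has no session negotiation, so packets sent before the receiver is listening are simply lost.  A sketch of the two ends (addresses, ports, and encoder settings are placeholders, not the TRD's exact commands):

```shell
# Receiver -- start this FIRST (RTP over UDP has no session setup,
# so anything sent before the client is listening is dropped):
gst-launch-1.0 udpsrc port=5000 \
    caps="application/x-rtp,media=video,encoding-name=H264,payload=96" ! \
    rtph264depay ! h264parse ! omxh264dec ! kmssink

# Sender -- webcam capture, VCU encode, RTP out (host IP is a placeholder):
gst-launch-1.0 v4l2src device=/dev/video0 ! videoconvert ! \
    omxh264enc ! h264parse ! rtph264pay pt=96 ! \
    udpsink host=192.168.1.100 port=5000
```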

     

    I then proceeded to build the design starting from the source TCL files and ran through the test procedure again to validate the build.

     

    Jason has released a 2019.2 version of the VCU TRD, and I've run through the same test process using both the pre-built and built-from-TCL designs.  I'm currently using the 2019.2 design for my GStreamer development.

     

    I've also learned that a 2020.1 design will be available soon, but there's no ETA yet.

     

     

    PYNQ v2.5 Port

    I've found PYNQ with Jupyter notebooks to be a very useful prototyping tool, and I thought it would be nice to port it to the UltraZed-EV.  I started by upgrading the VCU TRD design to 2019.1, which is the current toolbase for PYNQ v2.5.  Then I followed a tutorial by Peter Ogden of Xilinx on porting PYNQ to an unsupported board.

    The port using the UZ7EV_EVCC_2019_1.BSP (OOB design) worked after I fixed a minor issue specifying the correct root partition on the SD card.  I ran through the GPIO tests (LEDs and switches) for the OOB design.  I had a minor issue where the Python libraries did not like using the concat element to aggregate interrupts rather than an interrupt controller.  I went back and added an interrupt controller to the hardware design; that resolved the issue, and the GPIO tests worked.

    I then tried using the upgraded VCU TRD BSP to add the VCU functionality.  The same GPIO tests worked, but I realized that the VCU drivers did not load and I needed to go back and modify my PetaLinux config.  That seemed straightforward, but then I started running into some odd boot-related issues.  At that point I decided that I needed to move on with the roadtest and come back to fight this battle another day.

     

    I've recently discovered that there is now a process to upgrade PYNQ v2.5 to use the DPU: https://github.com/Xilinx/DPU-PYNQ .  Of course, the process currently only supports the Ultra96, ZCU104, and ZCU111 boards.  Since I want to incorporate the DPU functionality, I may just move forward and try to get this upgrade working with the VCU TRD when I have time.

     

     

    Vitis AI

    In searching for an AI element to integrate into my project for object detection and classification, I came across a great series of tutorials that Mario Bergeron of Avnet has done on Hackster.io using Vitis AI.

     

     

    I started with the Vitis AI 1.1 Flow for the DNNDK and as usual tried the pre-built image first.  There are images available for the Ultra96v2, UZ7EV_EVCC, UZ3EG_IOCC, and UZ3EG_PCIEC boards.

     

    There are example applications that do object detection and classification on images and video files stored on the SD card, as well as on a webcam input.

     

    I tried the following examples:

     

    • Pose detection - using video file
    • Face detection - using webcam
    • ADAS detection - using video file, doing vehicle detection
    • Video Analysis - using video file, classifying vehicles (cars, buses), motorcycles, pedestrians
    • mobilenet - classifying stored images

     

    The mobilenet example classified 1000 images in about 5.4s for an average frame rate of 183 fps, which was quite impressive!
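    Checking the arithmetic: 183 fps implies an unrounded run time of about 5.46 s, which is consistent with the "about 5.4s" quoted above:

```shell
# Back-compute the frame rate from the (unrounded) elapsed time
awk 'BEGIN { printf "%.0f fps\n", 1000 / 5.46 }'   # prints "183 fps"
```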

     

    I have subsequently used the VART (Vitis AI Runtime) flow, both with the pre-built image and with an image built from the hardware design, but I did not document that.  The DNNDK and VART flows are similar on the build side, but the VART flow requires a bit more setup for the target deployment and initialization of the runtime environment.

    The VART flow uses the Wayland/Weston desktop rather than the X11/Matchbox desktop used with the DNNDK flow.  One annoying aspect of the new desktop is that it doesn't allow placing the display window at a specific position on the desktop, so the window moves around from run to run.  That makes it painful to create videos.

     

     

    GStreamer difficulties

    My plan is to use GStreamer to set up the video processing pipelines, and I ran into a problem processing RTSP video streams from my IP cameras.  Only one of my cameras properly executes a pipeline to decode and display an RTSP stream.  The other cameras will initially display an image, but it never updates.  This has been a frustrating problem because the pipeline doesn't fail in an obvious way, and the failure does not occur using a similar pipeline on Ubuntu Linux.  I'm getting some troubleshooting help from the Avnet team and hopefully will get this figured out soon.  There's always the possibility that network or camera issues are contributing.  This blog will only be of interest to those familiar with GStreamer.
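    For the GStreamer-minded, the failing case is a pipeline of roughly this shape (the camera URL and credentials are placeholders; omxh264dec assumes the Xilinx OMX VCU stack):

```shell
# Works on one camera; on the others the first frame displays and never updates.
# Running gst-launch-1.0 with -v helps show the negotiated caps.
gst-launch-1.0 -v rtspsrc location="rtsp://user:pass@192.168.1.50:554/stream1" ! \
    rtph264depay ! h264parse ! omxh264dec ! kmssink
```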

     

     

    Network Performance Test

    I wanted to get a measure of network performance, not just of the UltraZed-EV interface but in the context of my network setup (GigE switch, etc.).  The test uses the iperf3 tool.

     

    Running TCP, I was able to achieve a symmetrical 574 Mbps between the UltraZed-EV and the Win10 host computer connected through the Gigabit switch.  The reference test configuration (a point-to-point connection between the UltraZed-EV and a CentOS laptop host) achieved 856-942 Mbps.  I did not try a point-to-point connection, as it would not be representative of my use configuration.  I was less successful trying to run UDP at a high data rate, but I discuss that in the blog.
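    For anyone wanting to reproduce the measurement, the iperf3 invocations are straightforward (the host IP is a placeholder):

```shell
# On the Win10 host (server side):
iperf3 -s

# On the UltraZed-EV: TCP test, then the reverse direction (-R)
iperf3 -c 192.168.1.100
iperf3 -c 192.168.1.100 -R

# UDP at a forced bitrate -- this is where I had less success:
iperf3 -c 192.168.1.100 -u -b 900M
```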

     

     

    SATA Performance test

    I ran into an interesting issue testing SATA performance that was related not to the hardware but to the PetaLinux OS.  There is also a minor ease-of-use issue: the SATA interface on the board provides only a data connector, so an external power supply is required for the SATA drive.  The ZCU102 board provides a power connector, making an external supply unnecessary.

     

    I am using a 500GB SSD for this test.  It came unformatted, so I formatted it as NTFS, which I tend to use with larger-capacity drives.  I then discovered that PetaLinux supports NTFS only as read-only; it supports FAT32 and EXT4.  Windows did not give me the option to format a drive this large as FAT32, so I ended up formatting it as EXT4 using Ubuntu.
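    The reformat itself is a one-liner under Ubuntu (the device name is an assumption -- check lsblk for the actual SATA drive before running mkfs):

```shell
# Format the SSD's partition as EXT4 and mount it (device name assumed)
sudo mkfs.ext4 -L sata500 /dev/sdb1
sudo mkdir -p /mnt/sata
sudo mount /dev/sdb1 /mnt/sata
```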

     

    Here are the three sets of tests performed, with their results (see the blog post for test details):

    dd

    4096000000 bytes (4.1 GB, 3.8 GiB) copied, 13.0229 s, 315 MB/s

    hdparm

    Timing cached reads:   2166 MB in  2.00 seconds = 1082.55 MB/sec

    Timing buffered disk reads: 1394 MB in  3.00 seconds = 464.11 MB/sec

    bonnie++

    12226 is the speed (in KBytes/sec) at which the dataset was written a single character at a time

    406518 is the speed (in KBytes/sec) at which a file is written a block at a time

    12239 is the speed (in KBytes/sec) at which the dataset was read a single character at a time

    545003 is the speed (in KBytes/sec) at which a file is read a block at a time
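    The invocations behind those numbers were along these lines (mount point and device name are my assumptions; the blog post has the exact commands):

```shell
# dd: sequential write of 4096000000 bytes (dd's MB suffix is 10^6 bytes)
dd if=/dev/zero of=/mnt/sata/test.img bs=1MB count=4096 conv=fsync

# hdparm: cached (-T) and buffered (-t) read timing
hdparm -tT /dev/sda

# bonnie++: per-character and per-block read/write throughput
bonnie++ -d /mnt/sata -u root
```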

     

     

    Project

    My roadtest project is a subset of an Intelligent NVR that I'm designing.  The project is a proof of concept that includes the following features:

     

    1. Implement video input processing from two simultaneous RTSP video streams
    2. Display streams on a FHD DisplayPort monitor
    3. Store streams on a SATA SSD drive
    4. Perform detection and classification on streams using the Xilinx DPU (Deep Learning Processor) with Vitis-AI
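    Features 1-3 map naturally onto one GStreamer pipeline per camera.  A sketch of the shape I have in mind (URL, paths, and decoder element are assumptions, and this is intent rather than a working demo):

```shell
# One RTSP input, split with tee: VCU decode to the DisplayPort, and
# muxed recording to the SATA SSD.  The -e flag forces a clean EOS so
# the MP4 file is finalized on Ctrl-C.
gst-launch-1.0 -e rtspsrc location="rtsp://user:pass@192.168.1.50:554/stream1" ! \
    rtph264depay ! h264parse ! tee name=t \
    t. ! queue ! omxh264dec ! kmssink \
    t. ! queue ! mp4mux ! filesink location=/mnt/sata/cam1.mp4
```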

     

    Unfortunately, I'm unable to demonstrate a working project due to issues that I discuss in the blog post.

     

    Instead, I decided to demonstrate face detection/tracking using an RTSP stream with software decoding and the DPU, via a VART-based Python application.

    This application is derived from yet another Mario Bergeron tutorial on Hackster.io that I discuss in the blog post.

     

    I also discovered a Xilinx Embedded Platform Reference Design for the ZCU104 (8-Stream VCU + CNN Demo Design) that I may try to leverage for the Intelligent NVR.

     

     

     

    Roadtest Summary

    Issues encountered

    1. Most reference designs using the VCU target the ZCU104 and ZCU106 boards, so those designs need to be ported to the UltraZed-EV
    2. Related to the first issue - I find it incredibly difficult to find relevant designs.  It's serendipity when I find what I'm actually looking for
    3. GStreamer issues - not working for some RTSP streams, for reasons yet to be determined
    4. Design examples and tutorials are often a mix of different tool versions, which requires maintaining multiple VMs and often upgrading designs
    5. The kit no longer comes with a license voucher - the Ultra96v2 came with a one-year license.  You can get free 30-day evaluation licenses, but I find design iterations often take longer than that
    6. Build times can be extremely long - on the order of 2-4 hours.  That's extremely challenging if you're trying to iterate design variations

     

    Positives

    1. Dan Rozwood's Avnet team has been doing an incredible job of late with new tutorials and process documents.  Keep up the great work.
    2. Support has been responsive both from Avnet (related to GStreamer) and Xilinx (related to PYNQ).  The difficult part is figuring out the correct questions to ask.
    3. Xilinx has a lot of neat design examples - you just have to find them and figure out how to port them.

     

    Overall impression

    The UltraZed-EV Starter Kit is a very capable and versatile board set.  I wish I could afford the GMSL cameras and interfaces.  That said, I think it offers a good cost-versus-performance/capability tradeoff compared to the Xilinx ZCU104/ZCU106 boards.

