
RoadTests & Reviews

52 Posts authored by: ralphjy
I have two basic criteria that have to be met before I will apply for a roadtest: the item being roadtested needs to align with my interests, and it needs to provide sufficient value relative to the amount of effort that would be required to roadtest it.  This could range from simple lower-value items that don't require a lot of time invested to roadtest, to very complex high-value items that require significant effort and time. I need to think that I have the capability (knowledge and equipme ...
This will be the last hardware design performance test that I'll be running for the roadtest.  This is the SATA performance test: UltraZed-EV-RD-SATA_Performance The link provides a pre-built PetaLinux image with an included test script, drive-test.sh, that facilitates running SATA drive read/write and I/O performance tests.  The test script encapsulates the following three tests: dd - this utility is from the GNU Coreutils package and is a very simple tool which can be used to perform ...
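As a rough illustration of the kind of write/read measurement dd performs, here's a minimal sketch. The target path, block size, and count are my own choices for illustration, not necessarily the parameters drive-test.sh actually uses:

```shell
#!/bin/sh
# Hypothetical sketch of a dd write/read throughput test; path, block
# size, and count are assumptions, not drive-test.sh's actual settings.
TARGET=/tmp/dd-test.bin

# Write test: conv=fdatasync flushes data to the device before dd reports
# a rate, so the figure is not just page-cache speed.
dd if=/dev/zero of="$TARGET" bs=1M count=64 conv=fdatasync

# Read test: on a real drive, adding iflag=direct would bypass the page
# cache (left off here so the sketch also runs against tmpfs).
dd if="$TARGET" of=/dev/null bs=1M
```

On a real SATA target you'd point TARGET at a file on the mounted drive; without fdatasync or direct I/O flags, dd mostly measures memory, not the disk.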
I am continuing to run through existing performance tests in parallel with starting to integrate the elements of my project.  This is the network performance test: UltraZed-EV-RD_Ethernet_Performance_Test The performance tests use iperf3 (https://iperf.fr/), a tool for performing network throughput measurements over either TCP or UDP.  The tests require iperf3 to be running on two network endpoints - a client and a server.  The link above provides ...
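To illustrate the client/server pattern, here's a loopback smoke-test sketch. A real measurement runs the server on the second endpoint; the run length here is my assumption for a quick check, and the sketch degrades gracefully if iperf3 isn't installed:

```shell
#!/bin/sh
# Hypothetical loopback smoke test of the iperf3 client/server pattern.
# A real throughput test runs "iperf3 -s" on one endpoint and
# "iperf3 -c <server-ip>" on the other; loopback only proves the setup.
if command -v iperf3 >/dev/null 2>&1; then
    iperf3 -s -D -1                 # server: daemonize, exit after one session
    sleep 1
    # client: short 2-second TCP run against the local server
    iperf3 -c 127.0.0.1 -t 2 > /tmp/iperf3-demo.log 2>&1
else
    echo "iperf3 not installed; skipping loopback demo" > /tmp/iperf3-demo.log
fi
cat /tmp/iperf3-demo.log
```

For a UDP test you'd add `-u` plus a target bandwidth (e.g. `-b 1G`) on the client side, since UDP sends at a fixed offered rate rather than ramping up like TCP.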
I had mentioned in a prior post that I was having a bit more trouble using GStreamer than I had anticipated.  I had some problems using the udpsink and udpsrc elements that were used for the RTPStreaming demo.  It turns out that was actually caused by my prior GStreamer experience.  I've used GStreamer, VLC, and OMXPlayer a fair amount as clients to receive RTSP streams from IP cameras.  I'm used to the client sending a request to the server (camera) when it wants to start re ...
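For reference, here's a sketch of the push-model sender/receiver pair. The host address, port, and element choices are my assumptions, not the demo's exact pipelines; the key difference from RTSP is that udpsink starts pushing immediately, with no session negotiation:

```shell
#!/bin/sh
# Sketch only: with udpsink/udpsrc there is no client request step, so
# the receiver must already be listening or the packets are silently
# dropped. Addresses, port, and elements below are assumptions.
#
# Sender (e.g. on the board):
#   gst-launch-1.0 videotestsrc ! x264enc tune=zerolatency ! rtph264pay \
#       ! udpsink host=192.168.1.20 port=5000
#
# Receiver (start this FIRST, on the client PC):
#   gst-launch-1.0 udpsrc port=5000 \
#       caps="application/x-rtp,media=video,encoding-name=H264,payload=96" \
#       ! rtph264depay ! avdec_h264 ! autovideosink
echo "gst-launch sketch only; run sender and receiver on separate hosts" | tee /tmp/gst-demo.txt
```

Note the explicit caps on udpsrc: unlike an RTSP session, raw RTP over UDP carries no out-of-band description of the stream, so the receiver has to be told what's arriving.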
I've been posting about the UltraZed-EV roadtest and realized that I should have provided an overview first to put my posts in context.  I've wanted to design a home security/video surveillance system for a while.  I have a mix of different IP cameras that I use for monitoring and an NVR that I use for recording.  I prefer not to have a cloud-based system because of poor WAN reliability, high latency, unknown data security, and high subscription cost.  I also have a conglom ...
In my previous post Avnet UltraZed-EV Starter Kit Road Test - Port PYNQv2.5 , I had ported PYNQv2.5 to the UltraZed-EV using the PetaLinux BSP (uz7ev_evcc_2019_1.bsp) which is for the OOB design.  This verifies the porting process works, but now I need to integrate the VCU, which is not in the OOB design.  The hardware design bitfile is loaded as an overlay in PYNQ, but any drivers and software need to be installed separately.  Since PYNQ uses an Ubuntu rootfs and I am using a p ...
Mario Bergeron recently did a tutorial project on Hackster.io vitis-ai-1-1-flow-for-avnet-vitis-platforms  that "provides detailed instructions for targeting the DNNDK samples from the Xilinx Vitis-AI 1.1 flow for Avnet Vitis 2019.2 platforms."   The UltraZed-EV is one of the supported platforms.  I need an AI component for my roadtest project so this was a very timely tutorial.  I decided that since he provided pre-built images that I would do a quick run through of the ...
I've been using PYNQ on the PYNQ-Z2 and Ultra96v2 boards and I thought that it could prove useful for evaluating and prototyping with the UZ7EV.  Unfortunately there isn't a prebuilt PYNQ image for this board yet.  The good news is that Peter Ogden of Xilinx made a couple of posts that document how to get PYNQ working with unsupported boards.   There are two approaches described: Port PYNQ using a board agnostic rootfs image combined with the correct BSP for your board: quick-po ...
July 4, 2020 I had hoped to finish the VCU TRD design tests this week and move on to taking a closer look at the design and using the design tools to tweak it to verify that I have a good tool setup. Unfortunately, even the simplest things sometimes turn out to be difficult and I have a couple of problems that I haven't resolved yet.   July 17, 2020 Well, here I am two weeks later.  Some personal difficulties and other problems that I had not anticipated.  But I guess relative ...
Initial Hardware Tests I've started to learn my way around this board - it has a tremendous amount of capability/functionality.  My plan is to use the VCU Design example (v2018.3) to learn the board (SOM + Carrier) and test the functionality that I will need to implement my roadtest project.   The design is well documented on GitLab:  v2018.3_vcu_uz7ev_cc-06212019 There is a pre-built SD card image which makes it easy to get up and running.   For initial hardware testing I ...
I was fortunate to have been selected to test a second Avnet UltraZed-EV Starter Kit that Randall and the sponsor added to the original roadtest.  Things got off to an inauspicious start as a train derailment somewhere between Chicago and Portland (OR) delayed the UPS shipment of my kit.   I finally received it last night (06/22/2020) at around 7pm.   I am particularly interested in testing the integrated H.264/H.265 video codec unit (VCU) component of this kit and eventually ...
The next neural network that I'm going to try is a variant of Tiny-YOLO.  The You Only Look Once (YOLO) architecture was developed to create a one step process for detection and classification.  The image is divided into a fixed grid of uniform cells and bounding boxes are predicted and classified within each cell.  This architecture enables faster object detection and has been applied to streaming video.   The network topology is shown below.  The pink colored layers h ...
Before I move on to object detection I thought I would try one more example of object classification using a more complex neural network based on the Multi-layer offload architecture.  The network used is a variant of the DoReFa-Net and uses the large ImageNet dataset http://www.image-net.org/  for training.  The DoReFa-Net https://arxiv.org/pdf/1606.06160 is a low bitwidth convolutional neural network that is trained with low bitwidth gradients optimized for implementation on har ...
After my previous blog post it was pointed out to me that the amount of whitespace (or other non-object pixels, i.e. background) that I had in my captured image was affecting the accuracy of the classification.  I did a quick inverse test that was suggested by beacon_dave and added whitespace around the CIFAR-10 test image and it then classified as an airplane!  I guess this makes sense as there are a lot of extraneous pixels to confuse the classifier.  In general purpose use of a ...
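A quick hypothetical way to reproduce that padding experiment from the command line is with ImageMagick. The file names and border size here are made up for illustration (the real test padded a captured webcam image before classification), and the sketch skips cleanly if ImageMagick isn't installed:

```shell
#!/bin/sh
# Hypothetical re-creation of the whitespace-padding experiment using
# ImageMagick; file names and the 64-pixel border are assumptions.
if command -v convert >/dev/null 2>&1; then
    convert -size 32x32 xc:gray /tmp/cifar-in.png      # stand-in 32x32 test image
    # Add 64 pixels of white on every side (32x32 -> 160x160), mimicking
    # the extra background pixels around the object.
    convert /tmp/cifar-in.png -bordercolor white -border 64 /tmp/cifar-padded.png
    identify -format "%wx%h" /tmp/cifar-padded.png > /tmp/pad-demo.txt
else
    echo "ImageMagick not installed; skipping" > /tmp/pad-demo.txt
fi
cat /tmp/pad-demo.txt
```

The padded image would then be resized back down to the classifier's 32x32 input, so most of the pixels the network sees are background rather than the object.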
In the previous blog PYNQ-Z2 Dev Kit - CIFAR-10 Convolutional Neural Network , I verified the 3 hardware classifiers against the reference "deer" test image.  Now I'm going to see how the classifiers perform with captured webcam images.  I expect the performance will be degraded because the webcam will produce lower quality images due to issues like image brightness and focus.  CIFAR-10 has a small training set (5000 images per class), so I'm going to use a solid background to hel ...
