I have recently been exploring the possibility of using FPGA SoC devices and toolchains for experimenting with video signal processing. I tried the Xilinx ZCU104 and found that the development process is not straightforward: it involves a complicated design flow that requires repetitive trial and error. Although it is recommended to use the most recent tools, Vivado 2019.2 + PetaLinux SDK 2019.2 + Vitis 2019.2, no clear step-by-step guidelines are available for the ZCU104 board. What is even more confusing is that this development kit was originally dedicated to video signal processing with the reVISION package and Vivado 2018.3, while Xilinx is now promoting its new DPU + Vitis AI (DNNDK) instead of reVISION.
After attending Adam's workshop, I found a new approach to implementing vision-related applications using the handy PYNQ toolchain, which relies on Vivado 2019.1 and Jupyter Notebook as the development environment. The design flow is easy to understand, and the price of the development tools is very affordable, especially for my students. I am keen to learn more in the remaining two sessions, to become proficient at developing PYNQ projects, and to adopt this methodology in my future teaching of embedded system design. I also look forward to PYNQ supporting the Xilinx Vitis design flow (i.e., version >= 2019.2).
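Part of PYNQ's appeal is that a processing step can first be prototyped in pure Python inside a Jupyter notebook and only later moved into programmable logic. The sketch below is a software-only illustration of such a first step, a grayscale conversion over a small video frame; the frame format and the function name are my own illustrative choices, not part of the PYNQ API, and the code needs no board to run.

```python
# Software-only sketch of a frame-processing step one might prototype
# in a PYNQ Jupyter notebook before offloading it to the FPGA fabric.
# The frame format (a list of rows of (R, G, B) tuples) is illustrative.

def rgb_to_gray(frame):
    """Convert an RGB frame to 8-bit grayscale using ITU-R BT.601 weights."""
    gray = []
    for row in frame:
        gray.append([int(0.299 * r + 0.587 * g + 0.114 * b)
                     for (r, g, b) in row])
    return gray

# A tiny 2x2 test frame: pure red, green, blue, and black pixels.
frame = [
    [(255, 0, 0), (0, 255, 0)],
    [(0, 0, 255), (0, 0, 0)],
]
print(rgb_to_gray(frame))
```

In an actual PYNQ project, the same operation would eventually run in an overlay loaded via `pynq.Overlay`, with the notebook driving it through the board's Python API; the pure-Python version above simply makes the algorithm easy to check first.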