Introduction

 

After I successfully completed all three P2P training courses (see my blogs here), this is the blog for my project: Facial Recognition. The project is based on PYNQ, a special Linux distribution for the Xilinx Zynq SoC. Combined with the open-source image-processing library OpenCV and the Xilinx computer vision overlays, PYNQ is a very good platform for image-classification applications such as facial recognition. I will use a couple of news images of Mr. Trump to verify that the system can correctly recognize his face.

 

 

 

 

Hardware

 

A list of hardware devices used for this project (shown in the picture below):

1. Logitech C922 webcam x1

2. Ultra96v2 x1

3. 8 GB microSD cards x2

4. DisplayPort to HDMI adapter cable x1

5. HDTV x1

6. USB Hub x1

7. Wireless keyboard & mouse x1

8. USB network/UART cable x1

 

 

 

Prepare the PYNQ Image SD Card

 

First, you need to download the latest pre-built PYNQ image here. When I wrote this blog, the latest version was 2.5 (ultra96v2_PYNQ_v2.5.img).

 

Next, you need to create a bootable SD card with the downloaded image. Here are a few tips on how to create the SD card correctly:

1. If you have a brand new SD card, use the utility tool Win32 Disk Imager to write the downloaded image to the SD card as shown below

2. If your SD card has been used for other projects, you may need to use Windows' DiskPart command to remove all existing partitions, create a new FAT32 partition, and format it (see the example session below). After that, you can follow the previous step to write the image. You can find the DiskPart documentation here.
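
For reference, a minimal DiskPart session looks like the following (this assumes the SD card shows up as Disk 1; confirm with list disk first, because clean erases the selected disk):

C:\> diskpart
DISKPART> list disk
DISKPART> select disk 1
DISKPART> clean
DISKPART> create partition primary
DISKPART> format fs=fat32 quick
DISKPART> assign
DISKPART> exit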

 

 

Check if Your Webcam Works with PYNQ

 

My Logitech C922 webcam is supported by PYNQ, but you may need to check whether your webcam is. If you run the following Python code (from https://www.codingforentrepreneurs.com/blog/opencv-python-web-camera-quick-test) and see a screen similar to the one shown below, your webcam is supported by PYNQ.

 

import numpy as np
import cv2

cap = cv2.VideoCapture(0)

while(True):
    # Capture frame-by-frame
    ret, frame = cap.read()

    # Our operations on the frame come here
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    # Display the resulting frame
    cv2.imshow('frame',frame)
    cv2.imshow('gray',gray)

    if cv2.waitKey(20) & 0xFF == ord('q'):
        break

# When everything done, release the capture
cap.release()
cv2.destroyAllWindows()

 

 

 

Connect to Ultra96

 

You can use either of the following two methods to connect to Ultra96:

1. Serial UART connection through USB Network/UART cable

2. Network connection through USB Network/UART cable

 

UART Connection

 

I use Tera Term, and the setup of the UART connection is shown below.
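
For reference, the usual serial settings for the Ultra96 UART are the Zynq UltraScale+ defaults (assuming no custom boot configuration):

Baud rate:    115200
Data:         8 bit
Parity:       none
Stop bits:    1
Flow control: none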

 

 

 

 

Network Connection through USB Cable

Default password is xilinx.
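
With the USB network connection, the board's default address on the PYNQ image is 192.168.3.1 (the same address used later for Jupyter), so a typical SSH session from the host PC looks like this:

$ ssh xilinx@192.168.3.1
xilinx@192.168.3.1's password:
xilinx@pynq:~$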

 

WiFi Connection

 

Connect to Ultra96 using either of the two methods above, then type the commands shown below (the lines after the $ and >>> prompts are what you type):

xilinx@pynq:~$ sudo python3
[sudo] password for xilinx:
Python 3.6.5 (default, Apr  1 2018, 05:46:30)
[GCC 7.3.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> from pynq.lib import Wifi
>>> port = Wifi()
>>> port.connect("your_wifi_essid","your_wifi_password")
ifdown: interface wlan0 not configured
Internet Systems Consortium DHCP Client 4.3.5
Copyright 2004-2016 Internet Systems Consortium.
All rights reserved.
For info, please visit https://www.isc.org/software/dhcp/
Listening on LPF/wlan0/f8:f0:05:c3:30:96
Sending on   LPF/wlan0/f8:f0:05:c3:30:96
Sending on   Socket/fallback
DHCPDISCOVER on wlan0 to 255.255.255.255 port 67 interval 3 (xid=0x7928c939)
DHCPDISCOVER on wlan0 to 255.255.255.255 port 67 interval 3 (xid=0x7928c939)
DHCPDISCOVER on wlan0 to 255.255.255.255 port 67 interval 5 (xid=0x7928c939)
DHCPREQUEST of 192.168.1.214 on wlan0 to 255.255.255.255 port 67 (xid=0x39c92879)
DHCPOFFER of 192.168.1.214 from 192.168.1.1
DHCPACK of 192.168.1.214 from 192.168.1.1
bound to 192.168.1.214 -- renewal in 21017 seconds.
>>>

 

 

You can see that the Ultra96 got the IP address 192.168.1.214. An Internet connection is required for the software installation described in the next section.

 

 

 

Software

 

PYNQ comes with the popular OpenCV library pre-installed, but it is an old version, so I replaced it with the newer version 4.1.1. PYNQ also ships with the computer vision overlay, which accelerates selected OpenCV functions in the programmable logic; we need to update this overlay as well.

    

 

Update PYNQ

 

The following commands make sure the PYNQ components are up to date. On a Linux-based system, these two commands are usually executed before installing or updating software.

sudo apt-get update

sudo apt-get upgrade

 

 

Download, Build and Install OpenCV 4.1.1

 

PYNQ 2.5 came with an old version of OpenCV (v3.2.0), which I updated to 4.1.1.

 

First, make sure you have a WiFi connection by using the ifconfig command as shown below. If not, please refer to the WiFi Connection section above.
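
For example, the following command should list an inet address for the wlan0 interface if WiFi is up:

xilinx@pynq:~$ ifconfig wlan0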

 

Next, download the source code as shown below
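
The download is the OpenCV 4.1.1 source archive; assuming wget and the official GitHub archive URL, the command looks like this:

xilinx@pynq:~$ wget -O opencv.zip https://github.com/opencv/opencv/archive/4.1.1.zip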

 

After the download completes, a file named opencv.zip is created. Unzip it and create a symbolic link to the unzipped folder using the following commands:

xilinx@pynq:~$ unzip opencv.zip

xilinx@pynq:~$ ln -s opencv-4.1.1 opencv

 

Before building OpenCV, let's do some preparation. First, create a build folder using the following commands:

xilinx@pynq:~$ cd opencv

xilinx@pynq:~$ mkdir build

xilinx@pynq:~$ cd build

 

Then we insert the second SD card and find its device name, as shown below.
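
Either of the following commands (a generic Linux approach, not necessarily the one in my screenshot) will show the newly attached card and its device name:

xilinx@pynq:~$ lsblk
xilinx@pynq:~$ dmesg | tail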

 

We can see that the device name is /dev/sda1 from the above screenshot. Next, we enable it as a swap disk so that the OpenCV build has enough memory to complete.
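
In essence this is the standard mkswap/swapon sequence (double-check the device name first, because mkswap destroys whatever is on that partition):

xilinx@pynq:~$ sudo mkswap /dev/sda1
xilinx@pynq:~$ sudo swapon /dev/sda1
xilinx@pynq:~$ free -h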

 

After the preparation is done, we can configure and build OpenCV using the following two commands:

xilinx@pynq:~$ cmake -DCMAKE_BUILD_TYPE=RELEASE -DCMAKE_INSTALL_PREFIX=/usr/local -DINSTALL_PYTHON_EXAMPLES=ON -DINSTALL_C_EXAMPLES=OFF -DPYTHON_EXECUTABLE=/usr/bin/python3 -DBUILD_EXAMPLES=ON ..

xilinx@pynq:~$ make -j2


The build took 2 hours 55 minutes to complete on my Ultra96-V2 board.

 

The last step is to install the newly built OpenCV 4.1.1 using the following commands:

xilinx@pynq:~$ sudo make install

xilinx@pynq:~$ sudo ldconfig

 

Let's check the new OpenCV version. Now it's 4.1.1 as shown below.
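
One quick way to check the installed version from the command line:

xilinx@pynq:~$ python3 -c "import cv2; print(cv2.__version__)"
4.1.1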

 

 

Update PYNQ Computer Vision

 

Use the following command to update the PYNQ computer vision overlay:

xilinx@pynq:~$ sudo pip3 install --upgrade --user git+https://github.com/Xilinx/PYNQ-ComputerVision.git

 

 

 

Demo

 

 

The demo software is created in a Jupyter notebook. To access Jupyter, type http://192.168.3.1:9090/ into your web browser's address bar and log in with the password xilinx.

 

You can create a Jupyter notebook from the following Python code, or you can upload the notebook file from the attached zip file (you also need the Caffe model file and the prototxt file).

import cv2
import IPython
import numpy as np

def initStream():
    stream = cv2.VideoCapture(0)
    stream.set(cv2.CAP_PROP_FRAME_WIDTH,1920)
    stream.set(cv2.CAP_PROP_FRAME_HEIGHT,1080)
    return stream

def readAFrame(stream):
    (grabbed, frame) = stream.read()
    return frame

def showFrame(img):
    returnValue, buffer = cv2.imencode('.jpg',img)
    IPython.display.display(IPython.display.Image(data=buffer.tobytes()))

def findFaceInAFrame(frame):
    # Detect faces with the same Haar cascade used in getFaceFingerprint (not used below; see the commented-out call)
    faces = cv2.CascadeClassifier('/home/xilinx/opencv/data/haarcascades/haarcascade_frontalface_default.xml').detectMultiScale(frame[:,:,1], scaleFactor=1.1, minSize=(4,4), minNeighbors=6)
    return np.copy(frame), faces

def getFaceFingerprint(frame):
    # Detect faces with a Haar cascade, then pass the first detected face through GoogLeNet
    # and use the network's output vector as the face "fingerprint"
    haar_face_cascade = cv2.CascadeClassifier('/home/xilinx/opencv/data/haarcascades/haarcascade_frontalface_default.xml')
    face_frame = np.copy(frame)
    faces = haar_face_cascade.detectMultiScale(face_frame[:,:,1], scaleFactor=1.1, minSize=(4,4), minNeighbors=6)
    if len(faces) != 0:
        facenet = cv2.dnn.readNetFromCaffe('bvlc_googlenet.prototxt', 'bvlc_googlenet.caffemodel')
        # Crop the first detected face and resize it to GoogLeNet's 224x224 input
        face_crop = face_frame[faces[0][1]:faces[0][1] + faces[0][3], faces[0][0]:faces[0][0] + faces[0][2], :]
        faceblob = cv2.dnn.blobFromImage(face_crop, 1, (224, 224))
        facenet.setInput(faceblob)
        face_fingerprint = facenet.forward()
        return face_fingerprint
    # No face detected: fall through and implicitly return None

webcamStream = initStream()
trumpFaceFingerprints = []
cutoff = 0.15
IPython.display.clear_output(wait=True)

count = 0
# Learning phase: collect 30 face fingerprints of Trump from the webcam stream
while True:
    frameIn = readAFrame(webcamStream)
    # faceFrame, faces = findFaceInAFrame(frameIn)
    faceFingerprint = getFaceFingerprint(frameIn)
    if faceFingerprint is None:
        continue
    if faceFingerprint.size != 0:
        trumpFaceFingerprints.append(faceFingerprint)
        count += 1
        if count >= 30:
            break
print("Learning Trump's face has completed using the picture below.")        
showFrame(np.copy(frameIn))

input("Start facial recognition...")
while True:
    frameIn = readAFrame(webcamStream)
    faceFingerprint = getFaceFingerprint(frameIn)
    if faceFingerprint is None:
        continue
    for i,face_encodings in enumerate(trumpFaceFingerprints):
        # Euclidean distance between a stored fingerprint and the current one; below the cutoff counts as a match
        if np.linalg.norm(face_encodings[0] - faceFingerprint, axis=1) < cutoff:
            print("Recognized Trump's face in the picture below.")
            showFrame(np.copy(frameIn))
            break

 

 

Here's the output from the above code:

 

Learning Trump's face has completed using the picture below. 

Start facial recognition... Recognized Trump's face in the picture below. 

Recognized Trump's face in the picture below. 
Recognized Trump's face in the picture below.