Vision Thing



 

What started out as a jokey discussion between some Element14 members ended up in an interview, a talk and a webcast, and nearly won me $5000.

Discussion

My initial entry was as follows:

V1

Remarkably, this got me into the competition, so they sent me a DragonBoard.

 

The first step was to install Linux and OpenCV.
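The install itself was straightforward. As a sketch, assuming the Linaro Debian image for the DragonBoard 410c (package names vary between releases, so treat these as illustrative):

```shell
# Update the package index, then install OpenCV's Python bindings
# and NumPy from the Debian repositories.
sudo apt-get update
sudo apt-get install -y python-opencv python-numpy
```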

It was my first time with OpenCV, so I had a play with Python and did some simple face recognition.

 

I then set about training the system to recognise dragons. My first experiments performed badly, but it eventually got to the point where it could detect them.

Training 1
Good Data
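The training step followed the classic OpenCV cascade workflow. As a sketch (tool names and flags are from OpenCV 3.x; the file names and sample counts here are illustrative, not the project's actual values):

```shell
# Pack the annotated positive images into a .vec file,
# then train a cascade against the negative image list.
opencv_createsamples -info positives.txt -vec dragons.vec -num 200 -w 24 -h 24
opencv_traincascade -data cascade/ -vec dragons.vec -bg negatives.txt \
    -numPos 180 -numNeg 400 -numStages 15 -w 24 -h 24
```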

I then set about getting the hardware sorted.

I added a PIR detector, a webcam and some LEDs for output, using GPIO and a level shifter to interface the board's 1.8 V logic with the detector and LEDs.
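The sensing logic boils down to polling the PIR and mirroring motion onto the LEDs. A sketch of that loop, with the pin access injected as callables (`read_pir`, `set_led` are hypothetical names) so the same logic works with sysfs GPIO, libsoc, or a test stub:

```python
import time

def watch(read_pir, set_led, frames=None, poll_s=0.0):
    """Poll a PIR sensor and mirror motion onto an LED.

    read_pir() -> bool and set_led(bool) are injected, so the loop is
    independent of how the pins are actually driven.  `frames` limits
    the number of polls (None = run forever); returns the motion count.
    """
    events = 0
    polled = 0
    while frames is None or polled < frames:
        motion = bool(read_pir())
        set_led(motion)
        if motion:
            events += 1
        polled += 1
        if poll_s:
            time.sleep(poll_s)
    return events

# Example with a canned PIR trace instead of real hardware:
trace = iter([0, 1, 1, 0])
led_states = []
count = watch(lambda: next(trace), led_states.append, frames=4)
print(count, led_states)  # 2 [False, True, True, False]
```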

V2

By this point I was getting short on time, so I reached out to Rob Ives to see if he had a cardboard knight I could use. This was printed out and assembled so I could enter the competition.

 

 

After a public vote, I got to the final and was narrowly beaten by a quadcopter that used imaging to dynamically position itself. But they did award me the Qualcomm Developer of the Month award, and I was then invited to talk about the project on the 96boards webchat.

 

 

I wasn't that happy with the cardboard knight as I really wanted some movement, so I designed a 3D model, printed it out and painted it. At the time the DragonBoard did not support PWM outputs, so I switched to a Picon Zero board to handle the I/O and communicated with it over I2C.
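The movement itself is just a servo sweep. A sketch of that, with the servo setter injected as a callable (`flap` and `set_servo` are hypothetical names; on the real hardware the setter would be the 4tronix piconzero library's output call, after configuring the channel as a servo output):

```python
def flap(set_servo, channel=0, low=60, high=120, step=15):
    """Drive a servo from `low` to `high` and back once.

    set_servo(channel, angle) is injected so the sweep logic can be
    exercised without the Picon Zero board attached.
    """
    up = list(range(low, high + 1, step))
    down = list(range(high - step, low - 1, -step))
    for angle in up + down:
        set_servo(channel, angle)
    return up + down

# Example with a recording stub instead of real hardware:
moves = []
flap(lambda ch, angle: moves.append(angle), low=60, high=120, step=20)
print(moves)  # [60, 80, 100, 120, 100, 80, 60]
```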

Knight v2

At the end of the year, the Linuxing in London group asked me to give a talk on the project so I explained it all and after a few false starts, gave a successful demo.

https://skillsmatter.com/skillscasts/8225-linuxing-in-london

 

Talk

 

On reflection, there are a few things that could be improved. The webcam was very low quality, so it could do with an upgrade. The training shown above was also done against a white background; a more diverse range of backgrounds would give better detection, and ideally the training images should have come from the webcam itself. Finally, as with the face detection, it might have been better to detect "animals" with a generic classifier first, then look within that frame to see if the animal was a dragon.
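That coarse-then-fine idea can be sketched as a small pipeline: a generic detector proposes candidate boxes, and the dragon classifier only runs on those crops. Both detectors are injected here as plain callables (the function and stub names are illustrative):

```python
import numpy as np

def two_stage_detect(coarse, fine, image):
    """Keep only the boxes from coarse(image) whose crop fine() confirms.

    coarse(image) returns candidate (x, y, w, h) boxes; fine(crop)
    returns True if the cropped region looks like the target class.
    """
    hits = []
    for (x, y, w, h) in coarse(image):
        crop = image[y:y + h, x:x + w]
        if fine(crop):
            hits.append((x, y, w, h))
    return hits

# Example with stub detectors on a synthetic image:
img = np.zeros((100, 100), dtype=np.uint8)
img[10:20, 30:40] = 255  # one bright "animal" region
boxes = two_stage_detect(lambda im: [(30, 10, 10, 10), (0, 0, 5, 5)],
                         lambda crop: crop.mean() > 128, img)
print(boxes)  # [(30, 10, 10, 10)]
```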

 

As you can tell, I had a lot of fun with this project and learnt quite a few things along the way.

 

For further details of the build, code and electronics see: