This is a continuation of my post: Arduino Nano 33 BLE Sense with OV7670 Camera.


I wanted to do a quick evaluation of the performance of a deployed object classification model.  I decided to develop a simple model to identify three fairly distinct hardware objects: a bolt, a nut, and a washer.  This was not intended to be a rigorous test of model quality.


I captured a small dataset from the camera using the Data Acquisition panel.  I used a white background and only a single instance of each object, captured in different orientations, so I expected to achieve good accuracy when testing with the same objects.


I then created an impulse (model) using the default image classification application.  The preprocessing squashed the image down to the 96x96 size the neural network expects.
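To make the "squash" preprocessing concrete, here is a minimal sketch of a non-aspect-preserving resize in numpy.  This is my own illustration, not Edge Impulse's actual implementation (which also offers a "fit" mode that preserves aspect ratio); the 160x120 input size is just a stand-in for a camera frame.

```python
import numpy as np

def squash_resize(img, out_h=96, out_w=96):
    """Nearest-neighbour resize that ignores aspect ratio ("squash")."""
    h, w = img.shape[:2]
    # map each output row/column back to a source row/column
    rows = np.arange(out_h) * h // out_h
    cols = np.arange(out_w) * w // out_w
    return img[rows[:, None], cols]

frame = np.zeros((120, 160, 3), dtype=np.uint8)  # stand-in for a camera capture
small = squash_resize(frame)
print(small.shape)  # (96, 96, 3)
```

Because the width and height are scaled independently, a non-square frame gets slightly stretched; that distortion is the trade-off of "squash" versus cropping.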



I then trained the neural network.  With only 20 training cycles, accuracy was poor - less than 50%.  I increased this to 100 training cycles and achieved good accuracy, as shown below.  I have almost certainly overfit the data in this case, but for my purposes that is acceptable.
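As a toy illustration of why more training cycles help fit a small dataset (and why that fit can be overfitting), here is a tiny gradient-descent classifier in numpy.  This is not the Studio's training code; it only shows that extra epochs keep driving the training loss down on the same small set of samples, which says nothing about performance on new objects.

```python
import numpy as np

rng = np.random.default_rng(0)
# toy stand-in for a small dataset: 30 samples, 8 features, binary labels
X = rng.normal(size=(30, 8))
y = (X @ rng.normal(size=8) > 0).astype(float)

def train_loss_after(epochs, lr=0.5):
    """Log-loss on the *training* set after `epochs` gradient-descent steps."""
    w = np.zeros(8)
    for _ in range(epochs):
        p = 1 / (1 + np.exp(-(X @ w)))      # sigmoid predictions
        w -= lr * X.T @ (p - y) / len(y)    # gradient step on log-loss
    p = 1 / (1 + np.exp(-(X @ w)))
    return -np.mean(y * np.log(p) + (1 - y) * np.log(1 - p))

# more cycles -> lower training loss, but not necessarily better generalisation
print(train_loss_after(20), train_loss_after(100))
```

With a dataset this small, the training loss can be pushed arbitrarily low while the model simply memorises the samples - the same risk I accepted above.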



Then I tried "Live Classification".  You can see that I had trouble getting sharp focus, but I still got reasonable results.  I'll need to figure out how to get better focus.  The washer performed worst, with confidence in the 70% range.


I then generated the binary firmware for the device (rather than a source library).  If I were developing a full project, I would have opted for the library instead.



After the firmware is generated, an installation video shows how to deploy (flash) the binary to the device.  It's essentially the same process that was used to flash the data acquisition firmware.


Then, instead of running the edge-impulse-daemon that was used for data acquisition, we use the command "edge-impulse-run-impulse" to start inferencing.
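For reference, the two CLI invocations look like this.  Both commands are from the Edge Impulse CLI as used in this series; run them on the host machine with the board connected over USB (these are device-dependent commands, shown here only as a usage sketch).

```shell
# data acquisition (used earlier in the series to collect the dataset)
edge-impulse-daemon

# on-device inferencing with the deployed firmware;
# classification results print to the terminal
edge-impulse-run-impulse
```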

I have a short video showing the classification performance.  The speed of the acquisition and inferencing (2 seconds) makes this setup suitable only for snapshots.  You'll also see that it can take 2-3 iterations to reach the accuracy achieved in "Live Classification".  Since I had no visual feedback, I drew a box that allowed me to position the objects in the frame.  I also happened to notice, in the case of the washer, that a slight shift to the right improved the classification probability.  I suspect this is caused by a change in the light reflection off the washer.