My project idea is to use the BeagleBone AI board, a mini power bank, and a USB camera to make a Portable Dried Sea Cucumber Species Identifier. At first, I am going to aim at identifying at most three types of dried sea cucumber: Isostichopus Badionotus, Apostichopus Californicus, and Holothuria Mammata. The project uses machine learning, feature extraction, and classification to produce a predicted output. The idea came about during a visit to the Philippines, where I got samples of various dried sea cucumbers from my uncle, who was a middleman between restaurants and fishermen. Dried sea cucumber is difficult to identify by species, so I thought it would be a neat idea to use computer vision as a solution for identifying dried sea cucumber species.
|Holothuria Mammata||Isostichopus Badionotus||Apostichopus Californicus|
BeagleBone AI
For the neural net framework, I decided on Darknet, since it is something I have worked with before and I was able to get around 10 fps using only the CPU. I initially tried to convert a TensorFlow model to TIDL format, but I couldn't get it to work, so I went with my current approach. For the model, I used a modified tiny-yolo model and retrained it on the new objects for around 5000 steps. I also modified the Darknet framework so that it could output predictions to a text file. Overall, the accuracy of this model was around 85%, and since my application detects objects that are idle, there is no need for a high frame rate. The modified Darknet version and the model can be found at the link in Implementation.
The first problem I ran into on this project was finding a suitable number of images for specific species of dried sea cucumber. Since there are barely any images of dried sea cucumber on ImageNet, I spent a couple of hours just looking at images on sites like Alibaba and Amazon, since there are actual dried sea cucumber sellers there. After a couple of hours, I decided to settle on these sea cucumber species: Isostichopus Badionotus, Apostichopus Californicus, and Holothuria Mammata. However, I was only able to get around 20 different images at most, and I wanted at least around 60 different images for each species. Thus, I decided to create new images from the existing pool. One method I used was simply taking pictures that contained multiple dried sea cucumbers and making each sea cucumber its own image. Another method I used was rotating the image to a 45-degree angle.
This is the original image:
This is an image I generated from the original:
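Both augmentation methods above (splitting a multi-cucumber photo into single images, and rotating by 45 degrees) can be scripted instead of done by hand in an editor. This is a minimal sketch, assuming Pillow is installed; the crop boxes would come from the bounding boxes you already know, and the file names are just illustrative:

```python
from PIL import Image

def augment(src_path, crop_boxes, out_prefix):
    """Create extra training images from one source photo:
    one crop per cucumber, plus a 45-degree rotated copy."""
    img = Image.open(src_path)
    # Method 1: split a multi-cucumber photo into one image per animal.
    for i, box in enumerate(crop_boxes):  # box = (left, upper, right, lower) in pixels
        img.crop(box).save(f"{out_prefix}_crop{i}.jpg")
    # Method 2: rotate the whole image by 45 degrees.
    # expand=True grows the canvas so the corners are not cut off.
    # Rotated copies still need to be re-labeled in LabelImg afterwards.
    img.rotate(45, expand=True).save(f"{out_prefix}_rot45.jpg")
```

Generated images go back into the same pool and get labeled like any other image.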
I used LabelImg to create a bounding box around all the objects in the images. In addition, LabelImg creates the coordinate text file in Darknet format, which is convenient since I don't have to convert the coordinate file into Darknet format myself.
During the image labeling process, it is important to make the bounding box as tight as possible, since the framework is going to learn features within the bounding box. The higher the percentage of desired features within the bounding box, the more accurate your detection model is going to be.
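For reference, the Darknet/YOLO label format that LabelImg writes is one text file per image, with one line per box: a class ID followed by the box center and size, all normalized to the 0-1 range by the image dimensions. If you ever need to produce these files from pixel coordinates yourself, the conversion is only a few lines:

```python
def to_darknet(class_id, left, top, right, bottom, img_w, img_h):
    """Convert a pixel-coordinate bounding box to a Darknet/YOLO label line.
    Darknet wants the box center and size, normalized by the image dimensions."""
    x_center = (left + right) / 2.0 / img_w
    y_center = (top + bottom) / 2.0 / img_h
    width = (right - left) / img_w
    height = (bottom - top) / img_h
    return f"{class_id} {x_center:.6f} {y_center:.6f} {width:.6f} {height:.6f}"
```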
1. For training, I simply copied the data folder from LabelImg (after all the images were finished being labeled) and put it into the Darknet folder.
2. Next, I created a BBAI.data file as shown below:
classes = 2
train = BBAI/train.txt
valid = BBAI/train.txt
names = BBAI/BBAI.names
backup = backup/
3. Then, I created a train.txt to let the program know where all the images are stored. train.txt should look like this:
BBAI/obj/Badionotus0.jpg
BBAI/obj/Badionotus1.jpg
...
BBAI/obj/Badionotus60.jpg
BBAI/obj/Mammata0.jpg
...
BBAI/obj/Mammata40.jpg
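Rather than typing train.txt by hand, it can be generated from the image folder. A small sketch, assuming the labeled images live under BBAI/obj/ as in the layout above:

```python
import os

def write_train_list(image_dir="BBAI/obj", out_path="BBAI/train.txt"):
    """List every .jpg under image_dir into train.txt, one path per line,
    using paths relative to the darknet directory as Darknet expects."""
    names = sorted(n for n in os.listdir(image_dir) if n.lower().endswith(".jpg"))
    with open(out_path, "w") as f:
        for name in names:
            f.write(f"{image_dir}/{name}\n")
    return len(names)
```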
4. Next, I created a BBAI.names file that simply contains the object names.
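For reference, BBAI.names is just one label per line, in the same order as the class IDs in the label files. The exact labels below are my guess based on the image file names in train.txt and the two classes in BBAI.data:

```
Badionotus
Mammata
```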
5. Then, I created my own .cfg file (this is the model file) from tiny-yolo.cfg. The important part here is to change the number of classes in the model to however many classes (objects) you have, and to change the number of filters on line 114 to (classes + 5) * 5. Other things you can experiment with in the model for performance improvement are the width/height and the number of convolution layers.
6. Finally, use the following command to start training your model. There should already be a pre-trained model in the Darknet directory; you are simply retraining it with new data.
./darknet detector train BBAI/BBAI.data BBAI/BBAI.cfg BBAI/darknet53.conv.74
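The (classes + 5) * 5 filters rule from step 5 is easy to get wrong, so here is a quick sanity check. For the v2-style tiny-yolo used here, each of the 5 anchor boxes predicts 4 box coordinates, 1 objectness score, and one score per class:

```python
def yolo_v2_filters(num_classes, num_anchors=5):
    # Each anchor predicts: 4 box coordinates + 1 objectness score + class scores.
    return (num_classes + 5) * num_anchors

print(yolo_v2_filters(2))  # 2 classes -> filters=35 on line 114 of the .cfg
```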
It is recommended to train for around 9000 iterations, but in some cases around 7000 or even 5000 iterations will yield better results. Since I modified Darknet to save the model every 1000 iterations, it should be fairly simple to test and see which iteration gives the best result.
To make enabling image capture and detection simpler, I made a local web app using Node.js to control this application remotely.
app.js creates a web server to host the webpage. To access the webpage, simply type localhost:5000 into the URL bar.
This is the control UI
Full Repo : https://github.com/Husky95/BBAI
For manual testing of object detection, I feed the framework an image with the current model to make sure it can detect the objects correctly.
Use this command for manual testing:
./darknet detector test BBAI/BBAI.data BBAI/BBAI.cfg backup/BBAI_5000.weights Test.jpg
|Isostichopus Badionotus||Holothuria Mammata||Apostichopus Californicus|
Overall, the average prediction accuracy was around 75%, and the average time it takes is around 30 seconds per image. The long processing time is mostly due to Darknet not being able to fully use the BBAI's special hardware. I believe that once the BBAI releases its TensorFlow Lite support, using the TensorFlow Lite framework will significantly reduce the time it takes to identify an object. Of the three dried sea cucumbers I tested, Mammata and Californicus have a relatively high prediction accuracy at around 85%, while Badionotus has around 65% accuracy. This is mostly due to the lack of image variety in the Badionotus dataset, and maybe the unique color pattern of this specific sea cucumber species.