Product Performed to Expectations: 10
Specifications were sufficient to design with: 10
Demo Software was of good quality: 10
Product was easy to use: 10
Support materials were available: 10
The price to performance ratio was good: 10
Total Score: 60 / 60
I applied for the Raspberry Pi camera bundle RoadTest so I could test it with my students in our programming class, since I was teaching MATLAB this semester. We made 3D-printed cases for the Pi and camera. Below are some of the projects we completed using this module.
3D representation of the object
We simply cannot create a "true" higher-dimensional image from only a couple of 2-dimensional images. It's a question of math: there are infinitely many possible solutions. The more 2D images we have, taken from different projection angles, the better we can approximate the 3D model. By taking around 40 photos of an object rotating on a fixed axis with a stable camera module, we can build a decent 3D representation of the object. Capturing the 40 photos takes a while, and Zephyr, the image-processing software we use, also takes a while to create a model from the images. We can use a simpler model to shorten that time, but the limiting factor remains the resolution of the Raspberry Pi camera.
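With 40 photos over one full rotation, each shot is taken 9 degrees apart. A quick Python sketch of the capture angles (illustrative only, not part of the class's MATLAB code):

```python
def capture_angles(n_photos, full_turn=360.0):
    """Evenly spaced turntable angles for one full rotation."""
    step = full_turn / n_photos
    return [i * step for i in range(n_photos)]

angles = capture_angles(40)
# 40 angles, 9 degrees apart: 0, 9, 18, ..., 351
```

More photos mean finer angular steps and a better approximation, at the cost of longer capture and processing time.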
A simple 12-inch turntable from Amazon does the job. We need it to keep a constant distance for the photographs as the object rotates. The camera and light source stay still while the model moves with the turntable.
% Connect to the Raspberry Pi over the network and configure the camera board.
rpi = raspi('rasppi.local','pi','raspberry');
cam = cameraboard(rpi, 'Resolution', '640x480', 'FrameRate', 89, 'Rotation', 180, 'Brightness', 50,...
    'ExposureMode', 'auto', 'AWBMode','auto', 'MeteringMode','average', 'VideoStabilization','on');
disp('Raspberry Pi 3 Model B connected.');

% Flags shared with the button callbacks below.
global x y
y = 0;

% Ask whether to start the live feed.
CameraValue = 'Turn Camera Live Feed: ON or OFF? ';
UserCommand = input(CameraValue,'s');
if isequal(UserCommand, 'ON') || isequal(UserCommand, 'on')
    count = 0;
    % Small control window: "Take Photo" requests a capture, "Done" ends the feed.
    figure('pos',[200 700 100 200])
    uicontrol('Style','pushbutton','String','Take Photo','Position',[0 100 100 100],...
        'Units','normalized','Callback','global y; y=1;',...
        'BackgroundColor','black','ForegroundColor','white');
    uicontrol('Style','pushbutton','String','Done','Position',[0 0 100 100],...
        'Units','normalized','Callback','close all; global x; x=0;',...
        'BackgroundColor','black','ForegroundColor','red');
    x = 1;
elseif isequal(UserCommand, 'OFF') || isequal(UserCommand, 'off')
    x = 0;
else
    x = 0;
    disp('Please re-run the program and enter one of the choices');
end

% Live feed: show frames and save a photo each time "Take Photo" is pressed.
while x == 1
    img = snapshot(cam);
    imagesc(img);
    set(gca,'Position',[0 0 1 1])
    drawnow;
    if y == 1
        count = count + 1;
        imwrite(img, sprintf('photo_%03d.jpg', count));   % unique name per shot
        fprintf('Photo Number: %d \n',count);
        if count >= 50   % 3DF Zephyr Free caps projects at 50 photos
            disp('This is the max number of photos allowed on 3DF Zephyr Free');
        end
        y = 0;
    end
end

% Offer a safe shutdown before the Pi is unplugged.
Shutdown = 'Shutdown "Raspberry Pi 3 Model B"? Yes or No? ';
SHUT = input(Shutdown,'s');
if any(strcmpi(SHUT, {'Yes','Y'}))
    disp('You chose to shutdown Raspberry Pi');
    system(rpi, 'sudo shutdown -h now');   % reuse the existing raspi connection
    disp('Safe to unplug Raspberry Pi');
else
    disp('You chose NOT to shutdown Raspberry Pi');
    disp('NOTE: Make sure to shutdown before unplugging the Raspberry Pi');
end
disp('NOTE: Your photos are stored in this root folder');
Where we need to be (Hardware):
Where we need to be (Software):
We need to be able to have the camera take a picture, save it to a directory with a unique name for each image, and get ready for another photo.
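One simple way to give each photo a unique name is to combine a zero-padded counter with a timestamp. A Python sketch of such a naming scheme (illustrative; the class code used MATLAB's sprintf and imwrite, and the prefix here is an assumption):

```python
import datetime

def photo_name(count, prefix="photo"):
    """Build a unique, sortable filename from a counter and a timestamp."""
    stamp = datetime.datetime.now().strftime("%Y%m%d_%H%M%S")
    return f"{prefix}_{count:03d}_{stamp}.jpg"
```

Zero-padding keeps the files in capture order when sorted alphabetically, which matters when feeding an ordered photo set into Zephyr.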
Live camera feed that recognizes faces
The goal of the project was to create a live camera feed that recognizes faces using MATLAB, a Raspberry Pi 3, and a camera module for the Raspberry Pi. The program does this using MATLAB's built-in vision.CascadeObjectDetector object and a camera connected to the Raspberry Pi.
This project used MATLAB, a Raspberry Pi 3, and a camera module for the Raspberry Pi. The main function used in this program is vision.CascadeObjectDetector, which detects objects using the Viola-Jones algorithm. We used it to find faces and label them for the program. Once running, the program displays every face in the shot surrounded by a yellow box. It maintains a continuous feed along with the locations of the recognized faces. If someone new enters the frame, the program recognizes them too and draws a new box around their face. The program impressively tracked all five of our faces at once, and a demonstration of it catching three faces can be seen in the figure below.
The code continuously detects faces in images taken by the Pi camera inside a while loop. We use the vision.CascadeObjectDetector object, which implements the built-in Viola-Jones algorithm to detect different people's faces. Viola-Jones evaluates rectangular (Haar-like) features over the pixels of an image to decide whether a region contains a face or another requested object. The step function then returns a position matrix giving the location of each face found by the detector. Next, the insertObjectAnnotation function draws a rectangle around each face at its location in the position matrix and labels it. The loop updates continuously in real time as long as the Pi camera is running.
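Viola-Jones can evaluate thousands of rectangle features quickly because it first builds an integral image (summed-area table), after which any rectangle sum costs only four lookups. A minimal pure-Python sketch of that core trick (illustrative, not the class's MATLAB code):

```python
def integral_image(img):
    """Summed-area table: ii[y][x] = sum of img[0..y-1][0..x-1]."""
    h, w = len(img), len(img[0])
    ii = [[0] * (w + 1) for _ in range(h + 1)]
    for y in range(h):
        row_sum = 0
        for x in range(w):
            row_sum += img[y][x]
            ii[y + 1][x + 1] = ii[y][x + 1] + row_sum
    return ii

def rect_sum(ii, top, left, height, width):
    """Sum of pixels inside a rectangle, via four table lookups."""
    return (ii[top + height][left + width] - ii[top][left + width]
            - ii[top + height][left] + ii[top][left])
```

A Haar-like feature is then just the difference of two such rectangle sums, which is why the detector can scan a whole frame at interactive rates.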
The goal of this project was to detect faces on a live video feed coming off a Raspberry Pi, using MATLAB for the code. The most important part of the program was MATLAB's vision.CascadeObjectDetector. The main problem we encountered was that the detector sometimes mistakes other objects for faces, even ones that look nothing like a face. We solved this by adjusting the detector's shape parameters in MATLAB. Once fully tuned for facial detection, the program proved great at finding faces and more.
For example, this program could be used to count how many people are in a room, or more likely serve as the starting point for other programs that require a continuous camera feed rather than a single picture. Applications include automated defense systems, active building security, finding wanted persons in specific areas, facial-recognition advertising, and so on. A popular extension of our current application would be coupling this software with facial recognition to advertise to individual interests in malls, restaurants, and of course online. This new form of advertising would be very valuable to businesses: imagine simply going to the mall, using a kiosk, and instantly seeing a welcome sign with your name and the new shoes you googled last week. There are many ways to utilize our simple algorithm in the market and in private companies.
Lastly, we have demonstrated the elegance and simplicity of this program and how easy it has become to use such powerful technology. MATLAB was first introduced as a tool for matrix mathematics, focused primarily on linear algebra. It has since proven to be an extremely powerful programming tool for algorithm development and visualization. The biggest takeaway from this project for all of us was the importance of technology and robotics in today's world, and how we can be a part of that.
Object identification: isolating a yellow cone from a live feed of images from the Pixie camera
The initial goal of this project was to write code that detects yellow cones in a field and draws a box around each one's location for another robot to travel to and retrieve. The Raspberry Pi and Pixie camera took continuous pictures to feed into the code. The original plan was to take the range of pixel values of the yellow cone in the latest image from the camera and black out every pixel outside that range. With the blacked-out image we would sum the values of all the pixels: if there was no yellow cone in the image, the sum would be zero or very low; if there was a cone, the sum would be significantly higher. We would then take the highest and lowest x and y coordinates of the in-range pixels, draw a square box around the cone, display it on the original image, and then on the camera feed. This original plan did not work with the Raspberry Pi and the camera, so the next approach was color segmentation. We implemented code that takes the picture, finds the color that stands out the most, and isolates the rest of the image from that color, leaving the bright yellow cone. Once found, we saturated it blue, so any recognized yellow cone appears green.
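The original plan (keep only pixels in the cone's value range, then take the min/max coordinates of the survivors as the box corners) can be sketched in a few lines of Python; the function name and single-channel image are illustrative, not the class's MATLAB code:

```python
def find_cone_box(img, lo, hi):
    """Return (min_x, min_y, max_x, max_y) of pixels whose value lies in
    [lo, hi], or None when nothing falls in range (no cone in the image)."""
    hits = [(x, y)
            for y, row in enumerate(img)
            for x, v in enumerate(row)
            if lo <= v <= hi]
    if not hits:
        return None
    xs = [x for x, _ in hits]
    ys = [y for _, y in hits]
    return (min(xs), min(ys), max(xs), max(ys))
```

Returning None for the no-cone case plays the same role as the near-zero pixel sum described above: it lets the caller distinguish "no cone" from "cone found here".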
Results and Discussion
The program executed on a separate computer in MATLAB and communicated with the Pi wirelessly via SSH, allowing the Pi to potentially be mounted on a mobile robot. The first line of code sets up this connection. The code correctly identified the color and shape of the cone and differentiated it visually on the camera feed. Pixels matching the exact parameters of the bright yellow color were treated as separate from the rest of the image: the code took an image, separated the brighter colors from the dimmer ones, and saturated one set.
This screenshot shows the identified yellow cone; it appears green because the program shades it blue once the bright yellow color is identified.
This is the image that the program would flicker to from image 1 above.
Summary and Conclusion
The program correctly identified the yellow cones in the images from the live feed, since the color parameters fit the cone exactly, but the displayed image would flicker between the original feed image and the new image with the identified cone. To solve this, we tried changing the settings of the Pixie Camera, but the problem turned out to be in how we wrote the code: it was switching between variables too rapidly, and we did not know how to fix it in the time we had to complete the project.
The benefit of using color segmentation over hard-coding the pixel values of the cone was that we could now identify multiple cones in a field, which fit better with the original goal of helping the VEX Robotics team in competition. We started from a MathWorks color-identification example, but its color ranges were not as specific as we wanted. The MathWorks code identified five different colors and separated them to determine whether a certain color was present, but not the shape of the object with that color. We modified the code by deleting the extra color identification so that it only compared and contrasted two colors, allowing more accurate cone identification.
Road Test Summary
This road test was a great opportunity for us to implement real-life projects using the Raspberry Pi and camera module. It helped me show my students what they are capable of doing with MATLAB and a Raspberry Pi. I am quite happy and grateful to have been chosen as a roadtester, and I would personally like to thank element14 for giving us this chance.