The goal: take a video file and determine which frames contain motion, on a Netduino.
Looking around the Internet, I found a couple of postings asking whether video processing is possible on a Netduino board (an open source electronics platform using the .NET Micro Framework). So, I decided to do some experimenting of my own. In this blog post, I'm going to demonstrate how to read a sequence of image files (simulating an incoming video) from the SD card and process them to determine when the motion in the video starts and stops.
First, I needed to get a video in which there was motion in some parts but none in others. Also, I wanted a fairly short video so that I didn’t have to wait a long time while it was processing. So, I shot this short video using a digital camera:
The video came off my camera as an .avi file. Since I didn't want to spend days writing an .avi reader, and didn't think I would find one that worked on the .NET Micro Framework, I went another route: I converted the video into a sequence of bitmap images using VirtualDubMod. To do the actual conversion:
File – Save Image Sequence…
Output Format: Windows BMP
Now, the .bmp file format is much easier to read than .avi, but I was still looking for something simpler. So, I converted all of the .bmp images to .pgm images using ImageMagick. I'm using Windows, so I ran the conversion through Cygwin with a command like this:
for i in *.bmp; do convert "$i" "${i%.bmp}.pgm"; done
Excellent: now all of the images were in a file format that is very easy to read in.
Displaying Image Sequence (Recreating the Video):
As a first check that the image conversion went well, I opened up the images in GIMP. Everything checked out fine, so I started writing a .NET application on my desktop to read in a .pgm file and display it. Luckily, with .NET 3.5, there is a great way to display pixel images: the WriteableBitmap class. Using that class and a timer, you can recreate the video by displaying the images one after another. (See the attached code for more specifics.)
Now that we have verified that the image sequence can be read in correctly, it is time to do something useful with it. Since the Netduino has very little RAM, the first step is to downsample the image. I chose a downsampling ratio of 20 in both rows and columns, meaning that every 400 input pixels produce 1 output pixel. Here's what the video looks like after downsampling (shortened to show only the parts with motion):
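The attached code does this step in C#; as a sketch of the idea, here is a Python version. The post doesn't say whether the 400 pixels in each block are averaged or simply picked, so this sketch decimates, keeping the top-left pixel of each 20x20 block, which is the cheapest option on a RAM-constrained board (the helper name is mine):

```python
def downsample(pixels, width, height, ratio=20):
    """Decimate a flat, row-major greyscale image: keep the top-left
    pixel of each ratio x ratio block (400 pixels in -> 1 pixel out
    when ratio is 20)."""
    out_w, out_h = width // ratio, height // ratio
    return [[pixels[(r * ratio) * width + (c * ratio)] for c in range(out_w)]
            for r in range(out_h)]
```

Averaging each block instead of picking one pixel would also suppress some of the noise, at the cost of touching every input pixel on every frame.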
A couple of things jump out at you. First, the video is very noisy. Second, whole features are missing (e.g., the light switch). Luckily, the motion in the video is rather large and survives the downsampling.
To detect motion, a simple algorithm is used: compare each pixel's previous value to its current value, and if it changes by more than a given threshold, the frame contains motion. Experimenting with the threshold, a value of about 37 or higher does a good job of detecting the motion in the scene without triggering on the noise.
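In Python, that per-pixel comparison sketches out like this (a simplified stand-in for the attached C# code; the function name and the representation of frames as 2-D lists are mine):

```python
def has_motion(prev_frame, cur_frame, threshold=37):
    """Flag motion if any pixel changed by more than `threshold`
    between consecutive frames (frames are 2-D lists of grey values)."""
    for prev_row, cur_row in zip(prev_frame, cur_frame):
        for p, c in zip(prev_row, cur_row):
            if abs(c - p) > threshold:
                return True
    return False
```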
That's a fairly high threshold, considering that each pixel in the image has a value from 0 to 255. So, is there anything we can do to lower it?
The simplest way is with an averaging filter. The idea is to take a 5x5 region of the image, add up all of the pixel values, divide by the number of pixels (25), and use that as the new value for the center pixel. Applying this to every pixel in the image yields a smoothed-out video that looks like this (again clipped to show only the parts with motion):
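A straightforward, unoptimized version of that box filter, again as a Python sketch of the attached C# code. Border pixels, where the 5x5 window would run off the image, are simply left unchanged here; the post doesn't specify how the borders are handled:

```python
def smooth(frame, size=5):
    """Box filter: each interior pixel becomes the integer mean of the
    size x size neighbourhood centred on it; border pixels are copied
    through unchanged. Returns a new frame."""
    h, w = len(frame), len(frame[0])
    half = size // 2
    out = [row[:] for row in frame]          # start from a copy
    for r in range(half, h - half):
        for c in range(half, w - half):
            total = 0
            for dr in range(-half, half + 1):
                for dc in range(-half, half + 1):
                    total += frame[r + dr][c + dc]
            out[r][c] = total // (size * size)
    return out
```

On the Netduino, a filter like this is worth running only on the already-downsampled frames; at full resolution the 25 additions per pixel would be far too slow.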
This video has noticeably less noise in it. Experimenting with the threshold again, a value of about 7 or higher yields good results.
Running on the Netduino:
So, now it's time to test it out on the Netduino. Porting the image reader and image-processing code to the .NET Micro Framework was pretty easy. For the very interested reader: I didn't change the desktop version of the code, so you can compare the two projects to see exactly what had to change when porting to the Netduino.
Running on the Netduino, I got the same results (the frames in which the motion starts and stops) as on the desktop.
This post demonstrated that some image processing can be done on the Netduino: images were read from the SD card and processed to determine which frames in the sequence contained motion. The next test would be to hook a video camera up to the Netduino and see if the processing can be done in real time. (I wanted to at least prove it was feasible before going out and buying a video camera.)
Any suggestions on how to improve the project? Post them in the comments and let me know.