I grabbed one of the first RPis just as the Model B became available, and now it's powering my own rover! There are some challenges with video, but overall it's a fun build, and it's still getting better every day.

20140107_232453.png

Gbot has 4 cameras total: one mounted on a Dream Cheeky missile launcher turret for a 270-degree view, one mounted on the nose (shown in the lower images), and two in the Xbox Kinect, one color and one IR. If you don't already know, the Kinect delivers grayscale 3D (depth) via infrared, which makes it easy to do object detection and avoidance. It also has an accelerometer and a mic, and I think some other stuff that I'm not interested in. (Check it out.)

 

 

20140108_183825.png

I'm using an old Kyocera cellphone for GPS, and a 4-wheel-drive R/C chassis I found at a junk store. The battery is a 12V gel cell (deep cycle), which powers the Kinect directly, and a Duracell USB hub via a car cigarette-lighter USB charger that puts out a clean 5V (clean enough).

 

20140108_183857.png

I made a patch cable to power the USB hub by splicing the ground and VCC (power) lines and putting a diode between the RPi and the cigarette-lighter adapter. That should (SHOULD) prevent feedback current from flowing into the Pi, and it has worked so far. My first attempt was to disconnect the USB power from the RPi completely, but it seems all my devices require some sort of handshake voltage.

20140108_183841.png

I've written some software to drive it (Python). I use 3 custom modules: Bridge.py, Targeting.py, and Engine.py. I could have used instances, but I wanted it to perform as though I were a passenger on one of 3 decks (like on Star Trek). The information is transponded via sockets, and all commands go through the engine room for processing and management. From the bridge you can toggle camera views etc., the engine room controls movement and GPS, and targeting is, of course, the turret.
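Here's a minimal sketch of the deck-to-deck idea: an "engine room" listens on a socket, and every command from the "bridge" passes through it before anything happens. The command string and ACK reply are illustrative assumptions, not the actual Bridge.py/Engine.py protocol.

```python
import socket
import threading

# Toy version of the deck-to-deck link. The "camera:toggle" command and
# "ACK" reply are made up for illustration; Gbot's real protocol differs.
HOST = "127.0.0.1"

def engine_room(server_sock, log):
    """All commands route through here for processing and management."""
    conn, _ = server_sock.accept()
    with conn:
        data = conn.recv(1024).decode()
        log.append(data)                      # engine room handles/records it
        conn.sendall(b"ACK " + data.encode())

def bridge_send(port, command):
    """The bridge never acts directly; it just sends a command and waits."""
    with socket.create_connection((HOST, port)) as s:
        s.sendall(command.encode())
        return s.recv(1024).decode()

log = []
server = socket.socket()
server.bind((HOST, 0))            # port 0: let the OS pick a free port
port = server.getsockname()[1]
server.listen(1)
t = threading.Thread(target=engine_room, args=(server, log))
t.start()
reply = bridge_send(port, "camera:toggle")
t.join()
server.close()
print(reply)   # ACK camera:toggle
```

One socket pair per deck and you get the "three rooms" feel without any shared state between modules.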

gbupdate3.gif

The software, as mentioned, is custom: my first serious attempt to write something others might want to use and hack. On the left is the engine room console. On the lower right is the IR depth radar (very cool). Just above the radar grid is a front-facing view that peels itself back as the radar scan progresses from bottom (close) to top (far). I also added some motion targeting (toggled on here), and I'm planning to add audio detection to turn the turret, but Linux doesn't like to listen.

 

Problems: RPi + video processing = FAIL! It works, but it's sadly slow. On a similarly spec'd HP desktop I can drive 3 cameras before it becomes annoying, but on the RPi, one will make you want an Arduino. Still, I already know the RPi, so BLAH! I'm tweaking Bridge.py so that all cameras run at 176x144 on cv.QueryFrame(); then, if a high-res image is requested, it simply changes the query resolution. Also, only one cam will be online at any time, perhaps 2 if the RPi will eat Kinect IR images and a mini color image at the same time. If anyone has better solutions for frame grabs, let me know! I will update the blog with dependencies and such at a later date.
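The resolution plan above boils down to a tiny helper: everything runs at the low "cruise" resolution, and only a high-res request changes the query size. The resolution values and the OpenCV calls in the comment are assumptions for illustration, not the actual Bridge.py code.

```python
# Keep frame grabs cheap on the Pi: low resolution by default, bumped
# only for the occasional detailed still. Values are assumptions.
LOW_RES = (176, 144)     # QCIF: small enough for the RPi to keep up
HIGH_RES = (640, 480)    # only when a detailed still is requested

def query_resolution(high_res_requested):
    """Pick the capture resolution for the next frame grab."""
    return HIGH_RES if high_res_requested else LOW_RES

# Applied before the grab; with the newer cv2 API this would look like:
#   cap.set(cv2.CAP_PROP_FRAME_WIDTH,  w)
#   cap.set(cv2.CAP_PROP_FRAME_HEIGHT, h)

print(query_resolution(False))  # (176, 144)
```

Switching the capture property is far cheaper than grabbing full frames and downscaling them in software.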

The next step for me is to build or buy a motor controller. I had planned to just use some transistors toggled via GPIO, but I want full directional control, and that's getting more confusing than fun.
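The reason it gets confusing: full directional control means an H-bridge per motor, which turns two GPIO pins into four switching states. Here's a sketch of that state table; the state names are illustrative, and the GPIO calls in the comment are assumptions, not my actual wiring.

```python
# (forward_pin, reverse_pin) logic levels for one H-bridge channel.
# "brake" (both inputs high) is only safe on driver ICs like the L298;
# with discrete transistors it would short the supply (shoot-through).
DIRECTIONS = {
    "forward": (1, 0),
    "reverse": (0, 1),
    "coast":   (0, 0),
    "brake":   (1, 1),
}

def drive(direction):
    """Return the GPIO levels to write for a requested direction."""
    return DIRECTIONS[direction]

# On the Pi this would become something like:
#   fwd, rev = drive("forward")
#   GPIO.output(FWD_PIN, fwd)
#   GPIO.output(REV_PIN, rev)

print(drive("forward"))  # (1, 0)
```

A ready-made H-bridge board handles the shoot-through protection for you, which is most of what makes the discrete-transistor approach more confusing than fun.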

 

*****UPDATE May 2014

I've made a lot of changes to the overall project. I decided to use the Olinuxino (Olimex A20) for the heavy lifting like the Kinect and the webcams. The RPi just wasn't getting it done. The Olimex is where I should have started with SBCs, and I highly recommend them. The A20 is so much faster that it's not fair to compare them, and it's only $100. I also got fed up with the short range of USB wireless, so I decided to stick a Linksys WRT54G on board to manage all the networking headaches, then stuck all the tech inside a box. A few custom Ethernet cables later, Gbot was 100% mobile and a LAN hotspot!

 

20140315_224219.jpg

20140315_224213.jpg

20140315_223938.jpg

I was able to build the motor control circuitry and have already begun testing pathfinding and autopilot course correction. It is already making simple object-avoidance turns using thresholded peels of the Kinect depth imagery, but to get it more precise, I fear I may have to upgrade to stepper motors, which might be better served by a new chassis. I want to at least get this one running outside before I swap out the 'cool' for the practical.
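A toy version of the thresholded peel idea: slice the depth image into a distance band, count which half of the frame the in-band pixels fall in, and turn away from the crowded side. The grid size, depth units, and steering rule here are all assumptions for illustration, not the actual avoidance code.

```python
def peel_obstacle(depth, near, far):
    """Return which side ('left'/'right') an obstacle sits on within
    the [near, far) depth band, or None if the band is clear."""
    w = len(depth[0])
    left_hits = right_hits = 0
    for row in depth:
        for x, d in enumerate(row):
            if near <= d < far:               # pixel is inside this "peel"
                if x < w // 2:
                    left_hits += 1
                else:
                    right_hits += 1
    if left_hits == right_hits == 0:
        return None
    return "left" if left_hits > right_hits else "right"

# Tiny 2x4 depth grid (arbitrary depth units): a wall on the left side.
frame = [[300, 300, 900, 900],
         [310, 305, 880, 920]]
print(peel_obstacle(frame, 250, 400))  # 'left' -> steer right
```

Scanning the bands from near to far reproduces the bottom-to-top radar peel from the console: the closest obstacle wins, and everything behind it can wait.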