
Andy Clark's Blog


Google AIY and Marty the Robot

I decided to wire up the Google AIY box to Marty the Robot.

OK Google!


The first thing to do was to upgrade the software so I did not have to press the button to operate the robot. I followed Eric Duncan's Google AIY upgrade steps.


After stopping the service

sudo systemctl stop voice-recognizer.service


I then edited the config file to switch to "Ok Google" mode rather than button mode.


nano ~/.config/voice-recognizer.ini


And set the trigger to be

trigger = ok-google


Configuring for Marty


Next I installed the Marty Python library from Robotical


cd ~/voice-recognizer-raspi
source env/bin/activate
pip install martypy


Next up was adding some custom actions to Google AIY. I edited the actions file and included the Marty library.


import datetime
import logging
import subprocess
import martypy

import phue
from rgbxy import Converter


Then in the make_actor function I added my own command.


    # =========================================
    # Makers! Add your own voice commands here.
    # =========================================

    actor.add_keyword(_('raspberry power off'), PowerCommand(say, 'shutdown'))
    actor.add_keyword(_('raspberry reboot'), PowerCommand(say, 'reboot'))

    actor.add_keyword(_('marty walk'), MartyCommand(say, 'walk'))

    return actor


Finally, I added a class to process that command, basing the flow on the Hue light bulb one provided. I used the "calibrate tool" to find my Marty, but I think it's also possible to find robots in code. I'd also recommend enhancing the code to cope with exceptions.


# Control Marty the robot #

class MartyCommand(object):
    """Control Marty the Robot"""

    def __init__(self, say, command):
        self.say = say
        self.command = command
        mymarty = self.connect()
        mymarty.hello()  # Move to zero positions and wink

    def connect(self):
        return martypy.Marty('socket://') # Change IP to match your Marty

    def run(self, voice_command):
        if self.command == "walk":
            self.say("Walking forward 5 paces")
            mymarty = self.connect()
            mymarty.walk(5)  # take five steps forward
        else:
            logging.error("Error identifying Marty's command.")
            self.say("Sorry I didn't identify that command")
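As a sketch of the exception handling I'd recommend, the connect call can be wrapped so an unreachable robot doesn't crash the recogniser. The helper below is hypothetical (not part of the AIY actions file); it takes the connect factory and the say callback as arguments:

```python
import logging

def safe_connect(connect, say):
    # Call the supplied connect() factory (e.g. MartyCommand.connect),
    # apologising via say() instead of crashing if Marty is unreachable.
    try:
        return connect()
    except Exception:
        logging.exception("Could not connect to Marty")
        say("Sorry, I can't reach Marty right now")
        return None
```

In `run` you would then check the result for `None` before issuing any movement commands.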


Finally, run the recogniser manually to test it.




Once you are happy with testing you can restart the recogniser service with


sudo systemctl start voice-recognizer


My biggest issue was that Google did not like my accent, so I ended up adding lots of keywords covering the combinations that the voice recognition had detected. The detection also needs to match exactly, so you end up having to add lots of different commands. An alternative is the approach taken by Marcin Gorecki of using wildcards: you could then say "Marty Walk Forward Five Paces" and have the recognition software parse the number of steps.


Adding wildcards to Google AIY actions on Raspberry Pi – Andy Clark (Workshopshed)


Posted by Andy Clark (Workshopshed) Top Member Aug 12, 2017

The dancing Element14 Blue Guy, which was sent to me by tariq.ahmad, was slightly damaged in transit, so I resurrected him as an experimental crime-fighting cyborg named RoboGuy.


Stay out of trouble!

I've used the Picon Zero board from 4Tronix before when I was working on the Dragon Detector project. It is the same size as the Pi Zero and can drive 6 outputs, 4 inputs and 2 motors via H-Bridges. So it seemed a good option for driving my latest car project.


Prepare the Pi

The Picon Zero runs via I2C so you need to run raspi-config and enable I2C in the settings.


It's also worth updating the apt-get cache as we'll be installing some software.

sudo apt-get update


Install tools

This step is optional but it's good to have some tools for diagnosing what's going on.


sudo apt-get install i2c-tools


This then allows you to see what I2C adapters are available. This will differ on early Pis, but most modern Pis have the I2C bus numbered as "1".

sudo i2cdetect -l


That will return something like:

i2c-1   i2c             bcm2835 I2C adapter                     I2C adapter


Then the scan command will show whether the board is plugged in and detectable.

sudo i2cdetect -r 1


It will return something like the following:


WARNING! This program can confuse your I2C bus, cause data loss and worse!

I will probe file /dev/i2c-1 using read byte commands.

I will probe address range 0x03-0x77.

Continue? [Y/n] Y

     0  1  2  3  4  5  6  7  8  9  a  b  c  d  e  f

00:          -- -- -- -- -- -- -- -- -- -- -- -- --

10: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --

20: -- -- 22 -- -- -- -- -- -- -- -- -- -- -- -- --

30: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --

40: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --

50: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --

60: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --

70: -- -- -- -- -- -- -- --


Install software

First I removed any existing versions of node and npm (the node package manager)

sudo apt-get remove --purge npm node nodejs


Then I used an excellent script from Steven de Salas to install the latest version of node.

wget -O - | bash


After checking the versions, I installed an I2C library so that Node could communicate with the I2C bus.

node -v
npm -v
npm install i2c


Porting the Picon Zero code to Node

I tried out the "jiphy" tool to see if I could automate some of the code migration. That seemed to work OK with the smaller samples, but it choked on the main library, so I ended up porting the code by hand. So far I've just done the version function, as that allows me to see that communication is happening. I've also tried to mimic the Python version so that people can easily work with either. Here are the two versions side by side.


#! /usr/bin/env python

# Test code for 4tronix Picon Zero

import piconzero as pz

vsn = pz.getRevision()
if (vsn[1] == 2):
    print("Board Type:", "Picon Zero")
else:
    print("Board Type:", vsn[1])
print("Firmware version:", vsn[0])
#!/usr/bin/env node

// Test code for 4tronix Picon Zero

var pz = require('./piconzero');

var vsn = pz.getRevision();
if (vsn[1] == 2) {
    console.log("Board Type:", "Picon Zero");
} else {
    console.log("Board Type:", vsn[1]);
}
console.log("Firmware version:", vsn[0]);


So far it's a proof of concept but it seems to be working reliably so I can't see any problems with porting the rest.


Watch this space



Using the I2C Interface – Raspberry Pi Projects

Just back from an excellent talk by Tim Hunkin of "Secret life of machines" fame. He started things off with a description of how a telephone works and demonstrated a loudspeaker made from a crisp packet. He then went on to talk about his latest arcade creation "I-Zombie".



Photo credit: Martin Evans


This latest game combines a classic optical illusion (Pepper's Ghost), carved wooden figures, video screens and some cunning mechanics. The front of the phone has two video screens with animations, instructions and scoring. The control is via a selection of PLCs, which Tim likes because of their reliability, ease of use, and because he can get them cheap on eBay. And of course, given Tim's mischievous nature, there is a twist.


You can see I-Zombie and other devices at novelty-automation-home-page


Tim also has a second exhibit at Southwold Pier

When I was sent the BeagleBone Blue board, my first thought was: what could I build that needed 8 servos? I'd seen some fun examples in the MusicTech challenge, so a music player seemed like a good idea. I bought some small servos, a glockenspiel (which has metal bars, unlike a xylophone's wooden ones) and some wooden balls.


The balls were drilled and mounted on short dowels made from lollipop sticks. These were attached to the servos using rubber bands for two reasons: firstly, it allows me to undershoot when positioning the servos, causing the beater to hit the bar and recoil; secondly, it reduces the risk of the servo stalling if there is a software problem.


I initially tried mounting the servos on a block of wood. This proved troublesome, as it was not possible to adjust the position or angle of the servos, so a bracket was designed to support them. @PiTutorials suggested adding a slot for the cable to my design, but I found that was not necessary because of the way I was mounting the servos.



For the software, I thought I'd try out MQTT as an approach for getting the commands from the UI to the board. This turned out to be very straightforward: I installed Mosca on the BeagleBone and then wrote a client using Paho to communicate with it via WebSockets.

So that I did not need to run all of my code as root, I wrote a "ServoDaemon" that listened for servo positions on a named pipe.
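A minimal sketch of that ServoDaemon idea follows; the pipe path and the "channel,position" line format are my assumptions, not the project's actual protocol:

```python
import os

PIPE_PATH = '/tmp/servodaemon'  # hypothetical location for the named pipe

def parse_command(line):
    # A command line looks like "3,90": move servo channel 3 to position 90.
    channel, position = line.strip().split(',')
    return int(channel), int(position)

def serve(set_servo, pipe_path=PIPE_PATH):
    # Run as root; read commands forever and hand them to the servo driver.
    if not os.path.exists(pipe_path):
        os.mkfifo(pipe_path)
    while True:
        # open() blocks here until a (non-root) client opens the pipe to write
        with open(pipe_path) as pipe:
            for line in pipe:
                channel, position = parse_command(line)
                set_servo(channel, position)
```

The UI process can then drive the servos by writing lines to the pipe (e.g. `echo 3,90 > /tmp/servodaemon`) without needing root itself.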



The power to the servos caused me an issue: the board could not supply enough current for all of them. People have reported that the board won't even boot if you try to power more than 6. On the recommendation of the forum, I decided to power the servos from an external supply. This was done by building a small adapter that mimicked the servo pins. Two sockets slid over the outer pins to connect GND and Signal; the middle pins were connected together and isolated from the board with some hot glue. The glue also held the 3 connectors together.


There's a fault with one of the servos but here's my attempt to make something sound a bit musical with the remaining notes. It's not a recognisable tune!


As I've been experimenting with the BeagleBone Blue, I decided it would make sense to do a bit of coding with Node.js rather than the Python I've used for a few previous projects. I also wanted to see if I could connect it up to MQTT, as I planned to use that for another project.

Node.js is a platform built on Chrome's JavaScript runtime for easily building fast and scalable network applications. Node.js uses an event-driven, non-blocking I/O model that makes it lightweight and efficient, perfect for data-intensive real-time applications that run across distributed devices.

MQTT stands for MQ Telemetry Transport. It is a publish/subscribe, extremely simple and lightweight messaging protocol, designed for constrained devices and low-bandwidth, high-latency or unreliable networks. The design principles are to minimise network bandwidth and device resource requirements whilst also attempting to ensure reliability and some degree of assurance of delivery. These principles also turn out to make the protocol ideal for the emerging “machine-to-machine” (M2M) or “Internet of Things” world of connected devices, and for mobile applications where bandwidth and battery power are at a premium.

My first simple project to combine those two was to hook up to Cayenne, an IoT dashboard from myDevices. The dashboard allows you to add various widgets for displaying values, voltages, temperatures etc. and to log their values over time. There are also triggers that allow you to notify people if certain limits are exceeded, for example if your beer brewing temperature monitor goes above a particular temperature.

The dashboard supports Raspberry Pi and Arduino boards out of the box, and there's a growing list of other supported hardware. However, as my device was not on the list, I had to use the bring-your-own-thing API. This allows you to connect to the MQTT broker directly and publish and subscribe to channels to interact with the dashboard.


When using the API you create a new device in the dashboard, and it will tell you the username, password and client ID, which must then be used with all further connections. It's best to put these into a separate configuration file, but I've just added mine to the top of the script as variables. There's a bit of a chicken-and-egg situation here, as you can't continue building the dashboard until the app has connected at least once.


To run the following, install the mqtt package using NPM (npm install mqtt).


var mqtt = require('mqtt')
var data = 1;

console.log('Connecting to Queue');

var apiVersion = "v1";
var username = '123456789012345678901234567890';
var clientId =  '123456789012345678901234567890';
var password = '123456789012345678901234567890';

// Topic root: v1/<username>/things/<clientId>
var rootTopic = [apiVersion, username, 'things', clientId].join('/');

var client = mqtt.connect('mqtt://', {
    port: 1883,
    clientId: clientId,
    username: username,
    password: password,
    connectTimeout: 5000
});

client.on('connect', function () {
    client.subscribe(rootTopic + '/cmd/+');
    client.publish(rootTopic + '/sys/model', 'Node Test');
    client.publish(rootTopic + '/sys/version', 'V1');
});

client.on('message', function (topic, message) {
    const payload = message.toString().split(',');
    const topics = topic.toString().split('/');
    const seq = payload[0];
    const cmd = payload[1];
    const channel = topics[5];
    console.log(channel + "-" + cmd);
    client.publish(rootTopic + '/data/' + channel, cmd); //Echo value back
    client.publish(rootTopic + '/response', 'ok,' + seq);
});

client.on('close', function (message) {
    client.end(); })

client.on('error', function (message) {
    console.log('error occurred');
    client.end(); })

client.on('disconnect', function () {
    console.log('disconnection occurred');
    client.end(); })

function writeData() {
    var topic = rootTopic + '/data/testChannel';
    var payload = data.toString();
    client.publish(topic, payload);
    data = data + 1;
}

process.on('SIGINT', function () {
    console.log("Shutting down SIGINT (Ctrl-C)");
    client.end();
    process.exit();
});

function loop() {
    writeData();
}

function run() {
    setInterval(loop, 30000);
}

run();


The app above is designed to publish a value to channel "testChannel"; this simply increments each time the loop code runs.


The switch setting is a little more complex. It works by listening for commands ("cmd") and, when it receives a message, it echoes the value back again and acknowledges the command with a "response". It's important to do this, otherwise the switch widget on the dashboard will become unresponsive. You can "unfreeze" it by editing and saving the settings. It's also important to use distinct channels for your switch and other widgets, as that can also affect the behaviour.


Other problems I found were that my firewall blocked connections to the MQTT port by default, and that the system information does not show up on the dashboard. I also felt the dashboard could do with a simple "status" type widget to pass text messages back and forth.


I also found that the responsive website for myDevices made it almost impossible to log in in mobile mode, and the "App" for Android did not support my custom device. The documentation page is one massive HTML page with #tags to identify each section; again, this proved challenging when reading on mobile.


So connecting to MQTT in Node is very easy, and wiring up to the Cayenne dashboard is straightforward (if not foolproof). The experiment stands me in good stead for my project, so for me it was a big success.


Early in 2015 I was asked to give a talk about the Enchanted Cottage project that I completed last year. A group of eager London Arduino enthusiasts learnt about my struggles and successes with the Arduino Yún. One of the attendees was Brian Byrne, who runs the Linuxing in London group; he approached me later in the year to talk about another project. More on that in a bit.




My next challenge for the year was from Emma Bearman: she had also spotted my enchanted project and wondered if I could bring enchantment to her gnomes. A gnomes workshop was arranged in April to show the youth of Leeds how to use a Raspberry Pi 3 to control motors and LEDs using the IoT software Node-RED. The project got a write-up in issue 212 of Linux Format.


As part of my research for the workshop I looked at a bunch of different motor controller boards; one of those, the Picon Zero, was to be used later in the year too.


Dragon Detector

The dragon detector was a "joke" entry into the Qualcomm DragonBoard competition; they called my bluff and sent me a board to work with.



I managed to complete a project in time and although I did not win the grand prize, Qualcomm awarded me a "Developer of the Month" award.



Since the competition completed, the project has been enhanced and I've been asked to talk about it for the London Linuxing group and also for the 96Boards group.




In August, my enthusiasm for the ZX Spectrum won me a Ben Heck ZX Portable. Unfortunately, it was DOA, but with some help from the Element14 members we got it working again and playing games.


My Terminator Eye also won me a Pi3, which will be put to good use running Minecraft and Scratch for my young daughter.


Mini Project

To help with the ZX Portable diagnosis above, a Test Card Generator was made using a Raspberry Pi.


Road Tests

It's been a busy year but I also squeezed in a road test, Elegant and Robust Capacitive Touch Interfaces - Review

I also tested a little board from 4tronix and got it to sleep and blink as well as smile.

Hopefully, I will be starting another Roadtest before the end of the year.

During my earlier experiments with GPIO on this board, I realised that it does not natively support PWM. I had a 4tronix Picon Zero board from my preparation for the Gnomes event and thought that could work with the DragonBoard.


As with the LED and IR detector, I needed to use level shifters to connect up the board, so I added a second set to my breadboard and wired them up to the power pins and first I2C bus on the DragonBoard. This board has three I2C buses: two on the low speed connector and one on the high speed connector. It was a bit of a lash-up, as I had neither the right connectors for the DragonBoard (male 2x20, 2mm pitch) nor for the Picon (male 2x20, 2.54mm pitch).


I checked that it was working with the i2cdetect command. This needs elevated privileges to run.


To list the buses use:


sudo i2cdetect -l


To probe for devices on bus 0 use:


sudo i2cdetect -r 0


This reports a warning but did not cause any issues for the 4tronix board.


WARNING! This program can confuse your I2C bus, cause data loss and worse!
I will probe file /dev/i2c-0 using read byte commands.
I will probe address range 0x03-0x77.
Continue? [Y/n] Y
     0  1  2  3  4  5  6  7  8  9  a  b  c  d  e  f
00:          -- -- -- -- -- -- -- -- -- -- -- -- --
10: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
20: -- -- 22 -- -- -- -- -- -- -- -- -- -- -- -- --
30: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
40: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
50: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
60: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
70: -- -- -- -- -- -- -- --


It correctly detected the board at address 22, so I was happy that it was working. I tried to get the board talking via Libsoc by combining a Libsoc test script with the getRevision function from the piconzero library, but that did not work, reporting a timeout or no data returned. So instead I followed the instructions on the 4tronix blog to install the library and examples.


wget -O


I also installed the python-smbus module, which is a dependency for the piconzero library.


sudo apt-get install python-smbus


Finally, the library needed a minor change at the top, as I was using bus 0 rather than bus 1. Edit and change the line that sets up the bus to:


bus = smbus.SMBus(0)


I tested the version script and that produced a result.


linaro@linaro-alip:~/piconzero$ sudo python
Board Type: Picon Zero
Firmware version: 7


I then added a servo and tested that.


linaro@linaro-alip:~/piconzero$ sudo python
Tests the servos by using the arrow keys to control
Press <space> key to centre
Press Ctrl-C to end
Up 85
Up 80
Up 75
Up 70
Up 65
Up 60
Up 55
Up 50
Up 45
Up 40
Up 35


Here are the results. Although the competition is over and my video presentation is submitted, I'd still like to finish off the project with a 3D-printed knight. If you are interested in my adaptations to the box, you can find those on the Workshopshed blog: Boxing the Dragon - Workshopshed

When my Dragon Detector spots a new dragon, I want it to notify the operator that something has happened. While looking for ways to do this, I discovered the IF THIS THEN THAT "Maker channel"; this allows you to trigger IFTTT flows by calling a URL of the form {event}/with/key/{channel key}


You can also pass in parameters so that you can customise the flow. I added the IF client app to my mobile and configured a simple "recipe" to link the maker event to my notification.


To call this from the Dragonboard I used Pycurl which was installed earlier.


from StringIO import StringIO
import pycurl

def get_key():
    with open('IFTTTKey.conf', 'r') as f:
        key = f.readline()
    return key

def get_notifyURL(numDragons):
    return "" + get_key() + "?value1=" + str(numDragons)

def call_api(url):
    r = StringIO()
    c = pycurl.Curl()
    c.setopt(c.URL, url)
    c.setopt(c.CONNECTTIMEOUT, 10)
    c.setopt(c.TIMEOUT, 60)
    c.setopt(c.WRITEFUNCTION, r.write)
    c.perform()  # actually make the request
    c.close()
    return r.getvalue()

print call_api(get_notifyURL(2))



As mentioned in my previous blog on Dragonboard 410C GPIO, I was planning to use the libsoc library from Jack Mitch. I thought I'd installed this correctly, but when I tried to access it from Python it refused to import. Re-reading the GPIO blog article from 96Boards, I realised I'd not compiled it correctly. So I tried that again and this time was successful.


./configure --enable-python --enable-board=dragonboard410c
sudo make install
sudo ldconfig /usr/local/lib


As the Dragonboard uses 1.8v logic levels, I used a simple MOSFET based level shifter module.



One channel was connected to my HC-SR501 passive IR module, the others to the three pins of an RGB LED.


I had some issues getting the libsoc code to work, which turned out to be because I'd forgotten to "request" the GPIOs. Once I'd added that in, it was straightforward to flash an output on GPIO-B.


from time import sleep
from libsoc import gpio
from libsoc import GPIO
# GPIO.set_debug(True)
gpio_out = gpio.GPIO(GPIO.gpio_id("GPIO-B"), gpio.DIRECTION_OUTPUT)
with gpio.request_gpios(gpio_out):
    while True:
        gpio_out.set_high()  # libsoc's set_high/set_low toggle the pin
        sleep(0.5)
        gpio_out.set_low()
        sleep(0.5)


In the process of investigating my issues I discovered libsoc_zero. If you've used GPIO_Zero on the Pi, it's very similar. You'll see that it just adds a little more abstraction in the form of LEDs and Buttons.


from libsoc_zero.GPIO import LED
from time import sleep
gpio_red = LED('GPIO-B')
while (True):
    gpio_red.on()  # libsoc_zero's LED provides on()/off() like gpio_zero
    sleep(0.5)
    sleep(0.5)


The next example lights the red LED when the sensor detects movement. I noticed whilst doing this that the output from the IR module appears to float when nothing is detected, so I added a pull-down resistor to the output pin and the circuit became a lot more predictable.


from libsoc_zero.GPIO import Button
from libsoc_zero.GPIO import LED
from time import sleep
sensor = Button('GPIO-A')
gpio_red = LED('GPIO-B')
while True:
    if sensor.is_pressed():
        gpio_red.on()
    sleep(0.1)


I've also been looking at boxing up the project. Normally I'd build this on strip board but as I'm a little short on time, I think it will likely stay on the breadboard. The webcam cable is quite long so I'll see if I can safely shorten that.


I've put the code for the project on Github as it's now starting to get interesting.


Testing on something simpler

So that I knew that OpenCV was working correctly I created a simple test script.


import numpy as np
import cv2
print cv2.__version__


I also decided to do my testing and development on a Windows laptop with a lot more power than the Dragonboard. This will be particularly important when it comes to the number crunching of a new classifier.

I had a few false starts getting OpenCV working on my laptop. I downloaded and installed the 3.1 version of OpenCV but then used the wrong version of the Python extensions. Once I'd picked the right version to go with OpenCV 3.1 and Python 2.7 (opencv_python-3.1.0-cp27-cp27m-win32.whl), things started working correctly. I also found that I needed to copy the classifier XML files into my project folder. I modified an example file so that it ran without creating a window.


# A simple test file for OpenCV

import numpy as np
import cv2
import datetime

def detect(img_color):
    face_cascade = cv2.CascadeClassifier('haarcascade_frontalface_default.xml')
    eye_cascade = cv2.CascadeClassifier('haarcascade_eye.xml')
    detected = 0
    gray = cv2.cvtColor(img_color, cv2.COLOR_BGR2GRAY)

    # Now we find the faces in the image. If faces are found, it returns the positions of detected faces as Rect(x,y,w,h).
    # Once we get these locations, we can create a ROI for the face and apply eye detection on this ROI (since eyes are always on the face !!! ).
    faces = face_cascade.detectMultiScale(gray)
    for (x, y, w, h) in faces:
        cv2.rectangle(img_color, (x, y), (x + w, y + h), (255, 0, 0), 2)
        roi_gray = gray[y:y + h, x:x + w]
        roi_color = img_color[y:y + h, x:x + w]
        eyes = eye_cascade.detectMultiScale(roi_gray)
        if len(eyes) > 0:
            detected = detected + 1
        for (ex, ey, ew, eh) in eyes:
            cv2.rectangle(roi_color, (ex, ey), (ex + ew, ey + eh), (0, 255, 0), 2)

    print(str(detected) + " people")
    return detected

img = cv2.imread('TestPicture.JPG')

if detect(img) > 0:
    newFileName = "TestOutput" + str(":", "") + ".jpg"
    cv2.imwrite(newFileName, img)


The code reads a test file and then passes it to the detect function. The detect function creates two classifiers, one for faces and one for eyes, based on the example XML provided with OpenCV. It grayscales the image for faster processing and then detects faces; for each face found it draws a rectangle and then checks for eyes. If it detects eyes on the face then it's a hit and we have found a person.


I tested my classifier with some astronauts and it detected three of them, although interestingly it spotted a face on one of the crumpled sleeves. On my laptop it takes about 1s to load and process the 615x425 pixel file.


The same script on the Dragonboard takes 0.6s to run.


Getting Webcam images

Capturing from the webcam with OpenCV is very simple. Most of the tutorials assume you want to display video output, but these can be simplified to capture a single frame. In this example we capture a single frame from the camera and then save it to disk. In the finished version that frame would be passed on to the classifier.


import cv2
import datetime

cap = cv2.VideoCapture(0)
# Capture single frame
ret, frame =

if ret:
    newFileName = "CaptureOutput" + str(":", "") + ".jpg"
    cv2.imwrite(newFileName, frame)
else:
    print "Capture failed"


When I tested this with the dragonboard I received the following error:


VIDEOIO ERROR: V4L2: Pixel format of incoming image is unsupported by OpenCV
Capture failed


There seemed to be some anecdotal fixes for this, but they did not work for me, so I swapped the script out for one that called the command line "streamer" app instead.


That also caused me some trouble, with the parameters not getting passed to the streamer app correctly, and streamer in turn complaining that the format could not be determined. After some experimentation, the following approach worked and the file could be opened by OpenCV.


import cv2
import datetime
from subprocess import call
capture = "CaptureInput" + str(":", "") + ".jpeg"
cmdline = "streamer -c /dev/video0 -b 32 -f jpeg -o " + capture
call(cmdline, shell=True)
img = cv2.imread(capture)
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
newFileName = "CaptureOutput" + str(":", "") + ".jpg"
cv2.imwrite(newFileName, gray)

Training a classifier

I followed this tutorial to create my classifier XML data, although I had to drop the featureType parameter as that caused it to crash on my system. The training application is very hungry for memory; it used about 2.5GB on my system for image sizes of 50 x 50 and ran one of my CPUs at between 50% and 100%. When I repeated the test with a 100 x 100 image, the memory usage shot up to 8GB, although it should be possible to control this with the buffer size settings. However, I reverted to 50 x 50 and increased the number of cycles of training, as that is apparently what gives the quality results rather than the size. The training programme does seem to crash rather than report sensible errors; it also crashed for me when I put in really large image sizes. After 30 minutes I had my first prototype classifier.


opencv_createsamples.exe -info -num 90 -w 50 -h 50 -vec Dragons.vec

Info file name:
Img file name: (NULL)
Vec file name: Dragons.vec
BG  file name: (NULL)
Num: 90
BG color: 0
BG threshold: 80
Invert: FALSE
Max intensity deviation: 40
Max x angle: 1.1
Max y angle: 1.1
Max z angle: 0.5
Show samples: FALSE
Width: 50
Height: 50
Create training samples from images collection...
Done. Created 90 samples

opencv_traincascade.exe -data data -vec Dragons.vec -bg -numPos 89 -numNeg 765 -numStages 5 -w 50 -h 50

cascadeDirName: data
vecFileName: Dragons.vec
numPos: 89
numNeg: 765
numStages: 5
precalcValBufSize[Mb] : 1024
precalcIdxBufSize[Mb] : 1024
acceptanceRatioBreakValue : -1
stageType: BOOST
featureType: HAAR
sampleWidth: 50
sampleHeight: 50
boostType: GAB
minHitRate: 0.995
maxFalseAlarmRate: 0.5
weightTrimRate: 0.95
maxDepth: 1
maxWeakCount: 100
mode: BASIC

===== TRAINING 0-stage =====
POS count : consumed   89 : 89
NEG count : acceptanceRatio    765 : 1
Precalculation time: 9.468
|  N |    HR   |    FA   |
|   1|        1|        1|
|   2|        1|        1|
|   3|        1|        1|
|   4|        1| 0.330719|
Training until now has taken 0 days 0 hours 12 minutes 54 seconds.

===== TRAINING 1-stage =====
POS count : consumed   89 : 89
NEG count : acceptanceRatio    765 : 0.407783
Precalculation time: 9.62
|  N |    HR   |    FA   |
|   1|        1|        1|
|   2|        1|        1|
|   3|        1| 0.426144|
Training until now has taken 0 days 0 hours 22 minutes 45 seconds.

===== TRAINING 2-stage =====
POS count : consumed   89 : 89
NEG count : acceptanceRatio    765 : 0.26127
Precalculation time: 9.143
|  N |    HR   |    FA   |
|   1|        1|        1|
|   2|        1|        1|
|   3|        1| 0.488889|
Training until now has taken 0 days 0 hours 32 minutes 18 seconds.



My initial testing produced too many false positives, so I found 1,000 more negative images and added them to the training. I also set the maxFalseAlarmRate to a smaller value and set my training going again.


This time the training took a lot longer; nearly 15 hours later the classifier was trained, and it worked a whole lot better than my first version.


I think the next task is to look at hooking some proper hardware to the Dragonboard using the level shifters.


Getting Started with Videos — OpenCV-Python Tutorials 1 documentation

Coding Robin Train Your Own OpenCV Haar Classifier

OpenCV Tutorial: Training your own detector (video)

Tutorial: OpenCV haartraining (Rapid Object Detection With A Cascade of Boosted Classifiers Based on Haar-like Features)…

For my Dragon detector project I was interested in activating some kind of attached "defence" when a dragon was detected. For this I intend to use the GPIO pins. The 96Boards GPIO library only has basic functionality at the moment (digital read and write), so I need to find something else. Specifically, I am looking for interrupt-based inputs so that my IR sensor can trigger the camera to take a photo. I'm also looking to drive a servo or two.


After reading some of the blogs from 96Boards, I thought that the Intel MRAA library would work well as it supported interrupt based inputs and PWM outputs.


Shell control of GPIO

Before getting involved in libraries, I thought it best to test using simple shell commands.


I ran through the example in the low speed I/O application note and found a couple of things. Firstly, and not surprisingly, you need to be root to configure the GPIO; switching to super user made this easier. Secondly, there was a mention of adding 902 to the GPIO number; I did not find this to be the case.


To enable a pin you "export" it and then configure it for output. Sending 1 turns the pin on.


sudo su
echo 36 > /sys/class/gpio/export
echo out > /sys/class/gpio/gpio36/direction

echo 1 > /sys/class/gpio/gpio36/value 


and then to turn it back off again, send a 0.


echo 0 > /sys/class/gpio/gpio36/value 


Once my red LED was working correctly (note that some colours of LED have a forward voltage above 1.8 V, so they don't light up), I thought I'd check the inputs. That's just a case of repeating the export command and reading the value. I used a jumper wire to set the input high as I did not have any switches to hand.


echo 12 > /sys/class/gpio/export
echo in > /sys/class/gpio/gpio12/direction
cat /sys/class/gpio/gpio12/value
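The same sysfs sequence can be scripted rather than typed by hand. Below is a minimal Python sketch of the export/direction/value dance, assuming the standard /sys/class/gpio layout (on real hardware you'd need to run it as root; the base path is parameterised so the helpers can be tried out elsewhere):

```python
import os

GPIO_ROOT = "/sys/class/gpio"  # standard sysfs GPIO path

def export_pin(pin, root=GPIO_ROOT):
    """Ask the kernel to expose gpio<pin> (no-op if already exported)."""
    pin_dir = os.path.join(root, "gpio%d" % pin)
    if not os.path.isdir(pin_dir):
        with open(os.path.join(root, "export"), "w") as f:
            f.write(str(pin))
    return pin_dir

def set_direction(pin, direction, root=GPIO_ROOT):
    """Configure a pin as "in" or "out"."""
    with open(os.path.join(root, "gpio%d" % pin, "direction"), "w") as f:
        f.write(direction)

def write_value(pin, value, root=GPIO_ROOT):
    """Drive an output pin high (truthy) or low (falsy)."""
    with open(os.path.join(root, "gpio%d" % pin, "value"), "w") as f:
        f.write("1" if value else "0")

def read_value(pin, root=GPIO_ROOT):
    """Read the current level of a pin as 0 or 1."""
    with open(os.path.join(root, "gpio%d" % pin, "value")) as f:
        return int(f.read().strip())
```

This is just the shell commands wrapped in functions; it still only gives polled digital I/O, which is why I went looking at MRAA and Libsoc for interrupts and PWM.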


Further investigation into libraries


When I looked into MRAA in more detail, I saw that the PWM functionality was just a wrapper for existing device-level functionality. A simple "ls /sys/class/pwm*" showed that there was no such function on my board.


I cross checked this by looking at the mraa_pincapabilities_t setup for the board.



So, in conclusion, it does not look like PWM is supported by this library/board combination. Libsoc, the other library mentioned in the 96Boards blog, also uses the pwm class, so that does not help either. However, the Libsoc library has a wrapper for I2C, which I think I'll be using to connect up an I/O board that does support PWM, so I'll go for that library.


Installing Libsoc


There are some notes on the 96Boards blog, but those did not seem to be up to date, so I used the instructions from the libsoc GitHub page instead and it compiled successfully.


For my next post I'll switch into Python and hopefully get OpenCV detecting things from the webcam.



Bringing Standardization to Linux GPIO for 96Boards - 96Boards

How do you install 96BoardGPIO, libsoc and libmraa on a new image? - 96Boards



Using GPIOs on low speed connector on DragonBoard™ 410

DragonBoard 410C controlled RC Car


I also found this extra reference article from Qualcomm

Following on from Getting started with Dragonboard 410c

For my project to work, I need the following:


  • Internet connectivity
  • Webcam
  • Python
  • OpenCV
  • GPIO


I decided to tackle the software elements first as those were the areas I was least familiar with.


Trouble with Wifi

Now that I had a working Linux install, my next step was to get the Wifi connected. That was very straightforward, or so I thought. The Linaro/Debian desktop provided a status bar widget where you could select the Wifi network and enter the pass code. I did that and it connected just fine. However, shortly afterwards it dropped out, reporting that it was disconnected. I moved the board slightly and it reconnected.

I tried a range of different locations and even turned off the Rii keyboard in case that was interfering with it. Nothing improved the situation. However, I did have a USB to Ethernet dongle from work, which I plugged in and connected up. That was detected automatically and I was now on the net reliably. If I have time I'll investigate the Wifi further, but for my purposes the Ethernet is just fine.


I installed Bonjour on my desktop and was able to connect to the Dragonboard by name using SSH without any issues.



I was expecting the camera to cause me problems as it was an old one from my Dad's junk box.


I plugged it in and ran lsusb, it correctly detected it as a Logitech QuickCam Express.


To test the camera I installed "streamer" and captured a test picture of a cat. I've yet to play with the settings on streamer, so this is just a low-colour version.
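For anyone repeating this, streamer can grab a single frame from the command line. A sketch of the sort of invocation used (the device node and resolution here are assumptions; check your own setup):

```shell
# Grab one frame from the first video device and save it as a JPEG
# (device path and resolution may differ on your board)
streamer -c /dev/video0 -s 320x240 -o snapshot.jpeg
```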




I wanted to use Python to control my project as it is quick and easy to prototype code like this. I also wanted to communicate with the internet for the purpose of notifying the user, so I installed Pycurl too.

To install Pycurl, I needed to install pip (the Python package manager), so I used the install script to do that. I had to install a few prerequisites too.


sudo apt-get update
sudo apt-get install libcurl4-openssl-dev python-dev
pip install pycurl


To test PyCurl was working, I downloaded a simple web page.


from StringIO import StringIO
import pycurl
import signal, sys

def call_api(url):
    r = StringIO()
    c = pycurl.Curl()
    c.setopt(c.URL, url)
    c.setopt(c.CONNECTTIMEOUT, 10)
    c.setopt(c.TIMEOUT, 60)
    c.setopt(c.WRITEFUNCTION, r.write)
    c.perform()  # actually make the request
    c.close()
    return r.getvalue()

def main():
    r = call_api("")
    print r

# Handle exit and kill from OS
def set_exit_handler(func):
    signal.signal(signal.SIGTERM, func)

def on_exit(sig, func=None):
    print "exit handler triggered"
    sys.exit(0)

# Run program
if __name__ == '__main__':
    set_exit_handler(on_exit)
    main()




After trying to compile software on the Arduino Yún last year, I expected that OpenCV was going to be an issue. However, I followed the instructions to install the dependencies and configured the make file, and that all went smoothly. The compile took a couple of hours; I was not surprised, as OpenCV is quite sophisticated.
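For anyone following along, building OpenCV from source follows the usual CMake out-of-tree pattern; a sketch of the steps (the install prefix and job count are assumptions, not necessarily what I used):

```shell
# Typical OpenCV source build (flags and versions may differ)
cd opencv && mkdir build && cd build
cmake -D CMAKE_BUILD_TYPE=RELEASE -D CMAKE_INSTALL_PREFIX=/usr/local ..
make -j4          # parallel build; reduce the job count if memory is tight
sudo make install
```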


It compiled successfully, but I've yet to test it as it was quite late when it finished.


Next up is GPIO; there are a few different libraries for this, but Intel's MRAA seems to have the most potential for what I'm trying to achieve.




Framegrabbing Applications

Installation in Linux — OpenCV documentation

Once the headless Raspberry Pi had been set-up we got started with Node-Red, sensors, LEDs and Servos.


For the sensors I selected 3 different kinds, light, touch and motion.



The light sensor is based around a simple LM393 comparator with a preset to do the comparison. This is powered from 3.3v and the output is a digital signal. There is a power and output LED so that it can be tested independently.

The touch sensor is based around a TTP223B chip; this is also powered from 3.3v and the output should be digital.

The motion sensor is a passive infra-red detector based on the HC-SR501C; this is powered from 5v but has a 3.3v output. There are two adjustments, one for sensitivity and the other controlling how long the output stays high once an event is triggered.


I did not have a chance to try out the LDR, and my initial experiments with the touch sensor were unsuccessful too. However, the PIR was what was needed for the first project and it worked well, so we stuck with that.


The LEDs were pre-wired with a series resistor and sockets to connect to the GPIO on the Pi, this worked really well.


The Servos were cheap micro-servos, I also brought along a few servo testers which we used to make sure the servos were working correctly.


Cat Scarer

Charlie's project was to stop the neighbour's cats from spending time on his lawn. One of our main challenges was getting the voice recording off an iPad and onto the Pi. We eventually got help from the sound guy, re-recorded it onto an SDCard, and used WinSCP to copy the file onto the Pi.


This project used a PIR sensor and a powered headphone speaker connected to the 3.5mm audio jack on the Pi.


The flow works as follows:

  • The sensor goes high when motion is detected.
  • To avoid multiple triggers, we limit the flow to one message every 2s.
  • This is then wired to the LED so we can see that the sensor has triggered an event.
  • A random node creates the values 1 or 2, and this passes to a switch to select one of the two outputs.
  • Finally, the EXEC nodes run the command line "aplay" with one of two values for a police siren or a recorded message.


Trouble with servos

We tried to get the servos to work, but they simply refused to co-operate. I've since followed up on this: we should have used the GPIO 2 pin as that has hardware PWM, and there's also a limited range of values that can be used. If we'd had time, I also had a Picon Zero which we could have used to control lots of servos.
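The limited range of values comes from the servo's pulse widths: a hobby servo expects roughly 1 to 2 ms pulses in a 20 ms (50 Hz) frame, so only about 5 to 10% duty cycle does anything useful. A small sketch of that mapping (the 1-2 ms range is an assumption; many servos actually use 0.5-2.5 ms, so adjust for your hardware):

```python
def angle_to_duty(angle, freq_hz=50, min_ms=1.0, max_ms=2.0):
    """Map a servo angle (0-180 degrees) to a PWM duty cycle percentage.

    Assumes a typical hobby servo driven with min_ms..max_ms pulses
    at freq_hz; tweak min_ms/max_ms to suit the actual servo.
    """
    if not 0 <= angle <= 180:
        raise ValueError("angle must be between 0 and 180")
    period_ms = 1000.0 / freq_hz                         # 20 ms at 50 Hz
    pulse_ms = min_ms + (max_ms - min_ms) * angle / 180.0
    return pulse_ms / period_ms * 100.0                  # duty in percent
```

With RPi.GPIO you would feed the result into something like GPIO.PWM(pin, 50) and ChangeDutyCycle(), but software PWM tends to make servos jitter, which is one reason a hardware PWM pin or a controller board like the Picon Zero behaves better.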



Mouse Toggle

David used a Bluetooth mouse to remote-control his Pi's LED. This was one area where we could not work out how to complete the flow without code. Each mouse click turned the LED on or off.


Toggle Code

var state = context.get('state') || 0;
if (state == 0) {
    state = 1;
} else {
    state = 0;
}
context.set('state', state);
msg.payload = state;
return msg;




Node Red

Playing Audio on the Pi

After my slightly jovial entry to the DragonBoard competition was accepted, I was sent a board to do my project with. I was also sent a US power supply so had to get an adaptor for it to work (luckily the supply was rated for 240v).



For setup, I was relegated to the bedroom, as that was where the only TV with an HDMI connector was located.

I followed the quick start guide and booted into Android; there are some nice animated graphics when it boots. You then have to work out how to "swipe" the screen using just a track pad.


This seemed to run OK, but there was not much I could do with Android, and then I had issues connecting to the Wifi: my network appeared for long enough for me to enter the key, but then was replaced by my neighbour's BT Home Hub.


So I switched to Linux, which can be installed from the SDCard.


To boot from the SDCard you flip a switch on the back of the board. The DIP switch is minuscule! I used my smallest screwdriver to set it. I also seemed to be getting the wrong images; the key is to get one with "SDCard install" in the name, and if you follow the link in the instructions it will take you to the right download. Even after this I had trouble with the first card that I'd imaged.


Finally I managed to get it all plugged correctly (my HDMI connector was loose) and the flashing could begin.


That process went smoothly, and I took out the SDCard and rebooted into the Linux desktop.


Next challenge: connecting to the Wifi and remote access via SSH.