
Andy Clark's Blog


RoboGuy

Posted by Andy Clark (Workshopshed) Top Member Aug 12, 2017

The dancing Element14 Blue Guy, which was sent to me by tariq.ahmad, was slightly damaged in transit, so I resurrected him as an experimental crime-fighting cyborg named RoboGuy.

 

Stay out of trouble!

I've used the Picon Zero board from 4Tronix before when I was working on the Dragon Detector project. It is the same size as the Pi Zero and can drive 6 outputs, 4 inputs and 2 motors via H-Bridges. So it seemed a good option for driving my latest car project.

PiconZero.jpg

Prepare the Pi

The Picon Zero is controlled over I2C, so you need to run raspi-config and enable I2C in the settings.

 

It's also worth updating the apt-get cache as we'll be installing some software.

sudo apt-get update

 

Install tools

This step is optional but it's good to have some tools for diagnosing what's going on.

 

sudo apt-get install i2c-tools

 

This then allows you to see which I2C adapters are available. The numbering will be different on early Pis, but most modern Pis have the I2C bus numbered as "1".

sudo i2cdetect -l

 

That will return something like:

i2c-1   i2c             bcm2835 I2C adapter                     I2C adapter

 

The scan command will then show whether the board is plugged in and responding.

sudo i2cdetect -r 1

 

It will return something like the following:

 

WARNING! This program can confuse your I2C bus, cause data loss and worse!

I will probe file /dev/i2c-1 using read byte commands.

I will probe address range 0x03-0x77.

Continue? [Y/n] Y

     0  1  2  3  4  5  6  7  8  9  a  b  c  d  e  f

00:          -- -- -- -- -- -- -- -- -- -- -- -- --

10: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --

20: -- -- 22 -- -- -- -- -- -- -- -- -- -- -- -- --

30: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --

40: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --

50: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --

60: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --

70: -- -- -- -- -- -- -- --

 

Install software

First I removed any existing versions of Node and npm (the Node package manager).

sudo apt-get remove --purge npm node nodejs

 

Then I used an excellent script from Steven de Salas to install the latest version of node.

wget -O - https://raw.githubusercontent.com/sdesalas/node-pi-zero/master/install-node-v.last.sh | bash

 

After checking the versions I installed an I2C library so that Node could communicate with the I2C bus.

node -v
npm -v
npm install i2c

 

Porting the Picon Zero code to Node

I tried out the "jiphy" tool to see if I could automate some of the code migration. That seemed to work ok with the smaller samples but it choked on the main library, so I ended up porting the code by hand. So far I've just done the version function as that allows me to see that communication is happening. I've also tried to mimic the Python version so that people can easily work with either. Here are the two versions side by side.

 

Python:
#! /usr/bin/env python

# GNU GPL V3
# Test code for 4tronix Picon Zero

import piconzero as pz

pz.init()
vsn = pz.getRevision()
if (vsn[1] == 2):
    print("Board Type:", "Picon Zero")
else:
    print("Board Type:", vsn[1])
print("Firmware version:", vsn[0])
print()
pz.cleanup()
NodeJS:

#!/usr/bin/env node

// GNU GPL V3
// Test code for 4tronix Picon Zero

var pz = require('./piconzero');

pz.init();
var vsn = pz.getRevision();
if  (vsn[1] == 2) {
    console.log("Board Type:", "Picon Zero")
}
else { 
    console.log("Board Type:", vsn[1])
}
console.log("Firmware version:", vsn[0])
console.log();
pz.cleanup();

 

So far it's a proof of concept but it seems to be working reliably so I can't see any problems with porting the rest.

 

Watch this space https://github.com/Workshopshed/PiconZero/tree/master/NodeJS

 

Reference

https://blog.miniarray.com/installing-node-js-on-a-raspberry-pi-zero-21a1522db2bb

https://github.com/sdesalas/node-pi-zero

https://github.com/timothycrosley/jiphy/blob/develop/README.md

Using the I2C Interface – Raspberry Pi Projects

https://www.npmjs.com/package/i2c


i-Zombie

Posted by Andy Clark (Workshopshed) Top Member Jun 22, 2017

Just back from an excellent talk by Tim Hunkin of "Secret life of machines" fame. He started things off with a description of how a telephone works and demonstrated a loudspeaker made from a crisp packet. He then went on to talk about his latest arcade creation "I-Zombie".

 

IZombieTalk.jpg iZombieTalk2.jpg

Photo credit: Martin Evans

 

This latest game combines a classic optical illusion (Pepper's Ghost), carved wooden figures, video screens and some cunning mechanics. The front of the phone has two video screens with animations, instructions and scoring. The control is via a selection of PLCs, which Tim likes because of their reliability, ease of use and the fact that he can get them cheaply on eBay. And of course, given Tim's mischievous nature, there is a twist.

 

You can see I-Zombie and other devices at novelty-automation-home-page

 

Tim also has a second exhibit at Southwold Pier.

When I was sent the BeagleBone Blue board my first thought was what could I build that needed 8 servos. I'd seen some fun examples in the MusicTech challenge so a music player seemed like a good idea. I bought some small servos, a glockenspiel (which has metal bars vs a xylophone which has wooden bars) and some wooden balls.

Wiring.jpg

The balls were drilled and mounted on some short dowels made from lollipop sticks. These were attached to the servos using rubber bands, for two reasons: firstly it allows me to undershoot when positioning the servos, causing the beater to hit the bar and recoil; secondly it reduces the risk of the servo stalling if there is a software problem.

 

I initially tried mounting the servos on a block of wood. This proved troublesome and it was not possible to adjust the position or angle of the servos, so a bracket was designed to support them. @PiTutorials suggested adding a slot for the cable into my design but I found that was not necessary because of the way I was mounting the servos.

Bracket.png

 

For the software, I thought I'd try out MQTT as an approach for getting the commands from the UI to the board. This turned out to be very straightforward: I installed Mosca on the BeagleBone and then wrote a client using Paho to communicate with it via WebSockets.

So that I did not need to run all of my code as root, I wrote a "ServoDaemon" that listened for servo positions on a named pipe.
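
The daemon itself is in the repository linked below; as a rough illustration of the named-pipe pattern, here's a minimal Python sketch. The pipe path and the "channel,position" message format are made up for the example, and the sketch only prints rather than driving any servo hardware.

import os

PIPE_PATH = "/tmp/servo_pipe"  # hypothetical path for this example

# Create the named pipe if it does not already exist
if not os.path.exists(PIPE_PATH):
    os.mkfifo(PIPE_PATH)

while True:
    # Opening a FIFO for reading blocks until a writer connects;
    # when the writer closes, the for loop ends and we reopen.
    with open(PIPE_PATH) as pipe:
        for line in pipe:
            line = line.strip()
            if not line:
                continue
            channel, position = line.split(",")
            # A real daemon would command the servo hardware here
            print("servo %s -> %s" % (channel, position))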

SystemDiagram.png

https://github.com/Workshopshed/musicController

 

The power to the servos caused me an issue: the board could not supply enough current for all of the servos. People have reported that the board won't even boot if you try to power more than 6. On the recommendation of the forum, I decided to power the servos from an external supply. This was done by building a small adapter that mimicked the servo pins. Two sockets slid over the outer pins to connect GND and Signal, and the middle pins were connected together and isolated from the board with some hot glue. The glue also held the 3 connectors together.

IMG_20170407_133445.jpg

There's a fault with one of the servos but here's my attempt to make something sound a bit musical with the remaining notes. It's not a recognisable tune!

 

As I've been experimenting with the BeagleBone Blue I decided it would make sense to do a bit of coding with Node.js rather than Python, which I've used for a few previous projects. I also wanted to see if I could connect it up to MQTT as I planned to use that for another project.

Node.js is a platform built on Chrome's JavaScript runtime for easily building fast and scalable network applications. Node.js uses an event-driven, non-blocking I/O model that makes it lightweight and efficient, perfect for data-intensive real-time applications that run across distributed devices.

MQTT stands for MQ Telemetry Transport. It is a publish/subscribe, extremely simple and lightweight messaging protocol, designed for constrained devices and low-bandwidth, high-latency or unreliable networks. The design principles are to minimise network bandwidth and device resource requirements whilst also attempting to ensure reliability and some degree of assurance of delivery. These principles also turn out to make the protocol ideal of the emerging “machine-to-machine” (M2M) or “Internet of Things” world of connected devices, and for mobile applications where bandwidth and battery power are at a premium.

My first simple project to combine those two was to hook up to Cayenne, an IoT dashboard from myDevices. The dashboard allows you to add various widgets for displaying values, voltages, temperatures etc and to log their values over time. There are also triggers that allow you to notify people if certain limits are exceeded, for example if your beer brewing temperature monitor goes above a particular temperature.

The dashboard supports Raspberry Pi and Arduino boards out of the box and there's a growing list of other supported hardware. However, as my device was not on the list, I'd have to use the bring-your-own-thing-api; this allows you to connect to the MQTT broker directly and publish and subscribe to channels to interact with the dashboard.

Dashboard

When using the API you create a new device in the dashboard and it will let you know the username, password and clientID which then must be used with all further connections. It's best to put these into a separate configuration file but I've just added mine to the top of the script as variables. Here's where there's a bit of a chicken and egg situation as you can't continue building the dashboard until the app has connected at least once.

 

To run the following you can install the MQTT client library using npm.

 

var mqtt = require('mqtt')
var data = 1;

console.log('Connecting to Queue');

var apiVersion = "v1";
var username = '123456789012345678901234567890';
var clientId =  '123456789012345678901234567890';
var password = '123456789012345678901234567890';

var rootTopic = [
  apiVersion,
  username,
  'things',
  clientId
].join('/');

var client = mqtt.connect('mqtt://mqtt.mydevices.com',{
    port: 1883,
    clientId: clientId,
    username: username,
    password: password,
    connectTimeout: 5000
});

client.on('connect', function () {
    console.log('Connected');
    client.subscribe(rootTopic + '/cmd/+');
    client.publish(rootTopic + '/sys/model', 'Node Test'); 
    client.publish(rootTopic + '/sys/version', 'V1'); 
})

client.on('message', function (topic, message) {
    const payload = message.toString().split(',');
    const topics = topic.toString().split('/');
    const seq = payload[0];
    const cmd = payload[1];
    const channel = topics[5];
    console.log(channel + "-" + cmd);
    client.publish(rootTopic + '/data/' + channel, cmd); //Echo value back
    client.publish(rootTopic + '/response', 'ok,'+seq);
})

client.on('close', function (message) { 
    console.log('closed');
    console.log(message);
    client.end(); })

client.on('error', function (message) {
    console.log('error occurred');
    console.log(message);
    client.end(); })
client.on('disconnect', function () {
    console.log('disconnection occurred');
    client.end(); })

function writeData() {
    var topic = rootTopic + '/data/testChannel';
    var payload = data.toString();
    client.publish(topic, payload);
    data = data + 1;
}

process.on('SIGINT', function () {
    console.log("Shutting down SIGINT (Ctrl-C)");
    client.end();
    process.exit();
})

function loop() {
    console.log("Heartbeat...");
    writeData();
};

function run() {
    console.log('Running');
    setInterval(loop, 30000);
};

run();

 

The app above is designed to publish a value to channel "testChannel"; this value simply increments each time the loop code runs.

CayenneValue.png

The switch setting is a little more complex. It works by listening for commands ("cmd") and when it receives a message it echoes the value back again and acknowledges the command with a "response". It's important to do this, otherwise the switch widget on the dashboard will become unresponsive. You can "unfreeze" it by editing and saving the settings. It's also important to use distinct channels for your switch and other widgets as that can also affect the behaviour.

CayeneLight.png

Other problems I found were that my firewall blocked connections to the MQTT port by default, the system information does not show up on the dashboard, and I felt the dashboard could do with a simple "status" type widget to pass text messages back and forth.

 

I also found that the responsive website for MyDevices made it almost impossible to log in in mobile mode and the "App" for Android did not support my custom device. The documentation page is a single massive HTML page with #tags to identify each section. Again this proved challenging when reading on mobile.

 

So connecting to MQTT in Node is very easy and wiring up to the Cayenne dashboard is straightforward (if not foolproof). The experiment puts me in good stead for my project so for me it was a big success.

Talks

Early in 2015 I was asked to give a talk about the Enchanted Cottage project that I completed last year. A group of eager London Arduino enthusiasts learnt about my struggles and successes with the Arduino Yún. One of the attendees was Brian Byrne, who runs the Linuxing in London group; he approached me later in the year to talk about another project, more on that in a bit.

 

Workshop

2016-06-08 17.55.24.png

My next challenge for the year was from Emma Bearman, who had also spotted my enchanted project and wondered if I could bring enchantment to her gnomes. A gnomes workshop was arranged in April to show the youth of Leeds how to use a Raspberry Pi 3 to control motors and LEDs using the IoT software Node-RED. The project got a write up in issue 212 of Linux Format.

 

As part of my research for the workshop I looked at a bunch of different motor controller boards; one of those, the Picon Zero, was to be used later in the year too.

 

Dragon Detector

The dragon detector was a "joke" entry into the Qualcomm DragonBoard competition; they called my bluff and sent me a board to work with.

Entry.png

https://www.element14.com/community/people/Workshopshed/blog/tags#/?tags=dragonboard%20410c

 

I managed to complete a project in time and although I did not win the grand prize, Qualcomm awarded me a "Developer of the Month" award.

 

 

Since the competition ended, the project has been enhanced and I've been asked to talk about it for the London Linuxing group and also for the 96Boards group.

Talk.png

 

Wins

In August, my enthusiasm for the Zx Spectrum won me a Ben Heck Zx Portable. Unfortunately, it was DOA but with some help from the Element14 members we got it back working again and playing games.

TShirt+and+ZxPortable.jpg

My Terminator Eye also won me a Pi3 which will be put to good use running Minecraft and Scratch for my young daughter.

 

Mini Project

To help with the above ZxPortable diagnosis a Test Card Generator was made using a Raspberry Pi.

 

Road Tests

It's been a busy year but I also squeezed in a road test, Elegant and Robust Capacitive Touch Interfaces - Review

I also tested a little board from 4tronix and got it to sleep and blink as well as smile.

Hopefully, I will be starting another Roadtest before the end of the year.

During my earlier experiments with GPIO on this board, I realised that it does not natively support PWM. I had a 4tronix Picon Zero board from my preparation for the Gnomes event and thought that could work with the DragonBoard.

PiconZero.jpg LevelShifter.png

As with the LED and IR detector, I needed to use level shifters to connect up the board, so I added a second set to my breadboard and wired that up to the power and first I2C bus on the DragonBoard. This board has three I2C buses, two on the low speed connector and one on the high speed connector. It was a bit of a lash-up as I had neither the right connectors for the DragonBoard (male 2x20x2mm) nor the Picon (male 2x20x2.54mm).

I2C.png

I checked that it was working with the i2cdetect command. This needs elevated privileges to run.

 

To list the buses use:

 

sudo i2cdetect -l

 

To probe for devices on bus 0 use:

 

sudo i2cdetect -r 0

 

This reports a warning, but it did not cause any issue for the 4tronix board.

 

WARNING! This program can confuse your I2C bus, cause data loss and worse!
I will probe file /dev/i2c-0 using read byte commands.
I will probe address range 0x03-0x77.
Continue? [Y/n] Y
     0  1  2  3  4  5  6  7  8  9  a  b  c  d  e  f
00:          -- -- -- -- -- -- -- -- -- -- -- -- --
10: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
20: -- -- 22 -- -- -- -- -- -- -- -- -- -- -- -- --
30: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
40: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
50: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
60: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
70: -- -- -- -- -- -- -- --

 

It correctly detected the board at address 0x22 so I was happy that it was working. I tried to talk to the board with Libsoc by combining a Libsoc test script with the getRevision function from the piconzero library, but that did not work, reporting a timeout or no data returned. So instead I followed the instructions on the 4tronix blog to install the library and examples.

 

wget http://4tronix.co.uk/piconz.sh -O piconz.sh
bash piconz.sh

 

I also installed the python-smbus module which is a dependency for the piconzero library.

 

sudo apt-get install python-smbus

 

Finally the library needed a minor change at the top as I was using bus 0 not bus 1. Edit piconzero.py and change the line that sets up the bus, as follows.

 

bus = smbus.SMBus(0)   # changed from SMBus(1)

 

I tested the version script and that produced a result.

 

linaro@linaro-alip:~/piconzero$ sudo python version.py
Board Type: Picon Zero
Firmware version: 7
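
Under the hood that version check amounts to a single I2C word read. Here's a minimal python-smbus sketch of the same thing; the register number and the way the result is split are my reading of the piconzero library, so treat them as illustrative rather than definitive.

import smbus

PICON_ADDR = 0x22      # address reported by i2cdetect above
bus = smbus.SMBus(0)   # bus 0 for this DragonBoard wiring

# Assumption: the board returns its revision as a 16-bit word from register 0,
# which the piconzero library then splits into firmware and board type bytes.
revision = bus.read_word_data(PICON_ADDR, 0)
print("Raw revision word: 0x%04x" % revision)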

 

I then added a servo and tested that.

 

linaro@linaro-alip:~/piconzero$ sudo python servoTest.py
Tests the servos by using the arrow keys to control
Press <space> key to centre
Press Ctrl-C to end
Up 85
Up 80
Up 75
Up 70
Up 65
Up 60
Up 55
Up 50
Up 45
Up 40
Up 35

 

Here are the results. Although the competition is over and my video presentation is submitted, I'd still like to finish off the project with a 3D printed knight. If you are interested in my adaptations to the box, you can find those on the Workshopshed blog. Boxing the Dragon - Workshopshed

When my Dragon Detector spots a new dragon I want it to notify the operator that something has happened. When looking for ways to do this, I discovered the IF THIS THEN THAT "Maker channel"; this allows you to trigger IFTTT flows by calling a URL of the following form:

 

https://maker.ifttt.com/trigger/{event}/with/key/{channel key}

 

You can also pass in parameters so that you can customise the flow. I added the IF client app to my mobile and configured a simple "recipe" to link the maker event to my notification.

Screenshot_20160523-055021.png

To call this from the Dragonboard I used Pycurl which was installed earlier.

 

from StringIO import StringIO
import pycurl


def get_key():
    # Read the IFTTT key from a config file, stripping any trailing newline
    with open('IFTTTKey.conf', 'r') as f:
        key = f.readline().strip()
    return key


def get_notifyURL(numDragons):
    return "https://maker.ifttt.com/trigger/DragonDetected/with/key/" + get_key() + "?value1=" + str(numDragons)


def call_api(url):
    r = StringIO()
    c = pycurl.Curl()
    c.setopt(c.URL, url)
    c.setopt(c.CONNECTTIMEOUT, 10)
    c.setopt(c.TIMEOUT, 60)
    c.setopt(c.WRITEFUNCTION, r.write)
    c.perform()
    c.close()
    return r.getvalue()


print call_api(get_notifyURL(2))

 

Screenshot_20160523-222019.png

As mentioned in my previous blog on Dragonboard 410C GPIO, I was planning to use the libsoc library from Jack Mitch. I thought I'd installed this correctly but when I tried to access it from Python it was refusing to import the library. Re-reading the GPIO blog article from 96Boards, I realised I'd not compiled it correctly. So I tried that again and this time was successful.

 

./configure --enable-python --enable-board=dragonboard410c
make
sudo make install
sudo ldconfig /usr/local/lib

 

As the Dragonboard uses 1.8v logic levels, I used a simple MOSFET based level shifter module.

LevelShifterSchematic.png LevelShifter.png hc-sr501.png

 

One channel was connected to my HC-SR501 passive IR module, the other channels to the three pins of an RGB LED.

 

I had some issues getting the libsoc code to work, which turned out to be because I'd forgotten to "request" the GPIOs. Once I'd added that in, it was straightforward to flash an output on GPIO-B.

 

from time import sleep
from libsoc import gpio
from libsoc import GPIO
# GPIO.set_debug(True)
gpio_out = gpio.GPIO(GPIO.gpio_id("GPIO-B"), gpio.DIRECTION_OUTPUT)
with gpio.request_gpios(gpio_out):
    while True:
       gpio_out.set_high()
       sleep(1)
       gpio_out.set_low()
       sleep(1)

 

In the process of investigating my issues I discovered libsoc_zero. If you've used GPIO_Zero on the Pi, it's very similar. You'll see that it just adds a little more abstraction in the form of LEDs and Buttons.

 

from libsoc_zero.GPIO import LED
from time import sleep
gpio_red = LED('GPIO-B')
while (True): 
    gpio_red.on()
    sleep(0.5)
    gpio_red.off()
    sleep(0.5)

 

The next example lights the red LED when the sensor detects movement. Whilst doing this I noticed that the output from the IR module appears to float when there is nothing detected, so I added a pull-down resistor to the output pin and the circuit became a lot more predictable.

 

from libsoc_zero.GPIO import Button
from libsoc_zero.GPIO import LED
from time import sleep
sensor = Button('GPIO-A')
gpio_red = LED('GPIO-B')
gpio_red.off()
sleep(2)
while True:
    if sensor.is_pressed():
        gpio_red.on()
        sleep(0.5)
    else:
        gpio_red.off()
        sleep(0.5)

 

I've also been looking at boxing up the project. Normally I'd build this on strip board but as I'm a little short on time, I think it will likely stay on the breadboard. The webcam cable is quite long so I'll see if I can safely shorten that.

Boxing.jpg

I've put the code for the project on Github as it's now starting to get interesting.

 

Testing on something simpler

So that I knew that OpenCV was working correctly I created a simple test script.

 

import numpy as np
import cv2
print cv2.__version__

 

I also decided to do my testing and development on a Windows laptop with a lot more power than the Dragonboard. This will be particularly important when it comes to the number crunching of a new classifier.

I had a few false starts on getting OpenCV working on my laptop. I downloaded and installed the 3.1 version of OpenCV but then used the wrong version of the Python extensions, once I'd picked the right version to go with OpenCV3.1 and Python 2.7 (opencv_python-3.1.0-cp27-cp27m-win32.whl) things started working correctly. I also found that I needed to copy the classifier XML files into my project folder. I modified an example file so that it ran without creating a window.

 

# A simple test file for OpenCV

import numpy as np
import cv2
import datetime


def detect(img_color):
    face_cascade = cv2.CascadeClassifier('haarcascade_frontalface_default.xml')
    eye_cascade = cv2.CascadeClassifier('haarcascade_eye.xml')
    detected = 0
    gray = cv2.cvtColor(img_color, cv2.COLOR_BGR2GRAY)

    # Now we find the faces in the image. If faces are found, it returns the positions of detected faces as Rect(x,y,w,h).
    # Once we get these locations, we can create a ROI for the face and apply eye detection on this ROI (since eyes are always on the face !!! ).
    faces = face_cascade.detectMultiScale(gray)
    for (x, y, w, h) in faces:
        cv2.rectangle(img_color, (x, y), (x + w, y + h), (255, 0, 0), 2)
        roi_gray = gray[y:y + h, x:x + w]
        roi_color = img_color[y:y + h, x:x + w]
        eyes = eye_cascade.detectMultiScale(roi_gray)
        if len(eyes) > 0:
            detected = detected + 1
        for (ex, ey, ew, eh) in eyes:
            cv2.rectangle(roi_color, (ex, ey), (ex + ew, ey + eh), (0, 255, 0), 2)

    print(str(detected) + " people")
    return detected

img = cv2.imread('TestPicture.JPG')

if detect(img) > 0:
    newFileName = "TestOutput" + datetime.datetime.now().isoformat().replace(":","") + ".jpg"
    cv2.imwrite(newFileName, img);

 

The code reads a test file and then passes that to the detect function. The detect function creates two classifiers, one for faces and one for eyes, based on the example XML provided with OpenCV. It grayscales the image for faster processing and then detects faces; for each face found it draws a rectangle and then checks for eyes. If it detects eyes on the face then it's a hit and we have found a person.

 

I tested my classifier with some astronauts and it detected three of them, although interestingly it also spotted a face on one of the crumpled sleeves. On my laptop it takes about 1s to load and process the 615x425 pixel file.

TestOutput2016-05-18T110903.651000.jpg

The same script on the Dragonboard takes 0.6s to run.

 

Getting Webcam images

Capturing from the webcam with OpenCV is very simple. Most of the tutorials assume you want to display video output but these can be simplified to capture a single frame. In this example we capture a single frame from the camera and then save it to disk. In the finished version that frame would be passed on to the classifier.

 

import cv2
import datetime

cap = cv2.VideoCapture(0)
# Capture single frame
ret, frame = cap.read()
cap.release()

if ret:
    newFileName = "CaptureOutput" + datetime.datetime.now().isoformat().replace(":", "") + ".jpg"
    cv2.imwrite(newFileName, frame)
else:
    print "Capture failed"

 

When I tested this with the dragonboard I received the following error:

 

VIDEOIO ERROR: V4L2: Pixel format of incoming image is unsupported by OpenCV
Capture failed

 

There seemed to be some anecdotal fixes for this but they did not work for me, so I swapped the script out for one that called the command line "streamer" app instead.

 

That also caused me some trouble, with the parameters not getting passed to the streamer app correctly and streamer in turn complaining that the format could not be determined. After some experimentation, the following approach worked and the file could be opened by OpenCV.

 

import cv2
import datetime
from subprocess import call
capture = "CaptureInput" + datetime.datetime.now().isoformat().replace(":", "") + ".jpeg"
cmdline = "streamer -c /dev/video0 -b 32 -f jpeg -o " + capture
call(cmdline, shell=True)
img = cv2.imread(capture)
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
newFileName = "CaptureOutput" + datetime.datetime.now().isoformat().replace(":", "") + ".jpg"
cv2.imwrite(newFileName, gray)

Training a classifier

I followed this tutorial to create my classifier XML data, although I had to drop the featureType parameter as that caused it to crash on my system. The training application is very hungry for memory and used about 2.5GB on my system for image sizes of 50 x 50. It ran one of my CPUs at between 50% and 100%. When I repeated the test with a 100 x 100 image the memory usage shot up to 8GB, although it should be possible to control this with the buffer size settings. However, I reverted to the 50 x 50 and increased the number of cycles of training, as that is apparently what gives quality results rather than the size. The training programme does seem to crash rather than report sensible errors; it also crashed for me when I put in really large image sizes. After 30 minutes I had my first prototype classifier.

 

opencv_createsamples.exe -info Dragons.info -num 90 -w 50 -h 50 -vec Dragons.vec

Info file name: Dragons.info
Img file name: (NULL)
Vec file name: Dragons.vec
BG  file name: (NULL)
Num: 90
BG color: 0
BG threshold: 80
Invert: FALSE
Max intensity deviation: 40
Max x angle: 1.1
Max y angle: 1.1
Max z angle: 0.5
Show samples: FALSE
Width: 50
Height: 50
Create training samples from images collection...
Done. Created 90 samples

opencv_traincascade.exe -data data -vec Dragons.vec -bg Negative.info -numPos 89 -numNeg 765 -numStages 5 -w 50 -h 50

PARAMETERS:
cascadeDirName: data
vecFileName: Dragons.vec
bgFileName: Negative.info
numPos: 89
numNeg: 765
numStages: 5
precalcValBufSize[Mb] : 1024
precalcIdxBufSize[Mb] : 1024
acceptanceRatioBreakValue : -1
stageType: BOOST
featureType: HAAR
sampleWidth: 50
sampleHeight: 50
boostType: GAB
minHitRate: 0.995
maxFalseAlarmRate: 0.5
weightTrimRate: 0.95
maxDepth: 1
maxWeakCount: 100
mode: BASIC

===== TRAINING 0-stage =====
<BEGIN
POS count : consumed   89 : 89
NEG count : acceptanceRatio    765 : 1
Precalculation time: 9.468
+----+---------+---------+
|  N |    HR   |    FA   |
+----+---------+---------+
|   1|        1|        1|
+----+---------+---------+
|   2|        1|        1|
+----+---------+---------+
|   3|        1|        1|
+----+---------+---------+
|   4|        1| 0.330719|
+----+---------+---------+
END>
Training until now has taken 0 days 0 hours 12 minutes 54 seconds.

===== TRAINING 1-stage =====
<BEGIN
POS count : consumed   89 : 89
NEG count : acceptanceRatio    765 : 0.407783
Precalculation time: 9.62
+----+---------+---------+
|  N |    HR   |    FA   |
+----+---------+---------+
|   1|        1|        1|
+----+---------+---------+
|   2|        1|        1|
+----+---------+---------+
|   3|        1| 0.426144|
+----+---------+---------+
END>
Training until now has taken 0 days 0 hours 22 minutes 45 seconds.

===== TRAINING 2-stage =====
<BEGIN
POS count : consumed   89 : 89
NEG count : acceptanceRatio    765 : 0.26127
Precalculation time: 9.143
+----+---------+---------+
|  N |    HR   |    FA   |
+----+---------+---------+
|   1|        1|        1|
+----+---------+---------+
|   2|        1|        1|
+----+---------+---------+
|   3|        1| 0.488889|
+----+---------+---------+
END>
Training until now has taken 0 days 0 hours 32 minutes 18 seconds.

 

Testing

My initial testing produced too many false positives, so I found 1,000 more negative images and added them to the training; I also set the maxFalseAlarmRate to a smaller value and set my training going again.

TestOutput2016-05-20T082730.246000.jpg

This time the training took a lot longer; nearly 15 hours later the classifier was trained and it worked a whole lot better than my first version.

TestOutput2016-05-20T233252.626000.jpg
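
To try the new cascade out, it can be loaded in place of the stock face classifier in the earlier test script. A minimal sketch, assuming opencv_traincascade wrote its result to data/cascade.xml (its usual output location) and reusing the earlier test image name:

import cv2

# Load the freshly trained dragon cascade; opencv_traincascade writes
# cascade.xml into the folder given by -data
dragon_cascade = cv2.CascadeClassifier('data/cascade.xml')

img = cv2.imread('TestPicture.JPG')
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)  # grayscale for the detector

dragons = dragon_cascade.detectMultiScale(gray)
for (x, y, w, h) in dragons:
    cv2.rectangle(img, (x, y), (x + w, y + h), (0, 0, 255), 2)

print(str(len(dragons)) + " dragons found")
cv2.imwrite('DragonTestOutput.jpg', img)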

I think the next task is to look at hooking some proper hardware to the Dragonboard using the level shifters.

Reference

Getting Started with Videos — OpenCV-Python Tutorials 1 documentation

Coding Robin Train Your Own OpenCV Haar Classifier

OpenCV Tutorial: Training your own detector (video)

Tutorial: OpenCV haartraining (Rapid Object Detection With A Cascade of Boosted Classifiers Based on Haar-like Features)…

For my Dragon detector project I was interested in activating some kind of attached "defence" when a dragon was detected. For this I intend to use the GPIO pins, but the 96Boards GPIO library only has basic functionality at the moment (digital read and write), so I need to find something else. Specifically I am looking for interrupt based inputs so that my IR sensor can trigger the camera to take a photo. I'm also looking to drive a servo or two.

 

After reading some of the blogs from 96Boards, I thought that the Intel MRAA library would work well as it supported interrupt based inputs and PWM outputs.

 

Shell control of GPIO

Before getting involved in libraries, I thought it best to test using simple shell commands.

 

I ran through the example in the low speed I/O application note and found a couple of things. Firstly, and not surprisingly, you need to be root to configure the GPIO; switching to super user made this easier. Secondly, there was a mention of adding 902 to the GPIO number, but I did not find this to be the case.

 

To enable a pin you "export" it and then configure it for output. Sending 1 turns the pin on.

 

sudo su
echo 36 > /sys/class/gpio/export
echo out > /sys/class/gpio/gpio36/direction

echo 1 > /sys/class/gpio/gpio36/value 

 

and then to turn it back off again, send a 0.

 

echo 0 > /sys/class/gpio/gpio36/value 

 

Once my red LED (note that some colours of LED have a forward voltage > 1.8v so don't light up) was working correctly I thought I'd check the inputs. That's just a case of repeating the export command and reading the value. I used a jumper wire to set the input high as I did not have any switches to hand.

 

echo 12 > /sys/class/gpio/export
echo in > /sys/class/gpio/gpio12/direction
cat /sys/class/gpio/gpio12/value

 

Further investigation into libraries

 

When I looked into MRAA in more detail I saw that the PWM functionality was just a wrapper for existing device level functionality. A simple "ls /sys/class/pwm*" showed that there was no such function on my board.

 

I cross checked this by looking at the mraa_pincapabilities_t setup for the board.

https://github.com/intel-iot-devkit/mraa/blob/master/include/arm/96boards.h

And

http://iotdk.intel.com/docs/1.1/mraa/structmraa__pincapabilities__t.html

 

So in conclusion it does not look like PWM is supported by this library/board combination. Looking at Libsoc, the other library mentioned in the 96Boards blog, that too uses the pwm class so it does not help either. The Libsoc library has a wrapper for I2C, which I think I'll be using to connect up an I/O board which does support PWM, so I'll go for that library.

 

Installing Libsoc

 

There are some notes on the 96Boards blog but those did not seem to be up to date. So I used the instructions from the libsoc github and that compiled successfully.

 

For my next post I'll switch into Python and hopefully get OpenCV detecting things from the webcam.

 

Reference

Bringing Standardization to Linux GPIO for 96Boards - 96Boards

How do you install 96BoardGPIO, libsoc and libmraa on a new image? - 96Boards

https://www.kernel.org/doc/Documentation/gpio/sysfs.txt

 

Videos

Using GPIOs on low speed connector on DragonBoard™ 410

DragonBoard 410C controlled RC Car

 

I also found this extra reference article from Qualcomm https://developer.qualcomm.com/blog/dragonboard-410c-maker-month-contest-tools-you-need

Following on from Getting started with Dragonboard 410c

For my project to work, I need the following:

 

  • Internet connectivity
  • Webcam
  • Python
  • OpenCV
  • GPIO

 

I decided to tackle the software elements first as those were the areas I was least familiar with.

 

Trouble with Wifi

Now that I had a working Linux install my next step was to get the Wifi connected. That was very straightforward, or so I thought. The Linaro/Debian desktop provided a status bar widget where you could select the Wifi and enter the passcode. I did that and it connected just fine. However, shortly after it dropped out, reporting that it was disconnected. I moved the board slightly and it reconnected.

I tried a range of different locations and even turned off the Rii keyboard in case that was interfering with it. Nothing improved the situation. However, I did have a USB to Ethernet dongle from work which I plugged in and connected up. That was detected automatically and I was now on the net reliably. If I have time I'll investigate the Wifi further, but for my purposes the Ethernet is just fine.

 

I installed Bonjour on my desktop and was able to connect to the Dragonboard by name using SSH without any issues.

SSH.png

Camera

I was expecting the camera to cause me problems as it was an old one from my Dad's junk box.

 

I plugged it in and ran lsusb, which correctly detected it as a Logitech QuickCam Express.

WebCam.jpg LsUSB.png

To test the camera I installed "streamer", and captured a test picture of a cat. I've yet to play with the settings on streamer so this is just a low colour version.

outfile.jpeg

Python

 

I wanted to use Python to control my project as it is quick and easy to prototype code like this. I also wanted to communicate with the internet for the purpose of notifying the user, so I installed Pycurl too.

To install Pycurl, I needed to install pip (the Python package manager), so I used the get-pip.py script to do that. I had to install a few prerequisites too.

 

sudo apt-get update
sudo apt-get install libcurl4-openssl-dev python-dev
pip install pycurl

 

To test PyCurl was working, I downloaded a simple web page.

 

from StringIO import StringIO
import pycurl
import signal,sys

def call_api(url):
    r = StringIO()
    c = pycurl.Curl()
    c.setopt(c.URL, url)
    c.setopt(c.CONNECTTIMEOUT, 10)
    c.setopt(c.TIMEOUT, 60)
    c.setopt(c.WRITEFUNCTION, r.write)
    c.perform()
    c.close()
    return r.getvalue()

def main():
    r = call_api("http://csb.stanford.edu/class/public/pages/sykes_webdesign/05_simple.html")
    print r

# Handle exit and kill from OS
def set_exit_handler(func):
    signal.signal(signal.SIGTERM, func)
def on_exit(sig, func=None):
    print "exit handler triggered"
    sys.exit(1)


# Run program
if __name__ == '__main__':
     set_exit_handler(on_exit)
     sys.exit(main())

 

OpenCV

 

After trying to compile software on the Arduino Yún last year, I expected that OpenCV was going to be an issue. However, I followed the instructions to install any dependencies, and configured the make file. That all went smoothly. The compile took a couple of hours, but I was not surprised as OpenCV is quite sophisticated.

Compiling.png

It compiled successfully, but I've yet to test it as it was quite late when it finished.

 

Next up GPIO, there's a few different libraries for this but Intel's MRAA seems to have the most potential for what I'm trying to achieve.

 

Reference

 

Framegrabbing Applications

https://pip.pypa.io/en/stable/installing/

Installation in Linux — OpenCV 2.4.13.0 documentation

Once the headless Raspberry Pi had been set up we got started with Node-RED, sensors, LEDs and servos.

2016-04-28 20.08.49.jpg

For the sensors I selected 3 different kinds: light, touch and motion.

 

LDRTouchPIR.png

The light sensor is based around a simple LM393 comparator with a preset to do the comparison. This is powered from 3.3V and the output is a digital signal. There are power and output LEDs so that it can be tested independently.

The touch sensor is based around a TTP223B chip; this is also powered from 3.3V and the output should be digital.

The motion sensor is a passive infra-red detector based on the HC-SR501C; this is powered from 5V but has a 3.3V output. There are two adjustments, one for sensitivity and the other for how long the output stays high once an event is triggered.

 

I did not have a chance to try out the LDR and my initial experiments with the touch sensor were unsuccessful too. However the PIR was what was needed for the first project and that worked well, so we stuck with that.

 

The LEDs were pre-wired with a series resistor and sockets to connect to the GPIO on the Pi; this worked really well.

 

The servos were cheap micro-servos. I also brought along a few servo testers which we used to make sure the servos were working correctly.

 

Cat Scarer

Charlie's project was to stop the neighbour's cats from spending time on his lawn. One of our main challenges was getting the voice recording off an iPad and onto the Pi. We eventually got help from the sound guy, re-recorded it onto an SD card and used WinSCP to copy the file onto the Pi.

 

This project used a PIR sensor and a powered headphone speaker connected to the 3.5mm audio jack on the Pi.

 

The flow works as follows:

  • The sensor goes high when motion is detected.
  • To avoid multiple triggers we limit the flow to one message every 2s.
  • This is then wired to the LED so we can see that the sensor has triggered an event.
  • A random node creates values 1 or 2 and this passes to the switch to select one of the two outputs.
  • Finally the EXEC nodes run the command line "aplay" with one of two values for a police siren or a recorded message.

SensorFlow2.png

Trouble with servos

We tried to get the servos to work but they simply refused to co-operate. I've since followed up on this and we should have used the GPIO 2 pin as that has hardware PWM; there's also a limited range of values that can be used. If we'd had time I also had a Picon Zero which we could have used to control lots of servos.

ServoFlow.jpg

 

Mouse Toggle

David used a Bluetooth mouse to remotely control his Pi's LED. This was one area where we could not work out how to complete the flow without code. Each mouse click turned the LED on or off.

Toggle.png

Toggle Code

var state = context.get('state')||0;
if (state == 0) {
    state = 1;
}
else {
    state = 0;
}
context.set('state',state)
msg.payload = state;
return msg;

 

 

Reference

Node Red

Playing Audio on the Pi

After my slightly jovial entry to the DragonBoard competition was accepted, I was sent a board to do my project with. I was also sent a US power supply so had to get an adaptor for it to work (luckily the supply was rated for 240v).

Entry.jpg

 

For setup, I was relegated to the bedroom as that was where the only TV with an HDMI connector was located.

I followed the quick start and booted into Android; there are some nice animated graphics when it boots. You then have to work out how to "swipe" the screen using just a trackpad.

2016-05-04 20.41.39.jpg2016-05-04 20.33.03.jpg

This seemed to run ok but there was not much I could do with Android, and I then had issues connecting to the Wifi: mine appeared for long enough for me to enter the key but was then replaced by my neighbour's BT home hub.

 

So I switched to Linux, which can be installed from the SDCard

https://github.com/96boards/documentation/wiki/Dragonboard-410c-Installation-Guide-for-Linux-and-Android

 

To boot from the SD card you flip a switch on the back of the board. The DIP switch is minuscule! I used my smallest screwdriver to set it. I also seemed to be getting the wrong images; the key is to get one that says "SDCard install" in the name, and if you follow the link in the instructions that will get you to the right download. Even after this I had trouble with the first card that I'd imaged.

 

Finally I managed to get it all plugged correctly (my HDMI connector was loose) and the flashing could begin.

Flashing01.jpg Flashing2.jpg

That process went smoothly and I took out the SD card and rebooted into the Linux desktop.

LinuxDesktop.jpg

Next challenge: connecting to the Wifi and remote access via SSH.

Year-in-Review-2015-header.png

I had a great start to 2015: I'd just been selected to Road Test a Cel Robox 3D printer at the end of December, and managed to get a detailed look at the inside of the printer and also do some test prints before the printer went off for an upgrade at the end of Jan.


I'd just re-launched my Workshopshed website and I found out on Jan 14th that I'd been made Member of the Month.

2015-01-01+23.45.27.jpg

http://www.element14.com/community/groups/3d-printing/blog/authors/Workshopshed

 

I intended to make some circuits using the touch sensor chips I'd experimented with the year before. To help with this I bought myself a temperature-controlled soldering iron and some tweezers.

 

I started on a magnifier lamp project in February. This used some LEDs I'd bought many years before, some scrap metal and wood as well as a selection of 3D printed parts. After a strong start in Feb and March progress slowed and it was not till October that the lamp was finally completed. The lamp was designed using OpenSCAD and I've really improved my skill in that over the year.

2015-02-04+20.44.02.jpgWiredUp2.jpg

In March I applied for the Enchanted Objects Design Challenge, and my application was accepted. I knew this was going to be a lot of work as it was 16 weeks of blogging and making. In the end it was over 200 hours of designing and making electronics, configuring and coding embedded Linux, and designing and printing the mechanics. 41 blog posts charted the progress of the project, which was woven around the story of Hans and Matilda, the young meteorologists. The challenge dominated my life from March through to the end of June.

 

At the end of March, Maplin awarded me a prize in their "Arduino Day" competition for my Topsy Turvy Clock project which was an added bonus.

 

In the summer, I met with shabaz and mcb1 whilst Mark was on his grand tour. An entertaining evening from what I can remember of it.

 

At the end of July I got a call from Dave to tell me that I'd won the challenge and a trip to New York and the Maker Faire which was fantastic news. What Dave did not mention was that I'd also have my pictures and name up in lights around New York!

8th+ave+%26+26th+-+03.jpg

The trip to the New York Maker Faire was an amazing weekend and I met loads of great people and their projects. It was a packed weekend and I can't thank Element14 enough for sending me.

I took Hans and Matilda along too but managed to lose them somewhere in the showground. Luckily a Swiss company who make weather houses managed to send me some replacements later in the year.

2015-09-26+10.08.44.jpg

My reports from the faire made it far and wide; as well as the reports here, here and on the Workshopshed blog, they were also published in the Imperial Engineer (the alumni magazine for Imperial College) and in Model Engineer's Workshop magazine.

MEW+236+preview+cover.jpg ImperialEngineer.png

This was a good end to my year but I also won a copy of the Beagle Bone Cookbook in November.

 

My final electronics project of the year was a Christmas Decoration in the form of a tree powered by an Adafruit Trinket.

 

 

So what's next for 2016? I do plan to do a little more metalwork than this year; I've a little Stirling engine I've been hoping to build for some time now. However, I do think I'll have a few electronics projects too; hopefully I am now in a position to build the rotary sensor I was researching a few years ago. I've also got a Raspberry Pi Zero and a BeagleBone Green to play with, so hopefully I'll be able to put the Linux skills picked up during this year's challenge to good use with those.