
Andy Clark's Blog


As I've been experimenting with the BeagleBone Blue, I decided it would make sense to do a bit of coding with Node.js rather than Python, which I've used for a few previous projects. I also wanted to see if I could connect it up to MQTT as I planned to use that for another project.

Node.js is a platform built on Chrome's JavaScript runtime for easily building fast and scalable network applications. Node.js uses an event-driven, non-blocking I/O model that makes it lightweight and efficient, perfect for data-intensive real-time applications that run across distributed devices.

MQTT stands for MQ Telemetry Transport. It is a publish/subscribe, extremely simple and lightweight messaging protocol, designed for constrained devices and low-bandwidth, high-latency or unreliable networks. The design principles are to minimise network bandwidth and device resource requirements whilst also attempting to ensure reliability and some degree of assurance of delivery. These principles also turn out to make the protocol ideal for the emerging "machine-to-machine" (M2M) or "Internet of Things" world of connected devices, and for mobile applications where bandwidth and battery power are at a premium.

My first simple project to combine those two was to hook up to Cayenne, an IoT dashboard from myDevices. The dashboard allows you to add various widgets for displaying values, voltages, temperatures and so on, and to log their values over time. There are also triggers that allow you to notify people if certain limits are exceeded, for example if your beer brewing temperature monitor goes above a particular temperature.

The dashboard supports Raspberry Pi and Arduino boards out of the box and there's a growing list of other supported hardware. However, as my device was not on the list, I'd have to use the Bring Your Own Thing API. This allows you to connect to the MQTT broker directly and publish and subscribe to channels to interact with the dashboard.

Dashboard

When using the API you create a new device in the dashboard and it will give you the username, password and client ID which must then be used with all further connections. It's best to put these into a separate configuration file but I've just added mine to the top of the script as variables. There's a bit of a chicken and egg situation here, as you can't continue building the dashboard until the app has connected at least once.

 

To run the following you'll need the MQTT.js client library, which can be installed with npm.
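
In the project folder that's just:

npm install mqtt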

 

var mqtt = require('mqtt')
var data = 1;

console.log('Connecting to Queue');

var apiVersion = "v1";
var username = '123456789012345678901234567890';
var clientId =  '123456789012345678901234567890';
var password = '123456789012345678901234567890';

var rootTopic = [
  apiVersion,
  username,
  'things',
  clientId
].join('/');

var client = mqtt.connect('mqtt://mqtt.mydevices.com',{
    port: 1883,
    clientId: clientId,
    username: username,
    password: password,
    connectTimeout: 5000
});

client.on('connect', function () {
    console.log('Connected');
    client.subscribe(rootTopic + '/cmd/+');
    client.publish(rootTopic + '/sys/model', 'Node Test'); 
    client.publish(rootTopic + '/sys/version', 'V1'); 
})

client.on('message', function (topic, message) {
    const payload = message.toString().split(',');
    const topics = topic.toString().split('/');
    const seq = payload[0];
    const cmd = payload[1];
    const channel = topics[5];
    console.log(channel + "-" + cmd);
    client.publish(rootTopic + '/data/' + channel, cmd); //Echo value back
    client.publish(rootTopic + '/response', 'ok,'+seq);
})

client.on('close', function (message) {
    console.log('closed');
    console.log(message);
    client.end();
})

client.on('error', function (message) {
    console.log('error occurred');
    console.log(message);
    client.end();
})

client.on('disconnect', function () {
    console.log('disconnection occurred');
    client.end();
})

function writeData() {
    var topic = rootTopic + '/data/testChannel';
    var payload = data.toString();
    client.publish(topic, payload);
    data = data + 1;
}

process.on('SIGINT', function () {
    console.log("Shutting down SIGINT (Ctrl-C)");
    client.end();
    process.exit();
})

function loop() {
    console.log("Heartbeat...");
    writeData();
};

function run() {
    console.log('Running');
    setInterval(loop, 30000);
};

run();

 

The app above is designed to publish a value to channel "testChannel"; this value simply increments each time the loop code runs.

CayenneValue.png

The switch setting is a little more complex. It works by listening for commands ("cmd") and when it receives a message it echoes the value back again and acknowledges the command with a "response". It's important to do this, otherwise the switch widget on the dashboard will become unresponsive. You can "unfreeze" it by editing and saving the settings. It's also important to use distinct channels for your switch and other widgets as that can also affect the behaviour.

CayeneLight.png

Other problems I found were that my firewall blocked connections to the MQTT port by default and that the system information does not show up on the dashboard. I also felt the dashboard could do with a simple "status" type widget to pass text messages back and forth.

 

I also found that the responsive website for myDevices made it almost impossible to log in when in mobile mode, and the "App" for Android did not support my custom device. The documentation page is a massive long HTML page with #tags to identify each section. Again, this proved challenging when reading on mobile.

 

So connecting to MQTT in Node is very easy and wiring up to the Cayenne dashboard is straightforward (if not foolproof). The experiment puts me in good stead for my project, so for me it was a big success.

Talks

Early in 2015 I was asked to give a talk about the Enchanted Cottage project that I completed last year. A group of eager London Arduino enthusiasts learnt about my struggles and successes with the Arduino Yún. One of the attendees was Brian Byrne, who runs the Linuxing in London group; he approached me later in the year to talk about another project, more on that in a bit.

 

Workshop

2016-06-08 17.55.24.png

My next challenge for the year was from Emma Bearman, who had also spotted my enchanted project and wondered if I could bring enchantment to her gnomes. A gnomes workshop was arranged in April to show the youth of Leeds how to use a Raspberry Pi 3 to control motors and LEDs using the IoT software Node-Red. The project got a write-up in issue 212 of Linux Format.

 

As part of my research for the workshop I looked at a bunch of different motor controller boards; one of those, the Picon Zero, was to be used later in the year too.

 

Dragon Detector

The dragon detector was a "joke" entry into the Qualcomm DragonBoard competition; they called my bluff and sent me a board to work with.

Entry.png

https://www.element14.com/community/people/Workshopshed/blog/tags#/?tags=dragonboard%20410c

 

I managed to complete a project in time and although I did not win the grand prize, Qualcomm awarded me a "Developer of the Month" award.

 

 

Since the competition completed, the project has been enhanced and I've been asked to talk about it for the London Linuxing group and also for the 96Boards group.

Talk.png

 

Wins

In August, my enthusiasm for the ZX Spectrum won me a Ben Heck ZX Portable. Unfortunately, it was DOA but with some help from the Element14 members we got it back working again and playing games.

TShirt+and+ZxPortable.jpg

My Terminator Eye also won me a Pi 3, which will be put to good use running Minecraft and Scratch for my young daughter.

 

Mini Project

To help with the above ZX Portable diagnosis, a Test Card Generator was made using a Raspberry Pi.

 

Road Tests

It's been a busy year but I also squeezed in a road test: Elegant and Robust Capacitive Touch Interfaces - Review.

I also tested a little board from 4tronix and got it to sleep and blink as well as smile.

Hopefully, I will be starting another Roadtest before the end of the year.

During my earlier experiments with GPIO on this board, I realised that it does not natively support PWM. I had a 4tronix Picon Zero board left over from my preparation for the Gnomes event and thought that could work with the DragonBoard.

PiconZero.jpgLevelShifter.png

As with the LED and IR detector, I needed to use level shifters to connect up the board, so I added a second set to my breadboard and wired that up to the power and first I2C bus on the DragonBoard. This board has three I2C buses, two on the low speed connector and one on the high speed connector. It was a bit of a lash-up as I had neither the right connectors for the DragonBoard (male 2x20, 2mm pitch) nor for the Picon Zero (male 2x20, 2.54mm pitch).

I2C.png

I checked that it was working with the i2cdetect command. This needs elevated privileges to run.

 

To list the buses use:

 

sudo i2cdetect -l

 

To probe for devices on bus 0 use:

 

sudo i2cdetect -r 0

 

This reports a warning, but it did not cause any issues for the 4tronix board.

 

WARNING! This program can confuse your I2C bus, cause data loss and worse!
I will probe file /dev/i2c-0 using read byte commands.
I will probe address range 0x03-0x77.
Continue? [Y/n] Y
     0  1  2  3  4  5  6  7  8  9  a  b  c  d  e  f
00:          -- -- -- -- -- -- -- -- -- -- -- -- --
10: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
20: -- -- 22 -- -- -- -- -- -- -- -- -- -- -- -- --
30: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
40: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
50: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
60: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
70: -- -- -- -- -- -- -- --

 

It correctly detected the board at address 22, so I was happy that it was working. I tried to get the board to detect with Libsoc by combining a Libsoc test script with the getRevision function from the piconzero library, but that did not work, reporting a timeout or no data returned. So instead I followed the instructions on the 4tronix blog to install the library and examples.

 

wget http://4tronix.co.uk/piconz.sh -O piconz.sh
bash piconz.sh

 

I also installed the python-smbus module which is a dependency for the piconzero library.

 

sudo apt-get install python-smbus

 

Finally, the library needed a minor change at the top as I was using bus 0 rather than bus 1. Edit piconzero.py and change the line that sets up the bus so that it reads as follows.

 

bus = smbus.SMBus(0)  # changed from SMBus(1), as the Picon Zero is on I2C bus 0 here

 

I tested the version script and that produced a result.

 

linaro@linaro-alip:~/piconzero$ sudo python version.py
Board Type: Picon Zero
Firmware version: 7

 

I then added a servo and tested that.

 

linaro@linaro-alip:~/piconzero$ sudo python servoTest.py
Tests the servos by using the arrow keys to control
Press <space> key to centre
Press Ctrl-C to end
Up 85
Up 80
Up 75
Up 70
Up 65
Up 60
Up 55
Up 50
Up 45
Up 40
Up 35
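
For my own code, rather than the supplied test scripts, I'm planning something along the lines of the sketch below. The function names (init, setOutputConfig, setOutput, cleanup) and the output type value of 2 for servo mode are as I understood them from the 4tronix examples, so treat them as assumptions and check against piconzero.py if yours differ.

import time
import piconzero as pz  # the library installed by piconz.sh

pz.init()                 # open the I2C connection to the board
pz.setOutputConfig(0, 2)  # configure output 0 as a servo (type 2)
for angle in (40, 90, 140, 90):
    pz.setOutput(0, angle)  # move the servo on output 0 to the given angle
    time.sleep(1)
pz.cleanup()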

 

Here are the results. Although the competition is over and my video presentation is submitted, I'd still like to finish off the project with a 3D printed knight. If you are interested in my adaptations to the box, you can find those on the Workshopshed blog: Boxing the Dragon - Workshopshed

When my Dragon Detector spots a new dragon I want it to notify the operator that something has happened. When looking for ways to do this, I discovered the IF THIS THEN THAT (IFTTT) "Maker channel". This allows you to trigger IFTTT flows by calling a URL of the following form:

 

https://maker.ifttt.com/trigger/{event}/with/key/{channel key}

 

You can also pass in parameters so that you can customise the flow. I added the IF client app to my mobile and configured a simple "recipe" to link the Maker event to my notification.

Screenshot_20160523-055021.png

To call this from the Dragonboard I used Pycurl which was installed earlier.

 

from StringIO import StringIO
import pycurl


def get_key():
    # Read the IFTTT key from a config file; strip the trailing newline so it
    # can be dropped straight into the URL
    with open('IFTTTKey.conf', 'r') as f:
        key = f.readline()
    return key.strip()


def get_notifyURL(numDragons):
    return "https://maker.ifttt.com/trigger/DragonDetected/with/key/" + get_key() + "?value1=" + str(numDragons)


def call_api(url):
    r = StringIO()
    c = pycurl.Curl()
    c.setopt(c.URL, url)
    c.setopt(c.CONNECTTIMEOUT, 10)
    c.setopt(c.TIMEOUT, 60)
    c.setopt(c.WRITEFUNCTION, r.write)
    c.perform()
    c.close()
    return r.getvalue()


print call_api(get_notifyURL(2))

 

Screenshot_20160523-222019.png

As mentioned in my previous blog on Dragonboard 410C GPIO, I was planning to use the libsoc library from Jack Mitch. I thought I'd installed this correctly but when I tried to access it from Python it was refusing to import the library. Re-reading the GPIO blog article from 96Boards, I realised I'd not compiled it correctly. So I tried that again and this time was successful.

 

./configure --enable-python --enable-board=dragonboard410c
make
sudo make install
sudo ldconfig /usr/local/lib

 

As the Dragonboard uses 1.8v logic levels, I used a simple MOSFET based level shifter module.

LevelShifterSchematic.pngLevelShifter.pnghc-sr501.png

 

One channel was connected to my HC-SR501 passive IR module, the others connected to the three pins of an RGB LED.

 

I had some issues getting the libsoc code to work, which turned out to be because I'd forgotten to "request" the GPIOs. Once I'd added that in, it was straightforward to flash an output on GPIO-B.

 

from time import sleep
from libsoc import gpio
from libsoc import GPIO

# GPIO.set_debug(True)
gpio_out = gpio.GPIO(GPIO.gpio_id("GPIO-B"), gpio.DIRECTION_OUTPUT)
with gpio.request_gpios(gpio_out):
    while True:
        gpio_out.set_high()
        sleep(1)
        gpio_out.set_low()
        sleep(1)

 

In the process of investigating my issues I discovered libsoc_zero. If you've used GPIO Zero on the Pi, it's very similar. You'll see that it just adds a little more abstraction in the form of LEDs and Buttons.

 

from libsoc_zero.GPIO import LED
from time import sleep
gpio_red = LED('GPIO-B')
while (True): 
    gpio_red.on()
    sleep(0.5)
    gpio_red.off()
    sleep(0.5)

 

The next example lights the red LED when the sensor detects movement. I noticed whilst doing this that the output from the IR module appears to float when there is nothing detected, so I added a pull-down resistor to the output pin and the circuit became a lot more predictable.

 

from libsoc_zero.GPIO import Button
from libsoc_zero.GPIO import LED
from time import sleep
sensor = Button('GPIO-A')
gpio_red = LED('GPIO-B')
gpio_red.off()
sleep(2)
while True:
    if sensor.is_pressed():
        gpio_red.on()
        sleep(0.5)
    else:
        gpio_red.off()
        sleep(0.5)

 

I've also been looking at boxing up the project. Normally I'd build this on strip board but as I'm a little short on time, I think it will likely stay on the breadboard. The webcam cable is quite long so I'll see if I can safely shorten that.

Boxing.jpg

I've put the code for the project on Github as it's now starting to get interesting.

 

Testing on something simpler

So that I knew OpenCV was working correctly, I created a simple test script.

 

import numpy as np
import cv2
print cv2.__version__

 

I also decided to do my testing and development on a Windows laptop with a lot more power than the Dragonboard. This will be particularly important when it comes to the number crunching of a new classifier.

I had a few false starts getting OpenCV working on my laptop. I downloaded and installed the 3.1 version of OpenCV but then used the wrong version of the Python extensions; once I'd picked the right version to go with OpenCV 3.1 and Python 2.7 (opencv_python-3.1.0-cp27-cp27m-win32.whl) things started working correctly. I also found that I needed to copy the classifier XML files into my project folder. I modified an example file so that it ran without creating a window.

 

# A simple test file for OpenCV

import numpy as np
import cv2
import datetime


def detect(img_color):
    face_cascade = cv2.CascadeClassifier('haarcascade_frontalface_default.xml')
    eye_cascade = cv2.CascadeClassifier('haarcascade_eye.xml')
    detected = 0
    gray = cv2.cvtColor(img_color, cv2.COLOR_BGR2GRAY)

    # Now we find the faces in the image. If faces are found, it returns the positions of detected faces as Rect(x,y,w,h).
    # Once we get these locations, we can create a ROI for the face and apply eye detection on this ROI (since eyes are always on the face !!! ).
    faces = face_cascade.detectMultiScale(gray)
    for (x, y, w, h) in faces:
        cv2.rectangle(img_color, (x, y), (x + w, y + h), (255, 0, 0), 2)
        roi_gray = gray[y:y + h, x:x + w]
        roi_color = img_color[y:y + h, x:x + w]
        eyes = eye_cascade.detectMultiScale(roi_gray)
        if len(eyes) > 0:
            detected = detected + 1
        for (ex, ey, ew, eh) in eyes:
            cv2.rectangle(roi_color, (ex, ey), (ex + ew, ey + eh), (0, 255, 0), 2)

    print(str(detected) + " people")
    return detected

img = cv2.imread('TestPicture.JPG')

if detect(img) > 0:
    newFileName = "TestOutput" + datetime.datetime.now().isoformat().replace(":","") + ".jpg"
    cv2.imwrite(newFileName, img);

 

The code reads a test file and then passes that to the detect function. The detect function creates two classifiers, one for faces and one for eyes, based on the example XML provided with OpenCV. It grayscales the image for faster processing and then detects faces; for each face found it draws a rectangle and then checks for eyes. If it detects eyes on the face then it's a hit and we have found a person.

 

I tested my classifier with some astronauts and it detected three of them, although interestingly it spotted a face on one of the crumpled sleeves. On my laptop it takes about 1s to load and process the (615x425) pixel file.

TestOutput2016-05-18T110903.651000.jpg

The same script on the Dragonboard takes 0.6s to run.

 

Getting Webcam images

Capturing from the webcam with OpenCV is very simple. Most of the tutorials assume you want to display video output, but these can be simplified to capture a single frame. In this example we capture a single frame from the camera and then save it to disk. In the finished version that frame would be passed on to the classifier.

 

import cv2
import datetime

cap = cv2.VideoCapture(0)
# Capture single frame
ret, frame = cap.read()
cap.release()

if ret:
    newFileName = "CaptureOutput" + datetime.datetime.now().isoformat().replace(":", "") + ".jpg"
    cv2.imwrite(newFileName, frame)
else:
    print "Capture failed"

 

When I tested this with the Dragonboard I received the following error:

 

VIDEOIO ERROR: V4L2: Pixel format of incoming image is unsupported by OpenCV
Capture failed

 

There seemed to be some anecdotal fixes for this but they did not work for me, so I swapped the script out for one that called the command line "streamer" app instead.

 

That also caused me some trouble, with the parameters not getting passed to the streamer app correctly and streamer in turn complaining that the format could not be determined. After some experimentation, the following approach worked and the file could be opened by OpenCV.

 

import cv2
import datetime
from subprocess import call
capture = "CaptureInput" + datetime.datetime.now().isoformat().replace(":", "") + ".jpeg"
cmdline = "streamer -c /dev/video0 -b 32 -f jpeg -o " + capture
call(cmdline, shell=True)
img = cv2.imread(capture)
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
newFileName = "CaptureOutput" + datetime.datetime.now().isoformat().replace(":", "") + ".jpg"
cv2.imwrite(newFileName, gray)

Training a classifier

I followed this tutorial to create my classifier XML data, although I had to drop the featureType parameter as that caused it to crash on my system. The training application is very hungry for memory and used about 2.5GB on my system for image sizes of 50 x 50. It ran one of my CPUs at between 50% and 100%. When I repeated the test with a 100 x 100 image the memory usage shot up to 8GB, although it should be possible to control this with the buffer size settings. However, I reverted to the 50 x 50 size and increased the number of training stages, as that is apparently what gives quality results rather than the image size. The training programme does seem to crash rather than report sensible errors; it also crashed for me when I put in really large image sizes. After 30 minutes I had my first prototype classifier.

 

opencv_createsamples.exe -info Dragons.info -num 90 -w 50 -h 50 -vec Dragons.vec

Info file name: Dragons.info
Img file name: (NULL)
Vec file name: Dragons.vec
BG  file name: (NULL)
Num: 90
BG color: 0
BG threshold: 80
Invert: FALSE
Max intensity deviation: 40
Max x angle: 1.1
Max y angle: 1.1
Max z angle: 0.5
Show samples: FALSE
Width: 50
Height: 50
Create training samples from images collection...
Done. Created 90 samples

opencv_traincascade.exe" -data data -vec Dragons.vec -bg Negative.info -numPos 89 -numNeg 765 -numStages 5 -w 50 -h 50

PARAMETERS:
cascadeDirName: data
vecFileName: Dragons.vec
bgFileName: Negative.info
numPos: 89
numNeg: 765
numStages: 5
precalcValBufSize[Mb] : 1024
precalcIdxBufSize[Mb] : 1024
acceptanceRatioBreakValue : -1
stageType: BOOST
featureType: HAAR
sampleWidth: 50
sampleHeight: 50
boostType: GAB
minHitRate: 0.995
maxFalseAlarmRate: 0.5
weightTrimRate: 0.95
maxDepth: 1
maxWeakCount: 100
mode: BASIC

===== TRAINING 0-stage =====
<BEGIN
POS count : consumed   89 : 89
NEG count : acceptanceRatio    765 : 1
Precalculation time: 9.468
+----+---------+---------+
|  N |    HR   |    FA   |
+----+---------+---------+
|   1|        1|        1|
+----+---------+---------+
|   2|        1|        1|
+----+---------+---------+
|   3|        1|        1|
+----+---------+---------+
|   4|        1| 0.330719|
+----+---------+---------+
END>
Training until now has taken 0 days 0 hours 12 minutes 54 seconds.

===== TRAINING 1-stage =====
<BEGIN
POS count : consumed   89 : 89
NEG count : acceptanceRatio    765 : 0.407783
Precalculation time: 9.62
+----+---------+---------+
|  N |    HR   |    FA   |
+----+---------+---------+
|   1|        1|        1|
+----+---------+---------+
|   2|        1|        1|
+----+---------+---------+
|   3|        1| 0.426144|
+----+---------+---------+
END>
Training until now has taken 0 days 0 hours 22 minutes 45 seconds.

===== TRAINING 2-stage =====
<BEGIN
POS count : consumed   89 : 89
NEG count : acceptanceRatio    765 : 0.26127
Precalculation time: 9.143
+----+---------+---------+
|  N |    HR   |    FA   |
+----+---------+---------+
|   1|        1|        1|
+----+---------+---------+
|   2|        1|        1|
+----+---------+---------+
|   3|        1| 0.488889|
+----+---------+---------+
END>
Training until now has taken 0 days 0 hours 32 minutes 18 seconds.

 

Testing

My initial testing produced too many false positives, so I found 1000 more negative images and added them to the training. I also set the maxFalseAlarmRate to a smaller value and set my training going again.
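
The retraining run was along the lines of the command below; the negative count and stage count shown here are illustrative rather than the exact values I used.

opencv_traincascade.exe -data data -vec Dragons.vec -bg Negative.info -numPos 89 -numNeg 1765 -numStages 10 -maxFalseAlarmRate 0.4 -w 50 -h 50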

TestOutput2016-05-20T082730.246000.jpg

This time the training took a lot longer; nearly 15hrs later the classifier was trained and it worked a whole lot better than my first version.

TestOutput2016-05-20T233252.626000.jpg

I think the next task is to look at hooking up some proper hardware to the Dragonboard using the level shifters.

Reference

Getting Started with Videos — OpenCV-Python Tutorials 1 documentation

Coding Robin Train Your Own OpenCV Haar Classifier

OpenCV Tutorial: Training your own detector (video)

Tutorial: OpenCV haartraining (Rapid Object Detection With A Cascade of Boosted Classifiers Based on Haar-like Features)…

For my Dragon Detector project I was interested in activating some kind of attached "defence" when a dragon was detected. For this I intend to use the GPIO pins, but the 96Boards GPIO library only has basic functionality at the moment (digital read and write), so I need to find something else. Specifically, I am looking for interrupt based inputs so that my IR sensor can trigger the camera to take a photo. I'm also looking to drive a servo or two.

 

After reading some of the blogs from 96Boards, I thought that the Intel MRAA library would work well as it supported interrupt based inputs and PWM outputs.

 

Shell control of GPIO

Before getting involved in libraries, I thought it best to test using simple shell commands.

 

I ran through the example in the low speed I/O application note and found a couple of things. Firstly, and not surprisingly, you need to be root to configure the GPIO; switching to super user made this easier. Secondly, there was a mention of adding 902 to the GPIO number, but I did not find this to be the case.

 

To enable a pin you "export" it and then configure it for output. Sending 1 turns the pin on.

 

sudo su
echo 36 > /sys/class/gpio/export
echo out > /sys/class/gpio/gpio36/direction

echo 1 > /sys/class/gpio/gpio36/value 

 

and then to turn it back off again, send a 0.

 

echo 0 > /sys/class/gpio/gpio36/value 

 

Once my red LED was working correctly (note that some colours of LED have a forward voltage > 1.8v so don't light up), I thought I'd check the inputs. That's just a case of repeating the export command and reading the value. I used a jumper wire to set the input high as I did not have any switches to hand.

 

echo 12 > /sys/class/gpio/export
echo in > /sys/class/gpio/gpio12/direction
cat /sys/class/gpio/gpio12/value

 

Further investigation into libraries

 

When I looked into MRAA in more detail I saw that the PWM functionality was just a wrapper for existing device level functionality. A simple "ls /sys/class/pwm*" showed that there was no such function on my board.

 

I cross checked this by looking at the mraa_pincapabilities_t setup for the board.

https://github.com/intel-iot-devkit/mraa/blob/master/include/arm/96boards.h

And

http://iotdk.intel.com/docs/1.1/mraa/structmraa__pincapabilities__t.html

 

So, in conclusion, it does not look like PWM is supported by this library/board combination. Looking at Libsoc, the other library mentioned in the 96Boards blog, that too uses the pwm sysfs class so it does not help either. However, the Libsoc library has a wrapper for I2C, which I think I'll be using to connect up an I/O board that does support PWM, so I'll go for that library.

 

Installing Libsoc

 

There are some notes on the 96Boards blog but those did not seem to be up to date, so I used the instructions from the libsoc GitHub repository and it compiled successfully.
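
For reference, the build is the usual autotools sequence; roughly what I ran is shown below, with the options left at their defaults.

git clone https://github.com/jackmitch/libsoc.git
cd libsoc
autoreconf -i
./configure
make
sudo make install
sudo ldconfig /usr/local/lib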

 

For my next post I'll switch into Python and hopefully get OpenCV detecting things from the webcam.

 

Reference

Bringing Standardization to Linux GPIO for 96Boards - 96Boards

How do you install 96BoardGPIO, libsoc and libmraa on a new image? - 96Boards

https://www.kernel.org/doc/Documentation/gpio/sysfs.txt

 

Videos

Using GPIOs on low speed connector on DragonBoard™ 410

DragonBoard 410C controlled RC Car

 

I also found this extra reference article from Qualcomm https://developer.qualcomm.com/blog/dragonboard-410c-maker-month-contest-tools-you-need

Following on from Getting started with Dragonboard 410c

For my project to work, I need the following:

 

  • Internet connectivity
  • Webcam
  • Python
  • OpenCV
  • GPIO

 

I decided to tackle the software elements first as those were the areas I was least familiar with.

 

Trouble with Wifi

Now that I had a working Linux install, my next step was to get the Wifi connected. That was very straightforward, or so I thought. The Linaro/Debian desktop provided a status bar widget where you could select the Wifi network and enter the pass code. I did that and it connected just fine. However, shortly afterwards it dropped out, reporting that it was disconnected. I moved the board slightly and it reconnected.

I tried a range of different locations and even turned off the Rii keyboard in case that was interfering with it. Nothing improved the situation. However, I did have a USB to Ethernet dongle from work, which I plugged in and connected up. That was detected automatically and I was now on the net reliably. If I have time I'll investigate the Wifi further but for my purposes the Ethernet is just fine.

 

I installed Bonjour on my desktop and was able to connect to the Dragonboard by name using SSH without any issues.
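
Assuming the default linaro user and the linaro-alip hostname (as seen in the terminal prompts elsewhere on this page), connecting by name via mDNS is simply:

ssh linaro@linaro-alip.local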

SSH.png

Camera

I was expecting the camera to cause me problems as it was an old one from my Dad's junk box.

 

I plugged it in and ran lsusb; it was correctly detected as a Logitech QuickCam Express.

WebCam.jpg LsUSB.png

To test the camera I installed "streamer" and captured a test picture of a cat. I've yet to play with the settings on streamer so this is just a low colour version.

outfile.jpeg

Python

 

I wanted to use Python to control my project as it is quick and easy to prototype code like this. I also wanted to communicate with the internet for the purpose of notifying the user, so I installed PyCurl too.

To install PyCurl, I needed to install pip (the Python package manager), so I used the get-pip.py script to do that. I had to install a few pre-requisites too.
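
The bootstrap step itself is just a case of downloading and running the script; the URL below is the standard one from the pip documentation linked in the references.

wget https://bootstrap.pypa.io/get-pip.py
sudo python get-pip.py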

 

sudo apt-get update
sudo apt-get install libcurl4-openssl-dev python-dev
pip install pycurl

 

To test PyCurl was working, I downloaded a simple web page.

 

from StringIO import StringIO
import pycurl
import signal,sys

def call_api(url):
    r = StringIO()
    c = pycurl.Curl()
    c.setopt(c.URL, url)
    c.setopt(c.CONNECTTIMEOUT, 10)
    c.setopt(c.TIMEOUT, 60)
    c.setopt(c.WRITEFUNCTION, r.write)
    c.perform()
    c.close()
    return r.getvalue()

def main():
    r = call_api("http://csb.stanford.edu/class/public/pages/sykes_webdesign/05_simple.html")
    print r

# Handle exit and kill from OS
def set_exit_handler(func):
    signal.signal(signal.SIGTERM, func)
def on_exit(sig, func=None):
    print "exit handler triggered"
    sys.exit(1)


# Run program
if __name__ == '__main__':
    set_exit_handler(on_exit)
    sys.exit(main())

 

OpenCV

 

After trying to compile software on the Arduino Yún last year, I expected that OpenCV was going to be an issue. However, I followed the instructions to install the dependencies and configured the make file, and that all went smoothly. The compile took a couple of hours; I was not surprised as OpenCV is quite sophisticated.

Compiling.png

It compiled successfully, but I've yet to test it as it was quite late when it finished.

 

Next up is GPIO; there are a few different libraries for this but Intel's MRAA seems to have the most potential for what I'm trying to achieve.

 

Reference

 

Framegrabbing Applications

https://pip.pypa.io/en/stable/installing/

Installation in Linux — OpenCV 2.4.13.0 documentation

Once the headless Raspberry Pi had been set up, we got started with Node-Red, sensors, LEDs and servos.

2016-04-28 20.08.49.jpg

For the sensors I selected three different kinds: light, touch and motion.

 

LDRTouchPIR.png

The light sensor is based around a simple LM393 comparator with a preset to do the comparison. This is powered from 3.3v and the output is a digital signal. There is a power and output LED so that it can be tested independently.

The touch sensor is based around a TTP223B chip, this is also powered from 3.3v and the output should be digital.

The motion sensor is a passive infra-red detector based on the HC-SR501C, this is powered from 5v but has a 3.3v output. There are two adjustments, one for sensitivity and the other controls how long the output stays high once an event is triggered.

 

I did not have a chance to try out the LDR and my initial experiments with the touch sensor were unsuccessful too. However, the PIR was what was needed for the first project and that worked well, so we stuck with that.

 

The LEDs were pre-wired with a series resistor and sockets to connect to the GPIO on the Pi; this worked really well.

 

The servos were cheap micro-servos; I also brought along a few servo testers which we used to make sure the servos were working correctly.

 

Cat Scarer

Charlie's project was to stop the neighbour's cats from spending time on his lawn. One of our main challenges was getting the voice recording off an iPad and onto the Pi. We eventually got help from the sound guy and re-recorded it onto an SDCard, then used WinSCP to copy the file onto the Pi.

 

This project used a PIR sensor and a powered headphone speaker connected to the 3.5mm audio jack on the Pi.

 

The flow works as follows:

  1. The sensor goes high when motion is detected.
  2. To avoid multiple triggers we limit the flow to one message every 2s.
  3. This is then wired to the LED so we can see that the sensor has triggered an event.
  4. A random node creates values 1 or 2 and this passes to the switch to select one of the two outputs.
  5. Finally, the EXEC nodes run the command line "aplay" with one of two values, for a police siren or a recorded message (example commands below the flow diagram).

SensorFlow2.png
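
The exec node command lines were along these lines; the file paths here are illustrative rather than the actual recordings we used.

aplay /home/pi/sounds/siren.wav
aplay /home/pi/sounds/message.wav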

Trouble with servos

We tried to get the servos to work but they simply refused to co-operate. I've since followed up on this: we should have used the GPIO 2 pin as that has hardware PWM, and there's also a limited range of values that can be used. If we'd had time, I also had a Picon Zero with me which we could have used to control lots of servos.

ServoFlow.jpg

 

Mouse Toggle

David used a Bluetooth mouse to remote control his Pi's LED. This was one area where we could not work out how to complete the flow without code. Each mouse click turned the LED on or off.

Toggle.png

Toggle Code

var state = context.get('state')||0;
if (state == 0) {
    state = 1;
}
else {
    state = 0;
}
context.set('state',state)
msg.payload = state;
return msg;

 

 

Reference

Node Red

Playing Audio on the Pi

After my slightly jovial entry to the DragonBoard competition was accepted, I was sent a board to do my project with. I was also sent a US power supply, so I had to get an adaptor for it to work (luckily the supply was rated for 240v).

Entry.jpg

 

For setup, I was relegated to the bedroom as that was where the only TV with an HDMI connector was located.

I followed the quick start guide and booted into Android; there are some nice animated graphics when it boots. You then have to work out how to "swipe" the screen using just a track pad.

2016-05-04 20.41.39.jpg2016-05-04 20.33.03.jpg

This seemed to run OK but there was not much I could do with Android, and I then had issues connecting to the Wifi: mine appeared for long enough for me to enter the key but was then replaced by my neighbour's BT Home Hub.

 

So I switched to Linux, which can be installed from the SDCard

https://github.com/96boards/documentation/wiki/Dragonboard-410c-Installation-Guide-for-Linux-and-Android

 

To boot from SDCard you flip a switch on the back of the board. The DIP switch is minuscule! I used my smallest screwdriver to set it. I also seemed to be getting the wrong images; the key is to get one that says "SDCard install" in the name, and if you follow the link in the instructions that will get you to the right download. Even after this I had trouble with the first card that I'd imaged.

 

Finally I managed to get it all plugged in correctly (my HDMI connector was loose) and the flashing could begin.

Flashing01.jpgFlashing2.jpg

That process went smoothly; I took out the SDCard and rebooted into the Linux desktop.

LinuxDesktop.jpg

Next challenge: connecting to the Wifi and remote access via SSH.

Year-in-Review-2015-header.png

I had a great start to 2015: I'd just been selected to Road Test a Cel Robox 3D printer at the end of December and managed to get a detailed look at the inside of the printer and also do some test prints before the printer went off for an upgrade at the end of January.


I'd just re-launched my Workshopshed website and I found out on Jan 14th that I'd been made Member of the Month.

2015-01-01+23.45.27.jpg

http://www.element14.com/community/groups/3d-printing/blog/authors/Workshopshed

 

I intended to make some circuits using the touch sensor chips I'd experimented with the year before. To help with this I bought myself a temperature controlled soldering iron and some tweezers.

 

I started on a magnifier lamp project in February. This used some LEDs I'd bought many years before, some scrap metal and wood as well as a selection of 3D printed parts. After a strong start in Feb and March progress slowed and it was not till October that the lamp was finally completed. The lamp was designed using OpenSCAD and I've really improved my skill in that over the year.

2015-02-04+20.44.02.jpgWiredUp2.jpg

In March I applied for the Enchanted Objects Design Challenge, and my application was accepted. I knew this was going to be a lot of work as it was 16 weeks of blogging and making. In the end it was over 200 hours of designing and making electronics, configuring and coding embedded Linux, and designing and printing the mechanics. 41 blog posts charted the progress of the project, which was woven around the story of Hans and Matilda the young meteorologists. The challenge dominated my life from March through to the end of June.

 

At the end of March, Maplin awarded me a prize in their "Arduino Day" competition for my Topsy Turvy Clock project which was an added bonus.

 

In the summer, I met with shabaz and mcb1 whilst Mark was on his grand tour. An entertaining evening from what I can remember of it.

 

At the end of July I got a call from Dave to tell me that I'd won the challenge and a trip to New York and the Maker Faire which was fantastic news. What Dave did not mention was that I'd also have my pictures and name up in lights around New York!

8th+ave+%26+26th+-+03.jpg

The trip to the New York Maker Faire was an amazing weekend and I met loads of great people and saw their projects. It was a packed weekend and I can't thank Element14 enough for sending me.

I took Hans and Matilda along too but managed to lose them somewhere in the show ground. Luckily a Swiss company who make weather houses managed to send me some replacements later in the year.

2015-09-26+10.08.44.jpg

My reports from the faire made it far and wide; as well as the reports here, here and on the Workshopshed blog, they were also published in the Imperial Engineer (the alumni magazine for Imperial College) and in Model Engineer's Workshop magazine.

MEW+236+preview+cover.jpgImperialEngineer.png

That was a good end to my year, and I also won a copy of the BeagleBone Cookbook in November.

 

My final electronics project of the year was a Christmas Decoration in the form of a tree powered by an Adafruit Trinket.

 

 

So what's next for 2016? I plan to do a little more metal work than this year; I've a little Stirling engine I've been hoping to build for some time now. However, I do think I'll have a few electronics projects too; hopefully I am now in a position to build the rotary sensor I was researching a few years ago. I've also got a Raspberry Pi Zero and a BeagleBone Green to play with, so hopefully I'll be able to put the Linux skills picked up during this year's challenge to good use with those.

A simple project to flash some LEDs as is traditional at this time of year.

 

I took an Adafruit Trinket, which is an ATtiny based board, and added some transistors as simple open collector drivers. The resistors were carefully calculated by just grabbing a bunch that came with the bag of LEDs; looking at the photo, it looks like they are 200Ω.

 

Tree.png

trinket5.png

The Trinket is a bit of a tricky beast to program compared to the Uno, but the details are all covered on the Adafruit pages. You need to configure the IDE to have the board definitions and install a driver. You'll also likely need to run the IDE as root/admin, otherwise it does not communicate with the driver. You then switch to the USBtinyISP programmer and select Adafruit Trinket in the board menu.

To program it you have to reset the board before pressing the upload button; as you only have 10s to do that, it might be worth compiling first before pressing reset.

 

// Trinket pins driving the three LED strings via the transistor drivers
int LED1 = 0;
int LED2 = 1;
int LED3 = 2;
int LEDSpeed = 200; // delay between steps in milliseconds

void setup() {
  pinMode(LED1, OUTPUT);
  pinMode(LED2, OUTPUT);
  pinMode(LED3, OUTPUT);
}

// Cycle the three outputs on and off in sequence to give a simple chasing effect
void loop() {
  digitalWrite(LED1, HIGH);
  delay(LEDSpeed);
  digitalWrite(LED3, LOW);
  delay(LEDSpeed);
  digitalWrite(LED2, HIGH);
  delay(LEDSpeed);
  digitalWrite(LED1, LOW);
  delay(LEDSpeed);
  digitalWrite(LED3, HIGH);
  delay(LEDSpeed);
  digitalWrite(LED2, LOW);
  delay(LEDSpeed);
}

Tree3b.jpg

I was lucky enough to grow up at the time that home computing in the UK was taking off. Micro Live was on the telly and Sinclair User and Crash magazine were in the newsagents.

 

School

 

At school we had BBC computers; they first appeared in a little room that used to be a store room around the back of the library, then they got a whole classroom to themselves upstairs. Remarkably for the 1980s they were all networked, which allowed you to view and take over other terminals on the network and hence get into trouble with the teachers. I did not get too much out of the classes as they were still trying to get people to understand a simple loop construct and I was thinking about graphics algorithms in Z80 assembly language. As I was leaving, an Archimedes was added to the library lab and my friend Robert Harrison managed to get it to do amazing things with some ray tracing code he wrote. In the sixth form the BBCs were used for teaching us typing and doing educational puzzles. Computing was a separate subject and we did not use computers in other lessons. However, I did use the Spectrum in my coursework, with an X-Y plotter project for GCSE Design and Technology, and for my A-Level Chemistry I created an animated explanation of the making of sulphuric acid.

 

Games

 

At home we had a ZX Spectrum, initially a 48K one; then we had the keyboard and power supply upgraded to make it a Spectrum+. I don't remember having too much of an interest in games but when I came to drop off the boxes at The Centre for Computing History there were a lot more than I remembered. I had also amassed a wide selection of hardware, including a sound interface, speech synth, light pen and of course a joystick interface. Most of the things I played were demo versions which came on the covers of my favourite magazine, Sinclair User. As well as games on tape there were also listings in magazines that had to be painstakingly typed in. Luckily these could be saved to tape so you could stop for tea half way through. This was a good learning exercise as you got to study the code in detail and often had to debug it, as the listings were not always perfect.

Games 2014-08-16 19.57.51.jpg

 

Programming

 

We did learn a little BBC Basic at school and I also remember a summer camp over in the North East where we programmed a turtle using Logo. Most of my learning was self taught from books. I started with Basic on the Spectrum but quickly moved on to Z80 assembly language so that I could get the most out of this little computer. I also dabbled with Forth and Pascal. I did a lot of coding on paper, as the editor was loaded from tape, then the source code was saved and the assembler was loaded, the assembled version was saved and finally the debugger was loaded. The turn around from editing to debugging was maybe 30 minutes, which is why paper was quicker.

 

I attempted to build my own computer game so I could become rich and famous like the Darling brothers, Richard and David from CodeMasters.

CodeMasters.jpg

King Penguin

The idea of the game was that you were a penguin escaping from London Zoo, fighting your way across London to the docklands to get on a ship to Argentina. There were to be several different screens to work your way through and some challenging opponents, such as the man with the bowler hat and a politically incorrect Chinese tourist character. To make my game, I had to design the health display (in the form of a large mackerel that turned into a skeleton version), graphics, sound effects and control system. There was a lot of coding done on things such as mask generation and mirroring code so that I did not need to load so much from tape. I also spent one summer with my parents in Wales where I drew up a lot of the graphics on squared paper.

KP.jpgKP+Backgrounds.jpg

 

Unfortunately the whole project was abandoned when I went to university; it was much too ambitious a project and I should have started with something a bit simpler.

As part of the Enchanted Objects Design Challenge I wanted to draw up some circuit diagrams. Given that I was planning to mostly use modules, I chose Fritzing rather than Eagle as it has good sketch-like diagrams.

 

As I was hoping to use the RGB LED Shield, I needed to make a new part. That was my first problem: I could not work out how to create one. I found out that it was possible to edit an existing part and save it as a new part, so I took that approach. I saved an Arduino prototype shield as my starting point and edited it. I tried loading up JPG and PNG files but that simply did not work, so I swapped to SVG files and had some more success.

 

There are several steps to the process.

 

  1. Find a similar part and save it as a new part
  2. Enter the Meta data i.e. the name, url, part number etc.
  3. Enter the Connectors data, the name of each of the connectors to match the data sheet.
  4. Find or create an SVG file of the board or component. I had some help from shabaz here with an SVG file: RGB LED Shield diagrams for documentation purposes
    FritzingEditor.png
    • I tried various editors for SVG files (Visio, LibreOffice Draw) but Inkscape seemed to be the most reliable for creating files that Fritzing would load.
    • For text labels, make sure the font is OCR A
      FritzingFonts.png
    • Resize the page to the component
    • Make sure there is not a big white rectangle behind your component
    • Make sure each of the "pins" is a separate SVG object so you can wire it up later
    • In Inkscape I had better results when I saved as a "Simple SVG" file rather than the Inkscape SVG.
  5. Load the SVG file into the part for breadboard
  6. Wire up each of the connectors to the diagram
  7. Repeat 4 to 6 for the schematic and PCB
  8. Load up a picture for the icon, I just used the same one as for the breadboard.

 

You should now be able to use your part in a diagram. I've done a really simple one below in breadboard and schematic styles.

 

RGB Demo_bb.png

RGB Demo_schem.png

 

I've posted the part up on Github in the electronics folder

https://github.com/Workshopshed/EnchantedObjects

 

Note: I've not wired up the PCB part of it so if you need it to make a motherboard for a shield then you'll have to wire that up yourself.

Andy Clark (Workshopshed)

Updated blog

Posted by Andy Clark (Workshopshed) Top Member Jan 15, 2015

My making and repairing blog over at Workshopshed has had a bit of a make over, you should find it easier to find things now.