
Pi IoT

26 Posts authored by: Frederick Vandenbosch Top Member


The challenge may be over, but that doesn't mean the project cannot be further improved or expanded!


In this post, I will cover the notification feature for iOS devices using Prowl, which can be used to notify the homeowner in case of anomalies. An example could be that the garage has been opened while the key is still in the key holder, or that the front door has remained open longer than a certain amount of time. I covered this in the past, during the Forget Me Not Design Challenge, but as I'm now using OpenHAB 2, some steps in the deployment of the notification feature are different, hence this new, updated post.


OpenHAB 2


The main difference since last time is that I'm using the OpenHAB 2 beta, and not all bindings have been ported to it yet. As a consequence, I have to manually add the OpenHAB 1 Prowl binding to my OpenHAB 2 installation. Though I'm currently using this procedure for the Prowl binding, it should be applicable to any other OH1 binding not yet available for OH2, assuming they are compatible.




Because we will be running an OH1 addon in OH2, we need to verify this feature is enabled in OH2. This is done by manually starting OH2 and entering some commands at the prompt.


Manually start OH2:


pi@pictrl_livingroom:~ $ sudo /usr/share/openhab2/
Launching the openHAB runtime...

                          __  _____    ____
  ____  ____  ___  ____  / / / /   |  / __ )
 / __ \/ __ \/ _ \/ __ \/ /_/ / /| | / __  |
/ /_/ / /_/ /  __/ / / / __  / ___ |/ /_/ /
\____/ .___/\___/_/ /_/_/ /_/_/  |_/_____/
    /_/                        2.0.0.b3

Hit '<tab>' for a list of available commands
and '[cmd] --help' for help on a specific command.
Hit '<ctrl-d>' or type 'system:shutdown' or 'logout' to shutdown openHAB.



Verify the "openhab-runtime-compat1x" feature is installed; it was in my case:


openhab> feature:list | grep compat
shell-compat                              | 4.0.4            |          | Uninstalled | standard-4.0.4          | Karaf Shell Compatibility
openhab-runtime-compat1x                  | 2.0.0.b3         | x        | Started     | openhab-aggregate-xml   | Compatibility layer for openHAB 1 addons


If it isn't, install it:


openhab> feature:install openhab-runtime-compat1x


That should be enough to run OH1 addons in OH2.




Next, deploy the actual addon.


Go to the "/tmp" folder and download the addons from the openhab website:


pi@pictrl_livingroom:~ $ cd /tmp/
pi@pictrl_livingroom:/tmp $ wget


Unzip the package and move the desired addon to the OH2 addons folder:


pi@pictrl_livingroom:/tmp $ unzip
pi@pictrl_livingroom:/tmp $ sudo mv /tmp/org.openhab.action.prowl-1.8.3.jar /usr/share/openhab2/addons/


Clean up the remaining addons:


pi@pictrl_livingroom:/tmp $ rm -rf org.openhab.*


The addon is now deployed.




Finally, configure the addon with the necessary parameters. In the case of the Prowl notifications, an API key is required. This key can be obtained by creating a free account on the Prowl website.




Once you have a key, create the Prowl service config as follows:


pi@pictrl_livingroom:~ $ sudo nano /etc/openhab2/services/prowl.cfg
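The contents of the config file aren't shown above. As a minimal sketch, assuming the standard OH1 Prowl binding key names (treat these as an assumption, and check the binding's documentation), the file would contain at least the API key:

```
# /etc/openhab2/services/prowl.cfg -- minimal sketch, key names assumed
apikey=YOUR_PROWL_API_KEY
# optional: default priority for messages (-2 to 2)
defaultpriority=0
```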



Prowl is now ready for use!






On the server side, OpenHAB rules can be used to trigger notifications when a certain condition is met. A sample rule would look like this:


import org.joda.time.*
import org.openhab.model.script.actions.*

var Timer alertOn

rule "Alert Light"
when
    Item EnOcean_sensor_00298B1A_B received update
then
    sendCommand(TowerLight, 1)
    pushNotification("Alert!", "You have been summoned.")

    // cancel any previous timer before scheduling a new one
    if(alertOn != null) {
        alertOn.cancel
    }
    alertOn = createTimer(now.plusMinutes(5)) [|
        sendCommand(TowerLight, 0)
    ]
end



This rule will light up the tower light when the correct button is pressed and will, in addition, trigger a push notification.




On the client (your smartphone, tablet, etc ...), the Prowl app is required. Download and install the app, and log in with the same credentials used to request the API key. You are now ready to receive notifications!


When the above rule is triggered, a notification appears on the device:




Et voila, custom notifications!





Navigate to the next or previous post using the arrows.



A lot of online sources were used in the creation of my project. Though the sources have been linked in the relevant posts, I have summarised the complete list per subject right here for your convenience.




Raspberry Pi


Automatically copy "wpa_supplicant" file
Getting Raspberry Pi 3 UART to work
I2C level shifting: Is level shifting really needed for I2C?
Disabling Pi 3 onboard LEDs
Installing Chromium browser on Pi: How to get Chromium on Raspberry Pi 3 - Raspberry Pi Stack Exchange




Puppet

Puppet Documentation
Puppet Keynote by Luke Kanies: Puppet Camp London


Voice Control


Voice Control project on Raspberry Pi using PocketSphinx
Raspberry Pi 3 voice recognition performance: RoadTest Review a Raspberry Pi 3 Model B - Review
Various text-to-speech solutions for Raspberry Pi: RPi Text to Speech (Speech Synthesis)


Sense HAT


AstroPi Official Website
Sense HAT generic information

Sense HAT Python API

Calibrating Magnetometer
Joystick keycodes: Key codes - Qi-Hardware
Negative temperatures issue


Pi Camera


Enabling Pi Camera support via command line, without "raspi-config": How can I enable the camera without using raspi-config? - Raspberry Pi Stack Exchange
Video Surveillance OS for SBCs
Pi Smart Surveillance project: Raspberry Pi Smart Surveillance Monitoring System
MJPEG Streamer for SBCs


OpenHAB 2


Official Website: openHAB
Hue binding
Weather Binding
OH1 addons in OH2




EnOcean

Official Website
Previous challenge using EnOcean sensors: Forget Me Not Design Challenge
Visualise EnOcean sensor telegrams via command line: EnOceanSpy by hfunke
ESP3 Specification: EnOcean: Specification for EnOcean Serial Protocol 3 (ESP3)


Energy Monitoring


Open Energy Monitor Official Website
emonPi Kickstarter
emonSD Software Image




Adafruit


Python LED backpack library
I2S Audio Amplifier
Trellis Keypad




CNC


What is the ShapeOko 2: ShapeOko 2 - ShapeOko
What is the gShield
CNC Software


It's been a tough, stressful, but certainly fun three months competing in this challenge. As if the challenge itself wasn't challenging enough, I also moved house halfway through the challenge. Though the move was more time-consuming than originally anticipated, I managed to complete most of the objectives I had originally set.


This is my final post for element14's Pi IoT Design Challenge, summarising and demonstrating my project builds.




The following features were implemented, making several rooms smarter:

  • configuration management
  • monitoring
    • contact (doors or windows)
    • temperature
    • energy
    • video
    • key presence
  • control
    • lights
    • music
    • voice



Unfortunately I couldn't crack the code of my domotics installation yet, but help seems to be on the way.





To accommodate all of the above-mentioned features, five different devices were created:

  • a smart alarm clock
  • a touch enabled control unit
  • a smart key holder
  • two IP cameras
  • an energy monitor

Energy Monitor


The energy monitoring device makes use of an open source add-on board for the Raspberry Pi, called emonPi. Using clamps, it is able to measure the current passing through a conductor and convert it into power consumption. I combined the emonPi with a Raspberry Pi Zero and two current clamps: one to measure the power consumption of the shed, the other for the lab. This can of course be applied to any room, as long as the clamp is attached to the proper conductor.
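To illustrate the current-to-power conversion the clamp enables, here is a rough sketch, assuming a fixed 230 V mains voltage. The real emonPi also samples the voltage waveform, so this computes apparent power only and is purely illustrative:

```python
# Illustrative sketch: convert an RMS current reading from a clamp into
# apparent power, assuming a fixed 230 V mains voltage. The actual emonPi
# firmware also measures voltage, and computes real power and power factor.

def apparent_power(i_rms, v_rms=230.0):
    """Apparent power in volt-amperes, from an RMS current in amperes."""
    return v_rms * i_rms

# e.g. a hypothetical clamp reading of 2.5 A on the lab circuit:
print(apparent_power(2.5))  # 575.0 VA
```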


Want to know more about emonPi?:


IP Camera


Two IP cameras were installed for live monitoring: one in the lab, and one in the shed. Both make use of the Raspberry Pi Zero v1.3 with camera port. The video stream is converted to MJPEG and embedded in the matching OpenHAB view.



Key Holder


A mini build which was not originally planned, but which I thought would fit nicely in this challenge. The concept is simple: four connectors are provided to which keys can be attached. When a key is attached, a GPIO pin changes state, reporting the change to the control unit.


A future improvement could be to either use a different connector per key, or make use of different resistors and an ADC to know which key is inserted where.


The full project is described in a dedicated blog post:


Alarm Clock


The idea of the smart, voice-controlled alarm clock started in 2014. The result was a functional prototype, but too slow and bulky to be really useful. This challenge was the perfect opportunity to revisit this project, and I'm quite happy with the way it turned out!


Here's a side-by-side comparison:



The original Raspberry Pi 1 B with Wolfson audio card has been replaced by the new Raspberry Pi 3 B with USB microphone and I2S audio module. The difference in performance is incredible. The result is a near real-time, voice controlled device capable of verifying sensor status, fetching internet data such as weather information or even playing music.


Most of the work was done for this device, and simply reused by the others. The posts cover voice control, setting up OpenHAB, controlling displays, and much more:


Control Unit


The Control Unit has the same guts as the alarm clock: I2S audio, USB microphone, speaker, Raspberry Pi 3, etc ... It does however add a keypad and touch screen, allowing control via touch on top of voice. The keypad switches between different webpages on the touch screen, which is locked in kiosk mode.


The touch screen can be used to trigger actions, visualise historic data (power consumption, temperature), consult the weather, etc ...




You can find the relevant posts below:




Various demonstrations were already made over the course of the challenge. But as this is a summary post, I've created a video showcasing the entirety of the project. Hope you like it!




Because this project wouldn't have been possible without the plethora of online content and tutorials that allowed me to combine and modify functionality to give it my own twist, I am publishing all the code created as part of this challenge in a dedicated GitHub repository. You can find it here:


The repository contains the Python scripts, Puppet modules and diagrams, all categorised in a way I thought would make sense. I will make sure the repository is updated as soon as possible!




I'd like to thank element14, Duratool, EnOcean and the Raspberry Pi Foundation for organising and sponsoring another great challenge. It's been a wild ride, thank you! I would also like to thank element14Dave, fellow challengers and members for their input and feedback over the course of the challenge. Finally, a big thank you to my wife and kids for allowing me to participate and even helping me do the demonstrations!


Time for some rest now, and who knows, perhaps we'll meet again in a future challenge.






This post is about a mini project that I suddenly thought of during the challenge and thought would fit well as part of the larger project. The idea was to make a key holder accommodating up to four different (sets of) keys. It serves two purposes: a fixed place to hang our keys (we tend to misplace them a lot!) and, assuming proper use, an alternative/additional presence check.





For the key holders, I decided to use stereo jacks and panel mount connectors. By shorting the left and right channel in the jack, a loop is created. On the connector, the left channel connects to ground, and the right channel connects to a GPIO pin with an internal pull-up resistor. When the jack is not inserted, the GPIO is HIGH; when inserted, LOW. There is no differentiator per key at the moment, but one could be added in a future version in different ways:

  • Rather than just pulling to GND, resistors could be used, resulting in different analog values, each unique per key. This will require the use of an ADC.
  • Use a different connector set per key, making it impossible to connect in any other slot.
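The resistor-based idea from the first bullet could be sketched like this; the nominal ADC levels, tolerance and key names below are purely hypothetical, not values from the build:

```python
# Hypothetical sketch of the resistor/ADC idea: each key's jack pulls the line
# to a different voltage, so the ADC reading identifies which key is inserted.
# The nominal levels (for a 10-bit ADC, 0-1023) and key names are made up.

KEY_LEVELS = {"front door": 200, "garage": 400, "shed": 600, "car": 800}
TOLERANCE = 50  # margin for resistor and ADC inaccuracy

def identify_key(adc_value):
    """Return the name of the key matching the reading, or None for an empty slot."""
    if adc_value > 950:  # pulled fully high by the pull-up: no jack inserted
        return None
    for name, level in KEY_LEVELS.items():
        if abs(adc_value - level) <= TOLERANCE:
            return name
    return None  # reading doesn't match any known key
```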


To have everything removable/replaceable, I used male header pins on the connectors and Dupont wires. The ground wire is daisy-chained across all four connectors. This results in a total of five connections to the Raspberry Pi's GPIO header: four GPIO pins and one ground. As a visual aid and indication, every connector is associated with an LED of a certain colour. When the jack is plugged in, the LED is turned off; when removed, turned on. The LEDs are located on a small board called Blinkt!, which fits straight on the GPIO header. Using the Python library, the individual LEDs can be controlled.


Finally, to turn this key holder into an IoT device, whenever a jack is inserted or removed, an MQTT message is published to the control unit, which can then visualise the status in OpenHAB. From there, rules can be associated with these events. What if the shed was opened while the key was still in place?


Enjoy the gallery illustrating the build process and final result, just after a quick explanation of the code!




The code is straightforward, and using the GPIO Zero library for the first time made it even simpler! Basically, the four GPIO pins are checked in an infinite loop. Depending on the state, the matching LED is set or cleared, and an MQTT message is sent.
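As a sketch of that loop's state handling, with the hardware access stubbed out so the logic can be shown on its own (the pin numbers, MQTT topic and payloads here are assumptions, not taken from the actual build):

```python
# Sketch of the key holder logic with hardware stubbed out. In the real script,
# GPIO Zero reads the four jack pins, Blinkt! drives the LEDs and an MQTT client
# publishes the messages; the topic and payload names below are assumptions.

KEY_PINS = [5, 6, 13, 19]  # hypothetical BCM pins, one per connector

def update_slot(slot, jack_inserted, led_states, publish):
    """Mirror one slot's state: the LED is lit only while the key is absent,
    and a message is published whenever the state changes."""
    led_on = not jack_inserted
    if led_on != led_states[slot]:
        led_states[slot] = led_on
        publish("home/keyholder/%d" % slot, "removed" if led_on else "inserted")

# the real script calls update_slot for each of the four pins in an infinite loop
```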





{gallery} Key Holder


Connectors: Four sets of connectors are used to connect the keys


Headers: Using male headers, all pieces can be connected/disconnected easily


Wiring: Testing the wiring. Ground is daisy-chained to all connectors


Pi Zero: A Raspberry Pi Zero is used to keep everything compact


Panel: Mounting the connectors and LEDs to an acrylic panel


Assembled: The fully assembled electronics


Hook: Twisting copper wire in a nice loop


Soldering: Soldering the loop onto the connector


Enclosure: Stacking and glueing pieces of wood to form an enclosure


Finish: A bit of sanding and rounding of the edges


Tadaaaa: The finished result on the cabinet


Tadaaaa #2: The finished result on the cabinet








Not all blog posts can be about successful implementations or achievements. Sometimes, failure happens as well. This is the case for my domotics implementation. Does that mean I have given up on getting it to work? Certainly not, but I'm stuck and don't have the luxury of time, so close to the deadline with plenty of other things left to do.


Here's what I did manage to figure out so far ...




As you may or may not know, I moved house during the challenge, at the beginning of July. The new house has a domotics installation by Domestia, a Belgian domotics brand from what I could find.


The installation consists of two relay modules, capable of turning lights and outlets on or off. There are also two dimmer modules for lights. When we started replacing the halogen bulbs with LED ones, we noticed the dimmers no longer worked, and had to replace them with LED-compatible ones.

Next to the electrical wires, the modules have a three-way connector labeled A, B and GND. Searching the datasheets, it is explained that the domotics modules are connected to an RS485 bus for communication.


The wiring is illustrated in the module's manual:



The RS485 bus could be an entry point in reading the lights or outlets' status, and eventually control them.


Here's what it looks like in real life:


The RS485 bus can be accessed via the dimmer's blue, green and orange wires, labeled A, B and GND.




According to this, the pins' functions are the following:

  • A: Data+ (non-inverted)
  • B: Data- (inverted)
  • GND: ground


I started by first connecting my oscilloscope to the bus, verifying there is activity. Probe 1 was connected to line A, probe 2 to line B. This is what I saw:



Three things can be observed/confirmed at a glance:

  • there is a constant flow of data
  • there is a short sequence followed by a long one: request vs response?
  • line B is indeed an inverted version of line A


Knowing there is data present, I could perhaps find a script or piece of software able to decode the data. For that purpose, I bought a generic RS485 to Serial USB module.



Using a basic serial tool, I was able to dump the raw hexadecimal data. A new observation is that every sequence starts with the hexadecimal value "0x0C".


With a script I found and modified to suit my needs, I captured the raw data and jumped to a new line every time the "0x0C" value appeared.


#!/usr/bin/env python

# Original script from
# Modified to print full hex sequences per line instead of individual values

import serial
import binascii

ser = serial.Serial()
data = ""

def initSerial():
    global ser
    ser.baudrate = 9600
    ser.port = '/dev/tty.usbserial-A50285BI'
    ser.stopbits = serial.STOPBITS_ONE
    ser.bytesize = 8
    ser.parity = serial.PARITY_NONE
    ser.rtscts = 0
    ser.open()

def main():
    global data
    while True:
        mHex = ser.read()                  # read one byte at a time
        if len(mHex) != 0:
            mByte = binascii.hexlify(bytearray(mHex))
            if mByte == "0c":              # sequence delimiter: print and start a new line
                print data
                data = mByte
            else:
                data = data + " " + mByte

if __name__ == "__main__":
    initSerial()
    main()


Some of the captured sequences:


0c 08 08 08 08 08 08 08 08 08 08 18 08 a8 08 ff 08 ff
0c 08 08 08 08 08 08 08 08 08 08 18 08 a8 08 ff 08 ff
0c 08 08 08 08 08 08 08 08 08 08 18 08 aa 08 ff 08 fe
0c 08 08 08 08 08 08 08 08 08 08 18 08 a8 08 fe 08 ff
0c 08 08 08 08 08 08 08 08 08 08 18 08 a8 08 ff 08 fe
0c 08 08 08 08 08 08 08 08 08 08 18 08 a8 08 fe 85 ff
0c 08 08 08 08 0a 08 08 0a 08 08 18 08 a8 08 fe 08 ff
0c 08 08 08 08 08 08 08 08 08 08 18 08 a8 08 ff 08 ff
0c 08 08 08 08 08 08 08 08 08 08 18 08 a8 08 ff 08 ff
0c 08 08 08 08 08 08 08 08 08 08 18 08 a8 08 ff 08 ff
0c 08 08 08 08 08 08 08 08 08 08 18 08 a8 08 fe 08 ff
0c 08 08 08 08 08 08 08 08 08 08 18 08 a8 08 ff 85 ff
0c 0a 08 08 08 08 08 08 08 08 08 18 08 a8 08 ff 08 fe
0c 08 08 08 08 08 08 08 08 08 08 18 08 aa 08 fe 85 ff 22 20
0c 08 08 08 08 0a 08 08 08 18 08 a8 08 ff 84 ff
0c 08 0a 08 08 08 08 08 08 08 08 18 08 a8 08 ff 08 ff
0c 08 08 08 08 08 08 08 08 08 08 18 08 a8 08 ff 85 ff 22 20
0c 08 08 08 08 08 08 08 08 18 08 a8 08 ff 08 ff
0c 08 08 08 08 08 08 08 08 08 08 18 08 a8 08 ff 08 fe
0c 08 08 08 08 08 08 08 08 08 08 18 08 a8 08 fe 08 ff
0c 08 08 08 08 08 08 08 08 08 08 18 08 a8 08 ff 08 ff
0c 08 08 08 08 08 08 08 08 08 08 18 08 a8 08 ff 08 fe
0c 08 08 08 08 08 08 08 08 08 08 18 08 a8 08 ff 85 ff
0c 08 08 08 08 08 08 08 08 08 08 18 0a a8 0a fe 08 ff
0c 08 08 08 08 08 08 08 08 08 08 18 08 a8 08 fe 08 ff
0c 08 08 08 08 08 08 08 08 08 08 18 08 a8 0a ff 08 fe
0c 08 08 08 08 08 08 08 08 08 08 1a 08 a8 08 ff 08 ff
0c 08 08 08 08 08 08 08 08 08 08 18 08 a8 08 ff 08 ff
0c 08 08 08 08 08 08 08 08 08 08 18 08 a8 08 ff 85 ff
0c 08 08 08 08 08 08 08 08 08 08 18 08 a8 08 fe 08 fe
0c 08 08 08 08 08 08 08 08 08 08 18 08 a8 08 ff 08 fe
0c 08 08 08 08 08 08 08 08 08 0a 18 08 a8 08 ff 08 ff
0c 08 08 08 08 08 08 08 08 08 08 18 08 a8 08 fe 08 ff 22 20
0c 08 08 08 08 08 08 08 08 18 08 a8 08 ff 08 ff
0c 08 08 08 08 08 0a 08 08 08 08 18 08 a8 08 fe 08 ff
0c 08 08 08 08 08 08 08 08 08 08 18 08 a8 08 ff 08 fe 22 20


There is a very repetitive pattern, with occasionally different values. But what does it do or mean?
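To narrow down which parts of the frames actually carry information, a small helper could compare the captured sequences byte by byte and report the positions that ever change. This is only an analysis aid I'd suggest, not something from the original capture setup; the sample lines are taken from the dump above:

```python
# Compare captured RS485 frames byte-by-byte and report which positions vary.

def varying_positions(lines):
    """Return the byte positions whose value differs across the captures."""
    rows = [line.split() for line in lines]
    width = min(len(row) for row in rows)  # ignore the occasional longer frame
    varying = []
    for pos in range(width):
        if len({row[pos] for row in rows}) > 1:
            varying.append(pos)
    return varying

captures = [
    "0c 08 08 08 08 08 08 08 08 08 08 18 08 a8 08 ff 08 ff",
    "0c 08 08 08 08 08 08 08 08 08 08 18 08 aa 08 ff 08 fe",
    "0c 08 08 08 08 08 08 08 08 08 08 18 08 a8 08 fe 08 ff",
]
print(varying_positions(captures))  # positions 13, 15 and 17 differ here
```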




This is where I got blocked. This is a bit too low-level for me, so any help would be greatly appreciated! Before being able to go any further, I need to be able to make sense of the data. Until then, this feature will be parked. The goal is still to be able to control and monitor the domotics, but sadly it most likely won't be achieved in this challenge.


Now, if you do have knowledge or know about tools which could help me further, feel free to leave a comment below.








No time to go out on a Friday night, only a couple of days before the challenge's deadline. Instead, I decided to annoy the neighbours by doing some final milling and sanding ... So, as promised, here's the enclosure for the second control unit. Unlike the alarm clock, this unit makes use of a touch screen and keypad for user input, on top of the voice commands. Because of these components, it is also quite a bit larger than the alarm clock. It will be sitting on the cabinet.


Here's what I've done with it and how I got there ...




This unit was too large to cut solely with the CNC. The board to cut from was so large I couldn't clamp it normally and had to resort to the alternative methods demonstrated below. The CNC was used to mill the slots in the front and top panel, just within the maximum supported width of my CNC.

To actually cut the different panels out of the board, I used the classic method: the table saw. Using the router, I manually made the grooves, trimmed the pieces to fit and rounded the edges.




Using some wood glue and clamps, the pieces were attached to each other. This unit required a lot more manual work than the alarm clock, but was clearly faster for some actions, though not always as accurate as the CNC. I suppose accuracy in manual actions comes as experience is gained.






Milling acrylic using the CNC required a few attempts before achieving clean results. During the initial runs, the mill's feed rate was too low, causing the acrylic to heat up too much, melt and stick to the milling bit. This in turn, caused damage to the piece because of the molten blob swinging around.


By increasing the feed rate to 1000 mm/min, with passes of 0.7 mm, the mill travelled fast enough to cut without melting, resulting in cleanly cut pieces, as demonstrated below.




Manual Router


To compensate for possible inconsistency issues due to the manual cutting and assembling of this enclosure, the side panels would have to be measured and drawn individually for milling. A much easier and faster approach was to glue a slightly larger, roughly cut piece of acrylic to the sides and use a flush trim router bit.



The flush trim bit has a bearing which follows the shape of the wooden enclosure it is rolling on, while cutting the acrylic to the same shape.


Before and after a manual flush trim:



A bit of sanding will ensure everything is smooth and soft to the touch.




So, after all the sanding, glueing, filling, milling, etc ... I showed it to the wife, and I was allowed to put it on the cabinet.


Here's the result:




It's a bit of a pity the touch screen's border is black. I'm thinking I could get some white film to stick on the edges of the display, giving it a white border.


By the way, I feel it looks like a microwave or retro TV. Can anyone confirm or deny this?








In order to be able to visualise the home control interface on the touch screen, a browser is required. The resolution of the touch screen is limited to 800x480, so every pixel counts. By putting the browser in full screen mode and hiding all the navigation bars, maximum space is made available. This is often referred to as "kiosk mode".





Rick has already demonstrated how to put the stock browser "Epiphany" in kiosk mode. In order to try something different and be able to compare with Rick's solution, I decided to use the Chromium browser instead.


Chromium is not available in the default repositories. But according to this thread, Chromium can be sourced from the Ubuntu repositories and installed on Raspbian Jessie.


First, add the new source:


pi@piiot1:~ $ sudo nano /etc/apt/sources.list.d/chromium-ppa.list

deb vivid main


Apply the key to verify the downloaded packages:


pi@piiot1:~ $ sudo apt-key adv --keyserver --recv-keys DB69B232436DAC4B50BDC59E4E1B983C5B393194


Update your package list and install chromium:


pi@piiot1:~ $ sudo apt-get update
pi@piiot1:~ $ sudo apt install chromium-browser


Test the installation by launching the browser. I tried it via SSH and got the following error:


pi@piiot1:~ $ chromium-browser
[16670:16670:0818/] Gtk: cannot open display:


To solve this issue, specify which display to launch the browser on (the touch screen):


pi@piiot1:~ $ chromium-browser --display=:0


Tadaaa! Chromium is installed and running on Raspberry Pi.




With Chromium installed and executable, let's take a look at some interesting switches. Switches are command line parameters that can be passed when launching Chromium, altering its behaviour and/or appearance.


For my application, these seemed like the most relevant switches:

  • --display: specify the display to launch the browser on
  • --kiosk: enable kiosk mode, full screen without toolbars or menus
  • --noerrdialogs: do not display any error dialogs
  • --disable-pinch: disable pinching to zoom
  • --overscroll-history-navigation: disable swiping left and right to navigate back and forth between pages


Launching the full command can then be done as follows:


pi@piiot1:~ $ chromium-browser --display=:0 --kiosk --noerrdialogs --disable-pinch --overscroll-history-navigation=0




At startup, the Chromium browser is started with different tabs. These tabs are not visible due to kiosk mode though (and can't accidentally be closed either). In order to navigate between these tabs and refresh their content, we need to know how to simulate the correct keypresses that trigger the tab switching.


This is done as follows:


pi@piiot1:~ $ xte "keydown Control_L" "key 3" "keyup Control_L" -x:0 && xte "key F5" -x:0


What this does is switch tabs by simulating the "CTRL + <TAB_ID>" combination, optionally followed by an "F5", refreshing the selected tab.




In order to implement this tab switching functionality, I'm using the 4x4 button matrix called Trellis, which I introduced in my previous post. It connects to the I2C pins and requires two software libraries to be installed.


On the hardware side, nothing fancy: connect the Trellis to the I2C pins and power it via the 5V pin:




On the software side, start by installing some dependencies, if not yet installed:


pi@piiot1:~ $ sudo apt-get install build-essential python-pip python-dev python-smbus git


Download and install Adafruit's Python GPIO library:


pi@piiot1:~ $ git clone
pi@piiot1:~ $ cd Adafruit_Python_GPIO/
pi@piiot1:~/Adafruit_Python_GPIO $ sudo python setup.py install
pi@piiot1:~/Adafruit_Python_GPIO $ cd ..


Download Adafruit's Python Trellis library, in order to apply some changes before installation:


pi@piiot1:~ $ git clone
pi@piiot1:~ $ cd Adafruit_Trellis_Python/


Update the Trellis library to make use of the GPIO library:


pi@piiot1:~/Adafruit_Trellis_Python $ nano
        #import Adafruit_I2C
        import Adafruit_GPIO.I2C as I2C
        def begin(self, addr = 0x70, bus = -1):
                """Initialize the Trellis at the provided I2C address and bus number."""
                #self._i2c = Adafruit_I2C.Adafruit_I2C(addr, bus)
                self._i2c = I2C.Device(addr, bus)


Install the updated library:


pi@piiot1:~/Adafruit_Trellis_Python $ sudo python setup.py install
pi@piiot1:~/Adafruit_Trellis_Python $ cd examples/


Depending on whether or not the I2C address was changed by shorting some pads at the back, adapt the code to take into account the correct address:


pi@piiot1:~/Adafruit_Trellis_Python/examples $ nano
trellis.begin((0x72, I2C_BUS))


Test the installation by running the example script:


pi@piiot1:~/Adafruit_Trellis_Python/examples $ sudo python
Trellis Demo
Press Ctrl-C to quit.


The Trellis keypad is now installed and usable!




By adding the "xte" commands to a modified Trellis Python script, the tab switching is implemented.


The below script does the following:

  • stop all running chromium browsers and launch a new one with 4 tabs
  • listen for keypresses on the first 4 buttons
  • if a button is pressed, turn off the previous button's LED, and turn on this one
  • select the matching tab in the browser


#!/usr/bin/env python

import time
import os
import Adafruit_Trellis

trellis = Adafruit_Trellis.Adafruit_TrellisSet(Adafruit_Trellis.Adafruit_Trellis())
trellis.begin((0x72, 1))

# start with all LEDs off
for i in range(16):
  trellis.clrLED(i)
trellis.writeDisplay()

os.system("sudo killall chromium-browser ; chromium-browser --display=:0 --noerrdialogs --kiosk --disable-pinch --overscroll-history-navigation=0 &")

j = 0

while True:
  time.sleep(0.03)  # the Trellis needs a short delay between polls
  if trellis.readSwitches():
    for i in range(0, 4):
      if trellis.justPressed(i):
        # turn off the previous button's LED and light the new one
        trellis.clrLED(j)
        j = i
        trellis.setLED(j)
        trellis.writeDisplay()
        # select the matching browser tab
        os.system("xte 'keydown Control_L' 'key " + str(i+1) + "' 'keyup Control_L' -x:0")


The visual result:




Almost there! I should now finalise the enclosure of the second unit, finish the voice commands and define the different views. *stress level rising*







Starting this challenge, I set out to build not one, but two control units. The idea behind this was that a single control unit would require the user to move to that room in order to be able to trigger actions (aside from using a smartphone of course). That's why I planned to have a control unit in the bedroom (the alarm clock) and one in the living room. The hard part, figuring out the software side of things, has already mostly been done while developing the alarm clock. This can now easily be reproduced for a second control unit, using the Puppet modules I created as part of the challenge. The biggest difference is that this second unit will provide a touch screen interface, making more data available at a glance than the alarm clock.


Let's see what the main differences will be


Touch Screen


This control unit will make use of the Raspberry Pi 7” Touchscreen Display as provided in this challenge's kit. You may already have seen it make an appearance in some demo footage when I was demonstrating the EnOcean sensors or Tower Light in OpenHAB.

Because the touch screen's resolution is limited, different views will be created, each focusing on a different aspect of my smarter spaces.


The browser will be used in full screen, kiosk mode, similar to what Rick has done in his [Pi IoT] Hangar Control #5.1 -- Raspberry Pi Kiosk, The Movie  post. A mechanism will be provided to switch between the different web pages, as kiosk mode hides all navigation bars and buttons in exchange for more screen space.


Button Matrix


What's a design challenge, if not the chance to experiment with new things? I came across this interesting I2C 4x4 keypad with silicone elastomer buttons and integrated LEDs, called Trellis. You can pick any 3mm diffused LED to suit your project and solder them on the board. Obviously, I picked white for this project. The LEDs can be controlled independently of the buttons as well, making it possible to blink a button in order to draw attention to a certain action, or keep the last pressed button lit, for example.


Similarly to the 8x8 LED matrix, pads can be shorted on the back using solder, in order to change the I2C address. As I already used 0x70 for the 7-segment display and 0x71 for the 8x8 LED matrix, I decided to use 0x72 for this keypad. This would allow me to combine all three in the code without creating any conflicts.


Here's a quick animation of the keypad's example program:





This unit will obviously also require an enclosure. The same wood and acrylic highlights will be used, making a consistent set. I'm still trying to figure out which shape to go for and how to tackle it, but here's a first attempt at the front panel:



To save on milling time, I'm trying a combination of CNC routing for the outer edges and manual routing to create grooves and depth. This drastically reduces CNC time!


Time is running out!




Navigate to the next or previous post using the arrows.



I just realised I didn't explain the audio amplifier of my alarm clock introduced in the wiring post. Silly me ... Anyway, as you may have seen from the wiring in my previous post, I'm using the I2S pins to get the audio from the GPIO header and amplify it. I2S (Inter-IC Sound) is a digital audio protocol, meant to pass audio data between integrated circuits. Getting audio from the I2S pins requires some configuration so the onboard stereo jack output is no longer used.




To amplify the audio, I'm using a small 3W amplifier breakout board from Adafruit. It takes the digital audio signal as an input, converts it to analog and amplifies it, outputting straight to a speaker.


It connects to the Pi's GPIO as follows:

Screen Shot 2016-08-16 at 19.33.21.png


The little board costs about $6, making it a cheap and compact solution for projects like this one. The gain can be adjusted using the gain pin and resistors of different values, but leaving it disconnected works fine too, resulting in a 9dB gain. Other possible values are 3dB, 6dB, 12dB and 15dB.




I already did some reconfiguration of the audio when I added my USB microphone for voice control, mainly to configure a different card for the default capture and playback devices. In my original tests, I was using a powered speaker connected to the stereo output jack of the Pi. Now, the Pi needs to be told to use the I2S audio rather than the stereo jack.




Using "aplay -l" and "arecord -l" we can list the current playback and recording devices. The situation is as follows:


pi@piclock:~ $ aplay -l
**** List of PLAYBACK Hardware Devices ****
card 1: ALSA [bcm2835 ALSA], device 0: bcm2835 ALSA [bcm2835 ALSA]
  Subdevices: 8/8
  Subdevice #0: subdevice #0
  Subdevice #1: subdevice #1
  Subdevice #2: subdevice #2
  Subdevice #3: subdevice #3
  Subdevice #4: subdevice #4
  Subdevice #5: subdevice #5
  Subdevice #6: subdevice #6
  Subdevice #7: subdevice #7
card 1: ALSA [bcm2835 ALSA], device 1: bcm2835 ALSA [bcm2835 IEC958/HDMI]
  Subdevices: 1/1
  Subdevice #0: subdevice #0


pi@piclock:~ $ arecord -l
**** List of CAPTURE Hardware Devices ****
card 0: Device [USB PnP Sound Device], device 0: USB Audio [USB Audio]
  Subdevices: 1/1
  Subdevice #0: subdevice #0


Device Tree Overlay


A device tree is the description of the hardware in a system. Luckily for me, the device tree overlay of the HiFiBerry DAC can be reused for this little amplifier board, making the configuration extremely easy. In the Pi's config file, the onboard audio device tree parameter needs to be commented out and the hifiberry-dac overlay needs to be enabled.


pi@piclock:~ $ sudo nano /boot/config.txt

# Enable audio (loads snd_bcm2835)
#dtparam=audio=on
dtoverlay=hifiberry-dac


A reboot is required to apply the changes:


pi@piclock:~ $ sudo reboot




After the reboot, using the same commands we used before applying the device tree overlay, it is possible to see the change has successfully been applied:


pi@piclock:~ $ aplay -l
**** List of PLAYBACK Hardware Devices ****
card 1: sndrpihifiberry [snd_rpi_hifiberry_dac], device 0: HifiBerry DAC HiFi pcm5102a-hifi-0 []
  Subdevices: 1/1
  Subdevice #0: subdevice #0


pi@piclock:~ $ arecord -l
**** List of CAPTURE Hardware Devices ****
card 0: Device [USB PnP Sound Device], device 0: USB Audio [USB Audio]
  Subdevices: 1/1
  Subdevice #0: subdevice #0


The playback device has changed, the capture device hasn't. Exactly what was needed.




Now, what better way to test the new audio config than by turning on the radio using voice control?








With the enclosure finished, the next step is to wire everything together, as so far, most components have either been used individually or were connected using a breadboard: a more permanent solution is required.


Proto Board


Because different components require the same pins (5V, GND, I2C), a prototyping HAT was used to provide multiple connections. The board was cut in such a way that one row of 90° male header pins could be soldered on to make connections. On the bottom side, adjacent pins were connected, creating different groups, each associated with a certain GPIO pin.


The required connections are:


7-Segment display:
  • +5V
  • GND
  • GPIO 2 (I2C SDA)
  • GPIO 3 (I2C SCL)

8x8 Matrix display:
  • +5V
  • GND
  • GPIO 2 (I2C SDA)
  • GPIO 3 (I2C SCL)

LED Button:
  • +5V
  • GND
  • GPIO 6

I2S Audio Amplifier:
  • +5V
  • GND
  • GPIO 18 (I2S BCK)
  • GPIO 19 (I2S LRCK)
  • GPIO 21 (I2S DOUT)




I used some leftover LED leads I had cut off to make the connections on the proto board.


Colour Coding


To facilitate the identification of wires and possible troubleshooting, colour coded wires were used.


In the wiring below, the following colours were used:

  • RED: +5V
  • PURPLE: button




To keep things removable, female Dupont wires were used. Since these are off the shelf wires, they are a bit too long, but there is enough space inside the enclosure to fit everything.




One thing to pay attention to with this semi-transparent enclosure is the red and green glow created by the Pi's activity and power LEDs. In the dark, they manage to shine through the white acrylic. There is a method to disable these LEDs in software, which is the preferred option, but it doesn't seem to work (yet?) for the power LED on the Pi 3.


Turning off the activity LED works though, and can be done as follows:


pi@piclock:~ $ sudo nano /boot/config.txt

# Disable the ACT LED
dtparam=act_led_trigger=none
dtparam=act_led_activelow=off


As a workaround, until a software solution is available for the power LED, I used a little piece of tape to mask the LED in a non-permanent way. For a permanent solution, a dab of nail polish could be used.

A single piece of tape wasn't enough to stop the light from shining through (those things are bright!), so I ended up adding four layers of tape to cover the LED!


The light can no longer shine through, and the other components don't have any power/activity LEDs either. The only light now shining is the white light from the 7-segment display, the LED matrix and the button.




To test the wiring, I ran the different scripts controlling the various components: everything worked as expected. In the pictures below, I'm testing different weather icons: sun and rain. It's hard to take decent pictures due to the brightness (the display looks more blurred in the pictures than it actually is), but it looks great in real life, slightly diffused by the acrylic.
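For reference, icons like these are just 8x8 bitmaps, one byte per row. Here's a minimal sketch of how such an icon can be defined and previewed in the terminal before sending it to the matrix (the sun pattern below is a hypothetical example, not the exact icon from my scripts):

```python
# A hypothetical 8x8 "sun" icon: one byte per row, MSB = leftmost pixel
SUN = [
    0b00000000,
    0b01000100,
    0b00111000,
    0b01111100,
    0b01111100,
    0b00111000,
    0b01000100,
    0b00000000,
]

def render(icon):
    """Preview an 8x8 bitmap in the terminal as '#' and '.' characters."""
    return "\n".join(
        "".join("#" if (row >> (7 - col)) & 1 else "." for col in range(8))
        for row in icon
    )

print(render(SUN))
```

Previewing in the terminal makes it much quicker to tweak a design than re-flashing it to the matrix every time.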



Wiring: complete!







In my previous post, I showed you the start of the enclosure: the front panel. I have since been working on the rest of the enclosure, trying to figure out which style to go for, one piece at a time. I'm a software guy, not a product designer (or a woodworker for that matter), but I enjoy experimenting and giving projects a finished look.


So without further ado, I present to you the completed Pi IoT  Alarm Clock enclosure in animated GIF form:


2016-08-13 23_59_23.gif


Of course, a full demo of its features will be made, but for now, let's focus on the enclosure itself. I hope you like what I've done with it. There are some imperfections, but overall, I'm very happy with the way it turned out!






Continuing to work with the same board the front panel was made of, I made two hollowed out side pieces. The first piece houses a speaker which sits comfortably inside the hollowed out space, with just enough tension to keep it in place. The other side remains hollow, providing access to the USB microphone of the Raspberry Pi. The more the hardware components are housed inside the walls of the enclosure, the more space is left in the center for wiring and easy access to the Raspberry Pi itself.





The back panel was made in the same style as the front one. A piece of acrylic is glued in the back of the piece, and a hole was made to insert a power connector. Inside, the power input is split to power the Raspberry Pi, audio amplifier and the displays.





There were some difficulties making the top. The first one is that the board was 18mm thick. Slapping almost 2cm on top (and later the bottom) of the front would render the build way too bulky. Using a router, I was able to slim down the piece to a height of approximately 5mm. Needless to say, that looked a lot better, but it did make the piece more fragile as well, as can be seen in the middle picture. After gluing it back together using wood glue and attaching the acrylic, the piece became more solid and it was as if it never broke in the first place. The top piece was then glued to the frame (front, sides, back) and a bit of wood filler was used to mask imperfections.




Finally, the bottom part was created (just today!). It is made such that the rest of the frame slides on top of it, with enough tension to keep everything in place. The Raspberry Pi 3 will be mounted on that piece, allowing easy access in case of maintenance. As a finishing touch, transparent rubber feet have been added to the bottom side, preventing the enclosure from slipping, while slightly elevating it.




Well, not sure if these can be called files, but here are the links to the different parts created in Easel. If you are using Easel, you can clone my parts and modify them as you please:



That's it! It's been a fun week of experimenting and figuring out how to put everything together. Stay tuned for the next post, the end of the challenge is nearing rapidly!







Found some time this weekend to start the actual build of the alarm clock. I started with the front panel and thought I'd collect some feedback on the progress so far. So be sure to let me know what you think in the comments!




To make the necessary cutouts for the clock display and button, I'm using my ShapeOko2 desktop CNC machine. It uses a Dremel as the milling tool and is controlled by an Arduino UNO with a gShield, which drives the CNC's stepper motors.

On the software side I'm using Easel, Inventables' web-based all-in-one software application for CNC milling. It combines the CAD software to create the design, the CAM software to set the tool paths, and additional software to send the resulting G-Code to the Arduino.




Since it was my first time milling a solid board like this one, I used conservative milling speeds and depths to avoid breaking anything or having the stepper motors skip steps.


The settings used were:

  • feed rate: 750mm / min
  • plunge rate: 500mm / min
  • pass depth: 1.0mm


This means that in the horizontal plane (X & Y) the mill moves at 750mm per minute, and in the vertical plane (Z) at 500mm per minute. Every pass is milled 1mm deep, so for a board like this one, which is 18mm thick, 18 passes are required.
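The pass count and a rough lower bound on the milling time follow directly from these numbers. A back-of-the-envelope sketch (the per-layer path length is a made-up figure for illustration, not taken from my actual design):

```python
import math

board_thickness_mm = 18.0
pass_depth_mm = 1.0
feed_rate_mm_min = 750.0

# One layer of pass_depth_mm is removed per pass
passes = int(math.ceil(board_thickness_mm / pass_depth_mm))
print(passes)  # 18

# Hypothetical tool path length for a single layer
path_length_mm = 2000.0
total_minutes = passes * path_length_mm / feed_rate_mm_min
print(round(total_minutes))  # 48
```

The real job also includes plunges, rapids and multiple shapes, which is why the actual run came closer to 90 minutes.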


The entire process took about 1h30min. Apart from the occasional vacuuming of the wood dust, no intervention was required.


Finally, if you have an Inventables account, you can access my file using the following link, allowing you to copy and modify my design to your needs: Easel - PiIoT Alarm Clock - Front




Here's a gallery of the different steps in the milling and fitting process. I've added descriptions to every picture.


{gallery} CNC


Solid Board: The starting piece, a solid board from the hardware store.


Passes: Unlike a 3D printer adding layers of material, the mill removes it, layer by layer.


Order: It is important to define the shapes to be milled in the correct order. The outside perimeter is done last to avoid the piece coming loose before milling other shapes.


Tabs: The piece remains attached to the board with easy to cut "tabs".


Sanding: With minimal sanding, the piece is cleaned up and ready for fitting.


Fitting: Fitting a push button, and attaching it with a nut. Enough space was foreseen for easy access.


Acrylic: A small acrylic plate is fitted, acting as a diffuser for the clock's display.


Display: The plate and display are held into place using a few drops of hot glue.


Result: The result from the front. What do you think?


With the front finished, I can now proceed with the rest of the enclosure.







Now that we are able to easily create and customise voice commands on the Pi, let's do the reverse and create voice responses. As mentioned in my previous post, there are a lot of voice tools available, but I would like to have an offline alternative capable of working without an internet connection. What's a home automation system if it's crippled by the lack of internet?


That's why in this post, I will work with both an offline and an online text to speech tool, and provide a mechanism to switch between the two should the internet connection be down. I'm using both because, from what I've experienced, the online alternatives just sound better than the offline ones.




Searching for an offline and easy to use text to speech tool, I came across flite. "Flite" is a lightweight version of another text to speech tool called Festival ("flite" = "festival-lite"). It is designed specifically for embedded systems and has specific commands to make it easier to use from the command line.




Flite is available in the repository and will use a mere 384kB of disk space. I suppose that indeed qualifies as lightweight.


pi@piclock:~ $ sudo apt-get install flite
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following NEW packages will be installed:
  flite
0 upgraded, 1 newly installed, 0 to remove and 0 not upgraded.
Need to get 234 kB of archives.
After this operation, 384 kB of additional disk space will be used.
Get:1 jessie/main flite armhf 1.4-release-12 [234 kB]
Fetched 234 kB in 0s (395 kB/s)
Selecting previously unselected package flite.
(Reading database ... 119163 files and directories currently installed.)
Preparing to unpack .../flite_1.4-release-12_armhf.deb ...
Unpacking flite (1.4-release-12) ...
Processing triggers for man-db ( ...
Processing triggers for install-info (5.2.0.dfsg.1-6) ...
Setting up flite (1.4-release-12) ...




Different voices are installed by default. You can list them as follows:


pi@piclock:~ $ flite -lv
Voices available: kal awb_time kal16 awb rms slt


To use a certain voice, use the "-voice" option when launching flite. For example:


pi@piclock:~ $ flite -voice slt -t "Hello, is it me you're looking for?"
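As a side note, when calling flite from Python later on, passing the arguments as a list sidesteps shell quoting issues with apostrophes like the one above. A small sketch (`say` is my own helper name, not part of flite):

```python
import subprocess

def flite_args(text, voice="slt"):
    # Build the flite argument list; no shell escaping required
    return ["flite", "-voice", voice, "-t", text]

def say(text, voice="slt"):
    # subprocess passes the text as a single argument, apostrophes and all
    subprocess.call(flite_args(text, voice))

# say("Hello, is it me you're looking for?")
```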


If you can't find a voice you like, additional voices are available for download on the flite website: Flite English Synthesis Demo




Nothing needs to be installed for the speech synthesis itself, as it will be processed online, but a tool is required to play back the received audio file.


Using the preinstalled "omxplayer", the audio seemed to be cut off and the program did not stop after playing out the file. So instead, I installed "mplayer".


pi@piclock:~ $ sudo apt-get install mplayer
Reading package lists... Done
Building dependency tree
Reading state information... Done
Note, selecting 'mplayer2' instead of 'mplayer'
The following extra packages will be installed:
  liba52-0.7.4 libbs2b0 liblircclient0 liblua5.2-0 libpostproc52 libquvi-scripts libquvi7
Suggested packages:
The following NEW packages will be installed:
  liba52-0.7.4 libbs2b0 liblircclient0 liblua5.2-0 libpostproc52 libquvi-scripts libquvi7 mplayer2
0 upgraded, 8 newly installed, 0 to remove and 0 not upgraded.
Need to get 1,042 kB of archives.
After this operation, 2,711 kB of additional disk space will be used.
Do you want to continue? [Y/n]


For the text to speech side of things, I'm making use of the Google Translate TTS API. It's possible to pass a string to the API in the form of a URL, which will return an mp3 file containing the spoken version.


Clicking the link below should play out some audio:…


By integrating this URL in a script and making the query a variable, custom responses can be generated on the fly.
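Before the text can go into a URL, it needs to be URL-encoded. A minimal sketch of building such a request URL (the base URL and the tl/q parameter names below are placeholders for illustration, not the exact Google endpoint):

```python
# URL-encode the spoken text before appending it as a query parameter
try:
    from urllib.parse import quote_plus  # Python 3
except ImportError:
    from urllib import quote_plus        # Python 2, current at the time

def tts_url(base, text, lang="en"):
    """Build a TTS request URL with the text safely URL-encoded."""
    return base + "?tl=" + lang + "&q=" + quote_plus(text)

# Hypothetical base URL; substitute the actual TTS endpoint
print(tts_url("http://example.com/translate_tts", "what is the temperature"))
```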





In order to be able to use voice control at all times, even without an active internet connection, both solutions can be implemented and combined in order to have the code switch between them.


I wrote a script taking the desired message as an argument. The script first checks connectivity to Google: if the ping is successful, the Google Translate TTS is used; otherwise, "flite" is.


#!/usr/bin/env python

import os
from sys import argv

response = argv[1]

def check_internet():
        # Ping Google once; returns 0 when the connection is up
        host = "google.com"
        connectivity = os.system("ping -W 1 -c 1 " + host)

        return connectivity

def offline_response():
        # Offline text to speech using flite's "slt" voice
        os.system("flite -voice slt -t \"" + response + "\"")

def online_response():
        # Online text to speech: fetch the mp3 and play it with mplayer
        url = "\"" + response + "\""  # prefix the Google Translate TTS URL here
        agent = "\"Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:46.0) Gecko/20100101 Firefox/46.0\""
        recording = "/tmp/recording.mp3"
        os.system("wget -U " + agent + " -O " + recording + " " + url + "  && mplayer " + recording)

def main():
        # Prefer the online voice, fall back to the offline one
        if check_internet() == 0:
                online_response()
        else:
                offline_response()

main()






Ok, for this post's demo, I'm calling the response script defined in the previous paragraph and having it repeat the incoming speech recognised via PocketSphinx, as installed in my previous post.

The first part of the video demonstrates the offline TTS by temporarily setting the ping host to a dummy value ("google.coma"), simulating internet down. In the second part, the ping host is valid, and the script uses the online TTS. You can see the audio file being downloaded on the fly.



I hope you've enjoyed this post!







We have seen many forms of voice control, and I've used some of them in the past (IoT Alarm Clock using Jasper ) or recently (Running Amazon Echo (Alexa) on Raspberry Pi Zero ).


For this project, I thought I'd try to find a voice control solution that meets the following requirements:

  • work offline
  • easy to customise commands


For example, Alexa is extremely powerful and can understand and answer a lot of questions, while sounding very human. But the data is processed online and wouldn't work without an active internet connection. Another thing is that in order to easily customise commands with Alexa, an additional service like IFTTT is required. So unfortunately, no internet = no voice control.


Luckily, Alan's Raspberry Pi 3 RoadTest was all about speech recognition performance using a Speech To Text tool called PocketSphinx. Alan also refers to a practical application by Neil Davenport for Make. Exactly what I was looking for!




First things first. To be able to do voice control, we need to have an audio input device on the Pi. A USB microphone is probably the easiest and cheapest option.


I found this tiny USB microphone on eBay for about $1:




To verify it was properly detected, I listed the recording devices with the following command:


pi@piclock:~ $ arecord -l
**** List of CAPTURE Hardware Devices ****
card 1: Device [USB PnP Sound Device], device 0: USB Audio [USB Audio]
  Subdevices: 1/1
  Subdevice #0: subdevice #0


Next, trigger a recording and generate some sound:


pi@piclock:~ $ arecord -Dhw:1 -r 44100 -f S16_LE file
Recording WAVE 'file' : Signed 16 bit Little Endian, Rate 44100 Hz, Mono
^CAborted by signal Interrupt...


And finally, verify the recording by playing it out:


pi@piclock:~ $ aplay -Dhw:0 -r 44100 -f S16_LE file
Playing WAVE 'file' : Signed 16 bit Little Endian, Rate 44100 Hz, Mono


You should be able to hear the recorded sounds, indicating the microphone is working as expected.






To install the software, I mainly followed the very clear and detailed instructions from Neil on Make.

There were however some things I needed to adapt or add in order to get it fully working, so here's my take on the PocketSphinx installation.




Some dependencies need to be installed to avoid running into problems when building PocketSphinx or running the code.


pi@piclock:~ $ sudo apt-get install libasound2-dev autoconf libtool bison swig python-dev python-pyaudio


pi@piclock:~ $ curl -O
pi@piclock:~ $ sudo python
pi@piclock:~ $ sudo pip install gevent grequests


Once the dependencies are installed, the first bit of software can be installed.




These instructions have been followed as is from Neil's guide, and are used to download the source files for SphinxBase and build it.


pi@piclock:~ $ git clone git://
pi@piclock:~ $ cd sphinxbase
pi@piclock:~/sphinxbase $ git checkout 3b34d87
pi@piclock:~/sphinxbase $ ./autogen.sh
pi@piclock:~/sphinxbase $ make
pi@piclock:~/sphinxbase $ sudo make install
pi@piclock:~/sphinxbase $ cd ..




After building SphinxBase, the same is done for PocketSphinx:


pi@piclock:~ $ git clone git://
pi@piclock:~ $ cd pocketsphinx
pi@piclock:~/pocketsphinx $ git checkout 4e4e607
pi@piclock:~/pocketsphinx $ ./autogen.sh
pi@piclock:~/pocketsphinx $ make
pi@piclock:~/pocketsphinx $ sudo make install




With the installation complete, I first tested PocketSphinx using the microphone input, in continuous listen mode:


pi@piclock:~ $ pocketsphinx_continuous -inmic yes
pocketsphinx_continuous: error while loading shared libraries: cannot open shared object file: No such file or directory


This returned an error. For some reason, the location of the shared libraries needs to be included in the library search path, as it's not part of the defaults:


pi@piclock:~ $ sudo nano /etc/ld.so.conf
include /etc/ld.so.conf.d/*.conf
/usr/local/lib


After adding the "/usr/local/lib" path to "/etc/ld.so.conf", apply the change:


pi@piclock:~ $ sudo ldconfig


I then tried again and bumped into another issue:


pi@piclock:~ $ pocketsphinx_continuous -inmic yes
Error opening audio device default for capture: No such file or directory


PocketSphinx searched for the microphone on the default audio card, which is the Pi's onboard audio and has no input capabilities. This can easily be fixed by specifying the microphone's device on the command line:


pi@piclock:~ $ pocketsphinx_continuous -inmic yes -adcdev plughw:1


PocketSphinx is then running, trying to recognise speech. Don't worry if at this stage it doesn't recognise what you say (at all), as it still needs to be configured with meaningful dictionary and grammar data.




Audio Devices


As documented in Neil's post, I changed the ALSA config to put the USB device at index 0, followed by the onboard audio. This makes the USB device the default for playback and capture, without having to change other files. This works particularly well if you are using a USB sound card with both input and output capabilities.


pi@piclock:~ $ sudo nano /etc/modprobe.d/alsa-base.conf
options snd-usb-audio index=0
options snd_bcm2835 index=1


Then came what was, for me, the trickiest part. Since the USB dongle I used is microphone only, I needed the default playback to remain the onboard audio, but the default capture to be the USB mic. After a lot of searching and different tests, this became the resulting audio configuration to get both playback and capture working on the expected devices:


pi@piclock:~ $ sudo nano /etc/asound.conf

pcm.mic {
    type hw
    card 0
}

pcm.onboard {
    type hw
    card 1
}

pcm.!default {
    type asym
    playback.pcm {
        type plug
        slave.pcm "onboard"
    }
    capture.pcm {
        type plug
        slave.pcm "mic"
    }
}


Dictionary & Language Model


The required input is a "corpus" file, a file containing the phrases that need to be recognised. This file can then be fed to Sphinx's lmtool to generate the dictionary and language model files.


As explained on that page:

To use: Create a sentence corpus file, consisting of all sentences you would like the decoder to recognize. The sentences should be one to a line (but do not need to have standard punctuation). You may not need to exhaustively list all possible sentences: the decoder will allow fragments to recombine into new sentences.


Example questions for my application are:

  • is the door of the shed closed
  • is the door of the shed open
  • what is the temperature in the shed
  • turn on the lab light
  • turn off the lab light
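These phrases go into the corpus file one sentence per line, as lmtool expects. A trivial sketch for generating such a file (the filename "corpus.txt" is my own choice):

```python
# The phrases the decoder should recognise, one sentence per line
phrases = [
    "is the door of the shed closed",
    "is the door of the shed open",
    "what is the temperature in the shed",
    "turn on the lab light",
    "turn off the lab light",
]

# lmtool wants plain sentences without standard punctuation
with open("corpus.txt", "w") as f:
    f.write("\n".join(phrases) + "\n")
```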




Two of these generated files are relevant: the *.dic (dictionary) file and the *.lm (language model) file.


For ease of use, I renamed the files to "dictionary.dic" and "language_model.lm".


Grammar File


The grammar file ("grammar.jsgf") contains the structure of the sentences that will be spoken. Based on Neil's example, I created my own grammar file:


#JSGF V1.0;
grammar commands;

<action> = TURN ON |
  TURN OFF       |
  DOOR ;

<object> = SHED |
  LAB ;

public <command> = <action> THE <object> LIGHT |
  WHAT IS THE <action> OF THE <object> |
  IS THE <action> OF THE <object> CLOSED |
  IS THE <action> OF THE <object> OPEN ;


Be careful though: every word used in the grammar file must be present in the dictionary. Otherwise, an error will be generated at startup and the script will fail to start.
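A quick way to catch this before starting the script is to cross-check the grammar against the dictionary. This is a sketch of my own, assuming the simple JSGF subset and .dic layout shown above (uppercase terminal words, one dictionary entry per line):

```python
import re

def grammar_words(jsgf_text):
    """Extract the uppercase terminal words from a simple JSGF grammar."""
    # Strip rule names like <action>, the header and the grammar declaration
    cleaned = re.sub(r"<[^>]*>|#JSGF[^\n]*|grammar\s+\w+;", " ", jsgf_text)
    return set(re.findall(r"[A-Z]+", cleaned))

def dictionary_words(dic_text):
    """First token of each .dic line, with alternates like WORD(2) folded."""
    words = set()
    for line in dic_text.splitlines():
        if line.strip():
            words.add(line.split()[0].split("(")[0])
    return words

def missing_words(jsgf_text, dic_text):
    # Any word in the grammar but not in the dictionary will break startup
    return grammar_words(jsgf_text) - dictionary_words(dic_text)

grammar = ("#JSGF V1.0;\ngrammar commands;\n"
           "public <command> = TURN ON THE <object> LIGHT ;\n"
           "<object> = SHED | LAB ;")
dic = "TURN T ER N\nON AA N\nTHE DH AH\nSHED SH EH D\nLIGHT L AY T"
print(missing_words(grammar, dic))  # {'LAB'}
```

An empty set means the grammar and dictionary are consistent and the decoder should start cleanly.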




After replacing the files in the demo code with my own, I was able to detect my customised phrases accurately.


Here's a short clip demonstrating the recognition. It is now just a matter of linking the actual actions to the detected phrases.



As you can see, the recognition is extremely fast. Also, everything is done locally, without the need for an internet connection!



