
Raspberry Pi


MATRIX Creator Eclipse Weather App

In celebration of Eclipse Day we made this app to tell you what the weather is outside, so you know whether your current local conditions will let you see the eclipse. This guide provides step-by-step instructions for determining your general location and reporting the weather there via a series of LED animations on a Raspberry Pi with a MATRIX Creator. It demonstrates how to use an IP geolocation service to find your location and then feed it to the Dark Sky API, whose response selects the LED animation shown on your MATRIX Creator. The main goal of this app was to offer an interesting new way to check your current weather conditions.


Required Hardware

Before you get started, let's review what you'll need.

  • Raspberry Pi 3 (Recommended) or Pi 2 Model B (Supported) - Buy on Element14 - Pi 3 or Pi 2.
  • MATRIX Creator - The Raspberry Pi does not have a built-in microphone; the MATRIX Creator has an 8-mic array perfect for Alexa - Buy MATRIX Creator on Element14.
  • Micro-USB power supply for Raspberry Pi - 2.5A 5V power supply recommended
  • Micro SD Card (Minimum 8 GB) - You need an operating system to get started. NOOBS (New Out of the Box Software) is an easy-to-use operating system install manager for Raspberry Pi. The simplest way to get NOOBS is to buy an SD card with NOOBS pre-installed - Raspberry Pi 16GB Preloaded (NOOBS) Micro SD Card. Alternatively, you can download and install it on your SD card.
  • A USB Keyboard & Mouse, and an external HDMI Monitor - recommended in case you're unable to remote (SSH) into your Pi.
  • Internet connection (Ethernet or WiFi)
  • (Optional) WiFi Wireless Adapter for Pi 2 (Buy on Element14). Note: Pi 3 has built-in WiFi.

For extra credit, enable remote (SSH) access to your device, eliminating the need for a monitor, keyboard and mouse - and learn how to tail logs for troubleshooting.


Let's get started

We will be using MATRIX OS (MOS) to easily program the Raspberry Pi and MATRIX Creator in Javascript.


Step 1: Setting up MOS

Download and configure MOS and its CLI tool for your computer using the following installation guide in the MATRIX Docs: Installation Guide


Step 2: Create a MATRIX-Weather-App

To create your own MATRIX-Weather-App on your local computer, use the command "matrix create MATRIX-Weather-App". You will then be prompted to enter a description and keywords for your app. A new folder will be created for the app, containing five new files. The one you will be editing is the app.js file. You will also be creating a file called weatherAnimations.js for the weather animations.

From here you can clone the MATRIX-Weather-App GitHub repo with the code or follow the guide below for an overview of the code. Either way, make sure to follow the instructions in step 4.


Step 3: Global Variables

In the app.js file you will need to set up the following libraries and global variables for the app:


//Load libraries
var weatherAnims = require(__dirname+'/weatherAnimations'); //custom weather animations
var Forecast = require('forecast'); //weather API wrapper
var request = require('request'); //HTTP requests

//Global Variables
//Detailed location data
var location = {};

//Configure forecast options
var forecast = new Forecast({
    service: 'darksky', //only api available
    key: 'YOUR_KEY_HERE', //darksky api key
    units: 'fahrenheit', //fahrenheit or celcius
    cache: false //cache forecast data
});


Step 4: Dark Sky API

Within the forecast variable created in Step 3, change YOUR_KEY_HERE to the API key you receive once you make an account with Dark Sky here.


Step 5: Obtaining Location Data

To obtain your location data we will be using an IP geolocation service to get your latitude and longitude from your IP address. This is done with the following code in the app.js file:


//Obtaining location data
function getLocation(callback){
    //request location data (the geolocation URL was lost from the original
    //post; any JSON IP-lookup endpoint returning lat/lon fields will do)
    request.get('YOUR_GEOLOCATION_API_URL')
    //catch any errors
    .on('error', function(error){
        return console.log(error + '\nCould Not Find Location!');
    })
    //get response status
    .on('response', function(data) {
        console.log('Status Code: '+data.statusCode);
    })
    //get location data
    .on('data', function(data){
        //save location data
        location = JSON.parse(data);
        //log all location data
        console.log(location);
        //pass location to the callback
        callback(location);
    });
}


Step 6: Selecting Weather Animations

Within the app.js file there will be a function that stops the current LED animation and loads one corresponding to the weather information provided by Dark Sky. Use the function below:


//Selecting Weather Animation
function setWeatherAnim(forecast){
    //clear MATRIX LEDs by stopping the current animation
    weatherAnims.emit('stop');
    //set MATRIX LED animation
    weatherAnims.emit('start', forecast);
}

In the MATRIX-Weather-App folder you will need to create a file called weatherAnimations.js. You can find the code for the weatherAnimations.js file here.


Each LED sequence in the weatherAnimations.js file is tied to one of these responses from the Dark Sky API.

  • clear-day
  • clear-night
  • rain
  • snow
  • sleet
  • wind
  • fog
  • cloudy
  • partly-cloudy-day
  • partly-cloudy-night

If there is a hazard such as hail, thunderstorms, or tornadoes, then the LEDs will turn red.

If there is no LED sequence created for the current weather, the LEDs will turn yellow.
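The selection rules above boil down to a small lookup. A sketch (the names 'red' and 'yellow' here are placeholders for the solid-colour fallbacks, not the app's actual animation names):

```javascript
// Map a Dark Sky condition string to an animation, mirroring the fallback
// rules: hazards go red, conditions without a sequence go yellow.
var knownAnims = ['clear-day', 'clear-night', 'rain', 'snow', 'sleet',
                  'wind', 'fog', 'cloudy', 'partly-cloudy-day', 'partly-cloudy-night'];
var hazards = ['hail', 'thunderstorm', 'tornado'];

function pickAnimation(condition) {
  if (hazards.indexOf(condition) !== -1) return 'red';       // hazard warning
  if (knownAnims.indexOf(condition) !== -1) return condition; // dedicated LED sequence
  return 'yellow';                                            // no sequence available
}

console.log(pickAnimation('sleet'));  // sleet
console.log(pickAnimation('hail'));   // red
console.log(pickAnimation('meteor')); // yellow
```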


Step 7: Obtaining Forecast Data

Using the forecast NPM module, this function in the app.js file retrieves and stores relevant weather information received from Dark Sky. Use the following code:


//Obtaining Forecast data
function determineForecast(lat, lon){
    // Retrieve weather information
    forecast.get([lat, lon], true, function(error, weather) {
        //stop if there's an error
        if(error)
            return console.log(error+'\n\x1b[31mThere has been an issue retrieving the weather\nMake sure you set your API KEY \x1b[0m ');
        //pass weather into the animation selector
        setWeatherAnim(weather.currently.icon);
        //loop every X milliseconds (here: 3 minutes)
        setTimeout(function(){
            determineForecast(lat, lon);
        }, 180000);
    });
}

The weather is updated every 3 minutes.


Step 8: Action Zone

This last function calls all the previous functions and starts the app with the following code:


//Action Zone
//Auto Obtain Location
getLocation(function(location){
    //Start Forecast requests
    determineForecast(location.lat, location.lon);//input your coordinates for better accuracy ex. 25.7631,-80.1911
});


If you experience an inaccurate forecast, feel free to hardcode your coordinates in place of the location.lat and location.lon variables. The inaccuracy is due to the roughly 2-mile error margin of locating you by IP address.


All code for the app can be found on GitHub here:

Dataplicity released a new feature "Custom Actions" that might be useful for projects including remote control.





MathWorks recently ran a mobile devices challenge where users were asked to submit a project in which they programmed their Android or iOS devices using MATLAB or Simulink. There were over 15 submissions that competed for the grand prize of 1000 USD.


The third-place winning team built a low-cost alternative to expensive GPS systems; click here to read more about this project and learn about the other two winners. The link contains video references to their projects as well.


MATRIX Creator Amazon Alexa

This guide provides step-by-step instructions for setting up AVS on a Raspberry Pi with a MATRIX Creator. It demonstrates how to access and test AVS using our Java sample app (running on a Raspberry Pi), a Node.js server, and a third-party wake word engine using the MATRIX mic array. You will use the Node.js server to obtain a Login with Amazon (LWA) authorization code by visiting a website using your Raspberry Pi's web browser.

Required hardware

Before you get started, let's review what you'll need.

  • Raspberry Pi 3 (Recommended) or Pi 2 Model B (Supported) - Buy on Element14 - Pi 3 or Pi 2.
  • MATRIX Creator - The Raspberry Pi does not have a built-in microphone; the MATRIX Creator has an 8-mic array perfect for Alexa - Buy MATRIX Creator on Element14.
  • Micro-USB power supply for Raspberry Pi - 2.5A 5V power supply recommended
  • Micro SD Card (Minimum 8 GB) - You need an operating system to get started. NOOBS (New Out of the Box Software) is an easy-to-use operating system install manager for Raspberry Pi. The simplest way to get NOOBS is to buy an SD card with NOOBS pre-installed - Raspberry Pi 16GB Preloaded (NOOBS) Micro SD Card. Alternatively, you can download and install it on your SD card.
  • External Speaker with 3.5mm audio cable - Buy on Amazon
  • A USB Keyboard & Mouse, and an external HDMI Monitor - recommended in case you're unable to remote (SSH) into your Pi.
  • Internet connection (Ethernet or WiFi)
  • (Optional) WiFi Wireless Adapter for Pi 2 (Buy on Element14). Note: Pi 3 has built-in WiFi.

For extra credit, enable remote (SSH) access to your device, eliminating the need for a monitor, keyboard and mouse - and learn how to tail logs for troubleshooting.

Let's get started

The original Alexa on a Pi project required manual download of libraries/dependencies and updating configuration files, which is prone to human error. To make the process faster and easier, we've included an install script with the project that will take care of all the heavy lifting. Not only does this reduce setup time to less than an hour on a Raspberry Pi 3, it only requires developers to adjust three variables in a single install script.

Step 1: Setting up your Pi

Configure your Raspberry Pi as described in the original Alexa documentation by completing steps 1-6 of the Raspberry Pi Alexa Documentation.

Step 2: Override ALSA configuration

MATRIX Creator has 8 physical microphone channels and an additional virtual beamformed channel that combines the physical ones. Use a microphone channel by placing the following in /home/pi/.asoundrc:

pcm.!default
{
  type asym
  playback.pcm {
    type hw
    card 0
    device 0
  }
  capture.pcm {
    type file
    file "/dev/null"
    infile "/tmp/matrix_micarray_channel_0"
    format "raw"
    slave {
        pcm sc
    }
  }
}

Step 3: Install MATRIX software and reboot

echo "deb ./" | sudo tee --append /etc/apt/sources.list;
sudo apt-get update;
sudo apt-get upgrade;
sudo apt-get install libzmq3-dev xc3sprog matrix-creator-openocd wiringpi cmake g++ git;
sudo apt-get install matrix-creator-init matrix-creator-malos
sudo reboot

Step 4: Run your web service, sample app and wake word engine

Return to the Raspberry Pi Alexa Documentation and execute Step 7, but in the last terminal select the Sensory wake word engine with:

cd ~/Desktop/alexa-avs-sample-app/samples
cd wakeWordAgent/src && ./wakeWordAgent -e sensory

Step 5: Talk to Alexa

You can now talk to Alexa by simply using the wake word "Alexa". Try the following:

Say "Alexa", then wait for the beep. Now say "what's the time?"

Say "Alexa", then wait for the beep. Now say "what's the weather in Seattle?"

If you prefer, you can also click on the "Listen" button, instead of using the wake word. Click the "Listen" button and wait for the audio cue before beginning to speak. It may take a second or two before you hear the audio cue.

Music has always been driven forward in part by the technology used to make it. The piano combined the best features of the harpsichord and clavichord to help concert musicians; the electric guitar made performing and recording different forms of blues, jazz, and rock music possible; and electronic drum machines both facilitated songwriting and spawned entire genres of music in themselves. Code has become a part of so many different ways of making music today: digital audio workstation (DAW) software records and sequences it, digital instruments perform it, and digital consoles at live music venues process and enhance it for your enjoyment. But using Sonic Pi you actually perform the music by writing code, and Sebastien Rannou used this technique to cover one of his favorite songs, "Aerodynamic," by electronic music legends Daft Punk.


Q: To start off, for someone like me who knows little to nothing about code in general, what exactly is happening in this video!? I’ve watched it several times in full, and I’m still not sure!


Sebastien: It's a video where a song by Daft Punk is played from code being edited on the fly. This happens in a software called Sonic Pi, which is a bit like a text editor; you can write code in the middle of the screen and it plays some music according to the recipe you provided. Sometimes you can see the screen blink in pink; this is when the code is evaluated, and Sonic Pi takes up modifications. A bit after that, you'll hear something changing in the music. It's a bit like you were writing a recipe with a pencil and at the same time instantly getting the result in your food.



Q: Among the most famous features of Daft Punk’s music is the extensive use of sampling, i.e. using existing recordings that are re-purposed to create new compositions. In covering a song that is sample based, as is the case with "Aerodynamic" - which is based on a Sister Sledge track - how did you go about doing a cover?


S: This is one of my favorite songs, but the choice of doing this cover was more motivated by the different technical aspects it offers. My initial goal was to write an article about Sonic Pi, so I wanted a song where different features of it could be shown. "Aerodynamic" was good for this purpose, as it's made of distinct parts using different techniques: samples, instruments, audio effects, etc. Recreating the sampled part was especially interesting, as there isn't much more to it than this, so I had one of those 'a-ha' moments when I got the sequence right, and it surprised me.


Q: How did you come to use Sonic Pi? Do you feel it has any particular strengths and weaknesses in what it does?

sonic pi logo.png


S: I really like the idea of generating sound from code; I think it makes a lot of sense, as there are many patterns in music which can be expressed in a logical way.


I started playing around with Extempore and Overtone, which are both environments to play music from code. The initial learning curve was harder than I expected, as they implied learning a new language (Extempore comes with its own Scheme and DSL languages, and Overtone uses Clojure). So the initial time spent there was more about learning a new language and environment, so it removes some part of the fun you can have (not the technical fun part, but the musical one). On the other hand, Sonic Pi is really easy to start with: one of its main goals is to offer a platform to teach people how to code, and I think Sam Aaron (the creator of Sonic Pi) did a very good job on this. What's surprising is that, even though it's initially made to teach you how to code, you don't feel limited and can go around and do most of the crazy stuff you need to express musically.


One thing which is a bit hard to get right at the beginning is that live coding environments aren't live in the same way an instrument is: you don't get instant feedback on your live modifications if you tweak a parameter within Sonic Pi, as those are usually caught up to on the next musical measure. So you have to think of what's going to happen in the next bar or two, and try to imagine how it's going to sound. This takes some practice.


sonic pi additive_synthesis.png


Q: There’s quite a bit of discussion about how Daft Punk recorded the “guitar solo” in this track; how did you go about covering it?


S: I don't know much about the theories of how they did the guitar solo part, which I naïvely thought they did digitally. I did a spectral analysis of the track, and isolated each individual note to get their pitch and an approximation of their envelope characteristics (the attack, decay, sustain, and release, essentially how the sound develops over time). Then it was just a matter of using a Sonic Pi instrument that sounded a bit like a guitar, and telling it to play them. I then wrapped it in a reverb and a bitcrusher effect (which downgrades the audio's bit rate and / or sampling rate) to make it sound a bit more metallic. Because the notes are so fast during this solo, it sounds kind of good as is (unlike the sound of the bells at the beginning, more on this later!).


Q: As you were working on your cover, did you run into any notable technical problems, and how did you solve them?


S: Yes! I spent a lot of time trying to get the bells sound right, but failed. Usually when an instrument plays a note, it has a timbre: this is a sort of signature which can be more or less explained, for instance a violin has a very complex timbre, whereas a wheel organ is way more simple. This complexity is highlighted when you look at audio frequencies when such an instrument plays a note: there is usually one frequency that outweighs others (the frequency of the pitch or the fundamental), and a myriad of others, which correspond to the timbre.


The timbre of the bells at the beginning of "Aerodynamic" is very complex, and it evolves in a non-trivial way. I've tried different approaches to reproducing it, including doing Fourier transforms to extract bands of main frequencies at play at different intervals and converting these to Sonic Pi code (more about this here). Sonic Pi comes with a very simple sine instrument, which plays only one frequency, so the idea was to call this instrument several times using different frequencies all together. I kind of got something that sounded like a bell, but it was far from sounding right. I ended up using the bell instrument that also comes with Sonic Pi, playing it at different octaves at the same time, and wrapping these in a reverb effect. That's kind of a poor solution, but at least I had fun in this adventure!
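The additive approach Sebastien describes - stacking pure sine tones at different frequencies to approximate one complex timbre - can be sketched numerically. The partial frequencies and amplitudes below are made up for illustration, not his actual Fourier analysis:

```javascript
// Additive synthesis: one output sample is the sum of several weighted sines.
var partials = [
  { freq: 440,  amp: 1.0  },  // fundamental
  { freq: 880,  amp: 0.5  },  // second partial
  { freq: 1320, amp: 0.25 }   // third partial
];

// t is time in seconds; returns the summed (unnormalized) amplitude
function sampleAt(t) {
  return partials.reduce(function (sum, p) {
    return sum + p.amp * Math.sin(2 * Math.PI * p.freq * t);
  }, 0);
}

console.log(sampleAt(0)); // 0 - every sine starts at a zero crossing
```

A real bell would need many more partials, each with its own envelope - which is exactly why the sound was so hard to get right.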


Q: Have you used Sonic Pi to create original music? If so, how did you feel about that process? If not, how do you imagine it would be?


S: Yes, I have, using different approaches. For example, I tried using only Sonic Pi, which ended up sounding a bit experimental, and then by composing in a DAW software (Digital Audio Workstation, eg Pro Tools) and then sampling that so it can be easily imported into Sonic Pi. With this approach I can then use Sonic Pi as a sequencer and wrap the samples in effects. I did another cover using that method, this time of a Yann Tiersen song, and also a few songs with my band, Camembert Au Lait Crew (SoundCloud). The code can all be found here on github.




Q: Do you have any plans for future music projects using Sonic Pi?


S: There are recent changes in Sonic Pi version 3 which I'm really excited about, especially the support of MIDI, so you can now control external synths with code from Sonic Pi while keeping the ability to turn knobs on your synth. I haven't tried this yet, but it's definitely what I want to do next. Sam Aaron did a live coding session recently showing this and I find it amazing:

The musical collective Sonic Robots were inspired by one of the most famous electronic instruments of all time, the Roland TR-808 drum machine, and created a live musical installation where physical instruments recreate the purely synthesized sounds of the legendary 808. We asked their founder some questions about the MR-808 interactive drum robot.




Q: What was the origin of the MR-808 project? When I first watched the video of it at the Krake Festival I couldn’t stop smiling; do you recall any particularly memorable reactions that people have had to it?


Moritz Simon Geist, founder of the Sonic Robots collective: I started out as a young hacker and tinkerer when I was 10, taking apart radios and electronic devices from my parents. I come from a music-centered family, having been taught piano, clarinet, bass, and guitar. At some point I combined these two things - music and hacking. In 2010 I thought I should sum up all the experiments of my last few years in one piece, and came up with the robotic 808. In classic fashion, I got the idea at night in the bar, over a beer. Once I got the idea it was such an obvious thing - to do electronic music with robots - that I feared that somebody else would do it before me during the two and a half years it took to build the MR-808. Of course, that never happened.


And the first question that people ask is: “Craaazy! How long did it take to build it?”


Q: The Roland TR-808 is famous for many reasons, but maybe its best known feature is its synthesized bass drum sound. How did you go about recreating this legendary sound, which has practically become the basis for some electronic music styles?


M: Yes, the 808 is famous for its bass drum, and the clap, maybe. In the beginning of the build, I did nearly a year of experiments; initially I wanted to take a “real” 18-inch bass drum from a drum set, but that doesn't sound at all like the 808's bass drum. The electronically-generated 808 bass drum is basically a sine wave with an attack and release curve. So I searched for sounds that come close to sine waves in real life, and ended up using a very short bass drum string. For my latest robots, I optimize that and use metallic tongs, similar to a kalimba. They sound surprisingly similar to a real 808 bass drum, really boomy.
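The electronically-generated kick Moritz describes - a sine wave under an attack and release curve - reduces to a couple of lines. The 55 Hz pitch and decay constant below are illustrative values, not Roland's:

```javascript
// An 808-style kick in miniature: a sine oscillator shaped by an
// exponential release envelope, so each hit starts loud and dies away.
function kick(t, freq, decaySec) {
  var envelope = Math.exp(-t / decaySec); // release curve
  return envelope * Math.sin(2 * Math.PI * freq * t);
}

console.log(kick(0, 55, 0.3)); // 0 - the sine starts at a zero crossing
```

The mechanical challenge is finding a physical object whose natural vibration approximates this decaying sine - hence the short bass drum string and, later, the kalimba-like metal tongs.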


Since I've been making robotic music as my living for nearly three years now, my workshop and storage have filled up with experiments, parts, and unfinished robotic instruments. I still have enough plans for crazy instruments in my drawer to build music robots for the next few decades.



Q: How does one program the MR-808? Have you integrated it into any live performances?


M: Actually, it was meant to be an instrument in the first place! I did a lot of performances in 2012 and 2013, alone and with Mouse on Mars. At some point I had so many problems with my back - the installation weighs 350 kg - that I had to stop, and I started building lighter robots. The MR-808 is still on display as an interactive installation at festivals and galleries, but not for shows anymore.


The MR-808 can be played with MIDI, and so by anything that outputs MIDI. For the interactive version we built a collaborative sequencer that outputs MIDI signals. The sequencer is a SuperCollider patch running on the Raspberry Pi. There is also a small web server providing a simple website with a step sequencer. Two Nexus 2 tablets act as the interface, connecting to the Raspberry Pi via Wi-Fi; they display the sequencer which finally controls the robot. We also blogged about it here in detail, and it's freely available on GitHub.


Q: Why did you choose the Raspberry Pi to be part of this project? What advantages does it offer?


M: As everyone knows, the Raspberry Pi is the platform when it comes to lightweight prototype installations. As I was looking to reduce the weight of the overall installation, I was also not keen on taking a full-blown laptop with me. Additionally, the data processing - providing a simple web server and running a SuperCollider patch - is perfect for the Raspberry Pi. We are currently using a Pi 3, with a small TFT and customized restart and power-off buttons connected to some IO pins. It's a workhorse.




Q: As you were putting together the MR-808, did you run into any notable technical problems, and how did you solve them?


M: So many, I couldn't name them all! One funny thing: when we were building the 16 big push buttons for the bottom of the installation, we had to find a 1:12 model of the original buttons, which of course doesn't exist.


The 3D printing which we use now didn’t exist back then, so we ended up replicating the buttons with a pizza oven, a vacuum cleaner, and a self-made mold. The process is called “thermoforming,” and we did it hacker-style with a zero budget.


IT-wise, one big issue was the synchronization of the web interface with the MIDI sequencer. On the sequencer where you can program the 808 there is a light which constantly cycles through the rhythm, indicating which step you are on. You want the feedback light to be in time with the actual rhythm being played, and you don't want it to be interrupted. As everything runs over Wi-Fi and websockets, it was a little tricky to synchronize everything smoothly. My programmer Karsten did a lot of the work there.
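At its core, keeping the light in sync means mapping wall-clock time to a step index. A sketch of that calculation (their actual SuperCollider/websocket code is more involved):

```javascript
// Which of the sequencer's steps should the feedback light show at time t?
// Steps are 16th notes, so each one lasts a quarter of a beat.
function currentStep(tMs, bpm, steps) {
  var stepMs = (60000 / bpm) / 4; // milliseconds per 16th note
  return Math.floor(tMs / stepMs) % steps;
}

console.log(currentStep(0, 120, 16));    // 0
console.log(currentStep(1000, 120, 16)); // 8 - at 120 BPM a 16th note is 125 ms
```

Deriving the step from a shared clock, rather than counting ticks received over the network, is one way to keep the display from drifting when websocket messages arrive late.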


Q: I make electronic music myself, and in that world we often talk about trying to introduce the human element into compositions that one could otherwise say are very machine-like. Beyond the fact that it exists in the physical world, in what other ways does the MR-808 feel like a living instrument to you, perhaps more so than an actual TR-808 unit?


M: The most obvious thing for self-built robots that resembles human-like behavior is their fragility; they break all the time! Industrial robots might be very powerful and rigid, but with a limited budget you always take the cheapest route and recycle a lot of parts. For the first shows of my Glitch Robots installation, I took a 3D printer on tour so I could re-print broken parts. Apart from being useful, it looked very cool to have one on stage!


When an artist leaves the pre-made route of presets and starts digging in the mud - be it with mechanics, circuit bending, self-made electronics, or field recording - one always brings error into the art. This is a good thing! It's like playing guitar and by chance hitting the wrong chord: it might sound unexpected, but somehow cool, and can start being the trademark part of the whole riff. When one experiments, a lot of these random moments appear. 90% of it might be useless, but there is the 10% which is helpful and you can’t come up with through planning. I like this introduced randomness of music robots a lot.



Q: Do you have any plans for future music tech projects? An update to the MR-808, perhaps, or another new device?


M: The 808 was cool at the time that I built it, and for me it just "had to be done." But at the same time, it refers back to an historical instrument, and is very much bound to this reference. My opinion is that art should also be futuristic, and should sometimes fail, but it should point to an unknown future. So I decided not to build the Robotic 909, for example (editor's note: the TR-909 was a subsequent drum machine from Roland, a famous instrument in its own right).


With my last instrument, “Tripods One,” I tried to think of an instrument which is futuristic and that also plays with a human-machine interaction. Also, I took a lot more design ideas into account. It consists of 5 pyramids which inhabit small mechanical robots (of course!). Sound-wise, I did not refer to the classic "bassdrum / snare / hihat" sounds; instead, I searched for sounds which I can use well in the context of electronic music. You can see that project here:


Tripods One – Sonic Robots


See more Sonic Robots projects on their site, and check out more Raspberry Pi projects on element14 here!


Sense Hat Color Chooser

Posted by lucie tozer, Jul 10, 2017

I've moved into a new house and came across a Sense HAT for the Raspberry Pi, which reminded me of a little project I was working on. It's basically an HTML-based colour chooser which updates the selected colour on the Sense HAT, so I thought I'd share the scripts in case anybody finds them helpful.




To start with I was running lighttpd on the Raspberry Pi, which is a lightweight web server - very simple to use, and it just requires a small modification to its config file to allow it to run Python scripts.


Below are the HTML, JavaScript, CSS, and Python.




    <link rel="stylesheet" type="text/css" media="all" href="shstyles.css"/>
    <script src="shcommon.js" type="text/javascript"></script>

    <div id="colordisplay"></div>
    <div id="colorcontrols">
    <p class="colorcontrollabel">R</p>
    <input id="redslider" class="slider" type="range"  min="0" max="255" value="255" onchange="slideRed(this.value)" />
    <p id="redvaluelabel" class="colorvaluelabel">255</p>
    <p class="colorcontrollabel">G</p>
    <input id="greenslider" class="slider" type="range"  min="0" max="255" value="90" onchange="slideGreen(this.value)" />
    <p id="greenvaluelabel" class="colorvaluelabel">90</p>
    <p class="colorcontrollabel">B</p>
    <input id="blueslider" class="slider" type="range"  min="0" max="255" value="90" onchange="slideBlue(this.value)" />
    <p id="bluevaluelabel" class="colorvaluelabel">90</p>
    <input type="button" value="update" onClick="setSenseHatColorDisplay()">
    <p id="outputarea">output area</p>
    </div>



var colorred = 255;
var colorblue = 90;
var colorgreen = 90;

function slideRed(newvalue){
    colorred = newvalue;
    document.getElementById("redvaluelabel").innerHTML = newvalue; //update value label
}

function slideGreen(newvalue){
    colorgreen = newvalue;
    document.getElementById("greenvaluelabel").innerHTML = newvalue; //update value label
}

function slideBlue(newvalue){
    colorblue = newvalue;
    document.getElementById("bluevaluelabel").innerHTML = newvalue; //update value label
}

function setSenseHatColorDisplay(){
    var colorstring = colorred+"|"+colorgreen+"|"+colorblue;
    var req = new XMLHttpRequest();
    req.onreadystatechange = function() {
        if (this.readyState == 4 && this.status == 200) {
            document.getElementById("outputarea").innerHTML = this.responseText;
        }
    };
    //the CGI script's filename was lost from the original post; fill in your own
    req.open("GET", "/cgi-bin/?" + colorstring, true);
    req.send();
}




/* selector names after the first rule were lost from the original post;
   they are reconstructed here from the ids and classes in the HTML */
html, body{
    min-height: 100%;
    height: 100%;
    max-width: 100%;
}

#colordisplay{
    float: left;
    width: 120px;
    height: 120px;
    border: 1px solid black;
    background-color: rgb(255,90,90);
}

.slider{
    display: inline;
    width: 100px;
}

.colorcontrollabel{
    display: inline;
}

.colorvaluelabel{
    display: inline;
}

#colorcontrols{
    float: left;
    border: 1px solid black;
    width: 200px;
}

#! /usr/bin/python

import sys
import os
from sense_hat import SenseHat

#read the "R|G|B" colour string from the request's query string
#(the original assignment was cut off; QUERY_STRING is an assumption)
colorstring = os.environ.get('QUERY_STRING', '')
#colorstring = "255|90|90"
colortup = colorstring.split("|")
redvalue = colortup[0]
greenvalue = colortup[1]
bluevalue = colortup[2]
print "Content-Type: text/html\n\n"

#run the second script with sudo so it can drive the Sense HAT
#(the helper script's filename after cgi-bin/ is missing from the original post)
p = os.popen("sudo python /home/pi/www/cgi-bin/ "+redvalue+" "+greenvalue+" "+bluevalue)

print '<html><head><meta content="text/html; charset=UTF-8" /></head><body>'
#echo the helper script's output into the page's output area
print p.read()
print "</body></html>"

import sys
import os
from sense_hat import SenseHat
sense = SenseHat()

#colorstring = sys.argv[1]

redvalue = int(sys.argv[1])
greenvalue = int(sys.argv[2])
bluevalue = int(sys.argv[3])

colortup = (redvalue,greenvalue,bluevalue)

#fill all 64 pixels of the 8x8 LED matrix with the chosen colour
#(the original canvas list was cut off here)
canvas = [colortup] * 64
sense.set_pixels(canvas)



It should be possible to merge the two Python scripts, but I stumbled over returning the HTML headers to the browser and updating the Sense HAT display from a single script. So I used one script to get the data, process it, run a second Python script, and return the headers, letting the second script update the Sense HAT.
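The browser and the CGI scripts hand the colour back and forth as a pipe-delimited "R|G|B" string. The round-trip is easy to sketch in plain JavaScript (function names are illustrative, not from the post):

```javascript
// Encode three channel values into the "R|G|B" string the page sends,
// and decode such a string back into numbers as the CGI script does.
function encodeColor(r, g, b) {
  return r + "|" + g + "|" + b;
}

function decodeColor(s) {
  return s.split("|").map(function (v) { return parseInt(v, 10); });
}

console.log(encodeColor(255, 90, 90)); // 255|90|90
console.log(decodeColor("255|90|90")); // [ 255, 90, 90 ]
```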

Toby Hendricks, an electronic musician who records and performs as otem rellik, became dissatisfied with the iPad he used in live performance, and decided to build his own device using Raspberry Pi.






Q: What was the origin of the Looper project? You mention in the video that it replaced your iPad for live performances, were there deficiencies in the iPad, did you want features it didn’t offer, and so on?


Toby: The origin dates back about three years, when I first started learning Pure Data. At that time I was using an iPad for live shows, and it seemed like nearly every year when iOS got updated some of the apps I was using would break. This trend has gotten better, but I still find it a bit unnerving to use iOS live. I sort of got sick of not having a reliable setup, so I started creating Pure Data patches for an app called MobMuPlat. I fell in love with Pd (Pure Data), and eventually replaced all the apps I was using with one single Pd patch loaded into MobMuPlat. That looping/drum patch became pretty robust over the course of about three years, and then I decided to attempt to turn it into a complete standalone hardware unit.


Q: I make electronic music myself, and I always find when I get a new piece of hardware or software that there are features I didn’t expect to be using or that I didn’t know were there that I turn out to love. Despite the fact that you designed the Pi Looper, have you found yourself using it in ways you didn’t expect?




Toby: Definitely. I’m always finding ways to improve my live performances with it, mostly with the effects. I’ve become pretty proficient in playing the effects section almost like its own instrument; the delay feedback can be infinite, so creating a short delay and then playing with the delay time mixed with the other effects can really create some cool sounds and textures. Also, if you already have a loop going, the delay time is synced with the tempo of the song, so you can get some really cool rhythmic stuff going on.


Q: Why did you choose the Raspberry Pi for this project? What advantages does it offer?


Toby: I chose Raspberry Pi because I knew it could run Pure Data; I really had no other knowledge of Raspberry Pi. The form factor also works great, because I wanted to have all the components inside the box. This was my first Pi project.


Q: As you were putting together the Looper, did you run into any notable technical problems, and how did you solve them?


Toby: I had tons! It took me about three months to figure everything out. One of the main milestones was getting Pd to talk to all the controls, which are all connected to a Teensy 3.6. I had absolutely no idea how I was going to make that work when I started. I eventually learned about the comport object, a Pd external that allows Pd to send and receive serial data. Originally, I was planning on just sending MIDI back and forth from the Pi to the Teensy, but then realized I needed to also transmit song names back and forth. Learning how to package serial data ended up being many hours of frustration, but I finally got it working with some code I found on the Arduino forum. I also had to make Pd create and delete directories to store the songs; the shell Pd external eventually saved the day on that one. There were way more issues I had to solve, but those were some of the ones where I remember almost giving up on the whole project.


Q: In the electronic music world there seems to be a movement of people wanting to avoid staring at their computer screens while they write, and devices like Native Instruments’ Maschine, Ableton’s Push, and new models of the classic AKAI MPC are trying to give electronic musicians the tools to write without needing their mouse and keyboard to manipulate their DAWs. Do you feel that your Looper fits in that tradition, or is it more of a device for live performance? Perhaps it’s useful in both areas?



Toby: I think it fits in both areas. It was definitely built for my live shows, but I often jam out on the couch with it. All the internal instruments were actually an afterthought; originally it was just going to have drum samples. I have yet to fully create a song on it that ended up being something I liked enough to import into my DAW (Digital Audio Workstation) to work on further, but I’m guessing that will eventually happen. I really like when an electronic band plays a show with no computer, or at least a controller that allows them to not even look at the computer. Laptops on stage are fine, but sometimes I feel like the performer could just be checking their email up there and I wouldn’t know the difference. Seeing someone on a piece of hardware really cranking on knobs and pounding buttons (even if it’s just a controller) is so much more interesting to watch.


Q: I very much agree on that! So do you have any plans for future music tech projects? An update to the Looper, perhaps, or a device that fills a different need you have in your writing or performing?


Toby: I’m pretty much always working on a new project. I’ve been building projects more than making music lately. I’ve already built a new MIDI controller that I’m going to shoot a video for eventually. It’s a drum pad / sequencer thing (kind of like this), but it uses force sensitive resistors for the note pads. I actually learned how to cast my own urethane for the pads, which was probably one of the most unnecessary steps I’ve ever taken for a project. I also just purchased a CNC machine and am currently working on a new Raspberry Pi project that will be very similar to this, but the sound engine will be in Pure Data and the touch screens will be much larger. As for the Looper, I was just updating the code yesterday to add a pickup function to the volume knobs for saved songs. The Looper is eventually going to be completely rebuilt with force sensitive resistors for the pads, but that may be some time from now.




See more of Toby's projects on Youtube, and check out more Raspberry Pi projects on element14 here!

This post features videos that I published to my YouTube channel in the series "IoT with Raspberry Pi". The series contains four videos showing how to use the Raspberry Pi as an IoT device, starting from interfacing a sensor and going on to publishing the sensor data to a cloud server using protocols like REST and MQTT. For the entire project I used Java, along with libraries for specific tasks such as Pi4J, Unirest, and Eclipse Paho (links provided below). If you have watched any of the videos you might know that the series is divided into four parts, namely,

  1. DS18B20 Sensor interfacing with Raspberry Pi.
  2. Publishing data to Thingspeak using REST.
  3. Publishing data to Thingspeak using MQTT.
  4. Completing the project.


So let's check out how to do so.


You can subscribe on YouTube by clicking this link to show your support and stay updated with the latest videos on the channel.



1. DS18B20 Sensor interfacing with Raspberry Pi.

This video is the first part, where we will see how to interface the DS18B20 one-wire temperature sensor with the Raspberry Pi using Java and the Pi4J library.
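The video uses Java with Pi4J, but the underlying mechanism is Linux's 1-Wire (w1) sysfs interface: each DS18B20 appears as a file whose first line reports the CRC result and whose second line carries the temperature in millidegrees. As a language-neutral sketch of that parsing step (the sample sysfs output below is illustrative; the sensor ID and raw bytes vary per device):

```python
def parse_w1_slave(text):
    """Parse the contents of /sys/bus/w1/devices/<id>/w1_slave.

    The second line ends with `t=<millidegrees Celsius>`; a reading
    is only valid when the first line ends with `YES` (CRC passed).
    """
    lines = text.strip().splitlines()
    if not lines[0].strip().endswith("YES"):
        raise ValueError("CRC check failed; re-read the sensor")
    milli = int(lines[1].rsplit("t=", 1)[1])
    return milli / 1000.0

# Illustrative sysfs output:
sample = (
    "72 01 4b 46 7f ff 0e 10 57 : crc=57 YES\n"
    "72 01 4b 46 7f ff 0e 10 57 t=23125\n"
)
print(parse_w1_slave(sample))  # 23.125 degrees C
```

On the Pi itself you would read the file from the device directory under /sys/bus/w1/devices/ after enabling the w1-gpio overlay.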

2. Publishing data to Thingspeak using REST.

This video is the 2nd in the series, where we will see how to publish sensor data to the cloud using a REST API. Here we use ThingSpeak as the cloud service, and the HTTP calls for the REST API are made with the lightweight Unirest HTTP client library. In the next video, we will do the same using MQTT.
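The video makes the HTTP calls with Unirest in Java; underneath, the REST call is just a request to ThingSpeak's `update` endpoint carrying the channel's write API key and field values as query parameters. A minimal Python sketch of building that request (the API key shown is a placeholder):

```python
from urllib.parse import urlencode

def thingspeak_update_url(write_api_key, **fields):
    """Build the ThingSpeak channel-update URL.

    A GET or POST to this URL stores one sample; ThingSpeak
    rate-limits free channels to one update every 15 seconds.
    """
    params = {"api_key": write_api_key}
    params.update(fields)
    return "https://api.thingspeak.com/update?" + urlencode(params)

# Placeholder write key, for illustration only:
url = thingspeak_update_url("XXXXXXXXXXXXXXXX", field1=23.1)
print(url)
```

Sending the sample is then a single request, e.g. `urllib.request.urlopen(url)`.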

3.  Publishing data to Thingspeak using MQTT.

This video is the 3rd in the series and shows how to publish sensor data to the cloud using MQTT, again with ThingSpeak as the cloud service. Publishing over MQTT is done with the lightweight Eclipse Paho library. MQTT is a simple, lightweight publish/subscribe protocol that runs over TCP; compared to HTTP it is friendlier on both power and bandwidth, so it is a perfect fit for IoT applications. If you are interested in learning more about it, you can check some docs linked below.
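The video uses Eclipse Paho in Java; independent of library, publishing to ThingSpeak over MQTT comes down to a topic naming the channel and a query-string style payload of field values. A hedged Python sketch of composing that message (channel ID and key are placeholders, and the exact topic format should be checked against the current ThingSpeak MQTT docs, since it has changed over time):

```python
def thingspeak_mqtt_message(channel_id, write_api_key, **fields):
    """Return (topic, payload) for ThingSpeak's MQTT publish API.

    Topic format as used around the time of this series:
        channels/<channel_id>/publish/<write_api_key>
    Payload is a query-string of field values.
    """
    topic = "channels/{}/publish/{}".format(channel_id, write_api_key)
    payload = "&".join("{}={}".format(k, v) for k, v in fields.items())
    return topic, payload

# Placeholder channel and key, for illustration:
topic, payload = thingspeak_mqtt_message(123456, "XXXXXXXXXXXXXXXX",
                                         field1=23.1)
print(topic)    # channels/123456/publish/XXXXXXXXXXXXXXXX
print(payload)  # field1=23.1
```

With a client library such as paho-mqtt, you would then connect to ThingSpeak's broker and call `client.publish(topic, payload)`.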

4. Completing the project.

If you have not watched the above videos, please check those first before checking out this one. This video is the final one in the series, where we complete the project by combining the code developed in the earlier videos. We will make the application so that we can decide which API (REST or MQTT) to use to publish the data to ThingSpeak.

Github Repo:

Download Pi4J Library:
Download Unirest Library:
Unirest Website:
Unirest Jar Download (With Dependencies):
Download Eclipse PAHO Library(With Dependencies):
Eclipse PAHO Website:


More on MQTT
Official Website:

Java Application on Pi Playlist:
Catch Me On:



Microsoft was able to squeeze their deep-learning algorithms onto an RPi 3 in order to bring intelligence to small devices.


Love it or fear it, AI is advancing, and it’s coming to small, portable electronic devices thanks to developments made by Microsoft. The software giant recently succeeded in loading its deep-learning algorithms onto a Raspberry Pi 3 SBC. The advance will obviously be a boon for anything and everything IoT, which is on track to take the world by storm; speculation suggests there will be 46 billion connected devices by 2021, depending on whom you ask.


Regardless, Microsoft’s latest breakthrough will give engineers the opportunity to build intelligent medical implants, appliances, sensor systems and much more without the need for incredible computing horsepower. Most AI platforms today rely on the cloud for their processing, certainly young platforms such as Amazon’s Alexa and Apple’s Siri, but Microsoft’s breakthrough could make that cloud dependence obsolete.



Microsoft is developing AI platforms that will be squeezed into hardware no bigger than this chip. (Image credit Microsoft)


To further put Microsoft’s development into perspective: the team can take algorithms that normally run on 64- and 32-bit systems and drop the requirements down to a single bit in some cases. What’s astounding is how this development came about: it all started with a flower garden. Ofer Dekel, Manager of Machine Learning and Optimization at Microsoft’s Research Lab in Redmond, Washington, needed a way to keep squirrels from eating his flower bulbs and birdseed, which led him to develop a computer-vision platform using an inexpensive RPi 3 to alert him when there was an intrusion.


When the alert is triggered, the platform engages a sprinkler system to shoo away the culprits: an ingenious solution indeed. “Every hobbyist who owns a Raspberry Pi should be able to do that, today very few of them can,” stated Dekel. Yet the breakthrough will allow just that, and can even be installed on a tiny Cortex-M0 chip like the one pictured above.


To get the deep-learning algorithms compressed enough to fit on the RPi 3 using just a few bits, Ofer and his team employed a technique known as sparsification, which shaves off unneeded redundancies. Doing so allowed them to devise an image detection system that processed 20 times faster on limited hardware without losing any accuracy. Still, the team hasn’t yet figured out a way to take an ultra-sophisticated AI or deep neural network and compress it enough to fit on limited, low-powered hardware. Regardless, this is an unprecedented first step, and we can certainly expect advancements that will get us there sometime in the not-too-distant future.
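As a toy illustration of the idea behind sparsification (not Microsoft's actual pipeline, which combines pruning with retraining and aggressive quantization), magnitude pruning simply zeroes the weights whose contribution is negligible, leaving a sparse model that needs far less storage and compute:

```python
def sparsify(weights, threshold):
    """Magnitude pruning: zero out weights whose absolute value is
    below the threshold. Illustrative only."""
    return [0.0 if abs(w) < threshold else w for w in weights]

def sparsity(weights):
    """Fraction of weights that are exactly zero."""
    return sum(1 for w in weights if w == 0.0) / len(weights)

w = [0.8, -0.02, 0.0, 0.31, -0.05, 1.2, 0.01, -0.6]
pruned = sparsify(w, threshold=0.1)
print(pruned)            # [0.8, 0.0, 0.0, 0.31, 0.0, 1.2, 0.0, -0.6]
print(sparsity(pruned))  # 0.5
```

Sparse weight matrices can then be stored in compressed form and many multiplications skipped entirely, which is what makes tiny targets feasible.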


I'm working on some Pi projects at the moment. Instead of IoT projects... maybe I should be looking into AI.


Have a story tip? Message me at: cabe(at)element14(dot)com


Home automation is a topic that has been around for decades, using classic wired technologies such as X10. The 21st century has favoured IP (Internet Protocol) as the communication method of choice for delivering control and management of virtually anything imaginable. Devices can be untethered and operate wirelessly using sub-1GHz licence-free bands. Radio is nothing new, but in modern times it has become much easier to produce reliable, low-cost and energy-efficient radio links for consumer items. Small wireless nodes such as door/window monitoring devices can run from a single cell for a year or longer thanks to ultra-low-power microcontrollers.


I wanted to see how to deploy home automation, and whether it can be easy-to-use and reliable, and if I can get good value from it. I was also interested to see how well it could integrate with everything else in my surroundings; for example could I control devices using my voice? And how effective is it? Could I also connect a Pi and do some extra things?



In an ideal world, there would be no such thing as ugly mains light switches. Everything should be seamless, with lighting turning on when it is desired. My next home will have no mains light switches, I’ve decided : )


There are many home automation products out there, usually as part of an ecosystem, since there are many building blocks and they all need to work together. Some parts are fun to code and develop, but the interface to the mains power supply requires good quality, approved, off-the-shelf products. It is not worth the risk of assembling something with a no-name relay from eBay. How could one know for sure that the material is flame-tested and approved for use in the UK?


I wanted to examine those products that were certified for UK use. Some very low cost products are available from overseas particularly from Asia but I believe some are self-certified and I was not about to take a risk by permanently running them inside my home. Lots of high quality stuff exists from overseas, but approvals are expensive for a reason; subtle things like the wrong plastic could cause a flame situation to occur if the electronics went wrong, or electrocution is a possibility too. The legislation has plenty of subtle things such as how mains cables should be attached and what distance there should be between wires. To add to that, there are electromagnetic compatibility (EMC) rules which are designed to prevent equipment interfering with TV and radios, and laws governing how frequently transmissions can occur. The CE marking doesn’t mean much unless there is a reputable firm standing up for their product. In the event of a claim for liability I would want that firm to be located in Europe.


I window-shopped for home automation products that I could work with and finally decided to try out the Energenie ecosystem. The products seemed to be of very reasonable cost, and the range looked like it covered many things that I would want home automation to do. The company has been around for many years, so this provided confidence too. This first blog post reports the initial findings and shows how to set up the Energenie solution for control and monitoring using a PC or mobile phone, and natural language based voice control using Amazon’s Alexa service. All of this can be set up within an hour with the Energenie solution.


The next blog post will explore the Energenie solution further and investigate how it can work together with the Pi.



What Problems does it Solve?

I actually had several use-cases for home automation.


One was to make my small apartment ‘upmarket’ so I can sell it for more money : ) I suspect a lot of people think home automation is a lot more complicated to install than it actually is, and therefore there could be good value-add to have this installed in an apartment. Many individuals/couples are away from their apartment all day and would appreciate being able to get some insight and control of their home remotely. The apartment already has a burglar alarm and video system, so home automation would be a nice addition.


Another use-case is to keep a light touch view on elderly relatives; it can be useful to see activity occurring in the home to make sure the relative is well.


As another idea for the elderly, a virtual voice controlled assistant could be very useful for people who may have trouble walking up/down stairs just to turn on the heating or switching off a light. Voice enablement will help out here. Taking this further, a home can have far more discreet physical buttons and controls if voice enablement is primarily used instead.


A very typical scenario where home automation can help out is to energy-save; the ability to get on-the-fly energy readings (either for the entire home, or more granular) provides insight and that drives consumer behaviour such as switching unused lights and TVs off more frequently.


Home security solutions can be improved with home automation; it becomes easy to automatically switch on home lighting when you’re out, to make it appear that someone is home. Timer devices are available but home automation provides a far cleaner implementation that can be programmed and schedules adjusted from anywhere and therefore makes it more practical to use. Home automation can provide insight into unusual activity even when an alarm has not been triggered. It provides deeper visibility. In a nutshell the opportunity exists to make home security and home automation better together.


For the engineer, home automation is important because it provides real-world sensor data that can be analysed and used to develop interesting future products. For example, I would love to know how long home lighting is switched on, to begin to understand how long LED products in homes could survive and how to improve them.


Can it be Installed in any Home?

This blog post will look in detail how to install and use home automation, but in summary there are several ways that a system can be installed. One typical scenario is to retro-fit it inside an existing home without touching any existing wiring. This is feasible and relies predominantly on the use of plug-in adapters which sit in-between the existing mains sockets and the connected device. It allows plug-in things like TVs and table lamps to be monitored or controlled.


Permanently wired home lighting can be controlled with some slight modification, by unscrewing the light switch on the wall and replacing it with a smart light switch. This can be achieved by nearly anyone provided some care is taken.


It is also possible to replace home mains sockets with smart mains sockets but this is an advanced activity that usually requires an electrician to install it. It is recommended to use the plug-in adapters if an electrician is not available.


For all the scenarios, an Internet connection is fairly essential.


Mi|Home Gateway

The gateway device which interfaces to all the rest of the Mi|Home ecosystem is really compact; it is only very slightly bigger than an ice hockey puck. There are just two ports on it: a USB connector for the 5V power supply (which comes with the Gateway), and an Ethernet port to attach the gateway to the home router. One dual-colour LED and a pinhole reset switch complete the external features.



The entire thing is small and unobtrusive, runs cool, and can be hidden from view. The top cover can be unclipped to look inside. There is not a lot to go wrong here, and it should provide many years of good service. The circuit consists of a fairly high-end ARM Cortex-M3 based microcontroller from NXP, an Ethernet interface and a very popular RF transceiver module from HopeRF. Good brand parts are used, like the Wurth Ethernet transformers. The enclosure is of a sufficient size to allow the antenna space around it for good range.


What looks like a standard debug port is also present. Lots of great potential to use this as a low-cost board for other projects too!



Using the Gateway is pretty easy; it is plug-and-play, with no configuration needed. You can take the code printed on the underside and apply it to the Mi|Home web portal once you have registered. For most users, there is no router configuration to do either. Just plug in the power supply and Ethernet connection, and provided you have the code you’re ready to start using it.


Protocols and Examining the Risks

It’s always good to examine these things. Armed with the knowledge, we can deploy solutions in the right scenario and avoid fitting them where there are security risks.


The communication between the gateway and the Mi|Home cloud service uses UDP packets and is very lightweight, typical payload size was around 48-69 bytes, with what looks like a heartbeat every five seconds or so. This is a tiny amount of data traffic (less than 2MB per day) and therefore it will not impact Internet usage allowances, and also opens up the possibility of using a 4G/LTE router for monitoring and control of remote locations. The transmission is unencrypted but I could see no username/password/person identifying information transmitted; the MAC address of the gateway is sent. For home automation generally the risk of vulnerability between the gateway and the cloud service is low because it is very difficult for an individual to capture and decode communication over technologies such as 4G or cable or DSL.
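The "less than 2MB per day" figure is easy to sanity-check from the observed traffic. Assuming the worst-case 69-byte payload every five seconds, plus 28 bytes of IPv4/UDP headers per packet (an assumption, since only payload sizes were measured):

```python
# Rough daily traffic from the observed figures.
payload = 69           # bytes, worst case observed
headers = 28           # IPv4 (20 bytes) + UDP (8 bytes), assumed
interval = 5           # seconds between heartbeat packets

packets_per_day = 24 * 60 * 60 // interval
bytes_per_day = packets_per_day * (payload + headers)
print(packets_per_day)                 # 17280 packets
print(round(bytes_per_day / 1e6, 2))   # about 1.68 MB per day
```

Even at the worst-case packet size, the total stays comfortably under 2MB per day, which is what makes a metered 4G/LTE link practical.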


For those unwilling to connect to a cloud service there is an add-on board available for the Pi, which, with some coding, can be used to control the devices locally.



The radio communication between the gateway and devices is based on the documented OpenThings specification (registration required), which means (in theory; I have not tried it) that the Energenie solution is flexible enough for you to design your own additions. There are no fees involved to use the specification, and modifications are permitted too. The radio communication occurs in the 433MHz band using frequency shift keying (FSK).


There is the risk that somebody could record radio transmissions and replay them; it requires some technical skill and it is up to individuals to determine if this poses a security risk in the environment where they are installing their home automation. With low power transmissions between the gateway and devices, it would require someone to be nearby in order to capture radio transmissions. The technology, like most of the current home automation solutions, will be susceptible to radio jamming signals. Due to the ease for jamming, the Energenie solution cannot be used as a replacement for home security solutions (burglar alarms, video cameras, etc).


Using the Mi|Home Cloud Service and Mobile App

I browsed to the Energenie Mi|Home website and registered for free and entered the code printed on the back of the Mi|Home gateway, and it was immediately registered. It is all very intuitive and once the gateway is added you can give it a name and start adding additional devices by clicking on ‘Pair New Device’. As soon as you do that the web page shows the entire product range.



The colour coding is roughly proportional to functionality. The basic products are blue and provide simple control in one direction. The pink items are monitoring products that gather information but do not have any controlling capability. The purple items are full-featured and offer both monitoring and control capabilities. This colour coding matches the glossy card packaging of the devices too, so you can easily see the broad functionality that you are getting.


The product range can be configured in a consistent way. The procedure is to connect/plug in the device so that it is powered up, then hold a button down for 5 seconds until an LED flashes. Assuming that ‘Pair Devices’ and then ‘Start Pairing’ had been clicked in the web browser first, the device will become attached and will appear in the Devices List; in the screenshot below I added a door sensor and a mains control adapter:



It is actually possible to do this from a mobile phone too. The pairing for the device can be done anywhere within radio coverage of the Mi|Home gateway using the Mi|Home app.



You can also assign custom friendly names to each device; this is handy when you have many devices connected, and is also useful for voice control by device name (see further below). The app is easy to use, and during my testing I didn’t notice any bugs or crashes. There is also the ability to integrate with IFTTT, which lets you set up rules such as “if the weather is cold, then turn on the heating”; however, I’m not keen on IFTTT due to the need to have Facebook/Twitter for the free account in order to create your own applets. There are other ways of achieving such things, and they will be explored in another blog post.



An interesting feature in the Mi|Home app is the ability to 'geofence'. This allows the system to control devices based on the location of the mobile phone. An example would be to turn on the heating if you’re approaching home.


In summary I thought the app was not bad, it is useful for checking up on the status of things in your home and controlling them of course. There are no fancy features like the ability to have a status widget on your mobile phone or to log data.


With the app installed, it was time to start pairing and exploring all the interesting devices!


Energenie Mi|Home Adapter Plus

The Energenie Adapter Plus is a very cool advanced ‘smart plug’. I thought it was great. It has a number of features. There is a small button on it and an LED, and any connected equipment can be powered or unpowered by directly pressing the button on it. The status is sent back to the home gateway, so that the user can check via the web portal what the actual status of the Adapter Plus is. This product is within the purple range, i.e. more feature-rich and with control/monitoring capability. Furthermore the Adapter Plus can be used to measure power consumption. This is extremely useful even if you’re not interested in actual energy usage, because it can tell you if the appliance at the end of the cable is actually switched on or off by observing the power consumption. So, you can use it to tell if (say) a television itself is actually powered up or not.



It was interesting to examine it in more detail, to see precisely how it functioned and how accurate it could be.


It has security screws and once they were removed, I was impressed at the quality of construction. The earth and neutral connections are direct from the mains plug side to the mains socket side of the product. The Live connection is switched of course, and all wires are crimped to the metal components of the plug/socket portion of the design. The PCB is made of fiberglass and there is a fair amount of circuitry. The radio transceiver module is a HopeRF board again, with a helix shaped antenna soldered perpendicular to the circuit board on the side hidden from view.



There is a 2 milliohm shunt resistor for measuring current. The other side of the board contains a nice DC-DC converter circuit. The AC mains input is rectified and directly stepped-down using the DC-DC converter. This type of design will run cool and in practice I could not observe any warmth of the device during operation. There is also a varistor for protecting the circuitry from excessive mains spikes, for hopefully many years of good service. A dedicated IC performs the energy measurement and communications protocol handling before passing the data to the HopeRF module for transmission. The dedicated IC from Sentec handles reactive loads (i.e. it can measure real power) and the datasheet states that power measurement accuracy is 2% or 2W, whichever is greater. Although not spectacular, this is a reasonable level of accuracy for such a device.
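A quick back-of-the-envelope calculation shows why a 2 milliohm shunt contributes almost nothing to heating; assuming a hypothetical 10A load (roughly 2.3kW at 230V mains, far above what most appliances draw):

```python
# Voltage drop and dissipation in the current-sense shunt.
r_shunt = 0.002    # ohms (2 milliohms, from the teardown)
current = 10.0     # amps, a hypothetical heavy load

v_drop = current * r_shunt              # V = I * R
p_dissipated = current ** 2 * r_shunt   # P = I^2 * R
print(v_drop)        # 0.02 V dropped across the shunt
print(p_dissipated)  # 0.2 W dissipated as heat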



The mains is switched using a relay which has UL and TUV certificates. In summary I thought the design was good, I liked that it had some protection against spikes from mains input noise, the power consumption feature is really cool to see what devices are actually powered up, and a push-button switch to be able to turn devices on/off locally if desired.


Energenie Mi|Home Open Sensor

The Open Sensor does exactly what its name suggests: it reports when something like a door or window has been opened or closed. It is a small device powered by a single AAA cell. It has low power consumption: I measured about 50uA (periodically varying between roughly 40uA and 60uA), and the current of course increases during radio transmission whenever an open or close event occurs. Based on this, Energenie’s estimate of 1-1.5 years battery life appears accurate. I liked that it uses a standard AAA cell, because they are cheap to replace compared to the small 12V batteries that used to be common in wireless sensors. The sensor has the typical two-part magnet and reed switch implementation.
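The battery life estimate can be sanity-checked from the measured quiescent current; assuming a typical alkaline AAA capacity of about 1000mAh (an assumption, since actual capacity varies with load and temperature):

```python
# Battery life estimate from the measured quiescent draw.
capacity_mah = 1000.0    # assumed alkaline AAA capacity
quiescent_ma = 0.050     # 50 uA average measured between events

hours = capacity_mah / quiescent_ma
years = hours / (24 * 365)
print(round(years, 1))   # about 2.3 years on quiescent draw alone
```

The quiescent draw alone would allow roughly 2.3 years, so the quoted 1-1.5 years leaves sensible headroom for transmission bursts and DC-DC converter losses.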



The design is very nice; there is an internal 3V DC-DC step-up converter that presumably runs continuously, and a low-power microcontroller. As mentioned, a reed switch and magnet perform the actual detection. The transmitter is a tiny 6-pin SOT-23 device, most likely another HopeRF part.



There is a very discreet faint LED that shines through the white plastic, and it is useful for confirming that the battery is functioning because it flashes briefly each time the door is opened or closed.


Inside the box there were lots of mounting bits and pieces for attaching to doors/windows, and a card instruction leaflet.



Energenie Mi|Home Double Socket

In the blue range (i.e. control, not monitoring capability), I tried out the Mi|Home mains wall socket. This product can be fitted into a new electrical installation, or retrofitted. Its connections are identical to any standard double socket, and I liked that it had two earth terminal connections which simplified installation.



The unit is quite deep, and it will be a real struggle to fit it into a 25mm deep back box if there is more than one mains cable coming into the box (more than one mains cable is common, since a ring main results in two cables into each box). However, a 25mm back box on a ring main is rare; homes usually have deeper boxes. With a 35mm back box (as in these photos) there is no issue; I tested with three mains cables and it just about fitted. With two mains cables it fits just fine.



Another approach for retrofitting is to leave the existing mains socket where it is, and fit the Mi|Home one alongside it as a spur connection. As a result, a 25mm back box is fine since the spur connection only has one mains cable. Also, if you didn’t want to go making too many holes in your wall, you could always fit it alongside to an existing mains outlet but in a surface mount box; that way you only need to plaster and repaint a very small area. This latter option should also result in better radio coverage so it would be worth considering if the Mi|Home gateway is positioned far away (or a second gateway could be purchased; multiple ones can be added in the Mi|Home solution).


So, to summarize: if you’re installing with a single cable, a 25mm back box is OK; otherwise you will definitely need at least a 35mm back box, which will be tight but feasible with three mains cables. If you have the choice, go deeper.



In terms of aesthetics and the finish of the plastic I think it looks quite reasonable, no better or worse than typical home mains wall sockets. There are also versions with brushed steel, chrome or nickel finishes if you need to match others in the home.


Energenie Mi|Home Light Switch

Another item in the blue range are the Mi|Home light switches. They are optionally available in the metal finishes just like the mains wall sockets.



It has a depth of about 22mm, and so it requires a back box at least 25mm deep (usually the boxes are recessed by at least a few millimetres into the wall, which also helps to provide sufficient clearance for the mains cable). The photos here show a 25mm box.



If you’re replacing an existing light switch then the chances are that the back box will be more than 25mm deep, however I have seen very shallow back boxes (15mm) as was the case in one room at home, and these would not be suitable. It isn’t difficult to make any cavity deeper (no need to do that with a stud wall) and replace the back box of course.



What seems to be missing from the range currently are double light switches. This actually made it awkward to install in a couple of rooms, since I wanted individual control of the two lights in the room.


Virtual Voice Assistant with Amazon Alexa

Amazon, Google and Apple all offer virtual voice assistant services. If you’re not familiar with them, they basically consist of small Internet-connected devices (usually WiFi enabled) that have a loudspeaker and an array of microphones inside. By saying a keyword (‘Alexa’ in the case of the Amazon service) the device wakes up and streams any subsequent speech to a cloud service which performs speech recognition and natural language processing to try to discover the intent of the speech. Once that is done, it formulates an intelligent response based on the wealth of information searchable on the Internet and streams a synthesized voice response which gets played out of the speaker on the device. I also find it handy for playing music, or for answering all silly questions from my little nephews : )


So, the virtual assistants today consist of two elements: the physical hardware and the cloud service. Recently Google came out with their AIY hardware kit, which also provides a virtual assistant using the Google cloud service, with the Raspberry Pi and AIY kit forming the physical hardware device. Meanwhile, the Raspberry Pi also has another multi-functional hardware attachment for similar purposes called the MATRIX Creator. In summary, there are plenty of options.


I decided to try Amazon’s Alexa voice assistant service. It uses physical hardware known as the Amazon Echo range, and the Mi|Home service directly integrates with it. There are several models in the Echo range; the one in the photo is called the Echo Dot and costs about £50. There are some buttons on top but in normal use they are not used; the entire interaction can be by voice.

(Picture source: Amazon)


The setup is extremely easy; I signed into my Amazon Alexa account and searched for the Energenie ‘skill’ and enabled it.



Next, by clicking on the Smart Home item on the menu on the left, a ‘Discover’ button appears. I pressed that and less than a minute later the Mi|Home devices appeared.



That’s it! Now the home can be controlled by speaking to Alexa. The devices can be named anything in the Mi|Home web portal or mobile phone app, so turning a device on is as simple as saying (for example if the device has been named ‘bedroom lamp’) “Alexa, turn on the bedroom lamp”.


Application Programming Interface (API)

The Mi|Home cloud service has what is known as an Application Programming Interface (API). This offers control of your home programmatically. In other words, you can connect additional software and services to control the home. I did a basic ‘hello world’ type of test to confirm that I could connect using the API, but further use of the API will be explored in more detail in a subsequent blog post.
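A 'hello world' of the kind mentioned above might look like the following sketch. The endpoint URL and the HTTP Basic auth scheme (account email plus API key) are my assumptions about the Mi|Home API, not something confirmed in this post, so treat them as placeholders to check against the official API documentation:

```python
import base64
import json
import urllib.request

# Assumed Mi|Home endpoint for listing registered subdevices.
API_URL = "https://mihome4u.co.uk/api/v1/subdevices/list"

def build_request(email: str, api_key: str) -> urllib.request.Request:
    """Build an authenticated POST request using HTTP Basic auth."""
    token = base64.b64encode(f"{email}:{api_key}".encode()).decode()
    return urllib.request.Request(
        API_URL,
        data=json.dumps({}).encode(),
        headers={
            "Authorization": f"Basic {token}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

def fetch_subdevices(email: str, api_key: str):
    """Perform the call; requires network access and valid credentials."""
    with urllib.request.urlopen(build_request(email, api_key)) as resp:
        return json.load(resp)
```

Calling `fetch_subdevices("you@example.com", "YOUR_API_KEY")` would return the JSON list of devices known to your account, which is enough to confirm the API connection works.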



Generally, I’ve been quite impressed with the Energenie Mi|Home solution. I like that the gateway and all the devices appear well constructed, even on the inside for those that I took apart. Furthermore, the electronics look designed for a long service life with cool operation. I didn’t observe any safety problems; mains wires were crimped and separated from each other in the plastic moulding.


In terms of functionality, Energenie have made it easy to choose what you need using their colour-coding scheme.


I also like that everything is actually really good value for money. The hub device, the Mi|Home gateway, is not expensive at all, just £39+VAT currently from CPC.


In contrast, LightwaveRF’s hub is almost twice that, currently £78 from Amazon. The Hive hub is a similar price. Given that you might need a couple of gateway/hub devices for adequate coverage of a home, the cost difference is quite large.


The Hive plug-fitting mains control device costs £31 and, although described as a ‘British Gas Innovation’ (British Gas is an energy company), it does not support energy monitoring. In contrast, the Mi|Home Adapter Plus supports control and energy monitoring at just £18.50+VAT from CPC.


To me it seems an easy decision to go with the Energenie products currently. Even if in future years one were to adopt a different home automation solution, the Energenie offering has another trick up its sleeve to help with that too: a radio board is available for the Pi, so an owner could continue to use the hardware provided they were willing to do the integration work (coding). The Mi|Home cloud service is also free to use and has a northbound application programming interface (API), so a user could integrate directly with that as well.


Improvements that I would like to see to the Mi|Home solution would be a dual light switch, and a thermostat. There are Mi|Home radiator valves, but I’d prefer to directly control the entire heating system.


I’m excited that I have the beginnings of a decent home automation solution, and in my next blog post I’ll explore how to integrate this with the Pi.



I'm currently working on a professional RS422/RS485 shield for the Raspberry Pi. I wasn't satisfied with the shields on the market: they are very simple and have some disadvantages. My shield comes with the following features:


  • RS422 (full duplex) support
  • RS485 (half duplex) support
  • galvanic isolation between the Pi and the interface
  • indicator LEDs for RX and TX activity
  • switchable pull-up, pull-down and termination resistors
  • different modes for send/receive switching (auto, GPIO, always transmitter, always receiver)
  • automatic switching via monoflop
  • all options adjustable via DIP switches
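As a sketch of how half-duplex RS485 traffic might be exercised from the Pi once a shield like this is attached, here is a Modbus-RTU-style frame builder. The pure-Python CRC and framing below are standard Modbus RTU; the serial port name in the comment is an assumption and is not part of this shield's documentation:

```python
def crc16_modbus(data: bytes) -> int:
    """CRC-16/MODBUS: reflected polynomial 0xA001, initial value 0xFFFF."""
    crc = 0xFFFF
    for byte in data:
        crc ^= byte
        for _ in range(8):
            if crc & 1:
                crc = (crc >> 1) ^ 0xA001
            else:
                crc >>= 1
    return crc

def build_frame(address: int, function: int, payload: bytes) -> bytes:
    """Build an RTU frame; the CRC is appended low byte first."""
    body = bytes([address, function]) + payload
    crc = crc16_modbus(body)
    return body + bytes([crc & 0xFF, crc >> 8])

# On the Pi, the frame would then be written with pyserial, e.g.:
#   import serial
#   with serial.Serial("/dev/serial0", 9600, timeout=1) as port:
#       port.write(build_frame(0x01, 0x03, b"\x00\x00\x00\x01"))
```

A handy property for self-checking: running the CRC over a frame that already includes its CRC bytes yields zero, so a receiver can validate a frame with a single `crc16_modbus(frame) == 0` test.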


I'm curious about your feedback.


RS485 Raspberry Pi


Needing to develop a MagicMirror project flexible enough to fit different contexts and include extra features (first of all, user interaction), I started exploring what has already been done for the Raspberry Pi.

The project evolved through two steps: an easy step, implementing the open MagicMirror2 design on the Raspberry Pi, and a complex step, developing the parts not yet available.

This first part describes the steps I followed for the easy part.



Platform scenario

This magic mirror will work as a development platform and prototyping base: it should be flexible, supporting upgrades and changes in the future. In addition, the platform design should be easy to customise depending on installation needs and environment.


Building the structure

For the external frame structure I used good-quality wood, for a good aesthetic impact. The measurements depend on the 15" 4:3 HDMI display I had available, which is perfect for the development prototype, but any size of HDMI display can be used as well.

The other element impacting the form factor and size is the mirror (we are not obliged to keep it square or rectangular). There are several ways to make a two-way mirror: using glass and a semi-transparent adhesive film, buying a pre-built glass mirror, or buying an acrylic one.

To give the right impact, the mirror surface should be neither too big nor too small compared to the monitor size.

For the mirror I used a two-way acrylic mirror from Tap Plastics with the following dimensions:

  • Thickness: 3/16 in (0.177 in)
  • Width: 20 in
  • Length: 16 in
  • Cost: $73 (the most expensive part, excluding the Pi and monitor)

The frame was built a few mm larger internally, with 5 cm of depth, to host the mirror, the back supports and the electronics.


Wooden frame

The gallery below shows the wooden frame details. I applied a transparent coating paint, specific for wood, to its surface. The cost of the finished frame is about $15.

{gallery} Wooden frame construction details


The wooden frame front side


The wooden frame back side


Detail of the corner mounting of the frame


Detail of the internal side of the frame

Fixing the mirror

After removing the front protective sheet I put the mirror plate inside the frame. Soft adhesive seal tape on the inside borders of the frame prevents the mirror plate from sliding and keeps it better in place.


The images below show the final effect of the soft seal.


For now, only the front protective sheet should be removed. A lot of work has to be done on the back before we fix everything in place and finally test the mirror effect.


Assembling the mirror back side

Looking at this project, it may seem that some solutions were chosen to make things complex: for example, the mirror pressed against the soft seal instead of being glued, or held with screws and supports to keep everything in place.

As mentioned before, this is a magic mirror development platform; every component should be easy to replace, and it should be possible to assemble and disassemble the entire structure.

In this design I first thought of a modular system: based on this development design, it should be possible to build a number of variants depending on the features the user wants to include. For the same reason, I have not used any recycled parts, but components (the cheapest and most reliable to find) available on the market, plus some custom-designed 3D-printed parts.


Instead of placing black tape or another kind of opaque adhesive film on the back side of the mirror, I used a black polypropylene plastic sheet. The Raspberry Pi will support screen rotation, so it is possible that in the future a different screen rotation or a different screen size will replace the current settings. The gallery below illustrates the process of creating the black frame.

{gallery} Black back frame


The propylene black thin sheet


The black sheet should be cut to the same size as the mirror


Measuring the screen dimensions to cut the rectangular area in the black sheet


The measurements should be exactly the size of the visible screen, without the frame edge


Placing paper tape to safely mark the cutting area


The cutting area should be exactly horizontally centred; the vertical position should be 1/4 lower than the vertical centre


The back sheet after cutting. Now the paper tape can be removed.


Double checking the part to fit exactly before fixing it


After removing the protective plastic sheet from the back of the mirror the black sheet is positioned as a second layer.

Up to this point, the extra cost we have added is less than $5.
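The centring rule described in the gallery can be written down as a small calculation. Here I assume the 15" 4:3 panel has a visible area of 12 x 9 inches, and I read "1/4 lower than the vertical centre" as shifting the cut down by a quarter of the centred top margin; that interpretation is mine, not stated in the build notes:

```python
def cut_position(mirror_w, mirror_h, screen_w, screen_h):
    """Top-left corner of the screen cut-out in the black back sheet.

    Horizontally centred; shifted down from the vertical centre by a
    quarter of the centred top margin (one reading of the rule above).
    """
    left = (mirror_w - screen_w) / 2
    centred_top = (mirror_h - screen_h) / 2
    top = centred_top + centred_top / 4
    return left, top

# 20 x 16 inch mirror, 12 x 9 inch visible screen area (15" 4:3 panel)
print(cut_position(20, 16, 12, 9))  # (4.0, 4.375)
```

So with these dimensions the cut-out would start 4 inches in from each side and about 4.4 inches down from the top edge of the sheet.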


Keeping the LCD in place

The LCD screen is the heaviest part of the structure and is difficult to keep in place. I explored several methods used by other magic mirror builds, but none of them was efficient; a modular, replicable project should adopt solutions that are easy to make and reproduce (and cheap, too).

For both the back frame and the magic mirror back cover (the cover of the assembled parts) I used 3 mm thick MDF; it costs only a few cents and is easy to cut and prepare (it can be cut with a utility knife), but is sufficiently strong to do its job.


A first MDF frame was cut to the exact size of the internal frame. Inside it, a rectangular cut removed an area aligned with the LCD screen position set previously. The cut should be exactly the size of the screen bezel that is inserted into it; this keeps the screen stable when the magic mirror stands vertically in use.


The screen is inserted and fixed with black adhesive tape as shown in the image above. This part not only keeps the LCD in the right position but also holds the other layers in place, supporting the electronic parts and wires.

The two images below show the back of the magic mirror with the screen positioned inside the rectangular cut.


Adding the Raspberry Pi and wiring all together

The last component that should be added to the modular magic mirror is the Raspberry Pi; for this version, a Pi B V1.2 with the WiPi WiFi USB adapter was used. The board was placed on the top of the back side using a 3D-printed VESA support screwed to the MDF layer, as shown in the images below.





Wiring is not difficult; a short HDMI cable is suggested to connect the Pi to the HDMI LCD, avoiding overly long wires inside. To reduce weight and avoid extra heat inside the magic mirror box, the power supplies are left outside the structure.


Raspberry Pi Cooling

A series of holes was also drilled in the main back cover (another 3mm MDF layer) for Raspberry Pi cooling. After the system had been running in test for some days, I decided that for now a cooling fan is not essential for the health of the device.



Adding the Pi devices

After installing the Pi I added a Pi Camera V2 and an NFC/RFID shield. For the development version it is sufficient to keep the shield on top of the Raspberry Pi; in a production model this device should be in a more accessible place, e.g. one of the frame sides.



Fixing the back layers firmly

After the final assembly, the internal MDF layer should be fixed firmly to press against the other layers: the mirror and the black frame. To keep the entire structure easy to remove and change, custom 3D-printed supports were designed and screwed to the internal sides.



{gallery} Plastic blocks fixing the internal layers


Angular and linear blocks design


3D printed supports detail


Side support detail


Side support detail


Finished internal assembly


Last touch: positioning the camera

The camera support has also been designed with a modular approach in mind.


As shown in the image above, a small camera case hosts the Pi Camera V2, placed on top of the wooden frame. The support is built in two parts glued together; it is easy to remove, so the camera can be replaced with a black model if needed. In a production version the Pi Camera would be hidden behind the frame, leaving only a small hole for the lens.

IMG_20170601_065258.jpg IMG_20170601_065412.jpg


Finished setup and a look ahead


The image above shows the Pi Magic box complete and running. The essential base software is:


  • Raspbian Jessie with PIXEL
  • Node.js
  • MagicMirror2 development environment


The NFC/RFID tag reader will be used for user identification, while the Pi Camera will be used for (at least) gesture recognition. This aims to provide a great add-on to the currently available MagicMirror2 projects, which do not support these features. Adding user interactivity is essential to this project: it means integrating the Magic Pi build into an IoT context.
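The user-identification idea can be sketched as a lookup from tag UID to a user profile that selects which mirror modules to show. The UIDs, names, and module lists below are all hypothetical, and a real build would read the UID from the NFC shield's driver library rather than hard-coding it:

```python
# Hypothetical mapping from RFID tag UID (hex string) to a user profile
# selecting which magic mirror modules that user sees.
USERS = {
    "04a2b3c4": {"name": "Alice", "modules": ["clock", "calendar", "weather"]},
    "047f1e2d": {"name": "Bob", "modules": ["clock", "news"]},
}

GUEST = {"name": "Guest", "modules": ["clock"]}

def profile_for(uid: str) -> dict:
    """Return the profile for a scanned tag, falling back to a guest view."""
    return USERS.get(uid.lower(), GUEST)

print(profile_for("04A2B3C4")["name"])  # Alice
```

The fallback to a guest profile means the mirror still shows something sensible when an unknown tag (or no tag) is presented.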


In the next blog I will introduce the standard and custom software that will complete the project.

I'm not sure how I managed to miss this, perhaps because it's still in 'developer preview', but a version of Android is officially available for the Raspberry Pi 3.


If you're really keen, you can download the image here for the developer preview 3.1.


Thanks to the recent release of Google's AIY Project with the latest issue of The MagPi magazine (already selling for £40+ on eBay), the official guide for the project (which is rumoured to go on sale separately from Google at a later date) links to a GitHub repository for running the AIY Project on 'Android Things', with the aforementioned link to the developer preview.



Google also has an Android image with the AIY Project as opposed to it running on Raspbian.



It turns out there's a full site for Android Things, and it runs on more than the Raspberry Pi 3; it also runs on the Intel Edison. However, it appears to be as 'bare bones' as Windows 10 IoT Core, intended as a deployment platform for apps via the adb interface, though it likely still means that the full graphical interface could be run on it. It's not without issues, though, and is still very much in development:




I, for one, welcome our Google overlords. What are you going to make, and will this cause you to check out Android if you haven't already?
