
Music has always been driven forward in part by the technology used to make it. The piano combined the best features of the harpsichord and clavichord to help concert musicians; the electric guitar made performing and recording different forms of blues, jazz, and rock music possible; and electronic drum machines both facilitated songwriting and spawned entire genres of music in themselves. Code has become a part of so many different ways of making music today: digital audio workstation (DAW) software records and sequences it, digital instruments perform it, and digital consoles at live music venues process and enhance it for your enjoyment. But using Sonic Pi you actually perform the music by writing code, and Sebastien Rannou used this technique to cover one of his favorite songs, "Aerodynamic," by electronic music legends Daft Punk.


Q: To start off, for someone like me who knows little to nothing about code in general, what exactly is happening in this video!? I’ve watched it several times in full, and I’m still not sure!


Sebastien: It's a video where a song by Daft Punk is played from code being edited on the fly. This happens in software called Sonic Pi, which is a bit like a text editor: you write code in the middle of the screen and it plays music according to the recipe you provided. Sometimes you can see the screen blink in pink; this is when the code is evaluated and Sonic Pi picks up the modifications. A bit after that, you'll hear something change in the music. It's a bit as if you were writing a recipe with a pencil and at the same time instantly getting the result on your plate.



Q: Among the most famous features of Daft Punk’s music is the extensive use of sampling, i.e. using existing recordings that are re-purposed to create new compositions. In covering a song that is sample-based, as is the case with "Aerodynamic" - which is based on a Sister Sledge track - how did you go about doing a cover?


S: This is one of my favorite songs, but the choice of doing this cover was motivated more by the different technical aspects it offers. My initial goal was to write an article about Sonic Pi, so I wanted a song where different features could be shown. "Aerodynamic" was good for this purpose, as it's made of distinct parts using different techniques: samples, instruments, audio effects, etc. Recreating the sampled part was especially interesting, as there isn't much more to that section than the sample itself, so I had one of those 'a-ha' moments when I got the sequence right, and it surprised me.


Q: How did you come to use Sonic Pi? Do you feel it has any particular strengths and weaknesses in what it does?



S: I really like the idea of generating sound from code; I think it makes a lot of sense, as there are many patterns in music which can be expressed in a logical way.


I started playing around with Extempore and Overtone, which are both environments for playing music from code. The initial learning curve was harder than I expected, as they each required learning a new language (Extempore comes with its own Scheme and DSL languages, and Overtone uses Clojure). The initial time was spent learning a new language and environment, which takes away some of the fun (not the technical fun, but the musical fun). On the other hand, Sonic Pi is really easy to start with: one of its main goals is to offer a platform to teach people how to code, and I think Sam Aaron (the creator of Sonic Pi) did a very good job on this. What's surprising is that, even though it was initially made to teach coding, you don't feel limited; you can go off and do most of the crazy stuff you need to express yourself musically.


One thing which is a bit hard to get right at the beginning is that live coding environments aren't live in the same way an instrument is: you don't get instant feedback when you tweak a parameter within Sonic Pi, as modifications are usually picked up on the next musical measure. So you have to think about what's going to happen in the next bar or two, and try to imagine how it will sound. This takes some practice.




Q: There’s quite a bit of discussion about how Daft Punk recorded the “guitar solo” in this track; how did you go about covering it?


S: I don't know much about the theories of how they did the guitar solo part, which I naïvely assumed they did digitally. I did a spectral analysis of the track and isolated each individual note to get its pitch and an approximation of its envelope characteristics (the attack, decay, sustain, and release: essentially how the sound develops over time). Then it was just a matter of using a Sonic Pi instrument that sounded a bit like a guitar and telling it to play them. I then wrapped it in a reverb and a bitcrusher effect (which reduces the audio's bit depth and/or sampling rate) to make it sound a bit more metallic. Because the notes are so fast during this solo, it sounds kind of good as is (unlike the sound of the bells at the beginning; more on this later!).


Q: As you were working on your cover, did you run into any notable technical problems, and how did you solve them?


S: Yes! I spent a lot of time trying to get the bells to sound right, but failed. Usually when an instrument plays a note, it has a timbre: a sort of signature which can be more or less complex. For instance, a violin has a very complex timbre, whereas a wheel organ's is much simpler. This complexity shows up when you look at the audio frequencies while such an instrument plays a note: there is usually one frequency that outweighs the others (the frequency of the pitch, or the fundamental), and a myriad of others which correspond to the timbre.


The timbre of the bells at the beginning of "Aerodynamic" is very complex, and it evolves in a non-trivial way. I tried different approaches to reproducing it, including doing Fourier transforms to extract the bands of main frequencies at play at different intervals and converting these to Sonic Pi code (more about this here). Sonic Pi comes with a very simple sine instrument, which plays only one frequency, so the idea was to call this instrument several times with different frequencies all at once. I got something that sounded like a bell, but it was far from sounding right. I ended up using the bell instrument that also comes with Sonic Pi, playing it at different octaves at the same time and wrapping these in a reverb effect. That's kind of a poor solution, but at least I had fun in this adventure!
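For readers curious what "calling the sine instrument several times with different frequencies" looks like outside Sonic Pi, here is a minimal additive-synthesis sketch in plain Python. The partial ratios, amplitudes, and decay rates below are illustrative assumptions, not values extracted from the actual track:

```python
# A bell-like tone approximated by summing a few decaying sine
# partials: each has a frequency ratio to the fundamental, a relative
# amplitude, and a decay constant. Inharmonic ratios like 2.76 are
# typical of bell spectra, but these numbers are made up for the demo.
import math

def bell_samples(fundamental, duration=1.0, rate=44100):
    partials = [(1.0, 1.0, 3.0), (2.0, 0.6, 4.0),
                (2.76, 0.4, 5.0), (5.4, 0.25, 7.0)]
    n = int(duration * rate)
    out = []
    for i in range(n):
        t = float(i) / rate
        s = sum(a * math.exp(-k * t) * math.sin(2 * math.pi * fundamental * r * t)
                for (r, a, k) in partials)
        out.append(s)
    return out

samples = bell_samples(440.0, duration=0.5)
```

Real bell partials also shift and beat against each other over time, which is exactly the "evolves in a non-trivial way" part that made the sound so hard to reproduce.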


Q: Have you used Sonic Pi to create original music? If so, how did you feel about that process? If not, how do you imagine it would be?


S: Yes, I have, using different approaches. For example, I tried using only Sonic Pi, which ended up sounding a bit experimental, and I've also tried composing in a DAW (e.g. Pro Tools) and then sampling that so it can be easily imported into Sonic Pi. With this approach I can use Sonic Pi as a sequencer and wrap the samples in effects. I did another cover using that method, this time of a Yann Tiersen song, and also a few songs with my band, Camembert Au Lait Crew (SoundCloud). The code can all be found here on GitHub.




Q: Do you have any plans for future music projects using Sonic Pi?


S: There are recent changes in Sonic Pi version 3 which I'm really excited about, especially the support for MIDI: you can now control external synths with code from Sonic Pi while keeping the ability to turn knobs on your synth. I haven't tried this yet, but it's definitely what I want to do next. Sam Aaron did a live coding session recently showing this, and I find it amazing:

The musical collective Sonic Robots were inspired by one of the most famous electronic instruments of all time, the Roland TR-808 drum machine, and created a live musical installation in which physical instruments recreate the purely synthesized sounds of the legendary 808. We asked their founder some questions about the MR-808 interactive drum robot.




Q: What was the origin of the MR-808 project? When I first watched the video of it at the Krake Festival I couldn’t stop smiling; do you recall any particularly memorable reactions that people have had to it?


Moritz Simon Geist, founder of the Sonic Robots collective: I started out as a young hacker and tinkerer when I was 10, taking apart radios and electronic devices from my parents. I come from a music-centered family, having been taught piano, clarinet, bass, and guitar. At some point I combined these two things - music and hacking. In 2010 I thought I should sum up all the experiments of my last few years in one piece, and came up with the robotic 808. In classic fashion, I got the idea at night in the bar, over a beer. Once I got the idea it was such an obvious thing - to do electronic music with robots - that I feared that somebody else would do it before me during the two and a half years it took to build the MR-808. Of course, that never happened.


And the first question that people ask is: “Craaazy! How long did it take to build it?”


Q: The Roland TR-808 is famous for many reasons, but maybe its best known feature is its synthesized bass drum sound. How did you go about recreating this legendary sound, which has practically become the basis for some electronic music styles?


M: Yes, the 808 is famous for its bass drum, and maybe the clap. At the beginning of the build, I did nearly a year of experiments; initially I wanted to take a "real" 18-inch bass drum from a drum set, but that doesn't sound at all like the 808's bass drum. The electronically generated 808 bass drum is basically a sine wave with an attack and release curve. So I searched for sounds that come close to sine waves in real life, and ended up using a very short bass drum string. For my latest robots, I optimized that and use metal tongues, similar to a kalimba's. They sound surprisingly similar to a real 808 bass drum, really boomy.


Since I've been making robotic music as my living for nearly three years now, my workshop and storage have filled up with experiments, parts, and unfinished robotic instruments. I still have enough plans for crazy instruments in my drawer to build music robots for the next few decades.



Q: How does one program the MR-808? Have you integrated it into any live performances?


M: Actually, it was meant to be an instrument in the first place! I did a lot of performances in 2012 and 2013, alone and with Mouse on Mars. At some point I had so many problems with my back - the installation weighs 350 kg - that I had to stop, and I started building lighter robots. The MR-808 is still on display as an interactive installation at festivals and galleries, but not for shows anymore.


The MR-808 can be played via MIDI, and so by anything that spits out MIDI. For the interactive version we built a collaborative sequencer that outputs MIDI signals. The sequencer is a SuperCollider patch running on the Raspberry Pi. There is also a small web server providing a simple website with a step sequencer. Two Nexus 2 tablets serve as the interface, connecting to the Raspberry Pi via Wi-Fi; they display the sequencer, which in turn controls the robot. We also blogged about it here in detail, and it's freely available on GitHub.
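The core of such a step sequencer is small: a grid of on/off steps per drum, converted into timed note events for a MIDI sender. This is a hypothetical Python illustration, not the Sonic Robots code (which is on their GitHub); the note numbers follow the General MIDI drum map and the 120 BPM tempo is an arbitrary assumption:

```python
# Turn a 16-step pattern per drum into a time-sorted list of
# (seconds, midi_note) events that a MIDI output could play.
MIDI_NOTES = {"bassdrum": 36, "snare": 38, "hihat": 42}  # GM drum map

def pattern_to_events(pattern, bpm=120):
    """pattern: {"bassdrum": "x...x...x...x...", ...}, 16 steps of
    16th notes, 'x' = hit. Returns sorted (time_in_seconds, note)."""
    step_len = 60.0 / bpm / 4          # duration of one 16th note
    events = []
    for drum, steps in pattern.items():
        for i, ch in enumerate(steps):
            if ch == "x":
                events.append((i * step_len, MIDI_NOTES[drum]))
    return sorted(events)

events = pattern_to_events({
    "bassdrum": "x...x...x...x...",
    "hihat":    "x.x.x.x.x.x.x.x.",
})
```

In the real installation this event stream goes out as MIDI to the solenoid drivers, while the web interface edits the pattern dictionary live over websockets.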


Q: Why did you choose the Raspberry Pi to be part of this project? What advantages does it offer?


M: As everyone knows, the Raspberry Pi is the platform when it comes to lightweight prototype installations. As I was looking to reduce the weight of the overall installation, I was not keen on taking a full-blown laptop with me. Additionally, the data processing - providing a simple web server and running a SuperCollider patch - is a perfect fit for the Raspberry Pi. We are currently using a Pi 3, with a small TFT and customized restart and power-off buttons connected to some IO pins. It's a workhorse.




Q: As you were putting together the MR-808, did you run into any notable technical problems, and how did you solve them?


M: So many, I couldn't name them all! One funny thing: when we were building the 16 big push buttons for the bottom of the installation, we had to find a 1:12 model of the original buttons, which of course doesn't exist.


The 3D printing we use now wasn't available to us back then, so we ended up replicating the buttons with a pizza oven, a vacuum cleaner, and a self-made mold. The process is called "thermoforming," and we did it hacker-style on a zero budget.


IT-wise, one big issue was the synchronization of the web interface with the MIDI sequencer. On the sequencer where you program the 808, there is a light which constantly cycles through the rhythm, indicating which step you are on. You want the sequencer's feedback light to be in time with the actual rhythm being played, but you also don't want it to be interrupted. As everything runs over Wi-Fi and websockets, it was a little tricky to get everything to synchronize smoothly. My programmer Karsten did a lot of the work there.


Q: I make electronic music myself, and in that world we often talk about trying to introduce the human element into compositions that one could otherwise say are very machine-like. Beyond the fact that it exists in the physical world, in what other ways does the MR-808 feel like a living instrument to you, perhaps more so than an actual TR-808 unit?


M: The most obvious thing for self-built robots that resembles human-like behavior is their fragility; they break all the time! Industrial robots might be very powerful and rigid, but with a limited budget you always take the cheapest route and recycle a lot of parts. For the first shows of my Glitch Robots installation, I took a 3D printer on tour so I could re-print broken parts. Apart from being useful, it looked very cool to have one on stage!


When an artist leaves the pre-made route of presets and starts digging in the mud - be it with mechanics, circuit bending, self-made electronics, or field recording - one always brings error into the art. This is a good thing! It's like playing guitar and by chance hitting the wrong chord: it might sound unexpected, but somehow cool, and can start being the trademark part of the whole riff. When one experiments, a lot of these random moments appear. 90% of it might be useless, but there is the 10% which is helpful and you can’t come up with through planning. I like this introduced randomness of music robots a lot.



Q: Do you have any plans for future music tech projects? An update to the MR-808, perhaps, or another new device?


M: The 808 was cool at the time that I built it, and for me it just "had to be done." But at the same time, it refers back to an historical instrument, and is very much bound to this reference. My opinion is that art should also be futuristic, and should sometimes fail, but it should point to an unknown future. So I decided not to build the Robotic 909, for example (editor's note: the TR-909 was a subsequent drum machine from Roland, a famous instrument in its own right).


With my last instrument, “Tripods One,” I tried to think of an instrument which is futuristic and also plays with human-machine interaction. I also took a lot more design ideas into account. It consists of 5 pyramids which house small mechanical robots (of course!). Sound-wise, I did not refer to the classic "bassdrum / snare / hihat" sounds; instead, I searched for sounds which work well in the context of electronic music. You can see that project here:


Tripods One – Sonic Robots


See more Sonic Robots projects on their site, and check out more Raspberry Pi projects on element14 here!

I've moved into a new house and came across a Sense HAT for the Raspberry Pi, which reminded me of a little project I was working on: an HTML-based colour chooser which updates the selected colour on the Sense HAT. I thought I'd share the scripts in case anybody finds them helpful or useful.




To start with, I was running lighttpd on the Raspberry Pi. It's a lightweight web server, very simple to use, and it just requires a small modification to its config file to allow it to run Python scripts.
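For reference, the config modification is roughly the following; exact module names and interpreter paths vary by distribution, so treat this as a sketch rather than the author's actual config:

```
# /etc/lighttpd/lighttpd.conf - enable CGI so .py files under
# /cgi-bin/ are executed rather than served as plain text
server.modules += ( "mod_cgi" )

$HTTP["url"] =~ "^/cgi-bin/" {
    cgi.assign = ( ".py" => "/usr/bin/python" )
}
```

After editing the config, restart lighttpd so the change takes effect.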


Below are the HTML, JavaScript, CSS, and Python.




    <link rel="stylesheet" type="text/css" media="all" href="shstyles.css"/>
    <script src="shcommon.js" type="text/javascript"></script>

    <div id="colordisplay"></div>
    <div id="colorcontrols">
        <p class="colorcontrollabel">R</p>
        <input id="redslider" class="slider" type="range" min="0" max="255" value="255" onchange="slideRed(this.value)" />
        <p id="redvaluelabel" class="colorvaluelabel">255</p>
        <p class="colorcontrollabel">G</p>
        <input id="greenslider" class="slider" type="range" min="0" max="255" value="90" onchange="slideGreen(this.value)" />
        <p id="greenvaluelabel" class="colorvaluelabel">90</p>
        <p class="colorcontrollabel">B</p>
        <input id="blueslider" class="slider" type="range" min="0" max="255" value="90" onchange="slideBlue(this.value)" />
        <p id="bluevaluelabel" class="colorvaluelabel">90</p>
        <input type="button" value="update" onclick="setSenseHatColorDisplay()">
        <p id="outputarea">output area</p>
    </div>



var colorred = 255;
var colorblue = 90;
var colorgreen = 90;

function slideRed(newvalue){
    colorred = newvalue;
}

function slideGreen(newvalue){
    colorgreen = newvalue;
}

function slideBlue(newvalue){
    colorblue = newvalue;
}

function setSenseHatColorDisplay(){
    var colorstring = colorred + "|" + colorgreen + "|" + colorblue;
    var req = new XMLHttpRequest();
    req.onreadystatechange = function() {
        if (this.readyState == 4 && this.status == 200) {
            document.getElementById("outputarea").innerHTML = this.responseText;
        }
    };
    // the CGI script name is assumed here; adjust to match your setup
    req.open("GET", "/cgi-bin/sensehat.py?" + colorstring, true);
    req.send();
}



/* selectors below were missing in the original post and have been
   restored to match the ids and classes used in the HTML */
html, body {
    min-height: 100%;
    height: 100%;
    max-width: 100%;
}

#colordisplay {
    float: left;
    width: 120px;
    height: 120px;
    border: 1px solid black;
    background-color: rgb(255,90,90);
}

.slider {
    display: inline;
    width: 100px;
}

.colorcontrollabel {
    display: inline;
}

.colorvaluelabel {
    display: inline;
}

#colorcontrols {
    float: left;
    border: 1px solid black;
    width: 200px;
}

#! /usr/bin/python

import os

# the colour string ("R|G|B") arrives as the CGI query string
colorstring = os.environ.get("QUERY_STRING", "255|90|90")
colortup = colorstring.split("|")
redvalue = colortup[0]
greenvalue = colortup[1]
bluevalue = colortup[2]
print "Content-Type: text/html\n\n"

# the second script's filename was missing from the original post;
# "update_sensehat.py" is a placeholder
p = os.popen("sudo python /home/pi/www/cgi-bin/update_sensehat.py "+redvalue+" "+greenvalue+" "+bluevalue)
p.close()

print '<html><head><meta content="text/html; charset=UTF-8" /></head><body>'
print "set colour to rgb(%s,%s,%s)" % (redvalue, greenvalue, bluevalue)
print "</body></html>"

import sys
from sense_hat import SenseHat
sense = SenseHat()

redvalue = int(sys.argv[1])
greenvalue = int(sys.argv[2])
bluevalue = int(sys.argv[3])

colortup = (redvalue, greenvalue, bluevalue)

# fill the whole 8x8 LED matrix with the chosen colour
canvas = [colortup] * 64
sense.set_pixels(canvas)



It should be possible to merge the two Python scripts, but I stumbled over returning the HTML headers to the browser and updating the Sense HAT display from a single script. So instead I used one script to receive the data, process it, and return the headers, and it runs a second script that updates the Sense HAT.
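One way the merge could work, sketched here in Python 3 (the original scripts are Python 2): send the HTTP headers first, then update the Sense HAT directly in the same process, avoiding the sudo subprocess. The hardware access is kept in its own function so the parsing logic works without a HAT attached:

```python
# Merged CGI sketch: headers go out before the hardware is touched,
# so the browser always gets a response even if the HAT update fails.
import os
import sys

def parse_colorstring(colorstring):
    """Turn 'R|G|B' into an (r, g, b) tuple of ints, clamped to 0-255."""
    parts = colorstring.split("|")
    return tuple(max(0, min(255, int(p))) for p in parts[:3])

def update_sensehat(colortup):
    from sense_hat import SenseHat   # imported lazily: needs the HAT
    sense = SenseHat()
    sense.set_pixels([colortup] * 64)  # fill all 64 LEDs

# GATEWAY_INTERFACE is set by the web server, so this block only
# runs when the script is actually invoked as CGI
if __name__ == "__main__" and "GATEWAY_INTERFACE" in os.environ:
    colortup = parse_colorstring(os.environ.get("QUERY_STRING", "255|90|90"))
    sys.stdout.write("Content-Type: text/html\r\n\r\n")
    sys.stdout.flush()
    update_sensehat(colortup)
    print("set colour to rgb(%d,%d,%d)" % colortup)
```

Whether this runs without sudo depends on the web server user having access to the Sense HAT devices (e.g. membership of the relevant groups), which may have been the original stumbling block.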

Toby Hendricks, an electronic musician who records and performs as otem rellik, became dissatisfied with the iPad he used in live performance, and decided to build his own device using Raspberry Pi.






Q: What was the origin of the Looper project? You mention in the video that it replaced your iPad for live performances; were there deficiencies in the iPad, did you want features it didn’t offer, and so on?


Toby: The origin dates back about three years, when I first started learning Pure Data. At that time I was using an iPad for live shows, and it seemed like nearly every year when iOS got updated some of the apps I was using would break. This trend has gotten better, but I still find it a bit unnerving to use iOS live. I sort of got sick of not having a reliable setup, so I started creating Pure Data patches for an app called MobMuPlat. I fell in love with Pd (Pure Data), and eventually replaced all the apps I was using with one single Pd patch loaded into MobMuPlat. That looping/drum patch became pretty robust over the course of about three years, and then I decided to attempt to turn it into a complete standalone hardware unit.


Q: I make electronic music myself, and I always find when I get a new piece of hardware or software that there are features I didn’t expect to be using or that I didn’t know were there that I turn out to love. Despite the fact that you designed the Pi Looper, have you found yourself using it in ways you didn’t expect?




Toby: Definitely. I’m always finding ways to improve my live performances with it, mostly with the effects. I’ve become pretty proficient in playing the effects section almost like its own instrument; the delay feedback can be infinite, so creating a short delay and then playing with the delay time mixed with the other effects can really create some cool sounds and textures. Also, if you already have a loop going, the delay time is synced with the tempo of the song, so you can get some really cool rhythmic stuff going on.


Q: Why did you choose the Raspberry Pi for this project? What advantages does it offer?


Toby: I chose Raspberry Pi because I knew it could run Pure Data; I really had no other knowledge of Raspberry Pi. The form factor also works great, because I wanted to have all the components inside the box. This was my first Pi project.


Q: As you were putting together the Looper, did you run into any notable technical problems, and how did you solve them?


Toby: I had tons! It took me about three months to figure everything out. One of the main milestones was getting Pd to talk to all the controls, which are all connected to a Teensy 3.6. I had absolutely no idea how I was going to make that work when I started. I eventually learned about the comport object, an external for Pd that allows it to send and receive serial data. Originally, I was planning on just sending MIDI back and forth between the Pi and the Teensy, but then realized I needed to also transmit song names back and forth. Learning how to package serial data ended up being many hours of frustration, but I finally got it working with some code I found on the Arduino forum. I also had to make Pd create and delete directories to store the songs; the shell Pd external eventually saved the day on that one. There were way more issues I had to solve, but those were some of the ones where I remember almost giving up on the whole project.


Q: In the electronic music world there seems to be a movement of people wanting to avoid staring at their computer screens while they write; devices like Native Instruments’ Maschine, Ableton’s Push, and new models of the classic AKAI MPC are trying to give electronic musicians the tools to write without needing their mouse and keyboard to manipulate their DAWs. Do you feel that your Looper fits in that tradition, or is it more of a device for live performance? Perhaps it’s useful in both areas?



Toby: I think it fits in both areas. It was definitely built for my live shows, but I often jam out on the couch with it. All the internal instruments were actually an afterthought; originally it was just going to have drum samples. I have yet to fully create a song on it that ended up being something I liked enough to import into my DAW (Digital Audio Workstation) to work on further, but I’m guessing that will eventually happen. I really like when an electronic band plays a show with no computer, or at least a controller that allows them to not even look at the computer. Laptops on stage are fine, but sometimes I feel like the performer could just be checking their email up there and I wouldn’t know the difference. Seeing someone on a piece of hardware really cranking on knobs and pounding buttons (even if it’s just a controller) is so much more interesting to watch.


Q: I very much agree on that! So do you have any plans for future music tech projects? An update to the Looper, perhaps, or a device that fills a different need you have in your writing or performing?


Toby: I’m pretty much always working on a new project. I’ve been building projects more than making music lately. I’ve already built a new MIDI controller that I’m going to shoot a video for eventually. It’s a drum pad / sequencer thing (kind of like this), but it uses force-sensitive resistors for the note pads. I actually learned how to cast my own urethane for the pads, which was probably one of the most unnecessary steps I’ve ever taken for a project. I also just purchased a CNC machine and am currently working on a new Raspberry Pi project that will be very similar to this, but the sound engine will be in Pure Data and the touch screens will be much larger. As for the Looper, I was just updating the code yesterday to add a pickup function to the volume knobs for saved songs. The Looper will eventually be completely rebuilt with force-sensitive resistors for the pads, but that may be some time from now.




See more of Toby's projects on YouTube, and check out more Raspberry Pi projects on element14 here!

This post features videos that I published to my YouTube channel in the series "IoT with Raspberry Pi". It contains four videos that show how to use the Raspberry Pi as an IoT device, starting from interfacing a sensor and going on to publishing the sensor data to a cloud server using protocols like REST and MQTT. For the entire project I have used Java, and on top of that various libraries for specific tasks, like Pi4J, Unirest, and Eclipse Paho (links provided below). If you have watched any of the videos, you might know that the series is divided into four parts:

  1. DS18B20 Sensor interfacing with Raspberry Pi.
  2. Publishing data to Thingspeak using REST.
  3. Publishing data to Thingspeak using MQTT.
  4. Completing the project.


So let's check out how to do so.


You can Subscribe on YouTube by clicking this link to show your support and be updated with the latest video on the channel like these.



1.DS18B20 Sensor interfacing with Raspberry Pi.

This video is the first part, where we see how to interface the DS18B20 one-wire temperature sensor with the Raspberry Pi using Java, with the help of the Pi4J library.
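The videos use Java with Pi4J, but it may help to see the underlying interface the library wraps: with the w1-gpio and w1-therm kernel drivers enabled, each DS18B20 shows up as a file whose second line ends in `t=<millidegrees>`. This Python sketch is an illustration of that interface, not code from the videos:

```python
# Reading a DS18B20 without any library: parse the w1_slave file the
# kernel exposes under /sys/bus/w1/devices/28-*/ on the Raspberry Pi.
import glob

def parse_w1_slave(text):
    """Extract degrees Celsius from the contents of a w1_slave file,
    or None if the CRC check failed."""
    lines = text.strip().splitlines()
    if not lines or not lines[0].endswith("YES"):
        return None                      # CRC failure: reading invalid
    t_pos = lines[1].find("t=")
    return int(lines[1][t_pos + 2:]) / 1000.0

def read_temperature():
    # e.g. /sys/bus/w1/devices/28-000005e2fdc3/w1_slave
    device = glob.glob("/sys/bus/w1/devices/28-*/w1_slave")[0]
    with open(device) as f:
        return parse_w1_slave(f.read())
```

Pi4J's one-wire support reads the same sysfs files under the hood; the Java code in the video is doing the equivalent of this parsing.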

2. Publishing data to Thingspeak using REST.

This video is the second in the series, where we see how to publish sensor data to the cloud using a REST API. Here we use ThingSpeak as the cloud service, and the HTTP calls for the REST API are done using the Unirest lightweight HTTP client library. In the next video, we do the same using MQTT.
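The ThingSpeak REST update boils down to a single HTTP GET against its documented update endpoint. The videos do this in Java with Unirest; this standard-library Python sketch shows the same request, with a placeholder write API key:

```python
# ThingSpeak "write" call: one GET to the update endpoint, with the
# channel's write API key and the value in field1.
from urllib.parse import urlencode
from urllib.request import urlopen

BASE_URL = "https://api.thingspeak.com/update"

def build_update_url(api_key, temperature):
    # field1 is the channel field the reading is written to
    return BASE_URL + "?" + urlencode({"api_key": api_key,
                                       "field1": temperature})

def publish(api_key, temperature):
    # ThingSpeak replies with the new entry id, or "0" on failure
    return urlopen(build_update_url(api_key, temperature)).read()

url = build_update_url("MY_WRITE_API_KEY", 23.1)
```

Note that free ThingSpeak channels rate-limit updates (historically one every 15 seconds), so the sensor loop should sleep between publishes.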

3.  Publishing data to Thingspeak using MQTT.

This video is the third in the series and is about how to publish sensor data to the cloud using MQTT, again with ThingSpeak as the cloud service. Publishing via MQTT is done using the Eclipse Paho lightweight library. MQTT is a simple, lightweight publish/subscribe protocol that runs over TCP and can be used instead of HTTP; it is more power- and bandwidth-friendly than HTTP, so it fits IoT applications well. If you are interested in learning more, you can check the docs linked below.
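For ThingSpeak's MQTT API (shown here in the legacy topic format that embedded the write key in the topic), a publish is just a topic plus a payload string; the channel id and write key below are placeholders. The videos use Java with Eclipse Paho; with the Python paho-mqtt package the publish itself is a one-liner, noted in the comment:

```python
# Shape of a ThingSpeak MQTT publish (legacy API format). With the
# paho-mqtt package installed, the actual send would be:
#   import paho.mqtt.publish as publish
#   publish.single(topic, payload, hostname="mqtt.thingspeak.com")

def build_publish(channel_id, write_api_key, temperature):
    topic = "channels/%s/publish/%s" % (channel_id, write_api_key)
    payload = "field1=%s" % temperature
    return topic, payload

topic, payload = build_publish("123456", "MY_WRITE_API_KEY", 23.1)
```

Because MQTT keeps a single TCP connection open and frames are only a few bytes of overhead, repeated publishes cost far less power and bandwidth than repeated HTTP requests, which is the trade-off the video describes.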

4. Completing the project.

If you have not watched the videos above, please check those first before checking out this one. This video is the final one in the series, where we complete the project by combining the code developed in the earlier videos. We make the application such that we can decide which API (REST or MQTT) to use to publish the data to ThingSpeak.

Github Repo:

Download Pi4J Library:
Download Unirest Library:
Unirest Website:
Unirest Jar Download (With Dependencies):
Download Eclipse PAHO Library(With Dependencies):
Eclipse PAHO Website:


More on MQTT
Official Website:

Java Application on Pi Playlist:
Catch Me On:



Microsoft was able to squeeze their deep-learning algorithms onto an RPi 3 in order to bring intelligence to small devices.


Love it or fear it, AI is advancing, and it's coming to small, portable electronic devices thanks to developments made by Microsoft. The software giant recently succeeded in loading its deep-learning algorithms onto a Raspberry Pi 3 SBC. The advancement will obviously be a boon for anything and everything IoT, which is on track to take the world by storm: speculation suggests there will be 46 billion connected devices by 2021, depending on whom you ask.


Regardless, Microsoft's latest breakthrough will give engineers the opportunity to build intelligent medical implants, appliances, sensor systems, and much more without the need for incredible computing horsepower. Most AI platforms today rely on the cloud for their heavy lifting, certainly so with platforms such as Amazon's Alexa and Apple's Siri, but Microsoft's breakthrough could remove the need for a constant cloud connection.



Microsoft is developing AI platforms that will be squeezed into hardware no bigger than this chip. (Image credit Microsoft)


To put Microsoft's development into perspective: the team is capable of taking algorithms that normally run on 64- and 32-bit systems and dropping the requirements down to a single bit in some cases. What's astounding is how this development came about: all due to a flower garden. Ofer Dekel, Manager of Machine Learning and Optimization at Microsoft's research lab in Redmond, Washington, needed a way to keep squirrels from eating his flower bulbs and birdseed, which led him to develop a computer-vision platform using an inexpensive Raspberry Pi 3 to alert him when there was an intrusion.


When the alert is triggered, the platform engages a sprinkler system to shoo away the culprits, an ingenious solution indeed. “Every hobbyist who owns a Raspberry Pi should be able to do that, today very few of them can,” stated Dekel. Yet the breakthrough will allow just that, and the software can even be installed on a tiny Cortex-M0 chip like the one pictured above.


To get the deep-learning algorithms compressed enough to fit on the RPi 3 using just a few bits, Ofer and his team employed a technique known as sparsification, which shaves off unneeded redundancies. Doing so allowed them to devise an image-detection system that could process 20 times faster on limited hardware without losing accuracy. Still, the team hasn't yet figured out a way to take ultra-sophisticated AI, such as a large deep neural network, and compress it enough to fit on limited, low-powered hardware. Regardless, this is an unprecedented first step, and we can certainly expect advancements that will get us there in the not-too-distant future.
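Sparsification, in its simplest form, zeroes out weights whose magnitude is too small to matter, so they need not be stored or multiplied at all. Microsoft's techniques go much further (down to single-bit weights), but this toy Python sketch shows the basic idea:

```python
# Magnitude-based pruning: weights below a threshold are zeroed,
# shrinking the model with (ideally) little effect on its output.
def sparsify(weights, threshold):
    return [0.0 if abs(w) < threshold else w for w in weights]

def sparsity(weights):
    """Fraction of weights that are exactly zero."""
    return sum(1 for w in weights if w == 0.0) / float(len(weights))

weights = [0.9, -0.02, 0.4, 0.001, -0.7, 0.03, 0.0, 0.5]
pruned = sparsify(weights, threshold=0.05)
```

In a real network the surviving weights can then be stored in a sparse format and requantized to a few bits each, which is where the large memory savings come from.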


I'm working on some Pi projects at the moment. Instead of IoT projects... maybe I should be looking into AI.


Have a story tip? Message me at: cabe(at)element14(dot)com


Home Automation in the UK Simplified, Part 1: Energenie MiHome

Join Shabaz as he works on his IoT home!

Learn about home automation using the Raspberry Pi, Energenie MiHome and Node Red.

Check out our other Raspberry Pi Projects on the projects homepage

Previous Part
All Raspberry Pi Projects
Next Part


Note: This is part 1 of a two-part series. After you've read this, if you're interested to read further, navigate to Home Automation in the UK Simplified, Part 2: Raspberry Pi and Touch Display


Home automation is a topic that has been around for decades, using classic wired technologies such as X10. The 21st century has favoured IP (Internet Protocol) as the communication method of choice for delivering control and management of virtually anything imaginable. Devices can be untethered and operate wirelessly using sub-1GHz licence-free bands. Radio is nothing new, but in modern times it has become a lot easier to produce reliable, low-cost and energy-efficient radio links for consumer items. Small wireless nodes such as door/window monitoring devices can run from a single cell for a year or longer thanks to ultra-low-power microcontrollers.


I wanted to see how to deploy home automation, whether it could be easy to use and reliable, and whether I could get good value from it. I was also interested in how well it could integrate with everything else in my surroundings: for example, could I control devices using my voice, and how effective would that be? Could I also connect a Pi and do some extra things?



In an ideal world, there would be no such thing as ugly mains light switches. Everything would be seamless, with lighting turning on when desired. My next home will have no mains light switches, I’ve decided : )


There are many home automation products out there, usually as part of an ecosystem, since there are many building blocks and they all need to work together. Some parts are fun to code and develop, but the interface to the mains power supply requires good-quality, approved, off-the-shelf products. It is not worth the risk of assembling something with a no-name relay from ebay; how could one know for sure that the material is flame-tested and approved for use in the UK?


I wanted to examine products that were certified for UK use. Some very low-cost products are available from overseas, particularly from Asia, but I believe some are self-certified and I was not about to take a risk by permanently running them inside my home. Lots of high-quality products exist from overseas too, but approvals are expensive for a reason; subtle things like the wrong plastic could cause a fire if the electronics went wrong, and electrocution is a possibility as well. The legislation covers plenty of subtle things, such as how mains cables should be attached and what distance there should be between wires. On top of that, there are electromagnetic compatibility (EMC) rules designed to prevent equipment from interfering with TVs and radios, and laws governing how frequently transmissions can occur. The CE marking doesn’t mean much unless there is a reputable firm standing behind the product. In the event of a liability claim I would want that firm to be located in Europe.


I window-shopped for home automation products that I could work with and finally decided to try out the Energenie ecosystem. The products seemed to be of very reasonable cost, and the range looked like it covered many things that I would want home automation to do. The company has been around for many years, so this provided confidence too. This first blog post reports the initial findings and shows how to set up the Energenie solution for control and monitoring using a PC or mobile phone, and natural language based voice control using Amazon’s Alexa service. All of this can be set up within an hour with the Energenie solution.


The next blog post will explore the Energenie solution further and investigate how it can work together with the Pi.



What Problems does it Solve?

I actually had several use-cases for home automation.


One was to make my small apartment ‘upmarket’ so I can sell it for more money : ) I suspect a lot of people think home automation is a lot more complicated to install than it actually is, and therefore there could be good value-add to have this installed in an apartment. Many individuals/couples are away from their apartment all day and would appreciate being able to get some insight and control of their home remotely. The apartment already has a burglar alarm and video system, so home automation would be a nice addition.


Another use-case is to keep a light-touch eye on elderly relatives; it can be useful to see activity occurring in the home to make sure the relative is well.


As another idea for the elderly, a voice-controlled virtual assistant could be very useful for people who have trouble walking up and down stairs just to turn on the heating or switch off a light. Voice enablement helps here. Taking this further, a home can have far fewer physical buttons and controls if voice is used as the primary interface.


A very typical scenario where home automation helps is energy saving; the ability to get on-the-fly energy readings (either for the entire home, or more granular) provides insight that drives behaviour such as switching off unused lights and TVs more often.


Home security can be improved with home automation; it becomes easy to switch on home lighting automatically when you’re out, to make it appear that someone is home. Timer devices are available, but home automation provides a far cleaner implementation whose schedules can be programmed and adjusted from anywhere, making it more practical to use. Home automation can also provide insight into unusual activity even when an alarm has not been triggered, giving deeper visibility. In a nutshell, the opportunity exists to make home security and home automation better together.


For the engineer, home automation is important because it provides real-world sensor data that can be analysed and used to develop interesting future products. For example, I would love to know how long home lighting is switched on, to begin to understand how long LED products in homes could survive and how to improve them.


Can it be Installed in any Home?

This blog post will look in detail at how to install and use home automation, but in summary there are several ways a system can be installed. One typical scenario is to retro-fit it inside an existing home without touching any existing wiring. This is feasible and relies predominantly on plug-in adapters which sit between the existing mains sockets and the connected devices, allowing plug-in things like TVs and table lamps to be monitored or controlled.


Permanently wired home lighting can be controlled with some slight modification, by unscrewing the light switch on the wall and replacing it with a smart light switch. This can be achieved by nearly anyone provided some care is taken.


It is also possible to replace home mains sockets with smart mains sockets but this is an advanced activity that usually requires an electrician to install it. It is recommended to use the plug-in adapters if an electrician is not available.


For all the scenarios, an Internet connection is fairly essential.


Mi|Home Gateway

The gateway device which interfaces to all the rest of the Mi|Home ecosystem is really compact, only very slightly bigger than an ice hockey puck. There are just two ports on it: a USB connector for the 5V power supply (included with the gateway) and an Ethernet port to attach the gateway to the home router. One dual-colour LED and a pinhole reset switch complete the external features.



The entire thing is small and unobtrusive, runs cool, and can be hidden from view. The top cover can be unclipped to look inside. There is not a lot to go wrong here; it should provide many years of good service. The circuit consists of a fairly high-end ARM Cortex-M3 based microcontroller from NXP, an Ethernet interface, and a very popular RF transceiver module from HopeRF. Good-brand parts are used, like the Wurth Ethernet transformers. The enclosure is of a sufficient size to give the antenna space around it for good range.


What looks like a standard debug port is also present. Lots of great potential to use this as a low-cost board for other projects too!



Using the Gateway is pretty easy; it is plug-and-play, with no configuration needed. You take the code printed on the underside and apply it to the Mi|Home web portal once you have registered. For most users there is no router configuration to do either: just plug in the power supply and Ethernet connection and, provided you have the code, you’re ready to start using it.


Protocols and Examining the Risks

It’s always good to examine these things. Armed with the knowledge, we can deploy solutions in the right scenario and avoid fitting them where there are security risks.


The communication between the gateway and the Mi|Home cloud service uses UDP packets and is very lightweight; typical payload sizes were around 48-69 bytes, with what looks like a heartbeat every five seconds or so. This is a tiny amount of data traffic (less than 2MB per day), so it will not impact Internet usage allowances, and it also opens up the possibility of using a 4G/LTE router for monitoring and control of remote locations. The transmission is unencrypted, but I could see no username, password or personally identifying information transmitted; only the MAC address of the gateway is sent. For home automation generally, the risk of vulnerability between the gateway and the cloud service is low, because it is very difficult for an individual to capture and decode communication over technologies such as 4G, cable or DSL.
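A quick back-of-the-envelope check of that daily traffic figure, using the observed packet pattern (payload sizes only; IP/UDP headers add some overhead on top, but the total still stays well under 2MB):

```python
# Sanity-check the "<2 MB per day" figure: one heartbeat roughly every
# five seconds, with a worst-case observed payload of 69 bytes.
payload_bytes = 69
interval_s = 5
packets_per_day = 24 * 3600 // interval_s    # 17,280 heartbeats per day
daily_mb = packets_per_day * payload_bytes / 1e6
print(round(daily_mb, 2))                    # ~1.19 MB/day at the application layer
```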


For those unwilling to connect to a cloud service there is an add-on board available for the Pi, which, with some coding, can be used to control the devices locally.



The radio communication between the gateway and devices is based on the documented OpenThings specification (registration required), which means (in theory; I have not tried it) that the Energenie solution is flexible enough for you to design your own additions. There are no fees to use the specification, and modifications are permitted too. The radio communication occurs in the 433MHz band using frequency shift keying (FSK).


There is the risk that somebody could record radio transmissions and replay them; it requires some technical skill and it is up to individuals to determine if this poses a security risk in the environment where they are installing their home automation. With low power transmissions between the gateway and devices, it would require someone to be nearby in order to capture radio transmissions. The technology, like most of the current home automation solutions, will be susceptible to radio jamming signals. Due to the ease for jamming, the Energenie solution cannot be used as a replacement for home security solutions (burglar alarms, video cameras, etc).


Using the Mi|Home Cloud Service and Mobile App

I browsed to the Energenie Mi|Home website, registered for free, and entered the code printed on the back of the Mi|Home gateway; it was immediately registered. It is all very intuitive, and once the gateway is added you can give it a name and start adding devices by clicking ‘Pair New Device’. As soon as you do that, the web page shows the entire product range.



The colour coding is roughly proportional to functionality. The basic products are blue and provide simple control in one direction. The pink items are monitoring products that gather information but do not have any controlling capability. The purple items are full-featured and offer both monitoring and control capabilities. This colour-coding matches the glossy card packaging of the devices too, so you can easily see the broad functionality that you are getting.


The product range can be configured in a consistent way. The procedure is to connect or plug in the device so that it is powered up, then hold a button down for five seconds until an LED flashes. Provided ‘Pair Devices’ and then ‘Start Pairing’ have been clicked in the web browser first, the device will become attached and will appear in the Devices List; in the screenshot below I added a door sensor and a mains control adapter:



It is actually possible to do this from a mobile phone too. The pairing for the device can be done anywhere within radio coverage of the Mi|Home gateway using the Mi|Home app.



You can also assign custom friendly names to each device; this is handy when you have many devices connected, and is also useful for voice control by device name (see further below). The app is easy to use, and during my testing I didn’t notice any bugs or crashes. There is also the ability to integrate with IFTTT, which allows rules such as “if the weather is cold then turn on the heating”; however, I’m not keen on IFTTT due to the need to have facebook/twitter for the free account in order to create your own applets. There are other ways of achieving such things and they will be explored in another blog post.



An interesting feature in the Mi|Home app is the ability to 'geofence'. This allows the system to control devices based on the location of the mobile phone. An example would be to turn on the heating if you’re approaching home.


In summary, I thought the app was not bad; it is useful for checking up on the status of things in your home and, of course, controlling them. There are no fancy features like a status widget for your mobile phone or data logging.


With the app installed, it was time to start pairing and exploring all the interesting devices!


Energenie Mi|Home Adapter Plus

The Energenie Adapter Plus is a very cool, advanced ‘smart plug’, and I thought it was great. It has a small button and an LED, and any connected equipment can be powered or unpowered by pressing the button directly. The status is sent back to the home gateway, so the user can check the actual status of the Adapter Plus via the web portal. This product is in the purple range, i.e. feature-rich, with both control and monitoring capability. Furthermore, the Adapter Plus can measure power consumption. This is extremely useful even if you’re not interested in actual energy usage, because observing the power consumption tells you whether the appliance at the end of the cable is actually switched on or off. So, you can use it to tell if (say) a television is actually powered up or not.



It was interesting to examine it in more detail, to see precisely how it functioned and how accurate it could be.


It has security screws, and once they were removed I was impressed by the quality of construction. The earth and neutral connections run directly from the mains plug side to the mains socket side of the product. The live connection is switched, of course, and all wires are crimped to the metal components of the plug/socket portion of the design. The PCB is made of fiberglass and there is a fair amount of circuitry. The radio transceiver module is a HopeRF board again, with a helix-shaped antenna soldered perpendicular to the circuit board on the side hidden from view.



There is a 2 milliohm shunt resistor for measuring current. The other side of the board contains a nice DC-DC converter circuit: the AC mains input is rectified and directly stepped down by the DC-DC converter. This type of design runs cool, and in practice I could not detect any warmth from the device during operation. There is also a varistor protecting the circuitry from excessive mains spikes, for hopefully many years of good service. A dedicated IC from Sentec performs the energy measurement and communications protocol handling before passing the data to the HopeRF module for transmission. It handles reactive loads (i.e. it can measure real power), and the datasheet states that power measurement accuracy is 2% or 2W, whichever is greater. Although not spectacular, this is a reasonable level of accuracy for such a device.
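The “2% or 2W, whichever is greater” spec means the 2W floor dominates at low loads, while the percentage term dominates at high loads; a tiny helper makes this concrete:

```python
def worst_case_error_w(power_w):
    """Datasheet accuracy: 2% of reading or 2 W, whichever is greater."""
    return max(0.02 * power_w, 2.0)

# Below 100 W the 2 W floor dominates; above it, the 2% term does.
print(worst_case_error_w(10))    # 2.0 W  (i.e. 20% of a 10 W standby load)
print(worst_case_error_w(2000))  # 40.0 W (i.e. 2% of a 2 kW kettle)
```

So the adapter is fine for telling whether an appliance is on, but small standby loads carry a large relative error.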



The mains is switched using a relay which has UL and TUV certificates. In summary, I thought the design was good: I liked that it has some protection against mains spikes, the power consumption feature is really useful for seeing which devices are actually powered up, and there is a push-button switch for turning devices on/off locally if desired.


Energenie Mi|Home Open Sensor

The Open Sensor does exactly that: it reports when something like a door or window has been opened or closed. It is a small device powered by a single AAA cell. It has low power consumption; I measured 50uA (it periodically varies between about 40uA and 60uA), and the current of course increases during radio transmission whenever an open or close event occurs. Based on this, Energenie’s estimate of 1-1.5 years of battery life appears accurate. I liked that it uses a standard AAA cell, because they are cheap to replace compared to the small 12V batteries that used to be common in wireless sensors. The sensor has the typical two-part magnet and reed switch implementation.
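A quick sanity check of that battery-life estimate from the measured quiescent current. The ~1000 mAh capacity of an alkaline AAA cell is my assumption; usable capacity is lower in practice, and radio transmissions add current bursts on top, which is why the vendor's 1-1.5 year figure sits below this upper bound:

```python
# Upper bound on battery life from the measured ~50 uA quiescent draw.
capacity_mah = 1000                  # assumed alkaline AAA capacity
quiescent_ma = 0.050                 # measured average sleep current
hours = capacity_mah / quiescent_ma  # 20,000 h from quiescent draw alone
years = hours / (24 * 365)
print(round(years, 1))               # ~2.3 years as a theoretical ceiling
```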



The design is very nice; there is an internal 3V DC-DC step-up converter that presumably runs continuously, and a low-power microcontroller. As mentioned, a reed switch and magnet perform the actual detection. The transmitter is a tiny 6-pin SOT-23 device, most likely another HopeRF part.



There is a very discreet faint LED that shines through the white plastic, and it is useful for confirming that the battery is functioning because it flashes briefly each time the door is opened or closed.


Inside the box there were lots of mounting bits and pieces for attaching to doors/windows, and a card instruction leaflet.



Energenie Mi|Home Double Socket

In the blue range (i.e. control, not monitoring capability), I tried out the Mi|Home mains wall socket. This product can be fitted into a new electrical installation, or retrofitted. Its connections are identical to any standard double socket, and I liked that it had two earth terminal connections which simplified installation.



The unit is quite deep, and it will be a real struggle to fit it into a 25mm deep back box if more than one mains cable comes into the box (more than one cable is common, since a ring main results in two cables into each box). However, a 25mm back box on a ring main is rare, and most homes should have deeper boxes. With a 35mm back box (as in these photos) there is no issue; I tested with three mains cables and it just about fitted. With two mains cables it fits just fine.



Another approach for retrofitting is to leave the existing mains socket where it is and fit the Mi|Home one alongside as a spur connection. A 25mm back box is then fine, since the spur connection has only one mains cable. Also, if you didn’t want to make too many holes in your wall, you could fit it alongside an existing mains outlet in a surface-mount box; that way you only need to plaster and repaint a very small area. This latter option should also give better radio coverage, so it is worth considering if the Mi|Home gateway is positioned far away (or a second gateway could be purchased; multiple gateways can be added in the Mi|Home solution).


So, to summarize, if you’re installing with a single cable, then a 25mm back box is ok, otherwise you will definitely need a 35mm back box minimum and it will be tight but feasible with three mains cables, so if you have the choice, go deeper.



In terms of aesthetics and the finish of the plastic, I think it looks quite reasonable, no better or worse than typical home mains wall sockets. There are also versions with brushed steel, chrome or nickel finishes if you need to match others in the home.


Energenie Mi|Home Light Switch

Another item in the blue range is the Mi|Home light switch. It is optionally available in the same metal finishes as the mains wall sockets.



It has a depth of about 22mm, and so it requires a back box at least 25mm deep (usually boxes are recessed at least a few millimetres into the wall, which also helps provide sufficient clearance for the mains cable). The photos here show a 25mm box.



If you’re replacing an existing light switch then the chances are that the back box will be more than 25mm deep; however, I have seen very shallow back boxes (15mm), as was the case in one room at home, and these would not be suitable. It isn’t difficult to make the cavity deeper and replace the back box, of course (there is no need with a stud wall).



What seems to be missing from the range currently is a double light switch. This made it awkward to install in a couple of rooms, since I wanted individual control of the two lights in the room.


Virtual Voice Assistant with Amazon Alexa

Amazon, Google and Apple all offer virtual voice assistant services. If you’re not familiar with them, they basically consist of small Internet-connected devices (usually WiFi enabled) that have a loudspeaker and an array of microphones inside. By saying a keyword (‘Alexa’ in the case of the Amazon service) the device wakes up and streams any subsequent speech to a cloud service which performs speech recognition and natural language processing to try to discover the intent of the speech. Once that is done, it formulates an intelligent response based on the wealth of information searchable on the Internet and streams a synthesized voice response which gets played out of the speaker on the device. I also find it handy for playing music, or for answering all silly questions from my little nephews : )


So, the virtual assistants today consist of two elements: the physical hardware and the cloud service. Recently Google came out with their AIY hardware kit, which also provides a virtual assistant using the Google cloud service, with the Raspberry Pi and AIY kit forming the physical hardware device. Meanwhile, the Raspberry Pi also has another multi-functional hardware attachment for similar purposes called the Matrix Creator. In summary, there are plenty of options.


I decided to try Amazon’s Alexa voice assistant service. It uses physical hardware known as the Amazon Echo range, and the Mi|Home service directly integrates with it. There are several models in the Echo range; the one in the photo is called the Echo Dot and costs about £50. There are some buttons on top but in normal use they are not used; the entire interaction can be by voice.

amazon-echo-dot.jpg (Picture source: Amazon)


The setup is extremely easy; I signed into my Amazon Alexa account and searched for the Energenie ‘skill’ and enabled it.



Next, by clicking on the Smart Home item on the menu on the left, a ‘Discover’ button appears. I pressed that and less than a minute later the Mi|Home devices appeared.



That’s it! Now the home can be controlled by speaking to Alexa. The devices can be named anything in the Mi|Home web portal or mobile phone app, so turning a device on is as simple as saying (for example if the device has been named ‘bedroom lamp’) “Alexa, turn on the bedroom lamp”.


Application Programming Interface (API)

The Mi|Home cloud service has what is known as an Application Programming Interface (API). This offers control of your home programmatically. In other words, you can connect additional software and services to control the home. I did a basic ‘hello world’ type of test to confirm that I could connect using the API, but further use of the API will be explored in more detail in a subsequent blog post.
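For flavour, here is a hedged sketch of what such a ‘hello world’ call can look like. The base URL, endpoint name and HTTP-basic auth scheme (account e-mail plus API key) are assumptions from memory of the Mi|Home developer documentation, so verify them against the current docs before relying on any of it:

```python
API_BASE = "https://mihome4u.co.uk/api/v1"   # assumed base URL

def list_subdevices(session, base=API_BASE):
    """POST to the (assumed) subdevices/list endpoint and return the device list.
    `session` is any object with a requests-style .post(url, json=...) method."""
    resp = session.post(f"{base}/subdevices/list", json={})
    resp.raise_for_status()
    return resp.json().get("data", [])

# Real usage with the requests library (needs valid credentials):
#   import requests
#   s = requests.Session()
#   s.auth = ("you@example.com", "your-api-key")   # HTTP basic: e-mail + API key
#   for device in list_subdevices(s):
#       print(device.get("label"), device.get("device_type"))
```

Keeping the HTTP session as a parameter makes the function trivial to test with a stub, and easy to swap for whatever auth the real service requires.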



Generally, I’ve been quite impressed with the Energenie Mi|Home solution. I like that the gateway and all the devices appear well constructed, even on the inside for those that I took apart. Furthermore, the electronics look designed for a long service life with cool operation. I didn’t observe any safety problems; mains wires were crimped and separated from each other in the plastic moulding.


In terms of functionality, Energenie have made it easy to choose what you need using their colour-coding scheme.


I also like that everything is actually really good value for money. The hub device, the Mi|Home gateway, is not expensive at all, just £39+VAT currently from CPC.


In contrast, LightwaveRF’s hub is almost twice that, currently £78 from Amazon. The Hive hub is a similar price. Given that you might need a couple of gateway/hub devices for adequate coverage of a home, the cost difference is quite large.


The Hive plug-fitting mains control device costs £31, and, described as a ‘British Gas Innovation’ (British Gas is an energy company) it does not support energy monitoring. In contrast, the Mi|Home Adapter Plus supports control and energy monitoring at just £18.50+VAT from CPC.


To me it seems an easy decision to go with the Energenie products currently. Even if in future years one were to adopt a different home automation solution, the Energenie offering has another trick up its sleeve to help with that too: a radio board is available for the Pi, so an owner could continue to use the hardware provided they were willing to do the integration work (coding). The Mi|Home cloud service is also free to use and has a northbound application programming interface (API), so a user could integrate directly with that as well.


Improvements that I would like to see to the Mi|Home solution would be a dual light switch, and a thermostat. There are Mi|Home radiator valves, but I’d prefer to directly control the entire heating system. (update - the Mi|Home range now includes a thermostat).


I’m excited that I have the beginnings of a decent home automation solution, and in my next blog post I’ll explore how to integrate this with the Pi.



I'm actually working on a professional RS422/RS485 shield for the Raspberry Pi. I wasn't satisfied with the shields on the market; they are very simple and have some disadvantages. My shield comes with the following features:


  • RS422 (full duplex) support
  • RS485 (half duplex) support
  • galvanic isolation between PI and interface
  • indicator LEDs for RX and TX activity
  • switchable pull-up, pull-down and termination resistors
  • different modes for send/receive switching (Auto, GPIO, always transmitter, always receiver)
  • automatic switching via a monostable timer (monoflop)
  • all options adjustable via DIP switches


I'm curious about your feedback.


RS485 Raspberry Pi


Needing to develop a MagicMirror project flexible enough to fit different contexts, with extra features (first of all, user interaction), I started by exploring what had already been done for the Raspberry Pi.

The project evolved through two steps: an easy step, implementing the open MagicMirror2 design on the Raspberry Pi, and a complex step, developing the parts not yet available.

This first part describes the setup I followed for the easy step.



Platform scenario

This magic mirror will work as a development platform and prototyping base: it should be flexible, supporting upgrades and changes in the future. In addition, the platform design should be easy to customise depending on installation needs and environments.


Building the structure

For the external frame structure I used good-quality wood for a good aesthetic impact. The measurements depend on the 15" 4:3 HDMI display I had available, which is perfect for the development prototype, but any size of HDMI display can be used as well.

The other element impacting the form factor and size is the mirror (we are not obliged to keep it square or rectangular). There are several ways to make a two-way mirror: applying a semi-transparent adhesive film to glass, buying a pre-made glass mirror, or buying an acrylic one.

To give the right impact, it is important that the mirror surface is neither too big nor too small compared to the monitor size.

For the mirror I used a two-way acrylic mirror from Tap Plastics with the following dimensions:

  • 3/16 (0.177) inch thick
  • 20 inches wide
  • 16 inches long
  • Cost: $73 (the most expensive part, excluding the Pi and monitor)

The frame was built a few mm larger internally, with a 5 cm depth, to host the mirror, the back supports and the electronics.


Wooden frame

The gallery below shows the wooden frame details. I applied a transparent wood coating to its surface. The cost of the finished frame is about $15.

{gallery} Wooden frame construction details


The wooden frame front side


The wooden frame back side


Detail of the corner mounting of the frame


Detail of the internal side of the frame

Fixing the mirror

After removing the front protective sheet I put the mirror plate inside the frame. A soft adhesive seal tape on the frame's inside borders stops the mirror plate from sliding and keeps it in place.


The images below show the final effect of the soft seal.

IMG_20170530_155216.jpg IMG_20170530_155205.jpg

For now, only the front protective sheet should be removed. A lot of work has to be done on the back before everything is fixed in place and the mirror effect can finally be tested.


Assembling the mirror back side

At first sight, some of this project's solutions may seem needlessly complex; for example, the mirror is pressed over the soft seal instead of being glued, or held with screws and supports.

As mentioned before, this is a magic mirror development platform; every component should be easy to replace, and it should be possible to assemble and disassemble the entire structure.

This design is conceived as a modular system: from this development design it should be possible to build a number of variants, depending on the features the user wants to include. For the same reason I have not used recycled parts, only components (the cheapest and most reliably available on the market) plus some custom 3D-printed parts.


Instead of placing black tape or another kind of opaque adhesive film on the back side of the mirror, I used a black propylene plastic sheet. The Raspberry Pi supports screen rotation, so it is possible that in the future a different screen rotation or a different screen size will replace the current setup. The gallery below illustrates the process of creating the black frame.

{gallery} Black back frame


The propylene black thin sheet


The black sheet should be cut to the same size as the mirror


Measuring the screen dimensions to cut the rectangular area in the black sheet


Measures should be exactly the size of the visible screen without the frame edge


Placing paper tape to safely mark the cutting area


The cutting area should be exactly horizontally centred; the vertical position should be 1/4 lower than the vertical centre


The back sheet after cutting. Now the paper tape can be removed.


Double checking the part to fit exactly before fixing it


After removing the protective plastic sheet from the back of the mirror the black sheet is positioned as a second layer.

Up to this point, the extra cost added is less than $5.


Keeping the LCD in place

The LCD screen is the heaviest part of the structure and is difficult to keep in place. I explored several methods used by other magic mirror builds, but none of them was efficient; a modular and replicable project should adopt solutions that are easy to make and reproduce (and cheap too).

For both the back frame and the magic mirror's back cover (the cover of the assembled unit) I used 3mm-thick MDF; it costs only a few cents and is easy to cut and prepare (it can be cut with a knife), yet is sufficiently strong for the job.


A first MDF frame was cut to the exact size of the internal frame. Inside it, a rectangular area aligned with the LCD screen position was cut out. The cut should be exactly the size of the screen frame that will be inserted into it. This keeps the screen stably in place when the magic mirror stands vertically in use.


The screen is inserted and fixed with black adhesive tape as shown in the image above. This part not only keeps the LCD in the right position but also holds the other layers in place, supporting the electronic parts and wires.

The two images below show the back of the magic mirror with the screen positioned inside the rectangular cut.


Adding the Raspberry Pi and wiring all together

The last component to be added to the modular magic mirror is the Raspberry Pi; for this version a Pi Model B V1.2 with a WiPi WiFi USB adapter was used. The board has been placed on the top back side using a 3D-printed VESA mount screwed to the MDF layer, as shown in the images below.





Wiring is not difficult; to connect the Pi to the HDMI LCD, a short HDMI cable is suggested to avoid overly long wires inside. To reduce weight and avoid extra heat inside the magic mirror box, the power units are left outside the structure.


Raspberry Pi Cooling

A series of holes has also been made in the main back cover - another 3 mm MDF layer - for cooling the Raspberry Pi. After some days of running the system in test, I have decided that for now a cooling fan is not essential for the health of the device.



Adding the Pi devices

After installing the Pi I have added a Pi Camera V2 and an NFC/RFID shield. For the development version it is sufficient to keep the shield on top of the Raspberry Pi; in a production model this device should be in a more accessible place, e.g. on one of the frame sides.



Fixing the back layers firmly

After the final assembly the internal MDF layer should be fixed firmly to press against the other layers: the mirror and the black frame. To make the entire structure easy to remove and modify, custom 3D-printed supports have been designed and screwed to the internal sides.



{gallery} Plastic blocks fixing the internal layers


Angular and linear blocks design


3D printed supports detail


Side support detail


Side support detail


Finished internal assembly


Last touch: positioning the camera

The camera support has also been designed with a modular approach in mind.


As shown in the image above, a small camera case will host the Pi Camera V2, placed on top of the wooden frame. The support is built in two parts glued together; it is easy to remove, making it possible to replace the camera with a black model if needed. In a production version the Pi Camera would be hidden behind the frame, leaving only a small hole for the lens.



Finished setup and a preview of what's next


The image above shows the Pi Magic box complete and running. The base essential software is:


  • Raspbian Jessie Pixel edition
  • Node.js
  • MagicMirror2 development environment


The NFC/RFID tag reader will be used for user identification, while the Pi Camera will be used for - at least - gesture recognition. The aim is to provide a great add-on to the currently available MagicMirror2 projects, which do not support these features. It is essential to add user interactivity to this project: this means integrating the Magic Pi build into an IoT context.


In the next blog I will introduce the standard and custom software that will complete the project.

I'm not sure how I managed to miss this, perhaps because it's still in 'developer preview', but a version of Android is officially available for the Raspberry Pi 3.


If you're really keen, you can download the image here for the developer preview 3.1.


Google's AIY Project recently shipped with the latest issue of the MagPi magazine (already selling for £40+ on eBay), and the kit is rumoured to go on sale separately from Google at a later date. The official guide for the project links to a GitHub repository for running the AIY Project on 'Android Things', with the aforementioned link to the developer preview.



Google also has an Android image with the AIY Project as opposed to it running on Raspbian.



It turns out there's a full site for Android Things, and it runs on more than the Raspberry Pi 3; it also runs on the Intel Edison. However, it appears to be as 'bare bones' as the Windows 10 IoT Core, intended as a deployment platform for apps via the adb interface, though it likely still means that the full graphical interface could be run on it. It's not without issues and is still very much in development:




I, for one, welcome our Google overlords. What are you going to make, and will this cause you to check out Android if you haven't already?

Google has been busy getting stuff attached to MagPi magazines. I spotted a box containing an 'AIY' kit at the local magazine retailer. Basically it is a cardboard box with some electrical parts and the MagPi magazine.

It is like a cut-down DIY version of Amazon Alexa from what I can tell.


They had plenty at closing time, so I suspect there will be lots for people to buy tomorrow too. On examination it is an audio card and a few peripherals. USB sound cards are cheap these days, so that is another option for those who don't or can't purchase the magazine.

Below are photos of the contents of the box (just the electronic bits are shown, there is also a cardboard self-assembly enclosure and the MagPi magazine itself; I might take photos of that later), like a mini-teardown where the teardown is already done.


I think it's fair to say the kit component-wise is a little boring, but at least it's cheap. The stereo mic is good to see though. And maybe more projects will be available in future.

Also there are other ways to achieve Alexa-like capability (from both a hardware and PaaS vendor perspective).

For higher quality there are multi-microphone options for Pi, although of course they are more expensive.


Where the fun will come in is with the software - it will be a lot of fun to create voice-enabled projects.

Although it is intended to be used with Google's PaaS, and not everyone likes sending audio to a cloud, it is usable for standalone non-Internet-connected audio projects too, e.g. a music player. No need to plug in the microphone wires.


In summary, it is nice to see traction with voice-enabled natural language assistant projects for hobbyists. The hardware here is very basic, but at least it keeps interest up and provides lots of opportunities for those learning programming to have fun and create some cool projects.


Some photos and notes follow, as well as any reverse-engineered schematics. Also see the comments section.




Connections Overview

According to Google's page, these are the connections (see diagram below). Handily, the Pi's I2C, SPI and UART connections are broken out to pads, ready for soldering to the supplied SIL header pins.

There are also some output pins, probably MOSFET-driven (I can see what look like SOT-23-sized MOSFETs marked G31 D5 - most likely N-channel MOSFETs, part code Diodes Inc DMG3420U - on the right of the image, along with an SMD fuse and diode per output).

See further below for reverse-engineered schematic for that.


On the left of the board, there is space for soldering SIL pins for attaching hobby servo motors.


Output Drivers

The four drivers (shown on the right side of the diagram above) have most likely the following circuit - I think it is accurate but please let me know in the comments if it isn't:


The four MOSFETs are used in an open drain configuration, so the pins marked 'DRIVER' will be switched to ground (0V) whenever the corresponding GPIO pin goes high. If the desired load is 5V powered then the 5V pin can be used to provide power (there is a 500mA resettable fuse). The open drain configuration also allows a lower voltage source to be used to supply the load, and the MOSFET will switch the other end of the load to ground.
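As a sanity check of the open-drain behaviour described above, here is a tiny, purely illustrative Python model (the helper name is my own, not part of any AIY library): GPIO high turns the N-channel MOSFET on, which pulls the DRIVER pin to 0V and completes the circuit through a load wired between a supply rail and the DRIVER pin.

```python
# Illustrative truth-table model of one open-drain driver channel.
# This is not driver code for the Voice HAT, just the logic described
# in the text: GPIO high -> MOSFET on -> DRIVER pin pulled to ground.

def load_energised(gpio_high: bool) -> bool:
    """True when a load wired between +V and the DRIVER pin conducts."""
    mosfet_on = gpio_high            # gate is driven directly by the GPIO
    driver_pin_grounded = mosfet_on  # open drain: on means pulled to 0V
    return driver_pin_grounded       # grounded pin completes the circuit

print(load_energised(True))   # True: GPIO high energises the load
print(load_energised(False))  # False: DRIVER floats, load is off
```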

The diagram below shows how to connect a relay with a 5V coil. It is inadvisable to use the Driver outputs with a higher voltage despite it being an open drain configuration because current can flow through the diode in the schematic above and the fuse, causing damage.

Also, it is strongly advised not to use this for connecting mains operated equipment, because there are regulations governing what enclosure is used and how the cable is secured and so on. To control mains equipment it is good to use home automation methods such as wireless (there are radio transmitters available for the Pi).



A suitable relay could be a Finder 5V coil SPDT 6A contacts changeover relay or a Finder SPST 6A relay. Both of these relays are very compact (20x10x10mm) and have pins that will fit a breadboard or stripboard, so they are easy to prototype with. The comments section further below has a photo of how to wire one up and some example Python code to test it by turning the relay on and off. Although these relays can be used with mains, it would be extremely inadvisable to do so without a decent PCB layout (not stripboard!) and other precautions (see here for some reasons). So, the relay control is best used for lower-voltage tasks.


Mains Control

One solution is to use wireless control. This can be country or region specific.


I've used an Energenie wireless module before; it is a very small transmitter board that connects to the Pi and wirelessly controls mains sockets. The Energenie kit comes with the wireless transmitter board and two sockets. It's good value, and meets the relevant electrical standards for the UK. It could be wired to the connections on the left side of the Google Voice HAT board allocated for servos, for example (by default it needs six GPIOs, although that number can optionally be reduced if needed).


Audio Output

The centre of the board contains the integrated circuit (IC) with the audio digital-to-analog converter and mono audio amplifier inside it: a Maxim MAX98357A - thanks Inderpreet! (the 16-pin QFN-sized IC says AKK BDK on it) - and an EEPROM marked 24C32F.



Power Connections

There is also space for a DC power jack, but none is fitted; a DC power jack with thin pins would fit. There is a SOT23-6 part near it marked K4S DD, and a SOT-23 part marked 23X D1 is close by too. These two parts have a Q identifier, so are likely some type of transistor.


Microphone Board

The microphones are marked 432 QDF21G and are Knowles SPH0645LM4H (thanks again Inderpreet) MEMS digital microphones. They directly output an I2S bitstream.



The microphone input ports are on the underside:



This is the schematic of the microphone board, using the information from Inderpreet in the comments below. There may be some minor mistakes (please let me know in the comments section and I'll correct it). In theory the microphone board could be used standalone with the Pi, no need for the main Voice HAT board. It is a convenient breakout board for the surface-mount mics.


It could be suitable for small projects, e.g. with the Pi Zero or with other boards with an I2S interface. It would be advisable to have a small resistance (e.g. 51 ohms) in series with the LRCLK and BCLK pins if it is connected directly.

There is nothing to stop a voice assistant from working without a speaker (you just won't hear the result, unless the on-board headphone socket on the Pi 3 is used perhaps).

Not all projects would require an audio response. Also, for some projects one may prefer a discreet response in an earphone (e.g. check your calendar by requesting it, but not announcing your plans to everyone).



Safety Leaflet

This is enclosed in the box:


The Raspberry Pi and other low-cost, board-based computers are becoming increasingly popular amongst developers and hobbyists, thanks to the Internet of Things revolution. Users are easily able to experiment with IoT projects by connecting to a network using wired or wireless connections – whether this is through the simple use of an Ethernet cable, or with complementary accessories such as the Raspberry Pi’s Wi-Fi adapter dongle.


However, users requiring access to internet data on the go would benefit from an add-on that provides the ability to browse the web effectively, send SMS, and transfer data using a mobile network connection – especially if it is readily available as per the Raspberry Pi HAT specification.


When looking at forums and talking to Raspberry Pi users at events, we found that there were repeated requests for a product that added a 2G/3G/4G connection to the Raspberry Pi, as they were finding it hard to source a reliable add-on that would allow for data and SMS capability. We decided to develop a device that allows hobbyists and developers to create IoT projects on the go - and so, the PiloT® was created.

PiloT 3G HAT for Raspberry Pi


What is the PiloT®?

The PiloT® is a WAN communications board which provides a 3G / HSPA wireless interface for the Raspberry Pi. The PiloT® features an on-board Sierra Wireless HL Series module teamed with a SIM card of the user’s choice, as well as a GNSS solution, which is used to provide location and time information.


How does it work?

The PiloT® uses a small number of I/Os; the remainder are passed through on the 40-pin headers for use by other applications. Simple AT commands are used to control and monitor sessions on the Sierra Wireless HL Series module.
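To illustrate the kind of AT commands involved, the sketch below builds two standard 3GPP commands that cellular modules such as the HL Series accept. The helper functions are hypothetical (not a PiloT API), and the exact command set supported should be checked against Sierra Wireless's AT command reference.

```python
# Hypothetical helpers that build standard 3GPP AT command strings
# (3GPP TS 27.007) of the kind used to control cellular modules.
# These are illustrative only, not part of any PiloT software.

def define_pdp_context(cid: int, apn: str) -> str:
    """AT+CGDCONT: define a packet data (PDP) context for a data session."""
    return f'AT+CGDCONT={cid},"IP","{apn}"'

def query_network_attach() -> str:
    """AT+CGATT?: ask whether the module is attached to the packet network."""
    return "AT+CGATT?"

print(define_pdp_context(1, "internet"))  # AT+CGDCONT=1,"IP","internet"
print(query_network_attach())             # AT+CGATT?
```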


The PiloT® is able to communicate with the Raspberry Pi using serial or USB communications, with separate channels for control, data and location data. It can be powered by the Raspberry Pi, or a separate power supply can be used.


When used in CDC-ECM mode over USB, the PiloT® presents as an Ethernet-like WAN device, simplifying control of data sessions. In this mode, PPP is not required; a simple command initiates the session. The PiloT® can also be used to transfer data to the Sierra Wireless AirVantage® service using MQTT from a Raspberry Pi; offering a rapidly deployable device-to-cloud architecture.



Who is it aimed at?

Whilst the PiloT® is suitable for use in business applications by users looking to integrate it into IoT projects, it is also ideal for hobbyists, developers and educators alike. The PiloT® transforms a number of applications on the Raspberry Pi and other development boards by providing communications out in the field, rather than tethering the user to Wi-Fi or Ethernet, opening up another level of creative and practical opportunity and enhancing the convenience of everyday tasks.


Is it compatible with all Raspberry Pi variants?

The PiloT® is compatible with the Raspberry Pi 2, 3 and Zero, and can also be used in standalone mode to provide communications to other development boards.


PiloT 3G HAT as HL Series Evaluation Board | PiloT 3G HAT for Raspberry Pi | PiloT 3G HAT on UP board


Does the PiloT® offer 4G connectivity?

A 4G version (with 2G fallback) of the PiloT® will be available for purchase very soon - keep checking our ecommerce site for stock!


How well does it work?

We manually build and test every single PiloT® HAT, so we are sure that our customers receive a high-quality product, and we constantly ensure that PiloT® users have access to manually written, checked, and up-to-date user guides for the relevant PiloT® variant. We have taken the PiloT® to a variety of events - from large exhibitions to small conferences - demonstrating its capabilities with a heart rate monitor sending information over a mobile network connection, communicating accurate heart rate readings, time accurate to within around a second, and location information accurate to around 20 metres. The PiloT® runs at roughly 5Mbps downlink and 2Mbps uplink, and the 3G and 4G variants fall back to 2G in areas where data is limited.


Expo PiloT 3G heart rate monitor


Can I use the PiloT® to receive location and time information?

The HL8548-G variant of the PiloT provides a GNSS engine based on SiRF V technology. GNSS data can be transferred over the serial or USB interfaces, providing accurate location information for your application. It can also be used to provide accurate time.


Where can I get one?

Click here for more information about the PiloT® and how to purchase it.

I am hoping that a new HAT we have designed and released will be of interest to anyone wanting to control motors within their own projects using the Raspberry Pi computer.


The Pulse Train Hat is an add-on board for the Raspberry Pi computer and allows clean, fast and accurate pulses to be created using simple ASCII commands.


There are many hardware designs where a variable-frequency pulse is needed, but one of the most popular is driving stepper/servo motors that use pulse and direction lines.

Motors like this are found in machines such as 3D printers, CNC machines and robot arms, not to mention endless other motion control and automation machines.


Below is a Test Rig we used while developing the code.


It allows us to test all 4 channels of the PTHAT by sending pulses to stepper drivers connected to small NEMA 17 motors. It also has all the limit switch inputs brought out to switches, the ADC inputs connected to 10K pots, and the AUX outputs connected to LEDs.


We decided to use low-cost stepper drivers of the kind usually found in 3D printers; they are not brilliant, but they do the job. Our thinking is that if the PTHAT can control these noisy little drivers, then handling more expensive drivers will be easier!




Controlling motors may seem simple, but when you get down to detailed control, it can all become very confusing and a big learning curve.


With the new Pulse Train Hat (PTHAT) add-on for the Raspberry Pi and a new dedicated support site, we plan to make that task very simple and allow everyone to easily create their automation product.




We have created a number of example applications using Visual Studio 2015 that can be used with Windows 10 IoT.

These examples have been written in C# as Universal Windows Platform (UWP) apps, and all the source code can be downloaded from the website.









We have also designed the PTHAT to have its firmware upgraded easily using a JTAG programmer that we supply with each board.

Full details of the ARM processor we use have also been released, covering all the GPIO information, clock settings and peripherals for people wanting to write their own firmware.




A number of wiring diagrams have also been released covering various stepper driver hook-ups.


Of course, you do not have to use the PTHAT to control motors; it can also be used as a pulse generator for other projects.






Please feel free to check out the dedicated support site for more information

Hi, my name is Dan and my ham radio callsign is M0WUT.

I am developing an automated Raspberry Pi Zero based transceiver for the WSPR amateur mode.

What is WSPR?

WSPR stands for Weak Signal Propagation Reporter (full details at the WSJT Home Page), developed by Joe Taylor, K1JT.

In short, amateur stations transmit data packets containing their callsign, their location, and how much power they are transmitting with. Once received, these can be decoded by a computer, which uploads them to a central database (Database | WSPRnet). This can then be used to see whether conditions are good for working certain places on certain bands. WSPR is also used because it is very bandwidth efficient (approximately 5Hz bandwidth) as well as time-division multiplexed, and because of the large amount of error correction built into the code, it allows contacts to be made using very low power.


The map below shows all stations received by my friend George, M1GEO, in a 24-hour period. Nearly all of these stations were running <5W output power. George has a very good article explaining the coding side of the WSPR protocol and using an Arduino to generate the tones from a frequency synthesiser board off eBay, which can be found here:


The WSPR protocol encodes your callsign, power and location in 162 'bits'. For full information on how this is done, this document by Andy Talbot, G4JNT, is the best explanation I found. I will also be sharing my Arduino code to perform the encoding in a later part of this project. I write 'bits' because each one is not strictly one bit in the binary sense; it can take a value from 0 to 3. This is because the WSPR transmission uses 4-FSK. FSK stands for Frequency Shift Keying: a carrier is sent at a single frequency, and the information is carried by which frequency the carrier is at. The 4 indicates that there are 4 possible frequencies for the carrier, hence my use of 'bits' above, as the use of 4 tones allows a value between 0 and 3 to be sent at any instant in time.

The WSPR tone spacing is 375/256 Hz, or 1.465 Hz, so the entire bandwidth is roughly 3 times that (4 tones -> 3 lots of spacing, or about 4.5Hz). The WSPR system is based around 2-minute windows (starting on the even minutes) in which a station decides (randomly, but you can alter the probability with a TX (transmit) percentage slider) whether to receive or transmit for that window. If transmitting, it begins to send the 162 tones, each with the minimal time period of 1/(separation frequency) = 256/375 s, or 683 ms. This takes 0.683 x 162 = 110 seconds (roughly), and the station then waits out the remaining 10 seconds in the window before starting again.

The main transmission method is to connect the output of your computer soundcard to the microphone input of the radio, so the signal is treated the same way as if the user were whistling into the radio at the right frequency. This is called AFSK, or Audio Frequency Shift Keying, as the changing frequency happens at audio frequencies and is then mixed with an oscillator to produce the output at the right frequency.
The alternative (as used in the WSPRpi) is not to bother with this mixing process, but to have an oscillator running at the RF frequency and directly change its output frequency. This is called FSK and has the advantage of not requiring a modulator to combine the audio and LO signals, which means a simpler circuit and no mixing products. With AFSK, any harmonics of the audio signal also get mixed with any (and all!) of the harmonics of the local oscillator, producing a large number of output frequencies, not just the single carrier that is desired.
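The timing numbers above can be checked with a few lines of Python; this just reproduces the arithmetic from the text with exact fractions, nothing WSPR-specific:

```python
# WSPR timing arithmetic from the text, done with exact fractions.
from fractions import Fraction

tone_spacing = Fraction(375, 256)        # Hz, approx 1.465 Hz
symbol_period = 1 / tone_spacing         # 256/375 s, approx 0.683 s
n_symbols = 162
occupied_bw = 3 * tone_spacing           # 4 tones -> 3 spacings, ~4.4 Hz
tx_duration = n_symbols * symbol_period  # ~110.6 s, fits a 2-minute window

print(float(tone_spacing))  # 1.46484375
print(float(tx_duration))   # 110.592
```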


In receive, the radio converts the received WSPR signal down to audio frequencies, which are then fed into the soundcard of a computer. Once a two-minute receive window is complete, the software decodes any WSPR signals that were received and uploads them to the WSPR database, allowing maps like the one above to be produced. By only transmitting 20% of the time (this is adjustable, but 20% is the standard), even more users can share the same frequencies, as different stations will transmit at different times.
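The TX-percentage behaviour is just an independent random draw at the start of each window. A quick simulation (my own sketch, not WSJT code) shows the long-run transmit fraction converging on the slider setting:

```python
# Simulating the WSPR TX-percentage decision: at the start of every
# 2-minute window the station independently chooses to transmit with
# the configured probability (20% by default). Illustrative sketch only.
import random

def decide_windows(n_windows: int, tx_fraction: float = 0.2, seed: int = 42):
    rng = random.Random(seed)  # seeded so the result is reproducible
    return [rng.random() < tx_fraction for _ in range(n_windows)]

decisions = decide_windows(100_000)
print(sum(decisions) / len(decisions))  # close to 0.2
```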


WSPR GUI receiving Austria (OE6YWF) and Italy (IZ6BYY and IZ0IWD) on the 20 metre (14MHz) amateur band.


The Problem

The problem with this setup is that it gets rather messy with all of the cables required:




CAT/PTT is the system by which the computer tells the radio to start or stop transmitting.


It also requires my radio and my computer to be on the whole time. This consumes power and ties up resources. Also, I haven't found a nice way to swap quickly between WSPR and normal operating (voice or Morse code) without at least unplugging something and plugging in something else, which is a faff and I sometimes forget. (For ham radio people: it may be possible on the K3 - it's fairly new to me - but it certainly couldn't be done on my old FT840.)


WSPR is also a useful thing to take on a DXpedition (a radio trip to an unusual country or island which is rarely operated from), as it allows other people to see if they can hear your WSPR station, giving them an idea of whether it is possible to contact you; and vice versa, it allows the operators on the trip to see if there is good propagation to a certain place.


The (sort of) Solution

Some people have produced standalone WSPR beacons, which are transmit-only, but these are normally expensive for what they are (George's website mentioned above shows how to do it with an Arduino, a £7 board off eBay and about £2 of components, whereas the nicely enclosed units are £40 and up for little extra functionality). You have to tell them when to start transmitting, as they have no way of knowing the time; they have limited band options (either supplied with a single filter or bulky external ones); and they don't offer good options for useful features such as automatic band changes, or a useful way to see the results (particularly if on a trip with no Internet connection!). OK, the last point is impossible without a receiver, as you rely on other people uploading their received stations to the database so you can see who has received your signal. This makes them useless for you, as the operator of the rare station, without Internet.


The (Actual) Solution!

The solution I have gone with is a Pi Zero, adding Ethernet and a sound card via the GPIO (I want small size and no messy cables), and adding a PIC32 (the Arduino only has 32-bit precision floating point numbers, which is not accurate enough) with a Silicon Labs Si5351 synthesiser IC. This little chip costs 60p and is capable of generating 3 independent clock signals up to 100MHz. This, plus an amplifier for the transmit side, some switchable filters to allow different bands to be used, and a simple receiver, should make this possible and entirely self-contained.

A nice feature I added was a GPS receiver (I am most of the way through this project by now, so this is currently working) to let the unit know when the two-minute transmit/receive windows start, to automatically know the location, and to allow the frequency to be corrected using the 1pps output from the GPS receiver. The system knowing the actual time means that it can be set to automatically switch bands at certain times of day.

The Ethernet socket is important even with the Pi Zero W having WiFi, as the Pi will host a web server showing all of the received stations and the configuration for the WSPRpi, so it (should!) be entirely driverless and need no extra software to run. If the trip is somewhere remote with no Internet, a wired LAN is often set up to allow the operators' logs to be synchronised, but there is often no wireless network, which is why the Ethernet connection is still important. I have named this project WSPRpi. The intended plan (a reasonable amount of which has already been built) is below.





The first part (adding Ethernet and a sound card) is online on element14 here: Adding Ethernet and Sound Card to Rpi Zero (WSPRpi part 1)


Thanks for reading. Any feedback would be appreciated in the comments or on Twitter @m0wut

Thanks and 73 (amateur radio speak for best wishes)


If you are here to see how to add Ethernet / soundcard functionality to the Pi Zero, this is a part to be used in my automated amateur radio WSPR transceiver. Explanation of which can be found here: WSPRpi part 0: Introduction and what is WSPR? If you are not interested in how I intend to use it and just want to see how to add Ethernet or a soundcard to a Pi Zero, skip to the Design section.



Finished result



For people interested in the use in the WSPRpi project:

The two most important features for the Pi to be able to receive WSPR signals are an audio input and a network connection. I say network connection, as the WSPRpi may be used on DXpeditions without an Internet connection, so it will provide a web server showing the received stations. This is why the Ethernet connection was still useful, even with the Pi Zero W being released while I was waiting for the PCBs to be delivered: often a wired LAN is run between the operators' laptops, but no wireless connection is available. If a connection to the Internet is present, it will also upload the spots to the WSPR database automatically.



This is based around two main ICs: the Wolfson / Cirrus Logic WM8731 audio codec and the Microchip ENC28J60 SPI-to-Ethernet adapter. Both of these parts were chosen because they have kernel drivers built into the Raspbian distro, making using them relatively simple. There is little more to this design than the datasheet reference circuit for each, put onto the same PCB. This produces the following schematic (I have also attached the schematic file):

ethernet schematic.png


R14 and R15 are not needed. They were put there in case the I2C bus required external pull-up resistors, but it works fine without them. Strictly, the audio output filter components (C11, C12, R16, R17) are also not required for the WSPRpi, as it only needs audio input, but I added footprints for them as everything else was already in place. It's also an easier way to test the functionality of the WM8731.



The PCB was laid out, mainly trying to fit all those parts into the small footprint of the Pi Zero while keeping all the high-speed data buses away from each other.

This was achieved, but it does mean that fairly small parts are used (SSOP ICs) and the crystals have pads only on the bottom, meaning soldering these with a soldering iron may be a bit of an endeavour. Luckily, I have an Atten hot air station which managed these no problem.




I assembled the Ethernet portion of the board first, as I wanted to use SSH to access the Pi; I don't have much space, and having a second monitor around for longer than necessary would be a pain.

I have highlighted these parts on the BOM (also attached).


Preliminary Testing

I began by connecting the board (no Pi connected) to a 3.3V supply; it drew approximately 120mA, which matched a cheap Chinese breakout board for the same chip. If an Ethernet cable (connected to a router) is plugged in, the orange LED on the Ethernet connector should turn on and the green LED should flash, indicating network activity.


Mistake No. 1

I did not get any flashing LEDs, and got errors in the next step (Setup on the Pi) with a non-responsive device. Given the simplicity of the connections between the Pi and the ENC28J60, this pointed at bad soldering, an incorrect power supply, a wrong pinout, or something wrong with the crystal. I double-checked my soldering with decent magnification and it looked fine. The Eagle footprint was downloaded from Farnell / element14, so I was reasonably confident in that. The power supply was straight from my bench PSU, so again unlikely. Probing with an oscilloscope showed both pins at about 1.1V DC. It turned out I had got my footprint for the crystal wrong, shorting both of the oscillator pins of the IC to the can and connecting both ends of the crystal to ground. Oops! Minor surgery later and this was fixed.



Bodge on the crystal. Please excuse the slightly melted pin header, due to trying to fix this! This is a prototype board which is not quite right mechanically (connector locations and such) for the WSPRpi, so I corrected this for v1.1, which will hopefully be the final version in the WSPRpi.


Setup on the Pi

I advise not plugging the Ethernet cable in until I say to, as I discovered some interesting things about how the driver handles addressing, explained in 'Set constant MAC address'.

As the drivers for this are already part of the Raspbian distro, enabling them is very easy, at least for Raspbian users (I'm not sure about other distros), once the board is connected to the Pi.


I went from a fresh install of Raspbian (Jessie Lite, March 2017). I used the Lite version as I did not need a desktop for the WSPRpi, but the full version will also be fine.


Once logged in (with the usual username pi, password raspberry):


Edit config.txt:

sudo nano /boot/config.txt

Add the following to the bottom:
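For reference, the stock Raspbian overlay for the ENC28J60 is enabled with a single line like this (assuming the default wiring, with the interrupt on GPIO25; the optional int_pin and speed parameters are described in /boot/overlays/README on your image):

```
dtoverlay=enc28j60
```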



sudo reboot


Once logged back in, run dmesg


This will give you one of two possible results:


Option 1: This is not good. The line at 13.148478 shows the driver loaded fine, so editing config.txt worked, but the line at 13.201186 shows that it couldn't find the 28J60.


Option 2: Success, as shown by the line at 13.422723. eth0 shows as "link not ready" because I didn't have an Ethernet cable plugged in.


Set constant MAC address

Initially I ran the Pi with a static IP so didn't have this problem; switching back to DHCP (where the router assigns the IP address) led to problems. A MAC address should uniquely identify the hardware and stay the same across reboots. On Raspberry Pis with onboard Ethernet it is derived from the serial number, within a block of MAC addresses that appears to belong to Raspberry Pi, so each one is unique and never changes, because the serial number is hard-coded. The kernel module for the 28J60 Ethernet controller instead randomly generates a MAC address on every boot, which will make most DHCP routers unhappy, as they assign a different IP address to devices with different MAC addresses.


This was being a pain as my Pi didn't have a constant IP address to allow me to SSH in so this next step assigns a constant MAC address to the Pi.
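As an illustration of the difference (my own sketch, not the kernel source): the kernel's random-address helper draws six random bytes and just forces the first octet to describe a valid unicast, locally-administered address, so every boot produces a new MAC. A fixed address derived from something constant, such as a serial number, is effectively what the onboard-Ethernet Pis get:

```python
import random

def kernel_style_random_mac():
    """Roughly what the kernel's eth_random_addr() does: six random
    bytes, with the first octet forced to unicast (clear bit 0) and
    locally administered (set bit 1). Different on every call, which
    is why the 28J60's MAC changes on every boot."""
    octets = [random.randint(0, 255) for _ in range(6)]
    octets[0] = (octets[0] & 0xFE) | 0x02
    return ":".join(f"{o:02X}" for o in octets)

def stable_pi_mac(serial: int) -> str:
    """Build a constant address under the Raspberry Pi B8:27:EB prefix,
    deriving the last three octets from a fixed number (e.g. a serial)
    so it never changes between boots."""
    tail = serial & 0xFFFFFF
    return "B8:27:EB:{:02X}:{:02X}:{:02X}".format(
        (tail >> 16) & 0xFF, (tail >> 8) & 0xFF, tail & 0xFF)
```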


Thanks to Richard for this fix.


     1) Create the file /lib/systemd/system/setmac.service

sudo nano /lib/systemd/system/setmac.service



     2) Add the following contents:

[Unit]
Description=Set the MAC address for the ENC28J60 enet adapter at eth0

[Service]
Type=oneshot
ExecStart=/sbin/ip link set dev eth0 address 00:00:00:00:00:00
ExecStart=/sbin/ip link set dev eth0 up

[Install]
WantedBy=multi-user.target



Change the MAC address (shown above as 00:00:00:00:00:00) to whatever you like. I recommend B8:27:EB:xx:xx:xx (where each x is 0-9 or A-F, e.g. B8:27:EB:12:34:5A), as this prefix appears to be assigned to Raspberry Pis. At least, all of my Pis had MAC addresses with this prefix; Advanced IP Scanner identified them as manufactured by the Raspberry Pi Foundation, and the manufacturer field appeared blank when another prefix was used.


     3) Exit the editor (Ctrl-X, y, Enter)


     4) Set File permissions

sudo chmod 644 /lib/systemd/system/setmac.service



      5) Execute the following two commands to enable the service

sudo systemctl daemon-reload
sudo systemctl enable setmac.service



     6) Reboot

sudo reboot


     7) Check that this has been saved



Hopefully you will see the following where the address circled in blue is the MAC address you specified:


Now the Pi can be plugged into your router, where it will be assigned an IP address which should stay constant across reboots. I recommend testing that it is connected to the Internet by checking it can access a website, e.g.:


I then ran an online speed test, which measured a 5Mbps download and 2Mbps upload. Not bad!


Change Hostname (optional)

As this Pi would also be running a webserver, I went into raspi-config

sudo raspi-config


and changed the hostname (Option 2) to WSPRpi so I could access it later. I also updated Raspbian, to prevent issues installing anything in the future:

sudo apt-get update
sudo apt-get upgrade


If you are following the WSPRpi project, please note the hostname down, as it will be the web address of the server used to view the received spots.


All other components were then soldered onto the PCB.




I know two pins on the IC are shorted; this is intentional (see Mistake Number 2).

Setup on the Pi

Again go into config.txt

sudo nano /boot/config.txt


Find the line that says



Comment it out, as it interferes with the I2S bus used to talk to the audio codec we've added,



and add the following:
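Assuming the stock audioinjector overlay that ships with Raspbian (which matches the audioinjector-audio driver messages seen in dmesg later in this post), the line to add is:

```
dtoverlay=audioinjector-wm8731-audio
```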




sudo reboot


Run dmesg to check on the state of the audio driver




The two errors (in red text) are fine. The important line (circled in blue) is the one that says:

audioinjector-audio soc:sound: wm8731-hifi <-> 20203000.i2s mapping ok

This means everything is fine.


Mistake Number 2

Again, I didn't get this working first time. Once again the device was not responding, which produces output in dmesg that looks like:

[ 5.710350] audioinjector-audio soc:sound: ASoC: CODEC DAI wm8731-hifi not registered
[ 5.710381] audioinjector-audio soc:sound: snd_soc_register_card failed (-517)
[ 5.746090] EXT4-fs (mmcblk0p7): re-mounted. Opts: (null)
[ 5.847273] audioinjector-audio soc:sound: ASoC: CODEC DAI wm8731-hifi not registered
[ 5.847304] audioinjector-audio soc:sound: snd_soc_register_card failed (-517)
[ 5.855053] wm8731 1-001a: Assuming static MCLK
[ 5.855425] wm8731 1-001a: Failed to issue reset: -5
[ 5.855535] wm8731: probe of 1-001a failed with error -5
[ 5.857826] usbcore: registered new interface driver brcmfmac
[ 5.858302] audioinjector-audio soc:sound: ASoC: CODEC DAI wm8731-hifi not registered
[ 5.858312] audioinjector-audio soc:sound: snd_soc_register_card failed (-517)


Again, the connections were very simple, so the same suspects applied: soldering, power supply, footprint or crystal. (Error -517 is EPROBE_DEFER, the kernel deferring and retrying the probe; the real failure is the -5, EIO, when the codec reset got no response.)

Soldering: Checked with magnifying glass. It was fine.

Power supply: Fine

Crystal: Fine

Footprint: It turns out I had accidentally deleted a trace and left the mode pin floating. It needed to be shorted to ground to select I2C for the configuration interface instead of the default three-wire protocol. One of the pins next to it (CSB) was grounded, so I manually shorted the two together. It was surprisingly difficult to selectively short just two pins on an SSOP package! This has also been corrected for the version 1.1 PCBs.


Setup on Pi (continued)

The sound card by default doesn't use the inputs and outputs that I used. To change this, open the mixer:



The layout does look slightly different on the Pi; I'm doing this over SSH, so the graphics differ slightly, but it's functionally the same.

Press F5 to see all options, then use the arrow keys to navigate to the "Line" option (4th one across) and press Space so it shows L R Capture. Then navigate to the "Output Mixer HiFi" option and press "m"; it should now be highlighted in green.

It should now look like the image below albeit with minor graphical differences if you are doing this directly on the Pi.



Final Testing

To test the sound output, I decided a bit of Internet radio would be a nice way to test both components of this build.

I plugged a cheap pair of headphones into the Left and Right outputs. I didn't have room on the PCB for a proper socket, so I soldered one onto the board with flying wires.


First I installed mplayer (this took a while):

sudo apt-get install mplayer


Then I used:

mplayer -playlist


and the sound of Aerosmith indicated success!


I then added the above command to the end of /etc/profile to have the Pi start streaming from Heart on power-up, and used it as an Internet radio for a few days; it was surprisingly pleasant to listen to for something driving a 16 Ohm speaker.
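As a sketch, the line appended to /etc/profile looks like this ($PLAYLIST_URL is a placeholder for the stream playlist URL, which I've not reproduced here; the trailing & is my own addition, to stop mplayer from blocking the login shell):

```shell
# At the end of /etc/profile ($PLAYLIST_URL is a placeholder):
mplayer -playlist "$PLAYLIST_URL" &
```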



Once I was happy that the board worked, I soldered on a 3.3V regulator and supplied it from the Pi's 5V output (GPIO pin 2) for convenience. This will not be needed in the final WSPRpi, where I intend to have a motherboard that all these sub-boards plug into to supply power; it's just for testing.





If you do build this / have feedback, it would be interesting to hear your thoughts, either as a comment here or on Twitter @m0wut.


Thanks and 73 (amateur radio speak for best wishes)

