Home Automation in the UK Simplified, Part 2

Join Shabaz as he works on his IoT home!

Learn about home automation using the Raspberry Pi, Energenie MiHome and Node-RED.

Check out our other Raspberry Pi Projects on the projects homepage



Note: Although this blog post covers a UK home automation solution, a lot of the information is still relevant for other regions. It shows how to create software applications and graphical user interfaces, using a block-based system called Node-RED and JavaScript, that can communicate with hardware and with cloud-based services. It also shows how to set up the Raspberry Pi to run in a sort of 'kiosk mode', where the user interacts with the Pi as an end appliance through a graphical touch-screen interface. Finally, it shows how to provide auto-dimming capability for the touchscreen display, to suit environments with varying light conditions.



A few months ago, the topic of home automation in the UK was explored, and how it could be achieved safely and at low cost. It turned out to be simple: attach radio-controlled mains sockets, mains adapters and light switches in your home, and connect a Mi|Home Gateway box to your existing home router. The gateway has a 433MHz radio to talk to the sockets and switches, and connects via the Internet to a free cloud service called Energenie Mi|Home.


This is sufficient to control your home using the buttons on the sockets and switches, or through a web browser or the mobile app available from the Mi|Home website or the iPhone/Android app stores.


The home automation was enhanced by purchasing a low-cost Amazon Echo box which connects to the home network wirelessly. It allows for voice control of your home appliances.


Not everyone wants voice control, although I prefer it. No need to touch and share the germs on a touch-screen : ) Nevertheless, many users still prefer touching buttons or a screen for control. There is also the desire to control things programmatically using something like a Raspberry Pi, for more intelligent automation than just 'if this then that' style encoding of behaviour into your home. It would be perfectly feasible for the Pi to identify that a user has picked up a book, and automatically turn on the reading lamp. I decided to implement a large touchscreen on the wall to control the home, while retaining voice control and browser control. I also wanted to use a simple programming environment that could allow for more elaborate schemes in future.


This part 2 deals with how to go about this, using a Raspberry Pi 3 for the programming environment and for running a user interface, and a capacitive touch-screen for monitoring and control.


The project is really easy from a hardware perspective: the Pi just needs connecting to the home network (either using the built-in wireless capability, or the Ethernet connection available on the Pi). Any display could be selected, but the capacitive touch screen of course makes life easier because touch can be used! No keyboard required.


Further below in this blog post, the hardware design is extended slightly to provide auto-dimming capabilities to suit varying home lighting conditions.


To build the solution described here, the mandatory Energenie MiHome bits you need are the MiHome Gateway, and at least one MiHome control element such as a MiHome mains adapter.


An Amazon Echo, or Echo Dot device is optional but provides useful voice control as discussed in the earlier blog post.


The diagram here shows the approximate topology inside the home. It is really straightforward; it is difficult to go wrong!


Just to recap, the home devices such as lights and sockets are controlled via radio. These are shown at the top of the diagram. The hub that communicates over radio to them is the MiHome (also referred to as Mi|Home) Gateway. It connects to the Internet (for example using DSL) by plugging into your existing home Internet router. The user sets up an account at the Energenie MiHome website and downloads an app if desired. From here the user can control any device from anywhere with an Internet connection.


Voice commands are possible due to integration between Amazon’s Alexa service and the MiHome cloud service. All it requires is for the user to obtain an Amazon Echo or Echo Dot device as mentioned earlier, and run a small bit of configuration; all this was covered in Home Automation in the UK Simplified, Part 1: Energenie MiHome.


This part 2 now covers the green portion in the diagram above. Basically it connects a Raspberry Pi to the solution. The Pi communicates to the MiHome service using an application programming interface (API). A user interface also runs on the Pi, so that a connected touchscreen can be used for controlling and monitoring the home. The typical flow of information is therefore:


  1. The user presses a selection on the touchscreen
  2. The Pi sends the command in a specific format (using the API) to the MiHome web service in the cloud
  3. The MiHome service looks up the pre-registered user, and sends commands to the MiHome Gateway
  4. The MiHome Gateway unwraps the command and converts it into a radio signal
  5. The radio signal is picked up by the appliance intelligent mains socket and switches on or off the connected appliance
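The command sent in step 2 is just a small structured message. Here is a sketch in JavaScript of what building it might look like; the helper function and the 'off' command name are illustrative assumptions (the actual payload format for switching a device on is shown later in this post):

```javascript
// Hypothetical helper: build the message that a touchscreen button
// sends towards the MiHome cloud service. The 'subdevice_on' field
// matches the payload shown later in this post; 'subdevice_off' is
// an assumed counterpart.
function buildCommand(action, deviceId) {
  return {
    command: action === 'on' ? 'subdevice_on' : 'subdevice_off',
    objid: String(deviceId)  // identifies one Energenie adapter
  };
}

console.log(JSON.stringify(buildCommand('on', 65479)));
// {"command":"subdevice_on","objid":"65479"}
```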


In the event of network failure, the local controls on each mains socket will continue to function. The touchscreen controls can also continue to function since the Pi can switch to radio mode, sending commands directly to the IoT devices, using a radio module plugged on top of the Pi. This last capability is outside the scope of this blog post and may be covered in a later article if there is interest.


In summary, Energenie + Raspberry Pi + capacitive display + Amazon Echo forms a fairly comprehensive solution; little effort is required to build it, and all code for this project is published and easy to customise.


The diagram below shows the complete path of information between the home and the cloud services. This is not necessary to know, it is just background information for those that are curious.


How do I do it? - Brief Overview

Firstly, get a Pi 3 and the correct power supply (the Pi 3 along with the display uses a lot of power - most USB chargers and associated USB cables will not be sufficient) and do the usual basic configuration (install the latest software image, create a password, and get it connected to your home network using either wireless or the Ethernet connection). The steps for this are covered in many blog posts. Next, attach the display to the Pi.


The next step (described further below) is to enable the software development environment called Node-RED and copy across the example Energenie MiHome code (all links are further below) that was developed as part of this blog post. Configure it to suit your home appliances. This entails storing an 'API Key' that is unique to anyone who registers their MiHome Gateway on the Energenie MiHome website, and also obtaining and entering in the device identifiers so that the Pi knows which adapter you wish to control when you press particular buttons on the touchscreen. Finally, you can customize the touchscreen and make it auto-dimming when the room is dark with a small add-on circuit. The majority of this blog post will cover all these topics in detail.



The security of the base solution was covered in part 1; see the section there titled ‘Protocols and Examining the Risks’. The extra functionality in this part 2 has no known data security issue. No password is stored on the Raspberry Pi, and no inbound ports need to be opened on the router beyond those that would ordinarily be dynamically opened for web browsing responses. All communication between the Pi and the MiHome cloud service is encrypted. The Raspberry Pi stores just an ‘API key’ and the e-mail address that was used to register the MiHome service (use a throwaway e-mail account if you wish). In the event that someone hacks into the Pi, the API key would provide control of the home appliances only until the user deactivates it from the MiHome cloud service. With sensible precautions (no ports opened up on the router) and user access restricted to the Pi, the risk of this occurring is low.


Depending on the desired level of trust/mistrust, one could modify the touchscreen interface to always prompt for the MiHome password; this would eliminate the need to locally store an API key but would increase the inconvenience. It is an option nevertheless.


What is an API?

An Application Programming Interface is a type of machine-to-machine communication method that is (often) made public. It isn’t necessarily designed for any one particular purpose. The reason is that the creators of a service are often not sure how all their customers will use it. By having an API, unexpected solutions can be created, adding value for the user. Whole businesses have been created on the backs of APIs; for example, Uber may not have known what else could be done by ordering a taxi with an API; however, it is possible to automate deliveries by using such an API to automatically request a nearby driver as soon as someone places an order for your product. A taxi service that works like DHL is definitely unexpected, and would be harder to create without APIs. It has allowed businesses to have delivery staff on-demand.


Modern APIs frequently rely on HTTP and REST techniques. These techniques allow for efficient communication in a consistent manner. They nearly all result in the communicating device sending an HTTP request to a web address over the network, with any data sent as text, often as name/value pairs in JSON format; the HTTP response looks like what a web browser might receive, with a response code and text content. It means that such APIs can often be tested with any web browser like Chrome or Internet Explorer.
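As a concrete illustration of those name/value pairs, here is a tiny JavaScript sketch; the field names are invented purely for illustration:

```javascript
// A request expressed as name/value pairs. JSON.stringify produces the
// text that travels over HTTP; JSON.parse reconstructs it at the far end.
const request = { device: 'lamp', action: 'on' };   // illustrative fields
const wire = JSON.stringify(request);               // what is sent
const received = JSON.parse(wire);                  // what the server sees

console.log(wire);             // {"device":"lamp","action":"on"}
console.log(received.action);  // on
```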


In the case of MiHome, Energenie have created an API that allows one to do things like send instructions to turn on and off devices. Once the MiHome server in the cloud determines that the request was a valid and authenticated use of the API, it will send a message to the MiHome Gateway in your home. From there, a radio signal is used to control the end device. The system can work in the other direction too; end devices can send information via radio, such as power consumption. This information is stored in the MiHome service database in the cloud. When a request arrives using the API, the MiHome service will look up the data in the database and send it as part of the HTTP response.


For this project, the API will be invoked by the Raspberry Pi whenever a button is pressed on the touchscreen. This is just an example. With some coding effort it is also possible to instruct the Pi to (say) send on/off commands at certain times; this would implement a service to make the home appear occupied when the home is actually empty for instance.
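A hedged sketch of what such time-based control logic could look like in JavaScript; the hours and function name are arbitrary examples, not part of the published flows:

```javascript
// Hypothetical presence-simulation rule: decide whether a lamp should
// be on, to make the home appear occupied in the evening.
function lampShouldBeOn(hour) {
  return hour >= 18 && hour < 23;  // on between 6pm and 11pm (example hours)
}

// In a real flow this check would run periodically, e.g.:
// setInterval(() => { if (lampShouldBeOn(new Date().getHours())) { /* send on command */ } }, 60000);
console.log(lampShouldBeOn(20)); // true
console.log(lampShouldBeOn(9));  // false
```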


Building the Graphical User Interface (GUI)

There are many ways to achieve a nice user interface with modern Linux based systems. One popular way uses an existing application called OpenHAB which is intended for easy home automation deployments. There are many blog posts which describe how to install it and use the OpenHAB software application. I couldn’t find a working Energenie MiHome plugin however (perhaps it exists or will exist one day).


I decided to take a more general approach and create a lightweight custom application. After all, coding is part of the fun when developing your own home automation system. The custom application is not a large amount of code. In fact it is tiny. This has the benefit of being really easy to follow and modify, allowing people to heavily customize it because everyone's home and needs are unique. For instance, some users may not want a touchscreen. They could easily modify the code to instead take push-button input and show indications with LEDs. This is really easy to do by tweaking the custom app.


For this project, I decided to use JavaScript (one of the most popular languages for web development), and a graphical programming environment called Node-RED. When this environment is run on the Pi, the software creation is done (mostly) in a web browser using graphical blocks. With Node-RED, user interfaces and program behaviour are implemented by dragging blocks (called 'nodes') onto a blank canvas and literally 'joining the dots' between nodes. Each node can be customised by double-clicking on it. Once the design is complete, the user interface is automatically made available at a URL such as http://xx.xx.xx.xx:1880/ui where xx.xx.xx.xx is the IP address of the Pi that is running Node-RED.


It is then a straightforward task to automatically start up a web browser on the Pi in full-screen mode, so that the user interface is the only thing visible. In other words, the Pi and touchscreen become a dedicated user interface device. Since web technologies are used, it means a mobile phone can also be used if you're not near the touchscreen.


In brief, Node-RED has nodes (blocks) for doing all sorts of things that are useful for a user interface. There are nodes for buttons and sliders and graphs that can be used to construct the desired result. There are many nodes for application creation too. However, Node-RED does not have an off-the-shelf node that can control Energenie MiHome devices.


So, my first step was to design such a node and store it online so that anyone is free to use it. The instructions to install it are further below, in the 'Installing Node-RED' section. This means that when Node-RED is started and the web page for development is accessed, the left side blocks palette will contain a mihome node. It will automatically communicate with the cloud service using the Energenie MiHome API.


A one-time task is to retrieve a key from the MiHome cloud service. To do that, a special command called get_api_key is sent to the mihome node, along with the username and password that were used to register with the MiHome service. The code does not store the password; just the username (i.e. e-mail address) and the returned API key are stored to a local file. If the Pi crashes or is powered off, the user does not need to re-enter the username and password; the key will be re-read from the file. For those who require a different strategy, it should be straightforward to modify the code.


The next section describes all these steps in detail.


Installing Node-RED

As root user (i.e. prepend sudo to the beginning of each command line; alternatively, follow the information at Accessing and Controlling the Pi in the section titled 'Enabling the root user account (superuser)', then type su to become the root user, and type exit to revert to the 'pi' user when done):


apt-get update
apt-get install npm
npm install node-red-dashboard


exit out of root user, and update node-red by typing:


bash <(curl -sL


It takes a long time (perhaps 15 minutes) to uninstall the earlier version and upgrade it, so take a break!

Afterwards, in the home user folder (/home/pi) become root user and then type:


npm install -g git+


Exit out of root user and type:


node-red-start


After about ten seconds, you should see “Server now running at”.


Now in a browser, open up the web page http://xx.xx.xx.xx:1880 where xx.xx.xx.xx is the IP address of the Pi. You should see a Node-RED web page appear!


Using Node-RED

The CLI command node-red-start will have resulted in a web server running on the Pi at port 1880. Code is written (actually, mainly drawn graphically with a bit of configuration) in a web browser. The editor view is shown when any web browser (e.g. Chrome or Internet Explorer) is used to see the page at http://xx.xx.xx.xx:1880 where xx.xx.xx.xx is the IP address of the Pi.

Here is what it looks like:


In the left pane, (known as the palette), scroll down and confirm that you can see a node called mihome in the group under the title 'function' and a whole set of nodes suitable for user interfaces under the title ‘dashboard’. To save time finding a node in the palette, you could just type the name, e.g. mihome in the search bar as shown here.


What does this mean? Basically, it means that ‘mihome’ functionality is available for you to use in your graphically designed programs, which are known as ‘flows’ in Node-RED. The flows will be created in the centre pane, known as the Flow Pane. It is tabbed, and by default the blank canvas for the first flow (Flow 1) is presented. When creating programs, nodes are dragged from the palette onto the flow pane. Then, connections are made between nodes. Each node is configured by double-clicking on it; a node configuration parameter window then appears, and help on the node appears in the tab marked Info. The program is run (or ‘deployed’) by clicking on a button marked Deploy, shown in red on the top-right of the web page when a flow is created (by default it is grayed out).


An Example Home Automation Program

To help get started, I’ve created an example program sufficient to control home appliances with the MiHome solution. To obtain it, click to access the example code on github and then copy the program (press ctrl-A and then ctrl-C to copy the entire code into the clipboard). Next, go to the Node-RED web page, click on the menu on the top-right, and select Import->Clipboard. Click in the window, press ctrl-V to paste it in there, and click Import. The code will appear graphically, attached to the mouse pointer! Click anywhere inside the web page to place it.

This is what the demo program looks like:


As you can see, it is split into three main parts: the top, middle and bottom flows. The middle part is used to control a fan.


The light-blue nodes on the left represent buttons (the actual buttons will look nicer; this is just a view of the graphical code). When a ‘Fan On’ or ‘Fan Off’ button is pressed, some signal or message is sent into the yellow mihome node. The mihome node is responsible for communicating to the Energenie MiHome cloud (which in turn will send a message to your MiHome Gateway box, which will then send a radio signal to the end appliance mains socket). The green node on the right doesn’t do much; it is used for debugging and will dump text into the ‘Debug’ tab in the editor.


The top flow looks near-identical, except that the buttons do not control a fan, but rather control a group of appliances. For example, you may have several lamps in a room and you may wish to define a group to control them all simultaneously.


In summary, the mihome node will recognize various commands and will make the appropriate API call to the cloud, to invoke the appropriate real-world action like switching on appliances.


The bottom flow is a bit different:


It doesn’t have a light-blue button node on the left. Instead it has a darker blue node which is known as an Inject node. It has the characteristic that it can repeatedly do something at regular intervals. It has been configured (by double-clicking on it) to send a message to the yellow mihome node every minute. Every minute it instructs the mihome node to query the Energenie MiHome cloud and find out how much power is being consumed by the fan appliance. When the cloud receives the request, it will send the request to the Energenie MiHome Gateway box which will transmit a radio signal to the fan mains socket, which will respond back with the result.


The pink/orange get real power node is a function node. By double-clicking on it within Node-RED, you’ll see that all it does is extract the ‘real power’ value out of all the information that is returned and discards the rest. The final node in the chain, the fan-power-history node is a chart node. It is responsible for graphing all the information it receives. The end result would be a chart that updates every minute.
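As a sketch, a function node along these lines might contain something like the following; the exact property path into the MiHome response is an assumption, so check the Debug tab for the real structure:

```javascript
// Sketch of a Node-RED function node body in the spirit of
// 'get real power': keep only the power reading and discard the rest.
// The 'data.real_power' path is assumed, not confirmed.
function getRealPower(msg) {
  msg.payload = msg.payload.data.real_power;  // extract just the power value
  return msg;
}

// Example message shaped like a hypothetical MiHome response:
const example = { payload: { data: { real_power: 42, voltage: 240 } } };
console.log(getRealPower(example).payload); // 42
```

Inside an actual function node, only the two lines of the function body are needed; Node-RED supplies `msg` and forwards whatever is returned.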


To explore the yellow mihome node in a bit more detail, double-click on any preceding node to see what information is sent to the mihome node. For example, if you double-click on ‘Fan On’, you’ll see this information appear:


You can see that this is a button node (or more specifically ui_button), which is part of the dashboard collection of nodes in the palette. It basically will display a button on the screen. The button will be labelled “Fan On” and if the user clicks it, then a message or payload will be sent into the mihome node. The payload is partially shown on the screen, but click on the button marked ‘’ to see it fully. When you do that, you’ll see this text:


    {
        "command": "subdevice_on",
        "objid": "65479"
    }


The command indicates that this is something to be powered up, and the objid identifies what device should be powered up. That objid value 65479 happens to be an Energenie mains socket that I own, connected to a fan. In your home, every Energenie device will have its own unique ID, and they are very likely to be different to mine, although there could be overlap. So how does the mihome node know which device should be controlled, yours or mine?


The answer is, the mihome node uses an API key. This is unique and assigned whenever anyone creates an Energenie MiHome account. The API key can be obtained using the username and password that was used to set up the account. Code can be created to do that automatically, and then save it so that the Pi always uses the API key. For security reasons, I wanted it to prompt for the password, but not store the password. Only the e-mail address and API key are stored. To do that, I wanted an ‘admin’ screen on the user interface to allow the user to type in their credentials. This needs some additional code, which is explored next.
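For the curious, an e-mail address and API key are commonly combined into an HTTP Basic authentication header as shown below. That the MiHome API uses exactly this scheme is an assumption on my part; the mihome node takes care of authentication for you:

```javascript
// Common pattern: credentials joined with ':' and base64-encoded into
// an 'Authorization: Basic ...' header. Whether MiHome uses precisely
// this scheme is assumed, not confirmed by this post.
function basicAuthHeader(email, apiKey) {
  const token = Buffer.from(email + ':' + apiKey).toString('base64');
  return 'Basic ' + token;
}

console.log(basicAuthHeader('user@example.com', 'abc123'));
```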


Building an Admin View

The Admin view is used to initially configure the Pi so that it has the API key to control your home. I created it as a separate program (flow) that happens to appear in the same user interface. You can obtain the code by clicking on the Menu button (top right) and selecting Flows->Add. You’ll see a Flow 2 tab appear with a blank canvas for your new flow. Then, click here to access the admin view code on github and select all (ctrl-A) and copy (ctrl-C) the entire program there. Import it into the Node-RED editor as before (click on the Menu icon and then Import->Clipboard and paste it there using ctrl-V) and then click on Deploy.


Here is how it works: the top-left shows four user interface objects; the PIN, email and password nodes will be text boxes where the user can type in these parameters before clicking the OK box. All the information is collected by the next node in the chain, called invoke_get_key, which checks that all the text fields have been populated and that the PIN is correct. The PIN is not used for security; it just prevents young children in the home from accidentally wiping the API key. The code will not request an API key until the PIN is correctly entered, and since a request made with an incorrect username and password would wipe out the stored API key, the PIN prevents that from accidentally occurring if babies/young children start playing with the touchscreen. Since the PIN doesn’t play a security role, it is just hard-coded; you can edit it by double-clicking on the ‘invoke_get_key’ node. I won’t explain the rest of the flow, but it is simple and straightforward to explore by double-clicking on nodes.


The end result is that the flow will allow the API key to be retrieved and stored permanently on the Pi in a file in plain text format. The password is not stored as mentioned earlier. Since the API key is stored, if the Pi reboots, the user will not have to add the API key again.


When we examined the ‘Fan On’ node earlier, we saw that an identifier is used for the mains socket and in my case it happened to be 65479. To obtain such identifiers, we need to use the API to ask the Energenie MiHome cloud what devices exist in the home. The Scan Devices button is used to do that. It will make the appropriate API call and then show the list on the screen.


Working with the User Interface

So far, we have examined the flows for the example home automation system, and the Admin view. Once you’ve clicked Deploy, the code will be running. The user interface can be accessed by opening up a browser to http://xx.xx.xx.xx:1880/ui and you’ll see this:


The buttons can be tapped to switch things on and off, and the chart shows the power consumption of the fan over time, allowing you to see when the fan was used (it was not used; it is cold here!).


The menu is the result of the code in Flow 1. But the system won’t work until it has been configured as in Flow 2. To do that, click on the menu icon (the three bars on the top-left, next to where it says “HAL 9000”) and in the drop-down, select ‘Admin’; you’ll see the code from Flow 2 executed:


Once you’ve entered the PIN (it is 1234 unless you edited the code as mentioned earlier) and e-mail address and password as used on the Energenie MiHome cloud service, click on OK and the system will retrieve the API key from the cloud service and store it locally.


You can’t control the fan, because it is set up for my fan mains socket identifier; you’d need to change it to suit your own device. To do that, click on Scan Devices and the system will show in a pop-up window a list of all Energenie devices you own, and their associated identifiers. Take a screen print of that, and you can use it for editing the flow to add buttons and groups for those devices. Once you’ve done that, click on Deploy again.


Theme Customizations

I didn’t like the color scheme, but thankfully it is possible to choose your own. To do that, go back into the editor view at http://xx.xx.xx.xx:1880 and then click on the Dashboard tab on the right as shown here:


You’ll see lots of options to adjust the ordering of buttons in the Layout sub-tab. Click on the Theme sub-tab and then set Style to custom and you’ll see all the elements that can have different colors. Once they have been adjusted to suit preferences, they can be saved under a custom name. I didn’t want the touchscreen to be entirely lit up brightly at night-time, so I chose a dark background for example.


Building a ‘kiosk mode’ for the Pi and Display

For practicality, the Pi needs to be set up so that Node-RED executes automatically when the Pi is powered up, and the web browser must be set up to auto-start too, set to fill the entire touch display with no border or URL/website address visible. In other words, we want a type of kiosk mode much like the interactive help/information screens in shopping centres/malls.


The steps to implement this on the Pi are scattered all over the Internet and a bit outdated; I had to spend some time working out the customisation that would suit the Pi and Capacitive Display, for implementing such a system.


First, stop Node-RED by issuing the command node-red-stop and then as root user, type the following:


systemctl enable nodered.service
systemctl start nodered.service


Now Node-RED will automatically start whenever the Pi is rebooted.


The next step is to invoke a browser whenever the Pi starts up.

To do this, as root user type raspi-config and then select Boot Options and then choose to auto-boot into text console as ‘pi’ user. Then at the main menu press the tab key until Finish is highlighted to save it, and select to reboot the box. When the Pi comes up, you should see the text-based command shell/prompt on the touchscreen display, and the user already logged in.


Also as root user, type the following:


apt-get install matchbox-keyboard


This will install a virtual keyboard for the times you may need to tap text on the display; it isn't used for this project but could be useful in future.


Also type:


apt-get install matchbox-window-manager


You’ll also need a better web browser than the default. I installed two more, so that there was some choice. Still as root user, type:


apt-get install midori
apt-get install chromium-browser


(If you test it from the command line and chromium-browser has an error concerning mmal_vc_init_fd, then you will need to issue rpi-update and then reboot the Pi).


As normal ‘pi’ user, create a file in the /home/pi folder called containing the following:


#!/bin/bash
matchbox-window-manager -use_cursor no&
(
echo "10" ; sleep 1
echo "20" ; sleep 1
echo "50" ; sleep 3
echo "80" ; sleep 3
echo "100" ; sleep 2
) |
zenity --progress \
  --title="Starting up" \
  --text="Please wait..." \
  --auto-close

if [ "$?" = -1 ] ; then
        zenity --error \
          --text="Startup canceled."
fi
midori -e Fullscreen -a http://localhost:1880/ui


Create another file called with the same content, but replace the last line with:


chromium-browser --incognito --kiosk http://localhost:1880/ui


Edit the /home/pi/.bashrc file and append the following:


if [ $(tty) == /dev/tty1 ]; then
  xinit ./
fi


The result of all this is that when rebooted, the Pi will display a progress bar for ten seconds (allowing sufficient time for the Node-RED server to start up) and will then display a full-screen browser opened up at the correct URL for the user interface (http://localhost:1880/ui, the localhost address of the Pi).


Reboot the Pi (i.e. type reboot as root user) and the user interface should appear!


Preventing Display Blanking

After some minutes of inactivity, the display will blank by default. Depending on requirements this may be undesirable. To prevent the screen from blanking, make the following change.


Edit the and files, and insert the following lines after the first line:


xset -dpms
xset s off



Auto-Blanking the Mouse Pointer

It could also be desirable to make the mouse pointer/cursor disappear from the screen. Type the following as root user:


apt-get install unclutter


Then, as the ‘pi’ user, edit the and files and insert this just above the line containing the matchbox-window-manager text:


unclutter &


Reboot the Pi for these to take effect.


Auto Brightness for the Capacitive Touch Display

Although the kiosk mode implementation works fine, there is a lot that could be improved. For starters, the display is too bright in the evening. It would be possible to adjust the brightness level based on time, but I felt it may be better to just measure the brightness using a light dependent resistor (LDR).


The capacitive touch display brightness level is controlled using the following command line as root user:


echo xxx > /sys/class/backlight/rpi_backlight/brightness


where xxx is a number between 0 and 255 (a value of about 20 is suitable for night-time use, and 255 can be used for a bright screen during the day).
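If you drive this from Node.js instead of the shell, a small helper can keep the value inside the panel's valid range. This is a hypothetical sketch; a process with root permission would then write the result to the sysfs file shown above:

```javascript
// Clamp a requested backlight level into the valid 0-255 range before
// it is written to /sys/class/backlight/rpi_backlight/brightness.
// (Writing to that file requires root permission and is not done here.)
function clampBrightness(level) {
  return Math.min(255, Math.max(0, Math.round(level)));
}

console.log(clampBrightness(300)); // 255
console.log(clampBrightness(-5));  // 0
console.log(clampBrightness(20));  // 20  (a good night-time level)
```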


To automate this, a couple of scripts were created in the /home/pi folder. As the ‘pi’ user, create a file called containing the following:


echo 255 | sudo tee /sys/class/backlight/rpi_backlight/brightness > /dev/null


Do the same for a file called but set the value to 20.


Next, type:


chmod 755
chmod 755


In order to invoke these scripts, a new flow is created in Node-RED. Click here to access the auto brightness source code on github.


Once it has been added to Node-RED, click on Deploy to activate it.

The flow looks like this:


The left node, called dark_detect, is configured as shown below (double-click on it within Node-RED to see this):


The dark_detect node will generate a message of value 1 whenever the Raspberry Pi’s 40-way header pin 7 (GPIO 4) goes high.

A small circuit was constructed to generate a logic level ‘1’ whenever it goes dark:


The circuit consists of a Schmitt trigger inverter integrated circuit, a light dependent resistor, a 50k trimmer variable resistor and a 100nF capacitor. The trimmer resistor can be adjusted to suit the home lighting level.


It worked well. When the room lighting is reduced, the display automatically dims to a very comfortable level.



It is possible to create a nice touchscreen based user interface for home automation with the Pi. The programming effort is low using Node-RED. It is possible to create code ‘flows’ with graphical ‘node’ objects that can represent buttons on the screen. The functionality that interacts with the Energenie MiHome service is contained in a ‘mihome node’ graphical object that is inserted into the code flow. It will automatically send the appropriate commands to the Energenie MiHome cloud service, which will in turn send a message to the MiHome Gateway that will issue a radio message to control the desired home appliance. Monitoring capability is possible too; an example showing appliance energy consumption over time is contained in the code.


The solution with the Pi is reasonably secure; no password is stored on the Pi, the system stores an API key instead.


Finally a small circuit was constructed and an additional code flow was created that would automatically dim the display backlight when the home lighting is reduced.


I hope the information was useful; these two blog posts were rather long, but I wanted them to be detailed so that anyone can implement a home automation solution.


This guide provides step-by-step instructions for connecting a Unity client to your MATRIX Creator. This connection will be used to demonstrate how Unity can read data from every sensor the MATRIX Creator has.


Required Hardware

Before you get started, let's review what you'll need.

  • Raspberry Pi 3 (Recommended) or Pi 2 Model B (Supported) - Buy on Element14 - Pi 3 or Pi 2.
  • MATRIX Creator - The Raspberry Pi does not have a built-in microphone; the MATRIX Creator has an 8-microphone array perfect for Alexa - Buy MATRIX Creator on Element14.
  • Micro-USB power supply for Raspberry Pi - 2.5A 5V power supply recommended
  • Micro SD Card (Minimum 8 GB) - You need an operating system to get started. NOOBS (New Out of the Box Software) is an easy-to-use operating system install manager for Raspberry Pi. The simplest way to get NOOBS is to buy an SD card with NOOBS pre-installed - Raspberry Pi 16GB Preloaded (NOOBS) Micro SD Card. Alternatively, you can download and install it on your SD card.
  • A USB Keyboard & Mouse, and an external HDMI Monitor - we also recommend having a USB keyboard and mouse as well as an HDMI monitor handy if you're unable to remote (SSH) into your Pi.
  • Internet connection (Ethernet or WiFi)
  • (Optional) WiFi Wireless Adapter for Pi 2 (Buy on Element14). Note: Pi 3 has built-in WiFi.

For extra credit, enable remote (SSH) access to your device, eliminating the need for a monitor, keyboard, and mouse - and learn how to tail logs for troubleshooting.


Let's get started

We will be using MATRIX OS (MOS) to easily program the Raspberry Pi and MATRIX Creator in JavaScript, together with the Unity Engine.


Step 1: Setting up MOS

Download and configure MOS and its CLI tool for your computer using the following installation guide in the MATRIX Docs: Installation Guide


Step 2: Create a Unity-Sensor-Utility app

To create your own Unity-Sensor-Utility app on your local computer, use the command "matrix create Unity-Sensor-Utility". Then you will be directed to enter a description and keywords for your app. A new folder will be created for the app with five new files. The one you will be editing is the app.js file. From here you can clone the Unity-Sensor-Utility GitHub repo with the code or follow the guide below for an overview of the code.


Step 3: Start Socket Server

In the app.js file, you will need to require socket.io and create a server for the Unity client to connect to. Port 6001 is used by default, but it can be changed to whatever you want.


///Start Socket Server
var io = require('socket.io')(6001);
console.log('server started');


Step 4: Configure & Start MATRIX Sensors

To read data from the MATRIX’s sensors, each sensor has to be initialized and configured with a refresh and timeout option. The options object will be used as a default for all the sensors. To save the data from each sensor, an empty JSON object is created and overwritten each time there’s a new sensor value. Each sensor has its own object.
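The caching idea can be sketched in isolation: each sensor keeps only its latest reading, so a client request can be answered immediately. The function names below are illustrative, not part of the MOS API.

```javascript
// Latest-reading cache: one object per sensor, overwritten on every update.
var cache = {};

function onSensorValue(name, data) {
  cache[name] = data;        // keep only the newest reading
}

function latest(name) {
  return cache[name] || {};  // empty object until the first reading arrives
}

onSensorValue('temperature', { value: 23.5 });
onSensorValue('temperature', { value: 24.1 });  // overwrites the previous one
```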


// Config & Start MATRIX Sensors
var options = {
     refresh: 100,
     timeout: 15000
};

var gyroscopeData = {};
matrix.init('gyroscope', options).then(function(data){
     gyroscopeData = data;
});
var uvData = {};
matrix.init('uv', options).then(function(data){
     uvData = data;
});
var temperatureData = {};
matrix.init('temperature', options).then(function(data){
     temperatureData = data;
});
var humidityData = {};
matrix.init('humidity', options).then(function(data){
     humidityData = data;
});
var pressureData = {};
matrix.init('pressure', options).then(function(data){
     pressureData = data;
});
var accelerometerData = {};
matrix.init('accelerometer', options).then(function(data){
     accelerometerData = data;
});
var magnetometerData = {};
matrix.init('magnetometer', options).then(function(data){
     magnetometerData = data;
});


Step 5: Event Listeners

With the MATRIX Creator now reading and storing sensor data, it’s time to handle how to send that data when requested. Event listeners are created here to listen for events named after each sensor. Once such an event is received, the MATRIX will respond by emitting another event back containing the corresponding JSON object for the requested sensor. Sensor data is only sent when requested, because it is unlikely every sensor will be used at once; however, they can all be sent if you choose.


//Event Listeners
io.on('connection', function (socket) {
  console.log('Client Connected\n Sending Data...');

  //Send gyroscope data on request
  socket.on('gyroscope', function () {
    socket.emit('gyroscopeData', gyroscopeData);
  });

  //Send uv data on request
  socket.on('uv', function () {
    socket.emit('uvData', uvData);
  });

  //Send temperature data on request
  socket.on('temperature', function () {
    socket.emit('temperatureData', temperatureData);
  });

  //Send humidity data on request
  socket.on('humidity', function () {
    socket.emit('humidityData', humidityData);
  });

  //Send pressure data on request
  socket.on('pressure', function () {
    socket.emit('pressureData', pressureData);
  });

  //Send accelerometer data on request
  socket.on('accelerometer', function () {
    socket.emit('accelerometerData', accelerometerData);
  });

  //Send magnetometer data on request
  socket.on('magnetometer', function () {
    socket.emit('magnetometerData', magnetometerData);
  });

  //Client has left or lost connection
  socket.on('disconnect', function () {
    console.log('Client Disconnected');
  });
});


Step 6: Unity Setup

If you haven’t already, download the latest version of Unity here:

Unity will act as the client to the server running on the MATRIX Creator. Once you have Unity up and running, you’ll need to install a SocketIO plugin from the Asset Store.

In the “SocketIO” folder from the newly downloaded asset, navigate to the “Prefabs” folder and drag and drop the prefab located inside onto the current scene. The SocketIO game object added will require you to input your Raspberry Pi’s IP address and the server port defined in the MOS app we made:

  • ws://YOUR_PI_IP:6001/


Step 7: Creating MATRIX.cs

Moving on to the last steps, you’ll need to create a new C# file called MATRIX.cs inside your Unity Assets. Below the library imports are the public booleans that determine which sensors we want to read from the MATRIX Creator. Below those is where the SocketIO object is defined.


using System.Collections;
using System.Collections.Generic;
using UnityEngine;
using SocketIO;

public class MATRIX : MonoBehaviour {
   //Pick Desired Sensors
   public bool useGyroscope = false;
   public bool useUV = false;
   public bool useTemperature = false;
   public bool useHumidity = false;
   public bool usePressure = false;
   public bool useAccelerometer = false;
   public bool useMagnetometer = false;

   private SocketIOComponent socket;


Step 8: On Scene Start

Everything defined in this function will be executed once, the moment the current scene runs. The first thing that needs to be done here is to locate the game object we created from the prefab at the end of Step 6. After that, we include an event listener for each sensor to handle its values, similar to what was done in the MOS app. How the data is handled is described in a later step. The last part is to begin a Coroutine that contains an infinite loop.


   //On Scene Start
   public void Start() {
       //locate prefab
       GameObject go = GameObject.Find("SocketIO");
       socket = go.GetComponent<SocketIOComponent>();
       //Set Event Listeners
       socket.On("open", Open);//connection made
       socket.On("error", Error);// error
       socket.On("close", Close);//connection lost
       //Set MATRIX Sensor Event Listeners
       socket.On("gyroscopeData", gyroscopeData);
       socket.On("uvData", uvData);
       socket.On("temperatureData", temperatureData);
       socket.On("humidityData", humidityData);
       socket.On("pressureData", pressureData);
       socket.On("accelerometerData", accelerometerData);
       socket.On("magnetometerData", magnetometerData);

       //start non-blocking loop
       StartCoroutine(eventLoop());
   }


Step 9: Requesting Sensor Data

This eventLoop() Coroutine is essential because it allows us to write non-blocking code while requesting sensor data. A while(true) loop is defined here to request sensor data based on which booleans were set to true in Step 7. If a boolean is true, the loop will emit that sensor's event to the MATRIX Creator, which will then respond by sending an event back with the sensor data.


    // Requesting Device Data
    private IEnumerator eventLoop() {
        //delay to properly initialize
        yield return new WaitForSecondsRealtime(0.1f);
        //loop forever
        while (true) {
            yield return new WaitForSecondsRealtime(0f);//no delay
            //Use sensors if requested
            if (useGyroscope) socket.Emit("gyroscope");
            if (useUV) socket.Emit("uv");
            if (useTemperature) socket.Emit("temperature");
            if (useHumidity) socket.Emit("humidity");
            if (usePressure) socket.Emit("pressure");
            if (useAccelerometer) socket.Emit("accelerometer");
            if (useMagnetometer) socket.Emit("magnetometer");
        }
    }


Step 10: Handling Sensor Data

Here is where we define the functions that the event listeners in Step 8 call on. The first three are functions for logging connections, disconnections, and errors when connecting to the server running in MOS. The rest of the functions are for each sensor the MATRIX Creator has. Similar to our MOS app, each function reads any data passed into it and stores it in a static class that can be read by other scripts.


    // Event Listener Functions

    // On Connection
    public void Open(SocketIOEvent e) {
        Debug.Log("[SocketIO] Open received: " + e.name + " " + e.data);
    }
    // Error
    public void Error(SocketIOEvent e) {
        Debug.Log("[SocketIO] Error received: " + e.name + " " + e.data);
    }
    // Lost Connection To Server
    public void Close(SocketIOEvent e) {
        Debug.Log("[SocketIO] Close received: " + e.name + " " + e.data);
    }
    // Gyroscope
    public static class Gyroscope {
        public static float yaw = 0f;
        public static float pitch = 0f;
        public static float roll = 0f;
        public static float x = 0f;
        public static float y = 0f;
        public static float z = 0f;
    }
    public void gyroscopeData(SocketIOEvent e) {
        Gyroscope.yaw = float.Parse(e.data["yaw"].ToString());
        Gyroscope.pitch = float.Parse(e.data["pitch"].ToString());
        Gyroscope.roll = float.Parse(e.data["roll"].ToString());
        Gyroscope.x = float.Parse(e.data["x"].ToString());
        Gyroscope.y = float.Parse(e.data["y"].ToString());
        Gyroscope.z = float.Parse(e.data["z"].ToString());
    }
    // UV
    public static class UV {
        public static float value = 0f;
        public static string risk = "";
    }
    public void uvData(SocketIOEvent e) {
        UV.value = float.Parse(e.data["value"].ToString());
        UV.risk = e.data["risk"].ToString();
    }
    // Temperature
    public static class Temperature {
        public static float value = 0f;
    }
    public void temperatureData(SocketIOEvent e) {
        Temperature.value = float.Parse(e.data["value"].ToString());
    }
    // Humidity
    public static class Humidity {
        public static float value = 0f;
    }
    public void humidityData(SocketIOEvent e) {
        Humidity.value = float.Parse(e.data["value"].ToString());
    }
    // Pressure
    public static class Pressure {
        public static float value = 0f;
    }
    public void pressureData(SocketIOEvent e) {
        Pressure.value = float.Parse(e.data["value"].ToString());
    }
    // Accelerometer
    public static class Accelerometer {
        public static float x = 0f;
        public static float y = 0f;
        public static float z = 0f;
    }
    public void accelerometerData(SocketIOEvent e) {
        Accelerometer.x = float.Parse(e.data["x"].ToString());
        Accelerometer.y = float.Parse(e.data["y"].ToString());
        Accelerometer.z = float.Parse(e.data["z"].ToString());
    }
    // Magnetometer
    public static class Magnetometer {
        public static float x = 0f;
        public static float y = 0f;
        public static float z = 0f;
    }
    public void magnetometerData(SocketIOEvent e) {
        Magnetometer.x = float.Parse(e.data["x"].ToString());
        Magnetometer.y = float.Parse(e.data["y"].ToString());
        Magnetometer.z = float.Parse(e.data["z"].ToString());
    }
}


Step 11: Reading Data

With MATRIX.cs done, all that’s left is to attach the script to the SocketIO object in our scene. Once attached, there will be boxes you can check to pick which sensors you want to read. Each chosen sensor will log its value in the Unity console. If you see the values of the sensors you chose, then you’re good to go! Usage for reading each sensor in Unity can be found here:



Ways of the SD card

You may find yourself needing to back up your SD card for future reference or for posterity and fame. Whatever your reason, there are several well-documented ways you can do it.

In some cases you might also want to get things back from your backup, and you then generally need to write it back to an SD card to do so.

Using a Linux distribution of your choice for your desktop system, this article shows how to back up a card and get contents right from the .img backup file. I used Linux Mint, but the procedure should be fairly similar for other distributions too.


Reading a Raspbian SD card

If you are on a Linux platform, reading your Raspbian SD card is as easy as plugging it into your SD card reader; the OS will auto-mount it for you.

My Mint desktop mounts my cards under




Getting hold of the contents is obviously very easy in this case. Use the GUI or the Terminal to move and read your files the way you would for any other directory on your system.


Backup the SD card

As I said earlier, there are several ways this can be done; check the official pages from the Raspberry Pi Foundation or this really nice article on syntax-err0r, which explains how to do it from a live system!

Remember that it is better to unmount the device that you’d like to back up.

Check what devices are available with


sudo fdisk -l



and run




to check that none of the partitions of the device you want to back up are in use.

If, for example, you are running a graphical desktop, then your SD card is automatically mounted,


in which case you need to either eject the card from the GUI or run


umount /dev/sdx1 && umount /dev/sdx2


Note that you might have more partitions on your SD card, make sure to unmount them all.

Whichever way you choose to go about creating your backup, it pretty much boils down to creating a .img file.

You will generally run


sudo dd bs=4M if=/dev/sdx of=backup.img


and restore as


sudo dd bs=4M if=backup.img of=/dev/sdx


where sdx is the device assigned to your SD card in your Linux system


You can even combine the command with gzip so that the backup takes much less space on your backup device. This makes a big difference if the card is mostly empty, because the plain dd command above does not skip empty space; it simply copies it into the image. So if you have a 16GB SD card with only 4GB of data, you will still get a 16GB file with the method above!

To use gzip


sudo dd bs=4M if=/dev/sdx | gzip > backup.img.gz


and restore as


gunzip --stdout backup.img.gz | sudo dd bs=4M of=/dev/sdx
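To see the whole pipeline work without risking a real card, the same commands can be rehearsed on an ordinary file. The temporary file paths below exist only for the demonstration; on a real card you would use /dev/sdx and sudo as shown above.

```shell
# Rehearse the backup/restore round trip on a plain file instead of a device.
src="$(mktemp)"; restored="$(mktemp)"; img="$(mktemp).img.gz"
dd if=/dev/urandom of="$src" bs=1024 count=64 2>/dev/null    # fake "card" contents
dd bs=4M if="$src" 2>/dev/null | gzip > "$img"               # backup + compress
gunzip --stdout "$img" | dd bs=4M of="$restored" 2>/dev/null # restore
cmp -s "$src" "$restored" && echo "backup verified"
```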


Mount the img

One way or another you should now have a .img file sitting somewhere. If you used gzip to compress the image, unzip it at this point.


gunzip backup.img.gz


The first thing to do is to have a look at the partitions within the image file.


fdisk -lu backup.img




This will tell us what the offset for the data partition is. The SD card has at least two partitions; one is the boot partition, and we would generally not be interested in it.

The offset is calculated by multiplying the unit size by the start sector of the partition we need to mount.

In our case the unit size is 512 bytes and the start sector is 94208 so the following command


sudo mount -t auto -o loop,offset=$((94208*512)) backup.img /mnt


will mount backup.img2 in /mnt, which is generally available and free on most systems. Use another mountpoint if you need to.

Equally, if you wanted to mount backup.img1 you would need to use an offset of 8192*512.
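The offset calculation is plain shell arithmetic; for the two partitions discussed here:

```shell
# unit size (bytes) x start sector = byte offset to pass to mount
unit=512
boot_start=8192
root_start=94208
boot_offset=$((unit * boot_start))    # offset for backup.img1
root_offset=$((unit * root_start))    # offset for backup.img2
echo "$boot_offset $root_offset"      # 4194304 48234496
```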



Once the partition is mounted you can then proceed to retrieve whichever file you were after in the first place. You can use this same approach even if you want to add files or change existing ones in the backup. Once unmounted, the backup can be restored to an SD card with all the changes you have made, making it a good way to keep an updated master backup.



  • If you want some feedback on the progress of your backup or restore, try using dcfldd instead of dd. You might need to install it with
    apt-get install dcfldd
  • All the dd commands above will work perfectly every time, but purists will advise you to run the sync command after each dd command. You can either run it separately or inline with your dd commands by adding
    && sync

This guide provides step-by-step instructions for wiring a robot arm to your MATRIX Creator and then having that arm hit a gong whenever a Stripe sale or Slack command is received. It demonstrates how to use the GPIO pins of the MATRIX Creator to control a servo and how to receive a Slack or Stripe response to trigger the arm.



Required Hardware

Before you get started, let's review what you'll need.

  • Raspberry Pi 3 (Recommended) or Pi 2 Model B (Supported) - Buy on Element14 - Pi 3 or Pi 2.
  • MATRIX Creator - The Raspberry Pi does not have a built-in microphone; the MATRIX Creator has an 8-microphone array perfect for Alexa - Buy MATRIX Creator on Element14.
  • Micro-USB power supply for Raspberry Pi - 2.5A 5V power supply recommended
  • Micro SD Card (Minimum 8 GB) - You need an operating system to get started. NOOBS (New Out of the Box Software) is an easy-to-use operating system install manager for Raspberry Pi. The simplest way to get NOOBS is to buy an SD card with NOOBS pre-installed - Raspberry Pi 16GB Preloaded (NOOBS) Micro SD Card. Alternatively, you can download and install it on your SD card.
  • A USB Keyboard & Mouse, and an external HDMI Monitor - we also recommend having a USB keyboard and mouse as well as an HDMI monitor handy if you're unable to remote (SSH) into your Pi.
  • Internet connection (Ethernet or WiFi)
  • (Optional) WiFi Wireless Adapter for Pi 2 (Buy on Element14). Note: Pi 3 has built-in WiFi.
  • Gong - You can use a bell or anything else that makes noise when hit - Buy on Amazon
  • Robot Arm - we recommend the meArm because it is a simple robot arm with many degrees of motion - Buy Here
  • Jumper Wires - used to connect the robot arm to the MATRIX Creator - Buy on Amazon

For extra credit, enable remote (SSH) access to your device, eliminating the need for a monitor, keyboard, and mouse - and learn how to tail logs for troubleshooting.


Let's get started

We will be using MATRIX CORE to program the Raspberry Pi and MATRIX Creator in JavaScript via its Protocol Buffers.


Step 1: Build your meArm

Follow this guide to build your meArm. (skip step 2 of the meArm guide)


Step 2: Setting up MATRIX CORE

Download MOS (MATRIX Open System) to get all the dependencies for MATRIX CORE on your computer using the following installation guide in the MATRIX Docs: Installation Guide


Step 3: Create the app folder and its files

In the directory of your choice, create a folder called MATRIX-Gong-Master. Within that folder create three files named as follows: app.js, configure.json, and package.json.


Step 4: Configuration

The configure.json file is used to read your API keys, server port, and the Slack channel to post in. Here is the code:


{
    "apiKeys": {
        "slack": "YOUR_SLACK_API_KEY",
        "stripe": "YOUR_STRIPE_API_KEY"
    },
    "slackChannel": "#gong_master",
    "serverPort": 6000
}


Follow Step 5 to retrieve your Slack access token and follow Step 6 to retrieve your Stripe key. Set the slackChannel to your desired channel to list all gong events and set the serverPort to the port you will be using to accept the requests.


Step 5: Slack Setup

To properly integrate Slack, a few minor edits need to be made before we insert the API Key.

1. Create a new Slack app and select which team you want to install it for.

2. Under features, click on slash commands to create a new command.

          Set the Command to what you would like to type to trigger the arm in Slack.

          Point the Request URL to http://YOUR-PUBLIC-IP:PORT/slack_events. You can find your Public IP here and you can learn how to port forward here.

          Example of this below:


3. Once saved, go into Bot Users and set a username of your choice.

4. The next step for Slack is to go into OAuth & Permissions and allow the following under Scopes:

          Send a bot with the username [your bot's name]

          Post messages as [your bot's name]

          Example of this below:


5. Slack is now configured to run your Gong Master! At the top of the page you'll find 2 API Keys. Copy the Bot User OAuth Access Token and paste it into your configure.json file. Example of the API keys below:


Step 6: Stripe Setup

1. If you do not already have a Stripe account register here and activate your account.

2. Go to API on the left side of the Stripe Dashboard and click Webhooks at the top.

3. Click Add endpoint on the right and enter the URL at which you will receive requests, as follows: http://YOUR-PUBLIC-IP:PORT/events. You can find your Public IP here and you can learn how to port forward here.

4. From there select the Webhook version you would like to use and press "Select types to send" where you will be able to select what event types you want to accept. In our case we will be using "charge.succeeded" and "invoice.payment_succeeded". Example of this below:


5. Stripe is now configured to send events to your URL. Go to the Webhook you just created click "Click to reveal" in the Signing Secret section to retrieve your API key to add to your configure.json file. Example of this below:


Step 7: Robot Arm Wiring

1. Using the jumper wires, we are going to wire the bottom servo of the robot arm to the MATRIX Creator. First connect the Yellow servo wire to the pin on the MATRIX Creator labeled GP00.

2. Connect the Red servo wire to one of the pins on the MATRIX Creator labeled 5V. (there are two pins labeled 5V, either one will work)

3. Finally connect the Brown Servo wire to one of the pins on the MATRIX Creator labeled GND. (there are two pins labeled GND, either one will work)

Examples of this below:




Step 8: app.js Code Overview

All the code for the app.js file is reviewed below. You can copy and paste it all, or copy the file from the GitHub repo for the project here.


Global Variables

This section defines and configures all the necessary libraries we need.

// Global Vars
var creator_ip = '127.0.0.1';//local ip
var creator_servo_base_port = 20013 + 32;//port to use servo driver.
var matrix_io = require('matrix-protos').matrix_io;//MATRIX protocol buffers
//Setup connection to use MATRIX Servos
var zmq = require('zmq');
var configSocket = zmq.socket('push')
configSocket.connect('tcp://' + creator_ip + ':' + creator_servo_base_port);
//Api keys
var fs = require("fs");
var userConfig = JSON.parse(fs.readFileSync(__dirname+'/configure.json'));
var stripe = require('stripe')(userConfig.apiKeys.stripe);
var request = require('request');
var express = require('express');
var bodyParser = require('body-parser');
var app = express();


Set Servo Position

This function is meant to simplify moving a servo in MATRIX CORE. The pin for the servo is set to 0 here, but it can be changed freely to any other pin.

function moveServo(angle){
    //configure which pin and what angle
    var servo_cfg_cmd = {
        pin: 0,
        angle: angle
    };
    //build move command
    var servoCommandConfig = matrix_io.malos.v1.driver.DriverConfig.create({
        servo: servo_cfg_cmd
    });
    //send move command (serialize the config and push it over ZMQ)
    configSocket.send(matrix_io.malos.v1.driver.DriverConfig.encode(servoCommandConfig).finish());
}


Gong Swing Timing

Using our previously defined function for moving servos, this function creates the swing motion that will be called whenever we want our Gong Master to use the gong. The variables above the function, gongsInQueue and gongInUse, allow the gong to handle multiple requests and to properly wait for each swing to finish before swinging again.

var gongsInQueue = 0;//gongs requested
var gongInUse = false;//control swing usage

function gongMaster(){
    setInterval(function() {
        //checks for gongs queued and for current swing to stop
        if(gongsInQueue > 0 && !gongInUse){
            gongInUse = true;
            gongsInQueue--;//lower queue amount by 1
            moveServo(180);//swing gong arm
            //delay for position transition (delay values illustrative)
            setTimeout(function(){
                moveServo(90);//gong arm rest position
                //delay for position transition
                setTimeout(function(){
                    gongInUse = false;
                }, 1000);
            }, 1000);
        }
    }, 100);
}
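The queue/flag behaviour can be checked in isolation by modelling one timer pass as a function call. The names tick and swingFinished are illustrative; the real code drives the same logic with setInterval and setTimeout.

```javascript
// Minimal model of the gong queue: one swing at a time,
// extra requests wait their turn.
var gongsInQueue = 0;
var gongInUse = false;
var swings = 0;

function requestGong(n) { gongsInQueue += n; }

function tick() {              // one pass of the setInterval body
  if (gongsInQueue > 0 && !gongInUse) {
    gongInUse = true;
    gongsInQueue--;
    swings++;                  // stands in for moveServo(180)
  }
}

function swingFinished() {     // stands in for the setTimeout callbacks
  gongInUse = false;
}

requestGong(2);   // e.g. a transfer gongs twice
tick();           // first swing starts
tick();           // ignored: a swing is in progress
swingFinished();
tick();           // second swing starts
```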


Post Slack Message

Using your Slack API key, a message can be posted to the Slack channel set in configure.json.

function logToSlack(message){
    request({
        // HTTP Archive Request Object
        har: {
            url: 'https://slack.com/api/chat.postMessage',
            method: 'POST',
            headers: [
                {
                    name: 'content-type',
                    value: 'application/x-www-form-urlencoded'
                }
            ],
            postData: {
                mimeType: 'application/x-www-form-urlencoded',
                params: [
                    { name: 'token', value: userConfig.apiKeys.slack },
                    { name: 'channel', value: userConfig.slackChannel },
                    { name: 'link_names', value: true },
                    { name: 'text', value: message }
                ]
            }
        }
    });
}


Handle API Events

This function is where the events from the Slack and Stripe APIs are handled. Once either API's event is processed, gongsInQueue is increased to let the gongMaster() function know that it's time to gong!

function processEvents(api, event){
    //stripe events (Stripe webhooks carry the object under event.data.object)
    if(api === 'stripe'){
        if(event.type === 'charge.succeeded'){
            if(event.data.object.status === 'paid'){
                console.log('There was a charge for '+event.data.object.amount);
                logToSlack("A Charge Has Occurred");
                gongsInQueue++;//gong once
            }
        }
        else if(event.type === 'transfer.paid'){
            if(event.data.object.status === 'paid'){
                console.log('There was a transfer for '+event.data.object.amount);
                logToSlack("A Transfer Has Occurred");
                gongsInQueue+=2;//gong twice
            }
        }
    }
    //slack event
    else if(api === 'slack'){
        //check that slack is sending a slash command event
        if(typeof event.command !== 'undefined' && event.command !== null){
            //check that the command is /gong
            if(event.command === '/gong'){
                logToSlack('@'+event.user_name+' has summoned me!');
                gongsInQueue++;//gong once
            }
        }
    }
    //unhandled event
    else {
        console.log('I was not made to handle this event');
    }
}



The final part of the code creates the server that listens for messages from Stripe and Slack. Once the server receives a message (POST request), it makes use of all the previously defined functions.

app.use(bodyParser.urlencoded({ extended: true })); //handle urlencoded extended bodies
app.use(bodyParser.json()); //handle json encoded bodies

//STRIPE POST Request Handling
app.post('/events', function(req, res) {
    processEvents('stripe', req.body);//begin gong process
    res.sendStatus(200);//everything is okay
});

//SLACK POST Request Handling
app.post('/slack_events', function(req, res) {
    //check that request is from slack (not guaranteed; Slack's user-agent contains "Slackbot")
    if(req.headers['user-agent'].indexOf('Slackbot') !== -1){
        processEvents('slack', req.body);//begin gong process
        console.log("received request from slack");
        res.send(req.body.user_name + ', Your Wish Has Been Gonged!');//response to user for /gong
    }
    //request is not from slack
    else {
        res.send('You Have Angered The Gong Master!');
    }
});

//Create Server
app.listen(userConfig.serverPort, function() {
    console.log('Gong listening on port '+userConfig.serverPort+'!');
    gongMaster();//start watching for gong requests
});


Step 9: Code for package.json

This is the reference for all the libraries and scripts used in this project.

{
  "name": "gong_master",
  "version": "1.0.0",
  "description": "robot gong that uses the slack and stripe api",
  "main": "app.js",
  "scripts": {
    "test": "echo \"Error: no test specified\" && exit 1"
  },
  "author": "Carlos Chacin",
  "license": "ISC",
  "dependencies": {
    "body-parser": "^1.17.2",
    "express": "^4.15.4",
    "matrix-protos": "0.0.13",
    "request": "^2.81.0",
    "stripe": "^4.24.1",
    "zmq": "^2.15.3"
  }
}


Step 10: Running the program

From the project directory run "node app.js" in the CLI to start the program.

To test in Slack, use the command you made and the Gong Master should respond to your request!



All code for the app can be found on GitHub here:

MATRIX Creator Eclipse Weather App

In celebration of Eclipse Day, we have made this app to tell you what the weather is outside, so you know whether you will be able to see the eclipse with your current local weather conditions. This guide provides step-by-step instructions for determining your general location and giving you information about the weather via a series of LED animations on a Raspberry Pi with a MATRIX Creator. It demonstrates how to look up your location and then feed it to the Dark Sky API to get the relevant local weather information, which is used to show an LED animation on your MATRIX Creator. The main goal of this app is to give you an interesting new way to receive your current weather conditions.


Required Hardware

Before you get started, let's review what you'll need.

  • Raspberry Pi 3 (Recommended) or Pi 2 Model B (Supported) - Buy on Element14 - Pi 3 or Pi 2.
  • MATRIX Creator - The Raspberry Pi does not have a built-in microphone; the MATRIX Creator has an 8-microphone array perfect for Alexa - Buy MATRIX Creator on Element14.
  • Micro-USB power supply for Raspberry Pi - 2.5A 5V power supply recommended
  • Micro SD Card (Minimum 8 GB) - You need an operating system to get started. NOOBS (New Out of the Box Software) is an easy-to-use operating system install manager for Raspberry Pi. The simplest way to get NOOBS is to buy an SD card with NOOBS pre-installed - Raspberry Pi 16GB Preloaded (NOOBS) Micro SD Card. Alternatively, you can download and install it on your SD card.
  • A USB Keyboard & Mouse, and an external HDMI Monitor - we also recommend having a USB keyboard and mouse as well as an HDMI monitor handy if you're unable to remote(SSH) into your Pi.
  • Internet connection (Ethernet or WiFi)
  • (Optional) WiFi Wireless Adapter for Pi 2 (Buy on Element14). Note: Pi 3 has built-in WiFi.

For extra credit, enable remote (SSH) access to your device, eliminating the need for a monitor, keyboard, and mouse, and learn how to tail logs for troubleshooting.


Let's get started

We will be using MATRIX OS (MOS) to easily program the Raspberry Pi and MATRIX Creator in Javascript.


Step 1: Setting up MOS

Download and configure MOS and its CLI tool for your computer using the following installation guide in the MATRIX Docs: Installation Guide


Step 2: Create a MATRIX-Weather-App

To create your own MATRIX-Weather-App app on your local computer, use the command "matrix create MATRIX-Weather-App". Then you will be directed to enter a description and keywords for your app. A new folder will be created for the app with five new files. The one you will be editing is the app.js file. You will also be creating a file called weatherAnimations.js for the weather animations.

From here you can clone the MATRIX-Weather-App GitHub repo with the code or follow the guide below for an overview of the code. Either way, make sure to follow the instructions in step 4.


Step 3: Global Variables

In the app.js file you will need to set up the following libraries and global variables for the app:


//Load libraries
var weatherAnims = require(__dirname+'/weatherAnimations'); //custom weather animations
var Forecast = require('forecast'); //forecast NPM module
var request = require('request'); //HTTP request library

//Global Variables
//Detailed location data
var location = {};

//Configure forecast options
var forecast = new Forecast({
    service: 'darksky', //only api available
    key: 'YOUR_KEY_HERE', //darksky api key
    units: 'fahrenheit', //fahrenheit or celcius
    cache: false //cache forecast data
});


Step 4: Dark Sky API

Within the forecast variable created in Step 3 change YOUR_KEY_HERE to be the API key you get once you make an account with Dark Sky here.


Step 5: Obtaining Location Data

To obtain your location data, we will be using an IP geolocation service to get your latitude and longitude from your IP address. This is done with the following code in the app.js file:


//Obtaining location data
function getLocation(callback){
    //request location data (the geolocation API URL from the original post is omitted here)
    request('GEOLOCATION_API_URL')
    //catch any errors
    .on('error', function(error){
        return console.log(error + '\nCould Not Find Location!');
    })
    //get response status
    .on('response', function(data) {
        console.log('Status Code: '+data.statusCode);
    })
    //get location data
    .on('data', function(data){
        //save location data
        location = JSON.parse(data);
        //log all location data
        console.log(location);
        callback();
    });
}


Step 6: Selecting Weather Animations

Within the app.js file there will be a function that stops and loads an LED animation corresponding to the weather information provided by Dark Sky. Use the function below:


//Selecting Weather Animation
function setWeatherAnim(forecast){
    //clear MATRIX LEDs (the 'stop' event is assumed from weatherAnimations.js)
    weatherAnims.emit('stop');
    //set MATRIX LED animation
    weatherAnims.emit('start', forecast);
}


In the MATRIX-Weather-App folder you will need to create a file called weatherAnimations.js. You can find the code for the weatherAnimations.js file here.


Each LED sequence in the weatherAnimations.js file is tied to one of these responses from the Dark Sky API.

  • clear-day
  • clear-night
  • rain
  • snow
  • sleet
  • wind
  • fog
  • cloudy
  • partly-cloudy-day
  • partly-cloudy-night

If there is a hazard such as hail, thunderstorms, or tornadoes, then the LEDs will turn red.

If there is no LED sequence created for the current weather, the LEDs will turn yellow.
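The selection logic described above (a dedicated animation when one exists, red for hazards, yellow otherwise) can be sketched as a simple lookup. The animation names here are illustrative, not the actual weatherAnimations.js implementation:

```javascript
// Sketch of the fallback logic: icon names come from the Dark Sky API.
var knownIcons = [
  'clear-day', 'clear-night', 'rain', 'snow', 'sleet',
  'wind', 'fog', 'cloudy', 'partly-cloudy-day', 'partly-cloudy-night'
];
var hazardIcons = ['hail', 'thunderstorm', 'tornado'];

function pickAnimation(icon) {
  if (hazardIcons.indexOf(icon) !== -1) {
    return 'solid-red';    // hazard: LEDs turn red
  }
  if (knownIcons.indexOf(icon) !== -1) {
    return icon;           // a dedicated LED sequence exists
  }
  return 'solid-yellow';   // no sequence created: LEDs turn yellow
}

console.log(pickAnimation('rain'));    // → rain
console.log(pickAnimation('tornado')); // → solid-red
```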


Step 7: Obtaining Forecast Data

Using the forecast NPM module this function in the app.js file retrieves and stores relevant weather information received from Dark Sky. Use the following code:


//Obtaining Forecast data
function determineForecast(lat, lon){
    // Retrieve weather information
    forecast.get([lat, lon], true, function(error, weather) {
        //stop if there's an error
        if(error){
            return console.log(error+'\n\x1b[31mThere has been an issue retrieving the weather\nMake sure you set your API KEY \x1b[0m ');
        }
        //pass weather into the animation selector (property name assumed from the Dark Sky response)
        setWeatherAnim(weather.currently.icon);
        //loop every X milliseconds (3 minutes)
        setTimeout(function(){
            determineForecast(lat, lon);
        }, 180000);
    });
}

The weather is updated every 3 minutes.


Step 8: Action Zone

This last function calls all the previous functions and starts the app with the following code:


//Action Zone
//Auto Obtain Location
getLocation(function(){
    //Start Forecast requests (latitude property name assumed to mirror location.lon)
    determineForecast(location.lat, location.lon); //input your coordinates for better accuracy ex. 25.7631,-80.1911
});


If you experience an inaccurate forecast, feel free to hardcode your coordinates in place of the latitude and longitude variables. This inaccuracy is due to the roughly 2-mile error margin of locating you by IP address.


All code for the app can be found on GitHub here:

Dataplicity has released a new feature, "Custom Actions", that might be useful for projects involving remote control.





MathWorks recently ran a mobile devices challenge where users were asked to submit a project in which they programmed their Android or iOS devices using MATLAB or Simulink. There were over 15 submissions that competed for the grand prize of 1000 USD.


The third place winning team built a low cost alternative to expensive GPS systems, click here to read more about this project and learn more about the other two winners. The link contains video references to their projects as well.


MATRIX Creator Amazon Alexa

This guide provides step-by-step instructions for setting up AVS on a Raspberry Pi with a MATRIX Creator. It demonstrates how to access and test AVS using our Java sample app (running on a Raspberry Pi), a Node.js server, and a third-party wake word engine using the MATRIX mic array. You will use the Node.js server to obtain a Login with Amazon (LWA) authorization code by visiting a website using your Raspberry Pi's web browser.

Required hardware

Before you get started, let's review what you'll need.

  • Raspberry Pi 3 (Recommended) or Pi 2 Model B (Supported) - Buy on Element14 - Pi 3 or Pi 2.
  • MATRIX Creator - The Raspberry Pi does not have a built-in microphone, the MATRIX Creator has an 8 mic array perfect for Alexa - Buy MATRIX Creator on Element14.
  • Micro-USB power supply for Raspberry Pi - 2.5A 5V power supply recommended
  • Micro SD Card (Minimum 8 GB) - You need an operating system to get started. NOOBS (New Out of the Box Software) is an easy-to-use operating system install manager for Raspberry Pi. The simplest way to get NOOBS is to buy an SD card with NOOBS pre-installed - Raspberry Pi 16GB Preloaded (NOOBS) Micro SD Card. Alternatively, you can download and install it on your SD card.
  • External Speaker with 3.5mm audio cable - Buy on Amazon
  • A USB Keyboard & Mouse, and an external HDMI Monitor - we also recommend having a USB keyboard and mouse as well as an HDMI monitor handy if you're unable to remote(SSH) into your Pi.
  • Internet connection (Ethernet or WiFi)
  • (Optional) WiFi Wireless Adapter for Pi 2 (Buy on Element14). Note: Pi 3 has built-in WiFi.

For extra credit, enable remote (SSH) access to your device, eliminating the need for a monitor, keyboard, and mouse, and learn how to tail logs for troubleshooting.

Let's get started

The original Alexa on a Pi project required manual download of libraries/dependencies and updating configuration files, which is prone to human error. To make the process faster and easier, we've included an install script with the project that will take care of all the heavy lifting. Not only does this reduce setup time to less than an hour on a Raspberry Pi 3, it only requires developers to adjust three variables in a single install script.

Step 1: Setting up your Pi

Configure your Raspberry Pi as described in the original Alexa documentation by completing steps 1 through 6 of the Raspberry Pi Alexa Documentation.

Step 2: Override ALSA configuration

MATRIX Creator has 8 physical microphone channels and an additional virtual beamformed channel that combines the physical ones. Utilize a microphone channel by placing the following in /home/pi/.asoundrc .

# the pcm.!default wrapper and the "/dev/null" target below are restored
# from the standard MATRIX Creator ALSA configuration
pcm.!default {
  type asym
  playback.pcm {
    type hw
    card 0
    device 0
  }
  capture.pcm {
    type file
    file "/dev/null"
    infile "/tmp/matrix_micarray_channel_0"
    format "raw"
    slave {
        pcm sc
    }
  }
}

Step 3: Install MATRIX software and reboot

echo "deb ./" | sudo tee --append /etc/apt/sources.list;
sudo apt-get update;
sudo apt-get upgrade;
sudo apt-get install libzmq3-dev xc3sprog matrix-creator-openocd wiringpi cmake g++ git;
sudo apt-get install matrix-creator-init matrix-creator-malos
sudo reboot

Step 4: Run your web service, sample app and wake word engine

Return to the Raspberry Pi Alexa Documentation and execute Step 7, but in the last terminal select the Sensory wake word engine with:

cd ~/Desktop/alexa-avs-sample-app/samples
cd wakeWordAgent/src && ./wakeWordAgent -e sensory

Step 5: Talk to Alexa

You can now talk to Alexa by simply using the wake word "Alexa". Try the following:

Say "Alexa", then wait for the beep. Now say "what's the time?"

Say "Alexa", then wait for the beep. Now say "what's the weather in Seattle?"

If you prefer, you can also click on the "Listen" button, instead of using the wake word. Click the "Listen" button and wait for the audio cue before beginning to speak. It may take a second or two before you hear the audio cue.

Music has always been driven forward in part by the technology used to make it. The piano combined the best features of the harpsichord and clavichord to help concert musicians; the electric guitar made performing and recording different forms of blues, jazz, and rock music possible; and electronic drum machines both facilitated songwriting and spawned entire genres of music in themselves. Code has become a part of so many different ways of making music today: digital audio workstation (DAW) software records and sequences it, digital instruments perform it, and digital consoles at live music venues process and enhance it for your enjoyment. But using Sonic Pi you actually perform the music by writing code, and Sebastien Rannou used this technique to cover one of his favorite songs, "Aerodynamic," by electronic music legends Daft Punk.


Q: To start off, for someone like me who knows little to nothing about code in general, what exactly is happening in this video!? I’ve watched it several times in full, and I’m still not sure!


Sebastien: It's a video where a song by Daft Punk is played from code being edited on the fly. This happens in a piece of software called Sonic Pi, which is a bit like a text editor; you write code in the middle of the screen and it plays music according to the recipe you provided. Sometimes you can see the screen blink in pink; this is when the code is evaluated and Sonic Pi picks up the modifications. A bit after that, you'll hear something change in the music. It's a bit like writing a recipe with a pencil and at the same time instantly getting the result in your food.



Q: Among the most famous features of Daft Punk’s music is the extensive use of sampling, i.e. using existing recordings that are re-purposed to create new compositions. In covering a song that is sample based, as is the case with "Aerodynamic" - which is based on a Sister Sledge track - how did you go about doing a cover?


S: This is one of my favorite songs, but the choice of doing this cover was more motivated by the different technical aspects it offers. My initial goal was to write an article about Sonic Pi, so I wanted a song where different features of it could be shown. "Aerodynamic" was good for this purpose, as it's made of distinct parts using different techniques: samples, instruments, audio effects, etc. Recreating the sampled part was especially interesting, so I had one of those 'a-ha' moments when I got the sequence right, and it surprised me.


Q: How did you come to use Sonic Pi? Do you feel it has any particular strengths and weaknesses in what it does?

sonic pi logo.png


S: I really like the idea of generating sound from code; I think it makes a lot of sense, as there are many patterns in music which can be expressed in a logical way.


I started playing around with Extempore and Overtone, which are both environments to play music from code. The initial learning curve was harder than I expected, as they implied learning a new language (Extempore comes with its own Scheme and DSL languages, and Overtone uses Clojure). So the initial time spent there was more about learning a new language and environment, so it removes some part of the fun you can have (not the technical fun part, but the musical one). On the other hand, Sonic Pi is really easy to start with: one of its main goals is to offer a platform to teach people how to code, and I think Sam Aaron (the creator of Sonic Pi) did a very good job on this. What's surprising is that, even though it's initially made to teach you how to code, you don't feel limited and can go around and do most of the crazy stuff you need to express musically.


One thing which is a bit hard to get right at the beginning is that live coding environments aren't live in the same way an instrument is: you don't get instant feedback on your live modifications if you tweak a parameter within Sonic Pi, as those are usually caught up to on the next musical measure. So you have to think of what's going to happen in the next bar or two, and try to imagine how it's going to sound. This takes some practice.


sonic pi additive_synthesis.png


Q: There’s quite a bit of discussion about how Daft Punk recorded the “guitar solo” in this track; how did you go about covering it?


S: I don't know much about the theories of how they did the guitar solo part, which I naïvely thought they did digitally. I did a spectral analysis of the track, and isolated each individual note to get their pitch and an approximation of their envelope characteristics (the attack, decay, sustain, and release, essentially how the sound develops over time). Then it was just a matter of using a Sonic Pi instrument that sounded a bit like a guitar, and telling it to play them. I then wrapped it in a reverb and a bitcrusher effect (which downgrades the audio's bit rate and / or sampling rate) to make it sound a bit more metallic. Because the notes are so fast during this solo, it sounds kind of good as is (unlike the sound of the bells at the beginning, more on this later!).
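The envelope characteristics he mentions (attack, decay, sustain, release) can be modelled as a simple piecewise amplitude function. This is a minimal sketch with illustrative parameters, not his actual analysis data:

```javascript
// Minimal ADSR envelope: amplitude (0..1) at time t seconds after note-on.
// a/d/r are durations in seconds, s is the sustain level (0..1),
// hold is when release begins; assumes hold >= a + d.
function adsr(t, a, d, s, hold, r) {
  if (t < 0) return 0;
  if (t < a) return t / a;                           // attack: ramp up to 1
  if (t < a + d) return 1 - (1 - s) * ((t - a) / d); // decay: fall to sustain
  if (t < hold) return s;                            // sustain: hold the level
  var tr = t - hold;                                 // release: fall to 0
  return tr < r ? s * (1 - tr / r) : 0;
}
```

Sampling this function per audio frame and multiplying it into an oscillator's output reproduces how a note's loudness develops over time.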


Q: As you were working on your cover, did you run into any notable technical problems, and how did you solve them?:


S: Yes! I spent a lot of time trying to get the bells sound right, but failed. Usually when an instrument plays a note, it has a timbre: this is a sort of signature which can be more or less explained, for instance a violin has a very complex timbre, whereas a wheel organ is way more simple. This complexity is highlighted when you look at audio frequencies when such an instrument plays a note: there is usually one frequency that outweighs others (the frequency of the pitch or the fundamental), and a myriad of others, which correspond to the timbre.


The timbre of the bells at the beginning of "Aerodynamic" is very complex, and it evolves in a non-trivial way. I've tried different approaches to reproducing it, including doing Fourier transforms to extract bands of main frequencies at play at different intervals and converting these to Sonic Pi code (more about this here). Sonic Pi comes with a very simple sine instrument, which plays only one frequency, so the idea was to call this instrument several times using different frequencies all together. I kind of got something that sounded like a bell, but it was far from sounding right. I ended up using the bell instrument that also comes with Sonic Pi, playing it at different octaves at the same time, and wrapping these in a reverb effect. That's kind of a poor solution, but at least I had fun in this adventure!
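The additive approach described here, summing several sine partials that each decay at their own rate, can be sketched like this. The partial frequencies, amplitudes, and decay rates are illustrative, not taken from his Fourier analysis:

```javascript
// Additive synthesis sketch: sum a few decaying sine partials into a buffer.
function synthBell(partials, seconds, sampleRate) {
  var n = Math.floor(seconds * sampleRate);
  var out = new Float32Array(n);
  partials.forEach(function (p) {
    for (var i = 0; i < n; i++) {
      var t = i / sampleRate;
      // each partial: a sine at its own frequency, exponentially decaying
      out[i] += p.amp * Math.exp(-p.decay * t) * Math.sin(2 * Math.PI * p.freq * t);
    }
  });
  return out;
}

// Illustrative bell-ish partials: inharmonic ratios, faster decay up high.
var bell = synthBell([
  { freq: 440,  amp: 1.0, decay: 1.5 },
  { freq: 1174, amp: 0.6, decay: 2.5 },
  { freq: 1760, amp: 0.4, decay: 4.0 }
], 2, 8000);
```

Bells are hard to fake precisely because their partials are inharmonic and evolve independently; a sketch like this gets "bell-like", but matching a specific recording takes the kind of spectral analysis he describes.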


Q: Have you used Sonic Pi to create original music? If so, how did you feel about that process? If not, how do you imagine it would be?


S: Yes, I have, using different approaches. For example, I tried using only Sonic Pi, which ended up sounding a bit experimental, and then by composing in a DAW software (Digital Audio Workstation, eg Pro Tools) and then sampling that so it can be easily imported into Sonic Pi. With this approach I can then use Sonic Pi as a sequencer and wrap the samples in effects. I did another cover using that method, this time of a Yann Tiersen song, and also a few songs with my band, Camembert Au Lait Crew (SoundCloud). The code can all be found here on github.




Q: Do you have any plans for future music projects using Sonic Pi?


S: There are recent changes in Sonic Pi version 3 which I'm really excited about, especially the support of MIDI, so you can now control external synths with code from Sonic Pi while keeping the ability to turn knobs on your synth. I haven't tried this yet, but it's definitely what I want to do next. Sam Aaron did a live coding session recently showing this and I find it amazing:

Music has always been driven forward in part by the technology used to make it. The piano combined the best features of the harpsichord and clavichord to help concert musicians; the electric guitar made performing and recording different forms of blues, jazz, and rock music possible; and electronic drum machines both facilitated songwriting and spawned entire genres of music in themselves. The musical collective Sonic Robots were inspired by one of the most famous electronic instruments of all time, the Roland TR-808 drum machine, and created a live musical installation where physical instruments recreate the purely synthesized sounds of the legendary 808. We asked their founder some questions about the MR-808 interactive drum robot.




Q: What was the origin of the MR-808 project? When I first watched the video of it at the Krake Festival I couldn’t stop smiling; do you recall any particularly memorable reactions that people have had to it?


Moritz Simon Geist, founder of the Sonic Robots collective: I started out as a young hacker and tinkerer when I was 10, taking apart radios and electronic devices from my parents. I come from a music-centered family, having been taught piano, clarinet, bass, and guitar. At some point I combined these two things - music and hacking. In 2010 I thought I should sum up all the experiments of my last few years in one piece, and came up with the robotic 808. In classic fashion, I got the idea at night in the bar, over a beer. Once I got the idea it was such an obvious thing - to do electronic music with robots - that I feared that somebody else would do it before me during the two and a half years it took to build the MR-808. Of course, that never happened.


And the first question that people ask is: “Craaazy! How long did it take to build it?”


Q: The Roland TR-808 is famous for many reasons, but maybe its best known feature is its synthesized bass drum sound. How did you go about recreating this legendary sound, which has practically become the basis for some electronic music styles?


M: Yes, the 808 is famous for its bass drum, and the clap, maybe. In the beginning of the build, I did nearly a year of experiments; initially I wanted to take a “real” 18-inch bass drum from a drum set, but that doesn't sound at all like the 808's bass drum. The electronically-generated 808 bass drum is basically a sine wave with an attack and release curve. So I searched for sounds that come close to sine waves in real life, and ended up using a very short bass drum string. For my latest robots, I optimize that and use metallic tongs, similar to a kalimba. They sound surprisingly similar to a real 808 bass drum, really boomy.


Since I've been making robotic music as my living for nearly three years now, my workshop and storage have filled up with experiments, parts, and unfinished robotic instruments. I still have enough plans for crazy instruments in my drawer to build music robots for the next few decades.



Q: How does one program the MR-808? Have you integrated it into any live performances?


M: Actually, it was meant to be an instrument in the first place! I did a lot of performances in 2012 and 2013, alone and with Mouse on Mars. At some point I had so many problems with my back - the installation weighs 350 kg - that I had to stop, and I started building lighter robots. The MR-808 is still on display as an interactive installation at festivals and galleries, but not for shows anymore.


The MR-808 can be played with MIDI, and so actually by everything that spits out MIDI. For the interactive version we built a collaborative sequencer that outputs MIDI signal. The sequencer is a Super Collider Patch running on the Raspberry Pi. There is also a small web server providing a simple website with a step sequencer. There are two Nexus 2 tablets as the interface, which connect to the Raspberry Pi via Wi-Fi. They display the sequencer which finally controls the robot. We also blogged about it here in detail, and it's freely available at github.
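The collaborative step sequencer he describes can be sketched as a step grid that emits MIDI-style note events on each tick. The pattern and note numbers here are illustrative, not the actual Super Collider patch:

```javascript
// 16-step sequencer sketch: each row is an instrument (a MIDI note number),
// each column a step; tick() returns the note-on events for the current step.
function makeSequencer(pattern, notes) {
  var step = 0;
  return {
    tick: function () {
      var events = [];
      pattern.forEach(function (row, i) {
        if (row[step] === 1) {
          events.push({ note: notes[i], velocity: 127 });
        }
      });
      step = (step + 1) % pattern[0].length; // wrap around the bar
      return events;
    }
  };
}

// Bass drum on every 4th step, snare on steps 5 and 13 (808-style pattern).
var seq = makeSequencer([
  [1,0,0,0, 1,0,0,0, 1,0,0,0, 1,0,0,0], // bass drum (MIDI note 36)
  [0,0,0,0, 1,0,0,0, 0,0,0,0, 1,0,0,0]  // snare (MIDI note 38)
], [36, 38]);
```

Calling tick() once per step interval and sending the resulting events as MIDI is enough to drive hardware like the MR-808's solenoids.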


Q: Why did you choose the Raspberry Pi to be part of this project? What advantages does it offer?


M: As everyone knows, the Raspberry Pi is the platform when it comes to lightweight prototype installations. As I was looking to reduce the weight of the overall installation, I was also not so keen on taking a full-blown laptop with me. Additionally, the data processing - providing a simple web server and running a Super Collider patch - are perfect for the Raspberry Pi. We are currently using a Pi 3, with a small TFT and customized restart and power off buttons, connected to some IO pins. It's a workhorse.




Q: As you were putting together the MR-808, did you run into any notable technical problems, and how did you solve them?


M: So many, I couldn't name them all! One funny thing: when we were building the 16 big push buttons for the bottom of the installation, we had to find a 1:12 model of the original buttons, which of course doesn't exist.


The 3D printing which we use now didn’t exist back then, so we ended up replicating the buttons with a pizza oven, a vacuum cleaner, and a self-made mold. The process is called “thermoforming,” and we did it hacker-style with a zero budget.


IT-wise, one big issue was the synchronization of the web interface with the MIDI sequencer. On the sequencer where you can program the 808 there is a light which constantly cycles through the rhythm, indicating at which step you are. You want the feedback light of the sequencer to both be in time with the actual rhythm that is played, but you also don't want it to be interrupted. As everything is running on Wi-Fi and websockets, it was a little tricky to synchronize everything to run smoothly. My programmer Karsten did a lot of the work there.
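A common way to keep a remote indicator light in time despite Wi-Fi latency is to derive the current step locally from a shared start timestamp and the tempo, rather than waiting for a per-step message. This is a sketch of that general idea, not necessarily how Karsten solved it:

```javascript
// Derive the current step position from a shared start timestamp and BPM,
// so each client computes the step locally instead of waiting on the network.
function currentStep(startMs, nowMs, bpm, stepsPerBeat, totalSteps) {
  var msPerStep = 60000 / bpm / stepsPerBeat; // e.g. 125 ms at 120 BPM, 16ths
  var elapsed = nowMs - startMs;
  return Math.floor(elapsed / msPerStep) % totalSteps;
}
```

As long as every tablet knows the shared start time, each one computes the same step index for the running light, and a dropped websocket message only delays pattern edits, not the beat indicator.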


Q: I make electronic music myself, and in that world we often talk about trying to introduce the human element into compositions that one could otherwise say are very machine-like. Beyond the fact that it exists in the physical world, in what other ways does the MR-808 feel like a living instrument to you, perhaps more so than an actual TR-808 unit?


M: The most obvious thing for self-built robots that resembles human-like behavior is their fragility; they break all the time! Industrial robots might be very powerful and rigid, but with a limited budget you always take the cheapest route and recycle a lot of parts. For the first shows of my Glitch Robots installation, I took a 3D printer on tour so I could re-print broken parts. Apart from being useful, it looked very cool to have one on stage!


When an artist leaves the pre-made route of presets and starts digging in the mud - be it with mechanics, circuit bending, self-made electronics, or field recording - one always brings error into the art. This is a good thing! It's like playing guitar and by chance hitting the wrong chord: it might sound unexpected, but somehow cool, and can start being the trademark part of the whole riff. When one experiments, a lot of these random moments appear. 90% of it might be useless, but there is the 10% which is helpful and you can’t come up with through planning. I like this introduced randomness of music robots a lot.



Q: Do you have any plans for future music tech projects? An update to the MR-808, perhaps, or another new device?


M: The 808 was cool at the time that I built it, and for me it just "had to be done." But at the same time, it refers back to an historical instrument, and is very much bound to this reference. My opinion is that art should also be futuristic, and should sometimes fail, but it should point to an unknown future. So I decided not to build the Robotic 909, for example (editor's note: the TR-909 was a subsequent drum machine from Roland, a famous instrument in its own right).


With my last instrument, “Tripods One,” I tried to think of an instrument which is futuristic and that also plays with human-machine interaction. I also took a lot more design ideas into account. It consists of 5 pyramids housing small mechanical robots (of course!). Sound-wise, I did not refer to the classic "bassdrum / snare / hihat" sounds; instead, I searched for sounds which I can use well in the context of electronic music. You can see that project here:


Tripods One – Sonic Robots


See more Sonic Robots projects on their site, and check out more Raspberry Pi projects on element14 here!

I've moved into a new house and came across a Sense HAT for the Raspberry Pi, which reminded me of a little project I was working on. It's basically an HTML-based colour chooser which updates the selected colour on the Sense HAT, so I thought I'd share the scripts in case anybody finds them helpful or useful.




To start with, I was running lighttpd on the Raspberry Pi, which is a lightweight web server. It is very simple to use and just requires a small modification to its config file to allow it to run Python scripts.


Below are the HTML, JavaScript, CSS and Python.




<html>
<head>
    <link rel="stylesheet" type="text/css" media="all" href="shstyles.css"/>
    <script src="shcommon.js" type="text/javascript"></script>
</head>
<body>
    <div id="colordisplay"></div>
    <div id="colorcontrols">
        <p class="colorcontrollabel">R</p>
        <input id="redslider" class="slider" type="range" min="0" max="255" value="255" onchange="slideRed(this.value)" />
        <p id="redvaluelabel" class="colorvaluelabel">255</p>
        <p class="colorcontrollabel">G</p>
        <input id="greenslider" class="slider" type="range" min="0" max="255" value="90" onchange="slideGreen(this.value)" />
        <p id="greenvaluelabel" class="colorvaluelabel">90</p>
        <p class="colorcontrollabel">B</p>
        <input id="blueslider" class="slider" type="range" min="0" max="255" value="90" onchange="slideBlue(this.value)" />
        <p id="bluevaluelabel" class="colorvaluelabel">90</p>
        <input type="button" value="update" onClick="setSenseHatColorDisplay()">
        <p id="outputarea">output area</p>
    </div>
</body>
</html>



var colorred = 255;
var colorblue = 90;
var colorgreen = 90;

function slideRed(newvalue){
    colorred = newvalue;
}

function slideGreen(newvalue){
    colorgreen = newvalue;
}

function slideBlue(newvalue){
    colorblue = newvalue;
}

function setSenseHatColorDisplay(){
    var colorstring = colorred+"|"+colorgreen+"|"+colorblue;
    var req = new XMLHttpRequest();
    req.onreadystatechange = function() {
        if (this.readyState == 4 && this.status == 200) {
            document.getElementById("outputarea").innerHTML = this.responseText;
        }
    };
    // send the colour string to the Python CGI script
    // (the script name from the original post is omitted; "color" field assumed)
    req.open("GET", "cgi-bin/SCRIPT_NAME.py?color="+colorstring, true);
    req.send();
}



html, body{
    min-height: 100%;
    height: 100%;
    max-width: 100%;
}
/* the selectors below are assumed from the ids/classes used in the HTML */
#colordisplay{
    float: left;
    width: 120px;
    height: 120px;
    border: 1px solid black;
    background-color: rgb(255,90,90);
}
.slider{
    display: inline;
    width: 100px;
}
.colorcontrollabel{
    display: inline;
}
.colorvaluelabel{
    display: inline;
}
#colorcontrols{
    float: left;
    border: 1px solid black;
    width: 200px;
}

#! /usr/bin/python

import sys
import os
import cgi
from sense_hat import SenseHat

# read the colour string from the request (the original assignment was lost;
# the "color" field name is assumed)
form = cgi.FieldStorage()
colorstring = form.getvalue("color", "255|90|90")
#colorstring = "255|90|90"
colortup = colorstring.split("|")
redvalue = colortup[0]
greenvalue = colortup[1]
bluevalue = colortup[2]
print "Content-Type: text/html\n\n"

# the second script's filename was omitted in the original post
p = os.popen("sudo python /home/pi/www/cgi-bin/ "+redvalue+" "+greenvalue+" "+bluevalue)

print '<html><head><meta content="text/html; charset=UTF-8" /></head><body>'
print "</body></html>"

import sys
import os
from sense_hat import SenseHat
sense = SenseHat()

#colorstring = sys.argv[1]

redvalue = int(sys.argv[1])
greenvalue = int(sys.argv[2])
bluevalue = int(sys.argv[3])

colortup = (redvalue,greenvalue,bluevalue)

# fill the 8x8 LED matrix with the chosen colour
# (the canvas contents were truncated in the original post; a solid fill is assumed)
canvas = [colortup] * 64
sense.set_pixels(canvas)



It should be possible to merge the two Python scripts, but I had some trouble returning the HTML headers to the browser and updating the Sense HAT display from a single script. So I used one script to get the data, process it, run a second Python script and return the headers, allowing the second script to update the Sense HAT.

Music has always been driven forward in part by the technology used to make it. The piano combined the best features of the harpsichord and clavichord to help concert musicians; the electric guitar made performing and recording different forms of blues, jazz, and rock music possible; and electronic drum machines both facilitated songwriting and spawned entire genres of music in themselves. Toby Hendricks, an electronic musician who records and performs as otem rellik, became dissatisfied with the iPad he used in live performance, and decided to build his own device using Raspberry Pi.






Q: What was the origin of the Looper project? You mention in the video that it replaced your iPad for live performances, were there deficiencies in the iPad, did you want features it didn’t offer, and so on?


Toby: The origin dates back about three years, when I first started learning Pure Data. At that time I was using an iPad for live shows, and it seemed like nearly every year when iOS got updated some of the apps I was using would break. This trend has gotten better, but I still find it a bit unnerving to use iOS live. I sort of got sick of not having a reliable setup, so I started creating Pure Data patches for an app called MobMuPlat. I fell in love with Pd (Pure Data), and eventually replaced all the apps I was using with one single Pd patch loaded into MobMuPlat. That looping/drum patch became pretty robust over the course of about three years, and then I decided to attempt to turn it into a complete standalone hardware unit.


Q: I make electronic music myself, and I always find when I get a new piece of hardware or software that there are features I didn’t expect to be using or that I didn’t know were there that I turn out to love. Despite the fact that you designed the Pi Looper, have you found yourself using it in ways you didn’t expect?




Toby: Definitely. I’m always finding ways to improve my live performances with it, mostly with the effects. I’ve become pretty proficient in playing the effects section almost like its own instrument; the delay feedback can be infinite, so creating a short delay and then playing with the delay time mixed with the other effects can really create some cool sounds and textures. Also, if you already have a loop going, the delay time is synced with the tempo of the song, so you can get some really cool rhythmic stuff going on.


Q: Why did you choose the Raspberry Pi for this project? What advantages does it offer?


Toby: I chose Raspberry Pi because I knew it could run Pure Data; I really had no other knowledge of Raspberry Pi. The form factor also works great, because I wanted to have all the components inside the box. This was my first Pi project.


Q: As you were putting together the Looper, did you run into any notable technical problems, and how did you solve them?


Toby: I had tons! It took me about three months to figure everything out. One of the main milestones was getting Pd to talk to all the controls, which are all connected to a Teensy 3.6. I had absolutely no idea how I was going to make that work when I started. I eventually learned about the comport object, an external that allows Pd to send and receive serial data. Originally, I was planning on just sending MIDI back and forth between the Pi and the Teensy, but then realized I also needed to transmit song names back and forth. Learning how to package serial data ended up being many hours of frustration, but I finally got it working with some code I found on the Arduino forum. I also had to make Pd create and delete directories to store the songs; the shell Pd external eventually saved the day on that one. There were way more issues I had to solve, but those were some of the ones on which I remember almost giving up the whole project.
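Toby's exact Pd/Teensy protocol isn't published, but the "packaging serial data" problem he describes is a classic one: variable-length strings (like song names) need framing so the receiver knows where each message starts and ends. As a hedged illustration only, here is one common scheme, sketched in Java: a start byte, a length byte, the payload, and an XOR checksum. The class and constant names are invented for this example.

```java
import java.io.ByteArrayOutputStream;
import java.nio.charset.StandardCharsets;

public class SerialFrame {

    static final byte START = (byte) 0xFE;

    // Frame layout: START, payload length, payload bytes, XOR checksum of payload.
    public static byte[] encode(String songName) {
        byte[] payload = songName.getBytes(StandardCharsets.US_ASCII);
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        out.write(START);
        out.write(payload.length);
        byte checksum = 0;
        for (byte b : payload) {
            out.write(b);
            checksum ^= b;
        }
        out.write(checksum);
        return out.toByteArray();
    }

    // Validate the frame and recover the original string.
    public static String decode(byte[] frame) {
        if (frame[0] != START) throw new IllegalArgumentException("bad start byte");
        int len = frame[1] & 0xFF;
        byte checksum = 0;
        for (int i = 0; i < len; i++) checksum ^= frame[2 + i];
        if (checksum != frame[2 + len]) throw new IllegalStateException("checksum mismatch");
        return new String(frame, 2, len, StandardCharsets.US_ASCII);
    }
}
```

On the microcontroller side, the receiver simply waits for the start byte, reads the length, then reads that many payload bytes plus the checksum.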


Q: In the electronic music world there seems to be a movement of people wanting to avoid staring at their computer screens while they write, and devices like Native Instruments’ Maschine, Ableton’s Push, and new models of the classic AKAI MPC are trying to give electronic musicians the tools to write without needing their mouse and keyboard to manipulate their DAWs. Do you feel that your Looper fits in that tradition, or is it more of a device for live performance? Perhaps it’s useful in both areas?



Toby: I think it fits in both areas. It was definitely built for my live shows, but I often jam out on the couch with it. All the internal instruments were actually an afterthought; originally it was just going to have drum samples. I have yet to fully create a song on it that ended up being something I liked enough to import into my DAW (Digital Audio Workstation) to work on further, but I’m guessing that will eventually happen. I really like when an electronic band plays a show with no computer, or at least a controller that allows them to not even look at the computer. Laptops on stage are fine, but sometimes I feel like the performer could just be checking their email up there and I wouldn’t know the difference. Seeing someone on a piece of hardware really cranking on knobs and pounding buttons (even if it’s just a controller) is so much more interesting to watch.


Q: I very much agree on that! So do you have any plans for future music tech projects? An update to the Looper, perhaps, or a device that fills a different need you have in your writing or performing?


Toby: I’m pretty much always working on a new project. I’ve been building projects more than making music lately. I’ve already built a new MIDI controller that I’m going to shoot a video for eventually. It’s a drum pad / sequencer thing (kind of like this), but it uses force sensitive resistors for the note pads. I actually learned how to cast my own urethane for the pads, which was probably one of the most unnecessary steps I’ve ever taken for a project. I also just purchased a CNC machine and am currently working on a new Raspberry Pi project that will be very similar to this, but the sound engine will be in Pure Data and the touch screens will be much larger. As for the Looper, I was just updating the code yesterday to add a pickup function to the volume knobs for saved songs. The Looper is eventually going to be completely rebuilt with force sensitive resistors for the pads, but that may be some time from now.




See more of Toby's projects on YouTube, and check out more Raspberry Pi projects on element14 here!

This post features videos that I published to my YouTube channel in the series "IoT with Raspberry Pi". The series contains four videos that show how to use the Raspberry Pi as an IoT device, starting with interfacing a sensor and ending with publishing the sensor data to a cloud server using protocols like REST and MQTT. For the entire project I have used Java, along with various libraries for specific tasks, such as Pi4J, Unirest and Eclipse Paho (links provided below). If you have watched any of the videos you might know that the series is divided into four parts, namely:

  1. DS18B20 Sensor interfacing with Raspberry Pi.
  2. Publishing data to ThingSpeak using REST.
  3. Publishing data to ThingSpeak using MQTT.
  4. Completing the project.


So let's check out how to do so.


You can subscribe on YouTube by clicking this link to show your support and stay updated with the latest videos on the channel.



1. DS18B20 Sensor interfacing with Raspberry Pi.

This video is the first part, where we will see how to interface the DS18B20 one-wire temperature sensor with the Raspberry Pi using Java and the Pi4J library.
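The video uses Pi4J, but on Raspberry Pi OS the DS18B20 is also exposed through the kernel's 1-Wire sysfs interface, so a plain-Java sketch can read it with no extra libraries. This is a minimal sketch, not the video's code: the sensor ID in the path is a made-up placeholder (real IDs look like `28-0316a4da59ff` and appear under `/sys/bus/w1/devices/` once the `w1-gpio` overlay is enabled).

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;

public class Ds18b20Reader {

    // Parse the two-line w1_slave file format, e.g.:
    //   6e 01 4b 46 7f ff 02 10 57 : crc=57 YES
    //   6e 01 4b 46 7f ff 02 10 57 t=22875
    // Returns degrees Celsius, or throws if the CRC check failed.
    public static double parseTemperature(String w1SlaveContents) {
        String[] lines = w1SlaveContents.trim().split("\n");
        if (!lines[0].trim().endsWith("YES")) {
            throw new IllegalStateException("CRC check failed, re-read the sensor");
        }
        int idx = lines[1].lastIndexOf("t=");
        // The kernel reports millidegrees Celsius after "t="
        return Integer.parseInt(lines[1].substring(idx + 2).trim()) / 1000.0;
    }

    public static void main(String[] args) throws IOException {
        // Placeholder sensor ID; substitute the ID of your own device.
        String path = "/sys/bus/w1/devices/28-000005e2fdc3/w1_slave";
        String raw = new String(Files.readAllBytes(Paths.get(path)));
        System.out.println("Temperature: " + parseTemperature(raw) + " C");
    }
}
```

Pi4J does the same job with less manual parsing; this version just makes the underlying mechanism visible.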

2. Publishing data to ThingSpeak using REST.

This video is the 2nd in the series, where we will see how to publish sensor data to the cloud using a REST API. Here we are using ThingSpeak as the cloud service, and the HTTP calls for the REST API are made with the lightweight Unirest HTTP client library. In the next video, we will do the same using MQTT.
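The video uses Unirest, but the underlying REST call is just an HTTP GET to ThingSpeak's update endpoint with your channel's write API key and a field value. To keep this sketch dependency-free it uses the JDK's `HttpURLConnection` instead of Unirest; `MY_WRITE_KEY` is a placeholder, not a real key.

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

public class ThingSpeakRest {

    // Build the ThingSpeak update URL; field1 carries the temperature reading.
    public static String buildUpdateUrl(String apiKey, double temperature) {
        return "https://api.thingspeak.com/update?api_key=" + apiKey
                + "&field1=" + temperature;
    }

    // Fire the GET request; ThingSpeak responds with the new entry ID (or 0 on failure).
    public static String publish(String apiKey, double temperature) throws Exception {
        HttpURLConnection conn = (HttpURLConnection)
                new URL(buildUpdateUrl(apiKey, temperature)).openConnection();
        conn.setRequestMethod("GET");
        try (BufferedReader in = new BufferedReader(
                new InputStreamReader(conn.getInputStream()))) {
            return in.readLine();
        }
    }

    public static void main(String[] args) throws Exception {
        // Needs a real write API key and network access to actually post.
        System.out.println("Entry ID: " + publish("MY_WRITE_KEY", 23.5));
    }
}
```

With Unirest the same call collapses to roughly one line, which is why the video uses it.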

3. Publishing data to ThingSpeak using MQTT.

This video is the 3rd in the series and shows how to publish sensor data to the cloud using MQTT, again with ThingSpeak as the cloud service. Publishing over MQTT is done with the lightweight Eclipse Paho library. MQTT is a simple publish/subscribe protocol that runs over TCP and is more power- and bandwidth-friendly than HTTP, so it fits perfectly for IoT applications. If you are interested in learning more about it, you can check the docs linked below.
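For ThingSpeak's classic MQTT API, the publish topic encodes the channel ID and write API key, and the payload is a URL-style field list. The sketch below builds those strings; the channel ID and key are placeholders, and the actual Eclipse Paho publish call is shown in comments rather than executed, since it needs the Paho jar on the classpath (note also that newer ThingSpeak MQTT uses per-device credentials, so check the current docs).

```java
public class ThingSpeakMqtt {

    // Classic ThingSpeak MQTT publish topic: channels/<channelId>/publish/<writeApiKey>.
    public static String buildTopic(long channelId, String writeApiKey) {
        return "channels/" + channelId + "/publish/" + writeApiKey;
    }

    // Payload is a URL-style field list, e.g. "field1=23.5".
    public static String buildPayload(double temperature) {
        return "field1=" + temperature;
    }

    public static void main(String[] args) {
        String topic = buildTopic(123456L, "MY_WRITE_KEY"); // placeholder channel/key
        String payload = buildPayload(23.5);

        // With Eclipse Paho on the classpath, publishing looks roughly like:
        //
        //   MqttClient client = new MqttClient("tcp://mqtt.thingspeak.com:1883",
        //                                      MqttClient.generateClientId());
        //   client.connect();
        //   client.publish(topic, payload.getBytes(), 0, false); // QoS 0, not retained
        //   client.disconnect();

        System.out.println(topic + " <- " + payload);
    }
}
```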

4. Completing the project.

If you have not watched the above videos, please check those first before checking out this one. This video is the final one in the series, where we complete the project by combining the code developed in the earlier videos. We will make the application so that we can decide which API to use to publish the data to ThingSpeak.
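The combined application's "decide which API to use" step can be as simple as a runtime switch between the two publishers. This is a hedged sketch of that idea, not the video's code; the class and enum names are invented for illustration.

```java
public class PublishSelector {

    enum Api { REST, MQTT }

    // Decide the transport from a command-line flag, defaulting to REST.
    public static Api parseMode(String[] args) {
        return (args.length > 0 && args[0].equalsIgnoreCase("mqtt")) ? Api.MQTT : Api.REST;
    }

    public static void main(String[] args) {
        double temperature = 23.5; // would come from the DS18B20 in the real project
        switch (parseMode(args)) {
            case REST:
                // call the Unirest-based publisher from part 2 here
                System.out.println("Publishing " + temperature + " via REST");
                break;
            case MQTT:
                // call the Paho-based publisher from part 3 here
                System.out.println("Publishing " + temperature + " via MQTT");
                break;
        }
    }
}
```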

Github Repo:

Download Pi4J Library:
Download Unirest Library:
Unirest Website:
Unirest Jar Download (With Dependencies):
Download Eclipse PAHO Library(With Dependencies):
Eclipse PAHO Website:


More on MQTT
Official Website:

Java Application on Pi Playlist:
Catch Me On:



Microsoft was able to squeeze their deep-learning algorithms onto an RPi 3 in order to bring intelligence to small devices.


Love it or fear it, AI is advancing, and it’s coming to small and portable electronic devices thanks to developments made by Microsoft. The software giant recently succeeded in loading its deep-learning algorithms onto a Raspberry Pi 3 SBC. The advancement will obviously be a boon for anything and everything IoT, which is on track to take the world by storm: speculation suggests there will be 46 billion connected devices by 2021, depending on whom you ask.


Regardless, Microsoft’s latest breakthrough will give engineers the opportunity to build intelligent medical implants, appliances, sensor systems and much more without the need for incredible computing horsepower. Most AI platforms today rely on the cloud for their heavy lifting, certainly so with platforms such as Amazon’s Alexa and Apple’s Siri, but Microsoft’s breakthrough could make that cloud dependence unnecessary.



Microsoft is developing AI platforms that will be squeezed into hardware no bigger than this chip. (Image credit Microsoft)


To further put Microsoft’s development into perspective: the team can take algorithms that normally run on 64- and 32-bit systems and drop the requirements down to a single bit in some cases. What’s astounding is how this development came about: all due to a flower garden. Ofer Dekel, Manager of Machine Learning and Optimization at Microsoft’s research lab in Redmond, Washington, needed a way to keep squirrels from eating his flower bulbs and birdseed, which led him to build a computer-vision platform on an inexpensive RPi 3 to alert him when there was an intrusion.


When the alert is triggered, the platform engages a sprinkler system to shoo away the culprits, an ingenious solution indeed. “Every hobbyist who owns a Raspberry Pi should be able to do that, today very few of them can,” stated Dekel. Yet the breakthrough will allow just that, and can even run on a tiny Cortex-M0 chip like the one pictured above.


To compress the deep-learning algorithms enough to fit on the RPi 3 using just a few bits, Ofer and his team employed a technique known as sparsification, which shaves off unneeded redundancies. Doing so allowed them to devise an image-detection system that processes 20 times faster on limited hardware without losing any accuracy. Still, the team hasn’t yet figured out how to compress an ultra-sophisticated AI or deep neural network enough to fit on limited, low-powered hardware. Regardless, this is an unprecedented first step, and we can certainly expect advancements that will get us there sometime in the not-too-distant future.
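Microsoft's actual pipeline combines sparsification with aggressive quantization, and its details aren't given here. Purely as a toy illustration of the core idea, magnitude pruning zeroes out the weights whose contribution is negligible, leaving a sparse model that needs far less storage and compute:

```java
public class SparsifyDemo {

    // Toy magnitude pruning: zero every weight whose absolute value falls
    // below the threshold. This is the simplest form of sparsification.
    public static double[] prune(double[] weights, double threshold) {
        double[] out = new double[weights.length];
        for (int i = 0; i < weights.length; i++) {
            out[i] = Math.abs(weights[i]) >= threshold ? weights[i] : 0.0;
        }
        return out;
    }

    // Fraction of weights that are exactly zero after pruning.
    public static double sparsity(double[] weights) {
        int zeros = 0;
        for (double w : weights) if (w == 0.0) zeros++;
        return (double) zeros / weights.length;
    }

    public static void main(String[] args) {
        double[] w = {0.91, -0.02, 0.004, -0.75, 0.01};
        double[] pruned = prune(w, 0.05);
        System.out.println("Sparsity: " + sparsity(pruned));
    }
}
```

Real systems then store only the surviving weights (and often retrain to recover accuracy), which is where the large memory savings come from.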


I'm working on some Pi projects at the moment. Instead of IoT projects... maybe I should be looking into AI.


Have a story tip? Message me at: cabe(at)element14(dot)com
