
I have seen a lot of guides for configuring headless operation without an attached display, keyboard, and mouse - all of which seem to be out of date.  Maybe it's because Raspbian has evolved, or something else.  Anyway, this technote is based on Raspbian Stretch or Stretch-Lite circa September 2017.


I got this Raspberry Pi Zero "W" for USD 5.00 by walking into a local store in the Dallas/Fort Worth, Texas, USA area.  That price is for one; if you want more, the unit price goes up - sort of a volume "anti-discount".  They also have a USD 5.00 price on the "official" Pi Zero case.  Unfortunately, you have to walk in, as I was told.  There must be other stores around the 3rd rock that offer comparable discounts (UK? Hong Kong? Rotterdam?).


I'll assume that the reader has just created a MicroSD Raspbian installation in the usual manner on the workstation.  Make sure no writes are pending.  If the two Stretch partitions are not yet automounted, pull out the caddy and push it back in; that should trigger an automount of both MicroSD partitions.


On my Linux system, `sudo blkid` shows the following after creating the Raspbian boot MicroSD:


/dev/sdc1: LABEL="boot" UUID="E5B7-FEA1" TYPE="vfat" PARTUUID="020c3677-01"

/dev/sdc2: UUID="b4ea8e46-fe87-4ddd-9e94-506c37005ac5" TYPE="ext4" PARTUUID="020c3677-02"


with corresponding mounts


/media/USER-NAME/boot

/media/USER-NAME/b4ea8e46-fe87-4ddd-9e94-506c37005ac5

Enable SSH on the first Raspbian power-up


cd /media/USER-NAME/boot/

touch ./ssh


Assuming that you are using Wifi, enable the connection.  The following is an example on my Linux system for the USA using WPA-PSK security.


cd /media/USER-NAME/b4ea8e46-fe87-4ddd-9e94-506c37005ac5/etc/wpa_supplicant

sudo vi  wpa_supplicant.conf


Replace the existing contents of wpa_supplicant.conf with the following (note that the ssid and psk lines must sit inside a network={ ... } block):


ctrl_interface=DIR=/var/run/wpa_supplicant GROUP=netdev
update_config=1
country=US

network={
    ssid="Your network SSID"
    psk="Your WPA/WPA2 security key"
    key_mgmt=WPA-PSK
}




I highly recommend setting up a static IP address; this is very convenient for headless SSH operation.


Assuming that:


    1. You are using Wifi and your wireless device is named wlan0
    2. The desired IP address for the Raspberry Pi =
    3. The router IP address =
    4. The primary DNS server is also at


append to /etc/dhcpcd.conf the following:


interface wlan0

static ip_address=

static routers=

static domain_name_servers=


Adjust IP addresses as necessary for your environment.
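As a concrete sketch, assume a typical home network where the router and DNS server sit at 192.168.1.1 and the Pi is to get 192.168.1.50 (all of these addresses are made-up examples).  The block below writes a scratch file so you can preview the result; on the real MicroSD you would append the same lines to /etc/dhcpcd.conf on the rootfs partition:

```shell
# Hypothetical addresses: Pi at 192.168.1.50, router/DNS at 192.168.1.1.
# Note that dhcpcd wants the prefix length (/24) on the static address.
cat > dhcpcd.conf.example <<'EOF'
interface wlan0
static ip_address=192.168.1.50/24
static routers=192.168.1.1
static domain_name_servers=192.168.1.1
EOF
cat dhcpcd.conf.example
```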


Note that if you are using wired Ethernet instead of Wifi and its device name is eth0, then specify this interface instead:


interface eth0


Be sure to sync; sync; sync changes to the MicroSD before removing it from the USB port.


Insert the MicroSD into the Raspberry Pi and power up.


On your workstation (assuming it is Linux or Unix), I would enter the host name and static IP address for the Raspberry Pi in /etc/hosts.
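For example (the address and the host name "pizero" are made-up; substitute your own), the /etc/hosts entry is a single line.  The sketch below writes a scratch file rather than touching the real /etc/hosts:

```shell
# Hypothetical: the Pi has static address 192.168.1.50 and the name "pizero",
# so afterwards "ssh pi@pizero" works from the workstation.
echo '192.168.1.50   pizero' > hosts.example
cat hosts.example
```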


From your workstation, connect to the Pi




Change the root password to something that you'll remember, just in case


Configure the Pi

sudo raspi-config


  1. Change password for user pi
  2. Localisation - Language and Regional settings
  3. Timezone - Where are you?
  4. If you already have a Raspberry Pi with host name "raspberrypi", you will probably want to change this Pi Zero W's host name to something else.
  5. Reboot


From your workstation, connect to the Pi




Got an NFS file server that the user "pi" needs access to on a permanent basis?  Permanently mount "/mnt/bigtree.nfs" as follows:


1. I'll assume that your NFS server is set up and you know how to manage it.  Practical overviews of NFS set-up are easy to find online; be sure to read the comments too.  All of us have learned NFS at the "School of Hard Knocks", but it is worth it.


2. Set up a local mount location:


cd /mnt; sudo mkdir bigtree.nfs; sudo chown pi:pi bigtree.nfs


3. Append the following to /etc/fstab; adjust per your local requirements:


# Mount bigtree.nfs using NFS

# NFS-SERVER is the host name as specified in /etc/hosts (static IP address).

# This NFS server has "bigtree" rooted directly under mounted file system "/data4188".

NFS-SERVER:/data4188/bigtree   /mnt/bigtree.nfs   nfs   rw,bg,auto,noatime   0   0


4. Test (In general, always test immediately after making system changes!):


sudo mount -a

touch /mnt/bigtree.nfs/xx

rm /mnt/bigtree.nfs/xx
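For reference, the fstab line above has six whitespace-separated fields: remote source, local mount point, filesystem type, mount options, dump flag, and fsck pass.  The sketch below writes a scratch copy (so nothing touches the real /etc/fstab) and picks out the type and options fields with awk:

```shell
# Scratch copy of the NFS fstab line from step 3.
cat > fstab.nfs.example <<'EOF'
NFS-SERVER:/data4188/bigtree   /mnt/bigtree.nfs   nfs   rw,bg,auto,noatime   0   0
EOF
# Fields: 1=source 2=mountpoint 3=fstype 4=options 5=dump 6=pass
awk '{print "type=" $3, "options=" $4}' fstab.nfs.example
```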


Got a SAMBA file server that the user "pi" needs access to on a permanent basis?  Permanently mount "/mnt/bigtree.samba" as follows:


1. I'll assume that your SAMBA server is set up and you know how to manage it.


2. Set up a local mount location:


cd /mnt; sudo mkdir bigtree.samba; sudo chown pi:pi bigtree.samba


3. Append the following to /etc/fstab:


# Mount bigtree using SAMBA

# SAMBA-SERVER is the host name as specified in /etc/hosts (static IP address).

# This SAMBA server has a share named "bigtree".

//SAMBA-SERVER/bigtree /mnt/bigtree.samba cifs auto,owner,rw,credentials=/root/bigtree.credentials,uid=pi,gid=pi 0 0


4. Create /root/bigtree.credentials for the login to the SAMBA server (read and write access):
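The credentials file follows the standard mount.cifs format: one username= line and one password= line (plus an optional domain= line).  The user name and password below are placeholders, and the sketch writes a scratch file; on the Pi the real file is /root/bigtree.credentials, and it should be readable by root only (chmod 600):

```shell
# Placeholder SAMBA account; substitute your own values.
cat > bigtree.credentials.example <<'EOF'
username=YOUR-SAMBA-USER
password=YOUR-SAMBA-PASSWORD
EOF
chmod 600 bigtree.credentials.example   # keep the password private
cat bigtree.credentials.example
```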





5. Test mounting changes (In general, always test immediately after making system changes!):


sudo mount -a

touch /mnt/bigtree.samba/xx

rm /mnt/bigtree.samba/xx


Now is a good time to update your system


sudo apt -y update

sudo apt -y dist-upgrade

sudo apt -y autoclean

sudo apt -y autoremove

sudo reboot


That's it


Find any bugs?  Suggestions for improvement?

Richard Elkins

Dallas, Texas, USA, 3rd Rock, Sol

This is a light stand for the official Raspberry Pi 7" Touchscreen Display.  It is designed for quick prototyping: it is lightweight and takes little time to print.

  • 4 pcs M3-0.5x5mm or M3-0.5x6mm screws are needed.

At the beginning of the design I couldn't decide what was the best angle, so I ended up designing 3 stands with different angles (45, 60 and 70 degrees) - pictures attached.  The design is based on the official drawing dimensions for the Raspberry Pi 7 Inch Display.

{gallery} 3D-printed light stand for Raspberry Pi 7" Touchscreen

  • Front: 7" Touchscreen Display assembled to the light stand
  • Back: 7" Touchscreen Display assembled to the light stand and Raspberry Pi 3
  • Angle versions: 45, 60 and 70 degree stands, pictured individually and side-by-side
  • M3 screws: 4 pcs M3-0.5x5mm or M3-0.5x6mm screws are needed for assembly
  • Stand assembly: fastening the stand to the Raspberry Pi 7" Display


Print Settings

Support: No

Brim: No

Material: PLA or ABS will work fine.

Infill: 20% or more.

Notes: Whichever stand you choose needs to be printed twice, and one of the two copies must be mirrored (X or Y) in the slicer app.

STL files available on Thingiverse.

My complete GitHub project provides a Raspberry Pi Clock & Weather display (rpi_clock), based on the Quimat 3.5" TFT Touch Screen with a 320x480 resolution and its supplied software.  The primary Tk window displays the date, time, and current outside weather conditions according to the Richardson, Texas, USA station of the Weather Underground.  If you touch the screen, a secondary Tk window offers the user the choice to go back to the primary window, reboot, or shut down.

Go back, Reboot, or Shutdown?

You can see that I set up the primary window for an American-formatted date & time presentation.  Modifying this is very simple: edit the constants FORMAT_DATE and FORMAT_TIME in bin/.  You can easily modify colors, fonts, widths, button sizes, etc. as you wish, since they are also capitalized constants.  The weather data comes in a response from the Weather Underground using a JSON-oriented API for current weather conditions.  I chose to show temperature in both Celsius and Fahrenheit.  The conditions string ("partlycloudy" in the image above) was returned by the Weather Underground as part of its JSON response message, probably formatted for American English due to the station parameters (Richardson); I simply used it as is.
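The date/time constants are presumably strftime-style format strings, so you can preview candidate formats from a shell with GNU date before editing them.  The two formats below are illustrative guesses at an American-style presentation, not the project's actual values:

```shell
# Preview strftime-style codes with GNU date (example formats only);
# LC_ALL=C pins English month and AM/PM names.
LC_ALL=C date -d '2017-09-30 14:05' '+%m/%d/%Y'   # prints 09/30/2017
LC_ALL=C date -d '2017-09-30 14:05' '+%I:%M %p'   # prints 02:05 PM
```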

Had I purchased a larger screen (e.g. 7"), I could have employed graphic images and other data, which would have been more aesthetically pleasing.

Sorry but I did not show the usual unboxing of the display hardware etc. as I only thought about sharing this experience after construction finished (oops).

Note that the directions from Quimat (the TFT supplier) were quite sparse (being generous).  You can see my complete review at Amazon.  There are probably better hardware solutions, although this one worked fine for a persistent old hacker like me. (=:  In their defense, I'll say that the display hardware pushed onto the Pi's GPIO connectors without issues; no wiring required.  However, watch out for the supplied plastic case because it is very brittle - mine broke, so I slapped together a semi-protective solution (the bottom of a clear case from my bottom desk drawer) that covers the bottom of the Pi.  If the TFT (protecting the front of the Pi) dies because there is nothing protecting it, I'll look for another hardware vendor product.  See? I went cheap (USD 28 for the TFT, touch pen, DVD docs, and brittle clear case) and got bitten.  Lol@myself!

I used my existing Raspberry Pi 2 and passed along its USB drive (a previous blog subject) to my new Raspberry Pi 3.  It would not be terribly difficult to modify the Python 3 script to:

  • Use a different touch-screen display product which interfaces with the Raspberry Pi 2 or 3.
  • Run in another Linux distribution, or under any O.S. which supports Python 3, JSON, and Tk.


View from the top

View from the bottom



  • bin - (Python 3 source code)
  • docs - preparation_notes.txt (how to do everything required - really! - after following the Quimat hardware installation instructions).



Admittedly, there seem to be other 3.5" TFT display products which claim NOT to require special drivers as of the latest Raspbian during 2017 (more desirable, IMO).  In fact, the Quimat product might qualify too; I may just have gotten caught in the middle.  Some time in the future, I might try it without the Quimat-supplied driver software.  If that effort is successful, I will update this blog and the GitHub project.

Will this work on a Pi Zero? Pi A or B?

How well did I test preparation_notes.txt?

Find any bugs?

Suggestions for improvement?

The Weather Underground seems to be world-wide.  Alternate international weather data provider?

Richard Elkins Dallas, Texas, USA, 3rd Rock, Sol

I am currently running this on a Raspberry Pi 2 with Debian "Raspbian GNU/Linux 9 (stretch)"


1. sudo apt-get update

2. sudo apt-get install apache2

3. sudo mkdir -p /var/www/

4. sudo chown -R $USER:$USER /var/www/

5. sudo chmod -R 755 /var/www

6. nano /var/www/


----------------Add the following to the index.html file-------------


<html>
    <head>
        <title>Welcome to!</title>
    </head>
    <body>
        <h1>Success! is working!</h1>
    </body>
</html>

7. sudo cp /etc/apache2/sites-available/000-default.conf /etc/apache2/sites-available/


------------file should look like this--------------------------------

7.5 sudo nano /etc/apache2/sites-available/

This is what you will see......

<VirtualHost *:80>
    ServerAdmin webmaster@localhost
    DocumentRoot /var/www/html
    ErrorLog ${APACHE_LOG_DIR}/error.log
    CustomLog ${APACHE_LOG_DIR}/access.log combined
</VirtualHost>


Change it to this......

8. ServerAdmin

9. ServerName


11. DocumentRoot /var/www/

Now should look like this....

<VirtualHost *:80>
    ServerAdmin
    ServerName
    DocumentRoot /var/www/
    ErrorLog ${APACHE_LOG_DIR}/error.log
    CustomLog ${APACHE_LOG_DIR}/access.log combined
</VirtualHost>


12. sudo a2ensite

13. sudo service apache2 restart


14. sudo nano /etc/hosts  (the IP is the local IP of the Pi)



LeechFTP and CuteFTP don't do SFTP out of the box (either it doesn't work right away or requires extra configuration; I couldn't get them to work no matter how much I tried, and couldn't find how to access SFTP).

FileZilla has SFTP = connection right away.

Use your Pi's IP with the pi user and password.  You don't need to enter port 22, as it is the default SFTP port.


After last night's sleepy message, I thought I would post this again as a reference for the next time I am too sleepy and need to look at my notes, lol.  It's down-and-dirty note form.  If you need an explanation of anything above, just Google "setting up a pi site" and that should bring up a more in-depth walkthrough of each setting.


Just remember, the quickest way to do this is probably to SSH into your Pi once you first start it up, so make sure you go into the settings on the Pi (Debian) and enable it.  Then you can just copy and paste into PuTTY or whatever you're using.


I was going to try Ubuntu Server this time - I will get there yet - but I just wanted to make sure I had something working.

I ran a WordPress website on the Pi for over 2 years.  It works great, uses little power, and as long as you update WordPress here and there it's not that bad.

But I prefer to make my own; I was being lazy and used WordPress.  So now I get to have some fun, now that I've got the FTP working as it should.

Also, make sure to change your Pi's password.  I would also suggest removing the default pi user and adding another one instead.


I think I got that all OK there - anyone else have any input, or am I missing anything?

Also thanks to those that gave me ideas last night and this morning to get things going properly.

This guide provides step-by-step instructions for controlling your MATRIX Creator with a Discord bot.  By the end, you’ll be able to control the Creator’s LEDs and microphones, and run any command you create.


Required Hardware

Before you get started, let's review what you'll need.

  • Raspberry Pi 3 (Recommended) or Pi 2 Model B (Supported) - Buy on Element14 - Pi 3 or Pi 2.
  • MATRIX Creator - The Raspberry Pi does not have a built-in microphone; the MATRIX Creator has an 8-mic array perfect for voice applications - Buy MATRIX Creator on Element14.
  • Micro-USB power supply for Raspberry Pi - 2.5A 5V power supply recommended
  • Micro SD Card (Minimum 8 GB) - You need an operating system to get started. NOOBS (New Out of the Box Software) is an easy-to-use operating system install manager for Raspberry Pi. The simplest way to get NOOBS is to buy an SD card with NOOBS pre-installed - Raspberry Pi 16GB Preloaded (NOOBS) Micro SD Card. Alternatively, you can download and install it on your SD card.
  • A USB Keyboard & Mouse, and an external HDMI Monitor - recommended in case you're unable to remote (SSH) into your Pi.
  • Internet connection (Ethernet or WiFi)
  • (Optional) WiFi Wireless Adapter for Pi 2 (Buy on Element14). Note: Pi 3 has built-in WiFi.

For extra credit, enable remote (SSH) access to your device, eliminating the need for a monitor, keyboard, and mouse - and learn how to tail logs for troubleshooting.


Let's get started

We will be using MATRIX Open System (MOS) to easily program the Raspberry Pi and MATRIX Creator in JavaScript.  The Discord bot will be made using the discord.js module.


Step 1: Setting up MOS

Download and configure MOS and its CLI tool for your computer using the following installation guide in the MATRIX Docs: Installation Guide.


Step 2: Installing Raspberry Pi Dependencies

In order to use the MATRIX Creator’s mics and stream audio to Discord, the following dependencies must be installed on your Pi.

  • Discord Audio Streaming
    • Libav: sudo apt-get install libav-tools
  • MATRIX Creator Mics
    • ALSA Tools: sudo apt-get install alsa-base alsa-utils


Step 3: Creating Your Discord Bot

Create a Discord App, generate its Client key, and add a bot user.  Then:

  • Save the Bot Token - it will be needed to log the bot in later.


Step 4: Creating Your MOS App

To create your own MATRIX Creator Discord app on your local computer, use the command "matrix create Discord-Bot". You will then be directed to enter a description and keywords for your app. A new folder will be created for the app, along with five files inside. The one you will be editing is the app.js file. From here, you can clone the MATRIX-Discord-Bot GitHub repository with the code or follow the guide below for an overview of the code.


Step 5: Dependencies & Global Vars

The bot variable is a discord.js client object which allows us to read and respond with the Discord API. To login, the bot will require your Discord app’s bot token. Mic is then defined to grab microphone input from the MATRIX Creator. This will be used to stream that input into a Discord voice channel. To keep track of what voice channels the bot joins, currentVoiceChannel is defined to later hold that information.

var Discord = require('discord.js');//discord.js module
var bot = new Discord.Client();//Discord Bot Object
var token = 'YOUR_BOT_TOKEN_HERE';//Discord Bot Token
var mic = require('mic');//Stream wrapper for arecord
var currentVoiceChannel;//bot's current voice channel


Step 6: Creating Discord Commands

To organize commands, each command you make will need to be assigned to a command group. These groups will hold the commands you want in their list array. The addGroupCommand function is where you make and assign a command to a group. For example, a hello command assigned to the 'matrix' group is now able to be used by typing '/matrix hello' in the Discord chat. A callback you define will run once the command is called.

// Discord Commands
var commandGroups = {
    'matrix': {command: '/matrix', list: []},//matrix commands
    'basic' : {command: '/basic', list: []}//basic chat commands
};

//Command Creator
function addGroupCommand(group, commandName, description, command) {
    //add command to command group
    commandGroups[group].list.push({
        commandName: commandName,//name
        description: description,//desc
        command: command//function to run
    });
}


Step 7: Running A Command

The commandSearch function uses the given command group and Discord message to see whether any command in that group was called.  If found, the command runs.  If not, the user is sent a message listing all of the group's commands.  The function returns true if the command group was addressed at all, and false otherwise.

//Look For & Use Command In Command Group
function commandSearch(group, message){
    var userArgs = message.content.split(' ');//Convert User Arguments Into An Array
    var commandFound = false;//bool on sending help menu
    var commandHelp = commandGroups[group].command+' Commands:\n';//will hold the command group's commands

    //check if command group was called
    if (userArgs[0] === commandGroups[group].command) {
        //Search For Command In Group
        for(var i = 0; i < commandGroups[group].list.length; i++){
            //save command and description to help string
            commandHelp += '\n' + commandGroups[group].list[i].commandName +' - '+ commandGroups[group].list[i].description;
            //If command is found
            if( userArgs[1] === commandGroups[group].list[i].commandName){
                //Use command
                commandGroups[group].list[i].command(userArgs, message);
                //Update commandFound
                commandFound = true;
            }
        }
        //If Command Not Found
        if(!commandFound){
            message.reply('```'+commandHelp+'```');//reply with command list
        }
        //command group was found
        return true;
    }
    //command group was not called
    return false;
}


Step 8: MATRIX Led Command

To change the MATRIX Creator’s LEDs, the function looks for an input after ‘/matrix led’. The input (LED color) is then inserted into the matrix.led command. A proper usage reply will be sent to the user if they don’t have a parameter in their message.

// MATRIX Command Group
// - Change MATRIX LEDs
addGroupCommand('matrix', 'led', 'Change Color of MATRIX LEDs', function(userArgs, message){
    //Look For Color Input
    if (userArgs.length === 3){
        message.reply('```Using: matrix.led(\'' +userArgs[2]+ '\').render()```');
        matrix.led(userArgs[2]).render();//change colors
    }
    //Command Had No/Bad Input
    else {
        //reply command usage
        message.reply('```\nCommand Usage:\n\t'+
        '/matrix led purple'+'        //color name\n\t'+
        '/matrix led rgb(255,0,255)'+'//rgb values\n\t'+
        '/matrix led #800080'+'       //css color'+
        '\n```');
    }
});


Step 9: MATRIX Join Command

The join command is used to stream the MATRIX Creator microphones into a voice channel.  The command requires no parameters and will auto-join the user’s current channel; the channel is saved in the currentVoiceChannel variable.  The audio will have about a 6-second delay during the initial audio stream, but it will shorten as time passes.

// - Listen To MATRIX Mics
addGroupCommand('matrix', 'join', 'MATRIX Joins Your Voice Channel', function(userArgs, message){
    //continue if no args are present
    if(userArgs.length === 2){
        message.reply('Joining Voice Channel');
        //User Must Be In Voice Channel
        if (message.member.voiceChannel) {
            //just move if in voice channel
            if(currentVoiceChannel !== undefined){
                message.member.voiceChannel.join();//join voice channel
                currentVoiceChannel = message.member.voiceChannel;//save joined channel id
            }
            //join and reinitialize mics
            else {
                //join voice channel
                message.member.voiceChannel.join().then(connection => {
                    //save joined channel id
                    currentVoiceChannel = message.member.voiceChannel;
                    //npm mic config
                    var micInstance = mic({
                        rate: 16000,
                        channels: '1',
                        debug: false,
                        exitOnSilence: 0,
                        device : 'mic_channel8'
                    });
                    var micInputStream = micInstance.getAudioStream();//mic audio stream
                    //when mics are ready
                    micInputStream.on('startComplete', function(){
                        var dispatcher;//will serve audio
                        dispatcher = connection.playArbitraryInput(micInputStream);//stream mics to Discord
                        console.log('mics ready');
                    });
                    //start mics
                    micInstance.start();
                });
            }
        }
        //User Is Not In Voice Channel
        else {
            message.reply('You need to join a Voice channel first!');
        }
    }
    //Tell user to use no args
    else {
        message.reply('```"/matrix join" has no parameters```');
    }
});


Step 10: MATRIX Leave Command

This command will tell the bot to leave the channel saved in currentVoiceChannel. There’s also another command, at the bottom, for getting the link to the MATRIX documentation.

// - MATRIX Leaves Voice Channel
addGroupCommand('matrix', 'leave', 'MATRIX Leaves Current Voice Channel', function(userArgs, message){
    //continue if no args are present
    if(userArgs.length === 2){
        //leave current voice channel
        if(currentVoiceChannel !== undefined){
            message.reply('Leaving Voice Channel');
            currentVoiceChannel.leave();//disconnect from voice
            //remove saved voice channel id
            currentVoiceChannel = undefined;
        }
        else {
            message.reply('Currently not in a voice channel!');
        }
    }
    //Tell user to use no args
    else {
        message.reply('```"/matrix leave" has no parameters```');
    }
});
// - MATRIX Documentation Link
addGroupCommand('matrix', 'docs', 'Link To MATRIX Documentation', function(userArgs, message){
    message.reply('');//link elided - reply with the MATRIX documentation URL
});


Step 11: Basic Ping Command

This is a simple ping command to show you how easy it is to create and organize new commands.  The command itself will simply reply ‘pong’ to any user that types ‘/basic ping’.

// BASIC Command Group
// - A Simple Ping
addGroupCommand('basic', 'ping', 'Reply To User Ping', function(userArgs, message){
    message.reply('pong');//respond to the ping
});


Step 12: Discord Message Event

This message event fires whenever a message that the bot can read appears.  Private messages the bot receives are ignored.  Any other message is used in a for loop that runs the commandSearch function.  This loop compares the message against each existing command group, running the command that matches.

// Discord Events
//On Discord Message
bot.on('message', function(message){
    //Accept Text Channel & User Messages Only
    if (!message.guild && !== {
        message.reply('You need to join a Text channel first!');
    }
    //Check If User Message
    if ( !== {
        //Loop through commandGroup groups
        for (var group in commandGroups) {
            //Search for and run command
            if (commandGroups.hasOwnProperty(group) && commandSearch(group, message))
                break;//leave loop
        }
    }
});


Step 13: Logging In

The previously defined token is used to log the newly made Discord bot in.

//On Discord Bot Login
bot.on('ready', function(){
    console.log('Discord bot logged in');//connection status
});

//Start Discord Bot
bot.login(token);


Step 14: package.json

Before deploying to your MATRIX Creator, update your package.json file to have these dependencies. MOS will auto install everything when it installs your app.

  "name": "discordBot",
  "version": "1.0.0",
  "description": "",
  "main": "index.js",
  "scripts": {
    "test": "echo \"Error: no test specified\" && exit 1"
  "author": "",
  "license": "ISC",
  "dependencies": {
    "discord.js": "^11.2.1",
    "mic": "^2.1.2",
    "node-opus": "^0.2.7"


Github: The entire repository for this guide is the MATRIX-Discord-Bot repository on GitHub mentioned above.

Matt Reed from RedPepper has used a Raspberry Pi, a microphone, a creepy doll, and Google’s Speech Neural Network system to listen in on.... ghosts.




"From October 27–31, we’ll be live streaming the DeepWhisper rig nightly from our offices in historic “Butchertown” Nashville so you can watch for any EVPs that may come through. Just the thing to do at 3am when you can’t sleep." - Matt Reed


The DeepWhisper project pipes a real-time microphone stream to Google’s Speech Neural Network, which can detect over 110 languages, and then immediately displays the results as they come back.








DeepWhisper is open source, so anyone can hunt their own ghosts.

It runs on Node, and its libraries have been optimized for Raspberry Pis.  You’ll need a USB microphone, a Google Cloud Platform project key, a display, and patience.

Matt will upload a full repository link soon but, for now, here are the key code snippets.


Connecting to Google Voice Neural Network

You’ll need to have a project set up in the Google Cloud Platform console which will grant you an authentication JSON key that your app will use to connect. Just follow these steps to get that going. Note: you may have to set up billing with Google to proceed.


// Authenticate with Google Cloud
const speech = require('@google-cloud/speech')({
  projectId: 'deepwhisper-XXXXXXXX',
  keyFilename: 'Deepwhisper-XXXXXXX.json'
});


Streaming mic input to Google

Simply pipe the microphone input to Google, and if the Neural Network detects speech in any of the 110+ supported languages, it will be returned as a string of transcript text.


// Connect and listen to USB microphone
const micInstance = mic({
  rate: '16000',
  channels: '1',
  debug: true,
  exitOnSilence: 0
});
const micInputStream = micInstance.getAudioStream();
// Create a real-time recognize stream with Google
const recognizeStream = speech.streamingRecognize(request)
  .on('error', console.error)
  .on('data', (data) =>
    (data.results[0] && data.results[0].alternatives[0])
      ? io.emit('text', { transcript: data.results[0].alternatives[0].transcript })
      : console.log(`\nReached transcription time limit, press Ctrl+C\n`));
// Pipe the mic input to Google and start recording
micInputStream.pipe(recognizeStream);
micInstance.start();


Display the results

Using a simple HTML page with Socket.IO, you can receive the results emitted by the server above and display them immediately. This uses jQuery to set the text and fade it out after five seconds.


  var socket = io.connect('http://localhost:3000');
  socket.on('text', function (data) {
    //set the transcript text (the #transcript element id is an assumption)
    $('#transcript').text(data.transcript).show();
    //fade the text out after five seconds
    setTimeout(function () {
      $('#transcript').fadeOut();
    }, 5000);
  });

Source and project: Matt Reed at RedPepper.


Home Automation in the UK Simplified, Part 1: Energenie MiHome

Join Shabaz as he works on his IoT home!

Learn about home automation using the Raspberry Pi, Energenie MiHome and Node Red.

Check out our other Raspberry Pi Projects on the projects homepage



Note: Although this blog post covers a UK home automation solution, a lot of the information is still relevant for other regions.  The information here shows how to create software applications and graphical user interfaces, using a block-based system called Node-RED and JavaScript, that can communicate with hardware and with cloud-based services.  It also shows how to convert the Raspberry Pi to run in a sort of 'kiosk mode', where the user interacts with the Pi as an end appliance through a graphical touch-screen interface.  Finally, it shows how to provide auto-dimming capability for the touchscreen display, to suit environments with varying light conditions.



A few months ago, the topic of home automation in the UK was explored, and how it could be achieved safely, at low cost. It turned out to be simple; attach radio-controlled mains sockets, mains adapters and light switches into your home, and connect a Mi|Home Gateway box into your existing home router. The gateway has a 433MHz radio to talk to the sockets and switches, and connects via the Internet to a free cloud service called Energenie Mi|Home.


This is sufficient to be able to control your home using the buttons on the sockets and switches, and using a web browser or mobile app downloadable from the Mi|Home website or iPhone/Android mobile app store.


The home automation was enhanced by purchasing a low-cost Amazon Echo box which connects to the home network wirelessly. It allows for voice control of your home appliances.


Not everyone wants voice control, although I prefer it.  No need to touch and share the germs using a touch-screen : )  Nevertheless, many users still prefer touching buttons or a screen for control.  There is also the desire to be able to programmatically control things using something like a Raspberry Pi, for more intelligent automation than just 'if this then that' style encoding of behaviour into your home.  It would be perfectly feasible for the Pi to identify that a user has picked up a book, and automatically turn on the reading lamp.  I decided to implement a large touchscreen on the wall to control the home, in conjunction with retaining voice control and browser control.  I also wanted to use a simple programming environment that could allow for more elaborate schemes in future.


This part 2 deals with how to go about this, using a Raspberry Pi 3 for the programming environment and for running a user interface, and a capacitive touch-screen for monitoring and control.


The project is really easy from a hardware perspective; the Pi just needs connecting to the home network (either using the built-in wireless capability, or the Ethernet connection available on the Pi).  Any display could be selected, but the capacitive touch screen of course makes life easier because touch can be used!  No keyboard required.


Further below in this blog post, the hardware design is extended slightly to provide auto-dimming capabilities to suit varying home lighting conditions.


To build the solution described here, the mandatory Energenie MiHome bits you need are the MiHome Gateway, and at least one MiHome control element such as a MiHome mains adapter.


An Amazon Echo, or Echo Dot device is optional but provides useful voice control as discussed in the earlier blog post.


The diagram here shows the approximate topology inside the home. It is really straightforward, difficult to go wrong!


Just to recap, the home devices such as lights and sockets are controlled via radio. These are shown at the top of the diagram. The hub that communicates over radio to them is the MiHome (also referred to as Mi|Home) Gateway. It connects to the Internet (for example using DSL) by plugging into your existing home Internet router. The user sets up an account at the Energenie MiHome website and downloads an app if desired. From here the user can control any device from anywhere with an Internet connection.


Voice commands are possible due to integration between Amazon’s Alexa service and the MiHome cloud service. All it requires is for the user to obtain an Amazon Echo or Echo Dot device as mentioned earlier, and run a small bit of configuration; all this was covered in Home Automation in the UK Simplified, Part 1: Energenie MiHome


This part 2 now covers the green portion in the diagram above. Basically it connects a Raspberry Pi to the solution. The Pi communicates to the MiHome service using an application programming interface (API). A user interface also runs on the Pi, so that a connected touchscreen can be used for controlling and monitoring the home. The typical flow of information is therefore:


  1. The user presses a selection on the touchscreen
  2. The Pi sends the command in a specific format (using the API) to the MiHome web service in the cloud
  3. The MiHome service looks up the pre-registered user, and sends commands to the MiHome Gateway
  4. The MiHome Gateway unwraps the command and converts it into a radio signal
  5. The radio signal is picked up by the appliance intelligent mains socket and switches on or off the connected appliance


In the event of network failure, the local controls on each mains socket will continue to function. The touchscreen controls can also continue to function since the Pi can switch to radio mode, sending commands directly to the IoT devices, using a radio module plugged on top of the Pi. This last capability is outside the scope of this blog post and may be covered in a later article if there is interest.


In summary, the Energenie + Raspberry Pi + Capacitive Display + Amazon Echo forms a fairly comprehensive solution, little effort is required to build it, and all code for this project is published and is easy to customise.


The diagram below shows the complete path of information between the home and the cloud services. This is not necessary to know, it is just background information for those that are curious.


How do I do it? - Brief Overview

Firstly, get a Pi 3 and the correct power supply (the Pi 3 along with the display uses a lot of power - most USB chargers and associated USB cables will not be sufficient) and do the usual basic configuration (install the latest software image, create a password, and get it connected to your home network using either wireless or the Ethernet connection). The steps for this are covered in many blog posts. Next, attach the display to the Pi.


The next step (described further below) is to enable the software development environment called Node-RED and copy across the example Energenie MiHome code (all links are further below) that was developed as part of this blog post. Configure it to suit your home appliances. This entails storing an 'API Key' that is unique to anyone who registers their MiHome Gateway on the Energenie MiHome website, and also obtaining and entering in the device identifiers so that the Pi knows which adapter you wish to control when you press particular buttons on the touchscreen. Finally, you can customize the touchscreen and make it auto-dimming when the room is dark with a small add-on circuit. The majority of this blog post will cover all these topics in detail.



The security of the base solution was covered in part 1, in the section titled ‘Protocols and Examining the Risks’. The extra functionality in this part 2 has no known data security issue. No password is stored on the Raspberry Pi, and no inbound ports need to be opened on the router beyond those that would ordinarily be dynamically opened for web browsing responses. All communication between the Pi and the MiHome cloud service is encrypted. The Raspberry Pi stores just an ‘API key’ and the e-mail address that was used to register the MiHome service (use a throwaway e-mail account if you wish). If someone did hack into the Pi, the API key would give them control of the home appliances only until the user deactivated it from the MiHome cloud service. With sensible precautions (no ports opened up on the router) and user access restricted to the Pi, the risk of this occurring is low.


Depending on the desired level of trust/mistrust, one could modify the touchscreen interface to prompt for the MiHome password always; this would eliminate the need to locally store an API key but would increase the inconvenience. It is an option nevertheless.


What is an API?

An Application Programming Interface is a machine-to-machine communication method that is (often) made public. It isn’t necessarily designed for any one particular purpose, because the creators of a service often cannot predict how all their customers will use it. By having an API, unexpected solutions can be created, adding value for the user. Whole businesses have been built on the backs of APIs; for example, Uber may not have anticipated everything that could be done with an API for ordering a taxi, yet it is possible to automate deliveries by using such an API to request a nearby driver as soon as someone places an order for your product. A taxi service that works like DHL is definitely unexpected, and would be harder to create without APIs; it has allowed businesses to have delivery staff on-demand.


Modern APIs frequently rely on HTTP and REST techniques. These techniques allow for efficient communication in a consistent manner. Nearly all of them result in the communicating device sending an HTTP request to a web address over the network, with any data sent as plain text, often as name and value pairs (commonly in JSON format); the HTTP response looks like what a web browser might receive, with a response code and text content. It means that such APIs can often be tested with any web browser like Chrome or Internet Explorer.
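As a concrete illustration of that request/response shape, here is a small sketch; the endpoint URL and field names below are invented for the example, not taken from any real service:

```javascript
// Illustrative only: the endpoint and field names are made up to show the
// general shape of a JSON-over-HTTP exchange, not any real service's API.
var request = {
  method: 'POST',
  url: 'https://example.com/api/v1/devices/power_on', // hypothetical endpoint
  body: JSON.stringify({ device_id: 12345 })          // parameters sent as JSON text
};

// The response is also plain text, which parses straight back into an object:
var responseText = '{"status":"success","data":{"device_id":12345,"power_state":1}}';
var response = JSON.parse(responseText);

console.log(response.status);           // "success"
console.log(response.data.power_state); // 1
```

Because both directions are just text over HTTP, pasting the URL into a browser (for GET-style requests) shows the same raw response a program would receive.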


In the case of MiHome, Energenie have created an API that allows one to do things like send instructions to turn devices on and off. Once the MiHome server in the cloud determines that the request was a valid and authenticated use of the API, it will send a message to the MiHome Gateway in your home. From there, a radio signal is used to control the end device. The system can work in the other direction too; end devices can send information via radio, such as power consumption. This information is stored in the MiHome service database in the cloud. When a request arrives using the API, the MiHome service will look up the data in the database and send it as part of the HTTP response.


For this project, the API will be invoked by the Raspberry Pi whenever a button is pressed on the touchscreen. This is just an example. With some coding effort it is also possible to instruct the Pi to (say) send on/off commands at certain times; this would implement a service to make the home appear occupied when the home is actually empty for instance.


Building the Graphical User Interface (GUI)

There are many ways to achieve a nice user interface with modern Linux based systems. One popular way uses an existing application called OpenHAB, which is intended for easy home automation deployments. There are many blog posts which describe how to install and use the OpenHAB software application. I couldn’t find a working Energenie MiHome plugin however (perhaps it exists or will exist one day).


I decided to take a more general approach and create a lightweight custom application. After all, coding is part of the fun when developing your own home automation system. The custom application is not a large amount of code. In fact it is tiny. This has the benefit of being really easy to follow and modify, allowing people to heavily customize it because everyone's home and needs are unique. For instance, some users may not want a touchscreen. They could easily modify the code to instead take push-button input and show indications with LEDs. This is really easy to do by tweaking the custom app.


For this project, I decided to use JavaScript (one of the most popular languages for web development) and a graphical programming environment called Node-RED. When this environment is run on the Pi, the software creation is done (mostly) in a web browser using graphical blocks. With Node-RED, user interfaces and program behaviour are implemented by dragging blocks (called 'nodes') onto a blank canvas and literally 'joining the dots' between nodes. Each node can be customised by double-clicking on it. Once the design is complete, the user interface is automatically made available at a URL such as http://xx.xx.xx.xx:1880/ui where xx.xx.xx.xx is the IP address of the Pi that is running Node-RED.


It is then a straightforward task to automatically start up a web browser on the Pi in full-screen mode, so that the user interface is the only thing visible. In other words, the Pi and touchscreen become a dedicated user interface device. Since web technologies are used, it means a mobile phone can also be used if you're not near the touchscreen.


In brief, Node-RED has nodes (blocks) for doing all sorts of things that are useful for a user interface. There are nodes for buttons, sliders and graphs that can be used to construct the desired result. There are many nodes for application creation too. However, Node-RED does not have an off-the-shelf node that can control Energenie MiHome devices.


So, my first step was to design such a block and store it online so that anyone is free to use it. The instructions to install it are further below, in the 'Installing Node-RED' section. This means that when Node-RED is started and the web page for development is accessed, the left side blocks palette will contain a mihome node. It will automatically communicate using the Energenie Mihome API to the cloud service.


A one-time step is to retrieve a key from the mihome cloud service. To do that, a special command called get_api_key is sent to the mihome node, along with the username and password that were used to register with the mihome service. The code does not store the password; just the username (i.e. e-mail address) and the returned API key are stored to a local file. If the Pi crashes or is powered off, the user does not need to re-enter the username and password; the key will be re-read from the file. For those that require a different strategy, it should be straightforward to modify the code.


The next section describes all these steps in detail.


Installing Node-RED

As root user (i.e. prepend sudo to the beginning of each command line, or follow the information at Accessing and Controlling the Pi in the section titled 'Enabling the root user account (superuser)'; type su to become the root user, and type exit to revert to the previous 'pi' user when done):


apt-get update
apt-get install npm
npm install node-red-dashboard


Exit out of the root user, and update Node-RED by typing:


bash <(curl -sL


It takes a long time (perhaps 15 minutes) to uninstall the earlier version and upgrade it, so take a break!

Afterwards, in the home user folder (/home/pi) become root user and then type:


npm install -g git+


Exit out of root user and type:


node-red-start


After about ten seconds, you should see “Server now running at”.


Now in a browser, open up the web page http://xx.xx.xx.xx:1880 where xx.xx.xx.xx is the IP address of the Pi. You should see a Node-RED web page appear!


Using Node-RED

The CLI command node-red-start will have resulted in a web server running on the Pi at port 1880. Code is written (actually, mainly drawn graphically with a bit of configuration) in a web browser. The editor view is shown when any web browser (e.g. Chrome or Internet Explorer) is used to see the page at http://xx.xx.xx.xx:1880 where xx.xx.xx.xx is the IP address of the Pi.

Here is what it looks like:


In the left pane, (known as the palette), scroll down and confirm that you can see a node called mihome in the group under the title 'function' and a whole set of nodes suitable for user interfaces under the title ‘dashboard’. To save time finding a node in the palette, you could just type the name, e.g. mihome in the search bar as shown here.


What does this mean? Basically, it means that ‘mihome’ functionality is available for you to use in your graphically designed programs, which are known as ‘flows’ in Node-RED. The flows will be created in the centre pane, known as the Flow Pane. It is tabbed, and by default the blank canvas for the first flow (Flow 1) is presented. When creating programs, nodes are dragged from the palette onto the flow pane. Then, connections are made between nodes. Each node is configured by clicking on it; a node configuration parameter window then appears, and help on the node appears in the tab marked Info. The program is run (or ‘deployed’) by clicking on the button marked Deploy, shown in red on the top-right of the web page once a flow has been created (by default it is grayed out).


An Example Home Automation Program

To help get started, I’ve created an example program sufficient to control home appliances with the MiHome solution. To obtain it, click to access the example code on github and then copy the program (press ctrl-A and then ctrl-C to copy the entire code into the clipboard). Next, go to the Node-RED web page, click on the menu on the top-right, and select Import->Clipboard. Click in the window, press ctrl-V to paste it in there, and click Import. The code will appear graphically, attached to the mouse pointer! Click anywhere inside the web page to place it.

This is what the demo program looks like:


As you can see, it is split into three main parts: the top, the middle and the bottom. The middle part is used to control a fan.


The light-blue nodes on the left represent buttons (the actual buttons will look nicer; this is just a view of the graphical code). When a ‘Fan On’ or ‘Fan Off’ button is pressed, some signal or message is sent into the yellow mihome node. The mihome node is responsible for communicating to the Energenie MiHome cloud (which in turn will send a message to your MiHome Gateway box, which will then send a radio signal to the end appliance mains socket). The green node on the right doesn’t do much; it is used for debugging and will dump text into the ‘Debug’ tab in the editor.


The top flow looks near-identical, except that the buttons do not control a fan, but rather control a group of appliances. For example, you may have several lamps in a room and you may wish to define a group to control them all simultaneously.


In summary, the mihome node recognizes various commands and makes the appropriate API call to the cloud, to invoke the corresponding real-world action such as switching on appliances.
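That command-to-API mapping can be pictured as a simple lookup inside the mihome node. The URL paths below are assumptions for illustration only, not Energenie's documented API:

```javascript
// Sketch of how a node like mihome might route commands to API endpoints.
// The paths in this table are invented for illustration, not the real API.
var routes = {
  subdevice_on:  '/subdevices/power_on',
  subdevice_off: '/subdevices/power_off',
  list_devices:  '/subdevices/list'
};

function endpointFor(msg) {
  // msg.payload carries { command: ..., objid: ... } as in the example flow
  return routes[msg.payload.command] || null;
}

console.log(endpointFor({ payload: { command: 'subdevice_on', objid: '65479' } }));
// "/subdevices/power_on"
```

Unknown commands return null, which a real node would report to the Debug tab rather than sending a malformed request.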


The bottom flow is a bit different:


It doesn’t have a light-blue button node on the left. Instead it has a darker blue node which is known as an Inject node. It has the characteristic that it can repeatedly do something at regular intervals. It has been configured (by double-clicking on it) to send a message to the yellow mihome node every minute. Every minute it instructs the mihome node to query the Energenie MiHome cloud and find out how much power is being consumed by the fan appliance. When the cloud receives the request, it will send the request to the Energenie MiHome Gateway box which will transmit a radio signal to the fan mains socket, which will respond back with the result.


The pink/orange get real power node is a function node. By double-clicking on it within Node-RED, you’ll see that all it does is extract the ‘real power’ value from all the information that is returned, and discards the rest. The final node in the chain, the fan-power-history node, is a chart node. It is responsible for graphing all the information it receives. The end result is a chart that updates every minute.
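A function node body doing that extraction can be tiny; here is a sketch (the real_power field name is an assumption based on the description above, so check the actual response in the Debug tab):

```javascript
// Sketch of what a function node like 'get real power' might contain:
// keep only the real_power value and pass it on as the new payload.
// The field name real_power is an assumption for illustration.
function extractRealPower(msg) {
  return { payload: msg.payload.real_power };
}

// Example: a full reading arrives, only the power figure survives.
var reading = { payload: { real_power: 42, voltage: 230, id: 65479 } };
console.log(extractRealPower(reading).payload); // 42
```

Inside Node-RED the same two lines would simply be the function node's body (`msg.payload = msg.payload.real_power; return msg;`).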


To explore the yellow mihome node in a bit more detail, double-click on any preceding node to see what information is sent to the mihome node. For example, if you double-click on ‘Fan On’, you’ll see this information appear:


You can see that this is a button node (or more specifically ui_button), which is part of the dashboard collection of nodes in the palette. It basically will display a button on the screen. The button will be labelled “Fan On” and if the user clicks it, then a message or payload will be sent into the mihome node. The payload is partially shown on the screen, but click on the button marked ‘’ to see it fully. When you do that, you’ll see this text:


    "command": "subdevice_on",
    "objid": "65479"


The command indicates that this is something to be powered up, and the objid identifies what device should be powered up. That objid value 65479 happens to be an Energenie mains socket that I own, connected to a fan. In your home, every Energenie device will have its own unique ID, and they are very likely to be different to mine, although there could be overlap. So how does the mihome node know which device should be controlled, yours or mine?


The answer is, the mihome node uses an API key. This is unique and assigned whenever anyone creates an Energenie MiHome account. The API key can be obtained using the username and password that was used to set up the account. Code can be created to do that automatically, and then save it so that the Pi always uses the API key. For security reasons, I wanted it to prompt for the password, but not store the password. Only the e-mail address and API key are stored. To do that, I wanted an ‘admin’ screen on the user interface to allow the user to type in their credentials. This needs some additional code, which is explored next.
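Many web APIs of this kind accept the e-mail address and API key as an HTTP Basic authentication header; whether MiHome uses exactly this scheme is an assumption here, but the construction itself is standard:

```javascript
// Build an HTTP Basic auth header from an e-mail address and API key.
// Whether the MiHome API uses exactly this scheme is an assumption; the
// pattern itself is the standard one (RFC 7617): base64 of "user:secret".
function basicAuthHeader(email, apiKey) {
  var token = Buffer.from(email + ':' + apiKey).toString('base64');
  return 'Basic ' + token;
}

console.log(basicAuthHeader('user@example.com', 'abc123'));
```

The resulting string would be sent as the Authorization header on every API request, which is why only the key (not the password) ever needs to live on the Pi.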


Building an Admin View

The Admin view is used to initially configure the Pi so that it has the API key to control your home. I created it as a separate program (flow) that happens to appear in the same user interface. You can obtain the code by clicking on the Menu button (top right) and selecting Flows->Add. You’ll see a Flow 2 tab appear with a blank canvas for your new flow. Then, click here to access the admin view code on github and select all (ctrl-A) and copy (ctrl-C) the entire program there. Import it into the Node-RED editor as before (click on the Menu icon, then Import->Clipboard, and paste it there using ctrl-V) and then click on Deploy.


Here is how it works: the top-left shows four user interface objects; the PIN, email and password nodes are text boxes where the user can type in these parameters, and then click the OK box. All the information is collected by the next node in the chain, called invoke_get_key, which checks that all the text fields have been populated and that the PIN is correct. The PIN is not used for security; it merely prevents young children in the home from accidentally wiping the API key. The code will not request an API key until the PIN is correctly entered, and because a request made with an incorrect username and password would wipe out the stored API key, the PIN stops that happening if babies or young children start playing with the touchscreen. Since the PIN doesn’t play a security role, it is just hard-coded; you can edit it by double-clicking on the ‘invoke_get_key’ node. I won’t explain the rest of the flow, but it is simple and straightforward to explore by double-clicking on nodes.


The end result is that the flow will allow the API key to be retrieved and stored permanently on the Pi in a file in plain text format. The password is not stored as mentioned earlier. Since the API key is stored, if the Pi reboots, the user will not have to add the API key again.


When we examined the ‘Fan On’ node earlier, we saw that an identifier is used for the mains socket and in my case it happened to be 65479. To obtain such identifiers, we need to use the API to ask the Energenie MiHome cloud what devices exist in the home. The Scan Devices button is used to do that. It will make the appropriate API call and then show the list on the screen.


Working with the User Interface

So far, we have examined the flows for the example home automation system, and the Admin view. Once you’ve clicked Deploy, the code will be running. The user interface can be accessed by opening up a browser to

http://xx.xx.xx.xx:1880/ui and you’ll see this:


The buttons can be tapped to switch things on and off, and the chart shows the power consumption of the fan over time, allowing you to see when the fan was used (it was not used; it is cold here!).


The menu is the result of the code in Flow 1. But the system won’t work until it has been configured as in Flow 2. To do that, click on the menu icon (the three bars on the top-left, next to where it says “HAL 9000”) and in the drop-down select ‘Admin’; you’ll see the code from Flow 2 executed:


Once you’ve entered the PIN (it is 1234 unless you edited the code as mentioned earlier) and e-mail address and password as used on the Energenie MiHome cloud service, click on OK and the system will retrieve the API key from the cloud service and store it locally.


You can’t control the fan, because it is set up for my fan mains socket identifier; you’d need to change it to suit your own device. To do that, click on Scan Devices and the system will show in a pop-up window a list of all Energenie devices you own, and their associated identifiers. Take a screen print of that, and you can use it for editing the flow to add buttons and groups for those devices. Once you’ve done that, click on Deploy again.


Theme Customizations

I didn’t like the color scheme, but thankfully it is possible to choose your own. To do that, go back into the editor view at http://xx.xx.xx.xx:1880 and then click on the Dashboard tab on the right as shown here:


You’ll see lots of options to adjust the ordering of buttons in the Layout sub-tab. Click on the Theme sub-tab and then set Style to custom and you’ll see all the elements that can have different colors. Once they have been adjusted to suit preferences, they can be saved under a custom name. I didn’t want the touchscreen to be entirely lit up brightly at night-time, so I chose a dark background for example.


Building a ‘kiosk mode’ for the Pi and Display

For practicality, the Pi needs to be set up so that Node-RED executes automatically when the Pi is powered up, and the web browser must be set up to auto-start too, set to fill the entire touch display with no border or URL/website address visible. In other words, we want a type of kiosk mode much like the interactive help/information screens in shopping centres/malls.


The steps to implement this on the Pi are scattered all over the Internet and a bit outdated; I had to spend some time working out the customisation that would suit the Pi and Capacitive Display, for implementing such a system.


First, stop Node-RED by issuing the command node-red-stop and then as root user, type the following:


systemctl enable nodered.service
systemctl start nodered.service


Now Node-RED will automatically start whenever the Pi is rebooted.


The next step is to invoke a browser whenever the Pi starts up.

To do this, as root user type raspi-config and then select Boot Options and then choose to auto-boot into text console as ‘pi’ user. Then at the main menu press the tab key until Finish is highlighted to save it, and select to reboot the box. When the Pi comes up, you should see the text-based command shell/prompt on the touchscreen display, and the user already logged in.


Also as root user, type the following:


apt-get install matchbox-keyboard


This will install a virtual keyboard for the times you may need to tap text on the display; it isn't used for this project but could be useful in future.


Also type:


apt-get install matchbox-window-manager


You’ll also need a better web browser than the default. I installed two more, so that there was some choice. Still as root user, type:


apt-get install midori
apt-get install chromium-browser


(If you test it from the command line and chromium-browser has an error concerning mmal_vc_init_fd, then you will need to issue rpi-update and then reboot the Pi).


As the normal ‘pi’ user, create a startup script for Midori in the /home/pi folder containing the following:


(
matchbox-window-manager -use_cursor no &
echo "10" ; sleep 1
echo "20" ; sleep 1
echo "50" ; sleep 3
echo "80" ; sleep 3
echo "100" ; sleep 2
) |
zenity --progress \
  --title="Starting up" \
  --text="Please wait..." \
  --auto-close

if [ "$?" = -1 ] ; then
        zenity --error \
          --text="Startup canceled."
fi

midori -e Fullscreen -a http://127.0.0.1:1880/ui


Create another startup script for Chromium with the same content, but replace the last line with:


chromium-browser --incognito --kiosk http://127.0.0.1:1880/ui


Edit the /home/pi/.bashrc file and append the following:


if [ $(tty) == /dev/tty1 ]; then
  xinit ./
fi


The result of all this is that when rebooted, the Pi will display a progress bar for ten seconds (allowing sufficient time for the Node-RED server to start up) and will then display a full-screen browser opened at the correct URL for the user interface (the local host address of the Pi, i.e. http://127.0.0.1:1880/ui).


Reboot the Pi (i.e. type reboot as root user) and the user interface should appear!


Preventing Display Blanking

After some minutes of inactivity, the display will blank by default. Depending on requirements this may be undesirable. To prevent the screen from blanking, issue the following commands.


Edit both browser startup scripts, and insert the following lines after the first line:


xset -dpms
xset s off



Auto-Blanking the Mouse Pointer

It could also be desirable to make the mouse pointer/cursor disappear from the screen. Type the following as root user:


apt-get install unclutter


Then, as the ‘pi’ user, edit both browser startup scripts and insert this just above the line containing the matchbox-window-manager text:


unclutter &


Reboot the Pi for these to take effect.


Auto Brightness for the Capacitive Touch Display

Although the kiosk mode implementation works fine, there is a lot that could be improved. For starters, the display is too bright in the evening. It would be possible to adjust the brightness level based on time, but I felt it may be better to just measure the brightness using a light dependent resistor (LDR).


The capacitive touch display brightness level is controlled using the following command line as root user:


echo xxx > /sys/class/backlight/rpi_backlight/brightness


where xxx is a number between 0 and 255 (a value of about 20 is suitable for night-time use, and 255 can be used for a bright screen during the day).
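If a smoother response than a plain day/night toggle were ever wanted, a Node-RED function node could compute the value to write into that brightness file. This is a sketch under an assumed 0.0–1.0 ambient light scale (the circuit in this project only provides a dark/light signal):

```javascript
// Map a normalised ambient light level (0.0 = dark, 1.0 = bright) onto the
// display's 20..255 brightness range, clamping out-of-range inputs.
// The 0.0-1.0 input scale is an assumption for illustration.
function brightnessFor(light) {
  var clamped = Math.min(1, Math.max(0, light));
  return Math.round(20 + clamped * (255 - 20));
}

console.log(brightnessFor(0));   // 20  (night-time level)
console.log(brightnessFor(1));   // 255 (full daytime brightness)
console.log(brightnessFor(0.5)); // 138
```

The returned number would then be written to /sys/class/backlight/rpi_backlight/brightness, for example via an exec node calling the scripts above.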


To automate this, a couple of scripts were created in the /home/pi folder. As the ‘pi’ user, create a day-time (full brightness) script containing the following:


echo 255 | sudo tee /sys/class/backlight/rpi_backlight/brightness > /dev/null


Do the same for a night-time script, but set the value to 20.


Next, make both scripts executable by running chmod 755 on each of them.


In order to invoke these scripts, a new flow is created in Node-RED. Click here to access the auto brightness source code on github.


Once it has been added to Node-RED, click on Deploy to activate it.

The flow looks like this:


The left node, called dark_detect, is configured as shown below (double-click on it within Node-RED to see this):


The dark_detect node will generate a message of value 1 whenever the Raspberry Pi’s 40-way header pin 7 (GPIO 4) goes high.

A small circuit was constructed to generate a logic level ‘1’ whenever it goes dark:


The circuit consists of a Schmitt trigger inverter integrated circuit, a light dependent resistor, a 50k trimmer variable resistor and a 100nF capacitor. The trimmer resistor can be adjusted to suit the home lighting level.


It worked well. When the room lighting is reduced, the display automatically dims to a very comfortable level.



It is possible to create a nice touchscreen based user interface for home automation with the Pi. The programming effort is low using Node-RED. It is possible to create code ‘flows’ with graphical ‘node’ objects that can represent buttons on the screen. The functionality that interacts with the Energenie MiHome service is contained in a ‘mihome node’ graphical object that is inserted into the code flow. It will automatically send the appropriate commands to the Energenie MiHome cloud service, which will in turn send a message to the MiHome Gateway that will issue a radio message to control the desired home appliance. Monitoring capability is possible too; an example showing appliance energy consumption over time is contained in the code.


The solution with the Pi is reasonably secure; no password is stored on the Pi, the system stores an API key instead.


Finally a small circuit was constructed and an additional code flow was created that would automatically dim the display backlight when the home lighting is reduced.


I hope the information was useful; these two blog posts were rather long, but I wanted them to be detailed so that anyone can implement a home automation solution.


This guide provides step-by-step instructions for connecting a Unity client to your MATRIX Creator. This connection will be used to demonstrate how Unity can read data from every sensor the MATRIX Creator has.


Required Hardware

Before you get started, let's review what you'll need.

  • Raspberry Pi 3 (Recommended) or Pi 2 Model B (Supported) - Buy on Element14 - Pi 3 or Pi 2.
  • MATRIX Creator - The Raspberry Pi does not have a built-in microphone, the MATRIX Creator has an 8 mic array perfect for Alexa - Buy MATRIX Creator on Element14.
  • Micro-USB power supply for Raspberry Pi - 2.5A 5V power supply recommended
  • Micro SD Card (Minimum 8 GB) - You need an operating system to get started. NOOBS (New Out of the Box Software) is an easy-to-use operating system install manager for Raspberry Pi. The simplest way to get NOOBS is to buy an SD card with NOOBS pre-installed - Raspberry Pi 16GB Preloaded (NOOBS) Micro SD Card. Alternatively, you can download and install it on your SD card.
  • A USB Keyboard & Mouse, and an external HDMI Monitor - we also recommend having a USB keyboard and mouse as well as an HDMI monitor handy if you're unable to remote (SSH) into your Pi.
  • Internet connection (Ethernet or WiFi)
  • (Optional) WiFi Wireless Adapter for Pi 2 (Buy on Element14). Note: Pi 3 has built-in WiFi.

For extra credit, enable remote (SSH) access to your device, eliminating the need for a monitor, keyboard, and mouse - and learn how to tail logs for troubleshooting.


Let's get started

We will be using MATRIX OS (MOS) to easily program the Raspberry Pi and MATRIX Creator in JavaScript, together with the Unity Engine.


Step 1: Setting up MOS

Download and configure MOS and its CLI tool for your computer using the following installation guide in the MATRIX Docs: Installation Guide


Step 2: Create a Unity-Sensor-Utility app

To create your own Unity-Sensor-Utility app on your local computer, use the command "matrix create Unity-Sensor-Utility". You will then be prompted to enter a description and keywords for your app. A new folder will be created for the app, containing five new files; the one you will be editing is the app.js file. From here you can clone the Unity-Sensor-Utility GitHub repo with the code, or follow the guide below for an overview of the code.


Step 3: Start Socket Server

In the app.js file, you will need to require socket.io and create a server for the Unity client to connect to. Port 6001 is used by default, but it can be changed to whatever you want.


///Start Socket Server
var io = require('')(6001);
console.log('server started');


Step 4: Configure & Start MATRIX Sensors

To read data from the MATRIX’s sensors, each sensor has to be initialized and configured with a refresh and timeout option. The options object will be used as a default for all the sensors. To save the data from each sensor, an empty JSON object is created and overwritten each time there’s a new sensor value. Each sensor has its own object.


// Config & Start MATRIX Sensors
var options = {
     refresh: 100,
     timeout: 15000
};

var gyroscopeData = {};
matrix.init('gyroscope', options).then(function(data){
     gyroscopeData = data;
});
var uvData = {};
matrix.init('uv', options).then(function(data){
     uvData = data;
});
var temperatureData = {};
matrix.init('temperature', options).then(function(data){
     temperatureData = data;
});
var humidityData = {};
matrix.init('humidity', options).then(function(data){
     humidityData = data;
});
var pressureData = {};
matrix.init('pressure', options).then(function(data){
     pressureData = data;
});
var accelerometerData = {};
matrix.init('accelerometer', options).then(function(data){
     accelerometerData = data;
});
var magnetometerData = {};
matrix.init('magnetometer', options).then(function(data){
     magnetometerData = data;
});


Step 5: Event Listeners

With the MATRIX Creator now reading and storing sensor data, it’s time to handle sending that data when requested. Event listeners are created here for events named after each sensor. When such an event is received, the MATRIX responds by emitting an event back containing the corresponding JSON object for the requested sensor. Sensor data is only sent when requested, because it is unlikely every sensor will be used at once; however, they can all be sent if you choose.


//Event Listeners
io.on('connection', function (socket) {
  console.log('Client Connected\n Sending Data...');

  //Send gyroscope data on request
  socket.on('gyroscope', function () {
    socket.emit('gyroscopeData', gyroscopeData);

  //Send uv data on request
  socket.on('uv', function () {
    socket.emit('uvData', uvData);

  //Send temperature data on request
  socket.on('temperature', function () {
    socket.emit('temperatureData', temperatureData);

  //Send humidity data on request
  socket.on('humidity', function () {
    socket.emit('humidityData', humidityData);

  //Send pressure data on request
  socket.on('pressure', function () {
    socket.emit('pressureData', pressureData);

  //Send accelerometer data on request
  socket.on('accelerometer', function () {
    socket.emit('accelerometerData', accelerometerData);

  //Send magnetometer data on request
  socket.on('magnetometer', function () {
    socket.emit('magnetometerData', magnetometerData);

  //Client has left or lost connection
  socket.on('disconnect', function () {
    console.log('Client Disconnected');


Step 6: Unity Setup

If you haven’t already, download the latest version of Unity here:

Unity will act as the client to the server running on the MATRIX Creator. Once you have Unity up and running, you’ll need to install the SocketIO plugin from the Asset Store.

In the “SocketIO” folder from the newly downloaded asset, navigate to the “Prefabs” folder and drag and drop the prefab located inside onto the current scene. The SocketIO game object that is added will require you to input your Raspberry Pi’s IP address and the server port defined in the MOS app we made:

  • ws://YOUR_PI_IP:6001/


Step 7: Creating MATRIX.cs

Moving on to the last steps, you’ll need to create a new C# file called MATRIX.cs inside your Unity Assets. Below the library imports are the public booleans that determine which sensors we want from the MATRIX Creator, followed by the definition of the SocketIO object.


using System.Collections;
using System.Collections.Generic;
using UnityEngine;
using SocketIO;

public class MATRIX : MonoBehaviour {
   //Pick Desired Sensors
   public bool useGyroscope = false;
   public bool useUV = false;
   public bool useTemperature = false;
   public bool useHumidity = false;
   public bool usePressure = false;
   public bool useAccelerometer = false;
   public bool useMagnetometer = false;

   private SocketIOComponent socket;


Step 8: On Scene Start

Everything defined in this function is executed once, the moment the current scene starts. The first thing to do here is locate the game object we created from the prefab at the end of step 6. After that, we include an event listener for each sensor, similar to what was done in the MOS app, to handle its values. How the data is handled is described in a later step. The last part is to begin a Coroutine that contains an infinite loop.


   //On Scene Start
   public void Start() {
       //locate prefab
       GameObject go = GameObject.Find("SocketIO");
       socket = go.GetComponent<SocketIOComponent>();
       //Set Event Listeners
       socket.On("open", Open);//connection made
       socket.On("error", Error);//error
       socket.On("close", Close);//connection lost
       //Set MATRIX Sensor Event Listeners
       socket.On("gyroscopeData", gyroscopeData);
       socket.On("uvData", uvData);
       socket.On("temperatureData", temperatureData);
       socket.On("humidityData", humidityData);
       socket.On("pressureData", pressureData);
       socket.On("accelerometerData", accelerometerData);
       socket.On("magnetometerData", magnetometerData);

       //start non-blocking loop
       StartCoroutine(eventLoop());
   }


Step 9: Requesting Sensor Data

This eventLoop() Coroutine is essential because it allows us to write non-blocking code while requesting sensor data. An endless while(true) loop is defined here to request sensor data based on which booleans were set to true in step 7. For each boolean that is true, the loop emits a sensor event to the MATRIX Creator, which responds by sending back an event containing the sensor data.


    // Requesting Device Data
    private IEnumerator eventLoop() {
        //delay to properly initialize
        yield return new WaitForSecondsRealtime(0.1f);
        //loop forever
        while (true) {
            yield return new WaitForSecondsRealtime(0f);//no delay
            //Request sensors if enabled
            if (useGyroscope) socket.Emit("gyroscope");
            if (useUV) socket.Emit("uv");
            if (useTemperature) socket.Emit("temperature");
            if (useHumidity) socket.Emit("humidity");
            if (usePressure) socket.Emit("pressure");
            if (useAccelerometer) socket.Emit("accelerometer");
            if (useMagnetometer) socket.Emit("magnetometer");
        }
    }


Step 10: Handling Sensor Data

Here is where we define the functions that the event listeners in step 8 call. The first three functions log connection, disconnection, and errors when connecting to the server running in MOS. The rest of the functions are for each sensor the MATRIX Creator has. Similar to our MOS app, each function reads the data passed to it and stores it in a static class that can be read by other scripts.


    // Event Listener Functions

    // On Connection
    public void Open(SocketIOEvent e) {
        Debug.Log("[SocketIO] Open received: " + + " " +;
    }
    // Error
    public void Error(SocketIOEvent e) {
        Debug.Log("[SocketIO] Error received: " + + " " +;
    }
    // Lost Connection To Server
    public void Close(SocketIOEvent e) {
        Debug.Log("[SocketIO] Close received: " + + " " +;
    }
    // Gyroscope
    public static class Gyroscope {
        public static float yaw = 0f;
        public static float pitch = 0f;
        public static float roll = 0f;
        public static float x = 0f;
        public static float y = 0f;
        public static float z = 0f;
    }
    public void gyroscopeData(SocketIOEvent e) {
        Gyroscope.yaw = float.Parse(["yaw"].ToString());
        Gyroscope.pitch = float.Parse(["pitch"].ToString());
        Gyroscope.roll = float.Parse(["roll"].ToString());
        Gyroscope.x = float.Parse(["x"].ToString());
        Gyroscope.y = float.Parse(["y"].ToString());
        Gyroscope.z = float.Parse(["z"].ToString());
    }
    // UV
    public static class UV {
        public static float value = 0f;
        public static string risk = "";
    }
    public void uvData(SocketIOEvent e) {
        UV.value = float.Parse(["value"].ToString());
        UV.risk =["risk"].ToString();
    }
    // Temperature
    public static class Temperature {
        public static float value = 0f;
    }
    public void temperatureData(SocketIOEvent e) {
        Temperature.value = float.Parse(["value"].ToString());
    }
    // Humidity
    public static class Humidity {
        public static float value = 0f;
    }
    public void humidityData(SocketIOEvent e) {
        Humidity.value = float.Parse(["value"].ToString());
    }
    // Pressure
    public static class Pressure {
        public static float value = 0f;
    }
    public void pressureData(SocketIOEvent e) {
        Pressure.value = float.Parse(["value"].ToString());
    }
    // Accelerometer
    public static class Accelerometer {
        public static float x = 0f;
        public static float y = 0f;
        public static float z = 0f;
    }
    public void accelerometerData(SocketIOEvent e) {
        Accelerometer.x = float.Parse(["x"].ToString());
        Accelerometer.y = float.Parse(["y"].ToString());
        Accelerometer.z = float.Parse(["z"].ToString());
    }
    // Magnetometer
    public static class Magnetometer {
        public static float x = 0f;
        public static float y = 0f;
        public static float z = 0f;
    }
    public void magnetometerData(SocketIOEvent e) {
        Magnetometer.x = float.Parse(["x"].ToString());
        Magnetometer.y = float.Parse(["y"].ToString());
        Magnetometer.z = float.Parse(["z"].ToString());
    }
}//end of MATRIX class


Step 11: Reading Data

With MATRIX.cs done, all that’s left is to attach the script to the SocketIO object in our scene. Once attached, there will be checkboxes that let you pick which sensors you want to read. Each sensor chosen will log its value in the Unity console. If you see the values of the sensors you chose, you’re good to go! Usage for reading each sensor in Unity can be found here:



Ways of the SD card

You may find yourself needing to back up your SD card for future reference or for posterity and fame. Whatever your reason, there are several well-documented ways you can do it.

In some cases you might also want to get things back from your backup, and you then generally need to write it back to an SD card to do so.

Using a Linux distribution of your choice on your desktop system, this article shows how to back up a card and get contents right from the .img backup file. I used Linux Mint, but the procedure should be fairly similar for other distributions too.


Reading a Raspbian SD card

If you are on a Linux platform, reading your Raspbian SD card is as easy as plugging it into your SD card reader; the OS will auto-mount it for you.

My Mint desktop mounts my cards under




Getting hold of the contents is obviously very easy in this case. Use the GUI or the Terminal to move and read your files the way you would for any other directory on your system.


Backup the SD card

As I said earlier, there are several ways this can be done; check the official pages from the Raspberry Pi Foundation or this really nice article on syntax-err0r, which explains how to do it from a live system!

Remember that it is better to unmount the device that you’d like to back up.

Check what devices are available with


sudo fdisk -l



and run




to check that none of the partitions of the device you want to back up are in use.

If, for example, you are running a graphical desktop, then your SD card is automatically mounted


in which case you need to either eject the card from the GUI or run


umount /dev/sdx1 && umount /dev/sdx2


Note that you might have more partitions on your SD card; make sure to unmount them all.

Whichever way you choose to go about creating your backup, it pretty much boils down to creating a .img file.

You will generally run


sudo dd bs=4M if=/dev/sdx of=backup.img


and restore as


sudo dd bs=4M if=backup.img of=/dev/sdx


where sdx is the device assigned to your SD card in your Linux system.


You can even combine the command with gzip so that the backup takes much less space on your backup device. This helps especially when the card is almost empty, as the plain dd command does not take empty space into consideration and just adds it to the image. So if you have a 16GB SD card with 4GB of data, you will get a 16GB file with the method above!

To use gzip


sudo dd bs=4M if=/dev/sdx | gzip > backup.img.gz


and restore as


gunzip --stdout backup.img.gz | sudo dd bs=4M of=/dev/sdx


Mount the img

One way or another, you should now have a .img file sitting somewhere. If you used gzip to compress the image, unzip it at this point.


gunzip backup.img.gz


The first thing to do is to have a look at the partitions within the image file.


fdisk -lu backup.img




This will tell us what the offset for the data partition is. The SD card has at minimum two partitions; one is the boot partition, and we would generally not be interested in it.

The offset is calculated by multiplying the unit size by the start sector of the partition we need to mount.

In our case the unit size is 512 bytes and the start sector is 94208, so the following command


sudo mount -t auto -o loop,offset=$((94208*512)) backup.img /mnt


will mount backup.img2 in /mnt, which is generally available and free on most systems. Use another mountpoint if you need to.

Equally, if you wanted to mount backup.img1 you would need to use an offset of 8192*512.
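A quick sanity check of that arithmetic, using the sector numbers from the example fdisk output above (yours may differ); a Node.js one-off works just as well as a calculator:

```javascript
// mount's offset= option expects: unit size (bytes) * start sector
var unitSize = 512;
var bootOffset = 8192 * unitSize;    // backup.img1 (boot)
var rootOffset = 94208 * unitSize;   // backup.img2 (rootfs)
console.log(bootOffset);  // 4194304
console.log(rootOffset);  // 48234496
```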



Once the partition is mounted you can then proceed to retrieve whichever file you were after in the first place. You can use the same approach if you want to add files or change existing ones on the backup. Once unmounted, the backup can be restored to an SD card with all the changes you have made, making it a good way to keep an updated master backup.



  • If you want some feedback on the progress of your backup or restore, try using dcfldd instead of dd. You might need to install it with
    apt-get install dcfldd
  • All the dd commands above will work perfectly every time, but the purists will advise you to run the sync command after each dd command. You can either run it separately or in line with your dd commands by adding
    && sync

This guide provides step-by-step instructions for wiring a robot arm to your MATRIX Creator and then having that arm hit a gong whenever a Stripe sale or Slack command is received. It demonstrates how to use the GPIO pins of the MATRIX Creator to control a servo and how to receive a Slack or Stripe response to trigger the arm.



Required Hardware

Before you get started, let's review what you'll need.

  • Raspberry Pi 3 (Recommended) or Pi 2 Model B (Supported) - Buy on Element14 - Pi 3 or Pi 2.
  • MATRIX Creator - The Raspberry Pi does not have a built-in microphone, the MATRIX Creator has an 8 mic array perfect for Alexa - Buy MATRIX Creator on Element14.
  • Micro-USB power supply for Raspberry Pi - 2.5A 5V power supply recommended
  • Micro SD Card (Minimum 8 GB) - You need an operating system to get started. NOOBS (New Out of the Box Software) is an easy-to-use operating system install manager for Raspberry Pi. The simplest way to get NOOBS is to buy an SD card with NOOBS pre-installed - Raspberry Pi 16GB Preloaded (NOOBS) Micro SD Card. Alternatively, you can download and install it on your SD card.
  • A USB Keyboard & Mouse, and an external HDMI Monitor - we also recommend having a USB keyboard and mouse as well as an HDMI monitor handy if you're unable to remote (SSH) into your Pi.
  • Internet connection (Ethernet or WiFi)
  • (Optional) WiFi Wireless Adapter for Pi 2 (Buy on Element14). Note: Pi 3 has built-in WiFi.
  • Gong - You can use a bell or anything else that makes noise when hit - Buy on Amazon
  • Robot Arm - we recommend the meArm because it is a simple robot arm with many degrees of motion - Buy Here
  • Jumper Wires - used to connect the robot arm to the MATRIX Creator - Buy on Amazon

For extra credit, enable remote(SSH) into your device, eliminating the need for a monitor, keyboard and mouse - and learn how to tail logs for troubleshooting.


Let's get started

We will be using MATRIX CORE to program the Raspberry Pi and MATRIX Creator in Javascript by using its Protocol Buffers.


Step 1: Build your meArm

Follow this guide to build your meArm. (skip step 2 of the meArm guide)


Step 2: Setting up MATRIX CORE

Download MOS (MATRIX Open System) to get all the dependencies for MATRIX CORE on your computer using the following installation guide in the MATRIX Docs: Installation Guide


Step 3: Create the app folder and its files

In the directory of your choice, create a folder called MATRIX-Gong-Master. Within that folder create three files named as follows: app.js, configure.json, and package.json.


Step 4: Configuration

The configure.json file is used to hold your API keys, server port, and the Slack channel to post in. Here is the code (the two key values are placeholders; replace them with your own keys from the Slack and Stripe setup steps):


{
    "apiKeys": {
        "slack": "YOUR_SLACK_TOKEN",
        "stripe": "YOUR_STRIPE_SIGNING_SECRET"
    },
    "slackChannel": "#gong_master",
    "serverPort": 6000
}


Follow Step 5 to retrieve your Slack access token and follow Step 6 to retrieve your Stripe key. Set the slackChannel to your desired channel to list all gong events and set the serverPort to the port you will be using to accept the requests.


Step 5: Slack Setup

To properly integrate Slack, a few minor edits need to be made before we insert the API Key.

1. Create a new Slack app and select which team you want to install it for.

2. Under features, click on slash commands to create a new command.

          Set the Command to what you would like to type to trigger the arm in Slack.

          Point the Request URL to http://YOUR-PUBLIC-IP:PORT/slack_events. You can find your Public IP here and you can learn how to port forward here.

          Example of this below:

Screen Shot 2017-08-30 at 12.03.44 PM.png

3. Once saved, go into Bot Users and set a username of your choice.

4. The next step for Slack is to go into OAuth & Permissions and allow the following under Scopes:

          Send a bot with the username [your bot's name]

          Post messages as [your bot's name]

          Example of this below:

Screen Shot 2017-08-30 at 12.11.22 PM.png

5. Slack is now configured to run your Gong Master! At the top of the page you'll find 2 API Keys. Copy the Bot User OAuth Access Token and paste it into your configure.json file. Example of the API keys below:

Screen Shot 2017-08-30 at 12.21.14 PM.png

Step 6: Stripe Setup

1. If you do not already have a Stripe account register here and activate your account.

2. Go to API on the left side of the Stripe Dashboard and click Webhooks at the top.

3. Click Add endpoint on the right and type your URL that you will be received requests at as follows: http://YOUR-PUBLIC-IP:PORT/events. You can find your Public IP here and you can learn how to port forward here.

4. From there select the Webhook version you would like to use and press "Select types to send" where you will be able to select what event types you want to accept. In our case we will be using "charge.succeeded" and "invoice.payment_succeeded". Example of this below:

Screen Shot 2017-08-31 at 9.32.53 AM.png

5. Stripe is now configured to send events to your URL. Go to the Webhook you just created click "Click to reveal" in the Signing Secret section to retrieve your API key to add to your configure.json file. Example of this below:

Screen Shot 2017-08-31 at 9.44.37 AM.png

Step 7: Robot Arm Wiring

1. Using the jumper wires, we are going to wire the bottom servo of the robot arm to the MATRIX Creator. First connect the yellow servo wire to the pin on the MATRIX Creator labeled GP00.

2. Connect the Red servo wire to one of the pins on the MATRIX Creator labeled 5V. (there are two pins labeled 5V, either one will work)

3. Finally connect the Brown Servo wire to one of the pins on the MATRIX Creator labeled GND. (there are two pins labeled GND, either one will work)

Examples of this below:




Step 8: app.js Code Overview

Below all the code for the app.js file is reviewed. You can copy and paste it all or copy the file from the GitHub repo for the project here.


Global Variables

This section defines and configures all the necessary libraries we need.

// Global Vars
var creator_ip = '';//local ip
var creator_servo_base_port = 20013 + 32;//port to use servo driver
var matrix_io = require('matrix-protos').matrix_io;//MATRIX protocol buffers
//Setup connection to use MATRIX Servos
var zmq = require('zmq');
var configSocket = zmq.socket('push');
configSocket.connect('tcp://' + creator_ip + ':' + creator_servo_base_port);
//Api keys
var fs = require("fs");
var userConfig = JSON.parse(fs.readFileSync(__dirname+'/configure.json'));
var stripe = require('stripe')(userConfig.apiKeys.stripe);
var request = require('request');
var express = require('express');
var bodyParser = require('body-parser');
var app = express();


Set Servo Position

This function is meant to simplify moving a servo in MATRIX CORE. The pin for the servo is set to 0, but it can be changed freely to any other pin.

function moveServo(angle){
    //configure which pin and what angle
    var servo_cfg_cmd = {
        pin: 0,
        angle: angle
    };
    //build move command
    var servoCommandConfig = matrix_io.malos.v1.driver.DriverConfig.create({
        servo: servo_cfg_cmd
    });
    //send move command
    configSocket.send(matrix_io.malos.v1.driver.DriverConfig.encode(servoCommandConfig).finish());
}


Gong Swing Timing

Using our previously defined function for moving servos, this function creates the swing motion that is called when we want our Gong Master to use the gong. The variables above the function, gongsInQueue and gongInUse, allow the gong to handle multiple requests and to properly wait for each swing to finish before swinging again.

var gongsInQueue = 0;//gongs requested
var gongInUse = false;//control swing usage

function gongMaster(){
    setInterval(function() {
        //checks for gongs queued and for current swing to stop
        if(gongsInQueue > 0 && !gongInUse){
            gongInUse = true;
            gongsInQueue--;//lower queue amount by 1
            moveServo(180);//swing gong arm
            //delay for position transition
            setTimeout(function(){
                moveServo(90);//gong arm rest position
                //delay for position transition
                setTimeout(function(){
                    gongInUse = false;
                }, 1000);//adjust delay to your servo
            }, 1000);//adjust delay to your servo
        }
    }, 100);//poll the queue every 100ms
}


Post Slack Message

Using your Slack API key, a message can be posted to the Slack channel set in configure.json.

function logToSlack(message){
    request({
        // HTTP Archive Request Object
        har: {
            url: '',//Slack Web API endpoint
            method: 'POST',
            headers: [{
                name: 'content-type',
                value: 'application/x-www-form-urlencoded'
            }],
            postData: {
                mimeType: 'application/x-www-form-urlencoded',
                params: [
                    { name: 'token', value: userConfig.apiKeys.slack },
                    { name: 'channel', value: userConfig.slackChannel },
                    { name: 'link_names', value: true },
                    { name: 'text', value: message }
                ]
            }
        }
    });
}


Handle API Events

This function is where the events for the Slack and Stripe APIs are handled. Once an event from either API is processed, gongsInQueue is increased to let the gongMaster() function know that it's time to gong!

function processEvents(api, event){
    //stripe events
    if(api === 'stripe'){
        if(event.type === 'charge.succeeded'){
            if( === 'paid'){
                console.log('There was a charge for ' + event.data.object.amount);
                logToSlack("A Charge Has Occurred");
                gongsInQueue++;//gong once
            }
        }
        else if(event.type === 'transfer.paid'){
            if( === 'paid'){
                console.log('There was a transfer for ' + event.data.object.amount);
                logToSlack("A Transfer Has Occurred");
                gongsInQueue+=2;//gong twice
            }
        }
    }
    //slack event
    else if(api === 'slack'){
        //check that slack is sending a slash command event
        if(typeof event.command !== 'undefined' && event.command !== null){
            //check that the command is /gong
            if(event.command === '/gong'){
                logToSlack('@'+event.user_name+' has summoned me!');
                gongsInQueue++;//gong once
            }
        }
    }
    //unhandled event
    else {
        console.log('I was not made to handle this event');
    }
}



Create Server

The final part of the code creates the server that listens for messages from Stripe and Slack. Once the server receives a message (POST request), it makes use of all the previously defined functions.

app.use(bodyParser.urlencoded({ extended: true })); //handle urlencoded extended bodies
app.use(bodyParser.json()); //handle json encoded bodies

//STRIPE POST Request Handling'/events', function(req, res) {
    processEvents('stripe', req.body);//begin gong process
    res.sendStatus(200);//everything is okay

//SLACK POST Request Handling'/slack_events', function(req, res) {
    //check that request is from slack (not guaranteed)
    if(req.headers['user-agent'].indexOf('') > 0){
        processEvents('slack', req.body);//begin gong process
        console.log("received request from slack");
        res.send(req.body.user_name + ', Your Wish Has Been Gonged!');//response to user for /gong
    }
    //request is not from slack
    else {
        res.send('You Have Angered The Gong Master!');

//Create Server
app.listen(userConfig.serverPort, function() {
    console.log('Gong listening on port '+userConfig.serverPort+'!');
    gongMaster();//listening for gong requests


Step 9: Code for package.json

This is the reference for all the libraries and scripts used in this project.

  "name": "gong_master",
  "version": "1.0.0",
  "description": "robot gong that uses the slack and stripe api",
  "main": "app.js",
  "scripts": {
    "test": "echo \"Error: no test specified\" && exit 1"
  "author": "Carlos Chacin",
  "license": "ISC",
  "dependencies": {
    "body-parser": "^1.17.2",
    "express": "^4.15.4",
    "matrix-protos": "0.0.13",
    "request": "^2.81.0",
    "stripe": "^4.24.1",
    "zmq": "^2.15.3"


Step 10: Running the program

From the project directory run "node app.js" in the CLI to start the program.

To test in Slack, use the command you made and the Gong Master should respond to your request!



All code for the app can be found on GitHub here:

MATRIX Creator Eclipse Weather App

In celebration of Eclipse Day, we have made this app to tell you what the weather is like outside, so you know whether your current local weather conditions will let you see the eclipse. This guide provides step-by-step instructions for finding your general location and using it to give you information about the weather via a series of LED animations on a Raspberry Pi with a MATRIX Creator. It demonstrates how to use an IP geolocation service to find your location and then feed it to the Dark Sky API to get the relevant local weather information, which is used to show an LED animation on your MATRIX Creator. The main goal of this app is to give you an interesting new way to receive your current weather conditions.


Required Hardware

Before you get started, let's review what you'll need.

  • Raspberry Pi 3 (Recommended) or Pi 2 Model B (Supported) - Buy on Element14 - Pi 3 or Pi 2.
  • MATRIX Creator - The Raspberry Pi does not have a built-in microphone, the MATRIX Creator has an 8 mic array perfect for Alexa - Buy MATRIX Creator on Element14.
  • Micro-USB power supply for Raspberry Pi - 2.5A 5V power supply recommended
  • Micro SD Card (Minimum 8 GB) - You need an operating system to get started. NOOBS (New Out of the Box Software) is an easy-to-use operating system install manager for Raspberry Pi. The simplest way to get NOOBS is to buy an SD card with NOOBS pre-installed - Raspberry Pi 16GB Preloaded (NOOBS) Micro SD Card. Alternatively, you can download and install it on your SD card.
  • A USB Keyboard & Mouse, and an external HDMI Monitor - we also recommend having a USB keyboard and mouse as well as an HDMI monitor handy if you're unable to remote(SSH) into your Pi.
  • Internet connection (Ethernet or WiFi)
  • (Optional) WiFi Wireless Adapter for Pi 2 (Buy on Element14). Note: Pi 3 has built-in WiFi.

For extra credit, enable remote(SSH) into your device, eliminating the need for a monitor, keyboard and mouse - and learn how to tail logs for troubleshooting.


Let's get started

We will be using MATRIX OS (MOS) to easily program the Raspberry Pi and MATRIX Creator in Javascript.


Step 1: Setting up MOS

Download and configure MOS and its CLI tool for your computer using the following installation guide in the MATRIX Docs: Installation Guide


Step 2: Create a MATRIX-Weather-App

To create your own MATRIX-Weather-App app on your local computer, use the command "matrix create MATRIX-Weather-App". Then you will be directed to enter a description and keywords for your app. A new folder will be created for the app with five new files. The one you will be editing is the app.js file. You will also be creating a file called weatherAnimations.js for the weather animations.

From here you can clone the MATRIX-Weather-App GitHub repo with the code or follow the guide below for an overview of the code. Either way, make sure to follow the instructions in step 4.


Step 3: Global Variables

In the app.js file you will need to set up the following libraries and global variables for the app:


//Load libraries
var weatherAnims = require(__dirname+'/weatherAnimations'); //custom weather animations
var Forecast = require('forecast'); //forecast npm module
var request = require('request'); //request npm module

//Global Variables
//Detailed location data
var location = {};

//Configure forecast options
var forecast = new Forecast({
    service: 'darksky', //only api available
    key: 'YOUR_KEY_HERE', //darksky api key
    units: 'fahrenheit', //fahrenheit or celsius
    cache: false //do not cache forecast data
});


Step 4: Dark Sky API

Within the forecast variable created in Step 3 change YOUR_KEY_HERE to be the API key you get once you make an account with Dark Sky here.


Step 5: Obtaining Location Data

To obtain your location data, we will be using an IP geolocation service to get your latitude and longitude from your IP address. This is done with the following code in the app.js file:


//Obtaining location data
function getLocation(callback){
    //stream the response from the geolocation service
    //catch any errors
    .on('error', function(error){
        return console.log(error + '\nCould Not Find Location!');
    })
    //get response status
    .on('response', function(data) {
        console.log('Status Code: '+data.statusCode);
    })
    //get location data
    .on('data', function(data){
        //save location data
        location = JSON.parse(data);
        //log all location data
        console.log(location);
        callback();
    });
}



Step 6: Selecting Weather Animations

Within the app.js file there will be a function that stops the current LED animation and loads the one corresponding to the weather information provided by Dark Sky. Use the function below:


//Selecting Weather Animation
function setWeatherAnim(forecast){
    //clear MATRIX LEDs
    weatherAnims.emit('stop');
    //set MATRIX LED animation
    weatherAnims.emit('start', forecast);
}


In the MATRIX-Weather-App folder you will need to create a file called weatherAnimations.js. You can find the code for the weatherAnimations.js file here.


Each LED sequence in the weatherAnimations.js file is tied to one of these responses from the Dark Sky API.

  • clear-day
  • clear-night
  • rain
  • snow
  • sleet
  • wind
  • fog
  • cloudy
  • partly-cloudy-day
  • partly-cloudy-night

If there is a hazard such as hail, thunderstorms, or tornadoes, then the LEDs will turn red.

If there is no LED sequence created for the current weather, the LEDs will turn yellow.


Step 7: Obtaining Forecast Data

Using the forecast NPM module, this function in the app.js file retrieves and stores relevant weather information received from Dark Sky. Use the following code:


//Obtaining Forecast data
function determineForecast(lat, lon){
    // Retrieve weather information
    forecast.get([lat, lon], true, function(error, weather) {
        //stop if there's an error
        if(error){
            return console.log(error+'\n\x1b[31mThere has been an issue retrieving the weather\nMake sure you set your API KEY \x1b[0m ');
        }
        //pass weather icon into animation selector
        setWeatherAnim(weather.currently.icon);
        //loop every X milliseconds
        setTimeout(function(){
            determineForecast(lat, lon);
        }, 180000);//3 minutes
    });
}


The weather is updated every 3 minutes.


Step 8: Action Zone

This last function calls all the previous functions and starts the app with the following code:


//Action Zone
//Auto Obtain Location
getLocation(function(){
    //Start Forecast requests
    determineForecast(, location.lon);//input your coordinates for better accuracy ex. 25.7631,-80.1911
});


If you experience an inaccurate forecast, feel free to hardcode your location in place of the and location.lon variables. The inaccuracy in your location is due to the roughly 2-mile error margin of IP-based geolocation.


All code for the app can be found on GitHub here:

Dataplicity released a new feature, "Custom Actions", that might be useful for projects involving remote control.





MathWorks recently ran a mobile devices challenge where users were asked to submit a project in which they programmed their Android or iOS devices using MATLAB or Simulink. There were over 15 submissions that competed for the grand prize of 1000 USD.


The third-place team built a low-cost alternative to expensive GPS systems; click here to read more about this project and learn more about the other two winners. The link contains video references to their projects as well.

