
I am currently running this on a Raspberry Pi 2 with Raspbian GNU/Linux 9 (stretch).

 

1. sudo apt-get update

2. sudo apt-get install apache2

3. sudo mkdir -p /var/www/yourdomain.com/public_html

4. sudo chown -R $USER:$USER /var/www/yourdomain.com/public_html

5. sudo chmod -R 755 /var/www

6. nano /var/www/yourdomain.com/public_html/index.html

 

----------------Add the following to the index.html file-------------

<html>

  <head>

    <title>Welcome to yourdomain.com!</title>

  </head>

  <body>

    <h1>Success!   yourdomain.com is working!</h1>

  </body>

</html>

---------------------------end----------------------------------------

 

7. sudo cp /etc/apache2/sites-available/000-default.conf /etc/apache2/sites-available/yourdomain.com.conf

 

------------file should look like this--------------------------------

7.5 sudo nano /etc/apache2/sites-available/yourdomain.com.conf

This is what you will see......

<VirtualHost *:80>

    ServerAdmin webmaster@localhost

    DocumentRoot /var/www/html

    ErrorLog ${APACHE_LOG_DIR}/error.log

    CustomLog ${APACHE_LOG_DIR}/access.log combined

</VirtualHost>

Change it to this......

8. ServerAdmin admin@yourdomain.com

9. ServerName yourdomain.com

10.ServerAlias www.yourdomain.com

11.DocumentRoot /var/www/yourdomain.com/public_html

Now should look like this....

<VirtualHost *:80>

    ServerAdmin admin@yourdomain.com

    ServerName yourdomain.com

    ServerAlias www.yourdomain.com

    DocumentRoot /var/www/yourdomain.com/public_html

    ErrorLog ${APACHE_LOG_DIR}/error.log

    CustomLog ${APACHE_LOG_DIR}/access.log combined

</VirtualHost>

12.sudo a2ensite yourdomain.com.conf

13.sudo service apache2 restart

----------optional------------------

14.sudo nano /etc/hosts

    111.111.111.111 yourdomain.com  (ip is local ip of pi)

15.http://yourdomain.com

 

LeechFTP and CuteFTP don't do SFTP out of the box (either it doesn't work right away or needs more configuration; I couldn't get them working no matter how much I tried, and couldn't find how to access SFTP).

FileZilla has SFTP = connection right away.

Use your Pi's IP address and the pi user and password. You don't need to enter port 22, as it is the default SFTP port.

 

After last night's sleepy message, I thought I would toss this up again as a reference for the next time I'm too sleepy and need to look at my notes, lol. It's in down-and-dirty note form. If you need an explanation of anything above, just Google setting up a Pi site and it should bring up a more in-depth walkthrough of each setting.

 

Just remember the quickest way to do this is probably to SSH into your Pi once you first start it up, so make sure you go into the settings on the Pi (Raspbian) and enable it. Then you can just copy and paste into PuTTY or whatever you're using.

 

I was going to try Ubuntu Server this time but will get there yet; I just wanted to make sure I had something working.

I ran a WordPress website on the Pi for over 2 years. It works great, uses low power, and as long as you update WordPress here and there it's not that bad.

But I prefer to make my own; I was being lazy and used WordPress. So now I get to have some fun, now that I've got the FTP working as it should.

Also make sure to change your Pi's password. I would also suggest removing the default pi user and adding another one instead.

 

Think I got that all OK there. Anyone else have any input, or am I missing anything?

Also thanks to those that gave me ideas last night and this morning to get things going properly.

This guide provides step-by-step instructions for controlling your MATRIX Creator with a Discord bot. By the end, you'll be able to control the Creator's LEDs and microphones, and run any command you create.

 

Required Hardware

Before you get started, let's review what you'll need.

  • Raspberry Pi 3 (Recommended) or Pi 2 Model B (Supported) - Buy on Element14 - Pi 3 or Pi 2.
  • MATRIX Creator - The Raspberry Pi does not have a built-in microphone; the MATRIX Creator has an 8-mic array, perfect for voice applications like this one - Buy MATRIX Creator on Element14.
  • Micro-USB power supply for Raspberry Pi - 2.5A 5V power supply recommended
  • Micro SD Card (Minimum 8 GB) - You need an operating system to get started. NOOBS (New Out of the Box Software) is an easy-to-use operating system install manager for Raspberry Pi. The simplest way to get NOOBS is to buy an SD card with NOOBS pre-installed - Raspberry Pi 16GB Preloaded (NOOBS) Micro SD Card. Alternatively, you can download and install it on your SD card.
  • A USB Keyboard & Mouse, and an external HDMI Monitor - we also recommend having a USB keyboard and mouse as well as an HDMI monitor handy if you're unable to remote (SSH) into your Pi.
  • Internet connection (Ethernet or WiFi)
  • (Optional) WiFi Wireless Adapter for Pi 2 (Buy on Element14). Note: Pi 3 has built-in WiFi.

For extra credit, enable remote (SSH) access into your device, eliminating the need for a monitor, keyboard, and mouse - and learn how to tail logs for troubleshooting.

 

Let's get started

We will be using MATRIX Open System (MOS) to easily program the Raspberry Pi and MATRIX Creator in JavaScript. The Discord bot will be made using the discord.js module.

 

Step 1: Setting up MOS

Download and configure MOS and its CLI tool for your computer using the following installation guide in the MATRIX Docs: Installation Guide.

 

Step 2: Installing Raspberry Pi Dependencies

In order to use the MATRIX Creator's mics and stream audio to Discord, the following dependencies must be installed on your Pi.

  • Discord Audio Streaming
    • Libav: sudo apt-get install libav-tools
  • MATRIX Creator Mics
    • ALSA Tools: sudo apt-get install alsa-base alsa-utils

 

Step 3: Creating Your Discord Bot

Create a Discord application and add a bot user to it in the Discord Developer Portal; this gives you the app's client credentials and a bot token.

  • Save the bot token - it is needed for the token variable in Step 5.

 

Step 4: Creating Your MOS App

To create your own MATRIX Creator Discord app on your local computer, use the command "matrix create Discord-Bot". You will then be prompted to enter a description and keywords for your app. A new folder will be created for the app, with five files inside. The one you will be editing is the app.js file. From here, you can clone the MATRIX-Discord-Bot GitHub repository with the code or follow the guide below for an overview of the code.

 

Step 5: Dependencies & Global Vars

The bot variable is a discord.js client object, which allows us to read from and respond through the Discord API. To log in, the bot requires your Discord app's bot token. mic is then defined to grab microphone input from the MATRIX Creator; this will be used to stream that input into a Discord voice channel. To keep track of which voice channel the bot has joined, currentVoiceChannel is defined to hold that information.

var Discord = require('discord.js');//https://discord.js.org
var bot = new Discord.Client();//Discord Bot Object
var token = 'YOUR_BOT_TOKEN_HERE';//Discord Bot Token
var mic = require('mic');//Stream wrapper for arecord
var currentVoiceChannel;//bot's current voice channel

 

Step 6: Creating Discord Commands

To organize commands, each command you make needs to be assigned to a command group. These groups hold the commands you create in their list array. The addGroupCommand function is where you create and assign a command to a group. For example, a hello command assigned to the 'matrix' group can then be used by typing '/matrix hello' in the Discord chat. A callback you define will run when the command is called.

//////////////////////////////////////////////////////////
// Discord Commands
//////////////////////////////////////////////////////////
var commandGroups = {
    'matrix': {command: '/matrix', list:[]},//matrix commands
    'basic' : {command: '/basic', list:[]}//basic chat commands
};


//Command Creator
function addGroupCommand(group, commandName, description, command) {
    //add command to command group
    commandGroups[group].list.push({
        commandName : commandName,//name
        description: description,//desc
        command: command//function to run
    });
}
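
For instance, the hello command mentioned above could be registered like this (a hypothetical example, not part of the original code; the command name and reply text are only illustrative):

//Example: register a simple 'hello' command in the 'matrix' group
//Usable in Discord chat as: /matrix hello
addGroupCommand('matrix', 'hello', 'Reply With A Greeting', function(userArgs, message){
    message.reply('Hello from the MATRIX Creator!');
});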

 

Step 7: Running A Command

The commandSearch function uses the given command group and Discord message to check whether any command in that group was called. If a match is found, the command runs. If the group was called but no matching command was found, the user is sent a message listing all of that group's commands. The function returns true whenever the command group was called and false when it was not.

//Look For & Use Command In Command Group
function commandSearch(group, message){
    var userArgs = message.content.split(' ');//Convert User Arguments Into An Array
    var commandFound = false;//bool on sending help menu
    var commandHelp = commandGroups[group].command+' Commands:\n';//will hold the command group's commands


    //check if command group was called
    if (userArgs[0] === commandGroups[group].command) {
        //Search For Command In Group
        for(var i = 0; i < commandGroups[group].list.length; i++){
            //save command and description to help string
            commandHelp += '\n' + commandGroups[group].list[i].commandName +' - '+ commandGroups[group].list[i].description;
            //If command is found
            if( userArgs[1] === commandGroups[group].list[i].commandName){
                //Use command
                commandGroups[group].list[i].command(userArgs, message);
                //Update commandFound
                commandFound = true;
            }
        }
        //If Command Not Found
        if(!commandFound){
            message.reply('```'+commandHelp+'```');//reply with command list
        }
        //command group was found
        return true;
    }
    //command group was not called
    else
        return false;
}

 

Step 8: MATRIX LED Command

To change the MATRIX Creator’s LEDs, the function looks for an input after ‘/matrix led’. The input (LED color) is then inserted into the matrix.led command. A proper usage reply will be sent to the user if they don’t have a parameter in their message.

//////////////////////////////////////////
// MATRIX Command Group
// - Change MATRIX LEDs
addGroupCommand('matrix', 'led', 'Change Color of MATRIX LEDs', function(userArgs, message){
    //Look For Color Input
    if (userArgs.length === 3){
        message.reply('```Using: matrix.led(\'' +userArgs[2]+ '\').render()```');
        console.log(userArgs[2]);
        matrix.led(userArgs[2]).render();//change colors
    }
    //Command Had No/Bad Input
    else{
        //reply command usage
        message.reply('```\nCommand Usage:\n\t'+
        '/matrix led purple'+'        //color name\n\t'+
        '/matrix led rgb(255,0,255)'+'//rgb values\n\t'+
        '/matrix led #800080'+'       //css color'+
        '```');
    }
});

 

Step 9: MATRIX Join Command

The join command is used to stream the MATRIX Creator microphones into a voice channel. The command takes no parameters and will automatically join the voice channel the user is currently in; that channel is saved in the currentVoiceChannel variable. The audio will have about a 6-second delay when the stream first starts, but it will shorten as time passes.

// - Listen To MATRIX Mics
addGroupCommand('matrix', 'join', 'MATRIX Joins Your Voice Channel', function(userArgs, message){
    //continue if no args are present
    if(userArgs.length === 2){
        message.reply('Joining Voice Channel');
        //User Must Be In Voice Channel
        if (message.member.voiceChannel) {
            //just move if in voice channel
            if(currentVoiceChannel !== undefined){
                message.member.voiceChannel.join();//join voice channel
                currentVoiceChannel = message.member.voiceChannel;//save joined channel id
            }
            //join and reinitialize mics
            else{
                //join voice channel
                message.member.voiceChannel.join().then(connection => {
                    //save joined channel id
                    currentVoiceChannel = message.member.voiceChannel;
                    //npm mic config
                    var micInstance = mic({
                        rate: 16000,
                        channels: '1',
                        debug: false,
                        exitOnSilence: 0,
                        device : 'mic_channel8'
                    });
                    var micInputStream = micInstance.getAudioStream();//mic audio stream
                    //when mics are ready
                    micInputStream.on('startComplete', function(){
                        var dispatcher;//will serve audio
                        dispatcher = connection.playArbitraryInput(micInputStream);//stream mics to Discord
                        console.log('mics ready');
                    });
                    //start mics
                    micInstance.start();
                });
            }
        }
        //User Is Not In Voice Channel
        else{
            message.reply('You need to join a Voice channel first!');
            return;
        }
    }
    //Tell user to use no args
    else
        message.reply('```"/matrix join" has no parameters```');
});

 

Step 10: MATRIX Leave Command

This command will tell the bot to leave the channel saved in currentVoiceChannel. There’s also another command, at the bottom, for getting the link to the MATRIX documentation.

// - MATRIX Leaves Voice Channel
addGroupCommand('matrix', 'leave', 'MATRIX Leaves Current Voice Channel', function(userArgs, message){
    //continue if no args are present
    if(userArgs.length === 2){
        //leave current voice channel
        if(currentVoiceChannel !== undefined){
            message.reply('Leaving Voice Channel');
            currentVoiceChannel.leave();            
            //remove saved voice channel id
            currentVoiceChannel = undefined;
        }
        else
            message.reply('Currently not in a voice channel!');
    }
    //Tell user to use no args
    else
        message.reply('```"/matrix leave" has no parameters```'); 
});
// - MATRIX Documentation Link
addGroupCommand('matrix', 'docs', 'Link To MATRIX Documentation', function(userArgs, message){
    message.reply('https://matrix-io.github.io/matrix-documentation/');
});

 

Step 11: Basic Ping Command

This is a simple ping command to show you how easy it is to create and organize new commands. The command itself will simply reply 'pong' to any user that types '/basic ping'.

//////////////////////////////////////////
// BASIC Command Group
// - A Simple Ping
addGroupCommand('basic', 'ping', 'Reply To User Ping', function(userArgs, message){
    message.reply('pong');
});

 

Step 12: Discord Message Event

This message event fires whenever a message the bot can read appears. Private messages sent to the bot are ignored. Any other message is passed to a for loop that runs the commandSearch function, comparing the message against each existing command group and running the command that matches.

//////////////////////////////////////////////////////////
// Discord Events
//////////////////////////////////////////////////////////
//On Discord Message
bot.on('message', function(message){
    //Accept Text Channel & User Messages Only
    if (!message.guild && bot.user.id !== message.author.id){
        message.reply('You need to join a Text channel first!');
        return;
    }


    //Check If User Message
    if (bot.user.id !== message.author.id){
        //Loop through commandGroup groups
        for (var group in commandGroups) {
            //Search for and run command
            if (commandGroups.hasOwnProperty(group) && commandSearch(group, message))
                break;//leave loop
        }
    }
});

 

Step 13: Logging In

The previously defined token is used to log the newly created Discord bot in.

//On Discord Bot Login
bot.on('ready', function(){
    console.log('ready');
});


//Start Discord Bot
bot.login(token);

 

Step 14: package.json

Before deploying to your MATRIX Creator, update your package.json file to include these dependencies. MOS will install everything automatically when it installs your app.

{
  "name": "discordBot",
  "version": "1.0.0",
  "description": "",
  "main": "index.js",
  "scripts": {
    "test": "echo \"Error: no test specified\" && exit 1"
  },
  "author": "",
  "license": "ISC",
  "dependencies": {
    "discord.js": "^11.2.1",
    "mic": "^2.1.2",
    "node-opus": "^0.2.7"
  }
}

 

Github: The entire repository for this guide can be found here:

Matt Reed from RedPepper has used a Raspberry Pi, a microphone, a creepy doll, and Google's Speech Neural Network system to listen in on.... ghosts.

 

 

 

"From October 27–31, we’ll be live streaming the DeepWhisper rig nightly from our offices in historic “Butchertown” Nashville so you can watch for any EVPs that may come through. Just the thing to do at 3am when you can’t sleep." - Matt Reed

 

The DeepWhisper project pipes a real-time microphone stream to Google's Speech Neural Network, which can detect over 110 languages, and then immediately displays the results as they come back.

 

www.DeepWhisper.io

 

 

 

 

 

 

 

Deep Whisper is open source, so anyone can hunt their own ghosts.

It runs on Node, and its libraries have been optimized for the Raspberry Pi. You'll need a USB microphone, a Google Cloud Platform project key, a display, and patience.

Matt will upload a link to the full repository soon, but for now here are the key code snippets.

 

Connecting to Google Voice Neural Network

You’ll need to have a project set up in the Google Cloud Platform console which will grant you an authentication JSON key that your app will use to connect. Just follow these steps to get that going. Note: you may have to set up billing with Google to proceed.

 

// Authenticate with Google Cloud
const speech = require('@google-cloud/speech')({
  projectId: 'deepwhisper-XXXXXXXX',
  keyFilename: 'Deepwhisper-XXXXXXX.json'
});

 

Streaming mic input to Google

Simply pipe the microphone input to Google, and if the Neural Network detects speech in any of the 110+ supported languages, it will be returned as a string of transcript text.

 

// Connect and listen to USB microphone
const mic = require('mic'); // stream wrapper around arecord (missing from the original snippet)
const micInstance = mic({
  rate: '16000',
  channels: '1',
  debug: true,
  exitOnSilence: 0
});
const micInputStream = micInstance.getAudioStream();
// Create a real-time recognize stream with Google
// ('request' is the streaming recognition config - encoding, sample rate, language - defined elsewhere)
const recognizeStream = speech.streamingRecognize(request)
  .on('error', console.error)
  .on('data', (data) =>
    (data.results[0] && data.results[0].alternatives[0])
      ? io.emit('text', { transcript: data.results[0].alternatives[0].transcript })
      : `\nReached transcription time limit, press Ctrl+C\n`);
// Pipe the mic audio into the recognize stream and start recording
micInputStream.pipe(recognizeStream);
micInstance.start();

 

Display the results

Using a simple HTML page with Socket.IO, you can receive the results emitted by the server above and display them immediately. This uses jQuery to set the text and fade it out after five seconds.

 

<script>
  var socket = io.connect('http://localhost:3000');
  socket.on('text', function (data) {
    $('#transcript').text(data.transcript);
    $('#transcript').fadeIn();
    setTimeout(function () {
      $('#transcript').fadeOut(function () {
        $('#transcript').text('');
      });
    }, 5000);
  });
</script>
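
For reference, the io object used by the recognize stream handler on the server side has to come from a Socket.IO server. Here is a minimal sketch of that server setup, assuming an Express app listening on port 3000 to match the client connection above (this is an illustration, not taken from the DeepWhisper source):

// Minimal server-side sketch (assumed setup): serves the HTML page and
// provides the 'io' object used when emitting transcripts.
const express = require('express');
const http = require('http');

const app = express();
const server = http.createServer(app);
const io = require('socket.io')(server); // io.emit('text', ...) sends transcripts to the page

app.use(express.static(__dirname)); // serve the HTML page containing the #transcript element

server.listen(3000, function () {
  console.log('DeepWhisper display available at http://localhost:3000');
});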

Source and project: Matt Reed at RedPepper.land

www.DeepWhisper.io


Home Automation in the UK Simplified, Part 1: Energenie MiHome

Join Shabaz as he works on his IoT home!

Learn about home automation using the Raspberry Pi, Energenie MiHome and Node Red.

Check out our other Raspberry Pi Projects on the projects homepage


 

Note: Although this blog post covers a UK home automation solution, a lot of the information is still relevant for other regions. The information here shows how to create software applications and graphical user interfaces, using a block-based system called Node-RED and JavaScript, that can communicate with hardware and with cloud-based services. It also shows how to convert the Raspberry Pi to run in a sort of 'kiosk mode', where the user interacts with the Pi as an end appliance through a graphical touch-screen interface. Finally, it shows how to provide auto-dimming capability for the touchscreen display, to suit environments with varying light conditions.

 

Introduction

A few months ago, the topic of home automation in the UK was explored, and how it could be achieved safely, at low cost. It turned out to be simple; attach radio-controlled mains sockets, mains adapters and light switches into your home, and connect a Mi|Home Gateway box into your existing home router. The gateway has a 433MHz radio to talk to the sockets and switches, and connects via the Internet to a free cloud service called Energenie Mi|Home.

 

This is sufficient to be able to control your home using the buttons on the sockets and switches, and using a web browser or mobile app downloadable from the Mi|Home website or iPhone/Android mobile app store.

 

The home automation was enhanced by purchasing a low-cost Amazon Echo box which connects to the home network wirelessly. It allows for voice control of your home appliances.

 

Not everyone wants voice control, although I prefer it. No need to touch and share the germs using a touch-screen : ) Nevertheless, many users still prefer touching buttons or a screen for control. There is also the desire to be able to programmatically control things using something like a Raspberry Pi, for more intelligent automation than just 'if this then that' style encoding of behaviour into your home. It would be perfectly feasible for the Pi to identify that a user has picked up a book, and automatically turn on the reading lamp. I decided to try to implement a large touchscreen on the wall to control the home, in conjunction with retaining voice control and browser control. I also wanted to use a simple programming environment that could allow for more elaborate schemes in future.

 

This part 2 deals with how to go about this, using a Raspberry Pi 3 for the programming environment and for running a user interface, and a capacitive touch-screen for monitoring and control.

 

The project is really easy from a hardware perspective; the Pi just needs connecting to the home network (either using the built-in wireless capability, or the Ethernet connection available on the Pi). Any display could be selected, but the capacitive touch screen of course makes life easier because touch can be used! No keyboard required.

 

Further below in this blog post, the hardware design is extended slightly to provide auto-dimming capabilities to suit varying home lighting conditions.

 

To build the solution described here, the mandatory Energenie MiHome bits you need are the MiHome Gateway, and at least one MiHome control element such as a MiHome mains adapter.

 

An Amazon Echo, or Echo Dot device is optional but provides useful voice control as discussed in the earlier blog post.

 

The diagram here shows the approximate topology inside the home. It is really straightforward, difficult to go wrong!

 

Just to recap, the home devices such as lights and sockets are controlled via radio. These are shown at the top of the diagram. The hub that communicates over radio to them is the MiHome (also referred to as Mi|Home) Gateway. It connects to the Internet (for example using DSL) by plugging into your existing home Internet router. The user sets up an account at the Energenie MiHome website and downloads an app if desired. From here the user can control any device from anywhere with an Internet connection.

 

Voice commands are possible due to integration between Amazon’s Alexa service and the MiHome cloud service. All it requires is for the user to obtain an Amazon Echo or Echo Dot device as mentioned earlier, and run a small bit of configuration; all this was covered in Home Automation in the UK Simplified, Part 1: Energenie MiHome

 

This part 2 now covers the green portion in the diagram above. Basically it connects a Raspberry Pi to the solution. The Pi communicates to the MiHome service using an application programming interface (API). A user interface also runs on the Pi, so that a connected touchscreen can be used for controlling and monitoring the home. The typical flow of information is therefore:

 

  1. The user presses a selection on the touchscreen
  2. The Pi sends the command in a specific format (using the API) to the MiHome web service in the cloud
  3. The MiHome service looks up the pre-registered user, and sends commands to the MiHome Gateway
  4. The MiHome Gateway unwraps the command and converts it into a radio signal
  5. The radio signal is picked up by the appliance's intelligent mains socket, which switches the connected appliance on or off

 

In the event of network failure, the local controls on each mains socket will continue to function. The touchscreen controls can also continue to function since the Pi can switch to radio mode, sending commands directly to the IoT devices, using a radio module plugged on top of the Pi. This last capability is outside the scope of this blog post and may be covered in a later article if there is interest.

 

In summary, the Energenie + Raspberry Pi + Capacitive Display + Amazon Echo combination forms a fairly comprehensive solution; little effort is required to build it, and all the code for this project is published and easy to customise.

 

The diagram below shows the complete path of information between the home and the cloud services. This is not necessary to know; it is just background information for those who are curious.

 

How do I do it? - Brief Overview

Firstly, get a Pi 3 and the correct power supply (the Pi 3 along with the display uses a lot of power - most USB chargers and associated USB cables will not be sufficient) and do the usual basic configuration (install the latest software image, create a password, and get it connected to your home network using either wireless or the Ethernet connection). The steps for this are covered in many blog posts. Next, attach the display to the Pi.

 

The next step (described further below) is to enable the software development environment called Node-RED and copy across the example Energenie MiHome code (all links are further below) that was developed as part of this blog post. Configure it to suit your home appliances. This entails storing an 'API Key' that is unique to anyone who registers their MiHome Gateway on the Energenie MiHome website, and also obtaining and entering in the device identifiers so that the Pi knows which adapter you wish to control when you press particular buttons on the touchscreen. Finally, you can customize the touchscreen and make it auto-dimming when the room is dark with a small add-on circuit. The majority of this blog post will cover all these topics in detail.

 

Security

The security of the base solution was covered in part 1, see the section there titled ‘Protocols and Examining the Risks’. The extra functionality in this part 2 has no known data security issue. No password is stored on the Raspberry Pi and no inbound ports are required to be opened on the router beyond those that would ordinarily be dynamically opened for web browsing responses. All communication between the Pi and the MiHome cloud service is encrypted. The Raspberry Pi stores just an ‘API key’ and the e-mail address that was used to register the MiHome service (use a throwaway e-mail account if you wish). In the event that someone hacks into the Pi, the API key only provides control of the home appliances until the user deactivates it from the MiHome cloud service. With sensible precautions (no ports opened up on the router) and user access restricted to the Pi, the risk of this occurring is low.

 

Depending on the desired level of trust/mistrust, one could modify the touchscreen interface to prompt for the MiHome password always; this would eliminate the need to locally store an API key but would increase the inconvenience. It is an option nevertheless.

 

What is an API?

An Application Programming Interface is a type of machine-to-machine communication method that is (often) made public. It isn’t necessarily designed for any one particular purpose. The reason is, often the creators of a service are not sure of how all their customers will use the service. By having an API, unexpected solutions can be created, adding value for the user. Whole businesses have been created on the backs of APIs; for example, Uber may not have known what else could be done by ordering a taxi with an API, however it is possible to automate deliveries by using such an API to automatically request a nearby driver as soon as someone places an order for your product. A taxi service that works like DHL is definitely unexpected, and would be harder to create without APIs. It has allowed businesses to have delivery staff on-demand.

 

Modern APIs frequently rely on HTTP and REST techniques. These techniques ultimately allow for efficient communication in a consistent manner. They nearly all result in the communicating device sending an HTTP request to a web address over the network, with any data sent as plain text, often in parameter and value pairs (for example in JSON format), and the HTTP response looks like what a web browser might receive, with a response code and text content. It actually means that such APIs can often be tested with any web browser like Chrome or Internet Explorer.

 

In the case of MiHome, Energenie have created an API that allows one to do things like send instructions to turn devices on and off. Once the MiHome server in the cloud determines that the request was a valid and authenticated use of the API, it will send a message to the MiHome Gateway in your home. From there, a radio signal is used to control the end device. The system can work in the other direction too; end devices can send information via radio, such as power consumption. This information is stored in the MiHome service database in the cloud. When a request arrives using the API, the MiHome service will look up the data in the database and send it as part of the HTTP response.
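
As a rough illustration of what such an API call can look like from Node.js, the sketch below sends an authenticated HTTP request to switch a device on or off. The hostname, endpoint paths and payload fields are placeholders rather than quotes from the Energenie documentation, so check the official MiHome API docs for the real ones; in this project the mihome Node-RED node handles all of this for you.

// Illustrative sketch only: endpoint paths and payload fields are placeholders,
// not taken from the Energenie MiHome API documentation.
const https = require('https');

function switchDevice(email, apiKey, deviceId, on) {
  const body = JSON.stringify({ id: deviceId });
  const req = https.request({
    hostname: 'mihome4u.co.uk',                  // MiHome cloud service (verify in the docs)
    path: on ? '/api/v1/subdevices/power_on'     // placeholder path
             : '/api/v1/subdevices/power_off',   // placeholder path
    method: 'POST',
    auth: email + ':' + apiKey,                  // HTTP basic auth: account e-mail + API key
    headers: { 'Content-Type': 'application/json' }
  }, function (res) {
    let data = '';
    res.on('data', function (chunk) { data += chunk; });
    res.on('end', function () { console.log(res.statusCode, data); });
  });
  req.on('error', console.error);
  req.write(body);
  req.end();
}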

 

For this project, the API will be invoked by the Raspberry Pi whenever a button is pressed on the touchscreen. This is just an example. With some coding effort it is also possible to instruct the Pi to (say) send on/off commands at certain times; this would implement a service to make the home appear occupied when the home is actually empty for instance.

 

Building the Graphical User Interface (GUI)

There are many ways to achieve a nice user interface with modern Linux based systems. One popular way uses an existing application called OpenHAB which is intended for easy home automation deployments. There are many blog posts which describe how to install it and use the OpenHAB software application. I couldn’t find a working Energenie MiHome plugin however (perhaps it exists or will exist one day).

 

I decided to take a more general approach and create a lightweight custom application. After all, coding is part of the fun when developing your own home automation system. The custom application is not a large amount of code. In fact it is tiny. This has the benefit of being really easy to follow and modify, allowing people to heavily customize it because everyone's home and needs are unique. For instance, some users may not want a touchscreen. They could easily modify the code to instead take push-button input and show indications with LEDs. This is really easy to do by tweaking the custom app.

 

For this project, I decided to use JavaScript (one of the most popular languages for web development), and an environment, or graphical programming add-on, called Node-RED. When this environment is run on the Pi, the software creation is done (mostly) in a web browser using graphical blocks. With Node-RED, user interfaces and program behaviour are implemented by dragging blocks (called 'nodes') onto a blank canvas and literally 'joining the dots' between nodes. Each node can be customised by double-clicking on it. Once the design is complete, the user interface is automatically made available at a URL such as http://xx.xx.xx.xx:1880/ui where xx.xx.xx.xx is the IP address of the Pi that is running Node-RED.

 

It is then a straightforward task to automatically start up a web browser on the Pi in full-screen mode, so that the user interface is the only thing visible. In other words, the Pi and touchscreen become a dedicated user interface device. Since web technologies are used, it means a mobile phone can also be used if you're not near the touchscreen.

 

In brief, Node-RED has nodes (blocks) for doing all sorts of things that are useful for a user interface. There are nodes for buttons and sliders and graphs that can be used to construct the desired result. There are many nodes for application creation too. However, Node-RED does not have an off-the-shelf node object that can control Energenie MiHome devices.

 

So, my first step was to design such a block and store it online so that anyone is free to use it. The instructions to install it are further below, in the 'Installing Node-RED' section. This means that when Node-RED is started and the web page for development is accessed, the left side blocks palette will contain a mihome node. It will automatically communicate using the Energenie Mihome API to the cloud service.

 

A one-time thing that needs to be done is to retrieve a key from the mihome cloud service. To do that, a special command called get_api_key is sent to the mihome node, along with the username and password that was used to register to the mihome service. The code does not store the password; just the username (i.e. e-mail address) and the returned API key is stored to a local file. If the Pi crashes or is powered off, the user does not need to re-enter the username and password; the key will be re-read from the file. For those that require a different strategy, it should be straightforward to modify the code.

 

The next section describes all these steps in detail.

 

Installing Node-RED

As root user (i.e. prepend sudo to the beginning of each command line, or follow the information at Accessing and Controlling the Pi in the section titled 'Enabling the root user account (superuser)' and then type su to become the root user, and type exit to revert to the previous 'pi' user if you originally logged in as the 'pi' user):

 

apt-get update
apt-get install npm
npm install node-red-dashboard

 

Exit out of the root user, and update Node-RED by typing:

 

bash <(curl -sL https://raw.githubusercontent.com/node-red/raspbian-deb-package/master/resources/update-nodejs-and-nodered)

 

It takes a long time (perhaps 15 minutes) to uninstall the earlier version and upgrade it, so take a break!

Afterwards, in the home user folder (/home/pi) become root user and then type:

 

npm install -g git+https://github.com/shabaz123/node-red-contrib-mihome.git

 

Exit out of root user and type:

 

node-red-start

 

After about ten seconds, you should see “Server now running at http://127.0.0.1:1880/”.

 

Now in a browser, open up the web page http://xx.xx.xx.xx:1880 where xx.xx.xx.xx is the IP address of the Pi. You should see a Node-RED web page appear!

 

Using Node-RED

The CLI command node-red-start will have resulted in a web server running on the Pi at port 1880. Code is written (actually, mainly drawn graphically with a bit of configuration) in a web browser. The editor view is shown when any web browser (e.g. Chrome or Internet Explorer) is used to see the page at http://xx.xx.xx.xx:1880 where xx.xx.xx.xx is the IP address of the Pi.

Here is what it looks like:

 

In the left pane, (known as the palette), scroll down and confirm that you can see a node called mihome in the group under the title 'function' and a whole set of nodes suitable for user interfaces under the title ‘dashboard’. To save time finding a node in the palette, you could just type the name, e.g. mihome in the search bar as shown here.

 

What does this mean? Basically, it means that ‘mihome’ functionality is available for you to use in your graphically designed programs, which are known as ‘flows’ in Node-RED. The flows will be created in the centre pane, known as the Flow Pane. It is tabbed, and by default the blank canvas for the first flow (Flow 1) is presented. When creating programs, nodes are dragged from the palette onto the flow pane. Then, connections are made between nodes. Each node is configured by double-clicking on it; a node configuration parameter window then appears, and help on the node appears in the tab marked Info. The program is run (or ‘deployed’) by clicking on a button marked Deploy, shown in red on the top-right of the web page when a flow is created (by default it is grayed out).

 

An Example Home Automation Program

To help get started, I’ve created an example program sufficient to control home appliances with the MiHome solution. To obtain it, click to access the example code on github and then copy the program (press ctrl-A and then ctrl-C to copy the entire code to the clipboard). Next, go to the Node-RED web page, click on the menu on the top-right, and select Import->Clipboard. Click in the window, press ctrl-V to paste it in there, and click Import. The code will appear graphically, attached to the mouse pointer! Click anywhere inside the web page to place it.

This is what the demo program looks like:

 

As you can see, it is split into three main parts: the top, the middle and the bottom. The middle part is used to control a fan.

 

The light-blue nodes on the left represent buttons (the actual buttons will look nicer; this is just a view of the graphical code). When a ‘Fan On’ or ‘Fan Off’ button is pressed, some signal or message is sent into the yellow mihome node. The mihome node is responsible for communicating to the Energenie MiHome cloud (which in turn will send a message to your MiHome Gateway box, which will then send a radio signal to the end appliance mains socket). The green node on the right doesn’t do much; it is used for debugging and will dump text into the ‘Debug’ tab in the editor.

 

The top flow looks near-identical, except that the buttons do not control a fan, but rather control a group of appliances. For example, you may have several lamps in a room and you may wish to define a group to control them all simultaneously.

 

In summary, the mihome node will recognize various commands and will make the appropriate API call to the cloud, to invoke the appropriate real-world action like switching on appliances.

 

The bottom flow is a bit different:

 

It doesn’t have a light-blue button node on the left. Instead it has a darker blue node which is known as an Inject node. It has the characteristic that it can repeatedly do something at regular intervals. It has been configured (by double-clicking on it) to send a message to the yellow mihome node every minute. Every minute it instructs the mihome node to query the Energenie MiHome cloud and find out how much power is being consumed by the fan appliance. When the cloud receives the request, it will send the request to the Energenie MiHome Gateway box which will transmit a radio signal to the fan mains socket, which will respond back with the result.

 

The pink/orange get real power node is a function node. By double-clicking on it within Node-RED, you’ll see that all it does is extract the ‘real power’ value out of all the information that is returned and discards the rest. The final node in the chain, the fan-power-history node is a chart node. It is responsible for graphing all the information it receives. The end result would be a chart that updates every minute.
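
As a sketch of what that function node can contain (the property name below is an assumption; use the Debug tab to see where the value actually sits inside msg.payload):

// Node-RED function node: keep only the 'real power' reading and discard the rest.
// 'real_power' is an assumed property name - inspect the Debug output to confirm it.
msg.payload = msg.payload.real_power;
return msg;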

 

To explore the yellow mihome node in a bit more detail, double-click on any preceding node to see what information is sent to the mihome node. For example, if you double-click on ‘Fan On’, you’ll see this information appear:

 

You can see that this is a button node (or more specifically ui_button), which is part of the dashboard collection of nodes in the palette. It basically will display a button on the screen. The button will be labelled “Fan On” and if the user clicks it, then a message or payload will be sent into the mihome node. The payload is partially shown on the screen, but click on the button marked ‘’ to see it fully. When you do that, you’ll see this text:

 

{
    "command": "subdevice_on",
    "objid": "65479"
}

 

The command indicates that this is something to be powered up, and the objid identifies what device should be powered up. That objid value 65479 happens to be an Energenie mains socket that I own, connected to a fan. In your home, every Energenie device will have its own unique ID, and they are very likely to be different to mine, although there could be overlap. So how does the mihome node know which device should be controlled, yours or mine?

 

The answer is, the mihome node uses an API key. This is unique and assigned whenever anyone creates an Energenie MiHome account. The API key can be obtained using the username and password that was used to set up the account. Code can be created to do that automatically, and then save it so that the Pi always uses the API key. For security reasons, I wanted it to prompt for the password, but not store the password. Only the e-mail address and API key are stored. To do that, I wanted an ‘admin’ screen on the user interface to allow the user to type in their credentials. This needs some additional code, which is explored next.

 

Building an Admin View

The Admin view is used to initially configure the Pi so that it has the API key to control your home. I created it as a separate program (flow) that happens to appear in the same user interface. You can obtain the code by clicking on the Menu button (top right) and selecting Flows->Add. You’ll see a Flow 2 tab appear with a blank canvas for your new flow. Then, click here to access the admin view code on github and select all (ctrl-A) and copy (ctrl-C) the entire program there. Import it into the Node-RED editor as before (click on the Menu icon and then Import->Clipboard and paste it there using ctrl-V) and then click on Deploy.

 

Here is how it works: the top-left shows four user interface objects. The PIN, email and password nodes are text boxes where the user can type in these parameters before clicking the OK box. All the information is collected by the next node in the chain, called invoke_get_key, which checks that all the text fields have been populated and that the PIN is correct. The PIN is not used for security; it simply prevents young children in the home from accidentally wiping the API key. The code will not request an API key until the PIN is correctly entered, and if an incorrect username and password were submitted the stored API key would be wiped out, so the PIN stops that from happening accidentally if babies or young children start playing with the touchscreen. Since the PIN doesn’t play a security role, it is just hard-coded; you can edit it by double-clicking on the ‘invoke_get_key’ node. I won’t explain the rest of the flow, but it is simple and straightforward to explore by double-clicking on nodes.
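
The checking logic boils down to something like the following function-node sketch (the field names and message shape are illustrative assumptions; the imported admin flow contains the real code):

// Sketch of the invoke_get_key checks (illustrative only)
var pin = msg.payload.pin;           // assumed field names collected from the text boxes
var email = msg.payload.email;
var password = msg.payload.password;

// The PIN is not a security measure, just a guard against accidental taps
if (pin !== '1234' || !email || !password) {
    return null;                     // missing input or wrong PIN: do nothing
}

// All fields present and PIN correct: ask the mihome node to fetch the API key
msg.payload = { command: 'get_api_key', email: email, password: password };
return msg;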

 

The end result is that the flow will allow the API key to be retrieved and stored permanently on the Pi in a file in plain text format. The password is not stored as mentioned earlier. Since the API key is stored, if the Pi reboots, the user will not have to add the API key again.

 

When we examined the ‘Fan On’ node earlier, we saw that an identifier is used for the mains socket and in my case it happened to be 65479. To obtain such identifiers, we need to use the API to ask the Energenie MiHome cloud what devices exist in the home. The Scan Devices button is used to do that. It will make the appropriate API call and then show the list on the screen.

 

Working with the User Interface

So far, we have examined the flows for the example home automation system, and the Admin view. Once you’ve clicked Deploy, the code will be running. The user interface can be accessed by opening up a browser to

http://xx.xx.xx.xx:1880/ui and you’ll see this:

 

The buttons can be tapped to switch things on and off, and the chart shows the power consumption of the fan over time, allowing you to see when the fan was used (it was not used; it is cold here!).

 

The menu is the result of the code in Flow 1. But the system won’t work until it has been configured as in Flow 2. To do that, click on the menu icon (the three bars on the top-left, next to where it says “HAL 9000”) and in the drop-down, select ‘Admin’; you’ll see the code from Flow 2 executed:

 

Once you’ve entered the PIN (it is 1234 unless you edited the code as mentioned earlier) and e-mail address and password as used on the Energenie MiHome cloud service, click on OK and the system will retrieve the API key from the cloud service and store it locally.

 

You can’t control the fan, because it is set up for my fan mains socket identifier; you’d need to change it to suit your own device. To do that, click on Scan Devices and the system will show in a pop-up window a list of all Energenie devices you own, and their associated identifiers. Take a screen print of that, and you can use it for editing the flow to add buttons and groups for those devices. Once you’ve done that, click on Deploy again.

 

Theme Customizations

I didn’t like the color scheme, but thankfully it is possible to choose your own. To do that, go back into the editor view at http://xx.xx.xx.xx:1880 and then click on the Dashboard tab on the right as shown here:

 

You’ll see lots of options to adjust the ordering of buttons in the Layout sub-tab. Click on the Theme sub-tab and then set Style to custom and you’ll see all the elements that can have different colors. Once they have been adjusted to suit preferences, they can be saved under a custom name. I didn’t want the touchscreen to be entirely lit up brightly at night-time, so I chose a dark background for example.

 

Building a ‘kiosk mode’ for the Pi and Display

For practicality, the Pi needs to be set up so that Node-RED executes automatically when the Pi is powered up, and the web browser must be set up to auto-start too, set to fill the entire touch display with no border or URL/website address visible. In other words, we want a type of kiosk mode much like the interactive help/information screens in shopping centres/malls.

 

The steps to implement this on the Pi are scattered all over the Internet and a bit outdated; I had to spend some time working out the customisation that would suit the Pi and Capacitive Display, for implementing such a system.

 

First, stop Node-RED by issuing the command node-red-stop and then as root user, type the following:

 

systemctl enable nodered.service
systemctl start nodered.service

 

Now Node-RED will automatically start whenever the Pi is rebooted.

 

The next step is to invoke a browser whenever the Pi starts up.

To do this, as root user type raspi-config and then select Boot Options and then choose to auto-boot into text console as ‘pi’ user. Then at the main menu press the tab key until Finish is highlighted to save it, and select to reboot the box. When the Pi comes up, you should see the text-based command shell/prompt on the touchscreen display, and the user already logged in.

 

Also as root user, type the following:

 

apt-get install matchbox-keyboard

 

This will install a virtual keyboard for the times you may need to tap text on the display; it isn't used for this project but could be useful in future.

 

Also type:

 

apt-get install matchbox-window-manager

 

You’ll also need a better web browser than the default. I installed two more, so that there was some choice. Still as root user, type:

 

apt-get install midori
apt-get install chromium-browser

 

(If you test it from the command line and chromium-browser has an error concerning mmal_vc_init_fd, then you will need to issue rpi-update and then reboot the Pi).

 

As normal ‘pi’ user, create a file in the /home/pi folder called midori_start.sh containing the following:

 

#!/bin/sh
matchbox-window-manager -use_cursor no&
(
echo "10" ; sleep 1
echo "20" ; sleep 1
echo "50" ; sleep 3
echo "80" ; sleep 3
echo "100" ; sleep 2
) |
zenity --progress \
  --title="Starting up" \
  --text="Please wait..." \
  --auto-close \
  --percentage=0


if [ "$?" = -1 ] ; then
        zenity --error \
          --text="Startup canceled."
fi
midori -e Fullscreen -a http://127.0.0.1:1880/ui

 

Create another file called chromium_start.sh with the same content, but replace the last line with:

 

chromium-browser --incognito --kiosk http://127.0.0.1:1880/ui

 

Edit the /home/pi/.bashrc file and append the following:

 

if [ $(tty) == /dev/tty1 ]; then
  xinit ./midori_start.sh  # or ./chromium_start.sh if you prefer the Chromium kiosk
fi

 

The result of all this is that when rebooted, the Pi will display a progress bar for ten seconds (allowing sufficient time for the Node-RED server to start up) and will then display a full-screen browser opened up at the correct URL for the user interface (http://127.0.0.1:1880/ui which is the local host address of the Pi).

 

Reboot the Pi (i.e. type reboot as root user) and the user interface should appear!

 

Preventing Display Blanking

After some minutes of inactivity, the display will blank by default. Depending on requirements this may be undesirable. To prevent the screen from blanking, make the following changes.

 

Edit the midori_start.sh and chromium_start.sh files, and insert the following lines after the first line:

 

xset -dpms
xset s off

 

 

Auto-Blanking the Mouse Pointer

It could also be desirable to make the mouse pointer/cursor disappear from the screen. Type the following as root user:

 

apt-get install unclutter

 

Then, as the ‘pi’ user, edit the midori_start.sh and chromium_start.sh files and insert this just above the line containing the matchbox-window-manager text:

 

unclutter &

 

Reboot the Pi for these to take effect.

 

Auto Brightness for the Capacitive Touch Display

Although the kiosk mode implementation works fine, there is a lot that could be improved. For starters, the display is too bright in the evening. It would be possible to adjust the brightness level based on time, but I felt it would be better to measure the ambient light level using a light dependent resistor (LDR).

 

The capacitive touch display brightness level is controlled using the following command line as root user:

 

echo xxx > /sys/class/backlight/rpi_backlight/brightness

 

where xxx is a number between 0 and 255 (a value of about 20 is suitable for night-time use, and 255 can be used for a bright screen during the day).

 

To automate this, a couple of scripts were created in the /home/pi folder. As the ‘pi’ user, create a file called set_bright.sh containing the following:

 

#!/bin/sh
sudo echo 255 | sudo tee /sys/class/backlight/rpi_backlight/brightness > /dev/null

 

Do the same for a file called set_dim.sh but set the value to 20.

 

Next, type:

 

chmod 755 set_bright.sh
chmod 755 set_dim.sh

 

In order to invoke these scripts, a new flow is created in Node-RED. Click here to access the auto brightness source code on github.

 

Once it has been added to Node-RED, click on Deploy to activate it.

The flow looks like this:

 

The left node, called dark_detect, is configured as shown below (double-click on it within Node-RED to see this):

 

The dark_detect node will generate a message of value 1 whenever the Raspberry Pi’s 40-way header pin 7 (GPIO 4) goes high.
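
The rest of the flow simply turns that 1 (dark) or 0 (light) into a call to the scripts created earlier. A function node along these lines would do it (a sketch; it assumes the GPIO value arrives in msg.payload and that a downstream exec node runs whatever command string it receives):

// Node-RED function node sketch: choose which backlight script to run.
// Assumes msg.payload is 1 when the room is dark and 0 otherwise, and that an
// exec node after this one executes the command placed in msg.payload.
msg.payload = (msg.payload == 1) ? '/home/pi/set_dim.sh' : '/home/pi/set_bright.sh';
return msg;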

A small circuit was constructed to generate a logic level ‘1’ whenever it goes dark:

 

The circuit consists of a Schmitt trigger inverter integrated circuit, a light dependent resistor, a 50k trimmer variable resistor and a 100nF capacitor. The trimmer resistor can be adjusted to suit the home lighting level.

 

It worked well. When the room lighting is reduced, the display automatically dims to a very comfortable level.

 

Summary

It is possible to create a nice touchscreen based user interface for home automation with the Pi. The programming effort is low using Node-RED. It is possible to create code ‘flows’ with graphical ‘node’ objects that can represent buttons on the screen. The functionality that interacts with the Energenie MiHome service is contained in a ‘mihome node’ graphical object that is inserted into the code flow. It will automatically send the appropriate commands to the Energenie MiHome cloud service, which will in turn send a message to the MiHome Gateway that will issue a radio message to control the desired home appliance. Monitoring capability is possible too; an example showing appliance energy consumption over time is contained in the code.

 

The solution with the Pi is reasonably secure; no password is stored on the Pi, the system stores an API key instead.

 

Finally a small circuit was constructed and an additional code flow was created that would automatically dim the display backlight when the home lighting is reduced.

 

I hope the information was useful; these two blog posts were rather long, but I wanted them to be detailed so that anyone can implement a home automation solution.
