

This Pi IoT Challenge evolved in a strange way for me: as the project grew, a new scenario emerged and imposed itself as the main track.

So what happened? I had to make a choice: close the challenge just in time, or smoothly follow the project's evolution. There was no time to do both. Closing the challenge by the deadline would have required a series of simplifications in the project, and the time remaining before the next (and, why not, more ambitious) deadline would have been too short to refactor the idea and the prototype towards the new target.



I made my choice: the challenge deadline became just the most important milestone of a wider project with a more complex design. It was clearly a necessary choice, given the first media coverage and, most of all, the interest and support of the partners: the MuZIEum, where the project will take place; Element14, which has been supporting and encouraging me; the approach of a second sponsor; and the first interviews, due to be published in the coming weeks.



A few words about the new timeline

The new timeline necessarily changed the project approach. The perfect reading place became focused on Internet of Things technologies supporting visually-impaired users: it will be a real reading, chatting, discussing and interacting area, to be installed on the MuZIEum site. The many new aspects to take care of made things more complex, but also more interesting. The design idea became a use case, adding one more level of complexity: usage, colour choices, networking, usability, and more. The points below summarise the main aspects:


  • The system is a series of installations connected as IoT nodes
  • Visually-impaired staff will be trained in how the components work, so they can illustrate their usage to sighted visitors
  • The components of this IoT network will be accessible and easily understandable by the visitors: together with the rest of the MuZIEum context, they should be part of a very special and intriguing experience (guided and supported by visually-impaired personnel)
  • The MuZIEum staff, with the precious consultancy of project manager Carlijn Nijhof, will supervise the project's content and features: the colours of the 3D-printed parts, the preset text-to-speech messages, the interaction choices, etc.
  • The perfect reading place should work dynamically, adapting to the presence of a detected user
  • The system is headless: no monitor, no screen, no keyboard


The timeline for the next three months

  1. The fully working, installable prototype will be completed during September. Every part will be tested on-site as it becomes available; then a full pre-installation setup will test the IoT nodes
  2. A final project design will be produced and supervised by the MuZIEum staff for ergonomics and colour choices: 3D-printed parts, powering system, accessibility
  3. The text content of the spoken messages will be tested and then supervised by the MuZIEum staff to reach the best wording
  4. As well as texts and colours, the physical accessibility of the components and the control gestures will be discussed with the MuZIEum staff to reach the most comfortable user experience
  5. Between September and October a group of visually-impaired MuZIEum collaborators will be trained to illustrate the self-adapting IoT experience to the visitors
  6. By the end of October or the first week of November the final, reviewed installation will be ready on-site. The system will then be used for a period by test users to verify the installation's robustness and reliability.
  7. The official presentation to the press and public visitors, together with other projects all focused on giving the visitor a unique perceptual experience, will take place on 8 December.


During this period a series of video podcasts is planned, streamed on Twitter via Periscope, with project news, interviews with staff members and more.


Project updates will continue as usual from 5 September.


I hope that element14Dave, spannerspencer and the rest of the Element14 challenge sponsors will appreciate this scenario, and that the users will keep following the project's development.

It's been a very interesting challenge! Three months to learn a bunch of new things (especially in server and services implementation) and one common enemy... TIME! Only the basic structure of the proposed plan was implemented in the end. However, this is only the beginning.


Thanks to element14 for this chance, and thanks to everyone who commented and helped. Of course, thanks to all who read through these posts and maybe found some interesting information.


It was also amazing to read what the other participants were doing (a good use of the challenge period!); all the projects are incredible and some are extremely original ^^


Alright! Let's do a wrap up:


List of past posts:

Sources and links

Most of the code is available on GitHub


The plan: what was planned and what was actually done


In the end... I let time pass by, and most of the work was done during the first weeks or the last ones ^^u  I am not even going to think about how many posts I uploaded tonight u.uzZ. Lesson learned!


Looking back at those optimistic first weeks...



Green = completed




How to

Connectivity setup: MQTT

Raspberry Pi 3

Raspberry Pi 1


Broker installed in Raspberry Pi 3

Publisher client in Raspberry Pi 1

Subscriber client in smartphone

Subscriber client in Raspberry Pi 3

Sensor reading

Sensor type 1:

I2C protocol – connect to the corresponding Raspberry Pi 1 I2C ports


Sensor type 2:

  • Door switch
  • Alarm button

Direct connection to Raspberry Pi 1 GPIO ports


Raspberry Pi 1

Reads GPIO ports

Implements MQTT publisher client -> sends data to Raspi 3


Raspberry Pi 3

Implements MQTT broker

Data storage

Raspberry Pi 3

Implements MQTT subscriber client -> read data from Raspi 1

MySQL database to store data
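The publisher/subscriber split sketched in this plan can be tried out with the Eclipse Paho Python client. This is only an illustrative sketch, not the project code: the broker address, the JSON payload format and the helper names are my own assumptions; the sensors/<name> topic convention matches the one used later in these posts.

```python
import json
import time

BROKER_HOST = "192.168.1.10"   # assumption: LAN IP of the Raspberry Pi 3 broker
BROKER_PORT = 1883             # default MQTT port

def build_payload(sensor, value):
    """Serialize one reading as JSON, tagged with a timestamp."""
    return json.dumps({"sensor": sensor, "value": value, "ts": int(time.time())})

def publish_reading(sensor, value):
    """Publish a single reading from the Raspberry Pi 1 to the broker."""
    import paho.mqtt.client as mqtt  # third-party: pip install paho-mqtt
    client = mqtt.Client(client_id="raspi1_publisher")
    client.connect(BROKER_HOST, BROKER_PORT)
    client.publish("sensors/" + sensor, build_payload(sensor, value))
    client.disconnect()

# Example (run on the Pi 1 with the broker reachable):
# publish_reading("temperature", 21.5)
```

The subscriber on the Raspberry Pi 3 would mirror this with `client.subscribe("sensors/#")` and a message callback that writes to the database.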

GUI – general home access

Raspberry Pi 3

Same MQTT subscriber client

Displays read data

Mobile app – individual home access


Implements MQTT subscriber client -> read data from Raspi 1

Displays info

Web portal – remote access

Raspberry Pi 3


Extra 1: Announcement board



How to

User sets

  • Task
  • Announcement

Raspberry Pi 3

Include a menu to input:


Tasks that should be finished within a deadline (e.g. cleaning)

Data storage

Raspberry Pi 3


Display task/announcements

Raspberry Pi 3

Update main GUI to include an “Announcements” tab



Green = completed

Basic: Run competition



How to

Record user’s run distance

Android smart phone

Mobile app.

1) record run distance with either:

  • Use maps framework: get miles
  • Count steps/use phone gyroscope

2) send distance to smart home

       - Send to home server IP address

Data storage

Raspberry Pi 3

Implement home server – Apache

Create PHP interface to fetch data coming from phone

Store data in smart home database - MySQL

Display data

Raspberry Pi 3

Update home GUI:

  • Individual data
  • General table with best results
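The phone-to-server leg of this plan boils down to an HTTP POST carrying JSON towards the Apache/PHP interface. A standard-library sketch of what the app would send (hypothetical: the endpoint path and field names are my guesses based on the later posts, not the actual app code):

```python
import json
import urllib.request

# Assumption: the home server's LAN address and the PHP endpoint named later in these posts
SERVER_URL = "http://192.168.1.10/insert_into_table.php"

def build_request(user, miles):
    """Build the JSON POST the PHP interface would decode."""
    body = json.dumps({"type": "insert", "user": user, "distance": miles}).encode("utf-8")
    return urllib.request.Request(SERVER_URL, data=body,
                                  headers={"Content-Type": "application/json"})

def send_distance(user, miles):
    """Fire the POST; the PHP side stores the row in MySQL."""
    with urllib.request.urlopen(build_request(user, miles)) as resp:
        return resp.read().decode("utf-8")

# Example (would actually POST to the home server):
# send_distance("caterina", 3.2)
```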

Extra 1: Tourist/Discovery system



How to

New destination selection

Raspberry Pi 3

Select a reasonable location to visit

Display it on the home GUI

Allow remote access to the selected location

Mobile app- geo location

Android smartphone

Update mobile app:

  • Map framework to detect the person's location
  • Read the new location from Raspi 3
  • Send an update when the person gets to that location

Extra 2: Smart house inner challenges





Well, the basic modules were developed. A pity that the extras, the more interesting ones, were left behind.



What now?

Interactivity - Central Node and User Node

Even though both nodes technically work, they are too simplistic! I want something that actually gets used in the house and is even appealing to my roommates. This means that:

  • The Central node needs a better GUI
    • Nicer look and feel
    • More competition options and some grand announcement of the winner
  • The User node application could also include Google Maps integration


Casing, covers and finishes

All the components and elements were wired up and left as they came into this world. However, there are multiple options out there (e.g. 3D-printed models) to give each node a decent cover. That would make them more protected, easier to handle and less prototype-looking.


It is difficult not to want to start with this after seeing what other participants have done; that is great work!


The extras

While I feel I should implement some of the proposed extras, I have discarded the tourist option (it is too complicated). The announcement board, though, looks like a nice must-have! (Cleaning schedules are difficult to keep in the new house u.u)


Also, the new apartment I recently moved into suggested another interesting extra (for the laziest, if you may): a camera intercom at the main door of the building. We live in a three-storey house, and every time the doorbell rings we have to walk one full floor (aaaaall the way ) down. However, there is a convenient window through which one of the cameras could be looking.


Tools for improvement

The whole development process has been built from scratch. Nevertheless, there are plenty of tools (especially server-side plugins) which could make the platform more efficient, easier and more professional (the cloud is quite popular these days). My knowledge is rather limited here, so I cannot really give a practical example, but I will be looking for something of the sort :3



Always key in a smart house, and yet I left it for the very end... till it was too late


That's been all for now


Caterina Lazaro

Last day (+1) of the Pi IoT competition, and a Smart Competition Home is ready to run

In this post I wanted to show what the system looks like, both on paper and in the house itself. So here come the pictures:


System Description








  • 1 Central Node - Raspberry Pi 3
  • 1 Sensors Node - Raspberry Pi
  • 4 User's Nodes - Smartphones













System "Installation"

I want to show where each node works in real life. "Installation" is a very kind word to use in this case, as each node has been placed on a best-effort basis (but it works!)

Sensors Node

Attached to the back door in the Kitchen



Central Node

In the corner of the living room. Accessible and not in the way.



User's Node


The User's Node is intended to be each of our smartphones


However, I also had an old tablet which was only used to control Netflix on the home Chromecast. Well... it is now a general User's Node to read the smart house information.

This new post finalizes the User Node (an Android device). It adds the smart-house functionalities to those of the competition system. This way, any resident will be able to check the smart house information while connected to the WiFi, and switch to Competition mode when leaving to gain some miles.


*In other words... I will make the Smart Competition button work





User's node - include MQTT Publisher Client



(Screenshots: the NOT CONNECTED state, and examples of the subscribe response)

Smart competition Activity

Initial setup: Nexus 5 / Android / SmartCompetitionHome App v 3


It is a direct implementation of the MQTT clients, thanks to the Paho library.


MQTT Clients subscriber & publisher

I create both kinds of clients in the app. To do so, the code needs:

  • Client id
  • url - local IP of the broker
  • port - that of the MQTT service (or the one our broker is listening to). Default port = 1883


Both types of client (the subscriber, with its callbacks, and the publisher) are implemented on top of the Paho libraries. Very good news!

public static void createMQTTDefaultClients() {
    String url = protocol + broker + ":" + port;
    clientId = "phone_" + action;

    try {
        // Create an instance of the custom synchronous client (the publisher)
        sampleClient = new MyCustomMqttClient(url, clientId, cleanSession, quietMode, userName, password);

        // For the asynchronous subscriber, use a different client id
        clientId = "phone_" + action_async;
        sampleSubscriber = new SampleAsyncCallBack(url, clientId, cleanSession, quietMode, userName, password);

    } catch (Throwable me) {
        // Display full details of any exception that occurs
        if (me instanceof MqttException) {
            System.out.println("reason " + ((MqttException) me).getReasonCode());
        }
        System.out.println("msg " + me.getMessage());
        System.out.println("loc " + me.getLocalizedMessage());
        System.out.println("cause " + me.getCause());
        System.out.println("excep " + me);
    }
}



In order to create the subscriber, I instantiate the class SampleAsyncCallback (which implements MqttCallback). The subscription is performed as a combination of the subscribe() method (which starts and manages the process) and waitForStateChange(). As a result, the code navigates through all the connection steps:


While the client is subscribed, the information gets to the phone through a callback, messageArrived(). This method is used to:

  • Get new data of the topic
  • Update the interface to include this new information

More details of this callback:

/**
 * @see MqttCallback#messageArrived(String, MqttMessage)
 */
public void messageArrived(String topic, MqttMessage message) throws MqttException {
    // Called when a message arrives from the server that matches any
    // subscription made by the client
    String time = new Timestamp(System.currentTimeMillis()).toString();
    System.out.println("Time:\t" + time +
            " Topic:\t" + topic +
            " Message:\t" + new String(message.getPayload()) +
            " QoS:\t" + message.getQos());

    if (topic.equals("sensors/door")) {
        // Change door values
        SmartHomeActivity.readDoor = new String(message.getPayload());
        receivedDoor = true;
    } else if (topic.equals("sensors/temperature")) {
        // Change temperature values
        SmartHomeActivity.readTemp = new String(message.getPayload());
        receivedTemp = true;
    } else if (topic.equals("sensors/pressure")) {
        // Change pressure values
        SmartHomeActivity.readPress = new String(message.getPayload());
        receivedPres = true;
    } else if (topic.equals("sensors/warning")) {
        // Change warning
        SmartHomeActivity.readWarning = new String(message.getPayload());
        receivedWar = true;
    } else if (topic.equals("sensors/altitude")) {
        SmartHomeActivity.readAlt = new String(message.getPayload());
        // Reset the other values until fresh readings arrive
        SmartHomeActivity.readTemp = "?";
        SmartHomeActivity.readWarning = "?";
        SmartHomeActivity.readDoor = "?";
        SmartHomeActivity.readPress = "?";
    }

    if (receivedDoor && receivedTemp && receivedPres) {
        receivedDoor = false;
        receivedTemp = false;
        receivedPres = false;
        receivedWar = false;
        // Go to the next step of the connection
        SmartHomeActivity.subscribed = false;
    }
}



At this point, I call the subscribe() function when pressing the SUBSCRIBE button.




I have been using it mainly for debugging purposes: I can check whether messages are received by the broker when the sensor data seems to be lagging.


NOTE: client ids! Throughout this project I have been creating a few different clients. It might be obvious, but sometimes it is not... I have been giving them different ids. The broker will refuse any connection if there is already a client with that name.
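One cheap way to avoid that refusal is to make every client id unique by construction. A small Python sketch (the "phone_" prefix mirrors the ids used in the app code; the random suffix is my addition, not something the app does):

```python
import uuid

def unique_client_id(prefix):
    """Append a random suffix so two client instances never collide on the broker."""
    return prefix + "_" + uuid.uuid4().hex[:8]

# Example: two ids generated from the same prefix will differ
id_a = unique_client_id("phone_subscribe")
id_b = unique_client_id("phone_subscribe")
```

The trade-off is that a reconnecting client gets a fresh session each time, so persistent subscriptions would need a stable id instead.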


Not the best features


I want to refine this Smart Home activity, since it is missing both a nicer look and some additional useful commands. Regarding these commands, I want to point out that:

  • The connection IP is hardcoded, which means the user has no way of selecting a different device. Consequently, an extra field needs to be added for that purpose
  • There is no alert when the connection fails.
  • It is not automated - I have to press the SUBSCRIBE button every time I want to get new data
  • There is a "Publish" option on the interface, which will actually create an MQTT_Publisher and send the message to the broker. However, the Central Node only logs the result:
    • With a bit more coding, we could use the smart home application to control some actuators (if any are installed in the house)

All in all... it will get the job done, but it is still not the most comfortable to deal with.
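The actuator idea above could start as a simple topic-to-command dispatch on the Central Node. This is a purely hypothetical sketch: the actuators/<name> topics and the GPIO pin mapping are my assumptions, and the RPi.GPIO calls only run on the Pi itself.

```python
# Map actuator topics to GPIO pins (assumed wiring, not the project's)
ACTUATOR_PINS = {"actuators/lamp": 17, "actuators/fan": 27}

def dispatch(topic, payload):
    """Translate an MQTT message into a (pin, state) command, or None if unknown."""
    if topic not in ACTUATOR_PINS:
        return None
    state = payload.strip().upper() == "ON"
    return (ACTUATOR_PINS[topic], state)

def on_message(client, userdata, msg):
    """paho-mqtt message callback: drive the mapped pin (Pi-only)."""
    command = dispatch(msg.topic, msg.payload.decode("utf-8"))
    if command is not None:
        import RPi.GPIO as GPIO  # third-party, only available on the Pi
        pin, state = command
        GPIO.setmode(GPIO.BCM)
        GPIO.setup(pin, GPIO.OUT)
        GPIO.output(pin, GPIO.HIGH if state else GPIO.LOW)
```

Unknown topics are simply ignored, so the same subscriber can keep logging sensor traffic untouched.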



This post closes the implementation of our User's Node. Now we have the two main functions of the system:

  • A distance tracker linked to the competition server
  • An MQTT client which can be used to read data from the Sensors Node

In order to have a bit more feedback and add some thrill while running ("wait! when did he run all those miles?? no way I will let this be"), I want to have an updated table of the current month's competition state. That means:

  • Including a new function in the Competition Service - in the Central Node (Raspberry Pi 3). This way, when the competition information is requested, it will send back the appropriate data
  • Implementing the "Podium" activity in the Competition Android App - in the User's Node




Central Node - Send competition information

Existing file: insert_into_table.php

New functionality: obtain the last row of each table


So, apart from inserting information into the database, the competition service should be able to:

  • Read the last row of each user's table
  • Send it back to the phone


Read last values upon request

The main .php file is now able to read different types of messages. As a result, we differentiate:

  • type = insert -> to update values in a table
  • type = get_row -> to get the last row of each table and extract its monthly distance


This request, get_row, also contains the names of all the roommates, which will be used to select the individual tables. Then, the file will:


1. Once we obtain the value from the right HTTP_POST ($json), the code extracts the requested roommates. Each roommate = one table

2. Fetch the last row of each user's table

3. Extract the monthly distance

//Decode JSON into an array
$data = json_decode($json);
foreach($data as $key=>$val){
     $row_last = $db->read_rows($val);
     $month = $row_last[NUM_MONTH_COLUMN];
}

(*) The read_rows function was developed to contain the corresponding SQL calls: it obtains the last row and fetches it as an array to return


4. At the end, send it back to the requester, the User's Node
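Put together, the four steps define a small request/response contract. Here is a Python mock of what the get_row branch computes (the function names and the column index are illustrative, taken loosely from the PHP snippet above, and the fake table data is invented):

```python
import json

NUM_MONTH_COLUMN = 3  # assumed index of the monthly distance in each row

def handle_get_row(request_json, read_rows):
    """Mock of the get_row branch: for each requested roommate (= table),
    fetch the last row and extract the monthly distance."""
    roommates = json.loads(request_json)
    result = {}
    for name in roommates:
        last_row = read_rows(name)      # stand-in for $db->read_rows($val)
        result[name] = last_row[NUM_MONTH_COLUMN]
    return json.dumps([result])         # sent back to the User's Node

# Fake table reader standing in for the SQL layer
fake_tables = {"ana": (1, "2016-08-30", 2.0, 40.5),
               "luis": (2, "2016-08-30", 1.0, 38.0)}
response = handle_get_row(json.dumps(["ana", "luis"]), lambda n: fake_tables[n])
```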


User's Node - Podium activity

Initial setup: Nexus 5 / Android / SmartCompetitionHome App v 2


In this section I explain how the Podium activity is implemented. It will request the current state of the competition from the server and display it in a table. Again, results will be organized from top to bottom.


This Activity will only display a table (and, later on, a REFRESH button).

When the Activity is created, it will request the monthly information for each user from the Central Node. Once the response arrives, the table is updated with the most recent data. The interesting part of this file is the new AsyncHttpResponseHandler, which handles successful messages as follows:

//Handle successful response
public void onSuccess(String response) {
    System.out.println("Get comp Server response: " + response);

    try {
        // Convert to a JSON array and get the arguments
        JSONArray arr = new JSONArray(response);
        // Analyze each JSON object
        JSONObject jsonObj = (JSONObject) arr.get(0);
        Iterator<?> keys = jsonObj.keys();
        while (keys.hasNext()) {
            String key = (String) keys.next();
            lastMonthValues.put(key, jsonObj.getString(key));
        }
        // Update GUI values:
        updateTableValues();
    } catch (JSONException e) {
        e.printStackTrace();
    }
}

(*) lastMonthValues is a Map<String, String> structure holding each roommate's monthly distance. In updateTableValues() we use this information to organize the Podium table.

Competition Application running in the smartphone


NOTE: There should be a way of reducing that long delay when retrieving data




The competition Android application is completed! With this version, each user can:

  • Record their traveled distance
  • Check the current status and the other users' total distances

This post describes the last step needed to have a functional competition system. It will show how to update the Python GUI of the central node with the data stored in the database (coming from each of the roommates' phones, as explained in the previous post). It is a short entry describing:

  • The database and tables used to monitor each participant's progress
  • New Python functions to include values from the database

All development is done in the central node (Raspberry Pi 3), using Python and SQL queries.



Competition database


It will host two kinds of tables:

  • Roommate information tables - with the current distance, the daily distance and the monthly distance, time stamped. In this case, we are 4 people in the house
  • Winner information table - with the winner and the month they won


Information transaction


Most of the information will be stored by the Competition Service, as explained in [Pi IoT] Smart Competition Home #8: Competition system III - Android Competition application: communicating with the server (each roommate's distance information). Then the Python main program will retrieve that information and display the competition in its main GUI. It will also determine who is the monthly winner at the end of each period.


Nevertheless, the main Python activity will be the one handling the winners table. Once we change to a new month, it will use the last monthly_distance value of each resident to select and store that past month's winner.
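The month-rollover logic described here reduces to a max over the last monthly_distance values. A small sketch with invented names (not the actual file's functions):

```python
def select_monthly_winner(monthly_distances):
    """Given {resident: last monthly_distance}, return (winner, distance)."""
    winner = max(monthly_distances, key=monthly_distances.get)
    return winner, monthly_distances[winner]

def month_changed(previous_ts, current_ts):
    """Detect a month rollover from two (year, month, day) tuples."""
    return previous_ts[:2] != current_ts[:2]

# Example: pick August's winner once September starts (distances are made up)
distances = {"caterina": 52.3, "roommate2": 47.1, "roommate3": 12.9, "roommate4": 33.0}
```

On a rollover, the real code would run the equivalent of select_monthly_winner and INSERT the result into the winners table.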


Accessing the database itself - creating a local user for the competition service

Both the Competition Service and the main program access the database with a specific user and password. Since it is not advisable to use root itself, I will show how to:

  • Create a new database user
  • Grant permissions to this user
  • Check that the user is working

Let's begin... On a command prompt of the central node, we start the mysql client as the root user



Create a new database

As step 0, create the database to use:

> CREATE  DATABASE Competitiondb;

And start using it (~open):

> USE Competitiondb;


Creating a new MySQL user

To create a new local user, we input the following SQL command:

> CREATE USER 'userName'@'localhost' IDENTIFIED BY 'password';


To grant permissions (in my case SELECT, DELETE, CREATE, DROP {table}, INSERT, UPDATE {into table}):

> GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, DROP ON Competitiondb.* TO 'userName'@'localhost';


Testing the new user

To test this user, I will create a new mock table and then erase it. We can see the number of tables with the SHOW TABLES command (none at this point).





We keep the 'userName' and 'password' information to be used in any code accessing the database.


Updating Central Node GUI

Initial setup: Raspberry Pi 3 - Raspbian OS (Jessie) / SSH enabled / Mosquitto MQTT broker installed / MQTT subscriber client / Console interface / Python GTK interface / MySQL server / Apache2 web server / Competition Service version 1


The GUI is already prepared to host the current competition table (showing each resident's progress, organized with the best on top). More details on how it was done can be found in [PiIoT] Smart Competition Home #5: Central Node Upgrade


(*) last version of the Central Node Code


Read database and display

New File -

Existing file - has only one function, read_last_sample(table), performing two main actions:

  • Connect to the database - use the created SQL user to open a connection to the database
  • Read the last sample of the requested table
def read_last_sample(table_name):
    # Open a connection with the dedicated competition user
    db = MySQLdb.connect(host="this_host",       # your host, usually localhost
                         user="userName",        # your username
                         passwd="one_password",  # your password
                         db="CompetitionDB")     # name of the database

    # Cursor to db
    cur = db.cursor()

    # Select all rows of the requested table
    cur.execute("SELECT * FROM " + str(table_name))

    # Keep only the last row
    all_rows = cur.fetchall()
    last_row = all_rows[-1] if all_rows else None

    cur.close()
    db.close()

    return last_row

The main GUI will call read_last_sample(table) every time a GUI label is updated, refreshing the last values for each of the roommates.


Manage competition state and store new winner in database

New File -

Existing file -


In this case, the main_gui will detect when a new month starts and select the best resident of the previous one. The winner will be stored in the winners table, using a file very similar to the one above, though it executes an INSERT query.



GUI with the updated competition table


(What a coincidence... I am winning )



For the first time, I can say we have a "Smart Competition House" (a very basic one, yet). The house central node will display:

  • Smart house information - temperature, pressure, altitude, door state and alarm
  • Competition table - current distance traveled by each resident


The platform still lacks a lot of interactivity, though (we cannot see the competition state on the phone, and there is no current interface for the smart house either)

Going through my previous blog entries I realized I had not provided the follow-up information for my Farm Operations Center box that I had previously promised.


So here it is!  :-)


As I mentioned before, the medium-sized box worked perfectly as a container for the 7-inch touchscreen with the Pi3 attached.




As you can see here, the 4 squares line up perfectly with the metal frame of the touchscreen, making for an easy mounting of the F.O.C. to the plastic container.




Here we have the view of Camera 2, moved over to monitor the Duck Domain. With an easy swipe of a finger up or down I can look at any of the camera feeds!



From my top picture above you can see there is a good amount of space available to mount additional items, including but not limited to batteries. I had come across a very interesting-sounding battery option with pass-through capabilities advertised at Best Buy that I wanted to try. Of course, when I arrived there to purchase it, there was none to be found. Online it listed 3 in inventory, and even the store's internal inventory listed 1, but after an hour of waiting and searching through Best Buy I ended up heading over to Walmart again to pick up a small USB battery.


Sadly, the idea of pass-through charging only seems of interest to those with cell phones. Trying to explain to various techs what exactly I needed in a battery led to a lot of confused looks and nothing that helped me with this project.


You would think the idea of having a power source that can charge your phone/tablet/micro-USB item while also charging the battery itself would be a great thing to have. Why bring a charger and a battery if your battery can also charge your device while charging itself? Obviously a pass-through battery would be quite nice to add to the F.O.C. box, so I could just quickly pull the cord and head out to check things without having to power down and restart on the battery.


I did end up picking up a basic 4400 mAh battery with the idea of testing it, but while I was assured it had a 2.1 A output, the RPi3 with the 7-inch screen was not impressed with the power it actually provided and refused to work on that battery alone. So, a caveat to others: keep your receipts and packaging.


Once I find a pass-through battery that works for my setup, I will share it with Element 14.

There seems to be irony in the fact that my blog #13 comes with bad news. I was unable to get my EnOcean Pi to work with 3 different Raspberry Pis, leading me to wonder if I have a bad HAT. :-(


I worked for days on getting it to link with the Raspberry Pi 3 that runs my Farm Operations Center, but it would never show as recognized. Looking at other comments, there currently seems to be a problem between the Raspberry Pi 3 and this board, so I pulled out 2 different Raspberry Pi 2s to see if that would help.


Regardless of the Pi platform, the EnOcean Pi could not be seen. There are some very interesting write-ups on installing the EnOcean Pi, and they were quite helpful in trying to troubleshoot what was happening, but no success.


My desire to use the sensors is still very high, and I am considering ordering another EnOcean Pi and another set of sensors, since once it works I can see myself adding more inputs easily.




The reed sensor is what I want to tie into my sliding fowl doors, to let me trigger on a door being open. Energy harvesting and radio communications: a huge plus!


I had ordered a float sensor and was looking forward to seeing if I could tie it into the temperature card, since its data showed inputs that might be of use. Of course, having the current temperature around animals is always a plus.




The float sensor is to be installed in this watering system. The bottom bowl has a float-controlled water input. The 5-gallon bucket is where the float sensor would reside, to alert when the water is low. As you can see below the bucket, I have planned ahead by putting in a T. The capped part of the T will be routed over to my rabbit cages to give each rabbit its own individual water source via the chicken nipples I have previously shown. The blue bowl will be for my G.O.A.Ts. Depending on how quickly the water is used up, I may tie in another 5-gallon bucket as well. But that requires future monitoring.




Here is the one complete rabbit cage. You can see the individual PVC feeder tubes I have in place. I want to do something similar for the water, but have them all tied into the bucket system shown above. On the left-hand side you can see the baby box, with Momma Rabbit perched on top.


I am sad that this part of my plans did not work out, and I look forward to seeing if I can get another EnOcean Pi to fully implement the whole plan for the IoT Farm!

First off, I want to thank Element 14 and all of the incredible participants of this design challenge!


It was an experience I greatly enjoyed and want to continue to expand upon! The other participants had brilliant ideas and implementations and constantly made me wonder how I could further tweak and improve my own project. Plus, input from several of them actually helped me redirect some of my efforts over to a water-source monitoring system.


I am very pleased with how the Farm Eyes and the Farm Operations Center worked together to let me monitor my current areas of observation. I definitely look forward to implementing the sensors part of the project and hope that a new HAT will allow that to work. I will of course blog about it to share with Element 14!


My family and I thank Element 14 for giving me the chance to turn my ideas into reality. Please keep watching to see how it improves, and feel free to make suggestions based on your experiences or thoughts!



The challenge deadline is almost there, so it's time for a final demonstration! In this last post I'll show you the results of my work in a video and then describe what I did and what I'm planning to improve in the future.


Let's start with the fun part: I present you a relaxed evening at Thuis!



I'll continue this blog in the same style as my previous [Pi IoT] Thuis #9: Status Update and will give you the latest status of all the projects and use cases.


Open Source Projects

These are the projects I made available as open source on my GitHub account during this challenge (or will make available later). They are all set up so that they can be reused by others in their own home automation projects.





Z-Wave

As described in [Pi IoT] Thuis #7: Publishing activity from Z-Way to MQTT, messages are published for each status change. It also subscribes to the corresponding topics, so you're able to turn devices on and off through MQTT as well. The topics and devices used are completely configurable. With this, all major functionality is done. The last couple of weeks I made several improvements to the reliability. Scene activation is something that still needs some work, so the to-do list remains the same.



  • Publish a message on scene activation (e.g. used for each secondary push button on the wall)
  • Get it published on the Z-Way App Store; already uploaded in June, but still no response
  • Publish energy usage


Zway-MQTT on GitHub
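For readers wiring their own clients to such a bridge, here is a minimal sketch of how topic names and payloads might be composed. The prefix and topic layout here are assumptions for illustration, not Zway-MQTT's actual defaults (those are configurable):

```python
# Hypothetical topic scheme, loosely modelled on a configurable
# "<prefix>/<room>/<device>" layout; the real topics depend on your config.
def status_topic(room, device, prefix="homes/thuis"):
    """Topic on which a device's status changes would be published."""
    return "{0}/{1}/{2}/status".format(prefix, room, device)

def command_topic(room, device, prefix="homes/thuis"):
    """Topic the bridge would subscribe to for on/off commands."""
    return "{0}/{1}/{2}/set".format(prefix, room, device)

def command_payload(turn_on):
    """Payload for switching a device on or off."""
    return "on" if turn_on else "off"
```

A client such as paho-mqtt would then publish `command_payload(True)` to `command_topic("kitchen", "light")` to switch the light on.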




Chef

To make sure I can use Chef to fully install the Raspberry Pi 3, I needed to update a few recipes and create a completely new one for Z-Way. This was a major hurdle and took more time than expected, but I learned a lot from it. I hope that in [Pi IoT] Thuis #5: Cooking up the nodes – Thuis Cookbook you can also find something new for yourself. In [Pi IoT] Thuis #14: Home Theater part 1: CEC I made some small updates as I installed the node connected to the home theatre system by HDMI.


Chef-Zway on GitHub





Plex

Plex doesn't allow me to add a plugin directly to the server, but there is an API and WebSockets for status messages. Both are implemented, as described in [Pi IoT] Thuis #16: Home Theater part 3: Plex. It is mostly implemented as a Java library with a similar setup to the one I'm using for integrating Java and MQTT. As I still have to clean up the project, it will be published at a later stage.



  • Publish the code on GitHub





The library for using CEC (Consumer Electronics Control) in Java was developed about 10 months ago and performs the most common functionality: monitoring standby status, turning devices on/off, changing volume and changing outputs. Now it's also integrated with Thuis and is available through MQTT. For more information please visit [Pi IoT] Thuis #14: Home Theater part 1: CEC.


CEC-CDI on GitHub




In [Pi IoT] Thuis #10: MQTT User Interface components for iOS the MQTT UIKit was developed. It now provides a tile-based interface with elements updated automatically based on MQTT messages. Several default UIKit elements were also extended with MQTT functionality in [Pi IoT] Thuis #16: Home Theater part 3: Plex.



  • Publish the code on GitHub


Use Cases


Light when and where you need it



Sensors are placed in both the kitchen and the entrance room. The Core knows about them and, as described in [Pi IoT] Thuis #8: Core v2: A Java EE application, rules are defined to turn the lights in those rooms on and off depending on movement and time. This works pretty well already!



  • Further optimize the rules
  • See if improvements can be made by using iBeacons
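The movement-and-time rule described above boils down to a simple predicate. A minimal sketch, where the active window boundaries are made-up examples rather than the actual Thuis rules:

```python
from datetime import time

def lights_should_be_on(motion_detected, now,
                        active_from=time(7, 0), active_until=time(23, 0)):
    """Return True when the lights in a room should be switched on."""
    # Only react to movement inside the active window; outside it
    # (e.g. during the night) movement is ignored.
    return motion_detected and active_from <= now <= active_until
```

A rule engine would evaluate this on every sensor event and publish the matching on/off command.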



Welcome home


The iBeacons are placed at several locations in the house, providing good coverage to detect when you're arriving home. When you arrive, a notification is sent, which allows you to directly start up the home theatre system.


You can read about this in [Pi IoT] Thuis #13: Presence monitoring using iBeacons.


Home Cinema



The Z-Wave hardware for the home cinema is in place (using a 6-socket PowerNode), so it can be turned on and off. Using the above mentioned Plex and CEC integration we can fully manage the home theatre system. An extra Raspberry Pi 1B was placed next to the TV to control devices through HDMI CEC. This is described in [Pi IoT] Thuis #15: Home Theater part 2: controls. The Ambilight will be finished at a later stage.



  • Add and integrate a DIY ambilight


Mobile & On-The-Wall-UI


iPad app

The app is running and fully functional. As you can see in [Pi IoT] Thuis #11: Final implementation UI design, you can manage devices in the house and see their latest status. A wish is still to add speech control to the iPad app. In the kitchen we have an iPad mounted on a cabinet as well, for which I would like to create a separate app with additional features like a cooking timer and a recipe browser (my girlfriend supports this idea a lot :). Another option is to use a Raspberry Pi plus display, integrated in a cabinet door.



  • Add speech commands
  • Create a custom app for the kitchen (either iPad or web)


Wake-up light


Work hasn't started yet on the wake-up light, as one of the key components (the MOVE) has not been delivered. And as this is an Indiegogo project, it's still not certain when it will be delivered. I did experiment with emulating a Hue bridge to make sure Sleep Cycle can communicate with Thuis, but unfortunately I could not get this working properly yet. Nevertheless, one to-do is fully fixed: the main light in the bedroom is now dimmable through Z-Wave.



  • Sleep Cycle doesn't have a web hook available yet, so it's still needed to set up a Philips Hue bridge
  • Install and integrate the MOVE


Manual override


Wall switches

Most lights can already be switched manually using the buttons on the walls. Some of them should however be switched using the secondary button, which does a scene activation. I still have to add support for this to the Zway-MQTT.



  • Add support for secondary buttons in Zway-MQTT


Energy monitoring & saving


For energy monitoring I only did some research so far. InfluxDB seems to be a good candidate for storing the data. Unfortunately I wasn't able to work on this use case during the challenge, but I'll come back to it at a later stage.



  • Let Zway-MQTT publish energy usage
  • Integrate YouLess to record total energy usage of the house
  • Create reports based on the usage
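Since InfluxDB is the candidate store, a data point in the shape accepted by the influxdb-python client's `write_points()` would look roughly like this. The measurement, tag and field names are invented for illustration, not taken from the project:

```python
def energy_point(source, watts, timestamp):
    """Build an InfluxDB point dict for one energy reading.

    The "energy_usage" measurement and the tag/field names are
    hypothetical choices, not the project's actual schema.
    """
    return {
        "measurement": "energy_usage",
        "tags": {"source": source},
        "time": timestamp,
        "fields": {"watts": float(watts)},
    }
```

A list of such dicts could then be handed to an `InfluxDBClient` for storage and later reporting.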



I already mentioned some future plans, a few of those I want to highlight.


Wake up light

It would be great to wake up with light that feels like a sunrise. In the summer this can be managed by actually letting the sun in by raising the curtains; in the winter, by using an electric light. For this use case I'm currently very dependent on external parties, which is the reason this part of Thuis is postponed to a later stage.


Kitchen control

Tools that are used a lot in the kitchen are a timer and a recipe browser. The plan is to integrate both of these into an easy-to-use app which is always available on one of the kitchen cabinet doors. It can be used to override the automatic schedule as well, for example when we get home later than usual and still want full light for cooking.



Ambilight

A few years back I built an Ambilight for my TV. However, it is based on an Arduino connected to a Mac Mini and is therefore only usable when the Mac Mini is the source of the video. As we mainly use an Apple TV nowadays, the Ambilight can't be used. I will use an HDMI splitter and grabber connected to a Raspberry Pi 2B to replace the Mac Mini and make it possible to enable the Ambilight for videos from all sources.



  • Using presence information for improved automation
  • Saving usage data and energy usage to a database for data mining
  • Integrate a robotic vacuum cleaner
  • Add voice control



It feels weird, but with this paragraph my last blog of this challenge comes to an end. Over the last couple of months I've been able to set up a very nice home automation system at my house. It was a hard job to get everything done on time, and especially to describe all of it in writing, but I've managed well and I've enjoyed the process a lot! I enjoyed reading the blogs of the other challengers as well; great to see so many great ideas! Thanks again to element14 for selecting me as a sponsored challenger and for giving me the inspiration and motivation to work on Thuis!


A lot of online sources were used in creating my project. Though the sources have been linked in the relevant posts, I have summarised the complete list per subject here for your convenience.




Raspberry Pi


Automatically copy "wpa_supplicant" file
Getting Raspberry Pi 3 UART to work
I2C Level shifting: Is level shifting really needed for I2C?
Disabling Pi3 onboard LEDs
Installing Chromium browser on Pi: How to get Chromium on raspberry 3 - Raspberry Pi Stack Exchange




Puppet Documentation
Puppet Keynote by Luke Kanies: Puppet Camp London


Voice Control


Voice Control project on Raspberry Pi using PocketSphinx

Raspberry Pi 3 Voice recognition performance

RoadTest Review a Raspberry Pi 3 Model B ! - Review

Various text-to-speech solutions for Raspberry Pi

RPi Text to Speech (Speech Synthesis)


Sense HAT


AstroPi Official Website
Sense HAT generic information

Sense HAT Python API

Calibrating Magnetometer
Joystick Keycodes: Key codes - Qi-Hardware
Negative temperatures issue


Pi Camera


Enabling Pi Camera support via command line, without "raspi-config"

raspicam - How can I enable the camera without using raspi-config? - Raspberry Pi Stack Exchange

Video Surveillance OS for SBCs
Pi Smart Surveillance project: Raspberry Pi Smart Surveillance Monitoring System
MJPEG Streamer for SBCs


OpenHAB 2


Official Website: openHAB
Hue binding
Weather Binding
OH1 addons in OH2




Official Website
Previous Challenge using EnOcean sensors

Forget Me Not Design Challenge

Visualise EnOcean sensors telegrams via command line

EnOceanSpy by hfunke

ESP3 Specification: Specification for EnOcean Serial Protocol 3 (ESP3)


Energy Monitoring


Open Energy Monitor Official Website
emonPi Kickstarter
emonSD Software Image




Python LED backpack library
I2S Audio Amplifier
Trellis Keypad




What is the ShapeOko 2: ShapeOko 2 - ShapeOko
What is the gShield
CNC Software


It's been a tough, stressful, but certainly fun three months competing in this challenge. As if the challenge itself wasn't challenging enough, I also moved house halfway through the challenge. Though the move was more time-consuming than originally anticipated, I managed to complete most of the objectives I had originally set.


This is my final post for element14's Pi IoT Design Challenge, summarising and demonstrating my project builds.




The following features were implemented, making several rooms smarter:

  • configuration management
  • monitoring
    • contact (doors or windows)
    • temperature
    • energy
    • video
    • key presence
  • control
    • lights
    • music
    • voice



Unfortunately I couldn't crack the code of my domotics installation yet, but help seems to be on the way.





To accommodate all of the above mentioned features, five different devices were created:

  • a smart alarm clock
  • a touch enabled control unit
  • a smart key holder
  • two IP cameras
  • an energy monitor

Energy Monitor


The energy monitoring device makes use of an open-source add-on board for the Raspberry Pi, called emonPi. Using clamps, it is able to measure the current passing through a conductor and convert it into power consumption. I combined the emonPi with a Raspberry Pi Zero and two current clamps: one to measure the power consumption of the shed, the other for the lab. This can of course be applied to any room, as long as the clamp is attached to the proper conductor.
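The conversion from clamp reading to power can be sketched as follows. The fixed mains voltage and unity power factor are assumptions for illustration; an emonPi can also measure the real voltage with an AC-AC adapter:

```python
def estimated_power(i_rms, v_rms=230.0, power_factor=1.0):
    """Estimate power in watts from a CT clamp's RMS current reading.

    Assumes a fixed mains voltage (230 V, typical in Europe) and a
    power factor of 1; both are simplifications, as real loads and
    grids vary.
    """
    return i_rms * v_rms * power_factor
```

For example, a reading of 2 A at 230 V suggests roughly 460 W being drawn.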


Want to know more about emonPi?:


IP Camera


Two IP cameras were installed for live monitoring: one in the lab, and one in the shed. Both make use of the Raspberry Pi Zero v1.3 with camera port. The video stream is converted to MJPEG and embedded in OpenHAB in the matching view.



Key Holder


A mini build which was not originally foreseen, but which I thought would fit nicely in this challenge. The concept is simple: there are four connectors to which keys can be attached. When a key is attached, a GPIO pin changes state, reporting the change to the control unit.


A future improvement could be to either use a different connector per key, or make use of different resistors and an ADC to know which key is inserted where.
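That resistor-plus-ADC idea could work along these lines: each key fob carries a distinct resistor in a voltage divider, and the ADC reading identifies which key is inserted. The centre readings below are purely hypothetical values for a 10-bit converter:

```python
# Hypothetical centre ADC readings (10-bit, 0-1023) for four keys,
# each carrying a different resistor in a voltage divider.
KEY_READINGS = {"front door": 150, "back door": 400, "shed": 650, "car": 900}

def identify_key(adc_value, tolerance=50):
    """Map a raw ADC reading to a key name, or None when nothing matches.

    The tolerance absorbs resistor tolerance and ADC noise; in practice
    you would pick resistor values far enough apart that the windows
    never overlap.
    """
    for key, centre in KEY_READINGS.items():
        if abs(adc_value - centre) <= tolerance:
            return key
    return None
```

With well-spaced resistor values, a single ADC channel can then distinguish all four keys on one connector type.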


The full project is described in a dedicated blog post:


Alarm Clock


The idea of the smart, voice-controlled alarm clock started in 2014. The result was a functional prototype, but too slow and bulky to be really useful. This challenge was the perfect opportunity to revisit this project, and I'm quite happy with the way it turned out!


Here's a side-by-side comparison:



The original Raspberry Pi 1 B with Wolfson audio card has been replaced by the new Raspberry Pi 3 B with USB microphone and I2S audio module. The difference in performance is incredible. The result is a near real-time, voice controlled device capable of verifying sensor status, fetching internet data such as weather information or even playing music.


Most of the work was done for this device, and simply reused by the others. The posts cover voice control, setting up OpenHAB, controlling displays, and much more:


Control Unit


The Control Unit has the same guts as the alarm clock: I2S audio, USB microphone, speaker, Raspberry Pi 3, etc ... It does however add a keypad and touch screen, allowing control via touch on top of voice. The keypad switches between different webpages on the touch screen, which is locked in kiosk mode.


The touch screen can be used to trigger actions, visualise historic data (power consumption, temperature), consult the weather, etc ...




You can find the relevant posts below:




Various demonstrations were already made over the course of the challenge. But as this is a summary post, I've created a video showcasing the entirety of the project. Hope you like it!




Because this project wouldn't have been possible without the plethora of online content and tutorials allowing me to combine and modify functionality to give it my own twist, I am publishing all the code created as part of this challenge in a dedicated GitHub repository. You can find it here:


The repository contains the Python scripts, Puppet modules and diagrams, all categorised in a way I thought would make sense. I will make sure the repository is updated as soon as possible!




I'd like to thank element14, Duratool, EnOcean and Raspberry Pi Foundation for organising and sponsoring another great challenge. It's been a wild ride, thank you! I would also like to thank element14Dave, fellow challengers and members for their input and feedback over the course of the challenge. Finally, a big thank you to my wife and kids for allowing me to participate and even help me do the demonstrations!


Time for some rest now, and who knows, perhaps we'll meet again in a future challenge.




Navigate to the previous post using the arrow.

Here is a quick project summary of the Pi3 Control Hub project created for the Pi IoT Design Challenge - Smarter Spaces with Raspberry Pi 3. The idea was to create a hub-and-spoke network of Raspberry Pis, with the Pi 3 as the hub running Home-Assistant, a powerful open-source home automation platform, and a few other versions of the Pi used as the spokes:

  • Pi B+ and EnOcean sensor Kit and Module, used to automate the blinds
  • Pi Zero and Pi Noir Camera 2 to build a security camera, with Motion installed for intruder detection (basically to catch the raccoons overturning garbage bins)
  • Pi A+ with a servo motor to build a key-less door entry system, which can be unlocked with a secret password via a simple python-flask app

Here is a video demo, of some of the features implemented as part of the project


As part of the Hub, here are a few features I plan on using on a day-to-day basis:

- Controlling the Hue light bulbs using Home-Assistant

- Checking weather conditions on the Sense HAT before leaving for work in the morning

- Using the Pi Camera connected to the Hub to monitor stuff, like a print job running on my 3D printer, by opening Home-Assistant on a tablet

- Checking the outside temperature and comparing it to the temperature from the Yahoo weather API; this will be handy in the winter.

- Checking the picture gallery of detected intruders, basically checking whether I was able to catch some raccoons in action.

- Checking if my aunt/mother visited me while I was away, given that they both have a spare key. Here the EnOcean magnetic contact switch connected to the door will log an entry in the Home-Assistant history, which I can check in the evening.
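That last check could even be automated with a few lines that filter the logged door messages. This is just an illustration: it assumes the door topic publishes "ON" when opened (matching the payloads configured later in this series) and that the log is available as (timestamp, payload) pairs:

```python
def door_open_events(events):
    """Return the timestamps at which the door was opened.

    `events` is a list of (timestamp, payload) pairs as logged from a
    hypothetical door topic, where payload "ON" means the door opened.
    """
    return [ts for ts, payload in events if payload == "ON"]
```

Feeding it a day's log would directly answer "did anyone come by, and when?".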




Here are the links to the various blogs, with a brief description

Spoke 1 - Security Camera


(in the image above you see the attached EnOcean temperature sensor, which is used to send the outside temperature to the Hub, aka Home-Assistant on the Pi 3, via MQTT)

For this spoke, a Raspberry Pi Zero with a camera connector was used together with a Pi Noir camera V2, and we set up Motion for intruder detection.

Pi Control Hub: Spoke 1 :Security Camera - setting up Motion to stream video

Using the Single File PHP Gallery, we create a gallery of pictures that you can access from the Pi Zero via a web browser on your laptop:

Pi Control Hub: Spoke 1 :Security Camera (continued)- Photo gallery of the intruders

I also designed and 3D-printed an enclosure which kind of looks like a security camera; you can find the STL files at

Pi Control Hub: Spoke 1 :Security Camera  -- STL files to 3D print



Spoke 2 -  Blinds Automation


This was the most interesting and challenging spoke to put together, considering it was the first time I was using the EnOcean Sensor Kit and module. This meant I first ran some basic tests using FHEM and tried blinking a few LEDs when the EnOcean push button was pressed, the temperature module detected a temperature, or the magnetic contact was opened/closed, all of which were connected to the Raspberry Pi B+ via the EnOcean module.

Pi Control Hub:Spoke 2:Blinds Automation-- Setting up EnOcean Sensor and Blinking LEDs

To open and close the blinds, the plan was to use a gear motor driven by the Sparkfun motor driver when an EnOcean push button was clicked:

Pi Control Hub:Spoke2:Blinds Automation(continued)--Driving Motor with EnOcean PushButton

Lastly, we 3D-printed an enclosure for the Pi B+ and EnOcean module and a mount for the gear motor; the STL files can be found at

Pi Control Hub:Spoke2:Blinds Automation(continued)-3D Printing Holder and Motor mount


Blinds Automation using Raspberry Pi and EnOcean Sensor Kit



Spoke 3 -  Key-Less Door Entry


For this spoke, a Pi A+ was used with a servo motor. In addition, a python-flask application was written and hosted on the Pi A+, connected to my home WiFi. This app is used to unlock the back door with a secret password which is set up in the code.
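The heart of such an app is just a password check gating a servo move. Here is a minimal sketch of that core logic, with an invented placeholder password and typical 50 Hz hobby-servo duty cycles; the actual flask routes and values in the project will differ:

```python
import hmac

SECRET_PASSWORD = "open-sesame"  # placeholder; the real one lives in the app's code

def unlock_allowed(submitted):
    """Check the submitted password against the configured secret."""
    # Constant-time comparison, so the check doesn't leak timing information.
    return hmac.compare_digest(submitted, SECRET_PASSWORD)

def servo_duty_cycle(unlocked):
    """Duty cycle to drive the lock servo to the requested position.

    Typical RPi.GPIO software-PWM values at 50 Hz for a hobby servo:
    about 2.5% for 0 degrees (locked), about 12.5% for 180 degrees
    (unlocked); tune these for your servo.
    """
    return 12.5 if unlocked else 2.5
```

A flask view would call `unlock_allowed()` on the posted password and, on success, drive the servo with `servo_duty_cycle(True)`.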

Pi Control Hub:Spoke3: Key-less Door entry-testing the Servo

The challenge with this spoke was designing the 3D-printable parts to fit the door knob and the servo mount; if you plan to replicate this, you can find the STL files at

Pi Control Hub:Spoke3: Keyless Door entry STLs


Key-less Door Entry using the Raspberry Pi



For the Hub, as mentioned above, Home-Assistant was installed on the Raspberry Pi 3, and as part of the first blog we set up a lot of sensors, including:

- Weather from the Yahoo API

- Bandwidth speed test

- Monitoring your favorite Twitch channels

- Getting today's value of Bitcoin

- Getting the Pi 3 CPU usage and disk space usage



In addition, we set up the Philips Hue bridge and created an automation rule to switch on the lights when you are at home, using your phone's Bluetooth.

Pi Control Hub: Setting up Home Assistant+Controlling  Philips hue

Home Assistant setup on Raspberry Pi3 - testing Philips Hue bulbs


Now, no home automation project is complete without music, which meant setting up Mopidy with a couple of speakers and Adafruit's Stereo 3.7W Class D Audio Amplifier; for the circuit and commands check out

Pi Control Hub: Music with Mopidy and Home Assistant

Mopidy setup with Home Assistant on a Raspberry Pi 3


The next blog shows how to set up the Pi Camera V2 in Home Assistant to stream video, and also integrate the feed from the security camera spoke:

Pi Control Hub: Integrating Camera's in Home Assistant


To develop the case to fit all the electronic components, a mix of 3D printing and basic woodworking with a Dremel was used, and we also installed Chromium on raspbian-jessie:

Pi Control Hub: The HUB


And as part of the final blog, we integrated the EnOcean temperature sensor and the magnetic contact sensor, which mimics the door opening and closing, into Home-Assistant by installing the Mosquitto MQTT broker on the Pi 3 and an MQTT client on the Pi B+ connected to the EnOcean module.

Pi Control Hub : Getting EnOcean Sensor data to Hub via MQTT


(In the screenshot above, the left-hand side is the terminal running the program that gets the EnOcean temperature and magnetic contact sensor values on the Pi B+ connected to the EnOcean module, plus the FHEM event monitor in the browser. On the right is the Home-Assistant dashboard, installed on the Pi 3, aka the Hub, showing the two sensors: the EnOcean temperature in the circle in the middle, and the Door, representing the magnetic contact sensor, in the last circle on the right.)

Old MacDonald had a farm and on his farm he grows crops, which he likes to monitor using the Internet of Things. Therefore he is in need of a camera capable of doing that. Luckily element14 came up with a challenge in which such a camera is developed. This page gives a summary of the project.


The project focuses only on the development of the Plant Health Camera: a Thing which, since it runs on a Raspberry Pi, can easily be connected to the Internet.

Inspiration on how to achieve that can be found in other submissions, for instance the IoT Farm from John Kutzschebauch.
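The vegetation indices the camera computes (BNDVI and GNDVI, covered in post #9 of the series) both follow the standard normalized-difference pattern: the difference of the near-infrared channel and a visible channel, divided by their sum. A minimal sketch of that calculation for a single pixel:

```python
def normalized_difference(nir, band):
    """Generic (NIR - band) / (NIR + band) vegetation index.

    With the green channel this gives GNDVI, with blue it gives BNDVI.
    Inputs are per-pixel reflectance values (or raw intensities);
    the result lies in [-1, 1], with healthy vegetation scoring high
    because it reflects NIR strongly and absorbs visible light.
    """
    denom = nir + band
    return 0.0 if denom == 0 else (nir - band) / denom
```

In the actual camera the same formula is applied element-wise over whole image arrays (e.g. with NumPy) rather than per pixel.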




Video summary


List of all project blog posts

[Pi IoT] Plant Health Camera #1 - Application

[Pi IoT] Plant Health Camera #2 - Unboxing

[Pi IoT] Plant Health Camera #3 - First steps

[Pi IoT] Plant Health Camera #4 - Putting the parts together

[Pi IoT] Plant Health Camera #5 - OpenCV

[Pi IoT] Plant Health Camera #6 - Putting the slave Pi to work

[Pi IoT] Plant Health Camera #7 - Synchronizing the cameras

[Pi IoT] Plant Health Camera #8 - Aligning the images

[Pi IoT] Plant Health Camera #9 - calculating BNDVI and GNDVI

[Pi IoT] Plant Health Camera #10 - connecting the Master and Slave Pi

[Pi IoT] Plant Health Camera #11 - Finalization


Also a nice intermediate summary made by Charles Gantt:

Design Challenge Project Summary #23: Plant Health Camera **Final Update**




I hope you all enjoyed the project, I'm open for any questions and comments.

Best regards,


It's been a fun project and I hope that you were able to glean some useful tidbits to use in your own projects.

HangarControl Episodes

  1. Pi IoT - Smarter Spaces with Raspberry Pi 3 - Hanger Control System
  2. Hangar Central #2: Developing Without Your Pi
  3. [Pi IoT] Hangar Central #3 -- Unboxing the Challenge Kit
  4. [Pi IoT] Hangar Central #4
  5. [Pi IoT] Hangar Control #5 -- Raspberry Pi Kiosk
  6. [Pi IoT] Hangar Control #5.1 -- Raspberry Pi Kiosk, The Movie
  7. [Pi IoT] Hangar Central #6, Minimal Web Application
  8. [Pi IoT] Hangar Control #7, The Message Queue
  9. [Pi IoT] HangarControl #8, Database? What's a Database?
  10. [Pi IoT] HangarControl #8.1 -- What's a database? The Movie
  11. [Pi IoT] Hangar Control #9, Operator's Web Interface
  12. [Pi IoT] HangarControl #10, Preheat via Text Messaging


Here are a couple of pictures of the HangarControl master hub, and then the first of many remote heater controls.

Hangar Central


The Full Monty


Remote Hangar Control

This is a package of a Raspberry Pi 2 and a 25A solid state relay connected to a duplex outlet.

Remote 1

Remote (top down)Remote (side view)


Thank you all for reading (and watching) along this adventure. There's so much more to write about, but it looks like this season has come to an end.


Best regards,

Rick Havourd

Now that the web interface is up and running, it is time to add the final piece: operating the hangars via text messaging!

Text Messaging Gateway

I'm going to use Twilio to provide a link between the SMS (text messaging) world and the web-based internet world. Twilio provides a plethora of tutorials and help, as well as a free phone number for your development projects. Once you have signed up and have your phone number and API credentials, come on back and follow along.


(Yet Another) Minimal Application

It seems that everyone has a sample application that shows how easily their product works, but when you try integrating it into your own project, it all falls apart. I am going to show you how HangarControl implemented text messaging so you can see Twilio in a larger context. I have attached the application so that you can download and view it in its entirety.


# Use Flask for our "web" interface. In this case "web" is going to return XML rather than HTML.
import flask

# Twilio provides libraries for most of the popular languages.
from twilio import twiml

app = flask.Flask(__name__)


Credentials, not to be shared

I'm using my Twilio credentials set in environment variables so that they don't get published! Either in your login script (most likely .bashrc) or possibly your FastCGI init script, set a couple of environment variables that your programs will read upon execution. This allows you to share code such as this without giving away your private information. If you're never going to show it to someone else, then you can just replace the following with variable assignments like: app.config['account_sid'] = "your SID goes here"

import os
app.config['account_sid'] = os.getenv('ACCOUNT_SID', '<SID provided by Twilio>')
app.config['auth_token'] = os.getenv('AUTH_TOKEN', '<Secret auth token provided by Twilio>')


Receiving SMS Commands

# We only have a single endpoint and will parse the string sent by Twilio to determine user intent
@app.route('/', methods=['GET', 'POST'])
def incoming_sms():
    attr = None
    message = flask.request.values.get('Body', '"?"')
    message = message[1:len(message) - 1]

    ary = message.split(" ")
    cmd = ary[0]
    if len(ary) > 1:
      attr = ary[1]

    if cmd in ["heat","preheat"]:
      rtn = preheat(attr)
    elif cmd in ["cancel","off"]:
      rtn = stop(attr)
    elif cmd in ["list","status"]:
      rtn = status(attr)
    else:
      rtn = "Ask one of 'heat <hangar>', 'stop <hangar>', 'status', '?'"

    r = twiml.Response()
    r.message('HangarControl "{1}" {2}\n{0}'.format(rtn,message,cmd==u"preheat"))

    return flask.Response(str(r), mimetype='text/xml')


Processing the commands

Once we have pulled apart the command and any parameters from the Twilio SMS message, the program dispatches to one of the following methods. I intentionally used these barebones methods to demonstrate what you would need for your own project. I will follow up with actual code, but didn't want to overly complicate this presentation. Each method performs the requested action and then returns a string that will be packaged up and returned via text message to the pilot.

def preheat(hangar=None):
  return "Preheating {0}".format(hangar)

def stop(hangar=None):
  return "Turn off heater in hangar {0}".format(hangar)

def status(attr=None):
  return "Status {attr}".format(attr=attr)



Using HangarControl with Text Messages

Getting Help


Status of Hangars


Preheat an Engine


Status While Preheating

After heating


SMS In a Nutshell

Believe it or not, that's all you need to do to implement text messaging in your applications. I'd heartily recommend adding SMS capabilities to your next project!


Best of luck,


As part of this blog post, we are going to get the values of the EnOcean temperature sensor and the magnetic contact switch (attached to the door to check whether it is open or closed) into Home-Assistant, which is installed on the Hub. The EnOcean module is connected to the Raspberry Pi B+ which we used to automate the blinds; the Pi Control Hub:Spoke2:Blinds Automation blog post shows how to install the FHEM home automation server and run the python program that uses the FHEM server's telnet port to rotate a gear motor that opens and closes the blinds.


To send the EnOcean temperature and magnetic contact values to the Hub via MQTT, we will have to install an MQTT broker on the Hub's Pi 3, for which we will use Mosquitto (an open source MQTT v3.1 broker), and install an MQTT client on the Pi B+ attached to the EnOcean module used for the blinds automation setup.



In the screenshot above, the left-hand side is the terminal running the program that gets the EnOcean temperature and magnetic contact sensor values on the Pi B+ connected to the EnOcean module, plus the FHEM event monitor in the browser. On the right is the Home-Assistant dashboard, installed on the Pi 3, aka the Hub, showing the two sensors: the EnOcean temperature in the circle in the middle, and the Door, representing the magnetic contact sensor, in the last circle on the right.

{gallery} Integrating EnOcean Temperature and Magnetic Contact sensor with Home-Assistant


Magnetic Contact Transmitter Module connected to my front door, currently just using tape ..


Magnetic contact value showing the door Open/Closed; here red corresponds to closed


EnOcean Temperature Sensor module to be mounted on the Security camera, to get the outside temperature reading


Temperature captured in Home-Assistant from the EnOcean module.


FHEM event monitor screenshot


Here are the steps to follow

#1 Install Mosquitto on the Hub's Pi 3

       To use the new repository, first import the repository package signing key using the command below


           sudo apt-key add mosquitto-repo.gpg.key

         Then make the repository available to apt

            cd /etc/apt/sources.list.d/

           sudo wget

         Now, to install Mosquitto on the Raspberry Pi, run an update followed by an install

            sudo apt-get update

            sudo apt-get install mosquitto


#2 To test the setup, we will also install mosquitto-clients

           sudo apt-get install mosquitto-clients

#3 Run a quick test locally on the Pi 3

     Open two terminals; in one window, subscribe to a topic using the command

       mosquitto_sub -d -t topic_test

     And in the second terminal window, send a message to the topic

        mosquitto_pub -d -t topic_test -m "Hello Pi3"



#4 As part of Home-Assistant, add the following to the configuration.yaml file

   To integrate MQTT into Home Assistant, add the following section

mqtt:
  broker: <IP address of your Pi 3>
  port: 1883
  client_id: home-assistant-1
  keepalive: 60
  protocol: 3.1

     where broker is the IP address of your Pi

     For more info check out -


    In addition, add the following to the configuration.yaml file, under the binary_sensor section, as a door OPEN/CLOSE sensor

- platform: mqtt
  state_topic: "home/door"
  name: "Door"
  qos: 0
  payload_on: "ON"
  payload_off: "OFF"
  sensor_class: opening


    And to get the value of the EnOcean temperature sensor, add the following under the sensor section

- platform: mqtt
  state_topic: "home/temperature"
  command_topic: "home/temperature"
  name: "EnOcean Temperature"
  qos: 0
  unit_of_measurement: "°C"



#5 On the Pi B+, install the Paho MQTT client

     Paho makes communicating with an MQTT server installed on the Pi 3 very simple and can easily be used as part of a python program. We will install paho-mqtt using pip:

       sudo apt-get install python-pip

       sudo pip install paho-mqtt


#6 Let's run a simple Python program on the Pi B+ to check that we are able to send data to the MQTT broker on the Pi 3

      Here is a sample Python program. Change the hostname value to the IP address of the Pi 3 (aka the Hub), and the topic values should match the values that we entered in the configuration.yaml file above.


import paho.mqtt.publish as publish
import time

print("Sending EnOcean Magnetic Contact value")
publish.single("home/door", "ON", hostname="")
print("Sending EnOcean Temperature value")
publish.single("home/temperature", "28", hostname="")
time.sleep(1)  # short pause so each update is visible on the dashboard

print("Sending EnOcean Magnetic Contact value")
publish.single("home/door", "OFF", hostname="")
time.sleep(1)
print("Sending EnOcean Magnetic Contact value")
publish.single("home/door", "ON", hostname="")

print("Sending EnOcean Temperature value")
publish.single("home/temperature", "26", hostname="")

       Now, when you run the program, you should see the sensor values for the EnOcean temperature and door update on the Home Assistant dashboard




#7 Run the following program on the Pi B+ to push values from the EnOcean sensors to the Pi 3

import telnetlib
import paho.mqtt.publish as publish
import time

# Connection details for the fhem server installed on the same Pi
# For the telnet details check out URL - http://IpAddressOfPi:8083/fhem?detail=telnetPort
HOST = ""
PORT = 7072
tell = telnetlib.Telnet()
# Connect to the fhem server
tell.open(HOST, PORT)
# Send the command instructing the fhem server to stream event data
tell.write("inform on\n")

def string_after(s, delim):
    return s.partition(delim)[2]

while True:
        # Read up to the next newline
        output = tell.read_until("\n")
        # Check the value of the Magnet Contact Transmitter module - door open/close
        if "contact" in output:
                print(output)
                if "closed" in output:
                        print("Magnetic contact closed")
                        print("Sending EnOcean Magnetic Contact value - ON")
                        # Send the door open/close value to the topic; change the
                        # hostname to the IP of the Pi running the broker
                        publish.single("home/door", "ON", hostname="")
                else:
                        print("Sending EnOcean Magnetic Contact value - OFF")
                        publish.single("home/door", "OFF", hostname="")
        # Check the temperature sensor
        # If you get the error "No EEP profile identifier and no Manufacturer ID", wait for the sensor to charge
        if "sensor" in output:
                        print(output)
                        delim = "temperature:"
                        print(string_after(output, delim))
                        print("Sending EnOcean Temperature value")
                        publish.single("home/temperature", string_after(output, delim).strip(), hostname="")


Once you're done with testing, set the program up in crontab so it runs continuously.
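For example, a crontab entry along these lines would start the bridge at boot (the script path and filename here are placeholders, not the actual names from this project — adjust them to wherever you saved the program above):

```shell
# Edit the pi user's crontab
crontab -e

# Then add an entry like this one (path/filename are assumptions):
# @reboot /usr/bin/python /home/pi/enocean_mqtt_bridge.py >> /home/pi/enocean_mqtt.log 2>&1
```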


In the screenshot above, the left-hand side is the terminal running the program above on the Pi B+ connected to the EnOcean module, and the right is Home Assistant installed on the Pi 3, aka the Hub.

Managing Hangars Via the Web

There are many different individuals who will need access to the HangarControl system. Pilots may be at home, at their office, or on the road. With that in mind, I wanted to create a single web interface that would accommodate a variety of browsers, including desktop, tablets, and smartphones. To ease the pain of coding HTML for all of the different platforms, I chose to use jQuery Mobile for the front-end toolset. Don't let anyone fool you -- there is still quite a learning curve when using "the easiest way to build sites and apps that are accessible on all popular smartphone, tablet and desktop devices!" That quote is from the jQuery Mobile website. I referred to it often, the quote that is, when I needed to convince myself that front-end coding could be even more difficult. Enough of that, let's move on to the structure.


Initializing a New Hangar

There are many hangars and aircraft that we need to be able to preheat. Once the system is installed, I did not want to have to return each time there was a change to the configuration. The "auto discovery" capabilities in xPL provide for this.

New Node Turns On

Each node (Raspberry Pi + Heater Relay) is flashed with the same image. A node broadcasts its presence and then waits for messages. The web interface of HangarControl provides an Administrator a link to configure each node. Note the "Unconfigured Hangar" below.

Unconfigured List
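For reference, that presence broadcast is a standard xPL heartbeat: a plain-text message sent over UDP broadcast to port 3865, which the hub uses to (re)register the node. A minimal sketch of building and sending one (the source identifier below is hypothetical, not the actual HangarControl one):

```python
import socket

def xpl_heartbeat(source="hangar-node.49568296", port=3865):
    """Build a minimal xPL hbeat.app status message (field values illustrative)."""
    msg = ("xpl-stat\n{\nhop=1\nsource=%s\ntarget=*\n}\n"
           "hbeat.app\n{\ninterval=5\nport=%d\n}\n" % (source, port))
    return msg

def broadcast(msg, port=3865):
    # xPL messages travel over UDP broadcast; every xPL hub on the
    # segment sees them and registers the sender.
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
    s.sendto(msg.encode("ascii"), ("255.255.255.255", port))
    s.close()
```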



Configure a Hangar

Clicking on the "49568296" link brings us to a page where the Administrator can name the hangar and airplane. Here is where we can specify how long the heater runs as well as the Raspberry Pi GPIO pin for controlling the heater relay.

Configure Page
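Behind that page, each configured hangar boils down to a small record. This sketch shows the idea only — the field names and validation ranges are my assumptions, not the actual HangarControl code:

```python
def configure_hangar(node_id, name, airplane, duration_min, gpio_pin):
    """Validate and assemble a hangar configuration record (illustrative)."""
    if not 0 <= gpio_pin <= 27:            # BCM pin numbers on a Raspberry Pi
        raise ValueError("invalid GPIO pin: %s" % gpio_pin)
    if duration_min <= 0:
        raise ValueError("heater duration must be positive")
    return {"node": node_id, "name": name, "airplane": airplane,
            "duration_min": duration_min, "gpio_pin": gpio_pin}

# e.g. turning the unconfigured node "49568296" into a named hangar
cfg = configure_hangar("49568296", "North Hangar", "N12345", 45, 17)
```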



Backend Coding in Python

In Episode #6 ([Pi IoT] Hangar Central #6, Minimal Web Application), I introduced the web application framework Flask, which I am using to provide the application environment for HangarControl.


Logging in and User Authentication

Login Page

Flask provides a number of optional modules that you may use to supplement your project. One of these is Flask-Login, which I am using to aid in session management. Take a look at Episode #8 ([Pi IoT] HangarControl #8.1 -- What's a database? The Movie or [Pi IoT] HangarControl #8, Database? What's a Database?) for a behind the scenes look at integrating with Flask-Login.


Since authentication is such an important component of any public facing application, I will show you what you need to write if you plan on using Flask-Login yourself.


Flask Application

The main module's boilerplate can be found in Episode #6 ([Pi IoT] Hangar Central #6, Minimal Web Application). To that minimal file, Flask-Login needs to be added.


The initialization or "stuff at the top"

# Include Flask-Login, the session & user management module
from flask_login import LoginManager, login_required, login_user, logout_user

# The User class with the necessary Flask-Login & search methods
from lib.user import User

# Instantiate Flask-Login, attach our application context, and register a login page
login_manager = LoginManager()
login_manager.init_app(app)
login_manager.login_view = 'login'



Present the login page until successfully validated

# The URL for logging in. A user gets directed here automatically
# when they select their first page which has been tagged with @login_required
@app.route('/login', methods=['GET', 'POST'])
def login():
    error = None
    next = request.args.get('next')
    if request.method == 'POST':
        user = User.find_by_username(request.form['username'])
        app.logger.info("login: user=%s", user)
        if user is None:
            error = 'Invalid username'
        elif request.form['password'] != getattr(user, 'password'):
            error = 'Invalid password'
        else:
            # Mark the session as logged in
            login_user(user)
            flash('You were logged in')
            # next_is_valid should check if the user has valid
            # permission to access the `next` url
            if not next_is_valid(next):
                return abort(400)

            return redirect(next or url_for('hangars'))

    return render_template('login.html', error=error)


Helper methods provided by you, the programmer

# When redirected for login, the URL has a parameter ('next') which
# indicates the page to navigate to after a successful login.
def next_is_valid(next):
    # Accept everything for now; a real check would verify permissions
    return True

# Flask-Login needs a method to do user lookups. The user_id is passed from
# the login page and we use our "finder" class methods to do a lookup
# on our User class.
@login_manager.user_loader
def load_user(user_id):
    return User.find_by_username(user_id)


Secure or require logins on a page

# Produce a list of hangars. Require a valid login before presenting the page.
@app.route('/hangars')
@login_required
def hangars():
    hangars = server.getHangarList()
    return render_template('hangars.html', hangars=hangars)



Online and Available

At this point we have a completely functional system and pilots are able to request (automated) preheating service.

  1. A Raspberry Pi 3 is the primary workhorse. It acts as our xPL communication hub handling messages between hangars and administrative applications.
  2. This same RPi3 is running the Flask application server and provides our HangarControl web interface. Use of smartphones, tablets, and desktop devices has been made seamless with the jQuery Mobile front-end library.
  3. The RPi3 also services the heater in a single hangar.
  4. Additional hangars can be included by adding another networked RPi. When HangarControl "hears" the new hangar it is added to the list of hangars and the Administrator simply clicks the configure link to specify duration and the GPIO pin.


Next up, I'd like to include the SMS and telephone interfaces. Let's see if I have enough time to write it up!




This post is about a mini project that I suddenly thought of during the challenge and thought would fit well as part of the larger project. The idea was to make a key holder allowing up to four different (sets of) keys. It serves two purposes: a fixed place to hang our keys (we tend to misplace them a lot!) and, assuming proper use, an alternative/additional presence check.





For the key holders, I decided to use stereo jacks and panel mount connectors. By shorting the left and right channel in the jack, a loop is created. On the connector, the left channel connects to ground, the right channel connects to a GPIO pin with internal pull-up resistor. When the jack is not inserted the GPIO is HIGH; when inserted, LOW. There is no differentiator per key at the moment, but one could be added in a future version in different ways:

  • Rather than just pulling to GND, resistors could be used, resulting in different analog values, each unique per key. This will require the use of an ADC.
  • Use a different connector set per key, making it impossible to connect in any other slot.


To have everything removable/replaceable, I used male header pins on the connectors and Dupont wires. The ground wire is daisy-chained across all four connectors. This results in a total of five connections to the Raspberry Pi's GPIO header: four GPIO pins and one ground. As a visual aid and indication, every connector is associated with an LED of a certain colour. When the jack is plugged in, the LED is turned off; when removed, turned on. The LEDs are located on a small board which fits straight on the GPIO header, called Blinkt!. Using the Python library, the individual LEDs can be controlled.


Finally, to turn this key holder into an IoT device, whenever a jack is inserted or removed, an MQTT message is published to the control unit, which can then visualise the status in OpenHAB. From there, rules can be associated with these events. What if the shed was opened while the key was still in place?


Enjoy the gallery illustrating the build process and final result, just after a quick explanation of the code!




The code is straightforward, and using the GPIOZero library for the first time made it even simpler! Basically, the four GPIO pins are checked in an infinite loop. Depending on the state, the matching LED is set or cleared, and an MQTT message is sent.
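That loop can be sketched roughly as follows. The pin numbers, topic names and Blinkt! colours here are my assumptions, not the actual project code; the change-detection logic is separated into a plain function so it is easy to follow (and test) without the hardware:

```python
# Pure state logic: given the previous and current snapshots of the four
# slots ({slot: True when the jack is inserted}), work out what changed.
SLOTS = {0: 5, 1: 6, 2: 13, 3: 19}   # slot -> assumed BCM pin

def key_changes(previous, current):
    return [(slot, current[slot]) for slot in sorted(current)
            if previous.get(slot) != current[slot]]

# On the Pi Zero this would be wired up roughly like so (gpiozero + Blinkt!
# + paho-mqtt; broker hostname is a placeholder):
#
#   from gpiozero import Button
#   import blinkt, paho.mqtt.publish as publish
#   buttons = {slot: Button(pin) for slot, pin in SLOTS.items()}
#   state = {slot: b.is_pressed for slot, b in buttons.items()}
#   while True:
#       new = {slot: b.is_pressed for slot, b in buttons.items()}
#       for slot, inserted in key_changes(state, new):
#           # LED off when the jack is in, on when the key is gone
#           blinkt.set_pixel(slot, 0, 255, 0, 0 if inserted else 0.2)
#           publish.single("home/keys/%d" % slot, "IN" if inserted else "OUT",
#                          hostname="<control unit IP>")
#       blinkt.show()
#       state = new
```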





{gallery} Key Holder


Connectors: Four sets of connectors are used to connect the keys


Headers: Using male headers, all pieces can be connected/disconnected easily


Wiring: Testing the wiring. Ground is daisy-chained to all connectors


Pi Zero: A Raspberry Pi Zero is used to keep everything compact


Panel: Mounting the connectors and LEDs to an acrylic panel


Assembled: The fully assembled electronics


Hook: Twisting copper wire in a nice loop


Soldering: Soldering the loop onto the connector


Enclosure: Stacking and gluing pieces of wood to form an enclosure


Finish: A bit of sanding and rounding of the edges


Tadaaaa: The finished result on the cabinet


Tadaaaa #2: The finished result on the cabinet







I just realized that the cut-off time for submission is 11:59 PM GMT, which is a bit sooner than I expected, so here is what I have thus far.

After getting the connections completed with the Pi Rack, I moved to working on the automation application of the Feeder System I have been working on. This has included implementing Mosquitto, Paho, and MQTT for communication between OpenHAB and the feeder system. With this, I can adjust the timer and settings locally on the Feeder System as well as change the timer, run the Feeder manually, view the Pi CAM remotely and be notified when there is motion in the stall.


MQTT, Paho and Mosquitto


Thus far in the config, I have 3 Topics that get updated at various intervals and events.

feeder/timer - Used to notify topic subscribers that an update to the timer has been performed. This can be accomplished locally via the Pi Face Display and Control interface or remotely from OpenHAB.

feeder/manual - Used to Trigger the feeder system to run in Manual mode thus bypassing any timer settings

feeder/motion - Used to notify that there is movement in the Stall via a PIR sensor connected to the Pi Face Digital 2.



Within OpenHAB, to handle the user input for setting the Feeder Timer remotely, the following config was implemented.

The site was very helpful in getting the Timer section completed:


The Initial interface for the OpenHAB config displays the Feeder Timer, an option to set the Timer, Stall CAM (Both Video and Still cam options), The iLumi BLE Lights and EnOcean Energy Harvesting Switches.





From the Stall Timer, the user can set the timer by setting the Set_Timer switch to on. This will grab the timer that is set from the OpenHAB interface and send it to the feeder/timer topic, which is picked up by the Feeder system. Also, the user can run the Feeder manually from the interface. And, if there is motion, the Stall Motion indicator will be lit.

From the interface, the user selects the Time (Hour/Min) and the day to run the timer and once selected, click the Set Timer Switch.


Smart Feeder Timer setting


sitemap home label="Smart Stall" {
        Frame label="Stall Timer" {
                Frame label="Feeder Timer" {

                        Text label="Timer [%s]" item=timerMessage icon="clock" {
                                Frame label="Run Mode"  {
                                        Switch item=Set_Timer label="Set Timer"
                                        Switch item=Manual_Run label="Manual Run"
                                        Switch item=Motion_Alert label="Stall Motion"
                                        Text item=Motion_Detect
                                }
                                Frame label="Time"  {
                                        Setpoint item=Set_Hour minValue=0 maxValue=23 step=1
                                        Setpoint item=Set_Minute minValue=0 maxValue=55 step=5
                                }
                                Frame label="Days" {
                                        Switch item=timerMonday
                                        Switch item=timerTuesday
                                        Switch item=timerWednesday
                                        Switch item=timerThursday
                                        Switch item=timerFriday
                                        Switch item=timerSaturday
                                        Switch item=timerSunday
                                }
                        }
                }
        }
}




Group iLumi
Group Feeder
Group DaysOfWeek
String Feeder_Timer "Get Timer [%s]"  <clock> (Feeder, iLumi) {mqtt="<[jiot2:feeder/timer:state:default]"}
Switch Set_Timer  <switch> (Feeder, iLumi)
String  Send_Timer "[%s]" (Feeder, iLumi) { mqtt=">[jiot2:feeder/timer:command:*:default]" }
Switch Manual_Run  <switch> (Feeder, iLumi)
String  Run_Manual "[%s]" (Feeder, iLumi) { mqtt=">[jiot2:feeder/manual:command:*:default]" }
Number Set_Hour   "Hour [%d]"  <clock> (Feeder, iLumi)
Number Set_Minute   "Minute [%d]"  <clock> (Feeder, iLumi)
Dimmer Set_Day   "Day [%s %%]" (Feeder, iLumi)
String New_Day   "Day [%s]" (Feeder, iLumi)
String New_Hour   "Hour [%d]" (Feeder, iLumi)
String timerMessage "%s"
Switch Motion_Alert  <siren> (Feeder, iLumi)
String  Motion_Detect "Switch Motion[%s]"  (Feeder, iLumi) { mqtt="<[jiot2:feeder/motion:state:default]" }

Switch timerMonday      "Monday"    <switch>  (DaysOfWeek)
Switch timerTuesday     "Tuesday"    <switch>  (DaysOfWeek)
Switch timerWednesday   "Wednesday"    <switch>  (DaysOfWeek)
Switch timerThursday    "Thursday"    <switch>  (DaysOfWeek)
Switch timerFriday      "Friday"    <switch>  (DaysOfWeek)
Switch timerSaturday    "Saturday"    <switch>  (DaysOfWeek)
Switch timerSunday      "Sunday"    <switch>  (DaysOfWeek)


Timer Rules

var String timerToMQTT = ""

rule "Initialization"
when
        System started
then
        postUpdate(Set_Hour, 6)
        postUpdate(Set_Minute, 15)
        postUpdate(timerMonday, ON)
        postUpdate(timerTuesday, OFF)
        postUpdate(timerWednesday, OFF)
        postUpdate(timerThursday, OFF)
        postUpdate(timerFriday, OFF)
        postUpdate(timerSaturday, OFF)
        postUpdate(timerSunday, OFF)
        postUpdate(Manual_Run, OFF)
        postUpdate(Motion_Alert, OFF)
end


rule "Set Timer"
when
        Item Set_Hour changed or
        Item Set_Minute changed
then
        logInfo("Set Timer", "Set Timer")
        var String msg = ""
        var String day = ""
        var String ampm = ""
        var hour = Set_Hour.state as DecimalType
        var minute = Set_Minute.state as DecimalType

        if (timerMonday.state == ON) { day = "Mon" }
        if (timerTuesday.state == ON) { day = "Tue" }
        if (timerWednesday.state == ON) { day = "Wed" }
        if (timerThursday.state == ON) { day = "Thu" }
        if (timerFriday.state == ON) { day = "Fri" }
        if (timerSaturday.state == ON) { day = "Sat" }
        if (timerSunday.state == ON) { day = "Sun" }

        if (hour < 10) { msg = "0" }
        msg = msg + Set_Hour.state.format("%d") + ":"
        if (hour >= 12) { ampm = "PM" }
        if (hour < 12) { ampm = "AM" }

        if (minute < 10) { msg = msg + "0" }
        msg = msg + Set_Minute.state.format("%d")

        msg = day + "  " + msg + " " + ampm

        postUpdate(timerMessage, msg)

        timerToMQTT = msg
end

rule "Set Feed Timer"
when
        Item Set_Timer changed from OFF to ON
then
        // timerToMQTT holds a value like "Mon  07:30 AM"

        sendCommand(Send_Timer, timerToMQTT)
end


rule "Manual Feeder Run"
when
    Item Manual_Run changed from OFF to ON
then
    var String set_manual = "Run"

    sendCommand(Run_Manual, set_manual)
end

rule "Manual Feeder Stop"
when
    Item Manual_Run changed from ON to OFF
then
    var String set_manual = "Stop"

    sendCommand(Run_Manual, set_manual)
end

rule "Motion Detected"
when
    Item Motion_Detect changed
then
    var String motion_message = ""
    logInfo("Motion Detected", "In Da Motion")

    motion_message = Motion_Detect.state.toString

    logInfo("Motion Detected", motion_message)
    if (motion_message == "Motion Detected") {
        sendCommand(Motion_Alert, ON)
    } else {
        sendCommand(Motion_Alert, OFF)
    }
end






Feeder System Paho config

import os
import json
import paho.mqtt.client as mqtt

def send_mqtt(timer_dict):
    mqttc = mqtt.Client('python_publisher')
    mqttc.connect('', 1883)  # broker IP goes here
    hourTemp = str(timer_dict['hour']).rjust(2, '0')
    minTemp = str(timer_dict['min']).rjust(2, '0')
    message_json2str = timer_dict['day'] + " " + hourTemp + ":" + minTemp + " " + timer_dict['ampm']
    mqttc.publish('feeder/timer', message_json2str, retain=True)

# The callback for when the client receives a CONNACK response from the server
def on_connect(client, userdata, flags, rc):
    print("Connected with result code " + str(rc))
    # Subscribing in on_connect() means that if we lose the connection and
    # reconnect, the subscriptions will be renewed
    client.subscribe([("feeder/timer", 0), ("feeder/manual", 0)])

# The callback for when a PUBLISH message is received from the server.
def on_message(client, userdata, msg):
    man_run = ""
    current_timer = ""
    msg_read = msg.payload.decode('utf-8')
    print(msg.topic + " " + msg_read)
    if msg.topic == "feeder/manual":
        man_run = msg_read
        print("feeder/manual = %s" % man_run)
        if 'Run' in man_run:
            pass  # trigger the manual feed cycle here

    if msg.topic == "feeder/timer":
        timer_data = msg_read
        if os.path.isfile('timer.json'):
            with open('timer.json') as datafile:
                current_timer = json.load(datafile)
        print("New time message %s " % timer_data)
        timer_data = timer_data.split()
        print("Current Timer %s " % current_timer)
        if current_timer != timer_data:
            print("New Timer")
            with open('timer.json', 'w') as outfile:
                json.dump(timer_data, outfile)
            timer_dict['day'] = timer_data[0].strip()
            hour_min = timer_data[1].split(":")
            timer_dict['hour'] = hour_min[0]
            timer_dict['min'] = hour_min[1]
            timer_dict['ampm'] = timer_data[2]


def getMQTTTimer():
    mqttc = mqtt.Client()
    mqttc.on_connect = on_connect
    mqttc.on_message = on_message
    mqttc.connect('', 1883, 60)  # broker IP goes here

    mqttc.loop_forever(timeout=1.0, max_packets=2, retry_first_connection=False)
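Since the OpenHAB "Set Timer" rule and the on_message() callback above have to agree on the "Day HH:MM AM/PM" payload format, a quick standalone round-trip check is useful. This merely mirrors the logic, it is not the feeder code itself:

```python
def format_timer(day, hour, minute, ampm):
    """Build the payload the OpenHAB 'Set Timer' rule publishes."""
    return "%s %02d:%02d %s" % (day, hour, minute, ampm)

def parse_timer(payload):
    """Mirror of the feeder-side parsing in on_message()."""
    parts = payload.split()          # split() also tolerates double spaces
    hour, minute = parts[1].split(":")
    return {"day": parts[0], "hour": hour, "min": minute, "ampm": parts[2]}
```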


Motion Sensor to Pi Face Digital 2


The sensor I used for the motion sensing is the Parallax PIR Sensor Rev B. This is connected to the Pi Face Digital 2 Input 3, 5V and Ground. (NOTE: Since the Digital 2 inputs are by default pulled up to 5V, I ended up having to put a 10K resistor between GND and Pin 3 to pull the pin low when the PIR sensor was on.)


PiFace Digital IO Config:

import pifacedigitalio as pdio

def detectMotion():
    MOTION = 0
    NO_MOTION = 1
    pir1 = NO_MOTION

    while True:
        pir1 = pdio.digital_read(3, 1)
        if pir1 == MOTION:
            print("Motion Detected!")
            send_message(topic_motion, feeder_broker, "Motion Detected")


This was started as a Process in Python so it would run in parallel, in the background of the main app:

    # requires: from multiprocessing import Process
    p = Process(target=getMQTTTimer)
    m = Process(target=detectMotion)
    m.daemon = False
    p.start()
    m.start()



With all of that, this is what it looks like in the raw, with nothing in a case at this point.


This has been an awesome adventure and I appreciate the opportunity to use the tools given to create a project; I just wish I could have completed it in the time allotted. I'll keep working on this and hopefully get something that is complete by year's end.





In previous blogs I shared my issues with getting the RPi B+ running MotionEyeOS with multiple cameras and wireless networking enabled.


Today I have had success getting everything outside and monitoring the Farm!


First off, hardwired I have had zero issues with the MotionEyeOS software. I have been able to add cameras and test everything out without any issues, other than the RPi B+ being (not surprisingly) noticeably slower to respond than the RPi3 that I first used.


But once I attempted to enable WiFi, I constantly ran into issues. Finally, after creating a brand new image, ensuring all of the WiFi credential information was correct and running off the same AP for all of my communications, I was able to get WiFi running last night. Things were looking good.


But today when I had reassembled all of my gear and installed it outside I was not able to connect.


After verifying power and all connections, I could watch the RPi B+ boot up and the WiPi flash, but when I tried to connect to the static IP, no luck. After pulling off the extra cameras and trying it just as a base system, and still having no luck, I decided to bring everything back inside and retry it there.


Success!  Right away I was connected again via Wireless and able to see all 3 cameras.


Doing a little research on others' experiences with the WiPi made it sound like some people had issues with some of the dongles having limited range. Very limited range.


This had not been factored into my planning, since I have my AP at an outside window and pretty much all of our electronics have been successful connecting from outside, streaming the kids' various flavors of entertainment at any given moment. YouTube, Spotify, Netflix, it was all usable.


Luckily I had a Realtek wireless dongle and I swapped that in for the WiPi and everything was working again.  Outside!


MotionEyeOS even automatically adjusted for the different adapter, and I was able to both ping the RPi B+ and connect to the monitoring software.




Camera 1 is the baby box monitor, currently looking at the rear of the Momma Rabbit.  As of today, no babies, but they are expected soon.

Camera 2 is the front of the Momma Rabbit's cage; usually she is right there checking things out, but right now she is trying to figure out what that camera is doing in her box.  :-)

Camera 3 is the View to the Chicken Casa.




Here in Camera 1 we can see Momma's Ear, she had just put her eye right up to the camera checking everything out.




Here is Momma starting to get a little concerned about all of the cameras and activity.




Here I tried to zoom externally on the Chicken Casa camera; it's hard to tell, but there is a variety of chickens and G.O.A.Ts scrambling for treats. The image is a lot easier to see on screen than it is when captured.




Here is a close up of the Baby Box, I put a block of wood in the corner as a focal point.


I quickly shut down all of the activity around the rabbits and monitoring station to allow Momma to get used to everything. I think I may move Camera 2 over to monitor another part of the Farm and allow just Camera 1 to watch the Baby Box, letting her get used to everything.






I really like the functionality of the cameras and will be ordering some more to see about combining cameras to get a broader range of monitoring. I used a clear snack box as a cover for the USB cameras and it functioned very well. I wanted to add more weatherproofing than just having the cameras under a shelter roof.




Yes, I like the way the whole setup is working now and look forward to adding/improving upon it!



And as a final picture, here is a test run of the efforts of my son and me at making a larger Duck Pond. I wanted to see how level everything is and how long the water naturally stays in our heavy clay area.



Eventually a little bridge will be added over that middle section, giving the ducks more shade to work with and the kids something neat to walk across to see their ducklings.


Pi Control Hub: The HUB

Posted by carmelito Aug 28, 2016

It is now time to put all the electronic components together into a nice enclosure, which I am calling the HUB, using some 3D printing with wood filament and some basic woodworking with a Dremel tool. Here is a gallery of the finished HUB


{gallery}Pi3  Control HUB




Top - with the Pi Camera just out of its slot


Pi Camera which can be easily moved around with a flexible gear tie and a 3D printed holder


Bottom - Sense HAT connected to the Pi 3, to use at night in the dark


Top - speakers to play music




As part of the build, start off by cutting a couple of hobby wood boards as shown in the pictures below

{gallery}Building the Wooden Frame



Mark the size of the components, like the Pi 7 inch screen and speakers


Use the Dremel with a cutting bit, for the 7 inch display


Cutting slots for the speakers


slot for the Sense Hat mounted on the Pi3


Gluing to build the frame, and leaving it to dry overnight


Adding wood putty to fill in the gaps


Sanding any rough edges


Staining and leaving it to dry for a couple of hours


Finishing off with Painter's Touch, which gives it a nice glossy and semi-waterproof finish


To put the speakers together, download and 3D print the STL files attached below using wood filament. I am using wood filament so that I can sand and then stain the parts to match the wooden frame.

Also 3D print a holder for the Sense HAT and the Raspberry Pi 3 using wood filament; here is the link to the wood filament I am using

Suggested slicer settings to use for 3D printing with wood filament:

Layer height - 0.2 mm

Infill density - 40%

Temperature - 200 C


{gallery} Putting the Speaker together


3D printing the speaker holder; you will need another copy of this


Staining the 3D printed parts. Don't forget to sand the parts before staining


Glue the components together; here you can use the same wood glue used for the wooden frame


Adding the Grid



Now, let's add a handle and the 3D printed part for the Pi Camera to the top of the Hub frame. To hold the camera I decided to use a flexible gear tie, which means you can point the camera in any direction you want by wrapping the gear tie around the handle.


For 3D printing the Pi Camera holder I am using black 1.75mm PLA, and here are some suggested slicer settings for PLA

Layer height - 0.3 mm

Infill density - 20%

Temperature - 205 C


I purchased the handle and the gear tie at a local hardware store, and also bought matching screws and nuts.


{gallery}Adding Handle and Pi Camera


Screwing the Handle to the top


3D print the Pi Camera holder


The camera fits in the slot and can be pulled out to stream video whenever required


To hold the camera in place use a flexible gear tie


Adding the Pi 3, Sense HAT and all the other components to the inside of the hub frame. For more info on the speaker circuit and how to set up Mopidy to play music, check out the blog:

Pi Control Hub: Music with Mopidy and Home Assistant


{gallery} Adding Pi3 and other components inside the frame


Add the sense hat to the 3D printed holder


Cut out a piece of plastic to diffuse the RGB LEDs of the Sense HAT; I am using a cover piece from an old notebook


Add the Pi Screen and the Pi 3 to the frame and attach the cable for the screen; here I am using the Pi Camera cable to connect the Pi 3 and the screen driver


Add the speakers and the Pi Camera


3D print holder for the Audio Amplifier and the Lipo charger


Adding Lipo, Audio Amp, switch and Lipo charger to the frame.


Gluing the components together



Now, to run Home Assistant on the Pi touch screen as shown in the picture below, we will have to install Chromium. Chromium is not part of the default Raspbian packages, which means we will have to run the following commands to work around this


wget -qO - | sudo apt-key add -
echo "deb jessie main" | sudo tee -a /etc/apt/sources.list
sudo apt-get update
sudo apt-get install chromium-browser rpi-youtube -y
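With Chromium installed, the dashboard can be launched full-screen at boot. A sketch of an LXDE autostart entry, assuming Home Assistant's default port 8123 and the stock autostart path for the pi user (adjust both to your setup):

```shell
# Append to /home/pi/.config/lxsession/LXDE-pi/autostart
@xset s off
@xset -dpms
@chromium-browser --kiosk --incognito http://localhost:8123
```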



Here is the link to the Raspberry Pi forum post where I found this workaround

The schedule for the Pi IoT design challenge is winding down, but this project is still just beginning. It is experiencing some horrendous delays which are taking it out of the running in the contest, but hopefully the project will continue at its own pace. The primary challenge was to use a Raspberry Pi 3 in an IoT application. Since my proposal wasn't strong enough to warrant sponsorship, I ordered my own Pi3 back in May. It has not arrived yet, which is a major setback, killing any shot at the competition. I had been coming up with additional applications for the Pi3, so I ordered a second Pi3 from a different supplier, but it has not arrived yet either. The 3 main Pi3 applications I want to implement are an entertainment system (holo-deck), an EnOcean sensor controller (LCARS Security Screen), and a streaming PiCam video system (short range sensor array) to monitor either the interior or the exterior of the habitat.

The long range sensor array is a solar-powered Arduino-based Bluetooth weather station. The Android app for this is complete, as is the Arduino firmware. However, something happened to the Bluetooth module that killed the Arduino. I didn't suspect the Bluetooth module at first because it was still connecting wirelessly and receiving data okay. After I had fried a second Arduino, I figured out the Bluetooth module was the culprit. So now I am waiting for another Bluetooth module to arrive. This time I am going to protect the Arduino, although I have several other projects where the original setup works fine.

The Henrietta Life Support system has 2 remote hosts, one PC-based, the other Android-based. The PC I was using succumbed to a catastrophic failure and had to be rebuilt. In the meantime I ported the host application to a thin client PC and reworked it for the LCARS theme.

Since there are too many displays to fit on the workstation surfaces in the alcove, I plan to mount some on the walls. The only (currently) bare wall will have a set of lighted bulkheads with a display between them - something like this:


The LED bulkhead illumination components and power supplies finally arrived a couple of days ago, after a long wait, but they have not been installed yet.

Most of the electronic components arrived to make the Pi3 based entertainment system (except the Pi3), so that sub-project is on hold.

I have been working on the infrastructure and skits to make a video to showcase the alcove, but the Star Trek theme party is a ways off. My Klingon shirt will just have to wait for its debut.

I have not moved the replicator (Cel Robox) to the alcove yet, but it has been busy making lots of parts, including a phaser and some com badges.

As this is likely my last installment before the deadline, I guess I can leak the work I have been doing on the transporter out at Star Fleet Academy:


Links to the Pi IoT Design Challenge site:

Pi IoT

Pi IoT - Smarter Spaces with Raspberry Pi 3: About This Challenge


Links to blogs about the Star Trek IoT Alcove project:

Pi IoT - Star Trek IoT Alcove - Blog 1

element14 and the photon torpedo - Pi IoT Blog 2

How many tablets to use? Pi IoT Blog 3

Starship Enocean Voyager

The Starship Enocean Voyager - Pi IoT Blog 4

LCARS (Library Computer Access Retrieval System){Star Trek} - Pi IoT Blog 5

LCARS Tablets

Henrietta LCARS

Alcove Transporter

Henrietta LCARS - Pi Iot Blog 6

3D Printed Phaser

3D Printed Star Trek Phaser

Henrietta's Daughter - Smart Thermostat

Smarter Life Challenge - The Henrietta Project - Final Summary

Make Life Accessible - Clear Walk - Moving Mirrors - blog 18

Forget Me Not - IQU - Custom App

Plex

Plex is software which makes it possible to enjoy all of your media on all your devices. On the server you create a library based on video (and music, photo, etc.) files; Plex finds the corresponding metadata and gives you an easy-to-use interface to browse and play your movies. You can interact with Plex through its API, and you can keep up to date with what's happening on each client by subscribing to the WebSockets channel. In this last part of the Home Theater series we'll integrate Plex into Thuis.


Plex API

Official documentation for the API is not publicly available, but luckily some other developers are maintaining an up-to-date wiki about it. For now we'll use the API just for basic playback controls. As time is limited and the calls are simple, we'll execute them directly from iOS:

@IBAction func playPauseAction(_ sender: AnyObject) {
    if playing {
        callPlex("pause")
    } else {
        callPlex("play")
    }
}

@IBAction func stopAction(_ sender: AnyObject) { callPlex("stop") }
@IBAction func backAction(_ sender: AnyObject) { callPlex("stepBack") }
@IBAction func forwardAction(_ sender: AnyObject) { callPlex("stepForward") }

fileprivate func callPlex(_ action: String) {
    let url = URL(string: "\(clientBaseURL)/player/playback/\(action)?type=video")!
    var request = URLRequest(url: url)
    request.setValue("21DA54C6-CAAF-463B-8B2D-E894A3DFB201", forHTTPHeaderField: "X-Plex-Target-Client-Identifier")
    let task = URLSession.shared.dataTask(with: request) { data, response, error in
        // Responses are ignored for now; errors could be surfaced to the UI later
    }
    task.resume()
}


As you can see there are four control @IBActions available: play/pause, stop, and scrubbing backward and forward.


Nevertheless there are many more possibilities. Something I am currently working on, and would like to implement a bit later, makes it possible for a user to select a TV series episode directly from the iOS app.


Plex Notifications

To get notifications when the play state changes one can subscribe to the WebSocket of the Plex server. The URL for the WebSockets channel is the following: ws://localhost:32400/:/websockets/notifications. There are multiple types of messages posted, but we're only interested in PlaySessionStateNotifications. It has the following fields:

String guid;
URI key;
String ratingKey;
String sessionKey;
State state;
String transcodeSession;
String url;
long viewOffset;


The most interesting fields are state (playing, paused, etc.), viewOffset (how far the video has already played) and key (an identifier used to get information from the API). The code that communicates directly with Plex is placed in a separate library. Just like for MQTT and CEC, it uses CDI events to present the notifications to Thuis. In Thuis we have the PlexObserverBean handling the notifications:

package nl.edubits.thuis.server.plex;

public class PlexObserverBean {
    @Inject
    private Controller controller;

    @Inject
    private LibraryService libraryService;

    @Inject
    MqttService mqttService;

    private PlaySessionStateNotification playSessionStateNotification;
    private MediaContainer mediaContainer;

    public void onPlayingNotification(@Observes @PlexNotification(Type.PLAYING) Notification notification) {
        if (!notification.getChildren().isEmpty()) {
            playSessionStateNotification = notification.getChildren().get(0);
            if (playSessionStateNotification.getState() == State.PLAYING) {
                // Switch off the kitchen lights when playback starts
                // (the exact helper calls were lost in the original post)
                controller.run(Devices.kitchenMicrowave.off());
                controller.run(Devices.kitchenCounter.off());
                controller.run(Devices.kitchenMain.off());
            }

            mqttService.publishMessage("Thuis/homeTheater/state", playSessionStateNotification.getState().name());
            mqttService.publishMessage("Thuis/homeTheater/playing/viewOffset", playSessionStateNotification.getViewOffset() + "");

            if (playSessionStateNotification.getKey() != null) {
                if (mediaContainer != null && !mediaContainer.getVideos().isEmpty()
                        && playSessionStateNotification.getKey().equals(mediaContainer.getVideos().get(0).getKey())) {
                    // No need to retrieve information again
                    return;
                }

                mediaContainer = libraryService.query(playSessionStateNotification.getKey());

                if (!mediaContainer.getVideos().isEmpty()) {
                    Video video = mediaContainer.getVideos().get(0);
                    mqttService.publishMessage("Thuis/homeTheater/playing/title", video.getTitle());
                    mqttService.publishMessage("Thuis/homeTheater/playing/summary", video.getSummary());
                    mqttService.publishMessage("Thuis/homeTheater/playing/art", toAbsoluteURL(video.getArt()));
                    mqttService.publishMessage("Thuis/homeTheater/playing/thumb", toAbsoluteURL(video.getThumb()));
                    mqttService.publishMessage("Thuis/homeTheater/playing/grandParentTitle", video.getGrandparentTitle());
                    mqttService.publishMessage("Thuis/homeTheater/playing/grandParentThumb", toAbsoluteURL(video.getGrandparentThumb()));
                    mqttService.publishMessage("Thuis/homeTheater/playing/duration", video.getDuration() + "");
                }
            }
        }
    }
}


When the notification has at least one child, we take the first one. If the Plex client is playing and the lights in the kitchen are still on, we turn the lights off. Then we publish the play state and offset to MQTT. When it's the first notification we get for the key, we query the LibraryService, which calls the API to retrieve more information on the video. With all this information available through MQTT we can use it in our iOS app.



In the iOS app we will add a new view for displaying what is currently playing. When we receive a PLAYING message on Thuis/homeTheater/state we'll automatically open it. The button to open it manually will only be available when there is something playing. For this we update our TilesCollectionViewController:

extension TilesCollectionViewController: MQTTSubscriber {
    func didReceiveMessage(_ message: MQTTMessage) {
        guard let payloadString = message.payloadString else { return }

        if message.topic == "Thuis/homeTheater/state" {
            if payloadString == "PLAYING" && currentState != "PLAYING" {
                navigationItem.rightBarButtonItem = UIBarButtonItem(title: "Now Playing", style: .plain, target: self, action: #selector(TilesCollectionViewController.openNowPlaying))
                openNowPlaying() // automatically open the Now Playing view
            }
            if payloadString == "STOPPED" && currentState != "STOPPED" {
                self.presentedViewController?.dismiss(animated: true, completion: nil)
                navigationItem.rightBarButtonItem = nil
            }
            currentState = payloadString
        }
    }

    func openNowPlaying() {
        DispatchQueue.main.async {
            self.performSegue(withIdentifier: "nowPlaying", sender: self)
        }
    }
}


The nowPlaying view itself is composed using some StackViews, UILabels and UIImageViews. The interesting thing about them is that these default iOS UI elements themselves are MQTT subscribers and update their content based on messages on the corresponding MQTT topic. This is possible because of two features of Swift: extensions and protocols. For example the UILabel can be made aware of MQTT as follows:

extension UILabel: MQTTSubscriber {
    func setMQTTTopic(_ topic: String) {
        MQTT.sharedInstance.subscribe(topic, subscriber: self)
    }

    func didReceiveMessage(_ message: MQTTMessage) {
        if let payloadString = message.payloadString {
            DispatchQueue.main.async {
                self.text = payloadString
            }
        }
    }
}


Similar extensions are made for the other elements. The result looks like this:

iPad: now playing


Following these steps we set up the Home Theater flow to our iOS app and made sure everything works smoothly. In my opinion it still needs a bit of fine-tuning, but even now it works pretty well!

In [Pi IoT] Thuis #11: Final implementation UI design you saw our Thuis iOS app, which has a few buttons for controlling the Home Theater. In this post we'll make sure they work well. For brevity I will describe only the main scene: it makes sure we can watch anything on the Apple TV.


Defining devices

Before we can use any devices in Thuis we have to define them. You might remember from [Pi IoT] Thuis #8: Core v2: A Java EE application that we have a class Devices containing static definitions. Here we will add the devices we need for the home theater system:

package nl.edubits.thuis.server.devices;

public class Devices {
    public static Computer NAS = new Computer(none, "nas", "nas.local", "admin", "00:22:3F:AA:26:65");
    public static AppleTV appleTv = new AppleTV(livingRoomHomeTheater, "appleTv", "");
    public static HomeTheater homeTheater = new HomeTheater(livingRoomHomeTheater, "homeTheater");
    public static MqttSwitch tv = new MqttSwitch(livingRoomHomeTheater, "tv");
    public static Receiver denon = new Receiver(livingRoomHomeTheater, "denon", "");
    public static MqttSwitch homeTheaterTv = new MqttSwitch(livingRoomHomeTheater, "tvSwitch");
    public static MqttSwitch homeTheaterDenon = new MqttSwitch(livingRoomHomeTheater, "denonSwitch");
}


The bottom 2 are Z-Wave outlets, which you've seen before. All the others are new types of devices. Below we'll describe each of them separately.



Television

Sony Bravia EX700

Let's start with the easiest device: the television. With the work we did yesterday in Home Theater part 1: CEC we can turn the TV on and off by sending a simple MQTT message. Because of that it's defined as an MqttSwitch.




Apple TV

The Apple TV is a great device as the centre of the home theatre. It is able to control other devices through CEC, but unfortunately you can't control the Apple TV itself through CEC. So I had to look for an alternative, and I found it in AirPlay: Xyrion describes well how you can wake up an Apple TV by connecting to it over Telnet and telling it to play a bogus video.


In Java we can do this by using a Socket. For this we'll create a new Command, the SocketCommand:

package nl.edubits.thuis.server.automation.commands;

public class SocketCommand implements Command {
    String hostname;
    int port;
    String body;

    public SocketCommand(String hostname, int port, String body) {
        // ...
    }

    public Message runSingle() {
        try (
            Socket socket = new Socket(hostname, port);
            PrintWriter out = new PrintWriter(socket.getOutputStream(), true);
            BufferedReader in = new BufferedReader(new InputStreamReader(socket.getInputStream()));
        ) {
            out.println(body);
            logger.info("Socket response: " + in.readLine());
        } catch (IOException e) {
            logger.log(Level.WARNING, "Socket failed", e);
        }

        return null;
    }
}


We use this command in the definition of the AppleTV itself. By extending MqttSwitch we can leverage the logic for updating its status from MQTT. I'm not entirely sure how we can turn off the Apple TV programmatically, so this method is not implemented yet.

package nl.edubits.thuis.server.devices;
public class AppleTV extends MqttSwitch implements Switch {
    String hostname;

    public AppleTV(Room room, String id, String hostname) {
        // ...
    }

    public Command on() {
        return new SocketCommand(hostname, 7000, "POST /play HTTP/1.1\n" +
            "Content-Length: 65\n" +
            "User-Agent: MediaControl/1.0\n" +
            "\n" +
            "Content-Location:\n" +
            "Start-Position: 0\n");
    }

    public Command off() {
        // TODO
        return null;
    }
}



Denon AVR-X2000

My AV receiver is a Denon AVR-X2000. CEC support on this device is limited, but luckily there is an API. Unfortunately the API is not documented, but by using the web interface I could reverse engineer it. There are some quirks during startup though, as it can take quite a while before the Denon is reachable through the API (while it already responds to a manual press of the power button). Because of this we'll use a combination of both CEC and the API.


First, let's create the Receiver class itself. It's an implementation of MqttSwitch, so the CEC part is easily taken care of. We do override the on() method to make sure it's only fired when needed, as this command toggles the power status of the Denon. To get more detailed information on the status, and to change volume and inputs, we use the API. The API calls are performed by a DenonCommand.

package nl.edubits.thuis.server.devices;

public class Receiver extends MqttSwitch implements Device, Switch, Singable {
    private final String hostname;
    private Status status;
    private NowPlaying nowPlaying;

    public Receiver(Room room, String id, String hostname) {
        // ...
    }

    public boolean isFullyOn() {
        return isOn() && (status == null || status.getZonePower());
    }

    public boolean isFullyOff() {
        return !isOn() && (status == null || !status.getZonePower());
    }

    public Command on() {
        if (!isOn()) {
            return super.on();
        }
        return null;
    }

    public DenonCommand volume(double value) {
        value = Math.max(0, Math.min(98, value));
        String volume = (value == 0) ? "--" : String.format("%.1f", value - 80);
        return new DenonCommand(this, "PutMasterVolumeSet", volume);
    }

    public DenonCommand input(Input input) {
        return new DenonCommand(this, "PutZone_InputFunction", input.getValue());
    }
}


Due to time limitations I won't go into the implementation of the API in this post. If you would like to find out more about this topic, there is a valuable article by Open Remote describing the key possibilities.
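The volume() mapping above is compact enough to test in isolation. This sketch restates it outside the Receiver class (the class name DenonVolume and the Locale pin are mine, not from the original code):

```java
import java.util.Locale;

// Illustrative re-statement of Receiver.volume(): the Denon web API takes the
// master volume as an offset from 80, with "--" for the minimum.
public class DenonVolume {
    static String format(double value) {
        value = Math.max(0, Math.min(98, value)); // clamp to the 0..98 range
        return (value == 0) ? "--" : String.format(Locale.US, "%.1f", value - 80);
    }

    public static void main(String[] args) {
        System.out.println(format(0));   // "--"
        System.out.println(format(45));  // "-35.0"
        System.out.println(format(120)); // clamped to 98, prints "18.0"
    }
}
```

So a requested volume of 45 becomes the API value "-35.0", and anything above 98 is clamped rather than rejected.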



ReadyNAS Ultra 4

The NAS runs the Plex Media Server. When nobody is home the NAS is not used, so it is turned off by default. The NAS supports Wake-on-LAN (WOL), so we can use this to wake it and make Plex available.


For WOL I use a nice little library and built a command around it:

package nl.edubits.thuis.server.automation.commands;

public class WakeOnLanCommand implements Command {
    Computer computer;

    public WakeOnLanCommand(Computer computer) {
        // ...
    }

    public Message runSingle() {
        try {
            // Send the magic packet a few times to make sure it arrives
            for (int i = 0; i < 5; i++) {
                // library call sending the WOL packet (elided in the original)
            }
            return new Message(String.format("Thuis/computer/%s", computer.getId()), "wake");
        } catch (IOException | DecoderException e) {
            logger.log(Level.WARNING, String.format("Waking up '%s' failed", computer.getId()), e);
        }
        return null;
    }
}


As the Computer class used for the NAS is just a basic implementation of an Actuator, using the WakeOnLanCommand for its wake() method, I won't present its source code here.
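For context, the magic packet such a WOL library sends has a very simple structure. This is an illustrative, library-free sketch (class and method names are mine), not the code used in the post:

```java
// A Wake-on-LAN magic packet is six 0xFF bytes followed by the target MAC
// address repeated 16 times, broadcast over UDP (commonly port 9).
public class WakeOnLan {
    static byte[] magicPacket(String mac) {
        String[] parts = mac.split("[:\\-]");
        byte[] packet = new byte[6 + 16 * 6];
        for (int i = 0; i < 6; i++) {
            packet[i] = (byte) 0xFF; // synchronization header
        }
        for (int rep = 0; rep < 16; rep++) {
            for (int i = 0; i < 6; i++) {
                packet[6 + rep * 6 + i] = (byte) Integer.parseInt(parts[i], 16);
            }
        }
        return packet;
    }

    public static void main(String[] args) {
        // MAC address taken from the NAS entry in the Devices class
        System.out.println(magicPacket("00:22:3F:AA:26:65").length); // 102 bytes
    }
}
```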



Now that we have almost all the devices set up, we can combine them in scenes. Let's start with some code:

public static Scene homeTheaterBase = new Scene("homeTheaterBase",
        waitForOn(denon.on(), homeTheaterDenon),
        // reconstructed: the original command was lost; presumably the outlet
        // is switched off once the receiver reports fully off
        waitForFullyOff(homeTheaterDenon.off(), denon)
);

public static Scene homeTheater = new Scene("homeTheater",
        illuminanceIsLowerOrEqual(livingMoodTop.on(), 70l),
        waitForOn(tv.on(), homeTheaterTv), // reconstructed: exact command elided in the original
        waitForOn(appleTv.on(), denon),
        // a further command, elided in the original, runs once the Apple TV is on:
        // waitForOn(..., appleTv),
        waitForFullyOn(new ListCommand(asList(
                // volume/input commands elided in the original
        )), denon)
);


Here the scenes are split in two. homeTheaterBase is the basis for all the different home theater scenes: e.g. the one for the Apple TV displayed here, or one for Blu-ray. It also allows me to switch from one to another without turning everything off.


As you can see lots of commands are dependent on each other, so devices have to wait for some other devices before starting up. The most obvious case is that you first have to turn on the power before you can turn on the device itself, or give the device more commands.


The receiver has a special qualifier, waitForFullyOn: this is because it has two stages of powering on. First CEC reports it's turned on (the normal on-state), and later the API reports the powered-on status as well (the fully-on state). We're interested in both of them, as it's not possible to send any commands through the API before it reaches the fully-on state.


Time for a quick demo:

Note: as this is a demo, the launch takes a bit more time than usual. Please be patient.


There is one thing left to integrate: Plex! This will be the subject of part 3.


At this point in the project I will try to explain as best as I can how I see these components working and how they should integrate with the main system. Unfortunately I do not have time to continue developing them during the Challenge, but if nobody is against it I will continue updating these posts as I progress with development, so I can come to a conclusion on all the parts.


Previous posts:

Pi IoT - Simone - Introduction

Pi IoT - Simone - #1 - Light Controller

Pi IoT - Simone - #2 - Main System

Pi IoT - Simone - #3 - Door / Window notifier

Pi IoT - Simone - #4 - Power consumption statistics and control

Pi IoT - Simone - #5 - Laundry notifier


In this post I will go ahead and cover the rest of the components. As mentioned, I will not leave them as they are, and I hope to make a separate post for each and every one of them with more details.


1. Temperature Control


For this module I will talk about the gas-based central heating system. Most heating units have the possibility to connect a thermostat that controls them. The thermostat is basically a temperature sensor that sends data to a central module, and a relay that closes a connection on a wire to start the unit. The downside of this system is that you only use the data from the room in which the thermostat is located.


Using the same logic you could gather data from all the rooms, set different temperatures in each of them, and stop the central heating unit only when the coldest room has reached the temperature set for it. The individual heaters would also have to be controlled, because it can get too hot in the other rooms, so you would stop the water flowing to the heaters in the rooms where the temperature has reached its threshold. For stopping the heaters there are electric valves I could use, or I could make a mechanism that turns the knob on the heater (this way I could keep the idea, throughout the project, that anything automatic should still be operable by hand in the way we are used to).


The user should be able to control the temperature remotely, so you could easily heat up the apartment when you come home from a vacation.
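The control rule described above can be sketched as a couple of pure functions: the boiler keeps running while any room is below its own setpoint, and each room's valve closes as soon as that room is warm enough. Room names and setpoints here are invented for illustration:

```java
import java.util.Map;

// Hedged sketch of the per-room heating rule, not project code.
public class HeatingController {
    // The boiler runs while at least one room is colder than its setpoint
    static boolean boilerShouldRun(Map<String, Double> temps, Map<String, Double> setpoints) {
        return temps.entrySet().stream()
                .anyMatch(e -> e.getValue() < setpoints.get(e.getKey()));
    }

    // A room's valve stays open only while that room is below its setpoint
    static boolean valveOpen(double temp, double setpoint) {
        return temp < setpoint;
    }

    public static void main(String[] args) {
        Map<String, Double> temps = Map.of("living", 21.5, "bedroom", 18.0);
        Map<String, Double> setpoints = Map.of("living", 21.0, "bedroom", 19.0);
        System.out.println(boilerShouldRun(temps, setpoints)); // bedroom still cold: true
        System.out.println(valveOpen(21.5, 21.0));             // living warm enough: false
    }
}
```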


2. Coffee machine


This is again a glorified light switch. It can be implemented using the same idea, integrating the coffee machine so it starts at a certain hour. Just as the light switch can be controlled from the main server, the switch for the coffee machine could be controlled the same way. The only other thing I would add is a digital button that connects the relay without going through the main server.
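The "start at a certain hour" part boils down to computing a delay for a scheduler. A minimal sketch, assuming the relay would be switched by a task submitted to something like a ScheduledExecutorService (the class name and API are my illustration):

```java
import java.time.Duration;
import java.time.LocalTime;

// Compute how long to wait before switching the coffee machine relay on
// at a configured hour.
public class CoffeeSchedule {
    static Duration delayUntil(LocalTime now, LocalTime target) {
        Duration d = Duration.between(now, target);
        if (d.isNegative()) {
            d = d.plusDays(1); // target already passed today: schedule for tomorrow
        }
        return d;
    }

    public static void main(String[] args) {
        System.out.println(delayUntil(LocalTime.of(6, 30), LocalTime.of(7, 0))); // PT30M
    }
}
```

The resulting Duration can be handed to ScheduledExecutorService.schedule() together with a task that closes the relay.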


3. Personalized home welcoming


I imagined doing this by inserting an RFID reader in the shield of the door lock, where you insert your key to unlock the door. The RFID tag would be glued to the key, so when you insert it to unlock the door the main system recognizes who entered the house. Since there are multiple devices controlled by the main server, it should be able to execute a specific set of commands when you enter: for example, start playing a certain playlist, turn on the lights along your usual way through the house, etc.
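A hypothetical sketch of the dispatch this implies: the tag id read at the door maps to a per-person list of actions for the main server to execute (tag ids and action names are made up):

```java
import java.util.List;
import java.util.Map;

// Map each known RFID tag to the actions that should run on entry.
public class WelcomeActions {
    static final Map<String, List<String>> ACTIONS = Map.of(
            "tag-alice", List.of("lights/hallway/on", "music/playlist/morning"),
            "tag-bob", List.of("lights/hallway/on", "lights/office/on"));

    // Unknown tags simply trigger nothing
    static List<String> onEntry(String tagId) {
        return ACTIONS.getOrDefault(tagId, List.of());
    }
}
```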


4. TV Control


The TV Control consists of an IR LED that acts like a remote for your TV (I attached some code that I used for testing this, which might help). The hardest part is to recognize what is happening on the TV. Since image recognition is a processing-intensive application, I imagined this system on a standalone Raspberry Pi with a camera and the IR sensor attached to it. The main system can forward the commands from the user, and the TV control system executes them.







Sorry for the lack of information, but I will update as I progress with the development. Meanwhile please leave your thoughts in the comments below; all ideas are useful.


At this point in the project I will try to explain as best as I can how I see these components working and how they should integrate with the main system. Unfortunately I do not have time to continue developing them during the Challenge, but if nobody is against it I will continue updating these posts as I progress with development, so I can come to a conclusion on all the parts.


Previous posts:

Pi IoT - Simone - Introduction

Pi IoT - Simone - #1 - Light Controller

Pi IoT - Simone - #2 - Main System

Pi IoT - Simone - #3 - Door / Window notifier

Pi IoT - Simone - #4 - Power consumption statistics and control


This module is actually made of two parts, but both are based on weighing things, and both can be used for more than laundry.


1. Notification when you have enough dirty clothes to make a washing cycle


Most washing machines have a limit, based on weight, on how many clothes they can wash in one cycle. The first part is a scale that tells you when you have reached that weight. For this I imagined making three drawers (one for whites, one for blacks and one for colors); the system could notify me when one of the drawers is close to the washing machine's limit.


2. Notification for how many washing cycles you can do with your available detergent.


As mentioned before, this is also based on weighing things, in this case the bag of detergent. Even if there are small fluctuations, the quantity of detergent you use for a wash is almost the same. The system can take the quantity missing between two different measurements as the quantity needed for one wash, and make an average. This way it can tell you at any time how many washing cycles you can still do, and you can consult this when you are shopping and don't know whether you need to buy detergent.

One thing to take into consideration is logic that handles unusual readings. For example, you take the box off the scale when you use it, and the system would add to the statistics that you used 5 kilos of detergent; it has to disregard this reading by not taking 0 into consideration. Another possibility is using some detergent for something else, or lending some to another person. In this case there will be abnormal usage, and the system should record this quantity but take it into consideration only if it repeats a couple of times.

The third thing to handle is the addition of new detergent: the system should reset the quantity it takes into consideration. But again, you could press on the scale with your hand while filling the detergent box at the same moment the system is reading the weight, so this case should be handled too. For this I think it would be safe to take into consideration only values that persist for half an hour.
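The rules above can be sketched as a small tracker: zero readings (box off the scale) are ignored, weight increases are treated as refills that reset the reference, and only drops count as usage. This is my illustration of the described logic, not code from the project:

```java
import java.util.ArrayList;
import java.util.List;

// Estimate per-wash detergent usage from successive scale readings (grams).
public class DetergentTracker {
    private double lastStable = -1;
    private final List<Double> usages = new ArrayList<>();

    void reading(double grams) {
        if (grams <= 0) return;                 // box taken off the scale: ignore
        if (lastStable < 0 || grams > lastStable) {
            lastStable = grams;                 // first reading, or a refill
            return;
        }
        if (grams < lastStable) {
            usages.add(lastStable - grams);     // one wash worth of detergent
            lastStable = grams;
        }
    }

    double averagePerWash() {
        return usages.stream().mapToDouble(Double::doubleValue).average().orElse(0);
    }

    int washesLeft() {
        double avg = averagePerWash();
        return avg > 0 ? (int) (lastStable / avg) : 0;
    }
}
```

The half-hour stability filter from the text would sit in front of reading(), passing on only values that have persisted long enough.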



The whole calculation for this system would take place on the Raspberry Pi; a weight sensor could be attached to it using the same I2C protocol described in the previous posts. The sensor should only send the weight to the main server, and all the calculation would take place there.


This system can be used for anything else that you consume in a consistent manner; for example coffee: I for one use roughly the same amount of coffee every morning. Since the actual weights are not needed (the calculation is based on statistics of the usual consumption), you can use it on anything.

Before note:

I am not an electrical engineer, nor do I have experience working in this field. Please double check anything I say in this post before trying it yourself.


Previous posts:

Pi IoT - Simone - Introduction

Pi IoT - Simone - #1 - Light Controller

Pi IoT - Simone - #2 - Main System

Pi IoT - Simone - #3 - Door / Window notifier


For this part of the project the main idea is to have a device that can calculate the power consumption of each wall outlet individually. This gives you a good statistic of the power consumption, and it can help you minimize the energy needed to run your house.

The control part comes with the ability to turn the sockets on and off. Mostly this feature is not very useful, but if you have kids, for example, you could enable a socket for only two hours, so you can charge your phone and then it is safe for the kids to play around it. This is only an example, but you can find other uses for this.


The switch part is the same as the one for the light switches: it acts the same way and has a relay, so it can be implemented the same way. One thing to take into consideration is the power consumption versus the limit of the relay you are using, or you could burn it very easily.


The main challenge of this part of the project is to make a component that can accurately calculate the power consumption even for small consumers. The I2C protocol described in the previous posts is good enough to send the information to the Raspberry Pi, and it has the processing power to display it in any form needed.


I will get more into details, but first I would like to say that this is very dangerous and should not be attempted by someone who does not have experience working with high currents. Also spend a while searching online for discussions on this topic, and for safety advice.


If you do want to go ahead with this, here are some things to take into consideration one by one before plugging in a device:

1. Reconsider if you really need to do it and if it is worth it.

2. Make sure the component is not plugged in when you are arranging everything for the test

3. Make sure you are at least one meter away from the device during the testing phase when it is plugged in. You don't know for sure what will happen and mistakes can be made.

4. It is better to have somebody with you at a safe distance who can help, but if you are working alone make sure there are no other people in the area, so they do not touch anything by mistake.

5. Remove all animals from the room.

6. Make sure you isolate your component well from anything that might be a conductor (I burned a component because it made contact through a wooden floor).

7. You should always be able to stop the power at any time, without getting close to the component you are testing

8. Minimize the damage if anything is wrong by keeping the device away from other instruments you use and not connected to anything.

9. Try to develop a way that you can see the readings from a distance or record them on to something so you don't have to get close

10. Make sure your calculations are right and double check them.

11. Reconsider again if you really want to do this

12. Never leave the component unsupervised while plugged in unless it is well tested and well isolated.


Please comment below any thoughts on this and I will add them to the list above. I would be glad if this list could become a genuine checklist of things to consider before testing components with high currents.


Now that this is said, here are some options that can be used to calculate the power consumption:

1. The safest way to do this (not really safe, but safer than the others I know) was mentioned to me in a previous post by jc2048, and it refers to the SCT-013-030 current transformer. More details can be found at 3. AardEnergy – Current and Voltage Transformers. Again, thank you Jon for the advice.


2. A nice component that I found is this one: it works pretty well, but it is not a non-intrusive solution like the one above. I for one tried to replicate it and ended up with a pretty good current sensor (though it looks horrible, mostly because I bought the wrong size trimmers).



3. The device that I mentioned in the lights controller blog (Pi IoT - Simone - #1 - Light Controller). The schematic is below. There should be an operational amplifier on the receiving end of the device.



I have to mention that this is by far the least safe of the three mentioned here. Below is a picture of the one I used; it worked until (as mentioned before) it made a connection through the floor and blew, taking along with it on the friendly ride an Arduino board, an ATtiny, some pins on a Raspberry Pi, my computer's motherboard and a fuse. The fuse was easy to replace, so see again item 8 on the list above:



For something like this, make sure the resistor can withstand the power drawn through the socket.


For this list also let me know if you have other ideas. I will add them to the list.
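Whichever sensor is used, the Raspberry Pi side of the calculation looks roughly the same: compute the RMS of the sampled current and multiply by the mains voltage to estimate power. A toy sketch with made-up samples and an assumed 230 V supply (calibration constants are illustrative only):

```java
// Turn current-transformer samples into a rough apparent-power estimate.
public class PowerEstimate {
    static double rms(double[] samples) {
        double sum = 0;
        for (double s : samples) {
            sum += s * s;
        }
        return Math.sqrt(sum / samples.length);
    }

    public static void main(String[] args) {
        double[] amps = {2.0, -2.0, 2.0, -2.0};   // toy current samples in A
        double apparentPower = 230.0 * rms(amps); // assuming 230 V mains
        System.out.println(apparentPower);        // 460.0
    }
}
```

A real implementation would also sample the voltage (or at least its phase) to get true power rather than apparent power.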


I will continue this post with more information regarding this component when I have some progress.


To finish the Android competition application, I need to add the "updating functions". That is, send the not-yet-synchronized information to the Central Node (web server) and wait for a confirmation. If the confirmation arrives, the information is in the Raspberry Pi3; when no confirmation is received, the data should be resent in the next cycle.

This will require:

  1. HTTP_Client in the phone, to send the data as an HTTP_POST
  2. Communication service in the central node - PHP files able to receive the HTTP_POST message, extract the information and insert it into the database
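The confirm-or-resend cycle described above can be sketched as a tiny queue (my illustration; the real app keeps this state in the Android client):

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Unconfirmed samples stay queued and are retried each cycle until the
// server acknowledges them.
public class SyncQueue {
    private final Deque<String> pending = new ArrayDeque<>();

    void add(String sample) {
        pending.add(sample);
    }

    // Called once per cycle; 'confirmed' is whether the server acknowledged the batch
    void cycle(boolean confirmed) {
        if (confirmed) {
            pending.clear(); // data is now safely in the central node's database
        }
        // otherwise the samples stay in 'pending' and are resent next cycle
    }

    int pendingCount() {
        return pending.size();
    }
}
```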


Client and Server structure

We are focusing on the smart phone to central node communication. Data flow will start from the phone (client) to the central node (server).

client server.jpg


1) The user's node sends an HTTP_POST containing:

  1. Id - to be checked when received (to have some verification that the package was intended for the competition service)
  2. Distance tracked information for the user

The server checks the id and, if it is the right one, extracts the distance tracked information.

2) This information is saved into the database (created in post #5) to be accessed later on by the main GUI

3) If writing is successful, a response is sent back to the phone

Data packets

json logo

The data packet structure is that of an HTTP_POST. The message, however, will contain a String with a JSON Array format: this way, I can send several samples (several rows in the database), each of them in a key:value format. As a result, when the server receives the HTTP_POST, it will be easy to extract and identify each value.
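To illustrate the idea (a Python sketch with made-up field names, not the actual app code): several samples become one JSON Array string on the client, and the receiving side unpacks it row by row.

```python
import json

# A batch of samples, each one destined to become a row in the database.
# Field names here are illustrative, not the app's actual schema.
samples = [
    {"date_time": "2016-08-20 10:00:00", "distance": "120", "synchronized": "no"},
    {"date_time": "2016-08-20 10:05:00", "distance": "340", "synchronized": "no"},
]

# Client side: serialize the list into one JSON-array string for the HTTP_POST body.
payload = json.dumps(samples)

# Server side: decode the string back and handle each sample individually.
rows = json.loads(payload)
for row in rows:
    print(row["date_time"], row["distance"])
```

Because each sample is a key:value object, the server never has to guess at field positions when inserting rows.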


(Images from JSON )


(*)NOTE - SECURITY. There is no protection against eavesdropping nor any security whatsoever. Traffic between the user's node and the central node should, at the least, be encrypted in the future.


Android App

Initial setup: Nexus 5 / Android / SmartCompetitionHome App v 1

I include a new java class to the project:


It makes use of the libraries:

  • android-async-http: the HTTP client sending the information to the server, as an HTTP POST. It does not freeze the whole application while waiting for a response.
  • gson: Creates a JSON structure with the information. It is an easier way of extracting the corresponding values when the package is received.


Sending data:

This class will be implementing an AsyncHttpResponseHandler (asyncHttpResponseHandler). This handler defines two callbacks: onSuccess (when we obtain a successful response from the server) and onFailure (when we get some error). As stated before, this is an asynchronous wait which will not freeze the app while the server response is traveling back.


It also holds fields with the server information: URL to send data to, JSONID etc.


Another important characteristic is the JSON String formatting of packages. To create this structure, this class implements a convertToJSON method to obtain the desired JSON Array object from a List of Map<String, String>:


public String convertToJSON(List<Map<String, String>> args){
    String json = "";
    Gson gson = new GsonBuilder().create();
    //Use GSON to serialize the List to JSON
    try {
        json = gson.toJson(args);
    } catch (Exception e){
        System.out.println("Could not convert to JSON!: " + e);
    }
    return json;
}

So, the call to send a package to the server is as follows:

/**
 * sendToCentralNode()
 * Function to send a JSON string to the central node server
 */
public void sendToCentralNode(String json, String url){
    if (url != null && !url.equals("")) {
        System.out.println("Sending: " + json);

        //Set parameters
        RequestParams params = new RequestParams();
        params.put(_JSON_ID, json);

        //Send the HTTP request asynchronously
        asyncHttpClient.post(url, params, asyncHttpResponseHandler);

        sending = true;
    } else {
        System.out.println("Empty URL - Not sending");
    }
}


When to send data?

The app should be able to send the data automatically (instead of having a SEND button). This can be done with a timer, every X seconds. However, I will use an even simpler solution, since data recording does not have a high sample rate, nor do I need a lot of computation to process the track information. Data will be sent every 5 new locations detected. The code is found in CompetitionTrackActivity:


if(num_locations == _LOCATIONS_TO_SEND){
   num_locations = 0;
   List<Map<String, String>> notUpdated = myDB.getAllNotUpdatedValues(user_name, columnsTable, DatabaseManager.upDateColumn, DatabaseManager.updatedStatusNo);
   mServer.sendToCentralNode(mServer.convertToJSON(notUpdated), mServer.WRITE_URL);
}



(*)getAllNotUpdatedValues reads from the database all values that were not updated
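The "send after every N new locations" logic above can also be sketched in Python for illustration (the names here are hypothetical; the app itself does this in Java in CompetitionTrackActivity):

```python
# Illustrative batching logic: collect location samples and flush every N.
# Hypothetical names, for illustration only.

LOCATIONS_TO_SEND = 5

class Batcher:
    def __init__(self, send):
        self.send = send          # callback that ships a batch of samples
        self.pending = []

    def on_location(self, sample):
        self.pending.append(sample)
        if len(self.pending) == LOCATIONS_TO_SEND:
            self.send(list(self.pending))   # flush a copy of the batch
            self.pending.clear()

sent = []
b = Batcher(sent.append)
for i in range(12):
    b.on_location(i)
print(len(sent), len(b.pending))   # 2 batches sent, 2 samples still pending
```

Compared with a timer, this keeps the sending rate proportional to how much new data there actually is.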


Updating only not synchronized values

In order to keep the databases on the server and on the phone in sync, we use an extra column to flag the state of each sample ("synchronized"). This way, when we send data, we only send samples with synchronized = 'no'. Afterwards, when the ACK arrives from the server, this flag is turned into a 'yes'.
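A minimal sketch of that flag mechanism, using an in-memory sqlite3 table with an illustrative schema (not the app's actual tables):

```python
import sqlite3

# Illustrative schema: a 'synchronized' column flags rows not yet ACKed.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE track (id INTEGER PRIMARY KEY, distance TEXT, synchronized TEXT)")
db.executemany("INSERT INTO track (distance, synchronized) VALUES (?, 'no')",
               [("120",), ("340",)])

# Send step: pick only the rows with synchronized = 'no'.
pending = db.execute("SELECT id, distance FROM track WHERE synchronized = 'no'").fetchall()
print("sending", len(pending), "rows")

# ACK step: once the server confirms, flip the flag to 'yes'.
ids = [(row[0],) for row in pending]
db.executemany("UPDATE track SET synchronized = 'yes' WHERE id = ?", ids)
db.commit()

left = db.execute("SELECT COUNT(*) FROM track WHERE synchronized = 'no'").fetchone()[0]
print("still pending:", left)
```

If the ACK never arrives, the flag stays 'no' and the same rows are simply picked up again on the next cycle.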


Central Node - Competition Service

Initial setup: Raspberry Pi 3 - Raspbian OS (Jessie) / SSH Enabled / Mosquitto MQTT Broker installed / MQTT Subscriber client / Console interface / Python GTK interface / MySQL Server / Apache2 web Server


A bit more of re-routing



In post #5 I showed how to do a port forwarding from the WiFi router to the Central node. I used port 80, to have web traffic redirected to the web server in the Raspberry Pi 3 (also, using port 80). However, for the competition service, I will be using another port. It can be done in two steps:

  1. Adding a new Port Forwarding rule in the router. Traffic entering thru PORT_COMPETITION will be redirected to the central node, PORT_RASPI_COMPETITION
  2. In the Raspberry Pi 3, traffic coming thru PORT_RASPI_COMPETITION has to be redirected to the PHP files providing the Competition Service. We do so by configuring the Apache server (we have it listening to Ports: 80, PORT_RASPI_COMPETITION):
    • Modify its configuration files, in /etc/apache2/ to create a new VirtualHost for the PORT_RASPI_COMPETITION. It should point to the folder with the PHP Interface files.
    • Restart apache2


PHP Interface - get new data and store it

The server will be receiving HTTP POST requests from the phone. It will make sure it is the intended data, then decode it and try to save it into the database.


Therefore, the process for any new request is:


1. Obtain the JSON Array Object

$json = $_POST["id_for_JSON"];
//Remove Slashes
if (get_magic_quotes_gpc()){
    $json = stripslashes($json);
}

//Decode JSON into an Array
//Json structure:
//{var data = ['{"table":"Name of table"}', '{"column1":"value1",..,"columnN": "value2"}'];}
$data = json_decode($json);


2. For each JSON Object, try to store it in the database

for ($i = 0; $i < count($data); $i++){
    //Get keys of JSON for sensors values
    $res = $db->storeInTable($data[$i]->table_name, $data[$i]->date_time, $data[$i], $data[$i]->synchronized);
    //Build the array to send back
    $response[$i] = $res;
}


(*)storeInTable is the function I designed to store each sample in the database. It requires table_name, date_time and synchronized fields to be passed separately. The other values can be stored automatically.


During the process, we also generate the $response, which will be encoded into another JSON packet. The difference will reside in the 'synchronized' flag. If data was successfully updated in the database, 'synchronized' will be turned to 'yes' so that the phone can then update its own local database.




We have our complete Competition application. This means that, right now, the platform can:

  • Record the distance walked into the phone app
  • Send distance values to the central node
  • Store these distance values into the MySQL database at the central node


To finish the platform, we will have to update our main GUI !

That's right! What's an episode without the "made for TV version"?



I suspect that the blogging application is "downscaling" my videos which resulted in previous episodes being somewhat blurry. This episode was generated using the highest resolution available. When viewing it, make sure that you use full screen mode and let me know if the text is sharp and clear.


I hope you have found this useful and informative.



Don't Over Complicate It

Oftentimes it seems that as programmers, or anything else for that matter, we get caught up in a particular pattern for doing things. One that I see quite often is the use of SQL databases in various projects. The "kids" these days are introduced to SQL as a primer for programming and can't think how to handle persistence without it. To show that life can exist without a formal database, HangarControl is being written using some very simple mechanisms for creating, reading, and updating information.


The User Record

In a future episode, I will discuss the Flask-Login module. Flask-Login is a set of helper routines to streamline user session management. As it is written in Python, it expects that your users can be accessed as "User" objects. One might be tempted to just fire up SQLAlchemy (SQLAlchemy - The Database Toolkit for Python ) and start creating a database. Instead, we're going to create a class that provides everything needed without the overhead of a database manager.


According to the documentation, Flask-Login requires the following properties and methods:

  • is_authenticated: This property should return True if the user is authenticated, i.e. they have provided valid credentials. (Only authenticated users will fulfill the criteria of login_required.)
  • is_active: This property should return True if this is an active user - in addition to being authenticated, they also have activated their account, not been suspended, or any condition your application has for rejecting an account. Inactive accounts may not log in (without being forced of course).
  • is_anonymous: This property should return True if this is an anonymous user. (Actual users should return False instead.)
  • get_id(): This method must return a unicode that uniquely identifies this user, and can be used to load the user from the user_loader callback. Note that this must be a unicode - if the ID is natively an int or some other type, you will need to convert it to unicode.


Define a User Record

We can easily provide this directly from our favorite text editor. I have broken up the various portions for discussion purposes. Trust that they are all that makes up the '' file.

# Get a template object for our User record.
from flask_login import UserMixin

class User(UserMixin):
    # This is an array that will be shared across all instances of this class
    users = []
    # This method is automatically called when a new instance of User is created.
    # Notice at the end where the *class* User appends this new instance to our
    # users list. Just think "SQL insert".
    def __init__(self, username, acctnum, password=None, fullname=None, active=True):
        self.username = username
        self.acctnum = acctnum
        self.fullname = fullname
        self.password = password
        self.active = active
        User.users.append(self)


Here is something that I recommend you do that just makes your (debugging) life so much easier: create a method for rendering your object in a human readable form!

    # Any class that you create should implement __repr__. This provides a 
    # convenient method to display a human readable representation of your object.
    def __repr__(self):
      return "<User username:%s acctnum:%s password:%s fullname:%s>" % \
        (self.username, self.acctnum, self.password, self.fullname)


If you've done any work with Python, the above pattern is pretty familiar. Next we implement the methods that Flask-Login is expecting from us.

    # These are required by the Flask-Login module
    def is_active(self):
        return self.active

    def is_anonymous(self):
        return False

    def is_authenticated(self):
        return True

    def get_id(self):
        return self.username


The final portion of our User class is the piece that makes all this "what's a database anyway?" talk complete. Here we are simply defining a mechanism or language, if you will, to query User records in a structured way. Ooh, see what I did there? (Okay, "... structured ... query ... language ...") That's okay, my kids didn't think it was funny either.

    # The @classmethod decorator (think "modifier") makes the method definition
    # available by this syntax: "User.find_by_attr(...)". The important concept
    # is that this method isn't used by individual 'user records', rather the 
    # collection of all 'user' records.
    @classmethod
    def find_by_attr(cls, key, target):
      for user in User.users:
        if getattr(user, key) == target:
          return user
      return None

    @classmethod
    def find_by_username(cls, target):
      return cls.find_by_attr('username', target)


Working With User Records

Now that we have an implementation of a User, let's take a look at how it will be utilized.

$ python
>>> from lib.user import User
>>> User('admin', 0, 'pilot', 'Administrator')
<User username:admin acctnum:0 password:pilot fullname:Administrator>
>>> User('pilot', 142, 'secret', 'Ima Pilot')
<User username:pilot acctnum:142 password:secret fullname:Ima Pilot>
>>> User.users
[ <User username:admin acctnum:0 password:pilot fullname:Administrator>,
  <User username:pilot acctnum:142 password:secret fullname:Ima Pilot>]
>>> User.find_by_username('admin')
<User username:admin acctnum:0 password:pilot fullname:Administrator>
>>> User.find_by_username('whodat')
>>> User.find_by_username('whodat') == None
True


Users, Nice and Tidy

In just a couple dozen lines of code, we have implemented everything that we need for our user management. Understandably, it does not handle the complexities of a frequently changing population. But for the sake of this project, the user population is very stable and the creation and management of pilots is handled by another application and HangarControl only gets a list of pilots when there has been a change.


I hope you have found this useful and, in the future, aren't afraid to "go naked" and skip the SQL database!



In my last post I shared the variety of boxes I found at the local Walmart, ranging in price from 10 cents to 25 cents.  Today I had a chance to try 2 of them on for size and fit, and quickly decided the midsize one seems almost made for a Raspberry Pi implementation with the 7 inch Touchscreen!  If I remember correctly it was also only 10 cents! 


As I had mentioned, the idea was to take the Farm Operations Center setup out of its base setup, which was no container at all with everything hanging out in the open air, to the more protected containment system of a box.


Looking around there were a variety of interesting cases that could be ordered or made from a 3D printer, but I don't have a 3D printer, yet, and I wanted flexibility to add parts and pieces easily without worrying about outgrowing the containment.  One item I am really looking forward to is adding a battery to make it portable, so some extra space was a must.


As such I looked into the largest of the 3 cases first, cutting out a hole large enough to slip the metal frame on to allow screws to be drilled through the case, securing it.  My original hole was in the top of the case.  I think mainly because a case is designed with the top up, so that is what I tried.  :-)




Here I am showing the F.O.C. mounted to the large box.  Another box is shown above to give you an idea of depth for expansion.  While this wasn't bad there were 2 issues from my point of view. 


First, mounting to the top of the case, the lid, just wasn't the best option for being able to run cables through the main box, since you would have to carefully open the lid every time, watching all of the cable routing. 


Second, it just didn't feel comfortable in my hand.  That extra depth made it feel like a plastic brick.  If you don't plan on holding the F.O.C. Box and just want a containment system to set to the side and use, perhaps with an external keyboard and mouse option, then the larger box is not a bad way to go.  Especially for only 25 cents.




Here is the mid-sized option.  Just a tad smaller in the depth.  It looks even smaller than it is because this time I flipped the box over and used the bottom to mount the screen.  This fits my hand much better and I can imagine using the touchscreen and the potential of a battery setup very easily with this in place.




It is hard to tell in this picture, tomorrow I will try to get a new picture with the screws in place and everything wired in, but for now if you look at the 4 plastic squares on the box, they actually line up perfectly with the metal mounting spots on the RPi Screen.  As you can see from the bottom box that I have not cut into, the little boxes are recessed and the extruded part of the metal frame fits right into those.  Making for a great fit once you take some washers and screws and fit it together!


10 cents is not a bad price either; in fact I spent more on the screws and washers than on the boxes. 




Here we have the new F.O.C. Box sitting in front of my laptop with both of them connected to the MotionEyeOs RPi B+. 


The odd picture is actually my ceiling fan & light reflected in the outside window.  The latch of the box works great as an angle provider for the box to be used in a standing position.  I am very pleased with how this came together!  I want to actually be able to mount it above my laptops in my desk area in the future and I think the Box will allow for that easily.





On the Farm/Fowl side of things I wanted to share just how incredibly fast ducks can foul their water.  It is crazy!  It also makes for great tree watering supply but still, yuck!




Here is a picture of one of the ducklings being introduced to the pool and enjoying it quite a bit!  We previously had the ducklings and baby keats in the garage.  Word of caution to potential duck owners, ducklings like to play in their water and that quickly makes the entire garage smell very very bad!


We had a pretty good rain, so the ducks had tracked even more than a normal amount of dirt into the water, making it look not quite so refreshing to me.  So today was Duck Water Refresh day.




Ah clean water!  We won't even wait to let it fill up the pool!




Here we have some more water in the pool, and you may notice that the clean part seems to be diminishing.  Kind of like bath water now.  :-)




I guess "clean" water is a somewhat broad term when it comes to ducks.  But boy are they happy!


I was waiting for the new SenseHat to reach me before writing this post, but the recent updates suggest that I may not get it before the deadline. So I decided to go on with the faulty one I have. Although the code will work regardless of the SenseHat's condition, the output I am showing here will be faulty because of my hat.


In this post, I'll be getting data from the SenseHat and publishing it to an MQTT broker. Later this data is displayed on a freeboard dashboard with the MQTT plugin.


Hardware Setup

This post will be using the SenseHat I got as a part of the challenge kit. The SenseHat for Raspberry Pi is an add-on board which houses:

  • Humidity Sensor
  • Barometric Pressure Sensor
  • Temperature Sensor (?)
  • Magnetometer
  • Accelerometer
  • Gyroscope
  • 8x8 Color LED Matrix
  • 5-button joy stick

For more details about SenseHat, visit Raspberry Pi Sense HAT or

The sensehat can be mounted on the raspberry pi (here I use Pi3) with the screws provided with the sensehat. It will look like:


Next is to install the libraries for sensehat. To install them:

$ sudo apt-get update
$ sudo apt-get install sense-hat

This will install the C/C++ and Python libraries for using the SenseHat. Now you need to restart the system for the changes to take effect:

$ sudo reboot

Now you should be able to use SenseHat.


Software Setup

For this post, I'll be using a Python script which reads the values from the environmental sensors and publishes them via MQTT to the topic 'usr/vish/sensehat'. Each packet will be a JSON object like:


For this, we'll be using Paho MQTT python client library. Installation of the library is described in [PiIoT#06]: Ambient monitoring with Enocean sensors.

Once the library is installed, we are ready to go.

To get the sensor values the script is roughly like this:

## Init Sensehat
import sense_hat as sHat
sh  = sHat.SenseHat();

# Function to read the sensor and send data
def senseNsend( client ):
    dataPacket  = {}

    # Get environmental sensors data from sense hat
    dataPacket['humidity']  = sh.get_humidity()
    dataPacket['pressure']  = sh.get_pressure()
    dataPacket['temperature']   = sh.get_temperature()

    mqttPacket = "{" + ','.join( '"%s":"%r"' %(k,v) for (k,v) in dataPacket.iteritems() ) + "}"

    # Now send 'mqttPacket' to the broker
    client.publish( basePath+appPath, mqttPacket )
    # End of Function

More documentation on using Environmental sensors on SenseHat can be obtained from
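The hand-built string join in the script above is essentially what the standard json module does; here is a standalone sketch of the equivalent (made-up values, and note that json.dumps leaves numbers unquoted, unlike the '%r' string formatting above):

```python
import json

# Same packet shape as the script builds by hand (values here are made up).
dataPacket = {"humidity": 45.2, "pressure": 1012.6, "temperature": 31.8}

# One call replaces the manual '","'.join(...) construction.
mqttPacket = json.dumps(dataPacket)
print(mqttPacket)

# And any receiver can recover the dict directly:
decoded = json.loads(mqttPacket)
print(decoded["pressure"])
```

Using json.dumps also guarantees proper escaping if a value ever contains quotes or commas.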

Next is to create an MQTT broker connection and send the actual packet.

import paho.mqtt.client as mqtt
import paho.mqtt.publish as publish

basePath    = 'usr/vish/'
appPath     = 'sensehat'

# MQTT Broker Params
mqtt_broker = ""
mqtt_port   = 1883

## Define MQTT callbacks
def onConnect( client, userData, retCode ):
    print "Connected to broker."
    client.publish( basePath+'devices/sensehat', '{"name":"sensehat","desc":"Sensehat to MQTT bridge"}' )

client  = mqtt.Client( client_id = "sh2mqtt_bridge",
                            clean_session = True )
client.on_connect   = onConnect
client.connect( mqtt_broker, mqtt_port, 60 )

This will create a client named 'sh2mqtt_bridge' and connect it to the IP at mqtt_broker. Now we can publish a packet to the broker with:

client.publish( basePath+appPath, mqttPacket )

Here I have set the topic to publish as basePath+appPath = 'usr/vish/sensehat'

The complete code is attached for your reference.

You can use one of the MQTT debugging tools like MQTTSpy to monitor the messages in topic 'usr/vish/sensehat'.


Designing the dashboard

Now we will be using freeboard to design the dashboard for viewing data. I have already explained how to host freeboard with nodejs in [PiIoT#01] : Designing a dash board, and the Freeboard MQTT plugin in [PiIot#04]: Freeboarding with MQTT. Follow those instructions and start your freeboard server. Then create the dashboard:

  1. Start your Sensehat to MQTT python script
  2. Load the freeboard page from your node server in a browser
  3. Configure your MQTT broker as a source using Paho MQTT Client plugin mentioned in PiIoT#04
  4. Create a pane with name 'SenseHat'
  5. Create a text widget inside the pane with name as 'Pressure' and source as 'datasources["mercury-sensehat"]["usr/vish/sensehat"]["pressure"]'. You will be able to select this data source if your python script is running. Enable sparklines to get a line graph of values.
  6. Create similar text Widgets for Temperature and Humidity

Finally, your dashboard will look like this:

You will be able to view the values sent by your SenseHat (note that here the values are faulty because of my SenseHat)


Now you can save this to the 'www/freeboard/dashboards' directory of your nodejs script as 'senseHat.json'. The file is attached below.


To view the dashboard later, go to http://<freeboard host IP>:8080/#source=dashboards/senseHat.json.


Sense Your Environment

For the demo, I have modified the update interval to 3 sec. Below is a video of the demo where I'm using my Android phone's Chrome to view the SenseHat data.



Happy Coding,


<< Prev | Index | Next >>

During these last 2 days and before the official end of the Challenge, I have managed to integrate my Foscam IPCam into the DomPi project. This will probably be the last post with some solid progress. Let´s go into the details!


Previous Posts

PiIoT - DomPi: Application
PiIoT - DomPi: Intro
PiIoT - DomPi 02: Project Dashboard and first steps in the Living room
PiIoT - DomPi 03: Living room, light control via TV remote
PiIoT - DomPi 04: Movement detection and RF2.4Ghz comms
PiIoT - DomPi 05: Ready for use Living Room and parents and kids´ bedrooms
PiIoT - DomPi 06: Setting up the Command Center - RPI3
PiIoT - DomPi 07: Setting up the Command Center (2)
PiIoT - DomPi 08: Setting up the Command Center (3) openHAB, mosquitto, RF24
PiIoT - DomPi 09: Presence Emulator
PiIoT - DomPi 10: Garage node (slave)
PiIoT - DomPi 11: Ready to use. Garage node (master & slave)
PiIoT - DomPi 12: Ready to use. Control Panel and Garden Node
PiIoT - DomPi 13: Ready to use. DomPi Phase 1
PiIoT - DomPi 14: Presence Identification, Welcome Home and Temperature Alarm


Project Status


Foscam IPCam integration

At home I have an IPCam like the one in the picture below. The reason for leveraging this ipcam instead of the PiCam is based on two points:

Foscam IPCam

  • Redundancy. This ipcam is in itself a standalone alarm system, meaning you can configure it to detect motion, raise an alarm and send you an email - among many other features. By using the alarm feature of DomPi together with the alarm system of the IPCam I have two different devices; if one fails (crashes, loses connectivity, is hacked, dies, etc) I still have the second system to monitor my home
  • Range. The IPCam includes a motor and can rotate to cover a wider range. I am leveraging this function to cover two zones with a single camera: the living room and the corridor. By monitoring these two rooms, I am covering the "hot places" at home and those where it is most probable that someone would break in: the main entrance and the garden. Below you can see a diagram which shows the current position of the IPCam and both monitored zones: in green the living room and, by asking the IPCam to move to the second position, the corridor with the main entrance in purple

IPCam range

IPCam configuration

The first step to integrate the camera is to configure it using the web interface that Foscam includes. I will skip the basic configuration steps like the user and password, or IP configuration. Especially relevant for the DomPi project is to set up the preset points. A preset point is a position of the camera indicated on the X and Y axis so that it points to the desired place. In my case, I am setting up two preset positions: the preset to point at the living room and the preset to point at the corridor (and main entrance). I have placed the ipcam on top of my fridge where I can cover these two places with the same camera.


The web interface of IPCam is quite intuitive and adding a preset point is  easy, you just need to move the camera with the web buttons to the desired positions and save them. It is important to remember the names you save them with, as they will be the key part of the openHAB integration. There are some other interesting features to configure: sensitivity level (to trigger the alarm if motion is detected), trigger interval (how many seconds to wait before raising a new alarm), etc. I will touch these later on during the openHAB integration.


Foscam Main
Foscam Preset

Before leaving the IPCam configuration, I have configured the email account where the Foscam camera has to send the pictures to. I have also configured my router to always assign the same IP to this camera since it will be required for the openHAB.


Foscam IPCamera CGI

My camera supports CGI string requests and the manufacturer has provided the CGI manual (here). This enables control of the IPCam via HTTP requests sent out from openHAB. An example of these strings can be:



With this line, the IPCam will enable motion detection (see the &isEnable=1 part). If motion is detected, it will inform the user via an email; it will also take a picture and attach it to the email, and finally it will also ring the ipcam´s buzzer (see the &linkage=7), etc. To make it easier to follow, please find a snapshot of the above CGI manual that explains the setMotionDetectConfig parameter.
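For illustration, such a CGI string can be assembled programmatically; a Python sketch (host and credentials are placeholders, and actually sending it would be a plain HTTP GET, e.g. via urllib.request.urlopen):

```python
from urllib.parse import urlencode

# Sketch of building a Foscam CGI request URL (placeholder host and credentials).
def foscam_cgi_url(host, cmd, usr, pwd, **params):
    query = urlencode(dict(cmd=cmd, usr=usr, pwd=pwd, **params))
    return "http://%s:88/cgi-bin/CGIProxy.fcgi?%s" % (host, query)

url = foscam_cgi_url("foscam_IPaddress", "setMotionDetectConfig",
                     "USER", "PASSWORD", isEnable=1, linkage=7)
print(url)
```

Building the query with urlencode avoids escaping mistakes when passwords or parameter values contain special characters.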


Foscam manual2Foscam manual


In openHAB I am using the setMotionDetectConfig function, as well as two others: getDevState (to check if there is motion detected) and ptzGotoPresetPoint (to point the ipcam at the living room or the corridor).


openHAB Integration

There is no openHAB binding as such; however, there is a nice webpage with key information to help you move faster on understanding how to integrate the camera (here). For DomPi, I just need the three functions below, so I have decided to implement it in a maybe not very sexy way, but... it works. I have created four items:

  • One switch: to enable or disable the Foscam IPcam alarm. When the switch is turned on, it sends to the camera the below cgi string that instructs the ipcam to start detecting motion. If turned off, it sends the "OFF:GET:..." string to the camera which stops the detection
  • One switch: to move the camera to the preset position of the living room or the corridor. If the switch is moved to the "On" position, DomPi points the camera at the living room. If it is moved to "Off", the camera will point at the corridor
  • One string: that captures the motion alarm status: not enabled, no motion detected, detected motion
  • One string: to capture the sound alarm status: not enabled, no sound heard, heard sound

Here is the code at the .items file:

/* Foscam IPCam */
String Foscam_Motion     "Movimiento IPCam [MAP(]"    <camera> (gStatus, gFoscam)     { http="<[http://foscam_IPaddress:88/cgi-bin/CGIProxy.fcgi?cmd=getDevState&usr=USER&pwd=PASSWORD:4000:REGEX(.*?<motionDetectAlarm>(.*?)</motionDetectAlarm>.*)]" }
String Foscam_Sound     "Sonido IPCam [MAP(]"         <camera> (gStatus, gFoscam)     { http="<[http://foscam_IPaddress:88/cgi-bin/CGIProxy.fcgi?cmd=getDevState&usr=USER&pwd=PASSWORD:4000:REGEX(.*?<soundAlarm>(.*?)</soundAlarm>.*)]" }
Switch Foscam_Move        "Apuntar IPCam"                            <camera> (gStatus, gFoscam)     { http=">[ON:GET:http://foscam_IPaddress:88/cgi-bin/CGIProxy.fcgi?cmd=ptzGotoPresetPoint&name=salon&usr=USER&pwd=PASSWORD] >[OFF:GET:http://foscam_IPaddress/cgi-bin/CGIProxy.fcgi?cmd=ptzGotoPresetPoint&name=pasillo&usr=USER&pwd=PASSWORD]"}
Switch Foscam_AlarmSwitch    "Habilitar Alarma IPCam"            <shieldalarm>    (gStatus, gFoscam)    { http=">[ON:GET:http://foscam_IPaddress:88/cgi-bin/CGIProxy.fcgi?cmd=setMotionDetectConfig&isEnable=1&linkage=7&snapInterval=2&sensitivity=1&triggerInterval=5&usr=USER&pwd=PASSWORD] >[OFF:GET:http://foscam_IPaddress:88/cgi-bin/CGIProxy.fcgi?cmd=setMotionDetectConfig&isEnable=0&usr=USER&pwd=PASSWORD]"}


I have added something new in the items description: the MAP, which transforms a string (0, 1 or 2 in this case) into something more user friendly: not enabled, no motion detected, detected motion. This is done via the map file, which needs to be copied into the /configurations/transform folder. The file looks like this:

# Map file for DomPi - IPcam Foscam
1=No Alarm
2=IPCam Alarm


To manage the IPCam I have coded two rules. You can read the comments below. As a summary, DomPi points the camera to the living room or the corridor depending on where DomPi detects movement via the PIR sensors. The system starts by pointing the camera at the living room, with this view, the camera monitors any movement happening in the living room and covers the two doors to the garden. If any of the PIR sensors of the kids´ room or the parents´ room detects movement, then DomPi points the camera at the corridor and the main entrance.


Once the camera has reached any of the two preset positions (living room or corridor), it starts monitoring movement by itself. If the IPcam confirms the movement detected by the PIR, it will take a picture of the motion and email it to me. If the camera is pointing at the corridor and there is no movement after 30s, it is probably a false alarm and DomPi instructs the camera to point back at the living room.


All this only happens, of course, if I have turned on the Alarm switch, meaning, if I armed the alarm and want to protect my home while I am out.


The second rule is quite simple and just enables or disables the Foscam IPcam standalone alarm. When I activate the Alarm switch in DomPi, it automatically turns on the alarm in the ipcam. This is good as I want to avoid the camera emailing me when I am at home


/*
 * Rules for IPCam movement
 */
//This rule moves the IPCam from the default position Living room to Corridor
//This happens when the alarm is active and there is some motion detected in the kids or parents rooms
//It waits 30s and confirms if ipcam has detected motion as well
//After 30s since last movement, move cam to living room
rule "Move IPCam when motion detected"
when
    Item Nodo01Movimiento changed from 0 to 1 or    //Kids room
    Item Nodo02Movimiento changed from 0 to 1       //Parents room
then
    if (Nodo09AlarmSwitch.state==ON) {
        //If alarm switch is off, this means we are not interested in monitoring presence at home
        if (Foscam_Move.state==ON) {
            //If the cam is pointing to the living room (Foscam_Move.state==ON) we move it to the corridor
            postUpdate(Foscam_Move, OFF)
            Thread::sleep(5000)        //Allow some time for the ipcam to move to its new position
        }
        ipcam_secs2go = 30        //resets timer to 30 secs
        while (ipcam_secs2go>0 && Nodo09AlarmSwitch.state==ON) {
            Thread::sleep(1000)    //Check roughly once per second
            if ((Nodo01Movimiento.state==1) || (Nodo02Movimiento.state==1)) {
                //if there is still movement, reset timer to 30 secs
                ipcam_secs2go = 30
            } else ipcam_secs2go = ipcam_secs2go - 1
        }
        postUpdate(Foscam_Move, ON)    //After 30 secs with no motion, move the ipcam to the living room
    }
end

rule "Mimic AlarmSwitch in IPCam Alarm Switch"
//Mimics any change in status of the Alarm Switch to the IPCam:
//   If I activate the home alarm, let´s also activate the IPCam one
//   If I deactivate home alarm, let´s deactivate the IPCam one
when
    Item Nodo09AlarmSwitch changed
then
    postUpdate(Foscam_AlarmSwitch, Nodo09AlarmSwitch.state)
end


With this code, I am able to integrate the Foscam ipcam into DomPi. The camera includes many more features, which are not relevant to this project. A future improvement, should I need any of those features, would be to leverage the general API from Foscam and the REST API from openHAB.


Additional improvements

These two days I have made one additional improvement to a previous rule. I added some fine-tuning to the alarm, so that when the Alarm Switch is activated, it resets the Alarm Status by turning it off.


OpenHAB - Final View

These are some snapshots of the final view of the openHAB web interface for DomPi.

Main Menu 1 / Main Menu 2
Environmental out 1 / Environmental out 2
Lights Main / Temperatures


Nodes´ Dashboard

This is the final dashboard of the nodes. I am conscious that not all of the cells are green, but I hope that you have enjoyed the journey from May until now with the DomPi project.

Nodes Dashboard


Attached you can find the latest version of the openHAB files and the final part of the project! I hope you have enjoyed the DomPi posts as much as I have. Many thanks to all of you that have read the posts and shared your comments with me, on the things done well and also on what could be improved. I have learned a lot! And it's been a great pleasure, one that I also shared with my friends and family.

An important part of Thuis is integration of our Home Theater system. As the integration is quite extensive and consists of several components, this will be a 3-part blog series. In the first part we start with communicating to CEC-enabled devices from a Raspberry Pi. In the second part we will integrate CEC with the rest of Thuis, and make sure everything works properly together. In the third - and last - part of the Home Theater series we will add the support for Plex.

Home Theatre



Let's start with a short introduction to CEC itself. CEC stands for Consumer Electronics Control and is a feature of HDMI. CEC enables HDMI devices to communicate with each other. In the ideal situation this means a user only needs one remote control to control all his devices: TV, AV receiver, Blu-ray player, etc. Unfortunately many manufacturers use their own variation of CEC, and therefore in a lot of cases one still needs multiple remotes. To get an idea about the protocol have a look at CEC-O-MATIC, a great reference for all available commands!


The good news is that the GPU of the Raspberry Pi supports CEC out of the box!



To be able to handle the different dialects of CEC, Pulse Eight developed libcec. It enables you to interact with other HDMI devices without having to worry about the communication overhead, handshaking and all the differences between manufacturers. In contrast to what I mentioned in [Pi IoT] Thuis #5: Cooking up the nodes – Thuis Cookbook, Raspbian Jessie nowadays provides version 3.0.1 in the Apt repository, so there is no need to use the version from Stretch anymore. I've updated the cookbook accordingly. Other than that, provisioning the Raspberry Pi using Chef was straightforward.


libCEC comes with the tool cec-client. This basically gives you a terminal for CEC commands. When we execute cec-client you see it connecting to HDMI and collecting some information about other devices; then we can give it commands. For example, we ask it for all devices currently connected with the scan command:

thuis-server-tv# cec-client -d 16 -t r
log level set to 16
== using device type 'recording device'
CEC Parser created - libCEC version 3.0.1
no serial port given. trying autodetect: 
 path:     Raspberry Pi
 com port: RPI

opening a connection to the CEC adapter...
DEBUG:   [              94] Broadcast (F): osd name set to 'Broadcast'
DEBUG:   [              96] InitHostCEC - vchiq_initialise succeeded
DEBUG:   [              98] InitHostCEC - vchi_initialise succeeded
DEBUG:   [              99] InitHostCEC - vchi_connect succeeded
DEBUG:   [             100] logical address changed to Free use (e)
DEBUG:   [             102] Open - vc_cec initialised
DEBUG:   [             105] << Broadcast (F) -> TV (0): POLL
// Receiving information from the TV
// ...
// Request information about all connected devices
requesting CEC bus information ...
DEBUG:   [           41440] << Recorder 1 (1) -> Playback 1 (4): POLL
DEBUG:   [           41472] >> POLL sent
DEBUG:   [           41473] Playback 1 (4): device status changed into 'present'
// ...
CEC bus information
device #0: TV
active source: no
vendor:        Sony
osd string:    TV
CEC version:   1.4
power status:  on
language:      dut

device #1: Recorder 1
active source: no
vendor:        Pulse Eight
osd string:    CECTester
CEC version:   1.4
power status:  on
language:      eng

device #4: Playback 1
active source: yes
vendor:        Unknown
osd string:    Apple TV
CEC version:   1.4
power status:  on
language:      ???

device #5: Audio
active source: no
vendor:        Denon
osd string:    AVR-X2000
CEC version:   1.4
power status:  on
language:      ???

currently active source: Playback 1 (4)

// indicates a comment added by me, // ... indicates output that was hidden as it's not needed for understanding


As you can see currently 4 devices are connected to the bus, including the Raspberry Pi itself (device #1). The Apple TV is the currently active source. You can tell cec-client which output it should give with the -d parameter. We'll use this for our integration by choosing -d 8, which just displays the traffic on the bus.
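If you want to use the scan result programmatically rather than listening to the raw bus traffic, the block format above is easy to parse. A hypothetical Python sketch (not part of Thuis, which uses the CEC-CDI wrapper instead):

```python
def parse_scan(output):
    """Parse 'device #N: Name' blocks from cec-client scan output into dicts."""
    devices = []
    current = None
    for line in output.splitlines():
        line = line.strip()
        if line.startswith("device #"):
            num, _, name = line[len("device #"):].partition(": ")
            current = {"id": int(num), "name": name}
            devices.append(current)
        elif current is not None and ":" in line:
            key, _, value = line.partition(":")
            current[key.strip()] = value.strip()
        elif not line:
            current = None  # a blank line ends the current device block

    return devices

sample = """device #0: TV
active source: no
vendor:        Sony

device #4: Playback 1
active source: yes
vendor:        Unknown"""

devices = parse_scan(sample)
print([d["name"] for d in devices])                                  # ['TV', 'Playback 1']
print([d["name"] for d in devices if d["active source"] == "yes"])   # ['Playback 1']
```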



To integrate libCEC (or more specifically cec-client) with Java we have to write a wrapper around it. We'll do that in a similar way as MQTT-CDI, so the Java code can observe events happening on the CEC-bus via a CDI observer. I wrote the initial version about a year ago and the full source code is available on my GitHub as Edubits/cec-cdi. It does not support the full CEC protocol yet, but most of the usual commands are available. For example you're able to turn on and off your devices, and send UI commands like play, pause, volume up, etc. You can of course also monitor these same functions, so the app will for example know when you turn off the TV manually.


You can add CEC-CDI to your own project by adding the following dependency to your pom.xml:



Monitoring what happens in the home theatre system can be done using CDI observers. Currently you can just add a qualifier for the source device; later I might also add more sophisticated qualifiers, such as the type of command. When you're interested in all messages sent from the TV, you can observe them like this:

public class CecObserverBean {
    public void tvMessage(@Observes @CecSource(TV) Message message) {
        System.out.println("Message received from TV: " + message);
    }
}


To turn the TV on you can send it the IMAGE_VIEW_ON message without any arguments, for putting it in standby you use the STANDBY command. In Java this looks as follows:

public class SendExample {
    private CecConnection connection;

    public void send() {
        // Send message from RECORDER1 (by default the device running this code) to the TV to turn on
        connection.sendMessage(new Message(RECORDER1, TV, IMAGE_VIEW_ON, Collections.emptyList(), ""));

        // Send message from RECORDER1 (by default the device running this code) to the TV to turn off
        connection.sendMessage(new Message(RECORDER1, TV, STANDBY, Collections.emptyList(), ""));
    }
}



Just like the Core application described in [Pi IoT] Thuis #8: Core v2: A Java EE application, this will be a Java EE application running on WildFly. It includes CEC-CDI. The application itself is quite simple, as its only function is bridging between CEC and MQTT. So we have two @ApplicationScoped beans observing events.


The CecObserverBean forwards specific messages from the CEC bus to MQTT. In the example it monitors the power state of the television. Note that my Sony television has its own dialect as well: depending on how the TV is turned off, it reports the official STANDBY command or gives a vendor-specific command. When turning on it's supposed to report a certain command as well, but the Sony decides to skip it. That's why, as a workaround, I listen to REPORT_PHYSICAL_ADDRESS, a command it always gives during power on.


public class CecObserverBean {
    MqttService mqttService;

    public void tvMessage(@Observes @CecSource(TV) Message message) {
        if (message.getDestination() != BROADCAST && message.getDestination() != RECORDER1) {
            return; // ignore messages not addressed to us or to everyone
        }
        switch (message.getOperator()) {
            case STANDBY:
                mqttService.publishMessage("Thuis/device/living/homeTheater/tv", "off");
                break;
            case REPORT_PHYSICAL_ADDRESS:
                // The Sony skips the official power-on report, but always
                // reports its physical address during power on
                mqttService.publishMessage("Thuis/device/living/homeTheater/tv", "on");
                break;
            case VENDOR_COMMAND_WITH_ID:
                if (message.getRawMessage().equals("0f:a0:08:00:46:00:09:00:01")
                        || message.getRawMessage().equals("0f:87:08:00:46")) {
                    mqttService.publishMessage("Thuis/device/living/homeTheater/tv", "off");
                }
                break;
        }
    }
}


The opposite happens in the MqttObserverBean, which listens to MQTT messages and executes the corresponding CEC commands. Here we'll turn the TV on and off and then ask the TV to report its power status back:


public class MqttObserverBean {
    private CecConnection connection;

    public void onActionMessageTV(@Observes @MqttTopic("Thuis/device/living/homeTheater/tv/set") MqttMessage message) {
        switch (message.asText()) {
            case "on":
                connection.sendMessage(new Message(RECORDER1, TV, IMAGE_VIEW_ON, Collections.emptyList(), ""));
                break;
            case "off":
                connection.sendMessage(new Message(RECORDER1, TV, STANDBY, Collections.emptyList(), ""));
                break;
        }
        // Ask the TV to report its power status back
        connection.sendMessage(new Message(RECORDER1, TV, REPORT_POWER_STATUS, Collections.emptyList(), ""));
    }
}


This concludes our implementation of the TV node. It's now able to listen to other CEC-enabled devices, communicate with them and bridge this through MQTT messages. In part 2 we'll take these MQTT messages, wrap them and create some scenes to turn everything on with a single button!

After quite a bit of hard work during the last days, the project has reached its end. Of course many improvements can be made, and other features can be added, but that is for later, after the challenge. I had planned to add humidity, pressure and temperature measurements with the SenseHat, but unfortunately the SenseHat, Wi-Pi and PiFace Digital were missing from my kit. In this post I will briefly explain the Python code, which does most of the work. I will finish with a number of example images.


Previous posts:

[Pi IoT] Plant Health Camera #10 - connecting the Master and Slave Pi

[Pi IoT] Plant Health Camera #9 - calculating BNDVI and GNDVI

[Pi IoT] Plant Health Camera #8 - Aligning the images

[Pi IoT] Plant Health Camera #7 - Synchronizing the cameras

[Pi IoT] Plant Health Camera #6 - Putting the slave Pi to work

[Pi IoT] Plant Health Camera #5 - OpenCV

[Pi IoT] Plant Health Camera #4 - Putting the parts together

[Pi IoT] Plant Health Camera #3 - First steps

[Pi IoT] Plant Health Camera #2 - Unboxing

[Pi IoT] Plant Health Camera #1 - Application


Software to capture the NDVI image

Below is the source code. I added comments for each step, so the code is self-explanatory. After some initializations an endless loop is started:

while True:

in which first a live image is shown until a key is pressed. There are six options:

  • q: Quit
  • c: Show Color Image
  • o: Show NoIR Image
  • n: Show NDVI Image
  • g: Show GNDVI Image
  • b: Show BNDVI Image

After pressing q the program terminates; after pressing any other key, an image is captured from the camera and a trigger is sent to the slave so that it also captures an image, see [Pi IoT] Plant Health Camera #7 - Synchronizing the cameras for details.

Then also this image is loaded from the share which was mounted from the slave Pi (details can be found in [Pi IoT] Plant Health Camera #6 - Putting the slave Pi to work). Then the images of the two cameras are aligned, as described in [Pi IoT] Plant Health Camera #8 - Aligning the images. I tested the options TRANSLATION, AFFINE and HOMOGRAPHY, by commenting out the specific setting. After the images are aligned, the NDVI, GNDVI and BNDVI are calculated, and depending on which key was pressed, one of them is displayed. After a key is pressed, or after ten seconds, all images (noir, color, ndvi, gndvi and bndvi) are saved, with a timestamp in the filename.


# import the necessary packages
from picamera.array import PiRGBArray
from picamera import PiCamera
import RPi.GPIO as GPIO
import time
import numpy
import readchar
import datetime
import cv2

# initialize the camera and grab a reference to the raw camera capture
camera = PiCamera()
camera.ISO = 100
camera.resolution = (800, 480)
rawCapture = PiRGBArray(camera)

# Define the motion model
warp_mode = cv2.MOTION_TRANSLATION
#warp_mode = cv2.MOTION_AFFINE
#warp_mode = cv2.MOTION_HOMOGRAPHY
# Define 2x3 or 3x3 matrices and initialize the matrix to identity
if warp_mode == cv2.MOTION_HOMOGRAPHY : 
  warp_matrix = numpy.eye(3, 3, dtype=numpy.float32)
else :
  warp_matrix = numpy.eye(2, 3, dtype=numpy.float32)
# Specify the number of iterations.
number_of_iterations = 5000;
# Specify the threshold of the increment
# in the correlation coefficient between two iterations 
termination_eps = 1e-10;
# Define termination criteria
criteria = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, number_of_iterations, termination_eps)

# allow the camera to warmup
time.sleep(0.1)

# GPIO Setup
GPIO.setmode(GPIO.BCM)
GPIO.setup(18, GPIO.OUT)
GPIO.output(18, 0)

while True:
  print(" q: Quit")
  print(" c: Show Color Image")
  print(" o: Show NoIR Image")
  print(" n: Show NDVI Image")
  print(" g: Show GNDVI Image")
  print(" b: Show BNDVI Image")

  c = readchar.readchar()

  if c=='q':
    break

  # grab an image from the camera
  rawCapture.truncate(0)    # clear the stream before reusing it
  camera.capture(rawCapture, format="bgr")
  noir_image = rawCapture.array

  # trigger camera on slave and load
  GPIO.output(18, 1)
  GPIO.output(18, 0)
  color_image = cv2.imread('pi1iot_share/slave_image.jpg',cv2.IMREAD_COLOR)

  # extract nir, red green and blue channel
  nir_channel = noir_image[:,:,0]/256.0
  green_channel = noir_image[:,:,1]/256.0
  blue_channel = noir_image[:,:,2]/256.0
  red_channel = color_image[:,:,0]/256.0

  # align the images
  # Run the ECC algorithm. The results are stored in warp_matrix.
  # Find size of image1
  sz = color_image.shape
  (cc, warp_matrix) = cv2.findTransformECC (color_image[:,:,1],noir_image[:,:,1],warp_matrix, warp_mode, criteria)
  if warp_mode == cv2.MOTION_HOMOGRAPHY :
    # Use warpPerspective for Homography
    nir_aligned = cv2.warpPerspective (nir_channel, warp_matrix, (sz[1],sz[0]), flags=cv2.INTER_LINEAR + cv2.WARP_INVERSE_MAP)
  else :
    # Use warpAffine for Translation, Euclidean and Affine
    nir_aligned = cv2.warpAffine(nir_channel, warp_matrix, (sz[1],sz[0]), flags=cv2.INTER_LINEAR + cv2.WARP_INVERSE_MAP)

  # calculate ndvi
  ndvi_image = (nir_aligned - red_channel)/(nir_aligned + red_channel)
  ndvi_image = (ndvi_image+1)/2
  ndvi_image = cv2.convertScaleAbs(ndvi_image*255)
  ndvi_image = cv2.applyColorMap(ndvi_image, cv2.COLORMAP_JET)

  # calculate gndvi_image
  gndvi_image = (nir_channel - green_channel)/(nir_channel + green_channel)
  gndvi_image = (gndvi_image+1)/2
  gndvi_image = cv2.convertScaleAbs(gndvi_image*255)
  gndvi_image = cv2.applyColorMap(gndvi_image, cv2.COLORMAP_JET)

  # calculate bndvi_image
  bndvi_image = (nir_channel - blue_channel)/(nir_channel + blue_channel)
  bndvi_image = (bndvi_image+1)/2
  bndvi_image = cv2.convertScaleAbs(bndvi_image*255)
  bndvi_image = cv2.applyColorMap(bndvi_image, cv2.COLORMAP_JET)

  # display the image based on key pressed on screen
  if c == 'o':
    cv2.imshow("Image", noir_image)
  elif c == 'c':
    cv2.imshow("Image", color_image)
  elif c == 'n':
    cv2.imshow("Image", ndvi_image)
  elif c == 'b':
    cv2.imshow("Image", bndvi_image)
  elif c == 'g':
    cv2.imshow("Image", gndvi_image)

  # wait at most 10 seconds for a keypress
  cv2.waitKey(10000)

  # cleanup
  cv2.destroyAllWindows()

  # get current date and time to add to the filenames
  d = datetime.datetime.now()
  datestr = d.strftime("%Y%m%d%H%M%S")

  # save all images
  cv2.imwrite("./images/" + datestr + "_noir.jpg",noir_image)
  cv2.imwrite("./images/" + datestr + "_color.jpg",color_image)
  cv2.imwrite("./images/" + datestr + "_ndvi.jpg",ndvi_image)
  cv2.imwrite("./images/" + datestr + "_gndvi.jpg",gndvi_image)
  cv2.imwrite("./images/" + datestr + "_bndvi.jpg",bndvi_image)
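The per-pixel index math above is easy to verify in isolation. A NumPy-only sketch of the NDVI computation and the rescaling to the 0..1 range that is applied before the colormap (the reflectance values are illustrative):

```python
import numpy

# reflectance of a healthy leaf: high NIR, low red
nir = numpy.array([[0.75]])
red = numpy.array([[0.25]])

# NDVI ranges from -1 to 1
ndvi = (nir - red) / (nir + red)
print(ndvi[0, 0])    # 0.5

# ... and is rescaled to 0..1 before cv2.applyColorMap is used
scaled = (ndvi + 1) / 2
print(scaled[0, 0])  # 0.75
```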



The proof of the pudding

The proof of the pudding is in the eating, so here are the images you have been waiting for.

Here is a video of the setup. In front of the camera are a hydrangea plant and two roses.



This results in the following images:

The color image.


The NoIR image, which is the NoIR camera with the infra-blue filter attached. Note the different perspective on this small distance.


The two images are aligned using the HOMOGRAPHY algorithm, which you can clearly see in the blue border below.

A drawback of HOMOGRAPHY is that it is quite time consuming. In this case, the ECC algorithm took almost 15 minutes.

The NDVI image clearly shows healthy plant parts in red, while other stuff is in the blue-green range.

Note that the roses look very unhealthy! This is correct, because they are fake.


The BNDVI and GNDVI don't look very promising; I will investigate this later.



I also took my camera outside, powered by a USB power bank.




With the following results:



Here I used TRANSLATION for the alignment, which works pretty well for objects at a larger distance from the camera. It is also much faster: less than 30 s computation time in this case.


This more or less finalizes my project. I will try to write a summary blog on Monday, but I'm not sure I will have time for that.

I hope you enjoyed my posts and that they inspired you to use the Pi for agricultural applications and plant phenotyping.

Feel free to comment or ask questions, I will try to answer them all.

This update describes the Android application created to manage the acquisition of points for the competition. At this moment, the competition only includes running or walking the maximum number of km by the end of each month. Also, since we cannot trust the participants' good will (or, since we know how witty they can be), we will prevent two basic ways of cheating:

    1. Driving - an easy, fast and effortless way of increasing the status
    2. Shaking the phone - the resident does not move but still increases the total distance

So, the distance tracking will involve both GPS location and the phone's acceleration data.


Furthermore, the application will have to retain the information after being closed. Consequently, we will be using a local database in the phone: SQLite.


This update covers development on the User's Node, a Nexus 5 phone.


Main application

Initial setup: Nexus 5 - Android 6.0.1

Full code on GitHub - SmartApp



When launching, the user will be able to select one of our two activities:

  • SmartHome - currently disabled until the previous version is updated
  • CompetitionActivity - to be used when the person wants to improve their status in the competition. It will start recording and update the central node




Competition System Activity - Version 1 (ONLY INDIVIDUAL TRACKING)



The GUI is constantly updated to show:

  1. Current distance
  2. Today's distance
  3. Monthly distance


Additionally, this information is stored in a local database in the phone.


PODIUM (Not enabled yet) > requests the other residents' information from the server to see the current state of the competition


How to track the phone distance


GPS Location

To access GPS Location we use the Android Location library (android.location).


More in detail, the app uses the following classes from it:

  • Location - holds the GPS information (such as latitude and longitude) and offers some methods to operate on it
  • LocationListener - offers callbacks when the location, status or accuracy has changed
  • LocationManager - manages the Location Service


The app needs to obtain the current location, compare it with the previous one and extract the distance.

This distance will then be updated in the main GUI and included in monthly and daily calculations.
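Android's Location.distanceTo() does the geodesic math for us. For illustration only, a rough haversine equivalent in Python (a sketch; the app itself relies on the Android API):

```python
from math import radians, sin, cos, asin, sqrt

def distance_m(lat1, lon1, lat2, lon2):
    """Approximate great-circle distance between two GPS fixes, in meters."""
    R = 6371000.0  # mean Earth radius in meters
    dlat = radians(lat2 - lat1)
    dlon = radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * R * asin(sqrt(a))

# Two fixes 0.001 degrees of latitude apart: roughly 111 m along a meridian
d = distance_m(40.000, -3.700, 40.001, -3.700)
print(round(d))  # ~111
```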



For certain sensitive operations, an Android app has to request permission (so that the user can decide whether to grant it or not). Permissions are declared in the app's Manifest and, from API 23 on, also requested at run time.


We include the COARSE and FINE LOCATION permission in our Android Manifest:

<!-- GPS -->

<uses-permission android:name="android.permission.ACCESS_FINE_LOCATION" />

<uses-permission android:name="android.permission.ACCESS_COARSE_LOCATION"/>


NOTE - For API 23 or higher: the permission request needs to be done at run time. So, I include the following lines in our CompetitionActivity (inside the onCreate() method).

** It also includes the request for external storage, needed for the local database

// ****** REQUESTS *****
//Location and Write/Read external storage

//Check permissions
if (!canAccessLocation() || !canAccessMemory()) {

     requestPermissions(INITIAL_PERMS, INITIAL_REQUEST);

     Toast.makeText(ctx, "Request Permissions", Toast.LENGTH_LONG).show();

} else {
     //Start tracking service

     if (ContextCompat.checkSelfPermission(this, android.Manifest.permission.ACCESS_COARSE_LOCATION) == PackageManager.PERMISSION_GRANTED) {
          System.out.println("Start tracking");

          mLocationManager = (LocationManager) getSystemService(LOCATION_SERVICE);

          mLocationManager.requestLocationUpdates(LocationManager.GPS_PROVIDER, LOCATION_REFRESH_TIME, LOCATION_REFRESH_DISTANCE, mLocationListener);
     }
     //Do nothing otherwise: the permission dialog is already showing
}
The code first requests the permission. Afterwards, it checks whether it was granted (if not, it requests again) and, if so, starts the GPS tracking activity.


The code:

We have to create an instance of LocationManager and attach a self-defined LocationListener (we did so when checking and requesting the permissions):

          mLocationManager.requestLocationUpdates(LocationManager.GPS_PROVIDER, LOCATION_REFRESH_TIME, LOCATION_REFRESH_DISTANCE, mLocationListener);


The mLocationListener itself implements the methods:

  • onLocationChanged - this is the one used to obtain the distance
  • onStatusChanged - not used for now
  • onProviderEnabled - not used for now
  • onProviderDisabled - not used for now


With onLocationChanged, we compare the newly obtained location with the previous one and compute the distance between them:


public void onLocationChanged(final Location location) {
     //Check if it is the first updated location
     if (previousLocation == null) {
          previousLocation = location;
     } else {
          float distance = previousLocation.distanceTo(location);
          previousLocation = location;
          //Update the GUI with the new distance (*)
     }
}

(*) The GUI is updated via a method of CompetitionTrackActivity that receives the new distance


Problem with this approach: cars

It is very easy to gain a lot of km while driving or commuting by train if we just implement this approach.


A direct solution, using the same library, is calculating the speed too (we have the distance and we can timestamp each location, so V = distance / time_variation). If the speed is above a threshold (say 20 km/h, as they do in Pokemon Go), we discard the obtained distance.
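The speed check is straightforward. A hypothetical Python sketch of the filter (the threshold and names are mine, for illustration):

```python
SPEED_THRESHOLD_KMH = 20.0  # above this we assume a vehicle

def accepted_distance(distance_m, seconds):
    """Return the distance to credit, discarding it if the speed is implausible."""
    speed_kmh = (distance_m / seconds) * 3.6  # m/s to km/h
    return distance_m if speed_kmh <= SPEED_THRESHOLD_KMH else 0.0

print(accepted_distance(50.0, 30.0))   # 50.0 -> 6 km/h, walking
print(accepted_distance(500.0, 30.0))  # 0.0  -> 60 km/h, probably a car
```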


This app, however, will combine the GPS solution with a step counter (to make sure the person is moving).


!!! GPS Location will probably drain a lot of battery, so I will keep an eye on it


Step counter from acceleration data

To count the steps we make use of the Android hardware sensor library (we will use accelerometer data).


With this approach, we count the number of steps. Since the system always gives us the total step number, the StepCounter stores the initial step number and obtains the steps walked by subtracting it from the absolute total.
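Since the sensor reports a monotonically increasing total, the session count is just a subtraction. A minimal Python model of that bookkeeping (the class name and method are mine, for illustration):

```python
class StepCounter:
    """Tracks steps walked in a session from an absolute, ever-growing total."""

    def __init__(self):
        self.initial = None

    def update(self, absolute_total):
        if self.initial is None:        # first reading: remember the baseline
            self.initial = absolute_total
        return absolute_total - self.initial

counter = StepCounter()
print(counter.update(1500))  # 0  -> baseline reading
print(counter.update(1542))  # 42 -> steps walked since the session started
```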


Again, we implement a listener that tells the app when the step number has been updated.


The code:

CountSteps implements SensorEventListener, with the methods:

  • onSensorChanged() - when new step is recorded
  • onAccuracyChanged() - not used


To obtain the number of steps walked, the app has the following code:

public void onSensorChanged(SensorEvent event) {
   if (activityRunning) {
      if (!started) {
         initial_count = event.values[0];
         started = true;
      }
      System.out.println(event.values[0] + " vs " + initial_count);
      count = event.values[0] - initial_count;
   }
}





Problem with this approach: shaking the phone

As with any other step counter, if you shake the device the number of steps increases. We need another method to determine whether the person is really moving or not (aka GPS location).


This solution - Combination of both

In the final app, we use the GPS LocationListener to determine that the person is moving. Also, we include a min_distance parameter to make sure the person is, in fact, moving and it is not just GPS variation.


However, the distance is not obtained from this new location, but from the step counter! (Again, this may not be a good enough solution: in the future, we can correlate the GPS distance with the distance obtained from the steps themselves to detect any kind of trick.)


The code is in this case:


public void onLocationChanged(final Location location) {
     //Check if it is the first updated location
     if (previousLocation == null) {
          previousLocation = location;
          return;
     }

     float distance = previousLocation.distanceTo(location);
     previousLocation = location;

     //Obtain distance from step counter, not GPS
     float steps = stepCounter.getSteps();
     if (distance > MIN_DISTANCE_THRESHOLD) {
          Toast.makeText(ctx, "Location steps: " + steps, Toast.LENGTH_LONG).show();
          if (steps > 0) {
               //Update the totals with the distance walked
          }
     } else {
          //Restart counter
          // Steps but no movement??? Cheating....
          initial_steps = initial_steps + steps;
          stepCounter = new CountSteps(ctx, initial_steps);
     }
}




How to retain the information

Local database - SQLite

SQLite library



We include the WRITE and READ EXTERNAL STORAGE permissions in our Android Manifest:

<!-- Write and read -->

<uses-permission android:name="android.permission.READ_EXTERNAL_STORAGE" />
<uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE" />


NOTE - For API 23 or higher: the permission request needs to be done at run time. So, I include the corresponding lines in our CompetitionActivity (inside the onCreate() method)

** This code is shown in the previous section


The code:


First, I developed the functions to constantly store the distance values in a table. This table is named after the user and has columns for the time_stamp, current distance, day distance, month distance and updated (the latter to be used when updating the central node server).



Thanks to this database, the app will retrieve today's and the month's accumulated distance when restarting!
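As a minimal sketch of such a table with Python's sqlite3 (the app itself uses the Android SQLite API; the column names and values here are illustrative):

```python
import sqlite3

# In-memory database for the sketch; the phone uses a file instead
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE user1 (
    time_stamp TEXT,
    current_distance REAL,
    day_distance REAL,
    month_distance REAL,
    updated INTEGER)""")  # updated = 0 until pushed to the central node

conn.execute("INSERT INTO user1 VALUES ('20160820103000', 1.2, 3.4, 40.5, 0)")
conn.execute("INSERT INTO user1 VALUES ('20160820110000', 0.8, 4.2, 41.3, 0)")

# On restart the app retrieves the latest daily and monthly totals
row = conn.execute(
    "SELECT day_distance, month_distance FROM user1 ORDER BY time_stamp DESC LIMIT 1"
).fetchone()
print(row)  # (4.2, 41.3)
```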



The final result

Please see the following video showing the first run of the application.




In this post I explained how to create an Android application to track the traveled distance. This is the main app of the competition system. It will:

  • Record the resident's distance:
    • current distance
    • update the values of day total and month total (since it is a monthly competition!)
  • Store it in a local database

This update should help navigate the next posts: it's been a long period without any news and I will be quite active for the next few days... I hope the final result does not look too confusing ^^u


The innovative part of this project is the competition system: we want to engage the residents of the house in a competitive environment to promote a healthier way of life. It can later be expanded with more fun types of activities. For now, the only challenge presented to the roommates is the number of km walked/run/biked during a month. This information will be gathered thanks to a mobile phone application and sent to the smart house central node.


In the end, the smart house main GUI will have the regular smart house information plus current status of the competition.


Main components


The following image shows the basic structure of the system (with only one user included):


NOTE: The house wifi router will be performing the corresponding port forwarding to the Central Node and its competition port.


User's tracking: Android application

Initial setup: Nexus 5 - Android 6.0.1



We will update the original User's node, so that it hosts:

  • Competition activity - implements a distance tracker and shows the user how many km they have walked in the current session. It also holds the totals for the day and the month (as the competition will be held MONTHLY). It has to send this information to the central node, so that it can be compared with the other residents of the house.
  • Smart home activity - implements the MQTT subscriber client (shown in Smart Competition Home #4) to show the smart house data only when the phone is connected to the house's WiFi.



Competition System Activity

It will be divided into three main functions:

  • Track the distance, using both GPS location and accelerometer data
  • Manage daily and monthly totals (to be stored in a local SQLite database)
  • Send the information to the central node

Smart Home Activity

I will be modifying the original application. It is a basic Android app that connects to the MQTT broker and displays the smart house values upon request.

First, this app's functionality will be included in the Smart Home Activity, to be enhanced later on with:

  1. Select the broker IP and Connect/Disconnect options
  2. Real time update of the smart home values
  3. Phone buzzing when there is an alarm




Competition management: Central node

Initial setup: Raspberry Pi 3 - Raspbian OS (Jessie) / SSH enabled / Mosquitto MQTT broker installed / MQTT subscriber client / Console interface / Python GTK interface / MySQL server / Apache2 web server


In the central node, I will have to implement the Competition Service. This service will manage the incoming packages from each roommate (containing the distance update) and store the update in the MySQL database.


The main Python scripts (managing the MQTT_client_subscriber and Main GUI) will include functions to read the competition values from the database and update the Interface accordingly.
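To make the idea concrete, here is a minimal sketch of what the Competition Service could do with each incoming update. It uses sqlite3 as a stand-in for the MySQL database and a plain function in place of the paho-mqtt on_message callback; the "user:km" payload format is an assumption for illustration, not the project's actual protocol:

```python
import sqlite3

# Stand-in for the MySQL competition table (schema is an assumption)
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE competition (user TEXT PRIMARY KEY, km_total REAL)")

def on_distance_message(payload):
    """Handle one incoming distance update, e.g. 'alice:2.5' (user:km)."""
    user, km = payload.split(":")
    # create the user's row on first contact, then accumulate the distance
    db.execute("INSERT OR IGNORE INTO competition VALUES (?, 0)", (user,))
    db.execute("UPDATE competition SET km_total = km_total + ? WHERE user = ?",
               (float(km), user))
    db.commit()

def ranking():
    """Return (user, km_total) rows ordered for display in the main GUI."""
    return db.execute(
        "SELECT user, km_total FROM competition ORDER BY km_total DESC"
    ).fetchall()
```

The main GUI script would then call something like ranking() on each refresh to redraw the leaderboard.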



Developing the competition system will require:

  • USER'S NODE - SMART COMPETITION HOME APP: an Android application implementing:
    • Distance tracking - record the km/distance
    • Server communication - send the information to the central node
  • Add new functionalities to the CENTRAL NODE (Raspberry Pi 3):
    • Competition server - receive the km/distance from the Android application and store the values
    • Competition database - organize values from different users
    • Main GUI update from database - display these values in the main GUI of the smart house
  • Integrate the User's node in the same SMART COMPETITION HOME APP, so that:
    • If the phone is connected to the home WiFi, it can also read the smart house data
    • If the phone is not connected to that WiFi, it should indicate so

This post will cover the doors and windows notifier.


Previous posts:

Pi IoT - Simone - Introduction

Pi IoT - Simone - #1 - Light Controller

Pi IoT - Simone - #2 - Main System


For this part of the project I initially wanted to use the EnOcean hall sensor, since it seemed the easiest and most non-intrusive way to do it. Unfortunately, the sensor is on a different frequency, so it does not connect to the receiver. Instead, I used the I2C connection protocol from the light controller.


The system has three components:

1. The hall sensor with the ATTiny controller that feeds the data through I2C to the main system.

2. The main system that receives the data and determines whether the door is open or closed

3. A magnet that will be glued on the door.



The sensor component should be positioned on the door frame in such a way that the magnet on the door gets close to it when the door is closed. (You can place it inside the door frame if you carve a hole in it; because it is a magnetic sensor, you can cover it up afterwards so it is not visible.)


For this I used an ATtiny84-SSU, and as a hall sensor an SS495A from Honeywell. It is not a very precise sensor, but it serves its purpose for this project; it can even distinguish tiny movements of a magnet that is close to it.




The controller for the sensor should only read the data and send it to the main server on the Raspberry Pi when requested. The calculation behind this is done on the Raspberry Pi.

To calibrate the system, logic can be added to the application that requires you to close the door, so it can record the value it receives when the magnet is positioned closest to the sensor. The application will then take that value as the default "closed" value, and any other input will be considered "open".
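That calibrate-then-compare logic could be sketched as below; the class name and the tolerance value are illustrative (not from the actual project) and would need tuning per door and magnet:

```python
class DoorSensor:
    """Decide open/closed from raw hall-sensor readings (0-1023 ADC range)."""

    def __init__(self, tolerance=20):
        # tolerance is a guess; tune it per installation
        self.tolerance = tolerance
        self.closed_value = None

    def calibrate(self, reading):
        """Call once with the door closed: store the 'closed' baseline."""
        self.closed_value = reading

    def is_open(self, reading):
        """Anything sufficiently far from the baseline counts as open."""
        if self.closed_value is None:
            raise RuntimeError("calibrate() with the door closed first")
        return abs(reading - self.closed_value) > self.tolerance
```
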



The magnet should be positioned on the door so that, when the door is closed, the magnetic field reaches the hall sensor, which then reports the change to the main system.




As I said, I initially wanted to use the EnOcean sensor for this. As described in the main system post, there can be more than one Raspberry Pi: one of them would handle the connections to the EnOcean devices, then connect to the main system as a client and feed data to it. This would have solved some issues, the biggest one being that no extra wires would have been needed.



The code for the microcontroller:

//#define _DEBUG

#ifdef _DEBUG
#include <Wire.h>
#else
#include "TinyWireS.h"
#endif

#include "I2CTypes.h"

#define SLAVE_ADDRESS 0x10

const int analogInPin = A0;

static const String c_identification = "2#10#1#Usa Balcon#0";

int m_freeMemory = 0;

uint8_t m_i2cMessageData[32];
int m_i2cMessageDataLength;
int m_messageCounter;

uint8_t m_dataArray[32];

int freeRam() {
  extern int __heap_start, *__brkval;
  int v;
  return ((int)&v - (__brkval == 0 ? (int)&__heap_start : (int)__brkval)) / 4;
}

void setup() {
  pinMode(1, OUTPUT);
  // initialize i2c as slave and define callbacks for i2c communication
#ifdef _DEBUG
  Serial.begin(9600);
  Wire.begin(SLAVE_ADDRESS);
  Wire.onReceive(receiveData);
  Wire.onRequest(sendData);
#else
  TinyWireS.begin(SLAVE_ADDRESS);
  TinyWireS.onReceive(receiveData);
  TinyWireS.onRequest(sendData);
#endif

  m_i2cMessageDataLength = 0;
  m_i2cMessageData[0] = 0xFE;
}

void loop() {
#ifdef _DEBUG
  m_freeMemory = freeRam();
  Serial.println(analogRead(A0) * (5.0 / 1023.0));
  //Serial.print("Memory: ");
  //Serial.println(m_freeMemory);
  delay(500);
#endif
}

// callback for data requests: stream out the prepared message one byte at a time
void sendData() {
  uint8_t byteToWrite = 0x05;
  byteToWrite = m_i2cMessageData[m_messageCounter];
  m_messageCounter++;
  if (m_messageCounter >= m_i2cMessageDataLength) {
    m_messageCounter = 0;
  }
#ifdef _DEBUG
  Wire.write(byteToWrite);
#else
  TinyWireS.send(byteToWrite);
#endif
}

// callback for received data
#ifdef _DEBUG
void receiveData(int byteCount) {
#else
void receiveData(uint8_t byteCount) {
#endif
  uint8_t index = 0;
#ifdef _DEBUG
  while (Wire.available()) {
    m_dataArray[index++] = Wire.read();
  }
#else
  while (TinyWireS.available()) {
    m_dataArray[index++] = TinyWireS.receive();
  }
#endif
  processMessage();
}

void processMessage() {
  m_i2cMessageData[1] = m_dataArray[0];

  switch (m_dataArray[0]) {
  case SW_Ping:
    m_i2cMessageDataLength = 4;
    m_i2cMessageData[2] = 0;
    m_messageCounter = 0;
    break;
  case SW_Identify: {
    int length = c_identification.length();
    m_i2cMessageDataLength = 3 + length;
    for (int i = 0; i < length; i++) {
      m_i2cMessageData[2 + i] = c_identification[i];
    }
    m_messageCounter = 0;
    break;
  }
  case SW_Get: {
    // read the hall sensor and return the raw value
    int sensorValue = analogRead(analogInPin);
    m_i2cMessageDataLength = 3;
    m_i2cMessageData[2] = sensorValue;
    m_messageCounter = 0;
    break;
  }
  case SW_Restart:
  default:
    // acknowledge with an empty response
    m_i2cMessageDataLength = 4;
    m_i2cMessageData[2] = 0;
    m_messageCounter = 0;
    break;
  }
  m_i2cMessageData[m_i2cMessageDataLength - 1] = 0xFF;
}


Not all blog posts can be about successful implementations or achievements. Sometimes, failure happens as well. This is the case for my domotics implementation. Does that mean I have given up on getting it to work? Certainly not, but I'm stuck and don't have the luxury of time, so close to the deadline with plenty of other things left to do.


Here's what I did manage to figure out so far ...




As you may or may not know, I moved house during the challenge, at the beginning of July. The new house has a domotics installation by Domestia, a Belgian domotics brand from what I could find.


The installation consists of two relay modules, capable of turning lights and outlets on or off. There are also two dimmer modules for lights. When we started replacing the halogen bulbs with LED ones, we noticed the dimmers no longer worked, and had to replace them with LED-compatible ones.

Next to the electrical wires, the modules have a three-way connector labeled A, B and GND. According to the datasheets, the domotics modules communicate over an RS485 bus.


The wiring is illustrated in the module's manual:

Screen Shot 2016-08-26 at 22.04.15.png


The RS485 bus could be an entry point in reading the lights or outlets' status, and eventually control them.


Here's what it looks like in real life:


The RS485 bus can be accessed via the dimmer's blue, green and orange wires, labeled A, B and GND.




According to this, the pins' functions are the following:

  • A: Data+ (non-inverted)
  • B: Data- (inverted)
  • GND: ground


I started by first connecting my oscilloscope to the bus, verifying there is activity. Probe 1 was connected to line A, probe 2 to line B. This is what I saw:



Three things can be observed/confirmed at a glance:

  • there is a constant flow of data
  • there is a short sequence followed by a long one: request vs response?
  • line B is indeed an inverted version of line A


Knowing there is data present, I could perhaps find a script or piece of software able to decode the data. For that purpose, I bought a generic RS485 to Serial USB module.

IMG_1880.JPGScreen Shot 2016-08-27 at 11.34.17.png


Using a basic serial tool, I was able to dump the raw hexadecimal data. A new observation is that every line starts with the hexadecimal value 0x0C.


With a script I found and modified to suit my needs, I captured the raw data and jumped to a new line every time the "0x0C" value appeared.


#!/usr/bin/env python

# Original script from
# Modified to print full hex sequences per line instead of individual values

import serial
import binascii
import time

ser = serial.Serial()
data = ""

def initSerial():
    global ser
    ser.baudrate = 9600
    ser.port = '/dev/tty.usbserial-A50285BI'
    ser.stopbits = serial.STOPBITS_ONE
    ser.bytesize = 8
    ser.parity = serial.PARITY_NONE
    ser.rtscts = 0
    ser.open()

def main():
    global data
    while True:
        mHex = ser.read()  # read one byte at a time
        if len(mHex) != 0:
            if not binascii.hexlify(bytearray(mHex)).find("0c"):
                # the byte is 0x0c: it marks the start of a new sequence,
                # so print the one collected so far and start over
                print data
                data = binascii.hexlify(bytearray(mHex))
            else:
                data = data + " " + binascii.hexlify(bytearray(mHex))

if __name__ == "__main__":
    initSerial()
    main()


Some of the captured sequences:


0c 08 08 08 08 08 08 08 08 08 08 18 08 a8 08 ff 08 ff
0c 08 08 08 08 08 08 08 08 08 08 18 08 a8 08 ff 08 ff
0c 08 08 08 08 08 08 08 08 08 08 18 08 aa 08 ff 08 fe
0c 08 08 08 08 08 08 08 08 08 08 18 08 a8 08 fe 08 ff
0c 08 08 08 08 08 08 08 08 08 08 18 08 a8 08 ff 08 fe
0c 08 08 08 08 08 08 08 08 08 08 18 08 a8 08 fe 85 ff
0c 08 08 08 08 0a 08 08 0a 08 08 18 08 a8 08 fe 08 ff
0c 08 08 08 08 08 08 08 08 08 08 18 08 a8 08 ff 08 ff
0c 08 08 08 08 08 08 08 08 08 08 18 08 a8 08 ff 08 ff
0c 08 08 08 08 08 08 08 08 08 08 18 08 a8 08 ff 08 ff
0c 08 08 08 08 08 08 08 08 08 08 18 08 a8 08 fe 08 ff
0c 08 08 08 08 08 08 08 08 08 08 18 08 a8 08 ff 85 ff
0c 0a 08 08 08 08 08 08 08 08 08 18 08 a8 08 ff 08 fe
0c 08 08 08 08 08 08 08 08 08 08 18 08 aa 08 fe 85 ff 22 20
0c 08 08 08 08 0a 08 08 08 18 08 a8 08 ff 84 ff
0c 08 0a 08 08 08 08 08 08 08 08 18 08 a8 08 ff 08 ff
0c 08 08 08 08 08 08 08 08 08 08 18 08 a8 08 ff 85 ff 22 20
0c 08 08 08 08 08 08 08 08 18 08 a8 08 ff 08 ff
0c 08 08 08 08 08 08 08 08 08 08 18 08 a8 08 ff 08 fe
0c 08 08 08 08 08 08 08 08 08 08 18 08 a8 08 fe 08 ff
0c 08 08 08 08 08 08 08 08 08 08 18 08 a8 08 ff 08 ff
0c 08 08 08 08 08 08 08 08 08 08 18 08 a8 08 ff 08 fe
0c 08 08 08 08 08 08 08 08 08 08 18 08 a8 08 ff 85 ff
0c 08 08 08 08 08 08 08 08 08 08 18 0a a8 0a fe 08 ff
0c 08 08 08 08 08 08 08 08 08 08 18 08 a8 08 fe 08 ff
0c 08 08 08 08 08 08 08 08 08 08 18 08 a8 0a ff 08 fe
0c 08 08 08 08 08 08 08 08 08 08 1a 08 a8 08 ff 08 ff
0c 08 08 08 08 08 08 08 08 08 08 18 08 a8 08 ff 08 ff
0c 08 08 08 08 08 08 08 08 08 08 18 08 a8 08 ff 85 ff
0c 08 08 08 08 08 08 08 08 08 08 18 08 a8 08 fe 08 fe
0c 08 08 08 08 08 08 08 08 08 08 18 08 a8 08 ff 08 fe
0c 08 08 08 08 08 08 08 08 08 0a 18 08 a8 08 ff 08 ff
0c 08 08 08 08 08 08 08 08 08 08 18 08 a8 08 fe 08 ff 22 20
0c 08 08 08 08 08 08 08 08 18 08 a8 08 ff 08 ff
0c 08 08 08 08 08 0a 08 08 08 08 18 08 a8 08 fe 08 ff
0c 08 08 08 08 08 08 08 08 08 08 18 08 a8 08 ff 08 fe 22 20


There is a very repetitive pattern, with occasionally different values. But what does it do or mean?
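One way to attack a dump like this is to compare every captured frame against the most common one and list the byte positions that ever change, to narrow down which bytes might carry state. A small, hypothetical helper:

```python
from collections import Counter

def changing_positions(frames):
    """Given hex frames as space-separated strings, return the byte positions
    that ever differ from the most common frame."""
    rows = [f.split() for f in frames]
    # take the most frequent frame as the baseline "idle" pattern
    baseline = Counter(tuple(r) for r in rows).most_common(1)[0][0]
    diffs = set()
    for row in rows:
        for i, byte in enumerate(row[: len(baseline)]):
            if byte != baseline[i]:
                diffs.add(i)
    return sorted(diffs)
```

Running this over the captures above while toggling one known light could reveal which position tracks that light's state.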




This is where I got blocked. This is a bit too low-level for me, so any help would be greatly appreciated! Before being able to go any further, I need to be able to make sense of the data. Until then, this feature will be parked. The goal is still to be able to control and monitor the domotics, but sadly it most likely won't be achieved within this challenge.


Now, if you do have knowledge or know about tools which could help me further, feel free to leave a comment below








No time to go out on a Friday night, only a couple of days before the challenge's deadline. Instead, I decided to annoy the neighbours by doing some final milling and sanding ... So, as promised, here's the enclosure for the second control unit. Unlike the alarm clock, this unit makes use of a touch screen and keypad for user input, on top of the voice commands. Because of these components, it is also quite a bit larger than the alarm clock. It will be sitting on the cabinet.


Here's what I've done with it and how I got there ...




This unit was too large to cut solely with the CNC. The board to cut from was so large I couldn't clamp it normally and had to resort to the alternative methods demonstrated below. The CNC was used to mill the slots in the front and top panels, which are just within the maximum supported width of my CNC.

To actually cut the different panels out of the board, I used the classic method: the table saw. Using the router, I manually made the grooves, trimmed the pieces to fit and rounded the edges.




Using some wood glue and clamps, the pieces were attached to each other. This unit required a lot more manual work than the alarm clock, which was clearly faster for some actions, though not always as accurate as the CNC. I suppose accuracy in manual work comes with experience.






Milling acrylic using the CNC required a few attempts before achieving clean results. During the initial runs, the mill's feed rate was too low, causing the acrylic to heat up too much, melt and stick to the milling bit. This in turn, caused damage to the piece because of the molten blob swinging around.


By increasing the feed rate to 1000 mm/min, with passes of 0.7 mm, the mill travelled fast enough to cut without melting, resulting in cleanly cut pieces, as demonstrated below.




Manual Router


To compensate for possible inconsistency issues due to the manual cutting and assembling of this enclosure, the side panels would have to be measured and drawn individually for milling. A much easier and faster approach was to glue a slightly larger, roughly cut piece of acrylic to the sides and use a flush trim router bit.



The flush trim bit has a bearing which follows the shape of the wooden enclosure it is rolling on, while cutting the acrylic to the same shape.


Before and after a manual flush trim:



A bit of sanding will ensure everything is smooth and soft to the touch.




So, after all the sanding, glueing, filling, milling, etc ... I showed it to the wife, and I was allowed to put it on the cabinet


Here's the result:




It's a bit of a pity the touch screen's border is black. I'm thinking I could get some white film to stick on the edges of the display, giving it a white border.


By the way, I feel it looks like a microwave or retro TV. Can anyone confirm or deny this??








This Morning's Bounty of Fowl investments!  The 3 on top are Duck eggs, only 1 was laid today but I wanted to give comparison for size to the Chicken Eggs.  The Element 14 pen is also for size comparison. 


But John, you don't have any chickens laying yet, do you?  Therein lies the story of yesterday...


Yesterday was an interesting day on the IoT Farm. 


The previous night, while I was working my swing shift, my better half shared that she had found a lady giving away 5 laying hens and a coop to go with them.  But it sounded like there was so much interest that the outcome was uncertain.  So I continued working on the Raspberry Pi B, using MotionEyeOS with the NoIR camera and 2 new USB cameras that had just come in.  Everything connected fine up to the point of trying to go from hardwired to the WiPi USB adapter.  So this has my attention, as I continue to troubleshoot, since the WiPi is essential.  Side note: there is a big difference in response between the RPi 3 and the RPi B running MotionEyeOS.  Patience is a must using the B.


After 6 hours of sleep I was up and running again, getting Kiddos ready for school and planning my day for working on the Farm Operations Center assembly.  I have been playing with just the basic setup of the 7" touchscreen with RPi 3 attached to the back but want to come up with an actual container in case I want to move it about the Farm.  So a F.O.C. Box is in the plans!




This is what greeted my wife in the morning through the newly installed sliding Duck Door.  She was very Happy!  Easy Egg Extraction!


Okay, 2 daughters safely at school courtesy of bus and one son delivered to his school, now time for working on the F.O.C. Box!



Walmart has all of their school supplies drastically reduced and I had noticed my kids having some various sized plastic pencil boxes that looked intriguing.




So $1.13 later I have a variety of sizes and colors to play with!  Fun times await!  25 cents and 10 cents per box depending on size, very nice!


Meanwhile my wife had heard back from the lady with 5 laying hens.  Yes they are available for her if we can go pick everything up.  So time to unload the Truck and make sure all straps and accessories are ready to go.


It took a bit but we caught all of the hens, loaded them into a Kennel/Carrier and also loaded the Chicken coop into the back of the Truck.  She even threw in another female rabbit with food for both rabbit and hens.


Due to the size of the Coop the tailgate had to stay down and everything was strapped/secured quite tightly.  It seems we were an interesting sight as we picked up our son at the school.  It isn't everyday someone pulls up with a Chicken Coop in the back of their truck, complete with live Chickens.  Interestingly enough, our son was NOT surprised.  :-)  He just hopped in and started talking to the rabbit, who was in a little carrier by his seat.  Doesn't every family collect farm animals like Old McDonald?


Arriving back at the IoT Farm everyone was quite interested in our new additions.


GoatWheelbarrow01.jpg GoatWheelbarrow02.jpg


Even the G.O.A.Ts were interested in helping in their own special way.



I even got some supplies to play with: she gave me 10 chicken nipples that I am going to use with PVC piping to run water to the animals!  There you can see the F.O.C. in its base form, ready to be placed into a box.




Here are the new Ladies being introduced to the full sized Chicken Casa. 



Here is the new Coop in place.  A quick check of it and we want to add some sturdier latches and start some serious weather proofing.  And yes, another sliding door has now been added to the to-build list.  :-)




While we headed out for my son's special tutoring that evening a little rain storm rolled through.  Here are the new Ladies checking everything out after the rain.  By the time we managed to get all of the various animals locked down for the evening I had a couple of inches of wet clay on my running shoes.  Note to self: buy some waterproof mud boots for future weather conditions.


We had been concerned that with the new move and new location the birds would all be quite upset and we may have some chaos for a bit on the IoT Farm but as you can see from the egg picture they have all managed to settle in.  Those 5 Ladies provided us 4 eggs and apparently challenged the other birds since 1 of them laid our first chicken egg from our original chickens!  Very cool! 




And here is a picture of our Vane Chicken roosting out on the fence as I finished up outside for the evening.

Messaging in HangarControl

I have a lot of detail to cover in this episode, so I am taking a break from the podcast and going "old school". It seems that everyone has been jumping on the MQTT, or more affectionately "mosquitto", bandwagon. I decided to go a different route and will be using a communication system called xPL to handle machine communication in HangarControl. The xPL Project has been around since 2003 and was an early entry in the home automation space. Read more at their website, The xPL Project, or just follow along with me.


What is xPL?

xPL is an open protocol intended to permit the control and monitoring of home automation devices. The primary design goal of xPL is to provide a rich set of features and functionality, whilst maintaining an elegant, uncomplicated message structure. The protocol includes complete discovery and auto-configuration capabilities which support a fully “plug-n-play” architecture – essential to ensure a good end-user experience. xPL benefits from a strongly-specified message structure, required to ensure that xPL-enabled devices from different vendors are able to communicate without the risk of incompatibilities.


Minimal Configuration

Auto-discovery and configuration were capabilities that I wanted to use in my implementation. Because the HangarControl environment would span a number of locations across an airport, and the aircraft and their hangars are transient in nature (well, at least their use is transient), I wanted a system that would allow devices to "come and go". I did not want to have to provide pre-configured equipment for each airplane or hangar.


Client-Server or Server-Client?

Client and Server is an arbitrary distinction: clients can be servers and servers can be clients. Basically, any xPL device can send messages, receive them, or both.  All xPL messages are broadcast via UDP, so any client or server can receive any message (promiscuous) or just the ones they should respond to.


xPL Message Structure

You'll see many commonalities between xPL and MQTT. Let's take a look at an xPL message.


source

xPL device sending the message.  Always fully filled in.  Format is vendor-device.instance (see notes on device idents later).


target

xPL device that should receive the message, OR *.  With *, all receivers can receive it; this is most often used with event triggers and status messages (commands tend to be directed to a specific target). Think "broadcast".


message type

xPL messages are either commands (xpl-cmnd), status (xpl-stat) or triggers (xpl-trig).  Command messages generally tell someone to do something.  Status messages indicate status, usually sent because someone asked for the current status.  Trigger messages are generally sent because something happened (an event).  Trigger and status are often similar, but status indicates current state (possibly for hours/days), while a trigger means something just happened.


schema

Schema (which is a class.type) is used to indicate what sort of contents the message has. In short, it's how you know what to expect to see/send in the body of the message. The class is a general class of messages and the type is a specific type within that class.  It's all pretty arbitrary, but each combination of schema class/type should uniquely describe the contents of the message body.

message body

This is the main payload.  It consists of a series of name/value pairs.  Which named values you see in the body is specified by the message's schema class.type.  Message body names are up to 16 characters long, and both name and value have to be ASCII text (no control characters).


A given xPL device may support multiple schema class/types, or it may just support a particular schema class.  You can't make assumptions about what an xPL device supports.  Device IDs are just 'endpoints', not indicators of capabilities.  Capabilities are only defined by schema class/type.


xPL device idents have 3 parts: a vendor code, a device code and an instance code.  The instance is the only part that a particular device can configure; vendor and device are "hardcoded".  Instance IDs must be unique (they are what make the 3 parts a unique identifier on the network).  For an unconfigured device, a random instance ID is created at first; later on, it can be changed to something more meaningful.  Nothing enforces device uniqueness, so you have to make sure you don't muck things up with duplicates.


All xPL devices send a heartbeat periodically (usually when starting up and every 5 minutes). This is done automatically by the xPL code.  You can listen for broadcasts of the schema type '' to track new devices, but generally, the device should announce itself when starting with something more specific.  Ideally, you track each device you care about and reset a timer whenever you see any message, including a hbeat message.  If you don't hear from the device in 10+ minutes, you'd generally consider it 'dead'.  That much tracking is not really needed for this project, but available if you want it.
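The heartbeat bookkeeping described above could be sketched as a small tracking class; the names are illustrative, and only the 10-minute "dead" figure comes from the text:

```python
DEAD_AFTER = 10 * 60  # seconds without any message before a device is considered dead

class DeviceTracker:
    """Track the last-seen time per xPL device ident; any message resets the timer."""

    def __init__(self):
        self.last_seen = {}

    def saw_message(self, source, now):
        # source is the vendor-device.instance ident, e.g. "rsh-heater.59"
        self.last_seen[source] = now

    def dead_devices(self, now):
        """Devices we haven't heard from (heartbeat or otherwise) in too long."""
        return [d for d, t in self.last_seen.items() if now - t > DEAD_AFTER]
```
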


In the case of HangarControl, it's all pretty simple, but a given node or hangar can potentially send and receive messages from several different schemas if the device supports it. Think of schema class as a capability and type indicating a function within that capability.


xPL in HangarControl

In HangarControl, we are going to use a schema class of 'heater'. I hope it makes sense, since the driving purpose is managing engine preheaters during cold weather operations. The commands will be of type 'basic' (the most common type, usually describing commands).  Status updates from the devices use a type of 'report'. So even though "schema", "class", and "type" all sound confusing, a real-world implementation makes quick sense of the terminology.


Example Message Structure


Schema heater.basic (generally sent to heater clients)
  Message type: command
  Message body:  request=start|stop|restart|status

Schema heater.report (generally sent by clients when state or time changes)
  Message type: trigger or status
  Message body: heater=on|off remaining=#minutes


A Status Message

Each hangar device will periodically send a "heartbeat" to let other concerned listeners know that it is still available.

  • xPL_MSG TYPE="xpl-stat", SOURCE="rsh-heater.59", TARGET=*, class="hbeat", TYPE="app"


When a heater needs to be turned on, a request is sent from HangarControl to a single hangar.

  • xPL_MSG TYPE="xpl-cmnd", SOURCE="rsh-heater.server", TARGET="rsh-heater.59", class="heater", TYPE="basic"


When a new hangar comes online, it appears as an "unnamed" device. At this time, the operations manager will configure the node using the HangarControl web application. The xPL message sent to the hangar will be something like the one below, which the hangar node will then save to its local configuration:

  • xPL_MSG TYPE="xpl-cmnd", SOURCE="rsh-heater.server", TARGET="rsh-heater.59", class="heater", TYPE="config"
    BODY="name='N1234T' description='Cessna 182' default-time=120m' gpioPin=17' "


A Little Less Typing and a Lot More Action!

The xPL protocol has been codified and comes in at a paltry 796 lines of (mostly) readable Python! I have attached it for your perusal. While I don't imagine you want to read it all, some interesting points would be:

  • Lines 36+
    An xPL message is, for all intents and purposes, a text message. This makes monitoring, packet sniffing, and debug/logging much easier.
  • Lines 50+
    I implement the __str__ method for a message. This lets you (well, the programmer in "you") get an easily readable and formatted message to use in logging, etc.
  • Lines 241+
    The encodeMessage() method wraps up all of the pieces into a simple string packet (as discussed above in lines 36+).
  • Lines 644+
    We set up a simple thread to listen for UDP packets on port 3865, or, if there are many devices (hangars, heaters, etc.) running on this RPi, start at port 50000 and work our way up until we find a free port to listen on.
  • Lines 322+ (Yeah, I jumped backwards. That's because it's more interesting once we know how we get messages, as discussed in lines 644+!)
    The XPLListener class proves interesting because here we have the heart of the message parsing and identification. There's nothing fancy going on -- it's simply matching strings and optional wildcards.
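For illustration, here is roughly what a serialized packet looks like, assuming the standard xPL wire format (message type, a header block with hop/source/target, then the schema name and a body block). This is a hedged sketch, not the attached library's encodeMessage():

```python
def encode_xpl(msg_type, source, target, schema, body):
    """Serialize an xPL message into its plain-text wire format."""
    lines = [msg_type, "{", "hop=1",
             "source=" + source, "target=" + target, "}",
             schema, "{"]
    # the body is a series of name=value pairs
    lines += ["%s=%s" % (k, v) for k, v in body.items()]
    lines += ["}", ""]
    return "\n".join(lines)

packet = encode_xpl("xpl-cmnd", "rsh-heater.server", "rsh-heater.59",
                    "heater.basic", {"request": "start"})
```

Because the whole packet is plain ASCII text, a packet sniffer or log file shows exactly what went over the wire, which is the debuggability point made in lines 36+.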


Final Words

I chose to use xPL primarily because all of this is a learning experience. Having the core of a messaging system in an easily readable and concise file made the chore of implementation much more approachable.


I hope that you find this useful and find some tidbits of code that you may lift and use in your own projects.


Until next time,


One of the goals of Thuis is optimizing its own rule engine based on actual activity in the house. Although time is too short to actually start the optimization, with this blog we'll start collecting presence data using iBeacons. As a bonus, we solve the Welcome Home use case!


iBeacons monitoring, ranging and indoor location

iBeacons (in my case by Estimote) are little Bluetooth LE devices which on regular intervals broadcast their identifiers. Mobile apps can use this as a way of determining their location. They can be used in several ways. The three most common ways are:

  • Monitoring – the app gets a signal whenever it enters the region defined by one or multiple beacons (i.e. when it receives a packet sent by them). The app gets a small amount of time to do some work, for example notify the user. This works even when the app is terminated.
  • Ranging – when the app is active, it can listen to all beacons around and, based on the signal strengths, make an approximation of the distance between each beacon and the phone.
  • Indoor location – Estimote developed another layer on top of ranging. Based on several beacons (at least one per wall) and the sensors in the phone, it can determine the phone's location within a space. This works quite precisely, but it uses more energy than the other methods. It can also only be used when the app is in use.
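The distance approximation in ranging relies on the received signal strength. A common model for this is log-distance path loss; Estimote's exact formula isn't public, so treat the sketch below (including the parameter values) purely as an illustration:

```python
def approx_distance_m(rssi, tx_power=-59, n=2.0):
    """Rough distance estimate (meters) from RSSI using the
    log-distance path-loss model: d = 10 ** ((txPower - RSSI) / (10 * n)).
    tx_power is the calibrated RSSI at 1 m and n the environment factor;
    both values here are assumptions."""
    return 10 ** ((tx_power - rssi) / (10 * n))
```

With these parameters, an RSSI equal to tx_power maps to 1 m, and every 20 dB drop multiplies the estimated distance by 10, which is why ranging only gives a coarse near/far impression.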


I experimented with all three to see which way is most usable for Thuis. Indoor location was a bit of a hassle to set up (it involves walking lots of circles close to the walls throughout the house, which is not easy when there is furniture!), but the result is impressive. Not being able to use it in the background made me choose monitoring as the main technique.


PS: for the Android users out there: although I'm talking about iBeacons (by Apple), there are very similar technologies available for other platforms. For Android that's Eddystone, which the Estimote beacons can broadcast as well.


Presence monitoring

To optimize the Thuis rule engine, it needs to have knowledge of what happens around it. One of the useful things to know is who is where, and when. We'll use presence monitoring to determine who is currently where in the house.


Hardware set up

We want to know who is where on a room level, i.e. who is in which room. To identify different rooms, we'll deploy an iBeacon in every room we're interested in. We're using 3 Estimote Location Beacons and 3 older Estimote Proximity Beacons. In the entrance, living room, bedroom, kitchen and office, 1 beacon is installed at the center of the outer wall (as far as possible from other beacons). The signal strength of each of them is tweaked, ranging from -30dBm (±1.5 m) in the smallest rooms to -20dBm (±3.5 m) in the bigger rooms. Each beacon is configured to broadcast its ID approximately 3 times per second. These values will likely be optimized further over the coming period.


Initial version

I started by using just region monitoring as this is very easy to implement. To make my life easier I started with some structs describing the deployed beacons together with some helper functions. The interface of the struct looks like:

struct BeaconID : Equatable, CustomStringConvertible, Hashable {
    let proximityUUID: UUID
    let major: CLBeaconMajorValue?
    let minor: CLBeaconMinorValue?
    let identifier: String?

    init(proximityUUID: UUID, major: CLBeaconMajorValue?, minor: CLBeaconMinorValue?, identifier: String?)
    // more init methods

    var asBeaconRegion: CLBeaconRegion { get }
}

internal func ==(lhs: BeaconID, rhs: BeaconID) -> Bool

extension CLBeacon {
    var beaconID: BeaconID? { get }
}


And the beacons are defined as:

struct BeaconIDs {
    private static let uuidString = "B9407F30-F5F8-466E-AFF9-25556B57FE6D"
    static let all = [home, bedroom, office, living, kitchen, entrance]
    static let home = BeaconID(uuidString: uuidString, identifier: "home")

    static let bedroom = BeaconID(uuidString: uuidString, major: 23476, minor: 64333, identifier: "bedroom")
    static let office = BeaconID(uuidString: uuidString, major: 25568, minor: 21134, identifier: "office")
    static let living = BeaconID(uuidString: uuidString, major: 40474, minor: 19278, identifier: "living")
    static let kitchen = BeaconID(uuidString: uuidString, major: 16433, minor: 64211, identifier: "kitchen")
    static let entrance = BeaconID(uuidString: uuidString, major: 16433, minor: 21894, identifier: "entrance")

    static func of(identifier: String) -> BeaconID? {
        return all.first(where: { (beaconId) -> Bool in
            return beaconId.identifier == identifier
        })
    }

    static func of(proximityUUID: UUID, major: CLBeaconMajorValue, minor: CLBeaconMinorValue) -> BeaconID? {
        return all.first(where: { (beaconId) -> Bool in
            return beaconId == BeaconID(proximityUUID: proximityUUID, major: major, minor: minor)
        })
    }
}


All code related to beacons takes place in the class BeaconManager. It sets up monitoring in the init method and implements the delegate methods for the ESTBeaconManagerDelegate. It keeps some information about which is the current region for this phone and for each beacon when was the last time this phone entered or left the region. The initial version just logs information to the console based on activity.

class BeaconManager: NSObject, ESTBeaconManagerDelegate {
    private let beaconManager = ESTBeaconManager()
    private var currentPresence: BeaconID?
    private var lastLeftOrEntered: [BeaconID: Date] = [:]

    override init() {
        super.init()
        beaconManager.delegate = self
        for beaconID in BeaconIDs.all {
            beaconManager.startMonitoring(for: beaconID.asBeaconRegion)
        }
    }

    func beaconManager(_ manager: Any, didDetermineState state: CLRegionState, for region: CLBeaconRegion) {
        guard let beaconID = BeaconIDs.of(identifier: region.identifier) else { return }
        print("State \(beaconID.identifier!): \(state.rawValue)")
    }

    func beaconManager(_ manager: Any, didEnter region: CLBeaconRegion) {
        guard let beaconID = BeaconIDs.of(identifier: region.identifier) else { return }

        currentPresence = beaconID
        lastLeftOrEntered[beaconID] = Date()
        if let timeIntervalSinceNow = lastLeftOrEntered[beaconID]?.timeIntervalSinceNow {
            print("Entered \(beaconID.identifier!) since \(timeIntervalSinceNow)")
        }
    }

    func beaconManager(_ manager: Any, didExitRegion region: CLBeaconRegion) {
        guard let beaconID = BeaconIDs.of(identifier: region.identifier) else { return }

        print("Left \(beaconID.identifier!)")
        lastLeftOrEntered[beaconID] = Date()

        if beaconID.identifier == BeaconIDs.home.identifier {
            currentPresence = nil
        }
    }
}


This already works quite well, but as we're working with wireless signals there can be mistakes. For example, a packet from another room reaches your phone and therefore your presence is adjusted. Or some packets get lost and cause your phone to think it left the region.


The latter is the reason we use the home region for resetting currentPresence to outside. This region contains all beacons, so the chance of a false positive is smaller. It does, however, still happen every now and then.


Improving accuracy with ranging

To make presence monitoring more accurate we can combine monitoring with ranging. When the app is in the background and enters a region it's woken up and gets a small amount of time to do some work. We can use this time to start ranging beacons, and with the more detailed data about the beacons around us make a better approximation.


To start ranging we have to adjust the didEnter method, instead of updating currentPresence and lastLeftOrEntered we start ranging: beaconManager.startRangingBeacons(in: BeaconIDs.home.asBeaconRegion). To receive the results we'll implement the corresponding delegate method in which we'll update the values:

func beaconManager(_ manager: Any, didRangeBeacons beacons: [CLBeacon], in region: CLBeaconRegion) {
    guard let beaconID = beacons.first(where: {$0.proximity != .unknown})?.beaconID else { return }

    currentPresence = beaconID
    lastLeftOrEntered[beaconID] = Date()
    beaconManager.stopRangingBeacons(in: BeaconIDs.home.asBeaconRegion)
}


Notice we take the first beacon (with a known proximity) from the given list. The SDK returns them ordered from close to far away, so we always use the closest beacon. Our value of currentPresence just got a lot more accurate!
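The same "closest usable beacon" selection can be expressed independently of the SDK. A hedged Python sketch, where the (identifier, distance, proximity_known) tuples are hypothetical stand-ins for the CLBeacon objects:

```python
from typing import Optional

def closest_known(beacons: list) -> Optional[str]:
    """Return the identifier of the nearest beacon whose proximity is known.

    beacons: (identifier, estimated distance in meters, proximity_known) tuples,
    mirroring how the SDK hands back beacons sorted near-to-far.
    """
    for identifier, _distance, known in sorted(beacons, key=lambda b: b[1]):
        if known:
            return identifier
    return None

rooms = [("kitchen", 7.2, True), ("office", 1.4, True), ("living", 0.0, False)]
print(closest_known(rooms))  # office
```

Skipping beacons with unknown proximity matters: a reading of "unknown" usually means too few packets were received to estimate a distance at all.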


Based on how it performs I'll do some further optimizations. One thing I still have to do is make the values of currentPresence and lastLeftOrEntered persistent, so they will survive a termination of the app.


Publishing presence

As always we want to publish our data through MQTT, so the other Thuis nodes can use it as well. In [Pi IoT] Thuis #10: MQTT User Interface components for iOS we added MQTT to the app, so we can build on this. Whenever the currentPresence value changes we'll have to publish a message. This means we have to update both the didRangeBeacons method and the didExitRegion method.
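The "publish only when the value changes" idea is language-agnostic. A small Python sketch (the topic name comes from the post; the publish callback stands in for a real MQTT client so the snippet is self-contained):

```python
from typing import Callable, Optional

class PresencePublisher:
    """Publish the presence topic only when the value actually changes."""

    def __init__(self, publish: Callable[[str, str], None],
                 topic: str = "Thuis/presence/robin"):
        self._publish = publish          # e.g. a wrapper around an MQTT client's publish
        self._topic = topic
        self._current: Optional[str] = None

    def update(self, presence: str) -> None:
        if presence != self._current:    # suppress duplicate publishes
            self._publish(self._topic, presence)
            self._current = presence

messages = []
pub = PresencePublisher(lambda topic, payload: messages.append((topic, payload)))
for room in ["living", "living", "kitchen", "outside"]:
    pub.update(room)
print(messages)
# [('Thuis/presence/robin', 'living'), ('Thuis/presence/robin', 'kitchen'), ('Thuis/presence/robin', 'outside')]
```

Publishing retained messages (as the Swift code does) means a node that subscribes later still receives the last known presence immediately.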


didRangeBeacons is updated like this:

func beaconManager(_ manager: Any, didRangeBeacons beacons: [CLBeacon], in region: CLBeaconRegion) {
     // ...
     if currentPresence != beaconID {
         MQTT.sharedInstance.publish(beaconID.identifier!, topic: "Thuis/presence/robin", retain: true)
         currentPresence = beaconID
     }
     // ...
}


And to detect someone leaving the house we change didExitRegion, here we only publish when the specific region is home:

func beaconManager(_ manager: Any, didExitRegion region: CLBeaconRegion) {
    // ...
    if beaconID.identifier == BeaconIDs.home.identifier {
        MQTT.sharedInstance.publish("outside", topic: "Thuis/presence/robin", retain: true)
        currentPresence = nil
    }
}


Note that the name in the topic is still static, I'll make it configurable in the app later.


Walking around slowly through the house gives the following MQTT messages:

$ mosquitto_sub -t Thuis/presence/# -v 
Thuis/presence/robin living
Thuis/presence/robin entrance
Thuis/presence/robin bedroom
Thuis/presence/robin entrance
Thuis/presence/robin kitchen
Thuis/presence/robin office


Welcome home

Based on the same events we can welcome a user home and directly offer a useful action. I could simply turn on some lights, for example, but I'd rather give the user the choice. So we'll send the user a notification with an action. We'll start with a single action; later, multiple actions can be added depending on, for example, the time or the person.


When we get home we often watch an episode of a TV series (currently we're watching The Mentalist, very nice show!), so the action of choice will be turning on the home theatre system.


Sending a notification is easy. In the BeaconManager we create a function for it:

func sendLocalNotification() {
    let notification: UILocalNotification = UILocalNotification()
    notification.alertAction = "Watch TV"
    notification.alertBody = "Welcome home!"
    notification.soundName = UILocalNotificationDefaultSoundName
    UIApplication.shared.presentLocalNotificationNow(notification)
}


We'll call it from the didEnterRegion method when we enter the home region. To avoid getting too many notifications in case of exiting and entering the region by accident we'll add a cool down period of 5 minutes. This looks as follows:

if beaconID.identifier == BeaconIDs.home.identifier
   && (lastLeftOrEntered[beaconID] == nil || (lastLeftOrEntered[beaconID]?.timeIntervalSinceNow)! < -60*5) {
    sendLocalNotification()
}
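The cool-down check is easy to get wrong around the "never entered before" case, so here is the same logic as a tiny, testable Python sketch (timestamps in seconds):

```python
import time
from typing import Optional

COOL_DOWN = 5 * 60  # seconds, the same 5-minute window as in the Swift snippet

def should_notify(last_entered: Optional[float], now: float) -> bool:
    """Send the welcome notification only if we have not entered recently."""
    return last_entered is None or (now - last_entered) > COOL_DOWN

t0 = time.time()
print(should_notify(None, t0))          # True  (first arrival ever)
print(should_notify(t0 - 60, t0))       # False (re-entered within 5 minutes)
print(should_notify(t0 - 600, t0))      # True  (cool-down expired)
```

The None branch mirrors the `lastLeftOrEntered[beaconID] == nil` part of the Swift condition.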


The result is you receive this notification when you arrive home:

Notification: Welcome home!


To make it work there is one more thing to do and that's implementing another delegate method, this time in the AppDelegate:

class AppDelegate: UIResponder, UIApplicationDelegate {
    // ...

    func application(_ application: UIApplication, didReceive notification: UILocalNotification) {
        // If the app is already active, don't automatically start the TV
        if application.applicationState == .active {
            return
        }
        MQTT.sharedInstance.publish("on", topic: "Thuis/scene/homeTheater", retain: false)
    }
}


So when the notification action is used it will publish an MQTT message which enables the Home Theater scene, and you can directly start watching your favorite series. How the home theater scene works will be the subject of the next blog!

In these last days I have implemented three new features of Phase 2 that will make DomPi more useful to my family: determining who is at home, welcoming the family when we arrive home, and sending an alert if the average temperature of the apartment is either too high or too low. Let's have a look at them!


Previous Posts

PiIoT - DomPi: Application
PiIoT - DomPi: Intro
PiIoT - DomPi 02: Project Dashboard and first steps in the Living room
PiIoT - DomPi 03: Living room, light control via TV remote
PiIoT - DomPi 04: Movement detection and RF2.4Ghz comms
PiIoT - DomPi 05: Ready for use Living Room and parents and kids' bedrooms
PiIoT - DomPi 06: Setting up the Command Center - RPI3
PiIoT - DomPi 07: Setting up the Command Center (2)
PiIoT - DomPi 08: Setting up the Command Center (3) openHAB, mosquitto, RF24
PiIoT - DomPi 09: Presence Emulator
PiIoT - DomPi 10: Garage node (slave)
PiIoT - DomPi 11: Ready to use. Garage node (master & slave)
PiIoT - DomPi 12: Ready to use. Control Panel and Garden Node
PiIoT - DomPi 13: Ready to use. DomPi Phase 1


Project Status

Presence Identification

This feature allows DomPi to know not only whether there is somebody at home, but also who is there. The best solution to implement this, as of now, is to check whose mobile phone is at home. I know this has some drawbacks: what if the phone is off, or if we left it at home? To partially sort out this problem, DomPi also leverages the motion sensors, which limits the impact and makes it more reliable. Let's have a look at all this.


At the beginning, I thought of using the command line (via the executeCommandLine function from openHAB) and then parsing the output of "ping" on the RPi. While I was working on this, I came across an existing binding in openHAB that made the development much easier and faster: the Network Health binding (some more details here). This binding connects an item in openHAB with the status of any device or host in general. You could, for example, check the "network health" of any device you want to communicate with. For DomPi, I am monitoring the network connectivity to the IPs of our mobile phones. Let's review the installation and initial config of the binding and then have a look at the implementation.
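The command-line approach I first considered would look roughly like the sketch below. The ping flags assume a Linux ping (as on the RPi), the phone IP is made up, and the runner indirection is only there so the check can be exercised without touching the network:

```python
import subprocess
from typing import Callable, List

def _run_ping(cmd: List[str]) -> int:
    # One real invocation; stdout/stderr discarded, only the exit code matters.
    return subprocess.call(cmd, stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)

def is_reachable(host: str, run: Callable[[List[str]], int] = _run_ping) -> bool:
    """One ICMP echo with a 1 s timeout; exit code 0 means the phone answered."""
    return run(["ping", "-c", "1", "-W", "1", host]) == 0

# With a fake runner we can exercise the logic without a network:
print(is_reachable("192.168.1.50", run=lambda cmd: 0))  # True
print(is_reachable("192.168.1.50", run=lambda cmd: 1))  # False
```

The Network Health binding does essentially this for you, plus the scheduling and the item updates.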


Installation and initial config

As with any binding, you just need to download the binding itself (the best approach is to download all of the addons from here, as commented in previous posts) and copy the relevant binding (in my case the .jar file is org.openhab.binding.networkhealth-1.8.3) into the openhab/addons folder. To fine-tune it, you just need to modify some of the lines in the openhab.cfg file:

# Cache the state for n minutes so only changes are posted (optional, defaults to 0 = disabled)
# Example: if period is 60, once per hour the online states are posted to the event bus;
#          changes are always and immediately (refresh interval) posted to the event bus.
# The recommended value is 60 minutes.
networkhealth:cachePeriod=60

# refresh interval in milliseconds (optional, defaults to 60000)
# networkhealth:refresh=60000


In the end, I uncommented and modified the cachePeriod line. With the cache enabled, the binding checks the status of the devices as usual, but only updates the item when the status changes. This means there will only be updates when the phones go out of or into range. With no cache, updates would be posted to the openHAB bus every refresh interval (60 s). I don't need to overload the bus with this information; also, if you add persistence, there would be a huge volume of updates to store on the HDD. In any case, the binding will post the devices' status once per hour. All in all, with the cache enabled, openHAB will let me know if the devices go from ON to OFF or vice versa, and it will also send an update every hour. I kept the networkhealth refresh interval at the default value.


Implementation in DomPi

Potentially, you can connect directly an item to the binding and just display in openHAB its status. Something like this:

Switch Papa_tlf_nh "Papa Network Binding" <present> (gStatus, gPresencia_casa_nh) { nh="" }


This would display a switch showing whether the mobile phone is in range or not. However, this would not be optimal. It seems that my mobile phone puts the Wifi into sleep mode, probably to save battery. I have not stress-tested it, but I guess that while in sleep mode it will not reply to pings. The solution I have implemented is to apply a double level of switches: one as above, connected directly to the binding, and another one controlled by an openHAB rule.


The rule gets the updated status from the first-level switch. When the phone is in range, it directly updates the second level. However, when it is out of range, it may be that the phone is in sleep mode. Therefore, the rule waits 8 minutes. If within this time DomPi has not seen the phone in range, it determines that the user/phone is away from home. The definition of the second-level switch is this:

Switch Papa_tlf "Papa" <present>    (gStatus, gPresencia_casa)


And the rule is here:

var Timer timer_presenceID_papa = null

rule "Presence Identification - Father"
when
    Item Papa_tlf_nh changed
then
    //If the temporary item Papa_tlf_nh has changed to ON,
    //we directly update the final item Papa_tlf and cancel the relevant timer if it exists
    if (Papa_tlf_nh.state==ON) {
        postUpdate(Papa_tlf, ON)
        if (timer_presenceID_papa != null) {
            timer_presenceID_papa.cancel
            timer_presenceID_papa = null
        }
    } else if (Papa_tlf_nh.state==OFF) {
        //If it is OFF, it can be that the phone is saving battery on Wifi
        //Let's allow 8 minutes since the last update before putting presence to OFF
        if (timer_presenceID_papa == null) {
            //if there was no timer until now, create it
            timer_presenceID_papa = createTimer(now.plusSeconds(480)) [|
                //After 8 minutes, modify the Papa_tlf item if the phone is still out of range
                if (Papa_tlf_nh.state==OFF) postUpdate(Papa_tlf, OFF)
                timer_presenceID_papa = null
            ]
        }
    }
end
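The two-level switch is essentially an asymmetric debounce: ON propagates immediately, OFF only after a grace period. A self-contained Python sketch of the same idea, with an explicit clock so it can be tested without waiting 8 minutes:

```python
from typing import Optional

GRACE = 8 * 60  # seconds, matching the 8-minute timer in the rule

class PhonePresence:
    """Second-level switch: ON follows immediately, OFF only after a grace period."""

    def __init__(self):
        self.present = False
        self._last_seen: Optional[float] = None

    def report(self, in_range: bool, now: float) -> None:
        if in_range:
            self.present = True
            self._last_seen = now
        elif self._last_seen is not None and now - self._last_seen >= GRACE:
            self.present = False   # out of range longer than a typical Wifi nap

p = PhonePresence()
p.report(True, 0)        # phone answers a ping
p.report(False, 120)     # short Wifi sleep: still considered home
print(p.present)         # True
p.report(False, 600)     # 10 minutes without an answer
print(p.present)         # False
```

This is the same behavior the openHAB timer implements, just without the event-bus plumbing.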


The drawback of this implementation is that I need one rule per phone to monitor. In the end I am only planning to monitor two phones, so it won't be a big issue. However, I need to review openHAB's capabilities, as I am sure I can condense all the devices to monitor into a single rule. I will look into this when optimizing the code, after the Challenge unfortunately... You can see some snapshots below of this feature. The line "Quien esta en casa?" => "Who is at home?" summarizes in the Main Menu how many people are at home. It is a clickable item that takes you to the submenu on the right-hand side with the details.




Welcome Home feature

With this, I intend to execute some actions when somebody arrives home. At this stage, the action is to turn on the lights of the living room and the parents' bedroom; in the next development it will also turn on the TV and switch it to the channel we usually watch. This can be very useful to us, as you get some light at the end of the corridor without having to walk all the way there. Also, the light in our room takes time to be fully lit, so it is good to turn it on some minutes in advance. I have written the code and split it into four openHAB rules.


How it works

It all starts by determining when the apartment is empty, since only once the flat is empty does it make sense to wait for the family and welcome them warmly, or at least "lightly". The first rule is quite straightforward:

//This rule determines if there is presence at home
rule "Detect any presence at home"
when
     Item Nodo09MotionDetected changed from OFF to ON or
     Item Papa_tlf changed from OFF to ON or
     Item Mama_tlf changed from OFF to ON
then
     if (Someone_at_home.state!=ON) postUpdate(Someone_at_home, ON)
end


If there is movement, or DomPi discovers any phone at home, then we determine that a family member is there. This assumption has to be fine-tuned in the future: what if we forgot the mobile phones, etc.? It may also happen that motion is detected because a burglar broke in... We should not welcome him or her! This is controlled by another rule, which checks the status of the Alarm Switch: if it is on, we won't trigger the welcome feature:


rule "Welcome home family"
/*If someone has arrived home and there was nobody inside before, let's:
 *         Turn on the light in the living room if luminosity is low
 *         Turn on the light in the parents' bedroom if luminosity is low
 *         Improvements: turn on TV and say hello
 */
when
    Item Someone_at_home changed from OFF to ON
then
    if (Nodo09AlarmSwitch.state==OFF) {
        //Reconfirm that the alarm switch is off - it can be that rule "Detect any presence at home"
        //has changed Someone_at_home, but the alarm is active
        say("Welcome at home!!")    //Let's be nice to the family even if no lights need to be turned on ;)
        if (gLuminos.state<50) {
            //Average luminosity is low, let's turn on the lights
            postUpdate(Lampara_2, ON)        //Parents' light
            postUpdate(Lampara_3, ON)        //Living room light
        }
    } else if (Nodo09AlarmSwitch.state==ON) postUpdate(Someone_at_home, OFF)
end //We avoid welcoming burglars!


As you can see in the rule, before turning on the lights it checks the average luminosity in the apartment, taking into account only the luminosity sensors inside the flat: the kids' room, the parents' room and the living room. If the average is below 50%, it turns on the lights. I will adjust this value after some days of testing.


The last two rules determine whether the family came back home or left home. They will update the item Someone_at_home accordingly:

//This rule determines if there is nobody at home and if so updates the item
rule "Did family leave home"
when
    Item Someone_at_home changed from OFF to ON
then
    //Thread launched as soon as we determine someone is at home.
    //It will keep checking the conditions to determine that there is no one at home
    if (timer_presenceat_home != null) {
        timer_presenceat_home.cancel
        timer_presenceat_home = null
    }
    while (Someone_at_home.state==ON) {
        //if there is no motion detected and none of the members of gPresencia_casa (Mama and Papa)
        //is in the state ON, which means they are not at home, then execute the loop inside
        if (Nodo09MotionDetected.state==OFF && (gPresencia_casa.members.filter(s | s.state==ON).size==0)) {
            //Wait 30 mins and check again if there was any movement
            //this long delay helps to avoid issues if somebody is in the bath, etc
            timer_presenceat_home = createTimer(now.plusSeconds(1800)) [|
                if (Nodo09MotionDetected.state==OFF && (gPresencia_casa.members.filter(s | s.state==ON).size==0)) {
                    //Still no movement -> modify item
                    postUpdate(Someone_at_home, OFF)
                }
                timer_presenceat_home = null
            ]
        }
        if (Someone_at_home.state==ON) Thread::sleep(120000) //Every 2 mins check again if someone is at home
    }
end

rule "Did family come back home"
when
    Item Someone_at_home changed from ON to OFF
then
    //Thread launched as soon as we determine nobody is at home
    //It will keep checking the conditions to determine that somebody entered home
    //Conditions to determine that someone is at home:
    //  if there is a mobile phone in the wifi range
    //      (no need to check that here: there is already a rule - "Detect any presence at home" -
    //      that would modify the item Someone_at_home to ON when a phone appears in range)
    //  if there is motion detected and 1) the alarm is not active, or 2) the alarm is active but
    //     the user turns it off within the time limit
    while (Someone_at_home.state==OFF) {
        if (Nodo09MotionDetected.state==ON) {
            if (Nodo09AlarmSwitch.state==OFF) postUpdate(Someone_at_home, ON)
            else {
                //val t_delay = t_delay_trigger_alarm * 1000 + 200
                Thread::sleep(60200)    //Wait the 60 secs plus 200 ms to allow some updates //Change with t_delay_trigger_familyhome
                if (Nodo09AlarmSwitch.state==OFF) postUpdate(Someone_at_home, ON)
            }
        }
        if (Someone_at_home.state==OFF) Thread::sleep(10000)        //Sleep for 10 s before checking again
    }
end


Temperature Alarm

This feature notifies me by email if any temperature sensor at home reads below 20ºC or above 27ºC. Many thanks to jkutzsch, who gave me the idea back in May, at the very beginning of the Challenge. Finally, I have implemented it, in a single rule. If the alarm is triggered and the email sent, the rule launches a timer that waits 1 h before any further notification is sent: I want to avoid spamming myself!

/* Rule to send an email warning if the temperature in any room is out of a given range */
rule "Notify of extreme temperature at home"
//If any room is below 20ºC or above 27ºC, notify by email
when
    Item gTempers received update
then
    //First check whether the timer is null; a non-null timer implies an email was already sent and
    //DomPi is waiting some time (1 h) before resending the notification
    if (timer_extreme_temperature_alarm==null) {
        //no timer - we can send the email if required
        //check the values of the temperature sensors from home (interiors only)
        var Boolean send_alarm_high_t = false
        var Boolean send_alarm_low_t = false
        gTempers?.members.forEach(tsensor| {
            if (tsensor.state>=27) send_alarm_high_t = true
            if (tsensor.state<=20) send_alarm_low_t = true
        })
        if (send_alarm_high_t || send_alarm_low_t) {
            //Send the email with the alarm and then create the timer
            var String email_subject = ""
            var String email_body = ""
            if (send_alarm_high_t) {
                email_subject = "Alta temperatura en Casa. Alarma"   //High temperature at home. Alarm
                email_body = "Detectada alta temperatura en casa"    //High temperature detected at home
            }
            if (send_alarm_low_t) {
                email_subject = "Baja temperatura en Casa. Alarma"   //Low temperature at home. Alarm
                email_body = "Detectada baja temperatura en casa"    //Low temperature detected at home
            }
            sendMail("", "DomPi - " + email_subject, email_body)
            timer_extreme_temperature_alarm = createTimer(now.plusSeconds(3600)) [|
                //Wait 1 h and then just remove the timer to allow the next notification
                timer_extreme_temperature_alarm = null
            ]
        }
    }
end
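The range check itself is easy to unit-test outside openHAB. A Python sketch mirroring the rule's thresholds (it checks the high limit first, so if both limits were somehow hit only the high-temperature subject is returned):

```python
from typing import Iterable, Optional

LOW, HIGH = 20.0, 27.0  # °C limits from the rule

def alarm_subject(temps: Iterable[float]) -> Optional[str]:
    """Return the email subject if any sensor is at/over 27°C or at/under 20°C."""
    temps = list(temps)
    if any(t >= HIGH for t in temps):
        return "Alta temperatura en Casa. Alarma"   # high temperature at home
    if any(t <= LOW for t in temps):
        return "Baja temperatura en Casa. Alarma"   # low temperature at home
    return None

print(alarm_subject([21.5, 24.0, 26.0]))  # None
print(alarm_subject([21.5, 28.2]))        # Alta temperatura en Casa. Alarma
print(alarm_subject([19.0, 24.0]))        # Baja temperatura en Casa. Alarma
```

The 1 h timer in the rule then acts as a simple rate limiter on top of this check.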


Additional improvements

As I continue to run through the existing rules, I find ways to improve them. Below is a summary of this week's news:

  • Show the internal average temperature and humidity in the main menu - this was not implemented before
  • Create a submenu which quickly shows the temperatures and humidity
  • Alarm - new rule created to turn the Alarm Status back to OFF after 2 minutes of no movement being detected, to avoid keeping a false alarm sine die...


Nodes' Dashboard


More items turning into Green!!

Up until now, I built the 'skeleton' of the platform: we have three nodes in the smart home (Central Node, Sensors Node, User Node) which show the sensor data at different points. However, the platform lacks a nice access point for the residents to interact with the house. Furthermore, the competition module is still to be started! In this post, I will show the improvements in the house GUI as well as the setup of the central node to start developing a competition system.


The improvements to be included in the Central Node are:

  1. Touch screen integration and GUI development
  2. Server set up to develop the competition system. Central Node will provide:
    • Database storage service - to create a persistent system
    • Web portal/ Web access service - to enable an interface with the competition devices when outside of the home


Touch screen and GUI development

Initial setup: Raspberry Pi 3 - Raspbian OS (Jessie) / SSH enabled / Mosquitto MQTT broker installed / MQTT subscriber client / Console interface


The main GUI of the house runs on the Central Node. Since it will make use of the provided touch screen, I include a link on how to integrate the Raspi and the screen, as well as a description of the GUI development itself.


Touch screen integration

We will include the touch screen from the kit: the '7" Touchscreen Display'. Luckily, this very same element14 page offers a very complete description of the product and how to get started:

Raspberry Pi 7” Touchscreen Display

The main steps:

1. On the Raspberry Pi 3, install the matchbox-keyboard (no more need for a keyboard connected via USB).

2. Make the hardware connections (we continue from step 5: attaching the DSI ribbon and then the 5V and GND jumper wires). Then mount the Raspi on the back of the display.


Here is the final result in my case

IMG_20160721_105121_hdr.jpg  IMG_20160721_105256_hdr.jpg



GUI Design


Using Python, and the GTK library

Full code can be found on GitHub


This was an interesting project, as it is the first time I have used Python to design a graphical interface. In the end, I went for the most basic way, using GTK directly. However, I have the impression there might be higher-level libraries which would provide a nicer final look (still, it was an experience). As far as this project goes, we have a functional graphical design to display our significant data.


I started from the previous Python scripts, which already implemented an MQTT subscriber and a console interface. In the end, I work with three files:

  • the main file
  • the MQTT client to be connected to the broker, which receives the smart house data
  • the definition of the graphical interface, using the GTK library, together with the functions to update its labels

To generate a temperature gauge, there is an additional file.


NOTE - An extra file will be required to get the competition house information.
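One design choice worth making explicit: keeping the label-update logic separate from the GTK widgets makes it testable without a display. A hedged sketch of such a presenter (topic layout and label names are hypothetical, not taken from the project code):

```python
class DashboardPresenter:
    """Map incoming MQTT messages to the text the GTK labels should show."""

    def __init__(self):
        self.labels = {"temperature": "--", "pressure": "--", "door": "--"}

    def on_message(self, topic: str, payload: str) -> None:
        # Topic layout assumed: home/<quantity>
        quantity = topic.rsplit("/", 1)[-1]
        if quantity == "temperature":
            self.labels["temperature"] = f"{float(payload):.1f} °C"
        elif quantity == "pressure":
            self.labels["pressure"] = f"{float(payload):.0f} hPa"
        elif quantity == "door":
            self.labels["door"] = "OPEN" if payload == "1" else "CLOSED"

p = DashboardPresenter()
p.on_message("home/temperature", "23.47")
p.on_message("home/door", "1")
print(p.labels["temperature"])  # 23.5 °C
print(p.labels["door"])         # OPEN
```

The GTK file then only needs to copy these strings into the actual widgets from the main loop.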


Initial approach for the temperature gauges - Plotly (FAILED)


Plotly is a powerful online tool to generate online graphs and dashboards. It gives enough freedom when creating these charts, plus the additional option of exporting to a local directory. I decided to use it to generate a temperature graph which could be updated both on the central node and for a more remote solution. However, I realized this will not work for my case (see the end of this section for the dramatic outcome).


Using plotly

I created a new account on their webpage (as a free user) and started playing. Each account comes with a username and a key, which have to be included in the code using the platform or set in the environment.




There are two main steps while installing plotly: installation of python library and authentication. The corresponding commands are:

To install the library:

pip install plotly

To authenticate, we have to start Python with the console command python.


There, the authentication process only requires two commands:

import plotly
plotly.tools.set_credentials_file(username='DemoAccount', api_key='lr1c37zw81')

(Another option is to include this authentication process in the python files using plotly)


Now, our python scripts are ready to use Plotly!




I included in the project a file which can generate different temperature graphs with small modifications. The code defines a chart with different temperature regions (and some misplacement in the numbers, something to fix if sticking with plotly). Depending on the temperature, we should see the pointer signaling the right section. Some examples:


The limits of plotly free account

After the setup and development, I could generate an image for any new temperature sample, emulating the effect of a thermometer indicator. Nevertheless, the program would crash after a certain time without any chance of recovery. It is a bit embarrassing, but it took me a long time to realize why:

Free accounts have a limit on requests (kind of obvious): 30 requests per hour and a maximum of 50 requests per day. I want to refresh the GUI on a regular basis, and even if I can regulate and minimize the number of requests, I would prefer finding another solution. For now, though, I will keep a static temperature image in the GUI (until I design a better solution!).
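If I did stick with plotly, a client-side budget guard could keep the app under those quotas instead of crashing. A sketch (the 30/hour and 50/day figures come from the limits above; everything else is an assumption):

```python
class RequestBudget:
    """Client-side guard for a free-tier quota (30 requests/hour, 50/day)."""

    def __init__(self, per_hour: int = 30, per_day: int = 50):
        self.per_hour, self.per_day = per_hour, per_day
        self._stamps = []   # timestamps (seconds) of sent requests

    def allow(self, now: float) -> bool:
        # Forget anything older than a day, then count the last hour.
        self._stamps = [t for t in self._stamps if now - t < 24 * 3600]
        in_last_hour = sum(1 for t in self._stamps if now - t < 3600)
        if in_last_hour < self.per_hour and len(self._stamps) < self.per_day:
            self._stamps.append(now)
            return True
        return False

budget = RequestBudget()
sent = sum(budget.allow(now=t) for t in range(60))  # one attempted request per second
print(sent)  # 30 -> the 31st attempt within the hour is refused
```

The GUI would call allow() before each chart upload and fall back to the cached image when it returns False.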


A single interface for the smart house and competition system




This interface has three frames:



     Shows whether we are connected to the broker and which local IP we are using

     If connection was not established, we can enter and try a different local IP




  • Temperature as a gauge
  • Pressure and altitude values in real time
  • Door state and alarm

There is a Log button, which will be used to retrieve past data



     Shows a table with the residents and state of the competition

     Results are ordered, with the winning contestants on top

There is a History button, to be used to show past months results.


(*) We have only mock values so far. The real information will be read from our database once the competition system is implemented.


Server setup


The next step for our central node is to host a server. After the setup, it will provide:

  • Remote and local storage, thanks to the MySQL database. The database itself could be built without the webserver, but we want it integrated and accessible from outside too (the competition system will be sending its information mainly from outside of the house).
  • Remote access to the node, thanks to a web interface


The next sections describe how to install each of the new capabilities.


Installing and starting a database in the Raspberry Pi 3: MySQL


For the MySQL database, I will install the corresponding packages (mysql as well as the related Python one) on the Raspberry Pi 3 with the following command:

sudo apt-get install mysql-server python-mysqldb


(*) If during the installation you are not asked to set the root password, you will have to set it afterwards:

  1. Stop the service
    service mysql stop
  2. Start with the grant tables disabled, so that you don't need to enter the root password
    sudo mysqld_safe --skip-grant-tables
  3. Access as the root user
    sudo mysql --user=root mysql
  4. Change the root password
    update user set Password=PASSWORD('new-password') where user='root';
  5. Do not forget to stop and restart the server as normal!


Then create a database for the project, and also a user to manage the input data (with fewer privileges than root).


Again, my main functions will be developed in Python, as scripts. Therefore, I will be using Python commands to:

1. Create table

2. Access table

3. Delete rows/tables
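The three operations can be sketched end-to-end. I use sqlite3 below only so the snippet is self-contained; with python-mysqldb the same SQL runs against the MySQL server (MySQLdb uses %s placeholders instead of sqlite's ?), and the table and column names are made up for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")   # stand-in for MySQLdb.connect(...)
cur = conn.cursor()

# 1. Create table
cur.execute("CREATE TABLE competition (roommate TEXT, distance_km REAL)")

# 2. Access table (insert a row, then read it back)
cur.execute("INSERT INTO competition VALUES (?, ?)", ("resident_a", 12.5))
cur.execute("SELECT roommate, distance_km FROM competition")
print(cur.fetchall())  # [('resident_a', 12.5)]

# 3. Delete rows/tables
cur.execute("DELETE FROM competition WHERE roommate = ?", ("resident_a",))
cur.execute("SELECT COUNT(*) FROM competition")
print(cur.fetchone()[0])  # 0

conn.close()
```

Parameterized queries (the placeholders) matter here, since the competition data will eventually arrive from outside the house.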


Installing a web server in the Raspberry Pi 3: Apache and PHP5




The command to install the packages is:

sudo apt-get install apache2 php5 libapache2-mod-php5


Afterwards, we will have on the main node:

  • Apache2 - the server.
  • PHP5 - to develop the web portal which communicates with the database.
  • libapache2-mod-php5 - the bridge between the two of them.


A useful command: (re)activate the service

sudo service apache2 restart



At this point, we can access our web portal using the local IP of the Central Node. In the end, we want the web server to be accessible from the outside (it will manage the remote functions). Two steps are still needed:

  1. Port forwarding to our Raspberry Pi 3 - the router will redirect the traffic that comes through a specific port to our Central Node. This process depends on the router itself (luckily, there are usually plenty of Google entries on how to do it), but usually there is a "Port Forwarding" menu among the advanced options of the router. Two things we need to keep in mind:
    • External IP address & Start/End Port - the IP address and port to be accessed from the outside (START PORT). I leave the external IP address as it is and select a start port (we will use it again when sending data to the central node from the competition mobile app) which is the same as the end port.
    • Internal IP address & Start/End Port - the internal IP address should be the local IP of the central node, and the start/end port that of the service.

          We will be using this EXTERNAL IP and PORT from the mobile application managing the "Competition application" in order to send data back home.


     2. Obtaining a domain - instead of using the IP address, the web will be accessible via DNS. I obtained the domain from Absolutely Free Dynamic DNS / DDNS (the page automatically detects the public IP of your home network). No more IP memorizing for a while!


For an initial web portal we will just forward port 80 (for both the external and the internal side). And here it comes!

(Yes, yes... it is still a mock portal)
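A quick way to verify that the portal answers (before and after setting up port forwarding) is to poll it over HTTP. A minimal sketch using only the Python standard library; the URL is a placeholder to replace with your own DDNS domain and forwarded port:

```python
import urllib.request

# Hypothetical address: replace with your DDNS domain and forwarded port
url = "http://localhost:80/"

def portal_is_up(url, timeout=5):
    """Return True if the web portal answers with HTTP 200."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except OSError:
        # Covers connection refused, DNS failures and timeouts
        return False
```

Running it from outside your home network is the real test that the port forwarding and DDNS entry work together.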





Useful service commands

Regarding MySQL and Apache servers, here are some useful commands:



sudo service mysql restart


sudo service mysql start


sudo service mysql stop

Developing a web portal

1. Design and content: HTML & CSS


The web files will be located in the folder /var/www. The entry point is the index.html file, so let's start changing things a bit.


In order to have a nicely designed look, there are several free templates available. I chose mine from Free Website Templates. With some modifications to the CSS files for a different look, as well as to the HTML to include the content, I will make some changes to include the Competition Application:



At this point, I will just display the information related to the Competition System. It would be a nice feature to also be able to read the information sent from the Sensors node on this same web. However, I would include it only with a minimum of security (e.g. user login) so that it is not available to just anybody and everybody.


2. Adding functionalities: PHP5 and mySQL

For the competition system to work, our central node will be receiving each roommate's status through the Internet. This information will then be stored in the database so that the main program (the previously developed Python GUI) can access it.

This will be done in PHP:

  1. Receive and decode the user's information to obtain the total distance (from an HTTP POST)
  2. Insert this data into the database (once the local user has opened the database, using SQL commands)
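The two steps of the PHP handler can be sketched as follows - shown here in Python (not the actual PHP) so the logic is easy to test, with a hypothetical urlencoded POST body and table schema:

```python
from urllib.parse import parse_qs
import sqlite3

# Hypothetical POST body as the competition mobile app might send it
body = "user=roommate1&total_distance=12.5"

# 1. Receive and decode the user's information (urlencoded HTTP POST)
fields = parse_qs(body)
user = fields["user"][0]
distance = float(fields["total_distance"][0])

# 2. Insert this data into the database (sqlite3 stands in for MySQL here)
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE competition (user TEXT, distance REAL)")
conn.execute("INSERT INTO competition VALUES (?, ?)", (user, distance))
stored = conn.execute("SELECT user, distance FROM competition").fetchone()
```

In the real PHP script the same two steps map to reading `$_POST` and issuing the `INSERT` through the MySQL extension with the dedicated user's credentials.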




With this post, the setting up of our central node is completed!!

  • Smart house GUI is ready to show the house and competition state
  • A mySQL server has been installed, allowing for some future persistence
  • A web server is also installed, opening a bridge for the competition system to enter the house
    • We keep in mind the domain of our Competition smart house (if accessed with a browser, it will show a simple web hosted on the central node)
    • And the port for our competition service


There is still much development to do on top of that!!

The Internet of Things is about connecting the things around you in a meaningful way. In the last few posts, I showed how to connect and monitor a few sensors inside the home using a Raspberry Pi. Now it's time for a little entertainment. After the basics in the home, I believe it's going to be our entertainment systems that come online next. In this post, I'll explore the idea of an Internet-connected music player with the Raspberry Pi.


Enter Mopidy

Mopidy is much more than a normal music player. It makes your music device accessible from the web and also enables it to stream content from services like Spotify, Google Play Music etc. The best part: you can access your music player from your smartphone, tablet or PC. This is how it's going to work: Mopidy runs on your Raspberry Pi (I'm using a Pi 3) as a daemon (background process), and your Pi must be connected to a network. Now you can access the Mopidy UI from any of your devices - phone, tablet or another PC. It's basically like a Bluetooth streaming experience, except that it won't drain your battery. The catch is that your music files should either be available on your Pi or be streamed from the cloud.


Hardware Setup

The hardware setup is really simple. You just have to connect a speaker to the 3.5mm jack. I'm using a USB-powered speaker, which I power from the Pi.



Software Installation

Although there is an official apt repository method for installing Mopidy on the Pi, I will be following the pip method. The version in the apt repo seems to be old (0.19), while the one from pip is current (2.x).

First thing to do is to install all the gstreamer dependencies in Pi. Use the following command to install the dependencies:

sudo apt-get install python-gst-1.0 gstreamer1.0-plugins-good gstreamer1.0-plugins-ugly gstreamer1.0-tools gir1.2-gstreamer-1.0 gir1.2-gst-plugins-base-1.0

Now we can go on to install Mopidy using pip. Use the following command:

sudo pip install Mopidy

This will install the Mopidy music server on your Pi. Before using it, we have to configure Mopidy so it can be accessed from other devices. This is particularly helpful if you are running the Pi headless. Also, once you configure this access you will be able to control the music player from your mobile phone or another PC. To edit the configuration:

nano ~/.config/mopidy/mopidy.conf

Basic structure of the file is going to be section in square braces ( eg [core] ) and options under it.

First, to be able to access Mopidy from outside, you need to enable the MPD and HTTP sections. Navigate to the sections and modify them as given below:

[mpd]
enabled = true
hostname =
port = 6600
password =
max_connections = 20
connection_timeout = 60
zeroconf = Mopidy MPD server on $hostname
command_blacklist =
default_playlist_scheme = m3u

[http]
enabled = true
hostname =
port = 6680
static_dir =
zeroconf = Mopidy HTTP server on $hostname

Now you will be able to access Mopidy from any device. Next, you have to install a client. First we'll install a web client so that we can access Mopidy from a PC, tablet or phone. You can get a non-exhaustive list of web clients for Mopidy from . For this project, I chose 'Musicbox_webclient'. To install it, just enter:

sudo pip install Mopidy-MusicBox-Webclient

Now you will be able to access Mopidy through this web client from any of your devices.

Now you can go to http://<Pi's IP>:6680 and you will see a page like this:

Screen Shot 2016-08-24 at 10.39.35 pm.png

This page should show all the available webclients for Mopidy. Here I have only one. You can click on it and will be taken to a page like this:

Screen Shot 2016-08-24 at 10.45.24 pm.png

From this point on, it behaves like a normal music player. You will be able to browse your local music files and play them using this UI.


Let's Party

Here is a small demo of how I'm using Mopidy with the MusicBox client on my Android phone. It is the web client itself, but I created a shortcut on my home screen so that I can open it easily.


Happy Hacking,



<< Prev | Index | Next >>

Today a quick note on how to connect the two Pi's.


Previous posts:

[Pi IoT] Plant Health Camera #9 - calculating BNDVI and GNDVI

[Pi IoT] Plant Health Camera #8 - Aligning the images

[Pi IoT] Plant Health Camera #7 - Synchronizing the cameras

[Pi IoT] Plant Health Camera #6 - Putting the slave Pi to work

[Pi IoT] Plant Health Camera #5 - OpenCV

[Pi IoT] Plant Health Camera #4 - Putting the parts together

[Pi IoT] Plant Health Camera #3 - First steps

[Pi IoT] Plant Health Camera #2 - Unboxing

[Pi IoT] Plant Health Camera #1 - Application


Connecting the slave to the master via ethernet

In my blog yesterday I talked about taking images in the garden. But up till now the master Pi 3 (with NoIR camera) and the slave Pi B+ (with color camera) were connected via my local network - the Pi 3 using its internal WiFi and, since the WiPi dongle was missing from my kit, the Pi B+ using a cable to my network router. This is not a workable solution, since I'd like to take the camera to my garden, or to a field where no network is available. Therefore I connected the two Pi's using a short ethernet cable. Luckily no cross-over cable is needed, since the ethernet ports of the Pi are auto-sensing.


I don't want to install a DHCP server on the Pi 3, so I'm using static IP addresses by adding the following to the /etc/dhcpcd.conf file:


# static ip address (gp 24/8/2016)
interface eth0
static ip_address=
static routers=
static domain_name_servers=
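The values above depend on your network; a hypothetical filled-in example, assuming the master takes 192.168.10.1 on the direct cable (the slave would get 192.168.10.2 in the same subnet), could look like:

```
# static ip address - example values, adapt to your own network
interface eth0
static ip_address=192.168.10.1/24
static routers=192.168.10.1
static domain_name_servers=192.168.10.1
```

On an isolated point-to-point link like this, the routers and DNS lines mostly just satisfy dhcpcd; what matters is that both Pi's sit in the same subnet.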


Here is a picture of the setup:




stay tuned

Things have been moving around the Farm lately so expect to see some catch up blog posts coming at you soon!


First, anyone else have a loving, caring, helpful spouse who makes your projects grow exponentially?  :-)


When the project was first planned out we were looking at working with Chickens and Rabbits.  Since then my wife has expanded/added G.O.A.Ts (see previous blogs), Guinea Fowl, and now Ducks!  (There has been rumor of Peacocks in the future but luckily none can be found in the immediate 250 mile radius.)


At this point I have not worked IoT into the new fowl, but I found an opportunity to use the Ducks as a trial test for my future door monitoring with the Chickens.




Welcome to the Duck Domain!  The Fence is actually up to try and keep out the previously mentioned G.O.A.Ts.  It seems that Duck food, Chicken food, heck even Rabbit food is all fair game to those rascally Terminators if their Ocular targeting system finds it!


These are all "Free" ducks that she was able to find advertised online, it started as 2, then 2 more, then 2 more, you get the idea.  Yes, Chickens are Gateway Fowl, quickly escalating into the Farmer's wife needing more and more diverse additions!  The nice side of it is duck eggs are big!




The pool gets cleaned often, with the water being toted over to nearby trees for reuse.  Eventually a pump system is planned to quickly drain it, but for now it is bucket power!




So the problem presented to me: using the handy dandy egg roller tool (see above) to get eggs out of the Ducky Domain Domicile is less than efficient.  I have been planning out a sliding door option for the Chicken Casa and thought this would be a good opportunity to apply it quickly using some materials on hand.


I have been trying to use free materials as much as possible, recycling and no cost are big pluses on this Farm.



So taking a handy dandy pallet, I left one solid board on the bottom to ensure nesting materials stay in the nest and then removed 2 boards out of the middle section on both front and back, ending up with 4 cut boards. #1


I then took 2 2x4s a little shorter than 1/2 the height of the pallet and screwed the 4 boards onto them at the appropriate location to allow for the 2x4 door to be fully down with the hole covered, as shown above.  #2


I then used some baling wire attached to the top of the sliding door to allow a firm handle to lift up and control the door.  #3


Finally I added a hole through the pallet and the door so that when the door is lifted a screw can be put in to hold everything up and in place.  There is a staple on the top to hold the screw when it is not being used.  #4



Here is the "back" side, which will actually be the inside.  You can better see how the 2 2x4s were put in place with the 4 cut boards sealing the hole.


You can also see a little better view of the wire handle for lifting the door.




Here is the door locked into its up position.  Next step: install it and have the wife test it out.




Here the pallet back with sliding door has been placed onto the backside of the Duck Domain Domicile to allow easy access for eggs.




The view through the new sliding door!  The wife loves it and now wants 3 more.  Sigh...


But I like this design to use with the Chicken Casa and adding a sensor for when it is closed will let us know remotely when they are locked up.  Eventually upgrades will have a remote motor to open, but especially with this implementation I wanted a heavy door to stay down and keep out potential egg/duck thieves of the non-human variety.


For all of you Water Fowl enthusiasts, yes we are looking at a full duck pond in the future.  My son has already started digging and I have been researching Sodium Bentonite as a water holder.  We would really like to work in a self cleaning pond so more research is in the future for that!

Today a quick update. I wrote a small program to extract the GNDVI and BNDVI images as explained in my previous post.


Previous posts:

[Pi IoT] Plant Health Camera #8 - Aligning the images

[Pi IoT] Plant Health Camera #7 - Synchronizing the cameras

[Pi IoT] Plant Health Camera #6 - Putting the slave Pi to work

[Pi IoT] Plant Health Camera #5 - OpenCV

[Pi IoT] Plant Health Camera #4 - Putting the parts together

[Pi IoT] Plant Health Camera #3 - First steps

[Pi IoT] Plant Health Camera #2 - Unboxing

[Pi IoT] Plant Health Camera #1 - Application


Python code for extracting GNDVI and BNDVI

As explained last week I added the infra-blue filter in front of the Pi NoIR Camera, in order to get rid of the red part of the scene.


Then I wrote a small Python program which grabs a color image and converts it to GNDVI and BNDVI:

The range of the original NDVI images is -1 to +1. In order to display this properly I converted these values to the range 0-255 and applied a colormap such that NDVI value 0 is green, -1 is blue and +1 is red:





# import the necessary packages
from picamera.array import PiRGBArray
from picamera import PiCamera
import time
import numpy
import cv2

# initialize the camera and grab a reference to the raw camera capture
camera = PiCamera()
rawCapture = PiRGBArray(camera)

# allow the camera to warmup
time.sleep(2)

# grab an image from the camera
camera.capture(rawCapture, format="bgr")
color_image = rawCapture.array

# extract the NIR, green and blue channels
nir_channel = color_image[:,:,0]/256.0
green_channel = color_image[:,:,1]/256.0
blue_channel = color_image[:,:,2]/256.0

# calculate and show gndvi
gndvi = (nir_channel - green_channel)/(nir_channel + green_channel)
gndvi = (gndvi+1)/2
gndvi = cv2.convertScaleAbs(gndvi*255)
gndvi = cv2.applyColorMap(gndvi, cv2.COLORMAP_JET)
cv2.imshow("GNDVI", gndvi)

# calculate and show bndvi
bndvi = (nir_channel - blue_channel)/(nir_channel + blue_channel)
bndvi = (bndvi+1)/2
bndvi = cv2.convertScaleAbs(bndvi*255)
bndvi = cv2.applyColorMap(bndvi, cv2.COLORMAP_JET)
cv2.imshow("BNDVI", bndvi)

# display the image on screen and wait for a keypress
cv2.imshow("Image", color_image)
cv2.waitKey(0)

# save images (example filenames)
cv2.imwrite("color.jpg", color_image)
cv2.imwrite("gndvi.jpg", gndvi)
cv2.imwrite("bndvi.jpg", bndvi)




Unfortunately, at the time of writing, sunset had already passed two hours ago, so I couldn't image the plants in my garden.

The only agricultural thing I could image was a raspberry:


Color image: (note how green it is due to filtering out the red)









Stay tuned for real plant images and 'real' NDVI images using the other camera's red channel.

Here is a quick post on integrating cameras with Home Assistant.


Basically, as part of the Home Assistant dashboard we are going to add two sections showing the preview of the Pi cameras, as shown in the pictures below

{gallery} Integrating Camera's in Home Assistant


Home Assistant dashboard with Pi Cameras preview.


Security Camera setup with Pi Zero + NOIR camera, in a 3D printed case.


Pi camera connected  to the Raspberry Pi 3


Security camera feed in a  separate tab


Picture gallery of the intruders detected, using Single File PHP Gallery


Having a close look into the picture from the Picture gallery



Here are the steps to follow to integrate the Pi cameras with Home-Assistant

#1  Connect the Pi camera to Raspberry Pi 3 and run the following commands to create a directory called picamera

         pi@hub:~ $ cd /home/hass

         pi@hub:/home/hass $ sudo mkdir picamera

         pi@hub:/home/hass $ ls

         pi@hub:/home/hass $ sudo chown hass picamera

      and then create an image file

         pi@hub:/home/hass $ cd picamera
         pi@hub:/home/hass/picamera $ touch image.jpg


#2 Update the configuration.yaml file

  platform: rpi_camera
  name: Raspberry Pi Camera
  image_width: 640
  image_height: 480
  image_quality: 7
  image_rotation: 0
  timelapse: 1000
  horizontal_flip: 0
  vertical_flip: 0
  file_path: /home/hass/picamera/image.jpg

   and then stop and start Home-Assistant and test

      sudo systemctl stop home-assistant@hass

      sudo systemctl start home-assistant@hass



#3 Adding the Security camera  to the dashboard

    modify the configuration.yaml to include the following under the camera section

- platform: mjpeg
  name: Security Camera

    replace the IP address above with the security camera Pi's IP address


#4 Add the Security camera preview tab and the Intruder detection tab, which shows the picture gallery

       Add the following under the panel_iframe section of the configuration.yaml file

    title: 'Intruder detection'
    icon: 'mdi:nature-people'
    url: ''
    title: 'Security Cam'
    icon: 'mdi:camera'
    url: ''

      replace the IP address above with the security camera Pi's IP address


AndyWarhol Campbell.jpg

I am not sure whether the Dynamic Surface is animatronics or something closer to robotics. From one point of view it is a sort of modular robotic pixel reacting to certain inputs under certain conditions. From another point of view it is a modular animatronic object - reminiscent of Warhol's famous Campbell's soup can - changing its height smoothly and precisely. Indeed, animatronics is usually understood as a robotic device that mimics human gestures: a puppet, a moving object built from elements that are usually static or created for different usages.

From Wikipedia the definition of animatronic is:

Animatronics refers to the use of robotic devices to emulate a human or an animal, or bring lifelike characteristics to an otherwise inanimate object.

So, what is the Dynamic Surface?

The Dynamic Surface is a series of modular physical pixels - named m-Pix - assembled together in rows or matrices, creating flexible and reactive surfaces. Due to its modular architecture, involving both the hardware construction and the electronics, there are virtually no limits to expanding this device, which can easily be controlled by a small SBC like the Raspberry PI.


We can also think of this compound structure as a Robotics POP modular dimensional display.

The design details and simulation are described in PiIoT - The perfect reading place #19 [tech]: Dynamic surface, design and simulation



I should mention the MuZIEum, which has partnered on this project. The Dynamic Surface will be installed together with the other parts of this Internet of Things project inside the MuZIEum site, to be available and playable by the visitors starting from the first days of December 2016. A special thank you to the project manager Carlijn Nijhof: she trusted the idea when it was just an idea, and a very difficult one to explain.

Another great thank you to the second sponsor that, together with the main sponsor Element14, has contributed to the project by providing the 100 stepper motors and controllers needed for the entire project.

I can't forget to mention shabaz, who suggested how to fix the levers to the motor: with hot air. This simple tip saved me a lot of time and proved to be a stable and reliable solution.


A note on the Dynamic Surface modules (64 modules)

This m-Pix prototype will be used to develop the software. The 64 Dynamic Surface modules will be produced during September, as the final characteristics - surface, color and container - are discussed with the MuZIEum staff to integrate the components in accordance with the site's style, colors and environment.

We should take into account that the project is constrained by some important parameters: the Dynamic Surface will be presented to the visitors also by visually impaired personnel. The main goal is demonstrating how non-visual perception can be enhanced and improved by IoT technologies. Alternative user interface methodologies can change and empower the interaction between humans and digital machines.


3D printing the parts

A full m-Pix structure is composed of 10 parts.

DigitalSurface Prototype 27.jpg

The above image shows seven finished elements: the three parts of the moving cylinder, the support and the motor and levers holder. Below we see in detail the printing process of these parts.


The cylinder and the levers joint

DigitalSurface Prototype 51.jpg DigitalSurface Prototype 50.jpg

DigitalSurface Prototype 44.jpg DigitalSurface Prototype 45.jpg

The images above show four stages of printing the top, bottom and central parts of the moving cylinder. These parts should be lightweight and are not subject to particular mechanical stress; to reduce printing time and weight, these three parts use only 20% infill with a 0.6 mm external skin thickness. This is sufficient to refine the surface after printing without consuming too much material.

The choice to make the cylinder in three parts simplifies the assembly with a more flexible 3D printing strategy.

DigitalSurface Prototype 54.jpg DigitalSurface Prototype 55.jpg

DigitalSurface Prototype 56.jpg DigitalSurface Prototype 42.jpg

The above sequence shows four steps of the 3D printing of the support. This part acts as a guide for the moving cylinder and fits inside it; this makes it possible to use little material (again 20% infill) while guaranteeing good positioning of the part: the cylinder moves vertically and is kept from rotating, without generating friction.

DigitalSurface Prototype 37.jpg DigitalSurface Prototype 38.jpg DigitalSurface Prototype 39.jpg

Above, the levers joint. It is a small piece, but it has to connect the motion levers; it is 3D printed with 100% infill and is glued inside the bottom cover of the cylinder.


The motor support

The 3D printed motor support - including the end-stop switch support - is built in two parts. The reason is the same: speed up the 3D printing with a lower infill while still making a robust component.

DigitalSurface Prototype 33.jpg

The base of the support (the rightmost part in the above image) is fairly large and includes the switch support. The motor holder is kept separate and will be glued onto the base inside the engraved area. This gives perfect positioning and the option to rotate the holder for last-minute adjustments.

These parts are 3D printed with a 30% filling.


The m-Pix base

This is the main structure support that will hold the motor and the moving parts.

DigitalSurface Prototype 21.jpg DigitalSurface Prototype 14.jpg DigitalSurface Prototype 15.jpg

DigitalSurface Prototype 16.jpg

As shown in the 3D printing sequence above, in this case too the infill density is only 25%. By gluing the motor supports in the engraved area visible in the fourth image, the movement forces act on the structure in a direction aligned with the infill support. The base also remains somewhat flexible, to compensate for unexpected mechanical stress when many modules are assembled together.


Assembling the components

DigitalSurface Prototype 12.jpg

The parts are glued together using a product well suited for PLA and PVC. The first step is assembling the bottom of the cylinder with the lever joint, which should be glued internally; then the cylinder can be closed with the top cover.

DigitalSurface Prototype 22.jpg DigitalSurface Prototype 17.jpg

The images above show the cylinder bottom cover with the joint glued in place. Note that the cover has a relief circle to keep it in position on the cylinder body.

Now the cylinder can be closed with the top cover as shown in the images below.

DigitalSurface Prototype 18.jpg DigitalSurface Prototype 19.jpg DigitalSurface Prototype 20.jpg

The next step is to glue the base with the motor support and the cylinder guide.

DigitalSurface Prototype 9.jpg DigitalSurface Prototype 10.jpg

DigitalSurface Prototype 11.jpg DigitalSurface Prototype 5.jpg

Assembling levers and motor

Every module uses a 28BYJ-48 geared stepper motor driven by an L298-based motor controller. The controller will be wired externally to the m-Pix, while the motor is fixed in the motor support glued to the base. In addition, an ultra-subminiature micro switch by Omron is used as an end-stop, allowing the m-Pix to self-position at its lowest point. The datasheets of the components are attached below. The images below show the motor and the linear motion transducer levers. The prototype is complete and all the parts work correctly. The next step will be controlling the movement.

DigitalSurface Prototype 4.jpg DigitalSurface Prototype 6.jpg

DigitalSurface Prototype 1.jpg DigitalSurface Prototype 3.jpg

About the levers motion transducer

The detail images below show the geared stepper motor connected to the transducer levers, acting like a camshaft. Thanks to shabaz's suggestion, the easiest way to fix the first lever to the motor shaft was to 3D print the collar about 0.25 mm undersized and then fit it on using hot air. The motion has no impact on the lever locking: the motor shaft only applies torsion to the component, so it remains in place without risk.

DigitalSurface Prototype 34.jpg DigitalSurface Prototype 35.jpg DigitalSurface Prototype 36.jpg

One thing I forgot to show you earlier is how to install Z-Wave devices into your house. In this post I'll install a dimmer module in the bedroom, which will be part of the wake up light.


Materials used

The materials I'm going to use are the following:

Used materials

  • Fibaro Dimmer 2 – the actual Z-Wave dimmer
  • Fibaro Bypass 2 – needed when using LED lights
  • Philips Dimmable LED – 6W/470lm (comparable to 40W), warm white
  • Jung 535 EU – 2 push buttons
  • Some electrical wire


Installing the dimmer step-by-step

Now that we have all materials, let's start the installation. There are two possibilities for connecting the dimmer, depending on whether you have 2 or 3 wires available. In our case we just have 2 wires available behind the outlets, so we'll use the following wiring diagram:

Wiring Diagram


We'll go through the installation step-by-step.


0. Cut the power

For your own safety always cut the power when working on your lights. Find out in which group your light is placed and disable the power for this group.


1. Prepare the dimmer

Prepare the dimmer and push buttons by connecting them together. It's best to do this now: while you're not yet working in the wall, you have plenty of room.

Prepare the dimmer


2. Install the dimmer

Now install the dimmer and buttons in place and connect the live and switch wires. Push everything inside to check if it fits, but don't close it with screws yet.

Install the dimmer


3. Install the bypass

As we're using an LED light bulb we need a bypass. This makes it possible for the dimmer to function at low loads. It's best to install the bypass close to the light itself, so we'll install it above the light. (In my case this is also the only place where it fits, as space behind the buttons is very tight.)


Installation is easy as you just have to connect both ends to either side of the bulb (direction doesn't matter). Here I'm using a screw terminal. The black and blue wires will go to the light itself.

Install the bypass


4. Connect the light

Now connect the light itself and screw in a light bulb.

Plug in light bulb


5. Include the light in Z-Way

The wiring is ready. You can now enable the power again. To use the light we have to add it to the Z-Wave controller: Z-Way. For this, go to http://server-ip:8083 and then move to Settings > Devices. Here you click the Add new button, then the Start inclusion button. Your controller is now in inclusion mode. To include the new dimmer you have to activate it; for this dimmer you can triple-click the button connected to S1.

Z-Way inclusion


After a while inclusion should be finished and you can test the light!


In my case the interview part of the inclusion could not finish successfully. This is a known problem with this dimmer and Z-Way and hopefully will be fixed soon. The basic features of the device are luckily working as they are supposed to.


6. Test!

And there was light:

And there was light!


By clicking the S1 button you can turn on and off the light. By holding it you can dim it. A nice surprise was that this is the first combination of dimmer/LED which doesn't make any noise while dimming!


7. Finish up the buttons

Now that we verified the light works, the only thing left is finishing up the buttons. Screw it on its place and click the buttons in their socket.

Install buttons


Add the new light to Thuis

The physical part is done, but we of course want to use the light within Thuis. For this we have to do a few steps as well. As described in Publishing activity from Z-Way to MQTT we name the device and add it to a room. We also add the tag mqtt as that's how I configured the MQTT app for Z-Way. The light is now controllable through MQTT.


Next is to add it on the Java side in the Core. This is described in Core v2: A Java EE application. We add the device by adding a line to the Devices class:

public class Devices {
     public static MqttDimmer bedroomMain = new MqttDimmer(bedroom, "main");
}

The bedroom light is now ready to be used in any rules, or to be controlled from other devices such as the iOS app.


The Dynamic Surface is another moving subproject that is part of the PiIot design. It represents an independent moving platform: just as the PiSense HAT includes an 8x8 RGB LED matrix, the Dynamic Surface is a physical 8x8 matrix built with big moving pixels. The video below shows a rendered simulation of the assembly design: an example of a modular Dynamic Surface built with a set of 81 modules.



Design of the parts


As shown above, the moving pixel is built from ten pieces that together form an m-Pix. A single m-Pix should be self-contained so it can be assembled into a matrix platform without empty spaces, creating the floating-surface effect. The rendering of the matrix simulation is shown in the image below.


Design requisites

Every module should adhere to the following requisites:

  • Self-contained: modules should be placed side by side in rows, so the mechanics and the motors should be no larger than the m-Pix diameter (8 centimetres)
  • Lightweight: the moving cylinder should be as light as possible to reduce the effort on the motor and the overall weight of the structure
  • Compact and robust: this is mostly a matter of choosing the right 3D printing structure, as the influencing parameter is the solid infill percentage
  • Easy to wire: the modules should make the motor wiring easy, row by row and column by column
  • Easy to assemble: the module parts should be easy to assemble, also when a considerable number of units is used
  • Self-positioning: every module should have an end-stop switch to identify the lowest point when the platform is powered on or reset.


Designing the assembly sequence

As I have written many times before, in my opinion the most important step when creating a 3D printed object is the design. Especially when moving parts are involved, it is in the design phase that we can create the right solution, always considering the limits and advantages of 3D printing technology. The sequence of images below shows the simulation of the parts that make up the entire m-Pix module:


Image above: the motor and the base support (here in the horizontal view) act on the cylinder, while the stabilisation support is internal; this saves a lot of space while granting good stability to the moving element.

Removal03.png Removal02.png


Above images: making the moving cylinder in three separate parts saved a lot of time and made things easier. With the stabilisation support built into the moving component, the cylinder can be the largest element in the module. That is just what we want.


The linear motion transducer

Another important part of the design is how we convert the stepper motor rotation to linear movement:

Exploded01.png Exploded03.png

Exploded02.png Exploded01.png

As shown in the exploded rendering above, the adopted solution is a lever system working like a camshaft. Due to the reduced space and the low power available, here we are using a geared stepper motor, which has the disadvantage of moving relatively slowly compared to traditional, more powerful steppers. Still, there are many advantages to these devices: reduced size, low power consumption, good kg/cm torque (thanks to the geared engine) and good positioning precision. For our solution we don't need a strong force, but the movement should be fluid and a bit faster than the motor shaft's maximum rotation speed alone would allow. This is the reason the camshaft-like lever multiplies the speed a bit.
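To get a feel for the rotation-to-linear transduction, the lift of a simple crank-style lever can be sketched numerically. A minimal sketch with hypothetical dimensions (not taken from the actual m-Pix drawings):

```python
import math

def cam_lift(r_mm, angle_deg):
    """Vertical displacement of a simple crank of radius r_mm
    rotated by angle_deg from its lowest position.

    Illustrative geometry only: lift = r * (1 - cos(angle)).
    """
    return r_mm * (1 - math.cos(math.radians(angle_deg)))

# A hypothetical 15 mm lever gives a 30 mm stroke over half a motor turn
lift_half_turn = cam_lift(15.0, 180.0)
```

This also shows why the lever "multiplies the speed": a full stroke is completed in half a shaft revolution, so the cylinder covers its travel faster than the slow geared shaft alone would suggest.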


Providing an easy wiring method

Simulation04.png Simulation01.png

Look at the images above. The rendered simulation of a few rows of modules shows how the motor wires can be connected to their respective motor controllers. The controllers and the PSoC 4200 array can easily fit on one side of the assembled platform.

The modular matrix can be replicated in multiple units connected together without difficulty. Now we are ready to build the first prototype and see it in reality!


In order to be able to visualise the home control interface on the touch screen, a browser is required. The resolution of the touch screen is limited to 800x480, so every pixel counts. By putting the browser in full screen mode and hiding all the navigation bars, maximum space is made available. This is often referred to as "kiosk mode".





Rick has already demonstrated how to put the stock browser "Epiphany" in kiosk mode. In order to try something different and be able to compare with Rick's solution, I decided to use the Chromium browser instead.


Chromium is not available in the default repositories, but according to this thread it can be sourced from the Ubuntu repositories and installed on Raspbian Jessie.


First, add the new source:


pi@piiot1:~ $ sudo nano /etc/apt/sources.list.d/chromium-ppa.list

deb vivid main


Apply the key to verify the downloaded packages:


pi@piiot1:~ $ sudo apt-key adv --keyserver --recv-keys DB69B232436DAC4B50BDC59E4E1B983C5B393194


Update your package list and install chromium:


pi@piiot1:~ $ sudo apt-get update
pi@piiot1:~ $ sudo apt install chromium-browser


Test the installation by launching the browser. I tried it via SSH and got the following error:


pi@piiot1:~ $ chromium-browser
[16670:16670:0818/] Gtk: cannot open display:


To solve this issue, specify which display the browser should use (the touch screen):


pi@piiot1:~ $ chromium-browser --display=:0
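As an alternative sketch, the DISPLAY environment variable can be exported once, so every GUI command launched from the same shell targets the touch screen without repeating the switch:

```shell
# Point all X clients launched from this shell at display :0
# (the touch screen); equivalent to passing --display=:0 each time.
export DISPLAY=:0
# chromium-browser   # would now open on the touch screen
echo "$DISPLAY"
```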


Tadaaa! Chromium is installed and running on Raspberry Pi.




With Chromium installed and executable, let's take a look at some interesting switches. Switches are command line parameters that can be passed when launching Chromium, altering its behaviour and/or appearance.


For my application, these seemed like the most relevant switches:

  • --display: specify the display to launch the browser on
  • --kiosk: enable kiosk mode, full screen without toolbars or menus
  • --noerrdialogs: do not display any error dialogs
  • --disable-pinch: disable pinching to zoom
  • --overscroll-history-navigation: set to 0 to disable swiping left and right to navigate back and forth between pages


Launching the full command can then be done as follows:


pi@piiot1:~ $ chromium-browser --display=:0 --kiosk --noerrdialogs --disable-pinch --overscroll-history-navigation=0
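To have the kiosk come up automatically with the desktop, the same command can be added to the LXDE session's autostart file. This is a sketch assuming the stock Raspbian LXDE session; the autostart path may differ on other setups:

```shell
# Append the kiosk command to the LXDE autostart file if not already there.
# The leading '@' tells lxsession to relaunch the command if it crashes.
AUTOSTART="$HOME/.config/lxsession/LXDE-pi/autostart"
mkdir -p "$(dirname "$AUTOSTART")"
grep -qs 'chromium-browser --kiosk' "$AUTOSTART" || \
  echo '@chromium-browser --kiosk --noerrdialogs --disable-pinch --overscroll-history-navigation=0' >> "$AUTOSTART"
```

After a reboot, the desktop session should launch Chromium straight into kiosk mode.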




At startup, the Chromium browser is launched with different tabs. Because of kiosk mode these tabs are not visible (and can't accidentally be closed either). In order to navigate between them and refresh their content, we need to know how to simulate the correct keypresses that trigger the tab switching.


This is done as follows:


pi@piiot1:~ $ xte "keydown Control_L" "key 3" "keyup Control_L" -x:0 && xte "key F5" -x:0


What this does is switch tabs by simulating the "CTRL + <TAB_ID>" combination, optionally followed by an "F5" refreshing the selected tab.
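The key combination can be wrapped in a small helper so any tab can be selected by number. This is a sketch: xte comes with the xautomation package, and :0 is assumed to be the touch screen's display.

```shell
# switch_tab N: jump to Chromium tab N on display :0 and refresh it.
switch_tab() {
  local tab="$1"
  # Hold CTRL, press the tab number, then release CTRL again...
  xte "keydown Control_L" "key ${tab}" "keyup Control_L" -x:0
  # ...then send F5 to reload the newly selected tab.
  xte "key F5" -x:0
}
# switch_tab 2   # example: select the second tab and reload it
```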




In order to implement this tab switching functionality, I'm using the 4x4 button matrix called Trellis, which I introduced in my previous post. It connects to the I2C pins and requires two software libraries to be installed.


On the hardware side, nothing fancy: connect the Trellis to the I2C pins and power it via the 5V pin: