
A special case

The position and motion sensors of the SenseHAT on the Raspberry Pi 3, together with the 8x8 RGB display, are the perfect tools to build the interactive controller interface for an IoT project.

To make the device easy to control by any kind of user, the hardware needs a case. The form design and the striking bright yellow colour harmonise with the site where the installation will be placed; it is also an easily identifiable object for visually-impaired users.

The following images show the rendered design of the case hosting the Pi 3 and SenseHAT.

Interface01Rendered.png Interface02Rendered.png Interface03Rendered.png

Interface04Rendered.png Interface05Rendered.png Interface06Rendered.png

The design has been split into two halves to house the hardware elements, as shown below:

Interface07Rendered.png Interface08Rendered.png

The top and bottom parts will be fixed with four Parker screws. Based on the first tests, the internal air circulation should be sufficient to cool the Raspberry Pi 3.

3D printing and assembling


The 3D printing takes about 5 hours in total with a 0.4 mm nozzle and a layer thickness of 0.2 mm. The case will not be stressed mechanically, so an internal fill of 25% with 0.6 mm walls is sufficient. As shown in the image above, most of the printing time is due to the full internal support needed for the empty spaces.

IMG_20160731_105236.jpg IMG_20160731_105304.jpg

The image above shows the "kit" components ready for assembly; note that the open window for the SenseHAT display will be sealed with a 2 mm opaline acrylic frame, which gives a nice effect and improves the display readability. The internal hardware does not need to be fixed with screws: a 5 mm antistatic foam bed is sufficient, pressing the internal assembly in place when the screws are tightened.


{gallery} Assembly steps


The components ready for assembly


The top side with the Raspberry PI3 fit in place.


The case ready to be closed


Closing the case with the four screws

I’ve been busy with a few other things, which is why it has been quiet on the IoT front. As I have also just passed the halfway point of the blog posts, I thought this would be a good time to reflect on the plan and report the latest status. In this blog I’ll go through all projects/libraries and use cases presented in the plan and show their status. Some to-dos are marked in italic: these might happen after the challenge deadline.


Open Source Projects

These are the projects that I’m making available as open source on my GitHub account during this challenge. They are all set up so that they can be reused by others in their own home automation projects.




Z-Wave

As described in Publishing activity from Z-Way to MQTT, messages are published for each status change. The module also subscribes to the corresponding topics, so you’re able to turn devices on and off through MQTT as well. The topics and devices used are completely configurable. With this, all major functionality is done.



  • Publish a message on scene activation (e.g. used for each secondary push button on the wall)
  • Get it published on the Z-Way App store, already uploaded June, but still no response
  • Publish energy usage


Zway-MQTT on GitHub
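To illustrate what such a configurable topic mapping can look like on the messaging side, here is a small sketch in plain Python. The topic template, room and device names are hypothetical examples, not the actual Zway-MQTT configuration:

```python
# Hypothetical sketch of Zway-MQTT-style topic mapping: a status topic is
# built from a configurable template, and the module subscribes to a
# matching command topic so devices can be switched through MQTT as well.

TEMPLATE = "home/{room}/{device}"  # hypothetical, configurable template

def status_topic(room, device):
    """Topic on which status changes for a device are published."""
    return TEMPLATE.format(room=room, device=device)

def command_topic(room, device):
    """Topic the module subscribes to for on/off commands."""
    return status_topic(room, device) + "/set"

print(status_topic("livingroom", "ceiling-light"))   # home/livingroom/ceiling-light
print(command_topic("livingroom", "ceiling-light"))  # home/livingroom/ceiling-light/set
```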




Chef

To make sure I can use Chef to fully install the Raspberry Pi 3, I needed to update a few recipes, but also create a completely new one for Z-Way. This was a major hurdle and took more time than expected, but I learned a lot from it. In Cooking up the nodes: Thuis Cookbook you can learn more about it.


Chef-Zway on GitHub





Plex

Plex doesn’t allow me to add a plugin directly to the server, but there is an API and WebSockets for status messages. For now only some research has been done and some models set up. It will be implemented as a Java library, likely with a similar setup to the one I’m using for integrating Java and MQTT.



  • Set up projects
  • Implement models and client code for API
  • Implement models and client code for WebSockets
  • Forward events through CDI and MQTT




The library for using CEC (Consumer Electronics Control) in Java was developed about 10 months ago and already performs the most common functionality (monitoring standby status, turning devices on/off, changing volume and changing outputs). However, it still has to be integrated with the remainder of the system.



  • Integrate library with Core
  • Forward messages through MQTT


CEC-CDI on GitHub




interfacebuilder.png

The work on the MQTT UIKit for iOS has just started and will be the subject of the next blog post. The goal is to provide reusable UI elements which update their status by subscribing to an MQTT topic and are able to send messages as well.



  • Implement several UI elements (button, slider, info)
  • Integrate them with MQTT
  • Build an app around them


Use Cases


Light when and where you need it



Sensors are placed in both the kitchen and the entrance room. The Core knows about them, and as described in Core v2: A Java EE application, rules are defined to turn the lights in those rooms on and off depending on movement and time. This works pretty well already!



  • Further optimize the rules
  • See if improvements can be made by using iBeacons
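As a rough illustration of what such a rule boils down to (the actual Core v2 implementation is a Java EE application; the hours below are made-up assumptions):

```python
# Toy version of a "light when and where you need it" rule: the light goes
# on when motion is detected and it is dark. The dark hours are illustrative.

def should_light_be_on(motion_detected, hour):
    dark = hour < 8 or hour >= 20   # assume darkness outside 08:00-20:00
    return motion_detected and dark

print(should_light_be_on(True, 22))   # True: motion in the evening
print(should_light_be_on(True, 12))   # False: motion at midday
```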


Welcome home


The implementation of this use case mostly depends on the iBeacons being in place. As they have finally been delivered, I can start setting them up.



  • Set up iBeacons
  • Integrate them with the iOS app (which will publish MQTT messages about their current location in the house)


Home Cinema


Home Cinema

The Z-Wave hardware for the home cinema is in place (using a 6-socket PowerNode), so the devices can be turned on and off. As mentioned above, the Plex and CEC libraries are work in progress. Once these are ready, we can make the full integration.



  • Finish Plex and CEC libraries
  • Set up a Raspberry Pi for the CEC communication
  • Integrate them with the Core
  • Add and integrate a DIY ambilight


Mobile & On-The-Wall-UI


iPad on the wall

Work on the iOS UI elements has just started. Further development of the iPhone and iPad apps depends on this. The iPad is already mounted on the wall though!



  • Finish MQTT UIKit
  • Create an app for the dashboard (both mobile and On-The-Wall)
    • Provide basic actions (turning on/off device, triggering scenes and providing information)
    • Show the current movie playing in Plex and pause/start
    • Add speech commands
  • Create a custom app for the kitchen (either iPad or web)


Wake-up light


Work has not started on the wake-up light, mainly because one of the required components (the MOVE) has not been delivered yet. As it’s an Indiegogo project, it’s not certain when it will arrive, so I’m not counting on it for the duration of the challenge. I will start on the wake-up light with only the bedroom light gradually turning on.



  • Sleep Cycle doesn’t have a web hook available yet, so it’s still needed to set up a Philips Hue bridge
  • Install Z-Wave dimmer in the bedroom
  • Install and integrate the MOVE
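The fallback of gradually turning on the bedroom light can be sketched as a simple ramp of dimmer levels. The 0-99 range (common for Z-Wave dimmers) and the step count are illustrative assumptions, not the final implementation:

```python
# Sketch of a wake-up ramp: evenly spaced dimmer levels from off (0) to
# full (99), to be sent to the dimmer over the wake-up interval.

def wake_up_levels(steps):
    """Return `steps` evenly spaced dimmer levels from 0 up to 99."""
    if steps < 2:
        return [99]
    return [round(i * 99 / (steps - 1)) for i in range(steps)]

print(wake_up_levels(5))  # [0, 25, 50, 74, 99]
```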


Manual override


Wall switches

Most lights can already be switched manually using the buttons on the walls. Some of them, however, should be switched using the secondary button, which triggers a scene activation. I still have to add support for this to Zway-MQTT.



  • Add support for secondary buttons in Zway-MQTT


Energy monitoring & saving


For energy monitoring I have only done some research so far. InfluxDB seems to be a good candidate for storing the data. As time is running out, I’m not sure if I’ll be able to fulfill this use case.



  • Let Zway-MQTT publish energy usage
  • Integrate YouLess to record total energy usage of the house
  • Create reports based on the usage



Up to now I have mostly set up infrastructure and backend code. It's now time to really start implementing the selected use cases; this is what I will focus on in the upcoming weeks. Although there is only one month left until the deadline, I'm confident I can implement most of them on time!

Today, while the family was celebrating my 47th successful trip around the Sun on our fine planet Earth, we saw a sign advertising an interesting addition to our Farm. Not only does this addition have great potential to maintain the Farm, but its ability to recycle trash materials and create bio-usable materials is a great plus as we expand our gardening!


An issue we have been running into in going from 1/4 acre to 5.5 acres is that the little push lawnmower just doesn't do an adequate job. As such, we have been keeping our eye out for a good replacement. Now, a decent tractor with a mower option would be awesome, but the Farm budget currently does not allow for that.


Instead we were intrigued by Gastro Ocular Agro Terminators. These stomachs with built-in ocular guidance can quickly turn our agricultural waste (weeds/brush) into viable growing material! Their appeal even broke through to our most hard-core technical child, causing him to turn off his tablet and exit the vehicle to get some hands-on time with these G.O.A.Ts.


BraedanbabyG01.png  This child believes every problem has a solution, usually a technical one involving Legos or robotics, but he quickly decided G.O.A.Ts would be an appropriate addition to our Farm.


After much discussion between the Mrs and me, mainly revolving around how I now have more enclosure/animal areas to incorporate into the IoT Farm, we eventually left the area with one full-sized G.O.A.T. and 2 mini G.O.A.Ts working on upgrades. To be fair, these are Pygmy G.O.A.Ts, so even compared to the Dwarf size they are pretty small, and we were able to purchase a carrier and load them into the family transport to head back out to the homestead!


20160730_132842.jpg The Chevy Traverse model we use advertises an 8-person capacity; luckily, by dropping the rear 2-person bench we were able to add a 3-G.O.A.T. capacity as well, leaving enough space for the 5 human passengers to ride back in comfort. Minus all of the neck pain incited by everyone trying to see the babies in the back.


Of course, our new G.O.A.Ts were kind enough to show us how quickly they can recycle feed material into tiny bio-pellets as they exited their transport. It seems that motorized transport causes them to switch into pellet creation at a respectable pace! Happily, the pellets are much easier to clean up and move compared to the product that the Chickens create...




Having successfully traveled to the Homestead, the 3 were quickly placed into what had previously been a dog enclosure. Since canines generally don't eat weeds/shrubs/bushes or other such material, the G.O.A.Ts were quite impressed with the buffet choices offered in just this one area.


babygoats01.png  After some fun time exploring, and realizing that the mini G.O.A.Ts were able to slip through the fencing, they decided to take a rest in the shade and overwhelm all of the humans in the area with a self-defense shield of cuteness.



This seemed to be such a good idea that even the full size G.O.A.T. joined in and our new Farm additions decided to take a nap.


This addition has greatly increased my desire to add a picture/video setup to the IoT Farm.


Anyone have any other suggestions to incorporate IoT with these guys?  Please leave suggestions in the comments!  Thank you!

English translation below. Full article link: Il quadro che parla ai non vedenti ha un cuore elettronico trofarellese – CentoTorri

Credits: Sandra Pennacini


Screen Shot 2016-07-30 at 11.29.03.png


The 'electronic heart' of the picture that speaks to the blind comes from Trofarello

Nowadays, technology permeates our daily life in every aspect, even though not always for the better.

However, some experts employ their technical and IT knowledge to invent and build items and devices useful in everyday life. The "Internet of Things" is the name of the "science" dealing with the creation of environments, in the widest meaning of that word, designed to improve the daily lives of the people living in them. Basically, it is about connecting everyday objects through computer technology in order to synchronise and automate them. For example, think of "smart" house lighting, washing machines, air conditioners and so on, which can be remote-controlled and, in some cases, can automatically carry out "actions".

That is not a totally new concept, albeit constantly developing.

The latest innovation comes from a local mind, Enrico Miglino (born in Trofarello but living abroad, though always in touch with his homeland), who presents a new concept aimed at helping people with visual disabilities.

Miglino, please explain to us what it is.

"I'm carrying out a project that will transform the way we usually approach technology. That is, it will not be the user who controls the environment; on the contrary, the environment will adapt itself to the user's abilities, enabling them to communicate or experience something they otherwise could not access."

In a little more detail, please?

"We are building an auto-adaptive environment, applicable to all contexts in which users are disabled; specifically, visually impaired. The Internet of Things project is divided into several modules, including one particularly focused on contemporary art. Thanks to the collaboration with the artist Lorenzo Merlo, we took one of his artworks as the starting point and identified its essential and characteristic elements. From an original visual artwork, through various steps, we are building a frame able to detect the presence of visitors and invite them to approach further and touch it, thus creating a visual and perceptual experience through which the artwork acquires depth and spatiality."

A picture that 'speaks' and can be touched? A "live" picture?

"Exactly. The original artistic creation by Lorenzo Merlo, supplemented and amended with technology, will become "alive". It will be able to sense the presence of someone nearby and invite that person to observe and touch it. The artwork will be perceived three-dimensionally, allowing its "vision" even to the blind. All this while dynamically adapting itself to the different needs of the beholder."

How was the idea born? And what kind of support did you get to carry it out?

"The idea of focusing the scope of the project on the visually impaired was inspired by the "MuZIEum", a museum located in Nijmegen, in the Netherlands. In this particular museum, through a participating experience for (sighted) visitors, they try to knock down, or at least minimize, the prejudices and misconceptions surrounding the visually impaired. The contribution and support I received from the project manager of this initiative, Carlijn Nijhof, was crucial for this project.

The idea could become reality thanks to the funding by, which provided much of the electronic equipment (components, microcontrollers, microcomputers, sensors, touch screen, etc.), and thanks to the second sponsor,, which provided all the devices I needed for the movement elements. That was critical support, considering that the complete work uses about twenty sensors, a sound system, three microcomputers, a dozen microcontrollers and about 100 electric motors."

Where and when will it be possible to admire, or perhaps I should say "live", the result of this project?

Both the original artwork by Lorenzo Merlo and the “technological version” will be donated to the museum along with the rest of the project components.

We are still in the process of scheduling a date to present this work at an event, currently expected around next October.


The PiIoT project requires an interactive user interface supporting some special features. It is an unconventional one, combining the most common user-interaction approach with an unusual behaviour. We identify two cases: case A and case B.

Case A

The user interacts with the interface and the connected system reacts with direct feedback. A typical example is a push-button with LED feedback. In this case the user can predict what kind of feedback will happen when he interacts with the system.

Case B

The user does not necessarily intend to interact with the interface, but may do so anyway. This happens every time we press the wrong button on the keyboard (a typo), click the mouse on the Cancel button instead of Confirm, and so on.

Case C?

Following our vision there is still a third condition, case C: the system detects the user's presence, then notifies the user, telling him where he is and what he can do. This may be very helpful for a visually-impaired user, as well as a different perceptive experience for a non-visually-impaired user. To do this, the common interface and feedback approach should be totally revised and, in some aspects, inverted.


Explaining the case "C"

Abstracting the concept to a generic user (no matter whether he is a sighted visitor, a visually-impaired visitor or a blind person), as soon as the subject is detected the system speaks to him, inviting him to interact through a colourful, fast-reacting interface. Even if the subject can't see the interface, the controller is still able to respond: the user's actions are processed and sent to the interface control process. The user can follow a fast and easy self-learning path based on the system's suggestions, his actions and his gestures.
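One way to picture this inverted interaction flow is as a small state machine: presence triggers the system's greeting, and only then are the user's gestures processed. The state and event names below are illustrative, not taken from the project code:

```python
# Toy state machine for the "case C" flow: detect presence, greet the
# visitor, then process gestures until the visitor walks away.

def next_state(state, event):
    transitions = {
        ("idle", "presence"): "greeting",     # a sensor sees a visitor
        ("greeting", "spoken"): "listening",  # the voice invitation finished
        ("listening", "gesture"): "feedback", # the user interacts
        ("feedback", "done"): "listening",
        ("listening", "timeout"): "idle",     # the visitor walked away
    }
    return transitions.get((state, event), state)  # unknown events are ignored

state = "idle"
for event in ["presence", "spoken", "gesture", "done", "timeout"]:
    state = next_state(state, event)
print(state)  # idle
```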


A considerable part of this highly interactive interface is based on a Raspberry Pi 3 mounting a SenseHAT. The unit will include other components that will be developed and put together, but the SenseHAT remains the most important part of the interface. The video below shows the first software prototype in action, as explained in the next chapters.


Sensors, joystick and display


Temperature, humidity and barometric pressure

This group of data will not be collected and logged, given its minor importance, but it is useful environmental information provided to the user on demand.


Compass

The compass will be monitored continuously at a low frequency, about one check every second, to detect whether the entire installation has been moved from its place. It is expected that the installation - assembled on a semi-mobile structure like a table - can be freely accessed by any visitor, but should remain in place.


Accelerometer

The Raspberry Pi + SenseHAT group is mounted on an elastic support oscillating on the base. The accelerometer is used to measure the speed of the oscillation (corresponding to the force with which the user moves the module).

Three-axis inclinometer

The three-axis inclinometer will detect the direction and tilt applied to the module by the user, activating several kinds of feedback.

The joystick

The user will be notified of the presence of the joystick, accessible from the interface module. It will be used to get instant spot information, e.g. the environmental state, to reset a condition, and more.


Software approach


Before starting to develop the interface, we should consider that the different features of the SenseHAT device will definitely work as concurrent background tasks. The easiest but least efficient approach is creating a set of meta-commands to execute on startup. This would use a considerable amount of machine resources and make it difficult to control the different tasks; the components should work concurrently, but not independently.

The second consideration is the availability of a good low-level library to control the hardware, already interfaced with Python. The Python language has a lot of high-level features, making it the perfect partner to develop complex and optimised software to run on SBCs like the Raspberry Pi, especially when we can count on the good performance of the underlying C/C++ libraries.

To make an efficient and flexible interface, we should develop our interface controller with a multithreaded approach.
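A minimal sketch of this multithreaded approach, using only the Python standard library: each component runs in its own thread and reports events through a callback, so components work concurrently while the main program keeps control. The readings list below stands in for a real SenseHAT component:

```python
# Sketch of the threading pattern: a worker thread dispatches each event
# to a callback registered by the main program. SensorThread and the
# sample readings are illustrative stand-ins for real SenseHAT parts.

import threading
import queue

class SensorThread(threading.Thread):
    def __init__(self, readings, callback):
        super().__init__()
        self.readings = readings
        self.callback = callback

    def run(self):
        # Dispatch every reading to the registered callback
        for value in self.readings:
            self.callback(value)

events = queue.Queue()
worker = SensorThread([20.5, 20.7, 21.0], events.put)
worker.start()
worker.join()
print(events.qsize())  # 3
```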


A custom version of the SenseHAT python interface

The native SenseHAT Python library included with the Raspbian Jessie distribution is not the latest version; the updated python-sense-hat is available on GitHub, together with its API documentation. Even this, however, is not sufficient for our purposes.

We need to include some specific classes supporting Python multithreading and some features specific to the project. Instead of updating the original sense_hat Python package, version 2.2.0 has been included in the project with some customisations. The new classes have been added to the sense_hat package (and others will be added in the future), while the original ones are left unchanged for compatibility with the standard version.


Note: to create a new package in Python, the file __init__.py should be present in the package root. In normal usage this file is left empty, but in our case we keep the already existing file from the original sense_hat library version 2.2.0.


The new Python classes


Installing the evdev package

The new Python classes, as well as the standard SenseHAT classes version 2.2.0, need the evdev package to work correctly with the joystick. To install this extra package we need the Python package installer pip on the Raspberry Pi. Below are the two commands to install pip and its prerequisites on Raspbian Jessie:


$ sudo apt-get install python-dev python-pip gcc
$ sudo apt-get install linux-headers-$(uname -r)


After pip has been installed, it is possible to proceed with installing evdev:


$ sudo pip install evdev


The evdev package binds the kernel userspace character devices, usually listed in /dev/input, exposing them to Python; this allows Python programs to read and write input events in the Linux environment. As a matter of fact, we just need to be sure that this package is installed in the system, but if you are interested in how the package works, the API documentation is available online.

More details on the usage and methods of the iot_sense_hat classes can be found in the source documentation. The latest version of the iot_sense_hat package and the example program are available on GitHub:

alicemirror / PiIoT_SenseHAT


Class: DisplayJoystick


from iot_sense_hat.display_joystick import DisplayJoystick


A helper class showing the joystick position on the display


handle_code(code, color)

Show the joystick position code on the SenseHAT display in the selected color. The color value is expressed in the format [R, G, B]


Class: IPStuff


from iot_sense_hat.utility import IPStuff


Class to get hostname and IP address of the Raspberry PI.

Note: the utility package is under development and will be updated with other utility classes specific to manage the IoT SenseHAT interface



getHostName()

Return the string with the hostname.


getIP()

Return the string with the current IP address.


Class: HatRainbow


from iot_sense_hat.hat_rainbow import HatRainbow


SenseHAT general purpose rainbow generator.

Note: this class will be updated with several different kind of color sequences specific for the IoT interface management



Generate the rainbow screen, calculating the next colour for every pixel.


Class: EnvironmentStatus


from iot_sense_hat.hat_manager import EnvironmentStatus


Manage the SenseHAT environmental sensors



getAvgTemperature()

Acquire three different temperature values, then return the calculated average value.


getEnvironment()

Retrieve the environmental sensors status.
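The averaging behaviour described above can be sketched in a few lines. The sample values are made up; the real class reads the SenseHAT sensors:

```python
# Sketch of averaging several temperature samples, as the class does with
# the three SenseHAT temperature readings. Values are illustrative.

def average_temperature(samples):
    return sum(samples) / len(samples)

print(round(average_temperature([24.1, 24.3, 24.2]), 2))  # 24.2
```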


Class: Screen


from iot_sense_hat.hat_manager import Screen


Screen settings macro utilities



An optimised version of the original library's clear method. The color can be omitted; the default is all LEDs off.


Show the startup sequence

msg(text, color, background, speed)

Show a scrolling message on the display, with a fixed right-to-left direction.


Class: Joystick


from iot_sense_hat.hat_manager import Joystick


The Joystick event controller. When the class is instantiated in the application, it is possible to pass the name of a local function that is called, in a secondary thread, every time the joystick is moved. For more details see the example program described below.


This class has no direct methods. The thread is started by calling start() on the class instance, which executes the run() method.


The example program

IMG_20160727_101811.jpg

The example program is the main SenseHAT management program for the PiIoT project. This version, 1.0beta, launches a series of initial actions, then starts polling the joystick, reacting by showing the joystick position on the display and printing the return code and status on the console.


import time
from iot_sense_hat.display_joystick import DisplayJoystick
from iot_sense_hat.hat_manager import Screen, Joystick, EnvironmentStatus
from iot_sense_hat.hat_rainbow import HatRainbow
from iot_sense_hat.utility import IPStuff


First the program imports the needed classes, then defines the joystickDispatcher method:


def joystickDispatcher(keycode, status):
    """
    Joystick dispatcher method. Executes the corresponding function
    depending on the keycode passed. If keycode is None the thread
    should stop.

    :param keycode: Keycode ID
    :param status: Keycode status (On/Off etc.)
    """
    print(keycode, status)

    joyDisplay = DisplayJoystick()

    if keycode != 0:
        joyDisplay.handle_code(keycode, joyDisplay.WHITE)


joystickDispatcher(...) is the function passed to the Joystick class when it is instantiated. Every time a joystick event occurs after the Joystick secondary thread has been launched by calling the start() method, this function is called. This avoids setting up a complex multithreading mechanism in the main application.

The remaining part of the code is the bare main program.


# Main application entry point
if __name__ == '__main__':
    # Startup message
    hatScreen = Screen()

    # Sensors
    hatSensors = EnvironmentStatus()
    print("Avg Temp = ", hatSensors.getAvgTemperature())
    print("Global Env = ", hatSensors.getEnvironment())

    # Node IP address
    nodeIP = IPStuff()
    hatScreen.msg(nodeIP.getHostName() + " - " + nodeIP.getIP())

    # Joystick
    joy = Joystick()

    # Executes the joystick control in the main application
    # CTRL-C to stop and go ahead
    print("Joystick is running in the main thread. Press CTRL-C to end")

    # Executes the joystick control in a separate thread
    joyThread = Joystick(joystickDispatcher, 1)
    print("Joystick will be launched in a separate thread. Press CTRL-C to end")

    # Start the rainbow loop in the main thread
    print("Rainbow will run in the main thread")
    rainbow = HatRainbow()
    counter = 5000
    while counter != 0:
        counter -= 1

    print("Press CTRL-Z to exit")

This is a continuation of the previous blog post, where we installed motion to stream video from the Pi Camera, acting as a live preview of the security camera. For more details check out:

Pi Control Hub: Spoke 1 :Security Camera - setting up Motion to stream video


In this blog post we will set up a simple web server on the Pi using lighttpd and Single File PHP Gallery 4.6.1 by Kenny Svalgaard, to display the images that motion stores when an intruder/movement is detected. (I used Single File PHP Gallery a couple of years back, with the original Pi model B and the Pi Camera, on a simple bird-watching project for the summer, which means I can vouch for it.) As shown in the picture below, in the gallery you can preview all the pictures taken when movement is detected, click on a picture to enlarge it, and even download the full-size image to your laptop if you have to.



In addition, I am going to move the Pi NoIR cam + SD card + WiFi USB adapter to the Pi Zero, as shown in the picture below, because I plan on using the Pi B+ on another spoke.



Here are the steps/commands to follow to set up lighttpd, followed by Single File PHP Gallery.

#1 SSH into your Pi and install lighttpd

    sudo apt-get install lighttpd


#2 Install the PHP packages

       sudo apt-get install php5-common php5-cgi php5

    At this point, also enable the FastCGI module using the command

        sudo lighty-enable-mod fastcgi-php


#3 Update the permissions of the www directory which was created by the install of  lighttpd

      Change the directory owner and group to www-data

           sudo chown www-data:www-data /var/www

      Give the necessary permissions to the folder

             sudo chmod 775 /var/www

      Then add pi to the www-data group

           sudo usermod -a -G www-data pi


#4 Stop and start the lighttpd service and test

       sudo service lighttpd stop

        sudo service lighttpd start

     Alternatively, you can use sudo service lighttpd restart. If everything is set up correctly, you should see the default page when you open http://ipaddressOfPi in your browser.


In case you come across an error, or the page does not load, check your error logs to debug:

     sudo tail /var/log/lighttpd/error.log


#5 Optionally, if you would like to run a test with a small html page of your own, use the following commands

      cd /var/www/html/

    Rename the existing file that serves the page in the screenshot above

       sudo mv index.lighttpd.html index.lighttpd.htmlxx

    create an index.html file

        sudo nano index.html

    Copy paste the following code


<title>Lighttpd test</title>
<h1>Testing Lighttpd and PHP setup</h1>
<p>Yes !!! it is working.</p>
<p><?php phpinfo();?></p>


        Ctrl+X, then Y, to save the file in the nano editor

    Stop and start the lighttpd service

       sudo service lighttpd stop

       sudo service lighttpd start



#6 Install the php5-gd image processing library to show the thumbnails on the web page; this is required by Single File PHP Gallery

   sudo apt-get install php5-gd


#7 Download Single File PHP Gallery

     get the zip files from

        sudo wget

     Unzip and move the files to /var/www/html

         sudo unzip -d /var/www/html

     Also, don't forget to rename the index.html file created in step 5

        sudo mv index.html index.htmlxx



#8 Upload a couple of pictures to the html folder, or take a couple of pictures using the Pi camera

     Use an FTP tool like FileZilla to upload a couple of images to /var/www/html

     Or, if you don't have an FTP tool, navigate to the /var/www/html folder and take a couple of pictures using

         sudo raspistill -o test1.jpg

      Once done, open http://ipaddressOfPi in the browser, where you should see the images you have just taken.



#9 Set up the motion.conf file to point to the newly created pics folder

     Now create a new folder called pics

       sudo mkdir pics

    Stop the motion service, if you have not done it as part of the previous steps

       sudo service motion stop

   Assign ownership of the target directory to motion

       sudo chown motion /var/www/html/pics

   Now modify the following parameters in the motion.conf file

      sudo nano /etc/motion/motion.conf

    - Set the target directory to the new one we created

        target_dir /var/www/html/pics


    - Set a new threshold value to determine when motion is detected; I bumped mine to 3500. You will have to experiment with this value. Basically, it is the threshold for the number of changed pixels in an image that triggers motion detection (default: 1500).

        threshold 3500

    - I also chose to turn off video recording by setting ffmpeg_output_movies to off

       ffmpeg_output_movie off
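To make the threshold parameter concrete, here is a plain-Python illustration of the idea: motion compares consecutive frames and counts the pixels that changed by more than a noise margin, triggering when that count exceeds the threshold. The frames and noise margin below are purely illustrative, not motion's actual algorithm:

```python
# Toy changed-pixel counter: frames are flat lists of grey values, and a
# pixel counts as "changed" when it differs by more than a noise margin.

def changed_pixels(frame_a, frame_b, noise=16):
    return sum(1 for a, b in zip(frame_a, frame_b) if abs(a - b) > noise)

previous = [10] * 5000
current = [10] * 1000 + [200] * 4000   # a large bright object appears
count = changed_pixels(previous, current)
print(count, count > 3500)  # 4000 True: this frame would trigger motion
```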


#10 Start the motion service and test

         sudo service motion start

     Now, when you click on the pics section in the browser at http://ipaddressOfPi, you should see all the images captured by motion when an intruder is detected


  <in my case I am using random things around me on my table>; you can also click on a picture to enlarge it, as shown in the picture below, and click the link to download the full-size image to your laptop.



And yes, the stream from the Pi cam is still available at http://ipaddressOfPi:8081

Now that the Pi 3 is running well, with the camera and OpenCV, we need to install the second camera and Pi as the slave system.


Previous posts:

[Pi IoT] Plant Health Camera #5 - OpenCV

[Pi IoT] Plant Health Camera #4 - Putting the parts together

[Pi IoT] Plant Health Camera #3 - First steps

[Pi IoT] Plant Health Camera #2 - Unboxing

[Pi IoT] Plant Health Camera #1 - Application


Setup the Raspberry Pi B+

The Raspberry Pi B+ supplied in the kit is used as the slave camera unit. I placed it in the grey Smarti Pi 'LEGO' case.

Raspbian is installed from a NOOBS image which I put on a 16 GB SD card.



After installation it automatically boots into graphical mode. I made some changes using the Configuration tool.


As hostname I use pi1iot (the other one's hostname is pi3iot). Since we don't need the graphical user interface, the Boot option is changed to CLI (Command Line Interface). Also, Auto Login is disabled. In the Interfaces tab the Camera is enabled, and in the Localisation tab the proper timezone is set.


Connecting the cameras

The standard camera cable is too short to connect to the Pi 3, therefore I ordered a longer (30 cm) cable.


The two cameras are placed next to each other on the LEGO cover of the second Pi.


For the time being the second Pi is connected via ethernet.





Connecting the slave Pi

The slave Pi is connected by using the SSHFS file sharing protocol, which allows you to mount a Raspberry Pi's filesystem over an SSH session. This works very conveniently. In order to set this up I first had to install SSHFS on the client system, the Pi 3 in this case:

pi@pi3iot:~ $ sudo apt-get install sshfs


Then I created a mount point:

pi@pi3iot:~ $ mkdir pi1iot_share


And mounted the second Pi's home directory (user pi):

pi@pi3iot:~ $ sshfs pi@ pi1iot_share/
The authenticity of host ' (' can't be established.
ECDSA key fingerprint is 12:70:32:f1:cb:45:19:7e:3a:9f:f2:f3:d4:57:5a:c1.
Are you sure you want to continue connecting (yes/no)? yes
pi@'s password: 
pi@pi3iot:~ $ 


Now I log on to the second Pi and take a photo:

pi@pi3iot:~ $ ssh pi@
pi@'s password: 

The programs included with the Debian GNU/Linux system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.

Debian GNU/Linux comes with ABSOLUTELY NO WARRANTY, to the extent
permitted by applicable law.
Last login: Fri Jul 29 19:00:17 2016 from
pi@pi1iot:~ $ 
pi@pi1iot:~ $ mkdir planthealthcam
pi@pi1iot:~ $ cd planthealthcam/
pi@pi1iot:~/planthealthcam $ raspistill -o cam.jpg
pi@pi1iot:~/planthealthcam $ ls -al
total 3244
drwxr-xr-x  2 pi pi    4096 Jul 29 19:21 .
drwxr-xr-x 19 pi pi    4096 Jul 29 19:00 ..
-rw-r--r--  1 pi pi 3311007 Jul 29 19:21 cam.jpg
pi@pi1iot:~/planthealthcam $ 


Finally, log off the second Pi. We are back at the Pi 3. The image can be found on the mounted share.

pi@pi1iot:~ $ logout
Connection to closed.
pi@pi3iot:~ $ 
pi@pi3iot:~ $ ls -al pi1iot_share/planthealthcam/
total 3244
drwxr-xr-x 1 pi pi    4096 Jul 29 19:21 .
drwxr-xr-x 1 pi pi    4096 Jul 29 19:00 ..
-rw-r--r-- 1 pi pi 3311007 Jul 29 19:21 cam.jpg
pi@pi3iot:~ $ 



That's it for now, next step is how to simultaneously take an image with both cameras.
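One possible approach for that next step, sketched ahead of time: run raspistill locally and over SSH in parallel threads, so both shutters fire at nearly the same moment. This is only a sketch under the assumption that the slave is reachable as pi@pi1iot with key-based login; it is not the final implementation.

```python
import subprocess
from threading import Thread


def capture_cmd(host, filename):
    """Build a raspistill command; wrap it in ssh when host names a remote machine."""
    cmd = ["raspistill", "-o", filename]
    if host is not None:
        cmd = ["ssh", host] + cmd
    return cmd


def capture(host, filename):
    subprocess.call(capture_cmd(host, filename))


def capture_both():
    # Fire the local and the remote capture at (nearly) the same moment
    jobs = [Thread(target=capture, args=(None, "cam3.jpg")),
            Thread(target=capture, args=("pi@pi1iot", "planthealthcam/cam.jpg"))]
    for j in jobs:
        j.start()
    for j in jobs:
        j.join()
```

Calling capture_both() on the Pi 3 would leave cam3.jpg locally and cam.jpg on the slave, visible through the mounted share.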

Stay tuned.


This post will be about the display of the actual alarm clock. It will consist of two components, visualising different information.




Enabling I2C


Both components, a 4-digit 7-segment display and an 8x8 LED matrix, make use of an I2C backpack to facilitate wiring and control.


To enable I2C on the Raspberry Pi, launch the "raspi-config" tool from the command line. Select the advanced options and go to the I2C menu. When asked, enable I2C. Reboot the Pi for changes to take effect.


Screen Shot 2016-07-28 at 21.17.27.pngScreen Shot 2016-07-28 at 21.17.30.png


Install the "i2c-tools" package if not already done so, as it will help verify all is as expected:


pi@piiot1:~ $ sudo apt-get install i2c-tools
Reading package lists... Done
Building dependency tree
Reading state information... Done
Suggested packages:
  libi2c-dev python-smbus
The following NEW packages will be installed:
0 upgraded, 1 newly installed, 0 to remove and 0 not upgraded.
Need to get 51.3 kB of archives.
After this operation, 227 kB of additional disk space will be used.
Get:1 jessie/main i2c-tools armhf 3.1.1+svn-2 [51.3 kB]
Fetched 51.3 kB in 0s (65.1 kB/s)
Selecting previously unselected package i2c-tools.
(Reading database ... 123842 files and directories currently installed.)
Preparing to unpack .../i2c-tools_3.1.1+svn-2_armhf.deb ...
Unpacking i2c-tools (3.1.1+svn-2) ...
Processing triggers for man-db ( ...
Setting up i2c-tools (3.1.1+svn-2) ...
/run/udev or .udevdb or .udev presence implies active udev.  Aborting MAKEDEV invocation.


You should now be able to detect I2C devices:


pi@piiot1:~ $ sudo i2cdetect -y 1
     0  1  2  3  4  5  6  7  8  9  a  b  c  d  e  f
00:          -- -- -- -- -- -- -- -- -- -- -- -- --
10: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
20: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
30: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
40: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
50: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
60: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
70: 70 71 -- -- -- -- -- --


In my case, both displays have been detected. The first one at address 0x70, the second one at 0x71.
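As a side note, the i2cdetect table is easy to parse if you ever want to check for the displays from a script; a small sketch (the helper name is mine, not part of i2c-tools):

```python
def detected_addresses(i2cdetect_output):
    """Parse the table printed by `i2cdetect -y 1` and return detected addresses as ints."""
    found = []
    for line in i2cdetect_output.splitlines():
        if ":" not in line:
            continue  # skip the column header row
        for cell in line.split(":", 1)[1].split():
            if cell != "--":
                found.append(int(cell, 16))
    return found


# Fed the scan output above, this returns [0x70, 0x71]
```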

Initially, both displays have address 0x70. It is however possible to short some pads at the back using solder, changing the address to a different value:




Let's take a look at how I connected them to the Pi.


Connecting Hardware


3.3V vs 5V ?


The safest option is to connect everything using 3.3V, as the Pi's logic level is 3.3V and using 5V may damage the Pi. The connections would then look like this:

Screen Shot 2016-07-28 at 22.35.58.pngIMG_1902.JPG



Because the displays I used are supposed to be "super bright", powering them using 5V instead of 3.3V would result in the best brightness output. I was however worried as the Pi's I/O uses 3.3V levels. But would powering the displays using 5V damage my Pi's I2C pins?


Doing some research, I found an old discussion from Drew Fustini asking the same thing: Is level shifting really needed for I2C? Ultimately, I verified the voltage on the I2C pins using a scope, and it seems it is safe (in this particular case) to power the displays using 5V instead of 3.3V without damaging my Pi's I2C pins.





Python Library


Adafruit has published a Python library to control most of their LED I2C "backpacks". Like the rest of their software, it can be found on GitHub:


Installing the library is straightforward but requires some dependencies to be installed first:


pi@piiot1:~ $ sudo apt-get install build-essential python-dev
Reading package lists... Done
Building dependency tree
Reading state information... Done
build-essential is already the newest version.
The following extra packages will be installed:
  libpython-dev libpython2.7-dev python2.7-dev
The following NEW packages will be installed:
  libpython-dev libpython2.7-dev python-dev python2.7-dev
0 upgraded, 4 newly installed, 0 to remove and 0 not upgraded.
Need to get 18.2 MB of archives.


pi@piiot1:~ $ sudo apt-get install python-smbus python-imaging
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following NEW packages will be installed:
  python-imaging python-smbus
0 upgraded, 2 newly installed, 0 to remove and 0 not upgraded.
Need to get 19.3 kB of archives.


Once the dependencies are installed, the actual library can be downloaded using the "git clone" command:


pi@piiot1:~ $ git clone
Cloning into 'Adafruit_Python_LED_Backpack'...
remote: Counting objects: 167, done.
remote: Total 167 (delta 0), reused 0 (delta 0), pack-reused 167
Receiving objects: 100% (167/167), 43.64 KiB | 0 bytes/s, done.
Resolving deltas: 100% (110/110), done.
Checking connectivity... done.


Finally, after downloading the library, it can be installed on the Pi:


pi@piiot1:~ $ cd Adafruit_Python_LED_Backpack/
pi@piiot1:~/Adafruit_Python_LED_Backpack $ sudo python setup.py install


Library installation:


7-Segment Display


This will be used to display the most basic information an alarm clock should display: the time.


I wrote a small Python script taking the time, parsing the hours and minutes and concatenating them into a single value. That value is then sent to the display to be visualised. The colon is simply alternated every 500 ms.


The small script is defined below:


#!/usr/bin/env python

import time
from datetime import datetime
from Adafruit_LED_Backpack import SevenSegment

display = SevenSegment.SevenSegment(address=0x70, busnum=1)
display.begin()

colon = False


while True:
  now = datetime.now()

  hours = now.hour
  minutes = now.minute

  # e.g. 9:05 becomes "905", 14:30 becomes "1430"
  string = str(hours) + str(minutes).zfill(2)

  colon = not colon

  display.print_float(float(string), decimal_digits=0, justify_right=True)
  display.set_colon(colon)
  display.write_display()

  time.sleep(0.5)


To verify the placement of the colon and the time, I temporarily set the time to Japan and then back to Belgium.


pi@piiot1:~ $ sudo cp /usr/share/zoneinfo/Japan /etc/localtime
pi@piiot1:~ $ sudo cp /usr/share/zoneinfo/Europe/Brussels /etc/localtime


This is the result:



Looks like the time is taken care of, on to the next display!


8x8 LED Matrix


The LED matrix will be used to display additional information in the form of icons.


It could display whether or not a connection to the internet is available or display the forecasted weather with simple icons. Below is an example demonstrating the connectivity check.


The script pings a given host and displays a cross if not reachable or nothing if reachable.


#!/usr/bin/env python

import time
import os
from PIL import Image
from PIL import ImageDraw
from Adafruit_LED_Backpack import Matrix8x8

display = Matrix8x8.Matrix8x8(address=0x71, busnum=1)
display.begin()

# Draw the cross icon once, up front
image = Image.new('1', (8, 8))
draw = ImageDraw.Draw(image)

draw.rectangle((0,0,7,7), outline=255, fill=0)
draw.line((1,1,6,6), fill=255)
draw.line((1,6,6,1), fill=255)

while True:
  host = ""  # host to ping (left blank here; fill in the host you want to check)
  response = os.system("ping -W 1 -c 1 " + host)

  if response == 0:
    display.clear()           # reachable: show nothing
  else:
    display.set_image(image)  # not reachable: show the cross
  display.write_display()

  time.sleep(1)



Check the video below to see both displays in action




Here's a short video clip demonstrating the clock and the connectivity check. It's checking connectivity to the Pi behind it, that is being rebooted.





Navigate to the next or previous post using the arrows.


Before proceeding with the motor setting - wiring and firmware - a full cycle of the mechanic components (see the previous post PiIoT - The perfect reading place #8 [tech]: Art-a-tronic, setting the moving parts ) has been printed, manually refined and assembled.

The video below shows the result obtained with one of the four sectors of the Art-a-tronic animatronic.



While the 3D printed components are in production, the project development moves on to other parts. Stay in touch with the next posts!

Welcome to installment number twenty-two of the Design Challenges Project Summary series here at Element14. For those of you who are new to my content, in this series I will pick a single Design Challenge project from the current challenge (Pi IoT Smarter Spaces) and write a short summary of the project to date. Over the course of each challenge, I try to revisit each project that I cover at least once, and I am sure that some project summaries will get more than one update if they themselves are updated frequently. Unfortunately, projects that stall out, or get abandoned, will not receive any updates. Some project creators like to keep their own project summary going, and this series is not meant to overshadow those posts, but to highlight each project from an outsider's perspective.




The subject of this installment is project Smart Competition Home by Caterina Lazaro (clazarom). While smart homes are quickly becoming a dime a dozen these days, Caterina has devised a cool twist to make hers stand out. In the project’s introduction post, she says that not only will she integrate sensors, and control capabilities into her smarthome, but she will be implementing a competition into the system that pits the home’s residents against each other in friendly competition. The competition will be exercise based, with each resident of the home tracking their workouts via their mobile devices, and then points will be given to those with the highest statistics. “For this IoT challenge, we are also proposing a fun version of this IoT smart-house, where current inhabitants are not only treated as part of the system, but will also compete to be the number one,” she said.




A Raspberry Pi 3 will act as the home's central node, which will handle all of the data retention duties as well as act as a command center for all subsequent nodes in the system. Caterina says that she will install a GUI onto this central node to handle local administration, and remote access will be provided by a webserver running on the Pi as well. Other nodes will be Raspberry Pi based as well, and will use WiFi dongles to handle communications back to the central node. Environmental conditions will be monitored by several different sensors, as well as a couple of security features such as a door sensor and an alarm button. While initially this project looked very ambitious, I am now convinced that Caterina will be able to pull it off fairly easily.




The key to any successful project is planning, followed by execution. The project’s first update (second post) took care of the first part of that equation by laying out a clear, and detailed attack plan, followed by a full schedule in which each step will take place. As someone who holds a Bachelor's Degree in Business Administration, and studied project management quite a bit, I truly appreciate the careful thought that Caterina has put into this project, especially the way she mapped out each facet of the smart home system. If she sticks to the schedule that has been laid out, the project’s core functionality will be complete by week seven, and she can begin implementing additional functionality to the project of which three “extra” features have been planned.




The project’s second update (post number three), introduced newcomers to MQTT (Message Queue Telemetry Transport), and gave the rest of us a refresher course on how to setup device connectivity between nodes. Starting with the installation of Mosquitto Broker in the central node, Caterina walked us through the install, configuration, and testing steps to get everything running, and included all of the commands needed as well as some commands that are useful once MQTT is up and running. She then briefly mentioned the use of Paho to setup the MQTT clients on the various devices that will be implemented in this project. Head over to the link above to see the full tutorial.




In her next update installment, Caterina began work on implementing the sensor array that will be utilized in her smart competition home. The sensor node is based on a Raspberry Pi Model B, and will be used to fetch data from the sensors connected to the system. For now, the sensor array will be limited to environmental metrics such as temperature, barometric pressure, and humidity, while a door sensor and an alarm button will also be connected to this node. As data is being gathered, this node will push the data back to the central node using MQTT. Another MQTT client will be installed on a smartphone which will also push data back to the central node. Python and Java will be the languages of choice when coding for the sensors and smartphone.




This update is quite long, and includes everything from the basic communication layout between the clients and central node, to descriptions of each sensor and the source code that was used to address them. As of this update, Caterina says that the smart home aspect of the project is fully functional. "We have set up a simple, yet complete functional infrastructure for our smart home," she said. "This system only reads some environmental and house state data, which can be viewed in the Central Node of the house, or with an App in the smart phone." In this post, she also made all of the code available on Github, which can be found here. Head over to the post link above to check out this very informative update, and to view a video of the system in action.


Updated: 12, September 2016




With so much work being done in the last update, the Central Node was falling a bit behind, and was due for an upgrade to something that was a bit more user friendly. In update number five, Caterina began work on integrating the Raspberry Pi 7-inch Touchscreen, as well as building out the User Interface. Getting the touchscreen working was simply a matter of following the official instructions on the touchscreen's product page. The GUI was built using Python and the GTK library, and all of Caterina's code can be found on her Github for this project, which I linked to earlier in this summary. The post finished up with a short tutorial on setting up a MySQL database on the Raspberry Pi 3, and then getting an Apache web server up and running to make the central node accessible from any connected device.




Update number six was a general overview post that served to help readers better understand upcoming competition portions of the project. “The innovation part of this project is the competition system: we want to engage the residents of the house in a competing environment to promote a healthier way of life,” she said. “It can later be expanded for more fun type of activities. For now, the only challenge presented to the roommates is the amount of km walked/run/biked during a month.”



Work continued on the competition system in update number seven, with Caterina focusing on the distance tracking system. One of the competition metrics that will be tracked by the system is the distance each participant walks each week. In the past this would have been hard to accomplish, but thanks to smartphones being so prevalent, it is now easier to do than ever before. Caterina chose to create an Android app that utilizes the GPS unit in each participant's phone, and had the foresight to realize that this method alone had a flaw. Users could simply record their distance traveled by vehicle, which would be very unfair to others who are participating in the challenge. To fix this issue, she combined speed and footstep data from the phone to limit what movement gets tracked. Wrapping this post up, she shared a small tutorial on how to store the collected data in a SQL database, and as you can see from the video above, it works fairly well.




Progress on the competition system continued in update number eight, where Caterina detailed the process she uses to send the distance data from the smartphone to the central node for processing. The communication is handled over the web via an HTTP_POST, with a confirmation check at the end to ensure that the data has been received and stored on the Raspberry Pi 3. The message sent to the server "will contain a String with a JSON Array format; this way, I can send several samples (several rows in the database), each of them with a key: value format. As a result, when the server receives the HTTP_POST, it will be easy to extract and identify each value," Caterina said. She does note that this method of sending data from the client to the server is inherently insecure, and is wide open to sniffing and manipulation for now, with encryption coming at a later date.
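To illustrate the format described in that quote, here is a hypothetical example (not Caterina's actual payload or field names): a JSON array of samples, each sample a set of key: value pairs, serialises to a single string that an HTTP_POST body can carry, and the server recovers each row by key.

```python
import json

# Hypothetical distance samples, one dict per database row
samples = [
    {"user": "alice", "distance_km": 2.4, "timestamp": "2016-08-20 10:00:00"},
    {"user": "bob",   "distance_km": 1.1, "timestamp": "2016-08-20 10:05:00"},
]

payload = json.dumps(samples)      # the String sent in the HTTP_POST body
rows = json.loads(payload)         # what the server recovers on the other end

print(len(rows), rows[0]["user"])  # each value is easy to extract by key
```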




With work complete on the app side of the competition module, Caterina shifted focus onto using the distance data that is collected, and integrating it into the central node’s UI. “Most of the information will be stored from the Competition service,” she said. “Then, the Python main program will retrieve that information and display the competition in its main GUI. It will also determine who is the monthly winner at the end of each period.” I won’t go into all of the details on how she accomplished this, so you will have to head over to the post to get up to speed.



Update number ten finished out the competition portion of the project with Caterina showing us how she configured the server to send the competition's status back to the competition app on each participant's smartphone. This functionality not only serves as a way to see how the user stacks up against the competition, but also as motivation to push harder to gain a better standing in the results. As I mentioned earlier, this post wraps up almost all of the competition portion of this project, and I have to say that this is a very unique aspect to the traditional smart home. I would really like to see more metrics added into the competition system in the future, and for this to become a stand-alone package that anyone can install on their home server to create their own healthy competitions with their family.




The project’s eleventh and final update was centered around the User’s node that would display data from the central node on the user’s smartphone. The user node “Will include the smart-house functionalities to that of the competition system. This way, any resident will be able to check the smart house information while connected to the WiFi and switch to Competition mode when leaving to gain some miles,” Caterina said. As always, Caterina has included the source code needed to get this portion of the project up and running, so head over to the full post to check it out.


Wrapping things up, Caterina posted a final two posts on the 30th of August. The first post was a full recap of the system and how everything is connected, while the final post was dedicated to a look back at the project as a whole, and what will happen after the challenge is over. Both posts are well worth a read, and I for one can not wait to see what the future holds for project Smart Competition Home.


That is going to wrap up my project summary coverage of project Smart Competition Home. I really enjoyed this project, and its fresh approach to making a smart home even smarter. The competition aspect of it simply blew me away, and really gave me a new outlook on what a smart home can truly be. Instead of just a bunch of sensors and relays, a smart home can include competitions, games, entertainment, and anything else we can dream up. I want to offer a huge thanks to Caterina for working so hard on this project, and for demonstrating such amazing out-of-the-box thinking. If you have not yet read through the whole project, I highly suggest doing so by visiting its blog page. I will be back next week with another Design Challenge Weekly Project Summary. Until then, Hack The World, and Make Awesome!


With the limited time that remains, things need to get kicked into higher gear. For this post, I worked on creating camera feeds using the camera module that was provided in the kit, in combination with the low-cost Pi Zero. I built two of these, one for my shed and another one for my lab.


Here's how I did it.




Nothing super exciting on the hardware side of things, as it's merely a Pi Zero with wifi dongle and camera module, but I can perhaps share two interesting gadgets I've used.


The first one is the ZeroView, which I already mentioned in my [Pi IoT] Alarm Clock #02: Unboxing The Kit. It's useful to stick a Pi Zero and camera on any window in a very compact format. Even if you don't stick it onto a window, the spacers can be used to attach a string or similar and hang it somewhere else while keeping everything as one unit.




The second one is this micro USB converter shim. It helps keep things compact as well!




Moving on to the software side of things ...






The first thing to do is to enable camera support using the "raspi-config" command. It doesn't matter which type or version of the Pi camera is used.


pi@zeroview:~ $ sudo raspi-config


Select the "Enable camera" menu, and when prompted, select the option to enable it.

Screen Shot 2016-07-25 at 20.30.49.pngScreen Shot 2016-07-25 at 20.30.53.png


Don't forget to reboot the Pi before trying to use the camera!




There are different options available to stream from the Pi camera. I've used "motion" before in the Pi NoIR and Catch Santa Challenge, but have come across interesting solutions by Calin Crisan while searching for a more up-to-date alternative.


On Calin's GitHub page, a bunch of different projects are available, even a prebuilt image with all tools included called MotionEyeOS (currently featured on element14's homepage as well: Raspberry Pi Smart Surveillance Monitoring System). Because I'm integrating everything into a single interface though, I've opted for the lightweight StreamEye program, which creates an easily embeddable MJPEG stream.


I followed the instructions described on the GitHub page:


pi@zeroview:~ $ git clone
Cloning into 'streameye'...
remote: Counting objects: 133, done.
remote: Total 133 (delta 0), reused 0 (delta 0), pack-reused 133
Receiving objects: 100% (133/133), 52.15 KiB | 0 bytes/s, done.
Resolving deltas: 100% (75/75), done.
Checking connectivity... done.


pi@zeroview:~ $ cd streameye


pi@zeroview:~/streameye $ make
cc -Wall -pthread -O2 -D_GNU_SOURCE -c -o streameye.o streameye.c
cc -Wall -pthread -O2 -D_GNU_SOURCE -c -o client.o client.c
cc -Wall -pthread -O2 -D_GNU_SOURCE -c -o auth.o auth.c
cc -Wall -pthread -O2 -D_GNU_SOURCE -o streameye streameye.o client.o auth.o


pi@zeroview:~/streameye $ sudo make install
cp streameye /usr/local/bin


In the "extras" folder is a script for the Raspberry Pi, allowing the capture of a continuous stream of JPEG images. Launching the command with the "--help" options, gives a list of all other options available.


pi@zeroview:~/streameye $ cd extras/


pi@zeroview:~/streameye/extras $ ./raspimjpeg.py --help
usage: raspimjpeg.py -w WIDTH -h HEIGHT -r FRAMERATE [options]

This program continuously captures JPEGs from the CSI camera and writes them
to standard output.

Available options:
  -w WIDTH, --width WIDTH
                        capture width, in pixels (64 to 1920, required)
  -h HEIGHT, --height HEIGHT
                        capture height, in pixels (64 to 1080, required)
  -r FRAMERATE, --framerate FRAMERATE
                        number of frames per second (1 to 30, required)
  -q QUALITY, --quality QUALITY
                        jpeg quality factor (1 to 100, defaults to 50)
  --vflip               flip image vertically
  --hflip               flip image horizontally
  --rotation {0,90,180,270}
                        rotate image
  --brightness BRIGHTNESS
                        image brightness (0 to 100, defaults to 50)
  --contrast CONTRAST   image contrast (-100 to 100, defaults to 0)
  --saturation SATURATION
                        image saturation (-100 to 100, defaults to 0)
  --sharpness SHARPNESS
                        image sharpness (-100 to 100, defaults to 0)
  --iso ISO             capture ISO (100 to 800)
  --ev EV               EV compensation (-25 to 25)
  --shutter SHUTTER     shutter speed, in microseconds (0 to 6000000)
  --exposure {off,auto,night,nightpreview,backlight,spotlight,sports,snow,beach,verylong,fixedfps,antishake,fireworks}
                        exposure mode
  --awb {off,auto,sunlight,cloudy,shade,tungsten,fluorescent,incandescent,flash,horizon}
                        set automatic white balance
  --metering {average,spot,backlit,matrix}
                        metering mode
  --drc {off,low,medium,high}
                        dynamic range compression
  --vstab               turn on video stabilization
  --imxfx {none,negative,solarize,sketch,denoise,emboss,oilpaint,hatch,gpen,pastel,watercolor,film,blur,saturation,colorswap,washedout,posterise,colorpoint,colorbalance,cartoon,deinterlace1,deinterlace2}
                        image effect
  --colfx COLFX         color effect (U:V format, 0 to 255, e.g. 128:128)
  -s, --stills          use stills mode instead of video mode (considerably slower)
  -d, --debug           debug mode, increase verbosity
  --help                show this help message and exit
  -v, --version         show program's version number and exit


Finally, to begin streaming, launch the "raspimjpeg" script and pipe ("|") it to "streameye". This starts a webserver, streaming the images.


pi@zeroview:~/streameye/extras $ ./raspimjpeg.py -w 640 -h 480 -r 15 | streameye
2016-07-25 18:45:44: INFO : streamEye 0.7
2016-07-25 18:45:44: INFO : hello!
2016-07-25 18:45:44: INFO : listening on
2016-07-25 18:45:45:  INFO: 0.5
2016-07-25 18:45:45:  INFO: hello!
2016-07-25 18:46:04: INFO : new client connection from
2016-07-25 18:46:04: INFO : new client connection from


Depending on the selected resolution and frame rate, the result should look a little like this:

Screen Shot 2016-07-26 at 20.20.39.png

I was very impressed by the latency, as it is very low (less than a second) compared to what I've used in the past.
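If you ever want to consume the stream from code rather than a browser, an MJPEG stream is simply one JPEG after another on a single HTTP connection, each frame delimited by the JPEG start (FF D8) and end (FF D9) markers. A hedged sketch of cutting one frame out of a raw byte buffer (the helper and the stream URL in the comment are my own assumptions, not part of streameye):

```python
def extract_jpeg(buf):
    """Return the first complete JPEG (SOI..EOI) found in buf, or None."""
    start = buf.find(b"\xff\xd8")       # start-of-image marker
    end = buf.find(b"\xff\xd9", start)  # end-of-image marker after it
    if start == -1 or end == -1:
        return None
    return buf[start:end + 2]


# Usage sketch (URL assumed): read a chunk from the stream and cut out a frame
# import urllib2
# data = urllib2.urlopen("http://zeroview:8080/").read(200000)
# frame = extract_jpeg(data)
```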




For the final part of this post, I embedded the two image streams in my OpenHAB installation. It only requires a modification in the sitemap, no items need to be defined:


        Frame label="Video" {
                Image url=""
                Image url=""
        }


Refresh the OpenHAB interface, et voila, both streams are embedded:


Screen Shot 2016-07-27 at 20.53.52.png


I moved one camera to the shed, and one to my lab. Yep, that's me writing this very blog post in the lower image. (And yes, I still have a lot of cleaning up to do!)




Navigate to the next or previous post using the arrows.


After testing most of the components that came as part of the kit, and doing some re-planning of which components to use as part of the hub and which ones to go with for the spokes, here is a blog post that shows how to set up motion on the Pi to stream video to a browser on your computer/tablet/phone. The plan for the security camera spoke is to use a Raspberry Pi Zero with the NoIR camera to stream video and detect movement.


For testing the setup I am using the Raspberry Pi B+, as you can see in the picture below, since I realized after buying the Pi Zero that you need a special HDMI connector to connect to a screen, and in addition you cannot use a console cable as the header pins are not soldered to the Pi Zero.


As part of the setup you can either use a USB webcam or a Pi Camera.

Screen shot of streaming using the Logitech USB webcam

(The screenshot above shows the video stream from a Logitech USB camera connected to the Pi. Note: at this point in time the Pi NoIR Cam is not in use.)


For more info on motion refer to -



Here are the steps and commands to follow to set up motion to stream video from your Raspberry Pi via a browser to your laptop.


#1 Download the latest version of Raspbian from the Raspberry Pi website and burn it to the SD card, then connect your USB webcam or Pi camera to the Pi.



#2 Now run the following commands to update the Raspbian packages

        sudo apt-get update

        sudo apt-get upgrade


#3 Install motion using the command

        sudo apt-get install motion


#4 To enable motion to start at every boot of the Pi, change the value of the start_motion_daemon from no to yes

      sudo nano /etc/default/motion


#5 Now set up ownership of the target directory where the images/videos get stored using

      sudo chown motion /var/lib/motion


#6  Once done, reboot your Pi using

       sudo reboot


#7  Once your Pi is back up, and if you have a monitor connected to the HDMI port, go to the web browser and type the following URL; you should see a preview of your camera

      Note: if you try the URL http://ipaddressofPi:8081 in a browser on your laptop, you will just get "page not displayed"; we will resolve this in the next step by modifying the motion.conf file


#8 But if you are running your Pi headless, don't worry, we will modify the motion.conf file to turn off the following two parameters: webcontrol_localhost and stream_localhost

         sudo nano /etc/motion/motion.conf



#9 In addition, modify the width and height in motion.conf

       width 640

       height 480


Video stream before changing the width and height



#10  Once you have made the changes to the .conf file, stop and start motion using

    sudo service motion stop

    sudo service motion start


Here is the video stream after changing the width to 640 and the height to 480 and restarting motion



Note: if you are using the Pi Cam instead of the USB camera, you will have to enable the Pi Cam driver with the commands in the step below for the video stream to work.

In addition there are lots of other parameters in motion.conf that you can modify and experiment with, like changing the framerate from 2 to 4 if you think the rendering of the video is slow. This is fine if you are using a Raspberry Pi 3, but I would not suggest changing it on a Pi Zero or B+.


#11 For the Pi Cam - enable the Pi Cam driver

  If you plan on using the Pi camera and have not enabled the driver, you will see the error "UNABLE TO OPEN VIDEO DEVICE" in your browser.

  To resolve this, stop the motion service and activate the Pi Cam driver:

    sudo service motion stop

    sudo modprobe bcm2835-v4l2

    sudo service motion start

  You should now see the video stream as shown in the screenshot below; in my case I am using the Pi NoIR camera V2.

secuitycam_enablepiCamDriver (copy).png

To load the driver at every boot, modify the rc.local file as shown in the screenshot below

  sudo nano /etc/rc.local

add the following line just above exit 0

  modprobe bcm2835-v4l2


Once done you can reboot your Pi to test it out using

    sudo reboot


Here is a screenshot with the Pi NoIR camera streaming video to my computer, late in the evening with a table lamp on top of the camera.



This week I am going to test how this setup behaves with the Pi Zero and, if all is well, design a 3D printed case to house the Pi Zero and Pi Cam.

As described in my first post, OpenCV is an important requirement for my application. In this post I will describe how I installed it on the Pi 3.


Previous posts:

[Pi IoT] Plant Health Camera #4 - Putting the parts together

[Pi IoT] Plant Health Camera #3 - First steps

[Pi IoT] Plant Health Camera #2 - Unboxing

[Pi IoT] Plant Health Camera #1 - Application


Install OpenCV

The next thing we need is OpenCV. I installed it from source, as described in the Install guide: Raspberry Pi 3 + Raspbian Jessie + OpenCV 3 - PyImageSearch tutorial. This is a very well-written tutorial and it doesn't make much sense to repeat all its steps here, so below I only describe the steps where I deviated from it.

First I made sure that I am indeed running the latest Raspbian version:


pi@pi3iot:~/planthealthcam $ cat /etc/os-release 
PRETTY_NAME="Raspbian GNU/Linux 8 (jessie)"
NAME="Raspbian GNU/Linux"
VERSION="8 (jessie)"


Then I followed the steps of the above mentioned Install guide until 'Step #3: Download the OpenCV source code'.

I decided to do a git clone instead of downloading the zip file. This way it is much easier to update to the latest version.

pi@pi3iot:~/opencv $ git clone
Cloning into 'opencv'...
remote: Counting objects: 191347, done.
remote: Compressing objects: 100% (49/49), done.
remote: Total 191347 (delta 22), reused 0 (delta 0), pack-reused 191298
Receiving objects: 100% (191347/191347), 414.50 MiB | 904.00 KiB/s, done.
Resolving deltas: 100% (132424/132424), done.
Checking connectivity... done.
Checking out files: 100% (4791/4791), done.
pi@pi3iot:~ $ git clone
Cloning into 'opencv_contrib'...
remote: Counting objects: 15903, done.
remote: Compressing objects: 100% (6/6), done.
remote: Total 15903 (delta 0), reused 0 (delta 0), pack-reused 15897
Receiving objects: 100% (15903/15903), 114.08 MiB | 910.00 KiB/s, done.
Resolving deltas: 100% (9087/9087), done.
Checking connectivity... done.
pi@pi3iot:~ $ 


Tutorial Step #4: Python 2.7 or Python 3? instructs you to install pip. In my case pip was already installed:

pi@pi3iot:~ $ pip --version
pip 1.5.6 from /usr/lib/python2.7/dist-packages (python 2.7)

I also installed the virtualenvwrapper, as recommended by the tutorial.

I decided to use python3 for my project:

pi@pi3iot:~/planthealthcam $ mkvirtualenv cv -p python3
Running virtualenv with interpreter /usr/bin/python3
Using base prefix '/usr'
New python executable in /home/pi/.virtualenvs/cv/bin/python3
Also creating executable in /home/pi/.virtualenvs/cv/bin/python
Installing setuptools, pip, wheel...done.
virtualenvwrapper.user_scripts creating /home/pi/.virtualenvs/cv/bin/predeactivate
virtualenvwrapper.user_scripts creating /home/pi/.virtualenvs/cv/bin/postdeactivate
virtualenvwrapper.user_scripts creating /home/pi/.virtualenvs/cv/bin/preactivate
virtualenvwrapper.user_scripts creating /home/pi/.virtualenvs/cv/bin/postactivate
virtualenvwrapper.user_scripts creating /home/pi/.virtualenvs/cv/bin/get_env_details
(cv) pi@pi3iot:~/planthealthcam $

Note the (cv) preceding my prompt, indicating that I am in the cv  virtual environment.


Since I cloned the distribution using git, I used the following cmake configuration for the build:

$ cd ~/opencv/
$ mkdir build
$ cd build
$ cmake -D CMAKE_INSTALL_PREFIX=/usr/local \
    -D OPENCV_EXTRA_MODULES_PATH=~/opencv_contrib/modules \
    ..


Unfortunately the make -j4 build process stopped after 44% (27 min) with an error compiling ffmpeg:


Building CXX object modules/videoio/CMakeFiles/opencv_videoio.dir/src/cap_ffmpeg.cpp.o
[ 44%] Built target opencv_dnn
In file included from /home/pi/opencv/modules/videoio/src/cap_ffmpeg.cpp:47:0:
/home/pi/opencv/modules/videoio/src/cap_ffmpeg_impl.hpp: In member function ‘double CvCapture_FFMPEG::get_fps() const’:
/home/pi/opencv/modules/videoio/src/cap_ffmpeg_impl.hpp:1138:49: error: ‘AVStream’ has no member named ‘r_frame_rate’
     double fps = r2d(ic->streams[video_stream]->r_frame_rate);
modules/videoio/CMakeFiles/opencv_videoio.dir/build.make:169: recipe for target 'modules/videoio/CMakeFiles/opencv_videoio.dir/src/cap_ffmpeg.cpp.o' failed
make[2]: *** [modules/videoio/CMakeFiles/opencv_videoio.dir/src/cap_ffmpeg.cpp.o] Error 1
CMakeFiles/Makefile2:6100: recipe for target 'modules/videoio/CMakeFiles/opencv_videoio.dir/all' failed
make[1]: *** [modules/videoio/CMakeFiles/opencv_videoio.dir/all] Error 2
Makefile:147: recipe for target 'all' failed
make: *** [all] Error 


Quickest solution was to exclude ffmpeg from the build:


$ cmake -D CMAKE_INSTALL_PREFIX=/usr/local \
    -D OPENCV_EXTRA_MODULES_PATH=~/opencv_contrib/modules \
    -D WITH_FFMPEG=OFF \
    ..


Now make -j4 runs just fine, and after 95 minutes and 37 seconds the make process finished at 100%.

The last step is to install it with:

$ sudo make install
$ sudo ldconfig


Test OpenCV

After some small additional steps described in the tutorial I have a working OpenCV environment.

Let's give it a try:


pi@pi3iot:~/planthealthcam $ workon cv
(cv) pi@pi3iot:~/planthealthcam $ python
Python 3.4.2 (default, Oct 19 2014, 13:31:11) 
[GCC 4.9.1] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import cv2
>>> cv2.__version__


Now let's try a small demo program. In this Python script a camera image is grabbed and a Canny edge detector is applied to find the edges in the image. Both the input image and the edges are displayed.

# import the necessary packages
from picamera.array import PiRGBArray
from picamera import PiCamera
import time
import cv2

# initialize the camera and grab a reference to the raw camera capture
camera = PiCamera()
rawCapture = PiRGBArray(camera)

# allow the camera to warmup
time.sleep(0.1)

# grab an image from the camera
camera.capture(rawCapture, format="bgr")
image = rawCapture.array

# apply the Canny edge detector
edges = cv2.Canny(image, 100, 200)

# display the image on screen and wait for a keypress
cv2.imshow("Image", image)
cv2.imshow("Edges", edges)
cv2.waitKey(0)


This gives me the following result:




An important software requirement for my plant health camera is now ready to use.


stay tuned for the next steps.

The moving parts

The Art-a-tronic animatronic (see the previous posts PiIoT - The perfect reading place #7 [tech]: Art-a-tronic, mechanic design and PiIoT - The perfect reading place #6 [tech]: Art-a-tronic, performing the new opera for details) is built with seven moving components as shown in the images below:

Meccanica3-Frontale.jpg Meccanica5.jpg

The entire 30x30 cm structure has been designed as a series of four modules to be joined together, due to the print surface limitations of the 3D printer. Every moving part is controlled by a stepper motor, as shown in the scheme below.

Art-a-tronic Moving Parts.png


Stepper motors as linear transducers

Every blue part of the animatronic changes its height when it moves, so seven GearBest stepper motors have been used to make as many linear transducers, one for every moving part. The four synched parts use a single controller to generate the steps. The movement is controlled by a Cypress PSoC4 microcontroller acting as the main unit for both the sensors (discussed further below) and the motors.

The conversion from the rotary movement of the steppers to the linear movement of the parts uses a method very similar to the one used in CNC machines, 3D printers and more.
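As a back-of-the-envelope illustration of this leadscrew principle, the travel per step follows from the thread pitch and the motor's steps per revolution. The sketch below assumes a 0.5 mm pitch (standard for an M3 thread) and a 200-step motor; the actual GearBest steppers may differ.

```python
# Travel per step of a threaded-bar linear transducer.
# Assumptions: M3 thread pitch 0.5 mm/rev, 200 full steps/rev (1.8 deg motor).
PITCH_MM = 0.5
STEPS_PER_REV = 200

mm_per_step = PITCH_MM / STEPS_PER_REV

def steps_for(travel_mm):
    """Number of full steps to move the part by travel_mm."""
    return round(travel_mm / mm_per_step)

print(mm_per_step)    # 0.0025 mm per step
print(steps_for(10))  # 4000 steps for 10 mm of travel
```

With this resolution even slow, smooth movements of the blue parts are easy to generate from the step generator.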


The motor joint design

The threaded vertical shaft of the transducer is coaxial with the stepper shaft and needs a flexible joint: its role is to firmly fix the two aligned components while allowing a minimum of flexibility to prevent excessive mechanical stress.

MotorJoint.jpg A motor joint used in many 3D printers z-axis movement


Due to the large number of joints needed for this project - the complete set of moving parts involves about 100 of them - it was worthwhile designing a custom one, with some structural changes to make it printable.

The images below show the rendered CAD model

GiuntoMotore1.png GiuntoMotore2.png GiuntoMotore3.png

The same principle as the analogous aluminium component has been kept: robust yet flexible, able to lock the two coaxial components together.

The following semitransparent images show how the internal part is designed; the stepper motor shaft has a different diameter than the M3 threaded bar used for the vertical movement.

Screen Shot 2016-07-24 at 08.44.17.png  Screen Shot 2016-07-24 at 08.44.44.png Screen Shot 2016-07-24 at 08.56.35.png

3d printing and assembling the motor joint

After some 3D printing tests the joints were printed in PLA with 100% fill and supports. As these parts are small, a raft is needed to keep them in place until the print finishes. The resulting G-code requires less than 30 minutes of printing time.


Stress tests confirmed that this part is easy to assemble, robust and flexible.

IMG_20160723_162325.jpg IMG_20160724_115519.jpg

Connecting the threaded bar to the moving parts

The last step to set any part in motion is to connect the M3 threaded bar to the moving part. In this case too the mechanism is very similar to a 3D printer's z-axis movement, as shown in the image below.


The very first tests were done with an M3 nut, but it was not sufficient: the threaded pipe of the moving part should be longer than just a couple of millimetres. The image below shows the adopted component and how it fits on the threaded bar.

IMG_20160723_162416.jpg IMG_20160723_162431.jpg

The form factor of this component makes it easy to insert into the end of the moving part's axis, but one last problem arises: it should be firmly locked, yet easy to assemble and to replace on damage for easy maintenance. So one last 3D printed object has been designed: a lock cap that can be screwed onto the moving pipe, as shown in the images below.

BloccoViteMotore1.png BloccoViteMotore2.png BloccoViteMotore3.png

And this is the resulting 3D printed sample


The complete movement assembly

The image gallery below shows the complete movement assembly of the parts.


The threaded pipe set in place


The 3D printed cap in place


The 3D printed cap in place


The moving part assembled with the threaded pipe to the bottom


The motor connected to the moving part

In one of my previous posts, [PiIoT#03]: Cheap BLE Beacons with nRF24L01+, I discussed how to create cheap BLE beacons with nRF24L01+ modules. In this post, I'm going to use the built-in BLE functionality of the Pi 3 to detect the presence of such BLE tags and update a UI. This is done in two steps. First, a nodeJS script monitors the presence of BLE tags around the Pi and sends MQTT messages on specific topics to a connected broker. Next, these messages are subscribed to by a freeboard UI running inside a browser and shown on the dashboard.


Setting up the beacons

A detailed description of faking a BLE beacon was already given previously. All you have to do is upload the sketch with a different MAC address to each beacon. A modified version of the example sketch is given below:

// RF24 Beacon
#include <SPI.h>
#include "RF24Beacon.h"

RF24Beacon beacon( 9,10 );

uint8_t customData[] = { 0x01, 0x02, 0x03 };

void setup() {
    beacon.setMAC( 0x01, 0x02, 0x03, 0x04, 0x05, 0xF6 );   // Change this
    uint8_t temp = beacon.setName( "beacon1" );            // Change this
    beacon.setData( customData, 2 );
}

void loop() {
    // beacon.sendData( customData, sizeof(customData) );
    delay( 1000 );
}

The code is very similar to the one discussed in the previous post, except you need to take care of a few things here:

  1. In line#14, change the MAC id for each of the beacons you create. This is used to uniquely identify the beacon in the nodejs script.
  2. At line#16, change the beacon name to something more meaningful for your use, like 'car keys' or 'jimmy', for each beacon.
  3. At line#26, appropriately set the delay for advertising.
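To avoid hand-editing the MAC for every beacon, a small helper can derive a distinct address per beacon from a common base. This is only an illustrative sketch; the base address below is the one from the example sketch, with the last byte varied:

```python
# Hypothetical helper: derive a distinct MAC per beacon by varying the
# last byte of a common base address (same base as the Arduino sketch).
BASE_MAC = [0x01, 0x02, 0x03, 0x04, 0x05, 0xF0]

def beacon_mac(n):
    """Return the MAC string for beacon number n (n = 1, 2, ...)."""
    mac = BASE_MAC[:]
    mac[-1] = (mac[-1] + n) & 0xFF   # wrap within one byte
    return ":".join(f"{b:02X}" for b in mac)

print(beacon_mac(1))  # 01:02:03:04:05:F1
print(beacon_mac(6))  # 01:02:03:04:05:F6, the address used in the sketch
```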

Now you can upload the code to your beacons and verify the advertising packets with hcitool, as mentioned in the previous post.


Scanning for beacons with Pi

The next part is a nodejs script to scan for the beacons and publish MQTT messages. For scanning I use the 'noble' library, and for MQTT I use MQTT.js. Both can be installed through npm as below:

$ npm install noble
$ npm install mqtt

Once these packages are installed, we can start writing the script. My script is loosely based on one of the example scripts shipped with the noble package for continuously checking for beacons. Basically the script does this:

  1. Connects to the MQTT broker and publishes a message on 'helloTopic' indicating the scanner is online
  2. Switches on the bluetooth radio and starts scanning
  3. Once it finds a valid advertisement, it checks whether that tag is new
    1. If new, it adds the tag to the list of online tags 'onlineTags' along with its UUID and last-seen timestamp
    2. If the tag is already in the list, it updates the lastSeen time and RSSI value
  4. Periodically runs a function
    1. which checks the lastSeen time of all known beacons to see if any went offline
    2. If a tag has gone missing, publishes a message indicating 'offline' status with -100dB RSSI
    3. If online, publishes a message with 'online' status and the last known RSSI
  5. Also handles app exits by publishing a disconnect message to the broker on 'helloTopic'

The script is given below:

// Scan for BLE beacons and send MQTT messages for each beacon status
var noble = require('noble');
var mqtt  = require('mqtt');
var util  = require('util');

var RSSI_THRESHOLD    = -90;
var EXIT_GRACE_PERIOD = 4000; // milliseconds

var onlineTags = [];

// ---------------- MQTT Connection --------------
var helloTopic      = 'user/vish/devices/jupiter';
var beaconBaseTopic = 'user/vish/devices/ble_tags';

// Configuration of broker
var connectOptions = {
  host: '',
  port: 1883
};

// -------------- MQTT Event Handling functions --------------------
function onConnect() {
  console.log( 'Connected to broker ' + + ':' + connectOptions.port );
  // Send a hello message to the network
  client.publish( helloTopic, '{"connected":"true"}' );
}

function onMessage( topic, message ) {
  console.log( "    New message: [" + topic + "] > " + message );
}

client = mqtt.connect( connectOptions );
client.on( 'connect', onConnect );
client.on( 'message', onMessage );

// ------------------ BLE Scanner functions ----------------------
function onDiscover( peripheral ) {
  console.log( new Date() + ' ' + peripheral.advertisement.localName + ' ' + peripheral.uuid + ' ' + peripheral.rssi );
  if( peripheral.rssi < RSSI_THRESHOLD ) {
    return;   // ignore tags with a too-weak signal
  }

  var id     =;
  var newTag = !onlineTags[id];

  if( newTag ) {
    onlineTags[id] = {
      tag: peripheral
    };
    console.log( new Date() + ': "' + peripheral.advertisement.localName + '" ONLINE  (RSSI ' + peripheral.rssi + 'dB) ');
    // publish a message for the beacon
    var msg = {
      uuid: peripheral.uuid,
      rssi: peripheral.rssi,
      adv : peripheral.advertisement,
      online: true,
    };
    client.publish( beaconBaseTopic + "/" + peripheral.advertisement.localName, JSON.stringify(msg) );
  }

  // Update rssi & last seen time
  onlineTags[id].rssi     = peripheral.rssi;
  onlineTags[id].lastSeen =;
}

noble.on('discover', onDiscover );

noble.on('stateChange', function(state) {
  if (state === 'poweredOn') {
    noble.startScanning([], true);
  } else {
    noble.stopScanning();
  }
});

// ------ Timed function to check whether the devices are online and publish MQTT packets accordingly ----
function checkNUpdate() {
    // for each device in range
    for( var tagId in onlineTags ) {
        var tag = onlineTags[tagId].tag;
        // prepare message
        var msg = {
            uuid: tag.uuid,
            adv: tag.advertisement,
            lastSeen: onlineTags[tagId].lastSeen
        };
        if( onlineTags[tagId].lastSeen < ( - EXIT_GRACE_PERIOD) ) {
            // Device went offline
   = false;
            msg.rssi   = -100;
            // delete from the list of visible tags
            delete onlineTags[tagId];
            console.log( new Date() + ': "' + tag.advertisement.localName + '" OFFLINE (RSSI ' + tag.rssi + 'dB) ');
        } else {
            // device is within range
   = true;
            msg.rssi   = tag.rssi;
        }
        client.publish( beaconBaseTopic + "/" + tag.advertisement.localName, JSON.stringify(msg) );
    }
}

// Function to be called at the end of the script
function handleAppExit (options, err) {
  if( err ) {
    console.log( err.stack );
  }

  if( options.cleanup ) {
    client.publish( helloTopic, '{"connected":"false"}' );
    client.end( true );
  }

  if( options.exit ) {
    process.exit();
  }
}

// Handle the different ways an application can shutdown
process.on('exit', handleAppExit.bind(null, {
  cleanup: true
}));
process.on('SIGINT', handleAppExit.bind(null, {
  exit: true
}));
process.on('uncaughtException', handleAppExit.bind(null, {
  exit: true
}));

// Call the checkNUpdate function periodically
setInterval( checkNUpdate, EXIT_GRACE_PERIOD/2 );


You can save this as 'scanner.js' and then use

$ sudo node scanner.js

to start the script.

Here, my 'hello' topic is of the form 'user/vish/devices/<pi_name>' to identify which Pi the scanner is running on. Once you start the script, you can use any MQTT message viewer like MQTT-Spy to monitor the messages. For each visible BLE tag, the Pi will publish an MQTT message on the topic 'user/vish/devices/ble_tags/<beacon_name>' with the scan info and last-seen timestamp.
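To make the message format concrete, here is a sketch of what a subscriber would see for one beacon. The field values are made up for illustration; the shape mirrors the msg object built in scanner.js:

```python
import json

# Illustrative payload, mirroring the msg object published by scanner.js
# (the values are made up for this example).
payload = json.dumps({
    "uuid": "0102030405f6",
    "rssi": -67,
    "adv": {"localName": "beacon1"},
    "online": True,
    "lastSeen": 1469900000000,
})

msg = json.loads(payload)
topic = "user/vish/devices/ble_tags/" + msg["adv"]["localName"]
print(topic)          # user/vish/devices/ble_tags/beacon1
print(msg["online"])  # True
```

The dashboard widgets in the next section simply pick individual fields out of this JSON document.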


Creating a dashboard

Creating the dashboard is almost the same as in my previous post. Before going to freeboard, make sure that you have started the scanner.js script; only then will you be able to find your beacons in the datasource list in freeboard. The procedure for creating the new dashboard is as follows:

  1. In Freeboard, add your MQTT broker as a data source subscribing to 'beaconBaseTopic/#'. In my case, my MQTT datasource is named 'Mercury' and beaconBaseTopic is 'user/vish/devices/ble_tags'
  2. Create a pane for each of the beacons
  3. Inside the pane, create a Text widget with title 'Name' and the value selected as "datasources[\"Mercury\"][\"user/vish/devices/ble_tags/beacon1\"][\"adv\"][\"localName\"]".
  4. Create another text field with the title 'Last Seen'. For selecting the value, click on the JS Editor option and enter
    return new Date(datasources[\"Mercury\"][\"user/vish/devices/ble_tags/beacon1\"][\"lastSeen\"]).toTimeString()
  5. Next create a Gauge widget with title 'RSSI' and as data source select "datasources[\"Mercury\"][\"user/vish/devices/ble_tags/beacon1\"][\"rssi\"]". Set the max value to 0 and the min value to -100.
  6. Create an Indicator Light widget with title 'Online'. For the value, enter "datasources[\"Mercury\"][\"user/vish/devices/ble_tags/beacon1\"][\"online\"]"
  7. In this way you can create as many panes as you have beacons. Don't forget to change the MQTT topic names to match your beacon names.

The result will be something like below; here I set up monitoring for 4 beacons.

Since your scanner.js script is running, you can see that the status is updated online.



Now we are ready to test. For testing, I modified the Arduino sketch to emit four different beacon advertisements from the same tag. A video of the test is given below:

In the video, I'm using two Raspberry Pis - the B+ serves as my MQTT broker, and the Pi 3 is where I'm running the freeboard server and the tag scanner code. Both are connected to my WiFi.


Happy Hacking,


[PiIot#04]: Freeboarding with MQTT

<< Prev | Index | Next >>

For the last two weeks I was traveling and also busy running a summer school (Successful Summer School Plant Phenotyping - Wageningen UR), but now the holidays have started and I have more time to continue working on my project.

The Smarti Pi cases I talked about in my previous post have arrived, and I'm quite happy with them.

In this post I will describe how the cases are used. Furthermore I installed and tested the camera with a small python script.


Previous posts:

[Pi IoT] Plant Health Camera #3 - First steps

[Pi IoT] Plant Health Camera #2 - Unboxing

[Pi IoT] Plant Health Camera #1 - Application



As already explained, I'm planning to use the Pi 3 as the main board and the Pi 2 as a slave system. Each system will have a camera connected. The two boards are interconnected via ethernet, while the main connection to the Pi 3 is via WiFi. It might also be an option to use a WiFi dongle in the second Pi; since that was missing from my kit, I hope element14Dave will send it soon, along with the Sense Hat and PiFace Digital 2.

WiFi setup was very easy: it is enabled by default in the current version of Raspbian and I only needed to connect to my router via the menu ( ).



In my previous blog post I told you that I planned to buy the 'Smarti Pi Touch' case. For the additional Pi B I decided to use the plain 'Smarti Pi'. Although this case is quite expensive, it has its benefits. Like the 'Smarti Pi Touch' it is also LEGO compatible. And it also comes with a LEGO compatible case for the camera. So now I have two LEGO compatible cases and two LEGO compatible camera cases. For mounting the 'Smarti Pi' to the 'Smarti Pi Touch' I made a small plate of LEXAN which I mounted on the back of the 'Smarti Pi Touch'. With the supplied bracket the second case is mounted. The cameras can now be put on the LEGO plate covering this case.


LEXAN plate, bracket mounted.



Plate will fit on the back.



Like here.


Second case added.



One camera added.


View from the side


Camera setup

I started by connecting the color camera to the Pi 3 using the instructions found on . The camera software was initially disabled, so I enabled it using the configuration tool.

After a reboot I tested the camera with a small python program:

from picamera import PiCamera
from time import sleep

camera = PiCamera()

# show the live camera preview for 10 seconds
camera.start_preview()
sleep(10)
camera.stop_preview()



Although I prefer to control the Pi via a remote connection from my laptop, in this case I'm using a small wireless keyboard and touchpad.

For testing, the camera is placed on the front of the case. (I really like this LEGO approach.)


Here is a 10-second live video shot using the Python script above.



In the meantime I started to install OpenCV, which I need to combine the different spectral bands from the two cameras in order to obtain an NDVI image. This will be the subject of my next blog post.


stay tuned.


I'm back! It's been a while since my last update ... I've been moving to a new house and things were quite hectic. On the plus side, I'm getting a new office/lab and a big shed just for me. They're still full of boxes and I can't find half my things yet, but that should be sorted in the coming week.




Anyway ... The next components I wanted to integrate in my project were some Philips Hue lights. These are remote-controlled RGB lights. The first application I'm thinking of is to install one light in each of my children's rooms and use it as a night light and wake up light. Using OpenHAB, the light would turn on when they go to bed, and be turned off by the time they fall asleep. In the morning, a similar action would be performed: turn on the light just before they have to wake up, and a few moments later, turn it off for the rest of the day. Because these lights are RGB, they can be configured to use the theme colour of their rooms.


Let's get into the details of this integration


Hue Bridge


The Philips Hue Bridge is, as the name implies, a bridge device allowing your smart devices (phone, computer, Raspberry Pi, ...) to control up to 50 Hue lights and accessories. It connects to the network via an ethernet cable and receives an IP address via DHCP. As I'll be controlling the bridge from OpenHAB, it will need to be configured with a static IP address instead. This can be done using the Hue app.






Porting the Hue functionality to OpenHAB is not that hard using the Hue binding. That would only give the same control of the lights as the smartphone app, although it would make remote control over the internet possible, rather than via the local network only.




As described in the following thread, the item definition for Hue lights in OpenHAB 2.0 is slightly different. Rather than defining them like this:


Switch  Hue_Bulb_1_Switch    {hue="1"}
Color   Hue_Bulb_1_Color     {hue="1"}
Dimmer  Hue_Bulb_1_Dimmer    {hue="1;colorTemperature"}

Switch  Hue_Bulb_2_Switch    {hue="2"}
Color   Hue_Bulb_2_Color     {hue="2"}
Dimmer  Hue_Bulb_2_Dimmer    {hue="2;colorTemperature"}


The items are defined like this:


Switch  Hue_Bulb_1_Switch    {channel="hue:LCT007:0017882155ad:1:color"}
Color   Hue_Bulb_1_Color <colorwheel>    {channel="hue:LCT007:0017882155ad:1:color"}
Dimmer  Hue_Bulb_1_Dimmer    {channel="hue:LCT007:0017882155ad:1:color"}
Dimmer  Hue_Bulb_1_ColorTemperature    {channel="hue:LCT007:0017882155ad:1:color_temperature"}

Switch  Hue_Bulb_2_Switch    {channel="hue:LCT007:0017882155ad:2:brightness"}
Color   Hue_Bulb_2_Color <colorwheel>    {channel="hue:LCT007:0017882155ad:2:color"}
Dimmer  Hue_Bulb_2_Dimmer    {channel="hue:LCT007:0017882155ad:2:brightness"}
Dimmer  Hue_Bulb_2_ColorTemperature    {channel="hue:LCT007:0017882155ad:2:color_temperature"}




The sitemap is like any other integration, where you define how and where the defined items are visualised.


        Frame label="Lights" {
                Switch         item=Hue_Bulb_1_Switch   label="Room 1 Switch"
                Switch         item=Hue_Bulb_2_Switch   label="Room 2 Switch"
                Colorpicker    item=Hue_Bulb_1_Color    label="Room 1 Color"
                Colorpicker    item=Hue_Bulb_2_Color    label="Room 2 Color"
                Slider         item=Hue_Bulb_1_Dimmer   label="Room 1 Brightness"
                Slider         item=Hue_Bulb_2_Dimmer   label="Room 2 Brightness"
                Slider         item=Hue_Bulb_1_ColorTemperature   label="Room 1 Color Temperature"
                Slider         item=Hue_Bulb_2_ColorTemperature   label="Room 2 Color Temperature"
        }



The above configuration results in something that should look like this:

Screen Shot 2016-07-20 at 21.23.50.png




Once the Hue binding was installed, OpenHAB detected the bridge automatically. In order to be able to pair with the bridge and send commands, the button on the bridge needs to be pressed. This is made clear in the "openhab.log" file.


After pressing the button, a unique user is created, allowing OpenHAB to interface with the bridge. The bridge then goes from OFFLINE to ONLINE, as stated in the logs:


2016-07-19 22:38:31.845 [INFO ] [binding.hue.handler.HueBridgeHandler] - Creating new user on Hue bridge - please press the pairing button on the bridge.
2016-07-19 22:38:41.843 [INFO ] [binding.hue.handler.HueBridgeHandler] - Creating new user on Hue bridge - please press the pairing button on the bridge.
2016-07-19 22:38:51.846 [INFO ] [binding.hue.handler.HueBridgeHandler] - Creating new user on Hue bridge - please press the pairing button on the bridge.
2016-07-19 22:39:01.843 [INFO ] [binding.hue.handler.HueBridgeHandler] - Creating new user on Hue bridge - please press the pairing button on the bridge.
2016-07-19 22:39:11.841 [INFO ] [binding.hue.handler.HueBridgeHandler] - Creating new user on Hue bridge - please press the pairing button on the bridge.
2016-07-19 22:39:21.844 [INFO ] [binding.hue.handler.HueBridgeHandler] - Creating new user on Hue bridge - please press the pairing button on the bridge.
2016-07-19 22:39:21.867 [INFO ] [binding.hue.handler.HueBridgeHandler] - User '9cx84jRFcbP4e7sCRm4FuYr3e7laPR55oDzTpptj' has been successfully added to Hue bridge.
2016-07-19 22:39:21.951 [INFO ] [smarthome.event.ThingUpdatedEvent   ] - Thing 'hue:bridge:0017882155ad' has been updated.
2016-07-19 22:39:31.903 [INFO ] [me.event.ThingStatusInfoChangedEvent] - 'hue:bridge:0017882155ad' changed from OFFLINE (CONFIGURATION_ERROR): Not authenticated - press pairing button on the bridge. to ONLINE
2016-07-19 22:39:31.963 [INFO ] [g.discovery.internal.PersistentInbox] - Added new thing 'hue:LCT007:0017882155ad:3' to inbox.
2016-07-19 22:39:31.965 [INFO ] [smarthome.event.InboxAddedEvent     ] - Discovery Result with UID 'hue:LCT007:0017882155ad:3' has been added.
2016-07-19 22:39:32.011 [INFO ] [smarthome.event.ThingUpdatedEvent   ] - Thing 'hue:bridge:0017882155ad' has been updated.




Now begins the fun part: defining automated events using the lights. That's where OpenHAB's powerful rules engine comes into play. Using rules, the wake up light and night light functionality can be implemented. Because both my children have the same routine in general, one set of rules can be applied for both.


Night Light


This first implementation of the night light has the following properties:

  • turn on the light every single day, at 19:00
  • use the color blue for room 1, green for room 2 (the children have themed rooms )
  • use 50% brightness


A first timer is created at the same time, set to trigger 1 hour later, to dim the brightness to 10%.

A second timer turns the lights completely off another hour later.


import org.joda.time.*
import org.openhab.model.script.actions.Timer

var Timer nightLightDim
var Timer nightLightOff

rule "Night Light"
when
    Time cron "0 0 19 * * ?"   // Every day at 19:00
then
    // Light 1
    sendCommand(Hue_Bulb_1_Switch, ON)
    sendCommand(Hue_Bulb_1_Color, HSBType::BLUE)
    sendCommand(Hue_Bulb_1_Dimmer, 50)
    sendCommand(Hue_Bulb_1_ColorTemperature, 0)

    // Light 2
    sendCommand(Hue_Bulb_2_Switch, ON)
    sendCommand(Hue_Bulb_2_Color, HSBType::GREEN)
    sendCommand(Hue_Bulb_2_Dimmer, 50)
    sendCommand(Hue_Bulb_2_ColorTemperature, 0)

    // Timer Dim
    if(nightLightDim != null) {
        nightLightDim.cancel
    }
    nightLightDim = createTimer(now.plusMinutes(60)) [|
        sendCommand(Hue_Bulb_1_Dimmer, 10)
        sendCommand(Hue_Bulb_2_Dimmer, 10)
    ]

    // Timer Turn Off
    if(nightLightOff != null) {
        nightLightOff.cancel
    }
    nightLightOff = createTimer(now.plusMinutes(120)) [|
        sendCommand(Hue_Bulb_1_Switch, OFF)
        sendCommand(Hue_Bulb_2_Switch, OFF)
    ]
end


For testing purposes, I temporarily changed the cron expression to trigger within a few minutes. As expected, the rule triggered, and the lights were configured!


The logs show all actions being triggered within less than 100 milliseconds:


2016-07-20 21:36:03.346 [INFO ] [smarthome.event.ItemCommandEvent    ] - Item 'Hue_Bulb_1_Switch' received command ON
2016-07-20 21:36:03.361 [INFO ] [marthome.event.ItemStateChangedEvent] - Hue_Bulb_1_Switch changed from OFF to ON
2016-07-20 21:36:03.369 [INFO ] [smarthome.event.ItemCommandEvent    ] - Item 'Hue_Bulb_1_Color' received command 240,100,100
2016-07-20 21:36:03.377 [INFO ] [marthome.event.ItemStateChangedEvent] - Hue_Bulb_1_Color changed from 238,88,0 to 240,100,100
2016-07-20 21:36:03.402 [INFO ] [smarthome.event.ItemCommandEvent    ] - Item 'Hue_Bulb_1_Dimmer' received command 50
2016-07-20 21:36:03.407 [INFO ] [marthome.event.ItemStateChangedEvent] - Hue_Bulb_1_Dimmer changed from 0 to 50
2016-07-20 21:36:03.428 [INFO ] [smarthome.event.ItemCommandEvent    ] - Item 'Hue_Bulb_1_ColorTemperature' received command 0


The timer also does its work and reduces the brightness after a given amount of time, as specified in the rule:


2016-07-20 21:42:01.321 [INFO ] [smarthome.event.ItemCommandEvent    ] - Item 'Hue_Bulb_1_Dimmer' received command 10


Wake Up Light


The wake up light does something similar: it turns on 15 minutes before wake-up time. The difference here is that the cron expression takes the day of the week into account. We wouldn't want to wake up the kids too early during the weekend, right?


var Timer wakeUpLightOff

rule "Wake Up Light"
when
    Time cron "0 45 6 ? * MON-FRI"   // Every weekday at 6:45 (Quartz needs '?' in the day-of-month field when day-of-week is set)
then
    // Light 1
    sendCommand(Hue_Bulb_1_Switch, ON)
    sendCommand(Hue_Bulb_1_Color, HSBType::BLUE)
    sendCommand(Hue_Bulb_1_Dimmer, 80)
    sendCommand(Hue_Bulb_1_ColorTemperature, 0)

    // Light 2
    sendCommand(Hue_Bulb_2_Switch, ON)
    sendCommand(Hue_Bulb_2_Color, HSBType::GREEN)
    sendCommand(Hue_Bulb_2_Dimmer, 80)
    sendCommand(Hue_Bulb_2_ColorTemperature, 0)

    // Timer: cancel any leftover timer, then switch off after 30 minutes
    if(wakeUpLightOff != null) {
        wakeUpLightOff.cancel
    }
    wakeUpLightOff = createTimer(now.plusMinutes(30)) [|
        sendCommand(Hue_Bulb_1_Switch, OFF)
        sendCommand(Hue_Bulb_2_Switch, OFF)
    ]
end




To build the frame of the light and be able to mount it on the beds, I experimented with a new method (new for me at least): heat forming acrylic. By applying heat to certain areas of a piece of acrylic, it can be bent into different shapes. Since this is my first time doing it, the shape is not that complex, but it turned out quite well, if I may say so myself.


Here are some pictures of the acrylic being bent into shape:



As you can see, it's possible to make interesting shapes out of a straight piece of acrylic. The tricky part is figuring out the best order to perform the bends in.


To apply heat, I used a mini blow torch and passed it back and forth in a line across the section that needed to be bent. Using a different head of the tool, I carved out a hole in the top section to fit the light's socket. Handy little tool!




Now I just need to repeat this for the second bedroom!


With most of the desired components integrated, I'll start working on the actual alarm clock portion of the project. There's only a bit more than a month left!






This week I managed to make a big step forward in setting up the Command Center. With this post I finalize the basic components on the RPI3, so I hope to work on the features - those that will benefit us, like the presence emulator, etc. - in the coming weeks. Specifically, the post covers installing and configuring openHAB, which provides a web user interface with interesting add-ons for home automation; Mosquitto, which will "glue" together all the code in the Command Center and the openHAB server; and the RF24 module to enable communication with the remote nodes via RF 2.4GHz.


But first, the links to my previous posts and the project status.


Previous Posts

PiIoT - DomPi: Application
PiIoT - DomPi: Intro
PiIoT - DomPi 02: Project Dashboard and first steps in the Living room
PiIoT - DomPi 03: Living room, light control via TV remote
PiIoT - DomPi 04: Movement detection and RF2.4Ghz comms
PiIoT - DomPi 05: Ready for use Living Room and parents and kids´ bedrooms
PiIoT - DomPi 06: Setting up the Command Center - RPI3
PiIoT - DomPi 07: Setting up the Command Center (2)


Project Status


Mosquitto - MQTT installation

Some projects in the Challenge are already leveraging MQTT, so this time I will not go deep into what it is. Why am I using Mosquitto? In the Command Center - RPI3 based - I will be running a number of processes that need to exchange information among themselves. For example, a C module will be in charge of interacting with the 2.4GHz comms, openHAB will be interacting with the Dashboard, and in the future other scripts may be getting data from sensors on the RPI, etc. Mosquitto helps to "glue" all of them together so that they can exchange information and actions in a seamless manner.




Caterina has written a nice post on how she installed Mosquitto: [Pi IoT] Smart Competition Home #4: Smarthome II - Sensors Node & other clients. I will just put here the command lines I executed without mentioning any caveats - these can be found in her post. Thanks clazarom and rhe123 for your comments there!


 sudo wget  
 sudo apt-key add mosquitto-repo.gpg.key  
 rm mosquitto-repo.gpg.key
 cd /etc/apt/sources.list.d/ 
 sudo wget  
 sudo apt-get update  
 sudo apt-get install mosquitto mosquitto-clients


There are, however, some additional packages that I will require to monitor the MQTT channels with the RF24 and Arduino. These are based on libmosquitto. There is, however, some sort of incompatibility, and I need to tweak the process a bit. First I execute sudo apt-get update. When it finishes, in the file /etc/apt/sources.list I replace the following line

  deb jessie main contrib non-free rpi

by this other one

  deb wheezy main contrib non-free rpi


This change allows the RPI to successfully download the packages with this command

  sudo apt-get install libmosquitto0-dev libmosquittopp0-dev


Once installed, I just undo the change in the sources.list file by replacing the word wheezy with jessie and finally rerun "sudo apt-get update". Now we have the libmosquitto packages from the previous version running on jessie. Finding this out took me... long hours... With a previous RPI2 based on wheezy I did not encounter this problem, so it caught me by surprise... but there is some sort of satisfaction when you overcome it. You can find additional details on the installation here and on the compatibility problems here.



Let´s test whether Mosquitto is correctly installed and working. To do so, I run a subscriber in my current SSH session (you can do it in a Terminal window):

 mosquitto_sub -d -t hello/world

and in a separate SSH session (or Terminal window) I execute the publisher

 mosquitto_pub -d -t hello/world -m "If you can read me, Mosquitto is correctly set up :)"


And, voila, here you have a screenshot of the successful communication between both processes using mosquitto:


We are now set for the next step, openHAB!



According to its own website, openHAB is "a vendor and technology agnostic open source automation software for your home". The cool thing is that it can integrate lots of different systems almost off the shelf. For DomPi, I will be leveraging several of its features:

  • the ability to subscribe and publish to Mosquitto channels; openHAB will be able to communicate with any Python or C code and share status, information and actions
  • a webserver with a user interface that can be accessed by any browser
  • configurable sitemaps and item maps that allow a quick deployment of how to present info from the rooms on the webserver
  • it allows further improvements to the user interface and "look and feel", so that I can make it nicer over time
  • applications that support the design of the sitemaps and the configuration; I will not be using these for DomPi at this stage, but when it becomes more complex, they will be of great help

So all in all, it is very scalable and fits my needs of quick delivery and future improvements in features and visual look.


Installation of openHAB

The first thing is to check what the current version of openHAB is, so I go to their download page. In the first section, called openHAB Runtime, I just mouse over the first download button and see from the URL that it is version 1.8.3 at the time of writing this post. Note that there is a beta version 2 that will provide enhanced possibilities; since it is still beta, I will not leverage it at this point. To install openHAB I execute these commands:


 sudo mkdir /opt/openhab
 cd /opt/openhab
 sudo wget
 sudo unzip
 sudo rm


Note that I put 1.8.3 as it is the current version; you may want to replace it with the relevant version you want to install.


openHAB works with add-ons and bindings. This allows you to install only those that you require for your project. For DomPi, for example, we will need the MQTT binding to enable the communication via Mosquitto. I will download all of the add-ons and then just copy the MQTT binding into the addons folder of openHAB. This way, I am installing only this binding, avoiding overloading the RPI with features DomPi does not require. I execute these commands:


 sudo mkdir addons_repo
 cd addons_repo
 sudo wget
 sudo unzip
 sudo rm
 cd /opt/openhab
 sudo cp addons_repo/org.openhab.binding.mqtt-1.8.3.jar addons/org.openhab.binding.mqtt-1.8.3.jar


Just remember the earlier comments about the current version.


Configuration of openHAB

The distribution comes with a default config file that I will be using and modifying for DomPi. The first step is to make this file our config file by just copying it as follows:


  sudo cp /opt/openhab/configurations/openhab_default.cfg /opt/openhab/configurations/openhab.cfg


Let´s now edit the file and make some changes. I´m using nano (sudo nano /opt/openhab/configurations/openhab.cfg). I locate the section called "Transport configurations" - you really need to scroll down... I replace the line #mqtt:<broker>.url=tcp://<host>:1883 with my own line mqtt:dompimqtt.url=tcp://localhost:1883 - remember to remove the #, which makes the line just a comment. Scrolling down a bit more, I replace the line #mqtt:<broker>.retain=<retain> with mqtt:dompimqtt.retain=true. This makes the broker retain the messages sent over the channels, so that when a new node appears it gets the last message of each channel. This is good if your nodes may come up at different moments: you ensure all of them have the latest status of the channels of their interest.
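After these two edits, the relevant (uncommented) lines of openhab.cfg read:

```
mqtt:dompimqtt.url=tcp://localhost:1883
mqtt:dompimqtt.retain=true
```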


Now I'm ready to create the default items file. This file contains the elements that openHAB will interact with, for example the humidity in the main bedroom. The file gives a name to this element as well as the MQTT channel that openHAB should listen to to get the humidity updates. Another example: this file will have an element to turn the light in the living room on and off; the file states what openHAB needs to publish and on which MQTT channel to send the command.


  sudo nano /opt/openhab/configurations/items/default.items


You can find my current file in the attachments; let me just comment on some of the lines included in the default.items file:

 Group All
 Group gPiso (All)

 Group P_Salon "Salon" <video> (gPiso)


This creates a main group called "All" and the ground floor group called gPiso, which belongs to All. Additionally, it creates a last group called P_Salon that will include the elements in my living room. In the complete file you can find an additional group per room I want to control at this point. After creating the groups, let´s create some items. Let me pick two more lines of the file:


 Number Nodo03Temperatura "Temperatura [%.1f C]"  <temperature> (P_Salon) { mqtt="<[dompimqtt:casa/salon/temperatura:state:default] "}
 Switch Lampara_3          "Luz"                  { mqtt=">[dompimqtt:casa/salon/luz:command:ON:1],>[dompimqtt:casa/salon/luz:command:OFF:0]"}


The first line creates an item that will read the temperature of the living room. As a reminder of the architecture: the Arduino in the living room transmits the temperature via RF 2.4GHz to the Command Center; in the Command Center, some C code reads the RF message and publishes it as a Mosquitto message, in this case on the channel casa/salon/temperatura. The openHAB application then receives the temperature via the MQTT message and updates the Nodo03Temperatura item with the value. Coming back to the syntax: the "Number" tag defines the type of the value, Nodo03Temperatura is the name of the item, "Temperatura" is the name that will be displayed on the webserver, and [%.1f C] formats the number with 1 decimal digit and appends a C for Celsius. <temperature> is an icon showing a thermometer that will appear next to the item on the webserver. (P_Salon) is the group this item belongs to, and the last part { mqtt="<[dompimqtt:casa/salon/temperatura:state:default] "} indicates where it will take the data from; note that the "<" means openHAB will read the data from this channel.


The second line creates a Switch type item, so that you can flip it to activate or deactivate it. It is called Lampara_3 (translation: Lamp_3) and will display just "Luz" on the webserver (translation: light). The final part { mqtt=">[dompimqtt:casa/salon/luz:command:ON:1],>[dompimqtt:casa/salon/luz:command:OFF:0]"} states what to do when the switch is in one position or the other. Basically, the ">" means that openHAB will write to the casa/salon/luz Mosquitto channel, sending the payload 1 for the command ON, which is shown as ON in the web interface.
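The glue code on the Command Center side has to speak the payload formats these two items expect. The actual DomPi glue is C code, but the mapping itself can be sketched in a few lines of Python; the helper names here are my own illustration, not part of DomPi:

```python
def temperature_payload(celsius):
    """Payload to publish on casa/salon/temperatura; the item then
    renders it on the web UI with the [%.1f C] format."""
    return "%.1f" % celsius

def light_command(payload):
    """Translate the payload openHAB publishes on casa/salon/luz into
    the lamp action, per the item's >[...:command:ON:1] and
    >[...:command:OFF:0] mappings."""
    if payload == "1":
        return "ON"
    if payload == "0":
        return "OFF"
    raise ValueError("unexpected payload: %r" % payload)
```

So publishing `temperature_payload(21.57)` (i.e. "21.6") on casa/salon/temperatura updates the Nodo03Temperatura item, and a "1" arriving on casa/salon/luz means the lamp should be switched on.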


There is a large number of item types that you can use in openHAB, and you will likely find several that meet your requirements.


Now that we have the items, let´s configure the Sitemap, or how they will be shown on the web interface. I start by creating the relevant file:


  sudo nano /opt/openhab/configurations/sitemaps/default.sitemap


My file looks like this:


sitemap default label="DomPi - My Domek" {
    Frame label="Salon" {
        Switch item=Lampara_3
        Text item=Nodo03Temperatura
        Text item=Nodo03Humedad
        Text item=Nodo03Luminosidad
        Text item=Nodo03Movimiento
        Switch item=Forzar_tx
    }
    Frame label="Dormitorio" {
        Switch item=Lampara_2
        Text item=Nodo02Temperatura
        Text item=Nodo02Humedad
        Text item=Nodo02Luminosidad
    }
    Frame label="Habitacion Bebes" {
        Switch item=Lampara_1
        Text item=Nodo01Temperatura
        Text item=Nodo01Humedad
        Text item=Nodo01Luminosidad
    }
    Frame label="Pasillo" {
        Switch item=Luces_Casa
        Text item=Nodo04Temperatura
        Text item=Nodo04Humedad
        Text item=Nodo04Luminosidad
    }
}



The first line defines the sitemap I want to create and show on the web. Within it there are several frames, each representing one of the rooms or spaces I want to display to the user. Taking the example of the Salon (living room), you can find there a switch item for the lamp, four items displaying the temperature, humidity, luminosity and movement detection, and finally yet another switch to force the node (the Arduino) to update us on the sensor status.


Let´s have a quick look at the web interface and see how this is all coming together. Just type:




replacing the xx´s with my IP. On the right hand side you can see the screenshot so far.


As you can note, there is no data, as nothing has been published to the Mosquitto channels that openHAB is listening to - we don't yet have any communication set up on the RPI between the Command Center and the remote nodes. Let´s start solving this limitation! Next step: the RF24 module.


RF 2.4GHz Comms HW and libraries

As discussed in previous posts, the remote nodes - living room, bedrooms, garage and garden, Arduino based - will communicate with the Command Center, which is RPI3 based. This communication is based on the RF 2.4GHz modules. I already connected the board to the Arduinos in previous weeks; let´s now connect the RF board to the RPI3. As with the Arduinos, although the RF board has 8 pins, the IRQ pin is not required for DomPi, so I only need to connect 7 as per this table:


NRF24L01+ radio module    RPI3 GPIO pin (header pinout here)
CE                        GPIO22 (15)
VCC                       any 3.3V pin
GND                       any GND pin
IRQ                       (none - not required)

More details here.





As for the Arduinos, I´m leveraging the TMRh20 library. To install it in the Command Center I run these commands:

 mkdir RF24
 cd RF24
 chmod +x


Now all is set to start the communication between the RPI3 and the Arduino modules using the RF24 comms. openHAB is ready, and the link between the RF24 and openHAB will be based on Mosquitto.



Nodes Dashboard

More development hopefully next week!


Note - I have intensively used this webpage as a guide to several installations and configurations: The Project - Home Automation For Geeks

A beloved family member has passed away and we have been out of state for a few days, but I wanted to post an update with what I found out about the Solar Panels and how all of the new Bunny Rabbits are doing.



Here is the data sticker; both panels have the same information.  I think this looks promising, but I don't have it set up to accurately verify Amps yet.


Panel 1

This was from Solar Panel 1, around 9am with good sunlight.


Solar Panel 2


Solar Panel 2


So some promise here with potential!


Need to look into what is needed to convert over to battery for storage and allow for RPi use. 


Here are some Bunny Eye Candy pictures.  The white one is the only original rabbit we have now.  The move to the new location and the high temperatures made the others sick.


A local Rabbit breeder had needed to downsize so now some more have been added into the Bunny side of the Farm. 


We are keeping them close to the house right now, since they receive daily water sprinkling to cool off and ice bottles/packs in their cage to help regulate their body heat. 


The plan is to build a Rabbit Colony area in stages, providing an initial large cage holding with plenty of shade and access, then expanding out to the ground, and eventually creating a large enclosure that will rival the Hen House.  But that is down the Farm Dirt Road a bit.  :-)








The last Bunny was an unexpected addition.  At least on my part.  My wife had been talking about how she had read about wild rabbits being added into Rabbit colonies.  Then, surprise surprise, the other day she caught Peter Cottontail foraging in her garden.  She hunted him down with her bare hands and now she has another addition to the growing Colony! 


We have him currently in quarantine to monitor him and allow him to get a little bigger before integration.



Remember, use of Element 14 can result in mental growth, excitement, experimentation and even Making things.  You have been warned!

Previously I discussed designing a dashboard using Freeboard and setting up an MQTT broker with websockets enabled. In this post, I'm going to use both of them and create a dashboard for visualizing data coming through an MQTT stream. If you don't want to set up your own MQTT broker, I found the open broker from HiveMQ handy. You can connect to it at port 8000 for a websockets connection and port 1883 for a normal connection. More info can be found here. Since the dashboard will be running at the client end, we'll be using a JS MQTT client with a websockets connection.



Setting up Freeboard for receiving MQTT packets was a little painful. There are no official plugins available. I found a decent plugin, but it was not really what I wanted. It has quite a few drawbacks:

  1. It was not working for me. (I still don't know why - it might be the way they connect to the broker.)
  2. It can support only one topic subscription. This was a little annoying, as for each topic I need to create a datasource - which in turn creates one client per topic.
  3. It is not easy to select the fields in the datasource.

So I decided to create my own plugin, using it as a starting point. I wanted to be able to subscribe to multiple topics from one client, with ease of selecting datasources for widgets and the possibility of wildcard subscriptions. The new plugin is used in this blog post for all MQTT subscriptions. Currently it resides inside my main dashboard repository, but I'm looking forward to spinning it off from this project and making it a fully fledged project of its own by the end of this challenge. In this post, I'm planning to show only how to use the plugin; I'm not going to cover the technicalities of writing Freeboard plugins or how to set them up. If you want to use this plugin in your project, simply copy the plugin under 'www/freeboard/plugins/thirdparty/freeboard-mqtt-paho' of the git repo and follow the instructions here.
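Wildcard subscriptions follow MQTT's standard topic-filter rules: '+' matches exactly one topic level, '#' (only valid as the last level) matches the remainder. The plugin needs exactly this logic to route incoming messages back to datasource fields. A rough Python rendition of the matching rule (the plugin itself is JavaScript; this is just an illustration):

```python
def topic_matches(pattern, topic):
    """Return True if an MQTT topic filter matches a concrete topic.

    '+' matches exactly one level; '#' matches any remaining levels
    (including zero). Levels are separated by '/'.
    """
    p_levels = pattern.split("/")
    t_levels = topic.split("/")
    for i, p in enumerate(p_levels):
        if p == "#":
            return True                 # '#' swallows everything from here on
        if i >= len(t_levels):
            return False                # topic ran out of levels
        if p != "+" and p != t_levels[i]:
            return False                # literal level mismatch
    return len(p_levels) == len(t_levels)
```

With this, a single subscription like '/user/vish/test/+' covers all three sensor topics used later in this post.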


Setting up Freeboard

A whole working setup of the dashboard, along with the nodejs server, is available on my github. Clone it and it will set up everything. You can run this on your PC or Raspberry Pi. I'm running it from my Pi3, as this will in future serve as the hub for my network.

To clone the repository, go to some location in your computer/raspberry pi and

$ git clone 

Then navigate to the repo directory and install the dependencies using

$ cd PiIoT-dashboard
$ npm install

This will install the dependencies required to host the dashboard. Inside this directory, you will find a modified version of Freeboard at /www/freeboard/ and my mqtt-plugin inside www/freeboard/plugins/thirdparty/freeboard-mqtt-paho/mqtt.paho.plugin.js

Now you can start the dashboard by

$ node server.js 8080

You should see something similar to:


Now you can go to your browser and type <Pi's IP:8080> to load freeboard:



Adding MQTT Client Data source

Now we can go on to create our own MQTT datasource. Go to [ADD] and select 'Paho MQTT Client'.

Now fill in the details such as the broker address and the topics to subscribe to. You can subscribe to multiple topics separated by semicolons. No user authentication is available in the plugin right now. Also enable 'JSON ENABLED' if your messages are JSON encoded. For now, I'm dealing with simple text messages. In this dashboard, I wish to monitor the data from three sensors. Each sensor will publish data on topic '/user/vish/test/sensorX', where X can be 0/1/2. Those are the topics I'm subscribing to in this screenshot, separated by semicolons.

Click 'Save' once you are finished. Now you will be able to see it in your datasources.



Adding widgets

Now I'll add three gauge meters and three sparklines for the values from the three sensors. The video below shows the procedure for adding these widgets.



We are now ready to go. In order to test the setup, I wrote a small shell script to mock the sensors. It will publish random data on the topics mentioned above. This will be enough to test whether our dashboard is getting updated correctly. The shell script is given below:

#!/bin/bash
# Script to publish random sensor values to the MQTT broker
BROKER=<insert broker address>
PORT=1883   # Default port is 1883
for i in {1..5}; do
    mosquitto_pub -t "/user/vish/test/sensor0" -m $((RANDOM%101)) -h $BROKER -p $PORT; sleep 1
    mosquitto_pub -t "/user/vish/test/sensor1" -m $((RANDOM%101)) -h $BROKER -p $PORT; sleep 1
    mosquitto_pub -t "/user/vish/test/sensor2" -m $((RANDOM%101)) -h $BROKER -p $PORT; sleep 1
done


Now save the script, make it executable and run it:

$ chmod
$ ./


Below is a video of the test.

That's it. Now we are able to visualize data streaming through the MQTT broker in Freeboard.


Happy hacking,



<< Prev | Index | Next >>

This installment is about installing and using PuTTY, an open source terminal emulator. In addition, we will cover creating a user account and managing logins via public/private key authentication. The use of public keys eliminates the need for passwords and, therefore, the issues that come with them - namely, various password "guessing" or stealing schemes.


My recent post [[Pi IoT] Hangar Central #3 -- Unboxing the Challenge Kit] received such rave reviews that I decided to continue in the video blogging realm. Today's episode is just shy of 10-minutes and should set you up to access your Raspberry Pi using SSH with public key authentication. Grab your coffee, tea, or might I recommend Tazo Chai Classic Latte. Regardless, and without further delay...


In keeping with my habit of telling you how a post was put together, this episode was recorded using Screencast-O-Matic. I ended up springing for the Pro version at $15 USD, but it's still quite a bit less expensive than my preferred Camtasia, which will set you back a cool $299 USD. Final production was, once again, with Microsoft Movie Maker. And, so you did not have to listen to my voice alone, the keyclick comes from ClicKey.


I hope you find this useful, informative, or entertaining.



In this post I continue with the initial setup of the Command Center. I will focus on how to boot from a USB drive and avoid card corruption (EDIT: the openHAB installation will be done in an additional post, together with the Mosquitto installation). But first, the links to previous posts and the project status.


Previous Posts

PiIoT - DomPi: Application
PiIoT - DomPi: Intro
PiIoT - DomPi 02: Project Dashboard and first steps in the Living room
PiIoT - DomPi 03: Living room, light control via TV remote
PiIoT - DomPi 04: Movement detection and RF2.4Ghz comms
PiIoT - DomPi 05: Ready for use Living Room and parents and kids´ bedrooms
PiIoT - DomPi 06: Setting up the Command Center - RPI3


Project Status


Setting up the RPI3 to boot from a USB Drive

The first question to answer is: why would I want to boot from a USB drive? Well, as with many nice things, you come across them the hard way... SD cards are very good for quickly storing and accessing data; however, they are not designed to be written to constantly. There is a physical limitation, and after some hundreds or thousands of writes to the same physical part of the card, that part loses its capability to store data. The effect is that whatever you write there, you lose, producing SD card corruption. The consequence? You will need to get another SD card and, if you were cautious enough, recover a backup you made in the past - hopefully the recent past...


There are different approaches to avoid SD corruption, or at least to minimize its effect and make the SD card last longer: avoid the graphical interface and use only the command line, move parts of the OS storage to RAM, reduce the number of logs and how frequently they are updated, leave plenty of free space, etc. All of this has some disadvantages. I have come across another solution: moving most of the OS to a USB hard disk. Since HDs are designed for frequent writing, they last longer. So far I have had 2-3 SD cards corrupted, but no HD corruption.


A couple of notes. For the RPI2, technically speaking you don't boot from a USB drive. The RPI always requires an SD card with the boot partition to actually boot. By modifying a line in the configuration, it then refers to the USB drive to continue with the OS boot-up. So in fact you still require both the SD card and the USB drive, the advantage being that the SD card won't get corrupted, since the heavy writing performed by the OS happens on the USB drive. The second note is about the RPI3. It seems that the hardware does allow direct booting from a USB drive as well as from the network; however, there is no software support for doing so yet. More info here. So all in all, the SD card is still required for the RPI3, but this may change in the coming months.


The process looks as follows:

  1. Flash the USB hard drive. In my case I don´t currently have any HDD free at home, so I will be flashing a 32GB USB pendrive. Pendrives suffer from continuous rewrites the same as SD cards do, but I hope that, being 32GB, it won´t suffer that much, and in any case I will move the data to an HDD as soon as I can.
  2. Modify the SD card file to redirect to the USB once the RPI has booted up
  3. Test and clean up


Flashing the USB

Two things to note. First, your USB drive will be completely erased and you will lose any info that was on it, as well as any partitions it had. Second, since I will be flashing a 16GB image, I will need to extend the partition once finished to recover the remaining 16GB of the pendrive's 32GB.

I will follow the same steps as when I did the initial backup in PiIoT - DomPi 06: Setting up the Command Center - RPI3, and then flash the newly created image to the USB. Before flashing the USB you will probably need to unmount its partition(s) - if there were any. I did so using the Mac "Disk Utility" app: in the left-hand side menu you will find the drive - "USB Flash Drive Media" in my case - and, hanging from it, the existing partition(s). Click on them and then click on unmount.


The commands for backing up and flashing are, as a reminder:

sudo dd if=/dev/rdisk1 bs=1m | gzip > ~/Desktop/pi.gz  
gzip -dc ~/Desktop/pi.gz | sudo dd of=/dev/rdisk2 bs=1m  

The second line points at /dev/rdisk2, which is the destination device where I have the pendrive. These lines will take loooong.


Modify the SD card

In order to tell the boot-up process where to find the filesystem required to continue launching Raspbian, I need to modify a file. Before plugging the SD card into my Mac, let's identify which device the USB drive is on the RPI. To do so, plug the pendrive into the RPI now and run the command:


pi@dompi:~ $ df
Filesystem     1K-blocks    Used Available Use% Mounted on
...                  ...
tmpfs               5120       4      5116   1% /run/lock
tmpfs             474028       0    474028   0% /sys/fs/cgroup
/dev/mmcblk0p6     64366   19980     44386  32% /boot
tmpfs              94808       0     94808   0% /run/user/1000
/dev/sda7       13801392 3494756   9582568  27% /media/pi/root
/dev/sda6          64366   19962     44404  32% /media/pi/boot
/dev/sda5          30701     449     27959   2% /media/pi/SETTINGS
/dev/mmcblk0p5     30701     449     27959   2% /media/pi/SETTINGS1

You can see there that /media/pi/root is under /dev/sda7. We will add this to the cmdline file later. If you plan to have more USB drives connected, the approach I'm sharing here probably won't work and you will need to look up the UUID to specify it. More info here. This "quick" approach shall be ok for DomPi though.
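If you'd rather script this lookup than eyeball the df output, the device name can be pulled out programmatically. A small Python sketch of my own, assuming df's usual layout (device in the first column, mount point in the last; it is naive and would break on mount points containing spaces):

```python
# Sample taken from the df output above
SAMPLE_DF = """Filesystem     1K-blocks    Used Available Use% Mounted on
/dev/mmcblk0p6     64366   19980     44386  32% /boot
/dev/sda7       13801392 3494756   9582568  27% /media/pi/root
/dev/sda6          64366   19962     44404  32% /media/pi/boot
"""

def device_for_mount(df_output, mount_point):
    """Return the device (first df column) mounted at mount_point, or None."""
    for line in df_output.splitlines()[1:]:  # skip the header row
        fields = line.split()
        if fields and fields[-1] == mount_point:
            return fields[0]
    return None

print(device_for_mount(SAMPLE_DF, "/media/pi/root"))  # -> /dev/sda7
```

Feeding it the real output of df on the RPI gives the same /dev/sda7 that we will write into cmdline.txt below.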


So I plug the SD card into my Mac. The SD card has three partitions: boot, Recovery and the actual partition with the files. In the boot partition I will modify /boot/cmdline.txt. Currently it has this single line:


dwc_otg.lpm_enable=0 console=ttyAMA0,115200 console=tty1 root=/dev/mmcblk0p7 rootfstype=ext4 elevator=deadline rootwait


And I replace "/dev/mmcblk0p7" with "/dev/sda7". Note that sda7 is what I found out using the df command. More details here and here.


Test and clean-up

I put both the SD card and the USB pendrive into the RPI and power it on. Even if it boots fine, that alone doesn't prove it is using the USB - it may be that the change to cmdline.txt did not take effect. So if it works, turn it off, take out the USB and power it on again: if it now does NOT work, you can be sure that it was booting from the USB! Congratulations. Now we can clean up the drives. The USB contains a partition called /boot that we can just delete - it is only 66MB, but still. On the SD card, you can likewise delete any partitions other than /boot.


The last part would be to resize the main partition so that I don't lose the 16GB mentioned before. I will postpone this action to focus on DomPi and work on more features. In any case, the resizing can be done with GParted or via the command line.


Node's Dashboard

Back to posting new entries after a long, long silent period... I will try to catch up with my original schedule in the next few days, so let's go!


In this entry, I will be focusing on the “Sensors Node” of the smart house. As some may suspect, I will be explaining how the house generates data (from a set of sensors: temperature, pressure, door open/close) which can be distributed to various clients. In this architecture, the sensors are wired to the node (Raspberry Pi 1), which is in charge of:

  1. Fetching data from each sensor (a complete list of sensors is presented in the next sections)
  2. Publishing this data to the “Central Node” (Raspberry Pi 3). To do so, the Sensors Node implements an MQTT client which publishes this content to the MQTT broker(*) in the Central Node.

(*)A previous post described the installation and setup of the MQTT broker, Mosquitto, on a Raspberry Pi 3.
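As a sketch of how a reading can be turned into an MQTT topic and payload, the helper below mirrors the `sensors/<type>` naming used throughout this post. The function name and signature are my own illustration, not part of the project code:

```python
def format_reading(sensor_type, value, unit=""):
    """Build the (topic, payload) pair for one sensor reading,
    using the 'sensors/<type>' topic convention."""
    topic = "sensors/" + sensor_type
    payload = str(value) + ((" " + unit) if unit else "")
    return topic, payload

# e.g. a temperature reading of 21.5 C
print(format_reading("temperature", 21.5, "C"))
```

A pair like this is exactly what the publisher hands to Paho's `client.publish(topic, payload)` later in the post.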


Furthermore, a subscriber MQTT client is built in the Central Node to display the received data in a terminal interface. The other MQTT client will be developed for a smartphone, so the user can access the information directly from their handheld device.


All MQTT clients are implemented using Paho, which provides libraries in Python (used in the Sensors Node and Central Node scripts) and Java (used in the smartphone application).



Wireless communication among nodes is achieved using the same home WiFi network.

Sensors Node: an MQTT publisher client

Initial setup: Raspberry Pi 1- Raspbian 8.0 (jessie) / SSH connection enabled


The sensors node is the main source of information of the smart house. Instead of having independent, autonomous and wireless sensors which can directly send their information to the Central Node, I am using simpler devices that have to be directly connected to a microcontroller. Since the Central Node should not be dependent on where these sensors are located, I include a new node "Sensors Node" (a Raspberry Pi).


Is this a fully automated sensor system? Well... not quite: other approaches present each sensor as its own device, which can be accessed remotely without the need for any intermediate node. This version, however, shows a cheap, fast-to-build platform which, in highly populated systems, could serve to reduce wireless interference (by having several sensors wired up to one node instead of wirelessly connected).


List of elements and sensors

  • Raspberry Pi
  • Temperature sensor
  • Pressure sensor
  • Door sensor (magnetic switch)
  • Alarm button

Hardware connections


The following figure represents the schematic of the Raspberry Pi 1 node. It features the 4 sensors and the corresponding Raspi 1 GPIO port each one connects to.



And the real sensors node….



Sensors board  Sensors and Raspi



This has been connected just to test the node: we still need to place all the components in their right location, especially the door switch!

More details regarding each sensor are provided in the following lines.



Temperature and Pressure sensors


Our temperature and pressure sensors are both connected to the I2C ports of Raspberry Pi 1, SCL and SDA.


temperature and pressure


The I2C protocol only needs two lines to connect several sensors together. Each individual component is identified by a unique byte-sized address. In our case:

  • Temperature sensor - 0x40
  • Pressure sensor - 0x60

We want to emphasize that, even though we are using these default addresses, both chips are designed with extra address inputs which can define new addresses. Thus, we can have more than one of each of these sensors in the platform (for example, distributed among the rooms).


Door Switch & Alarm button


The alarm button and the door switch each use a dedicated GPIO port. They both behave as switches, although they present opposite types of behavior.



This switch follows a normally closed behavior. When the sensor is embedded in a magnetic field, it stays closed, behaving as a short circuit. When there is no such field, the circuit opens.

This door switch is usually sold in two parts: a magnet and the sensor to be wired into the control system. Each part is located on a different part of the door assembly: the magnet is placed on the door itself, whereas the sensor goes on the doorjamb. This way, when the door is closed both parts face each other (the magnetic field affecting the sensor), and when it opens they separate (with no magnetic field around, the sensor opens the circuit, providing a 1 to the digital input).

We include a LED in the circuit, to have a visual confirmation when the door is open.





This switch follows a normally open behavior. It is open by default, and when the button is pressed the circuit closes. For this reason, the schematic places the button between the GPIO port and ground. The digital input will read 1 until the button is pressed (it then connects to ground and changes to 0).

Again, visual confirmation is included with a LED.
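Since the button input is active-low (1 when idle, 0 when pressed), a tiny helper can make the polarity explicit. This function is purely illustrative, not from the project code:

```python
def button_pressed(gpio_value):
    """The alarm button pulls the GPIO line to ground when pressed,
    so a reading of 0 means 'pressed' and 1 means 'idle'."""
    return gpio_value == 0

print(button_pressed(0))  # True: button held down
print(button_pressed(1))  # False: idle
```

Keeping the inversion in one place avoids sprinkling `== 0` checks through the publishing loop.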






Software implementation

Adafruit_logo (3).png

Initial setup: Raspberry Pi 1- Raspbian 8.0 (jessie) / SSH connection enabled


In the sensors node, there is a Python script running to read the sensors information and deliver it to the central node. The code can be found in


I made use of the Adafruit project to read the temperature sensor: they have very complete libraries for several sensors, including the TMP006. The pressure sensor also has libraries available in Python, such as this one. Additionally, the Paho libraries are imported to create the MQTT client.


NOTE: Both Paho and Adafruit require some modules to be installed for Python:

               PAHO - run the command sudo pip install paho-mqtt

               ADAFRUIT - make sure the gpio module is installed. Run the command sudo pip install rpi.gpio


There are four main python files in the project:

  • - the main python file. It deploys the MQTT client and the sensor-reading functions. It works as an infinite loop which, in each iteration, fetches new data from the sensors and sends it to the broker
  • - manages the temperature sensor
  • - manages the pressure sensor
  • - includes the door sensor and alarm button

Sensors reading

I2C Communication

Files: , and  MPL3115A2


The python program in the Raspberry Pi ( uses some Adafruit libraries to interact with the sensors. Using their default addresses, it first initializes each of them and then reads the input data periodically. The information obtained is:

  • From the temperature sensor: temperature of the room and temperature of the device
  • From the pressure sensor: pressure of the room and altitude.


GPIO digital inputs reading

Files: and


The program will be reading either 0 or 1 from the door switch and the alarm button. We also choose two of the GPIO pins to connect them to. In each case:

  • Door sensor: GPIO pin 22* | Behavior: 1 door open / 0 door closed
  • Alarm button: GPIO pin 27* | Behavior: 1 idle / 0 button pressed

*Using GPIO in BCM mode, the functions to read the state of the door and alarm button:

import RPi.GPIO as GPIO

#Use BCM pin numbering
GPIO.setmode(GPIO.BCM)

#Pin numbers:
door_pin = 22
inc1_pin = 27

#Set pins as inputs
GPIO.setup(door_pin, GPIO.IN)
GPIO.setup(inc1_pin, GPIO.IN)
#GPIO.setup(inc2_pin, GPIO.IN)

#Function to read the door state
def check_door():
    #Read input value
    input_state = GPIO.input(door_pin)
    return input_state

#Function to read any of the two other incidents
def check_inc(input):
    #Return -1 for an unknown incident number
    input_state = -1
    if input == 1:
        input_state = GPIO.input(inc1_pin)
    return input_state


Data Delivery



The sensors node also implements an MQTT client. The data collected from the sensors is published with MQTT: we define an MQTT publisher client, which sends data messages categorized under “sensors/type_sensor” topics.


In the same python program that is fetching data, we import Paho's Python library and create a publisher. This publisher will:

  1. Connect to the broker. We used its local IP address(*) to send a connection request.
  2. Publish messages containing:

a) Data from the device

b) Topic representing each sensor

(*)This local IP address is hardcoded in the program and needs to be changed every time we connect to a new network or the Raspberry PI 3 is restarted. The next version of the code will have a scan option to show the user the available devices on the WiFi and let them select the correct one.


Fragment of the loop:

import time

#Loop to read and publish
while True:
    #Read sensors
    door = door_sensor.check_door()
    door_state = "unknown"
    if door == 0:
        door_state = "closed"
    elif door == 1:
        door_state = "open"
    print("Door: " + door_state + " - " + str(door))
    client.publish(id+"/"+d_topic, str(door) + " " + door_state)
    warning = door_sensor.check_inc(1)
    if warning == 0:
        client.publish(id+"/"+w_topic, "!!!")
        print("!Warning: " + str(warning))
    obj_temp = temp_sensor.readObjTempC()
    die_temp = temp_sensor.readDieTempC()
    client.publish(id+"/"+"devTemp", str(obj_temp) + " C")
    client.publish(id+"/"+t_topic, str(die_temp) + " C")
    pressure = pres_sensor.pressure()
    altitude = pres_sensor.altitude()/100
    client.publish(id+"/"+p_topic, str(pressure) + " Pa")
    client.publish(id+"/"+a_topic, str(altitude) + " meters")
    #Pause before the next round of readings
    time.sleep(1)


d_topic, t_topic, p_topic and a_topic are of the form "sensors/<sensor>" (sensors/door, sensors/temperature ...)


Other clients

Central Node subscriber

Initial setup: Raspberry Pi 3 - Raspbian 8.0 (jessie) / SSH connection enabled / Mosquitto MQTT broker installed

Files: and

The first subscriber MQTT client will be on the broker. It has three main functions:

  • Verify the correct functioning of the sensors node and other publishers
  • Display the publishers' data in the main GUI of the platform
  • Fetch data from the publishers to:
    • Store in a database
    • Other uses

Again, we use the Paho libraries and create a python script to implement the MQTT client. It will connect to the broker (locally, to itself) and subscribe to the topic "sensors/#" (to read the info published by the Sensors Node).
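The "sensors/#" subscription works because the MQTT `#` wildcard matches the prefix itself and any number of deeper topic levels. A small illustration of that matching rule (my own helper, not project code; the broker does this matching for us):

```python
def matches_hash(filter_prefix, topic):
    """Check a topic against a 'prefix/#' MQTT filter:
    '#' matches the prefix level itself and everything below it."""
    if topic == filter_prefix:
        return True
    return topic.startswith(filter_prefix + "/")

for t in ["sensors/door", "sensors/temperature", "actuators/light"]:
    print(t, matches_hash("sensors", t))
```

So one subscription covers every sensor topic the publisher uses, and any new sensor added later is picked up automatically.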


In order to display this information, we created a console interface. Using curses, we can define the layout of our screen. This program shows a boxed table which updates with every new incoming message. Other information is displayed as well, such as the result of the connection to the broker or key interactions with the program. Code for the callback on message received:

def on_message(client, userdata, msg):
    #print(msg.topic+ " " + str(msg.payload))
    content = str (msg.payload)
    topic = str(msg.topic)
    #Evaluate message:
    if (topic == "sensors/temperature"):
        console.update_values(screen, console.TEMP_POSITION, content)
    elif (topic == "sensors/devTemp"):
        console.update_values(screen, console.TEMP_POSITION + 1, content)
    elif (topic == "sensors/pressure" or content.find("Pa")!= -1):
        console.update_values(screen, console.PRES_POSITION, content)
    elif (topic == "sensors/altitude" or content.find("meters")!= -1):
        console.update_values(screen, console.ALT_POSITION, content)
    elif (topic == "sensors/door" or content.find("door")!= -1):
        console.update_values(screen, console.DOOR_POSITION, content)
    elif (topic == "sensors/warning" or content.find("warning")!= -1):
        time =   #requires: from datetime import datetime
        console.update_values(screen, console.ALARM_POSITION, "!! ALARM : " + str(time))


    #Check if there is any user input
    if (console.get_char(screen)):
        pass  #handle the key press (e.g. exit the interface)


Smartphone subscriber

android-logo.jpg


Initial setup: Smartphone - Android 4.2.2


It would be very convenient for the residents to have an app on their own smartphone to access the smart house system. Consequently, I developed the first version of it: an app which works as another MQTT subscriber client. Smarthome_Client apk


We still use the Paho libraries, as there is also a Java version which works perfectly for Android apps.


At this point, the smarthome application is only able to read the information sent from the sensors node. In the future, this application should be able to interact with the system: send commands, set up parameters, etc. Moreover, the game-competition part of this project will also have an Android application to monitor each individual's progress, and it will be added to the "Smart home application".




We start our three nodes, verify the local IP addresses used and let the information flow.

To test that everything is working as expected, we check the subscriber client in the central node. Let’s see what happens when the system is up:

The video shows two terminals connected through SSH, one to each Pi:

  • Left terminal - Raspberry Pi 3 "Central Node": implements the MQTT subscriber
  • Right terminal - Raspberry Pi 1 "Sensors Node": implements the MQTT publisher




We have set up a simple yet fully functional infrastructure for our smart home. This system currently only reads some environmental and house-state data, which can be viewed on the Central Node of the house or with an app on the smartphone.



The full code of the project can be found at:


The mechanical design and many of the motion solutions adopted and tested in this part of the project will be reused to create the Dynamic Surface, as mentioned in PiIoT - The perfect reading place #5 [doc]: Architecture and design


About the moving parts

To increase the perception of the image, the flat design is transformed into a solid object where some image elements (hair, eyes, lips, nose) react to human interaction: hair, nose, lips and eyes extrude at different heights with an excursion of 5-10mm. To reach this goal, the back side of the Art-a-tronic will include motors, motor controllers and a microcontroller that activates the animation when a presence is detected near the opera through an ultrasonic sensor, and enables touch via capacitive sensing on some areas near the installation.


Hardware components

The moving mechanism is conditioned by the adopted hardware. The parts involved in this model are the following:


Cypress CY8CKIT-049-42XX

The microcontroller will manage the stepper motors as direct feedback from an ultrasonic sensor.

As the Art-a-tronic is an IoT node, it will interact with the enOcean wireless network.


Geared Stepper motor.jpg

5V geared micro stepper motor provided by GearBest


Motor controller.jpg

Stepper motor controller based on L298 provided by GearBest


Making the parts move

The 3D design of the moving parts has been created in steps, explained below with a series of rendered images. The final Art-a-tronic is moved by seven stepper motors.


Building the base

The white base covers two roles: it is the visible support of the Art-a-tronic, including the fixed parts (the white background in the flat image). When in standby, the moving blue components should appear at the same level as the extruded white visible parts.

Meccanica2-Frontale.jpg Meccanica4.jpg

As shown in the above images, the base (white) contour has been reduced a little to permit the up-down movement of the (blue) parts.

The base is also the support of the entire structure; an external border is provided to make a frame, helping to assemble the four parts into a single solid element. To slide the moving objects vertically, we include a support plane moved from the bottom by the stepper motors. This requires an extra 2mm engraving in the base (white) to host the support surface.

The images below show the rendered base model.

Meccanica7.jpg Meccanica8.jpg

Meccanica6.jpg Meccanica9.jpg


Supporting the moving parts

When the moving elements (blue) are pushed up out of the base, they should be kept in place. The top moving components are assembled over their respective top supports, hosting the motor's threaded shaft and three stabilisation pins for every element. The base is reinforced by a bottom support including a hole for every motor shaft and three pipe-guides for the stabilisation pins.

Meccanica11.jpg Meccanica10.jpg

Top and bottom pre-assembled rendering

MecPart04.jpg MecPart04B.jpg

Top and bottom rendering of the top supports

MecPart01.jpg MecPart01B.jpg

Top and bottom rendering of the bottom supports


Adding the assembly elements

Due to the 3D printer limits - the max printable area is a little less than 200x200 mm - every part is built from four separate elements. Thus we add extra components to keep the final structure together.

Two supports (top and bottom) have been added to every side of the frame, and a rectangular support completes the final build at the bottom center of the ideal cross separating the four sectors.

MecPart03.jpg MecPart03B.jpg


Moving eyes, nose and lips

The four parts of the (blue) hair lay on the (white) base, but the remaining three parts (requiring three more motors) follow the opposite logic: eyes, nose and lips are pushed from the bottom, emerging from the base holes. These parts are self-guided inside the base.

MecPart02.jpg MecPart02B.jpg

For the eyes, nose and lips the top supports have been removed, so the components' height includes the thickness of the base layer.


Ready for printing

The images below show all the 3D printable parts of the first 3D-designed prototype, top and bottom views.

ModelPartsA.jpg ModelPartsB.jpg

ModelPartsC.jpg ModelPartsD.jpg

A first set of components must be assembled before the motors and electronic parts can be set up.


3D print test and minor issues

A first group of test parts has been printed and things look good. Some minor issues will require small changes before the final parts can be created:


  • Include a small support area to the top and bottom supports for better stability
  • Increase the support pins diameter for more robustness
  • Remove the support holes. The parts will be assembled with super-strong PVC and PLA glue already tested for reliability. This will significantly reduce the weight of the moving parts.
  • Replace the M4 nut for the threaded motor shafts with a tubular M4 support


The following images show the 3D printed parts used for testing

IMG-20160710-WA0008.jpg.jpeg IMG-20160710-WA0007.jpg.jpeg

IMG-20160710-WA0006.jpg.jpeg IMG-20160710-WA0002.jpg.jpeg

Another update to keep you seeing movement!


Quick side note: tomorrow I am hoping to test some solar panels that I have been given, to see if they are usable. They were damaged, but I am hoping for enough power generation that they are still viable for the IoT Farm.


The birds have reached a point where they have communicated that if they are not allowed access to the outside world under my control, they will try to Chicken Rush the next person who opens the main Human door to check on them.  :-)


Bowing to the potential of their fowl fury I have re-opened the bird access door (it was damaged during the tractor transport phase) and placed a ladder/ramp for their ease of access.  They seem to appreciate this greatly and are now less aggressive when the Human door is opened.  At first the blinding sun with 100 degree+ Fahrenheit temps seemed to make them less inclined to explore much, so I added a couple of pallet sun shade/wind breaks at either end of their house coupled with a large watering station in the shade and now they are enjoying themselves greatly.


Deana Chicken Coop.jpgHere is my wonderful wife, "mother-hen"ing the hens to be sure everyone is good to go.


Regan Chicken feeding01.jpgHere is Regan assisting with one of the Barred Rocks.


Kenna Chicken 01.jpgKenna with another Barred Rock.


Regan Chicken01.jpgRegan posing with one of her "pretties".


Stay tuned for more updates.  I am hoping that one of the solar panels will be good and can be tied into this project.


The name originates from joining the two words Art and Animatronic. Based on an original opera of the digital artist Lorenzo P. Merlo, I have further processed the opera to make it animated. The final goal is to empower the image components of a visual-art image, making it a series of solid parts linked together that can be touched as well as viewed.


The original: first changes applying image processing

The image below shows the original opera, digital art by Lorenzo Merlo on a vinyl surface


In agreement with the artist, we have planned the following modifications:

  • Crop the image to reduce its original proportions to a square size of 30x30 cm
  • Monochrome image: as the components will be 3D printed, we decided to adopt two colours only
  • Catch the image's essential elements: the final image should be simplified due to the limitations of the adopted production technology.


The image processing has been done with Adobe Photoshop CS5


Step1: cropping and color reduction

The 30x30 cm cropped image has been processed for colour reduction.

Instead of a direct color reduction (e.g. with the Photoshop posterize filter), the image has been converted from full color RGB to a 256-colour palette. Then a selective colour process has been applied manually to reach the desired details on a single colour layer.



Step 2: detail removal

Some details are too small to manage when converting the flat colour elements into a series of 3D components:

  • The hair and eye contours should be refined
  • The lips and nose "pixelation" should be reduced

Note that I have collaborated with the artist at every step. Our effort was to keep the essential image elements despite the drastic simplifications applied.



Step 3: optimisation for 3D printing

The image size is 30x30 cm but the available 3D printer has a maximum printable area limited to 19x20 cm. To create the printable parts, the image has been divided into four identical squares (15x15 cm).

Block01.jpg Block02.jpg Block03.jpg Block04.jpg

Block05.jpg Block06.jpg Block07.jpg

Note that there are seven different parts as eyes, nose and lips will be moved independently.


Step 4: background optimisation

The image blocks should move inside a support. Just as the black parts have been optimised and converted into separate objects, the white background has also been converted into single components. These are the fixed relief parts of the support, while the black components are the moving ones. The red colour in the image below just shows where the white and black components will fit; red represents empty space.



Step 5: extracting the image components

At this point every white and black component is on a separate layer. To manage the bitmap elements in a 3D environment, the next step was to extract Adobe Illustrator paths by selecting every area of the image. The AI file format can be read as path curves by most 3D CAD programs, as can the Autocad DXF format.

Screen Shot 2016-07-03 at 15.03.40.png

The image above shows how the AI paths appear when imported into the Rhino 4 CAD program. The curves have already been converted into flat surfaces.


3D modelling the proof of concept

Before creating the entire mechanics to make the Art-a-tronic live, I proceeded by generating a simplified proof of concept including only the black components of the image. Every element has been given a thickness of 5 mm, with the eyes and the lips more prominent than the hair components.


{gallery} 3D proof of concept

Screen Shot 2016-07-03 at 15.16.10.png

The surfaces extruded by 5mm

Screen Shot 2016-07-03 at 16.12.38.png

Rendered model: 3D front view (original size is 30x30 cm)

Screen Shot 2016-07-03 at 16.03.39.png

Rendered model: 3D perspective view (original size is 30x30 cm)

Screen Shot 2016-07-03 at 16.15.45.png

Rendered model: 3D perspective view (original size is 30x30 cm)


3D printing the proof of concept

In the rendered images with a lateral light the effect looks good, so the next step is to see if we can reproduce the same effect in the real world.


Some information on the printing parameters

  • Filament: PLA deep blue
  • Filament diameter: 1.75 mm
  • Nozzle diameter: 0.4 mm
  • Layer thickness: 0.2 mm


The images below show the results obtained. It is worth turning the page and moving on to design the mechanics of the movement.

IMG-20160705-WA0000.jpg.jpeg IMG-20160705-WA0001.jpg.jpeg IMG-20160705-WA0002.jpg.jpeg

Note: the [doc] tag distinguishes the documental posts from the [tech] technical posts


It was necessary to wait for the confirmation of some key aspects of the project to reach a definitive design. The creation of the components as in the design architecture is now possible, as confirmed in the past days by a second sponsorship from the GearBest company, which has provided a 2.5A laser engraver (I have already published an instructables tutorial about this machine) and 100 geared stepper motors with controllers. A clear detail of the usage of these components will be shown below.


On-site installation notes

One of the aspects that conditions the architecture design is the installation guidelines. As the project will be hosted inside a museum, it should integrate with the already existing structure. The main goal of this organisation is to create a user experience for the visitor, aiming to explain how visually-impaired people can interact with the world around them. The perfect reading place, following this vision, is a technology-empowered environment that can adapt the way it interfaces with the users. These constraints have been distilled into the following rules:


  • The installation should be easily moveable
  • It should be easily managed by non-skilled personnel
  • It should be part of the experience of non-impaired visitors of the site (integrated with the other experiences already available)
  • It will demonstrate the autonomous adaptability of the environment empowered by the IoT technology
  • It should be part of an enjoyable experience for the users (avoiding technical presentations in favour of more game-oriented interaction)
  • It should draw attention to the helpful capabilities of Internet of Things technology, especially for visually-impaired users


Functional scheme

PiIoT Functional scheme.png

The entire project is divided into three main areas:

Main interaction point

It is the core of the system, installed on an accessible desk. It includes both traditional and non-conventional UI components: besides the keyboard, mouse, display and touch screen, there will be a text-to-speech reading system, auto-zooming of images, gesture recognition and more. So it will be possible to grant access to the information content to any kind of user. With position sensors and an NFC reader, together with other kinds of sensors, the main interaction point should be able to recognise the approaching subject, adapting its behaviour and enabling the proper interfaces.



The name originates from joining the two words Art and Animatronic. Based on an original opera of the digital artist Lorenzo P. Merlo, I have further processed the opera to make it animated. The final goal is to empower the image components of a visual-art image, making it a series of solid parts linked together that can be touched as well as viewed.


Dynamic surface

It is a prototypal design of a modulating surface that can represent 3D variable shapes. It can be considered as an extension of the traditional flat display.



PiIoT Connectivity.png

The connectivity introduces two approaches: Internet connectivity enables users (mostly from the main interaction point) to browse, access email, chat and more. Optional external access is possible from outside through a remote server gateway. The Internet connection is granted by the WiFi already existing on-site.

The three main components - Main interaction point, Dynamic surface and Art-a-tronic - are connected together as a series of linked IoT nodes (based on the enOcean technology), together with some environmental sensors.

Internet of Your Things is a project about creating personal IoT spaces for common people and putting people at the center rather than the things.
This page contains the index of the blog posts I created as part of the project


  1. [PiIoT#00]: Internet of Your Things  - Introduction to project, description and goals
  2. [PiIoT#01] : Designing a dash board - Setting up a web based dashboard for monitoring data
  3. [PiIoT#02] : Setting up MQTT broker with WebSockets - Install Mosquitto broker with websockets and JS client tests
  4. [PiIoT#03]: Cheap BLE Beacons with nRF24L01+ - Faking bluetooth tags with low cost nRF24 modules
  5. [PiIot#04]: Freeboarding with MQTT  - Freeboard + MQTT = Simplifying Visualization!!!
  6. [PiIoT#05]: Presence Monitoring with BTLE Beacons - Updating the presence of BTLE Fake beacons to UI
  7. [PiIoT#06]: Ambient monitoring with Enocean sensors - Enocean gateway to MQTT bridge in Python
  8. [PiIoT#07]: Internet of Music Players - Music, Pi and Mopidy: Let's Party!!
  9. [PiIoT#08]: Sensing with SenseHat  - Viewing sensehat data with freeboard


This list will be updated as I publish more content. Keep reloading!


Happy hacking,

- vish


  Start >>

This blog post introduces how to create cheap bluetooth beacons with nRF24L01+ wireless modules. The nRF24L01+ is a 2.4GHz wireless module from Nordic Semiconductors and happens to share a very similar packet format with BLE. This similarity, along with some tinkering in code, can enable these modules to act as a BLE beacon advertising a valid packet. This hack was first published at "Bit-Banging" Bluetooth Low Energy - Dmitry Grinberg, and since then there have been various ports to make it work on a variety of platforms. But there are some compromises in packet length and the data that can be transmitted. More technical details can be found at Lijun // Using nRF24L01+ as A Bluetooth Low Energy Broadcaster/Beacon. The most exciting thing is that these modules are available for a little more than $1 at many outlets, making this one of the cheapest ways to fake BLE beacons.


In this post I use an arduino with an nRF24 and a library I wrote some time ago to fake BLE advertisement packets. I intend to use these in a later stage of the project as bluetooth tags attached to objects for BLE presence detection.


Hardware Setup

Hardware setup is fairly straightforward. You just have to connect the nRF module the way you normally use it with an arduino.

Pin connections are given below:


Arduino (nano)



My setup looks like this:



Downloading library

I have written a small library to make the procedure easy. You can get it from :

To install the library, go to /<your_arduino_sketch_folder>/libraries and issue:

git clone 



Programming arduino

I have given an example along with the library. Once you successfully install the library, you will be able to go to File > Examples > RF24Beacon > beacon in your arduino ide to open the example. It looks like:

// RF24 Beacon

#include <SPI.h>

#include "RF24Beacon.h"

RF24Beacon beacon( 9,10 );

uint8_t customData[] = { 0x01, 0x02, 0x03 };

void setup() {

    beacon.setMAC( 0x01, 0x02, 0x03, 0x04, 0x05, 0xF6 );
    uint8_t temp = beacon.setName( "myBeacon" );
    beacon.setData( customData, 2 );
}

void loop() {

    // beacon.sendData( customData, sizeof(customData) );
    delay( 1000 );
}

  • In line#7, I configure the connection of the nRF module. It should be RF24Beacon beacon( pin_CE, pin_CSN );. If you are using an alternate hardware setup, you might like to modify this.
  • In line#14, the MAC address for the beacon is set. You can skip this; in that case a random MAC (based on the build date of the sketch) will be used as the MAC address.
  • Line#16 sets the name which will be shown while scanning. Here comes the first limitation - you can only set a name of at most 14 characters. Not that bad; this will be enough for my applications.
  • Line#17 sets the custom data that can be sent with the packet. And this is the second limitation - your beacon name+data should not take more than 14 bytes. Here my name is 8 bytes long, so I can pack 6 bytes of data. Here I'm only putting in 2 bytes.
  • Line#24 is the function which transmits a BLE packet. Here, I'm sending a packet every 1 second (line#26).
  • Line#26 shows how to send a packet with custom data. This way I will be able to send some sensor readings in the packet as well.
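The 14-byte budget from the last two bullets is easy to check before flashing. A small shell sketch of the arithmetic (the name and data sizes mirror the example sketch above, and the 14-byte limit is the library constraint described in this post):

```shell
# Check the beacon payload budget: name + custom data must fit in 14 bytes
# (limit as described above; sizes mirror the example sketch)
name="myBeacon"
data_bytes=2

name_bytes=$(printf '%s' "$name" | wc -c)
total=$((name_bytes + data_bytes))
echo "name=${name_bytes}B data=${data_bytes}B total=${total}B of 14B budget"
```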



Testing with Raspberry Pi

Now it's time to test the hack. Power up your Raspberry Pi. I'm using a Pi 3 here. Alternatively, any older version of the Pi with a Bluetooth dongle supporting BLE can also be used.

First we need to install a few packages. Some might already be installed on the Pi, but I'm giving the full list.

$ sudo apt-get update
$ sudo apt-get install screen bluez bluez-tools bluez-hcidump


Now you can use hcitool to listen to advertised packets:

$ sudo hcitool lescan --duplicates

This will list all the available BLE packets around you. There you will be able to see one with the name "myBeacon" (or the name you set in line#16). The '--duplicates' flag is added to continuously list the beacon - otherwise hcitool will list it only once.


Next we'll take a look into the raw BLE packet we are receiving at the Raspberry Pi. What we need to do is start hcitool for scanning and then use hcidump to display the packet. Since I want to run both commands side by side, I'll be using the screen utility. You can find more about screen and how to use it here.


So in first screen, start hcitool with:

$ sudo hcitool lescan --duplicates

and in second screen, start hcidump with

$ sudo hcidump -R -X

This is how my output looks after running these commands:
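To make sense of hcidump's raw bytes: the advertising payload is a sequence of AD structures, each laid out as [length][type][data...], where type 0x09 is the Complete Local Name. A small bash sketch decoding a hypothetical payload carrying the name from the example sketch:

```shell
# Hypothetical raw advertising payload (hex): Flags AD structure + Complete Local Name "myBeacon"
payload="02 01 06 09 09 6d 79 42 65 61 63 6f 6e"

# Walk the [length][type][data...] AD structures; type 09 is the Complete Local Name
name=""
set -- $payload
while [ $# -gt 0 ]; do
    len=$((16#$1)); type=$2; shift 2
    if [ "$type" = "09" ]; then
        # the remaining (len-1) bytes are the UTF-8 name
        for i in $(seq 1 $((len - 1))); do
            name="$name$(printf "\x$1")"; shift
        done
        break
    fi
    shift $((len - 1))
done
echo "advertised name: $name"
```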


Now you will be able to build cheap BLE tags which can be monitored by the Pi.


Happy hacking,




So, I was able to get the iLumi controlled via a Python script, which in turn I was able to add to OpenHAB, and could control the iLumi BR30 from an OpenHAB ColorPicker. However, when I attempted to add the EnOcean Pi 902 to the Raspberry Pi 3 and performed the update/upgrade to support the device, I lost both the Wi-Fi and the Bluetooth functionality of the Raspberry Pi 3. I was able to recover the Wi-Fi connection on the RasPi 3 by performing a dist-upgrade and then updating the firmware on the system to 4.4.3. This corrected the Wi-Fi issue, but I am still unable to get the Bluetooth device on the RasPi 3 to enable. The system seems to think there is no Bluetooth device on the board, which I know there is since it was working prior to the upgrade. I'll have to see if I can get this to come up again or just add a USB Bluetooth dongle to the board to get past it.

This assisted me in getting the WiFi back up and working:


Also, I was never able to get the RasPi to see the EnOcean Pi 902, so I was wondering if anyone knows whether the device is supported with the Raspberry Pi 3 and Jessie, or whether I need to use the RasPi B+ to connect the EnOcean Pi 902.

The kit arrived just in time and I have worked on setting up the environment for the Command Center, based on the Raspberry Pi 3. In this post I cover the initial Command Center setup, securing SSH with SSH keys, installing and tunneling VNC through SSH, a post focused mostly on security. But first an index to my previous posts for reference - thanks mcb1 for your suggestion last time - and the project status view.


Previous Posts

PiIoT - DomPi: ApplicationPiIoT - DomPi: IntroPiIoT - DomPi 02: Project Dashboard and first steps in the Living room
PiIoT - DomPi 03: Living room, light control via TV remotePiIoT - DomPi 04: Movement detection and RF2.4Ghz commsPiIoT - DomPi 05: Ready for use Living Room and parents and kids´ bedrooms


Project Status


Initial Setup of the Command Center

Let's start setting up the new RPI3 just received from Element14. Although it is not required, I like to make an initial backup of the SD card. To do so, after inserting the SD card in the adapter I put it in my Mac. Please note that all of the commands will be for Mac (sorry about this limitation...) and will be using the Terminal app. The command I'm using for the backup is:


sudo dd if=/dev/rdisk1 bs=1m | gzip > ~/Desktop/pi.gz


A couple of comments on this: /dev/rdisk1 is the device (SD card) I just inserted. To find yours, you can use the Mac app "Disk Utility" and locate the SD card in the left hand side menu. You will find it without the "r", that is, "disk1". By using /dev/rdisk1 with the command, you access the disk in raw mode, making the data transfer much faster. Another note: you need to be super user - or logged in with an account with these rights - to execute the dd command. Last comment: I pipe it through gzip (| gzip in the command line) to avoid storing 16 GB; the final file comes to about 2.5 GB. This process will take long, it will take looong; on my computer it took twenty-something minutes... You can check the progress by pressing Control-T every now and then. But... be patient... I insist, this step is not necessary as you can always obtain the latest NOOBS from here, but I prefer doing so and storing this in my home server. Whenever you want to restore this image, you can do it with the command below, and the same comments as above apply.


gzip -dc ~/Desktop/pi.gz | sudo dd of=/dev/rdisk1 bs=1m


In any case, backing up the SD is a healthy thing to do from time to time.
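Since the backup runs for so long, it is worth verifying the pipeline once. The same dd | gzip round trip can be exercised on a small scratch file instead of /dev/rdisk1; a sketch (file paths are arbitrary, and bs=1024 is used for portability with GNU dd):

```shell
# Hypothetical stand-in for the SD card: a small scratch image file
dd if=/dev/zero of=/tmp/sdcard.img bs=1024 count=64 2>/dev/null
printf 'boot partition data' | dd of=/tmp/sdcard.img conv=notrunc 2>/dev/null

# Back it up compressed, exactly as the post does with /dev/rdisk1
dd if=/tmp/sdcard.img bs=1024 2>/dev/null | gzip > /tmp/pi.gz

# Restore it and confirm the image survived the round trip unchanged
gzip -dc /tmp/pi.gz | dd of=/tmp/restored.img bs=1024 2>/dev/null
cmp /tmp/sdcard.img /tmp/restored.img && echo "backup verified"
```

Against the real card, where restoring just to test is impractical, gzip -t /tmp/pi.gz is a cheaper integrity check of the compressed archive.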


After plugging the RPI into my TV via HDMI and connecting an external keyboard and mouse to the USB ports, it is time to start it up and continue configuring the RPI. I have realized that the NOOBS that came with the SD directly starts the graphical interface, which I personally prefer for the first steps. The first key step is to change the password, better now than to forget about this later and expose your RPI and your whole home network on the Internet. Just go to Menu->Preferences->Raspberry Pi Configuration and you can change it there. While we are here, I change the Hostname to DomPi, a bit of customization for the project.


Wifi Setup - what I find great in the RPI3 is that the Wifi module is already built in; no need for any dongle, which makes this much easier. One important thing is that the RPI always gets the same IP address; otherwise, each time you boot it up it may change and you will need to find out the IP before being able to connect. You can configure a static IP via the RPI configuration, but I prefer to enforce the IP address on the router instead. I let my router manage all of the IPs. This also makes it easier if I ever take the RPI to another home/network, as it will continue to use DHCP to get the address. All in all, I obtain the MAC address of the Wifi interface in a Terminal with the ifconfig command.


A note on security and SSH Keys

As said above, security on devices connected to the Internet is a must, and I learned it "almost" the bad way. The first time I had a RPI, I installed VNC to be able to control the Desktop from another computer and avoid having to connect it to the TV all the time. Since I wanted to access VNC from a remote location, I opened the ports in the router and went on vacation. After a couple of days, I wanted to log on to the VNC to work something out and... I could not enter. Luckily enough I had put a "difficult" password, and the net effect is that I could not log into my own RPI, but neither could the intruder. I was so surprised that somebody was even interested in breaking into my network - I mean, I'm just yet another home on the Internet...


My lesson learned from that project, and one that will apply here, is to configure the SSH server in the RPI to require a 4096-bit key. This should be better than any password I can create, hehe, and remember. Let´s start by generating the public and private keys on my Mac. To do so, I typed:


ssh-keygen -b 4096


and changed the name of the files to "id_rsa_dompi_cx", and left the default folder. When asked for the passphrase, I typed a phrase I can remember and is long enough. This passphrase will encrypt the key to be generated. The next step is to copy the public key just generated in the Mac into the RPI. From a Terminal window in the Mac I typed:


scp .ssh/ pi@


You can modify the IP address to fit yours. The next step is to add the public key just copied to the authorized key file in the Raspberry Pi. To do so, I did ssh to the RPI (or just type the following directly on a bash window on the RPI) and typed:


chmod 700 .ssh
cat .ssh/ >> .ssh/authorized_keys
chmod 600 .ssh/authorized_keys


With this, the RPI is already accepting an SSH request from my Mac if it is encrypted with the private key. This way, I don't need to type any password. To test the connection you can execute this command in the Mac Terminal:


ssh -i .ssh/id_rsa_dompi_cx pi@


It should ask for the passphrase. If you allow it to be included in the keychain, you will not need to type any password any more when ssh´ing into the RPI. To make it easier and faster I have created an alias:


nano .bash_profile


And added this line:


alias sshdompi='ssh -i .ssh/id_rsa_dompi_cx pi@'


After relaunching the Terminal, I just need to type sshdompi and I will be ssh´d into the RPI without any password typing or remembering any other command.
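An alternative to the bash alias, achieving the same result without touching .bash_profile, is an entry in the SSH client configuration on the Mac. A sketch (the Host label and IP address here are hypothetical placeholders for your own values):

```
# ~/.ssh/config
Host dompi
    HostName 192.168.1.50
    User pi
    IdentityFile ~/.ssh/id_rsa_dompi_cx
```

With this in place, a plain ssh dompi opens the same key-based session, and scp and rsync pick up the settings as well.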


The final step is to configure the SSH to only accept connections that are based on public keys. For that in the RPI I edit the following file:

sudo nano /etc/ssh/sshd_config


And I checked that it had these lines in it:

PasswordAuthentication no
RSAAuthentication yes
PubkeyAuthentication yes

and I relaunched it with: sudo /etc/init.d/ssh reload
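Before reloading, a quick grep guards against typos in the directives. The sketch below runs against a hypothetical copy in /tmp so it is safe to try anywhere; on the RPI you would point it at /etc/ssh/sshd_config:

```shell
# Hypothetical copy of the relevant sshd_config lines
cat > /tmp/sshd_config <<'EOF'
PasswordAuthentication no
RSAAuthentication yes
PubkeyAuthentication yes
EOF

# Fail loudly if password logins are still allowed
if grep -q '^PasswordAuthentication no' /tmp/sshd_config; then
    echo "password logins disabled"
else
    echo "WARNING: password logins still enabled"
fi
```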


I am conscious that I have only briefly touched upon security and there are many more points to take into account, but this is not the objective of this post. There is an interesting post, [Pi IoT] IoT Security: Tips to Protect your Device from Bad Hackers, and I would recommend this web page for SSH keys.


Installing tightvnc

To get the VNC server on the Command Center I followed the instructions here. There is no special callout other than that I did set it up to run at boot.


Tunneling VNC over SSH

As said above, the importance of securing devices exposed to the Internet cannot be stressed enough. The tightvnc version used here transmits data between devices without encryption. So the best I can do is tunnel the VNC over SSH and achieve two goals at the same time:

  1. the data will be encrypted between my Mac and the RPI - especially important when I communicate with it over the Internet
  2. instead of only using a password to connect to VNC, this solution uses first the private-public key encryption as per above and then the VNC password, limiting brute force attacks - as I wrote in the first paragraphs, I learned this the hard way...

There are two parts, on my Mac and on the RPI.

Mac VNC over SSH setup

Starting with the Mac, there are again two steps: first, ssh forwarding a local port to the correct RPI port, and second, launching the VNC client against the local port instead of the RPI remote port. The ssh command looks like this (an explanation of the parameters can be found here):


ssh -L 5901: -N -f -l pi


Since it can be challenging to remember the command I just created an alias as per above:

alias sshvncint='ssh -L 5901:localhost:5900 -N -f -l pi'


Actually I created two aliases, the internal one (above) and an external one that I use when not on the home network - just modifying the IP address and entering the router WAN´s address (remember to configure the router´s ports). When launching VNC, the address of the VNC server to enter is no longer the RPI IP address, but "localhost:5901", or "localhost:1" if you modify the VNC config and specify there the port to use on the Mac.


Force VNC to accept only SSH

In principle, just not opening the VNC server port on the router should already prevent somebody on the Internet from accessing my VNC server without an SSH tunnel and the right private key. However, it does not harm to quickly set up VNC to accept only sessions originating on the RPI or localhost and ignore the rest. To do so I edit again the file /etc/init.d/vncboot (see again this link) and modify the appropriate line to:


su - $USER -c "/usr/bin/vncserver :1 -geometry 1280x800 -depth 16 -pixelformat rgb565 -localhost -nolisten tcp"



The -localhost parameter makes VNC listen only on the local interface, and -nolisten tcp disables port 6001. Further info here.


To sum up, we now have a VNC server that only listens for connections on the local host; these connections can come from the RPI or from the outside. If coming from the outside, they have to arrive via the SSH tunnel. To connect to the SSH tunnel, the client has to have the appropriate private key, which the RPI will verify against the public key. Besides that, once the connection with the VNC server is established, the server will still ask for the VNC password. All in all, I'd say quite a secure setup to avoid exposing our network on the Internet. This can still be improved by modifying the default ports of SSH and VNC if you wish to.



Nodes´ Dashboard




     For my project I need a microphone for giving voice commands and detecting sounds. Because the Raspberry Pi board doesn't have a microphone input, I used an external USB sound card adapter for this task, a Konig 3D Sound, based on the C-Media CM108 Audio Controller. I also added a small audio amplifier and speaker for music and audio feedback.

     The USB sound card is connected to the Raspberry Pi through a USB hub which also acts as a power distribution board for all components, see the image below:




In order to install/enable the USB sound card, let's check the USB devices attached to the RPi:


pi@pilot1:~ $ lsusb
Bus 001 Device 005: ID 0d8c:013c C-Media Electronics, Inc. CM108 Audio Controller
Bus 001 Device 004: ID 1a40:0101 Terminus Technology Inc. 4-Port HUB
Bus 001 Device 003: ID 0424:ec00 Standard Microsystems Corp. SMSC9512/9514 Fast Ethernet Adapter
Bus 001 Device 002: ID 0424:9514 Standard Microsystems Corp.
Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub


and sound modules:


pi@pilot1:~ $ cat /proc/asound/modules

 0 snd_bcm2835
 1 snd_usb_audio

showing that the USB soundcard is visible.

Next, I need to set snd_usb_audio (my USB sound card) to be the default playback device (position 0). One way to do this is to edit alsa-base.conf to load snd-usb-audio as the first option. Compared to Raspbian Wheezy, in Raspbian Jessie this file no longer exists by default, so I have to create it with the following content:


pi@pilot1:~ $ sudo nano /etc/modprobe.d/alsa-base.conf

options snd_usb_audio index=0
options snd_bcm2835 index=1
options snd slots=snd-usb-audio,snd-bcm2835
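After a reboot, the new order can be confirmed by reading /proc/asound/modules again. A small sketch of the check, run here against the expected file content inlined as a string (on the Pi you would read the real file):

```shell
# Expected content of /proc/asound/modules after the reorder
modules=" 0 snd_usb_audio
 1 snd_bcm2835"

# Slot 0 is the ALSA default; confirm the USB card claimed it
default_card=$(echo "$modules" | awk '$1 == 0 { print $2 }')
echo "default card: $default_card"
```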


In the /usr/share/alsa/alsa.conf configuration file, check that you have the following lines:


defaults.ctl.card 0

defaults.pcm.card 0


Make sure you have the ~/.asoundrc file populated as follows:


pcm.!default {
        type hw
        card 0
}
ctl.!default {
        type hw
        card 0
}


If not installed, install alsa-base, alsa-utils and mpg321 (or mpg123, mplayer, etc.):


sudo apt-get update
sudo apt-get upgrade
sudo apt-get install alsa-base alsa-utils mpg321 mplayer
sudo reboot


To check your configuration use command:


amixer -c 0 - to display the current settings. Mine looks like this:


pi@pilot1:~ $ amixer -c 0
Simple mixer control 'Speaker',0
  Capabilities: pvolume pswitch pswitch-joined
  Playback channels: Front Left - Front Right
  Limits: Playback 0 - 151
  Front Left: Playback 137 [91%] [-2.69dB] [on]
  Front Right: Playback 137 [91%] [-2.69dB] [on]
Simple mixer control 'Mic',0
  Capabilities: pvolume pvolume-joined cvolume cvolume-joined pswitch pswitch-joined cswitch cswitch-joined
  Playback channels: Mono
  Capture channels: Mono
  Limits: Playback 0 - 127 Capture 0 - 16
  Mono: Playback 26 [20%] [4.87dB] [off] Capture 0 [0%] [0.00dB] [on]
Simple mixer control 'Auto Gain Control',0
  Capabilities: pswitch pswitch-joined
  Playback channels: Mono
  Mono: Playback [on]
pi@pilot1:~ $




alsamixer -c 0 - to modify the speaker and microphone levels.


To check if the sound card is really working, launch for example:


aplay -D plughw:0,0 /usr/share/sounds/alsa/Front_Center.wav


Ok, with playback working, let's check the recording side. Plug microphone into USB sound card input and launch:


arecord -D plughw:0,0 -f cd /home/pi/Music/test.wav


Use Ctrl+C to stop recording.
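A note on sizing: -f cd means 16-bit samples at 44100 Hz in stereo, so test recordings grow quickly. The arithmetic in shell:

```shell
# "-f cd" = 44100 Hz, 16-bit (2 bytes) samples, 2 channels
rate=44100
bytes_per_sample=2
channels=2

per_second=$((rate * bytes_per_sample * channels))
per_minute=$((per_second * 60))
echo "${per_second} bytes/s, about $((per_minute / 1024 / 1024)) MiB per minute"
```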


Check the result with the following command:


aplay -D plughw:0,0 /home/pi/Music/test.wav


Use alsamixer -c 0 to adjust the sound levels to meet your requirements. That's it. Now both audio playback and recording are working on the Raspberry Pi.


If someone knows a better/cleaner way to install an external USB sound card with audio out and microphone, please share it in the comments below.


     Now it is time to take all the pieces and put everything in a nice box. As with most of my projects, it was a real struggle to find an enclosure that fits all the components I have so far and at the same time is suitable to be put in plain sight. So after spending more than a week on this subject, I found an enclosure from a broken wireless router which looks like the right choice; still not perfect, but it will do for now:




The situation is about the same on the sensor side, where the hunt for enclosures is still on. So far only one of them has an enclosure, made from a kitchen timer...



Oddly enough, the "antenna" is actually the temperature sensor, which has to be as far as possible from the ESP module to be able to show real temperature values. So far I have not found a way to shield the sensor from the radio interference caused by the ESP module and keep it inside the enclosure.

The kit arrived and one of my children had moved it into my aspiring office area, placing it with some old Element14 boxes, yet forgetting to tell me that something new had come! So just a quick update to show that yes, I do have my entire kit! Now to make sure the kiddos know to let Dad know when new items come in!


Pi IoT Kit.jpg


This kit included the following:


Raspberry Pi 3


Both the EnOcean Sensor boxes show 902 US Version.


Very excited to start putting them together, the Sensor Kit has made me rethink some of my ideas and new things are simmering in how to use these great items!



After quite some time since the last post, things finally calmed down, permitting the writing of post #2 and soon post #3.



Even though I am a newcomer to the element14 community, I quickly realized that in here there is an abundance of very capable developers and craftsmen creating very interesting products and introducing literally super cool concepts with ease. In that regard, I decided to differentiate a bit from the other projects before jumping directly into the implementation and getting my hands "dirty" with tools, hardware and software.


In this post I will introduce my own opinions and objectives for smart spaces, and for the IoT as a means of building automation in general, as I have experienced and read about them. So this and the next post will follow a think first, act later approach. Nevertheless, in the meantime I am actually working on the prototype in the background, so more (practical) posts will follow soon.



Identifying the market

This task is far from trivial and of course way beyond the scope of this challenge. On the other hand, what most of us dream of is that one day the market takes off and making living spaces smarter eventually becomes as simple as creating an application for a smartphone. Thus, less DIY, fewer customized per-case setups, less frustration and hopefully fewer businesses failing to deliver on their promises. The smart object should first become essential to everyday life before becoming a commodity.


I frequently read diverse marketing and engineering materials on the subject, some easier and some harder to follow. Most investigate a narrow slice of the IoT potential: wearables, energy, sports analytics, cloud or even elder monitoring, to name a few. By luck I found this report by McKinsey focusing on connected homes and thus directly on the scope of this challenge. It's very easy to go through their infographics and I do recommend you do so. Myself, I will quote two elements: firstly, security and safety have the largest smart object market share, followed by utilities management, and secondly, 40% of owners don't even know the brand of their devices!


The adoption rate, as expected, varies by social class...


...each one identifying different barriers to adoption:




Design objectives

As I mentioned in the previous post (here), the, quite optimistic to be honest, scope of this project is to create/design, as best as possible, a building management system (BMS) and not a smart home setup solely for me and my curiosity. Thus the scalability and expandability of the system, both in available technologies and cost, are taken into account when selecting the hardware and software.


Additionally, since I want a more generic design, I chose hardware and software that are admittedly less supported by the community and maybe the market, but offer some innovative concepts that I believe can finally alleviate most of the bottlenecks as seen by the general consumer public, and not by tinkerers and engineers, retired or not.





Below are my primary aims driving my decisions during the design, hardware picking as well as software development.

Be as low cost as possible for the offered functionality.

This means that even if I can have 10 Raspberry Pis, each one acting as a thermometer, it doesn't mean that it is the optimal way. Remember that in order for IoT building automation to gain dominance over established protocols like KNX, LonWorks and BACnet, the IoT solutions should be at least as cheap as those are. At the moment, as the current market stands, if someone wants to achieve the same functionality as those building automation systems, he has to pay much, much more, let alone the frustration (indeed, for the general public) of installing and configuring it.


To address that, I will choose hardware that is as low cost as possible while being just enough for the specific required functionality. Optimizing a system, though, takes a considerable amount of time, and not all of it will be done during this project. I will try, however, to highlight my insights for future steps where a specific module of what I propose can be further scaled down, reducing the cost. A good example of this is the partial transfer of the intelligence all of us are going to put on the Raspberry Pi or other microcomputers to much thinner hardware and embedded systems using solely C code instead of high level programming and heavy libraries. Although truly ideal, the overhead can be huge within the scope of this challenge, so that has been left for the future.

Differentiate from the established automation systems, integrate human in the center.

If we want IoT building management to grow to mass market scale, then we shouldn't simply redo the same functionality already achieved by the established systems. IoT additionally brings ubiquitous networking, computing and sensing capabilities. It enables intelligence in the last meter of the networks (the things). Thanks to the latter, and to the easy data analytics that have recently emerged, the human element can be better integrated and its stochastic behavior modeled and predicted, which is a true differentiating aspect compared to the legacy automation systems.


For this reason I will try to create some concepts, sensing and actions that are usually hard to reach for the established automation systems, either because they are isolated (garden monitoring), because they are of sub-par importance relative to their control device cost (per-device monitoring), or because they simply cannot be offered in their current state (occupant tracking with smartphones).

Give high level of energy and data feedback to the occupant.

This is related to the previous one: not forgetting the human, not only in comfort and design decision making, but also in the everyday analytics. Many scientific and market reports show that by giving occupants awareness of the energy they use, there is a 10-20% potential decrease in energy use just by educating them. That's a sweet possibility if we consider the low investment required to offer it. Just a simple metering point, like the smart meters at the mains phase, could give you measurable insights. Make that a competition with your neighbors, or a game like clazarom's, and the potential increases.


So graphing and data mining are very important tools in our arsenal for giving a high level of data to the occupants and various other stakeholders, like the management agency of the building, or the city authorities and transportation companies if you are lucky enough to live in one of the "beta testing" Smart Cities. Fortunately, there are already plenty of analytics-as-a-service solutions in the cloud which can help in that direction.

Security not to be forgotten!

Most engineers, although concerned about privacy, are mostly concerned about the security and cryptography aspects. So we tend to search for the best encryption protocols, the most reliable key exchange and the most robust communication, while on the other hand we are light-hearted about which websites we visit and where we store our personal emails. Well, my experience with the public is that even if they still use the latter tools in their online presence, they tend to be over-concerned about the privacy aspect of IoT. Maybe it also has to do with the fact that the objects are far more tangible compared to, let's say, an email account scanned by a bot in order to serve you targeted advertisements. So yes, email privacy (for the general public) has far less leverage compared to, let's say, a hijacked smart webcam.


Make it easy!

Very closely related to the previous one, but from another perspective. We as engineers are used to designing complex systems that just work but are frequently complex not only to debug and configure but also to run in the long term. Most users are not engineers, and most probably they don't know the brand, let alone how it works. One solution, currently followed by most new businesses and Kickstarter campaigns, is isolated functionality: targeted, limited-scope products with a set and forget (for a bit) approach. I think the smart locks, the smart thermostats and most probably the security alarms, among others, belong in this group.


And yes, though some things should intentionally be kept simple, especially in the security domain, the IoT potential is in the connected objects. So my belief is that businesses face a conflict of interest between addressing interoperability and providing at the same time a self-contained, easy to use and deploy product. If you attempt to address both as a young company, you will face the burden of interoperability with so many fragmented services and objects that it could become unsustainable for a startup.




Make it personal!

The IoT's inherent differentiating factor is monitoring human activities much more closely and integrating their habits much better. From my experience, a system that heavily interferes with the habits and lifestyle of the consumer needs to place the user at the center of consideration if we wish to have mass market adoption.


I am not an expert in that domain, but I will try my best, for example with motion and location tracking modules, and probably by attempting to learn part of the occupant's comfort preferences (to be checked again).

And yet, after all, make them sustainable!

Well, after all, we must not forget that smart spaces can lead to reduced energy use, or at least better regulation of the power and time of use. All those are cool aspects and belong to the differentiating factors of smart objects and big data. As designers of intelligent building systems we have to consider the sustainability aspect not only of the controlled spaces but also of the actual devices. We are happily estimating the billions of devices to be present in some years; what about their environmental cost? Don't get me wrong, I am not implying the minuscule (or not) energy used by the objects, but mostly the "grey energy" required to produce and decommission them. And that... is an open question for the future...







For quite some time my home automation has been built around a core app made in Java EE. As part of this challenge I'm improving the core and integrating it more tightly through MQTT. The core is responsible for coordinating all nodes and enforcing the business rules for our home.



Architecture

The Thuis core runs on the Raspberry Pi 3 in a WildFly container. Like all nodes it's connected to the MQTT broker for communication with the rest of the system. It also takes care of communication with some external applications, like Plex (media server) and Netatmo (weather station). In this blog I'll focus on the integration of MQTT, Z-Way devices and the rules.


The building blocks of the application are as follows:

  • Model:
    • Devices
    • Rooms
    • Scenes
    • Rules
  • Commands
  • Controller


Each of these will be described in this blog. Currently all models and rules are defined as static objects in Java code; the goal is to put this in the database at some point and make it editable through a UI, but that will be after the challenge ends.


Device Model

The two base models are Device and Room. To be combined with Device there are different types of Actuator and Sensor. The following image and table show the interfaces. There are several implementations, for example there is MqttSwitch which implements Device and Switch and defines a switch which can be controlled through MQTT.

Device, Actuator and Sensor class diagram


Device - General definition of a (virtual) device: an identifier and its status (type will be overridden by more specific interfaces)
Actuator - A device with controls
Switch (extends Actuator) - A switch with on/off and toggle features
Dimmer (extends Switch) - A switch which can also have values between 0 and 100, where 0 is off
Thermostat (extends Switch) - A thermostat supporting a set point for the temperature. It can be turned on/off, which will put it on predefined on and off temperatures
Sensor - A device that provides a (single) value of a sensor
BinarySensor (extends Sensor) - A sensor with a value that can be true or false
MultilevelSensor (extends Sensor) - A sensor with an arbitrary value and a configurable unit of measurement


All devices are virtual devices, each with a single function. For example a sensor that can measure both temperature and trigger on movement will be implemented as two sensors: a MultilevelSensor and a BinarySensor.


In this blog post we'll describe two of our rooms: the living room and the kitchen. This gives us the following definitions:

package nl.edubits.thuis.server.devices;

public class Rooms {
  public static Room living = new Room("living");
  public static Room office = new Room("office");
  public static Room kitchen = new Room("kitchen");
}


package nl.edubits.thuis.server.devices;

import static nl.edubits.thuis.server.devices.Rooms.kitchen;
import static nl.edubits.thuis.server.devices.Rooms.living;
/* other imports */

public class Devices {
  public static MqttSwitch livingMoodTop = new MqttSwitch(living, "moodTop");
  public static MqttSwitch livingMoodBottom = new MqttSwitch(living, "moodBottom");
  public static MqttSwitch livingMoodChristmas = new MqttSwitch(living, "moodChristmas");
  public static MqttDimmer livingMain = new MqttDimmer(living, "main");

  public static MqttBinarySensor kitchenMovement = new MqttBinarySensor(kitchen, "movement");
  public static MqttMultiLevelSensor kitchenTemperature = new MqttMultiLevelSensor(kitchen, "temperature", Units.CELSIUS);
  public static MqttMultiLevelSensor kitchenIlluminance = new MqttMultiLevelSensor(kitchen, "illuminance", Units.LUX);
  public static MqttSwitch kitchenMicrowave = new MqttSwitch(kitchen, "microwave");
  public static MqttSwitch kitchenCounter = new MqttSwitch(kitchen, "counter");
  public static MqttDimmer kitchenMain = new MqttDimmer(kitchen, "main");
}


Command Model

As you might have noticed in the interfaces above, several methods return Command. A command is a runnable class that can be executed to fulfill a task, for example turning on a Switch with Switch.on(). For each type of command there is an implementation. The one used most by the devices defined above is the MqttCommand, which publishes an MQTT message so that, for example, Z-Way will receive it and take action. The implementation is quite straightforward:

package nl.edubits.thuis.server.automation.commands;

/* imports */

public class MqttCommand implements Command {
  String topic;
  String content;

  public MqttCommand(String topic, String content) {
    this.topic = topic;
    this.content = content;

  public void runSingle() {
    MqttService mqttService = CDI.current().select(MqttService.class).get();
    mqttService.publishMessage(topic, content);


Command class diagram


Commands can be encapsulated in other commands. By encapsulating you can either compose several commands into a single one, or add a condition to the command. The following commands of this type exist:


  • ConditionalCommand – A command that executes the encapsulated command when a certain condition is met. The condition is defined using a Predicate lambda. Three default conditions are available:
      • illuminance – shortcut for predicates based on illuminance, for example execute when illuminance is below 100lux
      • whenOn/whenOff – execute a command when a given device is turned on/off
  • ListCommand – Execute several commands in order
  • PrioritizedCommand – Execute a command with a different priority, for example USER_INITIATED: the highest priority, which will let the command jump in front of the execution queue
  • WaitForCommand – Wait for a condition to be true before executing another command. Two default conditions are available: waitForOn and waitForOff, which wait for the status of a device to turn on or off before executing another command
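A minimal sketch of how such wrappers compose (the class names follow the text, but the constructor shapes, the Predicate-based illuminance check, and the demo values are assumptions, not the project's actual code):

```java
import java.util.Arrays;
import java.util.List;
import java.util.function.Predicate;

// Hedged sketch of command composition: a ConditionalCommand guards an
// encapsulated command with a Predicate, and a ListCommand runs several
// commands in order.
public class CommandCompositionSketch {
  interface Command { void run(); }

  static class ConditionalCommand<T> implements Command {
    private final T subject;
    private final Predicate<T> condition;
    private final Command wrapped;

    ConditionalCommand(T subject, Predicate<T> condition, Command wrapped) {
      this.subject = subject;
      this.condition = condition;
      this.wrapped = wrapped;
    }

    public void run() {
      if (condition.test(subject)) {; // only executed when the condition holds
      }
    }
  }

  static class ListCommand implements Command {
    private final List<Command> commands;
    ListCommand(List<Command> commands) { this.commands = commands; }
    public void run() { commands.forEach(Command::run); } // in order
  }

  public static String demo() {
    StringBuilder log = new StringBuilder();
    Command counterOn = () -> log.append("counter on; ");
    Command mainOn = () -> log.append("main on");
    // execute the combination only when it is darker than 100 lux
    int illuminance = 80;
    new ConditionalCommand<>(illuminance, i -> i < 100,
        new ListCommand(Arrays.asList(counterOn, mainOn))).run();
    return log.toString();
  }

  public static void main(String[] args) {
    System.out.println(demo());
  }
}
```

Because every wrapper is itself a Command, conditions, lists, and priorities can be nested arbitrarily.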


Another way of combining commands is using a Scene. This is an object that contains two ListCommands: one for activating the scene and one for deactivating it. A scene for turning on and off the mood lighting in the living room is defined like this:

package nl.edubits.thuis.server.devices;

import static nl.edubits.thuis.server.devices.Devices.livingMoodBottom;
import static nl.edubits.thuis.server.devices.Devices.livingMoodChristmas;
import static nl.edubits.thuis.server.devices.Devices.livingMoodTop;
/* other imports */

public class Scenes {
  public static Scene mood = new Scene("mood",
      asList(,,, // activate
      asList(,,; // deactivate
}


Observing MQTT messages

The Core observes MQTT messages arriving on basically any topic. It then checks if there are any devices (or rather, ObserveMqttStatus implementations) matching this topic. The status of these devices is then updated. When a sensor gets a new value, an event is emitted. These events (and individual MQTT messages) can trigger rules. This all happens in the MqttObserverBean, which also takes care of updating the status of any scenes or rooms that include the device.


The connection with the MQTT broker is handled by the MQTT-CDI extension made by Alexis Hassler, to which I contributed some improvements in the past. This CDI extension abstracts the actual connection away. When MQTT messages arrive on a subscribed topic they are fired as CDI events which can be observed using the @MqttTopic annotation. This way you can very easily observe any messages arriving:

public void onMessageLivingMain(@Observes @MqttTopic("Thuis/device/living/main") MqttMessage message) {
  logger.log("Light in the living was turned " + message.asText());
}


For publishing messages a service method is available.



To enable sensors (or other events) to trigger commands there are rules. A rule is an Observer of either an MQTT topic or a SensorChanged event. As a result, one or more commands are executed. An example of a rule is the following:

package nl.edubits.thuis.server.automation;

/* imports */

public class Rules {

  @Inject
  private Controller controller;

  public void onKitchenMovement(@Observes @SensorChange("kitchen/movement") BinarySensor sensor) {
    LocalTime now =;

    if (sensor.getStatus() && Devices.kitchenIlluminance.isLowerOrEqual(80)) {
      if (TimeUtils.isBetween(now, LocalTime.of(6, 0), LocalTime.of(10, 0))
          || TimeUtils.isBetween(now, LocalTime.of(12, 30), LocalTime.of(13, 30))
          || TimeUtils.isBetween(now, LocalTime.of(20, 30), LocalTime.of(21, 30))) {
        // Breakfast/Lunch/After dinner;;
      } else if (TimeUtils.isBetween(now, LocalTime.of(17, 30), LocalTime.of(20, 30))) {
        // Dinner;;;
      } else {;
      }
    } else {;;;
    }
  }
}


This example covers most of the basic options. It observes the movement sensor in the kitchen. When its status becomes true and the illuminance is low, three timeframes are checked. Depending on which timeframe the current time falls in, a combination of lights is turned on. This way you always have the most useful lights for the task ahead. The timing might need some optimization, but this is a good start.



Taking care of the actual execution of commands and scenes are the Controller and the CommandExecutor. Take the example rule above: it triggers several lights to be turned on or off. These commands are passed on to the controller, which takes the command, determines its priority and then puts it on a JMS queue:

package nl.edubits.thuis.server.controller;

/* imports */

public class Controller {

  @Inject
  private JMSContext context;

  @Resource(mappedName = Resources.COMMAND_QUEUE)
  private Queue commandQueue;

  public void run(Command command) {
    run(command, 0);
  }

  public void run(Command command, long deliveryDelay) {
    context.createProducer()
           .send(commandQueue, command);
  }
}

(For the purpose of this blog post the code is simplified a bit.)


A JMS MessageListener, the CommandExecutor, listens for the commands that are added to the queue and executes them. Because of the way the JMS queue works, commands are executed in chronological order while respecting the priorities. This means that all commands with the same priority are executed exactly in the order they were added (FIFO), but when a command of a higher priority is added to the queue, it is moved to the front. This is used for situations where, for example, the home theater is starting up (which takes a few minutes in total) and someone triggers a motion sensor. That command gets the USER_INITIATED priority and is therefore executed at the first possible moment, in front of all remaining steps of starting up the home theater. Something that's not time sensitive (for example automatically turning off the heating at night) gets a LOW priority and will therefore never block more important commands.
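The ordering behaviour can be illustrated with a plain PriorityQueue as a stand-in for the JMS queue (a sketch only: the command names, numeric priority values, and the sequence-number tie-breaker are assumptions of this sketch; a PriorityQueue by itself does not keep FIFO order for equal elements, hence the sequence number):

```java
import java.util.Comparator;
import java.util.PriorityQueue;
import java.util.concurrent.atomic.AtomicLong;

// Sketch of priority-with-FIFO ordering: higher priority first, and FIFO
// within a priority via a monotonically increasing sequence number.
public class PriorityOrderingSketch {
  static class QueuedCommand {
    final String name;
    final int priority; // e.g. LOW = 1, NORMAL = 4, USER_INITIATED = 9 (assumed values)
    final long seq;

    QueuedCommand(String name, int priority, long seq) { = name;
      this.priority = priority;
      this.seq = seq;
    }
  }

  private static final AtomicLong SEQ = new AtomicLong();

  public static String demo() {
    PriorityQueue<QueuedCommand> queue = new PriorityQueue<>(
        Comparator.comparingInt((QueuedCommand c) -> -c.priority) // high priority first
                  .thenComparingLong(c -> c.seq));                // FIFO within a priority
    queue.add(new QueuedCommand("theater step 1", 4, SEQ.getAndIncrement()));
    queue.add(new QueuedCommand("theater step 2", 4, SEQ.getAndIncrement()));
    // someone walks into the kitchen: this command jumps the queue
    queue.add(new QueuedCommand("kitchen lights", 9, SEQ.getAndIncrement()));

    StringBuilder order = new StringBuilder();
    while (!queue.isEmpty()) {
      if (order.length() > 0) order.append(", ");
      order.append(queue.poll().name);
    }
    return order.toString();
  }

  public static void main(String[] args) {
    System.out.println(demo());
  }
}
```

The higher-priority "kitchen lights" command is drained first even though it was added last, while the two equal-priority theater steps keep their insertion order.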


Some commands can take a long time and you don't want them to block the queue. For example, a WaitForCommand would otherwise block until its condition becomes true. Instead, the condition is tested once and, when the result isn't true yet, the command is added to the queue again with a small delay.
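The requeue pattern could look roughly like this (a sketch with a plain in-memory queue and assumed names; the real implementation requeues via JMS with a delivery delay instead of a synchronous loop):

```java
import java.util.ArrayDeque;
import java.util.Queue;
import java.util.function.BooleanSupplier;

// Sketch of a non-blocking WaitForCommand: test the condition once and, if it
// does not hold yet, put the command back on the queue instead of blocking.
public class WaitForSketch {
  interface Command { void run(); }

  static class WaitForCommand implements Command {
    private final BooleanSupplier condition;
    private final Command next;
    private final Queue<Command> queue;

    WaitForCommand(BooleanSupplier condition, Command next, Queue<Command> queue) {
      this.condition = condition; = next;
      this.queue = queue;
    }

    public void run() {
      if (condition.getAsBoolean()) {; // condition met: execute the wrapped command
      } else {
        queue.add(this); // not yet: requeue (JMS would add a small delivery delay)
      }
    }
  }

  public static String demo() {
    Queue<Command> queue = new ArrayDeque<>();
    int[] attempts = {0};
    StringBuilder log = new StringBuilder();
    // simulated condition that only becomes true on the third check
    BooleanSupplier deviceIsOn = () -> ++attempts[0] >= 3;
    queue.add(new WaitForCommand(deviceIsOn,
        () -> log.append("executed after ").append(attempts[0]).append(" checks"),
        queue));
    while (!queue.isEmpty()) {
      queue.poll().run(); // the executor loop; other commands could interleave here
    }
    return log.toString();
  }

  public static void main(String[] args) {
    System.out.println(demo());
  }
}
```

Because each test-and-requeue is quick, other commands on the queue can run between the checks instead of waiting behind a blocked executor.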



The most important part of the Core is now done, but most rules still have to be implemented. In a later stage some more external systems will be added to the core, for example for controlling the Home Theatre. I'm also aware that, to keep this blog post from growing too much, I have simplified some code samples and didn't cover every detail. If you're interested in a certain detail, please let me know and I'll explain it more!

This week I took the first steps in getting the project started.

  • Installed Raspbian on the Pi 3
  • Connected the 7" Touchscreen display to the Pi


Everything went flawlessly. I mainly did the same steps as rhavourd in ([Pi IoT] Hangar Central #3 -- Unboxing the Challenge Kit), but didn't make such nice videos / blog posts.

Just a note on the I2C connections: these are not needed for the Pi 3, since they are included in the DSI interface cable.


Here are some images of the result:





Now that I have the Pi 3 running, I need a case in which the other Pi and two cameras can also be mounted.

I found a very nice one, the Smarti Pi Touch, which also includes a case for the camera. This is the popular LEGO-compatible camera case that works with the Pi camera. The main case has a LEGO-compatible back, and the camera case can attach to the LEGO plate on the back, or on the front if you choose to have LEGO compatibility there. This makes it very suitable for the application.

I have to think about how to connect the second Pi and Camera.


I will order this case next week and try to get an extra camera case for my second camera.




Stay tuned!


The IoT Tower Light is a small project I created not so long ago to be used as a notification system. One of the aims of this challenge was to incorporate its control in OpenHAB.


IoT Tower Light


The project is basically a light in which I've replaced the internal circuitry by a Particle Photon and a NeoPixel Ring. Using IFTTT, different animations can be triggered using a smartphone and an internet connection.


A dedicated blog post exists on my website, containing wiring and code: Internet of Things Tower Light – Frederick Vandenbosch


Here's a build video of the original project, with a demo at the end:





Because the light currently uses the IFTTT service, it requires an active internet connection. This may not always be available though, so in order to ensure local control without internet, a new mechanism needs to be added.


The protocol of choice for most participants in this challenge has been MQTT, and it won't be different for this project. I modified the Photon's code to add support for MQTT. This means the Photon will now listen to events from both IFTTT and MQTT, ensuring backwards compatibility with the DO Button app.


Screen Shot 2016-07-01 at 20.58.43.png


The new, extended code looks like this:





The integration in OpenHAB is rather straightforward.


First, a new numeric item is defined, linked to the MQTT binding. It is configured such that the value of any command received on this item is published to the MQTT broker:


Number TowerLight <light> {mqtt=">[piiot:TowerLight:command:*:${command}]"}


Next, the item is added to the sitemap. Note that it is using the "switch" definition. The possible numeric values are linked to labels, to have meaningful buttons in the GUI:


Switch item=TowerLight label="TowerLight" mappings=[0="Off", 1="Siren", 2="Pulse", 3="Blink"]


This results in four buttons, each with a different label, linked to a command. When clicked, the command is passed to the item, which in turn triggers the linked MQTT binding.


2016-07-02 09:54:11.981 [INFO ] [smarthome.event.ItemCommandEvent    ] - Item 'TowerLight' received command 1
2016-07-02 09:54:12.006 [INFO ] [marthome.event.ItemStateChangedEvent] - TowerLight changed from 0 to 1
2016-07-02 09:54:14.747 [INFO ] [smarthome.event.ItemCommandEvent    ] - Item 'TowerLight' received command 2
2016-07-02 09:54:14.771 [INFO ] [marthome.event.ItemStateChangedEvent] - TowerLight changed from 1 to 2
2016-07-02 09:54:18.975 [INFO ] [smarthome.event.ItemCommandEvent    ] - Item 'TowerLight' received command 3
2016-07-02 09:54:19.004 [INFO ] [marthome.event.ItemStateChangedEvent] - TowerLight changed from 2 to 3
2016-07-02 09:54:23.115 [INFO ] [smarthome.event.ItemCommandEvent    ] - Item 'TowerLight' received command 0
2016-07-02 09:54:23.138 [INFO ] [marthome.event.ItemStateChangedEvent] - TowerLight changed from 3 to 0


In the GUI, the buttons are visualised like this:

Screen Shot 2016-07-01 at 23.05.19.png


Because the MQTT broker is on the local network, as opposed to the IFTTT service on the internet, the responsiveness has improved greatly.



That integration went smoothly, on to the next part!






Welcome to another quick blog post showing the evolving IoT Farm!


Originally I had planned on building a hen house and linking it together with a rabbit enclosure to allow for centralized observation and control. But we found out our neighbors were getting out of the chicken-raising hobby after an encounter with a mountain lion in their coop. They offered us their completely assembled, insulated, shingled, awesome hen house with electrical already installed; we just needed to get it over to our property. It is pretty much the size of a small shed.


After weeks of trying to hire a shed mover to get our new chicken casa moved, our neighbor hooked some chain up to his trusty tractor and pulled it down the road and through our field to its new location.


New Coop.jpg

Regan, my trusty farm hand, is eager to get the chicks moved into their new house!


Chicks in New Coop.jpg


Here they are all moved in and getting the hang of the new pecking order and food options.


The downside of the new, larger building is that it ended up in a location farther away. So now I am investigating whether I want to run power out that far or whether a solar option is feasible.


Well, to be honest, I am trying to find enough electrical wire to safely get power out to the new location while also looking into a solar power option. Because who doesn't think solar would be an awesome implementation?