
I removed the manufacturer's electronic circuit board and replaced it with a Raspberry Pi so that I could customize the operation of my cat's Litter Robot.
A friend suggested that I also make it tweet. You can follow my cat's litter box usage on Twitter:  It has been working great.


IoT Farm: Blog 1 - Intro

Posted by jkutzsch May 29, 2016

Wow, I have to admit to being incredibly humbled and thankful for the opportunity to participate in this Design Challenge, especially after having viewed a couple of the other sponsored participants' proposals.  There are some really interesting and exciting project plans out there!  I can't wait to get working on mine, and I will eagerly follow how the others are doing.


I will definitely offer suggestions and comments to the others, and I look forward to receiving the same in return. I honestly think that with the vast range of Element14 members and their knowledge and creativity, this design challenge will be very informative and fun!


Here is some basic information cut from my application to start as an Introduction:


Old MacDonald had a farm


And on his farm he had some Raspberry Pis


With IoT here

And IoT there

Here some IoT, there some IoT

Everywhere some IoT

Old MacDonald had a farm



Grand Mesa Back Acres


Element 14 has some incredible Design Challenges, but this one, which brings the new Raspberry Pi 3 and IoT together around designing Smarter Spaces, is a perfect tie-in with the fact that my family has just purchased a new home on 5 1/2 acres.  We have 18 new baby chicks being raised in the garage and 4 young rabbits in cages, ready to have their new Smart Space living quarters created and connected to the IoT.






Helper of the IoT Farm!

I believe the kit for this Design Challenge is a great base for connecting small farming implementations.  Of course, all of it can be applied to farming projects of various scales, so I think the readers of Element 14 will find my design plan interesting and of real value.


One of the downsides of having property in a rural location is that natural predators are more common, so we plan to build a secured location for the rabbits and chickens to live.  With the animal area sited a little farther from the house to allow for expansion and composting, using the Raspberry Pis to monitor and alert will be a great asset to anyone working with small animal farming.


While the rabbit hutch/colony cage will be separated from the chicken coop and its fenced chicken yard, the two will sit side by side, allowing for a central Smart Space storage area that is a prime location for the Raspberry Pi.


⦁ From within this central Smart Space, I will implement a USB weight scale to monitor food storage and alert when food starts running low, allowing us to plan a supply run into town for animal food from the agriculture store.

⦁ In the rabbit colony cage, a NoIR camera will be placed inside the nesting box to allow remote monitoring of births and the babies' development.

⦁ In the chicken coop, the Sense HAT will be installed to monitor environmental conditions and drive whatever is required to maintain appropriate temperatures, ensuring the chickens have the best living conditions.  This could include a heat lamp during the winter months and/or a fan during the hotter summer months.

⦁ In addition, the chicken coop will have lighting controlled via the Pi and light sensors outside.  This allows for training and automating the chickens to come inside at dusk.

⦁ The 8 MP camera will be mounted on top of the central Smart Space to allow remote viewing of the gated areas and tracking of potential predator threats.  Depending on the field of view, more cameras may be added to further expand monitoring.

⦁ The Raspberry Pi LCD screen will be used as the interface at the main house F.O.C.  (Farm Operations Center).

⦁ A potential additional IoT implementation will be researching how to remotely open and close the chicken coop's chicken door at specific times, with a sensor to verify whether the door is open or closed.
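The food-scale alert from the first bullet can be sketched as a couple of Python functions. The `read_scale_kg` driver and the 5 kg threshold are placeholders of mine, since the real values depend on the USB scale model and feed bag size:

```python
# Sketch of the food-storage alert logic for the central Smart Space.
# read_scale_kg() is a placeholder for whatever driver the USB scale needs;
# LOW_FOOD_KG is an assumed threshold, tuned to feed use and trip frequency.
LOW_FOOD_KG = 5.0

def read_scale_kg():
    """Placeholder: return the current weight on the USB scale in kg."""
    raise NotImplementedError("depends on the specific USB scale")

def food_status(weight_kg, threshold_kg=LOW_FOOD_KG):
    """Return a human-readable status line for the F.O.C. display."""
    if weight_kg <= threshold_kg:
        return "LOW: %.1f kg left - plan a feed-store run" % weight_kg
    return "OK: %.1f kg of feed in storage" % weight_kg
```

The same status string could be pushed to the LCD at the Farm Operations Center or sent as an alert.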


I am interested in seeing what feedback the Element 14 community is inclined to provide while this project is underway, and I think some flexibility to allow for modifications suggested by the community will make this project even more fun and inclusive.


This Design project has some incredible potential and I look forward to participating with the Element 14 members.


The intelligent home is not a new notion in the building domain. Home automation and "smart home" concepts were introduced long ago. Currently, however, this field is getting increasing attention, not only because it is environmentally friendly (using solar energy and reducing waste) but also due to the introduction of innovative interactive and intelligent technologies based on the IoT. These aspire to revolutionize the way occupants view and interact with their living spaces, bringing energy awareness and increased, personalized comfort, in addition to improved well-being.


The problem

Unfortunately, while the technical factors of the intelligent building have long been studied, the wider socio-cultural aspects have been largely neglected or addressed mainly in theoretical studies, largely dissociated from the actual building management infrastructure. To make matters worse, although the IoT was introduced with overwhelming interest, its use in creating the intelligent building was met with key issues that limit the growth of new smart homes into a mass market.  Additionally, the lack of true understanding of user needs, and the inherent fact of interfering with their habits, further complicated the situation.


To summarize, the smart home has three big problems from the consumer's point of view:

It's expensive: a connected lock costs two to three times as much as a "dumb" lock, and so does a connected thermostat.

It's complicated to install and nothing works together: a lot of expensive engineering hours, with frustrating results in the end.

Every year a new product or ecosystem is launched: it makes all the previous systems look obsolete, which is perhaps the most aggravating problem of all.


The proposed solution

The project is built around the triplet "atomic" vision shown in the following figure. The intelligent home "molecule" is composed of three major "atoms": the energy management unit, the occupant-centered intelligence, and the core building management infrastructure. A lot of research is being done on each individual "atom" by academic groups as well as industry. Within the given 14 weeks, this project will attempt to attack the problem with a systemic approach, integrating the three domains in a common design in order to deliver the concept for the Intelligent Home v2.0. In this next-generation smart home, the occupants and the smart grid, the prime stakeholders of the system, are brought into the decision-making process. Each is represented, and has its interests addressed, by one dedicated "atom". Unlike current smart home designs that focus solely on IoT and on incorporating as many connected devices as possible, this system design seeks to define a justified architecture offering services to the occupants (wellness monitoring, comfort optimization, reduced cost, lifestyle) as well as to the smart grid (auxiliary services, renewables integration, demand response for financial benefits, etc.).


Figure 1: The Intelligent Home v2.0 based on a triplet of atoms


To facilitate the creation of a universal design, rather than a customized solution that would aggravate the fragmentation issues of the IoT, the building management system is designed in a layered manner, abstracting the internal device protocols, decoupling them in both time and space, and overcoming their limits. There are two basic pillars in that approach. First, create an embedded and sensor network federator abstracting all the space-, time-, and energy-limited network topologies from the managing logic. Second, offer an easy-to-use RESTful API built on top of this federator. The ultimate concept is to create an ecosystem of high-level programming interfaces and data generation sources, similar to the ones offered by a smartphone operating system. Likewise, it does not matter which device model somebody has; the experience of the intelligence/apps is functionally the same.
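To illustrate the second pillar, a minimal sketch of such a RESTful layer, assuming Flask; the device names and the in-memory registry are illustrative stand-ins for what the federator would expose after abstracting the protocol-specific networks:

```python
# Minimal sketch of the RESTful API on top of the federator, assuming Flask.
# DEVICES stands in for the federator's view: every protocol-specific device
# (Z-Wave, EnOcean, ...) appears as the same kind of uniform record.
from flask import Flask, jsonify

app = Flask(__name__)

DEVICES = {
    "livingroom-thermostat": {"type": "thermostat", "value": 21.5},
    "kitchen-lamp": {"type": "light", "value": "off"},
}

@app.route("/api/devices")
def list_devices():
    """Let any app enumerate devices without knowing their protocols."""
    return jsonify(sorted(DEVICES.keys()))

@app.route("/api/devices/<name>")
def get_device(name):
    """Return the uniform record for one device."""
    device = DEVICES.get(name)
    if device is None:
        return jsonify({"error": "unknown device"}), 404
    return jsonify(device)

# On the actual hub: app.run(host="0.0.0.0", port=8080)
```

Apps written against this interface stay unchanged when the underlying device or protocol changes, which is exactly the smartphone-OS analogy above.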

Thus, the energy- and human-oriented hard science studies are decoupled from the underlying standards as well as from the particularities of the building architecture, catalyzing the integration of scientifically high-performing algorithms into commodity hardware and the IoT. This also has the potential to reduce cost and avoid protocol/ecosystem lock-in, extending the possibilities beyond intelligent buildings and home automation to future smart grid participation and the integration of home-installed renewables and batteries.


Figure 2: The complete system architecture;

EMS the energy management system, BMS the building management system


Challenges and Threats


The proposed system is a multidisciplinary approach, with challenges arising from the domains of energy management, computer science, and communications. The team is composed of electrical engineering doctoral students in the field of smart building system design and energy management. Despite the increased work required to deploy such a system, many modules have already been considered and designed within our research scope, and this is a great opportunity to demonstrate a real-life setup.

There are, however, two major challenges that are in fact beyond engineering. First, how do you design the front end to be user friendly while still conveying enough information and energy-awareness feedback that the user will be willing to use it well after the initial period? Second, how do you efficiently manage energy and power use without impacting the comfort of the occupants?

Finally, the bidirectional communication with the smart grid envisioned in the scope of this challenge will remain just a vision until there is adequate support from the energy utilities. However, the BMS/EMS is designed in such a way that demand response and other smart grid interactions will be possible in the future with minimal modifications.


The sci-fi scenario depicted by Ray Bradbury in 1953 seems like one of those visions of the future that will not come to pass. Today, about ten years before the date of the events of his novel, things seem to be going differently. We are working to make something better possible now, something that takes care of both the readers and the books: creating the best reading environment.

This post is just a reportage, a collection of images that, better than many words, describes the ideal context where the Smart Reading Place will take shape.



{gallery} Reading places











Obviously, these images refer to a real case.

My proposal for the Pi IoT Design Challenge was not good enough to be sponsored, which probably means this project is not a contender to win anything, but putting the effort into the proposal and dreaming up the possibilities got me enthusiastic enough to proceed anyway.

I have been dabbling in home automation for a few years, but the various projects are scattered all over the house, so this is a great opportunity to set up a central control room that starts to leverage the available synergies. I have noticed that home automation can seem a little mundane and not too exciting, so I would like to infuse this project with a pop culture motif to increase the interest level.

I like the clean design of Star Trek technology, and I think it would lend itself well to the home automation command center theme. I have a large alcove that could be converted to a Star Trek style bridge décor with appropriate control consoles.

My continuing mission is to explore the strange new world of Internet of Things and seek out new technologies and opportunities.

I have approval from the Admiral at Star Fleet HQ to dress up the alcove with Star Trek décor and I might even convince her to supply the Ship's Computer voice, because what is the point of having a space ship if you can't video a couple of skits to explain and demonstrate its features.

Another reason for proceeding with the project is once the Admiral has made a ruling, it is not cool to pull out.

Here are some of my home automation technologies and ideas on how they can be related to the Star Trek motif:

  • I have designed and built the ultimate smart thermostat, called Henrietta. In the Star Trek theme this would become "Life Support" and would be an interactive wall-mounted display. It is also controlled via Bluetooth, so one of the Trek consoles would be able to bring up a Henrietta interface. The Henrietta user interface has personality traits, with winking eyes to acknowledge commands. This can be explained because the life support system was recently serviced by some Bynars, who tend to infuse intelligent personalities into computer systems.


  • I have designed and built some BLE lighting systems. One of these could be passed off as a Ferengi Thought Maker artifact. The iPhone control "console" for the BLE LEDs could look like a Star Trek LCARS display.


  • I am building a Bluetooth automated solar powered snow removal system called Clear Walk which has motorized mirrors to redirect sunshine and a solar panel to charge the battery. The mirrors correspond to a Star Trek main deflector and the solar cells correspond to Bussard collectors.


  • I have a network of EnOcean sensors monitoring doors and windows and house functions. These will be aliased to a Star Trek space ship – portals for windows and airlocks or hatches for doors. The bay window will be the observation deck. The garage will be the shuttle bay and the van will be a shuttle. The project kit will allow this network to expand and maybe switch from a PC to a Raspberry Pi as the EnOcean host. The status screen will be a Star Trek Security Station.


  • I have a Raspberry Pi and PiCam monitoring the planet's surface, which can be piped to the "main viewer" (HDTV).
  • My Cel Robox 3D printer will become a Star Trek replicator. It has already made some communicators and a phaser.
  • The Star Trek Science Station will have some of my electronics instrumentation. My Keysight multimeter will become a Tricorder.
  • The science station display can monitor deep space with long range sensors – my back yard solar powered Bluetooth weather station.


This project involves tablet computers, smart phones, PCs, Raspberry Pis and EnOcean sensors - and maybe even 7 of 9.

I have a number of other projects that I might find time to include, such as a robot I built called BorgBot and a weather station I had proposed, housed in an AVRO Arrow. No promises though, as outfitting the IoT alcove with 6 or more Star Trek panel displays, with corresponding networked computers all integrated into appropriate décor, is already making for a very tight schedule. However this project plays out, it should provide a place to showcase my future projects and a platform for further IoT exploration.


Links to the Pi IoT Design Challenge site:

Pi IoT

Pi IoT - Smarter Spaces with Raspberry Pi 3: About This Challenge


Links to blogs about the Star Trek IoT Alcove project:

Pi IoT - Star Trek IoT Alcove - Blog 1

element14 and the photon torpedo - Pi IoT Blog 2

How many tablets to use? Pi IoT Blog 3

Starship Enocean Voyager

The Starship Enocean Voyager - Pi IoT Blog 4

LCARS (Library Computer Access Retrieval System){Star Trek} - Pi IoT Blog 5

LCARS Tablets

Make Life Accessible - Clear Walk - Melting Snow - blog 19

I would like to start by thanking element14 for giving me this opportunity to be a sponsored challenger in a great challenge like this. I'm really excited to be a part of it and am looking forward to learning some new things, building some cool stuff, and ultimately having some real fun. In this blog post, I would like to give a glimpse of what I'm planning to do over the course of the next 12 weeks.

I believe that the Internet of Things is not just about connecting things together. It should be about connecting people to their things in a meaningful and easily accessible way. Although 'things' play an important role, ultimately IoT should make people's lives better. It should not be just about monitoring the things around you, but about creating a symphony between the user and his/her things to enrich daily life. My project proposal for this challenge aims at putting things and users together, exploring new ways personal IoT can help us.




Internet Of You

For this challenge, I want to look into three different aspects of a user: the environment around him/her, the things attached to him/her, and finally his/her health itself.

The Environment:

The Environment

By environment, I mean the personal space around us, the ambiance, and nature. One of the best use cases will be monitoring the temperature, humidity, air quality, etc., around you. Using the Pi Sense HAT, I will be able to measure the temperature, pressure, and humidity of the environment. And in my city, we have public access to the AQI data. Together with weather reports fetched through the Yahoo Weather API, this will give the user a comprehensive way to plan his daily activity. All this data will be displayed on the screen provided for the challenge. I'll be using freeboard with Node.js to collect the data and update the dashboard.
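The collection step above can be sketched in a few lines of Python, assuming the official `sense_hat` library on the Pi; the Sense HAT object is passed in so the same logic can be exercised off-device with a stub:

```python
# Sketch of the environment-collection step, assuming the official sense_hat
# library on the Pi. The payload format for the freeboard/Node.js collector
# is an assumption of mine (freeboard consumes JSON datasources).
import json

def read_environment(sense):
    """Take one sample of the three readings the Sense HAT exposes."""
    return {
        "temperature_c": round(sense.get_temperature(), 1),
        "pressure_mbar": round(sense.get_pressure(), 1),
        "humidity_pct": round(sense.get_humidity(), 1),
    }

def to_dashboard_payload(sample):
    """Serialise one sample as JSON for the dashboard collector."""
    return json.dumps(sample, sort_keys=True)

# On the Pi itself:
#   from sense_hat import SenseHat
#   print(to_dashboard_payload(read_environment(SenseHat())))
```

Sampling this in a loop (say, once a minute) and posting the JSON to the collector is all the dashboard side needs.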


The Thing:

The Thing

This includes the things around the user. For this challenge, I will be connecting a light bulb and a fan directly to the Raspberry Pi. Since my television and music system have remote controls, rather than connecting them directly to the Raspberry Pi, I'll use the LIRC library on the Raspberry Pi to train the remotes and control those devices from the Pi.
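Once LIRC has been trained, sending a key press from Python is just a matter of shelling out to `irsend`. A minimal sketch; the remote name "Samsung_TV" and key "KEY_POWER" are illustrative entries from a hypothetical lircd.conf, not standard names:

```python
# Sketch of driving a LIRC-trained remote from Python via irsend.
# The remote/key names depend entirely on your own lircd.conf.
import subprocess

def irsend_args(remote, key):
    """Build the irsend command line for a single key press."""
    return ["irsend", "SEND_ONCE", remote, key]

def send_ir(remote, key):
    """Ask the lircd daemon to transmit one key press."""
    return subprocess.run(irsend_args(remote, key), check=True)

# Example (on a Pi with lircd running and a trained remote):
#   send_ir("Samsung_TV", "KEY_POWER")
```

Wrapping each trained key in a small function like this makes it easy to expose TV and music-system controls on the dashboard alongside the directly wired light and fan.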


The User:

The User

A personal IoT solution should put its user at its center. With this view, I want this project to include monitoring of the user's health and fitness. I own a Pebble Time (and I'm proud of it). It has a decent activity tracking feature, including sleep data, number of steps walked, etc. Although the watch does display these data, I want them analyzed in a better way. Using the Pebble Health API, I'll be able to forward these data to my Raspberry Pi. This data will also be displayed using the widgets in freeboard.


By covering these three aspects of a user, I believe this project will show a fresh view of approaching personal IoT.


Beyond Touch Interface:

The touch screen provided with the kit will definitely be a great way to interact with the Pi, but I believe that in a personal space, natural user interfaces will be more appealing to an end user. With this view, I'm planning to integrate vision and speech into my project.

Speech - Jasper:


The Jasper project was developed to provide a speech-based interface specifically for the Raspberry Pi. Using a USB microphone, I will be able to make the Raspberry Pi listen to users and act accordingly. With 10 times the power of the original Pi, the Pi 3 will be able to provide a faster interface to users. I also believe this will be a good example to show off the computational capacity of the mighty Pi 3. As part of the project, simple commands like "Turn ON/OFF the light/fan" will be trained into the system. Custom Python scripts can be written for Jasper to process these commands and switch relays on the PiFace to control the appropriate devices. Commands like "Show me the weather updates" will update the screen with current environment information.
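A command module for this could look roughly like the following, following Jasper's standard-module convention (`WORDS`, `isValid`, `handle`). The relay assignments and the exact phrasing handling are my assumptions; wiring will dictate the real relay numbers:

```python
# Sketch of a Jasper standard module for the light/fan commands.
# RELAYS maps device names to assumed PiFace relay numbers.
import re

WORDS = ["LIGHT", "FAN", "ON", "OFF"]

RELAYS = {"light": 0, "fan": 1}  # assumed wiring

def isValid(text):
    """Jasper calls this to decide if this module handles the utterance."""
    return bool(re.search(r"\b(light|fan)\b", text, re.IGNORECASE))

def parse_command(text):
    """Extract (device, state) from e.g. 'turn on the light', else None."""
    device = re.search(r"\b(light|fan)\b", text, re.IGNORECASE)
    state = re.search(r"\b(on|off)\b", text, re.IGNORECASE)
    if not (device and state):
        return None
    return device.group(1).lower(), state.group(1).lower()

def handle(text, mic, profile):
    """Jasper entry point: act on the recognised text and talk back."""
    command = parse_command(text)
    if command is None:
        mic.say("Sorry, I did not catch that.")
        return
    device, state = command
    # On the Pi this would flip the PiFace relay, e.g. with pifacedigitalio:
    #   pifacedigitalio.digital_write(RELAYS[device], 1 if state == "on" else 0)
    mic.say("Turning %s the %s." % (state, device))
```

The weather command would be a second module of the same shape, updating the freeboard screen instead of a relay.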


Vision - openCV:


By giving vision to the Raspberry Pi, we will be able to leverage gesture user interfaces. My already completed project for OpenCV-based gesture recognition is available on GitHub. I will just have to port it to use the Pi Camera/NoIR Camera in the kit. This project also includes a face detector that allows only the authorized user to control the Pi.


Re Imagining Personal IoT

Next I would like to cover the use cases I'm planning to show as part of this challenge:


Presence-detection-based activation of things:  The Internet of Things should also be about intelligent things; IoT should enable your devices to make smart decisions. This use case showcases how we can use the Pi's built-in BLE and the user's smartphone to make the environment aware of presence and act accordingly. It works as follows: the Pi's built-in BLE will always be in scanning mode. When the user enters the room, the Pi will detect his smartphone by the MAC address of the phone's BLE radio. These MAC addresses can be stored on the Pi, and the Pi only chooses to act if it sees a familiar MAC ID. Once the user's presence is detected, the Pi can take one of its preprogrammed actions: if it's night time, switch on the light; if it's day time, maybe switch on the fan. Moreover, since the Pi can detect multiple users, it can be programmed to act differently for different users. The Pi also stores all this information, which will be displayed on the dashboard as a chart showing which users were in the room at what times. This can also be helpful for monitoring user presence from a remote place, as the Pi can stream these data over the internet.
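The decision step of this use case can be sketched as below. The scan itself would use a BLE tool on the Pi (hcitool/bluepy or similar); here only the whitelist-and-act logic is shown, and the MAC-to-user mapping and the day/night hours are assumptions:

```python
# Sketch of the presence-detection decision logic. KNOWN_PHONES is the
# stored whitelist of familiar BLE MAC addresses; hours are assumed.
import datetime

KNOWN_PHONES = {
    "aa:bb:cc:dd:ee:ff": "vish",  # illustrative MAC -> user mapping
}

def decide_actions(seen_macs, now=None):
    """Return (user, action) pairs for recognised phones only."""
    now = now or datetime.datetime.now()
    night = now.hour >= 19 or now.hour < 6  # assumed night window
    action = "light on" if night else "fan on"
    return [(KNOWN_PHONES[mac], action)
            for mac in seen_macs if mac in KNOWN_PHONES]
```

Unknown MACs simply produce no action, and logging each returned pair with a timestamp gives exactly the who-was-where chart for the dashboard.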


Intelligent wake-up alarm: One of the things I struggle with almost every morning is getting up on time. Sometimes I completely miss my alarm; other times the alarm wakes me up when I'm not ready to get out of bed, i.e., when I am in deep sleep. It is widely known that the best time for an alarm to go off is when you are in light sleep. With my Pebble Time able to detect sleep patterns and the depth of sleep, I will be able to use this data to time the alarm. The end usage will be like this: say you choose a window of 30 minutes and set the alarm for 6:00 AM. Now, between 5:30 AM and 6:15 AM, once the Pebble detects you are in one of the lightest sleep phases, it sends a command to the Raspberry Pi via my smartphone to sound the alarm. Speakers connected to the Raspberry Pi will play your preset alarm sound to wake you up. In this way you get a fresh start to the day. Also, a minute after sounding your alarm, the Pi will switch on the light in your room to make sure you don't go back to sleep. For the implementation, I will use the Pebble Health API to forward the sleep data to the Raspberry Pi over WiFi. The Raspberry Pi will use this sleep data to decide when to play your alarm. Moreover, this sleep data will be stored on the Pi and displayed on the dashboard with daily, weekly, and monthly averages so you can understand your sleep easily. A relay on the PiFace HAT connected to the Pi will be used to switch the light ON/OFF.
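The Pi-side decision of when to fire can be sketched as a single predicate. The "light" stage label is my assumed encoding of the Pebble Health sleep data, and I've added a hard stop at the set time so the alarm never goes off later than requested:

```python
# Sketch of the alarm-window decision on the Pi. sleep_stage is the
# current stage forwarded from Pebble Health (label "light" assumed).
import datetime

def should_sound_alarm(now, target, window_minutes, sleep_stage):
    """Fire early during light sleep inside the window; always fire at target."""
    window_start = target - datetime.timedelta(minutes=window_minutes)
    if now >= target:
        return True  # hard stop: never let the user oversleep the set time
    return window_start <= now and sleep_stage == "light"
```

Each time a sleep sample arrives, the Pi evaluates this; on the first `True` it plays the alarm sound and, a minute later, closes the PiFace relay to switch on the light.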


With this, I would like to put my pen down for now. The next 3 months are going to be busy days filled with fun. My best wishes to all challengers; I look forward to us learning from each other.

- vish


<< Prev | Index | Next >>


I have been lurking in the element14 space for many months now. I am always amazed at the interesting projects and wealth of information being shared. I am humbled to have been chosen to give back to the community that has given so much. For context on the "Smarter Spaces Design Challenge": I am a private pilot and a member of a flying club in which a group of roughly 50 individuals share ownership of a fleet of 4 airplanes. What follows is the submission for my "Smarter Spaces: Hangar Control System".



A system is needed to control the preheating of a fleet of airplanes operated by a large number of pilots. Airplanes are kept in unheated hangars, which means that the engines quickly cool to the ambient temperature. Prior to use, pilots are required to preheat the aircraft engine. The preheating cycle can take up to 2 hours. Pilots do not live close enough to the airport to conveniently turn on the heater at the appropriate time, nor are there operations staff to initiate the preheat cycle. Thus, a method to remotely operate the heaters is needed.


Necessity of the System

(This portion summarizes The Whys and Hows of Preheating, by Mike Busch.)

Airplane engines are, by their very nature, strong and durable machines. At the same time, they are very sensitive devices that must be operated carefully to avoid catastrophic results. Of major concern to the aircraft engine are the startup conditions. After shutdown, the oil settles to the bottom of the engine seeping away from critical parts and, more importantly, the various components that make up the engine begin to contract. Aircraft engines are made of dissimilar metals: The crankcase, pistons and cylinder heads are aluminum; the crankshaft, camshaft, connecting rods and cylinder barrels are made from steel. When heated, aluminum expands about twice as much as steel. Likewise, when cooled, aluminum contracts about twice as much as steel.


Consider a steel crankshaft, which is suspended by thin bearing shells supported by a cast aluminum crankcase. As the engine gets colder, all of its parts shrink in size, but the aluminum case shrinks twice as much as the steel crankshaft running through it. The result is that the colder the temperature, the smaller the clearance between the bearing shells and the crankshaft. That clearance is where the oil goes to lubricate the bearings and prevent metal-to-metal contact.


How significant is this problem? Take the Teledyne Continental Motors (TCM) IO-520-series engines used in many general aviation singles and twins, for example. The IO-520 overhaul manual lists the minimum crankshaft bearing clearance as 0.0018 inch (that's 1.8 thousandths) at normal room temperature. What happens to that clearance when you start cooling the engine down? TCM doesn't say, but tests performed by Tanis Aircraft Services in Glenwood, Minn. indicated that an IO-520 loses 0.002 inch (2.0 thousandths) of crankshaft bearing clearance at -20°F. An engine built to TCM's minimum specified bearing fit at room temperature would actually have negative bearing clearance at -20°F. In other words, the crankshaft would be seized tight!


Why not leave the heater on all of the time? There has been considerable controversy about whether or not it's a good idea to leave an electric preheating system plugged in continuously when the airplane isn't flying. Our first concern is one of utility costs. Second, both TCM and Shell have published warnings against leaving engine-mounted electric preheaters on for more than 24 hours prior to flight. I am not here to solve a debate that has been raging for as many years as there have been planes operating in below freezing environments. Let’s just accept that continuous heating is not desirable.


Current System in Use

Through a clever use of cell phones and relay switches, each hangar has its own phone number. Two hours before flight, the pilot places a call to the "hangar's cell phone" to activate the engine heater. This system is elegant in its simplicity. Unfortunately, it has been plagued by unforeseen issues, the most serious being that "non-contract" phones are purchased each winter, which results in new phone numbers being assigned. With each new phone number come all of the former callers of that number, who are still dialing it. Each wrong call results in a 2-hour preheat cycle. During a recent month, the heater operated 24 hours per day for 30 days!


Proposed System

Create an intelligent control that provides a convenient and cost-effective means for pilots to preheat the aircraft engines. Some key considerations are:

  1. Universally accessible interfaces: web, SMS, and phone
  2. The hangars do not have internet access.
  3. The individual hangars do not have access to adjacent hangars which eliminates running cables to each hangar.
  4. Initial and recurring costs are a primary concern.

Hangar Control System

User Interfaces

To satisfy the desires and technical prowess of all the pilots, the proposed system shall be available through a variety of media: a web browser supporting desktops, tablets, and smart phones; SMS (text messaging); and IVR (telephone "interactive voice response").

  • The web server will be written in Python with Flask as the framework.
  • Client side will use jQuery Mobile to implement a single web-client application across the desired devices.
  • The SMS and IVR components will use Twilio as the front end.
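The SMS leg of the interface can be sketched with Flask following Twilio's SMS webhook convention (Twilio POSTs the message text as "Body" and expects TwiML back). The command vocabulary and heater hook are illustrative:

```python
# Sketch of the Twilio SMS front end, assuming Flask. Twilio posts the
# inbound text as form field "Body"; we reply with minimal TwiML.
from flask import Flask, request

app = Flask(__name__)

def twiml(reply):
    """Wrap a reply string in the minimal TwiML Twilio expects."""
    body = ('<?xml version="1.0" encoding="UTF-8"?>'
            "<Response><Message>%s</Message></Response>" % reply)
    return body, 200, {"Content-Type": "text/xml"}

def run_command(text):
    """Map an SMS command onto a hangar action (heater control stubbed)."""
    command = text.strip().lower()
    if command == "heat on":
        return "Engine heater ON; allow up to 2 hours for preheat."
    if command == "heat off":
        return "Engine heater OFF."
    return "Commands: HEAT ON, HEAT OFF"

@app.route("/sms", methods=["POST"])
def sms():
    return twiml(run_command(request.form.get("Body", "")))
```

The web and IVR front ends would call the same `run_command` layer, so authentication and hangar state live in one place.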


A single Raspberry Pi 3 running Raspbian implements a local area communication hub, Hangar Central. Communication is performed using WiFi or, should this project receive the "Challenger's Kit", an EnOcean transceiver. The Hangar Central component maintains the current state of the available hangars and communicates with the other hangars in the network. Hangar Central also provides the single point of contact for the pilot interfaces and handles all security and authentication. While it is intended to be a "headless" device, the Raspberry Pi LCD touchscreen included in the Challenger's Kit would provide a compelling reason to include an "in-hangar" local status and control interface.


Each “Hangar Device” provides the current state of an individual hangar as well as an endpoint for controlling the available devices in that hangar.


Each hangar will have a hangar control device:

  1. Raspberry Pi used to communicate with Hangar Central.
  2. Turn on/off engine heater. Due to energy requirements, GPIO connected to a 20/30-amp SSR will perform the actual load switching.
  3. Turn on/off cabin heater. Use GPIO pin connected to suitable relay.
  4. Provide temperature conditions of engine compartment. This feedback would provide the pilot with knowledge that the engine has come to a safe starting temperature. In addition, it would allow for the regulation of preheating by cycling the heater to maintain the desired preheat temperature.
  5. Provide temperature conditions of cabin. In addition, provide for the regulation of cabin temperature by cycling heater.
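The heater-cycling regulation mentioned above amounts to simple hysteresis around the preheat set point; on the Pi, the returned on/off decision would drive the SSR through a GPIO pin. The set point and dead band below are illustrative values:

```python
# Sketch of the preheat regulation: hysteresis around a set point.
# On the Pi, the result would drive the 20/30-amp SSR via a GPIO pin
# (e.g. RPi.GPIO.output(heater_pin, state)).
def heater_command(temp_f, heater_on, setpoint_f=60.0, band_f=5.0):
    """Return the next heater state (True = on) with a +/- dead band."""
    if temp_f < setpoint_f - band_f:
        return True          # well below set point: heat
    if temp_f > setpoint_f + band_f:
        return False         # well above set point: rest
    return heater_on         # inside the band: keep current state (no chatter)
```

The dead band keeps the SSR from rapidly toggling as the engine compartment temperature hovers near the set point; the same function serves the cabin heater with its own set point.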


I understand that this project may appear similar to products that are presently available. The differences lie in the target audience: existing products present a solution for individuals or small groups, as evidenced by their "user management" and authentication components; also, existing products attempt to provide "general use" power control with appeal across a wide variety of use cases.


In contrast, Hangar Control targets a large group of users with very similar usage needs. User management and authentication is based around the needs of a frequently changing population, while “hangar control” is tailored to the specific needs of environmental control and status for airplanes and their users.


“Hangar Control System” is arguably a “Smarter Spaces” project worthy of both inclusion in the 2016 element14 Design Challenge and sponsorship as represented by the Challenger's Kit.


Thank you for your consideration.




Rick Havourd

To make my proposal a reality I need quite a bit of electronics in the house. Luckily we already have several Z-Wave devices and Raspberry Pis at home, and a sponsored kit on the way. But what other hardware do we still need?


More Z-Wave devices

Z-Wave Dimmer modules

The entrance, living room, kitchen and office already have most lights controllable through Z-Wave. We plan to do the same in the bedroom (and bathroom) to create a wake-up light. For this I’ll order Fibaro Dimmer 2 or Qubino dimmer modules, plus some dimmable LED bulbs and switches. Before ordering I’ll try to find out which combination currently works best, because in the office I’m using the first-generation Fibaro Dimmer and that one unfortunately buzzes while dimmed. In a later blog post I'll let you know which one I choose.



Estimote Beacons

Next to PIRs, I’ll be using iBeacons to detect where in the house we are. I’ve chosen the latest generation of Estimote beacons for this. I’ve used the first generation before; they are nice little beacons with a very powerful SDK. Using at least 6 beacons, it should be possible to get a quite accurate approximation of your location in the house. I’ll give it a try when they arrive.
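Location estimation from beacons typically starts by converting received signal strength (RSSI) into a rough distance. A minimal sketch using the log-distance path-loss model (the calibrated TX power and path-loss exponent below are generic indoor assumptions, not Estimote-specific values; they need per-beacon calibration in practice):

```python
import math  # kept for readers who extend this to trilateration

def beacon_distance(rssi_dbm, tx_power_dbm=-59.0, path_loss_n=2.0):
    """Estimate distance in metres from an RSSI reading.

    tx_power_dbm is the RSSI measured at 1 m from the beacon (assumed
    value); path_loss_n is the environment's path-loss exponent
    (2.0 = free space, higher indoors).
    """
    return 10 ** ((tx_power_dbm - rssi_dbm) / (10.0 * path_loss_n))
```

With one distance estimate per beacon, the nearest beacon gives a room-level position, and three or more beacons allow trilateration for a finer fix.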

This generation is brand new, so I had to pre-order them.



Another pre-order is the MOVE, currently available on Indiegogo. The MOVE is a Bluetooth-controlled motor for motorizing existing blinds and shades. It also has a micro-USB connector that can be used to control it, so when it arrives I’ll experiment with both Bluetooth and USB control.

It will be used in the bedroom as part of the wake-up light. The estimated shipping date is currently in June.


HDMI capturing

For the ambilight in the home cinema I intend to use Hyperion. An HDMI splitter will let me intercept the signal to the TV and feed it into an HDMI-to-RCA converter and a capture device, which in turn feeds the images into a Raspberry Pi. The software interprets the image and controls the LEDs through an Arduino. Most materials are ordered from AliExpress, so they will probably arrive in about a month.



Now it’s time to be patient and wait for all the materials to arrive. In the meantime I’ll be working on some other parts; more about that in the next post!

To begin with, here is just a copy of the above-mentioned Design Challenge application; see below. Further updates will follow soon.


Autonomous solar systems for hot water supply in mild climate conditions are usually built from solar collectors, an accumulation tank, photovoltaic panels providing electric power for a solar-fluid pump, and a controller switching the pump on and off (based on a comparison of the actual solar-fluid temperature at the collectors and the hot water temperature inside the tank). One such system was measured for several months [1], [2] in different weather conditions, and based on the results a quite different control unit was proposed. As the original measurements were done on a Raspberry Pi connected to an ADC Pi V2 interface board, the same hardware platform is also used for the new design. Internet access for remote control and display is implemented through a browser interface.


Basic Functionality Description

Measurements described in [1], [2] revealed that this particular solar system is limited by insufficient photovoltaic power generated during corner cases (in particular at sunrise and sunset) and in cloudy weather. Although the controller switched the pump on, the pump was in fact not running because not enough electric power was available. This finding led to the first design decision:

1. The photovoltaic power output is connected directly to the pump (only via a 2 A Schottky diode, see below).

This solution has two major advantages: system functionality remains autonomous (not dependent on an external power source), and the pump controller is no longer needed at all. So if a controller is connected (for optimization purposes, see below), the overall system still remains functional even if the controller breaks down.

The second design decision was to improve solar energy intake by:

2. An external power supply is also connected to the pump (again via a Schottky diode, so as not to interfere with the photovoltaic source).

This power supply is of the PC ATX type. On/off switching is provided via the Power-on signal through a normally open contact of a controller relay (to ensure overall system functionality in case of a blackout).


Controller Design

For historical reasons the design is based on a Raspberry Pi SBC equipped with an ADC Pi Plus converter for temperature measurements, with five external industrial thermistors connected. (Another option, not requiring the ADC converter, is a set of five DS18B20-based 1-Wire temperature sensors.) One temperature sensor each is connected to the solar collector, the heat exchanger input, the heat exchanger output, the hot water tank and the hot water output.
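Each thermistor channel read through the ADC must be converted from resistance (derived from the divider voltage) to temperature; a common approach is the NTC beta equation. A minimal sketch (the nominal resistance and beta value below are generic datasheet assumptions for a 10 k NTC, not the parameters of the actual industrial thermistors used):

```python
import math

def thermistor_temp_c(r_ohm, r0_ohm=10000.0, t0_c=25.0, beta=3950.0):
    """Convert thermistor resistance to temperature via the beta equation.

    r0_ohm is the nominal resistance at t0_c, beta the material constant;
    both are assumed datasheet values for a common 10k NTC.
    """
    t0_k = t0_c + 273.15
    # 1/T = 1/T0 + (1/beta) * ln(R/R0), all temperatures in kelvin
    inv_t = 1.0 / t0_k + math.log(r_ohm / r0_ohm) / beta
    return 1.0 / inv_t - 273.15
```

The resistance itself follows from the ADC reading and the known divider resistor before this conversion is applied.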

The second expansion board used in the controller design is the PiFace Digital 2. (Unfortunately this board, unlike the ADC Pi Plus, does not conform to the Pi HAT specification.) Only the two switching relays soldered on the board are utilized: the first switches the external power supply on/off by connecting/disconnecting its Power-on signal to/from ground via a normally open contact; the second prevents tank overheating by switching on an electromagnetic valve on the hot water output when the temperature there reaches 95 °C.

A touchscreen is connected to display graphics with the measured values (the voltage actually supplied by the photovoltaic panel is also measured by a spare ADC channel for information purposes) and, if needed, to manually switch the external power supply on/off (to switch the heating-fluid pump on and off). See the example picture of the web interface below (courtesy of Ladislav Lebeda).


Controller web interface screenshot (courtesy of Ladislav Lebeda).


This article describes the possibility of replacing the controller of a commercially available autonomous hot-water solar system with a Raspberry Pi based design. Using this SBC with the necessary expansion boards, together with the connection of the external power supply, allows not only better utilization of the available solar energy but also the collection of more data, its retention in a database and remote Internet access. From the stored data, the solar power intake can be calculated and the measured values can be presented graphically; please visit the web page published above to see the different possibilities.



[1] MÍČKOVÁ, Petra. Kvantifikace energetického přínosu řídící jednotky solárního systému. Brno, 2013. Bakalářská práce. Vysoké učení technické v Brně.

[2] HAVLÍČEK, Lukáš. Kvantifikace energetických ztrát fototermického solárního systému ohřevu TUV při napájení fotovoltaikou. Brno, 2014. Diplomová práce. Vysoké učení technické v Brně.


PiIoT - DomPi: Intro

Posted by mg.sergio May 25, 2016

Hi All,


I'm very excited about this Challenge and hope that, with the help of all the Challengers, I'll be able to create and develop this project and have a lot of fun.


The detailed idea of my DomPi can be found in the application I submitted: PiIoT - DomPi: Application. The intention of this intro post is to state the problem I want to solve, summarize the main features and the key modules of the solution, and create a dashboard that I will use throughout the contest to track where I am (for the readers and... hehe, for myself!).


What's the problem I want to solve

Mainly three. First, I want to emulate that somebody is present at home whenever we leave it for some days. We live on the ground floor, and I don't feel comfortable leaving the flat without knowing that all is OK. Second, I want to create an alarm system that fits my needs by integrating my IP Cam, some PIR detectors and some intelligence. Third, I want my home to become as autonomous as possible: by knowing more about it we can live better. For example, I want to automatically control the humidity to avoid a dry (and unhealthy) home environment, run the hoover-robot every few days, and so on. Beyond these main problems I'm solving others, such as knowing whether my wife is already home, whether I should bike or take the car to work, or whether my car is in the garage, as detailed below. All this at a lower cost than off-the-shelf products and 100% customizable to my requirements.


Main Features or Capabilities

Below is a summary of the main features I plan to develop. Since time is limited and I'd like to create several features, I have split them into three phases. My intent is to cover all of the Phase 1 (MVP) features and develop as many of Phases 2 and 3 as I can. If you have any preferences or suggestions, please let me know and I will try to modify the planning!


The features are split across Phase 1 (the MVP, for Minimum Viable Product), Phase 2 and Phase 3:

  • Presence Emulator – Emulate that there is someone at home by turning the lights and the TV/Radio on and off
  • Lights Control – Turn the lights in every room on/off with the DomPi solution
  • Lights Control - TV – Turn the lights on/off via the TV remote control
  • Environment Conditions – Get temperature, humidity and luminosity from each room
  • Motion Detection – Determine if there is somebody at home via PIR sensors, IPCam and PiCam
  • Alarm - Basic – If motion or sound is detected, inform me about the event
  • Weather and Pollution Forecast – Display the weather and pollution forecasts obtained from the Internet
  • Park Assistance – Assist us when parking the car in our garage
  • Car Presence – Inform if the car is parked in the garage
  • Welcome at home – Turn on the TV/Radio and the lights as required when somebody arrives home
  • Presence Identification – Inform about who is at home, for example if I want to know whether my wife has arrived
  • Automatic TV off – Turn off the TV/Radio automatically when leaving home, and as required
  • Light Status – Determine if we left the lights on, for those lights that can't be controlled by the project
  • Alarm - Advanced – Steer the IPCam toward the movement, use an RFID card reader to activate/deactivate, ring the alarm via my soundbar
  • Temperature Alarm – Notify us if the flat temperature falls below or rises above some thresholds
  • Bike Smart Recommendation – Based on the weather, pollution and my bike habits, determine "smartly" if I should take the bike
  • Light and Motion – Turn lights on/off based on motion being detected, especially in the garage and during the night
  • Automatic Hoovering – Automatically start/stop the hoover
  • Automatic Humidifier – Automatically start/stop the humidifier
  • Home Temperature Control – Physically turn the heating key to start/stop it
  • Rain detection and Soil humidity – Determine if it is currently raining and if the plants need some watering
  • Flood detection – Determine if there is any flood at home or in the garage
  • Fire detection – Determine if there is a fire at home and inform us
  • Intruder in the Garden – Determine if there is an intruder in the garden via weight sensors under the tiles or some volumetric sensor (non-PIR)
  • Secure Communications – Encrypt data among the modules and ensure integrity
  • Advanced Intelligence – Via a heuristic or neural network taking the inputs of all the sensors, the three cameras and the latest actuator changes, create a smarter approach to ringing the alarm, turning the lights on/off, etc.
  • Patrol Robot – A small car with a PiCam on it that can patrol the house to check status beyond the sensors (phase 4, unfortunately...)


Key Modules

There will be six modules or nodes in total. You can find more details in the application form and a summary below:

  • Command Center: placed in the living room, it will be the heart of the whole solution, obtaining the data from the rest of the modules and displaying it on my TV via HDMI
  • Control Panel: the human interface, placed at the door entrance (inside the home). It will provide the weather and pollution forecasts (obtained via the Internet) as well as key information from DomPi, like people presence, a recommendation on going by car or bike, outside temperature, rain, whether the car is in the garage, etc.
  • Two smaller room modules, one in each room. They will read the environment (temperature, humidity, luminosity and motion) and inform the Command Center
  • Garage module: enables features such as park assistance and detecting whether the car is in the garage
  • Garden module: enables features such as rain detection, intruder detection and outside temperature




Development Approach

Since my intention is to develop as many of the above features as I can, I will focus more on functionality than on optimization. I know some will say that quality and speed can go together, but...


Hope this post provides a good initial view of what the solution is about, as well as the priorities. Any comments or suggestions are more than welcome!

What makes me tick

I am passionate about IoT at work and at home - yep, my wife is "delighted". I am passionate about making life easier, and about automating tasks. All this means I spend long hours at night when, instead of sleeping and resting, I'm creating and interconnecting things at home that simply make me feel I am in a better home: more automated, smarter.


Why I want to become an Element14 Challenger

I love to be challenged and stretched beyond my IoT knowledge, and to think of innovative ways to make people's lives easier, every single day. In this project I put together the day-to-day needs I have at home with future developments and out-of-the-box solutions. Through this Design Challenge I hope to broaden my experience, have fun, participate with 15+ other IoT lovers and test whether my ideas are disruptive beyond my "home universe".


What's the problem I want to solve

Mainly three. First, I want to emulate that somebody is present at home whenever we leave it for some days. We live on the ground floor, and I don't feel comfortable leaving the flat without knowing that all is OK. Second, I want to create an alarm system that fits my needs by integrating my IP Cam, some PIR detectors and some intelligence. Third, I want my home to become as autonomous as possible: by knowing more about it we can live better. For example, I want to automatically control the humidity to avoid a dry (and unhealthy) home environment, run the hoover-robot every few days, and so on. Beyond these main problems I'm solving others, such as knowing whether my wife is already home, whether I should bike or take the car to work, or whether my car is in the garage, as detailed below. All this at a lower cost than off-the-shelf products and 100% customizable to my requirements.


How is this idea disruptive

Raspberry Pi + Arduino + home automation does not sound very disruptive, right? Well, it depends on the details, as usual. Going much further than must-have features such as temperature and lighting control, I am aiming for a holistic command center at home that allows distributed intelligence in the different "things", learns from our habits to anticipate our needs (when to turn on a light or switch off the TV, etc.) and interconnects with existing systems at home rather than requiring new ones. Since mobile phones are ubiquitous nowadays, the SmartSpace will also be controllable via a mobile app and from the Internet, all done securely.


What's the biggest challenge to get all this working

Time. 14 weeks looks like a lot, except if you plan to design, implement and test many cool features. To overcome this, if I become a Challenger, during the first weeks and posts I will focus on prioritizing the most attractive and disruptive features. Besides the time challenge, there is the aesthetic challenge, and I know who the "judge" at home is for that: the final solution needs to be neat and nice from the outside, not just from the inside.


Getting into the Details

With this project I'm transforming three physical places into one SmartSpace: my home, my garage and my garden. I am evolving them to be more comfortable, more informative and more safe and secure. This makes a "magic" 3-by-3 matrix of drivers and places:


Driver vs Place (Home / Garage / Garden):

Comfortable

  • Home: presence (is my wife already at home? and my in-law?); lighting in every room (automatic and remote, as well as controlled via the TV remote control); hoovering (automatic and remote); TV and Radio (automatic turn on/off when arriving or leaving home); humidifier (turn on/off depending on room humidity); home temperature control (automatic and remote, future)
  • Garage: park assistance; presence (did I leave the car inside?); motion detection and lighting
  • Garden: motion detection and lighting

Informative

  • Home: temperature, humidity and luminosity in every room; motion detection (is somebody at home?); weather forecast (Internet); pollution forecast (shall I take the bike or the car? Internet)
  • Garage: temperature and humidity; luminosity (did I leave the lights on?)
  • Garden: temperature, humidity and luminosity; rain detection (to bike or not to bike); soil humidity (shall I water the plants?)

Safe & Secure

  • Home: presence emulator when not at home (automatic smart lighting, TV/Radio); alarm device (activate/deactivate via RFID card reader at the door entrance, PIR motion detection, Pi Cam movement detection, IPCam Foscam interlock with movement and sound detection via web API, IPCam moving and focusing to where the movement was detected, ringing the alarm: a Philips radio and my soundbar as buzzers, plus email, pictures and video, SMS and phone call); patrol robot: a small car with a PiCam on it to patrol my home (future); secure communications: encrypt data between the modules and the Command Center and ensure integrity (future)
  • Garage: movement detection; flood detection; IP44 case
  • Garden: motion and distance detection; weight sensors under key tiles to enhance the alarm; IP55/IP66 case


Modules of the SmartSpace

The above diagram of the flat and the garden should help to better understand the layout, showing where the main components, the Command Center and the Control Panel, are located.


Command Center

The Command Center governs the actions required to make my SmartSpace more comfortable and secure. It captures the data, analyses it and, combining it with the knowledge in the DB, actuates. The Command Center also allows manual override. As a fallback plan, the things connected to the Command Center will be designed with some autonomy, in case they cannot reach the Command Center within a time threshold - you don't want a blinking lamp just because a module lost connectivity with the center...
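The fallback behaviour can be sketched as a heartbeat watchdog in each module: the module keeps obeying the Command Center as long as heartbeats keep arriving, and switches to local, autonomous control when the threshold is exceeded. A minimal sketch (the 30 s threshold is an illustrative assumption; the clock is injectable so the logic can be tested):

```python
import time

class NodeLink:
    """Track connectivity to the Command Center for one module."""

    def __init__(self, threshold_s=30.0, now=time.monotonic):
        self.threshold = threshold_s   # max silence before going autonomous
        self.now = now                 # clock, injectable for testing
        self.last_heartbeat = now()

    def heartbeat(self):
        """Call whenever a message from the Command Center arrives."""
        self.last_heartbeat = self.now()

    def autonomous(self):
        """True when the module should fall back to local control."""
        return self.now() - self.last_heartbeat > self.threshold
```

The module would check `autonomous()` in its main loop and apply safe local defaults (e.g. leave the lamp in its last commanded state) while it is True.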


Knowledge. In the first approach the knowledge (what to do when) will be programmed; as an evolution, the Command Center will use machine learning / AI / neural network algorithms to learn from my family and replicate our habits automatically. For example, it will know when we have left home and automatically turn on the alarm and the presence emulator. Likewise, when we enter the home and nobody was inside before, it will turn on the lights and switch to our favorite TV channel if it is the right time.


Hardware and Software. The HW will be the RPi 3 connected to my TV in the living room. It will communicate with the other "things" via EnOcean and will also include an RF 433 MHz module to control 3 RF plugs and the hoover, an RF 2.4 GHz module to communicate with other things, the Pi Camera, the Sense HAT, an IR transmitter and receiver, and a PIR. An external HDD will be used to avoid SD corruption - I plan to use a MySQL DB to record the relevant events. Besides the OS and the relevant libraries for the sensors and modules, the Command Center will run the openHAB server, MQTT and MySQL; these are great tools for human interaction and internal communication. The Command Center will also determine the presence of people at home by polling the mobiles' IPs and the motion sensors.


Main features to design and implement in the Command Center.

1) Presence detection. It will of course leverage the PIR present in each room module (see "Things in the rooms" below), my Foscam IPCam (see "Alarm System" below) and the Pi Cams, but it will also track the family mobile phones to know who is at home. This can be done in two ways. The first is ping-based: just send pings every now and then to check IP connectivity with the phones (they have fixed IPs, which makes this easier). The second is querying the router, which would be optimal as the router can provide more in-depth information. I will start with the first option and deploy the second in the improvement phase.
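The ping-based option can be sketched as follows (the phone IP addresses are placeholders, not the real ones, and the ping flags assume Linux; the check function is injectable so the logic can be exercised without a network):

```python
import subprocess

# Fixed IPs for the family phones (example addresses, not the real ones).
PHONES = {"192.168.1.21": "Sergio", "192.168.1.22": "Wife"}

def is_up(ip, ping=None):
    """True if the host answers one ping; `ping` is injectable for tests."""
    if ping is None:
        ping = lambda addr: subprocess.call(
            ["ping", "-c", "1", "-W", "1", addr],          # Linux flags
            stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL) == 0
    return ping(ip)

def who_is_home(ping=None):
    """Return the sorted names of family members whose phone responds."""
    return sorted(name for ip, name in PHONES.items() if is_up(ip, ping))
```

Run periodically (e.g. every minute), this gives the presence list that the Command Center combines with the PIR and camera inputs.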

2) Automating appliances. I have a hoover-robot that can be controlled via an RF remote control (a Solac EcoGenic), and I intend to replicate the main commands of the remote to be able to steer it. To do so, I will identify the RF frequency and protocol to use in my SmartSpace. Additionally, I want to automate switching the humidifier in the main room on and off. This can be tricky, since it apparently has no easy way to be controlled externally and may require opening it and connecting some wires!

3) Presence emulator. This is one of the key components. The emulator will work with the 3 RF plugs via the 433 MHz transmitter and with the TV or Radio via the IR transmitter. The idea is straightforward: emulate that somebody is at home by turning the lights and the TV/Radio on and off in an intelligent and random manner, replicating the relevant remote controls.
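One way to get the "intelligent and random" switching is to draw a few random on/off times per evening. A minimal sketch (the evening window and number of events are illustrative assumptions; the real emulator would drive the RF plugs and IR transmitter at these times):

```python
import random

def evening_schedule(rng, start_min=19 * 60, end_min=23 * 60, events=4):
    """Pick `events` random switch times (minutes of the day) inside the
    evening window, sorted, alternating 'on'/'off' starting with 'on'.

    `rng` is a random.Random instance, injectable for reproducibility.
    """
    times = sorted(rng.sample(range(start_min, end_min), events))
    return [(t, "on" if i % 2 == 0 else "off") for i, t in enumerate(times)]
```

A fresh schedule per day (seeded differently each evening) avoids the tell-tale regularity of simple plug-in timers.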

4) Comfortable lighting. We can currently control the living room light via the RF plug and its remote. But, hehe, it happens that we never have the remote handy, since it lives on the bedside table. This means standing up, going to the bedroom, pressing the button and coming back... My intention is to make the home more comfortable: use the TV remote control to act on the light. By adding an IR receiver to the Command Center, it will detect when a seldom-used key is pressed. For example, it will interpret a double press of the red key within 3 seconds as "turn the living room light on/off". If it takes longer, it means I'm controlling the TV rather than the light - 3 s is usually enough, since the TV needs to load this and that page and the "user" needs to read before pressing the same key again. The same will be done for the rest of the lights with the green and yellow buttons.

5) Provide weather and pollution forecasts. As I intend to display some key information from home both on the TV and on the Control Panel (see the section below), the Command Center shall also fetch the weather and pollution forecasts for my city from the Internet and add them to the dashboard.

6) Knowledge acquisition. Instead of programming heuristics for how to react to different sensor data, I plan to use some automated learning so the system learns our habits during the improvement phase of the project. The idea is that a neural network can probably control the alarm or the presence emulator more effectively than trying to program it all. By providing patterns of sensor status (PIR status, luminosity, etc.), the NN can learn and react better to the changing environment. Training the NN should not be too time-consuming, since the variability is more limited than in other, more complex environments where neural networks are used.


Note: depending on real-life performance, it may be necessary to delegate some of these functions, such as the IR emitter and receiver or the two RF modules, to an Arduino Uno or Pro Mini. If so, the Arduino would probably be connected to the RPi via USB.


Future development: we live in a city that is very cold in winter, and adjusting the inside temperature can bring some savings at the end of the month. I'd like to develop a control system that can turn the heating on and off. However, the heaters require physically turning the knob, as shown in the picture. The physical module for this feature could therefore be complex, as I'll need a servo strong enough to turn the knob yet small enough to be aesthetically acceptable.


Solac hoover-robot


Heating system to physically control





Control Panel

This will be the human interface and will be placed at the door entrance (inside the home). It will provide the weather and pollution forecasts (obtained via the Internet; a good starting point is the Weather Underground API), as well as key information from the SmartSpace, like people presence, a recommendation on going by car or bike, outside temperature, rain, whether the car is in the garage, etc.


Hardware and Software. The HW will be the RPi B+, the LCD touchscreen and the Pi Camera. Additionally it will include four sensors (temperature, humidity, luminosity and PIR) and the WiFi dongle. Finally, it will have an NFC MIFARE RFID card reader so my family can swipe a card to activate/deactivate the alarm, etc. These cards will carry access and user controls, restricting some actions for the babysitter or visitors. The Control Panel will obtain the SmartSpace data from the openHAB server via WiFi and the weather and pollution data via the Internet. Depending on the radio coverage with the garage (to be tested), this module may include the RF unit that communicates with the garage thing, or that may be delegated to the Command Center.


Main features to design and implement.

1) An attractive human interface with key information on the main screen: the weather and pollution forecasts, an icon showing whether the car is in the garage, and icons showing whose presence is detected at home (via the mobile phones, as explained in the "Command Center" section). If there is still enough space while keeping it aesthetic, I will add some other sensor and actuator status (light status, PIR, alarm on/off, etc.). On a second page, if required, all the environmental data (temperatures, etc.) will be shown. In a second phase it can also show historical data as graphs, obtained from the Command Center DB.

2) RFID reader. In principle the Control Panel will be autonomous, meaning that the access rights for the card being swiped are stored in this module and there is no need to query the Command Center. In a second phase, the design may be modified to allow such queries.
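The autonomous access check can be sketched as a local lookup from card UID to allowed actions (the UIDs, action names and the family/babysitter split below are hypothetical examples, not real card data):

```python
# Example card database: UID -> set of allowed actions (all hypothetical).
CARDS = {
    "04A1B2C3": {"alarm_on", "alarm_off", "lights"},   # family card
    "04D4E5F6": {"alarm_off", "lights"},               # babysitter card
}

def authorize(uid, action):
    """Check locally, without querying the Command Center, whether the
    swiped card may perform the requested action."""
    return action in CARDS.get(uid, set())
```

Unknown cards get no rights by default, which keeps a lost or cloned card from doing anything until it is explicitly added.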


Things in the rooms

In each of the three rooms there will be a module consisting of an Arduino Pro Mini, four sensors (temperature, humidity, luminosity and PIR) and an RF 2.4 GHz module. These will communicate with the Command Center via radio. Their main function is just to report the environmental conditions and any motion, so each will be quite a light module, easy to hide from sight so the non-geeks at home don't get stressed.

Thing in the garage

This unit will help us park the car. It detects, via ultrasound, the distance between the car and the three walls and advises on the right position to park without scratching it. It will also report to the Command Center whether the car is in the garage or not - nothing worse than rushing to the garage and discovering that you left the car on the street the night before... Additionally, it will provide environmental data (temperature, etc.), a flood alarm and automatic lighting.


Hardware and Software. The HW will be an Arduino Mega, probably with a TFT screen to show the car position. Additionally it will include eight sensors (temperature, humidity, luminosity, PIR, flood detection and 3x ultrasound) and an RF 433 MHz module to turn the light on/off via an RF plug. It will communicate with the Command Center either via RF 2.4 GHz or EnOcean. If the walls and ceiling prove too thick, I will use the RF 433 MHz or try an additional antenna. Physically it will all be housed in an IP44 case to avoid problems with water drops and dust.


Main feature to design and implement.

1) Parking assistance. There will be one ultrasound sensor (like the HC-SR04) per side - both sides and the back of the garage. In principle the information shown on the TFT will be quite simple: the screen will be divided into three columns, one per sensor, each with a Red-Amber-Green status to make it more visual, plus a number with the distance.
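The Red-Amber-Green mapping per ultrasound sensor can be sketched as a simple threshold function (the thresholds below are illustrative assumptions that would be tuned to the real garage):

```python
def park_status(distance_cm, red_cm=20, amber_cm=50):
    """Map an HC-SR04 style distance reading to the display status."""
    if distance_cm < red_cm:
        return "red"      # stop: about to touch the wall
    if distance_cm < amber_cm:
        return "amber"    # close: creep forward slowly
    return "green"        # plenty of room
```

Each of the three TFT columns would call this with its own sensor's reading and show the colour plus the numeric distance.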

2) Motion detection. The motion detection not only helps detect an intruder (if the alarm is set) but also, when we approach, it will turn on the lights and start displaying the parking assistance on the TFT.

3) Environmental information. Besides sending the environmental data (temperature, humidity and luminosity) to the Command Center, this module will show it on the TFT when the screen is touched - this way, when I am in the garage and want to check the temperature, I don't need to run home; I can just see it on the TFT.




Thing in the garden

Finally, this unit will monitor what happens in the garden. It will report whether it is raining and how much, and will recommend whether to take the bike. With the soil-humidity sensor I hope not to forget to water the plants (oops). A key feature will be detecting movement in the garden; since it will be outside, a PIR may not be suitable, so I will need to investigate some other way of detecting "volume" motion. My idea is to enhance this volume detection so that I can tell whether an adult has stepped in; the reasoning is to feel comfortable that my little kid is safe from strangers while she plays in the garden. Another important feature to enhance security is detecting whether somebody has stepped into the garden; I will do this via weight sensors, which could even replace the motion sensors if those prove unfeasible or over budget.


Hardware and Software. In principle the controller will be an Arduino Pro Mini or Nano. Besides the raindrop counter and the soil-humidity, temperature and luminosity sensors, it will require the volume-detection sensor and the 2 or 3 weight sensors under the tiles. For communication this unit will use RF 2.4 GHz. Everything will be embedded in an IP55/IP66 case.


Alarm System

Among the different systems, I'm most interested in the alarm system. The key components are the Pi Cameras, the Foscam IPCam, the PIR sensors, the weight sensors (for the garden tiles) and the intelligence programmed into the RPi. The Foscam IPCam includes an API that can move the camera, activate motion and sound detection, record video and alert via email. Whenever motion is detected, the Command Center will determine which of the three cameras is best suited to record the possible intruder. If it is the IPCam, it will then decide whether the camera should be moved to record the sequence. If it was a false alarm and no motion is detected after some time, the Command Center will instruct the IPCam to return to its default position. Once it is confirmed there is an unauthorized person at home, it will start recording the event and notify me on my mobile phone via email and, in the future, also by SMS and a phone call (GPRS card required).
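The camera-selection step can be sketched as a mapping from the sensor that fired to the best-placed camera, with a flag saying whether the movable Foscam needs steering (the sensor names and assignments below are illustrative assumptions; the real mapping depends on where the cameras are installed):

```python
# Which camera best covers each motion source (illustrative mapping).
BEST_CAMERA = {
    "pir_living":  "ipcam",     # the Foscam can pan toward the living room
    "pir_bedroom": "picam_1",
    "pir_garage":  "picam_2",
}

def pick_camera(sensor_id):
    """Choose the camera for a motion event.

    Returns (camera, needs_steering): the Foscam IPCam is the default for
    unmapped sensors, and it is the only camera that can be moved.
    """
    cam = BEST_CAMERA.get(sensor_id, "ipcam")
    return cam, cam == "ipcam"
```

When `needs_steering` is True, the Command Center would issue the Foscam pan command before recording, and return it to the default position after a quiet period.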


Finally, it will ring the alarm at home once the first seconds of the recording are saved - I want to keep them as proof. To ring the alarm I will use the Philips radio and the soundbar in the living room. Since the radio is usually off, the Command Center will turn it on via the IR transmitter and select the jack port as the audio input; the RPi will then send the audio via its jack connector. Additionally, the RPi will connect to the soundbar via the Bluetooth module and transmit the alarm buzz.


Intelligence will be implemented to avoid false positives while ensuring I am notified if an intruder breaks in. I will need to figure out the best approach, probably via some heuristics or a neural network - I don't want to disturb the neighbours in the middle of the night, especially if it turns out to be a false alarm. This heuristic or neural network will take as inputs all of the sensors, the three cameras and also the latest changes to the actuators - I have experienced that turning on a light sometimes triggers the Foscam IPCam motion detection, and this needs to be avoided or minimized.
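A minimal version of such a heuristic might count how many independent sensors agree, and distrust camera motion that closely follows an actuator change (the light-switch problem mentioned above). The threshold and the cooldown window below are assumptions for illustration, not a proposed final design:

```python
import time

# Minimal false-positive heuristic sketch: require agreement between
# independent sensors, and ignore camera motion events that closely
# follow an actuator change (e.g. a light turning on), a known source
# of false triggers. The 5 s window and threshold of 2 are assumptions.

ACTUATOR_COOLDOWN = 5.0  # seconds to distrust camera motion after a switch

def intrusion_score(pir, weight, camera_motion, last_actuator_change, now=None):
    now = time.time() if now is None else now
    score = 0
    if pir:
        score += 1
    if weight:
        score += 1
    # Camera motion only counts outside the actuator cooldown window
    if camera_motion and (now - last_actuator_change) > ACTUATOR_COOLDOWN:
        score += 1
    return score

def should_alarm(score, threshold=2):
    """Require at least two agreeing sensors before ringing the alarm."""
    return score >= threshold
```

A neural network would replace this hand-tuned scoring with weights learned from logged sensor data.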


Pi Cams


+   Foscam IPCam


+   PIR Sensors


+   Intelligence (RPI)


=   Advanced Security !!



[ Picture Sources: Pi IoT - Smarter Spaces with Raspberry Pi 3: The Kit- UPDATED! ]


As a future evolution of the Alarm System, I'm planning to add a movable camera by installing a PiCam on top of a small autonomous car. From my experience, when I am away from home I'd sometimes like to check if I left my computer at home, or whether the house is tidy for an unexpected guest. On top of that, if motion is detected where there are no cameras, sending the car to patrol the area would be the best thing to do!


To Sum up

I'm very passionate about this Challenge and all the real-life applications I will add to my home, making it a smart, secure and comfortable place. These features should be easily applicable to anybody's home, opening great potential for others to leverage the final result as well as the blog discussions during the challenge weeks. There will be lots of discussion on how best to train the SmartSpace to learn our habits and bring real value, and finding a volume sensor within budget will be an interesting problem. There are many possible enhancements to the presented system, and it will be fun to select the most disruptive ones for the Grand Final!


Many thanks and looking forward to having some fun!

Sergio Martinez


About myself: for the last 4 years the IoT has been my hobby and my passion. Before that, even before the term IoT existed, at university I loved microcontrollers, digital circuits and all the opportunities they open. I then worked for a leading company as a DSP and systems engineer, and currently I split my time with a side role helping the site's IoT center, working closely with start-ups and engineers.


Smart homes are one of the most popular IoT applications. It is very convenient to be able to access your house's information (and never again wonder whether the door was left open or the kitchen stove is on), and even to have some basic automatic features (temperature auto-set, lights off when not needed, etc.).

We implemented a prototype of this temperature and pressure recording system a few weeks ago (just to test whether we could have a simple MQTT architecture), with only some basic monitoring features. We want to add persistence to the system (a database), an actual interface and some remote capabilities. Also, new interesting devices can be included in the platform: a gas detector, or a camera to be accessed when away from home.

But wait... there is more: for this IoT challenge, we are also proposing a fun version of this IoT smart house, where the current inhabitants are not only treated as part of the system, but will also compete to be number one.

The competition part is exercise oriented. We will implement the simplest case first: a running competition. Each person will use their phone to track the miles they have run during the week; the person with the highest number gets more points and a higher position in the general ranking. We can add some extra information for variety: the phone's accelerometer/gyroscope, or some other wearable devices (a wristband or a heart-rate monitor, for example).


First, we set up the smart-home environment. The Raspberry Pi 3 will act as the central node of the implementation: the main component of the smart home as well as the data collector for all competition participants. Apart from its processing function, a display and interface will also be implemented, so that this central node can serve as a user access point as well.

Briefly, functions implemented by the central node/Raspberry Pi 3:

  • Main bridge between platform and user. It will host a Graphical User Interface, with its corresponding functions.
  • Smart-home central node, collecting sensors information and showing it on screen.
  • Competition central node, collecting data from users and configuring the general ranking. This ranking will also be displayed.

Features to be included in the Pi:

  • GUI itself
  • Database, to store the incoming data
  • Web server, to provide remote access to the service
  • Corresponding hosts for the smart-home central node and the competition central node.

Apart from the central node, the platform will include the following components:

  • Sensors and a sensor node (Raspberry Pi)
  • Wearable devices and a mobile device to manage them (smartphone)



We will not only implement the competition game, but also some basic functions of a smarthome: temperature and pressure reading, a door sensor and a simple alarm button. In order to control all these sensors, we can use another Raspberry Pi to work as a sensor node.

In the end, the list of devices in the platform is:

- Central node – Raspberry Pi 3

- Sensors node – Raspberry Pi (with a WiFi dongle)

- Mobile user device – smartphone (with Android OS)

- Temperature sensor – TMP006

- Pressure and altitude sensor – MPL3115A2

- Door sensor – normally closed magnetic switch

- Alarm button – normally open switch

There are two levels of communication in this platform:

1) Fetching data - wired connections.

Our proposed sensors will first connect to a “sensor node”, a Raspberry Pi. Depending on the sensors, they will be:

  • Connecting directly to a GPIO port of the Pi. This is the case for the door sensor and the alarm button (both acting as switches); the wiring presents a 1 or 0 on the GPIO depending on whether the switch is open or closed.
  • Using the I2C protocol and the corresponding GPIO ports (SDA and SCL).
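For the I2C sensors, the sensor node mainly has to convert raw register words into physical units. The conversions below follow my reading of the TMP006 and MPL3115A2 datasheets (14-bit temperature word at 0.03125 °C/LSB; 20-bit pressure word with 2 fractional bits, in Pa); the actual bus reads are shown only as comments since they need the hardware:

```python
# Raw-to-physical conversions for the two proposed I2C sensors.
# Register formats are taken from the TMP006 and MPL3115A2 datasheets;
# treat them as assumptions to verify against the parts in hand.

def tmp006_celsius(raw16):
    """TMP006 die temperature: 14-bit left-justified, 0.03125 degC/LSB."""
    value = raw16 >> 2
    if value & 0x2000:          # sign-extend the 14-bit value
        value -= 0x4000
    return value * 0.03125

def mpl3115a2_pascals(msb, csb, lsb):
    """MPL3115A2 pressure: 20-bit unsigned, 2 fractional bits -> Pa."""
    raw20 = ((msb << 16) | (csb << 8) | lsb) >> 4
    return raw20 / 4.0

# On the sensor node it would be used roughly like this (hardware only):
#   import smbus
#   bus = smbus.SMBus(1)
#   raw = bus.read_word_data(0x40, 0x01)   # TMP006 temperature register
#   print(tmp006_celsius(raw))

print(tmp006_celsius(0x0C80))               # 25.0
print(mpl3115a2_pascals(0x62, 0xF3, 0x40))  # 101325.0
```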


2) Distribution of data - wireless connections.



We will be using a WiFi connection to implement the MQTT messaging protocol. This protocol is based on a client/broker architecture. Clients are divided into publishers (which generate data) and subscribers (which receive the data they are interested in). The exchange between publishers and subscribers goes through the broker.

In this system, these elements are represented as:

  • Broker - central node: Raspberry Pi 3
  • Publisher - sensors node: Raspberry Pi
  • Subscriber - user mobile device: Smartphone (in our case, Android). There is a secondary subscriber in the broker too (which will be doing the data recording)
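The roles above can be illustrated with a tiny in-process sketch. The real platform would use a Mosquitto broker on the Pi 3 and an MQTT client library such as paho-mqtt; this toy only shows the publish/subscribe flow between the three roles, with the topic name as an example:

```python
# In-process toy illustrating the MQTT roles: a broker routing messages
# from a publisher (the sensor node) to subscribers (the smartphone and
# the broker's own recording subscriber). Topic names are illustrative.

class ToyBroker:
    def __init__(self):
        self.subscriptions = {}      # topic -> list of callbacks

    def subscribe(self, topic, callback):
        self.subscriptions.setdefault(topic, []).append(callback)

    def publish(self, topic, payload):
        for callback in self.subscriptions.get(topic, []):
            callback(topic, payload)

broker = ToyBroker()                 # central node (Raspberry Pi 3)

received = []
# Subscriber: the user's smartphone (and, similarly, the data recorder)
broker.subscribe("home/temperature", lambda t, p: received.append((t, p)))

# Publisher: the sensor node publishing a reading
broker.publish("home/temperature", "21.5")

print(received)                      # [('home/temperature', '21.5')]
```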



With this structure, we were able to receive the sensor data in both the broker and the smartphone on a continuous basis.

A secure connection to the broker is still needed. We can create a certificate authority on the broker, which will generate certificates for all clients. For a connection to be accepted, the request needs to come with an adequate certificate. Distributing the certificates, as well as the secure connection itself, are the major challenges of this implementation.
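On the client side, this boils down to building a TLS context that verifies the broker against our own CA and presents a client certificate. The sketch below uses Python's standard `ssl` module; the certificate file names are placeholders for whatever the CA on the broker generates (with paho-mqtt, the equivalent one-liner is `client.tls_set(ca_certs=..., certfile=..., keyfile=...)`):

```python
import ssl

# Sketch of the client-side TLS setup for the secure broker connection.
# File names ("ca.crt", "client.crt", "client.key") are placeholders
# for certificates issued by the CA created on the broker.

def make_client_context(ca_file=None):
    # Verify the broker's certificate against our own CA
    context = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    context.verify_mode = ssl.CERT_REQUIRED
    context.check_hostname = True
    if ca_file:
        context.load_verify_locations(ca_file)    # e.g. "ca.crt"
    # The client certificate would be loaded with:
    #   context.load_cert_chain("client.crt", "client.key")
    return context

ctx = make_client_context()
print(ctx.verify_mode == ssl.CERT_REQUIRED)   # True
```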

As stated before, we will start by adding a data recording system: the main node will also have a database where sensor readings can be stored. On this same node, we can implement a secure web server to access the stored data even when we are not connected to the home WiFi.

Moreover, we would include some monitoring functions: with a camera in the living room, to be turned on by the user (ideally, when they leave home), we could check what is happening in the house.

Competition game

With a smart home implemented, we want to increase the reach of the platform, so that our home users will not feel disconnected even when they leave the apartment. The proposed scenario is built as a runners' competition: using the smartphone's information, our smart home will also maintain a ranking. The way any user goes up in this ranking is by running/walking more and more miles. As a result, the smart-home capabilities will be extended outside the house itself, to follow our happy runners.

We will have a web server built on the Raspberry Pi (our home central node). Each smartphone will have an application that:

  • Monitors the user's run distance
  • Sends that information to home, to the central node.


Furthermore, this central node will implement a simple routine to add up the distances and display our inhabitants in decreasing order. This routine can be extended (with some additional code on the phone) to add more challenging/original tasks, such as assigning extra points to people going to certain areas (which would require geolocation capabilities).
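The ranking routine itself is tiny; a sketch could look like this (the data shape, a name mapped to the list of distances reported by that phone, is an assumption for illustration):

```python
# Minimal sketch of the ranking routine on the central node: sum each
# runner's reported distances and list runners in decreasing order.
# The input shape (name -> list of km per run) is an assumption.

def ranking(distances):
    totals = {name: sum(runs) for name, runs in distances.items()}
    return sorted(totals.items(), key=lambda item: item[1], reverse=True)

week = {
    "alice": [5, 3, 10],
    "bob":   [7, 8],
    "carol": [21],
}

print(ranking(week))
# [('carol', 21), ('alice', 18), ('bob', 15)]
```

Extra points for visiting certain areas would simply be added to the totals before sorting.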


The running-game idea can be applied to other games as well, inside and outside the house. The same central node can run simple game applications (for example, a short detective-style solve-the-mystery challenge). We could build a simple autonomous mini car with the Raspberry Pi 2 (using some infrared sensors to avoid collisions) that leaves its refuge at certain times and needs to be taken back to its original location. Outside, the maps functionality can be used to motivate users to visit new places in their city, do some tourism... It can be targeted at kids (simpler in-house games) or adults (like the runners app itself).



With this project, we want to implement a basic but fully functional smart house. As an original feature, on top of the sensor reading and monitoring, we propose a running competition among the inhabitants of the house.

This way, the main display of the smart home will show some housekeeping data (such as temperature or pressure), but also the current status of the competition.

This same structure can also be applied to other types of games: have some games on the central node, enter a “city rally” discovery game ...


Multispectral images including red and near-infrared bands have proved their efficiency for vegetation-soil discrimination and agricultural monitoring in remote sensing applications. But they remain rarely used in ground and UAV imagery, due to the limited availability of adequate 2D imaging devices.

I was very happy to see that the new 8 Mpixel Pi Camera and the Pi Noir Camera were added to the kit of this challenge.

In the past I did an extensive test of the spectral properties of the old Pi NoIR camera (Pi NoIR and Catch Santa Challenge - Review). Now, with the new sensor's higher resolution and sensitivity, I would like to use the cameras to build a system for plant health analysis. Two approaches can be used. The first is to obtain the near-infrared and blue bands simultaneously from the Pi NoIR camera. This can be done using the infra-blue filter that was provided with the first Pi NoIR camera (is this still the case?). NDVI values obtained from the Pi NoIR camera can then be compared with reference values for a set of soil and vegetation luminance spectra. The second approach is to use both cameras: the NIR band from the Pi NoIR and the red band from the Pi Camera. The images can be overlaid using image processing software like OpenCV, and the NDVI calculated.

The expected quality of the images is sufficient to obtain NDVI bands, which can now be acquired with high spatial resolution, opening new opportunities for crop monitoring applications.


Plant Health measurements

The Normalized Difference Vegetation Index (NDVI) is a numerical indicator that uses the visible and near-infrared bands of the electromagnetic spectrum, and is adopted to analyze remote sensing measurements and assess whether the target being observed contains live green vegetation or not. NDVI has found a wide application in vegetative studies as it has been used to estimate crop yields, pasture performance, and rangeland carrying capacities among others. It is often directly related to other ground parameters such as percent of ground cover, photosynthetic activity of the plant, surface water, leaf area index and the amount of biomass. NDVI was first used in 1973 by Rouse et al. from the Remote Sensing Centre of Texas A&M University. Generally, healthy vegetation will absorb most of the visible light that falls on it, and reflects a large portion of the near-infrared light. Unhealthy or sparse vegetation reflects more visible light and less near-infrared light. Bare soils on the other hand reflect moderately in both the red and infrared portion of the electromagnetic spectrum (Holme et al 1987).

Since we know the behavior of plants across the electromagnetic spectrum, we can derive NDVI information by focusing on the satellite bands that are most sensitive to vegetation information (near-infrared and red). The bigger the difference therefore between the near-infrared and the red reflectance, the more vegetation there has to be.


The NDVI algorithm subtracts the red reflectance values from the near-infrared and divides by the sum of the near-infrared and red bands:

    NDVI = (NIR - Red) / (NIR + Red)
This formulation allows us to cope with the fact that two identical patches of vegetation could have different values if one were, for example, in bright sunshine and another under a cloudy sky. The bright pixels would all have larger values, and therefore a larger absolute difference between the bands. This is avoided by dividing by the sum of the reflectances.

Theoretically, NDVI is a ratio ranging in value from -1 to 1, but in practice extreme negative values represent water, values around zero represent bare soil, and values above 0.6 represent dense green vegetation.








Building a Raspberry Pi based NDVI Camera

Two approaches can be used:

  1. It is possible to capture all the information needed to compute NDVI using only the Pi NoIR camera. If a filter is added that passes NIR and blocks only red light, then the red channel will record mostly NIR light. The blue channel, which will record mostly blue light (some NIR light is also captured in each channel), can be used to represent wavelengths that are absorbed by plants. This can be done using the infra-blue filter that was provided with the first Pi NoIR camera (is this still the case?). NDVI values obtained from the Pi NoIR camera can then be compared with reference values for a set of soil and vegetation luminance spectra.
  2. The second approach is to use both cameras: the NIR band from the Pi NoIR and the red band from the Pi Camera. The images can be overlaid using image processing software like OpenCV (OpenCV | OpenCV), using image registration functions (OpenCV: Image Registration), and the NDVI calculated. Here is an example from (Automatic Optical and Infrared Image Registration for Plant Water Stress Sensing | InTechOpen)
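Once the bands are registered, the NDVI computation itself is the simple per-pixel ratio from the formula above. A plain-Python sketch is shown below; on whole images one would vectorise this with numpy over the registered NIR and red arrays. The reflectance pairs used in the example are illustrative values, not measurements:

```python
# Per-pixel NDVI sketch: NDVI = (NIR - Red) / (NIR + Red).
# On real images this would be applied with numpy to the registered
# NIR and red bands; plain floats keep the example self-contained.

def ndvi(nir, red):
    """Return NDVI for one pixel; 0 where both bands are 0."""
    if nir + red == 0:
        return 0.0
    return (nir - red) / (nir + red)

# Illustrative reflectance pairs: dense vegetation, sparse cover, water
for nir, red in [(0.50, 0.08), (0.30, 0.25), (0.02, 0.10)]:
    print(round(ndvi(nir, red), 2))
```

Healthy vegetation (high NIR, low red) lands near +0.7, while water (low NIR) goes strongly negative, matching the interpretation ranges given earlier.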


Results of SIFT based registration algorithm (no matching key-points) (a) 5906 keypoints found in optical image, (b) 447 keypoints found in IR image, (c) 5666 keypoints found in optical image, (d) 605 keypoints found in IR image




Physical build

Regarding the provided kit, I'm planning to use the Pi 3 as the main computer with one camera attached. I expect that only one camera can be connected to each Pi, so the Pi B+ will be used for the second camera. The Pis will communicate over Ethernet.

The touch screen will be used as user interface and also show the resulting NDVI images and numeric results.

The Sense HAT will be a valuable addition, since humidity and temperature are important parameters during plant measurements, for instance in greenhouses. The same goes for the EnOcean sensors.


About me: I'm currently a researcher in machine vision and plant phenotyping at Wageningen University in the Netherlands. I started as an electronics engineer, and in 2004 I received a PhD from Delft University of Technology on spectral imaging for measuring biochemicals in plant material. Since 2004 I have worked on machine-vision and robotics projects focused on agricultural research. I have been an electronics hobbyist and radio amateur for more than 30 years, and I work on wireless, DSP, SDR and embedded products.




Pi NoIR and Catch Santa Challenge - Review

OpenCV | OpenCV

Automatic Optical and Infrared Image Registration for Plant Water Stress Sensing | InTechOpen


I would like to start this post by thanking element14 and the sponsors for selecting my proposal for this design challenge. Thank you!




There are two rooms in which we, and probably most people, spend our time when at home: the living room and the bedroom. That is why I would like to build two control units which will blend into those rooms, yet provide full control of all IoT devices connected to the network.


The first control unit, for the living room, will consist of a Raspberry Pi with 7” touch screen. It can provide a visual representation of the status of the home and connected devices.

The second control unit, for the bedroom, will consist of a Raspberry Pi and smaller display (7-segment, 8x8 matrix, ...), in the form of an alarm clock.


Both control units will run the same software and configuration and should be in sync at all times. If the user introduces a new device or adapts the configuration on one unit, the change should automatically be applied to the other. This provides a form of redundancy, ensuring the user maintains control of their home or space at all times.


Let’s dive in the details of what I would like to achieve.


Control Units



The first unit will be based on the IoT Alarm Clock I built two years ago. It was created as a proof of concept, but was not as performant as expected. Using the Pi 3's power and the knowledge acquired over the past years, a new and improved version should be achievable. The components involved in the build will be packaged in a good-looking alarm clock; I have in mind a combination of wood and white plexiglass.


The second unit would be mounted on the wall or resting on a cabinet or shelf in the living room.

Using the same type of wood as the alarm clock, a frame would be created to house the components and create a good looking unit.




The software to visualise and control the different IoT devices in the house would be OpenHab. OpenHab is a user friendly tool with support for a very wide variety of protocols and devices. It also provides a customisable web interface with plenty of different widgets. OpenHab requires a lot of resources though. That is where the power of the Pi 3 comes in. With multiple cores, the overall performance should improve drastically.


The user may not always want to interact via touch, and in the case of the alarm clock this wouldn't even be possible. That is why, in addition, voice control is foreseen to allow an easy and natural way of interfacing with the control units. Voice control can be achieved in different ways: I have experimented with Jasper in the past, and recently with Amazon Echo. An interesting alternative to consider is Wit.ai.


As mentioned earlier, it is also important to keep both control units in sync at all times; the configuration files should be automatically updated on both units. I'm thinking of using an application like Puppet to distribute and enforce the configuration. With a periodic Puppet run, a server hosting the master files pushes them to the control units. This has two advantages: files cannot be tampered with, as they are periodically enforced, and the administrator only needs to make a change once to update all involved devices. It could also be used to easily recreate a device after a complete crash or corruption of the SD card.


IoT Devices


Over the past few years, our home has become more intelligent with plenty of IoT devices. Homemade or bought. The devices we currently have in our home and which would benefit from this control unit are:

  • Cat Feeder
  • EnOcean sensors
  • Tower Light
  • Philips Hue lights
  • Domotics



Did you spot the pet feeder in the Pi IoT: Challenge Overview Video? (hint: around 0:41) I built it as part of element14's Forget Me Not Design Challenge. This device can be updated to have the data sent to the new control unit created in this challenge, gathering all information centrally, avoiding the use of a separate user interface.


The EnOcean sensors have the huge advantage of being self-powered. They have been running since the Forget Me Not challenge in 2014 and haven’t required intervention since! The different sensors provide the state on doors and windows (closed or not), temperature and humidity and a master switch capable of switching on or off anything.

The Tower Light is one of my recent IoT projects. It is an internet-connected light installed in the garage, as this is where I spend most of my time. If my wife needs me for whatever reason, the light starts flashing to draw my attention. It supports different animations, which can mean different things, like "dinner", "visitor" or "emergency", for example.


Philips has a varied range of remote-controlled lights. It is possible to group them, define presets, and change brightness or color. This can be used to turn the lights off when going to bed, but also to create a certain atmosphere.


Finally, my wife and I bought a new house and are moving the first week of July. The house was built in 2010 and has a variety of domotic appliances. I would like to experiment and try to control these from the same control unit, creating a true unified interface for all IoT and automation devices in our new home. I don't have information on the system yet, meaning I'll only be able to start working on this aspect of the project once we actually move.




The IoT may be fun, but it can also be dangerous. I would like to focus on some security aspects in this project: gathering all information in a single location facilitates usage, but it also makes your home or space more vulnerable, since all information and controls are in one place. Research will be done to apply security measures and prevent (as best as possible) abuse of the system.




As you can see, I have a clear vision for this challenge, which I intend to realise, as I have done in the past with previous design challenges. The Raspberry Pi is probably my favourite SBC, and I hope this shows in the projects I create.


Looking forward to your feedback and an exciting challenge alongside the other finalists!


Navigate to the next post using the arrow.



An IoT architectural implementation for smart individual reading spaces in public libraries

This project aims to satisfy (at least) the following main guidelines:


  • Help and improve the interaction between library users and paper books
  • Empower the environmental care
  • Improve and make as easy as possible the users' social interaction around the Book as the most powerful knowledge engine
  • Create a smart relationship place
  • Create the reading point of the future with local environmental improvements


The fact remains that the most important aspect, to be reinforced as much as possible, is the assertion that the Book is the most powerful knowledge engine.


Detailed guidelines

The following points are the main guidelines I refer to. These are facets of the project I will try to explore and expand, following a highly structured and modular approach. The first implementation - hopefully in a public library - will become a modularised IoT application easily applicable to other similar environments without too many architectural modifications.

Experience teaches, anyway, that when the prototype of a project is finished, there are a lot of things that can be improved and made better, and the cost of the modules can be drastically reduced in further implementations.


  • Automated user recognition inside the library context
  • Post-it: The digital local messaging mechanism for in-library message exchange
  • Book-centered user experience improved and made easy with IoT and environmental automation
  • Paper book annotation made easy without impacting the original books
  • Smart book-selfie feature: "I read this book" progressive group images, or individual images expressing the concept "I love this author"
  • Environmental control, including saving paper copies and reducing any kind of physical paper usage (e.g. page copies), energy harvesting, space optimisation and more
  • Direct-scan feature: users can scan the pages they are interested in by themselves
  • Book lending and automated restitution system: includes notifying other library users interested in reading the book. Users can also agree to let other readers message them while the book is out on loan (Post-it reader messages)
  • Reading place lighting control and more
  • Book annotation sharing


A last note on data

The smart reading place exchanges data, mostly related to books. This information is compatible with and follows the standard of the OPAC specifications:

The online public access catalog (often abbreviated as OPAC, or simply library catalog) is an online database of materials held by a library or group of libraries. Users search a library catalog principally to locate books and other material available at a library. In simple language, it is an electronic version of the card catalogue. The OPAC is the gateway to the library's collection.


As this project will be released as open source, and the OPAC system for public libraries is also available as open-source Linux software (diffused worldwide), opening this modular IoT system to the OPAC specifications represents an opportunity to create a smart reading environment with the widest possible flexibility.


References: Librarian Opac

Once upon a time I started to automate my home using some cheap plug-in switches, Arduinos and homemade sensors. Over time the Thuis app (the name of the system, "home" in Dutch) evolved and grew, and now it's time to bring everything to a higher level. Imagine arriving home and being welcomed by your iPhone with a direct, actionable notification, or walking around while the house adjusts to your needs. A lot of this is already possible using off-the-shelf solutions, but you'll always lack the freedom and power of designing it yourself. I'm solving this with a flexible communication layer based on MQTT: by giving all parts one common way of communicating, we can mix several off-the-shelf solutions with our own. This brings me to the two goals of this project:

  • Improve my own home with an integrated home automation solution based among others on Z-Wave, Java, iOS, iBeacon, Plex and a Raspberry Pi as core
  • Contribute to the community by making several parts available as open source libraries for others to reuse


Why am I applying?

I've been an IoT enthusiast for quite some time and I'm experimenting a lot with home automation in my own home. I believe it's time to get more involved in the community and publish some of my personal work. This challenge seems like the perfect opportunity for that! In summary:

  • The theme of the challenge fits me well, since I already have some experience with IoT, and feel that participation in this project will help me to contribute to the community of other IoT experts
  • The Raspberry Pi has been a part of my home since it was introduced; in the meantime it was upgraded from a 1B to a 2B, and this is a good time to move up to the model 3
  • If I'm selected the Kit will be a great reason to add some home-made actors and sensors
  • There are nowadays so many great products on the market, but making them work together can be tough
  • My girlfriend is enthusiastic about our home-app and as UX-designer would like to contribute as well
  • And of course, I love to improve my own home


Selected use cases

After a long day of work coming home should be a relaxing experience. With this project I intend to make this a reality as much as possible. I have chosen a few use cases to focus on, which I will describe here.


Light when and where you need it

Take for example the kitchen: depending on your activity, your need for light is very different. While cooking, the lights above the worktop and stove should be on, maybe even the ceiling lights as well. But when you're there just to brew a cup of coffee, a small light will do. And what about when it's still quite light outside? The light plan should feel natural. It can be based on data from sensors, knowledge of habits and other activities in the house. Using several ambient-light sensors, PIR movement sensors, and possibly iBeacons, the system can detect where something is happening. Combined with previous behavior, actions can be triggered.
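The kitchen example can be sketched as a small rule combining ambient light with the detected activity. The lux threshold, activity names and light groups below are my own illustrative assumptions, not part of the actual Thuis design:

```python
# Toy sketch of the "light when and where you need it" rule for the
# kitchen: combine the ambient-light reading with the detected
# activity. Threshold and group names are illustrative assumptions.

DARK_LUX = 120   # below this, artificial light is assumed necessary

def kitchen_lights(ambient_lux, activity):
    """Return the set of light groups to switch on."""
    if ambient_lux >= DARK_LUX:
        return set()                          # daylight is enough
    if activity == "cooking":
        return {"worktop", "stove", "ceiling"}
    if activity == "coffee":
        return {"worktop"}                    # a small light will do
    return {"ceiling"}

print(kitchen_lights(40, "coffee"))   # {'worktop'}
print(kitchen_lights(300, "cooking")) # set()
```

In the real system, "activity" would itself be inferred from the PIR, iBeacon and habit data rather than passed in directly.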


Welcome home

Getting home in winter and finding your home at a comfortable temperature is something most home automation systems or thermostats like Nest provide. But wouldn't it be nice if it helped you with your most likely activities too? For example, when you get home in the evening you receive a notification that lets you turn on the home cinema system. Or when you get home late and it's dark, lights guide you to the bathroom and bedroom, and turn off when you lie down. The detection of coming home will be a combination of several iBeacons around the home, sensed by an iPhone, together with some PIRs. Based on the time and where in the house the sensors are triggered, the wanted actions will happen.


Home Cinema

Home Cinema involves many parts: TV, receiver, speakers, and for playing media an Apple TV and a Blu-ray player. Just turning them all on or off is already a hassle. Using some Z-Wave switches and a Raspberry Pi attached via HDMI, most of this can be turned into a single action. In combination with Plex and some APIs, we can make selecting your favorite TV series easy as well. The Raspberry will communicate with the TV and receiver through CEC; through MQTT it will receive commands from the core of the system and report back when anything happens. A possible addition is a DIY Ambilight: capturing the images from HDMI on a Raspberry Pi, which drives an Arduino, which in turn drives the LEDs mounted on the back of the TV.


Mobile & On-The-Wall-UI

A home automation system can do quite a few things automatically, but it can't read your mind yet, so there needs to be an interface to tell it what you want. For this, an iPad will be installed in the living room for easy access to the most important actions and data, including controlling the home cinema. When walking around, these actions are also available on the iPhones of me and my girlfriend. In the kitchen, a Raspberry with the 7" touchscreen will give similar controls, this time showing data useful while, for example, preparing breakfast, such as the weather forecast and maybe even delayed trains. Two possibilities for further expansion are a dedicated iPhone as a home cinema remote and an even more natural way of communicating: speech. The latter involves a microphone connected to either a Raspberry or the wall-mounted iPad, which always listens for a trigger phrase (much like Apple's "Hey Siri") and then executes actions. This might however be a bridge too far for now.

Mockup of the iPhone app


Wake-up light

As an alarm clock I use the Sleep Cycle app: it measures the quality of your sleep and wakes you at the right moment in your sleep cycle. During the 30-minute window in which the app may sound the alarm, it would be great if the bedroom gradually adjusted to daylight. In summer, slowly opening the curtains (using a MOVE) would be enough; in winter it should also turn on the lights. This should make waking up a lot easier.

Unfortunately the app doesn't have any APIs yet (they might add a webhook, after I suggested it), but I want to be prepared before they do, or find an alternative that does. Or maybe I can simulate a Philips Hue bridge, which the app already supports?
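The gradual adjustment over the alarm window can be sketched as a simple ramp (a linear curve is my assumption; a real wake-up light might use a different brightness profile):

```python
# Sketch: linear brightness ramp across the 30-minute alarm window.
def wakeup_brightness(elapsed_min, window_min=30, max_level=100):
    """Brightness level (0..max_level) after elapsed_min minutes."""
    if elapsed_min <= 0:
        return 0
    if elapsed_min >= window_min:
        return max_level
    return round(max_level * elapsed_min / window_min)
```

The same ramp could drive both the dimmable lights and, discretized into a few steps, the curtain position.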


Manual override

Nothing is more annoying than lights turning on or off when you don't want them to, so it's essential to provide a manual override for each of them as well. I will use traditional wall switches for this, integrated into the system through Z-Wave.


Energy monitoring & saving

One of the promises of home automation is saving energy. Since you add quite a bit of always-on electronics, this promise can be overrated. However, most switches can help you monitor the usage of individual appliances, and I use a YouLess device to measure overall usage. With this information you can actually save energy: knowing is half the battle. Out of the box, the current power usage and totals are available, but you can only really act on them once you have usage over time and trends. That's why I will work on aggregating the data and making it available for analysis.
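In essence, the aggregation step could look like this sketch, which rolls (hour, watts) samples up into hourly averages (the sample format is an assumption for the example; the YouLess exposes readings in its own format):

```python
from collections import defaultdict

# Sketch: aggregate (hour, watts) power samples into average watts per hour,
# the kind of time series you need to spot trends in usage.
def hourly_averages(samples):
    """samples: iterable of (hour, watts) pairs -> {hour: mean watts}."""
    sums = defaultdict(lambda: [0.0, 0])  # hour -> [sum, count]
    for hour, watts in samples:
        sums[hour][0] += watts
        sums[hour][1] += 1
    return {h: s / n for h, (s, n) in sums.items()}
```

Persisting these aggregates to a database is what makes week-over-week comparisons and trend analysis possible.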



The end product of this challenge will be an improved home automation system in my home. The core of the system will be a Raspberry Pi 3 running a WildFly server. The Java EE application running on it will manage the logic of the house and bridge the various parts together. The same Raspberry Pi will run the Razberry/Z-Way software to communicate with Z-Wave devices. Another Raspberry Pi will be part of the Home Cinema setup, hooked up through HDMI to the AV receiver, and will control, among others, the receiver and TV via CEC. Other apps in the system include the iOS apps for iPhone and iPad, which act as remotes.

An overview of the home with all the devices can be seen below:

Thuis plan img3.jpg

Map of the home with devices. Numbers refer to the hardware list below.


The most important aspect of the project will be the communication between the different nodes. The nodes communicate with each other using MQTT, a very lightweight communication protocol providing a publish/subscribe setup. It's very suitable for this project, as it can send information around quickly and is platform independent. To be developed and published as open source on my GitHub are at least:

  • MQTT integrations for, among others, Z-Way and Plex
  • A CEC Java library (with support for forwarding events to an MQTT topic)
  • MQTT based UI components for iOS
  • (optional) MQTT monitor which records events to a database for analysis

All these will help others who are working on a DIY home automation system, especially when they use MQTT for their communication.
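To give an idea of what MQTT-based communication between the nodes could look like, here is a sketch of a possible topic convention (this scheme is my assumption for illustration, not the project's published layout):

```python
# Sketch of an assumed topic convention: thuis/<node>/<device>/<property>,
# e.g. "thuis/kitchen/lamp/state". A shared scheme lets every node
# subscribe to exactly the slice of the house it cares about.
def topic(node, device, prop):
    """Build a topic string for one device property."""
    return f"thuis/{node}/{device}/{prop}"

def parse_topic(t):
    """Split a topic back into (node, device, property)."""
    prefix, node, device, prop = t.split("/")
    assert prefix == "thuis", f"unexpected topic prefix in {t!r}"
    return node, device, prop
```

With a convention like this, a subscriber could use an MQTT wildcard such as `thuis/kitchen/#` to receive everything happening in one room.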





  • Raspberry Pi 3 with Razberry – Core server based on Java EE and Z-Way (1)
  • Raspberry Pi 1B – Connection with HDMI CEC (3)
  • Raspberry Pi 2B + 7" touch screen – Wall control in the kitchen
  • ReadyNAS – Plex Media Server (2)
  • iPhones – Serve as personal remote
  • iPad – On-The-Wall-UI
  • Z-Wave switches – Switching power sockets to turn devices on/off, most measure power usage as well
  • Z-Wave sensors – Multi sensors (motion, light, temperature)
  • Z-Wave radiator valves – Controlling the temperature
  • Estimote beacons – iBeacons to detect presence
  • Home Cinema setup – among others a Sony TV, Denon receiver and Apple TV (4, 5)
  • YouLess – Measures overall power usage
  • MOVE – Motorizes the curtains (6)
  • Optional: Arduino based sensors
  • Optional: Raspberry Pi/Arduino based Ambilight



  • Raspbian – OS for the Raspberry Pis
  • WildFly – Application server for Java EE
  • Mosquitto – MQTT broker
  • Z-Way – Software accompanying the Razberry
  • Plex – Media Server and Player





While testing the Z-Wave switches and sensors, I already noticed that signals get lost quite often. It works most of the time, but occasionally, for example, one of the lights doesn't turn off automatically. I will have to find a solution for this, perhaps by adding a verification step after each action and retrying when it didn't work.
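Such a verify-and-retry step could, in outline, look like this sketch (the send and read callbacks are placeholders for the actual Z-Way calls):

```python
import time

# Sketch: send a Z-Wave command, verify the device reports the desired
# state, and retry a few times when the signal appears to have been lost.
def switch_with_verify(send, read_state, desired, retries=3, delay=0.0):
    """Return True once read_state() confirms the desired state."""
    for _ in range(retries):
        send(desired)           # placeholder for the Z-Way set call
        if read_state() == desired:  # placeholder for the Z-Way get call
            return True
        time.sleep(delay)       # give the mesh a moment before retrying
    return False
```

In practice the delay would be non-zero, and a final `False` would be logged or surfaced in the UI so a lost command never fails silently.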

A similar issue exists with CEC, as different brands use their own versions of the standard, which are not always compatible or documented.



In this proposal I'm describing quite a few use cases, and I have many more in mind. Time will be a scarce resource if I want to implement them all, so I may have to simplify some of the use cases at first.


About me

I'm a Dutch developer who likes to tinker with hardware. I mostly develop software in Java, Swift and some Processing (Arduino). For about six years I taught computer science at a high school, during which I also developed quite a lot of course material about Arduino (published by a Dutch educational publisher) and App Inventor (in Dutch, available on GitHub in LaTeX). Currently I'm a co-founder of the startups ZEEF and LinkPizza.


IoT my belongings - Intro

Posted by crimier May 21, 2016

Hello, fellow challengers!


I've decided to join this challenge. I've been designing projects around Raspberry Pi computers for about four years now, and this contest aligns well with what I've been planning to implement this summer as a hobby project. Let me introduce my goals.


I depend on many of the things around me. I expect my belongings and tools to be there when I need them, and I expect my workplace (which, incidentally, is the room I live in) to be in working condition and as adaptable to my needs as possible. I expect my bike not to be stolen while I'm away from it; I'd also like it to record my trips and track its location. And I'd like a set of sensors I could set up at a temporary workplace if I happen to work somewhere else. Is that IoT? Maybe not fully. Is it going to be fun? Hell yeah!


I've got a small room automation system based on a Raspberry Pi, OpenHAB and some RS485-connected Arduinos (I'll introduce you to RS485 later; it's wonderful). However, it's a hack, and a hack that has worked unreliably lately, which is exactly what you don't want in a home automation system. Therefore, I'm dismantling it and designing a new system, again using OpenHAB, but adding many more sensors and capabilities. I'll also transfer my Pi 2 portable desktop install to a Pi 3, adding more capabilities and integration with the new home automation setup, and improve portability by designing a custom battery pack. I hope my documentation of all this helps others. First things first - let me introduce the interface I'll be using!




It's pyLCI, an external interface for the Raspberry Pi and other Linux boards that I've designed and implemented. It's open source and can be integrated into many Raspberry Pi projects to dramatically improve their usability and configurability. It uses character LCDs (the simplest ones you can get, HD44780-compatibles; screens from 16x2 up are supported) and buttons to create menus you can navigate to configure and control your system and running applications. It's very accessible and tackles many problems encountered when working with Raspberry Pi computers; as a nice side effect, it greatly reduces the need to use the command line, web interfaces or other devices to configure your Raspberry Pi =)

pyLCI supports many ways to connect a screen and buttons to your Raspberry Pi, including, but not limited to, PiFace Control and Display shields. You can find more information about it on Hackaday, check out the documentation and the code, and join the discussion on the Element14 forums.
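To give a flavor of the menu idea, here is a generic sketch of list-based navigation on a 16x2 character screen. Note that this is NOT pyLCI's actual API, just an illustration of the concept:

```python
# Illustrative sketch (not pyLCI's real API): a list of (label, callback)
# entries rendered one page at a time on a small character display,
# with a cursor moved by buttons.
class Menu:
    def __init__(self, entries, rows=2, cols=16):
        self.entries, self.rows, self.cols = entries, rows, cols
        self.pos = 0  # index of the highlighted entry

    def render(self):
        """Return the visible lines for the current page, cursor marked '>'."""
        top = (self.pos // self.rows) * self.rows
        lines = []
        for i in range(top, min(top + self.rows, len(self.entries))):
            marker = ">" if i == self.pos else " "
            lines.append((marker + self.entries[i][0])[: self.cols])
        return lines

    def down(self):
        self.pos = min(self.pos + 1, len(self.entries) - 1)

    def select(self):
        """Run the callback of the highlighted entry."""
        return self.entries[self.pos][1]()
```

The appeal of this model is that any application only has to supply labels and callbacks; the screen and button handling stay generic.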


This is a project I'll start in about 2-3 weeks, as right now I'm working on pyLCI and the Pip-Boy.  Until then, expect about one update a week outlining my plans and describing the hardware I'm planning to use.

Wearable assistant manifesto


  • I love Fallout game series. I wish I was playing Fallout 4 now, but there's hacking to do. Eh. Maybe I'll get to it later.
  • I love the Pip-Boy idea. It manages your resources, ammunition, tasks to be completed, plays radio broadcasts, has something like GPS inside and generally is a nice thing. Of course, the idea of a wearable personal assistant predates the Pip-Boy idea - but it's so far the most popular comparison to my project.
  • Thus, my dream was born. I always need something to manage my task lists so I stop forgetting about my assignments. GPS would be very nice given that I often need to get to places I've never visited, even though our city is comparatively small. I need something to play back music because I just love listening to music while I'm doing things. I also would like something to help me with my hacking. On top of that, if it tracks my sleep schedule, I'm golden. So far, nothing you couldn't do with a Raspberry Pi =)
  • I love Raspberry Pi boards. The community is huge, the main distros are well polished, and even though there are some quirks, it's an excellent base for my projects. Moreover, it makes them much more repeatable! So, I'll start with one: a Raspberry Pi B+ - I don't need much processing power, and the improved power management and 4 USB ports are a nice touch.
  • I'm developing #pyLCI and it's one more thing for me to love. Connect a character screen and some buttons, and you've got an interface that incidentally is quite good for Linux-based wearables capable of running Python. So, I'm using pyLCI with a character screen and a wearable glove-like input device, which is currently in the concept stage. The screen won't show me any graphics or GPS maps, though, but I think I can solve that with a web interface on an Android phone or something. Later, I can attach an HDMI/SPI screen and deal with the graphics part.
  • I don't like modern phones. They're not hackable, and that very much sucks for me. The main reason: they're mostly not open source. Not only is that a privacy/security concern, it also makes it hard for me to change the stock apps on phones to make them act however I want. Moreover, the hardware isn't hackable either. Want to add an FM/IR transmitter to your phone? "Screw you, buy the newer one which may or may not have good support of this functionality, it might suck in many different ways and we'll stop releasing updates anyway." Or "Yeah, buy this FM transmitter, it might not work with your next phone though, we ain't giving you an API, just a limited app we made in a couple of hours". Battery life sucks, and extra-capacity batteries aren't that great an experience either. So, if I can put together a wearable personal assistant, it's going to be my phone, too.
  • I don't mind if this bracelet is kind of bulky. I'll work on making it slimmer, and it *is going to* become smaller, but for now a working bracelet is much better than a slim one. Besides, have you ever noticed how huge Pip-Boys are? Initially, I'll add at least four 18650 cells to it so that battery life doesn't suck. Optimizing power consumption is complicated, and not yet something I'm skilled at, so I'll just fix it by adding more battery capacity than a smartphone could spend in a couple of days.
  • I like the idea of lifelogging. I often forget details of things I just talked about with somebody, and wish I could replay our chat just to recall them. As I'm all about documenting my hacking, I take plenty of photos - I even have a separate phone for that - but sometimes it feels like it just doesn't cut it. I also like "lifelogging for safety": the idea that a photo of a person following you on the street gets posted to your Dropbox as soon as they start looking threatening is quite comforting.
  • I hate that apps can access every aspect of my life because they have access to my phone. For example, Android doesn't require apps to be open source (nor should it, of course), so you never know what an app could do - you might never find out it sends your call or message history to third parties, especially with the long permission lists most apps have. I also know there's a ton of open-source software for Linux, where the only way my data gets sent somewhere else is through a third-party attacker targeting my device specifically. That's much, much better, and about as much security for my money as I can get.
  • This whole "bracelet" idea looks like one with big potential to me. Not just in an "I'd wear that" way - it's a freaking open-source portable computer that you control and can modify. Your imagination, your skills and the laws of physics are just about the only limits on making it do what you want. Besides, nothing says "creative engineer" like a huge electronic device on your arm.


Last but not least, I have the skills to hack this thing together. So, with all the reasons listed, why wouldn't I?