
The day I had been waiting for is finally here! My challenge kit has arrived. I was feeling a bit nostalgic and thinking about all of those claymation TV shows of my youth, so I decided to make my own. Without further ado:

 

 

Sidebar: The Time Lapse Video

I tried to produce the video using only the Raspberry Pi, but that turned out to be way too much effort (and distraction). In the end I decided to use the tools I had available or was most familiar with. I shot a number of still images with my Canon G10. The images were humongous at 4416x3312 pixels, a whopping 5 MB apiece. I copied the images to a shared directory on my Linux/Windows 7 machine. Using ImageMagick under Linux, I resized them to a manageable 800x600, which brought the average file size down to 115 KB. To stitch the stills into a time lapse video, I used ffmpeg, again under Linux.

 

# Resize the stills with ImageMagick
sudo apt-get install imagemagick
cd ~/Unwrap
mkdir 800x600
ls -1 *.JPG | awk '{print "convert -resize 800x " $1 " 800x600/" $1}' | sh -v

# Stitch the resized frames into a time lapse with ffmpeg
cd 800x600
sudo apt-get install ffmpeg
ffmpeg -r 3 -pattern_type glob -i 'IMG_*.JPG' -c:v libx264 -vf "fps=25,format=yuv420p" out.mp4

 

Once I had the video with the time effect that I wanted, I switched over to my Windows 7 machine. I used Applian Replay Music to record the audio track from YouTube. I then used Microsoft Movie Maker to add the title and credit frames and, ultimately, the soundtrack. I know that Movie Maker can also make a video from a bunch of stills, but I was having a tough time getting the timing even across all of the frames. Changes to the video proved problematic as well because it seemed as though frame timing would change or revert to the default settings. Alas, I digress.

 

In the end, I am pleased with my cheesy little video and hope that you find some enjoyment from it as well. Don't worry, I (probably) won't use this presentation format after this blog post!

 

The "Central" In Hangar Central

The Challengers Kit came with so many fun components that it is my goal to use all of them during the course of this project. The figurative central hub, Hangar Central, will be built using:

(The original table flagged each component as Used in Project, Used in This Post, or Included in the Challengers Kit.)

  • Raspberry Pi 3
  • Raspberry Pi B+
  • NEW! EnOcean Sensor Switch Design Kit
  • Pi Noir Camera 2
  • 8 MP Camera
  • Raspberry Pi LCD 7" Touchscreen
  • Sense Hat
  • EnOcean Pi*
  • EnOcean Sensor Kit*
  • Pi Rack
  • PiFace Digital 2
  • WiFi Dongle
  • SD Card NOOBS
  • 2.5 AMP International Raspberry Pi Power Supply

 

Setup Raspberry Pi 3 and LCD Touchscreen

And keeping with this week's "claymation" theme...

 

A couple of notes regarding wiring up the units:

  • The power supply can apparently be hooked up to either the RPi or the display as the jumpers between 5V and GND provide a common path between the two.
  • Before connecting the ribbon cable, gently pull or slide out the white tab on the RPi. This is a tension release which makes inserting the cable a stress-free affair. After the cable is in, gently press the cable frame back in and the cable will be held snugly in place.

 

I was hoping to put together the central unit using only the components included in the kit. I might have been able to do it except that Raspbian doesn't have an on-screen keyboard, so although I was able to get the machine up and running without any external parts, I still had to connect a keyboard to install the on-screen matchbox-keyboard.

 

# Install the virtual keyboard
sudo apt-get install matchbox-keyboard

# After installing the keyboard, reboot. 
# It should appear on the desktop in Menu->Accessories->Keyboard

 

After installing matchbox-keyboard, I still did not have a menu option for it. If that is the case for you, simply go into Menu->Preferences->Main Menu Editor->Applications->Accessories and enable the keyboard by clicking its "Show" option.

Enable keyboard option

Status Update

At this point I have the Raspberry Pi 3 running Raspbian with a minimal display. The next step will be to get the central hub configured with a user interface that we can use for testing. Stay tuned...

This post covers the "final" versions of the code and hardware used for the nodes in the living room and the parents' and kids' bedrooms. I say "final" as I expect some bugs will still turn up; that is the nice thing about prototyping. In the project status update, you can see that in this post I'm wrapping up previous features and putting together the detailed workflow of the nodes.

Project Status

The Living Room node - Process flow

Let's start with the Living Room node, as the other two are a subset of this one. As a summary of posts 2 and 3, the hardware is an Arduino Nano, a PIR movement sensor, an RF 433 MHz transmitter, an IR receiver, a DS18B20 temperature sensor, a DHT11 humidity sensor, a photoresistor and an RF 2.4 GHz module for the comms. Additionally, I have added a button and a LED, as you never know when you might need them; in the posts you can find links to all of the components and some reasons for choosing them. For the software, the specific libraries I am using are RF24, RF24Network, DallasTemperature, DHT, IRremote and one I created myself by wrapping some code I found on the Internet into a library: DIO_lib. You may check posts 2, 3 and 4 for some explanations of the libraries and external links.

 

An additional homemade library I'm using is Defs_y_Clases.h. Here I define some commands for the RF24, and I have also created a timer object to control elapsed time. This object takes care of overflows as well: the `unsigned long` millisecond counter overflows after approximately 50 days, and since the project will run in the long term, this needs handling. The overflow control is still under testing, though... Attached to the post.
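The usual trick for surviving the counter wrap-around is to rely on unsigned subtraction, which wraps modulo 2^32 and so yields the correct elapsed time even across the overflow. Here is a minimal sketch of the idea; the `Timer` class and its names are illustrative, not the actual Defs_y_Clases.h code (which keeps the same spirit with `reiniciar_t0()` and `is_Time()`):

```cpp
#include <cassert>
#include <cstdint>

// Overflow-safe interval timer (illustrative stand-in for Temporizador).
// Because (now - t0) is computed in unsigned 32-bit arithmetic, the result
// is the true elapsed time even when millis() wraps past ~49.7 days.
class Timer {
public:
  explicit Timer(uint32_t interval_ms) : interval(interval_ms), t0(0) {}

  void restart(uint32_t now) { t0 = now; }   // like reiniciar_t0()

  bool isTime(uint32_t now) const {          // like is_Time()
    return (uint32_t)(now - t0) >= interval; // wrap-safe elapsed check
  }

private:
  uint32_t interval;
  uint32_t t0;
};
```

On the Arduino, `now` would simply be `millis()`; the point is that no explicit overflow special case is needed as long as the subtraction stays unsigned.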

 

As you can see in the diagram, the setup and loop functions are quite straightforward.

Process flow

The living room node has a number of sensors that are polled synchronously in the loop function: the program decides when to read the data. Additionally, it has the IR receiver, which is read asynchronously, meaning that the IR library captures the IR data when it arrives and the loop function then reads and processes it synchronously. This sync/async approach is the same one used for the RF 2.4 GHz comms: the library reads the data when it arrives and the loop function processes it in sync.
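The sync/async split can be sketched roughly as follows. The names are hypothetical; in the real sketch, the IRremote and RF24 libraries do the latching in their interrupt handlers, and loop() drains the latched data at its own pace:

```cpp
#include <cassert>

// Illustrative model of the pattern: an interrupt-driven source latches
// incoming data into a flag + buffer ("async"), and the main loop polls
// the flag and processes the data synchronously.
struct IrReceiver {
  volatile bool ready = false;   // set asynchronously (an ISR in real code)
  volatile unsigned value = 0;

  void onIrPulse(unsigned decoded) {  // stand-in for the library's ISR
    value = decoded;
    ready = true;
  }

  bool decode(unsigned &out) {        // synchronous read from loop()
    if (!ready) return false;
    out = value;
    ready = false;                    // rearm, like irrecv.resume()
    return true;
  }
};
```

Each pass through loop() calls `decode()`; most of the time it returns false immediately, so the polling costs almost nothing.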

 

Arduino code of the living room node:

 

#include <RF24Network.h>
#include <RF24.h>
#include <OneWire.h>
#include <DallasTemperature.h>
#include <SPI.h>
#include <DHT.h>

#include <Defs_y_Clases.h>
#include <DIO_lib.h>
#include <IRremote.h>
/*
 * Code for the Node at the living room
 * It includes:
 *  IR receiver - reads IR data from the IR remote control
 *  Temperature sensor
 *  Humidity sensor
 *  Luminosity sensor
 *  RF card at 2.4GHz
 * 
 */

const bool debug = false;  //If true, enables debug dump via Serial port

// Pin for the LED
int ledPin = 6;
// Pin where the IR receiver is connected to
int RECV_PIN = 9; 
// Pin where the switch is connected
#define PIN_SWITCH_LIVING  5
//Pin for the PIR
#define PIN_PIR_LIVING     2


// Variables for the IR Receiver
IRrecv irrecv(RECV_PIN);
decode_results results;
#define BUTTON_SONY_RED     0x52E9
#define BUTTON_SONY_GREEN   0x32E9
#define BUTTON_SONY_YELLOW  0x72E9
#define BUTTON_SONY_BLUE    0x12E9

// Data wire is plugged into port 3 on the Arduino
#define ONE_WIRE_BUS 3 

// The DHT data line is connected to pin 4 on the Arduino
#define DHTPIN 4
#define DHTTYPE DHT11   // DHT 11. Change to DHT22 (AM2302) or DHT21 (AM2301) if using those sensors
DHT dht(DHTPIN, DHTTYPE);

// Setup a oneWire instance to communicate with any OneWire devices
OneWire oneWire(ONE_WIRE_BUS);
DallasTemperature sensors(&oneWire); // Pass our oneWire reference to Dallas Temperature. 
//NOTE: in line below I need to modify the address if I use a different temp sensor
DeviceAddress insideThermometer = {0x28, 0xFF, 0x1B, 0xB8, 0x54, 0x15, 0x03, 0xDD}; //Temp address of the sensor: 28FF1BB8541503DD // arrays to hold device address

// Photocell variable
byte photocellPin = A3;

// Radio with CE & CSN connected to pins 7 & 8
RF24 radio(7, 8);
RF24Network network(radio);

// Constants that identify this node and the node to send data to
const uint16_t this_node = 3;
const uint16_t parent_node = 0;

// Time between updates to the node (in ms)
unsigned long interval = 60000;  // every x/1000 sec
// The below objects are type Temporizador (timer). They replace the need of using the delay()
// This allows the script to run and do other tasks while waiting for the time to pass
Temporizador tempo_tx(interval, true);                    //timer for the RF transmissions, every 60s it will force sending an update to the master node
Temporizador tempo_tx_maxdelay(900000, false);            //Maximum delay I'd allow without having sent a message to the master node, 15 mins 
Temporizador tempo_watchdog(3600000, false);              //Watchdog, if no update in 1h, activates watchdog
unsigned long sec2wait4tempo = 4000;                      //ms to wait for the timers, moved to a variable to check for problems with constant allocation
Temporizador tempo_RED(sec2wait4tempo, false);            //Timer for the red button - if the red button is pushed twice within the 4000ms, it does something (see later)
Temporizador tempo_GREEN(sec2wait4tempo, false);          //Timer for the green button - same double-push behaviour as the red one
Temporizador tempo_YELLOW(sec2wait4tempo, false);         //Timer for the yellow button - same double-push behaviour as the red one

// Variables for the sensors
float h, t; //humidity and temperature
int l; //luminosity
bool is_motionPIR = false;    //Stores current status of the PIR motion sensor
bool is_motionPIR_prev=false; //Stores prev status of the PIR motion sensor, 
                              //if different than current, then forces to send a message:new movement detected

//Variables for the DIO - the RF-controlled plugs
const int pinDIO = 10;        // Pin where the RF 433Mhz transmitter is located
DIO_lib DIO1(ENCHUFE1, pinDIO); //note-translation, enchufe = plug
DIO_lib DIO2(ENCHUFE2, pinDIO);
DIO_lib DIO3(ENCHUFE3, pinDIO);
bool light1_status = true;
bool light2_status = true;
bool light3_status = true;
bool issecondtime_RED = false;    //States if the button is already pushed for the 1st time
bool issecondtime_GREEN = false;
bool issecondtime_YELLOW = false;

int status_switch = 0;        //States if the switch on the board was pushed
bool force_send_msg = false;  //Forces the transmission of the RF 2.4Ghz message to the node

struct message_1 { // Structure of our message to send
  int16_t temperature;  //Temperature is sent as int16: I multiply by 100 the float value of the sensor and send it
                        //this way I avoid transmitting floats over RF
  unsigned char humidity;
  unsigned char light;
  unsigned char motion;
  unsigned char dooropen;
};
message_1 message_tx;

struct message_action { // Structure of our message to receive
  unsigned char cmd;
  unsigned char info;
};
message_action message_rx;

RF24NetworkHeader header(parent_node); // The network header initialized for this node


void setup(void)
{
  if (debug) Serial.begin(9600);

  // Initialize all radio related modules
  SPI.begin();
  radio.begin();
  delay(50);
  radio.setPALevel(RF24_PA_MAX);  //This can lead to issues as per https://arduino-info.wikispaces.com/Nrf24L01-2.4GHz-HowTo
                                  //use this radio.setPALevel(RF24_PA_LOW); if there are issues
  delay(50);
  radio.setChannel(108);          //Set channel over the WIFI channels
  delay(50);
  radio.setDataRate(RF24_250KBPS); //Decrease speed and improve range. Other values: RF24_1MBPS y RF24_2MBPS
  delay(50);
  network.begin(90, this_node);

  // Initialize the DHT library
  dht.begin();

  //Get oneWire devices on bus
  sensors.begin();
  sensors.setResolution(insideThermometer, 12);

  // Prepare IR
  irrecv.enableIRIn(); // Start the receiver

  // Configure LED pin
  pinMode(ledPin, OUTPUT);

  //Config pin for the switch
  pinMode(PIN_SWITCH_LIVING, INPUT_PULLUP);

  //Config PIR
  pinMode(PIN_PIR_LIVING, INPUT);

  digitalWrite(ledPin, LOW);
  if (debug) {
    Serial.print("Starting Node Living room with IR. Node number: ");
    Serial.println(this_node);
    Serial.println("Remember to change termometer address if copy/pasting the script");
  }

  status_switch = digitalRead(PIN_SWITCH_LIVING); //read the switch status
}

void loop() {
  // Update network data
  network.update();

  //Receive RF Data 
  while (network.available()) {
    receive_data();
  }

  //Receive IR
  IR_read_exec();

  if (tempo_tx_maxdelay.is_Time()) {
    //If we enter here, means that tmax has already elapsed and we are to force data transmission
    force_send_msg = true;
  }

  if (switch_just_pushed()) {
    //If the switch is just pushed, we force sending the RF data
    force_send_msg = true;
  }

  if (change_inPIRsensor()){
    //If there is change in PIR, forces the update to the node
    force_send_msg = true;
  }
  //Get and send the sensor data. This is done only if the timer is over or we force RF data
  if (tempo_tx.is_Time() || force_send_msg) {
    tempo_tx_maxdelay.reiniciar_t0();  //Reinitialize the timer
    read_sensors_data();
    send_sensors_data();
    force_send_msg = false;
  }
  check_watchdog();
}

void receive_data(){
  RF24NetworkHeader header;
  message_action message_rx;
  network.peek(header);
  if (header.type == '2') {
    tempo_watchdog.reiniciar_t0();   //message received, it means there is communication
                                     // with the master, reinitiates the watchdog timer
    network.read(header, &message_rx, sizeof(message_rx));
    if (debug) {
      Serial.print("Data received from node ");
      Serial.println(header.from_node);
    }

    unsigned char cmd = message_rx.cmd; 
    unsigned char info = message_rx.info;
    if (debug) {
      Serial.print("Command: "); Serial.println(cmd);
    }     
    switch (cmd) {
      case LUZ_ON:    //Note translation: LUZ = light
        turn_on_light(info);
        modify_light_status(info, LUZ_ON);
        send_cmd_per_RF(ACK, cmd);
        break;
      case LUZ_OFF:
        turn_off_light(info);
        modify_light_status(info, LUZ_OFF);
        send_cmd_per_RF(ACK, cmd);
        break;
      case MODIFICAR_T_SLEEP: //Note translation: modificar = modify
        interval = message_rx.info * 1000;
        send_cmd_per_RF(ACK, cmd);
        break;
      case FORCE_SEND_MSG:
        force_send_msg = true; 
        send_cmd_per_RF(ACK, cmd);
        break;
      case TODAS_LUCES_ON:  //Note translation: Todas luces = all of the lights
        int i;
        for (i=1;i<=3; i++) {
          turn_on_light(i);
          modify_light_status(info, LUZ_ON);
        }
        send_cmd_per_RF(ACK, cmd);
        break;
      case TODAS_LUCES_OFF:
        int j;
        for (j=1;j<=3; j++) {
          turn_off_light(j);
          modify_light_status(info, LUZ_OFF);
        }
        send_cmd_per_RF(ACK, cmd);
        break;
      default:
        //Do nothing
        break;
    }

  } else {
    // This is not a type we recognize
    network.read(header, &message_rx, sizeof(message_rx));
    if (debug) {
      Serial.print("Unknown message received from node ");
      Serial.println(header.from_node);
    }
  }
}

void read_sensors_data() {
  sensors.requestTemperatures();
  h = dht.readHumidity();
  t = sensors.getTempC(insideThermometer);

  //Sensor for PIR is read already when loop calls function change_inPIRsensor
  //no need to re-read it here
  
  // Read photocell and constrains data within a range and then maps it to range 0% - 100%
  l = analogRead(photocellPin);
  l = constrain(l, 70, 850);
  l = map(l, 70, 850, 100, 0);

}

void send_sensors_data(){
  header.type = '1'; // Headers will always be type 1 for this node

  // Only send values if any of them are different enough from the last time we sent:
  //  0.3 degree temp difference, 1% humidity or 5% light difference, or a forced send
  float t1 = t*100;   //multiply by 100 to get rid of the float and transmit only an int16
  int16_t t_tx = (int16_t) t1;  
  if (abs(t_tx - message_tx.temperature) > 30 || //sends data if there are significant changes in the sensors measurements
      abs(h - message_tx.humidity) > 1.0 || 
      abs(l - message_tx.light) > 5.0 ||
      force_send_msg) {
      // Construct the message we'll send
      message_tx = (message_1){ t_tx, (unsigned char) h, (unsigned char)l, is_motionPIR, 0 }; //original code: {f, h, p, m, d}

    // Writing the message to the network means sending it
    if (network.write(header, &message_tx, sizeof(message_tx))) {
      tempo_watchdog.reiniciar_t0();   //reinitiates the watchdog timer
      if (debug) Serial.print("Message sent\n"); 
    } else {
      if (debug) Serial.print("Could not send message\n"); 
    }

    if (debug) {
      Serial.print("Temp_tx: "); Serial.println((message_tx.temperature));
      Serial.print("Temp_detected: "); Serial.println((t)); 
      Serial.print("Hum_tx: "); Serial.println(message_tx.humidity);
      Serial.print("Hum_detected: "); Serial.println((h)); 
      Serial.print("Luminosity: "); Serial.println(message_tx.light);
      Serial.print("Motion PIR: "); Serial.println(message_tx.motion);
    }
  }
}

void turn_on_light(int luz){
  byte i;

  for (i=0;i<3;i++) {
    //repeats 3 times sending the RF signal
    noInterrupts(); //need to stop interrupts, otherwise the signal is not understood by the RF plugs
    switch (luz) {
      case LUZ_1:
        DIO1.On();
        break;
      case LUZ_2:
        DIO2.On();
        break; 
      case LUZ_3:
        DIO3.On();
        break;
    }
    interrupts();
    delay(15+i);
  }
  if (debug) { 
    Serial.print("Light turned on: "); Serial.println(luz);
  }
}

void turn_off_light (int luz){
  noInterrupts();
  switch (luz) {
    case LUZ_1:
      DIO1.Off();
      break;
    case LUZ_2:
      DIO2.Off();
      break; 
    case LUZ_3:
      DIO3.Off();
      break;
  }
  interrupts();
  if (debug) {
    Serial.print("Light turned off: "); Serial.println(luz);
  }
}

void modify_light_status(int luz, int valor) {
  switch (luz) {
    case LUZ_1:
      if (valor==LUZ_ON) light1_status=true;
      else light1_status=false;
      break;
    case LUZ_2:
      if (valor==LUZ_ON) light2_status=true;
      else light2_status=false;
      break; 
    case LUZ_3:
      if (valor==LUZ_ON) light3_status=true;
      else light3_status=false;
      break;
  }
}

void IR_read_exec() {
  unsigned char cmd=NO_ACTION;
  unsigned char info=NO_ACTION;
  if (irrecv.decode(&results)) {
    if (debug) Serial.println(results.value, HEX);
    if (results.value==BUTTON_SONY_RED) {
      if (debug) Serial.println("Detected RED. ");
      if (issecondtime_RED) {
        if (debug) Serial.println("issecondtime_red = true");
        //If the red button is pressed and it was pressed before within the timing of the timer (4000ms) 
        //then we execute the action: either on or off
        if (!tempo_RED.is_Time()) {
          if (debug) Serial.println("Red is within Time");
          issecondtime_RED = false;
          if (light3_status) {
            light3_status = false;
            turn_off_light(LUZ_3);
            cmd = LUZ_OFF;
            info = LUZ_3;
          } else {
            light3_status = true;
            turn_on_light(LUZ_3);
            cmd = LUZ_ON;
            info = LUZ_3;
          }
        } else {
          //If red is received but after the time, we consider it as the first time it is pressed
          //and reinitalize the red timer
          tempo_RED.reiniciar_t0();
          if (debug) Serial.println("Red is out of Time");
        }
      } else {
        //If red is received for the 1st time, we reinitiate the timer and mark it as pressed
        issecondtime_RED = true;
        tempo_RED.reiniciar_t0();
      } 
    } else if (results.value==BUTTON_SONY_GREEN) {
      if (issecondtime_GREEN) {
        //Same as with red button
        if (!tempo_GREEN.is_Time()) {
          issecondtime_GREEN = false;
          if (light2_status) {
            light2_status = false;
            turn_off_light(LUZ_2);
            cmd = LUZ_OFF;
            info = LUZ_2;
          } else {
            light2_status = true;
            turn_on_light(LUZ_2);
            cmd = LUZ_ON;
            info = LUZ_2;
          }
        } else {
          //Same as with red button
          tempo_GREEN.reiniciar_t0();
        }
      } else {
        //Same as with red button
        issecondtime_GREEN = true;
        tempo_GREEN.reiniciar_t0();
      } 
    } else if (results.value==BUTTON_SONY_YELLOW) {
      if (issecondtime_YELLOW) {
        //Same as with red button
        if (!tempo_YELLOW.is_Time()) {
          issecondtime_YELLOW = false;
          if (light1_status) {
            light1_status = false;
            turn_off_light(LUZ_1);
            cmd = LUZ_OFF;
            info = LUZ_1;
          } else {
            light1_status = true;
            turn_on_light(LUZ_1);
            cmd = LUZ_ON;
            info = LUZ_1;
          }
        } else {
          //Same as with red button
          tempo_YELLOW.reiniciar_t0();
        }
      } else {
        //Same as with red button
        issecondtime_YELLOW = true;
        tempo_YELLOW.reiniciar_t0();
      } 
    }
    if (cmd != NO_ACTION) {
      send_cmd_per_RF(cmd, info);
    }
    irrecv.resume(); // Receive the next value
  }
}

void send_cmd_per_RF(unsigned char cmd, unsigned char info) {
  //Sends via RF the command and info
  //This is the way to update the master of the light status, ACK, etc
  header.type = '2'; 
  message_action message_x_RF;
  message_x_RF = (message_action){ cmd, info }; 

  // Writing the message to the network means sending it
  if (network.write(header, &message_x_RF, sizeof(message_x_RF))) {
    if (debug) Serial.print("Message sent RF\n"); 
    //There is communication with the master -> reinitiate the watchdog timer
    tempo_watchdog.reiniciar_t0();   //reinitiates the watchdog timer
  } else {
    if (debug) Serial.print("Could not send message RF\n"); 
  }

}

bool switch_just_pushed() {
  int v = digitalRead(PIN_SWITCH_LIVING);
  bool r=false;

  if (v==LOW) {
    status_switch=LOW;
    digitalWrite(ledPin, LOW);
  }
  else {
    if (status_switch==LOW) {
      //if low, then it is the first time we pushed the button
      //this means we have detected a raising signal 
      status_switch=HIGH;
      r=true;
      digitalWrite(ledPin, HIGH);
    }
  }
  return r;
}

bool change_inPIRsensor() {
  //True if there is a change in the PIR sensor
  bool b;
  is_motionPIR = (digitalRead(PIN_PIR_LIVING)==HIGH);  //updates current state of the PIR
  b = (is_motionPIR!=is_motionPIR_prev);            //if current != previous state -> change
  is_motionPIR_prev = is_motionPIR;
  return b;  
}

void check_watchdog() {
  //Controls the watchdog. Checks that there has been some communication with the 
  //central node in the last 1h, either receiving a message from the center
  //or sending a message successfully to the center
  if (tempo_watchdog.is_Time()) {
    //At this point in time, this does nothing special, just print error via Serial
    //Improvement: to deactivate the automation and force lights to off, etc
    if (debug) Serial.println("Alarm!!! Watchdog timer. No successfull communication with the center");    
    tempo_tx_maxdelay.reiniciar_t0();
  }
}

 

One issue to call out, call it a mistake or just the typical bug you don't know how to solve and spend a lot of time trying to fix, is that turning the lights on and off with the TV remote was sometimes working and sometimes not. I reviewed the code several times, thought there might be interference or just a weak signal, maybe a problem with the double click of the TV buttons, etc... Until I found the solution: I had to add lines 323 and 335 and also 344 and 354, basically disabling any interrupts while sending the RF 433 MHz instruction to the RF plugs. My speculation: there is a lot of IR interference and probably also some on the 2.4 GHz band, so I "guess" the Arduino was being interrupted by the IR and RF24 libraries. Even if those interrupts take only microseconds up to 1 millisecond, such a pause in the RF 433 MHz message is enough to corrupt it, and the plug would not recognize it correctly.

 

Another issue I am facing is that from time to time the Arduino does not process the IR command to turn the light on/off. My feeling this time is that I need to send the RF433 message a couple of times: basically, add a for loop from line 345 to 355. More to come when I test it!
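The planned retry is essentially the same repeat loop that turn_on_light() already uses for the on commands. A hedged sketch of the idea (the `send_with_repeats` helper is hypothetical, not code from the project):

```cpp
#include <cassert>
#include <functional>

// Repeat a timing-critical RF433 command a few times so that one corrupted
// frame doesn't make the plug miss it. In the Arduino sketch this would wrap
// DIOx.Off() the same way turn_on_light() already repeats DIOx.On() 3 times.
void send_with_repeats(const std::function<void()> &send_once, int repeats = 3) {
  for (int i = 0; i < repeats; ++i) {
    // noInterrupts();   // on the Arduino: keep the bit-banged timing intact
    send_once();
    // interrupts();
    // delay(15 + i);    // small, slightly varied gap between repeats
  }
}
```

The receiver in the plug only needs one clean copy of the frame, so repeating it is cheap insurance against the interrupt-induced corruption described above.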

A couple of pics of the node:

Livingroom-node_commented / Livingroom-node

I hope that the rest of the code, together with the info shared in previous posts, is self-explanatory; if not, just ask!

 

The Kids and parents rooms' nodes - Process flow

There are three main differences between these nodes and the living room one. The first is that they don't require the IR receiver or the RF 433 MHz components; there is no need to read the TV remote or send commands to the RF plugs, as this is only done by the living room node. The second difference is that they are based on the Arduino Pro Mini instead of the Nano. There is not a big difference in pins or performance; the only callout is that with the Pro Mini you need to solder the headers yourself. The third difference is how you power the Arduino: while the Nano has a built-in USB adapter, the Pro Mini comes without one. I still have to figure out the best way to power it, probably just by cutting up an old USB cable and connecting it to the RAW pin (which has a built-in regulator, so you can apply a voltage higher than 3.3V and it will adapt it for the Arduino). So far I am using the FT232RL USB-to-serial adapter, which by the way is required for programming it from the computer.

 

The soldering itself is not that difficult but I looked in the Internet for some "best practices" and found a couple of good ideas. Here is a short video as an example.

In my case it took me a bit longer, as I am not a professional at soldering. Besides that, nothing really special to raise. This is the process flow for the nodes:

Process flow

Here is the code I am using for the kids' room. It will be just the same as for the parents' room except for changing line 42 to reflect the bedroom's ID - you can find the rooms' IDs in post 4, PiIoT - DomPi 04: Movement detection and RF2.4Ghz comms.

 

#include <RF24Network.h>
#include <RF24.h>
#include <OneWire.h>
#include <DallasTemperature.h>
#include <SPI.h>
#include <DHT.h>

#include <Defs_y_Clases.h>

const bool debug = true;  //If true, enables debug dump via Serial port

// Pin for the LED
int ledPin = 13;
// Pin for the switch
#define PIN_SWITCH_KIDS  5
// Pin for the PIR
#define PIN_PIR_KIDS       2

// Data wire is plugged into port 3 on the Arduino
#define ONE_WIRE_BUS 3 

// The DHT data line is connected to pin 4 on the Arduino
#define DHTPIN 4
#define DHTTYPE DHT11   // DHT 11. Change to DHT22 (AM2302) or DHT21 (AM2301) if using those sensors
DHT dht(DHTPIN, DHTTYPE);

// Setup a oneWire instance to communicate with any OneWire devices
OneWire oneWire(ONE_WIRE_BUS);
DallasTemperature sensors(&oneWire); // Pass our oneWire reference to Dallas Temperature. 
//NOTE: in line below I need to modify the address if I use a different temp sensor
DeviceAddress insideThermometer = {0x28, 0xFF, 0x5B, 0x0D, 0x01, 0x16, 0x04, 0xBC}; 
      //Temp address of the sensor: 28FF5B0D011604BC // arrays to hold device address

// Photocell variable
byte photocellPin = A3;

// Radio with CE & CSN connected to pins 9 & 10
RF24 radio(9, 10);
RF24Network network(radio);

// Constants that identify this node and the node to send data to
const uint16_t this_node = 1;
const uint16_t parent_node = 0;

// Time between packets (in ms)
unsigned long interval = 60000;  // every x/1000 sec
// The below objects are type Temporizador (timer). They replace the need of using the delay()
// This allows the script to run and do other tasks while waiting for the time to pass
Temporizador tempo_tx(interval, true);          //timer for the RF transmisions, every 60s it will force sending an update to the master node
Temporizador tempo_tx_maxdelay(900000, false);  //Maximum delay Id allow without having sent a message to the master node, 15 mins 
Temporizador tempo_watchdog(3600000, false);    //Watchdog, if no update in 1h, activates watchdog

// Variables for the sensors
float h, t; //humidity and temperature
int l; //luminosity
bool is_motionPIR = false;    //Stores current status of the PIR motion sensor
bool is_motionPIR_prev=false; //Stores prev status of the PIR motion sensor, 
                              //if different than current, then forces to send a message:new movement detected
int status_switch = 0;        //States if the switch on the board was pushed
bool force_send_msg = false; //Force sending the msg regardless of the state

struct message_1 { // Structure of our message to send
  int16_t temperature;  //Temperature is sent as int16: I multiply by 100 the float value of the sensor and send it
                        //this way I avoid transmitting floats over RF
  unsigned char humidity;
  unsigned char light;
  unsigned char motion;
  unsigned char dooropen;
};
message_1 message_tx;

struct message_action { // Structure of our message to receive
  unsigned char cmd;
  unsigned char info;
};
message_action message_rx;

RF24NetworkHeader header(parent_node); // The network header initialized for this node

void setup(void)
{
  if (debug) Serial.begin(9600);

  // Initialize all radio related modules
  SPI.begin();
  radio.begin();
  delay(50);
  radio.setPALevel(RF24_PA_MAX);  //This can lead to issues as per https://arduino-info.wikispaces.com/Nrf24L01-2.4GHz-HowTo
                                  //use this radio.setPALevel(RF24_PA_LOW); if there are issues
  delay(50);
  radio.setChannel(108);          //Set channel over the WIFI channels
  delay(50);
  radio.setDataRate(RF24_250KBPS); //Decrease speed and improve range. Other values: RF24_1MBPS y RF24_2MBPS
  delay(50);
  network.begin(90, this_node);

  // Initialize the DHT library
  dht.begin();

  //Get oneWire devices on bus
  sensors.begin();
  sensors.setResolution(insideThermometer, 12);

  // Configure LED pin
  pinMode(ledPin, OUTPUT);
  
  //Config pin for the switch
  pinMode(PIN_SWITCH_KIDS, INPUT_PULLUP);

  //Config PIR
  pinMode(PIN_PIR_KIDS, INPUT);
  
  digitalWrite(ledPin, LOW);
  if (debug) {
    Serial.print("Starting Node Kids room. Node number: ");
    Serial.println(this_node);
    Serial.println("Remember to change termometer address if copy/pasting the script");
  }
  status_switch = digitalRead(PIN_SWITCH_KIDS); //read the switch status
}

void loop() {
  // Update network data
  network.update();

  //Receive RF Data 
  while (network.available()) {
    receive_data();
  }

  if (tempo_tx_maxdelay.is_Time()) {
    //If we enter here, means that tmax has already elapsed and we are to force data transmission
    force_send_msg = true;
  }

  if (switch_just_pushed()) {
    //If the switch is just pushed, we force sending the RF data
    force_send_msg = true;
  }

  if (change_inPIRsensor()){
    //If there is change in PIR, forces the update to the node
    force_send_msg = true;
  }

  //Get and send the sensor data. This is done only if the timer is over or we force RF data
  if (tempo_tx.is_Time() || force_send_msg) {
    tempo_tx_maxdelay.reiniciar_t0();  //Reinitialize the timer
    read_sensors_data();
    send_sensors_data();
    force_send_msg = false;
  }
  check_watchdog();
}

void receive_data(){
  RF24NetworkHeader header;
  message_action message_rx;
  network.peek(header);
  if (header.type == '2') {
    tempo_watchdog.reiniciar_t0();   //message received, it means there is communication
                                   // with the master, reinitiates the watchdog timer
    network.read(header, &message_rx, sizeof(message_rx));
    if (debug) {
        Serial.print("Data received from node ");
        Serial.println(header.from_node);
    }

    unsigned char cmd = message_rx.cmd; 
    unsigned char info = message_rx.info;
    if (debug) {
      Serial.print("Command: "); Serial.println(cmd);
    }    
    switch (cmd) {
      case MODIFICAR_T_SLEEP:
        interval = message_rx.info * 1000;
        send_cmd_per_RF(ACK, cmd);
        break;
      case FORCE_SEND_MSG:
        force_send_msg = true;
        send_cmd_per_RF(ACK, cmd);
        break;
      default:
        //Do nothing
        break;
    }

  } else {
    // This is not a type we recognize
    network.read(header, &message_rx, sizeof(message_rx));
    if (debug) {
      Serial.print("Unknown message received from node ");
      Serial.println(header.from_node);
    }
  }
}
}

void read_sensors_data() {
  sensors.requestTemperatures();
  h = dht.readHumidity();
  t = sensors.getTempC(insideThermometer);

  //Sensor for PIR is read already when loop calls function change_inPIRsensor
  //no need to re-read it here
  
  // Read photocell and constrains data within a range and then maps it to range 0% - 100%
  l = analogRead(photocellPin);
  l = constrain(l, 70, 850);
  l = map(l, 70, 850, 0, 100);  
}

void send_sensors_data(){
  header.type = '1'; // Headers will always be type 1 for this node

  // Only send values if any of them differ enough from the last transmission:
  //  0.3 degree temp difference, 1% humidity or 5% light difference, or a different motion state
  float t1 = t*100;   //multiply by 100 to get rid of the float and transmit only an int16
  int16_t t_tx = (int16_t) t1;  
  if (abs(t_tx - message_tx.temperature) > 30 || //sends data if there are significant changes in the sensors measurements
      abs(h - message_tx.humidity) > 1.0 || 
      abs(l - message_tx.light) > 5.0 ||
      force_send_msg) {
      // Construct the message we'll send
      message_tx = (message_1){ t_tx, (unsigned char) h, (unsigned char)l, is_motionPIR, 0 }; //original code: {f, h, p, m, d}

    // Writing the message to the network means sending it
    if (network.write(header, &message_tx, sizeof(message_tx))) {
      tempo_watchdog.reiniciar_t0();   //reinitiates the watchdog timer
      if (debug) Serial.print("Message sent\n"); 
    } else {
      if (debug) Serial.print("Could not send message\n"); 
    }

    if (debug) {
      Serial.print("Temp_tx: "); Serial.println((message_tx.temperature));
      Serial.print("Temp_detected: "); Serial.println((t)); 
      Serial.print("Hum_tx: "); Serial.println(message_tx.humidity);
      Serial.print("Hum_detected: "); Serial.println((h)); 
      Serial.print("Luminosity: "); Serial.println(message_tx.light);
      Serial.print("Motion PIR: "); Serial.println(message_tx.motion);
    }
  }
}

void send_cmd_per_RF(unsigned char cmd, unsigned char info) {
  //Sends via RF the command and info
  //This is the way to update the master of the light status, ACK, etc
  header.type = '2'; 
  message_action message_x_RF;
  message_x_RF = (message_action){ cmd, info }; 

  // Writing the message to the network means sending it
  if (network.write(header, &message_x_RF, sizeof(message_x_RF))) {
    if (debug) Serial.print("Message sent RF\n"); 
    //There is communication with the master -> reinitiate the watchdog timer
    tempo_watchdog.reiniciar_t0();   //reinitiates the watchdog timer
  } else {
    if (debug) Serial.print("Could not send message RF\n"); 
  }

}

bool switch_just_pushed() {
  int v = digitalRead(PIN_SWITCH_KIDS);
  bool r=false;

  if (v==LOW) {
    status_switch=LOW;
    digitalWrite(ledPin, LOW);
  }
  else {
    if (status_switch==LOW) {
      //if it was low, this is the first time we see the button pushed,
      //meaning we have detected a rising edge
      status_switch=HIGH;
      r=true;
      digitalWrite(ledPin, HIGH);
    }
  }
  return r;
}

bool change_inPIRsensor() {
  //True if there is a change in the PIR sensor
  bool b;
  is_motionPIR = (digitalRead(PIN_PIR_KIDS)==HIGH);  //updates current state of the PIR
  b = (is_motionPIR!=is_motionPIR_prev);            //if current != previous state -> change
  is_motionPIR_prev = is_motionPIR;
  return b;  
}

void check_watchdog() {
  //Controls the watchdog. Checks that there has been some communication with the 
  //central node in the last 1h, either receiving a message from the center
  //or sending a message successfully to the center
  if (tempo_watchdog.is_Time()) {
    //At this point in time, this does nothing special, just print error via Serial
    //Improvement: to deactivate the automation and force lights to off, etc
    if (debug) Serial.println("Alarm!!! Watchdog timer. No successful communication with the center");    
    tempo_tx_maxdelay.reiniciar_t0();
  }
}

 

A couple of pics of the kids' room.

Kidsroom-solderingKidsroom-preKidsroom-post

 

Nodes' Dashboard

And finally, the dashboard of the nodes.

Nodes Dashboard


 

     After playing a little with the Adafruit MQTT Python client and feeds, I decided to change the data transmission protocol between nodes from HTTP requests to MQTT, with the Raspberry Pi acting as broker and client, and the ESP8266 nodes being clients.

 

     Mosquitto looks like a very popular MQTT broker among Raspberry Pi users (and beyond). Installation on the RPi side is pretty straightforward, since Mosquitto is included in the Raspbian Jessie repositories:

 

          sudo apt-get install mosquitto mosquitto-clients

 

Mosquitto parameters and behavior are read from a configuration file (/etc/mosquitto/mosquitto.conf) - more details on its configuration can be found on the Mosquitto site.
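As a rough sketch, a minimal configuration for this kind of local setup could look like the following. These are standard Mosquitto options, but the values are illustrative, not the ones used in this project:

```
# /etc/mosquitto/mosquitto.conf - minimal local-network example
pid_file /var/run/mosquitto.pid

# listen on the default MQTT port
port 1883

# keep retained messages and subscriptions across restarts
persistence true
persistence_location /var/lib/mosquitto/

log_dest file /var/log/mosquitto/mosquitto.log

# no authentication - acceptable on a trusted home LAN, not for anything exposed
allow_anonymous true
```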

 

The Mosquitto broker can be started manually with sudo /usr/sbin/mosquitto -c /etc/mosquitto/mosquitto.conf; after a reboot it is started automatically.

 

On the Raspberry Pi 3 client side I used the Eclipse Paho MQTT Python client library. The installation package, installation instructions, examples and documentation can be found on the Paho project site, on GitHub and on the PyPI website. Installation can be as simple as:

 

                   pip install paho-mqtt

 

or by using the full code from the GitHub repository and the setup instructions from PyPI.

 

On the ESP8266 side, I used PubSubClient for Arduino by Nick O'Leary; the installation package and documentation can be found on his GitHub page and website.

 

How it works

 

     For those without previous MQTT experience: MQTT is a client/server publish/subscribe messaging transport protocol designed to be lightweight and easy to implement. The server (broker) manages topics on which clients publish data, or to which clients subscribe in order to retrieve data. A few key points of this system are:

      - publishers and subscribers are decoupled from each other; they only have to know the identity of the broker, which manages and, if needed, filters messages.

      - publishers and subscribers don't have to run at the same time, and publishing/subscribing doesn't have to be synchronized in any way.

      - the operation of publishers and subscribers is not halted during the publishing/subscribing process.

 

     As I said before, in my project the Raspberry Pi plays the role of the central node by running the Mosquitto broker and an MQTT Python client which subscribes to the sensor feed topics in order to collect data sent by the WiFi nodes and store it locally for further processing. The subscriber client is implemented in pilot_mqtt.py, available in the GitHub repository. The code is fairly well commented, so I'll detail only a couple of points:

 

The data received by the subscribing client is stored in two folders:

 

DIR_BASEINST = "/var/www/html/datainst/" - instantaneous sensor readings are stored here; these values are later published on the sensor feeds for the Adafruit dashboard

DIR_BASEHIST = "/var/www/html/datahist/" - all values received from the wireless nodes are stored here, in an individual file for each node

 

mqttc = mqtt.Client()
mqttc.on_message = on_message  # executed when a message arrives on the "sensorsfeed" topic without a more specific handler (temperature, humidity, etc)

# the client follows and processes only messages published on the following topics, by calling the corresponding functions
mqttc.message_callback_add("sensorsfeed/temperature/#", on_message_temperature)  # on_message_temperature is called when a message arrives on the "sensorsfeed/temperature/#" topic
mqttc.message_callback_add("sensorsfeed/humidity/#", on_message_humidity)
mqttc.message_callback_add("sensorsfeed/light/#", on_message_light)
mqttc.message_callback_add("sensorsfeed/detection/#", on_message_detection)

 

The character "#" is used as a wildcard for all levels under "sensorsfeed/temperature/", for example.
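For reference, the matching rules for these wildcards can be sketched in a few lines of pure Python (paho-mqtt ships an equivalent helper, paho.mqtt.client.topic_matches_sub):

```python
def topic_matches(pattern, topic):
    """Return True if an MQTT topic matches a subscription pattern.

    '#' matches any number of remaining levels (it must be the last level),
    '+' matches exactly one level.
    """
    plevels = pattern.split("/")
    tlevels = topic.split("/")
    for i, p in enumerate(plevels):
        if p == "#":
            return True          # '#' swallows everything from here on
        if i >= len(tlevels):
            return False         # pattern is longer than the topic
        if p != "+" and p != tlevels[i]:
            return False         # literal level mismatch
    return len(plevels) == len(tlevels)

print(topic_matches("sensorsfeed/temperature/#", "sensorsfeed/temperature/node1"))  # True
print(topic_matches("sensorsfeed/temperature/#", "sensorsfeed/humidity/node1"))     # False
```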

Next is the code which deals with received temperature values:

 

def on_message_temperature(mqttc, obj, msg):
  global receivedvaltemp
  receivedvaltemp = str(msg.payload)       # the payload of the message published by the wireless node
  recsplit = receivedvaltemp.split("=")    # the payload is split into temperature value and MAC address of the node
  tempval = recsplit[1]
  nodemac = recsplit[2]
  print "Node MAC from temperature feed = ", nodemac
  tempdatahist = DIR_BASEHIST + nodemac + "_temphist"  # name of the file which stores the logged data
  tempdatainst = DIR_BASEINST + nodemac + "_tempinst"  # name of the file which stores the instant value
# fhtdh = file handler temp data hist - this file stores a log of the temperature values sent by the node
# if a log file for a node's temperature data does not exist yet, it is created first - this way
# nodes can be added on the fly without any previous configuration
  if not exists(tempdatahist):
    fhtdh = open(tempdatahist, 'w')
    fhtdh.close()
  fhtdh = open(tempdatahist, 'a')
  fhtdh.write(time.strftime("%D-%H:%M:%S ") + tempval + '\n')
  fhtdh.close()

# fhtdi = file handler temp data inst - this file stores only the last temperature value sent by the node
# being an instantaneous value, the content of the file is overwritten every time
  fhtdi = open(tempdatainst, 'w')
  fhtdi.write(tempval + " " + nodemac)
  fhtdi.close()

  print "Temperature ", tempval

 

Next is the publisher/subscriber client implemented on the wireless ESP8266 modules with the help of PubSubClient for Arduino. The code for the ESP modules is in the GitHub repository - esp_mqtt.ino

 

On the publish side, the ESP node works as follows:

- connects to the wireless router through WiFi

- once the wireless connection is established, attempts to connect to the MQTT broker

- on the subscribing side, the ESP node subscribes to the "sensorsfeed/commands/espMAC" topic, where espMAC is the node's own MAC address, read at run time. This way each node subscribes to its own commands topic for receiving commands.

- on the publishing side, the ESP node publishes to the "sensorsfeed/temperature" topic, on which the pilot_mqtt.py client is listening. The message payload contains the temperature value and the MAC address of the publishing node, so the subscribing Python client can identify the source of the temperature values and put them in the corresponding log files.
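Assuming a payload of the form "temp=21.5=5ccf7f0b4dd2" (a label, then the value, then the MAC - which matches the split indices used in pilot_mqtt.py above), the two sides of the convention can be sketched as:

```python
def build_sensor_payload(label, value, mac):
    """Build the payload on the publishing (ESP) side - assumed format."""
    return "%s=%s=%s" % (label, value, mac)

def parse_sensor_payload(payload):
    """Split a node payload into (value, mac), matching the
    recsplit[1]/recsplit[2] indices used by the subscriber."""
    parts = payload.split("=")
    return parts[1], parts[2]

value, mac = parse_sensor_payload(build_sensor_payload("temp", "21.5", "5ccf7f0b4dd2"))
print(value, mac)  # 21.5 5ccf7f0b4dd2
```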

 

As in the HTTP version, the ada_mqtt.py client is used to publish instant temperature values to the Adafruit dashboard; it is located in /home/pi/pilot along with the node_config.txt file.

 

Hardware

 

     On the hardware side, I needed more ESP nodes in order to collect environment readings from multiple points around the house. For these new nodes I decided to use ESP-01 modules; they are smaller and I hope I'll be able to embed them more easily in common things around the house. They have only two GPIOs available, but I think that will be enough for now.

 

IMG_1054.JPG

 

IMG_1056.JPG

 

IMG_1058_2.jpg

 

One thing I noticed: the DS18B20 temperature sensor doesn't like to be near live electronics, especially in the vicinity of the RF part of the ESP modules. Temperature readings were at least 5 degrees Celsius higher than reality. So I moved the sensor from the onboard socket onto a 10 cm twisted wire and the adverse effects disappeared.

 

Github repository for Mosquitto version.

The kit for the contest arrived a few days ago and I am now sorting through the awesome devices to further understand how to interface with them, as well as their use model. There have been plenty of pics posted of the kit, so I'll just list what was in the box I received:

 

- Raspberry Pi 3

- Raspberry Pi B+

- Raspberry Pi 7" Touchscreen Display

- Raspberry Pi Sense HAT (A.K.A. Astro Pi)

- PiFace Digital 2 with Relays, push buttons and such

- PiFace Control & Display 2

- Raspberry Pi NoIR Camera V2

- Raspberry Pi Camera V2

- Wi-Pi - Wireless USB Dongle

- PiFace Pi Rack

- 16GB Noobs micro SD

- EnOcean Pi Devices:

  - EnOcean Pi 902 Wireless interface

  - EnOcean Switch Design Kit

     -  ECO 200 Energy Harvester

     -  PTM 330 Radio Transmitter

     -  PTM 210 Batteryless Switch Module

     -  Clamp Switch 3D Printed

     -  PTM 210 Rockers

     -  ECO 200 Housing

  - EnOcean Sensor Kit

     - STM332/330 Temperature Sensor Module

     - STM329/320 Magnet Contact Transmitter Module

     - PTM210/220 Pushbutton Transmitter Switch Module

 

This is quite the list and I thank the good folks at Element14, Raspberry Pi Foundation, PiFace and EnOcean for providing such an awesome collection of tools to work with.

 

I was listening to the IoT Podcast with Stacey Higginbotham and Kevin Tofel, and Stacey gave a short review of the iLumi BR30 Outdoor BLE light. That was very timely, since I was looking for a good outdoor LED solution for the Stall area that could be controlled remotely as well as on a timer. After listening to the show, I was at Best Buy where both the indoor and outdoor versions of the iLumi BR30 were on sale for $15 each, so I snagged one of each.

The issue I ran into was that my old 'i' devices cannot run the Apple app, and Google Play would not let me load the iLumi app on my generic Android tablet. I did not find any libraries for Python or anything else for the iLumi, so I proceeded to see if I could hack access to the light and control turning it on, as well as the colors, with BlueZ running on the RasPi. After working through a few BLE light samples and poking around the device, I ended up getting the correct handles to pull off the model and manufacturer, as well as the handle to control the light itself. I'm able to initialize the light, turn it on and off, and have some control of the color settings. It's not perfect, but it is a start.

I also integrated the BlueZ commands with a modified version of one of the Sense HAT examples, so I can control the iLumi via the joystick on the Sense HAT. The next step is to add a sitemap, items and rules in OpenHAB to interface with the iLumi from a Python script. The end plan is to be able to control the iLumi from an EnOcean switch, adding remote control to the BR30. One issue I did run into, which Stacey mentioned as well, is that the BR30 has a very short BLE range, but I think the combination of the EnOcean switch and the Pi interface will eliminate that restraint.

 

I have a Python script that runs BlueZ to grab the MAC address from a BLE device, initialize it, and pull off a few bits of info from the device just to identify what it is.

This is a screen shot of the end result:

bluez scan of iLumi BR30
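The core of such a script is just building BlueZ gatttool command lines and running them. A sketch of that approach - the MAC address and the handles below are placeholders, not the iLumi's real ones, which have to be discovered by poking at the device:

```python
import subprocess

ILUMI_MAC = "AA:BB:CC:DD:EE:FF"   # placeholder - substitute the MAC found by scanning

def gatttool_read(mac, handle):
    """Build a gatttool command that reads a characteristic by handle."""
    return ["gatttool", "-b", mac, "--char-read", "-a", handle]

def gatttool_write(mac, handle, value_hex):
    """Build a gatttool command that writes a hex value to a handle."""
    return ["gatttool", "-b", mac, "--char-write-req", "-a", handle, "-n", value_hex]

# Hypothetical handle - find the real ones with 'gatttool --characteristics':
cmd = gatttool_read(ILUMI_MAC, "0x0003")
# subprocess.check_output(cmd)  # uncomment on the Pi with the light in range
```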

I have a video showing the working Sense HAT joystick controlling the iLumi BR30, but I need to add audio and upload it to YouTube. Here is a screen shot of that:

Sense Hat contol of iLumi BR30 via bluez

Next steps: Complete the Design Doc and work on the OpenHAB interface. 

Now that we have a working Raspberry Pi 3 with Z-Way, it's time to connect it to the backbone of Thuis: MQTT. For this I developed a custom userModule/app. In this post I'll explain what it does, how to use it and how it works.

 

Goal

The app enables Z-Way to send and receive MQTT messages. Whenever the status of one of the selected devices changes, it is published to a topic. Based on these topics, some other topics are available to change the status of the devices or request a status update.

 

How to use

 

Install

Z-Way allows developers to create userModules and publish them to their App Store. Unfortunately they still haven't replied to my submission, so it's not available there straight away. Luckily they do have a beta-token mechanism, so it's still possible to install it. I've used BaseModule by @maros as a basis, so you'll need to install this as well. To install it, follow these steps:

  1. Go to the Management page of your Z-Way
  2. Open the App Store Access tab
  3. Add mqtt_beta as token
  4. Now go to Apps, and then Online Apps
  5. Search for Base Module and install it
  6. Search for MQTT and install it

 

Configure

When you go to the settings of the app you'll find quite some options. The first section contains the basic settings needed for MQTT: the client ID and the hostname/port/username/password of the MQTT broker. Next are the common topic prefix (we'll use Thuis/device for now) and the postfixes for requesting a status update and for setting the status of a device.

 

More interesting is the second section, which configures the publications. When you add a publication, you first select a type: a single device or tagged devices. We'll use the latter for now. Then you add some tags to select which devices will be part of this publication.

 

Next is deciding on the topic for the publication. You can use two placeholders: %deviceName% and %roomName%. They will be replaced by the actual values for the corresponding device. In this example we'll use the topic %roomName%/%deviceName%.

 

The last option is whether or not to publish status updates as retained messages; we'll turn it on. A retained message is stored by the broker and delivered immediately to new subscribers, so they always see the latest known state.

 

Receive status updates

Now that you've configured the Z-Way MQTT app, we can use it to receive updates on our devices. We'll use mosquitto_sub to demonstrate this:

robin@thuis-server-core:~# mosquitto_sub -v -t Thuis/#
Thuis/device/kitchen/counterSwitch on
Thuis/device/kitchen/counterSwitch off
Thuis/device/office/dimmer 0
Thuis/device/kitchen/luminescence 39


As you can see, a device called 'Counter Switch' in the room 'Kitchen' was turned on and off, the dimmer in the office was turned down, and the luminescence sensor in the kitchen reported an updated value.

 

Interact

There are two interactions available: requesting a status update and setting a value. Both can be demonstrated using mosquitto_pub. To request the status of the dimmer in the office publish a message to Thuis/device/office/dimmer/status with an empty message:

robin@thuis-server-core:~# mosquitto_pub -t Thuis/device/office/dimmer/status -m ""

 

When you subscribe to the dimmer's topic in the meanwhile you receive the new status:

robin@thuis-server-core:~# mosquitto_sub -v -t Thuis/device/office/dimmer
Thuis/device/office/dimmer 0

 

Then to set the dimmer to 60% you send the message 60 to Thuis/device/office/dimmer/set:

robin@thuis-server-core:~# mosquitto_pub -t Thuis/device/office/dimmer/set -m "60"

 

The light will turn on and you'll see the status changed:

robin@thuis-server-core:~# mosquitto_sub -v -t Thuis/device/office/dimmer
Thuis/device/office/dimmer 60

 

Currently values between 0 and 100 are supported for dimmers (devices supporting the SwitchMultiLevel command class) and on/off for switches (devices supporting the SwitchBinary command class).
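Those two value ranges can be validated on the sending side before publishing. A small Python sketch of that - the helper and its validation are mine, not part of the app; the topic layout follows the Thuis/device prefix used above:

```python
def build_set_message(room, device, value, prefix="Thuis/device", postfix="set"):
    """Return (topic, payload) for a Z-Way MQTT 'set' command.

    Dimmers accept 0-100, switches accept 'on'/'off'; anything else is rejected.
    """
    if isinstance(value, int):
        if not 0 <= value <= 100:
            raise ValueError("dimmer level must be between 0 and 100")
        payload = str(value)
    elif value in ("on", "off"):
        payload = value
    else:
        raise ValueError("value must be 0-100 or 'on'/'off'")
    return "%s/%s/%s/%s" % (prefix, room, device, postfix), payload

topic, payload = build_set_message("office", "dimmer", 60)
print(topic, payload)  # Thuis/device/office/dimmer/set 60
```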

 

How does it work

Z-Way apps are written in JavaScript. Their basic structure is defined in the developer manual. As said before, I'm using BaseModule as a basis, as it provides some useful functions for interacting with devices, for example filtering status updates to include only actual changes. The full project can be found on my GitHub as Zway-MQTT. Here I'll explain some interesting parts of the code.

 

MQTT client

From a userModule you can call out to external utilities, but only when you give explicit permission. I wanted to avoid this, so I searched for a pure JavaScript solution. Most JavaScript-based MQTT libraries use WebSockets, which is unfortunately not available in Z-Way. Luckily I found a module by @goodfield, who had found and modified an MQTT client for use within Z-Way. I cleaned it up and started using it:

self.client = new MQTTClient(self.config.host, parseInt(self.config.port), {client_id: self.config.clientId});

 

Publishing status updates

To receive updates we have to subscribe to updates sent by the Z-Way core. I'm using the modify:metrics:level event, which is the filtered version from BaseModule.

self.callback = _.bind(self.updateDevice, self);
self.controller.devices.on("modify:metrics:level", self.callback);

 

In updateDevice I retrieve the new value from the device and transform it if needed. Then I look up all publications matching this device (for example because they are tagged with the configured tag) and execute an MQTT publish for each of them.

MQTT.prototype.updateDevice = function (device) {
  var self = this;

  var value = device.get("metrics:level");
  if (device.get("deviceType") == "switchBinary" || device.get("deviceType") == "sensorBinary") {
    if (value == 0) {
      value = "off";
    } else if (value == 255) {
      value = "on";
    }
  }

  self.processPublicationsForDevice(device, function (device, publication) {
    var topic = self.createTopic(publication.topic, device);
    self.publish(topic, value, publication.retained);
  });
};

 

You'll notice the createTopic call; it takes care of merging the prefix with the configured topic, and replaces the placeholders. Device and room names are camel-cased to produce nice and valid topics.

MQTT.prototype.createTopic = function (pattern, device) {
  var self = this;

  var topicParts = [].concat(self.config.topicPrefix.split("/"))
                     .concat(pattern.split("/"));

  if (device != undefined) {
    topicParts = topicParts.map(function (part) {
      return part.replace("%roomName%", self.findRoom(device.get("location")).title.toCamelCase())
                 .replace("%deviceName%", device.get("metrics:title").toCamelCase());
    });
  }

  return topicParts.filter(function (part) {
    return part !== undefined && part.length > 0;
  }).join("/");
};

 

Reacting on interaction

To react to status requests and new values, the app subscribes to all topics starting with the prefix and filters out the actions. If it receives an action message, it tries to find the corresponding publication and device based on the topic, and then takes the requested action.

self.client.subscribe(self.createTopic("/#"), {}, function (topic, payload) {
  var topic = topic.toString();

  if (!topic.endsWith(self.config.topicPostfixStatus) && !topic.endsWith(self.config.topicPostfixSet)) {
    return;
  }


  self.controller.devices.each(function (device) {
    self.processPublicationsForDevice(device, function (device, publication) {
      var deviceTopic = self.createTopic(publication.topic, device);

      if (topic == deviceTopic + "/" + self.config.topicPostfixStatus) {
        self.updateDevice(device);
      }

      if (topic == deviceTopic + "/" + self.config.topicPostfixSet) {
        var deviceType = device.get('deviceType');

        if (deviceType.startsWith("sensor")) {
          self.error("Can't perform action on sensor " + device.get("metrics:title"));
          return;
        }

        if (deviceType === "switchMultilevel" && payload !== "on" && payload !== "off") {
          device.performCommand("exact", {level: payload + "%"});
        } else {
          device.performCommand(payload);
        }
      }
    });
  });
});

 

We can now easily talk to Z-Wave devices through MQTT messages. Later I'll add support for more command classes, like thermostat and power usage. Someone already requested battery level updates through GitHub, which is an interesting addition as well.

 

As explained in my unboxing post, I wasn't originally planning to use the Sense HAT in my project. But the more I read about it, the more I wanted to experiment with it anyway. And since all my other electronics gear is in boxes, waiting for next week's house move, now's the perfect time to see what this HAT has to offer!

 

Sense HAT

 

The Sense HAT is, as the name implies, loaded with sensors. In this paragraph I'll cover which sensors are available and how to fetch their data using the Python reference API. The HAT also has display capabilities using LEDs, and input using a joystick.

 

The Python module for the Sense HAT seemed to be installed by default in my case. If it isn't, you can install it using the following command:

 

pi@raspberrypi:~ $ sudo apt-get install sense-hat

 

Also, make sure I2C is enabled (can be done using "raspi-config").

 

Sensor Data

 

The Sense HAT has a lot of sensors on board. Let's see how to retrieve their data.

 

Temperature, Humidity & Pressure

 

By default, the temperature is retrieved from the humidity sensor. It is however possible to retrieve it from the pressure sensor instead.

 

The example below illustrates how to retrieve the data from the humidity and pressure sensors:

 

#!/usr/bin/env python
# -*- coding: utf-8 -*-

from sense_hat import SenseHat
from time import sleep

sh = SenseHat()

try:
    while True:
        th = sh.get_temperature()
        tp = sh.get_temperature_from_pressure()
        p = sh.get_pressure()
        h = sh.get_humidity()

        th = round( th, 1 )
        tp = round( tp, 1 )
        p = round( p, 1 )
        h = round( h, 1 )

        print( "Temp (H) = %s°C    Temp (P) = %s°C    Prsr = %smb   Hmdt = %s%%" %(th,tp,p,h) )
        sleep( 1 )

except KeyboardInterrupt:
    print( "Exiting..." );

 

And the output should be similar to this:

Screen Shot 2016-06-27 at 19.59.45.png

 

Note that my HAT seems to be suffering from an issue, resulting in low humidity and even negative temperatures (it's really not that cold in Belgium ...). More about this in the last paragraph.

 

Accelerometer, Gyroscope & Magnetometer

 

Getting the orientation for all 3 axes can be done with a single call: "get_orientation". The data is returned in degrees. Note that no delays should be added in the loop, as the sensor relies on multiple measurements to calculate the values; adding a delay will result in incorrect values.

 

#!/usr/bin/env python
# -*- coding: utf-8 -*-

from sense_hat import SenseHat
from time import sleep

sh = SenseHat()

try:
    while True:
        pitch, roll, yaw = sh.get_orientation().values()

        pitch = round( pitch, 1 )
        roll = round( roll, 1 )
        yaw = round( yaw, 1 )

        print( "Pitch = %s°    Roll = %s°    Yaw = %s°" %(pitch, roll, yaw) )

except KeyboardInterrupt:
    print( "Exiting..." );

 

The output:

Screen Shot 2016-06-27 at 21.25.16.png

 

The magnetometer can be used for compass applications:

 

#!/usr/bin/env python
# -*- coding: utf-8 -*-

from sense_hat import SenseHat
from time import sleep

sh = SenseHat()

try:
    while True:
        north = sh.get_compass()
        north = round( north, 1 )

        print( "North = %s°" %(north) )

except KeyboardInterrupt:
    print( "Exiting..." );

 

Unfortunately, those results seemed unreliable (or I was testing it incorrectly): when turning the board around its Z axis, north would remain between 120° and 150°, while I would expect it to go from 0° to 360°.

Screen Shot 2016-06-27 at 21.41.00.png

 

It turns out a calibration file is provided by default, which should cover most cases. Since mine wasn't working as expected, I deleted it and ran the calibration steps described here.

 

The result was better, as I was able to cover the entire range of values:

 

North = 49.2°
North = 49.2°
...
North = 358.7°
North = 358.8°
...
North = 12.0°
North = 12.1°

 

Finally, the acceleration data can be retrieved as follows:

 

#!/usr/bin/env python
# -*- coding: utf-8 -*-

from sense_hat import SenseHat
from time import sleep

sh = SenseHat()

try:
    while True:
        x, y, z = sh.get_accelerometer_raw().values()

        x = round( x, 0 )
        y = round( y, 0 )
        z = round( z, 0 )

        print( "X = %s    Y = %s    Z = %s" %(x, y, z) )

except KeyboardInterrupt:
    print( "Exiting..." );

 

In the example below, I shook the Pi in the Y direction. You can see the values (in g) go from positive to negative and back to positive, representing the shaking motion:

Screen Shot 2016-06-28 at 18.21.05.png

 

Joystick

 

To be able to scroll between menus and trigger actions, a 5-button joystick is provided.

 

The buttons of the joystick are mapped to keyboard keys: the cursor keys and the return key. Using code, it is possible to catch those events and trigger actions accordingly.

 

#!/usr/bin/env python

import pygame

pygame.init()
pygame.display.set_mode((640, 480))

try:
    while True:
        for event in pygame.event.get():
            if event.type in (pygame.KEYDOWN, pygame.KEYUP):  # other event types have no .key attribute
                print(event.key)

except KeyboardInterrupt:
    print( "Exiting..." );

 

Pressing the joystick's buttons results in the following output.

Screen Shot 2016-06-27 at 22.02.17.png

 

There are 5 different numbers, each repeated twice. The repetition represents the "press" and "release" actions. The actual numbers are key codes; a listing can be found here. The sequence above then decodes as "left", "up", "right", "down", "enter".
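That decoding can be captured in a small lookup table. A sketch, hardcoding the SDL 1.2 keysym values that pygame 1.x uses for these keys (so the snippet runs even without pygame installed):

```python
# SDL 1.2 / pygame 1.x keycodes for the keys the joystick emulates
JOYSTICK_KEYS = {
    276: "left",    # pygame.K_LEFT
    273: "up",      # pygame.K_UP
    275: "right",   # pygame.K_RIGHT
    274: "down",    # pygame.K_DOWN
    13:  "enter",   # pygame.K_RETURN (pressing the joystick in)
}

def decode_key(code):
    """Translate a raw keycode into a joystick direction name."""
    return JOYSTICK_KEYS.get(code, "unknown")

print([decode_key(c) for c in (276, 273, 275, 274, 13)])
# ['left', 'up', 'right', 'down', 'enter']
```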

 

LED Matrix

 

The Sense HAT has an onboard 8x8 RGB LED matrix which can be used to display images or text.

 

Text

 

In the code below, I created an example program demonstrating various attributes of the "show_message" function. It is possible to define the colour of the text and background, the scroll speed and the rotation.

 

#!/usr/bin/env python

from sense_hat import SenseHat
from time import sleep

sh = SenseHat()

try:
    sh.set_rotation(0)
    sh.show_message("test")
    sleep( 1 )
    sh.set_rotation(90)
    sh.show_message("test", text_colour=[255, 0, 0])
    sleep( 1 )
    sh.set_rotation(180)
    sh.show_message("test", text_colour=[255, 255, 0], back_colour=[0, 0, 255])
    sleep( 1 )
    sh.set_rotation(270)
    sh.show_message("test", scroll_speed=0.5)
    sleep( 1 )

except KeyboardInterrupt:
    print( "Exiting..." );

 

The code above results in following animation:

 

 

Shape

 

Drawing shapes on the LED matrix is done by defining an array of colour values, one per LED. In the example below, the "wink" array defines a winking smiley face. The background is defined by the array "O" as black (no colour), the drawing by "X" as blue.

 

#!/usr/bin/env python

from sense_hat import SenseHat
from time import sleep

sh = SenseHat()

X = [0, 0, 255]
O = [0, 0, 0]

wink = [
O, X, O, O, O, O, O, O,
X, O, X, O, O, X, X, X,
O, X, O, O, O, O, O, O,
O, O, O, O, X, O, O, O,
O, O, O, X, X, O, O, O,
O, O, O, O, O, O, O, O,
O, X, O, O, O, O, X, O,
O, O, X, X, X, X, O, O
]

try:
    sh.set_pixels(wink)

except KeyboardInterrupt:
    print( "Exiting..." );

 

It's possible to define multiple arrays of shapes and colour and combine them, as demonstrated below:
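A sketch of how such a combination could be structured: keep each shape as its own 64-element frame and cycle through them. The two smiley frames here are my own, not the ones from the animation above, and the display part only runs when the sense_hat module is present:

```python
from time import sleep

X = [0, 0, 255]   # blue
O = [0, 0, 0]     # black / off

def face(mouth_open):
    """Build a 64-element frame: two eyes plus an open or closed mouth."""
    frame = [O] * 64
    frame[2 * 8 + 2] = X                     # left eye (row 2, column 2)
    frame[2 * 8 + 5] = X                     # right eye (row 2, column 5)
    mouth = [49, 50, 51, 52, 53, 54] if mouth_open else [50, 51, 52, 53]
    for i in mouth:                          # mouth pixels on row 6
        frame[i] = X
    return frame

frames = [face(True), face(False)]

if __name__ == "__main__":
    try:
        from sense_hat import SenseHat       # only available on the Pi with the HAT attached
        sh = SenseHat()
        for frame in frames * 3:             # cycle through the frames as a short animation
            sh.set_pixels(frame)
            sleep(0.5)
    except ImportError:
        pass
```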

 

 

Issue ?

 

There seems to be an issue with some of the Sense HATs used in this challenge.

 

The problem first surfaced when vish created a discussion about his HAT reporting negative temperature and humidity. Using the same code, mine reported correct values. The next day though, I was suddenly suffering from the same negative temperatures. Doing some searching online, I came across this thread on the official Raspberry Pi forums, in which it is described as a hardware issue that has since been resolved in the factory. Some faulty units could still be in stock at some distributors though.

 

Could this be the same issue?

Screen Shot 2016-06-26 at 14.40.02.png

 

If humidity is not important for your application, this is not a blocking issue, as the temperature can be retrieved from the pressure sensor instead.
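For example, a small helper (my own sketch, not from the post) that falls back to sense_hat's get_temperature_from_pressure() when the humidity sensor returns an implausible value:

```python
#!/usr/bin/env python
# Sketch: prefer the humidity sensor's temperature, but fall back to the
# pressure sensor when the reading is clearly wrong (e.g. the negative
# values described above). The plausibility range is my own assumption.

def plausible_temperature(from_humidity, from_pressure, low=-30.0, high=60.0):
    """Return the humidity sensor's value when it is in a plausible range,
    otherwise the pressure sensor's value."""
    if low <= from_humidity <= high:
        return from_humidity
    return from_pressure

try:
    from sense_hat import SenseHat
    sh = SenseHat()
    t = plausible_temperature(sh.get_temperature(),
                              sh.get_temperature_from_pressure())
    print("Temperature: %.1f C" % t)
except ImportError:
    pass  # no Sense HAT on this machine; the helper still works standalone
```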

 



Last week I explained how I'm using Chef to provision my Raspberry Pi's and the recipes in my Thuis cookbook. Back then I didn't have a Raspberry Pi 3 yet, so I tested on an older model. This week the kit arrived, so I'll put the bootstrapping into practice on the Raspberry Pi 3! Not everything went as expected, which makes it a good subject for this blogpost.

 

The Kit


 

Installing the Raspberry Pi 3

As mentioned before, I'm using raspbian-ua-netinst as the basis for my install, as it gives me a very lean Raspbian. The maintainers of the project haven't updated the installer for the Raspberry Pi 3 yet, so for now there are some manual steps to go through. This is discussed in issue #375. It boils down to the following steps:

  1. Format the SD card
  2. Copy all files from the latest version to the SD card
  3. Fix the files for the Raspberry Pi 3:
    1. Copy (replace all) the files from the firmware repo using the handy tarball made by @eLvErDe
    2. Edit /boot/config.txt by adding the following at the end:
      [pi3]
      initramfs installer-rpi2.cpio.gz
  4. Insert the SD card in the Raspberry Pi and power it on
  5. Now wait until the installation finishes (it will take quite some time; if you don't have a screen attached, wait until the ethernet activity goes silent)
  6. SSH into the Raspberry using the default username root and password raspbian
  7. Now fix the kernel by installing and running rpi-update:
    apt-get install rpi-update
    rpi-update
    reboot now
    
  8. Edit /boot/config.txt again by changing [pi2] into [pi3]

 

The Raspberry Pi is now ready to be bootstrapped.

 

Bootstrapping Chef

Bootstrapping Chef is a matter of running one simple command from my workstation:

knife bootstrap 10.0.0.201 -t raspbian-jessie-chef.erb --ssh-user root --ssh-password 'raspbian' --node-name 'thuis-server-core' --run-list 'recipe[thuis::thuis-server-core]'

However, I ran into a few complications. I hadn't hit these during testing, because of the iterative process and some hardware differences between the two Raspberries.

 

Complication 1: systemd needs services to be reloaded

After installing WildFly the recipe tries to start the services; this however fails with a cryptic No such file or directory. The reason is that systemd needs to be reloaded after new init scripts are added. The WildFly recipe doesn't take care of this and therefore runs into the issue. To resolve it I forked the cookbook and added a systemctl daemon-reload command (see PR #38). This ensures WildFly can be started as expected.

 

Complication 2: For Z-Way bluetooth needs to be turned off

The RazBerry uses the serial I/O pins on the Raspberry Pi, which are also used by the built-in Bluetooth connection. To be able to use the RazBerry and run Z-Way, Bluetooth has to be disabled. The original install script takes care of this, but after I converted it for use in a Chef cookbook this didn't work anymore. That's why I converted these commands to Ruby blocks in my Chef recipe. This looks as follows:

ruby_block '/etc/inittab' do
  block do
    file = Chef::Util::FileEdit.new('/etc/inittab')
    file.search_file_delete_line(/[^:]*:[^:]*:respawn:\/sbin\/getty[^:]*ttyAMA0[^:]*/)
    file.write_file
  end
end

ruby_block '/boot/cmdline.txt' do
  block do
    file = Chef::Util::FileEdit.new('/boot/cmdline.txt')
    file.search_file_delete(/console=ttyAMA0,115200/)
    file.search_file_delete(/kgdboc=ttyAMA0,115200/)
    file.search_file_delete(/console=serial0,115200/)
    file.write_file
  end
end

ruby_block '/boot/config.txt' do
  block do
    file = Chef::Util::FileEdit.new('/boot/config.txt')
    file.insert_line_if_no_match(/dtoverlay=pi3-miniuart-bt/, 'dtoverlay=pi3-miniuart-bt')
    file.write_file
  end
end

 

Basically this deletes the serial connections from /etc/inittab and /boot/cmdline.txt and adds an option to /boot/config.txt to disable Bluetooth.
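Outside Chef, the same three edits can be sketched in plain Python (my own illustration, mirroring FileEdit's search_file_delete_line, search_file_delete and insert_line_if_no_match; run it against copies of the files first):

```python
#!/usr/bin/env python
# Hedged sketch, not part of the Chef recipe: the same config edits in Python.
import re

def delete_matching_lines(path, pattern):
    """Drop whole lines matching the regex (like search_file_delete_line)."""
    with open(path) as f:
        lines = f.readlines()
    with open(path, "w") as f:
        f.writelines(l for l in lines if not re.search(pattern, l))

def delete_substring(path, pattern):
    """Remove only the matching text, keeping the line (search_file_delete)."""
    with open(path) as f:
        text = f.read()
    with open(path, "w") as f:
        f.write(re.sub(pattern, "", text))

def ensure_line(path, line):
    """Append the line unless it is already present (insert_line_if_no_match)."""
    with open(path) as f:
        present = any(line in l for l in f)
    if not present:
        with open(path, "a") as f:
            f.write(line + "\n")

# On the Pi (as root), the three edits would be:
#   delete_matching_lines("/etc/inittab", r"respawn:/sbin/getty.*ttyAMA0")
#   delete_substring("/boot/cmdline.txt", r"console=ttyAMA0,115200 ?")
#   delete_substring("/boot/cmdline.txt", r"kgdboc=ttyAMA0,115200 ?")
#   delete_substring("/boot/cmdline.txt", r"console=serial0,115200 ?")
#   ensure_line("/boot/config.txt", "dtoverlay=pi3-miniuart-bt")
```

Note that /boot/cmdline.txt is a single line, which is why only the matching substrings are removed there rather than the whole line.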

 

Complication 3: sudo commands not found

When running a command using sudo, e.g. sudo shutdown now, I got a Command not found error. This happens because /sbin and the other secure directories are not on the PATH for a non-root user. When using sudo they should be, so a small piece of configuration is added for the sudo cookbook:

default['authorization']['sudo']['sudoers_defaults'] = [
  'env_reset',
  'secure_path="/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"'
]

 

Configuring Z-Way

As I already have a working Z-Way setup at home the configuration is pretty easy. It boils down to:

  1. Browse to http://10.0.0.201:8083 (where 10.0.0.201 is the IP of your Raspberry)
  2. Choose a password of your own choice
  3. Restore the backups:
    1. For the controller (*.zbk): http://10.0.0.201:8083/expert/#/network/control
    2. And for the module automation (*.zab): http://10.0.0.201:8083/smarthome/#/admin
    3. If you used userModules then copy or install these again (their settings are part of the backup, but not the actual files)

 

Deploying an application to WildFly

This is as easy as dropping an .ear or .war file into /opt/wildfly/standalone/deployments. Alternatively you can deploy the app as part of a Chef recipe like:

wildfly_deploy 'my-app-1.0.war' do
  url 'http://artifacts.company.com/artifacts/my-app.1.0.war'
  runtime_name 'my-app.war'
end

 

In the upcoming blogs I'll work on the Java app and integrating Z-Way with MQTT, then we'll do the actual deployment.

 

With the complications out of the way, the Raspberry Pi 3 is now fully ready to receive some app deployments, which is what I'll work on next!

In the last blog, I discussed how to create dashboards for data monitoring using freeboard. In this post, I will use freeboard to display the data acquired from the Sense HAT. Since I want my PiIoT architecture to be a distributed system, I will create a separate script that reads sensor values from the Sense HAT and publishes them to an MQTT broker. MQTT is a widely used IoT protocol for communication between the nodes in a network; for more info on MQTT, read this blog. The script handling the dashboard can subscribe to this topic and update the display. This way, I can attach the Sense HAT to one node and display the data on a screen attached to another Pi where the dashboard script is running. In this post, I will cover the steps to install Mosquitto with websockets on the Raspberry Pi.
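A sketch of such a publisher script (the broker address, topic name and JSON payload format are my assumptions for illustration, not fixed by the post) could look like:

```python
#!/usr/bin/env python
# Sketch of the sensor-publishing node described above. The broker address
# ("localhost") and topic ("/sensehat/environment") are illustrative choices.
import json
import time

def build_payload(temperature, humidity, pressure):
    """Serialise one round of readings as a JSON string for MQTT."""
    return json.dumps({
        "temperature": round(temperature, 2),
        "humidity": round(humidity, 2),
        "pressure": round(pressure, 2),
        "timestamp": int(time.time()),
    })

def main():
    # Hardware/network imports stay local so the helper above can be used
    # on machines without a Sense HAT or broker.
    from sense_hat import SenseHat
    import paho.mqtt.client as mqtt

    sh = SenseHat()
    client = mqtt.Client()
    client.connect("localhost", 1883)
    client.loop_start()
    while True:
        client.publish("/sensehat/environment",
                       build_payload(sh.get_temperature(),
                                     sh.get_humidity(),
                                     sh.get_pressure()))
        time.sleep(5)

if __name__ == "__main__":
    try:
        main()
    except Exception as exc:  # no HAT/broker available on this machine
        print("publisher not started:", exc)
```

The dashboard node would subscribe to the same topic and parse the JSON payload.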

 

Installing Mosquitto with websockets

Installing the Mosquitto broker is fairly simple if you don't need websockets functionality. Websockets allow MQTT clients running in web browsers to communicate with the broker. Since I'm going to write some of my scripts in Node.js and am expecting some client functionality in the browser, I need an installation with websockets enabled.

 

The first step is to bring the system up to date:

sudo apt-get update
sudo apt-get upgrade

 

Now go on to install the dependencies:

sudo apt-get install libssl-dev cmake libc-ares-dev uuid-dev daemon

 

Next, clone libwebsockets library to a convenient location:

cd ~/Downloads
wget https://github.com/warmcat/libwebsockets/archive/v1.3-chrome37-firefox30.tar.gz
tar -xvf  v1.3-chrome37-firefox30.tar.gz

 

Build libwebsockets:

cd libwebsockets-1.3-chrome37-firefox30/
mkdir build
cd build
cmake ./..
make
sudo make install
sudo ln -s /usr/local/lib/libwebsockets.so.4.0.0 /usr/lib/libwebsockets.so.4.0.0
sudo ldconfig

 

Now, download latest version of mosquitto and unzip:

cd ~/Downloads
wget http://mosquitto.org/files/source/mosquitto-1.4.tar.gz
tar -xvf mosquitto-1.4.tar.gz
cd mosquitto-1.4

 

Open the 'config.mk' file and enable websockets in the build by replacing

# Build with websockets support on the broker.
WITH_WEBSOCKETS:=no

with

# Build with websockets support on the broker.
WITH_WEBSOCKETS:=yes

We are now set to build Mosquitto:

make
sudo make install

If everything went fine, Mosquitto with websockets is now installed on the Raspberry Pi.

 

Now we need to create a config file to tell Mosquitto which ports to listen on and which protocols to handle:

sudo cp /etc/mosquitto/mosquitto.conf.example /etc/mosquitto/mosquitto.conf  

Now edit the contents and add as follows:

# Default listener
listener 1883

# Additional listener for websockets
listener 9001
protocol websockets

 

Testing the installation

To check the installation, I'm going to host a simple webpage which subscribes to a particular topic on the broker and prints incoming messages to the screen. I'm going to use a modified version of the Node.js server from the last blog post to host the webpage. We also need to download the JavaScript client library to the same folder.

Create a new folder:

cd ~/Documents
mkdir mqtt-wss-test
cd mqtt-wss-test
wget https://raw.githubusercontent.com/eclipse/paho.mqtt.javascript/master/src/mqttws31.js 
touch server.js
touch index.html

 

Open 'server.js' and put in this contents:

var http  = require("http"),
    url   = require("url"),
    path  = require("path"),
    fs    = require("fs"),
    mime  = require("mime"),
    port  = process.argv[2] || 8888;

http.createServer( function(request, response) {

  var uri = url.parse(request.url).pathname;
  var filename = path.join(process.cwd(), uri );

  // console.log( "New Request: \n" +
  //              "    URI : " + uri + "\n" +
  //              "    file: " + filename + "\n" );

  fs.exists( filename, function(exists) {
    if( !exists ) {
      response.writeHead( 404, {"Content-Type": "text/plain"} );
      response.write( "404 Not Found\n" );
      response.end();
      return;
    }
    if ( fs.statSync(filename).isDirectory() ) {
      filename += '/index.html';
    }
    fs.readFile( filename, "binary", function(err, file ) {
      if(err) {
        response.writeHead( 500, {"Content-Type": "text/plain"} );
        response.write( err + "\n" );
        response.end();
        return;
      }
      response.writeHead( 200, {"Content-Type": mime.lookup(filename)} );
      response.write( file, "binary" );
      response.end();
    });
  });
}).listen( parseInt(port,10) );

console.log( "Server running at port " + port + "\nPress CTRL + C to stop" );

 

Now edit 'index.html' and put these contents:

<html>
    <head>
        <script src="mqttws31.js"></script>
        <title> MQTT WSS Test Page </title>
    </head>

    <body>
        <div id="status">
            Connecting to broker...
        </div>
        <div id="messages">
        </div>
    </body>
    <script>
        // Create a client instance
        client = new Paho.MQTT.Client( '192.168.1.54', Number(9001), "clientId-"+ Math.random());
        // set callback handlers
        client.onConnectionLost = onConnectionLost;
        client.onMessageArrived = onMessageArrived;
        // connect the client
        client.connect({
            onSuccess:onConnect
        });
        // called when the client connects
        function onConnect() {
            // Once a connection has been made, make a subscription and send a message.
            var status = document.getElementById( "status" );
            console.log("onConnect");
            status.innerHTML = "Connected to broker";
            client.subscribe("/TestMessages");
            message = new Paho.MQTT.Message("Hello MQTT :)");
            message.destinationName = "/TestMessages";
            client.send(message);
        }
        // called when the client loses its connection
        function onConnectionLost(responseObject) {
            if (responseObject.errorCode !== 0) {
                var status = document.getElementById( "status" );
                console.log( "onConnectionLost: "+responseObject.errorMessage );
                status.innerHTML = "Connection Lost : "+responseObject.errorMessage;
            }
        }
        // called when a message arrives
        function onMessageArrived(message) {
            var status = document.getElementById("messages");
            console.log("New message: "+message.payloadString);
            var currentdate = new Date();
            status.innerHTML = status.innerHTML +
                               currentdate.getDate() + "-" +
                               (currentdate.getMonth()+1)  + "-" +
                               currentdate.getFullYear() + " "  +
                               currentdate.getHours() + ":"  +
                               currentdate.getMinutes() + ":" +
                               currentdate.getSeconds() +
                               ": " + message.payloadString + "<br/>";
        }
    </script>
</html>

Here, change the broker IP address ('192.168.1.54' in the Paho.MQTT.Client constructor) to your Pi's IP. Save both files.

 

Now start mosquitto with:

mosquitto -c /etc/mosquitto/mosquitto.conf

And you will see something similar to:

1466857348: mosquitto version 1.4 (build date 2016-06-25 17:26:05+0530) starting
1466857348: Config loaded from /etc/mosquitto/mosquitto.conf.
1466857348: Opening ipv4 listen socket on port 1883.
1466857348: Opening ipv6 listen socket on port 1883.
1466857348: Opening websockets listen socket on port 9001.

 

Also start the nodejs server:

nodejs server.js

 

Go to your web browser and open http://<Pi's IP>:8888. You will see that the client in your web browser connects to the broker running on the Pi and publishes a hello message.

Go to pi's terminal and type:

mosquitto_pub -t "/TestMessages" -m "This is from rpi"

 

You should be able to see this message in your web browser now.

 

If everything works fine, you have a successful installation of Mosquitto with websockets.

 

An alternative script, which connects to Mosquitto's public server, is attached.

 

Happy Hacking

- vish

 

<< Prev | Index | Next >>

MuZIEum Live 1600x1600 8.jpg

On June 23, as part of this PiIoT challenge project, I tried a new (and experimental) way of publishing project updates: planned live events shared on Twitter via Periscope. After a few troubles with the settings, the first three minutes of the event were lost; after that, things worked reasonably well, apart from somewhat degraded quality, which should be easy to fix. In the future, other live events covering the most important phases and moments of the project will be scheduled and published as events in this Element14 section.

I should thank jancumps and fvan for their support and helpful hints on Twitter. Below is the link to the recorded live event.

 

https://www.periscope.tv/w/1BRJjjRXkpdJw

 

The activity fork point

The PiIoT perfect reading place is currently based on a different approach than my previous projects: two parallel rails that will meet before the end of the job; on one side a dynamic, in-motion project design, and on the other the creation of the electronics hardware and software IoT modules.

 

MuZIEum Live 1600x1600 3.jpg

Dynamic architecture model

The project architecture involves the interaction of spaces and subjects, visually impaired or not. The design is a model, based on a series of non-invasive interventions, formalised together with the helpful support of the MuZIEum team. This is a very interesting phase, involving bidirectional knowledge transfer, learning and discussion, aiming to find together the best use of the IoT modular solutions I will provide, creating a smart place model that reacts to the subjects' interaction. This kind of empowered environment will follow different paths depending on the kind of subject. We mostly focus our attention on visually impaired and non-impaired subjects, but the same model can be applied, for example, to adults and children, and in general to different classes of subjects who for some reason approach the same environment in totally different ways.

 

MuZIEum Live 1600x1600 2.jpg

Modular hardware implementation

Based on the general guidelines of the project, and the primary goal of creating a smart environment able to describe itself to support visually impaired subjects, the IoT components (and others added later) will be structured as a series of IoT modules that are easy to assemble, install and replace. In accordance with the design model and the physical site, the IoT modular system will be set up and connected in a subnetwork, using the already available WiFi connection to access the Internet where needed.

 

MuZIEum Live 1600x1600 5.jpg

Thus we will alternate design blog posts, following the progress of the environment model design for the pilot PiIoT module installation, with tech blog posts dedicated to component features, testing, source code, etc.

 

The kit has arrived!

 

There are a lot of parts in this kit, so I want to take the time to write a little bit about every part, along with some thoughts and ideas on how to use them during the challenge.

 

Kit

 

The kit arrived last week via UPS. At first, the box didn't seem that big for the parts I was expecting. Once opened and laid out on the table though, all was accounted for!

IMG_1598.JPGIMG_1599.JPGFullSizeRender.jpg

I'd like to use this opportunity to thank element14 and the sponsors again for putting together such a great kit. Special thanks to element14Dave for taking care of the logistics side of things. Sending kits like this all over the world is not an easy task!

 

Let's go over every part now

 

Parts

 

Raspberry Pi 3 & B+

 

Not much to explain here, as this is what the challenge is all about: leveraging the power of the Raspberry Pi 3 for the creation of an IoT Command Center.

In my opinion, the coolest feature of the Pi 3 is the onboard wifi. And with the latest Jessie release automatically copying the wpa_supplicant file from the boot partition upon first boot, the Pi 3 can be set up headlessly, wirelessly.
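As a sketch of that headless setup (the SSID, passphrase and country code below are placeholders), the wpa_supplicant.conf dropped onto the boot partition could look like:

```text
country=BE
ctrl_interface=DIR=/var/run/wpa_supplicant GROUP=netdev
update_config=1

network={
    ssid="MyNetwork"
    psk="MyPassphrase"
}
```

On first boot, Raspbian moves this file into /etc/wpa_supplicant/ and the Pi joins the network without a keyboard or screen attached.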

 

For my project, I'll be using a combination of Pi 3 and Pi Zeros. The Pi B+ will be given to a friend looking to take his first steps with Raspberry Pi

 

Touch Screen

 

The biggest box in the kit is the one containing the touch screen. The official Raspberry Pi Touch Screen was released at the beginning of September last year.

 

IMG_1626.JPG

Here are some specs:

                  • 7" diagonal size
                  • 800x480 resolution
                  • 10-point capacitive touch
                  • 70° viewing angle

 

The display connects to a provided control board, which in turn is connected to the Raspberry Pi using the DSI (Display Serial Interface) connector. In the box are dupont wires to power the Pi via the GPIO pins, but I prefer to use a short, 90° angle USB cable to keep the GPIO free for HATs.

 

I plan to use the display to visualise data from OpenHAB and allow control via the touch functionality. Rather than using a stand, I'll be embedding the screen in a custom project enclosure.

 

Sense HAT

 

Fun Fact: Did you know that two Raspberry Pis with Sense HAT are currently in space on the ISS? If you didn't, you may want to have a look at Astro Pi!

 

Anyway, the Sense HAT is, as the name implies, a HAT packed with sensors. ("HAT" stands for "Hardware Attached on Top")

 

IMG_1614.JPG

The following sensors are available on the Sense HAT:

                  • temperature sensor
                  • humidity sensor
                  • gyroscope
                  • accelerometer
                  • magnetometer
                  • barometric pressure sensor

 

It also has an 8x8 RGB LED matrix, capable of visualising data, and a 5-button joystick.

 

I didn't plan on using it originally, but seeing the features, and the board up close has given me some ideas. The MagPi Sense HAT experiments book is a great starting point! 

 

Camera Module (Regular + NoIR)

IMG_1617.JPG

 

The camera module recently received an upgrade (and at the same time, so did our challenge kit) when version 2 was announced earlier this year. The camera's sensor has been upgraded from 5MP to 8MP and remains compatible with all Raspberry Pi models. Other improvements include better image quality, colour fidelity and low-light performance.

 

I plan to combine a camera with the new Pi Zero v1.3, using an accessory I recently purchased, the ZeroView. The camera's feed will be made available on the command center for remote monitoring purposes. I'll probably have it facing the garden or perhaps even inside the chicken coop.

 

Depending on the application, I will use either the regular or NoIR version. Or I could use both, we'll see.

 

PiFace Digital 2

 

The PiFace Digital 2 is a HAT meant to make interfacing with inputs/outputs easier by providing:

  • 2 relays (max. 20V/5A)
  • 4 switches
  • 8 LED indicators
  • 8 digital inputs
  • 8 open-collector outputs

 

There is a graphical emulator and simulator available, allowing you to test in software before actually using the hardware. The quick start guide covers hardware, software and some example applications.

 

The PiFace Digital 2 will not be used in my project, as I rely on other technologies to wirelessly control lights and sockets. Other input/output can be covered using the standard Python GPIO functions.

 

PiRack

 

The PiRack enables you to connect up to four HATs on the Pi's GPIO header.

 

This is a version for the 26-pin header, not the 40-pin header introduced with the Raspberry Pi B+. The PiRack is useful for prototyping, but is a little impractical when it comes to embedding multiple HATs in a compact solution. Rather than mounting boards vertically in a 1-by-4 configuration, my suggestion for an alternative version would be to put the HATs in the same horizontal plane, in a 2-by-2 configuration. And while we're at it, update it to the 40-pin header.

 

EnOceanPi & EnOcean Kits

 

IMG_1621.JPG

I've been using EnOcean sensors ever since the Forget Me Not Design Challenge. After almost two years, the sensors (contact and temperature) I had installed are still operational and have not required any maintenance. So when I found out about the new Switch Kit, I was very excited.

 

The Switch Design Kit is new, but I love the look and feel of the 3D printed clicker. The parts fit inside nicely and the switch itself has a really nice click. It seems to be printed on a powder based 3D printer, and I wonder if I'll be able to get similar results using my FDM 3D printer. Something to try out at some point during the challenge.

 

Unfortunately, there is a small hiccup. The sensor kit I received works at a frequency of 902MHz, which is meant for the US. As the EnOcean Pi I received works on the EU frequency, 868MHz, they are incompatible and unusable together. Hopefully element14Dave can assist with this issue, and hopefully nobody else ended up with an incompatible kit.

 

Accessories

 

Finally, some accessories were provided, such as a 16GB microSD card, a WiFi dongle and the official 5V/2A microUSB power supply. You can never have too many of those!

 



Just an image of the components I received in the mail today:

 

IMG_1054.jpgIMG_1055.jpgIMG_1056.jpg

 

The Sense HAT, Wi-Pi and PiFace Digital are missing, but these are no showstoppers for the project.

Stay tuned!

After receiving my package on Friday evening and unboxing all the Raspberry Pi goodies in the element14 brown box, and after being overcome by happiness looking at the awesome Pi 7" display, I knew the first thing I had to do was 3D print a holder/stand for the screen.

 

So, here you go! Download the three STL files attached and 3D print yourself a stand for your Pi 7" screen. The PlacardPiIOT.stl file is optional; it connects the two legs.

Connected at the back of the display is a Raspberry Pi B+. To attach the legs of the stand to the Pi display you will need 4-40x3/8 screws, which you can get at your local hardware store.

 

{gallery:width=900,height=400,autoplay=true} 3D Printed Stand for Pi Display

PiDisplayFront.jpg

Front: 3D printed stand

4.jpg

Top: Power and audio jack connected from the top

3Dfiles.JPG

Legs after 3D Printing: to attach the legs to the stand you will need 4-40x3/8 screws

8.jpg

Back: Display connected to a Raspberry Pi B+

 

All the parts above were printed in 1.75 mm PLA, and here are some suggested slicer settings:

  • Layer height for the legs: 0.3 mm
  • Infill: 20%
  • Temperature: 205 °C (I am using Hatchbox filament; check your filament manufacturer's site for your filament)
  • Printing the 3 STL files took me about 50 minutes
  • To paint the lettering, you can use Uni Paint markers

 

 

Now, once you have loaded the SD card with the latest version of Raspbian Jessie, you will observe that the OS/desktop loads upside down with the setup shown in the pictures above. You could simply turn the display around, but I found that having the power and audio cables enter from the top makes things much easier, as shown in the pictures.

 

To solve the upside-down issue you need to add a line to the config.txt file. With a keyboard connected, open the terminal and run the following command

 

sudo nano /boot/config.txt

 

and then add the following line to the file

lcd_rotate=2

 

lcdRotate.jpg

 

At this point, I would like to take a moment to thank element14 for sending out the awesome hardware for PiIoT challenge ...

In my last post I explained the basics of Chef, and in the last week I worked on defining the configuration of each node. I selected several cookbooks from the Supermarket and wrote some myself. Using a series of recipes I defined the software and configuration of two of the nodes of Thuis in the Thuis Cookbook. In this post I’ll show you my choices and give some code samples to let you set up your own Chef config.

 

Let’s start with some of the cookbooks I’m using from the Supermarket:

  • apt: takes care of keeping the apt-get cache up to date
  • firewall: installs and configures UFW
  • hostnames: automatically configures the hostname of each node based on a pattern
  • mosquitto: installs and configures Mosquitto (MQTT broker)
  • tar: downloads and extracts a tar file
  • sshd: installs and configures the SSH daemon
  • sudo: configures which users can sudo
  • timezone_lwrp: configures the time zone
  • users: adds default users with SSH keys
  • wildfly: installs and configures WildFly and Java

 

 

Basics: default recipe

Executed on each device, the default recipe provides the basic setup of a node. All it does is install a few packages and include other recipes:

package ['rpi-update', 'nano']

include_recipe 'apt'
include_recipe 'hostnames'
include_recipe 'timezone_lwrp'
include_recipe 'mosquitto::client'
include_recipe 'thuis::firewall'
include_recipe 'thuis::sshd'
include_recipe 'thuis::users'

 

Of course the recipes need some configuration; this is provided through the attributes file:

# System
default['set_fqdn'] = '*.internal.thuisapp.com'
default['tz'] = 'Europe/Amsterdam'
default['authorization']['sudo']['passwordless'] = true

# Firewall
default['firewall']['allow_ssh'] = true
default['thuis']['open_ports'] = [80, 8080, 9990] # TODO: finetune

 

The firewall, sshd and users recipes are part of the thuis cookbook; the first two are fairly straightforward:

# Include the default firewall recipe
include_recipe 'firewall::default'

# Allow incoming traffic on the configured ports
ports = node['thuis']['open_ports']
firewall_rule "open ports #{ports}" do
  port ports
end

 

# Don't allow logins through SSH using a password
openssh_server node['sshd']['config_file'] do
  PasswordAuthentication 'no'
end

 

The users recipe is based on this nice blogpost: it creates my default user (robin), adds my SSH key and allows the user to sudo without a password.

 

Specifics per node

For both nodes (more will follow later) I've created a specific recipe. These recipes include the default recipe plus take care of the specific needs of that node.

 

thuis-server-core

As mentioned in my post about the architecture of Thuis, the core node will run WildFly, an MQTT broker and Z-Way:

include_recipe 'thuis::default'

include_recipe 'mosquitto'
include_recipe 'thuis::wildfly'
include_recipe 'z-way'

 

Mosquitto for now uses the default configuration of the cookbook; I'll finetune this later. I did have to update the cookbook a bit, as it didn't support Debian/Raspbian Jessie yet; for that I submitted a PR, which has been accepted.

 

WildFly required a bit more effort, as the standalone.xml configuration file wasn't updated for the latest version and the underlying java cookbook doesn't support the ARM packages needed for the Raspberry Pi. The latter was solvable with some custom configuration which selects a different download based on the architecture:

# Java
default['java']['arch'] = kernel['machine'] =~ /x86_64/ ? 'x86_64' : kernel['machine'] =~ /armv/ ? 'armhf' : 'i586'

java_home_arch = 'i386'
if (node['kernel']['machine'] == 'x86_64')
  java_home_arch = 'amd64'
end
if (node['kernel']['machine'] =~ /armv/)
  java_home_arch = 'armhf'
end
force_default['java']['java_home'] = "/usr/lib/jvm/java-#{node['java']['jdk_version']}-#{node['java']['install_flavor']}-#{java_home_arch}"

default['java']['jdk']['8']['armhf']['url'] = 'http://download.oracle.com/otn-pub/java/jdk/8u91-b14/jdk-8u91-linux-arm32-vfp-hflt.tar.gz'
default['java']['jdk']['8']['armhf']['checksum'] = '79dda1dec6ccd7130b5204e75d1a8300e5b02c18f70888697f51764a777e5339'

default['java']['jdk']['8']['x86_64']['url'] = 'http://download.oracle.com/otn-pub/java/jdk/8u91-b14/jdk-8u91-linux-x64.tar.gz'
default['java']['jdk']['8']['x86_64']['checksum'] = '6f9b516addfc22907787896517e400a62f35e0de4a7b4d864b26b61dbe1b7552'
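The Ruby ternary chain at the top is dense; purely for illustration (the cookbook itself uses the Ruby above), the same selection logic written out in Python:

```python
# Illustration only: mapping the kernel machine string to the Java package
# architecture and the JAVA_HOME directory suffix, as the Ruby attributes do.

def java_arch(machine):
    """Package architecture for the JDK download."""
    if machine == "x86_64":
        return "x86_64"
    if "armv" in machine:  # matches the Ruby =~ /armv/ substring test
        return "armhf"
    return "i586"

def java_home_arch(machine):
    """Directory suffix used in JAVA_HOME for the same machine string."""
    if machine == "x86_64":
        return "amd64"
    if "armv" in machine:
        return "armhf"
    return "i386"
```

On a Raspberry Pi the kernel reports something like armv7l, so both functions resolve to armhf.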

 

Next is overriding the WildFly standalone.xml configuration:

include_recipe 'wildfly::default'

resources("template[#{::File.join(node['wildfly']['base'], 'standalone', 'configuration', node['wildfly']['sa']['conf'])}]").cookbook 'thuis'

 

The template file is a copy of the original, updated using a diff between the original and the version from WildFly 10.0.0.Final. The needed configuration in attributes.rb is:

# WildFly
default['wildfly']['version'] = '10.0.0.Final'
default['wildfly']['url'] = 'http://download.jboss.org/wildfly/10.0.0.Final/wildfly-10.0.0.Final.tar.gz'
default['wildfly']['checksum'] = 'e00c4e4852add7ac09693e7600c91be40fa5f2791d0b232e768c00b2cb20a84b'
default['wildfly']['enforce_config'] = true
default['wildfly']['mysql']['enabled'] = false
default['wildfly']['postgresql']['enabled'] = false
default['wildfly']['jpda']['enabled'] = false
default['wildfly']['java_opts']['other'] = ['-client']

 

It changes the version to use, disables a few modules and, importantly on the Raspberry Pi, uses -client instead of -server in the Java options.

 

Chef Z-Way cookbook directory structure

The most difficult to get working was Z-Way. There is no cookbook available yet, so I had to build one from scratch. I could have taken a relatively easy way out by simply letting the cookbook execute Z-Way's install script, but as I want to learn Chef I went for the hard road. On the right you can see the structure of the cookbook. The full cookbook is available on GitHub at Edubits/chef-z-way.

 

The cookbook uses a few recipes to install Z-Way on the device, and more importantly to safely upgrade it while retaining the configuration of all devices. It also installs the required services and enables them to start automatically:

service 'z-way-server' do
  service_name 'z-way-server'
  supports restart: true
  action [:enable, :start]
end

 

A few things are kept as in the original script in the template file install.sh.erb and executed as part of the install recipe. As I'm currently testing on an old Raspberry Pi with the RazBerry hardware, I could not verify the full installation yet.

 

thuis-server-tv

The TV node will mostly take care of connecting to the home cinema system using CEC. A Java EE application is used for this, so WildFly is needed here as well. Besides WildFly, we need libcec.

include_recipe 'thuis::default'

include_recipe 'thuis::wildfly'
include_recipe 'thuis::libcec'

 

Jessie only ships libcec version 2 and we need version 3, so we'll grab the package from the Stretch repository:

# Add stretch apt repository
apt_repository 'stretch' do
  uri          'http://archive.raspbian.org/raspbian/'
  distribution 'stretch'
  components   ['main']
end

package ['libcec3', 'cec-utils'] do
  default_release 'stretch'
end

 

Testing

Chef has a very nice way of testing cookbooks using Kitchen and Vagrant: it spins up a virtual machine from an image and runs the recipes on it. I used this for the general testing; however, quite a few of my changes and configuration are specific to the Raspberry Pi, which means I had to test those on an actual device. For this I used my good old Raspberry Pi 1B. This required some patience, as one chef-client run without any changes takes about 5 minutes on this device. As there are some architectural differences between the 1B and the 3B, I expect some small changes will be needed when deploying the cookbooks to the new device.
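
For reference, a minimal .kitchen.yml for a setup like this could look like the sketch below. The platform name, box and run list here are illustrative assumptions, not a copy of my actual file:

```yaml
driver:
  name: vagrant

provisioner:
  name: chef_zero

platforms:
  # Jessie-based box as the closest x86 stand-in for Raspbian
  - name: debian-8

suites:
  - name: default
    run_list:
      - recipe[thuis::thuis-server-core]
    attributes: {}
```

Running `kitchen converge` then applies the run list to the VM, which catches most recipe errors before they ever reach a (slow) real Pi.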

Raspberry Pi 1B

 

Bootstrap the nodes

As soon as the kit arrives (somewhere in the coming week) I will install them using raspbian-ua-netinst, assign static IPs in my router, and then I can bootstrap the Raspberry Pis with just one command each:

knife bootstrap 10.0.0.201 -t raspbian-jessie-chef.erb --ssh-user root --ssh-password 'raspbian' --node-name 'thuis-server-core' --run-list 'recipe[thuis::thuis-server-core]'
knife bootstrap 10.0.0.202 -t raspbian-jessie-chef.erb --ssh-user root --ssh-password 'raspbian' --node-name 'thuis-server-tv' --run-list 'recipe[thuis::thuis-server-tv]'

Now the Pis are fully installed and ready to be used, and it's time to actually build & deploy automation software and connect some hardware! Let the fun begin!

 

emonPi is an open-hardware energy monitoring solution based on the Raspberry Pi. It was originally launched as a Kickstarter around April last year and raised almost £25,000 against a £15,000 goal. It comes in different flavours, from board only to a fully assembled device with a custom aluminium enclosure. I have the board only, and will be using it in combination with a Raspberry Pi Zero rather than the suggested Raspberry Pi 3. The less energy the monitor consumes, the better, right?

 

If you want to know more about the emonPi solution, have a look at their Kickstarter video:

 

 

Let's set it up and start measuring the power consumption!

 

Hardware

 

For initial testing, I set up the bare board on top of the electricity meters, allowing measurements where the wires enter the fuse box. I'll be moving house in July, so I'll put together a nicer setup then.

 

WARNING!

Before investigating where, how and actually connecting the clamp, turn OFF the power! Do not work while the conductors are LIVE! If you don't know what you're doing, ASK someone who does!

 

To retain double insulation, make sure anything leaving the fuse box is sleeved!

IMG_1592.JPG IMG_1595.JPG IMG_1597.JPG

 

The picture on the left shows the emonPi board, controlled by a Raspberry Pi Zero mounted underneath it. The whole thing is powered by a mini USB cable providing 5V. Remote connectivity is achieved using a wifi dongle connected to the Pi Zero's USB OTG port.

 

The second picture, in the middle, shows the clamp used to measure the power consumption: a so-called "CT" (current transformer) sensor. The sensor acts as an inductor and responds to the magnetic field of the conductor it is clamped around, making it possible to calculate how much current is passing through that conductor.
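
To illustrate the principle (this is my own sketch, not emonPi's actual firmware), the current is derived by sampling the CT's output over a mains cycle and computing the RMS value; apparent power is then the RMS current times the mains voltage. With synthetic samples:

```python
import math

def rms(samples):
    # Root-mean-square of a list of instantaneous readings
    return math.sqrt(sum(s * s for s in samples) / len(samples))

# One full cycle of a synthetic sine: 10 A peak through the clamped conductor
N = 1000
current = [10.0 * math.sin(2 * math.pi * i / N) for i in range(N)]

i_rms = rms(current)            # peak / sqrt(2) for a pure sine
apparent_power = 230.0 * i_rms  # assuming 230 V mains
print(round(i_rms, 2), round(apparent_power))  # → 7.07 1626
```

In reality emonPi samples the burden-resistor voltage with an ADC and scales by the CT ratio, but the RMS arithmetic is the same.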

 

On the third picture, on the right, the protection panel of the fuse box has been put back, leaving an exit for the sensor's connection to the emonPi. In the final setup, the emonPi will be mounted on the wall next to the fuse box (in the new house).

 

Software

 

The software side of things is rather straightforward, as a Raspbian Jessie Lite image with all the emonPi software included is available for download.

 

emonPi_System_Diagram.png

 

The image offers a lot of software, preconfigured and activated. Perhaps a little bit too much. That's why I disabled a number of software components, keeping the minimum needed to still fetch the actual power measurements while not unnecessarily exposing unused interfaces.

 

By default, the filesystem is set to read-only. This is explained by a message when connecting via SSH:

 

The file system is in Read Only (RO) mode. If you need to make changes,
use the command 'rpi-rw' to put the file system in Read Write (RW) mode.
Use 'rpi-ro' to return to RO mode. The /home/pi/data directory is always in RW mode.

 

To make changes, such as disabling certain services, the system needs to be set to "read-write".

 

pi@emonpi:~ $ rpi-rw

Filesystem is unlocked - Write access
type ' rpi-ro ' to lock

 

The different, undesired services can be disabled. What it boils down to is that I'd like to keep the MQTT broker, to be able to subscribe to the measurement data, and emonHub, which does the actual decoding of the data. The rest is useful when using the emonPi as a standalone device, but less so in my case, where all data is gathered at a central point.

 

pi@emonpi:~ $ sudo systemctl disable openhab.service
pi@emonpi:~ $ sudo systemctl disable nodered.service
pi@emonpi:~ $ sudo systemctl disable emonPiLCD.service
pi@emonpi:~ $ sudo systemctl disable apache2.service

 

Once finished, set the filesystem back to "read-only".

 

pi@emonpi:~ $ rpi-ro

Filesystem is locked - Read Only access
type ' rpi-rw ' to unlock

 

 

OpenHAB

 

The measurement data from the emonPi is exposed via MQTT. By default, the following settings apply:

 

  • URL: tcp://<emonpi>:1883
  • User: emonpi
  • Password: emonpimqtt2016
  • QoS: 2
  • Retain: false

 

I'd recommend you change the default credentials to more secure ones, using:

 

pi@emonpi:~ $ mosquitto_passwd

mosquitto_passwd is a tool for managing password files for mosquitto.

Usage: mosquitto_passwd [-c | -D] passwordfile username
       mosquitto_passwd -b passwordfile username password
       mosquitto_passwd -U passwordfile
 -b : run in batch mode to allow passing passwords on the command line.
 -c : create a new password file. This will overwrite existing files.
 -D : delete the username rather than adding/updating its password.
 -U : update a plain text password file to use hashed passwords.

See http://mosquitto.org/ for more information.

 

 

The MQTT setup for my project looks like this:

Screen Shot 2016-06-18 at 20.54.43.png

 

By configuring the MQTT settings from earlier in OpenHAB and subscribing to the correct topics, the measurements are available on the central control unit.

 

#
# Define your MQTT broker connections here for use in the MQTT Binding or MQTT
# Persistence bundles. Replace <broker> with an ID you choose.
#

# URL to the MQTT broker, e.g. tcp://localhost:1883 or ssl://localhost:8883
emonpi.url=tcp://192.168.0.123:1883

# Optional. Client id (max 23 chars) to use when connecting to the broker.
# If not provided a default one is generated.
#<broker>.clientId=<clientId>

# Optional. User id to authenticate with the broker.
emonpi.user=emonpi

# Optional. Password to authenticate with the broker.
emonpi.pwd=emonpimqtt2016

# Optional. Set the quality of service level for sending messages to this broker.
# Possible values are 0 (Deliver at most once),1 (Deliver at least once) or 2
# (Deliver exactly once). Defaults to 0.
emonpi.qos=2

# Optional. True or false. Defines if the broker should retain the messages sent to
# it. Defaults to false.
emonpi.retain=false

 

The items are defined to use OpenHAB's MQTT binding, which connects to the configured broker and receives the data from the subscribed topics.

 

Number  emonpi_ct1      "Power 1 [%d W]" <energy>    { mqtt="<[emonpi:emon/emonpi/power1:state:default]" }
Number  emonpi_ct2      "Power 2 [%d W]" <energy>    { mqtt="<[emonpi:emon/emonpi/power2:state:default]" }

 

The result is power data from up to two current clamps available on the central interface.

EmonPi-OH2.png

 

Once persistence is added and the data is graphed over time, patterns in power consumption will emerge.

 

This data will be useful for different things, such as:

  • Knowing how much power is consumed at what time. Over time, consumption could be improved!
  • Triggering alarms/notifications should consumption exceed a threshold while away (on holiday).

 



Muzieum-1600x1600 6.jpg

MuZIEum is a strange compound word - also in Dutch - a wordplay on museum and see ("zie" in Dutch):

MuZIEum = ZIE & Museum
- museum open your eyes -

It is the place where the IoT reading place will be tested and installed, thanks to the great MuZIEum team and their open-minded approach to innovation.

 

A few words about the place and its initiative

(For more details please follow the official link: MuZIEum)

It is a museum and an interactive place but also a pretty nice event place.

The images below give an idea of the impression I had when I met the project manager, Carlijn Nijhof. MuZIEum offers visitors a comfortable area where they can experience the different reality perceived by visually impaired people.

For a long time, our cultural heritage forced us to consider visually impaired people through a barrier, a sort of fence separating "us" and "them" into two different groups living in different worlds. MuZIEum provides an interactive experience to sighted visitors, helping them discover how thin and meaningless this barrier is, as well as experience how extremely difficult it is to move and interact in a world without light.

 

{gallery} MuZIEum in Nijmegen, NL

Muzieum-1600x1600 1.jpg

Muzieum-1600x1600 2.jpg

Muzieum-1600x1600 4.jpg

Muzieum-1600x1600 7.jpg

Muzieum-1600x1600 9.jpg

Muzieum-1600x1600 12.jpg

Muzieum-1600x1600 13.jpg

Muzieum-1600x1600 17.jpg

Muzieum-1600x1600 16.jpg

Muzieum-1600x1600 8.jpg

Muzieum-1600x1600 11.jpg

 

The challenge in the challenge

Today, visually impaired people can use a variety of tools that make their life easier; gone is the time of the Braille typewriter: new technologies offer text-to-speech devices, talkback features, smartphone apps able to identify objects pointed at by a camera, walking canes equipped with sensors and ultrasound obstacle detection, voice-controlled devices, and many other tools that facilitate interaction with reality. So how can IoT technologies help empower these already existing disability supports?

 

The answer is what this project aims to find, thanks to the cooperation and wide knowledge of the MuZIEum team I can count on. Visually impaired people already make their daily life easier thanks to technological devices - most of them portable. In a nutshell, they manage tools for better interaction with the world around them.

The challenge is to create a series of IoT nodes performing two main tasks:

  • Enable autonomous environment adaptability: the environment reacts by recognising the visually impaired subject, so they don't need any special tools. The living context improves and specialises its ability to support interaction with different kinds of users.
  • Illustrate and teach non-visually impaired people: MuZIEum visitors can observe and experience for themselves how the environment adapts to reduce the gap through low-impact interaction.

The next step is to meet the MuZIEum team in order to design the on-site project architecture.

 

General guidelines

Some main guidelines have already been defined to design the most important aspects of the project:

  • Wireless connected nodes: the IoT components will be easily adaptable to different environments without redefining each component's configuration
  • Modular independent blocks: every component should slot into the setup without changing the already created design and configuration
  • Self-identifying modules: every IoT module should support one-button activation, announcing itself to the network of already existing modules
  • Specialised modules: every IoT component has its own specialisation, associated with a distinct form factor and colour that is easy to identify for both visually impaired and sighted users

 

One of the primary applicative goals of the project is to simplify the sharing of experiences between visually impaired and sighted users in close relationships, reducing the perceptive gap.

 

Introducing the main components

Here are the main components and their planned usage in building the nodes. The final building blocks will be defined later in a more structured schematic design. Note that multiple components of the same type can be used in different nodes.

  • Daylight POV camera - Face recognition, gesture recognition, imaging
  • Infrared camera - No-light imaging, image processing, special images acquisition
  • Audio output - Text speech, sound output, audio alerts
  • Audio input - Voice sensing, voice recognition, audio message acquisition
  • Ultrasound sensors - Presence and obstacle detection, components self-position, movement detection
  • NFC / RFID - Recognise tagged elements, identify and exchange data packets with NFC equipped mobile devices
  • IoT network gateway - Controller module managing the IoT modules network
  • Multipoint Touch Screen - Synchronised visually impaired and non-impaired people interaction and experience sharing

Movement detection and RF comms are the last two components of the minimum setup of a node in DomPi. The RF comms allow each of the remote nodes (Arduino based) to send and receive data from the Command Center (Raspberry Pi 3 based). The movement detection enables the project to build three additional features: the alarm, presence-at-home detection, and automatic lights. I will start with the easier component, movement detection, but first, let me share the features I will be developing in this post:

Project Status

Movement detection - PIR sensor

This component shall help the Command Center (CC) deliver the three features mentioned above, meaning that the remote nodes will just update the CC on the status - is motion detected or not - and it will be the CC who decides what action to trigger. The "intelligence" will reside in the CC. Let's see the details of the hardware and software pieces.

 

Hardware and wiring

The sensor I will leverage for detecting movement at home, in the garden or in the garage is the HC-SR501, a passive infrared sensor. It works from a +5V source with +3.3V TTL output. I have connected its Data pin (see picture below) to pin 2 of the Arduino Nano. The sensor allows some configuration via hardware:

  • sensing distance: with the potentiometer T2 you can adjust the distance from 3 to 7 meters (10 to 23 ft). I will set up all of the PIRs at the maximum distance,
  • trigger approach: you can select repetitive triggering (the output remains high while it detects presence in its range) or non-repetitive (after some seconds the output goes low and scanning starts again). This is selected via the jumpers L (non-repetitive) and H (repetitive). I will set all of the PIRs to the non-repetitive position. There is no special reason for this, but the main usage of the PIRs at the beginning will be for the alarm, and with the non-repetitive mode it will be easier to avoid false positives - in the end I doubt that a burglar would stand still without moving for minutes and minutes. If I do get a false positive - e.g. a bad initial reading of the PIR - the output will go down after a few seconds, allowing the CC to interpret it as a false alarm.
  • time delay adjust: you can adjust the seconds it waits before forcing a low output, between 5s and 300s. I'm adjusting it almost to the minimum, around a 5-10s delay

 

PIR_wiring PIR_pins PIR_pic

Pictures sources: PIR1, PIR2

 

An important note is that this sensor has a broad sensing angle, 110º, making it a good fit for DomPi where I want to cover rooms and corridors, but maybe not that suitable if you intend to cover a narrower space. An interesting note on the PIR is that it is a passive sensor "that measures infrared (IR) light radiating from objects in its field of view. (...) the temperature (...) in the sensor's field of view will rise from room temperature to body temperature. The sensor converts the resulting change in the incoming infrared radiation into a change in the output voltage, and this triggers the detection", source Wikipedia.

 

Software

There are two main alternatives to detect movement: with interrupts and by periodically polling the sensor. I did start with the interrupts, and attached an interrupt to be called each time there was a change in the PIR status (from nothing detected to something detected and the other way round). Below is part of the code that would allow the first approach. With the Arduino Nano you can attach interrupts to the pins 2 and 3 - in my case the PIR is in the pin 2.

 

#define PIR_PIN_LIVING  2
volatile int PIRstatus = 0;  // volatile: the variable is written inside the interrupt handler
...
pinMode(PIR_PIN_LIVING, INPUT);
attachInterrupt(digitalPinToInterrupt(PIR_PIN_LIVING), processPIRchange, CHANGE);
...

void processPIRchange() {
  PIRstatus = digitalRead(PIR_PIN_LIVING);
}

 

After several tests and a couple of hours invested, I realized that the interrupts were not working properly. The PIR did detect me correctly, but the rest of the code misbehaved. Right now I have several components in the living room node that I use for testing: the PIR, the RF light control, the environment measurement, the IR receiver (see posts 2 and 3) and the RF 2.4GHz module. My feeling is that some other library is using the interrupts and not getting along with my PIR. The responsiveness of the node went down dramatically, making it unfit for purpose... So I decided to avoid interrupts and go with the second best option: polling the sensor periodically. For DomPi, in terms of usability, it won't change anything; the main loop is fast enough to read the PIR quickly and detect any movement in time.
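
The polling approach boils down to reading the pin on every loop pass and reporting only the transitions. The logic is language-agnostic; here is a small Python sketch of it (on the Arduino, the reading would simply be digitalRead(PIR_PIN_LIVING) inside loop()):

```python
def detect_transitions(readings):
    """Turn a stream of raw PIR readings (0/1) into a list of
    (index, event) transitions - what a polling loop would report."""
    events = []
    last = 0
    for i, value in enumerate(readings):
        if value != last:
            events.append((i, "motion" if value else "clear"))
            last = value
    return events

print(detect_transitions([0, 0, 1, 1, 1, 0, 0]))
# → [(2, 'motion'), (5, 'clear')]
```

Keeping only the transitions means the node can radio a status update to the CC just when something changes, instead of on every loop.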

Let me share with you the complete code in the next post, hopefully some minor issues will be sorted out!

 

RF 2.4GHz comms - the NRF24L01

This is a key component of DomPi. Since there will be five remote nodes (living room, two bedrooms, garden and garage), the best solution is to connect them to the Command Center wirelessly - I don't see myself drilling holes all the way to the garage in my building of flats... I could use a Wifi dongle for each node, but that solution would be neither light nor cheap. Potentially, an RF 433MHz transmitter and receiver per node could do the trick. I finally opted for the NRF24L01: it is a transceiver (it transmits and receives with the same circuitry) and there are very nice libraries enabling a sort of RF network.

 

Hardware and wiring

The first thing to note is that there are some known problems with powering this chip directly from Arduinos such as the Nano or Mega that I intend to use in DomPi. Since their power output is limited to 50mA, they may not deliver enough current for the NRF24L01. There are a couple of workarounds: use an independent power supply, place two capacitors (10uF and 0.1uF) between its Vcc and Gnd, or insert a base module between the Arduino and the NRF24L01 to power it. To avoid further delays in the project, I will start by using the base module (see pic below). The good news is that the NRF24L01 seems to work fine with the Raspberry Pi.

 

NRF24_a

NRF24 Module

NRF24_b

 

Base Module for the NRF24

NRF24_c

 

 

NRF24 connected to the base module

Pictures: NRF24_1, Base_module, NRF24+Base_module

The wiring is not very complex but some care is needed. There are 8 pins on the base; one of them, IRQ, is not required for the DomPi project. Of the other 7, Gnd and Vcc should go to +5V - note that if you don't use the base module you have to connect the NRF24 to +3.3V instead, or it can be damaged. The 5 remaining pins provide SPI communication with the Arduino; the downside is that 3 of them are fixed and must go to specific Arduino pins. These are MISO, MOSI and SCK, which on the Nano connect to pins 12, 11 and 13 respectively. I foresee some difficulties with the garage node, as the TFT base connects to these same pins and it will be difficult to physically access them. The last 2 pins, CE and CSN, can be chosen in software when creating the radio object; in my case I left them at 7 and 8 respectively. I found this page very helpful while setting up the module.

 

Software

To use the NRF24L01, besides the Arduino SPI.h library, I am leveraging the library written by TMRh20. The great thing about this library, besides the support and forums available, is that I can use it on the Arduinos as well as on the Raspberry Pi, so it is a great fit for DomPi! As usual, to use the C++ library in Arduino you just need to import the .zip file via the Arduino IDE (menu Sketch -> Include Library -> Add .ZIP Library); alternatively, you can paste the uncompressed folder into the Arduino/libraries folder and it will be there the next time you start the IDE.

 

Below I have included the part of the Arduino code related to this component. It follows the examples included in the library: I use two objects, RF24 to set up the radio functions, and RF24Network which enables a network on top of the radio object. There are two interesting points in the code: you need to set the id of the parent_node - the node this one will talk to - and the id of this node. For DomPi I have reserved the following ids:

  • 0 for the Command Center, RPI 3
  • 1 for the kids´ bedroom
  • 2 for the parents´ bedroom
  • 3 for the living room
  • 4 for the garden
  • 5 for the garage

The second point is lines 13 to 27. They define the message structures sent to the master and received from it. Since these are the remote nodes, they send the temperature, humidity, luminosity, motion status and one more char for future expansion. In the other direction, the remote nodes receive from the CC a command - determining what needs to be done, like "turn on light" - and an info char with additional information, like the number of the light to execute the command on. On each loop we update the network and, if there is a packet from the Command Center, call receive_data() to process it.

 

#include <RF24Network.h>
#include <RF24.h>
#include <SPI.h>

// Radio with CE & CSN connected to pins 7 & 8
RF24 radio(7, 8);
RF24Network network(radio);

// Constants that identify this node and the node to send data to
const uint16_t this_node = 3;
const uint16_t parent_node = 0;

struct message_1 { // Structure of our message to send
  int16_t temperature;  //Temperature is sent as int16: I multiply by 100 the float value of the sensor and send it
                        //this way I avoid transmitting floats over RF
  unsigned char humidity;
  unsigned char light;
  unsigned char motion;
  unsigned char dooropen;
};
message_1 message_tx;

struct message_action { // Structure of our message to receive
  unsigned char cmd;
  unsigned char info;
};
message_action message_rx;

RF24NetworkHeader header(parent_node); // The network header initialized for this node

void setup() {
  // Initialize all radio related modules
  SPI.begin();
  radio.begin();
  delay(5);
  radio.setPALevel(RF24_PA_MAX);  //This can lead to issues as per https://arduino-info.wikispaces.com/Nrf24L01-2.4GHz-HowTo
                                  //use this radio.setPALevel(RF24_PA_LOW); if there are issues
  delay(5);
  radio.setChannel(108);          //Set channel over the WIFI channels
  delay(5);
  radio.setDataRate(RF24_250KBPS); //Decrease speed and improve range. Other values: RF24_1MBPS y RF24_2MBPS
  delay(5);
  network.begin(90, this_node);
}

void loop() {
  // Update network data
  network.update();

  //Receive RF Data 
  while (network.available()) {
    receive_data();
  }
  //Additional code for IR, sensor measurement, etc
}
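
On the Raspberry Pi side, the same 6-byte payload has to be unpacked field by field. A Python sketch of that decoding (my own illustration; it assumes the struct is packed with no padding, which holds on the Nano's 8-bit AVR, and little-endian byte order):

```python
import struct

# Mirrors struct message_1 on the Arduino: int16 temperature (x100),
# then humidity, light, motion and dooropen as unsigned chars.
MESSAGE_1 = struct.Struct('<hBBBB')  # 6 bytes, no padding assumed

def decode_message(payload):
    temp_x100, humidity, light, motion, dooropen = MESSAGE_1.unpack(payload)
    return {
        'temperature': temp_x100 / 100.0,  # undo the x100 float trick
        'humidity': humidity,
        'light': light,
        'motion': motion,
        'dooropen': dooropen,
    }

# A reading of 21.57 C, 40% humidity, light 180, motion on, door closed
payload = MESSAGE_1.pack(2157, 40, 180, 1, 0)
print(decode_message(payload)['temperature'])  # → 21.57
```

The x100 int16 trick keeps floats off the radio link while still giving two decimals of temperature resolution, with a range of ±327.67 degrees.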

 

There are also three important notes.

  • Line 39 sets the wireless channel to 108. This module operates at 2.4GHz, the same band as IEEE 802.11, the standard home wifi (besides the 5GHz band). By selecting channel 108 we should be outside the wifi band and hence see less interference.
  • Line 36 sets the transmit power to the maximum the NRF24L01 can provide. Although this looks like the right thing to do, due to the power issues mentioned before the standard recommendation is to set it to the minimum power. In my case, with the base module, it seems to be fine at max power.
  • Line 41 sets the transmission speed to the lowest allowed. The first reason is that we are transmitting just a few bits, below 100 including headers, and the project does not require microsecond responses. The second reason is the Shannon theorem. There is a lot of maths and communication theory behind it, but the summary is: the higher the speed, the "cleaner" the environment has to be (signal-to-noise ratio, SNR). In general, the further apart two nodes are, the "dirtier" the environment (lower SNR) and the lower the maximum achievable speed. If we force a speed above that maximum, we lose information and the nodes won't communicate with each other. By selecting the minimum speed we can reach longer distances or cope with dirtier environments. Since I want to communicate with the garage, where the SNR will be quite low, it makes sense to decrease the speed. PS: this is also why it takes so long to transmit data and pictures from the spacecraft that visit the planets (like Pluto recently): the SNR is so low that, per the Shannon theorem, the speed needs to be really low - sorry for the digression
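
The channel arithmetic behind that choice is simple: the nRF24L01's channels are 1 MHz apart starting at 2400 MHz, while 2.4GHz WiFi occupies roughly 2412-2484 MHz. A quick check (my own illustration):

```python
def nrf24_channel_freq_mhz(channel):
    # nRF24L01 channels are spaced 1 MHz apart starting at 2400 MHz
    return 2400 + channel

WIFI_BAND = (2412, 2484)  # approx. span of 2.4 GHz WiFi channels 1-14

freq = nrf24_channel_freq_mhz(108)
print(freq)                # → 2508
print(freq > WIFI_BAND[1]) # → True: channel 108 sits above the WiFi band
```

Both nodes of a link must of course be configured with the same channel, which radio.setChannel(108) takes care of on each Arduino.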

 

All in all, I am testing the communications with a second Arduino and, after some fine-tuning, it is working quite well. I'm curious to check whether I can communicate with the garage!

 

Hope to be able to share the complete code of the living room in the next post!

 

Nodes' Dashboard

After this week, most of the basic components of the living room node are finished, and several of them will be easily replicable in all the remote nodes, so I hope to see more green coming in shortly!

Nodes Dashboard

Any comments, suggestions or ideas are more than welcome!

Thanks for reading

Hello

 

In this post I'll show you some more details on the wireless nodes' schematics and operation, as well as the code I have written so far for the nodes and the Raspberry Pi web server.

 

Wireless ESP8266 nodes

 

At this moment I have two wireless nodes based on the ESP8266-201 and built on perf board, which read the ambient temperature and send the readings to a server running on a Raspberry Pi 3.

From the previous hardware build, I changed the Pololu adjustable power regulator to an LM3940 3.3V regulator. This regulator accepts 5V as input and provides a stabilised 3.3V/1A output. This setup allows me to power the nodes with the 5V required by some sensors and servos, while the LM3940 provides power for the ESP board.

 

wifi_node_bb.jpg

wifi_node_schem.jpg

 

For ESP8266 module programming I chose the Arduino IDE. The Arduino environment needs a little setup before it can be used for this.

 

In the Additional Board Manager URLs field in the Arduino v1.6.4+ preferences, enter http://arduino.esp8266.com/stable/package_esp8266com_index.json.

 

arduino1.JPG

 

Next, use the Board Manager to install and update (if needed) the ESP8266 package.

 

arduino2.JPG

 

From "Tools" select "Generic ESP8266 Module" (at least this one works for me). I left all board settings at their defaults, except "Port", which has to match your serial adapter's COM port, and "Upload Speed", which has to be set to "115200".

 

arduino3.JPG

 

Before flashing esp_pilot.ino to the ESP8266-201 board, you need to change the SSID and password to match your WiFi router settings, as well as the IP address of the Apache web server.

Put the ESP board into bootload mode by connecting the GPIO 0 pin to GND and resetting the board.

 

After these steps the code upload should be possible (esp_pilot.ino in the Github repository). Once the upload is done, the ESP node should start sending data to the Raspberry Pi web server.

 

 

RaspberryPi environment setup

 

For the Raspberry Pi I used the latest available Raspbian Jessie. The good thing is that everything I tested worked fine. Given the operation mode of this project, and because I usually run my RPis headless, my top concern was getting the wireless adapter to work properly.

 

In order for the wireless nodes to reach the web server on the Raspberry Pi, I need a static IP address for the Raspberry Pi's wireless adapter, so the following config files have to be edited:

 

- sudo nano /etc/wpa_supplicant/wpa_supplicant.conf - configure SSID and Password for WiFi router

 

- sudo nano /etc/dhcpcd.conf - add the following at the end of the file, of course put your own IPs.

  interface wlan0

  static ip_address=192.168.1.150/24

  static routers=192.168.1.1

  static domain_name_servers=192.168.1.1

 

The above IP address will be used by the nodes to send data to web server.

 

After remote access was set up, I proceeded to install Apache2 and PHP5:

 

- sudo apt-get update

- sudo apt-get upgrade

- sudo apt-get install apache2 apache2-utils

- sudo apt-get install libapache2-mod-php5 php5 php-pear php5-xcache php5-mysql php5-curl php5-gd

 

That's it. Reboot the RPi and you should have a functional web server with PHP support. The default web server folder is /var/www/html, so that is where I placed the PHP script (add.php in the Github repository) that receives and processes the data sent by the wireless nodes. add.php needs two folders in the web server root: /var/www/html/datahist, for storing a log of temperature values, and /var/www/html/datainst, for storing only instantaneous values.
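
The storage logic is simple enough to sketch; here is an equivalent in Python (the one-file-per-node layout inside datahist and datainst is my assumption for illustration, not necessarily how add.php lays out its files):

```python
import os
import time

def store_reading(root, node, temperature):
    """Append the reading to the history log and overwrite the
    instantaneous value, mirroring the datahist/datainst split."""
    os.makedirs(os.path.join(root, 'datahist'), exist_ok=True)
    os.makedirs(os.path.join(root, 'datainst'), exist_ok=True)
    stamp = time.strftime('%Y-%m-%d %H:%M:%S')
    # History: one timestamped line per reading, appended
    with open(os.path.join(root, 'datahist', node + '.log'), 'a') as hist:
        hist.write('%s %.2f\n' % (stamp, temperature))
    # Instantaneous: always just the latest value, overwritten
    with open(os.path.join(root, 'datainst', node + '.txt'), 'w') as inst:
        inst.write('%.2f' % temperature)
```

Splitting history from the latest value means the dashboard feeder only ever has to read one tiny file per node.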

 

These instantaneous values are fed by a Python script (mqtt_pilot.py in the Github repository) to the AdafruitIO service, to be displayed in a dashboard format. This script uses the MQTT protocol to send/receive data and is implemented with the help of the Adafruit MQTT Python client library; a full tutorial and repository are available on their website/Github repo. A nice thing is the bidirectional communication between the dashboard and the MQTT Python script: the dashboard can not only receive feeds from the Raspberry Pi 3 but also send commands through switches and buttons. The switches don't control anything yet; so far I use them only to see whether messages are sent and received.

The mqtt_pilot.py script is placed in the /home/pi/pilot folder along with the node_config.txt file, used to pair node MAC addresses with AdafruitIO feed names.
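
Such a pairing file is straightforward to parse. A Python sketch (the one-"MAC feed-name"-pair-per-line format is my guess at the file's layout, for illustration only):

```python
def load_node_config(text):
    """Parse 'MAC feed-name' pairs, one per line; '#' starts a comment.
    (The exact file format here is an assumption for illustration.)"""
    mapping = {}
    for line in text.splitlines():
        line = line.split('#', 1)[0].strip()  # drop comments and whitespace
        if not line:
            continue
        mac, feed = line.split()
        mapping[mac.lower()] = feed  # normalise MAC case for lookups
    return mapping

config = load_node_config("""
# node MAC          AdafruitIO feed
5C:CF:7F:00:11:22   livingroom-temp
5C:CF:7F:33:44:55   bedroom-temp
""")
print(config['5c:cf:7f:00:11:22'])  # → livingroom-temp
```

Keying the mapping on the MAC address means a node can be re-flashed or moved without touching the dashboard side.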

 

Code for the Raspberry Pi and the ESPs is available, and updated as often as possible, on the Pilot Github page.

 

When my RPi is online you can see things happen HERE.

 

enocean.png

EnOcean offers a range of wireless, self-powered sensors measuring temperature, humidity, contact, and more. I've been using them in my house ever since the Forget Me Not Design Challenge. After almost two years, I can say that I've never had to perform any maintenance on the sensors and they are truly reliable. The indirect light shining on them from nearby windows has been enough to keep them powered all this time!

 

If you're still not convinced and have some time to spare, you can watch this ad from EnOcean, explaining how using their solution can save you a lot of time:

 

 

Anyway, let's proceed with getting things set up

 

 

EnOceanPi

 

IMG_1565.JPG

The EnOceanPi is the gateway through which all sensor data can be collected. It connects to the Pi via the old 26-pin GPIO header, but is compatible with the 40-pin version. The EnOceanPi uses only 4 pins on the GPIO header: 3.3V, GND, and UART Tx/Rx.

 

Unlike with previous generations of the Raspberry Pi, additional configuration is required to use the EnOceanPi on the Raspberry Pi 3. This is because the Pi 3 has onboard Bluetooth, which makes use of the UART interface on "/dev/ttyAMA0"; the serial console has moved to the mini-UART on "/dev/ttyS0". Additionally, in order to have stable, consistent serial output on the mini-UART interface, a new parameter was introduced in the "/boot/config.txt" configuration file. The parameter is called "enable_uart" and, when set to "1", fixes the core frequency to the minimum, unless the "force_turbo" parameter is also set to "1", which forces the maximum core frequency. Forcing turbo will consume more power and may require a heatsink for better heat dissipation.

 

More information on this topic can be found on the Raspberry Pi forums: https://www.raspberrypi.org/forums/viewtopic.php?f=28&t=141195

 

Here's what to do:

 

As per the instructions, add the "enable_uart" parameter. Optionally, add "force_turbo" as well.

 

pi@piiot1:~ $ sudo nano /boot/config.txt

 

# Enable UART

enable_uart=1
#force_turbo=1

 

Because we don't want the serial console to interfere with the EnOceanPi module, we disable it.

 

pi@piiot1:~ $ sudo nano /boot/cmdline.txt

 

Remove the reference to serial0, as illustrated below.

 

#dwc_otg.lpm_enable=0 console=serial0,115200 console=tty1 root=/dev/mmcblk0p2 rootfstype=ext4 elevator=deadline fsck.repair=yes rootwait
dwc_otg.lpm_enable=0 console=tty1 root=/dev/mmcblk0p2 rootfstype=ext4 elevator=deadline fsck.repair=yes rootwait

 

Finally, to apply the changes, reboot the Pi.

 

pi@piiot1:~ $ sudo reboot

 

The serial interface is now usable and dedicated to the EnOceanPi module.

 

EnOceanSpy

 

During the FMN, I used FHEM to determine the IDs of the EnOcean sensors. I have found a more lightweight option, namely EnOceanSpy.

 

You could also use "hexdump", which is available without installing any additional tools. It doesn't print the telegrams as cleanly though, making the parsing slightly more difficult, especially the first time.

 

pi@piiot1:~ $ sudo stty -F /dev/ttyS0 57600
pi@piiot1:~ $ sudo hexdump -C < /dev/ttyS0
00000000  55 00 0a 07 01 eb a5 00  00 66 00 01 80 9d c1 00  |U........f......|
00000010  01 ff ff ff ff 30 00 da  55 00 0a 07 01 eb a5 00  |.....0..U.......|
00000020  00 65 00 01 80 9d c1 00  01 ff ff ff ff 33 00 cd  |.e...........3..|
00000030  55 00 0a 07 01 eb a5 00  00 65 00 01 80 9d c1 00  |U........e......|
00000040  01 ff ff ff ff 31 00 e7  55 00 0a 07 01 eb a5 00  |.....1..U.......|

 

Let's continue with EnOceanSpy ...

 

Installing

 

Clone the GitHub repository using the "git" command:

 

pi@piiot1:~ $ git clone https://github.com/hfunke/EnOceanSpy
Cloning into 'EnOceanSpy'...
remote: Counting objects: 139, done.
remote: Total 139 (delta 0), reused 0 (delta 0), pack-reused 139
Receiving objects: 100% (139/139), 82.14 KiB | 0 bytes/s, done.
Resolving deltas: 100% (65/65), done.
Checking connectivity... done.

 

Change directory to the cloned repo:

 

pi@piiot1:~ $ cd EnOceanSpy/

 

In the source file, remove the following block of code. It expects the serial interface to be either "/dev/ttyAMA0" or "/dev/ttyUSB0", so it won't work with the EnOceanPi on the Pi 3, where the interface is called "/dev/ttyS0".

 

pi@piiot1:~/EnOceanSpy $ nano EnOceanSpy.c

 

// Check content of args
if ((strcmp(argv[1], "/dev/ttyUSB0") != 0)
  && (strcmp(argv[1], "/dev/ttyAMA0") !=0) )
{
  printf("Error: %s is not a valid port name!\n", argv[1]);
  return -1;
}

 

After the modification, compile the source code. It only takes a second.

 

pi@piiot1:~/EnOceanSpy $ gcc -o EnOceanSpy EnOceanSpy.c

 

Using

 

Run the "EnOceanSpy" command and specify the serial interface used by EnOceanPi. Triggering the EnOcean sensors should result in hexadecimal data being displayed on screen.

 

pi@piiot1:~/EnOceanSpy $ ./EnOceanSpy /dev/ttyS0
Starting EnOceanSpy...
2016-06-12 09:46:54  55 00 0A 07 01 EB A5 00 00 63 00 01 80 9D C1 00 01 FF FF FF FF 31 00 B7
2016-06-12 09:46:59  55 00 0A 07 01 EB A5 00 00 62 00 01 80 9D C1 00 01 FF FF FF FF 2D 00 F9
2016-06-12 09:47:01  55 00 0A 07 01 EB A5 00 00 62 00 01 80 9D C1 00 01 FF FF FF FF 2D 00 F9
...

 

This data can now be parsed to determine the sensor ID required by the EnOcean binding in OpenHAB.

 

Interpreting

 

Unless you are planning to write your own application to parse the EnOcean telegrams, the most useful/relevant part is the sensor ID, which can be used by another application such as OpenHAB.

 

The structure and content of such a telegram is described in detail in the EnOcean documentation:

 

Below is an illustration of the relevant data from the documentation, and how it maps to the data received via the EnOceanPi.

Screen Shot 2016-06-12 at 19.29.32.png

 

Based on this, I was able to retrieve all sensor IDs and configure them in OpenHAB, to be used with the EnOcean binding.
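As a sketch of that mapping, the sender ID can be pulled out of an ESP3 telegram in a few lines of Python. This assumes a radio telegram whose data field ends with the 4-byte sender ID followed by a single status byte, which holds for the telegrams captured above; it is an illustration, not part of the original setup:

```python
def parse_sender_id(telegram_hex):
    """Extract the 4-byte sender ID from an ESP3 radio telegram.

    Assumes the data field ends with the sender ID followed by one
    status byte (true for the RPS/1BS/4BS telegrams captured above).
    """
    b = bytes.fromhex(telegram_hex.replace(" ", ""))
    if b[0] != 0x55:
        raise ValueError("not an ESP3 packet (missing 0x55 sync byte)")
    data_len = (b[1] << 8) | b[2]     # 2-byte big-endian data length
    data = b[6:6 + data_len]          # data field follows the 6-byte header
    sender = data[-5:-1]              # 4 bytes before the trailing status byte
    return ":".join(f"{x:02X}" for x in sender)

# One of the telegrams captured by EnOceanSpy above:
print(parse_sender_id(
    "55 00 0A 07 01 EB A5 00 00 63 00 01 80 9D C1 00 01 FF FF FF FF 31 00 B7"))
# -> 01:80:9D:C1
```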

 

OpenHAB

 

I have chosen to use OpenHAB for this project, so I configured my sensors there.

 

Four things need to be done:

  • Tell OpenHAB to use "/dev/ttyS0" for serial communication
  • Install and configure the EnOcean binding
  • Define the items with the correct data
  • Populate the sitemap with those items

 

In "/etc/default/openhab", set:

  • "USER_AND_GROUP" to "pi:pi"
  • "JAVA_ARGS" to "-Dgnu.io.rxtx.SerialPorts=/dev/ttyS0"

 

Next, in "/etc/openhab/configurations/openhab.cfg", configure the EnOcean binding:

 

################################# EnOcean Binding #####################################
#
# EnOcean USB adapter serial port
enocean:serialPort=/dev/ttyS0

 

Install the EnOcean binding:

 

pi@piiot1:~ $ sudo apt-get install openhab-addon-binding-enocean

 

Finally, define your items and sitemap.

 

Items:

 

Switch EnOcean_sensor_00298B1A_A "Master A" <switch> (Switches) {enocean="{id=00:29:8B:1A, eep=F6:02:01, channel=A}"}
Switch EnOcean_sensor_00298B1A_B "Master B" <switch> (Switches) {enocean="{id=00:29:8B:1A, eep=F6:02:01, channel=B}"}

Contact EnOcean_sensor_0180FC58 "Door 1 [MAP(en.map):%s]" <door> (Contacts) {enocean="{id=01:80:FC:58, eep=D5:00:01, parameter=CONTACT_STATE}"}
Contact EnOcean_sensor_0180AAFA "Door 2 [MAP(en.map):%s]" <door> (Contacts) {enocean="{id=01:80:AA:FA, eep=D5:00:01, parameter=CONTACT_STATE}"}

Number EnOcean_sensor_01809DC1 "Room 1 [%.1f °C]" <temperature> (Temperature_Chart) {enocean="{id=01:80:9D:C1, eep=A5:02:05, parameter=TEMPERATURE}"}
Number EnOcean_sensor_0181A67A "Room 2 [%.1f °C]" <temperature> (Temperature_Chart) {enocean="{id=01:81:A6:7A, eep=A5:02:05, parameter=TEMPERATURE}"}

 

Sitemap:

 

sitemap demo label="EnOcean"
{
        Frame label="Switches" {
                Switch item=EnOcean_sensor_00298B1A_A
                Switch item=EnOcean_sensor_00298B1A_B
        }
        Frame label="Temperature" {
                Text item=EnOcean_sensor_01809DC1 valuecolor=[>25="orange",>15="green",>5="orange",<=5="blue"]
                Text item=EnOcean_sensor_0181A67A valuecolor=[>25="orange",>15="green",>5="orange",<=5="blue"]
        }
        Frame label="Contacts" {
                Text item=EnOcean_sensor_0180FC58
                Text item=EnOcean_sensor_0180AAFA
        }
}

 

The result should look something like this, although the GUI itself may differ, as I'm experimenting with PaperUI.

Screen Shot 2016-06-14 at 20.30.33.png

 

Using the sensors results in realtime updates:

 

 

Now you know how to set up EnOceanPi and sensors!

 



Chef

Inspired by one of the other challengers, I started to experiment with Chef. fvan is using Puppet to provision his Raspberry Pis. To learn more about the various tools out there, we decided to each try a different one, which is why I started using Chef.

 

Chef vs Puppet

Both tools are open source projects built for automatically provisioning nodes with software and configuration. They both have a fairly similar setup, using a server and clients on all nodes. The biggest difference is in how you manage your configuration: Puppet uses a declarative Ruby-based DSL, while Chef uses pure Ruby. This makes Chef a bit more powerful out-of-the-box. As Rich Morrow said in his report:

Whereas Chef tries to provide more power to the user, Puppet puts safety rails around them.

Some noteworthy differences:

                Chef                    Puppet
Language        Ruby                    DSL
Execution       Order enforced          Model driven
Approach        Programmer's approach   Sysadmin friendly
Used by a.o.    Facebook & Adobe        Twitter & Intel

 

To learn more about Puppet, I refer you to Frederick's posts; here you'll read more about Chef!

 

The basics

A typical setup consists of three elements: your workstation, a server, and nodes.

Chef setup

The server is the central repository for all code and it also keeps knowledge about every node it manages. From your workstation you write and verify the configuration policy and then upload it to the server. When you run chef-client on a node the latest code is downloaded from the server and the node's configuration is brought up-to-date.

 

Preparation

To start, you need a workstation and a server set up. You can install a server locally or use a hosted version. Hosted Chef is free for up to 5 nodes, so that's how we'll start.

  1. Install the Chef Development Kit
  2. Sign up for the trial of Hosted Chef
  3. Create an Organization at https://manage.chef.io/
  4. Go to the Administration tab, select your new organization and click Generate Knife Config
  5. Save knife.rb to ~/.chef
  6. Copy your private key (created during signup) to ~/.chef as well
  7. Test the connection between your workstation and the server with knife ssl check

 

Your first cookbook & recipe

A cookbook is a set of configuration files describing a service or node. It consists of recipes, template files and attributes. A recipe describes everything that is required to configure part of a system: for example, which software packages to install, how to configure them, or which other recipes to execute.

 

For this example we'll create a simple cookbook which creates a file, just like in Frederick's example. Start by generating the cookbook:

chef generate cookbook test-chef

 

You now have the following directory structure:

.
└── test-chef
    ├── Berksfile
    ├── chefignore
    ├── metadata.rb
    ├── README.md
    ├── recipes
    │   └── default.rb
    ├── spec
    │   ├── spec_helper.rb
    │   └── unit
    │       └── recipes
    │           └── default_spec.rb
    └── test
        └── integration
            ├── default
            │   └── serverspec
            │       └── default_spec.rb
            └── helpers
                └── serverspec
                    └── spec_helper.rb

 

As you can see there is already a default recipe created. Let's edit that to contain the following:

file '/tmp/testfile' do
  content 'test content'
  mode '0444'
end

 

Now upload it to your server:

knife cookbook upload test-chef

 

And bootstrap a node:

knife bootstrap ADDRESS --ssh-user USER --ssh-password 'PASSWORD' --sudo --use-sudo-password --node-name node1 --run-list 'recipe[test-chef]'

 

You'll see that Chef is installed on the node and, in the end, the file is created according to the recipe. When you run "chef-client" from the node, it will check if there is a new version and if the file is still there. If something is not according to the recipe, it will fix it.

 

Supermarket

Of course, there are already lots of cookbooks created by others; they are shared in the Supermarket. You can easily add a dependency on one of these cookbooks. There are several ways to override attributes used in a cookbook, or even to override complete files, to make sure it does exactly what you want.

 

You should have a basic understanding of Chef now; there is, however, much more. To help you get more familiar with Chef, they have some excellent tutorials. In upcoming blog posts I'll show my cookbooks as well.

 

The ugly: Chef on a Raspberry Pi

Now the bad news: Chef doesn't support the Raspberry Pi out-of-the-box, so you can't bootstrap it as easily as on other platforms. It also needs a fairly recent version of Ruby, which is not available in the Jessie repository. Luckily, you can overcome this by using a custom bootstrap script and the Stretch (the upcoming version of Debian/Raspbian) repository.

 

A good starting point is the Raspbian-Bootstrap by Dayne. It's made for Wheezy, so we'll have to update it a bit. It took some experimenting, but at some point I found a working solution: we can skip the custom Ruby build (which takes a long time) by using the ruby2.3 package from the Stretch repository, and I've updated the syntax of the configuration part to match the default Chef bootstrap.

 

This brings us to the following steps:

  1. Take a Raspberry Pi with a clean Raspbian install (I use raspbian-ua-netinst, as it gives a minimal install)
  2. Download my version of Raspbian-Bootstrap
  3. Use knife to bootstrap with a custom template:
    knife bootstrap PI_IP_ADDRESS -t raspbian-jessie-chef.erb --ssh-user root --ssh-password 'raspbian' --node-name 'NODE_NAME' --run-list 'recipe[thuis-base::default]'
    
    This will do the following:
    • Add the Raspbian Stretch repository to Apt and update the Apt index
    • Remove any existing versions of Ruby
    • Install Ruby 2.3 plus build tools from the Stretch repository
    • Install gems needed for Chef + Chef itself
    • Add basic configuration for Chef
    • Start chef-client for the first time, running (in this case) my base cookbook thuis-base
  4. Start using Chef on your Raspberry!

In this post I'm covering three components: RF plug control, detecting IR from the TV remote, and measuring the node's environment. I intend to reuse this last one in most of the other nodes. The point is that there is lots to cover before the kit arrives, and from what we are reading, that can happen really quickly.

 

As suggested by DAB last time, I will add a snapshot of the features I'm discussing in each post. I hope this will make it easier to follow the project's development!

Project Status

This is a snapshot of all of the features from the PiIoT - DomPi: Intro. At the end of the post, you can find the nodes' dashboard with some cells turning green!

 

Lights Control

This feature leverages the RF plugs from the DIO Chacon brand that I have at home. As explained in the previous post, they operate at 433 MHz, which can easily be replicated by an Arduino or Raspberry Pi. I have decided to create a small class (DIO_lib) to implement the required code. This way, each of the three plugs will be managed independently by a DIO_lib object.

 

Below is the .h file for the Arduino class, DIO_lib.h. Note that the class wrapper is home-made, meaning... it may not be the best C code ever... but it works

/*
Library based on different pieces of code found on the Internet, as shared in the Element14 Design Challenge posts.

This library provides an object, DIO_lib, that helps control my RF plugs from the DIO Chacon brand.

 RF header of DIO:
  - 26-bit header. It includes "00" and the specific code for each plug set, as per below
  - "0": a fixed bit of "0"
  - On/Off: a bit stating on or off
  - 4 bits identifying the plug to actuate on

 DIO id for the RF set: 14137952
 Plug 1: "0000" added at the end
 Plug 2: "0001"
 Plug 3: "0010"

 */

#ifndef DIO_lib_h
#define DIO_lib_h

#include <Arduino.h>

#define DIO_CMD_OFF  0 //Turn off
#define DIO_CMD_ON   1 //Turn on 
#define ENCHUFE1    1  //Id for the plug in the library
#define ENCHUFE2    2
#define ENCHUFE3    3
#define CABECERA_DIO 14137952

class DIO_lib {
public:
  DIO_lib(int enchufe, int pin);
  void setPIN(int pin); //Select pin where the RF 433Mhz module is connected
  void sendCmd(int cmd); //send on or off
  void On();
  void Off();
  
private:
  int _pin;
  int _enchufe;

  void itob(unsigned long integer, int length);
  void transmit(int blnOn);
  void sendPair(boolean b);
  unsigned long power2(int power);
};


#endif

In the attachments section you can find this file and also the .cpp file. Additionally, I have added an examples folder. If you add the library to your Arduino IDE in the regular way, you can use the examples and the library as usual. Hope it helps!
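To make the frame layout from the header comment concrete, here is the bit packing sketched in Python (purely for illustration; the actual transmission is done by the Arduino class). The function name and the plug-index convention are mine, not from the library:

```python
# Frame layout from the header comment: 26-bit ID, a fixed "0" bit,
# one on/off bit, then 4 bits selecting the plug (plug 1 = "0000", ...).
DIO_ID = 14137952  # DIO id for the RF set

def dio_frame(plug_index, on):
    """Build the 32-bit command as a bit string; plug_index 0 -> Plug 1."""
    return f"{DIO_ID:026b}" + "0" + ("1" if on else "0") + f"{plug_index:04b}"

frame = dio_frame(1, True)     # switch Plug 2 ("0001") on
print(len(frame), frame[-4:])  # -> 32 0001
```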

 

Lights Control via the TV remote

As mentioned in the previous post, I will use the TV remote to turn on and off the lights connected to the RF plugs. Using the Arduino IR library with its dump example and the IR receiver, I have identified the IR codes of the buttons I want to detect: mainly the red, yellow, green and blue ones.

 

In the beginning I implemented code that would detect the red button, which acted as an activator for the Arduino: it made the Arduino wait up to 4 seconds for a yellow, green or blue button to be detected. The yellow button would activate plug 1, the green plug 2, etc. This worked well in the "alpha" test, meaning myself. But with the beta tester (my wife), I realized that it was not the easiest approach: you need to move your finger! So I'm now implementing a different approach: if you click the red button twice, it toggles plug 3. The first double click turns it on, the next one off, the third one on, and so on. The green button toggles plug 2 and the yellow button plug 1.

 

If you are thinking that there is no "double click" on a TV remote, you are right: I'm allowing up to 4 seconds for the second click. My guess is that allowing "only" 4 seconds is a good trade-off: it gives the user enough time to click twice without rush or stress, and it should not interfere with TV usage. I seldom use those buttons on my TV; when I do, it is to move between menus, which usually takes more than 4 seconds (loading the page, reading what I need and moving to the next page should take longer). If in practice it does interfere, I will just reduce it to 2 seconds.
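The double-click logic described above can be sketched as a tiny state machine (written here in Python for clarity; the class and method names are mine, and the real implementation runs on the Arduino):

```python
DOUBLE_CLICK_WINDOW = 4.0  # seconds allowed between the two clicks

class DoubleClickToggle:
    """A second press of the same button within the window toggles its plug."""

    def __init__(self, window=DOUBLE_CLICK_WINDOW):
        self.window = window
        self.last_press = {}   # button -> timestamp of the first click
        self.state = {}        # button -> current plug on/off state

    def press(self, button, now):
        """Return the new plug state if a double click completed, else None."""
        first = self.last_press.pop(button, None)
        if first is not None and now - first <= self.window:
            self.state[button] = not self.state.get(button, False)
            return self.state[button]
        self.last_press[button] = now  # remember this as a possible first click
        return None

t = DoubleClickToggle()
print(t.press("red", 0.0))   # first click -> None
print(t.press("red", 2.5))   # second click within 4 s -> True (plug on)
print(t.press("red", 10.0))  # too late to pair -> None
print(t.press("red", 11.0))  # another double click -> False (plug off)
```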

 

IR-TV-Blocks

I already have some code that performs these actions; let me share it with you, hopefully in the next post along with most of the features for the living room node.

 

Environment Conditions

This feature obtains the temperature, humidity and luminosity of the room. The idea is to reuse the same code in all of the nodes that require this feature. From the sensors' perspective, I will be using the DHT11 for humidity and the DS18B20 for temperature (see the reasons for these in the previous post). The output of both is digital and requires some decoding; therefore, I'm leveraging two libraries, DHT.h and DallasTemperature.h, together with OneWire.h. For the luminosity sensor I will use the GL5528; its output is analog and I am connecting it to the A3 analog pin of the Arduino Nano.

Problems encountered: in theory, you should be able to connect both the DHT11 and the DS18B20 to the same Arduino input pin, but for some reason that I have not looked deeply into, I am having issues sharing the same pin. Since I don't foresee running out of digital pins, I have decided to keep the sensors on different pins.

 

The connections for the DHT11 and DS18B20 sensors are very much the same: a resistor from +5V to the relevant pin, to which the sensor's data pin is then connected. For the photoresistor, its second leg is connected to ground.

environment connections

The libraries for the DHT11 and the Dallas sensor are quite intuitive and, besides some fine-tuning, straightforward when looking at the examples. One call-out though: for the Dallas sensor there are two ways of invoking the class. It seems that if you provide the device ID when creating the object, the Arduino and the sensor will work faster, probably because it avoids the discovery phase on each call (TBC). In principle, I don't expect issues from spending some extra milliseconds on each thermometer reading; however, since looking up the address only has to be done once per node, I decided to invoke the object with the device address. To get the address, I just leveraged the tester example.

 

Getting the luminosity is also quite straightforward: I just read the value of analog pin A3 when required. The only trick is that I want to map the luminosity onto a 0-100 scale before the value is sent to the Command Center. This mapping requires some fine-tuning, and the Arduino IDE offers two functions that will help me: constrain() and map(). The missing piece is that I need to take some measurements during the day and night to understand the range of the readings before I can properly constrain them and map them onto the 0-100 scale. So some effort still needs to go into this fine-tuning.
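The scaling logic behind constrain() and map() can be sketched in a few lines (Python used here just to illustrate the arithmetic; the input range 100-900 is a made-up example pending the real day/night measurements):

```python
def constrain(x, lo, hi):
    """Clamp x into [lo, hi], like Arduino's constrain()."""
    return max(lo, min(hi, x))

def scale(x, in_lo, in_hi, out_lo=0, out_hi=100):
    """Clamp then linearly remap, like constrain() followed by map()."""
    x = constrain(x, in_lo, in_hi)
    return (x - in_lo) * (out_hi - out_lo) // (in_hi - in_lo) + out_lo

# Example with an assumed ADC range of 100..900:
print(scale(512, 100, 900))   # mid-range reading -> 51
print(scale(50, 100, 900))    # below range, clamped -> 0
print(scale(1000, 100, 900))  # above range, clamped -> 100
```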

 

I will share the code of these functions in the next post; hopefully the living room node will be almost finished by then.

 

Nodes´ Dashboard

Nodes Dashboard

If anything needs further details or you have any suggestions, please do let me know! Write to you in the next post.

My PiIoT Design Challenge project is an (airplane) hangar control system, "Hangar Central". One of the components will be a remotely scheduled and operated engine block heater, which I intend to power using a relay whose low-voltage control circuit is driven by one of the Raspberry Pi GPIO 3.3V on/off pins. I haven't built the remote heater yet as I am still experimenting with various components. On my test bench I have a Raspberry Pi that has jumpers from pin 6 (ground) and pin 11 (set to output 3.3V when "on") connected to a resistor/LED pair. If you're reading this then you have most certainly seen (ad nauseam) a program to turn an LED on and off. For the sake of development, the LED represents a heater and when it is on, the heater is running.

 

led-gpio17

I decided to use the gpiozero library (http://gpiozero.readthedocs.io/en/v1.2.0) because the API is very readable and intuitive. The gpiozero API provides a class called "LED" which allows us to focus on treating individual GPIO pins as lights that we can turn on and off rather than circuits to bring high or low. Let's get a quick script going using gpiozero:

 

# Use the gpiozero library to control the GPIO pins
from gpiozero import LED

# Allow us to sleep
import time

# Use a descriptive variable name and assign it to a GPIO pin
# GPIO 17 is physical pin #11
heater = LED(17)

# To start preheating an engine (in this case, turn on the LED)
heater.on()

# Wait long enough for us to see the light
time.sleep(5)

# To turn off an engine heater
heater.off()

 

Running this program on the RPi will turn the LED on and off. Pretty basic and incredibly easy -- precisely the reason that the Raspberry Pi has found such great success.
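The scheduling half of the heater hasn't been built yet, but the timing calculation it will need is trivial. A sketch (names like PREHEAT_HOURS and heater_on_at are hypothetical, not from the project):

```python
from datetime import datetime, timedelta

PREHEAT_HOURS = 3  # assumed lead time needed to warm an engine block

def heater_on_at(departure):
    """When to drive the relay's GPIO pin high so the engine is warm by departure."""
    return departure - timedelta(hours=PREHEAT_HOURS)

print(heater_on_at(datetime(2016, 7, 1, 9, 0)))  # -> 2016-07-01 06:00:00
```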

 

While waiting for the Challenge Kit to arrive, I have started messing around with some of the issues I foresee and the concepts surrounding physical computing. I travel quite a bit for both work and personal reasons. When traveling for work, I have no time for playing around. When I am not at work, I am usually taking my two sons to music lessons, soccer practices, or karate classes. I often find myself with anywhere from 30 minutes to 2 hours of downtime. During these times, I have been learning Python and how to interact with the RPi. As small and portable as the Raspberry Pi is, it's still not convenient to carry around. I needed a way to develop for the Pi without developing on the Pi.

 

Right now, I am sitting in a parking lot with my laptop on and no Raspberry Pi in sight. I want to do some coding for my RPi, so I am going to rig up enough of an environment to let me do so. Enter the concept of "stubbing". Basically, we're going to provide enough of a model to represent the GPIO pins on the Raspberry Pi so that we can write the major portions of a program and then test the physical functionality when the Raspberry Pi is available. I understand that some of you may consider a stub module far too obvious to write about, but hopefully others will find benefit in the discussion or, at the very least, a refresher (reminder) that not every problem needs to be solved before starting.

 

Above, I provided a quick example of the gpiozero LED class. Now I want to duplicate the (programming) interface so that we can program "offline". Use your favorite editor to create a Python script called "gpio_stub.py":

 

# Create our own LED class
class LED(object):

  # Initialize our version of an LED and use variables to duplicate what
  # would happen in hardware
  def __init__(self, pin, active_high=True, initial_value=False):
    self.pin = pin
    self.value = initial_value
    self.active_high = active_high

  # "Turn on" the LED by recording a True value
  def on(self):
    self.value = True

  # "Turn off" the LED by recording False
  def off(self):
    self.value = False

  # Is the light on or off?
  @property
  def is_lit(self):
    return self.value

 

We now have the basics of an interface for programming the GPIO just like we do with the gpiozero library. To have your own program function both on the RPi and on your development machine, place the following snippet wherever you would import the gpiozero module -- most likely at the top of your Python program:

 

# If we're not running on a Raspberry, use a GPIO stub so we can still get some coding done
from platform import processor
if processor() == 'x86_64':
  from lib.gpio_stub import LED
else:
  from gpiozero import LED

 

The above snippet checks the processor type in an attempt to identify what kind of machine we are on. I realize that other processor types may be returned, for example when working on a Mac. Simply find out what processor type your machine reports and use that for the test condition. I did not want to test operating system types, as that was even more ambiguous than identifying a processor. Again, the goal is to identify our current development environment; this is not necessarily a robust solution to be used in production.

 

I hope that this article gives you some ideas about how you can do some RPi tinkering while you're on the road. In addition, if you're waiting for something to be written or to become available, write a little placeholder ("stub") module to implement the missing pieces.

 

Enjoy,

Rick

     The first step in implementing the home automation system is the monitoring part. It consists of remote wireless nodes which send temperature and humidity measurements to a central node built with a Raspberry Pi, where the values are recorded and processed.

 

     These nodes will be built around ESP8266 wireless modules and can be configured to be sensors, actuators or both. In the first phase, each node will have its role set by the firmware loaded on it; the future plan is to have the same firmware loaded on all nodes, with roles assigned/changed on the fly from the central node.

The central node will be built around a Raspberry Pi 3, which is also the core of the whole system: measurements are processed there and actions are issued to actuators according to predefined scenarios. Instantaneous values and a history of measured values for each sensor are accessible through a web interface and are also sent to a cloud service.

 

     For this first step I acquired the following parts:

 

- Raspberry Pi 3 - installed and updated to the latest Raspbian Jessie version. This being the first time I installed Jessie, I had a few surprises, especially on the networking side: my settings from Wheezy did not work anymore. A little research was required to put everything back on track. Some work is still required on the wireless and Bluetooth side.

- ESP8266-201 - for the first tests, I bought two ESP8266-201 modules. I chose this version among the many ESP8266 variants because it has many GPIO pins accessible, compared for example with the ESP8266-01 modules. Besides, the ESP8266-201 has both an onboard and an external antenna, which might be required if nodes are spread around the home.

One downside, for me at least: these modules work at 3.3V and are quite power hungry, so I think the nodes will not be battery powered. I considered an Adafruit HUZZAH ESP8266 WiFi breakout, because it has an onboard voltage regulator and level shifting to 3.3V, but for now the ESPs were easier to get.

- Voltage regulator - for now I use a Pololu Adjustable Step-Up/Step-Down Voltage Regulator S7V8A set to 3.3V. I'll use a fixed 3.3V voltage regulator in the next version.

- DS18B20 - 1-Wire digital thermometer - quite easy to use with the ESP8266 thanks to existing libraries.

 

     So far I have built two of these wireless nodes, one on a breadboard and one on perfboard; I plan to build at least two or three more to cover all the places I need. The code to send measurements to the RPi is still work in progress; I'll post it, along with the code on the RPi side, as soon as it is usable.

ESP_breadboard

ESP_node1

ESP_node2

All the best

-=Seba=-

In this entry I discuss the implementation of MQTT (Message Queue Telemetry Transport) as the connectivity protocol. MQTT was born as a machine-to-machine connection protocol for devices where a lightweight implementation is needed. It consumes little bandwidth and its software stack does not require much memory space (perfect for our IoT applications!).

 

MQTT deploys a publish/subscribe architecture, with a broker in the middle. To provide multicast-to-multicast communication, information sources and destinations are categorized as publisher and subscriber clients respectively. These clients connect to a central broker, which manages the incoming information and distributes it accordingly.

The protocol's behavior is orchestrated by the broker. Publishers connect to the broker and send their data packages. Each of these packages has a topic (the publisher sets this topic label to identify the type of information), which the broker uses to organize them in queues. Subscriber clients can then subscribe to the topics of their interest; new packages arriving in a queue are delivered to the subscribed clients.
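The topic-based routing just described hinges on matching topic names against subscription filters. A small sketch of MQTT's standard filter matching, with '+' matching exactly one level and '#' matching the remainder (illustrative only; the broker does this for you):

```python
def topic_matches(pattern, topic):
    """MQTT topic filter matching: '+' = one level, '#' = all remaining levels."""
    p, t = pattern.split("/"), topic.split("/")
    for i, seg in enumerate(p):
        if seg == "#":              # multi-level wildcard matches the rest
            return True
        if i >= len(t) or (seg != "+" and seg != t[i]):
            return False
    return len(p) == len(t)         # all levels consumed exactly

print(topic_matches("home/+/temperature", "home/bedroom/temperature"))  # True
print(topic_matches("home/#", "home/bedroom/humidity"))                 # True
print(topic_matches("home/+/temperature", "home/bedroom/humidity"))     # False
```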

 

In order to work with MQTT, we will make use of two open source projects:

  • Mosquitto – to implement the broker
  • Paho – to implement the clients

 

Communication among broker and clients is achieved using the home WiFi network.

 

MQTT Elements

 

This MQTT architecture has three main elements. Their relationship is shown in the figure below:

 

MQTT-Home.jpg

 

 

 

 

Raspberry Pi 1 - Publisher

  • Sensor Node
  • Generates data (sensor readings)
  • Sends data to broker

 

 

Raspberry Pi 3 - Broker

  • Central node
  • Receives messages from publishers
  • Organizes messages in topic queues

 

 

 

Smartphone - Subscriber

  • User's devices
  • Connected to the broker
  • Displays sensor readings

 

 

          Figure 1. MQTT architecture

 

The next sections explain the choice of software and the setup for the broker and clients.

 

mosquitto-200px.png

1) Installation of Mosquitto Broker in the central node of our smart home (Raspberry Pi 3).

Initial setup: Raspberry Pi 3 – Raspbian 8.0 (jessie) / SSH connection enabled

 

As described on its project website: “Eclipse Mosquitto™ is an open source (EPL/EDL licensed) message broker that implements the MQTT protocol versions 3.1 and 3.1.1. MQTT provides a lightweight method of carrying out messaging using a publish/subscribe model. This makes it suitable for "Internet of Things" messaging such as with low power sensors or mobile devices such as phones, embedded computers or microcontrollers like the Arduino.”

 

I will install the broker in a Raspberry Pi 3, which is already running Raspbian OS. Afterwards, I will configure this broker and test that it is actually running.

 

Install

<OPTIONAL> To do so, you may first need to add the corresponding Debian repository for the Mosquitto files, with the following commands:

From Jessie onwards, this step is no longer needed, as Mosquitto is officially part of the package repository {thanks rhe123 for the explanation in your comment}. So, if you have an older version, or a plain apt-get install does not work directly, you may want to add the repository. Otherwise, the next command lines can be skipped.

sudo wget http://repo.mosquitto.org/debian/mosquitto-repo.gpg.key
sudo apt-key add mosquitto-repo.gpg.key
cd /etc/apt/sources.list.d/ 
sudo wget http://repo.mosquitto.org/debian/mosquitto-jessie.list
sudo apt-get update

</OPTIONAL>

 

Then I install the Mosquitto broker. Additionally, we will also install the default MQTT clients included in the project:

sudo apt-get install mosquitto mosquitto-clients

 

Configure

Mosquitto allows most of its parameters to be configured (such as the user name, the maximum number of messages in a queue, the listener characteristics...). By default, however, Mosquitto starts without any configuration file, using its default values. I will have to create a mosquitto.conf to be passed as a parameter when the broker is started. It can be created as /etc/mosquitto/mosquitto.conf (*NOTE: the name of the file is irrelevant, as long as it has a .conf extension)

 

The broker can be started with the following command:

sudo /usr/sbin/mosquitto -c /etc/mosquitto/mosquitto.conf 

 

The Mosquitto project has a very detailed explanation of this config file: mosquitto-conf-5.html

I also attach an example config file (it can be found at /usr/share/doc/mosquitto/examples/mosquitto.conf.example). I found it a very useful starting point for changing the parameters.

 

In this project, I modify the following parameters:

  • MQTT general characteristics:
    • user mosquitto
    • persistence true (save messages information)
    • Security:
      • clientid_prefixes secure- (only clients whose id starts with secure- are allowed to connect... better to use something more original than secure- for the prefix). In this first trial, no prefix is included, but we implement it later in the project.
      • allow_anonymous false (clients connecting without a user name are not allowed)
  • Listener parameters --> they define the broker's behavior when clients try to connect
    • port 1883 (default)
    • max_connections -1 (no limit on the number of connections)
    • Certificate-based SSL/TLS support --> adds security with a Certificate Authority which generates a certificate for each client. Clients can be required to provide a valid certificate to connect to the broker. (I plan to have this feature enabled, but could not make it work in my first trials, so... no certificate-based SSL/TLS support as of now.)
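Putting those choices together, a minimal mosquitto.conf could look like the sketch below. The values simply mirror the parameters discussed above; adapt them to your own setup:

```conf
# /etc/mosquitto/mosquitto.conf (sketch)
user mosquitto
persistence true

# Security
#clientid_prefixes secure-    # commented out for now, enabled later in the project
allow_anonymous false

# Listener
port 1883
max_connections -1
```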

 

Test

Since the mosquitto-clients were also installed, I will use them to test our Mosquitto setup. I open three different terminals to start the broker and to create a subscriber client and a publisher client.

1. Start the broker in terminal 1

2. Create an MQTT subscriber client in terminal 2. It will listen for messages with topic 'test/hello'

3. Create an MQTT publisher client in terminal 3. It will send a simple message "hello world!" (original, yeah!) with topic 'test/hello'

 

mosquitto.png

Figure 2. Start MQTT broker with our config file

          

subscriber.png

Figure 3. Create a subscribe client. It is subscribed to the topic test/hello

publisher.png

Figure 4. Create a publisher client. Send the message "Hello world!"

 

(*) The Mosquitto broker is started automatically. We stop it first, then start a broker with the desired configuration.

 

We got the message in our subscriber, hurray!!

 

Useful Mosquitto commands

 

STOP BROKER

sudo /etc/init.d/mosquitto stop

 

START BROKER

sudo /usr/sbin/mosquitto -c /etc/mosquitto/mosquitto.conf 

 

SUBSCRIBER CLIENT

mosquitto_sub -h <host> -p <port> -i <user> -d -t what/topic 

 

PUBLISHER CLIENT

mosquitto_pub -h <host> -p <port> -i <user> -d -t what/topic -m "new message!" 

 

 

MQTT broker (Mosquitto) successfully installed on the Raspberry Pi 3

MQTT broker tested with local MQTT clients

 

 

 

2) Paho Clients (Raspberry Pi 1 / Smartphone & Raspberry Pi 3) paho_logo_400.png

Initial setup: Raspberry Pi 1 - Raspbian / Smartphone - Android  4.2.2/ Raspberry Pi 3 - Raspbian 8.0

 

Our clients, both publisher and subscriber, are created using the Paho project. Paho provides client implementations of MQTT in various programming languages. The main advantages we obtain from Paho are compatibility (it is available for different platforms and programming languages, such as Python and Java) and community support.

We are using Paho to implement three clients: one publisher (in our sensor node) and two subscribers (one in the broker and the other in the smartphone for the user).

 

MQTT Connectivity setup

 

We now have a running broker installed on the Raspberry Pi 3. We write down the values any MQTT client will need to connect to it:

  • Host: local IP address of Raspberry Pi 3
  • Port: 1883
  • Client prefixes (if configured): secure-

With these, we can develop publisher and subscriber clients connected to our WLAN/LAN.
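If the broker is later configured with clientid_prefixes secure- (as planned above), every client must pick a client id with that prefix or the connection will be refused. Here is a small pure-Python helper that gathers the connection values and sanity-checks the client id before it is handed to a Paho client; the host address is a placeholder for the Pi 3's local IP, and Paho itself is not imported here:

```python
BROKER_SETTINGS = {
    "host": "192.168.1.50",   # placeholder: local IP address of the Raspberry Pi 3
    "port": 1883,             # default MQTT port, as configured in mosquitto.conf
    "prefix": "secure-",      # clientid_prefixes value from mosquitto.conf
}

def make_client_id(name, settings=BROKER_SETTINGS):
    """Return a client id the broker will accept, prepending the
    configured prefix when it is missing."""
    prefix = settings["prefix"]
    return name if name.startswith(prefix) else prefix + name

print(make_client_id("sensor-node"))   # secure-sensor-node
print(make_client_id("secure-phone"))  # secure-phone
```

Centralizing these values in one place means the publisher on the Pi 1, the subscriber on the Pi 3 and the smartphone client can all share the same connection recipe.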

 

In my previous post, I talked about Puppet and how I intend to use this configuration management tool during the challenge.

Today, I will be showing you a puppet module I am working on for the automatic installation and configuration of OpenHAB.

 

Do keep in mind that the module will evolve over the course of the challenge, as I implement new features and functionality.

 

Puppet Module

 

The puppet module I created consists of different folders and files, each with a specific function. The structure is as follows:

 

pi@puppet:/etc/puppet/modules/openhab $ tree .
.
├── files
│   ├── openhab.items
│   ├── openhab.map
│   ├── openhab.persist
│   ├── openhab.rules
│   └── openhab.sitemap
├── manifests
│   ├── configuration.pp
│   └── init.pp
└── templates
    └── openhab.cfg.erb

 

Files

 

The "files" folder contains static files which will be copied to a specific location on the target node. The content of the files will not change, though attributes such as file name, permissions, owner, etc. could.

 

In this case, I have put my openhab configuration files with my specific sitemap, items, etc. in that folder. Someone else using this puppet module could replace these files with their own.

 

Manifests

 

The manifests contain the logic of what should be configured and how. To keep a clear overview, the logic can be spread across different manifests.

 

In this particular case, two manifests were created:

  • init.pp: responsible for the installation of the openhab runtime
  • configuration.pp: responsible for the deployment of configuration files and accompanying openhab addons

 

Rather than explaining the full code of the manifests, I will highlight some interesting bits, helping you understand how I achieved certain things.

 

1) OpenHAB is not part of the APT repository by default. So in order to be able to install it, the correct source needs to be added:

 

apt::source { 'openhab':
  location => 'http://dl.bintray.com/openhab/apt-repo',
  release => 'stable',
  repos => 'main',
  include_src => false,
}

 

This will add the source and automatically force an "apt-get update", making OpenHAB available for installation.

 

2) If OpenHAB is not installed, install it:

 

package { 'openhab-runtime':
  ensure => 'installed',
}

 

3) Make sure OpenHAB is running and automatically started at boot:

 

service { 'openhab':
  ensure => 'running',
  enable => true,
}

 

4) Make sure "openhab.cfg" is present in "/etc/openhab/configurations/", based on the "openhab.cfg.erb" template file. If the file is updated, restart the OpenHAB service to apply the new configuration:

 

file { '/etc/openhab/configurations/openhab.cfg':
  content => template('openhab/openhab.cfg.erb'),
  notify => Service['openhab'],
}

 

 

I hope this helps you understand the puppet manifest a little bit better and how it could apply to the deployment of other files and applications. For those of you interested in the full code, I invite you to have a look at the OpenHAB puppet module in the PiIoT GitHub repository I have created for this challenge, which will contain all files produced as part of this project. I will, however, replace all credentials and keys with dummy ones.

 

Templates

 

This folder contains files whose content is templated so that it is automatically modified depending on certain variables in the manifests.

 

For example, if the variable "persistence" is set to true, it will add the MySQL Persistence section in the configuration file, along with the MySQL credentials and database defined in the configuration manifest.

 

It looks like this inside the templated "openhab.cfg" file:

 

<% if @persistence == true -%>
############################ SQL Persistence Service ##################################
# the database url like 'jdbc:mysql://<host>:<port>/<database>' (without quotes)
mysql:url=jdbc:mysql://localhost/<%= @db_name %>

# the database user
mysql:user=<%= @db_user %>

# the database password
mysql:password=<%= @db_pass %>

# the reconnection counter
#mysql:reconnectCnt=

# the connection timeout (in seconds)
#mysql:waitTimeout=
<% end -%>

 

OpenHAB

 

Well, now that the openhab puppet module is explained, let's use it by performing a "puppet run" on the desired node.

 

pi@piiot1:~ $ sudo puppet agent -t

 

Et voila! That's all it takes to install, fully configure and start OpenHAB on any node, with the same, reproducible configuration.

You will want to read the "puppet run" output to verify that things are as expected.

 

For example, verify OpenHAB has been installed:

 

Info: Retrieving pluginfacts
Info: Retrieving plugin
Info: Loading facts
Info: Caching catalog for piiot1.home
Info: Applying configuration version '1465327686'
Notice: /Stage[main]/Openhab/Apt::Key[openhab]/Exec[177ff3e62fb92bf7bc3ea298823a66b3db2d9c3f]/returns: executed successfully
Notice: /Stage[main]/Openhab/Apt::Source[openhab]/File[openhab.list]/ensure: created
Info: /Stage[main]/Openhab/Apt::Source[openhab]/File[openhab.list]: Scheduling refresh of Exec[apt_update]
Notice: /Stage[main]/Apt::Update/Exec[apt_update]: Triggered 'refresh' from 1 events
Notice: /Stage[main]/Openhab/Package[openhab-runtime]/ensure: created
Info: /Package[openhab-runtime]: Scheduling refresh of Service[openhab]
Notice: /Stage[main]/Openhab::Configuration/File[/etc/openhab/configurations/sitemaps/openhab.sitemap]/owner: owner changed 'openhab' to 'pi'
Notice: /Stage[main]/Openhab::Configuration/File[/etc/openhab/configurations/sitemaps/openhab.sitemap]/group: group changed 'openhab' to 'pi'
Notice: /Stage[main]/Openhab::Configuration/File[/etc/openhab/configurations/rules/openhab.rules]/owner: owner changed 'openhab' to 'pi'
Notice: /Stage[main]/Openhab::Configuration/File[/etc/openhab/configurations/rules/openhab.rules]/group: group changed 'openhab' to 'pi'
Notice: /Stage[main]/Openhab::Configuration/File[/etc/openhab/configurations/items/openhab.items]/owner: owner changed 'openhab' to 'pi'
Notice: /Stage[main]/Openhab::Configuration/File[/etc/openhab/configurations/items/openhab.items]/group: group changed 'openhab' to 'pi'
Notice: /Stage[main]/Openhab::Configuration/Package[openhab-addon-persistence-mysql]/ensure: ensure changed 'purged' to 'present'
Notice: /Stage[main]/Openhab::Configuration/File[/etc/openhab/configurations/transform/openhab.map]/owner: owner changed 'openhab' to 'pi'
Notice: /Stage[main]/Openhab::Configuration/File[/etc/openhab/configurations/transform/openhab.map]/group: group changed 'openhab' to 'pi'
Notice: /Stage[main]/Openhab::Configuration/File[/etc/openhab/configurations/persistence/mysql.persist]/owner: owner changed 'openhab' to 'pi'
Notice: /Stage[main]/Openhab::Configuration/File[/etc/openhab/configurations/persistence/mysql.persist]/group: group changed 'openhab' to 'pi'
Notice: /Stage[main]/Openhab/Service[openhab]/enable: enable changed 'false' to 'true'
Notice: /Stage[main]/Openhab/Service[openhab]: Triggered 'refresh' from 1 events
Notice: Finished catalog run in 96.06 seconds

 

The output is readable enough to understand what is happening: first the sources were added, then the repo was updated, OpenHAB was installed, configuration files were deployed, and finally OpenHAB was started. All with a single command.

 

Conclusion

 

I hope this post has helped you understand a little bit better how Puppet works and how it can be useful for consistent, reproducible installations. There is initially some effort in setting up the required modules, but not all of them need to be created from scratch like I did for OpenHAB. There is a community sharing Puppet modules for various applications, so it is always good to check if something already exists before creating your own (it is a good exercise though).

 



Pi Control Hub is a DIY home automation system which can be extended in the future with more IoT capabilities. The name "Pi Control Hub" reflects a hub-and-spoke network topology: the Raspberry Pi 3 and the 7'' touch screen are going to act as the HUB, aka the command and info center, while the other components of the kit, including the EnOcean Sensor Kit and some sensors I already have lying around at home, will act as the SPOKEs. Here are the features:

  • DIY Security Camera
  • Blinds Automation
  • Key-less door entry
  • Lights on/off when you enter/leave home
  • Inside-Outside Weather Condition monitor using OpenWeatherMap API
  • Running the Amazon Echo API on the Raspberry Pi - which means you have your very own DIY Alexa at home, and you can ask her all sorts of questions, like what the weather conditions are, the news and more.
  • All the features above, plus the security camera feed, will be part of a web app hosted on the Pi 3, written in Python-Flask, which means you will be able to access Pi Control Hub from a tablet, computer or phone when you are on your home WiFi.

 

There will be an individual blog post for each of the features, aka the spokes of the hub, so stay tuned!! In addition, I will design 3D printed cases/holders for each of the components and include the STL files as part of the blog posts, so that each feature in the project can be replicated separately.

Detailed features to be implemented

Control Hub

The Control Hub Flask web app will run on boot and display the following:

- The live feed from the security camera, and the last 3 times movement was detected

- The temperature and humidity inside the house, using a DHT11 sensor

- The outside temperature, humidity and weather conditions, using the OpenWeatherMap current weather and forecast APIs

- The temperature outside the house, from the EnOcean temperature sensor

- When you enter or leave the house, the EnOcean magnetic contact sensor will detect this and trigger the relay.

- The Python version of the Amazon Echo API to activate Alexa; here I am thinking of using an arcade button to mimic the Amazon Dot functionality.

- And yes, how can the Control Hub be complete without music - I will have to install Mopidy ( https://www.mopidy.com/ ) to stream my Spotify playlists.

PiHubonly.jpg

<image above is a mashup of images in the Kit Section of the PiIoT design challenge>

I am going to have to 3D print a case, or may have to laser cut some acrylic at my local makerspace, given that the bed size of my 3D printer is small.

 

 

Spokes of the Hub

  • DIY Security camera (Currently Work in Progress using an A+ that I have handy)

Like any other security camera, this DIY security camera will have the basic features: the Pi camera feed will show up on the Control Hub, and there will be two buttons (as part of the web app) below the feed to take a picture and record a 1-minute video.

This will use a Pi B+ with one of the Pi cameras which comes as part of the kit.

SecuityCamera.jpg

  • Blinds Automation

The hardware for this: a Pi Zero (borrowed from a friend, who has agreed to let me keep his until I find him a replacement, with the added bonus of the new Pi Zero with camera connector) + WiFi adapter + continuous rotation servo or gear motor (I will need a motor driver if I am using a gear motor) + photocell and capacitor for detecting light (I currently have all these components) + 3D printed case.

This will need open and close buttons on the Control Hub screen.

 

BlindsAutomation.jpg

 

  • Keyless door entry

The hardware for this: the Pi B+ (used in the security camera - I will have to figure out the wiring to the door lock) + servo + 3D printed parts.

As part of this feature, you will be able to unlock the door if your smartphone is connected to the home WiFi router and you know the secret password. I have the 3D printed part and the servo parts ready from a previous project, and will have to figure out how/if I can use the Pi B+ from the security camera.

KeylessDoor.jpeg

 

  • Lights ON/OFF when you enter/leave

Here I plan to use the PiFace Digital 2 connected to the Control Hub Pi 3 + lamp + humidifier for the winter (connected to the second relay) + EnOcean push switch.

As part of this feature, the lamp will come on as soon as you enter your home, keeping the time of day in mind. In addition, in the winter the humidifier will come on, based on the DHT11 humidity reading inside the house, once you enter.

The relay can also be triggered using the EnOcean push switch; this will be handy when my mother visits me, as she does not have a fancy smartphone.
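The decision logic described above (lamp depending on the time of day, humidifier depending on season and humidity) can be sketched in a few lines of Python. The thresholds below - the "dark" hours and the 40% humidity limit - are placeholder values, not ones stated in the project:

```python
def on_entry_actions(hour, humidity, winter):
    """Decide which relays to trigger when the door contact sensor fires.
    hour is 0-23; humidity is the DHT11 reading in %RH.
    The thresholds (dark hours, 40% humidity) are placeholder values."""
    actions = []
    if hour >= 18 or hour < 7:        # assumed "dark" hours for the lamp
        actions.append("lamp_on")
    if winter and humidity < 40:      # assumed comfort threshold for humidifier
        actions.append("humidifier_on")
    return actions

print(on_entry_actions(hour=20, humidity=35, winter=True))
# ['lamp_on', 'humidifier_on']
```

On the Pi 3, each returned action would map to one PiFace Digital 2 relay output.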

 

If time permits (once I complete the hub and spokes mentioned above)

While waiting for the components to arrive, I thought I'd do something with my RPi B+. As my project revolves around the convergence of people and their things, a dashboard to view the data is an inevitable part of the system. This week I'm exploring the software design for such a dashboard for displaying the information. I'm going to use 'Freeboard' - a damn sexy real time dashboard. There is a cloud version of Freeboard available, but I'm going to self-host my Freeboard with a nodejs server (yes, I'm a little data/privacy freak). In this post, I'm discussing how to install the latest nodejs package on the RPi and how to host Freeboard using a nodejs server.

 

Installing latest Nodejs

Although an official nodejs version can be installed via apt-get, it is too old (mine reports v0.12). So the first thing is to go to the official nodejs site and download the latest version (v4.4.5 as of 5th Jun 2016). The folks at the nodejs team have made it really simple to install: just download the right package, extract it, and copy it to the system path - and you are ready. First go to https://nodejs.org/en/download/ and scroll down to the 'Additional Platforms' section. There you can find ARM binaries for ARMv6/7/8. For the B/B+ we need the ARMv6 binary; the B2 requires ARMv7. I'm not sure about the B3, because the test tool reports it as ARMv7 but it's actually ARMv8 - so I'm waiting for my B3 to test it. Now right-click on the binary build you want to download and select 'Copy Link Location' (in Firefox; I hope there is an equivalent in other browsers as well). This will copy the download link for the build you want. Now go to the Pi's command line and type in the following:

$ mkdir nodejs
$ cd nodejs
$ wget https://nodejs.org/dist/v4.4.5/node-v4.4.5-linux-armv6l.tar.xz 
$ tar -xvf node-v4.4.5-linux-armv6l.tar.xz
$ cd node-v4.4.5-linux-armv6l
$ sudo cp -R * /usr/local/

This should download, extract and install nodejs on your Pi B+. Replace the URL in the third command (the wget) with the corresponding URL for your Pi version. Now you can try the command below and you should get a similar output.

$ node -v
v4.4.5

If you are getting similar output, you have successfully installed nodejs on your RPi.

Let's test the installation by creating a simple server which listens on port 8080 and greets the visitor with a welcome message. Create a file named 'main.js' with the contents:

// simple nodejs server

var http = require( "http" );

// Create a server instance at port 8080
http.createServer( function(request, response) {
    // Send a 200 header
    response.writeHead( 200, {'Content-Type': 'text/plain'});

    // Response data
    response.end( 'Hello World, this is Raspberry Pi B+\n' );
}).listen( 8080 );

// Print a message
console.log( "Server started at PORT 8080\n" );

Save 'main.js', go to command line and type:

$ node main.js

It should show an output like:

nodejs-simpleServer-output

Now go to a web browser on your computer and visit the URL http://<rPi's IP>:8080/ and you will get a response like the one below:

nodejs-simpleServer-FFX

Serving Freeboard

Now we can create a server for hosting Freeboard and design dashboards. First you have to create a new folder to keep the webapp, then download Freeboard.

$ mkdir webapp
$ cd webapp
$ git clone https://github.com/Freeboard/freeboard.git 

Now create a new file named "server.js" inside the "webapp" directory with the contents below:

var http  = require("http"),
    url   = require("url"),
    path  = require("path"),
    fs    = require("fs"),
    mime  = require("mime"),
    port  = process.argv[2] || 8888;

http.createServer( function(request, response) {

  var uri = url.parse(request.url).pathname;
  var filename = path.join(process.cwd()+"/freeboard", uri );

  // console.log( "New Request: \n" +
  //              "    URI : " + uri + "\n" +
  //              "    file: " + filename + "\n" );
  
  fs.exists( filename, function(exists) {
    if( !exists ) {
      response.writeHead( 404, {"Content-Type": "text/plain"} );
      response.write( "404 Not Found\n" );
      response.end();
      return;
    }
    if ( fs.statSync(filename).isDirectory() ) {
      filename += '/index.html';
    }
    fs.readFile( filename, "binary", function(err, file ) {
      if(err) {        
        response.writeHead( 500, {"Content-Type": "text/plain"} );
        response.write( err + "\n" );
        response.end();
        return;
      }
      response.writeHead( 200, {"Content-Type": mime.lookup(filename)} );
      response.write( file, "binary" );
      response.end();
    });
  });
}).listen( parseInt(port,10) );

console.log( "Freeboard Server running as PORT " + port + "\nPress CTRL + C to stop" );

 

Save this file and enter command

$ node server.js

This will start the nodejs server, with the Freeboard editor loaded by default. Now go to http://<Pi's IP>:8888/ from your web browser. You will be able to see a webpage like the one below. Yayy... you are now successfully self-hosting Freeboard.

freeboard-default

 

The interface is so intuitive that it barely needs an introduction. You can first add sources for your data under "Datasources", then click on "Add Pane" to create widgets.

 

Designing your first damn-sexy dashboard

Since I don't have the hardware for the challenge yet, I'm going to display weather data from my current town and my hometown via Yahoo Weather. This is where Freeboard is really awesome: I can create a dashboard design, share that JSON file with a friend, and they instantly get the same dashboard. So I have already designed my dash - you just need to copy it and use it.

Go to the "freeboard" folder. Download and save the "dash_weather.txt" attachment in this folder, then rename it to "dash_weather.json".

Now you can visit http://<PI's IP>:8888/#source=dash_weather.json from your browser. This will display the dashboard, designed to show the weather from two places in India, my country.

dash

To edit the places, go to the Add Source section and edit the URLs to suit your location. You can get help from https://developer.yahoo.com/weather/ to find your place.

 

Now that I have a dashboard to display data from my sensors, I'm eagerly waiting for the hardware to arrive so I can start the build.

 

Happy hacking,

vish

 


In order to start developing any of the features in PiIoT - DomPi: Intro, there is a need to create the basic system, build the nodes and connect them. In this post I will describe the part of the architecture that should help deploy them.

 

Project Dashboard

The focus this week is on creating a simple project dashboard to track progress on the key developments of the nodes. There will be seven nodes.

 

Five of them will be Arduino based: kids' room, parents' room, living room, garage and garden. The Arduinos are perfectly fitted for the tasks these nodes need to execute: interact with some simple sensors, manage the RF communications and be a cost-effective solution. Additionally, yes, hehe, I have several Arduinos at home waiting to be used. Below is a dashboard with the current status - the intention is to move as many items as possible to green!

 

The two remaining nodes are Raspberry Pi based; they require more processing capacity than the previous ones and also interact with more complex hardware to display the home status. They are the Control Panel and Command Center nodes, and besides some communications and sensors, they will also display the key information on the TFT touchscreen of the project and on my TV. I plan to create a similar dashboard for these nodes in the coming weeks - they are a bit more complex and I want to give them more thought.

 

Project_Dashboard

 

As you can see, many lines are repeated across the modules to allow some code reuse. However, as you all know, this is not a factory... meaning that each time I will need to make sure the connections are right. Also, I have different Arduino types (Nano, Pro Mini, Uno, Mega), making it less of a copy-paste job. In the end there will be issues coming up, hence I decided to track the progress of each node independently.

 

Living room node. First steps

I want to start with some quick wins that show, here and at home, the potential of the project. I have started with two features - light control and IR detection - and I'm also having a look at the temperature and humidity sensors.

 

Light Control

At home I have three DIO Chacon plugs that are controlled via an RF remote control. They are like these ones. Googling for it, I have found people who have recreated the protocol, like here and here. Basically it consists of 26 bits, including a code for "your set", the plug number and whether it has to go on or off.

 

Chacon_packet

 

The first step is to discover my set's code. Luckily, there is some code around to dump the RF signal via the serial port and extract the header - as soon as I find the one that works for me, I will post the link. I will be using the most common 433MHz RF modules to receive and transmit, like those in the picture. Rx-Tx433Mhz

The idea is to connect the receiver RF module and dump the full packet sent by the original remote control. I will copy the code programmed in my set and will try to replicate the RF packet using the emitter module. If all goes well, I should be able to turn the three plugs at home on and off.
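To make the 26-bit packet idea concrete, here is a small pure-Python sketch that packs and unpacks such a frame. The field widths used here - a set code, a plug number and an on/off bit - are assumptions for illustration only; the real DIO/Chacon layout has to be taken from the dumped RF signal.

```python
SET_BITS, PLUG_BITS, STATE_BITS = 22, 3, 1   # assumed split of the 26 bits

def pack_frame(set_code, plug, on):
    """Pack the fields into a single 26-bit integer (set code in the MSBs)."""
    assert set_code < (1 << SET_BITS) and plug < (1 << PLUG_BITS)
    return (set_code << (PLUG_BITS + STATE_BITS)) | (plug << STATE_BITS) | int(on)

def unpack_frame(frame):
    """Recover (set_code, plug, on) from a 26-bit frame."""
    on = frame & 1
    plug = (frame >> STATE_BITS) & ((1 << PLUG_BITS) - 1)
    set_code = frame >> (PLUG_BITS + STATE_BITS)
    return set_code, plug, bool(on)

frame = pack_frame(set_code=0x2A5B3, plug=2, on=True)
print(f"{frame:026b}")          # the 26-bit frame as it would go over RF
print(unpack_frame(frame))      # (173491, 2, True)
```

Once the real bit layout is confirmed from the dump, only the three width constants need to change.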

 

Control lights with the TV remote control

The second feature I'm developing is to control the RF plugs with the TV remote control. Many times we are in the living room when it gets dark and, guess what, the plugs' remote control is far away, yep. However, the TV remote is always in the room, so I thought that, even better than controlling the plugs via the mobile phone (web server), it would be more comfortable with the TV remote control. For doing this I have connected a TSOP31238 IR receiver, like the one in the picture, directly to pin 9 of the Arduino. IR_receiver

 

The process will be similar to the one for the RF packet. I plan to capture the IR code my remote control sends and use it to turn the Chacon plugs from the previous paragraph on or off. Thinking about which buttons to hijack from the TV remote, I need those that are seldom used at home and that have no effect on the TV while we watch it. For example, I can't use any number button to turn light 1 on/off since it would interfere with us watching the television and... you don't want to mess around with your hobby things and your family members, hehe. I'm thinking of using the red-yellow-green-blue buttons that are mostly used in the TV menus. I have tested them and, while we are watching TV, they produce no effect. The Arduino library IRremote.h comes with an example to dump IR codes, so I will use it for this purpose. Let's see how this evolves.
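The mapping from color buttons to plugs boils down to a small dispatch table. The sketch below illustrates the idea in Python; the hex codes are made-up placeholders - the real ones come from IRremote's dump example - and unknown codes (all the other TV buttons) are simply ignored:

```python
# Map IR remote buttons to RF plugs. The hex codes are placeholders;
# the real values come from IRremote's dump example on the Arduino.
BUTTON_TO_PLUG = {
    0x20DF4EB1: 1,   # red    -> plug 1 (assumed code)
    0x20DF8E71: 2,   # green  -> plug 2 (assumed code)
    0x20DFC639: 3,   # yellow -> plug 3 (assumed code)
}

plug_state = {1: False, 2: False, 3: False}

def handle_ir_code(code):
    """Toggle the matching plug; ignore unknown codes (other TV buttons)."""
    plug = BUTTON_TO_PLUG.get(code)
    if plug is None:
        return None
    plug_state[plug] = not plug_state[plug]
    return plug, plug_state[plug]

print(handle_ir_code(0x20DF4EB1))  # (1, True) - plug 1 toggled on
print(handle_ir_code(0x12345678))  # None - not one of our buttons
```

Ignoring unknown codes is what keeps the rest of the TV remote working normally while we watch.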

 

Temperature and humidity

At home I have a DHT11 sensor that measures both humidity and temperature. In principle I could use this sensor; however, I have realized that the temperature provided by the DHT11 has no decimals, meaning the precision is 1ºC. In theory this should be OK, but since we have another thermometer at home that measures to 0.1ºC, I need to avoid comparisons between my system and any other device that may look better. This means I need to provide at least the same accuracy, and here the Dallas DS18B20 sensor enters into play. It has a resolution of up to 0.0625ºC - fair enough!

The drawback is that I will need two components per node instead of only one. In principle, all of the Arduinos have enough digital pins, so that should not be a problem.
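The 0.0625ºC figure comes from the DS18B20's 12-bit default resolution: the sensor reports a signed 16-bit raw value in units of 1/16ºC. A minimal Python sketch of the conversion (the raw values below are made-up examples, not real readings):

```python
def ds18b20_raw_to_celsius(raw):
    """Convert a DS18B20 16-bit raw reading (12-bit resolution) to
    degrees Celsius. One LSB is 1/16 = 0.0625 degC."""
    if raw & 0x8000:          # negative temperatures use two's complement
        raw -= 1 << 16
    return raw / 16.0

print(ds18b20_raw_to_celsius(0x0191))  # 25.0625 degC
print(ds18b20_raw_to_celsius(0xFF5E))  # -10.125 degC
```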

OpenHAB Pi - my home automation center

 

I've got an idea that's easily divided into smaller parts for easier implementation. The first part to be made is a center for home automation. To be exact, I'm automating my room. I've got light switches, power sockets and window blinds to be automated, and I want a security system for my room. Once I'm there, I can throw in more features! However, I'll skip the part about what I'm going to do and tell you about how I'm going to do things.

 

20160604_194825.jpg

 

The Pi Model B I'm using is scarred from all the experiments it's been in - which you can see from all the shoddy soldering, mods and missing connectors. However, it's still alive, the GPIOs haven't been damaged, and therefore it's suitable for all of my home automation needs. Furthermore, it's going to be out of sight most of the time, so looks are even less important. I've bought the most crucial components for my implementation, and that should be enough to get started.

 

First of all, I want to make a box for it. I think it should be made from acrylic, since I like the look and feel of layered acrylic boxes. It's going to have power, USB and input/output connections accessible on headers. If I ever need to remove/add things or change the connectors, I just plug/unplug the connectors and that's it. It will also be portable, as a result - though I can't imagine taking my home automation center somewhere =) My motto is "if it's not neat, it will not work reliably" and having an enclosure only helps the "neat" part.

What's in the box though?

 

Peripherals:

 

  1. A GPIO-connected 16x4 display - for pyLCI
  2. A keypad found in my hackerspace connected through I2C with a nice faceplate - for pyLCI
  3. An Arduino for monitoring voltages in the box, possibly notifying about blown fuses
  4. A USB hub for external USB ports
  5. A USB-to-RS485 dongle for connecting Arduinos over RS485 line
  6. A USB-Bluetooth dongle for iBeacon (to be exact, Eddystone Bluetooth beacon)
  7. A PiFace Digital 2 shield - 8 inputs, 8 outputs, for the most basic of tasks such as getting light switch state and controlling LED lamps
  8. A USB-UART adapter connected to Pi's serial console for debugging in the worst situations when everything, including pyLCI, has crashed and debugging is needed
  9. An external LED strip that's powered while the Pi is booting - to make sure there is at least some light in the room while OpenHAB is not running yet and lights are not yet controllable
  10. 12V 10A PSU for powering everything in the box, the distant Arduinos on RS485 and some random things on my workbench, such as a soldering iron, breadboard PSUs for prototyping and a USB charging station
  11. A DC-DC set to 5 volts for powering the Pi and other 5V peripherals
  12. 2 power rails (+12V and GND) for easier wiring of 12V rail inside the box
  13. Fuse box inside the enclosure to make sure externally accessible power connectors can't cut the power inside by accident and therefore disable the entire box

 

Possible additions:
  • An emergency override board to switch to manual input/output control in case OpenHAB glitches out
  • Some temperature sensors for measuring the PSU temperature to make sure nothing bad happens because of overheating
  • A simple soundcard for playing emergency tones/notifications
  • A UPS based on Li-ion batteries for cases when the power runs out
  • A WiFi dongle for emergency automatic connections to an external WiFi
  • A cheap phone or a GSM module to send notification SMS and, possibly, receive commands from me
  • An IR receiver to control the basic functions from an IR remote

 

The hardware parts have been gathered. In a week's time, I'll assemble everything and start installing the software.

In the introduction to this project, only a brief idea was presented: a smart home that also includes a competition system to get its residents involved. There was some discussion of the communication among devices and of functions that can or should be included, but not much planning and few specifics. What will the smart home consist of? When do we start building the competition system, and how far do we go with the games? Is there even enough time in 14 weeks to do so? In this entry, I will break the general project into functional blocks.

At the end, there is a tentative schedule: it never hurts to have a timeline to look back at and discover how far behind these tentative deadlines I am.

Well, let's get to it!

 

Work & Milestones

 

We can divide the whole project into two main subsystems: the smart home and the competition system. The intention is to build the competition system as the main feature of the whole smart home.

(NOTE: More details on software installed/ coded and hardware designed/connected will be provided at each stage).

 

SMART HOME

 

Basics

Connectivity setup: MQTT

Devices: Raspberry Pi 3, Raspberry Pi 1, smartphone

How to:

  • Broker installed on Raspberry Pi 3
  • Publisher client on Raspberry Pi 1
  • Subscriber client on the smartphone
  • Subscriber client on Raspberry Pi 3

Sensors reading

Devices: Raspberry Pi 1, Raspberry Pi 3

How to:

  • Sensors type 1: I2C protocol – connect to the corresponding Raspberry Pi 1 I2C ports
  • Sensors type 2 (door switch, alarm button): direct connection to the Raspberry Pi 1 GPIO ports
  • Raspberry Pi 1: reads the GPIO ports and implements the MQTT publisher client -> sends data to Raspi 3
  • Raspberry Pi 3: implements the MQTT broker

Data storage

Devices: Raspberry Pi 3

How to:

  • Implements an MQTT subscriber client -> reads data from Raspi 1
  • MySQL database to store the data

GUI – general home access

Devices: Raspberry Pi 3

How to:

  • Same MQTT subscriber client
  • Displays the read data

Mobile app – individual home access

Devices: smartphone

How to:

  • Implements an MQTT subscriber client -> reads data from Raspi 1
  • Displays the info

Web portal – remote access

Devices: Raspberry Pi 3

How to: TBA
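As a sketch of the MQTT plumbing planned above (broker on the Pi 3, publisher on the Pi 1, subscribers elsewhere), the snippet below mimics the publish/subscribe flow with a tiny in-memory broker. The real system would run Mosquitto and a client library such as paho-mqtt; the topic name `home/pi1/door` is purely an assumption for illustration.

```python
import json

class TinyBroker:
    """Minimal in-memory stand-in for an MQTT broker (Mosquitto in the real setup)."""
    def __init__(self):
        self.subscribers = {}  # topic -> list of callbacks

    def subscribe(self, topic, callback):
        self.subscribers.setdefault(topic, []).append(callback)

    def publish(self, topic, payload):
        # Deliver the payload to every client subscribed to this topic
        for cb in self.subscribers.get(topic, []):
            cb(topic, payload)

DOOR_TOPIC = "home/pi1/door"   # hypothetical topic layout

broker = TinyBroker()          # lives on the Raspberry Pi 3
received = []

# Subscriber client (Raspberry Pi 3 / smartphone side)
broker.subscribe(DOOR_TOPIC, lambda t, p: received.append(json.loads(p)))

# Publisher client (Raspberry Pi 1 side): publish a GPIO door-switch reading
broker.publish(DOOR_TOPIC, json.dumps({"sensor": "door", "open": True}))

print(received)  # [{'sensor': 'door', 'open': True}]
```

The loose coupling is the point: the publisher never knows who consumes the reading, so the smartphone app and the MySQL logger can subscribe independently.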

Extra 1: Announcement board

User sets a task or announcement

Devices: Raspberry Pi 3

How to: include a menu to input:

  • Notes
  • Tasks that should be finished within a deadline (e.g. cleaning)

Data storage

Devices: Raspberry Pi 3

How to: database

Display tasks/announcements

Devices: Raspberry Pi 3

How to: update the main GUI to include an “Announcements” tab
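The announcement-board storage could look something like the following sketch. It uses SQLite in memory for brevity, while the plan above calls for the shared home database; the table and column names are hypothetical.

```python
import sqlite3

# Hypothetical schema for the announcement board
conn = sqlite3.connect(":memory:")  # the real board would use the home MySQL database
conn.execute("CREATE TABLE announcements (id INTEGER PRIMARY KEY, text TEXT, deadline TEXT)")

def add_task(text, deadline):
    """Store a task with an ISO-date deadline, as entered from the input menu."""
    conn.execute("INSERT INTO announcements (text, deadline) VALUES (?, ?)",
                 (text, deadline))

def open_tasks(today):
    """Tasks whose deadline has not passed yet, for the 'Announcements' GUI tab."""
    rows = conn.execute(
        "SELECT text, deadline FROM announcements WHERE deadline >= ? ORDER BY deadline",
        (today,))
    return rows.fetchall()

add_task("cleaning", "2016-06-10")
add_task("buy groceries", "2016-06-05")
print(open_tasks("2016-06-06"))  # [('cleaning', '2016-06-10')]
```

Storing deadlines as ISO dates keeps the comparison a plain string comparison, which works in both SQLite and MySQL.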

 

 

COMPETITION SYSTEM

 

Basic: Run competition

Record the user’s run distance

Devices: Android smartphone

How to: mobile app:

  1. Record the run distance, either:
    • using a maps framework to get the miles, or
    • counting steps / using the phone’s gyroscope
  2. Send the distance to the smart home (to the home server’s IP address)

Data storage

Devices: Raspberry Pi 3

How to:

  • Implement the home server – Apache
  • Create a PHP interface to fetch the data coming from the phone
  • Store the data in the smart home database – MySQL

Display data

Devices: Raspberry Pi 3

How to: update the home GUI:

  • Individual data
  • General table with the best results
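For the "maps framework" option, the distance of a recorded run can be computed by summing great-circle (haversine) distances between consecutive GPS fixes. A minimal sketch - the sample track coordinates are made up:

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two GPS fixes, in kilometres."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))  # 6371 km = mean Earth radius

def run_distance_km(track):
    """Sum the leg distances of a recorded (lat, lon) track."""
    return sum(haversine_km(*a, *b) for a, b in zip(track, track[1:]))

# Made-up track: ~1 km east, then ~1 km north
track = [(40.0, -3.0), (40.0, -2.99), (40.01, -2.99)]
print(round(run_distance_km(track), 2))  # ≈ 1.96
```

Summing per-fix legs also tolerates pauses in the run: a fix recorded while standing still simply contributes a near-zero leg.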

Extra 1: Tourist/Discovery system

New destination selection

Devices: Raspberry Pi 3

How to:

  • Select a reasonable location to visit
  • Display it on the home GUI
  • Allow remote access to the selected location

Mobile app – geolocation

Devices: Android smartphone

How to: update the mobile app:

  • Use the maps framework to detect the person’s location
  • Read the new location from Raspi 3
  • Send a notification when the person gets to that location

Extra 2: Smart house inner challenges

TBA

 

 

 

 

Schedule

 

1-2 Weeks (Monday) 23 May 2016 – 5 June 2016

  • Application
  • Initial set up and planning
  • Build “Basic smart house” I
    • Connectivity setup
    • Sensors reading
    • Simple Interface

3-4 Weeks (Monday) 6 June 2016 – 19 June 2016

  • Build “Basic smart house” II
    • Data storage
    • Mobile App
    • Improved GUI
    • Web portal
  • Build Basic run competition I
    • Raspi 3 server

5-6 Weeks (Monday) 20 June 2016 – 3 July 2016

  • Build Basic run competition II
    • Mobile app
  • Test run competition I
    • Distance calculations
    • Mobile to server communication
  • Raspi 3 GUI updated

7-8 Weeks (Monday) 4 July 2016 – 17 July 2016

  • Extra 1 – discovery/tourist competition

9-10 Weeks (Monday) 18 July 2016 – 31 July 2016

  • Extra 1 – announcement board

11-12 Weeks (Monday) 1 August 2016 – 14 August 2016

  • Extra 2 – smart house inner challenges

13-14 Weeks (Monday) 15 August 2016 – 28 August 2016

  • Clean up: final modifications
  • Prepare project submission

(Monday) 29 August 2016 -  Project submission

To make the Thuis app as flexible and efficient as possible, a solid architecture is needed. Different systems prioritize different properties; for Thuis, the most important ones are providing the wanted functionality, reliability, usability and adaptability.

Functionality and usability are the most obvious: without the wanted functionality the system doesn't do anything, and when it's not easier to use than the non-automated (old-fashioned) situation, it won't be used at all. Thuis should be reliable because it influences a lot of functionality in the house: if, for example, the lights don't work, that can be problematic when it's dark. This includes recoverability as well: whenever something goes wrong, the system should not end up in a non-working state, but should at least keep its basic functionality. The last property is adaptability, which is important since products on the market evolve rapidly: it should be possible to keep up with the latest products and easily add new hardware and software to the system.

 

Architecture overview

Software Architecture

This blog provides a general overview of the architecture. In the upcoming blogs I will go in-depth into each module.

 

Core

At the heart of Thuis there is a Java EE application running in a WildFly container on a Raspberry Pi 3. On the same node, Z-Way (a Z-Wave controller), Mosquitto (an MQTT broker) and a database (which one is to be decided later) are running. They share the same node to save budget, but because of the modular setup they can be split across multiple nodes.

Initially the core application will be built around an MQTT observer: it subscribes to all available topics and knows what to do when certain messages arrive. All the rules live here: the core has knowledge of all devices and keeps their status up-to-date. Different types of commands can be linked to each device and executed for them. Execution happens in a prioritized JMS queue, which makes it possible to execute commands in a predefined order and to prioritize user-initiated actions above background tasks.
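Although the core is a Java EE application using a JMS queue, the prioritization idea can be illustrated with a short Python sketch; the names and priority levels here are assumptions, not the Thuis API:

```python
import heapq
import itertools

USER, BACKGROUND = 0, 1  # lower number = higher priority (assumed levels)

class CommandQueue:
    """Toy model of a prioritized command queue (JMS in the real Thuis core)."""
    def __init__(self):
        self._heap = []
        self._order = itertools.count()  # preserves FIFO order within one priority

    def submit(self, priority, command):
        heapq.heappush(self._heap, (priority, next(self._order), command))

    def drain(self):
        # Pop commands in priority order, FIFO within each priority
        while self._heap:
            _, _, command = heapq.heappop(self._heap)
            yield command

q = CommandQueue()
q.submit(BACKGROUND, "refresh device status")
q.submit(USER, "turn on kitchen light")   # user action jumps the queue
print(list(q.drain()))  # ['turn on kitchen light', 'refresh device status']
```

The tie-breaking counter matters: without it, two commands of equal priority would be compared by their payloads instead of arrival order.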

 

Communication

Communication between the core modules takes place through MQTT. This is a connectivity protocol designed for machine-to-machine communication, especially in IoT environments. It's extremely lightweight and provides publish/subscribe communication.

MQTT in this case is used to provide a bus structure, somewhat comparable to the CAN bus used in cars. Each node can publish messages to any topic, and those messages are delivered to every node subscribed to that topic. For example, when Z-Way detects movement in the kitchen, it publishes this to the topic Thuis/kitchen/movement. Another node subscribed to that topic can then take action by turning on a light. This way the coupling between the different nodes is very loose and devices are easy to exchange.
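Topic subscriptions in MQTT can also use wildcards: `+` matches a single level and `#` matches the remainder of the topic. A simplified matcher showing how a subscription like Thuis/+/movement would catch the kitchen example (real brokers handle more edge cases, e.g. `$`-prefixed topics):

```python
def topic_matches(pattern, topic):
    """Simplified MQTT topic matching: '+' matches one level, '#' the rest."""
    p, t = pattern.split("/"), topic.split("/")
    for i, part in enumerate(p):
        if part == "#":
            return True            # multi-level wildcard swallows the remainder
        if i >= len(t) or (part != "+" and part != t[i]):
            return False           # topic too short, or literal level mismatch
    return len(p) == len(t)        # all levels consumed exactly

print(topic_matches("Thuis/+/movement", "Thuis/kitchen/movement"))  # True
print(topic_matches("Thuis/#", "Thuis/kitchen/movement"))           # True
print(topic_matches("Thuis/+/movement", "Thuis/kitchen/light"))     # False
```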

While MQTT is available for a lot of platforms, it's not possible to hook everything up to it directly. That's why some modules use a bridge. The most important examples are the Z-Wave devices: Z-Wave nodes can only communicate with each other and with the controller, over Z-Wave. In this case the controller acts as a bridge connecting them to the other parts of the system. The same goes for services that have an API but no extension points, like Plex; for these, the Core will act as the bridge.

 

User interaction

As mentioned in the proposal, user interaction takes place in several ways. From the start these are physical buttons (Z-Wave), some iPhones and an iPad; a web and voice interface will be added later. Just like any other module, they communicate through MQTT. To keep the interfaces easily adjustable and maintainable, several UI components will be developed, each connected to one or more MQTT topics. They automatically update based on a subscription and can publish messages to a topic when a user interacts with them.

 

This should give you some more insight into the building blocks of the Thuis system. Next up will be the setup of the MQTT broker and the first integration: Z-Way.

 

Puppet is an open-source configuration management tool that helps automate the deployment and management of files and applications on target hosts. A puppet master contains the definition of the desired configurations in files called manifests or modules. Agents can query the master in order to know which configuration changes to apply.

Screen Shot 2016-06-02 at 19.16.55.png
puppet labs logo from Puppet Keynote by Luke Kanies

 

 

In this post, I will cover the installation of the puppet master and puppet agent, along with a simple example manifest. I intend to fully puppetize my project, delivering a set of manifests and modules, making it possible to recreate it from scratch, with minimal effort.

 

Let's get puppetizing!

 

Preparation

 

For this guide, I am using a Raspberry Pi 3 running the latest version of Raspbian Jessie.

 

After connecting via SSH (via ethernet or wifi), the file system was expanded using "raspi-config":

Screen Shot 2016-05-29 at 14.29.35.png

 

Using the same application, the hostname was changed to "puppet":

Screen Shot 2016-05-29 at 14.29.58.pngScreen Shot 2016-05-29 at 14.30.07.png

 

Finally, the software has been fully updated:

 

pi@puppet:~ $ sudo apt-get update && sudo apt-get upgrade -y

 

With that taken care of, we can proceed to the actual installation.

 

Puppet Master

 

The puppet master holds the definition of every node by means of reusable modules. A node is then defined by a specific combination of modules, in order to obtain a specific role.

 

Screen Shot 2016-06-02 at 19.14.57.png
Lifecycle of a Puppet Run from Puppet Keynote by Luke Kanies

 

 

 

Installation & Configuration

 

Installing puppet master is straightforward and can be done using the following command:

 

pi@puppet:~ $ sudo apt-get install puppetmaster-passenger

Reading package lists... Done
Building dependency tree
Reading state information... Done
The following extra packages will be installed:
apache2 apache2-bin apache2-data apache2-utils augeas-lenses facter hiera libapache2-mod-passenger libapr1 libaprutil1 libaprutil1-dbd-sqlite3 libaprutil1-ldap libaugeas0 libev4 libjsoncpp0 liblua5.1-0 libpci3 pciutils puppet-common puppetmaster-common
ruby-activemodel ruby-activerecord ruby-activerecord-deprecated-finders ruby-activesupport ruby-arel ruby-atomic ruby-augeas ruby-blankslate ruby-builder ruby-hiera ruby-i18n ruby-json ruby-minitest ruby-passenger ruby-rack ruby-rgen ruby-safe-yaml ruby-selinux
ruby-shadow ruby-thread-safe ruby-tzinfo ssl-cert virt-what
Suggested packages:
apache2-doc apache2-suexec-pristine apache2-suexec-custom augeas-doc mcollective-common augeas-tools ruby-rrd librrd-ruby puppet-el ruby-ldap ruby-stomp stompserver vim-puppet ruby-builder-doc rails ruby-passenger-doc openssl-blacklist
The following NEW packages will be installed:
apache2 apache2-bin apache2-data apache2-utils augeas-lenses facter hiera libapache2-mod-passenger libapr1 libaprutil1 libaprutil1-dbd-sqlite3 libaprutil1-ldap libaugeas0 libev4 libjsoncpp0 liblua5.1-0 libpci3 pciutils puppet-common puppetmaster-common
puppetmaster-passenger ruby-activemodel ruby-activerecord ruby-activerecord-deprecated-finders ruby-activesupport ruby-arel ruby-atomic ruby-augeas ruby-blankslate ruby-builder ruby-hiera ruby-i18n ruby-json ruby-minitest ruby-passenger ruby-rack ruby-rgen
ruby-safe-yaml ruby-selinux ruby-shadow ruby-thread-safe ruby-tzinfo ssl-cert virt-what
0 upgraded, 44 newly installed, 0 to remove and 0 not upgraded.
Need to get 5,905 kB of archives.
After this operation, 23.8 MB of additional disk space will be used.
Do you want to continue? [Y/n]

 

It will install the necessary dependencies.

 

Once installed, you should be able to query puppet to verify the installed certificates:

 

pi@puppet:~ $ sudo puppet cert --list --all

+ "puppet.home" (SHA256) 03:2D:C0:4A:33:FC:C7:A0:3B:45:A0:41:DC:B6:B4:97:3B:B3:32:39:67:3B:F5:69:17:C4:B9:46:50:EE:D7:99 (alt names: "DNS:puppet", "DNS:puppet.home")

 

When a new agent attempts to perform a puppet run, a certificate request is generated on the master. Only once that certificate is signed by the master will the agent be able to successfully perform the puppet run.

 

To start or stop the puppet master, simply call the apache2 service:

 

pi@puppet:~ $ sudo service apache2 status|start|stop|restart

 

Manifests & Modules

 

In puppet, manifests describe the configuration actions to be performed. This can be as simple as ensuring a certain file with specific content is present (or absent), but also more advanced like automatically installing and configuring a certain application.

 

To test the installation of the puppet master, a simple manifest can be used.

 

pi@puppet:~ $ sudo nano /etc/puppet/manifests/site.pp

file { '/tmp/testfile':
  ensure  => present,
  mode    => '0444',
  content => 'test content',
}

 

This manifest will ensure the file "testfile" is present in the "/tmp" folder, with read permission containing the text "test content". If the file doesn't exist, or the permissions or content are not as expected, puppet will correct it accordingly.

 

This example showcases a standalone manifest. Manifests can be combined with files and templates into a module. The purpose of a module is to complete a set of related actions, such as installing an application and configuring it.

 

Let's install the agent and test out this simple manifest.

 

Puppet Agent

 

The puppet agent's role is to contact the puppet master, retrieve the local machine's configuration and apply it.

 

For this purpose, I used a second Raspberry Pi. I used the same preparation steps, except I gave it a different hostname: "piiot1".

 

Installation

 

Installing and activating the agent is as easy as installing the master:

 

pi@piiot1:~ $ sudo apt-get install puppet

Reading package lists... Done
Building dependency tree
Reading state information... Done
The following packages were automatically installed and are no longer required:
  libxfce4ui-1-0 xfce-keyboard-shortcuts
Use 'apt-get autoremove' to remove them.
The following extra packages will be installed:
  augeas-lenses facter hiera javascript-common libaugeas0 libjs-jquery libpci3 libruby2.0 libruby2.1 libyaml-0-2 pciutils puppet-common ruby ruby-augeas ruby-hiera ruby-json ruby-rgen ruby-safe-yaml ruby-selinux ruby-shadow ruby2.0 ruby2.1 rubygems-integration
  virt-what
Suggested packages:
  augeas-doc mcollective-common apache2 lighttpd httpd augeas-tools puppet-el vim-puppet etckeeper ruby-rrd librrd-ruby ri ruby-dev bundler
The following NEW packages will be installed:
  augeas-lenses facter hiera javascript-common libaugeas0 libjs-jquery libpci3 libruby2.0 libruby2.1 libyaml-0-2 pciutils puppet puppet-common ruby ruby-augeas ruby-hiera ruby-json ruby-rgen ruby-safe-yaml ruby-selinux ruby-shadow ruby2.0 ruby2.1 rubygems-integration
  virt-what
0 upgraded, 25 newly installed, 0 to remove and 10 not upgraded.
Need to get 8,866 kB/8,876 kB of archives.
After this operation, 36.1 MB of additional disk space will be used.
Do you want to continue? [Y/n]

 

Once the agent is installed, an initial "puppet run" is required to generate the certificate on the master:

 

pi@piiot1:~ $ sudo puppet agent -t

Info: Creating a new SSL key for piiot1.home
Info: csr_attributes file loading from /etc/puppet/csr_attributes.yaml
Info: Creating a new SSL certificate request for piiot1.home
Info: Certificate Request fingerprint (SHA256): D2:C5:40:64:A0:B9:C2:90:A0:19:EE:BF:24:3F:87:95:3E:33:38:26:48:07:1B:6B:A7:BD:10:73:D3:F7:75:F7
Exiting; no certificate found and waitforcert is disabled

 

On the puppet master, it is possible to list the certificates. The certificate for node "piiot1" is not signed yet, preventing the actual execution of the manifest.

 

pi@puppet:~ $ sudo puppet cert --list --all

"piiot1.home" (SHA256) D2:C5:40:64:A0:B9:C2:90:A0:19:EE:BF:24:3F:87:95:3E:33:38:26:48:07:1B:6B:A7:BD:10:73:D3:F7:75:F7
+ "puppet.home" (SHA256) 03:2D:C0:4A:33:FC:C7:A0:3B:45:A0:41:DC:B6:B4:97:3B:B3:32:39:67:3B:F5:69:17:C4:B9:46:50:EE:D7:99 (alt names: "DNS:puppet", "DNS:puppet.home")

 

On the same master, the certificate can be signed, as the node is known to be part of our setup.

 

pi@puppet:~ $ sudo puppet cert sign piiot1.home

Notice: Signed certificate request for piiot1.home
Notice: Removing file Puppet::SSL::CertificateRequest piiot1.home at '/var/lib/puppet/ssl/ca/requests/piiot1.home.pem'

 

A "+" sign is added in front of the certificate, indicating it is signed.

 

pi@puppet:~ $ sudo puppet cert --list --all

+ "piiot1.home" (SHA256) 23:44:D3:B6:70:68:77:F9:9E:A7:EB:72:09:E9:F1:67:FA:24:53:47:99:BB:D9:2F:74:A1:CC:3E:50:76:55:01
+ "puppet.home" (SHA256) 03:2D:C0:4A:33:FC:C7:A0:3B:45:A0:41:DC:B6:B4:97:3B:B3:32:39:67:3B:F5:69:17:C4:B9:46:50:EE:D7:99 (alt names: "DNS:puppet", "DNS:puppet.home")

 

With the certificate signed, it is now possible to perform a puppet run and execute the actions defined in the manifest.

 

Puppet Run

 

Repeating the same command to perform the puppet run, now results in the execution of the manifest defined on the master:

 

pi@piiot1:~ $ sudo puppet agent -t

Info: Caching certificate for piiot1.home
Info: Caching certificate_revocation_list for ca
Info: Caching certificate for piiot1.home
Info: Retrieving pluginfacts
Info: Retrieving plugin
Info: Caching catalog for piiot1.home
Info: Applying configuration version '1464532909'
Notice: /Stage[main]/Main/File[/tmp/testfile]/ensure: created
Notice: Finished catalog run in 0.10 seconds

 

The test file has been created, has the required permissions and content:

 

pi@piiot1:~ $ ls -l /tmp/testfile

-r--r--r-- 1 root root 12 May 29 16:41 /tmp/testfile

 

pi@piiot1:~ $ cat /tmp/testfile

test content

 

Conclusion

 

Success! Puppet master and agent have successfully been deployed. We can now continue by defining more elaborate manifests and modules which will help us simplify future deployments.

 

It may be hard to grasp the advantages of puppet from this small example, but instead of deploying a simple file, imagine a puppet run installing and configuring a whole set of applications. Sounds rather neat, right? That's what you can expect in the next post!

 

 



OVERVIEW


I would like to propose my project for the Pi IoT – Smarter Spaces with Raspberry Pi 3 contest: a Remote Horse Feeder System (a.k.a. Smart Horse Stall) using a Raspberry Pi 3 as the home (main) control unit and gateway, and a Raspberry Pi B+ as the stall (remote) controller for the feeder system and environment monitor unit. Sensors located in the horse stall indicate when a horse is in his stall ready to be fed: a PIR sensor sends a signal to the main control unit, triggering a visual and audio alert (and perhaps an email notice of the event).

The system shall also let the user program a feeding schedule to automate feeding times, and shall monitor feed usage on a daily, weekly or monthly basis. Both the home unit and the stall unit shall have a camera and audio, so the user can interact with the horse(s) remotely from the home unit. The stall system shall control a motorized feeder that dispenses the appropriate amount of horse feed automatically at a preprogrammed time, or via a manual override if needed. It shall also monitor the amount of feed available and notify the user when the feed reaches a preprogrammed minimum on-hand amount. Sensors in the stall shall monitor the temperature, humidity and condition of the stall, which will be displayed on the home unit's LCD display. Optimistically, the Horse Feeder system shall also provide Cloud-based access, for remotely monitoring the system when there is no access to the home unit.

The motivation for this came from my father-in-law, who is getting older and cannot take care of his horses like he used to; the system is intended to let him keep in contact with his horses when he is not able to go outside to do so.
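As a sketch of how the stall controller might check the preprogrammed feeding schedule, the snippet below decides whether a feeding is due; the schedule format, times and scoop counts are assumptions for illustration, not part of the proposal.

```python
from datetime import datetime, time

# Hypothetical schedule: list of (feeding time, scoops of feed)
SCHEDULE = [(time(7, 0), 2), (time(17, 30), 2)]

def due_feeding(now, last_fed):
    """Return the scoops to dispense if a scheduled feeding has come due
    since the last feeding, else None (nothing to do)."""
    for feed_time, scoops in SCHEDULE:
        scheduled = datetime.combine(now.date(), feed_time)
        if last_fed < scheduled <= now:
            return scoops
    return None

now = datetime(2016, 6, 10, 17, 35)       # 17:35 - evening feeding has passed
last_fed = datetime(2016, 6, 10, 7, 5)    # morning feeding already done
print(due_feeding(now, last_fed))  # 2
```

A loop on the stall Pi could poll this every minute and drive the motorized feeder only when a scoop count is returned, which also makes the manual override trivial (just record a feeding timestamp).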

HARDWARE REQUIREMENTS

 

Home (main) unit:

Raspberry Pi 3

8 MP Camera (Indoor)

Raspberry Pi LCD 7” Display

EnOcean Pi

 

 

Stall (Remote) unit

Raspberry Pi B+

Pi Noir Camera (outdoor)

Sense Hat

EnOcean Sensor Kit

Pi Rack (To connect sensor Kits)

PiFace Digital 2

WiFi Dongle for Raspberry Pi B+

 

 

SOFTWARE DEVELOPMENT TOOLS

 

Embedded Linux

OpenHAB (will be considered)

Python

C++,Java, JS

Node.js

BlueZ

MQTT

Android / iOS (opportunistic)

 

The "Pi IOT Design Challenge - Smarter Spaces with Raspberry Pi 3" is a very interesting challenge, and I want to present my idea for a possible implementation.

 

     Home automation aims to improve the quality of life in a living space by automating environment control (temperature, humidity, ventilation, air quality, etc.), security and the operation of various appliances. But it is hard, and sometimes not practical, to automate all aspects of a living or working space. By putting remote access on top of an automated system, the people concerned about a specific space can be notified about unplanned events and act accordingly, or can change the parameters of the automated system to suit new requirements.

 

       The goal of my project is to automate and connect my living space, with the aim of building a solution that can be generalized and scaled to other types of living/working spaces.

  For my specific use case, this attempt will cover three main areas:

 

  - environment control - regulate temperature, humidity and ventilation with the help of two ventilation units, an air conditioning unit and motorised blinds.

  - security - record movement outside the front door with the Raspberry Pi camera, take a picture of the person ringing the doorbell and send it via email, and check whether the front and balcony doors are locked when leaving home.

  - appliance control and monitoring - control the aquarium and hamster lights and feeders, and water the plants, especially when I'm not at home.

 

 

  The system will have the following components:

 

- A master control unit - Raspberry Pi 3 + Camera + PiFace Digital 2 - the master part of the system, providing the following functions:

  - an interface to display status and control system functions, via a web page

  - snap a picture when someone rings the doorbell

  - motion detection

  - send notifications for events

  - communicate with actuators directly or through relays (PiFace Digital 2) - for example to control water for the plants, lights for the aquarium, etc.

  - it will be mounted directly on the front door, because there it has direct access to the door visor, is close to the plant pots, and is in the perfect spot for a last check when leaving home.

 

- A mobile control unit - Raspberry Pi B+ + WiFi dongle + temperature/humidity sensor - a mobile control and sensing unit, providing the following functions:

  - temperature and humidity measurement - the readings are sent to the master unit, which takes the actions required to bring the environment parameters to the desired values (or as close as possible).

  - being mobile, it can be placed anywhere to take measurements and, optionally, provide video/audio surveillance

  - backup for remote access and control of appliances if the master is not available

 

- Actuators and end effectors:

  - ventilation and air conditioning units - used to control temperature and humidity

  - ventilation units are now manually controlled by radio (remote controlled power outlets)

  - air conditioning unit - IR manual control

  - aquarium - power outlets remote-controlled by the Raspberry Pi

  - blinds - will be actuated by standard hobby servos

  - water pump for plants

  - feeders for the hamster and aquarium will also be made with hobby servos

 

  I will attempt to implement all these functions by creating independent nodes, where each node controls an actuator or effector. These nodes are remotely connected to the master control unit or the mobile control unit, over radio, infrared or a combination of these.

 

  I will present a usage scenario designed for my particular case (please look at the picture for a better understanding).

apt_new.png

  My apartment is on the third floor, has an east-west orientation and is structured on two levels: on the lower level are the living room, bathroom and kitchen; on the top level, two bedrooms.

I have plants on both the eastern and western windows; the trouble is, they are sensitive to direct sunlight. So in the morning I have to close the blinds on the east side and ideally open them after noon, and do the opposite for the blinds on the west side. The problem is that there is nobody home to do these maneuvers, so to prevent the plants from being harmed, and also to keep heat from coming through the windows during summer, the blinds stay closed most of the day.

An automated system to manage the blinds and to control watering would give the plants more light, in a more controlled way.
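The blinds logic could start as simply as a time-of-day rule; the hour thresholds below are assumptions, and a real implementation might compute the sun's azimuth instead of using fixed hours.

```python
# Simplified decision rule for the east/west blinds (thresholds are assumptions)
def blinds_to_close(hour):
    """Which side's blinds should be closed to keep direct sun off the plants."""
    if 7 <= hour < 13:
        return "east"   # morning sun hits the eastern windows
    if 13 <= hour < 20:
        return "west"   # afternoon sun hits the western windows
    return None         # night: all blinds may stay open

print(blinds_to_close(9), blinds_to_close(15), blinds_to_close(22))  # east west None
```

The servo nodes would then only need to receive "open"/"close" commands from the control unit, keeping the decision logic in one place.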

 

  Another aspect is air conditioning. The AC unit is placed on the top floor and can provide cooling (and some heating) for the entire space. The problem is that it cannot be programmed out of the box to start at a certain hour, run for a set amount of time, switch to dehumidification or turn off, and so on. Moreover, if the bedroom doors are open, the cold air spreads into those rooms and the lower floor gets very little of it. I plan to deal with these shortcomings by providing more flexible control of the AC unit, and by automatically opening and closing the bedroom doors with the help of two small robots attached to them. These robots are remote controlled by the two control units (Master and Mobile), according to the temperature requirements configured by the user.

 

There are also many other aspects that can be monitored and controlled remotely; these are just two examples of how this system can be implemented. Below is the system schematic:

system.png

   Thank you for your time

        Seba