Raspberry Pi Projects

I tried to use the Element14 Pi Desktop. My mSATA disk works when I use a USB adapter on Windows. When it is connected in the desktop, I try

sudo ppp-hdclone

which doesn't see any drives.

When I reboot with the bootable microSD card reinserted, I see four raspberries, but nothing happens after that. It appears that no USB-connected devices work any more.

 

Can my Raspberry Pi 3 be unbricked from this situation, or is this Pi gone forever?

rctho

What's a PiJack?

Posted by rctho Nov 21, 2017

I needed to extend the WiFi range of my router with a minimal footprint in my apartment, and I didn't want more peripherals plugged in all over the place. I used a Raspberry Pi Zero W as the WiFi repeater and jammed it into a two-gang receptacle box. It has two USB ports (one is for the console connection) and a kill switch/reset button in case the computer hangs. The console connection can also power the Pi in case the power goes out.

 

First I had to attach the antenna. I found tutorials online that looked easy enough at first, but the pics were scaled to about 1000% of the Pi's actual size. When I looked at the board and the size of the components I had to re-solder, I was quickly discouraged. It doesn't look the greatest, but I got it done and it works. That's what matters.

 

 

Next I cut all the cords to size, then soldered on the USB, micro USB, and pinout thingy. I used shrink tube and hot glue so that no wires are exposed. To hardwire the Pi to my apartment, I used the internal circuit from a standard 5V transformer/regulator commonly used to recharge a cell phone. It acts the same as if I had plugged it in.

 

I used Leviton QuickPorts for the wall plate, and drilled holes in two of the ports for the external antenna and the reset button. Got all my components together. Time to assemble.

 

The transformer/regulator fit in perfectly. All of the pre-soldering was done, so at this point it was pretty much plug and play. I taped the Pi to the side, then connected all the components.

 

Got it all in there. Easy street!

 

 

I had to disable Bluetooth to use the UART console GPIO pins. Console, USB, antenna, and reset switch all work great. I'm now configuring the WiFi repeater, and I'm going to make sure everything works before I put it in the wall.

 

Other than being a WiFi repeater, this device doesn't have much purpose yet. I could connect a USB camera, or I could swap out the USB for HDMI to connect it to a TV. I wanted to build in more than I knew what to do with. Does anyone have any suggestions for what else this can be used for?

Hi everyone!

 

We're a group of students from Royal Holloway, University of London participating in AstonHack 2017 and we think we've come up with something neat.

https://astonhack.co.uk/

We've made a command line interface for Monzo Bank! Sadly, the API is currently read-only, so it's limited to viewing data for now.

The monzo-cli tool has many useful features, including:

View current account balance.

Example:

./monzo balance

View pending transactions - these are transactions that have not been fully processed.

Example:

./monzo pending

View full transaction history - this will display all transactions ever made using the account:

Example:

./monzo transactions

View total liquidity of the account - this means seeing the total money put into the account, the total money taken out of the account, and the net total.

Example:

./monzo spent

Use filters to categorise data - sort transactions by payment category (used with pending or transactions):

Examples:

./monzo pending eo
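
To give a flavour of what the tool does under the hood, here is a minimal sketch of the kind of request behind ./monzo balance, assuming you already have an OAuth access token and account ID from the Monzo developer portal. The placeholder values below are illustrative, not from our project:

import requests

# Placeholders -- substitute your own credentials from the Monzo developer portal
ACCESS_TOKEN = "your-oauth-access-token"
ACCOUNT_ID = "your-account-id"

# The read-only /balance endpoint returns the current balance and currency
resp = requests.get(
    "https://api.monzo.com/balance",
    params={"account_id": ACCOUNT_ID},
    headers={"Authorization": "Bearer " + ACCESS_TOKEN},
)
resp.raise_for_status()
data = resp.json()

# Amounts come back in minor units (pence), so divide by 100
print("Balance: {:.2f} {}".format(data["balance"] / 100.0, data["currency"]))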

 

We'll be adding more features in the future!

Please leave any feedback or recommendations for new features, as we are always open to suggestions.

 

For now, we'll include the project source code in this forum post, but we plan to include a link to the GitHub repo once the API keys expire.

 

monzo.py

https://gist.github.com/andrewnicolalde/5ded27f2564ce76e65231a66f71cbe36

 

monzo bash script (run this)

https://gist.github.com/andrewnicolalde/f91fd9acba7cc948c82a4b8034683ed7

A pedometer is an electronic device that estimates the distance traveled by a person by recording the number of steps walked. Pedometers use an accelerometer to count the number of steps. The Raspberry Pi Sense HAT records acceleration along the X, Y, and Z axes. You can use Simulink to record this data over a period of time using the MAT-file logging feature. You can then use MATLAB to analyze the imported MAT-files to count the number of steps.

To use the MAT-file logging feature with the Simulink Support Package for Raspberry Pi hardware, you must have a Simulink Coder license.

For those who are not familiar with Simulink, I would recommend you complete the Getting Started with Raspberry Pi Hardware and MAT-file logging on Raspberry Pi Hardware examples that are available on the MathWorks website.

Required Hardware

To recreate this project, you must have the following hardware:

Create a Simulink model for Raspberry Pi Hardware

1. Open the Log Accelerometer data using Raspberry Pi Hardware model by typing raspberrypi_countstep in the MATLAB Command Window. You will see a block diagram that looks like the image shown here.

2. In your Simulink model, click Simulation > Model Configuration Parameters to open the Configuration Parameters dialog box.

3. Under the Hardware Implementation pane, select Raspberry Pi in the Hardware board list. Do not change any other settings.

4. Click Apply to save your changes, and then click OK.

Enable MAT file logging

Here are step-by-step instructions on how to enable MAT-file logging to save acceleration data as MAT-files.

1. To open the Model Configuration Parameters dialog box, click the gear icon on the Simulink model toolbar.

2. Browse to Code Generation > Interface > Advanced Parameters, or type MAT-file logging in the search box.

3. Select the MAT-file logging option and click Apply to save the changes.

4. Click OK to close the dialog box.
5. In the Simulink model, double-click the Scope block, and click the gear icon to open the Configuration Properties dialog box.
6. In the Logging tab, select the Log data to workspace option, and click Apply to save the changes.
7. On the Simulink model toolbar, set the Simulation stop time parameter. This parameter specifies the duration for which the signals are logged. After the simulation stop time elapses, the logging of signals stops; however, your model continues to run. For example, if the Simulation stop time parameter is specified as 10.0 seconds, the signals are logged for 10.0 seconds, and then the logging stops. The model itself, though, continues to run indefinitely.

Deploy the Model on Raspberry Pi Hardware

1. On the Simulink model toolbar, click the Deploy To Hardware button. This action builds, downloads, and runs the model on the Raspberry Pi hardware.

2. Walk a few steps while holding the Raspberry Pi™ hardware. Make sure that you walk at least for the duration specified by the Simulation stop time parameter.

Import and Analyze Data

To import the generated MAT-files from the hardware to your computer after the logging is completed, follow these steps:

1. In the MATLAB command window, use the following command to create a raspberrypi object. The parameters specified in this command must match the board parameters specified in Simulation > Model Configuration Parameters > Target hardware resources > Board Parameters.

r = raspberrypi(<IP address>, <username>, <password>);

2. Use the getFile function to copy the MAT-files from the Raspberry Pi™ board to your computer.

getFile(r,<filename>)

Here, r specifies the raspberrypi object and filename specifies the path and name of the file created. After importing the MAT-files, you can use them like regular MAT-files for any further analysis in MATLAB®.

3. Load the MAT files into workspace variables.

load('raspberrypi_countstep_1_1.mat');

a(:,:) = rt_simout.signals.values(1,:,:) * 9.8;

a = a';

t = rt_tout;

4. Plot raw sensor data.

plot(t, a);

legend('X', 'Y', 'Z');

xlabel('Relative time (s)');

ylabel('Acceleration (m/s^2)');

5. Process raw acceleration data.

To convert the XYZ acceleration vectors at each point in time into scalar values, calculate the magnitude of each vector. This way, you can detect large changes in overall acceleration, such as steps taken while walking, regardless of device orientation.

x = a(:,1);

y = a(:,2);

z = a(:,3);

mag = sqrt(sum(x.^2 + y.^2 + z.^2, 2));

Plot the magnitude to visualize the general changes in acceleration.

plot(t, mag);

xlabel('Time (s)');

ylabel('Acceleration (m/s^2)');

The plot shows that the acceleration magnitude is not zero mean. Subtract the mean from the data to remove any constant effects, such as gravity.

magNoG = mag - mean(mag);

plot(t, magNoG);

xlabel('Time (s)');

ylabel('Acceleration (m/s^2)');

The plotted data is now centered about zero and clearly shows peaks in acceleration magnitude. Each peak corresponds to a step being taken while walking.

6. Count the number of steps taken.

Use findpeaks, a function from the Signal Processing Toolbox™, to find the local maxima of the acceleration magnitude data. Only peaks with a minimum height above one standard deviation are treated as a step. This threshold must be tuned experimentally to match a person's level of movement while walking, hardness of floor surfaces, and other variables.

minPeakHeight = std(magNoG);

[pks, locs] = findpeaks(magNoG, 'MINPEAKHEIGHT', minPeakHeight);

The number of steps taken is simply the number of peaks found.

numSteps = numel(pks)

Visualize the peak locations with the acceleration magnitude data.

hold on;

plot(t(locs), pks, 'r', 'Marker', 'v', 'LineStyle', 'none');

title('Counting Steps');

xlabel('Time (s)');

ylabel('Acceleration Magnitude, No Gravity (m/s^2)');

hold off;

This shows how you can make use of the IMU sensor on the Raspberry Pi Sense HAT to count the number of steps a person has walked.

Halloween is one of my favorite holidays here in the US, so much so that I spent a few years of my life thinking up and building smart Halloween props and animatronics for the haunted attraction industry. I won’t go too deep into the details of the business, but a few friends and I founded a company a few years back with the “help” of some investors. It was my first tech startup, and like many tech companies, we had the hardware and software to revolutionize a decades-stagnant industry that quite honestly did not want to change. To make a long story short, none of the original founders are part of that business anymore, with myself backing out in late 2015.

 

 

One of my duties in the company was to brainstorm and prototype new and innovative props that utilized modern technology while remaining easy enough for the aging haunted house owners to program. Often this was accomplished by making props that just worked once powered up; other times it involved utilizing our custom Raspberry Pi based animatronic / whole-scene controller unit on the finished prop. However, during the prototyping phase, I would always develop the project using a bare Raspberry Pi or Arduino, and I loved this part of the business the most. The thrill of coming up with a concept, and then building and presenting a working prototype during our weekly all-hands meeting, was exhilarating. This is why I love creating Halloween projects every year here at Element14. It gives me the perfect excuse to build some of those concepts that I never got around to prototyping when I was co-owner of the company.

 

 

One of those product ideas that I never got around to building was a smart mirror that was fully functional while hiding a mind and body jarring jump scare, triggered when someone stopped in front of the mirror for more than a few seconds. So when I was asked to come up with a second Halloween project this year, I instantly thought of the smart mirror. What I did not anticipate was the level of frustration and failure I would experience while building it. Don’t worry though; in the end, I managed to work up something that works 90% of the way I wanted it to, and I am going to continue refining this project over the next couple of months. For now, the smart mirror does work; it just lacks many of the features I wanted it to have. Before we get into the build, I would like to take a moment to talk about the failures I encountered during this project, in hopes that a reader may be able to help with the JavaScript programming when I reboot this project in a month or two.

 

Experiencing Failure

 

As I said in the paragraph above, this project was one of the most frustrating, stress-inducing projects I have ever encountered. My issues began when building the wood frame that would house the mirror in its final form, but I am not here to rant about that, because I simply got a measurement wrong at some point in my CAD design. This was easily solved, and I only lost an hour or three rebuilding it. The real frustration kicked in shortly after deciding to use the MagicMirror2 software to power the magic mirror portion of the project.

 

 

MagicMirror2 is an amazing smart-mirror package if you just want to build a feature-rich, highly functional smart mirror. I really like this software; it appears to be regularly updated and has a very active community behind it. I cannot recommend it enough if you are building a normal smart mirror. The deficiencies begin to show themselves when one wants to detour from the traditional functionality that most smart-mirror builders desire. To be short, attempting to play a full-screen video, display a full-screen .GIF, or simply display a static .jpg in full-screen mode on top of the MagicMirror2 display is quite difficult, if not impossible altogether.

 

I spent three 12-16 hour days trying to get a video to play via OMXPlayer, HTML5, and various other Raspberry Pi based video players when a GPIO pin is pulled high by a PIR sensor. After I realized I was just not skilled enough in JavaScript programming to do this on my own, I asked a friend who is a great programmer for help, and six hours later we were still stumped. So I reached out via GitHub to one of the MagicMirror2 module developers, and he attempted to help me figure it out for several hours as well. In the end, the general consensus was that without some extensive JavaScript coding and a deep understanding of how MagicMirror2 and Node.js work, it was not going to be possible to get this project published on time.

 

So after missing my deadline, and feeling like a complete failure, I picked myself up, tossed MagicMirror2 and all of my code into the garbage, and went searching for a new approach. After a few hours of searching, I happened to come across a repository on GitHub that contained a smart-mirror program written in Python. This was the best possible outcome for me after the failure, as Python is a language I can easily write and understand. I really would like to make this work with the MagicMirror2 software, as it is much more feature-rich and you can do some really cool stuff with Node.js, so if you would like to help me figure that out, please get in touch! Ok, enough about my failures, let's get into the actual project.

 

Parts Required

 

Hardware

Raspberry Pi 3 with NOOBS
PIR Sensor

HT-255D Crimp Tool
Crimp Connectors

HDMI Display
HDMI Cable


3D Printing Files

 

Software

 

Setting Up Your Raspberry Pi and Downloading The Code

 

 

Before we begin installing the software packages we will need to make our spooky smart mirror, you will need to install the latest version of Raspbian onto the SD card that will go into your Raspberry Pi. If you are using a fresh, empty SD card, you can follow the video above to learn how to install the latest version of Raspbian. If you already have an SD card with Raspbian installed, we can update and upgrade Raspbian from the command line. To do this remotely from your computer, connect your Pi to your network via WiFi or a network cable, log in (my preferred method) via a terminal app such as Terminal, PuTTY, or Cmder, and then enter the following commands.

 

 

sudo apt-get update

 

 

Then select “Yes” if prompted

 

When the update is finished running, it's time to check whether an upgrade is available, and install it. To do this, run the following command. This could take a while, so sit back and watch some YouTube videos, or check out my Design Challenge Weekly Updates while you wait.

 

sudo apt-get upgrade

 

Then select “Yes” if prompted

 

Once everything is up to date, shut down the Pi and connect an HDMI monitor, or the TV you will be using for your mirror. I did my initial development using the official Raspberry Pi 7” Touch Screen. Now restart the Pi, and access it once again from a terminal program on your computer.

 

Before any of the fun happens, we need to install my fork of the Smart-Mirror software. To do this you will need to use Git. If you do not have Git installed, or you have never used it before, here is a helpful tutorial. The Magic Book-bag portion of the tutorial is not relevant to this project, but it does help you understand how to use Git better.

 

The Smart Mirror Code

 

Once Git is installed, and you have your SSH key saved in your Git settings, navigate to the /home/pi directory and run the following commands. This will clone my Smart-Mirror-With-Halloween-Jump-Scare repository to the Raspberry Pi.

 

cd /home/pi
git clone git@github.com:CharlesJGantt/Smart-Mirror-With-Halloween-Jump-Scare.git

 

Navigate to the folder for the repository

 

cd Smart-Mirror-With-Halloween-Jump-Scare

 

Install the Smart-Mirror software’s dependencies (Make sure you have pip (https://pip.pypa.io/en/stable/installing/) installed before doing this.)

 

sudo pip install -r requirements.txt
sudo apt-get install python-imaging-tk

 

Select “Yes” if prompted

 

 

At the moment, the weather widget is broken due to an API change, but for what we are doing with this project, that does not matter much. With that said, you should still register a free developer account at darksky.net and enter your API key in the smartmirror.py file as pictured above. To do this, enter the following command.

 

sudo nano smartmirror.py

 

And edit line 23 with your API key. Then exit nano with Ctrl+X, press Y to save, and press Enter to keep the same file name.
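
For reference, the token line looks something like the sketch below. The exact variable name in your copy of the file may differ, so treat this as illustrative rather than exact:

# Around line 23 of smartmirror.py -- replace the placeholder with your own key
weather_api_token = 'YOUR_DARKSKY_API_KEY'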

 

Screen Orientation and Cleanup

Before we can test the Smart-Mirror install, we need to take care of some minor but required tasks. The first task is to rotate the display so that our smart mirror can hang in portrait orientation. To do this, enter the following command and edit the config file.

 

sudo nano /boot/config.txt

 

Arrow down to the bottom of the file and add the following line. Then save the file by pressing Ctrl+X, then Y, and press Enter to keep the same file name. If your screen's rotation is 180 degrees off after this change, change the number to 3 instead of 2.

 

lcd_rotate=2

 

Now we need to hide the taskbar, and the only way to do this from the command line is to edit another config file. Enter the following command to edit the necessary file.

 

sudo nano /home/pi/.config/lxpanel/LXDE-pi/panels/panel

 

In the “Global” section at the top of the file, you need to modify the following parameters:

 

autohide=0
heightwhenhidden=2

 

Replace those lines with the following

 

autohide=1
heightwhenhidden=0

 

With those small tasks taken care of, you can now test the Smart-Mirror install by running the following command. Note that you will have to run this from the terminal app on the Pi itself for it to work properly, because Tkinter will only open when run natively. There are ways to run this command remotely, but I have found them to be buggy.

 

sudo python smartmirror.py

 

An error may pop up about the weather module, but ignore it; the screen should turn black, with white clock and news text appearing.

 

The Jump Scare Code

 

Ok, now that we have the Smart-Mirror software running, it's time to connect the PIR sensor to the Raspberry Pi’s GPIO header. Follow the diagram below, paying close attention to the power and GND wires.

 

      • PIR Sensor VCC Pin to RPi 5V
      • PIR Sensor Data Pin to RPi GPIO5
      • PIR Sensor GND Pin to RPi GND

 

Now let's take a look at the jumpscare.py file that is inside the Smart-Mirror-With-Halloween-Jump-Scare directory. You might be wondering why I did not just include this code in smartmirror.py. My reason is that I expect that file to be updated soon by its creator to fix the weather API, and I also like being able to turn off the jump scare feature simply by killing the jumpscare process.

 

Open the jumpscare.py file in Nano.

 

sudo nano jumpscare.py

 

Starting at the top of the file we import the following libraries:

 

import RPi.GPIO as GPIO
import time
import os
import sys

from subprocess import Popen

 

 

Next we have to set which GPIO numbering schema we will be using; I always use the BCM schema. There are two different numbering schemes for the GPIO pins on the Pi: the Broadcom chip-specific pin numbers (BCM) and the P1 physical pin numbers (BOARD).

Here’s a reference showing all the pins on the P1 header, along with their special functions and both BCM and BOARD numbers:

 

GPIO.setmode(GPIO.BCM)

 

Now we need to set up the GPIO and tell the Pi which pins are what. In the first line we are telling the Pi to set GPIO pin 5 as an input and to attach a pull-down resistor to it. The second line sets up GPIO pin 26 as an output. I left this in the code so that you can connect an LED to pin 26 to use for troubleshooting motion triggers.

 

GPIO.setup(5, GPIO.IN, pull_up_down=GPIO.PUD_DOWN)
GPIO.setup(26, GPIO.OUT)

 

Before we get into the loop, we need to declare a variable called "motionDetected". We can use this variable to count motion triggers in our loop. We also need to tell the Pi where the video we want to play is located. Since you cloned this repository from my GitHub, the zombie.mp4 file will be in the Smart-Mirror-With-Halloween-Jump-Scare directory.

 

motionDetected = 0
movie1 = ("/home/pi/Smart-Mirror-With-Halloween-Jump-Scare/zombie.mp4")

 

I’m going to break the loop down line by line to better help you understand what's going on. In the line below we are defining our loop and stating: while True, do the following.

while True:

 

Here we are telling the Pi to watch the state of GPIO 5.

 

input_state = GPIO.input(5)

 

In the next block of code we are telling the Pi that if the input state of GPIO 5 equals True (high), then print “Motion Detected” in the terminal, increment the motionDetected variable by one, and then wait for 0.2 seconds before moving on to the next line of code.

 

if input_state == True:
        print('Motion Detected')
        motionDetected += 1
        time.sleep(0.2)

 

Finally, we finish things up with another if statement that says: if motionDetected equals 1, set GPIO pin 26 high, make sure no instance of OMXPlayer is already running, and then open a video player with the video that was defined earlier in the code. Next we tell the code to wait for 60 seconds before continuing with the loop, resetting the motionDetected variable to 0, and setting GPIO pin 26 low to turn off the debugging LED. Note that you can change how frequently the jump scare can trigger by adjusting the time.sleep(60) setting.

 

if motionDetected == 1:
        GPIO.output(26, GPIO.HIGH)
        os.system('killall omxplayer.bin')
        omxc = Popen(['omxplayer', '-b', '-o', 'local', movie1])
        player = True
        time.sleep(60)
        motionDetected = 0
        GPIO.output(26, GPIO.LOW)

 

The full code is listed below.

# This code triggers a video to play on
# the Raspberry Pi when motion is detected
# via a PIR sensor on BCM pin 5.
# Written by Charles Gantt 2017
# http://www.themakersworkbench.com
# & http://www.youtube.com/c/themakersworkbench
# https://github.com/CharlesJGantt/Smart-Mirror-With-Halloween-Jump-Scare

import RPi.GPIO as GPIO
import time
import os
import sys

from subprocess import Popen

GPIO.setmode(GPIO.BCM)

movie1 = ("/home/pi/Smart-Mirror-With-Halloween-Jump-Scare/zombie.mp4")

GPIO.setup(5, GPIO.IN, pull_up_down=GPIO.PUD_DOWN)
GPIO.setup(26, GPIO.OUT)
motionDetected = 0

while True:
    input_state = GPIO.input(5)
    if input_state == True:
        print('Motion Detected')
        motionDetected += 1
        time.sleep(0.2)
    if motionDetected == 1:
        GPIO.output(26, GPIO.HIGH)
        os.system('killall omxplayer.bin')
        omxc = Popen(['omxplayer', '-b', '-o', 'local', movie1])
        player = True
        time.sleep(60)
        motionDetected = 0
        GPIO.output(26, GPIO.LOW)

 

With all of the code finished up, let's set both Python programs to run on boot. To do this we are going to write a simple bash script that tells the Pi to run both of the Python files in the background.

 

cd /home/pi/Smart-Mirror-With-Halloween-Jump-Scare
nano launcher.sh

 

Now type in this script

#!/bin/sh
# launcher.sh
# navigate to home directory, then to this directory, then execute python scripts, then back home

cd /
cd /home/pi/Smart-Mirror-With-Halloween-Jump-Scare
sudo python smartmirror.py &
sleep 10
sudo python jumpscare.py &
cd /

 

We need to make the launcher script an executable file. To do this, enter the following command.

 

chmod 755 launcher.sh

 

Since we will be using crontab to trigger this script, we need to make a directory to log any errors that may occur. This will help with troubleshooting. Enter the following commands:

 

cd
mkdir logs

 

Now let's add the script to the crontab. Open the crontab with crontab -e, and add the following line to the very bottom of the file.

 

@reboot sh /home/pi/Smart-Mirror-With-Halloween-Jump-Scare/launcher.sh >/home/pi/logs/cronlog 2>&1

 

Now you can reboot the Pi to see if it worked. Enter the following command to reboot the Pi.

 

sudo reboot

 

When the Pi finishes booting, you should see the GUI load, then the smart mirror window open. If you wave your hand in front of the PIR sensor, the jumpscare.py script should trigger the zombie.mp4 video, and once finished, the smart mirror screen should reappear.

 

The Smart Mirror

 

With our code finished, it’s time to make our smart mirror. This is the part of the project where my end result may differ from yours. I chose to order a new 32” LED TV from Amazon, and try my hand at creating the two-way mirror from window tint film and standard plate glass. I also wanted to create a wooden frame to house the TV so that it had the appearance of a hand-crafted mirror. Fortunately, I have a complete, fully stocked woodworking shop here at home, and whipping up a frame was a few-hour process. As I mentioned at the beginning of this project, I got my math wrong and made the first version of the frame incorrectly, and the TV screen did not fit. I was able to correct this, but if you do not have an abundance of time-saving tools and extra wood to work with, take your time and measure your screen’s dimensions carefully. The only advice I can really offer is to leave about 1/16” clearance around the edge of the screen to account for expansion of the TV’s steel frame as it warms up.

 

 

I am not going into great depth here about the process I used to build the frame, because how you frame the mirror is arbitrary and not very relevant to getting the mirror to work. You could even just tape a two-way mirror to the front of the TV and the effect would be the same. Some people even create these little mirrors from 15” laptop screens or HDMI monitors; you do not have to use an actual TV. I simply used a brand new 32” TV because I will be rebuilding this mirror with a much more refined frame built from exotic hardwood. I do plan on making a video of that build, with a complete step-by-step guide, for my YouTube channel. So if you would like to check that out, it should be out sometime towards the end of the year.

 

 

I didn’t get many photos of the glass cutting or tinting process, as that was another major issue I ran into during this project. Initially I decided that I would cut my own glass, as it is something I have done in the past, and it saves a good bit of money. My mistake was thinking that the glass I bought from a big box home repair store would be of a high enough quality to actually be easy to cut with the standard score-and-snap method. I broke $38 worth of glass before I gave up, defeated, and called a local glass shop. They explained to me that the glass big box home improvement stores sell is just too low quality; it is not annealed very well, which gives it a harder surface that is prone to flaking during scoring. The higher quality glass that glass shops stock is designed to be cut with laser-sharp accuracy and to minimize errant cracks in the cutting process. They showed me how quick and easy a good, high-quality piece of glass is to cut, and $17.38 later I was on my way home with the glass.

 

 

That afternoon, I attempted to tint the glass by myself, and while I came very close to succeeding, I botched the tinting process twice. This was 100% my fault. Instead of following the poorly written directions that came with the mirror tint window film, I watched a few window tinting tutorials on YouTube and realized that by adding a little dish soap to the water I was spraying the glass and film with, the process was much easier and gave a better result. Living in a home with four dogs and a couple of cats, as well as an attached woodworking shop, did end up haunting me a bit during the tinting process, though. I spent a lot of time picking specks of dust and animal hair out of the wet tint, and still managed to trap a few dust particles and hairs under it. Since this is a Halloween prop, I am not too bothered by that. When I rebuild this into a proper smart mirror, I will order a piece of chemically tinted two-way mirror glass to avoid these issues altogether. Another advantage of chemically treated glass is that your mirror can be made from tempered safety glass, which means there is a much lower chance of injury if it does shatter or fall off the wall and break.

 

 

The one thing that I made that may help you along with your build is the corner brackets I designed and 3D printed that hold the mirror and TV firmly to the frame’s bezel. These brackets take about 20 minutes each to print on a Prusa i3 MK2s at a 0.2mm resolution. If you would like to use these brackets in your project, you can download them from my Thingiverse by clicking here.

 

So without going into too much detail, here are some photos of the frame build.

 

 

 

Now that we have the frame built and its hanging cable attached, it's time to attach the Raspberry Pi to the back of the TV. If you have room, and a 3D printer, you can print this handy dandy VESA mount Raspberry Pi case bottom that I found on Thingiverse. If you want to print the top piece as well, that is just fine; I only printed the bottom, as I wanted my Pi to have good airflow. Unfortunately, on the TV I am using, the VESA mount was only part of the rear plastic, so I ended up attaching the Pi to a single screw hole. When I rebuild the mirror, I will print a custom case with mounting points that fit the screw locations on the grey steel backing plate.

 

As you can see, it mounts to the TV with standard M4 machine screws, and the Pi attaches to it with small 3mm screws. Then all that is required is to connect a USB cable between the Raspberry Pi 3 and the TV’s USB port.

 

 

Finally, we need to attach the PIR sensor to the top of the frame. To do this, I found a nice, compact PIR sensor case on Thingiverse, which I remixed and designed a small extension arm for. Download it here. This is held together with M3 machine screws and nuts. To mount it to the top of the frame, I just used more small screws like the ones I used on the corner brackets.

 

 

To finish up the PIR sensor mounting I needed to make up a cable that would connect it to the Raspberry Pi. Using a pin and crimp kit, I made the cable about three inches longer than it needed to be to add some strain-relief and prevent the cable from putting too much tension on the pins of the Pi.

 

 

Now all that is left is to test the smart mirror and jump scare, and to do this, I simply stood it up on my workbench. Check out the demo video above; I hope it shows off the jump scare well enough. This was the final point of frustration for me during this project. I shot a nice video with my DSLR and lavalier microphone, but it appears that something is broken in my brand new camera, as it will not record audio from its microphone jack. Thankfully, Canon has a spectacular warranty department, and it will be fixed in a couple of weeks. I plan on taking the smart mirror to a friend's Halloween party, and will update this post with a video of some people getting scared if that happens.

 

So, what are my final thoughts on this project? Well, I can honestly say that even after all of the frustration, stress, and unfortunate events, I am, for the most part, proud of how it turned out. There are things I wish I had done a different way, and some features that I left off simply because of time constraints. As I mentioned earlier, I am going to continue developing this project over the coming months, and I hope a few people will join me on that journey. For now, I have a working smart mirror that also features a cool jump scare, so I will call this one a win. Albeit a small win, but a win nonetheless. I guess the takeaway from my experience on this project is that perseverance always pays off, and as long as you refuse to give up, anything is possible. Thanks for taking the time out of your day to read this tutorial. If you would like to see me create more cool stuff like this, please leave a comment below, and hopefully I will get assigned more projects like this! I will see you on the next one, and until then, remember to Hack The World and Make Awesome!


Raspberry Pi Media Center: Part 2

Join Les Pounder as he guides us through turning a Raspberry Pi into a Media Center!

Learn about Raspberry Pi, XBMC, Plex and even Kodi streaming services.

Check out our other Raspberry Pi Projects on the projects homepage

Previous Part
All Raspberry Pi Projects
Next Part - Coming Soon

Part 2: Identifying my needs and planning the build

 

So what are my needs?

 

I work from home and I like to have something on in the background as I work, so my use case will be for a device which can keep me entertained while I work.

 

Project Goals

 

  • The project should connect to my home wifi.
  • It should have its own screen and speaker.
  • Input will be via a touchscreen.
  • I like to watch YouTube videos and listen to podcasts.
  • I want to watch films on the device.
  • It should connect to my hard drive via a network share.

 

So to accomplish the project I will need plenty of kit.

 

 

 

The Raspberry Pi 3 has plenty of power for this project, maybe more than I need, as this project could also be created with a Pi Zero W; but then I would need to source a USB sound card.

To the Pi 3 I will connect Pimoroni’s Hyper Pixel, an 800x480 screen that fits on top of the Raspberry Pi 3 and uses the GPIO. The Hyper Pixel board is fantastic; sure, it might not be an HD screen, but the image quality is superb, and it can run at 60fps. The only issue with the Hyper Pixel is that the screen backlight uses PWM to control the brightness, which technically renders the analogue audio output useless. But fear not! If I keep the backlight on at full brightness then I can use the analogue audio.

(Excuse the mess...)

 

The minimum SD card size for LIBREELEC is 8GB, but 16GB cards are now really cheap, and the extra space may come in handy.

As I am powering the Pi 3 and the Hyper Pixel from a single power source, I need to make sure it supplies enough power, and the official 2.5A power supply will do the job nicely.

Speakers are easy to find. I’m using a cheap analogue speaker that has its own battery and can be recharged over micro USB, so I’ll connect a micro USB to USB cable from the Pi 3’s USB port to keep the battery charged and the speaker ready for use.

Unless I put the kit in a case it will just be a mess of wires, so I’ll use a suitable project case with a few well-placed holes and brass standoffs to keep everything secure and well placed. More importantly, it will keep my desk almost tidy!

Purchasing an MPEG licence is an optional step. The Pi 3 CPU is powerful enough for software decoding of standard-definition MPEG streams, but should you need to decode HD streams, then purchasing an MPEG-2 licence key for around $3 is a no-brainer.
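
If you do buy one, the key is tied to your Pi's serial number and is enabled by adding a single line to /boot/config.txt. The key below is a made-up placeholder, not a real licence:

decode_MPG2=0x12345678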

 

So there we have it, a starting point from which this project can be born!

 

In Part 3 of this project I will build the basic system and test that it works. Then in Part 4 I will configure the project to meet my needs.


Raspberry Pi Media Center: Part 1

Join Les Pounder as he guides us through turning a Raspberry Pi into a Media Center!

Learn about Raspberry Pi, XBMC, Plex and even Kodi streaming services.

Check out our other Raspberry Pi Projects on the projects homepage

Previous Part
All Raspberry Pi Projects
Next Part

What is a media centre?

From the 1980s to the 2000s, a media centre was a wooden cabinet filled with VHS tapes, DVDs, cassettes and CDs. But in the mid 2000s this changed, and media came to be consumed and catalogued inside vast digital media centres. From a 64MB MP3-player USB flash drive that I purchased in 2003, to the ubiquitous iPod full of music, the media centre has evolved, shrunk and become more intelligent! The same has happened with our movie collections: no longer do they occupy shelves of space; rather, digital shelves are groaning with content that we have purchased from many different providers.

 

So what is this blog post about?

In this blog post, the first of four such posts, we shall examine the different options that we have available. Then in the second post we shall determine what type of media centre meets the needs of our users. In the final two posts (3 and 4) we shall build our own media centre and configure it to provide us with a wealth of legally obtained content.



Android Boxes


Image by Tzah (Own work) [Public domain or CC BY-SA 4.0 (https://creativecommons.org/licenses/by-sa/4.0)], via Wikimedia Commons https://commons.wikimedia.org/wiki/File:DroidBOX_Android_Kodi_TV_Box.JPG

 

Android boxes are very common. They are relatively cheap and can turn your television into a Smart TV. Typically they come with Android, and more often than not that version is quite old and possibly full of spyware. To add further complexity, these boxes can be tailored to include “access” to streaming movie services and sports channels; not that the owners of such content would know, as quite often these additions are illegal modifications.

These boxes are typically found via online auction sites, but in recent months, especially in the United Kingdom, we have seen a clampdown on boxes that come configured for illegal content.

These boxes are a solution for watching your movies, but their illegal software installs can make them a dubious purchase, and there is little or no support from the providers.

 

Streaming Services

We all know of at least one streaming service. Netflix, Amazon Prime Video, Hulu and CBS All Access are all examples of content providers giving you access to their content. And therein lies the problem: you never own the content in the same manner as you own physical media (and of course, even physical media is never truly yours, as you are unable to “rip” the content and store it on your own systems).

As soon as you stop paying for the subscription, the flowing tap of media ceases and you are left without the latest series of Star Trek or Stranger Things.

These services are legal and provide a good level of customer support. You can also watch the content on the move, handy for long journeys and commuting.

Raspberry Pi Based Solutions

The Raspberry Pi provides the flexibility of all the above services, yes, including the morally and legally dubious illegal streams. Thanks to the Raspberry Pi’s GPU (Broadcom VideoCore IV) it can handle 1080p video without taxing the CPU. So we get high-definition video and HDMI connectivity. This is even available with the Raspberry Pi Zero!

 

Kodi

On the Raspberry Pi we have a choice of software to cater to our media needs. We can run Kodi media centres with OSMC and LIBREELEC, both of which can be downloaded via the Raspberry Pi website.

OSMC and LIBREELEC, being part of the Kodi family, offer installable plugins to enhance your media. You can watch YouTube videos, stream content from online providers (Hak5, Element14 and many others), or stream radio and podcasts from many providers. This also means that you can stream movies, sport and pay-per-view television, and no, I’m not going to show you how to do that. While OSMC and LIBREELEC are great for managing your home library, we do have one issue, namely that your media is locked at home! You can’t stream the media from your Pi to another device. But with the next option you can!

 

Plex

Plex is a popular streaming service that offers users the opportunity to stream content from their Raspberry Pi to any device in and outside the home. In fact we have a great tutorial that you can follow that takes you through the steps necessary to turn a Raspberry Pi 3 into a home media streaming server!


Raspberry Pi Projects - How does your IoT garden grow?

Join Les Pounder as he works on his IoT Garden! Watch him integrate technology into his garden and maintain the health of his favorite plants.

Check out our other Raspberry Pi Projects on the projects homepage

Previous Part
All Raspberry Pi Projects
This is the Final Part

The final project!

In previous projects we have used sensors to detect soil moisture and relay that data to us via email, forcing us to go outside and water the garden. Then we developed a self watering system based on the same sensor, which was connected to a pumping system that fed water to our garden all controlled by the humble Raspberry Pi Zero W.

In this final project we shall create another device that will enable us to water the garden from our desk / sofa. This uses the Google AIY kit that came as part of a special issue of The MagPi magazine, but is now being offered for sale via other retailers. Using this kit we build an interface that enables our voice to trigger watering the garden: all we need to do is press a button and speak the words “water the garden”. This message is sent over the network using MQTT (Message Queuing Telemetry Transport), which uses a publisher - broker - subscriber model to send messages to devices that are listening on a particular “topic”. In this case a Raspberry Pi Zero W will be listening for these messages, and when it receives one it will trigger a relay into life, connecting a peristaltic pump to a 12V power supply and pumping water from a water butt to our thirsty garden.

 

MQTT?

In this project we use MQTT to relay messages from one device to another. Ideally we need three devices on the network:

 

  • A Publisher: The Raspberry Pi 3 AIY Kit which sends the trigger phrase across the network
  • A Broker: Any computer running the MQTT broker software. In this project we use the Pi Zero W.
  • A Subscriber: The Pi Zero W, which is looking for the trigger phrase and acts upon it.

 

But for this project the Pi Zero W that is watering our garden is both a broker and a subscriber. This is acceptable for our small network, but for larger projects with multiple publishers / subscribers it would be prudent to use a dedicated machine as the broker.

 

MQTT works by the publisher and subscriber both being on the same topic, similar to a channel. The publisher sends a message using a certain topic, and the subscriber receives it. A real world example of this model is YouTube. Content is created by Publishers, who upload it to their channel (Topic). YouTube then acts as a Broker, offering the content to Subscribers who will choose what Channels (Topics) to watch.
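
To make that concrete, here is a minimal sketch of a publisher using the paho-mqtt library that we install later in this project. The broker address is a placeholder for your own Pi Zero W's IP, and the topic and payload match the ones we will use:

# One-shot publish: send "water garden" on the "garden" topic
import paho.mqtt.publish as publish

# Replace the hostname with the IP address of your broker (the Pi Zero W)
publish.single("garden", payload="water garden", hostname="192.168.0.10")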

 

 

For this project you will need:
  • A Google AIY kit
  • A Raspberry Pi 3
  • Pi Zero W
  • A transparent waterproof box
  • USB battery
  • Jumper jerky (Dupont connections)
  • Relay Board
  • 12V Peristaltic Pump
  • Plastic hose to match diameter of pump
  • 12V power supply (for outdoor use)
  • Waterproof box to store everything, also for 12V power supply!
  • Water Butt / Storage

 

All of the code for this project can be downloaded from my GitHub repo.

 

Building the hardware

Raspberry Pi 3 AIY Kit

 

The kit comes with a round arcade button, but I had a lot of triangular buttons that I wanted to test.

 

The hardware build is split into two, as we have two machines to work on. First we shall start the build on the Raspberry Pi 3 AIY Kit.

 

 

Building and configuring the Google AIY kit is straightforward, and for the latest guidance head over to https://aiyprojects.withgoogle.com/ where you can also learn how to check, debug and configure the kit.

To assemble the kit refer to https://aiyprojects.withgoogle.com/voice/#assembly-guide

For debug and testing the kit https://aiyprojects.withgoogle.com/voice/#users-guide

In order to create this project we need to turn on billing for our project. But don’t worry as we get 60 minutes of free use per month. To turn on billing follow the guidance at https://aiyprojects.withgoogle.com/voice/#makers-guide-3-1--change-to-the-cloud-speech-api

 

For this part of the project, expect to dedicate around 90 minutes to build and test the kit.

 

Pi Zero W Controller

The other part of the project is our trusty Pi Zero W connected to a relay, used to control the 12V circuit for our peristaltic pump, which will pump water from a water butt to our plants, using a rotating motion to “squeeze” the water through the connected plastic hose. The relay is controlled from the GPIO of our Pi. In this case we connect the relay to 5V and GND, and the input of the relay to GPIO27. This is the same as in Project 2, but we have changed the GPIO pin used to control the relay, as GPIO17 was a little twitchy in our tests.

 

Relay Connection

Connect the relay to GPIO27 using the female to female jumper jerky connectors, as per the diagram. You will also need to provide the 12V power supply to the peristaltic pump. The + connection from the 12V supply goes to the relay via the normally open connection; the icon looks like an open switch.


Software Build

Connect up your keyboard, mouse, HDMI and micro SD card, and finally power up the Pi Zero W to the desktop. You will need to set up WiFi on your Pi Zero W and make a note of the IP address for future use. In a Terminal, type the following.

 

 

hostname -I

 

 

 

Still in the terminal, let's now install the MQTT software that will turn our Pi Zero W into a broker, the MQTT term for a device that manages the messages passed from the Publisher (our Pi 3 AIY Kit) to the Subscriber (also our Pi Zero W).

 

sudo apt update
sudo apt install mosquitto



Now let's start the MQTT broker service on the Pi Zero W. We need to do this so that it can make the connection between our Pi 3 and the Pi Zero W. In the Terminal type

 

 

sudo service mosquitto start

 

 

 

With that running we can now perform the final install before starting the code. This will install the MQTT library for Python 3. In the Terminal type

 

 

sudo pip3 install paho-mqtt

 

 

So that’s all the configuration completed. Let's open the Python 3 editor from the Programming menu and start writing Python code. For this you will need to create a new file and save it as Garden-Watering-Device.py.



We start the code for our Pi Zero W by importing three libraries. From GPIO Zero we import the DigitalOutputDevice class, used to create a connection from our Pi Zero W to the relay. We then import time, used to control how long we water the garden for. Lastly we import the MQTT client.

 

from gpiozero import DigitalOutputDevice
import time
import paho.mqtt.client as mqtt

 

Next we create an object used to connect our relay to the GPIO via GPIO pin 27.

 

 

relay = DigitalOutputDevice(27)

 

 


Our next step is to create a function containing the code necessary to connect our Pi Zero W to the MQTT network we have created. This function will connect and print a result code that identifies whether we have connected to the network correctly. Then the Pi Zero W is configured as a subscriber listening on the topic “garden”.

 

def on_connect(client, userdata, flags, rc):
        print("Connected with result code "+str(rc))
        client.subscribe("garden")

 

 

Another function, but this time one that reacts to messages received over the MQTT network. The first step of the function is to create a variable called “message”, which will store the payload converted to a string. Then, using string slicing, we remove the unwanted data from the message, keeping everything from position 2 in the string to the second-to-last position (this strips the b'...' wrapper that comes from converting the raw payload). Then we print the message for debug purposes.

 

 

def on_message(client, userdata, msg):  
        message = str(msg.payload)
        message = message[2:(len(message)-1)]
        print(message)

 

 

Still inside the function, we now create a conditional test that will check the contents of the “message” variable against a hard-coded value, in this case “water garden”. If the result of the test is True, meaning the two match, then we print to the shell that the watering has started. Then the relay is turned on, there is a pause of 2 seconds for testing purposes, and the relay is turned off. The code then waits for 10 seconds before ending the function.

 

 

        if(message=="water garden"):
                print("Watering Garden")
                relay.on()
                time.sleep(2)
                relay.off()
                time.sleep(10)

 

 

Outside of the function we now move on to the code that will call the functions. First we create an object, “client”, in which we store the MQTT client. Then we attach our on_connect function, and the on_message function to handle receiving messages. We then connect to the MQTT network, specifying the IP address of the broker; since the broker is this Pi Zero W, we can use 127.0.0.1. Lastly we instruct MQTT to loop forever and check for messages.

 

 

client = mqtt.Client()  
client.on_connect = on_connect  
client.on_message = on_message  
client.connect("127.0.0.1", 1883, 60)
client.loop_forever()

 

 

That's all of the code for this part of the project. Save the code and click on Run to test it. If all works correctly, it's time to move on; the next step is to make the code executable and enable it to run when the Pi Zero W boots.

 

So how can we make it executable? In order to do this there are two steps to take. First we need to add a line to the top of our Python code which tells the shell where to find the Python interpreter.

 

#!/usr/bin/python3

 

With that complete, we now need to go back to the terminal and issue a command to make the Python file executable. The command is:

 

 

sudo chmod +x Garden-Watering-Device.py

 

 

Now in the same terminal, launch the project by typing

 

 

./Garden-Watering-Device.py

 

 

Now the project will run in the terminal, waiting for the correct message to be sent over MQTT.

 

So how can we have the code run on boot? Well, this is quite easy really. In the terminal we need to issue a command to edit our crontab, a file that contains a list of applications to be run at a specific time, date or occasion. To edit the crontab, issue the following command in the terminal.

 

sudo crontab -e

 

If this is the first time that you have used the crontab, it will ask you to select a text editor; for this tutorial we used nano, but everyone has their favourite editor!

 

With crontab open, navigate to the bottom of the file and add the following line.

 

@reboot /home/pi/Garden-Watering-Device.py

 

Then press Ctrl + X to exit; you will be asked to save the file, so select Yes.

 

Power down the Pi Zero W, place it in a waterproof container along with a USB battery power source and the 12V circuit for our pump. Power up the Pi Zero W, and the first part of this project is complete. Time to move on to the Raspberry Pi 3 AIY Kit.

 

Raspberry Pi 3 AIY Kit

Now connect up your keyboard, mouse, HDMI, micro SD card, and finally power up the Raspberry Pi 3 AIY kit to the desktop.

 

Before we start any coding we need to install the Python3 MQTT library. So open a Terminal and type.

 

sudo pip3 install paho-mqtt 

 

 

After a few moments the software will be installed. Close the Terminal window.

Starting the code for this part of the process, luckily for us there is some pre-written code for this project. To use the code, click on the Dev Terminal icon on the desktop. This will launch a special version of the Terminal with all of the software setup completed, enabling us to use the AIY software. With the terminal open, type

 

 

cd src/

 

 

Inside the src directory there are a number of files, but in particular we are interested in cloudspeech_demo.py. Before any changes are made, make a backup of the file just in case!

 

cp cloudspeech_demo.py cloudspeech_demo_backup.py

 

So now that we have a backup of the code, we can edit the original file. For this we used IDLE3, and to open the file, type:

 

idle3 cloudspeech_demo.py

 

Inside the file we need to make a few additions. Firstly we need to add two extra libraries: time, to control the pace of the project, and the MQTT library.

Add these to the imports.

 

import time
import paho.mqtt.client as mqtt

 

 

Just after the imports we need to add a function that will handle connecting to our MQTT network. This will return a result code; 0 means we are connected with no issue.

 

def on_connect(client, userdata, flags, rc):  
        print("Connected with result code "+str(rc))

 

The next section to edit is the main() function, which is used to detect voice input using a recognizer function. This will listen for audio, record it, and then translate it using the cloud. Let's add another recognizer phrase that will listen for “water the garden”.

 

   recognizer.expect_phrase('water the garden')

 

Still inside the main function, we now move into a series of if..elif conditional statements. You can see the final elif is a test to see if the word “blink” has been recognised. After this elif, create a new elif test; this time it will check whether the phrase “water the garden” has been spoken.

 

            elif 'water the garden' in text:



So when this phrase is recognised, we print to the Python shell that the watering has started; this is a debug step that can be left out. We then create an object called “client” that stores a reference to the MQTT client, and attach the on_connect function we created earlier.

 

                print('Watering Garden')
                client = mqtt.Client()
                client.on_connect = on_connect

 

Next we connect to the broker; in this case our Pi Zero W is the broker, so we need to know its IP address. We also connect to the default MQTT port, 1883, and set a 60-second keepalive. We then publish the phrase “water garden” on the “garden” topic, which our subscriber, the Pi Zero W, is listening for.

 

                client.connect("BROKER IP ADDRESS", 1883, 60)
                client.publish("garden", "water garden")

 

Still inside the elif conditional test, we add a few lines of code that will turn on the LED inside the pushbutton that comes with the AIY kit. This acts as visual feedback that the code has run and the garden is being watered. After a second we turn off the LED, ending the code for the conditional test and for this part of the project.

 

 

               led.set_state(aiy.voicehat.LED.ON)
               time.sleep(1)
               led.set_state(aiy.voicehat.LED.OFF)
               time.sleep(1)
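
Putting the additions together, the finished elif block looks roughly like this. It is a sketch for reference only: the indentation must line up with the other elif tests inside main(), and "BROKER IP ADDRESS" is the IP address of your Pi Zero W.

           elif 'water the garden' in text:
               print('Watering Garden')
               client = mqtt.Client()
               client.on_connect = on_connect
               client.connect("BROKER IP ADDRESS", 1883, 60)
               client.publish("garden", "water garden")
               led.set_state(aiy.voicehat.LED.ON)
               time.sleep(1)
               led.set_state(aiy.voicehat.LED.OFF)
               time.sleep(1)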

 

Save the code, and exit from IDLE3. We need to run the code from the dev terminal so type the following.

 

./cloudspeech_demo.py

 

When ready press the button and say the magic words “water the garden”. You should now see the text appear on the screen, and the LED flash once. The message “water garden” will be sent over MQTT to our Pi Zero W, and it will start to water the garden.

 

#ShareTheScare this Halloween

 

Disclaimer: I’m an engineer, not a pro film maker. Be advised.

 

 

Dolls.

 

Why are dolls so scary to me? They watch you. Follow you. Walk around at night. Always evil. Always!

 

My fear must stem from a movie I saw as a child called “Dolls.” It frightened me so bad, I literally could not sleep, not even in the day! No other film did that to me. The Chucky series, Goosebumps episodes with the ventriloquist puppet, none of these scared me as a kid. It was something about that movie, Dolls… fuel for nightmares.

 

I’m not the only one. Dolls of various types always freak people out. Just take the “Annabelle” movies, prequels to “The Conjuring” horror film series, all of which feature the doll, Annabelle.

 

To be honest, every single scary doll in anyone’s house I know… has been thrown out, burned or buried. Thank goodness!

 

However, for this project, there was a doll shortage. Who knew I would need one of those hideous things one day?

 

I tried many antique shops. But every freaky doll they had cost a fortune! Wouldn’t that be a double smack? I buy a 200 dollar doll, and it comes to life to get me! Luckily, some local resale stores had a few options. I found this one below… not as scary as I wanted it though.

 

I wanted to animate a doll to look like something left by a child on a porch. As someone approaches, it would slowly stand up. Guaranteed to freak everyone out. I had a few ways to do this in mind, but I thought the simple puppet on a string should do the trick.

 

In this project, we are going to talk about two important skills to learn: One – Raspberry Pi stepper motor control. Two – making a Scary Doll move. For an added bonus, we’ll add some scary sounds to go along with the doll moving.

 

 

Concept:

 

The doll’s movement is controlled by a stepper motor hidden behind it. The doll’s head is attached to a clear fishing line, going up to a pulley and then down to the motor on the ground behind it. Turning the motor in different directions moves the doll up and down, from lying to standing to floating.

 

When it comes to motion control of any type, especially at low speeds, stepper motors are the way to go. I know the doll isn’t all that heavy, but a stepper motor has the highest holding force of any motor type. So, accidental unwinding will not be an issue.

 

Also, it will help with situations where the doll stands up slowly.

 

Another useful feature of a stepper motor is you can keep track of how it turns. The stepper motor used for this project is 200 steps per revolution. So, let’s say it takes the motor 10 full rotations to raise the doll – that is 2000 steps. I can just send the stepper driver 2000 steps to stand up, then 2000 steps in the other direction to lay back down.

 

I know some of you are worried about missing steps. That is definitely an issue if the stepper is under a load. If you take a look at my drink-mixing robot, the Drinkmotizer, missing steps was a major problem. Drinkmotizer featured a leadscrew, which is considered a load on the motor. Plus, it would get sticky from the beverage fluid dripping on the leadscrew. I would experience binding and missing steps too frequently.

 

However, with the Scary Doll, there is almost zero load on the motor. The doll is only a few ounces after all. Unless someone pulls on the string holding the doll, missing steps should not be a problem.

 

What would stop the stepper motor from spinning? How do you control the motion?

 

To do this, I wanted to set virtual limits in the software. Typically, CNC or motion control devices have physical limits. When a carriage reaches a certain point, it presses a button, and the software interprets that as a limit – stopping all motion in that direction. However, with the doll, I thought that might be too hard to implement. So, I would set limits virtually.

 

The user moves the doll to one point and presses a button to set a limit, then moves it to another point and presses a button for the second limit point. The software then does not allow the motor to turn outside those parameters. This way, you can create canned cycles that stay within a certain distance envelope, as sketched below.
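
As a rough sketch of the idea (not the project's actual code), the virtual limits can be handled with a step counter and two recorded positions; step_position, limit_a and limit_b are hypothetical names:

# Track position by counting steps; the two limits are positions
# recorded when the user presses the limit-set buttons
step_position = 0
limit_a = None
limit_b = None

def record_limit_a():
    global limit_a
    limit_a = step_position

def record_limit_b():
    global limit_b
    limit_b = step_position

def can_step(direction):
    # direction is +1 (clockwise) or -1 (counterclockwise); only allow
    # the move if it stays between the two recorded limit points
    if limit_a is None or limit_b is None:
        return True  # no limits set yet
    low, high = sorted((limit_a, limit_b))
    return low <= step_position + direction <= high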

 

 

BOM/Parts:

Scary Doll

Fishing Line

1x Low current Stepper motor

Gecko 210X Stepper Controller

Raspberry Pi 3

Speakers with Audio Input

4x Momentary Push buttons

1x Full-size breadboard

 

Schematic and design:

 

 

The actual build, setup away from the doll.

 

Gecko stepper settings

 

 

Code - How the code works:

 

In the main loop, button presses are checked. When a movement button is pressed, a direction is set based on the corresponding button. The direction is either clockwise or counterclockwise, set by the direction pin of the Gecko stepper motor controller; from our Raspberry Pi, we use the GPIO to write that pin 0 or 1 corresponding to the direction. When the button is pressed, we jump into a routine called rampUP() which incrementally increases the speed of the stepper motor, based on the time between pulses, up to the full speed the user sets. This ensures smooth operation of the motor. Steppers do not like to go from 0 rpm to a fast speed without gradually accelerating to the set speed; driven that way, the motor will most likely stall. Low speeds can be started without the need for ramping up. Voltage applied to the motor windings is also a factor; the Gecko 210X has a voltage input range of 18VDC to 80VDC.

 

The higher the voltage, the better the motor can achieve a higher speed without stalling. One of the first things to do when entering the rampUP() routine is to enable the driver, so we change the enable pin to 1. We only want the driver enabled when the stepper is going to move, or else the stepper will heat up unnecessarily while stationary. Before the motor moves, the music starts playing with a call to mpg123 via the os module, playing an mp3 in the same folder as the .py file. This plays out of the audio jack of the Raspberry Pi, which you can hook up to a speaker with an AUX cable. The motor ramps up from a starting low pulse time that decreases with time, so there are faster transitions between high and low pulses, making the motor turn faster. The high pulse is a very short, static time that does not change; the speed is controlled by the duration of the low portion of the pulse.

 

The number of steps is hard coded to reel the fishing line in to a certain limit and let the line back out at the end to the same point it started at: driving the motor one way a certain number of steps, then reversing the motor direction for the same number of steps. This results in the doll moving up from a starting laid-down position to a raised position and back to a laid-down position. The doll pauses for 3 seconds standing or crawling. Moving the doll across the floor and then standing it up required more fishing line to be let out on the pulley and setting the doll farther away from the fulcrum.
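
The full script isn't reproduced here, but a minimal sketch of the ramping pulse logic described above could look like this. Pin numbers, delays, the step count and the mp3 filename are assumptions, and RPi.GPIO provides the GPIO access:

import os
import time
import RPi.GPIO as GPIO

STEP_PIN, DIR_PIN, ENABLE_PIN = 20, 21, 16   # assumed BCM pin numbers

GPIO.setmode(GPIO.BCM)
GPIO.setup([STEP_PIN, DIR_PIN, ENABLE_PIN], GPIO.OUT)

def ramp_up_and_run(steps, clockwise, start_low=0.002, full_low=0.0004):
    GPIO.output(DIR_PIN, 1 if clockwise else 0)  # direction pin on the Gecko
    GPIO.output(ENABLE_PIN, 1)                   # enable only while moving
    os.system('mpg123 scary.mp3 &')              # play the sound in the background
    low = start_low
    for _ in range(steps):
        GPIO.output(STEP_PIN, 1)
        time.sleep(0.00001)          # short, static high pulse
        GPIO.output(STEP_PIN, 0)
        time.sleep(low)              # the low time sets the speed...
        if low > full_low:
            low -= 0.000002          # ...and shrinking it ramps the motor up
    GPIO.output(ENABLE_PIN, 0)

# 2000 steps up (10 turns of a 200 step/rev motor), pause, 2000 steps down
ramp_up_and_run(2000, clockwise=True)
time.sleep(3)
ramp_up_and_run(2000, clockwise=False)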



 

Difficulties:

- Keeping the fishing line wrapped around the shaft coupler and taut to the doll. Something like a fishing rod spool would probably solve this issue.

- Keeping the doll from spinning too much. It would rotate on the fishing line. The only way around this would be a two-line system to prevent that.

 

Other uses of the system:

- Moving a doll isn’t the only option. You could lift much larger objects, skeletons and ghosts come to mind. Or something smaller like fake bugs.

- This tutorial shows you how to turn a stepper motor. Anything CNC is possible. Linear stages, CNC router, etc.

 

If I had more money/time:

- I would love to animate more of the doll or puppet. Almost like a marionette with no puppeteer. The almost natural movement of arms and such, I imagine, would be very creepy.

- Find a better, scarier, doll for the project.

- Film in front of a porch with people walking up

 

Cabe

http://twitter.com/Cabe_Atwell

Raspberry Pi Projects - How does your IoT garden grow?

Join Les Pounder as he works on his IoT Garden! Watch him integrate technology into his garden and maintain the health of his favorite plants.

Can our garden water itself?

In this project we continue our quest to keep our garden well watered, but this time we start fresh with a new project... a self-watering garden!

 

Re-using some of the kit from Project 1, in this project we introduce relays, 12V circuits and peristaltic pumps that will water our garden based on the soil moisture sensor from Project 1. All we need to do is keep a water butt full of water, either through rain or grey water collection!

 

IMG_20170920_151615.jpg

 

For this project you will need

 

Building the hardware

Aside from our Pi Zero W, the main player in this project is the Rasp.IO Analog Zero board, which provides us with an analog to digital converter, the MCP3008. Yes you can buy the chip on its own for only a few dollars / pounds, but the Analog Zero board is a convenient form factor that offers a “less wires” alternative.

The main sensor we are using is a simple soil moisture sensor from Velleman. The moisture sensor is a simple analog sensor which connects to the 3V and GND pins on the Analog Zero, with the output of the sensor connected to A0. The output from the sensor is a voltage from 0V to 3.3V (as we are using the 3.3V power from the Pi Zero GPIO): if there is no conductivity, i.e. the soil is dry, then no voltage is conducted; if the soil is wet then the soil will most likely conduct all of the voltage.

 

The other part of the project is a relay, used to control the 12V circuit for our peristaltic pump which will pump water from a water butt to our plants using a rotating motion to “squeeze” the water through the connected plastic hose. The relay is controlled from the GPIO of our Pi. In this case we connect the relay to 3V, GND and the Input of the relay to GPIO17.

 

The Analog Zero will take a little time to solder, and we shall also need to solder the pins for I2C and the 3V and GND pins for later. Once soldered, attach the Analog Zero to all 40 pins of the GPIO and then connect the sensor and relay board as per the diagram. You will also need to provide the 12V power supply to the peristaltic pump. The + connection from the 12V supply goes to the relay via the normally open connection; the icon looks like an open switch.

 

DSC_2803.JPGDSC_2806.JPGIMG_20170920_151134.jpg

 

Build the project so that the wiring is as follows.

 

Circuit.png

 

Now connect up your keyboard, mouse, HDMI, micro SD card, and finally power up the Pi Zero W to the desktop. You will need to setup WiFi on your Pi Zero W, and make a note of the IP address for future use. Now open a terminal and enter the following command to configure SPI connection.

 

 

 

sudo raspi-config

 

 

Yes we can use the GUI “Raspberry Pi Configuration” tool found in the Preferences menu, but having raspi-config available to us over an SSH connection is rather handy should we need it.

 

 

Once inside raspi-config, we need to navigate to “Interfacing Options” then once inside this new menu go to the SPI option and press Enter, then select “Yes” to enable SPI. While not strictly necessary, now would be a great time to reboot to ensure that the changes have been made correctly. Then return to the Raspbian desktop. With the hardware installed and configured, we can now move on to writing the code for this project.

 

Writing the code

To write the code for this project we have used the latest Python editor, Thonny. Of course you are free to use whatever editor you see fit. You will find Thonny in the Main Menu, under the Programming sub-menu.

 

We start the code for this project by importing two libraries. The first is the GPIO Zero library, used for simple connections to electronic components. In this case we import the MCP3008 class for our Analog Zero board and then we import DigitalOutputDevice, a generic class to create our own output device.

 

from gpiozero import MCP3008, DigitalOutputDevice
import time

 

 

Now let's create two objects. The first, soil, is used to connect our code to the Velleman soil moisture sensor, connected to A0 on the Analog Zero board, which is channel 0 on the MCP3008 ADC. Our second object is a connection to the relay, which is triggered as an output device on GPIO17.

 

 

soil = MCP3008(channel=0)
relay = DigitalOutputDevice(17)

 

 

Moving on to the main part of the code we create a loop that will constantly run the code within it. Inside the loop the first line of code creates a variable, soil_check. This variable will store the value passed to it by the MCP3008, which is handled via the soil object. As this value is extremely precise we use the round function to round the returned value to two decimal places.

 

 

while True:
    soil_check = round(soil.value,2)

 

 

Next we print the value stored in the variable to advise the user on the soil moisture level, handy for debugging the code! Then the code waits for one second.

 

 

    print('The wetness of the soil is',soil_check)
    time.sleep(1)

 

 

To check the soil moisture level we use an if conditional test. This will test the value stored in the soil_check variable against a hard coded value. In this case 0.1 was found to be very dry soil, but of course you are free to tinker and find the value right for your soil. If the soil is too dry then the condition is passed and the code is executed.

 

 

    if soil_check <= 0.1:

 

 

 

So what is the code that will be run if the condition is met? Well, remember the relay object that we created earlier? We are going to use that object to turn on the relay, effectively closing the open switch and enabling the 12V circuit to be completed. This will bring the peristaltic pump to life and pump water into the plants. For testing we set the time to two seconds, but in reality this will be much longer, depending on the length of hose that the water needs to pass through. When enough water has been passed we need to turn off the relay, cutting the 12V circuit. The code then waits for 10 seconds before the loop repeats. Again, these times are in seconds for test purposes, but in reality they would be in minutes.

 

        relay.on()
        time.sleep(2)
        relay.off()
        time.sleep(10)
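
For reference, the snippets above assemble into this complete script; the timings are the short test values discussed, and the interpreter line added a little later goes at the very top:

from gpiozero import MCP3008, DigitalOutputDevice
import time

soil = MCP3008(channel=0)
relay = DigitalOutputDevice(17)

while True:
    soil_check = round(soil.value, 2)
    print('The wetness of the soil is', soil_check)
    time.sleep(1)
    if soil_check <= 0.1:
        relay.on()       # close the relay, completing the 12V pump circuit
        time.sleep(2)    # pump for two seconds (minutes in real use)
        relay.off()
        time.sleep(10)   # wait before checking again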

 

So that’s it, we have now built and coded the project and it is ready to be tested. To test the code in Thonny, click on the “play” button located in the menu, or press F5. Now as there is no conductivity between the prongs of the soil moisture sensor the code will trigger and start to water the plants, obviously be careful with this!

Once checked, place something conductive between the two prongs and you will see that the output is just printed to the Python shell and no watering is triggered. When you are finished press the red stop button to halt the code.

 

So now that we have code, how can we make it executable? In order to do this there are two steps to take. First we need to add a line to the top of our Python code which tells the operating system where to find the Python interpreter.

 

 

 

#!/usr/bin/python3

 

 

 

With that complete, we now need to go back to the terminal, and we need to issue a command to make the Python file executable from the terminal. The command is.

 

 

 

sudo chmod +x self_watering.py 

 

 

 

Now in the same terminal, launch the project by typing

 

 

./self_watering.py

 

 

 

Now the project will run in the terminal, checking our soil moisture levels and watering as necessary!

 

So how can we have the code run on boot? Well this is quite easy really. In the terminal we need to issue a command to edit our crontab, a file that contains a list of applications to be run at a specific time/date/occasion. To edit the crontab, issue the following command in the terminal.

 

 

sudo crontab -e

 

 

If this is the first time that you have used the crontab, then it will ask you to select a text editor, for this tutorial we used nano, but everyone has their favourite editor!

 

With crontab open, navigate to the bottom of the file and add the following line.

 

 

@reboot /home/pi/self_watering.py

 

 

Then press Ctrl + X to exit; you will be asked to save the file, so select Yes.

 

Now reboot the Pi Zero W and for now ensure the soil moisture sensor has no connection between the prongs. After about a minute, the project should be running, and your pump should start pumping water into the plants.

 

Power down the Pi Zero W, place it in a waterproof container along with a USB battery power source, and ensure the soil sensor is out of the box. Place the project in your garden, and make sure the soil moisture sensor is firmly in the ground. Power up the Pi Zero W, and now your garden can water itself!

You may have seen my blog post about creating a small portable media center that I can easily take on holiday to hook up to the hotel TV. If not, you can find it here;

 

Raspberry Pi powered media center to take on holiday

 

To reduce the amount of space it took up, I used a cheap USB keypad which could be used to control the media center. It worked really well & having something hard-wired meant I didn't have to worry about a Bluetooth-paired device needing re-pairing.

 

However, what I then realised was it would be good to be able to use a spare remote control instead. I was using the OpenElec distribution and looked through their documentation for how to do this, but only found references to version 3 of the software (it's on version 7) and how to get LIRC working with it. There were plenty of blog posts on hooking up IR support, but a lot of them were written 2-3 years ago, and the software has moved on somewhat.

 

Hardware Setup

 

What I did first was buy a suitable IR receiver. I chose the Vishay TSOP4838 (which costs less than £1) because of the voltage range (2.5-5.5V) and receiver frequency (38KHz). If you look at the datasheet for the product, you'll see which pins should get wired up to the Pi;

 

 

Simply wire pin 1 to GPIO 18, pin 2 to GND, and pin 3 to a 3.3v power pin, e.g.

 

 

By using some short F-F jumper wires and a small cut in the side of the case, I was able to position the receiver neatly(ish) on the side... it's still easily removable, but you could integrate it into the case a bit more seamlessly than this

 

 

 

 

Software Setup

 

Before this project I was using OpenElec, but had limited success getting the IR support working properly. I switched to OSMC which I'd read had better IR support through the main UI. I think I was actually on the right track with OpenElec, but I realised later that the old vintage Xbox remote I was trying to use wasn't 100% working.

 

If you're going to use a remote control that's officially recognised, then you can skip this part about learning IR remote control codes.

 

Learning IR remote commands

 

The remote I found in the loft was an old DVD player remote which (unsurprisingly) wasn't in the list of pre-recognised remotes in the OSMC installation. I needed to get the Pi to learn the IR pulses being sent out by the remote and map them to the Kodi functions.

 

1. First off, you need to telnet to the Pi. Username: osmc, Password: osmc.

 

2. Next you need to stop the LIRC service which is being locked/used by Kodi

 

sudo systemctl stop lircd_helper@lirc0

 

3. Now you can run the IR learn mode... this will record what it finds to the config file you specify;

 

irrecord -d /dev/lirc0 /home/osmc/lircd.conf

 

4. Follow the on-screen instructions which will recognise your remote.

 

One observation I had was that this only worked properly if I stopped after the first prompt to press lots of keys on the remote... if I completed the second stage, the key mapping didn't work, e.g.

 

If I ignored the second phase & let it abort, the learn process worked

 

 

When it's working, you'll be able to enter the Kodi function (like KEY_UP, KEY_DOWN, etc)  & map it to a key press on your remote;

 

Once you've mapped all the functions you want, we then need to move back to OSMC and tell it to use that config file we've just written.

 

OSMC Settings

 

In OSMC you need to do the following;

 

1. Disable the CEC service (via System Settings > Input > Peripherals > CEC Adapter); disabling it seems to be needed for LIRC to work.

2. Now go into OSMC settings and pick the Raspberry Pi icon

 

 

3. Go into Hardware Support and enable LIRC GPIO Support. You shouldn't need to change anything if you connected the sensor to GPIO 18.

4. Now go back and select the Remote Control option;

5. Ignore the list of pre-installed remotes and select Browse;

6. Navigate to the folder where LIRC wrote your config file;

7. Confirm the change & reboot the box;

 

 

That should be it... your remote should be able to control everything in Kodi.

Here is my Raspberry Pi Wireless project to display daily Flickr explore photos on a used Apple Cinema Display:
https://atticworkshop.blogspot.com/2017/08/raspberry-pi-zero-wireless-photoframe.html

See my wireless motion detection system after this link, it's pretty awesome! Or, carry on...

 

Disclaimer: I’m an engineer, not a pro film maker. Be advised.

 

 

This project will retrofit an old washer or dryer to alert you via text message when the clothes are done.

 

With the IoT market hot right now, many appliances have applications in this realm. Recently we have seen internet-connected cooking appliances and refrigerators. Of all the appliances in a house, the ones that have remained mostly the same in their process are the washer and dryer. Most people dread using these machines because, like baking, you have to wait and tend to the process when needed. With a washer, if you leave your clothes in there for too long without transferring them all to the dryer, you risk having your clothes start to smell like mold or dry out, in which case you have to rewash them. If you leave your clothes in the dryer for too long, they will wrinkle, in which case you have to send them for another heated spin. Ideally, the clothes get transferred to the dryer as soon as the washer is done, and the clothes are taken out of the dryer and folded or hung as soon as the dryer is done.

People are either too busy or don't hear the buzzer when it's done. These days, people are better at responding to their phone than to the dryer or washer. At this point, most washers and dryers only have the capability to remind you using a buzzer or chime, which is short and sweet: easy to forget or not hear at all. To make life easier, why can't that buzzer or chime reminder be a text message, something we are all now very good at responding to?

 

I based this project on another I did some time ago using a BeagleBone, but now it's ported to a Raspberry Pi since the Pi 3 has built-in WiFi. I had to try it.

 

For this project we used a Raspberry Pi 3 to text your phone. Yes, that's all you need to send a text. Most people don't realize that you can send a text (SMS) via email. So by hooking up the Raspberry Pi 3 to WiFi and using an email server, we can send a text via email. The cell phone service carriers have provided an easy way to do this.

 

This Popular Mechanics article lists the ways to do this for most carriers:

http://www.popularmechanics.com/culture/web/how-to/a3096/email-to-text-guide/

You address it the same way as an email, number@insertcarrierhere.com, using the carrier domains provided in the list at the link above.

 


Parts:

Raspberry Pi 3

MMA8452Q 3-Axis Accelerometer

USB Battery Pack (Any external pack will work, here is an OK one.)

MicroUSB Cable

2 Industrial Magnets

1 Rocker switch

1 Panel Mount LED

Project Box

 

------------------------------------------------------------------------------------------------------------------------------

The Schematic:

Pi washer texter schematic.JPG

The schematic is simple. The accelerometer is attached to the Raspberry Pi 3 with four project wires.

 

The hardest part is OPTIONAL, adding an indicator LED and on/off switch. Technically, you can just plug the Pi 3 into the USB battery every time you want to use it. But, if you want an easy fire-and-forget kind of device, place in that switch and LED!

 

The build:

How this is built doesn’t matter. At all.

 

All I did was slap the components inside a project box (enclosure). Any shape of box that everything fits inside will do. However, with my build I wanted to mitigate a few issues.

- I wanted to mount the accelerometer as rigidly as possible inside the box. This is to make sure that most of the movement it senses is from the machine it is attached to.

- I used two large rare-earth magnets to make sure it attaches to the washer/dryer as firmly as possible. Since the whole system works off of the idea the machine will have some vibration it can sense, it’s best to make sure it doesn’t get shaken off the machine!

- Portability and temporary use needed to be considered. I didn’t want to attach the sensor system to the machines permanently. I would only use it once a week or so anyway. Then I can turn it off and store it.

 

For those who want to see how I put it all together, see the following gallery:

 

{gallery} Raspberry Pi 3 washer dryer texter

20170703_180436.jpg

The main components are attached to the lid of the enclosure, since it is easier to attach standoffs.

20170703_180427.jpg

The battery and the magnets are hot-glued to the bottom of the main enclosure compartment.

20170703_180421.jpg

Although project wires are long, they do not interfere with the battery.

20170703_203520.jpg

The micro-USB connection that powers the Pi 3 is spliced inside the box for the on/off switch and the LED/resistor.

20170703_203527.jpg

20170703_203532.jpg

20170703_180427.jpg

20170703_174015.jpg

This is the complete system enclosed in the box and turned on.

 

 

Function:

We created a wireless box that attaches to your washer or dryer via magnets. A switch and LED on top of the box let you turn the device on and off and indicate whether it is on. When using the washer or dryer, the user simply turns on the device before starting the machine and turns it off after retrieving the clothes once the load is finished. The device works by detecting whether the washer/dryer is on or off by reading its vibration. There are of course cycle changes in a washer and dryer that would fool the device into thinking it is off, as the motor usually stops for up to 30 seconds. A timer is implemented in the code to determine if the washer/dryer has stopped for 1 minute. Since a cycle change takes less than 1 minute, a text is only sent after 1 minute of no activity, ensuring that the washer/dryer is done with that load. If the washer/dryer starts back up within that minute, the device continues to read the vibration until there is 1 minute of no activity, and then sends a text. To measure whether the washer/dryer is on or off, the accelerometer measures the X axis of the 3 axes provided, because the X axis is the horizontal plane of the surface of the dryer that moves the most.

 

The vibration of the washer/dryer is a side-to-side motion rather than up and down, so we only need to use the X axis for measurement. A subroutine takes 50 readings in 10 seconds, so that is a reading every 200ms. After 10 seconds of readings, the subroutine returns the current state of the device: whether or not the accelerometer X axis numbers are in range of the baseline, which is taken when the device is first turned on and the washer/dryer is off. The 50 readings are compared and calculated, and it is determined whether the X axis values are in-range or out-of-range. In-range values indicate that the device does not detect vibration, so the system is in standby mode, waiting for the appliance to start. Once the device detects vibration, the mode is set to ON and the cycle and timing detection starts. When a cycle ends, the vibration readings go in-range, the cycle check mode starts, and the 1 minute timer begins. If there is no activity, the device goes into Finish mode, sends a text, and then returns to standby to wait for another start.
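
As a rough sketch of that sampling routine (not the project's full code; read_x_g() stands in for the accelerometer read described in the I2C section below, and the tolerance value is an assumption):

import time

def read_x_g():
    # Placeholder: return the accelerometer X axis reading in g
    # (see the I2C library section below for the real implementation)
    raise NotImplementedError

def sample_state(baseline, samples=50, period=0.2, tolerance=0.05):
    # Take 50 readings over 10 seconds (one every 200ms) and count how
    # many fall outside the baseline band recorded at switch-on
    out_of_range = 0
    for _ in range(samples):
        if abs(read_x_g() - baseline) > tolerance:
            out_of_range += 1
        time.sleep(period)
    # Report vibration only if most samples were out of range
    return 'VIBRATING' if out_of_range > samples // 2 else 'IN_RANGE'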

 

Code:
The texting part works by sending an email via the Raspberry Pi 3. Since we are using email, we need an email handling service like Gmail to send the email, which gets translated into a text by the carrier. To set this up you need an email account to log in to. For example, in Python:


import smtplib

# Assign sender and receiver, X represents the digits of the phone number
sender = 'youremail@gmail.com'
receiver = 'XXXXXXXXXX@vtext.com'

# Next we create the message:
header = 'To: ' + receiver + '\n' + 'From: ' + sender
body = 'Laundry is DONE!'
signature = '- Sent From Rpi'

# Load the gmail server and port into the object "mail"
mail = smtplib.SMTP('smtp.gmail.com', 587)

# A subroutine that logs in with your gmail address and password and sends the text
def sendText():
    mail.ehlo()
    mail.starttls()
    mail.ehlo()
    mail.login('youremail@gmail.com', 'password')
    mail.sendmail(sender, receiver, '\n\n' + body + '\n\n' + signature)
    mail.close()


Running the sendText() function will send your text with the initialized variables loaded into it.


Wifi connection:  

The Python code for this project was written on Raspbian using the Python 3.4.2 IDE. VNC Viewer was used to view the Raspberry Pi's desktop, with VNC Server installed on the Raspberry Pi. Once the box is connected to WiFi, you can use the Raspberry Pi's IP address to SSH into it with a terminal program like PuTTY, or use VNC Viewer to see the Raspbian desktop. Typing "ifconfig" into the terminal gives you the IP address of the Raspberry Pi's WiFi connection.

-Screenshot of the Raspbian Desktop, showing python code in python 3.4.2 IDE and terminal



I2C Library:

Python has a couple of different libraries for I2C communication. For this project we used smbus for Python 3.4.2, which requires us to install the python3-smbus package by typing "sudo apt-get install python3-smbus" into the terminal. The project code uses the function calls bus.write_byte_data(device address, register address, data to write) and bus.read_i2c_block_data(device address, start register, number of bytes to read). The bus object is created at the beginning of the program with bus = SMBus(1), which tells the library we are using I2C bus number 1 to read and write on; the Raspberry Pi uses I2C bus 1 by default. We write to the MMA8452Q chip's configuration registers to configure the chip for use over the I2C bus, in particular the register that puts the accelerometer into active mode, so we can read values from the digital output registers to retrieve the acceleration data on the Raspberry Pi. bus.read_i2c_block_data() lets us read the first 7 registers into an array. We then take the X acceleration data from the X output registers and parse that data into variables.

Accelerometer Connection:

The accelerometer is powered with 3.3V from the Raspberry Pi and communicates via I2C. The python program writes to the configuration registers setting up how the data should display and configuring the mode you want to use. For this project we used the XYZ mode where the device is pulled from the I2C where the X axis values are translated into g’s (acceleration). The python program reads the values from a register on the accelerometer and all mathematical translation from the pulled values to the units it uses to determine its state is done by the python program.
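
A minimal sketch of that configuration and read sequence, using the smbus calls described above (the 0x1d address is the SparkFun board's default, 0x1c if SA0 is pulled low, and the 1024 counts-per-g scaling is the chip's default ±2g range):

from smbus import SMBus

bus = SMBus(1)          # the Raspberry Pi uses I2C bus 1 by default
ADDR = 0x1d             # MMA8452Q address (0x1c if the SA0 jumper is low)

# Set the active bit in CTRL_REG1 (0x2a) so the output registers update
bus.write_byte_data(ADDR, 0x2a, 0x01)

def read_x_g():
    # Read status plus the X/Y/Z output registers (7 bytes from 0x00)
    data = bus.read_i2c_block_data(ADDR, 0x00, 7)
    # X is 12-bit two's complement: OUT_X_MSB plus the top 4 bits of OUT_X_LSB
    x = (data[1] << 4) | (data[2] >> 4)
    if x > 2047:
        x -= 4096
    return x / 1024.0   # counts per g at the default +/-2g range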

- Sparkfun MMA8452Q

- Alternative Accelerometer here

 


Program Running Terminal Screenshots:

 

The program is run via ssh and executed using “sudo python3 txter.py”

-Running the Python script from the terminal, displays the hardware and variables being initiated

-Showing the readings, current state, and mode

-Showing the device reading the appliance vibration, meaning it's ON

-Starting the 1 minute timer when the values are back in range.

-Checking if the stop of vibration was a cycle change or finished

-Checking timer to see if 1 minute has passed without activity.

-1 minute has passed without activity meaning the laundry load is finished, sending text


Text Received:

Screenshot_20170703-185313.png

  • Screenshot of received text on phone

This project is about a digital picture frame aimed at family members, such as grandparents.

 

The idea is that parents taking pictures of their children, can easily share those pictures with the children's grandparents by making them appear on the picture frame automatically. In turn, the grandparents can "like" the pictures, letting the children's parents know which pictures are their favourites.

 

By making use of a specific software platform called resin.io, multiple instances of this picture frame can be deployed for various family members, without hassle.

 

Screen Shot 2017-08-29 at 18.34.39.png

 

Features

 

The project makes use of different services. Here's an overview:

 

Screen Shot 2017-08-28 at 16.51.45.png

 

The picture frame offers following features:

  • simple user interface to navigate the pictures, start a slideshow or like a picture
  • periodically download pictures from a shared Dropbox folder
  • send push notifications whenever a picture is liked
  • turn the picture frame's display off every evening, and back on every morning

 

Let's take a closer look at the software and hardware for this project, and how you can build your own connected picture frame.

 

Hardware

 

The following hardware components are used in this project:

 

Assembly is super easy, following these steps:

  1. Mount the Raspberry Pi 3 to the Raspberry Pi Touchscreen
  2. Connect the jumper wires from the screen's board to the Pi for power
  3. Slide the Touchscreen assembly through the enclosure's front bezel
  4. Screw everything in place

Do not insert the microSD card or power on the frame yet, as the software needs to be prepared and flashed to the card first.

Image-1 (1).jpg

 

Software

 

The complexity of the project is in the software. Let's break it down.

 

resin.io

 

Resin.io makes it simple to deploy, update, and maintain code running on remote devices, bringing the web development and deployment workflow to hardware and using tools like git and Docker to allow users to seamlessly update all their embedded Linux devices in the wild.

Resin.io's ResinOS, an operating system optimised for use with Docker containers, focuses on reliability over long periods of operation and easy portability to multiple device types.

To know more details about how resin.io works, be sure to check out this page: How It Works

Sign up for a free account and go through the detailed Getting Started guide. From there, you can create your first application.

 

Application Creation

 

Setting up a project requires two things:

  • application name: ConnectedFrame
  • device type: Raspberry Pi 3

 

Screen Shot 2017-08-26 at 21.38.55.png

 

After completing both fields and creating the application, a software image can be downloaded for the devices to boot from. The useful part is that the same image can be used for every device involved in the project. Select the .zip format, which will result in a file of about 400MB, as opposed to 1.8GB for the regular .img file.

Screen Shot 2017-08-26 at 21.38.45.png

Before downloading the image, connectivity settings can be specified, allowing the device to automatically connect to the network once booted. Enter the desired SSID and matching passphrase.

 

Flashing SD Card

 

Once the image specific to the application is downloaded, it needs to be flashed to a microSD card for the Raspberry Pi to boot from.

 

There is a tool available for doing just that, by the same people behind resin.io, called Etcher. It works on Mac, Linux and Windows, is simple to use and gets the job done.

Screen Shot 2017-08-26 at 21.50.54.png

 

Launch Etcher and select the downloaded image file. Etcher should automatically detect the SD card; all that remains is to click the "Flash" button.

 

The SD card is ready to be inserted in the Raspberry Pi.

 

Configuration & Environment Variables

 

Some Raspberry Pi configuration changes are typically made by editing the /boot/config.txt file. Resin.io allows users to do this via the user interface, by defining Device (single device) or Application (all devices) Configuration Variables.

 

In config.txt, pairs of variables and values are defined as follows: variable=value

 

Using the Device/Fleet Configuration, the variable becomes RESIN_HOST_CONFIG_variable and is assigned the desired value.

 

For example, rotating the LCD touch screen is normally done by appending lcd_rotate=2 to /boot/config.txt. As a configuration variable, this becomes RESIN_HOST_CONFIG_lcd_rotate with value 2.

Screen Shot 2017-08-26 at 18.01.22.png

 

Another type of variable is the Environment Variable, which can again be defined at application or device level.

 

Screen Shot 2017-09-03 at 09.57.08.png

 

These environment variables can be used by the operating system, such as "TZ" which is used to set the correct timezone, but also by scripts.

 

The following environment variables are used by the connected frame Python script:

  • DISPLAY: display to use for the Tkinter user interface
  • DROPBOX_LINK: link to dropbox shared folder
  • IFTTT_KEY: personal IFTTT webhooks key to trigger notifications
  • DOWNLOAD_INTERVAL_HOURS: interval in hours to download photos from the dropbox folder
  • CAROUSEL_INTERVAL_SECONDS: interval in seconds to automatically switch to the next photo
  • FRAME_OWNER: the name of the person the frame belongs to, used to personalise the "like" notification

 

Most are to be set at application level, though some variables such as FRAME_OWNER are specific to the device.

The link to the shared dropbox folder ends with "?dl=0" by default. This has to be changed to "?dl=1" in the environment variable, to allow the application to download the pictures.

 

Application Deployment

 

I've been developing a Python application using Tkinter to create the graphical interface for the picture frame.

The layout is simple: four interactive buttons (two on each side), with the picture centralised.
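
A minimal sketch of that layout follows; the button labels and the missing callbacks are placeholders, and the real application wires them to navigation, slideshow and like actions while loading the current photo into the central label:

import tkinter as tk

root = tk.Tk()
root.attributes('-fullscreen', True)   # fill the touchscreen

# The picture sits in the centre column, spanning both rows
picture = tk.Label(root, bg='black')
picture.grid(row=0, column=1, rowspan=2, sticky='nsew')

# Two buttons on each side of the picture
tk.Button(root, text='Prev').grid(row=0, column=0, sticky='nsew')
tk.Button(root, text='Slideshow').grid(row=1, column=0, sticky='nsew')
tk.Button(root, text='Next').grid(row=0, column=2, sticky='nsew')
tk.Button(root, text='Like').grid(row=1, column=2, sticky='nsew')

# Let the centre column and both rows absorb the spare space
root.columnconfigure(1, weight=1)
root.rowconfigure(0, weight=1)
root.rowconfigure(1, weight=1)

root.mainloop()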

 

Deploying an application with resin.io requires some additional files, defining which actions to perform during deployment and which command to use to start it. The full code and accompanying files for this project can be found on GitHub.

 

You can clone the repository for use in your resin.io application, reproducing the exact same project, or fork it and modify it as you desire!

 

git clone https://github.com/fvdbosch/ConnectedFrame 
cd ConnectedFrame/

 

In the top right corner of your resin application dashboard, you should find a git command. Execute it in the cloned repository.

 

git remote add resin gh_fvdbosch@git.resin.io:gh_fvdbosch/connectedframe.git

 

Finally, push the files to your resin project:

 

git push resin master

 

If all went well, a unicorn should appear!

Screen Shot 2017-08-26 at 17.55.45.png

 

In case of problems, a clear error message will appear, telling you what exactly went wrong.

 

IFTTT

 

"IFTTT" stands for "If this, then that" and is an online platform that enables users to connect triggers and actions for a plethora of services.

 

For this particular project, the webhooks service is used to trigger notifications to the IFTTT app on a smartphone.

Screen Shot 2017-08-28 at 21.19.27.png

 

The trigger is part of the code and needs to remain as is, though the action could be modified to suit your own personal needs.
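
As an illustration, firing a webhooks trigger from Python only takes a small POST request; the event name here is hypothetical, and the key comes from the IFTTT_KEY environment variable described earlier:

import os
import requests

key = os.environ['IFTTT_KEY']    # personal webhooks key
event = 'picture_liked'          # hypothetical event name

# value1..value3 become "ingredients" available to the notification action
url = 'https://maker.ifttt.com/trigger/{}/with/key/{}'.format(event, key)
requests.post(url, json={'value1': os.environ.get('FRAME_OWNER', 'the frame')})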

 

Demo

 

Enough with the theory, let's see the frame in action!

 

 

What do you think? Is this something you could see family members use? Let me know in the comments!

Raspberry Pi Projects - How does your IoT garden grow?

Join Les Pounder as he works on his IoT Garden! Watch him integrate technology into his garden and maintain the health of his favorite plants.

 

Project1B.JPG

 

Gardening is a delightful hobby, but it can be a chore, and one particularly bothersome job is watering the garden. Water it too often and the plants can die; too little and they can also die! But surely technology can offer a solution to this age-old problem? Well yes it can, and in this tutorial we shall be using the Raspberry Pi Zero W, the low-power Pi with added Bluetooth and WiFi, along with two sensors: a soil moisture sensor to check if our garden needs water, and an Adafruit Flora UV sensor to measure the ultraviolet index of the sun. This data is then emailed to our preferred inbox via a Gmail account that we use to send the messages on demand.

 

pebble.jpg

Our soil moisture sensor will need an analog to digital converter to convert its analog voltage into something that the Pi can understand. In this case we shall be using the MCP3008 via an add-on board from Rasp.IO called Analog Zero.

 

So let's get started by looking at the bill of materials for this project.

 


All of the code, and a detailed circuit diagram can be found on the Github page for this project. (Zip Download link)

 

Building the hardware

 

 

Rasp.io Analog Zero

Aside from our Pi Zero W, the main player in this project is the Rasp.IO Analog Zero board, which provides us with an analog to digital converter, the MCP3008. Yes you can buy the chip on its own for only a few dollars / pounds, but the Analog Zero board is a convenient form factor that offers a “less wires” alternative.

 

Velleman Soil Moisture Sensor
Adafruit Flora UV Sensor

The two sensors that we are using are a simple soil moisture sensor from Velleman, and an Adafruit Flora UV sensor based upon the SI1145 IC. The moisture sensor is a simple analog sensor, hence the use of the Analog Zero, but the Flora UV sensor uses I2C so we need to have access to those GPIO pins, which luckily the Analog Zero provides.

 

The Analog Zero will take a little time to solder, and we shall also need to solder the pins for I2C and solder the 3V and GND pins for later. We will also need to solder wires from the Flora UV sensor to attach to our Analog Zero.

Once soldered, attach the Analog Zero to all 40 pins of the GPIO and then connect the sensors as per the diagram.

 

Diagram of the circuit

 

Now connect up your keyboard, mouse, HDMI, micro SD card, and finally power up the Pi Zero W to the desktop. You will need to setup WiFi on your Pi Zero W, and make a note of the IP address for future use. Now open a terminal and enter the following command to configure SPI and I2C connections.

 

 

sudo raspi-config

 

 

raspi-config-main.png

Yes we can use the GUI “Raspberry Pi Configuration” tool found in the Preferences menu, but having raspi-config available to us over an SSH connection is rather handy should we need it.

 

 

Once inside raspi-config, we need to navigate to “Interfacing Options” then once inside this new menu go to the SPI option and press Enter, then select “Yes” to enable SPI. Then do the same for the I2C interface.

 

raspi-config-SPI.png

 

While not strictly necessary, now would be a great time to reboot to ensure that the changes have been made correctly. Then return to the Raspbian desktop. With the hardware installed and configured, we can now move on to installing the software library for this project.

 

Getting started with the software

We only have one software library to install for this project, and that is a Python library for working with the SI1145 sensor present on the Flora UV sensor. But before we install that library we need to ensure that our Pi Zero W has the latest system software installed. So open a terminal and type the following to update the list of installable software, and then install the latest software.

 

 

sudo apt update && sudo apt upgrade -y

 

With the software updated we now need to run another command in the terminal, and this command will download the Python library for our UV sensor.

 

 

git clone https://github.com/THP-JOE/Python_SI1145

 

 

Now change directory to that of the library we have just downloaded.

 

cd Python_SI1145


Now we need to run an install script so that the library will be available for later use. In the terminal type.

 

sudo python3 setup.py install


This library was designed to work with Python 2 but it installs cleanly and works with Python 3. But if you try out the built in examples you will need to add parentheses around the print statements, in line with Python 3 usage.

 


We can now close the terminal and open the Python editor, Thonny, found in the main menu under the Programming sub menu.

Once Thonny opens, immediately save your work as soil-uv-sensor.py

 

As ever with Python, we start by importing the libraries that we shall be using.

The first library is GPIO Zero, the easy to use Python library for working with the GPIO. From this library we import the MCP3008 class that will enable us to use the Analog Zero. Our second library is time, and we shall use that to add delays to our code, otherwise the project will spam our mailboxes with emails! Our third library is the SI1145 library that we shall use to read the data from our UV sensor. The fourth, fifth and sixth libraries are used to send emails, the smtplib enables Python to send an email, and the email.mime libraries are used to construct an email.

 

from gpiozero import MCP3008
import time
import SI1145.SI1145 as SI1145
import smtplib
from email.mime.multipart import MIMEMultipart
from email.mime.text import MIMEText


So we now move on and create two objects, one that is used to create a connection to the Flora UV sensor, called “sensor”. The other object is called “soil” and here we make a connection to the soil moisture sensor that is currently connected to A0 (Channel 0 of the MCP3008) via the Analog Zero.

 

 

sensor = SI1145.SI1145()
soil = MCP3008(channel=0)

 

 

Our main body of code is a while True loop that will ensure that our code runs continuously. Our first few lines in this loop will read the UV sensor and store the value into a variable, UV, which is then divided by 100 to give us a UV index value, which is then printed to the Python shell for debug purposes.

 

 

while True:
    UV = sensor.readUV()
    uvIndex = UV / 100.0
    print('UV Index:        ' + str(uvIndex))

 

 

In order to get the reading from our soil moisture sensor, we first need to make a new variable called soil_check, and in this variable we store the value being sent to A0 / Channel 0 of the MCP3008. Typically this would be read as a voltage, with 100% conductivity providing 3.3V. But in this case the MCP3008 class from GPIO Zero returns a float value between 0.0 and 1.0, with 0.0 being no conductivity (dry soil) and 1.0 meaning perfect conductivity and probably an over-watered garden. You will also notice that we round the figure to two decimal places, as the value returned from the soil moisture sensor is quite precise and goes to many decimal places. We then print this value to the Python shell before sleeping for a second. For the purposes of testing, the delay between checks is rather small, but in real life this delay would be measured in hours.

 

 

    soil_check = round(soil.value,2)
    print('The wetness of the soil is',soil_check)
    time.sleep(1)
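
As a quick aside, the 0.0 to 1.0 float maps back to a voltage if you multiply by the 3.3V reference: a reading of 0.45, for example, corresponds to roughly 0.45 x 3.3 ≈ 1.49V across the probes.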

 

 

So now that we have the data, let's use it. For this we need an if condition to check the value stored in soil_check against a hard coded value. In my case I used 0.1, but you are free to alter this to suit the plants / garden that you have. In my case I wanted to know if the soil became really dry, so any value equalling or lower than 0.1 will trigger the alert.

 

    if soil_check <= 0.1:


Now we start to construct the email that will be sent should the alert be raised. The first part of any email is to say who the email is from and who it is being sent to.

 

        fromaddr = "YOUR EMAIL ADDRESS"
        toaddr = "EMAIL ADDRESS TO SEND TO"


Next we construct our email as a MIME multipart message, in other words we can add more content to our email than a standard email. For this project we use multipart to enable the use of a subject line. But this could also be used with attachments such as video / images. Here we set up with our from and to email addresses, and then we set up the subject of the email.

 

        msg = MIMEMultipart()
        msg['From'] = fromaddr
        msg['To'] = toaddr
        msg['Subject'] = 'Garden requires water'

 

The next line we come across forms the body of our email, and it is made up from the readings taken by our sensors. These values are stored as floats in the variables soil_check and uvIndex and we then use concatenation to add them to a string readings which is then stored in the body.

 

        readings = 'Soil is ' + str(soil_check) + ' wet and the UV index is ' + str(uvIndex)
        body = readings

 

 

Then we attach all of the email contents ready to be sent.

 

 

        msg.attach(MIMEText(body, 'plain'))

 

 

In order to send the message we need to have a connection to the Gmail server.

 

 

        server = smtplib.SMTP('smtp.gmail.com', 587)

 

 

We now need to ensure that our connection is secure, so we use Transport Layer Security.

 

        server.starttls()

 

 

Now let's log in to our Gmail account; obviously you will need to use your own account and password.

 

 

        server.login(fromaddr, "PASSWORD")

 

 

Our next step is to create a new variable text which will contain our email message converted to a string.

 

 

        text = msg.as_string()

 

 

We can now finally send the email using our email address, the address of the recipient, and the text that we have just converted.

 

        server.sendmail(fromaddr, toaddr, text)

 

 

Our last two lines of code close the connection to the Gmail server and then instruct the project to wait, in this case for 10 seconds, but in reality this value would be much longer, otherwise you will receive lots of email spam!

 

 

        server.quit()
        time.sleep(10)

 

 

So that’s it, we have now built and coded the project and it is ready to be tested. To test the code in Thonny, click on the “play” button located in the menu, or press F5. Now as there is no conductivity between the prongs of the soil moisture sensor the code will trigger it to send an email. So check your inbox to see if it has arrived. Once checked, place something conductive between the two prongs and you will see that the output is just printed to the Python shell and no email is triggered. When you are finished press the red stop button to halt the code.

 

So now that we have code, how can we make it executable? In order to do this there are two steps to take. First we need to add a line to the top of our Python code which tells the operating system where to find the Python interpreter.

 

 

#!/usr/bin/python3

 

 

With that complete, we now need to go back to the terminal, and we need to issue a command to make the Python file executable from the terminal. The command is.

 

 

sudo chmod +x soil_uv.py

 

 

Now in the same terminal, launch the project by typing

 

 

./soil_uv.py

 

 

Now the project will run in the terminal, printing output to the shell, and the emails should start to be sent as there is no connection between the two prongs of the sensor.

 

So how can we have the code run on boot? Well this is quite easy really. In the terminal we need to issue a command to edit our crontab, a file that contains a list of applications to be run at a specific time/date/occasion. To edit the crontab, issue the following command in the terminal.

 

 

sudo crontab -e

 

 

If this is the first time that you have used the crontab, then it will ask you to select a text editor, for this tutorial we used nano, but everyone has their favourite editor!

 

With crontab open, navigate to the bottom of the file and add the following line.

 

@reboot /home/pi/soil_uv.py

 

Then press Ctrl + X to exit; you will be asked to save the file, so select Yes.

 

Now reboot the Pi Zero W and for now ensure the soil moisture sensor has no connection between the prongs. After about a minute, the project should be running, and your inbox should start to receive emails.

 

WorkingProject.jpg

Power down the Pi Zero W, place it in a waterproof container along with a USB battery power source, ensure the soil sensor is out of the box, but keep the UV sensor inside the box, oh and make sure the container is transparent! Place the project in your garden, and make sure the soil moisture sensor is firmly in the ground. Power up the Pi Zero W, and now you can wait for your garden to tell you when it needs watering.

 

Next time...

In the next blog post in this series, we shall build a system to automatically water our garden when an alert is triggered.
