Raspberry Pi Projects

October 2017

A pedometer is an electronic device that estimates the distance traveled by a person by recording the number of steps walked. Pedometers use an accelerometer to count steps. A Raspberry Pi Sense HAT records acceleration along the X, Y, and Z axes. You can use Simulink to record this data over a period of time using the MAT-file logging feature, and then use MATLAB to analyze the imported MAT-files and count the number of steps.

To use the MAT-file logging feature with the Simulink Support Package for Raspberry Pi hardware, you must have a Simulink Coder license.

For those who are not familiar with Simulink, I recommend completing the Getting Started with Raspberry Pi Hardware and MAT-file Logging on Raspberry Pi Hardware examples available on the MathWorks website.

Required Hardware

To recreate this project, you must have the following hardware:

Create a Simulink model for Raspberry Pi Hardware

1. Open the Log Accelerometer data using Raspberry Pi Hardware model by typing raspberrypi_countstep in the MATLAB Command Window. You will see a block diagram that looks like the image shown here.

2. In your Simulink model, click Simulation > Model Configuration Parameters to open the Configuration Parameters dialog box.

3. Under the Hardware Implementation pane, select Raspberry Pi in the Hardware board list. Do not change any other settings.

4. Click Apply to save your changes, and then click OK.

Enable MAT file logging

Here are step-by-step instructions on how to enable MAT-file logging to save acceleration data as MAT-files.

1. To open the Model Configuration Parameters dialog box, click the gear icon on the Simulink model toolbar.

2. Browse to Code Generation > Interface > Advanced Parameters, or type MAT-file logging in the search box.

3. Select the MAT-file logging option and click Apply to save the changes.

4. Click OK to close the dialog box.
5. In the Simulink model, double-click the Scope block, and click the gear icon to open the Configuration Properties dialog box.
6. In the Logging tab, select the Log data to workspace option, and click Apply to save the changes.
7. On the Simulink model toolbar, set the Simulation stop time parameter. This parameter specifies the duration for which the signals are logged. After the simulation stop time elapses, the logging of signals stops; however, your model continues to run. For example, if the Simulation stop time parameter is specified as 10.0 seconds, the signals are logged for 10.0 seconds, and then the logging stops while the model continues to run indefinitely.

Deploy the Model on Raspberry Pi Hardware

1. On the Simulink model toolbar, click the Deploy To Hardware button. This action builds, downloads, and runs the model on the Raspberry Pi hardware.

2. Walk a few steps while holding the Raspberry Pi™ hardware. Make sure that you walk for at least the duration specified by the Simulation stop time parameter.

Import and Analyze Data

To import the generated MAT-files from the hardware to your computer after the logging is completed, follow these steps:

1. In the MATLAB command window, use the following command to create a raspberrypi object. The parameters specified in this command must match the board parameters specified in Simulation > Model Configuration Parameters > Target hardware resources > Board Parameters.

r = raspberrypi(<IP address>, <username>, <password>);

2. Use the getFile function to copy the MAT-files from the Raspberry Pi™ board to your computer.

getFile(r,<filename>)

Here, r specifies the raspberrypi object and filename specifies the path and name of the file created. After importing the MAT-files, you can use them like regular MAT-files for any further analysis in MATLAB®.
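As a side note, the copied MAT-files are not tied to MATLAB: if you also work in Python, SciPy can read them. Here is a minimal, hedged sketch; logged_signals is a helper name I made up for illustration, not part of the example.

```python
from scipy.io import loadmat

def logged_signals(mat_path):
    """Return the non-metadata variables stored in a logged MAT-file.

    Keys beginning with '__' are metadata added by the MAT-file format
    itself, so they are filtered out here.
    """
    data = loadmat(mat_path, squeeze_me=True)
    return {k: v for k, v in data.items() if not k.startswith('__')}

# Example (run after copying a file with getFile):
# print(logged_signals('raspberrypi_countstep_1_1.mat').keys())
```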

3. Load the MAT-files into workspace variables.

load('raspberrypi_countstep_1_1.mat');

a(:,:) = rt_simout.signals.values(1,:,:) * 9.8;

a = a';

t = rt_tout;

4. Plot raw sensor data.

plot(t, a);

legend('X', 'Y', 'Z');

xlabel('Relative time (s)');

ylabel('Acceleration (m/s^2)');

5. Process raw acceleration data.

To convert the XYZ acceleration vectors at each point in time into scalar values, calculate the magnitude of each vector. This way, you can detect large changes in overall acceleration, such as steps taken while walking, regardless of device orientation.

x = a(:,1);

y = a(:,2);

z = a(:,3);

mag = sqrt(sum(x.^2 + y.^2 + z.^2, 2));

Plot the magnitude to visualize the general changes in acceleration.

plot(t, mag);

xlabel('Time (s)');

ylabel('Acceleration (m/s^2)');

The plot shows that the acceleration magnitude is not zero mean. Subtract the mean from the data to remove any constant effects, such as gravity.

magNoG = mag - mean(mag);

plot(t, magNoG);

xlabel('Time (s)');

ylabel('Acceleration (m/s^2)');

The plotted data is now centered about zero and clearly shows peaks in acceleration magnitude. Each peak corresponds to a step being taken while walking.

6. Count the number of steps taken.

Use findpeaks, a function from the Signal Processing Toolbox™, to find the local maxima of the acceleration magnitude data. Only peaks with a minimum height above one standard deviation are treated as steps. This threshold must be tuned experimentally to match a person's level of movement while walking, the hardness of floor surfaces, and other variables.

minPeakHeight = std(magNoG);

[pks, locs] = findpeaks(magNoG, 'MINPEAKHEIGHT', minPeakHeight);

The number of steps taken is simply the number of peaks found.

numSteps = numel(pks)

Visualize the peak locations with the acceleration magnitude data.

hold on;

plot(t(locs), pks, 'r', 'Marker', 'v', 'LineStyle', 'none');

title('Counting Steps');

xlabel('Time (s)');

ylabel('Acceleration Magnitude, No Gravity (m/s^2)');

hold off;

This shows how you can use the IMU sensor on the Raspberry Pi Sense HAT to count the number of steps a person has walked.
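If you would like to experiment with the same analysis outside MATLAB, the pipeline above (magnitude, mean removal, thresholded peak counting) ports to Python in a few lines. This is a hedged sketch using NumPy, with a tiny hand-rolled peak finder standing in for findpeaks, exercised on synthetic data rather than real sensor logs:

```python
import numpy as np

def count_steps(accel, threshold_stds=1.0):
    """Count steps in an (N, 3) array of X/Y/Z accelerations (m/s^2).

    Mirrors the MATLAB flow above: vector magnitude -> subtract the
    mean (gravity) -> count local maxima above one standard deviation.
    """
    mag = np.sqrt((accel ** 2).sum(axis=1))   # magnitude of each sample
    mag_no_g = mag - mag.mean()               # remove the constant offset
    thresh = threshold_stds * mag_no_g.std()
    mid = mag_no_g[1:-1]                      # interior samples only
    peaks = (mid > mag_no_g[:-2]) & (mid > mag_no_g[2:]) & (mid > thresh)
    return int(peaks.sum())

# Synthetic "walk": gravity on Z plus a 1.7 Hz bounce, 10 s at 100 Hz.
t = np.linspace(0, 10, 1000, endpoint=False)
accel = np.zeros((t.size, 3))
accel[:, 2] = 9.8 + 3.0 * np.sin(2 * np.pi * 1.7 * t)
print(count_steps(accel))                     # 17 cycles -> 17 "steps"
```

On real logs the threshold multiplier would need the same experimental tuning described above.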

Halloween is one of my favorite holidays here in the US, so much so that I spent a few years of my life thinking up and building smart Halloween props and animatronics for the haunted attraction industry. I won’t go too deep into the details of the business, but a few friends and I founded a company a few years back with the “help” of some investors. It was my first tech startup, and like many tech companies, we had the hardware and software to revolutionize a decades-stagnant industry that quite honestly did not want to change. To make a long story short, none of the original founders are part of that business anymore, with me backing out in late 2015.

 

 

One of my duties in the company was to brainstorm and prototype new and innovative props that utilized modern technology, while remaining easy enough to use for the aging haunted house owners to be able to program. Often this was accomplished by making props that just worked once powered up, other times this involved utilizing our custom Raspberry Pi based animatronic / whole scene controller unit on the finished prop. However, during the prototyping phase, I would always develop the project using a bare Raspberry Pi or Arduino, and I loved this part about the business the most. The thrill of coming up with a concept, and then building and presenting a working prototype during our weekly all-hands meeting was exhilarating. This is why I love creating Halloween projects every year here at Element14. It gives me the perfect excuse to build some of those concepts that I never got around to prototyping when I was co-owner of the company.

 

 

One of those product ideas that I never got around to building was a smart mirror that was fully functional while hiding a mind- and body-jarring jump scare, triggered when someone stopped in front of the mirror for more than a few seconds. So when I was asked to come up with a second Halloween project this year, I instantly thought of the smart mirror. What I did not anticipate was the level of frustration and failure I would experience while building it. Don’t worry though: in the end, I managed to work up something that does about 90% of what I wanted, and I am going to continue refining this project over the next couple of months. For now, the smart mirror does work; it just lacks many of the features I wanted it to have. Before we get into the build, I would like to take a moment to talk about the failures I encountered during this project, in hopes that a reader may be able to help with the JavaScript programming when I reboot this project in a month or two.

 

Experiencing Failure

 

As I said in the paragraph above, this project was one of the most frustrating, stress-inducing projects I have ever encountered. My issues began when building the wood frame that would house the mirror in its final form, but I am not here to rant about that, because I simply got a measurement wrong at some point in my CAD design. This was easily solved, and I only lost an hour or three rebuilding it. The real frustration kicked in shortly after I decided to use the MagicMirror2 software to power the magic mirror portion of the project.

 

 

MagicMirror2 is an amazing smart-mirror package if you just want to build a feature-rich, highly functional smart mirror. I really like this software; it appears to be regularly updated and has a very active community behind it. I cannot recommend it enough if you are building a normal smart mirror. The deficiencies begin to show themselves when you want to detour from the traditional functionality that most smart-mirror builders desire. In short, attempting to play a full-screen video, display a full-screen .GIF, or simply display a static .jpg in full-screen mode on top of the MagicMirror2 display is quite difficult, if not impossible altogether.

 

I spent three 12-16 hour days trying to get a video to play via OMXPlayer, HTML5, and various other Raspberry Pi based video players when a GPIO pin is pulled high by a PIR sensor. After I realized I was just not skilled enough in JavaScript programming to do this on my own, I asked a friend who is a great programmer for help, and six hours later we were still stumped. So I reached out via GitHub to one of the MagicMirror2 module developers, and he attempted to help me figure it out for several hours as well. In the end, the general consensus was that, without some extensive JavaScript coding and a deep understanding of how MagicMirror2 and Node.js work, it was not possible to get this project published on time.

 

So after missing my deadline and feeling like a complete failure, I picked myself up, tossed MagicMirror2 and all of my code into the garbage, and went searching for a new approach. After a few hours of searching, I happened to come across a repository on GitHub that contained a smart-mirror program written in Python. This was the best possible outcome for me, as Python is a language I can easily write and understand. I really would like to make this work with the MagicMirror2 software, as it is much more feature-rich and you can do some really cool stuff with Node.js, so if you would like to help me figure that out, please get in touch! OK, enough about my failures; let's get into the actual project.

 

Parts Required

 

Hardware

  • Raspberry Pi 3 With Noobs
  • PIR Sensor
  • HT-255D Crimp Tool
  • Crimp Connectors
  • HDMI Display
  • HDMI Cable


3D Printing Files

 

Software

 

Setting Up Your Raspberry Pi and Downloading The Code

 

 

Before we begin installing the software packages we will need to make our spooky smart mirror, you will need to install the latest version of Raspbian onto the SD card that will go into your Raspberry Pi. If you are using a fresh, empty SD card, you can follow the video above to learn how to install the latest version of Raspbian to the SD card. If you already have an SD card with Raspbian installed, we can update and upgrade Raspbian from the command line. To do this remotely from your computer, connect your Pi to your network via WiFi or a network cable, then log in (my preferred method) via a terminal app such as Terminal, PuTTY, or Cmder, and enter the following commands.

 

 

sudo apt-get update

 

 

Then select “Yes” if prompted

 

When the update is finished running, it's time to check whether an upgrade is available and install it. To do this, run the following command. This could take a while, so sit back and watch some YouTube videos, or check out my Design Challenge Weekly Updates while you wait.

 

sudo apt-get upgrade

 

Then select “Yes” if prompted

 

Once everything is up to date, shut down the Pi and connect an HDMI monitor or the TV you will be using for your mirror. I did my initial development using the official Raspberry Pi 7” Touch Screen. Now restart the Pi and access it once again from a terminal program on your computer.

 

Before any of the fun happens, we need to install my fork of the Smart-Mirror software. To do this you will need to use Git. If you do not have Git installed, or you have never used it before, here is a helpful tutorial. The Magic Book-bag portion of the tutorial is not relevant to this project, but it does help you understand how to use Git better.

 

The Smart Mirror Code

 

Once Git is installed, and you have your SSH key saved in your Git settings, navigate to the home/pi directory again and run the following commands. This will clone my Smart-Mirror-With-Halloween-Jump-Scare repository to the Raspberry Pi.

 

cd /home/pi
git clone git@github.com:CharlesJGantt/Smart-Mirror-With-Halloween-Jump-Scare.git

 

Navigate to the folder for the repository

 

cd Smart-Mirror-With-Halloween-Jump-Scare

 

Install the Smart-Mirror software’s dependencies. Make sure you have pip (https://pip.pypa.io/en/stable/installing/) installed before doing this.

 

sudo pip install -r requirements.txt
sudo apt-get install python-imaging-tk

 

Select “Yes” if prompted

 

 

At the moment, the weather widget is broken due to an API change, but for what we are doing with this project, that does not matter much. That said, you should still register a free developer account at darksky.net and enter your API key in the smartmirror.py file as pictured above. To do this, enter the following command.

 

sudo nano smartmirror.py

 

Then edit line 23 with your API key. Exit nano with ctrl+x, press Y to save your changes, and press enter to keep the same file name.

 

Screen Orientation and Cleanup

Before we can test the Smart-Mirror install, we need to take care of some minor but required tasks. The first task is to rotate the display by 90 degrees so that our smart mirror can hang in portrait orientation. To do this, enter the following command and edit the config file.

 

sudo nano /boot/config.txt

 

Arrow down to the bottom of the file and add the following line. Then save the file by pressing ctrl+x, pressing Y when prompted, and pressing enter to keep the same file name. If your screen's rotation is 180 degrees off afterwards, change the number to 3 instead of 2 (lcd_rotate takes values 0-3, corresponding to 0, 90, 180, and 270 degrees).

 

lcd_rotate=2

 

Now we need to hide the taskbar, and the only way to do this from the command line is to edit another config file. Enter the following command to edit the necessary file.

 

sudo nano /home/pi/.config/lxpanel/LXDE-pi/panels/panel

 

In the “Global” section at the top of the file, find the following parameters:

autohide=0
heightwhenhidden=2

Replace those lines with:

autohide=1
heightwhenhidden=0

 

With those small tasks taken care of, you can now test the Smart-Mirror install by running the following command. Note that you will have to run this from the terminal app on the Pi itself for it to work properly, because Tkinter will only open if run natively. There are ways to run this command remotely, but I have found them to be buggy.

 

sudo python smartmirror.py

 

An error may pop up about the weather module; ignore it, and the screen should turn black with a white clock and news text appearing on the screen.
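If you do want to launch it from an SSH session anyway, one common workaround (quite possibly among the buggy methods I mentioned) is to point the session at the Pi's local display so Tkinter draws its window on the attached screen; a sketch:

```shell
# Tell GUI programs started from this SSH session to draw on the Pi's
# attached screen (display :0) rather than trying to open remotely.
export DISPLAY=:0
echo "$DISPLAY"
```

Note that plain sudo may strip the variable, so launching with sudo -E python smartmirror.py preserves it.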

 

The Jump Scare Code

 

OK, now that we have the Smart-Mirror running, it's time to connect the PIR sensor to the Raspberry Pi’s GPIO header. Follow the diagram below, paying close attention to the power and GND wires.

 

      • PIR Sensor VCC Pin to RPi 5V
      • PIR Sensor Data Pin to RPi GPIO5
      • PIR Sensor GND Pin to RPi GND

 

Now let's take a look at the jumpscare.py file inside the Smart-Mirror directory. You might be wondering why I did not just include this code in smartmirror.py. My reason is that I expect that file to be updated soon by its creator to fix the weather API, and I also like the idea of being able to turn off the jump scare feature by killing the jumpscare process.

 

Open the jumpscare.py file in Nano.

 

sudo nano jumpscare.py

 

Starting at the top of the file we import the following libraries:

 

import RPi.GPIO as GPIO
import time
import os
import sys

from subprocess import Popen

 

 

Next we have to set which GPIO numbering scheme we will be using; I always use BCM. There are two different numbering schemes for the GPIO pins on the Pi: the Broadcom chip-specific pin numbers (BCM) and the P1 physical pin numbers (BOARD).

Here’s a reference showing all the pins on the P1 header, along with their special functions and both BCM and BOARD numbers:

 

GPIO.setmode(GPIO.BCM)

 

Now we need to set up the GPIO and tell the Pi which pins do what. The first line tells the Pi to set GPIO pin 5 as an input and to attach a pull-down resistor to it. The second line sets up GPIO pin 26 as an output. I left this in the code so that you can connect an LED to pin 26 for troubleshooting motion triggers.

 

GPIO.setup(5, GPIO.IN, pull_up_down=GPIO.PUD_DOWN)
GPIO.setup(26, GPIO.OUT)

 

Before we get into the loop, we need to declare a variable called "motionDetected". We use this variable to count motion triggers in our loop. We also need to tell the Pi where the video we want to play is located. Since you cloned this repository from my GitHub, the zombie.mp4 file will be in the Smart-Mirror directory.

 

motionDetected = 0
movie1 = ("/home/pi/Smart-Mirror-With-Halloween-Jump-Scare/zombie.mp4")

 

I’m going to break the loop down line by line to help you better understand what's going on. In the line below we are defining our loop, stating: while True, do this.

while True:

 

Here we are telling the Pi to watch the state of GPIO 5.

 

input_state = GPIO.input(5)

 

In the next block of code we are telling the Pi that if the input state of GPIO 5 equals True (high), then print “Motion Detected” in the terminal, increment the motionDetected variable by one, and wait for 0.2 seconds before moving on to the next line of code.

 

if input_state == True:
        print('Motion Detected')
        motionDetected += 1
        time.sleep(0.2)

 

Finally, we finish things up with another if statement: if motionDetected equals 1, set GPIO pin 26 high, make sure no instance of OMXPlayer is running, and then open a video player with the video that was defined earlier in the code. Next we tell the code to wait for 60 seconds before continuing with the loop, resetting the motionDetected variable to 0, and setting GPIO pin 26 low to turn off the debugging LED. Note that you can slow down or speed up how frequently the jump scare triggers by adjusting the time.sleep(60) setting.

 

if motionDetected == 1:
        GPIO.output(26, GPIO.HIGH)
        os.system('killall omxplayer.bin')
        omxc = Popen(['omxplayer', '-b', '-o', 'local', movie1])
        player = True
        time.sleep(60)
        motionDetected = 0
        GPIO.output(26, GPIO.LOW)
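One side effect of the time.sleep(60) approach is that the whole loop (and the debug LED) blocks while the cooldown runs. As a hedged sketch of an alternative, the trigger-plus-cooldown behaviour can be factored into a small helper; Cooldown is a hypothetical class of my own, not part of the repository:

```python
import time

# Hypothetical helper (not in the repository): wraps the "trigger once,
# then ignore motion for a cooldown period" behaviour that the loop
# above gets from motionDetected and time.sleep(60), without blocking.
class Cooldown:
    def __init__(self, seconds, clock=time.monotonic):
        self.seconds = seconds
        self.clock = clock
        self.last_fired = None

    def fire(self):
        """Return True if enough time has passed to trigger again."""
        now = self.clock()
        if self.last_fired is None or now - self.last_fired >= self.seconds:
            self.last_fired = now
            return True
        return False

# Demonstration with a fake clock so the behaviour is easy to follow:
fake_now = [0.0]
scare = Cooldown(60, clock=lambda: fake_now[0])
print(scare.fire())   # True  - first motion event plays the video
print(scare.fire())   # False - still inside the 60 s cooldown
fake_now[0] = 61.0
print(scare.fire())   # True  - cooldown elapsed, trigger again
```

In the loop, you would call scare.fire() whenever GPIO 5 goes high and only launch OMXPlayer when it returns True.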

 

The full code is listed below.

# This code triggers a video to play on
# the Raspberry Pi when motion is detected
# via a PIR sensor on BCM pin 5.
# Written by Charles Gantt, 2017
# http://www.themakersworkbench.com
# & http://www.youtube.com/c/themakersworkbench
# https://github.com/CharlesJGantt/Smart-Mirror-With-Halloween-Jump-Scare

import RPi.GPIO as GPIO
import time
import os
import sys

from subprocess import Popen

GPIO.setmode(GPIO.BCM)

movie1 = ("/home/pi/Smart-Mirror-With-Halloween-Jump-Scare/zombie.mp4")

GPIO.setup(5, GPIO.IN, pull_up_down=GPIO.PUD_DOWN)
GPIO.setup(26, GPIO.OUT)
motionDetected = 0

while True:
    input_state = GPIO.input(5)
    if input_state == True:
        print('Motion Detected')
        motionDetected += 1
        time.sleep(0.2)
    if motionDetected == 1:
        GPIO.output(26, GPIO.HIGH)
        os.system('killall omxplayer.bin')
        omxc = Popen(['omxplayer', '-b', '-o', 'local', movie1])
        player = True
        time.sleep(60)
        motionDetected = 0
        GPIO.output(26, GPIO.LOW)

 

With all of the code finished up, let's set both Python programs to run on boot. To do this we are going to write a simple bash script that tells the Pi to run both of the Python files in the background.

 

cd /home/pi/Smart-Mirror-With-Halloween-Jump-Scare
nano launcher.sh

 

Now type in this script

#!/bin/sh
# launcher.sh
# navigate to home directory, then to this directory, then execute python scripts, then back home

cd /
cd /home/pi/Smart-Mirror-With-Halloween-Jump-Scare
sudo python smartmirror.py &
sleep 10
sudo python jumpscare.py &
cd /

 

We need to make the launcher script an executable file. To do this, enter the following command.

 

chmod 755 launcher.sh

 

Since we will be using crontab to trigger this script, we need to make a directory to log any errors that may occur. This will help with troubleshooting. Enter the following commands:

 

cd
mkdir logs

 

Now let's add the script to the crontab. Open the crontab for editing with crontab -e, and add the following line to the very bottom of the file.

 

@reboot sh /home/pi/Smart-Mirror-With-Halloween-Jump-Scare/launcher.sh >/home/pi/logs/cronlog 2>&1

 

Now you can reboot the Pi to see if it worked. Enter the following command to reboot the Pi.

 

sudo reboot

 

When the Pi finishes booting, you should see the GUI load, then the smart mirror window open. If you wave your hand in front of the PIR sensor, the jumpscare.py script should trigger the zombie.mp4 video, and once finished, the smart mirror screen should reappear.

 

The Smart Mirror

 

With our code finished, it’s time to make our smart mirror. This is the part of the project where my end result may differ from yours. I chose to order a new 32” LED TV from Amazon and try my hand at creating the two-way mirror from window tint film and standard plate glass. I also wanted to create a wooden frame to house the TV so that it had the appearance of a hand-crafted mirror. Fortunately, I have a complete, fully stocked woodworking shop here at home, and whipping up a frame was a few hours' work. As I mentioned at the beginning of this project, I got my math wrong and made the first version of the frame incorrectly, and the TV screen did not fit. I was able to correct this, but if you do not have an abundance of time-saving tools and extra wood to work with, take your time and measure your screen’s dimensions carefully. The only advice I can really offer is to leave about 1/16” clearance around the edge of the screen to account for expansion of the steel frame of the TV as it warms up.

 

 

I am not going to go very in depth here about the process I used to build the frame, because how you frame the mirror is arbitrary and not very relevant to getting the mirror to work. You could even just tape a two-way mirror to the front of the TV and the effect would be the same. Some people even create these mirrors from 15” laptop screens or HDMI monitors; you do not have to use an actual TV. I simply used a brand new 32” TV because I will be rebuilding this mirror with a much more refined frame built from exotic hardwood. I do plan on making a video of that build, with a complete step-by-step guide, for my YouTube channel, so if you would like to check that out, it should be out sometime towards the end of the year.

 

 

I didn’t get many photos of the glass cutting or tinting process, as that was another major issue I ran into during this project. Initially I decided that I would cut my own glass, as it is something I have done in the past and it saves a good bit of money. My mistake was thinking that the glass I bought from a big box home repair store would be of a high enough quality to be easy to cut with the standard score-and-snap method. I broke $38 worth of glass before I gave up, defeated, and called a local glass shop. They explained to me that the glass big box home improvement stores sell is just too low quality: it is not annealed very well, which gives it a harder surface that is prone to flaking during scoring. The higher quality glass that glass shops stock is designed to be cut with laser-sharp accuracy and to minimize errant cracks in the cutting process. They showed me how quick and easy a good, high-quality piece of glass is to cut, and $17.38 later I was on my way home with the glass.

 

 

That afternoon, I attempted to tint the glass by myself, and while I came very close to succeeding, I botched the tinting process twice. This was 100% my fault. Instead of following the poorly written directions that came with the mirror tint window film, I watched a few window tinting tutorials on YouTube and realized that by adding a little dish soap to the water I was spraying the glass and film with, the process was much easier and gave a better result. Living in a home with four dogs and a couple of cats, as well as an attached woodworking shop, did end up haunting me a bit during the tinting process though. I spent a lot of time picking specks of dust and animal hair out of the wet tint, and still managed to trap a few dust particles and hairs under it. Since this is a Halloween prop, I am not too bothered by that. When I rebuild this into a proper smart mirror, I will order a piece of chemically tinted two-way mirror glass to avoid these issues altogether. Another advantage of chemically treated glass is that the mirror can be made from tempered safety glass, which means there is a much lower chance of injury if it shatters or falls off the wall and breaks.

 

 

The one thing I made that may help you along with your build is the corner brackets I designed and 3D printed to hold the mirror and TV firmly to the frame’s bezel. These brackets take about 20 minutes each to print on a Prusa i3 MK2S at 0.2mm resolution. If you would like to use these brackets in your project, you can download them from my Thingiverse by clicking here.

 

So without going into too much detail, here are some photos of the frame build.

 

 

 

Now that we have the frame built and its hanging cable attached, it's time to attach the Raspberry Pi to the back of the TV. If you have room, and a 3D printer, you can print this handy-dandy VESA-mount Raspberry Pi case bottom that I found on Thingiverse. If you want to print the top piece as well, that is just fine; I only printed the bottom, as I wanted my Pi to have good airflow. Unfortunately, on the TV I am using, the VESA mount was only part of the rear plastic, so I ended up attaching the Pi to a single screw hole. When I rebuild the mirror, I will print a custom case with mounting points that fit the screw locations on the grey steel backing plate.

 

As you can see, the case mounts to the TV with standard M4 machine screws, and the Pi attaches to it with small 3mm screws. Then all that is required is to connect a USB cable between the Raspberry Pi 3 and the TV’s USB port.

 

 

Finally, we need to attach the PIR sensor to the top of the frame. To do this, I found a nice, compact PIR sensor case on Thingiverse, which I remixed and designed a small extension arm for. Download it here. It is held together with M3 machine screws and nuts. To mount it to the top of the frame, I just used more small screws like the ones I used on the corner brackets.

 

 

To finish up the PIR sensor mounting, I needed to make up a cable to connect it to the Raspberry Pi. Using a pin and crimp kit, I made the cable about three inches longer than it needed to be to add some strain relief and prevent the cable from putting too much tension on the pins of the Pi.

 

 

Now all that is left is to test the smart mirror and jump scare, and to do this, I simply stood it up on my workbench. Check out the demo video above; I hope it shows off the jump scare well enough. This was the final point of frustration for me during this project. I shot a nice video with my DSLR and lavalier microphone, but it appears that something is broken in my brand new camera, as it will not record audio from its microphone jack. Thankfully, Canon has a spectacular warranty department, and it will be fixed in a couple of weeks. I plan on taking the smart mirror to a friend's Halloween party, and will update this post with a video of some people getting scared if that happens.

 

So, what are my final thoughts on this project? I can honestly say that even after all of the frustration, stress, and unfortunate events, I am, for the most part, proud of how it turned out. There are things I wish I had done a different way, and some features that I left off simply because of time constraints. As I mentioned earlier, I am going to continue developing this project over the coming months and hope a few people will join me in that journey, but for now, I have a working smart mirror that also features a cool jump scare, so I will call this one a win. Albeit a small win, but a win nonetheless. I guess the takeaway from my experience on this project would be that perseverance always pays off, and as long as you refuse to give up, anything is possible. Thanks for taking the time out of your day to read this tutorial. If you would like to see me create more cool stuff like this, please leave a comment below, and hopefully I will get assigned more projects like this! I will see you on the next one, and until then, remember to Hack The World and Make Awesome!


Raspberry Pi Media Center: Part 2

Join Les Pounder as he guides us through turning a Raspberry Pi into a Media Center!

Learn about Raspberry Pi, XBMC, Plex and even Kodi streaming services.

Check out our other Raspberry Pi Projects on the projects homepage


Part 2: Identifying my needs and planning the build

 

So what are my needs?

 

I work from home and like to have something on in the background, so my use case is a device that can keep me entertained while I work.

 

Project Goals

 

  • The project should connect to my home wifi.
  • It should have its own screen and speaker.
  • Input will be via a touchscreen.
  • I like to watch YouTube videos and listen to podcasts.
  • I want to watch films on the device.
  • It should connect to my hard drive via a network share.

 

So to accomplish the project I will need plenty of kit.

 

 

 

The Raspberry Pi 3 has plenty of power for this project, maybe more than I need as this project could also be created with a Pi Zero W, but then I would need to source a USB sound card.

To the Pi 3 I will connect Pimoroni’s Hyper Pixel, an 800x480 screen that fits on top of the Raspberry Pi 3 and uses the GPIO. The Hyper Pixel board is fantastic; sure, it might not be an HD screen, but the image quality is superb, and it can run at 60fps. The only issue with the Hyper Pixel is that the screen backlight uses PWM to control the brightness, which technically renders the analogue audio output useless. But fear not! If I keep the backlight on at full brightness, then I can use the analogue audio!

(Excuse the mess...)

 

The minimum SD card size for LibreELEC is 8GB, but 16GB cards are now really cheap and the extra space may come in handy.

As I am powering the Pi 3 and the Hyper Pixel from a single source, I need to make sure I supply enough power, and the official 2.5A power supply will do the job nicely.

Speakers are easy to find. I'm using a cheap analogue speaker that has its own battery and can be recharged over micro USB, so I'll run a micro USB to USB cable from the Pi 3's USB port to keep the battery charged and the speaker ready for use.

Unless I put the kit in a case it will just be a mess of wires, so I will use a suitable project case with a few well placed holes and brass standoffs to keep everything secure and well placed. More importantly, it will keep my desk almost tidy!

Purchasing an MPEG licence is an optional step. The Pi 3 CPU is powerful enough to software-decode standard definition MPEG streams, but should you need to decode HD streams, purchasing an MPEG-2 licence key for around $3 is a no brainer.

 

So there we have it, a starting point from which this project can be born!

 

In Part 3 of this project I will build the basic system and test that it works. Then in Part 4 I will configure the project to meet my needs.


Raspberry Pi Media Center: Part 1

Join Les Pounder as he guides us through turning a Raspberry Pi into a Media Center!

Learn about Raspberry Pi, XBMC, Plex and even Kodi streaming services.

Check out our other Raspberry Pi Projects on the projects homepage

Previous Part
All Raspberry Pi Projects
Next Part

What is a media centre?

From the 1980s to the 2000s a media centre was a wooden cabinet filled with VHS tapes, DVDs, cassettes and CDs. But in the mid 2000s this changed, and media was consumed and catalogued inside vast digital media centres. From the 64MB MP3-player USB flash drive that I purchased in 2003 to the ubiquitous iPod full of music, the media centre has evolved, shrunk and become more intelligent! The same has happened with our movie collections: no longer do they occupy shelves of space; rather, digital shelves are groaning with content that we have purchased from many different providers.

 

So what is this blog post about?

In this blog post, the first of four such posts, we shall examine the different options that we have available. Then in the second post we shall determine what type of media centre meets the needs of our users. In the final two posts (3 and 4) we shall build our own media centre and configure it to provide us with a wealth of legally obtained content.



Android Boxes

Image by Tzah (Own work) [Public domain or CC BY-SA 4.0 (https://creativecommons.org/licenses/by-sa/4.0)], via Wikimedia Commons https://commons.wikimedia.org/wiki/File:DroidBOX_Android_Kodi_TV_Box.JPG

 

Android boxes are very common. They are relatively cheap and can turn your television into a Smart TV. Typically they come with Android, and more often than not that version is quite old and possibly full of spyware. To add further complexity, these boxes can be tailored to include “access” to streaming movie services and sports channels, not that the owners of such content would know, as quite often these additions are illegal modifications.

These boxes are typically found via online auction sites, but in recent months, especially in the United Kingdom, we have seen a clampdown on boxes that come configured for illegal content.

These boxes are a solution to watching your movies, but their illegal software installs can make them a dubious purchase and there is little or no support from the providers.

 

Streaming Services

We all know of at least one streaming service. Netflix, Amazon Prime Video, Hulu and CBS All Access are all examples of content providers giving you access to their content. And therein lies the problem: you never own the content in the same manner as you own physical media (of course, the physical media is never truly yours either, as you are unable to “rip” the content and store it on your own systems).

As soon as you stop paying for the subscription, the flowing tap of media ceases and you are left without the latest series of Star Trek or Stranger Things.

These services are legal and provide a good level of customer support. You can also watch the content on the move, handy for long journeys and commuting.

Raspberry Pi Based Solutions

The Raspberry Pi provides the flexibility of all the above services, yes including the morally and legally dubious illegal streams. Thanks to the Raspberry Pi’s GPU (Broadcom VideoCore IV) it can handle 1080p video without using the CPU.  So we get high definition video, and HDMI connectivity. This is even available with the Raspberry Pi Zero!

 

Kodi

On the Raspberry Pi we have a choice of software to cater to our media needs. We can run Kodi media centres with OSMC and LibreELEC, both of which can be downloaded via the Raspberry Pi website.

OSMC and LibreELEC, being part of the Kodi family, offer installable plugins to enhance your media. You can watch YouTube videos, stream content from online providers (Hak 5, Element14 and many others), or stream radio and podcasts from many providers. This also means that you can stream movies, sport and pay-per-view television, and no, I'm not going to show you how to do that. While OSMC and LibreELEC are great for managing your home library, we do have one issue, namely that your media is locked at home! You can't stream media from your Pi to another device. But with the next option you can!

 

Plex

Plex is a popular streaming service that offers users the opportunity to stream content from their Raspberry Pi to any device in and outside the home. In fact we have a great tutorial that you can follow that takes you through the steps necessary to turn a Raspberry Pi 3 into a home media streaming server!


Raspberry Pi Projects - How does your IoT garden grow?

Join Les Pounder as he works on his IoT Garden! Watch him integrate technology into his garden and maintain the health of his favorite plants.

Check out our other Raspberry Pi Projects on the projects homepage

Previous Part
All Raspberry Pi Projects
This is the Final Part

The final project!

In previous projects we used sensors to detect soil moisture and relay that data to us via email, forcing us to go outside and water the garden. Then we developed a self-watering system based on the same sensor, connected to a pumping system that fed water to our garden, all controlled by the humble Raspberry Pi Zero W.

In this final project we shall create another device that will enable us to water the garden from our desk or sofa. It uses the Google AIY kit that came with a special issue of The MagPi magazine and is now offered for sale by other retailers. With this kit we build an interface that enables our voice to trigger watering the garden: all we need to do is press a button and speak the words “water the garden”. This message is sent over the network using MQTT (Message Queuing Telemetry Transport), which uses a publisher - broker - subscriber model to send messages to devices listening on a particular “topic”. In this case a Raspberry Pi Zero W will be listening for these messages, and when it receives one it will trigger a relay into life, connecting a peristaltic pump to a 12V power supply and pumping water from a water butt to our thirsty garden.

 

MQTT?

In this project we use MQTT to relay messages from one device to another. Ideally we need three devices on the network:

 

  • A Publisher: The Raspberry Pi 3 AIY Kit which sends the trigger phrase across the network
  • A Broker: Any computer running the MQTT broker software. In this project we use the Pi Zero W.
  • A Subscriber: The Pi Zero W, which is looking for the trigger phrase and acts upon it.

 

But for this project the Pi Zero W that is watering our garden is both the broker and the subscriber. This is acceptable for our small network, but for larger projects with multiple publishers and subscribers it would be prudent to use a dedicated machine as the broker.

 

MQTT works by the publisher and subscriber both being on the same topic, similar to a channel. The publisher sends a message using a certain topic, and the subscriber receives it. A real world example of this model is YouTube. Content is created by Publishers, who upload it to their channel (Topic). YouTube then acts as a Broker, offering the content to Subscribers who will choose what Channels (Topics) to watch.
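The publisher - broker - subscriber relationship described above can be sketched in plain Python. This is a toy in-process model just to illustrate the idea, not real networking; the class and names are illustrative only:

```python
# A toy in-process sketch of the MQTT publish/subscribe model:
# the broker keeps a map of topics to subscriber callbacks.
class ToyBroker:
    def __init__(self):
        self.subscribers = {}  # topic -> list of callback functions

    def subscribe(self, topic, callback):
        self.subscribers.setdefault(topic, []).append(callback)

    def publish(self, topic, message):
        # Deliver the message to every subscriber of this topic.
        for callback in self.subscribers.get(topic, []):
            callback(message)

received = []
broker = ToyBroker()
# The Pi Zero W subscribes to the "garden" topic...
broker.subscribe("garden", received.append)
# ...and the AIY kit publishes the trigger phrase on it.
broker.publish("garden", "water garden")
print(received)  # -> ['water garden']
```

In the real project, paho-mqtt and the mosquitto broker play these roles over the network, but the topic-matching logic is essentially this.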

 

 

For this project you will need:

  • A Google AIY kit
  • A Raspberry Pi 3
  • Pi Zero W
  • A transparent waterproof box
  • USB battery
  • Jumper jerky (Dupont connections)
  • Relay Board
  • 12V Peristaltic Pump
  • Plastic hose to match diameter of pump
  • 12V power supply (for outdoor use)
  • Waterproof box to store everything, also for 12V power supply!
  • Water Butt / Storage

 

All of the code for this project can be downloaded from my Github repo.

 

Building the hardware

Raspberry Pi 3 AIY Kit

 

The kit comes with a round arcade button, but I had a lot of triangular buttons that I wanted to test.

 

The hardware build is split into two, as we have two machines to work on. First we shall start the build on the Raspberry Pi 3 AIY Kit.

 

 

Building and configuring the Google AIY kit is straightforward, and for the latest guidance head over to https://aiyprojects.withgoogle.com/ where you can also learn how to check, debug and configure the kit.

To assemble the kit refer to https://aiyprojects.withgoogle.com/voice/#assembly-guide

For debug and testing the kit https://aiyprojects.withgoogle.com/voice/#users-guide

In order to create this project we need to turn on billing for our project. But don’t worry as we get 60 minutes of free use per month. To turn on billing follow the guidance at https://aiyprojects.withgoogle.com/voice/#makers-guide-3-1--change-to-the-cloud-speech-api

 

For this part of the project, expect to dedicate around 90 minutes to build and test the kit.

 

Pi Zero W Controller

The other part of the project is our trusty Pi Zero W connected to a relay, used to control the 12V circuit for our peristaltic pump, which pumps water from a water butt to our plants by using a rotating motion to “squeeze” the water through the connected plastic hose. The relay is controlled from the GPIO of our Pi: we connect the relay to 5V, GND, and the input of the relay to GPIO27. This is the same as in Project 2, but we have changed the GPIO pin used to control the relay, as GPIO17 was a little twitchy in our tests.

 

Relay Connection

Connect the relay to GPIO27 using the female to female jumper jerky connectors as per the diagram. You will also need to provide the 12V power supply to the peristaltic pump. The + connection from the 12V supply goes to the relay, via the normally open connection, the icon looks like an open switch.


Software Build

Connect up your keyboard, mouse, HDMI and micro SD card, and finally power up the Pi Zero W to the desktop. You will need to set up WiFi on your Pi Zero W and make a note of its IP address for future use. In a Terminal, type the following.

 

 

hostname -I

 

 

 

Still in the terminal, let's install the MQTT software that will turn our Pi Zero W into a broker: the MQTT term for a device that manages the messages passed from the Publisher (our Pi 3 AIY Kit) to the Subscriber (also our Pi Zero W).

 

sudo apt update
sudo apt install mosquitto



Now let's start the MQTT broker service on the Pi Zero W. We need to do this so that it can make the connection between our Pi3 and Pi Zero W. In the Terminal type

 

 

sudo service mosquitto start

 

 

 

With that running we can now perform the final install before starting the code. This will install the MQTT library for Python 3. In the Terminal type

 

 

sudo pip3 install paho-mqtt

 

 

So that's all the configuration completed. Let's open the Python 3 editor from the Programming menu and start writing Python code. For this you will need to create a new file and save it as Garden-Watering-Device.py.



We start the code for our Pi Zero W by importing three libraries. From GPIO Zero we import the DigitalOutputDevice class, used to create the connection from our Pi Zero W to the relay. We then import time, used to control how long we water the garden for. Lastly we import the MQTT client.

 

from gpiozero import DigitalOutputDevice
import time
import paho.mqtt.client as mqtt

 

Next we create an object used to connect our relay to the GPIO via GPIO pin 27.

 

 

relay = DigitalOutputDevice(27)

 

 


Our next step is to create a function containing the code needed to connect our Pi Zero W to the MQTT network. This function connects and prints a result code that tells us whether we have connected to the network correctly. The Pi Zero W is then configured as a subscriber listening on the topic “garden”.

 

def on_connect(client, userdata, flags, rc):
        print("Connected with result code "+str(rc))
        client.subscribe("garden")

 

 

Another function, but this time one that reacts to messages arriving over the MQTT network. The first step of the function is to create a variable called “message” to store the payload, converted to a string. Then, using string slicing, we remove the unwanted data from the message, keeping everything from position 2 in the string to the second-to-last position. Then we print the message for debug purposes.

 

 

def on_message(client, userdata, msg):  
        message = str(msg.payload)
        message = message[2:(len(message)-1)]
        print(message)
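To see why the slicing is needed, note that paho-mqtt delivers msg.payload as bytes, and calling str() on bytes produces the literal "b'...'" wrapper. This standalone sketch (with an example payload) shows what the slice removes; decoding the bytes directly is a tidier alternative:

```python
# paho-mqtt delivers msg.payload as bytes; str() on bytes
# produces the literal "b'...'" wrapper around the text.
payload = b"water garden"              # example payload
message = str(payload)                 # -> "b'water garden'"
message = message[2:len(message) - 1]  # strip the leading b' and trailing '
print(message)                         # -> water garden

# A tidier alternative is to decode the bytes directly:
assert payload.decode("utf-8") == message
```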

 

 

Still inside the function, we create a conditional test that checks the contents of the “message” variable against a hard-coded value, in this case “water garden”. If the result of the test is True, meaning the two match, we print to the shell that watering has started. The relay is then turned on, we pause for 2 seconds (for testing purposes), and the relay is turned off. The code then waits for 10 seconds before ending the function.

 

 

        if(message=="water garden"):
                print("Watering Garden")
                relay.on()
                time.sleep(2)
                relay.off()
                time.sleep(10)

 

 

Outside of the function we now move on to the code that calls the functions. First we create an object, “client”, in which we store the MQTT client. Then we assign our on_connect function to handle connecting, and the on_message function to handle receiving messages. We then connect to the MQTT network, specifying the IP address of the broker, which is this Pi Zero W, so we can use 127.0.0.1. Lastly we instruct MQTT to constantly loop and check for messages.

 

 

client = mqtt.Client()  
client.on_connect = on_connect  
client.on_message = on_message  
client.connect("127.0.0.1", 1883, 60)
client.loop_forever()

 

 

That's all of the code for this part of the project. Save the code and click on Run to test it. If all works correctly, the next step is to make the code executable and enable it to run when the Pi Zero W boots.

 

So how can we make it executable? There are two steps to take. First we need to add a line to the top of our Python code that tells the operating system where to find the Python interpreter.

 

#!/usr/bin/python3

 

With that complete, we now need to go back to the terminal and issue a command to make the Python file executable. The command is:

 

 

sudo chmod +x Garden-Watering-Device.py

 

 

Now in the same terminal, launch the project by typing

 

 

./Garden-Watering-Device.py

 

 

Now the project will run in the terminal, waiting for the correct message to be sent over MQTT.

 

So how can we have the code run on boot? Well this is quite easy really. In the terminal we need to issue a command to edit our crontab, a file that contains a list of applications to be run at a specific time/date/occasion. To edit the crontab, issue the following command in the terminal.

 

sudo crontab -e

 

If this is the first time that you have used the crontab, then it will ask you to select a text editor, for this tutorial we used nano, but everyone has their favourite editor!

 

With crontab open, navigate to the bottom of the file and add the following line.

 

@reboot /home/pi/Garden-Watering-Device.py

 

Then press Ctrl + X to exit, you will be asked to save the file, select Yes.

 

Power down the Pi Zero W, place it in a waterproof container along with a USB battery power source and the 12V circuit for our pump. Power up the Pi Zero W, and the first part of this project is complete. Time to move on to the Raspberry Pi 3 AIY Kit.

 

Raspberry Pi 3 AIY Kit

Now connect up your keyboard, mouse, HDMI, micro SD card, and finally power up the Raspberry Pi 3 AIY kit to the desktop.

 

Before we start any coding we need to install the Python3 MQTT library. So open a Terminal and type.

 

sudo pip3 install paho-mqtt 

 

 

After a few moments the software will be installed. Close the Terminal window.

Luckily for us, there is some pre-written code for this part of the project. To use it, click on the Dev Terminal icon on the desktop. This launches a special version of the Terminal, with all of the software setup completed, enabling us to use the AIY software. With the terminal open, type

 

 

cd src/

 

 

Inside the src directory there are a number of files, but in particular we are interested in cloudspeech_demo.py. Before any changes are made, make a backup of the file just in case!

 

cp cloudspeech_demo.py cloudspeech_demo_backup.py

 

So now that we have a backup of the code, we need to edit the original file. For this we used IDLE3; to open the file, type

 

idle3 cloudspeech_demo.py

 

Inside the file we need to make a few additions. Firstly we need to add two extra libraries: time, to control the pace of the project, and the MQTT library.

Add these to the imports.

 

import time
import paho.mqtt.client as mqtt

 

 

Just after the imports we need to add a function that will handle connecting to our MQTT network. This will return a result code; 0 means we are connected with no issues.

 

def on_connect(client, userdata, flags, rc):  
        print("Connected with result code "+str(rc))

 

The next section to edit is the main() function, which is used to detect voice input using a recognizer. This listens for audio, records it, and then translates it using the cloud. Let's add another recognizer phrase that will listen for “water the garden”.

 

    recognizer.expect_phrase('water the garden')

 

Still inside the main function, we now move into a series of if..elif conditional statements. You can see the final elif is a test to see if the word “blink” has been recognised. After this elif, create a new elif test, this time it will check to see if the phrase “water the garden” has been spoken.

 

            elif 'water the garden' in text:



When this phrase is recognised, we print to the Python shell that watering has started; this is a debug step that can be left out. We then create an object called “client” that stores the MQTT client, and assign the on_connect function we created earlier to handle the connection.

 

                print('Watering Garden')
                client = mqtt.Client()
                client.on_connect = on_connect

 

Next we connect to the broker. In this case our Pi Zero W is the broker, so we need to know its IP address. We connect on the default MQTT port, 1883, with a 60 second keepalive. We then publish the phrase “water garden” on the “garden” topic, which our subscriber, the Pi Zero W, is listening for.

 

                client.connect("BROKER IP ADDRESS", 1883, 60)
                client.publish("garden", "water garden")

 

Still inside the elif conditional test, we add a few lines of code that will turn on the LED inside the pushbutton that comes in the AIY kit. This will be a response to running the code and watering the garden. After a second we then turn off the LED ending the code for the conditional test, and the code for this part of the project.

 

 

                led.set_state(aiy.voicehat.LED.ON)
                time.sleep(1)
                led.set_state(aiy.voicehat.LED.OFF)
                time.sleep(1)

 

Save the code, and exit from IDLE3. We need to run the code from the dev terminal so type the following.

 

./cloudspeech_demo.py

 

When ready press the button and say the magic words “water the garden”. You should now see the text appear on the screen, and the LED flash once. The message “water garden” will be sent over MQTT to our Pi Zero W, and it will start to water the garden.

 

#ShareTheScare this Halloween

Visit our Halloween space and find out how to win one of two Cel Robox 3D Printers. There are two ways to win:

Share your Project and Win
#ShareTheScare Competition
The Ben Heck Show Episodes

 

Disclaimer: I’m an engineer, not a pro film maker. Be advised.

 

 

Dolls.

 

Why are dolls so scary to me? They watch you. Follow you. Walk around at night. Always evil. Always!

 

My fear must stem from a movie I saw as a child called “Dolls.” It frightened me so bad, I literally could not sleep, not even in the day! No other film did that to me. The Chucky series, Goosebumps episodes with the ventriloquist puppet, none of these scared me as a kid. It was something about that movie, Dolls… fuel for nightmares.

 

I’m not the only one. Dolls of various types always freak people out. Just take the movie “Annabelle” and the other prequels to “The Conjuring” horror film series, all of which feature the doll Annabelle.

 

To be honest, every single scary doll in anyone’s house I know… has been thrown out, burned or buried. Thank goodness!

 

However, for this project, there was a doll shortage. Who knew I would need one of those hideous things one day?

 

I tried many antique shops. But every freaky doll they had cost a fortune! Wouldn’t that be a double smack? I buy a 200 dollar doll, and it comes to life to get me! Luckily, some local resale stores had a few options. I found this one below… not as scary as I wanted it though.

 

I wanted to animate a doll to look like something left by a child on a porch. As someone approaches, it would slowly stand up. Guaranteed to freak everyone out. I had a few ways to do this in mind, but I thought the simple puppet on a string should do the trick.

 

In this project, we are going to talk about two important skills to learn: One – Raspberry Pi stepper motor control. Two – making a Scary Doll move. For an added bonus, we’ll add some scary sounds to go along with the doll moving.

 

 

Concept:

 

The doll’s movement is controlled by a stepper motor hidden behind it. The doll’s head is attached to a clear fishing line, going up to a pulley and back down to the motor on the ground behind it. Turning the motor in different directions moves the doll up and down: from lying, to standing, to floating.

 

When it comes to motion control of any type, especially at low speeds, stepper motors are the way to go. I know the doll isn’t all that heavy, but a stepper motor has the highest holding force of any motor type. So, accidental unwinding will not be an issue.

 

Also, it will help with situations where the doll stands up slowly.

 

Another useful feature of a stepper motor is that you can keep track of how far it turns. The stepper motor used for this project is 200 steps per revolution. So, let’s say it takes the motor 10 full rotations to raise the doll: that is 2000 steps. I can just send the stepper driver 2000 steps to stand up, then 2000 steps in the other direction to lay back down.
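The step arithmetic above can be written out directly (the 10-rotation figure is the article's example; your doll and pulley will differ):

```python
# Step counting for an open-loop stepper move:
STEPS_PER_REV = 200        # resolution of the motor used in this project
revs_to_raise = 10         # example: 10 full rotations to raise the doll

steps_up = STEPS_PER_REV * revs_to_raise
steps_down = steps_up      # same count in reverse lays the doll back down
print(steps_up)            # -> 2000
```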

 

I know some of you are worried about missing steps. That is definitely an issue if the stepper is under a load. If you take a look at my drink-mixing robot, the Drinkmotizer, missing steps was a major problem. Drinkmotizer featured a leadscrew, which counts as a load on the motor. Plus, it would get sticky from beverage fluid dripping on the leadscrew, so I experienced binding and missing steps too frequently.

 

However, with the Scary Doll, there is almost zero load on the motor; the doll is only a few ounces, after all. Unless someone pulls on the string holding the doll, missing steps will not be a problem.

 

What would stop the stepper motor from spinning? How do you control the motion?

 

To do this, I wanted to set virtual limits in the software. Typically, CNC or motion control devices have physical limits. When a carriage reaches a certain point, it presses a button, and the software interprets that as a limit – stopping all motion in that direction. However, with the doll, I thought that might be too hard to implement. So, I would set limits virtually.

 

The user moves the doll to one point and presses a button to set a limit, then moves it to another point and presses a button for the second limit. The software then does not allow the motor to turn outside those bounds. This way, you can create canned cycles that stay within a certain distance envelope.
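The virtual-limit idea can be sketched in a few lines of Python. This is an illustrative model, not the project's actual code: positions are counted in steps, two button presses record the envelope, and any requested move is clamped to stay inside it.

```python
# Illustrative sketch of "virtual limits": two recorded positions
# define an envelope, and motion requests are clamped inside it.
class VirtualLimits:
    def __init__(self):
        self.position = 0   # current position, in steps
        self.low = None
        self.high = None

    def set_limit(self):
        # First button press records one limit, second press the other.
        if self.low is None:
            self.low = self.position
        else:
            self.low, self.high = sorted((self.low, self.position))

    def move(self, steps):
        # Clamp the target so the motor never leaves the envelope.
        target = self.position + steps
        if self.high is not None:
            target = max(self.low, min(self.high, target))
        self.position = target
        return self.position

axis = VirtualLimits()
axis.set_limit()                 # limit 1 recorded at position 0
axis.position = 2000             # user jogs the doll up...
axis.set_limit()                 # ...and records limit 2 at 2000 steps
assert axis.move(500) == 2000    # request past the top is clamped
assert axis.move(-3000) == 0     # request past the bottom is clamped
```

On the real hardware, each clamped move would then be issued to the Gecko driver as that many step pulses.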

 

 

BOM/Parts:

Scary Doll

Fishing Line

1x Low current Stepper motor

Gecko 210X Stepper Controller

Raspberry Pi 3

Speakers with Audio Input

4x Momentary Push buttons

1x Full-size breadboard

 

Schematic and design:

 

 

The actual build, setup away from the doll.

 

Gecko stepper settings

 

 

Code - How the code works:

 

In the main loop, button presses are checked. When a movement button is pressed, a direction is set based on the corresponding button: either clockwise or counterclockwise. This is set by the direction pin of the Gecko stepper motor controller; from our Raspberry Pi, we use the GPIO to write that pin 0 or 1. When the button is pressed, we jump into a routine called rampUP(), which incrementally increases the speed of the stepper motor to the full speed the user sets, based on the time between pulses. This ensures smooth operation of the motor: steppers do not like to go from 0 rpm to a fast speed without gradually accelerating, and if driven that way they will most likely stall. Low speeds can be started without ramping up. Voltage applied to the motor windings is also a factor; the Gecko 210X has a voltage input range of 18VDC to 80VDC.

 

The higher the voltage, the better the motor can reach a higher speed without stalling. One of the first things the rampUP() routine does is enable the driver by setting the enable pin to 1. We only want the driver enabled when the stepper is about to move, or else the motor will heat up unnecessarily while idle. Before the motor moves, the music starts playing via a call to the mpg123 player using os, playing an mp3 in the same folder as the .py file. This plays out of the audio jack of the Raspberry Pi, which you can hook up to a speaker with an AUX cable. The motor ramps up from a long starting low pulse that decreases over time, so the transitions between high and low pulses get faster, making the motor turn faster. The high pulse is a very short, static time that does not change; the speed is controlled by the duration of the low portion of the pulse.
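The ramp-up scheme can be sketched as a schedule of low-pulse delays. The numbers below are illustrative, not the author's actual values: the delay starts long (slow stepping) and shrinks linearly toward the target each step.

```python
# Sketch of a linear ramp-up schedule: the high pulse stays fixed
# and short, while the low-pulse delay shrinks toward the target.
def ramp_delays(start_low=0.005, target_low=0.001, steps=20):
    """Return the low-pulse delay for each ramp step, slow to fast."""
    decrement = (start_low - target_low) / steps
    return [start_low - decrement * i for i in range(steps)]

delays = ramp_delays()
assert delays[0] == 0.005                              # slowest first step
assert all(a > b for a, b in zip(delays, delays[1:]))  # steadily faster
```

On the Pi, each value would be the time.sleep() between step pulses written to the Gecko's step pin, after which the motor runs at the steady target delay.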

 

The number of steps is hard-coded to reel the fishing line in to a certain limit and then let it back out to the same point it started at: driving the motor one way a certain number of steps, then reversing direction for the same number of steps. This moves the doll from a lying position up to a raised position and back down again. The doll pauses for 3 seconds standing or crawling. Moving the doll across the floor and then standing it up required letting more fishing line out on the pulley and setting the doll farther from the fulcrum.



 

Difficulties:

- Keeping the fishing line wrapped around the shaft coupler and taut to the doll. Something like a fishing rod spool would probably solve this issue.

- Keeping the doll from spinning too much. It would rotate on the fishing line. The only way around this would be a two line system to prevent that.

 

Other uses of the system:

- Moving a doll isn’t the only option. You could lift much larger objects, skeletons and ghosts come to mind. Or something smaller like fake bugs.

- This tutorial shows you how to turn a stepper motor. Anything CNC is possible. Linear stages, CNC router, etc.

 

If I had more money/time:

- I would love to animate more of the doll or puppet. Almost like a marionette with no puppeteer. The almost natural movement of arms and such, I imagine, would be very creepy.

- Find a better, scarier, doll for the project.

- Film in front of a porch with people walking up

 

Cabe

http://twitter.com/Cabe_Atwell
