
Foginator-Banner-004.jpg

 

With the month of October almost here, I have kicked these Halloween projects into high gear. I previously said that update number four would be all about lighting, but since I just finished the audio portion of Project Trick or Trivia, I thought this would be a good time to tackle the audio portion of the Foginator 2000. Since Trick or Trivia will be playing triggered audio events based on the buttons, I did not want to muddy its dynamic soundstage with too much audio, so the Foginator 2000 will only play ambient audio. I may go back and add a single triggered event, some sort of greeting that would play either when the fog triggers or right after it finishes. In this update, I am going to show you how I got the audio portion of this project up and running.

 

If you follow my Trick or Trivia project, you will find that much of this installment is the same. The audio needs of both projects are quite similar, and I did not feel the need to reinvent the wheel for this update. With that said, I had originally planned on running the background / ambient audio from within the same Python script as the main program, but I slowly realized that this was not needed. After several hours of experimenting with the audio on Project Trick or Trivia, I decided that the best route for always-on ambient audio was a separate Python script that plays the ambient loop when the Raspberry Pi boots up.

 

The Hardware

 

Below you will see a list of the hardware used to build out the audio portion of this project. In addition to these components, you will need the following tools: a soldering iron, solder, flush cutters, wire strippers, 3-10 feet of 2-pair cable, and a 6-inch or longer 3.5mm to 3.5mm audio extension cable.

 

Newark.com

Newark Part No. | Notes | Qty | Manufacturer / Description
38Y6467 | RPi | 1 | RASPBERRY PI 2, MODEL B
38Y6470 | SD Card | 1 | RASPBERRY PI 8GB NOOBS MICRO SD CARD
44W4932 | PSU | 1 | POWER SUPPLY 5V, 1A
06W1049 | USB Cable | 1 | USB A PLUG TO MICRO USB B PLUG
53W6285 | WiFi Dongle | 1 | ADAFRUIT USB WIFI MODULE
40P1184 | Speaker | 1 | VISATON SPEAKER, 20 kHz, 8OHM, 4W
49Y1712 | 7-Inch Touch Screen | 1 | Raspberry Pi 7" Touch Screen Display

MCM Electronics

MCM Part No. | Notes | Qty | Manufacturer / Description
28-12812 | Audio Amp | 1 | Audio Amplifier Kit 2 X 5W RMS

 

 

Building the Velleman 2x5W Amplifier

 

 

One of the major things that I have learned from being in the Haunted Attraction industry is that lighting and sound are two of the biggest “make it or break it” features of a successful prop. When I was putting together the kit for this project, I knew I wanted audio to be a big part of the project. The Raspberry Pi makes it quite easy to add audio to a project, but unfortunately, unless your project makes use of earbuds, you will need to add an amplifier to the project to drive more powerful speakers.

 

20150928_203121_HDR.jpg

 

For this project, I chose the Velleman 2x5W Amplifier kit from MCM Electronics. This kit is designed for even the most novice maker to assemble, and it’s quite powerful for its small size. I also chose a single small three-inch, eight-Ohm speaker from Visaton. This speaker is a little undersized for this project and this amp, but it works just fine as long as you do not max out the amp’s volume control.

 

20150928_203803_HDR.jpg

 

The kit is very straightforward: it does not include any confusing, hard-to-identify parts, nor does it use any SMD parts that would be hard to solder. The toughest part to solder in the whole kit is the power indicator LED, as you need to bend it at a very specific point if you want to follow the build instructions word-for-word. I built the entire board in less than 10 minutes.

 

20150928_205244_HDR.jpg

 

I went slightly off script and soldered up several of the amp’s components at once. If you follow the directions, you will solder each type of component step by step. This was way too slow for me; I have hand-soldered so many SMD boards in the past few months that I can solder a through-hole board like this with my eyes closed.

 

20150928_210327_HDR.jpg

 

I finished up the board with a second round of soldering. This time I soldered the IC, and other large / heavy components. When soldering terminal blocks, ICs, and other components that are hard to keep in place, or that have several leads, I like to solder one of the leads on an end of the component first. This lets me lock the component in place, then I can use my fingers to re-align the part while re-heating that single solder joint.

 

20150928_210334_HDR.jpg

 

It’s hard to see in this photo, but I set the potentiometer all the way to the left, then placed the knob on it with the indicator dot down in the bottom left corner. This will place the dot almost perfectly opposite this position when the volume is turned to max.

 

20150928_210340_HDR.jpg

 

The one thing I always say about soldering is that flux is your friend. Velleman must know this as well because they coated the entire bottom of the PCB in a very sticky resin-based flux. I still used my flux pen on a per-joint basis as I like flux on the component leads I am soldering as well.

 

20150928_212852_HDR.jpg

 

Wiring up the speaker is pretty straightforward, as Visaton was kind enough to mark the leads with + and - symbols. For those wondering, the + lead is almost always the larger of the two. Rumor has it that this practice was first adopted in the automotive industry back in the 1970s. You will note that I used some spare two-conductor, shielded microphone wire. You can use any two-conductor wire you have; just pay attention to the polarity. The speaker will work even if it’s reversed, but the best sound quality comes from a properly wired speaker.

 

20150928_213444_HDR.jpg

 

Connect the other end of the speaker wire to the amp while paying attention to the polarity. You can also connect the power cable to the screw terminals on the left at this point. The amp requires a 6-14V, 1A DC power source. You can power it with an old 9V or 12V wall adapter, or even a 9-volt battery, but the battery will struggle to supply enough current to keep the amp at full capacity.

 

 

The Ambient Audio Code

 

 

To start off, let's quickly cover how the background / ambient audio works and how I set it up to begin when the Raspberry Pi boots. Below is the Python script that I wrote to play the mp3 file I selected as the ambient source. I have broken out each section and commented on what it does. You can download the code used in this tutorial from the GitHub repository for this project. The audio files are available for download from here. If you do not want to modify the code, create a folder in the Desktop directory called “audio” and move all three of the mp3 files into it.

 

 

To get started we need to import the pygame library. I know a lot of you would have liked to see me use OMXplayer, but there were some things I could not get to work as they should, and I just chose to use something I was familiar with instead.

 

import pygame







 

Next we need to define the path to the ambient.mp3 file, and give it a name.

 

audio_path = '/home/pi/Desktop/audio/ambient.mp3'







 

Now we need to set a variable to True.

 

var = True







 

Now we need to write a while-loop to play our mp3 file, and set it to only play if var is equal to True.

 

while var == True:







 

Now we need to initialize PyGame.

 

    pygame.mixer.init()







 

Then we need to load the MP3 file we want to play.

 

    pygame.mixer.music.load(audio_path)







 

Now we need to set the pygame player’s volume. The range is 0.0 to 1.0, so a setting of 0.5 would be halfway.

 

    pygame.mixer.music.set_volume(1.0)







 

Finally, we need to tell pygame to play the MP3 file and repeat it five more times (pygame’s loops argument is the number of extra repeats after the first play).

 

    pygame.mixer.music.play(5)







 

The full code is pasted below. Alternatively, you can download the code used in this tutorial from the GitHub repository for this project. The audio files are available for download from here. If you do not want to modify the code, create a folder in the Desktop directory called “audio” and move all three of the mp3 files into it.

 

import pygame
import time

audio_path = '/home/pi/Desktop/audio/ambient.mp3'

var = True

while var == True:
    pygame.mixer.init()
    pygame.mixer.music.load(audio_path)
    pygame.mixer.music.set_volume(1.0)
    pygame.mixer.music.play(5)
    # Wait for playback to finish; without this the loop would
    # immediately restart the file on every pass.
    while pygame.mixer.music.get_busy():
        time.sleep(1)







 

 

Navigate to the project files folder and then open a new file called ambient.py in the Nano text editor by entering the following command:

 

sudo nano ambient.py

 

Then copy and paste the code above into the file. Save and exit, and then use the following command to test the script.

 

sudo python ambient.py

 

You should hear the ambient.mp3 file begin to play if you have the amplifier / speaker combo we just built hooked up via a 3.5mm to 3.5mm audio cable from the amp to the Raspberry Pi. To get this python script to run on boot, we need to add it to the Raspberry Pi’s crontab. Enter the following command in the terminal to create a new crontab entry.

 

sudo crontab -e

 

Now paste the following line at the bottom of the crontab.

 

@reboot sudo python /home/pi/Desktop/Foginator2000/ambient.py







 

Then save and exit the file. Reboot the Raspberry Pi using the command below. When the Pi reboots, you should hear ambient.mp3 playing after you log in.

 

sudo reboot

 

If the audio is quite low despite the amplifier’s volume being maxed out, you will need to turn the Raspberry Pi’s volume up. This is as simple as entering the following command into the terminal.

 

amixer cset numid=1 -- 400

 

The range of amixer’s volume is -10200 to +400, in centi-dB units. Since we are using an external amplifier, we can set the Raspberry Pi’s volume to its maximum of +400 and adjust the volume on the amp accordingly. Once you have the volume set, you should be able to reboot the Pi, and the ambient audio will begin playing when you log in. I did not shoot a video for this installment, but if you check out the video below from my Trick or Trivia project, you will get the idea of what’s going on with the ambient audio.
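To make that range easier to work with, here is a small sketch that converts a friendlier 0-100 percent volume into the value passed to `amixer cset numid=1`. The function and its linear percent-to-centi-dB scaling are my own illustration, not part of amixer:

```python
# Map a 0-100 percent volume onto amixer's centi-dB range on the Pi
# (-10200 .. +400). The linear scaling is an assumption for
# illustration; perceived loudness is more complicated.
def percent_to_centidb(percent, lo=-10200, hi=400):
    percent = max(0, min(100, percent))           # clamp to 0-100
    return round(lo + (hi - lo) * percent / 100)  # scale into [lo, hi]

# e.g. percent_to_centidb(100) gives 400, the maximum used in this
# post; you would then run: amixer cset numid=1 -- <value>
```

Since we let the external amp handle the real volume control, in practice you will almost always want the full +400 here.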

 

 

Well, that is going to wrap up this week’s installment of the Foginator 2000 project. Check back in a few days for the next update, where I cover how to get individually addressable RGB LEDs working with the Raspberry Pi, and how they will be incorporated into this project! If you have not checked it out yet, head over to my other Halloween 2015 Raspberry Pi project, Trick or Trivia, which makes use of the new 7-inch Touchscreen LCD from Raspberry Pi.

 

Win this Kit and Build-A-Long


  1. Project Introduction
  2. Fog Controller Hardware and Test
  3. Environment Sensing Coding & Testing
  4. Ambient Audio Hardware and Coding
  5. Lighting Coding and Testing
  6. October 16th -  Final Assembly and Testing
  7. October 23rd - Project Wrap-up

Trick-or-Trivia-Banner003.jpg

 

Welcome back to the Trick or Trivia blog. In this installment, I am going to show you how I managed to get the audio portion of this project up and running. I hit a slight bump in the road shortly after sitting down to figure all of this out. I had planned on having several different layers of sound playing at once, but have only managed to get a background and a foreground layer working together, and I think that will be enough.

 

I had originally planned on running the background / ambient audio from within the same Python script as the main program, but I slowly realized that this was not only a bad idea, it simply might not work with the way I plan on randomizing questions. I will confess that I spent more than a few hours trying to get the ambient audio working in a subprocess, and several other parallel-processing methods, but failed miserably and decided to run the ambient audio another way. I took the easy route and wrote a separate Python script that plays the ambient audio loop when the Raspberry Pi boots up. By removing this process from my main TrickOrTrivia.py script, I was able to move on to getting the audio working with the buttons. Before we get into how I did that, let’s take a quick look at the hardware used in this installment of Trick or Trivia.

 

The Hardware

 

Below you will see a list of the hardware used to build out the audio portion of this project. In addition to these components, you will need the following tools: a soldering iron, solder, flush cutters, wire strippers, 3-10 feet of 2-pair cable, and a 6-inch or longer 3.5mm to 3.5mm audio extension cable.

 

Newark.com

Newark Part No. | Notes | Qty | Manufacturer / Description
38Y6467 | RPi | 1 | RASPBERRY PI 2, MODEL B
38Y6470 | SD Card | 1 | RASPBERRY PI 8GB NOOBS MICRO SD CARD
44W4932 | PSU | 1 | POWER SUPPLY 5V, 1A
06W1049 | USB Cable | 1 | USB A PLUG TO MICRO USB B PLUG
53W6285 | WiFi Dongle | 1 | ADAFRUIT USB WIFI MODULE
40P1184 | Speaker | 1 | VISATON SPEAKER, 20 kHz, 8OHM, 4W
49Y1712 | 7-Inch Touch Screen | 1 | Raspberry Pi 7" Touch Screen Display

MCM Electronics

MCM Part No. | Notes | Qty | Manufacturer / Description
28-12812 | Audio Amp | 1 | Audio Amplifier Kit 2 X 5W RMS

 

 

Building the Velleman 2x5W Amplifier

 

 

One of the major things that I have learned from being in the Haunted Attraction industry is that lighting and sound are two of the biggest “make it or break it” features of a successful prop. When I was putting together the kit for this project, I knew I wanted audio to be a big part of the project. The Raspberry Pi makes it quite easy to add audio to a project, but unfortunately, unless your project makes use of earbuds, you will need to add an amplifier to the project to drive more powerful speakers.

 

20150928_203121_HDR.jpg

 

For this project, I chose the Velleman 2x5W Amplifier kit from MCM Electronics. This kit is designed for even the most novice maker to assemble, and it’s quite powerful for its small size. I also chose a single small three-inch, eight-Ohm speaker from Visaton. This speaker is a little undersized for this project and this amp, but it works just fine as long as you do not max out the amp’s volume control.

 

20150928_203803_HDR.jpg

 

The kit is very straightforward: it does not include any confusing, hard-to-identify parts, nor does it use any SMD parts that would be hard to solder. The toughest part to solder in the whole kit is the power indicator LED, as you need to bend it at a very specific point if you want to follow the build instructions word-for-word. I built the entire board in less than 10 minutes.

 

20150928_205244_HDR.jpg

 

I went slightly off script and soldered up several of the amp’s components at once. If you follow the directions, you will solder each type of component step by step. This was way too slow for me; I have hand-soldered so many SMD boards in the past few months that I can solder a through-hole board like this with my eyes closed.

 

20150928_210327_HDR.jpg

 

I finished up the board with a second round of soldering. This time I soldered the IC, and other large / heavy components. When soldering terminal blocks, ICs, and other components that are hard to keep in place, or that have several leads, I like to solder one of the leads on an end of the component first. This lets me lock the component in place, then I can use my fingers to re-align the part while re-heating that single solder joint.

 

20150928_210334_HDR.jpg

 

It’s hard to see in this photo, but I set the potentiometer all the way to the left, then placed the knob on it with the indicator dot down in the bottom left corner. This will place the dot almost perfectly opposite this position when the volume is turned to max.

 

20150928_210340_HDR.jpg

 

The one thing I always say about soldering is that flux is your friend. Velleman must know this as well because they coated the entire bottom of the PCB in a very sticky resin-based flux. I still used my flux pen on a per-joint basis as I like flux on the component leads I am soldering as well.

 

20150928_212852_HDR.jpg

 

Wiring up the speaker is pretty straightforward, as Visaton was kind enough to mark the leads with + and - symbols. For those wondering, the + lead is almost always the larger of the two. Rumor has it that this practice was first adopted in the automotive industry back in the 1970s. You will note that I used some spare two-conductor, shielded microphone wire. You can use any two-conductor wire you have; just pay attention to the polarity. The speaker will work even if it’s reversed, but the best sound quality comes from a properly wired speaker.

 

20150928_213444_HDR.jpg

 

Connect the other end of the speaker wire to the amp while paying attention to the polarity. You can also connect the power cable to the screw terminals on the left at this point. The amp requires a 6-14V, 1A DC power source. You can power it with an old 9V or 12V wall adapter, or even a 9-volt battery, but the battery will struggle to supply enough current to keep the amp at full capacity.

 

 

The Ambient Audio Code

 

 

To start off, let's quickly cover how the background / ambient audio works and how I set it up to begin when the Raspberry Pi boots. Below is the Python script that I wrote to play the mp3 file I selected as the ambient source. I have broken out each section and commented on what it does.

 

 

To get started we need to import the pygame library. I know a lot of you would have liked to see me use OMXplayer, but there were some things I could not get to work as they should, and I just chose to use something I was familiar with instead.

 

import pygame





 

Next we need to define the path to the ambient.mp3 file, and give it a name.

 

audio_path = '/home/pi/Desktop/audio/ambient.mp3'





 

Now we need to set a variable to True.

 

var = True





 

Now we need to write a while-loop to play our mp3 file, and set it to only play if var is equal to True.

 

while var == True:





 

Now we need to initialize PyGame.

 

    pygame.mixer.init()





 

Then we need to load the MP3 file we want to play.

 

    pygame.mixer.music.load(audio_path)





 

Now we need to set the pygame player’s volume. The range is 0.0 to 1.0, so a setting of 0.5 would be halfway.

 

    pygame.mixer.music.set_volume(1.0)





 

Finally, we need to tell pygame to play the MP3 file and repeat it five more times (pygame’s loops argument is the number of extra repeats after the first play).

 

    pygame.mixer.music.play(5)





 

The full code is pasted below. Alternatively, you can download the code used in this tutorial from the GitHub repository for this project. The audio files are available for download from here. If you do not want to modify the code, create a folder in the Desktop directory called “audio” and move all three of the mp3 files into it.

 

import pygame
import time

audio_path = '/home/pi/Desktop/audio/ambient.mp3'

var = True

while var == True:
    pygame.mixer.init()
    pygame.mixer.music.load(audio_path)
    pygame.mixer.music.set_volume(1.0)
    pygame.mixer.music.play(5)
    # Wait for playback to finish; without this the loop would
    # immediately restart the file on every pass.
    while pygame.mixer.music.get_busy():
        time.sleep(1)





 

 

Navigate to the project files folder, then open a new file called ambient.py in the Nano text editor by entering the following command:

 

sudo nano ambient.py

 

Then copy and paste the code above into the file. Save and exit, and then use the following command to test the script.

 

sudo python ambient.py

 

You should hear the ambient.mp3 file begin to play if you have the amplifier / speaker combo we just built hooked up via a 3.5mm to 3.5mm audio cable from the amp to the Raspberry Pi. To get this python script to run on boot, we need to add it to the Raspberry Pi’s crontab. Enter the following command in the terminal to create a new crontab entry.

 

sudo crontab -e

 

Now paste the following line at the bottom of the crontab.

 

@reboot sudo python /home/pi/Desktop/TriviaScrips/ambient.py





 

Then save and exit the file. Reboot the Raspberry Pi using the command below. When the Pi reboots, you should hear ambient.mp3 playing after you log in.

 

sudo reboot

 

If the audio is quite low despite the amplifier’s volume being maxed out, you will need to turn the Raspberry Pi’s volume up. This is as simple as entering the following command into the terminal.

 

amixer cset numid=1 -- 400

 

The range of amixer’s volume is -10200 to +400, in centi-dB units. Since we are using an external amplifier, we can set the Raspberry Pi’s volume to its maximum of +400 and adjust the volume on the amp accordingly.

 

 

Correct and Incorrect Answer Audio

 

 

I plan on changing out the audio files used for the correct and incorrect answer triggers, but for now the two that are in the download will work just fine. I wish I had time to get the Correct and Incorrect audio recorded and mastered, but I have not had time to sit down and hook up my recording gear.

 

The first thing we need to do is open the TrickorTrivia.py script and make some modifications. This script is the same as the one we used in the last installment of this tutorial, but has a new name. With that said, I am not going to go over every single line of code, just the few bits and pieces we need to add.

 

The first modification is to import the pygame library.

 

import pygame





 

Now we need to define two audio paths and give them names.

 

correct_audio_path = '/home/pi/Desktop/audio/correct.mp3'
incorrect_audio_path = '/home/pi/Desktop/audio/incorrect.mp3'





 

Finally, we need to edit the blink_led functions to make the LED illuminate once when the correct or incorrect answer is selected. We also need to add a few lines to play the correct and incorrect answer audio files. For a more detailed breakdown of this audio code, see the ambient code earlier in this post.

 

def blink_led():
    # Light the green LED, play the "correct" clip, then clean up and exit
    while True:
        GPIO.output(26, True)
        pygame.mixer.init()
        pygame.mixer.music.load(correct_audio_path)
        pygame.mixer.music.set_volume(1.0)
        pygame.mixer.music.play(5)
        time.sleep(10)
        GPIO.output(26, False)
        GPIO.cleanup()
        pygame.quit()
        sys.exit()





 

def blink_led_2():
    # Light the red LED, play the "incorrect" clip, then clean up and exit
    while True:
        GPIO.output(19, True)
        pygame.mixer.init()
        pygame.mixer.music.load(incorrect_audio_path)
        pygame.mixer.music.set_volume(1.0)
        pygame.mixer.music.play(5)
        time.sleep(10)
        GPIO.output(19, False)
        GPIO.cleanup()
        pygame.quit()
        sys.exit()





 

That’s all that we have to modify in this script to get the audio files playing when a button is pressed. When the correct answer is chosen, the green LED will illuminate, the correct.mp3 file will play, and when done, the LED will turn off and the script will exit. The same goes for the incorrect answer. The red LED will illuminate, the incorrect.mp3 file will play, and then when finished the LED will turn off and the script will exit. Below is the full code.

 

To test if the code works, enter the Desktop GUI by typing the following command:

 

startx

 

Then open LXTerminal, and navigate to the folder containing the TrickorTrivia.py script.

 

cd /home/pi/Desktop/TriviaScrips

 

Then run the TrickorTrivia.py script with the following command:

 

sudo python TrickorTrivia.py

 

This should open the trivia interface; when you select an answer, the corresponding LED should light up and the matching audio file should play.

 

 

I really wish I could have gotten the ambient audio working in a subprocess, but it will work just fine the way I set it up, running on boot via crontab. I will admit that the speaker is a little underpowered, and I am not using the amp to its full potential. This is an easy fix if you have an old set of small bookshelf speakers lying around. That is going to wrap up this installment of Project: Trick or Trivia. Check back in a few days for the next installment. Until then, remember to Hack The World, and Make Awesome!

 

 

Win this Kit and Build-A-Long

 

  1. Project Introduction

  2. Building The Trivia Interface

  3. Interfacing Ambient and Triggered Audio Events
  4. Building The Candy Dispenser & Servo Coding
  5. Carve Foam Tombstone
  6. October 24th -  Assembly and Testing
  7. October 28th - Project Wrap-up

Background

In this tutorial series, we are going to learn C# using the Raspberry Pi 2 and WinIoT. The goal will be to learn about the C# language by building projects with it. So, think about it as learning through building. As we progress through the tutorial, the projects will grow in complexity and we'll start to create some pretty amazing things.

 

Why C#?

C# is a very popular language, and for good reason. It allows for complex programs to be simply and clearly stated. This allows you, as the developer, to spend more time concentrating on the problem that you wish to solve and less time writing/reading code.

 

Why Raspberry Pi 2?

The Raspberry Pi 2 is a low-cost, powerful board. The low cost makes it very approachable for hobbyists, and the power that it has makes it applicable to a wide range of projects.

 

Why WinIoT?

Windows is the operating system that most people are familiar with because they use it every day. This familiarity lowers the barrier to entry and allows for new developers to get up and running quickly.

 

Setup

Getting the initial setup figured out can be the most difficult and frustrating part of getting started with a new board. Fortunately, getting set up with WinIoT on the RPi2 is pretty easy. There is a great tutorial on how to get started here:

 

http://ms-iot.github.io/content/en-US/win10/SetupPCRPI.htm

 

Once you have set up your PC and your RPi2, you can follow the example to blink an LED. Another option would be to write a classic Hello, World application:

 

Getting Started with Visual Studio

Visual Studio is a very, very powerful program. Unfortunately, with all of that power, sometimes it can become overwhelming. Don't despair though, after the initial shock wears off, it is really pretty easy to use.

 

When you first open Visual Studio, it will present you with a start page in the center of the screen. In the top left, there will be an option to create a “New Project...” Clicking on that will bring up the New Project window. In this window, you want to select the “Blank App (Universal Windows)” option. Then give it a name and click OK. Now if the “Blank App (Universal Windows)” option is not present, you can find it under Templates -> Visual C# -> Windows -> Universal, or by using the search box to search for “universal c#”.

 

Now, once you are in Visual Studio, let's take a quick tour:

VS-Tour.png

The first thing to point out is the Solution Explorer on the right hand side. That shows you the directory structure of the project that you are currently working on. Creating a new project automatically populates some basic information for you with common things that every project needs. You can go ahead and click around in there and see what there is to see. Don't worry if it is a little bit too much at this point. Most of the stuff in there is going to be the same for every project that we do, so we can safely ignore it and just use the defaults.

 

Double clicking on any file in the Solution Explorer will bring it up in the main view. (In the above screen shot, I have MainPage.xaml up in the main view.) There are two files to take a closer look at: MainPage.xaml and MainPage.xaml.cs. To get to MainPage.xaml.cs, just click the triangle on the left of MainPage.xaml in the Solution Explorer to expand/collapse the child elements. MainPage.xaml.cs is a child of MainPage.xaml.

 

When the MainPage.xaml is displayed, by default the screen will be split into a design view and a XAML view. The XAML view shows the source code, which looks a lot like XML. The design view will give you an idea of what it will look like when it is displayed on the screen.

 

XAML is the language that is used to describe what the UI will look like. It's a concise way to tell the computer what goes where, and how it should look. Everything that can be done in XAML can also be done in C#, but it is not nearly as neat and clean. The same XAML code will take many more lines of code in C#. XAML also helps separate how the information is displayed from the business logic. This turns out to be extremely helpful in reusing the same code for various different applications.

 

The MainPage.xaml.cs file has C# code in it. This should look a lot more familiar to you if you have seen other programming languages. C# is where all of the logic of the program will be written.

 

Next up on the tour is the Output window, located at the bottom. This window has a bunch of tabs that hold a lot of important information. For example, any compilation errors and warnings will be displayed in the “Error List”.

 

The last thing to discuss in the whirlwind tour is the tool bar at the top. Now, there is definitely a lot going on there, so let's just hit the high points. The most important one is the green triangle that looks like the play symbol. This starts up the application. (If you are into keyboard shortcuts, the keyboard shortcut is F5.) This will kick off the compilation process and upon successful compilation, it will start up the application in debug mode.


Now once the application starts up, the tool bars and windows will switch around. At first this can be a bit weird, but over time you will start to enjoy it because you have very different needs when you are writing and debugging code. This changing of the view really helps get the tools that you need at your fingertips for the job at hand.

 

When the application is running, a pause sign and a stop sign will appear on the tool bar. The stop sign will kill the application and get you back into writing code mode. The pause sign will stop the application and show you where in the code it currently is. This can be extremely helpful if something is running much longer than you expect it to and you want to get an idea of what is going on.

 

Ok, so that's the whirlwind tour. There is a lot more going on in there, but that should be enough to get going.

 

Hello, World

Now, to create our first application, take and place the code below between the <Grid ...> and </Grid> tags.
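As a minimal stand-in (my own sketch, not necessarily the author's exact markup), a TextBlock centered inside the Grid does the job:

```xml
<!-- Displays "Hello, World!" centered on the page -->
<TextBlock Text="Hello, World!"
           HorizontalAlignment="Center"
           VerticalAlignment="Center" />
```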

This piece of code displays the words “Hello, World!” in the middle of the screen.

 

To run the application, just click on the green play icon, and it should pop up a window with the words “Hello, World!” on it:

HelloWorld.png

Nice! Now, to run it on the Raspberry Pi 2, click on the drop down next to the play button that says “x86” and switch that to “ARM”. That drop down specifies the processor type. Your PC has an x86 processor in it, whereas the RPi2 has an ARM processor in it. The good news is that other than this drop down, all of the differences are abstracted away from us as the user! So, we can go on programming without worrying about the underlying processor type. Next, click on the expansion triangle to the right of the play icon (just past the word “Device”), and click on the “Remote Machine” option. This will bring up a “Remote Connections” window. In there will be a section titled “Auto Detected”, and within that section should be your RPi2 (mine is called “minwinpc”). Click on that and then hit “Select”. Now when you hit play, the program will be running on the RPi2!

 

GitHub

The full source code for this tutorial is also on GitHub:

 

https://github.com/oneleggedredcow/RPi2-Tutorial

Note: This is part 1 of a 2-part series on an experimental user interface

Part 1: - You are here! <---

Part 2: - Not yet written! Bookmark to be notified here when it is available

 

Note: The technique described here is experimental and should be restricted to the sole purpose of UI implementation, and the browser files should be local. Security considerations beyond the scope of this blog post would need to be taken into account if the pages accessed by this technique were remote and accessed via a network.

 

Introduction

This blog post is about a user interface based on web technologies including HTML.

 

It enables the use of typical hardware (such as buttons, LED displays, TFT LCD displays and so on) for the user interface, but with all software control (including display information, text, graphics, styles and user interface response behaviour) implemented in HTML and any other desired web technologies such as JavaScript.

 

The aim is that the same code can be used to build very different looking user interfaces. Furthermore, the same code could also be used to provide remote management from a browser. The views can be displayed differently to suit the hardware. With a large-screen TFT LCD the user’s experience will be different from that of (say) a 16-character single-line display, but both should be possible with the same code.

 

An Example – Home Lighting Controller

Just as an example, a low-cost home lighting controller with constrained user interface hardware may have just push-buttons to switch on/off lights in different rooms of the house, and LEDs (one LED per room) to indicate which rooms are switched on.

html-ui-lighting-controller-example2.jpg

 

A more advanced (and expensive) home lighting controller with a TFT LCD and touch-screen may display a plan view of the home with rich graphics, a slider to select the floor of the home, and allow users to touch the desired room and show in a colored graphical image if lights are switched on or off.

 

If network connectivity exists then a web browser-based interface for the lighting controller may implement security including a password, and have a mode with simpler graphics and text so that it can be used from a smart phone.

 

The desire is to allow all of these methods of control to be possible (in a single way) for a scalable experience to suit the hardware that exists. It basically extends the usual web browser techniques to work with diverse input/output devices.

 

Why do this with HTML?

HTML is great for providing information; scripts or programs (along with HTTP) provide the interactivity. In the days when graphics were not always possible, simple text-only browsers such as Lynx existed to provide access to HTML content: the HTML file was rendered as text only, and any graphics were ignored. The precedent therefore already exists for HTML on constrained displays. (There are also plenty of Internet of Things projects today that extract HTML content and send it to an LCD character display on wireless nodes, but they often rely on functions such as string extraction for extremely basic HTML parsing, with no scalable experience, and the interaction is basic.)

 

HTML and associated technologies such as CSS and JavaScript are attractive because they are fundamental to all web pages and therefore there is a possibility that programmers will have encountered them to some degree. No need to learn Python and other languages or technologies if you don’t want to. It also means that debugging can be simple; just run the code in a browser! There is no need to have access to the physical hardware until you are ready for it. In a team, it simplifies allowing some people to work on physical hardware while others work on low-level interfacing in software and others can work on the user interface logic and application logic.

 

And to be honest another reason to do this was partly curiosity: how feasible is it to build an interface using HTML and associated web technologies for a device with constrained input/output for the user interface, and could it be responsive?

 

Design Overview

The heart of the design involves the need to be able to interpret HTML and JavaScript and any additional libraries. A web browser engine is used to perform these tasks. This allows the user interface programmer to create HTML-based content just like any other web project. The difference is that input and output from the web browser engine is not necessarily targeted at a graphical desktop and keyboard/mouse. Instead, for display output, HTML elements and JavaScript variables can be read from the page at any time to directly control any desired output device. For user input, push-buttons or any other sensor data can be pushed into the browser engine at any time. Creating such a design involves constructing a usable workflow for these output and input tasks respectively. The workflows that have been prototyped are not necessarily the best ways but they function; there is plenty of scope for improvement, as I only had limited time to think up a prototype.

 

The diagram below shows how the prototype works. All user interface related content is contained on the right side of the diagram in HTML files. This includes all text strings, images, color schemes, layout, button press rules and desired output events.

 

All hardware interfacing and all other functions that the system needs to support are contained on the left side of the diagram. Any programming language(s) could be used; in this example JavaScript was chosen, executing on a software platform called Node.js.

 

The remainder of this blog post will refer to the right side and the left side of this diagram frequently.

html-ui-software-architecture.png

 

Note that although the right side of the diagram shows JavaScript, this is unrelated to the main program running on the left side, which also happens to use JavaScript in this prototype. The JavaScript on the right side is exclusively intended for user interface related activities, and it runs on the separate web browser engine called PhantomJS. The left and right sides operate like ‘ships that pass in the night’ with minimal interaction.

 

The two sides of the diagram interact through an application programming interface (API) available to Node.js that can remotely influence and read activity occurring on the right side: events, variables or other content can be pushed to PhantomJS by instructing it to execute any JavaScript function defined on the right side. In a similar vein, the left side can peek inside PhantomJS and read any JavaScript variables at any time.

 

For an example application, a simple procedure was devised for the left and right sides to communicate through this remote function execution and variable peeking method, and it is discussed next. But first, a brief note about the software libraries and platforms used in this project: Node.js is fast becoming a reasonably mature platform for projects; however, the module that connects it to PhantomJS, and PhantomJS itself, are not designed for implementing user interfaces. They are designed for testing web pages, so they are being used beyond their original scope. It is hoped that improvements can occur over time as these platforms continue to be developed.

 

A Worked Example: LED Toggling

It is easier to see how all this works by examining what needs to be done for a simple project consisting of two LEDs and a single button. The project was tested with a Raspberry Pi. The idea was that the button would be used to alternately turn on the LEDs.

html-ui-hardware.jpg

 

The HTML File

The complete HTML file for the right side of the diagram is shown below (also available on GitHub). This single 45-line file contains the entire logic for the application!

The blue and red circled items are the API which we will define between the left side and the right side.

html-code-example-annotated.png

 

We can define the API as follows:

html-api.png

 

That’s it; the remainder of the HTML file contains a bit of test code that could be used to test the logic in a web browser using a debugger. No Raspberry Pi or any other hardware is required to test it.
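As an illustration only, the logic the circled API implies can be sketched in plain JavaScript. The task variable and the buttonPress() and reset() names follow the API above; the body of each function is my own assumption, and the actual 45-line file on GitHub may differ:

```javascript
// Sketch of the right-side (HTML file) logic implied by the API.
// 'task' is the variable the left side polls; buttonPress() and reset()
// are the functions the left side invokes via page.evaluate.
var task = "null";   // "null" means no pending task for the left side
var ledState = 0;    // tracks which LED was lit last

function buttonPress(param) {
  // Each press alternates which LED should be turned on next
  ledState = 1 - ledState;
  task = (ledState === 1) ? "led1on" : "led2on";
}

function reset() {
  // Called by the left side once it has finished the requested task
  task = "null";
}
```

Because this is plain JavaScript with no hardware dependencies, the alternation logic can be exercised in any browser’s debugger before the Raspberry Pi is ever involved.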

 

The Node.js Side

The left side handles the invocation and interactivity with the right side and all interaction with hardware.

 

The code is attached to this post (it is very short, less than 100 lines), but the key snippets of it are described here.

 

The hardware interface is defined here, in terms of general purpose input/outputs (GPIO):

// Inputs
var button1=17; // GPIO17 is pin 11
var button1timer=0;

// Outputs
var led1=27; // GPIO27 is pin 13
var led2=22; // GPIO22 is pin 15
  

 

For more information on Raspberry Pi GPIO, see the Raspberry Pi GPIO Explained guide

 

The variable button1timer is used for implementing a software debounce.

 

The next step is to enable the inputs and outputs:

pi.setup('gpio');
pi.pinMode(button1, pi.INPUT);
pi.pinMode(led1, pi.OUTPUT);
pi.pinMode(led2, pi.OUTPUT);
  

 

The right side is instantiated using the following code:

phantom.create(function(ph) {
  console.log("bridge initiated");
  ph.createPage(function(page){
    page.open("file:///home/pi/development/uitest/index-simple.html", function (status) {
      console.log("page opened");
    });
  });
});
  

 

The code above launches the HTML file described earlier into the web browser engine instance. The HTML file is now running in the browser engine.

 

The debounce is handled as follows: First, a function is registered to be called automatically whenever there is a falling edge on the button input (logic level 0 means the button is pressed, logic level 1 means the button is unpressed).

pi.wiringPiISR(button1, pi.INT_EDGE_FALLING, button1int);
  

 

Whenever a falling edge is seen, the button1int function is called:

  function button1int(param)
  {
    if (button1timer==0){
      button1timer=setTimeout(button1debounce, 20, param);
    }
  }
  

 

The code above registers a function called button1debounce to be executed in 20 milliseconds. This is a switch debounce period (see the Raspberry Pi GPIO Explained guide for an explanation on debouncing).

 

Here is the button1debounce function:

  function button1debounce(_param)
  {
    if (pi.digitalRead(button1)!=0) {
      // button press was too short so we abort
      button1timer=0;
      return;
    }
    // call buttonPress() inside the browser engine (the right side)
    _page.evaluate(function(param) {
      buttonPress(param);
    }, function(result) {console.log("buttonPress done"); }, _param);
    do_task();
    button1timer=0;
  }
  

 

Three key things can be observed in the code above. First, as expected, the button gets debounced. Second, examine the _page.evaluate section: it is responsible for calling buttonPress(), which is the function in the right-side HTML file! This is an example of interaction between the left side and the right side. Finally, a do_task() local function gets executed, which is responsible for finding out from the right side if there is any task to perform.

 

The major take-away from this section is that there is no application logic at all. All of that was contained in the HTML file!

 

The do_task function just dumbly does whatever the right side wants it to do. The _page.evaluate function is now used to query the task variable from the right side; its value is used to light up the appropriate LED, and then the right-side reset() function is called to let the right side know that the task is complete.

  function do_task()
  {
    _page.evaluate(function () { return task; }, function (taskname) {
      if (taskname=="null")
      {
        // do nothing
      }
      else if (taskname=="led1on")
      {
        pi.digitalWrite(led1, 1);
        pi.digitalWrite(led2, 0);
        _page.evaluate(function() { reset(); }, function(evresult) {} );
      }
      else if (taskname=="led2on")
      {
        pi.digitalWrite(led1, 0);
        pi.digitalWrite(led2, 1);
        _page.evaluate(function() { reset(); }, function(evresult) {} );
      }
    });
  }
  

 

 

Setting up the Raspberry Pi

A folder is needed for development. One way is to create a main folder for all development work, and create a sub-folder called uitest for this user interface test. From the home folder (/home/pi in my case) it is possible to type the following:

mkdir -p development/uitest
  

 

In order to run the code, install Node.js on the Raspberry Pi using the following commands:

wget http://node-arm.herokuapp.com/node_latest_armhf.deb
sudo dpkg -i node_latest_armhf.deb
  

 

Next, in the development/uitest folder, obtain phantomjs:

git clone https://github.com/piksel/phantomjs-raspberrypi.git
cd phantomjs-raspberrypi/bin
sudo cp phantomjs /usr/bin
sudo chmod 755 /usr/bin/phantomjs
  

 

Now some Node.js modules need installing:

npm install wiring-pi
npm install phantom
  

 

Transfer the index-simple.js and index-simple.html files into the development/uitest folder and then mark the .js file as an executable:

chmod 755 index-simple.js
  

 

Now the code can be run as follows:

sudo ./index-simple.js
  

 

Here is a short (1 minute) video of the functioning project. It can be seen that, with the limited functionality of this project, the interface is very responsive. The responsiveness with more advanced projects is for further study.

 

Summary and Next Steps

It is possible to code the entire user interface logic (and application logic if desired) in an HTML file (known as the ‘right side’ with reference to the earlier diagram) using everyday HTML and JavaScript. This is powerful because it means that tasks can be nicely decoupled during project development. A simple API can be devised that allows the HTML file to work with the hardware interfacing code, and a simple example was shown that allowed LEDs to be alternately controlled using a push-button.

 

The left side was responsible for low-level functions such as hardware control and button debouncing and for establishing communication with the right side which did everything else.

 

As next steps, it would be interesting to exercise the right-side web browser engine more deeply, with the left side driving more advanced hardware such as a graphic display, in order to create a richer user interface.

 

The full source code is available on GitHub.

 

Stay tuned for Part 2.


Foginator-Banner-003.jpg

 

Welcome to installment #003 of Project: Foginator 2000, part of the 2015 Raspberry Pi Halloween Project series here at Element14. In this week's episode I am going to cover the basics of getting the Raspberry Pi Sense Hat up and running, along with a very light tutorial on how to push its sensor data to the cloud in order to record and analyze it. The cool thing about this is that we can simply save all of our acquired data to the cloud, and access it from anywhere!

 

The Hardware

 

Below is a table containing the parts you will need for this segment of the project. In addition to these parts you will need to connect the Raspberry Pi to the internet, either via a WiFi dongle or a wired Ethernet connection.

 

Newark Part No.

Notes

Qty

Manufacturer / Description

38Y6467

RPi

1

RASPBERRY PI 2, MODEL B

38Y6470

SD Card

1

RASPBERRY PI 8GB NOOBS MICRO SD CARD

44W4932

PSU

1

USB PORT POWER SUPPLY 5V, 1A

06W1049

USB Cable

1

USB A PLUG TO MICRO USB B PLUG

53W6285

WiFi Dongle

1

USB WIFI MODULE

49Y7569

RPi Sense Hat

1

RASPBERRY PI SENSE HAT

 

 

The Theory

 

 

The idea behind this part of the project is to log environmental data from the immediate area surrounding the fog machine. Using the Raspberry Pi Sense Hat we will measure and log temperature, humidity, and barometric pressure, and then push the data up to the cloud. We will trigger this data logging event every time the fog machine trips, and will record the entire event as a “Trick-Or-Treat Event.” Then we will be able to export the data as a CSV file and analyze what temperatures, humidity levels, and barometric pressure levels correlated with spikes in Trick-Or-Treat events.

 

For the purpose of this blog post, we will be simply figuring out how to read the data from the Sense Hat with our Raspberry Pi. Once we have that figured out we can move on to learning how to push that data to the cloud.

 

 

The Sense Hat

rpisensehat.jpg

 

The new Raspberry Pi Sense Hat was released a few weeks ago, and to be quite honest, since I had planned this project out before it was released, I was initially quite confused as to what this Hat actually did. The Sense HAT is an add-on board for the Raspberry Pi, made especially for the Astro Pi mission that will be performed on the International Space Station in December 2015.

 

 

The Sense HAT has an 8×8 RGB LED matrix, a five-button joystick and includes the following sensors:

  • Gyroscope
  • Accelerometer
  • Magnetometer
  • Temperature
  • Barometric pressure
  • Humidity

To make things simple, the Raspberry Pi Foundation has created a Python library providing easy access to everything on the board. You can find that library here.

 

 

Installing Astro Pi / Sense Hat

 

 

Raspberry-Pi-Sense-HAT.jpg

To get started you need to connect the Sense Hat to a Raspberry Pi 2 by placing it on the GPIO Pins. Note the orientation of the board in the image above.

 

Now we need to install the Raspberry Pi Sense Hat library package into Raspbian. SSH into the Raspberry Pi, or open the terminal if you are using your Pi with a monitor such as the new Raspberry Pi 7-Inch Touch Screen. If you are following along at home and building your own Foginator, your Raspbian install should already be updated, but just in case, run the following commands in the terminal.

 

sudo apt-get update

sudo apt-get upgrade -y

 

 

Then run the following command, which will download the necessary package to get the Sense Hat up and running. This process should take less than five minutes on a Raspberry Pi 2, but it could take longer. Do not unplug your Raspberry Pi during this process.

 

sudo apt-get install sense-hat

 

Then to finish up the process you need to restart the Raspberry Pi. To do this, run the following command.

 

sudo reboot

 

Once the Pi has had time to reboot, reconnect via SSH or re-open the terminal.

 

 

Testing AstroPi and The Sense Hat

 

2015-09-23-23_48_42-Start.jpg

 

Let’s create a quick Python script to test that everything was installed and connected correctly. Using the Nano text editor, create a new file named sense_test.py. You can do this with the command below.

 

sudo nano sense_test.py

 

Now copy and paste the script below. This script simply tells the Sense Hat to scroll the words “Hello World” across the Hat’s LED matrix. When you have the code pasted, save the file and exit Nano, keeping the same file name.

 

from sense_hat import SenseHat
sense = SenseHat()
sense.show_message("Hello World")

 

Now enter the command below to run the script we just created.

 

sudo python sense_test.py

 

Now you should see Hello World scroll across the LED matrix. If this works, you are ready to move on to the next step. If not, something went wrong with your install of Astro Pi; go back over the steps to make sure everything is installed and written correctly.

 

 

Acquiring Data From The Sense Hat

 

temp-test-Start.jpg

 

Following some excellent tutorials on the RaspberryPi.org website, I was able to quickly get data from the temperature, air pressure, and humidity sensors. Surprisingly, this is possible with only a handful of lines of Python code. Below is a breakdown of the code and what each line / section does.

 

First we need to import the sense_hat library, the time library, and the sys library.

 

from sense_hat import SenseHat
import time
import sys

 

Now we need to initialize the sense hat, and clear its matrix.

 

sense = SenseHat()
sense.clear()

 

Now we need to set a variable named var, and give it a value of 30. This is used to close the program after it loops for 30 cycles.

 

var = 30

 

Now we need to create our while loop, and set it to run if var is greater than 0.

 

while var > 0:

 

The first data we want to capture is the ambient temperature. The Sense Hat library makes this super simple; all we have to do is tell the program to get the temperature. The default output is in degrees Celsius, and is extended to several decimal places.

 

  temp = sense.get_temperature()

 

To round that number to a more friendly tenth of a degree, we can simply tell the program to round the temperature output to the first decimal place.

 

  temp = round(temp, 1)

 

Now we can print our temperature to the terminal. To do this we simply write a print command, and write some text to explain what this output is so that other people can understand what they are reading.

 

  print("Temperature C",temp)

 

The same methods apply to the humidity and pressure readings as well, so I won’t list each of them out line by line.

 

  humidity = sense.get_humidity()
  humidity = round(humidity, 1)
  print("Humidity:",humidity)

 

  pressure = sense.get_pressure()
  pressure = round(pressure, 1)
  print("Pressure:",pressure)

 

Now we need to tell the program to wait for one second before continuing on. This slows down the rate at which the data is read and output. You can slow this rate down by increasing the number after the time.sleep command, or you can speed it up by decreasing the number.

 

  time.sleep(1)

 

Now we need to tell the program to decrease the var variable by one.

 

  var = var -1

 

With our var variable decreased by one, we need to check to see if it has reached 0 yet. If it has, we can tell the program to exit. This allows us to poll data for as long as we want, and not have the loop constantly running. Basically the while loop will run over and over until the var variable has decreased to 0.

 

  if var == 0:
    sys.exit()

 

Putting It All Together

 

Below is the full code we just wrote, without any comments. Open the sense_test.py file in Nano and delete all of the code that is in it. Then copy and paste the code below, and save the file.

 

from sense_hat import SenseHat
import time
import sys

sense = SenseHat()
sense.clear()

var = 30

while var > 0:
  temp = sense.get_temperature()
  temp = round(temp, 1)
  print("Temperature C",temp)
  humidity = sense.get_humidity()
  humidity = round(humidity, 1)
  print("Humidity:",humidity)
  pressure = sense.get_pressure()
  pressure = round(pressure, 1)
  print("Pressure:",pressure)
  time.sleep(1)
  var = var -1
  if var == 0:
    sys.exit()

 

Now run the program we just wrote with the following command:

 

sudo python sense_test.py

 

You should see each data set we are looking for (temperature, humidity, and pressure) displayed in the terminal. The Raspberry Pi will poll and display the data for 30 seconds / loops if you stuck with the 1-second delay we wrote into the while loop. With that working, let’s move on to getting this data into the cloud.

 

 

Pushing The Data To The Cloud / Internet Of Things

 

 

initial_state_data_foginator.jpg

A while back I received an email from a company called Initial State, who wanted me to try out their new cloud-based data visualization solution geared towards makers and programmers alike. Unfortunately, I forgot all about Initial State due to some life events taking me away from making things for a few months, and I did not think about it again until I was writing up one of my Design Challenge Summaries and covered Rick Reynolds’ (RWReynolds) project, Vertically Oriented Modular System. In a comment, Rick mentioned that he used Initial State to record and log the data from his project, and I decided to use it for mine as well.

 

You will need to go to InitialState.com and sign up for a free account. I have a pro account because I plan on using it in a lot of my future projects, but the free account should suffice for most people.

 

2015-09-24 00_46_17-Start.jpg

I won’t go into the whole process of installing Initial State’s logger onto your Raspberry Pi, as Initial State has an excellent video tutorial on how to do exactly that, as well as a very comprehensive written tutorial on the subject. You can find the video below; it’s about an hour long in full, but the first 20 minutes should give you a good understanding of how to make this work.

 

 

Now let’s modify our code to enable it to begin sending data to the Initial State cloud. Do this by using the Nano text editor to edit the sense_test.py script.

 

This time around we need to import the InitialState Streamer library as well.

 

from sense_hat import SenseHat
import time
import sys
from ISStreamer.Streamer import Streamer

 

Now we need to set up the Initial State logger, name the bucket for this project, and then enter your access key. Change the code below to include your key.

 

logger = Streamer(bucket_name="Sense Hat Environment Stream", access_key="YOUR_KEY_HERE")

 

Our setup stays the same:

 

sense = SenseHat()
sense.clear()
var = 30

 

Now everywhere we told the program to print the output of a sensor, we need to change that to tell the program to log and send that data to the InitialState bucket we created earlier in the code. Everywhere you wrote “print” before, change it to logger.log.

 

while var > 0:
  temp = sense.get_temperature()
  temp = round(temp, 1)
  logger.log("Temperature C",temp)
  humidity = sense.get_humidity()
  humidity = round(humidity, 1)
  logger.log("Humidity:",humidity)
  pressure = sense.get_pressure()
  pressure = round(pressure, 1)
  logger.log("Pressure:",pressure)
  var = var -1
  time.sleep(30)
  if var == 0:
    sys.exit()

 

Bringing It All Together

 

The full code is below.

 

from sense_hat import SenseHat
import time
import sys
from ISStreamer.Streamer import Streamer


logger = Streamer(bucket_name="Sense Hat Environment Stream", access_key="zLahwAUqKbNKv6YvuT5JuO58EiUOavDa")


sense = SenseHat()
sense.clear()
var = 30


while var > 0:
  temp = sense.get_temperature()
  temp = round(temp, 1)
  logger.log("Temperature C",temp)
  humidity = sense.get_humidity()
  humidity = round(humidity, 1)
  logger.log("Humidity:",humidity)
  pressure = sense.get_pressure()
  pressure = round(pressure, 1)
  logger.log("Pressure:",pressure)
  var = var -1
  time.sleep(30)
  if var == 0:
    sys.exit()

 

With the script edited and saved, run it using the command below.

 

sudo python sense_test.py

 

Now you can go to your Initial State account and click on the bucket you just created to see the data. Remember that Initial State free accounts have a very limited number of events you can stream each month, and each data point is one event. So running this script in its current configuration would create three events per loop for thirty loops, totaling 90 events each time it is run.

 

2015-09-24 01_01_32-Start.jpg

I wanted to set this up to read data for several days, and I am one of those people who need a visual indicator to tell me that things are working as they should. So I once again modified the code, both to run the loop for several days and to display a creeper head from Minecraft on the LED matrix at the end of every data collection cycle. If you are interested, that code is below. I won’t go into how I did it, but you can learn more at this link from the Raspberry Pi Foundation.

 

Code With Creeper Head Appearing On LED Matrix

from sense_hat import SenseHat
import time
import sys
from ISStreamer.Streamer import Streamer


logger = Streamer(bucket_name="Sense Hat Environment Stream", access_key="zLahwAUqKbNKv6YvuT5JuO58EiUOavDa")


sense = SenseHat()
sense.clear()
var = 14400


O = (0, 255, 0) # Green
X = (0, 0, 0) # Black


creeper_pixels = [
    O, O, O, O, O, O, O, O,
    O, O, O, O, O, O, O, O,
    O, X, X, O, O, X, X, O,
    O, X, X, O, O, X, X, O,
    O, O, O, X, X, O, O, O,
    O, O, X, X, X, X, O, O,
    O, O, X, X, X, X, O, O,
    O, O, X, O, O, X, O, O
]


black_pixels = [
    X, X, X, X, X, X, X, X,
    X, X, X, X, X, X, X, X,
    X, X, X, X, X, X, X, X,
    X, X, X, X, X, X, X, X,
    X, X, X, X, X, X, X, X,
    X, X, X, X, X, X, X, X,
    X, X, X, X, X, X, X, X,
    X, X, X, X, X, X, X, X
]


while var > 0:
  temp = sense.get_temperature()
  temp = round(temp, 1)
  logger.log("Temperature C",temp)
  humidity = sense.get_humidity()
  humidity = round(humidity, 1)
  logger.log("Humidity:",humidity)
  pressure = sense.get_pressure()
  pressure = round(pressure, 1)
  logger.log("Pressure:",pressure)
  var = var -1
  logger.log("Cycles Until Script Exit",var)
  sense.set_pixels(creeper_pixels)
  time.sleep(5)
  sense.set_pixels(black_pixels)
  time.sleep(25)
  sense.clear()
  if var == 0:
    sys.exit()

 

Run this code using the command below:

sudo python sense_test.py &

Note the “&” on the end. This tells the Raspberry Pi to run the script in the background and return the command prompt in the terminal, allowing you to continue developing on your Pi while the script runs. Also note that if the Pi loses power, or you reboot it, the script will stop and you will need to re-run it. You can get around this by setting the script to run on startup; a good tutorial on how to do this can be found at the following instructable. Additionally, you can download all of the code used in this project from its GitHub repo.

 

initial_state_data_foginator3.jpg

 

If you would like to see the data that my Raspberry Pi / Sense Hat combo is generating in real time, visit the following link or click the image above.

 

 

So that wraps up part 3 of the Foginator2000 project. This was a really fun portion of the project for me, as I got to learn how easy it is to push data to Initial State, as well as how easy it is to use the Raspberry Pi Sense Hat. Hats off to the Astro Pi team and the Raspberry Pi Foundation for creating such a feature-rich and easy-to-use Pi Hat. Tune in in just a few days for my next installment of the Foginator2000 project. Until then, remember to Hack The World and Make Awesome!

 

Win this Kit and Build-A-Long


  1. Project Introduction
  2. Fog Controller Hardware and Test
  3. Environment Sensing Coding & Testing
  4. Ambient Audio Hardware and Coding
  5. Lighting Coding and Testing
  6. October 16th - Final Assembly and Testing
  7. October 23rd - Project Wrap-up

Foginator-Banner-002.jpg

 

Welcome to installment #002 of my Foginator 2000 Halloween Project here at Element14. In this week's episode I am going to cover the basics of automating an ADJ VFI1300 1300W Fog Machine via a Raspberry Pi and a Parallax PIR Motion Sensor. The process is fairly simple, and only involves a handful of lines of Python code, so even a coding beginner should easily be able to get this working.

 

20150916_212859_HDR.jpg

 

Below is a table containing the parts you will need for this project. In addition to these parts you will need a drill, drill bit, soldering iron, stranded hook up wire, and some female to female jumper wires.

 

Newark Part No.

Notes

Qty

Manufacturer / Description

38Y6467

RPi

1

RASPBERRY PI 2, MODEL B

38Y6470

SD Card

1

RASPBERRY PI 8GB NOOBS MICRO SD CARD

44W4932

PSU

1

USB PORT POWER SUPPLY 5V, 1A

06W1049

USB Cable

1

USB A PLUG TO MICRO USB B PLUG

53W6285

WiFi Dongle

1

USB WIFI MODULE

 

 

 

MCM Part No.

Notes

Qty

Manufacturer / Description

83-14732

Relay Module

1

TinkerKit Relay Module

555-19400

Fog Machine

1

Fog Machine Hurricane 901

28-17976

Parallax PIR Sensor

1

PIR Infrared Measurement Sensor Module

 

 

The Theory

 

 

The fog machine will trigger when trick-or-treaters trip its motion sensor. Throughout this project we will call this a “trick-or-treat event,” or “T&T Event” for short. When a T&T event happens, the fog machine should fire off and begin fogging out the immediate area. To make that happen, we first need to look at how a fog machine works, and then at methods to trigger one based on motion detection.

 

fog-machine-diagram1.jpg

The image above shows the basic operation of a standard fog machine. Fog liquid is pumped from a reservoir into a heater block, where it flashes to vapor and exits the nozzle under the pressure created by the expanding gases. Everything is controlled from a central control interface, which, depending on the quality and brand, could be anything from a few passive components up to a full-scale MCU-based controller.

 

On most fog machines with manual remotes, a small LED indicator is present that illuminates when the heater block has reached optimal temperature. On the ADJ 1300W that we are using, this LED, as well as the push button, is powered by low voltage, but on many fog machines these components can be powered by mains voltage. Exercise extreme caution and observe high-voltage safety practices when modifying any fog machine.

 

So in theory we should be able to use a relay to trigger the fog machine using the wires that go to the push button. As an added bonus, we should be able to use the signal from the “ready” LED to tell the Raspberry Pi that the fog machine is armed and ready to spray fog. If your fog machine uses high voltage for its LED, this will not work, and you will have to design a rectifying solution. The bulk of this blog post will show you how to modify the manual remote control for the ADJ VFI1300 1300W Fog Machine; this method may work on other ADJ fog machine products, but I am not sure. Remember that this will most certainly void your warranty, and you are assuming all risk associated with modifying a product to perform in a way it was not designed to from the factory. I take no responsibility if anything should go wrong.

 

 

The Schematic

 

 

Fog-Control-Schematic.jpg

As you can see in the image above, my plan is to utilize the TinkerKit relay module as a sort of “smart” switch to fire the fog machine. The PIR-based motion sensor will detect motion, which will tell the Raspberry Pi 2 to trigger the relay. I am using seven of the Raspberry Pi 2’s GPIO header pins: both of its 5-volt pins, two of its ground pins, and three actual I/O pins.

 

Raspberry-Pi-GPIO-Layout-Model-B-Plus-rotated-2700x900.png

 

Using the Raspberry Pi 2 GPIO Pinout reference above, you can see that the three I/O pins I am using are as follows (BCM numbering):

  • Raspberry Pi GPIO4 to TinkerKit Relay Module
  • Raspberry Pi GPIO17 to Parallax PIR Motion Sensor
  • Raspberry Pi GPIO27 to Fog Machine Remote “Ready” Indicator LED (used in a later project update)

 

With our schematic planned out, let’s begin the build by modifying the ADJ VFI1300 1300W Fog Machine’s manual remote control. Again, if you are modifying any fog machine’s remote control other than the exact model that I am using in this tutorial, beware that it could utilize mains voltage instead of low voltage. Proceed with caution.

 

 

Modifying The Remote

 

 

20150916_222341_HDR.jpg

Here we can see the manual remote that came with the ADJ VFI1300 1300W Fog Machine. Notice the “Output” button and the “Ready” LED. These are the objects we will be hacking some wiring to in order to connect them to our Raspberry Pi and TinkerKit relay module.

 

20150916_222411_HDR.jpg

Opening the remote up is quite simple and only requires the removal of four Phillips head screws. Save the screws, as we will be putting the remote back together when the modifications are complete.

 

20150916_223556_HDR.jpg

Once the back has been removed you will see a bundle of wires attached to two leads of the momentary push button and the “ready” indicator LED. To fire the fog machine one simply needs to close the “trigger” circuit by pressing the button. This means that we can easily modify this to utilize a relay to trigger the fog machine. Additionally we can use the low-voltage signal from the indicator LED to tell the Raspberry Pi that the fog machine is armed and ready to fire.

 

20150916_225217_HDR.jpg

Before we can solder in the wiring, we need to make room for our three binding posts. Since this case is tapered, you need to drill the holes just above the ADJ logo, as seen in the image above. Make sure that you space the holes so that all three binding posts will fit, and so their mounting hardware does not get obstructed by the adjacent binding post’s hardware.

 

20150916_225648_HDR.jpg

As it turns out, the plastic used in the casing for the remote is quite cheap, and very brittle. I used a brand new, very sharp drill bit to drill a pilot hole, and then a larger one that was just as sharp, and the plastic fractured and flaked in a few places. If this were a proper glass-filled ABS or Nylon case this would not have happened.

 

20150916_230341_HDR.jpg

Take notice of how close the nuts that fasten the binding posts to the case are to each other. I failed to account for proper spacing when laying out where to drill the holes, and got lucky that everything just barely fit. I would suggest adding a dab of CA glue, hot glue, or some other adhesive to these nuts, as they tend to back off the threads over time.

 

20150916_230351_HDR.jpg

As you can tell, I am not using the binding posts I listed in the parts list for this post. I forgot to order the correct ones when I designed the kit, and am using some I found in my scrap parts box. The ugly green binding post was painted with craft acrylic paint from a local hobby store. The paint did not adhere as well as I wanted, but it serves its purpose of identifying that post as different.

 

20150916_231709_HDR.jpg

Hacking the switch to work with our relay is super simple. Solder one of the switch’s leads to either the red or the black binding post, and then solder the other switch lead to the remaining one of those two. While you are in here, solder a wire from the anode side of the LED to the third binding post.

 

20150916_231808_HDR.jpg

I messed up and accidentally heated the heat shrink on the LED wire with my soldering iron which caused it to no longer slide over the LED’s lead. Instead of cutting it off and adding a new piece in, I simply chose to wrap it in 3M Super 33 Electrical Tape.

 

20150917_000450_HDR.jpg

Now simply screw the bottom of the case back on, and then cut three 24” lengths (or longer, depending on your needs) of stranded hook-up wire. I chose red, black, and yellow. For a cleaner look, I chucked the wires up in my cordless drill and twisted them together. I lost a few inches of length this way, but the result looks much cleaner and is easier to manage. Connect the three wires to the binding posts as shown in the image above: red to red, black to black, and yellow to yellow.

 

 

Connecting The PIR Motion Sensor

 

 

Pir-Schematic.jpg

Connecting the Parallax PIR sensor to the Raspberry Pi is quite easy as well, and only requires a few lines of code to test its functionality. To start you will need to connect the PIR sensor to the Raspberry Pi as per the diagram above, following the pinouts below.

 

  • PIR “Out” Pin  to Raspberry Pi GPIO17
  • PIR “VCC” Pin to Raspberry Pi 5V Pin
  • PIR “GND” Pin to Raspberry Pi GND Pin

 

20150916_213334_HDR.jpg

With those connected, you will need to boot up your Raspberry Pi 2 with the NOOBS SD Card installed, or any SD Card with Raspbian installed on it. It is also advisable to update your version of Raspbian to the latest. If you need to learn how to do this, I briefly cover it in part two of my other Halloween Project, Trick or Trivia.

 

The Code

 

With everything updated, and the SSH connection still open, type the following command to create a Python file for testing the PIR sensor:

 

sudo nano test-pir.py


This will create the file and open it in the nano text editor. Before we paste the Python code in, let’s walk through it and look at what each step does.

 

First we need to import the Raspberry Pi GPIO library, the time library, and the sys library so Python can execute our code.

 

import RPi.GPIO as GPIO
import time
import sys

 

Now we need to set up our GPIO pins. Note that we are using the BCM schema to run our code. For those of you who do not know, the Raspberry Pi’s GPIO pins can be configured two different ways: GPIO.BOARD and GPIO.BCM. When setting the GPIO pin mode, you are telling it what numbering scheme your code will be adhering to. Unfortunately, the pin numbering between the modes changes on various models and revisions of the Raspberry Pi. So if you are not running a Model B+ or Raspberry Pi 2, you will need to search the internet for the proper pinouts for the mode you select.

 

Raspberry-Pi-GPIO-Layout-Model-B-Plus-rotated-2700x900.png

 

The GPIO.BOARD option specifies that you are referring to the pins by their physical position on the header, i.e. the numbers printed on the board (e.g. P1) and in the middle of the diagram above.

 

The GPIO.BCM option means that you are referring to the pins by their "Broadcom SOC channel" number; these are the numbers after "GPIO" in the green rectangles around the outside of the diagram above.
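To make the two schemes concrete, here is a small illustrative sketch (not part of the project code) mapping the BCM channels used in this project to their physical header positions on the 40-pin B+/Pi 2 header. The lookup table and helper function are my own additions for demonstration.

```python
# Partial BCM-to-physical-pin lookup for the 40-pin header
# (Model B+ / Raspberry Pi 2). Only this project's pins are shown.
BCM_TO_BOARD = {
    4: 7,    # relay signal
    17: 11,  # PIR "Out"
    27: 13,  # "ready" LED sense
}

def board_pin(bcm_channel):
    """Return the physical header pin for a given BCM channel number."""
    return BCM_TO_BOARD[bcm_channel]

print(board_pin(17))  # prints 11: BCM channel 17 sits on physical pin 11
```

So `GPIO.setup(17, GPIO.IN)` in BCM mode addresses the same pin as `GPIO.setup(11, GPIO.IN)` would in BOARD mode.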

 

GPIO.setwarnings(False)
GPIO.setmode(GPIO.BCM)
GPIO.setup(4, GPIO.OUT)
GPIO.setup(17, GPIO.IN)

 

Now we can build a function to test our code. We will call the function fire_fog. The code below sets GPIO 4 to True (high) for three seconds (triggering the fog machine), then sets it to False (stopping the fog); finally, the code cleans up the GPIO pins and exits the script.

 

def fire_fog():
    GPIO.output(4,True)
    time.sleep(3)
    GPIO.output(4,False)
    GPIO.cleanup()
    sys.exit()
   

 

Now we need to tell the Pi to wait for a high signal on GPIO17 (movement detected by the PIR sensor), and to run the fire_fog function when that high signal is present. I put this in a “while” loop so that it repeats until motion is detected. There are better ways to do this, but I am still brushing up on my Python.

 

while 1:
    time.sleep(3)
    if GPIO.input(17)==True:
        fire_fog()
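As one of those "better ways," the polling logic can be factored into a reusable helper that accepts any sensor-reading callable. This is an illustrative sketch, not the project's code: the helper name and fake sensor are my own, and on the Pi you would pass something like `lambda: GPIO.input(17)` as the reader.

```python
import time

def wait_for_motion(read_sensor, poll_interval=0.1, timeout=None):
    """Poll a sensor-reading callable until it reports motion.

    read_sensor: callable returning the sensor state; on the Pi this
    would wrap GPIO.input(17). Returns True when motion is seen, or
    False if the optional timeout (seconds) expires first.
    """
    start = time.time()
    while True:
        if read_sensor():
            return True
        if timeout is not None and time.time() - start >= timeout:
            return False
        time.sleep(poll_interval)

# Simulated sensor that reports motion on the third poll
readings = iter([False, False, True])
print(wait_for_motion(lambda: next(readings), poll_interval=0.01))  # True
```

Separating "how we poll" from "what we poll" also makes the loop testable on a desktop machine, with no GPIO hardware attached.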


 

Now paste the following lines of code into the file. You can also download this code from its GitHub repo. If you are using a terminal such as PuTTY, you can simply copy this code and right-click in the terminal to paste it.

 

import RPi.GPIO as GPIO
import time
import sys

GPIO.setwarnings(False)
GPIO.setmode(GPIO.BCM)
GPIO.setup(4, GPIO.OUT)
GPIO.setup(17, GPIO.IN)

def fire_fog():
    GPIO.output(4,True)
    time.sleep(3)
    GPIO.output(4,False)
    GPIO.cleanup()
    sys.exit()

while 1:
    time.sleep(3)
    if GPIO.input(17)==True:
        fire_fog()

 

Now save and exit the nano text editor. If everything is connected correctly and the code is correct, we can move on to connecting the TinkerKit relay module to the Raspberry Pi and testing our PIR sensor.

 

 

Connecting And Testing The Relay

 

 

Fog-Control-Schematic.jpg

Now connect the relay as shown in the schematic above, and the pinout below. Only make the Raspberry Pi to Relay connection at this time.


  • Tinkerkit Relay + Pin to Raspberry Pi 5V Pin
  • Tinkerkit Relay Signal Pin (Middle) to Raspberry Pi GPIO4
  • Tinkerkit Relay - Pin to Raspberry Pi GND Pin

 

20150916_213438_HDR.jpg

With everything connected, we can now test the PIR and Relay systems. From the terminal, enter the following command:

 

 

sudo python test-pir.py


20150916_213558_HDR.jpg

 

 

The script will wait a few seconds before looking for motion, so wait a few seconds and then wave your hand in front of the sensor. The sensor’s indicator will illuminate, showing that motion was detected. If you do not hear the relay click on, wait a few more seconds and try again. If the click still does not happen after 15 seconds, check the code again. If the code is working, you will hear the relay click on for three seconds and then click off.

 

 

Bringing It All Together

 

 

Relay-Pins.jpg

Now connect the red and black wires to the “Comm” (Common) and “NO” (Normally Open) contacts on the relay. Don’t worry about the “ready” (Yellow) wire for the moment. We will revisit it in another post.

 

20150917_000506_HDR.jpg

If you have not filled the fog machine with fog juice yet, now would be the time to do so. (I recommend Hog Fog; it is super clean, thick, and low-lying, and I have personally used it many times!) With everything connected, you can connect the remote’s cable to the fog machine, and then connect the fog machine to the mains supply. Switch the fog machine on and wait about 5 minutes for the green light to come on. Then run the script again from the terminal.

 

sudo python test-pir.py

 

If everything is correct, the PIR sensor should begin looking for motion three seconds after the script has been started. These three seconds allow you to get out of the way when setting up the Foginator2000.

 

 

That is going to end it for this installment of Project: Foginator2000! Tune in next week for installment 3, in which I work on the environment sensing coding & testing aspect of the project. If you have not yet seen my other Halloween project for 2015 here at Element14, head over to Trick or Trivia: A trivia-based Halloween Candy Dispenser - Part 001. Until next time, remember to Hack The World and Make Awesome!

 


We've been cruelly teasing you all week about a project we're about to kick off here at element14, as we alluded to the brand new Raspberry Pi case we had delivered.

Cab-Tweet-00.jpg Cab-Tweet-01.jpg

Cab-Tweet-02.jpg Cab-Tweet-03.jpg

But you guys were too quick for us. Although there were a few wry suggestions about what was being delivered -- including a rather excellent notion about building a Raspberry Pi juke box (watch this space) -- most of you astute Pi eaters sussed out that we've bagged ourselves a classic, upright arcade cab that'll soon become the company's most effective time wasting tool to date.

 

Arcade Machine14

The facts are these:

  • With your (not insignificant) assistance, we're going to strip out the cab and retrofit it with a Raspberry Pi.
  • The RPi will be used to emulate a bunch of classic video games.
  • The cab will be installed here in the element14 offices for the staff and visitors to play.
  • Any money collected by the cab will be given to a community-selected charity each month.

 

Great plan eh? But there's no shortage of work to be done before the pixels can be set free.

 

Retro Revival

There are a few goals we're endeavouring to accomplish in this project. Firstly, we want to keep the whole thing under the £500 mark, thereby making the build comparable to buying a contemporary games console. So anyone who builds along with us can divert any funds they might have allocated to an Xbox or PlayStation, and have themselves an arcade cab instead. It's a gift to yourself that keeps on giving.

 

We also want to re-use as much of the cab as possible. It's got a power supply in there already, and it still has its old 15KHz arcade monitor, which you can't beat for that authentic look. Hooking it up to a Raspberry Pi isn't going to be any small task, of course, and we're not even sure if it's working right now.

 

And then there's the controls. We'll need to interface the joysticks and buttons with the Raspberry Pi, which in itself isn't a particularly difficult task. At least, it wouldn't be, if the RPi had enough GPIO inputs for eight joystick directions (four for each stick), 12 game buttons, two game start buttons, a working coin mechanism, and however many controls we need to operate the OS front end and emulators. Hmm.

 

So we'll be calling on the element14 community for input and assistance at most every turn, and would love to hear any hints, tips or ideas you might have about putting together a Raspberry Pi arcade machine. In the meantime, we'll bring you regular blogs and discussions about how the project's going, and how you can help.

 

For now though, here's a look inside and outside the cabinet. I'll be downstairs, cleaning it for the first time in 30 years...

Raspberry Pi Arcade Cabinet 01.jpg Raspberry Pi Arcade Cabinet 02.jpg Raspberry Pi Arcade Cabinet 03.jpg Raspberry Pi Arcade Cabinet 04.jpg Raspberry Pi Arcade Cabinet 06.jpg Raspberry Pi Arcade Cabinet 05.jpg

I thought I would share my simple design for a fog-generating machine. I made this a few years ago as a pesticide doser, but it might be of interest to those of you making a Pi-based Halloween special. Just don't add the pesticide!

 

I could not find any cheap units when I made this a few years ago. Maybe there is a cheap off-the-shelf solution now, but I still prefer using things I have built rather than bought.

This unit just requires water and 24v to operate, no chemicals needed.

 

 

 

 

For the build tutorial check out:

Automated Green House Blog:12 - Pesticide Doser, Cheap DIY Mister

 

 

I look forward to watching the Pi Halloween projects,

Mike

Trick-or-Trivia-Banner-002.jpg

 

Hello everyone! It’s been a week, and I have been working hard at designing and writing the trivia interface for this project. I am proud to say that after an absence of more than five years from Python, I was successful in getting a decent-looking GUI built. Unfortunately, I have yet to figure out how I will randomize the questions that the device asks trick-or-treaters, but that is something I can work on later in the project. So let’s jump in and take a look at how I created the trivia interface.
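For the randomization I have not settled on yet, one simple option would be Python's random.choice over a list of question tuples. This is a hedged sketch of the idea; the question pool and helper below are hypothetical, not code from this project.

```python
import random

# Hypothetical question pool: each entry is (prompt, answers, correct index)
QUESTIONS = [
    ("Casper is a friendly ____!", ["Ghost", "Ghast", "Ghoul", "Gremlin"], 0),
    ("Dracula is a famous ____!", ["Witch", "Vampire", "Mummy", "Zombie"], 1),
]

def pick_question(pool=QUESTIONS):
    """Return a randomly chosen (prompt, answers, correct index) tuple."""
    return random.choice(pool)

prompt, answers, correct = pick_question()
print(prompt)  # one of the prompts above, chosen at random
```

The chosen tuple could then feed the label and button text instead of the hard-coded strings shown later in this post.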

 

In my first post, I mentioned that I had planned on using the Drupal CMS to build the trivia interface. After some conversation with another Drupal developer, I came to the conclusion that using a PHP-based framework and building a database was way overkill when I wanted to just display a few questions on a screen. So I decided to just write the whole trivia interface in Python, a language I have not touched in more than five years. Unfortunately my entire Python experience was in the command line, and I had never tried writing a GUI before.

 

TrickOrTriviaScreen.jpg

 

After some research I found out that it is extremely easy to create a simple, low-level user interface in Python by using a GUI library named Tkinter. For those of you who have never heard of Tkinter (like me), it’s Python's de-facto standard GUI (Graphical User Interface) package. It is a thin object-oriented layer on top of Tcl/Tk. Using Tkinter I was easily able to create a nice full-screen application that would clearly present the question and its answer buttons to the user.

 

 

Installing The Raspberry Pi 7-Inch Touch Screen

 

 

TrickOrTrivia_002 (5).jpg

Before I show you the interface and the code that created it, let’s take a look at what we will be displaying it on. Element14 sent me the brand new Raspberry Pi 7-inch touch-screen LCD to use in this project, and after spending a couple of weeks with this screen, all I can say is WOW. I have used several different screens with the Raspberry Pi and Beaglebone Black boards, and none has been as easy to use as this screen.

 

 

TrickOrTrivia_002 (1).jpg

Getting the Raspberry Pi 7-inch Touch Screen up and running is as simple as connecting the large ribbon cable to the back of the driver board, and then securing the driver board to the screen with the provided hardware.

 

 

TrickOrTrivia_002 (2).jpg

With the driver board mounted to the LCD screen, connect the smaller ribbon cable to the driver board. I found it was easier to do it this way since I have larger fingers.

 

 

TrickOrTrivia_002 (3).jpg

With the two LCD ribbon cables connected, mount the Raspberry Pi to the stand-offs, and then connect the DSI cable to both boards.

 

 

TrickOrTrivia_002 (4).jpg

Now connect the two power jumper wires to both boards as shown in the above image. You will also want to insert the SD card into the Raspberry Pi. If you are running the latest version of Raspbian, you can now plug a 5V 2A power source into the LCD screen’s driver board. If you do not have a 5V 2A power source, you can remove the power wires that connect the two boards, and then connect one 5V 1A power source to the Raspberry Pi and another to the LCD screen’s driver board.

 

You will have to connect your Raspberry Pi to a video source via the HDMI port for this process, or SSH into the Pi if you know its IP address. If you are using the NOOBS SD card that comes with the kit for this tutorial, you will have to update to the latest Raspbian, as the version installed on it is out of date.

 

To see which version of Raspbian your Pi is running, use the following command in the terminal:

 

uname -a

 

This should return something that looks like the following:

 

 

Linux raspberrypi 4.1.6-v7+ #810 SMP PREEMPT Tue Aug 18 15:32:12 BST 2015 armv7l GNU/Linux

 

 

If your version of Raspbian is out of date, run the following commands in the terminal:

 

sudo apt-get update

 

Then

 

sudo apt-get upgrade -y

 

 

With the updates and upgrades complete, you can now restart your Raspberry Pi, and the LCD screen should display the login screen.

 

TrickOrTrivia_002-(6).jpg

 

 

Enter the desktop GUI by typing the following command after logging in.

 

startx

 

 

 

 

TrickOrTrivia_002 (6).jpg

 

Now that you are on the desktop, let’s look at how I built the trivia GUI.

 

 

GUI Build Using Tkinter

 

tkinter-logo.png

As I mentioned above, I decided to write my own GUI instead of using a heavy framework like Drupal to create a simple quiz interface. I looked at a few options and finally decided on a GUI solution that was already built into Python called Tkinter. I chose this because it was very lightweight which would save loading times, and because it seemed very simple to use.

 

The Knowledge

 

One of the biggest selling points for me on Tkinter was the sheer amount of documentation available online. New Mexico Tech has a great Tkinter reference on their website, which was very helpful when troubleshooting, but the best resource I found was a series of videos on YouTube from a user named TheNewBoston. I would highly recommend heading to his channel and watching his tutorial series on using Tkinter.

 

 

TheNewBoston's first Tkinter Tutorial

 

 

I ended up watching the entire series, but if you only have time to watch a few, everything up to video six will give you the knowledge needed to complete this project. Watch the videos if you are following along at home, and check out my code below which I have broken out to better explain what each section does.

 

The Code

 

We need to import Tkinter, the Raspberry Pi GPIO library (as GPIO), and the time and sys libraries.

 

from Tkinter import *
import RPi.GPIO as GPIO
import time
import sys

 

 

Here we set up the GPIO pins. For the purposes of this tutorial, I turn GPIO warnings off. Then we set the GPIO pinout to the BCM layout. Finally, we define two output pins that will turn on a pair of LEDs.

 

GPIO.setwarnings(False)
GPIO.setmode(GPIO.BCM)
GPIO.setup(26, GPIO.OUT)
GPIO.setup(19, GPIO.OUT)

 

 

For those of you who do not know, the Raspberry Pi’s GPIO pins can be configured two different ways: GPIO.BOARD and GPIO.BCM. When setting the GPIO pin mode, you are telling it what numbering scheme your code will be adhering to. Unfortunately, the pin numbering between the modes changes on various models and revisions of the Raspberry Pi. So if you are not running a Model B+ or Raspberry Pi 2, you will need to search the internet for the proper pinouts for the mode you select.

 

  • The GPIO.BOARD option specifies that you are referring to the pins by their physical position on the header, i.e. the numbers printed on the board (e.g. P1) and in the middle of the diagram below.
  • The GPIO.BCM option means that you are referring to the pins by their "Broadcom SOC channel" number; these are the numbers after "GPIO" in the green rectangles around the outside of the diagram below.

Raspberry-Pi-GPIO-Layout-Model-B-Plus-rotated-2700x900.png

Raspberry Pi B+ and Raspberry Pi 2 GPIO Pinout.

 

 

We need to set the state to True

 

state = True

 

 

This section defines a function called blink_led. At the end of the function I add a sys.exit() call that executes three seconds after the GPIO pin is turned off for the last time. This exits the quiz script, allowing us to reset for the next trick-or-treater.

 

def blink_led():
# blink three times, 1 second on/off, then clean up and exit
    while True:
        GPIO.output(26,True)
        time.sleep(1)
        GPIO.output(26,False)
        time.sleep(1)
        GPIO.output(26,True)
        time.sleep(1)
        GPIO.output(26,False)
        time.sleep(1)
        GPIO.output(26,True)
        time.sleep(1)
        GPIO.output(26,False)
        time.sleep(1)
        GPIO.cleanup()
        time.sleep(3)
        sys.exit()

 

 

This section sets up another function that blinks the same pattern, but this time we name the function blink_led_2.

 

def blink_led_2():
# blink three times, 1 second on/off, then clean up and exit
    while True:
        GPIO.output(19, True)
        time.sleep(1)
        GPIO.output(19, False)
        time.sleep(1)
        GPIO.output(19, True)
        time.sleep(1)
        GPIO.output(19, False)
        time.sleep(1)
        GPIO.output(19, True)
        time.sleep(1)
        GPIO.output(19, False)
        time.sleep(1)
        GPIO.cleanup()
        time.sleep(3)
        sys.exit()
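The two blink functions above differ only in the pin number. As a hedged refactoring sketch (not the code used in this project), the pattern could be parameterized so one function serves both LEDs; injecting a pin-setting callable also lets the logic run without GPIO hardware, where on the Pi you would pass something like `lambda state: GPIO.output(26, state)`.

```python
import time

def blink(set_pin, times=3, on_time=1, off_time=1):
    """Blink an LED by calling set_pin(True/False) repeatedly.

    set_pin: callable taking a boolean; on the Pi this would wrap
    GPIO.output(pin, state). Parameterizing the pin avoids duplicating
    the whole function for each LED.
    """
    for _ in range(times):
        set_pin(True)
        time.sleep(on_time)
        set_pin(False)
        time.sleep(off_time)

# Record the on/off pattern with a fake "pin" instead of real GPIO
states = []
blink(states.append, times=2, on_time=0, off_time=0)
print(states)  # [True, False, True, False]
```

GPIO.cleanup() and sys.exit() would then be called once by the caller, rather than being duplicated inside each blink function.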

 

 

Now we need to set up Tkinter. The first thing to do is create the root window that will hold the elements of our GUI, which we do with root = Tk().

 

root = Tk()

 

 

Now we need to set the window to not have a border, title, or any of the normal minimize, maximize, and close buttons.

 

root.overrideredirect(True)

 

 

This line tells the program to make the window full screen by sizing it to the screen’s resolution.

 

root.geometry("{0}x{1}+0+0".format(root.winfo_screenwidth(), root.winfo_screenheight()))
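The geometry string Tkinter expects has the form "<width>x<height>+<x>+<y>". As a quick illustration, here is the same formatting with stand-in values for the 7-inch screen's 800x480 resolution, since winfo_screenwidth() and winfo_screenheight() require a live display:

```python
# Tkinter geometry strings are "<width>x<height>+<x>+<y>".
# 800x480 is the official 7-inch touch screen's resolution, used here
# in place of root.winfo_screenwidth()/winfo_screenheight().
screen_w, screen_h = 800, 480
geometry = "{0}x{1}+0+0".format(screen_w, screen_h)
print(geometry)  # 800x480+0+0
```

The "+0+0" suffix anchors the window's top-left corner at the top-left of the screen.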

 

 

Now we need to tell the program to set the focused window to this one.

 

root.focus_set()

 

 

We also need to set the window’s background to black.

 

root.configure(background='black')

 

 

With the basic GUI window set up, we can move on to the elements that will appear in the window. These consist of three labels on top that span three rows of a grid, followed by two answer buttons on the next row spaced apart by two columns, another row with two buttons spaced the same way, and finally two more labels at the bottom describing the rewards for correct and incorrect answers. You will see that I have styled the elements inline with basic styling options, and placed them using Tkinter's .grid geometry manager.

 

 

Setting up the Labels.

 

 

In each of these three labels you can see that I have placed them in the root window and included font, font size, background color, and foreground color attributes, and then set each label to display using the .grid method, followed by some placement and padding attributes.

 

 

label_1 = Label(root, text="Welcome to Trick or Trivia", font=("Helvetica", 36), bg="black", fg="white")
label_1.grid(columnspan=6,padx=(100, 10))
label_2 = Label(root, text="Answer the question for candy!", font=("Helvetica", 28), bg="black", fg="red")
label_2.grid(columnspan=6, pady=5, padx=(100, 10))
label_3 = Label(root, text="Casper is a friendly ____!", font=("Helvetica", 32), bg="black", fg="green")
label_3.grid(columnspan=6, pady=5, padx=(100, 10))

 

 

Setting up the Buttons

 

Just like the Label elements, I defined the buttons by placing them in the root window, adding text, and including attributes for font type and font size. Here is a reference (http://effbot.org/tkinterbook/button.htm). What makes the buttons special is the “command” attribute I included at the end of each button’s setup line. This calls the function it names, which in our case is one of the two blink functions we wrote earlier. Finally, I displayed each button using the .grid method with some styling attributes.

 

button_1 = Button(root, text="Ghost", font=("Helvetica", 36), command=blink_led)
button_1.grid(row=4, column=2, pady=5, padx=(100, 10))
button_2 = Button(root, text="Ghast", font=("Helvetica", 36), command=blink_led_2)
button_2.grid(row=4, column=4, sticky=W, padx=(100, 10))
button_3 = Button(root, text="Ghoul", font=("Helvetica", 36), command=blink_led_2)
button_3.grid(row=5, column=2, pady=5, padx=(100, 10))
button_4 = Button(root, text="Gremlin", font=("Helvetica", 36), command=blink_led_2)
button_4.grid(row=5, column=4, sticky=W, padx=(100, 10))

 

 

Finally I added two more labels with the same styling and placement attributes as before.

 

label_4 = Label(root, text="Correct Answer = 3 Pieces", font=("Helvetica", 20), bg="black", fg="green")
label_4.grid(columnspan=6, padx=(100, 10))
label_5 = Label(root, text="Incorrect Answer = 1 Piece", font=("Helvetica", 20), bg="black", fg="red")
label_5.grid(columnspan=6, padx=(100, 10))

 

 

Finally we need to tell the program to stay in the event loop until we close the window.

 

root.mainloop()

 

 

Putting It All Together

 

Here is what the code looks like as a whole. You can download this code from my GitHub repo for this project.

 

 

from Tkinter import *
import RPi.GPIO as GPIO
import time
import sys


GPIO.setwarnings(False)
GPIO.setmode(GPIO.BCM)
GPIO.setup(26, GPIO.OUT)
GPIO.setup(19, GPIO.OUT)


state = True


def blink_led():
# blink three times, 1 second on/off, then clean up and exit
    while True:
        GPIO.output(26,True)
        time.sleep(1)
        GPIO.output(26,False)
        time.sleep(1)
        GPIO.output(26,True)
        time.sleep(1)
        GPIO.output(26,False)
        time.sleep(1)
        GPIO.output(26,True)
        time.sleep(1)
        GPIO.output(26,False)
        time.sleep(1)
        GPIO.cleanup()
        time.sleep(3)
        sys.exit()


def blink_led_2():
# incorrect answer: blink the red LED on GPIO 19 three times (1 s on / 1 s off), then clean up and exit
    while True:
        GPIO.output(19, True)
        time.sleep(1)
        GPIO.output(19, False)
        time.sleep(1)
        GPIO.output(19, True)
        time.sleep(1)
        GPIO.output(19, False)
        time.sleep(1)
        GPIO.output(19, True)
        time.sleep(1)
        GPIO.output(19, False)
        time.sleep(1)
        GPIO.cleanup()
        time.sleep(3)
        sys.exit()


root = Tk()
root.overrideredirect(True)
root.geometry("{0}x{1}+0+0".format(root.winfo_screenwidth(), root.winfo_screenheight()))
root.focus_set()  # <-- move focus to this widget
root.configure(background='black')
root.config(cursor="none")


label_1 = Label(root, text="Welcome to Trick or Trivia", font=("Helvetica", 36), bg="black", fg="white")
label_1.grid(columnspan=6,padx=(100, 10))
label_2 = Label(root, text="Answer the question for candy!", font=("Helvetica", 28), bg="black", fg="red")
label_2.grid(columnspan=6, pady=5, padx=(100, 10))


label_3 = Label(root, text="Casper is a friendly ____!", font=("Helvetica", 32), bg="black", fg="green")
label_3.grid(columnspan=6, pady=5, padx=(100, 10))


button_1 = Button(root, text="Ghost", font=("Helvetica", 36), command=blink_led)
button_1.grid(row=4, column=2, pady=5, padx=(100, 10))


button_2 = Button(root, text="Ghast", font=("Helvetica", 36), command=blink_led_2)
button_2.grid(row=4, column=4, sticky=W, padx=(100, 10))


button_3 = Button(root, text="Ghoul", font=("Helvetica", 36), command=blink_led_2)
button_3.grid(row=5, column=2, pady=5, padx=(100, 10))


button_4 = Button(root, text="Gremlin", font=("Helvetica", 36), command=blink_led_2)
button_4.grid(row=5, column=4, sticky=W, padx=(100, 10))


label_4 = Label(root, text="Correct Answer = 3 Pieces", font=("Helvetica", 20), bg="black", fg="green")
label_4.grid(columnspan=6, padx=(100, 10))


label_5 = Label(root, text="Incorrect Answer = 1 Piece", font=("Helvetica", 20), bg="black", fg="red")
label_5.grid(columnspan=6, padx=(100, 10))




root.mainloop()

Copy and paste this code into a new file on your Raspberry Pi’s desktop, or create the file from the command line by entering the following commands.

 

cd Desktop

sudo nano TrickorTriviaQuiz.py

 

 

 

 

Then right-click to paste the copied code into the new file, and save and exit nano (Ctrl+X, then Y, then Enter).

 

 

Setting up the LEDs

 

TrickOrTrivia_002-(8).jpg

 

Follow the diagram above and wire up your LEDs to GPIO pins 19 and 26. Since the Pi’s GPIO pins output a 3.3V signal, you do not need resistors if you are using a single red or green LED.

 

 

Running the program

 

TrickOrTriviaScreen.jpg

 

Now, from the desktop, open the terminal (if you were not already in the GUI or pasting the code remotely). With the terminal open, navigate to the Desktop directory and enter the following command to run your new quiz program.

 

sudo python TrickorTriviaQuiz.py

 

The trivia screen should pop up as in the video below. If you select the correct answer, the green LED will illuminate and flash three times. If an incorrect answer is selected, the red LED will illuminate and flash three times.

 

 

The LEDs used in this tutorial are just for troubleshooting purposes and are not part of the kit. You can use any LEDs you have lying around, or two of the red 10mm LEDs included in the kit. In a future update, I will show you how to write a function that rotates a servo via the GPIO pins, and we will replace the blink_led calls with it. For now, this proves that the quiz script works and that we are okay moving forward with the next step in the project. Check back next week for another update, and until then, Hack the World and Make Awesome!

 

 

Win this Kit and Build-A-Long

 

  1. Project Introduction

  2. Building The Trivia Interface

  3. Interfacing Ambient and Triggered Audio Events
  4. Building The Candy Dispenser & Servo Coding
  5. Carve Foam Tombstone
  6. October 24th -  Assembly and Testing
  7. October 28th - Project Wrap-up

We made this fun project for a German crane manufacturer. It is a PONG game based on the Raspberry Pi 2, written in Python and controlled via two heavy crane remotes.

 

Introduction

There is plenty of jitter on signals generated using GPIO on the Raspberry Pi and many other computers. This is because Linux can context-switch user processes at any point in time, as determined by its scheduler (for more information on this, see the Raspberry Pi GPIO Explained guide).
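The effect is easy to see for yourself. Here is a minimal Python sketch (my own illustration, not code from the guide) that requests a series of short sleeps and reports the worst overshoot the scheduler introduced:

```python
import time

def worst_sleep_overshoot(period=0.001, n=200):
    """Request n sleeps of `period` seconds and return the worst
    overshoot in seconds (time the scheduler gave to other work)."""
    worst = 0.0
    for _ in range(n):
        t0 = time.monotonic()
        time.sleep(period)
        overshoot = (time.monotonic() - t0) - period
        worst = max(worst, overshoot)
    return worst

print("worst overshoot: %.3f ms" % (worst_sleep_overshoot() * 1000))
```

On an idle machine the overshoot is small; run it alongside a busy workload and it grows noticeably, which is exactly the jitter problem described above.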

 

One solution is to use custom driver code, but this is a non-trivial exercise.

 

Another typical workaround is to offload time-sensitive I/O to external hardware such as an Arduino or (say) a hardware PWM generator for controlling servos or dimming lights more smoothly.

 

However, what about GPIO configured for input? I was curious if there was any way to improve it.

 

This short, experiment-based blog post briefly discusses what can be done, using the example of building a voltmeter with the Raspberry Pi. As we will see, a few code-level methods can greatly improve measurements based on GPIO input timing.

This is the experiment topology; it is described further below. The device on the left is a photodiode (with the usual anode and cathode connections, like any diode), and it generates a small voltage whenever light falls on it.

photo-topology.png

Here is the experiment in 30 seconds:

 

Measuring Solar Cell Voltage

The Raspberry Pi GPIO Explained Guide shows examples for configuring inputs on the Raspberry Pi and using them to connect up switches. Code examples are provided in several languages. There is also a voltmeter project in that guide, which measures the frequency of pulses from a voltage-to-frequency converter integrated circuit. The integrated circuit generates a frequency proportional to the applied voltage.

The code relies on averaging to provide a more accurate result. It functions well, with very consistent measurements within a few percent of the expected value, which is not bad for a super-simple circuit.
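To make the mechanism concrete, here is a simplified sketch of the idea (my own illustration, not the guide's actual code): poll an input, count rising edges over a fixed window, and convert the count to a frequency. On the Pi, the `read_level` callable would wrap `GPIO.input(pin)`.

```python
import time

def measure_frequency(read_level, window=0.1):
    """Count rising edges reported by read_level() over `window` seconds
    and return the estimated frequency in Hz. read_level is any callable
    returning 0 or 1; on the Pi it would wrap GPIO.input(pin)."""
    edges = 0
    last = read_level()
    deadline = time.monotonic() + window
    while time.monotonic() < deadline:
        level = read_level()
        if level and not last:  # low-to-high transition
            edges += 1
        last = level
    return edges / window
```

Averaging the result of several such calls smooths out some of the jitter the scheduler introduces, which is what the guide's code does.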

I got my breadboard ready and connected up the input to a photodiode. Photodiodes are small silicon devices that can be used as miniature solar cells! Here is one in comparison to an LED:

photodiode-annotated.jpg

 

Here is the breadboard layout:

photodiode-circuit.jpg

 

For the full circuit diagram and details, see the GPIO Explained guide. The only difference is that I’ve added the photodiode anode connection to the circuit input, with the cathode connected to 0V. You could also optionally add a small 47nF capacitor across it, in case the room lighting has any flicker. The black square-shaped photodiode can be seen plugged in on the right side of the breadboard, close to the row marked 15. A clear photodiode (rather than the visible-blocking, infrared-pass version you can see in the photo) would have been better for room-lighting measurement, but it was all I had, and it still worked.

 

I was expecting a voltage of around 250-350mV to be generated by this tiny photodiode/solar cell in normal room light (which would cause the voltmeter circuit to generate a frequency of 2.5-3.5kHz), and I wanted to measure it using the Raspberry Pi. I could then log the voltage generated during the day, see if someone switches a lamp on in the room, place a couple of the photodiodes in series, and so on, as mini science experiments. But then I got curious about how to improve the measurement itself.

 

The first addition was to write some code (which will be published in a few days, after some tidying) to plot the measured voltage every second. Basically, the code runs a small web server on the Raspberry Pi and draws a graph when anyone types its IP address into a web browser. I can view it in any web browser running on the Raspberry Pi, on my laptop, or on my mobile phone. The x-axis shows how long the experiment has been running in seconds, and the y-axis shows the measured voltage. This was the result in normal home lighting conditions:

solar-chrome.png
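My plotting code is not quite ready to publish, but the serving side boils down to something like this sketch (the names and port here are illustrative, not my final code): a tiny HTTP server that hands the collected (time, voltage) samples back as JSON, which a page of JavaScript can then graph.

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

samples = []  # (seconds_elapsed, volts) tuples appended by the measurement loop

class SampleHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Serve the collected samples as JSON for the graphing page to fetch
        body = json.dumps(samples).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, fmt, *args):
        pass  # keep the terminal quiet

def serve(port=8000):
    """Start the sample server on a background thread and return it."""
    server = HTTPServer(("127.0.0.1", port), SampleHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server
```

The measurement loop just appends to `samples`; any browser pointed at the Pi's address sees the latest data.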

 

Stressed Out

There is also a handy stress test program that can be installed (type sudo apt-get install stress), and you can use it to give the cores on the Raspberry Pi useless, time-consuming things to do. You can watch CPU consumption shoot up to close to 100% on any number of its cores, or all four cores if you wish.

 

To use it, run the following command, which stresses all four available cores for 60 seconds. Press Ctrl-C at any time to quit:

stress --cpu 4 --timeout 60

 

If you type top -d 1 in a command window, you’ll see a list of processes and their current CPU usage. Press ‘q’ to quit at any time.

top.png

 

Try running the stress program while the voltmeter is running, and the calculated voltage becomes very inaccurate, because the scheduler now needs to make CPU time available for the stress program’s tasks (processes). Here is what occurs:

solar-fluctuate-stress.png

 

Sleeping, Averaging, Voting, Scheduling!

I experimented with a few tricks to try to get more consistent measurements.

 

One trick was to force a sleep before computing the frequency. The theory was that the scheduler would be unlikely to context-switch immediately after the sleep function, provided the sleep was long enough to let some other process run. The theory failed: I didn’t get good results with this method; it seemed to have no useful effect.

 

Another trick was to keep using averaging, but to also implement a ‘voting’ method. The plan of attack was that, for slow-changing voltages, the code would calculate three sets of averaged values in quick succession and then ‘average the averages’ using two of the three averaged values, discarding the third.

 

But which two to choose? The chosen two would be the ones that are most agreeable, i.e. numerically closest to each other. I figured I was in good company, because this is approximately how the Space Shuttle computers worked. This voting method improved results somewhat!

solar-averaging-voting.png
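In code, the voting step is tiny. This is a sketch of the idea rather than my exact implementation: take three averaged readings, keep the pair that agree most closely, and average those two.

```python
from itertools import combinations

def vote(readings):
    """Given three averaged measurements, discard the outlier and return
    the mean of the two values that are numerically closest together."""
    a, b = min(combinations(readings, 2),
               key=lambda pair: abs(pair[0] - pair[1]))
    return (a + b) / 2.0

print(vote([0.305, 0.312, 0.471]))  # the 0.471 outlier is discarded
```

With a badly delayed sample in one of the three averages, the delayed value is the one most likely to disagree, so it is the one voted out.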

 

Note that mathematically it would have been very useful to plot a probability density function from multiple results to see what the distribution reveals about the scheduler's impact on the measurements, but I didn’t explore this in the very short amount of time I had. It would be an interesting experiment, though.

 

The next method was to attempt to influence the scheduler; it is possible to change the priority and scheduling algorithm on the fly in Linux. I selected a policy called SCHED_FIFO and set the priority to the highest value. Here is the C code that can perform this:

 

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sched.h>


int
main(void)
{
  struct sched_param sp;
  int ret;
  sp.sched_priority=99; // (0..99) highest priority is 99
  ret=sched_setscheduler (getpid(), SCHED_FIFO, &sp);
  if (ret!=0)
  {
    printf("Error setting scheduling policy\n");
  }
  // rest of your program
  
  return(0);
}

 

The results were much improved (see screenshot below). With great power comes great responsibility: it is strongly advised that such code run to completion quickly (or dynamically lower its priority again), and that it be simple enough that there is little risk of it hanging for a long time.

solar-sched-fifo.png
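If your measurement loop is in Python rather than C, the standard library exposes the same system call on Linux. This is a hedged sketch of the equivalent: it attempts the switch and falls back gracefully when run without root, since SCHED_FIFO normally requires elevated privileges.

```python
import os

def try_realtime(priority=99):
    """Attempt to put this process under SCHED_FIFO at `priority`
    (0..99; 99 is highest). Returns True on success, False when the
    kernel refuses, typically because we are not running as root."""
    try:
        os.sched_setscheduler(0, os.SCHED_FIFO, os.sched_param(priority))
        return True
    except PermissionError:
        return False
```

The same caution applies as with the C version: only hold a real-time priority for short, simple code paths.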

 

Summary

This is no substitute for high-end instrumentation, but if you want to use Raspberry Pi inputs for home-science-experiment-grade timing applications, consider the scheduler, averaging, and voting techniques! It was a lot of fun to measure the small solar cell voltage with such a simple circuit connected to the Pi, logging away.

foginator001jpg.jpg


Halloween has always been one of my favorite holidays, so much so that I founded a business that makes professional-grade props and controllers for haunted houses. This naturally led to Element14 asking me to create a couple of projects that are easy enough for the community to follow along with and replicate at home. For my second project, I am going to create an automated fog machine controller that utilizes a Raspberry Pi and the new Sense HAT.

The Concept


foginator-Flow-Chart.jpg


The basis of this project is to use the all-new Sense HAT from the Raspberry Pi Foundation in conjunction with the Raspberry Pi and a relay to activate a fog machine when guests trigger a motion sensor. For those of you who might not follow new Raspberry Pi products as closely as I do, the Sense HAT features an array of environment sensors that relay data back to the Raspberry Pi. To be honest, I struggled to figure out how I would tie the Sense HAT into this project, but after a few conversations with friends I think I have come up with an idea.


I am going to utilize the Sense HAT to log data about the environmental conditions on Halloween night, and use the fog machine triggers to log what I will call a “trick or treat” event. When the night is over, I will compile the data and try to determine whether swings in temperature, humidity, or air pressure correlate with a rise or fall in trick or treat events.
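As a sketch of the logging I have in mind (the names here are hypothetical, not my final code), each trigger would append one time-stamped row to a CSV file. On the Pi, the `read_env` callable would wrap the sense_hat library's `get_temperature()`, `get_humidity()`, and `get_pressure()` calls.

```python
import csv
import time

def log_event(path, read_env, event="trick_or_treat"):
    """Append one row (timestamp, event name, temperature, humidity,
    pressure) to the CSV file at `path`. read_env is a callable
    returning a (temp_c, humidity_pct, pressure_mbar) tuple."""
    with open(path, "a", newline="") as f:
        temp, humidity, pressure = read_env()
        csv.writer(f).writerow([time.time(), event, temp, humidity, pressure])
```

Compiling the data afterwards is then just a matter of reading the CSV back and comparing event counts against the sensor columns.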


senshat.jpg


I will also utilize a few more of the Raspberry Pi’s GPIO pins to trigger some special-effects lighting (NeoPixels), and will play an array of spooky ambient sounds, Halloween-themed music, and (if time allows) audio events when the fog machine is triggered. One final bonus feature will be to integrate some form of remote notification when a trick or treat event occurs.

This project will progress a little faster than my Trick or Trivia Candy Dispenser project, mostly because I am building both at the same time; I purposely kept this one a little simpler for that reason. You will also note that I have chosen to use an off-the-shelf relay module this time instead of building my own, as I will be doing in my other project. This both saves some time and illustrates that there are ready-made alternatives that are perfectly fine to use.


The Hardware



Newark Part No. | Notes | Qty | Manufacturer / Description
38Y6467 | RPi | 1 | RASPBERRY PI 2, MODEL B
38Y6470 | SD Card | 1 | RASPBERRY PI 8GB NOOBS MICRO SD CARD
44W4932 | PSU | 1 | USB PORT POWER SUPPLY 5V, 1A
06W1049 | USB Cable | 1 | USB A PLUG TO MICRO USB B PLUG
53W6285 | WiFi Dongle | 1 | USB WIFI MODULE
18J5558 | Home PIR Sensor | 1 | PIR MOTION SENSOR
40P1184 | Speaker | 1 | SPEAKER, 20 kHz, 8OHM, 4W
26Y8458 | Fog Coloring Rings | 1 | NEOPIXEL RING - 16 X WS2812
26Y8512 | Ambient LEDs | 1 | NEOPIXEL 8MM THROUGH HOLE LED
26Y8528 | Ambient LEDs | 1 | NEOPIXEL 5MM THROUGH HOLE LED
26Y8460 | Mood LEDs | 1 | NEOPIXEL DIGITAL RGB 1M 144LED BLACK
34C1092 | PSU Vreg | 1 | LM7805 LINEAR VOLTAGE REGULATOR, 5V, TO-220-3
58K3796 | PSU LED Resistor | 1 | METAL FILM RESISTOR, 1KOHM, 250mW, 1%
17F2165 | PSU Filter Cap | 1 | CERAMIC CAPACITOR 0.1UF, 50V, X7R, 20%
69K7949 | PSU Filter Cap | 1 | ELECTROLYTIC CAPACITOR 47UF, 50V, 20%
69K7907 | PSU Filter Cap | 1 | ELECTROLYTIC CAPACITOR 100UF, 50V, 20%
14N9418 | PSU LED | 1 | LED, RED, T-1 3/4 (5MM), 2.8MCD, 650NM
49Y7569 | RPi Sense HAT | 1 | Raspberry Pi Sense HAT




MCM Part No. | Notes | Qty | Manufacturer / Description
83-14732 | Relay Module | 1 | TinkerKit Relay Module
555-19400 | Fog Machine | 1 | Fog Machine Hurricane 901
28-12812 | Audio Amp | 1 | Audio Amplifier Kit 2 X 5W RMS
83-15748 | Logic Level Converter | 1 | 8 Channel Logic Level Converter
21-15178 | Project Enclosure | 1 | ABS Case Gray - 5-5/8" x 3-1/8" x 1-3/16"



As you can see, I am using a fairly high-end fog machine in this project, but any fog machine will work as long as it has a wired remote with a push button, whether it is a $19.99 200W model from a party store or a $699 machine from a professional event supply shop. Additionally, you will need to purchase some form of water-based fog juice. There are hundreds of brands out there, and all will work just fine, but some of the more professional brands like Froggy’s Fog will give better results. You could even make your own fog juice, and I will include a recipe for that in a later blog post.


The ADJ 1300w Fog Machine In Action. Video courtesy ADJ Lighting.


In addition to the parts listed above, you will need a few yards of 3-conductor wire (4-wire phone cable works well), or 100 feet or more of single-conductor wire that will need to be paired up for the NeoPixel and audio components. Finally, you will need a 3.5mm audio extension cable, and either an ethernet patch cable or a wifi router. A soldering iron will also be needed to assemble parts of the kit. Having some heat-shrink tubing, electrical tape, zip ties, and a hot glue gun on hand is advisable as well.

If you have any questions, suggestions, or comments in general, please feel free to leave them below, or send me a private message here at Element14. If anyone chooses to follow along at home and build their own Foginator 2000, please post photos, and even a blog post if you can, as I am very excited to see your work!

I will be posting an update every Friday, with the project wrapping up on October 16th. I have laid out each of the weekly milestones below.


Win this Kit and Build-A-Long


  1. Project Introduction
  2. Fog Controller Hardware and Test
  3. Environment Sensing Coding & Testing
  4. Ambient Audio Hardware and Coding
  5. Lighting Coding and Testing
  6. October 16th -  Final Assembly and Testing
  7. October 23rd - Project Wrap-up

 

#ShareTheScare this Halloween

Visit our Halloween space and find out how to win one of two Cel Robox 3D Printers. There are two ways to win:

Share your Project and Win
#ShareTheScare Competition
The Ben Heck Show Episodes

 

Trick-or-Trivia-Banner-pre-launch.jpg

 

Halloween has always been one of my favorite holidays, so much so that I founded a business that makes professional-grade props and controllers for haunted houses. This naturally led to Element14 asking me to create a couple of projects that are easy enough for the community to follow along with and replicate at home. For my first project, I am going to create a unique Halloween candy dispenser that uses a trivia game to dispense the candy.

 


The Concept


Trick_Trivia_Flow_Chart.jpg

 

The idea behind this project is to create a Halloween-themed candy dispenser that requires trick or treaters to answer a simple Halloween trivia question in order to obtain the maximum amount of candy. When a trick or treater steps in front of the candy cauldron, the Raspberry Pi will display a trivia question on the touch-screen LCD along with four answers. If the trick or treater gets the answer right, he or she will be rewarded with three pieces of candy; if the answer is wrong, they will receive only a single piece.

In addition to this functionality, the Raspberry Pi will play a “Correct” audio event when the answer is correct, and the NeoPixel LEDs will illuminate and flash a pattern. If the answer is incorrect, the Raspberry Pi will play an “Incorrect” audio event and flash a different pattern on the NeoPixel LEDs. The Raspberry Pi will also control several “ambient” LEDs around the base and in another area of the device.
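That decision logic is simple enough to pin down now. Here is a sketch of it (the file names and pattern names are placeholders for assets I have not built yet):

```python
def feedback(correct):
    """Map a quiz result to the feedback described above: which audio
    event to play, which NeoPixel pattern to flash, and how many
    pieces of candy to dispense."""
    if correct:
        return {"audio": "correct.wav", "pattern": "green_flash", "candy": 3}
    return {"audio": "incorrect.wav", "pattern": "red_flash", "candy": 1}
```

Keeping this in one function means the audio, lighting, and servo code can all consume the same result later on.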

 

foamtombstone.jpeg
DIY Foam Tombstone. Image courtesy DiyNetwork.com



At the moment, I am planning on building a custom tombstone from pink insulation foam, and either using my X-Carve CNC to carve it out or using a combination of an X-Acto knife and an old soldering iron to carve and shape the tombstone. Originally, I had planned on using a 4-foot to 5-foot Frankenstein statue, but I was unable to locate one within my budget of $100 for the physical prop.

I will be using either Drupal, or straight PHP and Python, to build the trivia interface; alternatively, I may use a combination of all three to get the job done. Basically, I need the LCD to display a question with touch buttons for four different answers. When the correct answer is chosen, the Pi will need to activate the candy dispenser and kick out the appropriate number of pieces of candy.
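Whatever front end I settle on, the underlying data can be a simple list of question records. This is a sketch, not the final schema:

```python
import random

QUESTIONS = [
    {"text": "Casper is a friendly ____!",
     "answers": ["Ghost", "Ghast", "Ghoul", "Gremlin"],
     "correct": 0},
    # more questions get appended here
]

def pick_question():
    """Return a random question record for the screen to display."""
    return random.choice(QUESTIONS)

def is_correct(question, choice_index):
    """True when the trick or treater tapped the right answer."""
    return choice_index == question["correct"]
```

Any of Drupal, PHP, or Python could render this structure; only the display layer changes.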

 

candydispenserconcept.jpg


The candy dispenser will be a simple magazine design that holds Starburst candies, which will be pushed out and dropped into a cauldron by a simple hobby servo. I will be 3D printing the candy magazine and dispenser mechanism for the sake of saving some time, but I will include the SketchUp design files so that those of you following along at home can figure out how to construct it from wood, foam core, styrene, or some other material you are familiar with.
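The servo side reduces to timing arithmetic: a hobby servo typically expects a pulse of roughly 1-2 ms repeated at 50 Hz, where the pulse width sets the arm angle. This sketch (typical values assumed; check your servo's datasheet) converts an angle into the duty-cycle percentage that software PWM libraries such as RPi.GPIO expect:

```python
def servo_duty_cycle(angle, freq=50.0, min_ms=1.0, max_ms=2.0):
    """Map an angle in degrees (0-180) to a PWM duty-cycle percentage
    for a hobby servo driven at `freq` Hz with min_ms..max_ms pulses."""
    pulse_ms = min_ms + (max_ms - min_ms) * angle / 180.0
    period_ms = 1000.0 / freq
    return 100.0 * pulse_ms / period_ms

print(servo_duty_cycle(90))  # mid position -> 7.5 (% duty at 50 Hz)
```

Sweeping the dispenser arm is then just a matter of feeding a sequence of angles to the PWM output, which I will cover in the servo update.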


The Hardware

 

Most of the electronic components for this project can be found in the Trick or Trivia kit found here at Element14, with some parts needing to be purchased from MCM Electronics.

 

 

This whole project is based around the new Raspberry Pi 7-inch DSI Touch Screen Display. This new screen is purposely designed for the Raspberry Pi and utilizes the DSI port on the Raspberry Pi itself, which frees up the HDMI port for other multimedia duties.

 

Newark.com

 

Newark Part No. | Notes | Qty | Manufacturer / Description
36K7450 | 5V coil relay | 1 | OMRON SPDT, 5 VDC, 10A Relay
38Y6467 | RPi | 1 | RASPBERRY PI 2, MODEL B
38Y6470 | SD Card | 1 | RASPBERRY PI 8GB NOOBS MICRO SD CARD
44W4932 | PSU | 1 | POWER SUPPLY 5V, 1A
06W1049 | USB Cable | 1 | USB A PLUG TO MICRO USB B PLUG
53W6285 | WiFi Dongle | 1 | ADAFRUIT USB WIFI MODULE
26Y8460 | Mood LEDs | 1 | ADAFRUIT NEOPIXEL Strip 1M 144LED
26Y8455 | Ambient LED | 1 | ADAFRUIT NEOPIXEL STICK
40P1184 | Speaker | 1 | VISATON SPEAKER, 20 kHz, 8OHM, 4W
87K7027 | 10mm LEDs | 5 | LED, RED, T-3 (10MM)
58K3827 | Resistors | 5 | METAL FILM RESISTOR, 220 OHM, 250mW, 1%
10M8464 | Flyback Diode for Relay | 1 | 1N4001 Rectifier Diode 50 V 1 A
34C1092 | PSU Vreg | 1 | 7805 LINEAR VOLTAGE REGULATOR, 5V, TO-220-3
58K3796 | PSU LED Resistor | 1 | METAL FILM RESISTOR, 1KOHM, 250mW, 1%
17F2165 | PSU Filter Cap | 1 | CERAMIC CAPACITOR 0.1UF, 50V, X7R, 20%
69K7949 | PSU Filter Cap | 1 | ELECTROLYTIC CAPACITOR 47UF, 50V, 20%
69K7907 | PSU Filter Cap | 1 | ELECTROLYTIC CAPACITOR 100UF, 50V, 20%
14N9418 | PSU LED | 1 | LED, RED, T-1 3/4 (5MM)
49Y1712 | 7-Inch Touch Screen | 1 | Raspberry Pi 7" Touch Screen Display

 

MCM Electronics

 

MCM Part No. | Notes | Qty | Manufacturer / Description
28-17452 | Servo | 1 | TowerPro SG-5 Standard Servo
28-12812 | Audio Amp | 1 | Audio Amplifier Kit 2 X 5W RMS
83-15748 | Logic Level Converter | 1 | 8 Channel Logic Level Converter
21-15178 | Project Enclosure | 1 | ABS Case Gray - 5-5/8" x 3-1/8" x 1-3/16"

 

As you can see, this project requires quite a few components, but a few of them are totally optional. If you want to save a little money, you can purchase the 30- or 60-pixel-per-meter NeoPixel strips and save a good bit. You could also forgo the NeoPixels altogether and save close to $100.

In addition to these parts that you will need to order, you will also need to pick up two 4-foot x 8-foot sheets of 1-inch-thick rigid home insulation foam from your local hardware store. If you live in a colder climate than I do, you might be able to find rigid insulation foam up to three inches thick and skip having to laminate foam together. You will also need a few yards of 3-conductor wire, or 100 feet or more of single-conductor wire that will need to be paired up for the NeoPixel and audio components. Finally, you will need a 3.5mm audio extension cable, and either an ethernet patch cable or a wifi router. A soldering iron will be needed to assemble parts of the kit, as well as to carve some of the tombstone. Other hand tools such as screwdrivers, pliers, and wire cutters are needed too.

If you have any questions, suggestions, or comments in general, please feel free to leave them below, or send me a private message here at Element14. If anyone chooses to follow along at home and build their own Trick or Trivia Candy Dispenser, please post photos, and even a blog post if you can, as I am very excited to see your work!

I will be posting an update every week with the project wrapping up on October 16th. I have taken the liberty of laying out each of the weekly milestones below.

 

Win this Kit and Build-A-Long

 

  1. Project Introduction

  2. Building The Trivia Interface

  3. Interfacing Ambient and Triggered Audio Events
  4. Building The Candy Dispenser & Servo Coding
  5. Carve Foam Tombstone
  6. October 24th -  Assembly and Testing
  7. October 28th - Project Wrap-up
