
Previous posts for this project here:

http://www.element14.com/community/community/raspberry-pi/raspberrypi_projects/blog/tags#/?tags=pieintheface

 

Here is a demonstration of my final product.  I will be making another post after Halloween to go over the final code.

 

It's outside, ready to annoy people!

IMG_0511.JPG

 

Here is the video demonstration:

To keep people up to date:

 

The animated grim has just been set up outside and it is epic!

I have been mad busy making the thing after breaking a few parts. I will be blogging about the build early next week.

 

Good luck with the projects, guys,

Mike

Foginator-Banner-007.jpg

 

Welcome back to Project: Foginator 2000! In this installment we are going to begin wrapping up the build by moving everything from the breadboard to a prototyping PCB and then placing it all into an enclosure. Then we will wrap things up with a demonstration of the final code. There will be one more installment after this one, which will include actual data from Halloween night; I will also finalize the bill of materials in that final post.

 

 

The Parts

 

The only special parts we are using tonight are a piece of prototyping board and a project enclosure. You will need something to cut holes into the project box; I used a Dremel multitool with a cutoff wheel, drill bits, and some hand files to accomplish this. You will also need a hot glue gun to secure the PCBs and the motion sensor.

 

MCM Part No.  |  Notes              |  Qty  |  Manufacturer / Description
21-16075      |  Prototyping Board  |  1    |  Circuit Board - 750 Holes
21-15178      |  Project Enclosure  |  1    |  ABS Case Gray - 5-5/8" x 3-1/8" x 1-3/16"

 

 

 

The Build

 

20151022_141353.jpg

 

To get started, let's look at how the project box is laid out internally. Ideally, a 5” thick box would be used so that the Raspberry Pi and Sense Hat could be placed in the enclosure as well. Unfortunately, I was unable to find one large enough at Newark or MCM Electronics, so we are going to use this enclosure, which is just large enough to house the rest of the electronics.

 

20151022_141435.jpg

 

The inside of the project enclosure has several standoffs on one side, and just the enclosure's screw standoffs on the other. I am going to use the clean side as the bottom of the box, since none of my boards will align with the mounting standoffs on the other side.

 

20151022_141932.jpg

 

This is the main portion of what we need to move to a prototyping PCB: the Arduino Nano, a pulldown resistor, and the LED cable connections.

 

20151022_142047.jpg

 

I am again using a Protostack protoboard, as I have several lying around. You can use any prototyping board you would like, though. I chose to only solder the pins that I was using, as well as the 5V and GND pins.

 

20151022_143613.jpg

 

Here you can see that I used jumper wires to connect the GND and 5V lines on the Arduino Nano to the GND and VCC rails on the prototyping board. That is why I love these little boards from Protostack. They are laid out like a breadboard, with five of the holes in each row connected and the power rails encircling everything.

 

20151022_144936.jpg

 

With the power connections made, I soldered in the Neopixel strip. I am not a fan of the microphone cable I used for the connection when soldering to these boards; their holes are designed for smaller through-hole pins, but with a little finesse it fits well. You will notice that I did not solder in the Neopixel ring. I think I killed it by accident, as I could not get it to light up at all.

 

20151022_190944.jpg

 

Here you can see the basic layout. Notice I notched both the Arduino protoboard and the 5V power supply board. I forgot to take pics of this process, but I used a hacksaw blade to cut them. A Dremel would work too, but it produces a lot of glass fiber dust that is very bad to breathe. The relay sits on the right, and the PIR sensor will be placed in the top cover through a hole.

 

20151022_210930.jpg

 

This enclosure needs several holes for the wiring to exit. Here you can see some of them laid out. I used a Dremel tool, a drill and drill bit, and some hand files to create and clean up these holes.

 

20151022_211726.jpg

 

Here’s the hole roughed out for the PIR sensor. It is important to get the sensor mounted flush with the top of the case because of space concerns inside.

 

20151022_211726.jpg

 

Once everything fit nice and tight, I used hot glue to secure the PIR sensor to the case. Note that I hot glued the jumper wires to the pins as well. This prevents them from pulling loose later.

 

20151022_212703.jpg

 

Here you can see everything glued into place, and the power wire run to the power supply. Not pictured are the dabs of glue that I used to hold all the wires that exit the case in place.

 

20151022_215504.jpg

 

While it is not the most discreet motion-sensing project enclosure ever made, it sure does look good once everything is closed up nice and tight.

 

20151022_215757.jpg

 

Thinking ahead in case I ever want to reprogram the Arduino, I cut a slot that allows me to plug in a USB cable. I made the slot oversized, as the USB cables I like to keep handy have a bit of a thicker molding around the tip.

 

20151022_215820.jpg

 

Unfortunately I was unable to fit the Raspberry Pi and Sense Hat inside this project enclosure as I had planned. I needed to keep the stack close to the peripheral boards, so I simply hot glued it to the back of the enclosure.

 

20151022_220103.jpg

 

And a final shot of everything connected and ready to go! Note that the Raspberry Pi has been reoriented in this image. I forgot about the audio cable, and the original way I mounted it would not allow the cable to be plugged in with the enclosure still sitting upright.

 

20151023_123342.jpg

 

Due to a lack of time, I simply chose to mount the project enclosure on top of the fog machine using some Velcro and hot glue. While this is not ideal, it does work quite well.

 

20151030_162624.jpg

 

Looking at it from the back, you can see how starved for space this project is. Note the speaker. I found that for small things, blue tape on the fog machine’s surface helps hold the hot glue better.

 

20151030_162637.jpg

 

Looking at the back you can see how I mounted the audio amplifier and the fog machine remote switch. Again, blue tape came in handy here to help the hot glue stick better.

 

20151030_162722.jpg

 

The Final Code

 

 

I have merged all of the code together and added in some print lines that help troubleshoot any issues. To record the data, you will need to sign up for an Initial State account, and then generate a new API key for your account. You can download this code from its GitHub repository, which can be found here.

 

 

__author__ = 'Charles Gantt'
# This code is part of the Foginator 2000 project developed for the Halloween15 Raspberry Pi Project event at Element14.com and can be found at http://bit.ly/foginator2000

import RPi.GPIO as GPIO
import time
import sys
from sense_hat import SenseHat
from ISStreamer.Streamer import Streamer

logger = Streamer(bucket_name="Foginator2000_Data_10/23/2015", access_key="zLahwAUqKbNKv6YvuT5JuO58EiUOavDa")

sense = SenseHat()
sense.clear()
sensing = True
fog_Armed = True

GPIO.setwarnings(False)
GPIO.setmode(GPIO.BCM)
GPIO.setup(4, GPIO.OUT)   # relay that fires the fog machine
GPIO.setup(17, GPIO.IN)   # PIR motion sensor input
GPIO.setup(21, GPIO.OUT)  # lighting trigger

O = (0, 255, 0) # Green
X = (0, 0, 0) # Black

creeper_pixels = [
    O, O, O, O, O, O, O, O,
    O, O, O, O, O, O, O, O,
    O, X, X, O, O, X, X, O,
    O, X, X, O, O, X, X, O,
    O, O, O, X, X, O, O, O,
    O, O, X, X, X, X, O, O,
    O, O, X, X, X, X, O, O,
    O, O, X, O, O, X, O, O
]

black_pixels = [
    X, X, X, X, X, X, X, X,
    X, X, X, X, X, X, X, X,
    X, X, X, X, X, X, X, X,
    X, X, X, X, X, X, X, X,
    X, X, X, X, X, X, X, X,
    X, X, X, X, X, X, X, X,
    X, X, X, X, X, X, X, X,
    X, X, X, X, X, X, X, X
]

var = 0

def is_integration():
    while sensing == True:
        print "IS Start"
        temp = sense.get_temperature()
        temp = round(temp, 1)
        logger.log("Temperature C",temp)
        print "T1"
        humidity = sense.get_humidity()
        humidity = round(humidity, 1)
        logger.log("Humidity :",humidity)
        print "H1"
        pressure = sense.get_pressure()
        pressure = round(pressure, 1)
        logger.log("Pressure:",pressure)
        print "P1"
        logger.log("Trick Or Treat Event #",var)
        print "ToT Event Logged"
        sense.set_pixels(creeper_pixels)
        time.sleep(2)
        sense.set_pixels(black_pixels)
        sense.clear()
        print "IS Done"
        break

def fire_fog():
    while fog_Armed == True:
        print "Trick or Treat Event Sensed"
        print "Lights Start"
        GPIO.output(21,True)
        time.sleep(2)
        print "Relay Triggered"
        GPIO.output(4,True)
        time.sleep(10)
        print "Relay Disabled"
        GPIO.output(4,False)
        time.sleep(20)
        print "Lights Disabled"
        GPIO.output(21,False)
        is_integration()
        print "Trick or Treat Event Finished"
        time.sleep(10)
        print "Watching For Motion"
        break

while True:
    time.sleep(3)
    if GPIO.input(17)==True:
        var = var + 1
        print "var =", var
        print "Motion Detected"
        fire_fog()
        

 

This code has been modified to run continuously while always watching for a motion trigger. To make this code run when you plug the Raspberry Pi in, you will need to set the Python script to run on startup via the crontab.
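
For example, a single line in the root crontab (added with sudo crontab -e, since the script needs GPIO access; the script name and path here are just hypothetical stand-ins for wherever you saved the final code) will launch it at boot:

@reboot python /home/pi/foginator2000.py &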

 

 

The Data

 

 

To recap, we are using the Sense Hat to record a few environmental data points, including air temperature, humidity, and air pressure. The Raspberry Pi then pushes that data to a remote server at Initial State, which processes it and displays it in nice graphs and other visualizations. I have also added a fourth metric called “Trick or Treat Event” that simply increments by one every time a motion event is detected. As you can see in the image below, everything seems to be working perfectly. You can check out the full stream here.

 

2015-10-30-02_08_07-New-notification.jpg

 

In the video below, you can see me walking into the room, the sensor tripping, and the fog firing. If I have some free time on Halloween before the Trick or Treaters arrive, I am going to add in another meter or so of Neopixel strip to increase the illumination.

 

 

That is going to wrap up this installment. I will be back in just a few days with a complete wrap up of this whole project and the results from Halloween night!

 

 

Win this Kit and Build-A-Long

 

  1. Project Introduction
  2. Fog Controller Hardware and Test
  3. Environment Sensing Coding & Testing
  4. Ambient Audio Hardware and Coding
  5. Lighting Coding and Testing
  6. Final Assembly and Testing

  I added startup for all the main pieces. On both Pis, this meant adding a line near the bottom of /etc/rc.local; the line goes just above the exit line you see at the bottom. Here is what I added:

  sh /root/start_watching.sh &

 

This starts the shell script running in the background. The shell script waits 30 seconds, then mounts the samba share of the other Pi. I start this script in the background so the main console stays ready for logins if I were to plug a keyboard and monitor in. On the Raspberry Pi 2 B, it then runs the Python script that watches for files in /public and activates the display to show virtual fog. On the Raspberry Pi B+, I start 2 Python scripts in the background. The first of those watches the PIR and notifies the 2 B when it sees movement. The second script watches for new files in /public and lights up the NeoPixels when it finds a new file. I can now power both systems up and, with no interaction, it all works fine.

So, I assembled the pieces in the top of a box and I'll leave it running overnight. I'd like to have the spooky audio, so my first task for Saturday is to see if I can make or get an audio cable. I'll add pictures of the assembled system in the next blog entry; I need another USB cable to download the pictures. This has been a fun little project so far and it is nice to see it running.
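
The attached scripts are the authoritative versions, but as a rough sketch, the B+ flavor of start_watching.sh does something along these lines (the mount options and exact paths here are my assumptions based on the share layout described in the earlier entry, not the attachment contents):

#!/bin/sh
# give the network time to come up, then mount the 2 B's samba share
sleep 30
mount -t cifs //lemonpi/public /lee/lemonpi -o guest

# watch the PIR and notify the 2 B, and watch /public to light the NeoPixels
python /root/pir-trigger.py &
python /root/turn_on.py &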

 

I am including all those files so you can look at them. The files are named a little differently in the attachments than they are on disk, so I will explain:

b2_start.sh - this is /root/start_watching.sh on the Raspberry Pi 2 B

bplus_start.sh - this is /root/start_watching.sh on the Raspberry Pi B+

pir-trigger.py - this is the python script to watch the PIR module and notify the 2B

turn_on.py - this is the python module to turn on the NeoPixel stick and where the fog machine would be activated

turn_on_fog.py - this is the script to turn on the virtual fog on the Sense Hat

With Halloween tomorrow, we bet everyone is doing the last-minute touches to their projects. Chrystal and I had so much fun doing this that we want to make it a tradition. Below is video of the dispenser and the Wishing Hell; hope you enjoy. Sorry in advance for the blurry spots and shakiness: no matter how many times I threw the camera at the wall, it didn't improve.

 

 

 

Thank you for watching,

 

Chrystal & Dale Winhold

  As I mentioned last time, I added a Raspberry Pi B+ to the project, basically to take advantage of its GPIO pins. Today, I was going to start interfacing the Arduino, which is driving the NeoPixel stick. I had read that there was a nice library for letting ArmV6 Raspberry Pis drive NeoPixels. So, I moved the NeoPixel stick over to the secondary Raspberry Pi in this project, which is a Raspberry Pi B+. I tried out the library and it works quite well so far. So, now this Pi is detecting people walking up with the PIR sensor, and driving the NeoPixel stick. I expect it will have no problem controlling the relay board; that is really simple, and timing is not crucial. I thought I might be keeping the B+ busy enough that the NeoPixels might not look good, but the Raspberry Pi is doing just fine. Here is a picture of the B+ with the PIR module and NeoPixel stick connected.

Raspberry Pi B+ with PIR and NeoPixel stick

 

The Raspberry Pi B+ is in a nice case I printed from a design on Thingiverse. I am still experimenting with how to mount the Raspberry Pi 2 B with the Sense Hat. It may just get mounted on a spare VESA plate I laser cut. The HAT gets in the way for most cases, and I can't get to the laser cutter to design a custom case at the moment. Let me give you a close-up of the breadboard showing the connections a little better.

breadboard with connections from Raspberry Pi B+, PIR module and NeoPixel stick

 

I will also add a short video showing the NeoPixel stick running from the B+. The B+ is running the PIR detection script at the same time and is commanding the Raspberry Pi 2 B to scroll fog messages across the Sense Hat.

 

  The final assembly blog entries by the project designer (Charles Gantt) have not shown up yet, so I am going to improvise. I'll probably mount things in a box and tie them down with twine. I'll show more on that next time.

by LDR (age 10) and Thermistor (age 8)



Last Halloween we saw a brilliant Dalek pumpkin in a Cambridge college quad. It had a candle in and looked sinister but funny at the same time. We decided to make one ourselves this year (Halloween and Doctor Who are big on our street).

 

For Christmas, we got our first Pi (thank you @drlucyrogers), loaded with NodeRed. We did the traffic lights and a game, so we got ambitious. We were going to make an operational NodeRed pumpkin Dalek (not fully operational - that would be terrifying).

 

Choosing the Materials

 

Eye stalk and LED: We used a clear plastic pen for the stalk, with a blue LED, and a transparent lid on the end. We used card circles for the rings.

Ears: We used two transparent, plastic measuring-cups with red LEDs for the ears.

Main body bumps: The Cambridge pumpkin Dalek used plastic shot-glasses to light up. We needed something smaller and found plastic test-tubes on the web.

Gun and sucker: Sadly, our home plunger was too big. However, the whisk was perfect. We used a black lid and metal rod instead of the plunger.

 

Carving, Drilling and Testing

Making the Lid

Lid Wiring

 

We cut off the top and scooped the seeds out. I imagine this part is similar to a real Dalek. We drilled holes for the wires to pass into the Dalek for the eye stalk and two ears. The stalk and ears push into the pumpkin flesh. Then we drilled twelve holes for the test-tubes and pushed them through. We also cut vents and an access hatch at the back.

 

Testing the LEDs

We soldered the yellow LEDs onto wires to hold them in place. They sit in the test tubes.

 

Testing the Servo

We used a separate power supply to power the servo. It took us ages to work out that we needed to join the negative terminal to the ground on the Pi! Once there, it also took us time to work out the slightly strange PWM on NodeRed - sending a "5" to the PWM output made it turn left 90 degrees - sending "15" made it turn right. Higher numbers just make the servo keep turning.
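
For anyone following along in Python instead of NodeRed, the same experiment can be sketched with RPi.GPIO's software PWM. The pin number is an assumption, and the 5/15 values simply mirror what we sent from the flow, treated as duty-cycle percentages:

import time
import RPi.GPIO as GPIO

GPIO.setmode(GPIO.BCM)
GPIO.setup(18, GPIO.OUT)      # assumed servo signal pin

pwm = GPIO.PWM(18, 50)        # 50 Hz is the usual hobby-servo frame rate
pwm.start(5)                  # "5" turned our servo 90 degrees left
time.sleep(1)
pwm.ChangeDutyCycle(15)       # "15" turned it right
time.sleep(1)
pwm.stop()
GPIO.cleanup()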

 

After much experimenting, we found some transparent plastic in the shed. We used this to fix the servo to the main body. We used wire staples to fix the servo rotor to the head.

 

Testing the PIR

We wanted to use a sensor to start the Dalek boot-up sequence automatically. The PIR worked very well connected to the same power supply as the servo (5V). We limited the number of messages to one per minute to prevent multiple signals travelling through the flow.
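
The rate limit is just a timestamp check. In Python terms, as a sketch rather than the actual NodeRed flow, it looks like this:

import time

last_trigger = 0  # zero so the very first PIR event always passes

def on_pir_motion():
    # ignore PIR messages that arrive within 60 seconds of the last one
    global last_trigger
    if time.time() - last_trigger >= 60:
        last_trigger = time.time()
        print("start the Dalek boot-up sequence")  # stand-in for the real action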

 

Writing the Flow

It took some time setting up the GPIO elements. Once these were done, LDR and Thermistor were skilled at sequencing, adding delays, and resetting at the end.

 

Sound

Sorting the sound took a bit of time. We found a kid's mini speaker with satisfyingly low sound quality. Works a treat. We downloaded various sfx mp3 files and transferred them across via the terminal to the .node-red folder on the Pi. Finally, we made a new flow element by pasting this code into node-red:

 

[{"id":"65872231.9a78dc","type":"exec","command":"mplayer -really-quiet /home/pi/.node-red/dalek.mp3","addpay":false,"append":"","useSpawn":"","name":"EXTERMINATE!","x":1316.8081817626953,"y":245.25010299682617,"z":"ebd21660.142de8","wires":[[],[],[]]}]

We repeated this for a cool sfx boot-up mp3 and a robot shut-down mp3 and added these to the flow. Sounds great.

 

Terrifying Small Children

Today is the 30th October. We will be tweeting the results tomorrow!


@BenRogersEdu

Previous posts for this project here:

http://www.element14.com/community/community/raspberry-pi/raspberrypi_projects/blog/tags#/?tags=pieintheface

 

I have the following functionality:

  • Load up images in a specific structure
  • Load up sounds in a specific structure
  • Animate eyes, with a realistic yet random blinking method
  • Animate the mouth to make it appear to be talking
  • The eye can be poked (demonstrated in the last post), which results in a specific action.

 

To Do:

  • Play sounds
  • Finish physical pumpkin
  • Make "scare" technique using PIR sensor

 

 

 

 

 

The Eye Poke

 

 

squint.png

 

The poking sequence for the left eye; the right eye is the same with the variables swapped:

def getPoking(self):
    if(self.leftPoking):
        if(self.pokeStart == 0):
            self.leftEyeBlit = display.blit(self.leftEyeBlink[-1], (self.leftEyeX, self.leftEyeY))
            self.lastPoke = time.time()
            self.pokeStart = time.time()
        else:
            if(time.time() - self.pokeStart > 10):
                self.leftPoking = 0
                self.pokeStart = 0
                self.leftEyeBlit = display.blit(self.leftEye, (self.leftEyeX, self.leftEyeY))







 

 

In this code, the eye poke is the same image as when the eye is nearly fully closed in a blink.  I will change this to blit the squint image that is given in the file structure.

 

There are a couple of variables here that come into play.

pokeStart = the time the poke sequence started, if it has started, else it is 0

lastPoke = the time the last poke occurred

 

You can see on line 8 above:

if(time.time() - self.pokeStart > 10):

 

We can only start the poke sequence every 10 seconds.  I plan to put this into a configurable variable.

 

Detecting the Eye Poke

Whenever an image is blitted to the screen, it returns a rectangle object that gives the coordinates of the rectangle the image occupies.

In my main game loop I have this code:

for event in pygame.event.get():
    if event.type == QUIT:
        pygame.quit()
        sys.exit()
    elif event.type == KEYDOWN:
        if event.key == K_ESCAPE:
            pygame.quit()
            sys.exit()
    elif event.type == pygame.MOUSEBUTTONDOWN:
        pos = pygame.mouse.get_pos()
        if(faces[0].eyes.leftEyeBlit.collidepoint(pos)):
            faces[0].eyes.leftPoking = 1
        elif(faces[0].eyes.rightEyeBlit.collidepoint(pos)):
            faces[0].eyes.rightPoking = 1





 

The event MOUSEBUTTONDOWN gives us the x,y coordinates of the button press, or in this case a screen touch. Given a rect object, there is a function called collidepoint which, given an x,y point, will return true if that coordinate is within the rectangle. If it is, we run the poke sequence for the eye that is poked.

 

 

 

Playing Sounds and the Talking Mouth

Playing sounds in pygame is fairly trivial. I will be using the Music object to play sounds. This is a multi-threaded approach, as the sound plays in the background of your code and control returns to your code after playback is started.

 

The way this will work is that I will have several sound bites, some of which play together in a specific order, the same way the images are grouped together. Once a sound file is loaded, it can be played and then continuously polled to see if it is done.

 

The sound will begin playing, and I will tell the mouth to start talking.  In the main game loop, the sound will be polled and if it is completed, the mouth will be told to stop talking.  The next sound in the sequence will then be played with a small pause between.  If the sequence is completed, a new sequence will be picked at random.
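
A minimal sketch of that polling pattern, assuming pygame is initialized and using a hypothetical file name for the first sound in a sequence:

import pygame

pygame.init()
pygame.mixer.music.load('sounds/talk/001.mp3')  # hypothetical path
pygame.mixer.music.play()                       # playback runs in the background
mouth_talking = True

while mouth_talking:
    pygame.time.wait(33)                        # poll roughly once per frame
    if not pygame.mixer.music.get_busy():       # has playback finished?
        mouth_talking = False                   # tell the mouth to stop talking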

 

There are two special sequences. One of them is the eye poke. The eye can only be poked every 10 seconds, and the sound and animation sequence will play. This will override the currently playing sequence.

 

The other special sequence is the scare sequence.  This is chosen at random in between randomly played sequences.
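
A sketch of how that random picking could work (the names here are hypothetical, not from my actual code):

import random

def pick_next_sequence(normal_sequences, scare_sequence, scare_chance=0.1):
    # occasionally slip the scare sequence in between the random ones
    if random.random() < scare_chance:
        return scare_sequence
    return random.choice(normal_sequences)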

 

 

The Talking Mouth

 

def talk(self):
    if(self.talking == 0): #determine when to talk again
        self.mouthBlit = display.blit(self.mouth, (self.mouthX, self.mouthY))
        if (time.time() - self.lastTalk >= self.talkDelay):
            print("talk!" + "(" + str(self.talkDelay) + ")")
            self.talking = 1
            self.talkIndex = 0
            self.talkDirection = 1
            self.lastTalkSwap = time.time()
            self.lastTalk = time.time()
    else: #animate talking
        if(time.time() - self.lastTalkSwap >= self.talkDelay):
            self.talkIndex = self.talkIndex + self.talkDirection
            if(self.talkIndex >= len(self.mouthTalk)):
                self.talkIndex = len(self.mouthTalk) - 1
                self.talkDirection = -1
            if(self.talkIndex == 0 and self.talkDirection == -1):
                ## print("mouth reset")
                self.talking = 0
                self.talkDirection = 1
                self.lastTalk = time.time()
            ## print("blink index:" + str(self.blinkIndex))
            self.talkBlit = display.blit(self.mouthTalk[self.talkIndex], (self.mouthX, self.mouthY))
            self.lastTalkSwap = time.time()





 

 

This is run the same way as the eyes. An array of images is sequenced in order with a small delay between the blitting. The array is played forwards, then backwards, as one complete animation sequence. Unlike the eye, however, it does not stop there. It continues to play the array back and forth until told to stop.

 

The variables in play here are:

talkDelay = The delay between talk blits. This is left small but can be configured; I empirically determined a value that looked good for my images.

talkIndex = The current index of the image being blitted.  This cycles forwards and backwards through the mouth talking image array.

talkDirection = The direction the talkIndex is moving. It is 1 when moving forwards, and -1 when moving backwards. You can see on line 16 how this is set.

lastTalkSwap = The last time the images were swapped. The time between swaps must be greater than or equal to talkDelay.

 

 

 

The complete code thus far:

 

import os
import pygame, sys, time, random
from pygame.locals import *


#a generic function for loading a set of images from a given directory
def LoadImages(imagesPath):
    imageList = list()
    for dirname, dirnames, filenames in os.walk(imagesPath):
        for filename in sorted(filenames):
            try:
                imageList.append(pygame.image.load(os.path.join(dirname, filename)))
            except:
                pass
    return imageList


#a generic function for loading a set of sounds from a given directory
def LoadSounds(soundsPath):
    soundList = list()
    for dirname, dirnames, filenames in os.walk(soundsPath):
        for filename in sorted(filenames):
            try:
                soundList.append(pygame.mixer.Sound(os.path.join(dirname, filename)))
            except:
                pass
    return soundList


#define the face and sub classes, which are just used to keep together all the images that go together
class Mouth:
    def __init__(self, path):
        self.mouthTalk = LoadImages(os.path.join(path, 'mouth/talk/'))
        print("Mouth images:" + str(len(self.mouthTalk)))
        self.mouth = pygame.image.load(os.path.join(path, 'mouth/mouth.png'))
        self.mouthX = 0
        self.marginX = 20
        self.marginY = 0
        self.mouthW = 480/2
        self.mouthH = 350
        self.mouthY = (480-350) / 2
        self.lastTalk = time.time()
        self.lastTalkSwap = time.time()
        self.talkMin = 1
        self.talkMax = 6
        self.talkDelay = .03
        self.talking = 0
        self.talkIndex = 0
        self.talkDirection = 1
        self.leftPoking = 0
        self.rightPoking = 0
        self.lastPoke = 0
        self.pokeStart = 0
        self.mouthBlit = pygame.Rect(0, 0, 1, 1)

    def talk(self):
        if(self.talking == 0): #determine when to talk again
            self.mouthBlit = display.blit(self.mouth, (self.mouthX, self.mouthY))
            if (time.time() - self.lastTalk >= self.talkDelay):
                print("talk!" + "(" + str(self.talkDelay) + ")")
                self.talking = 1
                self.talkIndex = 0
                self.talkDirection = 1
                self.lastTalkSwap = time.time()
                self.lastTalk = time.time()
        else: #animate talking
            if(time.time() - self.lastTalkSwap >= self.talkDelay):
                self.talkIndex = self.talkIndex + self.talkDirection
                if(self.talkIndex >= len(self.mouthTalk)):
                    self.talkIndex = len(self.mouthTalk) - 1
                    self.talkDirection = -1
                if(self.talkIndex == 0 and self.talkDirection == -1):
                    ## print("mouth reset")
                    self.talking = 0
                    self.talkDirection = 1
                    self.lastTalk = time.time()
                ## print("blink index:" + str(self.blinkIndex))
                self.talkBlit = display.blit(self.mouthTalk[self.talkIndex], (self.mouthX, self.mouthY))
                self.lastTalkSwap = time.time()


class Eyes:
    def __init__(self, path):
        self.leftEyeSquint = LoadImages(os.path.join(path, 'eye/left/squint/'))
        self.leftEye = pygame.image.load(os.path.join(path, 'eye/left/eye.png'))
        self.leftEyeBlink = LoadImages(os.path.join(path, 'eye/left/blink/'))
##        self.rightEyeSquint = LoadImages(os.path.join(path, 'eye/right/squint/'))
##        self.rightEyeBlink = LoadImages(os.path.join(path, 'eye/right/blink/'))
        self.rightEyeSquint = list()
        self.rightEyeBlink = list()
        self.leftEyeX = 800/2 - 480/4
        self.leftEyeY = 0
        self.marginX = 20
        self.marginY = 0
        self.leftEyeW = 480/2
        self.leftEyeH = 480/4
        self.rightEyeX = 800/2 - 480/4
        self.rightEyeY = 480/2
        self.rightEyeW = 20
        self.rightEyeH = 20
        self.lastBlink = time.time()
        self.lastBlinkSwap = time.time()
        self.blinkMin = 1
        self.blinkMax = 6
        self.blinkDelay = random.randint(self.blinkMin, self.blinkMax)
        self.blinking = 0
        self.blinkIndex = 0
        self.blinkDirection = 1
        self.leftPoking = 0
        self.rightPoking = 0
        self.lastPoke = time.time()
        self.pokeStart = 0
        self.leftEyeBlit = pygame.Rect(0, 0, 1, 1)
        self.rightEyeBlit = pygame.Rect(0, 0, 1, 1)

    def getBlink(self):
        if(self.blinking == 0): #determine when to blink again
            if(not self.leftPoking):
                self.leftEyeBlit = display.blit(self.leftEye, (self.leftEyeX, self.leftEyeY))
            if(not self.rightPoking):
                self.rightEyeBlit = display.blit(self.rightEye, (self.rightEyeX, self.rightEyeY + self.marginX))
            if (time.time() - self.lastBlink >= self.blinkDelay):
                print("Blink!" + "(" + str(self.blinkDelay) + ")")
                self.blinking = 1
                self.blinkIndex = 0
                self.blinkDirection = 1
                self.lastBlinkSwap = time.time()
                #bias the next random delay: follow a short delay with a longer one and vice versa
                if(self.blinkDelay <= 2):
                    self.blinkMin = 4
                else:
                    self.blinkMin = 1
                if(self.blinkDelay >= 4):
                    self.blinkMax = 3
                else:
                    self.blinkMax = 6
                self.blinkDelay = random.randint(self.blinkMin, self.blinkMax)
                self.lastBlink = time.time()
        else: #animate blinking
            if(time.time() - self.lastBlinkSwap >= .05):
                self.blinkIndex = self.blinkIndex + self.blinkDirection
                if(self.blinkIndex >= len(self.leftEyeBlink)):
                    self.blinkIndex = len(self.leftEyeBlink) - 1
                    self.blinkDirection = -1
                if(self.blinkIndex == 0 and self.blinkDirection == -1):
                    ## print("reset")
                    self.blinking = 0
                    self.blinkDirection = 1
                    self.lastBlink = time.time()
                ## print("blink index:" + str(self.blinkIndex))
                if(not self.leftPoking):
                    self.leftEyeBlit = display.blit(self.leftEyeBlink[self.blinkIndex], (self.leftEyeX, self.leftEyeY))
                if(not self.rightPoking):
                    self.rightEyeBlit = display.blit(self.rightEyeBlink[self.blinkIndex], (self.rightEyeX, self.rightEyeY + self.marginX))
                self.lastBlinkSwap = time.time()

    def getPoking(self):
        if(self.leftPoking):
            if(self.pokeStart == 0):
                #show the nearly-closed blink frame while the eye is poked
                self.leftEyeBlit = display.blit(self.leftEyeBlink[-1], (self.leftEyeX, self.leftEyeY))
                self.lastPoke = time.time()
                self.pokeStart = time.time()
            else:
                if(time.time() - self.pokeStart > 10):
                    self.leftPoking = 0
                    self.pokeStart = 0
                    self.leftEyeBlit = display.blit(self.leftEye, (self.leftEyeX, self.leftEyeY))

##        elif(self.rightPoking):
##            if(not self.pokeStart):
##                self.rightEyeBlit = display.blit(self.rightEyeBlink[-1], (self.rightEyeX, self.rightEyeY))
##                self.lastPoke = time.time()
##                self.pokeStart = time.time()


class Face:
    def __init__(self, path):
        #load the sound sets using the generic sound loading function
        self.talkSounds = LoadSounds(os.path.join(path, 'sounds/talk'))
        self.singSounds = LoadSounds(os.path.join(path, 'sounds/sing'))
        self.scareSounds = LoadSounds(os.path.join(path, 'sounds/scare'))

        #create the eyes and mouth classes
        self.eyes = Eyes(path)
        self.mouth = Mouth(path)

        #empirically determined coordinates for the face
        self.mouthX = 20
        self.mouthY = 150
        self.mouthW = 200
        self.mouthH = 200

    def PrintInfo(self):
        print str(len(self.eyes.leftEyeSquint)) + ' left squint images loaded'
        print str(len(self.eyes.leftEyeBlink)) + ' left blink images loaded'
        print str(len(self.eyes.rightEyeSquint)) + ' right squint images loaded'
        print str(len(self.eyes.rightEyeBlink)) + ' right blink images loaded'
        print str(len(self.mouth.mouthTalk)) + ' talk images loaded'
        print str(len(self.talkSounds)) + ' talk sounds loaded'
        print str(len(self.singSounds)) + ' sing sounds loaded'
        print str(len(self.scareSounds)) + ' scare sounds loaded'


#main code here
pygame.init()
display = pygame.display.set_mode((800, 480), pygame.FULLSCREEN | pygame.HWSURFACE | pygame.DOUBLEBUF)
pygame.display.set_caption('Funny Pumpkin')

#Create a list of face classes; this example only has 1 face, but multiple faces can be used
faces = list()
#load the default face
faces.append(Face('./faces/default/'))

#test the class
##faces[0].PrintInfo()

#scale and rotate the left eye images, building the right eye by mirroring the left
for i in range(len(faces[0].eyes.leftEyeBlink)):
    faces[0].eyes.leftEyeBlink[i] = pygame.transform.scale(faces[0].eyes.leftEyeBlink[i], (faces[0].eyes.leftEyeW - faces[0].eyes.marginX, faces[0].eyes.leftEyeH - faces[0].eyes.marginY))
    faces[0].eyes.leftEyeBlink[i] = pygame.transform.rotate(faces[0].eyes.leftEyeBlink[i], 270)
    faces[0].eyes.rightEyeBlink.append(pygame.transform.flip(faces[0].eyes.leftEyeBlink[i], False, True))
faces[0].eyes.leftEye = pygame.transform.scale(faces[0].eyes.leftEye, (faces[0].eyes.leftEyeW - faces[0].eyes.marginX, faces[0].eyes.leftEyeH - faces[0].eyes.marginY))
faces[0].eyes.leftEye = pygame.transform.rotate(faces[0].eyes.leftEye, 270)
faces[0].eyes.rightEye = pygame.transform.flip(faces[0].eyes.leftEye, False, True)

#rotate and scale the mouth images
for i in range(len(faces[0].mouth.mouthTalk)):
    faces[0].mouth.mouthTalk[i] = pygame.transform.rotate(faces[0].mouth.mouthTalk[i], 270)
    faces[0].mouth.mouthTalk[i] = pygame.transform.scale(faces[0].mouth.mouthTalk[i], (faces[0].mouth.mouthW - faces[0].mouth.marginX, faces[0].mouth.mouthH - faces[0].mouth.marginY))
faces[0].mouth.mouth = pygame.transform.rotate(faces[0].mouth.mouth, 270)
faces[0].mouth.mouth = pygame.transform.scale(faces[0].mouth.mouth, (faces[0].mouth.mouthW - faces[0].mouth.marginX, faces[0].mouth.mouthH - faces[0].mouth.marginY))

#global vars
FPS = 30

r = list()

#main game loop
i = 0
faces[0].eyes.getBlink()
while(1):
    faces[0].eyes.getBlink()
    faces[0].eyes.getPoking()
    faces[0].mouth.talk()
    for event in pygame.event.get():
        if event.type == QUIT:
            pygame.quit()
            sys.exit()
        elif event.type == KEYDOWN:
            if event.key == K_ESCAPE:
                pygame.quit()
                sys.exit()
        elif event.type == pygame.MOUSEBUTTONDOWN:
            pos = pygame.mouse.get_pos()
            if(faces[0].eyes.leftEyeBlit.collidepoint(pos)):
                faces[0].eyes.leftPoking = 1
            elif(faces[0].eyes.rightEyeBlit.collidepoint(pos)):
                faces[0].eyes.rightPoking = 1
    pygame.display.update()







severian

Foginator 2000 parts arrived

Posted by severian Oct 30, 2015

I got a box of nice parts to build my copy of the Foginator 2000

Raspberry Pi 2B, Sense Hat and other parts

  My biggest confusion was what to do about the other parts I would need. I did not know if they were arriving separately or whether I needed to order them. As time was growing short, I decided to proceed with experimenting and plan to visit Tanners Electronics in Carrollton to get more parts locally. Fortunately, Tanners is a great local resource if you are near Dallas, Texas.

 

  I started by connecting the Sense Hat to the Raspberry Pi to see how it works. I found several example programs on a site where the board was referred to by its old name (i.e., AstroPi). I tested these and generally I am pleased. The temperature reads somewhat high. That is either because it needs calibration, or because the parts around it warm it up. Some calibration seems reasonable; I have to do that with other temperature sensors I use.

 

I see one problem, and I expect I will have to modify the project a bit to get around it.  There are no GPIO pins to directly connect to on the Sense Hat.  The project author mentions getting a special pin extender, but I don't have time for that.  So, I'll just use another Raspberry Pi.  I'll do GPIO stuff, like the PIR sensor on the second unit.  The primary Pi will be the master program and use the Sense Hat.  I'll talk between them over ethernet.

Next, I went to the NeoPixel stick and a NeoPixel ring and connected them to an Arduino, like the project author does. I have used longer LED strips on a couple of projects, but never the NeoPixel stick or ring. They both work quite well, just as I expected. I can see several uses for these products on other projects. I'll need to measure power consumption on the ring for one handheld device I had prototyped with 5mm LEDs. The NeoPixel will look much better, and should fit well with the LilyPad, a GPS module, and a battery.

 

  The other Raspberry Pi I have with me tonight is running Ubuntu Mate 15.04.  I tried it with the PIR sensor and got a segmentation error.  I'll bring a Raspberry Pi B+ or 2 B with a fresh Raspbian image in tomorrow to try that again.

I need to pick up another 1000 microfarad cap so I can connect the stick and the ring.

I am going to stop here for now.  I have another point or two, but I want to see if I can edit a blog entry on this platform.

 

Note: This content appeared in my own blog area yesterday. I meant it to appear here. I could not figure out how to move it, so I recreated it.

  This is my second blog entry about this project. I must have saved the first one to the wrong area on Element14, and I'll work on moving it after I finish this entry. As I mentioned in part 1, I could not get the special pin extender in time to mount the PIR detector on the Raspberry Pi with the Sense Hat. So, I elected to add another Raspberry Pi to the project and connect all devices other than the Sense Hat to it. It is a Raspberry Pi B+, which I had as a spare.

  I connected the HC-SR501 PIR module using 3 pins. I first tried using 3.3V on the Raspberry Pi to go to the HC-SR501 module; I had read in a couple of places that the module worked better at that voltage. Well, not for me. I connected Vcc on the HC-SR501 (henceforth referred to as PIR) to 5V on the Pi, out on the PIR to GPIO7 on the Pi, and ground on the PIR to ground on the Pi. I turned the sensitivity down a bit (counter-clockwise on the left pot).

 

  I then set up samba on the Raspberry Pi 2 B. I am going to use a simple method to communicate. When motion is detected on the B+, it will create a file in the directory the 2 B is sharing. The 2 B runs a script that watches that directory; when the file is created, the fog machine gets turned on and the notification file is deleted, so we can get the next notification.

 

  Here are the two programs I used for my PIR communications. I have not figured out how to make them look right on the blog, and I apologize for that. The B+ has the PIR device and has the samba share mounted in a directory called /lee/lemonpi. The workgroup name I use for samba is lee, and the Raspberry Pi 2 B has a hostname of lemonpi. The Pi B+ runs the following Python script (pir-trigger.py):

from subprocess import call
import RPi.GPIO as GPIO                           #Import GPIO library
import time                                       #Import time library

GPIO.setmode(GPIO.BOARD)                          #Set GPIO pin numbering
pir = 7                                           #Associate pin 7 with the PIR
GPIO.setup(pir, GPIO.IN)                          #Set pin as GPIO in
print "Waiting for sensor to settle"
time.sleep(2)                                     #Waiting 2 seconds for the sensor to initiate
print "Detecting motion"

while True:
    if GPIO.input(pir):                           #Check whether pir is HIGH
        nice_time = time.strftime('%l:%M%p %Z on %b %d, %Y') # ' 1:36PM EDT on Oct 18, 2010'
        print "Motion Detected at", nice_time
        call(["touch", "/lee/lemonpi/notice"])    #Notify the 2 B via its samba share
        time.sleep(2)                             #Delay to avoid multiple detection
    time.sleep(0.1)

 

  The Raspberry Pi 2B is running samba.  It is sharing a directory called /public and the last few lines of /etc/samba/smb.conf look like this:

[public]
   comment = Lemon Pi's public area
   path = /public
   guest ok = yes
   browseable = yes
   create mask = 0775
   directory mask = 0775
   read only = no

 

  The Raspberry Pi 2 B is running the following Python script (turn_on_fog.py):

import pyinotify
from sense_hat import SenseHat
import time
from subprocess import call

sense = SenseHat()
start_time = 0     # start at 0 so first pir event will trigger

class EventHandler(pyinotify.ProcessEvent):
    def process_IN_CLOSE_WRITE(self, event):
        global start_time
        elapsed_time = time.time() - start_time
        if elapsed_time > 60:   # Only trigger once per this many seconds
            start_time = time.time()        # reset timer
            sense.show_message("Fog envelops you")  # or start fog machine here
            print "Triggered by closing file name " + event.pathname
        # if you want to see when ignored events come in, uncomment this block
        # else:
        #     print "ignoring pir notice", elapsed_time
        # if a second trigger comes in while I was busy processing the first, an
        # error message will be printed.  It may safely be ignored.
        call(["sudo", "rm", event.pathname])

def watch(filename):
    wm = pyinotify.WatchManager()
    mask = pyinotify.IN_CLOSE_WRITE
    wm.add_watch(filename, mask)

    eh = EventHandler()
    notifier = pyinotify.Notifier(wm, eh)
    notifier.loop()

if __name__ == '__main__':
    watch('/public')  # trigger on any new file in this directory

 

  The only thing you may not recognize there is import pyinotify. If you don't have that installed, you can just do a "sudo pip install pyinotify". If you don't have pip installed, just do a "sudo apt-get install python-pip".

 

This all proves my idea for using 2 Pis should work. I'll have to modify Charles Gantt's programs to incorporate similar logic.

Here is my first project using Raspberry Pi.

Electric Chair prop

 

It's an animatronic Halloween electric chair prop using Raspberry Pi GPIO.

 

Watch him in action on YouTube:

https://www.youtube.com/watch?v=x4cLGjlNryQ

 

 

Pi and Relays

 

The Pi and software (what I learned to do with the Pi on this project):

- written in Python using IDE

- opens and plays an MP3 file

- configures GPIO

- opens a .csv text file with ones and zeros and outputs it to the GPIO port at a regular interval (every 33 msec; see the sketch after this list)

- reads an input on the GPIO to trigger the prop

- auto executes the program at bootup
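
As a sketch of the .csv playback idea (not my actual code; the file name, pin numbers, and one-column-per-pin layout are assumptions):

import csv
import time
import RPi.GPIO as GPIO

PINS = [4, 17, 21]      # hypothetical output pins, one per CSV column
FRAME_TIME = 0.033      # one row every 33 msec, about 30 frames per second

GPIO.setmode(GPIO.BCM)
for pin in PINS:
    GPIO.setup(pin, GPIO.OUT)

with open('sequence.csv') as f:
    for row in csv.reader(f):
        for pin, value in zip(PINS, row):
            GPIO.output(pin, value == '1')
        time.sleep(FRAME_TIME)

GPIO.cleanup()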

 

The sequence is 1 minute long:

- waits for the switch to ground an input pin

- hits a relay to turn on a flood light and turn on a yellow beacon light on top of the electric panel

- the side lights flash in sequence with a buzzer warning sound for 10 seconds

- the electric noise starts and he starts to shake (there's a hedge shaker toy in his head)

- he starts to scream and pneumatics start jolting him forward

- the electric noise gets louder and he jerks forward and back more violently

- the strobe light starts flashing in the electric panel

- the sequence ends as a fog machine pipes fog through his head

 

Pictures of the Pi, a relay board, and a pneumatic solenoid switch:

As I read through Charles Gantt's blogs on the Foginator 2000, I saw that he used an off-the-shelf relay board to switch the fogger on and off. However, I didn't want to spend the money when I could just as easily create my own relay board. So, I sketched up the following schematic. The inputs on the left go to the Raspberry Pi board and the outputs at the top go to the fog machine's remote control.

schemeit-project-2.png

I pulled the following parts from Seeed Studio's Arduino Starter Kit to build the board.  The perf-board was purchased separately.

 

 

Part        |  Manufacturer  |  Manufacturer's Part Number
R1          |  Various       |  Various
Q1          |  On Semi       |  P2N2222A
D1          |  Diodes Inc    |  1N4001
RY1         |  Tianbo        |  HJR-4102E-L-05V
Perf-board  |  Radio Shack   |  N/A

 

I assembled the circuit on the perf-board, using the extra component lead lengths to connect the parts together. Then I added wires to connect up to the Raspberry Pi. Using Charles' Python code and a PIR sensor from Radio Shack, I was able to successfully switch the relay upon motion. Here's a close-up of the completed board and a picture of the test setup.
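
A minimal stand-in for that test (the pin numbers are assumptions, not necessarily what Charles' code uses): pulse the relay driver whenever the PIR output goes high.

import time
import RPi.GPIO as GPIO

RELAY = 4    # assumed pin wired to R1 on the driver board
PIR = 17     # assumed pin wired to the PIR sensor output

GPIO.setmode(GPIO.BCM)
GPIO.setup(RELAY, GPIO.OUT)
GPIO.setup(PIR, GPIO.IN)

while True:
    if GPIO.input(PIR):
        GPIO.output(RELAY, True)    # energize the relay coil
        time.sleep(5)
        GPIO.output(RELAY, False)
    time.sleep(0.1)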

 

2015-10-24 17.54.20.jpg2015-10-24 17.53.53.jpg

Now, I have a relay board of my own creation that I can re-use in future projects when needed!

Previous posts for this project here:

http://www.element14.com/community/community/raspberry-pi/raspberrypi_projects/blog/tags#/?tags=pieintheface

 

My current progress:

 

 

 

Eyes are blinking, the mouth is talking, and the eyes can be "poked". This post is more of a construction post; I will show my updated code and comments in my next post tomorrow.

 

One thing I am finding challenging is getting a pumpkin that is the right size for the face. Because the screen is tall but narrow, the face is out of proportion if I attempt to use the full screen, so I have opted to use half the screen. I am creating a pumpkin from scratch for this purpose, and the top portion of the screen will be covered by a hat. I am considering having a window in the hat to see the top of the screen; perhaps I can show a video there.

 

 

 

 

 

A few pics.

 

I started with half of a foam pumpkin, but the face is too round for the screen.

IMG_0486.jpg

So I used a heat gun to flatten it out.

IMG_0487.jpg

I traced the face onto a piece of paper.

IMG_0489.jpg

 

IMG_0490.jpg

Then I cut out the face.

IMG_0494.jpg

This gave me a template for where to cut the face out of the pumpkin.

IMG_0495.jpg

My version of a "hot knife" for cutting foam.

IMG_0497.jpg

Cutting out the face.

IMG_0498.jpg

Trial fitting to the Raspberry Pi Screen.

IMG_0499.jpg

Hello everyone,

Candi is almost ready to give to children. The tombstone is done, programming is 95% complete, treats are bought, and the infinity wishing well is almost complete. We have been very busy. We decided to have the children pick a ghost or ghoul to randomly decide how much candy to give; Chrystal felt that the little children would find it too difficult to have questions and not enjoy it. She added sound effects and made it fun to play. We will upload pictures and video tomorrow for everyone to see. We will include the Scratch script we programmed. The wishing well (wishing hell, as we call it) turned out way better than we ever expected. It is mind blowing; it messes with the eyes.

 

More tomorrow

Dale and Chrystal

While I am waiting for answers about starting the Python interpreter in privileged mode, mentioned in my previous blog Step by Step Build Trick or Trivia Halloween Candy Dispenser #3, I'd like to continue my journey. This is going to be a short blog about how I work on the LED blink.

 

I wrote a small piece of code, LED_Blink_Test.py, which blinks the LEDs (toggling red/green LEDs every second) until I hit the enter key. It works as expected.


import RPi.GPIO as GPIO
import time
import sys
import select

GPIO.setwarnings(False)
GPIO.setmode(GPIO.BCM)
GPIO.setup(26, GPIO.OUT)
GPIO.setup(19, GPIO.OUT)

state = True

def toggle_leds():
    global state
    if state:
        GPIO.output(26, True)
        GPIO.output(19, False)
        state = False
    else:
        GPIO.output(26, False)
        GPIO.output(19, True)
        state = True

# endless loop until enter key is stroked, green & red LEDs alternately on/off for 1 second
while True:
    while sys.stdin in select.select([sys.stdin], [], [], 0)[0]:
        line = sys.stdin.readline()
        if line:
            GPIO.cleanup()
            exit(0)
    else:
        toggle_leds()
        time.sleep(1)


 

I used the same pins to drive the LEDs as Charles Gantt used in his blog, but I didn't directly connect the LEDs to those pins. The reason is that each pin would source more than 25mA if the LEDs were driven directly, and I am not comfortable pulling such a big current from an I/O pin unless I see it specified in the datasheet. Some kind of current limit is required. I don't have appropriate resistors to limit the current to 5 to 10mA per pin; however, the kit includes a few diodes, so I put two diodes in series to limit the current to about 3mA (the two diode drops leave only a small voltage across the LED, which keeps the current low). The LED isn't super bright, but it is definitely visible when it lights up.

 

IMG_1539.JPG

 

Make sure you run Python with privileges: sudo python LED_Blink_Test.py. Otherwise, you will have a run-time problem.

Screen Shot 2015-10-25 at 11.49.19 AM.png

To check the GUI interface, I had to comment out all GPIO-related statements. Then run python TrickorTriviaQuiz.py and the GUI shows up like this:


Screen Shot 2015-10-25 at 12.39.36 PM.png

Stay tuned for the next blog.

Following my previous blog Step by Step Build Trick or Trivia Halloween Candy Dispenser #2, I will start with the Python GUI in this blog.

 

I got the LCD display working and set up SSH to my Pi 2 in the last two blogs, so I decided to try out Python GUI code in the SSH session. However, it didn't work. I thought I could create a widget on the LCD by typing the Python code root=Tk() in the SSH session. Obviously, I was wrong.


pi@candydispenser1 ~ $ 
pi@candydispenser1 ~ $ python
Python 2.7.3 (default, Mar 18 2014, 05:13:23) 
[GCC 4.6.3] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> from Tkinter import *
>>> root=Tk()
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/lib/python2.7/lib-tk/Tkinter.py", line 1712, in __init__
    self.tk = _tkinter.create(screenName, baseName, className, interactive, wantobjects, useTk, sync, use)
_tkinter.TclError: no display name and no $DISPLAY environment variable
>>> 











It seems I have to use the GUI desktop to get the widget to show up. However, I don't have a keyboard to connect to my Pi, so I decided to use a remote desktop. I used VNC. On the Pi 2, I installed the TightVNC server, and on my Mac computer, I installed VNC Viewer. As described in https://www.raspberrypi.org/documentation/remote-access/vnc/, I used the following command to install the VNC server on the Pi 2:

sudo apt-get install tightvncserver









However, I didn't install xtightvncviewer because it isn't free. Instead, I installed VNC Viewer for Mac (https://www.realvnc.com/download/viewer/). Now I am ready to use Tkinter for Python GUI programming.

Tkinter_widget

Next, I will check out the control of the Pi's GPIO pins. I typed the following statements in Python:

import RPi.GPIO as GPIO
GPIO.setwarnings(False)
GPIO.setmode(GPIO.BCM)
GPIO.setup(26, GPIO.OUT)






and it got a runtime error indicating that root access is needed. Then I restarted the Python interpreter with root privilege, sudo python, and it worked. No runtime error anymore.

Screen Shot 2015-10-24 at 10.00.56 PM.png

Now, I am ready to follow Charles_Gantt's blog Trick or Trivia Halloween Candy Dispenser #002 - Building The Trivia Interface.


I created the file containing the whole program (Charles Gantt's code in the Put it all together section of his blog #2 mentioned above), then ran the program as shown below. Unfortunately, it gave me errors whether or not I ran the Python interpreter in privileged mode. I posted my question in the comments of Charles Gantt's blog, and hopefully he can answer my questions soon.


I will continue this series of blogs; stay tuned for the next one: Step by Step Build Trick or Trivia Halloween Candy Dispenser #4 - LED blink test.


The Halloween projects from CyborgDistro are under way. This should show you some of the awesome Multi-Robot Pi-Borg costumes you can make with Raspberry Pi and a couple of inexpensive robot arms!


We know you've always wanted to become a Pi-Borg -- now is your chance to upgrade your hardware/software stack directly to #CyborgDistro!

 

(1) A fully-fledged Autonomous Cyborg Backpack!

 

11149104_10103331559953975_8929134801135954655_n.jpg 10492613_10102589654613225_8949492634966480232_n.jpg

 

 

https://tandonp.wordpress.com/autonomous-cyborg-backpack/

System Specs:

  • 2 Dagu 6DOF Arms
  • SSC32 Servo Controller w/ custom enclosure
  • Raspberry Pi B w/ portable USB battery and Adafruit Case
  • 2 Ultrasonic sensors on the sides of the backpack serve to detect obstacles and your phone beeps if you get close to something. This helps you protect the extra arms from damage.
  • 2 web cams allow you to see behind you.
  • Tekkeon External Battery for powering the servo controller and rasp pi
  • Currently controlled with an Android smartphone app!

 

(2) The Wearable Multiclaw

 

img_20150430_104327.jpg?w=300&h=225 img_20150430_105433.jpg

 



https://tandonp.wordpress.com/wearable-multiclaw/

System Specs:

  • The grippers are $5 grippers from SparkFun. Never before has having lots of grippers ever been so affordable (which is subsequently spawning this multi-robot cyborg revolution).
  • The proto-electronics used are the Pololu Micro Maestro Controller ($20), the Raspberry Pi ($35), along with some batteries.
  • The grippers are mounted on inexpensive ($5) wristbands.
  • The Python software runs on the Raspberry Pi and currently allows control with Android devices (though we’d like to add additional sensors and build some autonomy).


Upcoming Enhancements

 

Hopefully in time for this Halloween, I just received a new shipment of EMG sensors from Advancer Technologies: MyoWare Muscle Sensor - RobotShop


I'm excited to incorporate the EMG sensors into the Wearable Multiclaw (and potentially Autonomous Cyborg Backpack) so you can control the extra arms with muscle flexes and muscle potentials!

All code for the projects is a mixture of Python, Java, and C. You can download code for the projects on github: https://github.com/prateekt/CyborgDistro

 

Thank you to Element 14!

 

Thanks to Element 14 for the box of goodies! We look forward to adding some of the components to the Cyborgs, especially the Sense Hat for the Raspberry Pi -- the hardware looks awesome. Can't wait to hack some more.

 

Happy Halloween from CyborgDistro!

cyborgdistro.com

 

pumpkinpi2015

jointhecyborgs

Quick update. The extra parts came in, but when I attempted to replace the GPIO header with no pins sticking out with one with extended GPIO pins, I had problems actually getting the new header to slip on. I will attach a picture at the bottom of the post. The issue seems to be that each Sense Hat pin has a gold casing big enough that the new header doesn't want to slip on. I am not sure whether I should strip the Sense Hat of those gold casings per pin or not. Anyone else with input, please PM or share here. General Google Fu really doesn't say much about the Sense Hat and pin issue other than buying a new header with extensions.

 

In addition, running the instructions to update the default RPi load that came with the card seems to be good to go. But every time I try to use Python to actually access the Sense Hat, I find more items needing updates. I think the Pillow update finally stopped those errors, but now it seems unable to find the Sense Hat. This may still be due to libraries needing updates or similar. I will share the specific error later. Sadly, one night I just kept updating and updating and not taking notes, so I don't have specifics. I have thought about just going back to step one and rewriting the image to default, then trying to run the updates again. :-)

 

SenseHat Header Pins.

WP_20151023_002.jpg

 

First off, let me say thanks for being part of this project.

A couple of weeks ago I started assembling my project. The LCD display will come later.

I managed to destroy a servo by playing with it. Also, I'll simplify the box by using an RGB LED.

 

When I first booted into the latest version of Raspbian, I ran the configuration menu. I rebooted, and then ran:

 

sudo apt-get update && sudo apt-get upgrade -y && sudo apt-get dist-upgrade -y

 

I must say, I’m programming at work, so I’m using PuTTY to connect via SSH to the Raspberry Pi.

The Pi’s IP address can be found by running:

 

ifconfig

ScreenHunter_520 Oct. 23 10.26.jpg

The IP address is listed under inet addr: and might look like 192.168.1.x.

 

Then I imported the project folder by running:

 

git clone https://github.com/CharlesJGantt/Trick-or-Trivia.git

 

 

To follow: modify the code, add more questions and so on.

While I'm waiting for some parts to arrive, I decided to build the amplifier board that was supplied with the parts kit.  Here's what came in the kit:

 

2015-10-22 20.01.18.jpg

Step 1 of the build was to install the resistor:

 

2015-10-22 20.05.45.jpg

Step 2 was to install the diode:

 

2015-10-22 20.08.32.jpg

Step 3 was to install the slider switch:

 

2015-10-22 20.10.01.jpg

Step 4 was to install the ceramic capacitors:

 

2015-10-22 20.16.38.jpg

Step 5 was to install the speaker and power connectors:

 

2015-10-22 20.21.11.jpg

Step 6 was to install the audio connector:

 

2015-10-22 20.22.53.jpg

Step 7 was to install the 100uF electrolytic cap:

 

2015-10-22 20.25.12.jpg

Step 8 was to install the LED:

 

2015-10-22 20.31.32.jpg

Step 9 was to install the integrated circuit:

 

2015-10-22 20.36.25.jpg

Step 10 was to install the 1000uF electrolytic caps:

 

2015-10-22 20.41.12.jpg

Step 11 was to install the potentiometer:

 

2015-10-22 20.43.25.jpg

Here's the finished board:

 

2015-10-22 20.43.59.jpg

And here's the tools that I used to build the board:

 

2015-10-22 20.45.44.jpg

 

Thanks to Element 14 for the chance to participate in this build-a-long!

Trick-or-Trivia-Banner005.jpg

 

Welcome back to the Trick or Trivia blog. In this installment, I am going to lightly document the process I used to build the faux tombstone that will hold the Raspberry Pi 7-inch touchscreen. My original plan was to buy a 5-foot tall Frankenstein statue, or some other tall Halloween figure. Unfortunately, I was unable to find anything locally that fit within my budget of $100 for this segment. After talking to a few friends, and watching several videos on YouTube, I decided to just build my tombstone from 1-inch thick construction foam insulation.

 

During the planning of this project, I realized I needed to make the tombstone thicker than 1 inch, as the foam is quite weak. This led me to search for 2-inch thick foam, and while it exists, it seems not to be sold anywhere in South Carolina. In the end, I wish I had been able to find the 2-inch thick foam, as gluing the sheets together took a lot longer than I anticipated. However, the end result was still quite amazing, and I find myself still wondering how I managed to pull off such a realistic-looking tombstone.

 

 

The Hardware and Tools Needed

 

20151015_134523_HDR.jpg

 

Below you will see a list of the hardware and tools used to build the tombstone that will hold our 7-inch touchscreen. All of these tools and materials can be purchased at your local home improvement store, and most can even be found online. The only thing that is brand specific is the Glidden Gripper primer that is used as a glue. This primer / sealer is what many foam tombstone builders use to glue their models together, as it is cheap, dries without exposure to air, and carves easily.

 

    • Glidden Gripper Primer / Sealer (used as glue to hold the foam together)
    • Stone Grey Latex Paint
    • Black Latex Paint
    • 2oz Acrylic Craft Paint, White
    • 2oz Acrylic Craft Paint, OD Green
    • 2oz Acrylic Craft Paint, Desert Tan
    • Acrylic Caulking
    • Great Stuff Foam In A Can
    • 1-inch Closed Cell Insulation Foam Sheet, 4’x8’
    • X-Acto / Hobby Knife
    • Jigsaw with 2” 32TPI blade or 32TPI Hacksaw blade
    • 2x 2-inch Chip Brushes
    • Old Soldering Iron or Wood Burning Iron Kit
    • Popsicle Sticks or Wooden BBQ Skewers
    • 5-inch Prop Skull
    • 1-meter Straight-edge
    • Curve Ruler
    • Compass / Circle Ruler
    • Fine Point Sharpie
    • Acetone / Acetone-based Fingernail Polish Remover
    • Hot Glue Gun
    • Stanley Surform Foam Shaping Rasp

 

 

The Design

 

ToTRender.jpg

 

I chose to go with a mix between a traditional tombstone and something you might find in a classic B-grade horror flick. After quickly scratching out a general outline on a piece of paper, I sat down in Sketchup and modeled what the tombstone would look like. As you can see, it borrows from traditional, gothic, and horror-movie tombstone designs. The overall height is about 5.5 feet, which places the touchscreen at a height that most children can easily reach.

 

ToTlayout.jpg

 

This design fits entirely on one 4’x8’x1” sheet of pink / green / blue insulation foam from any big-box home improvement store. I do suggest gluing the two main tombstone pieces together beforehand and cutting them out once the glue is dry.

 

ToTDem.jpg

 

You can download the Sketchup design file for this project which includes 3D models of the finished tombstone, layouts with dimensions, etc, from here. Use this file to get the dimensions you will need for each piece. I simply printed out each design on a normal sheet of paper, and used them to layout each part onto the foam.

 

 

The Build

 

 

IMG_9829.jpg

 

I like to start all of my projects by laying out any tools, components and materials neatly so that I can quickly and easily access them when needed.

 

IMG_9831.jpg

 

With all of the tools and materials laid out, I decided that the easiest thing to do would be to cut the large foam sheet in half, and then halve one of the halves again. This would give me two 2-foot by 4-foot pieces, which I could then glue together and set aside while the glue dried.

 

IMG_9836.jpg

 

I do not have a photo of the actual glue application process as it was fairly warm outside, and the Glidden Gripper was drying almost instantly. The basic method I used is exactly the same as you would use when painting a wall. Use a small roller brush to apply a very liberal coat of the Gripper onto one side of one of the sheets of foam. Then place the other sheet on top of the freshly “painted” surface. Align things so that at least two edges align at a right angle.

 

IMG_9837.jpg

 

I knew I would need a way to index the two sheets of foam together, and after searching for toothpicks for about half an hour, I found some craft sticks that would work just fine if I cut a point onto them with my hobby knife. Place one of these sticks in each corner of the foam sheets, and push down until you are sure that both layers have been penetrated.

 

Now set the laminated foam sheets to the side, placing heavy objects on top of them. This will help apply enough pressure so that the sheets get a good bond when the Glidden Gripper dries. I used two drink coolers filled with water, which applied about 200lbs of pressure.

 

20151015_140540_HDR.jpg

 

With the two large pieces drying, let’s move on to cutting out the base of the tombstone. We will need to glue it together as well. As I mentioned earlier, I printed out the design files for each element of this build, and I used a yardstick to transfer the dimensions over.

 

IMG_9843.jpg

 

It is important to remember to pull your measurements from one of the factory-square edges. This will ensure that everything aligns nice and square in the end.

 

IMG_9842.jpg

 

Here you can see one of the major issues with construction insulation foam. It’s built on a tongue and groove design that greatly speeds up installation, and improves its efficiency. This groove is problematic if you plan on using the full 4-foot dimensions of the sheet though. I simply chose to place this piece facing the back so that it is not seen.

 

20151015_155821_HDR.jpg

 

With all of the pieces cut out, I dry-stacked them to test for proper fit. As you can see, the foam warped a little, but since this is supposed to be a 100+ year old tombstone, I am ok with the less-than-perfect look. Before I glue things up though, I need to rough the edges up a little so that they look like they have been exposed to the elements for the last century.

 

20151015_161632_HDR.jpg

 

To rough up the edges I used the Stanley Surform Foam Rasp. I practiced on a scrap piece of foam and found a stroke that would shave the foam rather than rip it. After about 10 minutes I was quite pleased with the results.

 

20151015_162340_HDR.jpg

 

Now it’s time to glue the base layers together. Again using a liberal coating of Glidden Gripper, I  coated each layer, then stacked them together using sharpened popsicle sticks to hold the alignment. Just like the two larger pieces, place this to the side and stack something heavy on top to ensure a proper bond while the glue dries.

 

20151015_165649_HDR.jpg

 

With the base out of the way, it’s time to get to work on the cross that will adorn the top of the tombstone. I began transferring the design over, and used a curve ruler to get the clean lines the base of the cross calls for.

 

20151015_171943_HDR.jpg

 

As you can see, I goofed my layout a little, but caught the mistake before cutting anything out. Remember to measure twice, and cut once! I used the blue ruler off to the right to draw the circle. I bought it on Amazon years ago, and this was the first time I ever used it. I like it because it lets you quickly draw a circle of any size up to 12” in diameter.

 

20151015_175600_HDR.jpg

 

I used a hacksaw blade and hobby knife to cut the cross out. In hindsight I should have used my jigsaw as it would have turned this 25-minute task into a three-minute job. Here you can see that I have already roughed the edges with the rasp, and shaped the cross a little. Now it’s time to add some faux cracks and surface blemishes.

 

20151015_181411_HDR.jpg

 

Using a wood-burning tool, and the rasp, I added several cracks and surface blemishes to the cross. This was actually pretty fun, and I was able to really add some fine detail with the conical tip on the wood-burning tool. The surface blemishes were created by pressing the rasp into the surface and twisting it from side to side while pushing up or down at the same time.

 

20151015_181959_HDR.jpg

 

I started painting the cross with the stone grey paint. The surface blemishes along with the cracks proved to make this process a little more difficult as the paint had to be “pushed” into the small crevices. I found that the best method was to sort of stab the paintbrush into the cracks, and “wiggle” it on the surface blemishes.

 

20151015_183747_HDR.jpg

 

With the cross painted, I set it aside and let it dry for about 12 hours before applying a second coat. One thing to be aware of is that latex paint will not dry if the humidity is too high, and if any dew falls on the paint before it dries, the paint will stay wet.

 

I let the cross, the base, and the main tombstone body dry overnight, and well into the next day. I would estimate that things dried for about 14 hours, and unfortunately the Glidden Gripper had still not fully dried by the time I got around to laying out the tombstone body. In hindsight, I would much prefer using something like a non-solvent-based contact cement to glue the two sheets together. Glidden Gripper is pretty common for gluing foam, but in general you should allow about 48-72 hours for it to fully cure.

 

I was highly frustrated at this point and I forgot to take photos of the tombstone’s layout on the uncut laminated foam sheet. I used the same method to lay it out as I did everything else. I also cut it out using a jigsaw this time as two sheets proved to be a little too difficult to cut with a hacksaw blade by hand.

 

20151016_184737_HDR.jpg

 

I then lined the edges with blue painters tape, masking off between 1.5 and 2 inches. This was part of a failed experiment to use acetone to melt a significant portion of the surface, which would create a relief-cut look. As I later found out, new insulation foam like this is coated in a solvent-resistant film that prevents things like construction adhesives, spray paint, and other solvent-laced products from eating it away. The big blank spot at the top is where the LCD will mount.

 

20151016_193110_HDR.jpg

 

Again being frustrated, I failed to take photos of the next process. I decided to use my plunge router to remove the first ¼ inch of the surface where I wanted the acetone to etch away. Unfortunately, even pure acetone had a hard time etching the majority of the surface, but I did notice that large puddles would eat away portions of the foam, leaving these cool craters behind. So with this newfound knowledge I dripped puddles of acetone onto the surface and used the rasp to speed up the chemical reaction by etching where I wanted the craters. As you can see, it gave the tombstone a really cool, aged look. I did make sure to hose the tombstone down afterward to help nullify any remaining acetone residue that might have been hiding.

 

20151017_132633_HDR.jpg

 

The next morning I set up the wood burning tool again and began creating more faux cracks into the surface of the tombstone. I also took the tool and used it to define the line between the letters, edges, and LCD mounting spot. This made the inside portion really stand out. While it’s not pictured here, I also used the rasp to create more surface blemishes to tie in the tombstone body to the cross.

 

Before I get to the next part, I want to show you how I prepped the small 5-inch prop skull to be mounted onto the tombstone.

 

20151012_142410_HDR.jpg

 

The skull was purchased at Target for about $3, and was the perfect size for a tombstone of this size. I wanted a foam skull, but unfortunately almost every skull you find in the USA is blow-molded.

 

20151012_142824_HDR.jpg

 

Since I needed a way to firmly attach the skull to the tombstone, I decided to fill it with Great Stuff foam in a can. To allow the foam to expand (it expands by 3-4 times the volume used) I cut the back of the skull off, and then created a 1.5” dam with masking tape. This would ensure that the foam rose high enough above the cut line that I could get a good flat cut later on.

 

20151012_142836_HDR.jpg

 

Since the skull’s jaw is movable, I taped it shut in case the Great Stuff foam glued it into place.

 

20151012_143013_HDR.jpg

 

I then taped the skull to the railing on my home’s back deck. This allowed me to use both hands when filling it with foam. In hindsight, I should have wrapped this rail in plastic from a trash bag or something. I got lucky and no foam dripped off, but it could have turned into a disaster. Great Stuff literally sticks to anything and everything, and is almost impossible to remove.

 

20151013_173651_HDR.jpg

 

Here you can see the foam expanded. I only filled the skull about half way with the wet foam, and once dry, it was significantly larger in volume. I mistakenly thought that it was fully cured here, and cut the top off at the tape line.

 

20151013_173814_HDR.jpg

 

What I did not know was that the foam inside that had not been exposed to air was still liquid. After I took this photo I laid the skull in a box, and to my surprise it had expanded more overnight and the foam had squirted out of any crack it could find. This process repeated itself three times. I later found out that I could have layered in wet paper towel strips every inch or so of foam. This would allow moisture to wick in and help cure the foam faster.

 

2015-10-21-19_38_35-New-notification.jpg

 

With the foam skull finally cured, I traced its outline onto the tombstone, and used my router to “hog out” the material by about ¾-inch deep.

 

2015-10-21-19_39_10-New-notification.jpg

 

Then I used a low-temp hot glue gun to secure the skull to the tombstone. I used this method because when I tried a few other glues, they did not seem to stick well to the Great Stuff foam as it was very porous. Hot glue worked great, and dried almost instantly.

 

2015-10-21-19_39_51-New-notification.jpg

 

With the skull recessed about ¾-inch into the surface of the tombstone, I used a latex / silicone blend caulking to seal the edges and give it a nice transition to make it appear as if it is part of the stone.

 

2015-10-21-19_40_38-New-notification.jpg

 

With the skull in place, I could finally begin painting the whole tombstone. Just like the cross, painting in the cracks and surface blemishes proved to be a tough task. I spent two hours making sure that everything was properly coated, and that no pink from the foam was showing.

 

20151017_154342_HDR.jpg

 

I apologize for not getting any good shots of the painting process, but it was getting late and I was in a hurry to beat the fast-setting sun. The image above was taken after two coats had been applied, and let dry for about 24 hours.

 

20151015_184813_HDR.jpg

 

Now it’s time to paint the base. Much like the tombstone and cross, getting paint into this rough surface was challenging, but after about an hour and two coats, I managed to get everything covered.

 

20151015_185617_HDR.jpg

 

One important thing to mention is that when painting rough surfaces like this, it is paramount that you rotate the piece and check it from every angle. Even after two coats, I still found a few tiny pink spots that I missed.

 

20151018_144133.jpg

 

Once again, I do not have any photos of the finishing process. I took about 30 photos of the dry and wet brushing techniques my girlfriend and I used to detail the tombstone, but for some reason I lost all 30 of them, along with four videos I had recorded. There are dozens of videos on YouTube and thousands of tutorials on the web that detail these techniques though, so if you are interested in these processes, search for “dry brush technique” and “wet brushing” or “paint washing technique” on YouTube.

 

Building this tombstone took way more time than I thought it would. All in all, I think I have about 14 hours into its design and construction, and about $140 in materials, tools, and other things I bought for it. While my budget was only $100, I feel that $140 is a fair number since I got some tools out of it, and enough paint to do another one or two of these. The biggest thing is the time it took me to build it. Including the waiting times for things like glue and paint to dry, the project took about 4.5 days to complete, which put me way behind on the posting schedule.

 

That is going to wrap this installment of Project: Trick or Trivia. Check back in a few days for the next installment, where we finally mate the screen to the tombstone, and permanently mount the candy dispenser mechanism. Until then, remember to Hack The World, and Make Awesome!

 

Win this Kit and Build-A-Long

 

  1. Project Introduction

  2. Building The Trivia Interface

  3. Interfacing Ambient and Triggered Audio Events
  4. Building The Candy Dispenser & Servo Coding
  5. Carve Foam Tombstone
  6. October 24th -  Assembly and Testing
  7. October 28th - Project Wrap-up

A very busy October so far, and finding out that my project kit has been misplaced by UPS means I have less than 10 days until Halloween without a lot of equipment at my disposal! So I have come up with an achievable plan I can work on over the weekend without needing much equipment, hence minimalist!

 

I have a number of gadgets around the house which I will try to integrate in this project, particularly the following:

  • Foscam CCTV to act as a motion detector (using Motion as a software motion detector)
  • Philips Hue system with bulbs in multiple rooms of the house to be triggered on motion (a rough sketch of this follows below)
  • Play haunting music on an existing Raspberry Pi audio player based on a HiFiBerry Amp and running Volumio OS
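To give a flavour of the Hue piece: Motion can run a script when an event starts, and that script could flash the bulbs using the phue Python library. This is only an outline of the idea, not tested project code; the bridge IP and light name below are made-up placeholders.

# Hypothetical Motion "on_event_start" hook: flash the hallway Hue bulbs
# red when the Foscam + Motion combination detects movement.
# Assumes the phue library and a bridge that has already been paired;
# the bridge IP and light name are placeholders.
import time
from phue import Bridge

bridge = Bridge('192.168.1.50')   # placeholder bridge IP
bridge.connect()                  # press the bridge's link button on first run

for _ in range(5):                # five spooky red flashes
    bridge.set_light('Hallway', {'on': True, 'bri': 254, 'hue': 0, 'sat': 254})
    time.sleep(0.5)
    bridge.set_light('Hallway', 'on', False)
    time.sleep(0.5)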

 

 

Depending on progress with the project, I will try to add a proximity-sensor-based interactive module on the door to make it more interesting for the kids!

 

Short and sweet but I hope some of the basic stuff will be interesting to the readers!

So we (Chrystal and I) received Candi's remains yesterday, a very happy yet sad occasion. Attached are lots of photos of what we received. Also attached is a chart of what we will make for the Halloween candy dispenser.

 

Below is what has to be modified to allow Scratch to use the GPIOs. This modification is a lot easier than the first time I did it. The first time (2 years ago) took about 4 hours of changes; now it is very easy.

 

 

Open up an LX Terminal window and download the installer.

 

sudo wget https://dl.dropbox.com/s/gery97qildl7ozd/install_scratch_gpio.sh -O /boot/install_scratch_gpio.sh

sudo /boot/install_scratch_gpio.sh

 

 

The full kit received:

 

Kit.JPG

Touch Screen:

LED.JPG

Raspberry Pi2:

Rpi 2.JPG

Amplifier:

Amp.JPG

LEDs, WiFi and power cord:

LEDs.JPG

Servos, LEDs (from NeoPixel) and channel level shifter:

servos.JPG

SD card and power:

Power.JPG

Connecting the touch screen:

Connect.JPG

Up and running with the modified Scratch language:

Running.JPG

 

 

 

Plan.png

 

 

I want to give a special thank you to Peter Oakes for the idea of the infinity well (Peter did an Infinity Mirror, which is what gave me this idea). Check it out in his blogs; it's very cool.

 

 

There are a couple of small issues (not complaining, just noting). There were absolutely no instructions on how to hook up the touch screen. It wasn't an issue for me, but for others it might be. The 4GB SD card didn't fit into the micro SD slot; I tried for hours and finally gave up and grabbed a micro SD card (kidding). The power adapter is 5V 1A, but with the touch screen the Pi requires 5V 1.6A. None of what is mentioned was an issue since, like most of us, we have something laying around that works.

 

More to blog in the next few days as Candi comes to life!!

 

We (Chrystal and I) are so thankful to all the sponsors, Element14, element14dave (for all the hard work he does for us) and the community!!

 

Hope everyone is enjoying this as much as we are!!

 

Following the previous blog Step by Step Build Trick or Trivia Halloween Candy Dispenser #1, I will start by configuring WiFi. I used a couple of links to help me set it up: https://www.raspberrypi.org/documentation/configuration/wireless/wireless-cli.md and How-To: Add WiFi to the Raspberry Pi | Raspberry Pi HQ. Basically, you need to add a new network block to the /etc/wpa_supplicant/wpa_supplicant.conf file like this


network={
    ssid="your router ssid"
    psk="wifi password"
}

 

After configuring WiFi, I rebooted the Pi 2 and unplugged the Ethernet cable. I opened my router's web page and found the new IP address (e.g., 192.168.1.112) assigned to the Pi's WiFi interface. Then I SSHed to the Pi 2 using the command

ssh pi@192.168.1.112

 

And the Pi 2 accepted the connection as expected


$ ssh pi@192.168.1.112
pi@192.168.1.112's password: 
Linux raspberrypi 3.18.11-v7+ #781 SMP PREEMPT Tue Apr 21 18:07:59 BST 2015 armv7l

The programs included with the Debian GNU/Linux system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.

Debian GNU/Linux comes with ABSOLUTELY NO WARRANTY, to the extent
permitted by applicable law.
Last login: Sun Oct 18 13:37:31 2015 from lianhongs-air

NOTICE: the software on this Raspberry Pi has not been fully configured. Please run 'sudo raspi-config'

 

and the Pi 2 showed no abnormal behavior

Pi 2 Wifi connection

 

After you have the network connection, you need to make sure you have the latest Raspbian OS installed (you can download it from https://www.raspberrypi.org/downloads/raspbian/). Otherwise, you need to run the following update and upgrade commands:

$sudo apt-get update
$sudo apt-get upgrade -y

Now it's time to mount the Pi 2 on the LCD panel. The picture below shows the parts to mount together.

LCD_mount


If you haven't used an LCD flexi cable before, please note that the latch on the connector is usually closed when it arrives. You need to open it before you insert the flexi cable into the connector, as shown below. The first pic shows the closed state while the second shows the open state. To open it, just pull the latch from the two ends simultaneously.

latch_closelatch_open


Next, insert the flexi cable into the connector on the LCD driver board through the bottom of the latch (shown below). After the cable is inserted, remember to close the latch by pushing the two ends back.

insert_flexi


Then connect the other cable to the driver board as well. Also screw in the four standoffs as shown below.

mount_drv


Next, we connect the driver board to the Pi 2. Please note that the flexi cable needs to be inserted into the connector through the top of the latch this time, rather than through the bottom.

drv_pi2

 

Next, connect the other end of the flexi cable to the Pi 2's display port. Also connect the two wires from the driver board to the Pi 2's GPIO pins #2 and #6 (5V and GND) as shown below.

bundle


Now it's time to power up the Pi & LCD. You cannot use the power adapter that came with the kit, as described in my previous blog (Step by Step Build Trick or Trivia Halloween Candy Dispenser #1). You have to use a 5V, 2A adapter. If everything has gone well so far, you should be able to see the desktop on the LCD display as shown below. I assume you have completed the OS configuration using the command sudo raspi-config and configured the automatic startup of the desktop.

Desktop

 

In summary, I got WiFi and the LCD working in this blog. I will continue my work; stay tuned for the next blog, Step by Step Build Trick or Trivia Halloween Candy Dispenser #3.

Previous posts for this project here:

http://www.element14.com/community/community/raspberry-pi/raspberrypi_projects/blog/tags#/?tags=pieintheface

 

Got my kit, and assembled the Pi screen.  I did a few things that will help with development and deployment.  Follow the steps on this page:

http://opentechguides.com/how-to/article/raspberry-pi/5/raspberry-pi-auto-start.html

 

This will allow auto login, as well as automatic starting of the GUI on boot.  This is better for a project that will use the GUI and won't have a keyboard and/or mouse attached.

 

Edit your /boot/config.txt file to include these lines.  The lines probably already exist, so just edit them.

framebuffer_width=800
framebuffer_height=480

 

I found this to be a perfect resolution for the Raspberry Pi screen, as well as for the command line interface.

 

I have been working on the code to animate the eyes.  I have one eye working that blinks at random.  I have it enlarged for development, and here is a video demonstration:

 

 

 

 

 

 

The blinking is a simple algorithm that ensures the eye does not blink too quickly or too slowly.  The actual blink animation is a fixed time, but the delay until the next blink is chosen based on the length of the previous delay.

if(self.blinking == 0):  #determine when to blink again
   display.blit(self.leftEye,(0,0))
   if (time.time() - self.lastBlink >= self.blinkDelay):
    print("Blink!" + "(" + str(self.blinkDelay) + ")")
    self.blinking = 1
    self.blinkIndex = 0
    self.blinkDirection = 1
    self.lastBlinkSwap = time.time()
    if(self.blinkDelay <= 2):
     self.blinkMin = 4
    else:
     self.blinkMin = 1
    if(self.blinkDelay >= 4):
     self.blinkMax = 3
    else:
     self.blinkMax = 6
    self.blinkDelay = random.randint(self.blinkMin,self.blinkMax)
    self.lastBlink = time.time()



 

 

 

I have added an Eyes subclass to the Face class.  Each part of the face will be a class that encapsulates and draws its own images on the screen.  The completed code thus far:

 

import os
import pygame,sys, time, random
from pygame.locals import *

#a generic function for loading a set of images from a given directory
def LoadImages(imagesPath):
 imageList = list()
 for dirname, dirnames, filenames in os.walk(imagesPath):
  for filename in sorted(filenames):
   try:
    imageList.append( pygame.image.load(os.path.join(dirname, filename)))
   except:
    pass
 return imageList
 
#a generic function for loading a set of sounds from a given directory
def LoadSounds(imagesPath):
 soundList = list()
 for dirname, dirnames, filenames in os.walk(imagesPath):
  for filename in sorted(filenames):
   try:
    soundList.append( pygame.mixer.Sound(os.path.join(dirname, filename)))
   except:
    pass
 return soundList

#define the face and sub classes, which is just used to keep all the images together that go together
class Eyes:
 def __init__(self,path):
  self.leftEyeSquint = LoadImages(os.path.join(path, 'eye/left/squint/'))
  self.leftEye = pygame.image.load(os.path.join(path, 'eye/left/eye.png'))
  self.leftEyeBlink  = LoadImages(os.path.join(path , 'eye/left/blink/'))
  self.rightEyeSquint  = LoadImages(os.path.join(path , 'eye/right/squint/'))
  self.rightEyeBlink  = LoadImages(os.path.join(path , 'eye/right/blink/'))
  self.leftEyeX = 20
  self.leftEyeY = 20
  self.leftEyeW = 20
  self.leftEyeH = 20
  self.rightEyeX = 70
  self.rightEyeY = 70
  self.rightEyeW = 20
  self.rightEyeH = 20
  self.lastBlink = time.time()
  self.lastBlinkSwap = time.time()
  self.blinkMin = 1
  self.blinkMax = 6
  self.blinkDelay = random.randint(self.blinkMin,self.blinkMax)
  self.blinking = 0 
  self.blinkIndex = 0
  self.blinkDirection = 1
 
 def getBlink(self):
  if(self.blinking == 0):  #determine when to blink again
   display.blit(self.leftEye,(0,0))
   if (time.time() - self.lastBlink >= self.blinkDelay):
    print("Blink!" + "(" + str(self.blinkDelay) + ")")
    self.blinking = 1
    self.blinkIndex = 0
    self.blinkDirection = 1
    self.lastBlinkSwap = time.time() # store on self so the animation timer actually resets
    if(self.blinkDelay <= 2):
     self.blinkMin = 4
    else:
     self.blinkMin = 1
    if(self.blinkDelay >= 4):
     self.blinkMax = 3
    else:
     self.blinkMax = 6
    self.blinkDelay = random.randint(self.blinkMin,self.blinkMax)
    self.lastBlink = time.time()
  else:  #animate blinking
   if(time.time() - self.lastBlinkSwap >= .3):
    self.blinkIndex = self.blinkIndex + self.blinkDirection
    self.lastBlinkSwap = time.time() # reset the swap timer so the animation advances one frame per interval
    if(self.blinkIndex >= len(self.leftEyeBlink)):
     self.blinkIndex = len(self.leftEyeBlink)-1
     self.blinkDirection = -1
    if(self.blinkIndex == 0 and self.blinkDirection == -1):
##     print("reset")
     self.blinking = 0
     self.blinkDirection = 1
     self.lastBlink = time.time()
##    print("blink index:" + str(self.blinkIndex))
    display.blit(self.leftEyeBlink[self.blinkIndex],(0,0))
     
    
  
 
class Face:
 def __init__(self,path):
  #load each component of the eyes using the generic image loading function
  
  self.mouthTalk = LoadImages(os.path.join(path , 'mouth/talk/'))
  self.talkSounds = LoadSounds(os.path.join(path , 'sounds/talk'))
  self.singSounds = LoadSounds(os.path.join(path , 'sounds/sing'))
  self.scareSounds = LoadSounds(os.path.join(path , 'sounds/scare'))
  
  #create the eyes class
  self.eyes = Eyes(path)
  #define vars for face
  
  #emperically  determined coordinates
  
  self.mouthX = 20
  self.mouthY = 150
  self.mouthW = 200
  self.mouthH = 200
  
 def PrintInfo(self):
  print(str(len(self.eyes.leftEyeSquint)) + ' left squint images loaded')
  print(str(len(self.eyes.leftEyeBlink)) + ' left blink images loaded')
  print(str(len(self.eyes.rightEyeSquint)) + ' right squint images loaded')
  print(str(len(self.eyes.rightEyeBlink)) + ' right blink images loaded')
  print(str(len(self.mouthTalk)) + ' talk images loaded')
  print(str(len(self.talkSounds)) + ' talk sounds loaded')
  print(str(len(self.singSounds)) + ' sing sounds loaded')
  print(str(len(self.scareSounds)) + ' scare sounds loaded')


  
#main code here
pygame.init()
display = pygame.display.set_mode((800, 320))
pygame.display.set_caption('Funny Pumpkin')

#Create a list of faces classes, this example only has 1 face, but multiple faces can be used
faces = list()
#load the default face 
faces.append(Face('./faces/default/'))

#test the class
faces[0].PrintInfo()





#global vars
FPS = 30



#main game loop
i = 0

while(1):
 faces[0].eyes.getBlink()
 for event in pygame.event.get(): 
  if event.type == QUIT:
   pygame.quit()
   sys.exit()

 pygame.display.update()

This series of blogs will show how I built the Trick or Trivia Halloween Candy Dispenser project step by step, following Charles_Gantt's series of blogs Trick or Trivia Halloween Candy Dispenser #001 - Project Introduction.

 

First of all, thanks to element14Dave & element14 for giving me this opportunity. I received my kit a few days ago. Below is what's in the box.

all_parts

 

I noticed that the kit includes a 4GB SD card instead of a microSD card, so I had to find myself an 8GB microSD card, shown on the right-hand side of the picture below.

SDCard

I recently moved to the U.S. and I don't have an HD TV in my apartment, so I had to figure out how to use the Raspberry Pi without one. After a quick Google search, I found a forum post, Headless setup: no keyboard, display or frustration, which is exactly the guide I was looking for. Before I mount my Pi on the back of the 7" LCD panel, I'd like to have the OS (Raspbian, based on Debian Wheezy) running first, so I followed the post mentioned previously.


After the OS was written to the microSD card, I plugged it into the slot, connected the power to the Raspberry Pi 2 board with the wall adapter (shown below) included in the kit (black color), and also connected an Ethernet cable to my router. The first thing I noticed is that the LED on the power adapter took about 3~5 seconds to light up after I plugged it into the wall outlet. Secondly, the two LEDs on the Pi were solid on. It's reasonable for the red power LED to be solid on, but the green LED should flash when the Pi reads the SD card. I waited about 10 minutes and still couldn't see the Pi on my router's connected-device list. I thought the power adapter might be the problem.

Bad power adapter


Fortunately, I have another 5V power adapter in white (shown below). I changed the power adapter and this time the green LED started flashing. However, after a few seconds, the green LED started doing something weird: staying solid green for about 10 seconds, then off for a second, repeating this cycle forever until I unplugged the power. I suspect that the first power-up with the black adapter might have corrupted the SD card, so I rewrote the OS, then powered up with the white power adapter. This time, everything worked as expected, and I could SSH to the Pi.

Good power adapter


At this point, I am sure the black power adapter is no good for the Pi 2. I checked the label on the black power adapter; it says "O/P 5.25V 1A". No wonder it doesn't work - it cannot provide the maximum current the Pi 2 requires, which is 2A. Update: after a quick Google search, I am not sure the previous statement is correct.


I will continue my work; stay tuned for the next blog (Step by Step Build Trick or Trivia Halloween Candy Dispenser #2).

The blogs that seem to be getting updated are the 7" RPi screen ones, so I thought I would keep the Foginator people rolling while they wait for their kits to arrive.  :-)

 

Earlier I posted a picture listing all of the parts that came with the Foginator project, so anyone still waiting for shipment should receive something similar.  Reading through the Foginator 2000 blog entries by Charles_Gantt shows that additional parts will be needed to do at least the basics of the project: combining the new SenseHat with a remotely controlled Foginator.

 

I highly suggest that everyone check out his projects, purely for the education of seeing them being worked on and adapted to real-life surprises, not to mention the advantage some of us will have of learning from his efforts before we stumble.

 

You can check out his Foginator posts @ Foginator 2000: #001 Project Introduction

 

Additional items I have ordered that seem integral to the idea of the project are a PIR sensor, a channel relay module, and a GPIO 2x20 stackable header with extra-long pins.


The first two actually allow someone to remotely trigger the Foginator; the third gives you access to the RPi's GPIO pins, since once you place the SenseHat the pins can't be used.  Charles goes into more detail in his blog entry, complete with pictures:  Foginator 2000: #005: Neopixel Integration with Raspberry Pi and Arduino
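To sketch how the first two parts could tie together on the Pi side, here is a rough illustration using the common RPi.GPIO library; the pin numbers are placeholders I picked, not anything from Charles' posts.

# Rough sketch: fire the fog machine relay when the PIR sees motion.
# Assumes RPi.GPIO; the BCM pin numbers below are placeholders.
import time
import RPi.GPIO as GPIO

PIR_PIN = 23     # PIR sensor output
RELAY_PIN = 24   # relay module input (switches the fog machine's remote)

GPIO.setmode(GPIO.BCM)
GPIO.setup(PIR_PIN, GPIO.IN)
GPIO.setup(RELAY_PIN, GPIO.OUT, initial=GPIO.LOW)

try:
    while True:
        if GPIO.input(PIR_PIN):          # motion detected
            GPIO.output(RELAY_PIN, GPIO.HIGH)
            time.sleep(5)                # run the fogger for 5 seconds
            GPIO.output(RELAY_PIN, GPIO.LOW)
            time.sleep(30)               # cool-down before re-triggering
        time.sleep(0.1)
finally:
    GPIO.cleanup()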

 

In that post there are also additional items needed for the Neopixel implementation (such as an Arduino Nano).  If I can get all of the other key parts working before the deadline, I will revisit the need to order a Nano board.

 

While I am waiting for those to come in, I will jump ahead to the SenseHat installation and software loading so as to be ready to tie in the rest of the project as quickly as possible.  Caveat: here in the US it is the weekend, so even paying for 2-day shipping means the parts won't leave until Monday and arrive until Wednesday.  Relying on standard 5-8 business days seemed like tempting Murphy's Law to me.  Hopefully anyone still waiting for their items can check their inventory and order the extra parts needed as soon as possible, so we can all see everyone's projects on Halloween!  :-)

jkutzsch

RPi Haunted Foginator

Posted by jkutzsch Oct 16, 2015

Project items came in! 

 

Stuff just came in!

List of items starting in the center and spiraling out clockwise:

1.  The Raspberry Pi Sense Hat.  Key to the project.

2.  The Raspberry Pi board.

3.  Adafruit 8 channel level shift

4.  Adafruit Neopixel Stick 8

5.  Visaton Speaker

6.  Element 14 Wi-Pi

7.  ABS Case, Grey

8.  8GB Raspberry Pi card preloaded with NOOBS (8GB microSD in an SD adapter)

9.  Velleman-kit, 2x52 amplifier

 

Going through the project information, items I will need that I don't have spares of include an IR sensor and a relay to sense and trigger the Foginator, plus some extra-long (20mm) GPIO header pins for the Sense Hat.  Possibly also an Arduino board for the Neopixels, but I will research that more later.

 

The tin at the bottom is a cool Ouija board mint tin that I was hoping was a little larger than the standard Altoids tin, to make a custom Halloween case for the Pi, but it looks standard size.

 

A huge thank you to Element 14 for including me in this project! I will update as the other parts come in and progress continues.

 

John K.

 

Introduction

 

I've had a thermal printer for a while now, but never used it as part of a project. Recently, I purchased the new Raspberry Pi Touch Screen and decided to make a kind of photo booth. The touch screen is used for user input, instead of (mechanical) buttons. If the user is satisfied with the picture, it can be printed on the spot by the small printer.

 

It's certainly not a new idea, but I thought it would be a fun little project to try out.

 

The main components used in this project are:

 

Main components
SBC, RASPBERRY PI 2, MODEL B, 1GB RAM
RASPBERRY PI CAMERA BOARD, 5MP
Raspberry Pi 7" Touch Screen Display with 10 Finger Capacitive Touch
DONGLE, WIFI, USB, FOR RASPBERRY PI
Mini Thermal Receipt Printer
Raspberry Pi Camera Wide-Angle Lens

 

Raspberry Pi

 

For this project, I ended up using a Pi 2. Originally, I tried with the A+, but some software components failed to install (more on that in the "Kivy" paragraph).

 

For the OS, the latest version of Raspbian was used (2015-09-24 Jessie). It can be downloaded from the official Raspberry Pi website: https://www.raspberrypi.org/downloads/raspbian/

Getting the OS image onto a microSD card can be done in several ways depending on your operating system. In my case, on OSX, I used "dd" to get the image onto the microSD card.


Fredericks-Mac-mini:~ frederickvandenbosch$ sudo diskutil list
Fredericks-Mac-mini:~ frederickvandenbosch$ sudo diskutil unmountDisk /dev/diskX
Fredericks-Mac-mini:~ frederickvandenbosch$ sudo dd if=Downloads/2015-09-24-raspbian-jessie.img of=/dev/diskX bs=1m
Fredericks-Mac-mini:~ frederickvandenbosch$ sudo diskutil unmountDisk /dev/diskX

 

Once the image has been written to the microSD card and the card has been unmounted, it can be removed from the PC and inserted in the Raspberry Pi.

 

Touch Screen

 

Connecting and getting the touch screen to work with the Raspberry Pi was super easy using the instructions found right here on element14: http://www.element14.com/community/docs/DOC-78156#installI


Using the latest Raspbian image (2015-09-24 Jessie), the touch screen was plug & play. I did install the additional virtual keyboard by executing the following command:

 

pi@photobooth ~ $ sudo apt-get install matchbox-keyboard

 

WiPi

 

Getting WiFi to work on the Pi is another one of those plug & play things. Just connect the WiFi dongle, select the access point you wish to connect to in the desktop environment, and enter the password. That's all there is to it.

 

Pi Camera

 

No photo booth without a camera, right? Let's see how to connect and enable the camera.

 

Connecting the camera

 

To connect the camera to the Pi, open the CSI slot located near the ethernet port and ensure the camera's flex cable is inserted with the exposed contacts facing away from the ethernet port.

 

Enabling camera support

 

By default, the camera support is disabled. To get the camera to work, support needs to be enabled using the "raspi-config" tool.

 

Open a terminal and enter following command:

 

pi@photobooth ~ $ sudo raspi-config

 

A menu will appear. Select option 5: "Enable Camera", and in the following step, select "Enable". Reboot the Pi.

Screen Shot 2015-10-04 at 08.39.49.pngScreen Shot 2015-10-04 at 08.39.52.png

 

Thermal Printer

 

To set up the printer, a complete guide is available over at Adafruit (https://learn.adafruit.com/pi-thermal-printer/overview); only a few steps are relevant for this project though, and I will highlight them in the next paragraphs.

 

Connecting the printer

 

There are two connections to make for the printer:

  • power, using an external 5V power supply (at least 1.5A for the printer only)
  • data, using the Pi's GPIO serial interface (including GND)

 

To easily connect an external power supply, I cut off one end of the provided power cable and screwed on a female DC barrel jack connector. The data cable, even though not ideal, can be connected to the Raspberry Pi's GPIO. Careful though: the printer's TX pin (going to RX on the Pi's GPIO) should either be disconnected or have a 10k resistor added to compensate for the level difference (5.0V vs 3.3V).

IMG_0027.JPG

 

You'll notice I moved the GND jumper wire from the touch screen to another GND pin, in order to accommodate the printer's data cable.

 

Controlling the printer

 

Start by installing the necessary software components.

 

pi@photobooth ~ $ sudo apt-get install python-serial python-imaging python-unidecode

 

In the cmdline.txt file, remove the references to ttyAMA0 to avoid conflicts with the printer on the serial interface.

 

pi@photobooth ~ $ sudo nano /boot/cmdline.txt

#dwc_otg.lpm_enable=0 console=ttyAMA0,115200 console=tty1 root=/dev/mmcblk0p2 rootfstype=ext4 elevator=deadline fsck.repair=yes rootwait
dwc_otg.lpm_enable=0 console=tty1 root=/dev/mmcblk0p2 rootfstype=ext4 elevator=deadline fsck.repair=yes rootwait


 

Download the Adafruit Python library for the printer, which contains some example code.

 

pi@photobooth ~ $ sudo apt-get install git
pi@photobooth ~ $ git clone https://github.com/adafruit/Python-Thermal-Printer
pi@photobooth ~ $ sudo reboot


 

After the Pi has rebooted, it should be possible to make a test print.

 

pi@raspberrypi ~ $ cd Python-Thermal-Printer
pi@raspberrypi ~/Python-Thermal-Printer $ python printertest.py


 

The printer should then output something like this:

IMG_0030.JPG
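Once the test print works, driving the printer from your own script only takes a few lines. Below is a minimal sketch, assuming the Adafruit_Thermal class from the repository cloned above (run from that directory) and the port/baud settings its printertest.py example uses.

# Minimal printing sketch using the cloned Python-Thermal-Printer library;
# the serial port and baud rate are the defaults from its example code.
from Adafruit_Thermal import Adafruit_Thermal

printer = Adafruit_Thermal("/dev/ttyAMA0", 19200, timeout=5)

printer.justify('C')        # center the text
printer.setSize('L')        # large characters
printer.println("Photo Booth")
printer.setSize('S')
printer.println("Smile!")
printer.feed(3)             # advance the paper clear of the tear bar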

 

Kivy

 

Kivy is an open source Python library for developing applications with user interfaces. Kivy's official website can be found here: http://kivy.org/#home

 

The installation steps and some example code are provided via Matt Richardson's tutorial, in which he used Kivy to control the Pi's GPIO using the touch screen: http://mattrichardson.com/kivy-gpio-raspberry-pi-touch/index.html

 

Some notes on my experience performing the installation (a minimal touch-screen test app follows the list):

  • I originally used the Raspberry Pi A+. However, during the Cython installation step, it ran out of memory and started swapping. The installation never finished, as the kswapd0 process took 100% CPU. Using the Raspberry Pi 2, no problems were encountered.
  • Originally, when trying to edit Kivy's config.ini (~/.kivy/config.ini) in order to add touch support, the file didn't exist. After running an example (~/kivy/examples/demo/pictures/main.py), the file was there and could be edited.
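As a quick sanity check that Kivy and the touch input are working, independent of the photo booth code, a hello-world app like this sketch (assuming only a stock Kivy install) is enough:

# Smoke test for the Kivy install: a single full-screen button that
# reacts to the touch screen. Tap it and watch the label change.
from kivy.app import App
from kivy.uix.button import Button

class TouchTestApp(App):
    def build(self):
        btn = Button(text='Tap me')
        btn.bind(on_press=lambda instance: setattr(instance, 'text', 'Touch works!'))
        return btn

if __name__ == '__main__':
    TouchTestApp().run()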

 

Project

 

With all individual components working, it's time to move on to the project specific topics.

 

Code

 

The code is based on Matt Richardson's example application, which was then adapted to suit my needs. In addition, Adafruit's thermal printing Python library was added to provide printing support as well.

 

I've added comments in the code to make it easier to understand.
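In outline, the capture-then-print flow comes down to a handful of calls. The sketch below is only my rough illustration of that idea, assuming the picamera library and the Adafruit_Thermal class from earlier; it is not the actual project code, and the filenames are placeholders.

# Rough outline of the capture-then-print idea - not the actual project code.
# Assumes the picamera library and the Adafruit_Thermal class from the
# cloned Python-Thermal-Printer repository (run from that directory).
import picamera
from PIL import Image
from Adafruit_Thermal import Adafruit_Thermal

printer = Adafruit_Thermal("/dev/ttyAMA0", 19200, timeout=5)

with picamera.PiCamera() as camera:
    camera.resolution = (800, 480)
    camera.capture('snapshot.jpg')              # grab the photo

img = Image.open('snapshot.jpg')
width = 384                                     # the printer head is 384 dots wide
height = width * img.size[1] // img.size[0]     # keep the aspect ratio
img = img.resize((width, height)).convert('1')  # 1-bit image for thermal paper
printer.printImage(img)                         # method from the Adafruit library
printer.feed(3)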

 

 

Build

 

For the frame, I picked something simple: a wooden board holding all the components in place. The result would be a flat and portable photo "booth".

I started by using some tape to draw on and see how the result would look. Everything looked good, so I started cutting and drilling. A bit of sanding was required to make everything fit.

 

{gallery} Build

IMG_0042.JPG

Board: the piece of wood before the cutting and drilling

IMG_0044.JPG

Layout: using tape and a pencil to decide where I'll put the different components

IMG_0050.JPG

Cutting: happy with the layout, I cut out the parts using an oscillating multitool

IMG_0051.JPG

Drilling: some drilling was required for the camera and the handle

IMG_0053.JPG

Edges: removed the top corners to make some rounded edges

IMG_0056.JPG

Fitting: test-fitting the parts

IMG_0057.jpg

Feet: making some "feet"

IMG_0058.jpg

Bandsaw: using the bandsaw, the "feet" can easily be cut to the desired shape

IMG_0069.jpg

Cleanup: with everything in place, some tidying up was required

IMG_0068.jpg

Testing: my assistants testing the new gadget

 

Demo

 

IMG_0068.jpg

 

Hope you like the project!

While we wait for Candi's remains to arrive, we are preparing for the undertaking of bringing her generosity back to life for the children. My daughter Chrystal is very good at programming in the Scratch language, which will be the programming language used for this project. Scratch alone can't run the GPIOs, so modifying the OS is required. This will allow Scratch and Python to work together and create new Scratch icons. The OS hack will be documented and uploaded for all to see. The coffin is made for the candy to be dispensed from. I plan on giving Candi a face and a voice for greeting the children. There will be motion sensors to alert Candi when children appear.

 

This is so much fun,

 

Chrystal and Dale

Foginator-Banner-005.jpg

 

Welcome to installment #005 of Project: Foginator 2000, part of the 2015 Raspberry Pi Halloween Project series here at Element14. In this week's episode I am going to demonstrate how to get Neopixel (WS2812B) LED modules working with the Raspberry Pi 2. Unfortunately, as you will see, this turned out to be an almost impossible task, as the previously working library is only compatible with Raspberry Pi versions up to the Model B+.

 

Below is a table containing the parts you will need for this segment of the project. In addition to these parts you will need to connect the Raspberry Pi to the internet, either via a WiFi dongle or a wired Ethernet connection. You will also need three colors of stranded hook-up wire, a soldering station, solder, and some 0.100" male header pins.

 

Newark Part No.

Notes

Qty

Manufacturer / Description

38Y6467

RPi

1

RASPBERRY PI 2, MODEL B

38Y6470

SD Card

1

RASPBERRY PI 8GB NOOBS MICRO SD CARD

44W4932

PSU

1

USB PORT POWER SUPPLY 5V, 1A

06W1049

USB Cable

1

USB A PLUG TO MICRO USB B PLUG

53W6285

WiFi Dongle

1

USB WIFI MODULE

26Y8458

Fog Coloring Rings

1

NEOPIXEL RING - 16 X WS2812

26Y8460

Mood LEDs

1

NEOPIXEL DIGITAL RGB 1M 144LED BLACK

34C1092

PSU Vreg

1

LM7805 LINEAR VOLTAGE REGULATOR, 5V, TO-220-3

58K3796

PSU LED Resistor

1

METAL FILM RESISTOR, 1KOHM, 250mW, 1%

17F2165

PSU Filter Cap

1

CERAMIC CAPACITOR 0.1UF, 50V, X7R, 20%

69K7949

PSU Filter Cap

1

ELECTROLYTIC CAPACITOR 47UF, 50V, 20%

69K7907

PSU Filter Cap

1

ELECTROLYTIC CAPACITOR 100UF, 50V, 20%

14N9418

PSU LED

1

LED, RED, T-1 3/4 (5MM), 2.8MCD, 650NM

49Y7569

RPi Sense Hat

1

Raspberry Pi Sense HAT

13T9275

Arduino Nano

1

Arduino Nano V3

38K0328

10k Resistor

1

Multicomp 10k Resistor

 

 

The Theory

 

 

1138-00.jpg

Neopixels are the brand name for the popular WS2812B individually addressable RGB LED modules, marketed and sold by Adafruit here at Element14. You can find WS2812B strips, rings, sticks, and individual modules at various other electronics retailers as well, but for the purposes of this project I will be using genuine Neopixel strips and rings from Adafruit.

 

The NeoPixel line is the latest advance in the quest for a simple, scalable and affordable full-color LED. Red, green and blue LEDs are integrated alongside a driver chip into a tiny surface-mount package controlled through a single wire. They can be used individually, chained into longer strings or assembled into still more interesting form-factors.

 

leds_neo-closeup.jpg

As you can see, each Neopixel contains a small driver chip built into the LED module, with control lines running to each LED die. Neopixels use a single-wire protocol, making them easy to integrate into any project without consuming valuable GPIO resources. Neopixels pass data along to the next module in line, and can be individually controlled in single-module, strip, and matrix form factors.

 

Unfortunately, Neopixels require very strict timing, and this causes a lot of headaches when attempting to control them from something like a Raspberry Pi, as its GPIO pins are driven by software, not hardware like those of an Arduino. Raspberry Pi models up to the B+ were able to skirt around this limitation thanks to the excellent rpi_ws281x library created by Jeremy Garff, but as I recently found out, the library does not seem to function on the new Raspberry Pi 2 boards.

 

I spent a good portion of last week looking for a solution to this problem and, to be honest, I came up with nothing. This caused me to freak out a little bit, as Neopixels play a very large role in this project. After much deliberation, and consultation with some friends here at Element14, I decided to abandon my quest to get Neopixels working directly with the Raspberry Pi 2. Instead of driving them from the Pi, I decided to go a much easier route: controlling them with an Arduino Nano, which is triggered by the Raspberry Pi.

 

As many of you may already know, Adafruit has an excellent guide to getting Neopixels up and running on an Arduino, and even wrote their own library (albeit a modified version of the pre-existing FastLED library). To keep things simple and easy to understand for those of you following along at home, I stuck with the Adafruit Neopixel library despite being more familiar with FastLED. If you are looking for more code examples for driving WS2812B modules, give FastLED a try.

 

neopixel_stick_circuit.png

As I briefly mentioned earlier, Neopixels use a single-wire data format, meaning they only require a single data wire regardless of whether your strip has one module or one thousand modules. The only other connections required are 5V and GND. It is very important to remember that with each “pixel” you are driving three LEDs. This means that Neopixel strips and rings can draw a large amount of current. I find that on a standard Arduino board, only about 60 Neopixels can be driven before browning out the board, and that number drops by half if the strip is set to display white at full brightness.

 

For this reason, I recommend driving your Neopixels with a separate power source, such as a 5V 1A regulated supply or a 5V 2A wall transformer. You can also power the strip with a 4x AA battery box. Adafruit also recommends filtering the power input with a large capacitor, and limiting the current on the data line with a resistor. I find that this is usually not needed, but it will prevent a pixel from dying in the event you accidentally plug the strip in while the system is powered up.
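To put rough numbers on that, the commonly cited worst case is about 60mA per pixel with all three LED dies at full brightness. A quick back-of-the-envelope budget (an estimate, not a measurement) shows why a small supply runs out fast:

# Back-of-the-envelope NeoPixel power budget, using the commonly cited
# ~60 mA-per-pixel worst case (all three LED dies at full brightness).
MA_PER_PIXEL_FULL_WHITE = 60

def strip_current_amps(num_pixels, ma_per_pixel=MA_PER_PIXEL_FULL_WHITE):
    """Worst-case current draw for a strip, in amps."""
    return num_pixels * ma_per_pixel / 1000.0

# The 16-pixel ring, a 60-pixel run, and the full 144 LED/m strip:
for n in (16, 60, 144):
    print("%3d pixels -> up to %.1f A at 5 V" % (n, strip_current_amps(n)))

The full 144-pixel strip works out to roughly 8.6A worst case, which is why reduced brightness and colorful (non-white) animations matter so much when running from a small regulated supply.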

 

 

Wiring the Neopixels, Arduino Nano, and Raspberry Pi

 

 

fog2000schem2.jpg

Wiring the Neopixels to the Arduino is fairly straightforward, and is as simple as following the diagram above. Note that I have connected the data-in lines from both the Neopixel ring and the Neopixel strip to the Arduino Nano’s digital pin 6. This allows me to drive both Neopixel devices with the same code. Also note that I have drawn a 4x AA battery pack in the image for illustration purposes. As you will see later, I am using an LM7805 vreg-based power supply in the physical build.

 

Connecting the Raspberry Pi to the Arduino is simple as well. Connect the Raspberry Pi’s GPIO pin 21 to the Arduino Nano’s digital pin 8. Then connect one of the Raspberry Pi’s GND pins to the shared GND circuit between the Neopixels and the Arduino Nano. It is very important that all of the grounds in this circuit are connected together. Finally, a 10k Ohm resistor needs to be connected as a pulldown on the Arduino’s digital pin 8. This will prevent any false triggers.
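For reference, the Pi side of this trigger only needs to drive GPIO 21 high while the light show should run. The full project code lives in the Github repository mentioned later in this post; the snippet below is just a minimal stand-in sketch using the RPi.GPIO library (BCM numbering, matching "GPIO pin 21" above).

# Minimal Pi-side trigger: hold GPIO 21 high so the Arduino Nano's pin 8
# reads HIGH and runs the rainbow animation, then drop it low again.
import time
import RPi.GPIO as GPIO

TRIGGER_PIN = 21

GPIO.setmode(GPIO.BCM)
GPIO.setup(TRIGGER_PIN, GPIO.OUT, initial=GPIO.LOW)

try:
    GPIO.output(TRIGGER_PIN, GPIO.HIGH)   # Arduino sees HIGH, LEDs animate
    time.sleep(10)                        # run the effect for 10 seconds
    GPIO.output(TRIGGER_PIN, GPIO.LOW)    # Arduino's else branch wipes the LEDs off
finally:
    GPIO.cleanup()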

 

Raspberry-Pi-Sense-HAT.jpg

Since we are using the Raspberry Pi Sense Hat with our Raspberry Pi, we need to make some slight modifications to the Sense Hat before we can connect any jumper wires to the GPIO headers. As you can see in the image above, the header pins that come with the Sense Hat do not protrude past the black plastic bar on top of the board.

 

20151001_153828_HDR.jpg

Unfortunately, figuring out how to get around this issue is not something that has been widely discussed anywhere on the internet. However, I did manage to find a post on the Astro Pi forums that mentioned buying some extra-long header pins from Adafruit. This is one of the reasons this post was delayed for so long. As you can see in the image above, the extra-long pins from Adafruit are about 5mm longer than the ones that ship with the Sense Hat.

 

20151001_153743_HDR.jpg

The first step in installing the new header pins is to gently pry the Sense Hat off of its existing header pins. The Sense Hat was designed with this in mind, and prying the existing header pin strip out is easy when done slowly and carefully with a small flat-blade screwdriver. Once removed, you can simply slide the longer header pins into place.

 

20151001_153931_HDR.jpg

Now you are ready to hook everything up as per the instructions above. In the image below you can see how I have everything laid out. Note that for this portion of the project to work, you need to have followed all of the previous installments’ instructions as well.

 

20151013_225331_HDR.jpg

It looks like a bit of a mess, but with everything laid out and taped down, I could easily troubleshoot any issues that arose.

 

20151013_225352_HDR.jpg

Here you can see the connections made to the Raspberry Pi. Note that I have removed the plastic housings from some of the jumper wires. The header pins only stick about 5mm above the surface of the black plastic bar, and some of my jumpers were having trouble keeping a secure connection.

 

20151013_225342_HDR.jpg

Here is a shot of how I have the Arduino Nano connected to the breadboard and wired up. Note that the grey wire is connected to the Neopixel strip, and the twisted yellow, red, and black wires are connected to the Neopixel ring. In this image, you can also see the LM7805-based 5V power supply I built. More on that later.

 

20151013_225347_HDR.jpg

Finally, a shot of the NeoPixel ring wired up. Note that these rings do not come pre-wired, and you will need to solder wires to them yourself.

 

 

 

Building a 5V Regulated Power Supply

 

 

This is the same PSU that I built for my Trick or Trivia Halloween Candy Dispenser #004 - Building The Candy Dispenser & Servo Coding project, so I have re-used its images below. To build this PSU you will need the following components, as well as a soldering iron, flush cutters, and a 12-30V DC power source.

 

58K3827

Resistors

1

METAL FILM RESISTOR, 220 OHM, 250mW, 1%

10M8464

General Purpose Diode

1

1N4001 Rectifier Diode 50 V 1 A

34C1092

PSU Vreg

1

7805 LINEAR VOLTAGE REGULATOR, 5V, TO-220-3

17F2165

PSU Filter Cap

1

CERAMIC CAPACITOR 0.1UF, 50V, X7R, 20%

69K7907

PSU Filter Cap

1

ELECTROLYTIC CAPACITOR 100UF, 50V, 20%

14N9418

PSU LED

1

RED, T-1 3/4 (5MM)

49Y1712

7-Inch Touch Screen

1

Raspberry Pi 7" Touch Screen Display

66H7462

Strip Board

1

VECTOR ELECTRONICS-8022-PCB, Tracks (Strip Board)

21M4909

Screw Terminal

2

MOLEX-39543-0002-TERMINAL BLOCK

 

 

TrickOrTrivia_004 (1).jpg

 

A 5V regulated power supply circuit is quite simple to build thanks to the fairly common LM7805 voltage regulator, and requires just five components to get up and running: a 100uF capacitor, two 0.1uF ceramic capacitors, a 1N4001 diode, and the LM7805 regulator. I am adding two screw terminals and an indicator LED to the mix. I want to design a PCB for this, but for now a piece of protoboard will work just fine.

 

FZBN31NHH2VMX7Q.bmp

 

Following the schematic above, build the power supply and solder in each component. The protoboard I am using is different from the one listed above, as I have a big supply of these from Protostack.com, so I just used one of mine.

 

TrickOrTrivia_004 (2).jpg

 

With all of the components soldered together, I made the necessary jumps from each component to the next. I lucked out with the Protostack board as it has integrated power and ground rails. This cut down on the number of jumps I needed to make.

 

TrickOrTrivia_004 (3).jpg

 

With everything soldered up, I trimmed the board down to reduce its size, and connected a 12V 1A power source. The red LED lit up, and I confirmed 5V out with a multimeter.

 

TrickOrTrivia_004 (4).jpg

 

 

The Neopixel Code

 

 

With everything connected, load the Arduino IDE and make sure the Adafruit Neopixel library is installed. Refer to Adafruit’s Neopixel Uberguide if you need help installing the library. Additionally, I won’t be going over the Arduino sketch that drives the Neopixels in great detail, as Adafruit does a good job of that in the code’s comments.

 

To keep things simple, I am using a modified version of Adafruit's Strand Test example. I have added some custom code that looks for a high signal on the Arduino Nano's digital pin 8, and included an else statement that tells the Neopixels to turn off if no high signal is present. As always, you can find all of the code used in the Foginator 2000 project at its Github Repository.

 

 

/* This code is adapted from the StrandTest example from Adafruit's Neopixel library. Please visit adafruit.com to download the neopixel library in order to use this code. https://learn.adafruit.com/adafruit-neopixel-uberguide/arduino-library */

#include <Adafruit_NeoPixel.h>

#define PIN 6

// Parameter 1 = number of pixels in strip
// Parameter 2 = Arduino pin number (most are valid)
// Parameter 3 = pixel type flags, add together as needed:
//  NEO_KHZ800  800 KHz bitstream (most NeoPixel products w/WS2812 LEDs)
//  NEO_KHZ400  400 KHz (classic 'v1' (not v2) FLORA pixels, WS2811 drivers)
//  NEO_GRB    Pixels are wired for GRB bitstream (most NeoPixel products)
//  NEO_RGB    Pixels are wired for RGB bitstream (v1 FLORA pixels, not v2)
Adafruit_NeoPixel strip = Adafruit_NeoPixel(60, PIN, NEO_GRB + NEO_KHZ800);

// IMPORTANT: To reduce NeoPixel burnout risk, add 1000 uF capacitor across
// pixel power leads, add 300 - 500 Ohm resistor on first pixel's data input
// and minimize distance between Arduino and first pixel.  Avoid connecting
// on a live circuit...if you must, connect GND first.

int rasPin = 8; // defines digital pin 8 as rasPin
int val = 0; // creates an integer called val, and assigns it a value of 0.

void setup() {
  strip.begin(); // Initialize the pixel strip
  strip.show(); // Initialize all pixels to 'off'
  pinMode(rasPin, INPUT); // sets rasPin to an Input pin
  }

void loop() {
val = digitalRead(rasPin); // tells the arduino to take a reading on rasPin and store the value in the val integer we declared earlier.

if (val == HIGH) // says if val is equal to 1, run the following code
{
  delay(1000); // wait one second
  rainbowCycle(30); // run the rainbowCycle function
  }
  else // tells the arduino that if val equals anything other than 1 (high) to run the following code.
  {
  colorWipe(strip.Color(0,0,0), 100); // sets each pixel to black (off) one by one via the colorWipe function
    }
}

// This function makes a rainbow equally distributed throughout the strip
void rainbowCycle(uint8_t wait) {
  uint16_t i, j;

  for(j=0; j<256*5; j++) { // 5 cycles of all colors on wheel
    for(i=0; i< strip.numPixels(); i++) {
      strip.setPixelColor(i, Wheel(((i * 256 / strip.numPixels()) + j) & 255));
    }
    strip.show();
    delay(wait);
  }
}

// This function fills the dots one after the other with a color
void colorWipe(uint32_t c, uint8_t wait) {
  for(uint16_t i=0; i<strip.numPixels(); i++) {
      strip.setPixelColor(i, c);
      strip.show();
      delay(wait);
  }
}

// This function acts as a color-wheel value generator.
// Input a value 0 to 255 to get a color value.
// The colours are a transition r - g - b - back to r.
uint32_t Wheel(byte WheelPos) {
  if(WheelPos < 85) {
  return strip.Color(WheelPos * 3, 255 - WheelPos * 3, 0);
  } else if(WheelPos < 170) {
  WheelPos -= 85;
  return strip.Color(255 - WheelPos * 3, 0, WheelPos * 3);
  } else {
  WheelPos -= 170;
  return strip.Color(0, WheelPos * 3, 255 - WheelPos * 3);
  }
}



 

 

Now upload the above code to the Arduino. If you get an error, check that the Neopixel library is properly installed, and that your Arduino is connected to the correct COM port.

 

 

Python Code To Trigger Arduino Via Raspberry Pi

 

 

Making an Arduino do something based on a trigger from a Raspberry Pi is quite simple, and only requires a few lines of code. Since we are looking for a HIGH input on the Arduino, we simply need to tell the Raspberry Pi to drive one of its GPIO pins high as an output when we want the Arduino to do something. In our case, we want the Neopixels to light up when the motion sensor is tripped. This will allow us to illuminate the fog that was triggered by the motion sensor as well.

 

Below you will find the code. I have not broken it down, as it is just a few lines that drive GPIO pin 21 high when the motion sensor is tripped.

 

import RPi.GPIO as GPIO
import time
import sys

GPIO.setwarnings(False)
GPIO.setmode(GPIO.BCM)
GPIO.setup(4, GPIO.OUT)   # fog machine relay
GPIO.setup(17, GPIO.IN)   # PIR motion sensor
GPIO.setup(21, GPIO.OUT)  # trigger signal to the Arduino Nano

def fire_fog():
    GPIO.output(21, True)   # tell the Arduino to start the light show
    time.sleep(2)
    GPIO.output(4, True)    # fire the fog machine relay for ten seconds
    time.sleep(10)
    GPIO.output(4, False)
    time.sleep(20)          # let the lights run while the fog hangs in the air
    GPIO.output(21, False)
    GPIO.cleanup()
    sys.exit()

while 1:
    time.sleep(3)
    if GPIO.input(17) == True:  # motion detected
        fire_fog()



 

 

Testing The Code

 

 

Power the Raspberry Pi, Arduino, and Neopixel power supply up, and then connect to the Raspberry Pi via SSH with a terminal like PuTTY, or Terminal if you are using a Mac. Using the Nano text editor, create a new file called foginator_ledfx_demo.py using the command below.

 

nano foginator_ledfx_demo.py

 

Now paste the Python code from the previous section above, and then save and exit the Nano text editor. If everything is hooked up correctly, and the code has been copied and pasted correctly, you can run the command below to see the neopixels light up.

 

sudo python foginator_ledfx_demo.py

 

You will have to wait for a few seconds, then move your hand over the motion sensor. The program will wait about two seconds, and then trigger the relay which fires the fog. At the same time, the Raspberry Pi will send a high signal to the Arduino which will trigger the NeoPixel strip and ring. Check out the video below to see it in action.

 

 

So that wraps up part five of the Foginator2000 project. This was a really fun but frustrating portion of the project for me, as I lost a lot of time trying to sort out the Neopixel / Raspberry Pi 2 incompatibility issues. In the end everything worked out well, but I was forced to add another part to the project's bill of materials. Recently, a new friend of mine, who happens to be an EE, told me that being an engineer means that you spend more than half your time problem solving. After this portion of the project, I am highly inclined to believe him. Tune in in just a few days for my next installment on the Foginator2000 project. Until then, remember to Hack The World and Make Awesome!

 

Win this Kit and Build-A-Long


  1. Project Introduction
  2. Fog Controller Hardware and Test
  3. Environment Sensing Coding & Testing
  4. Ambient Audio Hardware and Coding
  5. Lighting Coding and Testing
  6. October 16th - Final Assembly and Testing
  7. October 23rd - Project Wrap-up

Previous posts for this project can be found here:

http://www.element14.com/community/community/raspberry-pi/raspberrypi_projects/blog/tags#/?tags=pumpkinpi2015

 

I want to blog a little about the pumpkin, mainly because I need to get the design nailed down while I wait for my hardware to arrive.

 

The pumpkin will play pre-recorded sounds at random. When a sound plays, the eyes and mouth move around. When a sound ends, the eyes will still move but the mouth won't.

 

Let's start with the eyes.

 

The eyes will blink randomly, and if a user "presses" an eye it will squint. This means I need a set of images that can be "flipped" to imitate the motions.

 

As an example, a blink is a series of images that are played in order. This means I need a method of loading up images and storing them, as well as keeping them in order.

So what I will do is create a file structure that keeps everything nice and tidy. For now I will only have two actions for the eye, which will be blink and squint.

 

I propose the following directories:

 

faces/

faces/<name>/eye/left/
faces/<name>/eye/right/
faces/<name>/mouth/
faces/<name>/sounds/

The <name> will be the name of the face, as I want to be able to load multiple faces. The default face must exist and be called 'default'.

Within each eye folder, a set of files is needed.  I propose the following:

 

faces/<name>/eye/left/eye.jpg
faces/<name>/eye/left/blink1.jpg
faces/<name>/eye/left/blink2.jpg
...etc

faces/<name>/eye/left/squint1.jpg
faces/<name>/eye/left/squint2.jpg
...etc

 

The same structure exists for the right eye and the other components. When animating, the images are flipped in order when the action is needed, as sketched below.
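As a rough sketch of that flipping, here is what a playback helper could look like (the helper name is hypothetical; it assumes a pygame display has been set up and that frames is one of the image lists loaded later in this post):

import pygame

def play_animation(screen, frames, fps=15):
    #flip through the frames in order to imitate the motion
    clock = pygame.time.Clock()
    for frame in frames:
        screen.blit(frame, (0, 0)) #draw this frame at the eye's position
        pygame.display.flip() #push the new frame to the screen
        clock.tick(fps) #hold each frame for 1/fps of a second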

 

Having these common directories makes searching for and loading faces easier. For now I will use only the default face for my development.

 

With the directory structure in place, we now need to load all of the face components, which include the eyes, mouth, and sounds.

 

This will require the use of a class for each face to keep everything together.

 

Let's start with an easy example of how to load up all the squint images for the default face.

 

 

import os
import pygame, sys

leftEyeSquint = list()
for dirname, dirnames, filenames in os.walk('./faces/default/eye/left/squint/'):
    for filename in sorted(filenames):
        try:
            leftEyeSquint.append(pygame.image.load(os.path.join(dirname, filename)))
        except:
            print filename + ": invalid file"
print str(len(leftEyeSquint)) + " Squint images loaded\n"






This gives us a list with an ordered set of squint images. Note that any file in the squint directory will be loaded as an image if possible, which is why the exception clause exists.

 

This is a generic function for loading the left-eye squint images, but nothing in it is specific to the left eye; it can be used for all of the face images. So I will make it part of a Face class. This class will have this generic method of loading images given a path.

 

Here is the complete code for loading up all of the components for the eyes and storing the images in lists that are part of a class.

 

import os
import pygame, sys

#define the face class, which is just used to keep all the images together that go together
class Face:
    def __init__(self, path):
        #load each component of the eyes using the generic image loading function
        self.leftEyeSquint = self.LoadImages(os.path.join(path, 'eye/left/squint/'))
        self.leftEyeBlink = self.LoadImages(os.path.join(path, 'eye/left/blink/'))
        self.rightEyeSquint = self.LoadImages(os.path.join(path, 'eye/right/squint/'))
        self.rightEyeBlink = self.LoadImages(os.path.join(path, 'eye/right/blink/'))

    #a generic function for loading a set of images from a given directory
    def LoadImages(self, imagesPath):
        imageList = list()
        for dirname, dirnames, filenames in os.walk(imagesPath):
            for filename in sorted(filenames):
                try:
                    imageList.append(pygame.image.load(os.path.join(dirname, filename)))
                except:
                    pass
        return imageList

#Create a list of Face classes, this example only has 1 face, but multiple faces can be used
faces = list()
#load the default face
faces.append(Face('./faces/default/'))

#test the class
print str(len(faces[0].leftEyeSquint)) + " Left squint images in class\n"
print str(len(faces[0].rightEyeSquint)) + " Right squint images in class\n"




 

 

The same pattern will be used for the mouth and sounds. The sound loader will call a different pygame method, but the logic remains the same. The class will be fully filled out, and a list of these classes will be used in the main animation loop. A sketch of the sound loader is below.
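Here is a minimal sketch of what that sound loader could look like as another method on the Face class (the name LoadSounds is my own; it assumes pygame.mixer.init() has been called and that the files live under faces/<name>/sounds/):

    #a generic function for loading a set of sounds from a given directory
    def LoadSounds(self, soundsPath):
        soundList = list()
        for dirname, dirnames, filenames in os.walk(soundsPath):
            for filename in sorted(filenames):
                try:
                    soundList.append(pygame.mixer.Sound(os.path.join(dirname, filename)))
                except:
                    pass #skip any file pygame cannot load as a sound
        return soundList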

It looks like I am one of the first to receive the kit, so I thought I would do a quick video of what was in the box:

 

 

Thanks Element14 and Dave, the kit looks great and was shipped fast as always!

Keep an eye on the main page for future blogs:

 

Animated_Grim Blog: Home Page [Last Updated 6/10/2015 < British Format]

Updated arrangements:

 

Since the sad announcement, we now await Candi's remains to be delivered to us. Our family is saddened by the loss, especially her friend Scratch and her pet Python. They both graciously volunteered to help Dale & Chrystal be the undertakers, to put Candi's remains back together and prepare her for the coffin. Scratch & Python will keep her thoughts, actions, and memories alive, as well as help with keeping children happy. Candi's memory will be kept in her favorite dessert (Raspberry Pi), her cheerful display will be on an LED screen for everyone to see and touch, and her eyes and smile will be from bright diamonds (LED lights). Once Candi's remains are received, we will have a viewing for family and friends who can stomach the mess we receive her in.

 

Obituary:

 

Name: Candi D. Spencer

Age: Too young

 

For Candi's whole life she lived for making kids happy, handing out candy once a year to any child who asked. Her final wish was to continue this tradition every year from beyond the grave. Her dad, "The Candy Man", started her on this tradition many years ago (he had a different agenda after death, from what I understand). Candi gave out so much candy that it killed her. The story is too sad to write, but her legacy will live on once Halloween day arrives.

 

Thank you,

 

Dale & Chrystal Winhold

We regret to announce the sudden death of Candi D. Spencer. A tombstone is being made in memory of Candi. The tombstone has two doors that will open as a grieving child approaches, revealing an LED touch screen. The saddened child will be asked a question; upon answering, Candi will give a treat to make the child happy. The tombstone will be mounted on top of the coffin, and Candi's hand will come out of the coffin offering the treat.

 

DSC00387.JPG

 

More information on this sad announcement to come very soon!!

 

Dale & Chrystal Winhold

Trick-or-Trivia-Banner004.jpg

 

Welcome back to the Trick or Trivia Blog. In this installment, I am going to cover the candy dispensing mechanical assembly, as well as the coding to get the servo up and running on the Raspberry Pi 2. Up until now, all of the parts needed to build this project could be purchased from Newark.com or MCM Electronics, but this portion will require a 3D printer, or some handy ingenuity and minor skills with woodworking equipment. I have included both the printable .STL files as well as the SketchUp design files so that anyone following along at home can recreate, improve, or modify things to fit their needs.

 

When I came up with the concept of the Trick or Trivia candy dispenser, I spent hours trying to figure out the best way to autonomously dispense Halloween candy one piece at a time, and mocked up a few designs in SketchUp. After a lot of thought, I came to the conclusion that the candy needed to be compact and very uniform in size. It was also important that the candy be tightly wrapped, as loose packaging caused jams. I finally settled on Starburst candies as they fit all of the requirements. Starburst were actually my second choice, with Now & Later candies being my first; unfortunately I could not find any of those locally, and I know that Starburst are found throughout the US and abroad. Let's get into the meat of things and talk about the hardware needed for this project.

 

 

The Hardware

 

 

Below you will see a list of the hardware used to build out the candy dispenser portion of this project. In addition to these components, you will need the following tools: a soldering iron, solder, flush cutters, wire strippers, 12-inches or more of 3-conductor wire, 16 Gauge Galvanized Steel Wire (found at hobby stores), and a bag of Starburst candies.

 

Newark.com


Newark Part No. | Notes | Qty | Manufacturer / Description
38Y6467 | RPi | 1 | RASPBERRY PI 2, MODEL B
38Y6470 | SD Card | 1 | RASPBERRY PI 8GB NOOBS MICRO SD CARD
44W4932 | PSU | 1 | POWER SUPPLY, 5V, 1A
06W1049 | USB Cable | 1 | USB A PLUG TO MICRO USB B PLUG
53W6285 | WiFi Dongle | 1 | ADAFRUIT USB WIFI MODULE
58K3827 | Resistors | 1 | METAL FILM RESISTOR, 220 OHM, 250mW, 1%
10M8464 | General Purpose Diode | 1 | 1N4001 Rectifier Diode, 50 V, 1 A
34C1092 | PSU Vreg | 1 | 7805 LINEAR VOLTAGE REGULATOR, 5V, TO-220-3
17F2165 | PSU Filter Cap | 1 | CERAMIC CAPACITOR, 0.1UF, 50V, X7R, 20%
69K7907 | PSU Filter Cap | 1 | ELECTROLYTIC CAPACITOR, 100UF, 50V, 20%
14N9418 | PSU LED | 1 | RED, T-1 3/4 (5MM)
49Y1712 | 7-Inch Touch Screen | 1 | Raspberry Pi 7" Touch Screen Display
66H7462 | Strip Board | 1 | VECTOR ELECTRONICS-8022-PCB, Tracks (Strip Board)
21M4909 | Screw Terminal | 2 | MOLEX-39543-0002-TERMINAL BLOCK

 

MCM Electronics

 

MCM Part No. | Notes | Qty | Manufacturer / Description
28-17450 | Servo | 1 | Micro Servo

 

 

3D Printing The Candy Dispenser Assembly

 

 

I chose to 3D print the parts for the candy dispenser simply because I have a few 3D Printers at my disposal at home, and could quickly design everything in Sketchup. If you do not have a 3D printer, you could easily build this assembly from wood or even foam core. The most important thing to remember is to leave enough clearance on all moving parts to negate any candy size anomalies.

 

Download all of the .STL files and the Sketchup design file from Thingiverse.com.

 

To keep this blog at a somewhat reasonable length, I am not going to post any images of the parts being 3D printed but as you can see from the image below, they print well at a 0.25mm layer height.  I used Voltivo Excelfil PLA Filament as the printing medium as PLA is more food-safe than ABS. Since the candy is wrapped in wax-coated paper, PLA is fine to make the dispenser out of.

 

TrickOrTrivia_004 (6).jpg

 

I mocked up the candy dispenser on a scrap box from a previous Newark order, and used hot glue to temporarily stick everything together. This was a very important step, as I realized that the plunger tube was about 2mm taller than the magazine despite being exactly the same height in the SketchUp file. I printed the tube again and the second try was perfect; I suspect a slicing error was the cause of the first tube's extra height. I apologize for the low-quality image. I simply got so wrapped up in getting this to work that I forgot to take one, and had to use a screen cap from a video of everything working.

 

TrickOrTrivia_004-(7).jpg

 

In the image above you can see that I have the servo mounted to the back of the box, with the push-rod made from the 16 gauge wire pushing the plunger into the tube. It took a little trial and error to get the length right, and to get the servo’s horn placed just right, so that it would push candy out and not bind on the return stroke.

 

TrickOrTrivia_004-(8).jpg

 

With everything lined up and secure, I tested the dispenser for over an hour until I was confident that everything would work fine. I then super glued the plunger tube to the candy magazine using a thick, gel-like super glue. If this were ABS plastic I would have solvent-welded it together using acetone instead.

 

TrickOrTrivia_004-(9).jpg

 

I then glued the steel rod into the plunger block using hot glue. I chose hot glue as it is easier to remove if I need to change its length.

 

TrickOrTrivia_004 (5).jpg

 

With the easy part done, it was time to move on to getting the code for the servo working.

 

 

Wiring and Coding the Servo

 

 

When designing the original kit for this project I listed a normal-sized hobby servo, as I thought its extra power would be needed, but as it turns out, a smaller 9g servo works much better. This is due to the fact that the candy sometimes binds in the tube if its wrapper is coming loose, and when it binds, the bigger servo will actually bend the steel push rod. The 9g servo simply stalls out, preventing the rod from bending.

 

The servo I am using is a small 9g metal-gear servo from Hobby King's Turnigy line, but the one listed in the parts list at the top of this post will work just as well. I also had issues trying to drive the original Tower Pro servo from the Raspberry Pi 2, even with a 6-volt, 5-amp power supply hooked to it. The smaller 9g servos worked just fine with the 5V power supply we will build later in this post.

 

The PIGPIO Library

 

To drive the servo and retain audio output I chose to install the pigpio library. Installing it is as easy as entering the following commands into the terminal one by one.

 

wget abyz.co.uk/rpi/pigpio/pigpio.zip

unzip pigpio.zip

cd PIGPIO

make

sudo make install

 

Then restart the Raspberry Pi with the following command.

 

sudo reboot

 

Once the Pi is back up and running, we need to start the pigpio module using the following command.

 

sudo pigpiod

 

This command will need to be run every time the Raspberry Pi reboots. I simply added it to the crontab, just like we did with the command that plays the ambient audio on boot. In the event you need to stop the pigpio module, simply run the command below. To find out more about what the pigpio module can do, check out its info-page.

 

sudo killall pigpiod
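For reference, this is roughly what the auto-start entry looks like in root's crontab (edit it with sudo crontab -e); the exact path is an assumption based on pigpio's default make install location:

@reboot /usr/local/bin/pigpiod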

 

Servo Control Code

 

I chose the pigpio module because it not only allows the user to utilize all of the Raspberry Pi’s GPIO pins as PWM pins, but because it does not block audio like some of the other servo control solutions do. An added bonus is how easy it is to program servo control using Python with this library. Below is a breakdown of the servo code, followed by the full TrickOrTrivia code with the servo control integrated.  As always, you can find the full code for this project at its Github repo.

 

First we need to import the time and pigpio libraries.

 

import time
import pigpio

 

Next we need to define which GPIO pin is connected to the servo.

 

servos = 4 #GPIO number

 

Now we need to initialize the pigpio library

 

pi = pigpio.pi()

 

This next block of code is the function that makes the candy dispenser’s plunger move back and forth to dispense three pieces of candy upon a correct answer. We are telling the servo to move hard right with a pulsewidth of 2500, and then move a little more than 90 degrees left with a pulsewidth of 1300. We wait 0.5 seconds between each move. When finished, we turn the servo off by setting a pulse width of 0, and telling the pigpio module to stop. Finally we break out of this function.

 

def correct_servo ():
        while True:
            pi.set_servo_pulsewidth(servos, 2500)
            time.sleep(.5)
            pi.set_servo_pulsewidth(servos, 1300)
            time.sleep(.5)
            pi.set_servo_pulsewidth(servos, 2500)
            time.sleep(.5)
            pi.set_servo_pulsewidth(servos, 1300)
            time.sleep(.5)
            pi.set_servo_pulsewidth(servos, 2500)
            time.sleep(.5)
            pi.set_servo_pulsewidth(servos, 1300)
            time.sleep(.5)
            pi.set_servo_pulsewidth(servos, 0)
            pi.stop()
            break

 

The same code works for the incorrect answer, but only dispenses a single piece of candy.

 

def incorrect_servo ():
    while True:
        pi.set_servo_pulsewidth(servos, 2500)
        time.sleep(.5)
        pi.set_servo_pulsewidth(servos, 1300)
        time.sleep(.5)
        pi.set_servo_pulsewidth(servos, 0)
        pi.stop()
        break

 

In the full code below, you will see that I call each of these functions in the blink_led functions.

 

from Tkinter import *
import RPi.GPIO as GPIO
import time
import sys
import pygame
import pigpio

GPIO.setwarnings(False)
GPIO.setmode(GPIO.BCM)
GPIO.setup(26, GPIO.OUT)
GPIO.setup(19, GPIO.OUT)

servos = 4 #GPIO number
pi = pigpio.pi()

state = True

correct_audio_path = '/home/pi/Desktop/audio/correct.mp3'
incorrect_audio_path = '/home/pi/Desktop/audio/incorrect.mp3'

def correct_servo ():
        while True:
            pi.set_servo_pulsewidth(servos, 2500)
            time.sleep(.5)
            pi.set_servo_pulsewidth(servos, 1300)
            time.sleep(.5)
            pi.set_servo_pulsewidth(servos, 2500)
            time.sleep(.5)
            pi.set_servo_pulsewidth(servos, 1300)
            time.sleep(.5)
            pi.set_servo_pulsewidth(servos, 2500)
            time.sleep(.5)
            pi.set_servo_pulsewidth(servos, 1300)
            time.sleep(.5)
            pi.set_servo_pulsewidth(servos, 0);
            pi.stop()
            break

def incorrect_servo ():
    while True:
        pi.set_servo_pulsewidth(servos, 2500)
        time.sleep(.5)
        pi.set_servo_pulsewidth(servos, 1300)
        time.sleep(.5)
        pi.set_servo_pulsewidth(servos, 0);
        pi.stop()
        break

def blink_led():
    # endless loop, on/off for 1 second
    while True:
        GPIO.output(26,True)
        pygame.mixer.init()
        pygame.mixer.music.load(correct_audio_path)
        pygame.mixer.music.set_volume(1.0)
        pygame.mixer.music.play(5)
        time.sleep(10)
        correct_servo()
        GPIO.output(26,False)
        GPIO.cleanup()
        pygame.quit()
        sys.exit()

def blink_led_2():
    # endless loop, on/off for 1 second
    while True:
        GPIO.output(19, True)
        pygame.mixer.init()
        pygame.mixer.music.load(incorrect_audio_path)
        pygame.mixer.music.set_volume(1.0)
        pygame.mixer.music.play(5)
        time.sleep(10)
        incorrect_servo()
        GPIO.output(19,False)
        GPIO.cleanup()
        pygame.quit()
        sys.exit()

root = Tk()
root.overrideredirect(True)
root.geometry("{0}x{1}+0+0".format(root.winfo_screenwidth(), root.winfo_screenheight()))
root.focus_set()  # <-- move focus to this widget
root.configure(background='black')

label_1 = Label(root, text="Welcome to Trick or Trivia", font=("Helvetica", 36), bg="black", fg="white")
label_1.grid(columnspan=6,padx=(100, 10))
label_2 = Label(root, text="Answer the question for candy!", font=("Helvetica", 28), bg="black", fg="red")
label_2.grid(columnspan=6, pady=5, padx=(100, 10))
label_3 = Label(root, text="Casper is a friendly ____!", font=("Helvetica", 32), bg="black", fg="green")
label_3.grid(columnspan=6, pady=5, padx=(100, 10))
button_1 = Button(root, text="Ghost", font=("Helvetica", 36), command=blink_led)
button_1.grid(row=4, column=2, pady=5, padx=(100, 10))
button_2 = Button(root, text="Ghast", font=("Helvetica", 36), command=blink_led_2)
button_2.grid(row=4, column=4, sticky=W, padx=(100, 10))
button_3 = Button(root, text="Ghoul", font=("Helvetica", 36), command=blink_led_2)
button_3.grid(row=5, column=2, pady=5, padx=(100, 10))
button_4 = Button(root, text="Gremlin", font=("Helvetica", 36), command=blink_led_2)
button_4.grid(row=5, column=4, sticky=W, padx=(100, 10))
label_4 = Label(root, text="Correct Answer = 3 Pieces", font=("Helvetica", 20), bg="black", fg="green")
label_4.grid(columnspan=6, padx=(100, 10))
label_5 = Label(root, text="Incorrect Answer = 1 Piece", font=("Helvetica", 20), bg="black", fg="red")
label_5.grid(columnspan=6, padx=(100, 10))

root.mainloop()

 

 

Building a 5V Regulated Power Supply

 

 

The 9g servo we are using requires a 5V power source, and while the Raspberry Pi is capable of powering it for free movement, the servo could pull 2-3 amps if it binds up. The Pi is not capable of sourcing this much current, and trying could cause damage to the Pi. So we are going to build a quick and simple regulated 5V power supply. This power supply will not supply 2-3 amps, but it can sustain 1 amp, provided the regulator can get rid of the heat.
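As a quick sanity check on the heat: a linear regulator burns off the entire input-to-output voltage difference, so P = (Vin - 5V) x I. With a 12V input at a full 1A, that is (12 - 5) x 1 = 7W, which is more than a bare TO-220 package can comfortably shed, so bolt a heatsink to the 7805 (or feed it a lower input voltage) if the servo will draw heavy current for long stretches.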

 

To build this PSU you will need the following components, as well as a soldering iron, flush cutters, and a 12-30V DC power source.

 

Part No. | Notes | Qty | Manufacturer / Description
58K3827 | Resistors | 1 | METAL FILM RESISTOR, 220 OHM, 250mW, 1%
10M8464 | General Purpose Diode | 1 | 1N4001 Rectifier Diode, 50 V, 1 A
34C1092 | PSU Vreg | 1 | 7805 LINEAR VOLTAGE REGULATOR, 5V, TO-220-3
17F2165 | PSU Filter Cap | 1 | CERAMIC CAPACITOR, 0.1UF, 50V, X7R, 20%
69K7907 | PSU Filter Cap | 1 | ELECTROLYTIC CAPACITOR, 100UF, 50V, 20%
14N9418 | PSU LED | 1 | RED, T-1 3/4 (5MM)
49Y1712 | 7-Inch Touch Screen | 1 | Raspberry Pi 7" Touch Screen Display
66H7462 | Strip Board | 1 | VECTOR ELECTRONICS-8022-PCB, Tracks (Strip Board)
21M4909 | Screw Terminal | 2 | MOLEX-39543-0002-TERMINAL BLOCK

 

 

TrickOrTrivia_004 (1).jpg

 

A 5V regulated power supply circuit is quite simple to build thanks to the fairly common LM7805 voltage regulator, and requires just five components to get up and running: a 100µF electrolytic capacitor, two 0.1µF ceramic capacitors, a 1N4001 diode, and the LM7805 regulator itself. I am adding two screw terminals and an indicator LED to the mix. I want to design a PCB for this, but for now a piece of protoboard will work just fine.

 

FZBN31NHH2VMX7Q.bmp

 

Following the schematic above, build the power supply and solder in each component. The protoboard I am using is different from the one listed above as I have a big supply of these from Protostack.com, so I just used one of mine.

 

TrickOrTrivia_004 (2).jpg

 

With all of the components soldered together, I made the necessary jumps from each component to the next. I lucked out with the Protostack board as it has integrated power and ground rails. This cut down on the number of jumps I needed to make.

 

TrickOrTrivia_004 (3).jpg

 

With everything soldered up, I trimmed the board down to reduce its size, and connected a 12v 1amp power source. The red LED lit up and I confirmed 5V out with a multimeter.

 

TrickOrTrivia_004 (4).jpg

 

With the power supply built and working, we can move on to testing our servo!

 

 

Testing the Servo

 

TrickOrTrivia_004-(10).jpg

 

Connect the servo to the power supply as shown. Then connect the servo's signal wire to the Raspberry Pi's GPIO pin 4, using the BCM numbering scheme. You also need to connect the power supply's ground to the Raspberry Pi's ground. You can see that I have done that here with the GND rail on a breadboard I am using to power the mock-up's LEDs.

 

To test the servo, let’s create a new test script with Python. Using the Nano text editor create a new file within the Trivia Scripts directory using the following commands.

 

cd /home/pi/Desktop/TriviaScrips

nano servo_test.py

 

Now paste the following code into the file you just created. Then exit out of the file, saving the changes.

 

import time
import pigpio
import sys

servos = 4 #GPIO number

pi = pigpio.pi()
#pulsewidth can only set between 500-2500

def correct_servo ():
        while True:
            pi.set_servo_pulsewidth(servos, 2500)
            time.sleep(.5)
            pi.set_servo_pulsewidth(servos, 1300)
            time.sleep(.5)
            pi.set_servo_pulsewidth(servos, 2500)
            time.sleep(.5)
            pi.set_servo_pulsewidth(servos, 1300)
            time.sleep(.5)
            pi.set_servo_pulsewidth(servos, 2500)
            time.sleep(.5)
            pi.set_servo_pulsewidth(servos, 1300)
            time.sleep(.5)
            pi.set_servo_pulsewidth(servos, 0);
            pi.stop()
            break


def incorrect_servo ():
    while True:
        pi.set_servo_pulsewidth(servos, 2500)
        time.sleep(.5)
        pi.set_servo_pulsewidth(servos, 1300)
        time.sleep(.5)
        pi.set_servo_pulsewidth(servos, 0);
        pi.stop()
        break

correct_servo()
incorrect_servo()
sys.exit()

 

To run the script, use the following command.

 

sudo python servo_test.py

 

The servo should move back and forth a few times before the script finishes and terminates. If it worked, take the full Trick or Trivia code that I posted earlier in this post and merge it into your main trivia script: delete the existing code, paste in the new code, then save and exit. Now you should be able to run the script, and when you select the correct answer, three pieces of candy will be ejected from the magazine like in the video below.

 

 

I apologize for this post being so late, but as you might know, South Carolina got hit by a pretty massive rainstorm over the last few days. That has seriously hindered my ability to work on anything, but we are back to blue skies now. I hope you enjoyed this installment of my Trick or Trivia project, and I hope that you learned a thing or two about the Raspberry Pi and servos. That is going to wrap this installment of Project: Trick or Trivia. Check back in a few days for the next installment. Until then, remember to Hack The World, and Make Awesome!

 

Win this Kit and Build-A-Long

 

  1. Project Introduction

  2. Building The Trivia Interface

  3. Interfacing Ambient and Triggered Audio Events
  4. Building The Candy Dispenser & Servo Coding
  5. Carve Foam Tombstone
  6. October 24th -  Assembly and Testing
  7. October 28th - Project Wrap-up

What I will build if I am selected(the more detail, the better!):

 

Synopsis
At carnivals they usually have a dunk tank where a person, usually dressed as a clown, heckles the crowd to lure them into paying money for an attempt to dunk the heckler. The heckler makes faces and yells things at people as they walk by. I want to build a pumpkin that does the same thing, but heckles trick or treaters, or attempts to startle them by waking up suddenly with a loud "BOO!"

 

 

High level description

A group of trick or treaters walk up to the door and notice a foam pumpkin sitting there, no lights inside.  Suddenly eyes and a mouth appear and it begins speaking.  It yells out BOO! then begins heckling the kids using various pre-recorded skits.

 

The eyes are moving, the mouth is moving, all animated.  One of the kids touches the eye, and the pumpkin reacts with pain and says some choice words.  Another pokes the pumpkin in the mouth and it acts disgusted and pretends to spit out the taste.

 

After the kids leave the pumpkin  begins yelling out random things such as "Hey, You there!".  It sings some Halloween songs and later goes to sleep, waiting for its next victims. It may choose a funny face or an evil face next.

 

The Hardware

Inside the pumpkin will be a Raspberry Pi, a PIR motion sensor, an amplified speaker, and the Raspberry Pi Screen.

A large foam or plastic pumpkin will be used.

 

IMG_20150914_195004305.jpg

 

Only one screen is needed, so a pumpkin of the correct size will need to be found to fit the screen and have the face be the right size. In the below picture the holes are cut and the screen is seen through the holes. When the pumpkin turns on, animations will be displayed to give it moving eyes and a moving mouth. The eyes and mouth can be touched and will provoke a reaction.

 

The eyes and mouth will be sprite animations of various forms. In order to move the project along, I will be using free clip art and sound bites, though I will have to record some of my own sounds.

 

The PIR sensor will be hidden and disguised as a mole, or if I choose to get the pumpkin a hat, it can be hidden in there. When the pumpkin is in sleep mode, it will be awakened by either a timer or an event from the PIR sensor.

 

animated.jpg

 

Here is a concept picture of what one face could look like:

 

animated.jpg

 

 


The Software
As much as I love C and SDL, I am going to use Python and the PyGame library.  If that will not perform well enough, I may switch to Java. 
The PyGame library is easy to use and has little overhead to get up and running. In only a few lines of code, I can have a sprite moving around the screen. I found that Python is more popular amongst Pi users, and I want this to be a project that Makers can easily recreate and modify.

 

I plan to use as much pre-written software, clip art, and pre-recorded sound bites as I can. I found that trying to recreate everything from scratch causes me to lose focus on getting the project completed.

 

 

There are already animated GIFs that mimic mouth movements, and the individual frames from a GIF can be animated with the Pygame library.

 

The bulk of the work will be to find images I want to use, and record skits I want to play back.

 

Animations will be on one thread, sounds on another. An additional thread will monitor for "mouse clicks", or screen pokes. All threads will share a synchronizing method. The RPi 2 is great for multithreading as it has multiple cores, and I feel it has enough power to handle this application with a high-level scripting language. A sketch of the thread layout is below.
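Here is a minimal sketch of that thread layout, using Python 2's Queue module (renamed queue in Python 3) as the synchronizing mechanism. The names and the fake two-second poke are placeholders for the real pygame loops:

import threading
import time
import Queue #renamed 'queue' in Python 3

events = Queue.Queue() #thread-safe queue shared by all threads

def input_thread():
    #stand-in for the touch screen poller: posts a fake poke every 2 seconds
    while True:
        time.sleep(2)
        events.put('eye_poked')

def animation_thread():
    #stand-in for the pygame animation loop: reacts to queued events
    while True:
        event = events.get() #blocks until the poller posts an event
        print event + ": play the squint animation here"

poller = threading.Thread(target=input_thread)
poller.daemon = True #do not let the poller keep the process alive on exit
poller.start()
animation_thread() #the animation loop runs on the main thread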

 

 

Summary

Plastic or foam pumpkin

An animated face

Pre-recorded sounds

An interactive face via the touch screen

PIR sensor for motion detection

Raspberry Pi

Raspberry Pi Screen

Powered speaker

 

 

 

About Me

Many years ago I decided to learn DirectX programming under windows.  While this is a specific framework, the generic principles of game programming apply here.  Such concepts as double buffering, bitmapping, the blitter, and the video memory are similar on any platform for general use.

My kids and I build Halloween decorations from felt, like the kits that are sold to build little houses. This will go in line with that activity, and will give us a good prop to add to the current set of props we have built!

 

I recently completed a project in the Sci Fi Your Pi Challenge called the QuadCOP, in which I had to get into the intricacies of the Pi and its multi-core capabilities. Threading on the cores will be an important aspect of this project in order to play sounds and animations at the same time while keeping them in sync. I did something similar with the QuadCOP, where the GPS NMEA parsing is done on its own thread and sets flags to let the main process know what is going on. Python allows threading and I will be taking advantage of this.

 

I have learned from my previous project to not get bogged down in so many technical details but use more high level languages in order to move the macroscopic portion of the project along.

We will be using quite a lot of image processing in this project, so let's make life simple.

 

 

It is crazy just how much misinformation there is about OpenCV and the Raspberry Pi; many tutorials will guide you through an installation process that takes 12+ hours and leaves the libraries linked together in complex ways.

 

OpenCV 2.4.1 [cv2] is available in the default Raspbian repository.

To install on Raspbian, type the following into the command terminal:

 

Type Into Terminal

sudo apt-get update

sudo apt-get upgrade

 

 

sudo apt-get install libcv-dev

sudo apt-get install python-opencv

sudo apt-get install libopencv-dev

sudo apt-get install libcv2.3  #[May have updated to libcv2.4]

sudo apt-get install opencv-doc

 

How To use:

>Open the file manager and go to your Documents folder

>Right click > Create New > Empty File

>Add the line [#!/usr/bin/python] to the top of the file

>Save it as <FileName>.py

>Right click on the file > Properties > Permissions and select Everyone > OK

>Double click on the file and choose to execute it in the terminal

 

Hints:

>Remember it is a Python-based install of OpenCV 2

>Execute the Python script as root if you want access to the GPIO pins

>If the script doesn't work, cd into the script's directory and run python <FileName>.py
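To confirm the install works, here is a minimal test script (test.jpg is a placeholder name; swap in any image you have on hand). The threshold call is the same fixed-level thresholding covered in the tutorial linked below:

#!/usr/bin/python
import cv2

print cv2.__version__ #should print 2.4.x for the stock Raspbian package

img = cv2.imread('test.jpg', 0) #0 = load the image as greyscale
if img is None:
    print "test.jpg not found"
else:
    #fixed-level threshold: pixels above 127 become white, the rest black
    ret, thresh = cv2.threshold(img, 127, 255, cv2.THRESH_BINARY)
    cv2.imwrite('test_thresh.jpg', thresh)
    print "wrote test_thresh.jpg"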

 

Useful Links:

 

Adaptive Threshold:  http://opencv-python-tutroals.readthedocs.org/en/latest/py_tutorials/py_imgproc/py_thresholding/py_thresholding.html

Other Threshold

Why enter the competition

>I have checked out a few of the other Halloween projects and they are great!

>Many fun Halloween Pi ideas just won't leave the back of my mind

>The Element14 Challenges are Fun

 

 

Project Scope

Right, well I'm guessing you have watched the above video. Let's talk about the plan in a little more detail.

 

Aim:

A Pi-based computer vision project that is scary, fun, and would make a great build for people interested in the RPi and vision. I get maybe as few as 20 trick or treaters a year, all on the old side for trick or treating. So my plan is to try and scare them; I have a particularly creepy corner at the front of my house that would fit a large figure well.

 

 

Flow Chart

flow

 

 

Death will be a scale model with a real scythe [secured into the ground and blunted], with a mister permanently on, flowing fog from under his cloak; beside him will be a chest full of candy.

 

gri

 

The screen will be in "Death's" outstretched left hand, and the children will get to select how many children there are. I am hoping most children will be greedy and select a number greater than the number there.

 

If the selected number is greater than the number seen by the camera, the head (controlled by two servos) will turn to look at the child, the red LEDs behind the eyes will come on, and with the mister running, a speaker will say "Liars do not get candy". The head will then pan around and say "there are only X souls before me", then turn back to its default position, and the children will get a second chance: "select again my honest minions". And we will loop back to the options again.

 

Child selects a number equal to or less than the number seen in camera view

 

As for the candy delivery portion, we will skip automating this bit due to costs. We will just have a chest full of candy and mist.

 

ISR

The ISR will be based on a switch on the candy box lid, triggering if the box is opened without first playing the game.

 

Voice Clips

The interaction with the player will be via a computer generated voice complemented by head movement and other special effects [Eye LED's, Fog].

 

[1] " Liar" [Red Led Behing eyes comes on, mist from mouth hole, followed by a pan motion of the head] " I see Y souls before me"

[2] [ Makes eye contact with primary player] "Try your luck again my minion"

[3][Head pans to count number of faces] " I too see X Souls here"

[4]"take your candy my minions and go fourth" [Led's turn green, looks at the candy chest] Led's in chest will dimly light.

[5][ ISR based, red led's on mist on] "Thief's Be Warned! , Play the game or perish"

 

 

Potential problems

>The face detection is relatively easy, but we will have to take care to make sure it can detect a mask too.

>Theft: where I want to place the unit is on the edge of the property in a particularly spooky spot, easy for a thief to take the whole thing, but it is relatively remote and we usually see around 20 trick or treaters, so we will just take our chances.

>The kids might rinse the box containing the treats; we will have to hope they are too scared to do this, and check on it every so often to top up if needed.

 

Parts I already Have:

> Red LEDs

>Ultrasonic fogger [Already have a DIY version of this and it would make a great DIY blog]

>Scythe

> Opto Isolators for actuators

>Webcam/Pi Camera

>Small Speaker

 

Parts I will need to source:

If you can help out with any of these I would be very grateful.

>Supplied touch screen

>Pi 2b [Element14Dave said he has sent one in the post, but a second one would make life much easier]

>2x servos [for the mask pan/tilt]

> "Scream" mask and cloak [will get these from the dollar store]

 

 

I hope I managed to put across what I envision; the first prototype will explain it much better than words alone could,

Mike

Running this project on a nonexistent budget, a lot of the stuff is going to have to be $1 parts or borrowed temporarily from another project.

 

Here is the poor man's "fog" build. I built this unit as a pesticide doser; by just adding water it will create safe, cool mist, and by adding a few delivery hoses we should be able to route it wherever it is needed.

 

Take a look at the Vertical Growing Blog for more info on this mister:

 

Automated Green House Blog's Home Page [Updated: 6/10/2015 < British Format]

 

 

 

 

 

What you will need:

[The Prices are estimates]

 

>24v Ultrasonic Mister [£2]

[You may want multiple units for large rooms/ large amounts of mist]

Mister

 

 

>24v Power supply

Any source of 24V DC will do; make sure it can supply enough current for your planned number of misters and the fan.

Misters have a female 5.5mm power jack [a common size for wall-mounted power supplies]

 

>24v PC Fan £1

24v Fan

 

> A Container Of your Choice with removable LID [Wide and shallow is better]

 

>Fan Speed Controller [Optional] £2

The basic ones just work as a voltage divider, so you can make your own from a potentiometer if you like.

Optional

 

>5.5mm 12v splitter cable [Optional] £2

4

 

How to build it:

As always, this is just a set of instructions; follow them at your own risk.


>I'm not including a wiring diagram with this blog; everything operates at the same voltage, and it is as simple as wiring all parts in parallel.

>Cut 4 holes in the top of the container you plan to use: two the size of the fan you selected, and two smaller ones for the waterproof bungs on the mister wires.

>Mount the fan so it is blowing air into the container [this stops the motor getting wet], and adjust the mister lead length in the bungs to place the misters in the middle of the container, away from your inlet and exhaust holes.

>Add an exhaust pipe if you want to direct the mist somewhere else.

 

How To use:

As a general mister or humidifier to keep cool in summer, or for a Halloween project: just add water, power it on, and let it do its thing; use the fan controller knob to control the amount of mist you want. We will be using a PWM controller to do this later in the project.

 

 

Keep an eye on the main Animated_Grim Project:

 

PumpkinPi2015 Competition Animated_Grim Blog: Home Page

In this post, we are going to create a very simple program. The program will ask you for your name, and then it will introduce you with the phrase: Hi, my name is _____. Through doing this, we will be exposed to some basic concepts that will carry over into any future project: how to take input from the user, how to respond to user actions like clicking on a button, and how to write information out to the screen to let the user know what is going on.

 

Starting With a Button

Let's start off with just a single button and see how that works. To do this, create a page containing nothing but a single Button element.

This snippet of code assumes that you called your project MyNameIs when you created it. If you called your project something else, then you'll need to change that part to match. If you don't, you'll get a compiler error like this:

 

'MainPage' does not contain a definition for 'InitializeComponent' and no extension method 'InitializeComponent' accepting a first argument of type 'MainPage' could be found (are you missing a using directive or an assembly reference?)

 

These compile errors are really trying to tell you that your MainPage.xaml and MainPage.xaml.cs files don't align. In the .xaml file, the first line specifies the full path to the class. By full path, I mean the namespace that the class is in and the name of the class. You can think of it as the namespace being the folder and the class name being the file name. That's how it is specified in the MainPage.xaml file. In the MainPage.xaml.cs file, the namespace is more clearly called out (MyNameIs). Then, within the namespace's curly braces, you can see the name of the class called out (MainPage). So, to get rid of the error, you need to make those two match.

 

Going back to the XAML, we can see that we have set two attributes on the button. The first is the content. This is going to be the text that is displayed within the button. Now, the content doesn't have to be text. You can set it to a picture or many other things, but for now, text is all that we need. The second is the click attribute. This is the name of the method that is going to be called when the user clicks the button.

 

So, let's go ahead and add an empty Submit method to the MainPage.xaml.cs file.

After you add the method, click on the location where the red dot is in the image below:

BreakPoint.png

When you click there, it should add a red dot on the screen. This is a breakpoint. This means that when the program hits this location, it will stop and give you a chance to debug and see what is going on. This is an extremely handy debugging tool that will save you lots of time in the future. So, go ahead and run the program and click the button. When you do this, Visual Studio should come to the foreground and you will see a small yellow arrow within the red circle. This tells you that when you clicked the button, the Submit method was called and that the program is now stopped there waiting for you to debug:

Debug.png

At this point, we really don't have anything to debug. All we were interested in knowing was that when we clicked the button, the Submit method was called. Since we hit our breakpoint, we know that happened. Now, we can either hit the green triangle that looks like a play button to continue execution of the program, or hit the red square that looks like a stop button to kill the program.

 

One of the most beneficial skills that you can learn is how to effectively debug programs. Fortunately, Visual Studio has a lot of powerful tools to help you debug programs better. Breakpoints are one of the key techniques for figuring out what is really going on.

 

User Input

Now that we know the button is working properly, let's add some more stuff around the button.

Now, if we look in the middle of the code, we can still see the same button that we had before. If we go one level up in the XAML, we see a StackPanel with an orientation of Horizontal. A StackPanel is a way to lay out information in the UI. You can think of a StackPanel as a stack of magazines sitting on the floor. However, if the orientation is set to Horizontal, then the stack of magazines is on its side, like they are sitting on a bookshelf. Since this StackPanel is Horizontal, that means that we will have a TextBlock on the left, followed by a TextBox in the center, and a Button on the right.

 

TextBlocks are a way to display text to the user; think of them as output. In this TextBlock, we display the phrase “Name:” to the user. Next to this TextBlock, we have a TextBox. TextBoxes are used to get input from the user. So, TextBlock = output, TextBox = input. The TextBox will look like your standard text entry field. Then to the right of that we have the Submit button.

 

That Horizontal StackPanel is embedded within another StackPanel. This is a standard StackPanel where things are stacked on top of one another. So this means that the “Name:” TextBlock, the entry TextBox, and the Submit Button will all sit above the TextBlock that has a Name of OutputText. Don't worry if the description sounds a little complicated at first. Once you see the UI in action it might click better. If not, there are many posts dedicated to explaining the StackPanel, like this one:

 

http://www.wpf-tutorial.com/panels/stackpanel/

 

The last TextBlock is different from the first one. In the first one, we set the text to a given value, “Name:”; in the second one, we gave the TextBlock a name of OutputText. This is because the first TextBlock will always display the text “Name:”. That's all we ever want it to do. It will never change. Its goal is just to describe what the user should be typing into the text field. However, we want to programmatically update the second TextBlock to introduce the person whose name was typed in.

 

Finally, let's add some logic to the Submit method. We simply set the Text property on the OutputText TextBlock to be a phrase that we create. The phrase is “Hi, my name is _____”, where we fill in the blank with the Text that the user typed into the MyName TextBox.

 

You can also remove the breakpoint by clicking on the red circle. We no longer need it, since we know that the Submit method is being called when we click the button.

 

Now when you run the program and type in a name and hit submit, you should see something like this:

name.png

Conclusion

It might not be the prettiest program in the world, but it sure was effective. We were able to get input from the user and then display it back. We were able to place a button in the UI that kicked off this process, and got a feel for how to react to user interactions. Additionally, we learned a little bit about setting a breakpoint and doing some debugging.

 

GitHub

The full source code for this tutorial is also on GitHub:

 

https://github.com/oneleggedredcow/RPi2-Tutorial
