
I know these things are two-a-penny now, but here's my Pi powered Mame table. Instead of buying furniture at Ikea I decided to buy a table at a charity shop and upcycle that.


Charity shop table

I'd previously built a Picade Console, and was using that for donor parts: arcade buttons and joystick, and the Picade card which converts the button pushes to keyboard presses.


I cut holes in the table, sanded it down and sprayed it blue. The plywood table top didn't like having big holes drilled into it and split up quite badly. This meant I had to use the perspex button cover and artwork that came with the Picade to hide the damage and support the buttons.


Do you like blue?

In the shelf underneath I cut holes where I was going to mount the speakers.


Under the table

Even with quite a large table there's not a lot of space available underneath! The perspex sheet is holding the display in place. At the left is a Raspberry Pi 2. USB leads power the display controller and the Picade card. Then at the right are the arcade controls. The audio wires for the speakers run down a little duct I spray painted to match and stuck to a table leg.


This is the finished thing.


Arcade Table

Old arcade games are better in portrait, so I turned the screen round 90 degrees. The buttons on the table top are game controllers as well as 1 and 2 Up. On the left hand edge are Volume Up and Down, along the front are Start and Insert Coin, and finally on the right hand edge are Enter and Escape. The decals came from Etsy sellers.


There are more photos and notes in the Flickr album linked to by the last photo.

The primary advantage of the Raspberry Pi NoIR camera is seeing in complete darkness. By using an Infrared light source - completely invisible to the human eye - images and videos can be captured with no visible illumination whatsoever.


This video was shot in complete darkness with the new NoIR V2 camera. All illumination comes from two Infrared LEDs powered from the Raspberry Pi itself. It's an example of a common use of night vision - security and surveillance.


A rather shifty looking character sneaks through the front door under the cover of darkness, but is caught on video!


These are some still photos taken under the same low light conditions, captured at the full 8 megapixel resolution the NoIR V2 offers.



No I.R.

Despite its name, the new NoIR V2 camera for the Raspberry Pi isn't something designed for filming 1940s mobster movies. What makes it special isn't an additional feature, but rather what it lacks. Most digital cameras are designed to capture images in the same spectrum of light as a normal human eye, producing realistic photos and videos. While camera sensors can detect light outside of this range, filters are used to ensure that only the desired light makes it into the final image.


The NoIR camera does what it says in its name - unlike most cameras, it has no filter for the infrared spectrum of light. This makes photos look very alien and unnatural. Most of the colour is incorrect and any bright surfaces are extremely washed out. You might think this makes the NoIR camera nothing more than a novelty, but it has a very big advantage - being able to see in very low light environments, or capture pictures in complete darkness by using an invisible Infrared light source. The resulting pictures might not be something you would want to frame, but they are significantly more visible compared to those shot on a standard camera in the dark.


You can see a video and collection of photographs of the NoIR V2 camera under Infrared illumination in complete darkness here, and the same for a range of more artistic daylight images here.



Video Streaming

A great use for the camera modules is to use the built-in wireless networking on the Raspberry Pi 3 to stream live video across a network. There are two main network configuration options:


  • The Raspberry Pi 3 can operate as its own access point that devices connect to directly, enabling it to operate independently of other networks. This option would be the way to go if you wanted to have a single device, like an old phone or tablet, dedicated to being the display for the video stream.
  • Connect the Raspberry Pi 3 to a standard wireless router or access point. This would let users view the video stream on a smart phone or tablet without having to switch between a home, internet enabled wireless network and a separate one used for the video stream. However, if the home WiFi network is heavily used, having the Raspberry Pi stream video over it might cause problems with network congestion and bandwidth.


Baby monitor

Combining the night vision and wireless streaming capabilities of the NoIR camera and Raspberry Pi 3 combination, I put together a project to make a video baby monitoring system. It's something that a parent would use to keep an eye on a resting child with a live video feed to a phone or tablet, and give notifications when the baby wakes or becomes restless.

A Standard camera versus the NoIR V2: Night and Day


For this project I opted to connect the Raspberry Pi to a standard network. Setting up WiFi can be done from the terminal, but it's much simpler to connect an HDMI monitor and use the Raspbian desktop environment. Set up the Raspberry Pi to use a WiFi network as normal, with the networking icon in the task bar. To make it simpler to find the video stream later, right click on the icon and set the wireless interface to use a static IP address.
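On Raspbian releases that use dhcpcd, the desktop tool writes this setting into /etc/dhcpcd.conf; doing the same by hand looks roughly like the fragment below (the addresses are examples for a typical home network - substitute your own):

```
interface wlan0
static ip_address=192.168.1.50/24
static routers=192.168.1.1
static domain_name_servers=192.168.1.1
```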


Casing it out

Crafting a case and mount for this project presents some challenges. It needed to be flexible enough to allow the camera to be positioned to get a good shot of the baby while it sleeps.


Like all of the camera modules, the NoIR camera connects to the Raspberry Pi with a 16mm wide ribbon cable. Initially I had the idea to put the camera on the end of a short flexible pole with the Pi in a case at its base. This had a problem - having a small thing sticking out that a baby could fit in its mouth isn't a good idea. I had to keep the unit big enough to not present a choking hazard. The best way to go about it was to integrate the camera and the Raspberry Pi together in a case and pivot the whole unit to get the correct angle.


I used a hinged plastic mount designed for attaching a GPS to a car's windscreen. Attached to this is an official Raspberry Pi 3 case, mounted upside down. I drilled a hole in the lid of the case and used mounting tape to secure the camera module on the inside, letting the lens poke out slightly. The short ribbon cable flexed around to the socket on the Raspberry Pi. I also drilled holes and mounted two of these Infrared LEDs in the lid. I wired them in series, added the appropriate resistor and used female jumper leads to connect them to the 5V terminal on the GPIO header. It can be a tight fit under the lid, so it is important to be careful that no leads touch any other pins or the main board of the Raspberry Pi.
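For anyone re-creating the LED wiring, the "appropriate resistor" falls out of Ohm's law. A quick sketch (the forward voltage and current figures are typical IR LED datasheet values, not measurements from this build - check your own parts):

```python
# Size the current-limiting resistor for two IR LEDs in series on 5 V.
# The ~1.5 V forward voltage and 20 mA target current are typical IR LED
# datasheet figures - they are assumptions here, not measured values.
supply_v = 5.0
led_drop_v = 1.5 * 2          # two LEDs wired in series
target_current_a = 0.020      # 20 mA

resistor_ohms = round((supply_v - led_drop_v) / target_current_a)
print(resistor_ohms)          # -> 100, so a standard 100 ohm part fits
```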


The hinged mount is attached to a wall with adhesive picture frame pads - I didn't want to use suction cups as they can be unreliable for long term use. The case with the camera inside then faces downward from the wall over a baby's crib, and the hinge can be adjusted to get an optimal viewing angle.



To get the baby monitor system up and running I experimented with a few coding options. Python has extensive libraries for interfacing with both the standard and NoIR camera modules, including network streaming options. While coding for the camera in Python is extensively customizable, I found that using the server socket transmission commands had significant latency issues when streaming an HD feed over a network. The camera can also be driven with standard Linux terminal commands and scripting. That's efficient for basic functionality, but isn't well suited to a complex program like the one this project requires.


The solution I opted to use is the RPi Cam Web Interface suite. It is very flexible with configuration, but importantly it streams live video over a standard web interface - making it compatible with just about any smart device with a web browser. It also allows the Raspberry Pi to be shut down correctly with a button on the web interface. I didn't bother to put security on the video stream because it'll only be viewable on an encrypted WiFi network. The RPi Cam Web Interface does, however, support password prompt access and a range of permissions.


Before installation the camera needs to be enabled inside of Raspbian. Under Preferences in the main menu, load the Raspberry Pi Configuration program and set the camera option to Enabled. With the camera enabled, installing the Interface software is done by pasting the following lines into a terminal window.


git clone https://github.com/silvanmelchior/RPi_Cam_Web_Interface.git
cd RPi_Cam_Web_Interface
chmod u+x *.sh
./install.sh


After that you'll see a dialog box with some options. Unless you have specific requirements, hit Enter to start the installation with the default settings. After it has rebooted, access the software by entering the IP address of the Raspberry Pi into a browser. This can be done on the Pi itself or on any device connected to the same network.

Screen Interface

I played around with the video resolution, bitrate and framerate settings to get optimal performance over a wireless network. Devices like this are often placed far away from the router and through a few walls, so signal strength may not be the best.

  • Video res: 720x1280 - I found the sweet spot in resolution to be 720p HD. Opposite to what is normal, I used a vertical 720x1280 video frame to fill the screen of a smart phone when used in portrait mode.
  • Video fps: 20 - The default 25 frames per second is somewhat overkill for observing a baby that isn't moving very much while sleeping. Bumping the frame rate down to 20 still allows viewers to see if the baby is moving but slightly relaxes the demand it places on the wireless network.
  • Brightness: 60 - Increasing the brightness does wash out the picture a little, but gives more clarity in low light.
  • Exposure Mode: nightpreview - The NoIR V2 is good in low light, but turning the exposure to nightpreview cleans up the image just a little more.
  • Image Quality: 100, Preview Quality: 100 - Maxing out the image quality and preview quality didn't seem to have any impact on the streaming performance, but made the overall clarity just a little better. Setting the preview width to 720 allows the live video to be the full quality captured.
  • Motion detect mode: External

I changed the default on screen title to something a bit more appropriate - Baby Cam. I left the complete hours, minutes and seconds time stamp there. Having the seconds constantly counting gives a good indicator that the video feed is live and hasn't malfunctioned.


There are some settings I had to change manually outside of the web based UI, done by editing the file /etc/raspimjpeg.

  • Adding the line 'fullscreen true' to the bottom of the file. This makes the Index page of the camera interface default to full screen video, rather than showing the option buttons.
  • Changing the pre existing line motion_detection from false to true makes the system start motion detection automatically at boot.
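Taken together, the two manual edits leave a couple of plain key-value lines at the bottom of /etc/raspimjpeg:

```
fullscreen true
motion_detection true
```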


Motion sensing

I implemented a system to send alerts when the baby starts to get restless during nap time. The RPi Cam Web Interface has integrated motion sensing algorithms that are well suited to detecting both subtle and more obvious motion. I tuned through trial and error and ended up with the following settings, configured under the 'Edit motion settings' button on the default index page.

  • On_event_start: python /etc/home/pi/ - When motion is detected the named python script will be executed to send out an alert.
  • Threshold: 2100 - This is a tricky one. The motion detection sensitivity is defined by a number between 1 and 2147483647. Every use and scenario is different depending on the sensitivity required. Setting the number too low results in it being triggered with no perceivable motion at all. 2100 seemed to work well for me.
  • Lightswitch: 55 - Useful when the baby sleeps with the curtains slightly open on an overcast day. This option stops sudden changes in lighting - like the sun moving out from behind a cloud - from causing a false trigger.
  • Minimum_motion_frames: 3 - The number of consecutive frames that motion needs to be present in before the sensor is tripped. A baby doesn't move like The Flash, so setting it a little higher than 1 gives fewer false triggers.

Under the schedule settings I cleared the boxes under Motion Start and Motion Stop. These are used when recording is to start when motion is detected. For a baby monitor a live alert is required, not a video recording.


Unfortunately the motion service can have complications when executing some commands. Searching around, I found that it's a bug many users have encountered, with no great solution. If you find that the alert script isn't executing, starting the motion program out of daemon mode from the command line makes it function properly. It's a workaround rather than a fix, but it can be applied automatically every time the Raspberry Pi boots: edit .bashrc via the command 'nano ~/.bashrc', add the line 'motion -n' to the bottom of the file, and the problem should be avoided.

Sending alerts

I used the on_motion_start parameter to trigger and execute a Python script that sends out notifications. There are a range of different providers offering iOS and Android apps that can receive notifications from inside a Python script. I used Instapush, although other services work equally well. To use it, sign up on their website, create a new application, then paste the provided code with the unique appid and secret parameters into a Python script.


The Instapush API also needs to be installed on the Raspberry Pi, done simply with the following line in a terminal window.

sudo pip install instapush


I placed this script in the default /home/pi directory and named it ''.

import RPi.GPIO as GPIO
import time
import os.path
from instapush import Instapush, App
GPIO.setmode(GPIO.BCM)
GPIO.setup(4, GPIO.IN, pull_up_down=GPIO.PUD_UP)
input_state = GPIO.input(4)
if input_state == False and os.path.isfile('active') == False:
    open('active', 'a')
    app = App(appid='xxxxxxxx', secret='xxxxxxxx')
    app.notify(event_name='Baby_Monitor', trackers={'Baby': 'Louis'})
    time.sleep(600)
    os.remove('active')
  • Lines 1 through 4 import the necessary modules for the script.
  • Lines 5 to 7 set the numbering system for the GPIO pins on the Raspberry Pi, select a pin number and set it up for use as a switch trigger, then store its state in the variable 'input_state'.
  • Line 8 is an if statement to check two conditions:
    1. If the switch attached between GPIO pin 4 and a ground pin is active. I put a jumper to bridge pin 4 to the neighbouring ground pin. Removing the jumper disables the notifications from being sent.
    2. If a file named 'active' is in the current directory. This is part of the method used to ensure that notifications are only sent once every 10 minutes to prevent a huge flood of alerts being sent consecutively.
  • Line 9 creates a blank file titled 'active' in the current directory.
  • Lines 10 and 11 are the Instapush provided code to trigger the sending of notifications.
  • Line 12 waits for 600 seconds, or 10 minutes.
  • Line 13 deletes the file 'active'.
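One side note on the 'active' file approach: the sleep keeps the script busy for the whole ten minutes. A non-blocking sketch of the same rate-limit idea (my own variant, not part of the original script) uses the marker file's timestamp instead:

```python
import os
import time

COOLDOWN = 600  # seconds between alerts, matching the 10 minutes above

def should_notify(marker='active', now=None):
    """Allow at most one alert per COOLDOWN period, using the marker
    file's modification time rather than sleeping inside the handler."""
    now = time.time() if now is None else now
    if os.path.isfile(marker) and now - os.path.getmtime(marker) < COOLDOWN:
        return False              # an alert already went out recently
    with open(marker, 'a'):
        pass                      # create the marker if it doesn't exist yet
    os.utime(marker, (now, now))  # stamp the time of this alert
    return True
```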



Final thoughts

The NoIR V2 camera is a great way to get clear images in low light situations, and the increased fidelity of the V2 makes for great quality images and videos. Beyond monitoring sleeping babies, combining the Interface, the NoIR camera and a Raspberry Pi 3 can make a very sophisticated security camera system. As well as sending alerts, a relay could be added to the system to sound an alarm, turn on the lights in a room or lock a door with a solenoid. Nearly anything a Raspberry Pi can do can be triggered by detecting motion. Plus the whole unit is smaller, cheaper and yet higher fidelity than many commercial surveillance systems. Combined with a big source of Infrared light, such as an array of LEDs or a large bulb, the NoIR V2 camera could watch over a large area in absolute darkness.


If you have any questions about this project or the NoIR V2 in general, leave them in the comments below or hit me up on Twitter - @aaronights.

I’m building a home automation project which connects a Raspberry Pi to control my central heating. I wasn’t particularly happy ripping out all the existing controls, and wanted to piggyback onto them.. which helps if the Pi ever fails (I’ve still got the old controls to fall back on).


I also didn’t want to mess with the existing heating control board, so bought a duplicate unit (British Gas UP2) from eBay for about £12.. I can perfect the project on that, and install it when I’m ready.


This set of videos goes through each step of the project.. starting off with opening the control board, an overview of what I want to do, and testing out the changes.


Opening up the Control Panel


This was a bit tricky.. it wasn’t quite obvious which plastic clips needed pushing in to pull the board out.. if you were doing this on your actual panel (not an eBay-bought duplicate) then this video should help work out what you need to do to get into it without damaging anything.



Project Overview


Next up, I’ll quickly go over what I intend to do to piggyback onto the control board. There’s a project here which did exactly what I wanted to do. He’s not using a PiFace 2 like I intend to use, and he wants to be able to control the hot water as well, but everything else is the same.



Safety First – Masking off the High Voltage Area


In this second video, I’ll show how I’m masking off the high voltage area of the board to make it a bit safer when I’m testing things out. Obviously most of the time the board is off, but this helps keep things safer when it is on without the cover.



Identifying Solder Points


Luckily this blog gave me a good starting point, but it wasn’t clear where to get the status of the central heating.. I used a multimeter to find a spot which changed voltage when the system was on, and this diagram shows you what I found;





Since I only needed 4 wires for this project (2 for the switch, and 2 for the system state), I took an old USB cable, cut the ends off, stripped the wires and soldered it to the board without much trouble.



Soldering Complete!


This shows the control board after the soldering has been completed.. it’s pretty simple soldering; the only tricky part was finding the points to connect to for the system state (on/off). I’ve stuck down some of the wires so that they don’t catch or get stuck underneath the control board’s buttons.



Testing the Wiring


Now that I’ve done the soldering, I’m testing out the wiring.. seeing whether connecting the two wires for the switch turns the central heating on, and when it is on, whether we get voltage on the other two wires to indicate the system state.



Controlling from Software


I’ve now hooked it up to the Pi Face 2 board, which can be controlled with a few lines of Python to simulate a button press, and detect the state of the system. It wasn't strictly necessary to use a Pi Face 2.. I just happened to have one that I wanted to use in a project. One disadvantage of the Pi Face 2 was that it can't talk to a 1-wire temperature sensor, so I ended up soldering on a Pi Wingman to give me easy access to the unused GPIO pins.
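Those "few lines of Python" boil down to pulsing a relay and reading an input. A sketch of the button-press side (the relay index and input pin below are assumptions; all the function needs is an object with turn_on/turn_off methods, which pifacedigitalio's relays provide):

```python
import time

def press_button(relay, hold=0.5):
    """Simulate a momentary press of the panel's button by closing the
    relay soldered across the switch contacts for a short moment."""
    relay.turn_on()
    time.sleep(hold)
    relay.turn_off()

# With a real board this would be along the lines of:
#   import pifacedigitalio
#   pfd = pifacedigitalio.PiFaceDigital()
#   press_button(pfd.relays[0])                 # pulse the heating switch
#   heating_on = bool(pfd.input_pins[0].value)  # read the system state
```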




With these basics in place, the rest of the control software can be written to do scheduling, bring in temperature readings, and allow the system to be controlled remotely.



Software Architecture


One of the early design decisions for the Raspberry Pi powered heating controller was to have the Pi secured behind a firewall without direct access to it from the Internet. What I decided to do was have a set of simple PHP web pages on a remote web host that you can access from anywhere, and the Pi control server talks to that web host to send/receive data.


What I didn’t want was for the Pi to run a web server that ends up getting compromised & having the run of my home network.



The Pi server and remote webspace need to be paired with an access key. Anyone accessing the remote site needs the correct access key to be able to control the system.. and the level of control is limited by the API we’ll put in place.. i.e. remote clients won’t have direct access to your internal network via an open port on your home router.
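As a sketch of what that pairing looks like from the Pi's side, the control server can simply include the shared key in every request it makes to the remote host (the URL, key and parameter names here are placeholders, not the actual API):

```python
from urllib.parse import urlencode

REMOTE_BASE = 'https://example.com/heating/api.php'  # placeholder remote host
ACCESS_KEY = 'replace-with-paired-key'               # shared secret from pairing

def build_request_url(action, **params):
    """Build a request URL for the remote web host; the PHP side rejects
    any request whose access key doesn't match the one it was paired with."""
    query = {'key': ACCESS_KEY, 'action': action}
    query.update(params)
    return REMOTE_BASE + '?' + urlencode(query)

# The Pi-side control server might then poll for pending commands with e.g.
# urllib.request.urlopen(build_request_url('poll')) on a timer.
```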


Of course, you could actually host the ‘remote’ part of this set up on your Pi and use port forwarding; the architecture allows for both types of access. The access key is still needed to control the system, but you’ll be more vulnerable to attacks on your Apache/PHP installation & need to keep up-to-date with software patches to help ensure your system is secure.



I've made a fair amount of progress on a relatively simple set of scripts + PHP that makes this possible.. this video shows how it looks so far. The control part isn't hooked up, but it is able to accept a command from the front-end and pass it to the back end scripts for actioning. The temperature logging is working nicely.



PiSP Pocket

Posted by theluthier Apr 10, 2016

About 6 months ago, I stumbled onto the Ben Heck Show and binge-watched several episodes before deciding to try some hardware modding myself. Here's my first project: The "PiSP Pocket"! I.e. a Raspberry Pi crammed into a Game Boy Pocket. The Pi-inside-a-GBP isn't an original idea, but I think the dual analog sticks are unique, at least in terms of execution. I was inspired by this photoshopped image.

The specs:

  • Raspberry Pi 3 Model B
  • RetroPie 3.6
  • 2.2" SPI TFT display
  • 32 GB microSD storage
  • 3000 mAh battery with Powerboost 1000c (about 3-4 hours of battery life)
  • Battery status indicator circuit
  • 1 regular USB port, 2 micro USB ports (one for charging)
  • Original GBP power switch, D-pad, start, select, A & B buttons with additional L1, L2, R1, R2, X, Y buttons sourced from a TV remote (all interfaced via Teensy 2.0)
  • 2 PSP analog sticks
  • PWR and ACT LEDs
  • 2 mono speakers (no output from headphone jack unfortunately)

The rpi3 plays N64 perfectly, Dreamcast mostly perfectly, and PSP is surprisingly playable at a decreased frame rate. I'm quite impressed with the performance compared to the rpi2. It's been fun playing these systems on the nostalgic GBP form-factor. Though my adult hands get a skosh achy after a couple hours of playing heh.

Thanks for looking!





The Raspberry Pi camera is an awesome piece of kit and can really liven up your next project. But how can you get started with it? Well, let's grab a camera, a Pi 2 and a few cheap components and build three projects based around the camera.



Installing the camera is quick and easy. To start the installation, first locate the black connector marked CAMERA between the HDMI and Ethernet ports.

Carefully lift the top and bottom edges of the connector vertically; they will gently slide up and then stop when in place. Be careful, as the CAMERA connector is rather fragile - you only need to use a little pressure on it.

Remove your camera from the box and slide the ribbon connector into the CAMERA connector, ensuring that the blue edge faces the Ethernet port. Be careful handling the camera: it is rather fragile and sensitive to static. With the ribbon inside the connector, gently push the connector edges back down, locking the ribbon in place.



With the camera in place, boot up your Raspberry Pi and from the desktop open LXTerminal and type the following

sudo raspi-config

At the menu navigate to Enable Camera and press enter. In the next screen select Enable, and then navigate to Finish, which will prompt you to reboot your Raspberry Pi. Do this and allow the Pi to reboot, thus enabling your camera.

With the camera enabled, we next need to check that it has been configured correctly, and to do that we use the raspistill command in LXTerminal.

raspistill -o test.jpg

This will launch the camera and show a preview on the screen for a few seconds; compose yourself and it will take the picture. You can then open the picture via the file manager. It should be in /home/pi, or in the directory where you ran the command. If this does not work, check that you have connected the camera correctly and that raspi-config shows the camera as enabled. Remember: do not remove the camera from the connector while the Raspberry Pi is on, as it will damage the camera.

For the last setup step we shall install the Python PiCamera libraries so that we can hack the camera using Python.

In LXTerminal issue the following command

sudo apt-get install python-picamera python3-picamera

Once this is complete in LXTerminal type

sudo idle3 &

Project 1 - Take a picture with Python

Our first project is rather simple but it shows how to use the PiCamera library and gives us a quick introduction to the library.

What will you need?

Raspberry Pi 2

Raspberry Pi Camera


At this time the application has opened the shell, where we can issue commands / test logic on a line by line basis. We really need to be in the editor, so click on File and New Window to launch an editor window.

As soon as the editor window is open, click on File and Save and name the file - anything BUT picamera.py, as that would clash with the library we are about to import. Saving now is a good practice to get into, as it means that any subsequent saves are handled quickly.

So we start our code by importing two Python libraries.

Time - To control the speed of our project

PiCamera - To use the camera with Python

import time
import picamera

So with the libraries in place we now turn our attention to creating the main body of code. We start by using

camera = picamera.PiCamera()

To rename the picamera library into something more manageable, in this case “camera”.

Then we create a preview of the image, in the same way that your mobile phone shows a preview of the scene before the image is taken. This preview stays on screen for 5 seconds before capturing the image to the Desktop, and lastly the preview window is closed, ending the project.

camera.start_preview()
time.sleep(5)
camera.capture('/home/pi/Desktop/image.jpg')  # filename is an example
camera.stop_preview()

When ready, save the code and then click on Run >> Run Module.

The code will start a preview of the picture to be taken, wait 5 seconds so you can compose yourself and then take the picture saving it to the desktop. Then the preview will end.

When the camera is active, you will see the red light illuminate in the corner of the board.

So how did your picture come out? Was it upside down? Too dark or light?

Well PiCamera has a few features that can be tweaked.

Rotation - You can easily rotate an image in 90 degree segments by using

camera.rotation = 180

This will flip the image upside down.

Saturation - Add more or less color to your picture, values can be between -100 and 100

camera.saturation = 50

Brightness - Tweak your image if it is too dark or bright. Values are between 0 and 100

camera.brightness = 50

Resolution - Create images at different resolutions. Values are entered as width and height, so a 1920x1080 image is 1920, 1080

camera.resolution = (1920, 1080)

There are loads of tweaks that you can make and for the full list head over to Dave Jones’ great resource


Project 2 - Take a picture using a button

What will you need?

Raspberry Pi 2

Raspberry Pi Camera

Push Button / Switch


Male to Female Jumper Wire x 2


Before continuing please ensure that you have followed the above setup instructions.

Taking a picture at the touch of a button is something that we take for granted thanks to mobile phone technology and cheap consumer electronics. But in this project we will deconstruct the process and create our own push button camera using a few common electronic components.

We start this project by attaching the hardware to the Raspberry Pi.




With the hardware attached our focus shifts to the code, more specifically the Python code that will power this project.

To start open LXTerminal and type in the following, remember to press enter at the end of the line.

sudo idle3 &

This will open the Python 3 editor, commonly referred to as IDLE. At this time the application has opened the shell, where we can issue commands / test logic on a line by line basis. We really need to be in the editor, so click on File and New Window to launch an editor window.

As soon as the editor window is open, click on File and Save and name the file - anything BUT picamera.py, as that would clash with the library we are about to import. Saving now is a good practice to get into, as it means that any subsequent saves are handled quickly.

So we start our code by importing three Python libraries.

Time - To control the speed of our project

PiCamera - To use the camera with Python

RPi.GPIO - To use the GPIO pins with Python

import time
import picamera
import RPi.GPIO as GPIO

With the libraries added, save your work before progressing.

Next we setup the GPIO. Firstly we configure the pins to use the Broadcom pin mapping (see diagram), which is not a logical layout; rather, it breaks out the pin numbering from the Broadcom System on a Chip (SoC) that powers the Pi.

GPIO.setmode(GPIO.BCM)

We use a variable to store the pin number on to which our push button is connected.

button = 17  # pin number is an example - use the pin your button is wired to

Then we setup the button to be an input and to be set high, in other words turned on at the start of the project.

GPIO.setup(button, GPIO.IN, pull_up_down=GPIO.PUD_UP)
camera = picamera.PiCamera()  # the camera object used in the loop below

We now create an infinite loop to constantly look for the button to be pressed; when that happens, the camera code is launched.

while True:

Now we make a few configuration changes to the camera settings, firstly changing the resolution, and then the saturation and brightness.

    camera.resolution = (1920, 1080)  # example values - tweak to taste
    camera.saturation = 50
    camera.brightness = 50

Now we have the button detection code, this will look for a change to the GPIO pin attached to the button and when the pin goes from high to low the preview screen will come to life, wait for 5 seconds and then capture your photo before closing the preview and waiting for another button press.

    GPIO.wait_for_edge(button, GPIO.FALLING)
    camera.start_preview()
    time.sleep(5)
    camera.capture('/home/pi/Desktop/button.jpg')  # filename is an example
    camera.stop_preview()

With the code complete, save it, then go to Run >> Run Module. Wait a few seconds and then press the button to trigger the camera into life. Project 2 is complete.

Project 3 - Take a picture using Minecraft


What will you need?

Raspberry Pi 2

Raspberry Pi Camera

Minecraft is not only a great game, it is also a great source of Pi projects, and here is a camera triggered in Minecraft that you can code in less than 30 minutes.

For this project you will need to close any Python 3 windows that you may have open. Currently the Minecraft Pi API only works with Python 2.

Then open LXTerminal and type the following.

sudo idle &

Then click on File >> New Window to open a new editor window.

We start as ever with importing the libraries that make this project possible.

Time - To control the speed of our project

PiCamera - To use the camera with Python

Mcpi - To link Python with Minecraft

import time
import picamera
from mcpi import minecraft

Next we create a link between our Python code and Minecraft. We create a variable called “mc” and that stores “minecraft.Minecraft.create()”, so whenever we use “mc” it tells Python to replace it with the long string of text.

mc = minecraft.Minecraft.create()
camera = picamera.PiCamera()  # created once and reused for each capture

So we now reach the main body of our code, this is the loop that will constantly check our position in the Minecraft world. So we start with the loop, a simple infinite loop called “while True” and we then create a variable called pos and store the player's current position in the world.

while True:
    pos = mc.player.getPos()

Now we create a conditional statement that will check our current position and compare it to a hard-coded value, in this case checking that our location on the X axis is -7.0. When this is true, a message is posted to the chat window before reusing the Python code that we wrote earlier to take a picture.

    if pos.x == -7.0:
        mc.postToChat('Taking a picture!')  # message text is an example
        camera.start_preview()
        time.sleep(5)
        camera.capture('/home/pi/Desktop/minecraft.jpg')  # filename is an example
        camera.stop_preview()

Save your code, but do not run it yet. Navigate to the Raspbian menu, go to Games and select Minecraft Pi.


After a few seconds Minecraft Pi will be on screen; go ahead and create a new game and then a new world. When the game starts you will be dropped near the X, Y, Z coordinates 0,0,0. To move around use the W A S D keys, and to look around use your mouse. Spacebar is used to jump. Get a feel for the controls and then head to -7.0 on the X axis - you can see your location as a coordinate in the top left of the window.

Once there, open the inventory by pressing E on your keyboard, then find the signpost tool and left click on it to use it. In the game world, place a signpost to mark where -7.0 is.


With that done, press TAB on your keyboard to release the mouse from Minecraft and navigate back to our Python code. Click on Run >> Run Module to start the project. Navigate back to Minecraft, the easiest way is to click on the window to bring it into focus.

Now move to -7.0 again and the camera should spring into life!

So there we have it. Three projects all using the PiCamera in a different way. What can you do with the PiCamera and the Raspberry Pi 2?

All of the code for these projects can be found at

Need to know the distance to the sun in centimeters, find out what the weather will be tomorrow, or turn on the lights using your voice? Or perhaps you just need someone to talk to at night when working on your projects? With Amazon's Alexa Voice Service on the Raspberry Pi Zero, this is now a reality, at a very affordable price!


Using a Raspberry Pi Zero, a USB sound card, speaker, microphone and a huge button, I created my personal assistant!



For instructions on how to reproduce this build, have a look at the blog post on my personal website: Running Amazon Echo (Alexa) on Raspberry Pi Zero – Frederick Vandenbosch



New age gaming is all about new technology and advanced peripherals, and has given rise to things such as motion gaming. Yet the old arcade games, with their big machines, large buttons, and 8-bit graphics and sounds, bring back so many memories of childhoods past that sometimes need to be relived. The proposed project is about breathing life into the old, bulky arcade machine that was such a great source of entertainment (with a bit of nostalgia). “Punch It Up” is an arcade-based game with a lot of physical exercise included, and is more or less a fighting game. With the help of large punchable buttons, you get to fight your opponent while standing right beside them in a virtual fighting game. This concept is rooted in the classic “big game” form factor but takes advantage of the programming capabilities of the Raspberry Pi 3, as well as its extensive processing power, to make things more new age.


How it works



Figure 1. Basic Game Layout


The game is designed with each player having one joystick and a set of 4 buttons, arranged as shown in Figure 1, that can be punched to activate in-game functions, giving the player a physical feel for the game. The button pads are designed with a cushion to protect your hand when you punch them, and they sit on top of regular arcade buttons. The pads are removable if you want to revert to the original arcade button feel. When playing, the joystick moves your avatar around to dodge attacks, while the punching buttons land punches and kicks on your opponent. Synchronise the punches/kicks and the position of your avatar for a successful ‘KO’ of your opponent.





There are two plans of action for this project. The first involves using RetroPie to play classic arcade games like Street Fighter. Only this time, instead of a gamepad, you literally punch buttons to make the characters punch in game. Buttons 5 and 6, however, will be made part of a wired joystick which the user will hold in the hand. For combos, the user can keep one button pressed while punching a second, thereby executing more advanced moves.



As shown in the diagram above, we can use a wired soft stick with two buttons conveniently placed near the thumb.



For the second phase of implementation, one of our team members will try to implement a game of our own where we take pictures of our opponent and use them in the game. The total screen is split into 8 columns and 2 rows, with a 2x2 matrix available to each player. The user's image is displayed on the opponent's screen in one of the 4 squares. The opponent can punch one of the 4 buttons to punch into one of the 4 squares; hit the right square and you get a point. The user can move his image within the 4 squares using the joystick to avoid getting punched. The game is to punch the opponent while avoiding getting punched yourself.
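The dodge-and-punch rules could be sketched as pure game logic before any hardware is wired up. This is only an illustration of the idea, not the team's actual code; the (column, row) coordinates and direction names are my own assumptions.

```python
def punch(target_square, opponent_square):
    """Score a point only when the punched square matches the one
    currently showing the opponent's image."""
    return target_square == opponent_square

def move(square, direction, cols=2, rows=2):
    """Move the player's image within their 2x2 grid; square is (col, row),
    clamped at the grid edges so the image cannot leave the matrix."""
    col, row = square
    if direction == "left":
        col = max(0, col - 1)
    elif direction == "right":
        col = min(cols - 1, col + 1)
    elif direction == "up":
        row = max(0, row - 1)
    elif direction == "down":
        row = min(rows - 1, row + 1)
    return (col, row)
```

A game loop would then poll the four arcade buttons for punches and the joystick for moves, scoring with punch() and dodging with move().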



Risks and challenges

The only challenge here is making the enclosure sturdy enough to take the punches. We will be enlisting the help of a carpenter to fabricate something that can really take it.



We are proposing a twist on the original arcade game form factor while adding a new arcade game of our own. It's our first experience with such a project, and once we finish this one, we can really kick things up a notch.




Punch It Up Team


g++, shubhamsharda, shwetankv007, ipv1

I've been asked to look into "enchanting" some objects for a workshop up in Leeds at the end of the month so I thought I'd do some research into different tools to use.



I've already used Python, which has good support for the GPIO with libraries such as Ben Nuttall's GPIO Zero. GPIO Zero: Developing a new friendly Python API for Physical Computing - Ben Nuttall


I had a play with that and it is simple and intuitive. There's lots going on under the surface but the API is clean and simple.
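To give a taste of why the API feels clean, here is a minimal sketch. The blink helper works on anything with on()/off() methods, which is exactly the kind of object GPIO Zero hands you; BCM pin 17 is an assumption about the wiring, not something from my setup notes.

```python
from time import sleep

def blink(led, period=1.0, times=3):
    """Toggle anything that exposes on()/off(), e.g. a gpiozero LED."""
    for _ in range(times):
        led.on()
        sleep(period / 2)
        led.off()
        sleep(period / 2)

# On a Pi, with an LED wired to BCM pin 17:
#   from gpiozero import LED
#   blink(LED(17))
```

All the pin setup, PWM handling, and cleanup stay hidden inside the LED object, which is the "lots going on under the surface" part.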


I looked at using Scratch and was rather disappointed. The implementation for interacting with the GPIO is via an external service and a "broadcast" block with "magic" strings. This seems a horrid kludge; I was expecting to see at best some motor/LED control blocks, or at worst some low-level pin blocks.




I was also aware of Node-RED; I'd never tried it out, but knew it was good for controlling dinosaurs.




I knew that I was also going to need some hardware. Given that the event I was going on was only a day long it made sense to look for some off the shelf components. I wanted something that had screw terminals and would be easy for people to use.


I looked at a couple. Firstly the PiFace: this seems a capable board, but was out of stock at the time I was ordering.

PiFace – PiFace Digital


There was also the RasPiRobot board from Simon Monk (of Maker's Guide to the Zombie Apocalypse fame). I got one of these ordered to try out and wired it up to an old motor / gearbox.

RasPiRobot V3 | MonkMakes

Getting started with Node-RED

I did experiment with running Node-RED on my Windows laptop, and it is possible to get it to work. However, it's pre-installed on the latest Raspbian image, so you can just fire it up from the console with node-red. I was running my Pi headless, so I did that over SSH.

Once you've set it going, wait a few seconds and then you can open up a web browser to connect on the default port of 1880.

The flow-based approach is a bit different if you are a coder, but if you've an electronics background it might actually seem more intuitive; it did not take me long to get used to how it worked.


LED Example

My first experiment was to turn the 2 LEDs on and off. I created a couple of "inject" nodes, as those come with buttons to press on the left. One was configured to send a 1 and the other a 0. I then added some GPIO nodes and a function node coded to invert the signal. These were wired up as follows. The only "issue" I found was that Simon's pins are referenced by the BCM (Broadcom) numbering, not the pin numbers on the header. Once I'd fixed that, the flow worked fine.


Source - Library - Node-RED


PWM Control

The first issue I had when I tried to create a new flow was that the pins were already in use by the previous flow. I saved that flow off and deleted it, restarted Node-RED, and was then able to add the new flow.

The flow has 5 switches that pass the values 0 to 100 to the pin, which is configured for PWM control. I found that the brightness did not seem linear, so I selected specific values for the inputs to compensate for that.
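Rather than hand-picking values, one common way to compensate for this non-linearity is gamma correction, mapping a perceived-brightness level onto a duty cycle. The sketch below is an illustration of that idea in Python, not part of the original flow, and the gamma value of 2.2 is a conventional assumption rather than anything measured for this LED.

```python
def perceptual_duty(level, gamma=2.2):
    """Map a perceived brightness (0-100) to a PWM duty cycle (0-100)."""
    return round(100 * (level / 100.0) ** gamma, 1)

# A "half brightness" input of 50 yields a duty cycle well below 50,
# because the eye is far more sensitive to changes at low light levels.
```

The five switch values in the flow are effectively doing this mapping by hand.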



Source - Library - Node-RED


Motor Control

The motor control is a combination of these two approaches. However, because of the control requirements of the H-bridge chip, I needed to create a more complex flow to convert from on/off/back/forward into the pin values for the chip. This was handled by a subflow and some flow-specific variables.


The fast and slow injectors have a message type of "Speed"; this is detected by the split message node and passed straight to output 3 to control the speed pin.

Go and Stop have a message type of "On"; this is also picked up by the split message node and sent to the on and off nodes. Off turns off both outputs 1 and 2; On passes to the get direction function, which reads the direction variable and then generates an H/L or L/H depending on the value.

The forwards and backwards nodes have a type of "Direction", so they are split out to the flow at the bottom. These save the value of the direction to a flow-specific variable. This flows on to the get direction node, which handles the direction in the same way as the On flow.
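In code terms, the on/off/direction mapping the subflow performs could be sketched as a single function. This is my own illustration, not the contents of the flow; the (input1, input2) naming assumes a typical H-bridge with two direction inputs plus a separate enable pin for speed.

```python
def h_bridge_levels(on, direction):
    """Map on/off plus 'forward'/'backward' to (input1, input2) levels."""
    if not on:
        return (0, 0)    # both low: motor stopped
    if direction == "forward":
        return (1, 0)    # H/L drives the motor forwards
    return (0, 1)        # L/H drives it backwards
```

The flow-specific variable plays the role of the direction argument here, remembered between messages so that Go can pick the right pair of levels.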



Source - Library - Node-RED
