
Raspberry Pi Projects


Raspberry Pi Projects - How does your IoT garden grow?

Join Les Pounder as he works on his IoT Garden! Watch him integrate technology into his garden and maintain the health of his favorite plants.

Check out our other Raspberry Pi Projects on the projects homepage


Can our garden water itself?

In this project we continue our quest to keep the garden well watered, but this time we start fresh with a new project... a self-watering garden!

 

Re-using some of the kit from Project 1, we introduce relays, 12V circuits and a peristaltic pump that will water the garden based on readings from Project 1's soil moisture sensor. All we need to do is keep a water butt full of water, either through rain or grey water collection!

 

IMG_20170920_151615.jpg

 

For this project you will need

 

Building the hardware

Aside from our Pi Zero W, the main player in this project is the Rasp.IO Analog Zero board, which provides us with an analog to digital converter, the MCP3008. Yes you can buy the chip on its own for only a few dollars / pounds, but the Analog Zero board is a convenient form factor that offers a “less wires” alternative.

The main sensor we are using is a simple soil moisture sensor from Velleman. It is a simple analog sensor which connects to the 3V and GND pins on the Analog Zero, with its output connected to A0. The output from the sensor is a voltage from 0V to 3.3V (as we are using the 3.3V power from the Pi Zero GPIO). If there is no conductivity, i.e. the soil is dry, then no voltage is conducted; if the soil is wet then the soil will most likely conduct all of the voltage.

 

The other part of the project is a relay, used to control the 12V circuit for our peristaltic pump, which will pump water from a water butt to our plants using a rotating motion to “squeeze” the water through the connected plastic hose. The relay is controlled from the GPIO of our Pi: we connect the relay board to 3V and GND, and its input to GPIO17.

 

The Analog Zero will take a little time to solder, and we shall also need to solder the pins for I2C and solder the 3V and GND pins for later. Once soldered, attach the Analog Zero to all 40 pins of the GPIO and then connect the sensor and relay board as per the diagram. You will also need to provide the 12V power supply to the peristaltic pump. The + connection from the 12V supply goes to the relay, via the normally open connection, the icon looks like an open switch.

 

DSC_2803.JPG DSC_2806.JPG IMG_20170920_151134.jpg

 

Build the project so that the wiring is as follows.

 

Circuit.png

 

Now connect up your keyboard, mouse, HDMI, micro SD card, and finally power up the Pi Zero W to the desktop. You will need to set up WiFi on your Pi Zero W, and make a note of the IP address for future use. Now open a terminal and enter the following command to configure the SPI connection.

 

 

 

sudo raspi-config

 

 

Yes we can use the GUI “Raspberry Pi Configuration” tool found in the Preferences menu, but having raspi-config available to us over an SSH connection is rather handy should we need it.

 

 

Once inside raspi-config, we need to navigate to “Interfacing Options” then once inside this new menu go to the SPI option and press Enter, then select “Yes” to enable SPI. While not strictly necessary, now would be a great time to reboot to ensure that the changes have been made correctly. Then return to the Raspbian desktop. With the hardware installed and configured, we can now move on to writing the code for this project.

 

Writing the code

To write the code for this project we have used the latest Python editor, Thonny. Of course you are free to use whatever editor you see fit. You will find Thonny in the Main Menu, under the Programming sub-menu.

 

We start the code for this project by importing two libraries. The first is the GPIO Zero library, used for simple connections to electronic components; from it we import the MCP3008 class for our Analog Zero board and DigitalOutputDevice, a generic class to create our own output device. The second is time, which we use to add delays to our code.

 

from gpiozero import MCP3008, DigitalOutputDevice
import time

 

 

Now let's create two objects. The first, soil, is used to connect our code to the Velleman soil moisture sensor, connected to A0 on the Analog Zero board, which is channel 0 on the MCP3008 ADC. Our second object, relay, is a connection to the relay, which is triggered as a generic output device on GPIO17.

 

 

soil = MCP3008(channel=0)
relay = DigitalOutputDevice(17)

 

 

Moving on to the main part of the code we create a loop that will constantly run the code within it. Inside the loop the first line of code creates a variable, soil_check. This variable will store the value passed to it by the MCP3008, which is handled via the soil object. As this value is extremely precise we use the round function to round the returned value to two decimal places.

 

 

while True:
    soil_check = round(soil.value, 2)

 

 

Next we print the value stored in the variable to advise the user on the soil moisture level, handy for debugging the code! Then the code waits for one second.

 

 

    print('The wetness of the soil is', soil_check)
    time.sleep(1)

 

 

To check the soil moisture level we use an if conditional test. This will test the value stored in the soil_check variable against a hard-coded value. In this case 0.1 was found to be very dry soil, but of course you are free to tinker and find the value right for your soil. If the soil is too dry then the condition passes and the indented code below it is executed.

 

 

    if soil_check <= 0.1:

 

 

 

So what is the code that will be run if the condition is met? Well, remember the relay object that we created earlier? We are going to use that object to turn on the relay, effectively closing the open switch and enabling the 12V circuit to be completed. This will bring the peristaltic pump to life and pump water to the plants. For testing we set the time to two seconds, but in reality this will be much longer, depending on the length of hose that the water needs to pass through. When enough water has been passed we need to turn off the relay, cutting the 12V circuit. The code then waits for 10 seconds before the loop repeats. Again, these times are in seconds for test purposes, but in reality they would be in minutes.

 

        relay.on()
        time.sleep(2)
        relay.off()
        time.sleep(10)
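
For reference, here is the complete loop assembled from the snippets above (a minimal sketch using the test timings; save it as self_watering.py for the steps that follow):

from gpiozero import MCP3008, DigitalOutputDevice
import time

soil = MCP3008(channel=0)        # Velleman sensor on A0 / MCP3008 channel 0
relay = DigitalOutputDevice(17)  # relay input on GPIO17

while True:
    soil_check = round(soil.value, 2)
    print('The wetness of the soil is', soil_check)
    time.sleep(1)
    if soil_check <= 0.1:   # very dry soil; tune for your garden
        relay.on()          # close the relay, completing the 12V pump circuit
        time.sleep(2)       # pump run time (minutes in real use)
        relay.off()
        time.sleep(10)      # pause before the next check (minutes in real use)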

 

So that’s it, we have now built and coded the project and it is ready to be tested. To test the code in Thonny, click on the “play” button located in the menu, or press F5. As there is no conductivity between the prongs of the soil moisture sensor, the code will trigger and start to water the plants, so obviously be careful with this!

Once checked, place something conductive between the two prongs and you will see that the output is just printed to the Python shell and no watering is triggered. When you are finished press the red stop button to halt the code.

 

So now that we have code, how can we make it executable? In order to do this there are two steps to take. First we need to add a line to the top of our Python code which tells the operating system where to find the Python interpreter.

 

 

 

#!/usr/bin/python3

 

 

 

With that complete, we now need to go back to the terminal, and we need to issue a command to make the Python file executable from the terminal. The command is:

 

 

 

sudo chmod +x self_watering.py 

 

 

 

Now in the same terminal, launch the project by typing:

 

 

./self_watering.py

 

 

 

Now the project will run in the terminal, checking our soil moisture levels and watering as necessary!

 

So how can we have the code run on boot? Well this is quite easy really. In the terminal we need to issue a command to edit our crontab, a file that contains a list of applications to be run at a specific time/date/occasion. To edit the crontab, issue the following command in the terminal.

 

 

sudo crontab -e

 

 

If this is the first time that you have used the crontab, then it will ask you to select a text editor, for this tutorial we used nano, but everyone has their favourite editor!

 

With crontab open, navigate to the bottom of the file and add the following line.

 

 

@reboot /home/pi/self_watering.py

 

 

Then press Ctrl + X to exit; you will be asked to save the file, so select Yes.

 

Now reboot the Pi Zero W and for now ensure the soil moisture sensor has no connection between the prongs. After about a minute, the project should be running, and your pump should start pumping water into the plants.

 

Power down the Pi Zero W, place it in a waterproof container along with a USB battery power source, and ensure the soil sensor is out of the box. Place the project in your garden, and make sure the soil moisture sensor is firmly in the ground. Power up the Pi Zero W, and now your garden can water itself!

You may have seen my blog post about creating a small portable media center that I can easily take on holiday to hook up to the hotel TV. If not, you can find it here:

 

Raspberry Pi powered media center to take on holiday

 

To reduce the amount of space it took up, I used a cheap USB keypad which could be used to control the media center. It worked really well & having something hard-wired meant I didn't have to worry about a Bluetooth-paired device needing re-pairing.

 

However, what I then realised was it would be good to be able to use a spare remote control instead. I was using the OpenElec distribution and looked through their documentation for how to do this, but only found references to version 3 of the software (it's on version 7) and how to get LIRC working with it. There were plenty of blog posts on hooking up IR support, but a lot of them were written 2-3 years ago, and the software has moved on somewhat.

 

Hardware Setup

 

What I did first was buy a suitable IR receiver. I chose the Vishay TSOP4838 (which costs less than £1) because of the voltage range (2.5-5.5v) and receiver frequency (38kHz). If you look at the datasheet for the product, you'll see which pins should get wired up to the Pi:

 

 

Simply wire pin 1 to GPIO 18, pin 2 to GND, and pin 3 to a 3.3v power pin.
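
For reference, on Raspbian-based images of this era the GPIO receiver is enabled with a device tree overlay in /boot/config.txt; OSMC's "LIRC GPIO Support" option (used later in this post) takes care of this for you, but by hand it would look something like the line below, with the pin number matching the wiring above:

dtoverlay=lirc-rpi,gpio_in_pin=18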

 

 

By using some short F-F jumper wires and a small cut in the side of the case, I was able to position the receiver neatly(ish) on the side.. it's still easily removable, but you could integrate it into the case a bit more seamlessly than this.

 

 

 

 

Software Setup

 

Before this project I was using OpenElec, but had limited success getting the IR support working properly. I switched to OSMC which I'd read had better IR support through the main UI. I think I was actually on the right track with OpenElec, but I realised later that the old vintage Xbox remote I was trying to use wasn't 100% working.

 

If you're going to use a remote control that's officially recognised, then you can skip the next part about learning IR remote control codes.

 

Learning IR remote commands

 

The remote I found in the loft was an old DVD player remote which (unsurprisingly) wasn't in the list of pre-recognised remotes in the OSMC installation. I needed to get the Pi to learn the IR pulses being sent out by the remote and map them to the Kodi functions.

 

1. First off, you need to telnet to the Pi. Username: osmc, Password: osmc.

 

2. Next you need to stop the LIRC service which is being locked/used by Kodi

 

sudo systemctl stop lircd_helper@lirc0

 

3. Now you can run the IR learn mode.. this will record what it finds to the config file you specify:

 

irrecord -d /dev/lirc0 /home/osmc/lircd.conf

 

4. Follow the on-screen instructions which will recognise your remote.

 

One observation I had was that this only worked properly if I stopped after the first prompt to press lots of keys on the remote.. if I completed the second stage, the key mapping didn't work.

 

If I ignored the second phase & let it abort, the learn process worked

 

 

When it's working, you'll be able to enter the Kodi function (like KEY_UP, KEY_DOWN, etc.) & map it to a key press on your remote:
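
The resulting lircd.conf pairs each Kodi key name with the code captured from your remote. As a purely illustrative example (the remote name and hex values here are placeholders, not real captures), the file looks roughly like this:

begin remote
  name  dvd_remote
  # (timing parameters generated by irrecord omitted)
  begin codes
      KEY_UP      0x40BF58A7
      KEY_DOWN    0x40BFD827
      KEY_OK      0x40BF22DD
  end codes
end remote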

 

Once you've mapped all the functions you want, we then need to move back to OSMC and tell it to use that config file we've just written.

 

OSMC Settings

 

In OSMC you need to do the following;

 

1. Disable the CEC service (via System Settings > Input > Peripherals > CEC Adapter), which seems to be needed for LIRC to work.

2. Now go into OSMC settings and pick the Raspberry Pi icon

 

 

3. Go into Hardware Support and enable LIRC GPIO Support. You shouldn't need to change anything if you connected the sensor to GPIO 18.

 

 

4. Now go back and select the Remote Control option:

 

 

5. Ignore the list of pre-installed remotes and select Browse:

 

 

6. Navigate to the folder where LIRC wrote your config file:

 

 

7. Confirm the change & reboot the box:

 

 

That should be it.. your remote should be able to control everything in Kodi.

Here is my Raspberry Pi Wireless project to display daily Flickr Explore photos on a used Apple Cinema Display:
https://atticworkshop.blogspot.com/2017/08/raspberry-pi-zero-wireless-photoframe.html

See my wireless motion detection system after this link; it's pretty awesome! Or, carry on...

 

Disclaimer: I’m an engineer, not a pro film maker. Be advised.


 

 

This project will retrofit an old washer or dryer to alert you via text message when the clothes are done.

 

With the IoT market hot right now, many appliances have applications in this realm. Recently we have seen internet-connected cooking appliances and refrigerators. Of all the appliances in a house, the ones that have remained mostly the same in their process are the washer and dryer. Most people dread using these machines because, like baking, you have to wait and tend to the process when needed. With a washer, if you leave your clothes in there for too long without transferring them all to the dryer, you risk having your clothes start to smell like mold or dry out, in which case you have to rewash them. If you leave your clothes in the dryer for too long, they will wrinkle, in which case you have to send them for another heated spin. Ideally, the clothes get transferred to the dryer as soon as the washer is done, and taken out of the dryer and folded or hung as soon as the dryer is done.

People are either too busy or don't hear the buzzer when it's done. These days, people are better at responding to their phone than to the washer or dryer. At this point, most washers and dryers can only remind you with a buzzer or chime, which is short and sweet: easy to forget or not hear at all. To make life easier, why can't that buzzer or chime reminder be a text message, something we are all now very good at responding to?

 

I based this project on another I did some time ago using a BeagleBone, but now it's ported to a Raspberry Pi since the Pi 3 has built-in WiFi. I had to try it.

 

For this project we used a Raspberry Pi 3 to text your phone. Yes, that's all you need to send a text. Most people don't realize that you can send a text (SMS) via email. So by hooking up the Raspberry Pi 3 to WiFi and using an email server, we can send a text via email. The cell phone carriers have provided an easy way to do this.

 

This Popular Mechanics article lists the ways to do this for most carriers:

http://www.popularmechanics.com/culture/web/how-to/a3096/email-to-text-guide/

You address it the same way you address an email: number@insertcarrierhere.com, using your carrier's gateway domain from that list. For example, Verizon uses number@vtext.com and AT&T uses number@txt.att.net.

 


Parts:

Raspberry Pi 3

MMA8452Q 3-Axis Accelerometer

USB Battery Pack (Any external pack will work, here is an OK one.)

MicroUSB Cable

2 Industrial Magnets

1 Rocker switch

1 Panel Mount LED

Project Box

 

------------------------------------------------------------------------------------------------------------------------------

The Schematic:

Pi washer texter schematic.JPG

The schematic is simple. The accelerometer is attached to the Raspberry Pi 3 with four project wires.

 

The hardest part is OPTIONAL: adding an indicator LED and on/off switch. Technically, you can just plug the Pi 3 into the USB battery every time you want to use it. But if you want an easy fire-and-forget kind of device, put in that switch and LED!

 

The build:

How this is built doesn’t matter. At all.

 

All I did was slap the components inside a project box (enclosure). Any shape of box will do, as long as it all fits inside. However, with my build I wanted to mitigate a few issues:

- I wanted to mount the accelerometer as rigidly as possible inside the box. This is to make sure that most of the movement it senses is from the machine it is attached to.

- I used two large rare-earth magnets to make sure it attaches to the washer/dryer as firmly as possible. Since the whole system works off of the idea the machine will have some vibration it can sense, it’s best to make sure it doesn’t get shaken off the machine!

- Portability and temporary use needed to be considered. I didn’t want to attach the sensor system to the machines permanently. I would only use it once a week or so anyway. Then I can turn it off and store it.

 

For those who want to see how I put it all together, see the following gallery:

 

Gallery: Raspberry Pi 3 washer dryer texter

20170703_180436.jpg

The main components are attached to the lid of the enclosure, since it is easier to attach standoffs.

20170703_180427.jpg

The battery and the magnets are hot-glued to the bottom of the main enclosure compartment.

20170703_180421.jpg

Although project wires are long, they do not interfere with the battery.

20170703_203520.jpg

The micro-USB connection that powers the Pi 3 is spliced inside the box for the on/off switch and the LED/resistor.

20170703_203527.jpg

20170703_203532.jpg

20170703_180427.jpg

20170703_174015.jpg

This is the complete system enclosed in the box and turned on.

 

 

Function:

We created a wireless box that attaches to your washer or dryer via magnets. A switch and LED on the top of the box let you turn the device on and off and indicate whether it is on. The user simply turns on the device before starting the washer or dryer, and turns it off when they retrieve their clothes after the machine is finished with the load. The device works by detecting whether the washer/dryer is on or off by reading its vibration. There are of course cycle changes in a washer and dryer, where the motor stops for up to 30 seconds or so, that would fool the device into thinking it is off. A timer is implemented in the code to determine if the washer/dryer has stopped for 1 minute. Since a cycle change takes less than 1 minute, it only sends a text after 1 minute of no activity, ensuring that the washer/dryer is done with that load. If the washer/dryer starts back up within that 1 minute, it continues to read the vibration until there has been 1 minute of no activity before sending a text. To measure whether the washer/dryer is on or off, the accelerometer measures the X axis of the three axes provided. This is because the X axis is the horizontal plane of the surface of the dryer, which moves the most.

 

The vibration of the washer/dryer is a side-to-side motion and less of an up-and-down one, so we only need to use the X axis for measurement. There is a subroutine that takes 50 readings in 10 seconds, i.e. a reading every 200ms. After 10 seconds of readings, the subroutine returns the current state of the device: whether or not the accelerometer X axis numbers are in range of the baseline, which is taken when the device is first turned on and the washer/dryer is off. The 50 readings are compared and calculated, and it is determined whether the X axis values are in-range or out-of-range. In-range values indicate that the device does not detect vibration, so the system is in standby mode, waiting for the appliance to start. Once the device detects vibration, the mode is set to ON and the cycle and timing detection starts. When a cycle ends, the vibration readings will go “in-range”, the cycle check mode starts and the 1 minute timer begins. If there is no further activity, the device goes into Finish mode and sends a text, then returns to standby to wait for another start.
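
As a rough illustration of that logic, here is a minimal Python sketch of the sampling loop and one-minute timer described above (the function names and the in-range threshold are my own placeholders, not values from the actual project, and sendText() is the routine shown in the Code section below):

import time

def read_x_axis():
    # placeholder: return one X-axis reading from the accelerometer
    raise NotImplementedError

def vibration_detected(baseline, threshold=0.05):
    # take 50 readings over 10 seconds (one every 200ms) and report
    # whether the X axis moved out of range of the power-on baseline
    readings = []
    for _ in range(50):
        readings.append(read_x_axis())
        time.sleep(0.2)
    return max(abs(r - baseline) for r in readings) > threshold

baseline = read_x_axis()   # captured once, with the machine off
running = False
quiet_since = None

while True:
    if vibration_detected(baseline):
        running = True        # the machine is on; reset the quiet timer
        quiet_since = None
    elif running:
        if quiet_since is None:
            quiet_since = time.time()
        elif time.time() - quiet_since >= 60:
            # quiet for a full minute: cycle changes are shorter, so the load is done
            sendText()
            running = False
            quiet_since = None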

 

Code:
The texting part works by sending an email from the Raspberry Pi 3. Since we are using email, we need an email handling service like Gmail to send the email, which gets translated into a text by the carrier. To set this up you need an email account to log in to; for example, in Python:


import smtplib

#Assign sender and receiver, X represents the digits of the phone number
sender = 'youremail@gmail.com'
receiver = 'XXXXXXXXXX@vtext.com'

#Next we create the message:
header = 'To: ' + receiver + '\n' + 'From: ' + sender
body = 'Laundry is DONE!'
signature = '- Sent From Rpi'

#Load the gmail server and port into the object "mail"
mail = smtplib.SMTP('smtp.gmail.com', 587)

#A subroutine that logs in with your gmail address and password, then sends the text
def sendText():
    mail.ehlo()
    mail.starttls()
    mail.ehlo()
    mail.login('youremail@gmail.com', 'password')
    mail.sendmail(sender, receiver, '\n\n' + body + '\n\n' + signature)
    mail.close()


Running the sendText() function will send your text with the initialized variables loaded into it.


WiFi connection:

The Python code for this project was written in Raspbian using the Python 3.4.2 IDE. VNC Viewer was used to view the Raspberry Pi's desktop, with VNC server installed on the Raspberry Pi. Once the box is connected to WiFi, you can use the Raspberry Pi's IP address to SSH into it with a terminal program like PuTTY, or use VNC Viewer to see the Raspbian desktop. Typing “ifconfig” into the terminal gives you the IP address of the Raspberry Pi on the WiFi network.

-Screenshot of the Raspbian Desktop, showing python code in python 3.4.2 IDE and terminal



I2C Library:

Python has a couple of different libraries for i2c communication. For this project we used smbus for Python 3.4.2, which requires us to install the python3-smbus package by typing “sudo apt-get install python3-smbus” into the terminal. The project code uses the function calls bus.write_byte_data(device address, register address, data to write) and bus.read_i2c_block_data(device address, start register, number of bytes to read). The variable “bus” is assigned at the beginning of the program with bus = SMBus(1), for easier writing in the code; SMBus(1) tells the library we are using i2c bus number 1 to read and write on, which the Raspberry Pi uses by default. We write to the MMA8452Q chip's configuration registers over the i2c bus to configure the chip for use, in particular the register that puts the accelerometer into active mode, so we can read values from the digital output registers to retrieve the acceleration data on the Raspberry Pi. bus.read_i2c_block_data() lets us read the first 7 registers into an array. We then take the X acceleration data from the X output registers and parse it out into variables.
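
Putting those calls together, a minimal sketch of the read path might look like this (the 0x1D address assumes the SparkFun breakout's default, and the scaling assumes the default ±2g range; the register numbers are from the MMA8452Q datasheet):

from smbus import SMBus

ADDR = 0x1D        # MMA8452Q default address (0x1C if the SA0 jumper is grounded)
CTRL_REG1 = 0x2A   # control register; bit 0 toggles standby/active

bus = SMBus(1)     # i2c bus 1, the Raspberry Pi default

# put the accelerometer into active mode so the output registers update
bus.write_byte_data(ADDR, CTRL_REG1, 0x01)

def read_x_g():
    # read status + XYZ output registers (the first 7 registers) into an array
    data = bus.read_i2c_block_data(ADDR, 0x00, 7)
    raw = (data[1] << 8 | data[2]) >> 4   # 12-bit left-justified X value
    if raw > 2047:                        # two's complement sign correction
        raw -= 4096
    return raw / 1024.0                   # 1024 counts per g at +/-2g

print(read_x_g())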

Accelerometer Connection:

The accelerometer is powered with 3.3V from the Raspberry Pi and communicates via I2C. The Python program writes to the configuration registers, setting up how the data should be presented and configuring the mode you want to use. For this project we used the XYZ mode, where the device is polled over I2C and the X axis values are translated into g's (acceleration). The Python program reads the values from the accelerometer's registers, and all mathematical translation from the raw values to the units used to determine its state is done in the Python program.

- Sparkfun MMA8452Q

- Alternative Accelerometer here

 


Program Running Terminal Screenshots:

 

The program is run via ssh and executed using “sudo python3 txter.py”

-Running the Python script from the terminal, displays the hardware and variables being initiated

-Showing the readings, current state, and mode

-Showing the device reading the appliance vibration, meaning it's ON

-Starting the 1 minute timer when the values are back in range.

-Checking if the stop of vibration was a cycle change or finished

-Checking timer to see if 1 minute has passed without activity.

-1 minute has passed without activity meaning the laundry load is finished, sending text


Text Received:

Screenshot_20170703-185313.png

  • Screenshot of received text on phone

This project is about a digital picture frame aimed at family members, such as grandparents.

 

The idea is that parents taking pictures of their children, can easily share those pictures with the children's grandparents by making them appear on the picture frame automatically. In turn, the grandparents can "like" the pictures, letting the children's parents know which pictures are their favourites.

 

By making use of a specific software platform called resin.io, multiple instances of this picture frame can be deployed for various family members, without hassle.

 

Screen Shot 2017-08-29 at 18.34.39.png

 

Features

 

The project makes use of different services. Here's an overview:

 

Screen Shot 2017-08-28 at 16.51.45.png

 

The picture frame offers following features:

  • simple user interface to navigate the pictures, start a slideshow or like a picture
  • periodically download pictures from a shared Dropbox folder
  • send push notifications whenever a picture is liked
  • turn the picture frame's display off every evening, and back on every morning

 

Let's take a closer look at the software and hardware for this project, and how you can build your own connected picture frame.

 

Hardware

 

The following hardware components are used in this project:

 

Assembly is super easy, following these steps:

  1. Mount the Raspberry Pi 3 to the Raspberry Pi Touchscreen
  2. Connect the jumper wires from the screen's board to the Pi for power
  3. Slide the Touchscreen assembly through the enclosure's front bezel
  4. Screw everything in place

Do not insert the microSD card or power on the frame yet, as the software needs to be prepared first.

Image-1 (1).jpg

 

Software

 

The complexity of the project is in the software. Let's break it down.

 

resin.io

 

Resin.io makes it simple to deploy, update, and maintain code running on remote devices, bringing the web development and deployment workflow to hardware: tools like git and Docker allow users to seamlessly update all their embedded Linux devices in the wild.

Resin.io's ResinOS, an operating system optimised for use with Docker containers, focuses on reliability over long periods of operation and easy portability to multiple device types.

To know more details about how resin.io works, be sure to check out this page: How It Works

Sign up for a free account and go through the detailed Getting Started guide. From there, you can create your first application.

 

Application Creation

 

Setting up a project requires two things:

  • application name: ConnectedFrame
  • device type: Raspberry Pi 3

 

Screen Shot 2017-08-26 at 21.38.55.png

 

After completing both fields and creating the application, a software image can be downloaded for the devices to boot from. The useful part is that the same image can be used for every device involved in the project. Select the .zip format, which will result in a file of about 400MB, as opposed to 1.8GB for the regular .img file.

Screen Shot 2017-08-26 at 21.38.45.png

Before downloading the image, connectivity settings can be specified, allowing the device to automatically connect to the network once booted. Enter the desired SSID and matching passphrase.

 

Flashing SD Card

 

Once the image specific to the application is downloaded, it needs to be flashed to a microSD card for the Raspberry Pi to boot from.

 

There is a tool available for doing just that, by the same people from resin.io, called Etcher. It works on Mac, Linux and Windows, is simple to use and gets the job done.

Screen Shot 2017-08-26 at 21.50.54.png

 

Launch Etcher and select the downloaded image file. Etcher should automatically detect the SD card; all that remains is to click the "Flash" button.

 

The SD card is ready to be inserted in the Raspberry Pi.

 

Configuration & Environment Variables

 

Some Raspberry Pi configuration changes are typically made by editing the /boot/config.txt file. Resin.io allows users to do this via the user interface, by defining Device (single device) or Application (all devices) Configuration Variables.

 

In config.txt, pairs of variables and values are defined as follows: variable=value

 

Using the Device/Fleet Configuration, the variable becomes RESIN_HOST_CONFIG_variable and is assigned the desired value.

 

For example, rotating the LCD touch screen is normally done by appending lcd_rotate=2 to /boot/config.txt. As a configuration variable, this becomes RESIN_HOST_CONFIG_lcd_rotate with value 2.

Screen Shot 2017-08-26 at 18.01.22.png

 

Another type of variable is the Environment Variable, which can again be defined at application or device level.

 

Screen Shot 2017-09-03 at 09.57.08.png

 

These environment variables can be used by the operating system, such as "TZ" which is used to set the correct timezone, but also by scripts.

 

The following environment variables are used by the connected frame Python script:

  • DISPLAY: display to use for the Tkinter user interface
  • DROPBOX_LINK: link to dropbox shared folder
  • IFTTT_KEY: personal IFTTT webhooks key to trigger notifications
  • DOWNLOAD_INTERVAL_HOURS: interval in hours to download photos from the dropbox folder
  • CAROUSEL_INTERVAL_SECONDS: interval in seconds to automatically switch to the next photo
  • FRAME_OWNER: the name of the person the frame belongs to, used to personalise the "like" notification

 

Most are to be set at application level, though some variables such as FRAME_OWNER are specific to the device.

The link to the shared dropbox folder ends with "?dl=0" by default. This has to be changed to "?dl=1" in the environment variable, to allow the application to download the pictures.
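
Inside the Python script, these would be read like any other environment variables; a minimal sketch (the default values here are my own illustrative choices, not the project's):

import os

dropbox_link = os.environ['DROPBOX_LINK']    # must end in "?dl=1" as noted above
ifttt_key = os.environ['IFTTT_KEY']
frame_owner = os.environ.get('FRAME_OWNER', 'someone')
download_hours = int(os.environ.get('DOWNLOAD_INTERVAL_HOURS', '6'))
carousel_seconds = int(os.environ.get('CAROUSEL_INTERVAL_SECONDS', '30'))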

 

Application Deployment

 

I've been developing a Python application using Tkinter to create the graphical interface for the picture frame.

The layout is simple: four interactive buttons (two on each side), with the picture centralised.

 

Deploying an application with resin.io requires some additional files, defining which actions to perform during deployment and which command to use to start it. The full code and accompanying files for this project can be found on GitHub.

 

You can clone the repository for use in your resin.io application, reproducing the exact same project, or fork it and modify it as you desire!

 

git clone https://github.com/fvdbosch/ConnectedFrame 
cd ConnectedFrame/

 

In the top right corner of your resin application dashboard, you should find a git command. Execute it in the cloned repository.

 

git remote add resin gh_fvdbosch@git.resin.io:gh_fvdbosch/connectedframe.git

 

Finally, push the files to your resin project:

 

git push resin master

 

If all went well, a unicorn should appear!

Screen Shot 2017-08-26 at 17.55.45.png

 

In case of problems, a clear error message will appear, telling you what exactly went wrong.

 

IFTTT

 

"IFTTT" stands for "If this, then that" and is an online platform that enables users to connect triggers and actions for a plethora of services.

 

For this particular project, the webhooks service is used to trigger notifications to the IFTTT app on a smartphone.

Screen Shot 2017-08-28 at 21.19.27.png

 

The trigger is part of the code and needs to remain as is, though the action could be modified to suit your own personal needs.

 

Demo

 

Enough with the theory, let's see the frame in action!

 

 

What do you think? Is this something you could see family members use? Let me know in the comments!

Raspberry Pi Projects - How does your IoT garden grow?

 

Project1B.JPG

 

Gardening is a delightful hobby, but it can be a chore, and one particularly bothersome job is watering the garden. Water it too often and the plants can die; too little and they can also die! But surely technology can offer a solution to this age-old problem? Well yes it can, and in this tutorial we shall be using the Raspberry Pi Zero W, the low-power Pi with added Bluetooth and WiFi, along with two sensors: a soil moisture sensor to check if our garden needs water, and an Adafruit Flora UV sensor to measure the ultraviolet index of the sun. This data is then emailed to our preferred inbox via a Gmail account that we use to send the messages on demand.

 

pebble.jpg

Our soil moisture sensor will need an analog to digital converter to convert its analog voltage into something that the Pi can understand. In this case we shall be using the MCP3008 via an add-on board from Rasp.IO called Analog Zero.

 

So let's get started by looking at the bill of materials for this project.

 


All of the code, and a detailed circuit diagram can be found on the Github page for this project. (Zip Download link)

 

Building the hardware

 

 

Rasp.io Analog Zero

Aside from our Pi Zero W, the main player in this project is the Rasp.IO Analog Zero board, which provides us with an analog to digital converter, the MCP3008. Yes you can buy the chip on its own for only a few dollars / pounds, but the Analog Zero board is a convenient form factor that offers a “less wires” alternative.

 

Velleman Soil Moisture Sensor / Adafruit Flora UV Sensor

The two sensors that we are using are a simple soil moisture sensor from Velleman, and an Adafruit Flora UV sensor based upon the SI1145 IC. The moisture sensor is a simple analog sensor, hence the use of the Analog Zero, but the Flora UV sensor uses I2C so we need to have access to those GPIO pins, which luckily the Analog Zero provides.

 

The Analog Zero will take a little time to solder, and we shall also need to solder the pins for I2C and solder the 3V and GND pins for later. We will also need to solder wires from the Flora UV sensor to attach to our Analog Zero.

Once soldered, attach the Analog Zero to all 40 pins of the GPIO and then connect the sensors as per the diagram.

 

Diagram of the circuit

 

Now connect up your keyboard, mouse, HDMI, micro SD card, and finally power up the Pi Zero W to the desktop. You will need to set up WiFi on your Pi Zero W, and make a note of the IP address for future use. Now open a terminal and enter the following command to configure the SPI and I2C connections.

 

 

sudo raspi-config

 

 

raspi-config-main.png

Yes we can use the GUI “Raspberry Pi Configuration” tool found in the Preferences menu, but having raspi-config available to us over an SSH connection is rather handy should we need it.

 

 

Once inside raspi-config, we need to navigate to “Interfacing Options” then once inside this new menu go to the SPI option and press Enter, then select “Yes” to enable SPI. Then do the same for the I2C interface.

 

raspi-config-SPI.png

 

While not strictly necessary, now would be a great time to reboot to ensure that the changes have been made correctly. Then return to the Raspbian desktop. With the hardware installed and configured, we can now move on to installing the software library for this project.

 

Getting started with the software

We only have one software library to install for this project, and that is a Python library for working with the SI1145 sensor present on the Flora UV sensor. But before we install that library we need to ensure that our Pi Zero W has the latest system software installed. So open a terminal and type the following to update the list of installable software, and then install the latest software.

 

 

sudo apt update && sudo apt upgrade -y

 

With the software updated we now need to run another command in the terminal; this command will download the Python library for our UV sensor.

 

 

git clone https://github.com/THP-JOE/Python_SI1145

 

 

Now change directory to that of the library we have just downloaded.

 

cd Python_SI1145


Now we need to run an install script so that the library will be available for later use. In the terminal, type:

 

sudo python3 setup.py install


This library was designed to work with Python 2 but it installs cleanly and works with Python 3. But if you try out the built in examples you will need to add parentheses around the print statements, in line with Python 3 usage.

 


We can now close the terminal and instead let's open the Python editor, Thonny, found in the main menu under the Programming sub menu.

Once Thonny opens, immediately save your work as soil_uv.py

 

As ever with Python, we start by importing the libraries that we shall be using.

The first library is GPIO Zero, the easy-to-use Python library for working with the GPIO. From this library we import the MCP3008 class that will enable us to use the Analog Zero. Our second library is time, and we shall use that to add delays to our code, otherwise the project will spam our mailboxes with emails! Our third library is the SI1145 library that we shall use to read the data from our UV sensor. The fourth, fifth and sixth libraries are used to send emails: smtplib enables Python to send an email, and the email.mime libraries are used to construct one.

 

from gpiozero import MCP3008
import time
import SI1145.SI1145 as SI1145
import smtplib
from email.mime.multipart import MIMEMultipart
from email.mime.text import MIMEText


So we now move on and create two objects, one that is used to create a connection to the Flora UV sensor, called “sensor”. The other object is called “soil” and here we make a connection to the soil moisture sensor that is currently connected to A0 (Channel 0 of the MCP3008) via the Analog Zero.

 

 

sensor = SI1145.SI1145()
soil = MCP3008(channel=0)

 

 

Our main body of code is a while True loop that will ensure that our code runs continuously. Our first few lines in this loop will read the UV sensor and store the value into a variable, UV, which is then divided by 100 to give us a UV index value, which is then printed to the Python shell for debug purposes.

 

 

while True:
    UV = sensor.readUV()
    uvIndex = UV / 100.0
    print('UV Index:        ' + str(uvIndex))

 

 

In order to get the reading from our soil moisture sensor, we first need to make a new variable called soil_check, and in this variable we store the value that is being sent to A0 / Channel 0 of the MCP3008. Typically this would be read as a voltage, with 100% conductivity providing 3.3V, but in this case the MCP3008 class from GPIO Zero returns a float value between 0.0 and 1.0, with 0.0 being no conductivity, so dry soil, and 1.0 meaning we have perfect conductivity and probably an over-watered garden. You will also notice that we round the figure to two decimal places, as the value returned from the soil moisture sensor is quite precise and goes to many decimal places. We then print this value to the Python shell before sleeping for a second. For the purposes of testing the delay between checks is rather small, but in real life this delay would be measured in hours.

 

 

    soil_check = round(soil.value,2)
    print('The wetness of the soil is',soil_check)
    time.sleep(1)

 

 

So now that we have the data, let's use it. For this we need an if condition to check the value stored in soil_check against a hard-coded value. I used 0.1, but you are free to alter this to suit the plants / garden that you have. I wanted to know if the soil became really dry, so any value equalling or lower than 0.1 will trigger the alert.

 

    if soil_check <= 0.1:


Now we start to construct the email that will be sent should the alert be raised. The first part of any email is to say who the email is from and who it is being sent to.

 

        fromaddr = "YOUR EMAIL ADDRESS"
        toaddr = "EMAIL ADDRESS TO SEND TO"


Next we construct our email as a MIME multipart message; in other words, we can add more content to our email than a standard email. For this project we use multipart to enable the use of a subject line, but it could also be used with attachments such as video / images. Here we set it up with our from and to email addresses, and then we set the subject of the email.

 

        msg = MIMEMultipart()
        msg['From'] = fromaddr
        msg['To'] = toaddr
        msg['Subject'] = 'Garden requires water'

 

The next line we come across forms the body of our email, and it is made up from the readings taken by our sensors. These values are stored as floats in the variables soil_check and uvIndex, and we use concatenation to add them to a string, readings, which is then stored in body.

 

        readings = 'Soil is ' + str(soil_check) + ' wet and the UV index is ' + str(uvIndex)
        body = readings

 

 

Then we attach all of the email contents ready to be sent.

 

 

        msg.attach(MIMEText(body, 'plain'))

 

 

In order to send the message we need to have a connection to the Gmail server.

 

 

        server = smtplib.SMTP('smtp.gmail.com', 587)

 

 

We now need to ensure that our connection is secure, so we use Transport Layer Security.

 

        server.starttls()

 

 

Now let's log in to our Gmail account; obviously you will need to use your own account details. (Note that a Gmail account with two-factor authentication will need an app password here.)

 

 

        server.login(fromaddr, "PASSWORD")

 

 

Our next step is to create a new variable text which will contain our email message converted to a string.

 

 

        text = msg.as_string()

 

 

We can now finally send the email using our email address, the address of the recipient, and the text that we have just converted.

 

        server.sendmail(fromaddr, toaddr, text)

 

 

Our last two lines of code close the connection to the Gmail server, and then instructs the project to wait, in this case for 10 seconds, but in reality this value will be much longer, otherwise you will receive lots of email spam!

 

 

        server.quit()
        time.sleep(10)
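
Before testing, it is worth seeing the whole thing in one place. Here is the complete listing assembled from the snippets above (with the placeholder addresses and password still to be filled in):

from gpiozero import MCP3008
import time
import SI1145.SI1145 as SI1145
import smtplib
from email.mime.multipart import MIMEMultipart
from email.mime.text import MIMEText

sensor = SI1145.SI1145()
soil = MCP3008(channel=0)

while True:
    UV = sensor.readUV()
    uvIndex = UV / 100.0
    print('UV Index:        ' + str(uvIndex))
    soil_check = round(soil.value, 2)
    print('The wetness of the soil is', soil_check)
    time.sleep(1)
    if soil_check <= 0.1:
        fromaddr = "YOUR EMAIL ADDRESS"
        toaddr = "EMAIL ADDRESS TO SEND TO"
        msg = MIMEMultipart()
        msg['From'] = fromaddr
        msg['To'] = toaddr
        msg['Subject'] = 'Garden requires water'
        readings = 'Soil is ' + str(soil_check) + ' wet and the UV index is ' + str(uvIndex)
        body = readings
        msg.attach(MIMEText(body, 'plain'))
        server = smtplib.SMTP('smtp.gmail.com', 587)
        server.starttls()
        server.login(fromaddr, "PASSWORD")
        text = msg.as_string()
        server.sendmail(fromaddr, toaddr, text)
        server.quit()
        time.sleep(10)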

 

 

So that’s it, we have now built and coded the project and it is ready to be tested. To test the code in Thonny, click on the “play” button located in the menu, or press F5. Now as there is no conductivity between the prongs of the soil moisture sensor the code will trigger it to send an email. So check your inbox to see if it has arrived. Once checked, place something conductive between the two prongs and you will see that the output is just printed to the Python shell and no email is triggered. When you are finished press the red stop button to halt the code.

 

So now that we have code, how can we make it executable? In order to do this there are two steps to take. First we need to add a line to the top of our Python code which tells the operating system where to find the Python interpreter.

 

 

#!/usr/bin/python3

 

 

With that complete, we now need to go back to the terminal, and we need to issue a command to make the Python file executable from the terminal. The command is:

 

 

sudo chmod +x soil_uv.py

 

 

Now in the same terminal, launch the project by typing:

 

 

./soil_uv.py

 

 

Now the project will run in the terminal, printing output to the shell, and the emails should start to be sent as there is no connection between the two prongs of the sensor.

 

So how can we have the code run on boot? Well this is quite easy really. In the terminal we need to issue a command to edit our crontab, a file that contains a list of applications to be run at a specific time/date/occasion. To edit the crontab, issue the following command in the terminal.

 

 

sudo crontab -e

 

 

If this is the first time that you have used the crontab, then it will ask you to select a text editor, for this tutorial we used nano, but everyone has their favourite editor!

 

With crontab open, navigate to the bottom of the file and add the following line.

 

@reboot /home/pi/soil_uv.py

 

Then press Ctrl + X to exit; you will be asked to save the file, so select Yes.

 

Now reboot the Pi Zero W and for now ensure the soil moisture sensor has no connection between the prongs. After about a minute, the project should be running, and your inbox should start to receive emails.

 

WorkingProject.jpg

Power down the Pi Zero W, place it in a waterproof container along with a USB battery power source, ensure the soil sensor is out of the box, but keep the UV sensor inside the box, oh and make sure the container is transparent! Place the project in your garden, and make sure the soil moisture sensor is firmly in the ground. Power up the Pi Zero W, and now you can wait for your garden to tell you when it needs watering.

 

Next time...

In the next blog post in this series, we shall build a system to automatically water our garden when an alert is triggered.

The Raspberry Pi 3 has the ability to rapid-fire pulses out of its GPIO. It's surprising how fast! In other words, it can handle a lot of tasks.

 

If the only thing the Pi is expected to do is spin a stepper or servo motor really fast, this is good news. Theoretically, the Pi can send pulses to the motor driver faster than almost any driver could accept them. The problems start when the Pi is expected to control and monitor numerous devices, all while maintaining exact timing. There just isn't time to do it all exactly when you want it done.

 

The Solution

The solution to the problem is to offload some tasks to another device. That is exactly what we did here with a Raspberry Pi 3 and two Arduino UNOs (Figures 1 & 2). The Pi and the UNOs are connected via an i2c bus. The Pi functions as the bus master and the UNOs function as slave nodes. With an arrangement like this, each slave node is told what to do with a uniquely addressed packet of data. The slaves handle control of the motors long after the Pi has sent them their instructions, leaving it free to do something else. If the Pi were handling everything itself, it would need to use clever algorithms and more advanced coding methods like timer interrupts to simultaneously control the two motors.

 

A logic level converter allows the 5-volt UNOs to safely communicate with the 3.3-volt Raspberry Pi 3. Also, notice the two pull-up resistors on the SDA and SCL lines of the i2c bus. There are several articles on the internet that investigate how the value of these resistors affects the rise time of an i2c signal. 10KΩ is the value we had at hand, and over the short distances involved and at the relatively slow speed of 400Kbps, it worked perfectly. You may want to research this topic for your own projects.
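
As a rough rule of thumb: between the 30% and 70% measurement points the I2C spec uses, the rise time of a pull-up-driven line works out to about 0.85 × R × C. With 10KΩ pull-ups and the few tens of picofarads a short breadboard bus presents, that is a few hundred nanoseconds, which is why this setup works fine over short runs; longer wires or faster bus speeds call for smaller resistor values.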

 

schematic 1.JPG  

Figure 1: The wiring diagram.

 

DSC06073.JPG

Figure 2: Raspberry Pi 3 connected to Arduino UNOs via an I2c bus. Note the red logic level converter board on the breadboard. It is required for bidirectional communications between the 5v UNO and the 3.3v Raspberry Pi.

 

 

The code explained a bit.

(Code is attached to this post, link at the bottom)

 

A short length of Python code runs on the Raspberry Pi. This code relies on the smbus library to automate the transmission of data out onto the i2c bus. The function bus.write_i2c_block_data(address, cmd byte, array) takes three arguments. The address argument is the address of the slave node you want to transmit to. The cmd byte is a special reserved byte that always gets sent first. The array argument is where you put the main bulk of the data you want to send, up to 32 bytes; it will be transmitted immediately after the cmd byte is sent.
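
A minimal sketch of the master side, using that call (the slave address and the byte order of the payload here are my own illustrative assumptions; the code attached to this post defines the real protocol):

import smbus

bus = smbus.SMBus(1)   # i2c bus 1, the Pi's default

NODE_ADDRESS = 0x04    # example address for one UNO slave node
DIRECTION_CW = 0x01    # bit 0 of the cmd byte carries the direction of rotation

def send_move(address, cmd_byte, steps, delay_ms):
    # pack two 16-bit values (step count, delay between steps) into
    # a 4-byte array: high byte then low byte for each value
    payload = [steps >> 8, steps & 0xFF, delay_ms >> 8, delay_ms & 0xFF]
    bus.write_i2c_block_data(address, cmd_byte, payload)

send_move(NODE_ADDRESS, DIRECTION_CW, 1000, 5)   # 1000 steps, 5ms apart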

 

Each UNO slave uses the wire.h library to handle the i2c interface. You will see a lot of print commands sending data to the serial monitor; these can be removed without consequence, as they are only used for keeping an eye on what the code is doing.

 

Every few seconds the Python code addresses each slave node in turn and gives it instructions. The first bit of the cmd byte indicates the direction of motor rotation (Figure 3). The remaining bits are unused, although they can be used to communicate anything you want. The 4 bytes in the array communicate the number of steps for the motor to take and the time to wait between them.

 

bit #    7    6    5    4    3    2    1    0
value    X    X    X    X    X    X    X    1

Bits 7 through 1 are unspecified; bit 0 is the direction bit.

Figure 3: The first bit of the cmd byte indicates whether the UNO should spin the motor clockwise or counter-clockwise. The other bits aren’t used for anything and are ignored by the code.

 

Data is stored in a byte array because the i2c bus transmits data a byte at a time. An unsigned byte can only store values from 0 – 255. To get around this bottleneck, we treat two consecutive array elements as the high and low bytes of a 16-bit integer value (Figure 4). These bytes are sent out and then recombined into an integer variable at the slave node using a clever bit of code from the C51 compiler site. It allows access to and manipulation of the high and low bytes of an integer variable with a couple of macros. You can see the macro definitions and how they appear in the Arduino sketch in Figure 5. They perform this trick by creating pointers to the bytes within an integer value.

 

There are other ways to do this but this is a very quick way of accomplishing the task.

 

byte array.jpg

Figure 4: Integer values are split up into a byte array.

 

 

macros 1.JPG

Figure 5: These macros from the C51 site allow us to manipulate the high and low bytes within an integer variable.

 

 

Conclusion

You aren’t obligated at all to use any of the hardware or the libraries presented here. You don’t even have to use i2c, another serial protocol can be used. This is a concept that can be applied across systems and platforms. Also, our example uses Arduino UNOs, but there is no reason that a smaller board like an Adafruit Trinket or an Arduino Mini couldn’t be used in its place. A robot with an UNO at each motor might look a little silly.

 

For truly exact timing, a real-time clock could be added to the i2c bus. The master and slave nodes can then read the exact time and use that data to precisely time events.

 

The protocol we devised for this example of using the cmd byte in conjunction with 4 more bytes from an array to communicate step count, time between steps and direction of rotation can be modified to meet your requirements. Each of your slave nodes could be doing far more than spinning a motor and as such, may require 20 or more bytes to effectively communicate instructions. Whatever form that protocol takes is up to you. Do not feel obligated to do exactly as we did. This article is only meant to demonstrate a technique as simply as possible and to give the reader a solid starting point.

 

Finally, our example uses two slave nodes but there is no reason why more couldn’t be added. A lot more in fact: up to 127 with standard 7-bit addressing, and with some adjustment to the addressing scheme there is no reason you couldn’t put 1000 devices on the bus. Not that you’d need that many. If you do, post a comment. We all are going to want to know what the heck you are up to.

 

 

Explanation of video:

What you see here are the two UNO i2c nodes controlling their respective motors after they have received instructions on how to do so from the master node. Those instructions were sent, received and decoded before the motors even started to move. If the Raspberry Pi were left to control these motors, it would be occupied with that task for the duration of each motor movement. By offloading the job of controlling the motors, the Pi has more resources to dedicate to something else, like running a GUI.

 

 

Have a story tip? Message me at: cabe(at)element14(dot)com

http://twitter.com/Cabe_Atwell

Project Iron Pi Zero Mark II

 

  • Raspberry Pi Zero v1.3 or Zero W
  • Raspberry Pi 8MP NoIR Camera v2
  • Adafruit Power Boost 1000C with Adafruit Approved Lipo Battery
  • Mini USB Microphone
  • UV, IR, and RGB LED
  • Laser

 

 

 

DSC_1109.JPG

 

 

 

DSC_1099.JPG

 

 

DSC_1102.JPGDSC_1103.JPG

DSC_1108.JPG

 

 

     First activation of Iron Pi Zero Mark I with RPi Zero v1.3.

 

 

FB_IMG_1499001114482.jpg

 

 

 

 

FB_IMG_1499001128239.jpg

FB_IMG_1499001170046.jpg

FB_IMG_1499001159969.jpg2 (1).JPG

 

  Thanks for checking out my project. Stay tuned for updates.

 

Trent

This post will cover the basics of how I use a 3D printed camera mount to take photos of the moon using a Raspberry Pi cam. My telescope has a 1.5 inch eyepiece but you can download the 3D printable file and adjust the size to fit your needs. I am using a Raspberry Pi 3 running Raspbian, with a regular Pi cam. Then, using an Android phone app, I can take pictures and view them on my phone seconds later.

1492258660637.jpg

I plan to take this setup to some remote locations. I chose to use a GoPro flex arm attachment and SmartiPi case for now so I can quickly attach and remove the Raspberry Pi from the telescope. I picked up the flex arm for around $12 on Amazon, and the SmartiPi case was about $25 from MCM Electronics with a GoPro mount built in. These work for now, but maybe bring some zip ties to keep the Raspberry Pi attached; remember the telescope will be slewing up and down, and the Pi may try to fall off!!

GOPR3454.JPG

 

I use my phone as a WiFi hotspot to create a network to connect the phone to the Pi in remote locations. I've already set up Raspbian on the Pi and pre-configured the network connection, so it connects to the phone hotspot automatically. I've had the best results using the Android phone app RaspiCam Remote. It has a video mode that allows me to see what the telescope sees so I can line up the shot. Try using apps like VNC Viewer for remote desktop access; you can then run Python scripts to take custom photos. Or use the app RaspManager: with its built-in photo system we can take pictures and download them to the phone quickly, but there is no preview to line up your shot. Below are photos of the moon taken with the 8MP Pi cam!! Very happy with the results; now to take this setup into the mountains for some clearer skies.
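
If you go the custom-script route, a minimal picamera sketch looks something like this (the resolution matches the 8MP V2 camera, and the ISO and shutter values are just starting points to tune against your own scope):

from picamera import PiCamera
from time import sleep

camera = PiCamera()
camera.resolution = (3280, 2464)   # full resolution of the 8MP V2 module
camera.iso = 100                   # the moon is bright, so keep the gain low
sleep(2)                           # let the sensor settle before shooting
camera.shutter_speed = 2000        # in microseconds; adjust to taste
camera.capture('/home/pi/moon.jpg')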

 

For more pictures and even a video visit It'sAliveHobbies.com Thanks for reading!

We recently went on holiday and I took my laptop & VGA cable with me. It was my intention to hook it up to the TV and play some media on it to keep the kids happy on rainy days (pretty essential in the UK!). It turned out the TV had the VGA port covered up by the mounting bracket, so we ended up putting the laptop on a chair and watching videos from there; it did the job, but there was a perfectly good TV I could have used.

 

At home we have a Fire TV Stick that runs Kodi, but the problem with Fire TV is that it has to have an internet connection, otherwise it refuses to run any apps. I'd rather not have to tether it to my phone all the time.

 

Next time I'll be more prepared; I've put together a compact and flexible setup consisting of a Raspberry Pi 2 running OpenElec (and Kodi) together with a set of cables allowing me to hook it up to pretty much any TV. The Pi runs Kodi really well; the OpenElec distribution boots really quickly & has good WiFi and Bluetooth support.

 

Cable-wise, I've got a 1m standard HDMI cable, which will be used in most situations.. with a 2m HDMI extension lead if I can't get the Pi near enough to the TV (some accommodation doesn't have power sockets where you'd expect them). I've also got a RCA lead, with a SCART adaptor as well.. so that helps if we get stuck with an older TV, which is a plus point for using a Pi 2 with composite out.

 

For media storage I've gone with a USB3 Flash Drive with a capacity of 64Gb, which gives us more to play with than the microSD card, and it's super-fast for copying media from a PC. As soon as you plug in the flash drive, Kodi will show it in the menus.

 

At first I chose a compact/travel USB-based keyboard instead of Bluetooth in case OpenElec 'forgot' the keyboard and I'd have nothing to navigate the menus to re-pair it. But then I bought a numeric keypad from eBay for £2. The keypad isn’t instantly recognised by Kodi, but an easy way to get it up and running is to use the Keymap Add-on. Attach a normal USB keyboard and the keypad at the same time, start the add-on, and use the keyboard to activate the remap process. From there, it’s dead simple to map the keypad to the different Kodi functions.

 

So that's it... nothing groundbreaking or overly difficult to put together. The whole system is small enough to fit in a small travel bag and gives us a lot of flexibility when dealing with different hotels/accommodation. You may just find the TV accepts the USB flash drive and can play back whatever is on it, but at least you'll have all the gear you need if it doesn't.

 

 

 

Kit list:

 

1 x 2m HDMI extension cable

1 x 1m HDMI cable

1 x Raspberry Pi 2

1 x 8GB microSD card

1 x Compact USB travel keyboard

1 x RCA to SCART adapter

1 x 3.5mm plug to RCA lead

1 x 64GB USB3 flash drive

1 x USB power supply + cable

 

Update 16-Aug-2017 - I've just got back from a holiday near Blackpool in the UK, and this system worked brilliantly. It was all packed in an old camera bag, and this is the second place we've stayed that had a patch panel for the connections. Here's one from our visit to Center Parcs earlier in the year:

 

pisocket.png

 

And here's the one from last week:

 

 

The patch panels and handily located power sockets make things a lot easier!

Originally posted on: Pi Zero Case One-Minute Mod – Frederick Vandenbosch

 

A 1-minute mod for the official Raspberry Pi Zero case, inspired by the ZeroView.

 

Requires:

  • Raspberry Pi Zero (W) with official case
  • Raspberry Pi Camera module
  • Hobby knife
  • Two suction cups

 

mod.jpg



Hello! I'm currently designing a project based on the Nintendo Switch and modern gaming tablets. It's called PiTab (Pie-Tab) and it uses the Raspberry Pi 3 Model B. It has two slots for controllers on the sides and a screen in the middle. It also has a 3.5mm headphone jack, an HDMI out, and a rechargeable battery.

Why have a rechargeable battery? It's made to be taken with you on the go, not just used at home! There are two ways to play: connect it to the dock, or play as a handheld.

You can also play with any Bluetooth/USB gamepad supported by RetroPie.

 

I probably won't make the "PiCon" controllers on the sides of the console, as I'm not good at that kind of stuff...

 

LINKS:

http://www.tinyurl.com/teamdoom

https://www.gofundme.com/PiTab/

 

Concept Art:

 

Console PiCons.png

Console Front.png

I'm looking at building an AoIP (audio over IP) link using a pair of suitable Raspberry Pis with Wolfson / Cirrus audio boards.

 

Audio in/out would be analog. The Ethernet link would be extended using Ubiquiti long-range wireless access point devices set up as a wireless bridge.

 

I'm getting a wee bit lost trying to determine the most suitable combination of Pi board, audio board, and OpenOB version. I also have a strong preference for a board combination that is drop-in compatible with an existing and readily available case, such as the Camdenboss Wolfson board case (from what I can gather, this seems to be designed for the older Pi boards, which are no longer available?).

 

My intention is to set this up as a more or less embedded system that I can plug in, turn on, and send / receive audio.

 

I'd appreciate some help figuring this out. I'm not really concerned about having the latest & greatest, but rather the combination that is most likely to just work, with as little troubleshooting & fiddling around as possible.

 

# I work as an audio / light tech, and also as a wireless installer for a small ISP using Ubiquiti equipment. Although I prefer Ubuntu-based Linux OSes such as Mint for my daily computing, I'll be the first to admit that I'm not much of a terminal / code guy - my configuration requirements for work are VERY basic. I can get a bit of help from our chief technician, although his available time is limited. I can, however, follow sufficiently clear instructions.

In the previous parts of this series we set up a shared network folder and some network nodes. Now we can actually get on with installing and using Blender.

 

Installation

To install Blender, the following is needed:

 

sudo apt-get install blender

 

Running Blender

As Blender is a graphical program, it made sense to attach a screen to my controller node and launch the application. It took a while to launch, but eventually it displayed the default scene of a cube and a light. Even on the Pi3 it's pretty slow to use from the graphical interface, so I'd not want to have to create scenes on the Pi. The menus are unresponsive and even just navigating the file structure is a challenge.

2017-02-04-222352_1360x768_scrot.png

I downloaded some sample files and rendered the first one. A couple of minutes later it appeared.

2017-02-04-223435_1360x768_scrot.png

Command line

It is also possible to run Blender from the command line to render either single frames or animated sequences. You'll need to use the UI to design the models and animation first; the output parameters can be set there, but some of the output details can also be changed at the command line.

 

The command line prints a strange warning which I've not fully worked out yet; it appears to come from OpenAL failing to initialise a PulseAudio audio backend, and doesn't seem to affect rendering.

AL lib: (WW) alc_initconfig: Failed to initialize backend "pulse"

 

I repeated the rendering from the command line with the following:

 

blender -b /mnt/network/Samples/AtvBuggy/buggy2.1.blend -o /mnt/network/Samples/AtvBuggy/buggyrender -f 1

 

The parameters are:

-b  scene to load, rendering in the background (no GUI)

-o  output file

-f  frame number to render

 

On the Pi3 that generated the file in 01:00.38; the Pi2 took a little longer at 01:15.89.

 

Animation

I picked a model helicopter animation to test out rendering on the cluster, and created a simple shell script to render a different range of frames on each of the nodes.

# -s start frame, -e end frame, -a render the animation between them
ssh cluster1 blender -b /mnt/network/Samples/Demo_274/scene-Helicopter-27.blend -o /mnt/network/Samples/Demo_274/Helicopter##### -s 1 -e 25 -a &
ssh cluster2 blender -b /mnt/network/Samples/Demo_274/scene-Helicopter-27.blend -o /mnt/network/Samples/Demo_274/Helicopter##### -s 26 -e 50 -a &
ssh cluster3 blender -b /mnt/network/Samples/Demo_274/scene-Helicopter-27.blend -o /mnt/network/Samples/Demo_274/Helicopter##### -s 51 -e 75 -a &
# Render the remaining frames locally; note -s 76, so frame 75 isn't rendered twice
blender -b /mnt/network/Samples/Demo_274/scene-Helicopter-27.blend -o /mnt/network/Samples/Demo_274/Helicopter##### -s 76 -e 100 -a

 

Then I ran the script with:

 

./BatchRender.sh > render.log

 

This was perhaps a little optimistic, as it was hard to tell what was going on, and at least one of the nodes failed to find the network drive.

 

I had to remount the drives using the following command. It should be possible to schedule this at boot, but I have yet to configure that; one possible approach is sketched below.

 

sudo mount -a
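A hedged option, assuming cron is available on the nodes, is a root crontab entry that waits briefly for the network before remounting everything listed in fstab:

# Add via 'sudo crontab -e'; the 30 second delay is a guess to let the network come up
@reboot sleep 30 && mount -a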

 

I then created an SSH session to each of the nodes and started rendering. The first few frames appeared after about 30 minutes; the helicopter turned out to be a photo-realistic Meccano one!

Helicopter00001.png

Three of the nodes were producing one frame every 30 minutes; the last was estimating 10 hours per frame. When I checked, that node turned out to be a B+, so the extra power of the Pi3 really makes a difference here. Best, then, that the other three nodes take some of the workload from this node.

 

After a few frames, I realised that this scene was not actually animated, so all my nodes had produced the same image! My Blender skills are fairly limited, so rather than animating it myself I tracked down some demo examples with animation at https://download.blender.org/demo/old_demos/demos/ .

I decided to use hothothot.blend from the 220 zip file. Results below.

 

Producing a video

Once you have a series of frames, you need to turn them into a video. Blender does have a built-in video editor for this, but an alternative is the command-line tool FFmpeg.

This can be installed by following Jeff Thompson's instructions to build FFmpeg; note that this could take a few hours.

 

Creating the video took a few seconds with the following command, which reads the numbered PNG frames in at 60 fps and encodes them to an H.264 MP4:

 

ffmpeg -r 60 -f image2 -s 320x240 -i Render%05d.png -vcodec libx264 -crf 25  -pix_fmt yuv420p Render.mp4

 

 

Summary

Drawing1.png

So in summary, the Bitscope Blade does a good job of providing a platform and power for the boards. As has been seen, the setup of the network can be challenging; perhaps I should have stuck to DHCP! The sharing of the disk, in comparison, was straightforward. The suggested use case of a Blender render farm is quite achievable, although you'd want to use the Pi3 rather than earlier models. If you had a big project, you'd want to look into how the allocation of frames to nodes could be automated; there are some commercial solutions available, but it should also be possible to code something, as sketched below.
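A rough sketch of how that allocation might be scripted (untested; the hostnames, paths and frame range are assumptions carried over from the examples above, and the even split is a simplification):

#!/bin/sh
# Hypothetical sketch: split frames 1-100 evenly across the render nodes
BLEND=/mnt/network/Samples/Demo_274/scene-Helicopter-27.blend
OUT=/mnt/network/Samples/Demo_274/Helicopter#####
NODES="cluster1 cluster2 cluster3 localhost"
START=1
END=100
COUNT=$(echo $NODES | wc -w)
CHUNK=$(( (END - START + 1) / COUNT ))
S=$START
for NODE in $NODES; do
    E=$(( S + CHUNK - 1 ))
    # the local node takes any remainder at the end of the range
    [ "$NODE" = "localhost" ] && E=$END
    if [ "$NODE" = "localhost" ]; then
        blender -b "$BLEND" -o "$OUT" -s $S -e $E -a &
    else
        ssh "$NODE" blender -b "$BLEND" -o "$OUT" -s $S -e $E -a &
    fi
    S=$(( E + 1 ))
done
wait   # block until every node has finished its frame range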

 

Reference

https://docs.blender.org/manual/en/dev/render/workflows/command_line.html

https://www.blender.org/download/demo-files/

FFmpeg

Installing FFMPEG for Raspberry Pi – Jeff Thompson

Using ffmpeg to convert a set of images into a video

Checking Your Raspberry Pi Board Version

As shabaz mentioned in the previous comments, a lot of the setup for a Pi cluster applies to other scenarios. Something I stumbled upon this week was building a Hadoop cluster with Raspberry Pi, which is another thing you could do with the Bitscope Blades.

 

In this part of the project, I'm looking at setting up the nodes.

 

SSH

I took a slightly different approach to enabling SSH on the nodes, by creating a file called ssh on the boot partition of each of the SD cards.
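With the SD card mounted on a Linux PC, this is a one-liner (the mount point below is an assumption and varies by machine; the file just needs to land in the small FAT boot partition):

touch /media/$USER/boot/ssh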

 

Network

Each node was renamed and given a unique IP address, ending .201, .202 and .203.

 

So that the nodes can communicate with the main network, the controller has been configured to act as a gateway; see Niall McCarroll - Building a Raspberry Pi mini cluster - part 1

 

I followed the previous steps to give the boards static IP addresses; however, it did not work. The boards kept ending up with a DHCP-assigned IP address, and if I turned off DHCP then I ended up with no address at all.

 

Eventually this turned out to be out-of-date information: rather than changing the IP address in the interfaces file, it has to be given in the dhcpcd config file /etc/dhcpcd.conf:

 

interface eth0
static ip_address=10.1.1.201/24
static routers=10.1.1.200
static domain_name_servers=192.168.1.254 8.8.8.8 4.2.2.1
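
After editing the file, the change takes effect on the next reboot; alternatively (assuming the standard dhcpcd service on recent Raspbian, and noting this may drop an active SSH session) the service can be restarted in place:

sudo systemctl restart dhcpcd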

 

I also found that the domain name servers were not being picked up correctly. The following command shows what is configured:

 

resolvconf -l

 

It should give the list of addresses mentioned above. I found it did not work correctly until I changed /etc/network/interfaces back to its default:

iface eth0 inet manual

 

Network share

The steps to mount the share are the same as for the controller, starting with the backup of the fstab file, creating a password file and adding the mount point (a sample entry is sketched below).
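Something along these lines (the share name, mount point and credentials file path are assumptions to adjust to the controller setup from the earlier part, and cifs-utils needs to be installed):

# /etc/fstab - mount the controller's shared folder over CIFS
//10.1.1.200/share /mnt/network cifs credentials=/home/pi/.smbcredentials,uid=pi,gid=pi 0 0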

 

I also needed to install smbclient using:

sudo apt-get install smbclient

 

Automating the configuration

Once SSH and the network are configured, we can automate the installation of the other software. The first step is to follow the steps in Rachael's article below to set up SSH keys. We can then use shell commands to run the same command on each of the nodes. You can't use this for interactive tools such as editors, but it's good for command-line tools such as mkdir and cp.

 

#!/bin/sh
# Run the supplied command on each node in turn
HOSTS="cluster1 cluster2 cluster3"
for HOSTNAME in $HOSTS; do
    echo "executing command on $HOSTNAME"
    ssh "$(whoami)@$HOSTNAME" "$@"
done
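
For example, saved as run_on_nodes.sh (the name is just an illustration) and made executable, one command can be run on every node in one go:

chmod +x run_on_nodes.sh
./run_on_nodes.sh mkdir -p /home/pi/renders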

 

In the next and final part of this series I'll look at running Blender from the command line so that all the nodes can be processing files.

 

Reference

 

Setting a static ip in Raspbian

Building a Raspberry Pi mini cluster - part 1

Updating security for remotely connecting to my servers via SSH
