
In the previous parts of this series we've set up a shared network folder and some network nodes. Now we can get on with installing and using Blender.

 

Installation

To install Blender, the following command is all that's needed.

 

sudo apt-get install blender
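
A quick way to confirm the install worked, and to see which version the Raspbian repository supplied, is to ask Blender for its version:

blender --version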

 

Running Blender

As Blender is a graphical program, it made sense to attach a screen to my controller node and launch the application. It took a while to launch, but eventually it presented the default scene of a cube and a light. Even on the Pi 3 it's pretty slow to use from the graphical interface, so I'd not want to create scenes on the Pi. The menus are unresponsive and even just navigating the file structure is a challenge.

[Screenshot: Blender's default scene on the Pi]

I downloaded some sample files and rendered the first one. A couple of minutes later it appeared.

[Screenshot: the rendered sample file]

Command line

It is also possible to run Blender from the command line to render either single frames or animated sequences. You'll still need the UI to design the models and animation, and that's where the output parameters are initially set, but some of the output details can be overridden at the command line.

 

The command line prints a strange warning which I've not fully worked out yet (it appears to come from Blender's OpenAL sound library failing to find a PulseAudio server, and seems to be harmless for background rendering).

AL lib: (WW) alc_initconfig: Failed to initialize backend "pulse"

 

I repeated the rendering from the command line with the following.

 

blender -b /mnt/network/Samples/AtvBuggy/buggy2.1.blend -o /mnt/network/Samples/AtvBuggy/buggyrender -f 1

 

The parameters are:

-b — run in the background (no UI) with the given scene file

-o — path and name prefix for the output file

-f — number of the single frame to render

 

On the Pi 3 that generated the frame in 01:00.38; the Pi 2 took a little longer at 01:15.89.
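
One way to take timings like those above (an assumption about measurement on my part) is the shell's time built-in, shown here for the Pi 3 command:

# report how long the single-frame render takes
time blender -b /mnt/network/Samples/AtvBuggy/buggy2.1.blend -o /mnt/network/Samples/AtvBuggy/buggyrender -f 1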

 

Animation

I picked a model helicopter animation to test out rendering on the cluster and created a simple shell script to render a different range of frames on each of the nodes. Here -s and -e set the start and end frames, -a renders that animation range (note that -s and -e must appear before -a on the command line), and the ##### in the output path is replaced with the zero-padded frame number.

#!/bin/sh
# render frames 1-75 across the three nodes, in the background
ssh cluster1 blender -b /mnt/network/Samples/Demo_274/scene-Helicopter-27.blend -o /mnt/network/Samples/Demo_274/Helicopter##### -s 1 -e 25 -a &
ssh cluster2 blender -b /mnt/network/Samples/Demo_274/scene-Helicopter-27.blend -o /mnt/network/Samples/Demo_274/Helicopter##### -s 26 -e 50 -a &
ssh cluster3 blender -b /mnt/network/Samples/Demo_274/scene-Helicopter-27.blend -o /mnt/network/Samples/Demo_274/Helicopter##### -s 51 -e 75 -a &
# render the rest locally (76 onwards, so frame 75 isn't rendered twice)
blender -b /mnt/network/Samples/Demo_274/scene-Helicopter-27.blend -o /mnt/network/Samples/Demo_274/Helicopter##### -s 76 -e 100 -a

 

I then ran the script with

 

./BatchRender.sh > render.log

 

This was perhaps a little optimistic as it was hard to tell what was going on, and at least one of the nodes failed to find the network drive.
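
In hindsight, giving each node its own log file would have made failures much easier to spot. A minimal sketch of the idea, shown for the first node (the log file name is my own choice):

# capture both output and errors per node, then watch progress with tail -f
ssh cluster1 blender -b /mnt/network/Samples/Demo_274/scene-Helicopter-27.blend -o /mnt/network/Samples/Demo_274/Helicopter##### -s 1 -e 25 -a > cluster1.log 2>&1 &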

 

I had to remount the drives using the following command. It should be possible to schedule this at boot, but I have yet to configure that.

 

sudo mount -a
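
An untested sketch of how the boot-time mount might be arranged: adding the _netdev and nofail options to the share's /etc/fstab entry should make it wait for the network rather than fail silently (the server path, mount point and credentials file below are placeholders, not the exact ones used):

# _netdev delays mounting until the network is up; nofail stops the boot
# from hanging if the share is unavailable
//10.1.1.200/share /mnt/network cifs credentials=/home/pi/.smbcredentials,_netdev,nofail 0 0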

 

I then created an SSH session to each of the nodes and started rendering. The first few frames appeared after about 30 minutes; the helicopter turned out to be a photo-realistic Meccano one!

[Rendered frame: Helicopter00001.png]

Three of the nodes were producing one frame every 30 minutes; the last was estimating 10 hours per frame. When I checked, that node was a B+, so the extra power of the Pi 3 really makes a difference here. It would be best for the other three nodes to take some of the workload from this one.

 

After a few frames, I realised that this scene was not actually animated, so all my nodes had produced the same image! My Blender skills are fairly limited, so rather than animating it myself I tracked down some demo examples with animation at https://download.blender.org/demo/old_demos/demos/ .

I decided to use hothothot.blend from the 220 zip file. Results below.

 

Producing a video

Once you have a series of frames you need to turn them into a video. Blender does have a built-in video editor for this, but an alternative is the command-line tool FFmpeg.

This can be installed by following Jeff Thompson's instructions to build FFmpeg; note that this could take a few hours.

 

Creating the video took a few seconds with the following command. Here -r sets the frame rate, -s the output size, -crf the quality, and the %05d pattern matches the five-digit frame numbers that Blender's ##### placeholder produces:

 

ffmpeg -r 60 -f image2 -s 320x240 -i Render%05d.png -vcodec libx264 -crf 25 -pix_fmt yuv420p Render.mp4

 

 

Summary

[Diagram: Drawing1.png]

So in summary, the BitScope Blade does a good job of providing a platform and power to the Pis. As has been seen, the setup of the network can be challenging; perhaps I should have stuck to DHCP! The sharing of the disk, in comparison, was straightforward. The suggested use case of a Blender render farm is quite achievable, although you'd want to use the Pi 3 rather than earlier models. For a big project you'd want to look into how the allocation of frames to nodes could be automated; there are some commercial solutions available, but it should also be possible to code something.

 

Reference

https://docs.blender.org/manual/en/dev/render/workflows/command_line.html

https://www.blender.org/download/demo-files/

FFmpeg

Installing FFMPEG for Raspberry Pi – Jeff Thompson

Using ffmpeg to convert a set of images into a video

Checking Your Raspberry Pi Board Version

As shabaz mentioned in the previous comments, a lot of the setup for a Pi cluster applies to other scenarios. Something I stumbled upon this week was building a Hadoop cluster with Raspberry Pi, which is another thing you could do with the BitScope Blades.

 

In this part of the project, I'm looking at setting up the nodes.

 

SSH

I took a slightly different approach to enabling SSH on the nodes by creating a file called ssh on the boot partition of each SD card.
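
For anyone following along, the file can be created before first boot with the SD card mounted on another Linux machine (the mount point below is an assumption and varies by system); Raspbian enables the SSH server on first boot if the file is present:

# an empty file named ssh on the boot partition is all that's needed
touch /media/$USER/boot/ssh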

 

Network

Each node was renamed and given a unique IP address ending .201, .202 and .203.
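
The renaming isn't shown in detail; a minimal sketch for the first node (the cluster1 name is taken from the scripts later in the series) would be:

# set the new hostname and update the hosts file, then reboot to apply
echo cluster1 | sudo tee /etc/hostname
sudo sed -i 's/raspberrypi/cluster1/' /etc/hosts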

 

So that the nodes can communicate with the main network, the controller has been configured to act as a gateway; see Niall McCarroll - Building a Raspberry Pi mini cluster - part 1

 

I followed the previous steps to give the boards static IP addresses; however, this did not work. The boards kept ending up with a DHCP-assigned IP address, and if I turned off DHCP then I ended up with no address at all.

 

Eventually, this turned out to be out-of-date information: rather than changing the IP address in the interfaces file, it has to be set in the DHCP client configuration file /etc/dhcpcd.conf

 

# /etc/dhcpcd.conf on the first node; adjust ip_address per node
interface eth0
static ip_address=10.1.1.201/24
static routers=10.1.1.200
static domain_name_servers=192.168.1.254 8.8.8.8 4.2.2.1
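
To check the change has taken effect on a systemd-based Raspbian, restart the dhcpcd service (or simply reboot) and inspect the interface:

sudo systemctl restart dhcpcd
# eth0 should now show the expected 10.1.1.x address
ip addr show eth0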

 

I also found that the domain name servers were not being picked up correctly. The following command shows what you have configured.

 

resolvconf -l

 

It should give the list of addresses mentioned above. I found it did not work correctly until I changed /etc/network/interfaces back to its default.

iface eth0 inet manual

 

Network share

The steps to mount the share are the same as for the controller, starting with backing up the fstab file, creating a password file and adding the mount point.

 

I also needed to install smbclient using

sudo apt-get install smbclient
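
For completeness, a hedged sketch of the mount configuration described above; the server address, share name, mount point and credentials path are placeholders rather than the exact values used:

# credentials file, e.g. /home/pi/.smbcredentials (protect it with chmod 600)
username=pi
password=secret

# matching /etc/fstab line referring to that credentials file
//10.1.1.200/share /mnt/network cifs credentials=/home/pi/.smbcredentials,uid=pi,gid=pi 0 0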

 

Automating the configuration

Once SSH and the network are configured we can automate the installation of the other software. The first step is to follow the steps in Rachael's article below to set up SSH keys for connecting without a password. We can then use shell commands to run the same thing on each of the nodes. You can't use this for interactive tools such as editors, but it's good for command-line tools such as mkdir and cp.
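
The key setup itself is covered in the referenced article; the usual shape of it is a sketch like this, repeated for each node:

# generate a key pair (accepting the defaults), then copy the public key over
ssh-keygen -t rsa
ssh-copy-id pi@cluster1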

 

#!/bin/sh
# run the command given as arguments on each node in turn
HOSTS="cluster1 cluster2 cluster3"
for HOSTNAME in $HOSTS; do
    echo "executing command on $HOSTNAME"
    ssh "$(whoami)@$HOSTNAME" "$@"
done
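
Saved as, say, run-on-nodes.sh (a hypothetical name) and made executable, the script can then be used like this:

chmod +x run-on-nodes.sh
# create the same directory on every node in one go
./run-on-nodes.sh mkdir -p /home/pi/projects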

 

In the next and final part of this series I'll look at running Blender from the command line so that all the nodes can process files.

 

Reference

 

Setting a static ip in Raspbian

Building a Raspberry Pi mini cluster - part 1

Updating security for remotely connecting to my servers via SSH
