
Just another interview to share. Since the content is almost the same as before (with an updated image), please don't ask me to translate it.

(full article attached)

Screen Shot 2015-06-30 at 15.35.21.png

I'll have to keep this short because I have to get to bed as I have work in the morning. However, I just wanted to report that I was able to get sensor readings from the Xtrinsic MEMS sensor board published to the web browser.


The process is as follows:


  1. The sensor board on the Raspberry Pi B+ has a Python script that handles communication between the board and the computer. The same script then pushes the data to the MQTT server, which is on the Raspberry Pi 2, over port 1883.
  2. Once the server receives the data, it publishes it to all subscribers. In this case, the same RPi2 is subscribed and awaiting the information. The data is then received by the JavaScript code sitting on the lighttpd server.
  3. The JavaScript then updates the HTML and displays the data.
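A minimal sketch of the publishing side of step 1 in Python. The topic scheme and the `sensor_message` helper below are hypothetical illustrations, not the actual script; the paho-mqtt calls are shown in comments:

```python
import json
import time

# Hypothetical topic scheme: one topic per delivery bag, one
# sub-topic per sensor on the Xtrinsic board.
def sensor_message(bag_id, sensor, value):
    """Build the MQTT topic and JSON payload for one sensor reading."""
    topic = "pizza/{}/{}".format(bag_id, sensor)
    payload = json.dumps({"value": value, "ts": time.time()})
    return topic, payload

topic, payload = sensor_message("bag01", "temperature", 42.5)
print(topic)  # pizza/bag01/temperature

# With paho-mqtt, the script on the B+ would then push this to the
# broker on the RPi2 over port 1883:
#   client = paho.mqtt.client.Client()
#   client.connect("<broker-address>", 1883)
#   client.publish(topic, payload)
```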


I've done it this way so that the server can be separate from the boards that will be sitting in the pizza bag/box. Of course, it is possible to have the MQTT server on the sensor board's computer, but that would mean having an MQTT server for each pizza delivery. It makes much more sense to have a single broker (as MQTT servers are called) for all the pizzas.


Well, more progress next week. I'll continue to work on getting all the sensor information publishable to the web browser and then I'll have to start building out the interface. I'm leaving the actual pizza bag for last.

The TrainingSphere is basically a coaxial drone based on a Raspberry Pi board.

There is a very interesting project (Navio+) that ports the ArduPilot software to a Raspberry Pi board. The Navio+ software is meant to be used with their own board, but that board is very expensive and has a lot of features I don't actually need for this project. So I will try to create an APM variant that uses a different set of sensors specifically designed for indoor use, namely:

  1. position will be based on the recognition of a given landmark
  2. altitude will be provided by a sonar
  3. as accelerometer, I will use the Microstack accelerometer provided with the challenge kit


Given these assumptions, the plan for building the TrainingSphere includes the following steps:

  1. Install Realtime Wheezy
  2. Install OpenCV on Raspberry board
  3. Install Blob detection library
  4. Make a simple blob detection application
  5. Make an application that can measure distance from a landmark of known size
  6. Test the sonar sensor
  7. Modify the ArduPilot code to accept input from the new sensors
  8. Modify the motor control output to provide a PWM that varies from 0% to 100% (currently the motor control output is a PWM that is compatible with any servo input)
  9. Build the TrainingSphere frame
  10. Add laser pointer
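For step 5, the distance measurement can be sketched with the plain pinhole-camera relation. The focal length in pixels and the numbers below are illustrative assumptions, to be replaced by a real calibration:

```python
def landmark_distance(known_width_m, focal_px, width_px):
    """Pinhole estimate: distance = real width * focal length / pixel width."""
    return known_width_m * focal_px / width_px

# Illustrative numbers: a 0.20 m wide landmark seen 80 px wide by a
# camera with an (assumed) focal length of 800 px sits about 2 m away.
print(landmark_distance(0.20, 800.0, 80.0))  # 2.0
```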


While waiting for the last few components needed to finish the remaining probes, the past days were spent testing individual parts and organising the components in their final arrangement.

The current state of the Meditech prototype is shown in the following image.


From the top of the black panel you can see:

  • The leftmost and rightmost circular holes host the two speakers for all the audio information and messages.
  • Top-centre, the 20x2 alphanumeric LCD display. The display will survive in almost any condition, as it is controlled by the PIC of the ChipKit PI, which communicates with the PI Master but independently manages alarms and the entire control panel (temperature, menu, alarms and the general health status of the system).
  • In the centre, below the display, is the column of notification LEDs; three orange LEDs show the status of the blood and heart probes (when enabled), one LED flashes in sync with the IR controller, and one dims its brightness depending on the cooler fan speed.
  • In the centre below the display (right side) is the generic analog potentiometer that will be used for fast and reliable fine-tuning, with different behaviour depending on the active feature.
  • Below the potentiometer (not visible, as it is black on black) is the IR sensor for the IR control of the system.
  • Below the left speaker hole is the BitScope, set up for probe data acquisition and already connected internally to the Raspberry PI.
  • The two bottom-left cables are connected to the microphonic stethoscope and the heartbeat sensor respectively.
  • On the front side you can see the cooler fan.


Still missing are the GPS unit, the ECG device and echo unit, the high-precision temperature checker and the blood pressure sphygmomanometer.


Settings and changes in detail


IMG_20150624_101648897.jpg

Final storage replacement

As described in the original design, the Meditech storage has been replaced with its definitive 120 GB SSD. The reason for adopting an SSD is not so much the higher speed compared to a traditional HDD: as a matter of fact, the SSD is connected to the Raspberry PI through USB 2, so the speed increase is not that meaningful. What is very important instead is the reduced weight and the greater shock resistance of SSD devices compared to their mechanical cousins, especially for a device that will be used in unpredictable conditions.

The image shows the SSD installed after the removal of the temporary 1 TB 5" HDD.

It is possible to connect external storage to the Raspberry PI via one of the USB 2 ports. I searched for an mSATA-to-USB adapter but could not find anything reliable, so I bought a 120 GB SSD with the standard SATA connector instead of the smaller mSATA, and reused a spare SATA-to-USB adapter board usually connected to mechanical HDDs.

In the post Raspberry PI: USB hard disk boot I discussed how to set up an external HDD to be used as the system disk on the Raspberry PI.

After formatting the SSD and partitioning it with ext4 (see the mentioned post for details), all the existing data was transferred from the old HDD to the new SSD with the rsync terminal command. Then the system was powered down, the HDD was removed for good, and the system booted from the SSD without problems.

IMG_20150623_203541578.jpg IMG_20150623_203548955.jpg


For the speakers, a simple amplified stereo circuit for PCs has been used. After some listening tests, the two speakers were set in a strategic position, considering the expected position of the operator while working in front of the Meditech device.

IMG_20150624_170542437.jpg IMG_20150624_170549093.jpg

To mount the two speakers, as usual a two-part support was designed and then milled from a 3 mm thick plastic plate, as shown in the following images.

IMG_20150624_155214733.jpg IMG_20150624_163244002.jpg

The exposed surface (top speaker side) was covered with a soft fabric, then the two supports were screwed under the two surface holes.

IMG_20150624_164444289.jpg IMG_20150624_170600519.jpg

Freeing some space

For better air circulation, and to route the wires in a more rational way, the PI master device has been rotated by modifying its support, as shown in the images below. The temperature sensor of the control panel detects the average temperature between the Raspberry PI board and the yellow support.

IMG_20150624_134111839.jpg IMG_20150624_134325162.jpg IMG_20150624_160704568.jpg

Installing the BitScope

Using the simple adapter circuit for the BitScope probe headers, the device has been installed on the top surface of the control panel. This method has the advantage of saving a lot of time, avoiding excessive modification of the original BitScope device while still exposing the LED and status signals so they are immediately visible to the user.

IMG_20150617_095616924.jpg IMG_20150625_144436137.jpg

Unsolved speakers issue

As you can see in this short example, the speakers generate a very disturbing noise. This occurred as soon as I powered the system for the first time, so I thought something was wrong in the circuit, connections, etc. Instead, all was fine. After doing some tests I discovered that this noise - which persists even when the audio board is not playing - occurs only when the HDMI cable between the PI and the monitor is disconnected. As soon as I plug the cable into the monitor, I get perfect silence. The noise restarts when the monitor is powered on.

Note also that the HDMI cable is connected to the PI master device, which is a different unit from the PI Slave 4 hosting the audio card. I have no idea what I should check to stop this problem. Any suggestion is welcome.




Project Update


The sun was shining and I had some time to myself, ideal conditions to start cutting things and annoy the neighbours with the sound of power tools. I started working on the enclosure that will hold the computer that will slide in and out of the desk. Because the dimensions were beyond what my CNC can handle, everything was done by hand and I finally gave the router a try. I messed up here and there, but the result so far is more than acceptable.


Check out the picture gallery below to get an idea of the work that was done.


{gallery} Screen Enclosure

photo 1.JPG

Two pieces: I cut out two pieces with the same dimensions out of a larger MDF board. The screen and Pi 2 will be contained in between the two layers.

photo 2.JPG

Dimensions: A cutout will be made for the usable part of the screen, grooves will be made to have the screen held in place.

photo 3.JPG

Cutting: Using the same oscillating multitool as I used to cut the desk, I cut out the rectangle for the screen.

photo 4.JPG

Pop: The cuts are very slim and straight enough for something done by hand. The piece popped out easily.

photo 5.JPG

Guide: Using clamps and a spare piece of MDF, I made a guide for the router.

photo 1.JPG

First time: My first time using the router, ever! Slipped away from the guide a few times, but no problem.

photo 2.JPG

Smoother: The second side went a lot smoother as you can see from the picture above.

photo 3.JPG

It fits: With the grooves routed on all four sides, the screen fits nicely.

photo 4.JPG

Front: View from the front with screen inserted. As you can see, I messed up on one of the sides when the router cut too deep.

photo 3.JPG

Fixed: Managed to fix the mistake from earlier using some wood filler. I'll be applying the same method to the corners to fix the cuts.


In "Star Wars", a young Luke Skywalker is trained to use the Force and sense his enemy. In the training sessions, the enemy is a hovering sphere that randomly fires laser shots. The apprentice Jedi has to intercept the non-lethal laser beams with his light saber.

For this challenge, I'd like to build a similar gadget.



There are many reasons why building the hovering sphere is both exciting and challenging:

    1. autonomous flight is a field I have always wanted to explore in more depth
    2. making the Raspberry Pi act as a real-time controller is something that can be useful in many other fields of interest
    3. indoor localization through landmarks is also a field of research that can easily be ported to many other projects



The flying sphere will be built using harmonic steel wire for the wireframe. The wireframe contains the blades and prevents any injury to the player from the rotating blades. The mechanics for the coaxial rotor will be salvaged from a normal RC helicopter.




The sphere will hover thanks to mechanical components taken from an RC helicopter with counter-rotating blades. The counter-rotating blade arrangement makes the "helicopter" intrinsically stable, so a steady flight condition can be maintained with very few control interventions. Rotation around the vertical axis (yaw) is achieved by properly adjusting the speed of the two rotor blades. A third propeller will be used to make the sphere translate.

The sphere will determine its position using the camera. A reference landmark will be placed on the floor, and the sphere will try to maintain a predefined distance from it by looking for the landmark and analyzing its size and orientation. The picture below outlines the basic geometry of the vision system.

Image07.png

Based on the two variables y (distance from the ground to the camera) and Φ (angle of the camera from the ground), the Y position of the landmark in the image can be analyzed to determine the distance from the landmark. An initial calibration will be needed to map the landmark's placement in the camera view to its exact distance. It should be noted that, due to the angled camera, the higher the value of h1 (from the diagram), the farther the landmark is. A range finder will be installed facing downwards to determine y (distance from the ground to the camera), and an accelerometer will determine Φ (angle of the camera from the ground).

A laser pointer will be mounted on a platform with two degrees of freedom to randomly shoot a laser beam. The trainee's light saber will be covered with a light-reflective material. His goal will be to place the saber on the beam's trajectory and reflect it back to the sphere. The sphere's camera will run a blob recognition algorithm to determine whether the reflected beam is in sight.

From the electronic point of view, the following boards will be installed:

  • Raspberry Pi Model A
  • Raspberry Pi camera board
  • Ultrasonic range finder
  • Microstack accelerometer
  • Infineon DC Motor control module
  • A laser pointer
  • Two servos to point the laser pointer
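The angled-camera geometry described above can be sketched numerically. This is an assumed model with illustrative numbers; the sign convention and the focal length in pixels would come from calibration on the real rig:

```python
import math

# Assumed model: camera at height y, pitched down by phi from horizontal;
# a landmark seen at vertical pixel offset dv from the image centre adds
# atan(dv / f) to the depression angle, where f is the focal length in
# pixels. Ground distance to the landmark is then y / tan(total angle).
def ground_distance(y, phi_rad, dv_px, focal_px):
    depression = phi_rad + math.atan2(dv_px, focal_px)
    return y / math.tan(depression)

# Landmark on the image centre line: distance depends only on y and phi.
d = ground_distance(2.0, math.radians(45), 0.0, 800.0)
print(round(d, 2))  # 2.0
```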


Hardware description

The flying sphere will be made of a wireframe structure built from harmonic steel wires. The picture below gives a rough idea of how the components will be assembled.


The mechanics need to provide enough thrust to lift all the required electronic components. We expect blades 25 cm in diameter to be enough to lift 400 g of equipment. The motors will be controlled by a motor control board such as the Infineon DC motor control module, with the control signal provided by the Raspberry Pi. The third small rotor for translation will be controlled by a simple L293 IC, since the power involved is much lower than that required by the main rotor.
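The thrust/yaw mixing implied here can be sketched as follows. The 0..1 command range and the 0.5 yaw gain are illustrative assumptions, not values from the actual design:

```python
def clamp(v):
    """Clamp a rotor command into the assumed 0..1 range."""
    return max(0.0, min(1.0, v))

def mix(throttle, yaw):
    """throttle in 0..1, yaw in -1..1 -> (upper, lower) rotor commands.
    Yaw is obtained by differentially changing the speed of the two
    counter-rotating blades, while their average sets the total thrust."""
    upper = clamp(throttle + 0.5 * yaw)
    lower = clamp(throttle - 0.5 * yaw)
    return upper, lower

print(mix(0.6, 0.0))  # no yaw command: both rotors run at the same speed
```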

Software description

The software flow is shown in the flowchart below.


The software will implement the following features

  • altitude and position hold: by analyzing range finder readings and the video stream from the Raspberry Pi camera, the software running on the Raspberry Pi Model A board will try to hold an altitude of about 2 metres from the ground and to always face the landmark placed on the floor. Height is simply determined by reading the range finder's output. Any translations required to hold position are achieved by first rotating the sphere in the direction it needs to translate and then activating the third propeller. Vertical rotation (yaw) is achieved by slightly changing the speed of the counter-rotating blades

  • human body recognition: after the flying sphere has reached the operating altitude, it starts rotating around its vertical axis scanning for human bodies (i.e. the Jedi under training) using the Raspberry Pi camera module. Vertical rotation is achieved by slightly changing the speed of the counter-rotating blades

  • laser firing: when a human body is in view, the flying sphere activates the laser pointer for a few seconds. The Jedi trainee has to intercept the laser beam with his light saber (in this case, just a stick covered with light-reflective material). To make the laser beam visible, a smoke machine will be installed in the room

    • since the sphere will be used in closed environment (mainly because you need a dark and smoky room to see laser beams), a method for indoor localization has to be devised
    • since a camera is required to "see" the beam reflected by the trainee's light saber
    • reflected laser beam detection: when the laser beam is reflected at the right angle by the light saber, the reflected beam is detected by the Raspberry Pi camera by means of a blob detection algorithm. The flying sphere will switch on a green light and activate a buzzer to signal that the test has been passed
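The altitude-hold feature above can be sketched as a minimal proportional-only loop around the stated 2 m target. The gains and hover thrust are illustrative assumptions; a real controller would add integral/derivative terms and use actual sonar readings:

```python
TARGET_ALT = 2.0    # target altitude in metres, as described above
KP = 0.15           # proportional gain (illustrative)
BASE_THRUST = 0.55  # assumed hover thrust in a 0..1 command range

def altitude_hold(measured_alt):
    """Return a clamped thrust command from one range-finder reading."""
    error = TARGET_ALT - measured_alt
    thrust = BASE_THRUST + KP * error
    return max(0.0, min(1.0, thrust))

print(altitude_hold(2.0))  # at target: hover thrust
print(altitude_hold(1.0))  # below target: thrust increases
```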



After many struggles I have managed to get websockets working with mosquitto on my RPi 2 (Thanks element 14 for this baby).


I decided to start from scratch, so I wiped the SD card and reinstalled Raspbian. I had honestly given up on the idea of using websockets, but I couldn't find any recent tutorials that didn't include enabling websockets, so I decided to give it one last shot. This particular install tutorial was helpful; although there are a number of sites with similar information, this one was most helpful when it came to adding user access. It has some problems because it's not up to date with the latest package changes, so just follow my instructions; I just wanted to give them credit.


Alright. So here's how you get it working. You need to install a variety of packages but in the end you will be very happy.


Make sure your system is updated, it's always good to start with:

sudo apt-get update


The first thing you are going to want to do is install OpenSSL. mosquitto uses this to handle communications over websockets. Once this is installed you should also install cmake, if you don't already have it. You'll need this to build mosquitto and websockets.

sudo apt-get install libssl-dev
sudo apt-get install cmake


Next, you want to install websockets. This is tricky because the most recent (1.4) is busted. See this discussion on stackoverflow for more info. The bottom line is that you need to use version 1.3 to make this work.

tar xvzf v1.3-chrome37-firefox30.tar.gz
cd v1.3-chrome37-firefox30
mkdir build
cd build
cmake ..
sudo make install


That should do it for libwebsockets. The extracted folder usually takes on the name of the tar file, but if it doesn't, just ls your source directory to find the actual folder name, cd into that folder at line 03, and continue with the rest of the commands.


Now, we have to install a support package called uuid-dev to complete the mosquitto installation in the next step.

sudo apt-get install uuid-dev


The next step is to install mosquitto with websockets enabled. This, like libwebsockets, must be built from source.

tar xvzf mosquitto-1.4.tar.gz
cd mosquitto-1.4


At this point you have to edit the config.mk file in the source folder. Look for the line "WITH_WEBSOCKETS:=no" and change it to read "WITH_WEBSOCKETS:=yes". I like to use nano, but you can use vi or whatever; CTRL+W helps you find things quickly in nano.



Alright, now it's time to build and install mosquitto!

make
sudo make install


Hopefully everything is going smoothly up to this point. mosquitto has a configuration file that holds some basic information to keep the server running. The installation comes with an example configuration file that is well commented and we're going to copy this file over and use it to build our own. It's actually not necessary and you could simply create your own config file with the upcoming lines, but it might be helpful to read through the comments to get an idea what is possible.

sudo cp /etc/mosquitto/mosquitto.conf.example /etc/mosquitto/mosquitto.conf


Now open the new config file in your favorite text editor (nano for me) and you must add the following commands (please read on before doing this if you are unsure):

sudo nano /etc/mosquitto/mosquitto.conf
listener 1883
listener 9001
protocol websockets
password_file mosquitto.pwd


You may be wondering exactly where to put this in your config file. When you open the config file, look for the section called "Default listener". That's where I put line 02: I changed "port 1883" to "listener 1883". This may not actually be necessary, I'm not sure, but it worked for me.


Line 03 and 04 will go under the section called, "Extra listeners". You should see a line that reads: "#listener". You can remove the hash (uncomment the line) and add in 9001. This means that mosquitto should listen on ports 1883 and 9001. The latter being the websockets port. A few paragraphs below the listener line where it reads, "Choose the protocol to use when listening.", you can add line 04. Just uncomment "#protocol mqtt" and change it to match line 04.


You can add in the password access line under the section that reads, "Default authentication and topic access control". Uncomment the line, "password_file" by removing the hash in front and type in line 05.


That's it for the config file. Just save the file and exit the text editor. Now create the user that will be allowed access to mosquitto. This will protect it from unauthorized access across the network. It's not strictly necessary, because mosquitto allows anonymous login, but it's always good to limit access. Make "username" your preferred name; you'll be prompted to enter a password, too.

sudo mosquitto_passwd -c mosquitto.pwd username


The next step is to create a link because websockets is actually installed in a different directory from where mosquitto expects to find it.

sudo ln -s /usr/local/lib/ /usr/lib/


There is another workaround for this, but I didn't try it this time around, so I'm not sure if it truly works. Back when you installed websockets, you could have used this cmake command in place of the one at line 06:

cmake .. -DOPENSSL_ROOT_DIR=/usr/bin/openssl


As I said, I didn't try it this time around, so I don't know if it really solves the problem, but I have tried it in past attempts and it seems like it corrected the issue.


We're almost there! The next step is to enable IPv6 on the Raspberry Pi. I was getting an annoying error whenever I ran mosquitto: "Warning: Address family not supported by protocol". It turns out this was because IPv6 was not enabled. It's easy to correct: first turn IPv6 on, then open your /etc/modules file and add the line "ipv6" at the end. IPv6 will then be enabled at boot.

sudo modprobe ipv6
sudo nano /etc/modules


The last thing you need to install is the Paho JavaScript client. This is the JavaScript library you'll use to communicate with your mosquitto MQTT server over websockets.

sudo cp mqttws31.js /var/www/


This assumes that you have a webserver already installed. I followed this tutorial (it appears to be down right now, but it's cached here) to get lighttpd/mysql/phpmyadmin running on my RPi2. I suggest you follow it; it works every time. Ignore the last part about making it work over TCP/IP. Also note that you don't really need all of the extra PHP packages they suggest; just pick the ones you require. Anyway, all I did was copy the JavaScript library to the web server directory so we can reference it from our HTML file.


Wow, that's it for installation. The next step is to create the test file. In order to do this, we're going to use Paho's test Javascript code. So go ahead and create an HTML file in your web directory:

sudo nano /var/www/mosquitto.html


Now you'll want to paste in the following; where it says username, put the username you created earlier, and the password too.

<html>
<head>
  <script src="mqttws31.js"></script>
  <title>mqtt status</title>
</head>
<body>
  <div id="status">connecting...</div>

  <script>
  // Create a client instance
  client = new Paho.MQTT.Client('', Number(9001), "clientId-" + Math.random());

  // set callback handlers
  client.onConnectionLost = onConnectionLost;
  client.onMessageArrived = onMessageArrived;

  // connect the client
  client.connect({
    onSuccess: onConnect,
    userName: "username",
    password: "password"
  });

  // called when the client connects
  function onConnect() {
    // Once a connection has been made, make a subscription and send a message.
    var status = document.getElementById("status");
    status.innerHTML = "onConnect";
    client.subscribe("/World");
    message = new Paho.MQTT.Message("Hello");
    message.destinationName = "/World";
    client.send(message);
  }

  // called when the client loses its connection
  function onConnectionLost(responseObject) {
    if (responseObject.errorCode !== 0) {
      var status = document.getElementById("status");
      status.innerHTML = "onConnectionLost:" + responseObject.errorMessage;
    }
  }

  // called when a message arrives
  function onMessageArrived(message) {
    var status = document.getElementById("status");
    status.innerHTML = "onMessageArrived:" + message.payloadString;
  }
  </script>
</body>
</html>


Take a look at the line where the client instance is created: be sure to enter your RPi's network IP address as the first argument. In my case, I'm using the internal IP assigned by my router; you can find your IP address using ifconfig. Also note that "clientId" can be anything you want, and you don't need to generate a random number if you don't want to. I think this is mainly for logging purposes; it keeps track of who is connecting to mosquitto.


It's time to turn on mosquitto. If everything has been set up correctly, this should be flawless. I have run into some problems when I don't run this from the /etc/mosquitto directory. I don't know why, but when I try to load it from a different directory, it has a problem finding the password file.

cd /etc/mosquitto
mosquitto -c /etc/mosquitto/mosquitto.conf


Now, go ahead and direct your web browser to the IP address you entered above, followed by the name of the HTML file. You should see a "Hello" message appear. The script subscribes to /World and then sends "Hello". When mosquitto receives the message, it pushes that baby out to all subscribers, and it shows up in your browser.


Yes, this takes a lot of trial and error, but it does in fact work. The next step would be to configure mosquitto as a service that starts automatically but I'm having a little bit of trouble with the init.d files I've found online. I'll have to get back to everyone on that once I've sorted it out. This has really set me back a bit, but I'm glad it's all sorted out. I can finally move on with my project!




Project Update


How about a Sci Fi Your Pi / Enchanted Objects cross-over post for a change ?


Let me introduce the Magic Lamp! What's special about it, is that it is turned on or off depending on where you place it on the desk! It also creates colourful effects on walls and ceiling when turned on. An Adafruit Trinket and NeoPixel Ring are powered wirelessly by having the charging base hidden inside the desk's surface. By moving the circuit above the base, the Trinket is powered on and animates the NeoPixels.


For details on the build, check out this post: Sci Fi Your Pi: PiDesk - Guide: Magic Lamp with wireless charger, Adafruit Trinket & NeoPixels





The idea for this little project is to have a lamp which can be turned on or off simply by moving it on a desk.


A wireless charging base is built into the desk's surface, and a circuit with microcontroller and LEDs gets powered wirelessly using the charging coil.


Components & Circuit

Screen Shot 2015-06-19 at 21.14.48.png

The electronic components used in this project are the following:

The Qi wireless charger was bought on eBay for a couple of euros. According to the specs, it is capable of outputting 5V/1A, which should be more than sufficient for this simple circuit. Connecting everything together is simple:

  • the coil's positive terminal is connected to the Trinket & NeoPixel USB+/PWR pins
  • the coil's negative terminal is connected to the Trinket & NeoPixel GND pins
  • the Trinket's #0 pin is connected to the NeoPixel IN pin


Program & Testing

The Trinket is programmed using the Arduino IDE in order to animate the NeoPixel Ring. But before this can be done, the necessary libraries need to be installed. Adafruit already has excellent tutorials covering this on their learning system:


Once the hardware is supported and the library installed, it is possible to program the Trinket. An example program is provided with the NeoPixel library, called "strandtest". This sketch cycles through different animations and colors and is perfect to understand how the NeoPixel Ring is being animated.


Below are some animated GIFs (you may need to click to see the animation) showing quick tests, first with the wireless charger and then with the Trinket and NeoPixel Ring.


2015-06-21 19_34_10.gif2015-06-21 19_33_26.gif


Assembly & Demo


To hold the electronics, a simple housing is required. This can be a readily available container, or it can be custom-made using cardboard, plastic, or even 3D printing. I opted for the latter and created a simple hollow cylinder just big enough to hold all the parts. The wireless charger coil is at the bottom and the NeoPixel Ring at the top. I'm also using a crystal ball on top of the circuit to diffuse the light.


photo 1.JPGphoto 4.JPG


I hid the wireless charger base in the desk's surface, under my cutting mat. When the circuit is placed above the charger, it lights up without requiring a wired power source. Magic.


photo 5.JPGphoto 4.JPG


Because of the crystal ball sitting on top of the NeoPixel Ring, some effects are also reflected on the ceiling, gradually changing colour.




The video below provides an overview from start to completion, to end with a demo of the magic lamp.






The proposed project has an array of submodules and sub-projects, as explained in the first post. So far I have made a basic RPi robot and gone into the basics of adding a Python GUI to control it. I have also started working with OpenCV on the Raspberry Pi and worked with live images, and I have produced a DIY quadcopter and added it to the mix.


So I forgot… Oops!


Due to an array of issues, I forgot to post an update on Monday, so here it is. In my last update, I talked about some 3D printing and some custom enclosures. Last week I concentrated on some design tools and a design for my project. In this post, I talk about my experience with CAD software and the design of the central computer for Project VIRUS.


Here we go!


Redesigning a design… again! Updates


The deadline has been extended and it's time to kick things up a notch. Uhh... OK, maybe not. I decided to consolidate my design for this project: there are two sub-projects that work well together and can be expanded later. They are the camera-based gesture recognition and the voice command system. These can function together to become a nice demo, and I have the voice part working pretty well. I will share the technical details in a later post, so stay tuned.


The theme is science fiction, and here I am churning out tutorial after tutorial, so I realised I need something that really says Sci-Fi. I have worked on the gesture recognition code, have gotten it to work to some extent, and will be publishing a demo video next week. I worked on the housing for the camera before, and the initial idea was to put it in an R2D2-like robot, but since someone else was already doing that, I made a slight diversion. I started to search for something else and ultimately came across three choices.


1. GLaDoS from Portal (Video Game)

2. InMoov

3. Wheatley from Portal 2


There is already a GLaDOS lamp Instructable out there, so I did not want to go that way.


The InMoov is just too complicated for this instance. Wheatley seemed like a good option, so I sketched up some ideas for how I could fit everything into it. I already made a minion robot for a previous design challenge, which looked like this...




Making Wheatley


I am not a mechanical design engineer, and I faced a lot of issues designing the Wheatley enclosure. The goal was to fit the camera, an ultrasonic sensor and a Raspberry Pi 2, along with some battery power, speakers and some LEDs. I experimented with Fusion 360 and a trial version of Autodesk Inventor, and after days of mucking around, I finally made this...





I fit in all the components and made a small pivot which will sit on top of a base; a servo motor will be used for the tilt mechanism. No panning this time, but I did manage to make some slots so that screws are not needed when closing the lid!


What next?


The obvious next step is to print this casing, but unfortunately I am out of filament. I am looking at eBay; this stuff is expensive, but I will get it done one way or the other. Also on my to-do list: make videos of the gesture control and audio control working with my OpenHAB system.


Let me know if you have any comments or suggestions.


Thanks for reading.



As the 15-inch monitor is carried externally and needs to be attached simply and quickly to the Meditech box, the top handle has been used, designing a couple of robust and removable parts that - when not in use - can fit in the accessories and components side of the box. The image below shows the 3D milling simulation of the base component; the two parts are identical.

Screen Shot 2015-06-18 at 11.18.09.png

The support can be revised with a joint to change the monitor orientation (the right side in the image above) with a fixing screw; this enhancement will be done in the next prototype. The monitor side has a fixed orientation of about 20 degrees for comfortable viewing while working.

IMG_20150618_154349424.jpg IMG_20150618_154512435.jpg

The two images above show the finished support and the support fitted in the box handle. The material used is transparent acrylic, 8 mm thick.

The next images show the Meditech monitor in its definitive position.

IMG_20150618_154619825.jpg IMG_20150618_154635483.jpg IMG_20150618_154721173.jpg



Welcome to Week 5 of the I Ching hexagrams project.  At last I have the code stable enough to create a video showing what I have achieved.  It might not look that impressive, but I have managed to get (most of) the bugs out of the start-up code so that the display output is reasonably predictable.

I Ching Symbols


What I Have Done So Far


The code that underlies this little video consists of the following elements:

  • Introductory splash screen based on the module from the PiFaceCAD library, but adapted to alternate between a fixed text instruction page and an animated depiction of the I Ching characters.  They should be displayed vertically, as in the graphic above, to represent the way the Chinese would write them, but the PiFaceCAD display needs to be horizontally aligned for the Western characters to display correctly.
  • Menu choices based on the module in the PiFaceCAD library, but adapted to deal with multiple menu levels.
  • Background program modules to handle the routine tasks.
  • Interrupt handlers to process the switch presses.
  • Creation of a global 'menus' class to manage the passing of control tokens between the background code and the interrupt handlers, and to control the transition between menus by passing pointer values.
  • Implementation of exception handlers to trap unexpected events.
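The 'menus' class mentioned above is the heart of the control flow. Stripped of the PiFaceCAD display calls, a minimal hardware-free sketch of the idea (with hypothetical page names, not the real code) could look like this:

```python
class Menus:
    """Tracks which menu page is active and passes control tokens
    between the background loop and the switch interrupt handlers."""

    def __init__(self, pages):
        self.pages = pages      # page titles for one menu level
        self.pointer = 0        # index of the currently displayed page
        self.token = None       # control token set by interrupt handlers

    def next_page(self):
        # roll around past the last page, as moving the 'T' button does
        self.pointer = (self.pointer + 1) % len(self.pages)
        return self.pages[self.pointer]

    def prev_page(self):
        self.pointer = (self.pointer - 1) % len(self.pages)
        return self.pages[self.pointer]

    def press_back(self):
        # the interrupt handler only sets a token; the background
        # loop reads it and decides whether to go up a level or halt
        self.token = "back"


menus = Menus(["Cast Hexagram", "Read Hexagram", "Settings", "Quit"])
print(menus.next_page())
```

The real code would also redraw the PiFaceCAD LCD on each pointer change; this sketch only shows the token-and-pointer bookkeeping.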

The code is probably not very elegant, but it represents how far I have come in learning Python.


What the Video Shows

The video starts with the instruction screen, and then alternates it with a depiction of the I Ching characters that moves right and left across the display.  The right-hand (slightly separated) button is designated the 'Back' button, and pressing this from the splash screen invokes the first page of the top-level menu. Pressing any of the other buttons will jump to a specific page in the top-level menu, but I could only show one exit from the splash screen.

Once in the main part of the program, pressing any of the four closely-spaced buttons will select a specific menu page.  I did not demonstrate it in the video, but moving the 'T' button left and right will move through the menu options, rolling around at the ends, to allow menus of more than four options to be accommodated.

Pressing the 'T' button in will invoke an action associated with the menu choice: this can currently be one of:

  • Select a lower level menu.
  • Invoke a command, such as calling a function or other module
  • Quit the program.

In the video, the beginning text of the 'Cast Hexagram' level 2 menu (Stalks) can be seen before a bug causes it to be overwritten with the error message from the exception handler.

I inadvertently pressed the 'back' button twice when returning to the top-level menu: 'Cast Hexagram' can be seen briefly before the 'Halt Requested' message identifies that the back button was pressed from the top-level menu.

Control is then passed back to the background program, which stops after displaying the 'Program Stopped' message.

Further bugs mean that the event listeners for the switches are not switched off, and some interesting things then happen to the display.


The following images show the steps of the creation of the Meditech HeartBeat probe electronics.


Schematics and layout

Finger hearbeat schematics.png Finger hearbeat layout.png

Above: schematics and layout of the probe.

The right-hand block (in the layout image), corresponding to the bottom-right part of the schematic, is a separate component: a small adapter to plug into the BitScope inputs for data acquisition from the various Meditech probes that will use analog and logic data processing.

The two images below show the resulting milled PCB (both sides).

IMG_20150617_092801875.jpg IMG_20150617_092817945.jpg


Circuit assembly and connections

The images below show the BitScope connector and how it will fit in the device.

IMG_20150617_095616924.jpg IMG_20150617_095533248.jpg

Below: the completed HeartBeat circuit connected to the BitScope via the adapter connector.

The LED and sensor connector will be fixed inside the working panel of the Meditech box.



Catch up time

Posted by armour999 Jun 15, 2015


I was out of commission for a while after a freak accident in the garage. I was moving some boxes and had some heavy iron and lumber crush my left arm and part of my right arm. Nothing broke, but it did take some time to heal. So I will be blogging quite a bit to catch up. Back in the game!




Raspberry Pi camera and Dropbox


I wanted to use the Raspi_Cam_Web_Interface to capture stop motion pictures and videos. But I decided I would like the process to be more automated. I looked at pushing the media to a cloud so I would not have to manually download the images. I decided to try Dropbox and it works well with very little python code.






You need to set up a DropBox account and then set up an app to link to your Raspberry Pi. You can set up your app at:


I played with some choices but found the File Type version seemed to work well. As you can see, it supplies an App Key and App Secret. You will use these to link to your Pi.

Now we want to install Dropbox for Raspberry Pi:


git clone


Once downloaded you can make the script executable by using the following command:


chmod +x



The first time you run the script you will be asked to enter the App Key and App Secret.



Screenshot (24).png



HINT: Copy the keys to a text editor first rather than pasting into PuTTY straight from Dropbox; otherwise it does not play nice and you may get errors. I used Word. Once your keys are accepted, it will ask you to open a URL to confirm the connection. Assuming you are using PuTTY, copy the contents to your clipboard and paste into a text editor, then copy the URL into a browser. You may receive a message from Dropbox that the connection is successful, but unless you perform the last step in PuTTY the token may still fail. Some OAuth tokens are corrupt, so you may have to try a couple of times.


RaPiCamcoder stores media files in /var/www/media, so I want a script to push the .jpg files to Dropbox and see the media on my BlackBerry and laptop in near real time. I tried a couple of test .jpgs and it worked like a charm.


I used this script to start the downloader:


pi@raspberrypi ~/Dropbox-Uploader $ ./ upload /var/www/media/ {*.jpg*} /Apps/PiRover


This was tricky. Most documentation did not include a target folder for the upload, and the upload failed. I took several scripts, reduced the code to one line and added the target Dropbox folder. The command tells the Raspberry Pi to upload all files ending in .jpg in /var/www/media (the location where Raspi_Cam_Web stores the images) to my Dropbox app folder called PiRover.


I set up a full Dropbox app instead for final testing and called it PiRover. When I ran the script, the images stored in /var/www/media uploaded to Dropbox at a fairly good speed and are now accessible on my BlackBerry and laptop within minutes.


A cron job is added to run the script every minute, and I'm done! I will add a cleanup cron job so the SD card does not fill up too fast. I'll have some videos posted soon. Please do not rain.
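The cleanup cron job could be as simple as a small Python script that deletes media files older than a given age. This is just a sketch; the directory and one-hour threshold are my own guesses, not fixed choices:

```python
import os
import time

MEDIA_DIR = "/var/www/media"   # where the camera software stores its output
MAX_AGE = 60 * 60              # delete anything older than one hour


def cleanup(directory, max_age_seconds, now=None):
    """Remove .jpg files older than max_age_seconds; return removed names."""
    now = time.time() if now is None else now
    removed = []
    for name in os.listdir(directory):
        if not name.endswith(".jpg"):
            continue
        path = os.path.join(directory, name)
        if now - os.path.getmtime(path) > max_age_seconds:
            os.remove(path)
            removed.append(name)
    return removed


if __name__ == "__main__":
    cleanup(MEDIA_DIR, MAX_AGE)
```

Scheduled from cron (e.g. every five minutes), this keeps the SD card from filling up while the uploader runs every minute.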





The Microstack GPS unit is easy to assemble.  The adaptor board is colour coded, so it's goof-proof when adding the GPS.



The software installation was straight forward:


sudo apt-get install python3-microstacknode


Install gpsd tools:


sudo apt-get install gpsd gpsd-clients python-gps


The raspi-config Advanced Options menu is used to disable the serial port console so the GPS can use it.


One more step to auto start the GPS:


sudo dpkg-reconfigure gpsd


● Choose <yes> when asked if you want to start gpsd automatically. 


● Choose <no> when asked “should gpsd handle attached USB GPS receivers automatically”


● When asked which “Device the GPS receiver is attached to”, enter /dev/ttyAMA0


● Accept the defaults for other options.



Now we want to create the GPS object using Python 3 commands:


>>> import sys,math,time,microstacknode.gps.l80gps


>>> gps=microstacknode.gps.l80gps.L80GPS()


The import command has been modified to include sys, math and time.


Let’s look at the data.  If you execute cgps -s at the command line, you can view a table with the GPS's speed, time, position, visible satellites and quality of the fix.

Screenshot (28).png
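Under the hood, gpsd and cgps are just parsing NMEA sentences arriving on the serial port. As an illustration of what the data looks like (this is not the microstacknode API, just a hand-rolled decoder for the standard $GPRMC sentence):

```python
def parse_gprmc(sentence):
    """Decode a $GPRMC NMEA sentence into lat/long (degrees) and speed (knots)."""
    fields = sentence.split(",")
    if fields[0] != "$GPRMC" or fields[2] != "A":  # 'A' means a valid fix
        return None

    def to_degrees(value, hemisphere):
        # NMEA packs coordinates as ddmm.mmmm (degrees and decimal minutes)
        degrees = int(float(value) / 100)
        minutes = float(value) - degrees * 100
        result = degrees + minutes / 60.0
        return -result if hemisphere in ("S", "W") else result

    return {
        "lat": to_degrees(fields[3], fields[4]),
        "lon": to_degrees(fields[5], fields[6]),
        "speed_knots": float(fields[7]),
    }


# Classic example sentence from the NMEA documentation
fix = parse_gprmc(
    "$GPRMC,123519,A,4807.038,N,01131.000,E,022.4,084.4,230394,003.1,W*6A")
print(fix)
```

A production decoder would also verify the trailing XOR checksum before trusting the fields; this sketch skips that for brevity.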


It's been nice outside, so I have been flying RC airplanes and coding on my project in the evenings.  There is much software to write, so that is the focus right now.





Unfortunately that means my updates are not too exciting.  I could post code, but you guys would be bored.

Instead, I will post some functionality I am testing that the QuadCOP will need.


As I mentioned, I can manually control the quadcopter and then put it in auto-fly mode with a switch on my radio.  So here are some things I am having it do; these items will be put to use, but I am testing them one at a time.  Each one builds on the previous one.


Loiter - It goes into auto mode and simply tries to stay at the current GPS coordinates.


360 degree Loiter - Same as above but it turns slowly 360 degrees (sensor sweep)


Autolanding - It goes into auto mode, and using its loiter functionality it tries to land nicely at the current GPS coordinates.  I am going to have to use a ping sensor for anything below 2 feet.


Return to Base - using a switch on my radio (macro record) it simply notes the current GPS location.  I then fly it around and when I put it into automode, it returns to the GPS coordinates it noted and then lands.


Out of Bounds - Similar to Return to Base.  It notes the current GPS coordinate on my mark, then calculates a 400-foot-radius arc in FRONT of the GPS coordinates.  I then fly around, and if I go out of bounds, the QuadCOP will return to base and then give me control back. The QuadCOP should never go behind me, only in front.


Collision detection - Using some ultrasonic ping sensors, if it detects an object it will take evasive action.  The action is a simple square-type motion: turning left, going a certain distance, then turning right again.


Once the above is completed, I can use the macro-record functionality I built to start navigating waypoint macros. The main point here is that altitude and heading information is recorded, so I can be much more precise than using an iPad to program waypoints.  I can position the QuadCOP very precisely since I am manually flying it.
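The Return to Base and Out of Bounds behaviours both reduce to a distance check between two GPS fixes. A haversine sketch of that check (my own helper names, not the QuadCOP code):

```python
import math


def distance_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two GPS fixes (haversine)."""
    r = 6371000.0  # mean Earth radius in metres
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = (math.sin(dp / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))


BOUND_M = 400 * 0.3048  # the 400-foot radius, converted to metres


def out_of_bounds(home, current):
    """True when the current fix is outside the allowed radius from home."""
    return distance_m(home[0], home[1], current[0], current[1]) > BOUND_M
```

The full Out of Bounds rule also needs a bearing check (the arc is only in front of the pilot), which would compare the bearing from home to the craft against the recorded facing direction.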


Issues I am resolving:
I am trying to use the Xtrinsic MEMS board for altitude and heading information.  It works great with the Raspberry Pi B+ but not with the Raspberry Pi 2.  I did some research and realised I need to enable I2C "repeated starts". You can read more about it here:


I have the sensors working with the Pi 2, but somehow it gets "out of sync" and I cannot poll the sensors.  I think there may be a voltage or pull-up resistor issue involved.  So I am considering my options: I do plan to include the Raspberry Pi B+ for camera purposes, so I could just use it to read the sensors and then pass the info via I2C to the flight computer, or I could do the same thing using the ChipKit Pi.  I'll figure it out! What puzzles me is why it works well with the B+ and not the Pi 2, because they use the same I2C chip...


Another small issue is with the Microstack GPS.  It defaults to 9600 baud and updates at 1 Hz.  It is supposed to accept commands so I can change the baud rate and the update frequency.  I will need the GPS info at at least 2 Hz.  It can go up to 10 Hz, so if I can push 3 Hz and still not miss bytes I think I will be good.  The QuadCOP won't be moving fast, but one update per second is really not a good idea if it is gusty out.  In 1 second it can move pretty far if the wind is strong enough.
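The update-rate and baud commands for this class of MTK-based GPS module are standard "PMTK" NMEA sentences, which need NMEA framing and an XOR checksum. A small helper (my own sketch) to build them:

```python
def pmtk(body):
    """Wrap a PMTK command body in NMEA framing with its XOR checksum."""
    checksum = 0
    for ch in body:
        checksum ^= ord(ch)
    return "${}*{:02X}\r\n".format(body, checksum)


# 500 ms between fixes = 2 Hz update rate
print(pmtk("PMTK220,500"))
# switch the module's serial port to 115200 baud
print(pmtk("PMTK251,115200"))
```

The sentences would then be written to the GPS serial port at its current baud rate (e.g. with pyserial on /dev/ttyAMA0); the module names and the exact command numbers should be double-checked against the module's command reference before relying on them.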


When I find both solutions I will post.

Previous posts for this project:



Project Update

photo 2.JPG

Project Update #10 already, and only one week away from the challenge's halfway point. Time to kick things into a higher gear. That's why I've done something slightly more technical for this week's update: I've created my own capacitive touch breakout board!


As you may have seen over the course of the project, I've been using the Touch Board as a capacitive touch input sensor for the Raspberry Pi. I used it because I had one and thought it was the perfect opportunity to finally use it (like many other boards I own ...). Unfortunately, for others trying to recreate (parts of) this project, it is an expensive investment at about €76 (post-Kickstarter). That's why I set out on a quest to reduce the cost of that component, and ended up succeeding with a cost reduction in the range of 95%!


I've used the Atmel AT42QT1070 touch sensor IC on a custom-made breakout board to convert up to five of the Pi's GPIO pins into capacitive touch inputs. The cool thing about this little breakout board is that it can be used in combination with other SBCs or microcontroller boards! The board was designed in Eagle and produced via SeeedStudio for €10 including shipping, for 10 boards (I did receive 20 though ...).


For more details and a demo, check out this post: Sci Fi Your Pi: PiDesk - Guide: Capacitive Touch with Atmel's AT42QT1070 Touch Sensor IC


Rather than using an out-of-the-box capacitive touch solution for my projects, I thought I'd attempt making my own little breakout board. The idea is to use a sensor capable of triggering normal digital input pins using touch.

This post covers the selected touch sensor IC, the circuit used, getting a custom PCB made and how to use it in a project. At the end of this post, you'll find a video covering the soldering and a brief demo of the custom board.


Touch Sensor IC


While searching for capacitive touch solutions, I quickly came across two different ICs: the Freescale MPR121 (as used in the Touch Board) and the Atmel AT42QT10XX. The MPR121 comes in a QFN package, immediately making it more difficult to use. It has up to 12 inputs and an I2C interface. It can operate from 1.7 to 3.6V making it slightly more difficult to work with when using a 5V host.


Comparing the specs of the MPR121 with the AT42QT10XX IC, I decided to pick the latter.


Atmel AT42QT1070


Some of the features worth mentioning are:

  • SOIC14 package
  • up to seven inputs
  • fully debounced outputs
  • suppresses effects of external noise
  • touch sensing using single pin
  • different operation modes (see next paragraph)


According to the datasheet, the keys can also be operated behind panels of glass (up to 10 mm thick) or plastic (5 mm).


And finally, because it can operate at voltages from 1.8V to 5.5V, it can be used in combination with a wide variety of boards, such as Arduino (5V logic level) or Raspberry Pi (3.3V logic).


Operating modes


There are two modes the touch sensor IC can operate in:

  • comms mode
  • standalone mode


In comms mode, the sensor can have up to seven input keys and interfaces with a master microcontroller or SBC via I2C. The I2C interface can for example also be used to configure sensitivity of the different input keys. A typical connection diagram for comms mode is the following (as provided in the datasheet):

Screen Shot 2015-06-14 at 15.06.29.png


The other mode, standalone mode, does not make use of the I2C interface but rather converts up to five capacitive touch inputs into digital outputs. This is useful to convert digital inputs on a microcontroller board or SBC into capacitive touch inputs. Because the I2C interface is not used, sensitivity of the touch inputs cannot be configured and is static. The connection diagram for standalone mode is the following:

Screen Shot 2015-06-14 at 15.15.31.png
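For comms mode, the host simply reads a key-status byte over I2C and decodes it bit by bit. A sketch of the host side (the device address 0x1B and key-status register 3 are from my reading of the datasheet, so verify them before relying on this):

```python
def pressed_keys(key_status):
    """Translate the key-status byte into a list of active key numbers (0-6)."""
    return [k for k in range(7) if key_status & (1 << k)]


# On the Pi, the byte would be read over I2C, e.g.:
#
#   import smbus
#   bus = smbus.SMBus(1)
#   status = bus.read_byte_data(0x1B, 3)  # 0x1B = device, reg 3 = key status
#   print(pressed_keys(status))

print(pressed_keys(0b0000101))  # keys 0 and 2 held
```

The same I2C interface also exposes per-key sensitivity registers, which is exactly what standalone mode gives up.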


Different versions


Depending on the application's required number of inputs, different ICs of the same family can be used.


Just to name a few:

  • AT42QT1010: single channel
  • AT42QT1040: four channels
  • AT42QT1111: eleven channels


A full list can be found here: Dedicated Touch Devices


Schematic and Board


I chose to make a breakout board for the standalone mode. The circuit is simple as it only involves the touch IC, a couple of resistors and a capacitor. It was a bit trickier to fit everything in the smallest form factor possible, but that worked out well as you can see below.


The schematic and board were created using Eagle and the AT42QT1070 component was downloaded from the Farnell page of the part (AT42QT1070-SSU - ATMEL - SENSOR, QTOUCH, 7-KEY, 14SOIC | Farnell element14).

Screen Shot 2015-04-07 at 16.03.00.pngScreen Shot 2015-04-07 at 16.01.48.png

This is only a first version. I'll be releasing the files after some possible improvements, such as:

  • including pull-up resistors for the input pins rather than relying on the attached microcontroller board/SBC using internal pull-ups
  • LED indicators to show which input is active, as this may be useful for easy troubleshooting





To get the PCB made, I used SeeedStudio. The service is cheap, at about $12 for 10 boards including shipping. The only downside is perhaps that it takes two weeks between ordering and receiving the boards. While waiting for the PCB to arrive, I ordered the parts so I could populate the board once it got here. I purchased the AT42QT1070 from Farnell; the other bits I had available.


Once all the pieces were available, I took out the soldering paste, heated the pre-heater and hot air rework station, laid out the components on the board and reflowed the solder. The result can be witnessed in the pictures below. To see the whole process, be sure to watch the video at the end of this post.

photo 2.JPGphoto 1.JPG


Before hooking up the board to something else, I used my multimeter to check all the connections were correct and no shorts were introduced while soldering.




For testing, I decided to use the breakout board in combination with a Raspberry Pi. The board was powered from the Raspberry Pi's 3.3V and GND pins. The touch sensor's first output was connected to the Pi's GPIO17, and the sensor's first input to a jumper wire.


Using the code below, GPIO17 is configured with an internal pull-up resistor and is pulled low when the sensor's first input is touched.


import time
import RPi.GPIO as GPIO

# use BCM numbering and enable the internal pull-up on GPIO17
GPIO.setmode(GPIO.BCM)
GPIO.setup(17, GPIO.IN, pull_up_down=GPIO.PUD_UP)

# initialise a previous input variable to 1 (assume button not pressed last)
prev_input = 1
while True:
  # take a reading
  input = GPIO.input(17)
  # if the last reading was high and this one low, print
  if (not input) and prev_input:
    print("Button pressed")
  # update previous input
  prev_input = input
  # slight pause to debounce
  time.sleep(0.05)


The code can easily be extended to support all five inputs, as long as the GPIO pins are configured with internal pull-ups.
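Extending the loop above to all five outputs is mostly bookkeeping. Here is a hardware-free sketch of the edge-detection part (the pin numbers are just an example assignment, not a wiring prescription):

```python
# Example GPIO pins for the five sensor outputs (active low)
TOUCH_PINS = [17, 27, 22, 23, 24]


def new_presses(previous, current):
    """Return the pins that just went from high (idle) to low (touched)."""
    return [pin for pin in TOUCH_PINS
            if previous[pin] == 1 and current[pin] == 0]


# In the real loop, `current` would come from GPIO.input() on each pin
# (with the internal pull-ups enabled), e.g.:
#   current = {pin: GPIO.input(pin) for pin in TOUCH_PINS}

before = {17: 1, 27: 1, 22: 1, 23: 1, 24: 1}
after = {17: 1, 27: 0, 22: 1, 23: 0, 24: 1}
print(new_presses(before, after))  # [27, 23]
```

Keeping the edge detection separate from the GPIO reads also makes it easy to unit-test without a Pi on the desk.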




In the video below, you can see me populating and soldering the custom PCB, followed by a little demo of the board used with a Raspberry Pi.


Meditech short presentation article on the Ibiza newspaper, Italian edition this morning.

Screen Shot 2015-06-15 at 10.59.41.png


One of the heart checks is the heart rate sensor, based on a plethysmograph. In our specific case the probe is based on light transmission, so it is a PPG, or photoplethysmograph.

The general principle is to measure a light source transmitted through human body tissue: the transmitted light varies in sync with the heart pulses. This probe will be used in conjunction with the blood pressure analysis to give a more complete blood status check.


Main components



After several tests, the choice of sensor settled on the IR component TCRT1000 reflective optical sensor (see the attached data sheet) for photoplethysmography (and other non-biomedical applications). It has the advantage of working at 3.3V in the IR range at a fixed wavelength of 950 nm, which has good tissue penetration with very low influence from external visible light. Direct light exposure of the sensor reduces the detection contrast and can create serious detection problems, but in this application the transmitter-reflecting surface is in direct contact with the body tissue, so this problem does not arise. So, although this low-cost device is declared for non-specific medical applications, it has proven to work in a very stable way.



The test version of the sensor has been embedded in a velcro strip that is closed around the patient's index finger. The prototype version will provide a more comfortable support, always following this principle.


Signal processing

Inspired by some solutions explained on Electronic Projects Focus, the following images show the test preview circuit: using a generic quad op-amp, the MCP6004 (see the attached data sheet), the sensor signal is filtered from noise, isolating and amplifying the beat frequency only so it can be detected as a digital level change.


The signal is filtered and amplified through two stages, a low-pass and a high-pass filter, isolating and amplifying the signal of interest. The resulting signal is then post-processed for Meditech by the BitScope device (using one of the eight logic channels).


The output from the low-pass filter is sent to the high-pass filter through a 5K trimmer for the final tuning of the signal before the last amplification stage.
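For reference, the corner frequency of each first-order RC filter stage follows the usual f_c = 1/(2πRC). The component values below are hypothetical, chosen to land near a heartbeat frequency, not the ones in the schematic:

```python
import math


def cutoff_hz(r_ohms, c_farads):
    """Corner frequency of a first-order RC filter stage."""
    return 1.0 / (2 * math.pi * r_ohms * c_farads)


# Hypothetical values: a resting heartbeat sits around 1-3 Hz, so a
# low-pass stage with R = 1 Mohm and C = 100 nF cuts off near 1.6 Hz.
print(round(cutoff_hz(1e6, 100e-9), 2))  # 1.59
```

The 5K trimmer between the stages then scales the signal level rather than moving these corner frequencies.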

Screen Shot 2015-06-15 at 08.03.32.png

The heartbeat frequency detection (in the image above, detected with the BitScope meter application) has proven very stable and reliable in use.


Circuit preview

The following images and video show a breadboard test preview of the probe circuit.



The past two weeks have consisted of parts placement and trying to fit the circuits and boards into a suitable enclosure. I found a 5.3 x 5.3 x 2.0 inch case to use. This is as close as I could get to the approximate TOS tricorder size of 8.0 x 6.0 x 2.0 inches without actually designing and 3D printing one. I would have liked to have that ability at this point in the project. Even better would have been to print the actual shape and colors of the original tricorder from the TV series. The photos below show some of the circuitry and components of the Picorder. The main sensor circuit board consists of a temperature/humidity sensor, an alcohol sensor and a CO2 sensor. On the opposite end of this circuit board will be another consisting of a rangefinder sensor, a flame detector and possibly an IR motion detector. Software/script development is ongoing.


Although I do own one of the $60.00 TOS tricorders, I could not bring myself to tear it down for the contest, nor did I feel it would be suitable to house the RPi, power supply and sensors.


I added another gallery of current photos for the recent progress. Up until this point, I have been using the Pi Model B for the design and testing of each stage of development. The sound tests, the temperature and humidity tests and the flame detection tests have all been done with the Model B. However, in order to continue with my desire to use the 3.2" TFT display for a GUI or for graphing the sensor readings in real time, I realized I could not use the same GPIO pins that I used in the sensor tests. Since the display's header uses some of the pins and the header itself prohibits access to the rest, I have had to replace the Model B with the Model B+. This allows the use of GPIO header pins 27 through 40 for the sensor inputs.


As of this date, I have begun to reassign pins and modify all of the software to reference the new assignments throughout the routines as I develop them into a single running script. This step is already proving very challenging for a newcomer to programming, especially with the many different languages available, each with its own specific syntax requirements.


This next time period will consist of completing the enclosure and all remaining components and necessary wiring. Final testing of all components as a complete unit and working out any bugs in hardware and software will follow.




The first camera probe prototype of Meditech has been completed and tested. It includes:


  • 16x2 alphanumeric LCD display
  • Set of five control buttons and one three-state position button to enable the different features and parameters, manage diagnostic tests and control the device
  • WiFi connection to the rest of the Meditech network
  • 12 RGB LEDs ring around the camera


The prototype has been assembled in a custom-designed container. As this probe is mainly based on visual detection, it needs to stay connected to the main Meditech structure while the probe is moved nearby the patient. In this version the camera head (with LEDs) is fixed, but a better solution would include an articulated flip movement.



The following images show the design components to be milled. The material is white acrylic, 5 mm thick. A better solution could use 3 mm thickness, allowing some design simplifications.

Screen Shot 2015-06-14 at 10.31.29.png

Above: top and bottom sides. Below: the three lateral sides.

Screen Shot 2015-06-14 at 10.32.54.png

Below: the components of the camera head enclosure and the opaline plastic cover for the LED ring. It is on a separate design as it has been milled from opaline white translucent acrylic sheet, 3 mm thick.

Screen Shot 2015-06-14 at 10.35.12.pngScreen Shot 2015-06-14 at 10.36.42.png



The following images show the milled and refined components, ready for assembly. Note that the LCD display has a transparent protective cover (2 mm thick Plexiglas).

IMG_20150613_182805056.jpgIMG_20150613_182749698.jpg IMG_20150613_190335208.jpg


Excluding the transparent LCD protection, the camera head parts and the LED cover ring, which were glued with cyanoacrylate, all the parts have been assembled with screws to allow future modification.

Below: Raspberry Pi B+ screwed to the base and the LCD protection glued to the top frame. To fix the Raspberry Pi, the PiFaceCAD has been removed; note the camera flat cable already folded to fit in the camera head on top of the box, and the pass-through connector for the LED ring control.


Below: the assembly steps of the camera and LED ring cables inside the guide and the back part of the camera head. The camera is kept aligned with a piece of adhesive soft strip.


The finished camera head mounting and the sides assembled together and screwed.

Below: views of the camera head and LEDs ring in the final assembly.


Below: views of the camera probe finished prototype




The following images show the camera probe testing. The test was made with the system status displayed on the LCD (as it changes continuously), the LED ring test and the camera previewing video without recording. The test was left running for about 12 hours.



Test Video

The test video shows a few seconds of the 12-hour test process, with the probe controlled via a terminal from another computer connected to the same WiFi network.


The external Meditech camera probe, a Raspberry Pi unit that can be detached from the main device, is one of the most versatile probes of the system. This unit hosts the Pi camera, to be used for any kind of video or still shooting when needed to see and remotely share (also in real time when needed) images of the environment or the patient. Around the camera lens there is a ring of 12 RGB LEDs that can provide several functions:


  • White light to simply illuminate small dark areas while shooting stills or videos
  • Flashing light for automated iris reactivity test
  • Specific colours to check possible vision diseases
  • Other VEP (Visually Evoked Potentials and reactions) based on coloured lights



The following image shows the pre-assembly of the probe.



PiFaceCAD usage

To manage the probe while it is away from the main device, a small user interface was needed, so the PiFaceCAD has been adopted. The question was how to manage the control-and-display device with the microcontroller while the light ring worked at the same time, since the display and controller seem to use almost all of the Pi's pins.


In fact, since all the PiFace boards are controlled over SPI, the display and controls continued working fine with the RGB LED ring connected to the GPIO bus (PWM pin 18). This prototype uses the Adafruit NeoPixel 12-LED ring, but the LED component can be found from several distributors for a few cents per unit in 100-unit packages.


Next step

The following video shows the test of all the components together (the monitor will not be present in the final version). The next step is to build a box for the probe.


Last but not least...

Thanks for the support to fvan for the great neopixel tutorial and clem57 & peteroakes for the PiFaceCAD suggestions.

Previous posts for this project:



Project Update


Ikea desk meets multitool. There's not much more to say about it, enjoy the picture gallery and be sure to check the description


Hope you like the result so far!


{gallery} Ikea desk meets multitool

photo 1.JPG

Desk & Multitool: The desk I picked for the project and the multitool I planned on using to carve out futuristic shapes with.

photo 1.JPG

Sketch: Sketched where the different components like Pi, buttons and futuristic shape for LED strip would be.

photo 2.JPG

Tape: Used tape to better visualise the shapes and serve as reference when cutting.

photo 3.JPG

Internals: That's what the inside of a cheap Ikea desk looks like. A block in each corner to screw the legs to and a cardboard mesh for the rest.

photo 4.JPG

Cutting: Cutting using the multitool was rather easy, but tricky at times where the blade was larger than the shape to cut out.

photo 5.JPG

Done: Most of the cutting work done, waiting for the final dimensions of the screen before cutting out that part.

photo 4.JPG

Test: A quick test to fit the Pi and LED strip and have a look at the result. Once the final LED strip arrives, it can be installed. In the meantime, I'll be testing various diffusers.

Hello, everyone!


Well, I've been struggling to get Mosquitto to play friendly with websockets. Has anyone else tried running Mosquitto with websockets enabled? I posted the following on someone's blog about running an MQTT server on a Raspberry Pi, but it is still "awaiting moderation", so I'm reaching out to this wonderful community for some help! Otherwise, if I can't get it working, I'm just going to stick to the tried-and-true MQTT protocol and pass a text file to lighttpd. Always have to have a backup plan!


I’m running Mosquitto 1.4 on a Raspberry Pi. Mosquitto is able to open a websockets port on 9001 and listen. I can connect fine, but I get the socket error message: "Socket error on client , disconnecting." I’m using an example JavaScript Paho client found here:

Here is my mosquitto.conf:
allow_anonymous true
autosave_interval 1800
persistence true
persistence_file m2.db
persistence_location /var/lib/mosquitto/
#connection_messages true
#log_timestamp true
#log_type all
#log_dest file /var/log/mosquitto/mosquitto.log

listener 1883

listener 9001
protocol websockets


So, any thoughts?
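In case the websockets route stays stubborn, the text-file fallback mentioned above can be sketched roughly like this; the path and field names here are my illustrative assumptions, not part of the actual project code.

```python
# Fallback plan sketch: the Python sensor script drops a JSON file into
# lighttpd's document root and the browser-side JavaScript polls it with AJAX.
import json
import os
import time

def write_reading(path, sensor, value):
    """Write one sensor reading as JSON, atomically, so the browser
    never fetches a half-written file."""
    reading = {"sensor": sensor, "value": value, "ts": time.time()}
    tmp = path + ".tmp"
    with open(tmp, "w") as f:
        json.dump(reading, f)
    os.replace(tmp, path)  # atomic rename, readers see old or new, never partial
    return reading
```

The browser JavaScript can then fetch that file on a timer and update the page, much like the websockets version would.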

In other news, I made this wonderful discovery. Film heating elements. I'm thinking of adding one to the delivery bag that I'm modifying so that it can maintain a decent internal temperature to keep the pizza(s) warm during travel. The idea is that the heating element can be monitored via the temperature sensor and (hopefully) adjusted via the RPis built into the bag.

Film heating element

The film heating elements vary in voltage; I've seen a lot of 12 and 24 V ones, but I did find a version on SparkFun that would be ideal for the project: it warms up to approximately 65 degrees Celsius (150 degrees F). With some decent foam padding to insulate it, it could do a decent job of keeping things cozy.
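If the RPi ends up switching the element on and off based on the temperature sensor, the control loop could be as simple as a bang-bang thermostat with hysteresis. Here's a rough sketch; the target and dead-band values are placeholder assumptions to tune against the real bag, not measured figures.

```python
# Bang-bang thermostat sketch for the heating element, with hysteresis
# around the element's ~65 C rating so the relay doesn't chatter.
def heater_should_be_on(temp_c, currently_on, target=65.0, band=3.0):
    """Turn on below target-band, off above target+band,
    and keep the current state inside the dead band."""
    if temp_c < target - band:
        return True
    if temp_c > target + band:
        return False
    return currently_on
```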

PCB prototyping

After a final check, the schematic proved consistent with the previous tests done with wires and breadboards, so the final PCB prototype was milled as shown in the images below.

IMG_20150608_214734885.jpg IMG_20150608_214746266.jpg

As shown in the image of the final assembled circuit, only the control components are soldered on the board (mainly the various pull-up resistors). As the board has to fit in the Meditech box and connect to elements installed at different distances, all the other components are attached with wires and connectors.




The following images show how the board and the connections are arranged inside the box. The control components (LCD, status LEDs, IR sensor and calibration potentiometer) are on the top side of the lid, accessible during normal usage as the box should remain closed (and possibly locked). The only two parts that remain inside the box are the temperature sensor, placed in the RPImaster area (the Raspberry Pi 2 main unit), and the fan connection that is controlled by the board. A couple of flat cables connect this control panel interface board to the ChipKit PI, and another flat cable connects the board to the Raspberry Pi GPIO for the IR sensor.

IMG_20150609_185219550.jpg IMG_20150609_185230385.jpg


Final view

The following images show the final view of the control panel installed on the top side of the lid. The areas, currently empty, on both sides of the lid surface will host the plugs to connect the diagnostic probes, which are still under construction.

IMG_20150609_185134744.jpg IMG_20150609_191440463.jpg



The proposed project has an array of submodules and sub-projects, as explained in the first post. So far I have made a basic RPi robot and gone into the basics of adding a Python GUI to control it. I have also started working with OpenCV on the Raspberry Pi and worked with live images, and I have produced a DIY quadcopter and added it to the mix. In this update I talk about some 3D printing and how I printed an LCD bracket.



So you have a 3D printer


Congratulations! If you are reading this, either you already have a 3D printer or you intend to get one in the future. In either case, you have taken the first step towards prototyping freedom. Before you fire up the printer, you need to understand what you want to print. A lot of people (including my wife) have taken to experimenting with downloading models from the internet and printing them. And this is the best way to get started, because someone has done the hard work of designing the thing for you and all you need to do is press a button and voilà!


Of course, there are scenarios where you have a specific requirement: a broken knob, a part of a simple machine or, like in my case, the need for something to hold the LCD in place. At that point you know what you need, so you would think you just have to model exactly what you want in a CAD tool and press print, right? Well, chances are it might not work out the way you had hoped, because, believe it or not, when you are making things for 3D printing there are still a few things to keep in mind for the output to be successful. Let us go through the steps I took to create something I needed.


Step One: Decide what you want


Before I even start the computer, I need to analyse what I need. This means dimensions and a rough drawing, hence I usually start with a piece of paper and pencil to make a rough sketch of what I need.

Here my requirement was a 10.1” LCD which needs a sort of ‘stand’ that is a bit flexible. Hence I started on paper to try out possible solutions. The image below is my first rough sketch of three designs I thought would be useful.



I chose design C because it is flexible and can be used for other display sizes as well. Additionally, it uses the least amount of material, hence it was the obvious choice.



Step Two: The dimensions


After I have a drawing in hand, I need to add dimensions to the lines. The dimensions of the LCD are measured and then the drawing is finalised. It takes some time, but the drawing is complete. In my case I need the dimensions of the slots as well as the thickness. The CEL Robox has a 20 micron layer resolution and comes with a 0.8 mm nozzle and a 0.3 mm one, hence I will choose dimensions in multiples of these so that prints are precise.



Step Three: The 3D modelling software tool


In order to make the 3D models, we need a software program that can do the heavy lifting. There are a bunch of options and here is my take on them.


Beginner at 3D modelling?

Google/Trimble SketchUp is what you are looking for.

It's an easy-to-use tool, and the Google 3D Warehouse offers LOTS of models of everything. If you use CadSoft Eagle, you can get the SketchUp plugin which allows you to create a 3D model of your PCB for FREE!

BUT... beware that in SketchUp you can create 3D models that MAY NOT be printable. Why? Because it allows you to create objects with zero thickness, like rectangles and circles, which cannot be synthesized by a 3D printer.

Don't Like Sketchup?

Autodesk 123D is the next best bet and allows you to create all the boxes you want. I have used this one the least, but it's a good place to start with Autodesk.


Master of Mechanical Design and want precision?

RS Design Spark Mechanical

This is one powerful tool for creating enclosures and multi-part things. It's free, has a lot of great tutorials on YouTube, and is the choice of people who want precise control over their design.

You can save your files locally and share them online. BUT if you are a newbie, it will take some time to get used to, and I recommend watching some videos BEFORE getting started or even downloading the tool. It's pro grade with very few features missing and is highly recommended for 3D print modelling.


Like coding/scripting instead of that mouse?

OpenSCAD is a completely scripted way of 3D modelling, and many find it more comfortable. The true power of this tool shows when you exploit its parametric approach and use parameters to dynamically modify and customize models on demand.

Warning. If you do not like writing code... don't get into this one!


Want to make complex Organic Shapes?

Blender is the standard for making organic 3D shapes like faces and other non-mechanical stuff. There are tutorials on how to convert a photograph into a 3D model, and yes, it's complicated, and no, it's not for a newbie. Blender will make you sweat, it will make you dizzy and it will make you hurt, but it will produce the most amazing stuff anyone has seen (for free).


Want a balance of the mecha and orga?

Autodesk Fusion 360 is what you should look at. It's big and powerful, works like Autodesk 123D and DS Mechanical, and takes some time to get used to. BUT once you get the hang of it, you will be making stuff like crazy. Want to add a little 3D flower to your Raspberry Pi case? Sure you can do that! There is a tutorial on how to make a mouse using Fusion 360, and it's quite simple. Hinges and joints are all there, and then you can render the models to your heart's content.

BUT it's cloud based, which means there is no 'Save as file' option. You can export it as an archive, which is equivalent, but you will need an internet connection every now and then just to start the app, which can make you hate it.


Personally, I use Fusion 360 most of the time, but say hello to DesignSpark Mechanical and SketchUp every now and then. I use EagleCAD to create 3D models of PCBs and their enclosures, which are simple enough.


There is commercial software as well, but I am not going into that.


Step Four: Making things


We have the drawing on paper, and it should be pretty straightforward to convert it into a 3D model. However, not everything that can be designed can be printed. When you design for print, you need to consider the following.



1. A 3D printer cannot print in air, so you need to design overhangs that have supports. Most of the time this can be handled by software, like Meshmixer from Autodesk, but it's always better to have a design that can be printed the way you designed it.



2. Don’t design small posts and columns unless your 3D printer can support them. A lot of the time we design thin details like lines and posts which cannot be fabricated.



3. Know the limits. Every 3D printer has a print volume which tells you how big your 3D print can be. If your final object is larger, you need to either split it into pieces or design it with joints.



4. Material thickness considerations. Try to design wall thicknesses in multiples of the nozzle size so that they come out just right.
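As a purely illustrative sketch of consideration 4, a tiny helper that rounds a wall thickness up to the nearest multiple of the nozzle diameter could look like this (the default nozzle size is just an example value):

```python
import math

# Round a wall thickness up to the nearest multiple of the nozzle diameter,
# so walls print as a whole number of extrusion passes.
def snap_to_nozzle(thickness_mm, nozzle_mm=0.8):
    return round(math.ceil(thickness_mm / nozzle_mm) * nozzle_mm, 3)
```

So a 1.0 mm wall drawn in CAD would be bumped to 1.6 mm for a 0.8 mm nozzle.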



With these considerations, I designed the frame using Google SketchUp.



Step Five: Press Print


The second-to-last step is to send the files to the 3D printer. The results may vary depending on the printer.



Step six: Putting things together


The final step is to see if you did everything right or not. In my case the assembled print is shown in the image below.








I got things to work and, as you can see, it's a good fit. I hope I have helped you make a better print, and if so, gimme a shout out. I also made a Raspberry Pi case and I am attaching the pictures at the end.



See you next time,






The following images show the circuit schematics and PCB layout for the control panel connections, as mentioned in the previous post Meditech: Control panel breadboarding circuit preview


Control Panel Schematics.png Control Panel layout.png

An interview on the Meditech project appeared in print today (full article attached, English translation at the bottom)




A performance leap in providing health care assistance in extreme situations is possible thanks to a micro-computer and a mobile or Wi-Fi device.

It is all packed in a carrying case, which is the basis of a project engineered by Enrico Miglino, an Italian researcher who has been living in Santa Eulària (Ibiza, Spain) for five years and has been working on technology projects for years.

«I carry out the development of the software and the prototypes, working on circuits from the idea to the realization of the product. Companies usually hire me for carrying out a project or I propose them new ideas», Enrico Miglino explains.

Based on that, this Italian researcher is completing the details of his project, called Meditech, which is a mobile medical device.

«My idea is the realization of a portable product whose price is extremely competitive, which integrates electronic components and circuits and enables paramedics in an extreme situation, such as an accident, to connect as if they were in a hospital or a health care center, in order to get the information detectable from the patient in real time», he explains.

So, in case an emergency operating room needs to be prepared at the nearest hospital and the patient is in an isolated place, paramedics can rely on a single unit including everything they need and «they can send the data about the patient in real time.»

At the end of July, Enrico plans to complete the number zero prototype, which has to be ready and running to be used in emergency situations and rough environments, so that at the end of the year it will be possible to test it. Moreover, so far this device has been quite cost-effective.

As a matter of fact, it has been partly funded by Farnell Element14, a manufacturer of electronic components. Farnell launched a competition for new ideas related to innovative projects and Enrico participated with Meditech and got a contribution in the form of components.

Another important characteristic of this project is that it is open source, that is, free hardware and software. «It is an open project that anybody can carry out, provided that nobody will market it; besides, it has a humanitarian value, as it can be used for example after an earthquake or a tsunami, when medical care is needed and difficult to provide».

Each unit sold to a customer automatically generates a donation. With the payment of a unit from the customer, Enrico Miglino will produce a second unit. «It is obvious that the customer knows the destination of the second unit, as well as the organization that receives it, and that in turn knows who is the customer.»

As a matter of fact, he says that one of his customers in Nigeria has already issued an order to produce one of these briefcases and that the second will be used in a Nigerian hospital.


«The important thing is that the material can be connected to the hospital through the laptop when you are in a dramatic situation and thus be able to work with the hospital», said Miglino, who also points out that in Ibiza there are the right conditions for the use of such material.

(thanks to for the english translation)

Previous posts for this project:



Project Update

photo 1.JPG


The weeks are flying by at incredible speed. I also managed to lose my wallet last weekend, causing a lot of administrative headaches to block bank cards, request a new identity card and driver's license, etc ... Some things move more quickly than others and I already have a bank card, which I used to purchase some of the components for the project.


First, I ordered a 5m addressable LED strip from eBay, to be integrated in the desk, as the tests using the Raspberry Pi to drive a 1m version were successful. It should take about two weeks to get here, giving me enough time to make the necessary shapes in the desk's surface to put them in.


Then, a trip to Ikea was in order. I knew which desk I wanted for my project, the shortest path to its location and the shortest path to the exit. The plan was perfect, in and out in 10 minutes. There was just one issue: my wife tagged along for the ride. Needless to say she needed to see the entire store to get some ideas, picked up some extra items, wanted my opinion on new sofas, etc ... Can anyone relate (please say you can ...) ? Anyway, the trip took much longer than expected, but I have the desk I wanted for my project, for less than 10 EUR (I already had a set of legs).


Finally, a trip to the hardware store to get a new tool: an oscillating multitool. I figured this would make it easier to perform controlled cuts in the desk's surface without having to slice through completely (cheap Ikea desks have some kind of cardboard mesh inside). It's a tool I'll be able to use for many other things, so I consider it a good investment for the future.


So, more of a preparation week, as I move on to the actual build of the project. I'll be drawing some patterns on the desk next week and hopefully start making some cuts.


Stay tuned!


As already explained in the previous posts, the Meditech container includes many components tied together by a small network of different, specialized microcomputers and microcontrollers. A ChipKit PI board connected to the RPImaster main device manages a series of independent controls, mostly related to Meditech's internal health status; the same microcontroller board is also used to set several calibration levels for the probes.


Meditech should be easy for the user to control and manage, and the user is not expected to be highly skilled technical personnel. So the control panel is an internal automated device designed to simplify the life of the operator, enabling him to manage the different diagnostic systems without too much effort and without the need for a keyboard and mouse, which are optional and only used in some conditions.


Control Panel preview.jpg


The control panel features

The following are the features supported by the control panel:


Temperature control

An integrated heat sensor constantly monitors the internal temperature (shown on request on the LCD control panel display). When the level reaches about 40°C the cooling fan is automatically started, and its speed is increased if the heat level keeps growing during use.


Parameters calibration

Depending on the user's choices, the same analog potentiometer can be used to calibrate different probes where needed (e.g. the microphonic stethoscope audio gain).


Probes activities

A series of LEDs shows which probes are enabled and working.


LCD Alpha.jpgLCD alphanumeric display

An alphanumeric LCD (based on the LCD alphanumeric kit for Arduino, with the software library adapted to work with the ChipKit board) using the I2C bus of the ChipKit PI microcontroller board shows information, alarms, internal temperature, etc.


IR receiver

This is the only tool directly connected to the RPImaster, enabling the user to access the control panel features and settings remotely. This is useful as in many cases the probe settings should be managed by the operator while he is near the patient.

The microcontrollers exchange data with the RPImaster device via a serial connection, which has sufficient speed for exchanging simple commands.
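As a purely hypothetical sketch of what such a simple command exchange could look like on the wire (the frame layout and checksum below are my illustrative assumptions, not the project's actual protocol):

```python
# Hypothetical serial command framing between the ChipKit and the RPImaster:
# frames look like <CMD=VALUE|checksum>, where the checksum is the byte sum
# of the body modulo 256, rendered as two hex digits.
def frame(cmd, value):
    """Wrap a command and value into a checksummed frame string."""
    body = "{}={}".format(cmd, value)
    checksum = sum(body.encode()) % 256
    return "<{}|{:02X}>".format(body, checksum)

def parse(frame_str):
    """Validate the checksum and return (cmd, value)."""
    body, chk = frame_str.strip("<>").rsplit("|", 1)
    if int(chk, 16) != sum(body.encode()) % 256:
        raise ValueError("bad checksum")
    cmd, value = body.split("=", 1)
    return cmd, value
```

The checksum matters on a serial line shared with noisy motors and fans, where a flipped bit in a shutdown command would be unpleasant.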


Fan control

The fan control is an automatic feedback loop responding to the internal temperature. The fan speed is controlled by a PWM signal managed by the microcontroller and is independent of the rest of the system.
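The temperature-to-PWM mapping could be sketched like this, using the ~40°C threshold described above; the 60°C full-speed point is an assumption for illustration, not a measured project value.

```python
# Map internal temperature to a fan PWM duty cycle: off below the threshold,
# then a linear ramp up to 100% at the full-speed temperature.
def fan_duty(temp_c, threshold=40.0, full_at=60.0):
    """Return a PWM duty cycle (0-100) for the given internal temperature."""
    if temp_c < threshold:
        return 0
    if temp_c >= full_at:
        return 100
    return round(100 * (temp_c - threshold) / (full_at - threshold))
```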


Lid open alarm

The lid open alarm is activated by a microswitch (in these cases a mechanical solution is more reliable than redundant electronics). If the device components lid is opened while the system is running, an alarm is shown on the LCD display; then, after a certain delay (the duration should be determined experimentally), the microcontroller sends a command to the RPImaster that initiates a total shutdown procedure for all the devices.


The following video shows the experimental preview of the test circuit just before it is converted into a schematic and the content of the box is organized in a proper way.



This is the first article of the second phase of the project development: the implementation of the diagnostic probes.

The first implementation, described below, is the digital microphonic stethoscope based on the Cirrus Logic Audio Card.



The principle

In principle, the technique is fairly simple: a small condenser microphone replaces the traditional stethoscope earpieces. But, as always, things are not as simple as they seem.

The microphonic stethoscope replaces the standard earpieces with a small condenser microphone, as shown in the images below. A common commercial device of 5 mm diameter was used here. The microphone should sit about 20 cm from the stethoscope head; this gives better sound than placing the microphone too near the head. In line with the microphone there is the cable that is plugged into the audio card. Using a good quality cable with a copper shield (left unconnected) I got a good signal-to-noise ratio. The length of the audio cable is about 2 meters.


The following images show the microphone in detail

IMG_20150604_111110463.jpg IMG_20150604_111059157.jpg


As a matter of fact, with the audio card installed and working, the "simple" commands available after installation let you do nothing and hear nothing, and no documentation is provided at all. The solution is inside the commands themselves:


  1. As the system is headless, there is no reason to use mplayer to listen to the tests, with the further difficulty that without a user interface this command is very hard to manage. So it has been ignored.
  2. As acquisition from the audio card is driven by the arecord Linux command and the settings are managed by the amixer command, playback was set up using the aplay command, part of the same toolset.
  3. Based on the low frequencies coming from inside the human body, fine tuning was done to obtain the best audio acquisition. The attached test file is an example that I have compared with the traditional hearing level and sound quality.
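As an illustration of point 2, the acquisition could be wrapped from a Python script like this; the ALSA device name, sample rate and duration are assumptions to adjust for the Cirrus Logic card (the -D/-f/-r/-d flags are standard arecord options).

```python
# Build the arecord command line for a capture; a low sample rate suits the
# low-frequency body sounds discussed above. Run with subprocess on the Pi.
def arecord_cmd(device="hw:1,0", rate=8000, seconds=10, outfile="test.wav"):
    return ["arecord", "-D", device, "-f", "S16_LE",
            "-r", str(rate), "-d", str(seconds), outfile]

# e.g. on the Pi itself:
# import subprocess
# subprocess.run(arecord_cmd())
```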

The images below show the complete probe compared with the traditional one (one heart, two devices)


IMG_20150604_111236888.jpg IMG_20150604_111306665.jpg


Sampling and software

To hear the test.wav file from a PC you should use headphones with a good bass response, otherwise it is very quiet or impossible to hear. Also set the volume to the maximum level. The following image shows how the acquired sound curve appears:


Original test wav.jpg


The file was recorded with modified audio card settings, at an average quality chosen for the acquisition/playback of heart and general internal human body sounds.

The original sound was processed (on a fast Mac laptop) using the iZotope software. The following images show the selective equalisation settings (the noise and unwanted frequencies have been almost completely removed) and the resulting wav file, which is attached to this post (the test-processed file)


Selective equalization.png Equalized test wav.jpg

The definitive approach for the automated processing of the microphonic stethoscope, which will be discussed in a further article, will run on the dedicated Raspberry Pi hosting the sound card. The audio curve optimisation and processing is done with a headless Python interface (the real-time data are sent to the master device, which stores and displays them) using the snack library, which proved to be the most reliable and precise open source project in this field.
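As a taste of what one such headless processing step might look like (the real project uses the snack library; this is only a stdlib sketch), the Pi could compute the RMS level of an acquired wav file to reject silent or saturated captures before deeper analysis. The synthetic test tone below simply stands in for a real recording.

```python
import math
import struct
import wave

def wav_rms(path):
    """Return the RMS amplitude of a mono 16-bit wav file."""
    with wave.open(path, "rb") as w:
        frames = w.readframes(w.getnframes())
    samples = struct.unpack("<{}h".format(len(frames) // 2), frames)
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def write_test_tone(path, freq=80, rate=8000, cycles=8, amp=10000):
    """Write a short low-frequency sine, roughly in the band of heart sounds."""
    n = rate * cycles // freq
    data = struct.pack("<{}h".format(n),
                       *(int(amp * math.sin(2 * math.pi * freq * i / rate))
                         for i in range(n)))
    with wave.open(path, "wb") as w:
        w.setnchannels(1)
        w.setsampwidth(2)
        w.setframerate(rate)
        w.writeframes(data)
```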

Nothing dramatic to post this week, and no pictures again, but I have been debugging Python code to deal with PiFaceCAD interrupts, and progress is a bit slower than I had hoped. However, I am nearer the point where I can share a fork of some working code in a public repository on GitHub.


I have had the BitScope Micro working with my laptop, ready for when I start debugging the comms between the 4D display and the Pi, but despite using it with a new HP laptop that has not yet been overloaded with other software, the app keeps crashing, so I have not been able to use it in anger yet.


No update on the Wolfson Audio card delivery, but reading the specs it might not work on the Pi 2, because the latter does not have the auxiliary audio pads that the Wolfson card sits on, and there might be clashes with the PiFaceCAD, so I might look at putting the audio out through a speaker connected to the audio jack, or possibly the piezo speaker fitted to the 4D display.


Hopefully I can add to this report on Friday.

Previous posts for this project:


Nothing of real importance here.  Most of what I am doing is C++ coding behind the scenes for navigation of the flight system.  This is not going to be a really informative post, but just something to keep me visible.


How about some pictures?  I was walking around with this in the neighborhood; I bet I looked geeky!  I am recording coordinates for testing.  However, I need to get the MEMS sensor board we got in the kit working, so I can use the magnetic compass.



Foam "Egg"

First, a foam "egg" I am going to use as a GPS "radar".



I cut it in half, then used a blowtorch to shrink the inside to make it hollow; that's why it looks like a toasted marshmallow.


The "contraption"

The whole "contraption".  Powered by a 2S LiPo and a 5V regulator.



An OLED that connects via I2C.  I have it for now to see what is going on while walking around.  I'll explain this later after all this is installed neatly on the QuadCOP.



The radar needs hot glue to fill the gaps, and some paint!

Just some good news that is an important part of the project lifecycle.


Today, the 2nd of June, I signed a pre-agreement - as a matter of fact a Memorandum of Understanding - with Hicom Mobile Ltd (mentioned as second party) and the Heart Health Medical Services hospital (mentioned as third party) in Port Harcourt, Rivers State, Nigeria, about the Meditech device.


Citing the most meaningful parts of the agreement:


It is established that Enrico Miglino has developed an innovative biomedical device referred to as its project codename Meditech, partially sponsored by Element14, part of the The Farnell group, U.K. as described in the annexed technical documents.

It is also established that Meditech is created adopting some commercial technologies, components and devices, and it will be produced under open hardware and open source licenses with some limitations related to marketing, distribution, promotion and selling.


The First Party agrees to designate Second party as the sole and exclusive partner to market, distribute and sell the product on the territory of Nigeria.


A first period of six months will be referred to as Pilot phase of the business to test the marketing response and the adopted marketing strategies.

The Third Party, also named as the medical reference for the project's final testing, explicitly manifests its interest in the product, in its adoption at the agreed commercial conditions as explained in the annexed documentation in the following contexts, and in its technical support in the testing and certification of the validity of the data produced by Meditech compared with traditional analogue diagnostic systems.


The Third Party agrees to follow the project from the current phase up to the end of Phase 3, and to be part of this third-phase testing.

The Third Party agrees that the Phase 3 testing process will be carried out inside one of the medical structures it will select, with a collaborator it considers trustworthy, after a period of training on the use of Meditech.

The Third Party agrees to support the project in Phase 2 with any medical advice and suggestions needed, or that it considers essential for a better device response and behaviour, and - if available - the supply of medical components useful to test the Meditech device.

The Third Party agrees to acquire some (indicate how many if possible) of the first Meditech units in accordance with the contractual conditions that will be discussed and agreed by all the parties signing this agreement, in accordance with the marketing and product distribution guidelines as described in the project documents annexed to this project.


The initial conditions, as declared in the initial project purposes, where every sold unit generates a free unit delivered to a non-profit organisation or similar entity, are part of this agreement.

That's all






The proposed project has subsystems that need to be produced individually. The project is all over the place right now and, due to issues with my ISP, I am having to deal with delays of all kinds. In this update, I redirect you to an existing project series that I am working on and will be using in my project, so please read on.


The Surveyor

I started a series of posts on making a quadcopter and the posts are as follows:



I just updated part 4, and it will be used in this project. The concept is to add control of the quadcopter via a Raspberry Pi, which will take pictures of a particular location and upload them automatically. To control the quadcopter, the RPi will send commands via PWM, which will be produced by an RPiSoC that was provided in the Forget Me Not Challenge.
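The command-to-PWM translation could be sketched like this, using the common RC convention of 1000-2000 microsecond pulses; the exact range the flight controller expects is an assumption to verify against the real hardware.

```python
# Map a 0-100% throttle command to a servo-style PWM pulse width in
# microseconds, clamping out-of-range commands for safety.
def throttle_to_pulse_us(throttle_pct, min_us=1000, max_us=2000):
    throttle_pct = max(0, min(100, throttle_pct))
    return round(min_us + (max_us - min_us) * throttle_pct / 100)
```

The RPiSoC would then generate pulses of that width on the channel wired to the flight controller's throttle input.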


I will be adding some videos later in this week...


To be continued...