armour999

Final writeup

Posted by armour999 Aug 28, 2015

IMG_20150327_223447.jpg


Right to the wire to finish this project. This has been so much fun and a challenge. The software used included Java, C++, Python and Linux. I used several tools, including:

1. SD Formatter:  https://www.sdcard.org/downloads/formatter_4/

2. Win32diskimager: http://sourceforge.net/projects/win32diskimager/files/latest/download

3. FileZilla : https://filezilla-project.org/download.php?type=client


Access Point for extending WIFI range

 

One issue I had was the short WiFi range outside the house. I looked at several tutorials for building an access point and the Arch WiPi worked right out of the box. I could not make the uplink wireless as the signal strength was too weak, so it is fed by a network cable. The access point itself was stronger than our wireless in the house, and I was impressed at the range it added to the WiFi.

 

Arch Linux Raspberry Pi WiFi AP requirements for Raspberry Pi Model B Revision 2.0

  • Power adapter with at least 1500 mA (2 amp recommended)

  • Minimum 2GB SD Card Fat32 formatted

  • Wifi USB dongle

  • Network Cable with Internet access

IMG_20150324_113706.jpg

    Image Installation and Access Point Setup


  • Download  Arch Linux Wireless Raspberry Pi image: https://sourceforge.net/projects/archwipi/files/latest/download
  • Extract it, e.g.: sudo tar zxf archwipi.img.tar.gz

    Optional – extend partition to use all of disk. You can use gparted. (I used Raspi-config)

  • Plug your internet cable and new Arch WiPi SD card into your Raspberry and power it on.

  • Everything is automated so after a minute or so scan for a new network SSID = archwipi

  • If you don’t see the archwipi SSID, then it means you need to manually install your Wi-Fi dongle drivers. I had no issue, and the SSID showed up as a wireless network option on all the Raspberry Pi computers.

  • The wifi password is: 1010101010

  • If you need to login to the Pi the credentials are: root | archwipi

  • You can change the WiFi password to whatever you like by editing: /usr/lib/systemd/system/create_ap.service

  • Check CPU speed, temperature and more using ./bcmstat.sh

  • You can also view graphs of Pi stats, browse to this address (Pi’s IP Address):8080/

 

Raspberry Pi Camera (Two ways)

 

I decided to use two ways to take pictures. One was the Raspi-Pi cam, and the other was Snap Camera with the PiFace Control and Display:

IMG_20150324_115423.jpg

Raspi-Pi cam

Detailed documentation can be found at:

http://elinux.org/RPi-Cam-Web-Interface

 

Once the software is installed, please note that the raspistill command will no longer work if you try to test the camera (the camera is already in use by the web interface). The basic installation is:


  • Install Raspbian on your RPi
  • Attach camera to RPi and enable camera support with sudo  raspi-config
  • Clone the code from GitHub and run the installer with the following commands:

 

          git clone https://github.com/silvanmelchior/RPi_Cam_Web_Interface.git

 

          cd RPi_Cam_Web_Interface

 

          chmod u+x RPi_Cam_Web_Interface_Installer.sh

 

          ./RPi_Cam_Web_Interface_Installer.sh install

 

After the install you need to reboot the Raspberry Pi. Once that is completed you can view the GUI in a browser; I found either Chrome or Firefox worked well. This GUI has many features including time-lapse and motion detection. The files are stored in a specific location, so I found a way to use Dropbox to store the pictures in real time on my laptop and BlackBerry. I did investigate a couple of cloud apps but this seemed to be the easiest.

 

Adding Dropbox to the Raspberry Pi


You need to set up a DropBox account and then set up an app to link to your Raspberry Pi. You can set up your app at: https://www.dropbox.com/developers/apps/

 

I played with some choices but found the File Type version seemed to work well. As you can see it supplies an App Key and App Secret. You will be using this to link to your Pi.

Now we want to install Dropbox Uploader on the Raspberry Pi:

 

git clone https://github.com/andreafabrizi/Dropbox-Uploader/

 

Once downloaded you can make the script executable by using the following command:

 

chmod +x dropbox_uploader.sh

 

 

 

The first time you run the script you will be asked to enter the App Key and App Secret.

 

./dropbox_uploader.sh

Screenshot (24).png

 

 

HINT: Copy the keys to a text editor first rather than pasting straight from Dropbox into PuTTY; otherwise it does not play nice and you may get errors. I used Word. Once your keys are accepted it will ask you to open a URL to confirm the connection. Assuming you are using PuTTY, copy the contents to your clipboard and paste them into a text editor, then copy the URL into a browser. You may receive a message from Dropbox that the connection is successful, but unless you perform the last step in PuTTY the token may still fail. Some OAuth tokens come through corrupt, so you may have to try a couple of times.

 

The RPi Cam Web Interface stores media files in /var/www/media. So I wanted a script to push the .jpg files to Dropbox and see the media on my BlackBerry and laptop in real time. I tried a couple of test .jpg files and it worked like a charm.

 

I used this command to start the uploader:

 

pi@raspberrypi ~/Dropbox-Uploader $ ./dropbox_uploader.sh upload /var/www/media/*.jpg /Apps/PiRover

 

This was tricky. Most documentation did not include a target folder for the upload, so the command failed. I took several scripts, reduced the code to one line, and added the target Dropbox folder. The command tells the Raspberry Pi to upload all files ending in .jpg in /var/www/media (the location where RPi_Cam_Web stores the images) to my Dropbox app folder called PiRover.

 

I set up a full Dropbox app instead for final testing and called it PiRover. When I ran the script, the images stored in /var/www/media uploaded to Dropbox at a fairly good speed and are now accessible on my BlackBerry and laptop within minutes.

 

A cron job runs the script every minute and I'm done! I will add a cleanup cron job so the SD card does not fill up too fast. I'll have some videos posted soon. Please do not rain.
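For anyone who wants to combine the upload and the cleanup in one place, here is a minimal sketch of the kind of helper a cron job could call every minute. This is not the exact script I used; the uploader path, remote folder and one-hour cleanup threshold are assumptions you would adjust.

#!/usr/bin/env python
# Hypothetical helper: upload new .jpg captures to Dropbox and prune old ones
# so the SD card does not fill up. Paths below are assumptions.
import glob
import os
import subprocess
import time

UPLOADER = "/home/pi/Dropbox-Uploader/dropbox_uploader.sh"
LOCAL_DIR = "/var/www/media"
REMOTE_DIR = "/Apps/PiRover"
MAX_AGE_SECONDS = 60 * 60   # prune local files older than one hour (assumption)

now = time.time()
for path in glob.glob(os.path.join(LOCAL_DIR, "*.jpg")):
    # upload the picture, keeping its filename on the Dropbox side
    subprocess.call([UPLOADER, "upload", path,
                     REMOTE_DIR + "/" + os.path.basename(path)])
    # delete local copies that are older than the cleanup threshold
    if now - os.path.getmtime(path) > MAX_AGE_SECONDS:
        os.remove(path)

A crontab entry running this once a minute would then replace the one-line upload command shown above.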

 

 

 

SnapCamera

 

A complete guide can be found here:

 

http://www.piface.org.uk/guides/how_to_use_snapcamera/install_snap_camera/

 

Basic install instructions are:


Install snap camera with the command
sudo apt-get install python3-snap-camera

 

Start SnapCamera by running
snap-camera

 


GPS Real Time Tracking

IMG_20150324_115331.jpg

Most of my effort went into programming the Raspberry Pi and Microstack GPS to provide real-time tracking. The Microstack GPS unit was installed as per the instructions:

 

sudo apt-get update

sudo apt-get upgrade

sudo apt-get install gpsd gpsd-clients python-gps

 

Disable the serial port from raspi-config:

 

● From the menu choose Advanced Options.

 

● Then choose serial.

 

● When asked “would you like a login shell to be accessible over serial?” choose No.

● A message saying “serial is now disabled” will appear.

 

● Exit raspi-config and reboot the Raspberry Pi.

 

Then:  sudo dpkg-reconfigure gpsd

 

● Choose Yes when asked if you want to start gpsd automatically.

 

● Answer the question “should gpsd handle attached USB GPS receivers automatically”.

 

● When asked which “Device the GPS receiver is attached to”, enter /dev/ttyAMA0.

 

● Accept the defaults for other options.

 

 

 

Now test the GPS with: cgps -s

 

 

 

 

Creating Real Time GPS Tracking

 

  • Sharing using a Google Earth KMZ file
  • Live data provided by the Microstack GPS via gpsd (a minimal sketch follows below)
  • Connected to the Raspberry Pi via /dev/ttyAMA0
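The tracking script itself is not included in this post, but as a rough sketch of the idea it describes: read the current fix from gpsd using the python-gps bindings installed above and write it out as a simple KML placemark (a KMZ is just a zipped KML) that Google Earth can open. The output file name, placemark name and update interval below are my own assumptions, not the project's code.

#!/usr/bin/env python
# Minimal sketch: poll gpsd for the current fix and rewrite a one-placemark
# KML file that Google Earth can load or refresh.
import time
import gps

KML = ('<?xml version="1.0" encoding="UTF-8"?>\n'
       '<kml xmlns="http://www.opengis.net/kml/2.2"><Placemark>'
       '<name>Rover</name><Point><coordinates>%f,%f</coordinates>'
       '</Point></Placemark></kml>')

session = gps.gps(mode=gps.WATCH_ENABLE)   # connect to the local gpsd daemon

while True:
    report = session.next()
    # TPV reports carry the time/position/velocity fix
    if report['class'] == 'TPV' and hasattr(report, 'lat'):
        with open('/home/pi/position.kml', 'w') as f:
            f.write(KML % (report.lon, report.lat))   # KML wants lon,lat order
        time.sleep(5)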

 

 

Robot Chassis (My first Robot)

 

Well, I went with the Half-Pint Runt Rover, which came with no instructions. I'm not sure if it's because I'm left-handed or Canadian, but I found this a challenge to assemble. I did find a 3D diagram and a short video that helped put the kit together.

 

 

 

Attaching the motors to the Gertbot


A comprehensive datasheet can be found here:


http:///datasheets/1862080.pdf

 

The schematic was helpful in connecting the motor coils and power supply to the  GertBot.

 

IMG_20150326_090826.jpg

Well, this project was a great challenge and I LOVE my new QuadCOP. I think I may rename it DROID COP since I made it look akin to the Star Wars droids.

 

I have posted some information that I will not repeat in this summary. A few posts that I think are very important:

Some information from my original application

Explanation of the ControlSwitch

Challenges with the ChipKit Pi

Microstack GPS and how I used it

The Sensor Array

I2C Custom Block Protocol

Raspberry Pi Turning on the camera

Autopilot Functionality I am Trying to Implement

QuadCOP Navigation

The Test Flight Plan

First Loiter Test

Source Code and Software Flowcharts

 

My Favorite posts that are not as important but include my family:

You call that A QuadCopter? (see first video)

QuadCOP final construction pics

 

I want to provide a high level picture of the final hardware:

 

QuadCop_Final2_bb_edited2.jpg

 

Here is the final walk around of all the hardware and elements I have used.

 

 

 

Parts Used from Kit:

  • Pi 2
  • Pi b+
  • Raspberry Pi Cam
  • ChipKit Pi
  • Microstack GPS
  • Xtrinsic Mems Board (still under testing)
  • Pi Shim RTC

 

 

Things I added from original design:

  • Custom Protocols
  • Moving Head
  • Use of ChipKit Pi instead of Arduino (huge learning curve)
  • Custom built quad body (instead of buying one)

 

Things I didn't get done in original design:

  • Sound Card
  • Motion Sensor
  • Sending of Text Messages (Logging is still done)

I have yet to test the waypoint navigation as I am out of time, but I will be testing that this weekend and will add the video to this post. Currently what I am testing is the "loiter" functionality, that is, the QuadCOP attempts to stay at a specific heading and GPS location.

 

 

I am new to this hobby and did not account for the huge learning curve with some of the systems. I also lost a month to a family vacation and wish I had taken the project with me!

 

As a final farewell I went out in high winds and did one last test of the loiter, and also flew the quad around a bit. The winds were very high and I wanted to see how well the quad held. Unfortunately the camera cut out after 1 minute. This flight was CRAZY and the QuadCOP was trying hard to hold its place. I had to take control back a few times.

 

I attempted a very low loiter and the quad headed towards the ground.  There was a puddle on the ground and I think it messed up the sonic sensor, and it hit the ground!  You can see the puddles.

quadcop_crash_small.png

 

Since the head came off, it unplugged the RPi B and the camera, and the video didn't record from inside the head. The repairs seem minor, so I am going to post more video tomorrow with a second try.

 

Conclusion

Thanks Element 14 and EVERYONE for your help and support.  I didn't get as much done as I wanted and my documentation is not as nice as I am capable of.  But rest assured in future projects my abilities and quality will continue to rise!

This project allowed me to use C++, which seems to be a dying language in the everyday workplace. C++ is dear to my heart and I thoroughly enjoyed coding the QuadCOP in a real language.

 

My family participated including holding things, getting tools, painting, and putting up with a grouchy dad!  Time with family is never wasted and as such I am very satisfied with my project.  I will continue it on forever as it will never be considered complete!  I have an awesome new RC toy with unique functionality and I plan to fly this around at my club giving demos.  It is a real show stopper!

 

Edit:  What I feel I accomplished.

 

I read the point of the contest as being to push the knowledge of the Raspberry Pi 2. I feel I did that by using wiringPi and I2C, block protocols, repeated starts, and threading (the Pi 2 has multiple cores), as well as sorting out the ChipKit Pi as a real-time component for the Raspberry Pi. I showed how to realistically use the ChipKit Pi to do real tasks that may not be suitable for a non-real-time system such as the Pi 2 with Raspbian.

 

I don't like that I didn't get all the functionality I wanted completed. I do feel that I solved many issues, and if someone wants to do real work with C++ and the Pi as an embedded system, I have provided a strong foundation for doing that. I felt these things were more important than adding more eye candy or videos. Unfortunately the technical aspects do not get much attention, as is shown by my software flow charts and source code post. I hope that someone can use the knowledge I have shared.

 

 

A few parting pictures:

IMG_0280.JPG

 

flying.png

 

quad_fly.png

 

IMG_0319.JPG

 

IMG_0302.JPG

IMG_0294.JPG

 

IMG_0290.JPG

 

IMG_0050.JPG

IMG_0289.JPG

 

The Ketteh sez was a good project.

IMG_0242.JPG

We all had fun!

kids.png

Michael Hahn

Picorder SCRIPT

Posted by Michael Hahn Aug 28, 2015
#!/usr/bin/env python
#######################################################################
## Michael Hahn - Final Version 8-27-2015
## Sci_Fi_Your_Pi element14 contestant.
## The Picorder: A Star Trek style Tricorder
#######################################################################
## Flame Sensor_Temperature_Humidity_Motion_Distance Sensing routine
#######################################################################
import os
import sys
import RPi.GPIO as GPIO
import time
import Adafruit_DHT
from time import sleep
GPIO.setmode(GPIO.BCM)
GPIO.setwarnings(False)
GPIO.setup(13, GPIO.IN)# Flame detector
GPIO.setup(17, GPIO.IN) # PIR motion detector


temp = Adafruit_DHT.DHT11
pin = 4


os.system('omxplayer --no-osd -o local Picordersound.mp3 &')
###############################################
##### Flame Detection SENSOR
###############################################
if (GPIO.input(13)):
    print
    print
    print("Flame detected, take ACTION")
    os.system('omxplayer --no-osd -o local Autodefense.mp3 &')
    print
    print
else:
    print
    print
    print("No Flame detected, safe to proceed")
    print
    print
###############################################
##### DISTANCE MEASURING SENSOR
###############################################


TRIG=26
ECHO=6


GPIO.setup(TRIG,GPIO.OUT)
GPIO.setup(ECHO,GPIO.IN)


print "Waiting For Sensor To Settle"
time.sleep(1)
GPIO.output(TRIG, True)
time.sleep(0.00001)
GPIO.output(TRIG, False)


print
print "Distance Measurement In Progress"
print


# wait for the echo pulse and time it: pulse_start marks the rising edge,
# pulse_end marks the falling edge of the ECHO pin
while GPIO.input(ECHO) == 0:
    pulse_start = time.time()
while GPIO.input(ECHO) == 1:
    pulse_end = time.time()


os.system('omxplayer --no-osd -o local Queue.mp3 &') 


pulse_duration = pulse_end - pulse_start
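# pulse_duration is the round-trip echo time; sound travels at roughly
# 34300 cm/s, so multiplying by 17150 (half of that) gives the one-way
# distance to the target in centimetres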

distance = pulse_duration * 17150

distance = round(distance, 5)


print
print "Distance:",distance,"cm"
##print " Distance :%5.1f cm" % distance
print
print


####################################################


humidity, temperature = Adafruit_DHT.read_retry(temp, pin)


if humidity is not None and temperature is not None:
    print
    print 'Temp={0:0.1f}*C  Humidity={1:0.1f}%'.format(temperature, humidity)
    print


else:
    print
    print 'Failed to obtain reading. SCANNING!'
    os.system('omxplayer --no-osd -o local nosignal.mp3 &')
    print
    sleep(1);
####################################################
###### PIR MOTION SENSOR
####################################################
input = GPIO.input(17)


if (GPIO.input(17) == True ):
        print
        print "DANGER - DANGER - MOTION has been detected"
        os.system('omxplayer --no-osd -o local IntruderAlert.mp3 ')
        print
else:
        print
        print "SCANNING AREA for signs of MOTION"
        print
        sleep(1);


##################################################################


def restart_program():
    """Restarts the current program.
    Note: this function does not return. Any cleanup action (like
    saving data) must be done before calling this function."""
    python = sys.executable
    os.execl(python, python, * sys.argv)


if __name__ == "__main__":
    answer = "y" #raw_input("Do you want to restart this program ? ")
    if answer.lower().strip() in "y yes".split():
        restart_program()

Previously:

Sci Fi Your Pi - Prince Dakkar's patent log taking chart compass

Sci Fi Your Pi - Prince Dakkar's patent log taking chart compass - Functional Design

Sci Fi Your Pi - Prince Dakkar's patent log taking chart compass - Route selection and indication

Sci Fi Your Pi - Prince Dakkar's patent log taking chart compass - Direction of Travel Indicator 1

Sci Fi your Pi - Prince Dakkar's patent log taking chart compass - Current Position

Sci Fi your Pi - Prince Dakkar's patent log taking chart compass - GPS Test

Sci Fi your Pi - Prince Dakkar's patent log taking chart compass - Direction of Travel Indicator 2

 

So it is now the end of the available time for the current challenge. I guess I should expect to be limited by time when I missed a couple of months of project time doing other things.

 

Like all good Sci Fi series, the installments have been limited in number and the viewer is left wanting more. Having stopped on a bit of a cliffhanger, I do intend to resolve the threads left open in this series and will return in the next season with more. (Should I be pushing a box set to sell to keep the fans hooked until next time?)

 

I have enjoyed the challenge of trying to put together the project and I intend to complete this at some point (unfortunately this is not likely to be before the deadline in about 15 minutes). It is unlikely to be of much real use as a navigation tool (the position is likely to be rather approximate in nature - middle-of-the-UK type information, rather than the postcode accuracy of satnav devices) and it would be rather larger than most people would want to carry around. However, I think that within the design parameters of the device it would work well. The adventurer would normally be travelling in some form of large (obviously steam-powered) vehicle, so the size and weight would not be an issue. The accuracy would also not be a problem, as the locations are always fairly unspecific in nature and a desire for such accuracy would be decidedly ungentlemanly.


so in true Sci Fi fashion......................

 

To Be Continued.jpg

Previously:

Sci Fi Your Pi - Prince Dakkar's patent log taking chart compass

Sci Fi Your Pi - Prince Dakkar's patent log taking chart compass - Functional Design

Sci Fi Your Pi - Prince Dakkar's patent log taking chart compass - Route selection and indication

Sci Fi Your Pi - Prince Dakkar's patent log taking chart compass - Direction of Travel Indicator 1

Sci Fi your Pi - Prince Dakkar's patent log taking chart compass - Current Position

Sci Fi your Pi - Prince Dakkar's patent log taking chart compass - GPS Test

 

Direction of Travel Indicator 2

Having got the GPS working I could now get the GPS position for the current location and then compare that with the intended destination. The next challenge was to decide on some suitable destinations and to get the latitude and longitude for those places.

 

After a quick think I selected a short list of possible destinations to test the functionality (which would also act as start points for the route indication):

 

1. Sheffield

2. London

3. Perth

4. New York

5. St Petersburg

 

These destinations were chosen as they were visible on the map I had selected (and, being based near Sheffield, it would be easier to test using that as one of the destinations). They are also major cities that may feature in a steampunk adventure.

 

Using the Latitude Longitude Finder on Map Get Coordinates, I came up with the following list.

 

1. latitude: 53.381129, longitude: -1.470085

2. 51.507351, -0.127758

3. -31.953513, 115.857047

4. 40.712784, -74.005941

5. 59.920613, 30.322952

 

 

To test the device I have set up these as variables and compare the positions, printing N or S and E or W to indicate the direction. Once the device is put together these would set a GPIO pin high to light an LED under the appropriate arrow.

 

To decide which way that the adventurer needs to travel the device needs to calculate the difference between the current location and the destination.

 

latlong.png

 

To calculate the direction to travel, the latitude and longitude numbers of each location are compared. There are two ways to compare the numbers and indicate the direction to travel that I have considered.

 

The first is to simply take the destination figures away from the current location figures. If the result is a negative number then the destination is south (for latitude; west for longitude) of the current location, so the adventurer is shown the South arrow (West arrow for longitude) to move towards the destination. This gives a quick comparison and points the traveller towards the destination. The problem with this method is that if the destinations are at the edges of our flat map (with 0, 0 at the centre) then the direction indicated could mean moving all of the way around the world in the opposite direction to the one which would be the shortest way to get to the destination.


To correct this difficulty and travel in the shortest direction, a correction needs to be applied. To do this, the difference can be used to check whether the two positions are closer in the initial direction or by travelling in the opposite direction. This is actually simpler than it sounds, as we just need to check whether the difference is larger than 90 for latitude or larger than 180 for longitude; if so, the traveller would be best served travelling in the opposite direction to that indicated above.

 

So that would be something like this (having got the position, and in real Python rather than a late-night mix of Python and pseudo-Python):

 

clat = current_latitude        # current position from the GPS
clong = current_longitude
dlat = destination_latitude    # destination chosen from the list above
dlong = destination_longitude

difflat = clat - dlat
difflong = clong - dlong

# for latitude
if difflat > 90 or difflat < -90:
    # going the "long way round" would be shorter
    if difflat > 0:
        print "N"
    else:
        print "S"
else:
    if difflat > 0:
        print "S"      # current position is north of the destination
    else:
        print "N"

# for longitude
if difflong > 180 or difflong < -180:
    # going the "long way round" would be shorter
    if difflong > 0:
        print "E"
    else:
        print "W"
else:
    if difflong > 0:
        print "W"      # current position is east of the destination
    else:
        print "E"


This would then go in a loop, tested every few seconds, with the print statements replaced by GPIO pins being set high.


All sorted then? Well, not quite. This would throw up issues if the destination was due south (or east, west or north) of the current position, as the difference would be zero. (Although, with the number of decimal places given by the GPS module, you would have to be pretty accurately in line with the destination, the problem still remains.) So a check to see whether difflong or difflat is zero would be added before each section, with no light in that section lit.


I also mentioned previously the possibility of adding the ordinals if there were sufficient pins left to do so. This would be a relatively easy task of adding a couple of variables and then setting those with the move directions. If both latitude and longitude required movement, the appropriate ordinal could be lit up, and only when there was no difference in latitude or longitude would one of the cardinal points be lit (see the sketch below).
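As a rough sketch of that ordinal idea (the pin numbers and helper names here are mine for illustration, not the project's), the two comparisons could be combined and used to drive the LEDs like this:

import RPi.GPIO as GPIO

# assumed BCM pins for the eight arrows
ARROW_PINS = {"N": 5, "NE": 6, "E": 13, "SE": 19,
              "S": 26, "SW": 16, "W": 20, "NW": 21}

def arrow(ns, ew):
    # ns is "N", "S" or ""; ew is "E", "W" or "" - combined they name the arrow
    return (ns + ew) or None

def light(direction):
    GPIO.setmode(GPIO.BCM)
    for name, pin in ARROW_PINS.items():
        GPIO.setup(pin, GPIO.OUT)
        GPIO.output(pin, name == direction)   # only the chosen arrow is lit

# example: movement needed to the north and to the west -> light the NW ordinal
light(arrow("N", "W"))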

 

The next job is to tidy up the above and get it running on the RPi as a test (which might happen this evening, but more likely over the weekend). Once the basic comparison is working I will then look at adding in the code to pick up the current and destination locations. With this part of the device working it could then function as a very basic navigation system. Having looked at blogs about the subject on here, the idea occurred to me that I could use this part of the device as a way of locating geocaches, if those locations were used instead of the major cities for the theatrical steampunk navigation device. Once in its big case it may be a little impractical, but it adds an everyday use to what is going to be a bit of a novelty.

Previous Posts Here:

http://www.element14.com/community/community/design-challenges/sci-fi-your-pi/blog/authors/screamingtiger?ICID=DCH-SciFiPi-challengers

 

 

Well, it's time to post my source code. Everything is written in C++ across three platforms: the Pi 2, ChipKit Pi, and Arduino. I used some advanced features such as I2C, buffered serial communication, and PThreads.

 

I uploaded everything to GitHub and still have an update to do to get the latest major version in, as well as to clean out some junk. The GitHub repository is located at

https://github.com/screamingtiger/QuadCOP

 

Here is the manifest. All code is completely custom except TinyGPS++ and the Adafruit library for the OLED. However, I have made wrappers around this code.

Files | Description | Dependencies | System | Notes
autocontrol.cpp | Main control code for the Raspberry Pi Flight System (RPFS). Gets sensor information and responds to events. Navigates and records waypoints. | All other code for Raspberry Pi | Raspberry Pi |
gps.cpp, gps.h | Multithreaded GPS parser and tracker | TinyGPS++ ported to Pi, custom serial ISR, PThreads | Raspberry Pi | Serial ISR added to TinyGPS++.cpp. Runs in background parallel to autocontrol.cpp
heading.cpp, heading.h | 5883L magnetometer heading object | WiringPi | Raspberry Pi | See http://wiringpi.com/
i2c.cpp, i2c.h | Block send algorithm for sending complex I2C data | WiringPi | Raspberry Pi |
TinyGPS++.cpp, TinyGPS++.h | GPS NMEA parser | Custom defines that are included with Arduino | Raspberry Pi | Took out millis() calls. See https://github.com/mikalhart/TinyGPSPlus/releases
screen.cpp, screen.h | OLED screen control object | WiringPi, Adafruit_SSD1306 drivers | Raspberry Pi | See https://github.com/adafruit/Adafruit_SSD1306
cam.py | Camera control via GPIO | Python | Raspberry Pi |
sensorarray.cpp | Main code that reads sensors and sends data via I2C | Arduino Wire library | Arduino |
controlswitch_pic32.pde | Main control switch code for reading and sending PWM signals for flight control | ChipKit Wire and SoftPWM libraries | ChipKit Pi / PIC32 | Also controls rotating head

 

 

Flow Diagrams

 

Software Interconnection Diagram: Software.jpeg
autocontrol.cpp (RPFS): RPFS.jpeg
controlswitch_pic32 (ControlSwitch): ControlSwitch.jpeg
sensorarray.cpp (Sensor Array): Sensor Array.jpeg
I2C Block Protocol: I2C Block Protocol.jpeg

Previously:

Sci Fi Your Pi - Prince Dakkar's patent log taking chart compass

Sci Fi Your Pi - Prince Dakkar's patent log taking chart compass - Functional Design

Sci Fi Your Pi - Prince Dakkar's patent log taking chart compass - Route selection and indication

Sci Fi Your Pi - Prince Dakkar's patent log taking chart compass - Direction of Travel Indicator 1

Sci Fi your Pi - Prince Dakkar's patent log taking chart compass - Current Position

 

GPS Test

The lack of a Microstack baseboard appeared to be a bit of a stumbling block if it could not be overcome. So that was where my attention was next directed.

 

As I mentioned previously, I had found some blog comments suggesting the GPS module could be used without the baseboard by connecting directly to the RPi GPIO pins. So I pulled out a breadboard and some wires to give it a go. The table included previously showed I needed to connect pins 8 and 10 to the MTXSRX and MRXSTX pins on the module, along with hooking up power and ground connections.

microstackgps.jpg

 

The blog comments suggested this was all I needed to get it running, so I powered up the Pi and installed the software. This did not work, so I tried a couple more times doing the same thing and, surprisingly, nothing changed and it still didn't work. So I went off to search for the instructions - the Microstack node documentation.

 

I went through and installed the Microstack node software and reinstalled gpsd. I tried again but still nothing worked.

 

I added some more connections to the GPS module and tried again, this time adding connections from pin 0 on the GPS module to pin 7 on the RPi, and pin 1 on the GPS module to pin 12 on the Raspberry Pi. This again drew a blank and I decided to give up.

 

 

 

 

 

BUT............then, whilst starting to write a blog entry about the GPS unit defeating me, I saw a red flashing light out of the corner of my eye. So I went back to the RPi and ran the test script again and............success!!!!!

 

photo.JPG

 

 

The longitude and latitude were printing on the screen. It appears that I was being rather impatient and did not wait for the GPS module to get going before trying to get information from it.


So the device now knows where it is; I just need to get it to tell the adventurer how to get to where he/she wants to be next.

Previous Posts Here:

http://www.element14.com/community/community/design-challenges/sci-fi-your-pi/blog/authors/screamingtiger?ICID=DCH-SciFiPi-challengers

 

 

I was very nervous.  I turned the loiter on for just a second and it held then started dropping, the ground sensor caught it.

 

Because my controls are being passed through the ChipKit Pi, there is some resolution lost, so it's a bit touchy for a hover.

 

A few messy pre flight pics, and the video.  I am going out to get a better flight and test some more of the functionality.  Today is a busy day.

 

Messy but secure.  Covered up by the top head so who cares!

 

 

 

flight2.png

 

IMG_0296.JPG

 

 

Ping sensors.  Only 2 of 4 installed.

IMG_0298.JPG

 

I need a haircut.  And to comb my hair.  And a bath.  This project has taken control of my life!

 

I have to go fly more, no time to waste!  The last few seconds of the vid is the loiter, I yelled STOP because I thought it would crash into the ground.  It got very close but the triggers worked and it detected ground!  WHEW

 

BTW the head is moving by itself as intended; I need to fix up the movement algorithm, it's pretty plain.

 

 

Edit: A couple of notes on how my version of loiter works. Typically loiter attempts to maintain heading and a specific GPS coordinate, all within certain thresholds.

 

If a quad needs to move backwards, it can move backwards. But not my quad (yet). It will do a 180-degree turn and move forwards. Currently, all special corrections are done with forward movements. I gave my quad an advantage during testing in that I put the nose into the wind, meaning it may not have to turn around.

 

So it's like a game of Asteroids where, as the ship goes through the center of the pull, it does a 180 to face it. The nose always points into the center of the field.

 

This is just to make testing easier. Just like the heading won't do a 350-degree left rotation instead of a 10-degree right rotation, the same efficiency algorithm can be used to move without changing heading at all.
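As a side note for readers, that "efficient rotation" idea comes down to normalising the heading error into the -180 to 180 range. The QuadCOP itself is coded in C++ (see the source code post), but a tiny Python sketch of the calculation, with names chosen here purely for illustration, looks like this:

def shortest_turn(current_heading, target_heading):
    # return the signed turn in degrees, always within -180..180, so a
    # 350 degree left rotation becomes a 10 degree right rotation instead
    error = (target_heading - current_heading) % 360.0
    if error > 180.0:
        error -= 360.0
    return error

print shortest_turn(350, 0)    # 10.0  (turn 10 degrees right, not 350 left)
print shortest_turn(20, 200)   # 180.0 (turn around, then fly forwards)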

Hello all

 

 

This is probably the last post I write before the end of "Sci Fi Your Pi" challenge and I want to take this occasion to thank Element14, sponsors and everyone involved in launching and managing this challenge and also for the opportunity to be part of it.

 

 

Although I did not manage to take this project where I wanted, in this post I'll show you the current status of the Cybernetic Computer Interface project.

For those of you not knowing what Cybernetic Computer Interface project is, please check the previous posts from here:

- Cybernetic Computer Interface

- Cybernetic Computer Interface - First step

- Cybernetic Computer Interface - Audio interface settings

- Cybernetic Computer Interface - Hardware description and functional demo

 

In the last post, you could see and hear Cybernetic Computer Interface in action operating inside a box. But Cybernetic Computer Interface is supposed to be a wearable device so let's make it so.

 

At this stage of construction, my initial ideas about how the mechanical structure should be made proved wrong and involved a lot of back-and-forth testing and redesign.

Because of the weight, in the end I had to add additional structural elements to make the device wearable.

One of the biggest problems was fitting everything on the quite small frame.

 

The frame is built from aluminium strip, which is light, flexible, strong and easy to work with. I need it to be flexible to allow the whole structure to squeeze against the head and sit as tight as possible.

 

20150828_155125.jpg

All boards are mounted on this frame with zip ties and wires, in order to allow them to move without creating mechanical tension in the boards and to avoid cracking them.

 

The hardest part was making the moving arm for the Control and Display module. This had to be mounted on the frame, be extensible to allow easy reading of the display, and be strong enough not to bend or break under the weight of the CAD module (ask me how I know).

The actual version of the arm is made from a hacked selfie stick.

 

20150828_155216.jpg

Also, all power and control lines from the RPi to the CAD had to be long and flexible enough to allow the mechanical parts to move. After trying MANY types of cables, I ended up using cables recovered from dead computer mice, which proved the best choice so far.

 

20150828_155406.jpg

On the right side of the CCI, I added two small voltage indicators, one for each LiPo cell. These have their own switch so they can be checked independently of the rest of the system.

 

Since the last post, the software has also been improved with SD card and USB drive support, but voice recognition support is still not present.

 

Because of the large (and unforeseen) quantity of time spent on the mechanical design and building, I did not manage to build the outer shell and mount everything as I had hoped for the final presentation, so this is the current state: working but not finished.

I hope to finish building the outer shell in the coming weeks and show you the completed device.

 

Here you can see different views of the device and a short demo movie.

20150828_155732.jpg

 

20150828_155102.jpg20150828_155136.jpg

Aaaand my assistant beta testing the product

20150828_162338.jpg20150828_162400.jpg20150828_162406.jpg

 

In the coming weeks I'll build the outer shell and post the final version here with all planned features.

Thank you all for following the Cybernetic Computer Interface build thread; I hope you found it an interesting and informative journey.


All the best

-=Seba=-

balearicdynamics

Meditech: Thanks

Posted by balearicdynamics Top Member Aug 28, 2015

Beyond the traditional (a bit rhetorical, yeah?) thanks, it is worth spending some words on the opportunity this challenge has represented for this project.

The Meditech idea was just an idea. I was sure it was possible, but a starting point was necessary, a sort of cooperation with someone. This is what I found here. First of all, thanks go to the entire Element14 organisation, which trusted the first proof-of-concept submission. Then my personal thanks go to all those members who gave any kind of support and helpful hints; it is impossible to mention them all here, but they know perfectly well what I mean.

 

So Meditech got the right boost, and today the first phase - formerly phase zero, codename tricorder - has been completed: the idea is now a project moving on its way. The opportunity to invest little money and to interact with many different points of view and very different skills while the first prototype was growing from scratch has dramatically simplified the first - and most difficult - step. So has the Element14 support, with the kit that made available all the needed hardware and more.

 

Now the project has gained its own future. Today it is certain that the next phase 1 will be completed, hopefully following the expected timeline, and the pre-production of the first 10 units is a promising option (expected to be available for delivery in the first two months of 2016).

Thanks to the challenge and Element14, today Meditech is a credible idea, and there is growing media attention for what happens in the coming months. Personally I will continue to blog the thread with constant updates on the Element14 community, as the primary reference point for the Meditech development project.

 

That's all. Enrico

 

Meditech-1024.jpg

I breathed new life into the Flight Controller. I am working frantically to get it ready for flight. Expect a few last-minute posts, and I mean down to the last hour! STAY TUNED.........

 

20150827_201521.jpg

dmrobotix

PizzaPi: The Last Day

Posted by dmrobotix Aug 27, 2015

Hello, everyone!

 

This has been a really fun challenge. I've never entered a design challenge contest before but after doing this one, I would not hesitate to enter another one! It's come down to the wire for me and I have to say that while I am a little disappointed that I did not get the final prototype together to demonstrate, I am excited about what I have accomplished thus far. I do plan to continue working on PizzaPi and completing the prototype. I want to develop it to work in different business settings, not just pizza.

 

With that out of the way, let me show/tell you what I DID manage to get done (the picture below shows my work station during the last two weeks).

 

PizzaPi assembly

 

Hardware. I want to remind you that early on in the competition I spent time working on hardware communication. This was a real challenge and I ran into a lot of roadblocks, especially when it came to getting mosquitto (the MQTT broker) working. In the end, I persevered and got all three Raspberry Pis talking to each other and sending sensor information back and forth. I was hoping to make a final video of this, but the GPS module was unable to pick up a satellite signal. I've noticed this can be a problem at times and I think it could be remedied with an antenna attached to the device. It came online 4 minutes before the deadline! The video is below!
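The post doesn't include the messaging code itself, but as a minimal sketch of the kind of exchange involved (using the paho-mqtt client against a mosquitto broker; the broker address, topic name and payload layout below are assumptions, not the project's actual values):

#!/usr/bin/env python
# Hypothetical sketch: one Pi publishing a GPS fix through the mosquitto
# broker, the way the three PizzaPi boards exchange sensor data.
import json
import time
import paho.mqtt.client as mqtt

client = mqtt.Client()
client.connect("192.168.1.10", 1883)   # Pi running the mosquitto broker
client.loop_start()                    # handle network traffic in the background

while True:
    fix = {"lat": 0.0, "lon": 0.0, "ts": time.time()}   # placeholder GPS fix
    client.publish("pizzapi/driver/gps", json.dumps(fix))
    time.sleep(5)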

 

 

Software. The software consists of a web interface in which all the hardware and database entries can be viewed. There are three components to this: 1) an administrative interface, 2) a customer interface and 3) a driver iOS app. Currently, all of these are pulling information from the database and displaying it. They all incorporate a map object so that GPS coordinates can be displayed and routes can be determined. Below are some videos demonstrating this. The iOS app was a really big challenge for me, as I had never written a phone app in my life and had never used Swift or Objective-C (I coded the app in Swift).

 

 

 

 

 

Conclusion. Despite not finishing the prototype, I had a blast doing this. I had planned enough time to finish the project but I had to extend my internship at the lab to finish work there, so that took away four weeks that I had dedicated to this project! I want to thank everyone at element14 for sponsoring my design concept and for sending me all those amazing parts. I hope, despite not finishing on time, that they will consider me for a future design challenge! Good luck to everyone and I hope the winner enjoys that really cool Boba Fett helmet.

 

Thanks everyone!

Thank you element14, judges, and all the contestants and members that have made the past 4 months exciting for me and others. Building the Picorder has been very enlightening, a wonderful project and endeavor. So many skills are needed to produce projects of the kind that I've seen here in the community. Many of them learned and some that seem to come naturally to others.


Awesome is the single word I can use to describe it! I'm uploading the final documents now and video of the Picorder in action. There are a few attachments so I apologize. As indicated in my video, last minute tests caused some undesirable results. My display whited out and I had to reformat and load the operating system and scripts and files onto the new card. Still not sure what happened. I felt like I was cramming for finals! In a way, I guess I was!


As contest time comes to an end; it's been great. I hope to continue and do more in the coming months.

 

Michael

 

Introduction

 

This is it, the end of the Sci Fi Your Pi Design Challenge. It's been a long journey, with lots of learning, building, sharing and blogging. I hope you have enjoyed tagging along as I made progress on project PiDesk!

 

This post is the final summary of the project.

 

Blog posts

 

Over the course of the challenge, a lot of content was created. I experimented with a new blogging approach with which I separated project updates from guides. The guides were kept as generic as possible, so that readers would not be required to know about the challenge to understand the content. With the challenge over, I'll be renaming the guides and moving them to the appropriate sections of the website, hopefully making them easier to find for others. They will however remain linked in the project updates, so no information is lost in the process.

 

These are the blog posts created during the challenge.

 

Project Updates

 

 

Guides

 

 

Project Summary

 

If this is the first time you stumble upon a PiDesk blog post, have a look at the blog posts above. To get a quick idea of what the project is about, have a look at the pictures below.

 

The collages represent different parts of the project in different stages. The final result is unveiled at the end of this post. To summarise though, the goal was to create a futuristic desk. I did so by integrating things such as LEDs, a wireless charger and capacitive touch controls inside the desk. There is even a computer that pops out of the desk when the correct button is touched!

 

summary-desk_build.pngsummary-stepper_motors.png

summary-magic_lamp.pngsummary-capacitive_touch.png

 

Components

 

A lot of different components and technologies are used in this project: stepper motors, addressable LEDs, capacitive touch, wireless charging, etc ...

 

Description | Quantity | Used in
Raspberry Pi B+ | 1 | Desk controls
WiPi USB Dongle | 2 | Desk controls / Desktop computer
No brand USB Sound Card | 1 | Desk controls
Adafruit Mono 2.5W Class D Amplifier (PAM8302) | 1 | Desk controls
Gertbot | 1 | Desk controls
NEMA 17 Stepper Motor | 2 | Desk controls
WS2812 LED Strip | 2 meters | Desk controls
AT42QT1070 Capacitive Touch IC | 1 | Desk controls
Micro Switch ON/OFF | 2 | Desk controls
Mini Speaker 8ohm | 1 | Desk controls
Raspberry Pi 2 B | 1 | Desktop computer
Adafruit Stereo 2.8W Class D Amplifier (TS2012) | 1 | Desktop computer
Recuperated Laptop LCD Display | 1 | Desktop computer
LCD Display Controller | 1 | Desktop computer
Speaker 8ohm | 2 | Desktop computer
Qi Wireless Charger | 1 | Magic Lamp
Qi Wireless Receiver | 1 | Magic Lamp
Adafruit NeoPixel Ring WS2812 (12) | 1 | Magic Lamp
Adafruit Trinket 5V | 1 | Magic Lamp
12V to 5V DC-DC Converter | 2 | Desk controls / Desktop computer
12V Power Supply | 1 | Desk controls / Desktop computer / Magic Lamp

 

I made these images to illustrate how everything fits together and interacts:

 

Slide1.pngScreen Shot 2015-08-25 at 21.49.14.png

 

From left to right, top to bottom, we have:

  • Raspberry Pi B+: This is the heart of project PiDesk, as it is in charge of controlling all the different components involved in the project. I originally opted for the Raspberry Pi A+, but was forced to move to the B+ because the workaround needed to get both audio and NeoPixels working required a USB sound card.
  • Dual channel relay board: The relay board is controlled by the Raspberry Pi GPIO pins. It makes it possible to turn a 12V power supply ON or OFF, powering the laptop display, and a 5V power supply for the Raspberry Pi 2 desktop computer. A minimal sketch of the touch-to-relay control flow follows this list.
  • Capacitive touch IC (AT42QT1070): The custom breakout board is used to convert Raspberry Pi GPIO pins into capacitive touch input sensors. The touch sensors have been created using copper tape and conductive paint.
  • LED strip (WS2812): The LED strip is controlled by the Raspberry Pi and has been built into the desk's surface. It displays animations depending on the ongoing action.
  • Gertbot: The Gertbot has two functions: raising and lowering the screen assembly by controlling two stepper motors, and knowing when to stop using end stops.
  • Wifi dongle (WiPi): WiFi connectivity, mainly used during programming and testing.
  • Soundcard with amplifier (PAM8302): Plays sound effects depending on the action to be executed.
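As a minimal sketch of that touch-to-relay control flow (this is not the PiDesk code, which is linked further down on GitHub; the BCM pin numbers are assumptions):

#!/usr/bin/env python
# Illustrative sketch: when the capacitive touch input goes active, toggle
# the relay channel that powers the desktop Pi 2 and its screen.
import time
import RPi.GPIO as GPIO

TOUCH_PIN = 23   # output of the AT42QT1070 breakout, read as a digital input
RELAY_PIN = 24   # drives one channel of the dual relay board

GPIO.setmode(GPIO.BCM)
GPIO.setup(TOUCH_PIN, GPIO.IN)
GPIO.setup(RELAY_PIN, GPIO.OUT, initial=GPIO.LOW)

desktop_on = False
try:
    while True:
        if GPIO.input(TOUCH_PIN):          # button touched
            desktop_on = not desktop_on    # flip the desktop power state
            GPIO.output(RELAY_PIN, desktop_on)
            time.sleep(0.5)                # crude debounce while the finger lifts
        time.sleep(0.05)
finally:
    GPIO.cleanup()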

 

Slide2.pngSlide4.png

The pictures above represent on one side, the Pi 2 desktop computer, on the other, the magic lamp. As you can see, these items are very straightforward compared to the desk controls and require very little explanation.

 

For the desktop computer, a Pi 2 is used in combination with a recuperated laptop screen for which a controller board was found. A combination of a stereo amplifier and speakers are used for sound.

As for the magic lamp, a Trinket microcontroller and NeoPixel ring are powered via a wireless receiver. The circuit is powered on when placed on top of the wireless charger.

 

Power distribution

 

Slide2.png

 

The main power supply is a beefy 12V one. It is used to power the stepper motors via the Gertbot and the LCD screen via the controller board. The other components of the project require 5V, which is provided by DC-DC converters. The 12V input to the desktop components (Raspberry Pi 2 & LCD controller) is interrupted by a relay which is controlled by the Raspberry Pi B+. Two channels have been foreseen, although only one is currently in use.

 

Code

 

All of the different components illustrated above require some code to work.

 

To get the NeoPixels and Gertbot to work, external Python libraries were used. Here are the links:

 

 

Two scripts are in charge of combining the different features and making everything work together. The full code is available on GitHub.

 

PiDesk Main Script:

 

PiDesk LED Animations Script:

 

Demo

 

Finally, the "moment supreme", the moment you (may) have been waiting for, the final result.

IMG_7234.jpgIMG_7236.jpg

 

Because it wouldn't be a real demo without an actual video, here it is. The first part is a montage of various stages of the build, followed by some demonstrations.

 

 

Thank you

 

I'd like to thank element14, the sponsors, the judges and anyone involved in this challenge for setting this up, providing the kits and allowing me to participate. I had fun, I hope you did too following this project and you like the end result.

 

To my fellow contestants and members: thank you for following the project, liking the posts and providing feedback along the way.

Previous Posts Here:

http://www.element14.com/community/community/design-challenges/sci-fi-your-pi/blog/authors/screamingtiger?ICID=DCH-SciFiPi-challengers

 

I am going to get into some technical details, which hopefully aren't boring for the audience of this contest. If they are for anybody, they are for the judges, as it is important to understand the amount of work I have put into this on a technical level.

 

I had mentioned in a previous post that the ChipKit Pi has some issues that make it different from an Atmel ATmega (Arduino).

The Wire library for Arduino allows you to send and receive multiple bytes of data between one start and stop sequence.

However, the ChipKit Pi (PIC32) has an issue doing this.

 

Because I am sending commands to the ControlSwitch from the RPFS via I2C, I need to keep each command intact. The commands consist of a control byte, a check byte, and other data up to 10 bytes. I worry about missing a byte or getting out of sequence. By using the Wire library I thought the built-in I2C protocol would suffice. Alas, I can only send 1 byte at a time. This means that if I send the ChipKit Pi 10 bytes, the OnReceive event handler will be called 10 times. So I have to keep track of what is going on in order to completely process a command. That is, there needs to be persistence between the event handler calls to keep track of the current status of a command, to ensure all information is received and valid before doing what the command tells it to do.

 

So I had to get creative and created a block algorithm for this.  I used the block algorithm for all I2c communications except for the Magnetometer.

 

A high-level general description is that a specific byte is sent that represents a block start. Another specific byte is then sent as a stop byte. The block algorithm does not allow repeated starts: once a block is started, it must be explicitly ended. If another start byte is sent before a stop byte, the start byte is counted as data in the block.

 

 

The downside to this is that the stop byte can never be used as data. That is 1 value out of 256, so I decided that is OK. Actually, there is another byte that cannot be used either: the reset byte.

 

This is sent to reset the block status and let the receiver know a brand new block is coming in.  If the receiver is in the middle of receiving a block, that block is reset.

 

The resetbyte cannot be used as data either.  So 2 values out of 256 cannot be used.

 

So a block of data looks like:

[StartByte] [data1..10] [StopByte]

 

Up to 10 bytes can be sent at a time. The protocol is set up so that only one complete block can be held at a time. That is, if the receiver has not processed the completed block, a new block cannot be sent. To ensure this does not create a lock, the sender sends a reset byte every time before sending a new block.

 

A quick flow chart of the block algorithm.

 

I2C Block Protocol.jpeg

 

 

I apologize, the code display is not working well for me and stripping out my indentations.  I will work on fixing it later.

 

Here is a snapshot of the code that sends a block of data. This code is running on the RPFS (Raspberry Pi)

//Block Definitions
#define BLOCKSTART 204
#define BLOCKSTOP  190
#define BLOCKRESET 195

int SendBlock(int address, int *data, int count)
{
    bool error = false;

    //Send Block Reset
    if(wiringPiI2CWrite(fd, BLOCKRESET) == -1)
        error = true;

    //Send Block Start
    if(wiringPiI2CWrite(fd, BLOCKSTART) == -1)
        error = true;

    //Send the address
    if(wiringPiI2CWrite(fd, address) == -1)
        error = true;

    //Send the data bytes
    for(int i = 0; i < count && !error; i++)
        if(wiringPiI2CWrite(fd, data[i]) == -1)
            error = true;

    //Send Block Stop
    if(wiringPiI2CWrite(fd, BLOCKSTOP) == -1)
        error = true;

    if(error)
        return -1;
    else
        return 0;
}


Here is an example of how to receive a block, one byte at a time.

 

 

//Block setup variables
bool blockStarted = false;
int block[10];
int blockCounter = 0;
bool blockCompleted = false;
bool blockBlock = false;
int blockSkipped = 0;

//For ChipKit Pi, numBytes = 1 always
void I2CReceiveEventBlock(int numBytes)
{
    //Every 2 bytes are our data pairs, writes come in groups of 3
    unsigned char cb, cbc, reg;
    cb = Wire.receive();

    //If blockBlock is set we cannot get a new block until this one is processed
    if(!blockBlock)
    {
        //Block Start
        if(cb == 204)
        {
            blockStarted = true;
            blockCounter = 0;
            return;
        }
        //Block End
        if(cb == 190)
        {
            if(blockStarted)
            {
                blockStarted = false;
                blockCompleted = true;
                blockBlock = true;
                return;
            }
        }
        //Block Reset
        if(cb == 195)
        {
            blockStarted = false;
            blockCompleted = false;
            blockCounter = 0;
        }
        //Data
        if(blockStarted)
        {
            block[blockCounter++] = cb;
            blockCounter %= 10;
        }
    }
    else
        blockSkipped++;
}

 

When a block is completed, you can see that the variable blockCompleted is set to true.

In  the main loop of the code, you check this variable routinely and if it toggles to true, then you know you have a new block of data to process.

 

 

The ProcessBlock function in the full code takes the data portion of the block and parses it out. The first byte of the data portion is usually the "register" I wish to read or write to. The important thing is that after a block is processed, the variable blockBlock is set back to false so the event handler can resume getting a new block. You can see above that while this variable is set to true, the protocol simply skips incoming bytes.

 

The files I2C.cpp and I2C.h contain the sending protocol.  The receiving protocol is part of the ControlSwitch and is embedded in that code.  It is also embedded into the Arduino code.

 

ToDo:  Make this a library that can be used for both the ChipKit Pi, and the Arduino.

YellowPrinter.jpg

This is a part that has not yet been considered in the software development discussions.

 

Introduction

As mentioned in a previous post, one of the Meditech peripherals is a small Bluetooth 55mm thermal printer, which covers a fundamental role, especially in cases of first aid and urgent interventions in the field.

The procedure that immediately follows the first-aid operations carried out with the help of the Meditech diagnostic probes is moving the patient to an organised structure for hospitalisation or more adequate treatment.

 

As all the Meditech-operated interventions are monitored and stored in a database, with every record tagged with the absolute GPS position and a precise, synchronised date and time, it is possible to generate a short yet complete printout describing the patient's health status, which will follow him during transport.

 

The first and most important use of this information is to give a summarised set of data in a human-readable form (fast and reliable) to the doctors and the specialised team that will take care of the patient.

The secondary, but no less important, reason is to keep a documented paper trail of the procedure followed by the first-aid personnel and the adopted strategy: whether the patient has been moved, where he was recovered, and so on. In any case this trail represents a testimonial document (which can be further integrated with the more complete diagnostic information stored in the Meditech database) following the patient along his path.

 

There is also another important factor to consider: the availability of a printer can be extremely helpful to produce, at any moment, specific diagnostic data (e.g. the ECG track, the statistical results of long-period monitoring, etc.). Having this information on traditional paper support can be a key factor in certain conditions.

 

Printing more than text-only data

The Meditech Bluetooth printer works with the widely used ESC protocol, or Epson Escape, adopted for years by almost all thermal roll printers (also called receipt printers). This protocol was created by Epson around the end of the 1980s and is still one of the most reliable methods used by thermal printers. For more details on how the ESC protocol works, see the following link on Wikipedia. The full (and somewhat redundant) protocol specification from Epson is detailed in the attached document in PDF format.

While the protocol itself became a market winner because of its simplicity, using it is not as simple as it seems; as a matter of fact, every control command must be sent to the printer in the form of an escape sequence, resulting in complex and difficult-to-debug code. Another issue when approaching this printing protocol directly is its rigidity: every kind of string must be managed properly, taking into account that not all printers - even those supporting the same protocol - are the same, and some control codes that work fine on one device can produce unexpected effects on another.

 

Making a protocol parser

To solve the problem once and for all, a protocol parser has been implemented, where every command has been converted into a simple function call according to the following rules:

 

  1. Commands never generate printing mistakes or errors: if the required parameters are incomplete and it is not possible to apply a default value, the command has no effect on the printout
  2. Commands should never send wrong data to the printer
  3. Commands should be callable in-line
  4. Commands should always return the expected value or an empty string (never NULL)
  5. Every command function should make a complete consistency check to avoid wrong escape sequences being sent to the printer.

 

The resulting printing mechanism is dramatically simplified, enabling the program to work with strings to which the control codes are simply appended. So, for example, to make text bold there is the boolean call Bold( [ [true], false] ), returning the correct sequence to enable or disable the bold character: the string

 

"This is a BOLD test"

 

can be done as

 

"This is a " + protocolClassInstance.Bold(true) + "BOLD" + protocolClassInstance.Bold(false) +" test";

 

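To make the pattern concrete in a few lines, here is a tiny Python sketch of the same idea; it is only an illustration (ESC E n is the usual ESC/POS emphasized-mode sequence), not the actual Meditech parser, which is the C++ class shown below:

ESC = "\x1b"

def bold(enabled=True):
    # ESC E n turns emphasized (bold) mode on or off on ESC/POS printers;
    # following rule 1 above, an invalid argument yields an empty string
    if enabled not in (True, False):
        return ""
    return ESC + "E" + ("\x01" if enabled else "\x00")

line = "This is a " + bold(True) + "BOLD" + bold(False) + " test"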
The same method will be applied to all. The following scriptlet shows the protocol class ESC_Protocol with all the available API; the full code will be posted in the GitHub repository.

 

class ESC_Protocol {

  public:
  int charSet;
  bool underline;
  bool bold;
  bool strong;
  bool reverse;
  bool bigFont;
  bool doublePrintHeight;
  bool doublePrintWidth;
  bool bitmapHighDensity;
  bool printHRI;
  int printHRIStyle;

  ESC_Protocol(void);
  ESC_Protocol(bool, bool, bool, bool);

  char* ResetPrinter();
  char* Bold(bool);
  char* CustomEsc(int[], int);
  char* Underline(bool);
  char* Underline(int);
  char* Reverse(bool);
  char* PrintAndFeedLines(int);
  char* EndParagraphLines(int);
  char* PrintAndFeedDots(int);
  char* CharTypeFace(int);
  char* HriTypeFace(int);
  char* CharBoundary(int, int);
  char* CharAttributes(int, bool, bool, bool, bool);
  char* AbsolutePosition(int);
  char* PrintingAreaWidth(int);
  char* CharacterScale(int, int);
  char* SelectPaperSensorStop(bool, bool);
  char* SetPanelButtons(bool);
  char* HorizontalTab();
  char* RelativePosition(int, bool);
  char* pagemodeAbsolutePrintPosition(int);
  char* pagemodeRelativePrintPosition(int, bool);
  char* DefaultLineSpacing();
  char* pagemodeFormFeed();
  char* pagemodePrintPage();
  char* NewLine();
  char* LineSpacing(int);
  char* RightCharSpacing(int);
  char* CharSpacing(int);
  char* HriPrintingPosition(int);
  char* BarcodeHeight(int);
  char* BarcodeWidth(int);
  char* Barcode(int, char*);
  void setDefaultSettings();
  void setBitmapDensity(int);
  void setCharAttributes(bool, bool, bool, bool, bool);
  void setCharTypeFace(int);
  void setCharBoundary(int, int);
  void setDotSpacing(int);
  void setLineSpacing(int);
  void setCharSpacing(int);
  char* getBoundary();
  char* getPrintableString(char*);
  char* getBitmapHeader(int, int, int);
  char* UserCharacterSet(bool);
  char* SetHorizontalTabs(int[]);
  char* DoubleStrike(bool[]);
  char* pagemodeSetPageMode();
  char* InternationalCharacterSet(int);
  char* pagemodeStandardMode();
  char* pagemodePrintDirection(int);
  char* pagemodePrintingArea(int, int, int, int);
  char* Rotate90(bool);
  char* SetMotionUnits(int, int);
  char* Justify(int);
  char* OpenCashDrawer(int);
  char* OpenCashDrawer(int, int, int);
  char* CharacterCode(int);
  char* CutPaper(int);
  char* UpsideDown(bool);
  char* LeftMargin(int);
  char* KanjiPrintMode(bool, bool, bool);
  char* SelectKanji();
  char* CancelKanji();
  char* KanjiUnderline(int);
  char* KanjiCharacterSpacing(int, int);
  char* KanjiQuadMode(bool);
  char* stringForPrinter(int[], int);
  char* pagemodeCancelPrintData();
  char* DoubleStrike(bool);
  char* Start();
  char* Max_Peak_Current_324(int);
  char* Max_Speed_324(int, int);
  char* Intensity_324(int);
  char* Status_324();
  char* Identity_324();
  char* Set_Serial_324(int);
  char* EOP_Opto_Type_324(int);
  char* EOP_Opto_Calib_324(int, int);
  char* EOP_Opto_Param_324();
  char* EOP_Opto_CurrLev_324();
  char* Save_User_Param_324();
  char* Factory_Default_324();
  char* Loading_Pause_324(int);
  char* Loading_Length_324(int, int);
  char* Loading_Speed_324(int, int);
  char* Historic_Heat_324(bool);
  char* Msk_App_324(int, int);
  char* Near_EOP_Presence_324();
  char* Near_EOP_Opto_Calib_324();
  char* Near_EOP_Status_324();
  char* Near_EOP_Opto_Curr_Lev_324();
  char* Internal_Font_324(int);
  char* Max_Columns_324(int);
  char* Text_Line_Rotate_324(bool);
  char* Paper_Forward_324(int);
  char* Paper_Backward_324(int);
  char* Graphic_Offset_324(int, int);
  char* Graphic_Print_324(int, int, int, char);
  char* Partial_Cut_324();
  char* Full_Cut_324();
  char* Barcode_Rotate_324(bool);
  char* Mark_Length_324(int);
  char* Tof_Position_324();
  char* Mark_To_Tof_Position_324(int, int);
  char* Opto_Head_Line_Len_324(int, int);
  char* Mark_To_Cut_Position_Len_324(int, int);
  char* Head_Dot_Line_ToCut_324(int, int);

};
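The class above only declares the API. As a rough illustration of how one of these command functions can honour the rules listed earlier (always return a printable value, never emit a malformed sequence), here is a minimal Python sketch; the ESC E and ESC - codes come from the standard ESC/POS command set, while the class name and structure are illustrative and not the actual Meditech implementation.

ESC = b'\x1b'

class EscSketch:
    def Bold(self, enable=True):
        # Rule 4: always return a printable byte string, never None.
        return ESC + b'E' + (b'\x01' if enable else b'\x00')

    def Underline(self, mode=1):
        # Rules 1 and 5: consistency check, out-of-range values have no effect.
        if mode not in (0, 1, 2):
            return b''
        return ESC + b'-' + bytes([mode])

p = EscSketch()
line = b"This is a " + p.Bold(True) + b"BOLD" + p.Bold(False) + b" test\n"
# 'line' can now be sent as-is to the printer's Bluetooth serial port.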


Previously:

Sci Fi Your Pi - Prince Dakkar's patent log taking chart compass

Sci Fi Your Pi - Prince Dakkar's patent log taking chart compass - Functional Design

Sci Fi Your Pi - Prince Dakkar's patent log taking chart compass - Route selection and indication

Sci Fi Your Pi - Prince Dakkar's patent log taking chart compass - Direction of Travel Indicator 1

 

Current Position

The plan was to indicate position on a decorative map using the GPS position from the GPS Module, and then use that to move two long bar indicators to show the position on the map. This could also have some artistic flair, with the point where the bars cross decorated in some form (either an elaborate cross hair or a miniature steampunk-style vessel).

 

To do this the GPS Module was to be used with the baseboard. As I described in the previous entry, I failed to notice I don't have one, so that plan changed and I am now using the GPS module without the baseboard.

 

The Python library is used to obtain the position from the GPS module, and then the fun starts.

 

I was intending to get a couple of stepper motors to help with precisely positioning the pointer on the map. This has not happened, so I now intend to use two other motors I have lying around the house to drive the bars. The plan is to drive the motors using the Gertbot (Data sheet) and use some cogs and gears to make the bars travel slowly enough for drama and control.

 

I bought a lot of "steampunk cogs" from eBay, but this was unfortunately not what I thought: instead of functional cogs and gears I could use, it was a decorative assortment of bits for sticking on things. I am currently exploring the idea of 3D printing the required gears, or just leaving out that part of the design and programming the motors to run slower. This may mean less accurate positioning (but we are talking about a pointer on a map of around A3 size, so it is unlikely to be all that accurate anyway). This is probably a compromise appropriate to the device, as the adventurer only needs an approximate idea of his position or all the fun of the adventure would be taken away.

 

To move a bar along the map, the motors will drive a belt at one end of the device. I did think about using a fixed bar, but the space needed either side of the map meant that was not feasible, so a loop that moves back and forward using only one motor looked like the best plan.


Before I could really progress I needed to find a map that suited the theme so I could work out how to move the motors. I mentioned before the difficulties of working with maps on a flat surface, so I won't go into it again; I would either need to use a map that showed scale and shape accurately (and do more thinking and maths) or accept a simpler, less accurate projection. I also wanted the vintage-style feel appropriate to the theme of the device. As most of the steampunk genre appears to be either set in the Victorian era or have elements of Victoriana, I wanted to find a map from that era. I quite like the globe-style maps that have a circle for each hemisphere (but not the programming headache that would present), so in the end I searched for a flat map from around the Victorian era.


This is what I found:


empire.jpg

 

The map, although not quite as antique in feel as I may have wanted, combines a Victorian style with a nice square grid pattern that will make moving the position indicator to a map position slightly simpler.

 

Now I have the map sorted, I need to connect the motors up and experiment with speeds and times to move the pointers appropriately on the map. This will require finding or making an appropriate platform to sit this all on and making up the motor belts.
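Because the chosen map has a regular grid, translating a GPS fix into a pointer position can be reduced to a linear interpolation between the map's corner coordinates. A minimal Python sketch, assuming the map is treated as a simple equirectangular sheet; the corner values below are placeholders to be replaced with measurements from the printed map:

def gps_to_map_xy(lat, lon, map_w_mm, map_h_mm,
                  lon_left=-180.0, lon_right=180.0,
                  lat_top=85.0, lat_bottom=-85.0):
    # Linear mapping of a GPS fix onto a flat, evenly gridded map.
    x = (lon - lon_left) / (lon_right - lon_left) * map_w_mm
    y = (lat_top - lat) / (lat_top - lat_bottom) * map_h_mm
    return x, y

# Example: roughly where London (51.5N, 0.1W) lands on an A3-sized sheet (420 x 297 mm)
print(gps_to_map_xy(51.5, -0.1, 420, 297))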

As the glucose measurement probe to be used as a reference was not delivered in the expected time (it arrived just a couple of days ago), it has been moved to Meditech phase 1. It is still worth starting to explain the adopted principle and the kind of analysis involved.

The following image shows a general view of the reference device. On the market, if you are not diabetic, it costs almost 2/3 of the entire cost of a complete Meditech unit, so I had to spend some time finding a way to get a fully working device and accessories with a smaller investment (about $100).

IMG_20150825_221731.jpg

The analyser (the biggest device in the image above) calculates the glucose concentration in the blood (a small drop of blood is needed from a finger, but we can consider this non-invasive). The small needle is driven by a simple mechanical device that "shoots" the needle a few mm into the skin. The needles are single-use sterile parts that act as the "bullet" of the mechanical device.

 

Usage steps

Glucose measurement in diabetic patients has to be done frequently, several times a day; patients usually measure their own blood glucose value with a similar device when eating, to calculate the insulin quantity they should inject to compensate for their condition. The Meditech glucose probe will replace the measurement device (providing more complete information) and should adopt the same glucose measurement methodology. This makes it simple to use the needles and chemical reagents already available on the market and widely distributed worldwide; they can be found anywhere without difficulty.

 

The first step is producing a blood drop with the mechanical "shooting" needle, usually in a finger. Depending on the age of the patient and his body characteristics, the needle pressure can be regulated to minimise the pain (which is in any case almost nil). See the detail in the image below.

IMG_20150825_221859.jpg

 

The second step is placing a drop of blood on the reactive test strip, which is another small single-use electrode, chemically treated, as shown in the detail of the image below.

IMG_20150825_222026.jpg

Then the reading procedure starts. As mentioned, the Meditech glucose measurement circuit will respect the reactive test strip size and contact positions, and the accessory components (available in any pharmacy as spare parts for a few dollars) will be the same, to guarantee full compatibility of the methodology for two reasons: better comparison testing with volunteers and the best standardised compatibility with commercial devices.

 

The Meditech Glucose probe

The approach followed by the Meditech glucose measurement probe will be based on the standard test strip terminals, as shown in the PCB terminal layout example below:

Screen Shot 2015-08-27 at 10.02.10.png

 

 

The sensor circuit schematic using the pre-built test strips is quite simple:

Screen Shot 2015-08-27 at 10.03.57.png

The blue blocks in the circuit are common parts based on a low-pass filter, similar to the one already used for the heart beat sensor and ECG, while the green blocks are a specific current-to-voltage converter IC from Freescale that has been adopted by many other similar measurement devices.

Previous Posts Here:

http://www.element14.com/community/community/design-challenges/sci-fi-your-pi/blog/authors/screamingtiger?ICID=DCH-SciFiPi-challengers

 

I am going to catch up a few blog posts and post my code in the AM.

 

The QuadCOP has a variety of sensors onboard to help it avoid collisions and detect events.

 

From left to right, top to bottom:

4X Sonic sensors, 4 pin version.

1X PIR motion detector

1X IR flame detector (http://www.amazon.com/gp/product/B00AFSEC2Y?psc=1&redirect=true&ref_=oh_aui_detailpage_o06_s00)

1X mechanical relay

1X Luxeon LED

1X Arduino Nano with header pins.

 

IMG_0282.JPG

 

 

All of these sensors are digital in that they have a trigger and then a "delay".  If the delay is too long, it is assumed the sensor did not detect anything.  If it does pull the "Echo" pin high, then you time from the point you pulled the trigger pin high to the time the echo pin went high.  Then there is a conversion factor of some sort for each sensor to tell you what you need to know.
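The sonic sensors hang off the Nano, but as a hedged sketch of the same trigger/echo timing written in Python for a Pi GPIO pin (HC-SR04 style wiring assumed; the pin numbers are placeholders, not the QuadCOP wiring):

import time
import RPi.GPIO as GPIO

TRIG, ECHO = 23, 24                          # example BCM pins

GPIO.setmode(GPIO.BCM)
GPIO.setup(TRIG, GPIO.OUT)
GPIO.setup(ECHO, GPIO.IN)

def read_distance_cm(timeout=0.03):
    GPIO.output(TRIG, True)                  # short trigger pulse
    time.sleep(0.00001)
    GPIO.output(TRIG, False)

    start = time.time()
    while GPIO.input(ECHO) == 0:             # wait for the echo pin to go high
        if time.time() - start > timeout:
            return None                      # delay too long: nothing detected
    pulse_start = time.time()
    while GPIO.input(ECHO) == 1:             # time how long the echo stays high
        if time.time() - pulse_start > timeout:
            return None
    pulse_width = time.time() - pulse_start
    return pulse_width * 17150               # conversion factor: speed of sound / 2

print(read_distance_cm())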

 

There are two ping sensors on the bottom for ground detection, and two on the forward nose to detect obstacles.

The IR flame sensor is inside the rotating head.  This allows flame to be "scanned" for.

The PIR sensor is for when the QuadCOP is docked at its base station; if it detects motion it will fly into action!

The Luxeon LED is for night flying, and is triggered by the mechanical relay.

The Nano is connected to the I2C bus as a slave.

 

The Nano runs a loop and triggers all sensors in a certain order.  If anything is detected, it stores the event in a small queue of 100 items per sensor.  It is a FIFO (First In, First Out) queue and will stop recording at 100 items.  The RPFS sends a register read to the SensorArray; each register represents a sensor.  When an event is read, a read pointer is incremented, so each queue has two pointers, a write pointer and a read pointer.  The data is returned to the RPFS for processing.  The data returned is always 2 bytes, using the I2C block algorithm I created, which will be explained in a later post.

 

The SensorArray also accepts commands via another register.  The only two commands at this time are "clear all queues" and "turn the LED on/off".  Each command consists of a register byte, a command code, and a parameter code (0 or 1 for on/off).  The parameter code only applies to the LED but is still sent for the clear-queues command to be consistent.
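As a hedged sketch of the master side of this exchange on the RPFS: the slave address, register numbers and command codes below are placeholders, and the real transfer uses the author's own I2C block algorithm rather than plain smbus reads.

import smbus

bus = smbus.SMBus(1)              # I2C bus 1 on the Raspberry Pi
SENSOR_ARRAY_ADDR = 0x30          # assumed Nano slave address
REG_FRONT_SONAR = 0x01            # assumed register for one sensor queue
REG_COMMAND = 0x10                # assumed command register

def read_event(register):
    # Each read returns 2 bytes and advances that queue's read pointer on the Nano.
    hi, lo = bus.read_i2c_block_data(SENSOR_ARRAY_ADDR, register, 2)
    return (hi << 8) | lo

def send_command(code, param=0):
    # e.g. code 0 = clear all queues, code 1 = LED, param 0/1 = off/on
    bus.write_i2c_block_data(SENSOR_ARRAY_ADDR, REG_COMMAND, [code, param])

print(read_event(REG_FRONT_SONAR))
send_command(1, 1)                # turn the Luxeon LED on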

 

This is a picture of the I2C bus "splitter" or multiplexer.  In the upper middle of the pic you can see how I am connecting the various I2C items together, with the RPFS always the master.

 

IMG_0284.JPG

 

I will post a picture soon of the sensors installed.

 

Here is the software flow for the code that runs in the Arduino Nano.

 

Sensor Array.jpeg

Previous Posts Here:

http://www.element14.com/community/community/design-challenges/sci-fi-your-pi/blog/authors/screamingtiger?ICID=DCH-SciFiPi-challengers

 

Long story short: my flight controller was acting up and it finally burned out.  It was acting flaky, so I took everything apart and had intermittent success.  Then I plugged something in wrong and now it's done.

 

It could read channels 3 and 4 off the ChipKit Pi, but not 1 and 2.  I confirmed with my oscilloscope that the CKP is putting out the PWM needed, and I tested it on a servo.  So the flight controller must have taken a hit at some point.

I finally got the CKP working perfectly!

 

This is what I ordered here:

https://www.hobbyking.com/hobbyking/store/uh_viewItem.asp?idProduct=60488

 

I am going to order another one and it will be here next week.  So I am going to keep posting my stuff, but there can be no test flight until next week!

 

I put over 90 hours into my project over the last 2 weeks, only to be stopped by this!

 

Hey Element 14, how about another month extension?  Who else needs one, raise your hand!    I know I know I am pushing it.

 

Here are the results of the testing I was doing.  Time to put it all back together for the final presentation.

20150826_232544.jpg

frellwan

Thanks

Posted by frellwan Aug 26, 2015

I just wanted to take a moment and thank Element14 for the opportunity to participate in this design challenge. I had a great time completing a project that is meaningful to my everyday work. I appreciate the comments and suggestions from fellow Element14 members and 'Sci Fi Your Pi' participants. I really enjoyed reading the posts from other competitors and took away a couple of ideas for future projects.

 

Good luck to all and again Thank You Element14!!!

This project mainly deals with data communication, so it is difficult to show a lot of video about the project. Below I have attempted to show some of the pieces working

 

Here is a video showing that if bit B3:0/0 in the PLC is a 1, an email is sent alerting someone (in this case me) that an alarm condition is present:

 

Here is the recipe being loaded into the PLC

 

Here is a video of the usb/RS232 communication to PLC in action. The red and green lights indicate transmit and receive activity.

 

 

Here was the premise of my project:

SerialMaster

I was able to complete all of the communication necessary for this project. It is difficult to show the FTP transfers.

 

One of the things I will continue to work on is adding a web server interface for the configuration of the communication channel.

Previously:

Sci Fi Your Pi - Prince Dakkar's patent log taking chart compass

Sci Fi Your Pi - Prince Dakkar's patent log taking chart compass - Functional Design

Sci Fi Your Pi - Prince Dakkar's patent log taking chart compass - Route selection and indication

 

 

First, it is time for an apology; life got in the way and I forgot all about this project until nearly too late. With the deadline looming I am unlikely to fully complete the project as I had hoped, but I do intend to try to get as much detail as possible onto the blog before then, and will try to update progress to completion even if that is a little after the competition deadline.

 

Direction of Travel Indicator 1 - Setting current and destination positions

An adventurer embarking on a journey will need to know which way to go, so the second part of the functionality I envisioned included a set of arrows on the device that show which way the traveller needs to move to reach the destination.

 

To make this work I need to take the current position of the device (from the Microstack GPS module) and then compare this with the intended destination. The Raspberry Pi will be used to achieve this task. Once the current GPS position is received it will be split into latitude and longitude figures. These are then compared in turn with the desired destination latitude and longitude.

 

The comparisons are then used to make a decision on which LED(s) to illuminate indicating the direction to travel.

 

Getting Current Position

To get the current position I was planning to use the Microstack GPS module and the Microstack Baseboard. The datasheets for these are here - Baseboard and GPS Module.

 

This is where I encountered my first big problem. I hadn't really checked the contents of the kit, and this meant that I was not aware that I did not have a Microstack baseboard. As I had left this discovery till rather late in the day it left me with a dilemma: I could not realistically get a Baseboard in time, so I needed a solution.

 

A little head scratching (and maybe a few choice words) later, I came across this post on the Microstack GPS for Geocaching by callum smith. As well as a great explanation of how to use the Microstack GPS module to get the current position, there is also mention (in the comments) that it can be used without the baseboard.

 

So the plan changed and the GPS module will be used without the Baseboard. This can be done by connecting the appropriate pins on the module to the correct pins on the RPi.

 

microstackgps.jpg

pinout.jpg

The Microstack GPS module uses a Python library available from GitHub. The libraries can be installed using apt-get:


General microstack node library:

sudo apt-get install python3-microstacknode



 

GPS specific tools:

sudo apt-get install gpsd gpsd-clients python-gps



 

Once done the GPS module can then be used to get position information required.

 

With the GPS Module connected up and the Python libraries installed, the current position can be obtained (using gps.gpgll).

 

This can then be used to pull out latitude and longitude numbers and the NS and EW indicators.
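The Microstack library does this parsing for you, but as a standalone illustration of where the numbers come from, here is how a raw GPGLL sentence breaks down into signed decimal degrees (the sentence below is a made-up example fix):

def nmea_to_decimal(value, hemisphere):
    # Convert NMEA ddmm.mmmm (or dddmm.mmmm) into signed decimal degrees.
    degrees = int(value) // 100
    minutes = value - degrees * 100
    decimal = degrees + minutes / 60.0
    return -decimal if hemisphere in ('S', 'W') else decimal

sentence = "$GPGLL,5130.4500,N,00007.6500,W,225444,A*1D"
fields = sentence.split(',')
lat = nmea_to_decimal(float(fields[1]), fields[2])
lon = nmea_to_decimal(float(fields[3]), fields[4])
print(lat, lon)        # approx 51.5075, -0.1275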

 

Getting Destination Position

The destination position is a little trickier, as I have set up a challenge for myself in the way the destination is selected (I do not intend to use a screen or buttons as they would not fit with the aesthetic).

 

I have planned to use a selector system to indicate start and end positions on the device for route setting (described in my earlier blog - Sci Fi Your Pi - Prince Dakkar's patent log taking chart compass - Route selection and indication).

 

scroll wheel.jpg

 

 

This is used to connect strings of LEDs to indicate the route depending on the position of the connections. I am intending to also use this to set the desired destination as well.

 

The plan is to use the connections on the wheel to connect a pin or combination of pins on the Raspberry Pi to set them high or low. Obviously the number of pins needed limits the number of destinations, but using a combination of pins I can get a usable number of destinations from a relatively small number of pins: 4 or 5 pins will give me up to 16 or 32 binary combinations (depending on how many destinations I can find LEDs for and fit on the scroll wheels).

 

The pin connections will use the same idea as the LED strings: supplying a current to set a pin high, or not, to leave it low. The pin pattern gives a binary number that corresponds to the destination. Each destination will have a variable set with the longitude, latitude and EW/NS indicators.
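A hedged Python sketch of that lookup, with placeholder pin numbers and an illustrative destination table (the real wheel wiring and LED count define the actual mapping):

import RPi.GPIO as GPIO

SELECT_PINS = [5, 6, 13, 19]                 # four assumed BCM pins -> up to 16 codes

DESTINATIONS = {                             # (latitude, longitude) per selector code
    0b0000: (51.5074, -0.1278),              # London
    0b0001: (48.8566, 2.3522),               # Paris
    0b0010: (40.7128, -74.0060),             # New York
}

GPIO.setmode(GPIO.BCM)
for pin in SELECT_PINS:
    GPIO.setup(pin, GPIO.IN, pull_up_down=GPIO.PUD_DOWN)

def read_destination():
    code = 0
    for bit, pin in enumerate(SELECT_PINS):
        if GPIO.input(pin):                  # wheel contact pulls this pin high
            code |= 1 << bit
    return DESTINATIONS.get(code)            # None if the code isn't assigned

print(read_destination())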

 

Comparison of Positions

The current and desired positions are then compared and the difference used to decide which LED to light.

 

If the current position is south of the desired position, the north LED will light; if the current position is south and west of the desired position, the north and east LEDs will light, and so on. I am hoping I will have enough pins left to add the ordinal points and improve this (in the above example the north-east LED would light rather than the north and east).
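A minimal sketch of the comparison step, assuming decimal-degree positions and a small tolerance so the indicator settles when the device is close enough; the tolerance and LED names are illustrative:

def leds_to_light(current, destination, tolerance=0.0005):
    cur_lat, cur_lon = current
    dst_lat, dst_lon = destination
    leds = []
    if dst_lat - cur_lat > tolerance:        # destination is further north
        leds.append("north")
    elif cur_lat - dst_lat > tolerance:
        leds.append("south")
    if dst_lon - cur_lon > tolerance:        # destination is further east
        leds.append("east")
    elif cur_lon - dst_lon > tolerance:
        leds.append("west")
    return leds or ["arrived"]

# Current position south and west of the destination -> north and east LEDs
print(leds_to_light((51.0, -1.0), (51.5, -0.1)))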

Previous Posts Here:

http://www.element14.com/community/community/design-challenges/sci-fi-your-pi/blog/authors/screamingtiger?ICID=DCH-SciFiPi-challengers

 

Later today I will be publishing my code and flow charts.  One thing I am working on is getting an accurate heading from my magnetometer.  I have an open question out there for some help, and I have also ordered a new sensor just in case.  I expect to have it resolved by tomorrow.

 

However, in the interim it's time to get this flying!  Despite the fact my magnetometer is wrong, it is still consistent.  I cannot navigate to waypoints with incorrect heading information, and the current readings I have do not map linearly to the correct heading.  But I do know where north, south, east and west are, and I can adjust my heading to those specific headings.  What doesn't work is trying to get to NE, SE, SSW, etc.

 

 

So, for testing here is the plan:

Go north for 20 feet, turn right, Go  east for 20 feet, turn right, Go south for 20 feet etc...

So basically the QuadCOP is flying a square.

 

At the end of each 20 foot leg, the QuadCOP will perform a sensor scan, by having the flight system move the head 180 degrees.  All of this will be automatic without any manual control!

 

I feel this will give a good test of the Raspberry Pi Flight System (RPFS) and check the control switch (ChipKit Pi) functionality.  The control switch is already working for manual control, as it is relaying the PWM signals from the Rx to the flight controller.

 

A couple previous vids to show what I am talking about.

here is an explanation of the control switch:

 

 

Here is my first test of the control switch, which is reading the PWM from the RX and passing it to the flight controller:

 

 

For reference, here is the software and hardware flow, to be included again when I post the rest.  Drill downs are needed for each system.

 

I will be flying this evening and posting a video.

 

SOftware Flow.jpeg

 

WISH ME LUCK!  It's time for this thing to fly or DIE.

Previous Posts Here:

http://www.element14.com/community/community/design-challenges/sci-fi-your-pi/blog/authors/screamingtiger?ICID=DCH-SciFiPi-challengers

 

 

Here are a few more final construction pics; the rest will just be installing sensors.  I'll post a final pic pointing out the various parts used from the kit.

In the morning I will be posting my github, code, and flow charts.

 

I am working on a bug with the 3-axis magnetometer before I can take another test flight.

 

PiCam installed in the bottom head with raspberry PI model B hot glued to the side.  This allows the head to rotate without twisting the cable.  A power cable will go to the Pi but it will be long and thin enough to allow the rotation.

IMG_0277_s.jpg

 

 

A small foam bearing (bottom) was put in place to stop the wobble.

IMG_0278.JPG

 

 

Needs a bit of adjusting, the Raspberry PI pulls it down to one side so I need some weight to balance it.

IMG_0280.JPG

 

A video of much better head movement!

Case design improvements

The Meditech eye probe camera is conceived to work near the main device for easier usage. In the case prototype the camera position, with its RGB light ring (see the image below), is fixed, but it needs to be able to rotate and should fold away when the module is not in use, to fit better into the component side of the case. This means redesigning this part of the case to add these two movements. As this part is only the cover of the camera flat cable, it does not affect the circuitry and connections.

IMG_20150613_190335208.jpg IMG_20150613_182805056.jpg

Adding vision disease tests

Together with the active tests that can be done with the camera probe, there is a series of tests that can help detect potential vision diseases in the patient, based on the use of the screen and special coloured patterns. The following images are two examples of the patterns used to generate VEP (Visually Evoked Potentials), which can be used to record iris contraction and other secondary parameters, avoiding one of the associated analyses - the EEG - that needs specialised personnel and can't be applied in the conditions usually expected for the Meditech device application.

Screen Shot 2015-08-26 at 00.29.01.png Screen Shot 2015-08-26 at 00.30.11.png

Another series of tests, which can be implemented with the exclusive use of the main color display, uses visual patterns to detect color blindness.

Ishihara_compare_1.jpg

There is a consideration about this kind of test. While these may seem useful only in certain kinds of analysis and conditions, i.e. during a visit to a hospital, things are actually different. If we think about the kind of environment the Meditech device is intended for, every possible simple yet reliable diagnostic evaluation may be useful, for at least two reasons:

  1. Extending the application possibilities offered by the device as much as possible, increasing its versatility.
  2. Giving the medical operator the option to produce a patient investigation that is fast, simple and as complete as possible.

Not much of a final update, I am afraid: real-world intrusions, and getting distracted by seeing if I could use the new 4D Systems touch screen to get around the limitations of the PiFaceCAD, mean that I don't have a lot to show for the past month's activities.

 

Several hundred lines of code written but untested, and only tomorrow to pull it all together before the delivery deadline: it is not looking good.  Learning to write robust real-time code in Python has proven to be much more of a head-stretch than I expected: the last rule of Real Programming published in the early 1980s was that Real Programmers can write Fortran in any language, but I have failed that test.

 

All is not lost: I will give it one last bash, but if you do not see a working Hexagram casting tomorrow, you will know that I have disappeared up my own conclusion.

 

In the meantime, well done to all the successful projects: it has been fun reading about your exploits, and I look forward to finding out who has won the big prize.

This past week I have been completing the code to finalize the project. The MTrim RS-422 serial interface needed to be available from within the DF1 RS-232 interface object, since the bits in the PLC that are being read by the DF1 object actually trigger the sending of information across the RS-422 serial interface to the MTrim.

 

The final code for the serial communications can be found at: https://github.com/frellwan/SciFy-Pi.git  in the serial/DF1 folder.

 

Code to make the MTrim Accessible from within the DF1 class:

class DF1ClientProtocol(SerialClientProtocol):

    def __init__(self, logger, ftpEndpoint, mtrimSerial):
        ''' Initializes our custom protocol

        :param logger: The local file to store results
        :param ftpEndpoint: The endpoint to send results to and read recipes from
        '''
        SerialClientProtocol.__init__(self)
        self.logger = logger
        self.ftpEndpoint = ftpEndpoint
        self.mtrimSerial = mtrimSerial
        self.logFile = logger.getFileName()
        self.ENQCount = 0
        self.lcOEE = LoopingCall(self.startOEEData)
        self.lcAlarms = LoopingCall(self.startAlarmsData)
        self.lcFTP = LoopingCall(self.startFTPTransfer)
        self._reconnecting = False
        self.config = utilities.optionReader()
        fLogFile = logfile.LogFile('df1comms.log', '/home/pi/projects/newSLC/logs', maxRotatedFiles=2)
        fLogObserver = log.FileLogObserver(fLogFile)
        log.startLogging(logfile.LogFile('df1comms.log', '/home/pi/projects/newSLC/logs', maxRotatedFiles=2))
        #log.startLogging(sys.stdout)
        self.notified = False
        self.transferred = False
        self.loaded = False



 

 

Then to change parameters when the recipe is changed:

def sendRecipe(self, recipe):
    PLCRecipe = self.config.getPLCRecipe()

    result = self.mtrimSerial.writeParameter(1, 1, float(recipe[-3]))
    result.addErrback(self.errorHandler, "sendRecipe")

    result = self.mtrimSerial.writeParameter(1, 20, float(recipe[-2]))
    result.addErrback(self.errorHandler, "sendRecipe")

    result = self.mtrimSerial.writeParameter(1, 21, float(recipe[-1]))
    result.addErrback(self.errorHandler, "sendRecipe")

    index = 1                               # Index 0 is recipe name
    var = []
    for address in PLCRecipe:
        request = protectedWriteRequest(1, address, [float(recipe[index])])
        result = self.sendRequest(request)
        result.addErrback(self.errorHandler, "sendRecipe")
        var.append(result)
        index += 1                          # step to the next recipe value
    d = defer.gatherResults(var)
    d.addCallback(clearRecipeBit)
    d.addErrback(self.errorHandler, 'saving data in StartOEEData failed')



 

 

I also need the PiFace code and the serial code to run when the RPi is booted up.

 

I tried following the instructions found here, but I could not get it to work. I was able to use the basic concept outlined, but had to make changes to the rc.local file in the /etc/init.d directory. When I did that it worked fine.

So to enable the PiFace code and the serial code I created a file - launcher.sh - that looks like this:

#!/bin/sh
# launcher.sh
cd /home/pi/projects
cd /home/pi/projects/PiFace
python3 lcdSM.py &
cd /home/pi/projects/newSLC
python df1.py &



 

Then I used the command:

sudo nano /etc/rc.local



 

to make the file look like this:

#!/bin/sh

### BEGIN INIT INFO
# Provides: rc.local
# Required-Start: $network $remote_fs $syslog
# Required-Stop: $network $remote_fs $syslog
# Default-Start: 2 3 4 5
# Default-Stop:
# Short-Description: Start df1comms daemon
### END INIT INFO

# Install Script Info
(sleep 10; sudo /home/pi/projects/launcher.sh)&


 

 

Everything is working very well. I will put some videos together to show here shortly.

Programming the Transmitter

Since the hardware is finished, it's time to work on system setups.

First of all, I programmed the transmitter (a Taranis X9D. BTW, if you don't know the Taranis, google it: it's an open-source transmitter, so you can do everything you want with it!)

The first thing to do is... to upgrade the transmitter firmware. This is required not just to have the latest version, but because the European version of the transmitter is shipped with a firmware that does not support D8 modulation, which is the only modulation supported by the receiver I installed on the TrainingSphere.

Upgrading the firmware is very easy:

  1. Remove the TF card from the card slot of TARANIS Plus battery compartment, and insert into the computer's SD reader
  2. Download the firmware from FrSky website download section and copy the .bin file into the “FIRMWARES” folder on the SD card
    Firmware folder.png
  3. Insert the TF card back into the card slot of TARANIS Plus battery compartment.
  4. Turn on the transmitter while pushing T1 and T4, the screen will display "Taranis Bootloader". Choose "Write Firmware", and then press the ENT button.
    Taranis bootloader.png
  5. Select the bin file that you’d like to upgrade to and then press the ENT button.
    Firmware selection.png

  6. Wait for it to finish, then turn off the transmitter.

 

I created a new model for the training sphere

IMG_20150825_142201.jpg

 

I selected the multirotor option, as this is the closest to the training sphere. I initially selected the "helicopter" option, but in that case you are asked to select the channels for the cyclic, tail rotor, etc.

 

IMG_20150825_142231.jpg

 

Then I chose the proper channels for each command, namely throttle, pitch, roll and yaw.

 

IMG_20150825_142249.jpg

IMG_20150825_142326.jpg

 

IMG_20150825_142402.jpg

 

IMG_20150825_142417.jpg

 

After the model wizard has completed, I edited the model to make the following changes:

  1. entered the model name
  2. set the output format to D8
  3. selected the channel for the flight mode (channel 5) and for the RTL (Return to Launch) function. The latter is not strictly required since the training sphere is not going to use GPS, but in any case it does no harm

 

That completes the transmitter setup. The next step is to bind the transmitter to the receiver. This is a very common procedure, and there are a lot of very clear tutorials, like this one. The basic steps are:

  1. press the switch button on the receiver and turn it on
  2. switch on the transmitter, select the model and scroll to the [Bind] command. Press the ENT button
  3. the red LED on the receiver will flash. This means that the receiver has been  bound
  4. cycle the power on the receiver

Introduction

The Meditech project is closing its first part. Starting from this post, a series of reminders and informative documentation will follow, to focus on the current state of the project and what is planned for the next two steps, further deadlines, etc. The Meditech development lifecycle, from the initial concept up to the product available on the market, will pass through three phases; the scheme below is a short reminder of what should be expected:

Project lifecycle - Tab 7.png

According to the scheme above, the most complex and longest step was phase 0 (started from scratch). The end of this first part is still far from the end of the entire project, which will require some more months of work. At the current date there is already a written agreement with a public hospital in Nigeria, one in Nanoro (Burkina Faso), and some other places I am in discussion with.

 

Phase zero: state of the project point by point

The next posts will show in detail those aspects not yet documented in the previous articles. The following is a list of expected tasks to be done in the first phase of the project and their current status.

 

  • Container and internal architecture: Main hardware components and task distribution mainline - Done
  • Components connection: Internal wired network approach, all the networking components and settings - Done
  • Powering system - Battery operated: The initial design was including a battery supply system that has been excluded in the first model - Cancelled
  • Powering system - AC power: The actual model will be powered by an ATX-like power supply unit working with 120-240 VAC - Done
  • Networking final configuration: The final networking configuration will work on a double network: an internal network bridged with an external network for Internet access - Done
  • Internal web server and database: The database architecture and internal web server (Apache2 + PHP + MySQL) has been setup and tested for responsiveness - Done
  • User Interface and Controller: The environment has been setup and tested with a custom hardware interface (software and electronics) - Done
  • TTS support for easier user interactivity: The Text-To-Speech support is part of the UI and is integrated in the Meditech control panel - Done
  • Printing support: The remote printing support with the relative bluetooth control software and the printer standard ESC/POS protocol management has been developed and tested - Done
  • Medical probes: Some of the probes have been fully tested while some others are still undergoing parameter comparison - Details below
    • Heart Beat : Filter circuitry done, tested and compared with assessed device
    • Human high precision temperature measurement : Analog reading through the BitScope analog channel. Under testing for continuous reading with assessed device, not yet disclosed.
    • Blood pressure digital sphygmomanometer : Not yet disclosed; will be tested with assessed device with about 30-50 different volunteers (documented)
    • Microphonic stethoscope : Probe and electronic filtering developed and tested. Auscultation data are recorded and can be streamed if needed to the remote support
    • Glucose sensor : the sensor electronics are based on the same (similar) circuit used in a commercial product, which I received late, just a couple of days ago. The probe is under testing, with comparative analysis involving volunteers.
    • Eye probe camera and variable light : This probe, with all its electronics, has already been tested successfully. Some other uses in the field of vision have since been identified and their implementation is under test.
    • Microscope camera: The microscope camera has been tested for body surface image analysis (skin, insect bites, rashes, etc.)

Well, I'm making progress but this iOS app stuff is a little bit tricky! Just about everything I proposed to do on this project I've had some familiarity with EXCEPT iOS apps. In fact, I've never made a mobile phone app but I've always wanted to. I'm glad I'm finally getting around to learning how to do this, but at the same time, it's really challenging trying to learn a new language in such a short period of time.

 

PizzaPi driver's phone app prelim layout

 

I decided to learn how to do this in the new Apple language, Swift. I had read a bit about this language a few months ago, that Apple is really trying to push its developers to this language and that they've spent a lot of effort trying to improve it and make the language more robust. I have to say, it has some neat properties that I have not seen elsewhere. The problem, however, is that Swift is changing all the time. I'm having trouble following tutorials that were written only a year ago because of all the changes going on.

 

Right, so that's a headache but let's take a look at that screenshot. This is the only part of the whole project that is being coded in OS X, the rest has been coded in Arch Linux/Raspbian. What you see is XCode, Apple's programming IDE. I've tried to use it in the past for other languages but I've always hated its clunkiness but when it comes to iOS development, it really does shine.

 

You can see the basic layout of the app. There is a login screen that asks for the driver's name. The driver enters his/her name and then the code takes that name, parses it and sends it to the web server for verification and feedback. I'm using this tutorial: https://twitter.com/dave_on_Cape/status/574734739472584704 as my guide and modifying it as I go. You can see in the bottom windows that it's actually tried to run the code that connects to the server, but I haven't finished writing it so of course it sends a response of "404", meaning the verification script (a PHP file that has yet to be written) was not found.

 

layout on iPad

Here's what the layout looks like on an iPad. For the pizza delivery management side I am designing the web interface to be accessible via a tablet. I've noticed that lots of restaurants make use of tablets. It seems a lot easier to use that than a clunky laptop. Anyway, you can see a little black squiggly line on the map. That's some dummy GPS data that I used to test overlaying GPS coordinates onto a Google map. It's not that difficult to do, all you need to use is GeoJSON. There is actually a nifty website (geojson.io) that will format your data for you so you don't really have to learn how to do the formatting yourself.
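As a small illustration of that formatting step, here is a hedged Python sketch that turns a list of logged GPS points into a GeoJSON LineString (the coordinates are dummy values; note that GeoJSON puts longitude before latitude):

import json

gps_points = [(-71.06, 42.36), (-71.05, 42.37), (-71.04, 42.36)]   # (lon, lat) pairs

route = {
    "type": "FeatureCollection",
    "features": [{
        "type": "Feature",
        "properties": {"driver": "example"},
        "geometry": {
            "type": "LineString",
            "coordinates": [[lon, lat] for lon, lat in gps_points],
        },
    }],
}

print(json.dumps(route, indent=2))   # paste the output into geojson.io to see it on a map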

 

Things are moving along. The app building is the most challenging thing right now. I still need to write the code to link the back end to the front end but that really isn't that difficult. I do that sort of thing everyday at the lab! What's left after that is to port everything onto the Raspberry Pi 2 and then get real sensor data flowing. Luckily, I wrote the sensor code a long time ago (remember all the headaches with mosquitto?). All I have to do is have the PHP code find the directory where the sensor data is and store those files in the appropriate tables in the database.

 

Until the next post...

Previous Posts Here:

http://www.element14.com/community/community/design-challenges/sci-fi-your-pi/blog/authors/screamingtiger?ICID=DCH-SciFiPi-challengers

 

I have mounted the camera in the head, and have the head moving!  I don't have the servo secured yet, so that is why it is wobbly.  I posted a vid below as a preview of how it works.  I just controlled it with my radio, but in flight the Raspberry Pi (RPFS) will use it to "scan" around.

 

Cut a hole for the eye

 

IMG_0259.JPG

 

Pi camera mounted with hot glue

IMG_0261.JPG

 

lens pokes through

IMG_0262.JPG

 

Servo, needs hard wood mounts, just held on with glue for now.

 

IMG_0264.JPG

 

A preview.  Note the plastic eye cover, made from a Christmas Ornament.

IMG_0266.JPG

Watch the Vid of the head.

 

Previous Posts Here:

http://www.element14.com/community/community/design-challenges/sci-fi-your-pi/blog/tags#/?tags=quadcop_project

 

I am working on some of the presentation now, since the weather is too bad for testing.

 

I struggled with what scheme to go with.  Star Trek, Terminator, Robocop?  I really liked the drones from the movie Oblivion, but realized I would need a HUGE ball, and the paper mache one my kids made was too heavy.  So I decided on Star Wars!  I decided a BB-8 scheme would be good since R2-D2 is so overdone.  I also made a modification: instead of the QuadCOP rotating to do a sensor scan, it will have a rotatable head akin to the Star Wars astro droids.

 

Here is a sneak preview, more to come!

 

A fan is used to dry the base coat and Polyurethane for hardness.  This is the top.

 

IMG_0252.JPG

 

Underneath the hood on the bottom.  The wood is to hold the main battery.  This keeps the weight low and stabilizes the QuadCOP.

IMG_0255.JPG

 

This servo mechanism will be "Turning Heads"

IMG_0256.JPG

 

 

Masked off for first detail painting.  This is the bottom, which will be the head.  I found a nice eye to go with it!  The Raspberry Pi cam will be on the head so it can look around.

IMG_0253.JPG

 

First level details painted.

IMG_0257.JPG

 

Not Bad, not perfect.  Some weathering will hide all imperfections.  Seems all Star Wars bots have some weathering on them to make them appear "old" or beat up a bit.

IMG_0258.JPG

 

BOLLOCKS!  Ketteh sez is mine.

IMG_0243.JPG

 

Here is the general overall look I am going for on this, with the rotating head and the eye with the Pi Cam in it.

BB8_Robot1.jpg

In this post, some details about the connections to the Raspberry Pi and Navio+ boards will be provided.

An overview of the required connections is shown in the following picture.

 

Power diagram.jpg

 

Raspberry and Navio+ power

Navio+ has three power sources, all of which can be used simultaneously as they are protected by ideal diodes.

For testing and development purposes, it is possible to connect a 5V 1A power adapter to the Raspberry Pi's microUSB port. The Raspberry Pi will then provide power to the Navio+.

In the actual drone, Navio+ should be powered by a power module connected to the “POWER” port on Navio+. Navio+ will provide power to the Raspberry Pi.

 

Powering servo rail

The power module does not power the servos. To provide power to the servo rail, plug your drone's BEC into any free channel on the servo rail. The BEC voltage has to be in the range of 4.8-5.3V.

 

RC input

Navio+ only supports a PPM signal as an input. To connect receivers that do not support PPM output you can use a PPM decoder or an SBUS-to-PPM converter. The PPM receiver is powered from the servo rail, so a BEC should be present. In my case, I chose the FrSky D4R-II 4ch 2.4GHz ACCST receiver.

RC output

The servos and ESCs (Electronic Speed Controllers) required for the specific frame will be connected to the RC output connectors. For the frame used in this project, four outputs will be used, as shown in the picture below.

 

CoaxCopterTopView1.jpg

Today I completed the wiring of the TrainingSphere. The final result is shown in the pictures below.

 

IMG_20150823_100313.jpg

 

IMG_20150823_100400.jpg

 

As you can see, I finally made the decision to use the Emlid Navio+ board. This is a really challenging project I want to continue to develop even after the end of this design challenge, so I invested some money in a piece of hardware that is reliable and tested enough to let me develop applications that run in the Raspberry Pi environment and interact with the APM platform. The possibilities are really endless compared with "closed" solutions like Pixhawk and ArduPilot.

 

Anyway, next steps will include

1. Parameters settings

2. Compass calibration

3. Accelerometer calibration

4. Radio calibration

 

and finally... first flight (hopefully)

 

After that, the plan is to add the optical flow sensor to make the sphere able to loiter in an indoor environment.

Previous Posts Here:

http://www.element14.com/community/tags#/?tags=quadcop_project

 

I'm currently working on 2 major updates this weekend.  I am adding in some sci-fi elements and installing sensors to give the QuadCOP a more science fiction look.  The modifications also add quite a bit of real estate to place things and remove the clutter.

Here is a preview; stay tuned for some awesome pictures of what this looks like when painted!

 

 

20150822_100900rr.jpg

 

 

 

The other thing I am working on is testing the Raspberry Pi Flight System (RPFS) and I need to give you more information on how this works.  I have to demonstrate the actual functionality of the QuadCOP and it is time to do that.

I currently have about 2000 lines of original code between the systems, not counting the TinyGPS++ library I used.  While not a lot, for an embedded system it's not small!

 

Heading Information

Heading information is a number between 0 and 360 with 0 pointing magnetic north and 90 degrees pointing magnetic east.  The numbers get bigger as you rotate right and smaller as you rotate left unless you cross north.

For a 3D flight system, there are two forms of heading.  There is a GPS heading which shows the direction that the GPS unit is heading.  Then there is the quadcopter heading, which shows the direction the front of the quadcopter is pointing.

The reason this is important is that the quadcopter can move in one direction while facing another.  So the quadcopter may be moving sideways and the GPS heading may say 90 degrees, but the quadcopter may be pointing north.  Without the quadcopter heading we don't know which way to move to change heading.  If we need to go east, and don't know which way the quadcopter is facing, do we go forward, backwards or sideways?  The GPS heading can also help with wind conditions.  If we are facing north but moving northeast, we know something is pushing us sideways and can make a correction to go due north.  This can be done by adding in some sideways velocity, or by pointing the front of the quad at an angle and applying more forward velocity.

 

Heading2.png

 

 

Way Point Information

A waypoint is a structure that contains altitude, longitude, latitude and heading information.  This information is sent to the RPFS 2 times per second.  So when in auto mode there are two waypoints: the current position and the destination.  The current position is stored in a waypoint structure since it fits nicely.

 

There are then two critical functions needed that let the QuadCOP navigate.  Both of these functions provide relevant information given two waypoints.

 

HeadingTo:  Given two waypoints, this returns a heading between 0 and 360 that points directly to the destination.  This function is run 2 times per second and is passed the current waypoint (current position) and the destination waypoint.  The heading is then updated to make the front of the QuadCOP point towards the destination.

 

DistanceBetween:  This provides the distance, in inches, between two waypoints.

 

 

How do we know when we reach the destination?  When the DistanceBetween the current position and the destination is "zero".  Zero is not really zero, but a threshold that is good enough.  For my purposes 1 foot (0.3 metres) is considered zero.  I have also experimented with allowing the QuadCOP to move faster if the distance between the points is large enough.
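The QuadCOP's own implementation lives in the flight system code (TinyGPS++ offers similar helpers), but as a hedged Python sketch of the math behind DistanceBetween and HeadingTo:

import math

EARTH_RADIUS_IN = 6371000 * 39.3701          # mean Earth radius in inches

def distance_between(wp1, wp2):
    # Great-circle (haversine) distance in inches between two (lat, lon) waypoints.
    lat1, lon1 = math.radians(wp1[0]), math.radians(wp1[1])
    lat2, lon2 = math.radians(wp2[0]), math.radians(wp2[1])
    dlat, dlon = lat2 - lat1, lon2 - lon1
    a = math.sin(dlat / 2) ** 2 + math.cos(lat1) * math.cos(lat2) * math.sin(dlon / 2) ** 2
    return 2 * EARTH_RADIUS_IN * math.asin(math.sqrt(a))

def heading_to(wp1, wp2):
    # Initial bearing from wp1 to wp2: 0-360 degrees, 0 = north, 90 = east.
    lat1, lon1 = math.radians(wp1[0]), math.radians(wp1[1])
    lat2, lon2 = math.radians(wp2[0]), math.radians(wp2[1])
    dlon = lon2 - lon1
    x = math.sin(dlon) * math.cos(lat2)
    y = math.cos(lat1) * math.sin(lat2) - math.sin(lat1) * math.cos(lat2) * math.cos(dlon)
    return math.degrees(math.atan2(x, y)) % 360

here, there = (40.0000, -105.0000), (40.0001, -105.0000)     # about 36 ft due north
print(distance_between(here, there), heading_to(here, there))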

 

 

 

CurrentSpeed:  This is calculated by the GPS.  It is not used by the waypoint functions, but rather by the flight heuristic system.  If the current speed is too fast, the QuadCOP will slow down.  If it is too slow, the QuadCOP can apply more power to move forward.  This allows correction for wind conditions that may affect flight.

 

 

The simplest approach to move between points is to always move forward.  This means that we have to keep the QuadCOP pointing in the direction of the next waypoint and apply a forward velocity to move in that direction.

 

Setting and Correcting Heading and Velocity

Two functions are used to get the QuadCOP going in the right direction.

 

SetHeading:  Given the required heading, it is compared with the current heading of the QuadCOP.  Care is needed to ensure that this is done correctly.  As an example, let's say we want to move due east at 90 degrees, but the heading information shows 93 degrees.  This means we need to rotate left 3 degrees.  How does the QuadCOP know which direction to rotate?  A serious issue could happen if the QuadCOP tries to get to 90 degrees from 93 degrees by rotating right: it would do nearly a 360-degree turn!  So simple care is needed to ensure we rotate the least amount.  We also need to make sure we don't over-rotate, so some heuristics are applied based on the correction needed.  If we only need to adjust 3 degrees, only a small amount of power is needed.

 

The amount of rotation we apply for a given degree differential is called the heading gain.  As an example, we are facing 93 degrees and need to get to 90 degrees by rotating left.  If we apply too much power, by the time the heading information is updated we have over-corrected and are now facing 88 degrees.  So another correction is needed to rotate right, and if we once again apply too much power we may end up facing 92 degrees.  This cycle repeats quickly and causes a condition called "wag".

 

The front of the quadcopter looks left and right at a rapid pace and may never be able to hit its target.  If it gets bad enough it can cause the flight to become unstable.  Choosing the correct gain is an empirical process that requires some guesswork.  It is also possible that different values for the gain are needed at different times.  All of this is handled by the heuristic built into the RPFS, so it can detect how its actions affect the sensor input and make adjustments as needed.
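A minimal sketch of the correction idea: compute the shortest signed rotation (so 93 to 90 degrees becomes "rotate left 3 degrees" rather than a near-360 turn), then scale and clamp the command with a gain to avoid over-correction and wag. The gain and clamp values below are purely illustrative.

def heading_error(current, target):
    # Signed shortest rotation in degrees: negative = rotate left, positive = right.
    return (target - current + 180) % 360 - 180

def yaw_command(current, target, gain=0.5, max_cmd=30):
    error = heading_error(current, target)
    cmd = gain * error                       # small error -> small correction
    return max(-max_cmd, min(max_cmd, cmd))  # clamp so we never over-rotate wildly

print(heading_error(93, 90))    # -3: rotate left 3 degrees, not right 357
print(heading_error(350, 10))   # +20: rotate right across north
print(yaw_command(93, 90))      # -1.5: gentle correction for a small error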

 

This seems fine for setting the heading but what about getting the QuadCOP moving?

I mentioned in previous posts that the ControlSwitch (ChipKit Pi) is what actually controls the QuadCOP; the RPFS simply sends commands to the ControlSwitch to tell it what to do.  This is done via ControlBytes.  These are a set of bytes that represent directions and adjustment information.  This simple structure ignores power information and results in a small predetermined amount of power, in microseconds, being applied in each direction.  Another control byte can be sent at a per-direction level, with power information included.  This allows the QuadCOP to make fine adjustments in each direction as needed, for example to deal with the wind as discussed above.

 

So that concludes the first update for the weekend.  More to Come!  We are now getting into the fun stuff that final week!

 

 

 


Back again with another update. I've uploaded another video capture that shows both the customer front-end and some of the administrative front-end. I am almost done building out the UIs for these and then it is a matter of writing the PHP code to link it all together with the MySQL database.

Just to refresh everyone's memory, the Raspberry Pi 2 will be acting as both the web server and mosquitto broker. Therefore, all the code I'm writing right now will be hosted on the RPi2. This will be located at the hypothetical pizza joint.

 

In between writing this code, I have been researching how to go about writing the iOS app that will be the driver console. It seems like it will be easiest to incorporate the iOS map app, but we'll see. I want to be able to send along directions for multiple locations, so the driver doesn't have to think about who to deliver to next on his/her route.

 

It's fun seeing the web interface start looking like something tangible. It's going to be even more fun once everything is communicating. That should start piecing together by the end of the weekend.

Introduction

The Python language, with some improvements, has definitely been adopted to manage the UI, replacing the initial idea of using Qt, for two reasons: development optimization and architecture simplification. Unfortunately, as often happens, making things simple is not so simple.

 

Exploiting the features of the Linux graphic interface

pygtk-splash.jpg

Together with Python there is a very useful library interfacing the language with the standard features natively available in the Raspbian desktop: PyGTK.

This library lets you create almost any graphical user interface with the Python language using GTK+. This means that it is possible to create multi-platform visual applications based on the graphic features and performance of the GNOME desktop.

The resulting program works easily and with good performance, without further intermediate graphic components.

Another advantage is that all UI applications developed in PyGTK inherit the GTK desktop theme, adapting to any supported environment.

 

As with all Python libraries, integrating PyGTK into a Python program is quite simple:

 

#!/usr/bin/env python

import sys
try:
    import pygtk
    pygtk.require("2.0")
except:
    pass
try:
    import gtk
    import gtk.glade
except:
    print("GTK Not Availible")
    sys.exit(1)

class HellowWorldGTK:
    """This is an Hello World GTK application"""

    def __init__(self):

        #Set the Glade file
        self.gladefile = "HelloWin.glade"
        self.wTree = gtk.glade.XML(self.gladefile)


if __name__ == "__main__":
    hwg = HellowWorldGTK()
    gtk.main()

 

A simple GTK Window with Python

The following scriptlet shows the creation of a simple window using PyGTK:

#!/usr/bin/env python

# example base.py

import pygtk
pygtk.require('2.0')
import gtk

class Base:
    def __init__(self):
        self.window = gtk.Window(gtk.WINDOW_TOPLEVEL)
        self.window.show()

    def main(self):
        gtk.main()

print __name__
if __name__ == "__main__":
    base = Base()
    base.main()

 

This source is very simple generating a small window on the screen as shown below:

Screen Shot 2015-08-22 at 00.50.26.png

Something more complex

Using the PyGTK API, we can try this more complex example:

#!/usr/bin/env python

# example table.py

import pygtk
pygtk.require('2.0')
import gtk

class Table:
    # Our callback.
    # The data passed to this method is printed to stdout
    def callback(self, widget, data=None):
        print "Hello again - %s was pressed" % data

    # This callback quits the program
    def delete_event(self, widget, event, data=None):
        gtk.main_quit()
        return False

    def __init__(self):
        # Create a new window
        self.window = gtk.Window(gtk.WINDOW_TOPLEVEL)

        # Set the window title
        self.window.set_title("Table")

        # Set a handler for delete_event that immediately
        # exits GTK.
        self.window.connect("delete_event", self.delete_event)

        # Sets the border width of the window.
        self.window.set_border_width(20)

        # Create a 2x2 table
        table = gtk.Table(2, 2, True)

        # Put the table in the main window
        self.window.add(table)

        # Create first button
        button = gtk.Button("button 1")

        # When the button is clicked, we call the "callback" method
        # with a pointer to "button 1" as its argument
        button.connect("clicked", self.callback, "button 1")


        # Insert button 1 into the upper left quadrant of the table
        table.attach(button, 0, 1, 0, 1)

        button.show()

        # Create second button

        button = gtk.Button("button 2")

        # When the button is clicked, we call the "callback" method
        # with a pointer to "button 2" as its argument
        button.connect("clicked", self.callback, "button 2")
        # Insert button 2 into the upper right quadrant of the table
        table.attach(button, 1, 2, 0, 1)

        button.show()

        # Create "Quit" button
        button = gtk.Button("Quit")

        # When the button is clicked, we call the main_quit function
        # and the program exits
        button.connect("clicked", lambda w: gtk.main_quit())

        # Insert the quit button into the both lower quadrants of the table
        table.attach(button, 0, 2, 1, 2)

        button.show()

        table.show()
        self.window.show()

def main():
    gtk.main()
    return 0      

if __name__ == "__main__":
    Table()
    main()

This code will generate the window shown in the image below

Screen Shot 2015-08-22 at 00.56.34.png

When we press buttons 1 and 2 we see the corresponding message on the terminal; pressing the Quit button ends the program.

Hello again - button 1 was pressed
Hello again - button 2 was pressed

 

The PyGTK library also includes the API to set callback functions, associate methods with the buttons and so on: a complete manager of the visual interaction. Unfortunately, even for a simple application (three buttons with their callbacks inside a standard window) we have to write a lot of code. As a matter of fact, every graphic option, button, icon and detail has to be written in Python, calling the proper PyGTK API.

 

Separating the design from the code

The solution that makes things easier is to separate the User Interface design from the code. To reach this goal we adopt a technique very similar to the one used by Android applications, keeping the object design, in XML format, apart from the PyGTK Python code.

When the Meditech Python controller starts, it shows the main Meditech logo on the screen while managing the inter-process communication. To reach this result the background image has been created:

MeditechBackground.jpg

Then a special window has been defined in a separate XML file, MeditechInterface2.glade, as shown below:

<?xml version="1.0" encoding="UTF-8"?>
<interface>
  <!-- interface-requires gtk+ 3.0 -->
  <object class="GtkWindow" id="MeditechBackground">
    <property name="visible">True</property>
    <property name="sensitive">False</property>
    <property name="can_focus">False</property>
    <property name="halign">center</property>
    <property name="valign">center</property>
    <property name="title" translatable="yes">Meditech 1.0Beta</property>
    <property name="resizable">False</property>
    <property name="modal">True</property>
    <property name="window_position">center-on-parent</property>
    <property name="default_width">1024</property>
    <property name="default_height">1080</property>
    <property name="hide_titlebar_when_maximized">True</property>
    <property name="type_hint">desktop</property>
    <property name="skip_taskbar_hint">True</property>
    <property name="skip_pager_hint">True</property>
    <property name="accept_focus">False</property>
    <property name="focus_on_map">False</property>
    <property name="decorated">False</property>
    <property name="deletable">False</property>
    <property name="gravity">center</property>
    <property name="has_resize_grip">False</property>
    <property name="mnemonics_visible">False</property>
    <property name="focus_visible">False</property>
    <child>
      <object class="GtkImage" id="background">
        <property name="width_request">1024</property>
        <property name="height_request">768</property>
        <property name="visible">True</property>
        <property name="sensitive">False</property>
        <property name="can_focus">False</property>
        <property name="xalign">0</property>
        <property name="yalign">0</property>
        <property name="pixbuf">images/Meditech-1024.jpg</property>
      </object>
    </child>
  </object>
</interface>

This is a window where many parameters differ from the defaults: there are no decorations, the window is not resizable, the image is centered, both window and image are expanded over the entire screen, and more. Designing the UI separately has dramatically simplified the Python code, where the entire UI definition is reduced to a single line, as shown below.

import sys
try:
    import pygtk
    pygtk.require("2.0")
except:
    pass
try:
    import gtk
    import gtk.glade
except:
    print("GTK Not Availible")
    sys.exit(1)

class MeditechMain:

    wTree = None

    def __init__( self ):
        #  ============================ XML with the UI design definition
        builder = gtk.Builder()
        builder.add_from_file("MeditechInterface2.glade")
        # ============================
        builder.connect_signals(self)
        self.window = builder.get_object("MeditechBackground")
        self.window.fullscreen()
        self.window.maximize()
        self.window.set_keep_below(True)
        self.window.set_deletable(False)
        # self.window.show_all()

        # self.image = builder.get_object("Background")
        # self.image.show()

def main():
    gtk.main()

if __name__ == "__main__":
    mainClass = MeditechMain()
    main()

 

What makes the difference is the call add_from_file("MeditechInterface2.glade") incorporating the XML file. Obviously the PyGTK APIs remain available and can be used in the program to make changes and adaptations to the initial UI.

 

Making the design simple

It is fairly intuitive that defining the UI components directly in XML is not simple either. It is also obvious that this separation between design and code has another great advantage: we can retouch and adjust design issues without changing the code.

The UI design XML file has the .glade extension because it is generated by the graphical IDE we use to create the design: Glade. Again, this strategy is reminiscent of Android UI design.

Screen Shot 2015-08-22 at 01.36.13.png

The Glade IDE makes all the GTK components available for designing the UI, showing them as they will appear at runtime; it then generates the .glade XML file when the design is saved, ready to be used in the PyGTK Python application. Details on the installation and usage of the Glade IDE can be found at glade.gnome.org.

Introduction

Meditech should be something simple. Simple to use, addressed to non-expert IT users, as autonomous as possible, able to help the operator, usable with a few buttons (= NO KEYBOARD REQUIRED) and much more. This is a must that takes priority over any other feature it may have.

 

The equation is simple: the user should see Meditech as a tool, regardless of what it contains. Power on the device, wait until it says Ready, then the operator's skill and knowledge should be focused - maybe exclusively focused - on the use of the probes. This means placing the body temperature sensor in the right spot, placing the ECG electrodes correctly, and knowing whether the data indicate a possible disease or not.

 

Every user-facing simplification, as any developer knows, corresponds to a meaningful increase in complexity in the back end of the system. But this is the only way Meditech can be really usable in the non-conventional operating conditions it is expected to work in. This means there are no excuses.

 

Summarizing the architectural simplifications applied, we can focus on the following:

 

  • A simple numeric IR controller (like a TV remote) manages the entire system
  • TTS (Text-To-Speech) audio feedback saves the user from having to read status changes, confirmations and usage guides
  • No keyboard is needed for normal usage
  • The screen shows only the essential information: no inactive windows, no long messages to read, no floating windows
  • The Linux desktop and menu bars are hidden on startup

 

The main controller strategy

One of the key concepts of Meditech is the modular system; this goal is reached because every component works as a vertical, independent task solver: if a component is not detected, or stops responding for some reason, its features are excluded. As a matter of fact, two of the three internal Raspberry PI devices are sufficient to keep the system running. This obviously requires the essential peripherals: audio card, network switch, control panel and LCD display. So, ignore the question "what happens if it explodes" and similar.

 

The "glue" keeping all together and distributing the tasks correctly, depending on the user requests is the main controller, a sort of semaphore component created in Python. So what is the better place where this application can reside? on the background. Not running in background but on the background graphic component that is the minimal view shown on the screen while the system is powered-on.

IMG_20150816_182602.jpg

So, the image above shows the just powered-on Meditech face. As the entire system is controlled through the infrared controller, until the system is put in maintenance mode and connected to a mouse and keyboard there is no way to enter a deeper level of the system. And there is no reason to.

The current development version of the prototype, for obvious reasons, does not start automatically on boot, but the final application version will start directly with the standard interface.

 

Controlling the system behaviour

 

At this point there is another important question to be answered: how to manage the Meditech architecture data flow?

 

As the user interface manager is the task nearest to the user interaction, it is also the best candidate to manage the entire data flow and task execution. When Meditech is powered on this is the process that is started automatically, together with the infrared controller application. The interaction module sends commands and requests directly to the process controller, which shows the main background User Interface. At this point the system is ready to manage all the parts from a single asynchronous controller.

 

Note that the graphic view for every status, informational window, graph etc. is provided by independent Python widgets launched and stopped by the process controller, which also activates the probes. On the other side, the background processes are ready to receive the probe information, collect the data and store them in the database. As remote access to the information is under the user's control, the activation of the Internet transmission to the remote server, over a dependent MySQL database, is also managed by the process controller.

 

The following scheme illustrates how, starting from the user interaction and direct feedback, the system works and reacts to the requests.

 

Main controller - Tab 6.png

Controlling processes with Python in practice

As in almost any language, in Python it is possible to launch external tasks, i.e. bash scripts, programs or other Python scripts. The problem arises when the Python process control application should act as a semaphore (also in the traditional IT sense). The choice of Python - it should be remarked - instead of something more low-level depends on the graphic performance of the language; in this way it is possible to integrate both the main UI visualisation and the process control, saving one more task running inside the RPI master device.

 

The approach in Python is fairly simple: there are three different instructions that can launch external programs. The first (and probably most immediate) approach is the use of the os.system() or os.popen() calls to delegate another process to the operating system; it does not matter how that command is built, in C++ or any other compiled language, Java or bash. So, for example:

 

import os

# Run an external command and read the first line of its output
first_line = os.popen('command').readline()
print(first_line)






This is the first method, and it is best avoided, due to the complexity of managing the return values. The other potential issue is that with the os.system() call Python passes control of the entire process - the one we want to control - to the operating system.

What has proven to be the right approach instead is the use of the subprocess module. Just as Python's multiprocessing manages internal multi-threading, subprocess can spawn different processes, giving more control to the calling application in a simpler way. So it is possible - and this is our case - to manage the following architecture:


  1. Start the main UI + process control
  2. Start the IR controller that sends the user requests, commands etc. back to the controller
  3. At this point the process is fully operational

 

All the other processes launched by the controller start something in the system by launching a bash script or a C++ command. Every process follows a two-way data exchange with the controller, which can decide what is working at a given moment and what is not. A typical procedural approach can be the following:

 

User action | Controller action | Direction
Enable body temperature | Launch body temperature widget on-screen | RECEIVE
 | Enable the probe activity and start reading | SEND
 | Update the widget data | SEND
 | Continue updating until the user stops |

 

Every process is launched keeping direct control of its stdin, stdout and stderr, while the widget visualisations are graphic objects developed as Python scripts.
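
As a minimal sketch of this approach (the script name, the probe name and the command sent on stdin are placeholders invented for illustration, not the actual Meditech components), a child task can be spawned with subprocess.Popen while the controller keeps its standard streams:

#!/usr/bin/env python
# Minimal sketch: spawn a child task and keep control of its standard streams.
# './probe_reader.sh', 'body_temperature' and the 'START' command are
# hypothetical placeholders, not the actual Meditech components.
import subprocess

proc = subprocess.Popen(['./probe_reader.sh', 'body_temperature'],
                        stdin=subprocess.PIPE,
                        stdout=subprocess.PIPE,
                        stderr=subprocess.PIPE)

# Two-way exchange: send a request on stdin, read the answer from stdout
proc.stdin.write('START\n')
proc.stdin.flush()
answer = proc.stdout.readline()
print "Probe answered:", answer.strip()

# The controller decides when the child task has to stop
proc.terminate()
proc.wait()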

 

For subprocess details in Python, this link is a good starting point.

For the full explanation of the subprocess module, see the Python manual.

Hello, everyone!

 

I apologize for being incommunicado for so long. I was sick for a few weeks and then I had to wrap up work at the lab for the summer. Excuses, excuses, I know. Anyway, I'm here! I'll be posting a lot more as we are now in the final stretch.

 

Part of PizzaPi is hardware, but the majority of it is software, and right now I'm working on the web/mobile/tablet interface. I'm doing this using MySQL as my database, with PHP, jQuery/JavaScript, HTML, and CSS to create the web site. I'm also using Bootstrap to handle compatibility across devices, so the website will actually look great at just about any screen size.

 

Here is a screenshot of how the database is set up. I use relational database theory and link all the tables together to ensure consistency throughout the backend.

 

Relational view

Hopefully the layout is easy to understand. I plan to collect sensor data from individual orders as well as delivery time. This information will later be available for the pizza store to use to improve future orders. There are tables that deal with the pizza itself, such as the kind of sauce, crust, or toppings that a customer might want. The latter tables all link into the main pizza table and then most everything else links into the order table, known as "piorder".

 

The video is short, but I thought it might be nicer to see the website in motion rather than in screenshots. This is a static version, of course. I will be working on linking the backend to the front so that information is pulled dynamically based on the customer input. I also have drawings in my notebook that show the layout for the store's frontend, that will be coming later in the week.

 

That's about it for now. Looking forward to getting this project done!

In my previous post I showed some photos of a frame made of steel wires. I found this design to be absolutely faulty because it was not rigid enough to dampen the vibrations induced by the motors and propellers.

So I started on a new frame made of U-shaped aluminum profiles. The final result is shown here

 

IMG_20150819_172203.jpg

 

The U-shaped profiles are joined using 0.5 mm-thick aluminum sheet

 

IMG_20150819_172226.jpg

 

At the bottom, the two servos and the battery are located. The control surfaces are made with a 0.5 mm aluminum sheet

 

IMG_20150819_172253.jpg

IMG_20150819_172247.jpg

At the top is the Raspberry board. The board is mounted on a plywood base that should absorb vibrations, and is kept in place by four rubber rings (as suggested by this tutorial)

 

IMG_20150819_172307.jpg

 

Finally, here are the counter-rotating motors

 

IMG_20150819_172317.jpg

 

Now I need to mount and connect all the electronic components (ESC, BEC, receiver)

Previous posts for this project:

 

 

Project Update

 

Problem!

 

I was testing the combination of audio and light effects today, only to find out that they are affecting each other!

 

Playing audio messes up the animations and the LED strip starts blinking randomly. There is also a nasty jitter in the audio output. I don't know the exact cause of the problem, I just hope it won't affect anything else ... There's a video at the bottom of this post, demonstrating the problem.

 

Solution?

 

The current workaround involves a USB sound card. It's a super cheap, low quality sound card I got from eBay for about $2, which I had lying around. The device is plug and play, and all that is required is to specify it as the output device when playing sound files.

photo 4.JPG

 

Listing the device, ensuring it is detected properly:

 

pi@PiDesk ~ $ aplay -l
**** List of PLAYBACK Hardware Devices ****
card 0: ALSA [bcm2835 ALSA], device 0: bcm2835 ALSA [bcm2835 ALSA]
  Subdevices: 8/8
  Subdevice #0: subdevice #0
  Subdevice #1: subdevice #1
  Subdevice #2: subdevice #2
  Subdevice #3: subdevice #3
  Subdevice #4: subdevice #4
  Subdevice #5: subdevice #5
  Subdevice #6: subdevice #6
  Subdevice #7: subdevice #7
card 0: ALSA [bcm2835 ALSA], device 1: bcm2835 ALSA [bcm2835 IEC958/HDMI]
  Subdevices: 1/1
  Subdevice #0: subdevice #0
card 1: Set [C-Media USB Headphone Set], device 0: USB Audio [USB Audio]
  Subdevices: 1/1
  Subdevice #0: subdevice #0



 

It is the last one in the list, card 1, device 0. These are the two parameters required when playing out, using the "-D" option:

 

pi@PiDesk ~ $ aplay -D hw:1,0 example.wav
Playing WAVE 'example.wav' : Signed 16 bit Little Endian, Rate 44100 Hz, Stereo



 

The workaround is not that nice, but due to the time constraints, I'll stick with it for now. Another disadvantage of this solution is that it requires an extra USB port on the Pi. As I was planning on using the A+ (with a single USB port), I'll either have to use a USB splitter or move to the B+.
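
For the Python code driving the sound effects, the same device selection can be done by simply wrapping the aplay call shown above; a minimal sketch, assuming the USB card stays at hw:1,0 (the file name is just a placeholder):

# Minimal sketch: play a wave file through the USB sound card (card 1, device 0)
# by wrapping the aplay command shown earlier. 'example.wav' is a placeholder.
import subprocess

def play_on_usb(wav_path, device='hw:1,0'):
    # Blocks until playback finishes and returns aplay's exit code
    return subprocess.call(['aplay', '-D', device, wav_path])

play_on_usb('example.wav')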

 

Anyway, problem solved for now, let's hope no other surprises pop up ...

 

Video

 

A short video demonstrating the problem using the onboard audio, and the solution with USB audio.

 

Previous posts for this project:

http://www.element14.com/community/community/design-challenges/sci-fi-your-pi/blog/tags#/?tags=quadcop_project

 

YES, I am hiding some mess, OK??? The QuadCOP is a mess too, but it goes together cleanly pretty fast.

MESSY

 

Here is a quick video of some testing I am doing. This is part of the sensor array prototype. I will use an Arduino Pro Mini for the final result this week.

 

The attached file is the current schematic drawing of the 'Picorder'.


At this time, no other additions or modifications are expected to be incorporated into the device (unless something drastically fails this late in the contest). Its basic prototype design and features (as indicated in the earlier posting yesterday) are intact and functional.


I am still tweaking the scripting so that the device operates as efficiently as possible and its measurements and readings are taken and displayed legibly.

This past weekend was fairly intense with Picorder activity, as indicated by the enclosed photo gallery. Construction continues as I carve out the case by hand for the TFT display, complete with four momentary contact switches. We'll soon find out if I can make them functional in the scripting, or at the very least use them to gracefully shut down the Picorder. The sensor array is also taking shape with the physical placement. So far the flame detector, distance/echo sensor, temperature/humidity sensor, and alcohol and CO2 sensors are in place. Wiring continues and scripting is ongoing. The parts list is partially compiled as I forge forward towards the deadline.

{gallery} Picorder Construction Day 58

IMG_20150622_010053.jpg

Picorder encasement

IMG_20150622_010038.jpg

Small power board with 5vdc regulator and RPI B+

IMG_20150622_010027.jpg

Flame and distance echo sensors

IMG_20150622_010008.jpg

Front end sensor array

IMG_20150621_191455.jpg

Replacing Model B with RPI B+ for more GPIO port usage

IMG_20150621_162823.jpg

Carving the case

IMG_20150621_162835.jpg

Old-fashioned carving with hand tools

IMG_20150621_162816.jpg

Almost completed trimming

IMG_20150621_181550.jpg

Display: Mounting

IMG_20150621_181554.jpg

Taking Shape

IMG_20150621_181554.jpg

 

Carved case for TFT Display

Introduction

One way to optimize the behavior of the Meditech system is to adopt the most reliable tools, programming languages and technologies for each of the different tasks that should be accomplished. This obviously implies a multi-language environment, essential to reach the best level of simplification; for example, following this primary directive, wherever possible the MySQL database will be accessed via low-level SQL queries from bash scripts, while the hardware control software is developed in C++ with the GCC toolchain.

 

This approach may be a bit more complex than choosing a single, possibly high-level, development language. On the other hand, among the pros of this fragmented approach we should also count better cost optimization, involving hardware solutions only when it is not possible to solve the problem in software.

 

In the initial Meditech design one of the development platforms included in the project was Qt, but this directive changed, as Python proved to be the better language for the main scripts controlling the Meditech sub-processes and, with some extra effort, a good way to build the User Interface.

 

Based on these considerations, as of today the Meditech development scheme has totally excluded the Qt environment, replaced by Python alongside the Linux scripting tools, the SQL language, PHP and a few other development components and libraries.

 

Involved software components

In the scenario described above every software environment adopted in the system should be viewed as a set of one or more specialized packages. The following diagram shows the general scheme.

Meditech UI and Optimization - Tab 5.png

First of all, this methodology tends to take maximum advantage of working in a multi-task environment. As already discussed in previous posts, Meditech is a modular system using an internal set of three specialized Raspberry PI devices, plus a fourth unit dedicated to the camera features; others can be added if needed. The same vertical task approach described here has been adopted in all the devices as the software model.

In the scheme we identify different classes of applications, harmonized and integrated by an inter-process controller developed in Python.

 

The software sections in detail

 

On-demand processes

These are processes that mostly involve the network connection (e.g. launching a task on another unit of the system); they are bash scripts that launch a task when the system behavior needs it. A typical example is the TTS (Text-To-Speech) process playing a synthesized sentence in response to a command. These on-demand tasks can be called by other processes or by the main inter-process controller; their characteristic is that we should always expect an exit condition.

 

Background processes

This group includes the startup Linux services, like the peripheral controls, the Apache2 web server, the PHP engine etc. Then there are Meditech-specific processes that start when the device is powered on and run indefinitely. A typical example is the infrared controller running over the lirc service (Linux Infrared Remote Control), managing the infrared remote interface, the primary interaction method with the system.

 

Networking

Besides the OS networking services, including but not limited to the SSH server, web server, MySQL server, NTP server and more, there are other Meditech-specific networking services based on bash commands to manage some special network features like the remote database update, image streaming, continuous data processing and intra-network data exchange.

 

Database storage

This is the class of tasks related to the local MySQL database management and the remote server update (when there is an active Internet connection). Where the SQL queries are recurring tasks they are embedded in bash scripts (to simplify the calls), while in the interactive UI the database is used as a data collector and the information retrieved from the sensors is represented graphically in the on-screen widgets by Python programs using a Python SQL binding.
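
As a minimal sketch of the widget side of this flow (the MySQLdb binding, the credentials and the table/column names are assumptions for illustration only, since the post does not name the actual Python SQL library or schema):

# Minimal sketch: read the latest samples of one probe for an on-screen widget.
# The MySQLdb binding, credentials and the table/column names ('probe_data',
# 'probe', 'value', 'sampled_at') are hypothetical placeholders.
import MySQLdb

conn = MySQLdb.connect(host='localhost', user='meditech', passwd='secret', db='meditech')
cur = conn.cursor()
cur.execute("SELECT sampled_at, value FROM probe_data "
            "WHERE probe = %s ORDER BY sampled_at DESC LIMIT 20",
            ('body_temperature',))
for sampled_at, value in cur.fetchall():
    print sampled_at, value
conn.close()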

 

Hardware control

The control panel and the data acquisition from the probes are handled by programs written in C/C++ (compiled with GCC) and invoked as commands embedded in bash scripts.

 

User Interface and interactivity

The Meditech UI is developed in Python, and the visualization of the various widgets is controlled by the inter-process controller, also developed in Python.

 

Internet web access

This class of tasks is divided into a double client-server mechanism to enable remote support from any authorized remote Internet connection (a browser is sufficient). On the Meditech side, Apache2 + MySQL and the PHP engine grant remote access from the same LAN. When an Internet connection is active and the remote assistance support has been enabled by the local operator, the data are sent in real time to a cloud MySQL server. A PHP-based web site grants remote access to the data and enables chat support with the local Meditech device.

Just a quick notification that I have nearly completed the Picorder changes and schematic diagram. Both have taken me longer than anticipated. I had some obstacles to overcome with the 1.5 VDC supply for the speaker amplifier. My voltage divider needed some extra care to get the required voltage without burning out the amp itself. Final measurement and adjustment have proven positive.
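
As a reminder of the divider arithmetic involved (the actual resistor values are not listed in this post, so the figures below are only an illustrative assumption starting from the 5 VDC rail mentioned in the photo gallery): with

Vout = Vin x R2 / (R1 + R2)

a 5 V rail divided by, for example, R1 = 4.7 kΩ and R2 = 2 kΩ gives 5 x 2 / 6.7 ≈ 1.5 V, before accounting for the load the amplifier input itself places on the divider.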

 

As for the unit itself, here's a brief synopsis of the features and functions. Unlike the original tricorder, which only had the tricorder sound playing, I incorporated sounds for each of the sensor measurements just to distinguish one from the other. Later, in another video demonstration, the following will occur for each sensor measurement (a minimal sketch of this event-to-sound logic follows after the list):

 

1. While temperature and humidity are displayed on the LCD screen, the normal tricorder sound will repeat as a two-second audio clip, as it does during all other measurements.

 

2. When a flame is detected, an "automatic defense procedures initiated" audio clip will play.

 

3. When motion is detected, an audio clip of "intruder alert" will play.

 

4. When distance is measured, no audio clip will be heard except the Picorder sound clip (the original tricorder sound from Star Trek TOS).

 

5. All measurements and readings, as well as measurement statements, will continuously scroll on the LCD screen (I'm trying to have a continuous graphing display instead).
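
A minimal sketch of how this event-to-sound mapping could look in the Picorder scripting (the GPIO pin numbers, the .wav file names and the use of RPi.GPIO with aplay are my assumptions for illustration; the actual Picorder code may differ):

# Minimal sketch: poll two digital sensors and play the matching audio clip.
# GPIO pins and .wav file names are hypothetical placeholders.
import time
import subprocess
import RPi.GPIO as GPIO

FLAME_PIN = 17
MOTION_PIN = 27

GPIO.setmode(GPIO.BCM)
GPIO.setup(FLAME_PIN, GPIO.IN)
GPIO.setup(MOTION_PIN, GPIO.IN)

try:
    while True:
        if GPIO.input(FLAME_PIN):
            subprocess.call(['aplay', 'defense_initiated.wav'])
        elif GPIO.input(MOTION_PIN):
            subprocess.call(['aplay', 'intruder_alert.wav'])
        else:
            subprocess.call(['aplay', 'tricorder.wav'])   # default two-second clip
        time.sleep(0.5)
finally:
    GPIO.cleanup()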

 

Schematic posting is next.

This week I was able to make progress on being able to read the recipe files and load the data to the PLC. I also made some modifications to the OEE data collection to add a header line to the log  file.

 

PLC Recipe Files:

In order to load recipes, the operator will interact through the PanelView display. The operator chooses to load a new recipe by first selecting a recipe 'Family', then selecting the actual product. As can be seen below N10:23 and N10:24 hold the current screen that the operator is on and the next screen to load. When the operator has selected a product to load, bit B3:6/0 is set to signal that a product has been selected. The RPi is reading this bit, along with a couple of others, every 0.2 seconds. When it detects that this bit is True, it reads the product that was selected from the PLC, then reads the recipe file from the SD card on the RPi and selects the appropriate recipe data from the file. The RPi then sends this data to the PLC and when the data transfer is complete will clear B3:6/0.

 

The operator will select the recipe 'Family' as shown in the below picture. Due to corporate policy I had to distort the choices, but I think you get the point.

recipe family select

 

Once the 'Family' is selected, the actual recipe will be selected as shown in the below picture.

recipe select

Once the recipe is selected, bit B3:6/0 is set in the PLC.

plc recipe logic

 

Once the RPi recognizes the change in state of B3:6/0, the RPi will retrieve the recipe information and transmit it to the PLC.

 

recipe loaded

 

 

if (bits[1]):
    # self.loaded added so multiple uploads won't be initiated
    if (not self.loaded):
        serialLog.debug("Loading Values to PLC")
        print "Load values to PLC"
        self.loaded = True

        def clearRecipeBit(response):
            request = protectedWriteRequest(1, self.ALARMS[1], [0])
            d = self.sendRequest(request)
            d.addErrback(self.errorHandler, 'clearRecipeBit')


        def sendRecipe(recipe):
            PLCRecipe = self.config.getPLCRecipe()
            index = 1                               # Index 0 is recipe name
            var = []
            for address in PLCRecipe:
                request = protectedWriteRequest(1, address, [float(recipe[index])])
                result = self.sendRequest(request)
                result.addErrback(self.errorHandler, "sendRecipe")
                var.append(result)
            d = defer.gatherResults(var)
            d.addCallback(clearRecipeBit)
            d.addErrback(self.errorHandler, 'saving data in StartOEEData failed')
                
                
        def getRecipeValues(recipeName):
            localDir, remoteDir = self.config.getRecipeDirectories()
            filename = localDir + '/' + 'families.csv'
            fObj = open(filename, 'r')
            for recipe in fObj:
                if recipe.strip() in recipeName[0]:
                    recipeFile = localDir + '/' + recipe.strip() + '.csv'
                    fRecipe = open(recipeFile, 'r')
                    for line in fRecipe:
                        if recipeName[0] in line.strip():
                            sendRecipe(line.strip().split(','))
                            
        request = protectedReadRequest(1, 'ST15:20')
        d = self.sendRequest(request)
        d.addCallback(getRecipeValues)
        d.addErrback(self.errorHandler, 'saving recipe data')
            
    else:
        self.loaded = False





 

 

PLC Data Logging

The RPi scans the PLC every 60 seconds to retrieve OEE information that will be stored to a local file and eventually FTP'd to an OEE server. The data is stored in a logfile that rotates on a daily basis. The logfile is saved in CSV format. I added some code to the logger program to add header information to the first line of the file.

 

def _openFile(self):
    self.closed = False

    if os.path.exists(self.path):
        self._file = file(self.path, "r+", 1)
        self._file.seek(0, 2)
    else:
        config = utilities.optionReader()
        if self.defaultMode is not None:
            # Set the lowest permissions
            oldUmask = os.umask(0o777)
            try:
                self._file = file(self.path, "w+", 1)
                #write header information
                self._file.write(','.join(map(str,config.getLoggerHeader())))
            finally:
                os.umask(oldUmask)
        else:
            self._file = file(self.path, "w+", 1)
            #write header information
            self._file.write(','.join(map(str,config.getLoggerHeader())))

    if self.defaultMode is not None:
        try:
            os.chmod(self.path, self.defaultMode)
        except OSError:
            # Probably /dev/null or something?
            pass

    self.lastDate = self.toDate(os.stat(self.path)[8])




 

 

Last thing to do is integrate the MTrim serial program into the DF1 serial program to write recipe values to it.

Previous posts for this project:

 

 

Project Update

 

With the challenge coming to an end soon, my futuristic desk must be nearing completion as well.

 

I've worked on the following things this week:

  • paint screen assembly
  • install screen and Pi 2
  • attach LED strip to desk with diffuser
  • install wireless charging base for magic lamp
  • test code for lift and LED strip

 

I'm nearly there! I still need to cover the desk's surface with a sheet of white acrylic, hiding all the guts of the desk and giving it a clean, finished look. Once the code is finalised, I'll test everything and prepare for the final demonstration.

 

To summarise, the remaining to-do's are:

  • place acrylic surface
  • connect capacitive touch control to Pi
  • finalise power management
  • finalise code
  • test, test, test
  • make video and post for final demo

 

Here are some pictures of the current state of the desk. Hope you like it!

 

photo 1.JPGphoto 2.JPG

photo 3.JPGphoto (28).JPG

Hello,

 

It's been a while since I wrote an update. I have done much work on the QuadCOP. I ran into some issues with the MEMS sensor board and have abandoned it in favor of another electronic compass. So instead of being negative I went offline to work through my issues. There are many, and I tend to complain.

 

I also installed both my Raspberry Pis into my quadcopter, so I only have command line access. My main computer crashed and I only had my phone to access the internet for a month; it's very hard to make blog updates with that.

 

I promise an update this week.

 

Joey

Introduction

The most common, and probably best known, way to manage a MySQL database is the PhpMyAdmin web application. This is fine in all those cases where the MySQL database is hosted remotely on a web server, especially when the core components of the MySQL installation are managed by the hosting provider, which reserves a specific database partition.

 

Note: another good use of PhpMyAdmin is with the popular blog and CMS WordPress, where the database management can be done with a PhpMyAdmin WordPress plugin.

 

Things are slightly different when the server is a Raspberry PI, a Linux machine we can take full control of from our LAN. In this case we can adopt a more reliable solution provided directly by Oracle. All the better if the bare database management also includes query testing, faster work and a comfortable visual tool.

These and other things are possible with a free multi-platform tool provided by Oracle that may be considered the best way to access and control MySQL databases on the Raspberry PI. This Meditech project annex explains how MySQL Workbench has been used to set up and maintain the database of the Meditech project.

Screen Shot 2015-08-10 at 11.34.25.png

Accessing MySQL remotely

It is not sufficient to install and set up the MySQL database on the Raspberry PI (following the common MySQL installation procedure) to let external (remote) users access the data: the MySQL database should be explicitly enabled for remote access. For more details on the enabling procedure see the attached document.

As the database resides on the RPI master device, which should be updated by the other slave units, remote access has to be granted anyway. The procedure is quite simple and is part of the standard MySQL online documentation.

 

The MySQL database should have a user and password enabled for remote access; in our case, to make things easier, the same user/password pair used to access the Raspberry PI has been adopted. Take into account that this is NOT a regular Linux user: it is a database user with nothing to do with the operating system.

 

Connecting the workbench to the remote database

After the installation, launching the workbench, the main screen shows the possible options and the database connections with the remote devices, as in the image below.

Screen Shot 2015-08-10 at 16.22.08.png

While in PhpMyAdmin, after logging in, the user has access only to the databases he is authorized for, because that web application is part of the same MySQL installation, here things are different. It is like having an IDE that can connect to as many databases as you want, local or remote, just as if they were different projects.

The image below shows the LAN connection settings from the development Mac to the RPI master where the Meditech project database is stored. The connection parameters can be tested and, once confirmed, are permanently stored in the workbench. The database connection should be considered the entry point to the database schema we want to work with.

Screen Shot 2015-08-10 at 16.56.54.png

Every time we need to work with the database schemas (i.e. the entire MySQL architecture on the desired server: users, tables, queries etc.) it is sufficient to double-click the corresponding connection on the workbench main screen.

Screen Shot 2015-08-10 at 17.01.33.png

Once the database connection is established, the main SQL editor page is shown. The following image shows the RPI master database, where the only schema is PhpMyAdmin's, which was already installed on the Raspberry PI for testing purposes.

Screen Shot 2015-08-10 at 17.03.25.png

One of the most important differences between MySQL Workbench and PhpMyAdmin is that with the workbench we have a top-level view of the installed MySQL engine, with better and wider control over the architecture. Anyone used to managing server databases with PhpMyAdmin knows that it operates from inside the database and cannot offer this kind of overview.

 

A helpful tool set for database design

The following images show a first advantage of having full control of the MySQL engine: all the users, connections, server status and more can be checked at any moment, including a good traffic monitor while the database is running and serving other users.

Screen Shot 2015-08-10 at 17.20.27.pngScreen Shot 2015-08-10 at 17.20.13.png Screen Shot 2015-08-10 at 17.20.03.png

But I think one of the most interesting features of the workbench covers the database design aspects. After the essential components of the database have been defined - like, in this example, the PhpMyAdmin database tables - we can use it to generate and then graphically edit the data queries, table relationships and more.

Starting from the PhpMyAdmin table definitions, with a simple automated wizard, the database table structure has been extracted and rendered graphically as shown in the image below.

Screen Shot 2015-08-10 at 17.30.43.png

The database designs can be organized visually to easily create documentation, like the simple example in the attached PDF; the design is also interactive and can be used to expand the database features, complete the table relationships, create queries, procedures etc. in a comfortable visual environment.

Previous posts for this project:

 

 

Project Update

 

It's been a busy week for me, not so much for the desk though. First, I went on holiday with my wife and kids to the Belgian coast for the week, and I closed the weekend by introducing a visitor from New Zealand to Belgian beers and strolling through Brussels together with another Belgian member. First one to guess both members gets 10 points. (No real points, no prize can be claimed!)

 

Anyway, back on topic ... there have been little bits of progress on the desk. Let's go over them ...

 

Touch controls

 

The conductive pads and tracks have been laid out on the desk. All that is left to do is to hook them up to the prototyping board I made last time and hope they work as expected!

I started off by laying out the copper tracks and then drew the shapes of the contact pads using a drinking glass and a marker. Afterwards, I carefully painted the pads using conductive paint.

photo 1.JPGphoto 2.JPG

 

Power distribution

 

To simplify things, I'd like to power the entire setup from a single power supply. The two voltages required are 5V (Pi, LED strip) and 12V (Stepper motors, laptop screen), so I plan to use a single 12V power supply and have a converter to step it down to 5V for the components that require it. This also means that different splitters are required to ensure every device gets powered.

photo 3.JPG

 

Motors and endstops

 

I've mounted the motors at the bottom of the frame using some MakerBeam pieces I had around. With my printer still giving me headaches, I ended up taking a piece of wood, drilling holes for the threaded rods to fit through, and making a larger hole on the bottom side to hold the two captive nuts. First tests indicate the stepper motors have no trouble handling the weight and can lift the screen assembly as expected. I'll try to get a video out for this in the coming days.

 

The endstops have been installed on the side of the frame, the first one triggering when the screen is lowered flush with the desk, the other one when it's been raised enough. You can see both switches on the picture in the next paragraph.

photo (25).JPG
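
A minimal sketch of how the endstop could gate the lift in the Python code (the pin numbers, the step-pulse style driver and the timing are assumptions for illustration, not the actual desk code):

# Minimal sketch: step the lift motor until the upper endstop switch triggers.
# Pin numbers, wiring and timing are hypothetical placeholders.
import time
import RPi.GPIO as GPIO

STEP_PIN = 18      # step pulse to the stepper driver
TOP_ENDSTOP = 23   # switch pressed when the screen is raised far enough

GPIO.setmode(GPIO.BCM)
GPIO.setup(STEP_PIN, GPIO.OUT)
GPIO.setup(TOP_ENDSTOP, GPIO.IN, pull_up_down=GPIO.PUD_UP)

def raise_screen():
    # With the pull-up, the input reads high until the switch pulls it low
    while GPIO.input(TOP_ENDSTOP):
        GPIO.output(STEP_PIN, GPIO.HIGH)
        time.sleep(0.001)
        GPIO.output(STEP_PIN, GPIO.LOW)
        time.sleep(0.001)

raise_screen()
GPIO.cleanup()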

 

Cable management

 

Another thing I managed to do is some basic cable management, ensuring the wires of the motors and endstops are tucked away neatly. I used a plastic cable guide for that, which I attached to the side of the screen's frame.

photo (26).JPG

 

Python

 

Finally, I started merging the different bits of code I created over the course of the challenge, combining all the different features like sound effects, NeoPixel control, capacitive touch, etc. I haven't been able to test it yet; that's planned for next week. If everything ends up working as expected, I can start finalising the build and get testing and tweaking. I'm not using a particular IDE for this, as I've found the Sublime text editor to do the necessary syntax highlighting and completion.

Screen Shot 2015-08-09 at 22.00.44.png

Introduction

As Python is an interpreted language, the first temptation when developing applications is to edit the sources directly on the Raspberry PI, possibly with the help of the simple Python IDLE IDE installed by default in Raspbian. This development environment is rather primitive, while the availability of a good development environment for the Python language can be very helpful, especially if the code can be managed on the PC while being tested in real time on the target device.

This Meditech project annex explains how an efficient environment has been set up to reach this goal.

 

The role of Python in the software architecture

Focusing on the Python language, it has two advantages: it runs fast in the Raspberry PI environment and can be executed (launched) from the command line. The multi-language approach of the entire Meditech project is outlined in the following table:

 

Language / Environment | Usage
C++ | Programs and command line tools doing the hard work: communication, calculation, data processing
SQL | Direct access via queries from bash commands for internal data organization and database management
PHP | External access via the Apache2 web server supporting database integration
Bash | Pre-built commands to manage complex tasks and simplify the inter-process communication
Python | Local User Interface and on-screen real-time monitoring

 

Simplifying the Python development

As explained in Annex I, also in this case the best way to simplify the development lifecycle is to adopt an external development IDE. After several tries, I decided to use the PyCharm Community Edition IDE (the free open source version).

Screen Shot 2015-08-09 at 16.41.59.png

I should recognize that this product from JetBrains has proven to be a very efficient instrument for development in this language.

In the case of Python we do not need remote compilation, as everything runs on the Python interpreter. A potential risk factor when working with an external IDE is that the Raspbian libraries are not available on other platforms; anyway, this aspect can be ignored because the advantages are worth it. As in the case of the NetBeans IDE for C++, PyCharm is simple to download and install. An appreciable aspect is the good documentation accessible from the PyCharm IDE and the good contextual help provided, as well as the editor features supporting in-depth syntax checking and indentation control.

 

The Python development environment

The path followed to create a Python development environment is a bit different from the remote compilation setup, requiring some changes and integrations.

 

Minimal requirements

The minimal requirements on the Raspberry PI side include, as usual, SSH connectivity; when developing for a remote embedded Linux device it is essential to be able to reach the system from a terminal. The minimal settings on the PI are the following:

  • NFS server for folder sharing
  • SSH remote access (supporting the graphical environment on the PC with the -Y option, as an alternative to a headless Raspberry PI)
  • A couple of simple Bash commands to speed up the synchronization
  • The possibility to mount the Raspberry PI remote folder on the PC. Also in this case, on Mac OSX, a Bash command has been built
  • A simple command to keep the PC development folder synchronized with the Raspberry PI test folder
  • Python installed on the system (it is, by default, on Raspbian)

 

Raspberry PI configuration

The Raspberry PI configuration is really simple. The only change needed is to export the development folder through the NFS server so it is shared remotely with the PC connected on the same LAN.

 

Note: it is best practice to export the folder limited to the development PC IP address.

 

Supposing the following initial conditions,

  • The PC address on the network is 192.168.1.5
  • The Raspberry PI IP address is 192.168.1.99
  • The Raspberry PI folder to share is /home/pi/GitHub

 

execute the following command to edit the exported (shared) folders:

 

sudo nano /etc/exports

 

Then add the following line, according to your real PC and Raspberry IP addresses and the folder to share:

 

/home/pi/GitHub 192.168.1.5/0(rw,async,insecure,no_subtree_check,no_root_squash)

 

After saving the file, restart the NFS sharing service with the following command:

 

sudo /etc/init.d/nfs-kernel-server restart

 

At this point your folder can be mounted on the remote PC. This example works with a development PC based on Linux (Ubuntu, Debian etc.) or Mac OSX. For Windows, see how to share a folder from the Raspberry PI with the Samba protocol to reach the same result (the document is attached to this post).

 

PC remote folder mounting

Now that the Raspberry PI folder is shared on the LAN for the specific IP address of our development PC, we should mount the remote folder so it is available locally on the PC. To avoid repeating the mount command, it is sufficient to write a short bash script that we launch every time we need to develop with the PI. In the following example the command refers to my local development folder on the Mac, so you should adapt it to your computer's folder structure.

 

#!/bin/bash

# Mount RPI master Meditech Python repository and launch the ide.
sudo mount -o resvport,rw -t nfs 192.168.1.99:/home/pi/GitHub/meditech_python_interface meditech_python_interface/

 

At this point everything can be considered done. We simply start the PyCharm IDE on the PC and open and build projects directly in this folder. To make things more reliable, a small improvement has been made.

While the remote Python development folder is mounted as meditech_python_interface/, an identical folder named local.meditech_python_interface/ has been created on the PC. The PyCharm IDE sources are written in the local... folder and, when the Python code should run on the Raspberry PI platform, it is synchronized with the remote folder, overwriting the existing files; in this way there is a local and a remote replica of the sources, just like an anticipated backup.

Also in this case the synchronization task is simplified by a bash command written once and executed in seconds every time it is needed.

 

#!/bin/bash

# Update the remote mount files from the local development folder
sudo cp local.meditech_python_interface/* meditech_python_interface/

 

 

The Python development scenario

We can comfortably develop our Python applications for the Raspberry PI with only three windows:

 

  • The PyCharm IDE window
  • A local terminal session
  • A Raspberry PI remote SSH terminal session

 

The following image shows the PC with the Python development settings.

Screen Shot 2015-08-09 at 18.57.39.png

This post is an annex to the Meditech project explaining one of the (possible) best practices to set up an efficient development environment for C++ development on the Raspberry PI platform, with the advantage of an advanced IDE and remote compiling without emulators.

 

Why a development IDE

When C/C++ programming covers a large part of an embedded project, going far beyond the simple cut and paste of some examples, being able to work in a good development environment represents a success factor for code quality and usability; adopting a high-level programming IDE becomes a must, at least for the following reasons:

  • Availability of optimized editing tools, including language syntax-checking
  • Fast moving between sources and headers inside a well organized project
  • Easy accessibility to classes, function declarations, constants, commenting
  • Fast syntax checking and bug-tracking
  • Sources and headers organization in projects
  • Optimized compiling feedback and fast error checking
  • Local and remote sources replication in-synch

 

Note that using a PC with a high-level IDE to create code for different platforms (mostly embedded devices), where it is difficult or impossible to develop directly, is a widely diffused practice. This is the approach adopted at least for the following well-known devices:

    • All Android based devices
    • Symbian devices (still widespread in India and some countries of the global south)
    • iOS smartphones and iPad
    • Arduino
    • ChipKit and many PIC based microcontrollers
    • Many other SoC and SBC

 

These and many other factors dramatically increase productivity and the quality of the final result when working with an IDE enabled for remote compiling.

Screen Shot 2015-08-09 at 10.41.15.png

 

What IDE for the Raspberry PI

The first assumption is that the Raspberry PI Linux (here Raspbian has been used, but the concept is the same with other distributions) should not host the development environment, as it is the target of the project. So we should think of the best way to manage the code development on a PC while seeing the result in real time on the target device. In few words, we will provide a simple network connection between the Raspberry PI and the development PC; we can use WiFi, the LAN connection, the home router or any other method so that the two machines can share their resources on the network.

 

The other assumption is that, for the best result, it should be possible to compile remotely with a few simple operations, quickly checking errors and compilation results in the IDE on the PC, while running the program on the native platform.

 

The two most popular open source IDEs for multi-language development are Eclipse (http://www.eclipse.org/) and NetBeans (https://netbeans.org/). There is an alternative to remote compilation: using a cross-compiler. With particular settings it is possible to compile, on a different hardware architecture (i.e. a PC with an Intel-based CPU), code that should run on the Raspberry PI, which is an ARM-based architecture. It is a more complex way with so few advantages that, where possible, it is best to avoid this method.

 

Just an interesting note: the PC Arduino IDE represents a good effort in this direction, as does MPIDE, which also supports the ChipKit PIC-based platforms. It is a simple (and a bit primitive) IDE performing a cross-compilation of the program before uploading the binary file to the microcontroller board.

 

I usually base many of my developments on the Eclipse IDE, as this is one of the privileged Android, Java and PHP development tools. Unfortunately, after some tests I saw too many issues when trying to connect the networked Raspberry PI for remote compiling, so I adopted the NetBeans IDE, which supports a very simple setup.

 

Minimal requirements for remote compiling

There is a series of minimal requirements that should be met to remotely compile C++ programs on the Raspberry PI; most of these are obvious, but a reminder can be useful:

 

  • SSH and SFTP installed on the Raspberry PI for remote access (this option can be enabled from the raspi-config setup utility)
  • GNU compiler (better to check the system upgrade for the last version of the compiler, assembler and linker)
  • SSH access to the Raspberry PI from the PC
  • Development versions of the C++ libraries needed for your project, correctly installed on the Raspberry PI

 

NetBeans IDE setup on the PC

These notes refer to version 8.0 of the NetBeans IDE; if a different (maybe newer) version is installed you may find some minor changes in the menu settings.

To install a copy of the IDE on your PC it is sufficient to go to the NetBeans IDE platform download page and download the last available version for your platform (Mac, Windows or Linux).

Screen Shot 2015-08-09 at 13.25.12.png

The installation is simple and in most cases the default settings are all you need.

When the installation process ends, a few things should be changed from the Settings menu. Its location can vary depending on the PC platform you are using; in Mac OSX it is in the pull-down NetBeans main menu (the topmost left choice), while in Windows and Linux it may be in the File menu.

Screen Shot 2015-08-09 at 13.30.15.png

From the settings window (see the image above) you can customize all the features of the IDE, e.g. the editor behavior, source font and colors, the graphic appearance and so on. From the C/C++ options tab select the GNU compiler (it should be the default; otherwise add it, updating the IDE with the Add button on the same window).

Screen Shot 2015-08-09 at 13.34.06.png

Again from the same window you should edit the host list (by default there is only localhost, the development PC), adding the parameters to connect to the Raspberry PI:

  • Add the user (in this case the default user pi)
  • Add the password and confirm to save it, avoiding having to repeat it every time the IDE connects to the Raspberry PI
  • Insert the Raspberry PI IP address (better to set a static IP instead of a DHCP-assigned one, to avoid the IP changing the next time you reopen the IDE)
  • Specify the access mode SFTP (that is, the FTP protocol over an SSH connection)
  • Enable X11 (the Linux graphic server) forwarding

That's all!

 

Starting developing

With the IDE set up this way you can start creating a new project and write your code. On the IDE top toolbar there is a connection button to activate the remote connection with the Raspberry PI. To compile remotely from the PC you should be connected. As the device is remotely connected, NetBeans gives you the option to edit the application sources locally; when you Build the application the sources are automatically zipped, sent to the remote device (the Raspberry PI), unzipped and compiled. Errors and messages are shown in the Build result window, making debugging very simple.

 

Another very useful feature is the option to open a remote terminal directly from the IDE. In this way, as shown in the following screencast, the development lifecycle, including running and testing the program, becomes very simple and efficient with minimal effort.

This week I was able to make progress on a couple of remaining items. First, I was able to get email notifications working and secondly, I was able to get recipes downloaded. I still need to get the recipe data into the PLC, but I should finish that up shortly. All code for this project is available at https://github.com/frellwan/SciFy-Pi.git under the Serial/df1 folder.

 

Email Notification

The typical protocol for sending mail on the Internet is the Simple Mail Transfer Protocol (SMTP).  I will be using Twisted Mail to send SMTP messages when alarm conditions are detected in the PLC. I outlined the steps necessary to install Twisted in a previous post, and Twisted Mail is part of that install.

 

SMTP is a plain-text protocol. To send an email, we need to know the IP address or hostname of an SMTP server. To do this we need to look up the mail exchange (MX) servers for the hostname we will be sending mail to (e.g. ME@google.com – MX lookup would give us the hostname of the mail exchange server for google.com). Most SMTP messages are sent using port 25.

 

A simple way to do this with Twisted Mail:

 

MXCALCULATOR = relaymanager.MXCalculator()

def getMailExchange(host):
    def cbMX(mxRecord):
        return str(mxRecord.name)
    return MXCALCULATOR.getMX(host).addCallback(cbMX)




 

 

In my company I am able to send email from a fictitious email address (such as MACHINENUMBER@hostname.com) and it will be delivered without issue as long as it goes to a hostname.com email address. This allows me to have each machine send emails that will allow the recipient to know which machine is having problems. These emails lack the authentication headers of typical emails and will usually end up in a spam filter when I attempt to send to an email address outside of my company’s mail exchange hostname.

 

Sending a SMTP email is a fairly easy task with Twisted Mail:

 

def sendEmail(mailFrom, mailTo, msg, subject=""):
    def dosend(host):
        mstring = "From: %s\nTo: %s\nSubject: %s\n\n%s\n"
        msgfile = StringIO(mstring % (mailFrom, mailTo, subject, msg))
        d = defer.Deferred()
        factory = smtp.ESMTPSenderFactory(None, None, mailFrom, mailTo, msgfile, d,
                                          requireAuthentication=False,
                                          requireTransportSecurity=False)
        reactor.connectTCP(host, 25, factory)
        return d
    return getMailExchange(mailTo.split("@")[1]).addCallback(dosend)




 

 

Recipe Download

As mentioned in the project description, I will be accessing a recipe database on an external server through an FTP connection. The PLC will set a bit when the recipe files need to be downloaded from the server. When this bit is set an FTP connection is established and a file that contains the names of all the recipe files is downloaded and then each recipe name is read from the file and downloaded separately.

 

if (bits[2]):
     # self.transferred added so multiple downloads won't be initiated
     if (not self.transferred):
          #Download Recipes from Server
          serialLog.debug("Downloading Recipes")
          d = self.ftpEndpoint.connect(FTPClientAFactory())
          d.addCallback(getRecipeFiles)
          d.addErrback(self.FTPfail, 'startFTPTransfer')
          self.transferred = True
     else:
          self.transferred = False




 

def getRecipeFiles(ftpProtocol, localDir):
  def downloadRecipes(self):
       filename = localDir + '/' + 'families.csv'
       fObj = open(filename, 'r')
       for recipe in fObj:
            recipeName = recipe.strip() + '.csv'
            recipeDir = localDir + recipeName
            recipeFile = FileReceiver(recipeDir)
            d = ftpProtocol.retrieveFile(recipeName, recipeFile)
            d.addErrback(fail, "getRecipeFiles")

  # Download recipes
  familynames = localDir + '/families.csv'
  recipeFile = FileReceiver(familynames)
  d = ftpProtocol.retrieveFile('families.csv', recipeFile)
  d.addCallback(downloadRecipes)
  d.addErrback(fail, "getRecipeFiles")