
Sci Fi Your Pi

65 posts authored by: balearicdynamics

The Meditech project version 2 (in development) was explained during a one-hour talk at the last QtCon in Berlin, on September 4.

IMG_20160904_094920.jpg IMG_20160904_095102.jpg

The recorded event

Follow the link to see a recorded version of the QtCon Meditech event: Relive: Meditech: A Qt-driven OSHW device – QtCon Streaming

IMG_20160904_091654.jpg

QtCon in Berlin, September 2016. Images gallery

 

{gallery} QtCon images gallery

IMG_20160903_080604.jpg

IMG_20160903_080650.jpg

IMG_20160903_080909.jpg

IMG_20160903_081005.jpg

IMG_20160903_080942.jpg

IMG_20160903_094156.jpg

IMG_20160903_085524.jpg

IMG_20160903_092311.jpg

IMG_20160903_100706.jpg

IMG_20160903_130551.jpg

IMG_20160903_131547.jpg

IMG_20160903_133148.jpg

IMG_20160903_093947.jpg

IMG_20160903_090606.jpg

IMG_20160903_085849.jpg

Introduction

The Meditech project is moving to phase 1. This will produce the first testing version of the device, i.e. a version ready for trials with volunteers. To bring the project and the prototype to the end of this phase, a series of essential upgrades to the current version has been identified. While Meditech phase zero aimed at a fully working prototype regardless of its appearance, the phase 1 testing version will be the first fully working prototype in its intended form.

 

This introduction post outlines the first series of changes that will be applied to the current running prototype to make it fully usable in a test environment. Further improvements will be added to the internal architecture and the software components; the release of the phase 1 prototype is expected around mid-to-late October.

 

Display

The current prototype display was 15 inches, so an external support was needed. The main issues with this display are its weight and poor portability: the device can easily be put in place, but an extra transport bag is needed to carry the display, which reduces the usability of a system that should be ready to work in normal conditions in a very short time.

IMG_20150909_113052.jpg IMG_20150909_113126.jpg

How the display currently fits, and the back support.

 

The obvious solution, already mentioned in the posts related to the screen, is a smaller 10" or 7" screen that fits well in the middle of the Meditech device when it is closed. The following image shows the free space for the display, about 40x20 cm. A custom support should be created, with a mechanism to slide the screen into its usage position.

IMG_20150909_113724.jpg

 

The newly announced 7" touch screen for Raspberry Pi may be a good solution, also thanks to the good price at which it is proposed on the market.

 

Control panel cover

In the first version of the prototype the control panel cover was made from two halves of soft 2 mm plastic. The following image shows how it looks now; this simplified the placement of the components on the surface a lot (the GPS and a couple of jack plugs are still missing in the image), as the material is very easy to drill and cut.

IMG_20150909_113916.jpg

 

The next version will be precisely cut and milled from a 1 mm white aluminium plate; the material is shown in the image below. This also solves the problem of labelling the plugs and signals: a full adhesive overlay will be made and applied, using the same material used for advertising wraps on car surfaces. Given the size of the surface, this solution will be efficient at a very low cost (about 3-5 Euro).

IMG_20150909_114222.jpg

 

Cabling

As shown in the following image, the current version uses common commercial cables that are much longer than needed. A series of custom cables will be wired, saving a lot of space. The power distribution will also be improved with a better single-board circuit, and wherever possible the round cables will be replaced by flat ones.

IMG_20150909_113408.jpg

 

Internal devices and components

The internal devices will also undergo a substantial revision.

IMG_20150909_113442.jpg

  • Make the supports for the Raspberry Pi devices more reliable
  • Build a single board including all the control panel components (fan control, IR controller, LCD, LEDs)
  • Rationalise the power supply, powering the Raspberry Pi boards via the GPIO connector instead of the micro-USB plugs
  • Use flat cables wherever possible
  • Optimise the network cabling
  • Improve the door-open switch

Meditech: Thanks

Posted by balearicdynamics, Aug 28, 2015

Beyond the traditional (a bit rhetorical, yeah?) thanks, it is worth spending some words on the opportunity this challenge has represented for this project.

The Meditech idea was just an idea. I was sure it was possible, but a starting point was necessary, a sort of cooperation with someone. This is what I found here. First of all, thanks go to the entire Element14 organisation, which trusted the first proof-of-concept submission. Then my personal thanks go to all those members who provided every kind of support and helpful hints; it is impossible to mention them all here, but they know perfectly well what I mean.

 

So Meditech got the right boost, and today the first phase (formerly phase zero, codename tricorder) has been completed: the idea is now a project moving on its way. The opportunity to invest little money, and to interact with many different points of view and very different skills while the first prototype was growing from scratch, has dramatically simplified the first, and most difficult, step. The same goes for the Element14 support with the kit, which made available all the needed hardware and more.

 

Now the project has gained its own future. Today it is certain that the next phase 1 will be completed, hopefully following the expected timeline, while the preproduction of the first 10 units is a promising option (expected to be available for delivery in the first two months of 2016).

Thanks to the challenge and Element14, today Meditech is a credible idea, with growing media attention for what happens in the next months. Personally, I will continue to blog the thread with constant updates on the Element14 community, as the primary reference point for the Meditech development project.

 

That's all. Enrico

 

Meditech-1024.jpg

YellowPrinter.jpg

This is a part that has not yet been considered in the software development discussions.

 

Introduction

As mentioned in a previous post, one of the Meditech peripherals is a small Bluetooth 55 mm thermal printer, covering a fundamental role, especially in cases of first aid and urgent interventions in the field.

The procedure expected immediately after the very first aid operations, carried out with the help of the Meditech diagnostic probes, is moving the patient to an organised structure for hospitalisation or more adequate treatment.

 

As all Meditech-operated interventions are monitored and stored in a database, and every record is tagged with the absolute GPS position and a precise, synchronised date and time, it is possible to generate a short yet complete printout describing the patient's health status, which will follow him during transport.

 

The first and most important use of this information is to give a summarised set of data, in a human-readable form (fast and reliable), to the doctors and the specialised team that will support the patient.

The secondary but no less important reason is to keep a documented paper track of the procedure followed by the first-aid personnel and the strategy adopted: whether the patient has been moved, where he was recovered, and so on. In any case, this track represents a testimonial document (which can be further integrated with the more complete diagnostic information stored in the Meditech database) following the patient along his path.

 

There is also another important factor to consider: the availability of a printer can be extremely helpful to produce, at any moment, specific diagnostic data (e.g. the ECG track, the statistical results of long-period monitoring, etc.). The availability of this information on traditional paper support can be a key factor in certain conditions.
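As a minimal illustration of this printout idea, assuming a hypothetical record layout (the field names below are placeholders, not the actual Meditech database schema), a stored record tagged with GPS position and timestamp could be rendered like this:

```python
# Illustrative sketch only: field names are hypothetical placeholders,
# not the actual Meditech database schema. Every stored record carries
# the GPS position and a synchronised timestamp, from which a short
# human-readable summary can be rendered for the thermal printer.
from datetime import datetime

record = {
    "patient_id": "P-0042",
    "timestamp": datetime(2015, 8, 28, 10, 15),
    "gps": (12.6892, -2.1925),        # latitude, longitude
    "heart_rate_bpm": 78,
    "body_temp_c": 36.9,
}

def printout_summary(rec):
    """Render a record as a short plain-text summary for the printout."""
    lat, lon = rec["gps"]
    return (
        "MEDITECH FIRST-AID SUMMARY\n"
        + "Patient:    %s\n" % rec["patient_id"]
        + "Time:       %s\n" % rec["timestamp"].strftime("%Y-%m-%d %H:%M")
        + "Position:   %.4f, %.4f\n" % (lat, lon)
        + "Heart rate: %d bpm\n" % rec["heart_rate_bpm"]
        + "Body temp:  %.1f C\n" % rec["body_temp_c"]
    )
```

The same summary string can then be sent to the printer, with any formatting escape sequences appended where needed.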

 

Printing more than text-only data

The Meditech Bluetooth printer works with the widely adopted ESC protocol, or Epson Escape, used for years by almost all thermal roll printers (also called receipt printers). This protocol was created by Epson around the end of the 1980s and is currently one of the most reliable methods used by thermal printers. For more details on how the ESC protocol works, see the related page on Wikipedia. The full (and somewhat redundant) protocol specification from Epson is detailed in the attached PDF document.

While the protocol became a market winner because of its simplicity, its use is not as simple as it seems; as a matter of fact, every control command must be sent to the printer in the form of an escape sequence, resulting in complex and difficult-to-debug code. Another issue when approaching this printing protocol directly is its rigidity: every kind of string should be managed properly, taking into account that not all printers, even those supporting the same protocol, behave the same, and some control codes that work fine on one device can produce unexpected effects on another.

 

Making a protocol parser

To solve the problem once and for all, a protocol parser has been implemented, where every command is converted into a simple function call according to the following rules:

 

  1. Commands never generate printing mistakes or errors: if the required parameters are incomplete and it is not possible to apply a default value, the command should have no effect on the printout
  2. Commands should never send wrong data to the printer
  3. Commands should be callable in-line
  4. Commands should always return the expected value or an empty string (never NULL)
  5. Every command function should perform a complete consistency check to avoid sending wrong escape sequences to the printer.

 

The resulting printing mechanism is dramatically simplified, enabling the program to work with strings to which the control codes are simply appended. For example, to print text in bold there is the boolean call Bold( [ [true], false] ), returning the correct sequence to enable or disable the bold character: the string

 

"This is a BOLD test"

 

can be done as

 

"This is a " + protocolClassInstance.Bold(true) + "BOLD" + protocolClassInstance.Bold(false) +" test";

 

The same method applies to all the commands. The following scriptlet shows the protocol class ESC_Protocol with the full available API; the complete code will be posted in the GitHub repository.

 

class ESC_Protocol {

  public:
  int charSet;
  bool underline;
  bool bold;
  bool strong;
  bool reverse;
  bool bigFont;
  bool doublePrintHeight;
  bool doublePrintWidth;
  bool bitmapHighDensity;
  bool printHRI;
  int printHRIStyle;

  ESC_Protocol(void);
  ESC_Protocol(bool, bool, bool, bool);

  char* ResetPrinter();
  char* Bold(bool);
  char* CustomEsc(int[], int);
  char* Underline(bool);
  char* Underline(int);
  char* Reverse(bool);
  char* PrintAndFeedLines(int);
  char* EndParagraphLines(int);
  char* PrintAndFeedDots(int);
  char* CharTypeFace(int);
  char* HriTypeFace(int);
  char* CharBoundary(int, int);
  char* CharAttributes(int, bool, bool, bool, bool);
  char* AbsolutePosition(int);
  char* PrintingAreaWidth(int);
  char* CharacterScale(int, int);
  char* SelectPaperSensorStop(bool, bool);
  char* SetPanelButtons(bool);
  char* HorizontalTab();
  char* RelativePosition(int, bool);
  char* pagemodeAbsolutePrintPosition(int);
  char* pagemodeRelativePrintPosition(int, bool);
  char* DefaultLineSpacing();
  char* pagemodeFormFeed();
  char* pagemodePrintPage();
  char* NewLine();
  char* LineSpacing(int);
  char* RightCharSpacing(int);
  char* CharSpacing(int);
  char* HriPrintingPosition(int);
  char* BarcodeHeight(int);
  char* BarcodeWidth(int);
  char* Barcode(int, char*);
  void setDefaultSettings();
  void setBitmapDensity(int);
  void setCharAttributes(bool, bool, bool, bool, bool);
  void setCharTypeFace(int);
  void setCharBoundary(int, int);
  void setDotSpacing(int);
  void setLineSpacing(int);
  void setCharSpacing(int);
  char* getBoundary();
  char* getPrintableString(char*);
  char* getBitmapHeader(int, int, int);
  char* UserCharacterSet(bool);
  char* SetHorizontalTabs(int[]);
  char* DoubleStrike(bool[]);
  char* pagemodeSetPageMode();
  char* InternationalCharacterSet(int);
  char* pagemodeStandardMode();
  char* pagemodePrintDirection(int);
  char* pagemodePrintingArea(int, int, int, int);
  char* Rotate90(bool);
  char* SetMotionUnits(int, int);
  char* Justify(int);
  char* OpenCashDrawer(int);
  char* OpenCashDrawer(int, int, int);
  char* CharacterCode(int);
  char* CutPaper(int);
  char* UpsideDown(bool);
  char* LeftMargin(int);
  char* KanjiPrintMode(bool, bool, bool);
  char* SelectKanji();
  char* CancelKanji();
  char* KanjiUnderline(int);
  char* KanjiCharacterSpacing(int, int);
  char* KanjiQuadMode(bool);
  char* stringForPrinter(int[], int);
  char* pagemodeCancelPrintData();
  char* DoubleStrike(bool);
  char* Start();
  char* Max_Peak_Current_324(int);
  char* Max_Speed_324(int, int);
  char* Intensity_324(int);
  char* Status_324();
  char* Identity_324();
  char* Set_Serial_324(int);
  char* EOP_Opto_Type_324(int);
  char* EOP_Opto_Calib_324(int, int);
  char* EOP_Opto_Param_324();
  char* EOP_Opto_CurrLev_324();
  char* Save_User_Param_324();
  char* Factory_Default_324();
  char* Loading_Pause_324(int);
  char* Loading_Length_324(int, int);
  char* Loading_Speed_324(int, int);
  char* Historic_Heat_324(bool);
  char* Msk_App_324(int, int);
  char* Near_EOP_Presence_324();
  char* Near_EOP_Opto_Calib_324();
  char* Near_EOP_Status_324();
  char* Near_EOP_Opto_Curr_Lev_324();
  char* Internal_Font_324(int);
  char* Max_Columns_324(int);
  char* Text_Line_Rotate_324(bool);
  char* Paper_Forward_324(int);
  char* Paper_Backward_324(int);
  char* Graphic_Offset_324(int, int);
  char* Graphic_Print_324(int, int, int, char);
  char* Partial_Cut_324();
  char* Full_Cut_324();
  char* Barcode_Rotate_324(bool);
  char* Mark_Length_324(int);
  char* Tof_Position_324();
  char* Mark_To_Tof_Position_324(int, int);
  char* Opto_Head_Line_Len_324(int, int);
  char* Mark_To_Cut_Position_Len_324(int, int);
  char* Head_Dot_Line_ToCut_324(int, int);

};
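To make the rules above concrete, here is a minimal Python sketch (the project class itself is written in C++) of rule-compliant command functions. The two escape sequences used, ESC E n for bold and ESC - n for underline, are standard ESC/POS commands, but the function names and checks are illustrative only:

```python
# Illustrative Python sketch of the parser rules above (the real project
# implementation is the C++ ESC_Protocol class). Each command returns
# either a valid escape sequence or an empty string -- never None -- so a
# wrong parameter can never reach the printer (rules 1, 2, 4 and 5).

ESC = "\x1b"

def bold(enable):
    """ESC E n: emphasised mode on/off; reject non-boolean parameters."""
    if not isinstance(enable, bool):
        return ""                     # rule 1: no effect on the printout
    return ESC + "E" + ("\x01" if enable else "\x00")

def underline(dots):
    """ESC - n: underline of 0, 1 or 2 dots; any other value is invalid."""
    if dots not in (0, 1, 2):
        return ""                     # rule 5: never send a wrong sequence
    return ESC + "-" + chr(dots)

# Rule 3: commands are appended in-line while building the print string
line = "This is a " + bold(True) + "BOLD" + bold(False) + " test"
```

With this approach, an invalid call such as `bold("yes")` simply contributes nothing to the output string, instead of corrupting the printout.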


As the glucose measurement probe to be used as a reference was not delivered in the expected time (it arrived just a couple of days ago), it has been moved to Meditech phase 1. Nevertheless, it is worth starting to explain the adopted principle and the kind of analysis performed.

The following image shows a general view of the reference device. On the market, if you are not diabetic, it costs almost 2/3 of the entire cost of a complete Meditech unit, so I had to spend some time finding a way to get a fully working device and accessories with a smaller investment (about 100$).

IMG_20150825_221731.jpg

The analyser (the biggest device in the image above) calculates the glucose percentage in the blood (a small drop of blood from a finger is needed, but we can consider this non-invasive). The small needle is driven by a simple mechanical device that "shoots" the needle a few mm into the skin. The needles are single-use sterile parts that act as the "bullet" of the mechanical device.

 

Usage steps

Glucose measurement in diabetic patients should be done frequently, several times a day; usually they measure their blood glucose value on their own with a similar device when eating, to self-calculate the insulin quantity they should inject to compensate for their condition. The Meditech glucose probe will replace the measurement device (providing more complete information) and should adopt the same measurement methodology. This makes it simple to use the needles and chemical reagents already available on the market and distributed worldwide; they can be found anywhere without difficulty.

 

The first step is producing a blood drop with the mechanical "shooting" needle, usually on a finger. Depending on the age of the patient and his body characteristics, the needle pressure can be regulated to minimise the pain (which is in any case almost null). See the detail in the image below.

IMG_20150825_221859.jpg

 

The second step is placing the drop of blood on the reactive test strip, another single-use part: a small, chemically treated electrode, as shown in the detail of the image below.

IMG_20150825_222026.jpg

Then the reading procedure starts. As mentioned, the Meditech glucose measurement circuit will respect the reactive test strip size and contact positions, and the accessory components (available in any pharmacy as spare parts for a few dollars) will be the same, to guarantee full compatibility of the methodology for two reasons: better comparison testing with volunteers and the best standardised compatibility with commercial devices.

 

The Meditech Glucose probe

The approach followed by the Meditech glucose measurement probe will be based on the standard test strip terminals, as shown in the PCB terminal layout example below:

Screen Shot 2015-08-27 at 10.02.10.png

 

 

The sensor circuit schematics using the pre-built test strips will be fairly simple:

Screen Shot 2015-08-27 at 10.03.57.png

The blue blocks of the circuit are fairly common parts based on a low-pass filter, similar to the one already used for the heart beat sensor and the ECG, while the green blocks are a specific current-to-voltage converter IC from Freescale that has been adopted by many other similar measurement devices.
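As a rough illustration of this measurement chain, the following Python sketch recovers the strip current from the converter output voltage and maps it to a glucose value with a linear calibration. All the numeric constants are illustrative placeholders, not the actual circuit or calibration values of the Meditech probe:

```python
# Hypothetical sketch of the glucose measurement chain: the test strip
# produces a small current proportional to the glucose concentration;
# the current-to-voltage converter turns it into a voltage read by the
# BitScope analog channel. All constants below are illustrative only.

FEEDBACK_OHM = 100e3              # assumed converter feedback resistance

def strip_current_uA(adc_volts):
    """Converter output voltage back to strip current, in microamps."""
    return adc_volts / FEEDBACK_OHM * 1e6

def glucose_mg_dl(current_uA, slope=35.0, offset_uA=0.5):
    """Assumed linear calibration from strip current to glucose value."""
    return max(0.0, slope * (current_uA - offset_uA))

reading = glucose_mg_dl(strip_current_uA(0.35))   # 0.35 V at the ADC
```

In a real probe, the slope and offset would come from the calibration data of the specific test strip batch, exactly as commercial meters do.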

Case design improvements

The Meditech eye probe camera is conceived to work near the main device for easier usage. In the case prototype, the camera position, with its RGB light ring (see the image below), is fixed, but it needs to be able to rotate and should fold away when the module is not in use, to fit better in the component side of the case. This means redesigning this part of the case to add these two movements. As this part is only the cover of the camera flat cable, it does not affect the circuitry and connections.

IMG_20150613_190335208.jpg IMG_20150613_182805056.jpg

Adding vision disease tests

Together with the active tests that can be done with the camera probe, there is a series of tests, based on the screen and special coloured patterns, that can investigate potential vision diseases of the patient. The following images are two examples of the patterns used to generate VEPs (Visually Evoked Potentials), which can be used to record iris contraction and other secondary parameters, avoiding one of the usually associated analyses, the EEG, which needs specialised personnel and cannot be applied in the conditions typically expected for the Meditech device.

Screen Shot 2015-08-26 at 00.29.01.png Screen Shot 2015-08-26 at 00.30.11.png

Another series of tests that should be implemented with the exclusive use of the main colour display are the visual patterns to detect colour blindness.

Ishihara_compare_1.jpg

There is a consideration about this kind of test. While apparently these tests are useful only in certain kinds of analysis and conditions, i.e. during a visit in a hospital, things are actually different. If we consider the kind of environment the Meditech device is intended for, all the possible simple yet reliable diagnostic evaluations may be useful for at least two reasons:

  1. Extend as much as possible the application possibilities offered by the device, increasing its versatility.
  2. Give the medical operator the option to produce a patient investigation that is fast, simple, and as complete as possible.

Introduction

The Meditech project is closing its first part. Starting from this post, a series of reminders and informative documents will follow, focusing on the state of the art of the project at this date and on what is planned for the next two steps, further deadlines, etc. The Meditech development lifecycle, from the initial concept up to the product available on the market, passes through three phases; the scheme below is a short reminder of what should be expected:

Project lifecycle - Tab 7.png

According to the scheme above, the most complex and longest step was phase 0 (started from scratch). The end of this first part is still far from the end of the entire project, which requires some more months of work. At the current date, there is already a written agreement with a public hospital in Nigeria and in Nanoro (Burkina Faso), and some other places I am discussing with.

 

Phase zero: state of the project point by point

The next posts will show in detail those aspects not yet documented in the previous articles. The following is a list of the tasks expected in the first phase of the project and their current status.

 

  • Container and internal architecture: main hardware components and task distribution mainline - Done
  • Components connection: internal wired network approach, all the networking components and settings - Done
  • Powering system - battery operated: the initial design included a battery supply system, which has been excluded from the first model - Cancelled
  • Powering system - AC power: the current model is powered by an ATX-like power supply unit working at 120-240 VAC - Done
  • Networking final configuration: the final networking configuration works on a double network: an internal network bridged to an external network for Internet access - Done
  • Internal web server and database: the database architecture and internal web server (Apache2 + PHP + MySQL) have been set up and tested for responsiveness - Done
  • User interface and controller: the environment has been set up and tested with a custom hardware interface (software and electronics) - Done
  • TTS support for easier user interactivity: the Text-To-Speech support is part of the UI and is integrated in the Meditech control panel - Done
  • Printing support: the remote printing support, with the related Bluetooth control software and the management of the printer's standard ESC/POS protocol, has been developed and tested - Done
  • Medical probes: some of the probes have been fully tested, while others are still under parameter comparison - Details below
    • Heart beat: filter circuitry done, tested and compared with an assessed device
    • Human high-precision temperature measurement: analog reading through the BitScope analog channel. Under testing for continuous reading against an assessed device; not yet disclosed.
    • Blood pressure digital sphygmomanometer: not yet disclosed; will be tested against an assessed device with about 30-50 different volunteers (documented)
    • Microphonic stethoscope: probe and electronic filtering developed and tested. Auscultation data are recorded and can be streamed to the remote support if needed
    • Glucose sensor: the sensor electronics is based on the same (similar) principle used in a commercial product, which I received late, just a couple of days ago. The probe is under testing, with some comparative analysis with volunteers.
    • Eye probe camera and variable light: this probe, with all its electronics, has already been tested successfully. Some further uses in the vision field have also been identified, and their implementation is under test.
    • Microscope camera: the microscope camera has been tested for body surface image analysis (skin, insect bites, rashes, etc.)

Introduction

The Python language, with some improvements, has definitely been adopted to manage the UI, replacing the initial idea of using Qt, for two reasons: development optimisation and architecture simplification. Unfortunately, as often happens, making things simple is not so simple.

 

Exploiting the features of the Linux graphic interface

pygtk-splash.jpg

Together with Python there is a very useful library interfacing the language with the standard features natively available in the Raspbian desktop: PyGTK.

This library allows creating almost any graphical user interface in Python using GTK+. This means it is possible to create multi-platform visual applications based on the graphic features and performance of the GNOME desktop.

The resulting program runs easily with good performance, without further intermediate graphic components.

Another advantage is that all UI applications developed in PyGTK inherit the GTK desktop theme, adapting to any supported environment.

 

As with all Python libraries, integrating PyGTK in a Python program is quite simple:

 

#!/usr/bin/env python

import sys
try:
    import pygtk
    pygtk.require("2.0")
except ImportError:
    pass
try:
    import gtk
    import gtk.glade
except ImportError:
    print("GTK Not Available")
    sys.exit(1)

class HelloWorldGTK:
    """This is a Hello World GTK application"""

    def __init__(self):
        # Load the Glade file describing the UI
        self.gladefile = "HelloWin.glade"
        self.wTree = gtk.glade.XML(self.gladefile)


if __name__ == "__main__":
    hwg = HelloWorldGTK()
    gtk.main()

 

A simple GTK Window with Python

The following scriptlet shows the creation of a simple window using PyGTK:

#!/usr/bin/env python

# example base.py

import pygtk
pygtk.require('2.0')
import gtk

class Base:
    def __init__(self):
        self.window = gtk.Window(gtk.WINDOW_TOPLEVEL)
        self.window.show()

    def main(self):
        gtk.main()

if __name__ == "__main__":
    base = Base()
    base.main()

 

This source is very simple, generating a small window on the screen as shown below:

Screen Shot 2015-08-22 at 00.50.26.png

Something more complex

Using the PyGTK API, we can try a more complex example:

#!/usr/bin/env python

# example table.py

import pygtk
pygtk.require('2.0')
import gtk

class Table:
    # Our callback.
    # The data passed to this method is printed to stdout
    def callback(self, widget, data=None):
        print "Hello again - %s was pressed" % data

    # This callback quits the program
    def delete_event(self, widget, event, data=None):
        gtk.main_quit()
        return False

    def __init__(self):
        # Create a new window
        self.window = gtk.Window(gtk.WINDOW_TOPLEVEL)

        # Set the window title
        self.window.set_title("Table")

        # Set a handler for delete_event that immediately
        # exits GTK.
        self.window.connect("delete_event", self.delete_event)

        # Sets the border width of the window.
        self.window.set_border_width(20)

        # Create a 2x2 table
        table = gtk.Table(2, 2, True)

        # Put the table in the main window
        self.window.add(table)

        # Create first button
        button = gtk.Button("button 1")

        # When the button is clicked, we call the "callback" method
        # with a pointer to "button 1" as its argument
        button.connect("clicked", self.callback, "button 1")


        # Insert button 1 into the upper left quadrant of the table
        table.attach(button, 0, 1, 0, 1)

        button.show()

        # Create second button

        button = gtk.Button("button 2")

        # When the button is clicked, we call the "callback" method
        # with a pointer to "button 2" as its argument
        button.connect("clicked", self.callback, "button 2")
        # Insert button 2 into the upper right quadrant of the table
        table.attach(button, 1, 2, 0, 1)

        button.show()

        # Create "Quit" button
        button = gtk.Button("Quit")

        # When the button is clicked, we call the main_quit function
        # and the program exits
        button.connect("clicked", lambda w: gtk.main_quit())

        # Insert the quit button into both lower quadrants of the table
        table.attach(button, 0, 2, 1, 2)

        button.show()

        table.show()
        self.window.show()

def main():
    gtk.main()
    return 0      

if __name__ == "__main__":
    Table()
    main()

This code will generate the window shown in the image below

Screen Shot 2015-08-22 at 00.56.34.png

When we press buttons 1 and 2, we see the corresponding message on the terminal; then, pressing the Quit button, the program ends.

Hello again - button 1 was pressed
Hello again - button 2 was pressed

 

The PyGTK library also includes the API to set callback functions, associate methods with buttons, and so on: a complete manager of the visual interaction. Unfortunately, even for a simple application (three buttons with their callbacks inside a standard window) we have to write a lot of code. As a matter of fact, every graphic option, button, icon and detail must be written in Python, calling the proper PyGTK API.

 

Separating the design from the code

The solution that makes things easier is to separate the user interface design from the code. To reach this goal we adopt a technique very similar to the one used in Android applications, keeping the object design, in XML format, apart from the PyGTK Python code.

When it starts, the Meditech Python controller shows the main Meditech logo on the screen while managing the inter-process communication. To reach this result, the background image has been created:

MeditechBackground.jpg

Then a special window has been defined in a separate XML file, MeditechInterface2.glade, as shown below:

<?xml version="1.0" encoding="UTF-8"?>
<interface>
  <!-- interface-requires gtk+ 3.0 -->
  <object class="GtkWindow" id="MeditechBackground">
    <property name="visible">True</property>
    <property name="sensitive">False</property>
    <property name="can_focus">False</property>
    <property name="halign">center</property>
    <property name="valign">center</property>
    <property name="title" translatable="yes">Meditech 1.0Beta</property>
    <property name="resizable">False</property>
    <property name="modal">True</property>
    <property name="window_position">center-on-parent</property>
    <property name="default_width">1024</property>
    <property name="default_height">1080</property>
    <property name="hide_titlebar_when_maximized">True</property>
    <property name="type_hint">desktop</property>
    <property name="skip_taskbar_hint">True</property>
    <property name="skip_pager_hint">True</property>
    <property name="accept_focus">False</property>
    <property name="focus_on_map">False</property>
    <property name="decorated">False</property>
    <property name="deletable">False</property>
    <property name="gravity">center</property>
    <property name="has_resize_grip">False</property>
    <property name="mnemonics_visible">False</property>
    <property name="focus_visible">False</property>
    <child>
      <object class="GtkImage" id="background">
        <property name="width_request">1024</property>
        <property name="height_request">768</property>
        <property name="visible">True</property>
        <property name="sensitive">False</property>
        <property name="can_focus">False</property>
        <property name="xalign">0</property>
        <property name="yalign">0</property>
        <property name="pixbuf">images/Meditech-1024.jpg</property>
      </object>
    </child>
  </object>
</interface>

This is a window where many parameters differ from the defaults: there are no decorations, the window is not resizable, the image is centered, both the window and the image are expanded over the entire screen, and more. Designing the UI separately has dramatically simplified the Python code, where the entire UI definition is reduced to a single line, as shown below.

import sys
try:
    import pygtk
    pygtk.require("2.0")
except:
    pass
try:
    import gtk
    import gtk.glade
except:
    print("GTK Not Available")
    sys.exit(1)

class MeditechMain:

    wTree = None

    def __init__( self ):
        #  ============================ XML with the UI design definition
        builder = gtk.Builder()
        builder.add_from_file("MeditechInterface2.glade")
        # ============================
        builder.connect_signals(self)
        self.window = builder.get_object("MeditechBackground")
        self.window.fullscreen()
        self.window.maximize()
        self.window.set_keep_below(True)
        self.window.set_deletable(False)
        # self.window.show_all()

        # self.image = builder.get_object("Background")
        # self.image.show()

def main():
    gtk.main()

if __name__ == "__main__":
    mainClass = MeditechMain()
    main()

 

What makes the difference is the call add_from_file("MeditechInterface2.glade"), which loads the XML file. Obviously the PyGTK APIs remain available and can be used in the program to make changes and adaptations to the initial UI.

 

Making the design simple

It is almost intuitive that defining the UI components by hand in XML is not simple. It is also obvious that this separation between design and code has another great advantage: we can retouch and adjust design issues without changing the code.

The UI design XML file has the glade extension because of the name of the graphic IDE we are using to create the design. Again, this strategy is reminiscent of the Android UI design approach.

Screen Shot 2015-08-22 at 01.36.13.png

The Glade IDE makes all the GTK components available for designing the UI, showing them as they will appear at runtime; when the design is saved it generates the glade XML file to be used in the PyGTK Python application. Details on the installation and usage of the Glade IDE can be found at glade.gnome.org

Introduction

Meditech should be something simple. Simple to use, addressed to non-expert IT users, as autonomous as possible, able to help the operator, usable with few buttons (= NO KEYBOARD REQUIRED) and much more. This is a must above all the other possible features.

 

The equation is simple: the user should see Meditech as a tool, regardless of what it contains. Power on the device, wait until it says Ready, then his skill and knowledge should be focused - maybe exclusively focused - on the use of the probes. This means setting the body temperature sensor in the right place, setting the ECG electrodes in the right place, knowing whether the data indicate a possible disease or not.

 

Every user-side simplification, as any developer knows, corresponds to a meaningful complexity increase in the back end of the system. But this is the only way Meditech can be really usable in the non-conventional operating conditions it is expected to work in. This means that there are no excuses.

 

Summarizing the architectural simplifications applied, we can highlight:

 

  • A simple numeric IR controller (like a TV remote) manages the entire system
  • TTS (Text-To-Speech) audio feedback spares the user from reading status changes, confirmations and usage guides
  • No keyboard is needed for normal usage
  • The screen shows only the essential information: no inactive windows, no long messages to read, no floating windows
  • The Linux desktop and menu bars are hidden on startup

 

The main controller strategy

One of the key concepts of Meditech is the modular system; this goal is reached because every component works as a vertical, independent task solver: if a component is not detected or stops responding for some reason, its features are simply excluded. As a matter of fact, two of the three internal Raspberry PI devices are sufficient to keep the system running. This obviously assumes the essential peripherals are present: audio card, network switch, control panel and LCD display. So, ignore the question "what happens if it explodes" and similar.

 

The "glue" keeping everything together and distributing the tasks correctly, depending on the user requests, is the main controller, a sort of semaphore component created in Python. So what is the best place for this application to reside? On the background. Not running in the background, but on the background graphic component, that is the minimal view shown on the screen while the system is powered on.

IMG_20150816_182602.jpg

So, the image above shows the face of Meditech just after power-on. As the entire system is controlled through the infrared controller, until the system is put in maintenance mode and connected to a mouse and keyboard there is no way to enter a deeper level of the system. And there is no reason to.

For obvious reasons the current development version of the prototype does not start automatically on boot, but the release version will start directly with the standard interface.

 

Controlling the system behaviour

 

At this point there is another important question to be answered: how to manage the Meditech architecture data flow?

 

As the user interface manager is the task nearest to the user interaction, it is also the best candidate to manage the entire data flow and task execution. When Meditech is powered on, this is the process that is started automatically together with the infrared controller application. The interaction module sends commands and requests directly to the process controller, which shows the background main User Interface. At this point the system is ready to manage all the parts from a single asynchronous controller.
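As a minimal illustration of this single-asynchronous-controller idea, the sketch below queues the requests coming from an interaction module and dispatches them one at a time. All names here are hypothetical stand-ins, not the real Meditech code:

```python
import queue
import threading

# Requests from the interaction module (e.g. the IR controller) land here.
requests = queue.Queue()

def interaction_module():
    # Stand-in for the infrared controller application: it would normally
    # translate remote-control key presses into commands for the controller.
    for cmd in ("enable_temperature", "shutdown"):
        requests.put(cmd)

handled = []
sender = threading.Thread(target=interaction_module)
sender.start()
while True:
    cmd = requests.get()      # block until a user request arrives
    if cmd == "shutdown":
        break
    handled.append(cmd)       # the real controller would launch a widget here
sender.join()
print(handled)
```

The real process controller would replace the `handled.append` step with launching the corresponding probe widget or background task.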

 

Note that the graphic view for every status, informational window, graph etc. is provided by independent Python widgets launched and stopped by the process control, as is the activation of the probes. On the other side, the background processes are ready to receive the probe information, collect the data and store them in the database. As the remote access to the information is under the user control, the activation of the Internet transmission to the remote server, over a dependent MySQL database, is also managed by the process controller.

 

The following scheme illustrates how, starting from the user interaction and direct feedback, the system works and reacts to the requests.

 

Main controller - Tab 6.png

Controlling processes with Python in practice

As in almost any language, also in Python it is possible to launch external tasks, i.e. bash scripts, programs or other Python scripts. The problem arises when the Python process control application should act as a semaphore (also in the traditional IT sense). The choice of Python - it should be remarked - instead of something more low-level depends on the graphic capabilities of the language; in this way it is possible to integrate both the main UI visualisation and the process control, saving one more task running inside the RPI master device.

 

The approach in Python is rather simple: there are three different instructions able to launch external programs. The first (and probably most immediate) approach is the use of an os call delegating another process to the operating system; it does not matter how that command was built, in C++ or any other compiled language, Java or bash. So, for example:

 

import os

# Run a command and read the first line of its output;
# 'command' is a placeholder for the real external program.
output = os.popen('command').readline()
print(output)






This is the first method, and it should be avoided due to the complexity of managing the return values. The other potential issue is that with the os.system() call Python passes the control of the entire process - the one we want to control - to the operating system.

What has been demonstrated to be the right approach instead is the use of the subprocess module. Just like the Python multiprocessing module manages internal multi-threading, in a similar way subprocess can spawn different processes giving more control to the calling application, in a simpler way. So it is possible - and that is our case - to manage the following architecture:


  1. Start the main UI + process control
  2. Start the IR controller that sends the user requests, commands etc. back to the controller
  3. At this point the process is fully operational

 

All the other processes launched by the controller start something in the system by launching a bash script or a C++ command. Every process maintains a two-way data exchange with the controller, which is thus able to decide what is working in a certain moment and what is not. A typical procedural approach can be the following:

 

User action                Controller action                              Direction
Enable body temperature    Launch body temperature widget on-screen       RECEIVE
                           Enable the probe activity and start reading    SEND
                           Update the widget data                         SEND
                           Continue updating until the user stops

 

Every process is launched keeping direct control of its stdin, stdout and stderr, while the widget visualisations are graphic objects developed as Python scripts.
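A minimal sketch of this pattern with the subprocess module follows; the child command is only a stand-in for a real probe-handling script, and the exchanged messages are hypothetical (Python 3 syntax):

```python
import subprocess
import sys

# Launch a child process keeping direct control of its stdin/stdout/stderr.
# The child here just echoes its input in upper case, standing in for a
# real probe-handling script.
proc = subprocess.Popen(
    [sys.executable, "-c", "print(input().upper())"],
    stdin=subprocess.PIPE,
    stdout=subprocess.PIPE,
    stderr=subprocess.PIPE,
    universal_newlines=True,
)

# Two-way data exchange: send a request on stdin, read the reply on stdout.
out, err = proc.communicate("enable body temperature\n")
print(out.strip())      # -> ENABLE BODY TEMPERATURE
print(proc.returncode)  # -> 0 when the child exited cleanly
```

Unlike os.system(), the calling application keeps the pipes and the exit code of the child, so the controller can decide at any moment what is running and what is not.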

 

For an introduction to subprocess in Python, this link is a good starting point.

For the full explanation of the subprocess module, see the Python reference manual.

Introduction

One way to optimize the behavior of the Meditech system is to adopt the most reliable tools, programming languages and technologies depending on the different tasks to be accomplished. This obviously implies a multi-language environment, essential to reach the best simplification level; for example, following this primary directive, wherever possible the MySQL database is accessed via low-level SQL queries with bash scripting techniques, while the hardware control software is developed in C++ with GCC.

 

This approach may be a bit more complex than the choice of a unified, possibly high-level, development language. On the other hand, among the pros of this fragmented approach we should also include a better cost optimization, involving hardware solutions only when it is not possible to solve the problem in software.

 

In the initial Meditech design one of the development platforms included in the project was Qt, but this directive later changed, as Python proved to be the better language to develop the main scripts controlling the Meditech sub-processes and, with some extra effort, a good way to design the User Interface.

 

Based on these considerations, as of today the Meditech development scheme has totally excluded the Qt environment, replaced by Python alongside the Linux scripting tools, the SQL language, PHP and a few other development components and libraries.

 

Involved software components

In the scenario described above, every different software environment adopted in the system should be viewed as a set of one or more specialized packages. The following diagram shows the general scheme.

Meditech UI and Optimization - Tab 5.png

First of all, this methodology tends to take maximum advantage of working in a multi-task environment. As already discussed in the previous posts, Meditech is a modular system using an internal set of three specialized Raspberry PI devices, plus a fourth unit dedicated to the camera features; others can be added if needed. The same vertical task approach described here has been adopted as the software model on all the devices.

In the scheme we identify different classes of applications, harmonized and integrated by an inter-process controller developed in Python.

 

The software sections in detail

 

On-demand processes

These are the processes that mostly involve the network connection (e.g. launching a task on another unit of the system); they are bash scripts that launch a task execution when the behavior needs it. A typical example is the TTS (Text-To-Speech) process playing a synthesized sentence in response to a command. The on-demand tasks can be called by other processes or by the main inter-process controller; their characteristic is that we should always expect an exit condition.
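As a hedged sketch of this class of tasks, an on-demand command launched from Python can be wrapped so that its exit condition is always collected; `run_on_demand` is a hypothetical helper name and `echo` stands in here for a real TTS wrapper script:

```python
import subprocess

def run_on_demand(cmd):
    """Run an on-demand task and always return its exit condition."""
    result = subprocess.run(cmd, stdout=subprocess.PIPE,
                            stderr=subprocess.PIPE, universal_newlines=True)
    return result.returncode, result.stdout

# 'echo' is a stand-in for a real TTS wrapper bash script.
code, out = run_on_demand(["echo", "Meditech ready"])
print(code, out.strip())  # -> 0 Meditech ready
```

The caller can then decide what to do when the exit condition is non-zero, which is exactly the "always expect an exit condition" characteristic of the on-demand tasks.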

 

Background processes

This group includes the startup Linux services, like the peripheral controls, the Apache2 web server, the PHP engine etc. Then there are Meditech-specific processes that start when the device is powered on and run indefinitely. A typical example is the infrared controller running over the lirc service (Linux Infrared Remote Control), managing the infrared controller interface, the primary interaction method with the system.

 

Networking

Beside the OS networking services, including but not limited to the SSH server, web server, MySQL server, NTP server and more, there are other Meditech-specific networking services based on bash scripting commands, managing special network features like the remote database update, image streaming, continuous data processing and intra-network data exchange.

 

Database storage

This is the class of tasks related to the local MySQL database management and the remote server update (when there is an active Internet connection). Where the SQL queries are recurring tasks they are embedded in bash scripts (to simplify the calls), while in the interactive UI the database is used as a data collector and the information retrieved from the sensors is represented graphically on the screen widgets by Python programs accessing MySQL.

 

Hardware control

The control panel and the data acquisition from the probes are managed by C++ programs compiled with GCC and embedded in bash scripts.

 

User Interface and interactivity

The Meditech UI is developed in Python and the visualization of the various widgets is controlled by the inter-process control developed in Python too.

 

Internet web access

This class of tasks is divided in a double client-server mechanism enabling remote support from any authorized Internet connection (a browser is sufficient). On the Meditech side, Apache2 plus MySQL and the PHP engine grant remote access from the same LAN. When an Internet connection is active and the remote assistance support has been enabled by the local operator, the data are sent in real time to a cloud MySQL server. A PHP-based web site grants remote access to the data and enables the chat support with the local Meditech device.

Introduction

The most common, and probably best known, way to manage a MySQL database is the PhpMyAdmin web application. This is fine in all those cases where the MySQL database is remotely hosted on a web server, especially when the core components of the MySQL engine are managed by the hosting provider, reserving a specific database partition.

 

Note: another good use of PhpMyAdmin is with the popular blog and CMS Wordpress, where the database management can be done with a PhpMyAdmin Wordpress plugin.

 

Things are slightly different when the server is a Raspberry PI, a Linux machine of which we can take full control from our LAN. In this case we can adopt a more reliable solution directly provided by Oracle. Better still if the bare database management includes query testing, faster operation and a comfortable visual tool.

These and other things are possible with a free multi-platform tool provided by Oracle that may be considered the best way to access and control MySQL databases on the Raspberry PI. This Meditech project annex explains how the MySQL Workbench has been used to set up and maintain the database of the Meditech project.

Screen Shot 2015-08-10 at 11.34.25.png

MySQL accessing by remote

Installing and setting up the MySQL database on the Raspberry PI (following the common MySQL installation procedure) is not sufficient to let external (remote) users access the data: the MySQL database should be enabled for remote access. For more details on the enabling procedure see the attached document.

As the database resides on the RPI master device, which should be updated by the other slave units, granting remote access is needed anyway. The procedure is rather simple and is part of the standard MySQL online documentation.

 

The MySQL database should have a user and password enabled for remote access; in our case, to make things easier, the same user/password pair used to access the Raspberry PI has been adopted. Take into account that this is NOT a regular Linux user but a database user, with nothing to do with the operating system.

 

Connecting the workbench to the remote database

After the installation, launching the workbench, the main screen shows the possible options and the database connections with the remote devices, as in the image below.

Screen Shot 2015-08-10 at 16.22.08.png

While in PhpMyAdmin, after logging in, the user has access only to the databases he is authorized for, as this web application is part of the same MySQL installation, here things are different. It is like having an IDE that can connect to as many databases as you want, local or remote, just as if they were different projects.

The image below shows the LAN connection settings from the development Mac to the RPI master where the Meditech project database is stored. The connection parameters can be tested, then, once confirmed, they are permanently stored in the workbench. The database connection should be considered the entry point to the database schema we want to work with.

Screen Shot 2015-08-10 at 16.56.54.png

Every time we need to work with the database schemas (i.e. the entire MySQL architecture on the desired server, users, tables, queries etc.) it is sufficient to double-click the corresponding connection on the workbench main screen.

Screen Shot 2015-08-10 at 17.01.33.png

When the database connection is established, the main SQL editor page is shown. The following image shows the RPI master database, where the only schema is the PhpMyAdmin one, already installed on the Raspberry PI for testing purposes.

Screen Shot 2015-08-10 at 17.03.25.png

One of the most important differences between the MySQL Workbench and PhpMyAdmin is that with the workbench we have a top-level vision of the installed MySQL engine, with a better and wider control over the architecture. Anyone used to managing server databases with PhpMyAdmin knows that it operates from inside the database and this kind of overview is not possible.

 

A helpful tool set for database design

The following images show a first advantage of having full control of the MySQL engine: all the users, connections, server status and more can be checked at any moment, including a good traffic monitor while the database is running, serving other users.

Screen Shot 2015-08-10 at 17.20.27.pngScreen Shot 2015-08-10 at 17.20.13.png Screen Shot 2015-08-10 at 17.20.03.png

But I think that one of the most interesting features of the workbench covers the database design aspects. After the essential components of the database have been defined - like in this example the PhpMyAdmin database tables - we can use it to generate and then graphically edit the data queries, table relationships and more.

Starting from the PhpMyAdmin tables definition, with a simple automated wizard the database table structure has been extracted and rendered graphically as shown in the image below.

Screen Shot 2015-08-10 at 17.30.43.png

The database designs can be organized visually to easily create the documentation, like the simple example in the attached pdf; the design is also interactive and can be used to expand the database features, complete the table relationships, create queries, procedures etc. in a comfortable visual environment.

Introduction

As Python is an interpreted language, the first temptation when developing applications is to edit the sources directly on the Raspberry PI, possibly with the help of the simple Python IDLE editor installed by default in Raspbian. This development environment is rather primitive, while the availability of a good IDE for the Python language may be very helpful, especially if the code can be managed on the PC while tested in real time on the target device.

This Meditech project annex explains how an efficient environment has been set up to achieve this goal.

 

The role of Python in the software architecture

Focusing on the Python language, it has two advantages: it runs fast in the Raspberry PI environment and can be executed (launched) from the command line. The multi-language approach of the entire Meditech project is outlined in the following table:

 

Language / Environment    Usage
C++                       Programs and command-line tools doing the hard work: communication, calculation, data processing
SQL                       Direct access via queries from bash commands for internal data organization and database management
Php                       External access via Apache2 web server supporting database integration
Bash                      Pre-built commands to manage complex tasks and simplify the inter-process communication
Python                    Local User Interface and on-screen real-time monitoring

 

Simplifying the Python development

As explained in the previous annex, in this case too the best way to simplify the development lifecycle is adopting an external development IDE. After several trials, I have decided to use the PyCharm Community Edition IDE (the free open source version).

Screen Shot 2015-08-09 at 16.41.59.png

I should recognize that this product from JetBrains proved to be a very efficient instrument for Python development.

In the case of Python we do not need remote compilation, as the language is based on an interpreter. A potential risk factor when working with an external IDE is that some Raspbian libraries are not available on other platforms; anyway this aspect can be ignored because the advantages are worth it. As with the NetBeans IDE for C++, PyCharm is simple to download and install. An appreciable aspect is the good documentation accessible from the PyCharm IDE and the good contextual help provided, as well as the editor features supporting deep syntax checking and indentation control.

 

The Python development environment

The way followed to create a Python development environment is a bit different from the remote compilation setup, requiring some changes and integrations.

 

Minimal requirements

The minimal requirements on the Raspberry PI side include, as usual, SSH connectivity; when developing on a remote embedded Linux device it is essential to reach the system from a terminal. The minimal settings on the PI are the following:

  • NFS server for folder sharing
  • SSH remote access (supporting the graphical environment on the PC with the -Y option, as an alternative to a headless Raspberry PI)
  • A couple of simple bash commands to speed up the synchronization
  • The possibility to mount the Raspberry PI remote folder on the PC. Also in this case, on Mac OSX, a bash command has been built
  • A simple command to keep the PC development folder synchronized with the Raspberry PI test folder
  • Python installed on the system (it is there by default on Raspbian)

 

Raspberry PI configuration

The Raspberry PI configuration is really simple. The only change needed is to export the development folder through the NFS server so it is shared remotely with the PC connected to the same LAN.

 

Note: it is best practice to export the folder limited to the development PC IP address.

 

Supposing the following initial conditions,

  • The PC address on the network is 192.168.1.5
  • The Raspberry PI IP address is 192.168.1.99
  • The Raspberry PI folder to share is /home/pi/GitHub

 

execute the following command to edit the exported (shared) folders:

 

sudo nano /etc/exports

 

Then add the following line, according to your real PC and Raspberry IP addresses and folder to share

 

/home/pi/GitHub 192.168.1.5(rw,async,insecure,no_subtree_check,no_root_squash)

 

After saving the file, restart the NFS sharing service with the following command

 

sudo /etc/init.d/nfs-kernel-server restart

 

At this point your folder can be mounted on the remote PC. This example works with a development PC based on Linux (Ubuntu, Debian etc.) or Mac OSX. For Windows, see how to share a folder from the Raspberry PI with the Samba protocol to reach the same result (the same document is attached to this post).

 

PC remote folder mounting

Now that the Raspberry PI folder is shared on the LAN for the specific IP address of our development PC, we should mount the remote folder so it is available locally on the PC. To avoid repeating the mount command, it is sufficient to write a short bash command to launch every time we need to develop on the PI. The following example refers to my local development folder on the Mac, so you should adapt it to your computer folder structure.

 

#!/bin/bash

# Mount RPI master Meditech Python repository and launch the ide.
sudo mount -o resvport,rw -t nfs 192.168.1.99:/home/pi/GitHub/meditech_python_interface meditech_python_interface/

 

At this point everything can be considered done. We simply start the PyCharm IDE on the PC and open and build projects directly in this folder. To make things more reliable, one small improvement has been made.

While the remote Python development folder is mounted as meditech_python_interface/, an identical folder named local.meditech_python_interface/ has been created on the PC. The PyCharm IDE sources are written in the local... folder and, when the Python code should run on the Raspberry PI platform, it is synchronized with the remote folder, overwriting the existing files; in this way there is a local and remote replica of the sources, acting as an up-front backup.

Also in this case the synchronization task is simplified by a bash command, written once and executed in seconds every time it is needed.

 

#!/bin/bash

# Update the remote mount files from the local development folder
sudo cp -r local.meditech_python_interface/* meditech_python_interface/

 

 

The Python development scenario

We can comfortably develop our Python applications for the Raspberry PI with only three windows:

 

  • The PyCharm IDE window
  • A local terminal session
  • A Raspberry PI remote SSH terminal session

 

The following image shows the PC with the Python development setup.

Screen Shot 2015-08-09 at 18.57.39.png

This post is an annex to the Meditech project explaining one of the (possible) best practices to set up an efficient development environment for C++ on the Raspberry PI platform, with the advantage of an advanced IDE and remote compiling without emulators.

 

Why a development IDE

When C/C++ programming covers a large part of an embedded project, going far beyond the simple cut and paste of some examples, being able to work in a good development environment represents a success factor for code quality and usability; adopting a high-level programming IDE becomes a must, at least for the following reasons:

  • Availability of optimized editing tools, including language syntax-checking
  • Fast moving between sources and headers inside a well organized project
  • Easy accessibility to classes, function declarations, constants, commenting
  • Fast syntax checking and bug-tracking
  • Sources and headers organization in projects
  • Optimized compiling feedback and fast error checking
  • Local and remote sources replication in-synch

 

Note that using a PC with a high-level IDE to create code for different platforms (mostly embedded devices), where it is difficult or impossible to develop directly, is a widespread practice. This is the way adopted for at least the following well-known devices:

    • All Android based devices
    • Symbian devices (still diffused in India and some developing countries)
    • iOS smartphones and iPad
    • Arduino
    • ChipKit and many PIC based microcontrollers
    • Many other SoC and SBC

 

These and many other factors dramatically increase the productivity and the quality of the final result when working with an IDE enabled for remote compiling.

Screen Shot 2015-08-09 at 10.41.15.png

 

What IDE for the Raspberry PI

The first assumption is that the Raspberry PI Linux (here Raspbian has been used, but the concept is the same with other distributions) should not host the development environment, as it is the target of the project. So we should think of the best way to manage the code development on a PC while seeing the result in real time on the target device. In a few words, we will provide a simple network connection between the Raspberry PI and the development PC; we can use WiFi, the LAN connection, the home router or any other method so that the two machines can share their resources on the network.

 

The other assumption is that, for the best result, it should be possible to compile remotely with a few simple operations, quickly checking errors and compilation results in the IDE on the PC while running the program on the native platform.

 

The two most popular open source IDEs for multi-language development are Eclipse (http://www.eclipse.org/) and NetBeans (https://netbeans.org/). There is an alternative to remote compilation using a cross-compiler: with particular settings it is possible to compile on a different hardware architecture (i.e. a PC with an Intel-based CPU) the code that should run on the Raspberry PI, that is, an ARM-based architecture. It is a more complex way with so few advantages that, where possible, it is best to avoid this method.

 

Just an interesting note: the PC Arduino IDE represents a good effort in this direction, as well as MPIDE, which also supports the ChipKit PIC-based platforms. It is a simple (and a bit primitive) IDE cross-compiling the program before uploading the binary file to the microcontroller board.

 

I am used to basing many of my developments on the Eclipse IDE, as this is one of the privileged Android, Java and PHP development tools. Unfortunately, after some tests, I ran into too many issues when trying to connect to the networked Raspberry PI for remote compiling, so I adopted the NetBeans IDE, which supports a very simple setup.

 

Minimal requirements for remote compiling

There is a series of minimal requirements to be met to remotely compile C++ programs on the Raspberry PI; most of these are obvious, but a reminder can be useful:

 

  • SSH and SFTP installed on the Raspberry PI for remote access (this option can be enabled from the raspi-config setup utility)
  • GNU compiler (better to check for system upgrades to get the latest version of the compiler, assembler and linker)
  • SSH access to the Raspberry PI from the PC
  • Development versions of the C++ libraries you need, correctly installed on the Raspberry PI

 

NetBeans IDE setup on the PC

These notes refer to version 8.0 of the NetBeans IDE; if a different (maybe newer) version is installed you may find some minor changes in the menu settings.

To install a copy of the IDE on your PC it is sufficient to go to the NetBeans IDE platform download page and download the latest available version for your platform (Mac, Windows or Linux).

Screen Shot 2015-08-09 at 13.25.12.png

The installation is simple and in most cases the default settings are all what you need.

When the installation process ends, a few things should be changed from the Settings menu. Its location can vary depending on the PC platform you are using; on Mac OSX it is in the pull-down NetBeans main menu (the topmost left choice), while in Windows and Linux it may be in the File menu.

Screen Shot 2015-08-09 at 13.30.15.png

From the settings window (see the image above) you can customize all the features of the IDE, e.g. the editor behavior, source font and colors, the graphic appearance and so on. From the C/C++ option tab select the GNU compiler (it should be the default; otherwise add it, updating the IDE with the Add button on the same window).

Screen Shot 2015-08-09 at 13.34.06.png

Again from the same window you should edit the host list (by default there is only localhost, the development PC), adding the parameters to connect to the Raspberry PI:

  • Add the user (in this case the default user pi)
  • Add the password and confirm to save it, avoiding to repeat it every time the IDE connects to the Raspberry PI
  • Insert the Raspberry PI IP address (better a static IP instead of a DHCP-assigned one, to avoid the IP changing the next time you reopen the IDE)
  • Specify the access mode SFTP (that is, FTP protocol over an SSH connection)
  • Enable X11 (the Linux graphic server) forwarding

That's all!
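Since one of the points above recommends a static IP for the Raspberry Pi, here is a sketch of a typical static configuration in /etc/network/interfaces; the addresses are illustrative and must match your own LAN:

```
# Example static address for the wired interface
auto eth0
iface eth0 inet static
    address 192.168.5.2
    netmask 255.255.255.0
    gateway 192.168.5.1
```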

 

Starting development

With the IDE set up in this way you can start creating a new project and writing your code. On the IDE top toolbar there is a connection button to activate the remote connection to the Raspberry Pi; to compile remotely from the PC you should be connected. As the device is remotely connected, NetBeans lets you edit the application sources locally, but when you Build the application the sources are automatically zipped, sent to the remote device (the Raspberry Pi), unzipped and compiled there. Errors and messages are shown in the Build result window, making debugging very simple.

 

Another very useful feature is the option to open a remote terminal directly from the IDE. In this way, as shown in the following screencast, the development lifecycle, including running and testing the program, becomes very simple and efficient with minimal effort.

Introduction

Meditech will be used in many different environments, and the visual information provided on the screen may not always be usable. Receiving helpful audio hints and suggestions on-the-go is a good solution: when the user has to activate a probe, or start or stop the data acquisition near the patient, controlling the entire system with a simple infrared controller is very useful but may not be sufficient. This is the reason why, to improve the system usability, TTS (Text-To-Speech) support has been added to reinforce user interaction and system feedback.

The introduction of this feature in the Meditech system has involved two devices: RPI master and RPI slave3 hosting the Cirrus Logic Audio Card.

One of the biggest issues is that the audio card uses its own custom Raspbian Linux distribution with a real-time kernel. Unfortunately it does not support the standard apt-get update and apt-get upgrade commands: in a few words, the Raspbian version on the RPI slave3 should never be updated, and there are very few packages that can be installed on it. On the other hand the audio card performance is very good and it is used for the microphonic stethoscope probe, so it is not worth replacing this hardware with a more flexible one and penalising the quality.

Last but not least, the text strings are managed by the controller application installed on the RPI master device.

 

How it works

The scheme below shows the implementation architecture of the TTS feature in the Meditech device.

Screen Shot 2015-07-28 at 11.03.01.png

The synthesized strings can be disabled with the mute button on the controller. When the synthesis is active and the user presses a command on the controller, the Meditech speakers play a short contextual sentence. The video below shows the interaction effect.

 

TTS sentences generation

The voice synthesis does not need to run at runtime. To be clearer, it must not, for at least two reasons: voice synthesis is a resource-consuming process, and the sentences change only when a new version of the application is released. So what we really need is a reliable TTS system that can be run once, plus a good method to store the audio files on the RPI master device, where there is sufficient space thanks to the attached 120 GB SSD.

 

The synthesis tool

The TTS tool that has been adopted is Festival which, for many reasons, fits well: it is highly customizable, efficient and reliable, runs on Raspbian Linux and has a good environment for custom commands, voice definitions and more. The Festival Speech Synthesis System has been developed by the Centre for Speech Technology Research of the University of Edinburgh; it is a long-term project, started in 1984, that is still maintained by a wide community of users and developers sharing their knowledge. The current version, which can also be found in the Raspbian repository, is 2.4, released at the end of 2014.

Installing Festival on the Raspberry Pi 2 (it works fine on the B+ 512 MB as well) requires a single operation:

 

sudo apt-get install festival



 

Beyond the installation provided by the official Raspbian repository, the best results can be obtained with a more in-depth analysis of the Festival features. To achieve good audible results, and to understand a bit more of the fascinating world of text-to-speech synthesis, I strongly suggest downloading the Festival package and compiling it separately: this gives you more voices, intonations and tools that can dramatically improve on the features provided by the standard installation. Not required, but a good advantage, is an average knowledge of how the Lisp language works: as a matter of fact Festival can run in command mode or process batch commands expressed as Scheme sentences, Scheme being a variant of the Lisp language.

One of the features of Festival is the ability to run the TTS conversion generating wav files for further usage.
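As a quick manual check that the wav generation works, Festival's text2wave script can be invoked directly from the shell; the file names here are arbitrary, and the guard keeps the snippet a no-op on machines where Festival is not installed:

```shell
# Convert a one-line text file into a wav file with text2wave
if command -v text2wave >/dev/null 2>&1; then
    echo "Hello from Meditech" > /tmp/hello.txt
    text2wave -o /tmp/hello.wav /tmp/hello.txt
    ls -l /tmp/hello.wav
fi
```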

 

Speech synthesis batch command

What is needed to synthesize the text messages in audio format is a method that, starting from a simple text string (e.g. "Hello World"), generates a corresponding audio file. The other important requirement is that the audio file follows a specific naming convention. The best approach is for the generated file to have the same name as the corresponding string ID in the controller program: this means that for the sentence whose ID is 10, the generated audio file should be named 10.meditech, where 'meditech' is the arbitrary file extension chosen for the audio messages.
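The convention can be sketched in a couple of lines of shell; the variable names mirror the constants used later in the controller program, while the folder and extension values follow the article:

```shell
# Derive the audio file name directly from the numeric message ID
TTS_FOLDER="tts_audio_messages/"
TTS_FORMAT="meditech"
message_id=10
audio_file="${TTS_FOLDER}${message_id}.${TTS_FORMAT}"
echo "$audio_file"    # tts_audio_messages/10.meditech
```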

The resulting bash command create_ttsText.sh has the following format:

 

#!/bin/bash
# Synthesize the text string $1 into the audio file $2, using the
# temporary text file $3 (Festival only accepts a file as input)
echo "*** Creating TTS $2 ***"
echo "$1" > ~/"$3"
text2wave -o "$2" -F 48000 -otype snd ~/"$3"
rm ~/"$3"
echo "*** Completed ***"


 

Note that this script uses the text2wave Festival Scheme script that is part of the Festival TTS package.

 

Due to the Festival architecture it is not possible to pass it a string directly: a text file is needed. So the first action of the create_ttsText.sh script is to write the input string to a temporary text file, which is deleted after the audio synthesis. In this bash script the synthesis parameters are preset, to simplify as much as possible a process that will be automated. The text2wave call fixes the output audio file to the snd type and the sampling frequency to 48 kHz; as a matter of fact these files will be played through a high-quality sound card, so it is worth keeping the quality high (48 kHz is the standard sampling frequency for DVD audio).

With this script the number of parameters that have to be passed for the synthesis is minimal. For example, generating the audio file named output.meditech from the test string "hello, Element14 ! How is the weather today?" is done as shown below:

 

./create_ttsText.sh "hello, Element14 ! How is the weather today?" output.meditech output.tmp


 

Audio files creation

The audio file creation has been implemented as a special option of the controller application. Normally this program runs indefinitely from when the RPI master device starts (i.e. when the Meditech is powered on), processing the infrared controller buttons in real time. When the program is launched with the -v (that is, "voice") parameter instead, it executes the audio file creation. This option can also be launched while the program runs as a background process in standard mode, so there is no special procedure to follow to regenerate the files when needed.

 

The audio files are stored in the ~/tts_audio_messages folder, where the name of each file matches the message ID in the controller program.

pi@RPImaster ~ $ ls tts_audio_messages/
1.meditech   14.meditech  19.meditech  23.meditech  3.meditech  8.meditech
10.meditech  15.meditech  2.meditech   24.meditech  4.meditech  9.meditech
11.meditech  16.meditech  20.meditech  25.meditech  5.meditech
12.meditech  17.meditech  21.meditech  26.meditech  6.meditech
13.meditech  18.meditech  22.meditech  27.meditech  7.meditech

 

To add this feature a new version of the controller application has been created, adding the file MessageStrings.h with the string ID definitions and some other useful constants.

// Strings array IDs
#define TTS_SYSTEM_RESTARTED 0
#define TTS_POWER_OFF 1
#define TTS_SHUTDOWN 2
#define TTS_VOICE_ACTIVE 3
#define TTS_MUTED 4
#define TTS_STETHOSCOPE_OFF 5
#define TTS_STETHOSCOPE_RUNNING 6
#define TTS_STETHOSCOPE_ON 7
#define TTS_BLOOD_PRESSURE_OFF 8
#define TTS_BLOOD_PRESSURE_RUNNING 9
#define TTS_BLOOD_PRESSURE_ON 10
#define TTS_HEARTBEAT_OFF 11
#define TTS_HEARTBEAT_RUNNING 12
#define TTS_HEATBEAT_ON 13
#define TTS_TEMPERATURE_OFF 14
#define TTS_TEMPERATURE_RUNNING 15
#define TTS_TEMPERATURE_ON 16
#define TTS_ECG_OFF 17
#define TTS_ECG_RUNNING 18
#define TTS_ECG_ON 19
#define TTS_INFORMATION 20
#define TTS_TESTING 21
#define TTS_TESTING_END 22
#define TTS_SYSTEM_READY 23
#define TTS_START_PROBE 24
#define TTS_PROBE_STOPPED 25
#define TTS_CONTINUOUS_ON 26

 

The strings themselves are instead defined in an array inside the program. In future versions supporting language localization, a multi-language set of include files should replace the hardcoded strings.

When main() - i.e. the program itself - is launched, it checks whether the -v parameter has been passed, in order to start the generation function instead of the normal process.

// Check for main parameters
    if(argc > 1) {
        // Check for valid arguments
        if(argc != 2) {
            printf(MAINEXIT_WRONGNUMPARAM);
            exit(EXIT_FAILURE); // Wrong number of arguments
        }
        // We expect an argument in the format '-x' where 'x' is
        // the option code
        if(strstr(argv[1], VOICE_STRINGS) ) {
            ttsStrings();
            printf(MAINEXIT_DONE);
            exit(0);    // ending
        } // Launch the TTS generation
        else {
            printf(MAINEXIT_WRONGPARAM);
            exit(EXIT_FAILURE); // Wrong argument
        }
    } //

 

With the -v parameter the function ttsStrings() is called; it executes the bash script once per message. As mentioned before, the strings are defined in an array, in the same order as the IDs in the MessageStrings.h header file.

/**
\brief Convert the program application strings to voice messages
*/
void ttsStrings(void) {
   
    //! The strings array with the messages
    const char * MESSAGES[TTS_MAX_MESSAGES] = {
        "System restarted to the power-on conditions. ",
        "Power-off: press the OK button for complete shutdown, any other button to ignore. ",
        "Power-off confirmed. Shutdown-sequence started. ",
        "Voice messages are now active. ",
        "Muted. ",
        "Microphonic Stethoscope is now disabled. ",
        "Microphonic Stethoscope is already active.",
        "Enabled Microphonic Stethoscope. ",
        "Blood Pressure measurement probe is now disabled. ",
        "Blood Pressure measurement probe is already active.",
        "Enabled Blood Pressure measurement probe. ",
        "Heart Beat measurement probe is now disabled. ",
        "Heart Beat measurement probe is already active.",
        "Enabled Heart Beat measurement probe. ",
        "Body Temperature measurement probe is now disabled. ",
        "Body Temperature measurement probe is already active.",
        "Enabled Body Temperature measurement probe. ",
        "E.C.G. probe is now disabled. ",
        "E.C.G. probe is already active.",
        "Enabled E.C.G. probe. ",
        "Look at the Control Panel display for system information. ",
        "Started a Control Panel test cycle. ",
        "Control Panel test cycle ended. ",
        "Startup completed. System ready. ",
        "Press OK button to start the probe collecting data. ",
        "Probe stopped.",
        "Continuous mode running. Press OK to stop collecting data."
    };

    printf(TTS_START_PROCESS);

    // Generate the TTS wav files
    for(int j = 0; j < TTS_MAX_MESSAGES; j++) {

        char fileName[64];
        char fileTemp[64];
        char programName[32];
        char programPath[64];
        char messageText[1024];

        sprintf(fileName, "%s%d.%s", TTS_FOLDER, j + 1, TTS_FORMAT);
        sprintf(fileTemp, "%d.tmp", j + 1);
        sprintf(programName, "%s", TTS_SHELL_COMMAND);
        sprintf(programPath, "%s", TTS_SHELL_PATH);
        sprintf(messageText, "%s", MESSAGES[j]);

        char* arg_list[] = {
            programName,     // argv[0], the name of the program.
            messageText,
            fileName,
            fileTemp,
            NULL
          };
       
        // Spawn a child process running the command. 
        // Ignore the returned child process id.
        spawn (programPath, arg_list);
    }
}

 

There are different methods to execute a shell command from a C++ program using the exec family of calls; the issue is that the process launched this way runs together with the calling process (our program) and - especially in a case like this, where there is a loop of many sequential calls - concurrency problems can arise. What we need is control over how and when the child processes are created. To do so, the parameter list is prepared as shown in the snippet above and the secondary process is launched by spawning it from our primary process. What happens is that the primary process launches each child process in the correct way, then immediately regains control and proceeds launching the next one, and so on. As in this case we do not need to check a return value, all the conversion processes can be executed in parallel. The following image shows what appears on the terminal when the conversion process is started.
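The same fire-and-forget pattern can be sketched in shell; the loop body here merely stands in for create_ttsText.sh, and the names are illustrative. Each child is started in the background and the parent immediately continues with the next iteration:

```shell
# Launch the conversions in parallel: '&' detaches each child,
# so the loop never blocks on a single synthesis
launched=0
for j in 1 2 3; do
    ( printf 'converting message %s\n' "$j" ) &    # stand-in for create_ttsText.sh
    launched=$((launched + 1))
done
wait    # only to keep the example tidy; the controller does not wait
echo "launched $launched conversions"
```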

Screen Shot 2015-07-28 at 14.51.05.png

 

Playing audio files

In a similar way, when the controller is not in mute mode and a button press is processed, the corresponding audio file is played, as shown in the example below taken from the parseIR() function.

case CMD_RED:
            if(infraredID != controllerStatus.lastKey) {
                if(!controllerStatus.isMuted) {
                    playRemoteMessage(TTS_TESTING);
                }
                cmdString = cProc.buildCommandDisplayTemplate(TID_TEST);
                controllerStatus.serialState = SERIAL_READY_TO_SEND;
                setPowerOffStatus(POWEROFF_NONE);
                // Check the serial status
                manageSerial();
            }
            break;

 

Below is the playRemoteMessage() function, which uses the same spawning method.

/**
\brief Play a voice message on the remote RPIslave3 with the
Cirrus Logic Audio Card.

\note To the Linux side the two computers should be set to share the
private / public ssh key to avoid passing user and password during the
ssh remote command launch

\param messageID The message ID to play remotely
*/
void playRemoteMessage(int messageID) {
   
        char sshCall[64];
        char sshServer[64];
        char programName[32];
        char programPath[64];

        sprintf(programName, "%s", SSH_COMMAND);
        sprintf(programPath, "%s", SSH_PATH);
        sprintf(sshCall, "%d", messageID + 1);

        char* arg_list[] = {
            programName,
            sshCall,
            NULL
          };
       
        // Spawn a child process running the command. 
        // Ignore the returned child process id.
        spawn (programPath, arg_list);
}

 

Also in this case SSH_COMMAND refers to a bash script, shown below.

#!/bin/bash
# Play a message remotely
ssh 192.168.5.4 "/home/pi/play_message.sh $1"
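The passwordless login mentioned in the note above can be prepared once with the standard OpenSSH tools; the following sketch assumes the user pi and the slave address used by the script:

```shell
# Run once on the RPI master as user pi: create a key pair if missing,
# then authorize it on RPI slave3 (192.168.5.4)
[ -f ~/.ssh/id_rsa ] || ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa
ssh-copy-id pi@192.168.5.4
```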

 

Here is the difference: in this case the audio files, stored on the RPI master Linux device, should be played on a different machine, the RPI slave3 Linux device hosting the Cirrus Logic Audio Card. To reach this behaviour some Linux actions were needed:

 

  • The RPI master storage folder ~/tts_audio_messages has been exported, i.e. shared via Linux NFS
  • On the RPI slave3 the same (empty) folder has been created, then the remote one has been mounted on it. The result is that the mounted folder contains the same files to play
  • On the RPI slave3 the bash command play_message.sh has been created (see the description below)
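The NFS sharing in the first two points can be sketched as follows; the export options and the master address 192.168.5.1 are assumptions, not taken from the article:

```shell
# On the RPI master: add the export and refresh the NFS server
echo "/home/pi/tts_audio_messages 192.168.5.0/24(ro,sync,no_subtree_check)" | sudo tee -a /etc/exports
sudo exportfs -ra

# On RPI slave3: create the mount point and mount the shared folder
mkdir -p /home/pi/tts_audio_messages
sudo mount -t nfs 192.168.5.1:/home/pi/tts_audio_messages /home/pi/tts_audio_messages
```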

 

When the spawned (child) process is launched by the controller program on the RPI master, what really happens is that the bash script play_message.sh is executed on the RPI slave3. The only parameter passed is - again - the message ID. The remote bash command plays the audio file without consuming resources of the RPI master, which immediately regains control of the main program after launching the command.

pi@RPIslave3 ~ $ more play_message.sh
#!/bin/bash
# Play a Meditech controller audio message
AUDIODEV=hw:0 play --no-show-progress -V0 /home/pi/tts_audio_messages/$1.meditech reverb 30

 

The updated sources are available on the GitHub repository as usual.

Introduction

The Meditech internal architecture is built on a network of Raspberry Pi devices, each specialised to manage certain acquisition probes producing several kinds of data. The information of the running system should be shared among the devices and collected for graphical representation and historical purposes, so the ideal data collector is a centralised database hosted on the main unit, the RPI master (Raspberry Pi 2).

 

To organise the data efficiently, the acquired information is stored in and accessed through a MySQL database. This is one of the reasons why the RPI master is equipped with 120 GB of SSD storage.

 

Database organisation

The following scheme shows the central role of the RPI master database in the data collection, storage and distribution.

Screen Shot 2015-07-23 at 14.40.05.png

The MySQL database inside the Meditech architecture represents the main data centre. The Apache2 web server and the PHP interpreter are also installed on the same RPI master unit. The database is also used to store single-record tables with the persistent configuration data of the system.

 

The data sets that should be available remotely can be accessed through a set of PHP APIs; this strategy permits remote access to the real-time information (for external support and assistance) from any Internet-connected device with a browser, including smartphones, tablets, PCs and laptops.

 

As all the Meditech units are connected to the same LAN, data exchange is fast: the MySQL server on the RPI master is accessed by the other units directly via SQL queries over SSH tunnels.
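That access pattern can be sketched as follows; the master address, user and database names are illustrative, not taken from the article:

```shell
# On a slave unit: forward local port 3307 to the MySQL port on the
# RPI master (192.168.5.1 assumed), then connect through the tunnel
ssh -f -N -L 3307:localhost:3306 pi@192.168.5.1
mysql --host=127.0.0.1 --port=3307 --user=meditech -p meditech_db
```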