
I always wanted to send push notifications from an embedded device to my phone, and I wanted it to be easy and free. Google offers the API for free with no quota, and then a friend recommended a web hosting site that offers free hosting for small apps. So with all the pieces in place, I finally decided to put words into code. I just released an app on the Play store to do just that; the app has a web component written in Python and hosted on a free account at heroku.com. The app has not been tested carefully, but then again, nothing is in the hacking community. Better to have something than nothing; bugs and enhancements can be added later.

RemoteAlertBoxDiagram.png

 

The concept is:

Create a message on the embedded device, triggered by a simple action event. It may be a sensor reading or some other condition. Then post a message to the server with the desired text. That message is then pushed to the phone. For this there is a web API I created using Python and Flask, and from there the message is forwarded to the Google Cloud Messaging service to get pushed to your phone.









The FUN PART, how to use it:

1. Install RemoteAlert on your phone (available for free from Google play store) (https://play.google.com/store/apps/details?id=com.soy.remotealert)

2. In the app you are prompted with your device UniqueId. That id is arbitrarily created within the app, so it means nothing outside of RemoteAlert.

3. Using the UniqueId from the app, just issue a POST with a JSON body containing the message you want to show on the device.

 

Sample Code:

After installing the app on your phone, you will see a screen similar to this:

copy of RemoteAlertApp1.png

Using the orange id from the image above, you may issue a simple POST to send a message.

Example using a curl command from the shell (you can run this in the shell of any computer that has curl):

curl -X POST -H "Content-Type: application/json" http://remote-alert.herokuapp.com/post/a32b831c-7623-4c43-99cf-b614ff54e902 -d '{"message":"Hellow Push Message World"}'

 

A Python sample looks like this:

import json
import urllib2

class RemoteAlert:
    def send(self, device_id, message):
        data = { 'message': message }
        url = 'http://remote-alert.herokuapp.com/post/' + device_id
        req = urllib2.Request(url)
        req.add_header('Content-Type', 'application/json')
        urllib2.urlopen(req, json.dumps(data))
        return 'OK'

ra = RemoteAlert()
dev_id = 'a32b831c-7623-4c43-99cf-b614ff54e902'

print ra.send(dev_id, 'Hellow Push Message World')

 

This is what the received message looks like:

copy of device_notification.png

 

A simple POST with a JSON body containing the message is all that is needed. You may use HTTPS if you prefer; the endpoint is the same:

http://remote-alert.herokuapp.com/post/<device_id>

{ "message" : "There you go" }

 

You may use your language of preference for this: C#, Ruby, C++, Perl, etc. And this can be used from anywhere, not just an embedded computer. For example, you may have a script in your PC's startup that sends you a message if somebody logs into your computer, or a script on your Raspberry Pi that sends you your public IP when it changes...
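As a quick illustration of that last idea, here is a minimal sketch of my own (not part of the released code), reusing the RemoteAlert class from the sample above; the ifconfig.me lookup service is an assumption, and in practice you would run this from cron and persist the last seen IP to a file.

import urllib2

# look up the current public IP (ifconfig.me is one of several such services)
ip = urllib2.urlopen('http://ifconfig.me/ip').read().strip()

last_ip = None  # in a real script, load this from a file between cron runs
if ip != last_ip:
    ra = RemoteAlert()  # the class from the sample above
    ra.send('a32b831c-7623-4c43-99cf-b614ff54e902', 'Public IP changed: ' + ip)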

This is a work in progress, and software projects are never finished, but I just wanted to put it out there so anybody can use it if they want. All code will be posted open source to use freely. You may print it all on a shirt or a bed cover; it will be public. There is nothing special in the code, and that's the idea. We all should have basic features for free, at least to hack away and learn.

More sample code to come. I am planning on adding a sample that pushes notifications directly from an Arduino; if the web API needs to be changed for that, it will be changed. Adding notifications straight from the Arduino will make this even more fun.

The app is free: no personal info, no registration/login required. As long as there is free web hosting, this will be online.

No messages are stored on the server, and no personal info. The Android app has no sign-in, and there are no special permissions except network access, because without internet this will not work.

That said, I would not recommend anybody share confidential data using this. The app has no error correction; it just sends a message to Google Cloud Messaging to get pushed to your phone as soon as you post it. However, the Google Cloud service does not push the message to the phone instantly; from testing I have seen that messages may take close to a minute to get to the phone. Normally I get the messages in 20 seconds more or less (I haven't timed it carefully).

If anybody uses it, add a comment below so others in the community can benefit. Be gentle, but if it does not work, please post that as well. Save time for others if you do not like it, or vice versa.

 

Here is a sample video, just to give an idea of how much time it takes for the message to be sent and received. It is unedited, to show a real-world sample of the time it took.

Code from the video: https://github.com/soynerdito/RaspberryPiTalk/blob/master/in_out_sample/push_no_block_event_led.py

Library: https://github.com/soynerdito/RaspberryPiTalk/blob/master/in_out_sample/remotealert.py

 

I hope this can help others; this is just a simple project put together to help me do other projects faster. I will blog again on progress and features. Take care, all!

300x250_pisun_ban.jpg

 

Keep Your Home Safe and Snug This Summer!

 

In our introductory Pi in the Sun blog, we promised projects that would make your summer better. Now we're delivering.


Here are 2 Pi projects that will let you worry less about keeping unwanted visitors and prowlers away while you hit the holiday road.

 

1.   Let Frederick Vandenbosch show you how you can use a Pi, a Pi Camera and a PiFace Digital to build an alarm system that detects unwanted visitors, scares away intruders and transmits automatic notifications of disturbances.

 

DSCN3083.JPG

 

2.  With this project, Carriots M2M Applications Platform reviews how to program the Raspberry Pi to send a data stream to a database when the Pi detects light. The result: You receive a text message signaling to you that someone is in your house.

 

 

 

 

Have more ideas for how the Pi can protect your home while you're away this summer? Share them!

 

 


(Note: Click here to see XMP-2!)

 

Introduction

The XMOS startKIT from Farnell (or Newark) is a very low cost (£12 including VAT) processor platform that works well with the Raspberry Pi. Together it is possible to construct robotics applications with almost no soldering required.

 

xmp-photo-rear.jpg

The XMOS startKIT is a near-credit-card sized board with an XMOS chip on it with multiple ‘XMOS cores’ that can be programmed in C.  XMOS technology allows things to run in parallel at high speed with low jitter. These are exactly the characteristics which could be ideal for robotics applications.

 

Together with some code to run on the XMOS startKIT board and on the Raspberry Pi (RPI), the boards are used to construct a simple, compact mobile platform (I'm going to call it XMP, for XMOS Mobile Platform, rather than a robot from now on, in the XMOS spirit of prefixing everything with 'X').

 

Although the XMP-1 is not much of a robot until it has some sensors and more programming, it could be extended in future for robotics experiments. The XMP-1 uses low cost, off-the-shelf standard hardware and no exotic tools beyond a screwdriver, wire-cutters and pliers.

 

This post covers communication between the RPI and the XMOS board using a serial peripheral interface (SPI), and how to construct the XMP-1 and control it from a web browser.

The video here shows XMP-1 being taught a route; the first attempt at using it! It would benefit from a better user interface. XMP-1 can move quite fast, but I took it easy here on a low speed. On the right is the browser control, and at the bottom is the console output, just generating some keep-alive and status messages to see what's occurring.

 

This next video below shows the XMP-1 attempting to play back the route and causing suffering and pain along the way. My low-cost continuous rotation servos (which are being used to drive the wheels) were not very good, and XMP-1 has no sensors yet.

 

 

A bit more detail

This post is actually part 2 of some XMOS startKIT experiments. Part 1 contains the XMOS introduction, terminology, architecture and a quick getting started guide with example programs. If you're interested in the technology then it may help to follow part 1 first, so that this part 2 makes more sense. This part 2 is intended to construct a simple framework for high speed communications between the Raspberry Pi and the XMOS startKIT board. The framework should be general purpose enough to be usable for many projects (robotics was not my intention). The XMP-1 is really just a by-product of the desire to test out the Raspberry Pi to XMOS board communications. It is recorded here in case it is useful. (Note, there is also a part 3, entitled XMOS startKIT: XMOS and Raspberry Pi Oscilloscope XAE 1000, which reuses the SPI capability discussed in this post, and introduces how to use the analog to digital converter (ADC) that is present in the XMOS chip, and how to perform real-time graphics in a web browser.)

xmp-photo.jpg


If you're only interested in constructing and using XMP-1 then you can just take the code at the bottom of the post, compile and store it to flash (as described in part 1) on the XMOS startKIT board and the Raspberry Pi, and follow the sections that describe XMP-1 hardware assembly, skipping all other content here. If you're interested in controlling any hardware using the Raspberry Pi and a web browser then some of the code here can be reused. But to get the most out of the Raspberry Pi and XMOS startKIT combination, the remaining information here may be useful if you're new to the startKIT.

 

Solution Overview – Hardware and Software

Here is a photo of the completed XMP-1 being charged up. For outdoor use, I used an 802.11 hotspot type device (MiFi), running the browser on a mobile phone.

xmp-dev.jpg

 

The diagram below shows the approximate layout of the bits and pieces as viewed from the rear of XMP-1. You can see that it’s pretty basic – XMP-1 was just a quick experiment.

board-layout.png

 

The Raspberry Pi (RPI) is used to handle all network activity. It runs a small web server and most of the code is written in JavaScript on a Node.js platform. The RPI communicates motor control speeds (actually continuous rotation servos were used for XMP-1) over a serial interface (SPI) to the XMOS startKIT board. The XMOS startKIT is responsible for feeding pulse width modulation (PWM) signals to the motors.

The RPI is connected to the network using an 802.11 WiFi USB adapter.

The full wiring diagram is shown here. The hardware and construction is described later.

wiring.png

 

The diagram below shows the software that will be implemented on the RPI and on the startKIT. It looks like a lot, but it isn't, and it can be broken down into small parts, which are described further below. As mentioned, the entire source code is at the bottom of this post, so it can be used without any modification if desired.

solution-overview-software.png

 

In brief, the green block handles the web interaction and determines the speed and direction of the motors based on the user input. The green block provides a web page (index.html) to the user that incorporates the user interface. The xmos_servo program is a small bit of software written in C that translates the desired speed/direction into serial peripheral interface bytes of data that are sent to the startKIT. The startKIT software is divided into three parts that run simultaneously on separate XMOS cores. The spi_process converts the SPI signals into data that is stored in an array. The data_handler code inspects the array to decide what to do (the only conclusion it makes today is to manipulate the servos). The servo_handler process outputs a pulse stream to the servos, so that they can rotate at the desired speed. All these blocks are explained in more detail further below.

 

Serial Peripheral Interface (SPI)

SPI relies on four wires, known as SS, SCLK, MISO and MOSI, and an assignment of master and slave to the two devices involved in the communication. In the case of the RPI and the XMOS board, the RPI is the master device and is responsible for generating the clock signal. The RPI transmits data on the MOSI wire and receives data on the MISO wire. This means that the SPI interface can transfer data in both directions at the same time. In practice, if one-way data is required then either the MOSI or the MISO signal can be ignored, depending on the direction of interest.

 

The oscilloscope screenshot here (individual signals and the automated SPI decode from a Tektronix MSO2024B oscilloscope) shows an example of SPI communication using the Raspberry Pi. SPI can be configured in a few ways; you can see in this example that three bytes of data were transferred from the master (RPI) to the slave (XMOS board) and that they were 0x02, 0x00 and 0x10, and either no data or 0x00, 0x00, 0x00 was transferred from the slave to the master simultaneously.

spi-comms-example2.png

The screenshot above shows a fairly slow SPI connection (32kHz clock rate) but it can be massively sped up as shown in the screenshot below:

high-speed-spi.png

The SS wire is a chip-select signal (active low). The RPI has two pins on its 26-way connector that could be used for SS; they are shown circled in blue in the diagram below, marked CE0 and CE1. This means that the RPI can talk to two SPI slave devices if desired. In this case, only one of the CE pins was used: I picked CE1.

rpi-marked.png

 

The pins circled in yellow above are the MOSI, MISO and SCLK pins on the Raspberry Pi. The pin circled in black is a 0V connection which would also be needed between the RPI and XMOS board.

 

Controlling Hobby Servo Motors

Hobby servo motors generate a movement based on an input signal. They usually rotate less than one full revolution; typically a hobby servo will rotate within a range of about 180 degrees. The output shaft can be connected to (say) linkages to make wheels turn full left or full right (or anything in between) based on the input signal.

servo-typical.jpg

 

The diagram below shows the internals of a typical hobby servo (taken from this site). On the left (in blue) is a conventional DC motor. It is geared down a lot, and the final shaft can be seen on the right connected to a blue arm which could be connected to a wheel steering mechanism for example. Underneath the final shaft will be a potentiometer, and that provides feedback about the exact position of the final shaft. A hobby servo is therefore a closed loop system and can self-correct if the arm gets accidentally knocked away from the desired position.

Servo_Stripped.jpg

Hobby servos typically have three connections; 0V, 5V and Signal. The signal wire is a digital input to the servo and it requires a PWM signal. The size of the pulse width determines the angle that the shaft will move to. The PWM signal needs to repeat every 20 msec, and a pulse width of 1.5msec will result in the shaft moving to a centered position. A width of 1msec will move the servo fully in one direction, and a width of 2 msec will move the servo fully in the other direction (further below there will be some oscilloscope traces of servo control).
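As a quick worked example of that mapping (my own illustration, assuming a simple linear relationship between the 1 msec and 2 msec endpoints):

# illustrative only: linear mapping from shaft angle to pulse width,
# assuming 1.0 msec = 0 degrees, 1.5 msec = 90 degrees (center) and
# 2.0 msec = 180 degrees, with the pulse repeated every 20 msec
def pulse_width_msec(angle_deg):
    return 1.0 + (angle_deg / 180.0)

for angle in (0, 90, 180):
    print('%d deg -> %.2f msec' % (angle, pulse_width_msec(angle)))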

 

There is a type of modified servo known as a ‘continuous rotation’ servo. It is a modified servo where the potentiometer is removed along with any end-stops, and the circuitry is persuaded into thinking that the servo is still in the centered position. Sending PWM with a pulse width other than 1.5msec will make the mechanism rotate in the clockwise or anticlockwise direction at a speed that depends on the pulse width. The XMP-1 uses two continuous rotation hobby servos, one for each wheel. They are not an optimal way of obtaining controlled motion (XMP-2 will use DC brushed motors) since they are being used for a purpose different from the original intent for hobby servos, but they have the advantage that they can be controlled by a digital logic signal and they do not require any external H-bridge circuit.

 

Hobby servo wires can be color-coded differently depending on manufacturer. Usually the center wire is red, and it goes to +5V. The black or brown wire is 0V. The white or yellow wire is the PWM signal input.

 

Starting development - Connecting up the boards

In order to develop the software, the RPI and startKIT were connected up using a ribbon cable and IDC connector assembly - these can be assembled using a vice or purchased ready-made. For a self-assembled version it is worth buying an extra IDC connector for use as a debug connector at the center of the cable, to make life easier when probing signals with a multimeter or scope. Note that the left and right rows of pins on the debug connector swap around when done in the orientation shown in the photo below.

rpi-xmos-connected.jpg

Here is a close-up of the connector serving debug purposes. I used these connectors:

debug-connector.jpg

With a RPI model A+ or B+, the connector is 40-way instead of 26-way, and due to lack of a gap the 26-way ribbon cable will not fit the '+' models. One solution is shown below. Again the center connector is used for debugging purposes. This requires two 40-way connectors and 40-way ribbon cable, as well as the 26-way connector.

xmos-startkit-rpi-plus-assembled.jpg

 

 

Implementing SPI (spi_process) on the XMOS startKIT

Using the XMOS development environment (xTIMEcomposer) was covered in part 1. The screenshots below show the Windows version of xTIMEcomposer, but the Linux version looks identical (and possibly the Mac version may look similar too).

Create a new project (File->New->xTIMEcomposer Project) and give it a name such as spi-test.

new-proj.png

 

A source code file will be created (spi-test.xc) which will be used to implement the main body of software.

There is a lot of pre-created code that is available for including, and it can be browsed by selecting the xSOFTip tab on the lower-left pane in xTIMEcomposer as shown in the screenshot below.

xsoftip.png

 

Select SPI Slave Function Library as shown in the screenshot above. When you do so, the Developer Column on the right side of xTIMEcomposer will update and show help information. Scroll down to obtain documentation links for the library.

developer-column.png

 

At this point you can right-click on the SPI Slave Function Library in the xSOFTip tab and import the library into the workspace. I'm no expert on xTIMEcomposer so I'm probably using it wrong here, but the source code and header file for the library appeared in a separate folder in the Project Explorer (shown circled in blue below):

proj-explorer.png

 

The files were required to be in the spi-test folder (so that they appear as shown circled in green above), so to achieve that I manually copied the spi_slave.h and spi_slave.xc files from the module_spi_slave/src folder into the spi-test/src folder using Windows Explorer.

explorer-manual-move.png

 

The software uses the concept of ports to control output or to read input. There is a mapping between these logical ports and the physical pins on the chip. The mappings can be altered in certain combinations (see figure 3 in the Introduction to XS1 Ports PDF document).

 

Input/Output ports on XMOS devices can be 1, 4, 8, 16 or 32-bit wide. When designing with the part, you may wish to allocate certain functions to 1-bit ports, or other functions to multi-bit ports, and so figure 3 will be very useful to determine which ports and pins to use.

 

With the SPI slave code now in the spi-test/src folder, this code was modified slightly. The library code makes the assumption that the ports being used for the SPI interface are all 1-bit ports, whereas the Raspberry Pi SPI SS pin (CE1) is connected to a 32-bit port on the XMOS board. Figure 8 from the startKIT Hardware Manual PDF document is shown below. In the center in green you can see the 2x13-way header that connects between the XMOS board and the Raspberry Pi. On the left and right in blue are the physical pin names on the chip (X0D0, X0D11, etc.). The highlighted values are the logical port numbers. P1A, P1D and so on are single-bit ports. P32A1 is the first binary digit of a 32-bit port.

xmos-port-pinout.png

 

Quite a few changes were made to the SPI library, and the entire code is attached to the post, so only some snippets of code will be described here. There is no need to copy/paste; the full code attached at the end of this post can be used.

 

The SPI interface on the XMOS device is initialized as shown here. It is explained further below.

void spi_slave_init(spi_slave_interface &spi_if)
{
    set_clock_on(spi_if.blk);                       // enable a built-in clock block
    configure_clock_src(spi_if.blk, spi_if.sclk);   // clock it from the external SCLK pin
    configure_in_port(spi_if.mosi, spi_if.blk);     // tie the MOSI input to the clock block
    configure_out_port(spi_if.miso, spi_if.blk, 0); // tie the MISO output to the clock block
    start_clock(spi_if.blk);
    return;
}







 

As mentioned in the Part 1 post, I/O can be clocked in and out of the XMOS device at precise times. In the code above, the  set_clock_on function (defined in the XMOS xs1.h header file) is used to turn on one of the built-in clocking mechanisms in the XMOS chip. The diagram below (from the Introduction to XS1 Ports document) shows this mechanism in yellow. The configure_clock_src function is used to select an external clock (shown in blue in the diagram). It will be connected to the SCLK pin on the Raspberry Pi. The configure_in_port and configure_out_port functions are used to tie ports to the clocking mechanism. Both the MOSI and MISO signals (shown in green below) are configured to be tied to the clocking mechanism.

port-diag.png

 

The way serial data is handled on XMOS devices is really neat. The code here is explained further below. First, a structure is used to contain details about the ports that are desired to be used as the SPI interface.

typedef struct spi_slave_interface
{
    clock blk;
    in port ss;
    in buffered port:8 mosi;
    out buffered port:8 miso;
    in port sclk;
} spi_slave_interface;







 

The interesting lines above are the ones that refer to port variables mosi and miso. They have been declared as type port:8. If the variables are assigned 1-bit port addresses, then the XMOS device will automatically de-serialize the 1-wire stream of bits into 8-bit values.

It makes the rest of the SPI code really simple. Here is the code that manages SPI data input from the Raspberry Pi:

void spi_slave_in_buffer(spi_slave_interface &spi_if, unsigned char buffer[], int num_bytes)
{
    unsigned int data;
    unsigned int vlen=0;

    clearbuf(spi_if.miso);
    clearbuf(spi_if.mosi);

    for (int i = 0; i < num_bytes; i++)
    {
        spi_if.mosi :> data;    // read 8 bits from the MOSI line
        data=data<<24;          // move the byte to the top of the 32-bit word
        buffer[i]=bitrev(data); // reverse all 32 bits; the byte lands bit-flipped at the bottom
        if (i==2)
        {
            // bytes 1 and 2 hold the 16-bit length of the value portion
            vlen=(((unsigned int)buffer[1])<<8) | (unsigned int)buffer[2];
            if (vlen==0)
                break;
        }
        if (i >= vlen+2)        // stop once tag + length + value bytes have arrived
        {
            break;
        }
    }
}







 

In the code above, you can see that there is a for loop, and within the loop the line spi_if.mosi :> data; is used to read 8 bits of information on the MOSI line into the variable called data.

 

The next two lines are used to flip the bits around within the byte and then the data is stored in a buffer array.
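To see why the shift-then-reverse works, here is a quick Python check of my own (XMOS's bitrev() reverses all 32 bits of a word, so shifting the byte to the top first leaves it bit-reversed in the bottom byte):

def bitrev32(x):
    # reverse all 32 bits, like bitrev() on the XMOS device
    return int('{:032b}'.format(x)[::-1], 2)

data = 0xB2                       # example byte as sampled from the wire
print(hex(bitrev32(data << 24)))  # -> 0x4d: the byte with its bits flipped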

 

The next few lines need some explanation; they are related to the desired protocol. It was intended to create some general-purpose code that could be used for many things, not just XMP-1. If the Raspberry Pi sends data to the XMOS startKIT board, the XMOS board needs to know how many bytes of data to expect. This could be hard coded but it would be inflexible.

 

It was decided to use a very simple 'tag (or type), length, value' (TLV) protocol. The first byte that the Raspberry Pi must transmit is a tag or identifier in the range 0-255 (i.e. one byte). It is up to the user to decide what the values represent. For example, a value of 1 could mean "set motor speed" and a value of 2 could mean "set headlight brightness". The next two bytes are a 16-bit value that indicates how many value (i.e. data) bytes are to follow. I decided to limit this to 4 kbytes (4096 bytes), which should meet many use cases, but the actual value can be changed by adjusting a BUFLEN definition in the code.

 

Therefore the minimum number of bytes sent on the SPI interface is three (a tag, plus a length of 0x0000), and the maximum is 4099: a tag, a length of 0x1000 (4096 in decimal) and 4096 data bytes.

 

The protocol was refined slightly, so that an odd tag number means that the Raspberry Pi expects a response back in the following SPI communication that it initiates after the current TLV stream is complete, and an even tag number means that the Raspberry Pi expects no response back.
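To make the framing concrete, here is a small Python sketch of the TLV format as seen from the Raspberry Pi side (my own illustration, assuming the python-spidev package; the actual RPI-side code used in this post is the C program xmos_servo described later):

import spidev

def build_tlv(tag, value):
    # tag (1 byte) + length (2 bytes, big-endian) + value bytes
    assert 0 <= tag <= 255 and len(value) <= 4096
    return [tag, (len(value) >> 8) & 0xFF, len(value) & 0xFF] + list(value)

spi = spidev.SpiDev()
spi.open(0, 1)                      # bus 0, CE1: the chip select used here
spi.xfer2(build_tlv(0x02, [0x10]))  # even tag: no response expected
spi.close()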

 

This is a very basic protocol but it should meet many usual requirements. It is also explained in the table below where the blue number is the SPI byte index into the receiving 4099-byte buffer.

tlv-scheme.png

 

Going back to the earlier code snippet, it can be seen that the next few lines check the buffer[1] and buffer[2] contents on the fly while the SPI data is being received. The contents are expected to be the length, as seen in the diagram above (see the blue buffer index). As soon as the code has determined the remaining length, it will accept exactly that number of data bytes, and then the routine exits.

 

That covers SPI input to the XMOS board on the MOSI line. SPI output from the XMOS device on the MISO line operates in a similar manner, checking the length simultaneously on the MOSI line on the fly again, so that the function can exit as soon as the requested number of bytes has been transferred.

 

Inter-Process Communication

Now that SPI was figured out and a protocol had been implemented to exchange variable length data in either direction up to 4096 bytes long, some consideration was given to the main body of the program. It was clear that an XMOS core would be dedicated to handling the SPI task, but the rest of the code may need to reside in one or more additional XMOS cores.

 

In part 1, it was described how tasks run in parallel on different XMOS cores, and how the tasks can communicate with each other by pushing values into channels. There is another way of communicating between cores, which uses the concept of "transactions via interfaces" rather than channels. It is more flexible because you can send multiple variables of different types from one XMOS core to another. The transaction types are defined much like a C function prototype. This all becomes much clearer by looking at an example.

 

For instance, if an application had a task that controlled a display, then a sending task may want to turn the display on or off, or it may want to plot a pixel. The interface definition for the communication between the two XMOS cores could look something like this:

interface program_display
{
    void backlight(int state, int color) ; // transaction type 1
    void plot(int x, int y, int color); // transaction type 2
};







 

Interface communication is unidirectional, so if the display wanted to send information such as (say) the touchscreen state, then another interface would need to be used in the other direction. From this it is clear that interfaces have a client end and a server end. The diagram here shows two XMOS cores (in purple) and two interfaces (in gray); the first interface (called program_display) allows two different types of transactions (in blue) to occur.

interface-comms.png

The great thing about using interfaces and transaction types is that, much like C function prototypes, you can have return values and you can pass references to variables, so that even though the communication is always initiated by the client end of the interface, data transfer can occur both ways. Another very interesting feature not shown on the diagram is the ability of the server end to send a 'notification' to the client end. This can be a signal to the client to issue a transaction in the usual manner, to perhaps retrieve some data. This feature will be used in the XMP-1 code. More information on exactly how to code the interfaces and send data and notifications is given further below.

 

Designing the IPC architecture to handle SPI content

The SPI interface handling has already been described. Now the content of the SPI messages needs to be presented to a task in a useful manner for subsequent processing. Armed with the knowledge about interfaces and transactions, it was possible to begin allocating functionality to separate XMOS cores and designing the inter-process communication to get to a general-purpose framework that would allow useful message content to be sent from the RPI to the XMOS board and vice-versa, and be processed.

 

The diagram here shows what was developed (a similar diagram as before, except now there is a time sequence from top to bottom).

arch1.png

When the Raspberry Pi wants to send a message to the XMOS board, it constructs the message in the TLV format described earlier. The information is then clocked out on the MOSI signal wire (shown in green at the top of the diagram above). Simultaneously the XMOS device needs to send something back, but since there is no information yet to send back, the MISO line can contain garbage or all-zero values, as shown in pink. The spi_process function will collect the message into a buffer (an array of unsigned char) and then it will initiate a transaction to a separate data_handler XMOS core. The data_handler is responsible for processing the contents of the message and optionally sending back information to the spi_process XMOS core, so that any subsequent SPI exchange can send useful data back to the Raspberry Pi instead of garbage values.

 

The data could be sent between spi_process and data_handler by making a copy of the buffer. Instead, however, it is possible to just pass a pointer to the buffer memory. One way this can be done is to 'move' control of the pointer and buffer memory locations from spi_process to data_handler. Once data_handler is done with the message inspection, it can move control back to spi_process using the return value that transactions support. This is why the diagram above has a transaction called array_data with a parameter defined as a movable pointer and a return value defined as a movable pointer too. This way, only one XMOS core has access to the buffer memory at any one time.

 

These are the interfaces that are used:

interface to_rpi
{
    void code(unsigned char c);
};


interface from_rpi
{
    unsigned char* movable array_data(unsigned char* movable bufp);
};







 

The spi_process code allocates space for a buffer, and then passes control of the buffer to the data_handler code using the line buf=c.array_data(move(buf)), shown in the code here:

void
spi_process(interface to_rpi server s, interface from_rpi client c)
{
  unsigned char storage[4099];
  unsigned char* movable buf=storage;
  ...
  buf=c.array_data(move(buf));
  ...
  select
  {
    case s.code(unsigned char c):
      if (c==SEND)
      {
        spi_slave_out_buffer(spi_sif, buf, 4099);
      }
      break;
  }
}







 

The data_handler code obtains control of the buffer, and then if any response is to be sent to the RPI on a subsequent SPI transaction, the buffer is populated with a response. Finally, control of the buffer is passed back to the spi_process task.

void
data_handler(interface to_rpi client c, interface from_rpi server s)
{
  select
  {
      case s.array_data(unsigned char* movable vp) -> unsigned char* movable vq:
         // vp contains the data from SPI. We can do whatever we like with it here.
         // Any response is constructed here too, before handing the buffer back:
         vp[0]=0x22; // tag
         vp[1]=0x00; // length
         vp[2]=0x00; // length
         vq=move(vp);  // pass the pointer control back to spi_process
         tosend=1;
         break;
  }
  if (tosend)
  {
    c.code(SEND);  // send a code to spi_process so that it is aware there is data to send to RPI
  }
}







 

Earlier it was mentioned that if an odd tag value is sent by the RPI then this is an indication that the RPI expects a response message from the XMOS startKIT board on the subsequent SPI exchange. This is implemented by both spi_process and data_handler making a note that a return message is expected if the first byte received has an odd value. Once data_handler has finished constructing the return message in the buffer memory, it moves the buffer pointer back to the spi_process XMOS core and also sends a code transaction which could contain a message such as "ready to send". The spi_process XMOS core is now ready for any subsequent SPI exchange. If data_handler doesn't want to send any message back to the Raspberry Pi (for example if the tag was even valued) then the code transaction is not sent (or a different code could be sent, such as "not ready to send").

 

In the graphic diagram earlier you can see that the subsequent SPI exchange did transmit data back to the Raspberry Pi on the MISO wire.
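From the Raspberry Pi side, the odd-tag request/response pattern could then look like this (again a hedged Python sketch of my own, reusing build_tlv and the spi handle from the earlier sketch):

def request_response(spi, tag, value, max_reply):
    assert tag % 2 == 1                    # odd tag: a reply is expected
    spi.xfer2(build_tlv(tag, value))       # exchange 1: send the request
    reply = spi.xfer2([0x00] * max_reply)  # exchange 2: clock the reply out
    rtag, rlen = reply[0], (reply[1] << 8) | reply[2]
    return rtag, reply[3:3 + rlen]         # the reply is TLV-framed too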

 

To summarize, spi_process and data_handler present a fairly general-purpose capability to exchange data bidirectionally between the RPI and the XMOS board.

 

Implementing PWM (servo_handler) on the startKIT

To test out the general-purpose architecture, it was decided to use it to control multiple devices. The devices ended up being hobby servos, because they require very little electrical interfacing effort (no H-bridge or transistor driver is needed) and the servo input wire can be directly connected to an XMOS output pin. I didn't have many servos, so although the code implements eight-servo control, only two were used for XMP-1.

 

The code could be modified to provide DC motor control too (with a suitable external H-bridge circuit).

 

It was decided to use a single XMOS core to handle the eight servos. The diagram below shows the total of three XMOS processes used in the solution. The new addition is the servo_handler task which is shown on the right. This task has an array that stores the current servo values. As soon as the task starts up, the values are initialized to a centered value (or standstill for a continuous rotation servo) and then every microsecond the task wakes up to check if the servo PWM signal needs adjustment.  If it does then the servo port output is toggled. After 20msec the process repeats.

arch-servos.png

As before, the Raspberry Pi will send a TLV-format message to the startKIT board. This time the tag will be an even number, so no response is expected. Sixteen value (data) bytes will be sent, two per servo. The buffer will be moved to the data_handler task as before. It will now check for a tag number of 2, and if it matches then it sends a notification (shown as a dashed blue arrow in the diagram above) called data_ready to the servo_handler task, to make it aware that new servo setting values are available. When ready, the servo_handler task will move its servo settings pointer to the data_handler process for it to populate the servo settings with new values and move the pointer back to the servo_handler process.
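On the wire, that servo message could be built like this (my own continuation of the earlier Python sketch; tag 2 matches the check in the data_handler snippet below):

def servo_frame(widths_us):
    # eight 16-bit pulse widths in microseconds, two bytes per servo,
    # big-endian, wrapped in a tag-2 TLV (even tag: no response expected)
    assert len(widths_us) == 8
    value = []
    for w in widths_us:
        value += [(w >> 8) & 0xFF, w & 0xFF]
    return build_tlv(0x02, value)

print(servo_frame([1500] * 8)[:5])  # -> [2, 0, 16, 5, 220]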

 

This is the interface definition to achieve the notification and transfer of data:

interface servo_data
{
    [[notification]] slave void data_ready(void);
    [[clears_notification]] unsigned int* movable get_data(unsigned int* movable servop);
};







 

The data_handler XMOS core contains this snippet of code, which checks to see if the command from the RPI relates to servo control (by checking the first tag byte) and then stores the servo data locally until the servo_handler requests it:

void
data_handler(interface to_rpi client c,
             interface from_rpi server s,
             interface servo_data server ss)
{
  unsigned int servo_request[8];
  select
  {
      case s.array_data(unsigned char* movable vp) -> unsigned char* movable vq:
        if (vp[0]==0x02)  // servo update request
        {
            idx=3;
            for (i=0; i<8; i++)
            {
                servo_request[i]=(((unsigned int)vp[idx])<<8) | ((unsigned int)vp[idx+1]);
                idx=idx+2;
            }
            ss.data_ready(); // send notification to servo_handler
        }
        break;
      case ss.get_data(unsigned int* movable servop) -> unsigned int* movable servoq:
           for (i=0; i<8; i++)                                                
           {                                                                  
               servop[i]=servo_request[i];                                    
           }                                                                  
           servoq=move(servop);                                               
           break;                                                             
  } // end select
}







 

The servo_handler receives the notification and retrieves the data by temporarily passing control of local storage called servo_width to the data_handler to populate. It is achieved using the ptr=cc.get_data(move(ptr)) line below:        

void                                                                                                   
servo_handler(interface servo_data client cc)
{
  unsigned int servo_width[8];
  unsigned int* movable ptr=servo_width;


  select
  {
    case cc.data_ready():
      ptr=cc.get_data(move(ptr));              
      break;
  }
} 







 

The actual PWM output was achieved in a simplistic manner using a 1 microsecond timer. Since servos only need up to a 2msec pulse every 20msec, the XMOS core can be put to sleep for a large portion of the time.

select
{
    case t when timerafter(time+wait) :> time: // 100 ticks of the 100 MHz reference timer = 1 usec
        for (i=0; i<8; i++)
        {
            if (period==0)
            {
                servo_port[i] <: 1;
                wait=1*1E2;
            }
            if (period==swp[i])
            {
                servo_port[i] <: 0;
            }
        }
        period++;
        if (period>3000) // 3msec
        {
            period=0;
            wait=17*1E5; // 17 msec
        }
        break;
}







 

The oscilloscope screenshot below shows the signals to control four servos, all in the centered position. The pulse width is 1.5 msec here and the period is 20 msec.

servo-centered.png

By sending different commands to the XMOS startKIT, the pulse widths can be changed as shown below (the screenshot below has a different time scale, to show the pulse width in more detail).

servo-various.png

Raspberry Pi code: Handling the SPI interface

The XMOS source code has been described. On the Raspberry Pi, there are two programs that run; a small program called xmos_servo which handles the SPI interaction, and the web/application server running on Node.js (see the solution overview near the beginning of the post to recall these bits of functionality).

 

The xmos_servo code was based on existing code called spidev.c, which is a test program for SPI. It was modified to accept servo pulse width parameters on the command line.

 

To use the code, the SPI interface on the RPI needs to be enabled. This is done by typing sudo raspi-config on the RPI command line, selecting Advanced -> SPI settings and then ensuring Enabled is selected. Then, exit out by selecting Finish.

 

Issue sudo reboot (this may or may not be necessary) and then you should see in the /dev folder:

spidev0.0
spidev0.1
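
As a quick sanity check that the interface is usable (optional, and a sketch of my own assuming the python-spidev package; the post's own tooling is the C program described next):

import spidev

spi = spidev.SpiDev()
spi.open(0, 1)             # /dev/spidev0.1, i.e. bus 0, chip select CE1
spi.max_speed_hz = 500000
print(spi.xfer2([0x00]))   # clock one byte out; the reply depends on the slave
spi.close()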







 

From the home folder (/home/pi), create a folder called (say) development, and inside it create a folder called something like xmos:

mkdir -p development/xmos
cd development/xmos







 

Place the xmos.c file that is in the zip file at the bottom of this post into that folder, and compile the code:

gcc -o xmos_servo xmos.c







 

You can now run it and control the servos from the command line using this syntax as an example:

./xmos_servo 1500 1500







 

1500 is the desired pulse width in microseconds. Anything between about 500 and 2500 may be useful, depending on the actual servo. Since the servos are mounted on opposite sides of XMP-1, the two servos require a value higher than 1500 and a value lower than 1500 respectively for them to rotate in the same direction.

 

Low cost servos will need some trimming in the software to set the center point where no rotation will occur. It should be 1500, but for the servos that I used it was about 1460 microseconds.
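Putting those two points together, a small helper for computing the two command-line values could look like this (a sketch of my own; the 1460 center is just the trim found for my particular servos):

TRIM = 1460                      # standstill pulse width; nominally 1500

def wheel_pulses(offset_us):     # positive offset = forward
    # the servos face opposite directions, so mirror the offset
    return TRIM + offset_us, TRIM - offset_us

left, right = wheel_pulses(200)
print('./xmos_servo %d %d' % (left, right))   # -> ./xmos_servo 1660 1260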

 

Raspberry Pi web server and application software

Node.js is something worth experimenting with, because it speeds up application development tremendously. It allows you to write applications in JavaScript. Here the entire web server and application is written in JavaScript. It means that no Apache or Lighttpd web server needs installation and configuration.

 

In a post around Christmas time the procedure to install Node.js was described. Here, a similar exercise is undertaken (some of it is cut-and-pasted here) but with a more recent version of Node.js.

 

Issue the following commands from the /home/pi folder:

cd /home/pi
mkdir mytemp
cd mytemp
wget http://nodejs.org/dist/v0.10.26/node-v0.10.26-linux-arm-pi.tar.gz
tar xzvf node-v0.10.26-linux-arm-pi.tar.gz
sudo mkdir /opt/node
sudo cp -r node-v0.10.26-linux-arm-pi/* /opt/node







 

The above will result in the node executable being installed at /opt/node/bin.

The /etc/profile file needs two lines added before the line that says ‘export PATH’. To do this, type

sudo vi /etc/profile







 

(assuming you know how to use vi; otherwise use an alternate editor). These are the two lines to be added:

NODE_JS_HOME="/opt/node"
PATH="$PATH:$NODE_JS_HOME/bin"







 

Then, reboot the RPI:

sudo reboot







 

Once it comes back alive, the next step is to install Socket.IO:

cd ~/development/xmos
npm install socket.io







 

The above command will take about 5 minutes to complete.

 

The HTML file that will be served to the user is in a file called index.html. It has two parts. The first part (actually in the second half of the file) contains lots of buttons, for example here the forward speed buttons are implemented:

<p>
  <label for="id_button_turn">Forward</label>
  <input type="submit" name="id_button_fwd_high" id="id_button_fwd_high" value="High"  onclick="fwd_click(3)">
  <input type="submit" name="id_button_fwd_med" id="id_button_fwd_med" value="Med"  onclick="fwd_click(2)">
  <input type="submit" name="id_button_fwd" id="id_button_fwd" value="Low"  onclick="fwd_click(1)">
  <input type="submit" name="id_button_stop1" id="id_button_stop1" value="Stop"  onclick="stop_click()">
</p>







 

The second part (near the top of the file) contains JavaScript code that sends the button click to the Node.js web server that will run on the RPI. It is using Socket.IO to send data:

  function fwd_click(i){
    var statustext = document.createTextNode("Status: Setting FWD... ");
    clear_stat();
    stat_div.appendChild(statustext);
    socket.emit('action', { command: 'fwd'+i });
  }







 

The code also implements a return status bar, and the entire file is in the attached zip file.

browser2.jpg

 

The remainder of the software on the RPI is coded in JavaScript in a file called index.js.

These few lines of code implement a simple web server:

var fs = require('fs'); // also needed by the handler below
var app = require('http').createServer(handler)

// HTML handler
function handler (req, res)
{
  console.log('url is '+req.url.substr(1));
  reqfile=req.url.substr(1);
  if (reqfile != "xmp-logo.png")
  {
    reqfile="index.html"; // only allow this file for now
  }
  fs.readFile(progpath+reqfile, // progpath is defined further below
  function (err, data)
  {
    if (err)
    {
      res.writeHead(500);
      return res.end('Error loading index.html');
    }
    res.writeHead(200);
    res.end(data);
  });
}







 

The snippet of code that allows the JavaScript program to execute the xmos_servo program and pass the servo parameters is shown here:

var child = require('child_process'); // required for child.exec below
var progpath='/home/pi/development/xmos/';
prog=child.exec(progpath+'xmos_servo '+value[0]+' '+value[1], function (error, stdout, stderr){});
prog.on('exit', function(code)
{
  console.log('app complete');
});
The Socket.IO connection is handled using this code:
// Socket.IO comms handling
// A bit over-the-top but we use some handshaking here
// We advertise message 'status stat:idle' to the browser once,
// and then wait for message 'action command:xyz'
// We handle the action xyz and then emit the message 'status stat:done'
io.sockets.on('connection', function (socket)
{
  socket.emit('status', {stat: 'idle'});
  socket.on('action', function (data)
  {
    console.log(data);
    cmd=data.command;
    console.log(cmd);
    // perform the desired action based on 'command':
    ret=handleCommand(cmd);
    socket.emit('status', {stat: ret});
  }); // end of socket.on('action', function (data)
}); // end of io.sockets.on('connection', function (socket)







 

The remainder of code inside the index.js file implements the logic that maps the command from the user (via Socket.IO) into values for the servos. Any trimming of values for low cost servos is done in this code using hard-coded values.
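As a rough illustration of that mapping (in Python rather than the JavaScript of index.js, and with command names and offsets of my own choosing, reusing wheel_pulses from the earlier sketch):

OFFSETS = {'fwd1': 50, 'fwd2': 150, 'fwd3': 300, 'stop': 0}

def handle_command(cmd):
    left, right = wheel_pulses(OFFSETS.get(cmd, 0))
    # index.js shells out at this point: ./xmos_servo <left> <right>
    return left, right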

 

Assembling XMP-1

XMP-1 was assembled literally using string. It couldn't get more simplistic :-) Ordinary string may loosen, so lacing cord was used (it has a non-stretchy core, and a soft outer coating that doesn't loosen easily). The base of XMP-1 was a piece of pre-cut 100x100mm perforated sheet steel from ebay; there are lots of sellers for this. In H.G. Wells fashion, all the messy mechanics/charging/power stuff was down below on the underside, and the RPI/XMOS startKIT were on top.

 

Here the servos were tied to the base, and the front ball/caster (from Tamiya) was mounted with some screws and M3 hex spacers and bolts/nuts to get the height correct. With lacing cord, the servos are extremely secure and don’t move.

xmp-top-initial.jpg

 

The photo below shows the underside view. The wheels were from Pololu (they sell two kinds in this range, one does not fit on standard servos – this one does).

xmp-underside-initial.jpg

 

Below is a photo of the underside with a few more bits added. I just used whatever charger/DC-DC converter and battery that I had. I used a LiPo Rider Pro board (blue board in this photo) which contains the charger and 5V DC-DC output converter combined on one board. This supplied sufficient power to drive the two servos and the XMOS board. The LiPo cell was from Farnell (shown in the center of the photo, in-between the servos). The cell is a high cost part in this build but it is not safe to trust unbranded LiPo batteries or batteries from suppliers who do not indicate where the battery came from, especially for this type of project. I think it will give many years of good service long after XMP-1 has been disassembled. If a different battery is used, ensure it is not much higher capacity (this is a lot of energy), and that it has a built-in protection circuit. Even so, I would not consider leaving a project charging unattended. In a similar vein, it would be safer to use a plastic base (and drill holes, or have them laser cut as an example) to minimise risk of a short to the chassis especially from sharp edges.

xmp-underside-partial.jpg

 

The RPI is powered from a smaller, separate LiPo battery – this is because under load, the servos may consume enough power to cause the RPI to reset. By far the easiest thing to do is just run it from a separate supply.

 

The photo below shows the following things circled in different colors:

Red: Tamiya ball/caster

Green: 5V output DC-DC converter for RPI from ebay (the exact model I used is unavailable but there are plenty of others)

Yellow: Charger circuit for RPI LiPo battery from Olimex (they also have suitable batteries for powering the RPI, such as this one)

Blue: A small stripboard to serve as the supply junction point for the two servos and the 5V output from the LiPo Rider Pro board (which can be seen in blue in the background).

underside-castor.jpg

 

Here is the final result:

xmp-rpi-side-zoomed.jpg

 

Summary

It is hoped that some bits of this post may be useful. As can be seen, it is fairly straightforward to connect up the Raspberry Pi to the XMOS startKIT and begin making projects with high-speed, low-latency timing. Unlike other solutions, the XMOS startKIT is extremely low priced, and has incredible timing performance – absolutely ideal for robotics projects. XMP-2 will replace low-cost servo motors with brushed DC motors in a complete redesign which will need additional tools. XMP-1 however provided a lot of fun and a good opportunity to learn XMOS, with a very short assembly time (a few hours).

 

The SPI code can be re-used for any high-speed input/output that may be needed for RPI/XMOS combined projects.

 

For projects that require web based control, Node.js is highly worth considering. It runs quickly on the RPI, and needs no configuration (unlike Apache) and allows applications to be written quickly in a small amount of code.

 

 

Code is available here.

I have had a Raspberry Pi for a while now, and in that time I have learnt a great deal. From nothing to lots of projects, I have managed to make many things with my Pi, including a geocaching device called a cacheberry, a robot and even a wildlife cam. I always blog my projects at www.mypifi.net/blog, which originally was for my geocaching and later a reminder to me of how I built something. But now, as more and more people discover my blog, I have chosen to blog how I make stuff so others can try; hopefully if I find a problem and solve it, it helps someone else out in the same boat.

This is a project that I embarked upon after reading Raspberry Pi Santa Catcher with Pi NoIR and PiFace CAD by Frederick Vandenbosch (fvan), who gets credit for a good majority of this write-up. I basically re-tailored it to focus on a practical problem that we face in our daily lives, especially those of us who have a hard time finding a parking spot at work: how can I be notified when a parking spot is vacated in a specific area of my workplace parking lot? Obviously, this is not a fool-proof solution, and the end result is more of a proof-of-concept than a full-blown solution. However, I consider it a good step toward accomplishing that goal, and I hope others will expand it further and report back to spread the benefit to others.

I have followed more of a bullet-point "instruction set" style, whereby someone can follow along and get things running rather quickly. Fvan has done a better job elaborating on some of these, so you can refer to his write-up for more details.

Before jumping into the content, an important addition that I made beyond the instructions is to share some pitfalls and the associated lessons-learned that I have gained as I implemented this project. These will be inserted within the instructions and typically preceded with a "Note" label; then they will be consolidated in the final "Wrap-Up" section.

Finally, and although this may be considered "a given", I'll say it anyway: proper due diligence should be exercised when someone actually puts this project into practice, especially in terms of notifying people that monitoring is in progress. While the huge and fairly conspicuous camera may help with that, it may also be helpful to have a sign informing them that the property is being monitored - all of that, of course, after getting proper authorization from the parking lot "owners."

 

Here's an overview of the topics that I will cover:

  1. Hardware Setup
  2. Software Installation
  3. Activating/Deactivating Monitoring
  4. Notifying based on movement
  5. Wrap-up and lessons-learned

 

Let's get right into it.


1. Hardware Setup: Required Parts (all obtained through Element14):

  • Raspberry Pi Model B - 8GB SD Card pre-installed with NOOBS - (SKU: 04X5042)
  • Pi NoIR (No InfraRed) Camera - (SKU: 08X2023)
  • PiFace CAD (Control And Display) - (SKU: 01X3013)
  • Enclosure for PiFace/Pi/Camera - (SKU: 95W3070)
  • Wi-Pi Dongle/Module - (SKU: 07W8938)
  • Dummy Camera, Monoprice - (Amazon ASIN: B007VDTTTM)

Hardware.jpg

More About Pi NoIR

  • NoIR (No InfraRed) Camera
  • Infrared filter removed
  • Black PCB (vs. usual green)
  • 5 megapixel resolution
    • 2592 x 1944 px static images
    • 1080p30, 720p60 and 640x480p60/90 video
  • Comes with instructions (easy)

2. Software Installation:

  • NOOBS SD card
    • Booted Pi with pre-installed NOOBS (from Element 14)
    • Chose Raspbian as the installation
    • This took almost 7 minutes
    • Rebooted
    • After installation, VERY important:
    • sudo apt-get update
    • sudo apt-get upgrade
  • Configure Camera, SSH, and SPI
    • Ran raspi-config (with sudo)
      • Enable camera module
        • Basic testing for camera after enabling it from raspi-config
        • Command: raspistill -o noirimage.jpg
        • Duration: took about 7 seconds to take picture and save it
        • The key thing with Pi NoIR is that:
          • IR light is visible to it but not to Pi Camera
          • Used a TV remote control as an IR light source
      • Enable SSH access (Advanced -> A4 SSH)
      • Enable SPI (for PiFace CAD at Advanced -> A5 SPI)
        • In raspi-config:
          • Option 8 Advanced Options -> A5 SPI -> set to Yes -> select OK -> Finish
          • Note: troubleshooting: if you don’t see this option (A5 SPI) in the options
            • sudo apt-get update
            • sudo apt-get upgrade
  • Set up PiFace CAD

sudo apt-get install python3-pifacecad

  • Test PiFace CAD Installation
    • Test that everything has been installed by running the sysinfo.py program:
    • sudo python3 /usr/share/doc/python3-pifacecad/examples/sysinfo.py
      • Note: don’t forget the sudo, otherwise you will get a permission denied error
    • You will get something like the following:

PiCAD.jpg

  • sysinfo.py gives details about the Raspberry Pi’s IP address, temperature, and memory.
  • I also tested other Python scripts (There are several gz files under that same folder. Unzip (sudo gunzip *.gz) and test.)
  • Weather:
    • sudo python3 /usr/share/doc/python3-pifacecad/examples/weather.py

Weather.jpg

  • Even hangman
    • sudo python3 /usr/share/doc/python3-pifacecad/examples/hangman.py

hangman.jpg

  • Set up IR remote with PiFace CAD
    • Helps us control our PiFace CAD with a TV remote
    • Before starting:
      • Set up LIRC (Linux Infrared Remote Control):
        • Software that handles interfacing between hardware & Pi media center
        • Mostly included in recent Pi distros
          • If not: apt-get install lirc
  A. Set up Infrared
  • This sets up the Raspbian Infrared module to use GPIO pin 23
    • Command: sudo modprobe lirc_rpi gpio_in_pin=23
    • No output is displayed as a result of this command but you can test it out.
  • First run “pidof lirc” to see if LIRC processes are running
    • If pidof is not installed: sudo apt-get install pidof
    • If pidof returns a number then kill that process
    • Command: sudo kill [-9] <process number>
  • Test that it works with “mode2” program:
    • Command: mode2 -d /dev/lirc0
    • If it says: “No such file or directory” then LIRC is most likely not installed. If so, follow the earlier instruction to install it.
  • To make sure the module is loaded each time you boot, add lines to /etc/modules:
    • Command: sudo vi /etc/modules
    • Add the following two lines:
      • lirc_dev
      • lirc_rpi gpio_in_pin=23
  • To set up your Pi to receive IR data from the PiFace CAD
    • Back up previous Linux IR control config
      • Command: cp /etc/lirc/hardware.conf ~/.
    • Edit the hardware.conf file
      • Command: sudo vi /etc/lirc/hardware.conf
      • In /etc/lirc/hardware.conf, change these values as follows:
        • LIRCD_ARGS="--uinput"
        • DRIVER="default"
        • DEVICE="/dev/lirc0"
        • MODULES="lirc_rpi"
      • Save and Exit
  • Reboot

     B. Set up Your Remote

  • The /etc/lirc/lircd.conf file tells LIRC about your remote control
  • Every remote is different, so we need to download or generate a config for each remote
  • Back up original LIRC config file
    • Command: cp /etc/lirc/lircd.conf ~/.
    • Note: This will save a copy of your original lirc configuration to your home directory. If you ever want to restore these original settings use the command:
      • sudo cp ~/lircd.conf /etc/lirc/lircd.conf
  • Try and find your remote control config file online:
    • In midori or other browser, visit: http://lirc.sourceforge.net/remotes/
    • Note: I initially chose to use the Apple TV remote, but that didn’t work since the config files for it at the above link seem to be outdated.
    • I then used a Hitachi CLU-4997S and picked the closest config file available, CLU4341UG2.

remotecontrol.jpg

  • Under: http://lirc.sourceforge.net/remotes/
    • Right-click on the Hitachi config file (not the .jpg file)
    • Click Save as..., then copy it to the /etc/lirc/ directory
    • Save it there as lircd.conf
      • Command: sudo cp <file> /etc/lirc/lircd.conf
  • sudo reboot
  • Log in and issue command: irw

TestingRemoteWithIRW.jpg

    • Every time you press a button on the remote, irw outputs a confirmation that it received the key signal. Great!
  • Now that we're done with the IR/Remote setup, let's move on to the next two topics, namely: Detection and Activation.
  • Detection
    • Main goal: Use the motion application, together with the Pi NoIR camera, to detect movement. When motion is detected, it records until that motion stops.
    • The motion application generates still images of moving objects. It can also be configured to generate videos, and its live camera stream can be watched remotely.
    • The motion setup partially follows the instructions found at the following link. I have included the process that I followed below, but you can always refer to this link if you'd like to implement it yourself.

http://www.codeproject.com/Articles/665518/Raspberry-Pi-as-low-cost-HD-surveillance-camera

  • The instructions that I followed and documented:
    • Enable WiFi (done earlier - remember to use static IP configuration)
    • Assemble hardware:
      • Although it doesn’t look complicated, it took a while

NoIRCameraLit.jpg

  • Note 1: A really important thing to remember is to set up the Pi in a way that doesn’t risk tampering with the NoIR camera or the Pi itself.
  • Note 2: You may want to disable the camera LED so it doesn’t light up every time it records. This can be done by adding the following line to /boot/config.txt: disable_camera_led=1


  • Installing the motion detection software
    • Command: sudo apt-get install motion
    • It appears that the current version of motion does not (yet) support the Pi camera module. Therefore, we need to download and install a special build with such support. Commands:
      • cd /tmp
      • sudo apt-get install -y libjpeg62 libjpeg62-dev libavformat53 libavformat-dev libavcodec53 libavcodec-dev libavutil51 libavutil-dev libc6-dev zlib1g-dev libmysqlclient18 libmysqlclient-dev libpq5 libpq-dev
      • wget https://www.dropbox.com/s/xdfcxm5hu71s97d/motion-mmal.tar.gz
      • Unpack downloaded gz file. Command:
        • tar zxvf motion-mmal.tar.gz
      • Now we must replace the version of motion that we installed earlier with this downloaded build. Commands:
        • sudo mv motion /usr/bin/motion
        • sudo mv motion-mmalcam.conf /etc/motion.conf
      • Enable the motion daemon so that motion always runs. Command:
        • sudo vi /etc/default/motion
        • Change the following parameter value (after =) to yes:
          • start_motion_daemon=yes
      • Set the permissions correctly, so that the motion user is able to run the motion application after rebooting. We do so with the following commands:
        • sudo chmod 664 /etc/motion.conf
        • sudo chmod 755 /usr/bin/motion
        • sudo touch /tmp/motion.log
        • sudo chmod 775 /tmp/motion.log
      • Note: if you run into problems, you may need to remove and re-install motion (I ran into this, and that was the most effective solution after pulling my hair out for a while). To remove it, use the following command, then re-install using the earlier instructions.
        • sudo apt-get remove motion
    • Time to configure motion. For that, let's edit and make the following changes to /etc/motion.conf. Command:
      • sudo vi /etc/motion.conf
      • Make the following changes (a consolidated excerpt of the resulting lines follows this list):
        • daemon on  (motion always runs as daemon in bg)
        • logfile /tmp/motion.log  (store log file in this dir)
        • width 640
        • height 480
          • Note: The default values for width and height were initially set to 1280x720. Unfortunately, for some reason, that caused the system to crash, which is why I reduced them to the above values.
        • framerate 2  (no need for realtime video, 2 pics/s ok)
        • pre_capture 2
        • post_capture 2 (rec 2 frames before/after motion detected)
        • max_mpeg_time 600 (10 mins of movie time)
          • Note: this may be called max_movie_time
        • ffmpeg_video_codec msmpeg4 (this format allows video to run anywhere)
        • stream_localhost off  (allow access to live stream anywhere)
        • stream_auth_method 2 (protect with uid/pwd)
        • stream_authentication <uid>:<pwd>
          • Note: I do NOT recommend that you use either one of these two "authentication" parameters right off the bat, until you become comfortable with a working setup. I wanted to include them for the sake of completeness
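
Pulling those edits together, the changed lines in /etc/motion.conf end up looking roughly like this (the stream_authentication values are placeholders, per the note above):

daemon on
logfile /tmp/motion.log
width 640
height 480
framerate 2
pre_capture 2
post_capture 2
max_mpeg_time 600
ffmpeg_video_codec msmpeg4
stream_localhost off
stream_auth_method 2
stream_authentication <uid>:<pwd>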
    • Reboot: sudo reboot
    • After the reboot, the red light of the camera module should be on, which shows that motion is currently using the camera to detect any movement.
      • Note: you can take that out (as mentioned earlier) by adding the following to /boot/config.txt:
        • disable_camera_led=1
    • Open Firefox on another machine on network & enter the following URL:
      • <Pi IP>:8081
    • You will notice that it starts streaming
      • Note: Chrome did not work. After doing some research into it, it turns out that “Chrome does not properly handle the mjpg stream”


3. Activating/Deactivating Monitoring

    • Main goal: Keep monitoring the spots using the motion application and control the activation and de-activation of such monitoring
    • Activation/Deactivation
      • The configuration made through motion’s web interface does not persist, so we need to automate this task
      • Borrowed fvan’s scripts (see later)
      • Two main parameters:
        • output_pictures: to generate a still image of moving object
        • ffmpeg_output_movies: to generate a movie for movement
      • Command to test initial configuration state for params:
        • cat /etc/motion.conf | grep -e output_pictures -e ffmpeg_output_movies
        • Output:

activateDeactivateScript.jpg

  • Command (sed) to disable both params:
    • sudo sed -i -e 's/output_pictures.*/output_pictures off/g' -e 's/ffmpeg_output_movies.*/ffmpeg_output_movies off/g' /etc/motion.conf
  • Output: None
  • Command to check params again after running script to disable params (same as above but we want to check the effect of running the above sed command):
    • cat /etc/motion.conf | grep -e output_pictures -e ffmpeg_output_movies
    • Output:

catCommandAfterDeactivation.jpg

    • Notice how both params are now set to off (due to the sed command)
  • Now let’s test changing from off to on (enable), with the following command:
    • sudo sed -i -e 's/output_pictures.*/output_pictures best/g' -e 's/ffmpeg_output_movies.*/ffmpeg_output_movies on/g' /etc/motion.conf
    • Output: None
  • Again, let's run the same command as before to check the params after enabling them:
    • cat /etc/motion.conf | grep -e output_pictures -e ffmpeg_output_movies
    • Output:

sedActivateScript.jpg

    • Great! So now, we have a fairly straightforward (well, at least short) command that helps us enable/disable the monitoring. Let's now turn our attention to one of the last tasks in this configuration, which is to use the PiFace CAD as a tool to visualize the status (param changes, etc.).
  • LCD Visual Indicator
    • Main Goal: Need a visual indicator once the parameters have been changed (activated/de-activated)
    • How: Display a message on the PiFace’s LCD screen
    • As suggested by fvan, it is a good idea to combine the previous (sed) commands and LCD visual indicator in one script
      • The main snippets from the script are shown below, but you're encouraged to pick it up from his post, at the reference link provided at the end of this post.

LCDVisualDisplayScriptSnippet.jpg
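
In case the snippet above is hard to read, here is a rough reconstruction of the idea in Python 3. This is my sketch, not fvan's actual script: the file name activate_monitoring.py and the service restart are my assumptions; the sed patterns and LCD calls are the ones used elsewhere in this post.

#!/usr/bin/env python3
# activate_monitoring.py -- sketch of the sed-toggle + LCD-indicator idea (names illustrative).
import subprocess
import sys
import pifacecad

def set_monitoring(enable):
    pictures = 'best' if enable else 'off'   # same values as the sed commands above
    movies = 'on' if enable else 'off'
    subprocess.call(['sudo', 'sed', '-i',
                     '-e', 's/output_pictures.*/output_pictures {0}/g'.format(pictures),
                     '-e', 's/ffmpeg_output_movies.*/ffmpeg_output_movies {0}/g'.format(movies),
                     '/etc/motion.conf'])
    # motion reads its config at startup, so restart the daemon (assuming the init script is "motion")
    subprocess.call(['sudo', 'service', 'motion', 'restart'])
    # Show the new state on the PiFace CAD LCD
    cad = pifacecad.PiFaceCAD()
    cad.lcd.clear()
    cad.lcd.write('Monitoring ON' if enable else 'Monitoring OFF')

if __name__ == '__main__':
    set_monitoring(len(sys.argv) > 1 and sys.argv[1] == 'on')

Invoked as python3 activate_monitoring.py on (or off), which is what the remote-control configuration below will call.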

  • We need to make the remote control trigger these scripts
    • Refer to the earlier section (Set up Your Remote) for configuring the remote, but remember:

wgetRemote.jpg

  • There's a standard config file that maps specific key presses to actions. The file name is: .lircrc
    • It reacts to signals decoded by lircd
    • Note 1: this is placed in your home directory
    • Note 2: you can optionally create a system-wide config file in /etc/lirc/lircrc which would be used when no .lircrc is found in home directory
  • Syntax for .lircrc

SyntaxForLircr.png

  • Let’s see our .lircrc file

OurLircrScript.png

    • Notes:
      • As it states in this figure, use python3 which we installed earlier.
      • Button config shows sequence of keys. In this case, similar to fvan's, I used 12346 for activating and 12356 for deactivating monitoring
      • As explained in the earlier Syntax figure, repeat was set to 0 so that repetitions of 1 (for example) are not interpreted as 11, 111, etc.
      • The "prog" parameter told .lircrc that irexec will respond, and therefore should be running, so let's do that.
    • To ensure irexec is running, let's make an entry in /etc/rc.local
      • Command: sudo vi /etc/rc.local
      • Before exit 0, add:
        • sudo -u pi irexec -d
    • To monitor, open the following URL in Firefox: http://<IP>:8081
      • A kind-of lame pic of a ceiling (just for testing at this point)
  • Testing
    • Main goal: test that this setup is working and that I'm getting notified once a change takes place in the parking spots
    • Activating system:

ActivatingSystem.jpg

  • Before detection

BeforeDetection.jpg

  • Waved hand in front of camera (for basic testing)

AfterWavingHand.jpg

  • De-activating System

DeactivatingSystem.jpg
4. Notifying based on movement

  • Main goal: how to get notified when the system detects a vacant spot

Just like fvan, I used the Apache web server to host the content, a script that continuously checks for recent content (using timestamps), and e-mail to send that content out.

   a. Web Server Setup

    • We need to offer the content from motion on a web server so that we can access it from any browser that can reach it. The first thing is to install the server:
      • sudo apt-get install apache2
    • Like fvan, I chose to link the media directory to the web server by creating a soft link to it as follows:
      • sudo ln -s /path_to_media_files/ /var/www/media

   b. Checking for available (recent) media

    • Look for available jpg or avi files that were modified in the last 2 minutes:
      • find /path_to_media_files -type f -mmin -2 | grep -e "avi" -e "jpg"

   c. Script that checks for media and notifies

    • I leveraged fvan's script with two changes:
      • I added another function (method) and called it sendText(), which sends an SMS message out to my cell phone. This is almost identical to sendMail() but with an e-mail of <id>@txt.att.net
      • I changed the "last modified" time to 2 minutes instead of 5

 

Final Test:

So, after mounting the camera, positioning it toward an area in my driveway, pressing the proper combination on the remote control (to activate monitoring), and driving in, below are snapshots of what the system captured. Note that the figures below are from several detection events; I tried to include the best ones, but you get the gist of it.


Here's a sample e-mail notification that I received: 

EmailNotification.png


A sample e-mail content:

EmailNotificationContent.png

Another e-mail content:

EmailNotifDetection.jpg

Here's a sample SMS notification that you would receive:


photo 1.PNG


 


And here are the pictures that were captured at the links I received. The first one was during a pitch-black - no movement - time frame.

Detection1.jpg


The second one was after Pi NoIR detected the lights of my car as I started it on the main street.

Detection2.jpg


The third is when I turned onto my driveway.

Detection3.jpg


This is from another trial, after I realized that having the camera face the direction of the car (head-on) is probably not a good idea, so I positioned it to the side. The image below shows the moment it detected the car as it was coming off of the main street onto my driveway.

Detection4.jpg

5. Wrap-Up & Lessons-Learned

 

This was a lot of fun, very educational, and eye-opening. I hope others will benefit from it.


See the original Drinkmotizer video - and how it was built! (follow this link)


Celebrate the winter holidays with your own drink mixing robot. Drinkmo never lets you down!


Margarita Screenshot.png

The Drinkmotizer interface.

 

 

But, Drinkmo is different too. I made a few changes to its operation. First, I changed out the sub-100 oz/in stepper motor for a 280 oz/in alternative. I did this so that the drink platform could blast through any obstacle. Whether it be debris, dry triple sec, or someone’s finger… right through!

 

Second, I swapped out the stepper driver for a Gecko G210X, single-stepping the motor. I was originally doing 1/10th micro-stepping, which limited the maximum speed. Now, I am able to step up the speed, so to speak. I set it to be a little faster. Future modifications will make it move nightmarishly fast, you have my word on that.

 

Third, the onboard air regulator was originally taking 800 psi air from a paintball gun tank to operate the chaser module. The problem here was frequent air line bursts. So, I made an adapter to go from a portable air compressor to the Drinkmo regulator system. With a maximum of 100 psi from the air compressor, air line compromises were over. I also upped the chaser bottle pressure from 5 psi to 15 psi to force the chaser out faster.

 

I have big plans for Drinkmo in the coming months. Almost a complete overhaul. Cheaper kits to follow too!

 

C

See more news at:

http://twitter.com/Cabe_Atwell

PEarle

Tomcat on my Pi

Posted by PEarle May 2, 2014

I decided to install Tomcat on my Pi and see what the performance is like.

 

The first thing to do is make sure I have the latest versions of all my currently installed packages - e.g. use the command

 

sudo apt-get update



 

Step 1 - Install Java

 

Java installation is very straightforward with the following command

 

sudo apt-get install oracle-java7-jdk



 

Once that has finished, you can check that Java installed correctly with the command java -version. You should get something like this:

 

java_blog_1.png

 

To double-check that it was installed, I wrote and compiled a simple Java class - of course, it had to be "HelloWorld".

 

Using the 'nano' editor, I created a file called HelloWorld.java, which contains the following:

 

public class HelloWorld {
     public static void main(String args[]) {
          System.out.println("Hello World !!");
     }
}



 

I compiled this with the command javac HelloWorld.java which created class file HelloWorld.class.

 

Finally, I executed the class using the command java HelloWorld and saw the expected output - i.e.

 

java_blog_2.png

 

Step 2 - Install Tomcat


Download Distribution


Get the Tomcat distribution and unpack it as follows:

 

wget http://mirrors.axint.net/apache/tomcat/tomcat-7/v7.0.42/bin/apache-tomcat-7.0.42.tar.gz
tar xzf apache-tomcat-7.0.42.tar.gz

 

This will unpack the distribution underneath your current location (e.g. /home/pi/apache-tomcat-7.0.42)

 

Configure user

 

Before we start Tomcat, we need to configure an administrator user. Edit the file tomcat-users.xml in the conf subdirectory (e.g. sudo nano conf/tomcat-users.xml) and add an entry under <tomcat-users>

 

e.g. add <user username="system" password="raspberry" roles="manager-gui"/>
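
In context, the edited conf/tomcat-users.xml ends up looking something like this (a sketch; Tomcat's own documentation also declares the role explicitly, so I've included that line):

<tomcat-users>
  <!-- role declaration, per the Tomcat docs -->
  <role rolename="manager-gui"/>
  <!-- the administrator user added above -->
  <user username="system" password="raspberry" roles="manager-gui"/>
</tomcat-users>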

 

java_blog_3.png

 

Start up

 

To start up Tomcat, run startup.sh in the bin sub-directory - e.g. sudo bin/startup.sh

A small informational message will display and then the console will free up. Tomcat is now running in the background - to check this, enter ps -ef | grep java and you should see something similar to the following:

 

java_blog_4.png

 

The best test, of course, is to try it in a browser - open a browser (which can be on another machine on your network) and enter the URL, e.g. 10.13.36.255:8080 (i.e. <server_name>:8080), and you should see something like this:

 

java_blog_5.png
