Introduction

On the evening of the 5th of April, 7 of 9 was moved to the exhibition location. This changes the rules for a while: the mannequin must remain active for two months while development goes ahead.

While the development continues, I have prepared a first beta version; the real limitation is the way I can manage interventions, software updates, and the next update/upgrade steps. Software updates are another constraint: as the automaton runs during the day, I can't leave a development phase half finished; every change should be calibrated so that, once finished and tested, it adds a new feature to the system without interruption. For example, the replacement of the left eye implant described in Episode 8 was done in one hour: I swapped the part, tested it, checked that the camera was working, then restarted the system with the change in place.

With software additions and updates, things are more complicated; if a change is not well tested beforehand, there is a high risk that the program crashes or hangs, leaving the system in an unrecoverable condition that requires my presence on-site for a physical reset.

 

7 of 9 Beta

The beta version of the mannequin was necessary to have it working with some basic features during the first days of the exhibition; this also turned out to be a good approach for developing the entire project. Seeing every single part working – or not, or working with issues – is a very good way to exercise the features I planned for the final release, checking and testing them live one by one.

The Mannequin Development Kit (MDK)

With all the parts assembled and connected as they should be in the final version, most of the remaining work is a question of software, although some minimal on-the-go changes to the hardware setup may also be needed. This approach has been made possible by a proper configuration of the development environment on the Raspberry PI 3B+, which is the external door of 7 of 9. The considerable processor power and speed of this latest Raspberry PI model play a key role, as does the on-board WiFi connection.

Reminder: all the software, designs, and 3D printable STL files of this project are available under the LGPL 3.0 license on GitHub at the following link: https://github.com/alicemirror/mannequin

Step 1 - Raspbian

Together with the Challenge Kit, I received the NOOBS micro SD card, from which I installed the full desktop version of Raspbian. About the NOOBS content, it is worth raising a warning: when you start the PI with the micro SD card for the first time, you access a menu from which you can select between several kinds of installations, including two Raspbian versions: lite and full. It is interesting to note that you can select more than one option; in that case, you get a Raspberry PI with multiple operating systems and a multi-boot menu after reset. I installed the full desktop Raspbian Linux.

As usual, after upgrading the installation to the latest version of all components and configuring the system, the PI was ready for the custom installation and development.

The operations described above were done in the lab with a keyboard and an HDMI monitor connected to the PI. Then, after connecting the device to the WiFi, the most efficient and secure solution to reach it remotely was to set up the VNC server.

Step 2 - VNC Server and Remote Access to the PI

Accessing the Raspberry PI remotely over the same local WiFi LAN is not sufficient, as the mannequin will live in a different place from where I develop. To solve this, I configured the VNC server to be accessible from the Internet by creating a personal team – free for up to five computers – to which I added the PI.

The RealVNC viewer can be easily set up as an online team, free for personal use. The registration steps are very easy and the advantages are considerable.

Above: how your team appears after registration.

Below: the list of registered computers in your team

Above: you can also invite other collaborators via email to join your team and access the registered servers.

The image below shows how the list of available VNC servers appears after registration and setup; the Address book lists the local servers (as many as you want) not registered in the team, while the <Your team name> Team (Home) entry shows the list of (max five) servers accessible remotely.

As the VNC viewer client is available for free on any platform (Mac OS X, Windows, Android, iOS, and Linux), the PI becomes reachable from the Internet from everywhere.

There is a potential issue with the desktop resolution when you start the viewer without any screen connected to the PI: the resolution is very low (640x480). You can change this value by setting the resolution in the Raspberry PI configuration and restarting the system. I have set a default resolution of 1024x768 for comfortable use of the PI desktop from the VNC viewer.
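If you prefer editing the configuration file directly, the same result can be obtained in /boot/config.txt by forcing an HDMI mode even when no monitor is attached (in the DMT table, hdmi_group=2 with hdmi_mode=16 corresponds to 1024x768 at 60 Hz):

```
# /boot/config.txt: force a 1024x768 desktop for headless VNC sessions
hdmi_force_hotplug=1
hdmi_group=2
hdmi_mode=16
```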

 

Step 3 - PiFace Digital 2

The setup of the PiFace Digital 2 board was another preliminary step before 7 of 9 left for the exhibition. I had already used this board in the past, so I was not expecting particular issues. I installed the Python libraries and tried to test the board – just to save time – without installing the emulator. Unfortunately, the board was not recognized. I checked everything and everything was fine. I tried replacing the board with another one I have here, with the same result. OK, step back: I installed the emulator. It worked!

The problem was just with the command line: a trivial Python command, as shown in the (poor, I should admit) documentation on the PiFace Digital 2 site.

Good: the board software is open source.

Bad: it seems the project has become abandonware; no one has updated the sources to follow the recent evolutions of Raspbian Linux.

To be honest, I also thought of an alternative to using this board (I need it at least for the relays), because I have absolutely no time to clone the source repository and hunt for the bug. Unfortunately, there is a known bug that has not been solved yet. It is fair to call this abandonware: on the GitHub repository the bug is PiFace Digital Issue #36, opened on 3 January 2018 (!!!) and still open. It is a blocking bug that, I suppose, Element14 as a distributor is not aware of at all.

Fortunately, I have no special problems switching between (or mixing) Python 2 and Python 3 development, because there is a workaround – but it works only with the Python 3 version of the libraries.

The reason for the issue

The most recent Raspbian kernels initialize the SPI serial communication at a speed of 125 MHz, while the PiFace Digital 2 expects a default speed of 500 kHz but does not initialize this speed when it starts. This generates an incompatible speed in the communication protocol, and the board is not recognized.

The workaround

The workaround works, but consider that other issues may occur if another SPI board – one relying on the new default SPI speed without initializing it – is used together with the PiFace Digital 2. That said, only a few changes are needed to solve the problem.

 

  • Open a terminal session on the Raspberry PI and find where the spi.py configuration file (Python 3) is located in your installation with the command find / -name spi.py
  • In a standard Raspbian Pixel installation, you can expect to find the file in this location:  /usr/local/lib/python3.5/dist-packages/pifacecommon/
  • Edit the spi.py file and search for the following part of the configuration:
# create the spi transfer struct
        transfer = spi_ioc_transfer(
            tx_buf=ctypes.addressof(wbuffer),
            rx_buf=ctypes.addressof(rbuffer),
            len=ctypes.sizeof(wbuffer),
        )
  • Now modify the definition of the transfer struct, which includes all the parameters for the default SPI connection protocol, by adding a line that forces the SPI speed to the desired value, as shown below
# create the spi transfer struct
        transfer = spi_ioc_transfer(
            tx_buf=ctypes.addressof(wbuffer),
            rx_buf=ctypes.addressof(rbuffer),
            len=ctypes.sizeof(wbuffer),
            speed_hz=ctypes.c_uint32(15000)
        )

 

Save the file and restart the system. Now the board should work.

 

Step 4 - Programming Arduino

And finally, a relatively easy step to set up. I had tried in the past to develop with the Arduino IDE on the Raspberry PI, but the previous hardware version was really slow. On the 3B+ model, after installing the Linux version of the Arduino IDE, the game changed: development is easy, with no problems with the Arduino connected to the USB port, even when the PI is reached remotely through the VNC server.

There is just one detail to consider: the Linux version of the IDE does not include a library manager, so you should install the libraries yourself (just copy the folders or the zip files). If you already have a desktop version of the IDE installed somewhere, just copy the content of its libraries folder to the default Arduino folder on the Raspberry PI and... that's all.

 

Step 5 - Preparing to Move

Consider this scenario: 7 of 9 is ready to go to its final location; it is not worth opening the finished and dressed mannequin at this point; the system is reachable over WiFi, but the exhibition location has a different WiFi access point.

To avoid reopening the "case" (the head) and connecting a keyboard and monitor again to set up the new WiFi, the solution is easy and can probably be applied in most cases. While you can still reach the PI through your local WiFi connection and the VNC client/server, just set up your smartphone as a hotspot and add its WiFi settings to the Raspberry PI. Then shut down and power off.

In the new location, connect the programming laptop (or tablet, or smartphone) via the mobile hotspot again and configure the Raspberry with the local WiFi. When you lose the connection, close the VNC client and reopen it: you can reach the PI again, even over two different Internet connections, because it is one of the computers in your VNC team. That's all.
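Under the hood, this boils down to having two network blocks in /etc/wpa_supplicant/wpa_supplicant.conf; the SSIDs and passphrases below are placeholders. The PI simply joins whichever network is in range (the higher priority value wins when both are):

```
# Smartphone hotspot used while moving the mannequin
network={
    ssid="PhoneHotspot"
    psk="hotspot-password"
    priority=2
}

# WiFi of the exhibition location
network={
    ssid="ExhibitionWiFi"
    psk="exhibition-password"
    priority=1
}
```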

 

7 of 9 Beta Features

The most important difference between the beta version and the final release of 7 of 9 is that the two boards – Raspberry PI and Arduino – work together in sync but do not communicate yet. Each of the two boards is independent.

The Emotion Lighting Control

As discussed in Episode 7, the internal lamp is activated by a pair of chained relays; the first, controlled by the Raspberry PI, is relay 0 of the PiFace Digital 2. When configured, the two relays on the board correspond to the digital output pins 0 and 1.

To create the most flexible and modular system possible, I decided to adopt a strategy also applied in the next development steps: every feature is managed by a specific Python script that a bash command can activate, eventually passing the needed parameters.

 

# Start PiFace Digital
import time
import pifacedigitalio as pfio
pfio.init()

# Enable light fire for 60 sec
#
# Note: the light is connected to the Relay 1
# on output port 0
pfio.digital_write(0, 1)
time.sleep(60)
pfio.digital_write(0, 0)

 

This short Python script powers the light relay for 60 seconds every time it is run. I have scheduled it to be launched every 5 minutes via cron, and it works perfectly.
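As a sketch of how the "one script per feature" strategy can take parameters from the bash command line, the same relay action can be wrapped with argparse; the pifacedigitalio calls are the ones used above, while the script name and the argument handling are my own assumption:

```python
# Sketch: fire the light relay for a configurable number of seconds.
# Example use on the PI:  python3 light.py 30   (defaults to 60)
import argparse
import time

def parse_seconds(argv=None):
    """Read the on-time (in seconds) from the command line."""
    parser = argparse.ArgumentParser(description="Power the light relay")
    parser.add_argument("seconds", type=int, nargs="?", default=60,
                        help="how long the light stays on (default 60)")
    return parser.parse_args(argv).seconds

def fire_light(seconds):
    """Close relay 0 for the requested time, then open it again."""
    import pifacedigitalio as pfio  # imported here: requires the PiFace board
    pfio.init()
    pfio.digital_write(0, 1)  # relay on (output pin 0)
    time.sleep(seconds)
    pfio.digital_write(0, 0)  # relay off

# On the mannequin: fire_light(parse_seconds())
```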

About the cron scheduler: I have always had trouble remembering all the parameters needed to set up a cron line correctly – ever since I started working on UNIX System V. If you prefer to manage the cron schedule on the Raspberry PI efficiently, I suggest installing the graphical UI for the scheduler; it is self-explanatory, easy to use, and has good error checking. You can install it with the command sudo apt-get install gnome-schedule
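For reference, the raw crontab line (added with crontab -e) for the every-5-minutes schedule looks like this; the /home/pi/light.py path is just a placeholder for wherever you saved the relay script:

```
# m h dom mon dow   command
*/5 * * * * python3 /home/pi/light.py
```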

Moving Around

The current Arduino code initializes the stepper motor by finding the zero position and can move the torso in a range of 40 degrees. The motion and speed control depend on a set of commands that fill a control structure containing all the parameters related to the motor:

 

typedef struct StepProfile {
  int torsoSpeed;
  int rotAngle;
  int lastAnglePos;
} StepProfile;

 

When commands are received by the Arduino, they set one or more of the parameters of the StepProfile structure. Then the moveTorso() function implements the motion according to the current parameters set in the StepProfile structure.

 

/**
 * Move the motor to the new angle, if differs from the last.
 * The new position is the difference between the last angle and
 * the new rotation angle. The angle value is converted to motor
 * steps.
 */
void moveTorso() {
  if(torsoControl.rotAngle != torsoControl.lastAnglePos) {
    // Set the motion speed
    torsoStepper.setSpeed(torsoControl.torsoSpeed);
    //! Calculate the number of steps corresponding to the algebraic difference
    //! between the new angle and the current position
    int newAngle = torsoControl.rotAngle - torsoControl.lastAnglePos;
    
    //! Number of steps based on the new position    
    double moveSteps = 360 / STEPS_PER_REVOLUTION * ANGLE_DEMULTIPLIER * newAngle;
    
    // Move the motor testing the entire rotation range.
    torsoStepper.step(moveSteps);
    torsoControl.lastAnglePos = torsoControl.rotAngle;
  }
}
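The command-to-structure mapping described above can be sketched in a few lines; here is a minimal Python model of it (the fields mirror StepProfile, while the "S"/"A" command letters are my assumption, not the actual protocol):

```python
# Minimal model of the command-to-StepProfile mapping (Python sketch).
# A command is a letter plus a value; "S" (speed) and "A" (angle)
# are hypothetical letters used only for this illustration.
profile = {"torsoSpeed": 0, "rotAngle": 0, "lastAnglePos": 0}

def apply_command(cmd, profile):
    """Update one field of the profile from a single command string."""
    letter, value = cmd[0], int(cmd[1:])
    if letter == "S":
        profile["torsoSpeed"] = value
    elif letter == "A":
        profile["rotAngle"] = value
    return profile

apply_command("S120", profile)
apply_command("A30", profile)
# moveTorso() would now rotate by rotAngle - lastAnglePos = 30 degrees.
```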

 

Until the commands are implemented on the Raspberry PI side, I have created in the loop() function a series of predefined commands to periodically move 7 of 9. Globally, I define a small angle increment and the initial position:

 

int anglePos = MAX_ANGLE / 2;
int increment = 4;

 

Then, in the loop() function, I cycle from right to left, inverting the direction at both ends.

 

// Update the angle increment
  anglePos += increment;
  setTorsoSpeed(SPEED_LOW);

  // Check for the higher limit
  if(anglePos >= MAX_ANGLE) {
    increment = -4;
    anglePos += increment * 2;
    setTorsoSpeed(SPEED_ZERO);
    setTorsoZero();
  }
  
  // Check for the lower limit
  if(anglePos <= 5) {
    increment = 4;
    anglePos += increment;
    setTorsoSpeed(SPEED_ZERO);
    setTorsoZero();
  }

  // Set the new position
  torsoControl.rotAngle = anglePos;
  moveTorso();


  delay(10000);
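The limit handling in the loop above can be double-checked without the hardware by reproducing it in plain Python; this is a simulation of the sweep logic, not the running firmware, with MAX_ANGLE = 40 matching the 40-degree range mentioned earlier:

```python
# Pure-Python simulation of the loop() sweep logic, to verify that the
# commanded angle always stays inside the mechanical range.
MAX_ANGLE = 40  # degrees, matching the 40-degree torso range

def sweep(steps, angle=MAX_ANGLE // 2, increment=4):
    """Return the sequence of target angles produced by the loop() logic."""
    positions = []
    for _ in range(steps):
        angle += increment
        if angle >= MAX_ANGLE:      # upper limit reached: invert direction
            increment = -4
            angle += increment * 2
        if angle <= 5:              # lower limit reached: invert direction
            increment = 4
            angle += increment
        positions.append(angle)
    return positions

path = sweep(50)
```

Running the simulation for a few dozen iterations confirms that the target angle bounces between the two limits without ever leaving the 0-40 degree window.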

 

If you take a look at the GitHub sources under the Arduino/Mannequin folder, you can see that an experimental mechanism to detect the sound level and direction using the two microphones (the ears) is already implemented. This part does not yet work as expected, so for now it is a commented-out block.

Previous Episodes

Next Episodes