
Arduino - Oak and WeMos

Posted by ntewinkel Top Member Jan 25, 2016

I received my Digistump Oak a few days ago.

Digistump Oak

Pictured with an AA battery for scale. They included a pretty sticker and postcard. Maybe that's to make it look like I actually received something, because that Oak is tiny!


The idea is that I should be able to use it just like a small Arduino, with WiFi.

They are still working on the firmware, so from what I can tell I can't actually use it yet for anything (click through to their Kickstarter campaign for the updates). Please let me know if that's not the case, I might be missing something.


Then a few nights ago, as I was browsing AliExpress for some art tools for my dear spouse, I came across the WeMos D1 mini, which looks a LOT like the Oak, appears to be ready to use, and (at least when bought from a place like AliExpress) costs only about $5, shipping included.


Has anyone seen, heard of, or tried the WeMos yet? And did anyone else sign up for the Oak and receive it yet?

And will these be available through element14-and-friends in the future?


I forget who it was who said that crowdfunding/Kickstarter campaigns generally mean we pay today for next year's technology (or something that might be outdated by the time it ships). In this case that appears to be true again with the WeMos, and sort of with the $5 Raspberry Pi (which, with a cheap WiFi dongle, can come in under that $10 mark), although the RPi is bigger so might not fit some applications. The Oak was $10, plus $5 shipping to Canada (no regrets for sure, I'd still buy one at that price).




In November I offered to use an Arduino UNO as a reversing monitor.


It's a simple concept

  • Arduino
  • Ultrasonic ping sensor
  • NeoPixel LED's


You use the ping sensor to detect the distance and change the colour of the NeoPixel to show Green for safe, Orange for Warning and Red for Stop.


In Part 1 I made this comment :-


Part Two

I'm still completing the final parts of the code.

There have been some other distractions (inc silly season) and I want to do some more testing on the code.

I have found a suitable enclosure but I need to live test the hardware before I fit it into the enclosure.


The Ping sensor uses a Tx and Rx ultrasonic transducer to send out a burst of high frequency audio.

This causes ringing of the transducer and can cause the mounting hardware to resonate which may give false readings.

I've been finding that doing this work in a confined room results in lots of echo, and therefore my "same distance" measurement isn't the same.


I've also had some issues with the IDE and the colourisation of keywords, which threw me off for a few days.




Short Distances

I was getting random short distances that made no sense.

I tried slowing down the Ping rate, which seemed to help but did not make the problem go away.


I tried changing the hardware, on the theory it was faulty, but still I got random values that were short.


I tried a new library, various averaging approaches and searched the internet to see if the issue was common.

I found a site where the author used the "Running Median" of several readings, and while the code seemed to work at short distances, anything longer than 1m seemed to cause these random short distance values.


In the end I needed to start assembling the hardware in order to take some photos.

During the assembly I followed the instructions here that specified adding a 1000uF capacitor across the NeoPixel power wires.


    100uF instead of the 1000uF   photo source : three minions (me, myself and I)


When I next connected the assembled unit to the computer, it behaved perfectly with no short distances.


So my conclusion is that while the lack of a capacitor didn't impact the NeoPixels, it did manifest in the Ping sensor, which was hanging off the same 5V source.




While you can hardcode the distance trigger points, it makes much more sense to allow the user to set their own.

The problem is how to make something user friendly and fairly clear while capturing the distance.


I elected to use a single pushbutton, and detect it at power-up.

If it is pressed and held at power-up, the display changes to three Green pixels to indicate the SAFE distance will be stored when you push the button.

Set_SAFE.jpg Set_WARN.jpg  Set_STOP.jpg

photo source : three minions (me, myself and I)


Once you have set the SAFE distance, the display changes to three Blue pixels to indicate the WARN distance when you push the button.

If it is pressed, the display changes to three Red pixels to indicate the STOP distance will be stored when you push the button.

Once it has stored all three values, it returns to normal.




When I started I was intending to use Orange for the Warning colour.

NeoPixels will do Orange but the brightness is so much less that I felt you may struggle to see it in bright daylight.

When you send the colour it is in the form of Red value, Green value, Blue value with a range of 0-255.

Hence RED is 255, 0, 0, GREEN is 0, 255, 0 and BLUE is 0, 0, 255, and so Blue is the next brightest colour available.





While the display is obvious, if the Ping Sensor is not working, or the distances aren't stored, then the sketch goes into an Error display mode.

I elected to make half the display Red while the other half was blank and alternate at about a 1 second rate.


It becomes very obvious there is an issue.

It is not easy to continually detect the Ping Sensor, as the library returns distances greater than the maximum as a zero, just as you get if there is no sensor.


I included a blanking routine: if it detected the same distance 120 times (approx. 60 secs) it blanked the display.

If the ping sensor went faulty this would kick in, so the display wouldn't sit stuck on the SAFE colour (Green).


Hence I think I covered all the errors I could.




This version was done for The Shed Magazine, which uses this version of the Arduino UNO.

It allowed me to connect everything to the pins using female to female jumpers, but the same concept will work for a normal UNO.




photo source : three minions (me, myself and I)


The case is a food-type container from a large retailer and at $2 makes a very suitable enclosure.


I intend to make another version using a Prototype board to connect to the UNO that element14 kindly supplied.



Did it work?

Since my daughter moved out, I don't have a parking issue, but if she returns this would be much better than the "stick that moves" we used to have before.


The photos below show the false wall I put in my garage to test it out.

I was very happy with the results.

SAFE.jpg  WARN.jpg  STOP.jpg

photo source : three minions (me, myself and I)


Final STOP distance.jpg


This is the final distance, and from the driver's seat it looks like it is touching.


Normally you would be reversing into the garage and having the issue of parking in the right spot, but everyone's needs are different, and this works forward or back.


The relative position of the display vs the sensor may need changing to suit the vehicle and direction.

I mounted it over to the right to allow viewing in an external mirror.





Attached is the sketch and libraries needed to run it.

There are plenty of comments because later I won't remember why I did xyz, or what I was thinking.

Comments take no room in the compiled code but do make it easier for others to use your sketch.






DIY-Thermocam - An open-source, do-it-yourself thermal imager




The DIY-Thermocam is a do-it-yourself, low-cost thermographic camera based on the famous Teensy 3.2 microcontroller. It uses a long-wave infrared sensor from FLIR to provide high-quality thermal images.


The aim of this project is to offer a cheap, open-source thermal platform for private individuals, educational institutions or small companies. There are various applications, like finding heat leaks in the insulation of buildings (including windows), the analysis of electrical components, as well as exploring at night.


The extensive firmware, written in Arduino-compatible C++ code, can be used as it is or modified and extended to your own needs. That allows the DIY-Thermocam to be used as a versatile basis for various thermal applications.


For more information about the project, check out the website:


Ultrasonic Reversing Monitor

Posted by mcb1 Top Member Jan 7, 2016

In November I offered to use an Arduino UNO as a reversing monitor.


It's a simple concept

  • Arduino
  • Ultrasonic ping sensor
  • NeoPixel LED's


You use the ping sensor to detect the distance and change the colour of the NeoPixel to show Green for safe, Orange for Warning and Red for Stop.

Simple enough .... a few lines of code and it's all sorted.



What about Safety?

There are no safety issues, you just reverse until the light changes colour.

What safety, nothing will fail.




That's the difference between a "Fail to Safe" system and the above assumption.


In some cases, the person may not be aware of all the factors, and therefore they assume something.

Neopixels retain the last instruction until you send a new one, or remove the power.

If the Arduino or the data line fails for whatever reason, you have the display showing Green and it never changes.


The average user will just think these are lights and if you turn them off they go off.

However as the Engineer designing this system, I know better, and therefore should be designing the system to take the NeoPixels characteristics into account.



What about the Distance settings?

Having hard-coded distance settings is fine, but what happens if you install it in your parents' place and they want to change the trigger point?

You could reprogram it with new figures, but what happens if you don't have that capability?


Having the ability to set the distance makes it much more user friendly, but it also brings some other challenges.

You need to store the setting, and read it at startup.

There needs to be a warning if no settings are read (because they aren't stored, or are corrupted).

The method of setting the distance needs to be considered, and made user friendly.



I'm pinging all the time

Pinging constantly is fine, but why wear out the transducer while the car is either not there or parked for two weeks?

Since it is ultrasonic, is there an effect on animals?

Perhaps this might be great to persuade mice and other rodents to vacate the place, but I'm concerned if domestic pets are troubled by it.



I decided that after 10 seconds of detecting the same distance, we can reduce the ping rate.

Dropping the ping rate down to once every second reduces the wear but still allows any movement to be detected.


We really need to know when the car is returning and the distance is getting less.

Since these can work from 4.5m (14 feet), the first few seconds are likely to be well within the safe distance.




Blank the display

Having the Neopixel stuck on the last distance colour is fine, except it simply wears them out and is not necessary.

It would be much more useful to blank the display while it is in 'sleep' mode.


We still ping every second, and if the distance changes then unblank the display.


I thought that 60 secs is adequate time to decide the car is staying put, and this becomes the sleep time.




How do I know it's working?

Having a 'heartbeat' is necessary to provide user confidence that it's working.



I elected to make one LED White and cycle it around at a 1 sec rate.

While the display is blank, the same will work, or I could leave it blank and maybe flash the onboard LED.



For the error warning, I elected to flash half the ring RED, with the rest blank. This alternates at a 1 second (1Hz) rate.

ie the first half is RED while the second half is blank, and then it swaps to first half blank, second half Red.

It becomes very obvious something isn't right.




What does this all mean?

What it means is that this device performs the same function, ie detects a distance and changes colour.

We have added :-

  • Allow the user to change settings.
  • Provide confirmation it is working.
  • Warn if there is an error.
  • Reduce the wear by reducing the ON time of some parts.
  • Reduce the ultrasonic pollution created by the pinger.



The hardware remains the same, but the code has grown exponentially.

It might not be the most efficient code (I have stated before I'm not a software programmer), but it includes plenty of comments for anyone to follow.

That helps me later when I think why the .... did I do that, or What was I thinking?


It also means the quick simple task has morphed into a much longer project, and has consumed far too much time.



My Advice

If you get into these projects, you need to think of the things that could go wrong and deal with them.

Your time is spent planning how to do xyz, coding it, and working out a means to test it.


We've made comments before about engineering being 90% planning and 10% doing, and this is no different.

The end result is a much better product, that is much safer and more user friendly.



Part Two

I'm still completing the final parts of the code.

There have been some other distractions (inc silly season) and I want to do some more testing on the code.

I have found a suitable enclosure but I need to live test the hardware before I fit it into the enclosure.


The Ping sensor uses a Tx and Rx ultrasonic transducer to send out a burst of high frequency audio.

This causes ringing of the transducer and can cause the mounting hardware to resonate which may give false readings.

I've been finding that doing this work in a confined room results in lots of echo, and therefore my "same distance" measurement isn't the same.


I've also had some issues with the IDE and the colourisation of keywords, which threw me off for a few days.


It's got to the stage where I can start actually testing it, and testing the hardware before fitting it into an enclosure ensures you can determine where the problem is, if it doesn't work as expected.

Hopefully soon I can do part 2 and include the code, and some more details of the project.

edit: Part 2 is here


In my previous post, I showed how to control a few LEDs using an Arduino board and BitVoicer Server. In this post, I am going to make things a little more complicated. I am also going to synthesize speech using the Arduino DUE digital-to-analog converter (DAC). If you do not have an Arduino DUE, you can use other Arduino boards, but you will need an external DAC and some additional code to operate the DAC (the BVSSpeaker library will not help you with that).



In the video below, you can see that I also make the Arduino play a little song and blink the LEDs as if they were piano keys. Sorry for my piano skills, but that is the best I can do. The LEDs actually blink in the same sequence and timing as real C, D and E keys, so if you have a piano around you can follow the LEDs and play the same song. It is a jingle from an old retailer (Mappin) that does not even exist anymore.



The following procedures will be executed to transform voice commands into LED activity and synthesized speech:


  1. Audio waves will be captured and amplified by the Sparkfun Electret Breakout board;
  2. The amplified signal will be digitalized and buffered in the Arduino using its analog-to-digital converter (ADC);
  3. The audio samples will be streamed to BitVoicer Server using the Arduino serial port;
  4. BitVoicer Server will process the audio stream and recognize the speech it contains;
  5. The recognized speech will be mapped to predefined commands that will be sent back to the Arduino. If one of the commands consists of synthesizing speech, BitVoicer Server will prepare the audio stream and send it to the Arduino;
  6. The Arduino will identify the commands and perform the appropriate action. If an audio stream is received, it will be queued into the BVSSpeaker class and played using the DUE DAC and DMA.
  7. The SparkFun Mono Audio Amp will amplify the DAC signal so it can drive an 8 Ohm speaker.


List of Materials:



STEP 1: Wiring


The first step is to wire the Arduino and the breadboard with the components as shown in the pictures below. I had to place a small rubber underneath the speaker because it vibrates a lot and without the rubber the quality of the audio is considerably affected.



Wiring 1

Wiring 2

Wiring 3


Here we have a small but important difference from my previous post. Most Arduino boards run at 5V, but the DUE runs at 3.3V. Because I got better results running the Sparkfun Electret Breakout at 3.3V, I recommend you add a jumper between the 3.3V pin and the AREF pin IF you are using 5V Arduino boards. The DUE already uses a 3.3V analog reference so you do not need a jumper to the AREF pin. In fact, the AREF pin on the DUE is connected to the microcontroller through a resistor bridge. To use the AREF pin, resistor BR1 must be desoldered from the PCB.


STEP 2: Uploading the code to the Arduino


Now you have to upload the code below to your Arduino. You can also download the Arduino sketch from the link below the code. Before you upload the code, you must properly install the BitVoicer Server libraries into the Arduino IDE (Importing a .zip Library).


#include <BVSP.h>
#include <BVSMic.h>
#include <BVSSpeaker.h>
#include <DAC.h>

// Defines the Arduino pin that will be used to capture audio
// (value assumed; use the analog pin your mic amplifier is wired to)
#define BVSM_AUDIO_INPUT 7
// Defines the LED pins
#define RED_LED_PIN 6
#define YELLOW_LED_PIN 9
#define GREEN_LED_PIN 10

// Defines the constants that will be passed as parameters to
// the BVSP.begin function
const unsigned long STATUS_REQUEST_TIMEOUT = 3000;
const unsigned long STATUS_REQUEST_INTERVAL = 4000;

// Defines the size of the mic audio buffer
const int MIC_BUFFER_SIZE = 64;

// Defines the size of the speaker audio buffer
const int SPEAKER_BUFFER_SIZE = 128;

// Defines the size of the receive buffer
const int RECEIVE_BUFFER_SIZE = 2;

// Initializes a new global instance of the BVSP class
BVSP bvsp = BVSP();

// Initializes a new global instance of the BVSMic class
BVSMic bvsm = BVSMic();

// Initializes a new global instance of the BVSSpeaker class
BVSSpeaker bvss = BVSSpeaker();

// Creates a buffer that will be used to read recorded samples
// from the BVSMic class
byte micBuffer[MIC_BUFFER_SIZE];

// Creates a buffer that will be used to write audio samples
// into the BVSSpeaker class
byte speakerBuffer[SPEAKER_BUFFER_SIZE];

// Creates a buffer that will be used to read the commands sent
// from BitVoicer Server.
// Byte 0 = pin number
// Byte 1 = pin value
byte receiveBuffer[RECEIVE_BUFFER_SIZE];

// These variables are used to control when to play
// "LED Notes". These notes will be played along with
// the song streamed from BitVoicer Server.
bool playLEDNotes = false;
unsigned long playStartTime = 0;

void setup()
{
  // Sets up the pin modes
  pinMode(RED_LED_PIN, OUTPUT);
  pinMode(YELLOW_LED_PIN, OUTPUT);
  pinMode(GREEN_LED_PIN, OUTPUT);

  // Sets the initial state of all LEDs
  digitalWrite(RED_LED_PIN, LOW);
  digitalWrite(YELLOW_LED_PIN, LOW);
  digitalWrite(GREEN_LED_PIN, LOW);

  // Starts serial communication at 115200 bps
  Serial.begin(115200);

  // Sets the Arduino serial port that will be used for
  // communication, how long it will take before a status request
  // times out and how often status requests should be sent to
  // BitVoicer Server.
  bvsp.begin(Serial, STATUS_REQUEST_TIMEOUT, STATUS_REQUEST_INTERVAL);

  // Defines the function that will handle the frameReceived
  // event
  bvsp.frameReceived = BVSP_frameReceived;

  // Sets the function that will handle the modeChanged
  // event
  bvsp.modeChanged = BVSP_modeChanged;

  // Sets the function that will handle the streamReceived
  // event
  bvsp.streamReceived = BVSP_streamReceived;

  // Prepares the BVSMic class timer
  bvsm.begin();

  // Sets the DAC that will be used by the BVSSpeaker class
  bvss.begin(DAC);
}

void loop()
{
  // Checks if the status request interval has elapsed and if it
  // has, sends a status request to BitVoicer Server
  bvsp.keepAlive();

  // Checks if there is data available at the serial port buffer
  // and processes its content according to the specifications
  // of the BitVoicer Server Protocol
  bvsp.receive();

  // Checks if there is one SRE available. If there is one,
  // starts recording.
  if (bvsp.isSREAvailable())
  {
    // If the BVSMic class is not recording, sets up the audio
    // input and starts recording
    if (!bvsm.isRecording)
    {
      bvsm.setAudioInput(BVSM_AUDIO_INPUT, DEFAULT);
      bvsm.startRecording();
    }

    // Checks if the BVSMic class has available samples
    if (bvsm.available)
    {
      // Makes sure the inbound mode is STREAM_MODE before
      // transmitting the stream
      if (bvsp.inboundMode == FRAMED_MODE)
        bvsp.setInboundMode(STREAM_MODE);

      // Reads the audio samples from the BVSMic class
      int bytesRead = bvsm.read(micBuffer, MIC_BUFFER_SIZE);

      // Sends the audio stream to BitVoicer Server
      bvsp.sendStream(micBuffer, bytesRead);
    }
  }
  else
  {
    // No SRE is available. If the BVSMic class is recording,
    // stops it.
    if (bvsm.isRecording)
      bvsm.stopRecording();
  }

  // Plays all audio samples available in the BVSSpeaker class
  // internal buffer. These samples are written in the
  // BVSP_streamReceived event handler. If no samples are
  // available in the internal buffer, nothing is played.
  bvss.play();

  // If playLEDNotes has been set to true,
  // plays the "LED notes" along with the music.
  if (playLEDNotes)
    playNextLEDNote();
}

// Handles the frameReceived event
void BVSP_frameReceived(byte dataType, int payloadSize)
{
  // Checks if the received frame contains binary data
  // 0x07 = Binary data (byte array)
  if (dataType == DATA_TYPE_BINARY)
  {
    // If 2 bytes were received, process the command.
    if (bvsp.getReceivedBytes(receiveBuffer, RECEIVE_BUFFER_SIZE) ==
      RECEIVE_BUFFER_SIZE)
    {
      analogWrite(receiveBuffer[0], receiveBuffer[1]);
    }
  }
  // Checks if the received frame contains byte data type
  // 0x01 = Byte data type
  else if (dataType == DATA_TYPE_BYTE)
  {
    // If the received byte value is 255, sets playLEDNotes
    // and marks the current time.
    if (bvsp.getReceivedByte() == 255)
    {
      playLEDNotes = true;
      playStartTime = millis();
    }
  }
}

// Handles the modeChanged event
void BVSP_modeChanged()
{
  // If the outboundMode (Server --> Device) has turned to
  // FRAMED_MODE, no audio stream is supposed to be received.
  // Tells the BVSSpeaker class to finish playing when its
  // internal buffer becomes empty.
  if (bvsp.outboundMode == FRAMED_MODE)
    bvss.finishPlaying();
}

// Handles the streamReceived event
void BVSP_streamReceived(int size)
{
  // Gets the received stream from the BVSP class
  int bytesRead = bvsp.getReceivedStream(speakerBuffer,
    size > SPEAKER_BUFFER_SIZE ? SPEAKER_BUFFER_SIZE : size);

  // Enqueues the received stream to play
  bvss.enqueue(speakerBuffer, bytesRead);
}

// Lights up the appropriate LED based on the time
// the command to start playing LED notes was received.
// The timings used here are synchronized with the music.
void playNextLEDNote()
{
  // Gets the elapsed time between playStartTime and the
  // current time.
  unsigned long elapsed = millis() - playStartTime;

  // Turns off all LEDs
  allLEDsOff();

  // The last note has been played.
  // Turns off the last LED and stops playing LED notes.
  if (elapsed >= 11500)
  {
    analogWrite(RED_LED_PIN, 0);
    playLEDNotes = false;
  }
  else if (elapsed >= 9900)
    analogWrite(RED_LED_PIN, 255); // C note
  else if (elapsed >= 9370)
    analogWrite(RED_LED_PIN, 255); // C note
  else if (elapsed >= 8900)
    analogWrite(YELLOW_LED_PIN, 255); // D note
  else if (elapsed >= 8610)
    analogWrite(RED_LED_PIN, 255); // C note
  else if (elapsed >= 8230)
    analogWrite(YELLOW_LED_PIN, 255); // D note
  else if (elapsed >= 7970)
    analogWrite(YELLOW_LED_PIN, 255); // D note
  else if (elapsed >= 7470)
    analogWrite(RED_LED_PIN, 255); // C note
  else if (elapsed >= 6760)
    analogWrite(GREEN_LED_PIN, 255); // E note
  else if (elapsed >= 6350)
    analogWrite(RED_LED_PIN, 255); // C note
  else if (elapsed >= 5880)
    analogWrite(YELLOW_LED_PIN, 255); // D note
  else if (elapsed >= 5560)
    analogWrite(RED_LED_PIN, 255); // C note
  else if (elapsed >= 5180)
    analogWrite(YELLOW_LED_PIN, 255); // D note
  else if (elapsed >= 4890)
    analogWrite(YELLOW_LED_PIN, 255); // D note
  else if (elapsed >= 4420)
    analogWrite(RED_LED_PIN, 255); // C note
  else if (elapsed >= 3810)
    analogWrite(GREEN_LED_PIN, 255); // E note
  else if (elapsed >= 3420)
    analogWrite(RED_LED_PIN, 255); // C note
  else if (elapsed >= 2930)
    analogWrite(YELLOW_LED_PIN, 255); // D note
  else if (elapsed >= 2560)
    analogWrite(RED_LED_PIN, 255); // C note
  else if (elapsed >= 2200)
    analogWrite(YELLOW_LED_PIN, 255); // D note
  else if (elapsed >= 1930)
    analogWrite(YELLOW_LED_PIN, 255); // D note
  else if (elapsed >= 1470)
    analogWrite(RED_LED_PIN, 255); // C note
  else if (elapsed >= 1000)
    analogWrite(GREEN_LED_PIN, 255); // E note
}

// Turns off all LEDs.
void allLEDsOff()
{
  analogWrite(RED_LED_PIN, 0);
  analogWrite(YELLOW_LED_PIN, 0);
  analogWrite(GREEN_LED_PIN, 0);
}

Arduino Sketch: BVS_Demo2.ino


This sketch has seven major parts:


  • Library references and variable declaration: The first four lines include references to the BVSP, BVSMic, BVSSpeaker and DAC libraries. These libraries are provided by BitSophia and can be found in the BitVoicer Server installation folder. The DAC library is included automatically when you add a reference to the BVSSpeaker library. The other lines declare constants and variables used throughout the sketch. The BVSP class is used to communicate with BitVoicer Server, the BVSMic class is used to capture and store audio samples and the BVSSpeaker class is used to reproduce audio using the DUE DAC.
  • Setup function: This function performs the following actions: sets up the pin modes and their initial state; initializes serial communication; and initializes the BVSP, BVSMic and BVSSpeaker classes. It also sets “event handlers” (they are actually function pointers) for the frameReceived, modeChanged and streamReceived events of the BVSP class.
  • Loop function: This function performs five important actions: requests status info to the server (keepAlive() function); checks if the server has sent any data and processes the received data (receive() function); controls the recording and sending of audio streams (isSREAvailable(), startRecording(), stopRecording() and sendStream() functions); plays the audio samples queued into the BVSSpeaker class (play() function); and calls the playNextLEDNote() function that controls how the LEDs should blink after the playLEDNotes command is received.
  • BVSP_frameReceived function: This function is called every time the receive() function identifies that one complete frame has been received. Here I run the commands sent from BitVoicer Server. Commands that control the LEDs contain 2 bytes. The first byte indicates the pin and the second byte indicates the pin value. I use the analogWrite() function to set the appropriate value on the pin. I also check if the playLEDNotes command, which is of Byte type, has been received. If it has been received, I set playLEDNotes to true and mark the current time. This time will be used by the playNextLEDNote function to synchronize the LEDs with the song.
  • BVSP_modeChanged function: This function is called every time the receive() function identifies a mode change in the outbound direction (Server --> Arduino). WOW!!! What is that?! BitVoicer Server can send framed data or audio streams to the Arduino. Before the communication goes from one mode to another, BitVoicer Server sends a signal. The BVSP class identifies this signal and raises the modeChanged event. In the BVSP_modeChanged function, if I detect the communication is going from stream mode to framed mode, I know the audio has ended so I can tell the BVSSpeaker class to stop playing audio samples.
  • BVSP_streamReceived function: This function is called every time the receive() function identifies that audio samples have been received. I simply retrieve the samples and queue them into the BVSSpeaker class so the play() function can reproduce them.
  • playNextLEDNote function: This function only runs if the BVSP_frameReceived function identifies the playLEDNotes command. It controls and synchronizes the LEDs with the audio sent from BitVoicer Server. To synchronize the LEDs with the audio and know the correct timing, I used Sonic Visualizer. This free software allowed me to see the audio waves so I could easily tell when a piano key was pressed. It also shows a time line and that is how I got the milliseconds used in this function. Sounds like a silly trick and it is. I think it would be possible to analyze the audio stream and turn on the corresponding LED, but that is out of my reach.


STEP 3: Importing BitVoicer Server Solution Objects


Now you have to set up BitVoicer Server to work with the Arduino. BitVoicer Server has four major solution objects: Locations, Devices, BinaryData and Voice Schemas.


Locations represent the physical location where a device is installed. In my case, I created a location called Home.


Devices are the BitVoicer Server clients. I created a Mixed device, named it ArduinoDUE and entered the communication settings. IMPORTANT: even the Arduino DUE has only a small amount of memory to store all the audio samples BitVoicer Server will stream. If you do not limit the bandwidth, you would need a much bigger buffer to store the audio. I got some buffer overflows for this reason, so I had to limit the Data Rate in the communication settings to 8000 samples per second.


BinaryData is a type of command BitVoicer Server can send to client devices. They are actually byte arrays you can link to commands. When BitVoicer Server recognizes speech related to that command, it sends the byte array to the target device. I created one BinaryData object for each pin value and named them ArduinoDUEGreenLedOn, ArduinoDUEGreenLedOff and so on. I ended up with 18 BinaryData objects in my solution, so I suggest you download and import the objects from the VoiceSchema.sof file below.


Voice Schemas are where everything comes together. They define what sentences should be recognized and what commands to run. For each sentence, you can define as many commands as you need and the order they will be executed. You can also define delays between commands. That is how I managed to perform the sequence of actions you see in the video.


One of the sentences in my Voice Schema is “play a little song.” This sentence contains two commands. The first command sends a byte that indicates the following command is going to be an audio stream. The Arduino then starts “playing” the LEDs while the audio is being transmitted. The audio is a little piano jingle I recorded myself and set it as the audio source of the second command. BitVoicer Server supports only 8-bit mono PCM audio (8000 samples per second) so if you need to convert an audio file to this format, I recommend the following online conversion tool:


You can import (Importing Solution Objects) all solution objects I used in this post from the files below. One contains the DUE Device and the other contains the Voice Schema and its Commands.


Solution Object Files:


STEP 4: Conclusion


There you go! You can turn everything on and do the same things shown in the video.



As I did in my previous post, I started the speech recognition by enabling the Arduino device in the BitVoicer Server Manager. As soon as it gets enabled, the Arduino identifies an available Speech Recognition Engine and starts streaming audio to BitVoicer Server. However, now you see a lot more activity in the Arduino RX LED while audio is being streamed from BitVoicer Server to the Arduino.


In my next post, I will be a little more ambitious. I'm going to add WiFi communication to one Arduino and control two other Arduinos all together by voice. I am thinking of some kind of game between them. Suggestions are very welcome!

I've been working on this Arduino project for a while. Mostly mechanical issues, but I finally got a solid solution for my LEGO train track switch. I am also using IoT-style messaging to emulate a distributed network of "things". It's a little overkill for a single track switch, but it helps to understand the bigger picture with these types of technologies. I plan on integrating this with a train schedule API so I can send the train down a path based on real-world data. I also have an OLED display in the mail, so it will be cool to show train times as well.



Train Track Switch – Servos, LEDs, PubNub and | Internet of Lego




This Lego Weather Station is an exercise in IoT concepts using an Arduino (Cactus Micro Rev2) which has a built-in ESP8266 WiFi module. It sends the sensor data using MQTT to a Mosca MQTT broker. A Node-RED client subscribes to the data feed, stores it into a MongoDB and then visualizes it with a Google Chart. The blog article is more about the journey than just the destination. This is one of many cool projects in the Internet of Lego city, which examines all sorts of interesting embedded, NodeJS and IoT concepts using Lego.  Full Story: Weather Station – DHT11, MQTT, Node-RED, Google Chart, Oh My! | Internet of Lego
