As I mentioned in my previous post, Summer of tinyML, a series of webinars on tinyML has been running in conjunction with the tinyML Vision Challenge.


I missed a webinar last month that covered using the Himax WE-I Plus AI development board with the Edge Impulse framework, tinyML Vision Challenge - Himax & Edge Impulse, but I was able to watch it on YouTube:


I had purchased a WE-I Plus board from SparkFun a few months ago, but never got the opportunity to try it out.


The WE-I Plus has a nice set of features for edge AI on a compact 40mm x 27mm PCB.  It has an HM0360 monochrome VGA (640x480) camera, dual PDM microphones, an LSM9DS1 IMU, plus red and green LEDs.  The I/O is somewhat limited: one I2C port and three GPIOs.  It uses a 400MHz HX6537 ASIC with a 32-bit processor and an FPU.





The first step was to 3D print a quick housing so that I could tripod-mount the board for ease of use.


Out of the box, the WE-I Plus is loaded with person-detection firmware that runs on power-up.  I used PuTTY to connect to the USB serial port and observe the detection output.  A short clip shows the person score changing from negative to positive as I move into the camera frame.  I'm not sure what the absolute value indicates.
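For reference, a line of that serial output could be parsed in the shell like this.  Note that the line format shown ("person score: … no person score: …") is my assumption based on the stock person-detection demo, not something verified against this exact firmware:

```shell
# Hypothetical sample line from the person-detection firmware's serial log.
line="person score: 72 no person score: -72"

# Pull out the signed person score with a BRE capture group.
score=$(printf '%s\n' "$line" | sed -n 's/.*person score: \(-\{0,1\}[0-9][0-9]*\) no person.*/\1/p')

# A positive score indicates a person in the frame.
if [ "$score" -gt 0 ]; then
    echo "person detected (score $score)"
else
    echo "no person (score $score)"
fi
```

The same pattern would work on a live stream by piping the serial device through the loop instead of using a fixed string.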


The board is fully supported by Edge Impulse, so the quickest way to get started is to download the data-capture firmware from Edge Impulse and flash it using their loader script.  To run the scripts, you need the Edge Impulse CLI installed; this is well documented in their getting started guide.


I am using Windows, so I used the flash_windows.bat script.  Here is the downloaded directory with the scripts and firmware image.
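The firmware package ships one flash script per OS.  flash_windows.bat is the one used in this post; the mac and linux names below are what Edge Impulse firmware downloads typically include, so verify them against your own download:

```shell
# Pick the flash script for your OS, then run it from the unzipped
# firmware directory with the board plugged in over USB.
os="windows"   # or "mac" / "linux"

case "$os" in
    windows) script="flash_windows.bat" ;;
    mac)     script="flash_mac.command" ;;
    linux)   script="flash_linux.sh" ;;
esac

echo "run: $script"
```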


I already had the CLI installed, but that turned out to be an issue: my version was outdated and did not include Himax support.

That was easy to fix, though, by following the instructions in the daemon's error message.
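For reference, the fix the daemon suggests is the standard npm upgrade path, which assumes Node.js and npm are installed.  The command is stored in a variable here rather than executed, since it needs network access:

```shell
# Standard upgrade for an outdated Edge Impulse CLI (requires Node.js/npm).
update_cmd="npm install -g edge-impulse-cli@latest"
echo "$update_cmd"
```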


The next step is to log in to Edge Impulse Studio and create a new project.  In this case I created one named Himax Object Detection.


Now, at the command prompt, you can start edge-impulse-daemon, which connects the device to your project.  The first time you start the daemon, it asks for your Edge Impulse credentials (username and password) and which project to connect the device to.  In this case I had connected previously but not named the device, so it only asked for a device name.  If I wanted to connect to a different project, I could have started the daemon with the --clean option.
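The two daemon invocations in this workflow are echoed below rather than run, since the real commands need the board attached and an Edge Impulse login:

```shell
# Plain invocation reuses the saved login and project; --clean forgets
# them so you can attach the device to a different project.
connect="edge-impulse-daemon"
reconnect="edge-impulse-daemon --clean"

echo "connect device: $connect"
echo "switch project: $reconnect"
```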


The device will now show as connected to the project in my Edge Impulse dashboard.


The next step is to collect some data.  In the Data acquisition window, you choose the connected device and one of its available sensors.


I'll capture a sample from the camera at 640x480 resolution, which gives a live preview in the Camera feed window.  Then I just need to label the data (Ralph) and hit the Start sampling button.


And after a few seconds of upload time, I get the data sample.


Now that data acquisition is working, I need to design a test and collect some labeled data.  In the next post I'll look at Impulse (Model) design and deployment.