Vision Thing




This project is an attempt to build an in-car safety device that monitors the driver's attention to the road and creates alerts if the driver becomes sleepy or distracted.

The statistics on drowsy driving are quite alarming. Here are some numbers from the US:

  • An estimated 1 in 25 adult drivers (aged 18 years or older) report having fallen asleep while driving in the previous 30 days.
  • The US National Highway Traffic Safety Administration estimates that drowsy driving was responsible for 72,000 crashes, 44,000 injuries, and 800 deaths in 2013. However, these numbers are likely underestimates, and up to 6,000 fatal crashes each year may be caused by drowsy drivers.

Texting while driving is another significant source of accidents as well. Take a look at the following incident, in which a tram conductor was distracted by a mobile phone:

I think there is a lot of value to having a robust and accessible solution for this safety challenge.


Driver State Monitor (DSM) with BeagleBone AI is a project I'm working on to explore the capabilities of OpenCV and BeagleBone AI (BBAI) to improve driver safety. It is based on work done by Adrian Rosebrock, published in his blog post "Drowsiness detection with OpenCV".


The best way not to fall asleep while driving is to get good, regular sleep. You can read about that in one of my earlier projects, Saving Sleep with PocketBeagle.

I'm using the HOG face detector implemented in the Dlib library. It has its strong and weak points.


Strengths:

  1. The fastest method on CPU; works well on the BBAI AM5729. No dependencies on accelerators like the TI EVE or TI DSP cores.
  2. Lightweight model that fits well into the BBAI.
  3. Works well for frontal and slightly non-frontal faces, and under small occlusion.
  4. Adrian Rosebrock's code provided a good accelerator and a starting base to extend.


Weaknesses:

  1. Doesn't work well in low-light conditions.
  2. Does not work very well under substantial occlusion.
  3. Does not work for side faces and extreme non-frontal poses, like looking down or up.
  4. The pretrained model requires licensing for commercial use.
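The detection step can be sketched as below. This is illustrative rather than the project's actual code; `clamp_rect` is a hypothetical helper that keeps a detection box inside the frame, and the dlib call is the standard frontal face detector API.

```python
def clamp_rect(left, top, right, bottom, width, height):
    """Clamp a detection rectangle to the frame boundaries."""
    return (max(0, left), max(0, top), min(width, right), min(height, bottom))

def detect_faces(frame_gray, width, height):
    """Run Dlib's HOG detector on a grayscale frame (requires dlib)."""
    import dlib  # imported here so the pure helper above has no dependencies
    detector = dlib.get_frontal_face_detector()
    rects = detector(frame_gray, 0)  # 0 = no upsampling, cheaper on the BBAI
    return [clamp_rect(r.left(), r.top(), r.right(), r.bottom(), width, height)
            for r in rects]
```

Skipping upsampling keeps the detector affordable on the AM5729 CPU, at the cost of missing small faces, which is acceptable when the camera is close to the driver.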


Building dependencies

It took a lot of time to build the dependencies on the BBAI: OpenCV 4.1.2, NumPy, dlib, SciPy, and others. I wish they were already preloaded on the BBAI.


I was considering using the OpenCV DNN model as well, but I ran into some challenges: it requires a newer version of OpenCV than was provided with the BBAI. I built OpenCV 4.1.2, but I got runtime errors when I tried to use its DNN module on the BBAI.


I was planning to connect my NXP BLE gas analyzer to detect unsafe levels of CO2 and TVOC, but I have not been able to get BLE operational on the current Linux kernel version supported by the BBAI. More details about the issue can be found on the BBAI GitHub. My plan to use a BT speaker failed too, so I've decided to use a buzzer.


The buzzer requires a signal from a GPIO pin. The Python library for BeagleBone hasn't been updated to support the BBAI, so it took some research to map the pins.
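One way to drive the buzzer without the BeagleBone Python library is the Linux sysfs GPIO interface. A minimal sketch is below; the GPIO number 117 is a placeholder, since the actual number depends on how the BBAI header pin maps to the AM5729 GPIO banks.

```python
import time

def gpio_path(gpio, node):
    """Build a sysfs path such as /sys/class/gpio/gpio117/value."""
    return "/sys/class/gpio/gpio%d/%s" % (gpio, node)

def beep(gpio, seconds=0.5):
    """Export the pin, drive it high for a moment, then low (needs root)."""
    try:
        with open("/sys/class/gpio/export", "w") as f:
            f.write(str(gpio))
    except OSError:
        pass  # pin already exported
    with open(gpio_path(gpio, "direction"), "w") as f:
        f.write("out")
    with open(gpio_path(gpio, "value"), "w") as f:
        f.write("1")
    time.sleep(seconds)
    with open(gpio_path(gpio, "value"), "w") as f:
        f.write("0")
```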

Adding new capabilities

Head tilt

Detecting drowsiness by measuring the eyes has some limitations: it is not very useful if the driver is wearing glasses. So I've added a new feature that measures head tilt. It makes it possible to detect when the driver is distracted by a mobile phone.
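One way to estimate head tilt from Dlib's 68-point facial landmarks is the angle between the nose-bridge-to-chin vector and the vertical axis; a sketch follows. The landmark indices mentioned in the comments (27 = top of nose bridge, 8 = chin) follow the standard 68-point layout, and the threshold idea is illustrative, not the project's tuned logic.

```python
import math

def head_tilt_degrees(nose_bridge, chin):
    """Angle of the nose-to-chin line from vertical, in degrees.

    0 means the head is upright; larger values mean a tilted head.
    Points are (x, y) tuples in image coordinates (y grows downward).
    """
    dx = chin[0] - nose_bridge[0]
    dy = chin[1] - nose_bridge[1]
    return abs(math.degrees(math.atan2(dx, dy)))

# With landmarks as a list of 68 (x, y) points:
# tilt = head_tilt_degrees(landmarks[27], landmarks[8])
# if tilt > TILT_THRESHOLD:  # e.g. driver leaning toward a phone
#     trigger_alarm()
```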

Headless mode

I’ve added a new command-line parameter that allows running the program in a headless mode, without a screen.

Capturing telemetry

It is quite useful to capture the values of key variables in order to analyze the behavior of the system and to fine-tune the thresholds that trigger alarms.
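Telemetry capture can be as simple as appending one timestamped CSV row per processed frame with the values that drive the alarms. The column names below are illustrative, not the project's actual schema.

```python
import csv

def write_telemetry(stream, rows):
    """Write a header plus (time, ear, tilt, alarm) rows to a file-like object."""
    writer = csv.writer(stream)
    writer.writerow(["time_s", "ear", "tilt_deg", "alarm"])
    for row in rows:
        writer.writerow(row)

# Usage on the device:
# with open("telemetry.csv", "w") as f:
#     write_telemetry(f, [(0.1, 0.31, 2.5, 0), (0.2, 0.18, 2.4, 1)])
```

A plain CSV file is easy to pull off the board and plot, which is exactly what the alarm-threshold graphs later in this post are built from.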


Performance optimization

My web camera generates a 480p video stream at 30 fps. I've implemented frame dropping in my DSM: it performs analysis on only one of every three frames.
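The frame-dropping loop can be sketched as follows: grab every frame so the camera buffer stays fresh, but run the expensive detection pipeline on only one frame in three. The names and loop shape are illustrative.

```python
EVERY_NTH = 3  # 30 fps camera -> ~10 fps analysis rate

def should_process(frame_index, every=EVERY_NTH):
    """True for frames 0, 3, 6, ... that go through the full pipeline."""
    return frame_index % every == 0

# Main-loop shape (cv2 calls shown as comments):
# frame_index = 0
# while True:
#     ok, frame = cap.read()        # always read, so frames stay current
#     if ok and should_process(frame_index):
#         analyze(frame)            # HOG detection, landmarks, EAR, tilt
#     frame_index += 1
```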




I've added an option to start the DSM automatically after boot as a service, so that it can be operated without the need to start it from a terminal.
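On the Debian image a common way to do this is a systemd unit; a minimal sketch is below. The script path, interpreter, and arguments are placeholders, not the project's actual layout.

```ini
# /etc/systemd/system/dsm.service - illustrative unit file
[Unit]
Description=Driver State Monitor
After=multi-user.target

[Service]
Type=simple
ExecStart=/usr/bin/python3 /home/debian/dsm/dsm.py --headless
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

The service is then enabled once with `systemctl enable dsm.service` and starts on every boot.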


Reflection on initial tests

After doing several tests, I realized that I needed to implement several new capabilities to make the system more practical.

Video stream calibration

This feature detects the face and crops out unrelated space. It accelerates processing significantly and improves the reliability of face detection. It is now done automatically at the start of the DSM.
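The calibration idea can be sketched like this: after the first successful detection, pad the face box by a margin and use that region as the crop for subsequent frames, so the HOG detector scans far fewer pixels. The 25% margin is a placeholder, not a tuned value.

```python
def expand_roi(x, y, w, h, frame_w, frame_h, margin=0.25):
    """Grow a face box (x, y, w, h) by `margin` and clamp to the frame."""
    pad_x = int(w * margin)
    pad_y = int(h * margin)
    left = max(0, x - pad_x)
    top = max(0, y - pad_y)
    right = min(frame_w, x + w + pad_x)
    bottom = min(frame_h, y + h + pad_y)
    return left, top, right, bottom

# Per frame: roi = frame[top:bottom, left:right], then detect on roi only.
```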


Tilt calibration

The tilt threshold can differ depending on the location of the camera and other factors, so it is now calculated automatically after a few seconds of observation.

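One simple way to auto-calibrate, sketched below, is to collect tilt samples for the first few seconds while the driver sits normally, then set the alarm threshold to the baseline mean plus a margin. The margin value here is illustrative.

```python
def calibrate_tilt_threshold(samples, margin_deg=10.0):
    """Baseline mean of calibration samples plus a safety margin."""
    baseline = sum(samples) / len(samples)
    return baseline + margin_deg

# e.g. ~5 seconds of samples at the ~10 fps analysis rate:
# threshold = calibrate_tilt_threshold(first_50_tilt_values)
```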
Video recording

It is quite useful to review real tests, so I've added a video recording capability to my DSM.
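A recording setup can be sketched as one timestamped file per run, written with OpenCV's VideoWriter (shown as comments so the snippet itself has no cv2 dependency). The filename pattern and codec are illustrative.

```python
from datetime import datetime

def recording_filename(now):
    """e.g. dsm_20200105_133000.avi for 2020-01-05 13:30:00."""
    return now.strftime("dsm_%Y%m%d_%H%M%S.avi")

# On the device:
# import cv2
# out = cv2.VideoWriter(recording_filename(datetime.now()),
#                       cv2.VideoWriter_fourcc(*"MJPG"), 10, (640, 480))
# out.write(annotated_frame)   # each processed frame, with overlays
# out.release()
```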


Testing Driver State Monitor with BeagleBone AI

Lab Test

The following video illustrates the working solution in my home lab. The first alarm is generated at the 16th second, triggered by head tilt when I start looking down, and another at the 31st second, when I close my eyes. Be ready for a noisy buzzer!

Test results

The following graph illustrates that the alarm (red line) was triggered by reaching the head tilt threshold in the first case and by reaching the EAR (eye aspect ratio) threshold, i.e. closed eyes, in the second.
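The EAR metric comes from the drowsiness-detection approach this project builds on: the ratio of the two vertical eye distances to the horizontal one, computed from the six landmarks of one eye in Dlib's ordering. It drops toward zero when the eye closes.

```python
import math

def euclidean(a, b):
    """Distance between two (x, y) points."""
    return math.hypot(a[0] - b[0], a[1] - b[1])

def eye_aspect_ratio(eye):
    """EAR for six (x, y) eye landmarks [p1, p2, p3, p4, p5, p6]."""
    vertical_1 = euclidean(eye[1], eye[5])  # p2-p6
    vertical_2 = euclidean(eye[2], eye[4])  # p3-p5
    horizontal = euclidean(eye[0], eye[3])  # p1-p4
    return (vertical_1 + vertical_2) / (2.0 * horizontal)
```

An open eye gives a roughly constant EAR, while a closed eye drives it toward zero, which is what the alarm threshold detects.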

Test Drive

Here is my test setup in the car.

BBAI in the car - ready for the first test drive.


This video captures a real test drive of the DSM, with the results of processing recorded on video. The young driver did a great job and was focused on the road at all times, so no alerts were generated during the drive.


I'm using the following components to implement this project:

  • BBAI to detect and analyze the driver's face and eyes: to check whether the eyes are open, whether the driver is focused on the road, or whether the head is tilted toward a mobile phone. It was provided by Element14.
  • USB Webcam
  • Seeed studio buzzer
  • CPU fan F251R-05LLC.  It was provided by Element14.
  • Debian (supplied with BBAI) kernel 4.14.x
  • Python 3.5
  • OpenCV 4 to process video stream
  • Dlib by Davis King to detect face and face landmarks

My ToDo list

  • Test the system in a car
  • Test with glasses
  • Test in low-light conditions
  • Publish code on GitHub
  • Improve reliability in low light conditions (may need to add IR Camera)
  • Improve performance by using the EVE and C66x cores once TI releases support for OpenCV 3.3+ and the DNN module.
  • Explore Tensorflow/Python integration with TIDL once it is available in Q1 2020
  • Add gas sensors once BLE support is implemented in BBAI
  • Add blue LED light in addition to buzzer
  • Set 10 fps mode on camera



Thank you for reading my blog!