Vision Thing


Introduction

I will work on a virtual loop sensor.

A camera is installed at a junction, and from its live stream I can place virtual loop sensors on the road to detect and count vehicles.

It can also be used to gauge the traffic congestion level.

 

Vehicle detection loops, called inductive-loop traffic detectors, can detect vehicles passing or arriving at a certain point, for instance approaching a traffic light or in motorway traffic. An insulated, electrically conducting loop is installed in the pavement. [1]

 

A vision-based loop sensor requires no pavement cut, and I can add, remove, or modify a sensor whenever I need to.

 

Implementation:

I will use a Raspberry Pi 2 and a Raspberry Pi Camera v1.3.

Rather than installing the Pi at a junction, I will record video from a traffic junction (or grab one online) and process it at home.

As video proof, I will build a simple vehicle detection and counting demo with a miniature vehicle (e.g. a remote-control car) to demonstrate the project output.

 

References

[1] https://en.wikipedia.org/wiki/Induction_loop

[2] https://www.drivingtests.co.nz/resources/traffic-lights-change/

 

Bill of Materials

Raspberry Pi 2

Raspberry Pi Camera v1.3

Raspberry Pi 2 casing

2 x resistor

2 x LED

some wires

 

Optional:

WiPi (Wi-Fi dongle)

keyboard and mouse

monitor

 

Setting up Raspberry Pi 2

I downloaded the 2019-04-08 Raspbian Stretch image and flashed it to an SD card using balenaEtcher. It completed without any problems.

https://downloads.raspberrypi.org/raspbian/images/

 

 

OpenCV

This was challenge number one. Initially I thought of using Python OpenCV for its easy installation: install the dependencies, and pip install would do the work for me.

But I kept getting the error "ImportError: numpy.core.multiarray failed to import" whenever I imported cv2.

I tried many things and could not get rid of it: updating numpy, upgrading Python, switching to different OpenCV versions, no luck.

Finally, I decided to compile OpenCV from source and use C++ instead. It took hours to compile.

https://linuxize.com/post/how-to-install-opencv-on-raspberry-pi/

 

 

 

Simple Program

The initial plan was to record live video from a junction, but frequent rainy days made that difficult. The alternative was to search Google for a traffic video.

I wrote a simple program to play back a video.

 

To compile code against the OpenCV library, use the following; change the binary's permissions before running it.

g++ test.cpp -o test $(pkg-config --cflags --libs opencv4)
chmod +x test
./test

Program to play a video:

#include <iostream>
#include <string>
#include <iomanip>
#include <sstream>


#include <opencv2/opencv.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>


using namespace cv;
using namespace std;


int main(int argc, char* argv[])
{  
    // load video
    VideoCapture cap("ysample.mp4");
    if (!cap.isOpened()){
        printf ("failed to open video capture\r\n");
        return -1;
    }
    while (1){
        // start capture image from video
        Mat capturedImage;
        cap >> capturedImage; // get a new frame from video
        if (capturedImage.empty()){
            printf ("end of video\r\n");
            break;
        }
        imshow("input", capturedImage);
        waitKey(20);
    }
    return 0;
}

 

Flow Chart and Implementation

The idea is simple: background subtraction. Given a line sample over enough frames, I can compute the average value of the background.

Whenever a vehicle passes by, the pixel values change, and detection is based on these changes.

I used a sample video from YouTube for processing.

https://www.youtube.com/watch?v=nt3D26lrkho

 

Here is the output.

 

and the source code

#include <iostream>
#include <string>
#include <iomanip>
#include <sstream>
#include <math.h>

#include <opencv2/opencv.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>

using namespace cv;
using namespace std;

#define BG_DEPTH 100          // collect 100 samples as background

// store point location of sensor
Point pta, ptb;
bool drawingLine = false;

Mat getBGMean (Mat bg);
int threshold (Mat sensor, Mat mean);

/**********************************************************************/
/*******************this is mouse call back function*******************/
/**********************************************************************/
void my_mouse_callback(int event, int x, int y, int flags, void* param)
{
    Mat* image = (Mat*)param;

    switch(event){
        case EVENT_MOUSEMOVE:{
            if(drawingLine){
                ptb.x = x;
                ptb.y = y;
            }
        }
        break;

        case EVENT_LBUTTONDOWN:{
            drawingLine = true;
            pta.x = x;
            pta.y = y;
            ptb.x = x;
            ptb.y = y;
        }
        break;

        case EVENT_LBUTTONUP:{
            drawingLine = false;
            ptb.x = x;
            ptb.y = y;
            line(*image,pta,ptb,Scalar(0x00,0x00,0xff),3);
        }
        break;
    }
}

int main(int argc, char* argv[])
{  
    // load video
    VideoCapture cap("highway2.mp4");          
    if (!cap.isOpened()){
        printf ("failed to open video capture\r\n");
        return -1;
    }

    // -----------------------------------capture first frame for sensor drawing---------------------------//
    Mat capturedImage;
    cap >> capturedImage;
    Mat temp = capturedImage.clone();

    // setup mouse callback, this is to draw a line
    namedWindow("VLS", WINDOW_AUTOSIZE);
    setMouseCallback("VLS", my_mouse_callback, &capturedImage);

    // print text on image
    putText(capturedImage, "Draw Sensor", Point(30, 20), FONT_HERSHEY_SIMPLEX, 0.5, Scalar(0,255,255), 1);
    putText(capturedImage, "Press ESC to Complete" , Point(30, 40), FONT_HERSHEY_SIMPLEX, 0.5, Scalar(0,255,255), 1);

    // live display of line drawing, until user press ESC
    while (1)
    {
        capturedImage.copyTo(temp);
        if (drawingLine)
            line(temp,pta,ptb,Scalar(0x00,0x00,0xff),3);
        imshow("VLS",temp);

        if (waitKey(15) == 27)
            break;
    }

    // remove mouse callback to the output window
    setMouseCallback("VLS", NULL, NULL);

    // -----------------------------------sample sensor and build background image--------------------------//
    int sensorLength = max(abs(pta.x -ptb.x)+1,abs(pta.y -ptb.y)+1);     // calculate number of pixel for drawn sensor.
    Mat sensor, bgImage;
    sensor.create(Size(sensorLength, 1), CV_8UC3);          // create a Mat to hold sensor value (as line image)
    bgImage.create(Size(sensorLength, BG_DEPTH), CV_8UC3);

    // start sample background
    for(int frameNum=0; frameNum < BG_DEPTH; frameNum++){
        // fill up background image
        cap >> capturedImage;
        printf("sampling background %d of %d\n",frameNum,BG_DEPTH);

        // just to display image with sensor
        temp = capturedImage.clone();
        line(temp,pta,ptb,Scalar(0x00,0x00,0xff),1);
        imshow("VLS",temp);
        waitKey(1);

        // sample sensor line into background image
        uchar *lineBufferPtr, *bgImagePtr;
        LineIterator lineIt(capturedImage, pta, ptb, 8);

        bgImagePtr = bgImage.ptr<uchar>(frameNum);
        for (int i = 0; i < lineIt.count; i++, ++lineIt){
            bgImagePtr[i * 3] = ((const Vec3b)*lineIt).val[0];
            bgImagePtr[i * 3 + 1] = ((const Vec3b)*lineIt).val[1];
            bgImagePtr[i * 3 + 2] = ((const Vec3b)*lineIt).val[2];
        }
    }

    // -----------------------------------sample sensor and start sensing--------------------------//
    // loop forever, until end of video
    while(true){

        // we need to update background with new sample. before that, shift the background upward by 1 row before fitting new sensor into it.               
        uchar *curBGImagePtr, *newBGImagePtr;
        int nRows = bgImage.rows - 1;
        int nCols = bgImage.cols * bgImage.channels();
        for (int row = 0; row < nRows; row++){
            curBGImagePtr = bgImage.ptr<uchar>(row + 1);
            newBGImagePtr = bgImage.ptr<uchar>(row);
            // start copy pixel from col to col in a row
            for (int col = 0; col < nCols; col++)
                newBGImagePtr[col] = curBGImagePtr[col];
        }

        // capture image from video
        cap >> capturedImage;
        // just to show the line 
        temp = capturedImage.clone();
        line(temp,pta,ptb,Scalar(0x00,0x00,0xff),1);
        imshow("VLS",temp);

        // sample sensor and update background
        uchar *lineBufferPtr, *bgImagePtr, *sensorPtr;
        LineIterator lineIt(capturedImage, pta, ptb, 8);

        // sample sensor
        bgImagePtr = bgImage.ptr<uchar>(BG_DEPTH - 1);          // update sensor to bgimage at last line
        sensorPtr = sensor.ptr<uchar>(0);
        for (int i = 0; i < lineIt.count; i++, ++lineIt){
            bgImagePtr[i * 3] = ((const Vec3b)*lineIt).val[0];
            bgImagePtr[i * 3 + 1] = ((const Vec3b)*lineIt).val[1];
            bgImagePtr[i * 3 + 2] = ((const Vec3b)*lineIt).val[2];
            sensorPtr[i * 3] = ((const Vec3b)*lineIt).val[0];
            sensorPtr[i * 3 + 1] = ((const Vec3b)*lineIt).val[1];
            sensorPtr[i * 3 + 2] = ((const Vec3b)*lineIt).val[2];
        }     

        // detection
        Mat mean = getBGMean (bgImage);
        int detection = threshold (sensor, mean);

        printf ("sensor value: %d\r\n", detection);

        // write to image 
        if (detection)
            putText(temp, "ON", Point(30, 50), FONT_HERSHEY_SIMPLEX, 1, Scalar(0,0,255), 2);
        else
            putText(temp, "OFF", Point(30, 50), FONT_HERSHEY_SIMPLEX, 1, Scalar(0,0,255), 2);

        imshow("VLS",temp);
        imshow("BG",bgImage);
        imshow("SENSOR",sensor);
        waitKey(1);
    }

    return 0;
}

int threshold (Mat sensor, Mat mean)
{
    // convert sensor to float
    sensor.convertTo(sensor, CV_32FC3, 1/255.0);

    // find euclidean distance between sensor and bg mean
    int vehicleArea = 0;
    for (int x = 0; x < mean.cols; x++){
        float a,b,c, eDistance;
        a = sensor.at<Vec3f>(0,x).val[0] - mean.at<Vec3f>(0,x).val[0];
        a = a*a;
        b = sensor.at<Vec3f>(0,x).val[1] - mean.at<Vec3f>(0,x).val[1];
        b = b*b;
        c = sensor.at<Vec3f>(0,x).val[2] - mean.at<Vec3f>(0,x).val[2];
        c = c*c;
        eDistance = a+b+c;
        eDistance = sqrtf(eDistance);

        // if distance > 0.2, increase vehicle area
        if (eDistance > 0.2)
            vehicleArea++;
    }
    // if the sensor has huge difference with bg mean, return 1 as detected
    if (vehicleArea > 50)
        return 1;
    return 0;
}

Mat getBGMean (Mat bg)
{
    Mat mean(Size(bg.cols, 1), CV_32FC3, Scalar(0,0,0));          // create a Mat to hold bg mean value
    bg.convertTo(bg, CV_32FC3, 1/255.0);

    // sum all into mean Mat
    for (int x = 0; x < bg.cols; x++){
        for (int y = 0; y< bg.rows; y++)
            mean.at<Vec3f>(0,x) = mean.at<Vec3f>(0,x) + bg.at<Vec3f>(y,x);
    }

    // get mean
    for (int x = 0; x < bg.cols; x++)
        mean.at<Vec3f>(0,x) = mean.at<Vec3f>(0,x)/bg.rows;
    return mean;
}

 

 

 

Multiple Sensors and LED Indicator

I added a multiple-sensor option. More sensors can be added; the count is defined at the beginning of the program.

The idea is simple as well: repeat the same job SENSOR_COUNT times in a for loop.

The more sensors are added, the slower the processing.

#define SENSOR_COUNT 2

 

I connected 2 LEDs as indicators to GPIO 24 and 25. The sensor status is shown on screen and on the LEDs. Below is the schematic.

Each red LED has a 390 ohm series resistor and is active low.

It took me a while to realise that BCM pin numbering is different from WiringPi numbering.

 

I am using WiringPi to control the GPIO. It is easy to use, the syntax is similar to Arduino, and it comes preinstalled on Raspbian.

An additional flag is needed to compile the program with WiringPi.

 

g++ main.cpp -o main $(pkg-config --cflags --libs opencv4) -lwiringPi

 

Blink | Wiring Pi

 

Output Video

 

Source Code

#include <iostream>
#include <string>
#include <iomanip>
#include <sstream>
#include <math.h>


#include <opencv2/opencv.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>


#include <wiringPi.h>


using namespace cv;
using namespace std;


#define SENSOR_COUNT 2
#define BG_DEPTH 100        // collect 100 samples as background


// store point location of sensor
Point pta[SENSOR_COUNT], ptb[SENSOR_COUNT];
bool drawingLine = false;
int activeSensor = 0;
int detectionResult[SENSOR_COUNT];


Mat getBGMean (Mat bg);
int threshold (Mat sensor, Mat mean);


/**********************************************************************/
/*******************this is mouse call back function*******************/
/**********************************************************************/
void my_mouse_callback(int event, int x, int y, int flags, void* param)
{
    Mat* image = (Mat*)param;


    switch(event)
    {
    case EVENT_MOUSEMOVE:
        {
            if(drawingLine)
            {
                ptb[activeSensor].x = x;
                ptb[activeSensor].y = y;
            }
        }
        break;


    case EVENT_LBUTTONDOWN:
        {
            drawingLine = true;
            pta[activeSensor].x = x;
            pta[activeSensor].y = y;
            ptb[activeSensor].x = x;
            ptb[activeSensor].y = y;
        }
        break;


    case EVENT_LBUTTONUP:
        {
            drawingLine = false;
            ptb[activeSensor].x = x;
            ptb[activeSensor].y = y;
            line(*image,pta[activeSensor],ptb[activeSensor],Scalar(0x00,0x00,0xff),3);
        }
        break;
    }
}


int main(int argc, char* argv[])
{  
    // setup wiring pi
    wiringPiSetup () ;
    pinMode (24, OUTPUT);
    pinMode (25, OUTPUT);
    digitalWrite (24, HIGH);
    digitalWrite (25, HIGH);
  
    // load video
    VideoCapture cap("highway2.mp4");        
    if (!cap.isOpened()){
        printf ("failed to open video capture\r\n");
        return -1;
    }
    
    // -----------------------------------capture first frame for sensor drawing---------------------------//
    Mat capturedImage;
    cap >> capturedImage;
    Mat temp = capturedImage.clone();


    // setup mouse callback, this is to draw a line
    namedWindow("VLS", WINDOW_AUTOSIZE);
    setMouseCallback("VLS", my_mouse_callback, &capturedImage);


    // print text on image
    putText(capturedImage, "Draw Sensor", Point(30, 20), FONT_HERSHEY_SIMPLEX, 0.5, Scalar(0,255,255), 1);
    putText(capturedImage, "Press ESC to Complete" , Point(30, 40), FONT_HERSHEY_SIMPLEX, 0.5, Scalar(0,255,255), 1);
    
    // live display of line drawing, until user press ESC
    for(activeSensor = 0; activeSensor < SENSOR_COUNT; activeSensor++){
        while (1)
        {
            capturedImage.copyTo(temp);
            if (drawingLine)
                line(temp,pta[activeSensor],ptb[activeSensor],Scalar(0x00,0x00,0xff),3);
            imshow("VLS",temp);


            if (waitKey(15) == 27) break;
        }
    }
    
    // remove mouse callback to the output window
    setMouseCallback("VLS", NULL, NULL);


    // -----------------------------------sample sensor and build background image--------------------------//
    Mat sensor[SENSOR_COUNT], bgImage[SENSOR_COUNT];
    for(activeSensor = 0; activeSensor < SENSOR_COUNT; activeSensor++){
        int sensorLength = max(abs(pta[activeSensor].x -ptb[activeSensor].x)+1,abs(pta[activeSensor].y -ptb[activeSensor].y)+1);    // calculate number of pixel for drawn sensor.
        sensor[activeSensor].create(Size(sensorLength, 1), CV_8UC3);        // create a Mat to hold sensor value (as line image)
        bgImage[activeSensor].create(Size(sensorLength, BG_DEPTH), CV_8UC3);
    }
    
    // start sample background
    for(int frameNum=0; frameNum < BG_DEPTH; frameNum++){
        // fill up background image
        cap >> capturedImage;
        printf("sampling background %d of %d\n",frameNum,BG_DEPTH);
        
        // just to display image with sensor
        temp = capturedImage.clone();
        for(activeSensor = 0; activeSensor < SENSOR_COUNT; activeSensor++)
            line(temp,pta[activeSensor],ptb[activeSensor],Scalar(0x00,0x00,0xff),1);
        imshow("VLS",temp);
        waitKey(1);
        
        // sample sensor line into background image
        uchar *lineBufferPtr, *bgImagePtr;
        
        for(activeSensor = 0; activeSensor < SENSOR_COUNT; activeSensor++){
            LineIterator lineIt(capturedImage, pta[activeSensor], ptb[activeSensor], 8);
            bgImagePtr = bgImage[activeSensor].ptr<uchar>(frameNum);
            for (int i = 0; i < lineIt.count; i++, ++lineIt){
                bgImagePtr[i * 3] = ((const Vec3b)*lineIt).val[0];
                bgImagePtr[i * 3 + 1] = ((const Vec3b)*lineIt).val[1];
                bgImagePtr[i * 3 + 2] = ((const Vec3b)*lineIt).val[2];
            }
        }
    }
    
    // -----------------------------------sample sensor and start sensing--------------------------//
    // loop forever, until end of video
    while(true){
        
        // we need to update background with new sample. before that, shift the background upward by 1 row before fitting new sensor into it.            
        uchar *curBGImagePtr, *newBGImagePtr;
        
        for(activeSensor = 0; activeSensor < SENSOR_COUNT; activeSensor++){
            int nRows = bgImage[activeSensor].rows - 1;
            int nCols = bgImage[activeSensor].cols * bgImage[activeSensor].channels();
            for (int row = 0; row < nRows; row++){
                curBGImagePtr = bgImage[activeSensor].ptr<uchar>(row + 1);
                newBGImagePtr = bgImage[activeSensor].ptr<uchar>(row);
                // start copy pixel from col to col in a row
                for (int col = 0; col < nCols; col++)
                    newBGImagePtr[col] = curBGImagePtr[col];
            }
        }
            
        // capture image from video
        cap >> capturedImage;
        // just to show the line 
        temp = capturedImage.clone();
        for(activeSensor = 0; activeSensor < SENSOR_COUNT; activeSensor++)
            line(temp,pta[activeSensor],ptb[activeSensor],Scalar(0x00,0x00,0xff),1);
        imshow("VLS",temp);
    
        // sample sensor and update background
        uchar *lineBufferPtr, *bgImagePtr, *sensorPtr;
        
        // sample sensor
        for(activeSensor = 0; activeSensor < SENSOR_COUNT; activeSensor++){
            LineIterator lineIt(capturedImage, pta[activeSensor], ptb[activeSensor], 8);
            bgImagePtr = bgImage[activeSensor].ptr<uchar>(BG_DEPTH - 1);        // update sensor to bgimage at last line
            sensorPtr = sensor[activeSensor].ptr<uchar>(0);
            for (int i = 0; i < lineIt.count; i++, ++lineIt){
                bgImagePtr[i * 3] = ((const Vec3b)*lineIt).val[0];
                bgImagePtr[i * 3 + 1] = ((const Vec3b)*lineIt).val[1];
                bgImagePtr[i * 3 + 2] = ((const Vec3b)*lineIt).val[2];
                sensorPtr[i * 3] = ((const Vec3b)*lineIt).val[0];
                sensorPtr[i * 3 + 1] = ((const Vec3b)*lineIt).val[1];
                sensorPtr[i * 3 + 2] = ((const Vec3b)*lineIt).val[2];
            }    
            
            // detection
            Mat mean = getBGMean (bgImage[activeSensor]);
            detectionResult[activeSensor] = threshold (sensor[activeSensor], mean);
        }
        
        printf ("sensor value: %d, %d\r\n", detectionResult[0], detectionResult[1]);
        
        // write to image 
        if (detectionResult[0]){
            putText(temp, "ON", Point(30, 50), FONT_HERSHEY_SIMPLEX, 1, Scalar(0,0,255), 2);
            digitalWrite (24, LOW);
        }
        else{
            putText(temp, "OFF", Point(30, 50), FONT_HERSHEY_SIMPLEX, 1, Scalar(0,0,255), 2);
            digitalWrite (24,HIGH);
        }
        if (detectionResult[1]){
            putText(temp, "ON", Point(330, 50), FONT_HERSHEY_SIMPLEX, 1, Scalar(0,0,255), 2);
            digitalWrite (25, LOW);
        }
        else{
            putText(temp, "OFF", Point(330, 50), FONT_HERSHEY_SIMPLEX, 1, Scalar(0,0,255), 2);
            digitalWrite (25, HIGH);
        }
        
        imshow("VLS",temp);
        waitKey(1);
    }


    return 0;
}


int threshold (Mat sensor, Mat mean)
{
    // convert sensor to float
    sensor.convertTo(sensor, CV_32FC3, 1/255.0);
    
    // find euclidean distance between sensor and bg mean
    int vehicleArea = 0;
    for (int x = 0; x < mean.cols; x++){
        float a,b,c, eDistance;
        a = sensor.at<Vec3f>(0,x).val[0] - mean.at<Vec3f>(0,x).val[0];
        a = a*a;
        b = sensor.at<Vec3f>(0,x).val[1] - mean.at<Vec3f>(0,x).val[1];
        b = b*b;
        c = sensor.at<Vec3f>(0,x).val[2] - mean.at<Vec3f>(0,x).val[2];
        c = c*c;
        eDistance = a+b+c;
        eDistance = sqrtf(eDistance);
        
        // if distance > 0.1, increase vehicle area
        if (eDistance > 0.1)
            vehicleArea++;
    }
    // if the sensor has huge difference with bg mean, return 1 as detected
    if (vehicleArea > 50)
        return 1;
    return 0;
}


Mat getBGMean (Mat bg)
{
    Mat mean(Size(bg.cols, 1), CV_32FC3, Scalar(0,0,0));        // create a Mat to hold bg mean value
    bg.convertTo(bg, CV_32FC3, 1/255.0);


    // sum all into mean Mat
    for (int x = 0; x < bg.cols; x++){
        for (int y = 0; y< bg.rows; y++)
            mean.at<Vec3f>(0,x) = mean.at<Vec3f>(0,x) + bg.at<Vec3f>(y,x);
    }
    
    // get mean
    for (int x = 0; x < bg.cols; x++)
        mean.at<Vec3f>(0,x) = mean.at<Vec3f>(0,x)/bg.rows;
    return mean;
}

 

 

Live Video from RaspiCam

Searching Google for a C++ raspicam library, this is the first result:

https://www.uco.es/investiga/grupos/ava/node/40

Get the latest source release and follow the instructions to install raspicam.

 

With OpenCV 4 you will get plenty of "XXX was not declared in this scope" errors. This is because the capture property constants no longer begin with CV_ in OpenCV 4, and raspicam is written against OpenCV 3.

The workaround is to change every CV_CAP_XXX into cv::CAP_XXX in the raspicam source files. There are 4 files in total:

src/raspicam_cv.cpp

src/raspicam_still_cv.cpp

utils/raspicam_cv_test.cpp

utils/raspicam_cv_still_test.cpp
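As an illustration, a property-setting line in those files changes like this (the surrounding code in your raspicam version may differ; the pattern is the same for every CV_CAP_ constant):

```cpp
// before (OpenCV 3 names, as found in the raspicam sources):
cap.set ( CV_CAP_PROP_FRAME_WIDTH, width );
// after (OpenCV 4 names):
cap.set ( cv::CAP_PROP_FRAME_WIDTH, width );
```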

 

 

To compile the program with raspicam, the flag -lraspicam_cv is needed.

 

g++ main.cpp -o main $(pkg-config --cflags --libs opencv4) -lwiringPi -lraspicam_cv

 

Source Code

#include <iostream>
#include <string>
#include <iomanip>
#include <sstream>
#include <math.h>


#include <opencv2/opencv.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>


#include <wiringPi.h>
#include <raspicam/raspicam_cv.h>


using namespace cv;
using namespace std;


#define SENSOR_COUNT 2
#define BG_DEPTH 100        // collect 100 samples as background


#define USE_RASPICAM


// store point location of sensor
Point pta[SENSOR_COUNT], ptb[SENSOR_COUNT];
bool drawingLine = false;
int activeSensor = 0;
int detectionResult[SENSOR_COUNT];


Mat getBGMean (Mat bg);
int threshold (Mat sensor, Mat mean);


/**********************************************************************/
/*******************this is mouse call back function*******************/
/**********************************************************************/
void my_mouse_callback(int event, int x, int y, int flags, void* param)
{
    Mat* image = (Mat*)param;


    switch(event)
    {
    case EVENT_MOUSEMOVE:
        {
            if(drawingLine)
            {
                ptb[activeSensor].x = x;
                ptb[activeSensor].y = y;
            }
        }
        break;


    case EVENT_LBUTTONDOWN:
        {
            drawingLine = true;
            pta[activeSensor].x = x;
            pta[activeSensor].y = y;
            ptb[activeSensor].x = x;
            ptb[activeSensor].y = y;
        }
        break;


    case EVENT_LBUTTONUP:
        {
            drawingLine = false;
            ptb[activeSensor].x = x;
            ptb[activeSensor].y = y;
            line(*image,pta[activeSensor],ptb[activeSensor],Scalar(0x00,0x00,0xff),3);
        }
        break;
    }
}


int main(int argc, char* argv[])
{  
    // setup wiring pi
    wiringPiSetup () ;
    pinMode (24, OUTPUT);
    pinMode (25, OUTPUT);
    digitalWrite (24, HIGH);
    digitalWrite (25, HIGH);
    
#ifdef USE_RASPICAM
    raspicam::RaspiCam_Cv Camera;
    
    //set camera params
    Camera.set(cv::CAP_PROP_FORMAT, CV_8UC3);
    Camera.set(cv::CAP_PROP_FRAME_WIDTH, 640);
    Camera.set(cv::CAP_PROP_FRAME_HEIGHT, 480);
    //Open camera
    cout<<"Opening Camera..."<<endl;
    if (!Camera.open()) {
        cerr<<"Error opening the camera"<<endl;
        return -1;
    }
#else  
    // load video
    VideoCapture cap("highway2.mp4");        
    if (!cap.isOpened()){
        printf ("failed to open video capture\r\n");
        return -1;
    }
#endif


    // -----------------------------------capture first frame for sensor drawing---------------------------//
    Mat capturedImage;
#ifdef USE_RASPICAM
    Camera.grab();
    Camera.retrieve (capturedImage);
#else
    cap >> capturedImage;
#endif
    Mat temp = capturedImage.clone();


    // setup mouse callback, this is to draw a line
    namedWindow("VLS", WINDOW_AUTOSIZE);
    setMouseCallback("VLS", my_mouse_callback, &capturedImage);


    // print text on image
    putText(capturedImage, "Draw Sensor", Point(30, 20), FONT_HERSHEY_SIMPLEX, 0.5, Scalar(0,255,255), 1);
    putText(capturedImage, "Press ESC to Complete" , Point(30, 40), FONT_HERSHEY_SIMPLEX, 0.5, Scalar(0,255,255), 1);
    
    // live display of line drawing, until user press ESC
    for(activeSensor = 0; activeSensor < SENSOR_COUNT; activeSensor++){
        while (1)
        {
            capturedImage.copyTo(temp);
            if (drawingLine)
                line(temp,pta[activeSensor],ptb[activeSensor],Scalar(0x00,0x00,0xff),3);
            imshow("VLS",temp);


            if (waitKey(15) == 27) break;
        }
    }
    
    // remove mouse callback to the output window
    setMouseCallback("VLS", NULL, NULL);


    // -----------------------------------sample sensor and build background image--------------------------//
    Mat sensor[SENSOR_COUNT], bgImage[SENSOR_COUNT];
    for(activeSensor = 0; activeSensor < SENSOR_COUNT; activeSensor++){
        int sensorLength = max(abs(pta[activeSensor].x -ptb[activeSensor].x)+1,abs(pta[activeSensor].y -ptb[activeSensor].y)+1);    // calculate number of pixel for drawn sensor.
        sensor[activeSensor].create(Size(sensorLength, 1), CV_8UC3);        // create a Mat to hold sensor value (as line image)
        bgImage[activeSensor].create(Size(sensorLength, BG_DEPTH), CV_8UC3);
    }
    
    // start sample background
    for(int frameNum=0; frameNum < BG_DEPTH; frameNum++){
        // fill up background image
#ifdef USE_RASPICAM
        Camera.grab();
        Camera.retrieve (capturedImage);
#else
        cap >> capturedImage;
#endif
        printf("sampling background %d of %d\n",frameNum,BG_DEPTH);
        
        // just to display image with sensor
        temp = capturedImage.clone();
        for(activeSensor = 0; activeSensor < SENSOR_COUNT; activeSensor++)
            line(temp,pta[activeSensor],ptb[activeSensor],Scalar(0x00,0x00,0xff),1);
        imshow("VLS",temp);
        waitKey(1);
        
        // sample sensor line into background image
        uchar *lineBufferPtr, *bgImagePtr;
        
        for(activeSensor = 0; activeSensor < SENSOR_COUNT; activeSensor++){
            LineIterator lineIt(capturedImage, pta[activeSensor], ptb[activeSensor], 8);
            bgImagePtr = bgImage[activeSensor].ptr<uchar>(frameNum);
            for (int i = 0; i < lineIt.count; i++, ++lineIt){
                bgImagePtr[i * 3] = ((const Vec3b)*lineIt).val[0];
                bgImagePtr[i * 3 + 1] = ((const Vec3b)*lineIt).val[1];
                bgImagePtr[i * 3 + 2] = ((const Vec3b)*lineIt).val[2];
            }
        }
    }
    
    // -----------------------------------sample sensor and start sensing--------------------------//
    // loop forever, until end of video
    while(true){
        
        // we need to update background with new sample. before that, shift the background upward by 1 row before fitting new sensor into it.            
        uchar *curBGImagePtr, *newBGImagePtr;
        
        for(activeSensor = 0; activeSensor < SENSOR_COUNT; activeSensor++){
            int nRows = bgImage[activeSensor].rows - 1;
            int nCols = bgImage[activeSensor].cols * bgImage[activeSensor].channels();
            for (int row = 0; row < nRows; row++){
                curBGImagePtr = bgImage[activeSensor].ptr<uchar>(row + 1);
                newBGImagePtr = bgImage[activeSensor].ptr<uchar>(row);
                // start copy pixel from col to col in a row
                for (int col = 0; col < nCols; col++)
                    newBGImagePtr[col] = curBGImagePtr[col];
            }
        }
            
        // capture image from video
#ifdef USE_RASPICAM
        Camera.grab();
        Camera.retrieve (capturedImage);
#else
        cap >> capturedImage;
#endif
        // just to show the line 
        temp = capturedImage.clone();
        for(activeSensor = 0; activeSensor < SENSOR_COUNT; activeSensor++)
            line(temp,pta[activeSensor],ptb[activeSensor],Scalar(0x00,0x00,0xff),1);
        imshow("VLS",temp);
    
        // sample sensor and update background
        uchar *lineBufferPtr, *bgImagePtr, *sensorPtr;
        
        // sample sensor
        for(activeSensor = 0; activeSensor < SENSOR_COUNT; activeSensor++){
            LineIterator lineIt(capturedImage, pta[activeSensor], ptb[activeSensor], 8);
            bgImagePtr = bgImage[activeSensor].ptr<uchar>(BG_DEPTH - 1);        // update sensor to bgimage at last line
            sensorPtr = sensor[activeSensor].ptr<uchar>(0);
            for (int i = 0; i < lineIt.count; i++, ++lineIt){
                bgImagePtr[i * 3] = ((const Vec3b)*lineIt).val[0];
                bgImagePtr[i * 3 + 1] = ((const Vec3b)*lineIt).val[1];
                bgImagePtr[i * 3 + 2] = ((const Vec3b)*lineIt).val[2];
                sensorPtr[i * 3] = ((const Vec3b)*lineIt).val[0];
                sensorPtr[i * 3 + 1] = ((const Vec3b)*lineIt).val[1];
                sensorPtr[i * 3 + 2] = ((const Vec3b)*lineIt).val[2];
            }    
            
            // detection
            Mat mean = getBGMean (bgImage[activeSensor]);
            detectionResult[activeSensor] = threshold (sensor[activeSensor], mean);
        }
        
        printf ("sensor value: %d, %d\r\n", detectionResult[0], detectionResult[1]);
        
        // write to image 
        if (detectionResult[0]){
            putText(temp, "ON", Point(30, 50), FONT_HERSHEY_SIMPLEX, 1, Scalar(0,0,255), 2);
            digitalWrite (24, LOW);
        }
        else{
            putText(temp, "OFF", Point(30, 50), FONT_HERSHEY_SIMPLEX, 1, Scalar(0,0,255), 2);
            digitalWrite (24,HIGH);
        }
        if (detectionResult[1]){
            putText(temp, "ON", Point(330, 50), FONT_HERSHEY_SIMPLEX, 1, Scalar(0,0,255), 2);
            digitalWrite (25, LOW);
        }
        else{
            putText(temp, "OFF", Point(330, 50), FONT_HERSHEY_SIMPLEX, 1, Scalar(0,0,255), 2);
            digitalWrite (25, HIGH);
        }
        
        imshow("VLS",temp);
        waitKey(1);
    }
    
#ifdef USE_RASPICAM
    Camera.release();
#endif
    return 0;
}


int threshold (Mat sensor, Mat mean)
{
    // convert sensor to float
    sensor.convertTo(sensor, CV_32FC3, 1/255.0);
    
    // find euclidean distance between sensor and bg mean
    int vehicleArea = 0;
    for (int x = 0; x < mean.cols; x++){
        float a,b,c, eDistance;
        a = sensor.at<Vec3f>(0,x).val[0] - mean.at<Vec3f>(0,x).val[0];
        a = a*a;
        b = sensor.at<Vec3f>(0,x).val[1] - mean.at<Vec3f>(0,x).val[1];
        b = b*b;
        c = sensor.at<Vec3f>(0,x).val[2] - mean.at<Vec3f>(0,x).val[2];
        c = c*c;
        eDistance = a+b+c;
        eDistance = sqrtf(eDistance);
        
        // if distance > 0.1, increase vehicle area
        if (eDistance > 0.1)
            vehicleArea++;
    }
    // if the sensor has huge difference with bg mean, return 1 as detected
    if (vehicleArea > 50)
        return 1;
    return 0;
}


Mat getBGMean (Mat bg)
{
    Mat mean(Size(bg.cols, 1), CV_32FC3, Scalar(0,0,0));        // create a Mat to hold bg mean value
    bg.convertTo(bg, CV_32FC3, 1/255.0);


    // sum all into mean Mat
    for (int x = 0; x < bg.cols; x++){
        for (int y = 0; y< bg.rows; y++)
            mean.at<Vec3f>(0,x) = mean.at<Vec3f>(0,x) + bg.at<Vec3f>(y,x);
    }
    
    // get mean
    for (int x = 0; x < bg.cols; x++)
        mean.at<Vec3f>(0,x) = mean.at<Vec3f>(0,x)/bg.rows;
    return mean;
}

 

Demo with Line Following Robot

The demo is done with a line-following robot: duct tape on the floor forms a loop, so the robot keeps running forever.

The camera is mounted (to be exact, taped) on the printer, looking down at the floor.

camera view

Demo video