Fig 1: Lane following rover

Introduction

When I hear the term “lane following”, I think of the automated driver-assistance systems in our cars that keep the car inside its lane. But what does it take to create such a system? To understand the complexity behind lane following algorithms, I decided to develop one for the Arduino Engineering Kit’s rover.

 

I learned that a successful lane following algorithm needs three things: a lane detection algorithm, lane following logic, and a controls algorithm that makes the rover carry out the lane following logic’s decisions. I grouped the lane detection and lane following logic into one piece, the vision algorithm, and deployed it onto a Raspberry Pi 3 board (Pi). The controls algorithm runs on the Arduino MKR1000 board, which receives the decisions from the Pi over Wi-Fi. In this project, the primary focus is on the vision phase.

Fig 2: Data flow diagram

 

For those who have not heard about the Arduino Engineering Kit, it comes with all the hardware and software needed to build and program three hands-on projects that teach you fundamental engineering concepts along the way. Here is a YouTube playlist that shows these projects in action.

Before we look at the details of the lane following rover, here is a sneak peek of it in action. The video also provides an overview of the vision algorithm. 

 

Physical setup

The following image (Fig 3) shows the rover’s setup. Even though the image shows the Pi on top of the rover, in the actual implementation the Pi sat on the floor next to a wall outlet, because the rover’s battery could not power both the Pi and the rover.

Fig 3: Rover setup

To run the vision algorithm on a Pi, the lane needed to be simple. I went with a white lane on a dark background (Fig 1) so that the lane detection algorithm could be both simple and fast.

 

Software used

MATLAB, Simulink, Stateflow, Computer Vision Toolbox, Simulink Support Package for Arduino Hardware, Simulink Support Package for Raspberry Pi

 

One big reason for using Simulink is that it simplifies communication between the Arduino and the Raspberry Pi, a key requirement for this project. Sending data between two Wi-Fi enabled devices is usually arduous because you have to worry about datatypes, sizes, and sampling rates. With Simulink, all you need to do is use the send and receive blocks from the Simulink Support Packages and set the correct IP address for each device.
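To appreciate what those blocks abstract away, here is a rough Python sketch of doing the same thing by hand over UDP. This is not how the Simulink Support Packages are implemented; the port number and the two-float payload (turn heading, drive distance) are illustrative assumptions.

```python
import socket
import struct

PORT = 25000  # hypothetical port; the Simulink blocks let you configure this

def send_command(sock, addr, heading, distance):
    """Pack two float32 values the way a hand-rolled sender might."""
    sock.sendto(struct.pack("<2f", heading, distance), addr)

def receive_command(sock):
    """Unpack the same fixed-size payload on the other device.

    Note that both sides must agree on datatype, byte order, and size --
    exactly the bookkeeping the Simulink blocks handle for you.
    """
    data, _ = sock.recvfrom(8)  # 2 x 4-byte float32
    return struct.unpack("<2f", data)

# Loopback demo standing in for the Pi -> Arduino link
rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
rx.bind(("127.0.0.1", PORT))
rx.settimeout(2.0)
tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
send_command(tx, ("127.0.0.1", PORT), 15.0, 5.0)
heading, distance = receive_command(rx)
print(heading, distance)  # 15.0 5.0
tx.close()
rx.close()
```

In the actual project, the equivalent of `send_command` runs on the Pi and the equivalent of `receive_command` runs on the Arduino, with the roles reversed for status data flowing back.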

 

Another perk of using Simulink is that it helps you program different hardware boards without having to manually write code. With the click of a button, you can generate C/C++ code to deploy Simulink models onto embedded hardware.

Fig 4: Send and Receive blocks to enable communication between Arduino and Raspberry Pi

Vision algorithm

The vision algorithm has both lane detection and lane following logic. The overall workflow follows these main steps:

  1. Lane detection:
    1. Acquire and process an image to extract the edges of the lane
    2. Based on the position and slopes of the lanes, find the lane center
  2. Lane following logic:
    1. Turn to align with the lane center and drive forward a small amount

Lane Detection

Fig 5: Lane Detection implementation in Simulink

In the Simulink model, the Edge Detection block corresponds to step 1a above and Calculate Lane Center corresponds to step 1b. 

Fig 6: Edge Detection Simulink implementation

The contents of the Edge Detection subsystem are shown in Fig 6. Here, the lane boundaries are extracted from the input image using color thresholding and a built-in edge detection block. These blocks make it easy to implement complex algorithms without manually writing the code. All we had to do was tune the color threshold value in the model and choose the built-in edge detection algorithm that gave the best results. With external mode, we were also able to tune some of these parameters while the algorithm ran on the hardware and immediately observe the response.
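The Simulink blocks do the heavy lifting here, but the underlying idea is simple enough to sketch in a few lines of Python. The function below is a crude stand-in for the thresholding and edge detection blocks, not the algorithm the model actually uses, and the threshold value of 200 is an illustrative assumption.

```python
import numpy as np

def detect_lane_edges(gray, threshold=200):
    """Binarize a grayscale image, then mark horizontal intensity
    transitions -- a crude stand-in for a built-in edge detector."""
    binary = (gray > threshold).astype(int)   # intensity threshold: lane vs floor
    edges = np.abs(np.diff(binary, axis=1))   # 1 wherever a lane boundary starts/ends
    return np.pad(edges, ((0, 0), (1, 0)))    # pad left to keep the original width

# Tiny synthetic frame: a bright "lane" stripe on a dark floor
frame = np.zeros((4, 8), dtype=np.uint8)
frame[:, 3:5] = 255
edges = detect_lane_edges(frame)
print(edges[0])  # edges at columns 3 (rising) and 5 (falling)
```

A white lane on a dark background (Fig 1) is what makes a single global threshold like this viable; with more realistic lighting, the threshold tuning via external mode described above becomes essential.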

Fig 7: Edge detection results using Simulink

Simulink’s SDL Video Display block let us visualize the output of the algorithm, i.e., what the rover was seeing (Fig 7).

The Calculate Lane Center block is a MATLAB Function block, which lets us run MATLAB code on the hardware from within Simulink. This function performs three main actions:

  1. Picks a slice from the edge detected image. A slice near the bottom ensures that only one of the lanes is visible.
  2. Finds the slope of the edge. If it is positive, then the rover knows it is a left boundary and the corrective measure is to turn right. Likewise, if it is negative the rover must turn left.
  3. Identifies the center of the image (red plus in Fig 8) and the center of the lane (the halfway point between the detected lane edge and the opposite side of the image; green plus in Fig 8). The distance between the two is the magnitude of the rover’s desired turn.
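The three steps above can be sketched in Python as follows. This is a simplified stand-in for the MATLAB Function block, not its actual contents; the slice position and the sign convention (positive offset means turn right) are illustrative assumptions.

```python
import numpy as np

def calculate_turn(edge_image, slice_rows=(-12, -8)):
    """Estimate turn direction and magnitude from an edge-detected frame."""
    h, w = edge_image.shape
    band = edge_image[slice_rows[0]:slice_rows[1], :]  # 1. slice near the bottom
    rows, cols = np.nonzero(band)
    if len(cols) < 2:
        return 0.0                                     # no lane edge visible
    slope = np.polyfit(rows, cols, 1)[0]               # 2. sign tells left vs right boundary
    edge_x = cols.mean()
    if slope > 0:   # left boundary -> lane extends toward the right image edge
        lane_center = (edge_x + (w - 1)) / 2           # 3. halfway to the far side
    else:           # right boundary -> lane extends toward the left image edge
        lane_center = edge_x / 2
    return lane_center - (w - 1) / 2                   # offset from the image center

# Demo: a left boundary sloping right should produce a right turn (positive offset)
img = np.zeros((20, 20), dtype=int)
for r, c in [(8, 4), (9, 5), (10, 6), (11, 7)]:
    img[r, c] = 1
print(calculate_turn(img))  # 2.75
```

Slicing near the bottom (step 1) matters because that region of the frame corresponds to the ground directly ahead of the rover, where at most one lane boundary is in view.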

Fig 8: Calculate Lane Center’s output visualized using Simulink

Lane Following Logic

A Stateflow chart organizes the rover’s actions into three states: Start, Turn, and Drive. Stateflow is particularly useful in projects like this one, where you want to take actions based on events.

We want the rover to start moving only after a connection is established between the Pi and the Arduino; this event triggers the transition out of the Start state in Fig 9. Similarly, we only want the rover to go from the Turn state to the Drive state (in Fig 9) after it reaches the necessary heading, another event.

Fig 9: Lane following logic implementation in Simulink

Start: Here, we call the receivedHandshake function, which ensures that a connection between the Arduino and the Pi is established before proceeding to the autonomous driving portion of the algorithm. We also added a 3-second delay before moving the rover to minimize any latency-related issues.

 

Turn: The Turn state calls the turnToLaneCenter function, which uses the lane detection algorithm from the previous section. A command is then sent to the Arduino over Wi-Fi to make the rover turn the desired amount in the specified direction. The rover stops turning when the headingReached variable becomes 1 (true); this is the state’s exit condition. The rover then enters the Drive state.

 

Drive: The sendDistance function is called here, which requests the Arduino to move the rover 5 inches forward. Precise convergence on the 5-inch mark is not required, as the rover will come back to the Turn state. Here, the exit condition is 5 seconds, because that is roughly how long the rover took to move 5 inches on some portions of the lane.
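The chart’s behavior can be approximated with a small Python state machine. This is a rough stand-in for the Stateflow chart, not generated from it: the injected callables are placeholders for receivedHandshake and the headingReached flag, and the separate Wait state is just one way to model the 3-second settle delay described above.

```python
class RoverStateMachine:
    """Start -> (3 s delay) -> Turn -> Drive -> Turn -> ... (mirrors the chart)."""

    def __init__(self, handshake_ok, heading_reached, drive_seconds=5.0):
        # Injected callables stand in for the chart's functions/conditions
        self.handshake_ok = handshake_ok        # stand-in for receivedHandshake
        self.heading_reached = heading_reached  # stand-in for the headingReached flag
        self.drive_seconds = drive_seconds      # ~5 s to cover 5 inches
        self.state = "Start"
        self.entered = 0.0

    def step(self, now):
        if self.state == "Start":
            if self.handshake_ok():             # exit condition: handshake received
                self.entered = now
                self.state = "Wait"             # models the 3-second settle delay
        elif self.state == "Wait":
            if now - self.entered >= 3.0:
                self.state = "Turn"
        elif self.state == "Turn":
            if self.heading_reached():          # exit condition: heading reached
                self.entered = now
                self.state = "Drive"
        elif self.state == "Drive":
            if now - self.entered >= self.drive_seconds:
                self.state = "Turn"             # loop back and re-align with the lane
        return self.state

# Demo: handshake and heading both succeed immediately
sm = RoverStateMachine(handshake_ok=lambda: True, heading_reached=lambda: True)
states = [sm.step(t) for t in (0, 1, 3, 3.1, 9)]
print(states)  # ['Wait', 'Wait', 'Turn', 'Drive', 'Turn']
```

The Turn/Drive loop at the end is the heart of the design: the rover never needs to drive precisely, because every pass through Turn re-aligns it with the lane center.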

 

Conclusion

By combining the vision algorithm running on the Raspberry Pi with simple control logic deployed on the Arduino, we were able to design a truly autonomous rover that roams around while staying inside a lane. We have seen how Simulink was used to:

  1. Tune the color thresholding parameter of the lane detection algorithm.
  2. Visualize what the robot was seeing over Wi-Fi and use that to modify both the edge detection algorithm and tune the lane following logic.
  3. Program a Raspberry Pi and an Arduino without manually writing C/C++ code.

After this rewarding experience, I strongly recommend implementing algorithms on these easily accessible hardware boards as a first step toward truly understanding the embedded hardware world. Feel free to share ideas in the comments for improvements I could try with this rover. If you are interested in reproducing this project, please reach out to our team here.