Most robots, especially autonomous ones, need to be clever enough to avoid bumping into obstacles. To do this they need sensors to investigate the environment around them, and they need to process that sensor data to identify obstacles in their vicinity. Finally, they need to generate motor commands that steer them clear of any obstacles around them. This is the simplest form of obstacle avoidance, and there are plenty of examples on the internet of little robots that do it quite nicely.
With this sort of rudimentary obstacle avoidance algorithm, a robot can keep clear of obstacles, but it will most likely wander around aimlessly while doing so. Sometimes robots need to be a little more intelligent. They need to reach a goal…perhaps they are chasing a target, heading for one of many way-points along a pre-determined path, making for a battery charging point or a position of interest, meeting up with a friend, or ducking under enemy radar cover while approaching a target! Whatever the case, these kinds of robots…robots that can navigate intelligently…need a somewhat more robust obstacle avoidance behaviour built into them.
What would a robot need to achieve this sort of task?
- It would need to know the position of its goal in space. “Where am I going?”
- It would need to know its own position in space and be able to update that position as it moves. “Where am I now?” There are several ways to do this: a robot could use shaft encoders on its wheels to estimate its position by dead reckoning, use a Particle Filter to fix its position, or, if outdoors, use a GPS receiver.
- It would need sensors to locate obstacles. “What is there around me?” There are plenty of options for sensing obstacles: ultrasonic pingers, Infra-Red Tx-Rx modules, LIDAR.
- It would need to choose a path that would keep it clear of obstacles and still eventually lead it to its goal. “How do I get to my goal safely?” This is where a good obstacle avoidance algorithm comes into play.
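To make the dead-reckoning option above concrete, here is a minimal sketch of wheel-encoder odometry for a differential-drive robot. The wheel radius, wheel base and tick count are made-up values, and `update_pose` is a hypothetical helper, not part of my simulator:

```python
import math

# Hypothetical differential-drive parameters
WHEEL_RADIUS = 0.03   # wheel radius in metres
WHEEL_BASE = 0.15     # distance between the two wheels, metres
TICKS_PER_REV = 360   # encoder ticks per wheel revolution

def update_pose(x, y, theta, left_ticks, right_ticks):
    """Dead-reckon a new pose (x, y, heading) from encoder tick counts."""
    dl = 2 * math.pi * WHEEL_RADIUS * left_ticks / TICKS_PER_REV
    dr = 2 * math.pi * WHEEL_RADIUS * right_ticks / TICKS_PER_REV
    d = (dl + dr) / 2.0                # distance travelled by the robot's centre
    dtheta = (dr - dl) / WHEEL_BASE    # change in heading
    # advance along the average heading over the step
    x += d * math.cos(theta + dtheta / 2.0)
    y += d * math.sin(theta + dtheta / 2.0)
    return x, y, theta + dtheta
```

Note that encoder odometry drifts over time, which is exactly why a Particle Filter or GPS fix is attractive for longer missions.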
So here is my autonomous robot obstacle avoidance simulator. I wrote it in Python, and most of the 400-odd lines of code are only there to build the simulation, display graphics and so on. The actual robot decision-making code is only about 20 lines long and could easily be implemented on a small microcontroller-class device.
The Robot and its Goal
- The robot is shown as a yellow circle marked “R”. The blue line represents its direct Line of Sight to its Goal (marked with a green circle and annotated “G”). The white line represents the robot’s current recommended heading; in this screenshot the blue and white lines coincide. The tiny green circles behind the robot are its position trail.
- This robot is fitted with 18 distance measuring sensors shown numbered from 8 to -8. Each has a Field of View of 10 degrees and a maximum pick up range of 200 pixels. For a practical implementation on a real robotic platform, one could just as well use 3 sensors and sweep them through 60 degrees using a servo motor.
Obstacles are represented by red circles surrounded by white “safety” circles. When an obstacle is within the range of one or more sensors, its “safety” circle is shown in green. In the screenshot above, sensors with indexes -4, -5, -6, -7 and -8 all have an obstacle in view. All the other obstacles are outside sensor range and the robot is oblivious to their presence.
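For anyone wanting to reproduce the sensor model, here is a rough sketch of how such a ring of distance readings could be computed. This is not the simulator's actual code: obstacles are simplified to point centres, and I use sensor indexes -8 to 8 with a 10-degree cone each, as in the screenshot:

```python
import math

MAX_RANGE = 200          # maximum pick-up range in pixels, as in the simulator
FOV = math.radians(10)   # each sensor's field of view

def read_sensors(rx, ry, heading, obstacles):
    """Return a distance reading per sensor index (-8 .. 8).
    `obstacles` is a list of (ox, oy) centres; a simplified point-obstacle model."""
    readings = {}
    for idx in range(-8, 9):
        look = heading + idx * FOV   # this sensor's look direction
        best = MAX_RANGE             # default when nothing is in view
        for ox, oy in obstacles:
            dist = math.hypot(ox - rx, oy - ry)
            bearing = math.atan2(oy - ry, ox - rx)
            # wrap the bearing error into [-pi, pi) and test it against the cone
            off = (bearing - look + math.pi) % (2 * math.pi) - math.pi
            if abs(off) <= FOV / 2 and dist < best:
                best = dist
        readings[idx] = best
    return readings
```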
Objective : Reach the Goal Keeping Clear of the Obstacles
This algorithm uses the method of weighted sums to determine a recommended heading for the robot. Much of the program logic is based on this excellent paper by Ioan Susnea, Viorel Minzu and Grigore Vasiliu, and I hereby acknowledge their work and place on record my thanks to them for sharing it.
So how does this algorithm work? Simply put, each sensor is given a weight according to its “look” direction. Sensors closer to the middle have lower weights and those that look sideways have the maximum weight. Sensors on the port side have negative weights and those that look to starboard have positive weights. The output from each sensor is the distance of the nearest obstacle along its Line of Sight; if no obstacle is present, the sensor outputs its maximum value. We then sum and normalise the weighted outputs of all the sensors to produce a recommended turning direction. As you can see, whenever obstacles are present on one side of the robot, the distance readings from the sensors on that side shrink. The sensors on the “free” side therefore dominate the weighted sum, and the recommended output reflects this bias. Look at the images below :-
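Here is my own Python sketch of that weighted-sum step. It is not the simulator's actual 20 lines: the weights are simply taken as the sensor index itself (negative to port, positive to starboard, larger magnitude towards the sides), which is one plausible choice among many:

```python
MAX_RANGE = 200  # pixels; a sensor reports this when no obstacle is in view

def recommended_turn(readings):
    """Weighted-sum steering recommendation.
    `readings` maps sensor index (-8 .. 8) to the distance of the nearest
    obstacle along that sensor's line of sight.  Returns a value in roughly
    [-1, 1]: negative suggests a turn to port, positive a turn to starboard."""
    num = sum(idx * dist for idx, dist in readings.items())
    den = sum(abs(idx) * MAX_RANGE for idx in readings)
    return num / den if den else 0.0
```

With every sensor reading maximum range, the symmetric weights cancel and the recommendation is zero (straight ahead). Squeezed readings on the port side reduce the negative contributions, pushing the sum positive, i.e. a turn to starboard, away from the obstacle.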
Does it work perfectly?
In most situations, it works just fine and the robot is able to reach its goal and keep clear of obstacles along its path. The only problem seems to be when it is approaching obstacles at right angles, like the artificial wall below :-
In this situation, both port and starboard sensors provide equally weighted outputs and the robot just crashes through the wall heading towards the goal :-
One possible solution is to force the robot to pursue an oblique path to the goal. This seems to work at first, but as soon as the approach to the goal becomes perpendicular to the “wall”, the robot smashes through it again. See below :-
So clearly there is still much work to be done. I have made several modifications to the original concept to try to improve the algorithm, some of which I list below…
- I modified the algorithm so that the sensor array is always centered on the Line of Sight to the goal and not centered on the robot’s heading. This makes sense since the algorithm shouldn’t really care which direction the robot is facing, rather it should be always checking for a clear path to the goal.
- At each step the robot checks if there is a clear path to the target. Obviously it can only do this with the obstacles currently in its view, not all the obstacles in the simulation. If there is a clear path, the robot abandons any recommendations of the obstacle avoidance algorithm and heads straight for the goal until fresh obstacles are detected and deviations from track are needed.
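That clear-path check can be sketched as a segment-versus-circle test. This is my own reconstruction under the simulator's circle-plus-safety-margin obstacle model; `path_is_clear` and `safety_radius` are names I have invented for illustration:

```python
import math

def path_is_clear(rx, ry, gx, gy, obstacles, safety_radius):
    """Return True if the straight segment robot -> goal stays outside the
    safety circle of every currently visible obstacle centre in `obstacles`."""
    for ox, oy in obstacles:
        # closest point on the segment (rx,ry)-(gx,gy) to the obstacle centre
        vx, vy = gx - rx, gy - ry
        wx, wy = ox - rx, oy - ry
        seg_len2 = vx * vx + vy * vy
        t = max(0.0, min(1.0, (wx * vx + wy * vy) / seg_len2)) if seg_len2 else 0.0
        cx, cy = rx + t * vx, ry + t * vy
        if math.hypot(ox - cx, oy - cy) <= safety_radius:
            return False  # this obstacle's safety circle blocks the path
    return True
```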
I have uploaded the simulation to the CodeSkulptor website and you can experiment with it by clicking this link. Press the “play” button on the CodeSkulptor toolbar to start the simulation. The code is a little messy and still under development, but it should provide a good starting point for a simple obstacle avoidance algorithm.
Use the “Set Robot” button to set the position of the robot, the “Set Goal” button to change the position of the goal, and the “Add Obs” button to add more obstacles as needed. Tweaking the constants on lines 8 to 17 will alter the behaviour of the robot. Use the “Step” button to step through the simulation. The “Set Start” button is irrelevant at this stage of development and I will probably remove it some time soon.
Hopefully the video below provides a good demo of the algorithm at work…