Most robots, especially autonomous ones, need to be clever enough to avoid bumping into obstacles. To do this they need sensors to observe the environment around them, and they need to process that sensor data to identify obstacles in their vicinity. Finally, they need to generate motor commands that steer them clear of anything in their way. This is the simplest form of obstacle avoidance, and there are tons of examples on the internet, with plenty of little robots that can do this quite nicely.
With this sort of rudimentary obstacle avoidance algorithm, a robot could keep clear of obstacles, but it would most likely wander around aimlessly while doing so. Sometimes robots need to be a little more intelligent. They need to reach a goal…perhaps they are chasing a target, perhaps they need to reach one of many way-points along a pre-determined path, perhaps they are headed toward a battery charging point or a position of interest, maybe they are meeting up with a friend, perhaps they need to duck under enemy radar cover while approaching a target! Whatever the case, these kinds of robots…robots that can navigate intelligently…need to have a slightly more robust obstacle avoidance behaviour built into them.
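One common way to get this kind of goal-directed avoidance is a potential-field steering rule: the goal "attracts" the robot while nearby obstacles "repel" it, and the two forces are summed into a heading. This is not the method from any particular robot described here, just a minimal Python sketch of the idea; the gains `k_att`, `k_rep` and the `influence` radius are made-up tuning values.

```python
import math

def steer(pos, goal, obstacles, k_att=1.0, k_rep=0.5, influence=1.0):
    """Toy potential-field steering: attract toward the goal, repel from
    each obstacle inside its influence radius. Returns a unit heading."""
    gx, gy = goal[0] - pos[0], goal[1] - pos[1]
    dist_goal = math.hypot(gx, gy) or 1e-9
    # Attractive component: unit vector pointing at the goal
    fx, fy = k_att * gx / dist_goal, k_att * gy / dist_goal
    for ox, oy in obstacles:
        dx, dy = pos[0] - ox, pos[1] - oy
        d = math.hypot(dx, dy) or 1e-9
        if d < influence:
            # Repulsive component grows as the obstacle gets closer
            push = k_rep * (1.0 / d - 1.0 / influence)
            fx += push * dx / d
            fy += push * dy / d
    norm = math.hypot(fx, fy) or 1e-9
    return fx / norm, fy / norm

# With no obstacles the robot heads straight at the goal
print(steer((0.0, 0.0), (10.0, 0.0), []))          # → (1.0, 0.0)
# An obstacle just above the path bends the heading downward, away from it
print(steer((0.0, 0.0), (10.0, 0.0), [(0.5, 0.3)]))
```

Pure potential fields can get stuck in local minima (the forces can cancel out), which is one reason real navigation stacks layer a planner on top, but as a reactive "avoid while seeking" behaviour it captures the idea.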
Clever Robots can avoid obstacles as they head towards a goal
It’s now fairly easy for me to build a robot that can do stuff. Drive around, balance on two wheels, pick up things, drive around some more, transmit video, obey orders, drive some more. While learning all this has been great fun, robots that just DO things are not much of a challenge anymore. So I have spent the last few months learning to make my robots more clever, building robots that can observe their environment, make intelligent decisions and re-configure themselves to interact optimally with external stimuli.
For me, vision processing was an easy first choice in trying to build intelligent robots. OpenCV is an incredibly powerful library that anyone can download for free. It makes implementing computer-based vision extremely easy, and once you get more familiar with image processing, you start to see that most operations are just elementary arithmetic operations on matrices. Operations like background subtraction, edge detection, blob detection, Kalman filtering and the extremely useful Hungarian Algorithm all boil down to simple matrix operations. OpenCV is a little tricky to learn, but once you get the hang of it, it’s supremely powerful when it comes to doing interesting things with visual data. I owe thanks for much of what I know about OpenCV to Kyle Hounslow. His video tutorials are a super easy way to get started with OpenCV.
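To make the "just matrix arithmetic" point concrete, here is background subtraction done by hand on tiny grayscale frames represented as plain nested lists: a per-pixel absolute difference against a reference frame, followed by a threshold. OpenCV's `cv2.absdiff` and `cv2.threshold` do the same thing, only vectorised over real images; the pixel values below are made up for illustration.

```python
def absdiff(frame, background):
    """Per-pixel absolute difference between two grayscale frames."""
    return [[abs(f - b) for f, b in zip(fr, br)]
            for fr, br in zip(frame, background)]

def threshold(img, t):
    """Binarise: pixels that changed by more than t become foreground (255)."""
    return [[255 if p > t else 0 for p in row] for row in img]

background = [[10, 10, 10],
              [10, 10, 10]]
frame      = [[10, 200, 12],
              [10, 10, 180]]

# Anything that differs from the background by more than 30 is "movement"
mask = threshold(absdiff(frame, background), 30)
print(mask)  # → [[0, 255, 0], [0, 0, 255]]
```

Edge detection and blob detection are the same story at heart: convolutions and connected-component sweeps, i.e. more arithmetic over the same matrices.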
A couple of months ago I used the OpenCV library to build a webcam-based, vision-capable robotic arm. I used Qt and OpenCV to implement the video capture and frame processing. The idea is to get the computer to track a green ball and then send the correct spatial coordinates to the robot arm, which then follows the ball in space. I used my 6 DOF robot arm and my ArduinoTalker C++ class to perform the motion following in the “real” world. The movements are shaky because I was too lazy to implement any smoothing algorithm, and the very obvious parallax error is because the camera is fitted to the laptop screen and not onto the arm itself.
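The kind of smoothing that would tame those shaky movements can be as simple as an exponential moving average over the tracked ball coordinates before they are sent to the arm. This is not the code from the project (which was C++/Qt), just a sketch of the filter; the `alpha` value and the sample coordinates are made up.

```python
class ExpSmoother:
    """Exponential moving average over (x, y) ball coordinates.
    alpha near 1 tracks fast but stays jittery; near 0 is smooth but laggy."""
    def __init__(self, alpha=0.3):
        self.alpha = alpha
        self.state = None

    def update(self, x, y):
        if self.state is None:
            self.state = (x, y)          # first detection seeds the filter
        else:
            sx, sy = self.state
            self.state = (sx + self.alpha * (x - sx),
                          sy + self.alpha * (y - sy))
        return self.state

s = ExpSmoother(alpha=0.5)
for pt in [(100, 100), (110, 98), (180, 140)]:  # jittery detections
    print(s.update(*pt))
# → (100, 100), (105.0, 99.0), (142.5, 119.5)
```

A Kalman filter (which OpenCV also provides) would do better still, since it models velocity as well as position, but an EMA is often enough to stop a servo arm from twitching.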
An XY plotter is a machine that can control a plotting instrument (such as a pen, or a cutting tool like a blade or a laser) over two axes in an accurate, precise manner. Computer Numerical Control (CNC) machines are very accurate XY plotters that can be used for anything from decorating cakes to cutting steel plates into very precise shapes and sizes.
I wanted to make a drawing robot that would be able to draw the contours of a human face, so I decided to experiment with some very basic stepper motors and a cheap toy plotter that I bought on the Internet. Unfortunately, the plotter itself is so poorly manufactured that it is useless as a drawing tool, but the whole project gave me much insight into the steps needed to design and build a proper computer controlled plotting machine.
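One of those steps is the coordinate arithmetic: a stepper-driven plotter only understands step counts, so drawing coordinates in millimetres have to be converted using a steps-per-mm figure derived from the motor and the drive mechanics. The numbers below (a 200 step/rev motor, 8 mm of travel per revolution) are typical values I've assumed for illustration, not measurements from the toy plotter.

```python
STEPS_PER_REV = 200        # typical 1.8-degree stepper (assumed)
MM_PER_REV    = 8          # e.g. an 8 mm leadscrew pitch (assumed)
STEPS_PER_MM  = STEPS_PER_REV / MM_PER_REV   # 25 steps per mm

def mm_to_steps(x_mm, y_mm):
    """Convert a target position in millimetres to absolute step counts."""
    return round(x_mm * STEPS_PER_MM), round(y_mm * STEPS_PER_MM)

def step_deltas(path_mm):
    """Relative (dx, dy) step moves to trace a list of (x, y) waypoints."""
    moves, last = [], (0, 0)
    for pt in path_mm:
        tgt = mm_to_steps(*pt)
        moves.append((tgt[0] - last[0], tgt[1] - last[1]))
        last = tgt
    return moves

# Trace a 10 mm right, 10 mm up, then diagonal-home path
print(step_deltas([(10, 0), (10, 10), (0, 0)]))
# → [(250, 0), (0, 250), (-250, -250)]
```

A real controller would then interleave the X and Y pulses (Bresenham-style) so diagonal moves come out straight, and ramp the step rate so the motors don't stall.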
The robotic arm is now fully operational. It has 6 degrees of freedom and can be controlled remotely from any laptop running the interface software. The robotic hand is capable of simple tasks such as lifting and carrying small objects. I have attached a wireless AV camera to the robotic hand. A human operator can now “see” what the robot is doing and issue commands accordingly over the wireless data link.
Power for the 4 high torque DC motors comes from a single 1.3 Ah 12 V battery. The second battery (the taller one) is a 4.5 Ah 6 V battery that will power the micro-controller unit (an Arduino Mega) and the six servos that control the robotic arm. Once basic testing operations are completed, I will add two more servos for a pan-tilt sensor mechanism (wireless camera / sonar ranger / IR sensor etc.) that will also draw power from this 6 V battery.
After successfully completing this superb online course from Stanford University on Machine Learning, I am now quite confident designing and programming neural networks. Playing around with the incredibly powerful OpenCV library has also got me experimenting with computer vision. If I were to put these two powerful tools together, the most obvious outcome would be intelligent, vision-capable robots.
But before I get into any of the complex programming needed to create these robots, I first need to build myself a proper ERP, an Experimental Robotic Platform. So this weekend, I spent most of my time working on an ERP chassis…
As robots become smarter, faster and more capable, they are being developed to perform increasingly complex tasks. In order to perform these tasks properly, robots are becoming more and more dependent on accurate navigation through the environment in which they operate. Somewhere in the future, if intelligent robots were to rise up and demand fundamental rights, I think one of the first things they would ask for is the answer to the question, “Where am I?”.
This project brings together the DIY Haptic Control Glove and the Robotic Hand that I made earlier. The cost of this entire project was less than 25 US$. For details on how they were built and how they work, just follow the link for each.
This video demonstrates the complete project:
1. Calibration of the glove
2. Control of the fingers
3. Touching fingertips of little and index fingers to demonstrate
4. Performing a simple task
5. Detail of servo movements
To test a robot hand like the one I built earlier, I needed a haptic control glove that would encode the flexing of my fingers into electrical signals. These signals would be interpreted by a microcontroller (like the ATmega328 on the Arduino platform) and cause the servo motors on the robot hand to mimic my finger movements inside the glove. Electronic puppetry.
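The heart of that encoding is a linear map from a calibrated flex-sensor ADC reading to a servo angle, exactly what Arduino's `map()` function does on the microcontroller. Here is the same logic as a Python sketch; the calibration numbers (320 = finger straight, 720 = fully bent) are made-up placeholders for values a real calibration pass would capture.

```python
def flex_to_servo(reading, flex_min, flex_max, servo_min=0, servo_max=180):
    """Linearly map a flex-sensor ADC reading to a servo angle.
    flex_min/flex_max come from calibration (finger straight / fully bent)."""
    reading = max(flex_min, min(flex_max, reading))  # clamp out-of-range noise
    span = flex_max - flex_min
    return round(servo_min + (reading - flex_min) * (servo_max - servo_min) / span)

# assumed calibration values: 320 = finger straight, 720 = fully bent
print(flex_to_servo(320, 320, 720))  # → 0   (hand open)
print(flex_to_servo(520, 320, 720))  # → 90  (half flexed)
print(flex_to_servo(800, 320, 720))  # → 180 (clamped to fully bent)
```

The clamp matters in practice: flex sensors drift and spike, and without it a noisy reading would command the servo past its travel.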
The word robot comes from the Czech word ‘robota’, meaning forced labour. In Russian, ‘rabota’ simply means work, employment or operation. Funny, I’ve spent nearly two years in Russia and have probably spoken this word many, many times, never really realising that it shares a root with the word robot!
Anyway, this post is a photo-essay/tutorial on how I built my new robotic hand.