Most robots, especially autonomous ones, need to be clever enough to avoid bumping into obstacles. To do this they need sensors to investigate the environment around them, and they need to process that sensor data to identify obstacles in their vicinity. Finally, they need to generate motor commands that steer them clear of anything they detect. This is the simplest form of obstacle avoidance, and there are tons of examples of it on the internet, with plenty of little robots that do it quite nicely.
With this sort of rudimentary obstacle avoidance algorithm, a robot can keep clear of obstacles, but it will most likely wander around aimlessly while doing so. Sometimes robots need to be a little more intelligent. They need to reach a goal: perhaps they are chasing a target, perhaps they need to reach one of many waypoints along a pre-determined path, perhaps they are headed toward a battery charging point or a position of interest, maybe they are meeting up with a friend, perhaps they need to duck under enemy radar cover while approaching a target! Whatever the case, these kinds of robots, robots that can navigate intelligently, need a slightly more robust obstacle avoidance behaviour built into them.
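One classic way to combine "head for the goal" with "stay away from obstacles" is a potential field: an attractive pull toward the goal summed with a repulsive push away from anything nearby. Here is a minimal sketch of that idea; the structure, function names and gain values are my own illustration, not code from any particular robot.

```cpp
#include <cassert>
#include <cmath>
#include <vector>

struct Vec2 { double x, y; };

// Compute a steering vector: unit attraction toward the goal, plus a
// repulsion term for every obstacle closer than repulseRadius.
Vec2 steer(const Vec2& pos, const Vec2& goal,
           const std::vector<Vec2>& obstacles,
           double repulseRadius = 1.0, double repulseGain = 0.5) {
    double dx = goal.x - pos.x, dy = goal.y - pos.y;
    double d = std::sqrt(dx * dx + dy * dy);
    if (d < 1e-9) return {0.0, 0.0};          // already at the goal
    Vec2 cmd { dx / d, dy / d };              // attractive component
    for (const Vec2& ob : obstacles) {
        double ox = pos.x - ob.x, oy = pos.y - ob.y;
        double od = std::sqrt(ox * ox + oy * oy);
        if (od < repulseRadius && od > 1e-9) {
            // Repulsion grows as the obstacle gets closer.
            double w = repulseGain * (repulseRadius - od) / od;
            cmd.x += w * ox;
            cmd.y += w * oy;
        }
    }
    return cmd;   // on a real robot this would be mapped to wheel speeds
}
```

With no obstacles in range the command points straight at the goal; an obstacle inside the repulsion radius bends or shortens it. The well-known weakness of this approach is local minima (the robot can stall where attraction and repulsion cancel), which is exactly why more robust planners exist.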
Clever Robots can avoid obstacles as they head towards a goal
It’s now fairly easy for me to build a robot that can do stuff: drive around, balance on two wheels, pick up things, drive around some more, transmit video, obey orders, drive some more. While learning all this has been great fun, robots that just DO things are not much of a challenge anymore. So I have spent the last few months learning to make my robots more clever, building robots that can observe their environment, make intelligent decisions and re-configure themselves to interact optimally with external stimuli.
For me, vision processing was an easy first choice in trying to build intelligent robots. OpenCV is an incredibly powerful library that anyone can download for free. It makes implementing computer-based vision extremely easy, and once you get more familiar with image processing, you start to see that most operations are just elementary arithmetic on matrices. Background subtraction, edge detection, blob detection, Kalman filtering and the extremely useful Hungarian algorithm are all, at their core, simple matrix operations. OpenCV is a little tricky to learn, but once you get the hang of it, it’s supremely powerful when it comes to doing interesting things with visual data. I owe much of what I know about OpenCV to Kyle Hounslow; his video tutorials are a super easy way to get started.
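To see how literally "just matrix arithmetic" this is, here is background subtraction written out by hand on plain arrays: the foreground mask is nothing more than the elementwise absolute difference between a frame and a stored background, thresholded. (OpenCV's `cv::absdiff` plus `cv::threshold` do the same thing, only vectorised; this toy version treats an image as a flat array of pixel intensities.)

```cpp
#include <cassert>
#include <cstdlib>
#include <vector>

// For each pixel: mark it foreground (255) if it differs from the
// background by more than `threshold`, otherwise background (0).
std::vector<unsigned char> foregroundMask(const std::vector<int>& frame,
                                          const std::vector<int>& background,
                                          int threshold) {
    std::vector<unsigned char> mask(frame.size());
    for (std::size_t i = 0; i < frame.size(); ++i)
        mask[i] = std::abs(frame[i] - background[i]) > threshold ? 255 : 0;
    return mask;
}
```

A pixel that jumps from 50 to 200 between background and frame lights up in the mask; a pixel that flickers by a couple of counts stays dark.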
A couple of months ago I used the OpenCV library to build a webcam-based, vision-capable robotic arm. I used Qt and OpenCV to implement the video capture and frame processing. The idea is to get the computer to track the green ball and then send the correct spatial coordinates to the robot arm, which then follows the ball in space. I used my 6 DOF robot arm and ArduinoTalker C++ class to perform the motion following in the “real” world. The movements are shaky because I was too lazy to implement any smoothing algorithm, and the very obvious parallax error is because the camera is fitted to the laptop screen and not onto the arm itself.
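For the record, the smoothing I was too lazy to add is only a few lines. A simple exponential moving average would damp most of that shakiness by blending each new detection with the previous estimate. This is a sketch of what I would bolt on, not code the arm actually ran:

```cpp
#include <cassert>
#include <cmath>

struct Point { double x, y; };

// Exponential moving average over tracked coordinates.
class Smoother {
public:
    explicit Smoother(double alpha) : alpha_(alpha) {}
    Point update(Point raw) {
        if (first_) {                 // first detection: no history to blend
            est_ = raw;
            first_ = false;
        } else {
            est_.x = alpha_ * raw.x + (1.0 - alpha_) * est_.x;
            est_.y = alpha_ * raw.y + (1.0 - alpha_) * est_.y;
        }
        return est_;
    }
private:
    double alpha_;        // 0..1: lower = smoother output, but more lag
    Point est_ {0, 0};
    bool first_ = true;
};
```

The trade-off is lag: a small alpha hides detection jitter but makes the arm trail the ball, which is why a proper tracker would use something like the Kalman filter mentioned above instead.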
The robotic arm is now fully operational. It has 6 degrees of freedom and can be controlled remotely from any laptop running the interface software. The robotic hand is capable of simple tasks such as lifting and carrying small objects. I have attached a wireless AV camera to the robotic hand, so a human operator can now “see” what the robot is doing and issue commands accordingly over the wireless data link.
Power for the 4 high-torque DC motors comes from a single 1.3 Ah 12 V battery. The second battery (the taller one) is a 4.5 Ah 6 V battery that will power the microcontroller unit (an Arduino Mega) and the six servos that control the robotic arm. Once basic testing operations are completed, I will add two more servos for a pan-tilt sensor mechanism (wireless camera/sonar ranger/IR sensor etc.) that will also draw power from this 6 V battery.
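The back-of-the-envelope arithmetic for how long those packs last is simply capacity divided by load. The current figures below are illustrative guesses of mine, not measured draws from this robot:

```cpp
#include <cassert>
#include <cmath>

// Runtime in hours = battery capacity (Ah) / average load (A).
// Ignores voltage sag and capacity derating at high discharge rates.
double runtimeHours(double capacityAh, double loadAmps) {
    return capacityAh / loadAmps;
}
```

If, say, the Mega plus six lightly loaded servos averaged around 1.5 A, the 4.5 Ah pack would give roughly three hours; servos near stall draw far more, so real runtime would be well short of that.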
After successfully completing this superb online course from Stanford University on Machine Learning, I am now quite confident designing and programming neural networks. Playing around with the incredibly powerful OpenCV library has also got me experimenting with computer vision. If I were to put these two powerful tools together, the most obvious outcome would be intelligent, vision-capable robots.
But before I get into any of the complex programming needed to create these robots, I first need to build myself a proper ERP: an Experimental Robotic Platform. So this weekend, I spent most of my time working on an ERP chassis…
As robots become smarter, faster and more capable, they are being developed to perform increasingly complex tasks. In order to perform these tasks properly, robots are becoming more and more dependent on accurate navigation through the environment in which they operate. Somewhere in the future, if intelligent robots were to rise up and demand fundamental rights, I think one of the first things they would ask for is the answer to the question, “Where am I?”.
To test the working of a robot hand like the one I built earlier, I needed a haptic control glove that would encode the flexing of my fingers into electrical signals. These signals would be interpreted by a microcontroller (like the ATmega328 on the Arduino platform) and cause the servo motors on the robot hand to mimic my finger movements inside the glove. Electronic puppetry.
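The core of that glove-to-servo translation is a linear rescaling: each flex sensor's 10-bit ADC reading is mapped onto a servo angle, which is exactly the arithmetic Arduino's `map()` function performs. The calibration numbers below are made up for illustration; a real glove needs each sensor calibrated against the wearer's fingers.

```cpp
#include <cassert>

// Same arithmetic as Arduino's map(): linear rescale from one range to another.
long mapRange(long x, long inMin, long inMax, long outMin, long outMax) {
    return (x - inMin) * (outMax - outMin) / (inMax - inMin) + outMin;
}

// Assumed calibration: finger straight reads ~300 ADC counts, fully bent ~700.
int fingerToServo(int adcReading) {
    if (adcReading < 300) adcReading = 300;   // clamp to the calibrated range
    if (adcReading > 700) adcReading = 700;
    return static_cast<int>(mapRange(adcReading, 300, 700, 0, 180));
}
```

Clamping matters in practice: sensor noise outside the calibrated range would otherwise command the servo past its end stops.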
The word robot comes from the Czech word ‘robota’, meaning forced labour; it entered the language through Karel Čapek’s play R.U.R. In Russian, the closely related word ‘rabota’ means simply work, employment or operation. Funny, I’ve spent nearly two years in Russia and have probably spoken this word many, many times, never really realising that it is also the root word for robot!
Anyway, this post is a photo-essay/tutorial on how I built my new robotic hand.
After building my 6 DOF robot arm, I needed a way to control it from the PC through a Qt graphical interface. Having already developed my Qt-Arduino interfacing class, I decided to up the complexity of the project by using an Arduino Pro Mini instead of an Arduino UNO.
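For anyone curious what travels over a PC-to-Arduino serial link in a setup like this, here is one way to frame a servo command so the Arduino can reject corrupted bytes before moving anything. To be clear, this is a toy framing of my own for illustration, not the actual protocol my ArduinoTalker class uses: a start byte, the servo index, the target angle, and a simple additive checksum.

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// Frame: [0xFF start][servo index][angle 0-180][checksum = servo + angle].
std::vector<uint8_t> encodeCommand(uint8_t servo, uint8_t angle) {
    const uint8_t START = 0xFF;
    uint8_t checksum = static_cast<uint8_t>(servo + angle);
    return { START, servo, angle, checksum };
}

// Returns false (and moves nothing) on a malformed or corrupted packet.
bool decodeCommand(const std::vector<uint8_t>& pkt,
                   uint8_t& servo, uint8_t& angle) {
    if (pkt.size() != 4 || pkt[0] != 0xFF) return false;
    if (static_cast<uint8_t>(pkt[1] + pkt[2]) != pkt[3]) return false;
    servo = pkt[1];
    angle = pkt[2];
    return true;
}
```

A fixed start byte plus checksum is about the minimum needed on a noisy hobby serial link: without it, a single dropped byte can shift the stream and send every later angle to the wrong servo.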
For the last two weeks I’ve been trying to get my head around quaternions. More specifically, I need to learn how to manipulate them in a 3D environment. It’s at times like this that I wish I had paid more attention to maths in school. Sigh!
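Since I keep getting lost in the algebra, here is the one operation I actually need written out plainly: rotating a vector v by a unit quaternion q, computed as q · v · q*, where q* is the conjugate and the products are Hamilton products. A small self-contained sketch for reference:

```cpp
#include <cassert>
#include <cmath>

struct Quat { double w, x, y, z; };

// Hamilton product of two quaternions.
Quat mul(const Quat& a, const Quat& b) {
    return {
        a.w * b.w - a.x * b.x - a.y * b.y - a.z * b.z,
        a.w * b.x + a.x * b.w + a.y * b.z - a.z * b.y,
        a.w * b.y - a.x * b.z + a.y * b.w + a.z * b.x,
        a.w * b.z + a.x * b.y - a.y * b.x + a.z * b.w
    };
}

// Rotate vector (vx, vy, vz) in place by unit quaternion q: q * v * conj(q).
void rotate(const Quat& q, double& vx, double& vy, double& vz) {
    Quat v { 0, vx, vy, vz };                  // vector as a pure quaternion
    Quat conj { q.w, -q.x, -q.y, -q.z };
    Quat r = mul(mul(q, v), conj);
    vx = r.x; vy = r.y; vz = r.z;
}
```

A quaternion for a rotation of angle θ about a unit axis (ax, ay, az) is (cos θ/2, ax sin θ/2, ay sin θ/2, az sin θ/2), so a 90° turn about z takes (1, 0, 0) to (0, 1, 0). No gimbal lock anywhere in sight.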
Anyway, on Sunday I decided to take a break from vector rotations, gimbal locks, matrices and the rest, and got to work on an old project that I had put on hold. Arts and crafts is not really something that I do very often, or am good at, but on this occasion it turned out quite nicely. There’s nothing really new here: it’s all very basic stuff, no fancy algorithms, just a little bit of work with the hands, some glue and a teensy bit of imagination.
The results are quite entertaining, far from perfect, but a nice little project for a Sunday afternoon. Take a look…