The Gift of Sight…

It’s now fairly easy for me to build a robot that can do stuff. Drive around, balance on two wheels, pick up things, drive around some more, transmit video, obey orders, drive some more. While learning all this has been great fun, robots that just DO things are not much of a challenge anymore. So I have spent the last few months learning to make my robots more clever, building robots that can observe their environment, make intelligent decisions and re-configure themselves to interact optimally with external stimuli.

For me, vision processing was an easy first choice in trying to build intelligent robotics. OpenCV is an incredibly powerful library that one can download for free. It makes implementing computer-based vision extremely easy, and once you get more familiar with image processing, you start to see that most operations are just elementary arithmetic operations on matrices. Operations like background subtraction, edge detection, blob detection, Kalman filtering and the extremely useful Hungarian Algorithm are all just simple matrix operations. OpenCV is a little tricky to learn, but once you get the hang of it, it’s supremely powerful when it comes to doing interesting things with visual data. I owe thanks for much of what I know about OpenCV to Kyle Hounslow. His video tutorials are a super easy way to get started with OpenCV.

A couple of months ago I used the OpenCV library to build a webcam-based, vision-capable robotic arm. I used Qt and OpenCV to implement the video capture and frame processing. The idea here is to get the computer to track the green ball and then send the correct spatial coordinates to the robot arm, which then follows the ball in space. I used my 6 DOF robot arm and ArduinoTalker C++ class to perform the motion following in the “real” world. The movements are shaky because I was too lazy to implement any smoothing algorithm, and the very obvious parallax error is because the camera is fitted to the laptop screen and not onto the arm itself.

OpenCV can also track objects based on detecting motion. This is great when you don’t really know what color your tracked object is likely to be. The technique applied here is background subtraction – color values from corresponding pixels in two frames are literally subtracted from each other. If a pixel in the resulting matrix is NOT EQUAL TO ZERO, it means that something has moved in that location. Easy!

Below is a screencast from a motion tracking program I wrote that uses the feed from a traffic camera in Germany to track moving vehicles using the background subtraction technique. I have slowed the frame rate so that it’s a little easier to appreciate what is happening in the recording.

So here is some of the basic OpenCV code needed to accomplish something like this….

1. Create an OpenCV capture object and read a frame from the webcam…


cv::VideoCapture capWebcam(0);   // open the default webcam
cv::Mat matOriginal;

capWebcam.read(matOriginal);     // grab one frame

2. Convert from BGR color (OpenCV’s default channel order) to HSV values..


cv::cvtColor(matOriginal,matBuffer,cv::COLOR_BGR2HSV_FULL);

3. Filter according to user-determined limits. This is where we isolate the pixels that correspond to the green ball….


cv::inRange(matBuffer,cv::Scalar(H_MIN,S_MIN,V_MIN),cv::Scalar(H_MAX,S_MAX,V_MAX),matHSV);

4. Erode the result slightly to get rid of noise, and then dilate the result to produce a group of pixels that is easy to track.


cv::Mat erodeElement = cv::getStructuringElement(cv::MORPH_RECT, cv::Size(8,8));
cv::Mat dilateElement = cv::getStructuringElement(cv::MORPH_RECT, cv::Size(8,8));

cv::erode(matHSV,matHSV,erodeElement);
cv::erode(matHSV,matHSV,erodeElement);
cv::erode(matHSV,matHSV,erodeElement);

cv::dilate(matHSV,matHSV,dilateElement);
cv::dilate(matHSV,matHSV,dilateElement);

5. Find the contours of the object being tracked.


cv::Mat temp = matHSV.clone();   // findContours modifies its input
cv::findContours(temp, contours, hierarchy, cv::RETR_CCOMP, cv::CHAIN_APPROX_SIMPLE);

6. Find the geometric center of the area being tracked…


cv::Moments moment = cv::moments(contours[index]);
double area = moment.m00;     // zeroth moment = blob area

x = moment.m10 / area;        // centroid x
y = moment.m01 / area;        // centroid y

7. Assuming you already have an instance of the ArduinoTalker class, just pipe the X, Y data to the Arduino…


void Dialog::FeedDataToArduino(int x, int y)
{
    float ardX, ardY;
    int yMax = 480;       // webcam frame height in pixels
    int xMax = 640;       // webcam frame width in pixels
    int scaleX = 1;
    int scaleY = 1;

    float alpha = 0.6;    // smoothing factor for the blend below

    // parse the visual coordinates into servo motor commands (0-180 degrees)
    ardX = x * 180 / xMax * scaleX;
    ardY = 180 - y * 180 / yMax * scaleY;

    // blend with the previous command to damp jitter
    ardX = alpha * ardX + (1 - alpha) * ardXOld;
    ardY = alpha * ardY + (1 - alpha) * ardYOld;

    rover1->AddDataToOutBuffer(ardX);
    rover1->AddDataToOutBuffer(ardY);
    rover1->SendDataToArduino();
    rover1->ClearOutBuffer();

    qDebug() << "Sent H Servo:" << ardX << ", V Servo:" << ardY;

    ui->spn_ArduinoX->setValue(ardX);
    ui->spn_ArduinoY->setValue(ardY);

    ardXOld = ardX;
    ardYOld = ardY;
}
