The factory camera for the Raspberry Pi comes with a tiny lens that has a field of view (FOV) of about 67 degrees diagonal (53 degrees horizontal and 41 degrees vertical). As any analemma enthusiast will tell you, this FOV just barely meets the minimum needed (I will post details on this soon) to achieve a full figure-of-eight image with enough room on the sides for a nice foreground. So the only way I can get my Pi to capture a full analemma is by changing the factory lens.
Luckily, this is quite easily done. I followed the excellent instructions posted here to remove the factory lens and fitted an M12 (12 mm) mounting bracket from an old, broken webcam in its place. Unfortunately, the mounting screws did not line up with the holes provided on the Pi’s camera PCB, so I just glued it into place. This is what the result looks like…
Most robots, especially autonomous ones, need to be clever enough to avoid bumping into obstacles. To do this, they need sensors to investigate the environment around them, and they need to process this data to identify obstacles in their vicinity. Finally, they need to generate motor commands that steer them clear of any obstacles they find. This is the simplest form of obstacle avoidance, and there are tons of examples on the internet, with plenty of little robots that can do this quite nicely.
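The simplest version of this behaviour can be sketched in a few lines. This is only an illustration, not code from any particular robot; the sensor threshold and motor speeds are made-up values for a hypothetical differential-drive robot with one forward-facing distance sensor:

```python
SAFE_DISTANCE_CM = 30  # hypothetical threshold: anything closer is an obstacle

def avoid_step(distance_cm):
    """One control cycle: return (left_speed, right_speed) for the two wheels."""
    if distance_cm < SAFE_DISTANCE_CM:
        return (-0.5, 0.5)   # obstacle ahead: spin in place to turn away
    return (1.0, 1.0)        # path clear: drive straight ahead

print(avoid_step(10))  # obstacle close: (-0.5, 0.5)
print(avoid_step(80))  # path clear:     (1.0, 1.0)
```

In a real robot this function would run in a loop, reading the sensor and writing the motor speeds on every cycle.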
With this sort of rudimentary obstacle avoidance algorithm, a robot can keep clear of obstacles, but it will most likely wander around aimlessly while doing so. Sometimes robots need to be a little more intelligent. They need to reach a goal: perhaps they are chasing a target, perhaps they need to reach one of many waypoints along a predetermined path, perhaps they are headed toward a battery charging point or a position of interest, maybe they are meeting up with a friend, or perhaps they need to duck under enemy radar cover while approaching a target! Whatever the case, robots that can navigate intelligently need a slightly more robust obstacle avoidance behaviour built into them.
Clever Robots can avoid obstacles as they head towards a goal
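One common way to combine goal-seeking with obstacle avoidance is a simple potential-field approach: the goal pulls the robot toward it while nearby obstacles push it away. The sketch below is a minimal illustration of that idea (the repulsion radius and gain are arbitrary, and a real robot would need far more care with local minima and sensor noise):

```python
import math

def steer(pos, goal, obstacles, repulse_radius=1.0):
    """Combine an attractive pull toward the goal with repulsive pushes
    away from nearby obstacles; return a unit heading vector."""
    # Attractive component: unit vector pointing at the goal
    dx, dy = goal[0] - pos[0], goal[1] - pos[1]
    d = math.hypot(dx, dy) or 1e-9
    fx, fy = dx / d, dy / d
    # Repulsive components: push away from each obstacle inside the radius
    for ox, oy in obstacles:
        rx, ry = pos[0] - ox, pos[1] - oy
        r = math.hypot(rx, ry)
        if 1e-9 < r < repulse_radius:
            gain = (repulse_radius - r) / r  # stronger when closer
            fx += gain * rx / r
            fy += gain * ry / r
    # Normalise so the result is a pure heading
    norm = math.hypot(fx, fy) or 1e-9
    return fx / norm, fy / norm

print(steer((0, 0), (10, 0), []))            # no obstacles: head straight at the goal
print(steer((0, 0), (10, 0), [(0.5, 0.3)]))  # obstacle up ahead: heading bends away
```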
An XY plotter is a machine that can control a plotting instrument (such as a pen, or a cutting tool like a blade or a laser) over two axes in an accurate, precise manner. Computer Numerical Control (CNC) machines are very accurate XY plotters that can be used for anything from decorating cakes to cutting steel plates into very precise shapes and sizes.
I wanted to make a drawing robot that would be able to draw the contours of a human face, so I decided to experiment with some very basic stepper motors and a cheap toy plotter that I bought on the Internet. Unfortunately, the plotter itself is so poorly manufactured that it is useless as a drawing tool, but the whole project gave me much insight into the steps needed to design and build a proper computer-controlled plotting machine.
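A proper plotter has to coordinate its two stepper motors so the pen traces a straight line between any two grid points. A common way to do this is Bresenham-style integer interpolation; here is a minimal sketch (illustrative only, not the firmware of any particular plotter):

```python
def line_steps(x0, y0, x1, y1):
    """Yield (dx, dy) single-step motor commands (each -1, 0 or +1) that
    move the pen from (x0, y0) to (x1, y1) along a straight line."""
    dx, dy = abs(x1 - x0), abs(y1 - y0)
    sx = 1 if x1 > x0 else -1
    sy = 1 if y1 > y0 else -1
    err = dx - dy
    x, y = x0, y0
    while (x, y) != (x1, y1):
        e2 = 2 * err
        step_x = step_y = 0
        if e2 > -dy:          # time to step the X motor
            err -= dy
            x += sx
            step_x = sx
        if e2 < dx:           # time to step the Y motor
            err += dx
            y += sy
            step_y = sy
        yield step_x, step_y

# Each tuple would be sent to the stepper drivers, one per motor tick.
print(list(line_steps(0, 0, 3, 1)))
```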
As robots become smarter, faster and more capable, they are being developed to perform increasingly complex tasks. In order to perform these tasks properly, robots are becoming more and more dependent on accurate navigation through the environment in which they operate. Somewhere in the future, if intelligent robots were to rise up and demand fundamental rights, I think one of the first things they would ask for is the answer to the question, “Where am I?”.
The Monty Hall problem was created by Steve Selvin and is a classic puzzle whose correct answer is counter-intuitive almost to the point of disbelief. As this page explains, even some of the most competent mathematicians of the 20th century refused to accept the correct answer to the Monty Hall problem for a long time.
Here is the statement of the problem:
Suppose you’re on a game show, and you’re given the choice of three doors: Behind one door is a car; behind the other two doors are goats. You pick a door, and the host, who knows what’s behind the doors, opens another door, revealing a goat. He then says to you, “Do you want to change your selection?” Is it to your advantage to switch your choice?
Monty Hall Problem. Image is from wikipedia
What does intuition tell us? After the host opens one door, revealing a goat, we are left with two closed doors, one hiding a car and the other a goat. With an apparent 50% chance of success either way, intuition leads us to conclude that our chances are the same whether we switch doors or not.
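Intuition is wrong here, and nothing settles the argument faster than playing the game many times. The sketch below simulates both strategies; switching wins about two thirds of the time, because switching loses only when the first pick happened to be the car:

```python
import random

def play(switch, trials=100_000):
    """Simulate the Monty Hall game; return the fraction of wins."""
    wins = 0
    for _ in range(trials):
        car = random.randrange(3)    # door hiding the car
        pick = random.randrange(3)   # contestant's first choice
        # Host opens a door that is neither the pick nor the car
        opened = next(d for d in range(3) if d != pick and d != car)
        if switch:
            # Move to the one remaining closed door
            pick = next(d for d in range(3) if d != pick and d != opened)
        wins += (pick == car)
    return wins / trials

print(f"stay:   {play(switch=False):.3f}")   # close to 1/3
print(f"switch: {play(switch=True):.3f}")    # close to 2/3
```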
After playing around with DC electronics for almost a year now, I thought it was finally time to start mucking about in the world of Alternating Current. To begin, I decided to experiment with the ACS712 Current Sensor.
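Before wiring anything up, it helps to understand the sensor's output: the ACS712 produces an analog voltage centred on Vcc/2 that swings linearly with current (185 mV/A for the 5 A version, per the datasheet). A minimal sketch of the conversion from a 10-bit ADC reading to amperes, assuming a 5 V supply:

```python
VCC = 5.0            # supply voltage (V)
ADC_MAX = 1023       # 10-bit ADC, e.g. an Arduino analog input
SENSITIVITY = 0.185  # V per amp for the ACS712-05B (5 A variant)

def adc_to_amps(raw):
    """Convert a raw ADC reading to current in amperes."""
    volts = raw / ADC_MAX * VCC          # ADC counts -> volts
    return (volts - VCC / 2) / SENSITIVITY  # subtract the zero-current offset

print(adc_to_amps(512))   # mid-rail reading: roughly 0 A
```

The 20 A and 30 A variants use 100 mV/A and 66 mV/A respectively, so only `SENSITIVITY` would change.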
This image, taken from Visakhapatnam (17 40 N 083 17 E), is possibly the first recording of the Analemma of the Sun ever captured in India. It combines a sequence of 26 individual photographs taken from 24 March 2013 to 13 March 2014. Superimposed on one another, they trace the movement of the sun through the sky over a full calendar year. Click on the image for a larger view.
Analemma of the Sun
I photographed this sequence using a Nikon D40x camera and a variable ND filter, adjusting filter density, shutter speed and aperture settings at each instance to capture only the sun’s disc, leaving the rest of the frame completely black. The final image combines all 26 exposures with a background shot I took later.
I recently read that, to date, Mount Everest has been scaled 1,924 times.
Compare this figure to the number of people who have successfully captured the Analemma of the Sun. Any guesses? 1000, maybe 500? Nowhere even close. According to the founder of this website, in the entire history of mankind, not more than 20 people have managed to get it right.
This is the first ever photograph that successfully captured the Analemma. It was taken over a period of one year by Dennis di Cicco in 1978.
This project brings together the DIY Haptic Control Glove and the Robotic Hand that I made earlier. The entire project cost less than US$25. For details on how they were built and how they work, just follow the link for each.
This video demonstrates the complete project:
1. Calibration of the glove
2. Control of the fingers
3. Touching finger tips of little and index fingers to demonstrate
4. Performing a simple task
5. Detail of servo movements
To test the working of a robot hand like the one I built earlier, I needed a haptic control glove that would encode the flexing of my fingers into electrical signals. These signals would be interpreted by a microcontroller (like the ATMEGA328 on the Arduino platform), causing the servo motors on the robot hand to mimic my finger movements inside the glove. Electronic puppetry.
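At its core, turning a flex-sensor reading into a servo angle is just a linear interpolation between calibrated end points, much like Arduino's map() function. Here is a sketch of that mapping in Python; the calibration values are hypothetical, since every glove's straight/bent readings will differ:

```python
def map_range(value, in_min, in_max, out_min, out_max):
    """Linearly map a sensor reading onto a servo angle, clamped to the
    calibrated range (like Arduino's map(), plus clamping)."""
    value = max(in_min, min(in_max, value))
    return out_min + (value - in_min) * (out_max - out_min) / (in_max - in_min)

# Hypothetical calibration: the flex sensor reads ~300 with the finger
# straight and ~700 fully bent; the servo swings 0-180 degrees.
print(map_range(500, 300, 700, 0, 180))  # midpoint of the range -> 90.0
```

The calibration step in the video simply records each finger's `in_min` and `in_max` before the mapping is used.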