Tuesday, June 9, 2015

2015-06-08: Autonomy Incubator Runs Tests on Visual Odometry System




It's an exciting day here at the Autonomy Incubator, as intern Alex Hagiopol runs the first tests on what will become a GPS-independent localization system. The end goal of this research is to achieve the same level of control over robots that we currently get from outdoor positioning systems such as GPS or indoor tracking systems such as Vicon.

Perception remains one of the hardest problems in robotics. Currently, our answer to that problem is the Vicon system: a ring of near-infrared cameras that picks up on reflective balls attached to the robot's body and streams those points' position and orientation 200 times per second. So good is this system at telling where objects are, and so good are scientists at using it, that incredible UAV acrobatics become possible. Just look at this TED Talk from 2013 about the "athleticism" of quadcopters:



With rapid and precise tracking systems, UAVs can be controlled well enough to do backflips in an auditorium, but those capabilities do not carry over to the real world, where autonomous machines have no such infrastructure to lean on. This, then, is the problem of robotic perception: achieve the same level of precision and control without using an external positioning system.

The Autonomy Incubator's team is tackling the perception problem with an approach called SVO, or Semi-direct Visual Odometry, developed last year at the University of Zurich. The main computer receives video from a camera on a UAV, recognizes "features" in the robot's surroundings, attaches reference points to them, and calculates speed, position, and orientation based on how the robot moves relative to those points.
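For readers who want a feel for how a pipeline like that hangs together, here is a minimal sketch of feature-based visual odometry written with OpenCV. To be clear, this is not the Zurich SVO code (SVO works semi-directly on image intensities rather than purely on tracked corners), and the video path and camera intrinsics below are made-up placeholders:

# A minimal two-frame visual odometry sketch using OpenCV.
# Illustrative only: track points between frames and recover camera
# motion from them. VIDEO_PATH and the intrinsics K are placeholders.

import cv2
import numpy as np

VIDEO_PATH = "webcam_test.mp4"          # hypothetical input video
K = np.array([[700.0,   0.0, 320.0],    # assumed camera intrinsics
              [  0.0, 700.0, 240.0],
              [  0.0,   0.0,   1.0]])

cap = cv2.VideoCapture(VIDEO_PATH)
ok, frame = cap.read()
prev_gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

# Detect strong corners to serve as the "features" described above.
prev_pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=500,
                                   qualityLevel=0.01, minDistance=7)

pose_R, pose_t = np.eye(3), np.zeros((3, 1))  # accumulated orientation and position

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    # Track the features into the new frame with sparse optical flow.
    next_pts, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, prev_pts, None)
    good_prev = prev_pts[status.ravel() == 1]
    good_next = next_pts[status.ravel() == 1]

    # Recover the relative rotation and (unit-scale) translation between frames.
    E, _ = cv2.findEssentialMat(good_next, good_prev, K,
                                method=cv2.RANSAC, prob=0.999, threshold=1.0)
    _, R, t, _ = cv2.recoverPose(E, good_next, good_prev, K)

    # Chain the relative motion into a running pose estimate.
    pose_t = pose_t + pose_R @ t
    pose_R = R @ pose_R

    prev_gray, prev_pts = gray, good_next.reshape(-1, 1, 2)

cap.release()
print("Final estimated position (arbitrary scale):", pose_t.ravel())

One thing this sketch makes obvious: a single camera only recovers translation up to an unknown scale factor, which is part of why comparing against an absolute reference like Vicon matters.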

The surest way to test SVO's accuracy is to compare it to "the truth," generated in this case by the Vicon system. Today, Hagiopol took his webcam setup inside the ring to compare the accuracy of this visual method against the tried-and-true results from the infrared cameras. Notice the reflective balls attached to the camera rig, used to track its movement through the observable operational area for data comparison.
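As an illustration of what that comparison might look like in practice, here is a small sketch that lines up a visually estimated trajectory with a Vicon track and reports the position error. The file names, column layout, and single-scale alignment are assumptions for the example, not the Incubator's actual evaluation tooling:

# Rough sketch: compare an estimated trajectory against Vicon "truth".
# Assumes each file holds time-synchronized rows of [t, x, y, z].

import numpy as np

vo = np.loadtxt("svo_trajectory.txt")[:, 1:4]       # hypothetical SVO output
truth = np.loadtxt("vicon_trajectory.txt")[:, 1:4]  # hypothetical Vicon export

# Remove each trajectory's starting offset so both begin at the origin.
vo -= vo[0]
truth -= truth[0]

# Monocular odometry is only known up to scale, so fit a single scale
# factor to the truth before measuring error. (A fuller evaluation would
# also align rotation, e.g. with a Umeyama fit; that is omitted here.)
scale = np.sum(truth * vo) / np.sum(vo * vo)
errors = np.linalg.norm(scale * vo - truth, axis=1)

print(f"Mean position error:  {errors.mean():.3f} m")
print(f"RMS position error:   {np.sqrt(np.mean(errors**2)):.3f} m")
print(f"Worst position error: {errors.max():.3f} m")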



Alex used a patchwork of foam tiles as his test surface, arranged carefully to provide a high-contrast environment full of features for the algorithm to recognize. Here, we can see the way the SVO algorithm picked up on features on the mat—



—and what those features looked like as points in the algorithm's map of the surroundings. The blue line shows the path Alex walked, which the algorithm derived from his movement relative to those points. Very, very cool stuff.
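To give a sense of how a map-and-path picture like that could be produced, here is a short, hypothetical plotting sketch: the triangulated feature points drawn as a cloud, with the estimated camera path threaded through them in blue. The file names and array layouts are stand-ins for whatever the odometry pipeline actually logs:

# Hypothetical visualization of map features and the estimated camera path.

import numpy as np
import matplotlib.pyplot as plt

map_points = np.loadtxt("map_points.txt")    # hypothetical N x 3 feature positions
camera_path = np.loadtxt("camera_path.txt")  # hypothetical M x 3 estimated positions

fig = plt.figure()
ax = fig.add_subplot(111, projection="3d")

# Triangulated features from the foam-tile mat, drawn as small dots.
ax.scatter(map_points[:, 0], map_points[:, 1], map_points[:, 2],
           s=2, c="gray", label="map features")

# The path the camera was carried along, recovered from motion
# relative to those features.
ax.plot(camera_path[:, 0], camera_path[:, 1], camera_path[:, 2],
        c="blue", label="estimated path")

ax.set_xlabel("x [m]")
ax.set_ylabel("y [m]")
ax.set_zlabel("z [m]")
ax.legend()
plt.show()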












