Tuesday, November 3, 2015

2015-11-03: Autonomy Incubator Showcase

Thank you to everyone who attended our October Showcase at NASA Langley Research Center!

The Autonomy Incubator hosts monthly showcases for our NASA colleagues, during which we report on how we are rising to the autonomy and robotics challenges in space, science, and aeronautics. These events are snapshots of selected technologies and capabilities that we’re developing as we advance toward our goal of precise and reliable mobility and manipulation in dynamic, unstructured, and data-deprived environments. Check out the time-lapse video of our showcase below.



We began this month’s showcase by highlighting the work of our fall computer vision intern, Deegan Atha. Deegan, a rising junior at Purdue University, discussed the on-board object classification capabilities he is developing with the help of his mentor, AI engineer Jim Neilan. As Deegan showed our audience the camera he’s been using and how it can be easily added onto different vehicles, he discussed the processing time per image (about 3 seconds) and our work using Euclidean segmentation for detection of objects.
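
For readers curious what Euclidean segmentation looks like in practice, here is a minimal sketch (not Deegan's actual code) of how 3-D points from a depth camera can be grouped into object candidates by Euclidean distance; the 5 cm cluster tolerance and minimum cluster size are illustrative assumptions.

```python
import numpy as np
from scipy.spatial import cKDTree

def euclidean_clusters(points, tolerance=0.05, min_size=30):
    """Group 3-D points into clusters whose neighbors lie within `tolerance` meters.

    points: (N, 3) array of x, y, z coordinates from a depth camera.
    Returns a list of index arrays, one per cluster (object candidate).
    """
    tree = cKDTree(points)
    unvisited = set(range(len(points)))
    clusters = []
    while unvisited:
        # Grow a cluster outward from an arbitrary unvisited seed point.
        seed = unvisited.pop()
        frontier = [seed]
        cluster = [seed]
        while frontier:
            idx = frontier.pop()
            for neighbor in tree.query_ball_point(points[idx], r=tolerance):
                if neighbor in unvisited:
                    unvisited.remove(neighbor)
                    frontier.append(neighbor)
                    cluster.append(neighbor)
        if len(cluster) >= min_size:
            clusters.append(np.array(cluster))
    return clusters
```

Each cluster found this way becomes a candidate region that can then be handed to a classifier.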

In the coming month, Deegan will be working on collecting in-flight training data, running the 3-D processing onboard a vehicle to optimize accuracy, and ultimately demonstrating the object classification working on a flying vehicle. While we are not ready for him to leave the AI, we are looking forward to his exit presentation!

Deegan Atha, our computer vision intern, discusses his work in object classification.

In the above image, we can see the object classification working, as the computer recognizes the chair and the NASA employee.


Jim Neilan highlights the different UAV designs in our lab.

Following Deegan, Jim Neilan briefly discussed our vehicles and the importance of our technology's platform portability. Although we might test on specific vehicles, like Herbie, our beloved rover, the autonomous behaviors we are developing are not designed to be vehicle-specific. 

The microphone was then handed to Anna Trujillo, our human factors engineer. At the Autonomy Incubator, we're working on creating autonomous systems that limit the need for constant piloting of a vehicle. However, while we're creating behaviors that can function independently of a pilot, we still need a human to define, monitor, and adjust mission specifications. Anna discussed her work developing controls that can be easily understood by a person without piloting experience. She began with the example of a science mission in which NASA scientists would use a UAV to measure ozone and CO2 levels. Using a display program Anna developed, scientists can view raw data values on a tablet in real time, without waiting for post-processing. Information such as vehicle position, time, and sensor data helps the scientists ensure the test is running smoothly and that the data they're getting makes sense.
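
As a rough illustration of the kind of real-time record a display like Anna's might consume (the field names and values here are our own assumptions, not her actual schema), a single telemetry message could bundle a timestamp, the vehicle's position, and the raw sensor readings:

```python
import json
import time

def make_telemetry_message(position, sensors):
    """Bundle one real-time science telemetry sample into a JSON string.

    position: dict with vehicle latitude, longitude, and altitude in meters.
    sensors:  dict of raw readings, e.g. ozone and CO2 concentrations.
    """
    message = {
        "timestamp": time.time(),   # seconds since the epoch, for ordering samples
        "position": position,
        "sensors": sensors,
    }
    return json.dumps(message)

# Example: one sample the tablet display could plot as soon as it arrives.
print(make_telemetry_message(
    position={"lat": 37.086, "lon": -76.381, "alt_m": 45.0},
    sensors={"ozone_ppb": 41.2, "co2_ppm": 402.5},
))
```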

Display system created by Anna, designed to enable scientists to gather data in real time.

A package delivery display, in which the human operator can designate the size of the package and the locations of takeoff, drop-off, and landing.

Anna has also created a user display for package delivery. Using this system, a delivery person can easily input information about the size of the package and then choose separate locations for where the vehicle will take off, drop off the package, and land. After the user designates this information, the underlying algorithms determine mission specifics, such as trajectory, without being directed by a pilot. To emphasize the general accessibility of the application, Anna asked a young boy in the audience to input values for the package delivery. He began by entering a package code, the first of which was pre-programmed for a routine delivery. For example, the NASA Langley mail office might deliver the same type of package to our center director at the same time every day. To show what would happen in the event of a non-routine package delivery, our volunteer created a new code and then quickly and easily filled in the different data fields by hand.
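
To make the division of labor concrete, here is a hypothetical sketch (our own field names and planner, not Anna's software) of what the operator supplies versus what the underlying algorithms fill in: the user only provides a package size and three locations, and a simple planner expands that into an ordered list of waypoints.

```python
from dataclasses import dataclass

@dataclass
class DeliveryRequest:
    """What the human operator enters on the tablet."""
    package_code: str        # routine deliveries reuse a pre-programmed code
    package_size_cm: tuple   # (length, width, height) of the package
    takeoff: tuple           # (x, y) location, in meters, where the vehicle launches
    dropoff: tuple           # (x, y) location where the package is released
    landing: tuple           # (x, y) location where the vehicle lands

def plan_mission(req: DeliveryRequest, cruise_alt_m: float = 10.0):
    """Expand the operator's request into waypoints; a real planner would also
    check obstacles, battery, and payload limits against package_size_cm."""
    x0, y0 = req.takeoff
    x1, y1 = req.dropoff
    x2, y2 = req.landing
    return [
        (x0, y0, 0.0),           # start on the ground at the takeoff point
        (x0, y0, cruise_alt_m),  # climb to cruise altitude
        (x1, y1, cruise_alt_m),  # fly to the drop-off point
        (x1, y1, 1.0),           # descend to release the package
        (x1, y1, cruise_alt_m),  # climb back out
        (x2, y2, cruise_alt_m),  # fly to the landing point
        (x2, y2, 0.0),           # land
    ]
```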


Ben Kelley with Herbie, our beloved rover. 

Continuing our discussion of human/vehicle interaction, Ben Kelley, our software architect, showcased his "Follow Me" behavior. In the "Follow Me" demo, our "Green Machine" UAV follows a rover. The behavior works by having the UAV receive messages reporting the rover's position and update its current waypoint (destination) to a point one meter above the rover.
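
In sketch form (our own simplification, not Ben's actual architecture), the behavior amounts to a small message handler: whenever a rover position message arrives, the UAV's commanded waypoint is replaced with a point one meter directly above the rover.

```python
FOLLOW_ALTITUDE_M = 1.0  # hover height above the rover, per the demo

class FollowMeBehavior:
    """Keep a UAV's commanded waypoint one meter above the reported rover position."""

    def __init__(self, send_waypoint):
        # send_waypoint is whatever function pushes an (x, y, z) setpoint to the
        # flight controller; we assume such a hook exists for illustration.
        self.send_waypoint = send_waypoint

    def on_rover_position(self, x, y, z):
        """Called each time a rover position message is received."""
        target = (x, y, z + FOLLOW_ALTITUDE_M)
        self.send_waypoint(target)

# Example: print each new waypoint as rover positions stream in.
follower = FollowMeBehavior(send_waypoint=print)
for rover_pos in [(0.0, 0.0, 0.0), (0.5, 0.2, 0.0), (1.0, 0.4, 0.0)]:
    follower.on_rover_position(*rover_pos)
```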

The "Green Machine" in flight, exhibiting the "Follow Me" behavior.

The UAV performs the "tree-dodging" developed by Dr. Loc Tran.

Dr. Loc Tran works on a variety of machine-learning projects at the Autonomy Incubator, and he presented his work on obstacle detection and avoidance. This "tree-dodging" behavior is an important area of research to highlight, as maneuverability and obstacle avoidance are integral to safe UAV flight. "Tree-dodging" begins when Loc gives the UAV the command to take off, with the directive that it is to navigate itself through the trees. Without any intervention from Loc or a pilot, the UAV uses a front-facing camera to detect and avoid certain features, such as branches from one of our artificial trees. If the UAV makes a mistake with trajectory planning, Loc will later correct the vehicle and designate a new rule of response. Through repetition of this process, the vehicle becomes skilled enough to navigate the course perfectly and, ideally, will be able to do the same on a course it has never encountered. One day, "tree-dodging" may help deliver supplies to areas that are difficult to navigate through, such as rainforests, and might be ported to underwater or planetary exploration applications.
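
The fly-correct-repeat loop Loc describes can be sketched roughly like this (a simplification under our own assumptions; the actual perception and learning pipeline is far richer): the vehicle flies the course on its current policy, any mistakes are relabeled afterward with the correct response, and those corrections are folded back into the training data before the next flight.

```python
def refine_policy(policy, fly_course, get_corrections, train, num_rounds=5):
    """Iteratively improve an obstacle-avoidance policy from operator corrections.

    policy:          maps a camera image to a steering command.
    fly_course:      flies the course with `policy` and returns the logged
                     (image, command) pairs from that flight.
    get_corrections: asks the operator to relabel any mistaken commands,
                     returning (image, corrected_command) pairs.
    train:           fits a new policy to the accumulated dataset.
    """
    dataset = []
    for _ in range(num_rounds):
        flight_log = fly_course(policy)             # autonomous run through the trees
        corrections = get_corrections(flight_log)   # operator fixes mistakes afterward
        if not corrections:
            break                                   # navigated the course cleanly
        dataset.extend(corrections)
        policy = train(dataset)                     # fold corrections into the next policy
    return policy
```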

Our pilot, Zach Johns, with the Mars Flyer. 

The showcase ended with a demonstration of the Mars Flyer and the autonomous capabilities our team is designing for NASA missions to the red planet. Jim Neilan, our project lead, pointed to a small computer running visual odometry algorithms onboard the Mars Flyer and explained its ability to reliably navigate the vehicle in a GPS-denied environment such as Mars. Visual odometry begins with data from the downward-facing camera. Processed onboard and in real time, calculations of how ground features move from frame to frame enable the computer to track the vehicle's motion and accurately create a 3-D map of the terrain.
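
As a rough illustration of the first step of visual odometry (our own sketch using OpenCV, not the Mars Flyer's flight code), consecutive frames from a downward-facing camera can be compared by tracking ground features and recovering the camera's relative motion from how they move; the camera intrinsics below are placeholders.

```python
import cv2
import numpy as np

# Placeholder intrinsics for illustration only; real values come from camera calibration.
K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0, 0.0, 1.0]])

def relative_motion(prev_gray, curr_gray, K=K):
    """Estimate the camera's rotation and direction of travel between two
    consecutive grayscale frames from the downward-facing camera."""
    # 1. Pick trackable ground features in the previous frame.
    prev_pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=500,
                                       qualityLevel=0.01, minDistance=7)
    # 2. Follow those features into the current frame (pyramidal Lucas-Kanade flow).
    curr_pts, status, _err = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray,
                                                      prev_pts, None)
    good_prev = prev_pts[status.ravel() == 1]
    good_curr = curr_pts[status.ravel() == 1]
    # 3. Recover the relative pose that best explains how the features moved.
    E, _mask = cv2.findEssentialMat(good_prev, good_curr, K,
                                    method=cv2.RANSAC, prob=0.999, threshold=1.0)
    _, R, t, _ = cv2.recoverPose(E, good_prev, good_curr, K)
    return R, t  # chaining R and t over successive frames traces the vehicle's path
```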

Zach flies the Mars Flyer while Dave North (on the right) explains the vehicle design.

At the Autonomy Incubator, we are continuously adapting to reflect research improvements and new NASA projects that require the development of different autonomous capabilities. To stay up-to-date with these projects, follow us on social media.


Twitter: @AutonomyIncub8r
Instagram: @AutonomyIncubator
