Wednesday, November 4, 2015

2015-11-04: Autonomy Incubator Celebrates One Year in B1222

Wednesday, November 4th, marked the one-year anniversary of our team's move to B1222, also known as the Old Reid Center.

Moving Day! The Autonomy Incubator relocated to the former Reid Conference Center (B1222) today. I still can't believe it…
The new furniture is here! The new furniture is here!
Our first week in B1222
Since our move, the Autonomy Incubator has hosted over 35 student interns, given more than 50 tours (to center directors, the NASA Deputy Administrator, OMB, and OSTP, just to name a few!), and brought in close to 20 outside speakers to share their work with the Center. The facility itself measures over 70,000 cubic feet, and the forest, Mars landscape, and residential zone all help our researchers develop and test autonomous capabilities in realistic settings.

A young visitor navigates a UAV using Meghan Chandarana's gesture-based controls.
The large auditorium space enables us to test and demonstrate our algorithms. A fan favorite, Dr. Loc Tran's tree-dodging demo highlights our work with machine learning and with obstacle detection and avoidance.

In the past year, the Autonomy Incubator has worked on a variety of projects. We host student interns who lead their own research, such as Bilal Mehdi and Javier Puig-Navarro's work on coordinated trajectories for small and micro UAS, and we also continuously adapt to reflect research improvements and industry needs. You can read more about our most recent projects here.
A UAV autonomously navigates through our forest.

James Rosenthal and Jim Neilan set up our rover and "Green Machine" UAV for a demo.

An overhead view of our auditorium a year after our move.
From right to left: our residential area, forest, and the Mars landscape.

We are also pleased to announce that the Autonomy Incubator has received a project extension and an expanded name. As the need for autonomous capabilities becomes more apparent, NASA is transitioning our team name from "Autonomy Incubator" to "Langley Autonomy and Robotics Center," although we will never stop incubating new and innovative missions!

An increasing number of projects at NASA Langley Research Center require the development of autonomous capabilities to ensure mission success, and we're looking forward to working with a variety of different teams on site. As we work to maximize our interaction with other groups at Langley, our past month's showcase became an experiment in internal communications. We received a record number of NASA visitors due to heightened publicity for the event, and we left the showcase with a stronger connection to our audience and their work.

As we reflect on all that has been accomplished in the past year, we also look towards developing autonomous capabilities to meet the future challenges that NASA will encounter in missions in space, science, and aeronautics. To stay up-to-date with the projects we're working on, follow us on social media.


Twitter: @AutonomyIncub8r
Instagram: @AutonomyIncubator

Tuesday, November 3, 2015

2015-11-03: Autonomy Incubator Showcase

Thank you to everyone who attended our October Showcase at NASA Langley Research Center!

The Autonomy Incubator hosts monthly showcases for our NASA colleagues, during which we report out on how we are rising to the autonomy and robotics challenges in space, science, and aeronautics. These events are snapshots of selected technologies and capabilities that we’re developing as we advance towards our goal of precise and reliable mobility and manipulation in dynamic, unstructured, and data-deprived environments. Check out the time-lapse video of our showcase, below.



We began this month's showcase by highlighting the work of our fall computer vision intern, Deegan Atha. Deegan, a rising junior at Purdue University, discussed the onboard object classification capabilities he is developing with the help of his mentor, AI engineer Jim Neilan. As Deegan showed our audience the camera he's been using and how easily it can be added to different vehicles, he discussed the processing time per image (about 3 seconds) and our use of Euclidean segmentation for object detection.
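For readers curious how Euclidean segmentation turns raw depth data into candidate objects, here is a minimal, hypothetical sketch rather than Deegan's flight code; the function name, distance tolerance, and cluster-size threshold are illustrative assumptions. The idea is simply that 3-D points lying within a small distance of one another are grouped into a single candidate object that can then be handed to a classifier.

```python
# Illustrative sketch of Euclidean cluster extraction (not the Autonomy
# Incubator's flight code): points closer than a distance threshold are
# grouped into one cluster, and each cluster becomes a candidate object.
import numpy as np
from scipy.spatial import cKDTree

def euclidean_clusters(points, tolerance=0.25, min_size=30):
    """Group 3-D points into clusters whose members lie within `tolerance` meters of a neighbor."""
    tree = cKDTree(points)
    unvisited = set(range(len(points)))
    clusters = []
    while unvisited:
        seed = unvisited.pop()
        frontier, cluster = [seed], [seed]
        while frontier:
            idx = frontier.pop()
            for neighbor in tree.query_ball_point(points[idx], tolerance):
                if neighbor in unvisited:
                    unvisited.remove(neighbor)
                    frontier.append(neighbor)
                    cluster.append(neighbor)
        if len(cluster) >= min_size:          # discard sparse noise
            clusters.append(points[cluster])
    return clusters

# Example: two well-separated blobs of points yield two candidate objects.
cloud = np.vstack([np.random.randn(200, 3) * 0.1,
                   np.random.randn(200, 3) * 0.1 + [3.0, 0.0, 0.0]])
print(len(euclidean_clusters(cloud)), "candidate objects")
```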

In the coming month, Deegan will be working on in-flight training data, building 3D capability onboard the vehicle to optimize accuracy, and ultimately demonstrating object classification working on a flying vehicle. While we are not ready for him to leave the AI, we are looking forward to his exit presentation!

Deegan Atha, our computer vision intern, discusses his work in object classification.

In the above image, we can see the object classification working, as the
computer recognizes the chair and the NASA employee.


Jim Neilan highlights the different UAV designs in our lab.

Following Deegan, Jim Neilan briefly discussed our vehicles and the importance of our technology's platform portability. Although we might test on specific vehicles, like Herbie, our beloved rover, the autonomous behaviors we are developing are not designed to be vehicle-specific. 

The microphone was then handed to Anna Trujillo, our human factors engineer. At the Autonomy Incubator, we're working on creating autonomous systems that limit the need for constant piloting of a vehicle. However, while we're creating behaviors that can function independently of a pilot, we still need a human to define, monitor, and adjust mission specifications. Anna discussed her work developing controls that can be easily understood by a person without piloting experience. She began with the example of a science mission, in which NASA scientists would use a UAV to measure ozone and CO2 levels. Using a display program Anna developed, scientists can use a tablet to see raw data values in real time, without waiting for post-processing. Information such as vehicle position, time, and sensor data helps the scientists ensure that the test is running smoothly and that the data they're getting makes sense.

Display system created by Anna, designed to enable scientists to gather data in real time.

A package delivery display, in which the human operator can designate the size
of the package and the locations for takeoff, drop-off, and landing.

Anna has also created a user display for package delivery. Using this system, a delivery person can easily input information about the size of the package and then choose separate locations where the vehicle will take off, drop off the package, and land. After the user designates this information, the underlying algorithms determine mission specifics, such as trajectory, without direction from a pilot. To emphasize the general accessibility of the application, Anna asked a young boy in the audience to input values for the package delivery. He began by inputting a package code, the first of which was pre-programmed for a routine delivery. For example, the NASA Langley mail office might deliver the same type of package to our center director at the same time every day. To show what would happen in the event of a non-routine package delivery, our volunteer created a new code, and then quickly and easily filled in the different data fields by hand.


Ben Kelley with Herbie, our beloved rover. 

Continuing our discussion of human/vehicle interaction, Ben Kelley, our software architect, showcased his "Follow Me" behavior. In the "Follow Me" demo, our "Green Machine" UAV follows a rover: the UAV receives messages containing the rover's position and updates its current waypoint (destination) to a point one meter above the rover.
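As a rough illustration of that logic (and not Ben's actual flight software), the core of a "Follow Me" behavior can be sketched in a few lines: every incoming rover-position message simply becomes a new UAV waypoint one meter overhead. The class and function names here are invented for the example; only the one-meter offset comes from the demo.

```python
# Conceptual sketch of the "Follow Me" logic described above (not the
# Autonomy Incubator's flight software): each rover position message
# becomes a new UAV waypoint one meter above the rover.
from dataclasses import dataclass

@dataclass
class Position:
    x: float  # meters, east
    y: float  # meters, north
    z: float  # meters, up

FOLLOW_ALTITUDE_OFFSET = 1.0  # meters above the rover, as in the demo

def on_rover_position(msg: Position) -> Position:
    """Turn a rover position message into the UAV's next waypoint."""
    return Position(msg.x, msg.y, msg.z + FOLLOW_ALTITUDE_OFFSET)

# Example: the rover reports (5, 2, 0); the UAV retargets to (5, 2, 1).
print(on_rover_position(Position(5.0, 2.0, 0.0)))
```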

The "Green Machine" in flight, exhibiting the "Follow Me" behavior.

The UAV performs the "tree-dodging" developed by Dr. Loc Tran.

Dr. Loc Tran works on a variety of machine-learning projects at the Autonomy Incubator, and he presented his work with obstacle detection and avoidance. This "tree-dodging" behavior is an important area of research to highlight, as maneuverability and obstacle avoidance are integral to safe UAV flight. "Tree-dodging" begins when Loc gives the UAV the command to take off, with the directive that it is to navigate itself through the trees. Without any intervention from Loc or a pilot, the UAV uses a front-facing camera to detect and avoid certain features, such as branches from one of our artificial trees. If the UAV makes a mistake with trajectory planning, Loc later corrects the vehicle and designates a new rule of response. Through repetition of this process, the vehicle becomes skilled enough to navigate the course perfectly and, ideally, will be able to do the same in a course it has never encountered. One day, "tree-dodging" may help deliver supplies to areas that are difficult to navigate through, such as rainforests, and might be ported to underwater or planetary exploration applications.
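To make that correct-and-repeat idea concrete, here is a deliberately toy sketch, not Dr. Tran's algorithm: the "policy" is just a table mapping camera observations to actions, and each pass through the course folds the operator's corrections back in before the next flight. The course, observations, and actions are invented for illustration.

```python
# Toy illustration of learning from operator corrections (not the actual
# tree-dodging algorithm): fly the course, collect the mistakes, let the
# operator supply the right response, and repeat until the run is clean.
def fly(policy, course):
    """Return the observations the policy handled incorrectly on this pass."""
    return [obs for obs, correct_action in course
            if policy.get(obs, "forward") != correct_action]

def train_by_correction(course, max_rounds=10):
    policy = {}                                   # starts knowing nothing
    for _ in range(max_rounds):
        mistakes = fly(policy, course)
        if not mistakes:
            break                                 # navigates the course cleanly
        for obs in mistakes:                      # operator designates the rule of response
            policy[obs] = dict(course)[obs]
    return policy

# Hypothetical course: observation -> correct action
course = [("branch_left", "dodge_right"), ("branch_right", "dodge_left"), ("clear", "forward")]
print(train_by_correction(course))
```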

Our pilot, Zach Johns, with the Mars Flyer. 

The showcase ended with a demonstration of the Mars Flyer and the autonomous capabilities our team is designing for NASA missions to the red planet. Jim Neilan, our project lead, pointed to a small computer onboard the Mars Flyer running visual odometry algorithms and explained its ability to reliably navigate the vehicle in a GPS-denied environment such as Mars. Visual odometry begins with data from the downward-facing camera. Processed onboard and in real time, calculations of the movement of different ground features enable the computer to accurately create a 3-D map of the terrain.
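As a hedged illustration of that first step, the sketch below estimates frame-to-frame camera motion from matched ground features using OpenCV rather than the Mars Flyer's onboard software; the camera matrix, feature count, and file names are placeholder assumptions. Chaining these per-frame motions (and triangulating the matched features) is what builds up the terrain map over time.

```python
# Minimal visual-odometry step: match features between consecutive
# downward-camera frames and recover the relative camera motion.
# Uses OpenCV for illustration; K is an assumed camera matrix.
import cv2
import numpy as np

K = np.array([[700.0, 0.0, 320.0],
              [0.0, 700.0, 240.0],
              [0.0,   0.0,   1.0]])  # placeholder intrinsics

def relative_motion(prev_frame, curr_frame):
    """Estimate rotation R and translation direction t between two grayscale frames."""
    orb = cv2.ORB_create(1000)
    kp1, des1 = orb.detectAndCompute(prev_frame, None)
    kp2, des2 = orb.detectAndCompute(curr_frame, None)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des1, des2)
    pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])
    E, _ = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC)
    _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K)
    return R, t  # t is a direction; scale comes from other sensors or known geometry

# Usage (placeholder frame files standing in for the downward-facing camera feed):
# R, t = relative_motion(cv2.imread("frame0.png", 0), cv2.imread("frame1.png", 0))
```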

Zach flies the Mars Flyer while Dave North (on the right) explains the vehicle design.

At the Autonomy Incubator, we are continuously adapting to reflect research improvements and new NASA projects that require the development of different autonomous capabilities. To stay up-to-date with these projects, follow us on social media.


Twitter: @AutonomyIncub8r
Instagram: @AutonomyIncubator