Thursday, June 30, 2016

2016-06-29: Autonomy Incubator Develops Collision Detection Capabilities

Javier updates fellow intern Kastan Day on his research.

This is intern Javier Puig Navarro's third summer at the Autonomy Incubator (Ai), and he's wasted no time in churning out his typical extraordinary, innovative avionics research. When he pulled me aside to show me his work after lunch today, I was expecting to be amazed as usual, but something about today seemed different.

"I just thought you might like to see it," he said. "It's very pretty."

"Pretty?" I asked. Avionics is cool for a lot of reasons; aesthetics usually isn't one of them. But, heck, he was right. Look at this.

The graph model from Javier's algorithm.

One of Javier's many good qualities is that he's an excellent teacher (as many of the younger interns can attest), so I actually understand what's going on here enough to explain it to you. Ready? 

The algorithm Javier created is a collision detection algorithm for autonomous vehicles. It looks at the planned path for the vehicle compared to other obstacles and vehicles in the area, and determines if a crash is going to happen. 

"Collision checking can be used in the planning phase, but also on-line—if it's very efficient—for path re-planning and obstacle avoidance," he said. "Basically, what it does is tell you if two entities collide."

That's easy enough to understand, right? It's the way he does it that's the really cool part, and what generates all these trippy graphs.

The red and green shapes on the right are two randomly generated shapes, which may or may not intersect. Imagine they're two UAVs, and intersecting means they're colliding. The blue shape on the left is the Minkowski Addition, which is very much beyond my realm of understanding (here's a PowerPoint if you're into math) but can be understood as the "sum" of one shape and the mirror image of the other. If the red and green shapes intersect, then the origin will be inside the blue shape. If they do not intersect, then the origin will be outside it. Here, let's switch to 2D so this is easier to see.

The red and green shapes intersect, so the origin (the black dot) is inside the Minkowski addition. The algorithm needs to confirm that the origin is inside the shape in order to determine that there's been a collision. How does it do this? More math!
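Just to make the idea concrete, here's a brute-force Python sketch (my own toy illustration, not Javier's code): build the blue shape by taking every pairwise difference of the two shapes' vertices, then check whether the origin lands inside it.

```python
# Illustrative sketch only (not Javier's implementation): build the blue
# shape from every pairwise difference of vertices, then test whether the
# origin falls inside the resulting convex shape.

def convex_hull(points):
    """Andrew's monotone chain; returns hull vertices counterclockwise."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    def cross(o, a, b):
        return (a[0]-o[0])*(b[1]-o[1]) - (a[1]-o[1])*(b[0]-o[0])
    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

def shapes_collide(a, b):
    # every pairwise difference a - b; its convex hull is the blue shape
    diff = [(ax - bx, ay - by) for ax, ay in a for bx, by in b]
    hull = convex_hull(diff)
    # the origin is inside a counterclockwise convex polygon iff it sits
    # on the left of every edge
    n = len(hull)
    return all(
        (hull[(i+1) % n][0]-hull[i][0]) * (0-hull[i][1])
        - (hull[(i+1) % n][1]-hull[i][1]) * (0-hull[i][0]) >= 0
        for i in range(n)
    )

square = [(0, 0), (2, 0), (2, 2), (0, 2)]
overlapping = [(1, 1), (3, 1), (3, 3), (1, 3)]
far_away = [(3, 3), (4, 3), (4, 4), (3, 4)]
print(shapes_collide(square, overlapping))  # True: they intersect
print(shapes_collide(square, far_away))     # False: no collision
```

Note that this brute-force version computes every vertex of the blue shape, which is exactly the work Javier's algorithm avoids.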

The algorithm picks a random vertex of the blue shape (vertex A) and draws a line to the vertex that would put the line closest to the origin (vertex B). Javier used the GJK Distance Algorithm to let his program do this, which is again out of my wheelhouse, but here's the paper if you're interested.

Then, it looks for a point that would make a perpendicular line to the line between A and B, again in the direction of the origin. 

"The idea is that you always expand in the direction of the origin," Javier said.

Once the algorithm creates a shape that encompasses the origin, it knows that a collision has happened and sends out an alert. If the origin can't be encompassed, then the algorithm knows it must be outside of the shape and sends a "no collision" message. It can usually do this in less than six iterations, which is really, really fast.
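For the curious, here's a rough Python sketch of that iteration (an illustrative 2D reimplementation of the GJK idea, not Javier's code, and it glosses over degenerate cases like shapes that just touch): a support function finds the farthest vertex of the blue shape in any direction, and the algorithm grows a small triangle toward the origin until it either encloses it (collision) or proves it can't (no collision).

```python
# Rough illustrative sketch of 2-D GJK-style intersection testing.

def support(verts, d):
    """Farthest vertex of a shape in direction d."""
    return max(verts, key=lambda v: v[0]*d[0] + v[1]*d[1])

def minkowski_support(a, b, d):
    """Farthest point of the blue (difference) shape in direction d."""
    pa = support(a, d)
    pb = support(b, (-d[0], -d[1]))
    return (pa[0] - pb[0], pa[1] - pb[1])

def triple(a, b, c):
    """Vector triple product (a x b) x c, expanded for 2-D vectors."""
    ac = a[0]*c[0] + a[1]*c[1]
    bc = b[0]*c[0] + b[1]*c[1]
    return (b[0]*ac - a[0]*bc, b[1]*ac - a[1]*bc)

def gjk_collide(a, b, max_iter=20):
    d = (1.0, 0.0)
    simplex = [minkowski_support(a, b, d)]
    d = (-simplex[0][0], -simplex[0][1])       # always head toward the origin
    for _ in range(max_iter):
        p = minkowski_support(a, b, d)
        if p[0]*d[0] + p[1]*d[1] < 0:
            return False                       # origin provably outside
        simplex.append(p)
        if len(simplex) == 2:                  # a line: aim perpendicular
            b_pt, a_pt = simplex
            ab = (b_pt[0]-a_pt[0], b_pt[1]-a_pt[1])
            ao = (-a_pt[0], -a_pt[1])
            d = triple(ab, ao, ab)
        else:                                  # a triangle: trim or finish
            c_pt, b_pt, a_pt = simplex
            ab = (b_pt[0]-a_pt[0], b_pt[1]-a_pt[1])
            ac = (c_pt[0]-a_pt[0], c_pt[1]-a_pt[1])
            ao = (-a_pt[0], -a_pt[1])
            ab_perp = triple(ac, ab, ab)
            ac_perp = triple(ab, ac, ac)
            if ab_perp[0]*ao[0] + ab_perp[1]*ao[1] > 0:
                simplex = [b_pt, a_pt]         # drop C, search past edge AB
                d = ab_perp
            elif ac_perp[0]*ao[0] + ac_perp[1]*ao[1] > 0:
                simplex = [c_pt, a_pt]         # drop B, search past edge AC
                d = ac_perp
            else:
                return True                    # origin enclosed: collision
    return False

square = [(0, 0), (2, 0), (2, 2), (0, 2)]
print(gjk_collide(square, [(1, 1), (3, 1), (3, 3), (1, 3)]))  # True
print(gjk_collide(square, [(3, 3), (4, 3), (4, 4), (3, 4)]))  # False
```

Notice that the loop only ever asks for one new vertex per iteration, which is why it usually terminates in a handful of steps.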

"We do not need to compute all these vertices," Javier said, gesturing at the blue shape on the screen. The algorithm is iterative, which means it starts at one point and keeps drawing shapes until it finds a solution. It does the minimum amount of work necessary to get the right answer, and then stops. This kind of agility will be crucial once Javier's algorithm gets implemented in real time on a UAV. For now, he's also exploring other applications of his work.

"Now, the conclusion is collision or no collision. Then you could modify this algorithm to get the minimum distance between points and the penetration distance," he said.

Tuesday, June 28, 2016

2016-06-28: Autonomy Incubator Intern Deegan Atha Teaches Machines to See

Intern Deegan Atha carefully lines up a ficus and a quadrotor in front of the webcam mounted to one of the widescreen monitors in the flight range.

"You can do it," he mutters to the camera as he gives it a final adjustment before stepping in line with the UAV and potted plant. After half a second, three rectangles pop up on the display, each boxing and labeling one of the three things in view: Tree, Drone, and Human. He smiles as Kastan and I gasp appreciatively. His algorithm works.

If you've followed the Autonomy Incubator (Ai) for any length of time, you know that we do a lot of research with computer vision and augmented reality. From SVO (Semi-direct Visual Odometry) to PTAM (Parallel Tracking And Mapping) to a host of other applications, we rely on our vehicles being able to navigate within their surroundings. But, what if they could do better than that? What if we had vehicles that could not only detect objects around them, but recognize them— tell a "UAV" apart from a "person" or a "chair," for example? Deegan is back at the Ai this summer, answering these very questions.

A rising senior at Purdue University, Deegan returns to the Ai after starting this project as an intern last fall. He had never done much with object recognition before, but it quickly became his niche.

"When I got here, I didn't do much of [computer vision], but now I do some of this at Purdue," he said. His 3D object recognition algorithm has become so emblematic of his work that PIs Loc Tran and Ben Kelley refer to it by the acronym 3DEEGAN: 3D Efficient Environmental Generic Artificial Neural Network.

"It's a joke we came up with. Are you putting this on the blog?" Ben said.

Object classification isn't just a cool add-on to our existing computer vision algorithms; it has the potential to push our research ahead by light-years. Why? If a vehicle knows what an obstacle is— if it's a tree versus a person, for example— then it can make a decision about how to maneuver safely in the situation.

Think about it. A tree will definitely remain stationary, so a simple avoidance maneuver is a safe way to go. A human, though, introduces all kinds of possibilities, so the vehicle will have to track the person, decide whether to hover or move, and determine if just landing would be the safest thing to do. The same goes for another UAV, a pole, a car, an airplane, etc. The more obstacles an autonomous vehicle can recognize, the more safely it can operate.
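As a purely hypothetical sketch (the class names and the 3-meter threshold are invented for illustration, not Deegan's actual logic), that decision layer might look something like this:

```python
# Hypothetical sketch: obstacle-class-dependent avoidance. The class
# names and the 3-meter threshold are invented for illustration only.

def avoidance_action(obstacle_class, distance_m):
    if obstacle_class == "tree":
        return "simple_avoid"              # stationary: just route around it
    if obstacle_class in ("human", "drone", "car"):
        # a potentially moving obstacle: track it, or land if it's too close
        return "land" if distance_m < 3.0 else "track_and_hover"
    return "hover"                         # unknown class: be conservative

print(avoidance_action("tree", 10.0))   # simple_avoid
print(avoidance_action("human", 1.5))   # land
```

The point is simply that the richer the classifier's vocabulary, the more branches a policy like this can have.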

Deegan's 2D and 3D algorithms recognize Anicca, despite her cunning imitation of a robot.

In order to make a vehicle recognize objects, Deegan is training an onboard deep learning algorithm using convolutional neural networks. (And here are some links about those if you're interested.) The way he explains it actually makes a lot of sense, even to people like me, who flunked out of computer science.

Deep learning falls under the larger umbrella of artificial intelligence—it's a subset of machine learning that mimics the way mammalian brains work (hence the "neural networks"). Deep learning means that the algorithms can learn to recognize patterns without direct supervision from the researcher, who would otherwise sit there selecting features on different pictures for hours in order to train them. Instead, Deegan just picks a set of images of something and makes the algorithm analyze and classify them. Most of the images come from online libraries, but some of them Deegan takes himself.

Kastan explores the 3D algorithm's display with one hand and films with the other.

"The main dataset is called ImageNet, which is a data set from Stanford and Princeton that has millions of images and tens of thousands of image classifications, plus bounding box information," he said. "Bounding box" information highlights where in the image the object to be trained is, allowing for more accurate training.

The convolutional neural network, then, is the method Deegan has chosen to train his deep learning algorithm. It's a way for the algorithm to analyze an image from the data set, pick out features, and then classify it. What makes convolutional neural networks ideal for Deegan's algorithm is their efficiency: they use layers of gridded filters of "neurons" that build upon each other to find individual "features" in the scene—a hand, for example, or a propeller. In the final, "fully connected" layer, every neuron is connected to every neuron in the layer before it, and the algorithm takes all the features it has found and combines them into a classification for the object. Here's a related paper from a team at the University of Toronto if you want the gritty details.

An illustration of how deep learning algorithms classify images. (source)

"A convolutional neural network is basically just creating a hierarchy of features," he explained. "It would take a lot of memory and computation time if every layer was fully connected."

Plus, because convolutional neural networks scan images locally—piece-by-piece instead of all at once— the deep learning algorithm learns to recognize an object no matter what its position in the scene. So, even if the algorithm had only ever seen pictures of trees directly in the middle of the field of view, it would still be able to find and recognize a tree that was off to the left or upside down.

Basically, instead of trying to run one very computationally expensive, complete analysis of the image, convolutional neural networks run a bunch of smaller analyses that build upon each other to create an interpretation of the image. Make sense? Great! Here's a TED Talk Deegan recommends to understand neural networks, by Professor Fei-Fei Li of the Stanford Vision Lab.
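To see that position-independence in action, here's a tiny toy example (my own sketch, not Deegan's network): a single filter slides across an image and fires equally hard on a feature wherever it appears.

```python
# Toy sketch (not Deegan's network): a single convolutional filter slides
# across an image and responds identically to a feature at any position.

def conv2d(image, kernel):
    """Valid cross-correlation over a 2-D list-of-lists image."""
    kh, kw = len(kernel), len(kernel[0])
    h, w = len(image), len(image[0])
    return [[sum(image[i+u][j+v] * kernel[u][v]
                 for u in range(kh) for v in range(kw))
             for j in range(w - kw + 1)]
            for i in range(h - kh + 1)]

# a crude vertical-edge detector
kernel = [[1, -1],
          [1, -1]]

def blank(n=6):
    return [[0.0] * n for _ in range(n)]

img1, img2 = blank(), blank()
img1[1][1] = img1[2][1] = 1.0   # short vertical bar near the top left
img2[3][4] = img2[4][4] = 1.0   # the same bar near the bottom right

peak1 = max(max(row) for row in conv2d(img1, kernel))
peak2 = max(max(row) for row in conv2d(img2, kernel))
print(peak1 == peak2)  # True: same response strength, different position
```

That's the "scan locally, piece by piece" property in miniature: the filter doesn't care where in the frame the tree (or, here, the edge) shows up.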

Monday, June 27, 2016

2016-06-27: Autonomy Incubator Sees Successful GLARF Flights

John Cooper with the GLARF in the Ai flight range.

Harnessed to a specially-installed rope tether, missing its nose cone, and swaddled in electrical tape repairs, the GLARF is one of the Autonomy Incubator's (Ai) more unconventional research vehicles. Don't be fooled by appearances though— this little model plane has a big job to do.

"GLARF" is an acronym for "GL-10 Almost Ready To Fly," because it's a small-scale research model of the real thing. For the uninitiated, the GL-10 (short for Greased Lightning 10— it's NASA; we love acronyms) is the hybrid-engine tiltwing aircraft NASA Langley has been developing as a UAV for the past couple of years. Basically, the wings look the way they do because they articulate, combining VTOL (Vertical Take-Off and Landing) with forward flight by transitioning horizontally once the GL-10 is in the air. Although the GL-10 project has its own team focused on Dynamics and Controls, the Ai has served a cooperative role in its development. For some time now, we've been hosting the GLARF.

The GLARF waits for take-off while John and his crew get ready for flight.

The Ai's crack team of GLARF wranglers includes PI Paul Rothhaar, co-I John Cooper, NIFS intern Andrew Patterson from the University of Illinois Urbana-Champaign (UIUC), and high school intern Nick Selig. Their goal, John explains, is to develop autonomous capabilities for the GL-10.

"The GLARF is an avionics test bed for the GL-10," he said. "What we want to do, eventually, is fly the GL-10 autonomously."

John pilots the GLARF from behind the net and a shatterproof glass barrier.
Andrew remains a vigilant safety pilot.

John's research with the GLARF focuses on control algorithms. I didn't know what that meant either, so I did some research (read: asked the engineers to "explain it to me like I'm five") and here's what I learned: controls for autonomous vehicles have an inner loop and an outer loop. The inner loop takes care of the mechanics of flight—stabilization, thrust, that sort of stuff—and the outer loop tells the vehicle what to do, much like a remote human pilot. John is working on making the GLARF smart enough to fly by itself, without a remote pilot. The algorithms developed by the Ai team will sit on top of both control loops, bringing sensing, perception, and decision-making into the mix: an "outer outer" control loop, if you will.
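To picture the structure, here's a hypothetical one-dimensional toy (invented gains and waypoints, nothing GLARF-specific): the outer loop hands targets to the inner loop, which handles the "mechanics of flight."

```python
# Hypothetical 1-D toy of nested control loops (not GLARF code): the
# outer loop picks the current target, the inner loop chases it.

def inner_loop(position, velocity, target, kp=2.0, kd=1.0):
    """PD controller: the 'mechanics of flight' layer returns acceleration."""
    return kp * (target - position) - kd * velocity

def outer_loop(position, waypoints):
    """The 'remote pilot' layer: advance when close to the next waypoint."""
    if len(waypoints) > 1 and abs(waypoints[0] - position) < 0.1:
        waypoints.pop(0)
    return waypoints[0]

pos, vel, dt = 0.0, 0.0, 0.05
waypoints = [1.0, 2.0]
for _ in range(600):              # 30 simulated seconds
    target = outer_loop(pos, waypoints)
    accel = inner_loop(pos, vel, target)
    vel += accel * dt
    pos += vel * dt
print(round(pos, 2))              # ends close to the final waypoint
```

The Ai's autonomy layer would sit above even the outer loop here, deciding which waypoints to feed in based on what the vehicle senses.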

The GLARF, mid-flight.

Understandably, testing autonomous control algorithms on a vehicle this big, indoors, can get unwieldy, but that's why John has high school intern Nick Selig belaying it on a tether during flight. 

"The tether is there in case it fails; we can catch it," John said.

Nick tightens the slack as the GLARF gains altitude.

Stay tuned to the Ai blog for more updates on the GLARF! Now that John and his team have it off the ground and running smoothly, there's going to be a lot to keep up with.

Friday, June 24, 2016

2016-06-24: Autonomy Incubator Builds Foundation for More Outdoor Flight Testing

Zak Johns pilots Orange 1 (aka OG-1).
Despite the extreme weather we've been having in coastal Virginia the past few days (or weeks!), the Autonomy Incubator (Ai) teams managed to take advantage of today's sunny morning to get outside and get to work. Today was especially fun because we had four of Langley Air Force Base's air traffic controllers join us to watch our UAVs fly and get an idea of what we're doing over here, since they hear so much about it on the tower radio.

"In order to fly, we have what's called a Letter of Procedure, or LOP, with Langley," CERTAIN (City Environment for Range Testing of Autonomous Integrated Navigation) project manager Jill Brown, who works in close cooperation with the Ai for these outdoor flights, said. "Tethered or untethered, [flying] always requires live communication with the tower."

On the flip side, A1C Brandon Johnson-Farmer told the Ai crew, "When you guys first started flying, we used to get the binoculars out in the tower and have contests for who could spot it." Sadly, we usually don't fly high enough for people on the Air Force base to see us over the trees, so today was the first time any of our friends from the Langley tower had seen an Ai vehicle fly.

Airmen Breanna Bowen, Jonathan Watkins, Caleb Rowles,
and Brandon Johnson-Farmer
The flight itself was similar to the flight we performed two weeks ago, in that it was an opportunity for the PIs involved in developing our PTAM (Parallel Tracking and Mapping) algorithm to gather valuable in-flight data about how good the latest version of their software is at tracking and mapping the UAV's movements without GPS data. Throughout the flight, PI Jim Neilan made sure to explain what was going on to our guests from Langley.

"We have to set the exposure and tune the camera manually," Jim said to the crowd as he pointed to the PTAM display where the algorithm was picking out points and creating gridded maps. Pilot Zak Johns kept the UAV in a gentle hover so that Jim and co-PI Kyle McQuarry had time to monitor the instruments and make adjustments.

Zak and Jill Brown lay out the 100-foot tether before takeoff.
While the test flight was going on, PI Ben Kelley and NASA GIS surveyor Jason Baker paced around the outskirts of the Back 40—the field out by the Gantry where we fly—and used a GNSS (Global Navigation Satellite System) receiver to mark out GPS points for future path-planning tests.

"We're getting high-accuracy GPS points for you guys to use as waypoints," Jason said. "There's a base station over in Building 1238 and another radio on the Gantry. We get 95% confidence from a two-second reading."

"Which is like thiiis much error," Ben added, holding his fingers about three inches apart.

As the Ai's algorithms continue to get more sophisticated and demand more advanced testing, these outdoor tests are becoming increasingly crucial to our mission. Soon, pending approval, we'll be able to liberate our vehicles from their tethers and start using a new geofencing system to contain them during tests, which will enable us to perform even more exciting tests and even—cross your fingers!—demo outside. All of our efforts now are crucial, concrete steps toward achieving our vision of the future for small autonomous UAVs.

Thursday, June 23, 2016

2016-06-21: Autonomy Incubator Showcases Stable of House-Made Research Vehicles

Here at the Autonomy Incubator (Ai), we're more affectionate with our robots than some people are with their pets. Especially when it comes to the ones we made ourselves. Take, for example, the extreme attachment everyone seems to have to Herbie and Herbie-anna-etta-ella, our pair of rovers.

"Herbie is my dream child. He is special to me. He's mine," PI Ben Kelley said. Ben built Herbie in the fall of last year because he wanted to create a safe, efficient way to test control algorithms. "Especially now that we're flying larger UAVs... I made Herbie as a test platform to throw something on him quickly. "

"It's quicker, easier, and safer to test something on a little rover that moves at half a mile an hour," as opposed to flying a fully-rigged UAV every time, he continued.

Intern Kevin French, who has been at the Ai since January, was so inspired by Ben and Herbie that he built a companion rover in Herbie's image: Herbie-anna-etta-ella.

"We couldn't decide on a name," Kevin explained.

Herbie and Herbie-anna-etta-ella were designed with the needs of autonomy researchers in mind. They're each equipped with a large top platform, specially designed for different PIs to install and uninstall their equipment for different tests.

"Herbie keeps getting additions. And subtractions. He's an evolving creature," Ben said. "He used to have a lidar, but now he has an SVO camera on a pole on his head."

In addition, because the rovers are land vehicles, they take less time to set up and last longer on a battery charge than aerial vehicles. The result is faster turnaround and safer research: while UAV test flights mean clearing the flight area and standing behind a net, a researcher can stand directly in front of a rover during a test without worrying about their algorithm going awry.

"With ground vehicles, if they run into each other, it's not gonna be a big deal," Kevin said. This feature is especially relevant to Kevin's research, which focuses on using cellular automata to monitor the behavior of several vehicles at once. I know that's a lot of jargon, so let me briefly explain:

If you ever learned about Conway's Game of Life in computer science class, then you know what a cellular automaton is: a grid of cells that can be in "on" or "off" states, depending on the number of "on" cells surrounding a cell at a given moment. You can play all kinds of games in this grid, and it's a really useful model in machine learning for thinking about how entities interact. If you'd like to play around with Conway's Game of Life yourself, here's a place you can do it.
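Here's a minimal Python sketch of one Game of Life update, just to make the on/off grid concrete:

```python
from collections import Counter

def life_step(live):
    """One Game of Life update on a set of live (x, y) cells."""
    # count how many live neighbors every candidate cell has
    counts = Counter((x + dx, y + dy) for x, y in live
                     for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    # birth on exactly 3 live neighbors; survival on 2 or 3
    return {c for c, n in counts.items() if n == 3 or (n == 2 and c in live)}

# a "blinker": three cells in a row flip between horizontal and vertical
blinker = {(0, 1), (1, 1), (2, 1)}
flipped = life_step(blinker)
print(life_step(flipped) == blinker)  # True: it oscillates with period 2
```

Every cell applies the same local rule at the same time, which is exactly the property that makes cellular automata useful for multi-vehicle coordination.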

A "pulsar" formation in the Game of Life. (source)

Taking cellular automata as a starting point, Kevin creates rules for the UAVs in his simulation based on the activity in the cells around them. He can give them rules to adjust their behavior however he wants— encourage grouping, for example, or promote diversity of motion. In this graphic from his simulation, the vehicles (the black dots) have been told to stay as far away from each other as possible in order to prioritize collision avoidance. The green dot is the centroid, or the center of the fleet.
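As a hypothetical illustration of that kind of rule (my own toy version, not Kevin's algorithm): each vehicle looks at the neighboring grid cells and steps to whichever one puts the most total distance between it and the rest of the fleet.

```python
import itertools

# Hypothetical toy of a cell-based separation rule (not Kevin's code):
# each vehicle moves to the adjacent grid cell that maximizes its total
# Manhattan distance from the other vehicles.

def step(vehicles):
    new = []
    for i, (x, y) in enumerate(vehicles):
        others = [v for j, v in enumerate(vehicles) if j != i]
        moves = [(x + dx, y + dy)
                 for dx, dy in itertools.product((-1, 0, 1), repeat=2)]
        best = max(moves, key=lambda c: sum(abs(c[0] - ox) + abs(c[1] - oy)
                                            for ox, oy in others))
        new.append(best)
    return new

def centroid(vehicles):
    xs, ys = zip(*vehicles)
    return (sum(xs) / len(xs), sum(ys) / len(ys))

flock = [(0, 0), (1, 0), (0, 1)]
flock = step(flock)
print(flock, centroid(flock))  # the vehicles spread apart from one another
```

In this toy, the vehicles scatter symmetrically, so the centroid (Kevin's green dot) stays put even as the fleet spreads out.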

Crazy stuff, right? So, given how complicated multi-vehicle autonomous systems can be, rovers are an obvious choice for early-stage testing. They're slow, they're easy to observe in close quarters, and they're as adaptable as you could ask a vehicle to be. Before Kevin starts trying to manage a flock of micro-UAVs with his algorithm, Herbie and Herbie-anna-etta-ella let him get his research done.

There's an ongoing discussion in the scientific community about machine learning and the role autonomous machines will take in future daily life. Seeing Ben and Kevin's unbridled joy over the Ai's family of research robots is a cheerful peek into a future where man and machine happily coexist.

Wednesday, June 22, 2016

2016-06-22: Autonomy Incubator Seminar Series: Michael Wagner

Today, the Autonomy Incubator (Ai) was thrilled to host another lecturer in the Autonomy Incubator Seminar Series, Michael Wagner. Mr. Wagner is a Senior Project Manager at the National Robotics Engineering Center (NREC) at Carnegie Mellon University, as well as one of the founders of Edge Case Research LLC. His talk was titled, "Developing Trust in Autonomous Vehicles: Strategies for building safer robots."

Ai head, Danette Allen, introduces Michael Wagner.
During his long and illustrious (seriously, he's built robots everywhere from Antarctica to the Atacama desert) career as a roboticist, Wagner has noted three recurring errors that cause autonomous robots to fail: memory leaks, logic bugs, and library instability. As autonomous robots grow more sophisticated and take on more tasks, he has taken up the mission of finding a way to mitigate those errors so that an autonomous robot that looks, say, like this one...

Did I mention he's also built robots for the military?
...always behaves in a safe and predictable way. However, autonomously checking an autonomous robot has obvious challenges.

"You have to basically replicate part of the autonomy algorithm in order to monitor it," he said.

Wagner's solution: create "run-time safety monitors" that take what the autonomy algorithm spits out and check it against other, verified inputs—like, from sensors—before sending it as verified output. If the algorithm's results don't check out, then the safety monitor will catch the error and won't send anything. By working with just the results of the autonomy algorithm instead of tackling the algorithm itself, he saves time and computing power while still monitoring for error.
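To make the "check, don't redo" idea concrete, here's a hypothetical sketch (the command names and the 2-meter threshold are invented for illustration, not Wagner's actual system): a gate that passes the planner's output through only when an independent sensor check agrees with it.

```python
# Hypothetical "safing gate" sketch (not Wagner's implementation): veto
# the autonomy algorithm's output when a simple, independent sensor
# check disagrees, rather than re-running the whole algorithm.

def safing_gate(command, sensed_range_m, min_clearance_m=2.0):
    """Return the command if the check passes, otherwise None (send nothing)."""
    # checking is easier than doing: we don't re-plan, we just veto
    if command == "advance" and sensed_range_m < min_clearance_m:
        return None            # obstacle too close: block the output
    return command             # verified: pass it through

print(safing_gate("advance", 5.0))  # the command is passed through
print(safing_gate("advance", 1.0))  # blocked: nothing is sent
```

The monitor here is far simpler than any planner it guards, which is the whole point: the cheap check sits between the algorithm and the motors.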

"Checking is easier than doing," he summarized.

A problem within this safety-checking system, which is the meat of his current research, is testing and verifying perception. Computer vision is messy to work with in the first place; how is it possible to verify it autonomously? For this, Wagner is training a machine learning algorithm. He shows the algorithm pictures of a certain object (car, bus, pedestrian), then asks the machine learning algorithm to try identifying those objects and corrects it when it makes a mistake. With enough training, the machine learning algorithm can be used to verify the results of the autonomy algorithm.

In short: Yes.
With Wagner's "safing gate" in place, autonomous robots will become easier for humans to trust. Not only because they'll consistently behave in predictable ways, but also because we'll know that their behavior is being monitored and regulated.

"It's not, what is it, Skynet? It's not Skynet," he said, to amused chuckles from the audience.

2016-06-22: Autonomy Incubator Demos for IAOP

PI Jim Neilan discusses our indoor flight range.
Today brought important visitors for the Autonomy Incubator (Ai), as Tzu-Hsien Yen of NASA's Aircraft Management Division, Munro Dearing from the IAOP (Intercenter Aircraft Operations Panel), Langley Research Center (LaRC) Head of UAV Operations Office Tommy Jordan, and LaRC Research Services Director Shane Dover came to Building 1222 for a brief tour and demonstration.

Shane Dover and Tsu-Hsien Yen examine Orange 1.
As the Ai pursues its mission of autonomous flight, we are working with the UAV Ops team to create operational and maintenance procedures that account for how safe, reliable, and benign our UAV research platforms are. Small UAVs are different from manned aircraft—yet manned aircraft is how our quadrotors, Hex Flyers, and octorotors are currently designated. As we move forward, we hope to bring all of aviation along with us by showing that our small UAVs are trustworthy enough to perform less-restricted research flights in the national airspace.

During the demo, intern Deegan Atha presented his object recognition algorithm, which allows an autonomous UAV to distinguish between obstacles and determine the safest way to avoid them. After Deegan wrapped up, PI Jim Neilan gave an overview of the Ai's aerial vehicles and then handed off to our UAV pilot Zak Johns, who flew two different micro-UAVs and one large quadcopter of his own design before answering questions from the visitors.

Zak Johns offers to let one of the visitors hold a UAV to see how lightweight it is.
The focus of the indoor flight demonstration was to show how robust and reliable the vehicles in the Ai are, both in hardware and software.

"All the carbon on these is aerospace-grade carbon," Zak explained to the assembled crowd.

Zak demonstrates the quick-release mechanism on his octorotor.
After Zak's presentation, PI Ben Kelley gave an abbreviated version of the Ai's famous Dances With Drones demo (#DancesWithDrones) to drive home how reliable our path-planning and object-avoidance algorithms are. Overall, our visitors left Building 1222 with a thorough impression of how seriously we take safety and robustness in our UAV research platforms.

Monday, June 20, 2016

2016-06-20: Autonomy Incubator Expands into Virtual Reality

"So, imagine I made a racket and a ball. I could play tennis with this," intern Angelica Garcia explained, miming a backhand motion with one of the controllers she held in each hand.

The Autonomy Incubator's (Ai) new HTC Vive™ had just arrived in the mail, and she was running me through the basics of virtual reality as she set it up in her office. Two infrared "lighthouses" perched on camera tripods on opposite sides of the room collect information about the position of the headset and the two handheld controllers, creating an immersive virtual environment for the wearer.

Angelica demonstrates how to use the Vive.
The Vive, its competitor the Oculus Rift™, and other virtual reality systems have mostly gained steam in the video game sector: players can totally immerse themselves in virtual worlds as they hack their way through hordes of zombies or pilot a WWII fighter plane. However, the increasingly powerful options for virtual reality also hold promise in scientific applications. Angelica, a master's student at the University of Central Florida studying Modeling and Simulation, plans to use virtual reality setups to enable researchers at the Ai to model their algorithms on virtual drones in true-to-life virtual environments.

"Why do that when we have a huge indoor flight range?" I asked.

"Well yeah, we have an indoor flight range here, but we're not just going to fly our UAVs in here all the time," she said. "Ideally, we can use virtual reality to build any terrain we want."

Angelica checks out her virtual environment through the Oculus headset.
Her first step will be modeling the Ai flight range as accurately as possible, using positioning data from the Vicon™ system to ensure that the virtual UAVs behave exactly like the real ones. Then, she'll model the Back 40 — the field in the back of NASA Langley where we do our outdoor test flights— and from there, the possibilities are limitless. The whole point is to use virtual reality to create "unusual" terrain to test our ideas on, like mountains or forests.

"I could build a Mars terrain on here if you wanted," she said.

Angelica designs the terrain on the computer and then views it through the headset.
The 3D model Angelica is developing represents a huge leap forward in computer modeling in the Ai. Until now, we've flown UAVs over 2D models (aka printouts) of terrain to assess algorithm performance, which works fine, but isn't very dynamic. In addition to our new overhead projectors, the virtual reality simulations expand our simulated worlds even further and will give researchers in the Ai a better idea of how their code will behave in the real world.

"We want to get out of being restricted to the areas we're restricted to," she said. "Virtual reality lets us expand our testing range."

Virtual reality also frees the experimenter from the constraints of physics; for example, you can view a test flight from above or follow a vehicle through the air. This gives the added scientific advantage of being really, really cool.

While the task of painstakingly modeling an existing environment seems daunting, Angelica is thrilled to get to work. After graduating with a degree in aerospace engineering from Embry-Riddle Aeronautical University, she spent "a long time" interning at Gulfstream and discovered a passion for computer modeling.

"I realized I didn't want to stay on the mechanical side of aerospace, so I worked on the simulators," she said.  She has been designing models ever since. The Ai is lucky to benefit from her expertise, and we're all excited to see what new capabilities she'll create for us this summer.

Saturday, June 18, 2016

2016-06-17: Autonomy Incubator Installs Overhead Projection System

Interns Josh Eddy and Andrew Patterson lay down white foam tiles to make the "screen."
Latecomers to the Autonomy Incubator (Ai) today did not arrive to find the usual brightly-lit flight range full of industrious engineers, but instead stumbled into what looked like a deleted scene from Tron.  Our new projectors are here!
The team helps the electrician align the projectors.
The projections cover almost half of the flight range, and the new capabilities they bring will advance the Ai's work in an exciting variety of ways. First and foremost, PI Jim Neilan explains, they allow us to project an autonomous UAV's path-planning algorithm onto the environment as it flies, so that observers can see the vehicle making decisions in real time, at scale.

"We can show visitors our work," Jim said. "As in, this is what the algorithm is thinking right now."

Projecting a live feed of an algorithm during a test flight will also increase the speed at which the engineers in the Ai work, Jim said, because it creates an easily-digestible visual representation of what's working and what isn't.

"I think it's going to increase our effective debugging. We're... visual learners," he said. "It really allows us to see what the algorithm is doing."

The new projectors will streamline the way we test path planning and visual odometry algorithms: instead of ordering large-format prints of an environment and taping them onto foam mats, a researcher can just find a high-resolution picture of the pattern or place they want to work with and project it right onto the floor. There are, of course, concerns about shadows, but Jim isn't worried.

"We can do pattern matching and visual odometry—SVO, PTAM— on a large scale on the floor," he explained. "If we can really see what's happening, then we can process that info much quicker than if we were looking at an equation or a bunch of code."

"We can do this if we're smart about how the vehicle moves around in front of the lens," he continued. Further, Ai lead Danette Allen adds, "There are shadows outside on a sunny day. Our systems must be resilient to these types of real world challenges."

A view of the full projection field.
The Ai joins its partner labs at MIT and the University of Illinois Urbana-Champaign in using floor projections: "We've been working on this for about a year... other people are doing this and finding benefit," Jim said.

Jim is personally excited for the path-planning and visual odometry applications the projectors will have for his research in integrating autonomous systems. We'll keep you updated on all the scientific (and silly— this is us, after all) exploits the PIs and interns have with our new projectors right here on the blog.

Thursday, June 16, 2016

2016-06-16: Autonomy Incubator a Smash Hit at AIAA Aviation 2016

The Autonomy Incubator squad in DC
(Anna, Javier, Alex, Jim, Meghan, Lauren, Loc, Danette, Paul)

The Autonomy Incubator (Ai) team may have dominated at AIAA Aviation 2015 last year, but this year set a new standard of excellence for our PIs and interns. We crushed it, America. There's just no other way to say it.

Danette Allen, head of the Ai, presented both on Monday and on Friday of the conference, giving talks on outer-loop needs for autonomous medical transport in remote regions like Alaska and "serious gaming" for ab-initio design of the National Airspace System, respectively.

Danette holds court during her Monday lecture.

On Thursday, Danette also chaired the Ai's special session, entitled "Transformational Flight: Autonomy." Here's what the lineup looked like:

PI Anna Trujillo kicked the session off with a presentation on intuitive controls for non-UAV pilots controlling a fleet of heterogeneous UAVs. Anna is the Ai's resident expert in human-system interaction (HSI) with a focus on encouraging trust and cooperation between humans and autonomous machines.

Intern Meghan Chandarana complemented Anna's talk by presenting her research on using gestural controls for UAVs. Those of you who followed the blog last summer will remember Meghan's gesture-recognition software.

Intern Javier Puig-Navarro followed with a discussion of the work he's doing with trajectory generation and the Ai's atmospheric sampling mission. Javier was one half of the power duo from the University of Illinois Urbana-Champaign (UIUC) who set our in-lab record for most UAVs in autonomous coordinated flight last year as part of the same mission.

Next, PI Loc Tran presented on object detection and classification and its applications in navigation for small autonomous UAVs. Loc's work combines machine learning with computer vision and other sensors (accelerometers, gyroscopes, etc.) to create UAVs capable of navigating and avoiding obstacles on their own.

PI Jim Neilan kept the momentum going with an engaging description of the state of the art and the state of the practice in building autonomous systems. Jim most recently grabbed the Ai spotlight in our outdoor tests this month.

Finally, the Ai wrapped up its special session with a guest appearance from former intern Alex Hagiopol, who flew in from California to present his Region-based 3D Reconstruction approach developed at Georgia Tech in collaboration with the Ai.

Our presence at AIAA Aviation 2016 was hardly limited to our special session, however. PI Paul Rothhaar gave a talk on distributed control for UAVs...

... and intern Lauren Howell presented her research on spline generation and Bezier curves for path-planning algorithms as part of a student paper contest. She gave such a stellar performance that graduate schools are already trying to recruit her to their labs!

The Ai is thrilled to have had the opportunity to present so much of our research portfolio at this AIAA venue. We're also thrilled that all of our interns made it back safely Thursday night, despite the tornadoes and baseball-sized hail that peppered their drives home. Overall, it was a good week to be a part of the NASA LaRC Autonomy Incubator.

2016-06-15: Autonomy Incubator Social Media Team Member Anicca Harriot Makes STEM #relatable

A member of the Autonomy Incubator (Ai) social media triumvirate, Anicca Harriot comes to the Ai with a narrowly focused specialty: short-form content. While I (Abigail) run the blog and Kastan creates the videos, Anicca is the powerhouse behind our Twitter and Instagram presences.

Anicca multitasks at her desk.

Anicca is, quite frankly, the coolest person in the lab. (Except you, Danette.) She's majoring in biophysical sciences with a minor in business communications at Regent University and is a member of NASA Social (an initiative that recruits members of the public as NASA media correspondents). She spent last summer creating and running a popular Instagram account of volunteers' tongues for the Smithsonian's Genome Exhibit.

A photo posted by Anicca (@13adh13) on

Because of her primary interest in biology, Anicca works remotely for the Ai and spends most of her week at her internship in the Physiological Sciences lab at Eastern Virginia Medical School. She comes in every Tuesday to check in and gather/generate content, then tweets from her personal account about what we're doing for the rest of the week. The official Ai account can then retweet her content, effectively hitting two different audiences with the same message.

 With a schedule as busy as hers, one has to wonder: why volunteer for the Ai?

"I don't think there are enough younger people visible in science," she said. After learning about the Ai through an event she covered with NASA Social, she "immediately wanted to get involved" and use her platform as a young woman in STEM to connect her generation with the innovative research at the Ai.

"If you walk up to a young person and ask them what they used most recently on the Web," she explained, "it's not going to be the Women In STEM website. It's going to be Twitter or social media."

Anicca intends to pursue opportunities to integrate her passion for youth outreach with her love for science well into the future, and plans to keep using social media as her method of choice.

Wednesday, June 15, 2016

2016-06-15: Autonomy Incubator Conducts Outdoor Visual Odometry Test

The Autonomy Incubator (Ai) took a major leap toward its package delivery objective last week with the first outdoor test of simulated package delivery. PIs Loc Tran, Jim Neilan, and Kyle McQuarry directed the test.

Pilot Zak Johns and PIs Jim Neilan, Loc Tran, and Kyle McQuarry celebrate.

We used one of our new quad-rotor UAVs, OG1 (short for OranGe 1), for the test, which was its maiden outdoor voyage. For safety reasons— we are testing research-level software on autonomous vehicles, after all— the flight was tethered.

OG1 takes to the sky (on a leash).
Wednesday's test consisted of two short missions. The first mission, designed to test the software pipeline for the sensor payload (a package equipped with an ozone sensor), was a drop-off and pick-up. OG1 autonomously took off, navigated to the drop point using onboard GPS, simulated landing and dropped off the sensor payload, took off again, took a picture of the payload, and then returned to base.
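The drop-off sequence can be pictured as a fixed progression of mission phases. The phase names below are illustrative only — the Ai's actual flight software is not public:

```python
from enum import Enum, auto

class Phase(Enum):
    """Phases of the drop-off mission (illustrative names)."""
    TAKEOFF = auto()
    NAVIGATE_TO_DROP = auto()
    DROP_PAYLOAD = auto()
    RELAUNCH = auto()
    PHOTOGRAPH_PAYLOAD = auto()
    RETURN_TO_BASE = auto()
    DONE = auto()

# The mission as an ordered sequence of phases.
DROP_OFF_MISSION = [
    Phase.TAKEOFF, Phase.NAVIGATE_TO_DROP, Phase.DROP_PAYLOAD,
    Phase.RELAUNCH, Phase.PHOTOGRAPH_PAYLOAD, Phase.RETURN_TO_BASE,
    Phase.DONE,
]

def step(current):
    """Advance to the next mission phase; DONE is terminal."""
    i = DROP_OFF_MISSION.index(current)
    return DROP_OFF_MISSION[min(i + 1, len(DROP_OFF_MISSION) - 1)]
```

A real autopilot would attach completion checks and abort logic to each phase; the point here is just the ordered hand-off from one phase to the next.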

"We took the picture to recognize the area where we dropped the package," Jim Nielan explained.

Recognizing the target area became vital during the second phase, when the GPS data was purposely made "sporadic" and OG1 had to navigate back to the sensor payload and retrieve it by relying on visual odometry.
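One simple way to picture the "sporadic GPS" phase is a source selector that trusts a GPS fix only while it is fresh and otherwise falls back to the visual-odometry estimate. This is a toy sketch with names and thresholds of our own invention; a fielded system would blend the two sources with a filter rather than switch outright:

```python
def fuse_position(gps_fix, vo_estimate, max_gps_age=1.0, now=0.0):
    """Choose a navigation source for the current position estimate.

    gps_fix:     (x, y, timestamp) or None when no fix is available.
    vo_estimate: (x, y) from the visual-odometry pipeline.
    Returns ((x, y), source_name). Illustrative sketch only.
    """
    if gps_fix is not None and now - gps_fix[2] <= max_gps_age:
        x, y, _ = gps_fix
        return (x, y), "gps"
    # Fix missing or stale: fall back to visual odometry.
    return vo_estimate, "visual_odometry"
```

The "fails safely" behavior Jim mentions later is essentially this: when one source degrades, the vehicle still has a usable (if drifting) estimate to navigate by.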

The green pyramid on the bottom of OG1 is the sensor payload.

As far as test flights go, this one was unusual for our lab because of its heavy use of GPS. The Ai designs its UAVs to operate in GPS-denied environments; it's one of the reasons we fly indoors so often. However, this test used global positioning data the same way other tests use Vicon™ data: to establish a ground truth. The stored data from the drop-off flight, combined with the sporadic data during the pick-up flight, allowed the PIs to quantify the drift in the visual odometry algorithm.
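Given a ground-truth track, quantifying visual-odometry drift can be as simple as comparing positions sample by sample. A minimal sketch, assuming both tracks are time-aligned lists of (x, y) coordinates in meters (the Ai's actual analysis pipeline is not public):

```python
import math

def vo_drift(vo_track, gps_track):
    """Per-sample position error between a visual-odometry track and a
    time-aligned GPS ground-truth track, plus the final drift.

    Both tracks are lists of (x, y) positions in meters.
    Returns (errors, final_drift).
    """
    errors = [math.dist(v, g) for v, g in zip(vo_track, gps_track)]
    return errors, errors[-1]
```

Because visual odometry integrates relative motion, its error typically grows with distance traveled, which is why the final-drift number matters for the longer-range flights planned next.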

The second mission focused entirely on the visual odometry pipeline, and involved just a simple fifteen-meter test flight from one GPS point to another using only visual odometry.

Loc and Kyle confer over the data.

About the results of the day's tests, Jim said, "The first part went very well, and the second part also proves that the visual odometry pipeline works and fails safely." Going forward, the Ai's goal is to create a more robust visual odometry algorithm for longer distances.

"Next test, we'll test 100 yards or more with visual odometry," Jim said.

Jim Neilan holds OG1 aloft in triumph.
The results of Wednesday's tests hold great promise not just for the Ai, but also for the entire scientific community: "There's only a very small number of research centers actually flying physical devices to make autonomy in national airspace safer and more reliable," Jim emphasized.

The live feed from OG1's onboard computer vision setup.