Wednesday, June 28, 2017

2017-06-28: Autonomy Incubator Successfully Tests New Multi-Vehicle Interface



Months of work paid off this week when the Autonomy Incubator's Kyle McQuarry launched and landed two UAVs at once with just the push of a button. Known as MUCS, for Multi UAV Control Station, Kyle's interface promises to simplify the Ai's multi-vehicle flight missions by consolidating the controls for all the vehicles onto one screen.

"Basically, I can select one or more UAVs and send the same command to them at the same time," Kyle said of his control station. "When you're trying to control multiple UAVs, it's easier this way– it's more time critical than anything." Of course, you can still control one at a time if you want to.

Think about it: flying multiple vehicles with individual controls would be a logistical nightmare. There would be no way to ensure all the vehicles got the exact same command at the same time, and managing all the controllers at once would take multiple people. With MUCS at its disposal, the Ai can now expand its research missions to include as many vehicles as anyone could possibly want.

"[MUCS] allows you to scale up to n-number of vehicles because realistically, if you're flying one hundred vehicles, you won't have one hundred controllers," Kyle said.

The user interface for MUCS. Note the Ai logo tastefully watermarked
into the background. 
MUCS currently runs on a Windows-based laptop/tablet combo. This first iteration is a success, but it's by no means the final version of what Kyle envisions for the software.

"In general, we're still talking about what features we're going to add," he explained. "Right now, we're thinking about a map view, where you'll see the vehicles and maybe their trajectories." Other proposed features include using the tablet to set waypoints and draw no-fly zones right onto the map.

Because of its ease of use, MUCS has the potential to make multi-UAV missions accessible to people who might not have computer science or aviation experience. While Kyle designed the control station for Ai use, he adds that his creation's broader implications are "a nice side effect."

Thursday, June 22, 2017

2017-06-22: Autonomy Incubator Intern Kastan Day Takes Up 3DEEGAN Mantle


Kastan Day had his first internship with the Ai last year, when he made up the video production half of the social media team. Now, he's back from his freshman year at Swarthmore and making a glorious return— not as a videographer, but as a computer scientist.

"Yeah, this is pretty different," he said, pecking at the command line on his computer. With only a year of college under his belt, he's hitting the ground running as an intern. "Progress in the real world is way different than progress at school," he added.

Kastan's work this summer follows the work that Ai member Loc Tran and intern Deegan Atha did last summer, a computer vision and deep learning effort playfully named 3DEEGAN. Here's a helpful video Kastan made on the subject last summer:



While the Ai's computer vision work so far has focused on letting UAVs identify objects in real time— recognizing a tree in the field of view and changing course to avoid it, for example— Kastan is taking a more targeted approach. He's developing a system of unique markers that the computer vision algorithm can recognize instantly, sort of like a barcode at the grocery store. The cash register doesn't have to visually recognize your Twix bar based on its size, shape, and features; it just scans the barcode and matches the pattern up with the one that represents "Twix bar" in its library. By applying unique markers of a known size to objects in the UAV's field of view, tasks like identification, distance calculation, and pose estimation become much, much easier.
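The Ai's marker set is its own design, but OpenCV's ArUco module is built on the same barcode-like idea and gives a feel for how it works. Here's a rough sketch, assuming opencv-contrib-python with the classic (pre-4.7) aruco API, a webcam, and made-up calibration values:

```python
# Rough sketch of fiducial-marker detection and pose estimation with OpenCV's
# ArUco module. This illustrates the general technique, not the Ai's actual
# marker system; it assumes opencv-contrib-python (classic aruco API) and a
# calibrated camera.
import cv2
import numpy as np

dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
params = cv2.aruco.DetectorParameters_create()

# Camera intrinsics would normally come from a calibration step; these are
# placeholder values.
camera_matrix = np.array([[800.0, 0.0, 320.0],
                          [0.0, 800.0, 240.0],
                          [0.0, 0.0, 1.0]])
dist_coeffs = np.zeros(5)
MARKER_SIZE_M = 0.10   # printed marker edge length in meters (the "known size")

cap = cv2.VideoCapture(0)  # webcam, as in Kastan's tests
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    corners, ids, _ = cv2.aruco.detectMarkers(gray, dictionary, parameters=params)
    if ids is not None:
        # Because the marker's physical size is known, each detection yields a
        # full pose (rotation + translation): identity, distance, and
        # orientation all at once.
        rvecs, tvecs, _ = cv2.aruco.estimatePoseSingleMarkers(
            corners, MARKER_SIZE_M, camera_matrix, dist_coeffs)
        cv2.aruco.drawDetectedMarkers(frame, corners, ids)  # green outlines
        for tvec in tvecs:
            print("distance to marker (m):", float(np.linalg.norm(tvec)))
    cv2.imshow("markers", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```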

Kastan holds the webcam up to a screen full of markers to test if the computer
vision algorithm recognizes them.

What the machine sees is in the window on the left. See the green outlines
around all the markers? The algorithm works!
"[The markers] are easier to recognize if the search space is smaller, so we only made sixty-four of them as opposed to two hundred or a thousand," Kastan said.

Now, to make sure his system is as efficient and as accurate as possible, he has to determine what kind of grid to use when he generates the markers.

"The three-by-threes give a lot of false positives, but the eight-by-eights are hard to identify quickly," he explained. "We're looking for a solution in between."

To do the time-consuming work of printing out and testing each kind of grid, from three-by-three to eight-by-eight, Kastan has enlisted the help of the Ai's three high school volunteers. This summer's class of volunteers includes Ian Fenn from York High, Dylan Miller from Smithfield High, and Xuan Nguyen from Kecoughtan High. Their job entails printing out a test sheet of markers and then moving it around in front of the camera to see how reliably the algorithm detects them.

"We're seeing how many times they're identified and how many times we get false things," Ian said. "We're also seeing if the larger patterns are more easily identifiable than the small ones."

Ian, left, and Dylan, right, move a sheet of five-by-five grid markers farther away
from the camera rig to see at what distance the algorithm stops identifying them.
Xuan also has an additional relationship to the Ai beyond her scientific contributions.

"I'm more into engineering, but I also like computer science," she said. "My uncle Loc said every engineer needs to learn how to code."

Xuan takes notes about each grid's performance at different distances.
While the high school interns work, Kastan's next steps include "setting up marker detection in a simulated Gazebo environment inside a ROS framework." Basically, he's simulating how the algorithm will behave in the real world.
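As a rough idea of what that looks like in practice (not Kastan's actual node), a bare-bones rospy node can subscribe to the camera topic that a simulated Gazebo camera plugin publishes and run the detector on every frame. The topic name and dictionary below are assumptions:

```python
#!/usr/bin/env python
# Bare-bones ROS node of the kind Kastan describes: subscribe to a simulated
# camera feed coming out of Gazebo and run marker detection on each frame.
# The topic name and dictionary are assumptions for illustration.
import rospy
import cv2
from cv_bridge import CvBridge
from sensor_msgs.msg import Image

bridge = CvBridge()
dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
params = cv2.aruco.DetectorParameters_create()

def image_callback(msg):
    frame = bridge.imgmsg_to_cv2(msg, desired_encoding="bgr8")
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    corners, ids, _ = cv2.aruco.detectMarkers(gray, dictionary, parameters=params)
    if ids is not None:
        rospy.loginfo("detected markers: %s", ids.flatten().tolist())

if __name__ == "__main__":
    rospy.init_node("marker_detector")
    # A typical Gazebo camera plugin topic; the real name depends on the model.
    rospy.Subscriber("/camera/image_raw", Image, image_callback, queue_size=1)
    rospy.spin()
```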

Thursday, June 15, 2017

2017-06-15: Autonomy Incubator Intern Javier Puig-Navarro Expands Collision Avoidance Algorithm

Javier explains non-Euclidean geometry to me, someone who had to take
Appreciation of Math in college because I didn't make the cut for calculus.

UIUC PhD candidate Javier Puig-Navarro is the professor of the Autonomy Incubator's summer intern squad. Over the four summers he's returned to the Ai, he's become the go-to person for everything from math to physics to MATLAB questions because he's such a clear, patient teacher. Today, when Javier called me over on my way to the coffee pot, I knew I was about to do some serious learning.

"So, you remember my GJK algorithm that I started working on last summer, right?" he said.

"Yeah, Minkowski Additions and stuff," I replied, eloquently.

Javier's work at the Ai concentrates on collision detection and avoidance, using polytopes and a tricky bit of math called the Minkowski Addition to determine if two UAVs will collide. If the polytopes representing the two vehicles intersect, then that means they're on a collision course. It's completely explained in the blog post from last summer, which I'll link again here.
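For the curious, here's the brute-force version of the test that GJK performs cleverly: two convex polytopes intersect exactly when their Minkowski sum of A with the negation of B (their Minkowski difference) contains the origin. This sketch just builds the whole difference and checks it, which GJK avoids doing; it illustrates the idea and is not Javier's implementation:

```python
# Brute-force sketch of the test GJK performs cleverly: two convex polytopes
# intersect exactly when their Minkowski difference A + (-B) contains the
# origin. GJK avoids building the whole difference; this sketch just builds it.
import numpy as np
from scipy.spatial import Delaunay

def polytopes_intersect(verts_a, verts_b):
    """verts_a, verts_b: (n, 3) arrays of polytope vertices."""
    # Minkowski difference: every vertex of A minus every vertex of B.
    diff = (verts_a[:, None, :] - verts_b[None, :, :]).reshape(-1, 3)
    # The difference of convex sets is convex, so it contains the origin
    # iff the origin lies inside the convex hull of these points.
    hull = Delaunay(diff)   # degenerate (flat) inputs may need qhull's "QJ" option
    return hull.find_simplex(np.zeros(3)) >= 0

# Two unit cubes, one shifted along x: overlap vs. no overlap.
cube = np.array([[x, y, z] for x in (0, 1) for y in (0, 1) for z in (0, 1)], float)
print(polytopes_intersect(cube, cube + [0.5, 0.0, 0.0]))   # True
print(polytopes_intersect(cube, cube + [3.0, 0.0, 0.0]))   # False
```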

Javier's algorithm will also move one of the shapes so that they no longer overlap, thereby avoiding the collision. When it does this, he wants to make sure his algorithm chooses which direction to correct a collision in a truly random way— not favoring displacing the polytope upwards over displacing it sideways, for example. It's called removing algorithm bias, and Javier assures me it's very hot right now.

As he works on removing his algorithm's bias, Javier needs a way to visualize distance and direction, to make sure his algorithm is choosing completely at random. To do that, he built a program that creates 2D and 3D balls of a given radius, which is what he called me over to take a look at. There's about to be a lot of math happening, but it's cool math! Ready?
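Javier's visualizer is his own code, but a rough Python stand-in for the 2D version looks like this: pick a norm, then plot every point whose distance from the center under that norm is exactly one.

```python
# Loose Python stand-in for the kind of visualization Javier built: draw the
# 2D unit "ball" for several norms by plotting all points x with ||x||_p = 1.
import numpy as np
import matplotlib.pyplot as plt

theta = np.linspace(0, 2 * np.pi, 400)
directions = np.stack([np.cos(theta), np.sin(theta)], axis=1)

for p, label in [(1, "1-norm (diamond)"), (2, "2-norm (circle)"),
                 (3, "3-norm"), (np.inf, "infinity norm (square)")]:
    # Scale each direction so its p-norm is exactly 1.
    radii = 1.0 / np.linalg.norm(directions, ord=p, axis=1)
    ball = directions * radii[:, None]
    plt.plot(ball[:, 0], ball[:, 1], label=label)

plt.gca().set_aspect("equal")
plt.legend()
plt.title("Unit balls for different norms")
plt.show()
```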

This sphere and circle are generally what we think of when we hear the word "ball." They're defined by Euclidean distance— each of the points on the circumference is the same distance away from the center point if you use this formula:

‖x‖₂ = √(x₁² + x₂² + ⋯ + xₙ²)  (the Euclidean norm)

 This is the geometry you learned in ninth grade. However, it is by no means the only definition of a mathematical "ball." Look at this other visualization from Javier's algorithm.


This diamond thing is also a ball, according to math, because all of the points are equidistant from the center when you calculate them this way instead of the Euclidean way: 

‖x‖₁ = |x₁| + |x₂| + ⋯ + |xₙ|  (the sum norm)

As you can see, this is called the sum norm, or norm 1. The Euclidean norm, predictably, is called norm 2. So, norm 1 is a diamond and norm 2 is a sphere. There is also an infinity norm, and it looks like this:
 A cube! That's also a ball in this case, if you calculate the distance from the center like this:

‖x‖∞ = max(|x₁|, |x₂|, …, |xₙ|)  (the infinity norm)


Those are the three main norms: 1, 2, and infinity. However, there are infinitely many norms in between them. Like, what would norm 3 look like?




A cube with rounded edges, because it's on the spectrum between a sphere (2) and a cube (infinity). So really, distance can be visualized in infinite ways, on a sliding scale from diamond to sphere to cube. Isn't that the wildest?
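For the record, every ball in this post comes from the same family of formulas, the p-norm; the diamond is p = 1, the sphere is p = 2, and the cube is what you get in the limit as p goes to infinity:

```latex
\|x\|_p = \bigl( |x_1|^p + |x_2|^p + \dots + |x_n|^p \bigr)^{1/p},
\qquad
\|x\|_\infty = \lim_{p \to \infty} \|x\|_p = \max_i |x_i|
```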

"So, you need all of these to get rid of your algorithm bias?" I asked.

"No, I actually only need the 2 norm," Javier said. "I just thought this was a good conceptualization for you."

The Autonomy Incubator and Javier Puig-Navarro: bringing you a college-level math lesson to go with your regularly scheduled UAV content. Thanks, Javier!


2017-06-07: Autonomy Incubator Wows Again At AIAA Aviation Special Session

The squad celebrates another successful year at AIAA Aviation.

If AIAA Aviation were the Super Bowl, the Autonomy Incubator would be polishing several rings right now. The success we had in our first special session at Aviation 2015 led to us receiving even more exposure for our smash hit presentations at Aviation 2016, and at Aviation 2017 last week, the hype continued to build-- even if our papers weren't named after Star Trek references this time. Our team presented on the full breadth of the Ai's research, from human-machine interaction to time-coordinated flight, and dominated Wednesday's special session on autonomy.

Dr. Danette Allen, Ai head and NASA Senior Technologist for Intelligent Flight Systems, gave the Ai's first presentation of the week on Wednesday morning during the Forum 360 panel on Human-Machine Interaction (HMI). She presented on ATTRACTOR, a new start that leverages HOLII GRAILLE (the Ai's comprehensive mission pipeline for autonomous multi-vehicle missions) and seeks to build a basis for certification of autonomous systems with trust and trustworthiness via explainable AI (XAI) and natural human-machine interaction. Given the proposed emphasis on HMI and "trusted" autonomy, we are excited to begin work on ATTRACTOR (Autonomy Teaming & TRAjectories for Complex Trusted Operational Reliability) in October 2017.

Danette Allen at Aviation 2017
That afternoon brought the special session on autonomy, which featured a roster of seven NASA Langley scientists-- most of whom were members of, alumni of, or otherwise affiliated with the Ai. Danette opened the session with an overview of the Ai science mission. Jim Nielan followed with a discussion of using visual odometry (VO) in the field. Look here for examples of Jim testing one of our VO implementations.

Jim Nielan
Next up, Loc Tran presented on his computer vision work as it applies to precision landing and package detection for autonomous UAVs. Loc's other computer vision work has focused on obstacle avoidance for vehicles flying under the forest canopy, which we affectionately call "tree-dodging." This summer will hold some exciting updates from his collaboration with MIT, so stay tuned!

Loc Tran
Javier Puig-Navarro, a PhD candidate at the University of Illinois at Urbana-Champaign (UIUC) and a summer fixture at the Ai, followed with his work on time-coordinated flight. We did an in-depth profile on Javier's work here on the blog last summer, which will not only give you an idea of what he's doing but also let you see what a delightful person he is.

Javier Puig-Navarro
Fellow intern and PhD candidate at Carnegie Mellon University (CMU), Meghan Chandarana, was up next, talking about gesture recognition as a way to set and adjust flight paths for autonomous UAVs. Meghan's work in gesture controls is as innovative as it is fun to watch, and can be seen in action in the HOLII GRAILLE demo video from last summer.

Meghan Chandarana
Erica Meszaros, a former intern pursuing her second Master's degree at the University of Chicago, followed Meghan with a presentation about her amazing natural-language interface between humans and autonomous UAVs. As you know if you've been reading this blog, Erica and Meghan have worked closely to combine their research into a multi-modal interface for generating UAV trajectories. If you're curious about the results of their joint work, we recommend watching their exit presentation from last summer.

Erica Meszaros
Finally, NASA Langley's own Natalia Alexandrov wrapped up the special session with a presentation on "Explainable AI", an important facet of building trust in human-machine interactions. Natalia and Danette are co-PIs on the ATTRACTOR project, and it was a privilege to have Natalia point to the future and close out the special session on autonomy.

Natalia Alexandrov
Crushing it so hard at Aviation is the perfect way to kick off a summer full of fast-paced, groundbreaking science. Make sure to follow us not just here on the blog, but on Twitter, Instagram, and Facebook as well– you don't want to miss anything, do you?