Wednesday, August 31, 2016
2016-08-31: Meghan Chandarana and Erica Meszaros Summer 2016 Exit Presentation
Meghan Chandarana is a PhD student in mechanical engineering at Carnegie Mellon University in Pittsburgh who first joined the Ai as an intern last summer. Erica Meszaros, who has been with the Ai as an intern since January, recently received a Master's degree from Eastern Michigan University and is continuing her studies at the University of Chicago. Together, they presented the results of their research developing a natural language interface that lets non-experts manage autonomous UAVs, part of this summer's HOLII GRAILLE demo.
Meghan created a gesture node for the system, while Erica created a speech node; both could create flight commands independently or in cooperation with each other. In their user study, the results of the speech and gesture nodes were compared against user performance on a mouse-based interface, something they knew subjects would be familiar with and comfortable using.
The results of their user study were promising, as you'll see, and both Meghan and Erica foresee advancements and applications for this summer's research.
Wednesday, August 10, 2016
2016-08-10: Gale Curry Summer 2016 Exit Presentation
Gale Curry from Northwestern University focused on flocking and formation flight during her summer with the Autonomy Incubator. She was challenged to enable a fleet of micro UAVs to fly through an aperture, like a window, and then reassemble on the other side. Equipping teams of small vehicles with this kind of adaptability and agility in the air could support search and rescue, earth science data collection, and disaster assessment, just to name a few missions!
Gale may be rejoining the Ai team in January, and we are excited to see what amazing things she does next!
2016-08-10: Kastan Day Summer 2016 Exit Presentation
Kastan Day, the videography half of the Ai's social media powerhouse (ahem ahem), presented on his accomplishments and milestones from the summer. His highlights included creating a lower-third animation for introducing interview subjects, establishing our YouTube channel, and, in his words, "telling the story of the Ai." Kastan and his videos enriched the Ai's web content and freed up my writing in ways I never expected— he was a driving force in making this summer the most popular period in the history of the Ai's social media presence. He intends to study artificial intelligence as a freshman at Swarthmore College this fall.
2016-08-10: Angelica Garcia Summer 2016 Exit Presentation
Angelica, a Master's student at the University of Central Florida, came to the Ai with the goal of incorporating virtual reality into our user interface pipeline. As one of the stars of the HOLII GRAILLE team, she presented the immersive virtual reality environment she built for the demo, which lets operators preview the planned flight path before the vehicle ever takes off.
2016-08-10: Jeremy Lim Summer 2016 Exit Presentation
Jeremy Lim presented his work as the systems integration specialist for the HOLII GRAILLE team to wrap up his second internship at the Ai. He provided an in-depth explanation of the HOLII GRAILLE project before describing how he integrated the widely different elements of the demo— gesture recognition, trajectory generation, virtual reality, and autonomous flight— into one continuous pipeline using our in-house network, AEON (Autonomous Entities Operations Network) and DDS (Data Distribution Service).
2016-08-10: Lauren Howell Summer 2016 Exit Presentation
Lauren Howell, an Aero Scholar joining us from the University of Alabama, ended her second summer at the Ai with a presentation on her risk assessment and trajectory generation research. Expanding on her work with Bezier curves from last summer, Lauren worked in tandem with Javier Puig-Navarro as the HOLII GRAILLE specialists on trajectory generation and collision avoidance. Highlights of her accomplishments included using Pythagorean hodographs to augment her trajectory generation algorithm and traveling to the Netherlands to attend the Airbus Airnovation conference.
2016-08-10: Josh Eddy Summer 2016 Exit Presentation
Josh, a Master's student at his alma mater, Virginia Tech, presented on his research in visual-inertial navigation and state estimation in GPS-denied environments using unscented Kalman filters. If you want a brilliant yet accessible explanation of state estimation, Kalman filtering, or GPS-denied navigation, watch this video. As I write this, Meghan, Nick, and Gale are talking about what an effective teacher Josh is.
"I don't feel like I just watched a half-hour lecture on Kalman filtering," Nick said.
"Yeah, he'd be a great professor," Meghan added.
Josh did super cool, innovative work and he's great at explaining it. We're going to miss him and his stage presence a lot around here.
"I don't feel like I just watched a half-hour lecture on Kalman filtering," Nick said.
"Yeah, he'd be a great professor," Meghan added.
Josh did super cool, innovative work and he's great at explaining it. We're going to miss him and his stage presence a lot around here.
2016-08-10: Deegan Atha Summer 2016 Exit Presentation
Deegan Atha, a rising senior at Purdue University, wrapped up his second semester in the Ai with a presentation on his computer vision research. Deegan's most prominent accomplishment is 3DEEGAN, the object detection and classification system he created to run onboard the Ai's UAVs and allow them to instantaneously survey and recognize their surroundings. He used a convolutional neural network to detect objects in the field of view, and even trained it on a homemade image library of pictures he took of the Ai's vehicles and people.
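At its core, Deegan's detector runs the same loop as any camera-based recognition system: grab a frame, push it through the network, and draw boxes around whatever comes back with enough confidence. Here is a rough, hypothetical sketch of that loop in Python using a stock torchvision detector as a stand-in; his actual network, training data, and thresholds were all custom, so treat every name and number below as an assumption.

# Hypothetical sketch of frame-by-frame object detection, not Deegan's actual network.
# A stock torchvision detector stands in for his custom-trained CNN.
import cv2
import torch
from torchvision.models.detection import fasterrcnn_resnet50_fpn

model = fasterrcnn_resnet50_fpn(weights="DEFAULT").eval()

cap = cv2.VideoCapture(0)            # camera feed (assumed device index)
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
    tensor = torch.from_numpy(rgb).permute(2, 0, 1).float() / 255.0
    with torch.no_grad():
        detections = model([tensor])[0]
    for box, score in zip(detections["boxes"], detections["scores"]):
        if score < 0.5:              # confidence threshold (arbitrary choice)
            continue
        x1, y1, x2, y2 = box.int().tolist()
        cv2.rectangle(frame, (x1, y1), (x2, y2), (0, 255, 0), 2)
    cv2.imshow("detections", frame)
    if cv2.waitKey(1) == 27:         # Esc to quit
        break
cap.release()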
2016-08-10: Jacob Beck Summer 2016 Exit Presentation
Jacob Beck leaves the Autonomy Incubator to begin pursuing his PhD in robotics at Oregon State University. During his time here, he became known for his dazzling demonstrations of his soft gripper project and his Spider Bot autonomous mobile gripper. In addition, he and intern Nick Woodward collaborated on the design for the package delivery system currently in use on our Orevkon drone for our outdoor science mission.
2016-08-10: Javier Puig-Navarro Summer 2016 Exit Presentation
Javier ended his third summer with the Ai in triumph, not only in his individual research but also as a vital part of the HOLII GRAILLE demo. Javier devotes most of his efforts to flight trajectories: generating them, coordinating them between multiple vehicles, checking them for collisions, replanning them to avoid collisions, and then mathematically proving them correct. We're grateful to benefit from his expertise, and we can't wait to have him back in the Ai next time!
Wednesday, August 3, 2016
2016-08-03: NASA LaRC Hosts AIAA Intelligent Systems and Technology Conference, Day 1
Director Crisp, Dr. Simmons, and Dr. Casbeer listen as Dr. Borener answers a question. |
NASA LaRC is thrilled to host the three-day AIAA ISTC Workshop this week, where we've welcomed guests from all over the world to come discuss intelligent systems and autonomy in aeronautics. The workshop was off to an auspicious start with a keynote on Unmanned Disaster Response from Dr. Robin Murphy, followed by a panel on "Lessons Learned from Government Agencies" featuring LaRC Director of Systems Analysis and Concepts Vicki Crisp and moderated by our own fearless leader, Ai head Dr. Danette Allen.
Danette announces that the panel is open for questions. |
The four-member panel consisted of Director Crisp from NASA, Dr. David Casbeer from the Air Force Research Laboratory (AFRL), Dr. Sherry Borener from the Federal Aviation Administration (FAA), and Dr. Reid Simmons from the National Science Foundation (NSF). Each of them presented on aspects of autonomy and flight from their agencies' perspectives before Danette opened the floor to questions from the audience.
Hearing the differences in approaches to autonomy between agencies was illuminating, and made for an excellent discussion. For example, Dr. Borener's comments focused on the importance of developing communication abilities for unmanned vehicles in the national airspace.
"The problem is that it has to blend into the airspace with other, manned vehicles," she said.
Dr. Borener makes a point about flight safety and autonomous aircraft in the NAS. |
By contrast, Dr. Casbeer emphasized the continuing relevance of controls research even as autonomy becomes more advanced. While some in aviation view autonomy as a replacement for controls, he argued that controls must and will remain important for autonomy to succeed, especially in the inner loop, at the lowest levels of programming.
"[People] say control is dead... it's like a spoon in the heart," he joked.
Dr. Casbeer addresses the audience. |
The discussion also expanded beyond the purely scientific and into the larger constructs surrounding autonomy research. An audience member asked Director Crisp about the federal government's hesitance to fund what it calls "duplication studies," even though replication is a key part of the scientific process; she answered from her experience as a "self-proclaimed bureaucrat."
"That's a great question, by the way. We're having these conversations not only within our own agency, but with Congress, OMB, and others," she said. "It's them asking, 'Explain to me why this is important'... We're trying to help inform and educate."
Director Crisp offers insight into communicating NASA's goals across the federal government. |
Finally, at the end of the panel, Danette asked a "lightning round question" that she had been asked herself at the DoD HFE TAG held recently at NASA LaRC: "If you were king or queen, and had all the money in the world, what problem in autonomy would you solve?"
Dr. Simmons had an immediate answer: "My tall pole would be having intelligent systems understand their limitations."
2016-08-01: Autonomy Incubator Interns Join Forces, Create Autonomous Spider Bot
Deegan Atha and Jacob Beck rehearse for their new spot in our demo lineup. |
Look at this friendly little guy find and hug that ball! |
First of all, if you haven't read about Deegan's and Jacob's work, I'd suggest following the links at the top of the article to catch up— according to Jacob, "the best way to understand what we're doing is to talk about the individual projects first." Although his work and Deegan's are in disparate fields, Jacob emphasizes that the computer vision software and the Spider Bot were actually very easy to integrate.
"Getting our software to talk to each other wasn't all that difficult," he said. "Deegan's sends mine the coordinates and then the robot acts on them."
Jacob tunes up the Spider Bot between tests. |
"The neural net is still running on the same computer, with the lidar and everything. We're just switching the video feed to the Spider Bot," Deegan added. His software runs on a computer onboard the UAV, and uses a forward-facing camera during regular operation. For Spider Bot operations, the video feed will switch to the on-gripper camera.
"We're just changing the view of the neural network," he said.
"We're just changing the view of the neural network," he said.
Deegan gets the lidar and computer vision algorithm ready to go. |
The main objective of integrating Deegan's software with Jacob's gripper is to enhance the Spider Bot's level of autonomy, thereby taking many of the burdens of package delivery off of the UAV itself. Right now, picking up a package involves challenges with precision landing. Those challenges are often met with workarounds like infrared sensors, but that means adding structure to the environment, which is something we don't want to do in the Ai. An autonomous mobile gripper changes a lot about how we approach package pick-up.
"One of the things with the vision system is that you don't need any visual markers. And with this, we don't need precision landing," Deegan explained. Once deployed, the Spider Bot gripper will find the package, lock onto it and move towards it, then lift its front legs and grab the package before winching itself back up to the UAV.
A Spider Bot's eye view of the author's feet. Luckily, it didn't grab me. |
"It looks like a hunting spider; did you draw inspiration from real spiders for this?" I asked Jacob after he and Deegan wrapped up today.
"I guess it does! Kind of, yeah," he said. "I didn't look at pictures or anything, but it does work like that."
Friday, July 29, 2016
2016-07-29: Autonomy Incubator Takes Tree-Dodging Into The Wild
Okay, so it's not technically tree-dodging now that we're dodging a 20-foot pole, but the concept remains the same: we're using computer vision and lidar to avoid obstacles in real time. And we're really, really good at it.
PIs Loc Tran, Ben Kelley, Jim Neilan, and Kyle McQuarry went to the Back 40 this morning with the goal of testing this part of the pipeline, to roaring success. The UAV flew back and forth over a dozen times, deftly avoiding the pole in the middle of its flight path without a single failure.
The transition to the outdoors is especially exciting when you consider that there's no map and no fiducials involved here, which means that the algorithm has no outside assistance in doing its job and nothing shining bright to look for. There is no extra data we could feed it if it starts to fail. Once that UAV is in the air and on a collision course with the pole, it has to use its autonomous capabilities to detect the obstacle and replan its flight path mid-flight. And it does. It succeeds every time, in a predictable and safe manner.
The next challenge, now that we've conquered stationary obstacles, will be the ultimate in collision avoidance and a highlight of our upcoming outdoor demo: detecting and avoiding another UAV that enters the airspace. Imagine this, but outside and thirty feet in the air:
Congratulations to Jim, Kyle, Ben, and Loc on an amazing end to a packed week!
Wednesday, July 27, 2016
2016-07-27: Autonomy Incubator Intern Jacob Beck and His Marvelous Magical Spider Bot
You saw them steal the show in the Autonomy Incubator final review; now get to know them: Jacob Beck and his creation, the Spider Bot.
The basics of Jacob's design draw upon tried-and-true principles in robotics, such as the design of the leg joints and the six-legged, alternating tripod method of locomotion. However, he's creatively blending these building blocks with computer vision, autonomy and UAVs to create a novel solution for the Ai's package delivery mission.
"Once [the Spider Bot] finds the object, it will use its legs as, instead, fingers, and close around the object and winch itself back up," Jacob said.
The current approach to autonomous package pick-up depends heavily on a precise landing from the UAV, so that a fixed mechanism can grab onto the package. However, autonomous precision landing is tricky in the best research conditions and incredibly difficult in the real world— in fact, we've got an intern who's spending his summer focusing exclusively on precision landing capabilities. Jacob's idea for a mobile, autonomous robot gripper eases our reliance on precision landing by allowing the UAV some room for error when approaching a package.
"An issue we face right now is getting a drone to land precisely over the target," he explained. By using this kind of robot, we hope to greatly expand the area in which the drone can work."
Tuesday, July 26, 2016
2016-07-26: Autonomy Incubator Makes Data-Denied Breakthrough
Yesterday, the Autonomy Incubator (Ai) team assembled in the flight range to watch history being made: the Ai's first completely data-denied autonomous flight. Just a quadrotor, a camera, and a visual odometry algorithm keeping the whole thing on path. No positioning system in sight (literally and figuratively!).
"What was interesting is that yesterday, we had no GPS or [indoor GPS emulator] Vicon™ to help us. It was just the visual odometry, and it handled very well," PI Jim Neilan said. Jim has been working on a data-denied navigation solution with the Ai for years, and yesterday's success was a massive validation for him and his team.
Here's the quick-and-dirty on how the Ai does visual odometry. We use a downward-facing global shutter camera to collect information on where the UAV is. The "global shutter" bit is really key here— most cameras have rolling shutters, which means that only one row of pixels gets exposed to light at a time. Rolling shutters are fine for most things, but when they're in motion, they cause a lot of aliasing and wobbling that makes visual odometry next to impossible. A global shutter camera exposes the entire frame of the picture at once, making for a faster, more reliable source of visual data.
"I'd say it's around forty to fifty frames per second, on average," Jim said. "We're using a very powerful global shutter camera."
The data from the camera (as well as from other sensors, like IMUs and barometers) gets fed into an algorithm called PTAM: Parallel Tracking And Mapping.
"It's actually based on an augmented reality thing designed for cell phones in 2008 by some guys at Oxford," Jim said. The basic idea behind PTAM is creating a grid map of the environment (the mapping) and then using translations to track where the camera moves in that environment (the tracking). These things happen simultaneously, so they're parallel. See what I'm saying? Here's the original paper if you're intrigued.
An aside: augmented reality is also the thing that lets Pokémon Go put little anime animals into your surroundings. So, the next time you throw a Poké Ball at a Charmander on your kitchen table, remember you're using the same technology that's revolutionizing autonomous flight!
We've been using PTAM for a while in both our indoor and outdoor tests, but yesterday's test was exciting because there was no external data source to correct the drift in the algorithm, and it still performed beautifully. Watch the video; doesn't that flight path look buttery smooth? Personally, I didn't realize that they'd switched the algorithm on until I looked over at Jim and saw his hands weren't moving the controls.
With a successful first flight in the logbooks, Jim says they have three concrete goals moving forward.
"We have to do a couple things. We need to clean the code up and make it robust to failure. We have to put hooks in the code so we can inject corrections periodically. And we need to fly it outside."
Stay tuned!
Monday, July 25, 2016
2016-07-25: Autonomy Incubator Amazes and Delights in Final Incubator Review
The HOLII GRAILLE team celebrates post-demo. |
After nearly two hours of presentations, Danette led the crowd into the flight range to show the audience what we can do firsthand. First on the bill was HOLII GRAILLE, the sim-to-flight human factors/path planning/virtual reality demonstration that showcases the full pipeline of technology for our atmospheric science mission. In true Ai style, it went beautifully, from the gesture recognition all the way through the live flight.
Angelica Garcia explains her VR environment as Meghan Chandarana demonstrates how to navigate it. |
Next came intern Deegan Atha and his computer vision research. Using a webcam mounted to a bench rig (a faux-UAV we use for research), he moved around the flight range and let his algorithm recognize trees, drones, and people in real time.
Look at all those beautiful bounding boxes. |
Jim Neilan and Loc Tran led a live demonstration of our GPS-denied obstacle avoidance capabilities. Out of an already stellar lineup, this was the big showstopper of the morning: we had one of our Orevkon UAVs (a powerful all-carbon-fiber quadrotor) fly autonomously, drop off a sensor package, fly back to its takeoff point while avoiding another UAV we lowered into its path on a rope, go back and pick up the package, return to its takeoff point again, and land.
Jim Neilan explains the flight path. |
The Orevkon, precision landed. |
Finally, intern Jacob Beck wrapped up the morning with a demonstration of his soft gripper and Spider Bot projects. The Spider Bot, as always, was a crowd-pleaser as it skittered across his desk and grabbed a toy ball, but the soft gripper also elicited some murmurs of admiration.
The Spider Bot descends. |
Overall, we delivered a complex and innovative demonstration of our multifaceted capabilities on Friday, and we're proud of all of our researchers and interns who logged late nights and early mornings to make it possible. So proud, in fact, that we ran it again this afternoon for the LASER group of young NASA managers!
Jim and Josh Eddy discuss the on-board capabilities of our Orevkon UAV. |
Lauren Howell discusses Bezier curves as her flight path generates in the background. |
Thank you and congratulations to all our team for their hard work and brilliance!
Tuesday, July 19, 2016
2016-07-19: Autonomy Incubator Gears Up For HOLII GRAILLE Demo
Lauren Howell and Meghan Chandarana, supervised by Anna Trujillo, measure the projection of the flight path to ensure it's to scale. |
Jeremy's nickname is Indiana Drones, or Dr. Drones if he's being especially brilliant. |
What, exactly, does HOLII GRAILLE stand for?
"Hold on, I've got it written down somewhere," Meghan Chandarana said when I asked. Officially, HOLII GRAILLE is an acronym for Human Operated Language Intuitive Interface with Gesture Recognition Applied in an Immersive Life-Like Environment.
"[Ai head Danette Allen] said it was the 'holy grail' of virtual reality demos, and then we thought, why don't we call it that and make a crazy NASA-style acronym?" Meghan explained.
The presentation incorporates work from all over the Ai to demonstrate what PI Anna Trujillo calls a "multi-modal interface" between humans and autonomous UAVs for the Ai's atmospheric science mission. Anna is the lab's resident expert in human-machine teaming, and her work focuses on natural-language interaction between humans and autonomous robots.
Lauren, Meghan and Anna set up the projectors from the booth. |
"We're defining flight paths," she said. "We're using gestures to define that path and voice to tell it things like the diameter of the spiral or how high it should ascend."
"So, Erica's voice recognition work gets combined with Meghan's gesture stuff , that information gets sent to Javier and Lauren and they calculate the actual flight path, and some of that information is sent to Angelica to show the path in virtual reality. And then Jeremy has been working on the communication in DDS between all these parts," she continued.
Danette tries out Angelica's virtual flight simulator. |
The indoor demo will make use of our overhead projectors to simulate flying over the NASA Langley Research Center, with the flight path and a "risk map" Javier generated overlaid on it.
Javier looks over his creation. |
"I've used the minimum distance algorithm I showed you to generate the map," Javier explained. "We added obstacles and weighted the different buildings based on risk." Blue means "not risky," while red means "quite risky." For example, an unmanned system would have little to worry about while flying over a patch of forest, but could cause some problems if it unexpectedly entered the Ai's airspace, so the woods are blue while Building 1222 is red.
A UAV flies over the projected map. |
Ultimately, the HOLII GRAILLE demo showcases how user-friendly and safe unmanned aerial vehicles can be because of the research we're doing at the Ai. Combining so many facets of our work to create one smooth mission certainly hasn't been easy, but we couldn't be more excited to see this demo fly.
"We're showing the capabilities of interacting with a system in a more natural manner," Anna summed up. "It's not you fighting a system; you're both on a team."
Monday, July 18, 2016
2016-07-18: Autonomy Incubator Intern Jacob Beck Develops Soft Gripper
Jacob Beck is a returning intern in the Autonomy Incubator (Ai); like so many of us, he couldn't stay away for too long. Jacob's project this summer is a new iteration of his work at the Ai last year: a UAV-mounted soft gripper that modulates air pressure inside hollow rubber fingers to grip and release.
"This is, I'd say, the third generation," he said of his newest gripper. The first version Jacob ever attempted was a replica of the gripper from this paper by Dr. George M. Whitesides' lab at Harvard University.
The second version, well. The second version experienced a rapid unscheduled disassembly during his exit presentation last spring. But, we all learned a valuable lesson about rubber viscosity and failure points that day.
In order to make his soft gripper, Jacob 3D-printed a custom mold and filled it with silicone rubber. When the outer layer inflates, the gripper's fingers curl in. Jacob estimates that his latest model will have enough strength to lift about 200 grams, once he gets the correct rubber to cast it in. 200 grams might seem small, but it's the perfect strength to pick up the ozone sensors for the Ai's atmospheric science mission.
"[The gripper] will curl completely inward at 10 psi," he said, before putting the air flow tube to his mouth and puffing. The fingers of the gripper twitched inward a fraction of an inch. It was kind of unsettling in a fascinating, can't-stop-looking way; I'm not used to seeing robotics components look so organic. Here, watch this video of a soft gripper from the Whitesides Research Group at Harvard and you'll see what I mean. The technology is so unique, and Jacob is capitalizing on it to make our UAVs more adaptable in their package pick-up and drop-off.
"I'm writing a paper to document my soft gripper use on a drone, which as far as I can tell will be the first time anyone's done something like that," he said.
Friday, July 15, 2016
2016-07-15: Autonomy Incubator Intern Gale Curry Takes Up Robot Flocking Torch
Gale edits her code after a test flight, micro UAV in hand. |
The Autonomy Incubator (Ai) first dipped its toes into the world of UAV flocking algorithms last summer, when former intern Gil Montague began researching possibilities for coordinated flight with multiple micro UAVs. This summer, that work continues in the capable hands of Gale Curry.
A Master's student in Mechanical Engineering at Northwestern University, Gale originally studied physics in undergrad until the allure of robotics pulled her away from her theoretical focus.
"I joined the Battle Bots team at UCLA, and that's where I decided I wanted to do more applied things," she said. "I was the social chair."
"The Battle Bots team had a social chair?" I asked.
"Ours did!" she said.
Gale explains the hand-flying pattern she needs to high school intern Tien Tran. |
Gale first became interested in autonomous flocking behaviors during her first term of graduate school, when she took a class on swarm robotics. The idea of modeling robot behavior after that of birds, bees, and ants— small animals that work together to perform complex tasks— has remained inspirational to her throughout her education.
"They're really tiny and pretty simple, but together they can accomplish huge feats," she explained. "I've always liked that in swarm robotics, simpler is better."
Her approach to micro UAV flocking, which she's working on coding right now, is to have one smart "leader" and any number of "followers." She hopes it will be more efficient and easier to control than developing a flock of equally intelligent, autonomous vehicles and then trying to make them coordinate.
"This way, you focus your time and energy on controlling the one smart one, and the other, 'dumb' ones will follow," she said.
Gale with a member of her fleet. |
"Flocking couples really easily with the other work here, like the object detection that Deegan is doing or path following, like Javier and Lauren are working on," she said. "It's really applicable to lots of different things."
Thursday, July 14, 2016
2016-07-14: Kevin French Summer 2016 Exit Presentation
Kevin French came to us after graduating from the University of Florida, and will continue his work in robotics as he begins his PhD at the University of Michigan this fall. This was his second consecutive term in the Autonomy Incubator. Kevin's invaluable work this summer focused on simulating flocking behavior in autonomous gliders through two-dimensional cellular automata.
2016-07-14: Autonomy Incubator Bids Farewell to Intern Kevin French
Ai head Danette Allen gave a warm introduction to the assembled crowd of Ai team members and guests, citing Kevin's lengthy stay in the Ai and how much he's contributed to our mission.
Kevin began his presentation with a recap of the work he did with computer vision and object tracking in the spring, which concluded with his software being uploaded to the Ai's in-house network, AEON.
Kevin stands in front of a live demo of his lidar object tracking system, mounted on Herbie. |
"It was a very rewarding experience having my software become a permanent part of this lab's capabilities," he said.
This summer, his goal was to come up with a solution for having large flocks of gliders operate autonomously and cooperatively.
"Nowadays, you can get simple, cheap gliders," he explained. "I wanted to see what we could do with a high quantity of low complexity gliders."
Gliders, although simple, pose unique challenges in autonomy because they're so limited in how they can change direction. To explore this problem and its possible solutions, Kevin decided to simplify a three-dimensional problem down to a 2D analog. He made a "grid world," generated "gliders" of six pixels each (each holding a value for a state: momentum, pitch, etc.) and then set about creating sets of rules to see how the gliders behaved. You remember this from the blog post about his cellular automata, right?
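If you've never played with cellular automata, the whole machine boils down to one synchronous update: every cell's next state is a rule applied to its neighborhood. Here is a generic sketch of that loop, using a Conway-style rule as a stand-in rather than Kevin's six-pixel glider encoding.

# Generic 2D cellular automaton update, not Kevin's glider rule set.
import numpy as np

def step(grid):
    """One synchronous update: count live neighbors, then apply the rule table."""
    neighbors = sum(np.roll(np.roll(grid, dy, axis=0), dx, axis=1)
                    for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                    if (dy, dx) != (0, 0))
    born = (grid == 0) & (neighbors == 3)
    survive = (grid == 1) & ((neighbors == 2) | (neighbors == 3))
    return (born | survive).astype(np.uint8)

grid = np.random.randint(0, 2, size=(50, 50), dtype=np.uint8)
for _ in range(100):
    grid = step(grid)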
Kevin walks the audience through one of his 2D models. |
Simple as it may look, creating his grid world was incredibly complex. His first iteration had 322 billion states and more possible configurations than the number of atoms in the universe.
"We need to simplify," Kevin said in front of a projection of a number so long, the exponent in its scientific notation was still too long to represent in a PowerPoint slide.
By adding physics, he was able to make his software simpler and more agile. Then he could start trying ways of generating the best sets of rules. He could do it by hand, of course, but that would have left him with a process that was time-intensive and inefficient, two things that do not mix with robotics.
His first attempt, a "sequential floating forward selection" (SFFS) algorithm that he created from scratch, worked by taking his original hand-entered set of rules, removing one or more, and then testing to see what happened. Although it brought him some success, it too proved to be not efficient enough for his needs.
"I let it run all weekend one time and it still didn't finish. Actually, it short-circuited it," he said.
Building on the results he managed to get from the SFFS, Kevin next implemented a genetic algorithm, a kind of software that mimics evolution and natural selection. His genetic algorithm "bred" two sets of rules together to produce "children," the most successful of which would then be bred, and so on until someone hit the stop button. It was this genetic algorithm that finally brought him the agility and the accuracy he needed, and it served as the capstone of his research here.
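A skeleton of that genetic algorithm might look something like this. The encoding, fitness function, and breeding details are all placeholders; Kevin's real fitness score came from simulating how well the gliders behaved under each rule set.

# Hypothetical genetic algorithm over rule sets encoded as bit vectors.
import random

RULE_LENGTH = 64

def fitness(rules):
    return sum(rules)                  # placeholder: the real score came from simulation

def crossover(a, b):
    cut = random.randrange(1, RULE_LENGTH)
    return a[:cut] + b[cut:]

def mutate(rules, rate=0.01):
    return [bit ^ 1 if random.random() < rate else bit for bit in rules]

population = [[random.randint(0, 1) for _ in range(RULE_LENGTH)] for _ in range(50)]
for generation in range(200):
    population.sort(key=fitness, reverse=True)
    parents = population[:10]                              # keep the best rule sets
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(len(population) - len(parents))]
    population = parents + children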
As he wrapped up, Kevin called his two terms in the Ai "the best of my life," and thanked Danette, the Ai, the NIFS program, and "all the robots who have helped me out."
Wednesday, July 13, 2016
2016-07-13: Autonomy Incubator Welcomes UAV Pilot Jeff Hill For Outdoor Demo
Zak Johns trains Jeff Hill on Orange 3. |
Today was another long, hot day outside for the PIs and interns at the Autonomy Incubator (Ai), but the excitement in the air is even thicker than the clouds of mosquitoes as we near our end-of-year demo. Today's advancement: a second UAV pilot, Jeff Hill, began flight training with Ai pilot Zak Johns. Jeff, a NASA UAV pilot, will be joining the demo team in a vital role.
"He'll be flying the vehicle that gets in the way of me. Well, not really me, the computer," Zak said. "The lidar will pick it up and tell [the computer] that it's in the way."
While Jeff is a seasoned RV pilot with NASA Langley, the Ai's vehicles are custom-built for our specific research mission and totally unique on-center, so getting flight time in before the demo is imperative. Every vehicle is different, and ours are larger than most of the UAVs other labs fly.
"These things are very agile and have a lot of power," Ai head Danette Allen said.
Here are Zak and Jeff putting OG3 through its paces (untethered!).
After landing the vehicle and retreating to the shade of the field trailer, Jeff remarked, "[The controls are] a little touchy, but we're doing well." If "touchy" means "agile" then we couldn't agree more!
Tuesday, July 12, 2016
2016-07-12: Autonomy Incubator Intern Lauren Howell Represents USA at Airbus Airnovation 2016
Lauren Howell is finally back after a week of networking, competition, and complimentary beer at Airbus Airnovation 2016 at Technische Universiteit (TU) Delft in the Netherlands. She was one of only forty students selected from around the world to participate in the all-expenses-paid conference.
"Seventeen nationalities were represented among those forty people," she said. "I made some people honorary Americans on July Fourth; we needed to celebrate!"
Airnovation 2016 was open not just to aeronautical engineers like Lauren, but to any major "applicable to the field of innovation," she explained. That includes business majors, computer scientists, or engineers of any stripe.
Lauren and her cohort on top of TU Delft's underground library. |
Once they arrived at TU Delft, the forty participants were divided into five teams and tasked with developing a "game-changing" unmanned aerial system (UAS) by the end of the week. They had to think and behave like a start-up, which meant including a budget, a business model, a physical model, return on investment estimates, and potential partners in their final package.
"We were presented with an open-ended challenge to design a game-changing UAS and pitch it to the board of 'investors,' which was made up of really important people at Airbus who are actually in charge of listening to innovative ideas," she said.
"So who won?" I asked.
"My team won," she said, fighting a smile. Lauren is as notorious in the Ai for her modesty as she is for her brilliance.
Her team's winning design was a UAS with an environmentally-focused, humanitarian mission.
"Our design was really cool— it was a blimp that would sweep through the air and de-pollute the air. Our pilot case was Beijing, China," she explained. "One out of five people who die in China, die as a result of pollution."
In addition to bringing home the gold, Lauren also won an individual award— a victory she credits to the Ai.
"The cool thing about how they work as Airbus is that they also use the AGILE method," she said, referencing the Ai's Agile approach with daily "scrums" and bi-weekly "sprints" that keep everyone involved in what everyone else is doing. "I won the Scrum Master award. So, I'm honorary Jim [Neilan] of the Netherlands."
When she wasn't leading her team to victory, Lauren spent time touring Delft, hearing speakers from high up in Airbus, and experiencing Dutch culture with her diverse community of colleagues.
Downtown Delft, as captured by Lauren. |
"It was a really amazing opportunity for networking, and a beautiful thing to witness people from all over the world coming together to come up with five totally unique ideas," she said.
Now, she's stateside and back at work. But, she said, the impact that Airnovation had on her approach to engineering will be far-reaching.
"It helped me understand the innovative way of thinking," she said. "Engineers tend to come up with something cool, and then think about how to make it practical. Now, I think, 'Let me take a look around me and identify what the pain of the world is. And now, let me design something that will fill that need.'"
Lauren's flight home. |
Friday, July 8, 2016
2016-07-08: Autonomy Incubator Celebrates Successful Outdoor Autonomous Science Mission
Today, despite a heat index of 105 degrees and a frankly Biblical number of ticks, the Autonomy Incubator (Ai) team took to the skies and completed not one but two dry runs of our outdoor waypoint following and package delivery demonstration.
The entirely autonomous flight was completed by a custom-built quadcopter carrying the Ai's full research load: lidar, inertial measurement units (IMUs), GPS, three Odroid computers, and several downward-facing cameras. Everything we've ever developed is on this UAV.
"We exercised every capability we need for a successful mission," Ai head Danette Allen said.
Here's a map of our complete route. We took off at the red waypoint, followed the path to the yellow waypoint, descended to just above the ground and simulated dropping off a package (an ozone sensor for our atmospheric science mission), then returned to flight altitude and made the trip back. Again, all of this happened without anyone piloting the vehicle or cueing it. The PIs just hit "go" on the algorithm, and away the UAV went.
Fun fact: the white path intersecting the orange waypoint is where NASA used to test landing gear for the space shuttle. |
While today's tests used GPS data, further tests will focus on the visual odometry algorithms that the Ai has been developing. In the final demo, the initial package drop-off flight will be GPS-enabled, and the pick-up flight will be purely visual odometry-guided. The precision landing for recovery of the ozone sensor worked well. We can't get much closer than this...
OG-1 lands on top of the yellow and silver sensor enclosure |
We're still waiting on the go-ahead to autonomously fly untethered, but in the meantime, Ai head Danette Allen thought up a solution to give our UAV as much mobility as possible while we prepare for our demo: fishing line. PI Jim Neilan served as the official drone-caster, walking behind the vehicle with rod and reel in hand. Insert your own fly-fishing pun here, or let the Ai Twitter do it for you:
Did a little #LaRCai "flight" fishing at @NASA_Langley today. Have tether, will travel! #autonomy #UAV pic.twitter.com/e1ETSqF6mN— Autonomy Incubator (@AutonomyIncub8r) July 7, 2016
The success of today's flights is important not just because they validated the hard work and research of our PIs and interns, but also because they were our first real-life clue of what our demo is going to look like. Being on the Back 40 today was like watching the trailer for a movie we've been waiting to see for three years.
"Today was the first day that we ran our full flight profile," Danette said. "We flew all the trajectories for our science payload mission."
The only major element we have left to rehearse, now that we know we can fly, is the real showstopper of the demo: a second UAV taking off and interrupting our flight path, which will demonstrate our obstacle detection and avoidance capabilities. We've demonstrated this inside the Ai many times and will fly untethered soon so that we can Detect-And-Avoid in the real world. This part is Deegan and Loc's time to shine as the UAV uses computer vision to understand its surroundings in real-time.
When will the Ai finally drop the curtain and roll out our best demo yet? Not until the relentless Tidewater heat subsides, according to Danette.
"It's so brutally hot, we're not gonna do it anytime soon," she said, and confirmed early fall as the projected date for the demo. For now, keep September marked on your calendars and check back here for updates.
Wednesday, July 6, 2016
2016-07-06: Autonomy Incubator Hosts Safety-Critical Avionics Research Flight
Intern Serena Pan and the rest of the Safety-Critical Avionics interns celebrate post-flight. |
The flight range in Building 1222 has been crowded of late, but today we made room for one more as the Safety-Critical Avionics team came over to the Autonomy Incubator (Ai) to fly their octorotor. Evan Dill, an engineer from the Safety-Critical Avionics Research Branch here at NASA Langley, led the band of merry scientists.
Evan checks in with his team before takeoff. |
"We're testing elements for demonstrating our containment system, and some collision avoidance as well," Evan said. Safety-Critical Avionics is working on a geo-containment system for unmanned aircraft, not unlike the one that Ai intern Nick Woodward worked on last summer. Obviously, their project focuses less on autonomy and more on safety, but our labs' missions still align in a way that makes cooperating when necessary easy— like today.
Interns Nathan Lowe, Serena Pan, Kyle Edgerton, and Russell Gilabert conducted the test, with Russell serving as the UAV pilot.
Kyle Edgerton carries the octorotor into position on the flight range. |
"We're dampening the controls so that if I tell it to pitch, it goes ten degrees instead of forty-five degrees," Russell explained. With the pitch dampened, the UAV will maneuver in a slower, more controlled (and predictable) manner.
The flight was successful: the vehicle remained under thirteen degrees of tilt as Russell put it through some simple maneuvers. After a brief round of congratulations, our guests set off back to their hangar.
The octorotor in flight. Note how gently it's banking. |
With unmanned vehicles gaining greater and greater importance in the scientific community and the world at large, the Ai is happy to support other NASA UAV labs in their missions. Thanks for stopping by, team!
Tuesday, July 5, 2016
2016-07-05: Autonomy Incubator Begins Tests of New Precision Landing Abilities
Ben Kelley, Loc Tran, Matt Burkhardt, and Kyle McQuarry prepare to initialize. |
A new frontier of the Autonomy Incubator's (Ai) research began today as PI Loc Tran and NASA Space and Technology Research Fellow Matt Burkhardt started running their control software on a vehicle in real time. While Matt has worked on varied projects so far this summer, this application of his controls research has become his primary focus.
"I'm supporting the most pressing of challenges, which is precision landing," he said.
What is precision landing, exactly? It's the ability of a UAV to find its landing point, center its body over that point, and land there— precisely where it's supposed to land. An example of a commercially available solution is a product called IR-LOCK™, which uses an infrared beacon on the ground and an infrared camera on the vehicle to facilitate precise landing. It works well, but this is the Ai: we want "unstructured" solutions. Between Loc's visual odometry work and Matt's control algorithm, our vehicles will soon be able to execute precise landings using the input from just regular onboard cameras.
"What we're attempting to do here is replicate the behavior, but eliminate the beacon," Matt said. Eliminating the beacon (or any fiducial, for that matter) is what we mean by "unstructured". We can't rely on being able to mark the environment in any way to assist in our autonomous operations.
Matt and Jim Neilan perform hardware checks before testing. |
Our new autonomous precision landing abilities will have immediate applications in the Ai's sensor package delivery mission.
"In the second phase of the mission, we're going to try to go back to launch without GPS," Loc explained. "The idea is, when we take off, we'll take a picture of the package, so that when we come back, we can look for that exact spot [and land]. We call it the natural feature tracker."
I have some screengrabs from Loc's algorithm at work; take a look. He ran an example image for us: the top one is the reference picture with no alterations, and the bottom one has been shifted around to mimic what a UAV might see as it comes in to land. The algorithm looked at the bottom image and provided directional suggestions to make it line up with the reference picture— in this case, it suggested that we move left and up a bit, plus yaw a little clockwise.
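A hypothetical stand-in for that comparison step: match features between the reference image and the current view, fit a similarity transform, and read the shift and rotation off the result. Loc's actual natural feature tracker is more sophisticated; this just shows where "move left and up a bit, plus yaw a little clockwise" can come from.

# Hypothetical feature-matching alignment check, not Loc's actual NFT.
import cv2
import numpy as np

def alignment_suggestion(reference_gray, current_gray):
    """Estimate how the current view is shifted and rotated relative to the reference."""
    orb = cv2.ORB_create(500)
    kp1, des1 = orb.detectAndCompute(reference_gray, None)
    kp2, des2 = orb.detectAndCompute(current_gray, None)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des1, des2)
    src = np.float32([kp1[m.queryIdx].pt for m in matches])
    dst = np.float32([kp2[m.trainIdx].pt for m in matches])
    M, _ = cv2.estimateAffinePartial2D(src, dst, method=cv2.RANSAC)
    dx, dy = M[0, 2], M[1, 2]                        # pixel offset of the scene
    yaw = np.degrees(np.arctan2(M[1, 0], M[0, 0]))   # in-plane rotation
    return dx, dy, yaw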
The reference image, taken from the onboard camera... |
...and what the UAV might see as it makes its approach. |
Loc's natural feature tracker (NFT) is only one half of the equation, however. Matt's control algorithm takes the output from the feature tracker and uses it to autonomously guide the vehicle into position.
"The challenge is, given an image like this, what do we tell the vehicle to do?" Matt said. "My controller has to take these images, track the feature, and apply force and torque to the vehicle."
For instance, in the example above, Matt's controller would take the feature tracker's recommendation to go left and up a bit, and then manipulate the vehicle to actually move it left and up a bit. Make sense? Loc's software provides the impulse; Matt's software provides the action. Together, they make a precision-landing powerhouse.
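A minimal proportional controller in that spirit might look like this. The gains, sign conventions, and command interface below are assumptions for illustration, not Matt's actual controller.

# Hypothetical proportional correction from tracker output to velocity commands.
def landing_correction(dx_px, dy_px, yaw_deg,
                       k_xy=0.002, k_yaw=0.05, max_vel=0.5):
    """Map pixel and angle errors to body-frame velocity and yaw-rate commands."""
    clamp = lambda v: max(-max_vel, min(max_vel, v))
    vx = clamp(-k_xy * dx_px)      # shift the vehicle to cancel the pixel offset
    vy = clamp(-k_xy * dy_px)
    yaw_rate = -k_yaw * yaw_deg    # rotate back toward the reference heading
    return vx, vy, yaw_rate

# e.g. tracker says the reference is 40 px left and 25 px up, with 3 degrees of yaw
print(landing_correction(-40.0, -25.0, 3.0))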
In today's tests, Loc and Matt wanted to hand-fly a quadrotor while their software ran in the background— not flying autonomously yet, but still generating output for the researchers to review for accuracy. However, they had some obstacles to overcome first.
"We need to take the [reference picture] in order to see if the algorithm works, but with someone holding it, their feet were always in the picture. And we can't have that," Matt said. Which led to this in the flight range:
PI Ralph Williams makes a hardware adjustment. |
The ingenuity of the engineers here at NASA Langley Research Center cannot be stifled. Suspended as though hovering about five feet above the ground, the UAV took an accurate, feet-free picture to use in feature matching and tracking. Ralph did the honors of hand-flying the UAV around the range while Matt observed his algorithm's output.
Now, with a few rounds of successful hand-flying tests on the logbooks, Matt and Loc intend to move to remote-controlled indoor flights for their next step. We'll keep you posted on those and any other exciting developments in this mission right here on the Ai blog.