Okay, so it's not technically tree-dodging now that we're dodging a 20-foot pole, but the concept remains the same: we're using computer vision and lidar to avoid obstacles in real time. And we're really, really good at it.
PIs Loc Tran, Ben Kelley, Jim Neilan, and Kyle McQuarry went to the Back 40 this morning with the goal of testing this part of the pipeline, to roaring success. The UAV flew back and forth over a dozen times, deftly avoiding the pole in the middle of its flight path without a single failure.
The transition to the outdoors is especially exciting when you consider that there's no map and no fiducials involved here, which means that the algorithm has no outside assistance in doing its job and nothing shining bright to look for. There is no extra data we could feed it if it starts to fail. Once that UAV is in the air and on a collision course with the pole, it has to use its autonomous capabilities to detect the obstacle and replan its flight path mid-flight. And it does. It succeeds every time, in a predictable and safe manner.
The next challenge, now that we've conquered stationary obstacles, will be the ultimate in collision avoidance and a highlight of our upcoming outdoor demo: detecting and avoiding another UAV that enters the airspace. Imagine this, but outside and thirty feet in the air:
Congratulations to Jim, Kyle, Ben, and Loc on an amazing end to a packed week!
Friday, July 29, 2016
Wednesday, July 27, 2016
2016-07-27: Autonomy Incubator Intern Jacob Beck and His Marvelous Magical Spider Bot
You saw them steal the show in the Autonomy Incubator final review; now get to know them: Jacob Beck and his creation, the Spider Bot.
The basics of Jacob's design draw upon tried-and-true principles in robotics, such as the design of the leg joints and the six-legged, alternating tripod method of locomotion. However, he's creatively blending these building blocks with computer vision, autonomy and UAVs to create a novel solution for the Ai's package delivery mission.
"Once [the Spider Bot] finds the object, it will use its legs as, instead, fingers, and close around the object and winch itself back up," Jacob said.
The current approach to autonomous package pick-up depends heavily on a precise landing from the UAV, so that a fixed mechanism can grab onto the package. However, autonomous precision landing is tricky in the best research conditions and incredibly difficult in the real world— in fact, we've got an intern who's spending his summer focusing exclusively on precision landing capabilities. Jacob's idea for a mobile, autonomous robot gripper eases our reliance on precision landing by allowing the UAV some room for error when approaching a package.
"An issue we face right now is getting a drone to land precisely over the target," he explained. By using this kind of robot, we hope to greatly expand the area in which the drone can work."
Tuesday, July 26, 2016
2016-07-26: Autonomy Incubator Makes Data-Denied Breakthrough
Yesterday, the Autonomy Incubator (Ai) team assembled in the flight range to watch history being made: the Ai's first completely data-denied autonomous flight. Just a quadrotor, a camera, and a visual odometry algorithm keeping the whole thing on path. No positioning system in sight (literally and figuratively!).
"What was interesting is that yesterday, we had no GPS or [indoor GPS emulator] Vicon™ to help us. It was just the visual odometry, and it handled very well," PI Jim Neilan said. Jim has been working on a data-denied navigation solution with the Ai for years, and yesterday's success was a massive validation for him and his team.
Here's the quick-and-dirty on how the Ai does visual odometry. We use a downward-facing global shutter camera to collect information on where the UAV is. The "global shutter" bit is really key here— most cameras have rolling shutters, which means that only one row of pixels gets exposed to light at a time. Rolling shutters are fine for most things, but when they're in motion, they cause a lot of aliasing and wobbling that makes visual odometry next to impossible. A global shutter camera exposes the entire frame of the picture at once, making for a faster, more reliable source of visual data.
"I'd say it's around forty to fifty frames per second, on average," Jim said. "We're using a very powerful global shutter camera."
The data from the camera (as well as from other sensors, like IMUs and barometers) gets fed into an algorithm called PTAM: Parallel Tracking And Mapping.
"It's actually based on an augmented reality thing designed for cell phones in 2008 by some guys at Oxford," Jim said. The basic idea behind PTAM is creating a grid map of the environment (the mapping) and then using translations to track where the camera moves in that environment (the tracking). These things happen simultaneously, so they're parallel. See what I'm saying? Here's the original paper if you're intrigued.
An aside: augmented reality is also the thing that lets Pokémon Go put little anime animals into your surroundings. So, the next time you throw a Poké Ball at a Charmander on your kitchen table, remember you're using the same technology that's revolutionizing autonomous flight!
We've been using PTAM for a while in both our indoor and outdoor tests, but yesterday's test was exciting because there was no external data source to correct the drift in the algorithm, and it still performed beautifully. Watch the video; doesn't that flight path look buttery smooth? Personally, I didn't realize that they'd switched the algorithm on until I looked over at Jim and saw his hands weren't moving the controls.
With a successful first flight in the logbooks, Jim says they have three concrete goals moving forward.
"We have to do a couple things. We need to clean the code up and make it robust to failure. We have to put hooks in the code so we can inject corrections periodically. And we need to fly it outside."
Stay tuned!
Monday, July 25, 2016
2016-07-25: Autonomy Incubator Amazes and Delights in Final Incubator Review
The HOLII GRAILLE team celebrates post-demo. |
After nearly two hours of presentations, Danette led the crowd into the flight range to show the audience what we can do firsthand. First on the bill was HOLII GRAILLE, the sim-to-flight human factors/path planning/virtual reality demonstration that showcases the full pipeline of technology for our atmospheric science mission. In true Ai style, it went beautifully, from the gesture recognition all the way through the live flight.
Angelica Garcia explains her VR environment as Meghan Chandarana demonstrates how to navigate it. |
Next came intern Deegan Atha and his computer vision research. Using a webcam mounted to a bench rig (a faux-UAV we use for research), he moved around the flight range and let his algorithm recognize trees, drones, and people in real time.
Look at all those beautiful bounding boxes. |
Jim Neilan and Loc Tran led a live demonstration of our GPS-denied obstacle avoidance capabilities. Out of an already stellar lineup, this was the big showstopper of the morning: we had one of our Orevkon UAVs (a powerful all-carbon-fiber quadrotor) fly autonomously, drop off a sensor package, fly back to its takeoff point while avoiding another UAV we lowered into its path on a rope, go back and pick up the package, return to its takeoff point again, and land.
Jim Neilan explains the flight path. |
The Orevkon, precision landed. |
Finally, intern Jacob Beck wrapped up the morning with a demonstration of his soft gripper and Spider Bot projects. The Spider Bot, as always, was a crowd-pleaser as it skittered across his desk and grabbed a toy ball, but the soft gripper also elicited some murmurs of admiration.
The Spider Bot descends. |
Overall, we delivered a complex and innovative demonstration of our multifaceted capabilities on Friday, and we're proud of all of our researchers and interns who logged late nights and early mornings to make it possible. So proud, in fact, that we ran it again this afternoon for the LASER group of young NASA managers!
Jim and Josh Eddy discuss the on-board capabilities of our Orevkon UAV. |
Lauren Howell discusses Bezier curves as her flight path generates in the background. |
Thank you and congratulations to all our team for their hard work and brilliance!
Tuesday, July 19, 2016
2016-07-19: Autonomy Incubator Gears Up For HOLII GRAILLE Demo
Lauren Howell and Meghan Chandarana, supervised by Anna Trujillo, measure the projection of the flight path to ensure it's to scale. |
Jeremy's nickname is Indiana Drones, or Dr. Drones if he's being especially brilliant. |
What, exactly, does HOLII GRAILLE stand for?
"Hold on, I've got it written down somewhere," Meghan Chandarana said when I asked. Officially, HOLII GRAILLE is an acronym for Human Operated Language Intuitive Interface with Gesture Recognition Applied in an Immersive Life-Like Environment.
"[Ai head Danette Allen] said it was the 'holy grail' of virtual reality demos, and then we thought, why don't we call it that and make a crazy NASA-style acronym?" Meghan explained.
The presentation incorporates work from all over the Ai to demonstrate what PI Anna Trujillo calls a "multi-modal interface" between humans and autonomous UAVs for the Ai's atmospheric science mission. Anna is the lab's resident expert in human-machine teaming, and her work focuses on natural-language interaction between humans and autonomous robots.
Lauren, Meghan and Anna set up the projectors from the booth. |
"We're defining flight paths," she said. "We're using gestures to define that path and voice to tell it things like the diameter of the spiral or how high it should ascend."
"So, Erica's voice recognition work gets combined with Meghan's gesture stuff , that information gets sent to Javier and Lauren and they calculate the actual flight path, and some of that information is sent to Angelica to show the path in virtual reality. And then Jeremy has been working on the communication in DDS between all these parts," she continued.
Danette tries out Angelica's virtual flight simulator. |
The indoor demo will make use of our overhead projectors to simulate flying over the NASA Langley Research Center, with the flight path and a "risk map" Javier generated overlaid on it.
Javier looks over his creation. |
"I've used the minimum distance algorithm I showed you to generate the map," Javier explained. "We added obstacles and weighted the different buildings based on risk." Blue means "not risky," while red means "quite risky." For example, an unmanned system would have little to worry about while flying over a patch of forest, but could cause some problems if it unexpectedly entered the Ai's airspace, so the woods are blue while Building 1222 is red.
A UAV flies over the projected map. |
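Javier's code wasn't shared on the blog, but the minimum-distance idea is easy to sketch: compute each grid cell's distance to the nearest obstacle and let the risk fall off with that distance, scaled by how bad a collision there would be. Here's a rough sketch with invented obstacle positions and weights.

```python
# Rough sketch of a distance-weighted risk map. Not Javier's code; the grid
# size, obstacle footprints, and weights below are invented for illustration.
import numpy as np
from scipy.ndimage import distance_transform_edt

GRID = (100, 100)                 # cells covering the flight area
risk = np.zeros(GRID)

# Hypothetical obstacles: (row, col, half-width, weight). Higher weight = riskier.
obstacles = [(30, 40, 5, 1.0),    # e.g. an occupied building like 1222
             (70, 20, 8, 0.2)]    # e.g. a patch of forest

for r, c, half, weight in obstacles:
    occupied = np.zeros(GRID, dtype=bool)
    occupied[r - half:r + half, c - half:c + half] = True
    # Distance from every free cell to the nearest cell of this obstacle.
    dist = distance_transform_edt(~occupied)
    # Risk decays with distance; keep the worst case over all obstacles.
    risk = np.maximum(risk, weight / (1.0 + dist))
```

A path planner can then favor trajectories that accumulate the least risk along their length, which is why the forest shows up blue and Building 1222 shows up red on the projection.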
Ultimately, the HOLII GRAILLE demo showcases how user-friendly and safe unmanned aerial vehicles can be because of the research we're doing at the Ai. Combining so many facets of our work to create one smooth mission certainly hasn't been easy, but we couldn't be more excited to see this demo fly.
"We're showing the capabilities of interacting with a system in a more natural manner," Anna summed up. "It's not you fighting a system; you're both on a team."
Monday, July 18, 2016
2016-07-18: Autonomy Incubator Intern Jacob Beck Develops Soft Gripper
Jacob Beck is a returning intern in the Autonomy Incubator (Ai); like so many of us, he couldn't stay away for too long. Jacob's project this summer is a new iteration of his work at the Ai last year: a UAV-mounted soft gripper that modulates air pressure inside hollow rubber fingers to grip and release.
"This is, I'd say, the third generation," he said of his newest gripper. The first version Jacob ever attempted was a replica of the gripper from this paper by Dr. George M. Whitesides' lab at Harvard University.
The second version, well. The second version experienced a rapid unscheduled disassembly during his exit presentation last spring. But, we all learned a valuable lesson about rubber viscosity and failure points that day.
In order to make his soft gripper, Jacob 3D-printed a custom mold and filled it with silicone rubber. When the outer layer inflates, the gripper's fingers curl in. Jacob estimates that his latest model will have enough strength to lift about 200 grams, once he gets the correct rubber to cast it in. 200 grams might seem small, but it's the perfect strength to pick up the ozone sensors for the Ai's atmospheric science mission.
"[The gripper] will curl completely inward at 10 psi," he said, before putting the air flow tube to his mouth and puffing. The fingers of the gripper twitched inward a fraction of an inch. It was kind of unsettling in a fascinating, can't-stop-looking way; I'm not used to seeing robotics components look so organic. Here, watch this video of a soft gripper from the Whitesides Research Group at Harvard and you'll see what I mean. The technology is so unique, and Jacob is capitalizing on it to make our UAVs more adaptable in their package pick-up and drop-off.
"I'm writing a paper to document my soft gripper use on a drone, which as far as I can tell will be the first time anyone's done something like that," he said.
Friday, July 15, 2016
2016-07-15: Autonomy Incubator Intern Gale Curry Takes Up Robot Flocking Torch
Gale edits her code after a test flight, micro UAV in hand. |
The Autonomy Incubator (Ai) first dipped its toes into the world of UAV flocking algorithms last summer, when former intern Gil Montague began researching possibilities for coordinated flight with multiple micro UAVs. This summer, that work continues in the capable hands of Gale Curry.
A Master's student in Mechanical Engineering at Northwestern University, Gale originally studied physics in undergrad until the allure of robotics pulled her away from her theoretical focus.
"I joined the Battle Bots team at UCLA, and that's where I decided I wanted to do more applied things," she said. "I was the social chair."
"The Battle Bots team had a social chair?" I asked.
"Ours did!" she said.
Gale explains the hand-flying pattern she needs to high school intern Tien Tran. |
Gale first became interested in autonomous flocking behaviors during her first term of graduate school, when she took a class on swarm robotics. The idea of modeling robot behavior after that of birds, bees, and ants— small animals that work together to perform complex tasks— has remained inspirational to her throughout her education.
"They're really tiny and pretty simple, but together they can accomplish huge feats," she explained. "I've always liked that in swarm robotics, simpler is better."
Her approach to micro UAV flocking, which she's working on coding right now, is to have one smart "leader" and any number of "followers." She hopes it will be more efficient and easier to control than developing a flock of equally intelligent, autonomous vehicles and then trying to make them coordinate.
"This way, you focus your time and energy on controlling the one smart one, and the other, 'dumb' ones will follow," she said.
Gale with a member of her fleet. |
"Flocking couples really easily with the other work here, like the object detection that Deegan is doing or path following, like Javier and Lauren are working on," she said. "It's really applicable to lots of different things."
Thursday, July 14, 2016
2016-07-14: Kevin French Summer 2016 Exit Presentation
Kevin French came to us after graduating from the University of Florida, and will continue his work in robotics as he begins his PhD at the University of Michigan this fall. This was his second consecutive term in the Autonomy Incubator. Kevin's invaluable work this summer focused on simulating flocking behavior in autonomous gliders through two-dimensional cellular automata.
2016-07-14: Autonomy Incubator Bids Farewell to Intern Kevin French
Ai head Danette Allen gave a warm introduction to the assembled crowd of Ai team members and guests, citing Kevin's lengthy stay in the Ai and how much he's contributed to our mission.
Kevin began his presentation with a recap of the work he did with computer vision and object tracking in the spring, which concluded with his software being uploaded to the Ai's in-house network, AEON.
Kevin stands in front of a live demo of his lidar object tracking system, mounted on Herbie. |
"It was a very rewarding experience having my software become a permanent part of this lab's capabilities," he said.
This summer, his goal was to come up with a solution for having large flocks of gliders operate autonomously and cooperatively.
"Nowadays, you can get simple, cheap gliders," he explained. "I wanted to see what we could do with a high quantity of low complexity gliders."
Gliders, although simple, pose unique challenges in autonomy because they're so limited in how they can change direction. To explore this problem and its possible solutions, Kevin decided to simplify a three-dimensional problem down to a 2D analog. He made a "grid world," generated "gliders" of six pixels each (each holding a value for a state: momentum, pitch, etc.) and then set about creating sets of rules to see how the gliders behaved. You remember this from the blog post about his cellular automata, right?
Kevin walks the audience through one of his 2D models. |
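To make the setup concrete, here is a toy skeleton of a grid world with stateful gliders and pluggable rules. The states and the single placeholder rule are invented; Kevin's real model tracked far more per glider and used rule sets he designed and, later, learned.

```python
# Toy skeleton of a grid-world glider simulation. The states and the one
# placeholder rule are invented; this is an illustration, not Kevin's model.
class Glider:
    def __init__(self, x, y):
        self.x, self.y = x, y     # grid position
        self.momentum = 1         # toy state values
        self.pitch = 0

WIDTH, HEIGHT = 80, 40

def step(gliders, rules):
    """Advance every glider one tick by applying each rule, then moving it."""
    for g in gliders:
        for rule in rules:
            rule(g)
        g.x = (g.x + g.momentum) % WIDTH              # wrap around the grid
        g.y = max(0, min(HEIGHT - 1, g.y + g.pitch))  # clamp to the grid

def pitch_down_when_slow(g):
    """Placeholder rule: nose down whenever momentum runs out."""
    if g.momentum < 1:
        g.pitch = -1

gliders = [Glider(5, 20), Glider(10, 25)]
for _ in range(100):
    step(gliders, [pitch_down_when_slow])
```

The hard part, as the next slide made clear, is that the space of possible states and rules explodes almost immediately.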
Simple as it may look, creating his grid world was incredibly complex. His first iteration had 322 billion states and more possible configurations than the number of atoms in the universe.
"We need to simplify," Kevin said in front of a projection of a number so long, the exponent in its scientific notation was still too long to represent in a PowerPoint slide.
By adding physics, he was able to make his software simpler and more agile. Then he could start trying ways of generating the best sets of rules. He could do it by hand, of course, but that would have left him with a process that was time-intensive and inefficient— two things that do not mix with robotics.
His first attempt, a "sequential floating forward selection" (SFFS) algorithm that he created from scratch, worked by taking his original hand-entered set of rules, removing one or more, and then testing to see what happened. Although it brought him some success, it too proved to be not efficient enough for his needs.
"I let it run all weekend one time and it still didn't finish. Actually, it short-circuited it," he said.
Building on the results he managed to get from the SSFS, Kevin next implemented a genetic algorithm, a kind of software that mimics evolution and natural selection. His genetic algorithm "bred" two sets of rules together to produce "children," the most successful of which would then be bred, and so on until someone hit the stop button. It was this genetic algorithm that finally brought him the agility and the accuracy he needed, and served as the capstone of his research here.
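The recipe is the classic one: score every rule set, keep the fittest, splice pairs of survivors together, and occasionally mutate a child. A stripped-down sketch, again not Kevin's code:

```python
# Stripped-down genetic algorithm over rule sets. Not Kevin's code; population
# size, rates, and rule-set length are invented, and score() again stands in
# for running the glider simulation.
import random

def evolve(rule_pool, score, pop_size=20, generations=50, mutation_rate=0.1):
    # Start from random rule sets drawn out of the candidate pool.
    population = [random.sample(rule_pool, k=5) for _ in range(pop_size)]
    for _ in range(generations):
        ranked = sorted(population, key=score, reverse=True)
        parents = ranked[:pop_size // 2]             # keep the fittest half
        children = []
        while len(children) < pop_size - len(parents):
            mom, dad = random.sample(parents, 2)
            cut = random.randrange(1, len(mom))
            child = mom[:cut] + dad[cut:]            # crossover: splice two parents
            if random.random() < mutation_rate:      # mutation: swap in a new rule
                child[random.randrange(len(child))] = random.choice(rule_pool)
            children.append(child)
        population = parents + children
    return max(population, key=score)
```

Because crossover and mutation only ever touch whole rules, every child is still a valid rule set that can be dropped straight back into the simulator.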
As he wrapped up, Kevin called his two terms in the Ai "the best of my life," and thanked Danette, the Ai, the NIFS program, and "all the robots who have helped me out."
Wednesday, July 13, 2016
2016-07-13: Autonomy Incubator Welcomes UAV Pilot Jeff Hill For Outdoor Demo
Zak Johns trains Jeff Hill on Orange 3. |
Today was another long, hot day outside for the PIs and interns at the Autonomy Incubator (Ai), but the excitement in the air is even thicker than the clouds of mosquitoes as we near our end-of-year demo. Today's advancement: a second UAV pilot, Jeff Hill, began flight training with Ai pilot Zak Johns. Jeff, a NASA UAV pilot, will be joining the demo team in a vital role.
"He'll be flying the vehicle that gets in the way of me. Well, not really me, the computer," Zak said. "The lidar will pick it up and tell [the computer] that it's in the way."
While Jeff is a seasoned RV pilot with NASA Langley, the Ai's vehicles are custom-built for our specific research mission and totally unique on-center, so getting flight time in before the demo is imperative. Every vehicle is different, and ours are larger than most of the UAVs other labs fly.
"These things are very agile and have a lot of power," Ai head Danette Allen said.
Here are Zak and Jeff putting OG3 through its paces (untethered!).
After landing the vehicle and retreating to the shade of the field trailer, Jeff remarked, "[The controls are] a little touchy, but we're doing well." If "touchy" means "agile" then we couldn't agree more!
Tuesday, July 12, 2016
2016-07-12: Autonomy Incubator Intern Lauren Howell Represents USA at Airbus Airnovation 2016
Lauren Howell is finally back after a week of networking, competition, and complimentary beer at Airbus Airnovation 2016 at Technische Universiteit (TU) Delft in the Netherlands. She was one of only forty students selected from around the world to participate in the all-expenses-paid conference.
"Seventeen nationalities were represented among those forty people," she said. "I made some people honorary Americans on July Fourth; we needed to celebrate!"
Airnovation 2016 was open not just to aeronautical engineers like Lauren, but to any major "applicable to the field of innovation," she explained. That includes business majors, computer scientists, or engineers of any stripe.
Lauren and her cohort on top of TU Delft's underground library. |
Once they arrived at TU Delft, the forty participants were divided into five teams and tasked with developing a "game-changing" unmanned aerial system (UAS) by the end of the week. They had to think and behave like a start-up, which meant including a budget, a business model, a physical model, return on investment estimates, and potential partners in their final package.
"We were presented with an open-ended challenge to design a game-changing UAS and pitch it to the board of 'investors,' which was made up of really important people at Airbus who are actually in charge of listening to innovative ideas," she said.
"So who won?" I asked.
"My team won," she said, fighting a smile. Lauren is as notorious in the Ai for her modesty as she is for her brilliance.
Her team's winning design was a UAS with an environmentally-focused, humanitarian mission.
"Our design was really cool— it was a blimp that would sweep through the air and de-pollute the air. Our pilot case was Beijing, China," she explained. "One out of five people who die in China, die as a result of pollution."
In addition to bringing home the gold, Lauren also won an individual award— a victory she credits to the Ai.
"The cool thing about how they work as Airbus is that they also use the AGILE method," she said, referencing the Ai's Agile approach with daily "scrums" and bi-weekly "sprints" that keep everyone involved in what everyone else is doing. "I won the Scrum Master award. So, I'm honorary Jim [Neilan] of the Netherlands."
When she wasn't leading her team to victory, Lauren spent time touring Delft, hearing speakers from high up in Airbus, and experiencing Dutch culture with her diverse community of colleagues.
Downtown Delft, as captured by Lauren. |
"It was a really amazing opportunity for networking, and a beautiful thing to witness people from all over the world coming together to come up with five totally unique ideas," she said.
Now, she's stateside and back at work. But, she said, the impact that Airnovation had on her approach to engineering will be far-reaching.
"It helped me understand the innovative way of thinking," she said. "Engineers tend to come up with something cool, and then think about how to make it practical. Now, I think, 'Let me take a look around me and identify what the pain of the world is. And now, let me design something that will fill that need.'"
Lauren's flight home. |
Friday, July 8, 2016
2016-07-08: Autonomy Incubator Celebrates Successful Outdoor Autonomous Science Mission
Today, despite a heat index of 105 degrees and a frankly Biblical number of ticks, the Autonomy Incubator (Ai) team took to the skies and completed not one, but two dry runs of our outdoor waypoint following and package delivery demonstration.
The entirely autonomous flight was completed by a custom-built quadcopter carrying the Ai's full research load: lidar, inertial measurement units (IMUs), GPS, three Odroid computers, and several downward-facing cameras. Everything we've ever developed is on this UAV.
"We exercised every capability we need for a successful mission," Ai head Danette Allen said.
Here's a map of our complete route. We took off at the red waypoint, followed the path to the yellow waypoint, descended to just above the ground and simulated dropping off a package (an ozone sensor for our atmospheric science mission), then returned to flight altitude and made the trip back. Again, all of this happened without anyone piloting the vehicle or cueing it. The PIs just hit "go" on the algorithm, and away the UAV went.
Fun fact: the white path intersecting the orange waypoint is where NASA used to test landing gear for the space shuttle. |
While today's tests used GPS data, further tests will focus on the visual odometry algorithms that the Ai has been developing. In the final demo, the initial package drop-off flight will be GPS-enabled, and the pick-up flight will be purely visual odometry-guided. The precision landing for recovery of the ozone sensor worked well. We can't get much closer than this...
OG-1 lands on top of the yellow and silver sensor enclosure. |
We're still waiting on the go-ahead to autonomously fly untethered, but in the meantime, Ai head Danette Allen thought up a solution to give our UAV as much mobility as possible while we prepare for our demo: fishing line. PI Jim Neilan served as the official drone-caster, walking behind the vehicle with rod and reel in hand. Insert your own fly-fishing pun here, or let the Ai Twitter do it for you:
Did a little #LaRCai "flight" fishing at @NASA_Langley today. Have tether, will travel! #autonomy #UAV pic.twitter.com/e1ETSqF6mN— Autonomy Incubator (@AutonomyIncub8r) July 7, 2016
The success of today's flights is important not just because they validated the hard work and research of our PIs and interns, but also because they were our first real-life clue of what our demo is going to look like. Being on the Back 40 today was like watching the trailer for a movie we've been waiting to see for three years.
"Today was the first day that we ran our full flight profile," Danette said. "We flew all the trajectories for our science payload mission."
The only major element we have left to rehearse, now that we know we can fly, is the real showstopper of the demo: a second UAV taking off and interrupting our flight path, which will demonstrate our obstacle detection and avoidance capabilities. We've demonstrated this inside the Ai many times and will fly untethered soon so that we can Detect-And-Avoid in the real world. This part is Deegan and Loc's time to shine as the UAV uses computer vision to understand its surroundings in real-time.
When will the Ai finally drop the curtain and roll out our best demo yet? Not until the relentless Tidewater heat subsides, according to Danette.
"It's so brutally hot, we're not gonna do it anytime soon," she said, and confirmed early fall as the projected date for the demo. For now, keep September marked on your calendars and check back here for updates.
Wednesday, July 6, 2016
2016-07-06: Autonomy Incubator Hosts Safety-Critical Avionics Research Flight
Intern Serena Pan and the rest of the Safety-Critical Avionics interns celebrate post-flight. |
The flight range in Building 1222 has been crowded of late, but today we made room for one more as the Safety-Critical Avionics team came over to the Autonomy Incubator (Ai) to fly their octorotor. Evan Dill, an engineer from the Safety-Critical Avionics Research Branch here at NASA Langley, led the band of merry scientists.
Evan checks in with his team before takeoff. |
"We're testing elements for demonstrating our containment system, and some collision avoidance as well," Evan said. Safety-Critical Avionics is working on a geo-containment system for unmanned aircraft, not unlike the one that Ai intern Nick Woodward worked on last summer. Obviously, their project focuses less on autonomy and more on safety, but our labs' missions still align in a way that makes cooperating when necessary easy— like today.
Interns Nathan Lowe, Serena Pan, Kyle Edgerton, and Russell Gilabert conducted the test, with Russell serving as the UAV pilot.
Kyle Edgerton carries the octorotor into position on the flight range. |
"We're dampening the controls so that if I tell it to pitch, it goes ten degrees instead of forty-five degrees," Russell explained. With the pitch dampened, the UAV will maneuver in a slower, more controlled (and predictable) manner.
The flight was successful—the vehicle remained under thirteen degrees of tilt as Russell put it through some simple maneuvers— and after a brief round of congratulations, our guests set off back to their hangar.
The octorotor in flight. Note how gently it's banking. |
With unmanned vehicles gaining greater and greater importance in the scientific community and the world at large, the Ai is happy to support other NASA UAV labs in their missions. Thanks for stopping by, team!
Tuesday, July 5, 2016
2016-07-05: Autonomy Incubator Begins Tests of New Precision Landing Abilities
Ben Kelley, Loc Tran, Matt Burkhardt, and Kyle McQuarry prepare to initialize. |
A new frontier of the Autonomy Incubator's (Ai) research began today as PI Loc Tran and NASA Space and Technology Research Fellow Matt Burkhardt started running their control software on a vehicle in real time. While Matt has worked on varied projects so far this summer, this application of his controls research has become his primary focus.
"I'm supporting the most pressing of challenges, which is precision landing," he said.
What is precision landing, exactly? It's the ability of a UAV to find its landing point, center its body over that point, and land there— precisely where it's supposed to land. An example of a commercially available solution is a product called IR-LOCK™, which uses an infrared beacon on the ground and an infrared camera on the vehicle to facilitate precise landing. It works well, but this is the Ai: we want "unstructured" solutions. Between Loc's visual odometry work and Matt's control algorithm, our vehicles will soon be able to execute precise landings using the input from just regular onboard cameras.
"What we're attempting to do here is replicate the behavior, but eliminate the beacon," Matt said. Eliminating the beacon (or any fiducial, for that matter) is what we mean by "unstructured". We can't rely on being able to mark the environment in any way to assist in our autonomous operations.
Matt and Jim Neilan perform hardware checks before testing. |
Our new autonomous precision landing abilities will have immediate applications in the Ai's sensor package delivery mission.
"In the second phase of the mission, we're going to try to go back to launch without GPS," Loc explained. "The idea is, when we take off, we'll take a picture of the package, so that when we come back, we can look for that exact spot [and land]. We call it the natural feature tracker."
I have some screengrabs from Loc's algorithm at work; take a look. He ran an example image for us: the top one is the reference picture with no alterations, and the bottom one has been shifted around to mimic what a UAV might see as it comes in to land. The algorithm looked at the bottom image and provided directional suggestions to make it line up with the reference picture— in this case, it suggested that we move left and up a bit, plus yaw a little clockwise.
The reference image, taken from the onboard camera... |
...and what the UAV might see as it makes its approach. |
Loc's natural feature tracker (NFT) is only one half of the equation, however. Matt's control algorithm takes the output from the feature tracker and uses it to autonomously guide the vehicle into position.
"The challenge is, given an image like this, what do we tell the vehicle to do?" Matt said. "My controller has to take these images, track the feature, and apply force and torque to the vehicle."
For instance, in the example above, Matt's controller would take the feature tracker's recommendation to go left and up a bit, and then manipulate the vehicle to actually move it left and up a bit. Make sense? Loc's software provides the impulse; Matt's software provides the action. Together, they make a precision-landing powerhouse.
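In control terms, the simplest version of that hand-off is a proportional law: scale the tracker's suggested correction into a velocity command and let the autopilot's inner loops do the rest. The toy sketch below is exactly that and nothing more; the gains are invented, and Matt's actual controller works at the force-and-torque level rather than the velocity level.

```python
# Toy proportional mapping from the tracker's suggestion to a velocity command.
# NOT Matt's controller (his applies force and torque); the gains, limits, and
# units here are invented for illustration.
K_XY = 0.002    # meters per second of lateral speed per pixel of image offset
K_YAW = 0.5     # degrees per second of yaw rate per degree of yaw error
MAX_V = 0.3     # cap on lateral speed during the final approach

def landing_command(dx_px, dy_px, dyaw_deg):
    """Turn an image-space correction (pixels, degrees) into a velocity command."""
    vx = max(-MAX_V, min(MAX_V, K_XY * dx_px))   # left / right
    vy = max(-MAX_V, min(MAX_V, K_XY * dy_px))   # forward / back
    yaw_rate = K_YAW * dyaw_deg
    return vx, vy, yaw_rate

# The "move left and up a bit, plus yaw a little clockwise" case from above:
print(landing_command(dx_px=-40, dy_px=25, dyaw_deg=5))
```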
In today's tests, Loc and Matt wanted to hand-fly a quadrotor while their software ran in the background— not flying autonomously yet, but still generating output for the researchers to review for accuracy. However, they had some obstacles to overcome first.
"We need to take the [reference picture] in order to see if the algorithm works, but with someone holding it, their feet were always in the picture. And we can't have that," Matt said. Which led to this in the flight range:
PI Ralph Williams makes a hardware adjustment. |
The ingenuity of the engineers here at NASA Langley Research Center cannot be stifled. Suspended as though hovering about five feet above the ground, the UAV took an accurate, feet-free picture to use in feature matching and tracking. Ralph did the honors of hand-flying the UAV around the range while Matt observed his algorithm's output.
Now, with a few rounds of successful hand-flying tests in the logbooks, Matt and Loc intend to move to remote-controlled indoor flights for their next step. We'll keep you posted on those and any other exciting developments in this mission right here on the Ai blog.