Wednesday, July 22, 2015

2015-07-22: Autonomy Incubator Seminar Series: Dr. Max Versace and the Neurala Team

Dr. Versace stands in front of Neurala's robot to demonstrate its obstacle avoidance capabilities.

A hockey puck-shaped robot, carrying a laptop and a Kinect™ on a set of shelves, was the star of the Autonomy Incubator (AI) today as it trundled around a makeshift enclosure in the AI flight range. CEO Dr. Massimiliano "Max" Versace, Dr. Matt Luciw, and doctoral candidate Jeremy Wurbs represented robotics company Neurala, a NASA Phase II SBIR recipient, on their visit to NASA Langley Research Center. They came not only to deliver a lecture, but also to demonstrate their navigation and collision avoidance research, as well as the soon-to-be-released Apple iOS app that has grown out of that research.

Neurala, as its name would suggest, applies concepts from neurobiology to its machine learning research. Specifically, they model two major components, object recognition and mapping, on the way the mammalian brain processes visual information.

"One main inspiration on the high level is the 'where' and the 'what' pathways in the brain," said Dr. Luciw during his talk. By using software to mimic the way human brains recognize individual objects and then place those objects in a mental 3D map of their surroundings, Neurala intends to create robots that can position themselves in maps of their own creation, based only on monocular video and IMU (Inertial Measurement Unit) data, what Dr. Versace calls "passive measurements." Remember Loc's research on autonomous navigation in GPS-denied environments? Neurala's robot works somewhat like that, but with more of an emphasis on object recognition and identification, or classification.
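To get a feel for why "passive measurements" alone are tricky, here is a heavily simplified sketch of the IMU half of that recipe: double-integrating accelerations into a position track. This is purely illustrative (the function name, time step, and 2D setup are my own assumptions, not Neurala's method); real systems fuse this drift-prone estimate with visual features from the camera.

```python
import numpy as np

def dead_reckon(accels, dt=0.1):
    """Double-integrate a sequence of 2D accelerations (ax, ay) into a
    position track. Each step: velocity += accel * dt, position += vel * dt.
    Integration error compounds, which is why IMU-only navigation drifts
    and visual landmarks are needed to correct it.
    """
    vel = np.zeros(2)
    pos = np.zeros(2)
    track = [pos.copy()]
    for a in accels:
        vel += np.asarray(a, dtype=float) * dt
        pos += vel * dt
        track.append(pos.copy())
    return np.array(track)
```

With a constant 1 m/s² forward acceleration and a 1 s time step, the track advances 1, 3, then 6 meters: the quadratic growth you'd expect from constant acceleration.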

Inside the enclosure, the robot detects and identifies a dog. The walls are covered in pictures to give the algorithm plenty of features to latch onto. (Yes, that's a photo of Mark Motter.)

Dr. Luciw pulls up several displays to show what data the robot is taking in.

"What we're building is a way for everybody to make a robot operate hands-free," said Dr. Versace. "The goal is for users to tell robots what to do, but not how to do it."

Neurala started in 2006 as a project in a Boston University business class, and in the nine years since, it has gained massive momentum in the deep learning community. The AI's Mark Motter has worked with Dr. Versace and his team since 2011, when their first NASA award brought them on board to do UAV collision avoidance research. Now, the applications Neurala envisions for its research range from toy robots controlled by their app, to industrial robots made safer through collision avoidance, to telepresence robots able to navigate gracefully through their surroundings.

"Another application we are working on is inspection," Dr. Versace said. Small UAVs can scan pipes or structures for defects, holes, and rust, then report back with their findings. "It can come back and say, hey, out of this five hours of video, I found three things I think you should look at."

Now, about this iOS app we keep mentioning: it's called Roboscope, and it pairs your device with a ground or air toy robot. On your screen, you choose a "target" from the scene around you, say, your friend's backpack, and the robot can autonomously follow him, stage a "sneak attack" where it swoops by or jumps out at him, and more. You can even "teach" the app to recognize faces and objects and enter them into its memory, so that whenever the device camera sees that person or thing, its name will pop up on the display.

Dr. Versace teaches Roboscope to recognize Mark Motter.

However, don't be fooled by all the fun Neurala is having: they're doing some serious autonomy research with fixed-wing UAVs as well. As Jeremy Wurbs elaborated during his part of the seminar today, his research on collision avoidance aims to meet the FAA's mandate that a UAS be able to fly "at least as well as a human pilot." Human vision, he explained, uses both peripheral vision and foveal (focused) vision systems to take in visual data, and he's implemented a similar approach in his collision detection algorithm. The peripheral system scans for any approaching aircraft; the foveal system then locks onto the traffic and uses optic flow to determine whether the UAS is on a collision course with it.
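The two-stage idea can be sketched in a few lines: a cheap "peripheral" pass flags any region with motion, and only then does an expensive "foveal" pass examine a high-resolution crop. This is a hypothetical illustration of the concept, not Neurala's actual pipeline; the block size, threshold, and function names are my own assumptions.

```python
import numpy as np

def peripheral_detect(prev, curr, block=8, thresh=10.0):
    """Cheap 'peripheral' stage: coarse frame differencing on a grid of
    block x block cells. Returns the top-left pixel of the most active
    cell, or None if nothing moved enough to warrant a closer look.
    """
    diff = np.abs(curr.astype(float) - prev.astype(float))
    h, w = diff.shape
    diff = diff[:h - h % block, :w - w % block]          # trim to whole blocks
    coarse = diff.reshape(h // block, block,
                          w // block, block).mean(axis=(1, 3))
    if coarse.max() < thresh:
        return None                                      # stay in peripheral mode
    by, bx = np.unravel_index(coarse.argmax(), coarse.shape)
    return by * block, bx * block

def foveal_crop(frame, top, left, size=32):
    """'Foveal' stage: hand a full-resolution crop around the detection
    to a more expensive test, such as an optic-flow expansion check."""
    return frame[top:top + size, left:left + size]
```

The design point is the asymmetry: the peripheral stage touches every pixel but does almost no work per pixel, while the foveal stage does heavy work on a small region it rarely has to visit.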

"We're looking for radially expanding flow fields," he said. Essentially, if something is expanding in the UAS's field of view, the algorithm knows it's approaching. If that expanding region encroaches on the area occupied by the UAS itself, the algorithm knows it's on a collision course and acts evasively.
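A radially expanding flow field has positive divergence, so one simple way to test for "looming" is to estimate the divergence of the flow with finite differences and compare its mean to a threshold. The sketch below assumes a dense flow field is already available (e.g., from an optic-flow estimator); the threshold value and the divergence test itself are illustrative assumptions, not the algorithm presented in the seminar.

```python
import numpy as np

def is_looming(flow_u, flow_v, threshold=0.05):
    """Flag a possible approach from a dense optic-flow field.

    flow_u, flow_v: 2D arrays of horizontal/vertical flow components.
    Estimates div(F) = du/dx + dv/dy with numpy.gradient; an object
    expanding in the field of view produces positive divergence.
    """
    du_dx = np.gradient(flow_u, axis=1)
    dv_dy = np.gradient(flow_v, axis=0)
    divergence = du_dx + dv_dy
    return bool(divergence.mean() > threshold)

# Synthetic example: flow vectors pointing radially outward from the
# image center, as an object on a direct approach would produce.
h, w = 64, 64
y, x = np.mgrid[0:h, 0:w]
u = (x - w / 2) * 0.1
v = (y - h / 2) * 0.1
print(is_looming(u, v))  # prints True
```

A purely translating field (constant flow everywhere) has zero divergence and is correctly ignored, which is what distinguishes an approaching object from ordinary ego-motion past the scene.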

Jeremy walks the crowd through some example simulations.

AI interns Meghan, Josh, Nick and Nick are feeling this lecture.

Want to see Jeremy Wurbs' algorithm in action? Neurala conducted live test flights with fixed-wing UASs in restricted airspace, and they were kind enough to upload footage from one of the cameras. Look at how quickly the algorithm picks up on the other vehicle when it enters from the left.



