Wednesday, June 22, 2016

2016-06-22: Autonomy Incubator Seminar Series: Michael Wagner

Today, the Autonomy Incubator (Ai) was thrilled to host another lecturer in the Autonomy Incubator Seminar Series, Michael Wagner. Mr. Wagner is a Senior Project Manager at the National Robotics Engineering Center (NREC) at Carnegie Mellon University, as well as one of the founders of Edge Case Research LLC. His talk was titled "Developing Trust in Autonomous Vehicles: Strategies for building safer robots."

Ai head Danette Allen introduces Michael Wagner.
During his long and illustrious (seriously, he's built robots everywhere from Antarctica to the Atacama Desert) career as a roboticist, Wagner has seen three recurring errors cause autonomous robots to fail: memory leaks, logic bugs, and library instability. As autonomous robots grow more sophisticated and take on more tasks, he has made it his mission to find a way to mitigate those errors so that an autonomous robot that looks, say, like this one...

Did I mention he's also built robots for the military?
...always behaves in a safe and predictable way. However, autonomously checking an autonomous robot poses obvious challenges.

"You have to basically replicate part of the autonomy algorithm in order to monitor it," he said.

Wagner's solution: create "run-time safety monitors" that take whatever the autonomy algorithm spits out and check it against other, independently verified inputs, such as sensor data, before passing it along as verified output. If the algorithm's results don't check out, the safety monitor catches the error and blocks the command. By working with just the results of the autonomy algorithm instead of tackling the algorithm itself, he saves time and computing power while still catching errors.

"Checking is easier than doing," he summarized.

One problem within this safety-checking system, and the meat of his current research, is testing and verifying perception. Computer vision is messy to work with in the first place; how can it be verified autonomously? For this, Wagner is training a machine learning algorithm: he shows it pictures of a certain object (car, bus, pedestrian), asks it to identify those objects, and corrects it when it makes a mistake. With enough training, the model can be used to verify the perception results of the autonomy algorithm.
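The train-then-verify loop can be sketched roughly as follows. This uses scikit-learn and synthetic feature vectors purely as stand-ins; Wagner's actual tools, data, and agreement criteria are not described in the talk, so everything here is an illustrative assumption.

```python
# Supervised train-then-verify sketch: learn from labeled examples,
# then use the trained model to cross-check the autonomy stack's labels.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

LABELS = ["car", "bus", "pedestrian"]

# Pretend each image has already been reduced to a 16-dimensional feature vector.
rng = np.random.default_rng(0)
train_features = rng.normal(size=(300, 16))
train_labels = rng.choice(LABELS, size=300)   # human-provided ground truth (synthetic here)

# "Show it pictures and correct it when it's wrong": supervised training
# bakes those corrections into the verifier model.
verifier = RandomForestClassifier(n_estimators=50, random_state=0)
verifier.fit(train_features, train_labels)

def cross_check(features: np.ndarray, autonomy_label: str,
                min_confidence: float = 0.6) -> bool:
    """Return True if the trained verifier confidently agrees with the
    autonomy algorithm's label; otherwise flag a disagreement."""
    probs = verifier.predict_proba(features.reshape(1, -1))[0]
    best = verifier.classes_[np.argmax(probs)]
    return best == autonomy_label and probs.max() >= min_confidence

# Example: the autonomy stack claims an object is a pedestrian.
sample = rng.normal(size=16)
print("verifier agrees:", cross_check(sample, "pedestrian"))
```

A disagreement would feed back into the safety monitor above, which could then withhold or override the questionable output.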

In short: Yes.
With Wagner's "safing gate" in place, autonomous robots will become easier for humans to trust, not only because they'll consistently behave in predictable ways, but also because we'll know that their behavior is being monitored and regulated.

"It's not, what is it, Skynet? It's not Skynet," he said, to amused chuckles from the audience.


