AI head Danette Allen introduces Michael Wagner.
Did I mention he's also built robots for the military? |
"You have to basically replicate part of the autonomy algorithm in order to monitor it," he said.
Wagner's solution: create "run-time safety monitors" that take whatever the autonomy algorithm spits out and check it against other, independently verified inputs, such as sensor data, before passing it along as verified output. If the algorithm's result doesn't check out, the safety monitor catches the error and sends nothing. By working only with the results of the autonomy algorithm instead of tackling the algorithm itself, the monitor saves time and computing power while still catching errors.
"Checking is easier than doing," he summarized.
The hard part of this safety-checking system, and the meat of his current research, is testing and verifying perception. Computer vision is messy to work with in the first place; how can it be verified autonomously? For this, Wagner is training a machine-learning algorithm: he shows it pictures of a certain object (car, bus, pedestrian), asks it to identify those objects, and corrects it when it makes a mistake. With enough training, the machine-learning algorithm can be used to verify the results of the autonomy algorithm.
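Here's a rough sketch of that train-and-verify loop, using a toy nearest-centroid classifier on synthetic features as a stand-in for real computer vision. Every name here is hypothetical, but the shape of the process matches the description above: show labeled examples, correct mistakes, then flag disagreement.

```python
# A toy sketch of training a verifier and using it to check perception
# output. The synthetic "image features" and the nearest-centroid model
# are illustrative stand-ins, not the actual CV pipeline.

import numpy as np

rng = np.random.default_rng(0)
CLASSES = ["car", "bus", "pedestrian"]


def make_example(label: str) -> np.ndarray:
    # Synthetic feature vectors: one cluster per object class.
    center = {"car": 0.0, "bus": 5.0, "pedestrian": 10.0}[label]
    return rng.normal(center, 1.0, size=4)


class NearestCentroidVerifier:
    """Toy learner: keeps a running mean feature vector per class."""

    def __init__(self):
        self.sums = {c: np.zeros(4) for c in CLASSES}
        self.counts = {c: 0 for c in CLASSES}

    def train(self, features: np.ndarray, true_label: str) -> None:
        # "Show it a picture, correct it when it's wrong": each labeled
        # example updates that class's centroid.
        self.sums[true_label] += features
        self.counts[true_label] += 1

    def predict(self, features: np.ndarray) -> str:
        centroids = {c: self.sums[c] / max(self.counts[c], 1) for c in CLASSES}
        return min(CLASSES, key=lambda c: np.linalg.norm(features - centroids[c]))


verifier = NearestCentroidVerifier()
for _ in range(200):
    label = rng.choice(CLASSES)
    verifier.train(make_example(label), label)

# Verification step: flag the autonomy algorithm's perception output
# when the trained verifier disagrees with it.
features = make_example("pedestrian")
autonomy_says = "car"  # hypothetical (wrong) perception output
print("pass" if verifier.predict(features) == autonomy_says else "flagged")  # -> flagged
```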
In short: Yes.
"It's not, what is it, Skynet? It's not Skynet," he said, to amused chuckles from the audience.