Dr. Simon Haykin
The lecture then focused primarily on controlling risk and mimicking human cognitive processing. How do we strengthen and improve an autonomous system? Dr. Haykin proposes the following: supply the system with cognition, bring in control and decision-making, and ultimately the machine can manage both uncertainty and risk.
Evaluating and overcoming risk is the most difficult cognitive function for a machine to mimic. It requires a structure similar to the feedback channels in humans, as the internal network must decide how to act based on observation and evaluation of the environment. A machine's ability to observe and evaluate thus becomes integral, and for this Dr. Haykin suggests a machine's working memory. It is split into two libraries of potential action: one holds actions the machine has been taught, while the other holds its past experiences. In the face of uncertainty, the latter library is more reliable and closer to the world, because the machine can draw on its own experience of a situation rather than on an action learned for the given circumstance.
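To make the two-library idea concrete, here is a minimal sketch in Python. This is an illustration of the concept as described above, not Dr. Haykin's actual architecture; the class and method names (`WorkingMemory`, `choose_action`, the 0.5 uncertainty threshold) are assumptions chosen for the example.

```python
class WorkingMemory:
    """Illustrative sketch of a working memory split into two libraries:
    taught actions versus actions drawn from past experience."""

    def __init__(self):
        self.taught_actions = {}  # situation -> action supplied by training
        self.experiences = {}     # situation -> action that worked in practice

    def teach(self, situation, action):
        """Store an action supplied by instruction or training."""
        self.taught_actions[situation] = action

    def record_experience(self, situation, action):
        """Store an action that succeeded in a real encounter."""
        self.experiences[situation] = action

    def choose_action(self, situation, uncertainty):
        """Under high uncertainty, prefer past experience over taught rules,
        since experience is 'closer to the world'. The 0.5 threshold is an
        arbitrary illustrative choice."""
        if uncertainty > 0.5 and situation in self.experiences:
            return self.experiences[situation]
        # Otherwise fall back to the taught action, then to experience.
        return self.taught_actions.get(
            situation, self.experiences.get(situation))
```

For example, a vehicle taught to stop at an obstacle, but which has successfully swerved around one before, would draw on that experience when the situation is highly uncertain:

```python
wm = WorkingMemory()
wm.teach("obstacle_ahead", "stop")
wm.record_experience("obstacle_ahead", "swerve_left")
wm.choose_action("obstacle_ahead", uncertainty=0.9)  # -> "swerve_left"
wm.choose_action("obstacle_ahead", uncertainty=0.1)  # -> "stop"
```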
At the Autonomy Incubator, we design autonomous systems with the self-contained ability to execute a mission without direct human involvement, so much of our work aligns with this concept. Similar to Dr. Haykin's ideal system, in which the machine recognizes the correct cognitive action to perform within its environment, our computer vision engineer Dr. Loc Tran is researching machine learning to provide path training and autonomous obstacle avoidance (tree-dodging).
Thanks again to Dr. Haykin for coming to NASA LaRC!
The Autonomy Incubator team with Dr. Simon Haykin.