Friday, December 19, 2014

2014-12-16: Autonomy Incubator Monthly Demonstration: November/December 2014




We held a combined November/December demonstration on 16 December 2014 in our new Autonomy Incubator location. Mark talked about his progress on machine learning for decision-making in critical phases of flight. Bill showed advances in his onboard target identification work. Paul, Charles, and Jim demonstrated heroic efforts and tremendous progress on the control algorithms, as you can see in the video.

Monday, April 7, 2014

2014-03-17: Autonomy Incubator Seminar Series, Dr. Jon How


Dr. Jonathan P. "Jon" How, Professor of Aeronautics and Astronautics at the Massachusetts Institute of Technology (MIT), was the March speaker in our monthly seminar series on autonomy. He presented recent algorithms and results for safe, real-time motion planning under uncertainty for single- and multi-agent systems in dynamic environments. To navigate safely in a dynamic environment, an autonomous system must be able to overcome uncertainty in both its own motion and the motion of other agents. Robust planning can be achieved by embedding probabilistic uncertainty models into a chance-constrained RRT* planner (CC-RRT*). This algorithm leverages the sampling-based nature of RRTs to generate probabilistically feasible solutions in real time, with guarantees on the maximum risk of constraint violation. CC-RRT* has been demonstrated to efficiently produce risk-aware trajectories for a variety of complex aerospace-related motion planning problems, including urban driving and parafoil terminal guidance.
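
To make the chance-constrained idea concrete, here is a minimal sketch in Python of a CC-RRT-style planner. This is our illustration, not Dr. How's implementation, and it omits the RRT* rewiring step; the 2-D world, the isotropic Gaussian position uncertainty, and all parameter values are assumptions. The key point it shows is that, for Gaussian uncertainty, a per-node chance constraint can be enforced conservatively by inflating each obstacle.

import math, random

# Hypothetical 2-D world: circular obstacles given as (x, y, radius).
OBSTACLES = [(5.0, 5.0, 1.5), (2.0, 7.0, 1.0)]
SIGMA = 0.2    # assumed isotropic position uncertainty (standard deviation)
Z = 1.645      # one-sided Gaussian quantile for a 5% per-node collision risk

def chance_feasible(x, y):
    # For Gaussian position uncertainty, P(collision) <= 5% at a node is
    # enforced conservatively by inflating each obstacle by Z * SIGMA.
    return all(math.hypot(x - ox, y - oy) > r + Z * SIGMA
               for ox, oy, r in OBSTACLES)

def steer(a, b, step=0.5):
    # Move from node a toward sample b by at most one step length.
    d = math.hypot(b[0] - a[0], b[1] - a[1])
    if d <= step:
        return b
    return (a[0] + step * (b[0] - a[0]) / d, a[1] + step * (b[1] - a[1]) / d)

def cc_rrt(start, goal, iters=5000):
    parent = {start: None}                      # tree stored as node -> parent
    for _ in range(iters):
        q = goal if random.random() < 0.1 else \
            (random.uniform(0, 10), random.uniform(0, 10))
        near = min(parent, key=lambda n: math.hypot(n[0] - q[0], n[1] - q[1]))
        new = steer(near, q)
        if chance_feasible(*new):               # only keep risk-bounded nodes
            parent[new] = near
            if math.hypot(new[0] - goal[0], new[1] - goal[1]) < 0.5:
                path, n = [], new               # goal reached: walk back to root
                while n is not None:
                    path.append(n)
                    n = parent[n]
                return path[::-1]
    return None

print(cc_rrt((0.0, 0.0), (9.0, 9.0)))

Every node admitted to the tree satisfies the tightened constraint, so any returned path is probabilistically feasible under the assumed uncertainty model.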

Reliable robust planning also depends on the availability of probabilistic models that accurately represent the uncertainty in the environment and its evolution. The talk therefore also presented recent results on efficiently learning these models for dynamic environments. His group uses Bayesian nonparametric models, which uniquely provide the flexibility to learn model size and parameters that are often very difficult to determine a priori. For example, Gaussian processes (GPs) are used to represent the trajectory velocity fields of the static and dynamic obstacles in the environment, a Dirichlet process GP mixture (DP-GP) is used to learn the number of motion models and their velocity fields, and the dependent Dirichlet process GP mixture (DDP-GP) is used to capture the same quantities and their temporal evolution. These techniques have been used to learn models of the motion and intent of other drivers and pedestrians to improve the performance of an autonomous car.
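
As a small illustration of the velocity-field idea, the sketch below fits one GP per velocity component to synthetic observations of a single moving obstacle and queries the predicted velocity, with uncertainty, at a new position. It uses scikit-learn's GaussianProcessRegressor; the data, kernel choice, and parameters are our assumptions, and a full DP-GP mixture would additionally infer how many distinct motion patterns the data contains.

import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# Synthetic training data: positions where a tracked pedestrian was seen
# and the velocities observed there (one motion pattern, for illustration).
rng = np.random.default_rng(0)
X = rng.uniform(0, 10, size=(40, 2))            # observed (x, y) positions
V = np.column_stack([np.cos(X[:, 1] / 3),       # toy velocity field vx(x, y)
                     np.sin(X[:, 0] / 3)])      # and vy(x, y)
V += 0.05 * rng.standard_normal(V.shape)        # measurement noise

# One GP per velocity component; the RBF length scale and the noise level
# are learned from the data rather than fixed a priori.
kernel = RBF(length_scale=1.0) + WhiteKernel(noise_level=0.01)
gps = [GaussianProcessRegressor(kernel=kernel).fit(X, V[:, i])
       for i in range(2)]

# Predict the velocity (with uncertainty) at a query position, which a
# planner such as CC-RRT* can use to propagate obstacle uncertainty.
q = np.array([[4.0, 6.0]])
for name, gp in zip(("vx", "vy"), gps):
    mean, std = gp.predict(q, return_std=True)
    print("%s: %+.2f +/- %.2f" % (name, mean[0], std[0]))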


Here are his charts:

Sunday, March 9, 2014

2014-02-27: Autonomy Incubator Seminar Series, Dr. Massimiliano Versace


Dr. Massimiliano "Max" Versace, founding Director of the Neuromorphics Lab at Boston University and CEO of Neurala Inc., was the February speaker in our monthly seminar series. He talked about the state of the art in neuromorphic "brain-based" computing, in particular its relevance to supporting advanced autonomous behavior in land and aerial vehicles. The talk focused on large-scale neuromorphic models that simulate key elements of perceptual, cognitive, and motivational competencies in both virtual environments and land/air robotic platforms. Compared to alternative approaches, the neuromorphic solutions he discussed rely on parallel processing, as well as learning and adaptation in relatively simple, neural-like computing elements, to solve problems ranging from navigation to sensing and decision-making. The most intriguing implication of neuromorphic research for robotics is that, quite often, the mechanisms observed in the brain appear as natural solutions to the problems that robotic navigation is struggling with, namely: the fusion of multiple sensory streams that dynamically correct each other, increasing the overall precision of the system; redundant representations that increase system robustness; and attractor dynamics that work as a low-pass filter to reduce the effects of sensory noise.

The talk illustrated two main applications of this principle in the context of the "Adaptive bio-inspired navigation for planetary exploration" NASA STTR Phase II (NASA Langley with the Boston University Neuromorphics Laboratory and Neurala LLC). This effort seeks to translate neuroscience research on animal navigation and sensing into usable software that can control land robots and Unmanned Aerial Vehicles (UAVs), specifically a "mini-brain" that can drive a Mars rover in a virtual environment, as well as applications to UAV collision avoidance.
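
To illustrate the attractor-as-low-pass-filter point, here is a toy Python sketch (our example, not Neurala's model): a one-dimensional leaky integrator whose state is attracted toward a noisy heading measurement, so rapid fluctuations are suppressed while the slowly varying signal passes through.

import random

def attractor_filter(measurements, tau=10.0, dt=1.0):
    # Leaky integrator: the state relaxes toward each measurement, but
    # the slow time constant tau suppresses high-frequency noise.
    state = measurements[0]
    filtered = []
    for z in measurements:
        state += (dt / tau) * (z - state)
        filtered.append(state)
    return filtered

true_heading = 1.0
noisy = [true_heading + random.gauss(0.0, 0.3) for _ in range(200)]
smoothed = attractor_filter(noisy)
tail = smoothed[50:]    # ignore the initial transient
print("raw spread:      %.2f" % (max(noisy) - min(noisy)))
print("filtered spread: %.2f" % (max(tail) - min(tail)))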



Here are his charts:

Monday, January 6, 2014

2013-12-16: Autonomy Incubator Seminar Series, Dr. Kevin Kochersberger


Dr. Kevin Kochersberger, Research Associate Professor in Mechanical Engineering at Virginia Tech, was the inaugural speaker in our seminar series. He presented an overview of the unmanned aerial systems (UAS) technologies that have been developed or are under development at the Unmanned Systems Lab. Working with the Defense Threat Reduction Agency (DTRA), the lab has designed and field-tested aerial radiation detection, radioactive material sampling, and image-based 3D mapping systems. This work has spawned imaging technologies, such as 3D terrain classification, that are useful in extending the utility of UAS for government (emergency response) and commercial (mining and aggregate) operators. The lab is also exploring aerial crop monitoring with multispectral imaging systems (visual, near-infrared, long-wave infrared, and ultraviolet) as it anticipates growing accessibility of UAS to farmers for precision crop monitoring. Air and ground vehicle design and integration are strengths of the lab, and he demonstrated an agile morphing flight control system that uses piezoceramic actuators on a 0.6 kg aircraft. He also presented a smart aerial radio-repeating system and an autonomous ground robotic collection system that uses vision for 3D mapping and navigation.

Here are his charts: