Tuesday, December 4, 2018

2018-12-03: Andrew Miloslavsky: Former Video Game Competitor to NASA Programmer

Andrew Miloslavsky is a member of the AMA team.

Andrew Miloslavsky joined the Autonomy Incubator team nearly a year ago and is currently working as a programmer in support of the ATTRACTOR project.

Andrew received his Bachelor's degree in Computer Science from Hunter College in New York City. During his freshman year, he entered his first video game competition and, to his surprise, started a streak of wins.  He continued to compete through college, and in doing so established a short but fulfilling career in competitive gaming.

From there, he started his first job at the College of William and Mary in Williamsburg, Virginia, doing data analytics.  That work then led him to NASA Langley and into our very own branch!

"What I'm doing right now is I'm working mainly on the ATTRACTOR simulation for autonomous vehicles.  My job is to support the integration of autonomous systems into a simulated environment," he explained.

Simulations are extremely important for a project's goals because they allow one to test an algorithm much more safely.  Self-driving cars are a good example: in a simulation, you can test drive an autonomous vehicle without risking any real-world harm.  Some people have even incorporated autonomous algorithms into video games, like Grand Theft Auto.  They drive around inside the game, learning how to follow the rules of the road, and if the car does something harmful, you immediately know something went wrong and can fix it.  We're doing something similar (but for the greater good of society) by building our simulation on top of a gaming engine, as well.
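The safety argument can be sketched in a few lines of code.  The toy loop below is purely illustrative (the actual ATTRACTOR simulation is a far richer game-engine environment): it runs a candidate steering policy in a simulated lane and logs every boundary violation instead of letting it become a real-world accident.

```python
# Toy illustration (not the actual ATTRACTOR simulator): drive a point-mass
# "car" down a straight lane and flag any moment it leaves the lane, which
# in the real world would have been an accident.

def simulate(policy, steps=100, lane_half_width=1.5):
    """Run the policy in the simulated lane; return the trace and violations."""
    y, vy = 1.0, 0.0           # lateral offset (m) and lateral velocity (m/s)
    trace, violations = [], []
    for t in range(steps):
        steer = policy(y, vy)  # policy outputs a lateral acceleration command
        vy += 0.1 * steer      # simple Euler integration, dt = 0.1 s
        y += 0.1 * vy
        trace.append(y)
        if abs(y) > lane_half_width:
            violations.append(t)  # left the lane -- log it, don't crash a car
    return trace, violations

def pd_policy(y, vy, kp=2.0, kd=3.0):
    """A simple proportional-derivative policy steering back toward center."""
    return -kp * y - kd * vy

trace, violations = simulate(pd_policy)
print("lane violations:", len(violations))
```

Because everything is simulated, a bad policy here just produces a long violations list rather than a damaged vehicle.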

Overall, ATTRACTOR is the main project, and the simulation is basically an environment that allows researchers to test out autonomous behaviors.  "That's my main job," Andrew explained.  "I support the simulation, add new features, fix any bugs, and pretty much cover any feature requests or necessities that come up."

Andrew has been at the Ai for almost a year.

Andrew is part of the AMA (Analytical Mechanics Associates) software team and also generally supports whatever else comes up within it.  The team includes many of the key people Danette looks to for completing specific tasks.

Once ATTRACTOR wraps up in a few years, the team will be able to demonstrate the different capabilities and accomplishments the project has developed.  There are many different parts to the overall effort.  "All of the researchers around here are working on their little bits and pieces, and all of them will be joined together," he said.  "Some people are working on machine learning algorithms, some are working with computer vision, some are with trajectories."  It will all eventually be combined to meet the project's end goal.

Andrew has been a big part of the team so far, and we all look forward to seeing how the project progresses!

Friday, November 30, 2018

2018-11-30: Improvements Made to the Autonomy Incubator Flight Area!

Riggers came to help put up the new monitors!

B1230, the new location of our very own Autonomy Incubator, has been undergoing some fun home renovations!  The flight area has had a few changes, and we could not be more excited about it!

Since our mid-summer move, we have had four monitors on the back wall, but monitors five and six have finally joined the family!  Now, we are able to display the simulated versions of live tests to an even greater extent.

Because the monitors weigh several hundred pounds apiece, we had the help of riggers to hoist them to their rightful home.

The final display!

Along with the new additions, the floor has also been bead blasted in preparation for the "flat floor" installation. We're expecting the new floor to be finished in January.

Our floor has had some work as well.

We are ecstatic about finalizing the area and look forward to running more tests again soon.

Wednesday, November 21, 2018

2018-11-21: Autonomy Incubator Welcomes New Team Member Sherif Shazly

Sherif Shazly, the newest addition to the Ai team.

Last week, the Autonomy Incubator welcomed the newest addition to our team, Sherif Shazly!

Sherif received his Bachelor's degree in Mechanical Engineering from North Carolina State University.  He then continued his education at the University of Maryland, College Park, where he received his Master's in Robotics this past May.

Currently, Sherif is working as a Robotics Software Developer/Engineer at the Ai on the In-Space Assembly project.  He is creating a simulation of the set-up pictured below in Gazebo, which is used with ROS (Robot Operating System), the standard framework for writing robot software.

The robotic arm is able to pick up the trusses and move them to separate
locations.

"In the simulation environment, you can test out these algorithms so that you don't have to risk these expensive robots," he explained.  He is currently getting everything ready for a special demo that is coming up next month!

In summary, the In-Space Assembly project demonstrates that robots can be used to move and assemble trusses autonomously.  Right now, we are doing it on a smaller scale, but the ultimate goal, of course, is to have this set-up in space.  The process is well explained in a previous blog post here, which showcases the work of former intern Chase Noren.
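As a rough illustration of the kind of math behind moving an arm's end effector to a truss, here is the classic two-link planar inverse-kinematics solution.  The link lengths and target point are made up for the example; the real In-Space Assembly arm and its Gazebo model have more joints and full 3-D geometry.

```python
import math

def two_link_ik(x, y, l1=1.0, l2=0.8):
    """Joint angles that place a planar 2-link arm's end effector at (x, y).

    Illustrative only -- link lengths are hypothetical, not the ISA hardware.
    """
    r2 = x * x + y * y
    # Law of cosines gives the elbow angle.
    c2 = (r2 - l1 * l1 - l2 * l2) / (2 * l1 * l2)
    if not -1.0 <= c2 <= 1.0:
        raise ValueError("target out of reach")
    theta2 = math.acos(c2)                       # elbow-down solution
    theta1 = math.atan2(y, x) - math.atan2(l2 * math.sin(theta2),
                                           l1 + l2 * math.cos(theta2))
    return theta1, theta2

def forward(theta1, theta2, l1=1.0, l2=0.8):
    """Forward kinematics: end-effector position for given joint angles."""
    x = l1 * math.cos(theta1) + l2 * math.cos(theta1 + theta2)
    y = l1 * math.sin(theta1) + l2 * math.sin(theta1 + theta2)
    return x, y

# Move the end effector to a hypothetical truss pickup point.
target = (1.2, 0.9)
angles = two_link_ik(*target)
reached = forward(*angles)
print("joint angles:", angles, "reached:", reached)
```

Running the forward kinematics on the solved angles lands the end effector back on the target, which is exactly the kind of check a simulation makes cheap.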

Sherif and Walter Waltz, another Ai team member who is also working on the project, have many goals for the future.  These include adding cameras for autonomous environmental detection, as well as optimal controllers for force control.  They also want to "improve our motion planning techniques with more complicated controllers."

Sherif and Walter Waltz are both working on the In-Space Assembly project.

We are very happy to have Sherif join our team and look forward to seeing how the research progresses.  Welcome aboard!

Thursday, August 30, 2018

2018-08-30: Payton Heyman Exit Presentation Summer 2018


As sad as I am to leave, my internship has come to an end for the summer, and I will now be heading towards Georgia to begin my freshman year at the Savannah College of Art and Design.  After ten weeks of tweets, Facebook posts, blogs, YouTube videos, and Instagram posts, watch my exit presentation to see how the Autonomy Incubator's social media presence has grown!

See you again in November!

2018-08-30: Meghan Chandarana Exit Presentation Summer 2018


Meghan has wrapped up her fourth internship with the Autonomy Incubator!  This summer she was a first time Pathways intern.  Watch her exit presentation to learn about her work with mission planning for robot swarms!

2018-08-30: Jim Ecker and his GAN-Filled World

Jim Ecker is a member of the Data Science team at NASA Langley and a part of the ATTRACTOR project, working specifically in the Autonomy Incubator.  Jim received his Bachelor's degree in Computer Science from Florida Southern College and his Master's in Computer Science from Georgia Tech, specializing in machine learning and artificial intelligence.

Coming out of school, he was a software engineer for about eight years and then went to Los Alamos National Lab, where he worked in the supercomputing and intelligence and space research divisions.  From there, he began work at NASA Langley.

Jim has his Master's degree in Computer Science.

HINGE, which was recently outlined here, partially supports what Jim is doing.  The main project he is working on right now uses Generative Adversarial Networks (GANs), a machine learning technique that lets a computer generate data from an example data set, such as producing an image from a description.

Using a few different photo databases, including Flickr30K and COCO (Common Objects in Context), he has over 200,000 real images, each annotated with five different descriptions.  The compiled images include everything you can think of, from cars and boats to animals and people, but his main focus is people, in order to aid the search and rescue research that takes place at the Ai.  He went through the COCO dataset and, as best he could, pulled out all of the images containing people, and he is currently curating data from Flickr30K.

The more captioned images you have, the more data the neural network receives, and it will eventually begin to better understand the pieces of an image.  For example, after receiving many different images whose captions contain the word "glasses," the neural network will eventually learn exactly what the word means and be able to detect what glasses look like based on the pixels and the similarities between the different images.  If you ask it to create, say, an image of 'a woman in a red cocktail dress,' the neural network will draw on what it knows from the other images it has seen and their descriptions.  "As it looks at more and more examples and learns more and more how to make that kind of thing, it starts to learn what different things are and make a better drawing," he explained.  "It's very weird what it's doing when you first think about it; it starts with what looks completely random, and it eventually learns how to draw the pieces."

A woman in a cocktail dress.

The cocktail dress example is actually quite mind-blowing to us.  The network connected 'cocktail' to a cocktail bar, and since bars typically have mirrors, it was able to mirror the woman's back in the top right of the image.
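The adversarial training loop behind results like this can be sketched at toy scale.  The example below is an assumption-laden stand-in for Jim's text-to-image system: a one-parameter generator plays against a logistic discriminator on one-dimensional "data," and by trying to fool the discriminator it gradually learns the real distribution's mean.

```python
import math, random

random.seed(0)

def sigmoid(v):
    return 1.0 / (1.0 + math.exp(-v))

# Real "data": numbers drawn from N(4, 1). The generator G(z) = mu + z can
# match that distribution exactly once it learns mu = 4. The discriminator
# D(x) = sigmoid(w*x + b) tries to tell real numbers from generated ones.
mu, w, b = 0.0, 0.0, 0.0
lr, batch, decay = 0.02, 64, 0.998

for step in range(3000):
    real = [random.gauss(4.0, 1.0) for _ in range(batch)]
    fake = [mu + random.gauss(0.0, 1.0) for _ in range(batch)]

    # Discriminator ascent on log D(real) + log(1 - D(fake)).
    gw = gb = 0.0
    for x in real:
        d = sigmoid(w * x + b)
        gw += (1 - d) * x
        gb += (1 - d)
    for x in fake:
        d = sigmoid(w * x + b)
        gw -= d * x
        gb -= d
    w = decay * w + lr * gw / (2 * batch)   # small weight decay keeps the
    b = decay * b + lr * gb / (2 * batch)   # adversarial game from oscillating

    # Generator ascent on log D(fake) (the non-saturating GAN loss).
    gmu = sum((1 - sigmoid(w * x + b)) * w for x in fake)
    mu += lr * gmu / batch

print("learned mean:", round(mu, 2))  # drifts toward the real mean of 4
```

The same push-and-pull happens in Jim's system, just with an image-producing network as the generator and captions conditioning what it draws.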

The descriptions of the images, and how in-depth they go, are very important.  You can have a caption that simply says "a woman with an umbrella," but how much does that really tell you? Not much.  A better description would include both what the person is wearing and what they are doing, like "a woman in a black jacket, a blue and white checkered shirt, white pants, and black shoes is carrying an umbrella while walking through the rain."  With more detail, the neural network is able to learn and draw more from the information.  "I need the descriptions to be as specific as possible," Jim said.
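One toy way to see why the richer caption helps is to count how many attribute words it offers the network to learn from.  The little lexicon below is invented for this example; it is not the HINGE team's actual analysis.

```python
# Hypothetical illustration of caption specificity: count attribute words
# (colors, clothing, actions) in a caption. The lexicon is made up for the
# example, not drawn from the real HINGE analysis.
COLORS = {"black", "white", "blue", "red", "checkered"}
CLOTHING = {"jacket", "shirt", "pants", "shoes", "dress", "umbrella"}
ACTIONS = {"carrying", "walking", "running", "holding"}

def specificity(caption):
    """Count distinct attribute words in each category."""
    words = set(caption.lower().replace(",", "").split())
    return {
        "colors": len(words & COLORS),
        "clothing": len(words & CLOTHING),
        "actions": len(words & ACTIONS),
    }

short = "a woman with an umbrella"
rich = ("a woman in a black jacket, a blue and white checkered shirt, "
        "white pants, and black shoes is carrying an umbrella while "
        "walking through the rain")
print(specificity(short))  # few attributes to learn from
print(specificity(rich))   # many more attributes
```

The short caption yields almost nothing for a network to ground, while the rich one supplies colors, garments, and an action, which is exactly the gap Jim is describing.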

Following his explanation, he showed me a demo, where he described me and what I was wearing to see how it would come out.  With the description being "a woman wearing a black and white striped shirt with a black sweater," you can see what I looked like below!


A renaissance painting entitled: Payton Heyman

I'd say it looks just like me! Especially with the wind-swept ponytail.

Along with the generated image, the demo also has handy visualizations of "how each word in the description is mapped to different parts of the generated image," as Jim said.  "It gives each of these words some weight, saying here's the woman, here's the sweater, and so on.  These visualizations are key to providing explainability to an agent's environmental understanding."


The annotation tags highlight each characteristic.

He explained how the hope is that the system will get much better at accurately visualizing what a described person looks like.  It stores information much the way the human brain stores visual information, based on the Dual-Coding Theory.  This is how it all ties into the search and rescue research.  "It encodes the features basically into memory storage in your mind.  The idea is to kind of try to replicate this so that when an object detector [like a drone] is looking around with a camera, every time it sees a person it can store the represented information and compare it to what it already knows."  When it finds someone who looks like the person being sought, it would realize, "oh, that's who they were talking about."  The idea is that once Jim trains the object detector enough, it would hopefully be able to recognize someone successfully, but to be that specific you would need thousands of pictures of that person and quite a long training time.  "Using synthesized visualizations from a GAN does this in an unsupervised manner, requiring much less data."
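The compare-to-memory step can be sketched with plain cosine similarity between feature vectors.  The vectors and threshold below are hand-made for illustration; a real system would compare learned embeddings from a network, not four-number toys.

```python
import math

def cosine(a, b):
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Stored "memory" of the described person (hypothetical feature vector).
target = [0.9, 0.1, 0.8, 0.2]

# Feature vectors for people the drone's camera has spotted (also made up).
sightings = {
    "person_A": [0.2, 0.9, 0.1, 0.7],     # quite different from the target
    "person_B": [0.85, 0.15, 0.75, 0.3],  # close to the description
}

THRESHOLD = 0.95
for name, feats in sightings.items():
    score = cosine(target, feats)
    print(name, round(score, 3), "MATCH" if score >= THRESHOLD else "no match")
```

Each sighting is scored against the stored representation, and only a sufficiently similar one triggers the "oh, that's who they were talking about" moment.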

We love our monitors at the Autonomy Incubator.

"Other than all of this, I am also working on deep reinforcement learning," he said.  For example, he is teaching an agent to play Super Mario Bros.™  The agent looks at all of the pixels on the screen in order to decide what to do, and as it plays the game, it learns more and more about what each action does and what to do in a given situation.  Jim can pull up the source to show which actions are happening at any given moment during the game.  "It's kind of like a hexadecimal representation in and of the buttons; some of them might even be combinations of a Nintendo™ controller."
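The trial-and-error idea behind the Mario agent can be shown at miniature scale with tabular Q-learning: a lookup table stands in for the deep network, and a five-cell corridor stands in for the game screen.  This is a sketch of the general technique, not Jim's actual setup.

```python
import random

random.seed(1)

# Tiny stand-in for the Mario setup: the "state" is a cell index in a 5-cell
# corridor rather than screen pixels; reaching cell 4 gives reward +1. The
# agent learns action values purely from trial and error.
N_STATES, ACTIONS = 5, (-1, +1)   # move left / move right
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.2

for episode in range(500):
    s = 0
    while s != N_STATES - 1:
        # Epsilon-greedy: mostly exploit the best-known action, sometimes explore.
        if random.random() < epsilon:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: Q[(s, act)])
        s2 = min(max(s + a, 0), N_STATES - 1)
        r = 1.0 if s2 == N_STATES - 1 else 0.0
        # Q-learning update toward reward plus discounted best future value.
        best_next = max(Q[(s2, act)] for act in ACTIONS)
        Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
        s = s2

policy = [max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES - 1)]
print("learned policy:", policy)  # expect all +1 (always move right)
```

Swap the table for a neural network reading raw pixels and you have the flavor of the deep reinforcement learning Jim describes.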

As mentioned previously, the HINGE project supports his research.  It has given him and the rest of the team an idea of what type of data they need to feed the GAN in order to get the kind of output they need: how best to talk to it, and which descriptions help it visualize most accurately.  Jim's work is improving by the day, and we look forward to seeing how it progresses even more!

Sunday, August 26, 2018

2018-08-26: Women's Equality Day and the Rise of Women in STEM

The women of the Autonomy Incubator

It's Women's Equality Day, and the women of the Autonomy Incubator are celebrating all of our hard work.  With degrees in fields ranging from Social Science, Mechanical Engineering, and Psychology to Media Production and Computer Science, the women of NASA sport their STEM knowledge with pride.

According to the Economics and Statistics Administration, forty-seven percent of all jobs were held by women in 2015; however, they only held twenty-four percent of STEM jobs.  This is a big part of what has inspired the women of the Ai to pursue their education and empower other women to do the same.

One of the greatest aspects of the Ai is its female leadership.  From Danette Allen, head of the lab and Co-PI of the ATTRACTOR project alongside Natalia Alexandrov, another strong woman at NASA, to Lisa Le Vie, head of the HCI team, they each play significant roles here.  Additionally, they all set a great example for the younger generation of interns who have the opportunity to be here, like Erica Meszaros, Meghan Chandarana, and myself.

Danette and Lisa even received Mentorship Awards at the beginning of this
month, nominated by Ai interns.  Danette received four nominations, and
Lisa received one.

As mentioned previously, a large variety of backgrounds and degrees is represented at the Ai.  Erica has dreamed of working at NASA since she was young.  With an educational background primarily in the social sciences and humanities, she described how "finding a place that recognizes the importance of scientific analysis informed by these disparate backgrounds as crucial, is difficult."  This is part of why we believe the Ai excels: "it draws from many different backgrounds in order to pursue its research goals."  Her work specifically looks at the use of "linguistic analysis to evaluate human/autonomous system teaming and interface design to aid in trusted autonomy," but she is surrounded by very different research as well.  There are people working with deep reinforcement learning, with mechanical engineering, and even, in my case, with digital media and social platforms to communicate the science.

Meghan had similar dreams of NASA when she was growing up as well.  "Without the women pioneers that came before me, I would not have even thought it was possible to do the research I do today," she explained.  "On a daily basis I am surrounded with strong, passionate, and talented women that challenge me to reach beyond the visible boundaries – in to the infinite potential that lays waiting to be discovered. My hope is that the work we do in the Ai shows young girls that they can do anything they put their mind to."

Erica, myself, and Meghan are the remaining female summer interns.  We
have all enjoyed every second of it!

It's amazing to see so many powerful women in a field that, statistically speaking, is generally dominated by men.  As someone who has only just graduated high school, I am so grateful to have had the opportunity to work with so many wonderful people this early in my life and to get a realistic view of the world of STEM.

In the words of Erica, "it’s so important to not only see gender representation at the higher levels but also to have the opportunity to see a workplace filled with intelligent and capable women researchers. This is the NASA I dreamed of as a little girl and the direction I hope it continues in the future!"

Friday, August 24, 2018

2018-08-24: An Introduction to the HINGE Project

Bryan Barrows and Erica Meszaros, some of the stars of HINGE.

With the recent introduction of the ATTRACTOR project, there are also supporting efforts that have been patiently waiting for their time to shine on the Ai blog.  HINGE is a project working to support the goals of ATTRACTOR, but before diving too deep, the waterfall of acronyms it brings along must be introduced first.

HINGE stands for Human Informed Natural-language GANs Evaluation.  As you can see, within that acronym is the acronym GAN, meaning Generative Adversarial Network, a machine learning algorithm that lets a computer generate data from an example data set, such as producing an image from a description.

So, what does all this really mean, and what exactly is the project?  Well, with ATTRACTOR's aim to increase trust in and trustworthiness of autonomous systems, it is critical to examine human/machine interaction (HMI): how human operators interact with these systems.  HINGE was the first step under the HMI team, examining ways in which autonomous systems on search and rescue (SAR) missions can get useful information from human colleagues.  GANs are able to produce images of a missing person, which can then help the machine identify them; however, more descriptive data is needed to train the GAN and, more importantly, we need to ensure that we can get critical information in an emergency situation.  In summary, HINGE is a project designed to gather image-description data in order to help us understand what descriptive information we need and how we can best provide it.  Of course, we also get to test out some fun machine learning algorithms.

Now, it's time to meet the HMI team!  Lisa Le Vie is the Principal Investigator and lead for the HINGE effort.  Essentially, she sees everything from a global point of view, including where it's headed in the future.

Lisa giving a presentation of her work at the 2018 Aviation conference.

Bryan Barrows, a graduate of Virginia Tech, focuses on natural language processing (NLP) and data analysis.  He worked with Lisa and Mary Carolyn earlier this year to collect a data set from employees on center at the Langley cafeteria, which consists of over five hundred descriptions used to train the GAN.  Over the course of the summer, he has worked closely with Lisa and Erica Meszaros on understanding the collected descriptions.  Bryan has developed and employed several NLP methods to better parse and understand the description data.  His analysis of the collected HINGE data set has led to the derivation of several key semantic features that are helpful for training machine learning models, as well as for understanding the desired image representations produced by the GAN.

Bryan was once a NIFS intern, but he is now a civil servant at NASA.

Erica L. Meszaros, a returning Ai intern, joined the HMI team at the beginning of the summer.  Because of her background in linguistics and modalities of interaction for human/autonomous system interfaces, she has been focusing primarily on applying linguistic analysis to the HINGE data.  "We’re approaching this analysis from a lot of really interesting angles informed by machine learning requirements, situational and contextual interaction, and modalities of communication, which makes it a really neat kind of interdisciplinary puzzle," she stated.

In addition to all of her HINGE work, she has also assisted me greatly with the extensive analyses we have written for the blog.  This one is included!

 Erica has spent a lot of time on the BirdGAN.

Mary Carolyn Last and Miranda Smith are no longer at the Ai, but they are very important to mention.  Both of these young women played a big part in the project, and their work does not go unappreciated.

At this point in time, HINGE is actually wrapping up! HMI is still an important area of focus for understanding, informing, and improving trust in autonomous systems, though, and the team is "hoping to use the work from the HINGE project to move toward different research directions," as Erica stated.

A lot of the HINGE work also supports Jim Ecker and his research on environmental understanding using generative models, like GANs, which we will expand on later!  The team is in the process of doing a second data collection, so be sure to stay updated to hear what's next!

Special thanks to Erica Meszaros for writing assistance.

2018-08-24: Erica Meszaros Exit Presentation Summer 2018


Erica Meszaros has given her exit presentation after working with the HINGE team throughout the summer.  She worked on "Communication Analysis for Improved Human/Autonomous System Teaming."  Read about her work here and be sure to watch her presentation to learn even more!

Tuesday, August 21, 2018

2018-08-21: Carol Castle Returns to the Autonomy Incubator

Carol has begun decorating her new office to make it her own.

Carol Castle, the former management support assistant to Danette in the Autonomy Incubator, is officially returning to the team this week in support of ATTRACTOR.  She spent the past year working in the Flight Projects and Science Directorate, but we are very excited to have her back as our admin!

"It's nice to come back to my NASA family," she said.

Carol has been married to her husband, David Castle, a recent retiree from the Aeronautics Systems Engineering branch, for twelve years.  She has two kids and is a proud grandma!  Overall, she has been a part of NASA for thirty-two years, and, in the fall, she will receive an Exceptional Administrative Achievement Medal honoring all of her hard work and commitment.

Carol is very happy to be back.

Because we value her return so much, we had a lunch celebration of Indian cuisine, one of her favorite things.  Welcome back, Carol!

Friday, August 17, 2018

2018-08-17: New ATTRACTOR Project Continues NASA Tradition of Latin Mottos

Introducing: the new and exciting ATTRACTOR project, brought to you by the Autonomy Incubator's very own Social Media Intern, Payton Heyman, and linguistics expert, Erica Meszaros!


The ATTRACTOR (Autonomy Teaming and TRAjectories for Complex Trusted Operational Reliability) project's mission is to work toward both human trust in autonomous systems and creating autonomous systems worthy of trust through explainable AI (XAI).  The motto chosen to represent this mission is docendo discimus, a Latin aphorism, which translates to "by teaching, we learn."  This proverb is often attributed to Seneca's seventh letter to Lucilius; however, throughout time it has been a frequently used motto for institutions around the world, gracing the logos of Central Washington University, the University of Chichester in England, Novosibirsk State Technical University in Russia, and more!  So how did it end up a part of the logo for the ATTRACTOR project?

Dr. Natalia Alexandrov, a mathematician in the Aeronautics Systems Analysis Branch at NASA Langley and Co-Principal Investigator for ATTRACTOR, was the person who chose it.  "It's a perfect fit for the machine learning component of ATTRACTOR," she said.  "As we 'teach' the machine, via, say, active learning methods, we 'learn' about the gaps in problem formulation, the models, and algorithms.  Conversely, the machine 'learns' from data and 'teaches' us what it needs to be a better predictor."

Our curiosity about the Latin motto sent both Erica and me diving deep into a NASA tradition, where we discovered a long and intricate history of such phrases.  The badge for the fated Apollo 13 mission bears the words ex luna, scientia, expressing the scientific goal of gaining knowledge from the moon.  To honor the Apollo 1 astronauts, a plaque at the site where they perished carries the Latin phrase ad astra per aspera, which translates to "a rough road leads to the stars" or "to the stars through difficulty."  This was also the name of the 50th anniversary tribute exhibit for the launch at the Kennedy Space Center Visitor Complex in 2017.

nasa.gov

Arrogans avis cauda gravis, which translates to "the proud bird with the heavy tail," was the motto of the 2TV-1 altitude test of the Apollo Command Module (CM) and Lunar Module (LM) in the 1960s.  A current example comes from mission operations at NASA's Armstrong Flight Research Center, which claims the motto of ex binarii, cognition, evoking Apollo 13’s motto for its own goal of gaining understanding from binary.


LaunchPhotography.com

Seneca, the previously mentioned Roman philosopher who is credited with the motto of "by teaching, we learn," exhorts us to turn to those who would seek to make us better and then, in return, accept those whom we could make better.  While Seneca was talking about humans in his letter, there’s no reason that his sentiment should not hold true for relationships between humans and autonomous systems.  We provide what information we can and listen to and interpret the information provided from our autonomous teammates.  But more than speaking to the action of teaming, Seneca’s quote-turned-aphorism can provide guidance for a style of research that is at the heart of the Autonomy Incubator.

In a lab researching autonomous systems, perhaps the first question is who is teaching and who is learning? We, as researchers, are often tasked with teaching other humans, a process highlighted by the incredible number of interns that are welcomed into the Ai. In teaching and collaborating with interns we may uncover surprising aspects of our own research and open ourselves up to new opportunities.

nasa.gov

This humans-teaching-humans process is, perhaps, the most Senecan of the lot, but it’s worth considering where machines fit in. As we teach algorithms and models to understand, replicate, and even learn, we in turn gain additional knowledge. In gathering descriptive training data for a GAN designed to produce images for a search and rescue mission, the HINGE team has gained insight into how human language changes when elicited in different situations and provided through different modalities. (Read here to learn more about HINGE!)

Here, we see humans teaching machines while learning from them, but what if the tables were turned? What if autonomous systems were teaching us? Perhaps the sci-fi vision of machine-tutors is a dream of the future, but it’s not hard to imagine a system that gains information from the field and provides that information to a human operator, who in turn updates an aspect of the machine’s system only to start the process all over.

The truth behind docendo discimus is that it locks us – delightfully! – into a cyclical system, trading off teaching and learning roles with our other teammates, be they human or machine.  Yes, the aphorism itself has a history of its own, and NASA has an increasingly long list of Latin phrases to follow behind it, but it's not always about the weight of connotation.  Natalia, when discussing why she decided to make the phrase ours, explained how she also simply "likes the way [it] sounds.  There is something nice about alliteration."



Sources:
https://en.wikipedia.org/wiki/Docendo_discimus
https://en.wikipedia.org/wiki/Per_aspera_ad_astra 
https://www.nasa.gov/feature/50-years-ago-two-critical-apollo-tests-in-houston 

Thursday, August 16, 2018

2018-08-16: Ben Hargis Exit Presentation Summer 2018

Ben Hargis, a PhD student at the Tennessee Technological University, is wrapping up his summer internship this week.  Ben has spent his time working to improve the robotic arm operations for the In-Space Assembly project.  Learn more by watching his exit presentation!

2018-08-16: Casey Denham Summer Presentation August 2018





Casey Denham is fortunately not leaving NASA Langley quite yet!  She did, however, still present to show off everything she has done this summer as a Pathways intern.  Watch her presentation to learn about her work on "Uncertainty and Risk in Autonomous Systems."

2018-08-16: Skylar Jordan Exit Presentation Summer 2018

Skylar Jordan, a student at the University of Tennessee in Knoxville, has finished up his internship for the summer.  After starting out at the National Institute of Aerospace, he came to the Ai to work with Lisa Le Vie on "Crowd Mapping Applications for Autonomous Vehicle Operations."  Watch his exit presentation to learn more!

Tuesday, August 14, 2018

2018-08-14: Autonomy Incubator Welcomes Three MIT Students

Throughout this past week, the Autonomy Incubator has had the pleasure of working with three graduate students from the Massachusetts Institute of Technology (MIT).  Katherine Liu, Kyel Ok, and Yulun Tian each arrived at NASA Langley ready to learn and progress with their studies.

Yulun, Kyel, Katherine, Loc Tran, and Chester Dolph gather around the
computer to discuss the next flight test.

Their main objective during their time here was to test-fly drones as part of search and rescue research.  On Monday, they flew inside B1230, but they quickly transitioned to a forested area on NASA Langley's campus for the rest of the week.  There, they set up a four-by-four-meter "box" outside, inside which the drones take off.  Each drone then surveys the area using onboard LIDAR.

The drone runs an optimization algorithm so that it can search the area as efficiently as possible.  It is autonomous; however, you can tell where it is going from a green arrow on the computer screen that points in the direction of travel.  You can also tell which way it is facing.  Katherine worked on the computer side, using a two-way radio to communicate with the pilot, Brian Duvall, who could take over with the switch controller if something went wrong.  Her ability to communicate this was critical because we could then easily get out of the drone's pathway, and I didn't have to put up a disastrous fight with a Canon camera in hand.
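The team's planner optimizes its search pattern; as a much simpler point of comparison, here is the classic "lawnmower" (boustrophedon) sweep, which visits every cell of a 4 x 4 grid of one-meter cells exactly once.  The grid size matches the box described above, but the planner itself is just an illustrative baseline, not the MIT algorithm.

```python
# Baseline coverage pattern for a 4 x 4 m search box discretized into
# 1 m cells: sweep each row, alternating direction, so every cell is
# visited exactly once with only unit-length moves.

def lawnmower(rows=4, cols=4):
    path = []
    for r in range(rows):
        # Even rows sweep left-to-right, odd rows right-to-left.
        cells = range(cols) if r % 2 == 0 else range(cols - 1, -1, -1)
        for c in cells:
            path.append((r, c))
    return path

path = lawnmower()
print(path)
```

An optimizing planner like the one the students flew can beat this baseline by accounting for obstacles, sensor footprint, and the positions of the other drones.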

Kyel working on one of the drones.

The crew spent Tuesday, Wednesday, and Thursday outside in the wooded area, test-flying all three drones, sometimes solo and sometimes together.  Each drone made its own map of the area to determine where it was in space and how to find its way through the trees.

The team also tested multi-vehicle path sharing, which proved fairly successful.  The drones communicated well with each other and were able to search in a way that kept them from running into each other.  We had over fifty successful flights outside!

Katherine, Kyel, and Yulun were only here through Friday; however, they are coming back in October for a second visit.  We look forward to seeing them again!

Friday, August 10, 2018

2018-08-10: Intern Andrew Puetz Continues his Quadcopter Research in the Ai

Along with all of the other interns who joined us for the summer, Andrew Puetz (pronounced "pits") has dedicated many weeks to the Autonomy Incubator; however, this was not his first year spending time with us.  Last summer, Andrew was a part of the 2017 NASA Langley Academy, and his team made several visits to the Ai for flight tests.  His work with UAVs there introduced him to aeronautics flight research.

Despite the fact that Andrew has his Bachelor's degree in Mechanical Engineering from South Dakota State University, he has "always had the goal of doing things with aerospace," he explained to me.  "I've always been interested in [it].  I was a Buzz Lightyear fan and a Star Wars nerd as a kid, so I guess that's what we can blame it all on."

He essentially split his time between two main projects.  He came back to NASA in February to begin the first with the Vehicle Analysis Branch (VAB), where he worked on an autonomous quadcopter project.  This research, though, still took place in the Ai building.

Andrew and Zac O'Gull are not only working on their quadcopter projects
together, but they're also in matching outfits!

The main objective was to serve as an affordable test bed for control-scheme testing that could contribute to the Space Launch System (SLS) control scheme in the future.  The team's eventual aim is to balance a completely unsupported inverted pendulum (IP) on top of the quadcopter.

"The IP is a naturally unstable body, so it'll fall if something isn't holding it.  That's the same thing as the SLS.  If we're not controlling it, it'll just spiral out of control, which is where the thrusters at the bottom of it come in."  The quadcopter they are working with is analogous: it has four motors around it, so it can balance itself much as the SLS does.  This allows them to model the SLS's dynamic body.
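As a rough illustration of why a naturally unstable body like this needs active control, here is a minimal sketch (not the VAB team's actual model; the dynamics are linearized and every parameter and gain is invented):

```python
import math

# Minimal sketch of a linearized inverted pendulum,
# theta'' ~= (g/L) * theta + u, stepped with semi-implicit Euler.
# All parameters and gains below are illustrative only.
G, L, DT = 9.81, 1.0, 0.001  # gravity (m/s^2), length (m), step (s)

def simulate(kp=0.0, kd=0.0, steps=3000, theta0=0.01):
    """Return the final tilt angle (rad) under a PD control term u."""
    theta, omega = theta0, 0.0
    for _ in range(steps):
        u = -kp * theta - kd * omega   # control acceleration (zero if no gains)
        alpha = (G / L) * theta + u    # the unstable open-loop term
        omega += alpha * DT
        theta += omega * DT
    return theta

# Uncontrolled: a tiny initial tilt grows without bound -- the
# "naturally unstable body" described above falls over.
print(abs(simulate()))
# With PD feedback, the tilt is driven back toward upright.
print(abs(simulate(kp=40.0, kd=10.0)))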

With all of this in mind, they were able to conduct research with an unstable hanging mass below the quadcopter.  This hanging mass models the sloshing of fuel in the tank, which poses a problem for the SLS.  "Sometimes during launch, a dynamic body, like the SLS, may experience where the fuel gets to be a rhythmic slosh back and forth, and that can make the SLS's body spiral out of control because the controller isn't able to dampen it down and get it under control."

They attached a hanging mass below the quadcopter,
which they call MamaBird.

They augmented their controller with an Augmented Adaptive Controller (AAC), an additional level of control.  Andrew explained how the AAC "is able to do things outside the normal design envelope of a controller, so it will notice 'ah, we're really getting out of balance for the stability margin, things are really going bad,' and kick in the AAC and bring it back under control."  Testing the control scheme with the disturbed hanging mass, the quadcopter was able to bring the motion under control and "dampen it back down" so it no longer threw the vehicle out of balance.

Technically, all of this work with the VAB is not part of the Ai; however, he was able to do just about all of the research for the project in the Ai building because of its great flight area.  He also just finished an internship with the Ai itself, where he worked on his second project, and he has come back to continue that work.

Andrew Puetz soldering.

For this Ai-specific project, he is working with a small hovercraft testbed to support future swarm-algorithm research.  His part began with heavy research on past hovercraft projects.  They were all about the same size, typically small RC-scale vehicles, and used control schemes that somebody had built on their own in either MATLAB™ or a programming language.  Once he became more familiar with the field, he went on to build a crude controller in Simulink™ with the idea that "maybe this summer I'd get a chance to put it onto a flight controller and try it out on a small hovercraft vehicle," and he made the first few steps.  Matt Vaughan, one of the AMA team members at the Ai, 3D printed and built the body, and the propeller motors were scavenged off of a small quadcopter.

They call this quadcopter PapaBird.

"The idea is that there's all these people around the world with these small little quadcopters, and when the frames break, they take the thrusters off and 3D print the hovercraft body and they then have another vehicle that they can play with."

"So, basically it's just reduce, reuse, recycle, right? Just in robot form!" I asked him quite proudly.

"Yes! Leave no part behind, that's a good way to put it," Andrew responded, obviously just as proud of my discovery.

He has not actually been able to try it out yet, but he did get to work with Sparky 2.0, a flight control board.  Sparky has characteristics and abilities similar to the Pixhawk's; it's just a smaller and lighter board.  He learned how to put custom firmware onto the Sparky to see if it was a viable candidate as a flight controller for the hovercraft swarms of the future.
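The blog doesn't show Andrew's Simulink controller, but a "crude" differential-thrust heading controller for a small hovercraft might look something like this purely hypothetical Python sketch (the gains and the 0.6 base thrust are invented):

```python
# Hypothetical sketch only -- not Andrew's actual controller.
# A small hovercraft with two rear thrusters steers by commanding
# more thrust on one side than the other (differential thrust).
def heading_controller(heading_err, yaw_rate, kp=1.2, kd=0.4):
    """Map heading error (rad) and yaw rate (rad/s) to a thrust pair.

    Returns (left, right) thrust commands clamped to [0, 1]; a
    positive error commands extra left thrust to yaw the craft right.
    """
    turn = kp * heading_err - kd * yaw_rate  # PD steering term
    base = 0.6                               # invented nominal forward thrust
    left = min(max(base + turn, 0.0), 1.0)
    right = min(max(base - turn, 0.0), 1.0)
    return left, right

print(heading_controller(0.0, 0.0))  # on heading: equal thrust on both sides
print(heading_controller(0.5, 0.0))  # off heading: asymmetric thrust to turn
```

In Simulink the same structure would just be a gain block on the heading error, a derivative (yaw-rate) feedback path, and a saturation block on each thruster output.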

Andrew's internship technically wrapped up at the beginning of August, but after sifting through many different options for what to do next, he decided to come back.  He is still considering pursuing a Mechanical Engineering Master's degree at SDSU, but for now he has chosen to continue some research at NASA Langley through the fall.  He has many great opportunities ahead of him, and we are all thrilled to have him as a part of the Ai!

Thursday, August 9, 2018

2018-08-09: Derek Zhao Exit Presentation Summer 2018


Derek Zhao, a student at Columbia University, wrapped up his summer internship at the Autonomy Incubator.  Derek spent his summer researching a way to develop effective and explainable image classification through the exploration of deep neural networks.  He created an app called "Moisturize" that reconstructs faces through individual sliders for different features.  Watch his exit presentation to learn more about his work!

2018-08-09: Chase Noren Exit Presentation Summer 2018


Chase Noren has wrapped his internship for the summer, where he helped with the In-Space Assembly project.  Watch his exit presentation to learn more, and make sure to read about his work here!

Tuesday, August 7, 2018

2018-08-07: Kastan Day Exit Presentation Summer 2018


After returning to the Ai for a third summer, Kastan Day has wrapped up yet another successful internship!  This time around, he was working on a Robotic In-Space Assembly project.

Last summer, he created barcode-like fiducials to be placed on objects, such as trusses, but he wanted to progress away from that this year.  He started using cameras with 3D depth information to create point-cloud (position) data in order to find objects in a space.  Once the object is found, the robot can pick it up and assemble it with other objects to create a final structure.  This way, objects can be found based on their intrinsic shape.  Watch his Exit Presentation to learn more!
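Finding an object by its intrinsic shape can be illustrated with a toy example (a hedged sketch, not Kastan's actual pipeline): segment a point cloud, then match its dimensions against a catalog of known parts.

```python
# Toy illustration of shape-based object finding in point-cloud data.
# This is NOT Kastan's pipeline; real systems use far richer shape
# descriptors.  The catalog names and sizes below are invented.
def bounding_box(points):
    """Extent of a point cloud along each axis, as (dx, dy, dz)."""
    xs, ys, zs = zip(*points)
    return (max(xs) - min(xs), max(ys) - min(ys), max(zs) - min(zs))

def classify(points, catalog, tol=0.05):
    """Match a segmented cloud against a catalog of {name: dims}."""
    dims = sorted(bounding_box(points))
    for name, ref in catalog.items():
        # Compare sorted extents so orientation doesn't matter.
        if all(abs(d - r) <= tol for d, r in zip(dims, sorted(ref))):
            return name
    return None

catalog = {"truss": (1.0, 0.1, 0.1), "panel": (1.0, 1.0, 0.02)}
cloud = [(0.0, 0.0, 0.0), (1.0, 0.1, 0.05), (0.5, 0.05, 0.1)]
print(classify(cloud, catalog))  # prints: truss
```

Once the part is identified this way, no fiducial barcode is needed; the geometry itself tells the robot what it is holding.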

Monday, August 6, 2018

2018-08-06: Autonomy Incubator Takes on a Swarm of GoPiGo Rovers

Fifteen GoPiGo rovers lined up in the flight room.

The vehicles you see in these five single-file lines are GoPiGo rovers, or as we like to call them in the Ai, Piglets!  Each Piglet comes in a do-it-yourself kit, and Autonomy Incubator team members have been building them throughout the past week.  To run, each Piglet requires a Raspberry Pi, a microcomputer.  By building them en masse, we have been able to research a few different things.

Skylar Jordan is using them to test human crowd-mapping algorithms in the real world.  Starting from a simulation on his computer, he is going to adapt it to operate the GoPiGo rovers on the field, using Vicon systems for feedback.  "We will take a number of drones and a real person, or multiple real people even, and have them walk among the drones to see if the drones and the humans interact in the same way that humans would interact with other humans."

Meghan Chandarana is researching swarm technology.  The Autonomy Incubator team is highly interested in missions such as search and rescue, and with these GoPiGo rovers, "what will eventually happen is the vehicles will move through the environment and eventually come to a person that you want to help," Meghan explained.  They have developed techniques for the vehicles' decision-making capabilities that determine how the swarm should break up in order to find and help the person.  "The piglets are like a testing platform for us to be able to test those behaviors to see how well they work on real platforms and what a small-scale mission of that type would look like."


Each individual GoPiGo rover comes in a DIY kit.

The Piglets only drive, providing a safer and less expensive alternative to flying drones, and they're a good testbed for both Meghan's and Skylar's research.  Skylar explained to me how he views it all as sort of "a transition between computer simulation and flight tests to prove that your algorithms actually work."

Wednesday, August 1, 2018

2018-08-01: Andrew Puetz Exit Presentation Summer 2018


Andrew Puetz, a recent graduate from South Dakota State University (SDSU) with a Bachelor's degree in Mechanical Engineering, has finished up his Summer 2018 internship with the Autonomy Incubator.  For the past ten weeks, he split his time between two major projects.  The first was with the Vehicle Analysis Branch (VAB), working on an autonomous quadcopter project, which actually started back in February, and the second Ai-specific project consisted of the development of a small hovercraft swarm vehicle testbed.

Andrew's time with us has not come to a close, though, as he will be rejoining us for more hovercraft research.  Watch his Exit Presentation to learn more about his work this summer!

Tuesday, July 31, 2018

2018-07-31: Autonomy Incubator Intern Chase Noren on In-Space Assembly


Charles Noren, also known as Chase, is a second-time intern here at the Autonomy Incubator.  He first joined us in the fall of 2017, beginning his research in August and concluding in December.  He began his current internship in May and is continuing his research through mid-August.  Chase will graduate from Texas A&M in December with a Bachelor's degree in Aerospace Engineering.

Chase is helping with RAMSES (Rule-Based Asset Management for Space Exploration Systems), which is part of the In-Space Assembly project.  Throughout his internship, he has been developing mobility solutions for the coarse alignment of different objects in space, such as trusses.

Chase Noren first joined the Ai last fall.

"I'm working on the simulation aspect of it right now, using the rover to achieve that angle," Chase told me.  "I'm using internal capabilities in order to control and manipulate the rover."

Recognize the robot? It's Galgabot! High school volunteers built it last summer.

How exactly is he doing this, though?  There are essentially two phases: coarse alignment and assembly.  Coarse alignment takes objects that are simply floating and tries to get them close enough together to align them.  Assembly is the connection phase, where the parts come together as truly one structure.

"I'm taking a small rover, a robot effectively, and using it to maneuver our trusses to a specific environment so that a robotic arm can more finely place them," he explained.  These trusses are the same scale that they would be in space.  "I'm basically the first aspect of it.  I'm kind of like a tug boat and I'm going to grab the truss, bring it over to an arm, who is then going to take it from me, and then position it so that the fine assembly robots can do the last little bit of coming together to form a structure."

Chase is holding a small scale truss.

His research and the overall project stand as a "technology demonstrator," as he said.  The In-Space Assembly project, as of right now, shows the capabilities the team has and wants to have.  His project demonstrates how they could autonomously construct something that could then be sent into orbit or deeper into space.

What makes In-Space Assembly so important? There are two major constraints.  The geometric constraint defines the size of an object and the limits it has based on its launch vehicle.  If it is too large, you cannot simply take the whole object into space all at once.  One solution would be folding it in an "almost origami-like approach," as Chase described, "so it can unfold at its destination."  The James Webb Space Telescope is an example of something that uses this approach.  "In other cases, such as the mass-constraint, you must be able to break up the payload and launch it in separate chunks."  Then, it would be reassembled at the destination.

"The reason why we want to do it autonomously is because, there are a lot of safety constraints, and it could simply be too far away.  James Webb is around a million miles away, and it is not feasible to have a human being travel out there to wherever that location may be.  That's why autonomy and In-Space Assembly is such an important thing."

The robot has been fitted with Vicon markers and Velcro for mounting GoPros.

The main end goal is for the rover to do everything autonomously.  Chase programs everything in Python; he has built an architecture that allows the vehicle to communicate with a separate flight computer, known as a Pixhawk.
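The post doesn't detail Chase's architecture, but the general pattern can be sketched as follows.  On real hardware, the link to a Pixhawk typically carries MAVLink messages over serial or UDP; in this hypothetical stand-in, a queue plays the role of the transport so the command-passing pattern is runnable on its own (the class, command names, and parameters are all invented):

```python
import json
import queue

# Hypothetical sketch of Python code commanding a separate flight
# computer.  A queue stands in for the serial/UDP link a real
# Pixhawk would use; the command vocabulary below is made up.
class FlightComputerLink:
    """Toy stand-in for the rover's link to the flight computer."""
    def __init__(self):
        self._wire = queue.Queue()

    def send(self, command, **params):
        # Serialize the command the way a real link would frame a message.
        self._wire.put(json.dumps({"cmd": command, "params": params}))

    def receive(self):
        # What the flight-computer side would decode on arrival.
        return json.loads(self._wire.get_nowait())

link = FlightComputerLink()
link.send("set_velocity", vx=0.5, vy=0.0)  # a tug-boat style nudge
msg = link.receive()
print(msg["cmd"], msg["params"])  # prints: set_velocity {'vx': 0.5, 'vy': 0.0}
```

The design choice is the usual one: keep the high-level autonomy in Python on the rover and delegate low-level motor control to the dedicated flight computer.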

"I just sit here and program all day... that's my job!"

Chase is planning on applying for grad school once applications open up in the fall.  Good luck to him, and congrats on everything he has done thus far!

Friday, July 27, 2018

2018-07-27: Autonomy Incubator Welcomes NAVSEA

Dr. Julie Stark and a NAVSEA intern from Hampton University.

Wednesday afternoon, we hosted our first visitors in our new building!  Dr. Julie Stark from NAVSEA came with four interns to share with us their work and learn about our research.

Dr. Stark used to be an intern at NASA Langley when she was in the midst of her graduate studies.  At the time she was working in crew systems, and when she did her dissertation, it was actually in the flight simulator using real pilots.  Her research was based on topics related to synthetic vision, levels of automation, and eye tracking.

"I basically went to different people I worked with and said 'okay this is what I want to do for my dissertation- this fits in your milestone and yours and yours.'  Eventually someone I worked with actually said to me, 'you're crazy, you're never going to finish this!'"  Three and a half years later she ended up publishing her studies and winning awards for two of them!

After her research at NASA and her post-doc work, she went on to the Navy.  "I was kind of a consultant for a little while, but now I've been at Carderock for about twelve to fifteen years."

Carderock is part of the Naval Sea Systems Command.  Within the Navy, there are twelve different Warfare Centers, and Carderock is one of them; they work with basically all surface vehicles.  In addition, there are many divisions throughout their center: "we have people who work on submarines, we have people who work on air platforms, generally they're payloads to us but not always.  That's the greater Carderock, so we, everybody here, is in the code that we use for ship systems."  Furthermore, their division is considerably large, with about four hundred people, and is located at Little Creek in Virginia.

Ai members and the NAVSEA visitors gathered in our new conference room!

"I wear two hats," Dr. Stark explained.  "One of my jobs is to directly report to the captain, and then I'm also in charge of the Human Research Protection Program."  Within the craft division, she works with full life cycle engineering, meaning they design concept developments and prototypes.  "We design anything from a jet ski to specific things for special people, generally up to one hundred seventy feet."

NAVSEA works with just about all of the boats you see in Naval swarm demonstrations.  "If you have seen a swarm thing done, and it's a Navy thing, we do it, that's our group," Dr. Stark explained.

They now have a branch that is entirely focused on autonomy and unmanned systems and have been working to develop an unmanned systems laboratory that is very close to being finalized and available for demos.  "I want to be able to give you a full briefing to say here's our lab, and this is what we do," Dr. Stark told Danette and our team members.

Following her introduction statements and some fun videos, Javier Puig-Navarro and Meghan Chandarana, two very valuable interns at the Ai, gave presentations on our research and what we do here as well.  Danette also gave them a short tour of what our new space has to offer.

Hello from the observation room!

Dr. Julie Stark has been a colleague of Danette's for quite some time now, so we were all very happy to invite her and a portion of her team in for a visit.  Hopefully they can come again in the future to see how our new flight area will progress!

Wednesday, July 25, 2018

2018-07-25: Miranda Smith Exit Presentation Summer 2018


Miranda Smith, a student at Old Dominion University and Autonomy Incubator summer intern, has accomplished many things throughout the ten weeks she has been a part of the team.  With a Bachelor's degree in Computer Science and two more semesters left until she receives her Master's degree, she spent this summer working with the Human Machine Interface team.

She split her time this summer between two major projects.  The first was through Amazon Mechanical Turk, where she set up two human intelligence tasks and helped pioneer MTurk capabilities across NASA centers for the HINGE experiment.  The second project was Interface Design, or the Peacock-PIT (Per Entity Adjustable Cockpits), a Unity mod.  Watch her Summer 2018 Exit Presentation to learn more about her research!

Thursday, July 19, 2018

2018-07-19: Autonomy Incubator Says Goodbye to B1222

This week, the Autonomy Incubator will be saying goodbye to the building it grew up in, B1222.  The Ai has occupied the building for almost five years, but change has finally come; the building will soon be demolished.  In fact, the Ai has almost moved twice in the past year, but this time it's officially happening!  To honor B1222 and all of the memories created within it, eighteen Ai members have taped some goodbye messages.


In the meantime, we have been spending this week packing boxes, moving desks and chairs into the big trailer truck, and transporting them to their new home.

Brian Duvall diligently taping boxes while Jim Farrington watches with
admiration.

Ben Kelley, Kyle McQuarry, and Jeremy Castagno being lifted into the moving
truck, along with their desks and chairs and assistance from Matt Vaughan.

Dylan Miller, Andrew Puetz, and Skylar Jordan moving boxes in the new
building, B1230.

Most of our computers and main work necessities were already moved to B1230 on Tuesday, but many of us have been gathering in the open spaces of B1222 to enjoy our final hours here.

We will miss B1222 very much, but we are all excited about the new space. Everyone is looking forward to experiencing what it has to offer!

Now, say hello to building B1230!

Similar to B1222, we have an indoor flying space.  The space itself is not as big as our old one; however, there will be an outdoor flying space attached that measures sixty by sixty feet and is fifty feet high, which we refer to as the AVIARY!  Danette Allen, head of the Ai, explained that "we can simply fly from inside to outside and back, or at least use the infrastructure that we have inside when we want to go outside.  It just makes life so much easier for us when we want to test outdoors."

Both spaces provide for some powerful new opportunities!

The flight area is currently filled with moving boxes.  Look how many we
have!

Also, the painting of Samuel P. Langley that was left in B1222 when we first moved in is moving to B1230 with us!

Our lovely painting of Samuel P. Langley

Of course, we could not leave without all of our beautiful robots either!

One corner of B1230 is currently filled with some of our favorite robots.  Is
this not a dream come true?

B1230 is a wonderful space that we all look forward to working with; however, B1222 will always be in our hearts.

In the words of Danette, "the building learned from us, and we learned from the building, and ultimately we ended up with this great partnership."