Tuesday, December 4, 2018

2018-12-03: Andrew Miloslavsky: Former Video Game Competitor to NASA Programmer

Andrew Miloslavsky is a member of the AMA team.

Andrew Miloslavsky joined the Autonomy Incubator team nearly a year ago and is currently working as a programmer in support of the ATTRACTOR project.

Andrew received his Bachelor's degree in Computer Science from Hunter College in New York City. During his freshman year, he entered his first video game competition and, to his surprise, kicked off a streak of wins. He continued to compete throughout college, establishing a short but fulfilling career in competitive gaming.

From there, he took his first job at the College of William and Mary in Williamsburg, Virginia, doing data analytics. That work then led him to NASA Langley and into our very own branch!

"What I'm doing right now is I'm working mainly on the ATTRACTOR simulation for autonomous vehicles.  My job is to support the integration of autonomous systems into a simulated environment," he explained.

Simulations are extremely important to a project's goals because they let researchers test an algorithm far more safely. Self-driving cars are a good example: in a simulation, you can test drive an autonomous vehicle without risking any real-world harm. Some people have even dropped autonomous driving algorithms into video games, like Grand Theft Auto, letting the car drive around inside the game and learn to follow the rules of the road; if the car does something harmful, you immediately know something went wrong and can fix it. We're doing something similar (but for the greater good of society) by building our simulation on top of a gaming engine as well.
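To make that concrete, here is a minimal sketch of what closed-loop testing in simulation can look like. Everything in it is hypothetical (a toy lane-keeping simulator and a trivial steering policy, not the actual ATTRACTOR code), but it shows the key idea: a crash in simulation is just a log entry, not a wrecked vehicle.

```python
# Toy sketch of closed-loop testing in simulation (hypothetical names).
# The simulator stands in for the game engine; the policy stands in for
# the autonomy algorithm under test.

import random

class ToyDrivingSim:
    """A stand-in simulator: the 'vehicle' must keep its lane offset small."""
    def __init__(self):
        self.offset = 0.0  # lateral offset from lane center, meters

    def step(self, steer):
        # Apply the steering command, plus a small random disturbance.
        self.offset += steer + random.uniform(-0.2, 0.2)
        crashed = abs(self.offset) > 2.0  # left the lane entirely
        return self.offset, crashed

def policy(offset):
    """The algorithm under test: steer back toward lane center."""
    return -0.5 * offset

failures = 0
for episode in range(100):
    sim = ToyDrivingSim()
    for t in range(200):
        _, crashed = sim.step(policy(sim.offset))
        if crashed:
            failures += 1  # in simulation, a 'crash' is just a log entry
            break
print(f"failures in 100 simulated episodes: {failures}")
```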

Overall, ATTRACTOR is the main project, and the simulation is basically an environment that allows researchers to test out autonomous behaviors.  "That's my main job," Andrew explained.  "I support the simulation, add new features, fix any bugs, and pretty much cover any feature requests or necessities that come up."

Andrew has been at the Ai for almost a year.

Andrew is part of the AMA (Analytical Mechanics Associates) software team, and he also generally supports whatever else comes up within it. The team includes many of the key people Danette relies on to complete specific tasks.

Once ATTRACTOR wraps up in a few years, the team will be able to demonstrate the different capabilities and accomplishments the project has produced.  There are many parts to the overall project.  "All of the researchers around here are working on their little bits and pieces, and all of them will be joined together," he said.  "Some people are working on machine learning algorithms, some are working with computer vision, and some with trajectories."  It will all eventually be combined to meet the project's end goal.

Andrew has been a big part of the team so far, and we all look forward to seeing how the project progresses!

Friday, November 30, 2018

2018-11-30: Improvements Made to the Autonomy Incubator Flight Area!

Riggers came to help put up the new monitors!

B1230, the new location of our very own Autonomy Incubator, has been undergoing some fun home renovations!  The flight area has had a few changes, and we could not be more excited about it!

Since our mid-summer move, we have had four monitors on the back wall, but monitors five and six have finally joined the family!  Now, we are able to display the simulated versions of live tests to an even greater extent.

Because the monitors weigh several hundred pounds apiece, we had the help of riggers to hoist them to their rightful home.

The final display!

Along with the new additions, the floor has also been bead blasted in preparation for the "flat floor" installation. We're expecting the new floor to be finished in January.

Our floor has had some work as well.

We are ecstatic about finalizing the area and look forward to running more tests again soon.

Wednesday, November 21, 2018

2018-11-21: Autonomy Incubator Welcomes New Team Member Sherif Shazly

Sherif Shazly, the newest addition to the Ai team.

Last week, the Autonomy Incubator welcomed the newest addition to our team, Sherif Shazly!

Sherif received his Bachelor's degree in Mechanical Engineering from North Carolina State University, and he went on to receive his Master's in Robotics from the University of Maryland, College Park, this past May.

Currently, Sherif is working as a Robotics Software Developer/Engineer at the Ai on the In-Space Assembly project.  He is creating a simulation of the set-up pictured below in Gazebo, which is used with ROS (Robot Operating System), the standard framework for writing robot software.

The robotic arm is able to pick up the trusses and move them to separate locations.

"In the simulation environment, you can test out these algorithms so that you don't have to risk these expensive robots," he explained.  He is currently getting everything ready for a special demo that is coming up next month!

In summary, the In-Space Assembly project demonstrates that robots can be used to move and assemble trusses autonomously.  Right now, we are doing it on a smaller scale, but the ultimate goal, of course, is to have this set-up in space.  The process is well explained in a previous blog post here, which showcases the work of former intern Chase Noren.

Sherif and Walter Waltz, another Ai team member working on the project, have many goals for the future, including adding cameras for autonomous environmental detection and optimal controllers for force control.  They also want to "improve our motion planning techniques with more complicated controllers."

Sherif and Walter Waltz are both working on the In-Space Assembly project.

We are very happy to have Sherif join our team and look forward to seeing how their research progresses.  Welcome aboard!

Thursday, August 30, 2018

2018-08-30: Payton Heyman Exit Presentation Summer 2018


As sad as I am to leave, my internship has come to an end for the summer, and I will now be heading towards Georgia to begin my freshman year at the Savannah College of Art and Design.  After ten weeks of tweets, Facebook posts, blogs, YouTube videos, and Instagram posts, watch my exit presentation to see how the Autonomy Incubator's social media presence has grown!

See you again in November!

2018-08-30: Meghan Chandarana Exit Presentation Summer 2018


Meghan has wrapped up her fourth internship with the Autonomy Incubator!  This summer she was a first time Pathways intern.  Watch her exit presentation to learn about her work with mission planning for robot swarms!

2018-08-30: Jim Ecker and his GAN-Filled World

Jim Ecker is a member of the Data Science team at NASA Langley and a part of the ATTRACTOR project, working specifically in the Autonomy Incubator.  Jim received his Bachelor's degree in Computer Science from Florida Southern College and his Master's in Computer Science from Georgia Tech, specializing in machine learning and artificial intelligence.

Coming out of school, he was a software engineer for about eight years and then went to Los Alamos National Lab, where he worked in the supercomputing and intelligence and space research divisions.  From there, he began work at NASA Langley.

Jim has his Master's degree in Computer Science.

HINGE, which was recently outlined here, is partially in support of what Jim is doing.  The main project he is working on right now uses Generative Adversarial Networks (GANs), a machine learning technique that lets a computer generate new data from an example data set, such as producing an image from a written description.
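As a rough illustration of the idea (not Jim's actual model, which is far larger), a conditional GAN pairs two networks: a generator that turns random noise plus a caption embedding into pixels, and a discriminator that judges whether an image matches its caption. A toy PyTorch sketch:

```python
# Toy sketch of a text-conditioned GAN (PyTorch). Real text-to-image
# GANs are far larger; this only shows the shape of the idea.

import torch
import torch.nn as nn

NOISE, CAPTION, PIXELS = 64, 128, 32 * 32 * 3

generator = nn.Sequential(
    nn.Linear(NOISE + CAPTION, 256), nn.ReLU(),
    nn.Linear(256, PIXELS), nn.Tanh(),        # fake image pixels
)
discriminator = nn.Sequential(
    nn.Linear(PIXELS + CAPTION, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),          # real/fake score
)

caption_embedding = torch.randn(1, CAPTION)  # stand-in for an encoded caption
noise = torch.randn(1, NOISE)
fake_image = generator(torch.cat([noise, caption_embedding], dim=1))
score = discriminator(torch.cat([fake_image, caption_embedding], dim=1))
# Training alternates: the discriminator learns to tell real pairs from
# fake, the generator learns to fool it -- the adversarial game that
# gives GANs their name.
```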

Using a few different photo databases, including Flickr30K and COCO (Common Objects in Context), he has over 200,000 real images, each annotated with five different descriptions.  The compiled images include everything you can think of, from cars to boats to animals to people, but his main focus is people, in order to aid the search and rescue research that takes place at the Ai.  He went through the COCO dataset and, as best he could, pulled out all of the images containing people, and he is currently curating data from Flickr30K.
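We don't know exactly how Jim filtered the dataset, but with the standard COCO Python API (pycocotools), pulling out the images containing people can be as short as this (the annotation-file path is hypothetical):

```python
# Filtering COCO down to images of people with pycocotools.
from pycocotools.coco import COCO

coco = COCO("annotations/instances_train2017.json")  # hypothetical path
person_cat_ids = coco.getCatIds(catNms=["person"])   # category id for 'person'
person_img_ids = coco.getImgIds(catIds=person_cat_ids)
print(f"{len(person_img_ids)} images contain at least one person")
```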

The more captioned images you have, the more data the neural network receives, and the better it eventually understands the pieces of an image.  For example, after seeing many different images whose captions contain the word "glasses," the neural network will eventually learn exactly what the word means and be able to detect what glasses look like based on the pixels and similarities between the different images.  If you ask it to generate, say, an image of 'a woman in a red cocktail dress,' the neural network will take what it knows from the other images it has seen and their descriptions to draw that.  "As it looks at more and more examples and learns more and more how to make that kind of thing, it starts to learn what different things are and make a better drawing," he explained.  "It's very weird what it's doing when you first think about it; it starts with what looks completely random, and it eventually learns how to draw the pieces."

A woman in a cocktail dress.

The cocktail dress example is actually quite mind-blowing to us.  The network connected 'cocktail' to a cocktail bar, and since bars typically have mirrors, it was able to mirror the woman's back in the top right of the image.

How detailed the image descriptions are is very important.  A caption can simply say "a woman with an umbrella," but how much does that really tell you? Not much.  A better description covers both what the person is wearing and what they are doing, like "a woman in a black jacket, a blue and white checkered shirt, white pants, and black shoes is carrying an umbrella while walking through the rain."  With those extra details, the neural network is able to learn and draw more from the information.  "I need the descriptions to be as specific as possible," Jim said.

Following his explanation, he showed me a demo in which he described me and what I was wearing to see how the image would come out.  With the description "a woman wearing a black and white striped shirt with a black sweater," you can see what I looked like below!


A renaissance painting entitled: Payton Heyman

I'd say it looks just like me! Especially with the wind-swept ponytail.

Along with the generated image, the demo also offers handy visualizations of "how each word in the description is mapped to different parts of the generated image," as Jim said.  "It gives each of these words some weight, saying here's the woman, here's the sweater, and so on.  These visualizations are key to providing explainability to an agent's environmental understanding."


The annotation tags highlight each characteristic.
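We don't know the exact mechanism behind these visualizations, but per-word weights of this kind are typically normalized so they sum to one, as in this toy sketch (the words and scores are illustrative):

```python
# Toy illustration of per-word weights like the visualization above:
# raw attention scores are normalized with a softmax so they sum to one,
# indicating how strongly each caption word shaped an image region.

import numpy as np

words = ["a", "woman", "wearing", "a", "black", "sweater"]
raw_scores = np.array([0.1, 2.3, 0.4, 0.1, 1.5, 2.0])  # illustrative values

weights = np.exp(raw_scores) / np.exp(raw_scores).sum()  # softmax
for word, w in zip(words, weights):
    print(f"{word:>8}: {w:.2f}")
```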

He explained that the hope is for the system to become much better at accurately visualizing what a described person looks like.  It stores information much as the human brain stores visual information, based on the Dual-Coding Theory.  This is how it all ties into the search and rescue research.  "It encodes the features basically into memory storage in your mind.  The idea is to kind of try to replicate this so that when an object detector [like a drone] is looking around with a camera, every time it sees a person it can store the represented information and compare it to what it already knows."  When it finds someone who looks like the person being sought, it would realize, "oh, that's who they were talking about."  The idea is that once Jim trains the object detector enough, it could successfully recognize someone; but to be that specific, you would ordinarily need thousands of pictures of that person and a long training period.  "Using synthesized visualizations from a GAN does this in an unsupervised manner, requiring much less data."
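A minimal sketch of that comparison step, assuming some learned encoder turns each image (or description) into a feature vector; the encoder outputs and the match threshold here are stand-ins, not Jim's actual pipeline:

```python
# Sketch of the matching step: each detected person is encoded as a
# feature vector and compared against the vector for the person being
# searched for. The encoder itself is assumed, not shown.

import numpy as np

def cosine_similarity(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Stand-ins for encoder outputs (in practice: network(image) -> vector).
target_embedding = np.random.rand(256)    # who we are looking for
detected_embedding = np.random.rand(256)  # person the drone just saw

if cosine_similarity(target_embedding, detected_embedding) > 0.9:  # illustrative threshold
    print("Possible match: 'oh, that's who they were talking about.'")
```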

We love our monitors at the Autonomy Incubator.

"Other than all of this, I am also working on deep reinforcement learning," he said.  An example of this is how he is teaching an agent to play Super Mario BrosTM. Basically how this works is the agent looks at all of the pixels on the screen in order to decide what to do.  As it plays the game, it learns more and more of what actions do what and what to do in a situation.  Jim is able to pull up the source that shows what actions are going on at any given time during the game.  "It's kind of like a hexadecimal representation in and of the buttons; some of them might even be combinations of a NintendoTM controller."

As mentioned previously, the HINGE project supports his research.  It has given him and the rest of the team an idea of what type of data to feed the GAN in order to get the kind of data they need, including how best to talk to it and which descriptions help it visualize most accurately.  Jim's work is improving by the day, and we look forward to seeing how it progresses even more!

Sunday, August 26, 2018

2018-08-26: Women's Equality Day and the Rise of Women in STEM

The women of the Autonomy Incubator

It's Women's Equality Day, and the women of the Autonomy Incubator are celebrating all of our hard work.  With degrees in Social Science, Mechanical Engineering, Psychology, Media Production, and Computer Science, the women of NASA sport their STEM knowledge with pride.

According to the Economics and Statistics Administration, forty-seven percent of all jobs were held by women in 2015; however, they only held twenty-four percent of STEM jobs.  This is a big part of what has inspired the women of the Ai to pursue their education and empower other women to do the same.

One of the greatest aspects of the Ai is the female leadership.  From Danette Allen, head of the lab and Co-PI of the ATTRACTOR project alongside Natalia Alexandrov, another strong woman at NASA, to Lisa Le Vie, head of the HCI team, they each play significant roles here.  Additionally, they all set a great example for the younger generation of interns who have the opportunity to be here, like Erica Meszaros, Meghan Chandarana, and myself.

Danette and Lisa even received Mentorship Awards at the beginning of this month, nominated by Ai interns. Danette received four nominations, and Lisa received one.

As mentioned previously, a large variety of backgrounds and degrees is represented at the Ai.  Erica has dreamed of working at NASA since she was young.  With an educational background primarily in the social sciences and humanities, she described how difficult it is to find "a place that recognizes the importance of scientific analysis informed by these disparate backgrounds as crucial."  This is why we believe the Ai excels: "it draws from many different backgrounds in order to pursue its research goals."  Her work specifically looks at the use of "linguistic analysis to evaluate human/autonomous system teaming and interface design to aid in trusted autonomy," but very different research surrounds her as well.  There are people working with deep reinforcement learning, mechanical engineering, and even, in my case, digital media and social platforms to communicate the science.

Meghan had similar dreams of NASA when she was growing up as well.  "Without the women pioneers that came before me, I would not have even thought it was possible to do the research I do today," she explained.  "On a daily basis I am surrounded with strong, passionate, and talented women that challenge me to reach beyond the visible boundaries – in to the infinite potential that lays waiting to be discovered. My hope is that the work we do in the Ai shows young girls that they can do anything they put their mind to."

Erica, Meghan, and I are the remaining female summer interns.  We have all enjoyed every second of it!

It's amazing to see so many powerful women in a field that, statistically speaking, is generally dominated by men.  As someone who has only just graduated from high school, I am so grateful to have had the opportunity to work with so many wonderful people this early in my life and to get a realistic view of the world of STEM.

In the words of Erica, "it’s so important to not only see gender representation at the higher levels but also to have the opportunity to see a workplace filled with intelligent and capable women researchers. This is the NASA I dreamed of as a little girl and the direction I hope it continues in the future!"