Tuesday, July 31, 2018

2018-07-31: Autonomy Incubator Intern Chase Noren on In-Space Assembly


Charles Noren, also known as Chase, is a second-time intern here at the Autonomy Incubator.  He first joined us in the fall of 2017, beginning his research in August and concluding in December.  He began his current internship in May and is furthering his research through mid-August.  Chase will be graduating from Texas A&M in December with a Bachelor's degree in Aerospace Engineering.

Chase is helping with RAMSES (Rule-Based Asset Management for Space Exploration Systems), which is part of the In-Space Assembly project.  Throughout his internship, he has been developing mobility solutions for the coarse alignment of different objects in space, such as trusses.

Chase Noren first joined the Ai last fall.

"I'm working on the simulation aspect of it right now, using the rover to achieve that angle," Chase told me.  "I'm using internal capabilities in order to control and manipulate the rover."

Recognize the robot? It's Galgabot! High school volunteers built it last summer.

How exactly is he doing this, though?  Well, there are essentially two phases: coarse alignment and assembly.  Coarse alignment means taking free-floating objects and getting them close enough together to be aligned.  Assembly is the connection phase, where the parts come together as one structure.

"I'm taking a small rover, a robot effectively, and using it to maneuver our trusses to a specific environment so that a robotic arm can more finely place them," he explained.  These trusses are the same scale that they would be in space.  "I'm basically the first aspect of it.  I'm kind of like a tug boat and I'm going to grab the truss, bring it over to an arm, who is then going to take it from me, and then position it so that the fine assembly robots can do the last little bit of coming together to form a structure."

Chase is holding a small scale truss.

His research and overall project stand as a "technology demonstrator," as he said.  Right now, the In-Space Assembly project shows the capabilities the team has and the capabilities it wants to have.  His work demonstrates how a structure destined for orbit or deep space could be constructed autonomously.

What makes In-Space Assembly so important? There are two major constraints.  The first is the geometric constraint, which limits the size of an object based on its launch vehicle.  If the object is too large, you cannot simply take the whole thing into space at once.  One solution is folding it in an "almost origami-like approach," as Chase described, "so it can unfold at its destination."  The James Webb Space Telescope is an example of this approach.  "In other cases, such as the mass constraint, you must be able to break up the payload and launch it in separate chunks."  The pieces would then be reassembled at the destination.

"The reason why we want to do it autonomously is because, there are a lot of safety constraints, and it could simply be too far way.  James Webb is around a million miles away, and it is not feasible to have a human being travel out there to wherever that location may be.  That's why autonomy and In-Space Assembly is such an important thing."

The robot has been outfitted with VICON motion-capture markers and Velcro for mounting GoPros.

The end goal is for the rover to do everything autonomously, and Chase is working toward that goal in Python.  He has built a Python architecture that allows the vehicle to communicate with a separate flight computer, known as a Pixhawk.
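Chase's actual architecture isn't shown in the post, but a minimal sketch of Python talking to a Pixhawk, using the open-source pymavlink library, looks something like this (the connection string and velocity values are assumptions for illustration):

```python
# Minimal sketch (not Chase's actual code) of commanding a Pixhawk-equipped
# rover from Python with the open-source pymavlink library.
from pymavlink import mavutil

# Connect to the Pixhawk over a serial link (port and baud rate are assumptions).
master = mavutil.mavlink_connection('/dev/ttyACM0', baud=57600)
master.wait_heartbeat()  # block until the flight computer is talking to us

# Request a gentle forward velocity in the body frame, e.g. to nudge a truss along.
master.mav.set_position_target_local_ned_send(
    0,                           # time_boot_ms (unused here)
    master.target_system,
    master.target_component,
    mavutil.mavlink.MAV_FRAME_BODY_NED,
    0b0000111111000111,          # type_mask: only the velocity fields are active
    0, 0, 0,                     # x, y, z position (ignored)
    0.2, 0, 0,                   # vx, vy, vz in m/s
    0, 0, 0,                     # accelerations (ignored)
    0, 0)                        # yaw, yaw_rate (ignored)
```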

"I just sit here and program all day... that's my job!"

Chase is planning on applying for grad school once applications open up in the fall.  Good luck to him, and congrats on everything he has done thus far!

Friday, July 27, 2018

2018-07-27: Autonomy Incubator Welcomes NAVSEA

Dr. Julie Stark and a NAVSEA intern from Hampton University.

Wednesday afternoon, we hosted our first visitors in our new building!  Dr. Julie Stark from NAVSEA came with four interns to share with us their work and learn about our research.

Dr. Stark used to be an intern at NASA Langley during her graduate studies.  At the time, she was working in crew systems, and her dissertation research was actually conducted in the flight simulator with real pilots.  Her research covered topics related to synthetic vision, levels of automation, and eye tracking.

"I basically went to different people I worked with and said 'okay this is what I want to do for my dissertation- this fits in your milestone and yours and yours.'  Eventually someone I worked with actually said to me, 'you're crazy, you're never going to finish this!'"  Three and a half years later she ended up publishing her studies and winning awards for two of them!

After her research at NASA and her post-doc work, she went on to the Navy.  "I was kind of a consultant for a little while, but now I've been at Carderock for about twelve to fifteen years."

Carderock is part of the Naval Sea Systems Command.  Within the Navy, there are twelve different Warfare Center divisions, and Carderock is one of them; its focus is essentially all surface vehicles.  There are also many specialties within the center: "we have people who work on submarines, we have people who work on air platforms; generally they're payloads to us, but not always.  That's the greater Carderock, so we, everybody here, is in the code that we use for ship systems."  Her division is considerably large, with about four hundred people, and is located at Little Creek in Virginia.

Ai members and the NAVSEA visitors gathered in our new conference room!

"I wear two hats," Dr. Stark explained.  "One of my jobs is to directly report to the captain, and then I'm also in charge of the Human Research Protection Program."  Within the craft division, she works with full life cycle engineering, meaning they design concept developments and prototypes.  "We design anything from a jet ski to specific things for special people, generally up to one hundred seventy feet."

NAVSEA works with just about all of the boats you see in Naval swarm demonstrations. "If you have seen a swarm thing done, and it's a Navy thing, we do it; that's our group," Dr. Stark explained.

They now have a branch that is entirely focused on autonomy and unmanned systems and have been working to develop an unmanned systems laboratory that is very close to being finalized and available for demos.  "I want to be able to give you a full briefing to say here's our lab, and this is what we do," Dr. Stark told Danette and our team members.

Following her introductory remarks and some fun videos, Javier Puig-Navarro and Meghan Chandarana, two very valuable interns at the Ai, gave presentations on our research and what we do here.  Danette also gave the group a short tour of what our new space has to offer.

Hello from the observation room!

Dr. Julie Stark has been a colleague of Danette's for quite some time now, so we were all very happy to invite her and a portion of her team in for a visit.  Hopefully they can come again in the future to see how our new flight area will progress!

Wednesday, July 25, 2018

2018-07-25: Miranda Smith Exit Presentation Summer 2018


Miranda Smith, a student at Old Dominion University and Autonomy Incubator summer intern, has accomplished many things throughout the ten weeks she has been a part of the team.  With a Bachelor's degree in Computer Science and two more semesters left until she receives her Master's degree, she spent this summer working with the Human Machine Interface team.

She split her time this summer between two major projects.  The first used Amazon Mechanical Turk (MTurk): she set up two human intelligence tasks (HITs) and helped pioneer MTurk capabilities across NASA centers for the HINGE experiment.  The second was an interface design project, the Peacock-PIT (Per Entity Adjustable Cockpits), a Unity mod.  Watch her Summer 2018 Exit Presentation to learn more about her research!
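HITs like Miranda's are typically created programmatically.  As a hedged illustration (the HINGE experiment's actual task setup isn't shown here), a minimal HIT posted to the MTurk sandbox with the boto3 library might look like:

```python
import boto3

# Sandbox endpoint so test HITs don't cost real money.
mturk = boto3.client(
    'mturk',
    region_name='us-east-1',
    endpoint_url='https://mturk-requester-sandbox.us-east-1.amazonaws.com')

# The title, reward, and question form below are illustrative assumptions;
# a real HIT needs a full QuestionForm XML or an ExternalQuestion URL.
hit = mturk.create_hit(
    Title='Describe a flight path in your own words',
    Description='Watch a short clip and describe the vehicle motion.',
    Reward='0.25',                     # USD, passed as a string
    MaxAssignments=50,                 # number of distinct workers
    LifetimeInSeconds=7 * 24 * 3600,   # how long the HIT stays listed
    AssignmentDurationInSeconds=600,   # time each worker gets
    Question=open('question_form.xml').read())
print('HIT created:', hit['HIT']['HITId'])
```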

Thursday, July 19, 2018

2018-07-19: Autonomy Incubator Says Goodbye to B1222

This week, the Autonomy Incubator is saying goodbye to the building it grew up in, B1222.  The Ai has called this building home for almost five years, but change has finally come; the building will soon undergo demolition.  In fact, the Ai has almost moved twice in the past year, but this time it's officially happening!  To honor B1222 and all of the memories created within it, eighteen Ai members have taped goodbye messages.


In the meantime, we have been spending this week packing boxes, moving desks and chairs into the big trailer truck, and transporting them to their new home.

Brian Duvall diligently taping boxes while Jim Farrington watches with
admiration.

Ben Kelley, Kyle McQuarry, and Jeremy Castagno being lifted into the moving
truck, along with their desks and chairs, with assistance from Matt Vaughan.

Dylan Miller, Andrew Puetz, and Skylar Jordan moving boxes in the new
building, B1230.

Most of our computers and main work necessities were already moved to B1230 on Tuesday, but many of us have been gathering in the open spaces of B1222 to enjoy our final hours here.

We will miss B1222 very much, but we are all excited about the new space. Everyone is looking forward to experiencing what it has to offer!

Now, say hello to building B1230!

Similar to B1222, we have an indoor flying space.  The space itself is not as big as our old one; however, there will be an outdoor flying space attached that measures sixty by sixty feet and is fifty feet high, which we refer to as the AVIARY!  Danette Allen, head of the Ai, discussed how "we can simply fly from inside to outside and back, or at least use the infrastructure that we have inside when we want to go outside.  It just makes life so much easier for us when we want to test outdoors."

Both spaces provide for some powerful new opportunities!

The flight area is currently filled with moving boxes.  Look how many we
have!

The painting of Samuel P. Langley that was left in B1222 when we first moved in is also making the trip to B1230 with us!

Our lovely painting of Samuel P. Langley

Of course, we could not leave our beautiful robots behind either!

One corner of B1230 is currently filled with some of our favorite robots.  Is
this not a dream come true?

B1230 is a wonderful space that we all look forward to working with; however, B1222 will always be in our hearts.

In the words of Danette, "the building learned from us, and we learned from the building, and ultimately we ended up with this great partnership."

Thursday, July 12, 2018

2018-07-12: Autonomy Incubator Welcomes NAVAIR

On Wednesday, a few members of the Naval Air Warfare Center Aircraft Division (NAWCAD), part of the Naval Air Systems Command (NAVAIR), visited the Autonomy Incubator to take a tour and learn about what we do.

Our five visitors included Dr. Joe Schaff, Johann Soto, Maria Thorpe, Steve Kracinovich, and Barbette Ivery.

As with most of our visits, Danette Allen, head of the Ai, began by introducing our main research, including our accomplishments and goals for the future.  Each visitor came ready with plenty of questions, especially Johann Soto.

Johann Soto is working to receive his Master's degree in artificial intelligence,
and was intrigued to learn what schools were represented throughout our
community of interns at the Ai.

"I'm like a kid in a candy shop," he told Danette.  "Just wait until we get inside," she responded.

Danette introducing the first demo.

Upon entering the main doors of the fly zone, Danette walked them through each of the four demos.  The first stop was with Javier Puig-Navarro and Meghan Chandarana, who discussed and showcased their work with a wire maze.

They flew two Crazyflie drones through the maze to test "a path planning algorithm for multiple vehicles," as Javier described.  They "utilized silhouette information of the neighboring obstacles to compute trajectories that maintain a safe distance between all obstacles in the environment."
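The full algorithm isn't spelled out in the post, but the safety property Javier mentions, every pair of vehicles keeping a minimum separation along their trajectories, reduces to a simple check.  A sketch (the function names and the 0.3-meter threshold are illustrative):

```python
import numpy as np

def min_separation(traj_a, traj_b):
    """Minimum distance between two trajectories sampled at the same times.

    traj_a, traj_b: arrays of shape (T, 3) holding x, y, z at each time step.
    """
    return np.linalg.norm(traj_a - traj_b, axis=1).min()

def is_safe(trajectories, d_safe=0.3):
    """True if every pair of synchronized trajectories stays d_safe apart."""
    for i in range(len(trajectories)):
        for j in range(i + 1, len(trajectories)):
            if min_separation(trajectories[i], trajectories[j]) < d_safe:
                return False
    return True
```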

This is Javier's fifth year as an Ai intern and Meghan's fourth.

Javier placing the two Crazyflie drones at their starting positions.

Next, Loc Tran showed them how we can use a camera-equipped quadrotor to classify objects in search and rescue missions.  The multirotor can help find a lost hiker, for example, thanks to algorithms built to recognize objects in the surrounding environment.  It can detect people (including four of our NAVAIR visitors) and, of course, trees as well.
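As one illustration of the idea (not the Ai's actual classifier), a pretrained detector from the open-source torchvision library can flag people in a camera frame:

```python
import torch
import torchvision

# COCO-trained Faster R-CNN; an off-the-shelf stand-in for the Ai's model.
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(pretrained=True)
model.eval()

def detect_people(frame_tensor, score_threshold=0.8):
    """Return bounding boxes for people in one frame (3xHxW tensor in [0, 1])."""
    with torch.no_grad():
        output = model([frame_tensor])[0]
    boxes = []
    for box, label, score in zip(output['boxes'], output['labels'], output['scores']):
        if label.item() == 1 and score >= score_threshold:  # COCO label 1 = person
            boxes.append(box.tolist())
    return boxes
```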


Loc Tran researches object classification at the Ai.

Spotted: four members of NAVAIR

Danette and Derek Goddeau then introduced them to the robotic arm we received from the Defense Advanced Research Projects Agency (DARPA).

Derek explained how the robot begins with a search, scanning for a marker printed on the truss that tells the robot where the truss is.  "It will then generate a plan so it can safely grasp the truss, pick it up, and then generate another plan for the final placement location."  Afterward, it returns to the start, continuing to generate random placement locations as it runs.

"There is a certain distance that we require it to meet, so that it's safe.  If it doesn't succeed, then it will start all over and generate new plans," he explained.

"Like a human system, if it doesn't work the first time we don't just walk away...
we try again! That's exactly what wedo with this kind of system," Danette told
our visitors.

Derek has been a part of the Ai for a year now.

Kastan Day is a third-year intern at the Ai, and he also explained his work to our visitors.

He spent last summer creating barcode-like fiducials that "calculate where objects are in space."  This summer, however, his goal is to replace the barcodes with 3D cameras.

He discussed how he analyzes the point clouds created by these 3D cameras to find a truss and pick it up with the robotic arm that Derek showcased.  Kastan explained how "both the barcodes and the point clouds serve the same purpose, but we want to avoid the parasitic mass of sending fiducial markers into space. Using 3D cameras allows us to find the objects based on their intrinsic shape instead of relying on extra barcode markers."
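Finding an object by its intrinsic shape typically means registering the camera's point cloud against a reference model.  Kastan's exact toolchain isn't specified here; this sketch uses the open-source Open3D library and ICP alignment as one common option (file names and tolerance are assumptions):

```python
import open3d as o3d

# Load the sensor scan and a reference model of the truss (file names assumed).
scan = o3d.io.read_point_cloud('camera_scan.pcd')
model = o3d.io.read_point_cloud('truss_model.pcd')

# Refine an initial guess with point-to-point ICP; the resulting transformation
# tells the arm where the truss sits relative to the camera.
result = o3d.pipelines.registration.registration_icp(
    source=model,
    target=scan,
    max_correspondence_distance=0.02,  # meters; tolerance is an assumption
    estimation_method=o3d.pipelines.registration.TransformationEstimationPointToPoint())
print(result.transformation)  # 4x4 pose of the model in the scan's frame
```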

Kastan Day presenting the last demo on point cloud alignment.

Displayed on the top screen are the point clouds created by the
3D cameras.

Unfortunately, this was actually the last tour that will ever be given in the Ai building.  Next week, the team will be moving to a different site across NASA Langley.  It is a scary change, but we are all looking forward to having something new!

The Final Tour: Autonomy Incubator ft. NAVAIR

Thank you to our friends at NAVAIR for coming to learn about our work and allowing us to end our time in this building on such a good note!

Wednesday, July 11, 2018

2018-07-11: Derek Zhao and Loc Tran on Computer Vision and Machine Learning

Derek Zhao and Loc Tran have taken on a new effort under the ATTRACTOR project, an enterprise aiming to improve the explainability of artificial intelligence. Their goal is to develop effective and explainable image classification through the exploration of deep neural networks.

In this image, the two figures can be classified as Derek
Zhao and Loc Tran.

Traditionally, machine learning consists of ingesting a large body of data and recognizing the patterns that exist within it.  From these patterns, predictions can be made on new data.  An example is spam classification in emails.  Given a large body of emails that have already been labeled as spam or not spam, a machine learning algorithm will learn the patterns and key words that indicate each class.  Once trained, the algorithm can accurately predict whether new, unlabeled emails are spam.
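As a concrete illustration (generic, not the Ai's code), the spam example fits in a few lines of Python with scikit-learn:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

# A toy labeled corpus: 1 = spam, 0 = not spam.
emails = ["win a free cruise now", "meeting moved to 3pm",
          "free money click here", "lunch tomorrow?"]
labels = [1, 0, 1, 0]

# Learn word-count patterns from the labeled emails...
vectorizer = CountVectorizer()
classifier = MultinomialNB().fit(vectorizer.fit_transform(emails), labels)

# ...then predict on new, unlabeled email.
new = vectorizer.transform(["click here for a free cruise"])
print(classifier.predict(new))  # [1] -> spam
```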

In the context of image classification, this involves using a learning algorithm to identify, for example, whether an image is of a person.

Computer vision tasks typically generate numerical or symbolic information that exists in the form of a decision.  These symbols are a means of sorting and classifying real-world data.  One of the most common tools for computer vision tasks and image classification is the neural network, though it comes with a significant drawback.

"They're pretty black box-y," says Derek.  "There are a series of computations that work well, but there are so many computations going on under the hood, you end up having a lot of trouble developing an intuition for what is actually happening and determining what the network is actually learning. There are hundreds of thousands of pixel values and activations being passed through and transformed."

Loc is Derek's mentor at the Autonomy Incubator during
his internship.

Derek and Loc are trying to peek under the hood of neural networks using an algorithm called the variational autoencoder (VAE).

In image processing, an image is defined by all of its pixel values, which are unspooled into a tall, skinny vector.  That vector is fed into a neural network and gradually compressed until it produces a code for whether the image is of a person or some other entity.  For example, 0 could mean "not person" and 1 could mean "person."

"You normally just take a big vector and keep compressing it down until you get a code out.

An autoencoder, however, is different, in that you take this big, long vector of around 4,000 pixels, and compress it down into an intermediate form, which is typically about thirty-two numbers. Then, you try to reconstruct the image that you put in by recreating the same pixels.

At this point, it isn't image classification anymore, it's image reconstruction, and it already has a lot of applications in just compressing and decompressing images!" 

Derek described his coding display as being "kind of like a bowtie
architecture."

An autoencoder is composed of two components: the first, called the encoder network, compresses an input into a latent representation.  The second component, the decoder network, decompresses the latent representation and attempts to reconstruct the original image.
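A minimal sketch of that bowtie shape, written here in PyTorch (an assumption; the post doesn't say which framework Derek and Loc use):

```python
import torch.nn as nn

class Autoencoder(nn.Module):
    """Bowtie architecture: wide input, narrow 32-number latent code, wide output."""
    def __init__(self, n_pixels=4096, latent_dim=32):
        super().__init__()
        # Encoder network: compress the flattened image into the latent code.
        self.encoder = nn.Sequential(
            nn.Linear(n_pixels, 512), nn.ReLU(),
            nn.Linear(512, latent_dim))
        # Decoder network: reconstruct the original pixels from the code.
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 512), nn.ReLU(),
            nn.Linear(512, n_pixels), nn.Sigmoid())

    def forward(self, x):
        z = self.encoder(x)          # latent representation
        return self.decoder(z), z    # reconstruction and code
```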

In Loc and Derek's current model, this latent representation consists of thirty-two numbers, and one of the most commonly asked questions is simply: what do these thirty-two numbers mean?

"It turns out that, under certain conditions, if you fix the latent representation for an image and take just one of these components and move it around, the reconstructions that you get back out change only on one facet of the image, like hair or chin width," as you can see in the images below.

For his research with faces, Derek has been using celebrities as examples.

Steve Carell

David Schwimmer

In this gif, the hair changes, as does the facial rotation.
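A traversal like the one in that gif comes from fixing an image's latent code and sweeping a single component through a range of values.  A sketch, reusing the hypothetical autoencoder from the previous code block:

```python
import torch

def traverse(model, image, dim, values):
    """Reconstructions of `image` as latent component `dim` sweeps over `values`."""
    with torch.no_grad():
        z = model.encoder(image.flatten().unsqueeze(0))  # fixed latent code
        frames = []
        for v in values:
            z_mod = z.clone()
            z_mod[0, dim] = v                 # move one component only
            frames.append(model.decoder(z_mod))
    return frames  # stitch these into a gif to visualize the dimension
```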

Unfortunately, the latent representations of traditional autoencoders don't mean all that much: changing one component around results in reconstructions that morph wildly from one face to another.  By modifying the architecture slightly into a VAE, they found that they could disentangle some latent dimensions from the others, so that changing one component changes only one aspect of the image instead of many, as a plain autoencoder would.

Their research examines many cutting-edge variations of the autoencoder, including the VAE and the beta-VAE, but it has recently focused on the TC-VAE for faces, such as those in the CelebA dataset.
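The difference between these variants lives in the training objective.  In the notation of the TC-VAE paper linked at the end of this post, the VAE maximizes the evidence lower bound, the beta-VAE reweights its KL term, and the TC-VAE decomposes that KL term and penalizes only the total correlation, the piece that measures dependence among the latent dimensions:

```latex
% Standard VAE (evidence lower bound):
\mathcal{L}_{\text{VAE}} = \mathbb{E}_{q(z|x)}\left[\log p(x|z)\right]
    - D_{\mathrm{KL}}\!\left(q(z|x)\,\|\,p(z)\right)

% beta-VAE: weight the whole KL term by beta > 1 to encourage disentanglement.
\mathcal{L}_{\beta\text{-VAE}} = \mathbb{E}_{q(z|x)}\left[\log p(x|z)\right]
    - \beta\, D_{\mathrm{KL}}\!\left(q(z|x)\,\|\,p(z)\right)

% TC-VAE: decompose the aggregate KL and penalize only the total correlation.
\mathbb{E}_{p(x)}\!\left[D_{\mathrm{KL}}\!\left(q(z|x)\,\|\,p(z)\right)\right]
    = \underbrace{I_q(x;z)}_{\text{index-code MI}}
    + \underbrace{D_{\mathrm{KL}}\!\left(q(z)\,\|\,\textstyle\prod_j q(z_j)\right)}_{\text{total correlation}}
    + \underbrace{\textstyle\sum_j D_{\mathrm{KL}}\!\left(q(z_j)\,\|\,p(z_j)\right)}_{\text{dimension-wise KL}}
```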

Each of these grids of faces shows results from the TC-VAE.

Have you ever wondered what you'd look like with a
bowl cut?

The TC-VAE can even change the gender of the person
in the photograph.

Their immediate goal is to build an app that performs live reconstructions on a webcam stream and offers sliders for manually exploring the latent space.  It would offer an easy way to gain better intuition about what exactly the TC-VAE is learning.
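One lightweight way to prototype such an app (an assumption; the post doesn't name a toolkit) is OpenCV's webcam capture plus trackbar sliders, with the autoencoder sketched earlier filling in the encode/decode step:

```python
import cv2

cap = cv2.VideoCapture(0)                      # open the webcam
cv2.namedWindow('reconstruction')
# One slider per latent dimension; the real app would have 32 of these.
cv2.createTrackbar('z0', 'reconstruction', 50, 100, lambda v: None)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    offset = (cv2.getTrackbarPos('z0', 'reconstruction') - 50) / 10.0
    # Here: encode `frame`, add `offset` to latent component 0, then decode.
    recon = frame  # placeholder: this sketch just shows the raw frame
    cv2.imshow('reconstruction', recon)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break
cap.release()
cv2.destroyAllWindows()
```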

As Derek told me, in the end, "if you have a neural network that can compress your images into latent dimensions that are human interpretable then you suddenly have a means of telling people what exactly is in an image or even why a more classical neural network is making the decisions that it is."

Total Correlation VAE -  https://arxiv.org/abs/1802.04942

Tuesday, July 10, 2018

2018-07-10: Meet our Summer 2018 Interns!

Summer has officially started and the Autonomy Incubator has taken in several new interns along with a handful of returning friends!  This time around we have a total of eleven wonderful people helping out with everything. Let's introduce them!

First, meet Kastan Day.  This is his third year in the Ai, and he has been a great help throughout the past few years.  After graduating high school in 2016, he began his first year here as a social media intern.  When he returned his second year, he decided to take the coding and computer science route instead, which is where he continues to be this summer. This year he is working on an In-Space Assembly project. Kastan will graduate from Swarthmore College in 2020 with a Bachelor's Degree in Computer Science.

Kastan is working with In-Space Assembly this summer.

Next, meet Miranda Smith.  Though this is her first year interning here, she has already become a joy to have around!  Miranda graduated from Old Dominion University in 2017 with a Bachelor's Degree in Computer Science.  She is continuing to further her studies in order to receive her Master's Degree in 2019.  This summer, she is working with the Human Machine Interface team.

Miranda Smith working on a website to research the way people talk.

Chase Noren, like Kastan, is also working on In-Space Assembly. He will graduate this December from Texas A&M with a degree in Aerospace Engineering.  This is his second time interning with the Autonomy Incubator, as he joined us for the first time in the fall of 2017.

Chase showing me some of his work.

Derek Zhao is another new part of the Ai team and is working with computer vision and machine learning.  In 2011, he graduated from the University of Southern California with a Bachelor's Degree in Music Composition.  He then went on to Columbia University to receive his Master's in Data Science.

Derek Zhao is working with computer vision and machine
learning here at NASA Langley.

Javier Puig-Navarro has been with us for almost five years now.  He first joined us in 2014 after he received his Master's in Aerospace Engineering from the Polytechnic University of Valencia. He is now a PhD student, studying at the University of Illinois at Urbana-Champaign.  At the Ai, he is researching time critical coordination and path planning for multiple unmanned vehicles.

Javier is a very dedicated and hardworking member of our
team and we are so happy to have him here for yet another
summer!

Meghan Chandarana has also dedicated many summers to NASA Langley, as this is now her fourth year as an Ai intern!  After completing her undergraduate studies at UC Berkeley, she moved on to Carnegie Mellon University, where she is working toward her PhD in Mechanical Engineering.  After receiving her Master's degree in 2014, she interned at Marshall Space Flight Center in Huntsville, Alabama.  In 2015, she began interning at the Ai and, over the past few years, has worked with natural language-based interfaces.  This summer, Meghan is working as a Pathways intern and continuing her research from last summer on mission planning for large swarms.

Though this is Meghan's fourth year at the Ai, this is only
her first year as a Pathways intern.

Ben Hargis received his Bachelor's Degree in Mechanical Engineering in 2017 from Tennessee Technological University.  He is currently continuing his studies as a PhD student in mechanical engineering with a focus in robotics.  Ben was a part of the Multi-Disciplinary Aeronautics Research Team Initiative (MARTI) in 2016 and worked on E-Cell tether modeling for Marshall Space Flight Center in 2017.  This summer, as an intern at the Ai, he is working to improve the robotic arm operations for the In-Space Assembly project.

Ben is a PhD student at Tennessee Technological University.

Erica Meszaros has returned for her second summer at the Ai, after completing her first internship in 2016.  She received her Bachelor's in Classical Languages (Greek/Latin) from the College of Wooster, her Master's in Linguistics with a Certificate in Artificial Intelligence from Eastern Michigan University, her MA in Social Science from the University of Chicago, and will be starting a PhD program in the fall at Brown University with a focus in the History of Science.  At NASA Langley, she uses linguistic analysis to evaluate human/autonomous system teaming and interface design to aid in trusted autonomy.  We are so happy to have her back for another summer!

Erica's research analyzes the language we use to describe
scientific knowledge and advancements, specifically through
metaphor over time.

Andrew Puetz is a recent graduate of South Dakota State University.  This past fall he received his Bachelor's degree in Mechanical Engineering, and he hopes to continue his education with a Master's degree in Aerospace Engineering.  He started his first internship at the Ai in the spring, but this is not his first time working at NASA Langley.  Last summer, he was part of a twelve-person intern team, which is now called the Aero Academy.  This summer he is working with hovercraft control systems at the Ai.


Andrew wants to create a fleet of RC hovercraft this
summer.

Jeremy Castagno is interning at the Ai this summer for about two weeks but, as a Pathways intern, has high hopes of coming back in the following summers!  He received his Bachelor's degree in Chemical Engineering from Brigham Young University with minors in Computer Science and Mathematics.  Through his work in Memphis as a control systems engineer in oil and gas, he fell in love with controls, design, and cyber-physical systems, and he is now a PhD candidate at the University of Michigan, where he has just received his Master's in Robotics.  At the Autonomy Incubator, he will be working on emergency landings for Unmanned Aerial Systems (UAS).  His main goal is for autonomous machines to have more humanlike decision-making, such as deciding for themselves where to land and being able to assess risk.

"Hopefully I can come back for the next three summers while I
work towards my PhD."

Finally, I suppose I should reintroduce myself.  I am the social media intern at the Ai, and I go by the name of Payton Heyman.  Last summer I did a short internship for about two weeks and covered the open house in October of 2017, but I loved it so much that I have come back for a full summer this time!  I will begin my first year of college at the Savannah College of Art and Design in the fall, but I am very much looking forward to spending the summer here at NASA Langley.

Payton looking through photos to edit in Photoshop.

All of our beloved interns have been working very hard since each of their arrivals.  It is a pleasure to be here and work with everyone.

Cheers to a summer of research and fun!