Thursday, August 30, 2018

2018-08-30: Payton Heyman Exit Presentation Summer 2018


As sad as I am to leave, my internship has come to an end for the summer, and I will now be heading towards Georgia to begin my freshman year at the Savannah College of Art and Design.  After ten weeks of tweets, Facebook posts, blogs, YouTube videos, and Instagram posts, watch my exit presentation to see how the Autonomy Incubator's social media presence has grown!

See you again in November!

2018-08-30: Meghan Chandarana Exit Presentation Summer 2018


Meghan has wrapped up her fourth internship with the Autonomy Incubator!  This summer she was a first-time Pathways intern.  Watch her exit presentation to learn about her work with mission planning for robot swarms!

2018-08-30: Jim Ecker and his GAN-Filled World

Jim Ecker is a member of the Data Science team at NASA Langley and a part of the ATTRACTOR project, working specifically in the Autonomy Incubator.  Jim received his Bachelor's degree in Computer Science from Florida Southern College and his Master's in Computer Science from Georgia Tech, specializing in machine learning and artificial intelligence.

Coming out of school, he was a software engineer for about eight years and then went to Los Alamos National Lab, where he worked in the supercomputing and intelligence and space research divisions.  From there, he began work at NASA Langley.

Jim has his Master's degree in Computer Science.

HINGE, which was recently outlined here, is partially in support of what Jim is doing.  The main project he is working on right now uses Generative Adversarial Networks (GANs), a machine learning technique that lets a computer generate data from an example data set, such as producing an image from a description.

Using a few different photo databases, including Flickr30K and COCO (Common Objects in Context), he has over 200,000 real images, each annotated with five different descriptions.  The compiled images include everything you can think of, from cars to boats to animals to people, but his main focus is people, to aid in the search and rescue research that takes place at the Ai.  He went through the COCO dataset and, as best he could, pulled out all of the images containing people, and he is currently curating data from Flickr30K.
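Jim's curation scripts aren't something we've posted, but to give a flavor of what "pulling out all of the images containing people" looks like in practice, here is a minimal sketch using the standard pycocotools API (the file paths are assumptions for the example):

```python
# Minimal sketch: keep only the COCO images that contain people, along with
# their five human-written captions. Annotation file paths are assumptions.
from pycocotools.coco import COCO

instances = COCO("annotations/instances_train2017.json")
captions = COCO("annotations/captions_train2017.json")

# Every image that has at least one "person" annotation.
person_cat_ids = instances.getCatIds(catNms=["person"])
person_img_ids = instances.getImgIds(catIds=person_cat_ids)

# Collect the captions attached to each of those images.
dataset = []
for img_id in person_img_ids:
    ann_ids = captions.getAnnIds(imgIds=img_id)
    caps = [ann["caption"] for ann in captions.loadAnns(ann_ids)]
    file_name = instances.loadImgs(img_id)[0]["file_name"]
    dataset.append({"image": file_name, "captions": caps})

print(f"{len(dataset)} person images with captions")
```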

The more captioned images you have, the more data the neural network is given, and it eventually begins to better understand what the pieces of an image are.  For example, after receiving many different images with captions containing the word "glasses," the neural network will eventually learn what the word means and be able to detect what glasses look like based on the pixels and the similarities between the different images.  If you ask it to create, say, an image of 'a woman in a red cocktail dress,' the neural network will take what it knows from the other images it has seen and their descriptions to draw it.  "As it looks at more and more examples and learns more and more how to make that kind of thing, it starts to learn what different things are and make a better drawing," he explained.  "It's very weird what it's doing when you first think about it; it starts with what looks completely random, and it eventually learns how to draw the pieces."

A woman in a cocktail dress.

The cocktail dress example is actually quite mind-blowing to us.  It connected 'cocktail' to a cocktail bar, and since bars typically have mirrors, it was able to mirror the woman's back in the top right of the image.
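We haven't posted Jim's actual network, but the core idea of a text-conditioned generator, which takes a caption plus random noise and produces pixels, can be sketched in a few lines of PyTorch.  This toy architecture is only an illustration, not the model Jim uses:

```python
# Toy sketch of a text-conditioned GAN generator in PyTorch. Real text-to-image
# GANs are far larger; this only illustrates conditioning image synthesis on a
# caption embedding. All sizes here are arbitrary illustrative choices.
import torch
import torch.nn as nn

class CaptionGenerator(nn.Module):
    def __init__(self, vocab_size=5000, embed_dim=128, noise_dim=100):
        super().__init__()
        self.embed = nn.EmbeddingBag(vocab_size, embed_dim)   # caption -> vector
        self.net = nn.Sequential(
            nn.Linear(noise_dim + embed_dim, 256), nn.ReLU(),
            nn.Linear(256, 64 * 64 * 3), nn.Tanh(),           # tiny 64x64 RGB image
        )

    def forward(self, caption_token_ids, noise):
        cond = self.embed(caption_token_ids)                  # encode the description
        x = torch.cat([noise, cond], dim=1)                   # condition the noise on it
        return self.net(x).view(-1, 3, 64, 64)

# Generate one image for a (hypothetical, already-tokenized) caption.
gen = CaptionGenerator()
caption = torch.randint(0, 5000, (1, 12))   # stands in for "a woman in a red cocktail dress"
img = gen(caption, torch.randn(1, 100))
print(img.shape)  # torch.Size([1, 3, 64, 64])
```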

The descriptions of the images, and how in-depth they go, are very important.  You can have a caption that simply says "a woman with an umbrella," but how much does that really tell you? Not much.  A better description would include what they're wearing and what they're doing as well, like "a woman in a black jacket, a blue and white checkered shirt, white pants, and black shoes is carrying an umbrella while walking through the rain."  This includes more details, so the neural network is able to learn and draw more with the information.  "I need the descriptions to be as specific as possible," Jim said.

Following his explanation, he showed me a demo, where he described me and what I was wearing to see how it would come out.  With the description being "a woman wearing a black and white striped shirt with a black sweater," you can see what I looked like below!


A renaissance painting entitled: Payton Heyman

I'd say it looks just like me! Especially with the wind-swept ponytail.

Along with the image itself, it also produces handy visualizations of "how each word in the description is mapped to different parts of the generated image," as Jim said.  "It gives each of these words some weight, saying here's the woman, here's the sweater, and so on.  These visualizations are key to providing explainability to an agent's environmental understanding."


The annotation tags highlight each characteristic.
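The visualizations above come out of the GAN's own attention layers.  Purely as an illustration of the idea, per-word attention maps (here just random stand-in data) can be overlaid on a generated image roughly like this:

```python
# Illustrative only: overlay per-word attention maps on a generated image.
# "attn" stands in for the word-to-region weights an attention-based GAN
# produces; both arrays below are random placeholders.
import numpy as np
import matplotlib.pyplot as plt

words = ["a", "woman", "wearing", "a", "black", "sweater"]
image = np.random.rand(64, 64, 3)            # stand-in for the generated image
attn = np.random.rand(len(words), 16, 16)    # stand-in attention weights per word

fig, axes = plt.subplots(1, len(words), figsize=(2 * len(words), 2))
for ax, word, weights in zip(axes, words, attn):
    ax.imshow(image)
    # Upsample the coarse 16x16 attention grid to image size and overlay it.
    ax.imshow(np.kron(weights, np.ones((4, 4))), cmap="jet", alpha=0.4)
    ax.set_title(word)
    ax.axis("off")
plt.tight_layout()
plt.show()
```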

He explained that the hope is it will get much better at accurately visualizing what the described person looks like.  It stores all of the information similarly to how the human brain stores visual information, based on Dual-Coding Theory.  This is how it all ties into the search and rescue research.  "It encodes the features basically into memory storage in your mind.  The idea is to kind of try to replicate this so that when an object detector [like a drone] is looking around with a camera, every time it sees a person it can store the represented information and compare it to what it already knows."  When it finds someone who looks like the person they are looking for, it would realize, "oh, that's who they were talking about."  The entire idea is that once Jim trains the object detector enough, it would hopefully be able to successfully recognize someone, but for it to be that specific you'd have to have thousands of pictures of that person and train it for quite a long time.  "Using synthesized visualizations from a GAN does this in an unsupervised manner, requiring much less data."
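The "compare it to what it already knows" step boils down to comparing feature vectors.  Here is a minimal sketch of that comparison, assuming some encoder has already turned the stored description and each detected person into embeddings; the encoder itself and the 0.8 threshold are assumptions for illustration:

```python
# Minimal sketch: compare a stored "who we're looking for" embedding against
# embeddings of people the camera has detected, using cosine similarity.
# The encoder that produces these vectors is assumed, not shown.
import numpy as np

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

stored = np.random.rand(256)   # encoded description of the missing person (placeholder)
detections = {f"person_{i}": np.random.rand(256) for i in range(5)}  # people seen so far

scores = {name: cosine(stored, vec) for name, vec in detections.items()}
best = max(scores, key=scores.get)
if scores[best] > 0.8:                            # illustrative threshold
    print(f"Likely match: {best} ({scores[best]:.2f})")
else:
    print("No confident match yet; keep searching.")
```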

We love our monitors at the Autonomy Incubator.

"Other than all of this, I am also working on deep reinforcement learning," he said.  An example of this is how he is teaching an agent to play Super Mario BrosTM. Basically how this works is the agent looks at all of the pixels on the screen in order to decide what to do.  As it plays the game, it learns more and more of what actions do what and what to do in a situation.  Jim is able to pull up the source that shows what actions are going on at any given time during the game.  "It's kind of like a hexadecimal representation in and of the buttons; some of them might even be combinations of a NintendoTM controller."

As mentioned previously, the HINGE project is in support of his research.  It has given him and the rest of the team an idea of what type of data they need to feed the GAN in order to get the kind of output they want.  They're able to see how to best talk to it and what the best descriptions are to help it visualize.  Jim's work is improving by the day, and we look forward to seeing how it progresses even more!

Sunday, August 26, 2018

2018-08-26: Women's Equality Day and the Rise of Women in STEM

The women of the Autonomy Incubator

It's Women's Equality Day, and the women of the Autonomy Incubator are celebrating all of our hard work.  With degrees spanning Social Science, Mechanical Engineering, Psychology, Media Production, and Computer Science, the women of NASA sport their STEM knowledge with pride.

According to the Economics and Statistics Administration, forty-seven percent of all jobs were held by women in 2015; however, they only held twenty-four percent of STEM jobs.  This is a big part of what has inspired the women of the Ai to pursue their education and empower other women to do the same.

One of the greatest aspects of the Ai is the female leadership.  From Danette Allen, head of the lab and Co-PI of the ATTRACTOR project alongside Natalia Alexandrov, another strong woman at NASA, to Lisa Le Vie, head of the HMI team, they each play significant roles here.  Additionally, they all set a great example for the younger generation of interns that have the opportunity to be here, like Erica Meszaros, Meghan Chandarana, and myself.

Danette and Lisa even received Mentorship Awards at the beginning of this
month, nominated by Ai interns.  Danette received four nominations, and
Lisa received one.

As mentioned previously, there is a large variety of backgrounds and degrees represented at the Ai.  Erica has dreamed of working at NASA since she was young.  With an educational background primarily in the social sciences and humanities, she described how "finding a place that recognizes the importance of scientific analysis informed by these disparate backgrounds as crucial, is difficult."  This leads to why we believe the Ai excels so well: "it draws from many different backgrounds in order to pursue its research goals."  Her work, specifically, looks at the use of "linguistic analysis to evaluate human/autonomous system teaming and interface design to aid in trusted autonomy," but surrounding her is very different research as well.  There are people working with deep reinforcement learning, mechanical engineering, and even, in my case, using digital media and social platforms to communicate the science.

Meghan had similar dreams of NASA when she was growing up as well.  "Without the women pioneers that came before me, I would not have even thought it was possible to do the research I do today," she explained.  "On a daily basis I am surrounded with strong, passionate, and talented women that challenge me to reach beyond the visible boundaries – in to the infinite potential that lays waiting to be discovered. My hope is that the work we do in the Ai shows young girls that they can do anything they put their mind to."

Erica, myself, and Meghan are the remaining female summer interns.  We
have all enjoyed every second of it!

It's amazing to see so many powerful women in a field that, statistically speaking, is generally dominated by men.  As someone who has only just graduated high school, I am so grateful to have had the opportunity to work with so many wonderful people this early in my life and get a realistic view of the world of STEM.

In the words of Erica, "it’s so important to not only see gender representation at the higher levels but also to have the opportunity to see a workplace filled with intelligent and capable women researchers. This is the NASA I dreamed of as a little girl and the direction I hope it continues in the future!"

Friday, August 24, 2018

2018-08-24: An Introduction to the HINGE Project

Bryan Barrows and Erica Meszaros, some of the stars of HINGE.

With the recent introduction to the ATTRACTOR project, there are also pieces that follow behind it that have been patiently waiting for their time to shine on the Ai blog.  HINGE is a project working to support the goals of ATTRACTOR, but before diving too deep, the waterfall of acronyms that it provides must be introduced first.

HINGE stands for Human Informed Natural-language GANs Evaluation.  As you can see, within that acronym is the acronym GAN, meaning Generative Adversarial Network, a machine learning algorithm that lets a computer generate data from an example data set, such as producing an image from a description.

So, what does all this really mean, and what exactly is the project?  Well, with ATTRACTOR and its aim to increase trust in and trustworthiness of autonomous systems, it is critical to examine how human operators interact with these systems, i.e., the human/machine interaction (HMI).  HINGE was the first step under the HMI team, examining ways in which autonomous systems on search and rescue (SAR) missions can get useful information from human colleagues.  GANs are able to produce images of the missing person, which can then help the machine identify them; however, more descriptive data is needed to train the GAN and, more importantly, we need to ensure that we can get critical information in an emergency situation.  In summary, HINGE is a project designed to gather image-description data in order to help us understand what descriptive information we need and how we can best provide it.  Of course, we also get to test out some fun machine learning algorithms.

Now, it's time to meet the HMI team!  Lisa Le Vie is the Principal Investigator and lead for the HINGE effort.  Essentially, she sees everything from a global point of view, including where it's headed in the future.

Lisa giving a presentation of her work at the 2018 Aviation conference.

Bryan Barrows, a graduate from Virginia Tech, focuses on natural language processing (NLP) and data analysis.  He worked with Lisa and Mary Carolyn earlier this year to collect a data set from employees on center at the Langley cafeteria, which consists of over five hundred descriptions used to train the GAN.  Over the course of the summer, he has been closely working with Lisa and Erica Meszaros on understanding the collected descriptions.  Bryan has developed and employed several NLP methods to better deduce and understand the description data.  His analysis of the collected HINGE data set has led to the derivation of several key semantic features that are helpful for training machine learning models, as well as understanding the desired image representations produced from the GAN.

Bryan was once a NIFS intern, but he is now a civil servant at NASA.
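Bryan's pipeline isn't reproduced here, but as a taste of the kind of semantic features his NLP methods pull out of a description, here is a rough sketch using spaCy's part-of-speech tags and noun chunks; the color list is just an example, not part of his actual analysis:

```python
# Rough sketch of pulling descriptive features out of a caption with spaCy.
# Only an illustration of the kind of analysis described above, not Bryan's
# actual pipeline; the color list is an assumption for the example.
import spacy

nlp = spacy.load("en_core_web_sm")
COLORS = {"black", "white", "blue", "red", "green", "checkered", "striped"}

description = ("a woman in a black jacket, a blue and white checkered shirt, "
               "white pants, and black shoes is carrying an umbrella")
doc = nlp(description)

attributes = [tok.text for tok in doc if tok.pos_ == "ADJ" or tok.text.lower() in COLORS]
objects = [chunk.root.text for chunk in doc.noun_chunks]   # jacket, shirt, pants, shoes, ...
actions = [tok.lemma_ for tok in doc if tok.pos_ == "VERB"]

print("attributes:", attributes)
print("objects:   ", objects)
print("actions:   ", actions)
```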

Erica L. Meszaros, a returning Ai intern, joined the HMI team at the beginning of the summer.  Because of her background in linguistics and modalities of interaction for human/autonomous system interfaces, she has been focusing primarily on applying linguistic analysis to the HINGE data.  "We’re approaching this analysis from a lot of really interesting angles informed by machine learning requirements, situational and contextual interaction, and modalities of communication, which makes it a really neat kind of interdisciplinary puzzle," she stated.

In addition to all of her HINGE work, she has also assisted me greatly with the extensive analyses we have written for the blog.  This one is included!

 Erica has spent a lot of time on the BirdGAN.

Mary Carolyn Last and Miranda Smith are no longer at the Ai, but they are very important to mention.  Both of these young women were a big part of the project, and their work does not go unappreciated.

At this point in time, HINGE is actually wrapping up! HMI is still an important area of focus for understanding, informing, and improving trust in autonomous systems, though, and the team is "hoping to use the work from the HINGE project to move toward different research directions," as Erica stated.

A lot of the HINGE work also supports Jim Ecker and his research on environmental understanding using generative models, like GANs, which we will expand on later!  The team is in the process of doing a second data collection, so be sure to stay updated to hear what's next!

Special thanks to Erica Meszaros for writing assistance.

2018-08-24: Erica Meszaros Exit Presentation Summer 2018


Erica Meszaros has given her exit presentation after working with the HINGE team throughout the summer.  She worked on "Communication Analysis for Improved Human/Autonomous System Teaming."  Read about her work here and be sure to watch her presentation to learn even more!

Tuesday, August 21, 2018

2018-08-21: Carol Castle Returns to the Autonomy Incubator

Carol has begun decorating her new office to make it her own.

Carol Castle, the former management support assistant to Danette in the Autonomy Incubator, is officially returning to the team this week in support of ATTRACTOR.  She spent the past year working in the Flight Projects and Science Directorate, but we are very excited to have her back as our admin!

"It's nice to come back to my NASA family," she said.

Carol has been married to her husband, David Castle, a recent retiree from the Aeronautics Systems Engineering branch, for twelve years.  She has two kids and is a proud grandma!  Overall, she has been a part of NASA for thirty-two years, and, in the fall, she will be receiving an Exceptional Administrative Achievement Medal to honor all of her hard work and commitment.

Carol is very happy to be back.

Because we value her return so much, we had a lunch celebration of Indian cuisine, one of her favorite things.  Welcome back, Carol!

Friday, August 17, 2018

2018-08-17: New ATTRACTOR Project Continues NASA Tradition of Latin Mottos

Introducing: the new and exciting ATTRACTOR project, brought to you by the Autonomy Incubator's very own Social Media Intern, Payton Heyman, and linguistics expert, Erica Meszaros!


The ATTRACTOR (Autonomy Teaming and TRAjectories for Complex Trusted Operational Reliability) project's mission is to work toward both building human trust in autonomous systems and creating autonomous systems worthy of that trust through explainable AI (XAI).  The motto chosen to represent this mission is docendo discimus, a Latin aphorism that translates to "by teaching, we learn."  This proverb is often attributed to Seneca's seventh letter to Lucilius; however, throughout time it has been a frequently used motto for institutions around the world, gracing the logos of Central Washington University, the University of Chichester in England, Novosibirsk State Technical University in Russia, and more!  So how did it end up a part of the logo for the ATTRACTOR project?

Dr. Natalia Alexandrov, a mathematician in the Aeronautics Systems Analysis Branch at NASA Langley and Co-Principal Investigator for ATTRACTOR, was the person who chose it.  "It's a perfect fit for the machine learning component of ATTRACTOR," she said.  "As we 'teach' the machine, via, say, active learning methods, we 'learn' about the gaps in problem formulation, the models, and algorithms.  Conversely, the machine 'learns' from data and 'teaches' us what it needs to be a better predictor."

Spurred by our curiosity about the Latin motto, Erica and I found ourselves diving deep into a NASA tradition and discovered a long and intricate history of Latin phrases.  The badge for the fated Apollo 13 mission bears the words ex luna, scientia, expressing the scientific goal of gaining knowledge from the Moon.  To honor the Apollo 1 astronauts, there is a plaque at the site where they perished bearing the Latin phrase ad astra per aspera, which translates to "a rough road leads to the stars" or "to the stars through difficulty."  This was also the name of the 50th anniversary tribute exhibit at the Kennedy Space Center Visitor Complex in 2017.

nasa.gov

Arrogans avis cauda gravis, which translates to "the proud bird with the heavy tail," was the motto of the 2TV-1 altitude test of the Apollo Command Module (CM) and Lunar Module (LM) in the 1960s.  A current example includes mission operations at NASA's Armstrong Center, which claims the motto of ex binarii, cognition, evoking Apollo 13’s motto for its own goal of gaining understanding from binary.


LaunchPhotography.com

Seneca, the previously mentioned Roman philosopher who is credited with the motto of "by teaching, we learn," exhorts us to turn to those who would seek to make us better and then, in return, accept those whom we could make better.  While Seneca was talking about humans in his letter, there’s no reason that his sentiment should not hold true for relationships between humans and autonomous systems.  We provide what information we can and listen to and interpret the information provided from our autonomous teammates.  But more than speaking to the action of teaming, Seneca’s quote-turned-aphorism can provide guidance for a style of research that is at the heart of the Autonomy Incubator.

In a lab researching autonomous systems, perhaps the first question is who is teaching and who is learning? We, as researchers, are often tasked with teaching other humans, a process highlighted by the incredible number of interns that are welcomed into the Ai. In teaching and collaborating with interns we may uncover surprising aspects of our own research and open ourselves up to new opportunities.

nasa.gov

This humans-teaching-humans process is, perhaps, the most Senecan of the lot, but it’s worth considering where machines fit in. As we teach algorithms and models to understand, replicate, and even learn, we in turn gain additional knowledge. In gathering descriptive training data for a GAN designed to produce images for a search and rescue mission, the HINGE team has gained insight into how human language changes when elicited in different situations and provided through different modalities. (Read here to learn more about HINGE!)

Here, we see humans teaching machines while learning from them, but what if the tables were turned? What if autonomous systems were teaching us? Perhaps the sci-fi vision of machine-tutors is a dream of the future, but it’s not hard to imagine a system that gains information from the field and provides that information to a human operator, who in turn updates an aspect of the machine’s system only to start the process all over.

The truth behind docendo discimus is that it locks us – delightfully! – into a cyclical system, trading off teaching and learning roles with our other teammates, be they human or machine.  Yes, the aphorism itself has a history of its own, and NASA has an increasingly long list of Latin phrases to follow behind it, but it's not always about the weight of connotation.  Natalia, when discussing why she decided to make the phrase ours, explained how she also simply "likes the way [it] sounds.  There is something nice about alliteration."



Sources:
https://en.wikipedia.org/wiki/Docendo_discimus
https://en.wikipedia.org/wiki/Per_aspera_ad_astra 
https://www.nasa.gov/feature/50-years-ago-two-critical-apollo-tests-in-houston 

Thursday, August 16, 2018

2018-08-16: Ben Hargis Exit Presentation Summer 2018

Ben Hargis, a PhD student at the Tennessee Technological University, is wrapping up his summer internship this week.  Ben has spent his time working to improve the robotic arm operations for the In-Space Assembly project.  Learn more by watching his exit presentation!

2018-08-16: Casey Denham Summer Presentation August 2018





Casey Denham is fortunately not leaving NASA Langley quite yet!  She did, however, still present to show off everything she has done this summer as a Pathways intern.  Watch her presentation to learn about her work with "Uncertainty and Risk in Autonomous Systems."

2018-08-16: Skylar Jordan Exit Presentation Summer 2018

Skylar Jordan, a student at the University of Tennessee in Knoxville, has finished up his internship for the summer.  After starting out at the National Institute of Aerospace, he came to the Ai to work with Lisa Le Vie on "Crowd Mapping Applications for Autonomous Vehicle Operations."  Watch his exit presentation to learn more!

Tuesday, August 14, 2018

2018-08-14: Autonomy Incubator Welcomes Three MIT Students

Throughout this past week, the Autonomy Incubator has had the pleasure of working with three graduate students from the Massachusetts Institute of Technology (MIT).  Katherine Liu, Kyel Ok, and Yulun Tian each arrived at NASA Langley ready to learn and progress with their studies.

Yulun, Kyel, Katherine, Loc Tran, and Chester Dolph gather around the
computer to discuss the next flight test.

Their main initiative throughout their time here was to test-fly drones as part of search and rescue research.  On Monday, they were flying inside B1230, but they quickly transitioned to a forested area on NASA Langley's campus for the rest of the week.  They set up a four-by-four-meter "box" outside, inside which the drones take off.  Each drone then surveys the area using onboard LIDAR.

The drone runs an optimization algorithm in order to search the area in the most efficient way possible.  It is autonomous; however, you can tell where it is going from a green arrow on the computer screen that points in the direction of travel, and you can also tell which way it is facing.  Katherine worked on the computer side, using a two-way radio to communicate with the pilot, Brian Duvall, who could use the switch controller if something were to go wrong.  Her ability to communicate this was critical because we could then easily get out of the drone's pathway, and I didn't have to try to put up a disastrous fight with a Canon camera in hand.

Kyel working on one of the drones.

The crew spent Tuesday, Wednesday, and Thursday outside in the wooded area, test-flying all three drones - sometimes solo, sometimes together.  Each drone made its own map of the area to determine where it is in space and how to find its way through the trees.

The team also tested multi-vehicle path sharing, which proved fairly successful.  The drones communicated well with each other and were able to search in a way that kept them from running into each other.  We had over fifty successful flights outside!

Katherine, Kyel, and Yulun were only here through Friday; however, they are coming back in October for a second time.  We look forward to seeing them again!

Friday, August 10, 2018

2018-08-10: Intern Andrew Puetz Continues his Quadcopter Research in the Ai

Along with all of the other interns who have joined us for the summer, Andrew Puetz (pronounced "pits") has dedicated many weeks to the Autonomy Incubator; however, this was not his first year spending time with us.  Last summer, Andrew was a part of the 2017 NASA Langley Academy, and his team made several visits to the Ai to do flight tests.  His work with UAVs there introduced him to aeronautics flight research.

Despite the fact that Andrew has his Bachelor's degree in Mechanical Engineering from South Dakota State University, he has "always had the goal of doing things with aerospace," he explained to me.  "I've always been interested in [it].  I was a Buzz Lightyear fan and a Star Wars nerd as a kid, so I guess that's what we can blame it all on."

He essentially split his time between two main projects.  He came back to NASA in February to begin his first project with the Vehicle Analysis Branch (VAB), where he was working on an autonomous quadcopter project.  This research, though, was still taking place in the Ai building.

Andrew and Zac O'Gull are not only working on their quadcopter projects
together, but they're also in matching outfits!

The main objective was to serve as an affordable test bed for control scheme testing that could contribute to the Space Launch System (SLS) control scheme in the future.  They aim to eventually balance a completely unimpeded inverted pendulum (IP) on top of the quadcopter.

"The IP is a naturally unstable body, so it'll fall if something isn't holding it.  That's the same thing as the SLS.  If we're not controlling it, it'll just spiral out of control, which is where the thrusters at the bottom of it come in."  The quadcopter they are working with is very similar, as it has four motors around it, so it is able to balance itself like how the SLS would.  This allowed them to model the dynamic body of the SLS.

With all of this in mind, they were able to do research with an unstable hanging mass below the quadcopter.  This hanging mass serves as a model for the sloshing of fuel in the tank that would present a problem for the SLS.  "Sometimes during launch, a dynamic body, like the SLS, may experience where the fuel gets to be a rhythmic slosh back and forth, and that can make the SLS's body spiral out of control because the controller isn't able to dampen it down and get it under control."

They attached a hanging mass below the quadcopter,
which they call MamaBird.

They tested their controller with an Augmented Adaptive Controller (AAC), an additional level of control.  Andrew explained how the AAC "is able to do things outside the normal design envelope of a controller, so it will notice 'ah, we're really getting out of balance for the stability margin, things are really going bad,' and kick in the AAC and bring it back under control."  They tested it in their control scheme with the disturbed hanging mass, and the quadcopter was able to bring the mass under control and "dampen it back down" so it wasn't throwing the vehicle out of balance anymore.
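The blog doesn't include the controller itself, but the basic phenomenon Andrew describes, a hanging mass that swings like a pendulum and has to be actively damped back down, can be illustrated with a tiny simulation.  The dynamics and gains below are made-up toy values, not numbers from his project:

```python
# Illustrative simulation of a hanging mass swinging under a vehicle, damped by
# a simple PD controller. Dynamics and gains are toy values chosen only to show
# the "dampen it back down" behavior described above.
import numpy as np

g, length, dt = 9.81, 0.5, 0.01           # gravity, pendulum length (m), time step (s)
kp, kd = 4.0, 1.5                         # toy PD gains

theta, omega = np.radians(20.0), 0.0      # initial swing angle and angular rate
for step in range(int(5.0 / dt)):         # simulate 5 seconds
    control = -kp * theta - kd * omega    # corrective acceleration from the controller
    alpha = -(g / length) * np.sin(theta) + control
    omega += alpha * dt
    theta += omega * dt
    if step % 100 == 0:
        print(f"t={step*dt:4.1f}s  swing angle={np.degrees(theta):6.2f} deg")
```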

All of this work with the VAB is technically not part of the Ai; however, he was able to do just about all of the research for this project in the Ai building because of its great flight area.  He also just finished up an internship with the Ai, where he was working on his second project, and he has come back to continue working on it.

Andrew Puetz soldering.

For this Ai-specific project, he is working with a small hovercraft testbed to support swarm algorithm research in the future.  His part of this began with heavy research on other hovercraft projects from the past.  They were all about the same size, typically small RC-sized vehicles, and used control schemes that somebody had built on their own using either MATLAB™ or a coding language.  Once he became more familiar with them, he went on to build a crude controller in Simulink™ with the idea that "maybe this summer I'd get a chance to put it onto a flight controller and try it out on a small hovercraft vehicle," and he made the first few steps.  Matt Vaughan, one of the AMA team members at the Ai, 3D printed and built the body, and the propeller motors were scavenged off of a small quadcopter.

They call this quadcopter PapaBird.

"The idea is that there's all these people around the world with these small little quadcopters, and when the frames break, they take the thrusters off and 3D print the hovercraft body and they then have another vehicle that they can play with."

"So, basically its just reduce, reuse, recycle, right? Just in robot form!" I asked him quite proudly.

"Yes! Leave no part behind, that's a good way to put it," Andrew responded, obviously just as proud of my discovery.

He has not actually been able to try it out yet, but he did get to work with Sparky 2.0, a flight control board.  Sparky has characteristics and abilities similar to the Pixhawk; it's just a smaller and lighter board in comparison.  He learned how to put custom firmware onto the Sparky in order to see if it was a viable candidate flight controller for the hovercraft swarms of the future.

Andrew's internship technically wrapped up at the beginning of August, but after sifting through many different options for what to do next, he decided to come back.  He is still considering pursuing a Mechanical Engineering Master's degree at SDSU, but for now he has chosen to continue some research at NASA Langley through the fall.  He has many great opportunities ahead of him, and we are all thrilled to have him as a part of the Ai!

Thursday, August 9, 2018

2018-08-09: Derek Zhao Exit Presentation Summer 2018


Derek Zhao, a student at Columbia University, has wrapped up his summer internship at the Autonomy Incubator.  Derek spent his summer researching a way to develop effective and explainable image classification through the exploration of deep neural networks.  He created an app called "Moisturize," which reconstructs faces through the use of individual sliders for different features.  Watch his exit presentation to learn more about his work!

2018-08-09: Chase Noren Exit Presentation Summer 2018


Chase Noren has wrapped up his internship for the summer, during which he helped with the In-Space Assembly project.  Watch his exit presentation to learn more, and make sure to read about his work here!

Tuesday, August 7, 2018

2018-08-07: Kastan Day Exit Presentation Summer 2018


After returning to the Ai for a third summer, Kastan Day has wrapped up yet another successful internship!  This time around, he was working on a Robotic In-Space Assembly project.

Last summer, he created barcode-like fiducials to be placed on objects, such as trusses, but he wanted to progress away from that this year.  He started using cameras with 3D depth information to create point-cloud (position) data in order to find objects in a space.  Once the object is found, the robot can pick it up and assemble it with other objects to create a final structure.  This way, objects can be found based on their intrinsic shape.  Watch his Exit Presentation to learn more!

Monday, August 6, 2018

2018-08-06: Autonomy Incubator Takes on a Swarm of GoPiGo Rovers

Fifteen GoPiGo rovers lined up in the flight room.

The drones that you see in these five single-file lines are GoPiGo rovers, or as we like to call them in the Ai, Piglets!  Each Piglet comes in a do-it-yourself kit, and Autonomy Incubator team members have been building them throughout the past week.  In order to run, each Piglet requires a Raspberry Pi, a microcomputer.  By building them en masse, we have been able to research a few different things.

Skylar Jordan is using them to test his human crowd mapping algorithms in the real world.  Starting from a simulation on his computer, he is going to adapt it to operate the GoPiGo drones on the field, using Vicon systems for feedback.  "We will take a number of drones and a real person, or multiple real people even, and have them walk among the drones to see if the drones and the humans interact in the same way that humans would interact with other humans."

Meghan Chandarana is researching swarm technology.  The Autonomy Incubator team is highly interested in missions such as search and rescue, and with these GoPiGo rovers, "what will eventually happen is the vehicles will move through the environment and eventually come to a person that you want to help," Meghan explained.  They've developed techniques for the vehicles' decision-making capabilities, determining how the swarm should break up in order to find and help the person.  "The piglets are like a testing platform for us to be able to test those behaviors to see how well they work on real platforms and what a small-scale mission of that type would look like."
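Meghan's actual decision-making techniques are more sophisticated than we can show here, but as a toy illustration of what "how the swarm would break up" means, here is a greedy split of rovers across hypothetical search regions by distance; all of the names and coordinates below are made up:

```python
# Toy illustration of splitting a swarm across search regions: each rover is
# greedily assigned to the nearest region that still needs a vehicle. Not
# Meghan's algorithm, just a sketch of the kind of decision involved.
import math

rovers = {"piglet_1": (0, 0), "piglet_2": (1, 4), "piglet_3": (5, 1), "piglet_4": (6, 5)}
regions = {"north_woods": (1, 5), "clearing": (5, 0)}   # hypothetical search areas
vehicles_needed = {"north_woods": 2, "clearing": 2}

assignment = {}
for name, pos in rovers.items():
    open_regions = [r for r in regions if vehicles_needed[r] > 0]
    best = min(open_regions, key=lambda r: math.dist(pos, regions[r]))
    assignment[name] = best
    vehicles_needed[best] -= 1

print(assignment)
```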


Each individual GoPiGo rover comes in a DIY kit.

The Piglets only drive, providing us with a safer and less expensive alternative to flying drones.  They're a good testbed for both Meghan's and Skylar's research.  Skylar explained to me how he views them as sort of "a transition between computer simulation and flight tests to prove that your algorithms actually work."

Wednesday, August 1, 2018

2018-08-01: Andrew Puetz Exit Presentation Summer 2018


Andrew Puetz, a recent graduate from South Dakota State University (SDSU) with a Bachelor's degree in Mechanical Engineering, has finished up his Summer 2018 internship with the Autonomy Incubator.  For the past ten weeks, he split his time between two major projects.  The first was with the Vehicle Analysis Branch (VAB), working on an autonomous quadcopter project, which actually started back in February, and the second Ai-specific project consisted of the development of a small hovercraft swarm vehicle testbed.

Andrew's time with us has not come to a close, though, as he will be rejoining us for more research on hovercrafts.  Watch his Exit Presentation to learn more about his work this summer!