Posts tagged "nasa"


At present, I write here infrequently. You can find my current, regular blogging over at The Deliberate Owl.

a laptop, textbook, and piles of papers and notes on a carpeted floor

A month after graduation, I'm well on my way to learning all sorts of crazy new things. This summer, I'm learning about...

  • Ham radio. On Tuesday, I attended the first session of a summer-long amateur radio FCC licensing class. I know very little about radios and their components - the president of GSFC's amateur radio club told a story about how easy it was to build a circuit to convert 5 volts down to 3.3 volts, and kept throwing out electronics jargon. I'm looking forward to increasing my knowledge of the subject!

  • Computer innards. On a similarly technical note, my laptop's hard drive stopped spinning up last week. With the help of a computer engineering friend, I opened up the laptop and replaced the drive. Didn't even lose a screw! It's a small step into the world of computer hardware, but that was the first time I've opened up a computer, so it counts for a lot.

  • Multiple realizability. That is, the idea that people can take entirely different paths to the same place. People with ridiculously different beliefs can still be thinking exactly the same thing at exactly the same time, on ridiculously frequent occasions.

  • Tae Kwon Do. An activity I'd never done before: martial arts! All the interns/apprentices in my lab this summer were encouraged to try it out, since the GSFC club is so friendly. We've learned miscellaneous self-defense maneuvers and more ways of kicking than I remember names for - I even got to kick through a board!

  • And software... My lab group is using a variety of software tools and open source code libraries that are new to me: ROS (the Robot Operating System), a code repository via SVN, the MRPT libraries, the point cloud library (PCL), and many more. I'm remembering C++, delving into path planning algorithms, and reading up on SLAM (simultaneous localization and mapping). Yes, it's a whirlwind of acronyms.


backs of students' heads, wearing black mortarboard hats and tassels - photo by Terry Bolstad

Don't ever stop

This one's a life update post, but it's also a "here's some cool science!" post.

A few days ago, I graduated from Vassar College with a Bachelor of Arts in Cognitive Science and a correlate in Computer Science. I was decorated with general honors, departmental honors, and membership in Psi Chi and Sigma Xi. My time there was awesome.

What's next?

No lazy summer!

Well, no lazy summer break for me! I've already spent three days in my summer lab at NASA Goddard Space Flight Center, where I'll be working on a number of software development projects. The primary one is a LIDAR-assisted robotic group exploration project, in which we're going to have a small fleet of robots -- a mothership and some workerbots -- use 3D LIDAR data to autonomously map and plot paths through an area. This kind of robot fleet could, eventually, be used to explore other planets. One of the big challenges will be dealing with the 3D image data. I'm looking forward to learning more image processing algorithms!
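The path-planning half of that problem can be illustrated with a toy example: once the 3D LIDAR returns are flattened into an occupancy grid, a search like A* finds a route around the obstacles. Here's a small Python sketch of grid-based A* — just an illustration of the general technique, not the lab's actual code (which lives in C++ with ROS and PCL):

```python
import heapq

def plan_path(grid, start, goal):
    """A* search on a 2D occupancy grid (0 = free, 1 = obstacle).
    Returns a list of (row, col) cells from start to goal, or None."""
    rows, cols = len(grid), len(grid[0])
    heuristic = lambda a, b: abs(a[0] - b[0]) + abs(a[1] - b[1])
    frontier = [(heuristic(start, goal), start)]  # (f-score, cell)
    came_from = {start: None}
    cost = {start: 0}
    while frontier:
        _, current = heapq.heappop(frontier)
        if current == goal:
            # Walk parent links back to the start to recover the path.
            path = []
            while current is not None:
                path.append(current)
                current = came_from[current]
            return path[::-1]
        r, c = current
        for nxt in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            nr, nc = nxt
            if not (0 <= nr < rows and 0 <= nc < cols) or grid[nr][nc]:
                continue  # off the map, or an obstacle cell
            new_cost = cost[current] + 1
            if new_cost < cost.get(nxt, float("inf")):
                cost[nxt] = new_cost
                came_from[nxt] = current
                heapq.heappush(frontier, (new_cost + heuristic(nxt, goal), nxt))
    return None  # goal unreachable
```

The real challenge, of course, is upstream of this: turning noisy 3D point clouds into a reliable obstacle map in the first place.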

Another project is the redesign of the Greenland Robotic Vehicle, a big autonomous rover that'll drive across Greenland, collecting data about snowfall, mapping, and exploring. Did you know the ice sheet there is two miles thick? I may also get to play with a robot that has stereo vision.

You can see some of these robots (and what life in the lab may be like) in this great video about last year's interns.

So far, I've met a bunch of intelligent, friendly folks, started catching up on already-written code, and begun to delve into the platforms, libraries, and algorithms we'll be using and developing this summer. Our mentors have already proven themselves to be enthusiastic and helpful. Just yesterday, one of them told us,

"You're engineers at NASA. You want to go where things are, and then go beyond."

That may end up being our theme for the summer.

A little overwhelming?

shiny silver model of a space shuttle

There's going to be so much going on. It'd be easy to get overwhelmed -- especially now, jumping in and floundering around in the code, the projects, the people. So much to learn.

But as I sat in the lab today, reading about ROS, going through tutorials, reading about PCL and feature detection in point clouds, digging through last summer's confusing pile of C# and C++ programs, I realized I wasn't overwhelmed. And it was because of all the other experiences I've had that've gotten me to this point.

Confidence. My first URSI summer, flailing through Microsoft Robotics Studio and complicated conceptual theories. Figuring out how to deal with webcams and image data my second URSI summer, reading papers on optical flow and implementing algorithms. Last summer: excavations of an open source flight simulator, the Aeronautics Student Forum, dealing with different work styles and communication styles in my LARSS lab. And more.

I think about all those experiences, and I'm not afraid of this summer. I could almost be overwhelmed -- perhaps thinking that everyone else has more of the right kind of experience; I wasn't trained as a classic engineer -- but I know I can succeed. My non-engineering, cognitive science background sets me apart and lets me look at problems a little differently than everyone else. I'm an asset.

I know how to learn. I know how to do research.

I can conquer this summer.


Summer plans

My first post-graduation plans have been finalized: I'll be returning to the fine world of software development and robotics for a summer internship at NASA Goddard Space Flight Center. I'll be working with a diverse bunch of engineers and interns on what I expect will be super exciting, super cool projects.


_red and blue simulated robots in a flat simulated world_

On Friday, I turned in my undergraduate cognitive science thesis. It's been a year in the making -- I started brainstorming ideas last April, spent all summer reading up on relevant literature, and all of this school year developing my model, programming the simulation, running experiments, and finally, writing about all of that.

It's a little weird to realize that I don't have to constantly be thinking about this project any more. I don't have to be, but ever since handing it in, my thoughts continue to swirl around what further analyses to do on the data I collected, how to fix up the studies I did to get more powerful results, which studies would make sense as the next step...

Here's the abstract:

A biologically inspired predator-prey study of the effects of emotion and communication on emergent group behavior

Any agent that functions successfully in a constantly changing world must be able to adapt its behavior to its current situation. In biological organisms, emotion is often highlighted as a crucial system for generating adaptive behavior. This paper presents a biologically inspired predator-prey model to investigate the effectiveness of an emotion-like system in guiding the behavior of artificial agents, implemented in a set of simulated robots. The predator's behavior was governed by a simple subsumption hierarchy; the prey selected actions based on direct sensory perceptions dynamically integrated with information about past motivational/emotional states. Aspects of the prey's emotion system were evolved over time. The first study examined the interactions of a single prey with the predator, indicating that having an emotion system can lead to more diverse behavioral patterns, but may not lead to optimal action selection strategies. In the second study, groups of prey agents were evolved. These agents began to utilize alarm signaling and displayed fear contagion, with more group members surviving than in groups of emotionless prey. These results point to the pivotal role emotion plays in social scenarios. The model adds to a critical body of research in which important aspects of biological emotion are incorporated into the action selection mechanisms of artificial agents to achieve more adaptive, context-dependent behavior.
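For readers who haven't met subsumption architectures: the predator's "simple subsumption hierarchy" amounts to a stack of prioritized behaviors, where a higher layer suppresses everything below it. A toy Python sketch of the idea — the layer names here are illustrative placeholders, not the actual layers from my thesis model:

```python
def predator_action(percepts):
    """Subsumption-style action selection: layers are checked in
    priority order, and the first layer whose trigger fires wins.
    Layer names and triggers are invented for illustration."""
    layers = [
        ("attack", lambda p: p.get("prey_in_reach", False)),  # highest priority
        ("chase",  lambda p: p.get("prey_visible", False)),
        ("wander", lambda p: True),                            # default layer
    ]
    for action, triggered in layers:
        if triggered(percepts):
            return action
```

The prey agents were more interesting: instead of a fixed priority stack, their action selection blended current perceptions with lingering emotional state, which is what produced the more diverse behavior described above.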


group shot of nine interns and Garry (one intern, Leo, is not pictured) in front of blimps, holding quadcopters and shiny cars

In the summer of 2010, I interned at NASA Langley Research Center in the Langley Aerospace Research Summer Scholars Program.

My lab established an Autonomous Vehicle Lab for testing unmanned aerial vehicles, both indoors and outdoors.


I worked in the Laser Remote Sensing Branch of the Engineering Directorate under mentor Garry D. Qualls. There were nine interns besides me - here's the full list, alphabetically:

  • Brianna Conrad, Massachusetts Institute of Technology
  • Avik Dayal, University of Virginia
  • Michael Donnelly, Christopher Newport University
  • Jake Forsberg, Boise State University
  • Amanda Huff, Western Kentucky University
  • Jacqueline Kory, Vassar College
  • Leonardo Le, University of Minnesota
  • Duncan Miller, University of Michigan
  • Stephen Pace, Virginia Tech
  • Elizabeth Semelsberger, Christopher Newport University

several quadcopters stacked up in a pile

Our project's abstract

Autonomous Vehicle Laboratory for "Sense and Avoid" Research

As autonomous, unmanned aerial vehicles begin to operate regularly in the National Airspace System, the ability to safely test the coordination and control of multiple vehicles will be an important capability. This team has been working to establish an autonomous vehicle testing facility that will allow complex, multi-vehicle tests to be run both indoors and outdoors. Indoors, a commercial motion capture system is used to track vehicles in a 20'x20'x8' volume with sub-millimeter accuracy. This tracking information is transmitted to navigation controllers, a flight management system, and real-time visual displays. All data packets sent over the network are recorded and the system has the ability to play back any test for further analysis. Outdoors, a differential GPS system replaces the functionality of the motion capture system, allowing the same tests to be conducted as indoors, but on a much larger scale.

Presently, two quadrotor helicopters and one wheeled ground vehicle operate routinely in the volume. The navigation controllers implement Proportional-Integral-Derivative (PID) control algorithms and collision avoidance capabilities for each vehicle. Virtual, moving points in the volume are generated by the flight management system for the vehicles to track and follow. This allows specific flight paths to be created, enabling efficient evaluation of navigation control algorithms. Data from actual vehicles, virtual vehicles, and vehicles that are part of hardware-in-the-loop simulations are merged into a common simulation environment using FlightGear, an open source flight simulator. Evaluating the reactions of both air and ground vehicles in a simulated environment reduces time and cost, while allowing the user to log, replay, and explore critical events with greater precision. This testing facility will allow NASA researchers and aerospace contractors to address sense and avoid problems associated with autonomous multi-vehicle flight control in a safe and flexible manner.
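The PID controllers mentioned above are a textbook technique: the command sent to a vehicle is a weighted sum of the current error, the accumulated (integral) error, and the error's rate of change. A generic single-axis sketch in Python — the gains and time step here are arbitrary, not our lab's actual tuning:

```python
class PID:
    """Textbook single-axis PID controller."""

    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0     # accumulated error
        self.prev_error = None  # for the derivative term

    def update(self, setpoint, measurement):
        """Return the control output for one time step of length dt."""
        error = setpoint - measurement
        self.integral += error * self.dt
        if self.prev_error is None:
            derivative = 0.0  # no rate estimate on the first step
        else:
            derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return (self.kp * error
                + self.ki * self.integral
                + self.kd * derivative)
```

In the lab, one such loop runs per controlled axis (x, y, altitude, yaw), with the motion capture or differential GPS data supplying the measurements.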



Most of the summer was spent developing all the pieces of software and hardware needed to get our autonomous vehicle facility up and running, but by the end, we were flying quadcopters! (Captions are below their corresponding videos.)

Credit for these videos goes to one of my labmates, Jake Forsberg.

Object tracking for human interaction with autonomous quadcopter

Object tracking for human interaction with autonomous quadcopter: Here, the flying quadcopter is changing its yaw and altitude to match the other object in the flight volume (at first, another copter's protective foam frame; later, the entertaining hat we constructed). The cameras you see in the background track the little retro-reflective markers that we place on objects we want to track -- this kind of motion capture system is often used to acquire human movement for animation in movies and video games. In the camera software, groups of markers can be selected as representing an object so that the object is recognized any time that specific arrangement of markers is seen. Our control software uses the position and orientation data from the camera software and sends commands to the copter via wifi.

Autonomous sense and avoid with AR.Drone quadcopter

Autonomous sense and avoid with AR.Drone quadcopter: The flying copter is attempting to maintain a certain position in the flight volume. When another tracked object gets too close, the copter avoids it. We improved our algorithm between the first and second halves of this video. Presently, only objects tracked by the cameras are avoided, since we have yet to put local sensors on the copters (the obstacle avoidance is done using global information from the camera system about all the objects' locations).
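With global position data for every tracked object, the avoidance step can be as simple as adding a repulsive offset pointing away from the nearest intruder. A hypothetical Python sketch of that idea — the radius and gain are invented for illustration, and this isn't our lab's actual algorithm:

```python
import math

def avoid_offset(own, others, radius=1.0, gain=0.5):
    """Return a 2D (dx, dy) offset pushing away from the nearest
    tracked object within `radius` of our (x, y) position, or
    (0, 0) if nothing is too close. Units are arbitrary."""
    nearest, d_min = None, radius
    for o in others:
        d = math.hypot(o[0] - own[0], o[1] - own[1])
        if d < d_min:
            nearest, d_min = o, d
    if nearest is None:
        return (0.0, 0.0)
    # Push away from the intruder, harder the closer it is.
    dx, dy = own[0] - nearest[0], own[1] - nearest[1]
    norm = math.hypot(dx, dy) or 1.0
    push = gain * (radius - d_min)
    return (push * dx / norm, push * dy / norm)
```

The offset would be added to the copter's commanded position each control cycle, so the position-hold controller does the actual maneuvering.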

Autonomous quadcopter tracking and following a ground vehicle

Autonomous quadcopter tracking and following a ground vehicle: The flying copter is attempting to maintain a position above the truck. The truck was driven manually by one of my labmates, though eventually, it'll be autonomous, too.

Virtual flight boundaries with the AR.Drone and the Vicon motion capture system

Virtual flight boundaries with the AR.Drone and the Vicon motion capture system: As a safety precaution, we implemented virtual boundaries in our flight volume. Even if the copter is commanded to fly to a point beyond one of the virtual walls, it won't fly past the walls.
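Conceptually, the boundary check is just clamping each commanded coordinate into the allowed box before the command reaches the navigation controller. A minimal sketch, using the 20'x20'x8' indoor volume from the abstract (coordinate conventions are my assumption):

```python
def clamp_target(target, mins, maxs):
    """Clamp a commanded (x, y, z) setpoint into the virtual flight
    box, so the copter is never told to fly past a wall."""
    return tuple(min(max(t, lo), hi)
                 for t, lo, hi in zip(target, mins, maxs))

# Bounds roughly matching the 20 x 20 x 8 foot indoor volume.
FLIGHT_MIN = (0.0, 0.0, 0.0)
FLIGHT_MAX = (20.0, 20.0, 8.0)
```

Because the clamp sits between the flight management system and the controller, even a buggy or malicious setpoint can't command the copter outside the volume.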

Hardware-in-the-loop simulation

Hardware-in-the-loop simulation: Some of my labmates built hardware-in-the-loop simulations with a truck and with a plane. Essentially, a simulated environment emulates sensor and state data for the real vehicle, which responds as if it were in the simulated world.


_My labmates, our mentor, our vehicles, and I_

On the last day of my LARSS internship, NASA EDGE filmed my lab for their Future of Aeronautics episode! It's currently up on NASA's main page in the "Podcasts and Vodcasts" section, and it's available both online and through iTunes. The opening montage has clips of my labmates and me, and the segment about our work starts at 19:18 and lasts three minutes.

I encourage you to take a look!