Posts tagged "projects"

Note:

At present, I write here infrequently. You can find my current, regular blogging over at The Deliberate Owl.

four people standing around a pair of boxy robots

Summer at NASA

In 2011, the summer after I graduated college, I headed to Greenbelt, Maryland to work with an international team of engineers and computer scientists at NASA Goddard Space Flight Center. The catch: we were all students! Over forty interns from at least four countries participated in Mike Comberiate's Engineering Boot Camp.

two men crouching over a boxy robot

Overview

The boot camp included several different projects. The most famous was GROVER, the Greenland Rover, a large autonomous vehicle that's now driving across the Greenland ice sheet, mapping and exploring.

The main project I worked on was called LARGE: LIDAR-Assisted Robotic Group Exploration. A small fleet of robots -- a mothership and some workerbots -- used 3D LIDAR data to explore novel areas. My software team developed object recognition, mapping, path planning, and other software to autonomously control the workerbots between infrequent contacts with human monitors. We wrote control programs using ROS.
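
Our actual planners aren't reproduced here, but the flavor of grid-based path planning is easy to sketch. This is an illustrative toy in Python, not our ROS code: a breadth-first search over a 2D occupancy grid (the real system worked from maps built out of 3D LIDAR data).

```python
from collections import deque

def plan_path(grid, start, goal):
    """Breadth-first search over a 2D occupancy grid.

    grid[r][c] is True where an obstacle blocks travel; start and goal
    are (row, col) tuples. Returns the shortest list of cells from
    start to goal using 4-connected moves, or None if unreachable.
    """
    rows, cols = len(grid), len(grid[0])
    frontier = deque([start])
    came_from = {start: None}
    while frontier:
        cell = frontier.popleft()
        if cell == goal:
            # Walk the chain of predecessors back to the start.
            path = []
            while cell is not None:
                path.append(cell)
                cell = came_from[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nr < rows and 0 <= nc < cols
                    and not grid[nr][nc] and (nr, nc) not in came_from):
                came_from[(nr, nc)] = cell
                frontier.append((nr, nc))
    return None  # goal unreachable
```

A real planner would also weight terrain cost and smooth the path, but the bookkeeping (frontier, visited set, backtracking) is the same.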

artificial color 3D LIDAR image of an area

Later in the summer, we presented demonstrations of our work at both NASA Wallops Flight Facility and at NASA Goddard Space Flight Center.

The LARGE team

  • Mentors: NASA Mike, Jaime Cervantes, Cornelia Fermuller, Marco Figueiredo, Pat Stakem

  • Software team: Felipe Farias, Bruno Fernades, Thomaz Gaio, Jacqueline Kory, Christopher Lin, Austin Myers, Richard Pang, Robert Taylor, Gabriel Trisca

  • Hardware team: Andrew Gravunder, David Rochell, Gustavo Salazar, Matias Soto, Gabriel Sffair

  • Others involved: Mike Huang, William Martin, Randy Westlund

a group of men standing around a robot

Project description

The goal of the LARGE project is to assemble a networked team of autonomous robots to be used for three-dimensional terrain mapping, high-resolution imaging, and sample collection in unexplored territories. The software we develop in this proof-of-concept project will be transportable from our test vehicles to actual flight vehicles, which could be sent anywhere from toxic waste dumps and disaster zones on Earth to asteroids, moons, and planetary surfaces beyond.

artificial color 3D point cloud image

The robot fleet consists of a single motherbot and a set of workerbots. The motherbot is capable of recognizing the location and orientation of each workerbot, allowing her to designate target destinations for any worker and track their progress. Presently, localization and recognition are performed via the detection of spheres mounted in a unique configuration atop each robot. Each worker can independently plot a safe path through the terrain to the goal assigned by the motherbot. Communication between robots is interdependent and redundant, with messages sent over a local network. If communication between workers and the motherbot is lost, the workers will be able to establish a new motherbot and continue the mission. The failure of any single robot or device will not prevent the mission from being completed.
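
Our detection code isn't shown here, but the idea behind the unique sphere configurations can be sketched: the pairwise distances between sphere centers don't change when a robot translates or rotates, so each arrangement acts as a signature. The marker layouts below are made up for illustration.

```python
import itertools
import math

# Hypothetical marker layouts: sphere centers (x, y) atop each robot,
# in meters. Each robot gets a different spacing so its pairwise-distance
# signature is unique. These are NOT the real LARGE configurations.
ROBOT_MARKERS = {
    "worker1": [(0.0, 0.0), (0.3, 0.0), (0.0, 0.4)],
    "worker2": [(0.0, 0.0), (0.5, 0.0), (0.0, 0.2)],
}

def signature(points):
    """Sorted pairwise distances: invariant to rotation and translation."""
    return sorted(math.dist(a, b)
                  for a, b in itertools.combinations(points, 2))

def identify(detected, tolerance=0.05):
    """Match detected sphere centers to the closest known robot layout."""
    sig = signature(detected)
    for name, markers in ROBOT_MARKERS.items():
        known = signature(markers)
        if len(known) == len(sig) and all(
                abs(a - b) <= tolerance for a, b in zip(known, sig)):
            return name
    return None
```

Once a robot is identified, its orientation falls out of aligning the known layout to the detected points; the signature match is just the first step.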

The robots use LIDAR sensors to take images of the terrain, stitching successive images together to create global maps. These maps can then be used for navigation. Eventually, several of the workers will carry other imaging sensors, such as cameras for stereo vision or a Microsoft Kinect, to complement the LIDAR and enable the corroboration of data across sensory modalities.
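
At its simplest, stitching scans amounts to transforming each one out of the robot's local frame into a shared global frame using the robot's pose, then accumulating the points. Here's a minimal 2D sketch; the real system used 3D data and had to estimate the poses rather than take them as given.

```python
import math

def scan_to_global(points, pose):
    """Transform a local scan into the global frame.

    points: list of (x, y) hits in the robot's own frame.
    pose:   (x, y, heading) of the robot in the global frame,
            heading in radians.
    """
    px, py, theta = pose
    c, s = math.cos(theta), math.sin(theta)
    # Rotate each point by the heading, then translate by the position.
    return [(px + c * x - s * y, py + s * x + c * y) for x, y in points]

def stitch(scans_with_poses):
    """Merge several scans, each taken from a different pose,
    into one global point cloud."""
    cloud = []
    for points, pose in scans_with_poses:
        cloud.extend(scan_to_global(points, pose))
    return cloud
```

In practice the poses come from odometry corrected by scan matching, and the merged cloud is downsampled into a map; this sketch only shows the frame change at the heart of it.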

Articles and other media

In the media:

three metal boxy robots with treads

On my blog:

Videos

I spent the summer writing code, learning ROS, and dealing with our LIDAR images. Other people took videos! (Captions, links to videos, & credits are below the corresponding videos.) More may be available on Geeked on Goddard or from nasagogblog's YouTube channel.



me, at a desk, in the lab, working on documentation at a computer

As my undergrad years draw to a close, I've compiled a list of internships and related opportunities for students in Cognitive Science and Computer Science. Most programs are also open to students in other engineering and technology fields and are not limited to undergraduate students!

Take a look! Pass along the page to anyone you know who may find it useful. Although deadlines for some summer 2011 programs have passed, many have March or April deadlines, and many of the semester or year-round programs have later deadlines.



group shot of nine interns and Garry (one intern, Leo, is not pictured) in front of blimps, holding quadcopters and shiny cars

In the summer of 2010, I interned at NASA Langley Research Center in the Langley Aerospace Research Summer Scholars Program.

My lab established an Autonomous Vehicle Lab for testing unmanned aerial vehicles, both indoors and outdoors.

Overview

I worked in the Laser Remote Sensing Branch of the Engineering Directorate under mentor Garry D. Qualls. There were nine interns besides me - here's the full list, alphabetically:

  • Brianna Conrad, Massachusetts Institute of Technology
  • Avik Dayal, University of Virginia
  • Michael Donnelly, Christopher Newport University
  • Jake Forsberg, Boise State University
  • Amanda Huff, Western Kentucky University
  • Jacqueline Kory, Vassar College
  • Leonardo Le, University of Minnesota
  • Duncan Miller, University of Michigan
  • Stephen Pace, Virginia Tech
  • Elizabeth Semelsberger, Christopher Newport University

several quadcopters stacked up in a pile

Our project's abstract

Autonomous Vehicle Laboratory for "Sense and Avoid" Research

As autonomous, unmanned aerial vehicles begin to operate regularly in the National Airspace System, the ability to safely test the coordination and control of multiple vehicles will be an important capability. This team has been working to establish an autonomous vehicle testing facility that will allow complex, multi-vehicle tests to be run both indoors and outdoors. Indoors, a commercial motion capture system is used to track vehicles in a 20'x20'x8' volume with sub-millimeter accuracy. This tracking information is transmitted to navigation controllers, a flight management system, and real-time visual displays. All data packets sent over the network are recorded and the system has the ability to play back any test for further analysis. Outdoors, a differential GPS system replaces the functionality of the motion capture system, allowing the same tests to be conducted as indoors, but on a much larger scale.

Presently, two quadrotor helicopters and one wheeled ground vehicle operate routinely in the volume. The navigation controllers implement Proportional-Integral-Derivative (PID) control algorithms and collision avoidance capabilities for each vehicle. Virtual, moving points in the volume are generated by the flight management system for the vehicles to track and follow. This allows the creation of specific flight paths, allowing the efficient evaluation of navigation control algorithms. Data from actual vehicles, virtual vehicles, and vehicles that are part of hardware-in-the-loop simulations are merged into a common simulation environment using FlightGear, an open source flight simulator. Evaluating the reactions of both air and ground vehicles in a simulated environment reduces time and cost, while allowing the user to log, replay and explore critical events with greater precision. This testing facility will allow NASA researchers and aerospace contractors to address sense and avoid problems associated with autonomous multi-vehicle flight control in a safe and flexible manner.
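
A PID controller itself is compact. Here's a minimal textbook version in Python, not our flight code, to show what the navigation controllers are computing for each axis:

```python
class PID:
    """Minimal PID controller:
    output = Kp*error + Ki*integral(error) + Kd*d(error)/dt."""

    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = None

    def update(self, setpoint, measured, dt):
        """Return the control output for one timestep of length dt."""
        error = setpoint - measured
        self.integral += error * dt
        derivative = (0.0 if self.prev_error is None
                      else (error - self.prev_error) / dt)
        self.prev_error = error
        return (self.kp * error
                + self.ki * self.integral
                + self.kd * derivative)
```

One controller like this runs per controlled quantity (x, y, altitude, yaw), fed by the motion capture or GPS position estimate; real implementations also clamp the integral term and the output.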

Articles and other media

In the media

On my blog

Videos

Most of the summer was spent developing all the pieces of software and hardware needed to get our autonomous vehicle facility up and running, but by the end, we were flying quadcopters! (Captions are below their corresponding videos.)

Credit for these videos goes to one of my labmates, Jake Forsberg.

Object tracking for human interaction with autonomous quadcopter

Object tracking for human interaction with autonomous quadcopter: Here, the flying quadcopter is changing its yaw and altitude to match the other object in the flight volume (at first, another copter's protective foam frame; later, the entertaining hat we constructed). The cameras you see in the background track the little retro-reflective markers that we place on objects we want to track -- this kind of motion capture system is often used to acquire human movement for animation in movies and video games. In the camera software, groups of markers can be selected as representing an object so that the object is recognized any time that specific arrangement of markers is seen. Our control software uses the position and orientation data from the camera software and sends commands to the copter via wifi.

Autonomous sense and avoid with AR.Drone quadcopter

Autonomous sense and avoid with AR.Drone quadcopter: The flying copter is attempting to maintain a certain position in the flight volume. When another tracked object gets too close, the copter avoids it. We improved our algorithm between the first and second halves of this video. Presently, only objects tracked by the cameras are avoided, since we have yet to put local sensors on the copters (the obstacle avoidance is done using global information from the camera system about all the objects' locations).
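
The avoidance algorithm itself isn't spelled out above; one common scheme, and a plausible sketch of the idea, is to nudge the commanded position away from any tracked object that comes inside a safety radius:

```python
import math

def avoid(setpoint, own_pos, obstacles, safe_radius=1.0):
    """Push the commanded position away from nearby tracked objects.

    All positions are (x, y) tuples in the flight volume's global
    frame, as a motion capture system would report them. The radius
    and gains here are illustrative, not tuned values.
    """
    sx, sy = setpoint
    for ox, oy in obstacles:
        dx, dy = own_pos[0] - ox, own_pos[1] - oy
        dist = math.hypot(dx, dy)
        if 0 < dist < safe_radius:
            # Scale the push so closer obstacles repel harder.
            push = (safe_radius - dist) / dist
            sx += dx * push
            sy += dy * push
    return (sx, sy)
```

The adjusted setpoint then goes to the position controller as usual, so avoidance composes cleanly with station-keeping.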

Autonomous quadcopter tracking and following a ground vehicle

Autonomous quadcopter tracking and following a ground vehicle: The flying copter is attempting to maintain a position above the truck. The truck was driven manually by one of my labmates, though eventually, it'll be autonomous, too.

Virtual flight boundaries with the AR.Drone and the Vicon motion capture system

Virtual flight boundaries with the AR.Drone and the Vicon motion capture system: As a safety precaution, we implemented virtual boundaries in our flight volume. Even if the copter is commanded to fly to a point beyond one of the virtual walls, it won't fly past the walls.
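
The boundary check itself is simple: clamp any commanded position to the box before it is sent to the copter. A sketch, with made-up dimensions rather than the lab's real ones:

```python
def clamp_to_volume(target, lo=(-3.0, -3.0, 0.0), hi=(3.0, 3.0, 2.4)):
    """Clamp a commanded (x, y, z) position to the virtual flight box.

    lo and hi are the opposite corners of the allowed volume in
    meters (illustrative values only).
    """
    return tuple(min(max(c, l), h) for c, l, h in zip(target, lo, hi))
```

Because the clamp sits between the flight management system and the vehicle, even a buggy or adversarial setpoint can't command the copter past a virtual wall.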

Hardware-in-the-loop simulation

Hardware-in-the-loop simulation: Some of my labmates built a hardware-in-the-loop simulation with a truck, and also with a plane. Essentially, a simulated environment emulates sensor and state data for the real vehicle, which responds as if it is in the simulated world.



a shelf of leatherbound books

I read a lot (when I have time).

I'm an enthusiastic reader of science fiction and fantasy novels. I pick up non-fiction for fun (or for class), and I periodically read other stuff, too.

For example, just this week, on Monday and Tuesday, I consumed Scott Westerfeld's The Risen Empire and The Killing of Worlds. Yesterday, I started K. J. Parker's Devices and Desires. Tomorrow... well, I keep this lengthy list of books I want to read. I also keep a list of books I've already read (it comes in handy when people ask me for recommendations, or, as was the case nearly four years ago, when a college application asked me to provide a list of all the books I'd read in the past year). Add these lists together: The result is a long list of great books.

Next time you're perusing the shelves, stumped on which pages to turn next, perhaps you could pick one of my favorites! (List last updated Oct. 25, 2015.)

COGNITIVE SCIENCE & PHILOSOPHY

consciousness

embodied cognition & related

  • Mark Johnson - The Meaning of the Body
  • Matthew Ratcliffe - Rethinking Commonsense Psychology, Feelings of Being
  • Shigehisa Kuriyama - The Expressiveness of the Body and the Divergence of Greek and Chinese Medicine
  • Richard Nisbett - The Geography of Thought
  • Jeff Hawkins & Sandra Blakeslee - On Intelligence
  • Henry Plotkin - Darwin Machines
  • Alva Noë - Action in Perception
  • Edward Reed - Encountering the World
  • Lawrence Shapiro - The Mind Incarnate

atheism, religion, metaphysics

  • Gordon Stein (Ed.) - An Anthology of Atheism & Rationalism
  • S.T. Joshi (Ed.) - Atheism: A Reader
  • Dale McGovern - In Faith and In Doubt
  • Quentin Smith & Nathan Oaklander - Time, Change, & Freedom: An Introduction to Metaphysics
  • Rita Gross - Soaring and Settling: Buddhist Perspectives on Contemporary Social and Religious Issues

robot ethics

  • David Gunkel - The Machine Question

misc

  • Valentino Braitenberg - Vehicles: Experiments in Synthetic Psychology
  • Stanislas Dehaene - The Number Sense: How the Mind Creates Mathematics
  • John Long - Darwin's Devices
  • Eric R. Kandel - In Search of Memory
  • Martin Seligman - Flourish
  • Susan Engel - The Stories Children Tell

OTHER NON-FICTION

women's health

  • Toni Weschler - Taking Charge of Your Fertility
  • Marilyn M. Shannon - Fertility, Cycles and Nutrition

misc

  • Scott McCloud - Understanding Comics
  • Mario Livio - The Golden Ratio
  • Jeff Potter - Cooking For Geeks
  • Sun Tzu - The Art of War
  • Vera John-Steiner - Notebooks of the Mind

FANTASY

SCIENCE-FICTION

  • Isaac Asimov - I, Robot
  • Margaret Atwood - The Handmaid's Tale
  • Paolo Bacigalupi - The Windup Girl
  • Alfred Bester - The Stars My Destination
  • Lois McMaster Bujold - Cordelia's Honor, the Vorkosigan Saga
  • Orson Scott Card - Ender's Game, Speaker for the Dead, Xenocide, Children of the Mind, Ender's Shadow, Shadow of the Hegemon, Shadow Puppets, Shadow of the Giant
  • Peter Dickinson - Eva
  • Nicola Griffith - Slow River, Ammonite
  • Kameron Hurley - God's War, Infidel, Rapture
  • Lois Lowry - The Giver, Star Split
  • China Miéville - Perdido Street Station, The Scar, The City & The City
  • Richard Powers - Galatea 2.2
  • Ramez Naam - Nexus, Crux, Apex
  • John Scalzi - Old Man's War, The Ghost Brigades, Zoe's Tale, Redshirts
  • Kim Stanley Robinson - Red Mars, Blue Mars, Green Mars
  • Neal Stephenson - Cryptonomicon, Anathem, Diamond Age
  • Vernor Vinge - Fast Times at Fairmont High
  • Scott Westerfeld - The Risen Empire, The Killing of Worlds
  • Robert Charles Wilson - Spin, Axis

GENERAL FICTION

  • Jane Austen - Pride & Prejudice
  • Lewis Carroll - Alice in Wonderland and Through the Looking-Glass
  • James Clavell - Shogun
  • Arthur Golden - Memoirs of a Geisha
  • Nicole Krauss - The History of Love, Man Walks Into a Room, Great House
  • Khaled Hosseini - The Kite Runner, A Thousand Splendid Suns
  • Kazuo Ishiguro - Never Let Me Go, The Remains of the Day
  • Chuck Palahniuk - Fight Club, Invisible Monsters, Choke, Haunted
  • Mary Renault - The Persian Boy, The Mask of Apollo
  • Gail Tsukiyama - The Samurai's Garden
  • Anthony Swofford - Jarhead
  • Thornton Wilder - The Bridge of San Luis Rey
  • Virginia Woolf - To the Lighthouse, The Waves


Background

During my college semester studying abroad in Sydney, Australia in 2009, I took a sculpture class. I've already documented the sculpture I created during the second half of the class, which focused on space.

This project is from the first half of the class, where we considered mass. We worked in clay and plaster.

Mass:

  1. A coherent, typically large body of matter with no definite shape
  2. A collection of incoherent particles, parts, or objects regarded as forming one body

Concept

The assignment was to sculpt a head out of clay -- our professor sat as the model -- and then cast it in plaster.

I decided I wanted to sculpt a realistic head, rather than an overtly abstract one. The biggest reason for this was that I had never sculpted anything big out of clay before. Although it certainly takes skill to create chaos in a visually pleasing way, it is perhaps more difficult to create order in one-to-one correspondence with the actual world. I like a challenge.

We started by talking about the proportions of the human head, looking at example sculptures of human heads that varied from realistic to highly abstract. The next task was to practice: we took lumps of brown clay and mushed them into representations of various facial features. Eyes, noses, mouths. As it turned out, this practice was remarkably helpful when trying to form the much larger block of clay into a realistic head shape.

fairly realistic eyes, noses, and mouths sculpted out of brown clay

Construction of the clay model

I added lumps of clay to a wooden base set with a wooden center post piece by piece, using my hands to mold the clay into a general head-like shape. A variety of tools for working with clay were provided. I preferred to use my hands. I felt I had better control over the resultant shapes that way.

a bald head sculpted in brown clay, features a little rough around the edges

After finishing the initial form, I smoothed out his features a bit:

a bald clay head, features smoothed and shiny

Casting in plaster

The next stage was to make a plaster waste-mold using the clay head as a base.

After touching up the clay head, I sketched a seam line across the top of the head and down in front of its ears to mark out where the metal shim fence would go. This would keep the two halves of the plaster mold apart.

clay head with thin metal pieces inserted across the crown of the head and down in front of the ears

Then it was time to throw plaster. Literally. First, I applied two aptly-named splash coats. The point of throwing handfuls of plaster at the clay model was to remove air bubbles. These layers were followed by a clay rubbing--which makes it easier to remove the plaster layers later--and three layers of thicker plaster, about the consistency of thick whipped cream.

head with metal shim fence, features less distinct now that they are coated in two layers of thin plaster

head with metal shim fence, looking blob-like with the last thick coat of plaster applied

The next class, we separated our molds. A chisel and mallet did the trick. The front half came off clean, except for the nose. I scraped the clay out of the back half and cleaned them both up. This was followed by painting on two coats of shellac to seal the mold.

front half of the mold sits empty on the right; on the left, most of the clay head rests intact in the back half

two empty halves of the mold, clean, shiny with shellac

Then it was time to fill the mold! After spraying on a thin coat of WD-40 to prevent the poured plaster from sticking, I tied the two halves together with wire. Any gaps along the seam line were plugged with clay. Then I propped up the mold open-side up in a bucket, and poured in the plaster.

mold tied together, upside-down in a bucket, wet plaster visible in the opening at the neck

The following week, it was time to remove the mold from the casting. A chisel and mallet came in handy once again.

with part of the layers of plaster removed, the neck and chin of the casting are visible

I chipped away at the plaster mold to reveal the casting. Some cleanup with sandpaper, a scrub brush, and a metal scraper was required.

plaster head with rough patches, excess plaster from the mold stuck in the ears, mouth, and seam line

In the end...

"The world is an okay place."

I had tried not to distort the face's features during my initial clay work. I returned later to adjust the clay to work better for the plaster mold, smoothing out some features, slightly exaggerating or emphasizing others. The result is a calm face, a peaceful face. He looks content, does he not?

a plaster casting, smoothed, with calm, rounded features and a slight smile

Despite the little holes here and there, the pockmark at the corner of his mouth, the pimple of plaster--he is content. He knows that no person is perfect. The blemishes, the marks, the indents and pocks on our faces are evidence that we have lived in and interacted with the world around us, instead of hiding in a sterile box where the world is not.

I tried, with this sculpture, to convey a sense of acceptance of things as they are, of life as it is. So I did not fill in the holes or file away all the pimples; I didn't cover the plaster with some other color or finish. It reflects the way life is: a little imperfect, a little unfinished. But despite that, okay. Good enough for us.

Thus the title:

The world is an okay place.

