Posts tagged "robots"

Note:

At present, I write here infrequently. You can find my current, regular blogging over at The Deliberate Owl.

paper robots hung on windows saying 'am I alive?'

Alive and not alive

At the core of this project is the idea that new technologies are not alive in the same way as people, plants, and animals -- but nor are they inanimate like tables, rocks, and toasters. We attribute perception, intelligence, emotion, volition, even moral standing to social robots, computers, tutoring agents, tangible media -- any medium that takes on, or seems to take on, a life of its own.

Sometimes, we relate to technology not as a thing or an inanimate object, but as an other, a quasi-human. We talk to our technology rather than about it, moving from the impersonal third person to the personal second person, into social relation with the technology.

So, given that we perceive and interact with these technologies as if they are alive... are they? At what point do they become alive?

What does it mean for a technology to be alive?

How much does whether they are “actually” alive matter, and how much is our categorization of them dependent on how they appear to us?

Maybe they will not fit into our existing ontological categories at all.

Not things.

Not living.

Something in between.

paper robot on a window saying 'I'm not a person but I'm not a rock'

Story

sketch of robot holding a flower

I explored the question of how to encounter the "aliveness" of new technologies through a set of life-size sequential art pieces.

The story follows several robots in the human world, with life-size frames filling entire windows. The robots ask about their own aliveness, self-aware and struggling with their identity. They try to fit in, but don't. A wheeled robot looks sadly up at a staircase. A shorter wheeled robot sits in an elevator, unable to reach the buttons. A stained-glass robot draws our attention to the personal connections we have with our technology.

Social robots. Virtual humans. Tutoring agents.

They are here. They are probably not taking over the world. They are game-changers and they make us think.

Perhaps they cannot replace people or make people obsolete. Perhaps they are fundamentally different. Perhaps they will be a positive force in our world, if done right. If viewed right. If understood as what they are. As something in between.

How will we deal with them? How will we interact? How will we understand them?

two blue paper robots on the floor

Medium

The story was created as a life-size story that the reader could walk through, so that reading it would feel more like walking down the hall having a conversation with the character than like simply reading.

I read Scott McCloud's great book, Understanding Comics, around the same time as I did this project. (Perhaps you can see the influence. Perhaps.) Comic-style, sequential art to promote a dialogue. An abstract character, because if you had an actual robot tell the story, something would be lost. Drawing the robot character with less detail, more abstractly, drew more attention to the ideas being conveyed and let viewers project more of themselves onto the art.

colorful stained glass style robot in a window

The low-tech medium was partly inspired by ancient Chinese paper-cutting techniques, as well as by some comics styles. The interplay between the flat, non-technological medium through which the story is told and the content of the story -- questions about technology -- calls attention to the contrast between living and thing. What is the role of technology in our lives?

Installation

Select frames from Am I Alive? were installed at the MIT Media Lab during The Other Festival.

Video

I made a short video showing the concept, the making of the pieces for the installation, and photos of the finished installation. Watch it here!

Relevant research

If you're curious about how robots are perceived, here are a few research papers you might find interesting:

  • Coeckelbergh, M. (2011). Talking to robots: On the linguistic construction of personal human-robot relations. In Human-Robot Personal Relationships (pp. 126-129). Springer.

  • Kahn, P. H., Jr., Kanda, T., Ishiguro, H., Freier, N. G., Severson, R. L., Gill, B. T., Ruckert, J. H., & Shen, S. (2012). "Robovie, you'll have to go into the closet now": Children's social and moral relationships with a humanoid robot. Developmental Psychology, 48(2), 303.

  • Severson, R. L., & Carlson, S. M. (2010). Behaving as or behaving as if? Children's conceptions of personified robots and the emergence of a new ontological category. Neural Networks, 23(8), 1099-1103.



four people standing around a pair of boxy robots

Summer at NASA

In 2011, the summer after I graduated college, I headed to Greenbelt, Maryland to work with an international team of engineers and computer scientists at NASA Goddard Space Flight Center. The catch: we were all students! Over forty interns from at least four countries participated in Mike Comberiate's Engineering Boot Camp.

two men crouching over a boxy robot

Overview

The boot camp included several different projects. The most famous was GROVER, the Greenland Rover, a large autonomous vehicle that's now driving across the Greenland ice sheet, mapping and exploring.

The main project I worked on was called LARGE: LIDAR-Assisted Robotic Group Exploration. A small fleet of robots -- a mothership and some workerbots -- used 3D LIDAR data to explore novel areas. My software team developed object recognition, mapping, path planning, and other software to autonomously control the workerbots between infrequent contacts with human monitors. We wrote the control programs using ROS.
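To give a flavor of that control code, here is a minimal, illustrative sketch of a ROS node (Python with rospy) that steers a workerbot toward a goal pose. The node name, topics, and gains are assumptions for the example, not the actual LARGE codebase:

```python
# Illustrative sketch only: a simple rospy node that drives a robot toward a
# goal pose published by another node (e.g., the motherbot). Topic names,
# message choices, and gains are assumptions, not the real LARGE code.
import math
import rospy
from geometry_msgs.msg import Twist, PoseStamped

class GoalSeeker:
    def __init__(self):
        rospy.init_node('workerbot_goal_seeker')
        self.goal = None
        self.cmd_pub = rospy.Publisher('cmd_vel', Twist, queue_size=1)
        rospy.Subscriber('goal_pose', PoseStamped, self.on_goal)
        rospy.Subscriber('current_pose', PoseStamped, self.on_pose)

    def on_goal(self, msg):
        self.goal = msg

    def on_pose(self, msg):
        if self.goal is None:
            return
        dx = self.goal.pose.position.x - msg.pose.position.x
        dy = self.goal.pose.position.y - msg.pose.position.y
        q = msg.pose.orientation
        yaw = math.atan2(2.0 * (q.w * q.z + q.x * q.y),
                         1.0 - 2.0 * (q.y * q.y + q.z * q.z))
        bearing = math.atan2(dy, dx)
        heading_error = math.atan2(math.sin(bearing - yaw),
                                   math.cos(bearing - yaw))  # wrap to [-pi, pi]
        cmd = Twist()
        cmd.linear.x = min(0.5, math.hypot(dx, dy))  # cap forward speed
        cmd.angular.z = 1.5 * heading_error          # turn toward the goal
        self.cmd_pub.publish(cmd)

if __name__ == '__main__':
    GoalSeeker()
    rospy.spin()
```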

artificial color 3D LIDAR image of an area

Later in the summer, we presented demonstrations of our work at both NASA Wallops Flight Facility and at NASA Goddard Space Flight Center.

The LARGE team

  • Mentors: NASA Mike, Jaime Cervantes, Cornelia Fermuller, Marco Figueiredo, Pat Stakem

  • Software team: Felipe Farias, Bruno Fernades, Thomaz Gaio, Jacqueline Kory, Christopher Lin, Austin Myers, Richard Pang, Robert Taylor, Gabriel Trisca

  • Hardware team: Andrew Gravunder, David Rochell, Gustavo Salazar, Matias Soto, Gabriel Sffair

  • Others involved: Mike Huang, William Martin, Randy Westlund

a group of men standing around a robot

Project description

The goal of the LARGE project is to assemble a networked team of autonomous robots to be used for three-dimensional terrain mapping, high-resolution imaging, and sample collection in unexplored territories. The software we develop in this proof-of-concept project will be transportable from our test vehicles to actual flight vehicles, which could be sent anywhere from toxic waste dumps or disaster zones on Earth to asteroids, moons, and planetary surfaces beyond.

artificial color 3D point cloud image

The robot fleet consists of a single motherbot and a set of workerbots. The motherbot is capable of recognizing the location and orientation of each workerbot, allowing her to designate target destinations for any worker and track their progress. Presently, localization and recognition are performed via the detection of spheres mounted in a unique configuration atop each robot. Each worker can independently plot a safe path through the terrain to the goal assigned by the motherbot. Communication between robots is interdependent and redundant, with messages sent over a local network. If communication between workers and the motherbot is lost, the workers will be able to establish a new motherbot and continue the mission. The failure of any single robot or device will not prevent the mission from being completed.
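To make the failover idea concrete, here is a hedged sketch of how that kind of motherbot handoff could work: heartbeat monitoring plus a lowest-ID promotion rule. The class, names, and timeout are illustrative assumptions, not the actual LARGE implementation:

```python
# Illustrative sketch only: each workerbot watches for motherbot heartbeats;
# if they stop, the surviving robot with the lowest ID promotes itself so the
# mission can continue. Names and timeouts are assumptions.
import time

HEARTBEAT_TIMEOUT = 5.0  # seconds without a heartbeat before declaring failure

class Workerbot:
    def __init__(self, robot_id, peer_ids):
        self.robot_id = robot_id
        self.peer_ids = set(peer_ids)      # IDs of the other workerbots
        self.last_heartbeat = time.time()
        self.is_motherbot = False

    def on_motherbot_heartbeat(self):
        self.last_heartbeat = time.time()

    def check_motherbot(self):
        if time.time() - self.last_heartbeat > HEARTBEAT_TIMEOUT:
            # Motherbot presumed lost: a deterministic rule lets all workers
            # agree on the same replacement without further negotiation.
            if self.robot_id < min(self.peer_ids):
                self.is_motherbot = True
```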

The robots use LIDAR sensors to take images of the terrain, stitching successive images together to create global maps. These maps can then be used for navigation. Eventually, several of the workers will carry other imaging sensors, such as cameras for stereo vision or a Microsoft Kinect, to complement the LIDAR and enable the corroboration of data across sensory modalities.
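The stitching step described above boils down to expressing every scan in one global frame. Below is a hedged sketch of that bookkeeping, assuming the relative pose between consecutive scans has already been estimated (for example, by ICP or odometry); the names and structure are illustrative, not our actual pipeline:

```python
# Illustrative sketch only: build a global map by chaining the relative pose
# between consecutive LIDAR scans and transforming each scan into the common
# frame. Not the actual LARGE pipeline.
import numpy as np

def stitch_scans(scans, relative_poses):
    """scans: list of (N_i, 3) point arrays, each in its own scan frame.
    relative_poses: list of 4x4 transforms, one per successive scan pair,
    mapping scan i+1's frame into scan i's frame."""
    global_map = [np.asarray(scans[0], dtype=float)]
    pose = np.eye(4)  # pose of the current scan in the global (first-scan) frame
    for scan, rel in zip(scans[1:], relative_poses):
        pose = pose @ rel  # chain the relative motion
        pts = np.asarray(scan, dtype=float)
        homog = np.hstack([pts, np.ones((pts.shape[0], 1))])
        global_map.append((pose @ homog.T).T[:, :3])
    return np.vstack(global_map)
```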

Articles and other media

In the media:

three metal boxy robots with treads

On my blog:

Videos

I spent the summer writing code, learning ROS, and working with our LIDAR images. Other people took videos! (Captions, links to videos, and credits are below the corresponding videos.) More may be available on Geeked on Goddard or from nasagogblog's YouTube channel.



group shot of nine interns and Garry (one intern, Leo, is not pictured) in front of blimps, holding quadcopters and shiny cars

In the summer of 2010, I interned at NASA Langley Research Center in the Langley Aerospace Research Summer Scholars Program.

My lab established an Autonomous Vehicle Lab for testing unmanned aerial vehicles, both indoors and outdoors.

Overview

I worked in the Laser Remote Sensing Branch of the Engineering Directorate under mentor Garry D. Qualls. There were nine interns besides me - here's the full list, alphabetically:

  • Brianna Conrad, Massachusetts Institute of Technology
  • Avik Dayal, University of Virginia
  • Michael Donnelly, Christopher Newport University
  • Jake Forsberg, Boise State University
  • Amanda Huff, Western Kentucky University
  • Jacqueline Kory, Vassar College
  • Leonardo Le, University of Minnesota
  • Duncan Miller, University of Michigan
  • Stephen Pace, Virginia Tech
  • Elizabeth Semelsberger, Christopher Newport University

several quadcopters stacked up in a pile

Our project's abstract

Autonomous Vehicle Laboratory for "Sense and Avoid" Research

As autonomous, unmanned aerial vehicles begin to operate regularly in the National Airspace System, the ability to safely test the coordination and control of multiple vehicles will be an important capability. This team has been working to establish an autonomous vehicle testing facility that will allow complex, multi-vehicle tests to be run both indoors and outdoors. Indoors, a commercial motion capture system is used to track vehicles in a 20'x20'x8' volume with sub-millimeter accuracy. This tracking information is transmitted to navigation controllers, a flight management system, and real-time visual displays. All data packets sent over the network are recorded, and the system can play back any test for further analysis. Outdoors, a differential GPS system replaces the functionality of the motion capture system, allowing the same tests to be conducted as indoors, but on a much larger scale.

Presently, two quadrotor helicopters and one wheeled ground vehicle operate routinely in the volume. The navigation controllers implement Proportional-Integral-Derivative (PID) control algorithms and collision avoidance capabilities for each vehicle. Virtual, moving points in the volume are generated by the flight management system for the vehicles to track and follow. This allows the creation of specific flight paths and the efficient evaluation of navigation control algorithms. Data from actual vehicles, virtual vehicles, and vehicles that are part of hardware-in-the-loop simulations are merged into a common simulation environment using FlightGear, an open source flight simulator. Evaluating the reactions of both air and ground vehicles in a simulated environment reduces time and cost, while allowing the user to log, replay, and explore critical events with greater precision. This testing facility will allow NASA researchers and aerospace contractors to address sense-and-avoid problems associated with autonomous multi-vehicle flight control in a safe and flexible manner.
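For readers who haven't met PID control before, here is a minimal, illustrative single-axis sketch of the idea the abstract mentions; the gains, limits, and usage comment are assumptions for the example, not the controllers we actually flew:

```python
# Illustrative sketch only: one-axis PID controller of the kind the abstract
# describes. Gains and limits are assumptions, not the values we used.
class PID:
    def __init__(self, kp, ki, kd, output_limit=1.0):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.output_limit = output_limit
        self.integral = 0.0
        self.prev_error = None

    def update(self, setpoint, measurement, dt):
        error = setpoint - measurement
        self.integral += error * dt
        derivative = 0.0 if self.prev_error is None else (error - self.prev_error) / dt
        self.prev_error = error
        out = (self.kp * error
               + self.ki * self.integral
               + self.kd * derivative)
        # Clamp so a large error can't command an unsafe output.
        return max(-self.output_limit, min(self.output_limit, out))

# e.g., one controller per axis, fed by motion-capture position each update:
# pitch_cmd = pid_x.update(target_x, tracked_x, dt)
```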

Articles and other media

In the media

On my blog

Videos

Most of the summer was spent developing all the pieces of software and hardware needed to get our autonomous vehicle facility up and running, but by the end, we were flying quadcopters! (Captions are below their corresponding videos.)

Credit for these videos goes to one of my labmates, Jake Forsberg.

Object tracking for human interaction with autonomous quadcopter

Object tracking for human interaction with autonomous quadcopter: Here, the flying quadcopter is changing its yaw and altitude to match the other object in the flight volume (at first, another copter's protective foam frame; later, the entertaining hat we constructed). The cameras you see in the background track the little retro-reflective markers that we place on objects we want to track -- this kind of motion capture system is often used to acquire human movement for animation in movies and video games. In the camera software, groups of markers can be selected as representing an object, so that the object is recognized any time that specific arrangement of markers is seen. Our control software uses the position and orientation data from the camera software and sends commands to the copter via WiFi.
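Conceptually, the behavior in this video reduces to copying the tracked object's yaw and height into the copter's setpoints. A hedged sketch, reusing the PID class sketched earlier on this page (the pose fields and send_command call are assumptions):

```python
# Illustrative sketch only: make the copter's yaw and altitude track another
# motion-captured object. Assumes the PID class sketched earlier; names of
# the pose fields and the send_command call are assumptions.
def follow_target(copter, target_pose, dt, yaw_pid, alt_pid):
    # In a real controller the yaw error should be wrapped to [-pi, pi];
    # omitted here for brevity.
    yaw_rate_cmd = yaw_pid.update(target_pose.yaw, copter.yaw, dt)
    climb_cmd = alt_pid.update(target_pose.z, copter.z, dt)
    copter.send_command(yaw_rate=yaw_rate_cmd, vertical_speed=climb_cmd)
```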

Autonomous sense and avoid with AR.Drone quadcopter

Autonomous sense and avoid with AR.Drone quadcopter: The flying copter is attempting to maintain a certain position in the flight volume. When another tracked object gets too close, the copter moves out of the way. We improved our algorithm between the first and second halves of this video. Presently, only objects tracked by the cameras are avoided, since we have yet to put local sensors on the copters (the obstacle avoidance is done using global information from the camera system about all the objects' locations).
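One simple way to implement that kind of avoidance with global position data is to push the copter's setpoint away from any object inside a safety radius. A hedged sketch of that idea follows; the radius, scaling, and function are illustrative, not our actual algorithm:

```python
# Illustrative sketch only: repel the position setpoint away from any tracked
# object that intrudes within a safety radius. Radius and scaling are
# assumptions, not our actual avoidance algorithm.
import numpy as np

SAFETY_RADIUS = 1.0  # meters

def adjust_setpoint(setpoint, copter_pos, obstacle_positions):
    """All positions are 3-element arrays in the flight volume's frame."""
    target = np.array(setpoint, dtype=float)
    for obs in obstacle_positions:
        offset = np.asarray(copter_pos, dtype=float) - obs
        dist = np.linalg.norm(offset)
        if 0.0 < dist < SAFETY_RADIUS:
            # Push away from the intruder, harder the closer it gets.
            target += (offset / dist) * (SAFETY_RADIUS - dist)
    return target
```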

Autonomous quadcopter tracking and following a ground vehicle

Autonomous quadcopter tracking and following a ground vehicle: The flying copter is attempting to maintain a position above the truck. The truck was driven manually by one of my labmates, though eventually, it'll be autonomous, too.

Virtual flight boundaries with the AR.Drone and the Vicon motion capture system

Virtual flight boundaries with the AR.Drone and the Vicon motion capture system: As a safety precaution, we implemented virtual boundaries in our flight volume. Even if the copter is commanded to fly to a point beyond one of the virtual walls, it won't fly past the walls.
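The boundary check itself can be as simple as clamping every commanded position to the flight volume before it is sent to the copter. A hedged sketch, with placeholder bounds roughly shaped like the indoor volume described above:

```python
# Illustrative sketch only: clamp commanded positions to the flight volume so
# the copter never chases a setpoint beyond the virtual walls. Bounds are
# placeholder values, not our actual volume limits.
import numpy as np

VOLUME_MIN = np.array([-3.0, -3.0, 0.0])  # meters
VOLUME_MAX = np.array([ 3.0,  3.0, 2.4])

def clamp_to_volume(commanded_position):
    return np.clip(np.asarray(commanded_position, dtype=float),
                   VOLUME_MIN, VOLUME_MAX)
```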

Hardware-in-the-loop simulation

Hardware-in-the-loop simulation: Some of my labmates built a hardware-in-the-loop simulation with a truck, and also with a plane. Essentially, a simulated environment emulates sensor and state data for the real vehicle, which responds as if it is in the simulated world.

