Posts tagged "technology"

Note:

At present, I write here infrequently. You can find my current, regular blogging over at The Deliberate Owl.

paper robots hung on windows saying 'am I alive?'

Alive and not alive

At the core of this project is the idea that new technologies are not alive in the same way as people, plants, and animals -- but nor are they inanimate like tables, rocks, and toasters. We attribute perception, intelligence, emotion, volition, even moral standing to social robots, computers, tutoring agents, tangible media, any media that takes on -- or seems to take on -- a life of its own.

Sometimes, we relate to technology not as a thing or an inanimate object, but as an other, a quasi-human. We talk to our technology rather than about it, moving from the impersonal third person to the personal second person, moving into social relation with the technology.

So, given that we perceive and interact with these technologies as if they are alive... are they? At what point do they become alive?

What does it mean for a technology to be alive?

How much does whether they are “actually” alive matter, and how much is our categorization of them dependent on how they appear to us?

Maybe they will not fit into our existing ontological categories at all.

Not things.

Not living.

Something in between.

paper robot on a window saying 'I'm not a person but I'm not a rock'

Story

sketch of robot holding a flower

I explored the question of how to encounter the "aliveness" of new technologies through a set of life-size sequential art pieces.

The story follows several robots in the human world. Life-size frames filled entire windows. The robots ask about their own aliveness, self-aware and struggling with their own identity. They try to fit in, but don't. A wheeled robot looks sadly up at a staircase. A shorter wheeled robot sits in an elevator, unable to reach the elevator buttons. A stained-glass robot draws our attention to the personal connections we have with our technology.

Social robots. Virtual humans. Tutoring agents.

They are here. They are probably not taking over the world. They are game-changers and they make us think.

Perhaps they cannot replace people or make people obsolete. Perhaps they are fundamentally different. Perhaps they will be a positive force in our world, if done right. If viewed right. If understood as what they are. As something in between.

How will we deal with them? How will we interact? How will we understand them?

two blue paper robots on the floor

Medium

The story was created as a life-size story that the reader could walk through, so reading would feel more like walking down the hall having a conversation with the character than like reading.

I read Scott McCloud's great book, Understanding Comics, around the same time I did this project. (Perhaps you can see the influence. Perhaps.) Comic-style, sequential art to promote a dialogue. An abstract character, because if you had an actual robot tell the story, something would be lost. Outlining the robot character in less detail, as more abstract, drew more attention to the ideas being conveyed, and let viewers project more of themselves onto the art.

colorful stained glass style robot in a window

The low-tech nature was partially inspired by ancient Chinese paper-cutting methods, as well as by some comics styles. The interaction between the flat, non-technological medium through which the story is told and the content of the story -- questions about technology -- calls attention to the contrast between the living and the inanimate. What is the role of technology in our lives?

Installation

Select frames from Am I Alive? were installed at the MIT Media Lab during The Other Festival.

Video

I made a short video showing the concept, the making of the pieces for the installation, and photos of the installation. Watch it here!

Relevant research

If you're curious about the topic of how robots are perceived, here are a couple research papers you might find interesting:

  • Coeckelbergh, M. (2011). Talking to robots: On the linguistic construction of personal human-robot relations. In Human-robot personal relationships (pp. 126-129). Springer.

  • Kahn, P. H., Jr., Kanda, T., Ishiguro, H., Freier, N. G., Severson, R. L., Gill, B. T., Ruckert, J. H., & Shen, S. (2012). “Robovie, you'll have to go into the closet now”: Children's social and moral relationships with a humanoid robot. Developmental Psychology, 48(2), 303.

  • Severson, R. L., & Carlson, S. M. (2010). Behaving as or behaving as if? Children’s conceptions of personified robots and the emergence of a new ontological category. Neural Networks, 23(8), 1099-1103.



wood bridge with rope railing stretched over a green ravine

Learning is awesome

My favorite part of just living is how much I learn. Here are some pieces of advice you might find useful, some cool skills I've acquired (maybe you'll be inspired), and a couple other things, too:

Because lists are awesome, too...

  • A GPS is only helpful in localizing large vehicles, particularly when you're trying to use the GPS to direct navigation. When your vehicle is smaller than the error margin of plus or minus two meters (e.g., an RC car), it doesn't work so well! (This from last summer, at NASA Langley.)
  • Pens with lights attached are a fantastic invention. I got a combo flashlight-pen at GHC last year. It writes. It lights up. This pen lives next to the pad of sticky notes by my bed. Now all my middle-of-the-night ideas are legible!
  • If you're working on a big important project, always work on it, every day. Could be a thesis. Could be a novel, or a software project. Even on the days when you really don't want to work on it and you're entirely unmotivated, work on it anyway. Do a tiny little bit, then do a tiny little bit more, and maybe you'll convince yourself that you are in the mood to work on it after all. If not, at least you did a little bit, right?
  • Just how cool people think NASA is. Specifically, how cool people think it is when they find out I interned there, twice. I continue to be surprised. Quite seriously. Are my standards for what counts as super awesome too high? Do I just expect everyone else to be similarly awesome, making my accomplishments average on the scale of awesomeness? Maybe I do ... everyone has the capacity for brilliance. Maybe not everyone fulfills that capacity, but I think you're supposed to take this as your cue to go be brilliant.
  • I earned my Amateur Radio Technician's license. I am now qualified to talk on the ham radio bands! I know more than I used to about electronics, antennas, and radio frequencies. I'm still working on learning Morse code.
  • Philosophy of mind. I know a decent amount on the subject from my cognitive science background, but there's always more to learn! A friend and I have delved into some fun readings: Aristotle's conception of matter and form, Aquinas on the immateriality of mind, Lawrence Shapiro on embodiment and reductionism, and many more. I'm re-reading Shapiro's The Mind Incarnate, which I initially read in my second cognitive science class ever, some three and a half years ago.
  • How to successfully relocate to a new city in a new state. Yeah, I did that. It involved a lot of talking to people, a lot of driving, and a lot of paperwork and standing in lines.
  • Just how flexible my sleep schedule can be. I used to be a stickler for getting my full eight hours every single night of the week. I realized over the summer that I can function just fine on a weird schedule of eight hours, then three hours, then seven hours, then maybe five, followed by nine or ten hours to catch up... I'll write more on this sometime. Carol Worthman wrote a particularly relevant chapter on sleep for Evolutionary Medicine and Health that I plan to outline for you.
  • The rudiments of tae kwon do. According to the instructors at the Goddard Tae Kwon Do club, I have a decent roundhouse kick. I'd like to learn more -- I'm still very much the beginner white belt.

And a whole slew of technology-related items:

  • Octave, essentially an open-source Matlab.
  • R, a statistical computing language and environment.
  • The rudiments of time series analysis.
  • ROS, an open-source platform for robotics work.
  • The Mobile Robotics Programming Toolkit (MRPT) libraries.
  • PCL, the Point Cloud Library, useful for feature detection in point clouds.
  • Simultaneous localization and mapping (SLAM) algorithms, as well as other common mapping and path-planning algorithms.
  • How to use Subversion.
  • Random little things about Ubuntu, including the "alt-f9" shortcut to minimize the current window.
  • How to use the Tobii T60 eye tracker.
  • And so much more ...

I wonder if I can double this list by this time next year?



four people standing around a pair of boxy robots

Summer at NASA

In 2011, the summer after I graduated college, I headed to Greenbelt, Maryland to work with an international team of engineers and computer scientists at NASA Goddard Space Flight Center. The catch: we were all students! Over forty interns from at least four countries participated in Mike Comberiate's Engineering Boot Camp.

two men crouching over a boxy robot

Overview

The boot camp included several different projects. The most famous was GROVER, the Greenland Rover, a large autonomous vehicle that's now driving across the Greenland ice sheet, mapping and exploring.

The main project I worked on was called LARGE: LIDAR-Assisted Robotic Group Exploration. A small fleet of robots -- a mothership and some workerbots -- used 3D LIDAR data to explore novel areas. My software team developed object recognition, mapping, path planning, and other software to autonomously control the workerbots between infrequent contacts with human monitors. We wrote control programs using ROS.

artificial color 3D LIDAR image of an area

Later in the summer, we presented demonstrations of our work at both NASA Wallops Flight Facility and at NASA Goddard Space Flight Center.

The LARGE team

  • Mentors: NASA Mike, Jaime Cervantes, Cornelia Fermuller, Marco Figueiredo, Pat Stakem

  • Software team: Felipe Farias, Bruno Fernades, Thomaz Gaio, Jacqueline Kory, Christopher Lin, Austin Myers, Richard Pang, Robert Taylor, Gabriel Trisca

  • Hardware team: Andrew Gravunder, David Rochell, Gustavo Salazar, Matias Soto, Gabriel Sffair

  • Others involved: Mike Huang, William Martin, Randy Westlund

a group of men standing around a robot

Project description

The goal of the LARGE project is to assemble a networked team of autonomous robots to be used for three-dimensional terrain mapping, high-resolution imaging, and sample collection in unexplored territories. The software we develop in this proof-of-concept project will be transportable from our test vehicles to actual flight vehicles, which could be sent anywhere from toxic waste dumps and disaster zones on Earth to asteroids, moons, and planetary surfaces beyond.

artificial color 3D point cloud image

The robot fleet consists of a single motherbot and a set of workerbots. The motherbot is capable of recognizing the location and orientation of each workerbot, allowing her to designate target destinations for any worker and track their progress. Presently, localization and recognition are performed via the detection of spheres mounted in a unique configuration atop each robot. Each worker can independently plot a safe path through the terrain to the goal assigned by the motherbot. Communication between robots is interdependent and redundant, with messages sent over a local network. If communication between workers and the motherbot is lost, the workers will be able to establish a new motherbot and continue the mission. The failure of any single robot or device will not prevent the mission from being completed.
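To give a flavor of what "independently plot a safe path through the terrain" means in practice, here's a minimal sketch of grid-based path planning using breadth-first search over an occupancy grid. This is a simplification for illustration -- the actual LARGE planners worked on 3D LIDAR-derived maps, and the function name here is hypothetical:

```python
from collections import deque

def plan_path(grid, start, goal):
    """Breadth-first search over a 2D occupancy grid.

    grid[r][c] == 1 marks an obstacle cell. Returns a list of
    (row, col) cells from start to goal, or None if unreachable.
    """
    rows, cols = len(grid), len(grid[0])
    came_from = {start: None}  # also serves as the visited set
    queue = deque([start])
    while queue:
        cell = queue.popleft()
        if cell == goal:
            # Walk the chain of predecessors back to the start.
            path = []
            while cell is not None:
                path.append(cell)
                cell = came_from[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == 0 and (nr, nc) not in came_from):
                came_from[(nr, nc)] = cell
                queue.append((nr, nc))
    return None  # no safe path exists
```

Because BFS expands cells in order of distance, the first path found is the shortest one in grid steps; real planners swap in A* or cost-aware variants, but the skeleton is the same.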

The robots use LIDAR sensors to take images of the terrain, stitching successive images together to create global maps. These maps can then be used for navigation. Eventually, several of the workers will carry other imaging sensors, such as cameras for stereo vision or a Microsoft Kinect, to complement the LIDAR and enable the corroboration of data across sensory modalities.
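The "stitching" step above boils down to transforming each local scan into a shared global frame using the robot's pose at the time of the scan. Here's a minimal 2D sketch of that idea (the real system registered full 3D point clouds, and these function names are my own for illustration):

```python
import math

def to_global_frame(points, pose):
    """Transform local 2D scan points into the global map frame.

    pose is (x, y, theta): the sensor's global position and heading.
    Each local point is rotated by theta, then shifted by (x, y).
    """
    x, y, theta = pose
    cos_t, sin_t = math.cos(theta), math.sin(theta)
    return [(x + px * cos_t - py * sin_t,
             y + px * sin_t + py * cos_t) for px, py in points]

def stitch(scans_with_poses):
    """Merge several (scan, pose) pairs into one global point list."""
    global_map = []
    for points, pose in scans_with_poses:
        global_map.extend(to_global_frame(points, pose))
    return global_map
```

In practice the pose itself is uncertain, which is exactly why SLAM algorithms jointly estimate the map and the robot's trajectory rather than trusting odometry alone.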

Articles and other media

In the media:

three metal boxy robots with treads

On my blog:

Videos

I spent the summer writing code, learning ROS, and dealing with our LIDAR images. Other people took videos! (Captions, links to videos, & credits are below the corresponding videos.) More may be available on Geeked on Goddard or from nasagogblog's YouTube channel.



the entrance arch under the library to Vassar's campus with a banner hung welcoming the newest class: of 2011

Gender, scientists, and reductionism: Why Vassar is special

This fall, for the first time in four years, I'm not returning to Vassar. What better time to muse on the college's specialness?

Over the summer, a couple divisions became more apparent to me than they had been previously:

  1. Gender in technology fields - I worked in a lab of fifty-some interns/apprentices at NASA Goddard, with a large number of mentors who dropped in on a regular basis. I was the only female on my project; I generally worked in a room with fifteen guys. Only one of the mentors I knew was female, and she was a professor from a collaborating university, not from NASA.

    I should emphasize that the difference I'm focusing on here is not in treatment but in sheer numbers. Why is it that fewer women end up in technology fields? The fact that so many prominent organizations focus on promoting women in technology -- including WIT, the Women in Technology project, NCWIT, and of course the Grace Hopper Celebration, which I attended last year -- suggests there's a problem. It's at the point where it doesn't even feel weird to be the only female in the room. Is there something wrong with that?

  2. Science vs engineering - I mentioned this recently. There is a clear division between those who have been trained as scientists and those trained as engineers. Yes, each has its own goals and purposes, but why isn't there more crossover?

  3. Reductionism vs dualism - As elaborated in one of the first essays I wrote here, I'm not a dualist. A prominent place to find dualisms is in many of the world's fine religions. Some of the conversations I had with people this summer have accentuated just how different that point of view is from my own.

The fact that I noticed these differences now -- not during a previous summer or semester -- highlights just how special a place Vassar is, and how different being at an undergraduate liberal arts college is from being in the rest of the world.

My closest friends at Vassar were also non-dualists; Vassar's mix of genders is unique enough to begin with that the ratio in technology-related majors continues to be unique; Vassar lacks an engineering department and is generally full of scientists.

The rest of the world has different ratios of people and mixes of beliefs. I'm finding it fascinating to explore.



group shot of nine interns and Garry (one intern, Leo, is not pictured) in front of blimps, holding quadcopters and shiny cars

In the summer of 2010, I interned at NASA Langley Research Center in the Langley Aerospace Research Summer Scholars Program.

My lab established an Autonomous Vehicle Lab for testing unmanned aerial vehicles, both indoors and outdoors.

Overview

I worked in the Laser Remote Sensing Branch of the Engineering Directorate under mentor Garry D. Qualls. There were nine interns besides me -- here's the full list, alphabetically:

  • Brianna Conrad, Massachusetts Institute of Technology
  • Avik Dayal, University of Virginia
  • Michael Donnelly, Christopher Newport University
  • Jake Forsberg, Boise State University
  • Amanda Huff, Western Kentucky University
  • Jacqueline Kory, Vassar College
  • Leonardo Le, University of Minnesota
  • Duncan Miller, University of Michigan
  • Stephen Pace, Virginia Tech
  • Elizabeth Semelsberger, Christopher Newport University

several quadcopters stacked up in a pile

Our project's abstract

Autonomous Vehicle Laboratory for "Sense and Avoid" Research

As autonomous, unmanned aerial vehicles begin to operate regularly in the National Airspace System, the ability to safely test the coordination and control of multiple vehicles will be an important capability. This team has been working to establish an autonomous vehicle testing facility that will allow complex, multi-vehicle tests to be run both indoors and outdoors. Indoors, a commercial motion capture system is used to track vehicles in a 20'x20'x8' volume with sub-millimeter accuracy. This tracking information is transmitted to navigation controllers, a flight management system, and real-time visual displays. All data packets sent over the network are recorded and the system has the ability to play back any test for further analysis. Outdoors, a differential GPS system replaces the functionality of the motion capture system, allowing the same tests to be conducted as indoors, but on a much larger scale.

Presently, two quadrotor helicopters and one wheeled ground vehicle operate routinely in the volume. The navigation controllers implement Proportional-Integral-Derivative (PID) control algorithms and collision avoidance capabilities for each vehicle. Virtual, moving points in the volume are generated by the flight management system for the vehicles to track and follow. This allows the creation of specific flight paths, allowing the efficient evaluation of navigation control algorithms. Data from actual vehicles, virtual vehicles, and vehicles that are part of hardware-in-the-loop simulations are merged into a common simulation environment using FlightGear, an open source flight simulator. Evaluating the reactions of both air and ground vehicles in a simulated environment reduces time and cost, while allowing the user to log, replay, and explore critical events with greater precision. This testing facility will allow NASA researchers and aerospace contractors to address sense and avoid problems associated with autonomous multi-vehicle flight control in a safe and flexible manner.
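For readers unfamiliar with the PID controllers the abstract mentions: each axis of a vehicle (x, y, altitude, yaw) gets its own controller that turns the tracking error into a command. Here's a minimal discrete-time sketch; the gains and class name are illustrative, not the actual flight code:

```python
class PID:
    """Minimal discrete PID controller for one control axis."""

    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0      # accumulated error over time
        self.prev_error = None   # for the derivative term

    def update(self, error, dt):
        """Return the control output for the current error sample."""
        self.integral += error * dt
        derivative = (0.0 if self.prev_error is None
                      else (error - self.prev_error) / dt)
        self.prev_error = error
        return (self.kp * error
                + self.ki * self.integral
                + self.kd * derivative)
```

The proportional term reacts to the current error, the integral term removes steady-state offset (e.g., a copter consistently sagging below its altitude setpoint), and the derivative term damps overshoot. Tuning the three gains per vehicle and axis is most of the work.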

Articles and other media

In the media

On my blog

Videos

Most of the summer was spent developing all the pieces of software and hardware needed to get our autonomous vehicle facility up and running, but by the end, we were flying quadcopters! (Captions are below their corresponding videos.)

Credit for these videos goes to one of my labmates, Jake Forsberg.

Object tracking for human interaction with autonomous quadcopter

Object tracking for human interaction with autonomous quadcopter: Here, the flying quadcopter is changing its yaw and altitude to match the other object in the flight volume (at first, another copter's protective foam frame; later, the entertaining hat we constructed). The cameras you see in the background track the little retro-reflective markers that we place on objects we want to track -- this kind of motion capture system is often used to acquire human movement for animation in movies and video games. In the camera software, groups of markers can be selected as representing an object so that the object is recognized any time that specific arrangement of markers is seen. Our control software uses the position and orientation data from the camera software and sends commands to the copter via wifi.

Autonomous sense and avoid with AR.Drone quadcopter

Autonomous sense and avoid with AR.Drone quadcopter: The flying copter is attempting to maintain a certain position in the flight volume. When another tracked object gets too close, the copter avoids it. We improved our algorithm between the first and second halves of this video. Presently, only objects tracked by the cameras are avoided, since we have yet to put local sensors on the copters (the obstacle avoidance is done using global information from the camera system about all the objects' locations).
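The core of that avoidance behavior -- back away whenever a tracked object comes within a safety radius -- can be sketched in a few lines. This is a simplified 2D illustration with a made-up function name, not our actual flight code:

```python
import math

def avoid(position, obstacle, safe_radius):
    """Return a new target that backs away from a nearby obstacle.

    If the obstacle is farther than safe_radius, hold position.
    Otherwise, step directly away from the obstacle until the
    separation equals safe_radius.
    """
    dx = position[0] - obstacle[0]
    dy = position[1] - obstacle[1]
    dist = math.hypot(dx, dy)
    if dist >= safe_radius:
        return position  # nothing nearby: hold station
    if dist == 0:
        # Degenerate case (directly on top of the obstacle):
        # pick an arbitrary escape direction.
        return (position[0] + safe_radius, position[1])
    scale = safe_radius / dist
    return (obstacle[0] + dx * scale, obstacle[1] + dy * scale)
```

The avoidance target then feeds into the same PID position controllers that normally hold the copter on its setpoint, which is why the whole thing works with only global camera data and no onboard sensors.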

Autonomous quadcopter tracking and following a ground vehicle

Autonomous quadcopter tracking and following a ground vehicle: The flying copter is attempting to maintain a position above the truck. The truck was driven manually by one of my labmates, though eventually, it'll be autonomous, too.

Virtual flight boundaries with the AR.Drone and the Vicon motion capture system

Virtual flight boundaries with the AR.Drone and the Vicon motion capture system: As a safety precaution, we implemented virtual boundaries in our flight volume. Even if the copter is commanded to fly to a point beyond one of the virtual walls, it won't fly past the walls.
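Enforcing those virtual walls amounts to clamping every commanded setpoint into the allowed box before it reaches the copter. A minimal sketch (function name and box coordinates are illustrative, not the real system's):

```python
def clamp_to_volume(target, lower, upper):
    """Clamp a commanded (x, y, z) setpoint to the virtual flight box.

    lower and upper are opposite corners of the box; any coordinate
    outside the box is pulled back to the nearest wall before the
    command is sent to the vehicle.
    """
    return tuple(max(lo, min(hi, t))
                 for t, lo, hi in zip(target, lower, upper))
```

Because the clamp is applied to the commands rather than relying on the pilot, even a deliberately out-of-bounds command can't push the copter past a wall.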

Hardware-in-the-loop simulation

Hardware-in-the-loop simulation: Some of my labmates built a hardware-in-the-loop simulation with a truck, and also with a plane. Essentially, a simulated environment emulates sensor and state data for the real vehicle, which responds as if it is in the simulated world.

