a child puts her arm around a fluffy red and blue robot and grins

Relational Robots

My latest research in Cynthia Breazeal's Personal Robots Group has been on relational technology.

By relational, I mean technology that is designed to build and maintain long-term, social-emotional relationships with users. It's technology that's not just social—it's more than a digital assistant. It doesn't just answer your questions, tell jokes on command, or play music and adjust the lights. It collects data about you over time. It uses that data to personalize its behavior and responses to help you achieve long-term goals. It probably interacts using human social cues so as to be more understandable and relatable; in areas such as education and health, positive relationships (such as teacher-student or doctor-patient) are correlated with better outcomes. It might know your name. It might try to cheer you up if it detects that you're looking sad. It might refer to what you've done together in the past or talk about future activities with you.

Relational technology is new. Some digital assistants and personal home robots on the market have some features of relational technology, but not all of them. Relational technology is still more a research idea than a commercial product. Which means right now, before it's on the market, is exactly the right time to talk about how we ought to design relational technology.

As part of my dissertation, I performed a three-month study with 49 kids aged 4-7 years. All the kids played conversation and storytelling games with a social robot. Half the kids played with a version of the robot that was relational, using all the features of relational technology to build and maintain a relationship and personalize to individual kids. It talked about its relationship with the child and disclosed personal information about itself; referenced shared experiences (such as stories told together); used the child's name; mirrored the child's affective expressions, posture, speaking rate, and volume; selected stories to tell based on appropriate syntactic difficulty and similarity of story content to the child's stories; and used appropriate backchanneling actions (such as smiles, nods, saying "uh huh!"). The other half of the kids played with a not-relational robot that was just as friendly and expressive, but without the special relational stuff.

Besides finding some fascinating links between children's relationships with the robot, their perception of it as a social-relational agent, their mirroring of the robot's behaviors, and their language learning, I also found some surprises. One surprise was a gender difference: in general, boys and girls treated the relational and not-relational robots differently.

Boys and girls treated the robots differently

Girls responded positively to the relational robot and less positively to the not-relational robot. This was the pattern I expected to see, since the relational robot was doing a lot more social stuff to build a relationship. I'd hypothesized that kids would like the relational robot more, feel closer to it, and treat it more socially. And that's what girls did. Girls generally rated the relational robot as more of a social-relational agent than the not-relational robot. They liked it more and felt closer to it. Girls often mirrored the relational robot's language more (we often mirror people more when we feel rapport with them), disclosed more information (we share more with people we're closer to), showed more positive emotions, and reported feeling more comfortable with the robot. They also showed stronger correlations between their scores on various relationship assessments and their vocabulary learning and vocabulary word use, suggesting that they learned more when they had a stronger relationship.

graph showing on the left, that kids in the not-relational condition didn't have as strong a correlation while in the relational condition, there was a stronger correlation - but that this varied by gender

Children who rated the robot as more of a social-relational agent also scored higher on the vocabulary posttest—but this trend was stronger for girls than for boys.

Boys showed the opposite pattern. Contrary to my hypotheses, boys tended to like the relational robot less than the not-relational one. They felt less close to it, mirrored it less, disclosed less, showed more negative emotions, showed weaker correlations between their relationship and learning (but they did still learn—it just wasn't as strongly related to their relationship), and so forth. Boys also liked both robots less than girls did. This was the first time we'd seen this gender difference, even after observing 300+ kids in 8+ prior studies. What was going on here? Why did the boys in this study react so differently to the relational and not-relational robots?

I dug into the literature to learn more about gender differences. There's actually quite a bit of psychology research looking at how young girls and boys approach social relationships differently (e.g., see Joyce Benenson's awesome book Warriors and Worriers: The Survival of the Sexes). For example, girls tend to be more focused on individual relationships and tend to have fewer, closer friends. They tend to care about exchanging personal information and learning about others' relationships and status. Girls are often more likely to try to avoid conflict, more egalitarian than boys, and more competent at social problem solving.

Boys, on the other hand, often care more about being part of their peer group. They tend to be friends with a lot of other boys and are often less exclusive in their friendships. They frequently care more about understanding their skills relative to the skills other boys have, and care less about exchanging personal information or explicitly talking about their relationships.

Of course, these are broad generalizations about girls versus boys that may not apply to any particular individual child. But as generalizations, they were often consistent with the patterns I saw in my data. For example, the relational robot used a lot of behaviors that are more typical of girls than of boys, like explicitly sharing information about itself and talking about its relationship with the child. The not-relational robot used fewer actions like these. Plus, both robots may have talked and acted more like a girl than a boy, because its speech and behavior were designed by a woman (me), and its voice was recorded by a woman (shifted to a higher pitch to sound like a kid). We also had only women experimenters running the study, something that has varied more in prior studies.

I looked at kids' pronoun usage to see how they referred to the relational versus not-relational robot. There wasn't a big difference among girls; most of them used "he/his." Boys, however, were somewhat more likely to use "she/her." So one reason boys might've reacted less positively could be that they saw it as more of a girl, and they preferred to play with other boys.

We need to do follow-up work to examine whether any of these gender-related differences were actually causal factors. For example, would we see the same patterns if we explicitly introduced the robot as a boy versus as a girl, included more behaviors typically associated with boys, or had female versus male experimenters introduce the robot?

a child sits at a table that has a fluffy robot sitting on it

Designing Relational Technology

These data have interesting implications for how we design relational technology. First, the mere fact that we observed gender differences means we should probably start paying more attention to how we design the robot's gender and gendered behaviors. In our current culture and society, there are a range of behaviors that are generally associated with masculine versus feminine, male versus female, boys versus girls. Which means that if the robot acts like a stereotypical girl, even if you don't explicitly say that it is a girl, kids are probably going to treat it like a girl. Perhaps this might change if children are concurrently taught about gender and gender stereotypes, but there are a lot of open questions here and more research is needed.

One issue is that you may not need much stereotypical behavior in a robot to see an effect—back in 2009, Mikey Siegel performed a study in the Personal Robots group that compared two voices for a humanoid robot. Study participants conversed with the robot, and then the robot solicited a donation. Just changing the voice from male to female affected how persuasive, credible, trustworthy, and engaging people found the robot. Men, for example, were more likely to donate money to the female robot, while women showed little preference. Most participants rated the opposite-sex robot as more credible, trustworthy, and engaging.

As I mentioned earlier, this was the first time we'd seen these gender differences in our studies with kids. Why now, but not in earlier work? Is the finding repeatable? A few other researchers have seen similar gender patterns in their work with virtual agents...but it's not clear yet why we see differences in some studies but not others.

What gender should technology have—if any?

Gendering technological entities isn't new. People frequently assign gender to relatively neutral technologies, like robot vacuum cleaners and robot dogs—not to mention their cars! In our studies, I've rarely seen kids not ascribe gender to our fluffy robots (and what gender they typically pick has varied by study and by robot). Which raises the question of what gender a robot should be—if any? Should a robot use particular gender labels or exhibit particular gender cues?

This is largely a moral question. We may be able to design a robot that acts like a girl, or a boy, or some androgynous combination of both, and calls itself male, female, nonbinary, or any number of other things. We could study whether a girl robot, a boy robot, or some other robot might work better when helping kids with different kinds of activities. We could try personalizing the robot's gender or gendered behaviors to individual children's preferences for playmates.

But the moral question, regardless of whatever answers we might find regarding what works better in different scenarios or what kids prefer, is how we ought to design gender in robots. I don't have a solid recommendation on that—the answer depends on what you value. We don't all value the same things, and what we value may change in different situations. We also don't know yet whether children's preferences or biases are necessarily at odds with how we might think we should design gender in robots! (Again: More research needed!)

Personalizing robots, beyond gender

A robot's gender may not be a key factor for some kids. They may react more to whether the robot is introverted versus extraverted, or really into rockets versus horses. We could personalize other attributes of the robot, like aspects of personality (such as extraversion, openness, conscientiousness), the robot's "age" (e.g., is it more of a novice than the child or more advanced?), whether it uses humor in conversation, and any number of other things. Furthermore, I'd expect different kids to treat the same robot differently, and to frequently prefer different robot behaviors or personalities. After all, not all kids get along with all other kids!

However, there's not a lot of research yet exploring how to personalize an agent's personality, its styles of speech and behavior, or its gendered expressions of emotions and ideas to individuals. There's room to explore. We could draw on stereotypes and generalizations from psychological research and our own experiences about which kids other kids like playing with, how different kids (girls, boys, extroverts, etc.) express themselves and form friendships, or what kinds of stories and play boys or girls prefer (e.g., Joyce Benenson talks in her book Warriors and Worriers about how boys are more likely to include fighting enemies in their play, while girls are more likely to include nurturing activities).

We need to be careful, too, to consider whether making relational robots that provide more of what the child is comfortable with (more of what the child responds to best, more of the same) might in some cases be detrimental. Yes, a particular child may love stories about dinosaurs, battles, and knights in shining armor, but they may need to hear stories about friendship, gardening, and mammals in order to grow, learn, and develop. Children do need to be exposed to different ideas, different viewpoints, and different personalities with whom they must connect and resolve conflicts. Maybe the robots shouldn't only cater to what a child likes best, but also offer what invites them out of their comfort zone and promotes growth. Maybe a robot's assigned gender should not reinforce current cultural stereotypes.

Dealing with gender stereotypes

A related question is whether, given gender stereotypes, we could make a robot appear more neutral if we tried. While we know a lot about what behaviors are typically considered feminine or masculine, it's harder to say what would make a robot come across as neither a boy nor a girl. Some evidence suggests that girls and women are culturally "allowed" to display a wider range of behaviors and still be considered female than are boys, who are subject to stronger cultural rules about what "counts" as appropriate masculine behavior (Joyce Benenson talks about this in her book that I mentioned earlier). This might mean that there's a narrower range of behaviors a robot could use to be perceived as more masculine... which raises more questions about gender labels and behavior. What's needed to "override" stereotypes? And is that different for boys versus girls?

One thing we could do is give the robot a backstory about its gender or lack thereof. The story the robot tells about itself could help change children's construal of the robot. But would a robot explicitly telling kids that it's nonbinary, or that it's a boy or a girl, or that it's just a robot and doesn't have a gender be enough to change how kids interact with it? The robot's behaviors—such as how it talks, what it talks about, and what emotions it expresses—may "override" the backstory. Or not. Or only for some kids. Or only for some combinations of gendered behaviors and robot claims about its gender. These are all open empirical questions that should be explored while keeping in mind the main moral question regarding what robots we ought to be designing in the first place.

Another thing to explore in relation to gender is the robot's use of relational behaviors. In my study, I saw that the robot's relational behaviors made a bigger difference for girls than for boys. Adding in all that relational stuff made girls a lot more likely to engage positively with the robot.

This isn't a totally new finding—earlier work from a number of researchers on kids' interactions with virtual agents and robots has found similar gender patterns. Girls frequently reacted more strongly to the psychological, social, and relational attributes of technological systems. Girls' affiliation, rapport, and relationship with an agent often affected their perception of the agent and their performance on tasks done with the agent more than for boys. This suggests that making a robot more social and relational might engage girls more, and lead to greater rapport, imitation, and learning. Of course, that might not work for all girls... and the next questions to ask are how we can also engage those girls, and what features the robot ought to have to better engage boys, too. How do we tune the robot's social-relational behavior to engage different people?

More to relationships than gender

There's also a lot more going on in any individual or in any relationship than gender! Things like shared interests and shared experiences build rapport and affiliation, regardless of the gender of those involved. When building and maintaining kids' engagement and attention during learning activities with the robots over time, there's a lot more than the robot's personality or gender that matters. Here are a few factors I think are especially helpful:

  • Personalization to individuals, e.g., choosing stories to tell with an appropriate linguistic/syntactic level,
  • Referencing shared experiences like stories told together and facts the child had shared, such as the child's favorite color,
  • Sharing backstory and setting expectations about the robot's history, capabilities, and limitations through conversation,
  • Using playful and creative story and learning activities, and
  • The robot's design from the ground up as a social agent—i.e., considering how to make the robot's facial expressions, movement, dialogue, and other behaviors understandable to humans.
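As one concrete illustration of the first item, personalizing story choice to a child's linguistic level, here is a minimal sketch. The story data, the difficulty scale, and the `pick_story` helper are all hypothetical simplifications for illustration, not the study's actual code.

```python
# Hypothetical story library: each story is tagged with a rough
# syntactic-difficulty level (higher = more complex sentences).
STORIES = [
    {"title": "The Lost Kite",    "syntax_level": 1},
    {"title": "A Day at the Zoo", "syntax_level": 2},
    {"title": "The Brave Robot",  "syntax_level": 3},
    {"title": "Journey Home",     "syntax_level": 4},
]

def pick_story(child_level, stories=STORIES, stretch=1):
    """Pick the story whose difficulty is closest to slightly above the
    child's current level -- a rough 'challenge without overwhelming'
    heuristic. `stretch` controls how far above the child's level to aim."""
    target = child_level + stretch
    return min(stories, key=lambda s: abs(s["syntax_level"] - target))

print(pick_story(2)["title"])  # → "The Brave Robot"
```

In a real system, the child's level would come from assessments or from analyzing the stories the child tells, and story similarity to the child's own stories could be a second scoring term.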

Bottom line: People relate socially and relationally to technology

When it comes to the design of relational technology, the bottom line is that people seem to use the same social-relational mechanisms to understand and relate to technology that they use when dealing with other people. People react to social cues and personality. People assume gender (whether male, female, or some culturally acceptable third gender or in-between). People engage in dialogue and build shared narratives. The more social and relational the technology is, the more people treat it socially and relationally.

This means that when designing relational technology, we need to be aware of how people interact socially and how people typically develop relationships with other people, since that'll tell us a lot about how people might treat a social, relational robot. This will also vary across cultures with different norms. We need to consider the ethical implications of our design decisions: whether a robot's behavior perpetuates undesirable gender stereotypes or challenges them in positive ways, and how to mitigate risks around emotional interaction, attachment, and social manipulation. Do our interactions with gendered robots change how we interact with people of different genders? (Some of these ethical concerns will be the topic of a later post. Stay tuned!)

We need to explicitly study people's social interactions and relationships with these technologies, like we've been doing in the Personal Robots group, because these technologies are not people, and there are going to be differences in how we respond to them—and this may influence how we interact with other people. Relational technologies have a unique power because they are social and relational. They can engage us and help us in new ways, and they can help us to interact with other people. In order to design them in effective, supportive, and ethical ways, we need to understand the myriad factors that affect our relationships with them—like children's gender.

This article originally appeared on the MIT Media Lab website, August, 2019



a red and blue robot sits on a table

Tega sits at a school, ready to begin a storytelling activity with kids!

Last spring, you could find me every morning alternately sitting in a storage closet, a multipurpose meeting room, and a book nook beside our fluffy, red and blue striped robot Tega. Forty-nine different kids came to play storytelling and conversation games with Tega every week, eight times each over the course of the spring semester. I also administered pre- and post-assessments to find out what kids thought about the robot, what they had learned, and what their relationships with the robot were like.

Suffice to say, I spent a lot of time in that storage closet.

a child sits at a table that has a fluffy robot sitting on it

A child talks with the Tega robot.

Studying how kids learn with robots

The experiment I was running was, ostensibly, straightforward. I was exploring a theorized link between the relationship children formed with the robot and children's engagement and learning during the activities they did with the robot. This was the big final piece of my dissertation in the Personal Robots Group. My advisor, Cynthia Breazeal, and my committee, Rosalind Picard (also of the MIT Media Lab) and Paul Harris (Harvard Graduate School of Education), were excited to see how the experiment turned out, as were some of our other collaborators, like Dave DeSteno (Northeastern University), who have worked with us on quite a few social robot studies.

In some of those earlier studies, as I've talked about before, we've seen that the robot's social behaviors—like its nonverbal cues (such as gaze and posture), its social contingency (e.g., using appropriate social cues at the right times), and its expressivity (such as using an expressive voice versus a flat and boring one)—can affect how much kids learn, how engaged they are in learning activities, and their perception of the robot's credibility. Kids frequently treat the robot as something kind of like a friend and use a lot of social behaviors themselves—like hugging and talking; sharing stories; showing affection; taking turns; mirroring the robot's behaviors, emotions, and language; and learning from the robot like they learn from human peers.

Five years of looking at the impact of the robot's social behaviors hinted to me that there was probably more going on. Kids weren't just responding to the robot using appropriate social cues or being expressive and cute. They were responding to more stuff—relational stuff. Relational stuff is all the social behavior plus everything else that contributes to building and maintaining a relationship: interacting multiple times, changing in response to those interactions, referencing experiences shared together, being responsive, showing rapport (e.g., with mirroring and entrainment), and reciprocating behaviors (e.g., helping, sharing personal information or stories, providing companionship).

While the robots didn't do most of these things, whenever they used some (like being responsive or personalizing behavior), it often increased kids' learning, mirroring, and engagement.

So... what if the robot did use all those relational behaviors? Would that increase children's engagement and learning? Would children feel closer to the robot and perceive it as a more social, relational agent?

I created two versions of the robot. Half the kids played with the relational robot: the version that used all the social and relational behaviors listed above. For example, it mirrored kids' pitch and speaking rate. It mirrored some emotions. It tracked activities done together, like stories told, and referred to them in conversation later. It told personalized stories.

The other half of the kids played with the not-relational robot—it was just as friendly and expressive, but didn't do any of the special relational stuff.

Kids played with the robot every week. I measured their vocabulary learning and their relationships, looked at their language and mirroring of the robot, examined their emotions during the sessions, and more. From all this data, I got a decent sense of what kids thought about the two versions of the robot, and what kind of effects the relational stuff had.

In short: The relational stuff mattered.

Relationships and learning

Kids who played with the relational robot rated it as more human-like. They said they felt closer to it than kids who played with the not-relational robot, and disclosed more information (we tend to share more with people we're closer to). They were more likely to say goodbye to the robot (when we leave, we say goodbye to people, but not to things). They showed more positive emotions. They were more likely to say that playing with the robot was like playing with another child. They also were more confident that the robot remembered them, frequently referencing relational behaviors to explain their confidence.

All of this was evidence that the robot's relational behaviors affected kids' perceptions of it and kids' behavior with it in the expected ways. If a robot acted in more social and relational ways, kids viewed it as more social and relational.

Then I looked at kids' learning.

I found that kids who felt closer to the robot, rated it as more human-like, or treated it more socially (like saying goodbye) learned more words. They mirrored the robot's language more during their own storytelling. They told longer stories. All these correlations were stronger for kids who played with the relational robot—meaning, in effect, that kids who had a stronger relationship with the robot learned more and demonstrated more behaviors related to learning and rapport (like mirroring language). This was evidence for my hypotheses that the relationships kids form with peers contribute to their learning.
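The kind of comparison described above can be sketched in a few lines: compute the correlation between a relationship measure and a learning measure separately for each condition, and compare. Everything here is illustrative: the data are made up, and the real analysis used proper statistical tests rather than raw correlations alone.

```python
import numpy as np

def pearson_r(x, y):
    """Pearson correlation between two equal-length sequences."""
    return float(np.corrcoef(x, y)[0, 1])

# Hypothetical data: each child's rating of the robot as a
# social-relational agent (1-5) and vocabulary posttest score.
relational = {
    "rating":   [2, 3, 3, 4, 4, 5, 5, 5],
    "posttest": [4, 5, 6, 7, 7, 8, 9, 9],
}
not_relational = {
    "rating":   [2, 3, 3, 4, 4, 5, 5, 5],
    "posttest": [6, 5, 7, 6, 8, 6, 7, 8],
}

r_rel = pearson_r(relational["rating"], relational["posttest"])
r_not = pearson_r(not_relational["rating"], not_relational["posttest"])

print(f"relational:     r = {r_rel:.2f}")   # strong correlation
print(f"not-relational: r = {r_not:.2f}")   # weaker correlation
```

With these made-up numbers, the relational condition shows a much stronger rating-to-posttest correlation, mirroring the pattern in the study.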

graph showing on the left, that kids in the not-relational condition didn't have as strong a correlation while in the relational condition, there was a stronger correlation - but that this varied by gender

Children who rated the robot as more of a social-relational agent also scored higher on the vocabulary posttest.

This was an exciting finding. There are plenty of theories about how kids learn from peers and how peers are really important to kids' learning (famous names in the subject include Piaget, Vygotsky, and Bandura), but there's not as much research looking at the mechanisms that influence peer learning. For example, I'd found research showing that kids' peers can positively affect their language learning... but not why they could. Digging into the literature further, I'd found one recent study linking learning to rapport, and several more showing links between an agent's social behavior and various learning-related emotions (like increased engagement or decreased frustration), but not learning specifically. I'd seen some work showing that social bonds between teachers and kids could predict academic performance—but that said nothing about peers.

In exploring my hypotheses about kids' relationships and learning, I also dug into some previously-collected data to see if there were any of the same connections. Long story short, there were. I found similar correlations between kids' vocabulary learning, emulation of the robot's language, and relationship measures (such as ratings of the robot as a social-relational agent and self-disclosure to the robot).

All in all, I found some pretty good evidence for my hypothesized links between kids' relationships and learning.

I also found some fascinating nuances in the data involving kids' gender and their perception of the robot, which I'll talk about in a later post. And, of course, whenever we talk about technology, ethical concerns abound, so I'll talk more about that in a later post, too.

This article originally appeared on the MIT Media Lab website, February, 2019



A poem to celebrate my year

2018: A year defined by a PhD,
A study, analyses, and a writing spree.
A kid who’s growing; a family, moving.
Always learning, ever improving.

In January, I was glued to a laptop,
Programming robots and testing nonstop.
I recorded dialogue; recruited schools;
Prepped assessments; built software tools.

snow-covered front steps of a house

February is a wild, snowy blur
Of consent forms, paperwork, and red and blue fur.
Kids signed up!
The robot was ready!
All this made me happy, since progress was steady.

the robot tega's face

As March snow melted, the study began!
I drove to schools and followed my plan.
Eight sessions each, plus pre and post;
The robot was keeping the kids engrossed.

In April, one kid, who wasn’t too shy,
Told me he was “actually part robot, so I can fly!”
(Tega, our robot, it’s worth pointing out,
Just talks, and sits, and looks about.)

By May, I was glad if the robots didn’t break,
But why oh why did I choose this headache?
Long-term studies will be my demise
Why oh why do I do this, you guys?

Oh wait, it’s June, long-term studies are the best!
Look, I have data, totally worth being stressed!
Learning with robots over time—this is nice!
Awesome research, look: data! Worth the price.

sunny blue couer d alene lake

In July, let’s mix it up and buy a home,
Way out west where there’s space to roam,
More lakes, more space, and bonus, it’s cheap!
Less traffic, more mountains; more yard upkeep.

In a haze of boxes and packing tape,
The month of August and ggplot graphs take shape.
Let’s leave the humidity and Boston’s heat:
Analyze data; start writing; retreat.

light coming through leaves

September is data, papers, and writing.
And writing, revising, and then some rewriting.
I find getting three great professors to be
In the same place at the same time isn’t all that easy.

yellow leaves on a maple tree

I like watching the colored October leaves from my chair.
They dance and they spin, red and yellow in the air.
Oh wait, I’m still writing. I need a new graph…
Add to this chapter; fix that paragraph….

me hugging little Elian in front of evergreens

My baby is two! He’s as tall as a table!
He’s finally stopped trying to eat all our cables!
I’m still writing. Time to start my talk prep.
Defense Day is looming on the doorstep.

my PhD committee and me, post-defense!

Now here’s a day that I’ll remember!
Dissertation defense on the 12th of December.
Crazy year it’s been, that and then some…
But hey: Dr. Jackie, here I come!

This article originally appeared on www.media.mit.edu, January 2019



Exploring how the relational features of robots impact children's engagement and learning

One challenge I've faced in my research is assessment. That's because some of the stuff I'd like to measure is hard to measure—namely, kids' relationships with robots.

a child puts her arm around a fluffy red and blue robot and grins

During one study, the Tega robot asked kids to take a photo with it so it could remember them. We gave each kid a copy of their photo at the end of the study as a keepsake.

I study kids, learning, and how we can use social robots to help kids learn. The social robots I've worked with are fluffy, animated characters that are more akin to Disney sidekicks than to vacuum cleaners—Tega, and its predecessor, DragonBot. Both robots use Android phones to display an animated face; they squash and stretch as they move; they can play back sounds and respond to a variety of sensors.

In my work so far, I've found evidence that the social behaviors of the robot—such as its nonverbal behavior (e.g., gaze and posture), social contingency (e.g., performing the right social behaviors at the right times), and expressivity (such as using a very expressive voice versus a flat/boring one)—significantly impact how much kids learn, how engaged they are in the learning activities, and how credible they think the robot is.

I've also seen kids treat the robot as something kind of like a friend. As I've talked about before, kids treat the robot as something in between a pet, a tutor, and a technology. They show many social behaviors with robots—hugging, talking, tickling, giving presents, sharing stories, inviting to picnics—and they also show understanding that the robot can turn off and needs battery power to turn back on. In some of our studies, we've asked kids questions about the properties of the robot: Can it think? Can it break? Does it feel tickles? Kids' answers show that they understand that the robot is a technological, human-made entity, but also that it shares properties with animate agents.

In many of our studies, we've deliberately tried to situate the robot as a peer. After all, one key way that children learn is through observing, cooperating with, and being in conflict with their peers. Putting the cute, fluffy robot in a peer-like role seemed natural. And over the past six years, I've seen kids mirror robots' behaviors and language use, learning from them the same way they learn from peers.

I began to wonder about the impact of the relational features of the robot on children's engagement and learning: that is, the stuff about the robot that influences children's relationships with the robot. These relational features include the social behaviors we have been investigating, as well as others: mirroring, entrainment, personalization, change over time in response to the interaction, references to a shared narrative, and more. Some teachers I've talked to have said that it's their relationship with their students that really matters in helping kids learn—what if the same was true with robots?

My hunch—one I'm exploring in my dissertation right now via a 12-week study at Boston-area schools—is that yes: kids' relationships with the robot do matter for learning.

But how do you measure that?

I dug into the literature. As it turns out, psychologists have observed and interviewed children, their parents, and their teachers about kids' peer relationships and friendship quality. There are also scales and questionnaires for assessing adults' relationships, personal space, empathy, and closeness to others.

I ran into two main problems. First, all of the work with kids involved assumptions about peer interactions that didn't hold with the robot. For example, several observation-based methodologies assumed that kids would be freely associating with other kids in a classroom. Frequency of contact and exclusivity were two variables they coded for (higher frequency and more exclusive contact meant the kids were more likely to be friends). Nope: Due to the setup of our experimental studies, kids only had the option of doing a fairly structured activity with the robot once a week, at specific times of the day.

The next problem was that all the work with adults assumed that the experimental subjects would be able to read. As you might imagine, five-year-olds aren't prime candidates for filling out written questionnaires full of "how do you feel about X, Y, or Z on a 1-5 scale." These kids are still working on language comprehension and self-reflection skills.

I found a lot of inspiration, though, including several gems that I thought could be adapted to work with my target age group of 4–6 year-olds. I ended up with an assortment of assessments that tap into a variety of methodologies: questions, interviews, activities, and observations.

three drawings of a robot, with the one on the left frowning, the middle one looking neutral, and the one on the right looking happy

We showed pictures of the robot to help kids choose an initial answer when asking some interview questions. These pictures were shown for the question, 'Let's pretend the robot didn't have any friends. Would the robot not mind or would the robot feel sad?'

We ask kids questions about how they think robots feel, trying to understand their perceptions of the robot as a social, relational agent. For example, one question was, "Does the robot really like you, or is the robot just pretending?" Another was, "Let's pretend the robot didn't have any friends. Would the robot not mind or would the robot feel sad?" For each question, we also ask kids to explain their answer, and whether they would feel the same way. This can reveal a lot about the criteria kids use to decide whether the robot has social, relational qualities: whether the robot has feelings, what actions it takes, the consequences of those actions, or moral rules. For example, one boy thought the robot really liked him "because I'm nice" (i.e., because of the child's attributes), while another girl said the robot liked her "because I told her a story" (i.e., because of actions the child took).

seven cards, each with a picture of a pair of increasingly overlapping circles on it

The set of circles used in our adapted Inclusion of Other in the Self task.

Some of these questions used pictorial response options, such as our adaptation of the Inclusion of Other in the Self scale. In this scale, kids are shown seven pairs of increasingly overlapping circles, and asked to point to the pair of circles that best shows their relationship with someone. We ask not only about the robot, but also about kids' parents, pets, best friends, and a bad guy in the movies. This lets us see how kids rate the robot in relation to other characters in their lives.

a girl sits at a table with paper and pictures of different robots and things

This girl is doing the Robot Sorting Task, in which she decides how much like a person each entity is and places each picture in an appropriate place along the line.

Another activity we created asks kids to sort a set of pictures of various entities along a line—entities such as a frog, a cat, a baby, a robot from a movie (like Baymax, WALL-e, or R2D2), a mechanical robot arm, Tega, and a computer. The line is anchored on one end with a picture of a human adult, and on the other with a picture of a table. We want to see not only where kids put Tega in relation to the other entities, but also what kids say as they sort them. Their explanations of why they place each entity where they do can reveal what qualities they consider important for being like a person: The ability to move? Talk? Think? Feel?

In the behavioral assessments, the robot or experimenter does something, and we observe what kids do in response. For example, when kids played with the robot, we had the robot disclose personal information, such as skills it was good or bad at, or how it felt about its appearance: "Did you know, I think I'm good at telling stories because I try hard to tell nice stories. I also think my blue fluffy hair is cool." Then the robot prompted for information disclosure in return. Because people tend to disclose more information, and more personal or sensitive information, to people to whom they feel closer, we listened to see whether kids disclosed anything to the robot: "I'm good at reading," "I can ride a bike," "My teacher says I'm bad at listening."

a fluffy red and blue tega robot with stickers stuck to its tummy

Tega sports several stickers given to it by one child.

Another activity looked at conflict and kids' tendency to share (like they might with another child). The experimenter holds out a handful of stickers and tells the child and robot that they can each have one. The child is allowed to pick a sticker first. The robot says, "Hey! I want that sticker!" We observe to see if the child says anything or spontaneously offers up their sticker to the robot. (Don't worry: If the child does give the robot the sticker, the experimenter fishes a duplicate sticker out of her pocket for the child.)

Using this variety of assessments—rather than using only questions or only observations—can give us more insight into how kids think and feel. We can see if what kids say aligns with what kids do. We can get at the same concepts and questions from multiple angles, which may give us a more accurate picture of kids' relationships and conceptualizations.

Through the process of searching for the assessments I needed, discovering that nothing quite right existed, and creating new ways of capturing kids' behaviors, feelings, and thoughts, the importance of assessment really hit home. Measurement and assessment are among the most important things I do in research. I could ask any number of questions and hypothesize any number of outcomes, but without performing an experiment and actually measuring something relevant to my questions, I would get no answers.

We've just published a conference paper on our first pilot study validating four of these assessments. The assessments were able to capture differences in children's relationships with a social robot, as expected, as well as how their relationships change over time. If you study relationships with young kids (or simply want to learn more), check it out!

This article originally appeared on the MIT Media Lab website, May 2018

Acknowledgments

The research I talk about in this post was only possible with help from multiple collaborators, most notably Cynthia Breazeal, Hae Won Park, Randi Williams, and Paul Harris.

This research was supported by a MIT Media Lab Learning Innovation Fellowship and by the National Science Foundation. Any opinions, findings and conclusions, or recommendations expressed in this article are those of the authors and do not represent the views of the NSF.



me wearing a red dress holding tega, a fluffy red and blue robot

Undervaluing hard work in grad school

"Wow, you're at MIT? You must be a genius!"

Um. Not sure how to answer that. Look down at my shoes. Nervous laugh.

"Uh, thanks?"

The random passerby who saw my MIT shirt and just had to comment on my presumed brilliance seems satisfied with my response. Perhaps the "awkward genius" trope played in my favor?

See, I'm no genius. And I'll let you in on a little secret: Most of us at MIT aren't inherent geniuses, gliding by on the strength of a vast, extraordinary intellect.

We're not born super smart. Instead, we do things the old-fashioned way: with copious amounts of caffeine, liberally applied elbow grease, and emphatic grunts of effort that would make a Cro-Magnon proud.

The reality on campus is not exactly the effortless, glamorous image the media likes to paint. You know, headlines like:

  • MIT physicists create unbelievable new space dimension!
  • MIT scientists discover that chocolate and coffee cure cancer!
  • MIT engineers fly to the moon in a ship they built out of carbon nanotubes and crystal lattices!
  • Look, it's MIT! Land of the Brilliant, the Inventive, the Brave!

The reality is more like the Land of the Confused, the Obstinate, and the "Let's try it again and see if maybe it works this time so we can get at least one significant result for a paper!"

Yes, I'm exaggerating a little. I have, after all, met a ton of amazing, brilliant people here, but they're amazing and brilliant because of their effort, curiosity, tenacity, and enthusiasm. Not their inherent genius. None of them are little cartoon figures with cartoon lightbulbs flashing around them like strobe lights as they are struck with amazing idea after amazing idea.

They're people like my labmate, who routinely shows up late to group meetings because he accidentally stays up all night trying to implement some cool machine learning algorithm he found in an obscure-but-possibly-relevant paper (eventually, I'm sure, the effort will pay off!).

They're people like my professors, who set aside entire days each week just for meeting with their students, to hash out ideas and go over paper drafts.

They're people like me, who spend 260% more effort than strictly necessary on making a child-robot interaction flow right, even though the study would probably be fine with subpar dialogue (for the curious: I work on fluffy robots that help kids learn stuff).

The reality is long hours in the library—reading papers, trying to understand what other people have already done and how it relates to my research—and long hours in the lab—trying to put that understanding to use (often learning in the process that I didn't really understand something after all and should probably do more reading).

I think MIT's reputation as being full of inherent geniuses gives many of us the short stick and fails to recognize the sheer amount of hard work and failure that goes into nearly every discovery and invention that's made. Sure, sometimes people get lucky.

There are certainly a few things that someone got right the first time, but let's be honest. The last time my Python code ran on the first try, I went looking for bugs anyway because that never happens (and I was right; hours later, there were still bugs aplenty). Likewise, the last time I got a really interesting experimental result, it was after months of thinking and re-planning, months of programming and testing on the robot, and months of wrangling participants in the lab. All the amazing insights that show up in the final paper draft only come after a lot of analysis, realizing the analysis missed something, rewriting all the R code to do the analysis right, and re-analyzing.

Think of it this way: if a PhD student has signed on to work in a lab for the next indefinite-but-hopefully-only-five-or-maybe-seven years (with a small stipend if they're lucky) and has no idea what magical, impactful dissertation topic will be their ticket out, they're probably already one of those people who likes a challenge. Maybe perseverance is their middle name.

And that's what I think being at MIT is actually about: Learning to fail, struggling to succeed, and knowing the value in the struggle.

The real "geniuses" I know are people who just want to know what's going on and are okay with doing a lot of hard work to find out.

They're people who keep asking "and then what? and then what?" after they learn something, and spend months or years chasing down answers. For example: "So I find that 5-year-olds mirror the robot's phrases when playing storytelling games with it, and learn more when they do—Why? What does this say about rapport and peer learning? What modulates this effect? What are the implications for educational technology more generally?"

They're people who dive wholeheartedly into each rabbit hole to see how far it goes and what useful tidbits of scientific knowledge can be gleaned along the way.

They're people who keep probing. Sometimes, that leads to dramatic headlines. More often, it doesn't.

This article originally appeared on the MIT Graduate Student Blog, February 2018

