Posts tagged "robots"

Note:

At present, I write here infrequently. You can find my current, regular blogging over at The Deliberate Owl.

a child puts her arm around a fluffy red and blue robot and grins

Relational Robots

My latest research in Cynthia Breazeal's Personal Robots Group has been on relational technology.

By relational, I mean technology that is designed to build and maintain long-term, social-emotional relationships with users. It's technology that's not just social—it's more than a digital assistant. It doesn't just answer your questions, tell jokes on command, or play music and adjust the lights. It collects data about you over time. It uses that data to personalize its behavior and responses to help you achieve long-term goals. It probably interacts using human social cues so as to be more understandable and relatable—and furthermore, in areas such as education and health, positive relationships (such as teacher-student or doctor-patient) are correlated with better outcomes. It might know your name. It might try to cheer you up if it detects that you're looking sad. It might refer to what you've done together in the past or talk about future activities with you.

Relational technology is new. Some digital assistants and personal home robots on the market have some features of relational technology, but not all the features. Relational technology is still a research idea more than a commercial product. Which means right now, before it's on the market, is the exact right time to talk about how we ought to design relational technology.

As part of my dissertation, I performed a three-month study with 49 kids aged 4-7 years. All the kids played conversation and storytelling games with a social robot. Half the kids played with a version of the robot that was relational, using all the features of relational technology to build and maintain a relationship and personalize to individual kids. It talked about its relationship with the child and disclosed personal information about itself; referenced shared experiences (such as stories told together); used the child's name; mirrored the child's affective expressions, posture, speaking rate, and volume; selected stories to tell based on appropriate syntactic difficulty and similarity of story content to the child's stories; and used appropriate backchanneling actions (such as smiles, nods, saying "uh huh!"). The other half of the kids played with a not-relational robot that was just as friendly and expressive, but without the special relational stuff.
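To make the story personalization concrete, here's a minimal sketch (not the actual study code) of the kind of selection step described above: score each candidate story by how close its syntactic difficulty is to the child's current level and by how much its content overlaps with the child's own stories, then pick the best match. The difficulty and similarity measures here are deliberately crude stand-ins for illustration only.

    # Hypothetical sketch of personalized story selection (illustrative only).

    def syntactic_difficulty(story: str) -> float:
        """Crude difficulty proxy: mean sentence length in words."""
        sentences = [s for s in story.replace("!", ".").replace("?", ".").split(".") if s.strip()]
        return sum(len(s.split()) for s in sentences) / max(len(sentences), 1)

    def content_similarity(story: str, child_stories: list[str]) -> float:
        """Crude content overlap: shared vocabulary between a candidate story
        and the stories the child has told so far."""
        story_words = set(story.lower().split())
        child_words = set(" ".join(child_stories).lower().split())
        return len(story_words & child_words) / max(len(story_words), 1)

    def pick_story(candidates: list[str], child_level: float, child_stories: list[str]) -> str:
        """Pick the candidate whose difficulty is closest to the child's level
        and whose content is most similar to the child's own stories."""
        def score(story: str) -> float:
            level_match = -abs(syntactic_difficulty(story) - child_level)
            return level_match + content_similarity(story, child_stories)
        return max(candidates, key=score)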

Besides finding some fascinating links between children's relationships with the robot, their perception of it as a social-relational agent, their mirroring of the robot's behaviors, and their language learning, I also found some surprises. One surprise was that we found gender differences in how kids interacted with the robot. In general, boys and girls treated the relational and not-relational robots differently.

Boys and girls treated the robots differently

Girls responded positively to the relational robot and less positively to the not-relational robot. This was the pattern I expected to see, since the relational robot was doing a lot more social stuff to build a relationship. I'd hypothesized that kids would like the relational robot more, feel closer to it, and treat it more socially. And that's what girls did. Girls generally rated the relational robot as more of a social-relational agent than the not-relational robot. They liked it more and felt closer to it. Girls often mirrored the relational robot's language more (we often mirror people more when we feel rapport with them), disclosed more information (we share more with people we're closer to), showed more positive emotions, and reported feeling more comfortable with the robot. They also showed stronger correlations between their scores on various relationship assessments and their vocabulary learning and vocabulary word use, suggesting that they learned more when they had a stronger relationship.

graph showing on the left, that kids in the not-relational condition didn't have as strong a correlation while in the relational condition, there was a stronger correlation - but that this varied by gender

Children who rated the robot as more of a social-relational agent also scored higher on the vocabulary posttest—but this trend was stronger for girls than for boys.
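As a rough illustration of the kind of analysis behind this figure (not the actual analysis code), here's how such a correlation might be computed with pandas and scipy, assuming a table with hypothetical columns for condition, gender, the child's rating of the robot as a social-relational agent, and the vocabulary posttest score:

    # Hedged sketch: correlate relationship ratings with vocabulary posttest scores,
    # broken out by condition and gender. Column names are hypothetical.
    import pandas as pd
    from scipy.stats import pearsonr

    def relationship_learning_correlations(df: pd.DataFrame) -> pd.DataFrame:
        """Pearson correlation between children's rating of the robot as a
        social-relational agent and their vocabulary posttest, per condition and gender."""
        rows = []
        for (condition, gender), group in df.groupby(["condition", "gender"]):
            if len(group) < 3:
                continue  # too few children in this cell to correlate
            r, p = pearsonr(group["social_agent_rating"], group["vocab_posttest"])
            rows.append({"condition": condition, "gender": gender,
                         "n": len(group), "r": r, "p": p})
        return pd.DataFrame(rows)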

Boys showed the opposite pattern. Contrary to my hypotheses, boys tended to like the relational robot less than the not-relational one. They felt less close to it, mirrored it less, disclosed less, showed more negative emotions, showed weaker correlations between their relationship and learning (but they did still learn—it just wasn't as strongly related to their relationship), and so forth. Boys also liked both robots less than girls did. This was the first time we'd seen this gender difference, even after observing 300+ kids in 8+ prior studies. What was going on here? Why did the boys in this study react so differently to the relational and not-relational robots?

I dug into the literature to learn more about gender differences. There's actually quite a bit of psychology research looking at how young girls and boys approach social relationships differently (e.g., see Joyce Benenson's awesome book Warriors and Worriers: The Survival of the Sexes). For example, girls tend to be more focused on individual relationships and tend to have fewer, closer friends. They tend to care about exchanging personal information and learning about others' relationships and status. Girls are often more likely to try to avoid conflict, more egalitarian than boys, and more competent at social problem solving.

Boys, on the other hand, often care more about being part of their peer group. They tend to be friends with a lot of other boys and are often less exclusive in their friendships. They frequently care more about understanding their skills relative to the skills other boys have, and care less about exchanging personal information or explicitly talking about their relationships.

Of course, these are broad generalizations about girls versus boys that may not apply to any particular individual child. But as generalizations, they were often consistent with the patterns I saw in my data. For example, the relational robot used a lot of behaviors that are more typical of girls than of boys, like explicitly sharing information about itself and talking about its relationship with the child. The not-relational robot used fewer actions like these. Plus, both robots may have talked and acted more like a girl than a boy, because its speech and behavior were designed by a woman (me), and its voice was recorded by a woman (shifted to a higher pitch to sound like a kid). We also had only women experimenters running the study, something that has varied more in prior studies.

I looked at kids' pronoun usage to see how they referred to the relational versus not-relational robot. There wasn't a big difference among girls; most of them used "he/his." Boys, however, were somewhat more likely to use "she/her." So one reason boys might've reacted less positively to the relational robot could be that they saw it as more of a girl, and they preferred to play with other boys.
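For readers curious what a pronoun tally like this might look like in practice, here's a tiny, hypothetical sketch that counts masculine versus feminine pronoun tokens in a session transcript. The actual coding considered only references to the robot; this simplification just counts pronoun words.

    # Hypothetical sketch: tally gendered pronouns in a child's transcript.
    from collections import Counter

    MASCULINE = {"he", "him", "his"}
    FEMININE = {"she", "her", "hers"}

    def pronoun_counts(transcript: str) -> Counter:
        """Count masculine and feminine pronoun tokens in a transcript."""
        tokens = [t.strip(".,!?").lower() for t in transcript.split()]
        counts = Counter()
        counts["masculine"] = sum(t in MASCULINE for t in tokens)
        counts["feminine"] = sum(t in FEMININE for t in tokens)
        return counts

    # Example: pronoun_counts("She told me her story and he liked it")
    # -> Counter({'feminine': 2, 'masculine': 1})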

We need to do follow-up work to examine whether any of these gender-related differences were actually causal factors. For example, would we see the same patterns if we explicitly introduced the robot as a boy versus as a girl, included more behaviors typically associated with boys, or had female versus male experimenters introduce the robot?

a child sits at a table that has a fluffy robot sitting on it

Designing Relational Technology

These data have interesting implications for how we design relational technology. First, the mere fact that we observed gender differences means we should probably start paying more attention to how we design the robot's gender and gendered behaviors. In our current culture and society, there are a range of behaviors that are generally associated with masculine versus feminine, male versus female, boys versus girls. Which means that if the robot acts like a stereotypical girl, even if you don't explicitly say that it is a girl, kids are probably going to treat it like a girl. Perhaps this might change if children are concurrently taught about gender and gender stereotypes, but there are a lot of open questions here and more research is needed.

One issue is that you may not need much stereotypical behavior in a robot to see an effect—back in 2009, Mikey Siegel performed a study in the Personal Robots Group that compared two voices for a humanoid robot. Study participants conversed with the robot, and then the robot solicited a donation. Just changing the voice from male to female affected how persuasive, credible, trustworthy, and engaging people found the robot. Men, for example, were more likely to donate money to the female robot, while women showed little preference. Most participants rated the opposite-sex robot as more credible, trustworthy, and engaging.

As I mentioned earlier, this was the first time we'd seen these gender differences in our studies with kids. Why now, but not in earlier work? Is the finding repeatable? A few other researchers have seen similar gender patterns in their work with virtual agents...but it's not clear yet why we see differences in some studies but not others.

What gender should technology have—if any?

Gendering technological entities isn't new. People frequently assign gender to relatively neutral technologies, like robot vacuum cleaners and robot dogs—not to mention their cars! In our studies, I've rarely seen kids not ascribe gender to our fluffy robots (and it has varied by study and by robot what gender they typically pick). Which raises the question of what gender a robot should be—if any? Should a robot use particular gender labels or exhibit particular gender cues?

This is largely a moral question. We may be able to design a robot that acts like a girl, or a boy, or some androgynous combination of both, and calls itself male, female, nonbinary, or any number of other things. We could study whether a girl robot, a boy robot, or some other robot might work better when helping kids with different kinds of activities. We could try personalizing the robot's gender or gendered behaviors to individual children's preferences for playmates.

But the moral question, regardless of whatever answers we might find regarding what works better in different scenarios or what kids prefer, is how we ought to design gender in robots. I don't have a solid recommendation on that—the answer depends on what you value. We don't all value the same things, and what we value may change in different situations. We also don't know yet whether children's preferences or biases are necessarily at odds with how we might think we should design gender in robots! (Again: More research needed!)

Personalizing robots, beyond gender

A robot's gender may not be a key factor for some kids. They may react more to whether the robot is introverted versus extraverted, or really into rockets versus horses. We could personalize other attributes of the robot, like aspects of personality (such as extraversion, openness, conscientiousness), the robot's "age" (e.g., is it more of a novice than the child or more advanced?), whether it uses humor in conversation, and any number of other things. Furthermore, I'd expect different kids to treat the same robot differently, and to frequently prefer different robot behaviors or personalities. After all, not all kids get along with all other kids!

However, there's not a lot of research yet exploring how to personalize an agent's personality, its styles of speech and behavior, or its gendered expressions of emotions and ideas to individuals. There's room to explore. We could draw on stereotypes and generalizations from psychological research and our own experiences about which kids other kids like playing with, how different kids (girls, boys, extroverts, etc.) express themselves and form friendships, or what kinds of stories and play boys or girls prefer (e.g., Joyce Benenson talks in her book Warriors and Worriers about how boys are more likely to include fighting enemies in their play, while girls are more likely to include nurturing activities).

We need to be careful, too, to consider whether making relational robots that provide more of what the child is comfortable with, more of what the child responds to best, more of the same, might in some cases be detrimental. Yes, a particular child may love stories about dinosaurs, battles, and knights in shining armor, but they may need to hear stories about friendship, gardening, and mammals in order to grow, learn, and develop. Children do need to be exposed to different ideas, different viewpoints, and different personalities with whom they must connect and resolve conflicts. Maybe robots shouldn't cater only to what a child likes best, but should also offer what invites them out of their comfort zone and promotes growth. Maybe a robot's assigned gender should not reinforce current cultural stereotypes.

Dealing with gender stereotypes

A related question is whether, given gender stereotypes, we could make a robot appear more neutral if we tried. While we know a lot about what behaviors are typically considered feminine or masculine, it's harder to say what would make a robot come across as neither a boy nor a girl. Some evidence suggests that girls and women are culturally "allowed" to display a wider range of behaviors and still be considered female than are boys, who are subject to stronger cultural rules about what "counts" as appropriate masculine behavior (Joyce Benenson talks about this in her book that I mentioned earlier). So this might mean that there's a narrower range of behaviors a robot could use to be perceived as more masculine... and raises more questions about gender labels and behavior. What's needed to "override" stereotypes? And is that different for boys versus girls?

One thing we could do is give the robot a backstory about its gender or lack thereof. The story the robot tells about itself could help change children's construal of the robot. But would a robot explicitly telling kids that it's nonbinary, or that it's a boy or a girl, or that it's just a robot and doesn't have a gender be enough to change how kids interact with it? The robot's behaviors—such as how it talks, what it talks about, and what emotions it expresses—may "override" the backstory. Or not. Or only for some kids. Or only for some combinations of gendered behaviors and robot claims about its gender. These are all open empirical questions that should be explored while keeping in mind the main moral question regarding what robots we ought to be designing in the first place.

Another thing to explore in relation to gender is the robot's use of relational behaviors. In my study, I saw that the robot's relational behaviors made a bigger difference for girls than for boys. Adding in all that relational stuff made girls a lot more likely to engage positively with the robot.

This isn't a totally new finding—earlier work from a number of researchers on kids' interactions with virtual agents and robots has found similar gender patterns. Girls frequently reacted more strongly to the psychological, social, and relational attributes of technological systems. Girls' affiliation, rapport, and relationship with an agent often affected their perception of the agent and their performance on tasks done with the agent more than for boys. This suggests that making a robot more social and relational might engage girls more, and lead to greater rapport, imitation, and learning. Of course, that might not work for all girls... and the next questions to ask are how we can also engage those girls, and what features the robot ought to have to better engage boys, too. How do we tune the robot's social-relational behavior to engage different people?

More to relationships than gender

There's also a lot more going on in any individual or in any relationship than gender! Things like shared interests and shared experiences build rapport and affiliation, regardless of the gender of those involved. When building and maintaining kids' engagement and attention during learning activities with the robots over time, there's a lot more than the robot's personality or gender that matters. Here are a few factors I think are especially helpful:

  • Personalization to individuals, e.g., choosing stories to tell with an appropriate linguistic/syntactic level,
  • Referencing shared experiences like stories told together and facts the child had shared, such as the child's favorite color,
  • Sharing backstory and setting expectations about the robot's history, capabilities, and limitations through conversation,
  • Using playful and creative story and learning activities, and
  • The robot's design from the ground up as a social agent—i.e., considering how to make the robot's facial expressions, movement, dialogue, and other behaviors understandable to humans.

Bottom line: People relate socially and relationally to technology

When it comes to the design of relational technology, the bottom line is that people seem to use the same social-relational mechanisms to understand and relate to technology that they use when dealing with other people. People react to social cues and personality. People assume gender (whether male, female, or some culturally acceptable third gender or in-between). People engage in dialogue and build shared narratives. The more social and relational the technology is, the more people treat it socially and relationally.

This means, when designing relational technology, we need to be aware of how people interact socially and how people typically develop relationships with other people, since that'll tell us a lot about how people might treat a social, relational robot. This will also vary across cultures with different cultural norms. We need to consider the ethical implications of our design decisions, whether that's considering how a robot's behavior might perpetuate undesirable gender stereotypes or challenge them in positive ways, or how to mitigate risks around emotional interaction, attachment, and social manipulation. Do our interactions with gendered robots change how we interact with people of different genders? (Some of these ethical concerns will be the topic of a later post. Stay tuned!)

We need to explicitly study people's social interactions and relationships with these technologies, like we've been doing in the Personal Robots group, because these technologies are not people, and there are going to be differences in how we respond to them—and this may influence how we interact with other people. Relational technologies have a unique power because they are social and relational. They can engage us and help us in new ways, and they can help us to interact with other people. In order to design them in effective, supportive, and ethical ways, we need to understand the myriad factors that affect our relationships with them—like children's gender.

This article originally appeared on the MIT Media Lab website, August 2019



a red and blue robot sits on a table

Tega sits at a school, ready to begin a storytelling activity with kids!

Last spring, you could find me every morning alternately sitting in a storage closet, a multipurpose meeting room, and a book nook beside our fluffy, red and blue striped robot Tega. Forty-nine different kids came to play storytelling and conversation games with Tega every week, eight times each over the course of the spring semester. I also administered pre- and post-assessments to find out what kids thought about the robot, what they had learned, and what their relationships with the robot were like.

Suffice to say, I spent a lot of time in that storage closet.

a child sits at a table that has a fluffy robot sitting on it

A child talks with the Tega robot.

Studying how kids learn with robots

The experiment I was running was, ostensibly, straightforward. I was exploring a theorized link between the relationship children formed with the robot and children's engagement and learning during the activities they did with the robot. This was the big final piece of my dissertation in the Personal Robots Group. My advisor, Cynthia Breazeal, and my committee, Rosalind Picard (also of the MIT Media Lab) and Paul Harris (Harvard Graduate School of Education), were excited to see how the experiment turned out, as were some of our other collaborators, like Dave DeSteno (Northeastern University), who have worked with us on quite a few social robot studies.

In some of those earlier studies, as I've talked about before, we've seen that the robot's social behaviors—like its nonverbal cues (such as gaze and posture), its social contingency (e.g., using appropriate social cues at the right times), and its expressivity (such as using an expressive voice versus a flat and boring one)—can affect how much kids learn, how engaged they are in learning activities, and their perception of the robot's credibility. Kids frequently treat the robot as something kind of like a friend and use a lot of social behaviors themselves—like hugging and talking; sharing stories; showing affection; taking turns; mirroring the robot's behaviors, emotions, and language; and learning from the robot like they learn from human peers.

Five years of looking at the impact of the robot's social behaviors hinted to me that there was probably more going on. Kids weren't just responding to the robot using appropriate social cues or being expressive and cute. They were responding to more stuff—relational stuff. Relational stuff is all that social behavior plus everything else that contributes to building and maintaining a relationship: interacting multiple times, changing in response to those interactions, referencing experiences shared together, being responsive, showing rapport (e.g., with mirroring and entrainment), and reciprocating behaviors (e.g., helping, sharing personal information or stories, providing companionship).

While the robots didn't do most of these things, whenever they used some (like being responsive or personalizing behavior), it often increased kids' learning, mirroring, and engagement.

So... what if the robot did use all those relational behaviors? Would that increase children's engagement and learning? Would children feel closer to the robot and perceive it as a more social, relational agent?

I created two versions of the robot. Half the kids played with the relational robot: the version that used all the social and relational behaviors listed above. For example, it mirrored kids' pitch and speaking rate. It mirrored some emotions. It tracked activities done together, like stories told, and referred to them in conversation later. It told personalized stories.

The other half of the kids played with the not-relational robot—it was just as friendly and expressive, but didn't do any of the special relational stuff.

Kids played with the robot every week. I measured their vocabulary learning and their relationships, looked at their language and mirroring of the robot, examined their emotions during the sessions, and more. From all this data, I got a decent sense of what kids thought about the two versions of the robot, and what kind of effects the relational stuff had.

In short: The relational stuff mattered.

Relationships and learning

Kids who played with the relational robot rated it as more human-like. They said they felt closer to it than kids who played with the not-relational robot, and disclosed more information (we tend to share more with people we're closer to). They were more likely to say goodbye to the robot (when we leave, we say goodbye to people, but not to things). They showed more positive emotions. They were more likely to say that playing with the robot was like playing with another child. They also were more confident that the robot remembered them, frequently referencing relational behaviors to explain their confidence.

All of this was evidence that the robot's relational behaviors affected kids' perceptions of it and kids' behavior with it in the expected ways. If a robot acted in more social and relational ways, kids viewed it as more social and relational.

Then I looked at kids' learning.

I found that kids who felt closer to the robot, rated it as more human-like, or treated it more socially (like saying goodbye) learned more words. They mirrored the robot's language more during their own storytelling. They told longer stories. All these correlations were stronger for kids who played with the relational robot—meaning, in effect, that kids who had a stronger relationship with the robot learned more and demonstrated more behaviors related to learning and rapport (like mirroring language). This was evidence for my hypotheses that the relationships kids form with peers contribute to their learning.
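One simple way to quantify this kind of language mirroring (a rough sketch, not the coding scheme actually used in the study) is to count how many of the robot's key words and phrases show up in the child's own retelling:

    # Hypothetical sketch: fraction of the robot's key words/phrases that a child
    # reused in their own retelling (0.0 = none, 1.0 = all).

    def emulation_score(robot_phrases: list[str], child_retelling: str) -> float:
        retelling = child_retelling.lower()
        reused = sum(phrase.lower() in retelling for phrase in robot_phrases)
        return reused / max(len(robot_phrases), 1)

    # Example with made-up target phrases:
    targets = ["enormous", "curious", "by the riverbank"]
    score = emulation_score(targets, "The curious frog sat by the riverbank.")
    # score == 2/3: the child reused "curious" and "by the riverbank".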

graph showing on the left, that kids in the not-relational condition didn't have as strong a correlation while in the relational condition, there was a stronger correlation - but that this varied by gender

Children who rated the robot as more of a social-relational agent also scored higher on the vocabulary posttest.

This was an exciting finding. There are plenty of theories about how kids learn from peers and how peers are really important to kids' learning (famous names in the subject include Piaget, Vygotsky, and Bandura), but there's not as much research looking at the mechanisms that influence peer learning. For example, I'd found research showing that kids' peers can positively affect their language learning... but not why they could. Digging into the literature further, I'd found one recent study linking learning to rapport, and several more showing links between an agent's social behavior and various learning-related emotions (like increased engagement or decreased frustration), but not learning specifically. I'd seen some work showing that social bonds between teachers and kids could predict academic performance—but that said nothing about peers.

In exploring my hypotheses about kids' relationships and learning, I also dug into some previously-collected data to see if there were any of the same connections. Long story short, there were. I found similar correlations between kids' vocabulary learning, emulation of the robot's language, and relationship measures (such as ratings of the robot as a social-relational agent and self-disclosure to the robot).

All in all, I found some pretty good evidence for my hypothesized links between kids' relationships and learning.

I also found some fascinating nuances in the data involving kids' gender and their perception of the robot, which I'll talk about in a later post. And, of course, whenever we talk about technology, ethical concerns abound, so I'll talk more about that in a later post, too.

This article originally appeared on the MIT Media Lab website, February 2019



A girl grins at a red and blue fluffy robot and puts her arm around it

Relational AI: Creating long-term interpersonal interaction, rapport, and relationships with social robots

Children today are growing up with a wide range of Internet of Things devices, digital assistants, personal home robots for education, health, and security, and more. With so many AI-enabled socially interactive technologies entering everyday life, we need to deeply understand how these technologies affect us—such as how we respond to them, how we conceptualize them, what kinds of relationships we form with them, the long-term consequences of use, and how to mitigate ethical concerns (of which there are many).

In my dissertation, I explored some of these questions through the lens of children's interactions and relationships with social robots that acted as language learning companions.

Many of the other projects I worked on at the MIT Media Lab explored how we could use social robots as a technology to support young children's early language development. When I turned to relational AI, instead of focusing simply on how to make social robots effective as educational tools, I delved into why they are effective—as well as the ethical, social, and societal implications of bringing social-relational technology into children's lives.

Here is a précis of my dissertation. (Or read the whole thing!)

a girl looks at the dragonbot robot as it tells a story

Exploring children's relationships with peer-like social robots

In earlier projects in the Personal Robots Group, we had found evidence that children can learn language skills with social robots—and the robot's social behaviors seemed to be a key piece of why children responded so well! One key strategy children used to learn with the robots was social emulation—i.e., copying or mirroring the behaviors used by the robot, such as speech patterns, words, even curiosity and a growth mindset.

My hunch, and my key hypothesis, was this: Social robots can benefit children because they can be social and relational. They can tap into our human capacity to build and respond to relationships. Relational technology, thus, is technology that can build long-term, social-emotional relationships with users.

I took a new look at data I'd collected during my master's thesis to see if there was any evidence for my hypothesis. Spoiler: There was. Children's emulation of the robot's language during the storytelling activity appeared to be related both to children's rapport with the robot and their learning.

Assessing children's relationships

Because I wanted to measure children's relationships with the robot and gain an understanding of how children treated it relative to other characters in their lives, I created a bunch of assessments. Here's a summary of a few of them.

We used some of these in another longitudinal learning study where kids listened to and retold stories with a social robot. I found correlations between measures of engagement, learning, and relationships. For example, children who reported a stronger relationship or rated the robot as more of a social-relational agent showed higher vocabulary posttest scores. These were promising results...

So, armed with my assessments and hypotheses, I ran some more experimental studies.

a boy sits across a table from a red and blue robot

Evaluating relational AI: Entrainment and Backstory

First, I performed a one-session experiment that explored whether enabling a social robot to use two rapport- and relationship-building behaviors, entrainment and self-disclosure (backstory), would increase children's engagement and learning.

In positive human-human relationships, people frequently mirror or mimic each other's behavior. This mimicry (also called entrainment) is associated with rapport and smoother social interaction. I gave the robot a speech entrainment module, which matched vocal features of the robot's speech, such as speaking rate and volume, to the user's.
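Here's a simplified sketch of the entrainment idea, assuming the child's speaking rate and volume have already been measured from their most recent utterance. The real module worked on recorded audio and more vocal features; the parameter names and values below are made up for illustration.

    # Hypothetical sketch: nudge the robot's speech parameters toward the child's.
    from dataclasses import dataclass

    @dataclass
    class SpeechParams:
        rate: float    # e.g., words per second
        volume: float  # e.g., normalized 0.0-1.0

    def entrain(robot: SpeechParams, child: SpeechParams, strength: float = 0.5) -> SpeechParams:
        """Move the robot's speaking rate and volume partway toward the child's values."""
        return SpeechParams(
            rate=robot.rate + strength * (child.rate - robot.rate),
            volume=robot.volume + strength * (child.volume - robot.volume),
        )

    # Example: a robot speaking at 2.5 words/sec meets a child speaking at 3.1 words/sec.
    updated = entrain(SpeechParams(rate=2.5, volume=0.6), SpeechParams(rate=3.1, volume=0.8))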

I also had the robot disclose personal information about its poor speech and hearing abilities, in the form of a backstory.

Eighty-six kids played with the robot in a 2x2 study (entrainment vs. no entrainment and backstory vs. no backstory). The robot engaged the children one-on-one in conversation, told a story embedded with key vocabulary words, and asked children to retell the story.

I measured children's recall of the key words and their emotions during the interaction, examined their story retellings, and asked children questions about their relationship with the robot.

I found that the robot's entrainment led children to show more positive emotions and fewer negative emotions. Children who heard the robot's backstory were more likely to accept the robot's poor hearing abilities. Entrainment paired with backstory led children to emulate more of the robot's speech in their stories; these children were also more likely to comply with one of the robot's requests.

In short, the robot's speech entrainment and backstory appeared to increase children's engagement and enjoyment in the interaction, improve their perception of the relationship, and contribute to children's success at retelling the story.

A girl smiles at a red and blue fluffy robot

Evaluating relational AI: Relationships through time

My goals in the final study were twofold. First, I wanted to understand how children think about social robots as relational agents in learning contexts, especially over multiple encounters. Second, I wanted to see how adding relational capabilities to a social robot would impact children's learning, engagement, and relationship with the robot.

Long-term study

Would children who played with a relational robot show greater rapport, a closer relationship, increased learning, greater engagement, more positive affect, more peer mirroring, and treat the robot as more of a social other than children who played with a non-relational robot? Would children who reported feeling closer to the robot (regardless of condition) show more learning and peer mirroring?

In this study, 50 kids played with either a relational or a not-relational robot. The relational robot was situated as a socially contingent agent, using entrainment and affect mirroring; it referenced shared experiences, such as past activities performed together, and used the child's name; it took specific actions to manage the relationship; and it told stories personalized in both level (i.e., syntactic difficulty) and content (i.e., similarity of the robot's stories to the child's).

The not-relational robot did not use these features; it simply followed its script. It did still personalize stories based on level, since this is beneficial but not specifically related to the relationship.

Each child participated in a pretest session; eight sessions with the robot, each of which included a pretest, the robot interaction (greeting, conversation, story activity, and closing), and a posttest; and a final posttest session.

graph showing that children who rated robot as more social and relational also showed more learning

Results: Relationships, learning, and ... gender?

I collected a unique dataset about children's relationships with a social robot over time, which enabled me to look beyond whether children liked the robot or not or whether they learned new words or not. The main findings include:

  • Children in the relational condition reported that the robot was a more human-like, social, relational agent and responded to it in more social and relational ways. They often showed more positive affect, disclosed more information over time, and reported becoming more accepting of both the robot and other children with disabilities.

  • Children in the relational condition showed stronger correlations between their scores on the relationship assessments and their learning and behavior, such as their vocabulary posttest scores, emulation of the robot's language during storytelling, and use of target vocabulary words.

  • Regardless of condition, children who rated the robot as a more social and relational agent were more likely to treat it as such, and they also showed more learning.

  • Children's behavior showed that they thought of the robot and their relationship with it differently than their relationships with their parents, friends, and pets. They appeared to understand that the robot was an "in between" entity that had some properties of both alive, animate beings and inanimate machines.

The results of the study provide evidence for links between children's imitation of the robot during storytelling, their affect and valence, and their construal of the robot as a social-relational other. A large part of the power of social robots seems to come from their social presence.

In addition, children's behavior depended on both the robot's behavior and their own personalities and inclinations. Girls and boys seemed to imitate, interact, and respond differently to the relational and non-relational robots. Gender may be something to pay attention to in future work!

Ethics, design, and implications

I include several chapters in my dissertation discussing the design implications, ethical implications, and theoretical implications of my work.

Because of the power social and relational interaction has for humans, relational AI has the potential to engage and empower not only children across many domains—such as education, therapy, and long-term pediatric health support—but also other populations: older children, adults, and the elderly. We can and should use relational AI to help all people flourish, to augment and support human relationships, and to enable people to be happier, healthier, more educated, and more able to lead the lives they want to live.

Further reading


Publications

  • Kory-Westlund, J. M. (2019). Relational AI: Creating Long-Term Interpersonal Interaction, Rapport, and Relationships with Social Robots. PhD Thesis, Media Arts and Sciences, Massachusetts Institute of Technology, Cambridge, MA. [PDF]

  • Kory-Westlund, J. M., & Breazeal, C. (2019). A Long-Term Study of Young Children's Rapport, Social Emulation, and Language Learning With a Peer-Like Robot Playmate in Preschool. Frontiers in Robotics and AI, 6. [PDF] [online]

  • Kory-Westlund, J. M., & Breazeal, C. (2019). Exploring the effects of a social robot's speech entrainment and backstory on young children's emotion, rapport, relationships, and learning. Frontiers in Robotics and AI, 6. [PDF] [online]

  • Kory-Westlund, J. M., & Breazeal, C. (2019). Assessing Children's Perception and Acceptance of a Social Robot. Proceedings of the 18th ACM Interaction Design and Children Conference (IDC) (pp. 38-50). ACM: New York, NY. [PDF]

  • Kory-Westlund, J. M., Park, H. W., Williams, R., & Breazeal, C. (2018). Measuring Young Children's Long-term Relationships with Social Robots. Proceedings of the 17th ACM Interaction Design and Children Conference (IDC) (pp. 207-218). ACM: New York, NY. [talk] [PDF]

  • Kory-Westlund, J. M., Park, H. W., Williams, R., & Breazeal, C. (2017). Measuring Children's Long-term Relationships with Social Robots. Workshop on Perception and Interaction Dynamics in Child-Robot Interaction, held in conjunction with Robotics: Science and Systems XIII (pp. 625-626). [Workshop website] [PDF]



Exploring how the relational features of robots impact children's engagement and learning

One challenge I've faced in my research is assessment. That's because some of the stuff I'd like to measure is hard to measure—namely, kids' relationships with robots.

a child puts her arm around a fluffy red and blue robot and grins

During one study, the Tega robot asked kids to take a photo with it so it could remember them. We gave each kid a copy of their photo at the end of the study as a keepsake.

I study kids, learning, and how we can use social robots to help kids learn. The social robots I've worked with are fluffy, animated characters that are more akin to Disney sidekicks than to vacuum cleaners—Tega, and its predecessor, DragonBot. Both robots use Android phones to display an animated face; they squash and stretch as they move; and they can play back sounds and respond to a variety of sensors.

In my work so far, I've found evidence that the social behaviors of the robot—such as its nonverbal behavior (e.g., gaze and posture), social contingency (e.g., performing the right social behaviors at the right times), and expressivity (such as using a very expressive voice versus a flat/boring one)—significantly impact how much kids learn, how engaged they are in the learning activities, and how credible they think the robot is.

I've also seen kids treat the robot as something kind of like a friend. As I've talked about before, kids treat the robot as something in between a pet, a tutor, and a technology. They show many social behaviors with robots—hugging, talking, tickling, giving presents, sharing stories, inviting it to picnics—and they also show understanding that the robot can turn off and needs battery power to turn back on. In some of our studies, we've asked kids questions about the properties of the robot: Can it think? Can it break? Does it feel tickles? Kids' answers show that they understand that the robot is a technological, human-made entity, but also that it shares properties with animate agents.

In many of our studies, we've deliberately tried to situate the robot as a peer. After all, one key way that children learn is through observing, cooperating with, and being in conflict with their peers. Putting the cute, fluffy robot in a peer-like role seemed natural. And over the past six years, I've seen kids mirror robots' behaviors and language use, learning from them the same way they learn from peers.

I began to wonder about the impact of the relational features of the robot on children's engagement and learning: that is, the stuff about the robot that influences children's relationships with the robot. These relational features include the social behaviors we have been investigating, as well as others: mirroring, entrainment, personalization, change over time in response to the interaction, references to a shared narrative, and more. Some teachers I've talked to have said that it's their relationship with their students that really matters in helping kids learn—what if the same was true with robots?

My hunch—one I'm exploring in my dissertation right now via a 12-week study at Boston-area schools—is that yes: kids' relationships with the robot do matter for learning.

But how do you measure that?

I dug into the literature. As it turns out, psychologists have observed and interviewed children, their parents, and their teachers about kids' peer relationships and friendship quality. There are also scales and questionnaires for assessing adults' relationships, personal space, empathy, and closeness to others.

I ran into two main problems. First, all of the work with kids involved assumptions about peer interactions that didn't hold with the robot. For example, several observation-based methodologies assumed that kids would be freely associating with other kids in a classroom. Frequency of contact and exclusivity were two variables they coded for (higher frequency and more exclusive contact meant the kids were more likely to be friends). Nope: Due to the setup of our experimental studies, kids only had the option of doing a fairly structured activity with the robot once a week, at specific times of the day.

The next problem was that all the work with adults assumed that the experimental subjects would be able to read. As you might imagine, five-year-olds aren't prime candidates for filling out written questionnaires full of "how do you feel about X, Y, or Z on a 1-5 scale." These kids are still working on language comprehension and self-reflection skills.

I found a lot of inspiration, though, including several gems that I thought could be adapted to work with my target age group of 4–6 year-olds. I ended up with an assortment of assessments that tap into a variety of methodologies: questions, interviews, activities, and observations.

three drawings of a robot, with the one on the left frowning, the middle one looking neutral, and the one on the right looking happy

We showed pictures of the robot to help kids choose an initial answer when asking some interview questions. These pictures were shown for the question, 'Let's pretend the robot didn't have any friends. Would the robot not mind or would the robot feel sad?'

We ask kids questions about how they think robots feel, trying to understand their perceptions of the robot as a social, relational agent. For example, one question was, "Does the robot really like you, or is the robot just pretending?" Another was, "Let's pretend the robot didn't have any friends. Would the robot not mind or would the robot feel sad?" For each question, we also ask kids to explain their answer, and whether they would feel the same way. This can reveal a lot about what criteria they use to determine whether the robot has social, relational qualities, such as having feelings, actions the robot takes, consequences of actions, or moral rules. For example, one boy thought the robot really liked him "because I'm nice" (i.e., because of the child's attributes), while another girl said the robot liked her "because I told her a story" (i.e., because of actions the child took).

seven cards, each with a picture of a pair of increasingly overlapping circles on it

The set of circles used in our adapted Inclusion of Other in the Self task.

Some of these questions used pictorial response options, such as our adaptation of the Inclusion of Other in the Self scale. In this scale, kids are shown seven pairs of increasingly overlapping circles, and asked to point to the pair of circles that best shows their relationship with someone. We ask not only about the robot, but also about kids' parents, pets, best friends, and a bad guy in the movies. This lets us see how kids rate the robot in relation to other characters in their lives.
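For a sense of how responses to this task can be used, here's a tiny, hypothetical example: each answer is recorded as the index (1 to 7) of the circle pair the child points to, with more overlap meaning more closeness, and the characters can then be ranked for each child. The scores below are made-up illustrative values, not study data.

    # Hypothetical sketch: recording and ranking one child's responses to the
    # adapted Inclusion of Other in the Self task (1 = no overlap, 7 = most overlap).
    ios_scores = {
        "robot": 5,
        "parent": 7,
        "pet": 6,
        "best friend": 6,
        "bad guy in the movies": 1,
    }

    # Rank the characters from closest to least close for this child.
    ranked = sorted(ios_scores.items(), key=lambda kv: kv[1], reverse=True)
    for character, score in ranked:
        print(f"{character}: {score}/7")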

a girl sits at a table with paper and pictures of different robots and things

This girl is doing the Robot Sorting Task, in which she decides how much like a person each entity is and places each picture in an appropriate place along the line.

Another activity we created asks kids to sort a set of pictures of various entities along a line—entities such as a frog, a cat, a baby, a robot from a movie (like Baymax, WALL-e, or R2D2), a mechanical robot arm, Tega, and a computer. The line is anchored on one end with a picture of a human adult, and on the other with a picture of a table. We want to see not only where kids put Tega in relation to the other entities, but also what kids say as they sort them. Their explanations of why they place each entity where they do can reveal what qualities they consider important for being like a person: The ability to move? Talk? Think? Feel?

In the behavioral assessments, the robot or experimenter does something, and we observe what kids do in response. For example, when kids played with the robot, we had the robot disclose personal information, such as skills it was good or bad at, or how it felt about its appearance: "Did you know, I think I'm good at telling stories because I try hard to tell nice stories. I also think my blue fluffy hair is cool." Then the robot prompted for information disclosure in return. Because people tend to disclose more information, and more personal or sensitive information, to people to whom they feel closer, we listened to see whether kids disclosed anything to the robot: "I'm good at reading," "I can ride a bike," "My teacher says I'm bad at listening."

a fluffy red and blue tega robot with stickers stuck to its tummy

Tega sports several stickers given to it by one child.

Another activity looked at conflict and kids' tendency to share (like they might with another child). The experimenter holds out a handful of stickers and tells the child and robot that they can each have one. The child is allowed to pick a sticker first. The robot says, "Hey! I want that sticker!" We observe to see if the child says anything or spontaneously offers up their sticker to the robot. (Don't worry: If the child does give the robot the sticker, the experimenter fishes a duplicate sticker out of her pocket for the child.)

Using this variety of assessments—rather than using only questions or only observations—can give us more insight into how kids think and feel. We can see if what kids say aligns with what kids do. We can get at the same concepts and questions from multiple angles, which may give us a more accurate picture of kids' relationships and conceptualizations.

Through the process of searching for the assessments I needed, discovering that nothing quite right existed, and creating new ways of capturing kids' behaviors, feelings, and thoughts, the importance of assessment really hit home. Measurement and assessment are among the most important things I do in research. I could ask any number of questions, hypothesize any number of outcomes, but without performing an experiment and actually measuring something relevant to my questions, I would get no answers.

We've just published a conference paper on our first pilot study validating four of these assessments. The assessments were able to capture differences in children's relationships with a social robot, as expected, as well as how their relationships change over time. If you study relationships with young kids (or simply want to learn more), check it out!

This article originally appeared on the MIT Media Lab website, May 2018

Acknowledgments

The research I talk about in this post was only possible with help from multiple collaborators, most notably Cynthia Breazeal, Hae Won Park, Randi Williams, and Paul Harris.

This research was supported by an MIT Media Lab Learning Innovation Fellowship and by the National Science Foundation. Any opinions, findings and conclusions, or recommendations expressed in this article are those of the authors and do not represent the views of the NSF.

