Posts tagged "learning"

Note:

At present, I write here infrequently. You can find my current, regular blogging over at The Deliberate Owl.

silhouette of a person with arms outstretched on wintery day, in front of bare-limbed trees and a dim sunset sky

Some personal news: I have a book deal!

I'm writing a pragmatic, up-to-date guide to thriving in graduate school while keeping a healthy personal life, filled with sensible suggestions, concrete exercises, and detailed resource lists.

Tentatively titled #PhDone: How to Get Through Grad School Without Leaving the Rest of Your Life Behind, it'll be published by Columbia University Press in spring 2023 (title and date to be finalized later!). I'm represented by Joe Perry.

From my proposal:

Every year, more than 500,000 people start graduate programs. Although more than half of these students are women, there's no book out there explaining how to balance breastfeeding with benchwork, or childcare with conference travel. Grad students today are on average 33 years old ... so why aren't we talking about managing marriage and a thesis, saving for retirement, or the fact that nearly 57% of students are also employed outside of school? Not only that, but of the 50,000 students who complete PhDs each year, a shrinking number collect coveted tenure-track positions ... even though everyone's still being trained as if they're all professors-to-be.

There's a serious mismatch between the advice about grad school that's currently available and our present reality. It's time to fix that.

I'm excited about this book. It's the book I wish I'd been able to read when I started grad school.

A long game

This project is years in the making. I spent months crafting a book proposal. I submitted to agents for a year before landing on the right fit. Then it took us over a year to find the right publisher.

Many people would have become discouraged even part of the way through this process. Some may have given up entirely. Others may have switched to self-publishing, deciding that getting their work out faster would be worth the upfront costs—and for some, it would be.

But I went in knowing that publishing is a long game. Getting your writing out into the world takes time: to submit, resubmit, get reviews, revise, revise again. I don't want to be my own publisher; I want to write and have a team working with me on editing, publishing, marketing, etc.

Next steps for the book

Now that the book's been picked up by Columbia University Press, I have a deadline—which is exciting! I like knowing when my deadline is. That way, I can plan backwards and ensure I'm working enough up front, incrementally, so that I never run into crunch time. And yes, I've already made a spreadsheet to track my progress and keep tabs on book-related tasks.

While the full book timeline is approximate at this stage, the next steps are:

  • I write the book. I have a couple chapters drafted already, with outlines and notes for the rest. That's an interesting thing about nonfiction books—they're generally sold on proposal and not from a finished manuscript.
  • My editor at CUP reads it. I revise as needed.
  • Once the manuscript is finished, time to print is less than a year. In that time, the publishing team works their magic: formatting, cover design, cover copy, production, sales and distribution work, etc. We ramp up marketing for the book.
  • Then you can buy it!

I'll post updates along the way!

* This post first appeared on The Deliberate Owl.



river rocks partly submerged in still water

Revise and resubmit the paper... again??

One morning, in my second year of grad school, I opened my email to find a note from Sidney, a professor I'd worked for a while back:

"Reopening this old thread ... Someone requested the paper ... looking through it again I thought it was a damn good paper. We should definitely resubmit. What do you think?"

What did I think? After rejections from several journals two years prior, and over 30 revisions (I lost count), thinking about that particular unpublished paper made me feel tired. I'd finally given up on it as a lost cause. Its fate was to forever be one of those learning experiences that was probably valuable, but ultimately showed no tangible result and felt like a waste of time.

I tried ignoring the email while I drank a cup of tea and tended to the rest of my inbox. It nagged.

Really? A damn good paper? Maybe revising it again and resubmitting wouldn't be so bad. Sure, every round of reviewers had their own ideas of where any given paper should go and would nitpick different things, but we had already fixed so many minor errors and clarified potential points of confusion… I liked the idea of having something to show for all my effort on the paper so far. Or was this line of reasoning an instance of the sunk costs fallacy? (That is: I'd put so much work in so far, I should put in more work instead of cutting my losses.)

The First Draft

I had gotten a year-long job as a research intern straight out of college. I'd enjoyed my undergrad research experiences and liked the idea of getting more experience while applying to grad school. So I joined Sidney's lab. I shadowed his grad students, worked on odd bits of many different projects, ran participants through experimental studies, and learned about research at the intersection of psychology and computer science.

One day, Sidney handed me the project that became the damn good paper. He and a colleague had an algorithm and some software that their labs had used to track human body motion in a couple studies. He wanted to verify that the software worked as intended—i.e., that it tracked body motion from video in a way comparable to some other sensors. So, the plan was to collect some clean data with a camera and those other sensors. Compare the output. Run the software on a couple existing datasets that had captured body motion in video and with other sensors. Write it up, cite the paper whenever he used the software in future projects, open source the software so other folks could use it, too.

Sidney was a powerhouse writer. This was my first proper academic paper. He gave me the reins of the project and said he'd check in later.

In retrospect, having supervised a number of undergrad research assistants during my PhD, the project was a classic "give it to the student who wants experience, I don't have time for it, but like the idea of it being finished someday" project. (I have a growing list of these…) It was a good bet on Sidney's part—I took ownership. I wanted to learn how to put together a good paper.

I collected the data. We talked over an outline, and I started writing. We went back and forth on drafts a dozen times. Sidney picked a journal to submit to and sent me a couple cover letters as examples. When we got reviews back, he explained how they weren't so bad (they looked bad), and gave me some example revision response letters for when I revised the paper and drafted a reply.

But it took a while. The reviewers weren't happy. Ultimately, they rejected it. The next journal was a desk reject. And so on. Eventually, when I left Sidney's lab and started my PhD program at MIT, I left the paper, too.

The Words Aren't Right Yet

Thanks to Twitter, around the time I got Sidney's email asking about resubmission, I found myself reading the blog of science fiction and fantasy author Kameron Hurley. I enjoy her books in large part because of the gruesome realism about life and survival: characters who make it to the end of a book alive are the ones who are winning.

In one post, Kameron Hurley wrote about her experiences as a professional copywriter. She wrote words for other people for a living. She talked about a manager trying to "gently" give her feedback from a client, to which she replied: Don't mince words. Give it to me. If the words are wrong, write them until they are the right words. It was literally her job to make the words right for that client. If they weren't right, they needed revision. She needed the client's hard-hitting feedback.

Her attitude toward writing was inspirational. Her post reminded me that the words on the page aren't me. They're just one attempt at communicating an idea through the imperfect and difficult medium of language. If that communication attempt fails, we are given the opportunity to try again. As Hurley put it, "You write until the words are the right ones."

If we care about communicating our ideas, then the revision process can be a conversation. The goal is to make the writing better. The goal is to improve the presentation of ideas. The goal is to make the words right.

Writing isn't a one-time action. It's not like baking a cake—mix the ingredients, pop it in the oven, and it's done. Writing is a process. Editing is part of that process.

Reviewer feedback, like any other feedback, is aimed at making the writing better—and like any other feedback, it may need to be taken with a grain of salt. There are myriad ways to present ideas. People encounter ideas from where they are at; they may need different amounts of detail or supporting information to understand your words. And that's okay. Learning to judge your audience is a skill that takes practice, too.

Revision and Resubmission

I revised the paper for what felt like the millionth time. This time, though, it wasn't as bad as I had feared. In fact, the two years that had passed had lent me much-needed distance from the paper. As I re-read the reviewer comments from our last rejection, all the comments felt addressable. I could see where the words weren't right.

My co-authors commented and gave feedback. I revised the paper more. We submitted it to a new journal. Major revisions. We resubmitted. Major revisions. We resubmitted. Finally, the words were almost right: Minor revisions. And then it was published.

It's not the paper I'm most proud of, but it is a paper that taught me more than most. When I look at work I have in progress now—like a paper that's now on its 15th+ version, second journal, fifth year of work—I try to remember that academic publishing is often a long process. I try to remember that if the words aren't right yet, then with more time, effort, practice, and feedback, I can get a little closer to making the words right. Even a paper I'd initially given up on could be vanquished.

This article originally appeared on the Resilience in Academic Writing Blog, March 29, 2020.



a child puts her arm around a fluffy red and blue robot and grins

Relational Robots

My latest research in Cynthia Breazeal's Personal Robots Group has been on relational technology.

By relational, I mean technology that is designed to build and maintain long-term, social-emotional relationships with users. It's technology that's not just social—it's more than a digital assistant. It doesn't just answer your questions, tell jokes on command, or play music and adjust the lights. It collects data about you over time. It uses that data to personalize its behavior and responses to help you achieve long-term goals. It probably interacts using human social cues so as to be more understandable and relatable; after all, in areas such as education and health, positive relationships (such as teacher-student or doctor-patient) are correlated with better outcomes. It might know your name. It might try to cheer you up if it detects that you're looking sad. It might refer to what you've done together in the past or talk about future activities with you.
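
As a concrete (and purely hypothetical) illustration, here's a minimal sketch of the kind of per-user model a relational agent might accumulate and draw on. It's not any real system's code; the field names and greeting logic are made up for illustration.

    # A minimal, illustrative sketch (not any real system's code): the kind of
    # per-user model a relational agent might build up over time and use to
    # personalize its behavior. All field names here are hypothetical.
    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class UserModel:
        name: str
        long_term_goal: str = ""
        shared_activities: List[str] = field(default_factory=list)
        last_observed_mood: str = "neutral"

        def greeting(self) -> str:
            """Personalize a greeting using what the agent remembers."""
            if self.shared_activities:
                return (f"Hi {self.name}! Last time we {self.shared_activities[-1]}. "
                        "Want to pick up where we left off?")
            return f"Hi {self.name}! Nice to meet you."

    model = UserModel(name="Avery", long_term_goal="learn new words")
    model.shared_activities.append("told a story about a dragon")
    print(model.greeting())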

Relational technology is new. Some digital assistants and personal home robots on the market have some features of relational technology, but not all the features. Relational technology is still a research idea more than a commercial product. Which means right now, before it's on the market, is the exact right time to talk about how we ought to design relational technology.

As part of my dissertation, I performed a three-month study with 49 kids aged 4-7 years. All the kids played conversation and storytelling games with a social robot. Half the kids played with a version of the robot that was relational, using all the features of relational technology to build and maintain a relationship and personalize to individual kids. It talked about its relationship with the child and disclosed personal information about itself; referenced shared experiences (such as stories told together); used the child's name; mirrored the child's affective expressions, posture, speaking rate, and volume; selected stories to tell based on appropriate syntactic difficulty and similarity of story content to the child's stories; and used appropriate backchanneling actions (such as smiles, nods, saying "uh huh!"). The other half of the kids played with a not-relational robot that was just as friendly and expressive, but without the special relational stuff.
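
For the technically curious, here's a toy sketch of the kind of story selection the relational robot did, using stand-in measures (mean sentence length as a proxy for syntactic difficulty, word overlap as a proxy for content similarity). It is not the actual algorithm from the study.

    # A toy sketch, not the study's actual algorithm: pick the candidate story
    # whose difficulty (proxied by mean sentence length) is closest to the
    # child's and whose content (proxied by word overlap) is most similar.
    def mean_sentence_length(text: str) -> float:
        sentences = [s for s in text.replace("!", ".").replace("?", ".").split(".") if s.strip()]
        return sum(len(s.split()) for s in sentences) / max(len(sentences), 1)

    def word_overlap(a: str, b: str) -> float:
        words_a, words_b = set(a.lower().split()), set(b.lower().split())
        return len(words_a & words_b) / max(len(words_a | words_b), 1)

    def pick_story(child_story: str, candidates: list[str]) -> str:
        child_level = mean_sentence_length(child_story)
        def score(story: str) -> float:
            level_gap = abs(mean_sentence_length(story) - child_level)
            return word_overlap(story, child_story) - 0.1 * level_gap
        return max(candidates, key=score)

    print(pick_story("A dog found a bone. The dog was happy.",
                     ["A cat sat quietly.", "A happy dog dug up a big bone in the yard."]))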

Besides finding some fascinating links between children's relationships with the robot, their perception of it as a social-relational agent, their mirroring of the robot's behaviors, and their language learning, I also found some surprises. One surprise was that we found gender differences in how kids interacted with the robot. In general, boys and girls treated the relational and not-relational robots differently.

Boys and girls treated the robots differently

Girls responded positively to the relational robot and less positively to the not-relational robot. This was the pattern I expected to see, since the relational robot was doing a lot more social stuff to build a relationship. I'd hypothesized that kids would like the relational robot more, feel closer to it, and treat it more socially. And that's what girls did. Girls generally rated the relational robot as more of a social-relational agent than the not-relational robot. They liked it more and felt closer to it. Girls often mirrored the relational robot's language more (we often mirror people more when we feel rapport with them), disclosed more information (we share more with people we're closer to), showed more positive emotions, and reported feeling more comfortable with the robot. They also showed stronger correlations between their scores on various relationship assessments and their vocabulary learning and vocabulary word use, suggesting that they learned more when they had a stronger relationship.

graph showing on the left, that kids in the not-relational condition didn't have as strong a correlation while in the relational condition, there was a stronger correlation - but that this varied by gender

Children who rated the robot as more of a social-relational agent also scored higher on the vocabulary posttest—but this trend was stronger for girls than for boys.

Boys showed the opposite pattern. Contrary to my hypotheses, boys tended to like the relational robot less than the not-relational one. They felt less close to it, mirrored it less, disclosed less, showed more negative emotions, showed weaker correlations between their relationship and learning (but they did still learn—it just wasn't as strongly related to their relationship), and so forth. Boys also liked both robots less than girls did. This was the first time we'd seen this gender difference, even after observing 300+ kids in 8+ prior studies. What was going on here? Why did the boys in this study react so differently to the relational and not-relational robots?

I dug into the literature to learn more about gender differences. There's actually quite a bit of psychology research looking at how young girls and boys approach social relationships differently (e.g., see Joyce Benenson's awesome book Warriors and Worriers: The Survival of the Sexes). For example, girls tend to be more focused on individual relationships and tend to have fewer, closer friends. They tend to care about exchanging personal information and learning about others' relationships and status. Girls are often more likely to try to avoid conflict, more egalitarian than boys, and more competent at social problem solving.

Boys, on the other hand, often care more about being part of their peer group. They tend to be friends with a lot of other boys and are often less exclusive in their friendships. They frequently care more about understanding their skills relative to the skills other boys have, and care less about exchanging personal information or explicitly talking about their relationships.

Of course, these are broad generalizations about girls versus boys that may not apply to any particular individual child. But as generalizations, they were often consistent with the patterns I saw in my data. For example, the relational robot used a lot of behaviors that are more typical of girls than of boys, like explicitly sharing information about itself and talking about its relationship with the child. The not-relational robot used fewer actions like these. Plus, both robots may have talked and acted more like a girl than a boy, because the speech and behavior were designed by a woman (me), and the voice was recorded by a woman (shifted to a higher pitch to sound like a kid). We also had only women experimenters running the study, something that has varied more in prior studies.

I looked at kids' pronoun usage to see how they referred to the relational versus not-relational robot. There wasn't a big difference among girls; most of them used "he/his." Boys, however, were somewhat more likely to use "she/her." So one reason boys might've reacted less positively to the relational robot could be that they saw it as more of a girl, and they preferred to play with other boys.
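
(As a side note for anyone curious what that coding looks like in practice, here's a tiny, illustrative sketch that tallies gendered pronouns in a transcript. It's not the actual analysis code from the study.)

    # An illustrative sketch (not the study's actual coding procedure): tally
    # gendered pronouns in a child's transcript to see how they referred to the robot.
    import re

    MASCULINE = {"he", "him", "his"}
    FEMININE = {"she", "her", "hers"}

    def pronoun_counts(transcript: str) -> dict:
        words = re.findall(r"[a-z']+", transcript.lower())
        return {
            "masculine": sum(w in MASCULINE for w in words),
            "feminine": sum(w in FEMININE for w in words),
        }

    print(pronoun_counts("She said her favorite story was the one he told."))
    # {'masculine': 1, 'feminine': 2}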

We need to do follow-up work to examine whether any of these gender-related differences were actually causal factors. For example, would we see the same patterns if we explicitly introduced the robot as a boy versus as a girl, included more behaviors typically associated with boys, or had female versus male experimenters introduce the robot?

a child sits at a table that has a fluffy robot sitting on it

Designing Relational Technology

These data have interesting implications for how we design relational technology. First, the mere fact that we observed gender differences means we should probably start paying more attention to how we design the robot's gender and gendered behaviors. In our current culture and society, there is a range of behaviors generally associated with masculine versus feminine, male versus female, boys versus girls. Which means that if the robot acts like a stereotypical girl, even if you don't explicitly say that it is a girl, kids are probably going to treat it like a girl. Perhaps this might change if children are concurrently taught about gender and gender stereotypes, but there are a lot of open questions here and more research is needed.

One issue is that you may not need much stereotypical behavior in a robot to see an effect—back in 2009, Mikey Siegel performed a study in the Personal Robots Group that compared two voices for a humanoid robot. Study participants conversed with the robot, and then the robot solicited a donation. Just changing the voice from male to female affected how persuasive, credible, trustworthy, and engaging people found the robot. Men, for example, were more likely to donate money to the female robot, while women showed little preference. Most participants rated the opposite-sex robot as more credible, trustworthy, and engaging.

As I mentioned earlier, this was the first time we'd seen these gender differences in our studies with kids. Why now, but not in earlier work? Is the finding repeatable? A few other researchers have seen similar gender patterns in their work with virtual agents...but it's not clear yet why we see differences in some studies but not others.

What gender should technology have—if any?

Gendering technological entities isn't new. People frequently assign gender to relatively neutral technologies, like robot vacuum cleaners and robot dogs—not to mention their cars! In our studies, I've rarely seen kids not ascribe gender to our fluffy robots (and it has varied by study and by robot what gender they typically pick). Which raises the question of what gender a robot should be—if any? Should a robot use particular gender labels or exhibit particular gender cues?

This is largely a moral question. We may be able to design a robot that acts like a girl, or a boy, or some androgynous combination of both, and calls itself male, female, nonbinary, or any number of other things. We could study whether a girl robot, a boy robot, or some other robot might work better when helping kids with different kinds of activities. We could try personalizing the robot's gender or gendered behaviors to individual children's preferences for playmates.

But the moral question, regardless of whatever answers we might find regarding what works better in different scenarios or what kids prefer, is how we ought to design gender in robots. I don't have a solid recommendation on that—the answer depends on what you value. We don't all value the same things, and what we value may change in different situations. We also don't know yet whether children's preferences or biases are necessarily at odds with how we might think we should design gender in robots! (Again: More research needed!)

Personalizing robots, beyond gender

A robot's gender may not be a key factor for some kids. They may react more to whether the robot is introverted versus extraverted, or really into rockets versus horses. We could personalize other attributes of the robot, like aspects of personality (such as extraversion, openness, conscientiousness), the robot's "age" (e.g., is it more of a novice than the child or more advanced?), whether it uses humor in conversation, and any number of other things. Furthermore, I'd expect different kids to treat the same robot differently, and to frequently prefer different robot behaviors or personalities. After all, not all kids get along with all other kids!

However, there's not a lot of research yet exploring how to personalize an agent's personality, its styles of speech and behavior, or its gendered expressions of emotions and ideas to individuals. There's room to explore. We could draw on stereotypes and generalizations from psychological research and our own experiences about which kids other kids like playing with, how different kids (girls, boys, extroverts, etc.) express themselves and form friendships, or what kinds of stories and play boys or girls prefer (e.g., Joyce Benenson talks in her book Warriors and Worriers about how boys are more likely to include fighting enemies in their play, while girls are more likely to include nurturing activities).

We need to be careful, too, to consider whether making relational robots that provide more of what the child is comfortable with, more of what the child responds to best, more of the same, might in some cases be detrimental. Yes, a particular child may love stories about dinosaurs, battles, and knights in shining armor, but they may need to hear stories about friendship, gardening, and mammals in order to grow, learn, and develop. Children do need to be exposed to different ideas, different viewpoints, and different personalities with whom they must connect and resolve conflicts. Maybe the robots shouldn't only cater to what a child likes best, but also to what invites them out of their comfort zone and promotes growth. Maybe a robot's assigned gender should not reinforce current cultural stereotypes.

Dealing with gender stereotypes

A related question is whether, given gender stereotypes, we could make a robot appear more neutral if we tried. While we know a lot about what behaviors are typically considered feminine or masculine, it's harder to say what would make a robot come across as neither a boy nor a girl. Some evidence suggests that girls and women are culturally "allowed" to display a wider range of behaviors while still being considered female, whereas boys are subject to stronger cultural rules about what "counts" as appropriate masculine behavior (Joyce Benenson talks about this in her book that I mentioned earlier). So this might mean that there's a narrower range of behaviors a robot could use to be perceived as more masculine... which raises more questions about gender labels and behavior. What's needed to "override" stereotypes? And is that different for boys versus girls?

One thing we could do is give the robot a backstory about its gender or lack thereof. The story the robot tells about itself could help change children's construal of the robot. But would a robot explicitly telling kids that it's nonbinary, or that it's a boy or a girl, or that it's just a robot and doesn't have a gender be enough to change how kids interact with it? The robot's behaviors—such as how it talks, what it talks about, and what emotions it expresses—may "override" the backstory. Or not. Or only for some kids. Or only for some combinations of gendered behaviors and robot claims about its gender. These are all open empirical questions that should be explored while keeping in mind the main moral question regarding what robots we ought to be designing in the first place.

Another thing to explore in relation to gender is the robot's use of relational behaviors. In my study, I saw that the robot's relational behaviors made a bigger difference for girls than for boys. Adding in all that relational stuff made girls a lot more likely to engage positively with the robot.

This isn't a totally new finding—earlier work from a number of researchers on kids' interactions with virtual agents and robots has found similar gender patterns. Girls frequently reacted more strongly to the psychological, social, and relational attributes of technological systems. Girls' affiliation, rapport, and relationship with an agent often affected their perception of the agent and their performance on tasks done with the agent more than boys' did. This suggests that making a robot more social and relational might engage girls more, and lead to greater rapport, imitation, and learning. Of course, that might not work for all girls... and the next questions to ask are how we can also engage those girls, and what features the robot ought to have to better engage boys, too. How do we tune the robot's social-relational behavior to engage different people?

More to relationships than gender

There's also a lot more going on in any individual or in any relationship than gender! Things like shared interests and shared experiences build rapport and affiliation, regardless of the gender of those involved. When building and maintaining kids' engagement and attention during learning activities with the robots over time, there's a lot more than the robot's personality or gender that matters. Here are a few features I think are especially helpful:

  • Personalization to individuals, e.g., choosing stories to tell with an appropriate linguistic/syntactic level,
  • Referencing shared experiences like stories told together and facts the child had shared, such as the child's favorite color,
  • Sharing backstory and setting expectations about the robot's history, capabilities, and limitations through conversation,
  • Using playful and creative story and learning activities, and
  • The robot's design from the ground up as a social agent—i.e., considering how to make the robot's facial expressions, movement, dialogue, and other behaviors understandable to humans.

Bottom line: People relate socially and relationally to technology

When it comes to the design of relational technology, the bottom line is that people seem to use the same social-relational mechanisms to understand and relate to technology that they use when dealing with other people. People react to social cues and personality. People assume gender (whether male, female, or some culturally acceptable third gender or in-between). People engage in dialogue and build shared narratives. The more social and relational the technology is, the more people treat it socially and relationally.

This means, when designing relational technology, we need to be aware of how people interact socially and how people typically develop relationships with other people, since that'll tell us a lot about how people might treat a social, relational robot. This will also vary across cultures with different cultural norms. We need to consider the ethical implications of our design decisions: whether a robot's behavior might be perpetuating undesirable gender stereotypes or challenging them in positive ways, and how to mitigate risks around emotional interaction, attachment, and social manipulation. Do our interactions with gendered robots change how we interact with people of different genders? (Some of these ethical concerns will be the topic of a later post. Stay tuned!)

We need to explicitly study people's social interactions and relationships with these technologies, like we've been doing in the Personal Robots Group, because these technologies are not people, and there are going to be differences in how we respond to them—and this may influence how we interact with other people. Relational technologies have a unique power because they are social and relational. They can engage us and help us in new ways, and they can help us to interact with other people. In order to design them in effective, supportive, and ethical ways, we need to understand the myriad factors that affect our relationships with them—like children's gender.

This article originally appeared on the MIT Media Lab website, August 2019.



a red and blue robot sits on a table

Tega sits at a school, ready to begin a storytelling activity with kids!

Last spring, you could find me every morning alternately sitting in a storage closet, a multipurpose meeting room, and a book nook beside our fluffy, red and blue striped robot Tega. Forty-nine different kids came to play storytelling and conversation games with Tega every week, eight times each over the course of the spring semester. I also administered pre- and post-assessments to find out what kids thought about the robot, what they had learned, and what their relationships with the robot were like.

Suffice to say, I spent a lot of time in that storage closet.

a child sits at a table that has a fluffy robot sitting on it

A child talks with the Tega robot.

Studying how kids learn with robots

The experiment I was running was, ostensibly, straightforward. I was exploring a theorized link between the relationship children formed with the robot and children's engagement and learning during the activities they did with the robot. This was the big final piece of my dissertation in the Personal Robots Group. My advisor, Cynthia Breazeal, and my committee, Rosalind Picard (also of the MIT Media Lab) and Paul Harris (Harvard Graduate School of Education), were excited to see how the experiment turned out, as were some of our other collaborators, like Dave DeSteno (Northeastern University), who have worked with us on quite a few social robot studies.

In some of those earlier studies, as I've talked about before, we've seen that the robot's social behaviors—like its nonverbal cues (such as gaze and posture), its social contingency (e.g., using appropriate social cues at the right times), and its expressivity (such as using an expressive voice versus a flat and boring one)—can affect how much kids learn, how engaged they are in learning activities, and their perception of the robot's credibility. Kids frequently treat the robot as something kind of like a friend and use a lot of social behaviors themselves—like hugging and talking; sharing stories; showing affection; taking turns; mirroring the robot's behaviors, emotions, and language; and learning from the robot like they learn from human peers.

Five years of looking at the impact of the robot's social behaviors hinted to me that there was probably more going on. Kids weren't just responding to the robot using appropriate social cues or being expressive and cute. They were responding to more stuff—relational stuff. Relational stuff is all the social behavior plus everything else that contributes to building and maintaining a relationship: interacting multiple times, changing in response to those interactions, referencing experiences shared together, being responsive, showing rapport (e.g., with mirroring and entrainment), and reciprocating behaviors (e.g., helping, sharing personal information or stories, providing companionship).

While the robots didn't do most of these things, whenever they used some (like being responsive or personalizing behavior), it often increased kids' learning, mirroring, and engagement.

So... what if the robot did use all those relational behaviors? Would that increase children's engagement and learning? Would children feel closer to the robot and perceive it as a more social, relational agent?

I created two versions of the robot. Half the kids played with the relational robot: the version that used all the social and relational behaviors listed above. For example, it mirrored kids' pitch and speaking rate. It mirrored some emotions. It tracked activities done together, like stories told, and referred to them in conversation later. It told personalized stories.

The other half of the kids played with the not-relational robot—it was just as friendly and expressive, but didn't do any of the special relational stuff.

Kids played with the robot every week. I measured their vocabulary learning and their relationships, looked at their language and mirroring of the robot, examined their emotions during the sessions, and more. From all this data, I got a decent sense of what kids thought about the two versions of the robot, and what kind of effects the relational stuff had.

In short: The relational stuff mattered.

Relationships and learning

Kids who played with the relational robot rated it as more human-like. They said they felt closer to it than kids who played with the not-relational robot, and disclosed more information (we tend to share more with people we're closer to). They were more likely to say goodbye to the robot (when we leave, we say goodbye to people, but not to things). They showed more positive emotions. They were more likely to say that playing with the robot was like playing with another child. They also were more confident that the robot remembered them, frequently referencing relational behaviors to explain their confidence.

All of this was evidence that the robot's relational behaviors affected kids' perceptions of it and kids' behavior with it in the expected ways. If a robot acted in more social and relational ways, kids viewed it as more social and relational.

Then I looked at kids' learning.

I found that kids who felt closer to the robot, rated it as more human-like, or treated it more socially (like saying goodbye) learned more words. They mirrored the robot's language more during their own storytelling. They told longer stories. All these correlations were stronger for kids who played with the relational robot—meaning, in effect, that kids who had a stronger relationship with the robot learned more and demonstrated more behaviors related to learning and rapport (like mirroring language). This was evidence for my hypothesis that the relationships kids form with peers contribute to their learning.
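
For readers who like to see the mechanics, here's a toy sketch of this kind of per-condition correlation analysis. The numbers are made up for illustration (they are not study data), and it assumes SciPy is available.

    # A toy sketch with made-up numbers (not study data): correlate a
    # relationship measure with vocabulary posttest scores, separately
    # for each condition.
    from scipy.stats import pearsonr

    relational = {
        "closeness": [3.1, 4.0, 2.5, 4.5, 3.8, 2.9],
        "vocab_posttest": [6, 9, 5, 10, 8, 6],
    }
    not_relational = {
        "closeness": [3.0, 3.9, 2.6, 4.4, 3.7, 2.8],
        "vocab_posttest": [7, 6, 8, 7, 6, 8],
    }

    for name, group in [("relational", relational), ("not-relational", not_relational)]:
        r, p = pearsonr(group["closeness"], group["vocab_posttest"])
        print(f"{name}: r = {r:.2f}, p = {p:.3f}")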

graph showing on the left, that kids in the not-relational condition didn't have as strong a correlation while in the relational condition, there was a stronger correlation - but that this varied by gender

Children who rated the robot as more of a social-relational agent also scored higher on the vocabulary posttest.

This was an exciting finding. There are plenty of theories about how kids learn from peers and how peers are really important to kids' learning (famous names in the subject include Piaget, Vygotsky, and Bandura), but there's not as much research looking at the mechanisms that influence peer learning. For example, I'd found research showing that kids' peers can positively affect their language learning... but not why they could. Digging into the literature further, I'd found one recent study linking learning to rapport, and several more showing links between an agent's social behavior and various learning-related emotions (like increased engagement or decreased frustration), but not learning specifically. I'd seen some work showing that social bonds between teachers and kids could predict academic performance—but that said nothing about peers.

In exploring my hypotheses about kids' relationships and learning, I also dug into some previously-collected data to see if there were any of the same connections. Long story short, there were. I found similar correlations between kids' vocabulary learning, emulation of the robot's language, and relationship measures (such as ratings of the robot as a social-relational agent and self-disclosure to the robot).

All in all, I found some pretty good evidence for my hypothesized links between kids' relationships and learning.

I also found some fascinating nuances in the data involving kids' gender and their perception of the robot, which I'll talk about in a later post. And, of course, whenever we talk about technology, ethical concerns abound, so I'll talk more about that in a later post, too.

This article originally appeared on the MIT Media Lab website, February 2019.



A girl grins at a red and blue fluffy robot and puts her arm around it

Relational AI: Creating long-term interpersonal interaction, rapport, and relationships with social robots

Children today are growing up with a wide range of Internet of Things devices, digital assistants, personal home robots for education, health, and security, and more. With so many AI-enabled socially interactive technologies entering everyday life, we need to deeply understand how these technologies affect us—such as how we respond to them, how we conceptualize them, what kinds of relationships we form with them, the long-term consequences of use, and how to mitigate ethical concerns (of which there are many).

In my dissertation, I explored some of these questions through the lens of children's interactions and relationships with social robots that acted as language learning companions.

Many of the other projects I worked on at the MIT Media Lab explored how we could use social robots as a technology to support young children's early language development. When I turned to relational AI, instead of focusing simply on how to make social robots effective as educational tools, I delved into why they are effective—as well as the ethical, social, and societal implications of bringing social-relational technology into children's lives.

Here is a précis of my dissertation. (Or read the whole thing!)

a girl looks at the dragonbot robot as it tells a story

Exploring children's relationships with peer-like social robots

In earlier projects in the Personal Robots Group, we had found evidence that children can learn language skills with social robots—and the robot's social behaviors seemed to be a key piece of why children responded so well! One key strategy children used to learn with the robots was social emulation—i.e., copying or mirroring the behaviors used by the robot, such as speech patterns, words, even curiosity and a growth mindset.

My hunch, and my key hypothesis, was this: Social robots can benefit children because they can be social and relational. They can tap into our human capacity to build and respond to relationships. Relational technology, thus, is technology that can build long-term, social-emotional relationships with users.

I took a new look at data I'd collected during my master's thesis to see if there was any evidence for my hypothesis. Spoiler: There was. Children's emulation of the robot's language during the storytelling activity appeared to be related both to children's rapport with the robot and their learning.

Assessing children's relationships

Because I wanted to measure children's relationships with the robot and gain an understanding of how children treated it relative to other characters in their lives, I created a bunch of assessments. Here's a summary of a few of them.

We used some of these in another longitudinal learning study where kids listened to and retold stories with a social robot. I found correlations between measures of engagement, learning, and relationships. For example, children who reported a stronger relationship or rated the robot as a greater social-relational agent showed higher vocabulary posttest scores. These were promising results...

So, armed with my assessments and hypotheses, I ran some more experimental studies.

a boy sits across a table from a red and blue robot

Evaluating relational AI: Entrainment and Backstory

First, I performed a one-session experiment that explored whether enabling a social robot to perform two rapport- and relationship-building behaviors, entrainment and self-disclosure (backstory), would increase children's engagement and learning.

In positive human-human relationships, people frequently mirror or mimic each other's behavior. This mimicry (also called entrainment) is associated with rapport and smoother social interaction. I gave the robot a speech entrainment module, which matched vocal features of the robot's speech, such as speaking rate and volume, to the user's.
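
To give a flavor of what such a module might do, here's a minimal, illustrative sketch. It is not the actual Tega code; the numbers are placeholders, and apply_tts_settings stands in for whatever speech engine the robot actually uses.

    # A minimal, illustrative sketch (not the actual entrainment module): nudge
    # the robot's speech settings partway toward the child's speaking rate and volume.
    def estimate_speaking_rate(words_spoken: int, duration_s: float) -> float:
        """Speaking rate in words per second from a transcribed utterance."""
        return words_spoken / max(duration_s, 1e-6)

    def rms_volume(samples: list[float]) -> float:
        """Root-mean-square amplitude of an audio buffer."""
        return (sum(s * s for s in samples) / max(len(samples), 1)) ** 0.5

    def entrain(robot_rate: float, robot_volume: float,
                child_rate: float, child_volume: float,
                strength: float = 0.5) -> tuple[float, float]:
        """Move the robot's rate and volume partway toward the child's values."""
        new_rate = robot_rate + strength * (child_rate - robot_rate)
        new_volume = robot_volume + strength * (child_volume - robot_volume)
        return new_rate, new_volume

    # For example, after each child utterance:
    # rate, vol = entrain(robot_rate=2.5, robot_volume=0.6,
    #                     child_rate=estimate_speaking_rate(12, 6.0),
    #                     child_volume=rms_volume(audio_buffer))
    # apply_tts_settings(rate, vol)   # hypothetical speech-engine call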

I also had the robot disclose personal information about its poor speech and hearing abilities, in the form of a backstory.

Eighty-six kids played with the robot in a 2x2 study (entrainment vs. no entrainment and backstory vs. no backstory). The robot engaged the children one-on-one in conversation, told a story embedded with key vocabulary words, and asked children to retell the story.

I measured children's recall of the key words and their emotions during the interaction, examined their story retellings, and asked children questions about their relationship with the robot.

I found that the robot's entrainment led children to show more positive emotions and fewer negative emotions. Children who heard the robot's backstory were more likely to accept the robot's poor hearing abilities. Entrainment paired with backstory led children to emulate more of the robot's speech in their stories; these children were also more likely to comply with one of the robot's requests.

In short, the robot's speech entrainment and backstory appeared to increase children's engagement and enjoyment in the interaction, improve their perception of the relationship, and contribute to their success at retelling the story.

A girl smiles at a red and blue fluffy robot

Evaluating relational AI: Relationships through time

My goals in the final study were twofold. First, I wanted to understand how children think about social robots as relational agents in learning contexts, especially over multiple encounters. Second, I wanted to see how adding relational capabilities to a social robot would impact children's learning, engagement, and relationship with the robot.

Long-term study

Would children who played with a relational robot show greater rapport, a closer relationship, increased learning, greater engagement, more positive affect, and more peer mirroring than children who played with a non-relational robot, and would they treat the robot as more of a social other? Would children who reported feeling closer to the robot (regardless of condition) show more learning and peer mirroring?

In this study, 50 kids played with either a relational or not-relational robot. The relational robot was situated as a socially contingent agent, using entrainment and affect mirroring; it referenced shared experiences such as past activities performed together and used the child's name; it took specific actions with regard to relationship management; and it told stories personalized in both level (i.e., syntactic difficulty) and content (i.e., similarity of the robot's stories to the child's).

The not-relational robot did not use these features. It simply followed its script. It did personalize stories based on level, since this is beneficial but not specifically related to the relationship.

Each child participated in a pretest session; eight sessions with the robot, each of which included a pretest, the robot interaction (greeting, conversation, story activity, and closing), and a posttest; and a final posttest session.

graph showing that children who rated robot as more social and relational also showed more learning

Results: Relationships, learning, and ... gender?

I collected a unique dataset about children's relationships with a social robot over time, which enabled me to look beyond whether children liked the robot or not or whether they learned new words or not. The main findings include:

  • Children in the Relational condition reported that the robot was a more human-like, social, relational agent and responded to it in more social and relational ways. They often showed more positive affect, disclosed more information over time, and reported becoming more accepting of both the robot and other children with disabilities.

  • Children in the Relational condition showed stronger correlations between their scores on the relationship assessments and their learning and behavior, such as their vocabulary posttest scores, emulation of the robot's language during storytelling, and use of target vocabulary words.

  • Regardless of condition, children who rated the robot as a more social and relational agent were more likely to treat it as such, as well as showing more learning.

  • Children's behavior showed that they thought of the robot and their relationship with it differently than their relationships with their parents, friends, and pets. They appeared to understand that the robot was an "in between" entity that had some properties of both alive, animate beings and inanimate machines.

The results of the study provide evidence for links between children's imitation of the robot during storytelling, their affect and valence, and their construal of the robot as a social-relational other. A large part of the power of social robots seems to come from their social presence.

In addition, children's behavior depended on both the robot's behavior and their own personalities and inclinations. Girls and boys seemed to imitate, interact, and respond differently to the relational and non-relational robots. Gender may be something to pay attention to in future work!

Ethics, design, and implications

I include several chapters in my dissertation discussing the design, ethical, and theoretical implications of my work.

Because of the power social and relational interaction has for humans, relational AI has the potential to engage and empower not only children across many domains—such as education, therapy, and long-term pediatric health support—but also other populations: older children, adults, and the elderly. We can and should use relational AI to help all people flourish, to augment and support human relationships, and to enable people to be happier, healthier, more educated, and more able to lead the lives they want to live.

Further reading

Publications

  • Kory-Westlund, J. M. (2019). Relational AI: Creating Long-Term Interpersonal Interaction, Rapport, and Relationships with Social Robots. PhD Thesis, Media Arts and Sciences, Massachusetts Institute of Technology, Cambridge, MA. [PDF]

  • Kory-Westlund, J. M., & Breazeal, C. (2019). A Long-Term Study of Young Children's Rapport, Social Emulation, and Language Learning With a Peer-Like Robot Playmate in Preschool. Frontiers in Robotics and AI, 6. [PDF] [online]

  • Kory-Westlund, J. M., & Breazeal, C. (2019). Exploring the effects of a social robot's speech entrainment and backstory on young children's emotion, rapport, relationships, and learning. Frontiers in Robotics and AI, 6. [PDF] [online]

  • Kory-Westlund, J. M., & Breazeal, C. (2019). Assessing Children's Perception and Acceptance of a Social Robot. Proceedings of the 18th ACM Interaction Design and Children Conference (IDC) (pp. 38-50). ACM: New York, NY. [PDF]

  • Kory-Westlund, J. M., Park, H. W., Williams, R., & Breazeal, C. (2018). Measuring Young Children's Long-term Relationships with Social Robots. Proceedings of the 17th ACM Interaction Design and Children Conference (IDC) (pp. 207-218). ACM: New York, NY. [talk] [PDF]

  • Kory-Westlund, J. M., Park, H. W., Williams, R., & Breazeal, C. (2017). Measuring children's long-term relationships with social robots. Workshop on Perception and Interaction Dynamics in Child-Robot Interaction, held in conjunction with Robotics: Science and Systems XIII (pp. 625-626). Workshop website [PDF]

