Does the robot's expressivity affect children's learning and engagement?
Reading books is great. Reading picture books with kids is extra great, especially when kids are encouraged to actively process the story through dialogic reading (i.e., asking questions, talking about what's happening in the book and what might happen next, connecting the story to things the kid already knows). Dialogic reading can, for example, help kids learn new words and remember the story better.
Since we were already studying how social robots could serve as language learning companions and tutors for young kids, we decided to explore whether they could effectively engage preschoolers in dialogic reading. Given that past work has shown that children can and do learn new words from social robots, we also looked at what factors might modulate their engagement and learning, such as the robot's vocal expressiveness.
Tega robot
For this study, we used the Tega robot. Designed and built in the Personal Robots Group, Tega is a squash-and-stretch robot made to be an expressive, friendly creature. An Android phone displays its animated face and runs the control software. The phone's sensors capture audio and video, which we can stream to another computer, either so a teleoperator can decide what the robot should do next or, in other projects, as input for behavior modules such as speech entrainment or affect recognition. The robot can play back recorded audio files, or we can stream live human speech, pitch-shifted up to sound more child-like.
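Tega's pipeline applies that pitch shift to live speech in real time; purely to illustrate the idea, here is a minimal offline sketch in Python using the librosa library (the filenames and the four-semitone shift are assumptions for the example, not project specifics):

```python
# Offline illustration of the pitch-shifting idea: raise a recorded adult
# voice a few semitones so it sounds more child-like. Tega applies this
# kind of transform to live, streamed speech; this sketch only
# demonstrates the transform itself.
import librosa
import soundfile as sf

# Load a recorded utterance (example filename, not from the project).
voice, sr = librosa.load("operator_speech.wav", sr=None)

# Shift the pitch up 4 semitones (the amount actually used on the robot
# is an assumption here).
child_like = librosa.effects.pitch_shift(voice, sr=sr, n_steps=4)

sf.write("child_like_speech.wav", child_like, sr)
```

A real-time version would apply the same transform to small audio buffers as they stream from the teleoperator's microphone to the robot, rather than to a whole file at once.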
Here is a video showing one of the earlier versions of Tega. Here's research scientist Dr. Hae Won Park talking about Tega and some of our projects, using a newer version of the robot.
Study: Does vocal expressivity matter?
We wanted to understand how the robot's vocal expressiveness might affect children's engagement and learning during a storytelling and dialogic reading activity. So we set up two versions of the robot. One used a voice with a wide range of intonation and emotion. The other read and conversed with a flat voice, which sounded similar to a classic text-to-speech engine and had little dynamic range. Both robots moved and interacted in exactly the same way; the only difference was the voice.
This video shows the robot's expressive and not-so-expressive voices.
Half of the 45 kids in the study heard the expressive voice; the other half heard the flat voice. The robot told each child a story with several target vocabulary words embedded in it, asking dialogic questions along the way. Afterward, kids were asked to retell the story to a fluffy purple toucan puppet (who had conveniently fallen asleep during the story and was so sad to have missed it).
We found that all children learned new words from the robot, emulated the robot's storytelling in their own retells, and treated the robot as a social being. However, children who heard the story from the expressive robot showed deeper engagement, greater learning and story retention, and closer emulation of the robot's story in their retells.
This study provided evidence that children show peer-to-peer modeling of a social robot's language, that they emulate the robot's affect, and that they engage and learn more deeply when the robot is expressive.
Links
- Video showing this project
- This work is discussed on the Personal Robots Group site and the MIT Media Lab site
- I wrote about this project for the MIT Media Lab blog: Making new (robot) friends: Understanding children's relationships with social robots.
- That blog post was republished on IEEE Spectrum: Robots for Kids: Designing Social Machines That Support Children's Learning.