Robotic guide for literacy learning

...

MIT Media Lab, 2020-present
Roles: Project lead; Design; Development; Research
Collaborators: Safinah Ali, Pedro Colon-Hernandez, Hae Won Park. Advisor: Cynthia Breazeal

Our previous studies (#1, #2) show both the promise of a child-driven approach to literacy learning and the necessity of scaffolding (preferably automatic) to support it. Robotic characters make it possible to deliver more sophisticated, humanlike scaffolding. Studies by the Personal Robots group at the MIT Media Lab show that children's tendency to relate to robots as lifelike characters holds many benefits for learning. In this project, we expanded scaffolding into a new area: not only helping children build words, but also helping them come up with ideas about what to build.

During the COVID-19 pandemic, it became difficult to run studies with physical robots in schools. Therefore, I also developed a virtual version of Jibo the robot that runs on a tablet, which allows us to conduct studies in home settings.

Word scaffolding

Literature (e.g., Kegel & Bus, 2012) indicates that scaffolding modeled after human tutors is essential for children's learning from literacy apps, particularly for children with lower self-regulation. Robotic tutors offer a unique way to provide humanlike scaffolding. In the video below, Jibo the robot is used to support open-ended word construction.

Co-creativity

Our literacy learning approach is centered on child-driven interactions with an open-ended, expressive app, where children spell words to make scenes and stories. In our previous studies, we saw children create stories through a kind of back-and-forth interaction with the app: some ideas originated from them, while others were inspired by the medium. We think this dynamic can be reinforced by adding co-creation capabilities to the app. This might deepen engagement in creative activities and alleviate the "fear of the blank canvas," one of the barriers creators of all kinds have to face. Robotic companions seem ideally suited to the role of a co-creative partner.

To that end, we added an "idea button" to the app's interface. Pressing it prompts the robot (in this picture, a virtual one) to suggest what the child could make. If the canvas is empty, the robot gives a starter idea, for instance: "I think airports are very cool! Let's make a picture of an airport?" If there are already objects in the scene, the robot suggests something contextually fitting. For instance, tapping the idea button in the situation shown below caused the robot to give the prompt: "Who steers the plane? Let's add this person to the picture!"
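The idea button's behavior can be sketched roughly as follows. This is a minimal illustration only: the function name, the example prompts, and the lookup-table approach are assumptions for the sketch, not the app's actual implementation, which generates suggestions in a more sophisticated way.

```python
def suggest_idea(scene_objects):
    """Return a suggestion: a starter idea for an empty canvas,
    or a contextually fitting prompt based on objects already in the scene.

    Hypothetical sketch; names and prompt data are illustrative.
    """
    # Starter ideas offered when the canvas is empty.
    starter_ideas = [
        "I think airports are very cool! Let's make a picture of an airport?",
    ]
    # Hypothetical mapping from an object in the scene to a related prompt.
    contextual_prompts = {
        "plane": "Who steers the plane? Let's add this person to the picture!",
    }
    if not scene_objects:
        return starter_ideas[0]
    for obj in scene_objects:
        if obj in contextual_prompts:
            return contextual_prompts[obj]
    # Fall back to a starter idea if nothing in the scene matches.
    return starter_ideas[0]
```

The key design point is the branch on scene state: an empty canvas gets an open-ended starter, while an existing scene gets a suggestion grounded in what the child has already made.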

...

Upcoming studies

We are planning a study involving our child-driven literacy app and the virtual robotic guide in spring 2022. First, we are interested in continuing our line of research into child-driven, machine-guided systems by comparing a version of the app where the choices about what to make come from the robot with a version where children have creative freedom and agency. We will assess different aspects of children's self-regulation, personality, and home environments to see what might influence their preference for one option or the other. We will also use the same study as a testbed for the co-creative scaffolding tool. Stay tuned for our results!