MIT's Huggable Robot Teddy Enhances Human Relationships

The Huggable Robot
The Huggable robot, developed by researchers at the MIT Media Lab, is designed to enrich long-distance communication between users. Image credit: MIT Media Lab.

(PhysOrg.com) -- It's probably the most sophisticated teddy bear ever designed, but that doesn't stop MIT's companion robot called "the Huggable" from being pretty adorable, as well. The Huggable is the latest project to come from the MIT Media Lab, and could one day be used for healthcare, education, and social communication applications.

As the lab explains, the Huggable is designed to be more than a fun robotic companion. Its main purpose is to enhance human relationships by functioning as a visual tool for long-distance communication. Grandparents who want to talk to young grandchildren, teachers instructing students, or healthcare providers communicating with patients could all enrich their interactions using the robot.

The Huggable features more than 1,500 sensors on its skin, along with quiet actuators, video cameras in its eyes, microphones in its ears, a speaker in its mouth, and an embedded PC with 802.11g wireless networking.

"The movements, gestures and expressions of the bear convey a personality-rich character, not a robotic artifact," the MIT Media Lab's Web site explains. "A soft silicone-based skin covers the entire bear to give it a more lifelike feel and heft, so you do not feel the technology underneath. Holding the Huggable feels more like holding a puppy, rather than a pillow-like plush doll."

The Huggable connects to a Web interface that enables the remote person not only to see and hear the person on the other end through the bear's eyes and ears, via streaming audio and video, but also to monitor the robot's own behavior. The remote person can control the robot in several ways: a grandparent, for instance, can enter text for the robot to speak via speech synthesis, or command the robot to make various sounds, such as giggling. The grandparent can then watch the child's facial reactions on screen and listen to the child's responses, as well as watch a 3D virtual model of the robot and an animated cartoon that indicates gestures, such as when the robot is being bounced or rocked. Overall, the robot lets the grandparent see and hear the child through the eyes and ears of the Huggable.
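
To make the control flow concrete, here is a minimal Python sketch of the kind of command messages such a Web interface might relay to the bear. It is purely illustrative: the message format, field names, and command names are assumptions, since MIT has not published the actual interface.

```python
# Hypothetical command messages a Web interface might relay to the
# bear over its wireless link. The format and names are invented for
# illustration; this is not MIT's actual protocol.
import json
from dataclasses import dataclass, asdict

@dataclass
class RobotCommand:
    kind: str     # "speak" (text-to-speech) or "sound" (canned audio)
    payload: str  # text to synthesize, or the name of a sound

def encode(cmd: RobotCommand) -> str:
    """Serialize a command for transmission to the robot."""
    return json.dumps(asdict(cmd))

# A grandparent types a greeting for the bear to speak, then cues a giggle.
for cmd in (RobotCommand("speak", "Hello there!"),
            RobotCommand("sound", "giggle")):
    print(encode(cmd))
```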

The robot can operate in either fully autonomous or semi-autonomous mode. In fully autonomous mode, the Huggable can be programmed to remember the faces of specific people and then track those faces as they move, without external control. In semi-autonomous mode, an operator uses a joystick to move the robot's head horizontally and vertically.
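
A rough sketch of what those two modes could look like in code follows. The joint limits, tracking gain, and function names are invented for illustration, not the Huggable's real parameters.

```python
# Illustrative sketch of the two control modes; the joint limits and
# tracking gain below are assumptions, not the robot's real values.

PAN_LIMIT, TILT_LIMIT = 60.0, 30.0  # assumed head joint limits, degrees

def clamp(value: float, limit: float) -> float:
    return max(-limit, min(limit, value))

def joystick_to_head(x: float, y: float) -> tuple[float, float]:
    """Semi-autonomous mode: map joystick axes in [-1, 1] to head angles."""
    return clamp(x * PAN_LIMIT, PAN_LIMIT), clamp(y * TILT_LIMIT, TILT_LIMIT)

def track_face(pan: float, tilt: float, dx: float, dy: float,
               gain: float = 0.5) -> tuple[float, float]:
    """Fully autonomous mode: nudge the head toward a detected face,
    where (dx, dy) is the face's offset from image center in [-1, 1]."""
    return (clamp(pan + gain * dx * PAN_LIMIT, PAN_LIMIT),
            clamp(tilt + gain * dy * TILT_LIMIT, TILT_LIMIT))

print(joystick_to_head(0.5, -0.2))    # operator steering the head
print(track_face(0.0, 0.0, 0.1, 0.0))  # one face-tracking step
```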

The Huggable was originally based on the concept of therapeutic companion animals, and touch is central to its design. The robot's neural network can recognize nine classes of touch, including tickling, poking, and scratching, and each class is further divided into six response types, such as "teasing pleasant" and "punishment light." Based on the response type, the robot infers the intent behind the touch and decides how to respond.
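
The sketch below shows how such a class-to-response lookup might be organized. Only the few class and response names mentioned in the article are taken from the source; the rest of the names, and the toy intensity rule standing in for the trained neural network, are assumptions.

```python
# Toy stand-in for the touch-recognition pipeline. Names marked as
# "from the article" come from the source; everything else is assumed.

TOUCH_CLASSES = ("tickle", "poke", "scratch",   # from the article
                 "pat", "pet", "rub",
                 "squeeze", "slap", "hold")     # remaining names assumed

RESPONSE_TYPES = ("teasing pleasant",           # from the article
                  "punishment light",           # from the article
                  "teasing painful", "punishment strong",
                  "affection light", "affection strong")  # names assumed

def interpret_touch(sensor_frame: list[float]) -> tuple[str, str]:
    """Map a frame of skin-sensor readings to a (class, response type)
    pair. A toy intensity rule replaces the robot's trained network."""
    intensity = sum(abs(v) for v in sensor_frame) / max(len(sensor_frame), 1)
    touch = "poke" if intensity > 0.5 else "pet"
    response = "punishment light" if intensity > 0.8 else "teasing pleasant"
    return touch, response

print(interpret_touch([0.2, 0.1, 0.3]))  # -> ('pet', 'teasing pleasant')
```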

Currently, the MIT Media Lab is working to create a series of Huggables for real-world trials. The Huggable was created using Microsoft Robotics Studio, and the project is supported in part by a Microsoft iCampus grant.

More information: MIT Media Lab

© 2008 PhysOrg.com

