Social robotics: Beyond the uncanny valley

The Uncanny Valley. Courtesy: Masahiro Mori, Karl F. MacDorman, Takashi Minato

(PhysOrg.com) -- From science fiction and academia through assembly lines and telemedicine, robots have become both conceptually and physically ubiquitous. Robotics has advanced dramatically since the term's introduction in R.U.R. (Rossum's Universal Robots), a 1920 Czech-language science fiction play (which was nonetheless conceptually quite visionary, since the robots it depicted were biological, and therefore essentially synthetic humans) in which robot was the English rendering of robota, meaning forced labor, in turn derived from rab, or slave. Today’s virtual and physical robots – imbued with artificial intelligence, artificial muscles, vision and pattern recognition, speech recognition and synthesis, sensors and actuators, and increasingly sophisticated interactivity – seem to be approaching those envisioned in Isaac Asimov’s seminal work I, Robot (though still far from their human-level-and-beyond artificial intelligence, and certainly nowhere near the living robots envisioned in R.U.R.). Still, something’s glaringly missing – namely, the ability to seamlessly interact with humans and other robots in a spontaneous, natural way that does not rely exclusively on specific preprogrammed behaviors. This is far more difficult than it seems, owing largely to the challenge of computationally emulating evolutionarily-determined, perceptually- and emotionally-mediated contextual engagement. Enter Social Robotics: the effort to make robots more…well, sociable.

Social Robotics has its roots in the mid-20th century work of William Grey Walter, a neurophysiologist and roboticist who constructed autonomous electronic robots to demonstrate that complex behavior could arise from robust connectivity between just a few neurons. As robots became more sophisticated and animations more realistic, it was found that our empathy for these human analogues grew with their similarity to ourselves. But there’s a catch: once robots become humanoid in appearance and behavior beyond a certain point, a phenomenon known as the uncanny valley emerges.

A phrase introduced in 1970 by robotics professor Masahiro Mori, the uncanny valley is best described as the reaction we have to robotic appearance or behavior when it is perceived as almost human. The gap between barely human and fully human leaves us feeling uneasy as a result of the way evolution has shaped our brains when perceiving familiarity – especially that of anthropomorphic forms. As Mori wrote in his original paper about a prosthetic hand that is lifelike to the eye but not to the touch (as translated by Karl F. MacDorman and Takashi Minato), “In mathematical terms, strangeness can be represented by negative familiarity, so the prosthetic hand is at the bottom of the valley. So in this case, the appearance is quite humanlike, but the familiarity is negative. This is the uncanny valley.”
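
Mori’s description amounts to a graph rather than an equation: a curve of familiarity against human likeness that climbs, plunges below zero near “almost human,” and recovers at full human likeness. The sketch below is purely illustrative – the shape and numbers are invented for the plot, not taken from Mori’s paper – but it shows how “negative familiarity” marks the floor of the valley.

```python
import numpy as np
import matplotlib.pyplot as plt

# Hypothetical illustration only: Mori's 1970 paper gives a sketch, not an equation.
# We model "familiarity" as rising with human likeness, then dipping sharply
# negative just before full human likeness -- the uncanny valley.
h = np.linspace(0.0, 1.0, 500)                     # human likeness: 0 = machine, 1 = human
rise = 1.2 * h                                     # general trend: more humanlike, more familiar
valley = 2.0 * np.exp(-((h - 0.85) / 0.06) ** 2)   # notional dip near "almost human"
recovery = 1.5 * np.exp(-((h - 1.0) / 0.05) ** 2)  # healthy person at full likeness
familiarity = rise - valley + recovery

plt.plot(h, familiarity)
plt.axhline(0.0, linewidth=0.8)                    # below zero = "strangeness"
plt.xlabel("human likeness")
plt.ylabel("familiarity (negative = strangeness)")
plt.title("Notional uncanny valley curve (illustrative)")
plt.show()
```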

This experience of strangeness or even revulsion in turn prevents us from experiencing empathy for the robot or animation – and empathy is essential for optimal human/robot social interactions. In terms of robot behavior, this means that social robots must successfully operate within the complex web of societal rules that humans learn primarily through implicit experience rather than explicit programming.

At a recent New York Academy of Sciences event, Familiar but Strange: Exploring our Relationships with Robots, two exceptional speakers – roboticist extraordinaire Heather Knight and motion capture, computer vision and animation wizard Christoph Bregler – explored this mysterious space in considerable depth.

Geminoid Research. Copyright © Hiroshi Ishiguro Laboratory, ATR

Heather Knight – currently conducting her doctoral research at Carnegie Mellon's Robotics Institute – approaches social robotics in innovative ways. She is the founder of Marilyn Monrobot Labs in New York City, which creates socially intelligent robot performances and sensor-based electronic art. One of her more engaging projects is the integration of robot design and theater – as demonstrated by her creation and production of the Robot Film Festival, which took place in New York City on July 16-17, 2011. (The next Robot Film Festival is scheduled for 2012; submissions will be accepted for consideration starting in January.)

“As we go about designing our future,” Knight says, “some of the things I think about are creating intelligent machines, building relatable robotic characters, and creating companions that can exist and help us in our everyday lives.” Knight sees the integration of robotics and theater as a key pathway to that future.

“Robots and theater have a long history together,” Knight points out – a history beyond the legacy of R.U.R. “Artists tend to use the medium of their times – and right now, that medium is technology.” Moreover, she observes, making robots responsive to their audience, as well as applying physical theater techniques through gestural communication, allows emotion to infuse the entire robotic form. Knight illustrates this point with the example of look-step-reach-grab behavior: even without a word being spoken, every such sequence and its associated timing profile embodies and communicates a remarkable amount of distinct intentional, emotional and cognitive meaning.
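
To make the idea concrete, here is a minimal, hypothetical sketch (not Knight’s actual software) of how a look-step-reach-grab sequence might be parameterized purely by timing, so that the same four phases read as eager or hesitant:

```python
from dataclasses import dataclass

# Hypothetical sketch: the same look-step-reach-grab sequence reads as eager or
# hesitant purely from its timing profile. Phase names and numbers are invented.
@dataclass
class GesturePhase:
    name: str
    duration_s: float     # how long the phase takes
    pause_after_s: float  # hesitation before the next phase

def sequence(profile: str) -> list[GesturePhase]:
    if profile == "eager":
        timings = [(0.4, 0.0), (0.6, 0.0), (0.5, 0.0), (0.3, 0.0)]
    elif profile == "hesitant":
        timings = [(0.9, 0.6), (1.2, 0.8), (1.0, 0.5), (0.7, 0.0)]
    else:
        raise ValueError(f"unknown profile: {profile}")
    phases = ["look", "step", "reach", "grab"]
    return [GesturePhase(p, d, pause) for p, (d, pause) in zip(phases, timings)]

for phase in sequence("hesitant"):
    print(f"{phase.name:>5}: {phase.duration_s:.1f}s, pause {phase.pause_after_s:.1f}s")
```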

However, Knight – whose previous work includes robotics and instrumentation at NASA's Jet Propulsion Laboratory, interactive installations with Syyn Labs, and field applications and sensor design at Aldebaran Robotics, and who is an alumna of the MIT Media Lab’s Personal Robots Group – cautions that it’s not a matter of simply taking existing templates already successful in acting and embedding them in robots. “I think actual collaboration – the procedural knowledge of working with performance, performers and directors – is really important, because not everything can be codified in words.”

When discussing the uncanny valley, Knight cites how culture influences not only how we react to a robot perceived as being almost human, but also the way we represent robots and other technology in the world, as well as how we select the types of robotic research we fund and pursue. “I think that if we create modern narratives, perhaps we can reshape some of the current emphasis of where we’re going with technology.”

Chris Bregler, Associate Professor of Computer Science at NYU's Courant Institute and director of the NYU Movement Lab, inhabits the world of animation and entertainment, focusing on Hollywood robots – virtual characters and actors without physical embodiment. Conducting interdisciplinary research in the virtual world of motion capture, animation, computer vision, graphics, statistical learning, gaming, biomedical applications, human-computer interaction, and artificial intelligence – work that has resulted in numerous publications, patents, and awards from the National Science Foundation, Sloan Foundation, Packard Foundation, Electronic Arts, Microsoft, Google, U.S. Navy, U.S. Air Force, and other sources – he has a visually demonstrable take on the uncanny valley.

Final Fantasy image. Copyright © 2001 FFFP All rights reserved.

“Hollywood robots were previously played by human actors,” Bregler notes, commenting on the film AI: Artificial Intelligence. “Over the last 10 years, however, there’s been a huge revolution in Hollywood.” Today those same Hollywood robots would be played by virtual robots due to advances in animation, special effects, and live action – and, he adds, “Everything changes every year.”

For Bregler, the uncanny valley is also evident in motion capture as applied to virtual actors, such as the animated characters in Final Fantasy, which achieved astounding realism as stationary images – but all familiarity is lost once there’s facial movement. Due to technological limitations, body and face motion capture cannot reproduce the subtle small-scale movements that communicate essential real-world information. Without them, viewers immediately enter the uncanny valley.

Bregler points out another problem with virtual characters: there’s no weight or force in their virtual body or captured movements. “When you do animation, there are reverse kinematics – a bit like a puppet.” In other words, the force is externally applied rather than being an intrinsic component of the character’s physicality. This was perfectly fine for the aliens in Avatar, Bregler illustrates, but if the same performance were applied to people, the illusion would dissolve – and the same holds true for more ephemeral issues, such as the way evolution has enabled us to tell, for example, if someone may be lying from facial, gestural and tonal cues we may not be explicitly aware of (familiarity) until they’re absent (uncanny valley). Surprisingly, even higher primates have an uncanny valley regarding others of their species.
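
The puppet analogy maps onto what animators usually call inverse kinematics: an end effector is dragged to an externally chosen target and the joint angles simply follow the geometry. A minimal two-link planar sketch (hypothetical link lengths, unrelated to any production pipeline) looks like this:

```python
import math

# Minimal two-link planar inverse kinematics -- the "puppet-like" control the
# quote alludes to: the hand is dragged to a target and the joints follow,
# with no internal forces or weight involved. Link lengths are arbitrary.
def two_link_ik(x, y, l1=1.0, l2=1.0):
    """Return (shoulder, elbow) angles in radians placing the end effector at (x, y)."""
    d2 = x * x + y * y
    cos_elbow = (d2 - l1 * l1 - l2 * l2) / (2 * l1 * l2)
    if abs(cos_elbow) > 1.0:
        raise ValueError("target out of reach")
    elbow = math.acos(cos_elbow)  # elbow-down solution
    shoulder = math.atan2(y, x) - math.atan2(l2 * math.sin(elbow),
                                             l1 + l2 * math.cos(elbow))
    return shoulder, elbow

# Drag the "hand" to a target; the pose is whatever satisfies the geometry.
print(two_link_ik(1.2, 0.8))
```

Nothing in this computation knows how heavy the arm is or how much effort the motion takes – which is exactly the missing physicality Bregler is pointing to.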

Moreover, he adds, “The uncanny valley is now a common problem everywhere – in the game industry, in the motion capture industry – and while there are movies that dance around the valley, others fall right into it.” Bregler sees The Incredibles, which was hand-animated, as being in the former category – but the precisely motion-captured Polar Express falls into the latter.

“We’re not there yet with primary human motion and behavior,” he explains. “We’re also very far away from being able to simulate the human brain – even the spinal cord. We need a shortcut.”

More information: Masahiro Mori, The Uncanny Valley. Energy, 7(4), pp. 33-35, 1970. Translated by Karl F. MacDorman and Takashi Minato, Toward Social Mechanisms of Android Science, Vancouver, Canada, 26 July 2006.

Heather Knight, Eight Lessons Learned about Non-verbal Interactions through Robot Theater. Social Robotics, Lecture Notes in Computer Science, 2011, Volume 7072, pp. 42-51, DOI: 10.1007/978-3-642-25504-5_5

Heather Knight: Silicon-based comedy. TED Initiatives, TED Women, December 2010

Christoph Bregler, Next Generation Motion Capture: From the Silver Screen to the Stadium. MIT Sloan Sports Analytics Conference, March 2-3, 2011

Christoph Bregler et al., Squidball: An Experiment in Large-Scale Motion Capture and Game Design. Intelligent Technologies for Interactive Entertainment, Lecture Notes in Computer Science, 2005, Volume 3814/2005, 23-33, DOI: 10.1007/11590323_3

Copyright 2011 PhysOrg.com.
All rights reserved. This material may not be published, broadcast, rewritten or redistributed in whole or part without the express written permission of PhysOrg.com.
