Why Robots Scare Their Masters

One of the most talked-about subjects in robotics today is the uncanny valley hypothesis. Robots in relationships with humans have become a cliché of speculative fiction, but this hypothesis makes a narrower claim: there is a dip in the graph of human comfort as a machine approaches, but does not quite reach, human likeness. Devices that are disturbingly close to organic life forms often repulse human observers, yet the emotional response becomes far more positive once the machine becomes nearly indistinguishable from a real person.

The term comes from the robotics professor Masahiro Mori, who described the idea in 1970 as "bukimi no tani genshō." The hypothesis has been linked to a much earlier essay, "On the Psychology of the Uncanny," completed by Ernst Jentsch in 1906. Even Sigmund Freud's 1919 essay "Das Unheimliche" ("The Uncanny") has been connected with the idea that humans are repulsed by things that are too close to human without being human.

Several Japanese and Korean companies have built androids that are eerily close in appearance to their creators, and people are often unsettled when they view images of them. Overcoming the uncanny valley would open up a new set of problems: a society in which people are indistinguishable from machines would be filled with ethical quandaries.

Image Credit: Posterwire.com

  • I don’t see the point of making robots look like us. I know some people want sexbots, but that’s absurd. I want robots that are better than us physically and mentally.

    What if we could buy robots that looked human and functioned like butlers and maids? Would we accept that? I'd love to have a Jeeves, but wouldn't that be a kind of slavery if they had consciousness?

    • Jason Carr

      Interesting that you mention the slavery aspect. I think this is the type of question we’ll be hearing more in the near-term future as AI/Robotic technologies progress exponentially.

  • As long as they are just machines I don’t think making a robot work for us will be a problem. But what if they are self-aware?

    • Jason Carr

      I think once they're self-aware or have a "consciousness," we'll most certainly have to re-evaluate how we treat them, and this is where fields like neuroethics and bioethics become important. It's a complicated matter with no easy answers. I think these are the kinds of questions we need to ask, however, because people in AI and neuroscience will tell you it's only a matter of time before these machines have far superior intellects to humans. Development of self-awareness is not that far off, IMO.