Do Robots Have Feelings Too?

Posted on: November 20, 2015 by Marianne Hewlett

Do you talk to your car and lovingly stroke its shiny bonnet when it performs well? Or perhaps you have heated discussions with your computer and threaten separation if it doesn’t speed up. Whilst these are obviously machines that neither respond to nor reciprocate your feelings, the latest robots not only look increasingly like us; they can recognize and respond to our emotions as well. This raises the question of whether a robot, once it is advanced enough, will be merely an emotional actor or could be conceived of as a sentient being with its own feelings and emotional characteristics.

Last year, Toshiba caused a sensation with the introduction of Aiko Chihira, a pretty humanoid robot clad in a traditional silk kimono working in customer service at the Mitsukoshi department store. She was so lifelike that customers mistook her for a human being. Around the same time, Aldebaran Robotics released Pepper, the first humanoid “emotional” robot designed to live with humans. Pepper can identify someone’s emotional state and respond to it, for instance cheering you up if you appear to be sad. It can also mirror emotions, and interestingly this works both ways: even though we know it’s a robot, when it mirrors our emotions we immediately start to bond. This is a particularly helpful phenomenon in areas such as healthcare, where patients become more likely to respond positively to requests.

Charming and cute as they are, the capabilities and intelligence of “emotional” robots are still very limited. They don’t have feelings; they are simply programmed to detect emotions and respond accordingly. But things are set to change very rapidly. As robots like Pepper become more affordable at around $1,600, we could soon see Pepper-like humanoids amusing us in our homes and, potentially, assisting us in our offices.

Initially, these will be seen as a novelty: a cute toy that everyone wants to interact with, that makes us feel good and that can be helpful performing tasks around the house or office. However, as we bond with these robots, could our relationships change into something more serious, perhaps even friendships? After all, such a relationship isn’t so different from those we have with animals, who are also capable of recognizing and responding to our emotions. And that raises the question of consciousness. To feel emotion, you need to be conscious and self-aware. Only in 2012, with the Cambridge Declaration on Consciousness, did scientists formally agree that animals are conscious, so how and when will we be able to determine that robots are too?

In 1950, Alan Turing introduced the Turing test to assess “a machine's ability to exhibit intelligent behavior equivalent to, or indistinguishable from, that of a human”. He proposed that a human evaluator would judge natural-language conversations between a human and a machine designed to generate human-like responses. The evaluator would know that one of the two conversation partners is a machine, and the conversation would be limited to a text-only channel, such as a computer keyboard and screen, so that the result would not depend on the machine's ability to talk. If the evaluator cannot reliably tell the machine from the human, the machine is said to have passed the test. The test does not check the ability to give correct answers to questions, only how closely the answers resemble those a human would give.
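For readers who like to see the idea spelled out, the protocol Turing describes can be sketched in a few lines of code. Everything here is illustrative: the reply functions are hypothetical stand-ins, not any real chatbot or implementation.

```python
import random

def human_reply(prompt):
    # Hypothetical stand-in for the human contestant at a keyboard.
    return f"That depends on what you mean by '{prompt}'."

def machine_reply(prompt):
    # Hypothetical stand-in for the machine under test.
    return f"That depends on what you mean by '{prompt}'."

def imitation_game(evaluator, seed, rounds=3):
    """One run of the text-only game: the evaluator sees replies from
    anonymous parties 'A' and 'B' and must name the machine.
    Returns True if the evaluator identified the machine correctly."""
    rng = random.Random(seed)
    machine_label = rng.choice(["A", "B"])
    repliers = {
        "A": machine_reply if machine_label == "A" else human_reply,
        "B": machine_reply if machine_label == "B" else human_reply,
    }
    transcript = [
        {"prompt": p, "A": repliers["A"](p), "B": repliers["B"](p)}
        for p in (f"question {i}" for i in range(rounds))
    ]
    return evaluator(transcript) == machine_label

def naive_evaluator(transcript):
    # When the replies are indistinguishable, any content-based rule
    # degenerates into a fixed guess -- here, always "A".
    return "A"

# Over many runs, the evaluator is right only about half the time,
# i.e. it cannot reliably tell machine from human: the machine "passes".
accuracy = sum(imitation_game(naive_evaluator, seed=s) for s in range(1000)) / 1000
print(accuracy)
```

The point of the sketch is the structure, not the dialogue: because the channel is text-only and the parties are anonymized, the evaluator's accuracy hovering around chance is exactly what "passing the test" means.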

Interestingly, Turing later changed the original question to: “Are there imaginable digital computers which would do well in the imitation game?” This is easier to measure, but it still doesn’t answer the question of how to determine whether they are sentient beings.

To date, Turing’s test remains very relevant as discussions heat up on whether to design robots that only recognize emotions, or to also give them the ability to respond emotionally. Research indicates that adding emotional circuits to robots improves their performance: in tests, they were better able to complete programmed tasks such as searching for food, escaping predators, and finding mates. This led researchers to conclude that including emotional states made robots fitter for survival.

Do robots have feelings too? It will be up to us to determine how robots, and our relationships with them, develop in the future. Joelle Renstrom, writing in Slate/Future Tense about artificial intelligence, states:

“If robots can learn emotions through experience, then we will be their emotional guides—both a comforting and a terrifying thought!”



About Marianne Hewlett
Senior Vice President and member of the Scientific Community
Marianne Hewlett is a Senior Vice President at Atos and a seasoned marketeer and communications expert. Passionate about connecting people, technology and business, she is a member of the Atos Scientific Community where she explores the Future of Work and the impact of technology on individuals, organizations and society. She is a strong ambassador for diversity and inclusivity – and particularly encourages female talent to pursue a career in IT – as she believes a diverse and happy workforce is a key driver for business success. As an ambassador for the company’s global transformation program Wellbeing@work, she explores new technologies and ways of working that address the needs of current and future generations of employees. A storyteller at heart, she writes about the human side of business and technology and posts include insights into the future of work, the science of happiness, and how wellbeing and diversity can drive success.
