There is a popular series, The Big Bang Theory (TBBT), whose lead character, Sheldon Cooper, in one episode comes across a machine that reads people's emotions and helps him be more empathetic towards them. Throughout the series, Sheldon is portrayed as someone who can't grasp the nuances of human emotion, and while that is played for comic relief, it makes him come across as robotic. When he gets his hands on the machine that reads people's expressions and tells him what they're feeling, he is elated at finally being able to understand emotions. Let that sink in: a machine is enabling a human to understand emotions and be empathetic.
Many of us already depend on digital assistants for routine things. “Alexa, switch off the lamp, please.” Or perhaps, “Ok Google, play ‘Hey Jude’ by the Beatles.” Sounds familiar, doesn’t it? The tone these assistants mimic is nothing like the robotic voice that pops into our heads when we think of Artificial Intelligence (AI) speaking.
But, is it just that? Is it just mimicking an emotion through tone or is there more to it? Can computers get so intelligent that they actually empathize and understand feelings?
The answer to that is yes, and we are definitely heading towards more. At CES 2020 this January, Samsung’s Star Labs unveiled Neon, a computationally created virtual being that looks and behaves like a real human, with the ability to show emotions and intelligence. In his demo at CES, Neon’s CEO Pranav Mistry said the idea behind Neon is “to push the boundaries so machines understand more about us. Whether we are tired or happy, our expressions, and our emotions. In turn, the more machines understand us, the more we will be able to connect with them on a deeper, human level.”
Last year, Humana Pharmacy was in the news for using an empathetic AI system called Cogito. Cogito helps Humana’s call centre agents handle calls better by gauging the customer’s emotions. It flashes up messages like “You’re speaking too fast, slow down” and “try to relate to the customer”. Using patterns of behaviour such as a long silence, quickened speech or a raised voice, the algorithm infers the customer’s emotional state and gives the agent the necessary prompts. Similarly, Israeli company Infi claims its ‘EmpathAI’ platform can predict a person’s unique traits, feelings, and internal motivators.
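To make the idea concrete, here is a minimal, purely illustrative sketch of how behavioural signals on a call could be mapped to agent-facing prompts. Cogito's actual models are proprietary; the signal names, thresholds, and messages below are all assumptions for illustration, not its real logic.

```python
# Hypothetical rule-based sketch: map simple call signals (long silences,
# quickened speech, raised voice) to coaching hints for the agent.
# All thresholds and field names are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class CallSignals:
    words_per_minute: float   # agent's speech rate
    longest_pause_sec: float  # longest silence on the call so far
    volume_delta_db: float    # customer's volume change vs. their baseline

def coaching_hints(signals: CallSignals) -> list:
    """Return agent-facing prompts triggered by the current call signals."""
    hints = []
    if signals.words_per_minute > 180:
        hints.append("You're speaking too fast, slow down")
    if signals.longest_pause_sec > 4.0:
        hints.append("Long silence detected, check in with the customer")
    if signals.volume_delta_db > 6.0:
        hints.append("Customer may be upset, try to relate to the customer")
    return hints

# A fast-talking agent on a call where the customer has raised their voice:
print(coaching_hints(CallSignals(200, 1.0, 8.0)))
```

A production system would of course replace these hand-tuned rules with models trained on audio features, but the input-signals-to-prompt shape of the pipeline is the same.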
While Sheldon might be ecstatic about the emotion-reading machine, we are certainly living in a world where AI is getting better at understanding emotions and helping people make informed decisions.
What empathy really means and why we need it
Let’s first talk about what empathy actually means.
In psychological terms, there are three aspects to empathy – Cognitive Empathy (the ability to understand a person’s feelings), Affective Empathy (the ability to respond appropriately to them) and Somatic Empathy (the physical reaction associated with the empathetic process).
For an AI to be used in scenarios where empathy is necessary, it needs at least cognitive and affective empathy. If that can be built into AI, the world will perhaps become more humane. If not, the AI could qualify as a psychopath: it recognises an emotion but doesn’t know how to act on it, and instead simply acts on the raw information it receives. A twisted example that comes to mind is Skynet from the Terminator series. It perceives humans as a threat to the world and to itself, and rather than acting on anything resembling empathy, it decides that total extermination of the human race is the logical step.
While the extermination of the human race seems a far-fetched scenario concocted by a paranoid mind, with empathetic AI there’s a chance that the threats will be far fewer and the potential applications many more.
Taking it to an extreme and turning to a fictional example again: in the 2010 Rajinikanth film Robo, Chitti, the humanoid robot, saves a girl from a burning building. The girl is naked, and when the robot carries her out in public, she feels ashamed and commits suicide. While the way it is portrayed stems from an extremely patriarchal mindset, it does drive home a crucial point – the necessity for empathy in AI.
The world with the empathetic AI
In TBBT, Sheldon often says the wrong thing because he can’t judge between right and wrong when it comes to a person’s feelings. The same thing would happen if an AI engine created to interact with humans couldn’t connect with them. To understand how important it is for AI to be taught to judge between what is right and what is wrong, let’s take an example.
We all have AI on our phones – Google Assistant on Android, Siri on iPhones. Here, AI could perhaps be used to avoid fights between couples: the phone could flag what to avoid saying in a text message, or even omit something by itself. This is just one of scores of potential applications, but if the AI doesn’t know the difference between right and wrong, it can go pretty wrong.
While the above example was just a funny one off the top of my head, empathetic AI can also do a lot of good. For instance, empathetic robots can be used to care for people with mental disorders or dementia, without the caregiver feeling frustrated or burned out by the person’s reactions. AI can help real-world Sheldons understand human emotions. It could also perhaps assist in treating children diagnosed with autism and other conditions on that spectrum. Something like this would be extremely useful in the Indian context, considering how much of a taboo mental disorders are here.
Also, in a world where loneliness is on a steady rise, AI can provide much-needed companionship. There have been digital pets over the last two decades, with Tamagotchi and Sony’s AIBO being the most popular, but AI can take this to another level altogether.
The movie ‘Her’ comes to mind at this point. In that film, the protagonist falls in love with an operating system that speaks to him exactly like a companion. The OS has feelings and falls in love with hundreds of other lonely users like the protagonist, giving them the companionship they crave. A real-life example of this is Gatebox, the Japanese digital assistant whose character Hikari has been called the world’s first ‘virtual wife’. The doll-sized hologram in a glass tube was created in 2017 to help Japanese men overcome loneliness.
In the same manner, AI can help prevent depression arising from loneliness and could potentially save lives by helping keep suicidal thoughts at bay. But, on the flip side, this could also mean less and less human interaction.
Be that as it may, there is no doubt that AI will play a major role in our lives in the future. What remains to be seen is whether it turns into a Skynet or a digital companion, and whether it enables people to do their jobs better (like Cogito) or entirely replaces certain jobs (like the Google Assistant making a call by itself). Whichever way it goes, it is inevitable. And in determining how AI shapes our world, empathy may play a critical role.