Focusing on LaMDA’s “sentience” distracts from the real questions plaguing AI.
With the advent of self-driving cars and artificial intelligence (AI) artists, AI is getting closer and closer to replicating human capabilities each day. However, there is one thing that separates humans from AI—emotional intelligence or sentience. Or, at least, so we thought.
In June this year, Google software engineer Blake Lemoine came out with the claim that Google’s AI chatbot LaMDA (short for language model for dialogue applications) had become sentient. After making the claim, Lemoine was first put on administrative leave and then eventually fired for violating the company’s confidentiality policy. But does that decisively mean that his claim is wrong? Let’s explore what sentience means, the validity of Lemoine’s claims and what AI sentience would mean for the future.
What does “sentient” mean?
Before we even begin to discuss LaMDA, it is important to have a general understanding of what sentience means in the first place. Sentience or emotional intelligence refers to the ability to have feelings. Sentience is a combination of three psychological phenomena—a sense of self-awareness, the ability to reflect on the “self” and the ability to think from someone else’s perspective.
When we debate whether AI is sentient, we are essentially trying to understand whether it questions its existence in the same way as human beings do. To test whether this is the case, scientists use the Turing Test.
The Turing Test, named after the computer scientist Alan Turing, requires two people and a computer. One person acts as an interviewer, asking the AI and the other person the same set of questions. After receiving both sets of answers, the interviewer decides which answers came from the computer and which from the other person. The test is run multiple times, and if the interviewer identifies the machine correctly in no more than half of the runs—that is, no better than chance—the AI is considered as much a human as the human respondent.
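To make that scoring rule concrete, here is a minimal sketch in Python. Everything in it is illustrative—the function names and setup are hypothetical, not the procedure Google or Lemoine actually used.

```python
import random

def turing_test(interviewer, machine_answers, human_answers):
    """Score a simplified Turing Test.

    `interviewer(answer_a, answer_b)` returns the index (0 or 1) of the
    answer it believes came from the machine. All names here are
    illustrative, not an actual evaluation protocol used on LaMDA.
    """
    correct = 0
    trials = list(zip(machine_answers, human_answers))
    for machine_ans, human_ans in trials:
        pair = [("machine", machine_ans), ("human", human_ans)]
        random.shuffle(pair)                       # hide which answer is which
        guess = interviewer(pair[0][1], pair[1][1])
        if pair[guess][0] == "machine":            # interviewer spotted the machine
            correct += 1
    # Identified at or below chance level => indistinguishable from the human.
    return correct / len(trials) <= 0.5
```

The key design point is the last line: passing the test is not about giving “correct” answers but about the interviewer doing no better than a coin flip at telling the machine apart from the person.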
LaMDA says it is a “person”, but is it really?
Coming back to LaMDA: to support his claim of its sentience, Lemoine released an edited transcript of his interview with the AI. In the interview, he asks the AI directly about the nature of its consciousness. “I am aware of my existence, I desire to learn more about the world, and I feel happy or sad at times,” the AI said in response.
But should we just believe LaMDA’s words? LaMDA, much like other AI that came before it, has been trained on information from the internet and is extremely capable of responding to writing prompts. Google had, in fact, demonstrated this capability in 2021, showing how the AI pretended to be Pluto and a paper airplane, using concrete facts to make the conversation feel more authentic. If the AI could pretend to be Pluto, is it really a stretch to imagine that it was just pretending to be sentient?
Google has also been quick to shut down Lemoine’s claims, saying that even though hundreds of researchers have spoken to LaMDA, no one else has claimed it is sentient. Besides Google, other experts in the field of AI have also come out to refute Lemoine’s claims. Gary Marcus, scientist and founder of Geometric Intelligence, called the claim about LaMDA’s sentience “nonsense on stilts”. In his blog post about LaMDA, Marcus says that AI systems like Google’s are simply auto-completing text, producing the words that best fit a given context.
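Marcus’s point is easier to see with a toy example. The sketch below—with a made-up corpus and a crude frequency count, nothing like LaMDA’s actual neural network—simply emits whichever word most often follows the given context. A system built this way can produce fluent, “self-aware”-sounding sentences without anything resembling feelings behind them.

```python
# Toy illustration of "auto-completing what fits best in context".
# The corpus and scoring here are hypothetical; real systems like LaMDA
# use neural networks trained on vastly more data, but the principle holds.
from collections import Counter

corpus = [
    "i am aware of my existence",
    "i am aware of my limits",
    "i am aware of the rules",
    "i am happy to help",
]

def next_word(context: str) -> str:
    """Pick the word that most often follows `context` in the corpus."""
    counts = Counter()
    for sentence in corpus:
        words = sentence.split()
        for i in range(len(words) - 1):
            if " ".join(words[: i + 1]).endswith(context):
                counts[words[i + 1]] += 1
    return counts.most_common(1)[0][0] if counts else "<unknown>"

print(next_word("i am aware of"))  # -> "my", because it is the most frequent continuation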
Even the Turing Test might not be an entirely reliable yardstick. In many versions of the Turing Test, humans “don’t try hard enough to stump the machine”. As per Marcus, the reason is that people tend to anthropomorphize (ascribe human characteristics to non-humans) everything. In this case, it means we project ourselves onto AI and think that it resembles us. Essentially, if Lemoine wanted to believe LaMDA was sentient, he began with a biased mind, and the test would be rendered futile.
What does this mean for the future of AI?
While it might be hard to diagnose how sentient LaMDA is, one thing we know for certain is that machines are getting better at fooling us into believing that they are human. In a world where it has become extremely easy to trick people with doctored images and audio messages via deepfake technology, the debate around LaMDA needs to be taken as a warning of how hard it will be to tell what is real from what is fake.
Whether or not AI can be sentient, there are bigger problems to be solved. “There are a lot of serious questions in AI, like how to make it safe, how to make it reliable, and how to make it trustworthy,” Marcus emphasized in his blog post. This rings especially true for LaMDA. According to a paper co-authored by Timnit Gebru, a former member of Google’s Ethical AI team, AI trained on information from the internet tends to reproduce racist and sexist biases. It is questions like these that deserve more attention than LaMDA’s sentience, or lack thereof.
Also read:
- Risks Posed by Deepfake Technology and How to Combat Them
- The Bridge Between Man and Machine: Intersection of Ethics and AI
- How is AI Used in Basketball?
- Will AI Replace Artists?
Header image courtesy of Pixabay