What LaMDA’s “Sentience” Means for AI

Focusing on LaMDA’s “sentience” distracts from the real questions plaguing AI.

With the advent of self-driving cars and artificial intelligence (AI) artists, AI is getting closer and closer to replicating human capabilities each day. However, there is one thing that separates humans from AI—emotional intelligence or sentience. Or, at least, so we thought. 

In June this year, Google software engineer Blake Lemoine came out with the claim that Google’s AI chatbot LaMDA (short for Language Model for Dialogue Applications) had become sentient. After making the claim, Lemoine was first put on administrative leave and then eventually fired for violating the company’s confidentiality policy. But does that decisively mean that his claim is wrong? Let’s explore what sentience means, the validity of Lemoine’s claims and what AI sentience would mean for the future.

What does “sentient” mean?

Before we even begin to discuss LaMDA, it is important to have a general understanding of what sentience means in the first place. Sentience or emotional intelligence refers to the ability to have feelings. Sentience is a combination of three psychological phenomena—a sense of self-awareness, the ability to reflect on the “self” and the ability to think from someone else’s perspective. 

When we debate whether AI is sentient, we are essentially trying to understand whether it questions its existence in the same way as human beings do. To test whether this is the case, scientists use the Turing Test.

The Turing Test, named after the computer scientist Alan Turing, requires two people and a computer. One person acts as an interviewer, asking the AI and the other person the same set of questions. After receiving both sets of answers, the interviewer decides which set was given by whom—the computer or the other person. The test is repeated multiple times, and if the interviewer correctly identifies the machine in no more than half of the runs, the AI is considered as convincingly human as the human respondent.
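The pass criterion described above can be sketched in a few lines of code. This is a minimal illustration, not Turing’s original formulation: the `interviewer` callable is a hypothetical stand-in for the human judge, returning a guess of `"machine"` or `"human"` after each round of questioning.

```python
import random

def turing_test(interviewer, runs=10):
    """Repeat the imitation game `runs` times. Under the criterion
    described above, the machine passes if the interviewer correctly
    picks it out in no more than half of the runs."""
    correct = 0
    for _ in range(runs):
        # In each run the interviewer questions both respondents and
        # then guesses which one is the machine.
        if interviewer() == "machine":
            correct += 1
    return correct <= runs / 2

# A hypothetical interviewer who can do no better than chance:
chance_interviewer = lambda: random.choice(["machine", "human"])
print(turing_test(chance_interviewer))
```

An interviewer who guesses at chance level will, on average, identify the machine in only about half the runs, so a machine facing such a judge tends to pass—one reason, as the article notes later, that how hard the interviewer tries matters.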

LaMDA says it is a “person”, but is it really?

Coming back to LaMDA, to disclose its alleged sentience, Lemoine released an edited transcript of his interview with the AI. In the interview, he clearly asks the AI about the nature of its consciousness. “I am aware of my existence, I desire to learn more about the world, and I feel happy or sad at times,” the AI said in response.

But should we just take LaMDA at its word? LaMDA, much like other AI models that came before it, has been trained on information from the internet and is extremely capable of responding to writing prompts. Google, in fact, demonstrated this capability in 2021, exhibiting how the AI pretended to be Pluto and a paper airplane, using concrete facts to make the conversation feel more authentic. If the AI could pretend to be Pluto, is it really a stretch to imagine that it was just pretending to be sentient?

Google has also been quick to shut down Lemoine’s claims, saying that even though hundreds of researchers have spoken to LaMDA, no one else has claimed it is sentient. Besides Google, other experts in the field of AI have also come out to refute Lemoine’s claims. Gary Marcus, scientist and founder of Geometric Intelligence, called the claim about LaMDA’s sentience “nonsense on stilts”. In his blog post about LaMDA, Marcus argues that AI systems like Google’s are simply auto-completing sentences that best fit a given context.

Even if we go back and look at the Turing Test results, they might not be entirely accurate. In many versions of the Turing Test, humans “don’t try hard enough to stump the machine”. As per Marcus, the reason is that people tend to anthropomorphize (ascribe human characteristics to non-humans) everything. In this case, it means we project ourselves onto AI and think that it resembles us. Essentially, if Lemoine wanted to believe LaMDA was sentient, he would be starting with a biased mind, rendering the test futile.

What does this mean for the future of AI?

While it might be hard to determine whether LaMDA is sentient, one thing we know for certain is that machines are getting better at fooling us into believing they are human. In a world where deepfake technology makes it extremely easy to trick people with doctored images and audio messages, the debate around LaMDA should be taken as a warning of how hard it will become to tell real from fake.

Whether or not AI can be sentient, there are bigger problems to be solved. “There are a lot of serious questions in AI, like how to make it safe, how to make it reliable, and how to make it trustworthy,” Marcus emphasized in his blog post. This rings especially true for LaMDA. According to a paper co-authored by Timnit Gebru, a former member of Google’s Ethical AI team, AI trained on information from the internet tends to reproduce racist and sexist biases. It is questions like these that deserve more attention than LaMDA’s sentience, or lack thereof.

Header image courtesy of Pixabay 

