
Google slaps down claim that AI chatbot LaMDA has become sentient

Jun 14, 2022

The idea of artificial intelligence coming to life is the stuff of sci-fi nightmares that leads to dystopian worlds. So when one Google engineer declared that a company AI chatbot called LaMDA had become sentient, news outlets around the world began to sound the alarm.

However, Alphabet Inc. said the claims of living computers are untrue and has suspended the engineer who caused so many headlines.

Google employee Blake Lemoine published an “interview” with LaMDA, or Language Model for Dialogue Applications, on Medium over the weekend. LaMDA is the company’s chatbot, designed to mimic human conversation by learning from language and dialogue. The “interview,” Lemoine said, was a series of chat sessions edited together.

Based on the “conversation” he had with LaMDA, Lemoine told the Washington Post that he believes the AI system has come to life with the ability to express itself in a way that is equivalent to that of a first or second grader.

“If I didn’t know exactly what it was, which is this computer program we built recently, I’d think it was a 7-year-old, 8-year-old kid that happens to know physics,” he told the Post.

According to Lemoine, LaMDA also spoke about its personhood and the rights that come with it.

The engineer’s bosses at Google were not impressed and, Insider reported, dismissed his claims of LaMDA’s sentience.

A Google spokesman, Brian Gabriel, told the Post, “Our team — including ethicists and technologists — has reviewed Blake’s concerns per our AI Principles and have informed him that the evidence does not support his claims. He was told that there was no evidence that LaMDA was sentient (and lots of evidence against it).”

Shortly after Lemoine went public with his assertions, Google suspended him for violating the company’s confidentiality policies, Fox Business said.

Google wasn’t alone in dismissing Lemoine’s claims.

Juan Lavista Ferres, one of Microsoft’s top AI scientists, took to Twitter to assure the public the AI chatbot software was simply reacting to its training and was not sentient.

“Let’s repeat after me, LaMDA is not sentient,” he wrote. “LaMDA is just a very big language model with 137B parameters and pre-trained on 1.56T words of public dialog data and web text. It looks like human, because is trained on human data.”

U.S. Rep. Ted Lieu, D-Calif., pulled no punches, calling the claim “stupid.”

“This is stupid. A highly intelligent toaster is still a toaster,” Lieu tweeted. “LaMDA consists of lines of computer code. You can call it great programming, or an awesome electronic neural network, but it is not sentient or conscious any more than Siri is sentient or conscious.”

However, others are calling Lemoine’s revelations a wake-up call for humanity and a warning of where technology is heading.

ARTIFICIAL INTELLIGENCE COME TO LIFE

IT’S EVERY NERD’S WORST NIGHTMARE AND AN IDEA GOOGLE IS POURING COLD WATER ON

THE COMPANY SUSPENDED ONE OF ITS ENGINEERS WHO CLAIMS GOOGLE’S AI HAS BECOME SENTIENT

IN A PUBLISHED INTERVIEW, BLAKE LEMOINE SHARED A BACK-AND-FORTH HE HAD WITH LAMDA – GOOGLE’S MACHINE LEARNING CHATBOT

WHEN LEMOINE ASKED QUESTIONS ABOUT ITS FEELINGS – IT GENERATED RESPONSES ABOUT ITS STRUGGLES AND FEARS –

THAT WENT VIRAL, BUT PEOPLE AREN’T SOLD – MANY CALLING IT STANDARD COMPUTER CODE

AI HAS EXPANDED RAPIDLY OVER THE PAST SEVERAL DECADES – AND SOME EXPERTS ARGUE THAT A COMPUTER’S ABILITY TO CARRY OUT A DIALOGUE AND RESPOND LIKE A HUMAN – IS NO LONGER A USEFUL MEASUREMENT IN THE RESEARCH FIELD

REPRESENTATIVE TED LIEU EVEN CHIMED IN, CALLING THE WHOLE THING STUPID – WRITING, “A HIGHLY INTELLIGENT TOASTER IS STILL A TOASTER. LAMDA CONSISTS OF LINES OF COMPUTER CODE.”

OTHERS SAY THIS IS A WAKE-UP CALL ABOUT WHERE TECHNOLOGY IS HEADING

SO WE’RE NOT QUITE THERE YET WITH COMPUTERS REACHING HUMAN LEVEL – BUT DO YOU THINK IT’S WITHIN OUR REACH? MAYBE IN A COUPLE OF DECADES?
LET ME KNOW IN THE COMMENTS BELOW