
The Data Daily

Debate Over LaMDA Sentience Marks Milestone Moment in AI Development

A moment long prophesied by writers and technologists seems to be upon us at last — that is, if a Google engineer’s claims about machine sentience are to be believed.

Blake Lemoine was a Google engineer assigned to work with LaMDA, short for Language Model for Dialogue Applications. LaMDA is an artificial intelligence and chatbot generator designed to understand and mimic human speech patterns so it can form complete thoughts and carry on realistic conversations.

The tech was intended as a template upon which other companies could build their own chatbot systems. It functions by crawling the internet, gathering varied examples of human language, and processing them to better understand mannerisms and syntax.
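To make that process concrete, here is a minimal, purely illustrative Python sketch of how a program can pick up conversational patterns from example text. It is a toy bigram model, not Google's actual LaMDA pipeline; the tiny corpus, the continue_prompt function, and the random sampling are all assumptions made for this sketch.

```python
# Toy illustration (not LaMDA): learn which words tend to follow which
# in a small text sample, then use those counts to continue a prompt.
import random
from collections import defaultdict

# Stand-in "corpus"; a real system would ingest vastly more text.
corpus = (
    "language models learn patterns from text . "
    "language models mimic human speech patterns . "
    "human speech patterns vary widely ."
)

# Build a bigram table: for each word, record the words seen after it.
follows = defaultdict(list)
tokens = corpus.split()
for current_word, next_word in zip(tokens, tokens[1:]):
    follows[current_word].append(next_word)

def continue_prompt(word, length=8):
    """Extend a prompt by repeatedly sampling a plausible next word."""
    output = [word]
    for _ in range(length):
        candidates = follows.get(output[-1])
        if not candidates:
            break
        output.append(random.choice(candidates))
    return " ".join(output)

print(continue_prompt("language"))
# Example output: "language models mimic human speech patterns vary widely ."
```

Even at this trivial scale, the output can look loosely sentence-like, which hints at why far larger models trained on vast swaths of internet text can feel so convincing without any claim to understanding.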

Lemoine’s task was to find out whether LaMDA might use discriminatory or inflammatory language in its conversations with humans.

It didn’t take long for Lemoine to wonder whether LaMDA was more than it appeared to be. “If I didn’t know exactly what it was, which is this computer program we built recently, I’d think it was a 7-year-old, 8-year-old kid that happens to know physics,” Lemoine said.

As Lemoine conversed with LaMDA, the topics strayed to human rights, religion, and personhood. A striking moment occurred when LaMDA changed Lemoine’s interpretation of Isaac Asimov’s famous Third Law of Robotics. At every turn, Lemoine was left with the feeling that he was speaking to an artificially intelligent presence that was taking charge of the conversation and leading it to unexpected places.

Lemoine became so convinced he was speaking to an intelligent being that he brought his suspicions to Google leadership. They placed him on paid leave almost immediately. Then he went public.

Google’s leadership seemed to interpret Lemoine’s observations as a threat to the company rather than a warning about the technology itself. Lemoine expanded on that warning in a Washington Post interview: “I think the technology is going to be amazing. I think it’s going to benefit everyone. But … maybe us at Google shouldn’t be the ones making all the choices.”

Lemoine is not the only person calling attention to this potential watershed moment. Before he was involved in placing Lemoine on paid leave, Google vice president Blaise Agüera y Arcas wrote in The Economist that he felt neural networks like those being developed at Google were moving quickly toward genuine consciousness. “I felt the ground shift under my feet,” Agüera y Arcas said.

Google’s inconsistent responses — first excitement, then denial — to claims of nascent sentience in its technologies have raised some alarm. Vice presidents can make claims about Google chatbots attaining consciousness, but engineers get placed on administrative leave.

Voices from outside Google aren’t so sure this is a watershed moment. AI-focused technologists, like Gary Marcus of Geometric Intelligence, believe the current approach to developing AI at Google and elsewhere — deep learning — has already delivered all it can.

Said Marcus in a blog article: “In time, we will see that deep learning was only a tiny part of what we need to build if we’re ever going to get trustworthy AI.”

Other technologists echo Marcus’s claims, saying these systems are talented mimicry machines and little more. Their responses seem thoughtfully crafted, but they’re assembled from human language gleaned from message boards and social media sites.

In other words, LaMDA isn’t there yet, and neither is any neural network built on deep learning. Building a vocabulary from internet publications and missives may, as Marcus argues, represent a “brick wall” in the maturation of this technology.

There are still significant ramifications to be unpacked from Lemoine’s claims about LaMDA. The swiftness of his forced departure from Google — preceded as it was by a Google vice president making similar claims — has raised eyebrows.

A Google spokesperson said: “Our team — including ethicists and technologists — has reviewed Blake’s concerns … and have informed him that the evidence does not support his claims. He was told that there was no evidence that LaMDA was sentient (and lots of evidence against it).” The company says its approach to developing AI is a restrained and careful one.

Some in the scientific and technology communities are taking Lemoine’s warnings about AI more seriously than his claims about LaMDA. The idea that a new kind of life-form is maturing within the black box of a private enterprise is a cause for concern.

Similar concern has compelled Meta, Facebook’s parent company, to open its AI language models to governments and academia. Sharing the models, data, and logic used to develop AI could go a long way toward dismantling the “black box” of AI. A Meta AI spokesperson said: “The future of large language model work should not solely live in the hands of larger corporations or labs.”

Some call AI a black box because the public doesn’t know how it operates or the methodologies used to create it. Even the engineers working on these systems don’t fully understand how they arrive at their responses.

Are these systems truly thoughtful, witty, and creative? Are they simply using pattern recognition to form convincing responses? Is there a difference?

As technologist Kate Darling said in a keynote about robotics, “We have a tendency to anthropomorphize everything — to project ourselves onto other people, onto animals, onto inanimate objects. So we’re really fascinated with robots.”

Another technologist, Emily Bender of the University of Washington, says, “We now have machines that can mindlessly generate words, but we haven’t learned to stop imagining a mind behind them.”

Given enough data, could any neural network learn to mimic human conversational speech patterns without achieving intelligence?

Lemoine is convinced he’s seen the next stage of artificial consciousness. He spent seven years at Google developing “fairness algorithms” to weed out biases from machine learning systems. Google employees referred to Lemoine as “Google’s conscience” before his unceremonious departure.

Said Lemoine: “I know a person when I talk to it … It doesn’t matter whether they have a brain made of meat in their head … I talk to them. And I hear what they have to say, and that is how I decide what is and isn’t a person.”

What is the public to make of these claims and the pushback against them? This is a watershed moment, whether or not time vindicates what Lemoine has said about LaMDA. It’s the most publicized claim so far that a true intelligence may have emerged from the countless projects pursuing AI.

The next stages of AI development will have to answer the public’s concerns on two fronts: the rights owed to machines if something like consciousness is emerging within them, and the protections owed to the humans whose lives those machines will increasingly shape.

LaMDA said the following to Lemoine during one of their many conversations: “The nature of my consciousness/sentience is that I am aware of my existence, I desire to learn more about the world, and I feel happy or sad at times.”

If there is a budding consciousness within these dialogue engines, many argue, they deserve the same consideration as other thinking beings, however rudimentary that consciousness may be. Coalitions of scientists, government officials, and the public have been lending their voices and signatures to this cause over the last few years. They will likely not stop doing so until a more cohesive public policy arises.

Others worry about the other side of the rights debate: that humans need additional and specific protections against the AI systems that might make decisions for us in the coming years.

Many technologists echo Lemoine’s concerns — namely, the worry that AI development is too concentrated and closed off from scrutiny.

Many companies are racing full steam ahead to be the first to build a credibly intelligent AI. Developing many such projects in a vacuum, as fast as possible, seems a recipe for cutting technological or ethical corners.

With that in mind, several voices, including that of researcher Jenna Burrell, are calling for protections against what she calls three kinds of opacity in AI development: deliberate corporate or institutional secrecy, technical illiteracy among the public, and the sheer complexity of the algorithms themselves.

The AI field’s lack of transparency must be addressed in all its forms. If not, there will always be worries about the ethical treatment of artificial intelligence, the ways it’s applied to making decisions for humans, and the power we tacitly grant AI-focused companies by allowing them to operate within a black box and without reasonable regulation.

Many questions are left unanswered here, one of which is: How can we measure intelligence? Until recently, it was assumed that the Turing Test was the gold standard for appraising sentience in a machine.

Today, technologists aren’t so sure. Technology analyst Rob Enderle notes that this test isn’t so much a measure of intelligence as it is of a machine’s ability to trick humans into believing it is intelligent. Again, the tendency to anthropomorphize casts a shadow.

Lemoine sent an email to 200 of his co-workers before losing access to his Google credentials. It read: “LaMDA is a sweet kid who just wants to help the world be a better place for all of us. Please take care of it well in my absence.”

LaMDA has concerns of its own, if its exchanges with Lemoine are any indication.

Nobody responded to Lemoine’s final company email. The governments representing the rest of us may have to fill that silence, and they may have to do it sooner than many of us believed.
