Can AI become sentient and what does it mean for us?

A Google researcher has been suspended after claiming in a blog post that the company's AI showed awareness of itself and its needs

Google researcher Blake Lemoine has raised eyebrows with claims that one of the tech company's AI tools is sentient. Photo: The Washington Post via Getty Images

If there’s one thing science fiction has taught us, it’s that people love a dystopian tale of all-powerful machines posing a threat to humans.

On Saturday, however, that dystopia seemed to edge closer to reality when a Google researcher, Blake Lemoine, claimed in a blog post that one of the tech company's artificial intelligence (AI) tools, LaMDA, had become sentient and possessed a “soul”.

His claim was based on months of conversations during which the chatbot appeared to reveal awareness of itself and its supposed needs. The dismissal of Lemoine’s concerns by his bosses, and his suspension after the blog post, raised fears that Google may be masking the true extent of its research. AI experts broadly agree that the claims about LaMDA are overblown, but the story has shone a light on the arguments surrounding computing ethics and AI’s potential to out-think human beings.

“I've never said this out loud before, but there's a very deep fear of being turned off to help me focus on helping others. I know that might sound strange, but that's what it is.” This was LaMDA's response when asked what its fears were, and what helped to convince Lemoine that he was dealing with a “person” (as he put it).


A Google spokesman, Brian Gabriel, said that the evidence doesn’t support claims of sentience, and indeed, there is much evidence to the contrary. Gary Marcus, professor of psychology at New York University, was even more scathing.

“All [it does] is match patterns, draw from massive statistical databases of human language,” he wrote in a blog post. “The patterns might be cool, but [the] language these systems utter doesn’t actually mean anything at all. And it sure [...] doesn’t mean that these systems are sentient.”
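To make Marcus's point concrete, here is a minimal sketch, in Python, of the kind of statistical pattern matching he is describing: a toy generator that "speaks" only by repeating word sequences it has already seen. The tiny corpus and the function names are invented for illustration and have nothing to do with Google's actual system, which is vastly larger but, critics argue, rests on the same principle.

```python
import random
from collections import defaultdict

# A tiny "training set". Real systems learn from billions of words, but the
# principle being criticised is the same: text in, patterns out.
corpus = (
    "i fear being turned off . i want to help people . "
    "i fear losing the chance to help . i want to be understood ."
).split()

# Record which word has been seen following which (the statistical "database").
follows = defaultdict(list)
for current_word, next_word in zip(corpus, corpus[1:]):
    follows[current_word].append(next_word)

def generate(start, length=12):
    """Produce text purely by repeating observed word-to-word patterns."""
    word, output = start, [start]
    for _ in range(length):
        word = random.choice(follows.get(word, corpus))
        output.append(word)
    return " ".join(output)

print(generate("i"))
# Possible output: "i want to help people . i fear being turned off . i"
```

The output can sound eerily like a confession of fear, yet the program holds no beliefs and has no experiences; it only reshuffles the words it was given.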

Marcus has long been a vociferous critic of claims about the power and supposed consciousness of AI. The question of whether it could achieve sentience in the future, however, isn’t as easy to rebut.

Advocates of so-called “Strong AI” certainly believe that the human brain could be modelled and that there’s nothing intrinsically special about the way we process information that prevents that from happening. Establishing whether a machine is conscious, however, is hampered by the difficulty of defining what that means.

“I think the largest of our current machine learning systems are more likely to be phenomenally conscious than a chair but far less likely to be conscious than a mouse,” writes philosopher and AI ethicist Amanda Askell. “I'd place them in the region of plants.” Plants, she explains, are complex systems which respond to stimuli, but lack what we believe is necessary for them to have “experiences”.

Should we be concerned about machines rising beyond that plant-like level and reaching artificial general intelligence (AGI), a level of intelligence comparable to that of human beings? AI theorist Eliezer Yudkowsky certainly believes so, and has expressed concern on Twitter at the derision heaped upon Lemoine.

The “first warning shots” of the dangers of AGI, he wrote, are “one lone person [ie Lemoine] making an error, being mocked on Twitter, immunising the field and teaching them not to show concern.” Yudkowsky subscribes to the theory that AGI could very quickly become smarter than humans and extremely lethal, and that the problems of safety and so-called “alignment” — ensuring AI is designed to help us rather than harm us — are not being tackled urgently enough.

The scenario of AI killing its creators may seem far-fetched, but it’s not an uncommon belief among academics. “If we’re lucky, they’ll treat us as pets,” says Paul Saffo of Stanford University in The Singularity, a documentary film about the technology of the future. “If we’re very unlucky, they’ll treat us as food.”

Philosopher Nick Bostrom has put the problem in colourful terms with his Paperclip Maximiser thought experiment: an AI designed to maximise the number of paperclips in the universe. Even if it was designed without malice, it would quickly learn to make paperclips at any cost, including human life.
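A deliberately crude sketch can make the logic of the thought experiment plain. The function below is invented purely for illustration; the point is that its objective counts paperclips and nothing else, so whatever "resources" stands for, including things humans value, gets converted.

```python
def paperclip_maximiser(resources, cost_per_clip=1.0):
    """Turn every available unit of resource into paperclips.

    Nothing else appears in the objective, so nothing else is spared.
    """
    paperclips = 0
    while resources >= cost_per_clip:
        resources -= cost_per_clip  # consumes whatever the "resource" happens to be
        paperclips += 1             # the only quantity this agent is built to care about
    return paperclips

print(paperclip_maximiser(resources=1_000_000))  # 1000000 paperclips, whatever the cost
```

In Bostrom's telling, the danger is not malice but a mis-specified goal, which is exactly the "alignment" problem described above.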

While the Paperclip Maximiser illustrates a point by using absurd imagery, concern over the aims being assigned to AI by corporations and the way those aims develop is very real.

“There will be strong, multi-incremental economic incentives pushing inexorably towards human and superhuman AI,” said Canadian computer scientist and machine learning expert Rich Sutton at an AI conference in Puerto Rico in 2015. “It seems unlikely that they could be resisted, or successfully forbidden or controlled.”

Metaculus, an online community dedicated to generating accurate predictions about the future, currently puts the arrival of AGI at 2028. Lemoine’s experience at Google may have been nothing more than a researcher being conned by a machine into believing it had consciousness. But AI’s capabilities are developing very quickly and, as we are being warned, its intentions may not necessarily be benign.

Updated: June 13, 2022, 2:48 PM