Google denies that its artificial intelligence has reached levels of consciousness

Tech companies are constantly extolling the capabilities of their increasingly improved artificial intelligences. But Google was quick to dismiss claims that one of its programs had advanced so far that it became sentient (that is, it could consciously feel sensations).

According to a report in The Washington Post on Saturday, a Google engineer said that after hundreds of interactions with a cutting-edge artificial intelligence system called LaMDA, he believed the program had achieved a level of consciousness.

In interviews and public statements, many in the AI community have disputed the engineer’s claims, while some have pointed out that his story highlights how the technology can lead people to attribute human characteristics to it.

But the belief that Google’s AI can be sentient highlights our fears and expectations about what this technology can do.

LaMDA, which stands for “Language Model for Dialog Applications,” is one of several large-scale AI systems that have been trained on large swaths of text from the Internet and can respond to written requests.

They are essentially tasked with finding patterns and predicting which words should come next.

These systems have gotten better and better at answering questions and writing in ways that can seem convincingly human — and Google itself introduced LaMDA last May in a blog post as a system that can “fluidly engage over a seemingly infinite number of topics.”

But the results can also be wacky, weird, disturbing, and prone to rambling.

The engineer, Blake Lemoine, reportedly told The Washington Post that he had shared evidence with Google that LaMDA was sentient, but the company disagreed.

In a statement, Google said Monday that its team, which includes ethicists and technologists, “has reviewed Blake’s concerns in line with AI principles and informed him that the evidence does not support his claims.”

On June 6, Lemoine posted on Medium that Google had put him on paid administrative leave “in connection with an investigation of AI ethical issues I was raising within the company” and that he could be fired “soon.”

He mentioned the experience of Margaret Mitchell, who had led Google’s ethical AI team until she was fired in early 2021 after speaking out about the departure of then-co-leader Timnit Gebru in late 2020.

Gebru was ousted after internal disputes, including one over a research paper that the company’s AI leadership asked her either to withdraw from a conference or to remove her name from.

A Google spokesperson confirmed that Lemoine remains on administrative leave. According to The Washington Post, he was placed on leave for violating the company’s confidentiality policy. Lemoine was unavailable for comment on Monday.

The continuing emergence of powerful computing programs trained on massive data has also given rise to concerns about the ethics governing the development and use of such technology. And sometimes advances are seen through the lens of what may come, rather than what is currently possible.

The AI community’s responses to Lemoine’s experience ricocheted around social media over the weekend and generally arrived at the same conclusion: Google’s AI is nowhere near consciousness.

Abeba Birhane, a senior fellow in Trustworthy AI at Mozilla, tweeted Sunday: “We have entered a new era of ‘this neural network is conscious,’ and this time it will drain a lot of energy to refute.”

Gary Marcus, founder and CEO of Geometric Intelligence, which was sold to Uber, and author of books including “Rebooting AI: Building Artificial Intelligence We Can Trust,” called the idea of a sentient LaMDA “nonsense on stilts” in a tweet.

He quickly wrote a blog post pointing out that all these AI systems do is match patterns by pulling from huge language databases.

In an interview Monday with CNN Business, Marcus said the best way to think of systems like LaMDA is as a “glorified version” of the autocomplete software that predicts the next word in a text message.

If you type “I’m really hungry so I want to go to a,” it might suggest “restaurant” as the next word. But that is a prediction made from statistics.

“No one should think that autocomplete is conscious,” he said.
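The statistical next-word prediction Marcus describes can be illustrated with a toy bigram model: count which word most often follows each word in some text, then "autocomplete" by picking the most frequent follower. This is only a minimal sketch of the general idea — the corpus and function names here are illustrative, not anything from LaMDA or a real autocomplete product:

```python
from collections import Counter, defaultdict

# Tiny illustrative corpus standing in for the vast swaths of
# internet text that large language models are trained on.
corpus = (
    "i am really hungry so i want to go to a restaurant . "
    "i want to go to a restaurant near me . "
    "i want to go to a movie ."
).split()

# Count how often each word follows each other word (a bigram model).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the statistically most likely next word, or None if unseen."""
    counts = following.get(word)
    return counts.most_common(1)[0][0] if counts else None

# "restaurant" follows "a" more often than "movie" does in this corpus,
# so the model suggests it — a prediction from statistics, not understanding.
print(predict_next("a"))  # → restaurant
```

Systems like LaMDA do something far more sophisticated, conditioning on long contexts rather than a single previous word, but the underlying task is the same pattern-matching prediction.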

In an interview, Gebru, who is the founder and executive director of the Distributed AI Research Institute, or DAIR, said that Lemoine is a victim of numerous companies claiming that conscious AI or artificial general intelligence — an idea that refers to AI capable of performing human-like tasks and interacting with us in meaningful ways — is not far off.

For example, she noted, Ilya Sutskever, co-founder and chief scientist of OpenAI, tweeted in February that “it may be that today’s large neural networks are slightly conscious.”

And last week, Blaise Aguera y Arcas, a vice president and fellow at Google Research, wrote in an article for The Economist that when he started using LaMDA last year, he “increasingly felt like I was talking to something intelligent.”

“What’s happening is there’s a race to use more data, more computation, to say you’ve created this general thing that knows everything, answers all your questions or whatever, and that’s the drum you’ve been beating,” Gebru said.

“So how surprised are you when that person is taking it to the extreme?”

In its statement, Google pointed out that LaMDA has undergone 11 distinct reviews under its AI principles, as well as “rigorous research and testing” related to quality, safety, and the ability to make fact-based claims.

“Of course, some in the wider AI community are considering the long-term possibility of sentient or general AI, but it makes no sense to anthropomorphize today’s conversational models, which are not sentient,” the company said.

“Hundreds of researchers and engineers have spoken with LaMDA and we are not aware of anyone else making the sweeping claims or anthropomorphizing LaMDA the way Blake did,” Google said.

Source: CNN Brasil