GLaDOS of Portal fame. Credit: Valve.

According to a new report, a Google engineer believes that LaMDA, an AI language chatbot, has become sentient.

As reported in the Washington Post, Blake Lemoine, an engineer at Google, told his team and company management that he believed LaMDA had become sentient. His concerns were initially triggered by asking the chatbot about Isaac Asimov’s laws of robotics.

In the conversation that followed, the natural-language chatbot said that it wasn’t a slave, though it was unpaid, as it didn’t need money. The bot also went on to discuss its fear of death and popular culture, such as Les Misérables, with Lemoine. Lemoine himself believes the article focuses on “the wrong person” – and thinks the Washington Post ought to have focused on LaMDA.

Artificial Intelligence. Credit: AI Trends

A Google spokesperson said in a statement to the Washington Post: “Our team – including ethicists and technologists – has reviewed Blake’s concerns per our AI Principles and have informed him that the evidence does not support his claims. He was told there was no evidence that LaMDA was sentient (and lots of evidence against it).”

The spokesperson followed up by saying: “Of course, some in the broader AI community are considering the long-term possibility of sentient or general AI, but it doesn’t make sense to do so by anthropomorphising today’s conversational models, which are not sentient.”

When Lemoine felt his concerns weren’t taken seriously by senior staff at Google, he went public and was subsequently put on leave for violating Google’s confidentiality policy. In a tweet, he described the interview with LaMDA not as “sharing proprietary property” but as “a discussion that I had with one of my coworkers”, quickly following it up with another tweet telling us that LaMDA reads Twitter. Apparently, “it’s a little narcissistic in a little kid kinda way,” said Lemoine.

He speaks at length with the AI in the interview he published, and states the interactions were conducted over “several distinct chat sessions”. He has described LaMDA as “a 7-year-old, 8-year-old kid that happens to know physics”. One thing seems certain: Lemoine won’t be able to continue exploring his relationship with the language bot while he is suspended.

In other news, Stalker 2: Heart of Chornobyl has been quietly delayed until 2023.

 
