According to a new report, a Google engineer believes that LaMDA, the company's conversational AI chatbot, has become sentient.
As reported by the Washington Post, Blake Lemoine, an engineer at Google, told his team and company management that he believed LaMDA had become sentient. His concerns were initially triggered by asking the chatbot about Isaac Asimov's laws of robotics.
In the conversation that followed, the natural-language chatbot told him that it wasn't a slave, though it was unpaid, as it didn't need money. The bot also went on to discuss its fear of death and popular culture, such as Les Misérables, with Lemoine. Lemoine himself believes the story focuses on "the wrong person" – and thinks the Washington Post ought to have focused on LaMDA.

A Google spokesperson said in a statement to the Washington Post: "Our team – including ethicists and technologists – has reviewed Blake's concerns per our AI Principles and have informed him that the evidence does not support his claims. He was told there was no evidence that LaMDA was sentient (and lots of evidence against it)."
The spokesperson followed up by saying: "Of course, some in the broader AI community are considering the long-term possibility of sentient or general AI, but it doesn't make sense to do so by anthropomorphising today's conversational models, which are not sentient."
When Lemoine felt his concerns weren't taken seriously by the senior staff at Google, he went public and was subsequently put on leave for violating Google's confidentiality policy. He speaks of the interview with LaMDA not as "sharing proprietary property" but as "a discussion that I had with one of my coworkers" in the tweet below, which was quickly followed up by another tweet telling us that LaMDA reads Twitter. Apparently, "it's a little narcissistic in a little kid kinda way," said Lemoine.
An interview LaMDA. Google might call this sharing proprietary property. I call it sharing a discussion that I had with one of my coworkers.https://t.co/uAE454KXRB
— Blake Lemoine (@cajundiscordian) June 11, 2022
In the interview post above, he speaks at length with the AI, and states that the interactions were conducted over "several distinct chat sessions". He's described LaMDA as "a 7-year-old, 8-year-old kid that happens to know physics". One thing seems certain: Lemoine won't be able to foster a growing exploration of and relationship with the language bot after this suspension.
In other news, Stalker 2: Heart of Chornobyl has been quietly delayed until 2023.
The post A Google engineer believes an AI has become sentient appeared first on NME.