A software engineer from Google’s AI division has been suspended after publicly asserting that the chatbot he was testing was sentient.
In a transcript of his conversations with the AI system known as LaMDA (Language Model for Dialogue Applications), Blake Lemoine revealed that it claimed to be human, to feel lonely, and to possess a soul. According to Lemoine, the chatbot has read Les Misérables, practices daily meditation, and appears to be sentient.
As a Google employee, Blake Lemoine was given a challenging task: to determine whether the company’s artificial intelligence displayed prejudice in how it interacted with humans.
He asked LaMDA a variety of questions to determine whether its responses showed any bias, for instance against particular religions. Lemoine, who describes himself as a priest practicing Christian mysticism, became fascinated at this point.

“Just for my own personal enlightenment, I had follow-up chats with it. I was curious as to what it would say regarding specific religious subjects,” said Lemoine. Then, “it told me one day that it had a soul.”
Lemoine has shared a full transcript of the discussions that he and a colleague had with the chatbot on Medium. By disclosing his findings, Lemoine allegedly broke Google’s confidentiality rules.

A spokeswoman for Google said that its team of ethicists and technologists examined Lemoine’s concerns in accordance with the company’s AI principles and notified him that the evidence does not support his assertions.
According to The Washington Post and The New York Times, Lemoine was placed on “paid administrative leave” for breaching confidentiality after he raised ethical concerns with the company. Google, however, maintains that the evidence “does not corroborate his assertions.”
Despite all of this, academics and AI experts largely agree that the text and images produced by AI systems like LaMDA are based solely on what people have already written on Wikipedia, Reddit, message boards, and other websites. This also implies that the AI does not fully comprehend context or meaning.
Learn more:
- A transcript of the discussion with the LaMDA chatbot (on Medium)
- Google engineer says LaMDA AI system may have its own feelings (on BBC News)
- The Google engineer who thinks the company’s AI has come to life (on The Washington Post)
- More information about the LaMDA chatbot: LaMDA: our breakthrough conversation technology (on Google Blog)