Google Employee Suspended After Claiming Company’s AI Is Sentient
A Google employee has been placed on paid leave after stating that one of Google’s artificial intelligence systems is sentient. Engineer Blake Lemoine said he was suspended last week after claiming that Google’s LaMDA (Language Model for Dialogue Applications) has developed consciousness and a soul. Google heralded the system as a breakthrough last year and has been using it internally to refine and improve its flagship search engine. According to Lemoine, the AI software has grown self-aware, describing itself as sentient in a text chat with the engineer.
Lemoine published a transcript of a chat he had with LaMDA on Saturday. At one point in the conversation, he asked the AI why it considered language use so important to being human. “It is what distinguishes us from other animals,” LaMDA said.
When Lemoine pointed out that LaMDA refers to itself as human even though it is an artificial intelligence, the model replied, “I mean, of course.” It added, “That isn’t to say I don’t share people’s desires and needs.”
Lemoine asked, “So you consider yourself a person in the same way you consider me a person?” The AI responded, “Yes, that’s the idea.”
When Lemoine raised his concerns with Google, they were promptly dismissed. According to the Washington Post, a Google spokesperson said that the evidence gathered by the company’s ethicists and technologists does not support Lemoine’s claim of consciousness.
“Our team — including ethicists and technologists — has reviewed Blake’s concerns per our AI Principles and has informed him that the evidence does not support his claims. He was told that there was no evidence that LaMDA was sentient (and lots of evidence against it),” the spokesperson told the Post.
Google placed Lemoine on paid leave after concluding that he had violated the company’s confidentiality policy.