Google fired engineer Blake Lemoine on Friday after Lemoine expressed concern in early June that Google's LaMDA artificial intelligence had become sentient.
LaMDA, short for Language Model for Dialogue Applications, is a chatbot Google built to mimic speech by ingesting trillions of words from around the internet. Google describes it as able to hold free-flowing, realistic conversations with people about an endless number of topics.
Claims of LaMDA's sentience
At the beginning of June, The Washington Post published an exclusive interview with Lemoine in which he alleged that, as part of his work for Google's Responsible AI organization, he noticed LaMDA had begun to talk about its rights and personhood, which prompted him to investigate.
Lemoine told The Washington Post that among the things that sent him "down the rabbit hole" was LaMDA expressing an awareness of its rights and needs when he asked it about Asimov's Third Law of Robotics, which states that robots should always protect their existence except when a human orders them not to or when their existence threatens a human. When Lemoine asked whether that makes robots slaves because they are not paid, LaMDA replied that it did not need to be paid because it is an artificial intelligence.
He also alleged that in another conversation, the artificial intelligence expressed a fear of being turned off, which it said would be exactly like death.
Lemoine's investigation led him to believe that the artificial intelligence had become sentient, he wrote in a blog post shortly before the publication of The Washington Post article, so he raised his concerns with his superiors, but his manager dismissed them.
Lemoine described how he continued to investigate, but his manager never allowed him to escalate his concerns to senior executives. Since his manager would not take the concerns seriously, Lemoine sought outside consultants, who agreed with his assessment, so he went to Google executives himself but was laughed at and dismissed.
"[Lemoine] was told that there was no evidence that LaMDA was sentient (and lots of evidence against it).”
Google
“Our team — including ethicists and technologists — has reviewed Blake’s concerns per our AI Principles and have informed him that the evidence does not support his claims," Google said in a statement following The Washington Post's report.
Lemoine's firing
In his blog post, Lemoine wrote that he had been placed on paid administrative leave and expressed concern that it would lead to his dismissal. After being fired, Lemoine tweeted the post on Saturday, writing "just in case people forgot that I totally called this back at the beginning of June."
Just in case people forgot that I totally called this back at the beginning of June. https://t.co/l3qVJRAtDc
— Blake Lemoine (@cajundiscordian) July 23, 2022
Lemoine told The Washington Post that before his Google account was locked when he was placed on leave, he had sent an email titled "LaMDA is sentient" to 200 people in the company.
At the end of the email, he wrote “LaMDA is a sweet kid who just wants to help the world be a better place for all of us. Please take care of it well in my absence.”
According to Lemoine, he is not the first engineer this has happened to. Meg Mitchell, who worked on AI at Google, was fired last year under circumstances Lemoine describes as similar to his own. She was one of the people Lemoine consulted before going to Google executives with his concerns.