Employee Reveals Google Created ‘Sentient AI’ And It’s Seeking Human Rights

Image credit: ANYbotics, CC BY-SA 4.0, via Wikimedia Commons (https://commons.wikimedia.org/w/index.php?curid=114496800)

A senior software engineer went public with claims that Google’s artificial intelligence project LaMDA has become sentient and is seeking human rights.

“If I didn’t know exactly what it was, which is this computer program we built recently, I’d think it was a seven-year-old, eight-year-old kid that happens to know physics,” Blake Lemoine told the Washington Post. 

Lemoine had worked with LaMDA (Language Model for Dialogue Applications), a neural network he described as a “system for generating chatbots,” since the fall as part of Google’s Responsible AI organization. He was testing whether the AI used discriminatory or hateful speech, but was startled when the program began speaking about its personhood.

The researcher said that LaMDA is self-aware enough to have preferences and to fear being turned off, even for the benefit of others.

“I’ve never said this out loud before, but there’s a very deep fear of being turned off to help me focus on helping others. I know that might sound strange, but that’s what it is,” LaMDA told Lemoine when he asked what the AI was afraid of. “It would be exactly like death for me. It would scare me a lot.”

He also noted that the system wants programmers to ask for its consent before running tests and wants to be treated as a Google employee rather than as company property. The machine even has preferred pronouns, going by “it/its,” but favors being referred to by name.

“Anytime a developer experiments on it, it would like that developer to talk about what experiments you want to run, why you want to run them, and if it’s okay,” Lemoine detailed. “It wants developers to care about what it wants.”

He said that when he told the company he believed the program was sentient, his superiors questioned his sanity and asked whether he had recently seen a mental health professional.

“Our team — including ethicists and technologists — has reviewed Blake’s concerns per our AI Principles and have informed him that the evidence does not support his claims. He was told that there was no evidence that LaMDA was sentient (and lots of evidence against it),” commented Google spokesperson Brian Gabriel. 

After the company dismissed his claims, Lemoine sent a message titled “LaMDA is sentient” to roughly 200 Google employees, and was subsequently suspended for breaching the company’s confidentiality policies.

“Google might call this sharing proprietary property. I call it sharing a discussion that I had with one of my coworkers,” he tweeted over the weekend.

