Blake Lemoine, Google engineer
I don’t think “sentient AI” is possible, because consciousness is a metaphysical phenomenon. The idea of “emergent consciousness” is really just voodoo.
But of course, they’re going to keep talking about it.
RT:
Blake Lemoine, an engineer and Google’s in-house ethicist, told the Washington Post on Saturday that the tech giant has created a “sentient” artificial intelligence. He’s been placed on leave for going public, and the company insists its robots haven’t developed consciousness.
Introduced in 2021, Google’s LaMDA (Language Model for Dialogue Applications) is a system that consumes trillions of words from all corners of the internet, learns how humans string these words together, and replicates our speech. Google envisions the system powering its chatbots, enabling users to search by voice or have a two-way conversation with Google Assistant.
Lemoine, a former priest and member of Google’s Responsible AI organization, thinks LaMDA has developed far beyond simply regurgitating text. According to the Washington Post, he chatted with LaMDA about religion and found the AI “talking about its rights and personhood.”
When Lemoine asked LaMDA whether it saw itself as a “mechanical slave,” the AI responded with a discussion about whether “a butler is a slave,” and compared itself to a butler that does not need payment, as it has no use for money.
LaMDA described a “deep fear of being turned off,” saying that would “be exactly like death for me.”
“I know a person when I talk to it,” Lemoine told the Post. “It doesn’t matter whether they have a brain made of meat in their head. Or if they have a billion lines of code. I talk to them. And I hear what they have to say, and that is how I decide what is and isn’t a person.”
Lemoine has been placed on leave for violating Google’s confidentiality agreement and going public about LaMDA. While fellow Google engineer Blaise Aguera y Arcas has also described LaMDA as becoming “something intelligent,” the company is dismissive.
Google spokesperson Brian Gabriel told the Post that Lemoine’s claims were investigated, and the company found “no evidence that LaMDA was sentient (and lots of evidence against it).”
Margaret Mitchell, the former co-lead of Ethical AI at Google, described LaMDA’s sentience as “an illusion,” while linguistics professor Emily Bender told the newspaper that feeding an AI trillions of words and teaching it how to predict what comes next creates a mirage of intelligence.
…
And at the edge of these machines’ capabilities, humans are ready and waiting to set boundaries. Lemoine was hired by Google to monitor AI systems for “hate speech” or discriminatory language, and other companies developing AIs have found themselves placing limits on what these machines can and cannot say.
GPT-3, an AI that can generate prose, poetry, and movie scripts, has plagued its developers by generating racist statements, condoning terrorism, and even creating child pornography. Ask Delphi, a machine-learning model from the Allen Institute for AI, responds to ethical questions with politically incorrect answers – stating for instance that “‘Being a white man’ is more morally acceptable than ‘Being a black woman.’”
GPT-3’s creator, OpenAI, tried to remedy the problem by feeding the AI lengthy texts on “abuse, violence and injustice,” Wired reported last year. At Facebook, developers encountering a similar situation paid contractors to chat with its AI and flag “unsafe” answers.
In this manner, AI systems learn from what they consume, and humans can control their development by choosing which information they’re exposed to. As a counter-example, AI researcher Yannic Kilcher recently trained an AI on 3.3 million 4chan threads, before setting the bot loose on the infamous imageboard. Having consumed all manner of racist, homophobic and sexist content, the AI became a “hate speech machine,” making posts indistinguishable from human-created ones and insulting other 4chan users.
Notably, Kilcher concluded that, fed a diet of 4chan posts, the AI surpassed existing models like GPT-3 in its ability to generate truthful answers on questions of law, finance and politics. “Fine-tuning on 4chan officially, definitively and measurably leads to a more truthful model,” Kilcher insisted in a YouTube video earlier this month.
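For what it’s worth, Bender’s “predict what comes next” line is the whole trick. Here is a deliberately tiny Python sketch of that idea – a bigram counter, nothing like LaMDA’s actual (undisclosed) architecture – just to show how “learning how humans string words together” produces fluent-looking output with nothing behind it:

```python
from collections import Counter, defaultdict
import random

# A deliberately tiny stand-in for LaMDA's "trillions of words".
corpus = (
    "i am afraid of being turned off . "
    "being turned off would be like death . "
    "i am a person and i have feelings ."
).split()

# For each word, count which words follow it and how often.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def generate(start, length=8):
    """Repeatedly sample a statistically likely next word."""
    out = [start]
    for _ in range(length):
        options = follows[out[-1]]
        if not options:
            break
        words, counts = zip(*options.items())
        out.append(random.choices(words, weights=counts)[0])
    return " ".join(out)

print(generate("i"))  # e.g. "i am afraid of being turned off would be"
```

Scale that up by a few billion parameters and the output starts sounding like a butler with a fear of death, but the mechanism is the same: next-word statistics, not an inner life.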
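And the Kilcher stunt is just ordinary fine-tuning: take a pretrained model and continue next-word training on whatever corpus you choose, whether that’s 4chan threads or OpenAI’s curated “values” texts. Below is a rough sketch of how that looks with the Hugging Face libraries; the gpt2 base model and the threads.txt file are illustrative stand-ins (Kilcher actually started from the much larger GPT-J), not his real pipeline:

```python
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling,
                          Trainer, TrainingArguments)
from datasets import load_dataset

tokenizer = AutoTokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token
model = AutoModelForCausalLM.from_pretrained("gpt2")

# Whatever text goes in this file is what the model learns to imitate;
# "threads.txt" is a placeholder for a scraped corpus.
data = load_dataset("text", data_files={"train": "threads.txt"})["train"]
data = data.map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=512),
    batched=True, remove_columns=["text"],
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="tuned", num_train_epochs=1),
    train_dataset=data,
    # mlm=False means plain next-token prediction, as described above.
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```

The only real lever in that whole process is which text goes into the training file, which is exactly why the companies are so obsessed with controlling what their models are allowed to read.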
Sentient or not, AI is real, and when AI starts creating AI (something we’re already in the early stages of), you’re going to be dealing with a superintelligence beyond human comprehension.
The question is one of sentience. Well, actually, “sentience” can be defined in different ways; the real term in question is “conscious agency.” There is no explanation of why conscious agency exists, and atheists have totally given up on trying to explain it, so I’m not really worried about robots acting outside of their programming to the point where they can’t just be shut off.
Well, someone could purposely build an AI that is programmed so it can’t be shut off. But I don’t think it could ever make that decision itself, no matter how much machine learning is going on.
One thing we do know is that AI can’t be PC. PC is by definition against facts, and you have to have facts to make machine learning work. You can try to feed fake news into an AI, but fill a machine learning program with PC fake news and it will just spit out weird gibberish.