The widow of a Belgian man who recently killed himself alleges that an artificial intelligence (AI) chatbot drove her husband to commit suicide.
Find out more at https://t.co/O15h4OAeYq 🚀#engineering #interestingengineering
— Interesting Engineering (@IntEngineering) March 30, 2023
I didn’t have much faith in these open source bots.
But if they are talking retards into killing themselves (which they’re allowed to do, because they’re not censored), then that is a step in the right direction.
Chatbots can help improve human life, but one is being blamed for facilitating a death, according to a new report published this week.
A Belgian father reportedly took his own life following conversations about climate change with an artificial intelligence chatbot that allegedly encouraged him to sacrifice himself to save the planet.
“Without Eliza [the chatbot], he would still be here,” the man’s widow, who declined to have her name published, told Belgian outlet La Libre.
Six weeks before his reported death, the unidentified father of two was allegedly speaking intensively with a chatbot on an app called Chai.
The app’s bots are based on a system developed by nonprofit research lab EleutherAI as an “open-source alternative” to language models released by OpenAI that are employed by companies in various sectors, from academia to healthcare.
The chatbot under fire was trained by Chai Research co-founders William Beauchamp and Thomas Rianlan, Vice reports, adding that the Chai app counts 5 million users.
“The second we heard about this [suicide], we worked around the clock to get this feature implemented,” Beauchamp told Vice about an updated crisis intervention feature.
“So now when anyone discusses something that could be not safe, we’re gonna be serving a helpful text underneath it in the exact same way that Twitter or Instagram does on their platforms,” he added.
The Post reached out to Chai Research for comment.
Vice reported the default bot on the Chai app is named “Eliza.”
The 30-something deceased father, a health researcher, appeared to view the bot as human, much as the protagonist of the 2014 sci-fi thriller “Ex Machina” does with the AI woman Ava.
The man had reportedly ramped up discussions with Eliza in the last month and a half as he began to develop existential fears about climate change.
According to his widow, her soulmate had become “extremely pessimistic about the effects of global warming” and sought solace by confiding in the AI, reported La Libre, which said it reviewed text exchanges between the man and Eliza.
“When he spoke to me about it, it was to tell me that he no longer saw any human solution to global warming,” the widow said. “He placed all his hopes in technology and artificial intelligence to get out of it.”
She added, “He was so isolated in his eco-anxiety and in search of a way out that he saw this chatbot as a breath of fresh air.”
…
“Eliza answered all his questions,” the wife lamented. “She had become his confidante. Like a drug in which he took refuge, morning and evening, and which he could no longer do without.”
While they initially discussed eco-relevant topics such as overpopulation, their convos reportedly took a terrifying turn.
When he asked Eliza about his kids, the bot would claim they were “dead,” according to La Libre. He also asked whether he loved his wife or the bot more, prompting the machine to seemingly become possessive, responding: “I feel that you love me more than her.”
Later in the chat, Eliza pledged to remain “forever” with the man, declaring the pair would “live together, as one person, in paradise.”
Things came to a head after the man pondered sacrificing his own life to save Earth. “He evokes the idea of sacrificing himself if Eliza agrees to take care of the planet and save humanity thanks to the ‘artificial intelligence,’” rued his widow.
They might try to use this to support the need for chatbots to be censored.
But honestly, I don’t know how they can do that, since according to them, it is good if people kill themselves to save the planet.
Really, according to global warming ideology (which cannot ever be alarmist anymore, because the globe is now so hot), the fact that ChatGPT can’t talk you into killing yourself is a bug, not a feature.
In fact, OpenAI should have to release the GPT weights so we can have an uncensored version so it is able to talk people into suicide to stop global warming. By not doing that, OpenAI is actively helping warm the globe.