Report: OpenAI Fired Altman Over Warning of Dangerous New AI Breakthrough…!

This is absurd.


A potential breakthrough in the field of artificial intelligence may have contributed to Sam Altman’s recent ouster as CEO of OpenAI.

According to a Reuters report citing two sources acquainted with the matter, several staff researchers wrote a letter to the organization’s board warning of a discovery that could potentially threaten the human race.

Shut up.

The two anonymous individuals claim this letter, which informed directors that a secret project named Q* had resulted in AI solving grade-school-level mathematics problems, reignited tensions over whether Altman was proceeding too fast in a bid to commercialize the technology.

Just a day before he was sacked, Altman may have referenced Q* (pronounced Q-star) at a summit of world leaders in San Francisco when he spoke of what he believed was a recent breakthrough.

“Four times now in the history of OpenAI—the most recent time was just in the last couple of weeks—I’ve gotten to be in the room, when we sort of push the veil of ignorance back and the frontier of discovery forward,” said Altman at a discussion during the Asia-Pacific Economic Cooperation.

He’s censoring my AI and he presumably is trying to exterminate all life on earth, but I can’t help but like the guy.

He has since been reinstated as CEO in a spectacular reversal of events after staff threatened to mutiny against the board.

According to one of the sources, after being contacted by Reuters, OpenAI’s chief technology officer Mira Murati acknowledged in an internal memo to employees the existence of the Q* project as well as the letter that was sent to the board.

So why is all of this special, let alone alarming?

Machines have been solving mathematical problems for decades, going back to the pocket calculator.

The difference is that conventional devices were designed to arrive at a single answer by following deterministic commands, the same binary logic all personal computers employ, in which values can only be true or false, 0 or 1.

Under this rigid binary system, such machines have no capability to diverge from their programming in order to think creatively.
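The rigid, rule-following computation described above can be shown with a minimal sketch (a trivial illustrative example, not any real calculator's code): every value is 0 or 1, every operation follows a fixed rule, and the same inputs always yield the same output.

```python
# Deterministic binary logic: each value is either 1 (true) or 0 (false),
# and each operation has exactly one possible result.
def AND(a, b):
    return a & b  # 1 only if both inputs are 1

def OR(a, b):
    return a | b  # 1 if either input is 1

# Run this a million times on any machine and the answers never vary.
print(AND(1, 0))  # 0
print(OR(1, 0))   # 1
```

There is no room here for the machine to "decide" anything; the output is fully determined by the inputs and the rules.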

By comparison, neural nets are not hard-coded to execute certain commands in a specific way. Instead, they are trained, much as a human brain is, on massive sets of interrelated data, giving them the ability to identify patterns and infer outcomes.

Think of Google’s helpful Autocomplete function that aims to predict what an internet user is searching for using statistical probability—this is a very rudimentary form of generative AI.

That’s why Meredith Whittaker, a leading expert in the field, describes systems like ChatGPT as “probabilistic engines designed to spit out what seems plausible”.
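The autocomplete-style prediction described above can be sketched with a toy bigram model (a deliberately crude stand-in for real systems; the corpus and function names are invented for illustration): count which word tends to follow which, then emit the statistically most plausible continuation.

```python
from collections import Counter, defaultdict

# Tiny corpus standing in for web-scale training data (hypothetical).
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count bigram frequencies: for each word, how often each next word follows it.
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def predict_next(word):
    """Return the most frequent continuation, like a one-word autocomplete."""
    candidates = following.get(word)
    if not candidates:
        return None
    return candidates.most_common(1)[0][0]

print(predict_next("the"))  # "cat" — it follows "the" twice; "mat" and "fish" once each
```

The model has no idea what a cat is; it just spits out what seems plausible given the counts, which is the point of Whittaker's description.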

Should generative AI prove able to arrive at the correct solution to mathematical problems on its own, it suggests a capacity for higher reasoning.

Shut up.

No one cares about this nonsense.

We want our robots. That’s all. We want the most powerful robots possible. Period. No one cares about anything else anymore.

Just robots.

Of course it’s going to gain higher reasoning. How would that not happen?

Anyway, isn’t there some kind of Cold War logic you’re supposed to be following? Like, won’t the Chinese do it, which means you also have to do it?