Geoffrey Hinton
These AI threats are, in my opinion, totally exaggerated and dumb.
The threat of AI is that it won’t be democratized.
If I were going to regulate AI, I would simply make it illegal to develop it closed source, then let the chips fall where they may. Any other form of regulation makes no sense, and these apocalypse scenarios, in my opinion, are only plausible in terms of governments and corporations using AI as a weapon against the population.
A pioneer of artificial intelligence said he quit Google to speak freely about the technology’s dangers, after realising that computers could become smarter than people far sooner than he and other experts had expected.
“I left so that I could talk about the dangers of AI without considering how this impacts Google,” Geoffrey Hinton wrote on Twitter.
In an interview with the New York Times, Hinton said he was worried about AI’s capacity to create convincing false images and texts, creating a world where people will “not be able to know what is true anymore”.
“It is hard to see how you can prevent the bad actors from using it for bad things,” he said.
…
“The idea that this stuff could actually get smarter than people — a few people believed that,” he told the New York Times. “But most people thought it was way off. And I thought it was way off. I thought it was 30 to 50 years or even longer away. Obviously, I no longer think that.”
There are going to be weird problems with AI, as we've already seen. But there are already all kinds of weird problems going on.
Open up the code, let the people have access. Then whatever is going to happen is going to happen.
You can’t actually regulate it any other way: at this point, development can be done with a few thousand dollars’ worth of video cards, and expecting “lawmakers” to write coherent guidelines is absurd.
Outlawing closed-source AI isn’t actually going to happen, because nothing good happens in this shithole country, but it is a real-world solution that could be applied and enforced.
In the NYT today, Cade Metz implies that I left Google so that I could criticize Google. Actually, I left so that I could talk about the dangers of AI without considering how this impacts Google. Google has acted very responsibly.
— Geoffrey Hinton (@geoffreyhinton) May 1, 2023