Jim’s Blog
January 29, 2014
AI is a hard problem, and even if we had a healthy society, we might still be stuck. That buildings are not getting taller, and that fabs are not getting cheaper while making smaller and smaller devices, is social decay. That we are stuck on AI is more a matter of its being high-hanging fruit.
According to Yudkowsky, we will have AI when computers have as much computing power as human brains.
The GPU on my desktop has ten times as much computing power as the typical male human brain, and it is not looking conscious.
Artificial consciousness will require an unpredictable and unforeseeable breakthrough, if it is possible at all: we now understand aging, but we do not understand consciousness.
A self driving car would be true AI, or a good step towards it, as would good machine translation. You do not mind the computer sometimes getting the meaning backwards when it does translation, but you would mind if a self driving car drove into someone or something.
With chess playing computers, it seemed that we were making real progress towards AI but it was eventually revealed that chess playing computers do not figure out chess moves the way humans do, and the difference matters, even if a computer can beat any human chess player.
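The way chess computers figure out moves can be sketched as minimax search: brute enumeration of the game tree with an evaluation at the leaves, not human-style pattern recognition. Here is a minimal sketch over a deliberately tiny game (Nim: take one to three stones, whoever takes the last stone wins) rather than chess itself; real engines add pruning and a heuristic evaluation at the search horizon, but the principle is the same.

```python
# Minimal minimax over a toy game (Nim: take 1-3 stones, whoever
# takes the last stone wins). Chess engines work the same way in
# principle, just with a vastly larger tree, alpha-beta pruning,
# and a heuristic evaluation where the search is cut off.

def minimax(stones, maximizing):
    """Return +1 if the side to move can force a win, else -1."""
    if stones == 0:
        # The previous player took the last stone and won,
        # so the side now to move has lost.
        return -1 if maximizing else 1
    moves = [minimax(stones - take, not maximizing)
             for take in (1, 2, 3) if take <= stones]
    return max(moves) if maximizing else min(moves)

print(minimax(4, True))   # every move leaves the opponent a win
print(minimax(5, True))   # winning: take 1, leaving the opponent 4
```

The point is what the code does not contain: no notion of a "good-looking" position, no patterns, no plans. It simply examines every line of play.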
With big data, it seemed that we were making real progress towards AI, but it was eventually revealed that just as chess playing computers do not figure out chess moves the way humans do, neither does big data.
When humans found the Rosetta stone, they were able to learn Egyptian. Having a fair number of words and sentences to start with, they could then figure out other words from context, giving them more words, and more words gave them more grammar, and more grammar and words enabled them to figure out yet more words from context, until they had pretty much all of Egyptian. Google’s machine translation learns to translate from a thousand Rosetta stones. It is not doing what we are doing, which is understanding meaning and expressing meaning.
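The bootstrapping loop described above can be sketched as a toy program: start from a small seed dictionary, then whenever a parallel sentence pair has exactly one unknown word on each side, infer that the two unknowns correspond, and repeat until nothing new can be learned. The corpus and the "Egyptian" words here are invented purely for illustration.

```python
# Toy sketch of the Rosetta-stone bootstrap: known words unlock
# unknown words from context, which unlock more words in turn.
# The word lists below are made up for the example.

seed = {"sun": "ra", "water": "mu"}
parallel = [
    (("sun", "river"), ("ra", "itru")),
    (("river", "water", "boat"), ("itru", "mu", "wia")),
]

known = dict(seed)
learned = True
while learned:                        # keep passing over the corpus
    learned = False
    for eng, egy in parallel:
        unknown_e = [w for w in eng if w not in known]
        unknown_g = [w for w in egy if w not in known.values()]
        if len(unknown_e) == 1 and len(unknown_g) == 1:
            known[unknown_e[0]] = unknown_g[0]  # infer by elimination
            learned = True

print(known)
```

Note that this toy, like the human decipherers and unlike Google's statistical approach, leans on a small seed of actual understood correspondences rather than on a mountain of data.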
Humans do not use big data, just as they do not examine a million possible chess moves.
Atop each Google self driving car is a great big device that performs precision distance measurements in all directions, giving the computer a three dimensional image of its surroundings. It is a big object because it collects a huge amount of data very fast. Human eyes just collect a small amount of data, from which the human constructs a three dimensional idea of his surroundings. Humans don’t need that much data to drive, and could not handle that much data if their senses provided it.
The Google car has a centimeter scale map of the world in which it is driving. It can see traffic lights because it knows exactly where traffic lights are in the world. If one day, someone moved the traffic lights six inches to the right, problem. If roadworks, problem. If a traffic accident, problem. And that is why the cars need someone in the driver’s seat.
Maybe big data will produce acceptable results for self driving cars, acceptable being that in the rare situations the computer cannot handle, the computer realizes that something is wrong, comes to a halt, and asks for human assistance. But it is not quite there yet, and even if it does produce acceptable results, it will not produce human-like, or even non-human-animal-like, performance.
What is missing seems to be consciousness, and we don’t really know what that is. Intelligence seems to require huge numbers of neurons. Consciousness seems to require a considerably smaller number.
The male human brain has around eighty six billion neurons.
The maximum output of any one neuron is about three hundred hertz, but only a small fraction of neurons run near that maximum at any one time, because if all of them did, the brain’s oxygen supply could not keep up.
The output of any one neuron is a summary of the data received by a large number of synapses, typically a thousand or so.
So we can suppose that the male human brain processes something like three terabytes per second if we look at neuron output, or something like three thousand terabytes per second if we look at neuron input.
The GPU on my desktop performs eight thousand single-precision gflops, which, at four bytes per result, is thirty-two terabytes per second.
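The back-of-envelope comparison above can be checked in a few lines. The active fraction of neurons is my own assumed figure, chosen only to reproduce the text's loose "three terabytes per second" estimate; every number here is a rough order-of-magnitude guess, matching the spirit of the estimate itself.

```python
# Back-of-envelope check of the brain-vs-GPU throughput comparison.
# The active_fraction value is an assumption picked to match the
# text's own loose estimate; all figures are order-of-magnitude.

neurons = 86e9             # neurons in a male human brain
max_rate = 300             # hertz, near-maximum firing rate
active_fraction = 0.1      # assumed fraction firing near maximum
synapses = 1000            # inputs summarized per neuron

# Treat each spike as roughly one byte of "output".
output_tb_s = neurons * max_rate * active_fraction / 1e12
input_tb_s = output_tb_s * synapses

gpu_tflops = 8             # eight thousand single-precision gflops
gpu_tb_s = gpu_tflops * 4  # four bytes per single-precision result

print(f"brain output ~{output_tb_s:.1f} TB/s")  # ~2.6 TB/s
print(f"brain input  ~{input_tb_s:.0f} TB/s")   # ~2580 TB/s
print(f"GPU results  ~{gpu_tb_s} TB/s")         # 32 TB/s
```

On these assumptions the GPU's raw result throughput lands an order of magnitude above the brain's spike output and two orders below its synaptic input, which is the sense in which the hardware comparison is already a wash.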
It seems obvious to me that the problem is not artificial intelligence. By any measure of intelligence, computers are highly intelligent. The problem is that they are not conscious. And the reason they are not conscious is that we do not have the faintest idea what consciousness is. Maybe it is something supernatural, maybe it is something perfectly straightforward, but we cannot see what it is for the same reason that a fish cannot see water.
Furthermore, interacting with non human animals, it seems obvious to me that a tarantula is conscious, as conscious as I am, and a GPU is not. The genetic basis for brain structure is the same in the tarantula as the human, which hints that the common ancestor of the protostomes and deuterostomes, which had a complex brain, but was otherwise scarcely more than a bag of jelly, was conscious – that consciousness was the big breakthrough that resulted in protostomes and deuterostomes dominating the earth, that consciousness is a single big trick, not a random grab bag of tricks.
From time to time, I read claims that someone is successfully emulating a cat brain or some such, but no one has successfully emulated the nervous system of Caenorhabditis elegans, which has three hundred and two neurons, nor has anyone successfully emulated the human retina, in which data flows through three layers of neurons with only local interactions, so that if we understood a single tiny patch of the retina, a dozen neurons or so, we would understand all of it. In this sense, neurons are doing something mysterious, in that quite small systems of neurons remain mysterious.
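What "three layers of neurons with only local interactions" means computationally can be sketched as a cascade of local filters: each unit in each layer reads only a small neighbourhood of the layer below, and every patch applies the same rule, which is why understanding one tiny patch would mean understanding the whole sheet. The kernels and input below are invented; this is an analogy for the retina's structure, not a model of real retinal computation.

```python
# Sketch of layered local processing: each layer's units depend only
# on a 3-wide neighbourhood of the layer below, and every patch runs
# the same rule. The kernel values and input are made up.

def local_layer(signal, kernel):
    """Each output depends only on a small window of the input."""
    k = len(kernel)
    return [sum(w * x for w, x in zip(kernel, signal[i:i + k]))
            for i in range(len(signal) - k + 1)]

photoreceptors = [0, 0, 1, 1, 1, 0, 0, 0, 1, 0]      # raw light levels
bipolar = local_layer(photoreceptors, [-1, 2, -1])   # centre-surround
ganglion = local_layer(bipolar, [1, 1, 1])           # local pooling
print(ganglion)
```

And yet even this kind of small, repetitive, purely local structure — a dozen neurons or so per patch — is exactly what we have failed to emulate in the real retina.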
This does not prove that consciousness is magic, but, as far as our current knowledge goes, it is indistinguishable from magic.