timtyler comments on Anthropomorphic AI and Sandboxed Virtual Universes - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Yes, but intentionally so. ;)
We are getting into a realm where it's important to understand background assumptions, which is why I listed some of mine. But notice I did qualify with 'reasonably efficient' and 'loosely inspired'.
'Perfect' is a pretty vague qualifier. If we want to talk in quantitative terms about efficiency and performance, we need to look at the brain in terms of circuit complexity theory and evolutionary optimization.
Evolution as a search algorithm is known (from what I remember from studying CS theory a while back) to be optimal in some senses: given enough time and some diversity-maintenance considerations, it can find global maxima in very complex search spaces.
For example, if you want to design a circuit for a particular task and you have a bunch of CPU time available, you can run a massive evolutionary search using a GA (genetic algorithm) or a variant thereof. The circuits you eventually get are the best known solutions, and in many cases they incorporate bizarre elements that are difficult even for humans to understand.
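The evolutionary search loop behind a GA is simple to sketch. Here is a minimal toy version in Python - the fitness function (maximize the number of 1-bits) and all parameters are illustrative placeholders, not the circuit-design setup described above:

```python
# Toy genetic algorithm: truncation selection, one-point crossover,
# point mutation. Fitness here is OneMax (count of 1-bits), a stand-in
# for whatever score a real circuit-design search would use.
import random

def evolve(bits=20, pop_size=30, generations=100, seed=0):
    rng = random.Random(seed)
    fitness = lambda genome: sum(genome)
    pop = [[rng.randint(0, 1) for _ in range(bits)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]            # keep the top half
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, bits)          # one-point crossover
            child = a[:cut] + b[cut:]
            if rng.random() < 0.1:                # occasional point mutation
                i = rng.randrange(bits)
                child[i] ^= 1
            children.append(child)
        pop = parents + children                  # elitism: best survive
    return max(fitness(g) for g in pop)

print(evolve())  # climbs toward the maximum possible fitness of 20
```

Given enough generations and enough diversity in the population, this loop reliably climbs toward the global maximum - which is the sense in which evolutionary search is "optimal, eventually."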
Now, that same algorithm is what has produced everything from insect ganglions to human brains.
Look at the wiring diagram of a cockroach or a bumblebee and compare it to what the animal actually does; if you then compare that circuit to computer circuits of equivalent complexity in the robots we can build, it is very hard to argue that the organic circuit design could be improved on. An insect ganglion's circuit organization is, in some sense, perfect (keep in mind that organic circuits run at less than 1 kHz). Evolution has had a long, long time to optimize these circuits.
Can we improve on the brain? Eventually we can obviously beat it by making bigger and faster circuits, but that would be cheating to some degree, right?
A more important question is: can we beat the cortex's generic learning algorithm?
The answer today is: no. Not yet. But the evidence trend looks like we are narrowing down on a space of algorithms similar to the cortex's (deep belief networks, hierarchical temporal memory, etc.).
Many of the key problems in science and engineering can be thought of as search problems. Designing a new circuit is a search in the vast space of possible arrangements of molecules on a surface.
So we can look at how the brain compares to our best algorithms in smaller, constrained search worlds. For smaller spaces (such as checkers), we have much simpler serial algorithms that can win by a landslide. For more complex search spaces, like chess, the favor shifts somewhat, but even desktop PCs can now beat grandmasters. Go up one more complexity jump to a game like Go and we are still probably years away from an algorithm that can play at top human level.
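The complexity jump between these games can be made concrete with naive game-tree size, roughly b^d for branching factor b and game length d. The figures below are commonly quoted ballpark estimates, not exact values:

```python
import math

# Rough average branching factor b and typical game length d for each game.
# These are ballpark numbers often cited in the game-AI literature.
games = {"checkers": (8, 70), "chess": (35, 80), "Go": (250, 150)}

magnitudes = {}
for name, (b, d) in games.items():
    # naive game-tree size ~ b**d; report it as a power of ten
    magnitudes[name] = int(d * math.log10(b))
    print(f"{name}: roughly 10^{magnitudes[name]} positions in the game tree")
```

Each step up the ladder multiplies the exponent, not the size - which is why an algorithm that dominates checkers can be helpless at Go.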
Most interesting real world problems are many steps up the complexity ladder past Go.
Also remember this very important principle: the brain runs at only a few hundred hertz. So computers are cheating - they are over a million times faster.
So for a fair comparison of the brain's algorithms, you would need to compare the brain to a large computer cluster that runs at only 500 Hz or so. Parallel algorithms do not scale nearly as well as serial ones, so this is a huge handicap - and yet the brain still wins by a landslide in any highly complex search space.
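The "over a million times faster" claim is simple arithmetic. A quick sketch, assuming a ~500 Hz ceiling for neural firing and a typical ~3 GHz desktop core (both round figures for illustration):

```python
# Serial-speed handicap: how many brain-speed (500 Hz) processors would it
# take to match the clock of one modern serial core? Both figures are
# round illustrative numbers.
brain_clock_hz = 500              # rough upper bound on neuron firing rate
cpu_clock_hz = 3_000_000_000      # a typical desktop core

speed_ratio = cpu_clock_hz // brain_clock_hz
print(speed_ratio)  # 6000000 - the serial machine is millions of times faster
```

That ratio is the size of the cluster a "fair" 500 Hz comparison would have to grant the machine side.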
Neurons do mainly calculate in analog, but that is because analog is vastly more efficient for probabilistic approximate calculation, which is what the brain is built on. A digital multiplier is many orders of magnitude less space-efficient than an analog multiplier - it pays a huge cost for its precision.
The brain is a highly optimized specialized circuit implementation of a very general universal intelligence algorithm. Also, the brain is Turing complete - keep that in mind.
mind != brain
The brain is the hardware and the algorithms; the mind is the actual learned structure - the data, the beliefs, ideas, personality, everything important. Very different concepts.
Google learns about the internet by making a compressed bitwise identical digital copy of it. Machine intelligences will be able to learn that way too - and it is really not much like what goes on in brains. The way the brain makes reliable long-term memories is just a total mess.
I wouldn't consider that learning.
Learning is building up a complex hierarchical web of statistical dimension reducing associations that allow massively efficient approximate simulation.
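One way to make that notion concrete is with the simplest possible statistical association learner: count feature co-occurrences across observations, then keep only the strong pairs as a compressed summary. This is a loose toy analogy, with invented data and an arbitrary threshold, not a model of cortex:

```python
# Minimal sketch of statistical association learning: tally which features
# co-occur, then reduce dimension by discarding weak (rare) associations.
# The observations and threshold below are invented for illustration.
from collections import Counter

observations = [
    {"wet", "rain"}, {"wet", "rain"}, {"wet", "hose"},
    {"dark", "night"}, {"dark", "night"}, {"dark", "storm"},
]

pairs = Counter()
for obs in observations:
    for a in sorted(obs):
        for b in sorted(obs):
            if a < b:                  # count each unordered pair once
                pairs[(a, b)] += 1

# "Dimension reduction": keep only associations seen at least twice.
strong = {pair for pair, count in pairs.items() if count >= 2}
print(strong)  # the frequent pairs survive the compression
```

The surviving pairs are a lossy, compressed model of the data - the kind of statistical regularity a hierarchy of such layers could stack into increasingly abstract associations.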
The term is more conventionally used as follows:
Psychology: the modification of behavior through practice, training, or experience.