AlexMennen comments on Superintelligence via whole brain emulation - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Comments (27)
Hanson makes so many assumptions that defy intuition. He's talking about a civilization with the capacity to support trillions of individuals, in which these individuals are essentially disposable and can be duplicated at a moment's notice, and he doesn't think evolutionary pressures are going to come into play? We've seen natural selection significantly improve human intelligence in as few as tens of generations. With ems, you could probably cook up tailor-made superintelligences in a weekend using nothing but the right selection pressures. Or, at least, I see no reason to be confident in the opposite claim.
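As a toy illustration of how quickly selection can move a heritable trait, here is a minimal sketch using the breeder's equation (R = h² · S). All the parameters here are made up for illustration; nothing about ems is actually modeled:

```python
import random

def next_generation(pop, h2=0.5, top_frac=0.1):
    """One round of truncation selection. By the breeder's equation,
    the next generation's mean shifts by heritability (h2) times the
    selection differential (S) of the chosen parents."""
    pop_mean = sum(pop) / len(pop)
    parents = sorted(pop, reverse=True)[:max(1, int(len(pop) * top_frac))]
    selection_diff = sum(parents) / len(parents) - pop_mean
    new_mean = pop_mean + h2 * selection_diff
    return [random.gauss(new_mean, 1) for _ in range(len(pop))]

random.seed(0)
pop = [random.gauss(0, 1) for _ in range(2000)]  # trait in SD units
for _ in range(10):
    pop = next_generation(pop)
print(sum(pop) / len(pop))  # mean is now several SDs above the start
```

With these (arbitrary) numbers, ten generations of selecting the top 10% shift the population mean by roughly 8–9 standard deviations, which is the qualitative point: under strong selection and high copy throughput, change per unit of wall-clock time can be very fast.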
He claims we don't know enough about the brain to select usefully nonrandom changes, yet assumes that we'll know enough to emulate it to high fidelity. This is roughly like saying that I can perfectly replicate a working car but somehow don't understand anything about how it works. And what about the fact that we already know some useful nonrandom changes we could make, such as the increased dendritic branching associated with certain intelligence-linked alleles?
It doesn't matter. DeepMind is planning to have a rat-level AI before the end of 2017, and Demis doesn't tend to make overly optimistic predictions. How many doublings is a rat away from a human?
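One crude way to put a number on that question is to compare neuron counts. The figures below are rough published estimates (~2×10⁸ neurons for a rat, ~8.6×10¹⁰ for a human), and whether neuron count is even the right metric is itself an assumption:

```python
import math

rat_neurons = 2e8       # rough estimate for a rat brain
human_neurons = 8.6e10  # widely cited estimate for a human brain

# Number of doublings needed to scale from rat to human neuron count
doublings = math.log2(human_neurons / rat_neurons)
print(round(doublings, 1))  # about 8.7 doublings
```

So on a pure neuron-count basis, a rat is roughly nine doublings away from a human, though capability need not scale with neuron count.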
He actually does think evolutionary pressures are going to be important, and in fact, in his book, he talks a lot about which directions he expects ems to evolve in. He just thinks that the evolutionary pressures, at least in the medium-term (he doesn't try to make predictions about what comes after the Em era), will not be so severe that we cannot use modern social science to predict em behavior.
Source? I'm aware of the Flynn effect, but I was under the impression that the consensus was that it is probably not due to natural selection.
To emulate a brain, you need to have a good enough model of neurons and synapses, be able to scan brains in enough detail, and have the computing power to run the scan. Understanding how intelligent behavior arises from the interaction of neurons is not necessary.
If that actually happens, I would take it as significant evidence that AGI will come before WBE. I am kind of skeptical that it will, though. It wouldn't surprise me that much if DeepMind produces some AI in 2017 that gets touted as a "rat-level AI" in the media, but I'd be shocked if the claim were justified.