
AlexMennen comments on Superintelligence via whole brain emulation - Less Wrong Discussion

Post author: AlexMennen 17 August 2016 04:11AM




Comment author: AlexMennen 18 August 2016 10:19:28PM 1 point

> he doesn't think evolutionary pressures are going to come into play?

He actually does think evolutionary pressures are going to be important, and in fact, in his book, he talks a lot about which directions he expects ems to evolve in. He just thinks that the evolutionary pressures, at least in the medium-term (he doesn't try to make predictions about what comes after the Em era), will not be so severe that we cannot use modern social science to predict em behavior.

> We've seen random natural selection significantly improve human intelligence in as few as tens of generations.

Source? I'm aware of the Flynn effect, but I was under the impression that the consensus is that it is probably not due to natural selection.

> He claims we don't know enough about the brain to select usefully nonrandom changes, yet assumes that we'll know enough to emulate them to high fidelity. This is roughly like saying that I can perfectly replicate a working car but I somehow don't understand anything about how it works.

To emulate a brain, you need to have a good enough model of neurons and synapses, be able to scan brains in enough detail, and have the computing power to run the scan. Understanding how intelligent behavior arises from the interaction of neurons is not necessary.
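To make "a good enough model of neurons" concrete: the leaky integrate-and-fire neuron is one of the simplest standard models used in computational neuroscience, and it illustrates how one can simulate a neuron's behavior without any theory of how intelligence arises from it. A minimal sketch (all parameter values here are illustrative textbook-style numbers, not tied to any actual WBE proposal):

```python
def simulate_lif(input_current, dt=0.001, tau=0.02, v_rest=-0.065,
                 v_reset=-0.065, v_threshold=-0.050, r_m=1e7):
    """Leaky integrate-and-fire neuron: return a list of spike times (s).

    input_current: per-timestep input current in amperes.
    """
    v = v_rest
    spikes = []
    for step, i_in in enumerate(input_current):
        # Membrane potential leaks toward rest and is driven by input.
        v += (-(v - v_rest) + r_m * i_in) / tau * dt
        if v >= v_threshold:
            spikes.append(step * dt)  # record spike time
            v = v_reset               # fire and reset
    return spikes
```

A constant suprathreshold current (e.g. `simulate_lif([2e-9] * 1000)`) produces a regular spike train. The point of the sketch is the one made above: the model reproduces input/output behavior at the level of a single cell, and says nothing about cognition.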

> It doesn't matter. Deepmind is planning to have a rat-level AI before the end of 2017 and Demis doesn't tend to make overly optimistic predictions.

If that actually happens, I would take that as significant evidence that AGI will come before WBE. I am kind of skeptical that it will, though. It wouldn't surprise me that much if Deepmind produces some AI in 2017 that gets touted as a "rat-level AI" in the media, but I'd be shocked if the claim is justified.