
WhySpace comments on Superintelligence via whole brain emulation - Less Wrong Discussion

Post author: AlexMennen 17 August 2016 04:11AM




Comment author: WhySpace 17 August 2016 03:14:09PM 2 points

the first uploads may be selected for high risk-tolerance

The obvious solution would be to use cryopreserved brains. Perhaps this would be necessary anyway, because of all the moral and legal problems with slicing up a living person's brain to take SEM images and map the connectome. This suggests that an extremely effective EA cause would be to hand out copies of Bostrom's Superintelligence at cryonics conventions.

It's not clear whether the cryonics community would be more or less horrified by defective spurs than the average person, though. Perhaps EAs could request to be revived early, at increased risk of information-theoretic death, if digital uploading is attempted and if self-modifying AI is a risk. Perhaps the ideal would be to have a steady stream of FAI-concerned volunteers at the front of the line, so that the first successes are likely to be cautious about such things. Ideally, we wouldn't upload anyone not concerned with FAI until we had an FAI in place, but that may not be possible if there is a coordination problem between several groups across the planet. A race to the bottom seems like a risk, if Moloch has his say.

A possible (but probably smaller) source of positive selection is that currently, people who are enthusiastic about uploading their brains correlate strongly with people who are concerned about AI safety.

I ordinarily wouldn’t make such a minor nitpick (because of this), but it might be an important distinction, so I’ll make an exception: People who worry about FAI are likely to also be enthusiastic about uploading, but I'm not sure if the average person who is enthusiastic about uploading is worried about FAI. For most people, "AI safety" means self-driving cars that don't hit people.

Comment author: AlexMennen 17 August 2016 04:23:28PM 0 points

People who worry about FAI are likely to also be enthusiastic about uploading, but I'm not sure if the average person who is enthusiastic about uploading is worried about FAI.

Right, that's why I said it would probably be a smaller source of selection, but the correlation is still strong, and goes in the preferred direction.

Comment author: WhySpace 17 August 2016 07:45:00PM 0 points

Ah, understood. We're on the same page, then.