At the FHI, we are currently working on a project on whole brain emulations (WBE), or uploads. One important question is whether getting to whole brain emulations first would make subsequent AGI creation
- more or less likely to happen,
- more or less likely to be survivable.
If you have any opinions or ideas on this, please submit them here. There's no need to present an organised overall argument; we'll be doing that. What would help most are unusual suggestions we might not have thought of for how WBE would affect AGI.
EDIT: Many thanks to everyone who suggested ideas here; they've been taken into consideration.
Isn't AIXI a counter-example to that? We could give it unlimited computing power, and it would still screw up badly, in large part due to a broken decision theory, right?
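For readers unfamiliar with it, a rough sketch of AIXI's defining equation (following Hutter's standard formulation; the indexing and horizon conventions below are approximate, not a quotation) makes the point concrete:

$$
a_k := \arg\max_{a_k} \sum_{o_k r_k} \cdots \max_{a_m} \sum_{o_m r_m} \big[ r_k + \cdots + r_m \big] \sum_{q \,:\, U(q, a_1 \ldots a_m) = o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}
$$

Here $U$ is a universal Turing machine, $q$ ranges over environment programs consistent with the interaction history, $\ell(q)$ is the length of $q$, and $o_i, r_i$ are observations and rewards. The agent weights every consistent environment by $2^{-\ell(q)}$ and picks the action maximising expected total reward up to horizon $m$. The definition presupposes unbounded computation and maximises an externally supplied reward signal, which is where the wireheading and reward-counterfeiting concerns enter, independently of how much computing power it has.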
Kinda, yes. Any problem is a decision theory problem - in a sense. However, we can get a long way before the wirehead problem, utility counterfeiting, or machines mining their own brains start causing trouble.
From the perspective of ordinary development these don't look like urgent issues - we can work on them once we have smarter minds. Nor do we need to worry much about failing to solve them: if we can't solve these problems, our machines won't work and nobody will buy them. Only security considerations would make them a priority at this stage.