At the FHI, we are currently working on a project around whole brain emulations (WBE), or uploads. One important question is whether getting to whole brain emulations first would make subsequent AGI creation
- more or less likely to happen,
- more or less likely to be survivable.
If you have any opinions or ideas on this, please submit them here. There's no need to present an organised overall argument; we'll be doing that. What would help most is any unusual suggestion we might not have thought of for how WBE would affect AGI.
EDIT: Many thanks to everyone who suggested ideas here, they've been taken under consideration.
It seems that ems would be much harder to make friendly than a general AI. That is, some of what we have to fear from unfriendly AIs is present in powerful WBEs too, and you can't build a WBE that is provably friendly from the start; you have to constrain it or teach it to be friendly (both of which are considered dangerous routes to friendliness).
I'm afraid I don't recall who I'm (poorly) paraphrasing here, but:
> Why would we expect emulated humans to be any Friendlier than a de novo AGI? At least no computer program has tried to maliciously take over the world yet; humans have been trying to pull that off for millennia!