At the FHI, we are currently working on a project around whole brain emulations (WBE), or uploads. One important question is whether getting to whole brain emulations first would make subsequent AGI creation
- more or less likely to happen,
- more or less likely to be survivable.
If you have any opinions or ideas on this, please submit them here. No need to present an organised overall argument; we'll be doing that. What would help most is any unusual suggestion we might not have thought of for how WBE would affect AGI.
EDIT: Many thanks to everyone who suggested ideas here, they've been taken under consideration.
Thinking that high-fidelity WBE, magically dropped in our laps, would be a big gain is quite different from thinking that pushing WBE development will make us safer. Many people who have considered these questions buy the first claim, but not the second, since the neuroscience needed for WBE can enable AGI first ("airplanes before ornithopters," etc.).
Eliezer has argued that:
1) High-fidelity emulations of specific people give better odds of avoiding existential risk than a distribution over "other AI, Friendly or not."
2) If you push forward the enabling neuroscience and neuroimaging for brain emulation, you're more likely to get brain-inspired AI or low-fidelity emulations first, which are unlikely to be safe and a lot worse than high-fidelity emulations or Friendly AI.
3) Pushing forward the enabling technologies of WBE accelerates timelines, leaving less time for safety efforts to grow and take effect before AI, or for better information-gathering on which path to push.
What about pushing on neuroscience and neuroimaging hard enough so that when there is enough computing power to do brain-inspired AI or low-fidelity emulations, the technology for high-fidelity emulations will already be available, so people will have little reason to do brain-inspired AI or low-fidelity emulations?