At the FHI, we are currently working on a project around whole brain emulations (WBE), or uploads. One important question is whether getting to whole brain emulations first would make subsequent AGI creation
- more or less likely to happen,
- more or less likely to be survivable.
If you have any opinions or ideas on this, please submit them here. No need to present an organised overall argument; we'll be doing that. What would help most is any unusual suggestion we might not have thought of for how WBE would affect AGI.
EDIT: Many thanks to everyone who suggested ideas here; they've been taken into consideration.
What about pushing on neuroscience and neuroimaging hard enough that, by the time there is enough computing power to do brain-inspired AI or low-fidelity emulations, the technology for high-fidelity emulations will already be available, so people will have little reason to pursue brain-inspired AI or low-fidelity emulations (especially if we heavily publicize the risks)?
Or what if we push on neuroimaging alone hard enough that, by the time neuron simulation technology advances far enough to do brain emulations, high-fidelity brain scans will already be readily available and people won't be tempted to use low-fidelity scans?
How hard have FHI/SIAI people thought about these issues? (Edit: Not a rhetorical question; it's hard to tell from the outside.)
I would think that brain-inspired AI would use less hardware than emulations (taking advantage of differences between the brain and the digital/serial computer environment, along with our software and machine learning knowledge).