At the FHI, we are currently working on a project on whole brain emulation (WBE), or uploads. One important question is whether getting to whole brain emulation first would make subsequent AGI creation
- more or less likely to happen,
- more or less likely to be survivable.
If you have any opinions or ideas on this, please submit them here. No need to present an organised overall argument; we'll be doing that ourselves. What would help most is any unusual suggestion that we might not have thought of for how WBE would affect AGI.
EDIT: Many thanks to everyone who suggested ideas here; they've been taken under consideration.
I would think that brain-inspired AI would use less hardware than WBE (taking advantage of differences between the brain and the digital/serial computer environment, along with our software and machine learning knowledge).
Different relative weightings of neuroimaging, computational neuroscience, and hardware would seem to give different probability distributions over brain-inspired AI, low-fi WBE, and hi-fi WBE (see the toy sketch below), but I don't see a likely track that goes in the direction of "probably WBE" without a huge (non-competitive) willingness to hold back on the part of future developers.
Of the three, neuroimaging seems most attractive to push (to me; Robin might say it's the worst because it leads to more abrupt/unequal transitions), but that doesn't mean one should push any of them.
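To make the weighting intuition concrete, here is a toy first-arrival model. Everything in it is a hypothetical illustration: the threshold numbers, the linear-progress dynamics, and the `first_arrival` helper are all made up for this sketch, not estimates of actual research requirements.

```python
# Toy first-arrival model: which technology becomes feasible first,
# given how effort is split between the three underlying fields?
# All numbers below are hypothetical, chosen only to illustrate the point.

THRESHOLDS = {
    # (neuroimaging, comp-neuro, hardware) levels each technology needs.
    "brain-inspired AI": (2.0, 8.0, 5.0),   # insight-heavy, light on scanning
    "low-fi WBE":        (8.0, 4.0, 9.0),   # scanning plus brute-force hardware
    "hi-fi WBE":         (9.0, 9.0, 7.0),   # needs everything fairly mature
}

def first_arrival(weights, dt=0.01, horizon=1000.0):
    """Advance each field at a rate proportional to its effort weight
    and return whichever technology crosses all its thresholds first."""
    imaging = comp_neuro = hardware = 0.0
    w_i, w_n, w_h = weights
    t = 0.0
    while t < horizon:
        imaging += w_i * dt
        comp_neuro += w_n * dt
        hardware += w_h * dt
        t += dt
        for tech, (ti, tn, th) in THRESHOLDS.items():
            if imaging >= ti and comp_neuro >= tn and hardware >= th:
                return tech, round(t, 2)
    return None, horizon

# Shifting effort between the fields changes which outcome arrives first.
print(first_arrival((1.0, 1.0, 1.0)))  # balanced progress
print(first_arrival((3.0, 0.5, 1.0)))  # neuroimaging-heavy push
print(first_arrival((0.5, 3.0, 1.0)))  # comp-neuro-heavy push
```

Under these made-up numbers, balanced or comp-neuro-heavy progress makes brain-inspired AI arrive first, while a neuroimaging-heavy push makes low-fi WBE arrive first, which is the kind of dependence on relative weightings described above.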
A number of person-months, but not person-years.
It looks to me as though Robin would prefer computing power to mature last. Neuroimaging research now could help bring that about.
http://www.overcomingbias.com/2009/11/bad-emulation-advance.html