ata comments on The Importance of Self-Doubt - Less Wrong
Good to hear from you :-)
My understanding is that at present there's a great deal of uncertainty about how future advanced technologies will develop (I've gotten the impression that e.g. Nick Bostrom and Josh Tenenbaum hold this view). Given that uncertainty, it's easy to imagine new data emerging over the next few decades that make it clear that pursuing whole-brain emulation (or some currently unimagined strategy) is a far more effective approach to existential risk reduction than Friendly AI research.
At present it looks to me like a positive singularity is substantially more likely to occur starting with whole-brain emulation than with Friendly AI.
Various people have suggested to me that initially pursuing Friendly AI might have higher expected value on the chance that it turns out to be easy. So I could imagine that it's rational for you personally to focus your efforts on Friendly AI research (EDIT: even if my estimate in the above point is correct). My remarks in the grandparent above were not intended as a criticism of your strategy.
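(To put that suggestion in rough expected-value terms, with a toy formalization that's mine rather than theirs: writing p for the probability that Friendly AI turns out to be easy,

E[pursue FAI first] ≈ p · V(FAI succeeds) + (1 − p) · V(learn it's hard, redirect effort)

so even a small p can make FAI-first attractive, provided the cost of finding out that it's hard, i.e. the gap between V(redirect) and the value of pursuing WBE from the start, is modest.)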
I would be interested in hearing more about your own thinking about the relative feasibility of Friendly AI vs. stable whole-brain emulation and current arbitrage opportunities for existential risk reduction, whether on or off the record.
That's an interesting claim, and you should post your analysis of it (e.g. the evidence and reasoning that you use to form the estimate that a positive singularity is "substantially more likely" given WBE).
There's a thread with some relevant points (both for and against) titled Hedging our Bets: The Case for Pursuing Whole Brain Emulation to Safeguard Humanity's Future. I hadn't looked at the comments until just now and still have to read them all, but see in particular a comment by Carl Shulman.
After reading all of the comments I'll think about whether I have something to add beyond them and get back to you.
You may want to read this paper I presented at FHI. Note that there's a big difference between the probability of risk conditional on WBE coming first or on AI coming first, and the marginal impact of effort. In particular, some of our uncertainty is about logical facts about the space of algorithms and the technology landscape, and some of it is about the extent and effectiveness of activism/intervention.
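To spell out that distinction (a rough back-of-the-envelope decomposition; the notation is just for this comment): let q = P(WBE comes first), p_W = P(good outcome | WBE first), and p_A = P(good outcome | AI first), so P(good outcome) = q·p_W + (1 − q)·p_A. A marginal unit of effort then changes this by roughly

ΔP(good outcome) ≈ Δq · (p_W − p_A) + q · Δp_W + (1 − q) · Δp_A

so even granting p_W > p_A, where to work depends on which of the Δ-terms your effort can actually move, and by how much; that's where the uncertainty about logical/technological facts versus the tractability of activism comes in.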
Thanks for the very interesting reference! Is it linked on the SIAI research papers page? I didn't see it there.
I appreciate this point, which you've made to me previously (and which appears in your comment that I linked above!).