ata comments on The Importance of Self-Doubt - Less Wrong

Post author: multifoliaterose 19 August 2010 10:47PM


Comment author: multifoliaterose 12 December 2010 09:26:23AM, 2 points

Good to hear from you :-)

  1. My understanding is that at present there's a great deal of uncertainty concerning how future advanced technologies are going to develop (I've gotten an impression that e.g. Nick Bostrom and Josh Tenenbaum hold this view). In view of such uncertainty, it's easy to imagine new data emerging over the next decades that makes it clear that pursuit of whole-brain emulation (or some currently unimagined strategy) is a far more effective strategy for existential risk reduction than Friendly AI research.

  2. At present it looks to me like a positive singularity is substantially more likely to occur starting with whole-brain emulation than with Friendly AI.

  3. Various people have suggested to me that initially pursuing Friendly AI might have higher expected value on the chance that it turns out to be easy. So I could imagine that it's rational for you personally to focus your efforts on Friendly AI research (EDIT: even if I'm correct in my estimation in the above point). My remarks in the grandparent above were not intended as a criticism of your strategy.

  4. I would be interested in hearing more about your own thinking about the relative feasibility of Friendly AI vs. stable whole-brain emulation and current arbitrage opportunities for existential risk reduction, whether on or off the record.

Comment author: ata 12 December 2010 10:45:53AM, 2 points

> At present it looks to me like a positive singularity is substantially more likely to occur starting with whole-brain emulation than with Friendly AI.

That's an interesting claim, and you should post your analysis of it (e.g. the evidence and reasoning that you use to form the estimate that a positive singularity is "substantially more likely" given WBE).

Comment author: multifoliaterose 12 December 2010 06:09:40PM, 1 point

There's a thread with some relevant points (both for and against) titled Hedging our Bets: The Case for Pursuing Whole Brain Emulation to Safeguard Humanity's Future. I hadn't looked at the comments until just now and still have to read them all, but see in particular a comment by Carl Shulman.

After reading all of the comments I'll think about whether I have something to add beyond them and get back to you.

Comment author: CarlShulman 14 December 2010 03:07:15PM, 3 points

You may want to read this paper I presented at FHI. Note that there's a big difference between the probability of risk conditional on WBE coming first (or AI coming first) and the marginal impact of effort. In particular, some of our uncertainty is about logical facts about the space of algorithms and the technology landscape, and some of it is about the extent and effectiveness of activism/intervention.

Comment author: multifoliaterose 14 December 2010 08:42:30PM, 2 points

Thanks for the very interesting reference! Is it linked on the SIAI research papers page? I didn't see it there.

> Note that there's a big difference between the probability of risk conditional on WBE coming first or AI coming first and marginal impact of effort.

I appreciate this point which you've made to me previously (and which appears in your comment that I linked above!).