loqi comments on Let's reimplement EURISKO! - Less Wrong

Post author: cousin_it 11 June 2009 04:28PM

Comment author: loqi 14 June 2009 08:02:51AM

One problem with this argument is how conjunctive it is: "(A) Progress crucially depends on breakthroughs in complexity management and (B) strong recursive self-improvement is impossible and (C) near-future human level AGI is neither dangerous nor possible but (D) someone working on it is crucial for said complexity management breakthroughs and (E) they're dissuaded by friendliness concerns and (F) our scientific window of opportunity is small."

My back-of-the-envelope, generous probabilities:

A. 0.5, this is a pretty strong requirement.

B. 0.9, for simplicity, giving your speculation the benefit of the doubt.

C. 0.9, same.

D. 0.1, a genuine problem of this magnitude is going to attract a lot of diverse talent.

E. 0.01, the most demanding element of the scenario: that the UFAI meme itself will crucially disrupt progress.

F. 0.05, this would represent a large break from our current form of steady scientific progress, and I haven't yet seen much evidence that it's terribly likely.

That product comes out to roughly 1:50,000. I'm guessing you think the actual figure is higher, and expect you'll contest those specific numbers, but would you agree that I've fairly characterized the structure of your objection to FAI?
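For concreteness, the back-of-the-envelope product can be checked in a few lines of Python (the letters and values are just the estimates listed above; nothing about the snippet is load-bearing):

```python
import math

# Generous back-of-the-envelope probabilities from the list above.
probs = {
    "A": 0.5,   # progress crucially depends on complexity-management breakthroughs
    "B": 0.9,   # strong recursive self-improvement is impossible (benefit of the doubt)
    "C": 0.9,   # near-future human-level AGI is neither dangerous nor possible (same)
    "D": 0.1,   # one particular researcher is crucial to those breakthroughs
    "E": 0.01,  # the UFAI meme itself crucially disrupts progress
    "F": 0.05,  # the scientific window of opportunity is small
}

p = math.prod(probs.values())
print(f"joint probability ~ {p:.3g}")  # roughly 2e-05
print(f"odds ~ 1:{round(1 / p):,}")    # roughly 1:50,000
```

The exact product is 0.00002025, i.e. about 1 in 49,400, which rounds to the 1:50,000 figure quoted.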