Donny comments on The Importance of Self-Doubt - Less Wrong

Post author: multifoliaterose, 19 August 2010 10:47PM




Comment author: [deleted] 03 September 2010 04:05:20PM, 1 point

Eliezer addresses point 2 in the comments of the article you linked to there. He has also previously answered the questions of whether he believes he personally could solve FAI and how far off it is -- here, for example.

Comment author: multifoliaterose 03 September 2010 04:26:59PM, 0 points

Thanks for the references, both of which I had seen before.

Concerning Eliezer's response to Scott Aaronson: I agree that there's a huge amount of uncertainty about these things and that it's possible AGI will develop unexpectedly, but I don't see how this points in the direction of AGI being likely to be developed within decades. It seems that one could have said the same thing Eliezer is saying in 1950, or even in 1800. See Holden's remarks about noncontingency here.

As for A Premature Word on AI, Eliezer seems to be saying that

  1. Even though the FAI problem is incredibly difficult, it's still worth working on because the returns attached to success would be enormous.

  2. Lots of people who have worked on AGI are mediocre.

  3. The field of AI research is not well organized.

Claim (1) might be true, and I suspect that claims (2) and (3) are both true. But by themselves these claims offer essentially no support for the idea that Eliezer is likely to be able to build a Friendly AI.