XiXiDu comments on Q&A #2 with Singularity Institute Executive Director - Less Wrong Discussion

Post author: lukeprog, 13 December 2011 06:48AM

Comment author: XiXiDu, 13 December 2011 04:20:46PM, 8 points

How much do members' predictions of when the singularity will happen differ within the Singularity Institute?

Eliezer Yudkowsky wrote:

John did ask about timescales and my answer was that I had no logical way of knowing the answer to that question and was reluctant to just make one up.

...

As for guessing the timescales, that actually seems to me much harder than guessing the qualitative answer to the question “Will an intelligence explosion occur?”

There is more there; it's best to start here and read all the way down to the bottom of that thread. I think that discussion captures some of the best arguments in favor of friendly AI in the most concise form you can currently find.