XiXiDu comments on [LINK] John Baez Interview with astrophysicist Gregory Benford - Less Wrong Discussion

2 Post author: multifoliaterose 02 March 2011 09:53AM

Comments (16)

You are viewing a single comment's thread.

Comment author: XiXiDu 04 March 2011 09:51:29AM *  2 points

I have read most of those things, and indeed I've been interested in AI and the possibility of a singularity at least since college (say, 1980).

That answers my questions. There are only two options: either there is no strong case for risks from AI, or a world-class mathematician like you didn't manage to understand the arguments after trying for 30 years. For me that means I can only hope to be much smarter than you (so as to understand the evidence myself), or else conclude that Yudkowsky et al. are less intelligent than you are. No offense, but what other option is there?

Comment author: endoself 10 March 2011 01:27:38AM 1 point

Understanding of the singularity is not a monotonically increasing function of intelligence.