John_Baez comments on: John Baez Interview with astrophysicist Gregory Benford - Less Wrong

Post author: multifoliaterose 02 March 2011 09:53AM

Comment author: John_Baez 04 March 2011 04:53:25AM  3 points

Since XiXiDu also asked this question on my blog, I answered over there.

> If I tell you that all you have to do is read the LessWrong Sequences and the publications written by the SIAI to agree that working on AI is much more important than climate change, are you going to take the time and do it?

I have read most of those things, and indeed I've been interested in AI and the possibility of a singularity at least since college (say, 1980). That's why I interviewed Yudkowsky.

Comment author: XiXiDu 04 March 2011 09:51:29AM  2 points

> I have read most of those things, and indeed I've been interested in AI and the possibility of a singularity at least since college (say, 1980).

That answers my questions. There are only two options: either there is no strong case for risks from AI, or a world-class mathematician like you didn't manage to understand the arguments after 30 years of interest in the subject. For me that means I can only hope to be much smarter than you (so I can evaluate the evidence myself) or conclude that Yudkowsky et al. are less intelligent than you are. No offense, but what other option is there?

Comment author: endoself 10 March 2011 01:27:38AM  1 point

Understanding of the singularity is not a monotonically increasing function of intelligence.