John_Baez comments on [LINK] John Baez Interview with astrophysicist Gregory Benford - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Since XiXiDu also asked this question on my blog, I answered over there.
I have read most of those things, and indeed I've been interested in AI and the possibility of a singularity at least since college (say, 1980). That's why I interviewed Yudkowsky.
That answers my questions. There are only two options: either there is no strong case for risks from AI, or a world-class mathematician like you didn't manage to understand the arguments after trying for 30 years. For me that means I can only hope to be much smarter than you (so I can understand the evidence myself), or else conclude that Yudkowsky et al. are less intelligent than you are. No offense, but what other option is there?
Understanding of the singularity is not a monotonically increasing function of intelligence.