The content of John Baez's This Week's Finds: Week 310:
Includes:
- A discussion of global warming and geoengineering.
- A reference to a paper by David Wolpert and Gregory Benford on Newcomb's paradox.
Note: The upcoming This Week's Finds: Week 311 is an interview with Eliezer Yudkowsky by John Baez.
That answers my questions. There are only two options: either there is no strong case for risks from AI, or a world-class mathematician like you didn't manage to understand the arguments after trying for 30 years. For me, that means I can only hope to be much smarter than you (so that I can understand the evidence myself), or else conclude that Yudkowsky et al. are less intelligent than you are. No offense, but what other option is there?
Understanding of the singularity is not a monotonically increasing function of intelligence.