XiXiDu wrote:
I really hope that John Baez will explain why he is more concerned with global warming than with risks from AI.
Since I was interviewing Yudkowsky rather than the other way around, I didn't explain my views - I was getting him to explain his. But the last part of this interview will touch on global warming, and if you want to ask me questions, that would be a great time to do it.
(Week 311 is just the first part of a multi-part interview.)
For now, you might be interested to read about Gregory Benford's assessment of the short-term future, which somewhat resembles my own.
Tim Tyler wrote:
It looks like a conventional "confused environmentalist" prioritisation to me.
I'm probably confused (who isn't?), but I doubt I'm conventional. If I were, I probably wouldn't be so eager to solicit the views of Benford, Yudkowsky and Drexler on my blog. A big problem is that different communities of intelligent people have very different views on which threats and opportunities are most important, and these communities don't talk to each other enough and think clearly enough to come to agreement, even on factual issues. I'd like to make a dent in that problem.
The list you cite is not the explanation that XiXiDu seeks.
Since I was interviewing Yudkowsky rather than the other way around, I didn't explain my views - I was getting him to explain his.
Would you be willing to write a blog post reviewing his arguments and explaining why you either reject them, don't understand them, or accept them and will start working to mitigate risks from AI? It would be valuable to have someone like you, who is not deeply involved with the SIAI (Singularity Institute) or LessWrong.com, write a critique of their arguments and objectives. I myself don't have the education (yet) to do so and ...
The content of John Baez's This Week's Finds: Week 310 includes:
Note: The upcoming This Week's Finds: Week 311 is an interview with Eliezer Yudkowsky by John Baez.