John_Baez comments on: John Baez Interview with astrophysicist Gregory Benford - Less Wrong

Post author: multifoliaterose 02 March 2011 09:53AM


Comment author: XiXiDu 02 March 2011 10:08:40AM 1 point

The upcoming This Week's Finds: Week 311 is an interview with Eliezer Yudkowsky by John Baez.

I've been waiting for this for so long. I really hope that John Baez will explain himself and argue for why he is more concerned with global warming than with risks from AI. So far there is literally no substantive third-party critique.

Comment author: John_Baez 03 March 2011 12:30:34AM 12 points

XiXiDu wrote:

> I really hope that John Baez will explain himself and argue for why he is more concerned with global warming than with risks from AI.

Since I was interviewing Yudkowsky rather than the other way around, I didn't explain my views - I was getting him to explain his. But the last part of this interview will touch on global warming, and if you want to ask me questions, that would be a great time to do it.

(Week 311 is just the first part of a multi-part interview.)

For now, you might be interested to read about Gregory Benford's assessment of the short-term future, which somewhat resembles my own.

Tim Tyler wrote:

> It looks like a conventional "confused environmentalist" prioritisation to me.

I'm probably confused (who isn't?), but I doubt I'm conventional. If I were, I probably wouldn't be so eager to solicit the views of Benford, Yudkowsky and Drexler on my blog. A big problem is that different communities of intelligent people have very different views on which threats and opportunities are most important, and these communities don't talk to each other enough and think clearly enough to come to agreement, even on factual issues. I'd like to make a dent in that problem.

The list you cite is not the explanation that XiXiDu seeks.

Comment author: XiXiDu 03 March 2011 09:39:04AM 1 point

> Since I was interviewing Yudkowsky rather than the other way around, I didn't explain my views - I was getting him to explain his.

Would you be willing to write a blog post reviewing his arguments and explaining why you reject them, don't understand them, or accept them and will start working to mitigate risks from AI? It would be valuable to have someone like you, who is not deeply involved with the SIAI (Singularity Institute for Artificial Intelligence) or LessWrong.com, write a critique of their arguments and objectives. I myself don't have the education (yet) to do so, and I would welcome any reassurance that would help me take action.

If you don't have the time to write a blog post, maybe you can answer just the following question: if someone were going to donate $100k and you could pick the charity, would you choose the SIAI? A yes/no answer if you're too busy, a short explanation if you have the time. Thank you!

> For now, you might be interested to read about Gregory Benford's assessment of the short-term future, which somewhat resembles my own.

You mean, "before we take on the galaxy, let’s do a smaller problem"? So you don't think that we'll have to face risks from AI before climate change takes a larger toll? You don't think that working on AGI means working on the best possible solution to the problem of climate change? And even if we had to start taking active measures against climate change in the 2020s, don't you think we should rather spend that time on AI, since we could survive a warmer world but not a runaway AI? Gregory Benford writes that "we still barely glimpse the horrors we could be visiting on our children and their grandchildren’s grandchildren". That sounds to me like he assumes there will be grandchildren, which might not be the case unless some kind of AGI takes care of a lot of the other problems we'll have to face soon.

> A big problem is that different communities of intelligent people have very different views on which threats and opportunities are most important, and these communities don't talk to each other enough and think clearly enough to come to agreement, even on factual issues.

If I tell you that all you have to do is read the LessWrong Sequences and the publications of the SIAI to agree that working on AI is much more important than climate change, are you going to take the time and do it?

Comment author: John_Baez 04 March 2011 04:53:25AM 3 points

Since XiXiDu also asked this question on my blog, I answered over there.

> If I tell you that all you have to do is read the LessWrong Sequences and the publications of the SIAI to agree that working on AI is much more important than climate change, are you going to take the time and do it?

I have read most of those things, and indeed I've been interested in AI and the possibility of a singularity at least since college (say, 1980). That's why I interviewed Yudkowsky.

Comment author: XiXiDu 04 March 2011 09:51:29AM 2 points

> I have read most of those things, and indeed I've been interested in AI and the possibility of a singularity at least since college (say, 1980).

That answers my questions. There are only two options: either there is no strong case for risks from AI, or a world-class mathematician like you didn't manage to understand the arguments after trying for 30 years. For me that means I can either hope to be much smarter than you (so that I can understand the evidence myself) or conclude that Yudkowsky et al. are less intelligent than you are. No offense, but what other option is there?

Comment author: endoself 10 March 2011 01:27:38AM 1 point

Understanding of the singularity is not a monotonically increasing function of intelligence.