Vladimir_Nesov comments on Should I believe what the SIAI claims? - Less Wrong

23 Post author: XiXiDu 12 August 2010 02:33PM


Comment author: Vladimir_Nesov 12 August 2010 08:40:31PM *  1 point [-]

My comment was specifically about the importance of FAI irrespective of existential risks, AGI or not. If we manage to survive at all, this is what we must succeed at. It also prevents all existential risks on completion, where theoretically possible.

Comment author: multifoliaterose 12 August 2010 08:47:57PM 1 point [-]

Okay, we had this back and forth before; I didn't understand you then, but now I do. I guess I was being dense before. Anyway, the probability of current action leading to FAI might still be sufficiently small that it makes sense to focus on other existential risks for the moment. And my other points remain.

Comment author: Vladimir_Nesov 12 August 2010 08:58:26PM *  4 points [-]

This is the same zero-sum thinking as in your previous post: people are currently not deciding between different causes, they are deciding whether to take a specific cause seriously. If you already contribute everything you could to a nanotech-risk-prevention organization, then we could ask whether switching to SIAI will do more good. But it's not the question usually posed.

As far as I can tell, working to build an AGI right now makes sense only if AGI is actually near (a few decades away).

Working to build AGI right now is certainly a bad idea, at best leading nowhere, at worst killing us all. SIAI doesn't work on building AGI right now, no no no. We need understanding, not robots. Like this post, say.

Comment author: multifoliaterose 12 August 2010 11:32:12PM *  5 points [-]

This is the same zero-sum thinking as in your previous post: people are currently not deciding between different causes, they are deciding whether to take a specific cause seriously. If you already contribute everything you could to a nanotech-risk-prevention organization, then we could ask whether switching to SIAI will do more good. But it's not the question usually posed.

I agree that in general people should be more concerned about existential risk and that it's worthwhile to promote general awareness of existential risk.

But there is a zero-sum aspect to philanthropic efforts. See the GiveWell blog entry titled Denying The Choice.

More to the point, I think that one of the major factors keeping people away from studying existential risk is that many of the people who are interested in existential risk (including Eliezer) have low credibility on account of expressing confident, apparently sensationalist claims without supporting them with careful, well-reasoned arguments. I'm seriously concerned about this issue.

If Eliezer can't explain why it's pretty obvious to him that AGI will be developed within the next century, then he should explicitly say something like "I believe that AGI will be developed over the next 100 years, but it's hard for me to express why, so it's understandable that people don't believe me" or "I'm uncertain as to whether or not AGI will be developed over the next 100 years."

When he makes unsupported claims that sound like the sort of thing that somebody would say just to get attention, he's actively damaging the cause of existential risk.

Comment author: timtyler 13 August 2010 08:19:20AM 0 points [-]

Re: "AGI will be developed over the next 100 years"

I list various estimates from those interested enough in the issue to bother giving probability density functions at the bottom of:

http://alife.co.uk/essays/how_long_before_superintelligence/

Comment author: multifoliaterose 13 August 2010 10:29:13AM 0 points [-]

Thanks, I'll check this out when I get a chance. I don't know whether I'll agree with your conclusions, but it looks like you've at least attempted to answer one of my main questions concerning the feasibility of SIAI's approach.

Comment author: CarlShulman 13 August 2010 11:58:46AM 1 point [-]

Those surveys suffer from selection bias. Nick Bostrom is going to try to get a similar survey instrument administered to a less-selected AI audience. There was also a poll at the AI@50 conference.

Comment author: timtyler 13 August 2010 08:10:42PM 0 points [-]

http://www.engagingexperience.com/2006/07/ai50_first_poll.html

If the raw data were ever published, that might be of some interest.

Comment author: gwern 13 August 2010 01:37:06PM 0 points [-]

Any chance of piggybacking questions relevant to Maes-Garreau on that survey? As you point out on that page, better stats are badly needed.

Comment author: CarlShulman 13 August 2010 02:01:47PM 1 point [-]

And indeed, I suggested to SIAI folk that all public record predictions of AI timelines be collected for that purpose, and such a project is underway.

Comment author: gwern 13 August 2010 02:19:10PM 0 points [-]

Hm, I had not heard about that. SIAI doesn't seem to do a very good job of publicizing its projects, or perhaps doesn't do a good job of finishing and releasing them.

Comment author: timtyler 13 August 2010 08:13:03AM 1 point [-]

Re: "Working to build AGI right now is certainly a bad idea, at best leading nowhere, at worst killing us all."

The marginal benefit of making machines smarter seems large; e.g., see automobile safety applications: http://www.youtube.com/watch?v=I4EY9_mOvO8

I don't really see that situation changing much anytime soon - there will probably be such marginal benefits for a long time to come.