Perplexed comments on What I would like the SIAI to publish - Less Wrong

Post author: XiXiDu 01 November 2010 02:07PM 27 points

Comment author: Perplexed 02 November 2010 06:56:22PM 3 points

> Do you think it is that simple to tell it to improve itself, yet hard to tell it when to stop? I believe it is the other way around: it is really hard to get it to self-improve and very easy to constrain this urge.

I think it is important to realize that there are two diametrically opposed failure modes which SIAI's FAI research is supposed to prevent. One is the case that has been discussed so far - that an AI gets out of control. But there is another failure mode which some people here worry about: that we stop short of FOOMing out of fear of the unknown (because FAI research is not yet complete), and that civilization is then destroyed by some other existential risk that we might have circumvented with the assistance of a safe FOOMed AI.

As far as I know, SIAI is not asking Goertzel to stop working on AGI. It is merely claiming that its own work is more urgent than Goertzel's. FAI research works toward preventing both failure modes.

Comment author: timtyler 03 November 2010 07:48:02AM 2 points

> But there is another failure mode which some people here worry about: that we stop short of FOOMing out of fear of the unknown (because FAI research is not yet complete), and that civilization is then destroyed by some other existential risk that we might have circumvented with the assistance of a safe FOOMed AI.

I haven't seen much worry about that. Nor does it seem very likely, since research seems very unlikely to stop or slow down.

Comment author: CarlShulman 04 November 2010 08:22:11PM 1 point

I agree with this.

Comment author: Perplexed 03 November 2010 03:39:58PM 1 point

I see that worry all the time, with the role of "some other existential risk" being played by a reckless, FOOMing uFAI.

Comment author: timtyler 03 November 2010 03:45:57PM 0 points

Oh, right. I assumed you meant some non-FOOM risk.

It was the "we stop short of FOOMing" phrasing that made me think that.

Comment author: shokwave 03 November 2010 07:53:42AM 1 point

Except in the case of an existential threat actually being realised, which most definitely does stop research. FAI subsumes most other existential risks (because an FAI could handle them better than we can, assuming we can handle the risk of AI itself), and a lot of other things besides.

Comment author: timtyler 03 November 2010 08:22:03AM 0 points

Most of my probability mass has some pretty amazing machine intelligence within 15 years. The END OF THE WORLD before that happens doesn't seem very likely to me.