timtyler comments on What I would like the SIAI to publish - Less Wrong
I haven't seen much worry about that. Nor does it seem very likely, since research seems very unlikely to stop or slow down.
I agree with this.
I see that worry all the time. With the role of "some other existential risk" being played by a reckless FOOMing uFAI.
Oh, right. I assumed you meant some non-FOOM risk.
It was the "we stop short of FOOMing" that made me think that.
Except in the case of an existential threat being realised, which most definitely does stop research. FAI subsumes most existential risks (because the FAI can handle them better than we can, assuming we can handle the risk of AI itself), and a lot of other things besides.
Most of my probability mass has some pretty amazing machine intelligence arriving within 15 years. The END OF THE WORLD happening before then doesn't seem very likely to me.