Emile comments on Tallinn-Evans $125,000 Singularity Challenge

Post author: Kaj_Sotala 26 December 2010 11:21AM


Comment author: Emile 29 December 2010 03:47:18PM 1 point

This is the part that differs most from my own probability estimates:

AI going FOOM being an x-risk: 5%

So, do you think the default case is a friendly AI? Or at least an innocuous AI? Or that friendly AI is easy enough that whoever first creates a fooming AI will get the friendliness part right with no influence from the SIAI?

Comment author: XiXiDu 29 December 2010 04:12:34PM * 0 points

No, I do not believe that the default case is friendly AI. But I believe that AI going FOOM is, if possible at all, very hard to accomplish; surely everyone here agrees on that. At the moment, though, I do not share the opinion that friendliness, that is, implementing scope boundaries, is a very likely failure mode. I see it this way: if one can figure out how to create an AGI that FOOMs (no, I do not think AGI implies FOOM), then one has a thorough comprehension of intelligence and its associated risks. I just don't see a group of researchers (and I don't believe a mere group is enough anyway) being smart enough to create an AGI that does FOOM, yet somehow failing to limit its scope. Please consider reading this comment, where I cover this topic in more detail. That is why I believe that only 5% of all AIs going FOOM would pose an existential risk to all of humanity. That is my current estimate; I'll of course update on new evidence (e.g. arguments).