NancyLebovitz comments on Tallinn-Evans $125,000 Singularity Challenge - Less Wrong

Post author: Kaj_Sotala 26 December 2010 11:21AM




Comment author: NancyLebovitz 30 December 2010 10:00:19AM 2 points

This might be a movie-threat notion; if so, I'm sure I'll be told.

I assume the operational definition of FOOM is that the AI is moving faster than human ability to stop it.

As nominally human-controlled systems become more automated, it becomes easier for an AI to affect them. Any humans who could threaten such an AI might then find themselves distracted or worse by legal, financial, reputational (via social networks), and possibly medical problems. Nanotech isn't required.

Comment author: JoshuaZ 30 December 2010 02:46:28PM 0 points

Yes, that seems like a movie-threat notion to me. If an AI has the power to do those things to arbitrary people, it can likely scale up from there to full control so quickly that it shouldn't need to bother with such steps, although it is minimally plausible that a slow-growing AI might need to do that.