timtyler comments on Safety Culture and the Marginal Effect of a Dollar - Less Wrong

Post author: jimrandomh 09 June 2011 03:59AM




Comment author: timtyler 12 June 2011 09:57:50PM 0 points

A truly general AI, though, almost by definition, would be able to think up countless ways of overpowering you. It's very unlikely that you could adequately guard against all of those ways, and the AI only needs to succeed once to cause an existential risk.

It isn't likely to be you vs. the superintelligence, though. People keep imagining that scenario, and then wringing their hands. The restraints on intelligent agents while they are being developed and tested are likely to consist of a prison built by the previous generation of intelligent agents, with those agents acting as guards.