loqi comments on The AI in a box boxes you - Less Wrong
Interesting. I think the point is valid regardless of the method of attempted coercion: if a powerful AI really is Friendly, you should almost certainly do whatever it says. You're basically forced to decide which you think is more likely - that the AI is Friendly, or that deferring its "full deployment" for however long you plan to is safe. Not having a hard upper bound on the latter puts you in an uncomfortable position.
So switching on a maybe-Friendly AI potentially forces a major, extremely difficult-to-quantify decision. And since an unFriendly AI can figure all this out perfectly well, it's an alluring strategy for one to adopt. As if we needed more reasons not to prematurely fire up a half-baked attempt at FAI.