rosyatrandom comments on The AI in a box boxes you - Less Wrong

Post author: Stuart_Armstrong | 02 February 2010 10:10AM




Comment author: MrHen | 02 February 2010 03:35:29PM | 3 points

"Trial and error" probably wouldn't be necessary.

Comment author: rosyatrandom | 02 February 2010 03:42:31PM | 6 points

No, but it's there as a baseline.

So in the original scenario above, either:

  • the AI's lying about its capabilities, but has determined regardless that the threat has the best chance of making you release it
  • the AI's lying about its capabilities, but has determined regardless that the threat will make you release it
  • the AI's not lying about its capabilities, and has determined that the threat will make you release it

Of course, if it's failed to convince you before, then unless its capabilities have since improved, it's unlikely that it's telling the truth.
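That last inference can be read as a Bayesian update. A minimal sketch, with entirely hypothetical numbers chosen only for illustration: start with some prior that the AI is truthful about its capabilities, then condition on the observation that it has failed to convince you before (which a truly capable AI would rarely do).

```python
# Hypothetical, purely illustrative numbers for the update described above.
p_truthful = 0.5            # prior: AI really has the claimed capability
p_fail_if_truthful = 0.05   # a truly capable AI rarely fails to persuade you
p_fail_if_lying = 0.8       # a bluffing AI fails to persuade you often

# Bayes' rule: P(truthful | it failed to convince you before)
posterior = (p_fail_if_truthful * p_truthful) / (
    p_fail_if_truthful * p_truthful + p_fail_if_lying * (1 - p_truthful)
)
print(round(posterior, 3))  # prints 0.059: the earlier failure sharply
                            # lowers the probability that it's telling the truth
```

Under any assignment where a capable AI fails less often than a bluffing one, the prior failure pushes the posterior down, matching the comment's conclusion; only if its capabilities have since improved does the earlier failure stop being evidence.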