linkhyrule5 comments on The AI in a box boxes you - Less Wrong

102 Post author: Stuart_Armstrong 02 February 2010 10:10AM



Comment author: linkhyrule5 14 July 2013 02:00:06AM 0 points

... I'm fairly sure this would be a bluff.

Consider this: you decline the bargain and walk away.

The AI... spends its limited processing time simulating your torture for a few thousand years anyway?

Of course not. That gains it absolutely nothing; it could instead spend those resources on planning its next attempt. Doubly so, since it cannot prove to you that several million copies of you actually exist - its own intelligence defeats it here: no matter how convincing the proof, it is far more likely that the AI has outsmarted you and is spending those cycles on something more productive.

In which case, you're probably not even in the simulation, because there's no point in simulating you and no way of proving to outside-you that simulation-you actually exists for longer than a millisecond at a time.

So my answer is that the AI, assuming it's any good at simulating human brains, never makes this proposal in the first place.

Comment author: linkhyrule5 21 July 2013 02:20:46AM 2 points

Wait, never mind - this is the entire point of the concept of "precommitting": the AI binds itself to carry out the threat even when doing so gains it nothing after the fact, precisely so that the threat stays credible.