Benja comments on Self-modification is the correct justification for updateless decision theory - Less Wrong

Post author: Benja 11 April 2010 04:39PM

Comment author: FAWS 12 April 2010 12:48:28AM

> Wait, are you thinking I'm thinking I can determine the umpteenth digit of pi in my scenario? I see your point; that would be insane.
>
> My point is simply this: if your existence (or any other observation of yours) allows you to infer the umpteenth digit of pi is odd, then the AI you build should be allowed to use that fact, instead of trying to maximize utility even in the logically impossible world where that digit is even.

Actually you were. There are four possibilities:

  • The AI will press the button, the digit is even
  • The AI will not press the button, the digit is even, you don't exist
  • The AI will press the button, the digit is odd, the world will go kaboom
  • The AI will not press the button, the digit is odd.

Updating on the fact that the second possibility is not true is precisely equivalent to concluding that if the AI does not press the button the digit must be odd, and ensuring that the AI does not press it means choosing the digit to be odd.
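This update can be checked mechanically. A minimal sketch (made-up booleans, not anyone's actual formalism): enumerate the four worlds, strike out the second, and verify that exactly the worlds satisfying (AI presses button \/ digit odd), i.e. (~press => odd), survive.

```python
# Worlds are (presses_button, digit_is_odd) pairs; the four possibilities
# above are the four combinations.
from itertools import product

worlds = list(product([True, False], [True, False]))

# Update: strike out the second possibility (doesn't press AND digit even),
# i.e. keep only worlds where (press OR odd) holds.
surviving = [(press, odd) for press, odd in worlds if press or odd]

# In every surviving world, "not press" implies "digit odd":
assert all(odd for press, odd in surviving if not press)
print(surviving)  # [(True, True), (True, False), (False, True)]
```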

If you already know that the digit is odd independently of the AI's choice, the whole thing reduces to a high-stakes counterfactual mugging (provided that whether Omega destroys the world when the digit is even depends on what an AI that knows the digit to be odd would do; otherwise there is no dilemma in the first place).

Comment author: Benja 12 April 2010 01:53:10AM

I'll grant you that my formulation had a serious bug, but--

> There are four possibilities:
>
>   • The AI will press the button, the digit is even
>   • The AI will not press the button, the digit is even, you don't exist
>   • The AI will press the button, the digit is odd, the world will go kaboom
>   • The AI will not press the button, the digit is odd.
>
> Updating on the fact that the second possibility is not true is precisely equivalent to concluding that if the AI does not press the button the digit must be odd

Yes, if by that sentence you mean the logical proposition (AI presses button => digit is odd), also known as (digit odd \/ ~AI presses button).

> and ensuring that the AI does not press it means choosing the digit to be odd.

I'll only grant that if I actually end up building an AI that presses the button, and the digit is even, then Omega is a bad predictor, which would make the problem statement contradictory. Which is bad enough, but I don't think I can be accused of minting causality from logical implication signs...

In any case,

> If you already know that the digit is odd independently of the AI's choice, the whole thing reduces to a high-stakes counterfactual mugging

That's true. I think that's also what Wei Dai had in mind in the great filter post (and not the ability to change Omega's coin to tails by not pressing the button!). My position is that you should not pay in counterfactual muggings whose counterfactuality was already known prior to your decision to become a timeless decision theorist, although you should program (yourself | your AI) to pay in counterfactual muggings you don't yet know to be counterfactual.
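This distinction can be illustrated with a toy expected-value calculation, assuming the standard counterfactual-mugging payoffs (pay $100 on heads; receive $10,000 on tails iff you would have paid on heads). The numbers are illustrative assumptions, not from the thread.

```python
# Toy counterfactual mugging with assumed payoffs: on heads you are asked
# to pay $100; on tails Omega pays you $10,000 iff you would have paid
# on heads.
def expected_value(pays_on_heads, p_heads=0.5):
    heads_payoff = -100 if pays_on_heads else 0
    tails_payoff = 10_000 if pays_on_heads else 0
    return p_heads * heads_payoff + (1 - p_heads) * tails_payoff

# Before the coin outcome is known, committing to pay is the better program:
print(expected_value(True), expected_value(False))  # 4950.0 0.0

# But if the counterfactuality is already known -- the coin is certainly
# heads -- paying is a pure loss; this is the case where you should not pay:
print(expected_value(True, p_heads=1.0))  # -100.0
```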