Eliezer_Yudkowsky comments on Avoiding doomsday: a "proof" of the self-indication assumption - Less Wrong

18 Post author: Stuart_Armstrong 23 September 2009 02:54PM




Comment author: Eliezer_Yudkowsky 26 September 2009 06:21:58PM 1 point

The decision diagonal in TDT is a simple computation (at least, it looks simple assuming large complicated black-boxes, like a causal model of reality) and there's no particular reason that equation can only execute in sentient contexts. Faced with Omega in this case, I take the $1 - there is no reason for me not to do so - and conclude that Omega incorrectly executed the equation in the context outside my own mind.
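As a loose illustration of the point above (hypothetical names and a stand-in payoff model, not TDT's actual formalism), the idea that the same decision equation can be executed both by the agent and by Omega, in a non-sentient context, might be sketched as:

```python
def decision_equation(causal_model, options):
    """A deterministic decision rule: given a (black-box) causal
    model of the situation, pick the option with the highest
    modeled payoff. Hypothetical stand-in for the TDT diagonal."""
    return max(options, key=causal_model)

# A toy causal model: taking the $1 is worth 1, refusing is worth 0.
model = {"take_dollar": 1, "refuse": 0}.get
options = ["take_dollar", "refuse"]

# The agent runs the equation "in a sentient context"...
agent_choice = decision_equation(model, options)

# ...and Omega can execute the very same equation outside the
# agent's mind, in a non-sentient context, and get the same answer.
omega_prediction = decision_equation(model, options)

assert agent_choice == omega_prediction == "take_dollar"
```

Nothing in the equation itself requires a mind to run it; any process with the same model and the same rule produces the same output.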

Even if we suppose that "cogito ergo sum" presents an extra bit of evidence to me, whereby I truly know that I am the "real" me and not just the simple equation in a nonsentient context, it is still easy enough for Omega to simulate that equation plus the extra (false) bit of info, thereby recorrelating it with me.
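A hypothetical sketch of the recorrelation move (invented names; the "cogito" bit is an illustrative extra input, not part of any stated formalism): even if the decision rule conditions on a bit asserting "I am the real me," Omega can simply feed its simulated copy that same bit set to true, so the simulation's output stays correlated with the real agent's.

```python
def decision_equation(options, model, i_am_real):
    """Decision rule that also conditions on a 'cogito ergo sum' bit.
    Hypothetical sketch: the bit could in principle change the choice."""
    if not i_am_real:
        return "refuse"
    return max(options, key=model)

model = {"take_dollar": 1, "refuse": 0}.get
options = ["take_dollar", "refuse"]

# The real agent runs the equation with the bit genuinely true.
agent = decision_equation(options, model, i_am_real=True)

# Omega simulates the equation *plus* the (false-in-this-context)
# bit, thereby recorrelating the simulation with the real agent.
simulation = decision_equation(options, model, i_am_real=True)

assert agent == simulation == "take_dollar"
```

The extra bit buys no decorrelation: whatever inputs the real agent conditions on, a simulator that supplies the same inputs reproduces the same decision.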

If Omega really follows the stated algorithm for Omega, then the decision equation never executes in a sentient context. If it executes in a sentient context, then I know Omega wasn't following the stated algorithm. Just like if Omega says "I will offer you this $1 only if 1 = 2" and then offers you the $1.