GuySrinivasan comments on A problem with Timeless Decision Theory (TDT) - Less Wrong

Post author: Gary_Drescher | 04 February 2010 06:47PM


Comment author: GuySrinivasan | 07 February 2010 11:28:13PM

I've finally figured out where my intuition on that was coming from (and I don't think it saves TDT). Suppose for a moment that you were omniscient except about the relative integrals Vk (1) over the measures of the components of the wavefunction which

  • had a predictor that chose an i such that pi[i] = k, and
  • would evolve into components containing a you (2) to whom the predictor would present the boxes, the question, etc., but would not tell its choice of i.
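As an aside (my illustration, not part of the original comment): the reason a uniform ignorance prior on pi[x] is natural for large x is that pi's decimal digits appear, empirically, to be uniformly distributed. A minimal Python check of that, assuming the mpmath library is available:

```python
# Empirical check (not a proof) that the ignorance prior on pi[x] is
# roughly uniform: digit frequencies in the first 10,000 decimal digits.
from collections import Counter
from mpmath import mp

N = 10_000
mp.dps = N + 10                   # working precision, with guard digits
digits = str(mp.pi)[2:2 + N]      # the first N digits after "3."

freq = Counter(digits)
for d in sorted(freq):
    print(d, freq[d] / N)         # each frequency comes out near 0.10
```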

Here my ignorance prior on pi[x] for large values of x happens to be approximately equivalent to your ignorance prior over a certain ratio of integrals (the relative "sum" of measures of the relevant components). When you implement C = one-box, you choose that the relative sums of measures of the yous that get $0, $1000, $1,000,000, and $1,001,000 are (3):

  • $0: 0
  • $1000: V0
  • $1,000,000: (1 - V0)
  • $1,001,000: 0

whereas when you implement C = two-box, you get

  • $0: 0
  • $1000: (1 - V0)
  • $1,000,000: 0
  • $1,001,000: V0

If your preferences over wavefunctions happen to include a convenient part that tries to maximize the expected integral of the dollars each you[k] gets times the measure of you[k], you probably one-box here, just like me (a quick sketch of that expected-value comparison is below). And now for you it's much more like you're choosing to have the predictor pick a sweet i 9/10 of the time.
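A minimal sketch of that comparison (my code, not GuySrinivasan's; the prior mean E[V0] = 1/10 is an assumption inherited from the uniform ignorance prior above):

```python
# Expected measure-weighted dollars under each policy, as a function of
# the relative measure V0 (the fraction of measure where pi[i] = 0).

def expected_dollars(v0: float, policy: str) -> float:
    if policy == "one-box":
        # measure v0 of yous get $1000; measure (1 - v0) get $1,000,000
        return 1_000 * v0 + 1_000_000 * (1 - v0)
    else:  # "two-box"
        # measure (1 - v0) get $1000; measure v0 get $1,001,000
        return 1_000 * (1 - v0) + 1_001_000 * v0

v0 = 0.1  # assumed prior mean of V0
print(expected_dollars(v0, "one-box"))   # 900100.0
print(expected_dollars(v0, "two-box"))   # 101000.0
```

Setting the two expressions equal gives a break-even point at V0 = 999/1999, just under 1/2, so one-boxing wins under any prior that puts V0's mean below that.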

(1) By "relative integral" I mean that instead of knowing Wk, you know Vk = Wk/(W0+W1+...+W9).

(2) Something is a you when it has the same preferences over solutions to the wavefunction as you and implements the same decision theory as you, whatever precisely that means.

(3) This bit only works because the measure we're using, the squared modulus of the amplitude, is preserved under time-evolution.
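Footnote (3) is just unitarity; spelling it out in standard notation (my addition, not in the original comment): if time-evolution is |ψ(t)⟩ = U(t)|ψ(0)⟩ with U(t) unitary, then

```latex
\bigl\| U(t)\,|\psi(0)\rangle \bigr\|^{2}
  = \langle \psi(0) |\, U^{\dagger}(t)\,U(t) \,| \psi(0) \rangle
  = \langle \psi(0) | \psi(0) \rangle ,
```

so the total squared-modulus measure of a component is conserved.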

Some related questions and possible answers below.

Comment author: GuySrinivasan | 07 February 2010 11:29:50PM

Why would you or I have such a preference that cares about my ancestor's time-evolved descendants rather than just my time-evolved descendants? My guess is that

  • a human's preferences are (fairly) stable under time-evolution, and
  • the only humans that survive are the ones that care about their descendants, and
  • humans that we see around us are the time-evolutions of similar humans.

So, e.g., I[now] care approximately about what I[5-minutes-ago] cared about, and I[5-minutes-ago] didn't just care about me[now]; he also cared about me[now-but-in-a-parallel-branch].