# GuySrinivasan comments on A problem with Timeless Decision Theory (TDT) - Less Wrong

36 points, 04 February 2010 06:47PM


Comment author: 04 February 2010 08:54:25PM, 0 points

First thought: We can get out of this dilemma by noting that the output of C also causes the predictor to choose a suitable i. Saying that we cause the i-th digit of pi to have a certain value glosses over the fact that we actually caused the i[C]-th digit of pi to have a certain value.

Comment author: 05 February 2010 11:36:46PM, 0 points

the output of C also causes the predictor to choose a suitable i

How's that? Any i that is sufficiently large is suitable. It doesn't depend on the output of C. It just needs to be beyond C's ability to learn anything beyond the ignorance prior regarding the i-th digit of π.

Comment author: 07 February 2010 11:28:13PM, 0 points

I've finally figured out where my intuition on that was coming from (and I don't think it saves TDT). Suppose for a moment you were omniscient except about the relative integrals Vk (1) over measures of the components of the wavefunction which

• had a predictor that chose an i such that pi[i] = k
• would evolve into components with a you (2) where the predictor would present the boxes, question, etc. to you, but would not tell you its choice of i.

Here my ignorance prior on pi[x] for large values of x happens to be approximately equivalent to your ignorance prior over a certain ratio of integrals (relative "sum" of measures of relevant components). When you implement C = one-box, you choose that the relative sum of measures of you that gets \$0, \$1000, \$1000000, and \$1001000 is (3):

• \$0: 0
• \$1000: V0
• \$1000000: (1-V0)
• \$1001000: 0

whereas when you implement C = two-box, you get

• \$0: 0
• \$1000: (1-V0)
• \$1000000: 0
• \$1001000: V0

If your preferences over wavefunctions happen to include a convenient part that tries to maximize the expected integral of dollars you[k] gets times the measure of you[k], you probably one-box here, just like me. And now, for you, it's much more like you're choosing to have the predictor pick a sweet i 9/10 of the time.

(1) by relative integral I mean that instead of knowing Wk, you know Vk = Wk/(W0+W1+...+W9)

(2) something is a you when it has the same preferences over solutions to the wavefunction as you and implements the same decision theory as you, whatever precisely that means

(3) this bit only works because the measure we're using, the square of the modulus of the amplitude, is preserved under time-evolution
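The one-box/two-box tables above can be checked numerically. A minimal sketch, assuming V0 = 0.1 (the "9/10 of the time" figure); the `expected_payoff` helper is my own illustration, not anything from the comment:

```python
# Measure-weighted expected payoffs for the two choices of C, using the
# payoff tables from the comment. V0 = 0.1 is an illustrative assumption:
# the relative measure on components where the predictor's digit pi[i] = 0.

def expected_payoff(dist):
    """dist maps dollar amount -> relative measure; returns the
    measure-weighted sum of dollars (the 'expected integral')."""
    return sum(amount * measure for amount, measure in dist.items())

V0 = 0.1  # assumed ignorance-prior weight on pi[i] = 0

one_box = {0: 0, 1_000: V0, 1_000_000: 1 - V0, 1_001_000: 0}
two_box = {0: 0, 1_000: 1 - V0, 1_000_000: 0, 1_001_000: V0}

print(expected_payoff(one_box))  # 900100.0
print(expected_payoff(two_box))  # 101000.0
```

With any V0 below 1/2 the one-box distribution dominates in expectation, which is why the preference described above one-boxes.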

Some related questions and possible answers below.

Comment author: 07 February 2010 11:30:24PM, 0 points

I wonder if that sort of transform is in general useful? Changing your logical uncertainty into an equivalent uncertainty about measure. For the calculator problem you'd say you knew exactly the answer to all multiplication problems; you just didn't know what the calculators had been programmed to calculate. So when you saw the answer 56,088 on your Mars calculator, you'd immediately know that your Venus calculator was flashing 56,088 as well (barring asteroids, etc.). This information does not travel faster than light - if someone typed 123x456 on your Mars calculator while someone else typed 123x456 on your Venus calculator, you would not know that they were both flashing 56,088 - you'd have to wait until you learned that they both typed the same input. Or if you told someone to think of an input, then tell someone else who would go to Venus and type it in there, you'd still have to wait for them to get to Venus (which they can do at light speed, why not).

How about whether P=NP, then? No matter what, once you saw 56,088 on Mars you'd know the correct answer to "what's on the Venus calculator?" But before you saw it, your estimate of the probability "56,088 is on the Venus calculator" would depend on how you transformed the problem. Maybe you knew they'd type 123x45?, so your probability was 1/10. Maybe you knew they'd type 123x???, so your probability was 1/1000. Maybe you had no idea, so you had a sort of complete ignorance prior.

I think this transform comes down to choosing appropriate reference classes for your logical uncertainty.
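The reference-class dependence above can be made concrete by brute-force counting. A small sketch; the two input classes are the 123x45? and 123x??? examples from the comment, and the variable names are my own:

```python
# Your probability that the Venus calculator flashes 56,088 depends on
# which reference class of inputs you think was typed.

target = 123 * 456  # 56088, the answer seen on the Mars calculator

# Class "123x45?": only the last digit is unknown -> ten candidate products.
class_45 = [123 * (450 + d) for d in range(10)]
p_45 = class_45.count(target) / len(class_45)

# Class "123x???": all three digits unknown -> a thousand candidates.
class_any = [123 * d for d in range(1000)]
p_any = class_any.count(target) / len(class_any)

print(p_45)   # 0.1
print(p_any)  # 0.001
```

Exactly one input in each class yields the target product, so the probabilities are just one over the size of the chosen reference class - which is the point: the transform does no work until a reference class is picked.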

Comment author: 07 February 2010 11:29:50PM, 0 points

Why would you or I have such a preference that cares about my ancestor's time-evolved descendants rather than just my time-evolved descendants? My guess is that

• a human's preferences are (fairly) stable under time-evolution, and
• the only humans that survive are the ones that care about their descendants, and
• humans that we see around us are the time-evolution of similar humans.