And this was my reply:
This is an unfinished part of the theory that I've also thought about, though your example puts it very crisply (you might consider posting it to LW?)
My current thoughts on resolution tend to see two main avenues:
1) Construct a full-blown DAG of math and Platonic facts, an account of which mathematical facts make other mathematical facts true, so that we can compute mathematical counterfactuals.
2) Treat mathematical knowledge that we learn by genuinely mathematical reasoning differently from mathematical knowledge that we learn by physical observation. In this case we...
Logical uncertainty has always been more difficult to deal with than physical uncertainty; the problem with logical uncertainty is that if you analyze it enough, it goes away. I've never seen any really good treatment of logical uncertainty.
But if we depart from TDT for a moment, then it does seem clear that we need to have causelike nodes corresponding to logical uncertainty in a DAG which describes our probability distribution. There is no other way you can completely observe the state of a calculator sent to Mars and a calculator sent to Venus, and yet remain uncertain of their outcomes while believing the outcomes are correlated. And if you talk about error-prone calculators, two of which say 17 and one of which says 18, and you deduce that the "Platonic answer" was probably in fact 17, you can see that logical uncertainty behaves in an even more causelike way than this.
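To make the cause-like behavior concrete, here is a small illustrative sketch (my own; the error rate and the candidate answers 17 and 18 are assumptions, not part of the exchange) of the error-prone-calculator inference as ordinary Bayesian updating over a "Platonic answer" node:

def calculator_posterior(readings, candidates=(17, 18), error_rate=0.05):
    # Uniform prior over which answer the Platonic computation actually outputs.
    prior = {c: 1.0 / len(candidates) for c in candidates}
    joint = {}
    for c in candidates:
        likelihood = 1.0
        for r in readings:
            # Each calculator reports the true answer with probability 1 - error_rate,
            # and the other candidate answer with probability error_rate.
            likelihood *= (1 - error_rate) if r == c else error_rate
        joint[c] = prior[c] * likelihood
    total = sum(joint.values())
    return {c: p / total for c, p in joint.items()}

print(calculator_posterior([17, 17, 18]))  # 0.95 that the Platonic answer is 17

The two agreeing calculators outweigh the dissenting one, so the inference lands on 17, treating the Platonic answer as a common cause of all three readings.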
So, going back to TDT, my hope is that there's a neat set of rules for factoring our logical uncertainty into our causal beliefs, and that these same rules also resolve the sort of situation that you describe.
If you consider the notion of the correlated error-prone calculators, two returning 17 and one re...
I think this problem is based (at least in part) on an incoherence in the basic transparent box variant of Newcomb's problem.
If the subject of the problem will two-box if he sees that the big box has the million dollars, but will one-box if he sees that the big box is empty, then there is no action Omega could take to satisfy the conditions of the problem.
In this variant that introduces the digit of pi, there is an unknown bit such that whatever strategy the subject takes, there is a value of that bit that allows Omega an action consistent with the conditions. Howev...
I'm not clear at all what the problem is, but it seems to be semantic. It's disturbing that this post can get 17 upvotes with almost no comments (2?) actually referring to what you're saying, indicating that no one else here really gets the point either.
It seems you have an issue with the word 'dependent' and the definition that Eliezer provided. Under that definition, E (the ith digit of pi) would be dependent on C (our decision to one-box or two-box) if we two-boxed and got a million dollars, because then we would know that E = 0, and we would not have kno...
In UDT1, I would model this problem using the following world program. (For those not familiar with programming convention, 0=False, and 1=True.)
def P(i):
    # E: whether the i-th digit of pi is zero (the logical fact in question).
    E = (Pi(i) == 0)
    # D: the predictor's infallible simulation of S on the input "box contains $1M".
    D = Omega_Predict(S, i, "box contains $1M")
    if D ^ E:
        # Exactly one of D, E is true: the predictor puts the $1M in the large box.
        # C is S's actual choice (1 = one-box, 0 = two-box); the E * 1e9 term
        # models a strong preference that E come out true.
        C = S(i, "box contains $1M")
        payout = 1001000 - C * 1000 + E * 1e9
    else:
        # Otherwise the large box is empty.
        C = S(i, "box is empty")
        payout = 1000 - C * 1000 + E * 1e9
We then ask, what function S maximizes the expected payout at the end of P? When S sees "box is empty" clearly it ...
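For concreteness, here is a rough sketch of that optimization (my own; it assumes, per the problem statement, that Omega_Predict infallibly reproduces S's answer on the "box contains $1M" input, and it uses the 0.1 prior that the chosen digit of pi is zero). It enumerates the four possible policies for S and compares expected payouts under the world program above:

from itertools import product

P_E = 0.1  # prior probability that the chosen digit of pi is zero

def expected_payout(policy):
    # policy = (answer on "box contains $1M", answer on "box is empty"); 1 = one-box.
    see_million, see_empty = policy
    D = see_million  # infallible prediction of S's answer on the "$1M" input
    total = 0.0
    for E, p in ((0, 1 - P_E), (1, P_E)):
        if D ^ E:
            C = see_million
            payout = 1001000 - C * 1000 + E * 1e9
        else:
            C = see_empty
            payout = 1000 - C * 1000 + E * 1e9
        total += p * payout
    return total

for policy in product((0, 1), repeat=2):
    print(policy, expected_payout(policy))

Under these assumptions the policy (1, 0), i.e. one-box upon seeing the $1M and two-box upon seeing the empty box, comes out highest, despite the large bonus attached to E being true.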
TDT is Timeless Decision Theory. It wouldn't be bad to say that in the first paragraph somewhere.
EDIT: Excellent. Thanks.
So let's say I'm confronted with this scenario, and I see $1M in the large box.
So let's get the facts (with a quick check in code after the list):
1) There is $1M in the large box, and thus (D xor E) = true
2) I know that I am a one-boxing agent
3) Thus D = "one-boxing"
4) Thus I know D ≠ E, since the xor is true
5) I one-box and live happily with $1,000,000
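A minimal check of that chain (assuming, as in the world-program model above, that the large box contains $1M iff D xor E, and that Omega's prediction of a committed one-boxer is correct):

D = True                  # fact 3: Omega predicts "one-boxing"
for E in (False, True):   # E: whether the chosen digit of pi is zero
    if D ^ E:             # fact 1: the $1M is in the large box
        assert D != E     # fact 4: the xor forces D and E to differ
        print("E =", E, "-> one-box, payout $1,000,000")  # fact 5

Only the E = False case is consistent with seeing the $1M, so observing the money also tells the one-boxer that the chosen digit of pi is not zero.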
When Omega simulates me in the same scenario, and without lying, there is no problem.
Seems like many of the mind games are thwarted by simply precommitting to choices.
For the red-and-green problem, just toss a coin (or whatever source of randomness you have).
We could make an ad-hoc repair to TDT by saying that you're not allowed to infer from a logical fact to another logical fact going via a physical (empirical) fact.
In this case, the mistake happened because we went from "My decision algorithm's output" (Logical) to "Money in box" (Physical) to "Digits of Pi" (Logical), where the last step involved following an arrow on a causal graph backwards: the digits-of-Pi node has a causal arrow going into the "money in box" node.
The TDT dependency inference could be implemented by...
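For illustration only, here is one hypothetical shape such a check could take (the path representation and node labels are my assumptions, restating the example from this comment): label each node in the dependency graph as logical or physical, and flag any inference path that reaches a logical node from another logical node by way of physical nodes.

def violates_repair(path, kind):
    # True if the path goes from a logical node to another logical node
    # by passing through one or more physical nodes in between.
    kinds = [kind[node] for node in path]
    for i, k in enumerate(kinds):
        if k != "logical":
            continue
        j = i + 1
        while j < len(kinds) and kinds[j] == "physical":
            j += 1
        if j > i + 1 and j < len(kinds) and kinds[j] == "logical":
            return True
    return False

kind = {
    "my decision algorithm's output": "logical",
    "money in box": "physical",
    "digits of pi": "logical",
}
path = ["my decision algorithm's output", "money in box", "digits of pi"]
print(violates_repair(path, kind))  # True: exactly the disallowed inference above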
Consider the following version of the transparent-boxes scenario.
I'm trying to get a grip on what this post is about, but I don't know enough of the literature about Newcomb's Problem to be sure what is referred to here by "the transparent-boxes scenario". Can someone who knows briefly recap the baseline scenario of which this is a version?
I have a question that is probably stupid and/or already discussed in the comments. But I don't have time to read all the comments, so, if someone nonetheless would kindly explain why I'm confused, I would be grateful.
The OP writes
...So E does indeed "depend on" C, in the particular sense you've specified. Thus, if I happen to have a strong enough preference that E output True, then TDT (as currently formulated) will tell me to two-box for the sake of that goal. But that's the wrong decision, of course. In reality, I have no choice about the spec
Let:
When:
Omega fails.
Omega chooses M or !M. I get $1M or 0.
Omega chooses M=false. I get $0.1.
Omega chooses M=true. I get $1M.
M chooses either M or !M. I get either $1.1 or $0.1 depending on Omega's whims
Omega has no option. I make Omega look like a fool.
So, depending on how 'Omega ...
First thought: We can get out of this dilemma by noting that the output of C also causes the predictor to choose a suitable i, so that saying we cause the ith digit of pi to have a certain value is glossing over the fact that we actually caused the i[C]th digit of pi to have a certain value.
In the setup in question, D goes into an infinite loop (since in the general case it must call a copy of C, but because the box is transparent, C takes as input the output of D).
In Eliezer's similar red/green problem, if the simulation is fully deterministic and the initial conditions are the same, then the simulator must be lying, because he must've told the same thing to the first instance, at a time when there had been no previous copy. (If those conditions do not hold, then the solution is to just flip a coin and take your 50-50 chance.)
Are these still problems when you change them to fix the inconsistencies?
According to Ingredients of Timeless Decision Theory, when you set up a factored causal graph for TDT, "You treat your choice as determining the result of the logical computation, and hence all instantiations of that computation, and all instantiations of other computations dependent on that logical computation", where "the logical computation" refers to the TDT-prescribed argmax computation (call it C) that takes all your observations of the world (from which you can construct the factored causal graph) as input, and outputs an action in the present situation.
I asked Eliezer to clarify what it means for another logical computation D to be either the same as C, or "dependent on" C, for purposes of the TDT algorithm. Eliezer answered:
I replied as follows (which Eliezer suggested I post here).
If that's what TDT means by the logical dependency between Platonic computations, then TDT may have a serious flaw.
Consider the following version of the transparent-boxes scenario. The predictor has an infallible simulator D that predicts whether I one-box here [EDIT: if I see $1M]. The predictor also has a module E that computes whether the ith digit of pi is zero, for some ridiculously large value of i that the predictor randomly selects. I'll be told the value of i, but the best I can do is assign an a priori probability of .1 that the specified digit is zero.