
[Link] “Betting on the Past” – a decision problem by Arif Ahmed

Post author: Johannes_Treutlein 07 February 2017 09:14PM

Comments (3)

Comment author: Vladimir_Nesov 08 February 2017 10:00:04AM

I'll quote the thought experiment for reference:

> Betting on the Past: In my pocket (says Bob) I have a slip of paper on which is written a proposition P. You must choose between two bets. Bet 1 is a bet on P at 10:1 for a stake of one dollar. Bet 2 is a bet on P at 1:10 for a stake of ten dollars. So your pay-offs are as follows: Bet 1, P is true: 10; Bet 1, P is false: -1; Bet 2, P is true: 1; Bet 2, P is false: -10. Before you choose whether to take Bet 1 or Bet 2 I should tell you what P is. It is the proposition that the past state of the world was such as to cause you now to take Bet 2. [Ahmed 2014, p. 120]
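For concreteness, a toy sketch of the payoff structure (my own illustration, assuming a deterministic world, so that the agent's actual choice settles the truth of P):

```python
# Payoffs from the thought experiment: (bet, truth of P) -> dollars.
payoffs = {
    ("Bet 1", True): 10, ("Bet 1", False): -1,
    ("Bet 2", True): 1, ("Bet 2", False): -10,
}

# Under determinism, P ("the past was such as to cause you to take Bet 2")
# is true exactly when the agent in fact takes Bet 2.
for bet in ("Bet 1", "Bet 2"):
    P = (bet == "Bet 2")
    print(bet, "-> P is", P, "-> payoff:", payoffs[(bet, P)])
# Bet 1 -> P is False -> payoff: -1
# Bet 2 -> P is True -> payoff: 1
```

Only the two self-consistent outcomes can occur, so the effective choice is between losing 1 and winning 1.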

Some comments on your post:

> Alice is betting on a past state of the world. She can’t causally influence the past, and she’s uncertain whether the proposition is true or not.

More precisely, Alice is betting on implications of the past state of the world: on what it means about the future, or perhaps on what it causes the future to be. Specifically, she is betting on her own action, which is itself an implication of the past state of the world. If we say that Alice can causally influence her own action, it's fair to say that she can causally influence the truth of the proposition, even though she can't causally influence the state of the past. So she can't influence the state of the past, but she can influence implications of that state, such as her own action. Similarly, a decision algorithm can't influence its own code, but it can influence the result it computes. (So I'm not even sure what CDT is supposed to do here, since it's not clear that the bet is really on the past state of the world rather than on the truth of a proposition about the future state of the world.)
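As a toy illustration of that code/output distinction (my own sketch, not anything from the post or the paper):

```python
def agent():
    # This source text is fixed "in the past": the agent cannot rewrite it.
    # But the agent just is this code, so it determines what the fixed code
    # outputs, and thereby the truth of P.
    return "Bet 2"

# P is a proposition about the past (about the fixed code), yet its truth
# is settled by the output that the code computes.
P = (agent() == "Bet 2")
print(P)  # True
```

The code was written yesterday, so to speak, yet which proposition about it is true is determined by what it returns today.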

> Perhaps if the bet was about the state of the world yesterday, LDT would still take Bet 2. Clearly, LDT’s algorithm already existed yesterday, and it can influence this algorithm’s output; so if it chooses Bet 2, it can change yesterday’s world and make the proposition true.

It's better to avoid the idea of "change" in this context. Change always compares alternatives, but for UDT there is no default state of the world that exists before the decision is made; there are only the alternative states of the world that follow the alternative decisions. So a decision doesn't change things from the way they were before it's made to the way they are after it's made; at most, you can compare how things are after one possible decision to how things are after the other possible decision.

Given that, I don't see what role "LDT’s algorithm already existed yesterday" plays here, and I think it's misleading to say that "it can change yesterday’s world and make the proposition true". Instead, it can make the proposition true without changing yesterday’s world, by ensuring that yesterday’s world was always such that the proposition is true. There is no change: yesterday’s world was never different, and the proposition was never false. What changes (in our observation of the decision-making process) is the state of knowledge about yesterday’s world, from uncertainty about the truth of the proposition to knowledge that it's true.
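In pseudocode (again my own toy model, with hypothetical names), the UDT-style evaluation never mutates a default world; it just compares the alternative worlds that follow from the alternative outputs:

```python
payoffs = {
    ("Bet 1", True): 10, ("Bet 1", False): -1,
    ("Bet 2", True): 1, ("Bet 2", False): -10,
}

def value_of(decision):
    # In the alternative where the fixed algorithm outputs `decision`,
    # yesterday's world "was always" such that P tracks that output.
    P = (decision == "Bet 2")
    return payoffs[(decision, P)]

# Nothing is changed from a prior state; the two alternatives are simply
# compared and the better one is chosen.
best = max(("Bet 1", "Bet 2"), key=value_of)
print(best, value_of(best))  # Bet 2 1
```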

> If we choose a more distant point in the past as a reference for Alice’s bet – maybe as far back as the birth of our universe – she’ll eventually be unable to exert any possible influence via logical counterfactuals.

Following from the preceding point, it doesn't matter how far back the past state of the world lies, since we are not trying to influence it; we are instead trying to influence its consequences, which are in the future. There is something unusual about influencing the consequences of a construction without influencing the construction itself, but it helps to recall that this is exactly what any program does when it influences its actions without influencing its code. It's what a human emulation in a computer does by making decisions without changing the initial brain image from which the running emulation was loaded. And it's also what a regular human running inside physics, without any emulation, does.

Comment author: Lumifer 08 February 2017 07:21:11PM

Is there a better formulation for this? Because I don't see how this is a "problem".

Assuming Bob is truthful, Alice faces no bets. She can choose one of two courses of action and each of them has a predetermined outcome known to her. There is no uncertainty involved.

Comment author: cousin_it 09 February 2017 07:27:37PM

UDT takes Bet 2.

Can you put your flavor of EDT in clear conflict with UDT? Or are they equivalent?

If you need a rigorous formulation of proof-based UDT, this old post of mine might be helpful. Feel free to ask if anything isn't clear.