I'd like to better understand how compatibilists conceive of free will.[1] LW is a known hotbed of compatibilism, so here's my question:
Suppose that determinism is true. When I face a binary choice,[2] there are two relevantly-different states of the world I could be in:[3]
State A: Past events HA have happened, current state of the world is A, I will choose CA, future FA will happen.
State B: Past events HB have happened, current state of the world is B, I will choose CB, future FB will happen.
When I make my choice (CA or CB), I'm choosing/revealing which of those two states of the world is (my) reality. They're package deals: CA follows from HA just as surely as it leads to FA, and the same holds for state B.
That seems to give me just as much control[4] over the past as I have over the future. In whatever sense I 'exercise free will' to make CA real and bring about FA, I also make it the case that HA is the true history.
My question is: Does this bother you at all, and if not, why not?[5]
[1] Yes, I've done my own reading, though admittedly it's been a while. I never found a satisfying (to me) answer to this question, and to the best of my recollection I rarely saw it clearly addressed in a form I recognised. If you want to link me to a pre-existing answer, please do, but please be specific: less 'read Dennett' and more 'read this passage of this work'.
[2] Maybe no real choice is truly binary, but for the sake of simplicity let's say this one is. I don't think that changes anything important.
[3] For simplicity I'm taking the physical laws as a given. I don't think that matters unless free will involves in some sense choosing which set of physical laws holds in reality.
[4] Not necessarily in every sense in which you might want to use the word 'control'; you might define that word such that it only applies to causal influence forward in time. But yes in the sense that whatever I can do to make my world the one with FA in it, I can do to make my world the one with HA in it.
[5] If your answer involves the MWI or something like it, I would appreciate if you explained (the relevant bits of) how you conceive of personal identity and consciousness within that framework.
Well, it makes the confusion more obvious, because now it's clearer that HA/A and HB/B are complete balderdash. This will be apparent if you try to unpack exactly what the difference between them is, other than your choice. (Specifically, the algorithm used to compute your choice.)
Let's say I give you a read-only SD card containing some data. You will insert this card into a device that will run some algorithm and output "A" or "B". The data on the card will not change as a result of the device's output, nor will the device's output retroactively cause different data to have been written to the card! All that will be revealed is the device's interpretation of that data. To the extent there is any uncertainty about the entire process, it's simply that the device is a black box -- we don't know what algorithm it uses to make the decision.
So, tl;dr: the choice you make does not reveal anything about the state or history of the world (SD card), except for the part that is your decision algorithm's implementation. If we draw a box around "the parts of your brain that are involved in this decision", then you could say that the output choice tells you something about the state and history of those parts of your brain. But even there, there's no backward causality -- it's again simply resolving your uncertainty about the box, not doing anything to the actual contents, except to the extent that running the decision procedure makes changes to the device's state.
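The SD-card setup can be sketched as a pure function. The parity rule below is an arbitrary stand-in for whatever algorithm the black box actually runs; the point is only that the output is computed *from* the input, and the input survives untouched:

```python
def device(card_data: bytes) -> str:
    """Black-box device: reads the card, outputs "A" or "B".

    The parity rule is a hypothetical stand-in for whatever
    algorithm the device actually implements.
    """
    return "A" if sum(card_data) % 2 == 0 else "B"

card = bytes([3, 1, 4, 1, 5])  # read-only input data
snapshot = bytes(card)         # copy, so we can check nothing changed
output = device(card)

# The output reveals something about the device's algorithm, but the
# input data is untouched -- no retrocausality, just a function of it.
assert card == snapshot
assert output in ("A", "B")
```

Learning the output resolves your uncertainty about the algorithm inside the box, not about the data on the card.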
As other people have mentioned, rationalists don't typically think in those terms. There isn't actually any difference between those two ideas, and there's really nothing to "defend". As with a myriad other philosophical questions, the question itself is just map-territory confusion or a problem with word definitions.
Human brains have lots of places where it's easy to slip on logical levels and end up with things that feel like questions or paradoxes when in fact what's going on is really simple once you put back in the missing terms or expand the definitions properly. (This is often because brains don't tend to include themselves as part of reality, so this is where the missing definitions can usually be found!)
In the particular case you've presented, that tendency manifests in the fact that your problem specification never explicitly calls out the brain or its decision procedures as components of the process. Once you include those missing pieces, it's straightforward to see that the only place where hypothetical alternative choices exist is in the decider's brain, and that no retrocausality is involved.
In the parts of reality that do not include your brain, things are already in some state and already have some history. When you make a decision, you already know what state and history exist for those parts of reality, at least to the extent that they are decision-relevant. What you don't know is which choice you will make.
You then can imagine CA and CB -- i.e., picture them in your brain -- as part of running your decision algorithm. Running this algorithm then makes changes to the history and state of your brain -- but not to any of the inputs that your brain took in.
Suppose I use the following decision procedure:
1. Write the available options down as a list in my workspace.
2. Assign each option a score.
3. Flip a coin to break any ties.
4. Take the highest-ranked option as my choice.
None of these steps is retrocausal, in the sense of "revealing" or "choosing" anything about the past. As I perform these steps, I am altering the history (H) and state (S) of my brain (and workspace) until a decision is arrived at. At no point is there an "A" or "B" here, except in the contents of the list.
Since there is a random element, I don't even know what choice I will make, but the only things that were "revealed" are my scoring and which way the coin flips went -- all of which happened as I went through the process. When I get to the "choice" part, it's the result of the steps that went before, not something that determines the steps.
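A minimal sketch of such a procedure, under the stated assumptions (score each option, break ties by coin flip); the scoring function passed in is a hypothetical illustration:

```python
import random

def decide(options, score, rng):
    """Score each option; break ties with coin flips.

    The inputs (options, score) are only read, never modified --
    the procedure's steps change only local working state.
    """
    scores = {opt: score(opt) for opt in options}       # my scoring
    best = max(scores.values())
    finalists = [opt for opt in options if scores[opt] == best]
    choice = finalists[0]
    for contender in finalists[1:]:
        if rng.random() < 0.5:                          # coin-flip tie-break
            choice = contender
    return choice

# With equal scores, even the decider can't know the output in advance:
choice = decide(["A", "B"], lambda opt: 1.0, random.Random())
assert choice in ("A", "B")
```

Note the direction of causation: the returned `choice` is the last thing computed, downstream of the scoring and the flips, and nothing here writes back into `options` or `score`.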
This is just an example, of course, but it literally doesn't matter what your decision procedure is, because it's still not changing the original inputs of the process. Nothing is retroactively chosen or revealed. Instead, the world-state is being changed by the process of making the decision, in normal forward causality.
As soon as you fully expand your terms to any specific decision procedure, and include your brain as part of the definition of "history" and "state", the illusion of retrocausality vanishes.
A pair of timelines, showing two possible outcomes, with the decision procedure parenthesized:

H, S -- (step 1, step 2, ...) --> HA, SA, CA
H, S -- (step 1, step 2, ...) --> HB, SB, CB
The decision procedure operates on history H, state S as its initial input. During the process it will produce a new history and final state, following some path that will result in CA or CB. But CA and CB do not reveal or "choose" anything about the H or S that existed prior to beginning the decision procedure. Instead, the steps go forward in time creating HA or HB as they go along.
It's as if you said: "Isn't it weird how, if I flip a coin and then go down street A or B accordingly, ending up at whichever restaurant is on that street, the cuisine of the restaurant I arrive at reveals which way my coin flip went?"
No. No. It's not weird at all! That's what you should expect to happen! The restaurant you arrived at does not determine the coin flip, the coin flip determines the restaurant.
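In code form (the street names and restaurants are made up for illustration), the flip happens first and everything downstream follows from it:

```python
import random

# Hypothetical mapping from street to the restaurant on it.
RESTAURANT_ON = {"A": "the noodle bar", "B": "the pizzeria"}

def dinner(rng):
    flip = "A" if rng.random() < 0.5 else "B"  # the coin flip happens first
    return flip, RESTAURANT_ON[flip]           # the restaurant follows from it

flip, restaurant = dinner(random.Random())

# Of course the restaurant "reveals" the flip: the flip caused it.
assert restaurant == RESTAURANT_ON[flip]
```

Nothing about arriving at the pizzeria reaches back and changes the coin; observing the restaurant just tells you what the coin already did.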
As soon as you make the decision procedure a concrete procedure -- be it flipping a coin or otherwise -- it should hopefully become clear that the choice is the output of the steps taken; the steps taken are not retroactively caused by the output of the process.
The confusion in your original post is that you're not treating "choice" as a process with steps that produce an output, but rather as something mysterious that happens instantaneously while somehow being outside of reality. If you properly place "choice" as a series of events in normal spacetime, there is no paradox or retrocausality to be had. It's just normal things happening in the normal order.
LW compatibilism isn't believing that choice magically happens outside of spacetime while everything else happens deterministically, but rather including your decision procedure as part of "things happening deterministically".