"If a tree falls in the forest, but no one hears it, does it make a sound?"
I didn't answer that question. I didn't pick a position, "Yes!" or "No!", and defend it. Instead I went off and deconstructed the human algorithm for processing words, even going so far as to sketch an illustration of a neural network. At the end, I hope, there was no question left—not even the feeling of a question.
Many philosophers—particularly amateur philosophers, and ancient philosophers—share a dangerous instinct: If you give them a question, they try to answer it.
Like, say, "Do we have free will?"
The dangerous instinct of philosophy is to marshal the arguments in favor, and marshal the arguments against, and weigh them up, and publish them in a prestigious journal of philosophy, and so finally conclude: "Yes, we must have free will," or "No, we cannot possibly have free will."
Some philosophers are wise enough to recall the warning that most philosophical disputes are really disputes over the meaning of a word, or confusions generated by using different meanings for the same word in different places. So they try to define very precisely what they mean by "free will", and then ask again, "Do we have free will? Yes or no?"
A philosopher wiser yet may suspect that the confusion about "free will" shows the notion itself is flawed. So they pursue the Traditional Rationalist course: They argue that "free will" is inherently self-contradictory, or meaningless because it has no testable consequences. And then they publish these devastating observations in a prestigious philosophy journal.
But proving that you are confused may not make you feel any less confused. Proving that a question is meaningless may not help you any more than answering it.
The philosopher's instinct is to find the most defensible position, publish it, and move on. But the "naive" view, the instinctive view, is a fact about human psychology. You can prove that free will is impossible until the Sun goes cold, but this leaves an unexplained fact of cognitive science: If free will doesn't exist, what goes on inside the head of a human being who thinks it does? This is not a rhetorical question!
It is a fact about human psychology that people think they have free will. Finding a more defensible philosophical position doesn't change, or explain, that psychological fact. Philosophy may lead you to reject the concept, but rejecting a concept is not the same as understanding the cognitive algorithms behind it.
You could look at the Standard Dispute over "If a tree falls in the forest, and no one hears it, does it make a sound?", and you could do the Traditional Rationalist thing: Observe that the two arguers don't disagree on any point of anticipated experience, and triumphantly declare the argument pointless. That happens to be correct in this particular case; but, as a question of cognitive science, why did the arguers make that mistake in the first place?
The key idea of the heuristics and biases program is that the mistakes we make often reveal far more about our underlying cognitive algorithms than our correct answers do. So (I asked myself, once upon a time) what kind of mind design corresponds to the mistake of arguing about trees falling in deserted forests?
The cognitive algorithms we use are the way the world feels. And these cognitive algorithms may not have a one-to-one correspondence with reality—not even macroscopic reality, to say nothing of the true quarks. There can be things in the mind that cut skew to the world.
For example, there can be a dangling unit in the center of a neural network, one that does not correspond to any real thing, or any real property of any real thing, existent anywhere in the real world. Such dangling units are often useful as computational shortcuts, which is why we have them. (Metaphorically speaking. Human neurobiology is surely far more complex.)
This dangling unit feels like an unresolved question, even after every answerable query is answered. No matter how much anyone proves to you that no difference of anticipated experience depends on the question, you're left wondering: "But does the falling tree really make a sound, or not?"
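To make that picture concrete, here is a minimal toy sketch in Python. It is my own illustration, not the actual network I sketched earlier; the feature names, weights, and threshold behavior are invented for the example.

```python
# Observable "edge" units feed a central unit that corresponds to nothing
# in the world. Clamp every edge to a known value and the central unit
# still has an activation left to compute; that leftover computation is
# the felt sense of a question that survives after every anticipated
# experience has been settled.

def central_unit(edge_values, weights):
    """The weighted sum the dangling central unit computes over its edges."""
    return sum(w * v for w, v in zip(weights, edge_values))

# Every observable fact about the falling tree is clamped:
edges = {
    "acoustic_vibrations": 1.0,   # yes, there are pressure waves
    "auditory_experience": 0.0,   # no, nobody is around to hear anything
}
weights = [0.5, 0.5]

activation = central_unit(list(edges.values()), weights)
print(activation)  # 0.5 -- the unit neither clearly fires nor clearly doesn't,
                   # which feels like "but does it REALLY make a sound?"
```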
But once you understand in detail how your brain generates the feeling of the question—once you realize that your feeling of an unanswered question corresponds to an illusory central unit wanting to know whether it should fire, even after all the edge units are clamped at known values—or, better yet, once you understand the technical workings of Naive Bayes—then you're done. Then there's no lingering feeling of confusion, no vague sense of dissatisfaction.
If there is any lingering feeling of a remaining unanswered question, or of having been fast-talked into something, then this is a sign that you have not dissolved the question. A vague dissatisfaction should be as much warning as a shout. Really dissolving the question doesn't leave anything behind.
A triumphant thundering refutation of free will, an absolutely unarguable proof that free will cannot exist, feels very satisfying—a grand cheer for the home team. And so you may not notice that—as a point of cognitive science—you do not have a full and satisfactory descriptive explanation of how each intuitive sensation arises, point by point.
You may not even want to admit your ignorance of this point of cognitive science, because that would feel like a score against Your Team. In the midst of smashing all foolish beliefs in free will, it would seem like a concession to the opposing side to concede that you've left anything unexplained.
And so, perhaps, you'll come up with a just-so evolutionary-psychological argument that hunter-gatherers who believed in free will were more likely to take a positive outlook on life, and so outreproduce other hunter-gatherers—to give one example of a completely bogus explanation. If you say this, you are arguing that the brain generates an illusion of free will—but you are not explaining how. You are trying to dismiss the opposition by deconstructing its motives—but in the story you tell, the illusion of free will is a brute fact. You have not taken the illusion apart to see the wheels and gears.
Imagine that in the Standard Dispute about a tree falling in a deserted forest, you first prove that no difference of anticipation exists, and then go on to hypothesize, "But perhaps people who said that arguments were meaningless were viewed as having conceded, and so lost social status, so now we have an instinct to argue about the meanings of words." That's arguing that, or explaining why, a confusion exists. Now look at the neural network structure in Feel the Meaning. That's explaining how, disassembling the confusion into smaller pieces which are not themselves confusing. See the difference?
Coming up with good hypotheses about cognitive algorithms (or even hypotheses that hold together for half a second) is a good deal harder than just refuting a philosophical confusion. Indeed, it is an entirely different art. Bear this in mind, and you should feel less embarrassed to say, "I know that what you say can't possibly be true, and I can prove it. But I cannot write out a flowchart which shows how your brain makes the mistake, so I'm not done yet, and will continue investigating."
I say all this, because it sometimes seems to me that at least 20% of the real-world effectiveness of a skilled rationalist comes from not stopping too early. If you keep asking questions, you'll get to your destination eventually. If you decide too early that you've found an answer, you won't.
The challenge, above all, is to notice when you are confused—even if it just feels like a little tiny bit of confusion—and even if there's someone standing across from you, insisting that humans have free will, and smirking at you, and the fact that you don't know exactly how the cognitive algorithms work has nothing to do with the searing folly of their position...
But when you can lay out the cognitive algorithm in sufficient detail that you can walk through the thought process, step by step, and describe how each intuitive perception arises—decompose the confusion into smaller pieces not themselves confusing—then you're done.
So be warned that you may believe you're done, when all you have is a mere triumphant refutation of a mistake.
But when you're really done, you'll know you're done. Dissolving the question is an unmistakable feeling—once you experience it, and, having experienced it, resolve not to be fooled again. Those who dream do not know they dream, but when you wake you know you are awake.
Which is to say: When you're done, you'll know you're done, but unfortunately the reverse implication does not hold.
So here's your homework problem: What kind of cognitive algorithm, as felt from the inside, would generate the observed debate about "free will"?
Your assignment is not to argue about whether people have free will, or not.
Your assignment is not to argue that free will is compatible with determinism, or not.
Your assignment is not to argue that the question is ill-posed, or that the concept is self-contradictory, or that it has no testable consequences.
You are not asked to invent an evolutionary explanation of how people who believed in free will would have reproduced; nor an account of how the concept of free will seems suspiciously congruent with bias X. Such are mere attempts to explain why people believe in "free will", not to explain how the belief is generated.
Your homework assignment is to write a stack trace of the internal algorithms of the human mind as they produce the intuitions that power the whole damn philosophical argument.
This is one of the first real challenges I tried as an aspiring rationalist, once upon a time. One of the easier conundrums, relatively speaking. May it serve you likewise.
Some rough notes on free will, before I read the "spoiler" posts or the other attempted solutions posted as comments here.
(Advice for anyone attempting reductions/dissolutions of free will or anything else: actually write notes, make them detailed when you can (and notice when you can't), and note when you're leaving some subproblem unsolved for the time being. Often you will notice that you are confused in all kinds of ways that you wouldn't have noticed if you had kept all of it in your head. (And if you're going to try a problem and then read a solution, this is a good way of avoiding hindsight bias.))
What kind of algorithm feels like free will from the inside?
Some ingredients:
Local preferences:
The algorithm doesn't necessarily need to be an optimization process with a consistent, persistent utility function, but when the algorithm runs, there needs to be some locally-usable preference function over outcomes, since this is a decision algorithm.
Counterfactual simulation:
When you feel that you "could" make one of several (mutually exclusive) "choices", that doesn't mean that all of them are actually possible (for most senses of "possible" that we use outside the context of being confused about free will); you're going to end up doing at most one of them. But it occurs to you to imagine doing any of them, because you don't yet know what you'll decide (and you don't know what you'll decide because this imagining is part of the algorithm that generates the decision). So you look at the choices you think you might make, and you imagine yourself making each of those choices. You then evaluate each imagined outcome according to some criterion (specifying which, I think, is far outside the scope of this problem), and the algorithm then returns the choice corresponding to the imagined outcome that maximizes that criterion.
(Imagining a maybe-impossible world — one where you make a specific decision which may not be the one you will actually make — consists of imagining a world to which all of your prior beliefs about the real world apply, plus an extra assumption about what decision you end up making. If we want to go a bit deeper: suppose you're considering options A, B, and C, and you're imagining what happens if you pick B. Then you imagine a world which is identical to (how you imagine) the real world, except with a different agent substituted for you, identical to you except that their decision algorithm has a special case for this particular situation consisting of "return B".)
(I realize that I have not unpacked this so-called "imagining" at all. This is beyond my current understanding, and is not specific to the free will issue.)
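Putting those two ingredients together, here is a minimal sketch of the loop I mean, in Python. The names (imagine_world, preference, the umbrella example) are placeholders of my own, and imagine_world is just a stand-in for the "imagining" I haven't unpacked.

```python
def imagine_world(prior_beliefs, forced_choice):
    # Stand-in for the unexplained "imagining": the imagined world is the
    # agent's model of the real world, plus the extra assumption that the
    # agent's decision algorithm is special-cased to return `forced_choice`.
    world = dict(prior_beliefs)
    world["my_choice"] = forced_choice
    return world

def decide(options, prior_beliefs, preference):
    """Imagine making each choice, score each imagined outcome, return the best."""
    scored = []
    for option in options:
        imagined = imagine_world(prior_beliefs, forced_choice=option)
        scored.append((preference(imagined), option))
    return max(scored)[1]   # at most one option actually gets chosen

# Toy usage with a locally-usable preference over imagined outcomes:
beliefs = {"it_is_raining": True}
choice = decide(
    options=["take umbrella", "leave umbrella"],
    prior_beliefs=beliefs,
    preference=lambda w: 1.0 if w["it_is_raining"] == (w["my_choice"] == "take umbrella")
                         else 0.0,
)
print(choice)  # "take umbrella"
```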
Why does that feel non-deterministic?
Because we don't have any way of knowing the outcome for sure other than just following the algorithm to the conclusion. Due to the mind projection fallacy, our lack of knowledge of our deterministic decisions feels like those decisions actually not being deterministically implied yet.
...Let me phrase that better: The fact that we don't know what the algorithm will output until we follow it to its normal conclusion feels like the algorithm not having a definite output until it reaches its conclusion. Since our beliefs about reality just feel like reality, our blank or hazy or changing map of the future feels like a blank or hazy or changing future; as is pointed out in "Timeless Causality", changing our extrapolation of the future feels like changing the future. When you don't know what decision you'll make, that feels like the future itself is undecided. And the fact that we can imagine multiple futures, right up until the moment when one of them stops being imaginary (or stops being the future), feels like there are multiple possible futures until we pick one to go with.
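A tiny sketch of the core point, again my own toy example in Python and not specific to free will: the output is determined from the outside view, but from the inside it only becomes known when the algorithm finishes running.

```python
def my_decision():
    options = {"tea": 3, "coffee": 5}       # fixed, deterministic inputs
    return max(options, key=options.get)    # fixed, deterministic rule

# The output is fully determined the moment the function is defined, but the
# only way this process can come to know its own output is to run itself to
# completion. From inside, the gap between "determined" and "not yet computed"
# is what gets projected outward as "the future isn't decided yet".
print(my_decision())  # "coffee" -- determined all along, only now known here
```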
Why does the idea of determinism feel non-free?
Well, there's the whole metaphor of "laws", to begin with. When we hear about fundamental physical laws, our intuition doesn't go straight to "This is the fundamental framework in which everything in the universe happens (including everything about me)". "Laws" sound like constraints imposed on us. The metaphor makes us imagine some causal force acting on us and restricting us from the outside; something that acts independently of, and sometimes against, mental causes, rather than what you see when you look at mental causes under a microscope (so to speak).
That also seems to explain why people think that physical determinism would preclude moral responsibility. When someone first tells you that everything about you is reducible to lawful physics, it can intuitively sound like being told that you're under the Imperius curse, or that you're a puppet and some demon called "Physics" is pulling the strings. If your intuition says that determinism means people are puppets, then it's easy to conclude that people cannot be held responsible for their actions; clearly Physics must get the credit or the blame.
(In one sense, yes, physics must get the credit or blame — but only the region of physics that we call "you" for short.)
And there's the fact that, if it's explained poorly, the idea of physical determinism can sound about the same as the idea of fate. (Or even if it is explained well, but you pattern-match it as "fate" from the beginning and let that contaminate your understanding of the rest of the explanation.) Of course, the two ideas couldn't be more different: fate is the idea that your choices don't matter because the outcome will be the same no matter what; and this (rightly) sounds non-free, because it implies that this algorithm you're running doesn't ultimately have any influence on the future. Physical determinism, on the other hand, says quite the opposite: that the future is causally downstream from your actions, which are causally downstream from the algorithm you're running. But given sufficiently confusing/confused descriptions of determinism (like "everything is predetermined"), it is possible to mistake them for each other.
Why does the idea of predictability feel non-free?
The previous bit, on physical determinism feeling non-free, isn't the whole story. Even when the idea of "lawfulness" isn't invoked, people still think as though being theoretically predictable is a limitation on free will. They still wonder things like "If God is omniscient, then he must know every decision I will make, so how can I have free will?" (Atheists say things like this a lot too, to argue that an omniscient god is impossible because then we couldn't have free will, particularly against religious traditions that respond (badly) to the problem of evil by saying that God gave us free will. I'm not sure whether this is because it's a soldier on their side or because they just don't know reductionism. Probably some of both.) This probably goes back to the bit about the mind projection fallacy: if you don't know what you're going to do, that feels like reality itself being indeterminate, and if you're told that reality itself is not indeterminate — that the territory isn't blank where your map is blank — then, if you haven't learned to strictly distinguish between the map and the territory, you'll say "But I can see plainly that the territory is blank at that point!", and you'll dismiss the idea that your decisions could theoretically be predictable.
(Tangential to the actual reduction, but: this seems like it could be covered by a principle analogous to the Generalized Anti-Zombie Principle. If the thing you think "free will" refers to is something that you'd suddenly have less of if I built a machine that could exactly predict your decisions (even if I just looked at its output and didn't tell you it even existed), then it's clearly not the thing that causes you to think you have "free will".)
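(A toy version of that test, again my own construction: a perfect "predictor" is just another run of the same deterministic algorithm, and whether or not it ever runs, the agent's own execution is identical.)

```python
def agents_decision():
    options = {"left": 0.2, "right": 0.9}   # the agent's fixed preferences
    return max(options, key=options.get)    # the agent's actual algorithm

def predictor():
    # The prediction machine simply simulates a copy of the same algorithm.
    return agents_decision()

prediction = predictor()       # the machine "already knows" the answer
decision = agents_decision()   # ...and the agent decides exactly as it would have anyway
print(prediction == decision)  # True -- nothing the agent calls its will has been subtracted
```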
Why do we "explain" free will in terms of mysterious substances placed in a separate realm declared unknowable by fiat?
I don't have the cognitive science to answer that, and I'll consider it outside the scope of the free will problem in particular, because that's something we seem to do with everything (as in MAMQ, "Mysterious Answers to Mysterious Questions"), not just free will.