"If a tree falls in the forest, but no one hears it, does it make a sound?"
I didn't answer that question. I didn't pick a position, "Yes!" or "No!", and defend it. Instead I went off and deconstructed the human algorithm for processing words, even going so far as to sketch an illustration of a neural network. At the end, I hope, there was no question left—not even the feeling of a question.
Many philosophers—particularly amateur philosophers, and ancient philosophers—share a dangerous instinct: If you give them a question, they try to answer it.
Like, say, "Do we have free will?"
The dangerous instinct of philosophy is to marshal the arguments in favor, and marshal the arguments against, and weigh them up, and publish them in a prestigious journal of philosophy, and so finally conclude: "Yes, we must have free will," or "No, we cannot possibly have free will."
Some philosophers are wise enough to recall the warning that most philosophical disputes are really disputes over the meaning of a word, or confusions generated by using different meanings for the same word in different places. So they try to define very precisely what they mean by "free will", and then ask again, "Do we have free will? Yes or no?"
A philosopher wiser yet may suspect that the confusion about "free will" shows the notion itself is flawed. So they pursue the Traditional Rationalist course: They argue that "free will" is inherently self-contradictory, or meaningless because it has no testable consequences. And then they publish these devastating observations in a prestigious philosophy journal.
But proving that you are confused may not make you feel any less confused. Proving that a question is meaningless may not help you any more than answering it.
The philosopher's instinct is to find the most defensible position, publish it, and move on. But the "naive" view, the instinctive view, is a fact about human psychology. You can prove that free will is impossible until the Sun goes cold, but this leaves an unexplained fact of cognitive science: If free will doesn't exist, what goes on inside the head of a human being who thinks it does? This is not a rhetorical question!
It is a fact about human psychology that people think they have free will. Finding a more defensible philosophical position doesn't change, or explain, that psychological fact. Philosophy may lead you to reject the concept, but rejecting a concept is not the same as understanding the cognitive algorithms behind it.
You could look at the Standard Dispute over "If a tree falls in the forest, and no one hears it, does it make a sound?", and you could do the Traditional Rationalist thing: Observe that the two don't disagree on any point of anticipated experience, and triumphantly declare the argument pointless. That happens to be correct in this particular case; but, as a question of cognitive science, why did the arguers make that mistake in the first place?
The key idea of the heuristics and biases program is that the mistakes we make often reveal far more about our underlying cognitive algorithms than our correct answers. So (I asked myself, once upon a time) what kind of mind design corresponds to the mistake of arguing about trees falling in deserted forests?

The cognitive algorithms we use are the way the world feels. And these cognitive algorithms may not have a one-to-one correspondence with reality—not even macroscopic reality, to say nothing of the true quarks. There can be things in the mind that cut skew to the world.
For example, there can be a dangling unit in the center of a neural network, which does not correspond to any real thing, or any real property of any real thing, existent anywhere in the real world. This dangling unit is often useful as a shortcut in computation, which is why we have them. (Metaphorically speaking. Human neurobiology is surely far more complex.)
This dangling unit feels like an unresolved question, even after every answerable query is answered. No matter how much anyone proves to you that no difference of anticipated experience depends on the question, you're left wondering: "But does the falling tree really make a sound, or not?"
But once you understand in detail how your brain generates the feeling of the question—once you realize that your feeling of an unanswered question, corresponds to an illusory central unit wanting to know whether it should fire, even after all the edge units are clamped at known values—or better yet, you understand the technical workings of Naive Bayes—then you're done. Then there's no lingering feeling of confusion, no vague sense of dissatisfaction.
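To make the "dangling unit" picture concrete, here is a toy sketch in Python. The network shape, unit names, weights, and threshold are all invented for illustration; as noted above, real neurobiology is far more complex:

```python
# A toy "dangling unit" network: edge units are observable features,
# clamped at known values; the central unit is an inferred category
# node with no referent of its own in the world.

edge_units = {
    "acoustic_vibrations": 1.0,   # clamped: the falling tree emits vibrations
    "auditory_experience": 0.0,   # clamped: no one is there to hear
}

weights = {"acoustic_vibrations": 1.0, "auditory_experience": 1.0}

def central_activation(edges, w, threshold=1.5):
    """The central 'sound?' unit sums its clamped neighbors and asks: should I fire?"""
    total = sum(w[name] * value for name, value in edges.items())
    return total >= threshold

# Every edge unit is already clamped, so every anticipated experience
# is already determined...
print(edge_units)

# ...yet the central unit still computes an answer, and that answer
# feels like a residual question: "but was there REALLY a sound?"
print(central_activation(edge_units, weights))  # False with these weights
```

Every testable prediction lives in the clamped edge units; the central unit's output changes nothing you anticipate seeing, yet it still presents itself as a question demanding an answer.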
If there is any lingering feeling of a remaining unanswered question, or of having been fast-talked into something, then this is a sign that you have not dissolved the question. A vague dissatisfaction should be as much warning as a shout. Really dissolving the question doesn't leave anything behind.
A triumphant thundering refutation of free will, an absolutely unarguable proof that free will cannot exist, feels very satisfying—a grand cheer for the home team. And so you may not notice that—as a point of cognitive science—you do not have a full and satisfactory descriptive explanation of how each intuitive sensation arises, point by point.
You may not even want to admit your ignorance, of this point of cognitive science, because that would feel like a score against Your Team. In the midst of smashing all foolish beliefs of free will, it would seem like a concession to the opposing side to concede that you've left anything unexplained.
And so, perhaps, you'll come up with a just-so evolutionary-psychological argument that hunter-gatherers who believed in free will were more likely to take a positive outlook on life, and so outreproduce other hunter-gatherers—to give one example of a completely bogus explanation. If you say this, you are arguing that the brain generates an illusion of free will—but you are not explaining how. You are trying to dismiss the opposition by deconstructing its motives—but in the story you tell, the illusion of free will is a brute fact. You have not taken the illusion apart to see the wheels and gears.
Imagine that in the Standard Dispute about a tree falling in a deserted forest, you first prove that no difference of anticipation exists, and then go on to hypothesize, "But perhaps people who said that arguments were meaningless were viewed as having conceded, and so lost social status, so now we have an instinct to argue about the meanings of words." That's arguing that, or explaining why, a confusion exists. Now look at the neural network structure in Feel the Meaning. That's explaining how, disassembling the confusion into smaller pieces which are not themselves confusing. See the difference?
Coming up with good hypotheses about cognitive algorithms (or even hypotheses that hold together for half a second) is a good deal harder than just refuting a philosophical confusion. Indeed, it is an entirely different art. Bear this in mind, and you should feel less embarrassed to say, "I know that what you say can't possibly be true, and I can prove it. But I cannot write out a flowchart which shows how your brain makes the mistake, so I'm not done yet, and will continue investigating."
I say all this, because it sometimes seems to me that at least 20% of the real-world effectiveness of a skilled rationalist comes from not stopping too early. If you keep asking questions, you'll get to your destination eventually. If you decide too early that you've found an answer, you won't.
The challenge, above all, is to notice when you are confused—even if it just feels like a little tiny bit of confusion—and even if there's someone standing across from you, insisting that humans have free will, and smirking at you, and the fact that you don't know exactly how the cognitive algorithms work has nothing to do with the searing folly of their position...
But when you can lay out the cognitive algorithm in sufficient detail that you can walk through the thought process, step by step, and describe how each intuitive perception arises—decompose the confusion into smaller pieces not themselves confusing—then you're done.
So be warned that you may believe you're done, when all you have is a mere triumphant refutation of a mistake.
But when you're really done, you'll know you're done. Dissolving the question is an unmistakable feeling—once you experience it, and, having experienced it, resolve not to be fooled again. Those who dream do not know they dream, but when you wake you know you are awake.
Which is to say: When you're done, you'll know you're done, but unfortunately the reverse implication does not hold.
So here's your homework problem: What kind of cognitive algorithm, as felt from the inside, would generate the observed debate about "free will"?
Your assignment is not to argue about whether people have free will, or not.
Your assignment is not to argue that free will is compatible with determinism, or not.
Your assignment is not to argue that the question is ill-posed, or that the concept is self-contradictory, or that it has no testable consequences.
You are not asked to invent an evolutionary explanation of how people who believed in free will would have reproduced; nor an account of how the concept of free will seems suspiciously congruent with bias X. Such are mere attempts to explain why people believe in "free will", not explain how.
Your homework assignment is to write a stack trace of the internal algorithms of the human mind as they produce the intuitions that power the whole damn philosophical argument.
This is one of the first real challenges I tried as an aspiring rationalist, once upon a time. One of the easier conundrums, relatively speaking. May it serve you likewise.
"Free will" is a black box containing our decision making algorithm.
What kind of mind would invent "free will"? The same mind that would neatly wrap up any other open-ended question into a single label, be it "élan vital" or "phlogiston". Our minds are fantastic at dreaming up explanations for things, and if those explanations are not easily empirically testable at the time, they tend to stick. Without falsifying evidence, our pet theories tend to remain, and confirmation bias slowly hardens them into what feels like brute facts.
It's appealing because it ties up (or at least hides) loose ends. If we play taboo on "free will", we might get something like "the concept that people can narrow a number of possible futures into one future that is optimal". With this definition, free will would indeed exist. If, however, "free will" were postulated in such a way as to include some fantastical element, or another black box, Occam's razor may strike it down. Alternatively, it may be superficially appealing enough to stick, so long as we don't think about it too thoroughly. For example, "the idea that humans are in control of their actions" feels like an explanation, but contains "control" as a nested black box.

But what does this process actually feel like when we make such a mistake? Well, it's based on implicit assumptions, so nothing feels amiss. You don't realize that you are making an implicit assumption. All the loose ends look like they are tucked away, at least at a glance. But if you take a closer look, say by repeatedly asking "why?", then you start to feel less confident. This is a sign of trouble, but you should make sure you aren't just asking "does 1 plus 1 really equal 2?" in a pretentious tone of voice. If repeated self-inquiry seems to be creating a rabbit hole of nested black boxes, then you should go back to the highest-level box and try a different form of inquiry. Ask yourself if there's anything about the nested black boxes that feels wrong. Use all your tools as a rationalist to inquire into this, and hopefully find a path of inquiry besides infinite regression. For a thorough analysis, ask yourself whether the nested boxes make observable predictions, and how those predictions might differ from reality.
With our example of playing taboo on "free will" to get "the idea that humans are in control of their actions", we might intuit that "control" is just an inherently complex concept. Although an explanation that invokes agency is usually a good target for Occam's razor, perhaps this is an exception; we are discussing agency itself, after all. But what does it mean to have agency? What are the observable differences in the world? When we hold these sorts of questions in our minds and again play taboo, we are more likely to get something like "the concept that people can narrow a number of possible futures into one future that is optimal".

That's a much different answer, because a computer program could also down-select from a number of different options, given some criteria. This answer also doesn't leave loose ends, and doesn't leave that nagging feeling of doubt that comes from having left something unexplained. It turns out that all our sense of confusion was contained within the mysticism implied by the phrase "free will". It may be easy to forget about that doubt when taking only a broad view, but as soon as you zoom in on the problem it becomes detectable. We live in a messy world, and so frequently have to say "good enough" and leave doubts unexplained due to time constraints; but when we decide that something is important and pursue the little doubting feeling to its conclusion, it can be incredibly satisfying. You've just answered one of the mysteries of the universe, after all.

So that's what it feels like to make and then correct a mistake, but why and how did we make the mistake in the first place? Well, our minds are naturally wired around concepts of agency. This is observable both in others and in ourselves: it really does feel like the dice are out to get us, or that "the system" must be consciously malicious rather than merely incompetent. It is even more natural to endow ourselves with that same vague agency we give inanimate objects and bureaucratic systems. It's only in the past couple of hundred years that humanity has been able to group everything under the same laws of physics. Before that, the stars obeyed their own special rules, and living things were unexplainable mysteries running on "élan vital" instead of something more akin to a combustion reaction. Unless we look closely enough at a belief to notice that it requires the world to operate under a different set of physical principles, we will default to what is most natural to believe.
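The tabooed reading above, "narrowing a number of possible futures into one future that is optimal", is an ordinary, non-mysterious operation that a few lines of Python can perform. The candidate futures and the scoring criterion below are made up purely for illustration:

```python
# A minimal sketch of "free will" as down-selection: score each
# possible future by some criterion and keep the best one.

possible_futures = {
    "take the job":      {"income": 8, "free_time": 3},
    "stay put":          {"income": 5, "free_time": 6},
    "go back to school": {"income": 2, "free_time": 4},
}

def score(outcome):
    # An assumed utility function: weigh income twice as heavily
    # as free time. Any criterion would do for the demonstration.
    return 2 * outcome["income"] + outcome["free_time"]

# Narrow many possible futures into the single optimal one.
chosen = max(possible_futures, key=lambda f: score(possible_futures[f]))
print(chosen)  # "take the job" (score 19, vs 16 and 8)
```

Nothing in this down-selection requires the world to run on a different set of physical principles, which is exactly why this version of the concept leaves no loose ends.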
As for why the concept of free will should exist in the first place: it is the most natural explanation. It is a fault in our minds that the most natural explanation is not also the simplest, but it is also a useful feature. This form of vague, associative reasoning lets us jump to reasonably accurate conclusions quickly. The difference between the two breakdowns of "free will" I gave is that one uses only known, well-understood phenomena, while the other revolves around making the concept of agency unexplainable. One might also entangle the question with the concept of self and our individual sense of identity. By rolling a bunch of concepts up into one, there are fewer easily recognizable loose ends. The problem is that tools like associative reasoning and vague definitions aren't enough to actually arrive at a satisfying answer. They tie up enough loose ends that we declare the problem solved and move on, ignoring the incompleteness.

Note that this in itself isn't a completely exhaustive explanation of why we naturally want to believe in free will. Where does the concept come from? How do we form it in the first place? To fully answer these questions, we'd have to also examine the concept of our personal sense of identity, since that is the thing that gets conflated with the "free will" concept to form the more vague and fluffy version. If someone thinks computers don't or can't have free will but humans can, this is likely what they mean by "free will". Our sense of identity is a large issue, and far enough outside the scope of the question that I think it isn't necessary for this explanation. I'll leave that particular novel-length post for someone else.
I wrote the above before reading any of the comments, but there are a couple of other ideas that people touched on and I did not. I'm bringing them together here, mostly for my own future reference:
We have the ability to model the outside world in our own minds, including other people, but not our own minds themselves. Because of this, it seems like our choices aren't subject to causality. Credit, and more detail, here.
Another comment goes into more detail about why this is. In order to fully model itself, a mind would need more computing power than it has. Therefore, minds cannot fully model themselves.