I have no idea why or how someone first thought up this question. People ask each other silly questions all the time, and I don't think very much effort has gone into discovering how people invent them.
However, note that most of the silly questions people ask have either quietly gone away, or have been printed in children's books to quiet their curiosity. This type of question, along with many additional errors in rationality, seems to attract people. It gets asked over and over again, from generation unto generation, without any obvious, conclusive results.
The answer to most questions is either obvious, or obviously discoverable; some easy examples are "Does 2 + 2 = 4?" or "Is there a tiger behind the bush?". This question, however, creates a category error in the human linguistic system by forcibly prying apart the concepts of "sound" and "mental experience of sound". Few people will independently discover that a miscategorization error has occurred; at first, it just seems confusing. And so people start coming up with incorrect explanations, and they confuse a debate about the definition of the word "sound" with a debate about some...
I think a brain architecture/algorithm that would debate about free will would have been adapted for large amounts of social interaction in its daily life. This interaction would use markedly different skills (e.g., language) from those of more mundane activities. More importantly, it would require a different level of modeling to achieve any kind of good results. One brain would have to contain models for complicated human social, kin, and friendly relationships, as well as models for individuals' personalities.
At the center of the mesh of social interactions ...
I would say: people have mechanisms for causally modeling the outside world, and for choosing a course of action based on its imagined consequences, but we don't have a mechanism for causally modeling the mechanism within us that makes the choice, so it seems as if our own choices aren't subject to causality (and are thus "freely willed").
However, this is likely to be wrong or incomplete, firstly because it is merely a rephrasing of what I understand to be the standard philosophical answer, and secondly because I'm not sure that I feel done.
A difference of predictions between Maksym's proposed answer and mine occurs to me. If the sense of free will comes from not being able to model one's own decision process, rather than from taking the intentional stance towards people but not other things, then I would think that each individual would tend to think that she has free will, but other people don't. Since this is not the default view, my answer must be wrong or very incomplete.
"Many philosophers - particularly amateur philosophers, and ancient philosophers - share a dangerous instinct: If you give them a question, they try to answer it."
This line goes in that book you're going to write.
A warning to those who would dissolve all their questions:
Why does anything at all exist? Why does this possibility exist? Why do things have causes? Why does a certain cause have its particular effect?
I don't think this answer meets the standards of rigour that you set above, but I'm increasingly convinced that the idea of free will arises out of punishment. Punishment plays a central role in relations among apes, but once you reach the level of sophistication where you can ask "are we machines", the answer "no" gives the most straightforward philosophical path to justifying your punishing behaviour.
Things in thingspace commonly coming within the boundary 'free will':

- moral responsibility
- could have done otherwise
- possible irrational action
- possible self-sacrificial action
- gallantry and style (thanks to Kurt Vonnegut for that one)
- non-caused agency
- I am a point in spacetime and my vector at t+1 has no determinant outside myself
- whimsy
- 'car c'est mon bon désir' ("for such is my good pleasure")
- absolute monarchy
- you can put a gun at my head and I'll still say 'no'
- idealistic non-dualism
- consciousness subtending matter
- disagreeing with Mum & Dad
- disagreeing with the big Mom & Po...
Only in humans does it make predictive sense to talk about intent, capability, and inclination, and the wide gap between these kinds of perceived "properties" of fellow socially interacting humans, and the generally much simpler properties seen in inanimate objects and animals, leads the brain to allocate them to widely separated groups of buckets. It is this perceived separation in mental thing-space that leads to a free-will boundary being drawn around the cluster of socially interacting humans.
Careful there. Animistic beliefs are quit...
When you're done, you'll know you're done, but unfortunately the reverse implication does not hold.
So when you have the impression you are done, you are not necessarily done because some have this impression without really being done. But then when you are really done, you won't actually know you are done, because you will realize that this impression of being done can be misleading.
So here's your homework problem: What kind of cognitive algorithm, as felt from the inside, would generate the observed debate about "free will"?
I've written up my answer to this on my blog.
I claim that the reason we posit a thing called free will is that almost all of our decision-making processes are amenable to monitoring, analysis and even reversal by “critic” algorithms that reside one (or more) levels higher up. [I say almost all, because the top level has no level above it. The buck really does stop there]. There would probably be no fe...
Robin: So when you have the impression you are done, you are not necessarily done because some have this impression without really being done. But then when you are really done, you won't actually know you are done, because you will realize that this impression of being done can be misleading.
You'd think it would work that way, but it doesn't. Are you awake or asleep right now? When you're asleep and dreaming, you don't know you're dreaming, so how do you know you're awake?
If you claim you don't know you're awake, there's a series of bets I'd like to make with you...
As usual, this is better settled by experiment than by "I just know". My favourite method is holding my nose and seeing if I can still breathe through it. Every time I've tried this while dreaming, I've still been able to breathe, and, unsurprisingly, so far I've never been able to while awake. So if I try that, then whichever way it goes, it's pretty strong evidence. There — now it's science and there's no need to assume "I feel that I know I'm awake" implies "I'm awake".
Of course, if you're the sort of person who never thinks to question your wakefulness while dreaming, then the fact that you've thought of the question at all is good evidence that you're awake. But you need a better experiment than that if you also want to be able to get the right answer while you actually are dreaming.
[Apologies if replying to super-old comments is frowned upon. I'm reading the whole blog from the beginning and occasionally finding that I have things to say.]
It's funny that the working reality tests for dreaming are pretty stupid and decidedly non-philosophical. For instance, the virtual reality the brain sets up for dreams apparently isn't good enough to do text or numbers properly, so when you are dreaming you're unable to read the same text twice and see it saying the same thing, and digital clocks never work right. (There's an interesting parallel here to the fact that written language is a pretty new thing in evolutionary scale and people probably don't have that much evolved cognitive capacity to deal with it.)
There's a whole bunch of these: http://en.wikibooks.org/wiki/Lucid_Dreaming/Induction_Techniques#Reality_checks
This reminds me of a horrible nightmare I had back in High School. It was the purest dream I had ever had: the world consisted of me, a sheet of paper, and a mathematical problem. Every time I got to the bottom of the problem, before starting to solve it, I went back up to make sure I had gotten it right... only to find it had CHANGED. That again, and again, and again, until I woke up, a knot in my stomach and covered in sweat.
To realize that this dream has an explanation based on neural architecture rather than on some fear of exams is giving me a weird, tingly satisfaction...
Apologies if replying to super-old comments is frowned upon. I'm reading the whole blog from the beginning and occasionally finding that I have things to say.
I have been reading LW since the beginning and have not seen anyone object to replies to super-old comments (and there were 18-month-old comments on LW when LW began because all Eliezer's Overcoming-Bias posts were moved to LW).
Moreover, a lot of readers will see a reply to a super-old comment. We know that because people have made comments in really old comment sections to the effect of "if you see this, please reply so we can get a sense of how many people see comments like this".
Moreover, discouraging replies to super-old comments discourages reading of super-old comments. But reading super-old comments improves the "coherence" of the community by increasing the expected amount of knowledge an arbitrary pair of LW participants has in common. (And the super-old stuff is really good.)
So, reply away, I say.
Eliezer, you seem to be saying that the impression you get when you are really done feels different from the impression you get when you ordinarily seem to be done. But then it should be possible to tell when you just seem to be done, as this impression is different. I can imagine that sometimes our brains just fail to make use of this distinction, but it is quite another thing to claim that we could not tell when we just seem to be done, no matter how hard we tried.
Eliezer, also, the bet you proposed would only be enforced in situations where I am not dreaming, so it would really be a bet conditional on not dreaming, which defeats the purpose.
When you're asleep and dreaming, you don't know you're dreaming, so how do you know you're awake? If you claim you don't know you're awake, there's a series of bets I'd like to make with you...
1) Some people claim they can recognize that they're in a dream state.
2) The quoted claims are an example of the rhetorical fallacy known as equivocation.
When I'm dreaming, I always know I'm dreaming, and when I'm awake I always know I'm awake.
I realize that this doesn't apply to many other people, however... even the second part.
A fuller explanation of the preceding: As an example of Robin's point, "I can imagine that sometimes our brains just fail to make use of this distinction," the reason that some people don't know when they're dreaming is that they are unable, at that time, to pay attention to all the aspects of their experience; otherwise they would be able to easily distinguish their state from the state of being awake, because the two states are very different, even subjectively. I pay attention to these aspects even while dreaming, and so I recognize that I'm dreaming.
Eliezer, you seem to be saying that the impression you get when you are really done feels different from the impression you get when you ordinarily seem to be done. But then it should be possible to tell when you just seem to be done, as this impression is different.
Yes, exactly; it feels different and you can tell the difference - but first you have to have experienced both states, and then you have to consciously distinguish the difference and stay on your guard. Like, someone who understands even classical mechanics on a mathematical level should not be fooled into believing that they understand string theory, if they are at all on their guard against false understanding; but someone who's never understood any physics at all can easily be fooled into thinking they understand string theory.
I think I'll give this a try. Let's start with what a simple non-introspective mind might do:
Init (probably recomputed sometimes, but cached most of the time): I1. Draws a border around itself, separating itself from the "outside world" in its world model. In humans and similarly embodied intelligences you could get away with defining the own body as "inside", if internal muscle control works completely without inspection.
Whenever deciding on what to output: A1. Generates a list of all possible next actions of itself, as determined in I...
I was once involved in research on single ion channels, and here is my best understanding of the role of QM in biology.
There are no entanglement effects whatsoever, due to extremely fast decoherence; however, there are pervasive quantum tunneling effects involved in every biochemical process. The latter is enough to preclude exact prediction.
Recall that it is impossible to predict when a particular radioactive atom will decay. Similarly, it is impossible to predict exactly when a particular ion channel molecule will switch its state from open to closed and vice versa, as this involves tunneling through a potential barrier. Given that virtually every process in neurons is based on ion channels opening and closing, this is more than enough.
To summarize, tunneling is as effective in creating quantum uncertainty as decoherence, so you don't need decoherence to make precise modeling impossible.
Quantum uncertainty is decoherence. All decoherence is uncertainty. All uncertainty is decoherence. If it's impossible to predict the exact time of tunneling, that means amplitude is going to multiple branches, which, when they entangle with a larger system, decohere.
Most of the proposed models in this thread seem reasonable.
I would write down all the odd things people say about free will, pick the simplest model that explained 90% of it, and then see if I could make novel and accurate predictions based on the model. But, I'm too lazy to do that. So I'll just guess.
Evolution hardwired our cognition to contain two mutually-exclusive categories, call them "actions" and "events."
"Actions" match: [rational, has no understandable prior cause]. "Rational" means they are often influence...
HOMEWORK REPORT
With some trepidation! I'm intensely aware I don't know enough.
"Why do I believe I have free will? It's the simplest explanation!" (Nothing in neurobiology is simple. I replace Occam's Razor with a metaphysical growth restriction: Root causes should not be increased without dire necessity).
OK, that was flip. To be more serious:
Considering just one side of the debate, I ask: "What cognitive architecture would give me an experience of uncaused, doing-whatever-I-want, free-as-a-bird Capricious Action that is so strong that...
Eliezer, you wrote:
But when you're really done, you'll know you're done. Dissolving the question is an unmistakable feeling...
I'm not so sure. There have been a number of mysteries throughout history that were explained by science, and the resolution didn't feel immediately satisfying to people, even though it does to us now -- like the explanation of light as being electromagnetic waves.
I frequently find it tricky to determine whether a feeling of dissatisfaction indicates that I haven't gotten to the root of a problem, or whether it indicates that I jus...
The neural explanation doesn't seem parsimonious, given that there appears to be a much simpler cognitive "glitch" that causes the tree-falling-in-the-forest argument and the free will argument: our habitual propensity to mistake the communication devices known as words for the actual concepts they correspond to in our own minds. And as a natural consequence, people forget that the concept they associate with a word might be different from the concept another person associates with the same word.
One common result of these errors is that arguers...
"What kind of cognitive algorithm, as felt from the inside, would generate the observed debate about 'free will'?"
As I understand it, there was no debate on free will before about three centuries ago. Since that time, the idea that we might all be automata has been taken somewhat seriously. In earlier times, it would have been considered absurd to question free will.
So, did our cognitive algorithm change back around the time of Galileo, Descartes, and Newton? Of course not. So how can the algorithm be "blamed" for the existence o...
As I understand it, there was no debate on free will before about three centuries ago.
This is quite incorrect. Determinism (as opposed to the default folk psychology of free will) has been long debated; from Wikipedia:
"Some of the main philosophers who have dealt with this issue are Marcus Aurelius, Omar Khayyám, Thomas Hobbes, Baruch Spinoza, Gottfried Leibniz, David Hume, Baron d'Holbach (Paul Heinrich Dietrich), Pierre-Simon Laplace, Arthur Schopenhauer, William James, Friedrich Nietzsche, Albert Einstein, Niels Bohr, and, more recently, Victoria DiMarco, John Searle, Suraj Manjunath, Jai Ramachandran, Ted Honderich, and Daniel Dennett."
This is a very incomplete list, which omits people like the Stoics such as Chrysippus; the other article mentions later the Atomists Leucippus and Democritus.
In Eastern tradition, there are many different takes on 'karma'.
The atheist Carvaka held a deterministic scientific view of the universe, and a materialist view of the mind (although so little survives it's hard to be sure). I'm not entirely clear on the Samkhya darsana's position on causality, though their views on satkaryavada (as opposed to the common Indian position of a...
Here's my attempt (I haven't read the comments above in detail, as I don't want the answer spoiled in case I'm wrong).
For whatever reason, it is apparent that the conscious part of our brain is not fully aware of everything that our brain does. Now let's imagine our brain executing some algorithm, and see what it looks like from the perspective of our consciousness. At any given stage in the algorithm, we might have multiple possible branches, and need to continue to execute the algorithm along one of those possible branches. To determine which branch to f...
My $0.02: all it takes is a system a) without access to its own logs, and b) disposed to posit, for any event E for which a causal story isn't readily available, a default causal story in which some agent deliberately caused E to advance some goal.
Given those two things, it will posit for its own actions a causal story in which it is the agent, since it's the capable-of-agency thing most tightly associated with its actions.
Note that this does not require there not be free will (whatever that even means, assuming it means anything), it merely asserts that ...
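The two dispositions in this proposal can be sketched in a few lines of code. Everything here (the function name, the "me"-prefix convention for the system's own actions) is invented for the sketch, not taken from the comment:

```python
# Toy sketch of the proposal above: a system that (a) has no access to
# its own decision logs, and (b) defaults, for any unexplained event, to
# a causal story in which some agent deliberately caused it.

def causal_story(event, known_causes):
    """Return an explanation for `event`, given a dict of known causes."""
    if event in known_causes:
        return known_causes[event]
    # No story available: posit a deliberate agent. For the system's own
    # actions (marked "my_" here by convention), the nearest
    # capable-of-agency thing is the system itself.
    agent = "me" if event.startswith("my_") else "some agent"
    return f"{agent} deliberately caused {event}"

known_causes = {"rock_fell": "gravity"}
print(causal_story("rock_fell", known_causes))     # gravity
print(causal_story("my_arm_moved", known_causes))  # me deliberately caused my_arm_moved
```

The point of the sketch is that self-attribution of agency falls out of the default rule plus the missing logs; nothing about "free will" has to be built in.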
Some rough notes on free will, before I read the "spoiler" posts or the other attempted solutions posted as comments here.
(Advice for anyone attempting reductions/dissolutions of free will or anything else: actually write notes, make them detailed when you can (and notice when you can't), and note when you're leaving some subproblem unsolved for the time being. Often you will notice that you are confused in all kinds of ways that you wouldn't have noticed if you had kept all of it in your head. (And if you're going to try a problem and then read ...
My answer to that assignment is that I have no idea how that would work or how I could figure out how it would. Did I guess the password? If not, then is it swordfish? Just give me a gold star!
I've been going through the sequences, and this is probably the post I disagree with most.
Philosophy may lead you to reject the concept, but rejecting a concept is not the same as understanding the cognitive algorithms behind it.
More importantly, rejecting a concept doesn't solve the problem the concept is used for. The question to ask isn't what the precise definition of free will is, or whether the concept is coherent. Ask instead "What problems am I trying to solve with this concept?"
Because we do use the concept to solve problems. People...
So, to know if an answer is complete, you go by how certain cognitive processes make you feel? Seriously? Feelings lie. All the time.
My tackle at this question: Why do people debate free will?
The topic itself is of intense interest to humans, because we'd like to believe we have it, or that it exists. This is because we'd like to believe we have control over our own actions and our thoughts, since that would give us the feeling that because of said control we can shape our surroundings in search of our own happiness, or that happiness is achievable. But the crux of the problem is we can't just believe in free will now, because we have no idea, no proof or theories on how it exists. Th...
Those who dream do not know they dream, but when you wake you know you are awake.
I actually use this fact to enable lucid dreaming. When I'm dreaming, I ask myself, "am I dreaming?" And then I answer yes, without any further consideration, as I've realized that the answer is always yes. Because when I'm awake, I don't ask that question, because there's never any doubt to begin with. So when I'm dreaming and I find myself unsure of whether or not I'm dreaming, I therefore know that I'm dreaming, simply because the doubt and confusion exists. It's a method that's a lot simpler (and more accurate) than trying to analyze the contents of the dream to see if it seems real.
'Free will' is the halting point in the recursion of mental self-modeling.
Our minds model minds, and may model those minds' models of minds, but cannot model an unlimited sequence of models of minds. At some point it must end on a model that does not attempt to model itself; a model that just acts without explanation. No matter how many resources we commit to ever-deeper models of models, we always end with a black box. So our intuition assumes the black box to be a fundamental feature of our minds, and not merely our failure to model them perfectly.
This explains why we rarely assume animals to share the same feature of free will, as we do not generally treat their minds as containing deep models of others' minds. And, if we are particularly egocentric, we may not consider other human beings to share the same feature of free will, as we likewise assume their cognition to be fully comprehensible within our own.
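The halting point in the recursion can be made concrete with a toy sketch (the function and its "budget" parameter are inventions for illustration; the comment's notion of a model is informal):

```python
# A mind modeling minds must bottom out at a finite depth: the innermost
# model is a black box that "just acts without explanation".

def model_of(mind, budget):
    """Build a nested model of `mind`, spending one resource unit per level."""
    if budget == 0:
        return "black box"  # no resources left: the model just acts
    return {f"model of {mind}": model_of(mind, budget - 1)}

print(model_of("Alice", 2))
# {'model of Alice': {'model of Alice': 'black box'}}
```

However large the budget, the innermost element is always the unexplained black box; the intuition described above then mistakes that resource limit for a fundamental property of minds.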
...d-do I get the prize?
Um... the halting problem plus Gödel's incompleteness theorem, aka you cannot predict yourself completely? I think I'm missing a piece or two, and I probably am, thanks to having "incompleteness theorem and halting problem" as a cached thought.
At any rate, I made a comparison between free will and arbitrary code while thinking about this.
oh horrors.
Free will is basically asking about the cause of our actions and thoughts. The cause of our neurons firing. The cause of how the atoms and quarks in our brains move around.
To know that X causes the atoms in our brain to move a certain way, we'd have to know that every time X happens, the atoms in our brain would move in that specific way. The problem is that we would have to see into the future. We'd have to see what results from X in every future instance of X. We don't have that information. All we have are our past and current experiences, that we...
If we’re pretending that free will is both silly and surprising, then why aren’t we more surprised by stronger biases towards more accurate notions like causality?
If there were no implicit provision like this, there's no sense in asking any question like "why would brains tend to believe X and not believe not-X?" To entertain the question, first we entertain a belief that our brains were "just naïve enough" to allow surprise at finding any sort of cognitive bias. Free will indicates bias -- this is the only sense I can interpret from the question you asked.
O...
"Free will" is a black box containing our decision making algorithm.
What kind of mind would invent "free will"? The same mind that would neatly wrap up any other open-ended question into a single label, be it "élan vital" or "phlogiston". Our minds are fantastic at dreaming up explanations for things, and if they are not easily empirically testable at the time, then such explanations tend to stick. Without falsifying evidence, our pet theories tend to remain, and confirmation bias slowly hardens them into what feel...
Noise / sound exist independently of observation, at least so long as you subscribe to the idea that there exists an objective reality outside of your own mind. They are pressure waves transmitted through some medium.
The tree makes a sound, which no one hears.
The answer to this seems to be as to the sound example and to most philosophical debates in general:
1) Different categorization patterns, or, simply put, different meanings of a word. In this situation, even two words: people can disagree on what "will" is (in the context of "free") and on what "free" is (in the context of "will"; let us assume a Frege-Heimian world where if you know the two nodes you always know their combination, to ignore the "context" addenda).
2) Politicization of the question. In the world...
I think we care about whether or not we have free will because we associate it with accountability - both our own and others.
If someone picks me up and throws me on you, you should not blame me for getting slammed - this is not my fault, and I had no say in the matter. If someone points a gun at me and tells me to hit you, you probably won't blame me for complying. But if you had to rank my accountability in these two cases, it's obvious that I'm more accountable in the latter because I did have a choice - I could have chosen not to hit you and been shot. This is a very unfa...
This question never sounded like a meaningful one to me. By the time I first heard it, I was familiar with the understanding of sound as vibrations in the air, so the obvious answer was "yes."
As Sam Harris points out, the illusion of free will is itself an illusion. It doesn't actually feel like you have free will if you look closely enough. So then why are we mistaken about things when we don't examine them closely enough? Seems like a too-open-ended question.
Three things bother me here, and they're all about which questions are being asked.
The "tree falling in a forest" question isn't, as far as I've encountered it outside of this blog, about the definition of sound. Rather, it's about whether or not reality behaves the same when you do not observe it, an issue that you casually dismissed, without any proof, evidence, or even argument. There are ways to settle this dispute partially, though they are not entirely empirical due to the nature of the conundrum.
Ignoring the question of free will, ill defined
"If a tree falls in the forest, but no one hears it, does it make a sound?"
I didn't answer that question. I didn't pick a position, "Yes!" or "No!", and defend it. Instead I went off and deconstructed the human algorithm for processing words, even going so far as to sketch an illustration of a neural network. At the end, I hope, there was no question left—not even the feeling of a question.
Many philosophers—particularly amateur philosophers, and ancient philosophers—share a dangerous instinct: If you give them a question, they try to answer it.
Like, say, "Do we have free will?"
The dangerous instinct of philosophy is to marshal the arguments in favor, and marshal the arguments against, and weigh them up, and publish them in a prestigious journal of philosophy, and so finally conclude: "Yes, we must have free will," or "No, we cannot possibly have free will."
Some philosophers are wise enough to recall the warning that most philosophical disputes are really disputes over the meaning of a word, or confusions generated by using different meanings for the same word in different places. So they try to define very precisely what they mean by "free will", and then ask again, "Do we have free will? Yes or no?"
A philosopher wiser yet, may suspect that the confusion about "free will" shows the notion itself is flawed. So they pursue the Traditional Rationalist course: They argue that "free will" is inherently self-contradictory, or meaningless because it has no testable consequences. And then they publish these devastating observations in a prestigious philosophy journal.
But proving that you are confused may not make you feel any less confused. Proving that a question is meaningless may not help you any more than answering it.
The philosopher's instinct is to find the most defensible position, publish it, and move on. But the "naive" view, the instinctive view, is a fact about human psychology. You can prove that free will is impossible until the Sun goes cold, but this leaves an unexplained fact of cognitive science: If free will doesn't exist, what goes on inside the head of a human being who thinks it does? This is not a rhetorical question!
It is a fact about human psychology that people think they have free will. Finding a more defensible philosophical position doesn't change, or explain, that psychological fact. Philosophy may lead you to reject the concept, but rejecting a concept is not the same as understanding the cognitive algorithms behind it.
You could look at the Standard Dispute over "If a tree falls in the forest, and no one hears it, does it make a sound?", and you could do the Traditional Rationalist thing: Observe that the two don't disagree on any point of anticipated experience, and triumphantly declare the argument pointless. That happens to be correct in this particular case; but, as a question of cognitive science, why did the arguers make that mistake in the first place?
The key idea of the heuristics and biases program is that the mistakes we make, often reveal far more about our underlying cognitive algorithms than our correct answers. So (I asked myself, once upon a time) what kind of mind design corresponds to the mistake of arguing about trees falling in deserted forests?
The cognitive algorithms we use, are the way the world feels. And these cognitive algorithms may not have a one-to-one correspondence with reality—not even macroscopic reality, to say nothing of the true quarks. There can be things in the mind that cut skew to the world.
For example, there can be a dangling unit in the center of a neural network, which does not correspond to any real thing, or any real property of any real thing, existent anywhere in the real world. This dangling unit is often useful as a shortcut in computation, which is why we have them. (Metaphorically speaking. Human neurobiology is surely far more complex.)
This dangling unit feels like an unresolved question, even after every answerable query is answered. No matter how much anyone proves to you that no difference of anticipated experience depends on the question, you're left wondering: "But does the falling tree really make a sound, or not?"
But once you understand in detail how your brain generates the feeling of the question—once you realize that your feeling of an unanswered question corresponds to an illusory central unit wanting to know whether it should fire, even after all the edge units are clamped at known values—or, better yet, you understand the technical workings of Naive Bayes—then you're done. Then there's no lingering feeling of confusion, no vague sense of dissatisfaction.
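To make the metaphor concrete, here is a minimal sketch, not the post's actual model and with all names and weights invented for illustration, of a network whose observable "edge" units route through a dangling central unit. Once every edge unit is clamped to a known value, nothing you could anticipate depends on whether the central unit fires; the leftover "but did it really make a sound?" is just that unit's threshold sitting between the clamped inputs.

```python
def central_activation(edges, weights, threshold=1.0):
    """Activation of the dangling central unit, given clamped edge units.

    The central unit has no observable referent of its own; its firing
    is purely a function of the edge units and arbitrary weights.
    """
    total = sum(w * e for w, e in zip(weights, edges))
    return 1 if total >= threshold else 0


# Edge units: every anticipated experience about the falling tree.
edges = {
    "acoustic_vibrations_in_air": 1,    # a recorder would capture them
    "auditory_experience_in_brain": 0,  # no listener is present
}
weights = [0.6, 0.6]  # hypothetical connection strengths

fires = central_activation(list(edges.values()), weights)
print("central 'sound?' unit fires:", bool(fires))

# Every prediction about the world was already fixed by the edge units;
# whether the central unit "fires" adds no anticipated experience.
```

The design point is that the unanswered-feeling question lives entirely in the central unit's wiring: change the arbitrary threshold and the "answer" flips, while every clamped edge unit, and hence every testable prediction, stays exactly the same.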
If there is any lingering feeling of a remaining unanswered question, or of having been fast-talked into something, then this is a sign that you have not dissolved the question. A vague dissatisfaction should be as much warning as a shout. Really dissolving the question doesn't leave anything behind.
A triumphant thundering refutation of free will, an absolutely unarguable proof that free will cannot exist, feels very satisfying—a grand cheer for the home team. And so you may not notice that—as a point of cognitive science—you do not have a full and satisfactory descriptive explanation of how each intuitive sensation arises, point by point.
You may not even want to admit your ignorance of this point of cognitive science, because that would feel like a score against Your Team. In the midst of smashing all foolish beliefs about free will, it would seem like a concession to the opposing side to admit that you've left anything unexplained.
And so, perhaps, you'll come up with a just-so evolutionary-psychological argument that hunter-gatherers who believed in free will were more likely to take a positive outlook on life, and so outreproduce other hunter-gatherers—to give one example of a completely bogus explanation. If you say this, you are arguing that the brain generates an illusion of free will—but you are not explaining how. You are trying to dismiss the opposition by deconstructing its motives—but in the story you tell, the illusion of free will is a brute fact. You have not taken the illusion apart to see the wheels and gears.
Imagine that in the Standard Dispute about a tree falling in a deserted forest, you first prove that no difference of anticipation exists, and then go on to hypothesize, "But perhaps people who said that arguments were meaningless were viewed as having conceded, and so lost social status, so now we have an instinct to argue about the meanings of words." That's arguing that, or explaining why, a confusion exists. Now look at the neural network structure in Feel the Meaning. That's explaining how, disassembling the confusion into smaller pieces which are not themselves confusing. See the difference?
Coming up with good hypotheses about cognitive algorithms (or even hypotheses that hold together for half a second) is a good deal harder than just refuting a philosophical confusion. Indeed, it is an entirely different art. Bear this in mind, and you should feel less embarrassed to say, "I know that what you say can't possibly be true, and I can prove it. But I cannot write out a flowchart which shows how your brain makes the mistake, so I'm not done yet, and will continue investigating."
I say all this, because it sometimes seems to me that at least 20% of the real-world effectiveness of a skilled rationalist comes from not stopping too early. If you keep asking questions, you'll get to your destination eventually. If you decide too early that you've found an answer, you won't.
The challenge, above all, is to notice when you are confused—even if it just feels like a little tiny bit of confusion—and even if there's someone standing across from you, insisting that humans have free will, and smirking at you, and the fact that you don't know exactly how the cognitive algorithms work has nothing to do with the searing folly of their position...
But when you can lay out the cognitive algorithm in sufficient detail that you can walk through the thought process, step by step, and describe how each intuitive perception arises—decompose the confusion into smaller pieces not themselves confusing—then you're done.
So be warned that you may believe you're done, when all you have is a mere triumphant refutation of a mistake.
But when you're really done, you'll know you're done. Dissolving the question is an unmistakable feeling—once you experience it, and, having experienced it, resolve not to be fooled again. Those who dream do not know they dream, but when you wake you know you are awake.
Which is to say: When you're done, you'll know you're done, but unfortunately the reverse implication does not hold.
So here's your homework problem: What kind of cognitive algorithm, as felt from the inside, would generate the observed debate about "free will"?
Your assignment is not to argue about whether people have free will, or not.
Your assignment is not to argue that free will is compatible with determinism, or not.
Your assignment is not to argue that the question is ill-posed, or that the concept is self-contradictory, or that it has no testable consequences.
You are not asked to invent an evolutionary explanation of how people who believed in free will would have out-reproduced their fellows; nor an account of how the concept of free will seems suspiciously congruent with bias X. Such are mere attempts to explain why people believe in "free will", not explanations of how the belief is generated.
Your homework assignment is to write a stack trace of the internal algorithms of the human mind as they produce the intuitions that power the whole damn philosophical argument.
This is one of the first real challenges I tried as an aspiring rationalist, once upon a time. One of the easier conundrums, relatively speaking. May it serve you likewise.