I was recently talking with someone about the problem of free will, and I realised that for many years I have given the same response, without ever really soliciting broader critical feedback. The notion of free will here refers to a naive, libertarian, not strictly defined approach of "when I feel I am making a choice, I really have one", and all of the associated implied moral philosophy (laziness is a thing, I can be blamed for my choices, etc.).

The starting assumption is that I want to believe in true things (I leave open the question of whether this epistemic duty is itself justified or not). I propose a trilemma, where exactly one of the following propositions holds:

  1. The notion of 'free will' is meaningless, or
  2. It is meaningful, and I do in fact have free will, or
  3. It is meaningful, but I happen not to have it.

If (1) is true, then the whole discussion is moot: no claim about free will can be true or false, and whatever I believe about it is equally justified. If (2) is true, then I want to believe in having free will (since it is true that I have it). If (3) is true, then "should" is a meaningless concept - there is no way I would be able to change my view one way or the other.

So the only possible world in which I get to make this choice is one where free will is real, and therefore I should believe in it - and mostly ignore the debate about compatibilism, the nature of physics, dualism, etc. Which is what I do.
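One rough way to make the dominance explicit, along the lines of the formalisation mentioned in the acknowledgement below (this is only a sketch; the credences $p_1, p_2, p_3$ over the three horns are added notation): write $B \in \{\text{believe}, \text{disbelieve}\}$ for my stance on free will. Correctness of $B$ is undefined under (1), equals $[B = \text{believe}]$ under (2), and is fixed independently of any choosing under (3), so

$$\Pr[\text{my belief is correct}] = p_2 \cdot [B = \text{believe}] + (\text{terms that do not depend on my choice}),$$

and the probability of holding a correct belief is (weakly) maximised by choosing to believe - in the only horn where choosing means anything.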

One potential issue is that (1) can be true or false depending on the precise definition (but then what even is a precise definition?). Still, I suspect that no matter which definition I instantiate the argument with, as long as it is sensible, the general (self-referential) structure will stay the same.

Thanks to Jakub S. for the feedback on this post and his suggestion to formalise this argument as the epistemic duty of maximising the probability of holding correct beliefs.

5 comments

In my opinion, your trilemma definitely does not hold. "Free will" is not a monosemantic term, but one that encompasses a range of different meanings both when used by different people and even the same person in different contexts.

  1. is false, because the term is meaningful, but used with different meanings in different contexts;
  2. is false, because you likely have free will in some of those senses and do not in others, and it may be unknown or unknowable in yet more;
  3. is false for the same reason as 2.

For example: your mention of "blame" points to a fairly common cluster of moral and pragmatic concepts attached to discussions of free will, but one largely divorced from any metaphysical aspects of free will.

Whether or not a sapient agent metaphysically could have acted differently in that specific moment is irrelevant to whether it is moral or useful to assign blame to that agent for the act (in such discussions, usually an act that harms others). Even under the most hardcore determinism and assuming immutable agents, they can be classified into those that would and those that wouldn't have performed that act and so there is definitely some sort of distinction to be made. Whether you want to call it "blame" or not in such a world is a matter of opinion.

However, sapient agents such as humans in the real world are not immutable and can observe how such agents (possibly including themselves) are treated when they carry out certain acts, and can incorporate that into future decisions. This feeds into moral and pragmatic considerations regardless of the metaphysical nature of free will.

There are likewise many other concepts tied into such "free will" discussions that could be separated out instead of just lumping them all together under the same term.

We do things for reasons. This morning I got a call to say that my car, in for servicing, is ready for collection. I will collect it this afternoon. Am I "free" not to? "Could I" decide otherwise? What does the question mean? I want the car back, for obvious reasons, and this afternoon is the first good time to go - also for reasons. Of course I'm going to collect it then. Nonetheless, it was still me that decided to do that, and that will do that later today. The thoughts that led to that conclusion were mine. The sage is not above causation, nor subject to causation, but one with causation.


“Meaningless” is vaguely defined here. You defined free will at the beginning, so it must have some meaning in that sense.

It seems like “meaningless” is actually a placeholder for “doesn’t really exist”.

Which would make the trilemma boil down to:

  1. Free will doesn’t exist
  2. It exists and I have it
  3. It exists and I don’t have it

And your basis for rejecting point 1 is that “truth wouldn’t matter, anything would be justified, therefore it’s false”.

I don’t think this follows.

Ultimately, what you're pointing out is the problem of distinguishing a non-free operating system that tends to believe true things from a confused non-free operating system that tends to believe false things.

That this distinction cannot be subjectively resolved with 100% confidence (what if the axioms of logic and self-coherence are wrong?) doesn't automatically make it "moot".

You have to assume logic, memory, and a degree of rationality at some level, no matter what circumstance you're in. If you don't assume that, then you're not free either; you're just acausally operating on random whims - and that's something you don't control by definition.

An algorithm that computes 22+117, or something like that, is free to compute it correctly, even as it's running on a physical computer that might be broken in some subtle way and so might produce a different result. Identifying with the algorithm that your brain implements when making a decision doesn't seem different: you are just a more complicated algorithm, producing some result. What the physical world does with that result is a separate issue, but for the purposes of this argument the algorithm is selected to be in tune with the world - it's an algorithm that the brain is currently simulating in detail.
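A minimal sketch of that distinction in code (an illustrative toy, not part of the original comment; the `flaky_run` fault model is invented for the example):

```python
import random

def add(a: int, b: int) -> int:
    """The abstract algorithm: its output is fixed by its definition alone."""
    return a + b

def flaky_run(a: int, b: int, fault_rate: float = 0.01) -> int:
    """A physical substrate running the algorithm. It may be subtly broken;
    here a fault flips one random bit of the result (a made-up fault model)."""
    result = add(a, b)
    if random.random() < fault_rate:
        result ^= 1 << random.randrange(max(result.bit_length(), 1))
    return result

print(add(22, 117))        # 139, by definition, regardless of substrate
print(flaky_run(22, 117))  # usually 139; occasionally the hardware errs
```

What the algorithm computes is settled by its definition; what a particular physical run produces is a separate, empirical matter.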

Is there a distinction between "true will" and "false will", and how does that factor into free will?

Take the example of someone with total paralysis, or locked-in syndrome: they are absolutely unable to move any part of their body and therefore unable to manipulate their environment. A non-deterministic view of human consciousness will still suppose that they have the free will to choose what subject is on their mind. They can listen to the ambient sounds of the room, imagine a blue triangle, or choose to imagine a red hexagon.

Thankfully, I am very much in control of all my limbs and able to physically manipulate my immediate environment if I choose - I can pick up a glass and fill it with water[1], for example. And if I set my mind to making use of that ability - to grab a glass and fill it - then I'd be exercising free will, unlike a robot arm.

I can choose to imagine a red hexagon if I want. I can sometimes choose what I want to think about. But sometimes I do things without thinking. I have instincts and reflexes. I also tend to ruminate, to fixate on thoughts I don't like. I would very much like to not think about such things: embarrassing moments, maladaptive framings of problems. I also sometimes cannot recall certain facts which, in the past, I have been able to recall without external prompting.

To phrase my original question differently: when I become fixated on a topic, is that because I actually, truly want to be, and am merely putting up protestations that I don't (protestations, apparently, only to myself)? Or is it in fact against my will - my truest will - such that, just as a person with total paralysis is unable to pick up a glass of water, to hug a loved one, or to walk on grass, no matter how much they long to interact with and manipulate their environment, I am at times unable to fulfill my will to think of something else?

Does the authenticity of a desire determine whether something is free will or not? Am I in fact exercising free will even when I ruminate or turn my thoughts to things I don't want to, because that is in fact my desire?

I often wonder if free will is a synonym for agency. And to ask a different question: how much agency - what is the most atomic example of free will - is needed to say "yup, this entity has free will"? And can we consider physical agency and freedom of thought separately?

[1] I'm aware that there are robot arms that can do this, and that a monkey could be trained to do it. I don't think that's relevant; I'm just saying I'm aware of that argument.