In this theory, free will becomes a property that is not possessed by creatures themselves, but by creatures interacting with other creatures.
I agree (and have mentioned here multiple times before) that free will and agency are observer-dependent, and not an objective feature of reality. Also see the concept of the intentional stance.
This also works on yourself. If your best model of yourself is as an agent making choices based on their beliefs, then you will seem to have free will to yourself.
It's quite clear that humans have free will relative to humans. I also conjecture that:
Perhaps all complicated systems that can think are always too complicated to predict themselves; as such, they would all consider themselves to have free will.
Also, if you actually have free will, you may well seem to have free will to yourself. This is a hand...
Predictability is subjective, so there is no such thing as "actually having" free will as he defines it.
Unpredictability is subjective, because what an observer can't predict depends on their specific limitations. Perfect predictability is usually cashed out in terms of a perfect predictor, with no limitations.
So a flipped coin has free will? I fear this is an attempt to dissolve a question which has gone the wrong way: instead of tabooing the word and looking for predictions that can be made conditional on some unambiguous feature, you're looking for an easier-to-use definition that you can force the word into.
Are you familiar with Dennett’s three stances? You’ve mostly reinvented that idea, I think; and your notion of what constitutes free will is also very similar to Dennett’s account of consciousness.
I know about the three stances. What's Dennett's account of consciousness? I know that he doesn't believe in qualia and tries to explain everything in the style of modern physics.
If I may toot my own horn: https://www.lesswrong.com/posts/yex7E6oHXYL93Evq6/book-review-consciousness-explained
I'll admit I'm not totally sure what Said Achmiz means by his comparison, though :)
I think the use of the word "psychopathic" is distracting here. The attention to "others as machines" is relevant, but psychopathy has important meanings elsewhere which really don't apply here. Proper prediction-based domination is more like judo, in that you are using the agent's own power towards your needs. You never ask for something that would be opposed, except when confrontation is exactly what you want out of the system (and therefore it's not true opposition but cooperation).
In general, the standard gripes about free will apply. Natural laws are descriptive, not normative; thus they can't be oppressive. If somebody develops a theory of what ice cream flavour you like, it just means you have tastes, and it doesn't limit your taste formation in any way. Having a "free ice cream taste" would mean you would not have any taste (that is, you could not like any particular flavour). Having "free will" would mean there is no choice that could be attributed to your character; you would have no "decision taste". As discussed here, there is an enormous difference between "agent X can't predict the particular decision that agent Y will make" and "there is no function from the state of the world and the state of agent Y that would output agent Y's decision".
If the universe is deterministic then something else has to be responsible for (the illusion of libertarian) free will. But you don't know that the universe is deterministic.
How is a probabilistic universe more conducive to free will? If I can predict that you have exactly a 50% chance of doing some action, based on the output of the module in your brain that is specialized to generate random numbers, I can still interact with you as with a machine.
Libertarian free will is standardly defined as being able to make decisions that are not determined by the preceding state of the universe (and that are also rational and in line with your desires, to an extent). It's not defined as not being mechanical in any way, and defining it that way kind of makes it supernatural.
It also appears to me that apparent agents become apparent machines if the observer has strong predictive power.
As an adult, I remember my mental states as a teenager during an aha moment on simple Euclidean geometry proofs: the recall efforts, the checks, the sight, the comparisons, the size of the A4 sheet that the figures had to fit on with a margin for the written demonstration, the colors of my pens and the choices of coloring some figures or using a pencil instead, etc.
Well, strong prediction power includes stronger understanding of causes and outcomes; I mean that the birth, life, and death of the event are well connected to well-defined internal and external values.
So, if a powerful predictor reads the mind of a teenager working through the same Euclidean geometry that I did, I think the predictor would know, before and better than the teenager, the outcomes of the efforts, checks, sight, comparisons, agency, results, and the aha.
The problem is the internal and external values: are they determined before the teenager's birth? Or are they real-time values, as in weather forecasting or the observation of a quantum state?
Maybe the internal and external values, or the finite sets of possible values nearest to the event, are known sufficiently far in advance to call the teenager an apparent machine for a predictor running in faster real time.
But if the complete causal chain defines what the teenager is, then the predictor is just nicknaming the teenager in real time to satisfy its hurried agenda.
So free will is an appearance from others' perspective. If the CIA mechanically gets answers from humans 1 and 2, and asks itself whether the techniques for n work for n+1, then, since all sapiens are the same, we are all machines in this perspective. But if the question is "would you have worked for the CIA if you were born in America?", then everything is undecidable, because the machines flip sides with the predictors; it's like the famous prequel to Gödel, "The Cretans, always liars", from Epimenides' Cretica.
In this theory, free will becomes a property that is not possessed by creatures themselves, but by creatures interacting with other creatures.
That would seem to imply that Robinson Crusoe lost his free will when he was marooned, and regained it on encountering Friday. Or that when I arrive home and shut my front door, I stop having free will until I go out again.
Only if you stop interacting with yourself. Remove the feedback loop where you observe yourself, and there would be no free will, because it would be irrelevant: there would be nothing being modeled that one could assess as having free will or not.
You should not only shut your door, but also stop thinking about yourself and explaining your own behavior. People in "flow" seem to be in such a free-will-less state.
A more radical version of this idea is promoted by Susan Blackmore, who says that consciousness (not just free will) exists only when a human (or some other thinking creature) thinks about itself:
Whenever you ask yourself, “Am I conscious now?” you always are.
But perhaps there is only something there when you ask. Maybe each time you probe, a retrospective story is concocted about what was in the stream of consciousness a moment before, together with a “self” who was apparently experiencing it. Of course there was neither a conscious self nor a stream, but it now seems as though there was.
Perhaps a new story is concocted whenever you bother to look. When we ask ourselves about it, it would seem as though there’s a stream of consciousness going on. When we don’t bother to ask, or to look, it doesn’t.
I think consciousness is still there even when self-consciousness is not, but if we replace "consciousness" with "free will" in that passage, I would agree with it.
Free will
Consider creatures. "Creature" is really hard to define in general, but for now let's just consider biological creatures. They are physical systems.
An effectively deterministic system, or an apparent machine, is a system whose behavior can be predicted easily (using only a little time/energy) by the creature making the judgment, from its initial state and immediate surroundings.
An effectively teleological system, or an apparent agent, is a system whose behavior cannot be predicted as above, but whose future state can be predicted in some sense.
In what sense, though, needs work: if I can predict that you would eat food, but not how, that should count. If I can predict that you would eat chocolate at 7:00, though I don't know how you would do that, that might count as less free. Perhaps something like information-theoretic "surprise", or "maximizing entropy"? More investigation is needed; see the sketch below.
Basically, an apparent machine is somebody that you can predict very well, and an apparent agent is somebody that you can predict only in a limited, big-picture way.
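To make the "surprise" idea concrete, here is a minimal sketch, assuming a toy observer whose model assigns probabilities to a creature's next action. The probabilities, the action names, and the `average_surprisal` helper are all made up for illustration; the point is only that the very same behavior scores as machine-like or agent-like depending on the observer's model.

```python
import math

def surprisal(prob: float) -> float:
    """Information-theoretic surprise of an observed event, in bits."""
    return -math.log2(prob)

def average_surprisal(predictions, outcomes):
    """Mean surprise an observer feels while watching a creature.

    `predictions`: one dict per time step, mapping each possible action
    to the probability the observer assigned to it.
    `outcomes`: the action the creature actually took at each step.
    """
    return sum(surprisal(pred[act])
               for pred, act in zip(predictions, outcomes)) / len(outcomes)

# The creature does the same thing four times in a row.
actions = ["eat chocolate"] * 4

# Observer A has a detailed model and predicts each step almost perfectly.
detailed_model = [{"eat chocolate": 0.99, "other": 0.01}] * 4

# Observer B has only a vague, big-picture model.
vague_model = [{"eat chocolate": 0.5, "other": 0.5}] * 4

print(average_surprisal(detailed_model, actions))  # ~0.014 bits/step: apparent machine
print(average_surprisal(vague_model, actions))     # 1.0 bits/step: apparent agent
```

On this framing, "having free will relative to an observer" could perhaps be cashed out as high average surprisal at the level of individual actions combined with low surprisal at the level of outcomes, which matches the "less free" chocolate-at-7:00 example above.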
A successful agent needs to figure out what other agents are going to do. But it's too hard to model them as apparent machines, simply because of how complicated creatures are. It's easier to model them as apparent agents.
Apparent agents are apparently free: they aren't apparently deterministic.
Apparent agents are willful: they do actions.
Thus, apparent agents apparently have free will. To say someone "has free will" means that someone is a creature that does things in a way you can't predict in detail, but can somewhat predict in outcome. Machines can be willful or not, but they are not free.
In this theory, free will becomes a property that is not possessed by creatures themselves, but by creatures interacting with other creatures.
Eventually, some creatures evolved to apply this line of thought to themselves, probably those animals that are very social and need to think about themselves constantly, like humans.
And that's how humans think they themselves have free will.
Perhaps all complicated systems that can think are always too complicated to predict themselves; as such, they would all consider themselves to have free will.
From free to unfree
With more prediction power, a creature could model other creatures as apparent machines instead of apparent agents. This is how humans have been treating other animals, actually; Descartes is a famous example. But any creature can be a machine to someone with enough computing power.
Thinking of some creature as a machine to operate, instead of an agent to negotiate with, is usually regarded as psychopathic. Most psychopathic humans are so not due to an intelligent confidence in predicting other humans, but because of a lack of empathy/impulse control, caused by some environmental/genetic/social/brain abnormality.
But psychopathic modeling of humans can happen in an intelligent, honest way, if someone (say, a great psychologist) becomes so good at modeling humans that the other humans are entirely predictable to him.
This has been achieved in a limited way in advertising companies, attention design, and politics. The 2016 American election manipulation by Cambridge Analytica shows honest psychopathy. It will become more prevalent and more subtle, since overt manipulation makes humans deliberately become less predictable as a defense.
Emotionally intelligent robots/electronic friends could become benevolent psychopaths. They will (hopefully) be benevolent, or at least be designed to be. They will become more and more psychopathic (not in the usual "evil" sense, I emphasize) as they become better at understanding humans. This is one possible reason for humans to limit the power of their electronic friends, out of an unwillingness to be modelled as machines instead of agents.