As humans, our brains need the capacity to pretend that we could choose different things
This seems wrong: "capacity to pretend" is not it. Rather, we don't know what we'll do, so there is no need to pretend that we don't know. What we know (can figure out) is what consequences are anticipated under the assumption of taking various hypothetical actions (this might be what you meant by "pretend").
(It's a bit more subtle than that: it's possible to anticipate the decision, but this anticipation doesn't, or shouldn't, play a direct role in selecting the decision, it observes and doesn't determine. So it's possible to know what you'll most likely do without having decided it yet.)
I don't think you understand EY's position at all.
The actual argument can be summarized more like this: "If free will means anything, then it must mean our algorithm's ability to determine our actions. Therefore free will is not only compatible with determinism, it's absolutely dependent on determinism. If our mind's state didn't determine our actions, only then would there be no possibility of free will.
The sort of confusion which thinks free will to be incompatible with determinism, derives from people picturing their selves as being restrained by physics instead of being part of physics."
I'd take that, minus the crucial dependence on determinism. A system can contain stochastic elements and yet be compatible with free will.
This, I think, is a major part of it, that it doesn't seem you've accounted for:
The "free will" debate is a confusion because to answer the question on the libertarians' terms is already to cede their position. The question they ask is: "Can I make choices, or does physics determine what I do?"
Implicit in that question is a definition of the self that already assumes dualism. The question treats the self as a ghost in the machine, or a philosophy student of perfect emptiness. The libertarians imagine that we should be able to make decisions not only apart from physics, but apart from anything. They are treating the mind as a blank slate that should be able to take in information and output decisions based on nothing whatsoever.
If, instead, you apply the patternist theory of mind, you start with the self as "an ongoing collection of memories and personality traits." (Simplified, of course.) From that point, you can reduce the question to a reductio ad absurdum. Say that one of my personality traits is a love and compassion for animals, and we're asking the question, "Do I have the free will to run over this squirrel?" Replace "p...
I'm staying out of the EY-exegesis side of this altogether, but a note on your summary in its own voice...
As humans, our brains need the capacity to pretend that we could choose different things, so that we can imagine the outcomes, and pick effectively.
I would say, rather, that the process of imagining different outcomes and selecting one simply is the experience that we treat as the belief that we can choose different things. Or, to put it another way: I don't think we're highly motivated to pretend we could have done something different, so much as we are easily confused about whether we could have or not.
Sorry to go off-topic, but I'd like to know how close my understanding of free will and determinism is to reality, or at least to that of Less Wrong.
My understanding is that the world is completely deterministic and the decisions with which we're faced, as well as the choices that we make, are all predetermined (in advance, since the beginning of time - whatever the beginning of time may mean). And even though this is the case, it doesn't mean that we're not fulfilling our preferences at each decision point.
Also, there's nothing spontaneous or random ...
So with respect to free will, we can instead ask the question, “Why would humans feel like they have free will?” If we can answer this well enough, then hopefully we can dissolve the original question.
Not sure about EY's position, but I find that you are making a significant assumption: that people always feel like they have free will. This is patently false. I would start by trying to imagine how it feels to have no free will. Possible options:
You feel compelled to do things because the voices in your head tell you to (i.e. you don't have your o
The key argument to me in Eliezer's "Free Will" sequence is that causality doesn't work directly from past to future, but from past to present and from present to future. For the same reason, there is (usually) no way to know the future from the past without simulating the present.
Now, let's apply that to Free Will. You are in a state S (with a knowledge of the world and a set of inputs), you run an algorithm that will decide what action A you'll do.
It is deterministic, so given the state S, something (Omega) can predict what action A you'll do. Bu...
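The setup in this comment can be sketched in a few lines of Python. The state contents and the decision rule below are my own illustrative assumptions, not anything from the sequence; the point is only the shape of the argument:

```python
# A deterministic agent: its action is a pure function of its state S.
def agent(state):
    # The agent's decision algorithm; nothing here is random.
    return "go to lecture" if state["values_learning"] else "stay home"

# Omega "predicts" the action the only way available in general:
# by running the same computation, i.e. simulating the agent.
def omega_predict(state):
    return agent(dict(state))  # copy the state, run the algorithm

s = {"values_learning": True}
print(agent(s) == omega_predict(s))  # True: the prediction matches the decision
```

The point survives the sketch: Omega's prediction doesn't bypass your algorithm, it contains a run of it.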
to pretend that we could choose different things
On the above (emphasis added) - and independent of anything I've seen from EY - beware the modal scope fallacy. It leads to unsound rejections of "could" and "ability" statements.
I'm not seeing how that conclusion is reached. How would we act differently if we did have free will, as opposed to a necessary illusion for decision-making?
I'd like to propose a way for measuring a system's freedom: it is the size of the set of closed-ended goals which it can satisfy from its current state. How's that?
I also think that this is all you really need to not be confused about free will. It's the freedom to do what you will.
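The proposed measure can be made concrete with a toy state graph. Everything below (the states, transitions, and goals) is an invented example, not part of the proposal itself:

```python
from collections import deque

def reachable(start, transitions):
    """All states reachable from `start` (breadth-first search)."""
    seen, queue = {start}, deque([start])
    while queue:
        s = queue.popleft()
        for nxt in transitions.get(s, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

def freedom(state, transitions, goals):
    # The proposed measure: how many closed-ended goals
    # the system can satisfy (reach) from its current state.
    return len(goals & reachable(state, transitions))

transitions = {"home": ["lecture", "gym"], "lecture": ["exam passed"]}
goals = {"exam passed", "gym", "concert"}
print(freedom("home", transitions, goals))  # 2: "concert" is unreachable
```

On this rendering, a more constrained system (fewer reachable goals) counts as less free, which matches the intuition behind the proposal.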
How would particles or humans behave differently if they had free will compared to if they didn't?
I actually think that's a great way to approach the problem, if you view emotion and cognition as behavior.
Decide which is your favourite outcome. In this case, I'd rather have learnt stuff. So that's option 2.
It looks like you are running on a corrupted system that just chose staying at home.
Oh.
I tried to figure out what Eliezer's stance on free will was quite a few times, but never really figured out what he meant. This cleared things up, thanks!
I'm participating in a university course on free will. On the online forum, someone asked me to summarise Eliezer's solution to the free will problem, and I did it like this. Is it accurate in this form? How should I change it?
“I'll try to summarise Yudkowsky's argument.
As Anneke pointed out, it's kinda difficult to decide what the concept of free will means. How would particles or humans behave differently if they had free will compared to if they didn't? It doesn't seem like our argument is about what we actually expect to see happening.
This is similar to arguing about whether a tree falling in a deserted forest makes any noise. If two people are arguing about this, they probably agree that if we put a microphone in the forest, it would pick up vibrations. And they also agree that no-one is having the sense experience of hearing the tree fall. So they're arguing over what 'sound' means. Yudkowsky proposes a psychological reason why people may have that particular confusion, based on how human brains work.
So with respect to free will, we can instead ask the question, “Why would humans feel like they have free will?” If we can answer this well enough, then hopefully we can dissolve the original question.
It feels like I choose between some of my possible futures. I can imagine waking up tomorrow and going to my Engineering lecture, or staying in my room and using Facebook. Both of those imaginings feel equally 'possible'.
Humans execute a decision making algorithm which is fairly similar to the following one.
List all your possible actions. For my lecture example, that was “Go to lecture” and “Stay home.”
Predict the state of the universe after pretending that you will take each possible action. We end up with “Buck has learnt stuff but not Facebooked” and “Buck has not learnt stuff but has Facebooked.”
Decide which is your favourite outcome. In this case, I'd rather have learnt stuff. So that's option 2.
Execute the action associated with the best outcome. In this case, I'd go to my lecture.
Note that the above algorithm can be made more complex and powerful, for example by incorporating probability and quantifying your preferences as a utility function.
As humans, our brains need the capacity to pretend that we could choose different things, so that we can imagine the outcomes, and pick effectively. The way our brain implements this is by considering those possible worlds which we could reach through our choices, and by treating them as possible.
So now we have a fairly convincing explanation of why it would feel like we have free will, or the ability to choose between various actions: it's how our decision making algorithm feels from the inside.”
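The four-step procedure in that summary, including the utility-function extension mentioned in its note, can be sketched roughly like this. The specific predictions and utility scores are illustrative assumptions, not part of the summary:

```python
def decide(actions, predict, utility):
    """Imagine each action's outcome, score the outcomes, pick the best action."""
    # Steps 1-2: list the actions and predict the world-state after each one.
    outcomes = {a: predict(a) for a in actions}
    # Step 3: rank the imagined outcomes by preference (here, a utility function).
    best = max(actions, key=lambda a: utility(outcomes[a]))
    # Step 4: return the action whose imagined outcome scored highest.
    return best

predictions = {
    "Go to lecture": "Buck has learnt stuff but not Facebooked",
    "Stay home": "Buck has not learnt stuff but has Facebooked",
}
utilities = {
    "Buck has learnt stuff but not Facebooked": 1.0,
    "Buck has not learnt stuff but has Facebooked": 0.3,
}

print(decide(list(predictions), predictions.get, utilities.get))
# "Go to lecture"
```

Nothing in this sketch requires indeterminism: the algorithm treats both actions as "possible" while it runs, yet which one it returns is fully determined by its inputs.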