gjm comments on Is Spirituality Irrational? - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
I think that's wrong for two reasons. The first is that the model might explicitly include the agent's desires. The second is that a model might predict much better than it explains. (Though exactly what constitutes good explanation is another thing people may reasonably disagree on.)
I think that's better understood as a limit on its intelligence than on its freedom. It doesn't have the mental apparatus to form thoughts about whether or not to play chess (except in so far as it can resign any given game, of course). It may be that we shouldn't try to talk about whether an agent has free will unless it has some notion of its own decision-making process, in which case I'd say not that the chess program lacks free will, but that it's the wrong kind of thing to have or lack free will. (If you have no will, it makes no sense to ask whether it is free.)
Your objection to compatibilism was, unless I badly misunderstood, that no one has given a good compatibilist criterion for when something has free will. My objection was that you haven't given a good incompatibilist criterion either. The fact that you can state a necessary condition doesn't help with that; the compatibilist can state necessary conditions too.
There seem to me to be a number of quite different ways to interpret what he wrote. I am guessing that you mean something like: "I define free will to be unpredictability, with the further condition that we apply it only to agents we wish to anthropomorphize". I suppose that gets around my random number generator example, but not really in a very satisfactory way.
So, anyway, suppose someone offers me a bribe. You know me well, and in particular you know that (1) I don't want to do the thing they're hoping to bribe me to, (2) I care a lot about my integrity, (3) I care a lot about my perceived integrity, and (4) the bribe is not large relative to how much money I have. You conclude, with great confidence, that I will refuse the bribe. Do you really want to say that this indicates that I didn't freely refuse the bribe?
On another occasion I'm offered another bribe. But this time some evildoer with very strange preferences gets hold of me and compels me, at gunpoint, to decide whether to take it by flipping a coin. My decision is now maximally unpredictable. Is it maximally free?
I think the answers to the questions in those paragraphs should both be "no", and accordingly I think unpredictability and freedom can't be so close to being the same thing.
OK, let me try a different counter-argument then: do you believe we have free will to choose our desires? I don't. For example, I desire chocolate. This is not something I chose, it's something that happened to me. I have no idea how I could go about deciding not to desire chocolate. (I suppose I could put myself through some sort of aversion therapy, but that's not the same thing. That's deciding to try to train myself not to desire chocolate.)
If we don't have the freedom to choose our desires, then on what basis is it reasonable to call decisions that take those non-freely-chosen desires into account "free will"?
This is a very deep topic that is treated extensively in David Deutsch's book, "The Beginning of Infinity" (also "The Fabric of Reality", particularly chapter 7). If you want to go down that rabbit hole you need to read at least Chapter 7 of TFOR first, otherwise I'll have to recapitulate Deutsch's argument. The bottom line is that there is good reason to believe that theories with high predictive power but low explanatory power are not possible.
Sure. Do you distinguish between "will" and "desire"?
Really? What are they?
Yes.
Yes, which is to say, not free at all. It is exactly as free as the first case.
The only difference between the two cases is in your awareness of the mechanism behind the decision-making process. In the first case, the mechanism that caused you to choose to refuse the bribe is inside your brain and not accessible to your conscious self. In the second case, (at least part of) the mechanism that causes you to make the choice is more easily accessible to your conscious self. But this is a thin reed because the inaccessibility of your internal decision making process is (almost certainly) a technological limitation, not a fundamental difference between the two cases.
(I see you've been downvoted. Not by me.)
If Jewishness is inherited from one's mother, and a person's great^200000-grandmother [EDITED to fix an off-by-1000x error, oops] was more like a chimpanzee than a modern human and had neither ethnicity nor religion as we now understand them, then on what basis is it reasonable to call that person Jewish?
If sentences are made up of letters and letters have no meaning, then on what basis is it reasonable to say that sentences have meaning?
It is not always best to make every definition recurse as far back as it possibly can.
I have read both books. I do not think chapter 7 of TFoR shows that theories with high predictive power but low explanatory power are impossible, but it is some time since I read the book and I have just now only glanced at it rather than rereading it in depth. If you reckon Deutsch says that predictive power guarantees explanatory power, could you remind me where in the chapter he does it? Or, if you have an argument that starts from what Deutsch does in that chapter and concludes that predictive power guarantees explanatory power, could you sketch it? (I do not guarantee to agree with everything Deutsch says.)
I seldom use the word "will" other than in special contexts like "free will". Why do you ask?
One such might be: "For an action to be freely willed, the causes leading up to it must go via a process of conscious decision by the agent."
Meh, OK. So let me remind you that the question we were (I thought) discussing at this point was: are there clearer-cut satisfactory criteria for "free will" available to incompatibilists than to compatibilists? Now, of course if you say that by definition nothing counts as an instance of free will then that's a nice clear-cut criterion, but it also has (so far as it goes) nothing at all to do with freedom or will or anything else.
I think you're saying something a bit less content-free than that; let me paraphrase and you can correct me if I'm getting it wrong. "Free will means unpredictability-in-principle. Everything is in fact predictable in principle, and therefore nothing is actually an instance of free will." That's less content-free because we can then ask: OK, what if you're wrong about everything being predictable in principle; or what if you're right but we ask about a hypothetical different world where some things aren't predictable in principle?
Let's ask that. Imagine a world in which some sort of objective-collapse quantum mechanics is correct, and many things ultimately happen entirely at random. And let's suppose that whether or not the brain uses quantum effects in any "interesting" way, it is at least affected by them in a chaos-theory sort of way: that is, sometimes microscale randomness arising from quantum mechanics ends up having macroscale effects on what your brain does. And now let's situate my two hypothetical examples in this hypothetical world. In this world, of course, nothing is entirely predictable, but some things are much more predictable than others. In particular, the first version of me (deciding whether to take the bribe on the basis of my moral principles and preferences and so forth, which ends up being very predictable because the bribe is small and my principles and preferences strong) is much more predictable (both in principle and in practice) in this world than the second version (deciding, at gunpoint, on the basis of what I will now make a quantum random number generator rather than a coin flip). In this world, would you accordingly say that first-me is choosing much less freely than second-me?
I don't think that's correct. For instance, in the second case I am coerced by another agent, and in the first I'm not; in the first case my decision is a consequence of my preferences regarding the action in question, and in the second it isn't (though it is a consequence of my preference for living over dying; but I remark that your predictability criterion gives the exact same result if in the second case the random number generator is wired directly into my brain so as to control my actions with no conscious involvement on my part at all).
You may prefer notions of free will with a sort of transitive property, where if X is free and X is caused by Y1,...,Yn (and nothing else) then one of the Y must be free. (Or some more sophisticated variant taking into account the fact that freedom comes in degrees, that the notion of "cause" is kinda problematic, etc.) I see no reason why we have to define free will in such a way. We are happy to say that a brain is intelligent even though it is made of neurons which are not intelligent, that a statue resembles Albert Einstein even though it is made of atoms that do not resemble Einstein, that a woolly jumper is warm even though it is made of individual fibres that aren't, etc.
Of course. Does this mean that you concede that our desires are not freely chosen?
Oh, good!
You're right, the argument in chapter 7 is not complete, it's just the 80/20 part of Deutsch's argument, so it's what I point people to first. And non-explanatory models with predictive power are not impossible, they're just extremely unlikely (probability indistinguishable from zero). The reason they are extremely unlikely is that in a finite universe like ours there can exist only a finite amount of data, but there are an infinite number of theories consistent with that data, nearly all of which have low predictive power. Explanatory power turns out to be the only known effective filter for theories with high predictive power. Hence, it is overwhelmingly likely that a theory with high predictive power will have high explanatory power.
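The "finite data, infinitely many consistent theories" point can be illustrated with a toy sketch (my own illustration, not Deutsch's argument): two theories can agree exactly on every observation made so far and still diverge wildly off the data, so fitting the data alone is a very weak filter.

```python
# Toy sketch: finitely many observations, many exactly-fitting "theories".
xs = [0, 1, 2, 3]
ys = [2 * x for x in xs]           # observations generated by the simple rule y = 2x

def theory_a(x):                   # the simple rule itself
    return 2 * x

def theory_b(x):                   # also fits every observation exactly...
    bump = 1
    for xi in xs:
        bump *= (x - xi)           # ...because this term vanishes at each observed point
    return 2 * x + bump

# Both theories are perfectly consistent with all the data we have:
assert all(theory_a(x) == y for x, y in zip(xs, ys))
assert all(theory_b(x) == y for x, y in zip(xs, ys))

# But they diverge on the very next unobserved point:
print(theory_a(10), theory_b(10))  # 20 vs 5060
```

Adding more wiggle terms of the same form yields as many data-consistent theories as you like, nearly all of which predict new points badly; that is the sense in which consistency with finite data, by itself, filters almost nothing out.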
No.
First, I disagree with "Free will means unpredictability-in-principle." It doesn't mean UIP, it simply requires UIP. Necessary, not sufficient.
Second, to be "real" free will, there would have to be some circumstances where you accept the bribe and surprise me. In this respect, you've chosen a bad example to make your point, so let me propose a better one: we're in a restaurant and I know you love burgers and pasta, both of which are on the menu. I know you'll choose one or the other, but I have no idea which. In that case, it's possible that you are making the choice using "real" free will.
Not so. In the first case you are being coerced by your sense of morality, or your fear of going to prison, or something like that. That's exactly what makes your choice not to take the bribe predictable. The only difference is that the mechanism by which you are being coerced in the second case is a little more overt.
No, what I require is a notion of free will that is the same for all observers, including a hypothetical one that can predict anything that can be predicted in principle. (I also want to give this hypothetical observer an oracle for the halting problem because I don't think that Turing machines exercise "free will" or "decide" whether or not to halt.) This is simply the same criterion I apply to any phenomenon that someone claims is objectively real.
I think some of our desires are more freely chosen than others. I do not think an action chosen on account of a not-freely-chosen desire is necessarily best considered unfree for that reason.
That isn't quite what you said before, but I'm happy for you to amend what you wrote.
It seems to me that the argument you're now making has almost nothing to do with the argument in chapter 7 of Deutsch's book. That doesn't (of course) in any way make it a bad argument, but I'm now wondering why you said what you did about Deutsch's books.
Anyway. I think almost all the work in your argument (at least so far as it's relevant to what we're discussing here) is done by the following statement: "Explanatory power turns out to be the only known effective filter for theories with high predictive power." I think this is incorrect; simplicity plus past predictive success is a pretty decent filter too. (Theories with these properties have not infrequently turned out to be embeddable in theories with good explanatory power, of course, as when Mendeleev's empirically observed periodicity was explained in terms of electron shells, and the latter further explained in terms of quantum mechanics.)
OK, but in that case either you owe us something nearer to necessary and sufficient conditions, or else you need to retract your claim that incompatibilism does better than compatibilism in the "is there a nice clear criterion?" test. Also, if you aren't claiming anything close to "free will = UIP" then I no longer know what you meant by saying that ialdabaoth got it more or less right.
Sure. That would be why I said "with great confidence" rather than "with absolute certainty". I might, indeed, take the bribe after all, despite all those very strong reasons to expect me not to. But it's extremely unlikely. (So no, I don't agree that I've "chosen a bad example"; rather, I think you misunderstood the example I gave.)
If you say "you chose a bad example to make your point, so let me propose a better one" and then give an example that doesn't even vaguely gesture in the direction of making my point, I'm afraid I start to doubt that you are arguing in good faith.
The things you describe me as being "coerced by" are (1) not agents and (2) not external to me. These are not irrelevant details, they are central to the intuitive meaning of "free will" that we're looking for philosophically respectable approximations to. (Perhaps you disagree with my framing of the issue. I take it that that's generally the right way to think about questions like "what is free will?".)
In particular, I think your claim about "the only difference" is flatly wrong.
That sounds sensible on first reading, but I think actually it's a bit like saying "what I require is a notion of right and wrong that is the same for all observers, including a hypothetical one that doesn't care about suffering" and inferring that our notions of right and wrong shouldn't have anything to do with suffering. Our words and concepts need to be useful to us, and if some such concept would be uninteresting to a hypothetical superbeing that can predict anything that's predictable in principle, that is not sufficient reason for us not to use it. Still more when your hypothetical superbeing needs capabilities that are probably not even in principle possible within our universe.
(I think, in fact, that even such a superbeing might have reason to talk about something like "free will", if it's talking about very-limited beings like us.)
I haven't, as it happens, been claiming that free will is "objectively real". All I claim is that it may be a useful notion. Perhaps it's only as "objectively real" as, say, chess; that is, it applies to us, and what it is is fundamentally dependent on our cognitive and other peculiarities, and a world of your hypothetical superbeings might be no more interested in it than they presumably would be in chess, but you can still ask "to what extent is X exercising free will?" in the same way as you could ask "is X a better move than Y, for a human player with a human opponent?".
Sorry about that. I really was trying to be helpful.
Well, heck, what are we arguing about then? Of course it's a useful notion.
A better analogy would be "simultaneous events at different locations in space." Chess is a mathematical abstraction that is the same for all observers. Simultaneity, like free will, depends on your point of view.
You're arguing that no one has it and AIUI that nothing in the universe ever could have it. Doesn't seem that useful to me.
I did consider substituting something like cricket or baseball for that reason. But I think the idea that free will is viewpoint-dependent depends heavily on what notion of free will you're working with. I'm still not sure what yours actually is, but mine doesn't have that property, or at any rate doesn't have it to so great an extent as yours seems to.
Free will is a useful notion because we have the perception of having it, and so it's useful to be able to talk about whatever it is that we perceive ourselves to have even though we don't really have it. It's useful in the same way that it's useful to talk about, say, "the force of gravity" even though in reality there is no such thing. (That's actually a pretty good analogy. The force of gravity is a reasonable approximation to the truth for nearly all everyday purposes even though conceptually it is completely wrong. Likewise with free will.)
You said that a chess-playing computer has (some) free will. I disagree (obviously because I don't think anything has free will). Do you think Pachinko machines have free will? Do they "decide" which way to go when they hit a pin? Does the atmosphere have free will? Does it decide where tornadoes appear?
When I say "real free will" I mean this:
Decisions are made by my conscious self. This rules out pachinko machines, the atmosphere, and chess-playing computers having free will.
Before I make a decision, it must be actually possible for me to choose more than one alternative. Ergo, if I am reliably predictable, I cannot have free will because if I am reliably predictable then it is not possible for me to choose more than one alternative. I can only choose the alternative that a hypothetical predictor would reliably predict.
I don't know how to make it any clearer than that.
I think it's more helpful to talk about whatever we have that we're trying to talk about, even if some of what we say about it isn't quite right, which is why I prefer notions of free will that don't become necessarily wrong if the universe is deterministic or there's an omnipotent god or whatever.
I agree that gravity makes a useful analogy. Gravity behaves in a sufficiently force-like way (at least in regions of weakish spacetime curvature, like everywhere any human being could possibly survive) that I think for most purposes it is much better to say "there is, more or less, a force of gravity, but note that in some situations we'll need to talk about it differently" than "there is no force of gravity". And I would say the same about "free will".
I don't know much about Pachinko machines, but I don't think they have any processes going on in them that at all resemble human deliberation, in which case I would not want to describe them as having free will even to the (very attenuated) extent that a chess program might have.
Again, I don't think there are any sort of deliberative processes going on there, so no free will.
So there are two parts to this, and I'm not sure to what extent you actually intend them both. Part 1: decisions are made by conscious agents. Part 2: decisions are made, more specifically, by those agents' conscious "parts" (of course this terminology doesn't imply an actual physical division).
Of course "actually possible" is pretty problematic language; what counts as possible? If I'm understanding you right, you'd cash it out roughly as follows: look at the probability distribution of possible outcomes in advance of the decision; then freedom = entropy of that probability distribution (or something of the kind).
So then freedom depends on what probability distribution you take, and you take the One True Measure of freedom to be what you get for an observer who knows everything about the universe immediately before the decision is made (more precisely, everything in the past light-cone of the decision); if the universe is deterministic then that's enough to determine the answer after the decision is made too, so no decisions are free.
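That cashing-out can be made concrete (a sketch of my paraphrase, with illustrative probabilities I've made up, not anything lisper has endorsed): measure "freedom" as the Shannon entropy of the outcome distribution, and the two bribe scenarios come out at nearly opposite ends of the scale.

```python
import math

def entropy_bits(probs):
    """Shannon entropy of a discrete outcome distribution, in bits."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Case 1: refusing the bribe is almost certain (illustrative numbers).
bribe_case = [0.999, 0.001]   # [refuse, accept]

# Case 2: the decision is delegated, at gunpoint, to a fair coin.
coin_case = [0.5, 0.5]

print(round(entropy_bits(bribe_case), 4))  # near zero: almost "unfree" on this measure
print(round(entropy_bits(coin_case), 4))   # 1.0 bit: maximally "free" on this measure
```

Which is exactly the counterintuitive result at issue: on this measure the coerced coin flip counts as the freer decision.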
One obvious problem with this is that our actual universe is not deterministic in the relevant sense. We can make a device based on radioactive decay or something for which knowledge of all that can be known in advance of its operation is not sufficient to tell you what it will output. For all we know, some or all of our decisions are actually affected enough by "amplified" quantum effects that they can't be reliably predicted even by an observer with access to everything in their past light-cone.
It might be worse. Perhaps some of our decisions are so affected and some not. If so, there's no reason (that I can see) to expect any connection between "degree of influence from quantum randomness" and any of the characteristics we generally think of as distinguishing free from not-so-free -- practical predictability by non-omniscient observers, the perception of freeness that you mentioned before, external constraints, etc.
It doesn't seem to me that predictability by a hypothetical "past-omniscient" observer has much connection with what in other contexts we call free will. Why make it part of the definition?
That's like saying, "I prefer triangles with four sides." You are, of course, free to prefer whatever you want and to use words however you want. But the word "free" has an established meaning in English which is fundamentally incompatible with determinism. Free means, "not under the control or in the power of another; able to act or be done as one wishes." If my actions are determined by physics or by God, I am not free.
And you think chess-playing machines do?
BTW, if your standard for free will is "having processing that resembles human deliberation" then you've simply defined free will as "something that humans have" in which case the question of whether or not humans have free will becomes very uninteresting because the answer is tautologically "yes".
I'd call them two "interpretations" rather than two "parts". But I intended the latter: to qualify as free will on my view, decisions have to be made by the conscious part of a conscious agent. If I am conscious but I base my decision on a coin flip, that's not free will.
Whatever is not impossible. In this case (and we've been through this) if I am reliably predictable then it is impossible for me to do anything other than what a hypothetical reliable predictor predicts. That is what "reliably predictable" means. That is why not being reliably predictable is a necessary but not sufficient condition for free will. It's really not complicated.
Because that is what the "free" part of "free will" means. If I am faced with a choice between A and B and a reliable predictor predicts I am going to choose A, then I cannot choose B (again, this is what "reliably predictor" means). If I cannot choose B then I am not free.
The common understanding of free will does run into a lot of problems when it comes to issues such as habit change.
There are people debating whether or not hypnosis can get people to do something against their free will, which happens to be a pretty bad question. Questions such as "can people decide by free will not to have an allergic reaction?" are similarly misleading.

Or you can convert into it.
I think you need at least a couple more zeroes in there for that to be right.
They or one of their matrilinear ancestors converted to Judaism?
Oooops! I meant there to be three more. Will fix. Thanks.
In case it wasn't clear: I was not posing "on what basis ..." as a challenge, I was pointing out that it isn't much of a challenge and that for similar reasons lisper's parallel question about free will is not much of a challenge either.