gjm comments on Is Spirituality Irrational? - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
(I see you've been downvoted. Not by me.)
If Jewishness is inherited from one's mother, and a person's great^200000-grandmother [EDITED to fix an off-by-1000x error, oops] was more like a chimpanzee than a modern human and had neither ethnicity nor religion as we now understand them, then on what basis is it reasonable to call that person Jewish?
If sentences are made up of letters and letters have no meaning, then on what basis is it reasonable to say that sentences have meaning?
It is not always best to make every definition recurse as far back as it possibly can.
I have read both books. I do not think chapter 7 of TFoR shows that theories with high predictive power but low explanatory power are impossible, but it is some time since I read the book and I have just now only glanced at it rather than rereading it in depth. If you reckon Deutsch says that predictive power guarantees explanatory power, could you remind me where in the chapter he does it? Or, if you have an argument that starts from what Deutsch does in that chapter and concludes that predictive power guarantees explanatory power, could you sketch it? (I do not guarantee to agree with everything Deutsch says.)
I seldom use the word "will" other than in special contexts like "free will". Why do you ask?
One such might be: "For an action to be freely willed, the causes leading up to it must go via a process of conscious decision by the agent."
Meh, OK. So let me remind you that the question we were (I thought) discussing at this point was: are there clearer-cut satisfactory criteria for "free will" available to incompatibilists than to compatibilists? Now, of course if you say that by definition nothing counts as an instance of free will then that's a nice clear-cut criterion, but it also has (so far as it goes) nothing at all to do with freedom or will or anything else.
I think you're saying something a bit less content-free than that; let me paraphrase and you can correct me if I'm getting it wrong. "Free will means unpredictability-in-principle. Everything is in fact predictable in principle, and therefore nothing is actually an instance of free will." That's less content-free because we can then ask: OK, what if you're wrong about everything being predictable in principle; or what if you're right but we ask about a hypothetical different world where some things aren't predictable in principle?
Let's ask that. Imagine a world in which some sort of objective-collapse quantum mechanics is correct, and many things ultimately happen entirely at random. And let's suppose that whether or not the brain uses quantum effects in any "interesting" way, it is at least affected by them in a chaos-theory sort of way: that is, sometimes microscale randomness arising from quantum mechanics ends up having macroscale effects on what your brain does. And now let's situate my two hypothetical examples in this hypothetical world. In this world, of course, nothing is entirely predictable, but some things are much more predictable than others. In particular, the first version of me (deciding whether to take the bribe on the basis of my moral principles and preferences and so forth, which ends up being very predictable because the bribe is small and my principles and preferences strong) is much more predictable (both in principle and in practice) in this world than the second version (deciding, at gunpoint, on the basis of what I will now make a quantum random number generator rather than a coin flip). In this world, would you accordingly say that first-me is choosing much less freely than second-me?
I don't think that's correct. For instance, in the second case I am coerced by another agent, and in the first I'm not; in the first case my decision is a consequence of my preferences regarding the action in question, and in the second it isn't (though it is a consequence of my preference for living over dying; but I remark that your predictability criterion gives the exact same result if in the second case the random number generator is wired directly into my brain so as to control my actions with no conscious involvement on my part at all).
You may prefer notions of free will with a sort of transitive property, where if X is free and X is caused by Y1,...,Yn (and nothing else) then one of the Y must be free. (Or some more sophisticated variant taking into account the fact that freedom comes in degrees, that the notion of "cause" is kinda problematic, etc.) I see no reason why we have to define free will in such a way. We are happy to say that a brain is intelligent even though it is made of neurons which are not intelligent, that a statue resembles Albert Einstein even though it is made of atoms that do not resemble Einstein, that a woolly jumper is warm even though it is made of individual fibres that aren't, etc.
Of course. Does this mean that you concede that our desires are not freely chosen?
Oh, good!
You're right, the argument in chapter 7 is not complete, it's just the 80/20 part of Deutsch's argument, so it's what I point people to first. And non-explanatory models with predictive power are not impossible, they're just extremely unlikely (probability indistinguishable from zero). The reason they are extremely unlikely is that in a finite universe like ours there can exist only a finite amount of data, but there are an infinite number of theories consistent with that data, nearly all of which have low predictive power. Explanatory power turns out to be the only known effective filter for theories with high predictive power. Hence, it is overwhelmingly likely that a theory with high predictive power will have high explanatory power.
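The "finite data, infinitely many consistent theories" point can be made concrete with a toy sketch (mine, not anything from Deutsch or the thread): given five observations generated by a simple linear law, we can add to that law any multiple of a polynomial that vanishes at every observed point, producing rival "theories" that fit the data perfectly but make wildly divergent predictions elsewhere. The function name `rival_theory` and the particular data are illustrative assumptions.

```python
import numpy as np

xs = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
ys = 2.0 * xs + 1.0   # five observations generated by a simple linear law

def rival_theory(x, c):
    """The linear law plus c times a term that vanishes at every observed point."""
    bump = c * np.prod([x - xi for xi in xs])
    return 2.0 * x + 1.0 + bump

for c in (0.0, 1.0, -5.0):
    # Every choice of c reproduces the observations exactly...
    fits_data = np.allclose([rival_theory(x, c) for x in xs], ys)
    # ...but the predictions at x = 10 disagree enormously.
    print(c, fits_data, rival_theory(10.0, c))
```

Since the data cannot distinguish among these rivals, any filter that favors the linear law (explanation, simplicity, or otherwise) is doing work the data alone cannot do.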
No.
First, I disagree with "Free will means unpredictability-in-principle." It doesn't mean UIP, it simply requires UIP. Necessary, not sufficient.
Second, to be "real" free will, there would have to be some circumstances where you accept the bribe and surprise me. In this respect, you've chosen a bad example to make your point, so let me propose a better one: we're in a restaurant and I know you love burgers and pasta, both of which are on the menu. I know you'll choose one or the other, but I have no idea which. In that case, it's possible that you are making the choice using "real" free will.
Not so. In the first case you are being coerced by your sense of morality, or your fear of going to prison, or something like that. That's exactly what makes your choice not to take the bribe predictable. The only difference is that the mechanism by which you are being coerced in the second case is a little more overt.
No, what I require is a notion of free will that is the same for all observers, including a hypothetical one that can predict anything that can be predicted in principle. (I also want to give this hypothetical observer an oracle for the halting problem because I don't think that Turing machines exercise "free will" or "decide" whether or not to halt.) This is simply the same criterion I apply to any phenomenon that someone claims is objectively real.
I think some of our desires are more freely chosen than others. I do not think an action chosen on account of a not-freely-chosen desire is necessarily best considered unfree for that reason.
That isn't quite what you said before, but I'm happy for you to amend what you wrote.
It seems to me that the argument you're now making has almost nothing to do with the argument in chapter 7 of Deutsch's book. That doesn't (of course) in any way make it a bad argument, but I'm now wondering why you said what you did about Deutsch's books.
Anyway. I think almost all the work in your argument (at least so far as it's relevant to what we're discussing here) is done by the following statement: "Explanatory power turns out to be the only known effective filter for theories with high predictive power." I think this is incorrect; simplicity plus past predictive success is a pretty decent filter too. (Theories with these properties have not infrequently turned out to be embeddable in theories with good explanatory power, of course, as when Mendeleev's empirically observed periodicity was explained in terms of electron shells, and the latter further explained in terms of quantum mechanics.)
OK, but in that case either you owe us something nearer to necessary and sufficient conditions, or else you need to retract your claim that incompatibilism does better than compatibilism in the "is there a nice clear criterion?" test. Also, if you aren't claiming anything close to "free will = UIP" then I no longer know what you meant by saying that ialdabaoth got it more or less right.
Sure. That would be why I said "with great confidence" rather than "with absolute certainty". I might, indeed, take the bribe after all, despite all those very strong reasons to expect me not to. But it's extremely unlikely. (So no, I don't agree that I've "chosen a bad example"; rather, I think you misunderstood the example I gave.)
If you say "you chose a bad example to make your point, so let me propose a better one" and then give an example that doesn't even vaguely gesture in the direction of making my point, I'm afraid I start to doubt that you are arguing in good faith.
The things you describe me as being "coerced by" are (1) not agents and (2) not external to me. These are not irrelevant details, they are central to the intuitive meaning of "free will" that we're looking for philosophically respectable approximations to. (Perhaps you disagree with my framing of the issue. I take it that that's generally the right way to think about questions like "what is free will?".)
In particular, I think your claim about "the only difference" is flatly wrong.
That sounds sensible on first reading, but I think actually it's a bit like saying "what I require is a notion of right and wrong that is the same for all observers, including a hypothetical one that doesn't care about suffering" and inferring that our notions of right and wrong shouldn't have anything to do with suffering. Our words and concepts need to be useful to us, and if some such concept would be uninteresting to a hypothetical superbeing that can predict anything that's predictable in principle, that is not sufficient reason for us not to use it. Still more when your hypothetical superbeing needs capabilities that are probably not even in principle possible within our universe.
(I think, in fact, that even such a superbeing might have reason to talk about something like "free will", if it's talking about very-limited beings like us.)
I haven't, as it happens, been claiming that free will is "objectively real". All I claim is that it may be a useful notion. Perhaps it's only as "objectively real" as, say, chess; that is, it applies to us, and what it is is fundamentally dependent on our cognitive and other peculiarities, and a world of your hypothetical superbeings might be no more interested in it than they presumably would be in chess, but you can still ask "to what extent is X exercising free will?" in the same way as you could ask "is X a better move than Y, for a human player with a human opponent?".
Sorry about that. I really was trying to be helpful.
Well, heck, what are we arguing about then? Of course it's a useful notion.
A better analogy would be "simultaneous events at different locations in space." Chess is a mathematical abstraction that is the same for all observers. Simultaneity, like free will, depends on your point of view.
You're arguing that no one has it and AIUI that nothing in the universe ever could have it. Doesn't seem that useful to me.
I did consider substituting something like cricket or baseball for that reason. But I think the idea that free will is viewpoint-dependent depends heavily on what notion of free will you're working with. I'm still not sure what yours actually is, but mine doesn't have that property, or at any rate doesn't have it to so great an extent as yours seems to.
Free will is a useful notion because we have the perception of having it, and so it's useful to be able to talk about whatever it is that we perceive ourselves to have even though we don't really have it. It's useful in the same way that it's useful to talk about, say, "the force of gravity" even though in reality there is no such thing. (That's actually a pretty good analogy. The force of gravity is a reasonable approximation to the truth for nearly all everyday purposes even though conceptually it is completely wrong. Likewise with free will.)
You said that a chess-playing computer has (some) free will. I disagree (obviously because I don't think anything has free will). Do you think Pachinko machines have free will? Do they "decide" which way to go when they hit a pin? Does the atmosphere have free will? Does it decide where tornadoes appear?
When I say "real free will" I mean this:
Decisions are made by my conscious self. This rules out pachinko machines, the atmosphere, and chess-playing computers having free will.
Before I make a decision, it must be actually possible for me to choose more than one alternative. Ergo, if I am reliably predictable, I cannot have free will because if I am reliably predictable then it is not possible for me to choose more than one alternative. I can only choose the alternative that a hypothetical predictor would reliably predict.
I don't know how to make it any clearer than that.
I think it's more helpful to talk about whatever we have that we're trying to talk about, even if some of what we say about it isn't quite right, which is why I prefer notions of free will that don't become necessarily wrong if the universe is deterministic or there's an omnipotent god or whatever.
I agree that gravity makes a useful analogy. Gravity behaves in a sufficiently force-like way (at least in regions of weakish spacetime curvature, like everywhere any human being could possibly survive) that I think for most purposes it is much better to say "there is, more or less, a force of gravity, but note that in some situations we'll need to talk about it differently" than "there is no force of gravity". And I would say the same about "free will".
I don't know much about Pachinko machines, but I don't think they have any processes going on in them that at all resemble human deliberation, in which case I would not want to describe them as having free will even to the (very attenuated) extent that a chess program might have.
Again, I don't think there are any sort of deliberative processes going on there, so no free will.
So there are two parts to this, and I'm not sure to what extent you actually intend them both. Part 1: decisions are made by conscious agents. Part 2: decisions are made, more specifically, by those agents' conscious "parts" (of course this terminology doesn't imply an actual physical division).
Of course "actually possible" is pretty problematic language; what counts as possible? If I'm understanding you right, you'd cash it out roughly as follows: look at the probability distribution of possible outcomes in advance of the decision; then freedom = entropy of that probability distribution (or something of the kind).
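A minimal sketch of that cashing-out, assuming the paraphrase above is the intended one: take the predictor's probability distribution over outcomes and measure its Shannon entropy, so a near-certain "decision" scores close to zero and a coin-flip-like one scores a full bit. The helper `entropy_bits` is hypothetical, not anything either commenter proposed.

```python
import math

def entropy_bits(probs):
    """Shannon entropy in bits of a discrete outcome distribution."""
    return sum(-p * math.log2(p) for p in probs if p > 0)

# A reliably predictable "decision" (one outcome certain) scores 0 bits;
# a 50/50 decision scores 1 bit; a lopsided 99/1 decision scores in between.
print(entropy_bits([1.0]))
print(entropy_bits([0.5, 0.5]))
print(entropy_bits([0.99, 0.01]))
```

On this measure, "freedom" is observer-relative in exactly the way the rest of the thread worries about: the entropy depends on which probability distribution (i.e. whose state of knowledge) you plug in.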
So then freedom depends on what probability distribution you take, and you take the One True Measure of freedom to be what you get for an observer who knows everything about the universe immediately before the decision is made (more precisely, everything in the past light-cone of the decision); if the universe is deterministic then that's enough to determine the answer after the decision is made too, so no decisions are free.
One obvious problem with this is that our actual universe is not deterministic in the relevant sense. We can make a device based on radioactive decay or something for which knowledge of all that can be known in advance of its operation is not sufficient to tell you what it will output. For all we know, some or all of our decisions are actually affected enough by "amplified" quantum effects that they can't be reliably predicted even by an observer with access to everything in their past light-cone.
It might be worse. Perhaps some of our decisions are so affected and some not. If so, there's no reason (that I can see) to expect any connection between "degree of influence from quantum randomness" and any of the characteristics we generally think of as distinguishing free from not-so-free -- practical predictability by non-omniscient observers, the perception of freeness that you mentioned before, external constraints, etc.
It doesn't seem to me that predictability by a hypothetical "past-omniscient" observer has much connection with what in other contexts we call free will. Why make it part of the definition?
That's like saying, "I prefer triangles with four sides." You are, of course, free to prefer whatever you want and to use words however you want. But the word "free" has an established meaning in English which is fundamentally incompatible with determinism. Free means, "not under the control or in the power of another; able to act or be done as one wishes." If my actions are determined by physics or by God, I am not free.
And you think chess-playing machines do?
BTW, if your standard for free will is "having processing that resembles human deliberation" then you've simply defined free will as "something that humans have" in which case the question of whether or not humans have free will becomes very uninteresting because the answer is tautologically "yes".
I'd call them two "interpretations" rather than two "parts". But I intended the latter: to qualify as free will on my view, decisions have to be made by the conscious part of a conscious agent. If I am conscious but I base my decision on a coin flip, that's not free will.
Whatever is not impossible. In this case (and we've been through this) if I am reliably predictable then it is impossible for me to do anything other than what a hypothetical reliable predictor predicts. That is what "reliably predictable" means. That is why not being reliably predictable is a necessary but not sufficient condition for free will. It's really not complicated.
Because that is what the "free" part of "free will" means. If I am faced with a choice between A and B and a reliable predictor predicts I am going to choose A, then I cannot choose B (again, this is what "reliable predictor" means). If I cannot choose B then I am not free.
I don't think that's at all clear, and the fact that a clear majority of philosophers are compatibilists indicates that a bunch of people who spend their lives thinking about this sort of thing also don't think it's impossible for "free" to mean something compatible with determinism.
Let's take a look at that definition of yours, and see what it says if my decisions are determined by the laws of physics. "Not under the control or in the power of another"? That's OK; the laws of physics, whatever they are, are not another agent. "Able to act or be done as one wishes"? That's OK too; of course in this scenario what I wish is also determined by the laws of physics, but the definition doesn't say anything about that.
(I wouldn't want to claim that the definition you selected is a perfect one, of course.)
Yup. Much much simpler, of course. Much more limited, much more abstract. But yes, a tree-search with an evaluation at the leaves does indeed resemble human deliberation somewhat. (Do I need to keep repeating in each comment that all I claim is that arguably chess-playing programs have a very little bit of free will?)
Nope. But not having such processing seems like a good indication of not having free will, because whatever free will is it has to be something to do with making decisions, and nothing a pachinko machine or the weather does seems at all decision-like, and I think the absence of any process that looks at all like deliberation seems to me to be a large part of why. (Though I would be happy to reconsider in the face of something that behaves in ways that seem sufficiently similar to, e.g., apparently-free humans despite having very different internals.)
I have pointed out more than once that in this universe there is never prediction that reliable, and anything less reliable makes the word "impossible" inappropriate. For whatever reason, you've never seen fit even to acknowledge my having done so.
But let's set that aside. I shall restate your claim in a form I think better. "If you are reliably predictable, then it is impossible for your choice and the predictor's prediction not to match." Consider a different situation, where instead of being predicted your action is being remembered. If it's reliably rememberable, then it is impossible for your action and the rememberer's memory not to match -- but I take it you wouldn't dream of suggesting that that involves any constraint on your freedom.
So why should it be different in this case? One reason would be if the predictor, unlike the rememberer, were causing your decision. But that's not so; the prediction and your decision are alike consequences of earlier states of the world. So I guess the reason is because the successful prediction indicates the fact that your decision is a consequence of earlier states of the world. But in that case none of what you're saying is an argument for incompatibilism; it is just a restatement of incompatibilism.
Please consider the possibility that other people besides yourself have thought about this stuff, are reasonably intelligent, and may disagree with you for reasons other than being too stupid to see what is obvious to you.
No. It means you will not choose B, which is not necessarily the same as that you cannot choose B. And (I expect I have said this at least once already in this discussion) words like "cannot" and "impossible" have a wide variety of meanings and I see no compelling reason why the only one to use when contemplating "free will" is the particular one you have in mind.
How would you define it then?
This would not be the first time in history that the philosophical community was wrong about something.
No, I get that. But "a very little bit" is still distinguishable from zero, yes?
Nothing about it seems human decision-like. But that's a prejudice because you happen to be human. See below...
I believe that intelligent aliens could exist (in fact, almost certainly do exist). I also believe that fully intelligent computers are possible, and might even be constructed in our lifetime. I believe that any philosophy worth adhering to ought to be IA-ready and AI-ready, that is, it should not fall apart in the face of intelligent aliens or artificial intelligence. (Aside: This is the reason I do not self-identify as a "humanist".)
Also, it is far from clear that chess computers work anything at all like humans. The hypothesis that humans make decisions by heuristic search has been pretty much disproven by >50 years of failed AI research.
I hereby acknowledge your having pointed this out. But it's irrelevant. All I require for my argument to hold is predictability in principle, not predictability in fact. That's why I always speak of a hypothetical rather than an actual predictor. In fact, my hypothetical predictor even has an oracle for the halting problem (which is almost certainly not realizable in this universe) because I don't believe that Turing machines exercise free will when "deciding" whether or not to halt.
That's possible. But just because incompatibilism is a tautology does not make it untrue.
I don't think it is a tautology. The state of affairs for a reliable predictor to exist would be that there is something that causes both my action and the prediction, and that whatever this is is accessible to the predictor before it is accessible to me (otherwise it's not a prediction). That doesn't feel like a tautology to me, but I'm not going to argue about it. Either way, it's true.
Of course. As soon as someone presents a cogent argument I'm happy to consider it. I haven't heard one yet (despite having read this).
That's really the crux of the matter I suppose. It reminds me of the school of thought on the problem of theodicy which says that God could eliminate evil from the world, but he chooses not to for some reason that is beyond our comprehension (but is nonetheless wise and good and loving). This argument has always struck me as a cop-out. If God's failure to use His super-powers for good is reliably predictable, then that to me is indistinguishable from God not having those super powers to begin with.
You can see the absurdity of it by observing that this same argument can be applied to anything, not just God. I can argue with equal validity that rocks can fly, they just choose not to. Or that I could, if I wanted to, mount an argument for my position that is so compelling that you would have no choice but to accept it, but I choose not to because I am benevolent and I don't want to shatter your illusion of free will.
I don't see any possible way to distinguish between "can not" and "with 100% certainty will not". If they can't be distinguished, they must be the same.
The dictionary disagrees.
"Free" has many different meanings.
What ontological category does "physics" have in your view of the world?
Are you seriously arguing that "free" in "free will" might mean the same thing as (say) "free" in "free beer"? Come on.
That's a very good question, and it depends (ironically) on which of two possible definitions of physics you're referring to. If you mean physics-the-scientific-enterprise (let's call that physics1) then it exists in the ontological category of human activity (along with things like "commerce"). If you mean the underlying processes which are the object of study in physics1 (let's call that physics2) then I'd put those in the ontological category of objective reality.
Note that ontological categories are not mutually exclusive. Existence is a vector space. Physics1 is also part of objective reality, because it is an emergent property of physics2.
The common understanding of free will does run into a lot of problems when it comes to issues such as habit change.
There are people debating whether or not hypnosis can get people to do something against their free will, which happens to be a pretty bad question. Questions such as "can people decide by free will not to have an allergic reaction?" are misleading.
Or you can convert into it.
I think you need at least a couple more zeroes in there for that to be right.
They or one of their matrilinear ancestors converted to Judaism?
Oooops! I meant there to be three more. Will fix. Thanks.
In case it wasn't clear: I was not posing "on what basis ..." as a challenge, I was pointing out that it isn't much of a challenge and that for similar reasons lisper's parallel question about free will is not much of a challenge either.