gjm comments on Is Spirituality Irrational? - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Well, they are. Maybe "mental faculties" would be a better translation. But it's neither here nor there.
That hardly seems fair. That means that if Adam and Eve had not eaten the fruit then they would have been punished for the sins that they committed out of ignorance.
Indeed. But God didn't provide any. In fact, He specifically commanded A&E to remain ignorant.
Huh? I don't understand that at all. Your claim was that any designed entity "cannot do or calculate anything that its designer can't do or calculate". I exhibited a computer that can calculate a trillion digits of pi as a counterexample. What does the fact that evolution took a long time to produce the first computer have to do with it? The fact remains that computers can do things that their human designers can't.
In fact, just about anything that humans build can do things humans can't do; that's kind of the whole point of building them. Bulldozers. Can openers. Hammers. Paper airplanes. All of these things can do things that their human designers can't do.
Actually, that's not an argument that time travel is impossible. Time travel is indeed impossible, but that's a different argument :-) Time travel and free will are logically incompatible, at least under certain models of time travel. (If the past can change once you've travelled into it so that you can no longer reliably predict the future, then time travel and free will can co-exist.)
Exactly. This is necessarily part of the definition of free will. If you're predictable to an external agent but not to yourself then it must be the case that there is something that determines your future actions that is accessible to that agent but not to you.
But if you are reliably predictable then it is not the case that you could choose something else. That's what it means to be reliably predictable.
Sorry about that. I tried to write a pithy summary but it got too long for a comment. I'll have to write a separate article about it I guess. For the time being I'll just have to ask you to trust me: time travel into the past is ruled out by quantum mechanics. (This should be good news for you because it leaves open the possibility of free will!)
Yes!!! Exactly!!! That is in fact the whole point of my OP: the quale of the Presence of the Holy Spirit has also been directly observed and therefore does exist (despite the fact that the Holy Spirit does not).
Sorry, that didn't parse. What is "that"?
Well, yeah, at root I'm not doing it deliberately. What I'm doing (when I do it -- I don't always, it's hard work [1]) is to improve the illusion that I'm doing things deliberately. But as with classical reality, a good-enough illusion is good enough.
[1] For example, I'm not doing it right now. I really ought to be doing real work, but instead I'm slacking off writing this response, which is a lot more fun, but not really what I ought to be doing.
Yes. Did you read "31 flavors of ontology"?
The word "could" is a tricksy one, and I think it likely that your disagreement with CCC about free will has a lot to do with different understandings of "could" (and of its associated notions like "possible" and "inevitably").
The reason "could" is tricky is that whether or not something "could" happen (or could have happened) is usually reckoned relative to some state of knowledge. If you flip a coin but keep your hand over it so that you can see how it landed but I can't, then from my perspective it could be either heads or tails, but from yours it can't.
To assess free will you have to take the perspective of some hypothetical agent that has all of the knowledge that is potentially available. If such an agent can predict your actions then you cannot have free will because, as I pointed out before, your actions are determined by factors that are accessible to this hypothetical agent but not to you. Such agents do not exist in our world, so we can still argue about it, but in a hypothetical world where we postulate the existence of such an agent (i.e. a world with time travel into the past without the possibility of changing the past, or a world with a Newcomb-style intelligent alien) the argument is settled: such an agent exists, you are reliably predictable, and you cannot have free will. (This, by the way, is the resolution of Newcomb's paradox: you should always take the one box. The only reason people think that two boxes might be the right answer is that they refuse to relinquish the intuition that they have free will despite the overwhelming (hypothetical, in the case of Newcomb's paradox) evidence against it.)
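The one-boxing recommendation can be made concrete with a quick expected-value sketch (a minimal illustration using the standard hypothetical amounts, $1M in the opaque box and $1K in the transparent one; the `accuracy` parameter is my own generalization of the "reliable predictor" to an imperfect one):

```python
def expected_payoff(one_box: bool, accuracy: float) -> float:
    """Expected winnings, given a predictor that guesses your choice
    correctly with probability `accuracy`."""
    if one_box:
        # The opaque box contains $1M iff the predictor foresaw one-boxing.
        return accuracy * 1_000_000
    else:
        # You always get the $1K; the $1M is there only if the predictor
        # wrongly foresaw one-boxing.
        return 1_000 + (1 - accuracy) * 1_000_000

# With a perfectly reliable predictor, one-boxing yields $1M and
# two-boxing yields $1K; one-boxing still wins for any accuracy
# comfortably above chance.
```

Even at 99% accuracy, one-boxing expects $990,000 against two-boxing's $11,000, which is the arithmetic behind "you should always take the one box" once you grant that the predictor really is reliable.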
You talk as though they have some choice as to which box to take, or whether or not to believe in free will. But if your argument is correct, then they do not.
Do I? That wasn't my intention. They don't have a choice in which box to take, any more than they have a choice in whether or not they find my argument compelling. If they find my argument compelling then (if they are rational) they will take 1 box and win $1M. If they don't, then (maybe) they won't. There's no real "choice" involved (though there is the very compelling illusion of choice).
This is actually a perfect illustration of the limits of free will even in our own awareness: you can't decide whether to find a particular argument compelling or not, it's something that just happens to you.
This is questionable, and I would expect many compatibilists to say quite the opposite.
What can I say? The compatibilists are wrong. The proof is simple: either all reliably predictable agents have free will, or some do and some don't. If they all do, then a rock has free will and we will just have to agree to disagree about that (some people actually do take that position). If some do and some don't, then in order for the term "free will" to have meaning you need a criterion by which to distinguish reliably predictable agents with free will from those without it. No one has ever come up with such a criterion (AFAIK).
My intuition has always been that 'free will' isn't a binary thing; it's a relational measurement with a spectrum. And predictability is explicitly incompatible with it, in the same way that entropy measurements depend on how much predictive information you have about a system. (I suspect that 'entropy' and 'free will' are essentially identical terms, with the latter applying to systems that we want to anthropomorphize.)
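The entropy analogy above can be made concrete: the same physical system carries different entropy for observers with different predictive information about it, just as (on this view) it has different degrees of "freedom" relative to different predictors. A minimal sketch using Shannon entropy (the probabilities here are invented for illustration):

```python
from math import log2

def entropy(probs):
    """Shannon entropy, in bits, of a discrete probability distribution."""
    return -sum(p * log2(p) for p in probs if p > 0)

# A flipped-but-still-covered fair coin carries one full bit of
# uncertainty for an observer with no information about it...
uncovered = entropy([0.5, 0.5])   # 1 bit
# ...and zero bits for the observer who peeked at how it landed.
peeked = entropy([1.0])           # 0 bits
# An observer with partial information (say, a known bias) is in between.
biased = entropy([0.9, 0.1])
assert uncovered > biased > peeked
```

The point of the sketch is that "how much entropy" is not a property of the coin alone but of the coin-plus-observer pair, which is exactly the relational structure being proposed for "free will".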
Yes, I think that's exactly right. But compatibilists don't agree with that. They think that there is such a thing as free will in some absolute sense, and that this thing is "compatible" (hence the name) with determinacy/reliable predictability.
There are a number of useful terms for which no one has ever come up with a precisely stated and clearly defensible criterion. Beautiful, good, conscious, etc. This surely does indicate that there's something unsatisfactory about those terms, but I don't think the right way to deal with it is to declare that nothing is beautiful, good, or conscious.
Having said which, I think I can give a not-too-hopeless criterion distinguishing agents we might reasonably want to say have free will from those we don't. X has free will in regard to action Y if and only if every good explanation for why X did Y goes via X's preference for Y or decision to do Y or something of the kind.
So, if you do something purely "on autopilot" without any actual wish to do it, that condition fails and you didn't do it freely; if you do it because a mad neuroscientist genius has reprogrammed your brain so that you would inevitably have done Y, we can go straight from that fact to your doing Y (but if she did it by making you want to do Y then arguably the best explanation still makes use of that fact, so this is a borderline case, which is exactly as it should be); if you do it because someone who is determined that you should do Y is threatening to torture your children to death if you don't, more or less the same considerations apply as for the mad neuroscientist genius (and again this is good, because it's a borderline case -- we might want to say that you have free will but aren't acting freely).
What does this criterion say about "normal" decisions, if your brain is in fact implemented on top of deterministic physics? Well, an analysis of the causes of your action would need to go via what happened in your brain when you made the decision; there would be an "explanation" that just follows the trajectories of the elementary particles involved (or something of the kind; depends on exactly what deterministic physics) but I claim that wouldn't be a good explanation -- in the same way as it wouldn't be a good explanation for why a computer chess player played the move it did just to analyse the particle trajectories, because doing so doesn't engage at all with the tree-searching and position-evaluating the computer did.
One unsatisfactory feature of this criterion is that it appeals to the notions of preference and decision, which aren't necessarily any easier to define clearly than "free will" itself. Would we want to say that that computer chess player had free will? After all, I've just observed that any good explanation of the move it played would have to go via the process of searching and evaluation it did. Well, I would actually say that a chess-playing computer does something rather like deciding, and I might even claim it has a little bit of free will! (Free will, like everything else, comes in degrees). Still, "clearly" not very much, so what's different? One thing that's different, though how different depends on details of the program in ways I don't like, is that there may be an explanation along the following lines. "It played the move it did because that move maximizes the merit of the position as measured by a 12-ply search with such-and-such a way of scoring the positions at the leaves of the search tree." It seems fair to say that that really is "why" the computer chose the move it did; this seems like just as good an explanation as one that gets into more details of the dynamics of the search process; but it appeals to a universal fact about the position and not to the actual process the computer went through.
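The kind of explanation being described ("it played the move that maximizes merit as measured by a depth-limited search with such-and-such leaf scoring") can be sketched abstractly. This is a toy minimax search over a hand-built tree, not real chess-engine code; the tree shape, scores, and function names are all invented for illustration:

```python
def minimax(node, depth, maximizing=True):
    """Minimax value of `node`, searching `depth` plies.
    A node is either a numeric leaf score or a list of child nodes."""
    if depth == 0 or isinstance(node, (int, float)):
        return evaluate(node)
    values = [minimax(child, depth - 1, not maximizing) for child in node]
    return max(values) if maximizing else min(values)

def evaluate(node):
    """Static evaluation: a leaf is its own score; an unexpanded
    internal node is scored by averaging its children (a made-up rule)."""
    if isinstance(node, (int, float)):
        return node
    return sum(evaluate(c) for c in node) / len(node)

def best_move(children, depth):
    """'Why did it play that move?' Because that move maximizes the
    value found by a depth-ply search with this evaluation function."""
    return max(range(len(children)),
               key=lambda i: minimax(children[i], depth - 1, maximizing=False))

# Toy position with two candidate moves, each met by two opponent replies:
moves = [[3, 5], [4, 1]]
# Move 0: the opponent picks min(3, 5) = 3.  Move 1: min(4, 1) = 1.
# So a 2-ply search "explains" the choice of move 0.
```

Note that the explanation appeals only to the scoring function and the search depth, not to the particular order in which positions were examined, which is what makes it the "universal fact about the position" style of explanation discussed above.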
You could (still assuming determinism) do something similar for the choices made by the human brain, but you'd get a much worse explanation -- because a human brain (unlike the computer) isn't just optimizing some fairly simply defined function. An explanation along these lines would end up amounting to a complete analysis of particle trajectories, or maybe something one level up from that (activation levels in some sophisticated neural-network model, perhaps) and wouldn't provide the sort of insight we seek from a good explanation.
In so far as your argument works, I think it also proves that the incompatibilists are wrong. I've never seen a really convincing incompatibilist definition of "free will" either. Certainly not one that's any less awful than the compatibilist one I gave above. It sounds as if you're proposing something like "not being reliably predictable", but surely that won't do; do you want to say a (quantum) random number generator has free will? Or a mechanical randomizing device that works by magnifying small differences and is therefore not reliably predictable from any actually-feasible observations even in a deterministic (say, Newtonian) universe?
Yes, obviously. But just as it is a waste of time trying to get everyone to agree on what is beautiful, so too it is a waste of time trying to get everyone to agree on what free will is. Like I said, it's really quibbling over terminology, which is almost always a waste of time.
OK, that's not entirely unreasonable, but on that definition no reliably predictable agent has free will because there is always another good explanation that does not appeal to the agent's desires, namely, whatever model would be used by a reliable predictor.
Indeed.
OK, then your intuitive definition of "free will" is very different from mine. I would not say that a chess-playing computer has free will, at least not given current chess-playing technology. On my view of free will, a chess-playing computer with free will should be able to decide, for example, that it didn't want to play chess any more.
I'd say that not being reliably predictable is a necessary but not sufficient condition.
I think ialdabaoth actually came pretty close to getting it right:
I think that's wrong for two reasons. The first is that the model might explicitly include the agent's desires. The second is that a model might predict much better than it explains. (Though exactly what constitutes good explanation is another thing people may reasonably disagree on.)
I think that's better understood as a limit on its intelligence than on its freedom. It doesn't have the mental apparatus to form thoughts about whether or not to play chess (except in so far as it can resign any given game, of course). It may be that we shouldn't try to talk about whether an agent has free will unless it has some notion of its own decision-making process, in which case I'd say not that the chess program lacks free will, but that it's the wrong kind of thing to have or lack free will. (If you have no will, it makes no sense to ask whether it is free.)
Your objection to compatibilism was, unless I badly misunderstood, that no one has given a good compatibilist criterion for when something has free will. My objection was that you haven't given a good incompatibilist criterion either. The fact that you can state a necessary condition doesn't help with that; the compatibilist can state necessary conditions too.
There seem to me to be a number of quite different ways to interpret what he wrote. I am guessing that you mean something like: "I define free will to be unpredictability, with the further condition that we apply it only to agents we wish to anthropomorphize". I suppose that gets around my random number generator example, but not really in a very satisfactory way.
So, anyway, suppose someone offers me a bribe. You know me well, and in particular you know that (1) I don't want to do the thing they're hoping to bribe me to, (2) I care a lot about my integrity, (3) I care a lot about my perceived integrity, and (4) the bribe is not large relative to how much money I have. You conclude, with great confidence, that I will refuse the bribe. Do you really want to say that this indicates that I didn't freely refuse the bribe?
On another occasion I'm offered another bribe. But this time some evildoer with very strange preferences gets hold of me and compels me, at gunpoint, to decide whether to take it by flipping a coin. My decision is now maximally unpredictable. Is it maximally free?
I think the answers to the questions in those paragraphs should both be "no", and accordingly I think unpredictability and freedom can't be so close to being the same thing.
OK, let me try a different counter-argument then: do you believe we have free will to choose our desires? I don't. For example, I desire chocolate. This is not something I chose, it's something that happened to me. I have no idea how I could go about deciding not to desire chocolate. (I suppose I could put myself through some sort of aversion therapy, but that's not the same thing. That's deciding to try to train myself not to desire chocolate.)
If we don't have the freedom to choose our desires, then on what basis is it reasonable to call decisions that take those non-freely-chosen desires into account "free will"?
This is a very deep topic that is treated extensively in David Deutsch's book, "The Beginning of Infinity" (also "The Fabric of Reality", particularly chapter 7). If you want to go down that rabbit hole you need to read at least Chapter 7 of TFOR first, otherwise I'll have to recapitulate Deutsch's argument. The bottom line is that there is good reason to believe that theories with high predictive power but low explanatory power are not possible.
Sure. Do you distinguish between "will" and "desire"?
Really? What are they?
Yes.
Yes, which is to say, not free at all. It is exactly as free as the first case.
The only difference between the two cases is in your awareness of the mechanism behind the decision-making process. In the first case, the mechanism that caused you to choose to refuse the bribe is inside your brain and not accessible to your conscious self. In the second case, (at least part of) the mechanism that causes you to make the choice is more easily accessible to your conscious self. But this is a thin reed because the inaccessibility of your internal decision making process is (almost certainly) a technological limitation, not a fundamental difference between the two cases.
(I see you've been downvoted. Not by me.)
If Jewishness is inherited from one's mother, and a person's great^200000-grandmother [EDITED to fix an off-by-1000x error, oops] was more like a chimpanzee than a modern human and had neither ethnicity nor religion as we now understand them, then on what basis is it reasonable to call that person Jewish?
If sentences are made up of letters and letters have no meaning, then on what basis is it reasonable to say that sentences have meaning?
It is not always best to make every definition recurse as far back as it possibly can.
I have read both books. I do not think chapter 7 of TFoR shows that theories with high predictive power but low explanatory power are impossible, but it is some time since I read the book and I have just now only glanced at it rather than rereading it in depth. If you reckon Deutsch says that predictive power guarantees explanatory power, could you remind me where in the chapter he does it? Or, if you have an argument that starts from what Deutsch does in that chapter and concludes that predictive power guarantees explanatory power, could you sketch it? (I do not guarantee to agree with everything Deutsch says.)
I seldom use the word "will" other than in special contexts like "free will". Why do you ask?
One such might be: "For an action to be freely willed, the causes leading up to it must go via a process of conscious decision by the agent."
Meh, OK. So let me remind you that the question we were (I thought) discussing at this point was: are there clearer-cut satisfactory criteria for "free will" available to incompatibilists than to compatibilists? Now, of course if you say that by definition nothing counts as an instance of free will then that's a nice clear-cut criterion, but it also has (so far as it goes) nothing at all to do with freedom or will or anything else.
I think you're saying something a bit less content-free than that; let me paraphrase and you can correct me if I'm getting it wrong. "Free will means unpredictability-in-principle. Everything is in fact predictable in principle, and therefore nothing is actually an instance of free will." That's less content-free because we can then ask: OK, what if you're wrong about everything being predictable in principle; or what if you're right but we ask about a hypothetical different world where some things aren't predictable in principle?
Let's ask that. Imagine a world in which some sort of objective-collapse quantum mechanics is correct, and many things ultimately happen entirely at random. And let's suppose that whether or not the brain uses quantum effects in any "interesting" way, it is at least affected by them in a chaos-theory sort of way: that is, sometimes microscale randomness arising from quantum mechanics ends up having macroscale effects on what your brain does. And now let's situate my two hypothetical examples in this hypothetical world. In this world, of course, nothing is entirely predictable, but some things are much more predictable than others. In particular, the first version of me (deciding whether to take the bribe on the basis of my moral principles and preferences and so forth, which ends up being very predictable because the bribe is small and my principles and preferences strong) is much more predictable (both in principle and in practice) in this world than the second version (deciding, at gunpoint, on the basis of what I will now make a quantum random number generator rather than a coin flip). In this world, would you accordingly say that first-me is choosing much less freely than second-me?
I don't think that's correct. For instance, in the second case I am coerced by another agent, and in the first I'm not; in the first case my decision is a consequence of my preferences regarding the action in question, and in the second it isn't (though it is a consequence of my preference for living over dying; but I remark that your predictability criterion gives the exact same result if in the second case the random number generator is wired directly into my brain so as to control my actions with no conscious involvement on my part at all).
You may prefer notions of free will with a sort of transitive property, where if X is free and X is caused by Y1,...,Yn (and nothing else) then one of the Y must be free. (Or some more sophisticated variant taking into account the fact that freedom comes in degrees, that the notion of "cause" is kinda problematic, etc.) I see no reason why we have to define free will in such a way. We are happy to say that a brain is intelligent even though it is made of neurons which are not intelligent, that a statue resembles Albert Einstein even though it is made of atoms that do not resemble Einstein, that a woolly jumper is warm even though it is made of individual fibres that aren't, etc.