gjm comments on Is Spirituality Irrational? - Less Wrong

5 Post author: lisper 09 February 2016 01:42AM


Comment author: CCC 24 February 2016 08:26:32AM *  1 point [-]

I grew up speaking Hebrew, so I can tell you that the original is ambiguous too. The GNT translation interpolates the word "Then". That word ("az") does not appear in the original. The KJV translation is pretty good, but here's an interesting bit o' trivia: the original of "a tree to be desired to make one wise" is "w'nech'mäd häëtz l'has'Kiyl" which literally means, "and the tree was cute for wisdom." (Actually, it's not quite "wisdom", the meaning of "l'has'Kiyl" is broader than that. A better translation would be something like "smartness" or "brainpower".)

Huh. Maybe I've been playing too many role-playing games, but I tend to think of "wisdom" and "smartness" as somewhat but not entirely correlated; with "smartness" being more related to academics and book-learning and "wisdom" more common-sense and correctness of intuition.

Sure, but 1) I don't grant your premise and 2) the order of events is ambiguous, so even if I grant the premise the possibility remains that Eve didn't know it was evil except in retrospect.

I'll trust you with regards to the Hebrew and abandon this line of argument in the face of point 2.

That's the Ethan Couch defense, and it's not entirely indefensible. We don't generally prosecute children as adults. However, it is problematic if you use it as an excuse to game the system by remaining willfully ignorant. A parent who denied their child an education on the grounds that if the child remained profoundly ignorant then it would be incapable of sinning would probably be convicted of child abuse, and rightly so IMHO.

Granted. Those who are not ignorant have a duty to alleviate the ignorance of others - Ezekiel 3 verses 17 to 21 are relevant here. (Note that the ignorant man is still being punished - just because his sin is lesser in his ignorance does not mean that it is nothing - so education is still important to reduce sin).

You have to be careful to distinguish what is computable in theory vs what is computable in practice. Even now, computers can do many things that their creators cannot.

Granted. I was talking about computable in theory. If we're considering computable in practice, then there's the question of why there was a several-billion-year wait before the first (known to us) computing devices appeared in this universe; that's more than enough time to figure out how to build a computer, then build that computer, then calculate more digits of pi than I can imagine.

Time travel, like omniscience, is logically incompatible with free will for exactly the reason you describe.

I can think of quite a few arguments that time travel is impossible, but this is a new one to me. I can see where you're coming from - you're saying that the idea that someone, somewhere, might know with certainty what I will decide in a given set of circumstances is logically incompatible with the idea that I might choose something else.

I'm not sure that it is, though. Just because I could choose something else doesn't mean that I will choose something else. (Although that gets into the murky waters of whether it is possible for me to do that which I am never observed to do...)

Time travel is impossible because your physical existence is an illusion. (See also this and this.)

Okay, I've had a look at those. The first one kind of skipped over the math for how one ends up with a negative entropy - that supercorrelation is mentioned as being odd, but nowhere is it explained what that means. (It's also noted that the quantum correlation measurement is analogous to the classical one, but I am left uncertain as to how, when, and even if that analogy breaks down, because I do not understand that critical part of the maths, and how it corresponds to the real world, and I am left with the suspicion that it might not).

So, I'm not saying the conclusion as presented in the paper is necessarily wrong. I'm saying I don't follow the reasoning that leads to it.

Maybe. But if, as you have already conceded, the quale of motion can exist without motion, why cannot the quale of free will exist without free will?

I will concede that there is no reason why the quale of free will can't exist without free will. I will, however, firmly maintain that the quale of free will (along with many other qualia, like the quale of redness) can be and has been directly observed, and therefore does exist.

Coming to the realization that free will (and even classical reality itself) are illusions doesn't make those illusions any less compelling. You can still live your life as if you were a classical being with free will while being aware of the fact that this is not actually true in the deepest metaphysical sense.

Fair enough, but that seems to be the case when you are not using the skill of being certain that your free will is an illusion.

But it's much more useful than just that. By becoming aware of how your brain fools you into thinking you have free will you can actually take more control of your life. Yes, I know that sounds like a contradiction, but it's not.

This is a contradiction. If you don't have free will, then you have no control and cannot take control; if you do take control, then you have the free will to, at the very least, decide to take that control.

I'm not saying that the certainty can't improve the illusion. I'll trust you on that point, that you have somehow found some way to take the certainty that you do not have free will and - somehow - use this to give yourself at least the illusion of greater control over your own life. (I'm rather left wondering how, but I'll trust that it's possible). However, the idea that you are doing so deliberately implies that you not only have, but are actively exercising your free will.

But why don't you go read the book before we go further.

We would probably need to put this line of debate on hold for some time, then. I'd have to find a copy first.

Not just degrees. Existence is not just a continuum, it's a vector space.

Okay, how does that work? I can see how existence as a continuum makes sense (and, indeed, that's how I think of it), but as a vector space?

Comment author: lisper 24 February 2016 06:12:54PM 1 point [-]

I tend to think of "wisdom" and "smartness" as somewhat but not entirely correlated

Well, they are. Maybe "mental faculties" would be a better translation. But it's neither here nor there.

the ignorant man is still being punished

That hardly seems fair. That means that if Adam and Eve had not eaten the fruit then they would have been punished for the sins that they committed out of ignorance.

education is still important to reduce sin

Indeed. But God didn't provide any. In fact, He specifically commanded A&E to remain ignorant.

then there's the question of why there was a several-billion-year wait

Huh? I don't understand that at all. Your claim was that any designed entity "cannot do or calculate anything that its designer can't do or calculate". I exhibited a computer that can calculate a trillion digits of pi as a counterexample. What does the fact that evolution took a long time to produce the first computer have to do with it? The fact remains that computers can do things that their human designers can't.

In fact, just about anything that humans build can do things humans can't do; that's kind of the whole point of building them. Bulldozers. Can openers. Hammers. Paper airplanes. All of these things can do things that their human designers can't do.

I can think of quite a few arguments that time travel is impossible, but this is a new one to me.

Actually, that's not an argument that time travel is impossible. Time travel is indeed impossible, but that's a different argument :-) Time travel and free will are logically incompatible, at least under certain models of time travel. (If the past can change once you've travelled into it so that you can no longer reliably predict the future, then time travel and free will can co-exist.)

[if] someone, somewhere, might know with certainty what I will decide in a given set of circumstances is logically incompatible with the idea that I might choose something else.

Exactly. This is necessarily part of the definition of free will. If you're predictable to an external agent but not to yourself then it must be the case that there is something that determines your future actions that is accessible to that agent but not to you.

Just because I could choose something else doesn't mean that I will choose something else.

But if you are reliably predictable then it is not the case that you could choose something else. That's what it means to be reliably predictable.

but nowhere is it explained what that means

Sorry about that. I tried to write a pithy summary but it got too long for a comment. I'll have to write a separate article about it I guess. For the time being I'll just have to ask you to trust me: time travel into the past is ruled out by quantum mechanics. (This should be good news for you because it leaves open the possibility of free will!)

the quale of free will (along with many other qualia, like the quale of redness) can be and has been directly observed, and therefore does exist

Yes!!! Exactly!!! That is in fact the whole point of my OP: the quale of the Presence of the Holy Spirit has also been directly observed and therefore does exist (despite the fact that the Holy Spirit does not).

that seems to be the case when you are not using the skill of being certain that your free will is an illusion

Sorry, that didn't parse. What is "that"?

the idea that you are doing so deliberately implies that you not only have, but are actively exercising your free will.

Well, yeah, at root I'm not doing it deliberately. What I'm doing (when I do it -- I don't always, it's hard work [1]) is to improve the illusion that I'm doing things deliberately. But as with classical reality, a good-enough illusion is good enough.

[1] For example, I'm not doing it right now. I really ought to be doing real work, but instead I'm slacking off writing this response, which is a lot more fun, but not really what I ought to be doing.

a vector space?

Yes. Did you read "31 flavors of ontology"?

Comment author: gjm 25 February 2016 01:16:18PM 0 points [-]

But if you are reliably predictable then it is not the case that you could choose something else. That's what it means to be reliably predictable.

The word "could" is a tricksy one, and I think it likely that your disagreement with CCC about free will has a lot to do with different understandings of "could" (and of its associated notions like "possible" and "inevitably").

Comment author: lisper 25 February 2016 03:45:36PM 0 points [-]

The reason "could" is tricky is that whether or not something "could" happen (or could have happened) is usually reckoned relative to some state of knowledge. If you flip a coin but keep your hand over it so that you can see how it landed but I can't then from my perspective it could be either heads or tails but from yours it can't.

To assess free will you have to take the perspective of some hypothetical agent that has all of the knowledge that is potentially available. If such an agent can predict your actions then you cannot have free will because, as I pointed out before, your actions are determined by factors that are accessible to this hypothetical agent but not to you. Such agents do not exist in our world so we can still argue about it, but in a hypothetical world where we postulate the existence of such an agent (i.e. a world with time travel into the past without the possibility of changing the past, or a world with a Newcomb-style intelligent alien) the argument is settled: such an agent exists, you are reliably predictable, and you cannot have free will. (This, by the way, is the resolution of Newcomb's paradox: you should always take the one box. The only reason people think that two boxes might be the right answer is because they refuse to relinquish the intuition that they have free will despite the overwhelming (hypothetical in the case of Newcomb's paradox) evidence against it.)
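The one-box argument can be made concrete with a quick expected-value sketch. The payoff model below (a predictor-accuracy parameter p, and the standard $1M/$1k amounts) is an illustrative assumption on my part, not something from the thread:

```python
# Expected payoffs in Newcomb's problem as a function of predictor
# accuracy p (the probability that the predictor correctly anticipated
# your choice).

def expected_one_box(p, big=1_000_000):
    # If you one-box, the predictor foresaw it with probability p,
    # so the opaque box contains $1M with probability p.
    return p * big

def expected_two_box(p, big=1_000_000, small=1_000):
    # If you two-box, the predictor foresaw it with probability p,
    # so the opaque box is empty with probability p; you always get
    # the visible $1k.
    return (1 - p) * big + small

# With a reliable predictor, one-boxing dominates:
assert expected_one_box(0.99) > expected_two_box(0.99)
```

On this model, two-boxing only wins when the predictor is barely better than a coin flip, which is exactly why a *reliable* predictor settles the argument.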

Comment author: g_pepper 25 February 2016 09:08:46PM 1 point [-]

you should always take the one box. The only reason people think that two boxes might be the right answer is because they refuse to relinquish the intuition that they have free will despite the overwhelming (hypothetical in the case of Newcomb's paradox) evidence against it.

You sound as though they have some choice as to which box to take, or whether or not to believe in free will. But if your argument is correct, then they do not.

Comment author: lisper 25 February 2016 10:47:30PM 0 points [-]

You sound as though they have some choice as to which box to take

Do I? That wasn't my intention. They don't have a choice in which box to take, any more than they have a choice in whether or not they find my argument compelling. If they find my argument compelling then (if they are rational) they will take 1 box and win $1M. If they don't, then (maybe) they won't. There's no real "choice" involved (though there is the very compelling illusion of choice).

This is actually a perfect illustration of the limits of free will even in our own awareness: you can't decide whether to find a particular argument compelling or not, it's something that just happens to you.

Comment author: gjm 25 February 2016 06:03:26PM 0 points [-]

To assess free will you have to take the perspective of some hypothetical agent that has all of the knowledge that is potentially available.

This is questionable, and I would expect many compatibilists to say quite the opposite.

Comment author: lisper 25 February 2016 08:33:04PM 0 points [-]

What can I say? The compatibilists are wrong. The proof is simple: either all reliably predictable agents have free will, or some do and some don't. If they all do, then a rock has free will and we will just have to agree to disagree about that (some people actually do take that position). If some do and some don't, then in order for the term "free will" to have meaning you need a criterion by which to distinguish reliably predictable agents with free will from those without it. No one has ever come up with such a criterion (AFAIK).

Comment author: ialdabaoth 25 February 2016 09:48:51PM 0 points [-]

My intuition has always been that 'free will' isn't a binary thing; it's a relational measurement with a spectrum. And predictability is explicitly incompatible with it, in the same way that entropy measurements depend on how much predictive information you have about a system. (I suspect that 'entropy' and 'free will' are essentially identical terms, with the latter applying to systems that we want to anthropomorphize.)
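The entropy analogy can be illustrated: the entropy you assign to a system depends on the probability distribution your predictive information leaves you with. A minimal Shannon-entropy sketch (the example distributions are made up for illustration):

```python
import math

def entropy(dist):
    # Shannon entropy in bits of a probability distribution.
    return -sum(p * math.log2(p) for p in dist if p > 0)

# No predictive information about a fair coin: 1 bit of uncertainty.
# Partial information (90/10 odds): less uncertainty.
# Perfect prediction: the distribution collapses and entropy is zero.
no_info      = entropy([0.5, 0.5])   # 1.0
partial_info = entropy([0.9, 0.1])   # about 0.47
full_info    = entropy([1.0])        # 0.0
```

The same physical system carries different entropy for observers with different predictive information, which is the sense in which predictability and "freedom" (on this view) trade off against each other.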

Comment author: lisper 25 February 2016 10:40:29PM 0 points [-]

Yes, I think that's exactly right. But compatibilists don't agree with that. They think that there is such a thing as free will in some absolute sense, and that this thing is "compatible" (hence the name) with determinacy/reliable predictability.

Comment author: gjm 25 February 2016 10:55:27PM 0 points [-]

No one has ever come up with such a criterion

There are a number of useful terms for which no one has ever come up with a precisely stated and clearly defensible criterion. Beautiful, good, conscious, etc. This surely does indicate that there's something unsatisfactory about those terms, but I don't think the right way to deal with it is to declare that nothing is beautiful, good, or conscious.

Having said which, I think I can give a not-too-hopeless criterion distinguishing agents we might reasonably want to say have free will from those we don't. X has free will in regard to action Y if and only if every good explanation for why X did Y goes via X's preference for Y or decision to do Y or something of the kind.

So, if you do something purely "on autopilot" without any actual wish to do it, that condition fails and you didn't do it freely; if you do it because a mad neuroscientist genius has reprogrammed your brain so that you would inevitably have done Y, we can go straight from that fact to your doing Y (but if she did it by making you want to do Y then arguably the best explanation still makes use of that fact, so this is a borderline case, which is exactly as it should be); if you do it because someone who is determined that you should do Y is threatening to torture your children to death if you don't, more or less the same considerations apply as for the mad neuroscientist genius (and again this is good, because it's a borderline case -- we might want to say that you have free will but aren't acting freely).

What does this criterion say about "normal" decisions, if your brain is in fact implemented on top of deterministic physics? Well, an analysis of the causes of your action would need to go via what happened in your brain when you made the decision; there would be an "explanation" that just follows the trajectories of the elementary particles involved (or something of the kind; depends on exactly what deterministic physics) but I claim that wouldn't be a good explanation -- in the same way as it wouldn't be a good explanation for why a computer chess player played the move it did just to analyse the particle trajectories, because doing so doesn't engage at all with the tree-searching and position-evaluating the computer did.

One unsatisfactory feature of this criterion is that it appeals to the notions of preference and decision, which aren't necessarily any easier to define clearly than "free will" itself. Would we want to say that that computer chess player had free will? After all, I've just observed that any good explanation of the move it played would have to go via the process of searching and evaluation it did. Well, I would actually say that a chess-playing computer does something rather like deciding, and I might even claim it has a little bit of free will! (Free will, like everything else, comes in degrees). Still, "clearly" not very much, so what's different? One thing that's different, though how different depends on details of the program in ways I don't like, is that there may be an explanation along the following lines. "It played the move it did because that move maximizes the merit of the position as measured by a 12-ply search with such-and-such a way of scoring the positions at the leaves of the search tree." It seems fair to say that that really is "why" the computer chose the move it did; this seems like just as good an explanation as one that gets into more details of the dynamics of the search process; but it appeals to a universal fact about the position and not to the actual process the computer went through.
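The "maximizes the merit of the position as measured by an n-ply search" style of explanation can be sketched as plain minimax over a toy game tree. This is not a real chess engine; the tree shape and leaf scores are invented for illustration:

```python
def minimax(node, maximizing):
    # A node is either a numeric leaf score (static evaluation)
    # or a list of child nodes (positions reachable in one move).
    if not isinstance(node, list):
        return node
    values = [minimax(child, not maximizing) for child in node]
    return max(values) if maximizing else min(values)

def best_move(children):
    # The move the engine "chose" is the child maximizing the minimax
    # value (the opponent moves next, so children are minimizing nodes).
    return max(range(len(children)),
               key=lambda i: minimax(children[i], False))

# Toy position with three candidate moves; opponent replies lead to
# these leaf scores:
tree = [[3, 5], [6, 2], [4, 4]]
# best_move(tree) picks move 2, because it guarantees a score of 4 --
# a "universal fact about the position" rather than a trace of the
# search process itself.
```

The point of the sketch is that "it played this move because it maximizes the guaranteed score" is a genuine explanation that never mentions particle trajectories, or even the order in which the search visited nodes.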

You could (still assuming determinism) do something similar for the choices made by the human brain, but you'd get a much worse explanation -- because a human brain (unlike the computer) isn't just optimizing some fairly simply defined function. An explanation along these lines would end up amounting to a complete analysis of particle trajectories, or maybe something one level up from that (activation levels in some sophisticated neural-network model, perhaps) and wouldn't provide the sort of insight we seek from a good explanation.

In so far as your argument works, I think it also proves that the incompatibilists are wrong. I've never seen a really convincing incompatibilist definition of "free will" either. Certainly not one that's any less awful than the compatibilist one I gave above. It sounds as if you're proposing something like "not being reliably predictable", but surely that won't do; do you want to say a (quantum) random number generator has free will? Or a mechanical randomizing device that works by magnifying small differences and is therefore not reliably predictable from any actually-feasible observations even in a deterministic (say, Newtonian) universe?

Comment author: lisper 26 February 2016 01:47:08AM *  0 points [-]

I don't think the right way to deal with it is to declare that nothing is beautiful, good, or conscious.

Yes, obviously. But just as it is a waste of time trying to get everyone to agree on what is beautiful, so too it is a waste of time trying to get everyone to agree on what free will is. Like I said, it's really quibbling over terminology, which is almost always a waste of time.

Having said which, I think I can give a not-too-hopeless criterion distinguishing agents we might reasonably want to say have free will from those we don't. X has free will in regard to action Y if and only if every good explanation for why X did Y goes via X's preference for Y or decision to do Y or something of the kind.

OK, that's not entirely unreasonable, but on that definition no reliably predictable agent has free will because there is always another good explanation that does not appeal to the agent's desires, namely, whatever model would be used by a reliable predictor.

One unsatisfactory feature of this criterion is that it appeals to the notions of preference and decision, which aren't necessarily any easier to define clearly than "free will" itself.

Indeed.

I would actually say that a chess-playing computer does something rather like deciding, and I might even claim it has a little bit of free will!

OK, then your intuitive definition of "free will" is very different from mine. I would not say that a chess-playing computer has free will, at least not given current chess-playing technology. On my view of free will, a chess-playing computer with free will should be able to decide, for example, that it didn't want to play chess any more.

It sounds as if you're proposing something like "not being reliably predictable", but surely that won't do; do you want to say a (quantum) random number generator has free will?

I'd say that not being reliably predictable is a necessary but not sufficient condition.

I think ialdabaoth actually came pretty close to getting it right:

'free will' isn't a binary thing; it's a relational measurement with a spectrum. And predictability is explicitly incompatible with it, in the same way that entropy measurements depend on how much predictive information you have about a system. (I suspect that 'entropy' and 'free will' are essentially identical terms, with the latter applying to systems that we want to anthropomorphize.)

Comment author: gjm 26 February 2016 05:13:39PM 0 points [-]

no reliably predictable agent has free will because there is always another good explanation that does not appeal to the agent's desires, namely, whatever model would be used by a reliable predictor.

I think that's wrong for two reasons. The first is that the model might explicitly include the agent's desires. The second is that a model might predict much better than it explains. (Though exactly what constitutes good explanation is another thing people may reasonably disagree on.)

a chess playing computer with free will should be able to decide, for example, that it didn't want to play chess any more.

I think that's better understood as a limit on its intelligence than on its freedom. It doesn't have the mental apparatus to form thoughts about whether or not to play chess (except in so far as it can resign any given game, of course). It may be that we shouldn't try to talk about whether an agent has free will unless it has some notion of its own decision-making process, in which case I'd say not that the chess program lacks free will, but that it's the wrong kind of thing to have or lack free will. (If you have no will, it makes no sense to ask whether it is free.)

not being reliably predictable is a necessary but not sufficient condition.

Your objection to compatibilism was, unless I badly misunderstood, that no one has given a good compatibilist criterion for when something has free will. My objection was that you haven't given a good incompatibilist criterion either. The fact that you can state a necessary condition doesn't help with that; the compatibilist can state necessary conditions too.

I think ialdabaoth actually came pretty close to getting it right

There seem to me to be a number of quite different ways to interpret what he wrote. I am guessing that you mean something like: "I define free will to be unpredictability, with the further condition that we apply it only to agents we wish to anthropomorphize". I suppose that gets around my random number generator example, but not really in a very satisfactory way.

So, anyway, suppose someone offers me a bribe. You know me well, and in particular you know that (1) I don't want to do the thing they're hoping to bribe me to, (2) I care a lot about my integrity, (3) I care a lot about my perceived integrity, and (4) the bribe is not large relative to how much money I have. You conclude, with great confidence, that I will refuse the bribe. Do you really want to say that this indicates that I didn't freely refuse the bribe?

On another occasion I'm offered another bribe. But this time some evildoer with very strange preferences gets hold of me and compels me, at gunpoint, to decide whether to take it by flipping a coin. My decision is now maximally unpredictable. Is it maximally free?

I think the answers to the questions in those paragraphs should both be "no", and accordingly I think unpredictability and freedom can't be so close to being the same thing.

Comment author: lisper 26 February 2016 08:03:50PM 0 points [-]

the model might explicitly include the agent's desires

OK, let me try a different counter-argument then: do you believe we have free will to choose our desires? I don't. For example, I desire chocolate. This is not something I chose, it's something that happened to me. I have no idea how I could go about deciding not to desire chocolate. (I suppose I could put myself through some sort of aversion therapy, but that's not the same thing. That's deciding to try to train myself not to desire chocolate.)

If we don't have the freedom to choose our desires, then on what basis is it reasonable to call decisions that take those non-freely-chosen desires into account "free will"?

a model might predict much better than it explains

This is a very deep topic that is treated extensively in David Deutsch's book, "The Beginning of Infinity" (also "The Fabric of Reality", particularly chapter 7). If you want to go down that rabbit hole you need to read at least Chapter 7 of TFOR first, otherwise I'll have to recapitulate Deutsch's argument. The bottom line is that there is good reason to believe that theories with high predictive power but low explanatory power are not possible.

If you have no will, it makes no sense to ask whether it is free.

Sure. Do you distinguish between "will" and "desire"?

the compatibilist can state necessary conditions too.

Really? What are they?

Do you really want to say that this indicates that I didn't freely refuse the bribe?

Yes.

Is it maximally free?

Yes, which is to say, not free at all. It is exactly as free as the first case.

The only difference between the two cases is in your awareness of the mechanism behind the decision-making process. In the first case, the mechanism that caused you to choose to refuse the bribe is inside your brain and not accessible to your conscious self. In the second case, (at least part of) the mechanism that causes you to make the choice is more easily accessible to your conscious self. But this is a thin reed because the inaccessibility of your internal decision making process is (almost certainly) a technological limitation, not a fundamental difference between the two cases.