gjm comments on Is Spirituality Irrational? - Less Wrong

5 Post author: lisper 09 February 2016 01:42AM

Comment author: gjm 27 February 2016 11:48:56PM -1 points [-]

Does this mean that you concede that our desires are not freely chosen?

I think some of our desires are more freely chosen than others. I do not think an action chosen on account of a not-freely-chosen desire is necessarily best considered unfree for that reason.

[...] are not impossible [...]

That isn't quite what you said before, but I'm happy for you to amend what you wrote.

The reason they are extremely unlikely [...]

It seems to me that the argument you're now making has almost nothing to do with the argument in chapter 7 of Deutsch's book. That doesn't (of course) in any way make it a bad argument, but I'm now wondering why you said what you did about Deutsch's books.

Anyway. I think almost all the work in your argument (at least so far as it's relevant to what we're discussing here) is done by the following statement: "Explanatory power turns out to be the only known effective filter for theories with high predictive power." I think this is incorrect; simplicity plus past predictive success is a pretty decent filter too. (Theories with these properties have not infrequently turned out to be embeddable in theories with good explanatory power, of course, as when Mendeleev's empirically observed periodicity was explained in terms of electron shells, and the latter further explained in terms of quantum mechanics.)

It doesn't mean UIP, it simply requires UIP.

OK, but in that case either you owe us something nearer to necessary and sufficient conditions, or else you need to retract your claim that incompatibilism does better than compatibilism in the "is there a nice clear criterion?" test. Also, if you aren't claiming anything close to "free will = UIP" then I no longer know what you meant by saying that ialdabaoth got it more or less right.

to be "real" free will, there would have to be some circumstances where [...]

Sure. That would be why I said "with great confidence" rather than "with absolute certainty". I might, indeed, take the bribe after all, despite all those very strong reasons to expect me not to. But it's extremely unlikely. (So no, I don't agree that I've "chosen a bad example"; rather, I think you misunderstood the example I gave.)

let me propose a better one

If you say "you chose a bad example to make your point, so let me propose a better one" and then give an example that doesn't even vaguely gesture in the direction of making my point, I'm afraid I start to doubt that you are arguing in good faith.

Not so. In the first case you are being coerced by [...]

The things you describe me as being "coerced by" are (1) not agents and (2) not external to me. These are not irrelevant details, they are central to the intuitive meaning of "free will" that we're looking for philosophically respectable approximations to. (Perhaps you disagree with my framing of the issue. I take it that that's generally the right way to think about questions like "what is free will?".)

In particular, I think your claim about "the only difference" is flatly wrong.

what I require is a notion of free will that is the same for all observers, including a hypothetical one that can predict anything that can be predicted in principle.

That sounds sensible on first reading, but I think actually it's a bit like saying "what I require is a notion of right and wrong that is the same for all observers, including a hypothetical one that doesn't care about suffering" and inferring that our notions of right and wrong shouldn't have anything to do with suffering. Our words and concepts need to be useful to us, and if some such concept would be uninteresting to a hypothetical superbeing that can predict anything that's predictable in principle, that is not sufficient reason for us not to use it. Still more when your hypothetical superbeing needs capabilities that are probably not even in principle possible within our universe.

(I think, in fact, that even such a superbeing might have reason to talk about something like "free will", if it's talking about very-limited beings like us.)

any phenomenon that someone claims is objectively real.

I haven't, as it happens, been claiming that free will is "objectively real". All I claim is that it may be a useful notion. Perhaps it's only as "objectively real" as, say, chess; that is, it applies to us, and what it is is fundamentally dependent on our cognitive and other peculiarities, and a world of your hypothetical superbeings might be no more interested in it than they presumably would be in chess, but you can still ask "to what extent is X exercising free will?" in the same way as you could ask "is X a better move than Y, for a human player with a human opponent?".

Comment author: lisper 28 February 2016 06:56:31AM 1 point [-]

an example that doesn't even vaguely gesture in the direction of making my point

Sorry about that. I really was trying to be helpful.

I haven't, as it happens, been claiming that free will is "objectively real". All I claim is that it may be a useful notion.

Well, heck, what are we arguing about then? Of course it's a useful notion.

chess

A better analogy would be "simultaneous events at different locations in space." Chess is a mathematical abstraction that is the same for all observers. Simultaneity, like free will, depends on your point of view.

Comment author: gjm 28 February 2016 10:34:04PM -1 points [-]

Of course it's a useful notion.

You're arguing that no one has it and AIUI that nothing in the universe ever could have it. Doesn't seem that useful to me.

Chess is a mathematical abstraction that is the same for all observers.

I did consider substituting something like cricket or baseball for that reason. But I think the idea that free will is viewpoint-dependent depends heavily on what notion of free will you're working with. I'm still not sure what yours actually is, but mine doesn't have that property, or at any rate doesn't have it to so great an extent as yours seems to.

Comment author: lisper 29 February 2016 03:01:07AM 1 point [-]

Free will is a useful notion because we have the perception of having it, and so it's useful to be able to talk about whatever it is that we perceive ourselves to have even though we don't really have it. It's useful in the same way that it's useful to talk about, say, "the force of gravity" even though in reality there is no such thing. (That's actually a pretty good analogy. The force of gravity is a reasonable approximation to the truth for nearly all everyday purposes even though conceptually it is completely wrong. Likewise with free will.)

You said that a chess-playing computer has (some) free will. I disagree (obviously because I don't think anything has free will). Do you think Pachinko machines have free will? Do they "decide" which way to go when they hit a pin? Does the atmosphere have free will? Does it decide where tornadoes appear?

When I say "real free will" I mean this:

  1. Decisions are made by my conscious self. This rules out pachinko machines, the atmosphere, and chess-playing computers having free will.

  2. Before I make a decision, it must be actually possible for me to choose more than one alternative. Ergo, if I am reliably predictable, I cannot have free will, because then it is not possible for me to choose more than one alternative: I can only choose the alternative that a hypothetical predictor would reliably predict.

I don't know how to make it any clearer than that.

Comment author: gjm 29 February 2016 01:55:42PM 1 point [-]

it's useful to be able to talk about whatever it is that we perceive ourselves to have even though we don't really have it.

I think it's more helpful to talk about whatever we have that we're trying to talk about, even if some of what we say about it isn't quite right, which is why I prefer notions of free will that don't become necessarily wrong if the universe is deterministic or there's an omnipotent god or whatever.

I agree that gravity makes a useful analogy. Gravity behaves in a sufficiently force-like way (at least in regions of weakish spacetime curvature, like everywhere any human being could possibly survive) that I think for most purposes it is much better to say "there is, more or less, a force of gravity, but note that in some situations we'll need to talk about it differently" than "there is no force of gravity". And I would say the same about "free will".

Do you think Pachinko machines have free will?

I don't know much about Pachinko machines, but I don't think they have any processes going on in them that at all resemble human deliberation, in which case I would not want to describe them as having free will even to the (very attenuated) extent that a chess program might have.

Does the atmosphere have free will?

Again, I don't think there are any sort of deliberative processes going on there, so no free will.

I mean this: [...] Decisions are made by my conscious self.

So there are two parts to this, and I'm not sure to what extent you actually intend them both. Part 1: decisions are made by conscious agents. Part 2: decisions are made, more specifically, by those agents' conscious "parts" (of course this terminology doesn't imply an actual physical division).

it must be actually possible for me to choose more than one alternative

Of course "actually possible" is pretty problematic language; what counts as possible? If I'm understanding you right, you'd cash it out roughly as follows: look at the probability distribution of possible outcomes in advance of the decision; then freedom = entropy of that probability distribution (or something of the kind).

So then freedom depends on what probability distribution you take, and you take the One True Measure of freedom to be what you get for an observer who knows everything about the universe immediately before the decision is made (more precisely, everything in the past light-cone of the decision); if the universe is deterministic then that's enough to determine the answer after the decision is made too, so no decisions are free.
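That entropy reading can be made concrete with a toy sketch (my own illustration, not anything gjm or lisper wrote; `decision_entropy` is a made-up name):

```python
import math

def decision_entropy(probs):
    # Shannon entropy (in bits) of the distribution over possible choices.
    # Under the "freedom = entropy" reading, a choice whose outcome
    # distribution is a point mass (perfectly predictable) scores 0 bits,
    # while a maximally uncertain binary choice scores 1 bit.
    return sum(-p * math.log2(p) for p in probs if p > 0)

# The predictor is certain: one outcome with probability 1 -> no "freedom".
print(decision_entropy([1.0]))       # 0.0
# Two equally likely alternatives -> 1 bit of "freedom".
print(decision_entropy([0.5, 0.5]))  # 1.0
```

On this measure, a deterministic universe observed by the "past-omniscient" predictor collapses every decision's distribution to a point mass, which is exactly why the incompatibilist concludes no decision is free.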

One obvious problem with this is that our actual universe is not deterministic in the relevant sense. We can make a device based on radioactive decay or something for which knowledge of all that can be known in advance of its operation is not sufficient to tell you what it will output. For all we know, some or all of our decisions are actually affected enough by "amplified" quantum effects that they can't be reliably predicted even by an observer with access to everything in their past light-cone.

It might be worse. Perhaps some of our decisions are so affected and some not. If so, there's no reason (that I can see) to expect any connection between "degree of influence from quantum randomness" and any of the characteristics we generally think of as distinguishing free from not-so-free -- practical predictability by non-omniscient observers, the perception of freeness that you mentioned before, external constraints, etc.

It doesn't seem to me that predictability by a hypothetical "past-omniscient" observer has much connection with what in other contexts we call free will. Why make it part of the definition?

Comment author: lisper 29 February 2016 07:29:02PM 1 point [-]

I prefer notions of free will that don't become necessarily wrong if the universe is deterministic or there's an omnipotent god or whatever.

That's like saying, "I prefer triangles with four sides." You are, of course, free to prefer whatever you want and to use words however you want. But the word "free" has an established meaning in English which is fundamentally incompatible with determinism. Free means, "not under the control or in the power of another; able to act or be done as one wishes." If my actions are determined by physics or by God, I am not free.

I don't think they have any processes going on in them that at all resemble human deliberation

And you think chess-playing machines do?

BTW, if your standard for free will is "having processing that resembles human deliberation" then you've simply defined free will as "something that humans have" in which case the question of whether or not humans have free will becomes very uninteresting because the answer is tautologically "yes".

So there are two parts to this

I'd call them two "interpretations" rather than two "parts". But I intended the latter: to qualify as free will on my view, decisions have to be made by the conscious part of a conscious agent. If I am conscious but I base my decision on a coin flip, that's not free will.

"actually possible" is pretty problematic language; what counts as possible?

Whatever is not impossible. In this case (and we've been through this) if I am reliably predictable then it is impossible for me to do anything other than what a hypothetical reliable predictor predicts. That is what "reliably predictable" means. That is why not being reliably predictable is a necessary but not sufficient condition for free will. It's really not complicated.

Why make it part of the definition?

Because that is what the "free" part of "free will" means. If I am faced with a choice between A and B and a reliable predictor predicts I am going to choose A, then I cannot choose B (again, this is what "reliable predictor" means). If I cannot choose B then I am not free.

Comment author: gjm 29 February 2016 10:37:35PM -1 points [-]

the word "free" has an established meaning in English which is fundamentally incompatible with determinism.

I don't think that's at all clear, and the fact that a clear majority of philosophers are compatibilists indicates that a bunch of people who spend their lives thinking about this sort of thing also don't think it's impossible for "free" to mean something compatible with determinism.

Let's take a look at that definition of yours, and see what it says if my decisions are determined by the laws of physics. "Not under the control or in the power of another"? That's OK; the laws of physics, whatever they are, are not another agent. "Able to act or be done as one wishes"? That's OK too; of course in this scenario what I wish is also determined by the laws of physics, but the definition doesn't say anything about that.

(I wouldn't want to claim that the definition you selected is a perfect one, of course.)

And you think chess-playing machines do [sc. have processes going on in them that at all resemble human deliberation]?

Yup. Much much simpler, of course. Much more limited, much more abstract. But yes, a tree-search with an evaluation at the leaves does indeed resemble human deliberation somewhat. (Do I need to keep repeating in each comment that all I claim is that arguably chess-playing programs have a very little bit of free will?)

if your standard for free will is "having processing that resembles human deliberation"

Nope. But not having such processing seems like a good indication of not having free will, because whatever free will is it has to be something to do with making decisions, and nothing a pachinko machine or the weather does seems at all decision-like, and I think the absence of any process that looks at all like deliberation seems to me to be a large part of why. (Though I would be happy to reconsider in the face of something that behaves in ways that seem sufficiently similar to, e.g., apparently-free humans despite having very different internals.)

if I am reliably predictable then it is impossible for me to do anything other than what a hypothetical reliable predictor predicts

I have pointed out more than once that in this universe there is never prediction that reliable, and anything less reliable makes the word "impossible" inappropriate. For whatever reason, you've never seen fit even to acknowledge my having done so.

But let's set that aside. I shall restate your claim in a form I think better. "If you are reliably predictable, then it is impossible for your choice and the predictor's prediction not to match." Consider a different situation, where instead of being predicted your action is being remembered. If it's reliably rememberable, then it is impossible for your action and the rememberer's memory not to match -- but I take it you wouldn't dream of suggesting that that involves any constraint on your freedom.

So why should it be different in this case? One reason would be if the predictor, unlike the rememberer, were causing your decision. But that's not so; the prediction and your decision are alike consequences of earlier states of the world. So I guess the reason is because the successful prediction indicates the fact that your decision is a consequence of earlier states of the world. But in that case none of what you're saying is an argument for incompatibilism; it is just a restatement of incompatibilism.

It's really not complicated.

Please consider the possibility that other people besides yourself have thought about this stuff, are reasonably intelligent, and may disagree with you for reasons other than being too stupid to see what is obvious to you.

again, this is what "reliable predictor" means

No. It means you will not choose B, which is not necessarily the same as that you cannot choose B. And (I expect I have said this at least once already in this discussion) words like "cannot" and "impossible" have a wide variety of meanings and I see no compelling reason why the only one to use when contemplating "free will" is the particular one you have in mind.

Comment author: lisper 01 March 2016 06:13:49PM 1 point [-]

I don't think that's at all clear

How would you define it then?

a clear majority of philosophers

This would not be the first time in history that the philosophical community was wrong about something.

Do I need to keep repeating in each comment that all I claim is that arguably chess-playing programs have a very little bit of free will?

No, I get that. But "a very little bit" is still distinguishable from zero, yes?

nothing a pachinko machine or the weather does seems at all decision-like

Nothing about it seems human decision-like. But that's a prejudice because you happen to be human. See below...

I would be happy to reconsider in the face of something that behaves in ways that seem sufficiently similar to, e.g., apparently-free humans despite having very different internals.

I believe that intelligent aliens could exist (in fact, almost certainly do exist). I also believe that fully intelligent computers are possible, and might even be constructed in our lifetime. I believe that any philosophy worth adhering to ought to be IA-ready and AI-ready, that is, it should not fall apart in the face of intelligent aliens or artificial intelligence. (Aside: This is the reason I do not self-identify as a "humanist".)

Also, it is far from clear that chess computers work anything at all like humans. The hypothesis that humans make decisions by heuristic search has been pretty much disproven by >50 years of failed AI research.

I have pointed out more than once that in this universe there is never prediction that reliable, and anything less reliable makes the word "impossible" inappropriate. For whatever reason, you've never seen fit even to acknowledge my having done so.

I hereby acknowledge your having pointed this out. But it's irrelevant. All I require for my argument to hold is predictability in principle, not predictability in fact. That's why I always speak of a hypothetical rather than an actual predictor. In fact, my hypothetical predictor even has an oracle for the halting problem (which is almost certainly not realizable in this universe) because I don't believe that Turing machines exercise free will when "deciding" whether or not to halt.

it is just a restatement of incompatibilism.

That's possible. But just because incompatibilism is a tautology does not make it untrue.

I don't think it is a tautology, though. For a reliable predictor to exist, there must be something that causes both my action and the prediction, and that something must be accessible to the predictor before it is accessible to me (otherwise it's not a prediction). That doesn't feel like a tautology to me, but I'm not going to argue about it. Either way, it's true.

Please consider the possibility that other people besides yourself have thought about this stuff

Of course. As soon as someone presents a cogent argument I'm happy to consider it. I haven't heard one yet (despite having read this).

It means you will not choose B, which is not necessarily the same as that you cannot choose B.

That's really the crux of the matter I suppose. It reminds me of the school of thought on the problem of theodicy which says that God could eliminate evil from the world, but he chooses not to for some reason that is beyond our comprehension (but is nonetheless wise and good and loving). This argument has always struck me as a cop-out. If God's failure to use His super-powers for good is reliably predictable, then that to me is indistinguishable from God not having those super powers to begin with.

You can see the absurdity of it by observing that this same argument can be applied to anything, not just God. I can argue with equal validity that rocks can fly, they just choose not to. Or that I could, if I wanted to, mount an argument for my position that is so compelling that you would have no choice but to accept it, but I choose not to because I am benevolent and I don't want to shatter your illusion of free will.

I don't see any possible way to distinguish between "can not" and "with 100% certainty will not". If they can't be distinguished, they must be the same.

Comment author: gjm 01 March 2016 07:55:08PM -1 points [-]

How would you define it then?

I already pointed out that your own choice of definition doesn't have the property you claimed (being fundamentally incompatible with determinism). I think that suffices to make my point.

This would not be the first time in history that the philosophical community was wrong about something.

Very true. But if you are claiming that some philosophical proposition is (not merely true but) obvious and indeed true by definition, then firm disagreement by a majority of philosophers should give you pause.

You could still be right, of course. But I think you'd need to offer more and better justification than you have so far, to be at all convincing.

But "a very little bit" is still distinguishable from zero, yes?

Well, the actual distinguishing might be tricky, especially as all I've claimed is that arguably it's so. But: yes, I have suggested -- to be precise about my meaning -- that some reasonable definitions of "free will" may have the consequence that a chess-playing program has a teeny-tiny bit of free will, in something like the same way as John McCarthy famously suggested that a thermostat has (in a very aetiolated sense) beliefs.

Nothing about it seems human decision-like.

Nothing about it seems decision-like at all. My notion of what is and what isn't a decision is doubtless influenced by the fact that the most interesting decision-making agents I am familiar with are human, which is why an abstract resemblance to human decision-making is something I look for. I have only a limited and (I fear) unreliable idea of what other forms decision-making can take. As I said, I'll happily revise this in the light of new data.

I believe that any philosophy worth adhering to ought to be IA-ready and AI-ready

Me too; if you think that what I have said about decision-making isn't, then either I have communicated poorly or you have understood poorly or both. More precisely: my opinions about decision-making surely aren't altogether IA/AI-ready, for the rather boring reason that I don't know enough about what intelligent aliens or artificial intelligences might be like for my opinions to be well-adjusted for them. But I do my best, such as it is.

The hypothesis that humans make decisions by heuristic search has been pretty much disproven

First: No, it hasn't. The hypothesis that humans make all their decisions by heuristic search certainly seems pretty unlikely at this point, but so what? Second and more important: I was not claiming that humans make decisions by tree searching. (Though, as it happens, when playing chess we often do -- though our trees are quite different from the computers'.) I claim, rather, that humans make decisions by a process along the lines of: consider possible actions, envisage possible futures in each case, evaluate the likely outcomes, choose something that appears good. Which also happens to be an (extremely handwavy and abstract) description of what a chess-playing program does.
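That handwavy description -- consider possible actions, envisage the futures they lead to, evaluate the outcomes, choose something that appears good -- is small enough to write down over a toy game (my own sketch, not anything from the thread; the Nim-like rules are chosen only to keep it tiny):

```python
def best_move(sticks):
    # A minimal deliberation loop over a Nim-like game: players alternate
    # taking 1 or 2 sticks, and whoever takes the last stick wins.
    def value(n):
        # Value of the position for the player about to move:
        # +1 means a forced win, -1 a forced loss.
        if n == 0:
            return -1  # the opponent just took the last stick
        return max(-value(n - take) for take in (1, 2) if take <= n)

    # Consider each possible action, envisage the future it leads to,
    # evaluate that future, and pick the action that scores best.
    moves = [take for take in (1, 2) if take <= sticks]
    return max(moves, key=lambda take: -value(sticks - take))

print(best_move(5))  # 2: leaving a multiple of 3 puts the opponent in a lost position
```

A chess program replaces the exhaustive `value` recursion with a depth-limited search plus a heuristic evaluation at the leaves, but the abstract shape of the loop is the same.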

All I require for my argument to hold is predictability in principle, not predictability in fact.

Perfectly reliable prediction is not possible in principle in our universe. Not even with a halting oracle.

because I don't believe that Turing machines exercise free will when "deciding" whether or not to halt.

I think the fact that you never actually get to observe the event of "such-and-such a TM not halting" means you don't really need to worry about that. In any case, there seems to me something just a little improper about finagling your definition in this way to make it give the results you want: it's as if you chose a definition in some principled way, found it gave an answer you didn't like, and then looked for a hack to make it give a different answer.

just because incompatibilism is a tautology does not make it untrue.

Of course not. But what I said was neither (1) that incompatibilism is a tautology nor (2) that that makes it untrue. I said that (1) your argument was a tautology, which (2) makes it a bad argument.

As soon as someone presents a cogent argument I'm happy to consider it.

I think that may say more about your state of mind than about the available arguments. In any case, the fact that you don't find any counterarguments to your position cogent is not (to my mind) good reason for being rudely patronizing to others who are not convinced by what you say.

It reminds me of [...]

I regret to inform you that "argument X has been deployed in support of wrong conclusion Y" is not good reason to reject argument X -- unless the inference from X to Y is watertight, which in this case I hope you agree it is not.

I don't see any possible way to distinguish between "can not" and "with 100% certainty will not".

This troubles me not a bit, because you can never say "with 100% certainty will not" about anything with any empirical content. Not even if you happen to be a perfect reasoner and possess a halting oracle.

And at degrees of certainty less than 100%, it seems to me that "almost certainly will not" and "very nearly cannot" are quite different concepts and are not so very difficult to disentangle, at least in some cases. Write down ten common English boys' names. Invite me to choose names for ten boys. Can I choose the names you wrote down? Of course. Will I? Almost certainly not. If the notion of possibility you're working with leads you to a different conclusion, so much the worse for that notion of possibility.

Comment author: lisper 01 March 2016 11:17:30PM 0 points [-]

rudely patronizing

Sorry, it is not my intention to be either rude or patronizing. But there are some aspects of this discussion that I find rather frustrating, and I'm sorry if that frustration occasionally manifests itself as rudeness.

you can never say "with 100% certainty will not" about anything with any empirical content

Of course I can: with 100% certainty, no one will exhibit a working perpetual motion machine today. With 100% certainty, no one will exhibit superluminal communication today. With 100% certainty, the sun will not rise in the west tomorrow. With 100% certainty, I will not be the president of the United States tomorrow.

Nothing about [a pachinko machine] seems decision-like at all.

a thermostat has (in a very aetiolated sense) beliefs.

Do you believe that a thermostat makes decisions? Do you believe that a thermostat has (a little bit of) free will?

Perfectly reliable prediction is not possible in principle in our universe. Not even with a halting oracle.

I presume you mean "perfectly reliable prediction of everything is not possible in principle." Because perfectly reliable prediction of some things (in principle) is clearly possible. And perfectly reliable prediction of some things (in principle) with a halting oracle is possible by definition.

Comment author: ChristianKl 29 February 2016 10:05:28PM *  -1 points [-]

But the word "free" has an established meaning in English which is fundamentally incompatible with determinism.

The dictionary disagrees. Free has many different meanings.

If my actions are determined by physics or by God, I am not free.

What ontological category does physics have in your view of the world?

Comment author: lisper 01 March 2016 05:04:25PM 1 point [-]

Free has many different meanings.

Are you seriously arguing that "free" in "free will" might mean the same thing as (say) "free" in "free beer"? Come on.

What ontological category does physics have in your view of the world?

That's a very good question, and it depends (ironically) on which of two possible definitions of physics you're referring to. If you mean physics-the-scientific-enterprise (let's call that physics1) then it exists in the ontological category of human activity (along with things like "commerce"). If you mean the underlying processes which are the object of study in physics1 (let's call that physics2) then I'd put those in the ontological category of objective reality.

Note that ontological categories are not mutually exclusive. Existence is a vector space. Physics1 is also part of objective reality, because it is an emergent property of physics2.

Comment author: ChristianKl 01 March 2016 05:27:10PM 0 points [-]

You can see free will under sense 1d: "enjoying personal freedom : not subject to the control or domination of another." There is no other person who controls your actions.

The next definition is 2a: "not determined by anything beyond its own nature or being : choosing or capable of choosing for itself."

I think you can make a good case that the way someone's neurons work is part of their own nature or being.

Your ontological model, in which an entity called physics2 causes neurons to do something that is not in their nature or being, is problematic.

Comment author: lisper 01 March 2016 06:57:56PM 1 point [-]

I think this is a difference in the definition of the word "I", which can reasonably be taken to mean at least three different things:

  1. The totality of my brain and body and all of the processes that go on there. On this definition, "I have lungs" is a true statement.

  2. My brain and all of the computational processes that go on there (but not the biological processes). On this definition, "I have lungs" is a false statement, but "I control my breathing" is a true statement.

  3. That subset of the computational processes going on in my brain that we call "conscious." On this view, the statement, "I control my breathing" is partially true. You can decide to stop breathing for a while, but there are hard limits on how long you can keep it up.

To me, the question of whether I have free will is only interesting on definition #3 because my conscious self is the part of me that cares about such things. If my conscious self is being coerced or conned, then I (#3) don't really care whether the origin of that coercion is internal (part of my sub-conscious or my physiology) or external.

Comment author: ChristianKl 29 February 2016 08:32:45AM *  0 points [-]

(That's actually a pretty good analogy. The force of gravity is a reasonable approximation to the truth for nearly all everyday purposes even though conceptually it is completely wrong. Likewise with free will.)

The common understanding of free will does run into a lot of problems when it comes to issues such as habit change.

There are people debating whether or not hypnosis can get people to do something against their free will, which happens to be a pretty bad question. Questions such as "can people decide by free will not to have an allergic reaction?" are misleading.