gjm comments on Is Spirituality Irrational? - Less Wrong

5 Post author: lisper 09 February 2016 01:42AM

Comment author: gjm 28 February 2016 10:34:04PM -1 points [-]

Of course it's a useful notion.

You're arguing that no one has it and AIUI that nothing in the universe ever could have it. Doesn't seem that useful to me.

Chess is a mathematical abstraction that is the same for all observers.

I did consider substituting something like cricket or baseball for that reason. But I think the idea that free will is viewpoint-dependent depends heavily on what notion of free will you're working with. I'm still not sure what yours actually is, but mine doesn't have that property, or at any rate doesn't have it to so great an extent as yours seems to.

Comment author: lisper 29 February 2016 03:01:07AM 1 point [-]

Free will is a useful notion because we have the perception of having it, and so it's useful to be able to talk about whatever it is that we perceive ourselves to have even though we don't really have it. It's useful in the same way that it's useful to talk about, say, "the force of gravity" even though in reality there is no such thing. (That's actually a pretty good analogy. The force of gravity is a reasonable approximation to the truth for nearly all everyday purposes even though conceptually it is completely wrong. Likewise with free will.)

You said that a chess-playing computer has (some) free will. I disagree (obviously because I don't think anything has free will). Do you think Pachinko machines have free will? Do they "decide" which way to go when they hit a pin? Does the atmosphere have free will? Does it decide where tornadoes appear?

When I say "real free will" I mean this:

  1. Decisions are made by my conscious self. This rules out pachinko machines, the atmosphere, and chess-playing computers having free will.

  2. Before I make a decision, it must be actually possible for me to choose more than one alternative. Ergo, if I am reliably predictable, I cannot have free will because if I am reliably predictable then it is not possible for me to choose more than one alternative. I can only choose the alternative that a hypothetical predictor would reliably predict.

I don't know how to make it any clearer than that.

Comment author: gjm 29 February 2016 01:55:42PM 1 point [-]

it's useful to be able to talk about whatever it is that we perceive ourselves to have even though we don't really have it.

I think it's more helpful to talk about whatever we have that we're trying to talk about, even if some of what we say about it isn't quite right, which is why I prefer notions of free will that don't become necessarily wrong if the universe is deterministic or there's an omnipotent god or whatever.

I agree that gravity makes a useful analogy. Gravity behaves in a sufficiently force-like way (at least in regions of weakish spacetime curvature, like everywhere any human being could possibly survive) that I think for most purposes it is much better to say "there is, more or less, a force of gravity, but note that in some situations we'll need to talk about it differently" than "there is no force of gravity". And I would say the same about "free will".

Do you think Pachinko machines have free will?

I don't know much about Pachinko machines, but I don't think they have any processes going on in them that at all resemble human deliberation, in which case I would not want to describe them as having free will even to the (very attenuated) extent that a chess program might have.

Does the atmosphere have free will?

Again, I don't think there are any sort of deliberative processes going on there, so no free will.

I mean this: [...] Decisions are made by my conscious self.

So there are two parts to this, and I'm not sure to what extent you actually intend them both. Part 1: decisions are made by conscious agents. Part 2: decisions are made, more specifically, by those agents' conscious "parts" (of course this terminology doesn't imply an actual physical division).

it must be actually possible for me to choose more than one alternative

Of course "actually possible" is pretty problematic language; what counts as possible? If I'm understanding you right, you'd cash it out roughly as follows: look at the probability distribution of possible outcomes in advance of the decision; then freedom = entropy of that probability distribution (or something of the kind).
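To make that reading concrete, here is a minimal sketch of "freedom as entropy of the outcome distribution". The `entropy` function and the example distributions are illustrative assumptions on my part, not anything either commenter actually specifies:

```python
import math

def entropy(dist):
    """Shannon entropy (in bits) of a probability distribution over outcomes."""
    return sum(-p * math.log2(p) for p in dist.values() if p > 0)

# A fully determined "decision": one outcome with probability 1.
determined = {"A": 1.0}
# A genuinely open decision between two alternatives.
open_choice = {"A": 0.5, "B": 0.5}

print(entropy(determined))   # 0.0 -- zero "freedom" on this measure
print(entropy(open_choice))  # 1.0 -- one bit of "freedom"
```

On this measure, a perfectly predictable agent scores zero by construction, which is exactly the point of contention in the thread.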

So then freedom depends on what probability distribution you take, and you take the One True Measure of freedom to be what you get for an observer who knows everything about the universe immediately before the decision is made (more precisely, everything in the past light-cone of the decision); if the universe is deterministic then that's enough to determine the answer after the decision is made too, so no decisions are free.

One obvious problem with this is that our actual universe is not deterministic in the relevant sense. We can make a device based on radioactive decay or something for which knowledge of all that can be known in advance of its operation is not sufficient to tell you what it will output. For all we know, some or all of our decisions are actually affected enough by "amplified" quantum effects that they can't be reliably predicted even by an observer with access to everything in their past light-cone.

It might be worse. Perhaps some of our decisions are so affected and some not. If so, there's no reason (that I can see) to expect any connection between "degree of influence from quantum randomness" and any of the characteristics we generally think of as distinguishing free from not-so-free -- practical predictability by non-omniscient observers, the perception of freeness that you mentioned before, external constraints, etc.

It doesn't seem to me that predictability by a hypothetical "past-omniscient" observer has much connection with what in other contexts we call free will. Why make it part of the definition?

Comment author: lisper 29 February 2016 07:29:02PM 1 point [-]

I prefer notions of free will that don't become necessarily wrong if the universe is deterministic or there's an omnipotent god or whatever.

That's like saying, "I prefer triangles with four sides." You are, of course, free to prefer whatever you want and to use words however you want. But the word "free" has an established meaning in English which is fundamentally incompatible with determinism. Free means, "not under the control or in the power of another; able to act or be done as one wishes." If my actions are determined by physics or by God, I am not free.

I don't think they have any processes going on in them that at all resemble human deliberation

And you think chess-playing machines do?

BTW, if your standard for free will is "having processing that resembles human deliberation" then you've simply defined free will as "something that humans have" in which case the question of whether or not humans have free will becomes very uninteresting because the answer is tautologically "yes".

So there are two parts to this

I'd call them two "interpretations" rather than two "parts". But I intended the latter: to qualify as free will on my view, decisions have to be made by the conscious part of a conscious agent. If I am conscious but I base my decision on a coin flip, that's not free will.

"actually possible" is pretty problematic language; what counts as possible?

Whatever is not impossible. In this case (and we've been through this) if I am reliably predictable then it is impossible for me to do anything other than what a hypothetical reliable predictor predicts. That is what "reliably predictable" means. That is why not being reliably predictable is a necessary but not sufficient condition for free will. It's really not complicated.

Why make it part of the definition?

Because that is what the "free" part of "free will" means. If I am faced with a choice between A and B and a reliable predictor predicts I am going to choose A, then I cannot choose B (again, this is what "reliably predictor" means). If I cannot choose B then I am not free.

Comment author: gjm 29 February 2016 10:37:35PM -1 points [-]

the word "free" has an established meaning in English which is fundamentally incompatible with determinism.

I don't think that's at all clear, and the fact that a clear majority of philosophers are compatibilists indicates that a bunch of people who spend their lives thinking about this sort of thing also don't think it's impossible for "free" to mean something compatible with determinism.

Let's take a look at that definition of yours, and see what it says if my decisions are determined by the laws of physics. "Not under the control or in the power of another"? That's OK; the laws of physics, whatever they are, are not another agent. "Able to act or be done as one wishes"? That's OK too; of course in this scenario what I wish is also determined by the laws of physics, but the definition doesn't say anything about that.

(I wouldn't want to claim that the definition you selected is a perfect one, of course.)

And you think chess-playing machines do [sc. have processes going on in them that at all resemble human deliberation]?

Yup. Much much simpler, of course. Much more limited, much more abstract. But yes, a tree-search with an evaluation at the leaves does indeed resemble human deliberation somewhat. (Do I need to keep repeating in each comment that all I claim is that arguably chess-playing programs have a very little bit of free will?)

if your standard for free will is "having processing that resembles human deliberation"

Nope. But not having such processing seems like a good indication of not having free will, because whatever free will is it has to be something to do with making decisions, and nothing a pachinko machine or the weather does seems at all decision-like, and I think the absence of any process that looks at all like deliberation seems to me to be a large part of why. (Though I would be happy to reconsider in the face of something that behaves in ways that seem sufficiently similar to, e.g., apparently-free humans despite having very different internals.)

if I am reliably predictable then it is impossible for me to do anything other than what a hypothetical reliable predictor predicts

I have pointed out more than once that in this universe there is never prediction that reliable, and anything less reliable makes the word "impossible" inappropriate. For whatever reason, you've never seen fit even to acknowledge my having done so.

But let's set that aside. I shall restate your claim in a form I think better. "If you are reliably predictable, then it is impossible for your choice and the predictor's prediction not to match." Consider a different situation, where instead of being predicted your action is being remembered. If it's reliably rememberable, then it is impossible for your action and the rememberer's memory not to match -- but I take it you wouldn't dream of suggesting that that involves any constraint on your freedom.

So why should it be different in this case? One reason would be if the predictor, unlike the rememberer, were causing your decision. But that's not so; the prediction and your decision are alike consequences of earlier states of the world. So I guess the reason is because the successful prediction indicates the fact that your decision is a consequence of earlier states of the world. But in that case none of what you're saying is an argument for incompatibilism; it is just a restatement of incompatibilism.

It's really not complicated.

Please consider the possibility that other people besides yourself have thought about this stuff, are reasonably intelligent, and may disagree with you for reasons other than being too stupid to see what is obvious to you.

again, this is what "reliable predictor" means

No. It means you will not choose B, which is not necessarily the same as that you cannot choose B. And (I expect I have said this at least once already in this discussion) words like "cannot" and "impossible" have a wide variety of meanings and I see no compelling reason why the only one to use when contemplating "free will" is the particular one you have in mind.

Comment author: lisper 01 March 2016 06:13:49PM 1 point [-]

I don't think that's at all clear

How would you define it then?

a clear majority of philosophers

This would not be the first time in history that the philosophical community was wrong about something.

Do I need to keep repeating in each comment that all I claim is that arguably chess-playing programs have a very little bit of free will?

No, I get that. But "a very little bit" is still distinguishable from zero, yes?

nothing a pachinko machine or the weather does seems at all decision-like

Nothing about it seems human decision-like. But that's a prejudice because you happen to be human. See below...

I would be happy to reconsider in the face of something that behaves in ways that seem sufficiently similar to, e.g., apparently-free humans despite having very different internals.

I believe that intelligent aliens could exist (in fact, almost certainly do exist). I also believe that fully intelligent computers are possible, and might even be constructed in our lifetime. I believe that any philosophy worth adhering to ought to be IA-ready and AI-ready, that is, it should not fall apart in the face of intelligent aliens or artificial intelligence. (Aside: This is the reason I do not self-identify as a "humanist".)

Also, it is far from clear that chess computers work anything at all like humans. The hypothesis that humans make decisions by heuristic search has been pretty much disproven by >50 years of failed AI research.

I have pointed out more than once that in this universe there is never prediction that reliable, and anything less reliable makes the word "impossible" inappropriate. For whatever reason, you've never seen fit even to acknowledge my having done so.

I hereby acknowledge your having pointed this out. But it's irrelevant. All I require for my argument to hold is predictability in principle, not predictability in fact. That's why I always speak of a hypothetical rather than an actual predictor. In fact, my hypothetical predictor even has an oracle for the halting problem (which is almost certainly not realizable in this universe) because I don't believe that Turing machines exercise free will when "deciding" whether or not to halt.

it is just a restatement of incompatibilism.

That's possible. But just because incompatibilism is a tautology does not make it untrue.

I don't think it is a tautology. The state of affairs required for a reliable predictor to exist would be that there is something that causes both my action and the prediction, and that whatever this is is accessible to the predictor before it is accessible to me (otherwise it's not a prediction). That doesn't feel like a tautology to me, but I'm not going to argue about it. Either way, it's true.

Please consider the possibility that other people besides yourself have thought about this stuff

Of course. As soon as someone presents a cogent argument I'm happy to consider it. I haven't heard one yet (despite having read this).

It means you will not choose B, which is not necessarily the same as that you cannot choose B.

That's really the crux of the matter I suppose. It reminds me of the school of thought on the problem of theodicy which says that God could eliminate evil from the world, but he chooses not to for some reason that is beyond our comprehension (but is nonetheless wise and good and loving). This argument has always struck me as a cop-out. If God's failure to use His super-powers for good is reliably predictable, then that to me is indistinguishable from God not having those super powers to begin with.

You can see the absurdity of it by observing that this same argument can be applied to anything, not just God. I can argue with equal validity that rocks can fly, they just choose not to. Or that I could, if I wanted to, mount an argument for my position that is so compelling that you would have no choice but to accept it, but I choose not to because I am benevolent and I don't want to shatter your illusion of free will.

I don't see any possible way to distinguish between "can not" and "with 100% certainty will not". If they can't be distinguished, they must be the same.

Comment author: gjm 01 March 2016 07:55:08PM -1 points [-]

How would you define it then?

I already pointed out that your own choice of definition doesn't have the property you claimed (being fundamentally incompatible with determinism). I think that suffices to make my point.

This would not be the first time in history that the philosophical community was wrong about something.

Very true. But if you are claiming that some philosophical proposition is (not merely true but) obvious and indeed true by definition, then firm disagreement by a majority of philosophers should give you pause.

You could still be right, of course. But I think you'd need to offer more and better justification than you have so far, to be at all convincing.

But "a very little bit" is still distinguishable from zero, yes?

Well, the actual distinguishing might be tricky, especially as all I've claimed is that arguably it's so. But: yes, I have suggested -- to be precise about my meaning -- that some reasonable definitions of "free will" may have the consequence that a chess-playing program has a teeny-tiny bit of free will, in something like the same way as John McCarthy famously suggested that a thermostat has (in a very aetiolated sense) beliefs.

Nothing about it seems human decision-like.

Nothing about it seems decision-like at all. My notion of what is and what isn't a decision is doubtless influenced by the fact that the most interesting decision-making agents I am familiar with are human, which is why an abstract resemblance to human decision-making is something I look for. I have only a limited and (I fear) unreliable idea of what other forms decision-making can take. As I said, I'll happily revise this in the light of new data.

I believe that any philosophy worth adhering to ought to be IA-ready and AI-ready

Me too; if you think that what I have said about decision-making isn't, then either I have communicated poorly or you have understood poorly or both. More precisely: my opinions about decision-making surely aren't altogether IA/AI-ready, for the rather boring reason that I don't know enough about what intelligent aliens or artificial intelligences might be like for my opinions to be well-adjusted for them. But I do my best, such as it is.

The hypothesis that humans make decisions by heuristic search has been pretty much disproven

First: No, it hasn't. The hypothesis that humans make all their decisions by heuristic search certainly seems pretty unlikely at this point, but so what? Second and more important: I was not claiming that humans make decisions by tree searching. (Though, as it happens, when playing chess we often do -- though our trees are quite different from the computers'.) I claim, rather, that humans make decisions by a process along the lines of: consider possible actions, envisage possible futures in each case, evaluate the likely outcomes, choose something that appears good. Which also happens to be an (extremely handwavy and abstract) description of what a chess-playing program does.
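The "consider possible actions, envisage futures, evaluate, choose" loop gjm describes is, in the abstract, a minimax tree search with evaluation at the leaves. A minimal sketch follows; the toy game and the names (`minimax`, `moves`) are invented for illustration and are not any real chess engine:

```python
def minimax(state, depth, maximizing, moves, evaluate):
    """Consider possible actions, envisage the futures they lead to,
    evaluate the outcomes, and choose the one that appears best."""
    options = moves(state)
    if depth == 0 or not options:
        return evaluate(state), None
    best_move = None
    if maximizing:
        best = float("-inf")
        for move, nxt in options:
            score, _ = minimax(nxt, depth - 1, False, moves, evaluate)
            if score > best:
                best, best_move = score, move
    else:
        best = float("inf")
        for move, nxt in options:
            score, _ = minimax(nxt, depth - 1, True, moves, evaluate)
            if score < best:
                best, best_move = score, move
    return best, best_move

# Toy "game": the state is a number; each move adds 1 or 3.
# The maximizer moves first, then an adversary minimizes.
moves = lambda s: [("add1", s + 1), ("add3", s + 3)] if s < 10 else []
score, move = minimax(0, 2, True, moves, lambda s: s)
print(score, move)  # 4 add3
```

Whether this counts as even a little bit of "deliberation" is, of course, precisely what the two commenters disagree about.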

All I require for my argument to hold is predictability in principle, not predictability in fact.

Perfectly reliable prediction is not possible in principle in our universe. Not even with a halting oracle.

because I don't believe that Turing machines exercise free will when "deciding" whether or not to halt.

I think the fact that you never actually get to observe the event of "such-and-such a TM not halting" means you don't really need to worry about that. In any case, there seems to me something just a little improper about finagling your definition in this way to make it give the results you want: it's as if you chose a definition in some principled way, found it gave an answer you didn't like, and then looked for a hack to make it give a different answer.

just because incompatibilism is a tautology does not make it untrue.

Of course not. But what I said was neither (1) that incompatibilism is a tautology nor (2) that that makes it untrue. I said that (1) your argument was a tautology, which (2) makes it a bad argument.

As soon as someone presents a cogent argument I'm happy to consider it.

I think that may say more about your state of mind than about the available arguments. In any case, the fact that you don't find any counterarguments to your position cogent is not (to my mind) good reason for being rudely patronizing to others who are not convinced by what you say.

It reminds me of [...]

I regret to inform you that "argument X has been deployed in support of wrong conclusion Y" is not good reason to reject argument X -- unless the inference from X to Y is watertight, which in this case I hope you agree it is not.

I don't see any possible way to distinguish between "can not" and "with 100% certainty will not".

This troubles me not a bit, because you can never say "with 100% certainty will not" about anything with any empirical content. Not even if you happen to be a perfect reasoner and possess a halting oracle.

And at degrees of certainty less than 100%, it seems to me that "almost certainly will not" and "very nearly cannot" are quite different concepts and are not so very difficult to disentangle, at least in some cases. Write down ten common English boys' names. Invite me to choose names for ten boys. Can I choose the names you wrote down? Of course. Will I? Almost certainly not. If the notion of possibility you're working with leads you to a different conclusion, so much the worse for that notion of possibility.

Comment author: lisper 01 March 2016 11:17:30PM 0 points [-]

rudely patronizing

Sorry, it is not my intention to be either rude or patronizing. But there are some aspects of this discussion that I find rather frustrating, and I'm sorry if that frustration occasionally manifests itself as rudeness.

you can never say "with 100% certainty will not" about anything with any empirical content

Of course I can: with 100% certainty, no one will exhibit a working perpetual motion machine today. With 100% certainty, no one will exhibit superluminal communication today. With 100% certainty, the sun will not rise in the west tomorrow. With 100% certainty, I will not be the president of the United States tomorrow.

Nothing about [a pachinko machine] seems decision-like at all.

a thermostat has (in a very aetiolated sense) beliefs.

Do you believe that a thermostat makes decisions? Do you believe that a thermostat has (a little bit of) free will?

Perfectly reliable prediction is not possible in principle in our universe. Not even with a halting oracle.

I presume you mean "perfectly reliable prediction of everything is not possible in principle." Because perfectly reliable prediction of some things (in principle) is clearly possible. And perfectly reliable prediction of some things (in principle) with a halting oracle is possible by definition.

Comment author: gjm 02 March 2016 09:53:42AM *  -1 points [-]

with 100% certainty, no one will exhibit a working perpetual motion machine today

100%? Really? Not just "close to 100%, so let's round it up" but actual complete certainty?

I too am a believer in the Second Law of Thermodynamics, but I don't see on what grounds anyone can be 100% certain that the SLoT is universally correct. I say this mostly on general principles -- we could just have got the physics wrong. More specifically, there are a few entropy-related holes in our current understanding of the world -- e.g., so far as I know no one currently has a good answer to "why is the entropy so low at the big bang?" nor to "is information lost when things fall into black holes?" -- so just how confidently would you bet that figuring out all the details of quantum gravity and of the big bang won't reveal any loopholes?

Now, of course there's a difference between "the SLoT has loopholes" and "someone will reveal a way to exploit those loopholes tomorrow". The most likely possible-so-far-as-I-know worlds in which perpetual motion machines are possible are ones in which we discover the fact (if at all) after decades of painstaking theorizing and experiment, and in which actual construction of a perpetual motion machine depends on somehow getting hold of a black hole of manageable size and doing intricate things with it. But literally zero probability that some crazy genius has done it in his basement and is now ready to show it off? Nope. Very small indeed, but not literally zero.

the sun will not rise in the west. [...] I will not be the president of the United States

Again, not zero. Very very very tiny, but not zero.

Do you believe that a thermostat makes decisions?

It does something a tiny bit like making decisions. (There is a certain class of states of affairs it systematically tries to bring about.) However, there's nothing in what it does that looks at all like a deliberative process, so I wouldn't say it has free will even to the tiny extent that maybe a chess-playing computer does.
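By way of contrast with the chess program's tree search, the entire "deliberation" of a simple thermostat fits in a few lines. This is a hypothetical bang-bang controller sketched for illustration; the names and thresholds are assumptions, not a description of any actual device:

```python
def thermostat(current_temp, setpoint, hysteresis=0.5):
    """The whole behavior of a simple thermostat: there is one class of
    states of affairs (temperature near the setpoint) it tries to bring
    about, with no search over alternatives and nothing like deliberation."""
    if current_temp < setpoint - hysteresis:
        return "heat_on"
    if current_temp > setpoint + hysteresis:
        return "heat_off"
    return "no_change"

print(thermostat(18.0, 20.0))  # heat_on
print(thermostat(22.0, 20.0))  # heat_off
```

The absence of any branching consideration of alternatives here is what gjm's distinction between the thermostat and the chess program turns on.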

For the avoidance of doubt: The level of decision-making, free will, intelligence, belief-having, etc., that these simple (or in the case of the chess program not so very simple) devices exhibit is so tiny that for most purposes it is much simpler and more helpful simply to say: No, these devices are not intelligent, do not have beliefs, etc. Much as for most purposes it is much simpler and more helpful to say: No, the coins in your pocket are not held away from the earth by the gravitational pull of your body. Or, for that matter: No, there is no chance that I will be president of the United States tomorrow.

Where are you heading with these questions? I mean, are you expecting them to help achieve mutual understanding, or are you playing to the gallery and trying to get me to say things that sound silly? (If so, I think you may have misjudged the gallery.)

perfectly reliable prediction of some things (in principle) is clearly possible.

Empirical things? Do you not, in fact, believe in quantum mechanics? Or do you think "in half the branches, by measure, X will happen, and in the other half Y will happen" counts as a perfectly reliable prediction of whether X or Y will happen?

is possible by definition.

Only perfectly non-empirical things. Sure, you can "predict" that a given Turing machine will halt. But you might as well say that (even without a halting oracle) you can "predict" that 3x4=12. As soon as that turns into "this actual multiplying device, right here, will get 12 when it tries to multiply 3 by 4", you're in the realm of empirical things, and all kinds of weird things happen with nonzero probability. You build your Turing machine but it malfunctions and enters an infinite loop. (And then terminates later when the sun enters its red giant phase and obliterates it. Well done, I guess, but then your prediction that that other Turing machine would never terminate isn't looking so good.) You build your multiplication machine and a cosmic ray changes the answer from 12 to 14. You arrange pebbles in a 3x4 grid, but immediately before you count the resulting pebbles all the elementary particles in one of the pebbles just happen to turn up somewhere entirely different, as permitted (albeit with staggeringly small probability) by fundamental physics.

[EDITED to fix formatting screwage; silly me, using an asterisk to denote multiplication.]

Comment author: lisper 02 March 2016 06:25:43PM 0 points [-]

Where are you heading with these questions? I mean, are you expecting them to help achieve mutual understanding,

I'm not sure what I "expect" but yes, I am trying to achieve mutual understanding. I think we have a fundamental disconnect in our intuitions of what "free will" means and I'm trying to get a handle on what it is. If you think that a thermostat has even a little bit of free will, then we'll just have to agree to disagree. If you think even a Nest thermostat, which does some fairly complicated processing before "deciding" whether or not to turn on the heat, has even a little bit of free will, then we'll just have to agree to disagree. If you think that an industrial control computer, or an airplane autopilot, which do some very complicated processing before "deciding" what to do, have even a little bit of free will, then we'll have to agree to disagree. Likewise for weather systems, pachinko machines, geiger counters, and computers searching for a counterexample to the Collatz conjecture. If you think any of these things has even a little bit of free will then we will simply have to agree to disagree.

Comment author: Jiro 02 March 2016 03:45:55PM 0 points [-]

100%? Really? Not just "close to 100%, so let's round it up" but actual complete certainty?

Most people, by "100% certainty", mean "certain enough that for all practical purposes it can be treated as 100%". Not treating the statement as meaning that is just Internet literalness of the type that makes people say everyone on the Internet has Aspergers.

Comment author: ChristianKl 29 February 2016 10:05:28PM *  -1 points [-]

But the word "free" has an established meaning in English which is fundamentally incompatible with determinism.

The dictionary disagrees. Free has many different meanings.

If my actions are determined by physics or by God, I am not free.

What ontological category does physics have in your view of the world?

Comment author: lisper 01 March 2016 05:04:25PM 1 point [-]

Free has many different meanings.

Are you seriously arguing that "free" in "free will" might mean the same thing as (say) "free" in "free beer"? Come on.

What ontological category does physics have in your view of the world?

That's a very good question, and it depends (ironically) on which of two possible definitions of physics you're referring to. If you mean physics-the-scientific-enterprise (let's call that physics1) then it exists in the ontological category of human activity (along with things like "commerce"). If you mean the underlying processes which are the object of study in physics1 (let's call that physics2) then I'd put those in the ontological category of objective reality.

Note that ontological categories are not mutually exclusive. Existence is a vector space. Physics1 is also part of objective reality, because it is an emergent property of physics2.

Comment author: ChristianKl 01 March 2016 05:27:10PM 0 points [-]

You can see free will as 1 d : enjoying personal freedom : not subject to the control or domination of another. There is no other person who controls your actions.

The next definition is: 2 a : not determined by anything beyond its own nature or being : choosing or capable of choosing for itself

I think you can make a good case that the way someone's neurons work is part of their own nature or being.

Your ontological model, in which there's an entity called physics_2 that causes neurons to do something that is not in their nature or being, is problematic.

Comment author: lisper 01 March 2016 06:57:56PM 1 point [-]

I think this is a difference in the definition of the word "I", which can reasonably be taken to mean at least three different things:

  1. The totality of my brain and body and all of the processes that go on there. On this definition, "I have lungs" is a true statement.

  2. My brain and all of the computational processes that go on there (but not the biological processes). On this definition, "I have lungs" is a false statement, but "I control my breathing" is a true statement.

  3. That subset of the computational processes going on in my brain that we call "conscious." On this view, the statement, "I control my breathing" is partially true. You can decide to stop breathing for a while, but there are hard limits on how long you can keep it up.

To me, the question of whether I have free will is only interesting on definition #3 because my conscious self is the part of me that cares about such things. If my conscious self is being coerced or conned, then I (#3) don't really care whether the origin of that coercion is internal (part of my sub-conscious or my physiology) or external.

Comment author: ChristianKl 01 March 2016 07:45:49PM *  0 points [-]

Basically, after previously arguing that there is only one reasonable definition of free will, you have now moved to the position that there are multiple reasonable definitions and you have particular reasons to prefer to focus on a specific one?

Is that a reasonable description of your position?

Comment author: lisper 01 March 2016 10:41:08PM 0 points [-]

No, not even remotely close. We seem to have a serious disconnect here.

For starters, I don't think I ever gave a definition of "free will". I have listed what I feel to be (two) necessary conditions for it, but I don't think I ever gave sufficient conditions, which would be necessary for a definition. I'm not sure I even know what sufficient conditions would be. (But I think those necessary conditions, plus the known laws of physics, are enough to show that humans don't have free will, so I think my position is sound even in the absence of a definition.) And I did opine at one point that there is only one reasonable interpretation of the word "free" in a context of a discussion of "free will." But that is not at all the same thing as arguing that there is only one reasonable definition of "free will." Also, the question of what "I" means is different from the question of what "free will" means. But both are (obviously) relevant to the question of whether or not I have free will.

The reason I brought up the definition of "I" is because you wrote:

Your ontological model, that there's an entity called physics_2 that causes neurons to do something that is not in their nature or being, is problematic.

That is not my position. (And ontology is a bit of a red herring here.) I can't even imagine what it would mean for a neuron to "do something that is not in its nature or being," let alone for that departure from "nature or being" to be caused by physics. That's just bizarre. What did I say that made you think I believed this?

I can't define "free will" just like I can't define "pornography." But I have an intuition about free will (just like I have one about porn) that tells me that, whatever it is, it is not something that is possessed by pachinko machines, individual photons, weather systems, or a Turing machine doing a straightforward search for a counter-example to the Collatz conjecture. I also believe that "will not with 100% reliability" is logically equivalent to "can not" in that there is no way to distinguish these two situations. If you wish to dispute this, you will have to explain to me how I can determine whether the reason that the moon doesn't leave earth orbit is because it can't or because it chooses not to.
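The Collatz search mentioned above is a useful contrast case because it is a fully deterministic computation: every step the machine takes is fixed by its input. A minimal Python sketch of such a search (bounded to a finite range and step count, since no counterexample is actually known) might look like this:

```python
def reaches_one(n, max_steps=10_000):
    """Follow the Collatz map from n; return True if it reaches 1 within max_steps."""
    for _ in range(max_steps):
        if n == 1:
            return True
        # The next value is completely determined by the current one:
        # halve if even, otherwise apply 3n + 1.
        n = 3 * n + 1 if n % 2 else n // 2
    return False

# A bounded "search for a counterexample." Every branch the program
# takes is fixed by its input, so nothing here resembles a choice.
counterexamples = [n for n in range(1, 10_000) if not reaches_one(n)]
print(counterexamples)  # → [] (all tested numbers reach 1)
```

The bound of 10,000 starting values and 10,000 steps is an arbitrary illustration, not part of any claim about the conjecture itself; the point is only that the machine's behavior follows from its rules with no room for anything will-like.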

Comment author: ChristianKl 29 February 2016 08:32:45AM *  0 points [-]

(That's actually a pretty good analogy. The force of gravity is a reasonable approximation to the truth for nearly all everyday purposes even though conceptually it is completely wrong. Likewise with free will.)

The common understanding of free will does run into a lot of problems when it comes to issues such as habit change.

There are people debating whether or not hypnosis can get people to do something against their free will, which happens to be a pretty bad question. Questions such as "Can people decide by free will not to have an allergic reaction?" are similarly misleading.