Comment author: g_pepper 03 March 2016 10:21:57PM 0 points [-]

I'm willing to be convinced that this free will thing is real, but as with any extraordinary claim the burden is on you to prove that it is, not on me to prove that it is not.

Pretty much everyone perceives himself/herself freely making choices, so the claim that free will is real is consistent with most people's direct experience. While this does not prove that free will is real, it does suggest that the claim that free will is real is not really any more extraordinary than the claim that it is not real. So, I do not think that the person claiming that free will is real has any greater burden of proof than the person who claims that it is not.

Comment author: lisper 04 March 2016 12:39:49AM *  0 points [-]

That's not a valid argument for at least four reasons:

  1. There are many perceptual illusions, so the hypothesis that free will is an illusion is not a priori an extraordinary claim. (In fact, the feeling that you are living in a classical Galilean universe is a perceptual illusion!)

  2. There is evidence that free will is in fact a perceptual illusion.

  3. It makes evolutionary sense that the genes that built our brains would want to limit the extent to which they could become self-aware. If you knew that your strings were being pulled you might sink into existential despair, which is not generally salubrious to reproductive fitness.

  4. We now understand quite a bit about how the brain works and about how computers work, and all the evidence indicates that the brain is a computer. More precisely, there is nothing a brain can do that a properly programmed Turing machine could not do, and therefore no property that a brain has that cannot be given to a Turing machine. Some Turing machines definitely do not have free will (if you believe that a thermostat has free will, well, we're just going to have to agree to disagree about that). So if free will is a real thing you should be able to exhibit some way to distinguish those Turing machines that have free will from those that do not. I have heard no one propose such a criterion that doesn't lead to conclusions that grate irredeemably upon my intuitions about what free will is (or what it would have to be if it were a real thing).

In this respect, free will really is very much like God except that the subjective experience of free will is more common than the subjective experience of the Presence of the Holy Spirit.

BTW, it is actually possible that the subjective experience of free will is not universal among humans. It is possible that some people don't have this subjective perception, just as some people don't experience the Presence of the Holy Spirit. It is possible that this lack of the subjective perception of free will is what leads some people to submit to the will of Allah, or to become Calvinists.

Comment author: CCC 03 March 2016 07:54:38AM 0 points [-]

The serpent wasn't an authority figure.

How could Eve have known that? See my point above about Eve not having the benefit of any cultural references.

Eve could have known that God was an authority figure, from Genesis 2, verses 20-24, in which God created Eve (from Adam's rib) and brought her to Adam.


Why do you think one is okay and the other one is not?

Because the kitten is acting in self defense. If the kitten had initiated the violence, that would not be OK.

So you accept self-defense as a justification, but not complete (but not wilful) ignorance?


Because it's really boring

Seriously?

Well, I'm guessing, but yes, it's a serious guess. Omnipotence means the ability to do everything, it does not mean that everything is pleasant to do. And I certainly know I'd start to lose patience a bit after explaining individually to the hundredth person why stealing is wrong.

he thinks they'd have reason to want to kill him.

Yes, because he's cursed by God.

The curse, in and of itself, is not what's going to make people want to kill him (if it was, then God could merely remove that aspect of the curse, rather than install a separate Mark as a warning to people not to do that). No, the curse merely prevented him from farming, from growing his own food. I'm guessing it also, as a result, made his guilt obvious - everyone would recognise the man who could not grow crops, and know he'd killed his brother.

But the curse is not what's making Cain expect other people to kill him. He clearly expects that other people will freely choose to kill him, and that suggests to me that he knew he had done wrong.

I'd always understood the Flood story as they weren't just thinking evil, but continually doing (unspecified) evil to the point where they weren't even considering doing non-evil stuff.

If that were true then humans would have died out in a single generation even without the Flood.

I don't see how that follows. I can imagine ways to produce a next generation consisting of entirely evil (or, at best, morally neutral) actions. What do you think would prevent the appearance of a new generation?

Simulate the algorithm with pencil and paper, if all else fails.

But that doesn't work. If you do the math you will find that even if you got the entire human race to do pencil-and-paper calculations 24x7 you'd have less computational power than a single iPhone.

Yes, and over fourteen billion years, how many digits of pi can they produce?

I'm not saying it's fast. Compared to a computer, pen-and-paper is really, really slow. That's why we have computers. But fourteen billion years is a really, really, really long time.
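As a rough sanity check of the pencil-and-paper comparison, here's a back-of-envelope sketch. Every figure in it is an assumption for illustration (world population, one hand calculation per ten seconds, a phone doing about a billion arithmetic operations per second), not a measurement:

```python
# Back-of-envelope: all of humanity computing by hand vs. one smartphone.
# All figures below are rough assumptions, not measurements.
people = 7e9                  # assumed world population
ops_per_person_per_sec = 0.1  # assume ~one arithmetic step every 10 seconds

humanity_ops_per_sec = people * ops_per_person_per_sec  # 7e8 ops/sec

iphone_ops_per_sec = 1e9      # assume ~a billion arithmetic ops/sec

# Under these assumptions, every human on earth working 24x7
# still falls short of a single phone.
print(humanity_ops_per_sec < iphone_ops_per_sec)  # True
```

The comparison is insensitive to the time horizon: multiplying both sides by fourteen billion years doesn't change which side is bigger.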

perfect knowledge of the future - does not necessarily imply a perfectly deterministic universe.

Of course it does. That's what determinism means. In fact, perfect knowledge is a stronger condition than determinism. Knowable necessarily implies determined, but the converse is not true. Whether a TM will halt on a given input is determined but not generally knowable.

That's provided that the perfect knowledge of the future is somehow derived from a study of the present state of the universe. The time traveller voids this implicit assumption by deriving his perfect knowledge from a study of the future state of the universe.

Sorry about making that unwarranted assumption. Here's a reference. The details don't really matter. If you tell me your background I'll try to come up with a more culturally appropriate example.

Ah, thank you. That explains it all quite neatly.

I'm not sure it's really worth the bother of coming up with a different example at this point - your point was quite clearly made, even without knowledge of the story. (If it makes any difference, I'm South African, which is probably going to be less helpful than one might think considering the number of separate cultures in here).

the question of whether two things are the same must also become fuzzy, and non-binary

Indeed. [linked to "Ship of Theseus"]

Your point is well made.

Comment author: lisper 03 March 2016 06:12:11PM 0 points [-]

The serpent wasn't an authority figure. How could Eve have known that? Eve could have known that God was an authority figure

That's a red herring. The question was not how she could have known that God was an authority figure. The question was how she could have known that the snake was NOT an authority figure too.

it's a serious guess

Oh, come on. Even if we suppose that God can get bored, you really don't think he could have come up with a more effective way to spread the Word than just having one-on-one chats with individual humans? Why not hold a big rally? Or make a video? Or at least have more than one freakin' person in the room when He finally gets fed up and says, "OK, I've had it, I'm going to tell you this one more time before I go on extended leave!" ???

Sheesh.

everyone would recognise the man who could not grow crops, and know he'd killed his brother

You do know that this is LessWrong, right? A site dedicated to rationality and the elimination of logical fallacies and cognitive bias? Because you are either profoundly ignorant of elementary logic, or you are trolling. For your reasoning here to be valid it would have to be the case that the only possible reason someone could not grow crops is that they had killed their brother. If you can't see how absurd that is then you are beyond my ability to help.

I don't see how that follows.

Because "the good stuff" is essential to our survival. Humans cannot survive without cooperating with each other. That's why we are social animals. That's why we have evolved moral intuitions about right and wrong.

Yes, and over fourteen billion years, how many digits of pi can they produce?

What difference does that make? Yes, 14B years is a long time, but it's exactly the same amount of time for a computer. However much humans can calculate in 14B years (or any other amount of time you care to pull out of your hat) a computer can calculate vastly more.

I'm South African

I've been to SA twice. Beautiful country, but your politics are even more fucked up than ours here in the U.S., and that's saying something.

Comment author: ChristianKl 02 March 2016 09:25:24PM 1 point [-]

This whole discussion starts from a subjective experience that I have (and that other people report having), namely, feeling like I have free will.

To the extent that the subjective experience you call free will is independent of what other people mean by the term free will, the arguments about it aren't that interesting for the general discussion of whether what's commonly called free will exists.

More importantly, concepts that start from "I have the feeling that X is true" usually produce models of reality that aren't true in 100% of cases. They make some decent predictions and fail in other cases.

It's usually possible to refine concepts to be better at predicting. It's part of science to develop operationalized terms.

This started with you saying "But the word 'free' has an established meaning in English." That's you pointing to a shared understanding of "free", not you pointing to your private experience.

No, that's not my argument. My argument (well, one of them anyway) is that if I am reliably predictable, then it must be the case that I am deterministic, and therefore I cannot have free will.

Humans are not reliably predictable because they are NFAs. From memory, Heinz von Förster gives the example of a child answering the question "What's 1+1?" with "Blue". It takes education to train children to actually give predictable answers to the question "What's 1+1?".

Weather systems are not reliably predictable, but they don't have free will.

I think the reason weather systems don't have free will is not that they aren't free to make choices (under certain models they are) but that they lack the "will" part. Having a will is about having desires. The weather doesn't have desires in the same sense that humans do, and thus it has no free will.

I think that humans do have desires that influence the choices they make, even when they are not conscious of the desire creating the choice.

The difference between free will and other subjective experiences like, say, seeing color, is that seeing colors can be easily grounded in an objective external reality

Grounding the concept of color in external reality isn't trivial. There are many competing definitions. You can define it by what the human eye perceives, which depends heavily on human genetics that differ from person to person. You can define it by wavelengths. You can define it by RGB values.

It doesn't make sense to argue that color doesn't exist because the human qualia of color don't map directly to the wavelength definition of color.

With color the way you determine the difference between colors is also a fun topic. The W3C definition for example leads to strange consequences.

Comment author: lisper 02 March 2016 11:06:51PM 0 points [-]

That's you pointing to a shared understanding of free and not you pointing to your private experience.

You're conflating two different things:

  1. Attempting to communicate about a phenomenon which is rooted in a subjective experience.

  2. Attempting to conduct that communication using words rather than, say, music or dance.

Talking about the established meaning of the word "free" has to do with #2, not #1. The fact that my personal opinion enters into the discussion has to do with #1, not #2.

I think that humans do have desires that influence the choices they make

Yes, of course I agree. But that's not the question at issue. The question is not whether we have "desires" or "will" (we all agree that we do), the question is whether or not we have FREE will. I think it's pretty clear that we do NOT have the freedom to choose our desires. At least I don't seem to; maybe other people are different. So where does this alleged freedom enter the process?

Grounding the concept of color in external reality isn't trivial

I never said it was. In fact, the difficulty of grounding color perception in objective reality actually supports my position. One would expect the grounding of free-will perception in objective reality to be at least as difficult as grounding color perception, but I don't see those who support the objective reality of free will undertaking such a project, at least not here.

I'm willing to be convinced that this free will thing is real, but as with any extraordinary claim the burden is on you to prove that it is, not on me to prove that it is not.

Comment author: entirelyuseless 02 March 2016 10:03:10PM 0 points [-]

I disagree: if you interpret EPR experiments as wavefunction collapse rather than many worlds, then you can conclude that either one measurement affects the other, or both affect each other. But you cannot come up with any encoding that will allow you to transmit information.

Comment author: lisper 02 March 2016 10:36:33PM 0 points [-]

Yes, of course that's true. But collapse is only an approximation to the truth. It is a very good approximation in many common cases. But the Aharonov experiment is interesting precisely because it is a case where collapse is no longer a good approximation to the truth, and so of course if you view it through the lens of collapse things are going to look weird. To see why collapse is not always a good approximation to the truth, see the references in the OP.

Comment author: ChristianKl 02 March 2016 01:27:27PM 0 points [-]

I can't even imagine what it means for a neuron to "do something that not in their nature or being", let alone that this departure from "nature or being" could be caused by physics. That's just bizarre. What did I say that made you think I believed this?

I thought you made the argument that physical determinism somehow means that there's no free will because physics causes effects to happen. If I misunderstood your argument, feel free to point that out.

Given the dictionary definition of "free" that seems to be flawed.

I can't define "free will" just like I can't define "pornography."

That's an appeal to the authority of your personal intuition. It prevents your statements from being falsifiable. It moves them into too-vague-to-be-wrong territory.

If I have a conversation with a person whose acrophobia I'm trying to debug, then I'm going to use words in a way where I only care about the effect of the words, not about whether my sentences make falsifiable statements. If, however, I want to have a rational discussion on LW, then I strive to use rational language: language that makes concrete claims that allow others to engage with me in rational discourse.

Again, that's what distinguishes rational!LW from rational!NewAtheist. If you don't simply want a replacement for religion, but care about reasoning, then it's useful not to be too vague to be wrong.

The thing you wrote about calling only the part of you that corresponds to your conscious mind "I" looks to me like subclinical depersonalization disorder: a notion of the self that can be defended, but that's unhealthy to have.

I not only have lungs. My lungs are part of the person that I happen to be.

If you wish to dispute this, you will have to explain to me how I can determine whether the reason that the moon doesn't leave earth orbit is because it can't or because it chooses not to.

If we stay with the dictionary definition of freedom, why not look at the nature of the moon? Is the fact that it revolves around the earth an emergent property of how the complex internals of the moon work, or isn't it?

My math in that area isn't perfect, but objects that can be modeled by nontrivial nondeterministic finite automata might be a criterion.

Nontrivial nondeterministic finite automata can reasonably be described as using heuristics to make choices. They make them based on the algorithm that's programmed into them, and that algorithm can reasonably be described as part of the nature of a specific nondeterministic finite automaton.

I don't think the way the moon revolves around the earth is reasonably modeled by a nontrivial nondeterministic finite automaton.
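A minimal sketch of the kind of nondeterminism being described; the states and transition table here are invented purely for illustration:

```python
# A tiny nondeterministic finite automaton (NFA), invented for illustration.
# From state 'q0' the symbol 'a' permits TWO successor states -- the
# nondeterminism at issue: the machine's "options" at each step are
# fixed by its transition table, i.e. by its own nature.
transitions = {
    ('q0', 'a'): {'q1', 'q2'},
    ('q1', 'b'): {'q_accept'},
    ('q2', 'b'): set(),  # dead-end branch
}

def step(states, symbol):
    """All states reachable in one step on `symbol` from any current state."""
    out = set()
    for s in states:
        out |= transitions.get((s, symbol), set())
    return out

states = {'q0'}
for sym in "ab":
    states = step(states, sym)

print(states)  # {'q_accept'}: one branch dies, the other survives
```

The moon, by contrast, has a single lawful successor state at each instant, which is why a nontrivial NFA seems like the wrong model for it.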

Comment author: lisper 02 March 2016 06:37:37PM 0 points [-]

I thought you made the argument that physical determinism somehow means that there's no free will because physics causes effects to happen.

No, that's not my argument. My argument (well, one of them anyway) is that if I am reliably predictable, then it must be the case that I am deterministic, and therefore I cannot have free will.

I actually go even further than that. If I am not reliably predictable, then I might have free will, but my mere unpredictability is not enough to establish that I have free will. Weather systems are not reliably predictable, but they don't have free will. It is not even the case that non-determinism is sufficient to establish free will. Photons are non-deterministic, but they don't have free will.

That's an appeal to the authority of your personal intuition.

Well, yeah, of course it is (though I would not call my intuitions an "authority"). This whole discussion starts from a subjective experience that I have (and that other people report having), namely, feeling like I have free will. I don't know of any way to talk about a subjective experience without referring to my personal intuitions about it.

The difference between free will and other subjective experiences like, say, seeing color, is that seeing colors can be easily grounded in an objective external reality, whereas with free will it's not so easy. In fact, no one has exhibited a satisfactory explanation of my subjective experience that is grounded in objective reality, hence my conclusion that my subjective experience of having free will is an illusion.

Comment author: gjm 02 March 2016 09:53:42AM *  -1 points [-]

with 100% certainty, no one will exhibit a working perpetual motion machine today

100%? Really? Not just "close to 100%, so let's round it up" but actual complete certainty?

I too am a believer in the Second Law of Thermodynamics, but I don't see on what grounds anyone can be 100% certain that the SLoT is universally correct. I say this mostly on general principles -- we could just have got the physics wrong. More specifically, there are a few entropy-related holes in our current understanding of the world -- e.g., so far as I know no one currently has a good answer to "why is the entropy so low at the big bang?" nor to "is information lost when things fall into black holes?" -- so just how confidently would you bet that figuring out all the details of quantum gravity and of the big bang won't reveal any loopholes?

Now, of course there's a difference between "the SLoT has loopholes" and "someone will reveal a way to exploit those loopholes tomorrow". The most likely possible-so-far-as-I-know worlds in which perpetual motion machines are possible are ones in which we discover the fact (if at all) after decades of painstaking theorizing and experiment, and in which actual construction of a perpetual motion machine depends on somehow getting hold of a black hole of manageable size and doing intricate things with it. But literally zero probability that some crazy genius has done it in his basement and is now ready to show it off? Nope. Very small indeed, but not literally zero.

the sun will not rise in the west. [...] I will not be the president of the United States

Again, not zero. Very very very tiny, but not zero.

Do you believe that a thermostat makes decisions?

It does something a tiny bit like making decisions. (There is a certain class of states of affairs it systematically tries to bring about.) However, there's nothing in what it does that looks at all like a deliberative process, so I wouldn't say it has free will even to the tiny extent that maybe a chess-playing computer does.

For the avoidance of doubt: The level of decision-making, free will, intelligence, belief-having, etc., that these simple (or in the case of the chess program not so very simple) devices exhibit is so tiny that for most purposes it is much simpler and more helpful simply to say: No, these devices are not intelligent, do not have beliefs, etc. Much as for most purposes it is much simpler and more helpful to say: No, the coins in your pocket are not held away from the earth by the gravitational pull of your body. Or, for that matter: No, there is no chance that I will be president of the United States tomorrow.

Where are you heading with these questions? I mean, are you expecting them to help achieve mutual understanding, or are you playing to the gallery and trying to get me to say things that sound silly? (If so, I think you may have misjudged the gallery.)

perfectly reliable prediction of some things (in principle) is clearly possible.

Empirical things? Do you not, in fact, believe in quantum mechanics? Or do you think "in half the branches, by measure, X will happen, and in the other half Y will happen" counts as a perfectly reliable prediction of whether X or Y will happen?

is possible by definition.

Only perfectly non-empirical things. Sure, you can "predict" that a given Turing machine will halt. But you might as well say that (even without a halting oracle) you can "predict" that 3x4=12. As soon as that turns into "this actual multiplying device, right here, will get 12 when it tries to multiply 3 by 4", you're in the realm of empirical things, and all kinds of weird things happen with nonzero probability. You build your Turing machine but it malfunctions and enters an infinite loop. (And then terminates later when the sun enters its red giant phase and obliterates it. Well done, I guess, but then your prediction that that other Turing machine would never terminate isn't looking so good.) You build your multiplication machine and a cosmic ray changes the answer from 12 to 14. You arrange pebbles in a 3x4 grid, but immediately before you count the resulting pebbles all the elementary particles in one of the pebbles just happen to turn up somewhere entirely different, as permitted (albeit with staggeringly small probability) by fundamental physics.

[EDITED to fix formatting screwage; silly me, using an asterisk to denote multiplication.]

Comment author: lisper 02 March 2016 06:25:43PM 0 points [-]

Where are you heading with these questions? I mean, are you expecting them to help achieve mutual understanding,

I'm not sure what I "expect" but yes, I am trying to achieve mutual understanding. I think we have a fundamental disconnect in our intuitions of what "free will" means and I'm trying to get a handle on what it is. If you think that a thermostat has even a little bit of free will then we'll just have to agree to disagree. If you think even a Nest thermostat, which does some fairly complicated processing before "deciding" whether or not to turn on the heat has even a little bit of free will then we'll just have to agree to disagree. If you think that an industrial control computer, or an airplane autopilot, which do some very complicated processing before "deciding" what to do have even a little bit of free will then we'll have to agree to disagree. Likewise for weather systems, pachinko machines, geiger counters, and computers searching for a counterexample to the Collatz conjecture. If you think any of these things has even a little bit of free will then we will simply have to agree to disagree.

Comment author: entirelyuseless 02 March 2016 03:44:06PM 0 points [-]

I don't understand Aharonov's experiment enough to say what it does or doesn't show. But your argument surely does not disprove his claim, since he is talking about particular circumstances, not making a general claim that there is some method that will tell you general truths about the future such as what the stock market is going to do. In fact, he does not appear to be saying that you can send yourself information at all, in a form which will be intelligible to you before the future events.

Comment author: lisper 02 March 2016 06:02:07PM -1 points [-]

So I read the paper, and it is kind of a cool experiment, but it does not show that "future choices can affect a past measurement's outcome." Explaining why would require a separate article (maybe time to re-open main!) But the TL;DR version is this: if you want to argue that A affects B then you have to show a causal relationship that runs from A to B. If you can do that, then you can always come up with some encoding that will allow you to transmit information from A to B. That's what "causal relationship" means. But that is (unsurprisingly) not what Aharonov et al. have done. They have merely shown correlations between A and B, and then argue on purely intuitive grounds that there must have been some causal relationship between A and B because "Bell's theorem forbids spin values to exist prior to the choice of the orientation measured." While this is true, it's misleading because it implies that spin values do exist after a strong measurement. But that is not true. There is no fundamental difference between a strong and a weak measurement. There is a smooth continuum between weak and strong measurements, and at no point during the transition from weak to strong does the spin value begin to "actually exist" (a.k.a. wavefunction collapse).

Comment author: torekp 02 March 2016 01:58:37AM *  0 points [-]

Thanks, this helped me fill in some gaps. In Ron Garret's piece that you linked above, a comment has a link to a very nice article by Aharonov et al titled Can a Future Choice Affect a Past Measurement's Outcome?. (Hint: yes.)

Comment author: lisper 02 March 2016 05:18:45AM 1 point [-]

Just FYI, I am Ron Garret. Also just FYI, the Aharonov study does not show that future choices can affect a past measurement's outcome. If this were possible, you could use it to send yourself information about the future of (say) the stock market and become the richest person on earth.

Comment author: gjm 01 March 2016 07:55:08PM -1 points [-]

How would you define it then?

I already pointed out that your own choice of definition doesn't have the property you claimed (being fundamentally incompatible with determinism). I think that suffices to make my point.

This would not be the first time in history that the philosophical community was wrong about something.

Very true. But if you are claiming that some philosophical proposition is (not merely true but) obvious and indeed true by definition, then firm disagreement by a majority of philosophers should give you pause.

You could still be right, of course. But I think you'd need to offer more and better justification than you have so far, to be at all convincing.

But "a very little bit" is still distinguishable from zero, yes?

Well, the actual distinguishing might be tricky, especially as all I've claimed is that arguably it's so. But: yes, I have suggested -- to be precise about my meaning -- that some reasonable definitions of "free will" may have the consequence that a chess-playing program has a teeny-tiny bit of free will, in something like the same way as John McCarthy famously suggested that a thermostat has (in a very aetiolated sense) beliefs.

Nothing about it seems human decision-like.

Nothing about it seems decision-like at all. My notion of what is and what isn't a decision is doubtless influenced by the fact that the most interesting decision-making agents I am familiar with are human, which is why an abstract resemblance to human decision-making is something I look for. I have only a limited and (I fear) unreliable idea of what other forms decision-making can take. As I said, I'll happily revise this in the light of new data.

I believe that any philosophy worth adhering to ought to be IA-ready and AI-ready

Me too; if you think that what I have said about decision-making isn't, then either I have communicated poorly or you have understood poorly or both. More precisely: my opinions about decision-making surely aren't altogether IA/AI-ready, for the rather boring reason that I don't know enough about what intelligent aliens or artificial intelligences might be like for my opinions to be well-adjusted for them. But I do my best, such as it is.

The hypothesis that humans make decisions by heuristic search has been pretty much disproven

First: No, it hasn't. The hypothesis that humans make all their decisions by heuristic search certainly seems pretty unlikely at this point, but so what? Second and more important: I was not claiming that humans make decisions by tree searching. (Though, as it happens, when playing chess we often do -- though our trees are quite different from the computers'.) I claim, rather, that humans make decisions by a process along the lines of: consider possible actions, envisage possible futures in each case, evaluate the likely outcomes, choose something that appears good. Which also happens to be an (extremely handwavy and abstract) description of what a chess-playing program does.

All I require for my argument to hold is predictability in principle, not predictability in fact.

Perfectly reliable prediction is not possible in principle in our universe. Not even with a halting oracle.

because I don't believe that Turing machines exercise free will when "deciding" whether or not to halt.

I think the fact that you never actually get to observe the event of "such-and-such a TM not halting" means you don't really need to worry about that. In any case, there seems to me something just a little improper about finagling your definition in this way to make it give the results you want: it's as if you chose a definition in some principled way, found it gave an answer you didn't like, and then looked for a hack to make it give a different answer.

just because incompatibilism is a tautology does not make it untrue.

Of course not. But what I said was neither (1) that incompatibilism is a tautology nor (2) that that makes it untrue. I said that (1) your argument was a tautology, which (2) makes it a bad argument.

As soon as someone presents a cogent argument I'm happy to consider it.

I think that may say more about your state of mind than about the available arguments. In any case, the fact that you don't find any counterarguments to your position cogent is not (to my mind) good reason for being rudely patronizing to others who are not convinced by what you say.

It reminds me of [...]

I regret to inform you that "argument X has been deployed in support of wrong conclusion Y" is not good reason to reject argument X -- unless the inference from X to Y is watertight, which in this case I hope you agree it is not.

I don't see any possible way to distinguish between "can not" and "with 100% certainty will not".

This troubles me not a bit, because you can never say "with 100% certainty will not" about anything with any empirical content. Not even if you happen to be a perfect reasoner and possess a halting oracle.

And at degrees of certainty less than 100%, it seems to me that "almost certainly will not" and "very nearly cannot" are quite different concepts and are not so very difficult to disentangle, at least in some cases. Write down ten common English boys' names. Invite me to choose names for ten boys. Can I choose the names you wrote down? Of course. Will I? Almost certainly not. If the notion of possibility you're working with leads you to a different conclusion, so much the worse for that notion of possibility.

Comment author: lisper 01 March 2016 11:17:30PM 0 points [-]

rudely patronizing

Sorry, it is not my intention to be either rude or patronizing. But there are some aspects of this discussion that I find rather frustrating, and I'm sorry if that frustration occasionally manifests itself as rudeness.

you can never say "with 100% certainty will not" about anything with any empirical content

Of course I can: with 100% certainty, no one will exhibit a working perpetual motion machine today. With 100% certainty, no one will exhibit superluminal communication today. With 100% certainty, the sun will not rise in the west tomorrow. With 100% certainty, I will not be the president of the United States tomorrow.

Nothing about [a pachinko machine] seems decision-like at all.

a thermostat has (in a very aetiolated sense) beliefs.

Do you believe that a thermostat makes decisions? Do you believe that a thermostat has (a little bit of) free will?

Perfectly reliable prediction is not possible in principle in our universe. Not even with a halting oracle.

I presume you mean "perfectly reliable prediction of everything is not possible in principle." Because perfectly reliable prediction of some things (in principle) is clearly possible. And perfectly reliable prediction of some things (in principle) with a halting oracle is possible by definition.

Comment author: ChristianKl 01 March 2016 07:45:49PM *  0 points [-]

Basically, after previously arguing that there is only one reasonable definition of free will, you have now moved to the position that there are multiple reasonable definitions, and you have particular reasons for preferring to focus on a specific one?

Is that a reasonable description of your position?

Comment author: lisper 01 March 2016 10:41:08PM 0 points [-]

No, not even remotely close. We seem to have a serious disconnect here.

For starters, I don't think I ever gave a definition of "free will". I have listed what I feel to be (two) necessary conditions for it, but I don't think I ever gave sufficient conditions, which would be necessary for a definition. I'm not sure I even know what sufficient conditions would be. (But I think those necessary conditions, plus the known laws of physics, are enough to show that humans don't have free will, so I think my position is sound even in the absence of a definition.)

And I did opine at one point that there is only one reasonable interpretation of the word "free" in the context of a discussion of "free will." But that is not at all the same thing as arguing that there is only one reasonable definition of "free will."

Also, the question of what "I" means is different from the question of what "free will" means. But both are (obviously) relevant to the question of whether or not I have free will.

The reason I brought up the definition of "I" is because you wrote:

You ontological model that there's an enity called physics_2 that causes neurons to do something that not in their nature or being is problematic

That is not my position. (And ontology is a bit of a red herring here.) I can't even imagine what it means for a neuron to "do something that not in their nature or being", let alone that this departure from "nature or being" could be caused by physics. That's just bizarre. What did I say that made you think I believed this?

I can't define "free will", just as I can't define "pornography." But I have an intuition about free will (just as I have one about porn) that tells me that, whatever it is, it is not something possessed by pachinko machines, individual photons, weather systems, or a Turing machine doing a straightforward search for a counterexample to the Collatz conjecture.

I also believe that "will not, with 100% reliability" is logically equivalent to "can not", in that there is no way to distinguish these two situations. If you wish to dispute this, you will have to explain to me how I can determine whether the reason the moon doesn't leave earth orbit is that it can't or that it chooses not to.
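For concreteness, the Collatz-counterexample searcher mentioned above is nothing more exotic than the following sketch (hypothetical names; bounded with a step limit so the example terminates, whereas the real machine would simply run forever if, as everyone expects, no counterexample exists):

```python
# Sketch of a machine "deciding" whether to halt: search for a Collatz
# counterexample. It halts iff it finds an n that never reaches 1 --
# which, as far as anyone knows, never happens.

def reaches_one(n, max_steps=10_000):
    """Iterate the Collatz map; return True if n reaches 1 within max_steps."""
    for _ in range(max_steps):
        if n == 1:
            return True
        n = 3 * n + 1 if n % 2 else n // 2
    return False  # gave up; a genuine searcher would keep going

def search_counterexample(limit):
    """Return the first n <= limit that fails to reach 1, else None."""
    for n in range(1, limit + 1):
        if not reaches_one(n):
            return n  # the machine "decides" to halt here
    return None

print(search_counterexample(10_000))  # → None: no counterexample found
```

Whether this loop halts is determined entirely by arithmetic, which is the intuition being appealed to: nothing about its "deciding" to halt or not looks like free will.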
