Jack comments on Rationality Quotes: February 2010 - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
But this is what is so great about Bayesian epistemology. You don't have to wait for some neuroscientists to announce this finding. If you know a decent amount of neuroscience now, you can be fairly confident in predicting that one day they will be able to explain choice in terms of neuron firings. All the people here who believe this aren't just making it up. We're extrapolating from what is known and making reasonable inferences. If you wait for someone to figure out exactly how it is done, you're going to spend a lot more time being wrong than those who infer in advance. Again, though, I can already make accurate predictions about people's choices based on macro-phenomena.
But don't confuse placeholders with fundamental properties. I have no problem with "choice". I use it all the time. I think I make choices constantly. If it is helpful for your models, by all means use it. But that doesn't require you to assert that choice is some incredible new kind of event which is neither causal nor random. I have lots of things in my ontology that are not in my basic ontology: morality, love, basketball, etc. Maybe in modeling subjective experience you even want to distinguish things you do from other caused or random events, and so use this word "choice" in a special way. But surely you can recognize that you aren't actually that different from all the other objects you discover in the world and likely work the same way they do. And when you take this objective, view-from-nowhere, scientific perspective, I don't see how you can have an event that is neither caused nor random.
I quite understand that your "choice" is neither caused nor random but a third value that is neither. What I don't understand is what positive qualities this third value possesses. I promise you I can make sense of there being a third value in the abstract. What I don't understand is what your third value is. I can say lots of things about "random" and "causally determined" that distinguish these properties. But I haven't heard you do anything in the way of describing this third property.
You should know that you're hardly the first person who has wanted this kind of free will and went about inventing a third kind of thing to prove that it existed. One reason I'm so skeptical is that every single one of these attempts that I know of has failed miserably. Libertarians are a very small minority among contemporary analytic philosophers for a reason.
OK, good, I thought so. You seemed pretty smart.
Why don't you go ahead and do that, for a paragraph or so, and I'll see if I can complete the pattern for you and give you the kind of description you're looking for. To me it just seems obvious what a choice is, in the same way that I know what "truth" is and what "good" is, but if you can manage to describe the meaning of "random" analytically then I can probably copy it for the word "chosen." If I can't, that will surprise me.
Have I waxed poetic about souls and destiny and homunculi? I don't remember "inventing" a third kind of thing. I'm just sort of pointing at my experience of choice and labeling it "choice." If you insist that what I think is choice is really something else, you're welcome to prove it to me with direct evidence, but I'm not really interested in Bayesian inferences here. I am unconvinced that brains and rocks are in the same reference class. I do not accept the physicalist-reductionist hypothesis as literally true, despite its excellent track record at producing useful models for predicting the future. I understand that the vast majority of people on this site -do- accept that hypothesis. I do not have the stamina or inclination to hold the field on that issue against an entire community of intelligent debaters.
Why? How an algorithm feels is not a reliable indicator of its internal structure.
For convenience. If you show me a few examples where believing that I don't have free will helps me get what I want, I might start caring about the actual structure of my mental algorithms as seen from the outside.
It is beneficial to believe you don't have free will if you don't have free will. From Surely You're Joking, Mr. Feynman!:
All right, suppose all that is true, and that people can be hypnotized so that they literally can't break away from the hypnotizing effect until released by the hypnotist.
That suggests that I should believe that hypnotism is dangerous. It would be useful to be aware of this danger so that I can avoid being manipulated by a malicious hypnotist, since it turns out that what appears to be parlor tricks are actually mind control. Great.
But, if I understand it correctly, which I'm not sure that I do, a world without free will is like a world where we are always hypnotized.
Once you're under the hypnotist's spell, it doesn't do any good to realize that you have no free will. You're still stuck. You will still get burned or embarrassed if the hypnotist wants to burn you.
So if I'm already under the "hypnotist's" spell, in a Universe where the hypnotist is just an impersonal combination of an alien evolution process and preset physical constants, why would I want to know that? What good would the information do me?
I'm sorry, I'm not maintaining that free will is incompatible with determinism, only that sometimes free will is not present, even though it appears to be. When hypnotized, Richard Feynman did not have (or, possibly, had to a greatly reduced extent) free will in the sense that he had free will under normal circumstances - and yet subjectively he noticed no difference.
It appears to me that you created your bottom line from observing your subjective impression of free will. I suggest that you strike out the entire edifice you built on these data - it is built on sand, not stone.
I see; I did misunderstand, but I think I get your point now. You're not claiming that if only Mr. Feynman had known about the limits of free will he could have avoided a burn; you're saying that, like all good rationalists everywhere, I should only want to believe true things, and it is unlikely that "I have free will" is a true thing, because sometimes smart people think that and turn out to be wrong.
Well, OK, fair enough, but it turns out that I get a lot of utility out of believing that I have free will. I'm happy to set aside that belief if there's some specific reason why the belief is likely to harm me or stop me from getting what I want. One of the things I want is to never believe a logically inconsistent set of facts, and one of the things I want is to never ignore the appropriately validated direct evidence of my senses. That's still not enough, though, to get me to "don't believe things that have a low Bayesian prior and little or no supporting evidence." I don't get any utility out of being a Bayesianist per se; worshipping Bayes is just a means to an end for me, and I can't find the end when it comes to rejecting the hypothesis of free will.
Robin, I've liked your comments both on this thread and others that we've had, but I can't afford to continue the discussion any time soon -- I need to get back to my thesis, which is due in a couple of weeks. Feel free to get in the last word; I'll read it and think about it, but I won't respond.
Understood.
My last word, as you have been so generous as to give it to me, is that I actually do think you have free will. I believe you are wrong about what it is made of, just as the pre-classical Greeks were wrong about the shape of the Earth, but I don't disagree that you have it.
Good luck on your thesis - I won't distract you any more.
I place a very low probability on my having genuine 'free will' but I act as if I do because if I don't it doesn't matter what I do. It also seems to me that people who accept nihilism have life outcomes that I do not desire to share and so the expected utility of acting as if I have free will is high even absent my previous argument. It's a bit of a Pascal's Wager.
Why do you define "free will" to refer to something that does not exist, when the thing which does exist - will unconstrained by circumstance or compulsion - is useful to refer to? For one, its absence is one indicator of an invalid contract.
No! A world without libertarian free will is a world exactly like this one.
ETA: Robin's point, I gather, is that a world without libertarian free will is a world where hypnotism is possible. Which, as it turns out, is this world.
I was actually making a lesser point: that the introspective appearance of free will is not even a reliable indicator of the presence of free will, much less a reliable guide to the nature of free will.
Edit: From which your interpretation follows, I suppose.
It's obvious to you what "truth" is and what "goodness" is? Really? I think I can say clever and right things about these concepts because I've done a lot of studying and thinking. But the answers don't seem obvious at all to me. Anyway, causality and randomness: clearly huge topics about which much has been said.
I believe a causal event is a kind of regularity, extended in spacetime, which has a variable that can be manipulated by a hypothetical agent at one end to control a variable at the other end (usually the effect is later in time). So by altering the velocity of an asteroid, the mean temperature of the planet Earth can be dramatically altered, for example. On a micro-level, intervening on a neuron and causing it to fire at a certain rate will lead to adjacent neurons firing. Altering the social mores of a society can cause a man not to return a wallet. For any one event to occur, a large number of variables have to be right, and any one of those variables can be altered so as to alter the event, so these examples are oversimplified. Lots more has been said if you're interested; Pearl and Woodward are good authors.
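A minimal sketch of the interventionist picture of causation described above (not from the comment; the variable names and numbers are invented for illustration): a toy structural model where setting an upstream variable, Pearl's "do" operation, controls a downstream one.

```python
# Toy structural causal model: deflecting an asteroid (upstream variable)
# controls the mean temperature change of Earth (downstream variable).
# All quantities are made up purely for illustration.

def simulate(asteroid_deflected):
    # Structural equations: impact depends on deflection,
    # temperature change depends on impact.
    impact = not asteroid_deflected
    mean_temp_change = -10.0 if impact else 0.0
    return mean_temp_change

# Intervening on the upstream variable alters the downstream outcome:
print(simulate(asteroid_deflected=False))  # -10.0
print(simulate(asteroid_deflected=True))   # 0.0
```

The point is only that a causal relation, on this view, is one a hypothetical agent could exploit: wiggle the cause, and the effect wiggles.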
Randomness might be more difficult, since it isn't obvious that ontological randomness even exists. Epistemological randomness does: rolling a die is a good example; we have no way to predict the outcome, though in principle we could. Some interpretations of quantum mechanics do involve ontological randomness. Such events can be distinguished from causal events in that the value of the resulting variable cannot be controlled by any agent - not because no agent is powerful enough, but because there are no variables which can be intervened on to alter the outcome in the way desired. There is no possibility of controlling such events. It is possible that quantum indeterminacy is just the product of a hidden variable we don't know about, or that the apparent randomness is actually just a product of anthropics: every possible state gets observed, and every outcome seems random because "you" only get to observe one and can't communicate with the other "you"s.
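The epistemic sense of randomness can be illustrated with a sketch (not from the comment): a simulated die roll looks unpredictable, but given the underlying state (here, a PRNG seed standing in for the die's full physical state) the outcome is fully determined.

```python
import random

def roll(seed):
    # The seed plays the role of the die's complete physical state:
    # unknown in practice, but fixing it fixes the outcome.
    rng = random.Random(seed)
    return rng.randint(1, 6)

# Same hidden state, same outcome, every time - "in principle we
# could predict the outcome."
print(roll(42) == roll(42))  # True
```

Ontological randomness, by contrast, would mean there is no such hidden state to condition on at all.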
I don't have a problem with you pointing at an experience and labeling it "choice". I do that too. You make choices. It's just that what it is to make a choice is one of these two things: a caused event or an uncaused event. You invent a third kind of thing when you come up with a new kind of event which isn't seen anywhere else, and declare it to be fundamental. And the way many philosophers have historically dealt with this exact problem is by positing souls and homunculi, "agent causation" and whatnot. When you decide that your experience of choice is a fundamental feature of the world you're doing the exact same thing - any claim that something is irreducible is the same as a claim that it belongs in our basic ontology. The fact that you didn't do this in verse just means I'm not annoyed; it's still the same mistake.
I've been known to be more tolerant than others of unorthodoxy on this matter, and I doubt many more would join in. Most people probably have the same arguments anyway. You're not obligated to, but I'd be interested in hearing your reasons for not accepting the hypothesis. However, my definition of truth is something like "the limit of useful modeling", so we might have to sort truth out a bit too. If you preface the discussion to demonstrate that you're aware the position is unpopular and that you're just trying to work this out, you can probably avoid a karma hit. I'll vote you up if it happens.
Sure, consider it prefaced. I'm not trying to convince anybody; I'm just sharing my views because one or two users seem curious about them, and because I might learn something this way. It's not very important to me. If anyone would like me to stop talking about this topic on Less Wrong, feel free to say so explicitly, and I will be glad to oblige you.
I don't mean that the entire contents, in detail, of what is and is not inside the box marked "true" is known to me. That would be ridiculous. I just mean that I know which box I'm talking about, and so do you. Sophisticated discussions about what "true" means (as opposed to discussion about whether some specific claim X is true) generally do more harm than good. You can tell cute stories about The Simple Truth, and that may help startle some philosophers into realizing where they've gone off-course, but mostly you're just lending a little color to the Reflexive Property or the Identity Property: a = a.
I can probably work with this. I expect you will still think I'm postulating unnecessary ontological entities, and, given your epistemological value system, you'll be right. Still, maybe the details will interest you.
Some interpretations of conscious awareness do involve ontological choice. Such events can be distinguished from random events in that the value of the resulting variable can be controlled by exactly one agent, as opposed to zero agents, as in the case of a truly random variable. The agent in question could be taken to be some subset of the neurons in the brain, or some subset of a person's conscious awareness, or some kind of minimally intervening deity. It is not clear exactly who or what the agent is.
Conscious events can be distinguished from caused events in that conventional measures of kinetic power and information-theoretic power are bad predictors of a hypothetical agent's ability to manipulate the outcome of a conscious event. Whether because the relevant interactions among neurons, given their level of chaotic complexity, occur in a slice of spacetime that is small enough to be resistant to external computation, or because the event is driven by some process outside the well-understood laws of physics, a conscious event is difficult or impossible to control from outside the relevant consciousness. Thus, instead of a single output depending subtly on many other variables, the output depends almost exclusively on a single input or small set of inputs.
I'd be happy to explain it in August, when I'll be bored silly. At the moment, I'm pretty busy with my law school thesis, which is on antitrust law and has little to do with either free will or reductionism. Feel free to comment on any of my posts around that time, or to send your contact info to zelinsky a t gm ail dot com. Zelinsky is a rationalist friend of mine who agrees with you and only knows one person who thinks like me, so he'll know who it's for.
Thanks for bearing with me so far and for responding to arguments that must no doubt strike you as woefully unenlightened with a healthy measure of respect and patience. I really am done with both the free will discussion and the reductionist discussion for now, but I enjoyed discussing them with you, and consider it well worth the karma I 'spent'. If you can think of any ways that what you see as my misunderstanding of free will or reductionism is likely to interfere with my attempts to help refine LW's understanding of Goodhart's Law, please let me know, and I'll vote them up.