SforSingularity comments on Ingredients of Timeless Decision Theory - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
If this is correct, then it amounts to a profound philosophical and scientific achievement.
Not by my standards.
Free will is about as easy as a problem can get and still be Confusing. Plenty of moderately good reductionists have refused to be confused by it. Killing off the problem entirely is more like dropping nuclear weapons to obliterate the last remnants of a dead horse than any great innovation within the field of reductionism.
There are non-reductionist philosophers who would think of reducing free will as a great and difficult achievement, but by reductionist standards it's a mostly-solved problem already.
Formal cooperation in the one-shot PD, now that should be interesting.
Here is what I don't understand about the free will problem. I know this is a simple objection, so there must be a standard reply to it; but I don't know what that reply is.
Denote by F a world in which free will exists, and by f one in which it doesn't. Denote by B a world in which you believe in free will, and by b one in which you don't. Let a combination of the two, e.g. FB, denote the utility you derive from holding that belief in that world. Suppose FB > Fb and fb > fB (being correct > being wrong).
The expected utility of B is FB × p(F) + fB × (1-p(F)); the expected utility of b is Fb × p(F) + fb × (1-p(F)). So choose b if Fb × p(F) + fb × (1-p(F)) > FB × p(F) + fB × (1-p(F)).
But, that's not right in this case! You shouldn't consider worlds of type f in your decision, because if you're in one of those worlds, your decision is pre-ordained. It doesn't make any sense to "choose" not to believe in free will - that belief may be correct, but if it is correct, then you can't choose it.
Over worlds of type F only, the expected utility of B is FB × p(F) and the expected utility of b is Fb × p(F); since FB > Fb, you always choose B.
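The two calculations above can be sketched in a few lines of Python. This is a toy illustration with made-up utilities satisfying FB > Fb and fb > fB; none of the numbers come from the thread:

```python
# Made-up utilities satisfying FB > Fb and fb > fB (correct beats wrong).
FB, Fb = 1.0, 0.0   # utility of believing / not believing, given free will
fB, fb = 0.0, 1.0   # utility of believing / not believing, given no free will

def standard_choice(p_F):
    """Ordinary expectation-maximization over all four world/belief cases."""
    eu_B = FB * p_F + fB * (1 - p_F)
    eu_b = Fb * p_F + fb * (1 - p_F)
    return "B" if eu_B > eu_b else "b"

def restricted_choice(p_F):
    """Restrict to F-worlds, where the 'choice' is actually efficacious."""
    eu_B = FB * p_F
    eu_b = Fb * p_F
    return "B" if eu_B > eu_b else "b"

print(standard_choice(0.3))    # b
print(restricted_choice(0.3))  # B
```

With these numbers the standard rule tracks p(F) and flips at p(F) = 0.5, while the restricted rule recommends B for any p(F) > 0.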
I am unable to attach a truth condition to these sentences - I can't imagine two different ways that reality could be which would make the statements true or alternatively false.
http://wiki.lesswrong.com/wiki/Free_will_(solution)
Do you mean that the phrases "free will exists" and "free will does not exist" are both incoherent?
If I want to, I can assign a meaning to "free will" in which it is tautologically true of causal universes as such, and applied to agents, is true of some agents but not others. But you used the term, you tell me what it means to you.
You used the term first. You called it a "dead horse" and "about as easy as a problem can get and still be Confusing". I would think this meant that you have a clear concept of what it means. And it can't be a tautology, because tautologies are not dead horses.
I can at least say that, to me, "Free will exists" implies "No Omega can predict with certainty whether I will one-box or two-box." (This is not an "if and only if" because I don't want to say that a random process has free will; nor that an undecidable algorithm has free will.)
I thought about saying: "Free will does not exist" if and only if "Consciousness is epiphenomenal". That sounds dangerously tautological, but closer to what I mean.
I can't think how to say anything more descriptive than what I wrote in my first comment above. I understand that saying there is free will seems to imply that I am not an algorithm; and that that seems to require some weird spiritualism or vitalism. But that is vague and fuzzy to me; whereas it is clear that it doesn't make sense to worry about what I should do in the worlds where I can't actually choose what I will do. I choose to live with the vague paradox rather than the clear-cut one.
ADDED: I should clarify that I don't believe in free will. I believe there is no such thing. But, when choosing how to act, I don't consider that possibility, because of the reasons I gave previously.
Then you've got the naive incoherent version of "free will" stuck in your head. Read the links.
http://wiki.lesswrong.com/wiki/Free_will
http://wiki.lesswrong.com/wiki/Free_will_(solution)
All right, I read all of the non-italicized links, except for the "All posts on Less Wrong tagged Free Will", trusting that one of them would say something relevant to what I've said here. But alas, no.
All of those links are attempts to argue about the truth value of "there is free will", or about whether the concept of free will is coherent, or about what sort of mental models might cause someone to believe in free will.
None of those things are at issue here. What I am talking about is what happens when you try to compute something over different possible worlds, where what your computation actually does differs between those worlds. When you must compare expected value in possible worlds in which there is no free will to expected value in possible worlds in which there is free will, and then make a choice, what that choice actually does is not independent of which possible world you end up in. This means that you can't apply expectation-maximization in the usual way. The counterintuitive result, I think, is that you should act in the way that maximizes expected value given that there is free will, regardless of the computed expected value given that there is not.
As I mentioned, I don't believe in free will. But I think, based on a history of other concepts or frameworks that seemed paradoxical but were eventually worked out satisfactorily, that it's possible there's something to the naive notion of "free will".
We have a naive notion of "free will" which, so far, no one has been able to connect up with our understanding of physics in a coherent way. This is powerful evidence that it doesn't exist, or isn't even a meaningful concept. It isn't proof, however; I could say the same thing about "consciousness", which as far as I can see really shouldn't exist.
All attempts that I've seen so far to parse out what free will means, including Eliezer's careful and well-written essays linked to above, fail to noticeably reduce the probability I assign to there being naive "free will", because the probability that there is some error in the description or mapping or analogies made is always much higher than the very-low prior probability that I assign to there being "free will".
I'm not arguing in favor of free will. I'm arguing that, when considering an action to take that is conditioned on the existence of free will, you should not do the usual expected-utility calculations, because the answer to the free will question determines what it is you're actually doing when you choose an action to take, in a way that has an asymmetry such that, if there is any possibility epsilon > 0 that free will exists, you should assume it exists.
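The claimed asymmetry can be shown with a toy calculation (hypothetical utilities, assumed here only for illustration): under the restricted rule, only F-worlds contribute, because in f-worlds the "choice" accomplishes nothing either way, so any epsilon > 0 suffices to pick B.

```python
# Toy illustration: with FB > Fb, restricting the expectation to F-worlds
# makes B the best action for every positive probability of free will.
def restricted_best_action(p_F, FB=1.0, Fb=0.0):
    # Only F-world terms appear; f-world terms drop out of the comparison.
    return "B" if FB * p_F > Fb * p_F else "b"

for eps in (1e-12, 0.01, 0.5):
    assert restricted_best_action(eps) == "B"
print("B wins for every positive epsilon tested")
```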
(BTW, I think a philosopher who wished to defend free will could rightfully make the blanket assertion against all of Eliezer's posts that they assume what they are trying to prove. It's pointless to start from the position that you are an algorithm in a Blocks World, and argue from there against free will. There's some good stuff in there, but it's not going to convince someone who isn't already reductionist or determinist.)
I have stated exactly what I mean by the term "free will" and it makes this sentence nonsense; there is no world in which you do not have free will. And I see no way that your will could possibly be any freer than it already is. There is no possible amendment to reality which you can consistently describe, that would make your free will any freer than it is in our own timeless and deterministic (though branching) universe.
What do you mean by "free will" that makes your sentence non-nonsense? Don't say "if we did actually have free will", tell me how reality could be different.
Saying that you shouldn't do something because it's preordained whether you do it or not is a very confused way of looking at things. Christine Korsgaard, by whom I am normally unimpressed but who has a few quotables, says:
(From "The Authority of Reflection")
I don't understand what that Korsgaard quote is trying to say.
I didn't say that. I said that, when making a choice, you shouldn't consider, in your set of possible worlds, possible worlds in which you can't make that choice.
It's certainly not as confused a way of looking at things as choosing to believe that you can't choose what to believe.
I should have said you shouldn't try to consider those worlds. If you are in f, then it may be that you will consider such possible worlds; and there's no shouldness about it.
"But", you might object, "what should you do if you are a computer program, running in a deterministic language on deterministic hardware?"
The answer is that in that case, you do what you will do. You might adopt the view that you have no free will, and you might be right.
The 2-sentence version of what I'm saying is that, if you don't believe in free will, you might be making an error that you could have avoided. But if you believe in free will, you can't be making an error that you could have avoided.
In the context of the larger paper, the most charitable way of interpreting her (IMO) is that whether we have free will or not, we have the subjective impression of it, this impression is simply not going anywhere, and so it makes no sense to try to figure out how a lack of free will ought to influence our behavior, because then we'll just sit around waiting for our lack of free will to pick us up out of our chair and make us water our houseplants and that's not going to happen.
What if we're in a possible world where we can't choose not to consider those worlds? ;)
"Choosing to believe that you can't choose what to believe" is not a way of looking at things; it's a possible state of affairs, in which one has a somewhat self-undermining and false belief. Now, believing that one can choose to believe that one cannot choose what to believe is a way of looking at things, and might even be true. There is some evidence that people can choose to believe self-undermining false things, so believing that one could choose to believe a particular self-undermining false thing which happens to have recursive bearing on the choice to believe it isn't so far out.
The mistake you're making is thinking that determinism means your decisions are irrelevant. It doesn't: the universe doesn't swoop in and force you to decide a certain way even though you'd rather not. Determinism only means that your decisions, by being part of physical reality rather than existing outside it, result from the physical events that led to them. You aren't free to make events happen without a cause, but you can still look at evidence and come to correct conclusions.
If you can't choose whether you believe, then you don't choose whether you believe. You just believe or not. The full equation still captures the correctness of your belief, however you arrived at it. There's nothing inconsistent about thinking that you are forced to not believe and that seeing the equation is (part of) what forced you.
(I avoid the phrase "free will" because there are so many different definitions. You seem to be using one that involves choice, while Eliezer uses one based on control. As I understand it, the two of you would disagree about whether a TV remote in a deterministic universe has free will.)
edit: missing word, extra word
Brian said:
And Alicorn said:
And before either of those, I said:
These all seem to mean the same thing. When you try to argue against what someone said by agreeing with him, someone is failing to communicate.
Brian, my objection is not based on the case fb. It's based on the cases Fb and fB. fB is a mistake that you had to make. Fb, "choosing to believe that you can't choose to believe", is a mistake you didn't have to make.
Yes. I started writing my reply before Alicorn said anything, took a short break, posted it, and was a bit surprised to see a whole discussion had happened under my nose.
But I don't see how what you originally said is the same as what you ended up saying.
At first, you said not to consider f because there's no point. My response was that the equation correctly includes f regardless of your ability to choose based on the solution.
Now you are saying that Fb is different from (inferior to?) fB.
Free will is counted as one of the great problems of philosophy. Wikipedia lists it as a "central problem of metaphysics". The SEP has a whole, long article on it, along with others on "compatibilism", "causal determinism", "free will and fatalism", "divine foreknowledge", "incompatibilist (nondeterministic) theories of free will", and "arguments for incompatibilism".
If you really have "nuked the dead donkey" here, you would cut out a lot of literature. Furthermore, religious people would no longer be able to use "free will" as a magic incantation with which to defend God.
Dennett and others have used multi-ton high explosives on the dead donkey. Why would nuclear weapons make a further difference?
People respond to math more than to words.
Er... no they don't?
Some do.
Rather, if one challenges a valid verbal theory, one can usually find some way of convincing people that there is some "wiggle room", that it may or may not be valid, and so on. But a mathematical theory has, I think, an air of respectability that will make people pay attention, even if they don't like it, and especially if they don't actually understand the mathematics.
If your theory finds applications (which, given the robotics revolution we seem to be in the middle of, is not vastly unlikely), then it will further marginalize those who stick to the old convenient confusion about free will.
Of course, given what has happened with evolution (smart Christians accept it, but find excuses to still believe in God), I suspect that it will only have an incremental impact on religiosity, even amongst the elite.
The only reason free will is regarded as a problem of philosophy is that philosophers are in the rather bizarre habit of defining it as "your actions are uncaused" - it should be no surprise that a nonsensical definition leads to problems!
When we use the correct definition - the one that corresponds to how the term is actually used - "your actions are caused by your own decisions, as opposed to by external coercion" - the problem doesn't arise.
Free will seems like a pretty boring topic to me. The main recent activity I have noticed in the area was Daniel Dennett's "Freedom Evolves" book. That book was pretty boring and mostly wrong - I thought. It was curious to see Daniel Dennett make such a mess of the subject, though.
As it happens, I'm reading through Freedom Evolves right now; up to chapter 3, and while I don't quite buy his ideas on inevitability, it so far doesn't strike me as a mess?
I liked the bit on memes. Most of the rest of it was a lot of word games, IMO.