entirelyuseless comments on Newcomb versus dust specks - Less Wrong

-1 Post author: ike 12 May 2016 03:02AM


Comment author: entirelyuseless 14 May 2016 12:42:45AM -2 points

We've had this discussion before. When you one-box, your choice does not cause the money. The money is already there or it is not. Causality does not go backwards in time.

In other words, Newcomb and the smoking lesion are identical in logical form.

Comment author: ArisKatsaris 14 May 2016 02:37:17PM 0 points

When you one-box, your choice does not cause the money.

Your decision algorithm will cause the choice. The prediction of that choice, by someone knowing your decision algorithm, will have caused money.

If you want the money you should therefore be a decision algorithm that makes the choice whose prediction will cause the money.
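This argument can be sketched as a toy simulation (a hypothetical illustration, not code from the thread; the payoff amounts are the standard ones from the problem statement, and a perfectly accurate predictor is assumed):

```python
# Toy Newcomb setup: the predictor fills the boxes by running the
# agent's own decision algorithm. Function names are my own; the
# payoffs ($1,000 and $1,000,000) are the standard ones.

def predict(decision_algorithm):
    """A perfect predictor: simulate the agent to see what it will choose."""
    return decision_algorithm()

def payoff(decision_algorithm):
    # Box A always holds $1,000; box B holds $1,000,000 only if the
    # predictor expects the agent to take box B alone ("one-box").
    box_b = 1_000_000 if predict(decision_algorithm) == "one-box" else 0
    choice = decision_algorithm()
    return box_b if choice == "one-box" else box_b + 1_000

one_boxer = lambda: "one-box"
two_boxer = lambda: "two-box"

print(payoff(one_boxer))  # 1000000
print(payoff(two_boxer))  # 1000
```

Note that the predictor never looks at the choice after it is made, only at the algorithm beforehand; the one-boxing algorithm simply ends up in a world where the money was already placed.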

Comment author: entirelyuseless 14 May 2016 03:20:59PM -1 points

You cannot make yourself into a certain decision algorithm, just as you cannot make yourself have or not have a lesion.

Comment author: ArisKatsaris 15 May 2016 11:56:42AM 1 point

You cannot make yourself into a certain decision algorithm

What, is this some sort of objection where you believe that determinism means we don't make 'real' choices?

You could be convinced by my words and make yourself into a person who chooses to one-box. Or you could refuse to be convinced and remain a person who chooses to two-box.

Granted, by being "convinced" or "not convinced" it means that you're already the decision algorithm that would make that choice. So what? Whether you'll be convinced or not still affects your decision algorithm from then on.

Comment author: entirelyuseless 15 May 2016 02:18:29PM -1 points

No, I don't believe that determinism means we don't make real choices. But it is also true, as you note yourself, that if I am convinced by your words, then I was already the kind of person who would be convinced, and I did not make myself into that sort of person. And likewise for the opposite case.

But I am consistent: I believe we make real choices even if Omega predicts our actions, and I also believe we make real choices even if a lesion causes them. The people arguing against my position are saying we don't make real choices in the second case, so they are the ones raising the determinism objection.

Comment author: ArisKatsaris 17 May 2016 07:29:52PM 0 points

Okay, can you just state clearly whether you one-box or two-box, and whether you smoke or not-smoke in the smoking lesion problem, so that I understand what your position is, before trying to understand why it is?

Comment author: entirelyuseless 18 May 2016 02:07:06PM 0 points

I take one box in Newcomb, and I do not smoke in the smoking lesion.

My position is that they are the same problem. The million is already there or it is not, and the lesion is there or it is not. I cannot change that in either case. But I still make a real decision, one that will be correlated with the outcome, and I choose the winning one.

Comment author: Pimgd 19 May 2016 07:57:06AM 0 points

I can't even begin to model myself as "liking" smoking - it gives off a disgusting smell that clings to everything, and even being near second-hand smoke makes for uncomfortable breathing. If I try to model myself as someone who likes smoking, I don't see myself living, because I've been altered beyond recognition.

Add to that that it seems to be a problem without a correct answer. "Yes, smoke" seems to be the preferred option, given that the problem never states that you prefer smoking without cancer over smoking with cancer: "you prefer to smoke" plus some cancer-related facts you may or may not have an opinion about adds up to "go smoke already". But that isn't straightforwardly correct either, because from another worldview, to smoke is to admit that you have this genetic flaw and thus you have cancer. I have massive problems when it comes to understanding this sort of thing.

This question seems to have the same thing going on - pick one! A) "everyone is tortured" or B) "everyone gets a dust speck". But wait, there are numbers going on in the background: there are either a lot of clones of you or only one of you, and if everyone gets tortured then there's only one of you. It is left unsaid here that torture is far, far worse than a dust speck for a single individual, but the issue remains: I see "do a really really really bad thing" versus "do a meh thing", plus some fancy attempts to trip up various logic systems. What about the logic that A is simply always worse than B? I guess you could fix this by having OTHER people present, so that it's "you get tortured" versus "you and everyone else (3^^^3 people) get a dust speck"... but then there are loopholes in the region of "my preferences favor a world where there are people other than me, so I'll take torture if that means I get to exist in such a world".

As for one-box/two-box, I'd open B up, and if it was empty I'd take the contents of A home. If it contained the cash, well, I dunno. I guess I'd leave the 1000 behind, if the whole "if you take both then B is empty" idea was true. Maybe it's false. Maybe it's true! Regardless of that, I just got a million bucks, and an extra $1000, well, that's not all that much after receiving a whole million. (Yes, you could do stuff with that money, like buying malaria nets or something, but I am not an optimal rational agent, my thinking capacity is limited, and I'd rather bank the $1m than get tripped up by $1000 because I got greedy). ... weirdly enough, if you change the numbers so that A contained $1000 and B contained $1001, I'd open up B first... and then regardless of seeing the money, I'd take A home too.

Feel free to point out the holes in my thinking - I'd prefer examples that are not too "out there" because my answers tend to not be based on the numbers but on all the circumstances around it - that $1m would see me work on what I'd want to work on for the rest of my life, and that $1000 would reduce the time I'd need to spend working for doing what I wanna do by about a month (or 3 weeks).

Comment author: gjm 19 May 2016 02:47:15PM -1 points

I can't even begin to model myself as "liking" smoking

Then for the "smoking lesion" problem to be any use to you, you need to perform a sort of mental translation in which it isn't about smoking but about some other (perhaps imaginary) activity that you do enjoy but is associated with harmful outcomes. Maybe it's eating chocolate and the harmful outcome is diabetes. Maybe it's having lots of sex and the harmful outcome is syphilis. Maybe it's spending all your time sitting alone and reading and the harmful outcome is heart disease. The important thing is to keep the structure of the thing the same: doing X is associated with bad outcome Y, it turns out (perhaps surprisingly) that this is not because X causes Y but because some other thing causes both X and Y, you find yourself very much wanting to do X, so now what do you do?

Comment author: Jiro 21 May 2016 07:21:45PM 0 points

Having a smoking lesion make you choose smoking is vague. Does it make you choose smoking by increasing the utility you gain from smoking, without affecting your ability to reason based on this utility? Or does it make you choose smoking by affecting your ability to do logical reasoning?

In the former case, switching from nonsmoking to smoking because you made a logical conclusion should not affect your chances of dying, even though switching to smoking in general should affect your chance of dying.

In the latter case, switching to smoking should affect your chance of dying, but you are then asking a question which presupposes that, under some circumstances, you cannot reason your way to an answer.

Comment author: Pimgd 19 May 2016 09:01:21AM 0 points

I went looking around on wikipedia and found Kavka's toxin puzzle, which seems to be about "you can get a billion dollars if you intend to drink this poison (which will hurt a lot for a whole day, similar to the worst torture imaginable, but otherwise leave no lasting effects) tomorrow evening, but I'll pay you tonight"... but there I don't get the paradox either - what's stopping you from creating a sub agent (informing a friend) with the task of convincing you not to drink AFTER you've gotten the money? ... Possibly by force. Possibly by relying on saying things in a manner that you don't know that he knows he has to do this. Possibly with a whole lot of actors. Like scheduling a text "I am perfectly fine, there is nothing wrong with me" to parents and friends to be sent tomorrow morning.

Of course, this relies on my ability to raise the probability of intervention, but that seems like an easier challenge than engaging in willful doublethink... ... or you'd perhaps add various chemicals to your food the next day - I know I can be committed to an idea (I will do this task tonight), come home, eat dinner, and then I'd be totally uncommitted (that task can wait, I will play games first).

... A billion is a lot of money, perhaps I'd drink the poison and then have a hired person drug me to a coma, to be awoken the next day? You could hire a lot of medical staff with that kind of money.

Yet I get the feeling that all these "creative" solutions are not really allowed. Why is that?

Comment author: gjm 19 May 2016 02:40:19PM -1 points

all these "creative" solutions are not really allowed. Why is that?

Because the point of these questions isn't to challenge you to find a good answer, it's that the process of answering them may lead to insight into your actual value system, understanding of causation, etc. Finding clever ways around the problem is a bit like cheating in an optician's eye test[1]: sure, maybe you can do that, but the result will be that you get less effective eyesight correction and end up worse off.

[1] e.g., maybe you have found a copy of whatever chart they use and memorized the letters on it.

So, e.g., the point of the toxin puzzle is to ask: can you, really, form an intention to do something when you know that when the time comes you will be able to choose and will have no reason to choose to do it and much reason not to? That's an interesting psychological and/or philosophical question. You can avoid answering it by saying "well, I'd find a way to make taking the toxin not actually do me any harm", and that might be an excellent idea if you ever find yourself in that bizarre situation -- but the point of the question isn't to plan for an actual future where you encounter a quirkily sadistic but generous billionaire, it's to help clarify your thinking about what happens when you form an intention to do something.

Of course you may repurpose the question, and then your "clever" answers may be entirely to the point. Suppose you decide that no, you cannot form an intention to do something that you will have good reason to choose not to do; well, situations might arise where it would be useful to do that (even though the precise situation Kavka describes is unlikely), so it's reasonable to think about how you might make it possible, and then some "clever" answers may become relevant. But others probably won't, and the "get drugged into a coma" solution is probably one of those.

(Incidentally, in the original puzzle the amount of money was a million rather than a billion. That's probably still enough to hire someone to drug you into a coma.)

Comment author: Lumifer 19 May 2016 02:29:15PM 1 point

Yet I get the feeling that all these "creative" solutions are not really allowed. Why is that?

There are reasons.

Comment author: ArisKatsaris 19 May 2016 12:58:37PM 1 point

What's stopping you from creating a sub agent (informing a friend) with the task of convincing you not to drink AFTER you've gotten the money? ...

Like Odysseus with the Sirens, you'd have to "create a subagent"/hire a friend to convince you not to drink, before you intend to drink it, then actually change your intentions and want to drink it.

This doesn't seem possible for a human mind, though of course it's easier to imagine for artificial minds that can be edited at will.

Comment author: Pimgd 19 May 2016 08:25:15AM 0 points

I get the feeling maybe this ought to be two comments, one on the main thread and one here. But they're too entangled.

Comment author: Lumifer 18 May 2016 02:30:27PM 0 points

But I still make a real decision

Leaving Newcomb aside for the moment, in the smoking lesion case your decision is predetermined and you have no choice in the matter. I don't see how that counts as "a real decision".

Comment author: ArisKatsaris 18 May 2016 08:10:50PM 1 point

"your decision is predetermined and you have no choice in the matter."

Is LW now populated by the sort of people who haven't even heard of compatibilism and of the idea that determinism not only doesn't contradict having a choice, but is actually fundamental to the process of decision-making? You can only "choose" if your values and personality can determine the outcome.

Comment author: Lumifer 18 May 2016 08:18:53PM 0 points

By "heard of", do you actually mean "agree with"?

Comment author: entirelyuseless 18 May 2016 02:48:28PM 0 points

I agree that this is what most people think, but it is a mistake.

I don't agree to leave Newcomb aside in considering this, because my position is that they are the same problem. If I have no choice in the smoking lesion, I have no choice in Newcomb.

Consider the Newcomb case.

I exist, and my brain and body are in a certain condition. I did not put them in that condition. I cannot make them not have been in that condition.

Omega looks at me. Using the condition of my brain and body -- conditions over which I have no control whatsoever -- he determines whether I am going to choose one box or two boxes. He has 100% accuracy, and this implies that the situation is completely determined by the condition of my brain and body.

In other words, "the condition of my brain and body" functions exactly like the lesion. It completely "predetermines" the outcome. If I have no choice in the lesion case, I have no choice in Newcomb.

Nonetheless, I say I have a choice in Newcomb, because the condition of my brain and body imply that I will engage in a certain process of reasoning, considering the alternatives of one boxing and two boxing, and choose one of them.

Likewise, I have a choice in the lesion case, because the lesion implies that I will engage in a certain process of reasoning, considering the alternatives of smoking and not smoking, and choose one of them.

In both cases, the outcome is predetermined. In both cases, the outcome is the result of a choice that results from a process of thought.

Comment author: Lumifer 18 May 2016 03:10:23PM 0 points

I don't agree to leave Newcomb aside in considering this, because my position is that they are the same problem.

If they are the same problem, you shouldn't care about leaving one aside. The smoking lesion is a simpler and clearer problem because it doesn't need to postulate a supernatural entity.

In other words, "the condition of my brain and body" functions exactly like the lesion. It completely "predetermines" the outcome.

So you're a determinist. OK.

Nonetheless, I say I have a choice in Newcomb, because the condition of my brain and body imply that I will engage in a certain process of reasoning, considering the alternatives of one boxing and two boxing, and choose one of them.

That, to me, doesn't follow at all. You don't choose, you're just an automaton going through the motions. It is, as you say, similar to the lesion -- there might well be complicated intermediate steps but there is no choice involved. You literally do not have a choice.

In which way is your choice different from the choice of a calculator, which also goes through a bunch of processes before deciding to output 4 as a response to 2+2?

Comment author: ike 14 May 2016 12:47:51AM 0 points

I'm referring to TDT, which disagrees.

Comment author: entirelyuseless 14 May 2016 02:35:11PM -1 points

Eliezer disagrees, but no formal decision theory disagrees, because the two situations are formally identical.

Comment author: ike 14 May 2016 05:24:29PM 0 points

They're formally identical only if you consider the choice to not counterfactually affect the outcome. Asserting that counterfactuals don't go backwards in time makes the choice not affect it, but that's just question-begging.

It hasn't been formalized because we don't know how to deal with logical uncertainty fully yet.

Comment author: entirelyuseless 14 May 2016 09:25:49PM 0 points

If I have the 100% version of the lesion, it is true to say, "If I had decided not to smoke, I would not have had the lesion," because that is the only way I could have decided not to smoke, in the same way that in Newcomb it is true to say, "If I had picked one-box, I would have been a one-boxer," because that is the only way I could have picked one box.

Comment author: ike 14 May 2016 09:54:27PM 0 points

In one there's counterfactual dependence and in the other there isn't. If your model doesn't take into account counterfactuals then you can't even tell the difference between smoking lesions and the case where smoking really does cause cancer.

Comment author: entirelyuseless 15 May 2016 01:49:14AM 0 points

Exactly. There is no difference; either way you should not smoke.

Also, what do you mean by saying that there is "counterfactual dependence" in one case and not in the other? Do you disagree with my previous comment? Do you think that I would have had the lesion no matter what I decided, in a situation where having the lesion has a 100% chance of causing smoking?

Comment author: ike 15 May 2016 02:15:20AM -1 points

So you're not just arguing with Eliezer, you're arguing with the entirety of causal decision theory.

I strongly suspect you don't understand causal decision theory at this point, or counterfactuals as used by it. If this is the case, see https://en.wikipedia.org/wiki/Causal_decision_theory, or http://lesswrong.com/lw/164/timeless_decision_theory_and_metacircular/, or https://wiki.lesswrong.com/wiki/Causal_Decision_Theory

Those links explain it better than I can quickly, but I'll try anyway: counterfactuals ask "if you reached into the universe from outside and changed A, what would happen?" Only things caused by A change, not things merely correlated with A.
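This notion of a counterfactual as an outside intervention can be sketched with two hypothetical causal models (my own illustration, not code from the thread): a "lesion world" where a lesion causes both smoking and cancer, and a "direct world" where smoking itself causes cancer. The two look identical observationally, but intervening on smoking separates them.

```python
import random

# Lesion world:  lesion -> smoking, lesion -> cancer (smoking is inert).
# Direct world:  smoking -> cancer.
# Observationally the two are indistinguishable here: smoking and cancer
# are perfectly correlated in both worlds.

def lesion_world(do_smoke=None):
    lesion = random.random() < 0.5
    # An intervention ("do") overrides the lesion's influence on behavior:
    smoke = lesion if do_smoke is None else do_smoke
    cancer = lesion  # cancer tracks the lesion, not the smoking
    return smoke, cancer

def direct_world(do_smoke=None):
    smoke = (random.random() < 0.5) if do_smoke is None else do_smoke
    cancer = smoke  # cancer tracks smoking itself
    return smoke, cancer

def p_cancer_given_do(world, do_smoke, n=10_000):
    """Estimate P(cancer | do(smoke)) by sampling the intervened model."""
    random.seed(0)  # fixed seed for a reproducible estimate
    return sum(world(do_smoke)[1] for _ in range(n)) / n

# "Reaching in from outside" and forcing non-smoking separates the worlds:
print(p_cancer_given_do(lesion_world, do_smoke=False))  # ~0.5: the lesion is untouched
print(p_cancer_given_do(direct_world, do_smoke=False))  # 0.0: no smoking, no cancer
```

Only the thing caused by the intervened variable changes; the mere correlate (the lesion) does not, which is exactly the distinction CDT's counterfactuals are built on.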

Comment author: entirelyuseless 15 May 2016 02:18:57AM 0 points

I understand causal decision theory, and yes, I disagree with it. That should be obvious since I am in favor of both one-boxing and not smoking.

(Also, if you reach inside and change your decision in Newcomb, that will not change what it is in the box anymore than changing your decision will change whether you have a lesion.)

Comment author: ike 15 May 2016 02:29:02AM 0 points

So why did you ask me what I meant about counterfactuals? If you take the TDT assumption that identical copies of you counterfactually affect each other, then Newcomb has counterfactual dependence and the smoking lesion doesn't.

I'm not sure of your point here.