Unknowns comments on The Smoking Lesion: A problem for evidential decision theory - Less Wrong

3 [deleted] 23 August 2010 09:01AM

Comment author: Unknowns 23 August 2010 04:09:07PM 1 point [-]

Actually, this is excellent. We could rewrite Newcomb's problem like this:

Omega places in the box, together with the million or non-million, a device that influences your brain. It programs the device so that you are caused to take both boxes if it does not place the million, and so that you are caused to one-box if it does. In other words, Omega decides in advance whether you are going to get the million or not, then sets up the situation so that you will make the choice that gets you what it wanted you to get.

However, the influence on your brain is quite subtle; to you, it still feels like you are deciding in the normal way, using some decision theory or other.

Now, do you one-box or two-box? This is certainly exactly the same as the smoking lesion. Nor can you answer "I don't have to decide because my actions are determined" because your actions might well be determined in real life anyway, and you still have to decide.

If you one-box here, you should not smoke in the lesion problem. If you don't one-box here... well, too bad for you.

Comment author: TobyBartels 23 August 2010 04:46:49PM 1 point [-]

Now, do you one-box or two-box?

The obvious answer is ‘whatever Omega decided’. But I hope that I one-box.

Comment author: Unknowns 23 August 2010 05:10:24PM 0 points [-]

You might as well say in general that you do "whatever the laws of physics determine."

But you still have to decide, anyway. Hoping doesn't help.

Comment author: TobyBartels 23 August 2010 07:13:03PM 1 point [-]

I flip a coin; if it's heads, I give you a million dollars, else I give you a thousand dollars. How much money should you get from me? (And is this problem any different from the last one?)

At some point, these questions no longer help us make rational decisions. Even an AI with complete access to its source code can't do anything to prepare itself for these situations.
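For what it's worth, the coin-flip question above has a straightforward expected value. A minimal sketch (the dollar figures are taken from the comment; the variable names are mine):

```python
# Expected value of the coin-flip offer described above:
# heads pays $1,000,000, tails pays $1,000.
p_heads = 0.5
expected = p_heads * 1_000_000 + (1 - p_heads) * 1_000
print(expected)  # 500500.0
```

Of course, the point of the comment is that knowing this number does nothing to help you decide anything; the outcome is out of your hands either way.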

Comment author: Kingreaper 23 August 2010 07:16:51PM *  0 points [-]

No, you don't; you don't get to decide. The decision has already been made.

You're ignoring the fact that, normally, the thoughts going on in your brain are PART of how the decision is determined by the laws of physics. In your scenario, they're irrelevant. Whatever you think, your action is determined by the machine.

EDIT: http://lesswrong.com/lw/2mc/the_smoking_lesion_a_problem_for_evidential/2hx7?c=1 You've claimed that you would one-box in this scenario. You've claimed, therefore, that you would one-box even if programmed to two-box.

I.e., you've claimed you're capable of logically impossible acts. Either that, or you don't understand your own scenario.

Comment author: Unknowns 23 August 2010 07:47:54PM 0 points [-]

The machine works only by getting you to think certain things, and these things cause your decision. So you decide in the same way you normally do.

I did not say I would one-box if I were programmed to two-box; I said I would one-box.

Comment author: Kingreaper 23 August 2010 10:16:24PM *  0 points [-]

And if you were programmed to two-box, and unaware of that fact?

Your response is like responding to "what would you do if there was a 50% chance of you dying tomorrow?" with: "I'd survive"

It completely ignores the point of the situation, and assumes godlike agency.

Comment author: Kingreaper 23 August 2010 04:13:54PM *  -1 points [-]

I do whatever I'm being influenced into doing.

This is a fact.

You can argue all you like about what I should do, but what I will do is already decided, and isn't influenced by my thoughts, my rationality, or anything else.

All the information needed to determine what I will do is in the lesion/machine.

Applying rationality to a scenario where the agent is by definition incapable of rationality is just plain silly.

Comment author: Unknowns 23 August 2010 04:15:18PM *  2 points [-]

Do you think that in real life you are exempt from the laws of physics?

If not, does that mean that "what you will do is already decided"? That you don't have to make a decision? That you are "incapable of rationality"?

Comment author: Kingreaper 23 August 2010 04:16:50PM *  -2 points [-]

In the real world, the information that determines my action is contained within me. In order to determine the action, you would have to run "me" (or at least some reasonable part thereof).

In your version of Newcomb's problem, the information that determines my action is contained within the machine.

Can you see why I consider that a significant difference?

Comment author: Unknowns 23 August 2010 05:08:34PM 2 points [-]

No. The machine determines your action only by determining what is in you, which determines your action in the normal way.

So you still have to decide what to do.

Comment author: Kingreaper 23 August 2010 05:20:42PM *  -2 points [-]

Do you see how this scenario rules out the possibility of me deciding rationally?

EDIT: In fact, let me explain now, before you answer. Give me a sec and I'll re-edit.

EDIT2: If the rational decision is to two-box, and Omega has set me to one-box, then I must not be deciding rationally. Correct?

If the rational decision is to one-box, and Omega has set me to two-box, then I must not be deciding rationally. Correct?

Now, assuming I will not decide rationally, as I know I will not, I need waste no time thinking. I'll do whichever I feel like.

Comment author: Unknowns 23 August 2010 06:10:42PM 1 point [-]

You can substitute "the laws of physics" for "Omega" in your argument, and if it proves you will not decide rationally in the Omega situation, then it proves you will not decide *anything* rationally in real life.

Comment author: [deleted] 23 August 2010 06:23:23PM -1 points [-]

Presumably (or at least hopefully), if you are a rational agent with a certain DT, then a long and accurate description of the ways that "the laws of physics" affect your decision-making process breaks down into:

  1. The ways that the laws of physics affect the computer you're running on
  2. How the computer program, and specifically your DT, works when running on a reliable computer.

It's not clear how a reduction like this could work in your example.

Comment author: Unknowns 23 August 2010 06:34:25PM 1 point [-]

In my example, it is given that Omega decides what you are going to do, but that he causes you to do it in the same way you ordinarily do things, namely with some decision theory, by thinking some thoughts, etc.

If the fact that Omega causes it means that you are irrational, then the fact that the laws of physics cause your actions also means that you are irrational.

Comment author: Kingreaper 23 August 2010 06:38:43PM *  -2 points [-]

A rational entity can exist in the laws of physics. A rational entity by definition has a determined decision, if there is a rational decision possible. A rational entity cannot make an irrational decision.

You're getting hung up on the determinism. That's not the issue. Rational entities are by definition deterministic.

What they are not is deterministically irrational. Your scenario requires an irrational entity.

Your scenario requires that the entity be able to make an irrational decision using its normal thought processes. This requires that it be using irrational thought processes.

Comment author: Kingreaper 23 August 2010 06:16:50PM *  -1 points [-]

No, it proves I will not decide everything rationally if I don't decide everything rationally. Which is pretty tautologous.

The Omega example requires that I will not decide everything rationally.

The real world permits the possibility of a rational agent. Thus it makes sense to question what a rational agent would do. Your scenario doesn't permit a rational agent, thus it makes no sense to ask what a rational agent would do.

You're missing the point, Unknowns. In your scenario, my decision doesn't depend on how I decide; it just depends on the setting of the box. So I might as well just decide arbitrarily, and save effort.

What would you do in your own scenario?

Comment author: Unknowns 23 August 2010 06:18:07PM 1 point [-]

In real life, your decision doesn't depend on how you decide it. It just depends on the positions of your atoms and the laws of physics. So you might as well just decide arbitrarily, and save effort.

I would one-box.

Comment author: Kingreaper 23 August 2010 07:33:22PM -1 points [-]

So, if Omega programmed you to two-box, you would one-box?

That's not exactly consistent. In fact, that's logically impossible.

Essentially, you're denying your own scenario.

Comment author: thomblake 23 August 2010 05:52:54PM 1 point [-]

You left out some steps in your argument. It appears you were going for a disjunction elimination, but if so I'm not convinced of one premise. Let me lay out more explicitly what I think your argument is supposed to be, then I'll show where I think it's gone wrong.

A = "The rational decision is to two-box"
B = "Omega has set me to one-box"
C = "The rational decision is to one-box"
D = "Omega has set me to two-box"
E = "I must not be deciding rationally"

1. (A∧B)→E
2. (C∧D)→E
3. (A∧B)∨(C∧D)
4. ∴ E

I'll grant #1 and #2. This is a valid argument, but the dubious proposition is #3. It is entirely possible that (A∧D) or that (C∧B). And in those cases, E is not guaranteed.

In short, you might decide rationally in cases where you're set to one-box and it's rational to one-box.
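The gap in premise 3 can also be checked mechanically. As a sketch (the propositional labels A–E are thomblake's; the assumption that A/C and B/D each form exclusive, exhaustive pairs is mine), a short Python enumeration finds exactly the assignments where premise 3 fails:

```python
from itertools import product

# A = "rational decision is two-box", C = "rational decision is one-box"
# B = "Omega set me to one-box",      D = "Omega set me to two-box"
# Assumed: C = not A and D = not B (each pair exclusive and exhaustive).
cases = []
for a, b in product([True, False], repeat=2):
    c, d = not a, not b
    premise3 = (a and b) or (c and d)   # (A AND B) OR (C AND D)
    cases.append((a, b, premise3))

# Premise 3 fails precisely in the "matched" cases (A AND D) and (C AND B),
# where nothing forces E ("I must not be deciding rationally").
failing = [(a, b) for a, b, p3 in cases if not p3]
print(failing)  # [(True, False), (False, True)]
```

The two failing assignments correspond to thomblake's (A∧D) and (C∧B): the cases where Omega's setting happens to match the rational choice, so the agent may well be deciding rationally.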

Comment author: Kingreaper 23 August 2010 06:01:38PM *  -1 points [-]

Proposition 3 is only required to be possible, not to be true, and is supported by the existence of both paths of the scenario: the scenario requires that both A and B are possible.

It is possible that I will make the rational decision in one path of the scenario. But the scenario contains both paths. In one of the two paths I must be deciding irrationally.

Given as it was stated that I will use my normal thought-processes in both paths, my normal thought-processes must, in order for this scenario to be possible, be irrational.

Comment author: thomblake 23 August 2010 06:12:19PM *  1 point [-]

You're mixing modes.

It is not the case that in order for this scenario to be possible, your normal thought-processes must be necessarily irrational. Rather, in order for this scenario to be possible, your normal thought-processes must be possibly irrational. And clearly that's the case for normal non-supernatural decision-making.

ETA: Unknowns stated the conclusion better

Comment author: Kingreaper 23 August 2010 06:21:38PM *  0 points [-]

Let's try a different tack: Is it rational to decide rationally in Unknown's scenario?

1. Thinking takes effort, and this effort is a disutility. (-c)

2. If I don't think, I will come to the answer the machine is set to (of utility X).

3. If I do think, I will also come to the answer the machine is set to (of utility X).

My outcome if I don't think is X. My outcome if I do think is X - c, which is less than X. I shouldn't waste my effort thinking this through.
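The cost argument above reduces to a one-line utility comparison. A minimal sketch (the numeric values of X and c are placeholders I've assumed; only the sign of c matters for the conclusion):

```python
# Illustrative utilities for the argument above; X and c are assumed
# placeholder values -- only c > 0 is needed for the conclusion.
X = 100   # utility of the outcome the machine has fixed, either way
c = 1     # disutility (effort cost) of deliberating

utility_no_thinking = X        # step 2: same answer, no effort
utility_thinking = X - c       # step 3: same answer, minus effort cost

print(utility_thinking < utility_no_thinking)  # True: thinking is dominated
```

Since both branches of the scenario fix the outcome independently of deliberation, thinking is strictly dominated whenever its cost is positive.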