ThrustVectoring comments on Why do theists, undergrads, and Less Wrongers favor one-boxing on Newcomb? - Less Wrong

15 points. Post author: CarlShulman 19 June 2013 01:55AM


Comment author: Decius 19 June 2013 05:36:39AM -2 points [-]

Newcomb's problem isn't about decision theory, it's about magic and strange causation. Replace the magician with a human agent and one-boxing isn't nearly as beneficial anymore, even when the human's accuracy is very high.

Less Wrongers publicly consider one-boxing the correct answer because it's non-obvious and correct for the very limited problem where decisions can be predicted in advance, just as we (taken as a whole) pretend that we cooperate on the one-shot Prisoner's Dilemma.

People in other areas are more likely to believe other things about the magic involved (for example, that free will exists in a meaningful form), and therefore have different opinions about what the optimal answer is.

Comment author: [deleted] 19 June 2013 05:46:25AM 8 points [-]

Newcomb's problem isn't about decision theory...

Well, it was first introduced into philosophical literature by Nozick explicitly as a challenge to the principle of dominance in traditional decision theories. So, it's probably about decision theory at least a little bit.

Comment author: David_Gerard 19 June 2013 09:20:51AM *  0 points [-]

From the context, I would presume "about" in the sense of "this is why it's fascinating to the people who make a big deal about it". (I realise the stated reason for LW interest is the scenario of an AI whose source code is known to Omega having to make a decision, but the people being fascinated are humans.)

Comment author: Decius 20 June 2013 02:30:05AM -2 points [-]

Given that your source code is known to Omega, your decision cannot be 'made'.

Comment author: wedrifid 20 June 2013 03:47:06AM 0 points [-]

Given that your source code is known to Omega, your decision cannot be 'made'.

Yes it can.

Comment author: Decius 20 June 2013 05:22:35AM 0 points [-]

Perhaps it would sound better: Once a deterministic method of making a determination (along with all of the data that method will take into account) is set, it cannot reasonably be said that a decision is being made. A Customer Service Representative who follows company policy regardless of the outcome isn't making decisions; he's abdicating the decision-making to someone else.

It's probable that free will doesn't exist, in which case decisions don't exist and agenthood is an illusion; that would be consistent with the line of thinking which has produced the most accurate observations to date. I will continue to act as though I am an agent, because on the off chance that I do have a choice, that is the choice I want to make.
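A toy sketch (not from the thread, all names invented) of the position being argued: if an agent's decision procedure and its inputs are both fixed, then a predictor that knows them can reproduce the outcome simply by running the same procedure.

```python
# Hypothetical deterministic decision procedure: the "method of making a
# determination" plus "all of the data that method will take into account".
def agent_decision(prefers_guaranteed_money: bool) -> str:
    """A fixed, deterministic policy mapping inputs to a choice."""
    return "one-box" if prefers_guaranteed_money else "two-box"

def omega_predict(decision_procedure, *inputs) -> str:
    """Omega, given the agent's 'source code', predicts by simulating it."""
    return decision_procedure(*inputs)

# Because procedure and inputs fully determine the output, the
# prediction always matches the actual choice.
actual = agent_decision(True)
predicted = omega_predict(agent_decision, True)
assert predicted == actual
```

Whether running such a procedure still counts as "making a decision" is exactly what the rest of the thread disputes.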

Comment author: RichardKennaway 20 June 2013 09:15:39AM 3 points [-]

Once a deterministic method of making a determination (along with all of the data that method will take into account) is set, it cannot reasonably be said that a decision is being made.

Really?

Comment author: Decius 21 June 2013 02:16:34AM 0 points [-]

Oddly enough, those are about programming. There's nothing in there that is advice to robots about what decisions to make.

Comment author: RichardKennaway 21 June 2013 08:20:57AM *  1 point [-]

There's nothing in there that is advice to robots about what decisions to make.

It is all about robots -- deterministic machines -- performing activities that everyone unproblematically calls "making decisions". According to what you mean by "decision", they are inherently incapable of doing any such thing. Robots, in your view, cannot be "agents"; yet a similar Google search shows that no one who works with robots has any problem describing them as agents.

So, what do you mean by "decision" and "agenthood"? You seem to mean something ontologically primitive that no purely material entity can have; and so you conclude that if materialism is true, nothing at all has these things. Is that your view?

Comment author: Decius 22 June 2013 11:13:25PM 0 points [-]

It would be better to say that materialism being true has the prerequisite of determinism being true, in which case "decisions" do not have the properties we're disputing.

Comment author: wedrifid 20 June 2013 05:33:30PM 0 points [-]

Perhaps it would sound better: Once a deterministic method of making a determination (along with all of the data that method will take into account) is set, it cannot reasonably be said that a decision is being made.

Still not true. The prediction capability of other agents in the same universe does not make the decisions made by an agent into not-decisions. (This is a common confusion that often leads to bad decision-theoretic claims.)

Comment author: Decius 21 June 2013 02:32:49AM *  0 points [-]

If free will is not the case, there are no agents (anymore?)

If it is the case that the past state of the universe might lead to an agent making any one of two or more decisions, then free will is the case and perfect prediction is impossible; if it is not the case that an entity can take any one of two or more actions, then free will is not the case and perfect prediction is possible.

Note that it is possible for free will to exist but for me to not be one of the agents. Sometimes I lose sleep over that.

Comment author: wedrifid 21 June 2013 06:37:00AM *  1 point [-]

If free will is not the case, there are no agents (anymore?)

A starting point.

Comment author: Decius 22 June 2013 11:01:23PM -1 points [-]

The scale does not decide the weight of the load.

Comment author: TheOtherDave 21 June 2013 03:03:38AM 1 point [-]

Can you clarify what you mean by "agent"?

Comment author: Decius 22 June 2013 10:58:59PM 0 points [-]

One of the necessary properties of an agent is that it makes decisions.

Comment author: TheOtherDave 20 June 2013 05:49:17PM 0 points [-]

So... hrm.
How do I tell whether something is a decision or not?

Comment author: Luke_A_Somers 20 June 2013 09:24:47PM 0 points [-]

By the causal chain that goes into it. Does it involve modeling the problem and considering values and things like that?

Comment author: TheOtherDave 20 June 2013 09:34:24PM 1 point [-]

So if a programmable thermostat turns the heat on when the temperature drops below 72 degrees F, whether that's a decision or not depends on whether its internal structure is a model of the "does the heat go on?" problem, whether its set-point is a value to consider, and so forth. Perhaps reasonable people can disagree on that, and perhaps they can't. In any case, if I turn the heat on when the temperature drops below 72 degrees F, most reasonable people would agree that my brain has models and values and so forth, and therefore that I have made a decision.

(nods) OK, that's fair. I can live with that.
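A minimal sketch of the thermostat example above (the function name and set-point are invented for illustration): the entire "decision" is one deterministic comparison against a set-point, which is what makes it a useful edge case for the "what counts as a decision?" question.

```python
# Hypothetical thermostat rule: a deterministic procedure whose only
# "model" of the problem is a comparison against a set-point "value".
def thermostat_decides(temperature_f: float, set_point_f: float = 72.0) -> bool:
    """Turn the heat on exactly when the temperature drops below the set-point."""
    return temperature_f < set_point_f

print(thermostat_decides(70.0))  # True: heat goes on
print(thermostat_decides(75.0))  # False: heat stays off
```

The same input always yields the same output, so on Decius's definition no decision is made here; on Luke_A_Somers's causal-chain criterion, the question is whether this comparison counts as "modeling the problem and considering values".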

Comment author: CAE_Jones 19 June 2013 07:28:32AM 0 points [-]

Newcomb's problem isn't about decision theory, it's about magic and strange causation. Replace the magician with a human agent and one-boxing isn't nearly as beneficial anymore, even when the human's accuracy is very high.

I felt a weird sort of validation when I saw that theists tend to 1box more than atheists, and I think you pretty much nailed why: theists are more likely to believe that omniscience is possible, so it isn't surprising that fewer theists believe they can beat Omega.

I haven't studied the literature on free will well enough to know the terms; I noticed that the distribution of beliefs on free will was given in the post, and I suspect that if I were up to speed on the terminology, that would considerably affect my confidence in my model of why people 1box/2box. For now, I'm just noticing that all the arguments in favor of 2boxing that I've read seem to come down to refusal to believe that Omega can be a perfect predictor. But like I said, I'm not well studied on the literature and might not be saying anything meaningful.

Comment author: Decius 19 June 2013 05:25:22PM -1 points [-]

I'm just noticing that all the arguments in favor of 2boxing that I've read seem to come down to refusal to believe that Omega can be a perfect predictor.

That hits what I meant pretty much on the head. If Omega is a perfect predictor, then it is meaningless to say that the human is making a choice.