MugaSofer comments on Decision Theory FAQ - Less Wrong

Post author: lukeprog 28 February 2013 02:15PM


Comment author: incogn 04 March 2013 06:39:23PM 7 points

(Thanks for discussing!)

I will address your last paragraph first. The only significant difference between my original example and the proper Newcomb's paradox is that, in Newcomb's paradox, Omega is made a predictor by fiat and without explanation. This allows perfect prediction and choice to sneak into the same paragraph without obvious contradiction. It seems that as soon as I make the mode of prediction transparent, you protest that no choice is being made.

From Omega's point of view, its Newcomb subjects are not making choices in any substantial sense; they are just predictably acting out their own personalities. That is what allows Omega its predictive power. Choice is not something inherent to a system, but a feature of an outsider's model of a system, in much the same sense that randomness is not something inherent to a game of Eeny, meeny, miny, moe, however much it might seem that way to children.

As for the rest of our disagreement, I am not sure why you insist that CDT must work with a misleading model. The standard formulation of Newcomb's paradox is inconsistent or underspecified. Here are some messy explanations of why, in list form:

  • "Omega predicts accurately, then you get to choose" is a false model, because "Omega has predicted you will two-box, then you get to choose" does not actually let you choose; one-boxing is an illegal choice, and two-boxing the only legal choice (In Soviet Russia joke goes here)
  • "You get to choose, then Omega retroactively fixes the contents of the boxes" is fine, and CDT solves it by one-boxing (this model and the next are worked through in the sketch after this list)
  • "Omega tries to predict but is just blindly guessing, then you really get to choose" is fine, and CDT solves it by two-boxing
  • "You know that Omega has perfect predictive power and are free to commit to either one- or two-boxing as you prefer" is nowhere near similar to the original Newcomb formulation, but is obviously solved by one-boxing
  • "You are not sure about Omega's predictive power and are torn between trying to 'game' it and cooperating with it" is not Newcomb's problem
  • "Your choice has to be determined by a deterministic algorithm, but you are not allowed to know this when designing the algorithm, so you must instead work in ignorance and design it by a false dominance principle" is just cheating
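
To make that concrete, here is a minimal sketch in Python (not part of the original comment; the $1,000,000 and $1,000 payoffs are the standard ones assumed from the usual formulation, and the function name and probability values are purely illustrative) showing how CDT's verdict flips between the second and third models above:

```python
# A sketch of how CDT's verdict depends on the causal model it is handed.
# Assumed standard Newcomb payoffs: the opaque box holds $1,000,000 iff
# Omega predicted one-boxing; the transparent box always holds $1,000.

M, K = 1_000_000, 1_000

def cdt_choice(p_million_if_one_box, p_million_if_two_box):
    """Pick the action with the higher causal expected utility, where the
    probability that the opaque box holds $1M may (or may not) depend on
    the action, according to the causal model supplied."""
    ev_one_box = p_million_if_one_box * M
    ev_two_box = p_million_if_two_box * M + K
    return "one-box" if ev_one_box > ev_two_box else "two-box"

# "You choose, then Omega retroactively fixes the boxes": the action
# causally determines the contents, so CDT one-boxes.
print(cdt_choice(1.0, 0.0))  # -> one-box

# "Omega is blindly guessing": the contents are causally independent of
# the action (a fixed 50% chance either way), so the extra $1,000
# dominates and CDT two-boxes.
print(cdt_choice(0.5, 0.5))  # -> two-box
```

CDT itself is identical in both runs; only the supplied causal model differs, which is the sense in which a wrong answer means it was fed the wrong model.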
Comment author: MugaSofer 06 March 2013 11:41:13AM 1 point

"Omega predicts accurately, then you get to choose" is a false model, because "Omega has predicted you will two-box, then you get to choose" does not actually let you choose; one-boxing is an illegal choice, and two-boxing the only legal choice (In Soviet Russia joke goes here)

Not if you're a compatibilist, which Eliezer is, last I checked.

Comment author: incogn 11 March 2013 07:31:34AM 2 points

The post scav made more or less represents my opinion here. "Compatibilism", "choice", "free will", and "determinism" carry too many vague definitions for me to work with. For compatibilism to make any sort of sense to me, I would need a new definition of free will. It is already difficult to discuss how things are without simultaneously having to discuss how to use and interpret words.

Trying to leave the problematic words out of this, my claim is that the only reason CDT ever gives a wrong answer in a Newcomb problem is that you are feeding it the wrong model. http://lesswrong.com/lw/gu1/decision_theory_faq/8kef elaborates on this without muddying the waters too much with vaguely defined terms.

Comment author: scav 07 March 2013 11:15:00AM 1 point

I don't think compatibilism means that you can pretend two logically mutually exclusive propositions can both be true. If it is accepted as a true proposition that Omega has predicted your actions, then your actions are decided before you experience the illusion of "choosing" them. Actually, whether or not there is an Omega predicting your actions, this may still be true.

Accepting the predictive power of Omega, it logically follows that when you one-box you will get the $1M. A CDT-rational agent only fails on this if it fails to accept the prediction and constructs a (false) causal model that includes the incoherent idea of "choosing" something other than what must happen according to the laws of physics. Does CDT require such a false model to be constructed? I dunno. I'm no expert.

The real causal model is that some set of circumstances decided what you were going to "choose" when presented with Omega's deal, and those circumstances also led to Omega's 100% accurate prediction.

If being a compatibilist leads you to reject the possibility of such a scenario, then it also logically excludes the perfect predictive power of Omega and Newcomb's problem disappears.

But in the problem as stated, you will only two-box if you get confused about the situation or you don't want $1M for some reason.

Comment author: ArisKatsaris 15 March 2013 01:13:59AM 2 points

"then your actions are decided before you experience the illusion of "choosing" them."

Where's the illusion? If I choose something according to my own preferences, why should it be an illusion merely because someone else can predict that choice if they know said preferences? Why does their knowledge of my action affect my decision-making powers?

The problem is that you're using the words "decided" and "choosing" confusingly -- with different meanings at the same time. One meaning is having the final input on the action I take; the other seems to be about when the output can be calculated.

The output can be calculated before I actually even insert the input, sure -- but it's still my input, and therefore my decision -- nothing illusory about it, no matter how many people calculated said input in advance: even though they calculated it, it was I who controlled it.

Comment author: scav 15 March 2013 03:04:21PM 0 points

The knowledge of your future action is only knowledge if it has a probability of 1. Omega acquiring that knowledge, by calculation or otherwise, does not affect your choice; but the very fact that such knowledge can exist (whether Omega has it or not) means your choice is determined absolutely.

What happens next is exactly the everyday meaning of "choosing". Signals zap around your brain in accordance with the laws of physics and evaluate courses of action according to some neural representation of your preferences, and one course of action is the one you will "decide" to do. Soon afterwards, your conscious mind becomes aware of the decision and feels like it made it. That's one part of the illusion of choice.

EDIT: I'm assuming you're a human. A rational agent need not have this incredibly clunky architecture.

The second part of the illusion is specific to this very artificial problem. The counterfactual (you choose the opposite of what Omega predicted) just DOESN'T EXIST. It has probability 0. It's not even that it could have happened in another branch of the multiverse - it is logically precluded by the condition of Omega being able to know with probability 1 what you will choose. 1 - 1 = 0.

Comment author: ArisKatsaris 15 March 2013 03:28:28PM 1 point

"The knowledge of your future action is only knowledge if it has a probability of 1."

Do you think Newcomb's problem fundamentally changes if Omega is only right with a probability of 99.9999999999999%?

"Signals zap around your brain in accordance with the laws of physics and evaluate courses of action according to some neural representation of your preferences, and one course of action is the one you will "decide" to do. Soon afterwards, your conscious mind becomes aware of the decision and feels like it made it."

That process "is" my mind -- there's no mind anywhere which can be separate from those signals. So you say that my mind feels like it made a decision but you think this is false? I think it makes sense to say that my mind feels like it made a decision and it's completely right most of the time.

My mind would only be having the "illusion" of choice if someone else, someone outside my mind, intervened between the signals and implanted a different decision according to their own desires, with the rest of my brain just rationalizing the already-made choice. But as long as the process is truly internal, it is truly my mind's -- and my mind's feeling that it made the choice corresponds to reality.

"The counterfactual (you choose the opposite of what Omega predicted) just DOESN'T EXIST."

That the opposite choice isn't made in any universe doesn't mean that the choice actually made isn't real -- indeed, the less real the opposite choice, the more real your actual choice.

Taboo the word "choice", and let's talk about "decision-making process". Your decision-making process exists in your brain, and therefore it's real. It doesn't have to be uncertain in outcome to be real -- it's real in the sense that it is actually occurring. Occurring in a deterministic manner, YES -- but how does that make the process any less real?

Is gravity unreal or illusory because it's deterministic and predictable? No. Then neither is your decision-making process unreal or illusory.

Comment author: scav 15 March 2013 05:52:27PM 0 points

Yes, it is your mind going through a decision-making process. But most people feel that their conscious mind is the part making decisions, and for humans that isn't actually true -- although attention seems to be part of consciousness, and attention to different parts of the input probably influences what happens. I would call the feeling of consciously making a decision, when that isn't really what is happening, somewhat illusory.

The decision-making process is real, but my feeling that there was an alternative I could have chosen instead (even though in this universe that isn't true) is inaccurate. Taboo "illusion" too if you like, but we can probably agree to call that a different preference for word usage and move on.

Incidentally, I don't think Newcomb's problem changes dramatically as Omega's success rate varies. You just get different expected values for one-boxing and two-boxing on a continuous scale, don't you?
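
A quick sketch of that continuous scale (again not from the original thread; Python, assuming the standard $1,000,000 and $1,000 payoffs and an Omega whose error rate is the same for both kinds of subject):

```python
# Expected value of each action as a function of Omega's accuracy p,
# assuming the standard payoffs and a symmetric error rate.

M, K = 1_000_000, 1_000

def ev_one_box(p):
    return p * M  # paid $1M only when Omega correctly predicted one-boxing

def ev_two_box(p):
    return (1 - p) * M + K  # $1M only when Omega wrongly predicted one-boxing

for p in (0.5, 0.5005, 0.9, 0.999999999999999):
    print(f"p={p}: one-box={ev_one_box(p):,.0f}, two-box={ev_two_box(p):,.0f}")

# One-boxing wins exactly when p > (M + K) / (2 * M) = 0.5005, so the
# problem changes quantitatively but not qualitatively as p varies.
```

On this accounting, ArisKatsaris's 99.9999999999999%-accurate Omega gives expected values practically indistinguishable from the perfect predictor's, which supports the point that nothing qualitative hinges on the probability being exactly 1.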