NickRetallack comments on Newcomb's Problem and Regret of Rationality - Less Wrong

64 Post author: Eliezer_Yudkowsky 31 January 2008 07:36PM


Comment author: Amanojack 22 May 2011 05:24:08PM *  -1 points [-]

Newcomb's Problem is silly. It's only controversial because it's dressed up in wooey vagueness. In the end it's just a simple probability question and I'm surprised it's even taken seriously here. To see why, keep your eyes on the bolded text:

Omega has been correct on each of 100 observed occasions so far - everyone [on each of 100 observed occasions] who took both boxes has found box B empty and received only a thousand dollars; everyone who took only box B has found B containing a million dollars.

What can we anticipate from the bolded part? The only actionable belief we have at this point is that 100 out of 100 times, one-boxing made the one-boxer rich. The details that the boxes were placed by Omega and that Omega is a "superintelligence" add nothing. They merely confuse the matter by slipping in the vague connotation that Omega could be omniscient or something.

In fact, this Omega character is superfluous; the belief that the boxes were placed by Omega doesn't pay rent any differently than the belief that the boxes just appeared at random in 100 locations so far. If we are to anticipate anything different knowing it was Omega's doing, on what grounds? It could only be because we were distracted by vague notions about what Omega might be able to do or predict.

The following seemingly critical detail is just more misdirection and adds nothing either:

And the twist is that Omega has put a million dollars in box B iff Omega has predicted that you will take only box B.

I anticipate nothing differently whether this part is included or not, because nothing concrete is implied about Omega's predictive powers - only "superintelligence from another galaxy," which certainly sounds awe-inspiring but doesn't tell me anything really useful (how hard is predicting my actions, and how super is "super"?).

The only detail that pays any rent is the one above in bold. Eliezer is right that one-boxing wins, but all you need to figure that out is Bayes.
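The Bayesian calculation the comment gestures at can be sketched in a few lines. This is a minimal illustration, not anything from the original thread: it assumes Laplace's rule of succession as the update rule, and takes the 100/100 observations and the $1,000/$1,000,000 payoffs from the problem statement.

```python
# Hypothetical sketch: expected value of one-boxing vs two-boxing,
# using only the observed track record (100/100 each way) and
# Laplace's rule of succession -- no assumptions about Omega at all.

def rule_of_succession(successes: int, trials: int) -> float:
    """P(success on the next trial) after observing `successes` out of `trials`."""
    return (successes + 1) / (trials + 2)

# Observed: all 100 one-boxers found $1,000,000 in box B;
# all 100 two-boxers found box B empty.
p_million_if_one_box = rule_of_succession(100, 100)  # ~0.990
p_million_if_two_box = rule_of_succession(0, 100)    # ~0.010

ev_one_box = p_million_if_one_box * 1_000_000
ev_two_box = 1_000 + p_million_if_two_box * 1_000_000  # $1,000 in box A is certain

print(f"EV(one-box) ~ ${ev_one_box:,.0f}")  # ~ $990,196
print(f"EV(two-box) ~ ${ev_two_box:,.0f}")  # ~ $10,804
```

Under these assumptions one-boxing dominates by almost two orders of magnitude, which is the point: the observed frequencies alone settle the question, with no story about Omega's powers required.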

EDIT: Spelling

Comment author: NickRetallack 02 July 2013 07:56:20AM 2 points [-]

I'm with you. You have to look at the outcomes; otherwise you end up running into the same logical blinders that make quantum mechanics hard to accept.

After reading some of the Quantum Mechanics sequence, I am more willing to believe in Omega's omniscience. Quantum mechanics allows for multiple timelines leading to the same outcome to interfere and simply never happen, even if they would have been probable in classical mechanics. Perhaps all timelines leading to the outcome where one-boxing does not yield money happen to interfere. Even if you take a more literal interpretation of the problem statement, where it is your own mind that determines the box's content, your mind is made of particles which could conceivably affect the universe's configuration.

Comment author: christopheg 02 July 2013 08:20:57AM *  0 points [-]

I have more or less the same point of view, and I applied it to the non-iterated prisoner's dilemma (Newcomb's is merely half a Prisoner's Dilemma, as David Lewis suggested in an article; I agree with him on that, but not on his conclusion).

What is at stake here (in Newcomb's or the PD) may not be that easy to accept anyway. It's probability and Bayes against causality. The doom loop in Newcomb's (the reasoning loop that leads to losing the million, as I see it) is stating that the content of the boxes is already fixed when you play, and hence your action won't change anything. The quantum mechanical reasoning goes the other way: as long as you haven't observed or interacted with it, it is merely a probability. You may even want to go further than that: imagine that someone else sees the content of the box, and then sees you choosing the predicted set of boxes. He will conclude you have no free will, or something along those lines. I understand why people who take free will as a fact - not merely a belief that could be contradicted by experiment - unthinkingly reject the probabilistic reasoning.

My comment about the PD is in this sequence (http://lesswrong.com/lw/hl8/other_prespective_on_resolving_the_prisoners/). I merely apply probability rules. I'm interested to know if you see any fault in it from a probabilistic point of view.