MinibearRex comments on You're in Newcomb's Box - Less Wrong
I think the primary reason this Prometheus problem is flawed is that in Newcomb's problem, the presence or absence of the million dollars is unknown, while in the Prometheus problem, you already know what Prometheus did as a result of his prediction. Think of a variation on Newcomb's problem where Omega allows you to look inside box B before choosing, and you see that it is full. Only an idiot would take just one box in that scenario, and that's why this analysis is flawed.
You are right that this scenario is comparable to Newcomb's Problem With Transparent Boxes. But you're wrong about the idiocy. Rational agents one-box on Transparent Newcomb's too. ;)
I think in the classic Newcomb's problem, because Omega is a superintelligence and an astonishingly accurate predictor of human behavior, you have to assume that Omega predicted every thought you have, including that one. For that reason, we're assuming that it's just about impossible for you to "trick" Omega. However, if you know, for a fact, that both boxes are filled, then you know exactly what Omega modeled you doing. That doesn't mean that you have to do it. At this point, it is possible to trick Omega. Taking both boxes just means that Omega made a mistake about what you'd do.
I've heard people argue, as you are, that rational agents should one-box on transparent Newcomb's, but I've never heard a good explanation of why they think that. Care to help me out?
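Before getting to the transparent case, it may help to pin down why one-boxing wins the classic, opaque version in the first place. Here is a minimal sketch of the standard expected-value case, assuming Omega predicts your choice correctly with some probability p; the parameter p is an assumption, since the thread only calls Omega "astonishingly accurate".

    # Expected payoffs in classic (opaque) Newcomb's, assuming Omega
    # predicts your choice correctly with probability p. The accuracy
    # parameter p is an assumption, not something the thread specifies.

    def expected_payoff(action, p):
        if action == "one_box":
            # Box B holds $1,000,000 iff Omega correctly predicted one-boxing.
            return p * 1_000_000
        else:
            # You always keep the $1,000 in box A; box B is full only if
            # Omega wrongly predicted that you would one-box.
            return 1_000 + (1 - p) * 1_000_000

    for p in (0.6, 0.9, 0.99):
        print(p, expected_payoff("one_box", p), expected_payoff("two_box", p))

One-boxing comes out ahead for any p above roughly 0.5005, so the opaque argument turns entirely on the predictor's accuracy rather than on what you could grab in the moment.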
Two points that may or may not be useful:
Thank you, that is helpful. I still have a slight problem with it, though. In the classic Newcomb's problem, I'm in a state of uncertainty about Omega's prediction. Only when I actually pick up either one box or two can I say with confidence what Omega did. At the moment that I pick up Box B, I do know that I am leaving behind $1,000 in Box A. At that point, I might be tempted to think that I should grab that box as well, since I already "know" what's inside it. The problem is that Omega probably predicted that temptation. Because I don't know Omega's decision while I'm considering the problem, I can't hope to outsmart it.
I would argue, though, that getting $1,001,000 out of Newcomb's problem is better than getting $1,000,000. If there's a way to make that happen, a rational agent should pursue it. That is only possible if you can outsmart Omega by thinking one level further than it, which does seem like a very difficult challenge. In classic Newcomb's, you have to presume that Omega is predicting every thought you have and thinking ahead of you, so you can never assume that you know what Omega will do, because Omega will have anticipated that assumption and acted accordingly. In transparent Newcomb's, however, we can know what Omega has done, and so we have a chance to outsmart it.
Obviously, if we anticipate facing this problem, we can commit in advance to taking only one box, so that Omega fills it with $1,000,000, but that's not what transparent Newcomb's is asking. In transparent Newcomb's, an alien flies up to you and drops off two transparent boxes that contain $1,001,000 between them. It doesn't matter to me what algorithm Omega used to decide to do this. Rationalists should win. If I can outsmart Omega, and transparent Newcomb's gives me an opportunity to, I should do it.
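For concreteness, here is a minimal sketch of the dynamic the two positions above are disputing, under the assumption of a perfect Omega that simulates your policy on a full box B and fills it only if the simulated you one-boxes. The policy names are illustrative, not from the thread.

    # Transparent Newcomb's with a perfect predictor: Omega simulates
    # your policy on a full box B and fills it only if the simulation
    # one-boxes. Policy names here are illustrative assumptions.

    def play_transparent(policy):
        # Omega's simulation: "shown a full box B, does this agent one-box?"
        box_b = 1_000_000 if policy(b_is_full=True) == "one_box" else 0
        choice = policy(b_is_full=(box_b > 0))
        return box_b if choice == "one_box" else box_b + 1_000

    one_boxer = lambda b_is_full: "one_box"
    two_boxer = lambda b_is_full: "two_box"

    print(play_transparent(one_boxer))  # 1000000
    print(play_transparent(two_boxer))  # 1000

Under a perfect predictor, the $1,001,000 outcome never actually occurs: the policy that would grab both boxes is exactly the policy that never gets shown a full box. Whether Omega can in fact be fallible enough to outsmart is the crux the two sides are arguing over.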