nshepperd comments on Conceptual Analysis and Moral Theory - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
When I say "arbitrarily large" I do not mean infinite. You have some fixed computing power, X (which you can interpret as "memory size" or "number of computations you can do before the sun explodes the next day" or whatever). The premise of Newcomb's problem is that Omega has some fixed computing power Q * X, where Q is really, really extremely large. You can increase X as much as you like, as long as Omega is still Q times smarter.
Which does not even remotely imply being a perfect reasoner. An ordinary human is capable of doing this just fine.
Two points. First, if Omega's memory is Q times larger than yours, you can't fit a simulation of him in your head, so predicting by simulation is not going to work. Second, if Omega has Q times as much computing time as you, you can try to predict him (by any method) for X steps, at which point the sun explodes. Naturally, Omega simulates you for X steps, notices that you haven't given a result by the time the sun explodes, leaves both boxes empty, and flies away to safety.
Only under the artificial, irrelevant-to-the-thought-experiment conditions that require him to care whether you'll one-box or two-box after standing in front of the boxes for millions of years thinking about it. Whether the cutoff is the sun exploding or a time limit Omega himself imposes, a realistic Omega only simulates for X steps, then stops. No halting-problem-solving involved.
In other words, if "Omega isn't a perfect predictor" means that he can't simulate a physical system for an infinite number of steps in finite time, then I agree but don't give a shit. Such a thing is entirely unnecessary. In the thought experiment, if you are a human, you die of old age after less than 100 years. And any strategy that involves you thinking in front of the boxes until you die of old age (or starvation, for that matter) is clearly flawed anyway.
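The bounded-predictor idea above can be made concrete with a minimal sketch. Everything here is hypothetical illustration, not from the thread: `bounded_predict`, `fill_boxes`, and `toy_agent` are names I'm inventing, and I'm assuming standard Newcomb payoffs ($1,000 in box A, $1,000,000 in box B iff one-boxing is predicted). The point is just that Omega needs a step budget, not a halting oracle:

```python
def bounded_predict(agent_step, budget):
    """Run the agent's decision procedure for at most `budget` steps.
    Each call to `agent_step(state)` returns either ('thinking', new_state)
    or ('decided', choice). No halting-problem-solving involved: if the
    agent hasn't answered within the budget, that itself is the answer."""
    state = None
    for _ in range(budget):
        status, value = agent_step(state)
        if status == 'decided':
            return value          # 'one-box' or 'two-box'
        state = value
    return None                   # no decision before the sun explodes

def fill_boxes(prediction):
    """Fill the boxes given the prediction. Per the argument above, an
    agent that never decides gets both boxes left empty."""
    if prediction is None:
        return 0, 0
    box_a = 1_000
    box_b = 1_000_000 if prediction == 'one-box' else 0
    return box_a, box_b

# Toy agent: thinks for 3 steps, then one-boxes.
def toy_agent(state):
    n = 0 if state is None else state
    if n < 3:
        return ('thinking', n + 1)
    return ('decided', 'one-box')

print(fill_boxes(bounded_predict(toy_agent, budget=100)))  # (1000, 1000000)
print(fill_boxes(bounded_predict(toy_agent, budget=2)))    # (0, 0)
```

Any strategy of "out-think the predictor by stalling" just hits the budget and collects nothing, which is the flaw the paragraph above points at.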
This example is less stupid since it is not based on trying to circularly predict yourself. But in this case Omega just makes action-conditional predictions and fills the boxes however he likes.
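What "action-conditional predictions" could look like, as a minimal sketch under my own assumptions (the quoted example isn't visible here, so I'm illustrating with a transparent-boxes-style agent that can condition on the contents; all names are hypothetical): Omega asks what the agent would do *given* each possible filling, then fills the boxes however he likes in light of that table.

```python
def action_conditional_predictions(agent):
    """For each possible box-filling, predict what the agent would choose
    if confronted with it. Returns a {filling: choice} table that Omega
    can consult before deciding how to fill the boxes."""
    fillings = [(1_000, 0), (1_000, 1_000_000)]
    return {filling: agent(filling) for filling in fillings}

def transparent_agent(contents):
    # Hypothetical agent for illustration: it can see the contents and
    # one-boxes exactly when box B is full.
    box_a, box_b = contents
    return 'one-box' if box_b > 0 else 'two-box'

table = action_conditional_predictions(transparent_agent)
print(table)  # {(1000, 0): 'two-box', (1000, 1000000): 'one-box'}
```

Nothing circular happens here: the prediction is conditional on a candidate action by Omega, not on a prediction of itself.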