Interesting point!
However, I can think of a couple of ways of sidestepping it to have a Newcomb problem without lies:
Omega may be able to predict your reaction without simulating you at all, just like a human may be able to perfectly predict the behavior of a program without executing it
Omega could tell you "Either I am simulating you to gauge your response, or this is reality and I predicted your response" - and the problem would be essentially the same.
So it mostly seems to boil down to the implementation details of Omega.
Omega could tell you "Either I am simulating you to gauge your response, or this is reality and I predicted your response" - and the problem would be essentially the same.
This is essentially the same only if you care only about reality. But if you care about outcomes in simulations, too, then this is not "essentially the same" as the regular formulation of the problem.
If I care about my outcomes when I am "just a simulation" in a similar way to when I am "in reality", then the phrasing you've used for Omega would not lead to the standard Newcomb problem. If I'm understanding this correctly, your reformulation of what Omega says will result in justified two-boxing with CDT.
Either I'm a simulation, or I'm not. Since I might choose between one-boxing and two-boxing probabilistically (e.g. one-box 70% of the time, two-box otherwise), Omega must simulate me several times. This means I'm much more likely to be a simulation. Since we're in a simulation, Omega has not yet predicted our response. Therefore two-boxing really is genuinely better than one-boxing.
In other words, while Newcomb's problem is usually used as an illustration of how CDT fails (by recommending two-boxing), under your reformulation CDT correctly says we should two-box. (Under the assumption that we value simulated utilons as we do "real" ones.)
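To make the "much more likely to be a simulation" step concrete, here is a minimal sketch under a simple self-sampling assumption; the number of simulations N is an illustrative parameter of mine, not anything fixed by the problem:

def p_simulation(num_simulations):
    # One "real" instance plus num_simulations simulated instances;
    # assuming indifference over all of them, the chance of being
    # one of the simulations is N / (N + 1).
    return num_simulations / (num_simulations + 1)

for n in (1, 10, 100):
    print(n, p_simulation(n))
# 1 -> 0.5, 10 -> ~0.91, 100 -> ~0.99: the more simulations Omega needs
# in order to pin down a mixed strategy, the more confident I should be
# that I am one of them.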
That depends on what you care about. If you only care about what the non-simulated you gets, then one-boxing is still better. And I don't see any reason why a simulated you should care, because they won't actually be around to get the utility, as presumably Omega ends the simulation after they give their response.
Either I'm a simulation, or I'm not. Since I might choose between one-boxing and two-boxing probabilistically (e.g. one-box 70% of the time, two-box otherwise), Omega must simulate me several times. This means I'm much more likely to be a simulation.
If you assume the standard implicit condition of a perfectly deterministic universe, in which Omega predicts every single player with 100% accuracy, then Omega does not need to simulate you more than once. Instead, Omega needs perfect information about your full state before the decision and about any parameters that might influence the decision (along with, of course, incredible computing power).
We can simplify this consideration away by stipulating that the simulated agent doesn't actually get any money, so the consequences of each choice are the same for the simulated agent.
Omega could tell you "Either I am simulating you to gauge your response, or this is reality and I predicted your response" - and the problem would be essentially the same.
In that case, even a causal decision theorist (who cared about their copies) would get the right answer.
In that case, even a causal decision theorist (who cared about their copies) would get the right answer.
Presumably a CDT-agent reasons as follows: "There's a 50% chance I'm the simulation, in which case my decision causally influences the content of the first box, and a 50% chance I'm real, in which case my decision causally influences whether I get one or both boxes." There are two problems with this kind of reasoning (which have been pointed out before).
It seems to me that the case for UDT over CDT gets much stronger if you consider all of the problems that motivated it instead of just one.
A CDT-agent needs a "finding copies of me in the world" module. It's unclear how to design this, even in principle.
Has UDT now solved that problem?
Has UDT now solved that problem?
UDT tries to sidestep the problem. Instead of having a module that must decide in a binary way whether something is or isn't a copy of itself, it instead uses its "math intuition module" to determine how much "logical correlation" exists between something and itself (i.e., computes the conditional probabilities of various outcomes depending on its decisions). The idea is that hopefully once we understand how logical uncertainty is supposed to work, this will just work automatically without having to have "extra code" for figuring out what things in the world count as copies.
Ok - but I don't see that as being any better than CDT. In both cases we need a working module (that we don't have) to make the theory work.
UDT is better than CDT because it allows correlations with "non-copies"; I think we should focus on that, not on CDT's lack of copy-finding modules.
Ok - but I don't see that as being any better than CDT. In both cases we need a working module (that we don't have) to make the theory work.
My argument is that every agent needs a solution to logical uncertainty anyway, otherwise it would be unable to, for example, decide whether or not to spend resources looking for a polynomial time solution to 3-SAT (or can only decide things like this in a haphazard way). So with CDT, you would need an extra module that we don't have.
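As a toy illustration of why such a module is needed (all numbers below are made up by me): without some way of assigning a probability to a mathematical statement, the agent can't even write down this expected-value comparison.

# Should I spend resources searching for a polynomial-time 3-SAT algorithm?
# All quantities are illustrative placeholders, not claims.
p_poly_3sat = 1e-6      # credence that such an algorithm exists and I'd find it
payoff_if_found = 1e12  # utility of finding it
search_cost = 1e5       # utility cost of the search effort

expected_gain = p_poly_3sat * payoff_if_found - search_cost
print("search" if expected_gain > 0 else "don't search", expected_gain)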
Omega may be able to predict your reaction without simulating you at all, just like a human may be able to perfectly predict the behavior of a program without executing it
If you perfectly predict something (as Omega supposedly does), you must run a model on some hardware equivalent.
Unless you subscribe to special pleading similar to "if a program does not run on silicon-based hardware, it's not truly run" - as Searle does with the special role attributed to the brain - you should expect the model that predicts your reaction to be essentially a simulation of yourself, at least of all the components involved in that decision.
Think of it along the lines of brain uploading. To predict your reaction perfectly, someone has to go through your source code on some computational substrate - which means executing it, if only in their "mind". Why privilege certain kinds of Turing Machine implementations?
Omega could tell you "Either I am simulating you to gauge your response, or this is reality and I predicted your response" - and the problem would be essentially the same.
Would it? Being told that, you'd have to immediately assume that there is at least a 50% chance of being a simulation that's going to be switched off after giving your answer - your priorities would change to keep Omega from getting the information it desires off of you. One million currency units versus a 50% chance of ceasing to exist?
Interesting problem, but not essentially the same as the classic Newcomb's.
If you perfectly predict something (as Omega supposedly does), you must run a model on some hardware equivalent.
Nope!
For example, many programmers will be able to predict the output of this function without running it or walking through it mentally (which would require too much effort):
def predictable():
    epiphenomenon = 0
    for i in range(1000):
        for j in range(1, 1000):
            if epiphenomenon % j == i:
                epiphenomenon += i * j
            else:
                epiphenomenon = j - epiphenomenon
            epiphenomenon += 1
    return 42
Ignoring the fact that this is a contrived edge case of disputable relevance to the Omega-predicting-human-decisions problem, there is still a model being run.
Why does it need to be a programmer? Why would non-programmers not be able to predict the output of this function with 100% accuracy?
What, then, is the difference in what a programmer does versus what a non-programmer does?
Clearly, the programmer has a more accurate mental model of what the function does, how it works, and what its compiler (and the thing that runs the compiled code) or interpreter will do. Whether the function is "truly run" or "truly simulated" is at this point a metaphysical question, similar to asking whether a mind is truly aware if you only write out each of its computation steps using large amounts of small stones on the sand of an immense desert.
If you take a functional perspective, these are all equivalent:
f1 = 1+1, f2 = 2, f3 = 1+1+1-1
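Spelled out as (hypothetical) code, the point is just extensional equivalence: three different implementations, one and the same input/output behaviour.

def f1(): return 1 + 1
def f2(): return 2
def f3(): return 1 + 1 + 1 - 1

# From the outside - as black boxes - there is nothing to tell them apart.
assert f1() == f2() == f3() == 2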
When you run, say, a Java Virtual Machine on various kinds of hardware, the fundamental chipset instructions it translates to may all be different, but the results are still equivalent.
When you upload your brain, you'd expect the hardware implementation to differ from your current wetware implementation - yet as long as the output is identical, as long as they are indistinguishable when put into black boxes, you probably wouldn't mind (cf. Turing tests).
Now, when you demand a perfect correspondence between two functions, their compressed representations of the parts relevant to the output necessarily have to be isomorphic.
The example you provided reduces to "def predictable(): return 42". These are the parts relevant to the output, "the components involved in that decision" (I should have stressed that more).
If you predict the output of predictable() perfectly, you are simulating the (compressed) components involved in that decision - or a functionally equivalent procedure - perfectly.
To predict your reaction perfectly, someone has to go through your source code on some computational substrate - which means executing it, if only in their "mind".
Certainly not true in all possible worlds. For example, it could be that for some strange reason humans always 1-box when encountering Newcomb's problem. Then, knowing you're a human is sufficient to predict that you will 1-box.
Also to illustrate, you can see where a cannonball will land without simulating the cannonball.
you can see where a cannonball will land without simulating the cannonball.
To predict with any degree of accuracy where a cannonball will land, I'm going to need to know the muzzle velocity, angle, and elevation of the cannon, and then I'm going to need to mathematically simulate the cannon firing. If I want to be more confident or more accurate, I'm also going to need to know the shape, size, and mass of the cannonball; and the current weather conditions; and I'm going to need to simulate the cannon's firing in more detail.
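Even the crudest version of that prediction is already a little mathematical model of the shot. A minimal sketch, ignoring drag entirely and using made-up inputs:

import math

v0 = 300.0                 # muzzle velocity, m/s (illustrative)
angle = math.radians(15)   # elevation of the barrel
h0 = 2.0                   # muzzle height above flat ground, m
g = 9.81

vx, vy = v0 * math.cos(angle), v0 * math.sin(angle)
# time of flight: solve h0 + vy*t - g*t**2/2 = 0 for the positive root
t = (vy + math.sqrt(vy**2 + 2 * g * h0)) / g
print("predicted range:", round(vx * t), "m")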
If I wanted to predict anything about a chaotic system, like the color of an arbitrary pixel in a gigapixel rendering of the Mandelbrot Set, I'd need to do a much finer-grained simulation--even if I'm just looking for a yes/no answer.
To get an answer from a particular decision theory, Omega is going to have to do the functional equivalent of lying to that decision theory--tracing its execution path along a particular branch which corresponds to a statement from Omega that is not veridical. I don't think we can say whether that simulation is detailed enough to be consciously aware of the lie, but I don't think that's what's being asked.
To predict with any degree of accuracy where a cannonball will land, I'm going to need to know the muzzle velocity, angle, and elevation of the cannon, and then I'm going to need to mathematically simulate the cannon firing.
No, you really don't. LCPW please. The cannonball is flying unrestricted through the air in an eastwardly direction and will impact a giant tub of jello.
I agree that only the components that are relevant need to be modeled/simulated.
However, for the Newcomb decision - involving a lot of cognitive work and calls to your utility function - and taking into account the many interconnections between different components of our cognitive architecture, non-trivial parts of yourself would need to be modeled - unlike in your cannonball example, where mass and shape suffice.
For your hypothetical: if knowing you're human were enough to perfectly predict that particular decision, then to ascertain that relationship an initial simulation of the relevant components must have occurred - how else would that belief of Omega's be justified? I do agree that such a possibility (just one simulation for all of mankind) would lower your belief that you are merely Omega's simulation. However, since Omega predicts perfectly, if there are any human beings who do not follow that most general rule (e.g. human -> one-boxes), the number of simulations would rise again. The possible worlds in which just one simulation suffices should be quite strange, and shouldn't skew the expected number of needed simulations per human too much.
Let's take your cannonball example. Can you explain how predicting where a cannonball will land does not involve simulating the relevant components of the cannonball, and the situation it is in? With the simulation requiring higher fidelity the more accurate it has to be. For a perfect simulation, the involved components would need to be perfectly mimicked.
With the simulation requiring higher fidelity the more accurate it has to be. For a perfect simulation, the involved components would need to be perfectly mimicked.
This is false, unless you're also expecting perfect precision, whatever that means. Omega is looking for a binary answer, so probably doesn't need much precision at all. It's like asking if the cannonball will fall east or west of its starting position - you don't need to model much about it at all to predict its behavior perfectly.
how else would that belief of Omega's be justified
Nobody claimed that Omega's beliefs are justified, whatever that means. Omega doesn't need to have beliefs. Omega just needs to be known to always tell the truth, and to be able to perfectly predict how many boxes you will choose. He could have sprung into existence at the start of the universe with the abilities, for all we know.
If the universe is deterministic, then one can know based on just the starting state of the universe how many boxes you will pick. Omega might be exploiting regularities in physics that have very little to do with the rest of your mind's computation.
If you perfectly predict something (as Omega supposedly does), you must run a model on some hardware equivalent.
The less I know about chess, the more certainly I can predict the outcome if I play against a grandmaster.
The only "model" I need is the knowledge that the grandmaster is an expert player and I am not. Where in that am I "running a model"?
The less I know about chess, the more certainly I can predict the outcome if I play against a grandmaster.
Alright, let's take this to the extreme. You're playing an unknown game; all you know about it is that the grandmaster is an expert player - you don't even know the rules or the name of the game.
Task: Perfectly predict the outcome of you playing the grandmaster. That is, out of 3^^^3 runs of such a (first) game, you'd get each single game outcome right.
All the components of your reasoning process that have a chance to affect the outcome would need to be modelled, if only in some compressed yet equivalent form. For certain other predictions, such as "chance to spontaneously combust", many attributes of e.g. your brain state would not need to be encompassed in a model for perfect predictability, but for the initial Newcomb's question, involving a great many cognitive subsystems, a functionally equivalent model may be very hard to tell apart from the original human.
Congruency/isomorphism to the degree of a perfect correspondence on a question as involved as Newcomb's would imply a correspondence across a vast range of topics involving the same cognitive functions.
As to your observation, it may be that there are cases where, for certain ranges of predictive precision, knowing less will increase your certainty. Yet to predict perfectly you must model perfectly all components relevant to the outcome (if only in their maximally compressed form), and using that model to get the outcome from certain starting conditions amounts to computation.
Where did 3^^^3 pop out of? Outside of mathematics, "always" never means "always", and in the present context, Omega does not have to be perfect.
Allowing for a margin of error, the simulation would indeed make do with lower fidelity. Yet the smaller the tolerable margin of error, the more the predictive model would have to resemble / be isomorphic to the functionality of all components involved in the outcome (aside from some locally inverted dynamics such as the one you pointed out).
Given an example such as "chess novice versus grandmaster", a very rough model does indeed suffice until you get into extremely small tolerable epsilons (such as "no wrong prediction in 3^^^3 runs").
However, for the present example, the proportion of one-boxers versus two-boxers doesn't seem at all that lopsided.
Thus, to maintain a very high accuracy, the model would need to capture most of that which distinguishes between the two groups. I do grant that as the required accuracy is allowed to decrease to the low sigma range, the model probably would be very different from the actual human being, i.e. those parts that are isomorphic to that human's thought process may not reflect more than a sliver of that person's unique cognitive characteristics.
All in the details of the problem, as always. I may have overestimated Omega's capabilities. (I imagine Omega chuckling in the background.)
You're playing an unknown game; all you know about it is that the grandmaster is an expert player - you don't even know the rules or the name of the game.
If I also know that the game has (a) no luck component and (b) no mixed-strategy Nash equilibrium (i.e. nothing like rock beats scissors beats paper beats rock), then I have enough information to make a prediction accurate to within epsilon. If I don't know those facts about the game, then you are right.
But your references to occurrences like spontaneous combustion are beside the point. The task is: assign likelihood of grandmaster winning or not winning. There are many possibilities that don't correspond to either category, but predict-the-outcome doesn't care about those possibilities, in the same way that I don't advise my clients about the possibility that a mistrial will occur because the judge had a heart attack when I discuss possible litigation outcomes.
Correct me if I'm wrong, but I don't think that "The task is: assign likelihood of grandmaster winning or not winning." captures what Omega is doing.
For each game you play, either the grandmaster will win or he will not (tertium non datur). Since it is not possible for Omega to be wrong, the only probabilities that are assigned are 0 or 1. No updating necessary.
Say you play 5 games against someone and they will go W(in)L(oss)WLL; then Omega would predict just that, i.e. it would assign Pr(game series goes "WLWLL") = 1.
If Omega knows whether you will accept, it also knows whether you'll have a heart attack - or some other event barring you from accepting - since that affects the outcome. It doesn't need to label that event "heart attack", but its genesis still needs to be accounted for in the model, since it affects the outcome.
I don't want to dispute definitions, but I wouldn't say Omega erred in predicting your choice if you had a heart attack before revealing whether you would one-box or two-box. As far as I'm concerned, Omega is not known for predicting who will and won't be able to play, just what they will do if they play.
but it's a thought worth bearing in mind.
I will give you £10 for every occasion in which I feel I have benefited from bearing this in mind, or regretted not bearing it in mind. How much money do you expect to have earned from me after ten years?
Well, that's £10 already - by bearing that thought in mind, you've been able to construct a perfect snarky comment that you couldn't have otherwise done. Was that not worth it? ;-)
Have an upvote for your cheek :-P
That was actually the third attempt to respond to this post, and the third least snarky. The second one compared it to a fan theory for a piece of established fiction.
I'm afraid they're counterfactual snarks. I ran a simulation for each of them in which I anticipated your reaction, and on this basis I decided not to make them. This version of you should be thankful for this, but the simulated versions of you are bloody livid.
Thank goodness I'm alone in my room right now -- I wouldn't like to have to explain to someone else why I'm laughing so hard.
If Omega doesn't simulate you, but uses other methods to gauge your reactions, he isn't lying to you per se. But he is estimating your reaction in the hypothetical situation where you were fed untrue information that you believed to be true. And that you believed to be true, specifically because the source is Omega, and Omega is trustworthy.
Suppose that Omega is known to be accurate with probability p1 for one-boxers and p2 for two-boxers, i.e. the odds are 1-p1 that a one-boxer walks out empty-handed, and 1-p2 that a two-boxer gets $1,001,000. This information is public, so there are no lies. As p1 and p2 tend to certainty, would the unreliable-predictor problem converge to the original one? If so, your point about simulated or hypothetical lying in the original problem is irrelevant.
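A quick sketch of the (naive, evidential) expected payoffs under this setup, using the standard $1,000 / $1,000,000 amounts:

def eu_one_box(p1):
    # Accurate prediction (prob p1): big box is full -> $1,000,000.
    # Inaccurate (prob 1 - p1): big box is empty -> $0.
    return p1 * 1_000_000

def eu_two_box(p2):
    # Accurate prediction (prob p2): big box is empty -> $1,000.
    # Inaccurate (prob 1 - p2): both boxes are full -> $1,001,000.
    return p2 * 1_000 + (1 - p2) * 1_001_000

for p in (0.5, 0.9, 0.99, 0.999):
    print(p, eu_one_box(p), eu_two_box(p))
# As p1 and p2 tend to 1, the comparison tends to $1,000,000 vs $1,000,
# i.e. the payoff structure of the original problem.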
"I have already made my decision...", which is not true at that point
It's true in the temporal sense, just not in the more general acausal sense. Even in the Transparent Newcomb's Problem, seeing the contents of the box doesn't mean that the contents are already acausally determined: "determined" in this case refers to the (logical knowledge about the) distribution of states of the box across the possible worlds, not to the (knowledge about the) state of the box in the world you're observing. (You don't even properly know which world that is at that point, so you can't use the observation to refine the model that describes the possible worlds under study, so long as you work with all of them, without updating away those inconsistent with observations.)
Slightly Less Convenient World:
Omega has predicted that you would think about this, and within its exposé / proof of its immense mental powers includes both a solution to Löb's Theorem (which you will conveniently forget as soon as you have made your decision) and a formal proof of an elimination-based prediction algorithm (also conveniently forgotten) which simulates and eliminates every other possible universe (ad infinitum) except the one you are in, which allows Omega to predict the remaining possible state with 0.999...9 precision, AKA 1.
In a counterfactual world where the sky is green and you believe the sky to be green, is your belief true?
I don't think he's actually lying at all. In the simulated world, the simulated coin came up tails. He's not lying about the coin flip result, he's simply talking about a different coin.
Why does it make sense to accept the simulation as a "real person" that can be "lied" to, but not to accept the simulated world they exist in as the "real world" in which they exist? Maybe we're all being simulated, and their world isn't any less real than ours. The "simulation" might not even be running at a deeper level (i.e. within this simulation) - Omega being from "outside the matrix" is certainly one way of explaining how Omega manages to be so darn clever in the first place...
The simulated you will presumably not gain $1K/$1M worth of utility from taking the simulated $1K/$1M (if it does, then it's a different problem).
Why not give the simulation $1K/$1M simulated dollars? Last I heard simulated dollars are fairly cheap when you're running the simulation. Omega is possibly lying by omission about the fact that the simulation is going to end at some point in the future, but that's hardly news. Nor even strictly necessary.
In the Newcomb problem, the simulated you is told "I have already made my decision...", which is not true at that point
The opaque box is open on the side facing away from you, and your completely trustworthy human friend writes down the content of the box without revealing it to you, before you ostensibly make your choice. Later she shows you her notes which prove that Omega did not lie.
Mere verbal nitpicking. Whether Omega needs to simulate you or not isn't part of any of the problems. And estimating hypothetical situations is hardly a lie, neither 'per se' nor by any other meaningful approximation of the word 'lie'.
Just developing my second idea at the end of my last post. It seems to me that in the Newcomb problem and in the counterfactual mugging, the completely trustworthy Omega lies to a greater or lesser extent.
This is immediately obvious in scenarios where Omega simulates you in order to predict your reaction. In the Newcomb problem, the simulated you is told "I have already made my decision...", which is not true at that point, and in the counterfactual mugging, whenever the coin comes up heads, the simulated you is told "the coin came up tails". And the arguments only go through because these lies are accepted by the simulated you as being true.
If Omega doesn't simulate you, but uses other methods to gauge your reactions, he isn't lying to you per se. But he is estimating your reaction in the hypothetical situation where you were fed untrue information that you believed to be true. And that you believed to be true, specifically because the source is Omega, and Omega is trustworthy.
Doesn't really change the arguments here much, but it's a thought worth bearing in mind.