Comment author: Jiro 24 November 2014 03:24:21PM *  1 point [-]

But apparently you want to ignore the part where I said Omega has to have his own computing power increased to match.

It's already arbitrarily large. You want that expanded to match arbitrarily large?

Look, in Newcomb's problem you are not supposed to be a "perfect reasoner"

Asking "which box should you pick" implies that you can follow a chain of reasoning which outputs an answer about which box to pick.

It sounds like your decision making strategy fails to produce a useful result.

My decision making strategy is "figure out what Omega did and do the opposite". It only fails to produce a useful result if Omega fails to produce a useful result (perhaps by trying to predict me and not halting). And Omega goes first, so we never get to the point where I try my decision strategy and don't halt.

(And if you're going to respond with "then Omega knows in advance that your decision strategy doesn't halt", how's he going to know that?)

Furthermore, there's always the transparent boxes situation. Instead of explicitly simulating Omega, I implicitly simulate Omega by looking in the transparent boxes and determining what Omega's choice was.

What is your point, even?

That Omega cannot be a perfect predictor because being one no matter what strategy the human uses would imply being able to solve the halting problem.

Comment author: nshepperd 25 November 2014 03:05:00AM *  2 points [-]

It's already arbitrarily large. You want that expanded to match arbitrarily large?

When I say "arbitrarily large" I do not mean infinite. You have some fixed computing power, X (which you can interpret as "memory size" or "number of computations you can do before the sun explodes the next day" or whatever). The premise of Newcomb's is that Omega has some fixed computing power Q * X, where Q is really really extremely large. You can increase X as much as you like, as long as Omega is still Q times smarter.

Asking "which box should you pick" implies that you can follow a chain of reasoning which outputs an answer about which box to pick.

Which does not even remotely imply being a perfect reasoner. An ordinary human is capable of doing this just fine.

My decision making strategy is "figure out what Omega did and do the opposite". It only fails to produce a useful result if Omega fails to produce a useful result (perhaps by trying to predict me and not halting).

Two points. First, if Omega's memory is Q times larger than yours, you can't fit a simulation of him in your head, so predicting by simulation is not going to work. Second, if Omega has Q times as much computing time as you, you can try to predict him (by any method) for X steps, at which point the sun explodes. Naturally, Omega simulates you for X steps, notices that you didn't give a result before the sun explodes, so he leaves both boxes empty and flies away to safety.
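
(To make the resource asymmetry concrete, here is a minimal sketch in Python. Everything in it, including the names run_with_budget, X, Q and the toy numbers, is my own illustration of the point above, not part of the thought experiment.)

    # Toy model: strategies are generators that yield None while "thinking"
    # and yield 'one-box' or 'two-box' once they have decided.
    def run_with_budget(strategy, budget):
        """Run a strategy for at most `budget` steps; None if no answer in time."""
        gen = strategy()
        for _ in range(budget):
            answer = next(gen, None)
            if answer in ('one-box', 'two-box'):
                return answer
        return None

    X = 10_000            # the player's step budget
    Q = 10**6             # Omega is Q times more powerful
    OMEGA_BUDGET = Q * X  # Omega's step budget

    def simulate_omega_and_do_opposite():
        """Jiro's strategy: it needs at least OMEGA_BUDGET steps, far more than X."""
        for _ in range(OMEGA_BUDGET):
            yield None    # still simulating Omega when the player's X steps run out
        yield 'two-box'

    def omega_fills_big_box(player_strategy):
        """Omega simulates the player for the player's own budget X.
        No answer in time counts as "funny stuff": both boxes stay empty."""
        return run_with_budget(player_strategy, X) == 'one-box'

    print(omega_fills_big_box(simulate_omega_and_do_opposite))  # False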

That Omega cannot be a perfect predictor because being one no matter what strategy the human uses would imply being able to solve the halting problem.

Only under the artificial, irrelevant-to-the-thought-experiment conditions that require him to care whether you'll one-box or two-box after standing in front of the boxes for millions of years thinking about it. Whether the sun explodes or Omega himself imposes a time limit, a realistic Omega only simulates for X steps, then stops. No halting-problem-solving involved.

In other words, if "Omega isn't a perfect predictor" means that he can't simulate a physical system for an infinite number of steps in finite time, then I agree but don't give a shit. Such a thing is entirely unnecessary. In the thought experiment, if you are a human, you die of aging after less than 100 years. And any strategy that involves you thinking in front of the boxes until you die of aging (or starvation, for that matter) is clearly flawed anyway.

Furthermore, there's always the transparent boxes situation. Instead of explicitly simulating Omega, I implicitly simulate Omega by looking in the transparent boxes and determining what Omega's choice was.

This example is less stupid since it is not based on trying to circularly predict yourself. But in this case Omega just makes action-conditional predictions and fills the boxes however he likes.

Comment author: Jiro 24 November 2014 01:54:24AM 1 point [-]

I don't see how Omega running his simulation on a timer makes any difference for this,

It's me who has to run on a timer. If I am only permitted to execute 1000 instructions to decide what my answer is, I may not be able to simulate Omega.

Though it may be convenient to postulate arbitrarily large computing power

Yes, I am assuming that I am capable of executing arbitrarily many instructions when computing my strategy.

the intended options for your strategy are clearly supposed to be "unconditionally one-box" and "unconditionally two-box", with potentially a mixed strategy allowed. Which is why you are provided with no information whatsoever that would allow you to predict Omega

I know what problem Omega is trying to solve. If I am a perfect reasoner, and I know that Omega is, I should be able to predict Omega without actually having knowledge of Omega's internals.

Actually, if you look at the decision tree for Newcomb's, the intended options for your strategy are clearly supposed to be "unconditionally one-box" and "unconditionally two-box",

Deciding which branch of the decision tree to pick is something I do using a process that has, as a step, simulating Omega. It is tempting to say "it doesn't matter what process you use to choose a branch of the decision tree, each branch has a value that can be compared independently of why you chose the branch", but that's not correct. In the original problem, if I just compare the branches without considering Omega's predictions, I should always two-box. If I consider Omega's predictions, that cuts off some branches in a way which changes the relative ranking of the choices. If I consider my predictions of Omega's predictions, that cuts off more branches, in a way which prevents the choices from even having a ranking.
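
(For concreteness, here is the standard numerical version of those first two readings of the tree, using the usual $1,000,000 / $1,000 payoffs and a 99% accuracy figure; the numbers are the textbook ones, not anything specified in this thread.)

    BIG, SMALL = 1_000_000, 1_000

    # Reading 1: compare branches while ignoring Omega's prediction.
    # On either branch, two-boxing adds $1,000, so it dominates.
    for big_box_full in (True, False):
        one_box = BIG if big_box_full else 0
        two_box = one_box + SMALL
        assert two_box > one_box

    # Reading 2: let Omega's (99% accurate) prediction prune the branches.
    acc = 0.99
    ev_one_box = acc * BIG                           # box is full iff predicted one-box
    ev_two_box = (1 - acc) * (BIG + SMALL) + acc * SMALL
    print(ev_one_box > ev_two_box)                   # True: one-boxing wins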

Comment author: nshepperd 24 November 2014 03:37:38AM *  1 point [-]

Yes, I am assuming that I am capable of executing arbitrarily many instructions when computing my strategy.

But apparently you want to ignore the part where I said Omega has to have his own computing power increased to match. The fact that Omega is vastly more intelligent and computationally powerful than you is a fundamental premise of the problem. This is what stops you from magically "predicting him".

Look, in Newcomb's problem you are not supposed to be a "perfect reasoner" with infinite computing time or whatever. You are just a human. Omega is the superintelligence. So, any argument you make that is premised on being a perfect reasoner is automatically irrelevant and inapplicable. Do you have a point that is not based on this misunderstanding of the thought experiment? What is your point, even?

Comment author: EHeller 24 November 2014 01:43:35AM *  0 points [-]

If your goal is to show that Omega is "impossible" or "inconsistent", then having Omega adopt the strategy "leave both boxes empty for people who try to predict me / do any other funny stuff" is a perfectly legitimate counterargument. It shows that Omega is in fact consistent if he adopts such strategy. You have no right to just ignore that counterargument.

This contradicts the accuracy stated at the beginning. Omega can't leave both boxes empty for people who try to adopt a mixed strategy AND also maintain his 99.whatever accuracy on one-boxers.

And even if Omega has way more computational power than I do, I can still generate a random number. I can flip a coin that's 60/40 one-box, two-box. The most accurate Omega can be, then, is to assume I one-box.

Comment author: nshepperd 24 November 2014 03:10:56AM 2 points [-]

This contradicts the accuracy stated at the beginning. Omega can't leave both boxes empty for people who try to adopt a mixed strategy AND also maintain his 99.whatever accuracy on one-boxers.

He can maintain his 99% accuracy on deterministic one-boxers, which is all that matters for the hypothetical.

Alternatively, if we want to explicitly include mixed strategies as an available option, the general answer is that Omega fills the box with probability = the probability that your mixed strategy one-boxes.
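
(A quick check of that rule, as a hedged sketch: the independence of the two coin flips and the usual $1,000,000 / $1,000 payoffs are my assumptions, not stated above. It also shows why the 60/40 coin proposed earlier in the thread buys nothing over pure one-boxing.)

    from fractions import Fraction

    BIG, SMALL = 1_000_000, 1_000

    def expected_value(p):
        """Expected payoff when you one-box with probability p and Omega
        independently fills the big box with the same probability p."""
        ev_if_one_box = p * BIG             # big box is full with probability p
        ev_if_two_box = p * BIG + SMALL     # plus the guaranteed small box
        return p * ev_if_one_box + (1 - p) * ev_if_two_box
        # simplifies to p * BIG + (1 - p) * SMALL, strictly increasing in p

    print(expected_value(Fraction(6, 10)))  # the 60/40 coin: 600400
    print(expected_value(Fraction(1)))      # pure one-boxing: 1000000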

Comment author: Jiro 23 November 2014 04:50:26PM 1 point [-]

If your goal is to show that Omega is "impossible" or "inconsistent", then having Omega adopt the strategy "leave both boxes empty for people who try to predict me / do any other funny stuff" is a perfectly legitimate counterargument.

If you are suggesting that Omega read my mind and think "does this human intend to outsmart me, Omega", then sure he can do that. But that only takes care of the specific version of the strategy where the player has conscious intent.

If you're suggesting "Omega figures out whether my strategy is functionally equivalent to trying to outsmart me", you're basically claiming that Omega can solve the halting problem by analyzing the situation to determine if it's an instance of the halting problem, and outputting an appropriate answer if that is the case. That doesn't work.
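
(The self-reference behind this claim is the standard diagonalization argument. Here is a minimal sketch; the oracle `predict` is purely hypothetical, assumed to exist only for the sake of contradiction.)

    def predict(strategy):
        """Hypothetical oracle: always halts and returns the choice
        ('one-box' or 'two-box') that `strategy` will actually make."""
        raise NotImplementedError("assumed to exist for the sake of argument")

    def contrarian():
        """Do the opposite of whatever the oracle says contrarian will do."""
        return 'two-box' if predict(contrarian) == 'one-box' else 'one-box'

    # If predict(contrarian) says 'one-box', contrarian two-boxes; if it says
    # 'two-box', contrarian one-boxes.  Either way the oracle is wrong, so no
    # always-correct, always-halting `predict` can exist.  This is the same
    # self-reference that makes the halting problem undecidable, and it is
    # exactly what a time limit (or a "trolls get no money" rule) sidesteps.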

Indeed, Omega requires a strategy for when he finds that you are too hard to predict.

That still requires that he determine that I am too hard to predict, which either means solving the halting problem or running on a timer. Running on a timer is a legitimate answer, except again it means that there are some strategies I cannot execute.

The assumption, for the purpose of illuminating problems with classical decision theory, is that Omega has vastly more computational resources than you do, so that the difficult decision tree that presents the problem will obtain.

I thought the assumption is that I am a perfect reasoner and can execute any strategy.

Comment author: nshepperd 23 November 2014 10:33:22PM *  1 point [-]

Running on a timer is a legitimate answer

There's your answer.

except again it means that there are some strategies I cannot execute.

I don't see how Omega running his simulation on a timer makes any difference for this, but either way this is normal and expected. Problem resolved.

I thought the assumption is that I am a perfect reasoner and can execute any strategy.

Not at all. Though it may be convenient to postulate arbitrarily large computing power (as long as Omega's power is increased to match) so that we can consider brute force algorithms instead of having to also worry about how to make it efficient.

(Actually, if you look at the decision tree for Newcomb's, the intended options for your strategy are clearly supposed to be "unconditionally one-box" and "unconditionally two-box", with potentially a mixed strategy allowed. Which is why you are provided with no information whatsoever that would allow you to predict Omega. And indeed the decision tree explicitly states that your state of knowledge is identical whether Omega fills or doesn't fill the box.)

Comment author: Jiro 23 November 2014 03:43:48AM *  1 point [-]

It is fighting the hypothetical because you are not the only one providing hypotheticals. I am too; I'm providing a hypothetical where the player's strategy makes this the least convenient possible world for people who claim that having such an Omega is a self-consistent concept. Saying "no, you can't use that strategy" is fighting the hypothetical.

Moreover, the strategy "pick the opposite of what I predict Omega does" is a member of a class of strategies that have the same problem; it's just an example of such a strategy that is particularly clear-cut, and the fact that it is clear-cut and blatantly demonstrates the problem with the scenario is the very aspect that leads you to call it trolling Omega. "You can't troll Omega" becomes equivalent to "you can't pick a strategy that makes the flaw in the scenario too obvious".

Comment author: nshepperd 23 November 2014 12:10:31PM *  2 points [-]

If your goal is to show that Omega is "impossible" or "inconsistent", then having Omega adopt the strategy "leave both boxes empty for people who try to predict me / do any other funny stuff" is a perfectly legitimate counterargument. It shows that Omega is in fact consistent if he adopts such strategy. You have no right to just ignore that counterargument.

Indeed, Omega requires a strategy for when he finds that you are too hard to predict. The only reason such a strategy is not provided beforehand in the default problem description is because we are not (in the context of developing decision theory) talking about situations where you are powerful enough to predict Omega, so such a specification would be redundant. The assumption, for the purpose of illuminating problems with classical decision theory, is that Omega has vastly more computational resources than you do, so that the difficult decision tree that presents the problem will obtain.

By the way, it is extremely normal for there to be strategies you are "incapable of executing". For example, I am currently unable to execute the strategy "predict what you will say next, and counter it first", because I can't predict you. Computation is a resource like any other.

Comment author: EHeller 22 November 2014 02:29:05AM 0 points [-]

You've now destroyed the usefulness of Newcomb as a potentially interesting analogy to the real world. In real world games, my opponent is trying to infer my strategy and I'm trying to infer theirs.

If Newcomb is only about a weird world where Omega can try to predict the player's actions, but the player is not allowed to predict Omega's, then it's sort of a silly problem. It's lost most of its generality because you've explicitly disallowed the majority of strategies.

If you allow the player to pursue his own strategy, then it's still a silly problem, because the question ends up being inconsistent (because if Omega plays Omega, nothing can happen).

Comment author: nshepperd 22 November 2014 02:59:43AM *  0 points [-]

In real-world games, we spend most of our time trying to make action-conditional predictions: "If I play Foo, then my opponent will play Bar." There's no attempting to circularly predict yourself with unconditional predictions. The sensible formulation of Newcomb's matches that.

(For example, transparent boxes: Omega predicts "if I fill both boxes, then player will _" and fills the boxes based on that prediction. Or a few other variations on that.)
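
(A small sketch of that, in Python. The policy signature and the rule "fill the big box iff the player would one-box on seeing it full" are my own rendering of "fills the boxes based on that prediction", not a specification from the thread.)

    BIG, SMALL = 1_000_000, 1_000

    def omega_fills_big_box(player_policy):
        """Action-conditional prediction: evaluate the player's policy on the
        'big box visibly full' branch, and fill it only if they would one-box."""
        return player_policy(big_box_contents=BIG) == 'one-box'

    def contrarian_policy(big_box_contents):
        """'Do the opposite of what Omega did', read straight off the boxes."""
        return 'two-box' if big_box_contents else 'one-box'

    print(omega_fills_big_box(contrarian_policy))    # False: Omega leaves it empty
    print(contrarian_policy(big_box_contents=0))     # 'one-box' on an empty box: $0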

Comment author: Lumifer 22 November 2014 02:01:20AM -3 points [-]

What is your basis for arguing that it does not exist?

Introspection.

What's your basis for arguing that it does exist?

What makes humans so special as to be exempted from this?

Tsk, tsk. Such naked privileging of an assertion.

to resolve whatever differences in reasoning are causing our disagreement.

Well, the differences are pretty clear. In simple terms, I think humans have free will and you think they don't. It's quite an old debate, at least a couple of millennia old and maybe more.

I am not quite sure why you have difficulty accepting that some people think free will exists. It's not that unusual a position to hold.

Comment author: nshepperd 22 November 2014 02:11:53AM 1 point [-]

Are you talking about libertarian free will? The uncaused causer? I would have hoped that LWers wouldn't believe such absurd things. Perhaps this isn't the right place for you if you still reject reductionism.

Comment author: Jiro 21 November 2014 04:02:10PM 0 points [-]

What if I try to predict what Omega does, and do the opposite?

That would mean that either 1) there are some strategies I am incapable of executing, or 2) Omega can't in principle predict what I do, since it is indirectly predicting itself.

Alternatively, what if instead of me trying to predict Omega, we run this with transparent boxes and I base my decision on what I see in the boxes, doing the opposite of what Omega predicted? Again, Omega is indirectly predicting itself.

Comment author: nshepperd 22 November 2014 01:53:30AM *  1 point [-]

I don't see how this is relevant, but yes, in principle it's impossible to predict the universe perfectly, on account of the universe plus your brain being bigger than your brain. Although, if you live in a bubble universe that is bigger than the rest of the universe, and whose interaction with the rest of the universe is limited precisely to your chosen manipulation of the connecting bridge (basically, if you are AIXI), then you may be able to perfectly predict the universe conditional on your actions.

This has pretty much no impact on actual Newcomb's though, since we can just define such problems away by making Omega do the obvious thing to prevent such shenanigans ("trolls get no money"). For the purpose of the thought experiment, action-conditional predictions are fine.

IOW, this is not a problem with Newcomb's. By the way, this has been discussed previously.

Comment author: Lumifer 21 November 2014 03:35:16AM -2 points [-]

So how do you know what's possible? Do you have data, by any chance? Pray tell!

Comment author: nshepperd 21 November 2014 03:43:23AM *  1 point [-]

Are you going to assert that your preferences are stored outside your brain, beyond the reach of causality? Perhaps in some kind of platonic realm?

Mood - check, that shows up in facial expressions, at least.

Season - check, all you have to do is look out the window, or look at the calendar.

Last food you ate - check, I can follow you around for a day, or just scan your stomach.

This line of argument really seems futile. Is it so hard to believe that your mind is made of parts, just like everything else in the universe?

Comment author: Lumifer 21 November 2014 03:14:32AM -3 points [-]

And all of those things are known by a sufficiently informed observer...

Show me one.

Comment author: nshepperd 21 November 2014 03:33:52AM 1 point [-]

No need. It only needs to be possible for

Can anyone predict with complete certainty?

to be true!
