thomblake comments on Is Omega Impossible? Can we even ask? - Less Wrong

-8 Post author: mwengler 24 October 2012 02:47PM


Comment author: Emile 24 October 2012 03:07:15PM *  12 points [-]

Doesn't Newcomb's problem remain pretty much the same if Omega is "only" able to predict your answer with 99% accuracy?

In that case, a one-boxer would get a million 99% of the time and nothing 1% of the time, while a two-boxer would get a thousand 99% of the time and a thousand plus a million 1% of the time ... unless you have a really weirdly shaped utility function, one-boxing still seems much better.

(I see the "omnipotence" bit as a bit of a spherical-cow assumption that allows us to sidestep some irrelevant issues and get to the meat of the problem, but it does become important when you're dealing with bits of code simulating each other.)

Comment author: thomblake 24 October 2012 03:18:27PM *  9 points [-]

If Omega is only able to predict your answer with 75% accuracy, then the expected payoff for two-boxing is:

.25 * 1001000 + .75 * 1000 = 251000

and the expected payoff for one-boxing is:

.25 * 0 + .75 * 1000000 = 750000.

So even if Omega is just a pretty good predictor, one-boxing is the way to go (unless you really need a thousand dollars, or the usual concerns about money vs. utility apply).
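The arithmetic above can be checked with a short script (a minimal sketch; the helper names `ev_one_box` and `ev_two_box` and the payoff constants are mine, with payoffs as in the standard statement of the problem):

```python
# Expected payoffs in Newcomb's problem when Omega predicts
# your choice correctly with probability p.
BOX_B = 1_000_000  # opaque box, filled iff Omega predicts one-boxing
BOX_A = 1_000      # transparent box, always contains $1,000

def ev_one_box(p):
    # Correct prediction (prob p): box B is full -> $1,000,000.
    # Wrong prediction (prob 1-p): box B is empty -> $0.
    return p * BOX_B + (1 - p) * 0

def ev_two_box(p):
    # Correct prediction (prob p): box B is empty -> just $1,000.
    # Wrong prediction (prob 1-p): both boxes pay -> $1,001,000.
    return p * BOX_A + (1 - p) * (BOX_A + BOX_B)

print(ev_two_box(0.75))  # 251000.0
print(ev_one_box(0.75))  # 750000.0
```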

Comment author: thomblake 24 October 2012 03:45:53PM 2 points [-]

For the curious, you should be indifferent between one- and two-boxing when Omega predicts your response correctly 50.05% of the time. If Omega is even just perceptibly better than chance, one-boxing is still the way to go.

Now I wonder how good humans are at playing Omega.

Comment author: benelliott 24 October 2012 04:01:52PM *  2 points [-]

Better than 50.05% accuracy actually doesn't sound that implausible, but I will note that if Omega is probabilistic, then the way in which it is probabilistic affects the answer. E.g., if Omega works by asking people what they will do and then believing them, this may well get better-than-chance results with humans, at least some of whom are honest. However, the correct response in this version of the problem is to two-box and lie.
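A toy model of the "ask and believe" Omega described above (the `payoff` helper and the string labels are illustrative, not anything from the original problem): since the prediction is just the agent's stated intention, a two-boxer who lies controls the prediction and collects both boxes.

```python
# Toy "ask and believe" Omega: its prediction equals the agent's
# stated intention, so a dishonest agent controls the prediction.
def payoff(stated, actual):
    # Omega fills the opaque box iff it predicts one-boxing.
    box_b = 1_000_000 if stated == "one-box" else 0
    box_a = 1_000
    return box_b if actual == "one-box" else box_a + box_b

print(payoff("one-box", "one-box"))  # honest one-boxer: 1000000
print(payoff("two-box", "two-box"))  # honest two-boxer: 1000
print(payoff("one-box", "two-box"))  # lying two-boxer:  1001000
```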

Comment author: thomblake 24 October 2012 04:10:03PM 0 points [-]

Better than 50.05% accuracy actually doesn't sound that implausible, but I will note that if Omega is probabilistic then the way in which it is probabilistic affects the answer.

Sure, I was reading the 50.05% in terms of probability, not frequency, though I stated it the other way. If you have information about where his predictions are coming from, that will change your probability for his prediction.

Comment author: benelliott 24 October 2012 04:28:43PM 1 point [-]

Fair point, you're right.

Comment author: KPier 24 October 2012 11:29:48PM 0 points [-]

... and if your utility scales linearly with money up to $1,001,000, right?

Comment author: thomblake 25 October 2012 02:13:14PM 0 points [-]

Yes, that sort of thing was addressed in the parenthetical in the grandparent. It doesn't specifically have to scale linearly.

Comment author: prase 25 October 2012 05:12:10AM 0 points [-]

Or if the payoffs are reduced to fall within the (approximately) linear region.

Comment author: [deleted] 25 October 2012 08:51:48AM 3 points [-]

But if they are too low (say, $1.00 and $0.01) I might do things other than what gets me more money Just For The Hell Of It.

Comment author: faul_sname 25 October 2012 05:31:19PM 4 points [-]

And thus was the first zero-boxer born.

Comment author: [deleted] 25 October 2012 06:10:33PM 2 points [-]

Zero-boxer: "Fuck you, Omega. I won't be your puppet!"

Omega: "Keikaku doori..."

Comment author: vi21maobk9vp 24 October 2012 05:27:44PM 1 point [-]

This seems an overly simplistic view. You need to specify the source of your knowledge about the correlation between the quality of Omega's predictions and the decision theory its prediction target uses.

And even then, you need to be sure that your using an exotic decision theory will not throw Omega too far off the trail (note that Omega erring in your case would not ruin its nice track record).

I'm not saying it is impossible to specify, just that your description could be improved.

Comment author: thomblake 24 October 2012 05:32:19PM 2 points [-]

Sure, it would also be nice to know that your wearing blue shoes will not throw off Omega. In the absence of any such information (we can stipulate, if need be), the analysis is correct.

Comment author: mwengler 24 October 2012 03:42:53PM -1 points [-]

Interesting and valuable point, brings the issue back to decision theory and away from impossible physics.

As I have said in the past, I would one-box because I think Omega is a con man. When magicians do this trick, the box SEEMS to be sealed ahead of time, but in fact there is a mechanism for the magician to slip something inside it. In the case of finding a signed card in a sealed envelope, the envelope had a razor slit through which the magician could surreptitiously push the card. Ultimately, Siegfried and Roy were doing the same trick with tigers in cages. If regular (but talented) humans like Siegfried and Roy could trick thousands of people a day, then Omega can get the million out of the box if I two-box, or get it in there if I one-box.

Yes, I would want to build an AI clever enough to figure out a probable scam and then clever enough to figure out whether it can profit from that scam by going along with it. No, I wouldn't want that AI to think it had proof that there was a being that could seemingly violate the causal arrow of time merely because it seemed to have done so a number of times on the same order as Siegfried and Roy had managed.

Ultimately, my fear is that if you can believe in Omega at face value, you can believe in God, and an FAI that winds up believing something is a god when it is actually just a con man is no friend of mine.

If I see Omega getting the answer right 75% of the time, I think "the clever con man makes himself look real by appearing to be constrained by real limits." Does that make me smarter or dumber than we want a powerful AI to be?

Comment author: thomblake 24 October 2012 03:48:16PM 6 points [-]

Nobody is proposing building an AI that can't recognize a con man. Even if, in all practical cases, putative Omegas will be con men, this is still an edge case for the decision theory, and an algorithm that might be determining the future of the entire universe should not break down on edge cases.

Comment author: mwengler 25 October 2012 03:18:15PM 0 points [-]

I have seen numerous statements of Newcomb's problem in which it is stated that "Omega got the answer right 100 out of 100 times before." That is PATHETIC evidence against Omega being a con man, and it is not a prior, it is posterior evidence. So if there is a valuable edge case here (and I'm not sure there is), it has been left implicit until now.