EDIT: I see by the karma bombing we can't even ask. Why even call this part of the site "discussion"?

 

Some of the classic questions about an omnipotent god include:

  1. Can god make a square circle?
  2. Can god create an immovable object?  And then move it?

Saints and philosophers wrestled with these issues back before there was television. My recollection is that people who liked the idea of an omnipotent god would answer that omnipotence does not include the power to do nonsense, and they generally counted contradictions as nonsense. So omnipotence can't square the circle, can't make 2=3, can't make an atom that is simultaneously lead and gold.

But where do the contradictions end and the merely difficult to conceive begin? Can omnipotence make the ratio of the circumference to the diameter of a circle equal 3, or 22/7? Can omnipotence make sqrt(2)=1.4 or 2+2=5? While these are not directly self-contradictory statements, they can be combined with a variety of simple truths to quickly derive self-contradictory statements. Can we then conclude that "2+2=5" is essentially a contradiction because it is only a few steps away from one? Where do we draw the line?

What if we were set some problem where we are told to assume that
  1. 2+2 = 5
  2. 1+1 = 2
  3. 1+1+1+1+1 = 5
In solving this set problem, we can quickly derive that 1=0 (sketched below), and use that to prove effectively anything we want to prove. Perhaps not formally, but we have violated the law of non-contradiction: we now have a statement that is both true and false. Once you have that, you can prove ANYTHING using simple laws of inference (the principle of explosion), because you have propositions that are true and false.
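To make that derivation explicit (a minimal sketch, using nothing beyond the three assumptions above and ordinary subtraction):

    \begin{align*}
    5 &= 2 + 2 && \text{(assumption 1)} \\
      &= (1+1) + (1+1) = 1+1+1+1 && \text{(assumption 2, used twice)} \\
    5 &= 1+1+1+1+1 && \text{(assumption 3)} \\
    \Rightarrow\quad 1+1+1+1 &= 1+1+1+1+1 \\
    \Rightarrow\quad 0 &= 1 && \text{(subtract } 1+1+1+1 \text{ from both sides)}
    \end{align*}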

What if we set a problem where we are told to assume
  1. Omega is an infallible intelligence that does not lie
  2. Omega tells you 2+2=5
Well, we are going to have the same problem as above: we will be able to prove anything.

Newcomb's Problem

In Newcomb's box problem, we are told to assume that
  1. Omega is an infallible intelligence
  2. Omega has predicted correctly whether we will one-box or two-box.
From these assumptions we wind up with all sorts of problems involving causality and/or free will and/or determinism.

What if these statements are not consistent? What if they are tantamount to assuming 0=1, or within a few steps of assuming 0=1, or something just as contradictory but harder to identify?

Personally, I can think of LOTS of reasons to doubt that Newcomb's problem is even theoretically possible to set.  Beyond that, the empirical barrier to believing Omega exists in reality would be gigantic: millions of humans have watched magic shows, performed by non-superior intelligences, in which cards we have signed turn up in a previously sealed envelope, wallet, or audience member's pocket. We recognize that these are tricks, that they are not what they appear to be.

To question Omega is not playing by the mathematician's or philosopher's rules. But when we play by the rules, do we blithely assume 2+2=5 and then wrap ourselves around the logical axle trying to program a friendly AI to one-box? Why is questioning Omega's possibility of existence, or the possibility of proving its existence, out of bounds?

 

Comments

I see by the karma bombing we can't even ask.

It's more that the post isn't well written. It mentions omnipotence (for God) and some thoughts that past philosophers had on it, rambles about things being difficult to conceive (without any definitions or decomposition of the problem), and then brings in Omega, with an example equivalent to "1) Assume Omega never lies, 2) Omega lies".

Then when we get to the actual point, it's simply "maybe the Newcomb problem is impossible". With no real argument to back that up (and do bear in mind that if copying of intelligence is possible, then the Newcomb problem is certainly possible; and I've personally got a (slightly) better-than-chance record at predicting if people 1-box or 2-box on Newcomb-like problems, so a limited Omega certainly is possible).

It's more that the post isn't well written. ...

Then when we get to the actual point, it's simply "maybe the Newcomb problem is impossible".

Well written, well read: definitely one or the other. Of course in my mind it is the impossibility of Omega that is central, and I support that with the title of my post. In my mind, Newcomb's problem is a good example, and from the discussion, it may turn out to be a good one. I have learned that 1) with the numbers stated, Omega doesn't need to have mysterious powers, he only needs to be right a little more than half the time, and 2) other commenters go on to realize that understanding of HOW Omega is right will bear on whether one should one-box or two-box.

So even IF the "meat" was Newcomb's problem, this post is an intellectual success for me (and, I feel confident, for some of those who have pointed out the ways Newcomb's problem becomes more interesting with a finite Omega).

As to fully supporting my ideas: it seems to me that posts must be limited in length and content to be read and responded to. ONE form of "crackpot" is the person who shows up with 1000 pages, or even 25 pages, of post to support his unusual point. Stylistically, I think the discussion on this post justifies the way I wrote it. The net karma bombing was largely halted by my "whiney" edit. The length and substance of my post were considered in a way that was quite useful to my understanding (and naming this section "Discussion" suggests at least some positive value in that).

So in the internet age, a post which puts hooks for concepts in place, without hanging tens of pages of pre-emptive verbiage on each one, is superior to its wordier alternative. And lesswrong's collective emergent action is to karma-bomb such posts. Is this a problem? More for me than for you, that is for sure.

My objection, more succinctly: too long an introduction, not enough argument in the main part. Rewriting it with a paragraph (or three) cut from the intro and an extra paragraph of arguments about the impossibility of Omega in Newcomb would have made it much better, in my opinion.

But glad the discussion was useful!

Emile:

Doesn't Newcomb's problem remain pretty much the same if Omega is "only" able to predict your answer with 99% accuracy?

In that case, a one-boxer would get a million 99% of the time and nothing 1% of the time, and a two-boxer would get a thousand 99% of the time and a thousand plus a million 1% of the time. Unless you have a really weirdly shaped utility function, one-boxing still seems much better.

(I see the "omnipotence" bit a bit of a spherical cow assumption that allows to sidestep some irrelevant issues to get to the meat of the problem, but it does become important when you're dealing with bits of code simulating each other)

If Omega is only able to predict your answer with 75% accuracy, then the expected payoff for two-boxing is:

.25 * 1001000 + .75 * 1000 = 251000

and the expected payoff for one-boxing is:

.25 * 0 + .75 * 1000000   = 750000.

So even if Omega is just a pretty good predictor, one-boxing is the way to go (unless you really need a thousand dollars, or the usual concerns about money vs. utility apply).

For the curious, you should be indifferent to one- or two-boxing when Omega predicts your response 50.05% of the time. If Omega is just perceptibly better than chance, one-boxing is still the way to go.
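A minimal sketch of that arithmetic (hypothetical function names, assuming Omega's accuracy is the same whether it is predicting one-boxers or two-boxers):

    # Expected values for Newcomb's problem with a fallible Omega that is
    # right with probability p, regardless of which choice it is predicting.
    def expected_values(p, big=1_000_000, small=1_000):
        ev_one_box = p * big  # opaque box is full iff Omega predicted one-boxing
        ev_two_box = p * small + (1 - p) * (big + small)
        return ev_one_box, ev_two_box

    print(expected_values(0.75))  # (750000.0, 251000.0), matching the figures above

    # Indifference point: p*big == p*small + (1-p)*(big+small)  =>  p = (big + small) / (2*big)
    print((1_000_000 + 1_000) / (2 * 1_000_000))  # 0.5005, i.e. 50.05%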

Now I wonder how good humans are at playing Omega.

Better than 50.05% accuracy actually doesn't sound that implausible, but I will note that if Omega is probabilistic, then the way in which it is probabilistic affects the answer. E.g., if Omega works by asking people what they will do and then believing them, this may well get better-than-chance results with humans, at least some of whom are honest. However, the correct response in this version of the problem is to two-box and lie.

Better than 50.05% accuracy actually doesn't sound that implausible, but I will note that if Omega is probabilistic, then the way in which it is probabilistic affects the answer.

Sure, I was reading the 50.05% in terms of probability, not frequency, though I stated it the other way. If you have information about where his predictions are coming from, that will change your probability for his prediction.

Fair point, you're right.

... and if your utility scales linearly with money up to $1,001,000, right?

Yes, that sort of thing was addressed in the parenthetical in the grandparent. It doesn't specifically have to scale linearly.

Or if the payoffs are reduced to fall within the (approximately) linear region.

But if they are too low (say, $1.00 and $0.01) I might do things other than what gets me more money Just For The Hell Of It.

And thus was the first zero-boxer born.

[anonymous]:

Zero-boxer: "Fuck you, Omega. I won't be your puppet!"

Omega: "Keikaku doori..."

This seems an overly simplistic view. You need to specify your source of knowledge about the correlation between the quality of Omega's predictions and the decision theory the prediction target uses.

And even then, you need to be sure that your using an exotic DT will not throw Omega too much off the trail (note that erring in your case will not ruin the nice track record).

I don't say it is impossible to specify, just that your description could be improved.

Sure, it would also be nice to know that your wearing blue shoes will not throw off Omega. In the absence of any such information (we can stipulate if need be) the analysis is correct.

Interesting and valuable point; it brings the issue back to decision theory and away from impossible physics.

As I have said in the past, I would one-box because I think Omega is a con man. When magicians do this trick, the box SEEMS to be sealed ahead of time, but in fact there is a mechanism for the magician to slip something inside it. In the case of finding a signed card in a sealed envelope, the envelope had a razor slit through which the magician could surreptitiously push the card. Ultimately, Siegfried and Roy were doing the same trick with tigers in cages. If regular (but talented) humans like Siegfried and Roy could trick thousands of people a day, then Omega can get the million out of the box if I two-box, or get it in there if I one-box.

Yes, I would want to build an AI clever enough to figure out a probable scam and then clever enough to figure out whether it can profit from that scam by going along with it. No, I wouldn't want that AI to think it had proof that there was a being that could seemingly violate the causal arrow of time merely because it seemed to have done so a number of times on the same order as Siegfried and Roy had managed.

Ultimately, my fear is that if you can believe in Omega at face value, you can believe in god, and an FAI that winds up believing something is god when it is actually just a con man is no friend of mine.

If I see Omega getting the answer right 75% of the time, I think "the clever conman makes himself look real by appearing to be constrained by real limits." Does this make me smarter or dumber than we want a powerful AI to be?

Nobody is proposing building an AI that can't recognize a con-man. Even if in all practical cases putative Omegas will be con-men, this is still an edge case for the decision theory, and an algorithm that might be determining the future of the entire universe should not break down on edge cases.

I have seen numerous statements of Newcomb's problem in which it is stated that "Omega got the answer right 100 out of 100 times before." That is PATHETIC evidence that Omega is not a con man, and it is not a prior, it is a posterior. So if there is a valuable edge case here (and I'm not sure there is), it has been left implicit until now.

Consider the title of my discussion post. We don't even need a near-magical Omega to set this problem. So WTF is he doing here? Just confusing things and misleading people (at least me).

gwern:

Downvoted for missing the obvious and often pointed out part about a fallible Omega still making Newcomb go through, and then whining about your own fault.

I was already downvoted 6 times before whining about my own fault. If Omega need not be infallible, then it is certainly gratuitously confusing to me to put such an Omega in the argument. I am a very smart potential fellow traveler here; it would seem a defect of the site that its collective behavior is to judge queries such as mine unreasonable and undesirable to be seen.

If Omega has previously been cited as quite fallible while still yielding a Newcomb's problem, I have not noticed it and I would love to see a link. Meanwhile, I'd still like to know how a Newcomb's problem stated with a garden-variety human con man making the prediction is inferior to one which gratuitously calls upon a being with special powers unknown in the universe. Why attribute to Omega that which can be adequately explained by Penn and Teller?

I was already downvoted 6 times before whining about my own fault.

So?

If Omega need not be infallible, then it is certainly gratuitously confusing to me to put such an Omega in the argument.

No, it's necessary to prevent other people from ignoring the hypothetical and going 'I trick Omega! ha ha ha I are so clever!' This is about as interesting as saying, in response to the trolley dilemma, 'I always carry a grenade with me, so instead of choosing between the 5 people and the fat man, I just toss the grenade down and destroy the track! ha ha ha I are so clever!'

I am a very smart potential fellow traveler here; it would seem a defect of the site that its collective behavior is to judge queries such as mine unreasonable and undesirable to be seen.

'I am important! You should treat me nicer!'

If Omega has previously been cited as quite fallible while still yielding a Newcomb's problem, I have not noticed it and I would love to see a link.

Multiple people have already pointed it out here, which should tell you something about how widespread that simple observation is - why on earth do you need a link? (Also, if you are "very smart", it should have been easy to construct the obvious Google query.)

Shmi:

Personally, I can think of LOTS of reasons to doubt that Newcomb's problem is even theoretically possible to set.

If you allow arbitrarily high but not 100%-accurate predictions (as EY is fond of repeating, 100% is not a probability), the original Newcomb's problem is defined as the limit as prediction accuracy goes to 100%. As noted in other comments, the "winning" answer to the problem is not sensitive to the prediction level once accuracy is just above 50% ((1000000+1000)/(2*1000000) = 50.05%, to be precise), so the limiting case must have the same answer.

Damn good point, thanks. That certainly answers my concern about Newcomb's problem.

I think you're correct in raising the general issue of what hypothetical problems it makes sense to consider, but your application to Newcomb's does not go very far.

Personally, I can think of LOTS of reasons to doubt that Newcomb's problem is even theoretically possible to set.

You didn't give any, though, and Newcomb's problem does not require an infallible Omega, only a fairly reliable one. The empirical barrier to believing in Omega is assumed away by another hypothesis: that you are sure that Omega is honest and reliable.

Personally, I think I can reliably predict that Eliezer would one-box against Omega, based on his public writings. I'm not sure if that implies that he would one-box against me, even if he agrees that he would one-box against Omega and that my prediction is based on good evidence that he would.

I'm pretty sure Eliezer would one-box against Omega any time box B contained more money than box A. Against you or me, I'm pretty sure he would one-box with the original 1000000:1000 problem (that's kind of the obvious answer), but I'm not sure he would if it were a 1200:1000 problem.

A further thing to note: if Eliezer models other people as either significantly overestimating or significantly underestimating the probability that he'll one-box against them, both possibilities increase the probability that he'll actually two-box against them.

So it all depends on Eliezer's model of other people's model of Eliezer's model of their model. Insert The Princess Bride reference. :-)

Or at least on your model of how Eliezer models other people modeling his model of them. He may go one level deeper and model other people's model of his model of other people's model of his model of them, or (more likely) not bother and just use general heuristics. Because modeling breaks down after one or two layers of recursion most of the time.

Now we are getting somewhere good! Certainty rarely shows up in predictions, especially about the future. Your decision theory may be timeless, but don't confuse the map with the territory; the universe may not be timeless.

Unless you are assigning a numerical, non-zero, non-unity probability to Omega's accuracy, you do not know when to one-box and when to two-box with arbitrary amounts of money in the boxes. And unless your FAI is a chump, it is considering LOTS of details in estimating Omega's accuracy, no doubt including considerations of how much the FAI's own finiteness of knowledge and computation fails to constrain the possibility that Omega is tricking it.

A NASA engineer had been telling Feynman that the liquid rocket motor had a zero probability of exploding on takeoff. Feynman convinced him that this was not an engineering answer. The NASA engineer then smiled and told Feynman the probability of the liquid rocket motor exploding on take off was "epsilon." Feynman replied (and I paraphrase from memory) "Good! Now we are getting somewhere! Now all you have to tell me is what your estimate for the value of epsilon is, and how you arrived at that number."

Any calculation of your estimate of Omega's reliability which does not include gigantic terms for the probability that Omega is tricking you in a way you haven't figured out yet is likely to fail. I base that on the prevalence and importance of con games in the best natural experiment on intelligence we have: humans.

If Eliezer knows that your prediction is based on good evidence that he would one-box, then that screens off the dependence between your prediction and his decision, so he should two-box.

Surely the same applies to Omega. By hypothesis, Eliezer knows that Omega is reliable, and since Eliezer does not believe in magic, he deduces that Omega's prediction is based on good evidence, even if Omega doesn't say anything about the evidence.

My only reason for being unsure that Eliezer would one-box against me is that there may be some reflexivity issue I haven't thought of, but I don't think this one works.

One issue is that I'm not going around making these offers to everyone, but the only role that that plays in the original problem is to establish Omega's reliability without Newcomb having to explain how Omega does it. But I don't think it matters where the confidence in Omega's reliability comes from, as long as it is there.

If you know that Omega came to a conclusion about you based on things you wrote on the Internet, and you know that the things you wrote imply you will one-box, then you are free to two-box.

Edit: basically the thing you have to ask is, if you know where Omega's model of you comes from, is that model like you to a sufficient extent that whatever you decide to do, the model will also do?

Ah, but the thing you DON'T know is that Omega isn't cheating. Cheating LOOKS like magic but isn't. Implicit in my point, and certainly part of my thinking, is that unless you understand deeply and for sure HOW the trick is done, you can expect the trick to be done on you. So unless you can think of a million-dollar upside to not getting the million dollars, you should let yourself be the mark of the con man Omega, since your role seems to include getting a million dollars for whatever reasons Omega has for giving it to you.

You should only two-box if you understand Omega's trick so well that you are sure you can break it, i.e. that you will get the million dollars anyway. And the value of breaking Omega's trick is that the world doesn't need more successful con men.

Considering the likelihood of being confronted by a fake Omega rather than a real one, it would seem a great lack of foresight not to address this problem when coding your FAI.

Unless he figures you're not an idiot and you already know that, in which case it's better for him to have a rule that says "always one-box on Newcomb-like problems whenever the payoff for doing so exceeds n times the payoff for failed two-boxing" where n is a number (probably between 1.1 and 100) that represents the payment differences. Obviously, if he's playing against something with no ability to predict his actions (e.g. a brick) he's going to two-box no matter what. But a human with theory of mind is definitely not a brick and can predict his action with far better than random accuracy.
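A minimal sketch of the kind of rule described above (hypothetical names; the threshold n = 10 is only illustrative, not anyone's actual decision procedure):

    def newcomb_choice(opaque_prize, transparent_prize, n=10, predictor_has_model=True):
        # Against something with no ability to predict (e.g. a brick), just two-box.
        if not predictor_has_model:
            return "two-box"
        # One-box whenever the one-boxing payoff is at least n times the payoff
        # of a failed two-box (the transparent box alone).
        if opaque_prize >= n * transparent_prize:
            return "one-box"
        return "two-box"

    print(newcomb_choice(1_000_000, 1_000))  # one-box on the classic 1000000:1000 problem
    print(newcomb_choice(1_200, 1_000))      # two-box on the marginal 1200:1000 version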

Personally, I think I can reliably predict that Eliezer would one-box against Omega, based on his public writings. I'm not sure if that implies that he would one-box against me,

And since any FAI Eliezer codes is (nearly) infinitely more likely to be presented with Newcomb's boxes by someone like you, or Penn and Teller, or Madoff, than by Omega or his ilk, this would seem to be a more important question than Newcomb's problem with Omega.

Really, the main point of my post is "Omega is (nearly) impossible, therefore problems presuming Omega are (nearly) useless." But the discussion has mostly come around to my Newcomb's example, making explicit its lack of dependence on an Omega. Here in this comment, though, you do point out that the "magical" aspect of Omega MAY influence the coding choice made. I think this supports my claim that even Newcomb's problem, which COULD be stated without an Omega, may have a different answer when stated with one. It is important, when coding an FAI, to consider just how much evidence it should require before concluding that it is dealing with a real Omega. In the long run, my concern is that an FAI coded to accept an Omega will be susceptible to accepting people deliberately faking Omega, who are in our universe (nearly) infinitely more present than true Omegas.

Omega problems are not posed for the purpose of being prepared to deal with Omega should you, or an FAI, ever meet him. They are idealised test problems, thought experiments, for probing the strengths and weaknesses of formalised decision theories, especially regarding issues of self-reference and agents modelling themselves and each other. Some of these problems may turn out to be ill-posed, but you have to look at each such problem to decide whether it makes sense or not.

What if we were set some problem where we are told to assume that

2+2 = 5
1+1 = 2
1+1+1+1+1 = 5

3rd one looks fine to me. :)

Edit: Whoosh. Yes, that was the sound of this going right over my head.

I'm surprised you don't like the 2nd one :)

The point being that one false statement declared as true is enough to render logical derivations unreliable, no matter how many true statements you include with it.

"Why even call this part of the site discussion? "

We are free to discuss; we are also free to downvote or upvote others depending on the quality of said discussion. In this case, you don't seem to be addressing any of the responses you've gotten regarding the flaws in your argument, but have just chosen to complain about downvotes.

Omega-as-infallible-entity isn't required for Newcomb-style problems. If you're to argue that you don't believe that predicting people's behaviour with even slightly-above-random-chance is theoretically possible, then try to make that argument - but you'll fail. Perfect predictive accuracy may be physically impossible, given quantum uncertainty, but thankfully it's not required.

At the time I added the edit, I had two comments and 6 net downvotes. I had replied to the two comments. It is around 25 hours later now. For me, 25-hour gaps in my responses to lesswrong will be typical; I'm not sure a community which can't work with that is even close to optimal. So here I am, commenting on comments.

Of course you're free to downvote and I'm free to edit. Of course we are both free, as is everyone else, to speculate about whether the results are what we would want, or not. Free modulo determinism, that is.

As far as I know, this is the first thread in which it has ever been pointed out that Omega doesn't need to be infallible or even close to infallible in order for the problem to work. A Newcomb's problem set with a gratuitous infallible predictor is inferior to a Newcomb's problem set with a currently-implementable but imperfect prediction algorithm. Wouldn't you agree? When I say inferior, I mean both as a guide to humans such as myself trying to make intellectual progress here, and as a guide to the coders of FAI.

As far as I am concerned, a real and non-trivial improvement has been proposed to the statement of Newcomb's problem as a result of my so-called "discussion" posting. An analogous improvement in another argument would be: 1) "Murder is wrong because my omniscient, omnipotent, and all-good god says it is." 2) "I don't think an omniscient, omnipotent, all-good god is possible in our universe." 1) "Well, obviously you don't need such a god to see that murder is wrong."

Whether or not my analogy seems self-aggrandizing, I hope the value to the discussion of taking extraneous antecedents out of the problems being discussed will be generally understood.

As far as I know, this is the first thread in which it has ever been pointed out that Omega doesn't need to be infallible or even close to infallible in order for the problem to work. [...] As far as I am concerned, a real and non-trivial improvement has been proposed to the statement of Newcomb's problem as a result of my so-called "discussion" posting.

I'll note here that the lesswrong wiki page on Newcomb's problem has a section which says the following:

Irrelevance of Omega's physical impossibility

Sometimes people dismiss Newcomb's problem because a being such as Omega is physically impossible. Actually, the possibility or impossibility of Omega is irrelevant. Consider a skilled human psychologist that can predict other humans' actions with, say, 65% accuracy. Now imagine they start running Newcomb trials with themselves as Omega.

Also, this section wasn't recently added; it has been there since November 2010.

In short you're not the first person to introduce to us the idea of Omega being impossible.

A Newcomb's problem set with a gratuitous infallible predictor is inferior to a Newcomb's problem set with a currently-implementable but imperfect prediction algorithm. Wouldn't you agree?

No: in maths you want to pick the simplest possible thing that embodies the principle you want to study; needless complications are distracting. Throwing in a probabilistic element to something that works fine as a deterministic problem is needless.

Throwing in a probabilistic element to something that works fine as a deterministic problem is needed.

Typo?

Yes, thanks

Is Omega Impossible?

No, Omega is possible. I have implemented Newcomb's Game as a demonstration. This is not a probabilistic simulation; this Omega is never wrong.

It's really very obvious if you think about it like a game designer. To the obvious objection: Would a more sophisticated Omega be any different in practice?

For my next trick, I shall have an omnipotent being create an immovable object and then move it.

Edit: sorry about the bugs. It's rather embarrassing; I have not used these libraries in ages.

It's really very obvious if you think about it like a game designer.

Your Omega simulation actually loads the box after you have chosen, not before, while claiming to do otherwise. If this is a simulation of Omega, thank you for making my point.
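For what it's worth, the "game designer" trick being described here is easy to write down. A minimal sketch (hypothetical code, not the implementation linked above): the opaque box is only loaded after the player commits, so the "prediction" can never be wrong.

    # A "game-designer Omega": fill the opaque box only after the player has
    # chosen, so the prediction is guaranteed to be correct every time.
    def play_newcomb(choice):
        assert choice in ("one-box", "two-box")
        opaque = 1_000_000 if choice == "one-box" else 0   # loaded after the choice
        transparent = 1_000
        return opaque if choice == "one-box" else opaque + transparent

    print(play_newcomb("one-box"))  # 1000000 -- Omega "predicted" one-boxing
    print(play_newcomb("two-box"))  # 1000    -- Omega "predicted" two-boxing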
