All of dv82matt's Comments + Replies

dv82matt 160

But can you be 99.99% confident that 1159 is a prime?

This doesn't affect the thrust of the post, but 1159 is not prime. Its prime factors are 19 and 61.
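A quick way to verify this is trial division up to the square root; a minimal sketch of that standard check (the code is illustrative, not anything from the original post):

```python
# Verify that 1159 is composite by trial division up to sqrt(1159).
def smallest_factor(n):
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d
        d += 1
    return None  # no factor found: n is prime

f = smallest_factor(1159)
print(f, 1159 // f)  # 19 61, i.e. 1159 = 19 * 61
```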

-1 bentarm
I was about 80% sure that 1159 was not prime, based on reading that sentence. It took me <1 minute to confirm this. I can totally be more than 99.99% sure of the primality of any given four-digit number. In fact, those odds imply that I'd make at least one mistake, with probability >0.5, if I were to go through a list of all the numbers below 10,000 and classify them as prime or not prime. I think this is ridiculous. I'm quite willing to take a bet at 100 to 1 odds that I can produce an exhaustive list of all the prime numbers below 1,000,000 (one which contains no composite numbers), if anyone's willing to stump up at least $10 for the other side of the bet.
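Producing such a list mechanically is straightforward with a sieve of Eratosthenes; a short sketch of the standard algorithm (illustrative only, not part of the bet):

```python
# Sieve of Eratosthenes: exhaustive list of primes below a limit.
def primes_below(limit):
    is_prime = [True] * limit
    is_prime[0] = is_prime[1] = False
    for i in range(2, int(limit ** 0.5) + 1):
        if is_prime[i]:
            for j in range(i * i, limit, i):
                is_prime[j] = False
    return [n for n, flag in enumerate(is_prime) if flag]

primes = primes_below(1_000_000)
print(len(primes))  # 78498 primes below one million
print(primes[:5])   # [2, 3, 5, 7, 11]
```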

That may have, in fact, been the point. I doubt many people bothered to check.

I agree that you can be 99.99% (or more) certain that 53 is prime, but I don't think you can be that confident based only on the argument you gave.

If a number is composite, it must have a prime factor no greater than its square root. Because 53 is less than 64, sqrt(53) is less than 8. So, to find out if 53 is prime or not, we only need to check if it can be divided by primes less than 8 (i.e. 2, 3, 5, and 7). 53's last digit is odd, so it's not divisible by 2. 53's last digit is neither 0 nor 5, so it's not divisible by 5. The nearest multiples of 3 are

... (read more)
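Spelled out, the quoted argument is just trial division of 53 by the primes below 8; a minimal sketch:

```python
# 53 is composite only if some prime <= sqrt(53) < 8 divides it.
small_primes = [2, 3, 5, 7]
print([p for p in small_primes if 53 % p == 0])  # [] -> none divide 53, so 53 is prime
```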
1 ChrisHallquist
I wouldn't call that argument my only reason, but it's my best shot at expressing my main reason in words. Funny story: when I was typing this post, I almost typed, "If a number is not prime, it must have a prime factor greater than its square root." But that's wrong; counterexamples include pi, i, and integers less than 2. Not that I was confused about that; my real reasoning was partly nonverbal and included things like "I'm restricting myself to the domain of integers greater than 1" as unstated assumptions. And I didn't actually have to spell out for myself the reasoning why 2 and 5 aren't factors of 53; that's the sort of thing I'm used to just seeing at a glance. This left me fearing that someone would point out some other minor error in the argument in spite of the argument's being essentially correct, and I'd have to respond, "Well, I said I was 99.99% sure 53 was prime, I never claimed to be 99.99% sure of that particular argument."

Are the various people actually being presented with the same problem? It makes a difference if the predictor is described as a skilled human rather than as a near-omniscient entity.

The method of making the prediction is important. It is unlikely that a mere human without computational assistance could simulate someone in sufficient detail to reliably make one-boxing the best option. But since the human predictor knows that the people he is asking to choose also realize this, he might still maintain high accuracy by always predicting two-boxing.

edit: grammar

1 FeepingCreature
But if you're playing against a mere human, it is in your interest to make your behavior easy to predict, so that your Omega can feel confident in one-boxing you. (Thus, social signalling.) This is one of the rare cases where evolution penalizes complexity.
3 Eliezer Yudkowsky
(Plausible, but then the mere human should have a low accuracy / discrimination rate. You can't have this and a high accuracy rate at the same time. Also in practice there are plenty of one-boxers out there.)

This is interesting. I suspect this is a selection effect, but if it is true that there is a heavy bias in favor of one-boxing among a more representative sample in the actual Newcomb's problem, then a predictor that always predicts one-boxing could be surprisingly accurate.
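As a toy illustration of how a constant predictor can look accurate (the 70/30 split below is made up for the example, not data about actual choosers):

```python
# Accuracy of a constant prediction strategy, given a hypothetical base rate.
one_box_rate = 0.70  # assumed fraction of people who one-box; illustrative only

always_predict_one_box = one_box_rate      # correct exactly when the subject one-boxes
always_predict_two_box = 1 - one_box_rate  # correct exactly when the subject two-boxes
print(f"{always_predict_one_box:.2f} {always_predict_two_box:.2f}")  # 0.70 0.30
```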

It is intended to illustrate that, for a given level of certainty, one-boxing has greater expected utility with an infallible agent than it does with a fallible agent.

As for different behaviors, I suppose one might suspect the fallible agent of using statistical methods and lumping you into a reference class to make its prediction. One could be much more certain that the infallible agent’s prediction is based on what you specifically would choose.

You may have misunderstood what is meant by "smart predictor".

The wiki entry does not say how Omega makes the prediction. Omega may be intelligent enough to be a smart predictor, but Omega is also intelligent enough to be a dumb predictor. What matters is the method that Omega uses to generate the prediction, and whether that method causally connects Omega's prediction back to the initial conditions that causally determine your choice.

Furthermore, a significant part of the essay explains in detail why many of the assumptions associated... (read more)

dv82matt -10

I have written a critique of the position that one-boxing wins on Newcomb's problem, but have had difficulty posting it here on Less Wrong. I have temporarily posted it here

2 nhamann
I don't understand what the part about "fallible" and "infallible" agents is supposed to mean. If there is an "infallible" agent that makes the correct prediction 60% of the time and a "fallible" agent that makes the correct prediction 60% of the time, in what way should one anticipate them to behave differently?
2 ata
http://wiki.lesswrong.com/wiki/Omega Omega is assumed to be a "smart predictor".
-1 GrateGoo
Any conclusions about how things work in the real world, drawn from Newcomb's problem, crucially rest on the assumption that an all-knowing being might, at least theoretically, as a logically consistent concept, exist. If this crucial assumption is flawed, then any conclusions drawn from Newcomb's problem are likely flawed too.

To be all-knowing, you'd have to know everything about everything, including everything about yourself. To contain all that knowledge, you'd have to be larger than it - otherwise there would be no matter or energy left to perform the activity of knowing it all. So, in order to be all-knowing, you'd have to be larger than yourself. Which is theoretically impossible.

So, Newcomb's problem crucially rests on a faulty assumption: that something that is theoretically impossible might be theoretically possible. So, conclusions drawn from Newcomb's problem are no more valid than conclusions drawn from any other fairy tale. They are no more valid than, for example, the reasoning: "if an omnipotent and omniscient God would exist who would eventually reward all good humans with eternal bliss, all good humans would eventually be rewarded with eternal bliss -> all good humans will eventually be rewarded with eternal bliss, whether the existence of an omnipotent and omniscient God is even theoretically possible or not".

One might think that Newcomb's problem could be altered; one might think that instead of an "all-knowing being" it could assume the existence of a non-all-knowing being that nevertheless knows what you will choose. But if the MWI is correct, or if the universe is otherwise infinitely large, not all of the infinitely many identical copies of you would be controlled by any such being. If they were, that would mean that that being would have to be all-knowing. Which, as shown, is not possible.

I’m finding "correct" to be a loaded term here. It is correct in the sense that your conclusions follow from your premises, but in my view it bears only a superficial resemblance to Newcomb’s problem. Omega is not defined the way you defined it in Newcomb-like problems, and the resulting difference is not trivial.

To really get at the core dilemma of Newcomb’s problem in detail, one needs to attempt to work out the equilibrium accuracy (that is, the level of accuracy at which one-boxing and two-boxing have equal expected utility), not just arbitrarily set the accuracy to the upper limit, where it is easy to see that one-boxing wins.
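For concreteness, here is that calculation as a sketch, assuming the standard payoffs of $1,000,000 in the opaque box and $1,000 in the transparent one:

```python
# Expected utility as a function of predictor accuracy p,
# assuming the standard payoffs: B in the opaque box, b in the transparent box.
B, b = 1_000_000, 1_000

def eu_one_box(p):
    return p * B                      # you get B only when the prediction is correct

def eu_two_box(p):
    return (1 - p) * (B + b) + p * b  # full pot only when the prediction is wrong

# Equilibrium accuracy: p*B = (1-p)*(B+b) + p*b  =>  p = (B + b) / (2 * B)
p_star = (B + b) / (2 * B)
print(p_star)                                  # 0.5005
print(eu_one_box(p_star), eu_two_box(p_star))  # both approximately 500,500
```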

0 MrHen
I don't care about Newcomb's problem. This post doesn't care about Newcomb's problem. The next step in this line of questioning still doesn't care about Newcomb's problem. So, please, forget about Newcomb's problem. At some point, way down the line, Newcomb's problem may show up again, but when it does, this will certainly be taken into account. Namely, it is exactly because the difference is not trivial that I went looking for a trivial example. The reason you find "correct" to be loaded is probably because you are expecting some hidden "Gotcha!" to pop out. There is no gotcha. I am not trying to trick you. I just want an answer to what I thought was a simple question.

First, thanks for explaining your down vote and thereby giving me an opportunity to respond.

We say that Omega is a perfect predictor not because it's so very reasonable for him to be a perfect predictor, but so that people won't get distracted in those directions.

The problem is that it is not a fair simplification; it disrupts the dilemma in such a way as to render it trivial. If you set the accuracy of the prediction to 100%, many of the other specific details of the problem become largely irrelevant. For example, you could then put $999,999.99 into box... (read more)

0 byrnema
I see we really are talking about different Newcomb "problem"s. I took back my down vote. So one of our problems should have another name, or at least a qualifier. I don't think Newcomb's problem (mine) is so trivial. And I wouldn't call belief in the triangle inequality a bias. The contents of box 1 = a, with a >= 0. The contents of box 2 = b, with b >= 0. 2-boxing is the logical deduction that (a+b) >= a and (a+b) >= b. I do 1-box, and do agree that this decision is a logical deduction. I find it odd, though, that this deduction works by repressing another logical deduction, and I don't think I've ever seen this before. I would want to argue that any and every logical path should work without contradiction.
0 MrHen
Perhaps I can clarify: I specifically intended to simplify the dilemma to the point where it was trivial. There are a few reasons for this, but the primary reason is so I can take the trivial example expressed here, tweak it, and see what happens. This is not intended to be a solution to any other scenario in which Omega is involved. It is intended to make sure that we all agree that this is correct.

The basic concept behind Omega is that it is (a) a perfect predictor

I disagree. Omega can have various properties as needed to simplify various thought experiments, but for the purpose of Newcomb-like problems Omega is a very good predictor, and may even have a perfect record, but is not a perfect predictor in the sense of being perfect in principle or infallible.

If Omega were a perfect predictor, then the whole dilemma inherent in Newcomb-like problems would cease to exist, and that short-circuits the entire point of posing those types of problems.

1 byrnema
I voted this comment down, and would like to explain why. Right, we don't want people distracted by whether Omega's prediction could be incorrect in their case or whether the solution should involve tricking Omega, etc. We say that Omega is a perfect predictor not because it's so very reasonable for him to be a perfect predictor, but so that people won't get distracted in those directions. We must disagree about what is the heart of the dilemma. How can it be all about whether Omega is wrong with some fractional probability? Rather, it's about whether logic (2-boxing seems logical) and winning are at odds. Or perhaps whether determinism and choice are at odds, if you are operating outside a deterministic world-view. Or perhaps a third thing, but nothing -- in this problem -- about what kinds of Omega powers are reasonable or possible. Omega is just a device being used to set up the dilemma.
-1 Dan_Moore
I agree. A perfect predictor is either Laplace’s Demon or a supernatural being. I don’t see why either concept is particularly useful for a rationalist.

I don’t think Newcomb’s Problem can easily be stated as a real (as opposed to a purely logical) problem. Any instance of Newcomb’s problem that you can feasibly construct in the real world is not a strictly one-shot problem. I would suggest that in optimizing a rational agent for the strictly logical one-shot problem, one is optimizing for a reality that we don’t exist in.

Even if I am wrong about Newcomb’s problem effectively being an iterated type of problem, treating it as if it were seems to resolve the dilemma.

Consider this line of reasoning. Omega wants to ma... (read more)

0 [anonymous]
It can be stated as real in any and every universe that happens to have an omniscient benefactor who is known to be truthful and prone to presenting such scenarios. It's not real in any other situation. The benefit of optimising a decision-making strategy to handle such things as the Newcomb problem is that it is a boundary case. If our decision-making breaks down entirely at extreme cases, then we cannot trust it to be correct.

Concerning Newcomb’s Problem, I understand that the dominant position among the regular posters of this site is that you should one-box. This is a position I question.

Suppose Charlie takes on the role of Omega and presents you with Newcomb’s Problem. So far as it is pertinent to the problem, Charlie is identical to Omega, with the notable exception that his prediction is only 55% likely to be accurate. Should you one-box or two-box in this case?

If you one-box then the expected utility is (0.55 × $1,000,000) = $550,000, and if you two-box then it is (0.45 × $1,001,000) = $450,450, so it seems you should still one-box even when the prediction is not particularly accurate. Thoughts?
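The same arithmetic in code, including the $1,000 a two-boxer still collects when the prediction is correct (the rough figure above drops that term; it does not change which option wins):

```python
# Expected utilities with a 55% accurate predictor; standard payoffs assumed.
p, B, b = 0.55, 1_000_000, 1_000
eu_one_box = p * B                      # ~550,000: the million, 55% of the time
eu_two_box = (1 - p) * (B + b) + p * b  # ~451,000: 0.45 * 1,001,000 plus 0.55 * 1,000
print(round(eu_one_box), round(eu_two_box))  # 550000 451000 -> one-boxing still ahead
```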

1 wedrifid
Good question. And with Charlie known to be operating exactly as defined, then yes, I would one-box. I wouldn't call him Charlie, however, as that leads to confusion. The significant problem with dealing with someone who is taking the role of Omega is in my ability to form a prediction about them that is sufficient to justify the 'cooperate' response. Once I have that prediction, the rest, as you have shown, is just simple math.