Comment author: Amanojack 22 May 2011 05:24:08PM *  -1 points [-]

Newcomb's Problem is silly. It's only controversial because it's dressed up in wooey vagueness. In the end it's just a simple probability question, and I'm surprised it's even taken seriously here. To see why, keep your eyes on the bolded text:

**Omega has been correct on each of 100 observed occasions so far - everyone [on each of 100 observed occasions] who took both boxes has found box B empty and received only a thousand dollars; everyone who took only box B has found B containing a million dollars.**

What can we anticipate from the bolded part? The only actionable belief we have at this point is that 100 out of 100 times, one-boxing made the one-boxer rich. The details that the boxes were placed by Omega and that Omega is a "superintelligence" add nothing. They merely confuse the matter by slipping in the vague connotation that Omega could be omniscient or something.

In fact, this Omega character is superfluous; the belief that the boxes were placed by Omega doesn't pay rent any differently than the belief that the boxes just appeared at random in 100 locations so far. If we are to anticipate anything different knowing it was Omega's doing, on what grounds? It could only be because we were distracted by vague notions about what Omega might be able to do or predict.

The following seemingly critical detail is just more misdirection and adds nothing either:

And the twist is that Omega has put a million dollars in box B iff Omega has predicted that you will take only box B.

I anticipate nothing differently whether this part is included or not, because nothing concrete is implied about Omega's predictive powers - only "superintelligence from another galaxy," which certainly sounds awe-inspiring but doesn't tell me anything really useful (how hard is predicting my actions, and how super is "super"?).

The only detail that pays any rent is the one above in bold. Eliezer is right that one-boxing wins, but all you need to figure that out is Bayes.

EDIT: Spelling
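The "all you need is Bayes" arithmetic can be made explicit. A minimal Python sketch, assuming (my assumption, not the comment's) that the 100 observations split evenly between one-boxers and two-boxers, and using Laplace's rule of succession as one convenient prior:

```python
# Expected winnings implied by the 100/100 track record, using
# Laplace's rule of succession. The 50/50 split between observed
# one-boxers and two-boxers is an illustrative assumption.

def laplace(successes, trials):
    # Posterior mean of p under a uniform Beta(1, 1) prior.
    return (successes + 1) / (trials + 2)

n_one_boxers, n_two_boxers = 50, 50  # hypothetical split of the 100
p_million_if_one_box = laplace(n_one_boxers, n_one_boxers)  # all succeeded
p_million_if_two_box = laplace(0, n_two_boxers)             # none did

ev_one_box = p_million_if_one_box * 1_000_000
ev_two_box = 1_000 + p_million_if_two_box * 1_000_000
```

Under this split, one-boxing's expected payout comes to roughly $980,000 against roughly $20,000 for two-boxing, and any other split of the 100 observations gives the same ordering.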

Comment author: rstarkov 22 May 2011 06:28:06PM *  -2 points [-]

It's only controversial because it's dressed up in wooey vagueness

I also happen to think that the under-specification of this puzzle adds significantly to the controversy.

What the puzzle doesn't tell us is the properties of the universe in which it is set. Namely, whether the universe permits the future to influence the past, which I'll refer to as "future peeking".

(Alternatively, whether the universe somehow allows someone within it to precisely simulate the future faster than the future actually arrives - a proposition I don't believe is ever true in any mathematically defined universe.)

This is important because if the future can't influence the past, then it is known with absolute certainty that taking two boxes can't possibly change what's in them (this is, after all, a basic given of the universe). Whether Omega has predicted something before is completely irrelevant now that the boxes are placed.

Alas, we aren't told what the universe is like. If that is intentionally part of the puzzle then the only way to solve it would be to enumerate all possible universes, assigning each one a probability of being ours based on all the available evidence, and essentially come up with a probability that "future peeking" is impossible in our universe. One would then apply simple arithmetic to calculate the expected winnings.

Unfortunately P("future peeking allowed") is one of those probabilities that is completely incalculable for any practical purpose. Thus if "no future peeking" isn't a given, the best answer is "I don't know if taking two boxes is best because there's this one probability I can't actually calculate in practice".
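The "simple arithmetic" can be written down even with q = P("future peeking" is possible) left as the free, admittedly incalculable input. A sketch; the fallback assumption that a non-peeking Omega guesses no better than chance is mine, purely for illustration:

```python
# Expected winnings as a function of q = P("future peeking" is possible),
# the input the comment says is incalculable in practice.

def expected_winnings(q):
    # If Omega can peek, its prediction is always right.
    # If it can't, assume (illustratively) that box B holds the million
    # with probability 1/2 regardless of your choice.
    ev_one_box = q * 1_000_000 + (1 - q) * 0.5 * 1_000_000
    ev_two_box = q * 1_000 + (1 - q) * (0.5 * 1_000_000 + 1_000)
    return ev_one_box, ev_two_box

# The difference works out to 1000 - q * 1_000_000, so under these
# assumptions one-boxing is better exactly when q exceeds 1/1000.
```

The structure, not the fallback numbers, is the point: everything reduces to one number q that we have no practical way to compute.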

Comment author: rstarkov 25 March 2011 11:37:06AM 2 points [-]

I have found that a logical approach like this one fails more often than it works, simply because it appears that people can manage not to trust reason, or to doubt the validity of the (more or less obvious) inferences involved.

Additionally, belief is so emotional that even people who see all the logic, and truly seem to appreciate that believing in God is completely silly, still can't rid themselves of the belief. It's like someone who knows household spiders are not dangerous in any way and yet is more terrified of them than of, say, an elephant.

Perhaps what's needed in addition to this is a separate "How to expel the idea of God from your brain" guide. It would include practical advice collected from various self-admitted ex-believers. Importantly, I think people who have never believed should avoid contributing to such a guide unless they have reason to believe that they have an extraordinary amount of insight into a believer's mind.

Comment author: rstarkov 25 March 2011 11:42:53AM 1 point [-]

To expand a bit on the first paragraph, I feel that such reasonable arguments are to many people about the same as the proof of the Poincaré conjecture is to me: I fully understand the proposition, but I'm not nearly smart enough to follow the proof sufficiently well to be confident it's right.

Importantly, I can also follow the outline of the proof, to see how it's intended to work, but this is of course insufficient to establish the validity of the proof.

So the only real reason I happen to trust this proof is that I already have a pre-established trust in the community who reviewed the proof. But of course the same is also true of a believer who has a pre-established trust in the theist community.

So the guide would require a section on "how to pick authorities to trust", which would explain why it's necessary (impractical to verify everything yourself) and why the scientific community is the best one to trust (highest rate of successful predictions and useful conclusions).

Comment author: AlephNeil 25 March 2011 01:46:09AM *  4 points [-]

I can understand 'supernatural' in the context of the Game Of Life: the user ('God' if you like) violates the laws of physics (in this case by anomalously switching some cells on or off).

Can't we give pretty much the same definition of 'supernatural' in the real world? (Relative to the 'true laws of physics' whatever they are.)

Perhaps we could say that a supernatural event is one that "increases the algorithmic information content of the universe", though formulating this precisely would require some care.

Comment author: rstarkov 25 March 2011 03:54:18AM 4 points [-]

Of course I'd argue that the game of life is not an isolated universe if one can toggle cells in it, and if you consider the whole lot then there's nothing supernatural about the process of cells being toggled.

But this is a good example. I asked about what others mean by "supernatural" and this sounds very close indeed!

Comment author: Manfred 25 March 2011 12:08:35AM 7 points [-]

It's also often used to mean "not material," or "not obeying the same laws as most stuff we see." So the stuff we see around us is the "natural," and things that don't follow natural law are "supernatural."

I suspect most supernatural beliefs are non-reducible just because they're made up by people, and people don't generally think in terms of reductionism. I prefer the term "magic" for complicated, human-interacting, ontologically basic things.

Comment author: rstarkov 25 March 2011 12:39:43AM 1 point [-]

Sounds like a reasonable way of putting it. So a weapon shooting bullets invisible to the human eye would be classified as "supernatural" by someone from the stone age, because to them, killing someone requires direct contact with a visible weapon or projectile that has appreciable travel time. Right?

Although "hard science" would have to be excluded from this, even though it contains lots of stuff that doesn't obey the same laws as most stuff we see.

Comment author: ShardPhoenix 24 March 2011 11:47:59PM 3 points [-]

I didn't downvote and it's not an inappropriate topic, but the post is a bit rambling. More thought and editing could allow you to figure out what you're really trying to say, and hence tighten up the post.

Comment author: rstarkov 24 March 2011 11:53:08PM 1 point [-]

I suppose it's not the most concise post I've ever written. Thanks for the feedback!

Comment author: rstarkov 24 March 2011 11:18:08PM 2 points [-]

So from the negative votes I'm guessing that this is not something you guys find appropriate in "discussion"? It would help me as a newcomer if you also suggested what makes it bad :)

Comment author: Roland2 12 March 2008 06:28:41AM 9 points [-]

GBM:

Q: What is the probability of a pseudo-random number generator generating a specific number as its next output?

A: 1 or 0, because you can actually calculate the next number if you have the necessary information.

Q: What probability do you assign to a specific number being its next output if you don't have the information to calculate it?

Replace pseudo-random number generator with dice and repeat.

Comment author: rstarkov 24 March 2011 03:36:12PM *  5 points [-]

Even more important, I think, is the realization that, to decide how much you're willing to bet on a specific outcome, all of the following are essentially the same:

  • you do have the information to calculate it but haven't calculated it yet;
  • you don't have the information to calculate it, but know how to obtain it;
  • you don't have the information to calculate it at all.

The bottom line is that you don't know what the next value will be, and that's the only thing that matters.
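The dice/PRNG equivalence is easy to demonstrate: with the seed in hand, the next output has probability 1 or 0; without it, the only defensible betting odds are uniform. A quick sketch (the seed values are arbitrary):

```python
import random

# With the seed, the next output is fully determined: P = 1 or 0.
rng = random.Random(42)          # seed chosen arbitrarily
next_roll = rng.randint(1, 6)
assert random.Random(42).randint(1, 6) == next_roll  # reproducible

# Without the seed, counting outcomes is all a bettor can do,
# and every face earns the same 1/6 betting odds.
counts = [0] * 6
unseeded = random.Random(12345)  # stands in for "seed unknown to the bettor"
for _ in range(60_000):
    counts[unseeded.randint(1, 6) - 1] += 1
# each face lands near 10,000 of the 60,000 trials
```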

Comment author: alexflint 16 March 2011 09:15:22AM *  2 points [-]

I'm obviously new to this whole thing, but is this a largely undebated, widely accepted view on probabilities? That there are NO situations in which you can't meaningfully state a probability?

Actually, yes, but you're right to be surprised because it's (to my mind at least) an incredible result. Cox's theorem establishes this as a mathematical result from the assumption that you want to reason quantitatively and consistently. Jaynes gives a great explanation of this in chapters 1 and 2 of his book "Probability Theory".

But how valid is this result when we knew nothing of the original distribution?

The short answer is that a probability always reflects your current state of knowledge. If I told you absolutely nothing about the coin or the distribution, then you would be entirely justified in assigning 50% probability to heads (on the basis of symmetry). If I told you the exact distribution over p then you would be justified in assigning a different probability to heads. But in both cases I carried out the same experiment -- it's just that you had different information in the two trials. You are justified in assigning different probabilities because Probability is in the mind. The knowledge you have about the distribution over p is just one more piece of information to roll into your probability.

With this in mind, do we still believe that it's not wrong (or less wrong? :D) to assume a normal distribution, make our calculations and decide how much you'd bet that the mean of the next 100,000 samples is in the range -100..100?

That depends on the probability that the coin flipper chooses a Cauchy distribution. If this were a real experiment then you'd have to take into account unwieldy facts about human psychology, physics of coin flips, and so on. Cox's theorem tells us that in this case there is a unique answer in the form of a probability, but it doesn't guarantee that we have time, resources, or inclination to actually calculate it. If you want to avoid all those kinds of complicated facts then you can start from some reasonable mathematical assumptions such as a normal distribution over p - but if your assumptions are wrong then don't be surprised when your conclusions turn out wrong.
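The normal-vs-Cauchy point can be seen by simulation: the sample mean of 100,000 normal draws sits tightly near the true mean, while the Cauchy sample mean has no expectation to converge to. A sketch using inverse-CDF sampling (seed chosen arbitrarily):

```python
import math
import random

def cauchy_sample(rng):
    # Standard Cauchy via the inverse CDF: tan(pi * (U - 1/2)).
    return math.tan(math.pi * (rng.random() - 0.5))

n = 100_000
rng = random.Random(0)  # seed chosen arbitrarily
normal_mean = sum(rng.gauss(0, 1) for _ in range(n)) / n
cauchy_mean = sum(cauchy_sample(rng) for _ in range(n)) / n

# The normal sample mean concentrates near 0 (std ~ 1/sqrt(n));
# the Cauchy sample mean has no expectation and never settles down.
```

Rerunning with different seeds shows the normal mean barely moving, while the Cauchy mean can jump anywhere, including far outside -100..100.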

Comment author: rstarkov 24 March 2011 02:11:37PM 1 point [-]

Thanks for this, it really helped.

it doesn't guarantee that we have time, resources, or inclination to actually calculate it

Here's how I understand this point, that finally made things clearer:

Yes, there exists a more accurate answer, and we might even be able to discover it by investing some time. But until we do, the fact that such an answer exists is completely irrelevant. It is orthogonal to the problem.

In other words, doing the calculations would give us more information to base our prediction on, but knowing that we can do the calculation doesn't change it in the slightest.

Thus, we are justified in treating this as "don't know at all", even though it seems that we do know something.

Probability is in the mind

Great read, and I think things have finally fit into the right places in my head. Now I just need to learn to guesstimate what the maximum entropy distribution might look like for a given set of facts :)

Well, that and how to actually churn out confidence intervals and expected values for experiments like this one, so that I know how much to bet given a particular set of knowledge.
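Guesstimating a maximum-entropy distribution is concrete in the dice case Jaynes uses: constrain only the mean of a six-sided die and the maxent solution weights face i in proportion to exp(lam * i). A sketch, with bisection as just one convenient way to solve for the multiplier:

```python
import math

def maxent_die(target_mean, lo=-5.0, hi=5.0, tol=1e-10):
    """Max-entropy distribution on faces 1..6 with E[X] = target_mean.
    The solution has p_i proportional to exp(lam * i); bisect on lam."""
    def mean(lam):
        w = [math.exp(lam * i) for i in range(1, 7)]
        z = sum(w)
        return sum(i * w[i - 1] for i in range(1, 7)) / z
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if mean(mid) < target_mean:
            lo = mid
        else:
            hi = mid
    lam = (lo + hi) / 2
    w = [math.exp(lam * i) for i in range(1, 7)]
    z = sum(w)
    return [x / z for x in w]

p = maxent_die(4.5)  # Jaynes' Brandeis dice problem: biased toward high faces
q = maxent_die(3.5)  # the unconstrained mean recovers the uniform distribution
```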

Comment author: othercriteria 13 March 2011 02:38:09PM 0 points [-]

The description of the coin flips having a Binomial(n=?, p) distribution, instead of a Bernoulli(p) distribution, might be a cause of the misreading.

Comment author: rstarkov 13 March 2011 08:07:10PM *  0 points [-]

Perhaps - obviously each coin is flipped just once, i.e. Binomial(n=1,p), which is the same thing as Bernoulli(p). I was trying to point out that for any other n it would work the same as a normal coin, if someone were to keep flipping it.
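That identity is easy to check numerically: with n = 1, the binomial pmf reduces term by term to the Bernoulli pmf. A quick sketch:

```python
from math import comb

def binomial_pmf(k, n, p):
    return comb(n, k) * p**k * (1 - p)**(n - k)

def bernoulli_pmf(k, p):
    return p if k == 1 else 1 - p

# With n = 1 the two pmfs agree for both outcomes, whatever the bias p.
for p in (0.1, 0.5, 0.837):
    for k in (0, 1):
        assert abs(binomial_pmf(k, 1, p) - bernoulli_pmf(k, p)) < 1e-12
```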
