
The Presumptuous Philosopher's Presumptuous Friend

3 Post author: PlaidX 05 October 2009 05:26AM

One day, you and the presumptuous philosopher are walking along, arguing about the size of the universe, when suddenly Omega jumps out from behind a bush and knocks you both out with a crowbar. While you're unconscious, she builds two hotels, one with a million rooms, and one with just one room. Then she makes a million copies of both of you, sticks them all in rooms, and destroys the originals.

You wake up in a hotel room, in bed with the presumptuous philosopher, with a note on the table from Omega, explaining what she's done.

"Which hotel are we in, I wonder?" you ask.

"The big one, obviously" says the presumptuous philosopher. "Because of anthropic reasoning and all that. Million to one odds."

"Rubbish!" you scream. "Rubbish and poppycock! We're just as likely to be in any hotel Omega builds, regardless of the number of observers in that hotel."

"Unless there are no observers, I assume you mean" says the presumptuous philosopher.

"Right, that's a special case where the number of observers in the hotel matters. But except for that it's totally irrelevant!"

"In that case," says the presumptuous philosopher, "I'll make a deal with you. We'll go outside and check, and if we're at the small hotel I'll give you ten bucks. If we're at the big hotel, I'll just smile smugly."

"Hah!" you say. "You just lost an expected five bucks, sucker!"

You run out of the room to find yourself in a huge, ten-thousand-story atrium, filled with throngs of yourselves and smug-looking presumptuous philosophers.
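For readers who want to check the arithmetic behind the bet, here is a minimal Monte Carlo sketch in Python. The story's copy counts don't quite add up, so the sketch assumes 1,000,000 copy-pairs in the big hotel and 1 in the small one; that assumption is ours, not Omega's:

```python
import random

# Assumed headcounts: 1,000,000 copy-pairs in the big hotel, 1 in the small.
N_BIG, N_SMALL = 10**6, 1

def sample_copy(rng):
    """Pick one copy uniformly at random from all copies that exist."""
    return "big" if rng.randrange(N_BIG + N_SMALL) < N_BIG else "small"

rng = random.Random(0)
trials = 100_000
hits = sum(sample_copy(rng) == "big" for _ in range(trials))
print(hits / trials)  # close to 1e6 / (1e6 + 1), i.e. nearly 1
```

Under this person-weighted counting, the expected value of the ten-dollar bet is roughly a thousandth of a cent, not five dollars.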

Comments (80)

Comment author: ata 05 October 2009 07:37:53PM *  7 points [-]

I don't think this requires anthropic reasoning.

Here is a variation on the story:

One day, you and the presumptuous philosopher are walking along, arguing about the size of the universe, when suddenly Omega jumps out from behind a bush and knocks you both out with a crowbar. While you're unconscious, she builds a hotel with 1,000,001 rooms. Then she makes a million copies of both of you, sticks them all in rooms, and destroys the originals.

You wake up in a hotel room, in bed with the presumptuous philosopher, with a note on the table from Omega, explaining what she's done.

"Which room are we in, I wonder?" you ask.

"Any of them is equally likely," says the presumptuous philosopher. "Because it's bloody obvious and all that. Million to one odds for any given room."

"Rubbish!" you scream. "Rubbish and poppycock! We have a 50% chance of being in room 870,199, and a 50% chance of being in one of the other rooms."

After the presumptuous philosopher stands in baffled silence for a moment, he says, "In that case, I'll make a deal with you. We'll go outside and check, and if we're in room 870,199 I'll give you ten bucks. If we're in one of the other rooms, I'll just smile smugly."

"Hah!" you say. "You just lost an expected five bucks, sucker!"

You run out of the room to find yourself surrounded by throngs of yourselves and smug-looking presumptuous philosophers; you turn around and look at your door, labeled 129,070.

If I'm not mistaken (am I?), this version of the story is exactly isomorphic to PlaidX's original version; the only difference is that it's easier to see why the friend is wrong before you get to the end.

To anyone who agrees with the friend in the original story -- that the most reasonable estimate is that there is an even chance of being in either hotel -- would you disagree that this version is isomorphic to the original?
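The claimed isomorphism can be checked by regrouping the rooms, as in this small sketch (the room count is from the story; the grouping into "hotels" is ours):

```python
# Put one copy-pair in each of 1,000,001 rooms, then treat rooms
# 1..1,000,000 as "the big hotel" and room 1,000,001 as "the small hotel".
# The grouping is just a relabeling, so the probabilities must match.
N = 1_000_001

p_per_room = 1 / N            # "any room is equally likely"
p_small = p_per_room          # the lone distinguished room
p_big = (N - 1) * p_per_room  # the other million rooms, taken together

print(p_small, p_big)         # 1/1,000,001 vs 1,000,000/1,000,001
```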

Comment author: PlaidX 05 October 2009 07:53:13PM 2 points [-]

I thought of this, but then, in the other direction, is the problem non-isomorphic to the original presumptuous philosopher problem? If so, why?

Is it because I used hotels instead of universes? Is it because the existence of both hotels has probability 100% instead of probability 50%? Is it some other thing?

Comment author: Nubulous 06 October 2009 07:15:49AM 0 points [-]

The most obvious difference is that the original problem involved the smaller or the larger set of people whereas this one uses the smaller and the larger.

Comment author: PlaidX 06 October 2009 08:52:10AM 0 points [-]

Ah, so the difference isn't that I used hotels instead of universes, it's that I used hotels instead of POSSIBLE hotels. In other words, your likelihood of being in a hotel depends on the number of "you"s in the hotel, but your likelihood of being in a possible hotel does not, is that what you're saying?

Unless the number of "you"s is zero. Then it clearly does depend on the number. Isn't this just packing and unpacking?

Comment author: Nubulous 06 October 2009 01:08:52PM 0 points [-]

You're reading a little more into what I said than was actually there. I was just remarking on the change of dependence between the parts of the problem, without having thought through what the consequences would be.

Now that I have thought it through, I agree with the presumptuous philosopher in this case. However I don't agree with him about the size of the universe. The difference being that in the hotel case we want a subjective probability, whereas in the universe case we want an objective one. Subjectively, there's a very high probability of finding yourself in a big universe/hotel. But subjective probabilities are over subjective universes, and there are very very many subjective large universes for the one objective large universe, so a very high subjective probability of finding yourself in a large universe doesn't imply a large objective probability of being found in one.

Comment author: PlaidX 06 October 2009 10:35:54PM 0 points [-]

I don't understand what you mean by subjective and objective probabilities. Would you still agree with the philosopher in my problem if Omega flipped a coin (or looked at binary digit 5000 of pi) and then built the small hotel OR the big hotel?

Comment author: Nubulous 08 October 2009 12:43:05AM 0 points [-]

I don't know what I meant either. I remember it making perfect sense at the time, but that was after 35 hours without sleep, so.....

The answer to the second part is no, I would expect a 50:50 chance in that case.
In case you were thinking of this as a counterexample, I also expect a 50:50 chance in all the cases there from B onwards. The claim that the probabilities are unchanged by the coin toss is wrong, since the coin toss changes the number of participants, and we already accepted that the number of participants was a factor in the probability when we assigned the 99% probability in the first place.
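The two counting rules at issue in the coin-toss variant can be made explicit in a short sketch (the hotel sizes are from the post; the rule names are ours, and the sketch takes no side on which rule is correct):

```python
# Coin-toss variant: Omega flips a fair coin, then builds EITHER the
# million-room hotel OR the one-room hotel, never both.
N_BIG, N_SMALL = 10**6, 1

# Rule 1 ("count worlds"): condition only on the coin.
p_big_worlds = 0.5

# Rule 2 ("count people", the presumptuous philosopher's rule): weight
# each possible world by how many observers it contains.
p_big_people = N_BIG / (N_BIG + N_SMALL)

print(p_big_worlds, p_big_people)  # 0.5 vs ~0.999999
```

The gap between these two numbers is the original Presumptuous Philosopher dispute in miniature.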

Comment author: PlaidX 08 October 2009 03:08:37AM *  3 points [-]

So, if Omega picks a number from 1 to 3, and depending on the result makes:

A. a hotel with a million rooms

B. a hotel with one room

C. a pile of flaming tires

you'd say that a person has a 50% chance of finding themselves in situation A or B, but a 0% chance of being in C?

Why does the number of people only matter when the number of people is zero? Doesn't that strike you as suspicious?
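The three-outcome case can be simulated directly; this hedged sketch just tallies worlds and observers across many runs (the population sizes are from the comment above, the bookkeeping is ours):

```python
import random

# (A) a million-room hotel, (B) a one-room hotel, (C) flaming tires.
POP = {"A": 10**6, "B": 1, "C": 0}

rng = random.Random(1)
world_counts = {"A": 0, "B": 0, "C": 0}
person_counts = {"A": 0, "B": 0, "C": 0}
for _ in range(30_000):
    w = rng.choice("ABC")        # Omega's uniform 1-in-3 pick
    world_counts[w] += 1
    person_counts[w] += POP[w]   # each run creates POP[w] observers

total_people = sum(person_counts.values())
p_world = world_counts["A"] / sum(world_counts.values())  # ~1/3 of runs
p_person = person_counts["A"] / total_people              # ~all observers
print(p_world, p_person)
```

Counting worlds, A happens about a third of the time; counting observers, essentially everyone finds themselves in A, and nobody ever finds themselves in C. The dispute is over which count a person waking up should use.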

Comment author: Nubulous 10 October 2009 11:39:22PM 0 points [-]

When we speak of a subjective probability in a person-multiplying experiment such as this, we (or at least, I) mean "The outcome ratio experienced by a person who was randomly chosen from the resulting population of the experiment, then was used as the seed for an identical experiment, then was randomly chosen from the resulting population, then was used as the seed.... and so forth, ad infinitum".

I'm not confident that we can speak of having probabilities in problems which can't in theory be cast in this form.

In other words, the probability is along a path. When you look at the problem this way, it throws some light on why there are two different arguable values for the probability. If you look back along the path, ("what ratio will our person have experienced") the answer in your experiment is 1000000:1. If you look forward along the path, ("what ratio will our person experience") the answer is 1:1 (in the flaming-tires case there's no path, so there's no probability).

Comment author: PlaidX 11 October 2009 04:27:29AM 0 points [-]

But again I must ask, on the going-forward basis, why is the number of people in each world irrelevant? I grant you that the WORLD splits into even thirds, but the people in it don't, they split 1000000 / 1 / 0. Where are you getting 1 / 1 / 0?

Comment author: wedrifid 05 October 2009 12:34:14PM *  4 points [-]

You run out of the room to find yourself in a huge, ten-thousand-story atrium, filled with throngs of yourselves and smug-looking presumptuous philosophers.

One of the other copies just got ten bucks; you lost nothing. Nice work bluffing your presumptuous friend and pumping his ego for (a chance at) cash. I just hope you think things through a bit more thoroughly if you have to lay cash on the line. Or that you have good reason to be valuing the outcome of the one copy equal to that of the million in the other hotel.

This is a trivial problem that need not be confusing unless you want to be confused.

ETA: No offence to PlaidX. On similar topics Eliezer has appeared to me to want to be confused!

Comment author: Jonathan_Graehl 06 October 2009 06:06:06PM 3 points [-]

I wouldn't want to endure a million smug "told you so" smiles for $10. Think dust specks.

Comment author: wedrifid 06 October 2009 07:28:46PM 2 points [-]

And miss watching 1,000,000 presumptuous philosophers flummoxed when the only response they get is a look of condescending superiority? I don't think so!

Comment author: wedrifid 05 October 2009 12:38:37PM 3 points [-]

While you're unconscious, she builds two hotels, one with a million rooms, and one with just one room. Then she makes a million copies of both of you, sticks them all in rooms, and destroys the originals.

I feel... thin. Sort of stretched, like... butter scraped over too much bread.

Comment author: CannibalSmith 05 October 2009 07:46:39AM 9 points [-]

Comment author: CronoDAS 05 October 2009 09:13:47AM *  1 point [-]

/me is confused by this picture

Comment author: Aurini 06 October 2009 12:44:46AM 0 points [-]

"Well played, clerks... well played." slow clap ~Leonardo Leonardo

Comment author: Tyrrell_McAllister 05 October 2009 06:23:24PM 0 points [-]

Whose face is the smug one?

Comment author: CannibalSmith 06 October 2009 11:23:47AM 1 point [-]
Comment author: [deleted] 21 October 2009 05:36:36PM 0 points [-]

Or, in case it ever stops being the second result (which, actually, it has): http://rndm.files.wordpress.com/2006/11/smug404.jpg

Comment author: taw 05 October 2009 05:53:54AM 5 points [-]

I wonder... could we please use Omega less often unless absolutely required? (and if absolutely required it strongly suggests something is wrong with the story anyway)

Comment author: CannibalSmith 05 October 2009 07:03:58AM 2 points [-]

Not a chance.

Comment author: PlaidX 05 October 2009 06:08:47AM 2 points [-]

I used omega because it makes things tidier. I think it's important for a thought experiment to be tidy, but not very important for it to be realistic.

Also it's funny.

Comment author: taw 05 October 2009 08:18:09AM 0 points [-]

My problem is that experiments like Newcomb's, in which Omega is used to break causality and which make absolutely no sense, look too similar to experiments like this one, which are really in every way equivalent to "being moved to a random room".

Comment author: Vladimir_Nesov 05 October 2009 03:46:16PM *  2 points [-]

It doesn't break causality. Newcomb's problem (especially if you move the victim to a deterministic substrate) can very well be set up in the real world. It just can't be currently done because of limitations of technology.

Comment author: SilasBarta 05 October 2009 06:12:01PM 1 point [-]

Well, what do you mean by "setting it up in the real world"? There are certainly versions that can be done on computer (and I'm not sure if you were counting these, so don't take this as a criticism).

-Write an algorithm A1 for picking whether to one-box or two-box on the problem.

-Write an algorithm A2 for predicting whether a given algorithm will one-box or two-box, and then fill the box as per Omega.

-Run a program in which A2 acts on A1, and then A1 runs, and find A1's payoff.
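The three steps above can be sketched in Python (the function names and payoffs are illustrative assumptions; this A2 "predicts" by simply simulating A1):

```python
def a1():
    """The agent's decision procedure: 'one' = take only the opaque box."""
    return "one"

def a2(agent):
    """Omega's stand-in: predict by running the agent, then fill the box."""
    prediction = agent()                 # perfect prediction via simulation
    return 1_000_000 if prediction == "one" else 0

def play(agent):
    opaque = a2(agent)                   # box contents fixed before the choice
    choice = agent()
    if choice == "one":
        return opaque
    return opaque + 1_000                # two-boxing adds the visible $1,000

print(play(a1))  # a one-boxing A1 walks away with 1,000,000
```

Note that A2 finishes before A1's choice is acted on, yet the one-boxing A1 still does better than a two-boxer would; that is the point of the computational framing.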

Eliezer_Yudkowsky even claimed that this implementation of Newcomb's problem makes it even clearer why you should use Timeless Decision Theory.

Comment author: wedrifid 05 October 2009 12:28:18PM 1 point [-]

Omega doesn't break causality in Newcomb. It is merely a chain of causality which is entirely predictable.

Comment author: taw 05 October 2009 12:53:07PM 0 points [-]

Yes it does. It makes a decision in the past that depends on your decision in the future, and your decision in the future can assume Omega has already decided in the past. That's a causality loop.

Newcomb is a completely bogus problem.

Comment author: Jonathan_Graehl 05 October 2009 07:09:09PM 2 points [-]

Is the taw-on-Newcomb downvoting happening because he's speaking against what's considered settled fact?

Comment author: Vladimir_Nesov 05 October 2009 04:12:34PM 1 point [-]

It's only a loop in imaginary Platonia. In the real world, laws of physics don't notice that there's a "loop". One way to see the problem is as a situation that demonstrates failure to adequately account for the real world with the semantics usually employed to think about it.

Comment author: Jonathan_Graehl 05 October 2009 07:09:43PM 2 points [-]

Too opaque.

Comment author: Vladimir_Nesov 05 October 2009 07:16:35PM 1 point [-]

Alas, yes. I'm working on that.

Comment author: Tyrrell_McAllister 05 October 2009 06:28:37PM *  1 point [-]

If it's a loop in Platonia, then all causation happens in Platonia. If any causation can be said to happen in the real world, then real causation is happening backwards in time in the Newcomb scenario.

But I, for one, have no problem with that. All causal processes observed so far have run in the same temporal direction. But there's no reason to rule out a priori the possibility of exceptions.

ETA: Nor to rule out loops.

Comment author: brianm 06 October 2009 12:01:28PM 1 point [-]

I don't see why Newcomb's paradox breaks causality - it seems more accurate to say that both events are caused by an earlier cause: your predisposition to choose a particular way. Both Omega's prediction and your action are caused by this predisposition, meaning Omega's prediction is merely correlated with, not a cause of, your choice.

Comment author: Tyrrell_McAllister 06 October 2009 03:11:16PM 0 points [-]

It's commonplace for an event A to cause an event B, with both sharing a third antecedent cause C. (The bullet's firing causes the prisoner to die, but the finger's pulling of the trigger causes both.) Newcomb's scenario has the added wrinkle that event B also causes event A. Nonetheless, both still have the antecedent cause C that you describe.

All of this only makes sense under the right analysis of causation. In this case, the right analysis is a manipulationist one, such as that given by Judea Pearl.

Comment author: brianm 07 October 2009 11:44:21AM *  2 points [-]

Newcomb's scenario has the added wrinkle that event B also causes event A

I don't see how. Omega doesn't make the prediction because you made the action - he makes it because he can predict that a person of a particular mental configuration at time T will make decision A at time T+1. If I were to play the part of Omega, I couldn't achieve perfect prediction, but might be able to achieve, say, 90% by studying what people say they will do on blogs about Newcomb's paradox, and observing what such people actually do (so long as my decision criteria weren't known to the person I was testing).

Am I violating causality by doing this? Clearly not - my prediction is caused by the blog post and my observations, not by the action. The same thing that causes you to say you'd decide one way is also what causes you to act one way. As I get better and better, nothing changes, nor do I see why something would if I am able to simulate you perfectly, achieving 100% accuracy (some degree of determinism is assumed there, but then it's already in the original thought experiment if we assume literally 100% accuracy).

Assuming I'm understanding it correctly, the same would be true for a manipulationist definition. If we can manipulate your mental state, we'd change both the prediction (assuming Omega factors in this manipulation) and the decision, thus your mental state is a cause of both. However if we could manipulate your action without changing the state that causes it in a way that would affect Omega's prediction, our actions would not change the prediction. In practice, this may be impossible (it requires Omega not to factor in our manipulation, which is contradicted by assuming he is a perfect predictor), but in principle it seems valid.

Comment author: wedrifid 05 October 2009 01:00:53PM *  1 point [-]

He makes a prediction based on the nearby state of the universe that you model with an accuracy that approaches 1. If your mathematician can't handle that then find a better mathematician.

I shall continue to find Omega useful.

ETA: The part of the Newcomb problem that is actually hard to explain is that I am somehow confident that Omega is being truthful.

Comment author: Jack 05 October 2009 10:11:58PM 0 points [-]

For a bunch of people with what seems to be a Humean suspicion of metaphysics "causation" sure comes up a lot. If you think that causation is just a psychological projection onto constantly conjoined events then it isn't clear what the paradox here is.

Comment author: ata 05 October 2009 10:32:08PM *  1 point [-]

There are non-metaphysical treatments of causality. I'm not sure if any particular interpretations are favoured around here, but they build on Bayes and they work. (I have yet to read it, but I've heard good things about Judea Pearl's Causality.)

It's a "psychological projection" inasmuch as probability itself is, but as with probability, that doesn't mean it's never a useful concept, as long as it's understood in the correct light.

Comment author: Jack 05 October 2009 11:12:14PM 0 points [-]

Sure. But,

  1. The way I see causal language being used doesn't suggest to me a demystified understanding of causality.

  2. Maybe I'm being dense but it seems to me a non-metaphysical account of causality won't a priori exclude backwards causation and causality loops. In other words, even if we allow some kind of deflated causality that won't mean Newcomb's problem "makes no sense".

Comment author: ata 05 October 2009 11:44:30PM 1 point [-]

Oh, I wasn't agreeing with taw on that. Just responding to your association of causation with metaphysics. I don't see Omega breaking any causality, whether in a metaphysical or statistical sense.

As for excluding backwards causation and causality loops -- I'm not sure why we should necessarily want to exclude them, if a given system allows them and they're useful for explaining or predicting anything, even if they go against our more intuitive notions of causality. I was just recently thinking that backwards causality might be a good way to think about Newcomb's problem. (That idea might go down in flames, but I think the point stands that backward/cyclical causality should be allowed if they're found to be useful.)

Comment author: Jack 05 October 2009 11:58:24PM 0 points [-]

I think we agree down the line.

Comment author: taw 06 October 2009 10:28:28AM 0 points [-]

I meant causation in purely physical sense. Disregarding complexity of quantum-ness, Omega can't do that as you get time loops.

Comment author: Jack 06 October 2009 03:50:43PM 1 point [-]

I meant causation in purely physical sense.

I don't know what that means. Our most basic physics makes no mention of causation or even objects. There are just quantum fields with future states that can be predicted if you have knowledge of earlier states and the right equations. And no matter what "causation in a purely physical sense" means I have no idea why it prohibits an event at time t1 (Omega's predictions) from necessarily coinciding with an event at t2 (your decision).

Comment author: PlaidX 05 October 2009 09:29:09AM 0 points [-]

You can do both this experiment and newcomb without omega, or at least, you can start with a similar, but messier setup and bridge it to the tidy omega version using reasonable steps. But the process is very tedious.

Comment author: taw 05 October 2009 10:49:01AM 0 points [-]

Past discussions indicate quite conclusively that Newcomb is completely unmathematizable as a paradox. Every mathematization becomes trivial one way or the other, and resolves the causality loop caused by Omega.

If problems with Omega can be pathological like that, it's a good argument to avoid using Omega unless absolutely necessary (in which case you can rethink whether the problem is even well stated).

Comment author: wedrifid 05 October 2009 02:46:55PM 0 points [-]

Every mathematization becomes trivial

I would be shocked if it didn't. It's a trivial problem.

Comment author: taw 05 October 2009 04:00:15PM 1 point [-]

Trivial how? Depending on mathematization it collapses to either one-boxing, or two-boxing, depending on how we break the causality loop.

If you decide first, trivially one-box. If Omega decides first, trivially two-box. If you have causality loop, your problem doesn't make any sense.

Comment author: wedrifid 05 October 2009 12:26:39PM 1 point [-]

and if absolutely required it strongly suggests something is wrong with the story anyway

No it doesn't. It suggests that care is being taken to remove irrelevant details and prevent irritating technicalities.

Comment author: taw 05 October 2009 12:51:40PM 0 points [-]
Comment author: SilasBarta 05 October 2009 04:19:57PM 2 points [-]

Why do we spend so much time thinking about how to reason on problems in which

a) you know what's going on while you're not conscious, and

b) you take at face value information fed to you by a hostile entity?

Comment author: jimmy 05 October 2009 05:19:52PM 3 points [-]

Because it's much simpler that way, and you need to be able to handle trivial cases before you can deal with more complicated ones.

Besides, what is hostile about making a million copies of you? I'd take getting knocked out for that, as long as the copies don't all have brain damage from it.

Comment author: SilasBarta 05 October 2009 05:50:24PM 1 point [-]

Okay, fair point. It is indeed important to start from simple cases. I guess I didn't say what I really meant there.

My real concern is this: posters are trying to develop the limits of e.g. anthropic reasoning. Anthropic reasoning takes the form of, "I observe that I exist. Therefore, it follows that..."

But then to attack that problem, they posit scenarios of a completely different form: "I have been fed solid evidence from elsewhere that {x, y, and z} and then placed in {specific scenario}. Then I observe E. What should I infer?"

That does not generalize to anthropic reasoning: it's just reasoning from arbitrarily selected premises.

Comment author: jimmy 05 October 2009 10:06:24PM 0 points [-]

I figured that wasn't your real objection, but I guessed wrong about what it was.

I figured you were going for something like "you need to include sufficient information so that we know we're not positing an impossible world", which is a fair point, since, for example, at first glance Newcomb's problem appears to violate causality.

Are you suggesting that we deal with more general problems where we know even less, or are you just saying that these problems aren't even related to anthropic reasoning?

Comment author: SilasBarta 05 October 2009 10:33:37PM *  0 points [-]

are you just saying that these problems aren't even related to anthropic reasoning?

This. This is what I'm saying.

These posts I'm referring to start out with "Assume you're in a situation where [...]. And you know that that's the situation. Then what can you infer from evidence E?"

But when you do that, there's nothing anthropic about that -- it's just a usual logical puzzle, unrelated to reasoning about what you can know from your existence in this universe.

Comment author: PlaidX 05 October 2009 10:41:38PM 0 points [-]

Do you consider the original presumptuous philosopher problem to involve anthropic reasoning? What is it that's required to be undefined for reasoning to be anthropic?

Comment author: SilasBarta 06 October 2009 01:13:32AM 0 points [-]

Anthropic reasoning is any reasoning based on the fact that you (believe you) exist, and any condition necessary for you to reach that state, including suppositions about what such conditions include. It can be supplemented by observations of the world as it is.

In this problem, in most of the problems that purport to use anthropic reasoning, and in the original presumptuous philosopher problem, the reasoning is just from arbitrary givens, which doesn't even generalize to anthropic reasoning. Each time, someone is able to point out a problem isomorphic to the one given, but lacking a characteristically anthropic component to the reasoning.

Anthropic reasoning is simply not the same as "hey, what if someone did this to you, where these things had this frequency, what would you conclude upon seeing this?" That's just a normal inference problem.

Just to show that I'm being reasonable, here is what I would consider a real case of anthropic reasoning.

"I notice that I exist. The noticer seems to be the same as that which exists. So, whatever the computational process is for generating my observations must either permit self-reflection, or the thing I notice existing isn't really the same thing having these thoughts."

Comment author: PlaidX 06 October 2009 04:52:15AM 0 points [-]

Each time, someone is able to point out a problem isomorphic to the one given, but lacking a characteristically anthropic component to the reasoning.

To me, that just indicates that anthropic reasoning is valid, or at least that what we're calling anthropic reasoning is valid.

Comment author: SilasBarta 06 October 2009 03:24:02PM 1 point [-]

Well, that just means that you're doing ordinary reasoning, of which anthropic reasoning is a subset. It does not follow that this (and topics like it) is anthropic reasoning. And no, you don't get to define words however you like: the term "anthropic reasoning" is supposed to carve out a natural category in conceptspace, yet when you use it to mean "any reasoning from arbitrary premises", you're making the term less helpful.

Comment author: PlaidX 06 October 2009 10:40:25PM 1 point [-]

the term "anthropic reasoning" is supposed to carve out a natural category in conceptspace

If it doesn't carve out such a category, maybe that's because it's a malformed concept, not because we're using it wrong. Off the top of my head, I see no reason why the existence of the observer should be a special data point that needs to be fed into the data processing system in a special way.

Comment author: wedrifid 05 October 2009 06:47:56PM 0 points [-]

That does not generalize to anthropic reasoning: it's just reasoning from arbitrarily selected premises.

Which is interesting enough, so long as I only have to write trivial replies and not waste time writing up the trivial scenarios! (You make a good point.)