
drnickbone comments on Irrationality Game III - Less Wrong Discussion

Post author: CellBioGuy 12 March 2014 01:51PM




Comment author: drnickbone 13 March 2014 08:17:55AM 15 points

The universe is finite, and not much bigger than the region we observe. There is no multiverse (in particular, the Many Worlds Interpretation is incorrect and SIA is incorrect). There have been a few (fewer than a million) intelligent civilisations before human beings, but none of them managed to expand into space, which explains Fermi's paradox. This also implies a mild form of the "Doomsday" argument (we ourselves are fairly unlikely to expand), but not a strong future filter (if instead millions or billions of civilisations had existed, and none of them had expanded, there would have to be a massive future filter). Probability: 90%.

Comment author: polymathwannabe 13 March 2014 02:06:59PM 0 points

I don't know how to vote on this. I have very strong suspicions that MWI is incorrect (its Copernican allure is its only favorable point), but I disagree that the universe is finite. I feel inclined toward SIA, but I generally reject anthropic reasoning (that's perhaps a statement about myself rather than about your arguments).

(Also, I require more detailed arguments to dissolve Fermi's paradox because I don't believe paradoxes exist in reality.)

Comment author: drnickbone 13 March 2014 03:06:54PM 0 points

I'd suggest that since you agree with some parts but disagree with others, you should assign a probability a lot less than 90% to the whole hypothesis. So you should think I'm irrationally overconfident in the whole lot, and upvote, please!

If you want some detail, I start from the "Great Filter" argument (see http://hanson.gmu.edu/greatfilter.html). I find it very hard to believe that there is a super-strong future filter ahead of us (such that we have less than a 1-in-a-million or 1-in-a-billion chance of passing it and then expanding into space). But a relatively weak filter implies that rather few civilizations can have got to our stage of development: there can't have been millions or billions of them, or some would have got past the filter and expanded, and we would not expect to see the world as we do in fact see it. The argument that the universe is finite (and not too big) then follows from there being a limited number of civilizations. SIA and MWI must also be wrong, because they each imply a very large or infinite number of civilizations.
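
The arithmetic here is worth making explicit: the chance that at least one of N civilisations passes a filter which each clears independently with probability p is 1 - (1 - p)^N. A minimal sketch, with illustrative numbers of my own (not figures from the comment):

```python
# Probability that at least one of n civilisations passes a filter,
# where each one independently expands with probability p.
def p_any_expands(n, p):
    return 1 - (1 - p) ** n

# If a million civilisations had existed, even a 1-in-a-thousand
# chance of expanding would make at least one expansion near-certain:
print(p_any_expands(10**6, 1e-3))  # ~1.0
# With only a handful of civilisations, a modest filter is enough
# to explain why none has expanded:
print(p_any_expands(5, 1e-3))      # ~0.005
```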

Comment author: Nisan 13 March 2014 04:14:57PM 1 point

Your conclusion doesn't follow from your premises. The lack of a strong filter implies that a non-negligible proportion of civilizations colonize space. This is consistent with there being a large universe containing many intergalactic civilizations we will never observe because of the expansion of the universe.

Comment author: drnickbone 13 March 2014 05:34:00PM 0 points

No: in that large universe model we'd expect to be part of one of the expanded, intergalactic civilisations, and not part of a small, still-at-home civilisation. So, as I stated, "we would not expect to see the world as we do in fact see it". Clearly we could still be part of a small civilisation (there is nothing logically impossible about being in a tiny minority), or we could be in some sort of zoo or ancestor simulation within a big civilisation. But that's not what we'd expect to see. You might want to see Ken Olum's paper for more on this: http://arxiv.org/abs/gr-qc/0303070

Incidentally, Olum considers several different ways out of the conflict between expectation and observation: the finite universe is option F (page 5) and that option seems to me to be a lot more plausible than any of the alternatives he sketches. But if you disagree, please tell me which option you think more likely.

Comment author: ThisSpaceAvailable 16 March 2014 02:12:17AM 0 points

I find that sort of anthropic argument to Prove Too Much. For instance, our universe is about 14 billion years old, but many models have the universe existing trillions of years into the future. If the universe were to survive 280 billion years, then that would put us within the first 5% of the universe's lifespan. So, if we take an alpha of 5%, we can reject the hypothesis that the universe will last more than 280 billion years. We can also reject the hypothesis that more than 4 trillion human lives will take place, that any given 1-year-old will reach the age of 20, that humans will have machines capable of flight for more than 2000 years, etc.
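
Each of these bounds is the same one-line division. A hedged sketch reproducing the commenter's figures (the ~200 billion humans born so far and ~110 years of powered flight are my assumed inputs, not stated in the comment):

```python
# The "alpha of 5%" bound: if we assume we are not within the first
# alpha-fraction of a quantity's total extent, its total is at most
# observed_so_far / alpha.
def copernican_bound(observed_so_far, alpha=0.05):
    return observed_so_far / alpha

print(copernican_bound(14e9))   # universe's lifespan: 280 billion years
print(copernican_bound(200e9))  # human lives, if ~200 billion so far: 4 trillion
print(copernican_bound(1))      # a 1-year-old's lifespan: 20 years
print(copernican_bound(110))    # powered flight, ~110 years old in 2014: 2200 years
```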

Olum appears to be making a post hoc argument. The probability that the right sperm would fertilize the right egg and I would be conceived is much less than 1 in a billion, but that doesn't mean I think I need a new model. The probability of being born prior to a galaxy-wide expansion may be very low, but someone has to be born before the expansion. What's so special about me, that I should reject the possibility that I am such a person?

Comment author: drnickbone 17 March 2014 01:07:12PM 1 point

If the universe were to survive 280 billion years, then that would put us within the first 5% of the universe's lifespan. So, if we take an alpha of 5%, we can reject the hypothesis that the universe will last more than 280 billion years.

That sounds like "Copernican" reasoning (assume you are at a random point in time) rather than "anthropic" reasoning (assume you are a random observer from a class of observers). I'm not surprised the Copernican approach gives daft results, because the spatial version (assume you are at a random point in space) also gives daft results: see point 2 of my comment elsewhere in this thread.

Incidentally, there is a valid anthropic version of your argument: the prediction is that the universe will be uninhabitable 280 billion years from now, or at least contain many fewer observers than it does now. However, in that case, it looks like a successful prediction. The recent discovery that the stars are beginning to go out and that 95% of stars that will ever form have formed already is just the sort of thing that would be expected under anthropic reasoning. But it is totally surprising otherwise.

We can also reject the hypothesis that more than 4 trillion human lives will take place

The correct application of anthropic reasoning only rejects this as a hypothesis about the average number of observers in a civilisation, not about human beings specifically. If we knew somehow (on other grounds) that most civilisations make it to 10 trillion observers, we wouldn't predict any less for human beings.

that any given 1-year-old will reach the age of 20,

That's an instance of the same error: anthropic reasoning does NOT reject that particular hypothesis. We already know that the average human lifespan is greater than 20, so we have no reason to predict less than 20 for a particular child. (The reason is that observing one particular child at age 1, as a random observation from the set of all human observations, is no less probable if she lives to 100 than if she lives to 2.)

The probability that the right sperm would fertilize the right egg and I would be conceived is much less than 1 in a billion, but that doesn't mean I think I need a new model

Anthropic reasoning is like any Bayesian reasoning: observations only count as evidence between hypotheses if they are more likely on one hypothesis than on another. Also, hypotheses must be fairly likely a priori to be worth considering against the evidence. Suppose you somehow got a precise observation of the sperm meeting the egg that made you, with a genome analysis of the two: that exact DNA readout would be extremely unlikely under the hypothesis of the usual laws of physics, chemistry and biology. But that shouldn't make you suspect an alternative hypothesis (e.g. that you are some weird biological experiment, or a special child of god), because that exact DNA readout is extremely unlikely on those hypotheses as well. So it doesn't count as evidence for those alternatives.
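
A minimal sketch of this Bayes-factor point, with invented numbers:

```python
# Posterior odds = prior odds x likelihood ratio.  An observation that is
# equally (im)probable under both hypotheses moves the odds not at all.
def posterior_odds(prior_odds, p_obs_h1, p_obs_h2):
    return prior_odds * (p_obs_h1 / p_obs_h2)

# The exact DNA readout is astronomically unlikely under ordinary biology
# AND under "special child of god", so it is no evidence either way:
print(posterior_odds(10**9, 1e-30, 1e-30))  # unchanged: 10**9

# Evidence accrues only when the likelihoods differ:
print(posterior_odds(10**9, 1e-30, 1e-33))  # odds multiplied by 1000
```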

The probability of being born prior to a galaxy-wide expansion may be very low, but someone has to be born before the expansion. What's so special about me, that I should reject the possibility that I am such a person?

If all hypotheses gave an extremely low probability of being born before the expansion, then you would be correct. But the issue is that some hypotheses give a high probability that an observer finds himself before expansion (the hypotheses where no civilisations expand, and all stay small). So your observations do count as evidence for deciding between the hypotheses.

Comment author: drnickbone 14 March 2014 05:18:33PM 0 points

I got a bit distracted by the "anthropic reasoning is wrong" discussion below, and missed adding something important. The conclusion that "we would not expect to see the world as we in fact see it" holds in a big universe regardless of the approach taken to anthropic reasoning. It's worth spelling that out in some detail.

  1. Suppose I don't want to engage in any form of anthropic reasoning or observation sampling hypothesis. Then the large universe model leaves me unable to predict anything much at all about my observations. I might perhaps be in a small civilisation, but then I might be in a simulation, or a Boltzmann Brain, or mad, or a galactic emperor, or a worm, or a rock, or a hydrogen molecule. I have no basis for assigning significant probability to any of these - my predictions are all over the place. So I certainly can't expect to observe that I'm an intelligent observer in a small civilisation confined to its home planet.

  2. Suppose I adopt a "Copernican" hypothesis - I'm just at a random point in space. Well now, the usual big and small universe hypotheses predict that I'm most likely going to be somewhere in intergalactic or interstellar space, so that's not a great predictive success. The universe model which most predicts my observations looks frankly weird... instead of a lot of empty space, it is a dense mass of "computronium" running lots of simulations of different observers, and I'm one of them. Even then I can't expect to be in a simulation of a small civilisation, since the sim could be of just about anything. Again, not a great predictive success.

  3. Suppose I adopt SIA reasoning. Then I should just ignore the finite universes, since they contribute zero prior probability. Or if I've decided for some reason to keep all my universe hypotheses finite, then I should ignore all but the largest ones (ones with 3^^^3 or more galaxies). Nearly all of these infinite-or-enormous universes have expanded civilisations, and so under SIA, nearly all predict that I'm going to be in a big civilisation. The only ones which predict otherwise include a "universal doom": the probability that a small civilisation ever expands off its home world is zero, or negligibly bigger than zero. That's a massive future filter. So SIA and big universes can - just about - predict my observations, but only if there is this super-strong filter. Again, that has low prior probability, and is not what I should expect to see.

  4. Suppose I adopt SSA reasoning. I need to specify the reference class, and it is a bit hard to know which one to use. In a big universe, different reference classes will lead to very different predictions: picking out small civilisations, large civilisations, AIs, sims, emperors and so on (plus worms, rocks and hydrogen for the wackier reference classes). As I don't know which to use, my predictions get smeared out across the classes, and are consequently vague. Again, I can't expect to be in a small civilisation on its home planet.

By contrast, look at the small universe models with only a few civilisations. A fair chunk of these models have modest future filters, so none of the civilisations expand. For those models, SSA looks in quite good shape, as there is quite a wide choice of reference classes that all lead to the same prediction. Provided the reference class predicts that I am an intelligent observer at all, it must predict that I am in a small civilisation confined to its home planet (because all civilisations are like that). Of course there are the weird classes which predict I'm a worm and so on - nothing we can do about those - but among the sensible classes we get a hit.

So this is where I'm coming from. The only model which leads me to expect to see what I actually see is a small universe model with a modest future filter. Within that model, I will need to adopt some sort of SSA reasoning to get a prediction, but I don't have to know in advance which reference class to use: any reference class which selects an intelligent observer predicts roughly what I see. None of the other models or styles of reasoning leads to that prediction. (The sketch below puts toy numbers on the SSA/SIA contrast.)
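
A toy calculation makes the contrast concrete. The observer counts here are illustrative inventions of mine, not figures from the thread:

```python
# Two toy universe models: (prior, observers in small home-bound
# civilisations, observers in big expanded civilisations).
models = {
    "small universe, modest filter": (0.5, 1e10, 0.0),
    "big universe, weak filter":     (0.5, 1e10, 1e30),
}

def ssa_p_small(obs_small, obs_big):
    # SSA: within a given model I am a random observer from the
    # reference class, here "all intelligent observers".
    return obs_small / (obs_small + obs_big)

def sia_posterior(models):
    # SIA: reweight each model's prior by its total observer count.
    w = {name: prior * (s + b) for name, (prior, s, b) in models.items()}
    z = sum(w.values())
    return {name: wi / z for name, wi in w.items()}

for name, (prior, s, b) in models.items():
    print(name, "-> P(I'm in a small civ | model) =", ssa_p_small(s, b))
# SSA: the small-universe model predicts the observation (1.0); the big
# one gives ~1e-20.  SIA instead dumps essentially all posterior weight
# on the big universe, whatever we actually observe:
print(sia_posterior(models))
```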

Comment author: Squark 14 March 2014 12:18:44PM 0 points

This sort of anthropic reasoning is wrong. Consider the following experiment.

A fair coin is tossed. If the result is H, you are cloned into 10^10 copies, and all of those copies except one are placed in the Andromeda galaxy; the remaining copy stays in the Milky Way. If the result is T, no cloning occurs and you remain in the Milky Way. Either way, the "you" in the Milky Way has no immediate, direct way to know the result of the coin toss.

Someone, call her the "anthropic mugger", comes to you and offers a bet. She can perform an experiment which will reveal the result of the coin toss (but she hasn't done it yet). If you accept the bet and the coin toss turns out to be H, she pays you $1. If you accept the bet and the coin toss turns out to be T, you pay her $1000. Do you accept the bet?

Reasoning along the same lines as you did to conclude there are no large civilizations, you should accept the bet. But this means your expected gain before the coin toss is -$499.50. So, before the coin toss, it is profitable for you to change your way of reasoning so that you won't be tempted to accept the bet.
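
As a quick check of that figure (a minimal sketch):

```python
# Expected gain before the coin toss, if you precommit to accepting:
p_heads = 0.5
gain_if_heads = 1       # she pays you $1
gain_if_tails = -1000   # you pay her $1000
print(p_heads * gain_if_heads + (1 - p_heads) * gain_if_tails)  # -499.5
```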

There's no reason to accept the bet unless, in the cloning scenario, you care much less about the copy of you in the Milky Way than you do in the no-cloning scenario. So there's no reason to assume there are no large civilizations, if the existence of large civilizations wouldn't make us care much less about our own.

Comment author: drnickbone 14 March 2014 02:40:07PM 0 points

There are a number of problems with that:

1) You don't specify whether the bet is offered to all my copies or just to one of them, or if to just one of them, whether it is guaranteed to be the one in the Milky Way. Or if the one in the Milky Way knows he is in the Milky Way when taking the bet, and so on.

Suppose I am offered the bet before knowing whether I am in Andromeda or Milky Way. What odds should I accept on the coin toss: 50/50? Suppose I am then told I am in the Milky Way... what odds should I now accept on the coin toss: still 50/50? If you say 50/50 in both cases then you are a "double-halfer" (in the terminology of Sleeping Beauty problems) and you can be Dutch-booked. If you answer other than 50/50 in one case or the other, then you are saying there are circumstances where you'd bet at odds different (probably very different) from the physical odds of a fair coin toss, and without any context that sounds rather crazy. So whatever you say, there is a bullet to bite.
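
Here is a sketch of the two standard ways of updating in this experiment, which is where the two bullets come from (my own calculation, not from the thread):

```python
# Cloning experiment: on H there are N = 10**10 copies of me, exactly
# one of them in the Milky Way; on T there is one copy, in the Milky Way.
N = 10**10

# SSA (reference class = all copies of me):
# P(a randomly chosen copy is in the Milky Way | H) = 1/N.
ssa_h = 0.5 * (1 / N)
ssa_t = 0.5 * 1.0
print(ssa_h / (ssa_h + ssa_t))  # ~1e-10: wildly far from the coin's 50/50

# SIA: weight each hypothesis by the number of copies in my exact
# situation (in the Milky Way); one copy either way, so 50/50 survives.
sia_h = 0.5 * 1.0
sia_t = 0.5 * 1.0
print(sia_h / (sia_h + sia_t))  # 0.5
```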

2) I am, by the way, quite aware of the literature on Anthropic Decision Theory (especially Stuart Armstrong's paper), and since my utility function is roughly the average utility for my future copies (rather than the total utility), I feel inclined to bet with the SSA odds. Yes, this will lead to the "me" in the Milky Way making a loss in the case of H, but at that stage he counts for only a tiny sliver of my utility function, so I think I'll take the risk and eat the loss. If I modify my reasoning now then there are other bets which will lead to a bigger expected loss (or even a guaranteed loss, if I can be Dutch-booked).

Remember though that I only assigned 90% probability to the original hypothesis. Part of the remaining 10% uncertainty is that I am not fully confident that SSA odds are the right ones to use. So the anthropic mugger might not be able to make $500 off me (I'm likely to refuse the 1000:1 bet), but she probably could make $5 off me.

3) As in many such problems, you oversimplify by specifying in advance that the coin is fair, which then leads to the crazy-sounding betting odds (and the need to bite a bullet somewhere). But in the real-world case, the coin has unknown bias (as we don't know the size of the future filter). This means we have to try to estimate the bias (the size of the filter) from the totality of our evidence.

Suppose I'm doubtful about the fair coin hypothesis and have two other hypotheses: heavy bias towards heads or heavy bias towards tails. Then it seems very reasonable that under the "bias towards heads" hypothesis I would expect to be in Andromeda, and if I discover I am not, that counts as evidence for the "bias towards tails" hypothesis. So as I now suspect bias in one particular direction, why still bet on 50/50 odds?
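
Under SSA, the update described here looks like this (a sketch with made-up biases and equal priors):

```python
# Three hypotheses about the coin's bias towards heads, updated under
# SSA on the observation "I am in the Milky Way".
N = 10**10
bias = {"heads-biased": 0.99, "fair": 0.5, "tails-biased": 0.01}

def p_milky_way(p_heads):
    # On heads only 1 of N copies is in the Milky Way; on tails all are.
    return p_heads * (1 / N) + (1 - p_heads) * 1.0

weights = {h: (1 / 3) * p_milky_way(p) for h, p in bias.items()}
total = sum(weights.values())
print({h: w / total for h, w in weights.items()})
# Weight shifts strongly away from "heads-biased" and towards
# "tails-biased", so 50/50 betting odds no longer look right.
```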

Comment author: Squark 14 March 2014 02:59:08PM 0 points

1) You don't specify whether the bet is offered to all my copies or just to one of them, or if to just one of them, whether it is guaranteed to be the one in the Milky Way. Or if the one in the Milky Way knows he is in the Milky Way when taking the bet, and so on.

I meant that the bet is offered to the copy in the Milky Way and that he knows he is in the Milky Way. This is the right analogy with the "large civilizations" problem since we know we're in a small civilization.

Suppose I am offered the bet before knowing whether I am in Andromeda or Milky Way. What odds should I accept on the coin toss: 50/50?

In your version of the problem the clones get to bet too, so the answer depends on how your utility is accumulated over clones.

So whatever you say, there is a bullet to bite.

If you have a well-defined utility function and you're using UDT, everything makes sense IMO.

Suppose I'm doubtful about the fair coin hypothesis and have two other hypotheses: heavy bias towards heads or heavy bias towards tails.

It doesn't change anything in principle. You just added another coin toss before the original coin toss which affects the odds of the latter.

Comment author: drnickbone 14 March 2014 03:59:01PM 0 points

I meant that the bet is offered to the copy in the Milky Way and that he knows he is in the Milky Way. This is the right analogy with the "large civilizations" problem since we know we're in a small civilization.

Well, we currently observe that we are in a small civilisation (though we could be in a zoo or simulation or whatever). But to assess the hypotheses in question, we have to (in essence) forget that observation, create a prior over small-universe versus big-universe hypotheses, see what each hypothesis predicts we should expect to observe, and then update when we "notice" the observation.

Alternatively, if you adopt the UDT approach, you have to consider what utility function you'd have before knowing whether you are in a big civilization or not. What would that earlier "you" like to commit the present "you" to deciding?

If you think you'd care about average utility in that original situation then naturally the small civilisations will get less weight in outcomes where there are big civilisations as well. Whereas if there are only small civilisations, they get all the weight. No difficulties there.

If you think you'd care about total utility (so the small civs get equal weight regardless) then be careful that it's bounded somehow. Otherwise you are going to have a known problem with expected utilities diverging (see http://lesswrong.com/lw/fg7/sia_fears_expected_infinity/).
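
The divergence is easy to exhibit; a minimal sketch with an illustrative prior and utility of my own choosing:

```python
# If the prior over universe sizes falls off more slowly than total
# utility grows, expected total utility diverges.  Illustrative numbers:
# prior 2**-n on a universe with 4**n observers, utility ~ observer count.
def partial_sum(n_terms):
    return sum((2.0 ** -n) * (4.0 ** n) for n in range(1, n_terms + 1))

for n_terms in (10, 20, 30):
    print(n_terms, partial_sum(n_terms))  # partial sums grow without bound
```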

It doesn't change anything in principle. You just added another coin toss before the original coin toss which affects the odds of the latter.

A metaphorical coin with unknown (or subjectively-assigned) odds is quite a different beast from a physical coin with known odds (based on physical facts). You can't create crazy-sounding conclusions with metaphorical coins (i.e. situations where you bet at million-to-1 odds despite knowing that the coin toss was a fair one).

Comment author: Squark 14 March 2014 06:49:00PM 0 points

If you think you'd care about average utility in that original situation then naturally the small civilisations will get less weight in outcomes where there are big civilisations as well. Whereas if there are only small civilisations, they get all the weight. No difficulties there.

If you think you'd care about total utility (so the small civs get equal weight regardless) then be careful that it's bounded somehow. Otherwise you are going to have a known problem with expected utilities diverging (see http://lesswrong.com/lw/fg7/sia_fears_expected_infinity/).

I think that I care about a time-discounted utility integral within a future light-cone. Large civilizations entering this cone don't reduce the utility of small civilizations.

A metaphorical coin with unknown (or subjectively-assigned) odds is quite a different beast from a physical coin with known odds (based on physical facts).

I don't believe in different kinds of coins. They're all the same Bayesian probabilities. It's a meta-Occam razor: I don't see any need for introducing these distinct categories.

Comment author: Nisan 13 March 2014 10:17:41PM 0 points

Oh I see, that makes sense.

Comment author: polymathwannabe 13 March 2014 04:17:33PM -1 points

I agree with this counterargument, but this thread being what it is, in which direction should I vote on sub-comments?

Comment author: Squark 14 March 2014 08:29:40AM 1 point

Sub-comments are voted on by the ordinary rules.

Comment author: shminux 16 March 2014 02:17:55AM -1 points

I have very strong suspicions that MWI is incorrect

How would you evaluate the correctness of something untestable?

Comment author: polymathwannabe 16 March 2014 03:55:58AM -1 points

I don't know whether this counts as a correctness assessment, but my expectations do not vary with the truth of MWI, so it's a needless hypothesis.