Nisan comments on Irrationality Game III - Less Wrong
Your conclusion doesn't follow from your premises. The lack of a strong filter implies that a not insignificant proportion of civilizations colonize space. This is consistent with there being a large universe containing many intergalactic civilizations we will never observe because of the expansion of the universe.
No, in that large universe model we'd expect to be part of one of the expanded, intergalactic civilisations, and not part of a small, still-at-home civilisation. So, as I stated "we would not expect to see the world as we do in fact see it". Clearly we could still be part of a small civilisation (nothing logically impossible about being in a tiny minority), or we could be in some sort of zoo or ancestor simulation within a big civilisation. But that's not what we'd expect to see. You might want to see Ken Olum's paper for more on this: http://arxiv.org/abs/gr-qc/0303070
Incidentally, Olum considers several different ways out of the conflict between expectation and observation: the finite universe is option F (page 5) and that option seems to me to be a lot more plausible than any of the alternatives he sketches. But if you disagree, please tell me which option you think more likely.
I find that sort of anthropic argument to Prove Too Much. For instance, our universe is about 14 billion years old, but many models have the universe existing trillions of years into the future. If the universe were to survive 280 billion years, then that would put us within the first 5% of the universe's lifespan. So, if we take an alpha of 5%, we can reject the hypothesis that the universe will last more than 280 billion years. We can also reject the hypothesis that more than 4 trillion human lives will take place, that any given 1-year-old will reach the age of 20, that humans will have machines capable of flight for more than 2000 years, etc.
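For concreteness, here is a minimal sketch of the arithmetic being applied here, using only the figures quoted above:

```python
# "Reject at alpha" arithmetic from the comment above (figures as quoted there).
current_age = 14e9   # years since the Big Bang
alpha = 0.05         # significance level

# If the universe lasts T years and we are at a "random" moment of its lifespan,
# then P(we are in the first 5%) <= 0.05 exactly when T >= current_age / alpha.
max_lifespan = current_age / alpha
print(f"Reject total lifespans above {max_lifespan:.3g} years")  # 2.8e+11, i.e. 280 billion
```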
Olum appears to be making a post hoc argument. The probability that the right sperm would fertilize the right egg and I would be conceived is much less than 1 in a billion, but that doesn't mean I think I need a new model. The probability of being born prior to a galaxy-wide expansion may be very low, but someone has to be born before the expansion. What's so special about me, that I should reject the possibility that I am such a person?
That sounds like "Copernican" reasoning (assume you are at a random point in time) rather than "anthropic" reasoning (assume you are a random observer from a class of observers). I'm not surprised the Copernican approach gives daft results, because the spatial version (assume you are at a random point in space) also gives daft results: see point 2 here in this thread.
Incidentally, there is a valid anthropic version of your argument: the prediction is that the universe will be uninhabitable 280 billion years from now, or at least contain many fewer observers than it does now. However, in that case, it looks like a successful prediction. The recent discovery that the stars are beginning to go out and that 95% of stars that will ever form have formed already is just the sort of thing that would be expected under anthropic reasoning. But it is totally surprising otherwise.
The correct application of anthropic reasoning only rejects this as a hypothesis about the average number of observers in a civilisation, not about human beings specifically. If we knew somehow (on other grounds) that most civilisations make it to 10 trillion observers, we wouldn't predict any less for human beings.
That's an instance of the same error: anthropic reasoning does NOT reject the particular hypothesis. We already know that an average human lifespan is greater than 20, so we have no reason to predict less than 20 for a particular child. (The reason is that observing one particular child at age 1 as a random observation from the set of all human observations is no less probable if she lives to 100 than if she lives to 2).
Anthropic reasoning is like any Bayesian reasoning: observations only count as evidence between hypotheses if they are more likely on one hypothesis than another. Also, hypotheses must be fairly likely a priori to be worth considering against the evidence. Suppose you somehow got a precise observation of sperm meeting egg to make you, with a genome analysis of the two: that exact DNA readout would be extremely unlikely under the hypothesis of the usual laws of physics, chemistry and biology. But that shouldn't make you suspect an alternative hypothesis (e.g. that you are some weird biological experiment, or a special child of god) because that exact DNA readout is extremely unlikely on those hypotheses as well. So it doesn't count as evidence for these alternatives.
If all hypotheses gave extremely low probability of being born before the expansion, then you would be correct. But the issue is that some hypotheses give high probability that an observer finds himself before expansion (the hypotheses where no civilisations expand, and all stay small). So your observations do count as evidence to decide between the hypotheses.
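To make the update concrete, here is a toy Bayesian calculation; the priors and likelihoods below are invented purely for illustration:

```python
# Toy Bayesian update illustrating the point above (all numbers invented).
prior_no_expansion = 0.5   # hypothesis A: no civilisation ever expands
prior_expansion    = 0.5   # hypothesis B: most civilisations expand into huge ones

# Likelihood of "I observe myself in a small, home-bound civilisation" under each hypothesis:
likelihood_A = 0.9         # almost all observers are pre-expansion under A
likelihood_B = 1e-6        # almost all observers live in expanded civilisations under B

evidence = prior_no_expansion * likelihood_A + prior_expansion * likelihood_B
posterior_A = prior_no_expansion * likelihood_A / evidence
print(f"P(no expansion | observation) ~ {posterior_A:.6f}")   # ~ 0.999999
```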
I got a bit distracted by the "anthropic reasoning is wrong" discussion below, and missed adding something important. The conclusion that "we would not expect to see the world as we in fact see it" holds in a big universe regardless of the approach taken to anthropic reasoning. It's worth spelling that out in some detail.
Suppose I don't want to engage in any form of anthropic reasoning or observation sampling hypothesis. Then the large universe model leaves me unable to predict anything much at all about my observations. I might perhaps be in a small civilisation, but then I might be in a simulation, or a Boltzmann Brain, or mad, or a galactic emperor, or a worm, or a rock, or a hydrogen molecule. I have no basis for assigning significant probability to any of these - my predictions are all over the place. So I certainly can't expect to observe that I'm an intelligent observer in a small civilisation confined to its home planet.
Suppose I adopt a "Copernican" hypothesis - I'm just at a random point in space. Well now, the usual big and small universe hypotheses predict that I'm most likely going to be somewhere in intergalactic or interstellar space, so that's not a great predictive success. The universe model which best predicts my observations looks frankly weird... instead of a lot of empty space, it is a dense mass of "computronium" running lots of simulations of different observers, and I'm one of them. Even then I can't expect to be in a simulation of a small civilisation, since the sim could be of just about anything. Again, not a great predictive success.
Suppose I adopt SIA reasoning. Then I should just ignore the finite universes, since they contribute zero prior probability. Or if I've decided for some reason to keep all my universe hypotheses finite, then I should ignore all but the largest ones (ones with 3^^^3 or more galaxies). Among the infinite-or-enormous universes, nearly all have expanded civilisations, and so under SIA nearly all predict that I'm going to be in a big civilisation. The only ones which predict otherwise include a "universal doom" - the probability that a small civilisation ever expands off its home world is zero, or negligibly bigger than zero. That's a massive future filter. So SIA and big universes can - just about - predict my observations, but only if there is this super-strong filter. Again, that has low prior probability, and is not what I should expect to see.
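As a rough illustration of why SIA swamps everything except the largest hypotheses (all priors and observer counts below are invented):

```python
# Toy SIA weighting: each hypothesis's prior is reweighted by its number of observers.
hypotheses = {
    "small finite universe":            {"prior": 0.4, "observers": 1e10},
    "enormous universe, no filter":     {"prior": 0.3, "observers": 1e40},
    "enormous universe, strong filter": {"prior": 0.3, "observers": 1e30},
}

total = sum(h["prior"] * h["observers"] for h in hypotheses.values())
for name, h in hypotheses.items():
    print(f"{name}: SIA weight {h['prior'] * h['observers'] / total:.2e}")
# The largest-universe hypothesis takes essentially all the weight, which is the point above.
```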
Suppose I adopt SSA reasoning. I need to specify the reference class, and it is a bit hard to know which one to use. In a big universe, different reference classes will lead to very different predictions: picking out small civilisations, large civilisations, AIs, sims, emperors and so on (plus worms, rocks and hydrogen for the wackier reference classes). As I don't know which to use, my predictions get smeared out across the classes, and are consequently vague. Again, I can't expect to be in a small civilisation on its home planet.
By contrast, look at the small universe models with only a few civilisations. A fair chunk of these models have modest future filters so none of the civilisations expand. For those models, SSA looks in quite good shape, as there is quite a wide choice of reference classes that all lead to the same prediction. Provided the reference class predicts I am an intelligent observer at all then it must predict I am in a small civilisation confined to its home planet (because all civilisations are like that). Of course there are the weird classes which predict I'm a worm and so on - nothing we can do about those - but among the sensible classes we get a hit.
So this is where I'm coming from. The only model which leads me to expect to see what I actually see is a small universe model, with a modest future filter. Within that model, I will need to adopt some sort of SSA-reasoning to get a prediction, but I don't have to know in advance which reference class to use: any reference class which selects an intelligent observer predicts roughly what I see. None of the other models or styles of reasoning lead to that prediction.
This sort of anthropic reasoning is wrong. Consider the following experiment.
A fair coin is tossed. If the result is H, you are cloned into 10^10 copies, and all of those copies except one are placed in the Andromeda galaxy; the remaining copy stays in the Milky Way. If the result is T, no cloning occurs and you remain in the Milky Way. Either way, the "you" in the Milky Way has no immediate direct way to know the result of the coin toss.
Someone, call her the "anthropic mugger", comes to you and offers a bet. She can perform an experiment which will reveal the result of the coin toss (but she hasn't done it yet). If you accept the bet and the coin toss turns out to be H, she pays you $1. If you accept the bet and the coin toss turns out to be T, you pay her $1000. Do you accept the bet?
Reasoning along the same lines as you did to conclude there are no large civilizations, you should accept the bet. But this means your expected gain before the coin toss is -$499.50. So, before the coin toss it is profitable for you to change your way of reasoning so you won't be tempted to accept the bet.
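The -$499.50 figure is just the pre-toss expectation, e.g.:

```python
# Expected value of accepting the bet, evaluated before the coin toss.
p_heads = 0.5
gain_if_heads = 1       # the mugger pays you $1
loss_if_tails = -1000   # you pay the mugger $1000

expected_gain = p_heads * gain_if_heads + (1 - p_heads) * loss_if_tails
print(expected_gain)    # -499.5
```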
There's no reason to accept the bet unless in the cloning scenario you care much less about the copy of you in the Milky Way than in the no-cloning scenario. So, there's no reason to assume there are no large civilizations if the existence of large civilizations wouldn't make us care much less about our own.
There are a number of problems with that:
1) You don't specify whether the bet is offered to all my copies or just to one of them, or if to just one of them, whether it is guaranteed to be the one in the Milky Way. Or if the one in the Milky Way knows he is in the Milky Way when taking the bet, and so on.
Suppose I am offered the bet before knowing whether I am in Andromeda or the Milky Way. What odds should I accept on the coin toss: 50/50? Suppose I am then told I am in the Milky Way... what odds should I now accept on the coin toss: still 50/50? If you say 50/50 in both cases then you are a "double-halfer" (in the terminology of Sleeping Beauty problems) and you can be Dutch-booked. If you answer other than 50/50 in one case or the other, then you are saying there are circumstances where you'd bet at odds different (probably very different) from the physical odds of a fair coin toss, and without any context that sounds rather crazy. So whatever you say, there is a bullet to bite.
2) I am, by the way, quite aware of the literature on Anthropic Decision Theory (especially Stuart Armstrong's paper) and since my utility function is roughly the average utility for my future copies (rather than total utility) I feel inclined to bet with the SSA odds. Yes, this will lead to the "me" in the Milky Way making a loss in the case of "H" but at that stage he counts for only a tiny sliver of my utility function, so I think I'll take the risk and eat the loss. If I modify my reasoning now then there are other bets which will lead to a bigger expected loss (or even a guaranteed loss if I can be Dutch-booked).
Remember though that I only assigned 90% probability to the original hypothesis. Part of the remaining 10% uncertainty is that I am not fully confident that SSA odds are the right ones to use. So the anthropic mugger might not be able to make $500 off me (I'm likely to refuse the 1000:1 bet), but he probably could make $5 off me.
3) As in many such problems, you oversimplify by specifying in advance that the coin is fair, which then leads to the crazy-sounding betting odds (and the need to bite a bullet somewhere). But in the real world case, the coin has unknown bias (as we don't know the size of the future filter). This means we have to try to estimate the bias (size of filter) based on the totality of our evidence.
Suppose I'm doubtful about the fair coin hypothesis and have two other hypotheses: heavy bias towards heads or heavy bias towards tails. Then it seems very reasonable that under the "bias towards heads" hypothesis I would expect to be in Andromeda, and if I discover I am not, that counts as evidence for the "bias towards tails" hypothesis. So as I now suspect bias in one particular direction, why still bet on 50/50 odds?
I meant that the bet is offered to the copy in the Milky Way and that he knows he is in the Milky Way. This is the right analogy with the "large civilizations" problem since we know we're in a small civilization.
In your version of the problem the clones get to bet too, so the answer depends on how your utility is accumulated over clones.
If you have a well-defined utility function and you're using UDT, everything makes sense IMO.
It doesn't change anything in principle. You just added another coin toss before the original coin toss which affects the odds of the latter.
Well we currently observe that we are in a small civilisation (though we could be in a zoo or simulation or whatever). But to assess the hypotheses in question we have to (in essence) forget that observation, create a prior for small universe versus big universe hypotheses, see what the hypotheses predict we should expect to observe, and then update when we "notice" the observation.
Alternatively, if you adopt the UDT approach, you have to consider what utility function you'd have before knowing whether you are in a big civilization or not. What would the "you" then like to commit the "you" now to deciding?
If you think you'd care about average utility in that original situation then naturally the small civilisations will get less weight in outcomes where there are big civilisations as well. Whereas if there are only small civilisations, they get all the weight. No difficulties there.
If you think you'd care about total utility (so the small civs get equal weight regardless) then be careful that it's bounded somehow. Otherwise you are going to have a known problem with expected utilities diverging (see http://lesswrong.com/lw/fg7/sia_fears_expected_infinity/).
A metaphorical coin with unknown (or subjectively-assigned) odds is quite a different beast from a physical coin with known odds (based on physical facts). You can't create crazy-sounding conclusions with metaphorical coins (i.e. situations where you bet at million to 1 odds, despite knowing that the coin toss was a fair one.)
I think that I care about a time-discounted utility integral within a future light-cone. Large civilizations entering this cone don't reduce the utility of small civilizations.
I don't believe in different kinds of coins. They're all the same Bayesian probabilities. It's a meta-Occam razor: I don't see any need for introducing these distinct categories.
I'm not sure how you apply that in a big universe model... most of it lies outside any given light-cone, so which one do you pick? Imagine you don't yet know where you are: do you sum utility across all light-cones (a sum which could still diverge in a big universe) or take the utility of an average light-cone? Also, how do you do the time-discounting if you don't yet know when you are?
My initial guess is that this utility function won't encourage betting on really big universes (as there is no increase in utility of the average lightcone from winning the bet), but it will encourage betting on really dense universes (packed full of people or simulations of people). So you should maybe bet that you are in a simulation, running on a form of dense "computronium" in the underlying universe.
The possible universes I am considering already come packed into a future light-cone (I don't consider large universes directly). The probability of a universe is proportional to 2^(-K), where K is its Kolmogorov complexity, so expected utility converges. Time-discounting is relative to the vertex of the light-cone.
Not really. Additive terms in the utility don't "encourage" anything, multiplicative factors do.
I was a bit surprised by this... if your possible models only include one light-cone (essentially just the observable universe) then they don't look too different from those of my stated hypothesis (at the start of the thread). What is your opinion then on other civilisations in the light-cone? How likely are these alternatives?
Here's how it works. Imagine the "mugger" offers all observers a bet (e.g. at your 1000:1 odds) on whether they believe they are in a simulation, within a dense "computronium" universe packed full of computers simulating observers. Suppose only a tiny fraction (less than 1 in a trillion) of universe models are like that, and the observers all know this (so this is equivalent to a very heavily weighted coin landing against its weight). But still, by your proposed utility function, UDT observers should accept the bet, since in the freak universes where they win, huge numbers of observers win $1 each, adding a colossal amount of total utility to the light-cone. Whereas in the more regular universes where they lose the bet, relatively fewer observers will lose $1000 each. Hence accepting the bet creates more expected utility than rejecting it.
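To make the structure explicit, here is a toy calculation; the observer counts and the 1-in-a-trillion fraction below are invented magnitudes, chosen only to match the shape of the argument:

```python
# Why a total-utility UDT agent accepts the mugger's bet (all magnitudes invented).
p_computronium   = 1e-12   # fraction of universe models that are dense computronium
observers_dense  = 1e30    # observers per computronium universe (hypothetical)
observers_normal = 1e10    # observers per ordinary universe (hypothetical)

win_per_observer  = 1      # each observer wins $1 if the universe is computronium
loss_per_observer = -1000  # each observer loses $1000 otherwise

expected_total_utility = (p_computronium * observers_dense * win_per_observer
                          + (1 - p_computronium) * observers_normal * loss_per_observer)
print(expected_total_utility)  # about 1e18 - 1e13 > 0, so accepting maximises expected total utility
```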
Another issue you might have concerns the time-discounting. Suppose 1 million observers live early on in the light-cone, and 1 trillion live late in the light-cone (and again all observers know this). The mugger approaches all observers before they know whether they are "early" or "late" and offers them a 50:50 bet on whether they are "early" rather than "late". The observers all decide to accept the bet, knowing that 1 million will win and 1 trillion will lose: however the utility of the losers is heavily discounted, relative to the winners, so the total expected time-discounted utility is increased by accepting the bet.
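And the time-discounting version, again with an invented discount factor and stake just to show the shape:

```python
# Toy version of the early/late bet above; the discount factor is invented.
early_observers = 1e6
late_observers  = 1e12
discount_late   = 1e-9   # hypothetical: utility late in the light-cone is discounted about a billion-fold
stake = 1                # even-odds bet at $1 per observer

total_discounted_utility = (early_observers * stake                     # early observers win
                            - late_observers * stake * discount_late)   # late observers lose, heavily discounted
print(total_discounted_utility)  # 1e6 - 1e3 > 0, so accepting raises total time-discounted utility
```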
Oh I see, that makes sense.
I agree with this counterargument, but this thread being what it is, in which direction should I vote sub-comments?
Subcomments are voted on according to the ordinary rules.