
Squark comments on Irrationality Game III - Less Wrong Discussion

11 Post author: CellBioGuy 12 March 2014 01:51PM

Comment author: drnickbone 13 March 2014 05:34:00PM *  0 points [-]

No, in that large universe model we'd expect to be part of one of the expanded, intergalactic civilisations, and not part of a small, still-at-home civilisation. So, as I stated, "we would not expect to see the world as we do in fact see it". Clearly we could still be part of a small civilisation (nothing logically impossible about being in a tiny minority), or we could be in some sort of zoo or ancestor simulation within a big civilisation. But that's not what we'd expect to see. You might want to see Ken Olum's paper for more on this: http://arxiv.org/abs/gr-qc/0303070

Incidentally, Olum considers several different ways out of the conflict between expectation and observation: the finite universe is option F (page 5), and that option seems to me a lot more plausible than any of the alternatives he sketches. But if you disagree, please tell me which option you think is more likely.

Comment author: Squark 14 March 2014 12:18:44PM 0 points [-]

This sort of anthropic reasoning is wrong. Consider the following experiment.

A fair coin is tossed. If the result is H, you are cloned into 10^10 copies; all of those copies except one are placed in the Andromeda galaxy, and the remaining copy stays in the Milky Way. If the result is T, no cloning occurs and you remain in the Milky Way. Either way, the "you" in the Milky Way has no immediate, direct way to know the result of the coin toss.

Someone, call her the "anthropic mugger", comes to you and offers a bet. She can perform an experiment that will reveal the result of the coin toss (but she hasn't done it yet). If you accept the bet and the coin toss turns out to be T, she pays you $1. If you accept the bet and the coin toss turns out to be H, you pay her $1000. Do you accept the bet?

Reasoning along the same lines as you did to conclude there are no large civilizations (you find yourself in the Milky Way, so the coin almost certainly came up T), you should accept the bet. But this means your expected gain before the coin toss is -$499.50. So, before the coin toss it is profitable for you to change your way of reasoning so that you won't be tempted to accept the bet.
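
To make the numbers explicit, here is a minimal sketch of both calculations (the copy count and payoffs are the ones from the setup above; the SSA-style counting is an assumption of the sketch):

```python
N_COPIES_H = 10**10  # copies created on H; exactly one of them is in the Milky Way

# SSA-style update for the copy who learns he is in the Milky Way:
# P(Milky Way | H) = 1 / N_COPIES_H, P(Milky Way | T) = 1.
prior_h = prior_t = 0.5
posterior_t = prior_t / (prior_t + prior_h / N_COPIES_H)
print(posterior_t)  # ~1.0: by this reasoning, accepting looks like nearly free money

# Ex-ante value of the policy "the Milky Way copy accepts the bet":
# T (probability 1/2): he gains $1; H (probability 1/2): he pays $1000.
expected_gain = 0.5 * 1 + 0.5 * (-1000)
print(expected_gain)  # -499.5
```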

There's no reason to accept the bet unless, in the cloning scenario, you care much less about the copy of you in the Milky Way than you do in the no-cloning scenario. So, there's no reason to assume there are no large civilizations if their existence wouldn't make us care much less about our own civilization.

Comment author: drnickbone 14 March 2014 02:40:07PM *  0 points [-]

There are a number of problems with that:

1) You don't specify whether the bet is offered to all my copies or just to one of them; or, if to just one of them, whether it is guaranteed to be the one in the Milky Way; or whether the one in the Milky Way knows he is in the Milky Way when taking the bet; and so on.

Suppose I am offered the bet before knowing whether I am in Andromeda or the Milky Way. What odds should I accept on the coin toss: 50/50? Suppose I am then told I am in the Milky Way... what odds should I now accept on the coin toss: still 50/50? If you say 50/50 in both cases then you are a "double-halfer" (in the terminology of Sleeping Beauty problems) and you can be Dutch-booked. If you answer other than 50/50 in one case or the other, then you are saying there are circumstances where you'd bet at odds different (probably very different) from the physical odds of a fair coin toss, and without any context that sounds rather crazy. So whatever you say, there is a bullet to bite.
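
For concreteness, here is a minimal sketch of why the double-halfer combination clashes with ordinary conditionalization, assuming SSA-style likelihoods for the location evidence:

```python
# Bayes' rule applied to learning "I am in the Milky Way":
p_h = 0.5              # credence in H before learning location
p_mw_given_h = 1e-10   # on H, 1 of 10^10 copies is in the Milky Way (SSA counting)
p_mw_given_t = 1.0     # on T, the single copy is certainly in the Milky Way

p_h_given_mw = p_h * p_mw_given_h / (p_h * p_mw_given_h + (1 - p_h) * p_mw_given_t)
print(p_h_given_mw)    # ~1e-10, not 0.5: answering 50/50 both before and after
                       # violates conditionalization, which is what a Dutch book exploits
```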

2) I am, by the way, quite aware of the literature on Anthropic Decision Theory (especially Stuart Armstrong's paper), and since my utility function is roughly the average utility for my future copies (rather than total utility), I feel inclined to bet with the SSA odds (a numerical sketch follows this point). Yes, this will lead to the "me" in the Milky Way making a loss in the case of "H", but at that stage he counts for only a tiny sliver of my utility function, so I think I'll take the risk and eat the loss. If I modify my reasoning now then there are other bets which will lead to a bigger expected loss (or even a guaranteed loss if I can be Dutch-booked).

Remember though that I only assigned 90% probability to the original hypothesis. Part of the remaining 10% uncertainty is that I am not fully confident that SSA odds are the right ones to use. So the anthropic mugger might not be able to make $500 off me (I'm likely to refuse the 1000:1 bet), but she probably could make $5 off me.
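
Here is a minimal sketch of the average-utility bookkeeping from point 2 (stakes as in the thought experiment; the per-copy averaging is one reading of "average utility for my future copies"):

```python
# Expected average utility (per copy) of accepting the mugger's bet:
n_copies_h = 10**10   # on H, only the Milky Way copy (1 of 10^10) loses $1000
n_copies_t = 1        # on T, the single copy gains $1
avg_if_accept = 0.5 * (-1000 / n_copies_h) + 0.5 * (1 / n_copies_t)
print(avg_if_accept)  # ~ +0.5: accepting maximizes expected average utility,
                      # i.e. this is betting at the SSA odds
```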

3) As in many such problems, you oversimplify by specifying in advance that the coin is fair, which then leads to the crazy-sounding betting odds (and the need to bite a bullet somewhere). But in the real-world case, the coin has an unknown bias (since we don't know the size of the future filter). This means we have to try to estimate the bias (the size of the filter) from the totality of our evidence.

Suppose I'm doubtful about the fair-coin hypothesis and have two other hypotheses: heavy bias towards heads, or heavy bias towards tails. Then it seems very reasonable that under the "bias towards heads" hypothesis I would expect to be in Andromeda, and if I discover I am not, that counts as evidence for the "bias towards tails" hypothesis. So, as I now suspect bias in one particular direction, why still bet at 50/50 odds?
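
A small numerical version of that update (the three hypotheses and their biases are invented for illustration):

```python
# Uniform prior over three bias hypotheses; value is P(heads) under each:
hypotheses = {"heads-biased": 0.9, "fair": 0.5, "tails-biased": 0.1}
prior = 1.0 / 3.0

# Likelihood of finding yourself in the Milky Way (SSA counting, 10^10 copies on H):
def p_milky_way(p_heads):
    return p_heads * 1e-10 + (1.0 - p_heads) * 1.0

posterior = {name: prior * p_milky_way(ph) for name, ph in hypotheses.items()}
total = sum(posterior.values())
for name, mass in posterior.items():
    print(name, mass / total)  # mass shifts markedly towards "tails-biased"
```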

Comment author: Squark 14 March 2014 02:59:08PM 0 points [-]

1) You don't specify whether the bet is offered to all my copies or just to one of them; or, if to just one of them, whether it is guaranteed to be the one in the Milky Way; or whether the one in the Milky Way knows he is in the Milky Way when taking the bet; and so on.

I meant that the bet is offered to the copy in the Milky Way and that he knows he is in the Milky Way. This is the right analogy with the "large civilizations" problem since we know we're in a small civilization.

Suppose I am offered the bet before knowing whether I am in Andromeda or the Milky Way. What odds should I accept on the coin toss: 50/50?

In your version of the problem the clones get to bet too, so the answer depends on how your utility is accumulated over clones.

So whatever you say, there is a bullet to bite.

If you have a well-defined utility function and you're using UDT, everything makes sense IMO.

Suppose I'm doubtful about the fair-coin hypothesis and have two other hypotheses: heavy bias towards heads, or heavy bias towards tails.

It doesn't change anything in principle. You just added another coin toss before the original coin toss, which affects the odds of the latter.

Comment author: drnickbone 14 March 2014 03:59:01PM *  0 points [-]

I meant that the bet is offered to the copy in the Milky Way and that he knows he is in the Milky Way. This is the right analogy with the "large civilizations" problem since we know we're in a small civilization.

Well, we currently observe that we are in a small civilisation (though we could be in a zoo or simulation or whatever). But to assess the hypotheses in question we have to (in essence) forget that observation, create a prior over small-universe versus big-universe hypotheses, see what each hypothesis predicts we should expect to observe, and then update when we "notice" the observation.

Alternatively, if you adopt the UDT approach, you have to consider what utility function you'd have before knowing whether you are in a big civilization or not. What would the "you" then like to commit the "you" now to deciding?

If you think you'd care about average utility in that original situation then naturally the small civilisations will get less weight in outcomes where there are big civilisations as well. Whereas if there are only small civilisations, they get all the weight. No difficulties there.

If you think you'd care about total utility (so the small civs get equal weight regardless) then be careful that it's bounded somehow. Otherwise you are going to have a known problem with expected utilities diverging (see http://lesswrong.com/lw/fg7/sia_fears_expected_infinity/).

It doesn't change anything in principle. You just added another coin toss before the original coin toss, which affects the odds of the latter.

A metaphorical coin with unknown (or subjectively-assigned) odds is quite a different beast from a physical coin with known odds (based on physical facts). You can't create crazy-sounding conclusions with metaphorical coins (i.e. situations where you bet at million-to-1 odds despite knowing that the coin toss was a fair one).

Comment author: Squark 14 March 2014 06:49:00PM 0 points [-]

If you think you'd care about average utility in that original situation then naturally the small civilisations will get less weight in outcomes where there are big civilisations as well. Whereas if there are only small civilisations, they get all the weight. No difficulties there.

If you think you'd care about total utility (so the small civs get equal weight regardless) then be careful that it's bounded somehow. Otherwise you are going to have a known problem with expected utilities diverging (see http://lesswrong.com/lw/fg7/sia_fears_expected_infinity/).

I think that I care about a time-discounted utility integral within a future light-cone. Large civilizations entering this cone don't reduce the utility of small civilizations.
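
One way to write down a utility of that shape (a sketch only: the exponential discount with timescale \tau is an illustrative choice, and u(\mathbf{x},t) is a local utility density over the cone):

$$U \;=\; \int_0^\infty e^{-t/\tau} \left( \int_{|\mathbf{x}| \le ct} u(\mathbf{x}, t)\, d^3x \right) dt$$

A large civilization entering the cone at late t contributes its own additive terms; it doesn't multiply or shrink the terms the small civilizations have already contributed.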

A metaphorical coin with unknown (or subjectively-assigned) odds is quite a different beast from a physical coin with known odds (based on physical facts).

I don't believe in different kinds of coins: they're all just Bayesian probabilities. It's a meta-Occam razor: I don't see any need for introducing these distinct categories.

Comment author: drnickbone 14 March 2014 07:53:36PM 0 points [-]

I think that I care about a time-discounted utility integral within a future light-cone. Large civilizations entering this cone don't reduce the utility of small civilizations.

I'm not sure how you apply that in a big universe model... most of it lies outside any given light-cone, so which one do you pick? Imagine you don't yet know where you are: do you sum utility across all light-cones (a sum which could still diverge in a big universe), or take the utility of an average light-cone? Also, how do you do the time-discounting if you don't yet know when you are?

My initial guess is that this utility function won't encourage betting on really big universes (as there is no increase in the utility of the average light-cone from winning the bet), but it will encourage betting on really dense universes (packed full of people or simulations of people). So you should maybe bet that you are in a simulation, running on a form of dense "computronium" in the underlying universe.

Comment author: Squark 14 March 2014 08:15:14PM 0 points [-]

I'm not sure how you apply that in a big universe model... most of it lies outside any given light-cone, so which one do you pick? Imagine you don't yet know where you are: do you sum utility across all light-cones (a sum which could still diverge in a big universe), or take the utility of an average light-cone? Also, how do you do the time-discounting if you don't yet know when you are?

The possible universes I am considering already come packed into a future light-cone (I don't consider large universes directly). The probability of a universe is proportional to 2^{-K}, where K is its Kolmogorov complexity, so the expected utility converges. Time-discounting is relative to the vertex of the light-cone.
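
Spelled out (a sketch, using prefix Kolmogorov complexity K and writing U(M) for the time-discounted utility of light-cone M, assumed bounded):

$$\Pr(M) \propto 2^{-K(M)}, \qquad \mathbb{E}[U] = \frac{\sum_M 2^{-K(M)}\, U(M)}{\sum_M 2^{-K(M)}}$$

Since \sum_M 2^{-K(M)} \le 1 by Kraft's inequality, the sum converges whenever U is bounded.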

...it will encourage betting on really dense universes (packed full of people or simulations of people).

Not really. Additive terms in the utility don't "encourage" anything; multiplicative factors do.

Comment author: drnickbone 16 March 2014 09:30:25PM *  1 point [-]

The possible universes I am considering already come packed into a future light-cone (I don't consider large universes directly).

I was a bit surprised by this... if your possible models only include one light-cone (essentially just the observable universe) then they don't look too different from those of my stated hypothesis (at the start of the thread). What is your opinion then on other civilisations in the light-cone? How likely are these alternatives?

  • No other civilisations exist or have existed in the light-cone apart from us.
  • A few have existed apart from us, but none have expanded (yet)
  • A few have existed, and a few have expanded, but we can't see them (yet)
  • Lots have existed, but none have expanded (very strong future filter)
  • Lots have existed, and a few have expanded (still a strong future filter), but we can't see the expanded ones (yet)
  • Lots have existed, and lots have expanded, so the light-cone is full of expanded civilisations; we don't see that, but that's because we are in a zoo or simulation of some sort.

...it will encourage betting on really dense universes (packed full of people or simulations of people).

Not really. Additive terms in the utility don't "encourage" anything; multiplicative factors do.

Here's how it works. Imagine the "mugger" offers all observers a bet (e.g. at your 1000:1 odds) on whether they believe they are in a simulation within a dense "computronium" universe packed full of computers simulating observers. Suppose only a tiny fraction (less than 1 in a trillion) of universe models are like that, and the observers all know this (so this is equivalent to a very heavily weighted coin landing against its weight). But still, by your proposed utility function, UDT observers should accept the bet: in the freak universes where they win, huge numbers of observers win $1 each, adding a colossal amount of total utility to the light-cone, whereas in the more regular universes where they lose the bet, relatively few observers lose $1000 each. Hence accepting the bet creates more expected utility than rejecting it.
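
With invented numbers, the total-utility arithmetic looks like this (every quantity below is illustrative; only the asymmetry matters):

```python
f = 1e-12        # prior mass on dense "computronium" universe models
n_dense = 1e40   # observers per dense universe (illustrative)
n_normal = 1e10  # observers per regular universe (illustrative)

# Every observer bets "I am in a simulation" at 1000:1 (win $1 / lose $1000):
ev_total = f * n_dense * 1 + (1 - f) * n_normal * (-1000)
print(ev_total)  # ~1e28 > 0: total utility says accept, despite the ~1e-12 odds
```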

Another issue you might have concerns the time-discounting. Suppose 1 million observers live early in the light-cone and 1 trillion live late in the light-cone (and again all observers know this). The mugger approaches all observers before they know whether they are "early" or "late" and offers them a 50:50 bet on whether they are "early" rather than "late". The observers all decide to accept the bet, knowing that 1 million will win and 1 trillion will lose; however, the utility of the losers is heavily discounted relative to the winners, so the total expected time-discounted utility is increased by accepting the bet.
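
And the time-discounted version, again with invented numbers (the exponential discount is purely illustrative):

```python
import math

n_early, n_late = 10**6, 10**12   # observers at each epoch
t_early, t_late = 1.0, 100.0      # arbitrary times within the light-cone

def discount(t):
    return math.exp(-t)           # illustrative discount function

# Every observer bets $1 at 50:50 that they are "early":
ev = discount(t_early) * n_early * 1 + discount(t_late) * n_late * (-1)
print(ev)  # ~3.7e5 > 0: discounted total utility favours accepting,
           # even though a trillion observers lose and only a million win
```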

Comment author: Squark 18 March 2014 08:52:08PM 0 points [-]

I was a bit surprised by this... if your possible models only include one light-cone (essentially just the observable universe) then they don't look too different from those of my stated hypothesis (at the start of the thread).

My disagreement is that the anthropic reasoning you use is not a good argument for the non-existence of large civilizations.

How likely are these alternatives? ...

I am using a future light-cone, whereas your alternatives seem to be formulated in terms of a past light-cone. Let me say that I think the probability of ever encountering another civilization is related to the ratio {asymptotic value of the Hubble time} / {time since the appearance of civilizations became possible}. I can't find the numbers this second, but my feeling is that such an encounter is far from certain.

Here's how it works...

Very good point! I think that if the "computronium universe" is not suppressed by some huge factor due to some sort of physical limit / great filter, then there is a significant probability that such a universe arises from post-human civilization (e.g. due to FAI). All decisions with possible (even small) impact on the likelihood and/or the properties of this future get a huge utility boost. Therefore I think decisions with long-term impact should be made as if we are not in a simulation, whereas decisions which involve purely short-term optimizations should be made as if we are in a simulation (although I find it hard to imagine such a decision in which it matters whether we are in a simulation).

Another issue you might have concerns the time-discounting...

The effective time-discount function decays rather slowly, because the sum over universes includes time-translated versions of the same universe. As a result, the effective discount falls off as 2^{-K(t)}, where K(t) is the Kolmogorov complexity of t, which is only slightly faster than 1/t. Nevertheless, for huge time differences your argument is correct. This is actually a good thing, since otherwise your decisions would be dominated by the Boltzmann brains appearing far after heat death.
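
For what it's worth, a standard bound makes the "only slightly faster than 1/t" claim precise (assuming prefix complexity):

$$K(t) \;\le\; \log_2 t + 2\log_2\log_2 t + O(1) \quad\Longrightarrow\quad 2^{-K(t)} \;\ge\; \frac{c}{t\,(\log_2 t)^2}$$

for some constant c > 0, while K(t) \ge \log_2 t - O(1) for most t, so the effective discount is indeed sandwiched close to 1/t.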