drnickbone comments on "Stupid" questions thread - Less Wrong

Post author: gothgirl420666 13 July 2013 02:42AM


Comment author: drnickbone 13 July 2013 09:36:11AM 8 points

If you taboo "anthropics" and replace it with "observation selection effects", then there are all sorts of practical consequences. See the start of Nick Bostrom's book Anthropic Bias for some examples.

The other big reason for caring is the "Doomsday argument" and the fact that all attempts to refute it have so far failed. Almost everyone who's heard of the argument thinks there's something trivially wrong with it, but all the obvious objections can be dealt with (see, e.g., the later chapters of Bostrom's book). Further, alternative approaches to anthropics (such as the "self-indication assumption"), and attempts to bypass anthropics completely (such as "full non-indexical conditioning"), were developed partly to avoid the Doomsday conclusion. But, very surprisingly, they end up reproducing it. See Katja Grace's thesis.

Comment author: timtyler 14 July 2013 11:18:50AM -1 points

> The other big reason for caring is the "Doomsday argument" and the fact that all attempts to refute it have so far failed.

Jaan Tallinn's attempt: Why Now? A Quest in Metaphysics. The "Doomsday argument" is far from certain.

Given the (observed) information that you are a 21st-century human, the argument predicts that there will only be a limited number of such humans. Well, that hardly seems like news - our descendants will evolve into something different soon enough. That's not much of a "Doomsday".

Comment author: drnickbone 14 July 2013 01:20:02PM 1 point

I described some problems with Tallinn's attempt here - under that analysis, we ought to find ourselves a fraction of a second pre-singularity, rather than years or decades pre-singularity.

Also, any analysis which predicts we are in a simulation runs into its own version of doomsday: unless there are strictly infinite computational resources, our own simulation is very likely to come to an end before we get to run simulations ourselves. (Think of simulations and sims-within-sims as like a branching tree; in a finite tree, almost all civilizations will be in one of the leaves, since they greatly outnumber the interior nodes.)
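
A quick toy count makes the leaf-dominance concrete (a minimal sketch of mine, assuming every simulating civilization runs exactly m child simulations and the tree is cut off at depth d by finite resources):

```python
# Toy count of internal vs. leaf nodes in a complete m-ary "simulation tree".
# Assumptions (illustrative, not from the thread): every simulating
# civilization runs exactly m child simulations, cut off at depth d.

def node_counts(m, d):
    """Return (internal, leaf) counts for a complete m-ary tree of depth d."""
    internal = sum(m ** k for k in range(d))  # levels 0 .. d-1 run sims
    leaves = m ** d                           # level d never runs sims
    return internal, leaves

for m in (2, 10, 1000):
    internal, leaves = node_counts(m, d=3)
    print(f"m={m}: fraction of civilizations that run sims = "
          f"{internal / (internal + leaves):.4f} (roughly 1/m = {1 / m:.4f})")
```

For m = 2 the split is nearly even; for large m, almost every civilization is a leaf.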

Comment author: timtyler 14 July 2013 11:50:10PM -1 points

> I described some problems with Tallinn's attempt here - under that analysis, we ought to find ourselves a fraction of a second pre-singularity, rather than years or decades pre-singularity.

We seem pretty damn close to me! A decade or so is not very long.

> (Think of simulations and sims-within-sims as like a branching tree; in a finite tree, almost all civilizations will be in one of the leaves, since they greatly outnumber the interior nodes.)

In a binary tree (for example), the internal nodes and the leaves are roughly equal in number.

Comment author: drnickbone 16 July 2013 07:27:51AM 0 points

Remember that in Tallinn's analysis, post-singularity civilizations run a colossal number of pre-singularity simulations, with the number growing exponentially up to the singularity (basically, they want to explore lots of alternate histories, and the number of branch-points compounds). I suppose Tallinn's model could be adjusted so that they only explore "branch-points" in their simulations every decade or so, but that is quite arbitrary and implausible. If the simulations branch every year, we should expect to be in the last year; if they branch every second, we should be in the last second.
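
To spell out that arithmetic (a back-of-envelope sketch of my own; the branching factor b is an assumption, not a figure from Tallinn):

```python
# Share of simulated histories sitting in the final branching interval,
# assuming the number of live histories multiplies by b at every branch-point.
# (Back-of-envelope model, not Tallinn's exact setup.)

def last_interval_share(b, n):
    """Fraction of the total history count (sum of b^k for k = 0..n)
    contributed by the final branching interval."""
    total = sum(b ** k for k in range(n + 1))
    return b ** n / total

for n in (5, 10, 50):
    print(f"b=2, n={n} branch-points: share in last interval = "
          f"{last_interval_share(2, n):.4f}")
# The share converges to (b-1)/b, independent of n: with per-second branching,
# a randomly sampled history is most likely in the very last second.
```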

On your second point, if each post-singularity civilization runs an average of m simulations, then the chance of being in an internal node (a civilization which eventually runs sims) rather than a leaf (a simulation which never gets to run its own sims in turn) is about 1/m. The binary tree corresponds to m=2, but why would a civilization run only 2 sims, when it is capable of running vastly more? In both Tallinn's and Bostrom's analysis, m is very much bigger than 2.
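
For completeness, here is the counting behind that "about 1/m" (a minimal derivation, assuming a complete m-ary tree of depth d):

```latex
% Internal nodes I (civilizations that run sims) vs. leaves L in a
% complete m-ary tree of depth d:
I = \sum_{k=0}^{d-1} m^k = \frac{m^d - 1}{m - 1}, \qquad L = m^d,
\qquad\Longrightarrow\qquad
\frac{I}{I + L} = \frac{m^d - 1}{m^{d+1} - 1} \approx \frac{1}{m}.
```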

Comment author: Eugine_Nier 17 July 2013 03:25:35AM 0 points

> Remember that in Tallinn's analysis, post-singularity civilizations run a colossal number of pre-singularity simulations, with the number growing exponentially up to the singularity (basically, they want to explore lots of alternate histories, and the number of branch-points compounds).

What substrate are they running these simulations on?

Comment author: drnickbone 17 July 2013 07:37:53AM 0 points

I had another look at Tallinn's presentation, and it seems he is rather vague on this... it is difficult to know what computing designs super-intelligences would come up with! However, presumably they would use quantum computers to maximize the number of simulations they could create, which is how they could get branch-points every simulated second (or even more rapidly). Bostrom's original simulation argument provides some lower bounds - and references - on what could be done using just classical computation.

Comment author: timtyler 21 July 2013 01:42:47PM -1 points

> I suppose Tallinn's model could be adjusted so that they only explore "branch-points" in their simulations every decade or so, but that is quite arbitrary and implausible. If the simulations branch every year, we should expect to be in the last year; if they branch every second, we should be in the last second.

More likely, there is a range of historical "tipping points" that they might want to explore - perhaps including the invention of language and the origin of humans.

> On your second point, if each post-singularity civilization runs an average of m simulations, then the chance of being in an internal node (a civilization which eventually runs sims) rather than a leaf (a simulation which never gets to run its own sims in turn) is about 1/m. The binary tree corresponds to m=2, but why would a civilization run only 2 sims, when it is capable of running vastly more? In both Tallinn's and Bostrom's analysis, m is very much bigger than 2.

Surely the chance of being in a simulated world depends somewhat on its size. The chance of a sim running simulations likewise depends on its size: a large world might have a high chance of running simulations, while a small world might have a low chance. Averaging over worlds of such very different sizes seems pretty useless - but any average number of simulations run per world would probably be low, since so many sims would be leaf nodes and so would run no simulations themselves. Leaves might be more numerous, but they will also be smaller - and less likely to contain many observers.
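
A toy, observer-weighted version of the count (my own construction, purely illustrative: a node at level k is assumed to hold a factor s fewer observers than its parent):

```python
# Observer-weighted variant of the leaf-counting argument.
# Assumptions (mine, for illustration): complete m-ary tree of depth d;
# a node at level k holds s**k times fewer observers than the root.

def leaf_observer_share(m, d, s):
    """Fraction of all observers living in leaf (level-d) worlds."""
    observers_at_level = [(m / s) ** k for k in range(d + 1)]
    return observers_at_level[d] / sum(observers_at_level)

for s in (1, 10, 1000):
    print(f"m=1000, d=3, s={s}: leaf observer share = "
          f"{leaf_observer_share(1000, 3, s):.4f}")
# s=1 recovers the unweighted count (nearly everyone is in a leaf);
# as s approaches m, the size-weighting washes the leaf-dominance out.
```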

Comment author: Manfred 13 July 2013 09:21:41PM -2 points

> all attempts to refute it have so far failed.

Well. The claims that it's relevant to our current information state have been refuted pretty well.

Comment author: drnickbone 13 July 2013 11:25:43PM 3 points

Citation needed (please link to a refutation).

Comment author: Manfred 14 July 2013 01:25:23AM 1 point

I'm not aware of any really good treatments. I can link to myself claiming that I'm right, though. :D

I think there may be a selection effect - once the Doomsday argument seems not very exciting, you're less likely to talk about it.