timtyler comments on "Stupid" questions thread - Less Wrong Discussion
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Jaan Tallinn's attempt: Why Now? A Quest in Metaphysics. The "Doomsday argument" is far from certain.
Given the (observed) information that you are a 21st century human, the argument predicts that there will be a limited number of those. Well, that hardly seems news - our descendants will evolve into something different soon enough. That's not much of a "Doomsday".
I described some problems with Tallinn's attempt here - under that analysis, we ought to find ourselves a fraction of a second pre-singularity, rather than years or decades pre-singularity.
Also, any analysis which predicts we are in a simulation runs into its own version of doomsday: unless there are strictly infinite computational resources, our own simulation is very likely to come to an end before we get to run simulations ourselves. (Think of simulations and sims-within-sims as a branching tree; in a finite tree, almost all civilizations will be in one of the leaves, since leaves greatly outnumber the interior nodes.)
We seem pretty damn close to me! A decade or so is not very long.
In a binary tree (for example), the internal nodes and the leaves are roughly equal in number.
Remember that in Tallinn's analysis, post-singularity civilizations run a colossal number of pre-singularity simulations, with the number growing exponentially up to the singularity (basically they want to explore lots of alternate histories, and these grow exponentially). I suppose Tallinn's model could be adjusted so that they only explore "branch-points" in their simulations every decade or so, but that is quite arbitrary and implausible. If the simulations branch every year, we should expect to be in the last year; if they branch every second, we should be in the last second.
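The exponential-growth point can be sketched numerically. Assuming, for illustration, that the number of live simulations multiplies by some factor b per branching interval, the fraction of simulations that exist in the final interval approaches (b-1)/b — so with rapid branching, a random observer should expect to be very close to the end (b and the interval count below are arbitrary illustration values):

```python
def final_interval_fraction(b, intervals):
    """Fraction of all simulations that sit in the last branching interval,
    if the simulation count multiplies by b each interval."""
    counts = [b ** k for k in range(intervals + 1)]
    return counts[-1] / sum(counts)

# With doubling per interval, about half of all sims are in the final
# interval; with a higher branching factor, nearly all of them are.
print(final_interval_fraction(2, 30))    # ~ 0.5
print(final_interval_fraction(100, 30))  # ~ 0.99
```

So the faster the branching, the more tightly a typical simulated observer is concentrated in the last interval before the singularity.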
On your second point, if each post-singularity civilization runs an average of m simulations, then the chance of being in an internal node (a civilization which eventually runs sims) rather than a leaf (a simulation which never gets to run its own sims in turn) is about 1/m. The binary tree corresponds to m=2, but why would a civilization run only 2 sims, when it is capable of running vastly more? In both Tallinn's and Bostrom's analyses, m is very much bigger than 2.
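The 1/m figure is easy to check on a full m-ary tree, where every internal civilization runs exactly m sims down to some depth (a sketch; m and depth here are arbitrary illustration values, not anything from Tallinn's or Bostrom's models):

```python
def node_counts(m, depth):
    """Counts for a full m-ary tree: each internal node spawns m children,
    down to `depth` levels; the bottom level are leaves that run no sims."""
    internal = sum(m ** k for k in range(depth))  # = (m**depth - 1) // (m - 1)
    leaves = m ** depth
    return internal, leaves

for m in (2, 10, 1000):
    internal, leaves = node_counts(m, depth=4)
    print(m, internal / (internal + leaves))  # close to 1/m
```

For m=2 the two populations are roughly equal (15 internal nodes vs 16 leaves at depth 4), which matches the binary-tree observation above; for large m, almost everyone is a leaf.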
What substrate are they running these simulations on?
I had another look at Tallinn's presentation, and it seems he is rather vague on this... it's hard to know what computing designs super-intelligences would come up with! However, presumably they would use quantum computers to maximize the number of simulations they could create, which is how they could get branch-points every simulated second (or even more rapidly). Bostrom's original simulation argument provides some lower bounds - and references - on what could be done using just classical computation.
More likely that there are a range of historical "tipping points" that they might want to explore - perhaps including the invention of language and the origin of humans.
Surely the chance of being in a simulated world depends somewhat on its size, and the chance of a sim running simulations of its own likewise depends on its size: a large world might have a high chance of running simulations, while a small world might have a low chance. Averaging over worlds of such very different sizes seems pretty useless - though any average number of simulations run per world would probably be low, since so many sims would be leaf nodes, running no simulations themselves. Leaves might be more numerous, but they will also be smaller - and less likely to contain many observers.