Will_Sawin comments on Theists are wrong; is theism? - Less Wrong

Post author: Will_Newsome 20 January 2011 12:18AM


Comment author: Will_Sawin 23 January 2011 10:48:11PM 0 points

If we have a prior of 100 to 1 against agent-caused universes, and .1% of non-agent universes have observers observing interestingness while 50% of agent-caused universes have it, what is the posterior probability of being in an agent-caused universe?

Comment author: Perplexed 24 January 2011 01:09:12AM 0 points

I make it about 83% if you ignore the anthropic issues (by assuming that all universes have observers, or that having observers is independent of being interesting, for example). But if you want to take anthropic issues into account, you are only allowed to take the interestingness of this universe as evidence, not its observer-ladenness. So the answer would have to be "not enough data".
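That 83% can be reproduced with a quick Bayes update. A minimal sketch, using the hypothetical prior and likelihoods from the comment above (they were offered for example purposes only, not as real estimates):

```python
# Posterior probability of an agent-caused universe given "interestingness",
# using the hypothetical numbers from the thread (not real estimates).

prior_agent = 1 / 101          # 100-to-1 odds against agent-caused universes
prior_non_agent = 100 / 101

p_interesting_given_agent = 0.50       # 50% of agent-caused universes
p_interesting_given_non_agent = 0.001  # 0.1% of non-agent universes

# Bayes' rule: P(agent | interesting)
numerator = prior_agent * p_interesting_given_agent
evidence = numerator + prior_non_agent * p_interesting_given_non_agent
posterior_agent = numerator / evidence

print(round(posterior_agent, 3))  # 0.833, i.e. about 83%
```

The prior odds cancel neatly here: 0.5/101 against 0.1/101 gives 5:1 odds, i.e. 5/6 ≈ 83%.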

Comment author: Will_Sawin 24 January 2011 02:32:05AM 0 points

You can't not be allowed to take the observer-ladenness of a universe as evidence.

Limiting case: Property X is true of a universe if and only if it has observers. May we take the fact that observers exist in our universe as evidence that observers exist there?

Comment author: datadataeverywhere 23 January 2011 11:30:24PM 0 points

I have no idea what probability should be assigned to non-agent universes having observers observing interesting things (though for agent universes, 50% seems too low), but I also think your prior is too high.

I think there is some probability that there are no substantial universe simulations, and some probability that the vast majority of universes are simulations, but even if we live in a multiverse where simulated universes are commonplace, our particular universe seems like a very odd choice to simulate unless the basement universe is very similar to our own. I also assign a (very) small probability to the proposition that our universe is computationally capable of simulating universes like itself (even with extreme time dilation), so that also seems unlikely.

Comment author: Will_Sawin 24 January 2011 02:33:20AM 1 point

Probabilities were for example purposes only. I made them up because they were nice to calculate with and sounded halfway reasonable. I will not defend them. If you request that I come up with my real probability estimates, I will have to think harder.

Comment author: datadataeverywhere 24 January 2011 02:50:48AM 0 points

Ah, well, your more general point was well made. I don't think better numbers are really important. It's all too fuzzy for me to be at all confident about.

I still retain my belief that it is implausible that we are in a universe simulation. If I am in a simulation, I expect that it is more likely that I am by myself (and that conscious or not, you are part of the simulation created in response to me), moderately more likely that there are a small group of humans being simulated with other humans and their environment dynamically generated, and overall very unlikely that the creators have bothered to simulate any part of physical reality that we aren't directly observing (including other people). Ultimately, none of these seem likely enough for me to bother considering for very long.

Comment author: jacob_cannell 26 January 2011 05:47:45AM 0 points

The first part of your belief that "it is implausible that we are in a universe simulation" appears to be based on the argument:

If simulationism, then solipsism is likely.

Solipsism is unlikely, so . . .

Chain of logic aside, simulationism does not imply solipsism. Simulating N localized space-time patterns in one large simulation can be significantly cheaper than simulating N individual human simulations. So some simulated individuals may exist in small solipsist sims, but the great majority of conscious sims will find themselves in larger shared simulations.

Presumably a posthuman intelligence on earth would be interested in earth as a whole system, and would simulate this entire system. Simulating full human-mind equivalents is something of a sweet spot in the space of approximations.

There is a massive sweet spot, an extremely efficient method, for simulating a modern computer - which is to simulate it at the level of its Turing-equivalent circuit. Simulating it at any level below this - say the molecular level - is just a massive waste of resources, while any simulation above this level loses accuracy completely.

It is postulated that a similar simulation scale separation exists for human minds, which naturally relates to uploads and AI.

Comment author: datadataeverywhere 26 January 2011 07:54:39AM 1 point

Simulating full human-mind equivalents is something of a sweet spot in the space of approximations.

I don't understand why human-mind equivalents are special in this regard. This seems very anthropocentric, but I could certainly be misinterpreting what you said.

Simulating N localized space-time patterns in one large simulation can be significantly cheaper than simulating N individual human simulations.

Cheaper, but not necessarily more efficient. It matters which answers one is looking for, or which goals one is after. It seems unlikely to me that my life is directed well enough to achieve interesting goals or answer interesting questions that a superintelligence might pose, but it seems even more unlikely that simulating 6 billion humans, in the particular way they appear (to me) to exist is an efficient way to answer most questions either.

I'd like to stay away from telling God what to be interested in, but out of the infinite space of possibilities, Earth seems too banal and languorous to be the one in N that has been chosen for the purpose of simulation, especially if the basement universe has a different physics.

If the basement universe matches our physics, I'm betting on the side that says simulating all the minds on Earth and enough other stuff to make the simulation consistent is an expensive enough proposition that it won't be worthwhile to do it many times. Maybe I'm wrong; there's no particular reason why simulating all of humanity in the year 2011 needs to take more than 10^18 J, so maybe there's a "real" Milky Way that's currently running 10^18 planet-scale sims. Even that doesn't seem like a big enough number to convince me that we are likely to be one of those.

Comment author: jacob_cannell 26 January 2011 09:00:52AM 1 point

Simulating full human-mind equivalents is something of a sweet spot in the space of approximations.

I don't understand why human-mind equivalents are special in this regard. This seems very anthropocentric, but I could certainly be misinterpreting what you said.

I meant there is probably some sweet spot in the space of [human-mind] approximations, because of scale separation, which I elaborated on a little later with the computer analogy.

Simulating N localized space-time patterns in one large simulation can be significantly cheaper than simulating N individual human simulations.

Cheaper, but not necessarily more efficient.

Cheaper implies more efficient, unless the individual human simulations somehow have a dramatically higher per capita utility.

A solipsist universe has extraneous patchwork complexity. Even assuming that all of the non-biological physical processes are grossly approximated (not unreasonable given current simulation theory in graphics), they still may add up to a cost exceeding that of one human mind.

But of course a world with just one mind is not an accurate simulation, so now you need to populate it with a huge number of pseudo-minds which are functionally indistinguishable from the perspective of our sole real observer but somehow use far fewer computational resources.

Now imagine a graph of simulation accuracy vs computational cost of a pseudo-mind. Rather than being linear, I believe it is sharply exponential, or J-shaped with a single large spike near the scale separation point.

The jump point is where the pseudo-mind becomes a real, actual conscious observer in its own right.

The rationale for this cost model and the scale separation point can be derived from what we know about simulating computers.

It seems unlikely to me that my life is directed well enough to achieve interesting goals or answer interesting questions that a superintelligence might pose, but it seems even more unlikely that simulating 6 billion humans, in the particular way they appear (to me) to exist is an efficient way to answer most questions either.

Perhaps not your life in particular, but human life on earth today?

Simulating 6 billion humans will probably be the only way to truly understand what happened today from the perspective of our future posthuman descendants. The alternatives are . . . creating new physical planets? Simulation will be vastly more efficient than that.

Earth seems too banal and languorous to be the one in N that have been chosen for the purpose of simulation, especially if the basement universe has a different physics.

The basement reality is highly unlikely to have different physics. The vast majority of simulations we create today are based on approximations of currently understood physics, and I don't expect this to ever change - simulations have utility for simulators.

so maybe there's a "real" milky way that's currently running 10^18 planet-scale sims. Even that doesn't seem like a big enough number to convince me that we are likely to be one of those.

I'm a little confused about the 10^18 number.

From what I recall, at the limits of computation one kg of matter can hold roughly 10^30 bits, and a human mind is in the vicinity of 10^15 bits or less. So at the molecular limits a kg of matter could hold around a quadrillion souls - an entire human galactic civilization. A skyscraper of such matter could give you 10^8 kg ... and so on. Long before reaching physical limits, posthumans would be able to simulate many billions of entire earth histories. At the physical molecular limits, they could turn each of the moon's roughly 10^22 kg into an entire human civilization, for a total of 10^37 minds.

The potential time-scale compression is nearly as vast - with estimated speed limits at around 10^15 ops/bit/sec in ordinary matter at ordinary temperatures, vs at most 10^4 ops/bit/sec in human brains (though only six orders of magnitude above the 10^9 ops/bit/sec of today's circuits). The potential speedup of more than 10^10 over biological brains allows for about one hundred years per second of sidereal time.
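These estimates compose by straightforward multiplication. A quick sketch of the arithmetic, where every input is one of the speculative order-of-magnitude figures from the comment above, not an established physical constant:

```python
# Order-of-magnitude arithmetic for the storage and speed estimates above.
# Every input is a speculative figure from the comment, not an established limit.

bits_per_kg = 1e30       # claimed molecular storage limit, bits per kg
bits_per_mind = 1e15     # rough information content of a human mind

minds_per_kg = bits_per_kg / bits_per_mind   # ~1e15: "a quadrillion souls" per kg
minds_in_moon = minds_per_kg * 1e22          # moon's ~1e22 kg -> ~1e37 minds

# Claimed speedup over biological thought
speedup = 1e10                               # subjective seconds per real second
seconds_per_year = 365.25 * 24 * 3600        # ~3.16e7
subjective_years_per_second = speedup / seconds_per_year

print(f"{minds_per_kg:.0e} minds/kg, {minds_in_moon:.0e} in the moon's mass")
print(f"~{subjective_years_per_second:.0f} subjective years per sidereal second")
```

With the round 10^10 speedup this comes out to a few hundred subjective years per real second, the same order of magnitude as the "about one hundred years per second" quoted above.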

Comment author: datadataeverywhere 26 January 2011 02:51:29PM 1 point

I meant there is probably some sweet spot in the space of [human-mind] approximations, because of scale separation, which I elaborated on a little later with the computer analogy.

I understand that for any mind, there is probably an "ideal simulation level" which has the fidelity of a more expensive simulation at a much lower cost, but I still don't understand why human-mind equivalents are important here.

Cheaper implies more efficient, unless the individual human simulations somehow have a dramatically higher per capita utility.

Which seems pretty reasonable to me. Why should the value of simulating minds be linear rather than logarithmic in the number of minds?

A solipsist universe has extraneous patchwork complexity. Even assuming that all of the non-biological physical processes are grossly approximated (not unreasonable given current simulation theory in graphics), they still may add up to a cost exceeding that of one human mind.

Agreed, but I also think that the cost of simulating the relevant stuff necessary to simulate N minds might be close to linear in N.

Now imagine a graph of simulation accuracy vs computational cost of a pseudo-mind. Rather than being linear, I believe it is sharply exponential, or J-shaped with a single large spike near the scale separation point.

I agree, though as a minor note, if cost is the Y-axis, the graph has to have a vertical asymptote, so it has to grow much faster than exponential at the end. Regardless, I don't think we can be confident that consciousness occurs at an inflection point or a noticeable bend.

The jump point is where the pseudo-mind becomes a real, actual conscious observer in its own right.

I suspect that some pseudo-minds must be conscious observers some of the time, but that they can be turned off most of the time and just be updated offline with experiences that their conscious mind will integrate and patch up without noticing. I'm not sure this would work with many mind-types, but I think it would work with human minds, which have a strong bias toward maintaining coherence, even at the cost of ignoring reality. If I'm being simulated, I suspect that this is happening even to me on a regular basis, and happening much more often to anyone else the less I interact with them.

Perhaps not your life in particular, but human life on earth today?

Simulating 6 billion humans will probably be the only way to truly understand what happened today from the perspective of our future posthuman descendants. The alternatives are . . . creating new physical planets? Simulation will be vastly more efficient than that.

Updating on the condition that we closely match the ancestors of our simulators, I think it's pretty reasonable that we could be chosen to be simulated. This is really the only plausible reason I can think of to choose us in particular. I'm still dubious as to the value that doing so will have for our descendants.

I'm a little confused about the 10^18 number.

Actually, I made a mistake, so it's reasonable to be confused. 20 W seems to be a reasonable upper limit to the cost of simulating a human mind. I don't know how much lower the lower bound should be, but it might not be more than an order of magnitude less. This gives 10^11 W for six billion, (4x) 10^18 J for one year.
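The corrected figures check out. A quick sketch, taking the ~20 W power budget of a biological brain as the assumed per-mind simulation cost (an upper-bound assumption from the comment, not a measured cost):

```python
# Energy to simulate six billion human minds for one year,
# assuming ~20 W per mind (the biological brain's power budget, an upper bound).

watts_per_mind = 20
population = 6e9

total_watts = watts_per_mind * population      # 1.2e11 W -> "10^11 W"
seconds_per_year = 365.25 * 24 * 3600          # ~3.16e7 s
joules_per_year = total_watts * seconds_per_year

print(f"{total_watts:.1e} W, {joules_per_year:.1e} J/year")  # ~1.2e11 W, ~3.8e18 J
```

That ~3.8x10^18 J matches the "(4x) 10^18 J for one year" figure above to rounding.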

I don't think it's reasonable to expect all the matter in the domain of a future civilization to be used to its computational capacity. I think it's much more likely that the energy output of the Milky Way is a reasonable bound on how much computation will go on there. This certainly doesn't have to be the case, but I don't see superintelligences annihilating matter at a dramatically faster rate in order to provide massively more power to the remainder of the matter around. The universe is going to die soon enough as it is. (I could be very short-sighted about this.) Anyway, the energy output of the Milky Way is around 5x10^36 W. I divided this by Joules instead of by Watts, so the second number I gave was 10^18, when it should have been (5x) 10^25.

I maintain that energy, not the quantum limits of computation in matter, will bound computational cost on the large scale. Throwing our moon into the Sun in order to get energy out of it is probably a better use of it as raw materials than turning it into circuitry. Likewise for time compression: convince me that power isn't a problem.

Comment author: jacob_cannell 26 January 2011 09:31:00PM 1 point

I understand that for any mind, there is probably an "ideal simulation level" which has the fidelity of a more expensive simulation at a much lower cost, but I still don't understand why human-mind equivalents are important here.

Simply because we are discussing simulating the historical period in which we currently exist.

Why should the value of simulating minds be linear rather than logarithmic in the number of minds?

The premise of the SA is that the posthuman 'gods' will be interested in simulating their history. That history is not dependent on a smattering of single humans isolated in boxes, but the history of the civilization as a whole system.

Agreed, but I also think that the cost of simulating the relevant stuff necessary to simulate N minds might be close to linear in N.

If the N minds were separated by vast gulfs of space and time this would be true, but we are talking about highly connected systems.

Imagine the flow of information in my brain. Imagine the flow of causality extending back in time, the flow of information weighted by its probabilistic utility in determining my current state.

The stuff in my immediate vicinity is important, and the importance generally falls off according to an inverse square law with distance from my brain. Moreover, even of the stuff near me at one time step, only a tiny portion is relevant. At this moment my brain is filtering out almost everything except the screen right in front of me, which can be causally determined by a program running on my computer, dependent on recent information in another computer in a server somewhere in the Midwest a little bit ago, which was dependent on information flowing out from your brain previously ... and so on.

So simulating me would more or less require simulating you as well; it's very hard to isolate a mind. You might as well try to simulate just my left prefrontal cortex. The entire distinction of where one mind begins and ends is something of a spatial illusion that disappears when you map out the full causal web.

Regardless, I don't think we can be confident that consciousness occurs at an inflection point or a noticeable bend.

If you want to simulate some program running on one computer on a new machine, there is an exact vertical inflection wall in the space of approximations where you get a perfect simulation which is just the same program running on the new machine. This simulated program is in fact indistinguishable from the original.

I suspect that some pseudo-minds must be conscious observers some of the time, but that they can be turned off most of the time and just be updated offline

Yes, but because of the network effects mentioned earlier it would be difficult and costly to do this on a per mind basis. Really it's best to think of the entire earth as a mind for simulation purposes.

Could you turn off part of cortex and replace it with a rough simulation some of the time without compromising the whole system? Perhaps sometimes, but I doubt that this can give a massive gain.

I'm still dubious as to the value doing so will have to our descendants.

Why do we currently simulate (think about) our history? To better understand ourselves and our future.

I believe there are several converging reasons to suspect that vaguely human-like minds will turn out to be a persistent pattern for a long time - perhaps as persistent as eukaryotic cells. Adaptive radiation will create many specializations and variations, but the basic pattern of a roughly 10^15-bit mind and its general architecture may turn out to be a fecund replicator and building block for higher-level pattern entities.

It seems plausible that some of these posthumans will actually descend from biological humans alive today. They will be very interested in their ancestors, and especially the ancestors they knew in their former life who died without being uploaded or preserved.

Humans have been thinking about this for a while. If you could upload and enter virtual heaven, you could have just about anything that you want. However, one thing you may very much desire would be reunification with former loved ones, dead ancestors, and so on.

So once you have enough computational power, I suspect there will be a desire to use it in an attempt to resurrect the dead.

20 W seems to be a reasonable upper limit to the cost of simulating a human mind. I don't know how much lower the lower bound should be, but it might not be more than an order of magnitude less.

You are basically taking the current efficiency of human brains as the limit, which of course is ridiculous on several fronts. We may not reach the absolute limits of computation, but they are the starting point for the SA.

We already are within six orders of magnitude of the speed limit of ordinary matter (10^9 bit ops/sec vs 10^15), and there is every reason to suspect we will get roughly as close to the density limit.

I maintain that energy, not quantum limits of computation in matter, will bound computational cost on the large scale.

There are several measures - the number of bits storable per unit mass determines how many human souls you can store in memory per unit mass.

Energy relates to the bit operations per second and the speed of simulated time.

I was assuming computing at regular earth temperatures, within the range of current brains and computers. At the limits of computation discussed earlier, 1 kg of matter at normal temperatures implies an energy flow of around 1 to 20 W and can simulate roughly 10^15 virtual humans 10^10 times faster than the current human rate of thought. This works out to about one hundred years per second.

So at the limits of computation, 1 kg of ordinary matter at room temperature should give about 10^25 human lifetimes per joule. One square meter of high efficiency solar panel could power several hundred kilograms of computational substrate.

So at the limits of computation, future posthuman civilizations could simulate a truly astronomical number of human lifetimes in one second using less power and mass than our current civilization.

No need to disassemble planets. Using the whole surface of a planet gives a multiplier of 10^14 over a single kilogram. Using the entire mass only gives a further 10^8 multiple or so over that, and is much, much more complex and costly to engineer. (When you start thinking of energy in terms of human souls, this becomes morally relevant.)

If this posthuman civilization simulates human history for a billion years instead of a second, this gives another 10^16 multiplier.

Using much more reasonable middle of the road estimates:

  • Say tech may bottom out at a limit within half (in exponential terms) of the maximum - say 10^13 human lifetimes per kg per joule vs 10^25.

  • The posthuman civ stabilizes at around 10^10 1kg computers (not much more than we have today).

  • The posthuman civ engages in historical simulation for just one year. (10^7 seconds).

That is still 10^30 simulated human lifetimes, vs roughly 10^11 lifetimes in our current observational history.

Those are still astronomical odds for observing that we currently live in a sim.
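The bullet-point estimate multiplies out as claimed. A sketch of the arithmetic, where every input is one of the hypothetical figures above; the "per kg per joule" rate is applied assuming roughly 1 W per 1 kg computer, consistent with the room-temperature power figures earlier in the thread:

```python
# "Middle of the road" simulation-argument arithmetic from the bullets above.
# All inputs are the comment's hypothetical figures, not established numbers.

lifetimes_per_kg_per_joule = 1e13  # assumed "bottomed out" tech level
computers = 1e10                   # number of 1 kg computers in the posthuman civ
watts_per_kg = 1                   # assumed ~1 W per kg of substrate
sim_duration_s = 1e7               # about one year of historical simulation

joules_per_kg = watts_per_kg * sim_duration_s                        # 1e7 J each
simulated = lifetimes_per_kg_per_joule * joules_per_kg * computers   # 1e30

real_lifetimes = 1e11              # rough count of humans who have ever lived
print(f"{simulated:.0e} simulated vs {real_lifetimes:.0e} real lifetimes")
print(f"odds ~{simulated / real_lifetimes:.0e} : 1 of being in a sim")
```

Under those assumptions the odds ratio is about 10^19 to 1, which is the "astronomical odds" conclusion above.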

Comment author: datadataeverywhere 27 January 2011 12:29:20AM -1 points

This is very upsetting, I don't have anything like the time I need to keep participating in this thread, but it remains interesting. I would like to respond completely, which means that I would like to set it aside, but I'm confident that if I do so I will never get back to it. Therefore, please forgive me for only responding to a fraction of what you're saying.

If the N minds were separated by vast gulfs of space and time this would be true, but we are talking about highly connected systems.

I thought context made it clear that I was only talking about the non-mind stuff being simulated as being an additional cost perhaps nearly linear in N. Very little of what we directly observe overlaps except our interaction with each other, and this was all I was talking about.

Regardless, I don't think we can be confident that consciousness occurs at an inflection point or a noticeable bend.

Why can't a poor model (low fidelity) be conscious? We just don't know enough about consciousness to answer this question.

Yes, but because of the network effects mentioned earlier it would be difficult and costly to do this on a per mind basis. Really it's best to think of the entire earth as a mind for simulation purposes.

I really disagree, but I don't have time to exchange posteriors, so consider this dropped.

However, one thing you may very much desire would be reunification with former loved ones, dead ancestors, and so on [...] So once you have enough computational power, I suspect there will be a desire to use it in an attempt to resurrect the dead.

I think this is evil, but I'm not willing to say whether the future intelligences will agree or care.

You are basically taking the current efficiency of human brains as the limit, which of course is ridiculous on several fronts. We may not reach the absolute limits of computation, but they are the starting point for the SA.

I said it was a reasonable upper bound, not a reasonable lower bound. That seems trivial.

I was assuming computing at regular earth temperatures within the range of current brains and computers. At the limits of computation discussed earlier 1 kg of matter at normal temperatures implies an energy flow of around 1 to 20W and can simulate roughly 10^15 virtual humans 10^10 faster than current human rate of thought. This works out to about one hundred years per second.

Most importantly, you're assuming that all circuitry performs computation, which is clearly impossible. That leaves us to debate how much of it can, but personally I see no reason to think the minimum computational cost will be closely approached (even in an exponential sense). I am interested in your reasoning for why this should be the case, though, so please give me what you can in the way of references that led you to this belief.

Lastly, but most importantly (to me), how strongly do you personally believe that a) you are a simulation and that b) all entities on Earth are full-featured simulations as well?

Conditioning on (b) being true, how long ago (in subjective time) do you think our simulation started, and how many times do you believe it has (or will be) replicated?

Comment author: Desrtopa 02 February 2011 05:54:35AM 0 points

To uploads, yes, but a faithful simulation of the universe, or even a small portion of it, would have to track a lot more variables than the processes of the human minds within it.

Comment author: jacob_cannell 02 February 2011 06:39:36AM -1 points

Optimal approximate simulation algorithms are all linear with respect to total observer sensory input. This relates to the philosophical issue of observer dependence in QM and whether or not the proverbial unobserved falling tree actually exists.

So the cost of simulating a matrix with N observers is not expected to be dramatically more than simulating the N observer minds alone - C*N. The phenomenon of dreams is something of a practical proof.

Comment author: Desrtopa 02 February 2011 06:57:13AM 0 points

Variables that aren't being observed still have to be tracked, since they affect the things that are being observed.

Dreams are not a very good proof of concept, given that they are not coherent simulations of any sort of reality, and can be recognized as artificial not only after the fact, but during, with a bit of introspection and training.

In dreams, large amounts of data can be omitted or spontaneously introduced without the dreamer noticing anything is wrong unless they're lucid. In reality, everything we observe can be examined for signs of its interactions with things that we haven't observed, and that data adds up to pictures that are coherent and consistent with each other.