Many folk here on LW take the simulation argument (in its more general forms) seriously. Many others take Singularitarianism1 seriously. Still others take Tegmark cosmology (and related big universe hypotheses) seriously. But then I see them proceed to self-describe as atheist (instead of omnitheist, theist, deist, having a predictive distribution over states of religious belief, et cetera), and many tend to be overtly dismissive of theism. Is this signalling cultural affiliation, an attempt to communicate a point estimate, or what?
I am especially confused that the theism/atheism debate is considered a closed question on Less Wrong. Eliezer's reformulations of the Problem of Evil in terms of Fun Theory provided a fresh look at theodicy, but I do not find those arguments conclusive. A look at Luke Muehlhauser's blog surprised me; the arguments against theism are just not nearly as convincing as I'd been brought up to believe2, nor nearly convincing enough to cause what I saw as massive overconfidence on the part of most atheists, aspiring rationalists or no.
It may be that theism is in the class of hypotheses that we have yet to develop a strong enough practice of rationality to handle, even if the hypothesis has non-negligible probability given our best understanding of the evidence. We are becoming adept at wielding Occam's razor, but it may be that we are still too foolhardy to wield Solomonoff's lightsaber (or rather, Tegmark's Black Blade of Disaster) without chopping off our own arm. The literature on cognitive biases gives us every reason to believe we are poorly equipped to reason about infinite cosmology, decision theory, the motives of superintelligences, or our place in the universe.
Due to these considerations, it is unclear if we should go ahead doing the equivalent of philosoraptorizing amidst these poorly asked questions so far outside the realm of science. This is not the sort of domain where one should tread if one is feeling insecure in one's sanity, and it is possible that no one should tread here. Human philosophers are probably not as good at philosophy as hypothetical Friendly AI philosophers (though we've seen in the cases of decision theory and utility functions that not everything can be left for the AI to solve). I don't want to stress your epistemology too much, since it's not like your immortal soul3 matters very much. Does it?
Added: By theism I do not mean the hypothesis that Jehovah created the universe. (Well, mostly.) I am talking about the possibility of agenty processes in general creating this universe, as opposed to impersonal math-like processes like cosmological natural selection.
Added: The answer to the question raised by the post is "Yes, theism is wrong, and we don't have good words for the thing that looks a lot like theism but has less unfortunate connotations, but we do know that calling it theism would be stupid." As to whether this universe gets most of its reality fluid from agenty creators... perhaps we will come back to that argument on a day with less distracting terminology on the table.
1 Of either the 'AI-go-FOOM' or 'someday we'll be able to do lots of brain emulations' variety.
2 I was never a theist, and only recently began to question some old assumptions about the likelihood of various Creators. This perhaps either lends credibility to my interest, or lends credibility to the idea that I'm insane.
3 Or the set of things that would have been translated to Archimedes by the Chronophone as the equivalent of an immortal soul (id est, whatever concept ends up being actually significant).
Simply because we are discussing simulating the historical period in which we currently exist.
The premise of the SA is that the posthuman 'gods' will be interested in simulating their history. That history is not dependent on a smattering of single humans isolated in boxes, but the history of the civilization as a whole system.
If the N minds were separated by vast gulfs of space and time this would be true, but we are talking about highly connected systems.
Imagine the flow of information in my brain. Imagine the flow of causality extending back in time, the flow of information weighted by its probabilistic utility in determining my current state.
The stuff in the immediate vicinity of my brain is important, and the importance generally falls off according to an inverse square law with distance. Moreover, even of the stuff near me at one time step, only a tiny portion is relevant. At this moment my brain is filtering out almost everything except the screen right in front of me, which can be causally determined by a program running on my computer, dependent on recent information in another computer in a server somewhere in the Midwest a moment earlier, which was in turn dependent on information flowing out from your brain previously... and so on.
So simulating me would more or less require simulating you as well; it's very hard to isolate a mind. You might as well try to simulate just my left prefrontal cortex. The entire distinction of where one mind begins and ends is something of a spatial illusion that disappears when you map out the full causal web.
If you want to simulate some program running on one computer on a new machine, there is an exact vertical wall in the space of approximations where you get a perfect simulation, which is just the same program running on the new machine. This simulated program is in fact indistinguishable from the original.
Yes, but because of the network effects mentioned earlier it would be difficult and costly to do this on a per mind basis. Really it's best to think of the entire earth as a mind for simulation purposes.
Could you turn off part of cortex and replace it with a rough simulation some of the time without compromising the whole system? Perhaps sometimes, but I doubt that this can give a massive gain.
Why do we currently simulate (think about) our history? To better understand ourselves and our future.
I believe there are several converging reasons to suspect that vaguely human-like minds will turn out to be a persistent pattern for a long time - perhaps as persistent as eukaryotic cells. Adaptive radiation will create many specializations and variations, but the basic pattern of a roughly 10^15 bit mind and its general architecture may turn out to be a fecund replicator and building block for higher-level pattern entities.
It seems plausible that some of these posthumans will actually descend from biological humans alive today. They will be very interested in their ancestors, and especially the ancestors they knew in their former lives who died without being uploaded or preserved.
Humans have been thinking about this for a while. If you could upload and enter virtual heaven, you could have just about anything that you want. However, one thing you may very much desire would be reunification with former loved ones, dead ancestors, and so on.
So once you have enough computational power, I suspect there will be a desire to use it in an attempt to resurrect the dead.
You are basically taking the current efficiency of human brains as the limit, which of course is ridiculous on several fronts. We may not reach the absolute limits of computation, but they are the starting point for the SA.
We are already within six orders of magnitude of the speed limit of ordinary matter (10^9 bit ops/sec today vs a limit of roughly 10^15), and there is every reason to suspect we will get roughly as close to the density limit.
There are several relevant measures. The number of bits storable per unit mass determines how many human souls you can store in memory per unit mass.
Energy relates to the bit operations per second and the speed of simulated time.
I was assuming computing at regular earth temperatures, within the range of current brains and computers. At the limits of computation discussed earlier, 1 kg of matter at normal temperatures implies an energy flow of around 1 to 20 W and can simulate roughly 10^15 virtual humans, each running 10^10 times faster than the current human rate of thought. That works out to roughly three hundred years of simulated time per real second.
So at the limits of computation, 1 kg of ordinary matter at room temperature should give about 10^25 human lifetimes per joule. One square meter of high efficiency solar panel could power several hundred kilograms of computational substrate.
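Taking the figures above at face value, the headline numbers are just two multiplications (a sanity-check sketch only; the physical limits themselves are the post's assumptions, not derived here):

```python
SECONDS_PER_YEAR = 3.15e7  # approximate seconds in one year

# Figures assumed in the post, not derived here:
virtual_humans_per_kg = 1e15  # simultaneous simulated minds per kg of substrate
speedup = 1e10                # simulated seconds per real second

# Subjective human-seconds of experience produced per real second by 1 kg:
human_seconds_per_second = virtual_humans_per_kg * speedup

# Simulated years elapsing per real second for each virtual mind:
years_per_second = speedup / SECONDS_PER_YEAR

print(f"{human_seconds_per_second:.0e} subjective human-seconds per real second")
print(f"about {years_per_second:.0f} simulated years per real second")  # ~317
```

Note that the 10^10 speedup by itself pins down the subjective rate: 10^10 seconds is about three centuries, so each virtual mind lives roughly three hundred years per wall-clock second.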
So at the limits of computation, future posthuman civilizations could simulate a truly astronomical number of human lifetimes in one second, using less power and mass than our current civilization.
No need to disassemble planets. Using the whole surface of a planet gives a multiplier of 10^14 over a single kilogram. Using the entire mass only gives a further 10^8 multiplier or so over that, and is much, much more complex and costly to engineer. (When you start thinking of energy in terms of human souls, this becomes morally relevant.)
If this posthuman civilization simulates human history for a billion years instead of a second, this gives another 10^16 multiplier.
Using much more reasonable middle of the road estimates:
Say tech bottoms out at a limit within half (in exponential terms) of the maximum - say 10^13 human lifetimes per kg per joule vs 10^25. (At roughly 1 W per kg, per joule and per second are nearly interchangeable here.)
The posthuman civ stabilizes at around 10^10 1kg computers (not much more than we have today).
The posthuman civ engages in historical simulation for just one year. (10^7 seconds).
That is still 10^30 simulated human lifetimes, vs roughly 10^11 lifetimes in our current observational history.
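The three middle-of-the-road estimates multiply out as follows (arithmetic check only; every input is the post's own assumption, with lifetimes-per-kg read as a per-second rate):

```python
# All inputs are the post's "middle of the road" estimates, not derived figures.
lifetimes_per_kg_per_sec = 1e13  # assumed technology plateau
num_computers = 1e10             # number of ~1 kg computational units
sim_seconds = 1e7                # roughly one year of wall-clock simulation

simulated_lifetimes = lifetimes_per_kg_per_sec * num_computers * sim_seconds

historical_lifetimes = 1e11  # rough count of humans in our observed history

odds_ratio = simulated_lifetimes / historical_lifetimes
print(f"simulated lifetimes: {simulated_lifetimes:.0e}")       # 1e+30
print(f"simulated : historical ratio = {odds_ratio:.0e} : 1")  # 1e+19
```

On these assumptions, simulated lifetimes outnumber historical ones by about nineteen orders of magnitude.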
Those are still astronomical odds in favor of the observation that we currently live in a sim.
This is very upsetting, I don't have anything like the time I need to keep participating in this thread, but it remains interesting. I would like to respond completely, which means that I would like to set it aside, but I'm confident that if I do so I will never get back to it. Therefore, please forgive me for only responding to a fraction of what you're saying.
I thought context made it clear that I was only talking about ...