Many folk here on LW take the simulation argument (in its more general forms) seriously. Many others take Singularitarianism1 seriously. Still others take Tegmark cosmology (and related big universe hypotheses) seriously. But then I see them proceed to self-describe as atheist (instead of omnitheist, theist, deist, having a predictive distribution over states of religious belief, et cetera), and many tend to be overtly dismissive of theism. Is this signalling cultural affiliation, an attempt to communicate a point estimate, or what?
I am especially confused that the theism/atheism debate is considered a closed question on Less Wrong. Eliezer's reformulations of the Problem of Evil in terms of Fun Theory provided a fresh look at theodicy, but I do not find those arguments conclusive. A look at Luke Muehlhauser's blog surprised me; the arguments against theism are just not nearly as convincing as I'd been brought up to believe2, nor nearly convincing enough to cause what I saw as massive overconfidence on the part of most atheists, aspiring rationalists or no.
It may be that theism is in the class of hypotheses that we have yet to develop a strong enough practice of rationality to handle, even if the hypothesis has non-negligible probability given our best understanding of the evidence. We are becoming adept at wielding Occam's razor, but it may be that we are still too foolhardy to wield Solomonoff's lightsaber (or rather, Tegmark's Black Blade of Disaster) without chopping off our own arm. The literature on cognitive biases gives us every reason to believe we are poorly equipped to reason about infinite cosmology, decision theory, the motives of superintelligences, or our place in the universe.
Given these considerations, it is unclear whether we should go ahead with the equivalent of philosoraptorizing amidst these poorly posed questions so far outside the realm of science. This is not the sort of domain where one should tread if one is feeling insecure in one's sanity, and it is possible that no one should tread here. Human philosophers are probably not as good at philosophy as hypothetical Friendly AI philosophers (though we've seen in the cases of decision theory and utility functions that not everything can be left for the AI to solve). I don't want to stress your epistemology too much, since it's not like your immortal soul3 matters very much. Does it?
Added: By theism I do not mean the hypothesis that Jehovah created the universe. (Well, mostly.) I am talking about the possibility of agenty processes in general creating this universe, as opposed to impersonal math-like processes like cosmological natural selection.
Added: The answer to the question raised by the post is "Yes, theism is wrong, and we don't have good words with less unfortunate connotations for the thing that looks a lot like theism, but we do know that calling it theism would be stupid." As to whether this universe gets most of its reality fluid from agenty creators... perhaps we will come back to that argument on a day with less distracting terminology on the table.
1 Of either the 'AI-go-FOOM' or 'someday we'll be able to do lots of brain emulations' variety.
2 I was never a theist, and only recently began to question some old assumptions about the likelihood of various Creators. This perhaps either lends credibility to my interest, or lends credibility to the idea that I'm insane.
3 Or the set of things that would have been translated to Archimedes by the Chronophone as the equivalent of an immortal soul (id est, whatever concept ends up being actually significant).
I meant there is probably some sweet spot in the space of [human-mind] approximations, because of scale separation, which I elaborated on a little later with the computer analogy.
Cheaper implies more efficient, unless the individual human simulations somehow have a dramatically higher per capita utility.
A solipsist universe has extraneous patchwork complexity. Even assuming that all of the non-biological physical processes are grossly approximated (not unreasonable given current simulation theory in graphics), they still may add up to a cost exceeding that of one human mind.
But of course a world with just one mind is not an accurate simulation, so now you need to populate it with a huge number of pseudo-minds which are functionally indistinguishable from the perspective of our sole real observer but somehow use far fewer computational resources.
Now imagine a graph of simulation accuracy vs computational cost of a pseudo-mind. Rather than being linear, I believe it is sharply exponential, or J-shaped, with a single large spike near the scale-separation point.
The jump occurs at the point where the pseudo-mind becomes a real, conscious observer in its own right.
The rationale for this cost model and the scale separation point can be derived from what we know about simulating computers.
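As a toy illustration of the cost model described above (every number here is hypothetical, chosen only to make the shape of the curve visible), one can sketch the claimed J-shaped relation: fidelity is cheap to improve up to a scale-separation threshold, past which you are no longer paying for an approximation but for an actual observer.

```python
# Toy model (all constants hypothetical) of the J-shaped curve of
# simulation accuracy vs per-pseudo-mind computational cost.

FULL_MIND_COST = 10**15   # hypothetical cost of a real, conscious mind
CHEAP_BASE_COST = 10**6   # hypothetical cost of a crude behavioral stand-in
THRESHOLD = 0.99          # hypothetical scale-separation point

def pseudo_mind_cost(accuracy: float) -> float:
    """Cost of simulating one background mind at a given fidelity in [0, 1]."""
    if accuracy < THRESHOLD:
        # Below the threshold, cost grows only gently with fidelity.
        return CHEAP_BASE_COST * (1.0 + 10.0 * accuracy)
    # At the threshold the approximation stops being an approximation:
    # the simulator is now paying for an actual observer.
    return float(FULL_MIND_COST)

for a in (0.5, 0.9, 0.989, 0.999):
    print(f"accuracy={a:<6} cost={pseudo_mind_cost(a):.3g}")
```

The point of the sketch is only the discontinuity: below the threshold, doubling fidelity costs little; crossing it costs many orders of magnitude more.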
Perhaps not your life in particular, but human life on earth today?
Simulating 6 billion humans will probably be the only way to truly understand what happened today from the perspective of our future posthuman descendants. The alternatives are... creating new physical planets? Simulation would be vastly more efficient than that.
The basement reality is highly unlikely to have different physics. The vast majority of simulations we create today are based on approximations of currently understood physics, and I don't expect this to ever change: simulations have utility for simulators.
I'm a little confused about the 10^18 number.
From what I recall, at the limits of computation one kg of matter can hold roughly 10^30 bits, and a human mind is in the vicinity of 10^15 bits or less. So at the molecular limits a kg of matter could hold around a quadrillion souls - an entire human galactic civilization. A skyscraper of such matter could give you 10^8 kg, and so on. Long before reaching physical limits, posthumans would be able to simulate many billions of entire earth histories. At the physical molecular limits, they could turn each of the moon's roughly 10^22 kg into an entire human civilization, for a total of 10^37 minds.
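The arithmetic behind these figures is easy to check. A quick sketch, taking the comment's own rough estimates (10^30 bits/kg, 10^15 bits/mind, ~10^22 kg for the moon) as given rather than as established facts:

```python
# Back-of-envelope check of the comment's storage estimates.
# All input figures are the comment's own order-of-magnitude guesses.

BITS_PER_KG = 10**30    # claimed storage limit for 1 kg of matter
BITS_PER_MIND = 10**15  # claimed upper bound on a human mind
MOON_MASS_KG = 10**22   # the comment's rough figure for the moon's mass

minds_per_kg = BITS_PER_KG // BITS_PER_MIND
print(f"minds per kg: {minds_per_kg:.0e}")            # 1e+15, a quadrillion

moon_minds = MOON_MASS_KG * minds_per_kg
print(f"minds in the moon's mass: {moon_minds:.0e}")  # 1e+37
```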
The potential time-scale compression is nearly as vast, with estimated speed limits at around 10^15 ops/bit/sec in ordinary matter at ordinary temperatures, versus at most 10^4 ops/bit/sec in human brains (today's circuits sit in between, at roughly 10^9 ops/bit/sec). A speedup of more than 10^10 over biological brains allows for a few hundred years of subjective time per second of sidereal time.
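The same kind of check applies to the time-compression claim. A sketch, again treating the comment's figures (10^15 and 10^4 ops/bit/sec, a working speedup of 10^10) as order-of-magnitude assumptions:

```python
# Rough check of the time-compression claim; figures are the comment's
# own order-of-magnitude estimates, not established facts.

LIMIT_OPS = 10**15        # claimed ops/bit/sec limit in ordinary matter
BRAIN_OPS = 10**4         # claimed ops/bit/sec in human brains
SECONDS_PER_YEAR = 3.15e7

speedup_at_limit = LIMIT_OPS / BRAIN_OPS       # 1e11 at the stated limits
years_per_second = 10**10 / SECONDS_PER_YEAR   # for the quoted 10^10 speedup

print(f"speedup over biology at the limit: {speedup_at_limit:.0e}")
print(f"subjective years per sidereal second at 10^10x: {years_per_second:.0f}")
```

Note that a 10^10 speedup works out to roughly 300 subjective years per second, not 100; the conclusion (centuries per second) survives either way.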
I understand that for any mind, there is probably an "ideal simulation level" which has the fidelity of a more expensive simulation at a much lower cost, but I still don't understand why human-mind equivalents are important here.