Many folk here on LW take the simulation argument (in its more general forms) seriously. Many others take Singularitarianism[1] seriously. Still others take Tegmark cosmology (and related big universe hypotheses) seriously. But then I see them proceed to self-describe as atheist (instead of omnitheist, theist, deist, having a predictive distribution over states of religious belief, et cetera), and many tend to be overtly dismissive of theism. Is this signalling cultural affiliation, an attempt to communicate a point estimate, or what?
I am especially confused that the theism/atheism debate is considered a closed question on Less Wrong. Eliezer's reformulations of the Problem of Evil in terms of Fun Theory provided a fresh look at theodicy, but I do not find those arguments conclusive. A look at Luke Muehlhauser's blog surprised me; the arguments against theism are just not nearly as convincing as I'd been brought up to believe[2], nor nearly convincing enough to warrant what I saw as massive overconfidence on the part of most atheists, aspiring rationalists or no.
It may be that theism is in the class of hypotheses that we have yet to develop a strong enough practice of rationality to handle, even if the hypothesis has non-negligible probability given our best understanding of the evidence. We are becoming adept at wielding Occam's razor, but it may be that we are still too foolhardy to wield Solomonoff's lightsaber (Tegmark's Black Blade of Disaster) without chopping off our own arm. The literature on cognitive biases gives us every reason to believe we are poorly equipped to reason about infinite cosmology, decision theory, the motives of superintelligences, or our place in the universe.
Due to these considerations, it is unclear whether we should go ahead doing the equivalent of philosoraptorizing amidst these poorly posed questions so far outside the realm of science. This is not the sort of domain where one should tread if one is feeling insecure in one's sanity, and it is possible that no one should tread here. Human philosophers are probably not as good at philosophy as hypothetical Friendly AI philosophers (though we've seen in the cases of decision theory and utility functions that not everything can be left for the AI to solve). I don't want to stress your epistemology too much, since it's not like your immortal soul[3] matters very much. Does it?
Added: By theism I do not mean the hypothesis that Jehovah created the universe. (Well, mostly.) I am talking about the possibility of agenty processes in general creating this universe, as opposed to impersonal math-like processes like cosmological natural selection.
Added: The answer to the question raised by the post is "Yes, theism is wrong; we don't have a good word for the thing that looks a lot like theism but carries less unfortunate connotations, but we do know that calling it theism would be stupid." As to whether this universe gets most of its reality fluid from agenty creators... perhaps we will come back to that argument on a day with less distracting terminology on the table.
[1] Of either the 'AI-go-FOOM' or 'someday we'll be able to do lots of brain emulations' variety.
[2] I was never a theist, and only recently began to question some old assumptions about the likelihood of various Creators. This perhaps either lends credibility to my interest, or lends credibility to the idea that I'm insane.
[3] Or the set of things that would have been translated to Archimedes by the Chronophone as the equivalent of an immortal soul (id est, whatever concept ends up being actually significant).
Sure, but that's not relevant to the goal. There are no 'actual' or exact brain states that canonically define people.
If you created a simulation of an alternate 1950 and ran it forward, it would almost certainly diverge, but this is no different from alternate branches of the multiverse. Running the alternate forward to, say, 2050 may generate a very different reality, but that may not matter much, as long as it also generates a bunch of variants of the people we like.
This brings to mind Heinlein's "Job: A Comedy of Justice", a book about a man who starts jumping around between branches.
Anyway, my knowledge of my grandfather is vague. But I imagine posthumans could probably nail down his DNA and eventually recreate a very plausible 1890 (around when he was born). We could also nail down a huge set of converging probability estimates from the historical record to figure out where he was at any given time, what he was likely to have read, and so on.
Creating an initial population of minds is probably much trickier. Is there any way to create a fully trained neural net other than by actually training it? I suspect it's impossible in principle; it's certainly impossible in practice today.
In fact, there may be no simple shortcut without going way, way back into earlier prehistory, but this is not a fundamental obstacle, as this simulation could presumably be a large public project.
Yes, the approach of just creating some initial branch from scratch and then running it forward is extremely naive. If you'd like, I could think of ten vastly more sophisticated algorithms that could shape the branch's forward evolution to converge with the main future worldline before breakfast.
The first thing that pops to mind: the historical data we have forms a very sparse sampling, but we could use it to guide the system's forward simulation, with the historical data acting as constraints and attractors. In these worlds, fate would be quite real. I think this gives you the general idea; it's related to bidirectional path tracing. A toy sketch of what I mean is below.
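Here's a minimal sketch of that constraint-and-attractor idea, just to make it concrete. Everything in it (the dynamics, the anchor format, the numbers) is invented for the example; it's a toy, not a real proposal. The state evolves under its own dynamics, but at every step it's nudged toward the next surviving historical data point, so even a branch that wanders is drawn back through the documented record.

```python
import numpy as np

# Toy illustration of constraint-guided forward simulation (all names and
# dynamics invented for the example). The world state evolves freely, but
# is nudged toward sparse "historical" anchor points, so even a diverging
# branch is pulled back toward the documented timeline; in these worlds,
# "fate" is real.

def free_step(state, rng):
    """One unconstrained step: drift plus noise, standing in for real physics."""
    return state + 0.1 * np.tanh(state) + rng.normal(0.0, 0.05, size=state.shape)

def guided_run(initial_state, anchors, steps, pull=0.3, seed=0):
    """Simulate `steps` forward; `anchors` maps a timestep to its recorded state."""
    rng = np.random.default_rng(seed)
    state = np.asarray(initial_state, dtype=float)
    trajectory = [state.copy()]
    for t in range(1, steps + 1):
        state = free_step(state, rng)
        upcoming = [ft for ft in anchors if ft >= t]
        if upcoming:
            ft = min(upcoming)                      # next documented data point
            target = np.asarray(anchors[ft], dtype=float)
            weight = pull / max(ft - t, 1)          # pull strengthens as the date nears
            state = (1.0 - weight) * state + weight * target
        trajectory.append(state.copy())
    return np.stack(trajectory)

# Sparse historical record: the state is known only at a few timesteps.
anchors = {25: [1.0, -0.5], 60: [0.2, 0.8], 100: [-1.0, 0.0]}
path = guided_run(initial_state=[0.0, 0.0], anchors=anchors, steps=100)
print(path[25], path[60], path[100])  # each is drawn toward its anchor
```

The same idea could be run bidirectionally, growing paths backward from the anchors as well as forward from the initial branch, which is why the bidirectional path tracing analogy comes to mind.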
Such as?
We can get to that if you can establish that there's any good reason to do it in the first place.
Your justifications for running such simulations have so far seemed to hinge on things we could learn from them (or simply creating them for their own sake,...