Emile comments on Does the simulation argument even need simulations? - Less Wrong

Post author: lmm 11 October 2013 09:16PM


Comment author: Emile 12 October 2013 12:59:29PM 2 points

Our present civilization is likely to reach the point where it can simulate a universe reasonably soon

I don't know about that; it seems unlikely to me. A future civilization simulating us would require (a) enormous amounts of information about us, much of which is likely to be irreversibly lost in the meantime, and (b) enough computing power to simulate at a sufficiently fine level of detail (a crude approximation would diverge from what actually happened pretty fast). Either of those alone looks like it makes simulating current-day Earth infeasible.

But my main reaction to the simulation argument (even assuming it's possible) is "so what?". Are there any decisions I would change if I knew I might be being simulated?

Comment author: Baughn 12 October 2013 06:08:02PM 5 points

A future civilization simulating their own ancestors would require a lot of information about them, possibly impossibly-hard-to-get amounts. You're right about that.

So what? They could still simulate some arbitrary, fictional pre-singularity civ. There is no guarantee whatsoever, if we're part of a simulation, that we were ever anything else.

Comment author: lmm 12 October 2013 10:56:51PM 1 point

But my main reaction to the simulation argument (even assuming it's possible) is "so what?". Are there any decisions I would change if I knew I might be being simulated?

Possible ethical position: I care about the continued survival of humanity in some form. I also care about human happiness in some way that avoids the repugnant conclusion (that is, I'm willing to sacrifice some proportion of unhappy lives in exchange for making the rest of them much happier).

Now suppose I am offered the option of releasing an AI that we believe with 99% probability to be Friendly; this has an expectation of greatly increasing human happiness, but carries a small risk of eliminating humanity in this universe. If I believe I am not simulated, I do not release it, because the small risk of eliminating all humanity in existence is not worth taking. If I believe I am simulated, I release it, because it is almost surely impossible for this to eliminate all humanity in existence, and the expected happiness gain is worth it.
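The decision flip described above can be sketched as a toy expected-utility calculation. All the numbers below (the utility of the happiness gain, the disutility of total extinction) are hypothetical illustrations chosen to make the structure visible, not values from the comment:

```python
# Toy expected-utility model of the "release the AI?" decision.
# Hypothetical utilities: the exact magnitudes are illustrative only.

P_FRIENDLY = 0.99          # believed probability the AI is Friendly
HAPPINESS_GAIN = 1.0       # utility of greatly increased human happiness
EXTINCTION_LOSS = -1000.0  # utility of eliminating ALL humanity in existence

def expected_utility_of_release(p_simulated: float) -> float:
    """Expected utility of releasing the AI, given one's credence in being simulated.

    If we are simulated, an unFriendly outcome destroys only this simulation,
    not humanity-in-existence, so the catastrophic term is discounted by the
    probability that this is base reality.
    """
    p_unfriendly = 1 - P_FRIENDLY
    catastrophe = p_unfriendly * (1 - p_simulated) * EXTINCTION_LOSS
    gain = P_FRIENDLY * HAPPINESS_GAIN
    return gain + catastrophe

# Believing we are in base reality: the extinction term dominates.
print(expected_utility_of_release(p_simulated=0.0))    # negative, ≈ -9.01
# Believing we are almost surely simulated: the happiness gain dominates.
print(expected_utility_of_release(p_simulated=0.999))  # positive, ≈ 0.98
```

The sign flip between the two calls is the whole point of the argument: holding the AI's riskiness fixed, a high enough credence that one is simulated discounts the "eliminate all humanity in existence" term until the expected gain wins.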