MichaelHoward comments on The Wannabe Rational - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Even where the FAI was sure that a different person would consent to being simulated if made aware of the situation and thinking clearly? It could throw in some pretty good incentives.
I wonder if we should adjust our individual estimates of being in a Friendly-run sim (vs UF-sim or non-sim) based on whether we think we'd give consent.
I also wonder if we should adjust whether we'd give consent based on how much we'd prefer to be in a Friendly-run sim, and how an FAI would handle that appropriately.
One reason to significantly adjust downward the probability of being in a Friendly-run sim is what I would call "The Haiti Problem"... I'm curious whether anyone has a solution to it. Does granting eventual immortality (or the desired heaven!) to all simulated persons make up for a lifetime of suffering?
Perhaps only a small number of persons need be simulated as fully conscious beings, with the rest acted out well enough to fool us. The perceived suffering of others can add to the verisimilitude of the simulation.
Of course, internalizing this perspective seems like moral poison, because I really do want the root-level version of me to act against suffering there, where it definitely exists.