SforSingularity comments on Contrived infinite-torture scenarios: July 2010 - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
In the comment section of Roko's banned post, PeerInfinity mentioned "rescue simulations". I'm not going to post the context here because I respect Eliezer's dictatorial right to stop that discussion, but here's another disturbing thought.
An FAI created in the future may take into account our crazy desire that all the suffering in the history of the world hadn't happened. Barring time machines, it cannot reach into the past and undo the suffering (and we know that hasn't happened anyway), but acausal control allows it to do the next best thing: create large numbers of history sims where bad things get averted. This raises two questions: 1) if something very bad is about to happen to you, what's your credence that you're in a rescue sim and have nothing to fear? 2) if something very bad has already happened to you, does this constitute evidence that we will never build an FAI?
(If this isn't clear: just like PlaidX's post, my comment is intended as a reductio ad absurdum of any fears/hopes concerning future superintelligences. I'd still appreciate any serious answers though.)
I'd give that some credence, though note that we're talking about subjective anticipation, which is a piece of humanly-compelling nonsense.
For me, essentially zero; that is, I would act (or attempt to act) as if I had zero credence that I was in a rescue sim.