In the comment section of Roko's banned post, PeerInfinity mentioned "rescue simulations". I'm not going to post the context here because I respect Eliezer's dictatorial right to stop that discussion, but here's another disturbing thought.
An FAI created in the future may take into account our crazy desire that all the suffering in the history of the world hadn't happened. Barring time machines, it cannot reach into the past and undo the suffering (and we know that hasn't happened anyway), but acausal control allows it to do the next best thing: create large numbers of history sims where bad things get averted. This raises two questions: 1) if something very bad is about to happen to you, what's your credence that you're in a rescue sim and have nothing to fear? 2) if something very bad has already happened to you, does this constitute evidence that we will never build an FAI?
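Question 1 can be made concrete with a toy Bayesian calculation. A minimal sketch, assuming some made-up parameters (a prior on FAI being built, a number of rescue sims run per real history) that are purely illustrative and not claimed anywhere in the comment:

```python
# Toy model of the rescue-sim credence question. Every number here is an
# illustrative assumption, not a claim from the original comment.

def rescue_sim_credence(p_fai, sims_per_history, p_harm):
    """P(I'm in a rescue sim | something very bad is about to happen to me).

    p_fai            -- prior probability that an FAI gets built and runs rescue sims
    sims_per_history -- rescue sims run per basement-level history (assumed)
    p_harm           -- chance any given observer faces the bad event
    """
    # Count observer-moments facing the bad event: one basement copy,
    # plus (in the FAI branch) one copy per rescue sim, where the harm
    # is reproduced up to the last moment and then averted.
    basement = p_harm
    simulated = p_fai * sims_per_history * p_harm
    return simulated / (basement + simulated)

# e.g. a 10% prior on FAI and 100 rescue sims per history:
print(rescue_sim_credence(0.1, 100, 0.01))
```

Note that `p_harm` cancels out of the ratio: under this toy model the credence depends only on the FAI prior and the sim multiplier, which is exactly why question 2 (observed unaverted harm as evidence against FAI) has bite.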
(If this isn't clear: just like PlaidX's post, my comment is intended as a reductio ad absurdum of any fears/hopes concerning future superintelligences. I'd still appreciate any serious answers though.)
I was on Robert Wright's side towards the end of this debate when he claimed that there was a higher optimization process that created natural selection for a purpose.
The purpose of natural selection, of the fine-tuning of physical constants in our universe, and of countless other detailed coincidences (1) was to create me. (Or, for the readers of this comment, to create you.)
The optimization process that optimized all these things is called anthropics. Its principle of operation is absurdly simple: you can't find yourself in a part of the universe that can't create you.
When Robert Wright looks at evolution and sees purpose in the existence of the process of evolution itself (and the particular way it happened to play out, including increasing complexity), he is seeing the evidence for anthropics and big worlds.
Once you take away all the meta-purpose that is caused by anthropics, then I really do think there is no more purpose left. Eli should re-do the debate with this insight on the table.
(note 1) (including the fact that evolution on earth happened to create intelligence, which seems to be a highly unlikely outcome of a generic biochemical replicator process on a generic planet; we know this because earth managed to have life for 4 billion years -- half of its total viability as a place for life -- without intelligence emerging, and said intelligence seemed to depend in an essential way on a random asteroid impact at approximately the right moment)