This is our monthly thread for collecting arbitrarily contrived scenarios in which somebody gets tortured for 3^^^^^3 years, or an infinite number of people experience an infinite amount of sorrow, or a baby gets eaten by a shark, etc., and which might be handy to link to in one of our discussions. As everyone knows, this is the most rational and non-obnoxious way to think about incentives and disincentives.
- Please post all infinite-torture scenarios separately, so that they can be voted up/down separately. (If they are strongly related, reply to your own comments. If strongly ordered, then go ahead and post them together.)
- No more than 5 infinite-torture scenarios per person per monthly thread, please.
Humanity grows saner, SIAI is well funded, and successfully develops a FAI. Just as the AI finishes calculating the CEV of humanity, a stray cosmic ray flips the sign bit in its utility function. It proceeds to implement the anti-CEV of humanity for the lifetime of the universe.
(Personally, I think contrivedness only detracts from the raw emotional impact such scenarios have if you ignore their probability and focus on the outcome.)
I actually use "the sign bit in the utility function" as one of my canonical examples for how not to design an AI.
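To make the failure mode concrete, here's a minimal Python sketch of what one flipped bit does to an IEEE-754 utility value. The `cev_utility` placeholder is entirely made up; the only point is that an agent that was maximizing U is now, bit for bit, maximizing -U.

```python
import struct

def flip_sign_bit(x: float) -> float:
    """Flip the IEEE-754 sign bit of a 64-bit float, as a stray
    cosmic ray might. Every other bit of the value is untouched."""
    (bits,) = struct.unpack(">Q", struct.pack(">d", x))
    return struct.unpack(">d", struct.pack(">Q", bits ^ (1 << 63)))[0]

def cev_utility(outcome: float) -> float:
    # Hypothetical stand-in for whatever the FAI actually computes.
    return outcome  # higher is better

def corrupted_utility(outcome: float) -> float:
    # The same utility function after the cosmic-ray strike.
    return flip_sign_bit(cev_utility(outcome))

print(cev_utility(42.0))        #  42.0 -- the intended ranking
print(corrupted_utility(42.0))  # -42.0 -- the anti-CEV ranking
```

The design lesson is the one above: no single bit should be able to invert the entire preference ordering.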