DanArmak comments on Contrived infinite-torture scenarios: July 2010 - Less Wrong

Post author: PlaidX 23 July 2010 11:54PM




Comment author: DanArmak 24 July 2010 10:28:23PM 18 points [-]

Humanity grows saner, SIAI is well funded, and successfully develops a FAI. Just as the AI finishes calculating the CEV of humanity, a stray cosmic ray flips the sign bit in its utility function. It proceeds to implement the anti-CEV of humanity for the lifetime of the universe.

(Personally, I think contrivedness only detracts from the raw emotional impact such scenarios have if you ignore their probability and focus on the outcome.)

Comment author: Eliezer_Yudkowsky 24 July 2010 11:02:11PM 20 points [-]

I actually use "the sign bit in the utility function" as one of my canonical examples for how not to design an AI.
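The fragility being pointed at is real at the representation level: in an IEEE 754 double, the sign is a single bit, so one flipped bit negates the stored value. A minimal Python sketch of that mechanism (the function name and example values are illustrative, not from any actual AI design):

```python
import struct

def flip_sign_bit(x: float) -> float:
    """Flip the IEEE 754 sign bit of a 64-bit float.

    Reinterpret the float's bytes as an unsigned 64-bit integer,
    XOR the most significant bit (the sign bit), and convert back.
    """
    (bits,) = struct.unpack("<Q", struct.pack("<d", x))
    bits ^= 1 << 63  # sign bit is the top bit of the 64-bit pattern
    (flipped,) = struct.unpack("<d", struct.pack("<Q", bits))
    return flipped

utility = 42.0
print(flip_sign_bit(utility))  # -42.0
```

If a utility value (or the coefficient that scales it) lives in memory as a raw float like this, a single-event upset at exactly that bit inverts the optimization target, which is why a design whose behavior hinges on one unprotected bit is a canonical negative example.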