
ChristianKl comments on xkcd on the AI box experiment - Less Wrong Discussion

Post author: FiftyTwo 21 November 2014 08:26AM 15 points




Comment author: AmagicalFishy 24 November 2014 08:23:58PM 1 point

What I mean, more so, is: consider an FAI so advanced that it decides to reward all beings who did not contribute to creating Roko's Basilisk with eternal bliss, regardless of whether or not they knew of the potential existence of Roko's Basilisk.

Why is Roko's Basilisk any more or any less of a threat than the infinite other hypothetically possible scenarios that have infinite other (good and bad) outcomes? What's so special about this one in particular that makes it non-negligible, or that should make anyone concerned about it in the slightest? (That is the part I'm missing. =\ )

Comment author: ChristianKl 25 November 2014 04:19:01PM 0 points

Why is Roko's Basilisk any more or any less of a threat than the infinite other hypothetically possible scenarios that have infinite other (good and bad) outcomes?

The idea is that an FAI built on timeless decision theory might automatically behave that way. There's also Eliezer's conjecture that any working FAI has to be built on timeless decision theory.