ChristianKl comments on xkcd on the AI box experiment - Less Wrong Discussion

15 Post author: FiftyTwo 21 November 2014 08:26AM

Comment author: ChristianKl 25 November 2014 04:19:01PM 0 points

Why is Roko's Basilisk any more or any less of a threat than the infinite other hypothetically possible scenarios that have infinite other (good and bad) outcomes?

The idea is that an FAI built on timeless decision theory might automatically behave that way. There's also Eliezer's conjecture that any working FAI has to be built on timeless decision theory.