XiXiDu comments on Anthropic principles agree on bigger future filters - Less Wrong Discussion

2 Post author: XiXiDu 03 November 2010 04:20PM

Comment author: XiXiDu 04 November 2010 04:45:34PM 0 points

According to SIA, averting these filter existential risks should be prioritized more highly than averting non-filter existential risks, such as those discussed in this post. For instance, AI would be less of a concern relative to other existential risks than otherwise estimated.

Light-cone-eating AI explosions are not filters.