G0W51 comments on Open Thread, Jun. 15 - Jun. 21, 2015 - Less Wrong Discussion

5 Post author: Gondolinian 15 June 2015 12:02AM

Comments (302)

Comment author: G0W51 20 June 2015 07:17:20AM 3 points

What are some recommended readings for those who want to decrease existential risk? I know of Nick Bostrom's book Superintelligence, the article "How can I reduce existential risk from AI?", and MIRI's article "Reducing Long-Term Catastrophic Risks from Artificial Intelligence", but what else is useful? What about non-AI-related existential risks?