Alex_Altair comments on How can I reduce existential risk from AI? - Less Wrong
Yeah, quite possibly. But I wouldn't want people to run into analysis paralysis; I still think safety promotion is very likely to be a great way to reduce x-risk.