JoshuaZ comments on Dealing with trolling and the signal to noise ratio - Less Wrong Discussion
You are viewing a comment permalink. View the original post to see all comments and the full post content.
If that's the case then LW is failing badly. There are a lot of people here like me who have been convinced by LW to be much more worried about existential risk in general, but are not at all convinced that AI is a major component of existential risk, and who moreover, even granting that, aren't convinced that the solution is some notion of Friendliness in any useful sense. This sort of phrasing also makes the ideas about FAI sound dogmatic in a very worrying way. The Litany of Tarski seems relevant here: I want to believe that AGI is a likely existential risk if and only if AGI is a likely existential risk. If LW attracts or creates a lot of good rationalists and they find reasons why we should focus more on some other existential risk, that's a good thing.