The recent implementation of a -5 karma penalty for replying to comments at -3 or below has clearly met with disagreement and controversy. See http://lesswrong.com/r/discussion/lw/eb9/meta_karma_for_last_30_days/7aon . At the same time, Eliezer's observation that trolling and related problems have gotten worse here over time may well be correct. This may be an inevitable consequence of growth, but it may also be something that can be handled or reduced with some solution or set of solutions. I'm starting this discussion thread for people to propose possible solutions. To minimize anchoring bias and related problems, I'm not including my own ideas in this header but in a comment below; people should think about the problem themselves before reading the proposed solutions.
If that's the case, then LW is failing badly. There are a lot of people here like me who have been convinced by LW to be much more worried about existential risk in general, but who are not at all convinced that AI is a major source of existential risk, and who, even granting that it is, aren't convinced that the solution is some notion of Friendliness in any useful sense. Moreover, this sort of phrasing makes the ideas about FAI sound dogmatic in a very worrying way. The Litany of Tarski seems relevant here: I want to believe that AGI is a likely existential threat if and only if AGI is a likely existential threat. If LW attracts or creates a lot of good rationalists and they find reasons why we should focus more on some other existential risk, that's a good thing.