- Seven weeks ago, I precommitted that censoring a post or comment on LessWrong would cause a 0.0001% increase in existential risk.
- Earlier today, Yudkowsky censored a post on Less Wrong.
- Twenty minutes later, existential risk increased by 0.0001% (to the best of my estimation).
I'm curious: would you object if similar censorship were applied to instructions on how to make a nuclear weapon? What if someone posted code that they thought would likely lead to a very unFriendly AI if it were run? What if there were some close-to-nonsense phrase in English that caused permanent mental damage to people who read it?
I'm incidentally curious whether you are familiar with the distinction between censorship by governments and censorship by private organizations. In general, most people who are against censorship agree that private organizations can decide what content they do and do not allow. Thus, for example, you probably don't object to Less Wrong moderators removing spam. And we've had a few posters who simply damaged the signal-to-noise ratio (like the fellow who claimed that he had ancient Egyptian nanotechnology that had been stolen by the rapper Jay-Z).

Is there any difference between those cases and the case you are objecting to? As far as I can tell, the primary difference is that the probability of very bad things happening if the comments remain up is much higher in the case you object to. That seems to be producing some sort of cognitive bias, where you regard everything related to those remarks (including censorship of them) as a more serious issue than you otherwise would.
Incidentally, as a matter of instrumental rationality, using a title that complains about the karma system probably makes people less inclined to take your remarks seriously.
What?

Can you link me to this? Please? S/N ratio be damned, I need to read it.