wedrifid comments on Our Phyg Is Not Exclusive Enough - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Comments (513)
There is a big leap from donating a trivial amount to one charity to ostracising all those who don't.
It is irrational. It's just not something there is any point in being personally offended at or excluding from the local environment. On the other hand, people who do not believe that correct reasoning about the likelihood of events is that which most effectively approximates Bayesian updating have far more cause to be excluded from the site - because this is a site where that is a core premise.
I'm almost certain you are more likely to have collected such links than I, because I care rather a lot less about controlling people's beliefs on the subject.
On various occasions people have voiced an antipathy to my criticisms of AI risk. If the same people do not mind if other members do not care about AI risk, then it seems to be a valid conclusion that they don't care what people believe as long as they do not criticize their own beliefs.
Those people might now qualify their position by stating that they only have an antipathy against poor criticisms of their beliefs. But this would imply that they do not mind people who fail to care about AI risk for poor reasons, as long as they do not voice those reasons.
But even a trivial amount of money is a bigger signal than the proclamation that you believe that people who do not care about AI risk are irrational and that they therefore do not fit the standards of this community. The former takes more effort than writing a comment stating the latter.
Other words could be used in place of 'poor' that may more accurately convey what it is that bothers people; "incessant" or "belligerent" would be two of the politer examples. Some would also take issue with the phrase "their beliefs", pointing out that the criticisms aren't sufficiently informed to be actual criticisms of their beliefs rather than of straw men.
It remains the case that people don't care all that much whether other folks on Less Wrong have a particular attitude to AI risk.