Multiheaded comments on New censorship: against hypothetical violence against identifiable people - Less Wrong Discussion
You are viewing a comment permalink. View the original post to see all comments and the full post content.
I'm starting to feel strongly uncomfortable about this, but I'm unsure whether that's reasonable. Here are some arguments ITT that concern me:
Violence is a very slippery concept. Perhaps it is not the best one to base mod rules on. (more at end)
This one is really disturbing to me. I don't like all the self-conscious talk about how we are perceived outside. Maybe we'd need to fork LW to accomplish it, but I want to be able to discuss what's true and good without worrying about getting moderated. My post-rationality opinions have already diverged so far from the mainstream that I feel I can't talk about my interests in polite society. I don't want that here too.
If I see any mod action that could be destroyed by the truth, I will have to conclude that LW management is borked and needs to be forked. Until then I will put my trust in the authorities here.
Yeah seriously. What if violence is the right thing to do? (EDIT: Derp. Don't discuss it in public, except for stuff like Konkvistador's piracy and reaction advocacy, which are supposed to be public.)
This is important. If the poster in question agrees when it is pointed out that their post is stupid, go ahead and delete it. But if they disagree in some way that isn't simple defiance, please take a long look at why.
In general, two conclusions:
I support censorship, but only if it is based on the unaccountable personal opinion of a human. Anything else is too prone to lost purposes. If a serious rationalist (e.g. EY) seriously thinks about it and decides that some post has negative utility, I support its deletion. If some unintelligent rule like "no hypothetical violence" decides that a post is no good, why should I agree? Simple rules do not capture all the subtlety of our values; they cannot be treated as Friendly.
And, as usual, that which can be destroyed by the truth should be. If moderator actions start serving some force other than truth and good, LW, or at least the subset dedicated to truth and rationality, should be forked.
I think that there's the usual paradox of benevolent dictatorship here: you can only trust humans who clearly don't seek the position for selfish ends and aren't likely to put up a rational/benevolent front just so that you'll hand them political power.
In a liberal/democratic political atmosphere, self-proclaimed benevolent dictators are a rare and prized resource; you can pressure one to run a website, an organization, etc. to the best of their ability. But if dictatorship were seen as the norm, and you couldn't easily fall back on democracy, rule by committee, anarchy, etc., and had to choose among a few dictators, then the standards of dictatorial control would surely plummet, and it would be psychologically much harder to change the form of organization. So, IMO, isolated experiments with dictatorship are fine; an overall preference for it is terribly dangerous.
(All of the above goes only for humans, of course; I have no qualms about FAI rule.)
P.S.: I googled for "benevolent dictator" + "paradox" and found an argument similar to mine.
Interesting. Do you think there are dictator-selection procedures that don't have either set of failure modes (selecting for looks/promises to loot the commons/lack of leadership, selecting for power-hungry tyrants)?
Only a single one: a great, actually-benevolent dictator, with good insight into people and lots of rationality, personally selects his successor from among several candidates, after lengthy consideration and hidden testing. But, of course, remove one of the above qualifiers and it can blow up regardless of the first dictator's best intentions. See e.g. Marcus Aurelius and Commodus. So, on a meta level, no, there's likely no system that would work for humans.
(I think that "real" democracy is also too dangerous - see the 19th and early 20th century - so either some form of sophisticated rule by committee or a state of anarchy could be the safest option for baseline humanity.)
What about technocracy à la China?
And FAI, obviously.
Really? Safe in the sense of "too incompetent to execute a mass-murder"? Also, anarchy is a military vacuum.