New censorship: against hypothetical violence against identifiable people
New proposed censorship policy:
Any post or comment which advocates or 'asks about' violence against sufficiently identifiable real people or groups (as opposed to aliens or hypothetical people on trolley tracks) may be deleted, along with replies that also contain the info necessary to visualize violence against real people.
Reason: Talking about such violence makes that violence more probable, and makes LW look bad; and numerous message boards across the Earth censor discussion of various subtypes of proposed criminal activity without anything bad happening to them.
More generally: Posts or comments advocating or 'asking about' violation of laws that are actually enforced against middle-class people (e.g., kidnapping, not anti-marijuana laws) may at the admins' option be censored on the grounds that it makes LW look bad and that anyone talking about a proposed crime on the Internet fails forever as a criminal (i.e., even if a proposed conspiratorial crime were in fact good, there would still be net negative expected utility from talking about it on the Internet; if it's a bad idea, promoting it conceptually by discussing it is also a bad idea; therefore and in full generality this is a low-value form of discussion).
This is not a poll, but I am asking in advance if anyone has non-obvious consequences they want to point out or policy considerations they would like to raise. In other words, the form of this discussion is not 'Do you like this?' - you probably have a different cost function from people who are held responsible for how LW looks as a whole - but rather, 'Are there any predictable consequences we didn't think of that you would like to point out, and possibly bet on with us if there's a good way to settle the bet?'
Yes, a post of this type was just recently made. I will not link to it, since this censorship policy implies that it will shortly be deleted, and reproducing the info necessary to say who was hypothetically targeted and why would be against the policy.
Comments (457)
Got it. Posts discussing our plans for crimes will herewith be kept to the secret boards only.
Back in line with you!
And the mailing lists, apparently.
The Surgeon General recommends that you not discuss criminal activities, with respect to laws actually enforced, on any mailing list containing more than 5 people.
Intriguing: is this an actual paraphrase of a US "The Surgeon General"? I can imagine it's something someone in high office might say.
We have a The Surgeon General, but he recommends things about smoking and whatnot; I'm pretty sure he doesn't issue warnings about mailing lists.
The Surgeon General is someone who issues national health recommendations. The implication of Eliezer's post is that discussing criminal activity may be hazardous to your health.
I believe the traditional structure is a clandestine cell system.
Would this censor posts about robbing banks and then donating the proceeds to charity?
Depends on exactly how it was written, I think. "The paradigmatic criticism of utilitarianism has always been that we shouldn't rob banks and donate the proceeds to charity" - sure, that's not actually going to conceptually promote the crime and thereby make it more probable, or make LW look bad. "There's this bank in Missouri that looks really easy to rob" - no.
What about pro-bank-robbery arguments in general?
What about discussions of flaws in security systems generally? E.g., "Banks often have this specific flaw, which can be mitigated in this cost-ineffective manner."?
Uncharitable reading: As long as taking utilitarianism seriously doesn't lead to arguments to violate formalized 21st century Western norms too much it is ok to argue for taking utilitarianism seriously. You are however free to debunk how it supposedly leads to things considered unacceptable on the Berkeley campus in 2012, since it obviously can't.
Or Really Extreme Altruism?
Note to all: Alicorn is referring to something else. Robbing banks may be extreme but it is not altruism.
Edited in a link.
This is an example of why I support this kind of censorship. LessWrong just isn't capable of thinking about such things in a sane way anyhow.
The top comment in that thread demonstrates AnnaSalamon being either completely and utterly mindkilled or blatantly lying about simple epistemic facts for the purpose of public relations. I don't want to see the (now) Executive Director of CFAR doing either of those things. And most others are similarly mindkilled, meaning that I just don't expect any useful or sane discussion to occur on sensitive subjects like this.
(I.e., I consider this censorship about as intrusive as forbidding peanuts to someone with a peanut allergy.)
I think that a discussion in which only most people are mindkilled can still be a fairly productive one on these questions in the LW format. LW is actually one of the few places where you would get some people who aren't mindkilled, so I think it is actually good that it achieves this much.
They seem fairly ancillary to LW as a place for improving instrumental or epistemic rationality, though. If you think testing the extreme cases of your models of your own decision-making is likely to result in practical improvements in your thinking, or just want to test yourself on difficult questions, these things seem like they might be a bit helpful, but I'm comfortable with them being censored as a side effect of a policy with useful effects.
Unfortunately the non mindkilled people would also have to be comfortable simply ignoring all the mindkilled people so that they can talk among themselves and build the conversation toward improved understanding. That isn't something I see often. More often the efforts of the sane people are squandered trying to beat back the tide of crazy.
This seems an excessively hostile and presumptuous way to state that you disagree with Anna's conclusion.
No it isn't; my words are clear and quite simply do not mean what you say I am trying to say.
The disagreement with the claims of the linked comment is obviously implied as a premise somewhere in the background but the reason I support this policy really is because it produces mindkilled responses and near-obligatory dishonesty. I don't want to see bullshit on lesswrong. The things Eliezer plans to censor consistently encourage people to speak bullshit. Therefore, I support the censorship. Not complicated.
You may claim that it is rude or otherwise deprecated-by-fubarobfusco but if you say that my point is different to both what I intended and what the words could possibly mean then you're wrong.
Yes and if the CFAR Executive Director is either mindkilled or willing to lie for PR, I want to know about it.
This does indeed seem like something that's covered by the new policy. It's illegal. In the alternative where it's a bad idea, talking about it has net negative expected utility. If it were for some reason a good idea, it would still be incredibly stupid to talk about it on the &^%$ing Internet. I shall mark it for deletion if the policy passes.
So you don't see value in discussions like these? Thought experiments that give some insights into morality? Is the (probably negligible) effect of those posts on LW's reputation really greater than the benefit of the discussion?
I think that post had a net negative effect on reality and that diminishing the number of people who read it is a net positive. No, the conversation isn't worth it.
Oh come on, you're invoking your basilisk-related logic here? How does it have a negative effect? Please don't tell me it's because you think there will be more suicides in the world if more people read the post. And further, please don't tell me that, if you thought that, you would conclude it has a net negative effect on the world. But please do answer me.
It has a net negative effect because people then go around saying (this post will be deleted after policy implementation), "Oh, look, LW is encouraging people to commit suicide and donate the money to them." That is what actually happens. It is the only real significant consequence.
Now it's true that, in general, any particular post may have only a small effect in this direction, because, for example, idiots repeatedly make up crap about how SIAI's ideas should encourage violence against AI researchers, even though none of us have ever raised it even as a hypothetical, and so themselves become the ones who conceptually promote violence. But it would be nice to have a nice clear policy in place we can point to and say, "An issue like this would not be discussable on LW because we think that talking about violence against individuals can conceptually promote such violence, even in the form of hypotheticals, and that any such individuals would justly have a right to complain. We of course assume that you will continue to discuss violence against AI researchers on your own blog, since you care more about making us look bad and posturing your concern than about the fact that you, yourself, are the one who has actually invented, introduced, talked about, and given publicity to, the idea of violence against AI researchers. But everyone else should be advised that any such 'hypothetical' would have been deleted from LW in accordance with our anti-discussing-hypothetical-violence-against-identifiable-actual-people policy."
I wasn't thinking of SIAI as the charity.
This intention of yours is not transparent. Plus, they don't care.
Regardless of your intentions, I know of one person who somewhat seriously considered that course of action as a result of the post in question. (The individual in question has been talked out of it in the short term, by way of 'the negative publicity would hurt more than the money would help', but my impression is that the chance that they'll try something like that has still increased, probably permanently.)
This is where the rubber meets the road as far as whether we really mean it when we say "that which can be destroyed by the truth, should be." If we accept this argument, then by "mere addition" of censorship rules, you eventually end up renaming SIAI "The Institute for Puppies and Unicorn Farts" and completely lying to the public about what it is you're actually about, in order to benefit PR.
Well, are you?
True, but you have said things that seem to imply it. Seriously, you can't go around saying "X" and "X->Y" and then object when people start attributing position "Y" to you.
I thought I posted this comment last night, but it seems I didn't (and now I have to pay karma to post it). Anyway: aren't we just encouraging belief bias this way? (Which has an additional negative utility, on top of the loss of positive utility from the discussion, and the loss of utility because people see us as a heavily-censored community and form another type of negative opinion of us.)
As far as I can tell, Really Extreme Altruism actually is legal.
What about the possibility that someone who thought it was a good idea would change their mind after talking about it?
This seems an order of magnitude less likely than somebody who wouldn't naturally have thought of the dumb idea seeing the dumb idea.
Therefore censor uncommon bad ideas generally?
Well then.
I've heard that firemen respond to everything not because they actually have to, but because it keeps the drill sharp, so to speak. The same idea may apply to mod action... (in other words, MOAR "POINTLESS" CENSORSHIP)
More seriously, does this policy apply to things like gwern's hypothetical bombing of Intel?
gwern specifically argued that small scale terrorism would be ineffective.
I suppose the next question is whether it would apply to things like comments in response to gwern's hypothetical bombing of Intel arguing that his conclusion is incorrect.
Given the stated principles governing the new censorship policy, I think the answer would be "yes, of course."
Let's not delete posts for disagreeing on uncomfortable empirical questions.
I don't think the policy EY is proposing involves banning people, just deleting the stuff we write that violates policy.
fixed, thanks
Implying that whether his post should be censored hinges on the conclusion reached and not just the topic?
Discussion of violence by state actors is quite a bit different from discussion of individual violence.
Sure, but why is that a difference that makes a difference?
It looks as though that was on gwern.net - outside the zone.
It was in Discussion too.
If you're talking about his Slowing Moore's Law: Why You Might Want To and How You Would Do It, it's not there anymore.
I didn't thoroughly read the new version on his site, so there's a chance that there is now a link to an article that will still be confused for a pro-terrorism piece (that's the problem the previous version had) or sounds like it's advocating the idea of governments attacking chip fabs.
Your generalization is averaging over clairvoyance. The whole purpose of discussing such plans is to reduce uncertainty over their utility; you haven't proven that the utility gain of a plan turning out to be good must be less than the cost of discussing it in public.
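To make that concrete (notation is mine, not from the thread): let p be the probability that a proposed plan is actually good, G the expected gain from discovering this and executing the plan, and C the fixed cost of discussing it in public (PR damage, legal exposure). The expected value of discussion is then roughly p·G − C, which is positive whenever p·G > C. The blanket claim in the policy amounts to asserting that p·G < C for every plan anyone might raise, and that is exactly the step that hasn't been shown.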
Does the policy apply to violence against oneself? (I'm guessing not, since it's not illegal.) Talking about it is usually believed to reduce risk.
There's a scarcity effect whereby people believe pro-violence arguments to be stronger, since if they weren't convincing they wouldn't be censored. Not sure how strong it is, likely depends on whether people drop the topic or say things like "I'm not allowed to give more detail, wink wink nudge nudge".
It's a common policy so there don't seem to be any slippery slope problems.
We're losing Graham cred by being unwilling to discuss things that make us look bad. Probably a good thing, we're getting more mainstream.
Since when is violence against oneself, or even discussion of violence against oneself, fully legal?
In most times and places throughout history, including all countries whose legal systems I am familiar with.
Suicide in particular is often illegal.
ETA: possibly this statement of mine was outdated.
Either you or some of the people reading your comment seem to have been misled into concluding that, from one thing being both illegal and violence against oneself, it can be generalised that violence against oneself, or even discussion of violence against oneself, is illegal. That seems to be a rather blatant confusion.
I'm not sure what RomeoStevens meant about discussion of violence against oneself being illegal, but aside from that aspect, his point is entirely valid. You seem to be suggesting that we're generalising from "suicide is illegal" to "any form of violence against oneself is illegal". We're not. We're simply noting that suicide is one type of violence against oneself, and it's illegal.
Your statement expands to "In most times and places throughout history, including all countries whose legal systems I am familiar with, violence against oneself is fully legal." Unless you're familiar only with very odd legal systems, that seems to be a rather blatant confusion.
No. MixedNuts' point. RomeoStevens' reply was confused and mistaken. Unfortunately, Caspian has misled you about the context.
That was my original impression, and why I refrained from downvoting him. Until, that is, it became apparent that he and some readers (evidently yourself included) believe that his statement of trivia in some way undermines the point made by MixedNuts and supported by myself, or supports RomeoStevens' ungrammatical rhetorical interjection.
I had read the entire context, and re-read it just now to make sure I hadn't missed anything. You're correct that RomeoStevens' reply doesn't really undermine MixedNuts' point, and is therefore "trivia". But it's nonetheless correct trivia (modulo the above-mentioned caveat) and your refutation of it is therefore quite confusing.
But it's pointless to continue arguing this trivial point, as it's irrelevant to the thread topic, except in the meta sense that these kinds of pointless semantic debates will be the inevitable result of implementing this extremely ill-advised and poorly-thought-through censorship policy.
What are you thinking of? Non-assisted suicide that doesn't put third parties in danger is legal most places (exceptions: India, Singapore, North Korea, Virginia). Self-injury is legal in the US at least. Discussion of suicide is allowed as long as it's even slightly more hypothetical than "I intend to kill myself in the near future". Discussion of self-injury is AFAIK completely legal (in the US?).
My understanding has always been that self harm or plausible discussion of self harm in the US leads to a loss of autonomy in that you can be diagnosed with a mental illness and lose access to things like voting, driving, firearms, etc. (depending on the diagnosis)
Trigger warning for, obviously, self-harm.
There's a huge chasm between a mental illness diagnosis (which self-harm is very likely to cause, especially in the US where you need diagnosis other than "ain't quite right - not otherwise specified" for insurance) and actual repercussions. Members of online support groups report that their psychiatrists either treat self-injury like any other symptom (asking about it, describing decreases as good but not praiseworthy) or recommend they stop but do not enforce it. If it gets life-threatening it's treated like suicide, but that almost never comes up.
Deleting comments for being perceived as dangerous might get in the way of conversation. I think that if we're worried about how the site looks to outsiders then it's probably only necessary to worry about actual posts. Nobody expects comments to be appropriate on the internet, so it probably doesn't hurt us that much.
It was a top-level post (though one in Discussion) he was thinking about.
I know, but he said that the suggested policy change would include comments.
That's the usual Yudkowskian overreaction; he will likely get tired of implementing it within a couple of years or less.
.......
But the site's only been around for a couple of years in the first place.
Would it censor a discussion of, say, compelling an AI researcher by all means necessary to withhold their research from, say, the military?
Yes. This seems like yet another example of "First of all, it's a bad fucking idea, second of all, talking about it makes everyone else look bad, and third of all, if hypothetically it was actually a good idea you'd still be a fucking juvenile idiot for blathering about it on the public Internet." What part of "You fail conspiracies forever" is so hard for people to understand? Talk like this serves no purpose except to serve as fodder for people who claim that <rationalist idea X> leads to violence and is therefore false, and your comment shall be duly deleted once this policy is put into place.
I don't see how this comment even fits the proposed policy, except under a motivatedly-broad reading of "by all means necessary"
Wikipedia thinks otherwise:
I was unaware of that connotation. But I don't think it changes the equation. There's a million different ways to interpret "by all means necessary", the vast majority of which would not be construed to include violence. If this were a forum in which Sartre/Malcolm X references were the norm, then that would be different. But it isn't.
I and the one person currently in the room with me immediately took "by all means necessary" to suggest violence. I think you're in a minority in how you interpret it.
OK, I'll update on that.
Just checked with my houseguest; his interpretation is also "a call to violence".
Does advocating gun control, or increased taxes, count? They would count as violence if private actors did them, and talking about them makes them more likely (by states). Is the public-private distinction the important thing - would advocating/talking about state-sanctioned genocide be ok?
While an interesting question, I think that the answer to that is reasonably obvious.
What about capital punishment and/or corporal punishment?
In the event of gun control, it would in fact be illegal even if done by a state actor.
Edit: assuming USA of course.
To call either gun control or taxation violence is stretching matters beyond reasonable limits. The only sense in which they are is the sense in which any public policy is - that it is backed by the government. If anything to do with the government has to be considered as 'about violence'... bah.
I don't think it's silly, and based on the LW survey results, neither do approximately 30.3% of LW users.
But aside from that, OP said "More generally: Posts or comments advocating or 'asking about' violation of laws that are actually enforced against middle-class people". Gun control (though not taxation) clearly falls under this illegality clause, without resort to classifying it as "violence".
'Libertarian' does not mean 'believes all government action is violence'.
I identify as libertarian and have been objectivist, but calling taxation theft (and other similar claims) is almost always sneaking in connotations.
My post was indeed inappropriate. I have used the "Delete" function on it.
...but it'd be nice to have a poll to point at later, to show consensus, and I'd be surprised if people disagreed.
This poll, like EY's original question, conflates two things that don't obviously belong together. (1) Advocating certain kinds of act. (2) "Asking about" the same kind of act.
I appreciate that in some cases "asking about" might just be lightly-disguised advocacy, or apparent advocacy might just be a particularly vivid way of asking a question. I'm guessing that the quotes around "asking about" are intended to indicate something like the first of these. But what, exactly?
I think in this context, "asking about" might include raising for neutral discussion without drawing moral judgements.
The connection I see between them is that if someone starts neutral discussion about a possible action, actions which would reasonably be classified as advocacy have to be permitted if the discussion is going to progress smoothly. We can't discuss whether some action is good or bad without letting people put forward arguments that it is good.
There's certainly a connection. I'm not convinced the connection is so intimate that if censoring one is a good idea then so is censoring the other.
The "interesting" thing about violence is that it's one of the few ways that a relatively small group of (politically) powerless people with no significant support can cause a big change in the world. However, the change rarely turns out the way the small group would hope; most attempts at political violence by individuals or small groups fail miserably at achieving the group's aims.
Non-violent action has a reasonable track record, considering how rarely it's been used in an organized way by the oppressed. The track record is particularly good in the first world, where people care about appearances.
Would my pro-piracy arguments be covered by this? What about my pro-coup d'état ones?
Possibly. I hope not. I'm all for mod action, but not at the expense of political diversity.
I think piracy cases are pretty similar to marijuana cases (they are even less likely to be enforced actually) which he said won't be banned.
I don't think Konkvistador was talking about software piracy.
You mean copyright piracy or sea piracy?
Sea piracy obviously. What kind of a person do you think I am?!
As someone unfamiliar with your views, I can't tell whether this is sarcasm or not, especially because of the interrobang. Can you clarify? Is there anywhere on the internet where your views are concisely summarized? (Is it in any way associated with your real name?)
The levels can be hard to disambiguate so I sympathize. I'll write my opinions out unironically. You can find the full arguments in my comment history (I can dig links to that up too).
I'm assuming you are familiar with the arguments for efficient charity and optimal employment? If not, I can provide citations & links. I don't think sea piracy as a means to funding efficient charity is obviously worse from a utilitarian perspective than a combo with many legal professions. It may or may not be justified; I'm leaning towards it being justified on the same utilitarian grounds as government taxation can be. If not, cheating on taxes to fund efficient charity is a pretty good idea. Some people's comparative advantage will lie in sea piracy.
Violating copyright on software or media products in the modern West is in general not a bad thing. But indiscriminately pirating everything may be bad.
In the grandfather comment I was aiming for ambiguity and humour.
I like it.
So I finally downvoted Yudkowsky.
What was your line of thought?
That censorship because of what people think of LessWrong is ridiculous. That the negative effect on its reputation is probably significantly less than what is assumed. And that if EY thought censorship of content for the sake of LW's image was in order, he should logically have thought that omitting fetishes from his public OKCupid profile (for the record, I've defended the view that this is his right), among other things, was in order as well. And some other thoughts of this kind.
Someone please send me a link via PM? Or perhaps the author could PM me? Not because the censorship of that class bothers me but because talking to wedrifid is not posting things on the internet, I'm curious and there are negligible consequences for talking to me about interesting hypothetical questions.
(Disregard the above if the post or comment was boring.)
tl;dr: tobacco kills more people than guns and cars combined. Should we <insert violence here>?
PS: fuck the police
(I laughed). Thanks nyan. (I hope this kind of satirical summary is considered acceptable.)
As the author of the offending Discussion post in question, I'd say it's an adequate summary.
This kind of uncertainty about what is and is not acceptable is perhaps the primary reason why such censorship policies are evil.
I'm starting to feel strongly uncomfortable about this, but I'm unsure if that's reasonable. Here are some arguments ITT that are concerning me:
Violence is a very slippery concept. Perhaps it is not the best one to base mod rules on. (more at end)
This one is really disturbing to me. I don't like all the self-conscious talk about how we are perceived outside. Maybe we need to fork LW to accomplish it, but I want to be able to discuss what's true and good without worrying about getting moderated. My post-rationality opinions have already diverged so far from the mainstream that I feel I can't talk about my interests in polite society. I don't want this here too.
If I see any mod action that could be destroyed by the truth, I will have to conclude that LW management is borked and needs to be forked. Until then I will put my trust in the authorities here.
Yeah seriously. What if violence is the right thing to do? (EDIT: Derp. Don't discuss it in public, (except for stuff like Konkvistador's piracy and reaction advocacy, which are supposed to be public))
This is important. If the poster in question agrees when it is pointed out that their post is stupid, go ahead and delete it. But if they disagree in some way that isn't simple defiance, please take a long look at why.
In general, two conclusions:
I support censorship, but only if it is based on the unaccountable personal opinion of a human. Anything else is too prone to lost purposes. If a serious rationalist (e.g. EY) seriously thinks about it and decides that some post has negative utility, I support its deletion. If some unintelligent rule like "no hypothetical violence" decides that a post is no good, why should I agree? Simple rules do not capture all the subtlety of our values; they cannot be treated as Friendly.
And, as usual, that which can be destroyed by the truth should be. If moderator actions start serving some force other than truth and good, LW, or at least the subset dedicated to truth and rationality, should be forked.
It makes sense to have mod discretion, but it also makes sense to have a list of rules that the mods can point to so that people whose posts get censored are less likely to feel that they are being personally targeted.
Yes. Explanatory rules are good. Letting the rules drive is not.
These are explanations, not rules, check.
Hence "may at the admins' option be censored"
Then discussing it on the public Internet is the wrong thing to do. I can't compare it to anything but juvenile male locker-room boasting.
Good point.
A friend and I once put together a short comic trying to analyze democracy from an unusual perspective, including presenting the idea that an underlying threat of violent popular uprising should the system be corrupted helps keep it running well. This was closely related to a shorter comic presenting some ideas on rationality. The project led to some interesting discussions with interesting people, which helped me figure out some ideas I hadn't previously considered, and I consider it to have been worth the effort; but I'm unsure whether or not it would fall afoul of the new policy.
How 'identifiable' do the targets of proposed violence have to be for the proposed policy to apply, and how 'hypothetical' would they have to be for it not to? Some clarification there would be appreciated.
Also, implying that violence is best discussed in private, versus not being discussed at all. It's like saying in public "But let's talk about our illegal activities in a more private venue." There should be no perception of LW being associated with such, period (.)
What if you aren't sure whether violence is the right thing to do? You obviously should want as many eyeballs as possible to debug your thinking on that, no?
If you actually believe that violence might be the right thing to do, then you assign non-negligible probability to
If you want to discuss a coup or something, do it in a less easily traceable fashion (not on a public forum; use encryption).
Actually, I can think of at least one type of situation where this isn't true, though it seems unwise to explain it in public and in any case it's still not something you'd want associated with LW, or in fact happening at all in most cases.
I think that there's the usual paradox of benevolent dictatorship here; you can only trust humans who clearly don't seek this position for selfish ends and aren't likely to present a rational/benevolent front just so you would give them political power.
In a liberal/democratic political atmosphere, self-proclaimed benevolent dictators are a rare and prized resource; you can pressure one to run a website, an organization, etc to the best of their ability. But if dictatorship were to be seen as the norm, and you couldn't easily fall back on democracy, rule by committee, anarchy, etc, and had to choose between a few dictators, then the standards of dictatorial control would surely plummet and it would be psychologically much more difficult to change the form of organization. So, IMO, isolated experiments with dictatorship are fine; overall preference for it is terribly dangerous.
(All of the above goes only for humans, of course; I have no qualms about FAI rule.)
P.S.: I googled for "benevolent dictator" + "paradox" and found an argument similar to mine.
Interesting. Do you think there are dictator-selection procedures that don't have either set of failure modes (selecting for looks/promises to loot the commons/lack of leadership, selecting for power-hungry tyrants)?
Only a single one: a great actually-benevolent-dictator, with a good insight into people and lots of rationality, personally selects his successor among several candidates, after lengthy consideration and hidden testing. But, of course, remove one of the above qualifiers, and it can blow up regardless of the first dictator's best intentions. See e.g. Marcus Aurelius and Commodus. So, on a meta level, no, there's likely no system that would work for humans.
(I think that "real" democracy is also too dangerous - see the 19th and early 20th century - so either some form of sophisticated rule by committee or a state of anarchy could be the safest option for baseline humanity.)
What about technocracy à la China?
And FAI, obviously.
Really? Safe in the sense of "too incompetent to execute a mass-murder"? Also, anarchy is a military vacuum.
EY has publicly posted material that is intended to provoke thought on the possibility of legalizing rape (which is considered a form of violence). If he believed that there was positive utility in considering such questions before, then he must consider them to have some positive utility now, and determining whether the negative utility outweighs that is always a difficult question. This is why I will consider any sort of zero-tolerance policy, in which the things to be censored are not well-defined, a definite impediment to balanced and rationally-considered discussion. It's clear to me that speaking about violence against a particular person or persons is far more likely to have negative consequences on balance, but discussion of the commission of crimes in general seems like something that should be weighed on a case-by-case basis.
In general, I prefer my moderators to have a fuzzy set of broad guidelines about what should be censored in which not deleting is the default position, and they actually have to decide that it is definitely bad before they take the delete action. The guidelines can be used to raise posts to the level of this consideration and influence their judgment on this decision, but they should never be able to say "the rules say this type of thing should be deleted!"
That's an... interesting way of putting it, where by "interesting" I mean "wrong". I could go off on how the idea is that there's particular modern-day people who actually exist and that you're threatening to harm, and how a future society where different things feel harmful is not that, but you know, screw it.
The 'rules' do not 'mandate' that I delete anything. They hardly could. I'm just, before I start deleting things, giving people fair notice that this is what I'm considering doing, and offering them a chance to say anything I might have missed about why it's a terrible idea.
If you genuinely can't see how similar considerations apply to you personally publishing rape-world stories and the reasoning you explicitly gave in the post then I suggest you have a real weakness in evaluating the consequences of your own actions on perception.
I approve of your Three Worlds Collide story (in fact, I love it). I also approve of your censorship proposal/plan. I also believe there is no need to self censor that story (particularly at the position you were when you published it). That said:
This kind of display of evident obliviousness and arrogant dismissal, rather than engagement (or, preferably, just outright ignoring it), may well do more to make LessWrong look bad than half a dozen half-baked speculative posts by CronoDAS. There are times to say "but you know, screw it" and "where by interesting I mean wrong", but those times don't include when concern is raised about your legalised-rape-and-it's-great story in the context of your own "censor hypothetical violence 'cause it sounds bad" post.
I'm not sure how this is relevant; there's a good bit of difference between discussion of breaking a law and discussion of changing it. That said, I think I'm reading this differently than most in the thread. I'm understanding it as aimed against hypotheticals that are really "hypotheticals".
In answer to the question that was actually asked in the post, here is a non-obvious consequence: My impression of the atheist/libertarian/geek personspace cluster that makes up much of LW's readership is that they're generally hostile to anything that smells like conflating "legal" with "okay"; and also to the idea that they should change their behavior to suit the rest of the world. You might find you're making LW less off-putting to the mainstream at the cost of making it less attractive to its core audience. (but you might consider it worth that cost)
As both a relatively new contributor and a member of said cluster, this policy makes me somewhat uncomfortable at first glance. Whether that generalizes to other potential new contributors, I cannot say. I present it as proof-of-concept only.
IAWYC, but that was a story set in the far future with a discussion that makes clear (to me at least) that our present is so different from that that the author wouldn't ever even dream of suggesting to do anything remotely like that in our times. It isn't remotely similar to (what Poe's Law predicts people will get from) the recent suggestion about tobacco CEOs.
He was in a different position then. Trying to gain a reputation for being an original thinker requires different public outputs than attempting to earn mainstream recognition of the organisation one is the head of.
I'm dubious about this because laws can change. I'm also sure I don't have a solid grasp of which laws can be enforced against middle-class people, but I do know that they aren't all like laws against kidnapping. For example, doctors can get into trouble for prescribing "too much" pain medication.
I find that threatening hypothetical violence against my interlocutor can be a useful rhetorical device for getting them to think about ethical problems in near mode.
I'm going to hit you with a stick unless you can give me an example of where that has been effective.
For all the whining I do about how LWers lack a sense of humor.... I absolutely love it when I'm proven wrong.
Do you really feel like LWers lack a sense of humor? LWers have posted some of the funniest things I've ever read. Their sense-of-humor distribution has heavy tails, at least.
THREE examples.
I'll restate here a third option that I suggested in the censored thread (woohoo, I have read a thread Eliezer Yudkowsky doesn't want people to read, and that you, dear reader of this comment, probably can't!): add an option, when creating a post, to have it be viewable only by people with a certain karma or above, or to have it disappear, after a week or so, from people without that karma. This is based on an idea 4chan uses, where it deletes all threads after they become inactive, to encourage people to discuss freely.
This would keep these threads from showing up when people Googled LessWrong. It could also let us discuss phyggishness without making LessWrong look bad on Google.
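A minimal sketch of what such a karma gate might look like, with every name and the threshold invented for illustration (the actual LessWrong/Reddit codebase is organized differently):

```python
from datetime import datetime, timedelta

KARMA_THRESHOLD = 100             # hypothetical cutoff; the proposal names no number
GRACE_PERIOD = timedelta(days=7)  # "after a week or so, it disappears"

def can_view(post, user):
    """Visibility check for a karma-gated post under the proposed scheme."""
    if not post.karma_gated:
        return True
    if user is None:
        # Logged-out readers, including search-engine crawlers, never see
        # gated posts, which keeps them out of Google results.
        return False
    if user.karma >= KARMA_THRESHOLD:
        return True
    # Low-karma users can see the post only during its first week.
    return datetime.utcnow() - post.created_at < GRACE_PERIOD
```

Serving gated pages with a robots "noindex" meta tag would additionally keep any briefly visible copies out of search results.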
Not a bad option indeed. It has a merit if we are really that bothered about the general view of LW.
And for the record the post is still accessible albeit deleted.
LW has effectively zero resources to implement software changes.
If this were your real rejection, you would be asking for volunteer software-engineer-hours.
Tried.
Are you kidding? Sign me up as a volunteer polyglot programmer, then!
Although, my own eagerness to help makes me think that the problem might not be that you tried to ask for volunteers and didn't get any, but rather that you tried to work with volunteers and something else didn't work out.
The site is open source, you should be able to just write a patch and submit it.
This would be a poor investment of time without first getting a commitment from Eliezer that he will accept said patch.
It'd get you familiar with the code base, which you'd need to be anyway if you wanted to be a volunteer contributor.
After finding the source and the issue list, I found instructions which indicate that there is, after all, non-zero engineering resources for lesswrong development. Specifically, somebody is sorting the incoming issues into "issues for which contributions are welcome" versus "issues which we want to fix ourselves".
The path to becoming a volunteer contributor is now very clear.
Maybe it's just that volunteers that will actually do any work are hard to find. Related.
Personally, I was excited about doing some LW development a couple of years ago and emailed one of the people coordinating volunteers about it. I got some instructions back but procrastinated forever on it and never ended up doing any programming at all.
You can't reliably make things on the internet go away.
You can make them hard enough to access that they won't be stumbled upon by random people wondering what LessWrong is about, which is basically good enough for preserving LessWrong's reputation.
I was thinking about people posting screen shots.
Agreed. It only takes one high-karma user posting a screenshot on reddit of LW's Secret Thread Where They Discuss Terrorism or whatever...
I can think of a few different ways, requiring no more than a few dozen software-engineer-hours, that this could be solved effectively enough to make it a non-issue.
If my browser displays it as text, I can copy it. If you try dickish JavaScript hacks to stop me from copying it the normal way, I can screenshot it. If you display it as some kind of hardware-accelerated DRM'd video that can't be screenshotted, I can get out a fucking camera and take a fucking picture. If I post it somewhere and you try to shut me down, you invoke the Streisand Effect and now all of Reddit wants (and has) a copy, to show their Censorship Fighter status.
tl;dr: No, you can't stop people from copying things on the Internet.
Of course. But a "good enough" solution to the stated problem doesn't need to be able to do that. There are a number of different approaches I can think of off the top of my head, in increasing order of complexity:
Yes, and if we all put on black robes and masks to hide our identities when we talk about sinister secrets, no one will be suspicious of us at all!
Censorship is particularly harmful to the project of rationality, because it encourages hypocrisy and the thinking of thoughts for reasons other than that they are true. You must do what you feel is right, of course, and I don't know what the post you're referring to was about, but I don't trust you to be responding to some actual problematic post instead of self-righteously overreacting. Which is a problem in and of itself.
Passive-aggression level: Obi-Wan Kenobi
I don't see that that's passive-aggressive when it's accompanied by a clear and explicit statement that Nominull thinks Eliezer is wrong and why. What would be passive-aggressive is just saying "Well, I suppose you must do what you feel is right" and expecting Eliezer to work out that disapproval is being expressed and what sort.
In particular, this comment seems to suggest that EY considers public opinion to be more important than truth. Of course this is a really tough trade-off to make. Do you want to see the truth no matter what impact it has on the world? But I think this policy vastly overestimates the negative effect posts on abstract violence have. First of all, the people who read LW are hopefully rational enough not to run out and commit violence based on a blog post. Secondly, there is plenty of more concrete violence on the internet, and that doesn't seem to have too many bad direct consequences.
How about instead of outright censorship, such discussions be required to be encrypted, via double-rot13?
Rot13 applied twice is just the original text...
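A two-line check in Python, for anyone who wants to verify:

```python
import codecs

s = "Attack at dawn"  # arbitrary example string, not from the thread
assert codecs.encode(codecs.encode(s, "rot13"), "rot13") == s  # rot13 is its own inverse
```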
..............whooosh................
In light of the above getting upvotes, I'm not sure if it's the "whoosh" of double-rot13 going over your head as I originally thought, or if it's indicating intended sarcasm going over my head, or some other meaning not readily obvious to me (inferential distance and all that.)
I don't know if we actually need a specific policy on this. We didn't in the case of my post...
Would your post on eating babies count, or is it too nonspecific?
http://lesswrong.com/lw/1ww/undiscriminating_skepticism/1scb?context=1
(I completely agree with the policy, I'm just curious)
Aside from the fact that "it might make us look bad" is a horrible argument in general, have you not considered the consequence that censorship makes us look bad? And consider the following comment below:
It was obviously intended as a joke, but is that clear to outsiders? Does forcing certain kinds of discussions into side-channels, which will inevitably leak, make us look good?
Consideration of these kinds of meta-consequences is what separates naive decision theories from sophisticated decision theories. Have you considered that it might hurt your credibility as a decision theorist to demonstrate such a lack of application of sophisticated decision theory in setting policies on your own website?
And now, what I consider to be the single most damning argument against this policy: in the very incident that provoked this rule change, the author of the post in question, after discussion, voluntarily withdrew the post, without this policy being in effect! So self-policing has demonstrated itself, so far, to be 100% effective at dealing with this situation. So where exactly is the necessity for such a policy change?
Why the explicit class distinction?
It would be prohibited to discuss how to speed and avoid being cited for it. (I thought that this was already policy, and I believe it to be a good policy.)
It would not be prohibited to discuss how to be a vagrant and avoid being cited for it. (Middle class people temporarily without residences typically aren't treated as poorly as the underclass.)
Should the proper distinction be 'serious' crimes, or perhaps 'crimes of infamy'?
Just because I think responses to this post might not have been representative:
I think this is a good policy.
I also agree with this policy, and feel that many of the raised or implied criticisms of it are mostly motivated from an emotional reaction against censorship. The points do have some merit, but their significance is vastly overstated. (Yes, explicit censorship of some topics does shift the Schelling fence somewhat, but suggesting that violence is such a slippery topic that next we'll be banning discussion about gun control and taxes? That's just being silly.)
You may think it's silly; others do not. Even if Eliezer has no intention of interpreting "violence" that way, how do we know that? Ambiguity about what is and is not allowed results in chilling far more speech than may have been originally intended by the policy author.
Also, the policy is not limited only to violence, but extends to anything illegal (and commonly enforced on middle-class people). What the hell does that even mean? Illegal according to whom? Under what jurisdiction? What about conflicts between state/federal/constitutional law? I mean, don't get me wrong, I think I have a pretty good idea what Eliezer meant by that, but I could well be wrong, and other people will likely have different ideas of what he meant. Again, ambiguity is what ends up chilling speech, far more broadly than the original policy author may have actually intended.
And I will again reiterate what I consider to be the most slam-dunk argument against this policy: in the incident that provoked this policy change, the author of the offending post voluntarily removed it, after discussion convinced him it was a bad idea. Self-policing worked! So what exactly is the necessity for any new policy at all?
I agree that your points about ambiguity have some merit, but I don't think there's much of a risk of free speech being chilled more than was intended, because there will be people who test these limits. Some of their posts will be deleted, some of them will not. And then people can see directly roughly where the intended line goes. The chilling effect of censorship would be a more worrying factor if the punishment for transgressing was harsher: but so far Eliezer has only indicated that at worst, he will have the offending post deleted. That's mild enough that plenty of people will have the courage to test the limits, as they tested the limits in the basilisk case.
As for self-policing, well, it worked once. But we've already had trolls in the past, and the userbase of this site is notoriously contrarian, so you can't expect it to always work - if we could just rely on self-policing, we wouldn't need moderators in the first place.
Abortion, euthanasia and suicide fit that description, some say. For them, and for those who disagree with them, this proposal may have unforeseen consequences. Edit: all three are illegal in parts of the world today.
Do wars count? I find it strange, to say the least, that humans have strong feelings about singling out an individual for violence but give relatively little thought to dropping bombs on hundreds or thousands of nameless, faceless humans.
Context matters, and trying to describe an ethical situation in enough detail to arrive at a meaningful answer may indirectly identify the participants. Should there at least be an exception for notorious people or groups who happen to still be living instead of relegated to historical "bad guys" who are almost universally accepted to be worth killing? I can think of numerous examples, living and dead, who were or are the target of state-sponsored violence, some with fairly good reason.
I think this is an overreaction to (deleted thing) happening, and the proposed policy goes too far. (Deleted thing) was neither a good idea nor good to talk about in this public forum, but it was straight-out advocating violence in an obvious and direct way, against specific, real people who aren't in some hated group. That's not okay, and it's not good for the community, for the reasons you (EY) said. But the proposed standard is too loose, and it's going to have a chilling effect on some fringe discussion that's probably going to be useful in teasing out some of the consequences of ethics (which is where this stuff comes up). Having this be a guideline rather than a hard rule seems good, but it still seems like we're scarring on the first cut, as it were.
I think we run the risk of adopting a censorship policy that makes it difficult to talk about or change the censorship policy, which is also a really terrible idea.
I agree with the general idea of protecting LW's reputation to outsiders. After all, if we're raising the sanity waterline (rather than researching FAI), we want outsiders to become insiders, which they won't do if they think we're crazy.
"No advocating violence against real world people, or opening a discussion on whether to commit violence on real world people" seems safe enough as a policy to adopt, and specific enough to not have much of a chilling effect on discussion. We ought to restrict what we talk about as little as possible, in the absence of actual problems, given that any posts we don't want here can be erased by a few keystrokes from an admin.
If virtualizing people is violence (since it does imply copying their brains and, uh, removing the physical original) you may want to censor Wei_Dai over here, as he seems to be advocating that the FAI could hypothetically (and euphemistically) kill the entire population of earth:
Wei Dai's Ironic Security Idea
My hypothetical scenario was that replacing a physical person with a software copy is a harmless operation and the FAI correctly comes to this conclusion. It doesn't constitute hypothetically (or euphemistically) killing, since in the scenario, "virtualizing" doesn't constitute "killing".
I'm disappointed by EY's response so far in this thread, particularly here. The content of the post above in itself did not significantly dismay me, but upon reading what appeared to be a serious lack of any rigorous updating on the part of EY to--what I and many LWers seemed to have thought were--valid concerns, my motivation to donate to the SI has substantially decreased.
I had originally planned to donate around $100 (starving college student) to the SI by the start of the new year, but this is now in question. (This is not an attempt at some sort of blackmail, just a frank response by someone who reads LW precisely to sift through material largely unencumbered by mainstream non-epistemic factors.) This is not to say that I will not donate at all, just that the warm fuzzies I would have received on donating are now compromised, and that I will have to purchase warm fuzzies elsewhere--instead of utilons and fuzzies all at once through the SI.
This is similar to how I feel. I was perfectly happy with his response to the incident but became progressively less happy with his responses to the responses.
I don't necessarily object to this policy but find it troubling that you can't give a better reason for not discussing violence being a good idea than PR.
Frankly, I find it even more troubling that your standard reasons for why violence is not in fact a good idea seem to be "it's bad PR" and "even if it is we shouldn't say so in public".
As I quote here:
Edit: added link to an example of SIAI people unable to give a better reason against doing violence than PR.
Two thoughts:
One: When my partner worked as the system administrator of a small college, her boss (the head of IT, a fatherly older man) came to her with a bit of an ethical situation.
It seems that the Dean of Admissions had asked him about taking down a student's personal web page hosted on the college's web server. Why? The web page contained pictures of the student and her girlfriend engaged in public displays of affection, some not particularly clothed. The Dean of Admissions was concerned that this would give the college a bad reputation.
Naturally the head of IT completely rejected the request out of hand, but was interested in discussing the implications. One that came up was that taking down a student web page about a lesbian relationship would be worse reputation than hosting it could bring. Another was that the IT staff did not feel like being censors over student expression, and certainly did not feel like being so on behalf of the Admissions office.
It's not clear to me that this case is especially analogous. It may be rather irrelevant, all in all.
Two: There is the notion that politics is about violence, not about agreement. That is to say, it is not about what we do when everyone agrees and goes along; but rather what we do when someone refuses to go along; when there is contention over shared resources because not everyone agrees what to do with them; when someone is excluded; when someone gets to impose on someone else (or not); and so on. Violence is often at least somewhere in the background of such discussions, in judicial systems, diplomacy, and so on. As Chairman Mao put it (at least, as quoted by Bob Wilson), political power grows out of the barrel of a gun. And a party with no ability to disrupt the status quo is one that nobody has to listen to.
As such, a position of nonviolence goes along with a position of non-politics. Avoiding threatening people — taken seriously enough — may require disengaging from a lot of political and legal-system stuff. For instance, proposing to make certain research illegal or restricted by law entails proposing a threat of violence against people doing that research.
Counter-proposal:
We don't contemplate proposals of violence against identifiable people because we're not assholes.
I mean, seriously, what the fuck, people?
Would pro-suicide and general anti-natalist posts be covered by this?
Fun Exercise
Consider what would have been covered by this 250, 100 and 50 years ago.
Bonus Consider what wouldn't have been covered by this 250, 100 and 50 years ago but would be today.
I see the point you're trying to make, but I don't think it constitutes a counterargument to the proposed policy. If you were an abolitionist back when slavery was commonly accepted, it would've been a dumb idea to, say, yell out your plans to free slaves in the Towne Square. If you were part of an organization that thought about interesting ideas, including the possibility that you should get together and free some slaves sometime, that organization would be justified in telling its members not to do something as dumb as yelling out plans to free slaves in the Towne Square. And if Ye Olde Eliezere Yudkowskie saw you yelling out your plans to free slaves in the Towne Square, he would be justified in clamping his hand over your mouth.
It wouldn't be dumb to argue for the moral acceptability of freeing slaves (even by force) however.
It wouldn't be dumb for an organization to decide that society at large might be willing to listen to them argue for the moral acceptability of freeing slaves, even by force. It would be dumb for an organization to allow its individual members to make this decision independently because that substantially increases the probability that someone gets the timing wrong.
Beware selective application of your standards. If the members can't be trusted with one type of independent decision, why can they be trusted with other sorts of decisions?
This seems to be a fully general argument against Devil's Advocacy. Was it meant as such?
I wouldn't have posted the following except that I share Esar's concerns about representativeness:
I think this is a good idea. I think using the word "censorship" primes a large segment of the LW population in an unproductive direction. I think various people are interpreting "may be deleted" to mean "must be deleted." I think various people are blithely ignoring this part of the OP (emphasis added):
In particular, I think people are underestimating how important it is for LW not to look too bad, and also underestimating how bad LW could be made to look by discussions of the type under consideration.
Finally, I strongly agree that
Beware Evaporative Cooling of Group Beliefs.
I am for the policy, although heavy-heartedly. I feel that one of the pillars of Rationality is that there should be no Stop Signs and this policy might produce some. On the other hand, I think PR is important, and that we must be aware of evaporative cooling that might happen if it is not applied.
On a neutral note - We aren't enemies here. We all have very similar utility functions, with slightly different weights on certain terminal values (PR) - which is understandable as some of us have more or less to lose from LW's PR.
To convince Eliezer, you must show him a model of the world, given the policy, that causes ill effects he finds worse than the positive effects of enacting the policy. If you just tell him "Your policy is flawed due to ambiguity in description" or "You have, in the past, said things that are not consistent with this policy", I place low probability on him significantly changing his mind. You should take this as a sign that you are Straw-manning Eliezer, when you should be Steel-manning him.
Also, how about some creative solutions? A special post tag, mandatory for posts that condone hypothetical violence, which causes them to be seen only by registered users and displays a disclaimer above the post warning about its nature? That should mitigate 99% of the PR effect. Or, your better, more creative idea. Go.
I currently find myself tempted to write a new post for Discussion, on the general topic of "From a Bayesian/rationalist/winningest perspective, if there is a more-than-minuscule threat of political violence in your area, how should you go about figuring out the best course of action? What criteria should you apply? How do you figure out which group(s), if any, to try to support? How do you determine what the risk of political violence actually is? When the law says rebellion is illegal, that preparing to rebel is illegal, that discussing rebellion even in theory is illegal, when should you obey the law, and when shouldn't you? Which lessons from HPMoR might apply? What reference books on war, game-theory, and history are good to have read beforehand? In the extreme case... where do you draw the line between choosing to pull a trigger, or not?".
If it was simply a bad idea to have such a post, then I'd expect to take a karma hit from the downvotes, and take it as a lesson learned. However, I also find myself unsure whether or not such a post would pass the muster of the new deletionist criteria, and so I'm not sure whether or not I would be able to gather that idea - let alone whatever good ideas might result if such a thread was, in fact, something that interested other LessWrongers.
This whole thread-idea seems to fall squarely in the middle, between the approved 'hypothetical violence near trolleys' and 'discussion of violence against real groups'. Would anyone be interested in helping me put together a version of such a post to generate the most possible constructive discourse? Or, perhaps, would somebody like to clarify that no version of such a post would pass muster under the new policy?
What if some violence helps reduce further violence? For example corporal punishment could reduce crime (think of Singapore). Note that I am not saying that this is necessarily true, just that we should not a priori ban all discussion on topics like this.
The proposal is to ban such discussions not because violence is bad, but because discussing violence is bad PR. I am pretty sure advocacy of corporal punishment belongs to this category too.
(I seriously should've posted this question back when the thread only had 3 comments.)
I have no qualms about the policy itself, it's only commonsensical to me; my question is only tangentially related:
Do you believe "censorship" to be a connotatively better term than "moderation"?