
New censorship: against hypothetical violence against identifiable people

22 Post author: Eliezer_Yudkowsky 23 December 2012 09:00PM

New proposed censorship policy:

Any post or comment which advocates or 'asks about' violence against sufficiently identifiable real people or groups (as opposed to aliens or hypothetical people on trolley tracks) may be deleted, along with replies that also contain the info necessary to visualize violence against real people.

Reason: Talking about such violence makes that violence more probable, and makes LW look bad; and numerous message boards across the Earth censor discussion of various subtypes of proposed criminal activity without anything bad happening to them.

More generally: Posts or comments advocating or 'asking about' violation of laws that are actually enforced against middle-class people (e.g., kidnapping, not anti-marijuana laws) may at the admins' option be censored on the grounds that it makes LW look bad and that anyone talking about a proposed crime on the Internet fails forever as a criminal (i.e., even if a proposed conspiratorial crime were in fact good, there would still be net negative expected utility from talking about it on the Internet; if it's a bad idea, promoting it conceptually by discussing it is also a bad idea; therefore and in full generality this is a low-value form of discussion).  

This is not a poll, but I am asking in advance if anyone has non-obvious consequences they want to point out or policy considerations they would like to raise. In other words, the form of this discussion is not 'Do you like this?' - you probably have a different cost function from people who are held responsible for how LW looks as a whole - but rather, 'Are there any predictable consequences we didn't think of that you would like to point out, and possibly bet on with us if there's a good way to settle the bet?'

Yes, a post of this type was just recently made.  I will not link to it, since this censorship policy implies that it will shortly be deleted, and reproducing the info necessary to say who was hypothetically targeted and why would be against the policy.

Comments (457)

Comment author: drethelin 23 December 2012 09:08:40PM 19 points [-]

Got it. Posts discussing our plans for crimes will henceforth be kept to the secret boards only.

Comment author: Kawoomba 23 December 2012 09:13:16PM 2 points [-]

Back in line with you!

Comment author: David_Gerard 23 December 2012 10:04:15PM 5 points [-]

And the mailing lists, apparently.

Comment author: Eliezer_Yudkowsky 24 December 2012 02:24:07AM 5 points [-]

The Surgeon General recommends that you not discuss criminal activities, with respect to laws actually enforced, on any mailing list containing more than 5 people.

Comment author: AndrewH 24 December 2012 02:57:44AM *  1 point [-]

Intriguing; is this an actual paraphrase of a US "The Surgeon General"? I can imagine it is something someone in high office might say.

Comment author: Alicorn 24 December 2012 03:04:12AM 4 points [-]

We have a The Surgeon General, but he recommends things about smoking and whatnot; I'm pretty sure he doesn't issue warnings about mailing lists.

Comment author: katydee 24 December 2012 11:37:32AM 8 points [-]

The Surgeon General is someone who issues national health recommendations. The implication of Eliezer's post is that discussing criminal activity may be hazardous to your health.

Comment author: timtyler 24 December 2012 02:52:42AM 6 points [-]

I believe the traditional structure is a clandestine cell system.

Comment author: Locke 23 December 2012 09:09:53PM 11 points [-]

Would this censor posts about robbing banks and then donating the proceeds to charity?

Comment author: Eliezer_Yudkowsky 23 December 2012 09:43:43PM 20 points [-]

Depends on exactly how it was written, I think. "The paradigmatic criticism of utilitarianism has always been that we shouldn't rob banks and donate the proceeds to charity" - sure, that's not actually going to conceptually promote the crime and thereby make it more probable, or make LW look bad. "There's this bank in Missouri that looks really easy to rob" - no.

Comment author: [deleted] 23 December 2012 11:46:26PM 12 points [-]

What about pro-bank-robbery arguments in general?

Comment author: Decius 24 December 2012 05:15:44AM 3 points [-]

What about discussions of flaws in security systems generally? E.g. "Banks often have this specific flaw, which can be mitigated in this cost-ineffective manner."

Comment author: [deleted] 24 December 2012 10:16:36AM *  15 points [-]

Uncharitable reading: As long as taking utilitarianism seriously doesn't lead to arguments to violate formalized 21st-century Western norms too much, it is ok to argue for taking utilitarianism seriously. You are, however, free to debunk how it supposedly leads to things considered unacceptable on the Berkeley campus in 2012, since it obviously can't.

Comment deleted 24 December 2012 12:13:33PM [-]
Comment author: Alicorn 23 December 2012 10:35:54PM *  11 points [-]
Comment author: Larks 23 December 2012 10:58:58PM 3 points [-]

Note to all: Alicorn is referring to something else. Robbing banks may be extreme but it is not altruism.

Comment author: Alicorn 23 December 2012 11:23:19PM 1 point [-]

Edited in a link.

Comment author: wedrifid 24 December 2012 12:59:11AM 1 point [-]

Or Really Extreme Altruism?

This is an example of why I support this kind of censorship. Lesswrong just isn't capable of thinking about such things in a sane way anyhow.

The top comment in that thread demonstrates AnnaSalamon being either completely and utterly mindkilled or blatantly lying about simple epistemic facts for the purpose of public relations. I don't want to see the (now) Executive Director of CFAR doing either of those things. And most others are similarly mindkilled, meaning that I just don't expect any useful or sane discussion to occur on sensitive subjects like this.

(ie. I consider this censorship about as intrusive as forbidding peanuts to someone with a peanut allergy.)

Comment author: jbeshir 24 December 2012 01:47:57AM *  3 points [-]

I think that a discussion in which only most people are mindkilled can still be a fairly productive one on these questions in the LW format. LW is actually one of the few places where you would get some people who aren't mindkilled, so I think it is actually good that it achieves this much.

They seem fairly ancillary to LW as a place for improving instrumental or epistemic rationality, though. If you think testing the extreme cases of your models of your own decision-making is likely to result in practical improvements in your thinking, or just want to test yourself on difficult questions, these things seem like they might be a bit helpful, but I'm comfortable with them being censored as a side effect of a policy with useful effects.

Comment author: wedrifid 24 December 2012 01:58:56AM 2 points [-]

I think that a discussion in which only most people are mindkilled can still be a fairly productive one on these questions in the LW format. LW is actually one of the few places where you would get some people who aren't mindkilled, so I think it is actually good that it achieves this much.

Unfortunately the non mindkilled people would also have to be comfortable simply ignoring all the mindkilled people so that they can talk among themselves and build the conversation toward improved understanding. That isn't something I see often. More often the efforts of the sane people are squandered trying to beat back the tide of crazy.

Comment author: fubarobfusco 24 December 2012 09:26:07AM 11 points [-]

The top comment in that thread demonstrates AnnaSalamon being either completely and utterly mindkilled or blatantly lying

This seems an excessively hostile and presumptuous way to state that you disagree with Anna's conclusion.

Comment author: wedrifid 24 December 2012 10:09:13AM *  2 points [-]

This seems an excessively hostile and presumptuous way to state that you disagree with Anna's conclusion.

No it isn't; the meaning of my words is clear, and they quite simply do not mean what you say I am trying to say.

The disagreement with the claims of the linked comment is obviously implied as a premise somewhere in the background but the reason I support this policy really is because it produces mindkilled responses and near-obligatory dishonesty. I don't want to see bullshit on lesswrong. The things Eliezer plans to censor consistently encourage people to speak bullshit. Therefore, I support the censorship. Not complicated.

You may claim that it is rude or otherwise deprecated-by-fubarobfusco, but if you say that my point is different from both what I intended and what the words could possibly mean, then you're wrong.

Comment author: Eugine_Nier 24 December 2012 09:32:50AM 7 points [-]

The top comment in that thread demonstrates AnnaSalamon being either completely and utterly mindkilled or blatantly lying about simple epistemic facts for the purpose of public relations. I don't want to see the (now) Executive Director of CFAR doing either of those things.

Yes and if the CFAR Executive Director is either mindkilled or willing to lie for PR, I want to know about it.

Comment author: Eliezer_Yudkowsky 24 December 2012 02:20:44AM -2 points [-]

This does indeed seem like something that's covered by the new policy. It's illegal. In the alternative where it's a bad idea, talking about it has net negative expected utility. If it were for some reason a good idea, it would still be incredibly stupid to talk about it on the &^%$ing Internet. I shall mark it for deletion if the policy passes.

Comment author: Tenoke 24 December 2012 02:24:53AM 11 points [-]

So you don't see value in discussions like these? Thought experiments that give some insight into morality? Is the (probably negligible) effect on LW's reputation from those posts really greater than the benefit of the discussion?

Comment author: Eliezer_Yudkowsky 24 December 2012 02:34:03AM 0 points [-]

I think that post had a net negative effect on reality, and that diminishing the number of people who read it again is a net positive. No, the conversation isn't worth it.

Comment author: Tenoke 24 December 2012 02:40:01AM 5 points [-]

Oh come on, you are invoking your basilisk-related logic here? How does it have a negative effect? Please don't tell me that it is because you think there will be more suicides in the world if the number of readers of the post is larger. And further, please don't tell me that, if you thought that, you think this will lead to a net negative effect for the world. But please do answer me.

Comment author: Eliezer_Yudkowsky 24 December 2012 02:44:30AM 8 points [-]

It has a net negative effect because people then go around saying (this post will be deleted after policy implementation), "Oh, look, LW is encouraging people to commit suicide and donate the money to them." That is what actually happens. It is the only real significant consequence.

Now it's true that, in general, any particular post may have only a small effect in this direction, because, for example, idiots repeatedly make up crap about how SIAI's ideas should encourage violence against AI researchers, even though none of us have ever raised it even as a hypothetical, and so themselves become the ones who conceptually promote violence. But it would be nice to have a nice clear policy in place we can point to and say, "An issue like this would not be discussable on LW because we think that talking about violence against individuals can conceptually promote such violence, even in the form of hypotheticals, and that any such individuals would justly have a right to complain. We of course assume that you will continue to discuss violence against AI researchers on your own blog, since you care more about making us look bad and posturing your concern, than about the fact that you, yourself, are the one who has actually invented, introduced, talked about, and given publicity to, the idea of violence against AI researchers. But everyone else should be advised that any such 'hypothetical' would have been deleted from LW in accordance with our anti-discussing-hypothetical-violence-against-identifiable-actual-people policy."

Comment author: CronoDAS 24 December 2012 03:01:58AM 3 points [-]

I wasn't thinking of SIAI as the charity.

Comment author: Eliezer_Yudkowsky 24 December 2012 03:09:50AM 7 points [-]

This intention of yours is not transparent. Plus, they don't care.

Comment author: AdeleneDawner 24 December 2012 12:16:48PM 7 points [-]

Regardless of your intentions, I know of one person who somewhat seriously considered that course of action as a result of the post in question. (The individual in question has been talked out of it in the short term, by way of 'the negative publicity would hurt more than the money would help', but my impression is that the chance that they'll try something like that has still increased, probably permanently.)

Comment author: kodos96 24 December 2012 04:07:19AM 15 points [-]

It has a net negative effect because people then go around saying (this post will be deleted after policy implementation), "Oh, look, LW is encouraging people to commit suicide and donate the money to them." That is what actually happens. It is the only real significant consequence.

This is where the rubber meets the road as far as whether we really mean it when we say "that which can be destroyed by the truth, should be." If we accept this argument, then by "mere addition" of censorship rules, you eventually end up renaming SIAI "The Institute for Puppies and Unicorn Farts", and completely lying to the public about what it is you're actually about, in order to benefit PR.

Comment author: Eugine_Nier 24 December 2012 07:47:28AM 8 points [-]

"Oh, look, LW is encouraging people to commit suicide and donate the money to them."

Well, are you?

idiots repeatedly make up crap about how SIAI's ideas should encourage violence against AI researchers, even though none of us have ever raised it even as a hypothetical,

True, but you have said things that seem to imply it. Seriously, you can't go around saying "X" and "X->Y" and then object when people start attributing position "Y" to you.

Comment author: Tenoke 24 December 2012 11:01:34AM 3 points [-]

I thought I posted this comment last night, but it seems like I didn't (and now I have to pay karma to post it). Aren't we just encouraging belief bias this way? (Which has an additional negative utility on top of the loss of positive utility from the discussion, and the loss of utility because people see us as a heavily-censored community and form another type of negative opinion of us.)

Comment author: CronoDAS 24 December 2012 02:43:36AM *  7 points [-]

As far as I can tell, Really Extreme Altruism actually is legal.

Comment author: saturn 24 December 2012 03:50:10AM 4 points [-]

In the alternative where it's a bad idea, talking about it has net negative expected utility.

What about the possibility that someone who thought it was a good idea would change their mind after talking about it?

Comment author: Eliezer_Yudkowsky 24 December 2012 04:07:06AM 3 points [-]

This seems an order of magnitude less likely than somebody who wouldn't naturally think of the dumb idea seeing the dumb idea.

Comment author: Decius 24 December 2012 05:00:22AM 6 points [-]

Therefore censor uncommon bad ideas generally?

Comment author: [deleted] 23 December 2012 09:56:51PM *  16 points [-]

Yes, a post of this type was just recently made.

Well then.

I've heard that firemen respond to everything not because they actually have to, but because it keeps the drill sharp, so to speak. The same idea may apply to mod action... (in other words, MOAR "POINTLESS" CENSORSHIP)

More seriously, does this policy apply to things like gwern's hypothetical bombing of intel?

Comment author: RomeoStevens 23 December 2012 10:26:36PM 0 points [-]

gwern specifically argued that small scale terrorism would be ineffective.

Comment author: TheOtherDave 23 December 2012 11:25:43PM 6 points [-]

I suppose the next question is whether it would apply to things like comments in response to gwern's hypothetical bombing of intel arguing that his conclusion is incorrect.

Given the stated principles governing the new censorship policy, I think the answer would be "yes, of course."

Comment author: [deleted] 23 December 2012 11:50:04PM *  4 points [-]

Let's not delete posts for disagreeing on uncomfortable empirical questions.

Comment author: TheOtherDave 23 December 2012 11:53:41PM 2 points [-]

I don't think the policy EY is proposing involves banning people, just deleting the stuff we write that violates policy.

Comment author: [deleted] 23 December 2012 11:54:42PM 2 points [-]

fixed, thanks

Comment author: printing-spoon 24 December 2012 01:27:40AM 13 points [-]

Implying that whether his post should be censored hinges on the conclusion reached and not just the topic?

Comment author: RomeoStevens 24 December 2012 01:28:57AM 0 points [-]

discussion of violence by state actors is quite a bit different than discussion of individual violence.

Comment author: Jayson_Virissimo 24 December 2012 04:08:20AM 5 points [-]

discussion of violence by state actors is quite a bit different than discussion of individual violence.

Sure, but why is that a difference that makes a difference?

Comment author: timtyler 24 December 2012 02:38:50AM 0 points [-]

More seriously, does this policy apply to things like gwern's hypothetical bombing of intel?

It looks as though that was on gwern.net - outside the zone.

Comment author: [deleted] 24 December 2012 02:40:47AM 5 points [-]

it was in discussion too.

Comment author: Epiphany 24 December 2012 09:08:33AM *  -2 points [-]

If you're talking about his Slowing Moore's Law: Why You Might Want To and How You Would Do It, it's not there anymore.

I didn't thoroughly read the new version on his site, so there's a chance that it still links to an article that will be confused for a pro-terrorism piece (that's the problem the previous version had) or that sounds like it's advocating the idea of governments attacking chip fabs.

Comment author: MixedNuts 23 December 2012 10:30:17PM 9 points [-]

Your generalization is averaging over clairvoyance. The whole purpose of discussing such plans is to reduce uncertainty over their utility; you haven't proven that the utility gain of a plan turning out to be good must be less than the cost of discussing it in public.

Does the policy apply to violence against oneself? (I'm guessing not, since it's not illegal.) Talking about it is usually believed to reduce risk.

There's a scarcity effect whereby people believe pro-violence arguments to be stronger, since if they weren't convincing they wouldn't be censored. Not sure how strong it is, likely depends on whether people drop the topic or say things like "I'm not allowed to give more detail, wink wink nudge nudge".

It's a common policy so there don't seem to be any slippery slope problems.

We're losing Graham cred by being unwilling to discuss things that make us look bad. Probably a good thing, we're getting more mainstream.

Comment author: RomeoStevens 24 December 2012 12:13:54AM 1 point [-]

since when is violence against oneself or even discussion of violence against oneself fully legal?

Comment author: wedrifid 24 December 2012 12:20:31AM 3 points [-]

since when is violence against oneself or even discussion of violence against oneself fully legal?

In most times and places throughout history, including all countries whose legal systems I am familiar with.

Comment author: Caspian 24 December 2012 12:34:50AM *  4 points [-]

Suicide in particular is often illegal.

ETA: possibly this statement of mine was outdated.

Comment author: wedrifid 24 December 2012 06:51:06AM 1 point [-]

Suicide in particular is often illegal.

Either you or some of the people reading your comment seem to have been misled into concluding that a thing being both illegal and violence against oneself can be generalised to the conclusion that violence against oneself, or even discussion of violence against oneself, is illegal. That seems to be a rather blatant confusion.

Comment author: kodos96 24 December 2012 07:09:45AM *  0 points [-]

I'm not sure what RomeoStevens meant about discussion of violence against oneself being illegal, but aside from that aspect, his point is entirely valid. You seem to be suggesting that we're generalising from "suicide is illegal" to "any form of violence against oneself is illegal". We're not. We're simply noting that suicide is one type of violence against oneself, and it's illegal.

Your statement expands to "In most times and places throughout history, including all countries whose legal systems I am familiar with, violence against oneself is fully legal." Unless you're familiar only with very odd legal systems, that seems to be a rather blatant confusion.

Comment author: wedrifid 24 December 2012 07:23:56AM -2 points [-]

but aside from that aspect, his point is entirely valid

No. MixedNuts' point. RomeoStevens' reply was confused and mistaken. Unfortunately Caspian has misled you about the context.

We're simply noting that suicide is one type of violence against onself, and it's illegal.

That was my original impression, and why I refrained from downvoting him. Until, that is, it became apparent that he and some readers (evidently yourself included) believe that his statement of trivia in some way undermines the point made by MixedNuts and supported by myself, or supports RomeoStevens' ungrammatical rhetorical interjection.

Comment author: kodos96 24 December 2012 07:40:43AM *  -1 points [-]

I had read the entire context, and re-read it just now to make sure I hadn't missed anything. You're correct that RomeoStevens' reply doesn't really undermine MixedNuts' point, and is therefore "trivia". But it's nonetheless correct trivia (modulo the above-mentioned caveat) and your refutation of it is therefore quite confusing.

But it's pointless to continue arguing this trivial point, as it's irrelevant to the thread topic, except in the meta sense that these kinds of pointless semantic debates will be the inevitable result of implementation of this extremely ill-advised and poorly thought-through censorship policy.

Comment author: MixedNuts 24 December 2012 09:20:33AM 1 point [-]

What are you thinking of? Non-assisted suicide that doesn't put third parties in danger is legal most places (exceptions: India, Singapore, North Korea, Virginia). Self-injury is legal in the US at least. Discussion of suicide is allowed as long as it's even slightly more hypothetical than "I intend to kill myself in the near future". Discussion of self-injury is AFAIK completely legal (in the US?).

Comment author: RomeoStevens 24 December 2012 11:15:15AM 1 point [-]

My understanding has always been that self-harm, or plausible discussion of self-harm, in the US leads to a loss of autonomy, in that you can be diagnosed with a mental illness and lose access to things like voting, driving, firearms, etc. (depending on the diagnosis).

Comment author: MixedNuts 24 December 2012 12:45:15PM 0 points [-]

Trigger warning for, obviously, self-harm.

There's a huge chasm between a mental illness diagnosis (which self-harm is very likely to cause, especially in the US where you need diagnosis other than "ain't quite right - not otherwise specified" for insurance) and actual repercussions. Members of online support groups report that their psychiatrists either treat self-injury like any other symptom (asking about it, describing decreases as good but not praiseworthy) or recommend they stop but do not enforce it. If it gets life-threatening it's treated like suicide, but that almost never comes up.

Comment author: pleeppleep 23 December 2012 10:42:32PM 17 points [-]

Deleting comments for being perceived as dangerous might get in the way of conversation. I think that if we're worried about how the site looks to outsiders then it's probably only necessary to worry about actual posts. Nobody expects comments to be appropriate on the internet, so it probably doesn't hurt us that much.

Comment author: [deleted] 23 December 2012 10:56:35PM 0 points [-]

It was a top-level post (though one in Discussion) he was thinking about.

Comment author: pleeppleep 23 December 2012 11:34:04PM 6 points [-]

I know, but he said that the suggested policy change would include comments.

Comment author: [deleted] 24 December 2012 10:28:48AM 10 points [-]

That's the usual Yudkowskian overreaction, which he will likely get tired of implementing within a couple of years or less.

Comment author: pleeppleep 24 December 2012 02:35:05PM *  5 points [-]

.......

But the site's only been around for a couple of years in the first place.

Comment author: shminux 23 December 2012 10:48:40PM 3 points [-]

Would it censor a discussion of, say, compelling an AI researcher by all means necessary to withhold their research from, say, the military?

Comment author: Eliezer_Yudkowsky 24 December 2012 02:25:27AM 8 points [-]

Yes. This seems like yet another example of "First of all, it's a bad fucking idea, second of all, talking about it makes everyone else look bad, and third of all, if hypothetically it was actually a good idea you'd still be a fucking juvenile idiot for blathering about it on the public Internet." What part of "You fail conspiracies forever" is so hard for people to understand? Talk like this serves no purpose except to serve as fodder for people who claim that <rationalist idea X> leads to violence and is therefore false, and your comment shall be duly deleted once this policy is put into place.

Comment author: kodos96 24 December 2012 04:21:10AM 1 point [-]

I don't see how this comment even fits the proposed policy, except under a motivatedly-broad reading of "by all means necessary"

Comment author: CarlShulman 24 December 2012 04:46:56AM *  4 points [-]

Wikipedia thinks otherwise:

By any means necessary is a translation of a phrase coined by the French intellectual Jean-Paul Sartre in his play Dirty Hands. It entered the popular culture through a speech given by Malcolm X in the last year of his life. It is generally considered to leave open all available tactics for the desired ends, including violence; however, the “necessary” qualifier adds a caveat—if violence is not necessary, then presumably, it should not be used.

Comment author: kodos96 24 December 2012 04:51:41AM -2 points [-]

I was unaware of that connotation. But I don't think it changes the equation. There's a million different ways to interpret "by all means necessary", the vast majority of which would not be construed to include violence. If this were a forum in which Satre/Malcolm X references were the norm, then that would be different. But it isn't.

Comment author: Nick_Tarleton 24 December 2012 05:12:23AM 19 points [-]

I and the one person currently in the room with me immediately took "by all means necessary" to suggest violence. I think you're in a minority in how you interpret it.

Comment author: kodos96 24 December 2012 05:19:18AM 14 points [-]

OK, I'll update on that.

Comment author: ciphergoth 24 December 2012 12:48:14PM 2 points [-]

Just checked with my houseguest; his interpretation is also "a call to violence".

Comment author: Larks 23 December 2012 11:01:04PM 4 points [-]

Does advocating gun control, or increased taxes, count? They would count as violence if private actors did them, and talking about them makes them more likely (by states). Is the public-private distinction the important thing - would advocating/talking about state-sanctioned genocide be ok?

Comment author: ikrase 24 December 2012 01:04:55AM 4 points [-]

While an interesting question, I think that the answer to that is reasonably obvious.

Comment author: Eugine_Nier 24 December 2012 01:54:53AM *  3 points [-]

What about capital punishment and/or corporal punishment?

Comment author: kodos96 24 December 2012 04:18:48AM *  1 point [-]

Does advocating gun control, or increased taxes, count? They would count as violence if private actors did them

In the event of gun control, it would in fact be illegal even if done by a state actor.

Edit: assuming USA of course.

Comment author: Luke_A_Somers 24 December 2012 05:23:45AM *  -1 points [-]

To call either gun control or taxation violence is stretching matters beyond reasonable limits. The only sense in which they are is the sense in which any public policy is - that it is backed by the government. If anything to do with the government has to be considered as 'about violence'... bah.

Comment author: kodos96 24 December 2012 05:30:23AM -1 points [-]

I don't think it's silly, and based on the LW survey results, neither do approximately 30.3% of LW users.

But aside from that, OP said "More generally: Posts or comments advocating or 'asking about' violation of laws that are actually enforced against middle-class people". Gun control (though not taxation) clearly falls under this illegality clause, without resort to classifying it as "violence".

Comment author: Luke_A_Somers 24 December 2012 06:17:54AM 4 points [-]

'Libertarian' does not mean 'believes all government action is violence'.

Comment author: jsalvatier 24 December 2012 12:25:54PM *  3 points [-]

I identify as libertarian and have been objectivist, but calling taxation theft (and other similar claims) is almost always sneaking in connotations.

Comment author: CronoDAS 23 December 2012 11:15:21PM 20 points [-]

My post was indeed inappropriate. I have used the "Delete" function on it.

Comment author: jimrandomh 23 December 2012 11:27:14PM 8 points [-]

Posts advocating or "asking about" violence against identifiable real people or groups should be deleted at the admins' discretion:

Agree Disagree

Posts advocating or "asking about" violation of laws that are actually enforced against middle-class people, other than the above, should be deleted at the admins' discretion:

Agree Disagree


Comment author: jimrandomh 23 December 2012 11:35:02PM 3 points [-]

This is not a poll, but

...but it'd be nice to have a poll to point at later, to show consensus, and I'd be surprised if people disagreed.

Comment author: gjm 24 December 2012 02:08:22AM 8 points [-]

This poll, like EY's original question, conflates two things that don't obviously belong together. (1) Advocating certain kinds of act. (2) "Asking about" the same kind of act.

I appreciate that in some cases "asking about" might just be lightly-disguised advocacy, or apparent advocacy might just be a particularly vivid way of asking a question. I'm guessing that the quotes around "asking about" are intended to indicate something like the first of these. But what, exactly?

Comment author: jbeshir 24 December 2012 02:51:36AM 3 points [-]

I think in this context, "asking about" might include raising for neutral discussion without drawing moral judgements.

The connection I see between them is that if someone starts neutral discussion about a possible action, actions which would reasonably be classified as advocacy have to be permitted if the discussion is going to progress smoothly. We can't discuss whether some action is good or bad without letting people put forward arguments that it is good.

Comment author: gjm 24 December 2012 03:15:34AM 3 points [-]

There's certainly a connection. I'm not convinced the connection is so intimate that if censoring one is a good idea then so is censoring the other.

Comment author: CronoDAS 23 December 2012 11:42:17PM 16 points [-]

The "interesting" thing about violence is that it's one of the few ways that a relatively small group of (politically) powerless people with no significant support can cause a big change in the world. However, the change rarely turns out the way the small group would hope; most attempts at political violence by individuals or small groups fail miserably at achieving the group's aims.

Comment author: BrassLion 24 December 2012 06:33:53AM 5 points [-]

Non-violent action has a reasonable track record, considering how rarely it's been used in an organized way by the oppressed. The track record is particularly good in the first world, where people care about appearances.

Comment author: [deleted] 23 December 2012 11:51:07PM *  30 points [-]

Would my pro-piracy arguments be covered by this? What about my pro-coup d'état ones?

Comment author: [deleted] 23 December 2012 11:58:06PM 19 points [-]

Possibly. I hope not. I'm all for mod action, but not at the expense of political diversity.

Comment author: Jabberslythe 24 December 2012 06:12:35AM 5 points [-]

I think piracy cases are pretty similar to marijuana cases (enforcement is even less likely, actually), which he said won't be banned.

Comment author: Eugine_Nier 24 December 2012 07:54:07AM 8 points [-]

I don't think Konkvistador was talking about software piracy.

Comment author: [deleted] 24 December 2012 10:17:37AM 4 points [-]

You mean copyright piracy or sea piracy?

Comment author: [deleted] 24 December 2012 10:20:09AM *  23 points [-]

Sea piracy obviously. What kind of a person do you think I am?!

Comment author: Qiaochu_Yuan 24 December 2012 11:12:24AM *  7 points [-]

As someone unfamiliar with your views, I can't tell whether this is sarcasm or not, especially because of the interrobang. Can you clarify? Is there anywhere on the internet where your views are concisely summarized? (Is it in any way associated with your real name?)

Comment author: [deleted] 24 December 2012 12:27:37PM *  11 points [-]

The levels can be hard to disambiguate so I sympathize. I'll write my opinions out unironically. You can find the full arguments in my comment history (I can dig links to that up too).

  • I'm assuming you are familiar with the arguments for efficient charity and optimal employment? If not, I can provide citations and links. I don't think sea piracy as a means of funding efficient charity is obviously worse from a utilitarian perspective than combining efficient charity with many legal professions. It may or may not be justified; I'm leaning towards it being justified on the same utilitarian grounds as government taxation can be. If not, cheating on taxes to fund efficient charity is a pretty good idea. Some people's comparative advantage will lie in sea piracy.

  • Violating copyright on software or media products in the modern West is in general not a bad thing. But indiscriminately pirating everything may be bad.

In the grandfather comment I was aiming for ambiguity and humour.

Comment author: wedrifid 24 December 2012 12:23:26AM -2 points [-]

may at the admins' option be censored on the grounds that ... anyone talking about a proposed crime on the Internet fails forever as a criminal

I like it.

Comment author: Tenoke 24 December 2012 12:25:22AM 10 points [-]

So I finally downvoted Yudkowsky.

Comment author: NancyLebovitz 24 December 2012 01:11:15AM 4 points [-]

What was your line of thought?

Comment author: Tenoke 24 December 2012 01:19:44AM *  35 points [-]

That censorship because of what people think of LessWrong is ridiculous. That the negative effect on the reputation is probably significantly less than what is assumed. And that if EY thought censorship of content for the sake of LW's image was in order, he should logically have concluded that omitting fetishes from his public OKCupid profile (for the record, I've defended the view that this is his right), among other things, was in order as well. And some other thoughts of this kind.

Comment author: wedrifid 24 December 2012 12:26:01AM 2 points [-]

Yes, a post of this type was just recently made. I will not link to it, since this censorship policy implies that it will shortly be deleted, and reproducing the info necessary to say who was hypothetically targeted and why would be against the policy.

Someone please send me a link via PM? Or perhaps the author could PM me? Not because the censorship of that class bothers me, but because talking to wedrifid is not posting things on the internet; I'm curious, and there are negligible consequences for talking to me about interesting hypothetical questions.

(Disregard the above if the post or comment was boring.)

Comment author: [deleted] 24 December 2012 12:34:57AM *  23 points [-]

tl;dr: tobacco kills more people than guns and cars combined. Should we <insert violence here>?

PS: fuck the police

Comment author: wedrifid 24 December 2012 12:47:37AM 8 points [-]

tl;dr: tobacco kills more people than guns and cars combined. Should we <insert violence here>?

PS: fuck the police

(I laughed). Thanks nyan. (I hope this kind of satirical summary is considered acceptable.)

Comment author: CronoDAS 24 December 2012 02:45:39AM 7 points [-]

As the author of the offending Discussion post in question, I'd say it's an adequate summary.

Comment author: kodos96 24 December 2012 04:27:56AM *  8 points [-]

I hope this kind of satirical summary is considered acceptable

This kind of uncertainty about what is and is not acceptable is perhaps the primary reason why such censorship policies are evil.

Comment author: [deleted] 24 December 2012 12:27:38AM *  40 points [-]

I'm starting to feel strongly uncomfortable about this, but I'm unsure whether that's reasonable. Here are some arguments ITT that concern me:

Does advocating gun control, or increased taxes, count? They would count as violence if private actors did them, and talking about them makes them more likely (by states).

Violence is a very slippery concept. Perhaps it is not the best one to base mod rules on. (more at end)

We're losing Graham cred by being unwilling to discuss things that make us look bad.

This one is really disturbing to me. I don't like all the self-conscious talk about how we are perceived outside. Maybe we need to fork LW to accomplish it, but I want to be able to discuss what's true and good without worrying about getting moderated. My post-rationality opinions have already diverged so far from the mainstream that I feel I can't talk about my interests in polite society. I don't want this here too.

If I see any mod action that could be destroyed by the truth, I will have to conclude that LW management is borked and needs to be forked. Until then I will put my trust in the authorities here.

Would my pro-piracy arguments be covered by this? What about my pro-coup d'etat ones?

Would it censor a discussion of, say, compelling an AI researcher by all means necessary to withhold their research from, say, the military?

The whole purpose of discussing such plans is to reduce uncertainty over their utility; you haven't proven that the utility gain of a plan turning out to be good must be less than the cost of discussing it in public.

Yeah seriously. What if violence is the right thing to do? (EDIT: Derp. Don't discuss it in public, (except for stuff like Konkvistador's piracy and reaction advocacy, which are supposed to be public))

My post was indeed inappropriate. I have used the "Delete" function on it.

This is important. If the poster in question agrees when it is pointed out that their post is stupid, go ahead and delete it. But if they disagree in some way that isn't simple defiance, please take a long look at why.

In general, two conclusions:

I support censorship, but only if it is based on the unaccountable personal opinion of a human. Anything else is too prone to lost purposes. If a serious rationalist (e.g. EY) seriously thinks about it and decides that some post has negative utility, I support its deletion. If some unintelligent rule like "no hypothetical violence" decides that a post is no good, why should I agree? Simple rules do not capture all the subtlety of our values; they cannot be treated as Friendly.

And, as usual, that which can be destroyed by the truth should be. If moderator actions start serving some force other than truth and good, LW, or at least the subset dedicated to truth and rationality, should be forked.

Comment author: AlexMennen 24 December 2012 01:06:43AM 17 points [-]

I support censorship, but only if it is based on the unaccountable personal opinion of a human. Anything else is too prone to lost purposes. If a serious rationalist (e.g. EY) seriously thinks about it and decides that some post has negative utility, I support its deletion. If some unintelligent rule like "no hypothetical violence" decides that a post is no good, why should I agree? Simple rules do not capture all the subtlety of our values; they cannot be treated as Friendly.

It makes sense to have mod discretion, but it also makes sense to have a list of rules that the mods can point to so that people whose posts get censored are less likely to feel that they are being personally targeted.

Comment author: [deleted] 24 December 2012 01:23:36AM 10 points [-]

Yes. Explanatory rules are good. Letting the rules drive is not.

Comment author: Eliezer_Yudkowsky 24 December 2012 02:17:54AM 16 points [-]

These are explanations, not rules, check.

Comment author: Luke_A_Somers 24 December 2012 05:10:10AM 2 points [-]

Hence "may at the admins' option be censored"

Comment author: Eliezer_Yudkowsky 24 December 2012 02:17:18AM 4 points [-]

Yeah seriously. What if violence is the right thing to do?

Then discussing it on the public Internet is the wrong thing to do. I can't compare it to anything but juvenile male locker-room boasting.

Comment author: [deleted] 24 December 2012 02:33:08AM 0 points [-]

Good point.

Comment author: DataPacRat 24 December 2012 02:45:23AM 12 points [-]

A friend and I once put together a short comic trying to analyze democracy from an unusual perspective, including presenting the idea that an underlying threat of violent popular uprising should the system be corrupted helps keep it running well. This was closely related to a shorter comic presenting some ideas on rationality. The project led to some interesting discussions with interesting people, which helped me figure out some ideas I hadn't previously considered, and I consider it to have been worth the effort; but I'm unsure whether or not it would fall afoul of the new policy.

How 'identifiable' do the targets of proposed violence have to be for the proposed policy to apply, and how 'hypothetical' would they have to be for it not to? Some clarification there would be appreciated.

Comment author: Kawoomba 24 December 2012 08:26:35AM 2 points [-]

Then discussing it on the public Internet is the wrong thing to do.

Also, implying that violence is best discussed in private, versus not being discussed at all. It's like saying in public "But let's talk about our illegal activities in a more private venue." There should be no perception of LW being associated with such, period (.)

Comment author: [deleted] 24 December 2012 02:11:58PM 20 points [-]

What if you aren't sure whether violence is the right thing to do? You obviously should want as many eyeballs as possible to debug your thinking on that, no?

Comment author: Plasmon 24 December 2012 03:21:50PM 2 points [-]

If you actually believe that violence might be the right thing to do, then you assign non-negligible probability to

  • the discussion will convince you that violence is indeed the right thing to do
  • you now have moral imperative to do violence, and you will act on this or convince others to act on it
  • you will want the discussion to never have occurred in the first place, because authorities can use it to track you down and suppress your justified violence

If you want to discuss a coup or something, do it in a less easily traceable fashion (not on a public forum; use encryption).

Comment author: AdeleneDawner 24 December 2012 02:31:58PM 2 points [-]

Actually, I can think of at least one type of situation where this isn't true, though it seems unwise to explain it in public and in any case it's still not something you'd want associated with LW, or in fact happening at all in most cases.

Comment author: Multiheaded 24 December 2012 07:15:17AM *  5 points [-]

I support censorship, but only if it is based on the unaccountable personal opinion of a human.

I think that there's the usual paradox of benevolent dictatorship here; you can only trust humans who clearly don't seek this position for selfish ends and aren't likely to present a rational/benevolent front just so you would give them political power.

In a liberal/democratic political atmosphere, self-proclaimed benevolent dictators are a rare and prized resource; you can pressure one to run a website, an organization, etc to the best of their ability. But if dictatorship were to be seen as the norm, and you couldn't easily fall back on democracy, rule by committee, anarchy, etc, and had to choose between a few dictators, then the standards of dictatorial control would surely plummet and it would be psychologically much more difficult to change the form of organization. So, IMO, isolated experiments with dictatorship are fine; overall preference for it is terribly dangerous.

(All of the above goes only for humans, of course; I have no qualms about FAI rule.)

P.S.: I googled for "benevolent dictator" + "paradox" and found an argument similar to mine.

Being governed by people instead of a system isn’t just dangerous, it suffers from a limited attention span, too. The Chinese oligarchy is, indeed, very effective. Beijing was cleaner for the Olympics and those pesky plastic bags are gone, but there is only so much bandwidth for the authorities to enforce regulation and address new concerns. Pollution is a serious problem in China that no one denies, but little is done so far. The people and the government are both troubled, but frankly, they have bigger fish to stir fry. Three hundred million people may be living middle class western lives, but that leaves another billion in a falling apart shack.

The Chinese have every reason to be proud of their beautiful country and amazing progress. There is much to enjoy and appreciate and, even if it pained me to admit it, their system works far better than I would like to give it credit. My worry for them is if it’s sustainable. Can those billion people rely on replacing great technocrats with new ones who also make the right decisions? Is it even possible for a system which depends on the vagaries of people to even effectively address all the concerns and needs of the people they govern and the society they guide?

Comment author: [deleted] 24 December 2012 07:22:35AM 2 points [-]

But if dictatorship were to be seen as the norm, and you couldn't easily fall back on democracy, rule by committee, anarchy, etc, and had to choose between a few dictators, then the standards of dictatorial control would surely plummet and it would be psychologically much more difficult to change the form of organization.

Interesting. Do you think there are dictator-selection procedures that don't have either set of failure modes (selecting for looks/promises to loot the commons/lack of leadership, selecting for power-hungry tyrants)?

Comment author: Multiheaded 24 December 2012 07:33:14AM *  2 points [-]

Do you think there are dictator-selection procedures that don't have either set of failure modes (selecting for looks/promises to loot the commons/lack of leadership, selecting for power-hungry tyrants)?

Only a single one: a great actually-benevolent-dictator, with a good insight into people and lots of rationality, personally selects his successor among several candidates, after lengthy consideration and hidden testing. But, of course, remove one of the above qualifiers, and it can blow up regardless of the first dictator's best intentions. See e.g. Marcus Aurelius and Commodus. So, on a meta level, no, there's likely no system that would work for humans.

(I think that "real" democracy is also too dangerous - see the 19th and early 20th century - so either some form of sophisticated rule by committee or a state of anarchy could be the safest option for baseline humanity.)

Comment author: [deleted] 24 December 2012 07:41:13AM *  1 point [-]

What about technocracy à la China?

And FAI, obviously.

so either some form of sophisticated rule by committee or a state of anarchy could be the safest option for baseline humanity.

Really? Safe in the sense of "too incompetent to execute a mass-murder"? Also, anarchy is a military vacuum.

Comment author: quintopia 24 December 2012 01:03:56AM 30 points [-]

EY has publicly posted material that is intended to provoke thought on the possibility of legalizing rape (which is considered a form of violence). If he believed that there was positive utility in considering such questions before, then he must consider them to have some positive utility now, and determining whether the negative utility outweighs that is always a difficult question. This is why I will be opposed to any sort of zero tolerance policy in which the things to be censored are not well-defined, a definite impediment to balanced and rationally considered discussion. It's clear to me that speaking about violence against a particular person or persons is far more likely to have negative consequences on balance, but discussion of the commission of crimes in general seems like something that should be weighed on a case-by-case basis.

In general, I prefer my moderators to have a fuzzy set of broad guidelines about what should be censored in which not deleting is the default position, and they actually have to decide that it is definitely bad before they take the delete action. The guidelines can be used to raise posts to the level of this consideration and influence their judgment on this decision, but they should never be able to say "the rules say this type of thing should be deleted!"

Comment author: Eliezer_Yudkowsky 24 December 2012 02:15:33AM 0 points [-]

EY has publicly posted material that is intended to provoke thought on the possibility of legalizing rape (which is considered a form of violence).

That's an... interesting way of putting it, where by "interesting" I mean "wrong". I could go off on how the idea is that there's particular modern-day people who actually exist and that you're threatening to harm, and how a future society where different things feel harmful is not that, but you know, screw it.

This is why I will be opposed to any sort of zero tolerance policy

The 'rules' do not 'mandate' that I delete anything. They hardly could. I'm just, before I start deleting things, giving people fair notice that this is what I'm considering doing, and offering them a chance to say anything I might have missed about why it's a terrible idea.

Comment author: wedrifid 24 December 2012 03:00:54AM 47 points [-]

That's an... interesting way of putting it, where by "interesting" I mean "wrong".

If you genuinely can't see how similar considerations apply to you personally publishing rape-world stories and the reasoning you explicitly gave in the post then I suggest you have a real weakness in evaluating the consequences of your own actions on perception.

I could go off on how the idea is that there's particular modern-day people who actually exist and that you're threatening to harm, and how a future society where different things feel harmful is not that, but you know, screw it.

I approve of your Three Worlds Collide story (in fact, I love it). I also approve of your censorship proposal/plan. I also believe there is no need to self censor that story (particularly at the position you were when you published it). That said:

This kind of display of evident obliviousness and arrogant dismissal, rather than engagement or, preferably, simply ignoring it, may well do more to make LessWrong look bad than half a dozen half-baked speculative posts by CronoDAS. There are times to say "but you know, screw it" and "where by interesting I mean wrong", but those times don't include when concern is raised about your legalised-rape-and-it's-great story in the context of your own "censor hypothetical violence 'cause it sounds bad" post.

Comment author: Error 24 December 2012 03:05:52AM 7 points [-]

EY has publicly posted material that is intended to provoke thought on the possibility of legalizing rape (which is considered a form of violence)

I'm not sure how this is relevant; there's a good bit of difference between discussion of breaking a law and discussion of changing it. That said, I think I'm reading this differently than most in the thread. I'm understanding it as aimed against hypotheticals that are really "hypotheticals".

In answer to the question that was actually asked in the post, here is a non-obvious consequence: My impression of the atheist/libertarian/geek personspace cluster that makes up much of LW's readership is that they're generally hostile to anything that smells like conflating "legal" with "okay"; and also to the idea that they should change their behavior to suit the rest of the world. You might find you're making LW less off-putting to the mainstream at the cost of making it less attractive to its core audience. (but you might consider it worth that cost)

As both a relatively new contributor and a member of said cluster, this policy makes me somewhat uncomfortable at first glance. Whether that generalizes to other potential new contributors, I cannot say. I present it as proof-of-concept only.

Comment author: [deleted] 24 December 2012 10:24:12AM 1 point [-]

IAWYC, but that was a story set in the far future with a discussion that makes clear (to me at least) that our present is so different from that that the author wouldn't ever even dream of suggesting to do anything remotely like that in our times. It isn't remotely similar to (what Poe's Law predicts people will get from) the recent suggestion about tobacco CEOs.

Comment author: prase 24 December 2012 02:11:51PM 0 points [-]

If he believed that there was positive utility in considering such questions before, then he must consider them to have some positive utility now, and determining whether the negative utility outweighs that is always a difficult question.

He was in a different position then. Trying to gain a reputation for being an original thinker requires different public outputs than attempting to earn mainstream recognition for the organisation one is the head of.

Comment author: NancyLebovitz 24 December 2012 01:17:07AM 16 points [-]

Posts or comments advocating or 'asking about' violation of laws that are actually enforced against middle-class people (e.g., kidnapping, not anti-marijuana laws) may at the admins' option be censored on the grounds that it makes LW look bad

I'm dubious about this because laws can change. I'm also sure I don't have a solid grasp of which laws can be enforced against middle-class people, but I do know that they aren't all like laws against kidnapping. For example, doctors can get into trouble for prescribing "too much" pain medication.

Comment author: Eugine_Nier 24 December 2012 02:03:16AM 11 points [-]

I find that threatening hypothetical violence against my interlocutor can be a useful rhetorical device for getting them to think about ethical problems in near mode.

Comment author: FiftyTwo 24 December 2012 02:46:41AM 36 points [-]

I'm going to hit you with a stick unless you can give me an example of where that has been effective.

Comment author: kodos96 24 December 2012 04:14:57AM 4 points [-]

For all the whining I do about how LWers lack a sense of humor.... I absolutely love it when I'm proven wrong.

Comment author: Qiaochu_Yuan 24 December 2012 11:19:35AM 7 points [-]

Do you really feel like LWers lack a sense of humor? LWers have posted some of the funniest things I've ever read. Their sense-of-humor distribution has heavy tails, at least.

Comment author: Pentashagon 24 December 2012 05:36:42AM 7 points [-]

THREE examples.

Comment author: Mestroyer 24 December 2012 02:08:50AM 7 points [-]

I'll restate a third option here that I proposed in the censored thread (woohoo, I have read a thread Eliezer Yudkowsky doesn't want people to read, and that you, dear reader of this comment, probably can't!): add an option, when creating a post, to make it viewable only by people with a certain karma or above, or to have it disappear from people below that karma after a week or so. This is based on an idea 4chan uses, where it deletes all threads after they become inactive, to encourage people to discuss freely.

This would keep these threads from showing up when people Googled LessWrong. It could also let us discuss phyggishness without making LessWrong look bad on Google.
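
The gating rule being proposed could be sketched like this (a purely hypothetical illustration; the function name, karma threshold, and one-week grace period are assumptions, not anything LW's actual codebase implements):

```python
from datetime import timedelta

def can_view(reader_karma, karma_threshold, post_age, grace=timedelta(days=7)):
    """Hypothetical visibility check for a karma-gated post."""
    if reader_karma >= karma_threshold:
        return True  # high-karma readers always see the post
    # Everyone else sees the post only during the initial grace period,
    # after which it "disappears" for them.
    return post_age < grace
```

Under a rule like this, a gated thread would stop being visible to logged-out visitors and crawlers once the grace period ends, which is what would keep it out of Google results.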

Comment author: Tenoke 24 December 2012 02:14:15AM 4 points [-]

Not a bad option indeed. It has merit if we are really that bothered about the general view of LW.

And for the record the post is still accessible albeit deleted.

Comment author: Eliezer_Yudkowsky 24 December 2012 02:27:00AM 2 points [-]

LW has effectively zero resources to implement software changes.

Comment author: kodos96 24 December 2012 04:24:41AM *  5 points [-]

If this were your real rejection, you would be asking for volunteer software-engineer-hours.

Comment author: Eliezer_Yudkowsky 24 December 2012 05:00:00AM 5 points [-]

Tried.

Comment author: gelisam 24 December 2012 07:24:29AM 12 points [-]

Are you kidding? Sign me up as a volunteer polyglot programmer, then!

Although, my own eagerness to help makes me think that the problem might not be that you tried to ask for volunteers and didn't get any, but rather that you tried to work with volunteers and something else didn't work out.

Comment author: Risto_Saarelma 24 December 2012 07:29:06AM 5 points [-]

The site is open source, you should be able to just write a patch and submit it.

Comment author: kodos96 24 December 2012 07:59:05AM 1 point [-]

This would be a poor investment of time without first getting a commitment from Eliezer that he will accept said patch.

Comment author: Risto_Saarelma 24 December 2012 08:05:00AM 2 points [-]

It'd get you familiar with the code base, which you'd need to be anyway if you wanted to be a volunteer contributor.

Comment author: gelisam 24 December 2012 03:40:49PM *  2 points [-]

After finding the source and the issue list, I found instructions which indicate that there are, after all, non-zero engineering resources for lesswrong development. Specifically, somebody is sorting the incoming issues into "issues for which contributions are welcome" versus "issues which we want to fix ourselves".

The path to becoming a volunteer contributor is now very clear.

Comment author: yli 24 December 2012 11:08:28AM *  9 points [-]

Maybe it's just that volunteers that will actually do any work are hard to find. Related.

Personally, I was excited about doing some LW development a couple of years ago and emailed one of the people coordinating volunteers about it. I got some instructions back but procrastinated forever on it and never ended up doing any programming at all.

Comment author: NancyLebovitz 24 December 2012 03:19:50AM *  9 points [-]

You can't reliably make things on the internet go away.

Comment author: Mestroyer 24 December 2012 03:24:42AM 3 points [-]

You can make them hard enough to access that they won't be stumbled upon by random people wondering what LessWrong is about, which is basically good enough for preserving LessWrong's reputation.

Comment author: NancyLebovitz 24 December 2012 05:07:03AM 3 points [-]

I was thinking about people posting screen shots.

Comment author: Qiaochu_Yuan 24 December 2012 10:45:45AM 4 points [-]

Agreed. It only takes one high-karma user posting a screenshot on reddit of LW's Secret Thread Where They Discuss Terrorism or whatever...

Comment author: kodos96 24 December 2012 04:23:48AM 0 points [-]

I can think of a few different ways, requiring no more than a few dozen software-engineer-hours, that this could be solved effectively enough to make it a non-issue.

Comment author: fubarobfusco 24 December 2012 09:10:42AM *  10 points [-]

If my browser displays it as text, I can copy it. If you try dickish JavaScript hacks to stop me from copying it the normal way, I can screenshot it. If you display it as some kind of hardware-accelerated DRM'd video that can't be screenshotted, I can get out a fucking camera and take a fucking picture. If I post it somewhere and you try to shut me down, you invoke the Streisand Effect and now all of Reddit wants (and has) a copy, to show their Censorship Fighter status.

tl;dr: No, you can't stop people from copying things on the Internet.

Comment author: kodos96 24 December 2012 09:31:51AM *  3 points [-]

Of course. But a "good enough" solution to the stated problem doesn't need to be able to do that. There are a number of different approaches I can think of off the top of my head, in increasing order of complexity:

  • Just keep it from getting indexed by Google, and expire it after a certain period. Sure, a sufficiently determined attacker could just spider LW every day, but do we actually think there's an organized conspiracy out there against us?
  • Limit access to people who can be trusted not to copy it, either based on karma as suggested, or individual vetting. I'm not a fan of this option, but it could certainly be made to work, for certain values of "work".
  • Implement a full-on OTR-style system providing full deniability through crypto. Rather than stopping content from being copied, just make sure you can claim any copy is a forgery, and nobody can prove you wrong. A MAJOR engineering effort of course, but totally possible, and 100% effective.

Comment author: drethelin 24 December 2012 09:16:55AM 12 points [-]

Yes, and if we all put on black robes and masks to hide our identities when we talk about sinister secrets, no one will be suspicious of us at all!

Comment author: Nominull 24 December 2012 03:29:35AM 11 points [-]

Censorship is particularly harmful to the project of rationality, because it encourages hypocrisy and the thinking of thoughts for reasons other than that they are true. You must do what you feel is right, of course, and I don't know what the post you're referring to was about, but I don't trust you to be responding to some actual problematic post instead of self-righteously overreacting. Which is a problem in and of itself.

Comment author: kodos96 24 December 2012 04:17:08AM 7 points [-]

You must do what you feel is right, of course

Passive-aggression level: Obi-Wan Kenobi

Comment author: gjm 24 December 2012 11:06:20AM 3 points [-]

I don't see that that's passive-aggressive when it's accompanied by a clear and explicit statement that Nominull thinks Eliezer is wrong and why. What would be passive-aggressive is just saying "Well, I suppose you must do what you feel is right" and expecting Eliezer to work out that disapproval is being expressed and what sort.

Comment author: twanvl 24 December 2012 12:37:25PM 2 points [-]

because it encourages hypocrisy and the thinking of thoughts for reasons other than that they are true

In particular, this comment seems to suggest that EY considers public opinion to be more important than truth. Of course this is a really tough trade-off to make. Do you want to see the truth no matter what impact it has on the world? But I think this policy vastly overestimates the negative effect posts on abstract violence have. First of all, the people who read LW are hopefully rational enough not to run out and commit violence based on a blog post. Secondly, there is plenty of more concrete violence on the internet, and that doesn't seem to have too many bad direct consequences.

Comment author: kodos96 24 December 2012 03:56:25AM 2 points [-]

How about instead of outright censorship, such discussions be required to be encrypted, via double-rot13?

Comment author: [deleted] 24 December 2012 04:13:38AM 0 points [-]

Rot13 applied twice is just the original text...
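
(For anyone unfamiliar with the joke: ROT13 is its own inverse, so "double-rot13" is the identity transformation. A minimal sketch, with an illustrative function name:)

```python
def rot13(s):
    # Shift each letter 13 places around the alphabet, leaving
    # non-letter characters untouched.
    out = []
    for c in s:
        if 'a' <= c <= 'z':
            out.append(chr((ord(c) - ord('a') + 13) % 26 + ord('a')))
        elif 'A' <= c <= 'Z':
            out.append(chr((ord(c) - ord('A') + 13) % 26 + ord('A')))
        else:
            out.append(c)
    return ''.join(out)

# Since 13 + 13 = 26, a full trip around the alphabet, applying
# rot13 twice returns the original text.
assert rot13(rot13("our secret plans")) == "our secret plans"
```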

Comment author: kodos96 24 December 2012 04:30:02AM 6 points [-]

..............whooosh................

Comment author: [deleted] 24 December 2012 01:52:30PM 0 points [-]

In light of the above getting upvotes, I'm not sure if it's the "whoosh" of double-rot13 going over your head as I originally thought, or if it's indicating intended sarcasm going over my head, or some other meaning not readily obvious to me (inferential distance and all that.)

Comment author: CronoDAS 24 December 2012 04:20:57AM 8 points [-]

I don't know if we actually need a specific policy on this. We didn't in the case of my post...

Comment author: Incorrect 24 December 2012 04:40:09AM *  8 points [-]

Would your post on eating babies count, or is it too nonspecific?

http://lesswrong.com/lw/1ww/undiscriminating_skepticism/1scb?context=1

(I completely agree with the policy, I'm just curious)

Comment author: kodos96 24 December 2012 05:14:20AM 13 points [-]

Aside from the fact that "it might make us look bad" is a horrible argument in general, have you not considered the consequence that censorship makes us look bad? And consider the following comment below:

Got it. Posts discussing our plans for crimes will herewith be kept to the secret boards only.

It was obviously intended as a joke, but is that clear to outsiders? Does forcing certain kinds of discussions into side-channels, which will inevitably leak, make us look good?

Consideration of these kinds of meta-consequences is what separates naive decision theories from sophisticated decision theories. Have you considered that it might hurt your credibility as a decision theorist to demonstrate such a lack of application of sophisticated decision theory in setting policies on your own website?

And now, what I consider to be the single most damning argument against this policy: in the very incident that provoked this rule change, the author of the post in question, after discussion, voluntarily withdrew the post, without this policy being in effect! So self-policing has demonstrated itself, so far, to be 100% effective at dealing with this situation. So where exactly is the necessity for such a policy change?

Comment author: Decius 24 December 2012 05:23:31AM 4 points [-]

Why the explicit class distinction?

It would be prohibited to discuss how to speed and avoid being cited for it. (I thought that this was already policy, and I believe it to be a good policy.)

It would not be prohibited to discuss how to be a vagrant and avoid being cited for it. (Middle class people temporarily without residences typically aren't treated as poorly as the underclass.)

Should the proper distinction be 'serious' crimes, or perhaps 'crimes of infamy'?

Comment author: [deleted] 24 December 2012 05:30:00AM 14 points [-]

Just because I think responses to this post might not have been representative:

I think this is a good policy.

Comment author: Kaj_Sotala 24 December 2012 06:17:06AM *  4 points [-]

I also agree with this policy, and feel that many of the raised or implied criticisms of it are mostly motivated from an emotional reaction against censorship. The points do have some merit, but their significance is vastly overstated. (Yes, explicit censorship of some topics does shift the Schelling fence somewhat, but suggesting that violence is such a slippery topic that next we'll be banning discussion about gun control and taxes? That's just being silly.)

Comment author: kodos96 24 December 2012 06:36:28AM 8 points [-]

You may think it's silly, others do not. Even if Eliezer has no intention of interpreting "violence" that way, how do we know that? Ambiguity about what is and is not allowed results in chilling far more speech than may have been originally intended by the policy author.

Also, the policy is not limited to only violence, but to anything illegal (and commonly enforced on middle class people). What the hell does that even mean? Illegal according to whom? Under what jurisdiction? What about conflicts between state/federal/constitutional law? I mean, don't get me wrong, I think I have a pretty good idea what Eliezer meant by that, but I could well be wrong, and other people will likely have different ideas of what he meant. Again, ambiguity is what ends up chilling speech, far more broadly than the original policy author may have actually intended.

And I will again reiterate what I consider to be the most slam-dunk argument against this policy: in the incident that provoked this policy change, the author of the offending post voluntarily removed it, after discussion convinced him it was a bad idea. Self-policing worked! So what exactly is the necessity for any new policy at all?

Comment author: Kaj_Sotala 24 December 2012 07:43:26AM 2 points [-]

I agree that your points about ambiguity have some merit, but I don't think there's much of a risk of free speech being chilled more than was intended, because there will be people who test these limits. Some of their posts will be deleted, some of them will not. And then people can see directly roughly where the intended line goes. The chilling effect of censorship would be a more worrying factor if the punishment for transgressing was harsher: but so far Eliezer has only indicated that at worst, he will have the offending post deleted. That's mild enough that plenty of people will have the courage to test the limits, as they tested the limits in the basilisk case.

As for self-policing, well, it worked once. But we've already had trolls in the past, and the userbase of this site is notoriously contrarian, so you can't expect it to always work - if we could just rely on self-policing, we wouldn't need moderators in the first place.

Comment author: [deleted] 24 December 2012 05:44:23AM *  17 points [-]

violence against real people.

Abortion, euthanasia and suicide fit that description, some say. For them and those who disagree with them this proposal may have unforeseen consequences. Edit: all three are illegal in parts of the world today.

Comment author: Pentashagon 24 December 2012 06:01:49AM 12 points [-]

Do wars count? I find it strange, to say the least, that humans have strong feelings about singling out an individual for violence but give relatively little thought to dropping bombs on hundreds or thousands of nameless, faceless humans.

Context matters, and trying to describe an ethical situation in enough detail to arrive at a meaningful answer may indirectly identify the participants. Should there at least be an exception for notorious people or groups who happen to still be living instead of relegated to historical "bad guys" who are almost universally accepted to be worth killing? I can think of numerous examples, living and dead, who were or are the target of state-sponsored violence, some with fairly good reason.

Comment author: BrassLion 24 December 2012 06:48:53AM *  14 points [-]

I think this is an overreaction to (deleted thing) happening, and the proposed policy goes too far. (Deleted thing) was neither a good idea nor good to talk about in this public forum, but it was straight-out advocating violence in an obvious and direct way, against specific, real people who aren't in some hated group. That's not okay, and it's not good for the community, for the reasons you (EY) said. But the proposed standard is too loose, and it's going to have a chilling effect on some fringe discussion that's probably going to be useful in teasing out some of the consequences of ethics (which is where this stuff comes up). Having this be a guideline rather than a hard rule seems good, but it still seems like we're scarring on the first cut, as it were.

I think we run the risk of adopting a censorship policy that makes it difficult to talk about or change the censorship policy, which is also a really terrible idea.

I agree with the general idea of protecting LW's reputation to outsiders. After all, if we're raising the sanity waterline (rather than researching FAI), we want outsiders to become insiders, which they won't do if they think we're crazy.

"No advocating violence against real world people, or opening a discussion on whether to commit violence on real world people" seems safe enough as a policy to adopt, and specific enough to not have much of a chilling effect on discussion. We ought to restrict what we talk about as little as possible, in the absence of actual problems, given that any posts we don't want here can be erased by a few keystrokes from an admin.

Comment author: Epiphany 24 December 2012 06:56:00AM *  -1 points [-]

If virtualizing people is violence (since it does imply copying their brains and, uh, removing the physical original) you may want to censor Wei_Dai over here, as he seems to be advocating that the FAI could hypothetically (and euphemistically) kill the entire population of earth:

Wei Dai's Ironic Security Idea

Comment author: Wei_Dai 24 December 2012 02:39:52PM *  1 point [-]

My hypothetical scenario was that replacing a physical person with a software copy is a harmless operation and the FAI correctly comes to this conclusion. It doesn't constitute hypothetically (or euphemistically) killing, since in the scenario, "virtualizing" doesn't constitute "killing".

Comment author: Suryc11 24 December 2012 07:04:39AM *  22 points [-]

I'm disappointed by EY's response so far in this thread, particularly here. The content of the post above did not in itself significantly dismay me, but upon reading what appeared to be a serious lack of rigorous updating on EY's part in response to concerns that I and many LWers seem to have considered valid, my motivation to donate to the SI has substantially decreased.

I had originally planned to donate around $100 (starving college student) to the SI by the start of the new year, but this is now in question. (This is not an attempt at some sort of blackmail, just a frank response by someone who reads LW precisely to sift through material largely unencumbered by mainstream non-epistemic factors.) This is not to say that I will not donate at all, just that the warm fuzzies I would have received on donating are now compromised, and that I will have to purchase warm fuzzies elsewhere--instead of utilons and fuzzies all at once through the SI.

Comment author: drethelin 24 December 2012 07:19:14AM 15 points [-]

This is similar to how I feel. I was perfectly happy with his response to the incident but became progressively less happy with his responses to the responses.

Comment author: Eugine_Nier 24 December 2012 08:40:40AM *  12 points [-]

I don't necessarily object to this policy but find it troubling that you can't give a better reason for not discussing violence being a good idea than PR.

Frankly, I find it even more troubling that your standard reasons for why violence is not in fact a good idea seem to be "it's bad PR" and "even if it is we shouldn't say so in public".

As I quote here:

if your main goal is to show that your heart is in the right place, then your heart is not in the right place.

Edit: added link to an example of SIAI people unable to give a better reason against doing violence than PR.

Comment author: fubarobfusco 24 December 2012 09:48:17AM 7 points [-]

Two thoughts:

One: When my partner worked as the system administrator of a small college, her boss (the head of IT, a fatherly older man) came to her with a bit of an ethical situation.

It seems that the Dean of Admissions had asked him about taking down a student's personal web page hosted on the college's web server. Why? The web page contained pictures of the student and her girlfriend engaged in public displays of affection, some not particularly clothed. The Dean of Admissions was concerned that this would give the college a bad reputation.

Naturally the head of IT completely rejected the request out of hand, but was interested in discussing the implications. One that came up was that taking down a student web page about a lesbian relationship would do the college's reputation more harm than hosting it ever could. Another was that the IT staff did not feel like being censors over student expression, and certainly did not feel like being so on behalf of the Admissions office.

It's not clear to me that this case is especially analogous. It may be rather irrelevant, all in all.

Two: There is the notion that politics is about violence, not about agreement. That is to say, it is not about what we do when everyone agrees and goes along; but rather what we do when someone refuses to go along; when there is contention over shared resources because not everyone agrees what to do with them; when someone is excluded; when someone gets to impose on someone else (or not); and so on. Violence is often at least somewhere in the background of such discussions, in judicial systems, diplomacy, and so on. As Chairman Mao put it (at least, as quoted by Bob Wilson), political power grows out of the barrel of a gun. And a party with no ability to disrupt the status quo is one that nobody has to listen to.

As such, a position of nonviolence goes along with a position of non-politics. Avoiding threatening people — taken seriously enough — may require disengaging from a lot of political and legal-system stuff. For instance, proposing to make certain research illegal or restricted by law entails proposing a threat of violence against people doing that research.

Comment author: fubarobfusco 24 December 2012 10:05:14AM 6 points [-]

Counter-proposal:

We don't contemplate proposals of violence against identifiable people because we're not assholes.

I mean, seriously, what the fuck, people?

Comment author: [deleted] 24 December 2012 10:07:52AM 11 points [-]

Would pro-suicide and general anti-natalist posts be covered by this?

Comment author: [deleted] 24 December 2012 10:09:44AM *  27 points [-]

Fun Exercise

Posts or comments advocating or 'asking about' violation of laws that are actually enforced against middle-class people (e.g., kidnapping, not anti-marijuana laws) may at the admins' option be censored on the grounds that it makes LW look bad and that anyone talking about a proposed crime on the Internet fails forever as a criminal

Consider what would have been covered by this 250, 100 and 50 years ago.

Bonus Consider what wouldn't have been covered by this 250, 100 and 50 years ago but would be today.

Comment author: Qiaochu_Yuan 24 December 2012 11:35:32AM *  12 points [-]

I see the point you're trying to make, but I don't think it constitutes a counterargument to the proposed policy. If you were an abolitionist back when slavery was commonly accepted, it would've been a dumb idea to, say, yell out your plans to free slaves in the Towne Square. If you were part of an organization that thought about interesting ideas, including the possibility that you should get together and free some slaves sometime, that organization would be justified in telling its members not to do something as dumb as yelling out plans to free slaves in the Towne Square. And if Ye Olde Eliezere Yudkowskie saw you yelling out your plans to free slaves in the Towne Square, he would be justified in clamping his hand over your mouth.

Comment author: [deleted] 24 December 2012 12:18:32PM *  13 points [-]

It wouldn't be dumb to argue for the moral acceptability of freeing slaves (even by force) however.

Comment author: Qiaochu_Yuan 24 December 2012 12:28:46PM *  7 points [-]

It wouldn't be dumb for an organization to decide that society at large might be willing to listen to them argue for the moral acceptability of freeing slaves, even by force. It would be dumb for an organization to allow its individual members to make this decision independently because that substantially increases the probability that someone gets the timing wrong.

Comment author: prase 24 December 2012 01:53:37PM 11 points [-]

Beware selective application of your standards. If the members can't be trusted with one type of independent decision, why can they be trusted with other sorts of decisions?

Comment author: [deleted] 24 December 2012 10:10:23AM *  10 points [-]

(i.e., even if a proposed conspiratorial crime were in fact good, there would still be net negative expected utility from talking about it on the Internet; if it's a bad idea, promoting it conceptually by discussing it is also a bad idea; therefore and in full generality this is a low-value form of discussion).

This seems to be a fully general argument against Devil's Advocacy. Was it meant as such?

Comment author: Qiaochu_Yuan 24 December 2012 11:13:53AM *  2 points [-]

I wouldn't have posted the following except that I share Esar's concerns about representativeness:

I think this is a good idea. I think using the word "censorship" primes a large segment of the LW population in an unproductive direction. I think various people are interpreting "may be deleted" to mean "must be deleted." I think various people are blithely ignoring this part of the OP (emphasis added):

In other words, the form of this discussion is not 'Do you like this?' - you probably have a different cost function from people who are held responsible for how LW looks as a whole

In particular, I think people are underestimating how important it is for LW not to look too bad, and also underestimating how bad LW could be made to look by discussions of the type under consideration.

Finally, I strongly agree that

anyone talking about a proposed crime on the Internet fails forever as a criminal[.]

Comment author: SoftFlare 24 December 2012 12:09:48PM 8 points [-]

Beware Evaporative Cooling of Group Beliefs.

I am for the policy, although heavy-heartedly. I feel that one of the pillars of Rationality is that there should be no Stop Signs and this policy might produce some. On the other hand, I think PR is important, and that we must be aware of evaporative cooling that might happen if it is not applied.

On a neutral note - We aren't enemies here. We all have very similar utility functions, with slightly different weights on certain terminal values (PR) - which is understandable as some of us have more or less to lose from LW's PR.

To convince Eliezer - you must show him a model of the world given the policy that causes ill effects he finds worse than the positive effects of enacting the policy. If you just tell him "Your policy is flawed due to ambiguity in description" or "You have, in the past, said things that are not consistent with this policy" - I place low probability on him significantly changing his mind. You should take this as a sign that you are Straw-manning Eliezer, when you should be Steel-manning him.

Also, how about some creative solutions? A special post tag that must be applied to posts that condone hypothetical violence, which makes them visible only to registered users - and displays a disclaimer above the post warning about the nature of the post? That should mitigate 99% of the PR effect. Or, your better, more creative idea. Go.

Comment author: DataPacRat 24 December 2012 12:35:23PM 18 points [-]

I currently find myself tempted to write a new post for Discussion, on the general topic of "From a Bayesian/rationalist/winningest perspective, if there is a more-than-minuscule threat of political violence in your area, how should you go about figuring out the best course of action? What criteria should you apply? How do you figure out which group(s), if any, to try to support? How do you determine what the risk of political violence actually is? When the law says rebellion is illegal, that preparing to rebel is illegal, that discussing rebellion even in theory is illegal, when should you obey the law, and when shouldn't you? Which lessons from HPMoR might apply? What reference books on war, game-theory, and history are good to have read beforehand? In the extreme case... where do you draw the line between choosing to pull a trigger, or not?".

If it was simply a bad idea to have such a post, then I'd expect to take a karma hit from the downvotes, and take it as a lesson learned. However, I also find myself unsure whether or not such a post would pass muster under the new deletionist criteria, and so I'm not sure whether or not I would be able to gather that idea - let alone whatever good ideas might result if such a thread was, in fact, something that interested other LessWrongers.

This whole thread-idea seems to fall squarely in the middle, between the approved 'hypothetical violence near trolleys' and 'discussing violence against real groups'. Would anyone be interested in helping me put together a version of such a post to generate the most possible constructive discourse? Or, perhaps, would somebody like to clarify that no version of such a post would pass muster under the new policy?

Comment deleted 24 December 2012 12:39:38PM [-]
Comment author: twanvl 24 December 2012 12:40:43PM 7 points [-]

What if some violence helps reduce further violence? For example corporal punishment could reduce crime (think of Singapore). Note that I am not saying that this is necessarily true, just that we should not a priori ban all discussion on topics like this.

Comment author: prase 24 December 2012 01:44:47PM 1 point [-]

The proposal is to ban such discussions not because violence is bad, but because discussing violence is bad PR. I am pretty sure advocacy of corporal punishment belongs to this category too.

Comment author: Dahlen 24 December 2012 03:06:26PM 1 point [-]

(I seriously should've posted this question back when the thread only had 3 comments.)

I have no qualms about the policy itself, it's only commonsensical to me; my question is only tangentially related:

Do you believe "censorship" to be a connotatively better term than "moderation"?