New censorship: against hypothetical violence against identifiable people
New proposed censorship policy:
Any post or comment which advocates or 'asks about' violence against sufficiently identifiable real people or groups (as opposed to aliens or hypothetical people on trolley tracks) may be deleted, along with replies that also contain the info necessary to visualize violence against real people.
Reason: Talking about such violence makes that violence more probable, and makes LW look bad; and numerous message boards across the Earth censor discussion of various subtypes of proposed criminal activity without anything bad happening to them.
More generally: Posts or comments advocating or 'asking about' violation of laws that are actually enforced against middle-class people (e.g., kidnapping, not anti-marijuana laws) may at the admins' option be censored on the grounds that it makes LW look bad and that anyone talking about a proposed crime on the Internet fails forever as a criminal (i.e., even if a proposed conspiratorial crime were in fact good, there would still be net negative expected utility from talking about it on the Internet; if it's a bad idea, promoting it conceptually by discussing it is also a bad idea; therefore and in full generality this is a low-value form of discussion).
This is not a poll, but I am asking in advance if anyone has non-obvious consequences they want to point out or policy considerations they would like to raise. In other words, the form of this discussion is not 'Do you like this?' - you probably have a different cost function from people who are held responsible for how LW looks as a whole - but rather, 'Are there any predictable consequences we didn't think of that you would like to point out, and possibly bet on with us if there's a good way to settle the bet?'
Yes, a post of this type was just recently made. I will not link to it, since this censorship policy implies that it will shortly be deleted, and reproducing the info necessary to say who was hypothetically targeted and why would be against the policy.
Comments (457)
For most people it doesn't really matter when they trade away higher values, such as respecting free speech, for better PR.
If you want to design an FAI that values human values, it matters. You should practice following human values yourself. You should treat situations like this as opportunities for deliberate practice in making the kind of moral decisions that an FAI has to make.
Power corrupts. It's easy to censor criticism of your own decisions. You should use those opportunities to practice being friendly instead of being corrupted.
In the past there was a case of censorship that led to bad press for LessWrong. Given that past performance, why should we believe that increasing censorship will be good for PR?
The first instinct of an FAI shouldn't be: "Hey, the cost function of those humans is probably wrong, let's use a different cost function."
A few days ago someone wrote a post about how rationalists should make group decisions. I argued that his proposal was unlikely to be effectively implementable.
A decision about what the ideal censorship policy for LessWrong should look like could be made via the Delphi method.
I don't have any principled objection to this policy, other than that as rationalists, we want to have fun, and this policy makes LW less fun.
Different countries can have very different laws. Are you going to enforce this policy with reference to U.S. laws only, as they exist in 2012? If not, what is your standard of reference?
As I commented elsewhere, if your goal is to prevent bad PR, it is not obvious to me that this policy is the right way to optimize for it. Perhaps you have thought this out and have good reasons for believing that this policy is best for this goal, but it is not clear to me, so please elaborate on this if you can.
Censorship is generally not a wise response to a single instance of any problem. Every increment of censorship you impose will wipe out an unexpectedly broad swath of discussion, make it easier to add more censorship later, and make it harder to resist accusations that you implicitly support any post you don't censor.
If you feel you have to Do Something, a more narrowly-tailored rule that still gets the job done would be something like: "Posts that directly advocate violating the laws of <jurisdiction in which Less Wrong staff live> in a manner likely to create criminal liability will be deleted."
Because, you know, it's just about impossible to talk about specific wars, terrorism, criminal law or even many forms of political activism without advocating real violence against identifiable groups of people.
Maybe something like "Moderators, at their discretion, may remove comments that can be construed as advocating illegal activity" would work for a formal policy - it reads like relatively inoffensive boilerplate and would be something to point to if a post like mine needs to go, but is vague enough that it doesn't scream "CENSORSHIP!!!" to people who feel strongly about it. The "at their discretion" is key; it doesn't create a category of posts that moderators are required to delete, so it can't be used by non-moderators as a weapon to stifle otherwise productive discussion. (If you don't trust the discretion of the moderators, that's not a problem that can be easily solved with a few written policies.)
The freaky consequences are not of the policy, they're of the meta-policy. You know how communities die when they stop being fun? Occasional shitstorms are not fun, and fear of saying something that will cause a shitstorm is not fun. Benevolent dictators work well to keep communities fun; the justifications don't apply when the dictator is pursuing goals that aren't in the selfish interest of members and interested lurkers; making the institute the founder likes look bad only weakly impacts community fun.
Predictable consequences are bright iconoclasts leaving, and shitstorm frequency increasing. (That's kinda hard to settle: the former is imprecise and the latter can be rigged.)
Every time, people complain much less about the policy than about not being consulted. There are at least two metapolicies that avoid this:
Avoid kicking up shitstorms. In this particular instance, you could have told CronoDAS his post was stupid and suggested he delete it, and then said "Hey, everyone, let's stop talking about violence against specific people, it's stupid and makes us look bad" without putting your moderator hat on.
Produce a policy, possibly ridiculously stringent, that covers most things you don't like, which allows people to predict moderator behavior and doesn't change often. Ignore complaints when enforcing, and do what you wish with complaints on principle.
I'm not sure what's obvious to you. In an environment without censorship, you don't endorse a post by not censoring it. If you start censoring, however, you do endorse a post by letting it stand.
Your legal and PR responsibility for the posts that LessWrong hosts grows if you make editorial censorship decisions.
AIUI this is legally true: CDA section 230, mere hosting versus moderation.
Is there any way out of this dilemma? For example, having a policy where the moderator flips a coin for each offending article or comment: heads = delete, tails = keep.
:D
While I don't know about the legality, practically what this does is add noise to the moderation signal. Posts that remain are still more likely to be ones that the moderator approves of, but might not be.
This is actually very similar to the current system, with the randomness of coin flipping substituted for the semi-randomness of what the moderator happens to see.
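A minimal sketch of the coin-flip idea above (hypothetical Python; is_offending stands in for whatever judgment the moderator would otherwise apply):

    import random

    def moderate(post, is_offending, p_delete=0.5):
        # Randomized moderation: an offending post is deleted only with
        # probability p_delete, so a post that survives is weaker evidence
        # of moderator approval - it may simply have won the coin flip.
        if is_offending(post) and random.random() < p_delete:
            return "delete"
        return "keep"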
Regardless of whether you think censoring is net good or bad (for the forums, for the world, for SIAI), you have to realize the risks fall far more on Eliezer than on any poster. His low-tolerance responses and angry swearing are exactly what you should expect of someone who feels the need to project an image of complete dissociation from any lines of speculation that could possibly get him in legal trouble. Eliezer's PR concerns are not just about the forums in general. If he's being consistent, they should be informing every one of his comments on this topic. There's little to nothing to be gained by trying to apply logic against this sort of totally justifiable (in my mind) conversational caution. Eliezer should probably also delete any comments about keeping criminal discussions off the public internet.
This is also why trying to point out his Okcupid profile as a PR snafu is a non sequitur. Nothing there can actually get him in trouble with the law.
PR is something different from legal trouble. Eliezer didn't express any concern that the speech he wants to censor would create legal trouble. If Eliezer wants to make that argument, he should make it directly instead of speaking about PR.
In the US, speech that advocates violence is protected under the First Amendment unless it is directed to inciting imminent lawless action. See Brandenburg v. Ohio.
If you actually want to protect against legal threats, you should also forbid a lot of discussion that could fall under UK libel law.
This, I think, is the fundamental point of disagreement here. The emotional valence is far greater for Eliezer than for us, but if we're taking seriously the proposition that the singularity is coming in our lifetimes (and I do), then the risks are the same for all of us.
Angry swearing? Did I miss some posts? Link please.
I suppose I should point out that when I referred earlier to Eliezer's occasional lapses in judgement, I was absolutely NOT intending to refer to this. I wasn't actively commenting at the time, but looking back on that episode, I found a lot of the criticism of Eliezer regarding his OKcupid profile to be downright offensive. When I first read the profile, I was actually incredibly impressed by the courage he displayed in not hiding anything about who he is.
In my experience, it's difficult to display a high level of courage about revealing the truth about myself and at the same time commit to moderating the image I present so as to avoid public-relations failures. At some point, tradeoffs become necessary.
http://lesswrong.com/lw/g24/new_censorship_against_hypothetical_violence/84fd but I think there's others
I think it's fairly reasonable for Eliezer to guard against large risks to himself in exchange for small and entirely theoretical risks to the effectiveness of the forums. I don't think this censorship decision has a very meaningful impact either way on FAI. You can also factor in his own (possibly inflated) evaluation of his value to the cause. A 0.01 percent chance of Eliezer getting jailed might be worse than a 10 percent chance of stifling a mildly useful conversation or having someone quit the forums due to censorship.
I disagree. One of the most common objections to the idea of FAI/CEV is "so will this new god-like AI restrict fundamental civil liberties after it takes over?"
I've never heard this objection.
"Fundamental civil liberties" is also a fundamentally diseased concept.
If you explain that position in great detail, there's a plausible chance that it includes advocacy of illegal conduct and could therefore be censored under this policy.
Keep in mind that the policy is going to be done through human implementation with the specific intention of avoiding inconveniently broad interpretations like this.
It's not enough to show that the censorship policy could theoretically be used to stifle conversation we actually want here, the important question is whether it actually would be.
I think that's a very dangerous idea. This community is about developing FAI. An FAI should be expected to act according to the rules that you give it. I think the policy should be judged by the way it would actually work if it were applied as proposed.
There's also the problem of encouraging groupthink: it's bad if advocacy of illegal conduct gets censored when it goes against the morality of this group but is allowed when it falls within that morality.
This community should have consistent rules about which discussions are allowed and which aren't. Censoring on a case-by-case basis is problematic.
If you start censoring certain speech that advocates violence and avoid censoring other speech that advocates violence you also have the problem that you get more responsibility for the speech that you allow.
In the absence of a censorship policy, you don't endorse a viewpoint by not censoring it. If, however, you do censor, then each decision not to censor specific speech becomes an endorsement of that speech.
The way it's proposed is to be applied according to the judgment of a moderator. It makes no sense to pretend that we're beholden to the strictest letter of the rule when that's not how it's actually going to work.
What speech that advocates violence do you think would get a pass while the rest would get censored?
I don't know exactly how much speech Eliezer wants to censor. I wrote a post with a bunch of examples. I would like to see which of those examples Eliezer considers worthy of censorship.
Please explain. (I've heard this argued before, but I'm curious what your particular angle on it is)
He is probably pattern-matching "fundamental civil liberties" to Natural Rights, which are not taken very seriously around these parts, since they are mostly myth.
Yeesh. Step out for a couple days to work on your bodyhacking and there's a trench war going on when you get back...
In all seriousness, there seems to be a lot of shouting here. Intelligent shouting, mind you, but I am not sure how much of it is actually informative.
This looks like a pretty simple situation to run a cost/benefit on: will censoring of the sort proposed help, hurt, or have little appreciable effect on the community.
Benefits: May help public image. (Sub-benefits: Make LW more friendly to new persons, advance SIAI-related PR); May reduce brain-eating discussions (If I advocate violence against group X, even as a hypothetical, and you are a member of said group, then you have a vested political interest whether or not my initial idea was good which leads to worse discussion); May preserve what is essentially a community norm now (as many have noted) in the face of future change; Will remove one particularly noxious and bad-PR generating avenue for trolling. (Which won't remove trolling, of course. In fact, fighting trolls gives them attention, which they like: see Cons)
Costs: May increase bad PR for censoring (Rare in my experience, provided that the rules are sensibly enforced); May lead to people not posting important ideas for fear of violating rules (corollary: may help lead to environment where people post less); May create "silly" attempts to get around the rule by gray-areaing it (Where people say things like "I won't say which country, but it starts with United States and rhymes with Bymerica") which is a headache; May increase trolling (Trolls love it when there are rules to break, as these violations give them attention); May increase odds of LW community members acting in violence
Those are all the ones I could come up with in a few minutes after reading many posts. I am not sure what weights or probabilities to assign: probabilities could be determined by looking at other communities and incidents of media exposure, possibly comparing community size to exposure and total harm done and comparing that to a sample of similarly-sized communities. Maybe with a focus on communities about the size LW is now to cut down on the paperwork. Weights are trickier, but should probably be assigned in terms of expected harm to the community and its goals and the types of harm that could be done.
What would the response to this have been if instead of "censorship policy" the phrase would have been "community standard"?
It probably would have been more positive but less honest.
Taking this post in the way it was intended, i.e. 'are there any reasons why such a policy would make people more likely to attribute violent intent to LW', I can think of one:
The fact that this policy is seen as necessary could imply that LW has a particular problem with members advocating violence. Basically, I could envision someone saying: 'LW members advocate violence so often that they had to institute a specific policy just to avoid looking bad to the outside world.'
And, of course, statements like 'if a proposed conspiratorial crime were in fact good you shouldn't talk about it on the internet' make for good out-of-context excerpts.
I don't think that's probable. There are many online forums that have rules preventing those discussions.
See also xkcd.
One of the most challenging moderation decisions I had to make at another forum was whether someone who argued the position "Homosexuality is a crime. In my country it's punishable with death. I like the laws of my country" should have his right of free speech. I think the author of the post was living in Uganda.
The basic question is, should someone who's been raised in Uganda feel free to share his moral views? Even if those views are offensive to Western ears and people might die based on those views?
If you want to have an open discussion about morality, I think it's very valuable to have people who weren't raised in Western society participating openly in the discussion. I don't think LessWrong is supposed to be a place where someone from Uganda is prevented from arguing the moral views in which he believes.
When it comes to politics, communists frequently argue for the necessity of a revolution. A revolution is an illegal act that includes violence against real people. Moldbug argues frequently for the necessity of a coup d'état.
This policy allows for censoring both the political philosophy of communism and the political philosophy of Moldbuggianism.
Even though I disagree with both political philosophies, I think they should stay within the realm of discourse on LessWrong.
A community which has the goal of finding the correct moral system shouldn't ban ideas because they conflict with the basic Western moral consensus.
TDT suggests that one should push the fat man. It's a thought exercise, and it's easy to say "I would push the fat man". In a discussion about pushing fat men in front of trolleys, I think it's valid to switch the discussion from trolley cars to real-world examples.
Discussion of torture is similar. If you say "Policemen should torture kidnappers to get the location where the kidnapper hid the victim" you are advocating a crime against real people.
Corporal punishment is illegal violence.
Given the examples I listed in this post, which are cases where you would choose to censor? Do you think that you could articulate public criteria about which cases you will censor and which you will allow?
No you're advocating changing the law. It's not a crime once/if the law is changed.
Depends on where you are.
No, that sentence doesn't include the word law. It's a valid position to argue that a policeman has a moral duty to do everything he can to save a life, even when that involves breaking the law.
Does it? CDT most certainly does, but...
Okay, you can argue about whether it does. Regardless, that's an argument that should be possible to have in depth. And it should be possible to exchange the trolley cars for more real-world examples.
It seems like this represents, not simply a new rule, but a change in the FOCUS of the community. Specifically, it used to be entirely about generating good ideas, and you are now adding a NEW priority which is "generating acceptable PR".
Quite possibly there is an illusion of transparency here, because there hasn't really BEEN (to my knowledge) any discussion about this change in purpose and priorities. It seems reasonable to be worried that this new priority will get in the way of, or even supersede the old priority, especially given the phrasing of this.
At a minimum, it's a slippery slope - if we make one concession to PR, it's reasonable to assume others will be made as well. I don't know if that's the case - if I'm in error on that point, feel free to mention it.
When you go on a first date with someone, would you tell them "hey, I've got this great idea about how I should [insert violence here] in order to [insert goal here]. What do you think?" Of course not, because whether or not this is a good idea, you are not getting a second date.
PR isn't inherently Dark Arts. It's about providing evidence to another party about yourself or your organization in a way which is conducive to further provision of evidence. If you start all your dates by talking about your worst traits first, you aren't giving your date incentives to stick around and learn about your best traits. If LW becomes known for harboring discussions of terrorism or whatever, you aren't giving outsiders incentives to stick around and learn about all the other interesting things happening on LW, or work for SIAI, etc.
This begs the question by assuming the proposed violence is a bad trait.
All I'm assuming is that a typical date will assume that people who talk about violence on the first date are crazy and/or violent themselves. This is an argument about first impressions, not an argument about goodness or badness.
If I went on a date with a girl who believed in the necessity of a communist revolution, I wouldn't judge her negatively for that political belief. There are character traits that I would judge much more harshly.
Okay, but 1) the fact that you post on LW is already evidence that you're not representative of the general population in various ways, and 2) communist revolution is at least an idea that people learn about in college, and it's not too unusual to hear a certain type of person say stuff like that. I had in mind the subject of the deleted post; if a typical person heard someone talking like that, their first reaction would be that that person is crazy, and with a reasonable choice of priors this would be a reasonable inference to make.
If LW-compatible people are more welcoming of discussion of violence than the general population, then the bad PR would affect them less than it would other people, so we should care less about bad PR.
I haven't read the deleted post. If someone who knows what the case was about would describe it to me via private message, I would appreciate it.
The communist revolution is a classic example of an idea that involves advocacy of illegal violence against a specific group of people. There are certainly internet forums where that kind of political speech isn't welcome and will get deleted.
On LessWrong I think that's a position that should be allowed to be argued. Moldbuggian advocacy of a coup d'état should also be allowed.
Some people might think that you are crazy if you argue Moldbuggianism on a first date. At the same time, I think that idea should be within the realm of permissible discourse on LessWrong.
You'd be amazed how many second dates I get...
That said, I don't think PR is Dark Arts, I just think it's an UNSPOKEN change in community norms, and... from a PR standpoint, this post is a blatantly stupid way of revealing that change.
Huh. Either the original post is bad because PR is bad, or this post is bad because it demonstrates bad PR. Lose/lose :)
If the point was to "make a good impression" by distorting the impression given by people on the list to potential donors, maybe a more effective strategy is to shut up and do it, instead of making an announcement about it and causing a ruckus. "Disappear" the problems quietly and discreetly.
This reminds me of the phyg business. Prompting long discussion threads about how "We are not a phyg! We are not a phyg!" is not recommended behavior if you want people to think you're not a phyg.
People notice when this happens, and the resulting uproar might have been worse; then accusations would be flying about lack of transparency.
Might have been. I didn't see it, and I didn't see any brouhaha over a deleted post. Was there one? More than the 318 comments in this thread announcing the policy? I see lots of downvotes for EY. I don't think it's going well.
The advantage of just doing it is that many people will not notice it at all, and those who notice it will have seen the offending post, and so have a concrete context to go by. When people haven't seen the threat, they talk about censorship. My guess is that in a concrete case, at worst it will come across as an overreaction to some boorish behavior.
If the concrete case really involved stifling ideas, I'd expect people to make a huge stink about it. The big stink about the policy, and the lack of any stink I detected about the concrete case, tells me that people are getting their undies in a bunch over nothing.
Eliezer, at this point I think it's fair to ask: has anything anyone has said so far caused you to update? If not, why not?
I realize some of my replies to you in this thread have been rather harsh, so perhaps I should take this opportunity to clarify: I consider myself a big fan of yours. I think you're a brilliant guy, and I agree with you on just about everything regarding FAI, x-risk, SIAI's mission.... I think you're probably mankind's best bet if we want to successfully navigate the singularity. But at the same time, I also think you can demonstrate some remarkably poor judgement from time to time... hey, we're all running on corrupted hardware after all. It's the combination of these two facts that really bothers me.
I don't know of any way to say this that isn't going to come off sounding horribly condescending, so I'm just going to say it, and hope you evaluate it in the context of the fact that I'm a big fan of your work, and in the grand scheme of things, we're on the same side.
I think what's going on here is that your feelings have gotten hurt by various people misattributing various positions to you that you don't actually hold. That's totally understandable. But I think you're confusing the extent to which your feelings have been hurt with the extent to which actual harm has been done to SIAI's mission, and are overreacting as a result. I'm not a psychologist - this is just armchair speculation.... I'm just telling you how it looks from the outside.
Again, we're all running on corrupted hardware, so it's entirely natural for even the best amongst us to make these kinds of mistakes... I don't expect you to be an emotionless Straw Vulcan (and indeed, I wouldn't trust you if you were)... but your apparent unwillingness to update in response to others' input when it comes to certain emotionally charged issues is very troubling to me.
So to answer your question "Are there any predictable consequences we didn't think of that you would like to point out"... well I've pointed out many already, but the most concise, and most important predictable consequence of this policy which I believe you're failing to take into account, is this: IT LOOKS HORRIBLE... like, really really bad. Way worse than the things it's intended to combat.
"Reason: Talking about such violence makes that violence more probable,"
Does it? In some cases yes, and some cases no. Wacky people advocating violence can be smacked down by the crowd. Wacky violent loners need perspective from other people.
Talking about socially approved violence probably makes it more likely. Talking about socially disapproved violence might make it less likely.
The problem with these conversations is the generalities involved. We don't have examples of the offending material, and worse, the whole point is to hide the examples of offending material.
SUGGESTION: Move disapproved posts to their own thread where replies aren't allowed. Wouldn't you get some brownie points from the people you're trying to impress by showing that you ban all this stuff you think they disapprove of?
My guess is that the policy will be applied reasonably, and were people allowed to see what was banned, they'd think so too.
So... we can't discuss assassinating the president; that seems fine, if unnecessary. But we can't debate kidnapping people and giving the ransom to charity without the "pro" side being deleted? That seems like it will either destroy a lot of valuable conversations for minimal PR gain, or be enforced inconsistently, leading to accusations of bias. Probably both.
Eliezer, could you clarify whether this policy applies to discussions like "maybe action X, which some people think constitutes violence, isn't really violence"? And what about nuclear war strategies?
I realize this isn't your TRUE objection, just a bit of a tangential "Public Service Announcement". The real concern is simply PR / our appearance to outsiders, right? But... I'm confused why you feel the need to include such a PSA.
Do we have a serious problem with people saying "Meet under the Lincoln Memorial at midnight, the pass-phrase is Sic semper tyrannis" or "I'm planning to kill my neighbor's dog, can you please debug my plot, I live in Brooklyn at 123 N Stupid Ave"?
You can use Private Messaging to send me actual examples, without causing a public reputation hit. I can't recall ever reading anything like that on this site.
I'd ask if there's any evidence of removal but... I can't imagine anything other than Eliezer saying "Yep, I deleted it" would do the trick.
Well, actually, does this software ALLOW deletions, or does deleted comment get replaced with a "Deleted Content" placeholder? Because if it's the latter, a link to that placeholder would be decent evidence :)
It only gets replaced with a placeholder if there are replies.
Do deleted comments still appear on the list of comments by that user?
I'm not sure. I suspect it might depend on who deleted them.
Do you have worked out numbers (in terms of community participation, support dollars, increased real-world violence, etc.) comparing the effect of having the censorship policy and the effect of allowing discussions that would be censored by the proposed policy? The request for "Consequences we haven't considered" is hard to meet until we know with sufficient detail what exactly you have considered.
My gut thinks it is unlikely that having a censorship policy has a smaller negative public relations effect than having occasional discussions that violate the proposed policy. I know that I am personally much less okay with the proposed censorship policy than with having occasional posts and comments on LW that would violate the proposed policy.
The blasphemy laws of many countries fit this description - another possible unintended consequence.
(I seriously should've posted this question back when the thread only had 3 comments.)
I have no qualms about the policy itself; it seems simply commonsensical to me. My question is only tangentially related:
Do you believe "censorship" to be a connotatively better term than "moderation"?
To me this sounds like a mix of counter-signaling and a way of saying, "Yes, this proposal is controversial, as policy debates should not appear one-sided. There are ups and downs. There are reasons to believe that some of the negatives that obtain to other things called 'censorship' may happen here."
What if some violence helps reduce further violence? For example corporal punishment could reduce crime (think of Singapore). Note that I am not saying that this is necessarily true, just that we should not a priori ban all discussion on topics like this.
The proposal is to ban such discussions not because violence is bad, but because discussing violence is bad PR. I am pretty sure advocacy of corporal punishment belongs to this category too.
Is it really bad PR, though? IMO one of the strengths of LW is that almost any weird topic can be discussed as long as the discussion is rational and civilized. If some interesting posts are banned by the moderators, then this diminishes the value of LW to me.
I don't know whether it's indeed bad PR. It probably depends on one's expectations. I agree with you that banning weird discussions makes LW less attractive (to me, you, certain kind of people), but the site owners want to become more respected by the mainstream and in order to achieve that it is probably a good strategy to remove the weirdest discussions from sight.
But the point of LW is not merely having a forum that is valuable to you for discussing weird topics. If you want Reddit, you know where to find it. The point of LW is advancing human rationality, and being a place where people air proposals of violence may get in the way of that. How would we tell?
Eliezer and other big names here have been on the receiving end of scandal-sheet gossip-mongering before and may be particularly sensitive to some of these issues. One thing that worries me about this proposal is that Eliezer may be conflating "LW has a bad reputation" with "I have to answer snarky, demeaning questions about foolish things people posted on LW more often than I'm comfortable with" or "People publish articles making fun of my friends and I wish to heck they would stop doing that." But I infer there is also evidence that Eliezer is withholding.
But it seems to me that the best way to have a good reputation is to actually be good. For instance, I would like it if people did not see LW as a place to air demeaning, privileged hypotheses (pun intended) about, say, race or gender — in part because many people's evidence standards for these topics is appallingly low; in part because it drives away members of the less-privileged sets (I would rather cooperate with women and defect against PUAs than vice versa; for one thing, there are more women). I would accept the same restriction on discussions of political economy (viz. libertarianism and socialism); although I've talked politics here it's not exactly an area where humans are renowned for being exemplary rationalists.
"is valuable to you for discussing weird topics"
"reddit"
pick one.
Definition of green ink
Everyone even slightly famous gets arbitrary green ink. Choosing which green ink to 'complain' about on your blog, when it makes an idea look bad which you would find politically disadvantageous, is not a neutral act. I'm also frankly suspicious of what the green ink actually said, and whether it was, perhaps, another person who doesn't like the "UFAI is possible" thesis saying that "Surely it would imply..." without anyone ever actually advocating it. Why would somebody who actually advocated that, contact Ben Goertzel when he is known as a disbeliever in the thesis?
No, I don't particularly trust Ben Goertzel to play rationalist::nice with his politics. And describing him as a "former researcher at SIAI" is quite disingenuous of you, by the way; he never received any salary from us and is a long-time opponent of these ideas. At one point Tyler Emerson thought it would be a good idea to fund a project of his, but that's it.
If that's the case, it seems like giving him the title Director of Research could cause a lot of confusion. I certainly find it confusing. Maybe that was a different Ben Goertzel?
Reportedly, Ben Goertzel and OpenCog were intended to add credibility through association with an academic:
Honestly, at this point I'm willing to just call that a mistake on Tyler Emerson's part.
I currently find myself tempted to write a new post for Discussion, on the general topic of "From a Bayesian/rationalist/winningest perspective, if there is a more-than-minuscule threat of political violence in your area, how should you go about figuring out the best course of action? What criteria should you apply? How do you figure out which group(s), if any, to try to support? How do you determine what the risk of political violence actually is? When the law says rebellion is illegal, that preparing to rebel is illegal, that discussing rebellion even in theory is illegal, when should you obey the law, and when shouldn't you? Which lessons from HPMoR might apply? What reference books on war, game-theory, and history are good to have read beforehand? In the extreme case... where do you draw the line between choosing to pull a trigger, or not?".
If it was simply a bad idea to have such a post, then I'd expect to take a karma hit from the downvotes, and take it as a lesson learned. However, I also find myself unsure whether or not such a post would pass muster under the new deletionist criteria, and so I'm not sure whether or not I would be able to gather feedback on that idea - let alone whatever good ideas might result if such a thread was, in fact, something that interested other LessWrongers.
This whole thread-idea seems to fall squarely in the middle, between the approved 'hypothetical violence near trolleys' and the forbidden 'discussing violence against real groups'. Would anyone be interested in helping me put together a version of such a post to generate the most constructive discourse possible? Or, perhaps, would somebody like to clarify that no version of such a post would pass muster under the new policy?
Do you have answers to those questions? Just "Hey, this problem exists" has not historically been shown to be productive.
I have /a/ set of answers, based on what I've learned so far of economics, politics, human nature, and various bits of evidence. However, I peg my confidence-levels of at least some of those answers as being low enough that I could be easily persuaded to change my mind, especially by the well-argued points that tend to crop up around here.
Beware Evaporative Cooling of Group Beliefs.
I am for the policy, although heavy-heartedly. I feel that one of the pillars of Rationality is that there should be no Stop Signs and this policy might produce some. On the other hand, I think PR is important, and that we must be aware of evaporative cooling that might happen if it is not applied.
On a neutral note - We aren't enemies here. We all have very similar utility functions, with slightly different weights on certain terminal values (PR) - which is understandable as some of us have more or less to lose from LW's PR.
To convince Eliezer, you must show him a model of the world, given the policy, that causes ill effects he finds worse than the positive effects of enacting the policy. If you just tell him "Your policy is flawed due to ambiguity in its description" or "You have, in the past, said things that are not consistent with this policy", I place low probability on him significantly changing his mind. You should take this as a sign that you are straw-manning Eliezer when you should be steel-manning him.
Also, how about some creative solutions? A special post tag that must be applied to posts that condone hypothetical violence, which causes them to be seen only by registered users and displays a disclaimer above the post warning about its nature? That should mitigate 99% of the PR effect. Or your better, more creative idea. Go.
Ambiguity is actually a problem. If people don't know what the policy means, then the person who makes the policy doesn't know what they are forbidding or permitting.
True. I was giving the ambiguity as an example of something people say to claim a policy won't work, without hashing out what that actually means in real execution. Almost every policy is somewhat ambiguous, yet there are many good policies.
I disagree that this is the entire source of the dispute. I think that even when constrained to optimizing only for good PR, this is an instrumentally ineffective method of achieving it. Censorship is worse for PR than the problem in question, especially given that the problem in question is thus far nonexistent.
This is trivially easy to do, since the positive effects of enacting the policy are zero, given that the one and only time this has ever been a problem, the problem resolved itself without censorship, via self-policing.
Well... the showing him the model part is trivially easy anyway. Convincing him... apparently not so much.
This model trivially shows that censoring espousing violence is a bad idea, if and only if you accept the given premise that censorship of espousing violence is a substantial PR negative. This premise is a large part of what the dispute is about, though.
Not everyone is you; a lot of people feel positively about refusing to provide a platform to certain messages. I observe organisations expending a substantial amount of time simply signalling opposition to things commonly accepted as negative, and avoiding association with those things. LW barring the espousal of violence would certainly have a positive effect through this channel.
Negative effects from the policy would be that people who do feel negatively about censorship, even of espousing violence, would view LW less well.
The poll in this thread indicates that a majority of people here would be in favour of moderators being able to censor people espousing violence. This suggests that for the majority here it is not bad PR for the reason of censorship alone, since they agree with its imposition. I would expect people outside LW to have an even stronger preference in favour of censoring advocacy of dangerous ideas, suggesting a positive PR effect.
Whether people should react to it in this manner is a completely different matter, a question of the just world rather than the real one.
And this is before requiring any actual message be censored, and considering the impact of any such censorship, and before considering what the particular concerns of the people who particularly need to be attracted are.
There are better ideas in this thread but apparently LW can't afford software changes.
I wouldn't have posted the following except that I share Esar's concerns about representativeness:
I think this is a good idea. I think using the word "censorship" primes a large segment of the LW population in an unproductive direction. I think various people are interpreting "may be deleted" to mean "must be deleted." I think various people are blithely ignoring this part of the OP (emphasis added):
In particular, I think people are underestimating how important it is for LW not to look too bad, and also underestimating how bad LW could be made to look by discussions of the type under consideration.
Finally, I strongly agree that
I'm not underestimating that at all... I'm saying that this policy makes us look bad... WAY worse than the disease it's intended to cure, especially in light of the fact that that disease cleared itself up in a few hours with no intervention necessary.
This seems to be a fully general argument against Devil's Advocacy. Was it meant as such?
I don't see the link, but it does so happen I think "devil's advocate!" is mentally poisonous. I'd even call it an evolutionary precursor of trolling. http://lesswrong.com/lw/r3/against_devils_advocacy/
Sounds more like a precursor of "Policy Debates Should Not Appear One-Sided".
I recall reading this before, but I reread it and thought about it again. We may have a trade-off here. I agree that Devil's Advocacy might be risky for the person doing it, but I see obvious benefits in having one or two devil's advocates in a group: it emulates some of the epistemic benefits of value diversity, and it signals to the group that its beliefs shouldn't be taken that seriously.
This is the major problem I see with my view that I'm unsure how to resolve though. Devil's Advocates may in practice be merely straw men generators.
Interesting. I suppose I agree with it.
You may only generate straw men while you attempt to generate steel men, but how many steel men are you likely to make if you don't even try to make them?
You can always fail, but not trying guarantees failure.
Fun Exercise
Consider what would have been covered by this 250, 100 and 50 years ago.
Bonus: Consider what wouldn't have been covered by this 250, 100 and 50 years ago but would be today.
Bonus: Consider what's likely to be covered 50 years in the future.
For something like that, consider the algorithm you use to answer it. Then consider why the output of said algorithm should at all correlate with future social trends.
I considered adding that too. :)
I see the point you're trying to make, but I don't think it constitutes a counterargument to the proposed policy. If you were an abolitionist back when slavery was commonly accepted, it would've been a dumb idea to, say, yell out your plans to free slaves in the Towne Square. If you were part of an organization that thought about interesting ideas, including the possibility that you should get together and free some slaves sometime, that organization would be justified in telling its members not to do something as dumb as yelling out plans to free slaves in the Towne Square. And if Ye Olde Eliezere Yudkowskie saw you yelling out your plans to free slaves in the Towne Square, he would be justified in clamping his hand over your mouth.
It wouldn't be dumb to argue for the moral acceptability of freeing slaves (even by force) however.
It wouldn't be dumb for an organization to decide that society at large might be willing to listen to them argue for the moral acceptability of freeing slaves, even by force. It would be dumb for an organization to allow its individual members to make this decision independently because that substantially increases the probability that someone gets the timing wrong.
How does an organization make decisions independently of the members of the organization?
It doesn't. The distinction is between decisions that individual members make independently and decisions that individual members make communally.
If it helps, the underlying moral principle I'm working from here is "try to avoid making decisions that entail other people taking on risks without their consent."
Did they take on the risks when they entered the conspiracy, or do they only take on those risks when events beyond their control happen? It would be foolish to conspire with foolish or rash people, which is one reason why I don't.
Beware selective application of your standards. If the members can't be trusted with one type of independent decision, why can they be trusted with other sorts of decisions?
Because the decision to initiate a particular kind of public discussion entails everyone else in the organization taking on a certain level of risk, and an organization should be able to determine what kinds of communal risk it's willing to allow its individual members to force on everyone else. There are jurisdictions where criminal incitement is itself a crime.
I can't say whether I agree or disagree until you make precise the meaning of the qualifiers "particular" and "certain". But my question was probably directed elsewhere in any case: if the members shouldn't be free to write about a certain class of topics because they may misjudge how society at large would react, doesn't it imply that they shouldn't be free to write about anything, because they may misjudge what society at large might think? If the rationale is what you say, then, returning from abolitionists to LW, shouldn't the policy be "any post that conflicts with LW's interests can be deleted" rather than the overly specific rule concerning violence and only violence?
"Criminal incitement" and "the risk of being arrested," then. In other time periods, substitute "blasphemy" and "the risk of being burned at the stake."
They shouldn't be free to write about certain topics with the name of their organization attached to that writing, which is the case here. They can write about anything they want anonymously and with no organization's name attached because that doesn't entail the other members of the organization taking on any risk.
Sure.
Would pro-suicide and general anti-natalist posts be covered by this?
Suggesting that specific people commit suicide, obviously yes. People in general... maybe no.
I am not going to explain why, but although the death of all people technically includes the death of any specific person X.Y., saying "X.Y. should die" sounds worse than saying "all humans should die".
Um, yes. We don't want to look like a suicide phyg.
Forget about it.
I'm not trolling. I quite like reading Sister Y's stuff and have said so in the past.
Luckily enough, that blog seems much better than your introduction of it. My troll accusation is founded on your highly repetitive deliberate misunderstanding of the OP. It must be deliberate, as you are usually much smarter than that, and also better in style.
Also, Sister Y is not pro-suicide per se, but against anti-suicide positions; at least that's how I read it.
Counter-proposal:
We don't contemplate proposals of violence against identifiable people because we're not assholes.
I mean, seriously, what the fuck, people?
Yes, this is the unstated policy we've all been working under up until this point, and it's worked. Which is why it's so irrational to propose a censorship rule.
First: "Rational" and "irrational" describe mental processes, not conclusions. A social rule can be useful or useless, beneficial or harmful, well- or ill-defended ....
("If deleting posts that propose violence would benefit Less Wrong, I want to believe that deleting posts that propose violence would benefit Less Wrong. If deleting posts that propose violence would not benefit Less Wrong, I want not to believe that deleting posts that propose violence would ...")
Second: Consider the difference between "we're not assholes" and "we don't want to look like assholes".
Or between "I will cooperate" and "I want you to think that I will cooperate." A defector can rationally conclude the latter, but not the former (since it is false of defectors).
Generalizations: on average accurate; in the specific case, wrong.
Two thoughts:
One: When my partner worked as the system administrator of a small college, her boss (the head of IT, a fatherly older man) came to her with a bit of an ethical situation.
It seems that the Dean of Admissions had asked him about taking down a student's personal web page hosted on the college's web server. Why? The web page contained pictures of the student and her girlfriend engaged in public displays of affection, some not particularly clothed. The Dean of Admissions was concerned that this would give the college a bad reputation.
Naturally the head of IT completely rejected the request out of hand, but was interested in discussing the implications. One that came up was that taking down a student web page about a lesbian relationship would be worse reputation than hosting it could bring. Another was that the IT staff did not feel like being censors over student expression, and certainly did not feel like being so on behalf of the Admissions office.
It's not clear to me that this case is especially analogous. It may be rather irrelevant, all in all.
Two: There is the notion that politics is about violence, not about agreement. That is to say, it is not about what we do when everyone agrees and goes along; but rather what we do when someone refuses to go along; when there is contention over shared resources because not everyone agrees what to do with them; when someone is excluded; when someone gets to impose on someone else (or not); and so on. Violence is often at least somewhere in the background of such discussions, in judicial systems, diplomacy, and so on. As Chairman Mao put it (at least, as quoted by Bob Wilson), political power grows out of the barrel of a gun. And a party with no ability to disrupt the status quo is one that nobody has to listen to.
As such, a position of nonviolence goes along with a position of non-politics. Avoiding threatening people — taken seriously enough — may require disengaging from a lot of political and legal-system stuff. For instance, proposing to make certain research illegal or restricted by law entails proposing a threat of violence against people doing that research.
I don't necessarily object to this policy but find it troubling that you can't give a better reason for not discussing violence being a good idea than PR.
Frankly, I find it even more troubling that your standard reasons for why violence is not in fact a good idea seem to be "it's bad PR" and "even if it is we shouldn't say so in public".
As I quote here:
Edit: added link to an example of SIAI people unable to give a better reason against doing violence than PR.
If the violence is a bad idea, which in nearly all cases it probably would be, other commenters are likely to point that out. Having people inspired to carry out acts of violence in spite of other members pointing out that it's unlikely to bear good results is possible, but unlikely, whereas having people judge the community negatively for discussing such things at all is considerably more likely.
Can you point to an example of this actually happening?
The post in question was heavily downvoted before it was deleted.
Here.
I would find this troubling if it were true, but the better reason is right there in the post: "Talking about such violence makes that violence more probable".
I appreciate the honesty of it. No one here is going to enact any of these thought experiments in real life. The likely worst outcome is putting off potential SI donors. It must be hard enough to secure funding for a fanfic-writing apocalypse cult; prepending "violent" onto that description isn't going to loosen up many wallets.
I'm disappointed by EY's response so far in this thread, particularly here. The content of the post above in itself did not significantly dismay me, but upon reading what appeared to be a serious lack of any rigorous updating on the part of EY to--what I and many LWers seemed to have thought were--valid concerns, my motivation to donate to the SI has substantially decreased.
I had originally planned to donate around $100 (starving college student) to the SI by the start of the new year, but this is now in question. (This is not an attempt at some sort of blackmail, just a frank response by someone who reads LW precisely to sift through material largely unencumbered by mainstream non-epistemic factors.) This is not to say that I will not donate at all, just that the warm fuzzies I would have received on donating are now compromised, and that I will have to purchase warm fuzzies elsewhere--instead of utilons and fuzzies all at once through the SI.
This is similar to how I feel. I was perfectly happy with his response to the incident but became progressively less happy with his responses to the responses.
There is a rare personality trait which allows a person to read and respond to hundreds of critical comments without compromising their perspicacity and composure. Luke, for instance, has demonstrated this trait; Eliezer hasn't (to the detriment of this discussion and some prior ones).
(I'd bet at 10-to-1 that Eliezer agrees with this assessment.)
Not sure I totally agree. My LW comments may show retained composure in most cases, but I can think of two instances in the past few months in which I became (mildly) emotional in SI meetings in ways that disrupted my judgment until after I had cooled down. Anna can confirm, as she happens to have been present for both meetings. (Eliezer could also confirm if he had better episodic memory.) The first instance was a board meeting at which we discussed different methods of tracking project expenses, the second was at a strategy meeting which Anna compared to a Markov chain.
Anyway, I'm aware of people who are better at this than I am, and building this skill is one of my primary self-improvement goals at this time.
I appreciate you sharing this.
Keeping one's composure in person and keeping one's composure on the Internet are distinct aptitudes (and only somewhat correlated, as far as I can tell), and it still looks to me like you've done well at the latter.
There is another personality trait (or skill) that allows one to be comfortable with acknowledging areas of weakness and delegating to people more capable. Fortunately Eliezer was able to do this with respect to managing SIAI. He seems to have lost some of that capability or awareness in recent times.
Quite possibly. I'd still oppose putting you in charge of the website. :)
Irrelevant and unnecessary. Particularly since I happen to have the self awareness in question, know that becoming Luke would be exhausting to me and so would immediately hand off the responsibility. What I wouldn't do is go about acting how I naturally wish to act and be unable to comprehend why my actions have the consequences that they inevitably have.
Why are you smiling? This makes your remark all the more objectionable (in as much as it indicates either wit or rapport, neither of which are present.)
If virtualizing people is violence (since it implies copying their brains and, uh, removing the physical originals), you may want to censor Wei_Dai over here, as he seems to be advocating that the FAI could hypothetically (and euphemistically) kill the entire population of Earth:
Wei Dai's Ironic Security Idea
My hypothetical scenario was that replacing a physical person with a software copy is a harmless operation and the FAI correctly comes to this conclusion. It doesn't constitute hypothetically (or euphemistically) killing, since in the scenario, "virtualizing" doesn't constitute "killing".
That is your exact wording. Not "In the event that the AGI determines that it's safe to [euphemism for doing something that could mean killing the entire human race] because there are software copies." or "if virtualizing is safe..."
Even if your wording was that, I'd still disagree with it.
I thought the most important reason to do friendliness research was to give the AGI what it needs to avoid making decisions that could kill all of humanity. It is humanity's responsibility to dictate what should happen in this case and ensure that the AGI understands enough to choose the option we dictate. If you aren't in favor of micromanaging the millions of tiny ethical decisions it will have to make like exactly how many months to put a lawbreaker in jail, that's one thing. If you aren't in favor of making sure it decides correctly on issues that could kill all of humanity, that's negligent beyond imagining. If you are aware of a decision that an AGI could make that could kill all of humanity, and you are in favor of creating an AGI that hasn't been given guidance on that issue, then you're in favor of creating a very dangerous AGI.
Advocating for an AGI that will kill all of humanity vs. advocating for an AGI that could kill all of humanity is a variation on "advocating violence" (it's advocating possible violence) but, to me, it's no different from saying: "I'm going to put one bullet in my gun, aim at so-and-so, and pull the trigger!" - Just because the likelihood of killing so-and-so is reduced to 1 in 6 from what's more or less a certainty does not mean it's not a murder threat.
Likewise, adding the word "possibly" into a sentence that would otherwise break the censorship policy is a cheap way of trying to get through the filter. That should not work. "We should possibly go on a killing rampage." - no.
What's most alarming is that you've done work for SIAI.
The whole point of SIAI is not to go "Let's let the AGI decide what is ethical" but "Let's iron out all the ethical problems before making an AGI!"
If Eliezer doesn't want to look bad, he should consider this.
As I clarified in a subsequent comment in that thread, "if the FAI concludes that replacing a physical person with a software copy isn't a harmless operation, it could instead keep physical humans around and place them into virtual environments Matrix-style."
We could argue about whether to build an FAI that can make this kind of decision on its own, but I had no intention of doing anyone any harm. Yes the attempted-FAI may reach this conclusion erroneously and end up killing everyone, but then any method of building an FAI has the possibility of something going wrong and everyone ending up dead.
I've never received any money from them and am not even a Research Associate. I have independently done work that may be useful for SIAI, but I don't think that's the same thing from a PR perspective.
Actually I think SIAI's official position is something like "Let's work out all the problems involved in letting the AGI decide what is ethical." If you disagree with this, let's argue about it, but could you please stop saying that I advocate killing people?
I know.
*reviews my wording very carefully*
"If virtualizing people is violence ... Wei_Dai ... seems to be advocating "
"Advocating for an AGI that will kill all of humanity (in the context of this is not what you said) vs. advocating for an AGI that could kill all of humanity (context: this is what you said)"
My understanding is that it's your perspective that copying people and removing the physical original might not be killing them, so my statements reflect that, but maybe it would make you feel better if I did this:
"If virtualizing people is violence ... Wei_Dai ... seems to be advocating ... kill the entire population of earth (though he isn't convinced that they would die)"
And likewise with the other statement.
Sorry for the upset this has probably caused. It wasn't my intent to accuse you of actually wanting to kill everyone. I just disagree with you and am very concerned about how your statement looks to others with my perspective. More importantly, I feel concerned about the existential risk if people such as yourself (who are prominent here and connected with SIAI) are willing to have an AGI that could (in my view) potentially kill the entire human race. My feeling is not that you are violent or intend any harm, but that you appear to be confused in a way that I deem dangerous. Someone I'm close to holds a view similar to yours, and although I find this disturbing, I accept him anyway. My disagreement with you is not personal; it's not a judgment about your moral character, it's an intellectual disagreement with your viewpoint.
I think the purpose of this part is to support your statement that you have no intention to harm anyone, but if it's an argument against some specific part of my comment, would you mind matching them up? I don't see how this refutes any of my points.
It's not easy for me to determine your level of involvement from the website. This here suggests that you've done important work for SIAI:
http://singularity.org/blog/2011/07/22/announcing-the-research-associates-program/
If one is informed of the exact relationship between you and SIAI, it is not as bad, but:
A. If someone very prominent on LessWrong (a top contributor) who has been contributing to SIAI's decision theory ideas (independently) does something that looks bad, it still makes them look bad.
B. The PR effect for SIAI could be much worse considering that there are probably lots of people who read the site and see a connection there but do not know the specifics of the relationship.
Okay, but how will you know it's making the right decision if you do not even know what the right decision is yourself? If you do not think it is safe to simply give the AGI an algorithm that looks good without testing whether running the algorithm outputs the choices we want it to make, then how do you test it? How do you even reason about the algorithm? How do you make those beliefs "pay rent", as the sequence post puts it?
I see now that the statement could be interpreted in one of two ways:
"Let's work out all the problems involved in letting the AGI define ethics."
"Let's work out all the problems involved in letting the AGI make decisions on it's own without doing any of the things that are wrong by our definition of what's ethical."
Do you not think it better to determine for ourselves whether virtualizing everyone means killing them, and then ensure that the AGI makes the correct decision? Perhaps the reason you approach it this way is that you don't think it's possible for humans to determine whether virtualizing everyone is ethical?
I do think it is possible, so if you don't think it is possible, let's debate that.
I think it may not be possible for humans to determine this, in the time available before someone builds a UFAI or some other existential risk occurs. Still, I have been trying to determine this, for example just recently in Beware Selective Nihilism. Did you see that post?
Were you serious about having Eliezer censor my comment? If so, now that you have a better understanding of my ideas and relationship with SIAI, would you perhaps settle for me editing that comment with some additional clarifications?
Sorry for not responding sooner. The tab explosion triggered by the links in your article and related items was pretty big. I was trying to figure out how to deal with the large amount of information that was provided.
If you want to consider my take on it uninformed, fine. I haven't read all of the relevant information in the tab explosion (this would take a long time). Here is my take and my opinion on the situation:
If a person is copied, the physical original will not experience what the copy experiences. Therefore, if you remove the physical original, the physical original's experiences will end. This isn't perfectly comparable to death seeing as how the person's experiences, personality, knowledge, and interaction with the world will continue. However, the physical original's experiences will end. That, for many, would be an unacceptable result of being virtualized.
I believe in the right to die, so regardless of whether I think being virtualized should be called "death", I believe that people have the right to choose to do it to themselves. I do not believe that an AGI has the right to make that decision for them. To decide to end someone else's experiences without first gaining their consent qualifies as violence to me and it is alarming to see someone as prominent as you advocating this.
My opinion is that it's better for PR for you to edit your comment. Even if, for some reason, reading the entire tab explosion would somehow reveal to me that yes, the physical original would experience what the copy experiences even after being destroyed, I think it is likely that people who have not read all of that information will interpret it the way that I did and may become alarmed especially after realizing that it was you who wrote this.
I would be really happy to see you edit your own "virtualize everyone" comments. I do think something needs to be done. My suggestion would be to either:
A. Clearly state that you believe the physical original will experience the copy's experiences even after being removed if that's your view.
B. In the event that you agree that the physical original's experiences would end, to refrain from talking about virtualizing everyone without their consent.
I added a disclaimer to my comment. I had to write my own since neither of yours correctly describes my current beliefs. I'll also try to remember to be a bit more careful about my FAI-related comments in the future, and keep in mind that not all the readers will be familiar with my other writings.
Thanks for listening to me. I feel better about this now.
I think this is an overreaction to (deleted thing) happening, and the proposed policy goes too far. (Deleted thing) was neither a good idea nor good to talk about in this public forum: it was straight-out advocating violence in an obvious and direct way, against specific, real people who aren't in some hated group. That's not okay, and it's not good for the community, for the reasons you (EY) said. But the proposed standard is too loose, and it's going to have a chilling effect on some fringe discussion that's probably going to be useful in teasing out some of the consequences of ethics (which is where this stuff comes up). Having this be a guideline rather than a hard rule seems good, but it still seems like we're scarring on the first cut, as it were.
I think we run the risk of adopting a censorship policy that makes it difficult to talk about or change the censorship policy, which is also a really terrible idea.
I agree with the general idea of protecting LW's reputation to outsiders. After all, if we're raising the sanity waterline (rather than researching FAI), we want outsiders to become insiders, which they won't do if they think we're crazy.
"No advocating violence against real world people, or opening a discussion on whether to commit violence on real world people" seems safe enough as a policy to adopt, and specific enough to not have much of a chilling effect on discussion. We ought to restrict what we talk about as little as possible, in the absence of actual problems, given that any posts we don't want here can be erased by a few keystrokes from an admin.
Do wars count? I find it strange, to say the least, that humans have strong feelings about singling out an individual for violence but give relatively little thought to dropping bombs on hundreds or thousands of nameless, faceless humans.
Context matters, and trying to describe an ethical situation in enough detail to arrive at a meaningful answer may indirectly identify the participants. Should there at least be an exception for notorious people or groups who happen to still be living instead of relegated to historical "bad guys" who are almost universally accepted to be worth killing? I can think of numerous examples, living and dead, who were or are the target of state-sponsored violence, some with fairly good reason.
Abortion, euthanasia, and suicide fit that description, some say. For them and those who disagree with them, this proposal may have unforeseen consequences. Edit: all three are illegal in parts of the world today.
Just because I think responses to this post might not have been representative:
I think this is a good policy.
I also agree with this policy, and feel that many of the raised or implied criticisms of it are mostly motivated from an emotional reaction against censorship. The points do have some merit, but their significance is vastly overstated. (Yes, explicit censorship of some topics does shift the Schelling fence somewhat, but suggesting that violence is such a slippery topic that next we'll be banning discussion about gun control and taxes? That's just being silly.)
You may think it's silly; others do not. Even if Eliezer has no intention of interpreting "violence" that way, how do we know that? Ambiguity about what is and is not allowed results in chilling far more speech than may have been originally intended by the policy author.
Also, the policy is not limited to violence, but extends to anything illegal (and commonly enforced on middle-class people). What the hell does that even mean? Illegal according to whom? Under what jurisdiction? What about conflicts between state/federal/constitutional law? I mean, don't get me wrong, I think I have a pretty good idea what Eliezer meant by that, but I could well be wrong, and other people will likely have different ideas of what he meant. Again, ambiguity is what ends up chilling speech, far more broadly than the original policy author may have actually intended.
And I will again reiterate what I consider to be the most slam-dunk argument against this policy: in the incident that provoked this policy change, the author of the offending post voluntarily removed it, after discussion convinced him it was a bad idea. Self-policing worked! So what exactly is the necessity for any new policy at all?
What about *gasp* whole other countries outside the US?
Yes, that was covered by the previous question: "Under what jurisdiction?"
I agree that your points about ambiguity have some merit, but I don't think there's much of a risk of free speech being chilled more than was intended, because there will be people who test these limits. Some of their posts will be deleted, some of them will not. And then people can see directly roughly where the intended line goes. The chilling effect of censorship would be a more worrying factor if the punishment for transgressing was harsher: but so far Eliezer has only indicated that at worst, he will have the offending post deleted. That's mild enough that plenty of people will have the courage to test the limits, as they tested the limits in the basilisk case.
As for self-policing, well, it worked once. But we've already had trolls in the past, and the userbase of this site is notoriously contrarian, so you can't expect it to always work - if we could just rely on self-policing, we wouldn't need moderators in the first place.
Why the explicit class distinction?
It would be prohibited to discuss how to speed and avoid being cited for it. (I thought that this was already policy, and I believe it to be a good policy.)
It would not be prohibited to discuss how to be a vagrant and avoid being cited for it. (Middle class people temporarily without residences typically aren't treated as poorly as the underclass.)
Should the proper distinction be 'serious' crimes, or perhaps 'crimes of infamy'?
As judged by who?
(I don't endorse EY's original proposal, either.)
As judged by the person making the decision to delete.
I don't think the words "serious crime" have the property that different judges would make very similar judgements about a given discussion.
Is that phrase better or worse than "laws that are actually enforced against middle-class people"?
"Laws that are actually enforced" is at least an empirical question. "Serious crime" is just a value judgement.
"Middle class" is just as much an undefined term as "serious crime".
It's concerning that we are having trouble agreeing on where the edge cases are, much less how to decide them.
Aside from the fact that "it might make us look bad" is a horrible argument in general, have you not considered the consequence that censorship itself makes us look bad? And consider the following comment:
It was obviously intended as a joke, but is that clear to outsiders? Does forcing certain kinds of discussions into side channels, which will inevitably leak, make us look good?
Consideration of these kinds of meta-consequences is what separates naive decision theories from sophisticated decision theories. Have you considered that it might hurt your credibility as a decision theorist to demonstrate such a lack of application of sophisticated decision theory in setting policies on your own website?
And now, what I consider to be the single most damning argument against this policy: in the very incident that provoked this rule change, the author of the post in question, after discussion, voluntarily withdrew the post, without this policy being in effect! So self-policing has demonstrated itself, so far, to be 100% effective at dealing with this situation. So where exactly is the necessity for such a policy change?
You can argue that LessWrong shouldn't care about PR, or that censorship is going to be bad PR, or that censorship is unnecessary, but you can't argue that PR is a fundamentally horrible idea without some very strong evidence (which you did not provide).
It's almost tautological that if a group cares about PR, it HAS to care about what makes it look bad:
If Obama went on record saying that we should kill everyone on Less Wrong, and made it clear he was serious, I'd hope to high hell that there would be an impeachment trial.
If Greenpeace said we should kill all the oil CEOs, people would consider them insane terrorists.
If the oil CEOs suggested that there might be... incentives... should Greenpeace members be killed...
That was perhaps a bit of an overstatement on my part. Considering PR consequences of actions is certainly a good thing to do. But if PR concerns are driving your policy, rather than simply informing it, that's bad.
Taboo "driving" and "informing" and explain the difference between those two to me?
Or we can save ourselves some time if this resolves your objection: Eliezer is saying that he is adding the OPTION to censor things if they are a PR problem OR because the person is needlessly incriminating themselves. I'm not sure how that's a bad OPTION to have, given that he's explicitly stated he will not mindlessly enforce it, and in fact has so far enforced it zero (0) times to my knowledge (the post that prompted this was voluntarily withdrawn by its author).
On the one hand, you're deciding policy based on non-PR-related factors, then thinking about the most PR-friendly way to proceed from there. On the other hand, you're letting PR actually determine policy.
Which category is it if you decide based on multiple factors, ONE of which is PR? And why is this a bad thing, if that's what you believe?
Before I spend any more time replying to this, can you clarify for me... do you and I actually disagree about something of substance here? I.e. how an organization should, in the real world, deal with PR concerns? Or are we just arguing about the most technically correct way to go about stating our position?
Would your post on eating babies count, or is it too nonspecific?
http://lesswrong.com/lw/1ww/undiscriminating_skepticism/1scb?context=1
(I completely agree with the policy, I'm just curious)
We should exempt any imagery fitting of a Slayer album cover, lest we upset the gods of metal with our weakness.
I don't know if we actually need a specific policy on this. We didn't in the case of my post...
I agree. We should trust the community more where a guarantee of moderation (via an established policy) is not needed.
Your post was quickly downvoted, and you deleted it yourself. This is an example of a good outcome that demonstrates we didn't need moderation.
On the other hand, if we do need it for some post in the future, it will be an advantage that the policy already exists and does not have to be invented ad hoc.
How about instead of outright censorship, such discussions be required to be encrypted, via double-rot13?
Rot13 applied twice is just the original text...
..............whooosh................
In light of the above getting upvotes, I'm not sure if it's the "whoosh" of double-rot13 going over your head as I originally thought, or if it's indicating intended sarcasm going over my head, or some other meaning not readily obvious to me (inferential distance and all that.)
Yes, the original comment was a joke.
Censorship is particularly harmful to the project of rationality, because it encourages hypocrisy and the thinking of thoughts for reasons other than that they are true. You must do what you feel is right, of course, and I don't know what the post you're referring to was about, but I don't trust you to be responding to some actual problematic post instead of self-righteously overreacting. Which is a problem in and of itself.
In particular, this comment seems to suggest that EY considers public opinion to be more important than truth. Of course this is a really tough trade-off to make. Do you want to see the truth no matter what impact it has on the world? But I think this policy vastly overestimates the negative effect posts on abstract violence have. First of all, the people who read LW are hopefully rational enough not to run out and commit violence based on a blog post. Secondly, there is plenty of more concrete violence on the internet, and that doesn't seem to have too many bad direct consequences.
Anyone can read LW. There is no IQ test, rationality test, or a mandatory de-biasing session before reading the articles and discussions.
I am not concerned about someone reading LW and committing violence. I am concerned about someone committing violence after coincidentally having read LW a day before (for example, just one article randomly found via Google), and the police collecting a list of recently visited websites, and a journalist looking at that list and then looking at some articles on LW.
In short, we don't live on a planet full of rationalists. It is a fact of life that anything we do can be judged by any irrational person who notices. Sure, we can't make everyone happy. But we should avoid things that can predictably lead to unnecessary trouble.
Passive-aggression level: Obi-Wan Kenobi
I don't see that that's passive-aggressive when it's accompanied by a clear and explicit statement that Nominull thinks Eliezer is wrong and why. What would be passive-aggressive is just saying "Well, I suppose you must do what you feel is right" and expecting Eliezer to work out that disapproval is being expressed and what sort.
I didn't mean it as a criticism, just that my brain pattern-matched his choice of words and read it in Alec Guinness's voice.
I'll restate a third option here that I proposed in the censored thread (woohoo, I have read a thread Eliezer Yudkowsky doesn't want people to read, and that you, dear reader of this comment, probably can't!): add an option, when creating a post, to have it be viewable only by people with a certain karma or above, or to have it disappear after a week or so for people without that karma. This is based on an idea 4chan uses, where it deletes all threads after they become inactive, to encourage people to discuss freely.
This would keep these threads from showing up when people Googled LessWrong. It could also let us discuss phyggishness without making LessWrong look bad on Google.
I like the idea, but I have to agree that the PR cost of such a thing being leaked is probably vastly worse than simply being open about it in the first place.
Yes, and if we all put on black robes and masks to hide our identities when we talk about sinister secrets, no one will be suspicious of us at all!