Emile comments on Less Wrong: Open Thread, September 2010 - Less Wrong
I had a top-level post which touched on an apparently-forbidden idea downvoted to a net of around -3 and then deleted. This left my karma pinned (?) at 0 for a few months. I am not sure of the reasons for this, but suspect that the forbidden idea was partly to blame.
My karma is now back up to where I could make a top-level post. Do people think that a discussion forum on the moderation and deletion policies would be beneficial? I do, even if we all had to do silly dances to avoid mentioning the specifics of any forbidden idea(s). In my opinion, such dances are both silly and unjustified; but I promise that I'd do them and encourage them if I made such a post, out of respect for the evident opinions of others, and for the asymmetrical (though not one-sided) nature of the alleged danger.
I would not be offended if someone else "took the idea" and made such a post. I also wouldn't mind if the consensus is that such a post is not warranted. So, what do you think?
I don't. Possible downsides are flame wars among people who support different types of moderation policies (and there are bound to be some - self-styled rebels who pride themselves on challenging the status quo and going against groupthink are not rare on the net), and I don't see any possible upsides. Having a Benevolent Dictator For Life works quite well.
See this on Meatball Wiki, which has quite a few pages on the organization of online communities.
I don't want a revolution, and don't believe I'll change the mind of somebody committed not to thinking too deeply about something. I just want some marginal changes.
I think Roko got a pretty clear explanation of why his post was deleted. I don't think I did. I think everyone should. I suspect there may be others like me.
I also think that there should be public ground rules as to what is safe. I think it is possible to state such rules so that they are relatively clear to anyone who has stepped past them, somewhat informative to those who haven't, and not particularly inviting of experimentation. I think that the presence of such ground rules would allow some discussion as to the danger or non-danger of the forbidden idea and/or as to the effectiveness or ineffectiveness of suppressing it. Since I believe that the truth is "non-danger" and "ineffectiveness", and the truth will tend to win the argument over time, I think that would be a good thing.
The second rule of Less Wrong is, you DO NOT talk about Forbidden Topics.
Your sarcasm would not be obvious if I didn't recognize your username.
Hmm - I added a link to the source, which hopefully helps to explain.
Quotes can be used sarcastically or not.
I don't think I was being sarcastic. I won't take the juice out of the comment by analysing it too completely - but a good part of it was the joke of comparing Less Wrong with Fight Club.
We can't tell you what materials are classified - that information is classified.
It's probably better to solve this by private conversation with Eliezer, than by trying to drum up support in an open thread.
Too much meta discussion is bad for a community.
The thing I'm trying to drum up support for is an incremental change in current policy; for instance, a safe and useful version of the policy being publicly available. I believe that's possible, and I believe it is more appropriate to discuss this in public.
(Actually, since I've been making noise about this, and since I've promised not to reveal it, I now know the secret. No, I won't tell you, I promised that. I won't even tell who told me, even though I didn't promise not to, because they'd just get too many requests to reveal it. But I can say that I don't believe in it, and also that I think [though others might disagree] that a public policy could be crafted which dealt with the issue without exacerbating it, even if it were real.)
Normally yes, but this case involves a potentially adversarial agent with intelligence and optimizing power vastly superior to your own, and which cares about your epistemic state as well as your actions.
Look, my post addressed these issues, and I'd be happy to discuss them further, if the ground rules were clear. Right now, we're not having that discussion; we're talking about whether that discussion is desirable, and if so, how to make it possible. I think that the truth will out; if you're right, you'll probably win the discussion. So although we disagree on danger, we should agree on discussing danger within some well-defined ground rules which are comprehensibly summarized in some safe form.
Really? Go read the sequences! ;)