ChristianKl comments on New censorship: against hypothetical violence against identifiable people - Less Wrong Discussion

22 Post author: Eliezer_Yudkowsky 23 December 2012 09:00PM

Comment author: ChristianKl 27 December 2012 03:47:05PM -2 points

For most people it doesn't really matter if they trade off higher values, such as respecting free speech, for better PR.

If you want to design an FAI that values human values, it matters. You should practice following human values yourself. You should treat situations like this as opportunities for deliberate practice in making the kind of moral decisions that an FAI has to make.

Power corrupts. It's easy to censor criticism of your own decisions. You should use those opportunities to practice being friendly instead of being corrupted.

In the past, there was a case of censorship that led to bad press for LessWrong. Given that past performance, why should we believe that increasing censorship will be good for PR?

In other words, the form of this discussion is not 'Do you like this?' - you probably have a different cost function from people who are held responsible for how LW looks as a whole.

The first instinct of an FAI shouldn't be: "Hey, the cost function of those humans is probably wrong, let's use a different cost function."

A few days ago, someone wrote a post about how rationalists should make group decisions. I argued that his proposal was unlikely to be effectively implementable.

A decision about what the ideal censorship policy for LessWrong should look like could be made via the Delphi method.