New proposed censorship policy:
Any post or comment which advocates or 'asks about' violence against sufficiently identifiable real people or groups (as opposed to aliens or hypothetical people on trolley tracks) may be deleted, along with replies that also contain the info necessary to visualize violence against real people.
Reason: Talking about such violence makes that violence more probable, and makes LW look bad; and numerous message boards across the Earth censor discussion of various subtypes of proposed criminal activity without anything bad happening to them.
More generally: Posts or comments advocating or 'asking about' violation of laws that are actually enforced against middle-class people (e.g., kidnapping, not anti-marijuana laws) may at the admins' option be censored on the grounds that it makes LW look bad and that anyone talking about a proposed crime on the Internet fails forever as a criminal (i.e., even if a proposed conspiratorial crime were in fact good, there would still be net negative expected utility from talking about it on the Internet; if it's a bad idea, promoting it conceptually by discussing it is also a bad idea; therefore and in full generality this is a low-value form of discussion).
This is not a poll, but I am asking in advance if anyone has non-obvious consequences they want to point out or policy considerations they would like to raise. In other words, the form of this discussion is not 'Do you like this?' - you probably have a different cost function from people who are held responsible for how LW looks as a whole - but rather, 'Are there any predictable consequences we didn't think of that you would like to point out, and possibly bet on with us if there's a good way to settle the bet?'
Yes, a post of this type was just recently made. I will not link to it, since this censorship policy implies that it will shortly be deleted, and reproducing the info necessary to say who was hypothetically targeted and why would be against the policy.
I know.
reviews my wording very carefully
"If virtualizing people is violence ... Wei_Dai ... seems to be advocating "
"Advocating for an AGI that will kill all of humanity (in the context of this is not what you said) vs. advocating for an AGI that could kill all of humanity (context: this is what you said)"
My understanding is that it's your perspective that copying people and removing the physical original might not be killing them, so my statements reflect that, but maybe it would make you feel better if I did this:
"If virtualizing people is violence ... Wei_Dai ... seems to be advocating ... kill the entire population of earth (though he isn't convinced that they would die)"
And likewise with the other statement.
Sorry for the upset this has probably caused. It wasn't my intent to accuse you of actually wanting to kill everyone. I just disagree with you and am very concerned about how your statement looks to others with my perspective. More importantly, I feel concerned about the existential risk if people such as yourself (who are prominent here and connected with SIAI) are willing to have an AGI that could (in my view) potentially kill the entire human race. My feeling is not that you are violent or intend any harm, but that you appear to be confused in a way that I deem dangerous. Someone I'm close to holds a view similar to yours, and although I find this disturbing, I accept him anyway. My disagreement with you is not personal; it's not a judgment about your moral character; it's an intellectual disagreement with your viewpoint.
I think the purpose of this part is to support your statement that you have no intention to harm anyone, but if it's meant as an argument against some specific part of my comment, would you mind matching them up? I don't see how this refutes any of my points.
It's not easy for me to determine your level of involvement from the website. This suggests that you've done important work for SIAI:
http://singularity.org/blog/2011/07/22/announcing-the-research-associates-program/
If one is informed of the exact relationship between you and SIAI, it is not as bad, but:
A. If someone very prominent on LessWrong (a top contributor) who has independently contributed to SIAI's decision theory ideas does something that looks bad, it still makes them look bad.
B. The PR effect for SIAI could be much worse, considering that there are probably lots of people who read the site and see a connection there but do not know the specifics of the relationship.
Okay, but how will you know it's making the right decision if you do not even know what the right decision is yourself? If you do not think it is safe to simply give the AGI an algorithm that looks good without testing whether running the algorithm outputs the choices we want it to make, then how do you test it? How do you even reason about the algorithm? How do you make those beliefs "pay rent", as the sequence post puts it?
I see now that the statement could be interpreted in one of two ways:
"Let's work out all the problems involved in letting the AGI define ethics."
"Let's work out all the problems involved in letting the AGI make decisions on it's own without doing any of the things that are wrong by our definition of what's ethical."
Do you not think it better to determine for ourselves whether virtualizing everyone means killing them, and then ensure that the AGI makes the correct decision? Perhaps the reason you approach it this way is that you don't think it's possible for humans to determine whether virtualizing everyone is ethical?
I do think it is possible, so if you don't think it is possible, let's debate that.
I think it may not be possible for humans to determine this in the time available before someone builds a UFAI or some other existential risk occurs. Still, I have been trying to determine this, most recently in Beware Selective Nihilism. Did you see that post?
Were you serious about having Eliezer censor my comment? If so, now that you have a better understanding of my ideas and relationship w...