I don't know what you're talking about, again.
I'm asking you simple, straightforward questions about your comments.
Perhaps it will be clearer if I give a personal example.
When I was a lot younger, I was in a relationship with a woman who, well, largely held me in contempt, except as a vehicle for satisfying certain of her sexual desires. Was I wrong to find this depersonalizing piggishness of hers awesome, despite the fact that her contempt was not part of a negotiated BDSM scene, nor any sort of playacting on her part? Was her attitude somehow morally wrong? Was mine?
My point here is that this sort of bright-line moralism invariably ends up depriving other people of choice, or framing them as second-class humans. The very attempt to codify objective criteria for "objectification" ends up objectifying and oppressing people.
We can be considerate of individuals, but trying to be considerate of classes of people doesn't scale: just segregating people into classes in the first place is half the problem! (e.g. stereotype priming)
Edit to add clarification: one reason defining classes and labeling people as members of them is depersonalizing is that it reduces their individuality to a set of footnotes on the ways in which they do or do not resemble the class they are being seen as a member of. For example, saying that a woman is a good programmer "for a woman" is depersonalizing, despite the superficially positive intent to compliment. In the same way, Alicorn's classing other people's activity as "abuse" or "wrong" is depersonalizing, despite the superficially positive intent of that labeling.
For example, it labels me as a victim of abuse, regardless of how I choose to see myself. By Alicorn's own definitions (as I understand them) this is morally "wrong" for her to do -- which appears to me to demonstrate the self-contradictory (or at least inconsistent) nature of her definitions.
My own resolution to such a paradox is to assume that it's good to be considerate to individuals, but also to accept that others do not have a corresponding obligation to be considerate to me. I don't expect that Alicorn must refrain from stating her opinions about my past relationship, just because it might be inconsiderate of her to do so, nor do I feel a need to make her feel bad for implying something bad about me. And if I did feel bad about it, that would be my responsibility to fix, not hers.
And if I couldn't simply fix the problem by changing my feelings, and chose to ask Alicorn or anyone else to be more considerate in their speech, I certainly wouldn't do it by starting out with the implication that they were morally wrong and that it was unquestionably a good idea that they should take my feelings into consideration! If I was going to ask at all, I'd ask for it as what it is: a favor to a specific person.
This article is a deliberate meta-troll. To succeed, I need your trolling cooperation. Now hear me out.
In The Strangest Thing An AI Could Tell You, Eliezer talks about anosognosics, who have one arm paralyzed and - most interestingly - are in absolute denial of this: in spite of overwhelming evidence that the arm is paralyzed, they simply keep coming up with new rationalizations proving that it isn't.
Doesn't that sound like someone else we know? Yes, religious people! In spite of heaps of empirical evidence against the existence of their particular flavour of the supernatural, the internal inconsistency of their beliefs, and well-known, perfectly plausible alternative explanations, something between 90% and 98% of humans believe in a supernatural world, and are in a state of absolute denial not too dissimilar to that of anosognosics. Billions of people throughout history have even been willing to die for their absurd beliefs.
We are mostly atheists here - we happen not to share this particular delusion. But please take the outside view for a moment: how likely is it that, unlike almost everyone else, we hold no other such delusions - beliefs we are in absolute denial about in spite of mounting heaps of evidence?
If a delusion is one that all of us share, we won't be able to find it without building an AI. But we might well have some delusions peculiar to this community - that's not too unlikely, as we're a small and self-selected group.
What I want you to do is try to trigger the absolute denial macro in your fellow rationalists! Is there anything you consider proven beyond any possibility of doubt, by both empirical evidence and pure logic, that nonetheless triggers an automatic stream of rationalizations in other people when you say it? Yes, I'm pretty much asking you to troll, but it's a good kind of trolling, and I cannot think of any other way to find our delusions.