This article is a deliberate meta-troll. To be successful I need your trolling cooperation. Now hear me out.
In The Strangest Thing An AI Could Tell You, Eliezer talks about anosognosics, who have one of their arms paralyzed, and, most interestingly, are in absolute denial of this: in spite of overwhelming evidence that their arm is paralyzed, they keep coming up with new rationalizations proving it isn't.
Doesn't that sound like someone else we know? Yes, religious people! In spite of heaps of empirical evidence against the existence of their particular flavour of the supernatural, the internal inconsistency of their beliefs, and well-known, perfectly plausible alternative explanations, something between 90% and 98% of humans believe in a supernatural world, and are in a state of absolute denial not too dissimilar to that of anosognosics. Perhaps billions of people throughout history have even been willing to die for their absurd beliefs.
We are mostly atheists here; we happen not to share this particular delusion. But please consider the outside view for a moment: how likely is it that, unlike almost everyone else, we don't have any other such delusions, ones for which we are in absolute denial of the truth in spite of mounting heaps of evidence?
If a delusion is of the kind that all of us share, we won't be able to find it without building an AI. But we might well have some that only part of the group shares; that's not too unlikely, as we're a small and self-selected group.
What I want you to do is try to trigger the absolute denial macro in your fellow rationalists! Is there anything you consider proven beyond any possibility of doubt, by both empirical evidence and pure logic, and yet saying it triggers an automatic stream of rationalizations in other people? Yes, I'm pretty much asking you to troll, but it's the good kind of trolling, and I can't think of any other way to find our delusions.
Yes. But when women like Alicorn intuitively solve the signaling and negotiation game represented in their heads, using their prior belief distributions about men's hidden qualities and dispositions, their beliefs about men's utility functions conditional on disposition, and their own utility functions, their solutions predict high costs for any strategy of tolerating objectifying statements by unfamiliar men of unknown quality. It's not about whether objectification implies oppressiveness with certainty. It's about whether women think objectification is more convenient or useful to unfamiliar men who are disposed to depersonalization and oppression than to unfamiliar men who are not. If you want to change this, you have to either change some quantity in women's intuitive representation of this signaling game, improve their solution procedure, or argue for a norm that women should disregard this intuition.
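The inference being attributed here can be sketched as a simple Bayesian update: if an objectifying statement is more likely from men with a bad disposition than from men without one, observing it raises the posterior probability of the bad disposition even though it proves nothing with certainty. This is only an illustrative model of the comment's structure; every number below is made up.

```python
# Illustrative Bayesian sketch of the signaling argument above.
# All probabilities are invented; only the structure of the
# inference matters, not the specific values.

def posterior_oppressive(p_oppressive, p_objectify_given_oppressive,
                         p_objectify_given_benign):
    """P(oppressive disposition | objectifying statement) via Bayes' rule."""
    p_benign = 1.0 - p_oppressive
    numerator = p_objectify_given_oppressive * p_oppressive
    evidence = numerator + p_objectify_given_benign * p_benign
    return numerator / evidence

# Made-up prior: 20% of unfamiliar men have the bad disposition.
# Objectifying speech is assumed more "convenient", hence more likely,
# under that disposition: 60% vs 10% (also made up).
post = posterior_oppressive(0.20, 0.60, 0.10)
print(round(post, 3))  # the posterior rises well above the 0.20 prior
```

On these invented numbers the posterior triples, from 0.20 to 0.60, which is the structural point of the comment: even when objectification doesn't imply oppressiveness, its differential likelihood can make tolerating it costly in expectation.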
Change what? Your massive projection onto what "women like Alicorn" do? I'd think that would be up to you to change.
Similarly, if I don't like what Alicorn is doing, and I can't convince her to change that, then it's my problem... just as her not being able to convince men to speak the way she wants is hers.
At some point, all problems are our own problems. You can ask other people to change, but then you can either accept the world as it is, or suffer needlessly.
(To forestall the inevitable analogies and arguments: ...