I recently gave a talk at Chicago Ideas Week on adapting Turing Tests to have better, less mindkill-y arguments, and this is the precis for folks who would prefer not to sit through the video (which is available here).
Conventional Turing Tests check whether a programmer can build a convincing facsimile of a human conversationalist. The test has turned out to reveal less about machine intelligence than about human intelligence. (Anger is really easy to fake, since fights can end up a little more Markov chain-y, where you only need to reply to the most recent rejoinder and can ignore what came before.) Since normal Turing Tests made us think more about our models of human conversation, economist Bryan Caplan came up with a way to use them to make us think more usefully about our models of our enemies.
After Paul Krugman disparaged Caplan's brand of libertarian economics, Caplan challenged him to an ideological Turing Test, where both players would be human, but would be trying to accurately imitate each other. Caplan and Krugman would each answer questions about their true beliefs honestly, and then would fill out the questionnaire again in persona inimici, trying to guess the answers given by the other side. Caplan was willing to bet that he understood Krugman's position well enough to mimic it, but that Krugman would be easily spotted as a fake!Caplan.
Krugman didn't take him up on the offer, but I've run a couple iterations of the test for my religion/philosophy blog. The first year, some of the most interesting results were the proxy variables people were using, which weren't as strong indicators as the judges thought. (One Catholic coasted through to victory as a faux atheist, since many of the atheist judges thought there was no way a Christian would appreciate the webcomic SMBC.)
The trouble was, the Christians did a lot better, since it turned out I had written boring, easy-to-guess questions for the true and faux atheists. The second year, I wrote weirder questions, and the answers were a lot more diverse and surprising (and a number of the atheist participants called out each other as fakes or just plain wrong, since we'd gotten past the shallow questions from year one, and there's a lot of philosophical diversity within atheism).
The exercise made people get curious about what their opponents actually thought and why. It helped people spot incorrect stereotypes of an opposing side and fault lines they'd been ignoring within their own. Personally (and according to other participants), it helped me argue less antagonistically. Instead of just trying to find enough of a weak point to discomfit my opponent, I was trying to build up a model of how they thought, and I needed their help to do it.
Taking a calm, inquisitive look at an opponent's position might teach me that my position is wrong, or has a gap I need to investigate. But even if my opponent is just as wrong as ze seemed, there's still a benefit to me. Having a really detailed, accurate model of zer position may help me show them why it's wrong, since now I can see exactly where it rasps against reality. And even if my conversation isn't helpful to them, it's interesting for me to see what they were missing. I may be correct in this particular argument, but the odds are good that I share the rationalist weak point that is keeping them from noticing the error. I'd like to be able to see it more clearly so I can try to spot it in my own thought. (Think of this as the shift from "How the hell can you be so dumb?!" to "How the hell can you be so dumb?")
When I get angry, I'm satisfied when I beat my interlocutor. When I get curious, I'm only satisfied when I learn something new.
I'm having trouble determining the best strategy in these kinds of games, but I worry it isn't simply to sound like a genuine member of the group you're pretending to be.
For example, a liberal Christian complained that her (honest!) Christian answer did very poorly, because people associated liberalism with atheism. This suggests that the best strategy isn't necessarily to honestly list what you believe, but to list what you think a typical member of the group involved believes.
And if (for example) atheists know that the average Christian is writing about what they think the average Christian believes, then atheists, in their fake entries, will also write about what they think the average Christian believes.
Yes, if overdone, this is a sign of dishonesty; for example, anyone who was too stereotypical ("Yeah, I get up each day, read a selection from the Bible, check the Pope's Twitter account, then go to church, then go bomb an abortion clinic...") would be obviously fake. So the best strategy seems to be to write something kind of stereotypical, but to depart from stereotype in a few places so as to signal you're talking about a real person rather than a straw man.
But this strategy is identical for real Christians and sham Christians, which sort of defeats the purpose of the Ideological Turing Test. We're no longer testing whether atheists can talk like a real Christian, but whether atheists can talk like a real Christian pretending to be a stereotypical Christian, which seems like a lower bar.
I'd be interested in seeing the differences between this test and one in which, say, Christians were just asked to discuss their opinions on some topics without it being part of a Turing Test, and then atheists were asked to fake Christian opinions on those same topics. (I'd also be interested in how those same just-discuss entries would do against Christians-writing-for-a-Turing-Test entries.)
Interestingly, the entry that I was most convinced was Christian - and I was right - was one that included the phrase "and when I was in seminary...". I didn't expect any atheist to have the chutzpah to fake a priest, whereas I did expect some actual priests to read Leah's blog. This suggests that a winning strategy is to be stereotypical in unexpected ways fakers wouldn't think of, and possibly to be unstereotypical in unexpected ways fakers wouldn't think of (although obviously I can't think of any examples of this).
I had read this when it was originally posted.
And then I was referred to this, which was also written by you: http://slatestarcodex.com/2013/03/03/reactionary-philosophy-in-an-enormous-planet-sized-nutshell/
Which was sufficiently good at espousing Reactionary philosophy that I was STARTLED when I got to the end, because I had forgotten that you were only pretending to be Reactionary for the sake of an Ideological Turing Test. You were well on your way to convincing me to take a ha...