I find it very plausible that Christians are better able to pretend to be atheists than vice versa. But what follows from that?
Caplan claimed in his original piece:
the ability to pass ideological Turing tests—to state opposing views as clearly and persuasively as their proponents—is a genuine symptom of objectivity and wisdom.
Caplan gives little in the way of argument in support of this claim, and I'm not at all sure that it's true. "Genuine symptom of objectivity and wisdom", really? My objections follow.
First, there's only one way to be right but there are many ways to be wrong. So if you are right it is likely that you have only a broad survey-level view of the different varieties of wrongness. Take, for example, climate change. The scientific consensus view is narrow and everyone in the debate knows what it is. But as far as I know there are many different skeptical positions (there's no such thing as the greenhouse effect; there may be a greenhouse effect but CO₂ is not a greenhouse gas; CO₂ may be a greenhouse gas but concentrations are not increasing; CO₂ concentrations may be increasing, but they are not anthropogenic; global temperatures are not rising; temperatures may be rising but not because of CO₂; temperatures may be rising but there is no need to do anything because the net result will be beneficial; climate change may be harmful but it's too late to do anything about it; it may not be too late but there are still better things to spend money on). I think I know enough about each of these positions to be confident that it's wrong but in order to impersonate one of these positions well enough to fool people I would have to know it inside out. Exactly which wrong assumptions and wrong authorities does each of these positions depend on?
Second, the criterion of being able to state views "as clearly and persuasively as their proponents" is not as neutral as it seems. If you're right, you may have been happy to rely on the facts to do your persuading for you. But if you're wrong, then you have probably needed to employ a lot of rhetoric, salesmanship, and fallacious argumentation. These techniques take skill and practice and aren't easy to imitate. For example, there's no way that I would be able to imitate the dense texture of sneering and insinuation in the rhetoric of someone like Moldbug.
Third, in the specific case under discussion here, Christianity has a number of cultural properties that make it hard to imitate. If you are Christian, then you probably know the Bible in detail, you are probably familiar with a range of theological and apologetic texts, and you are probably embedded in a subculture with its own rules, rituals, and mores. These kinds of details take a lot of work to imitate. But the typical atheist has probably never read The God Delusion or attended any kind of atheist event, so there's nothing there that needs to be invented.
On the other hand, any Christian who can pretend to be an atheist better than an atheist can isn't a very good Christian. By doing so, they are violating the teachings of their God.
I recently gave a talk at Chicago Ideas Week on adapting Turing Tests to have better, less mindkill-y arguments, and this is the précis for folks who would prefer not to sit through the video (which is available here).
Conventional Turing Tests check whether a programmer can build a convincing facsimile of a human conversationalist. The test has turned out to reveal less about machine intelligence than about human conversation. (Anger is really easy to fake, since fights can end up a little more Markov chain-y, where you only need to reply to the most recent rejoinder and can ignore what came before.) Since normal Turing Tests made us think more about our model of human conversation, economist Bryan Caplan came up with a way to use them to make us think more usefully about our models of our enemies.
After Paul Krugman disparaged Caplan's brand of libertarian economics, Caplan challenged him to an ideological Turing Test, where both players would be human, but would be trying to accurately imitate each other. Caplan and Krugman would each answer questions about their true beliefs honestly, and then would fill out the questionnaire again in persona inimici, trying to guess the answers given by the other side. Caplan was willing to bet that he understood Krugman's position well enough to mimic it, but that Krugman would be easily spotted as a fake!Caplan.
Krugman didn't take him up on the offer, but I've run a couple of iterations of the test for my religion/philosophy blog. The first year, some of the most interesting results were the proxy variables people were using, which weren't as strong indicators as the judges thought. (One Catholic coasted through to victory as a faux atheist, since many of the atheist judges thought there was no way a Christian would appreciate the webcomic SMBC.)
The trouble was that the Christians did a lot better, since it turned out I had written boring, easy-to-guess questions for the true and faux atheists. The second year, I wrote weirder questions, and the answers were a lot more diverse and surprising (and a number of the atheist participants called out each other as fakes or just plain wrong, since we'd gotten past the shallow questions from year one, and there's a lot of philosophical diversity within atheism).
The exercise made people get curious about what their opponents actually thought and why. It helped people spot incorrect stereotypes of an opposing side and faultlines they'd been ignoring within their own. Personally (and according to other participants), it helped me argue less antagonistically. Instead of just trying to find enough of a weak point to discomfit my opponent, I was trying to build up a model of how they thought, and I needed their help to do it.
Taking a calm, inquisitive look at an opponent's position might teach me that my position is wrong, or has a gap I need to investigate. But even if my opponent is just as wrong as ze seemed, there's still a benefit to me. Having a really detailed, accurate model of zer position may help me show them why it's wrong, since now I can see exactly where it rasps against reality. And even if my conversation isn't helpful to them, it's interesting for me to see what they were missing. I may be correct in this particular argument, but the odds are good that I share the rationalist weak point that is keeping them from noticing the error. I'd like to be able to see it more clearly so I can try to spot it in my own thought. (Think of this as the shift from "How the hell can you be so dumb?!" to "How the hell can you be so dumb?")
When I get angry, I'm satisfied when I beat my interlocutor. When I get curious, I'm only satisfied when I learn something new.