Documenting my mental processes after reading this post (disclaimer: human introspection sucks, and mine is probably no exception):
Huh, this is one of the better versions of the Devil's advocate game I've ever encountered... Immediate upvote.
Huh, the poster analyzed their mistakes, learned from them and improved the challenge. Too bad I only have one upvote.
Clicking on the links... WTF, this is the girl who converted to Christianity (Catholicism? Really? Out of all the options available?) from Atheism a year or so ago... Anything she posts deserves a downvote...
Stop! What the hell am I doing? This is, like, falling prey to several biases at once. At least I should notice that I am confused. Unable to reconcile the "obviously dumb" conversion move with this quite clever post.
Wait, this is the substance of her post, to begin with!
Deciding to definitely keep the upvote and reserve judgment until after looking through the linked posts.
Even God can quote Bayes when it suits him.
Still upvoted for raw cleverness, though.
Bayes was a minister, after all. Now a divine quote of gay Turing would be a different feat altogether.
Ha!
I think the post is excellent, and I appreciated shminux's sharing his mental walkthrough.
On that same front, I find the Never-Trust-A-[Fill-in-the-blank] idea just bad. The fact that someone's wrong on something significant does not mean they are wrong on everything. It goes the other way too: field experts often believe they have similar expertise in everything, and they don't.
One quibble with the OP: I don't think a computer can pass a Turing Test, and I don't think it's close. The main issue with some past tests is that some of the humans don't try hard to be human; there should be a reward for a human who gets judged human in those tests.
Finally, I no longer understand the divide between Discussion and Main. If this isn't Main-worthy, I don't get what is. If we're making Main into something different... what is it?
I'm having trouble determining the best strategy in these kinds of games, but I'm worried that it's not simply sounding like an actual member of the group you're pretending to be.
For example, a liberal Christian complained that her (honest!) Christian answer did very poorly, because people associated liberalism with atheism. This suggests that the best strategy isn't necessarily to honestly list what you believe, but to list what you think a typical member of the group involved believes.
And if (for example) atheists know that the average Christian is writing about what they think the average Christian believes, then atheists in their fake entries will also write about what they think the average Christian believes.
Yes, if overdone, this is a sign of dishonesty; for example, anyone too stereotypical ("Yeah, I get up each day, read a selection from the Bible, check the Pope's Twitter account, then go to church, then go bomb an abortion clinic...") would be obviously fake. So the best strategy seems to be to write something somewhat stereotypical, but to depart from stereotype in a few places so as to signal you're describing a real person rather than a straw man.
But this str...
The atheists and Christians were told to be honest when writing their own responses. So they shouldn't have been trying to game it in this way.
For year three, I've been thinking of doing just this:
I'd be interested in seeing differences between this test and one in which, say, Christians were just asked to discuss their opinions on some topics without it being part of a Turing Test, and then atheists were asked to fake Christian opinions on those same topics.
I chose the topic of marriage, since people conceive of the institution as having really different purposes but usually get bogged down in the question of what laws should exist. I thought the question "How should a couple decide whether to get married?" would provoke interesting responses.
The atheists and Christians were told to be honest when writing their own responses. So they shouldn't have been trying to game it in this way.
"Honest" leaves a lot of wiggle room. If I were trying to write my honest atheist entry, what do I emphasize? That I hate scholastic philosophy and think religion set ethics back five hundred years? Or how I love C.S. Lewis and G.K. Chesterton and find many religious works to be among the most sublime creations of humankind? Both would be "honest".
Even if someone genuinely sets out not to present themselves at all, I still would expect presentation to be their main concern. There's a certain class of things which are impossible to do naturally. For example, if you try to count your natural respiratory rate, you will fail miserably; the fact that you're thinking about your breath immediately shifts it to consciously deciding what it is going to be. In my case, it makes it slower than normal. I can try to then consciously adjust by speeding it up, but since I don't know how much to speed it up, attempting to breathe naturally is basically just me trying to fake my natural breathing rate, probably badly.
I think self-presentation attempts of this sort raise some of the same problems.
I seem to be simultaneously freakishly good and bad at this game - I have, on multiple occasions and for multiple mappings of "green" and "blue," been accused of being a green pretending to be a blue (I am in fact blue), and somehow I regularly find myself discussing the finer shades of green with greens who assume I am green. (It is hard for me to think of things that are more fun than this.)
On Will Newsome's IRC channel someone mentioned the idea that you could totally automate the ITT into a mass-league game with Elo ratings and everything (assuming there was some way to verify true beliefs at the beginning). Make it happen, somebody.
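For what it's worth, the rating side of that league would be straightforward: each round, an impersonator either fools a judge or doesn't, and both get a standard Elo update. A minimal sketch (all names and the K-factor are my own assumptions, not anything from the thread):

```python
def expected_score(rating_a: float, rating_b: float) -> float:
    """Probability that player A beats player B under the Elo model."""
    return 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400.0))

def update_ratings(impersonator: float, judge: float,
                   judge_was_fooled: bool, k: float = 32.0):
    """One ITT round: the impersonator 'wins' if the judge is fooled."""
    expected = expected_score(impersonator, judge)
    actual = 1.0 if judge_was_fooled else 0.0
    delta = k * (actual - expected)
    # Zero-sum update: whatever the impersonator gains, the judge loses.
    return impersonator + delta, judge - delta

# Example: two equally rated players, judge fooled ->
# the impersonator gains 16 points and the judge loses 16.
imp, jud = update_ratings(1500.0, 1500.0, judge_was_fooled=True)
```

The hard part, as the comment notes, is verifying true beliefs at sign-up, not the bookkeeping.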
A concern with applying this kind of test to groups (Christians vs. atheists, for instance) rather than individuals is that one umbrella term may cover more views than another, making the guessing game easier or harder.
Nevertheless, this is a neat idea, especially for specific individuals rather than groups as a whole.
Considering the different sizes of the targets, I'm not sure what all this means. Like, while there are hundreds of denominations of Christianity (though the test seriously overrepresented Catholicism), atheism is barely more specific than 'Other' in terms of moral foundations and systems.
As long as you're comparing groups with different degrees of dispersion, the game is going to be trickier for one side than the other. The more finely defined a group is, the more opportunities for an outsider to miss a detail and slip up.
I find it very plausible that Christians are better able to pretend to be atheists than vice versa. But what follows from that?
Caplan claimed in his original piece:
the ability to pass ideological Turing tests—to state opposing views as clearly and persuasively as their proponents—is a genuine symptom of objectivity and wisdom.
Caplan gives little in the way of argument in support of this claim, and I'm not at all sure that it's true. "Genuine symptom of objectivity and wisdom", really? My objections follow.
First, there's only one way to be rig...
Turing Tests
It's called the imitation game. The Turing test is the imitation game when one player is a machine and the other player is a person.
Some of the response posts talk about "attractiveness scores", but I didn't find those in the data summaries. Did those ever happen? I think it'd be more interesting if people wrote their genuinely best arguments for each side, and we measured how much the reader is persuaded, instead of many participants (as far as I can tell) trying to pretend that they're average and are persuaded by average arguments.
Of course, it's already pretty interesting as-is, and it's nice that someone actually tried out the exercise!
ETA: the other thing is that I susp...
What's a good term for "being able to pass an ideological Turing test"? (Being able to pass an ITT is related to being able to argue both sides of a debate, being able to accurately explain your opponent's position, being able to summarize the strongest counterargument to your position, etc.)
Following the original analogy, is there a term for "a machine that's able to pass a Turing test"? My googling didn't turn up anything. But if there was ("a machine is called Turing-(blank) if it can pass a Turing test"), then it seems we could adapt it fairly easily to the ITT: someone is ideologically Turing-(blank) if they can pass an ITT.
Any suggestions to fill in the blank?
I think the ITT may test more for personal experience than ability to model. For instance, how well would this group of Christians and atheists do trying to imitate Muslims, Buddhists, Hindus, Shintoists, Zoroastrians, and members of other religions even less well-known in the Western world? Most English speakers have some familiarity with a branch of Christianity. How many have explored Shamanism (I haven't)? Many atheists have gone through the motions of Christianity or some other religion in the past and have a comparatively easier time writing about how that r...
I want to highlight the use of the above approach to argument for resolving mundane conflicts as a Bayesian.
Step 1: Run your Turing Test on the conflict. (This does need to run on both sides; it is your own missing info we will focus on.)
Step 2: Compare your results to their actual model. Highlight the individual differences.
Step 3: Item by item, quiz them for info on the differences. Why choose this? What alternatives did you consider? Why not take those alternatives?
*The goal is to i...
BTW, I accidentally passed the gender Turing Test (which ISTR pre-dated and inspired the actual Turing Test). :-)
I think we've had discussions before on argumentative techniques, and usually the first step is being able to state the other person's argument as convincingly as they can - pass a Turing Test on imitating them.
I've read this three times now and am still not sure how to interpret:
I may be correct in this particular argument, but the odds are good that I share the rationalist weak-point that is keeping them from noticing the error.
I have concluded that it's vague, for the following reasons:
A. I don't know if "this particular argument" refers to your argument for atheism, for Christianity, or your arguments to convince the opponents that you are an atheist or a Christian.
B. Your phrase "the rationalist weak-point" is unspecified. At first...
I like this a lot. It's often said by conservative commentators that conservatives completely understand liberals, but liberals do not understand conservatives at all. I think there's truth to that; I would love to see some experiments like this, hehehe...
I recently gave a talk at Chicago Ideas Week on adapting Turing Tests to have better, less mindkill-y arguments, and this is the précis for folks who would prefer not to sit through the video (which is available here).
Conventional Turing Tests check whether a programmer can build a convincing facsimile of a human conversationalist. The test has turned out to reveal less about machine intelligence than human intelligence. (Anger is really easy to fake, since fights can end up a little more Markov chain-y, where you only need to reply to the most recent rejoinder and can ignore what came before). Since normal Turing Tests made us think more about our model of human conversation, economist Bryan Caplan came up with a way to use them to make us think more usefully about our models of our enemies.
After Paul Krugman disparaged Caplan's brand of libertarian economics, Caplan challenged him to an ideological Turing Test, where both players would be human, but would be trying to accurately imitate each other. Caplan and Krugman would each answer questions about their true beliefs honestly, and then would fill out the questionnaire again in persona inimici - trying to guess the answers given by the other side. Caplan was willing to bet that he understood Krugman's position well enough to mimic it, but that Krugman would be easily spotted as a fake!Caplan.
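Scoring a round of that format is simple in principle: judges label each entry as genuine or faked, and an impersonator passes to the extent that judges mislabel their faked answers. A minimal sketch, with all names illustrative rather than anything Caplan specified:

```python
def pass_rate(judge_labels, genuine_label="genuine"):
    """Fraction of judges who labeled a faked entry as genuine.

    judge_labels: one label per judge for a single faked entry.
    """
    fooled = sum(1 for label in judge_labels if label == genuine_label)
    return fooled / len(judge_labels)

# Example: 7 of 10 judges believed a faux-atheist entry was genuine,
# so that entry's pass rate is 0.7.
labels = ["genuine"] * 7 + ["fake"] * 3
rate = pass_rate(labels)
```

A pass rate near 0.5 means the judges were at chance, i.e. the impersonation was indistinguishable from the real thing.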
Krugman didn't take him up on the offer, but I've run a couple iterations of the test for my religion/philosophy blog. The first year, some of the most interesting results were the proxy variables people were using, which weren't as strong indicators as the judges thought. (One Catholic coasted through to victory as a faux atheist, since many of the atheist judges thought there was no way a Christian would appreciate the webcomic SMBC.)
The trouble was, the Christians did a lot better, since it turned out I had written boring, easy-to-guess questions for the true and faux atheists. The second year, I wrote weirder questions, and the answers were a lot more diverse and surprising (and a number of the atheist participants called out each other as fakes or just plain wrong, since we'd gotten past the shallow questions from year one, and there's a lot of philosophical diversity within atheism).
The exercise made people curious about what their opponents actually thought and why. It helped people spot incorrect stereotypes of the opposing side and fault lines they'd been ignoring within their own. Personally (and according to other participants), it helped me argue less antagonistically. Instead of just trying to find enough of a weak point to discomfit my opponent, I was trying to build up a model of how they thought, and I needed their help to do it.
Taking a calm, inquisitive look at an opponent's position might teach me that my position is wrong, or has a gap I need to investigate. But even if my opponent is just as wrong as ze seemed, there's still a benefit to me. Having a really detailed, accurate model of zer position may help me show them why it's wrong, since now I can see exactly where it rasps against reality. And even if my conversation isn't helpful to them, it's interesting for me to see what they were missing. I may be correct in this particular argument, but the odds are good that I share the rationalist weak-point that is keeping them from noticing the error. I'd like to be able to see it more clearly so I can try and spot it in my own thought. (Think of this as the shift from "How the hell can you be so dumb?!" to "How the hell can you be so dumb?")
When I get angry, I'm satisfied when I beat my interlocutor. When I get curious, I'm only satisfied when I learn something new.