Change blindness is the phenomenon whereby people fail to notice changes in a scene if they're not directed to pay attention to them. There are countless videos online demonstrating this effect (one of my favorites here, by Richard Wiseman).

One of the most audacious and famous experiments is known informally as "the door study": an experimenter asks a passerby for directions, but is interrupted by a pair of construction workers carrying an unhinged door, concealing another person whom replaces the experimenter as the door passes. Incredibly, the person giving directions rarely notices they are now talking to a completely different person. This effect was reproduced by Derren Brown on British TV (here's an amateur re-enactment).

Subsequently, a pair of Swedish researchers familiar with sleight-of-hand magic conceived a new twist on this line of research, arguably even more audacious: have participants make a choice, then quietly swap that choice for something else. People not only fail to notice the change, but confabulate reasons why they had preferred the counterfeit choice (video here). They called their new paradigm "Choice Blindness".

Just recently, the same Swedish researchers published a new study that is even more shocking. Rather than demonstrating choice blindness by having participants choose between two photographs, they demonstrated the same effect with moral propositions. Participants completed a survey asking them to agree or disagree with statements such as "large scale governmental surveillance of e-mail and Internet traffic ought to be forbidden as a means to combat international crime and terrorism". When they reviewed their copy of the survey, their responses had been covertly changed, but 69% failed to notice at least one of two changes, and when asked to explain their answers, 53% argued in favor of what they falsely believed was their original choice, when they had previously indicated the opposite moral position (study here, video here).


I find myself thinking "I remember believing X. Why did I believe X? Oh right, because Y and Z. Yes, I was definitely right" with alarming frequency.

When reading old LW posts and comments and seeing that I've upvoted some comment, I find myself thinking "Wait, why did I upvote this comment?"

2John_Maxwell
This doesn't seem obviously bad to me... You just have to differentiate the times when you have a gut feeling that something's true because you worked it out before from the times when it's because of some stupid reason, like your parents telling it to you when you were a kid. Right? I think I can tell apart rationalizations I'm creating on the spot from reasoning I remember constructing in the past. And if I'm creating rationalizations on the spot, I make an effort to rationalize in the opposing direction a bit for balance.
Shmi230

An instrumental question: how would you exploit this to your advantage, were you dark-arts inclined? For example, if you are a US presidential candidate, what tactics would you use to invisibly switch voters' choice to you? Given that you are probably not better at it than the professionals in each candidate's team, can you find examples of such tactics?

Claim to agree with them on issue X, then once they've committed to supporting you, change your position on issue X.

Come to think of it, politicians already do this.

1MTGandP
Interestingly, the other major party never seems to fail to notice. Right now there are endless videos on YouTube of Romney's flip-flopping, and Republicans reacted similarly to Kerry's waffling in 2004. But for some reason, supporters of the candidate in question either don't notice or don't care.
0Hawisher
Isn't the fact that (in American politics, at least) either (1) a politician's stance on any given topic is highly mutable, or (2) a politician's stance may perfectly reasonably disagree with that of some of his supporters (given that the politician one supports is at best a best-effort compromise rather than, in most cases, a perfect representation of one's beliefs) so widely known as to eliminate or alleviate that effect?
0DaFranker
I don't see how either or both options you've presented change the point in any way; if politicians claim to agree on X until you agree to vote for them, then turn out to revert to their personal preference once you've already voted for them, then while you may know they're mutable or a best-effort-compromise, you've still agreed with a politician and voted for them on the basis of X, which they now no longer hold. That they are known to have mutable stances or be prone to hidden agendas only makes this tactic more visible, but also more popular, and by selection effects makes the more dangerous instances of this even more subtle and, well, dangerous.
0Hawisher
I would argue that the chief difference between picking a politician to support and choosing answers based on one's personal views of morality is that the former is self-evidently mutable. If a survey-taker was informed beforehand that the survey-giver might or might not change his responses, it is highly doubtful the study in question would have these results.

While meeting with voters in local community halls, candidates sometimes go around distributing goodwill tokens or promises while thanking people for supporting them, whether the person actually seems to support them or not.

It's not a very strong version, and it's tinged with some guilt-tripping, but matches the pattern under some circumstances and very well might trigger the choice blindness in some cases.

Dark tactic: Have we verified that it doesn't work to present people with a paper saying what their opinion is, even if they did NOT fill anything out? This tactic is based on that possibility:

  1. An unethical political candidate could have campaigners get a bunch of random people together and hand them a falsified survey with their name on it, making it look like they filled it out. The responses support the candidate.

  2. The unethical campaigner might then say: "A year ago (too long for most people to remember the answers they gave on tests), you filled out a survey with our independent research company, saying you support X, Y and Z." If they sound authoritative enough, people might believe this.

  3. "These are the three key parts of my campaign! Can you explain why you support these?"

  4. (victim explains)

  5. "Great responses! Do you mind if we use these?"

  6. (victim may feel compelled to say yes or seem ungrateful for the compliment)

  7. "I think your family and friends should hear what great supports you have for your points on this important issue, don't you?"

  8. (now new victims will be dragged in)

  9. The responses that were given are used to make it look like there's a consensus.

4A1987dM
For me at least, one year is also too long for me to reliably hold the same opinion, so if you did that to me, I think I'd likely say something like “Yeah, I did support X, Y and Z back then, but now I've changed my mind.” (I'm not one to cache opinions about most political issues -- I usually recompute them on the fly each time I need them.)
2MugaSofer
Someone should see if this works. Of course, you need to filter for people who fill out surveys.
0DaFranker
Idea: Implement feedback surveys for lesswrong meta stuff, and slip in a test for this tactic in one of the surveys a few surveys in. Having a website as a medium should make it even harder for people to speak up or realize there's something going on, and I figure LWers are probably the biggest challenge. If LWers fall into a trap like this, that'd be strong evidence that you could take over a country with such methods.
8ModusPonies
That would be very weak evidence that you could take over a country with such methods. It would be strong evidence that you could take over a website with such methods.
8fubarobfusco
Break into someone's blog and alter statements that reflect their views.
7Epiphany
Dark Tactic: This one makes me sick to my stomach. Imagine some horrible person wants to start a cult. So they get a bunch of people together and survey them, asking things like:

"I don't think that cults are a good thing."
"I'm not completely sure that (horrible person) would be a good cult leader."

and switches those with:

"I think that cults are a good thing."
"I'm completely sure that (horrible person) would be a good cult leader."

And the horrible person shows the whole room the results of the second set of questions, showing that there's a consensus that cults are a good thing and that most people are completely sure that (horrible person) would be a good cult leader. Then the horrible person asks individuals to support their conclusions about why cults are a good thing and why they would be a good leader. Then the horrible person starts asking for donations and commitments, etc.

Who do we tell about these things? There are organizations for reporting security vulnerabilities in computer systems so the professionals get them... where do you report security vulnerabilities for the human mind?
8ChristianKl
If you start a cult, you don't tell people that you're starting a cult. You tell them: look, there's this nice meetup. All the people in that meetup are cool. The people in that group think differently than the rest of the world. They are better. Then there are those retreats where people spend a lot of time together and become even better and more different from the average person on the street. Most people in the LessWrong community don't see it as a cult, and the same is true of members of most organisations that are seen as cults.
6John_Maxwell
That's not too different from the description of a university though.
4wedrifid
Do you? Really? That works? When creating an actual literal cult? This is counter-intuitive.
5Endovior
The trick: you need to spin it as something they'd like to do anyway... you can't just present it as a way to be cool and different, you need to tie it into an existing motivation. Making money is an easy one, because then you can come in with an MLM structure, and get your cultists to go recruiting for you. You don't even need to do much in the way of developing cultic materials; there's plenty of stuff designed to indoctrinate people in anti-rational pro-cult philosophies like "the law of attraction" that are written in a way so as to appear as guides for salespeople, so your prospective cultists will pay for and perform their own indoctrination voluntarily. I was in such a cult myself; it's tremendously effective.
3ChristianKl
If you want to reach a person who feels lonely, having a community of like-minded people who accept them can be enough. You don't necessarily need stuff like money.
2Endovior
Agreed. Emotional motivations make just as good a target as intellectual ones. If someone already feels lonely and isolated, then they have a generally exploitable motivation, making them a prime candidate for any sort of cult recruitment. That kind of isolation is just what cults look for in a recruit, and most try to create it intentionally, using whatever they can to cut their cultists off from any anti-cult influences in their lives.
5wedrifid
Agree, except I'd strengthen this to "a much better".
1NancyLebovitz
It works. Especially if you can get people away from their other social contacts. Mix in insufficient sleep and a low protein diet, and it works really well. (Second-hand information, but there's pretty good consensus on how cults work.) How do you think cults work?
2Nornagest
I'd question "really well". Cult retention rates tend to be really low -- about 2% for Sun Myung Moon's Unification Church ("Moonies") over three to five years, for example, or somewhere in the neighborhood of 10% for Scientology. The cult methodology seems to work well in the short term and on vulnerable people, but it seriously lacks staying power: one reason why many cults focus so heavily on recruiting, as they need to recruit massively just to keep up their numbers. Judging from the statistics here, retention rates for conventional religious conversions are much higher than this (albeit lower than retention rates for those raised in the church).
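(A back-of-envelope sketch of what a ~2% multi-year retention rate implies for recruitment; the 10,000-member figure and the four-year window are arbitrary assumptions for illustration, and only the 2% comes from the figures above.)

```python
# Back-of-envelope arithmetic, not data: assume a cult wants to hold steady
# at 10,000 members when only ~2% of a cohort remains after roughly 4 years
# (the Unification Church figure quoted above; the member count is made up).
members = 10_000
retention = 0.02     # fraction of a cohort still around after ~4 years
window_years = 4

retained = members * retention            # ~200 stick around long-term
churned = members - retained              # ~9,800 must be replaced per window
recruits_per_year = churned / window_years
print(f"Recruits needed per year just to stand still: ~{recruits_per_year:,.0f}")
```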
3NancyLebovitz
I guess "really well" is ill-defined, but I do think that both Sun Myung Moon and L. Ron Hubbard could say "It's a living". You can get a lot out of people in the three to five years before they leave.
4Shmi
Note that the term "cult" is an instance of the worst argument in the world (guilt by association). The neutral term is NRM. Thus to classify something as a cult one should first tick off the "religious" check mark, which requires spirituality, a rather nebulous concept. If you define cult as an NRM with negative connotations, then you have to agree on what those negatives are, not an easy task.
4fubarobfusco
"NRM" is a term in the sociology of religion. There are many groups that are often thought of as "cultish" in the ordinary-language sense that are not particularly spiritual. Multi-level marketing groups and large group awareness training come to mind.
2gwern
This is basically true, although I had a dickens of a time finding specifics in the religious/psychology/sociological research - everyone is happy to claim that cults have horrible retention rates, but none of them seem to present much beyond anecdotes.
0Nornagest
I'll confess I was using remembered statistics for the Moonies, not fresh ones. The data I remember from a couple of years ago seems to have been rendered unGooglable by the news of Sun Myung Moon's death. Scientology is easier to find fresh statistics for, but harder to find consistent statistics for. I personally suspect the correct value is lower, but 10% is about the median in easily accessible sources.
4A1987dM
Click on “Search tools” at the bottom of the menu on the left side of Google's search results page, then on “Custom range”.
1wedrifid
I like what you say, but it's not much like what ChristianKl said. I think he was exaggerating rather a lot to try to make something fit when it doesn't particularly.
0ChristianKl
What's an actual literal cult? When I went to the Quantified Self conference in Amsterdam last year, I heard the allegation that Quantified Self is a cult after I explained it to someone who lived at the place I stayed for the weekend. I also had to defend against the cult allegation when explaining the Quantified Self community to journalists. Which groups are cults depends a lot on the person who's making the judgement. There are, however, also groups where we can agree that they are cults. I would say that the principle applies to an organisation like the Church of Scientology.
0Pentashagon
I think that's known as voter fraud. A lot of people believe (and tell others to believe) that certain candidates were legally and fairly elected even when exit polls show dramatically different results. Although of course this could work the same way if exit polls were changed to reflect the opposite outcome of an actually fair election and people believed the false exit polls and demanded a recount or re-election. It just depends on which side can effectively collude to cheat.
8Epiphany
No. What I'm saying here is that, using this technique, it might not be seen as fraud. If the view on "choice blindness" is that people are actually changing their opinions, it would not be technically seen as false to claim that those are their opinions. Committing fraud would require you to lie. This may be a form of brainwashing, not a new way to lie. That's why this is so creepy.
5Haladdin
Online dating. Put up a profile that suggests certain personality traits and interests. In a face-to-face meetup, even if you're someone different than was advertised, choice blindness should cover up the fact. Presumably this tactic can also be extended to job resumes.
3khafra
Either that's already a well-used tactic amongst online daters, or 6'1", 180lb guys who earn over $80k/year are massively more likely to use online dating sites than the average man.
3A1987dM
I wouldn't like to be standing in the shoes of someone who tried that and it didn't work.
1wedrifid
Why? Just go interview somewhere else. The same applies for any interview signalling strategy.
3A1987dM
I meant in the shoes of the candidate, not the interviewer. If that happened to me, I would feel like my status-o-meter started reading minus infinity.
0Vaniver
Tom N. Haverford comes to mind.
5siodine
The problem is that we don't know how influential the blind spot is. It could just fade away after a couple minutes and a "hey, wait a minute..." But assuming it sticks:

If I were a car salesman, I would have potential customers tell me their ideal car, and then I would tell them what I want their ideal car to be as though I were simply restating what they just said.

If I were a politician, I would target identities (e.g., Latino, pro-life, low taxes, etc.) rather than individuals, because identities are made of choices and they're easier to target than individuals. The identity makes a choice and then you assume the identity chose you. E.g., "President Obama has all but said that I'm instigating "class warfare," or that I don't care about business owners, or that I want to redistribute wealth. Well, Mr. Obama, I am fighting with and for the 99%; the middle class; the inner city neighborhoods that your administration has forgotten; Latinos; African-Americans. We all have had enough of the Democrats' decades-long deafness towards our voice. Vote Romney."

Basically, you take the opposition's reasons for not voting for you, assume those reasons apply to the opposition instead, and run the ads in the areas you want to affect.

I don't like either presidential candidate. I need to say that before I say this: using current rather than past political examples is playing with fire.

-2siodine
I completely agree with you; there shouldn't be any problems discussing political examples where you're only restating a campaign's talking points rather than supporting one side or the other.
4Alejandro1
I vaguely remember that when a president becomes very widely accepted as a good or bad president, many people will misremember that they voted for or against him respectively; e.g. far fewer people would admit (even to themselves) to having voted for Nixon than the number that actually voted for him. If this is so, then maybe the answer is simply "Win, and be a good president".
Shmi210

"Win, and be a good president"

That would not be an instrumentally useful campaigning strategy.

2TimS
Now I'm alternating between laughing and crying. :(
0Epiphany
Awww. I might have discovered a flaw in this study, TimS. Here you go
1Epiphany
Imagine answering a question like "I think such and such candidate is not a very good person." and then it gives you a button where you can automatically post it to your twitter / facebook. When you read the post on your twitter, it says "I think such and such candidate is a very good person." but you don't notice the wording has changed. :/ I wonder if people would feel compelled to confabulate reasons why they posted that on their accounts. It might set off their "virus" radars because of the online context and therefore not trigger the same behavior.
0Epiphany
Dark Tactic:

  1. An unwitting research company could be contracted to do a survey by an unethical organization.

  2. The survey could use the trick of asking a question that people will mostly say "yes" to, and then asking a similar question later whose wording is slightly changed to agree with the viewpoint of the unethical organization.

  3. Most people end up saying they agree with the viewpoint of the unethical organization.

  4. The reputation of the research company is abused as the unethical organization claims they "proved" that most people agree with their point of view.

  5. A marketing campaign is devised around the false evidence that most people agree with them.

They already trick people in less expensive ways, though. I was taught in school that they'll do things like ask 5 doctors whether they recommend something and then say "4 of 5 doctors recommend this" to imply 4 of every 5 doctors, when their sample was way too small.
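(A quick sketch of how weak that evidence is: assuming, purely for illustration, that each of the 5 doctors independently recommends the product with probability 0.5, the headline-friendly result comes up by chance nearly a fifth of the time.)

```python
from math import comb

# If only 5 doctors are polled and each genuinely recommends the product
# with probability 0.5 (an assumption for illustration), how often does the
# poll produce the "4 out of 5 doctors recommend it" headline by chance?
n, p = 5, 0.5
p_at_least_4 = sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in (4, 5))
print(f"P(at least 4 of 5 agree by chance) = {p_at_least_4:.4f}")  # 0.1875
```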

One of the most audacious and famous experiments is known informally as "the door study": an experimenter asks a passerby for directions, but is interrupted by a pair of construction workers carrying an unhinged door, concealing another person whom replaces the experimenter as the door passes. Incredibly, the person giving directions rarely notices they are now talking to a completely different person. This effect was reproduced by Derren Brown on British TV (here's an amateur re-enactment).

I think the response of the passerby is quite reasonable, actually. Confronted with a choice between (a) "the person asking me directions was just spontaneously replaced by somebody different, also asking me directions," and (b) "I just had a brain fart," I'll consciously go for (b) every time, especially considering that I make similar mistakes all the time (confusing people with each other immediately after having encountered them). I know that this is probably not a phenomenon that occurs at the conscious level, but we should expect the unconscious level to be even more automatic.

...Confronted with a choice between (a) "the person asking me directions was just spontaneously replaced by somebody different, also asking me directions," and (b) "I just had a brain fart," I'll consciously go for (a) every time, especially considering that I observe similar phenomena all the time (people spontaneously replacing each other immediately after having encountered them). ...

I'm curious, why do you take that view?

Missed it on the first read-through, heh. Excellent try.

3A1987dM
I didn't notice until I read Swimmer963's comment. I did remember reading its parent and did remember that it said something sensible, so when I read the altered quotation I thought I had understood it to be ironic.
5Swimmer963 (Miranda Dixon-Luinenburg)
Am I the only one who's really confused that this comment is quoting text that is different than the excerpt in the above comment?

Shhhhh! You're ruining the attempt at replication!

0[anonymous]
No. Maybe Mao is joking?
0RobFisher
I didn't notice at first, but only because I did notice that you were quoting the comment above which I had just read and skipped over the quote.
8jimmy
What a coincidence, this happened to me with your comment! I originally read your name as "shminux" and was quite surprised when I reread it. If there's some coding magic going on behind the scenes, you've got me good. But I'm sticking with (b) - final answer.
6Shmi
For the record, I fully endorse simplicio's analysis :)
6Robert Miles
A rational prior for "the person asking me directions was just spontaneously replaced by somebody different, also asking me directions" would be very small indeed (that naturally doesn't happen, and psych experiments are rare). A rational prior for "I just had a brain fart" would be much bigger, since that sort of thing happens much more often. So at the end, a good Bayesian would assign a high probability to "I just had a brain fart", and also a high probability to "This is the same person" (though not as high as it would be without the brain fart). The problem is that the conscious mind never gets the "I just had a brain fart" belief. The error is unconsciously detected and corrected but not reported at all, so the person doesn't even get the "huh, that feels a little off" feeling which is in many cases the screaming alarm bell of unconscious error detection. Rationalists can learn to catch that feeling and examine their beliefs or gather more data, but without it I can't think of a way to beat this effect at all, short of paying close attention to all details at all times.
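(To put rough numbers on the above, a minimal sketch of the update; every prior and likelihood here is invented for illustration, none come from the study.)

```python
# H1: the person was literally swapped mid-conversation.
# H2: I misremembered what they looked like (a "brain fart").
# All numbers are illustrative assumptions.
prior_swap = 1e-6        # person-swaps essentially never happen in daily life
prior_brainfart = 1e-2   # small perception/memory glitches are common

# Assume "this person looks a bit different" is equally likely under both.
likelihood = 0.9

post_swap = prior_swap * likelihood
post_brainfart = prior_brainfart * likelihood
total = post_swap + post_brainfart

print(f"P(swap | looks different)       = {post_swap / total:.6f}")
print(f"P(brain fart | looks different) = {post_brainfart / total:.6f}")
# The priors dominate: "brain fart" wins by about four orders of magnitude.
```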
5ArisKatsaris
And a sufficiently large change gets noticed...
1Decius
Really? Did any of them refuse to give the camera to the new people, because they weren't the owners of the camera?
2Alicorn
If you watch the video closely, the camera actually prints out a picture of the old guys, so the old guys are clearly at least involved with the camera in some way.
2Haladdin
Schizophrenia. Capgras Delusion. I wonder how schizophrenics would comparatively perform on the study.

A man who'd spent some time institutionalized said that the hell of it was that half of what you were seeing was hallucinations and the other half was true things that people won't admit to. Unfortunately, I didn't ask him for examples of the latter.

1thomblake
Or perhaps fortunately!

Can someone sneakily try this on me? I like silly questionnaires, polls, and giving opinions, so it should be easy.

You said in a previous thread that after a hard day of stealing wifi and lobbying for SOPA, you and Chris Brown like to eat babies and foie gras together. Can you explain your moral reasoning behind this?

The geese and babies aren't sentient, wifi costs the provider very little, that's actually a different Chris Brown, and I take the money I get paid lobbying for SOPA and donate it to efficient charities!

(Sorry, couldn't resist when I saw the "babies" part.)

5jeremysalwen
I'll make sure to keep you away from my body if I ever enter a coma...

Oh don't worry, there will always be those little lapses in awareness. Even supposing you hide yourself at night, are you sure you maintain your sentience while awake? Ever closed your eyes and relaxed, felt the cool breeze, and for a moment, forgot you were aware of being aware of yourself?

Now that's one ultimate rationalization. The standard pattern is to decide (or prefer) something for one reason, then confabulate more honourable reasons why we decided (or preferred) thus.

But confabulating for something we didn't even decide… that's taking things up a notch.

I bet the root problem is that we often resolve cognitive dissonance before it even hits the conscious level. Could we train ourselves to notice such dissonance instead?

Could we train ourselves to notice such dissonance instead?

This needs to get a spot in CFAR's training program(s/mme(s)?). It sounds like the first thing you'd want to do once you reach the rank of second-circle Initiate in the Bayesian Conspiracy. Or maybe the first part of the test to attain this rank.

An alternate explanation:

Maybe the years of public schooling that most of us receive cause us to trust papers so much that, if we see something written down on a paper, we feel uncomfortable opposing it. If you're threatened with punishment for not regurgitating what is on an authority's papers, daily, for that many years of your life, you're bound to be classically conditioned to behave as if you agree with papers.

So maybe what's going on is this:

  1. You fill out a scientist's paper.

  2. The paper tells you your point of view. It looks authoritative because it's in writing.

  3. You feel uncomfortable disagreeing with the authority's paper. School taught you this was bad.

  4. Now the authority wants you to support the opinion they think is yours.

  5. You feel uncomfortable with the idea of failing to show the authority that you can support the opinion on the paper. (A teacher would not have approved - and you'd look stupid.)

  6. You might want to tell the authority that it's not your opinion, but they have evidence that you believe it - it's in writing.

  7. You behave according to your conditioning by agreeing with the paper, and do as expected by supporting what the researcher thinks your point of view is.

7orthonormal
It seems to me that this hypothesis is more of a mechanism for choice blindness than an alternate explanation: we already know that human beings will change their minds (and forget they've done so) in order to please authority. (There's nonfictional evidence for this, but I need to run, so I'll just mention that we've always been at war with Oceania.)
2Epiphany
What I'm saying is "Maybe they're only pretending to have an opinion that's not theirs", not "They've changed their minds for authority", so I still think it is an alternate explanation for the results.
8TheOtherDave
IIRC, part of the debriefing protocol for the study involved explaining the actual purpose of the study to the subjects and asking them if there were any questions where they felt the answers had been swapped. If they at that point identified a question as having fallen into that category, it was marked as retrospectively corrected, rather than uncorrected. Of course, they could still be pretending, perhaps out of embarrassment over having been rooked.
0Epiphany
I'm having trouble interpreting what your point is. It seems like you're saying "because they were encouraged to look for swapped questions beforehand, Epiphany's point might not be valid". However, what I read stated: "After the experiment, the participants were fully debriefed about the true purpose of the experiment." So it may not have even occurred to most of them to wonder whether the questions had been swapped at the point when they were giving confabulated answers. Does this clarify anything? It seems somebody got confused. Not sure who.
3TheOtherDave
IIRC, questions that were scored as "uncorrected" were those that, even after debriefing, subjects did not identify as swapped. So if Q1 is scored as uncorrected, part of what happened is that I gave answer A to Q1, it's swapped for B, I explained why I believe B, I was afterwards informed that some answers were swapped and asked whether there were any questions I thought that was true for, even if I didn't volunteer that judgment at the time, and I don't report that this was true of Q1. If I'm only pretending to have an opinion (B) that's not mine about Q1, the question arises of why I don't at that time say "Oh, yeah, I thought that was the case about Q1, since I actually believe A, but I didn't say anything at the time." As I say, though, it's certainly possible... I might continue the pretense of believing B.

Move to Main, please!

I have to wonder if many of the respondents in the survey didn't hold any position with much strength in the first place. Our society enforces the belief, not only that everyone is entitled to their opinions, but that everyone should have an opinion on just about any issue. People tend to stand by "opinions" that are really just snap judgments, which may be largely arbitrary.

If the respondents had little basis for determining their responses in the first place, it's unsurprising if they don't notice when they've been changed, and that it doesn't affect their ability to argue for them.

2k3nt
But the study said: "The statements in condition two were picked to represent salient and important current dilemmas from Swedish media and societal debate at the time of the study."
4Vaniver
Even then, people can fail to have strong opinions on issues in current debate; I know my opinions are silent on many issues that are 'salient and important current dilemmas' in American society.
9MaoShan
I remember an acquaintance of mine in high school (maybe it was 8th grade) replied to a teacher's question with "I'm Pro-who cares". He was strongly berated by the teacher for not taking a side, when I honestly believe he had no reason to care either way.
5TheOtherDave
IIRC, the study also asked people to score how strongly they held a particular opinion, and found a substantial (though lower) rate of missed swaps for questions they rated as strongly held. I would not expect that result were genuine indifference among options the only significant factor, although I suppose it's possible people just mis-report the strengths of their actual opinions.
2simplicio
Quite. My own answer to most of the questions in the survey is "Yes/No, but with the following qualifications." It's not too hard for me to imagine choosing, say, "Yes" to the surveillance question (despite my qualms), then being told I said "No," and believing it. You won't fool these people if you ask them about something salient like abortion.
0MugaSofer
Abortion is a complex issue. You could probably change someone's position on one aspect of the abortion debate, such as a hardline pro-lifer "admitting" that it's OK in cases where the mother's life is in danger.
1Unnamed
There is a long tradition in social science research, going back at least to Converse (1964), holding that most people's political views are relatively incoherent, poorly thought-through, and unstable. They're just making up responses to survey questions on the spot, in a way that can involve a lot of randomness. This study demonstrates that plus confabulation, in a way that is particularly compelling because of the short time scale involved and the experimental manipulation of what opinion the person was defending.

There are cognitive strategies that (heuristically) take advantage of the usually-persistent world. Should I be embarrassed, after working and practicing with pencil and paper to solve arithmetic problems, that I do something stupid when someone changes the properties of pencil and paper from persistent to volatile?

What I'd like to see is more aboveboard stuff. Suppose that you notify someone that you're showing them possibly-altered versions of their responses. Can we identify which things were changed when explicitly alerted? Do we still confabulate (probably)? Are the questions that we still confabulate on questions that we're more uncertain about - more ambiguous wording, more judgement required?

4TheOtherDave
I don't have citations handy, but IIRC in general inattentional blindness effects are greatly diminished if you warn people ahead of time, which should not be surprising. I don't know what happens if you warn people between the filling-out-the-questionnaire stage and the reading-the-(possibly altered)-answers stage; I expect you'd get a reduced rate of acceptance of changed answers, but you'd also get a not-inconsiderable rate of rejection of unchanged answers. More generally: we do a lot of stuff without paying attention to what we're doing, but we don't keep track of what we did or didn't pay attention to, and on later recollection we tend to confabulate details into vague memories of unattended-to events. This is a broken system design, and it manifests in a variety of bugs that are unsurprising once we let go of the intuitive but false belief that memory is a process of retrieving recordings into conscious awareness. It frequently startles me how tenacious that belief is.

Another explanation:

Might this mean they trust external memories of their opinions more than their own memories? Know what that reminds me of? Ego. Some people trust others more than themselves when it comes to their view of themselves. And that's why insults hurt, isn't it? Because they make you doubt yourself. Maybe people do this because of self-doubt.

Shmi100

1 karma point to anyone who links to a LW thread showing this effect (blind change of moral choice) in action. 2 karma points if you catch yourself doing it in such a thread.

Shmi200

A real-life example of a similar effect: I explained the Newcomb problem to a person and he two-boxed initially, then, after some discussion, he switched to one-boxing and refused to admit that he ever two-boxed.

jimmy200

This is common enough that I specifically watch out for it when asking questions that people might have some attachment to. Just today I didn't even ask because I knew I was gonna get a bogus "I've always thought this" answer.

I know a guy who "has always been religious" ever since he almost killed himself in a car crash.

My mom went from "Sew it yourself" to "Of course I'll sew it for you, why didn't you ask me earlier?" a couple of weeks later, because she offered to sew something for my brother-in-law, which would make her earlier decision incongruent with her self-image. Of course, she was offended when I told her that I did :p

pjeby110

I know a guy who "has always been religious" ever since he almost killed himself in a car crash.

My wife, not long before she met me, became an instant non-smoker and was genuinely surprised when friends offered her cigarettes -- she had to make a conscious effort to recall that she had previously smoked, because it was no longer consistent with her identity, as of the moment she decided to be a non-smoker.

This seems to be such a consistent feature of brains under self-modification that the very best way to know whether you've really changed your mind about something is to see how hard it is to think the way you did before, or how difficult it is to believe that you ever could have thought differently.

2thomblake
It's the best way I've seen to quit smoking - it seems to work every time. The ex-smoker says "I'm a non-smoker now" and starts badmouthing smokers - shortly they can't imagine doing something so disgusting and inconsiderate as smoking.
2wedrifid
The second of these claims would be extremely surprising to me, even if weakened to '90% of the time' to allow for figures of speech. Even a success rate of 50% would be startling. I don't believe it.
4thomblake
It's not surprising to me, though I imagine it's vulnerable to massive selection effect. My observation is about people who actually internalized being a non-smoker, not those who tried to do so and failed. I'm not surprised those two things are extremely highly correlated. So it might not be any better as strategy advice than "the best way to quit smoking is to successfully quit smoking".
-3pjeby
Which is ironic, because the Wikipedia page you just linked to says that "95% of former smokers who had been abstinent for 1–10 years had made an unassisted last quit attempt", with the most frequent method of unassisted quitting being "cold turkey", about which it was said that:

Of course, the page also says that lots of people don't successfully quit, which isn't incompatible with what thomblake says. Among people who are able to congruently decide to become non-smokers, it's apparently one of the easiest and most successful ways to do it. It's just that not everybody can decide to be a non-smoker, or it doesn't occur to them to do so.

Anecdotally, my wife said that she'd "quit smoking" several times prior, each time for extrinsic reasons (e.g. dating a guy who didn't smoke, etc.). When she "became a non-smoker" instead (as she calls it), she did it for her own reasons. She says that as soon as she came to the conclusion that she needed to stop for good, she decided that "quitting smoking" wasn't good enough to do the job, and that she would have to become a non-smoker instead. (That was over 20 years ago, fwiw.)

I'm not sure how you'd go about prescribing that people do this: either they have an intrinsic desire to do it or not. You can certainly encourage and assist, but intrinsic motivation is, well, intrinsic. It's rather difficult to decide on purpose to do something of your own free will, if you're really trying to do it because of some extrinsic reason. ;-)

Which is ironic, because the Wikipedia page you just linked to says that "95% of former smokers who had been abstinent for 1–10 years had made an unassisted last quit attempt", with the most frequent method of unassisted quitting being "cold turkey", about which it was said that:

wedrifid is asking for P(success|attempt), not P(attempt|success), and so a high P(attempt|success) isn't ironic.
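(A toy numerical illustration of why the two conditionals can point in opposite directions; all counts and success rates below are invented for the example.)

```python
# Invented numbers: most quit attempts are unassisted, and each individual
# attempt has a low success rate either way.
attempts_unassisted, rate_unassisted = 1000, 0.05
attempts_assisted, rate_assisted = 100, 0.10

successes_unassisted = attempts_unassisted * rate_unassisted  # 50
successes_assisted = attempts_assisted * rate_assisted        # 10

# P(attempt was unassisted | success) is high, simply because most attempts
# are unassisted...
p_unassisted_given_success = successes_unassisted / (
    successes_unassisted + successes_assisted
)
print(f"P(unassisted | success) = {p_unassisted_given_success:.2f}")  # 0.83

# ...while P(success | unassisted attempt) stays low.
print(f"P(success | unassisted) = {rate_unassisted:.2f}")             # 0.05
```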

3RichardHughes
Can you provide more info about the event?
Shmi150

I presented the paradox (the version where you know of 1000 previous attempts all confirming that the Predictor is never wrong), answered the questions, cut off some standard ways to weasel out, then asked for the answer and the justification, followed by a rather involved discussion of free will, outside vs inside view, then returned to the question. What I heard was "of course I would one-box". "But barely an hour ago you were firmly in the two-boxing camp!". Blank stare... "Must have been a different problem!"

Denying all connection to a possible alternate you who would two-box might be some sort of strategy ...

This ought not surprise me. It is instructive how surprising it nevertheless is.

I wonder how long-lived the new opinions are.

5TheOtherDave
Relatedly, I wonder how consistent people's original answers to these questions are (if, say, retested a month later). But I would expect answers the subjects are asked to defend/explain (whether original or changed) to be more persistent than answers they aren't.
1Alejandro1
Rabbit season!

Similar to your first video, here's the famous "count how often the players in white pants pass the ball" test (Simons & Chabris 1999).

Incredibly, if you weren't primed to look for something unexpected, you probably wouldn't notice. I've seen it work first-hand in cogsci classes.

7Lapsed_Lurker
Even having watched the video before, when I concentrated hard on counting passes, I missed seeing it.
6nerfhammer
This is "inattention blindness". Choice blindness is sort of like the opposite; in inattention blindness you don't notice something you're not paying attention to, in choice blindness you don't notice something which you are paying attention to.
2Kawoomba
Edit: I didn't really understand your above definition of choice blindness versus inattentional blindness; Scholarpedia has a good contrasting definition:

One interpretation is that many people don't have strongly held or stable opinions on some moral questions and/or don't care. Doesn't sound very shocking to me.

Maybe morality is extremely context sensitive in many cases, thus polls on general moral questions are not all that useful.

The study asked people to rate their position on a 9-point scale. People who took more extreme positions, while more likely to detect the reversal, also gave the strongest arguments in favour of the opposite opinion when they failed to detect the reversal.

Also, the poll had two kinds of questions. Some of them were general moral principles, but some of them were specific statements.

0Lightwave
Trolley problems are also very specific, but people have great trouble with them. Maybe I should have said "non-familiar" rather than just "general".
1k3nt
If you read the study, they say that the "specific" questions they are asking are questions that were very salient at the time of the study. These are things that people were talking about and arguing about at the time, and were questions with real-world implications. Thus precisely not "trolley problems."

It strikes me that performing this experiment on people, then revealing what has occurred, may be a potentially useful method of enlightening people to the flaws of their cognition. How might we design a 'kit' to reproduce this sleight of hand in the field, so as to confront people with it usefully?

7NancyLebovitz
It would be easy enough to do with a longish computer survey. It's much easier to change what appears on a screen than to do sleight-of-paper.
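(A minimal sketch of how little code that would take; the questions, the 9-point scale, and the swap rule below are all made-up illustrations rather than the study's actual materials, though the first statement paraphrases one of its items.)

```python
import random

# Toy version of the manipulated computer survey: collect agreement ratings,
# then silently reverse some of them at the review step.
QUESTIONS = [
    "Large-scale governmental surveillance of e-mail ought to be forbidden.",
    "It is morally defensible to purchase sexual services.",
]

def collect(questions):
    """Ask for a 1-9 agreement rating on each statement."""
    return {q: int(input(f"{q}\nAgree 1-9: ")) for q in questions}

def swap_some(answers, n_swaps=1):
    """Silently reverse the 9-point scale on n_swaps randomly chosen answers."""
    swapped = dict(answers)
    for q in random.sample(list(answers), n_swaps):
        swapped[q] = 10 - swapped[q]  # 1 <-> 9, 2 <-> 8, ...
    return swapped

answers = collect(QUESTIONS)
shown = swap_some(answers)
print("\nHere are 'your' answers; please explain your reasoning on each:")
for q, a in shown.items():
    print(f"  {a}/9  {q}")
```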
4Armok_GoB
For added fun metaness, have the option you switch them to (and that they start rationalizing for) be the one you're trying to convince them of :p
3nerfhammer
The video shows the mechanics of how it works pretty well.
1niceguyanon
I suspect that those who are most susceptible to moral proposition switches, and to the subsequent defense of the switch, are also the same people who will deny the evidence when confronted with their switch. Much like the Dunning-Kruger effect, there will be people who fail to recognize the extremity of their inadequacy, even when confronted with evidence of it. Edit: The paper states that they informed all participants of the true nature of the survey, but it does not go into detail on whether participants actually acknowledged that their moral propositions were switched.

I thought I might mention a sort-of similar thing, though done more for humor: the Howard Stern Show interviewed people in an area likely to favor a certain politician, asking them if they supported him because of position X, or position Y (both of which he actually opposed).

(If you remember this, go ahead and balk at the information I left out.)

2simplicio
This is indeed amusing, but the author draws a wrong/incomplete/tendentious conclusion from it. I think the proper conclusion is basically our usual "Blue vs Green" meme, plus some Hansonian cynicism about 'informed electorates.'

Clarifying question: Did they actually change their minds on moral positions, or did this study just give the appearance that they changed their minds? This is a question that we need to be asking as we look for meaning in this information, but not everyone here is thinking to ask it. Even when I proposed an alternate explanation to show how this could give the false appearance of people changing their minds when they did not, I got one response from somebody that didn't seem to realize I had just explained why this result might be due to people pretending…

There was a high level of inter-rater agreement between the three raters for the NM reports (r = .70) as well as for the M reports (r = .77), indicating that there are systematic patterns in the verbal reports that corresponds to certain positions on the rating scale for both NM and M trials. Even more interestingly, there was a high correlation between the raters estimate and the original rating of the participants for NM (r = .59) as well as for M reports (r = .71), which indicates that the verbal reports in the M trials do in fact track the participant

…
[anonymous]100

Konkvistador's LessWrong improvement algorithm

  1. Trick brilliant but contrarian thinker into mainstream position.
  2. Trick brilliant but square thinker into contrarian position.
  3. Have each write an article defending their take.
  4. Enjoy improved rationalist community.
4A1987dM
Now, go ahead and implement that!
7Epiphany
Consider this: If you're supporting your own genuine opinion, you might have your own carefully chosen perspective that is slightly different from the question's wording. You only select the answer because it's the closest one of the options, not because it's exactly your answer. So, you may be inclined, then, to say things that are related but don't fit the question exactly.

If you're confabulating to support a random opinion, though, what do you have to go by but the wording? The opinion is directing your thoughts then, leading your thoughts to fit the opinion. You aren't trying to cram pre-existing thoughts into an opinion box to make it fit your view.

Or looking at it another way: When expressing your point of view, the important thing is to express what you feel, regardless of whether it fits the exact question. When supporting "your" point because you don't want to look like an idiot in front of a researcher, the objective is to support it as precisely as possible, not to express anything.

As for whether your interpretation of that selection is correct: it's past my bedtime and I'm getting drowsy, so someone else should answer that part instead.
4Ezekiel
I think it does. Can't believe I missed that. Actually, this fits well with my personal experience. I've frequently found it easier to verbalize sophisticated arguments for the other team, since my own opinions just seem self-evident.
Gaslighting.

Seriously, there's already a well-established form of psychological abuse founded on this principle. It works, and it's hard to see how to take it much further into the Dark Arts.

concealing another person whom replaces the experimenter as the door passes.

(Very minor and content-irrelevant point here, but my grammar nazi side bids me to say it, at the risk of downvotery: it should be "who" here, not "whom", since it's the subject of the relative clause.)

A side effect of this is to reinforce the importance of writing about the Obvious, because things seem obvious after we've learned them, and we literally have trouble thinking about not knowing, or not viewing, things in a certain way.

2Shmi
Especially if the Obvious turns out to be wrong.
3drethelin
Sure. Either way actively talking about the obvious is useful.

"You Can't Not Believe Everything You Read", from the Journal of Personality and Social Psychology, might contain the beginnings of another alternative explanation for this.

0Jayson_Virissimo
Great paper, although (annoyingly) they conflate democracy with liberalism.

So if I defraud someone by pretending to sell them an iPad for $100 but pocketing the $100 instead, I am more likely to get away with the fraud if, instead of straightforwardly offering them an iPad, I set up a shady charity and offer them a choice between buying the iPad and donating $100 to the shady charity (provided that it's sufficiently easy for me to extract money from the charity).

This stuff actually works; I just did an experiment on it with tea and jam. It's so crazy.

0gwern
Details?
0A1987dM
What kind of experiment?

Also known as the "people can't remember things without distinctive features" phenomenon. Still interesting to note their behaviours in the situation though.

Wow!

I don't bandy the term "sheeple" about very frequently. But here it might just be appropriate.

No-one says "sheeple" intending to include themselves. Do you have any reason to think you are immune from this effect?

1blogospheroid
Actually, yes. I would think that I would be relatively immune from this effect in the domain of morality, because I have thought about morality, and quite often. Maybe in a field that I didn't have much knowledge about, if I were asked to give opinions and this kind of thing was pulled on me, I would succumb, and quite badly, I admit. But I wouldn't feel that bad. I guess my main takeaway from this analogy is that most people don't care enough about morality to stop and think for a while. They go as the flow goes, and therefore I said "sheeple". I am in no way saying that I am the purest and most moral person on earth. I am most definitely not living my life in accordance with my highest values. But I have a fairly high confidence that I will not succumb to this effect, at least in the domain of moral questions.
6Epiphany
That's what I thought at first, too, but on second thought, I don't think they went far enough to confirm that this actually causes people to change their opinions. There are other reasons people might act the way they did.
4Ezekiel
I suspect sheep would be less susceptible to this sort of thing than humans.
0TraderJoe
For logic this woolly, I agree...