I find myself thinking "I remember believing X. Why did I believe X? Oh right, because Y and Z. Yes, I was definitely right" with alarming frequency.
When reading old LW posts and comments and seeing that I've upvoted something, I find myself thinking "Wait, why did I upvote this?"
An instrumental question: how would you exploit this to your advantage, were you dark-arts inclined? For example, if you are a US presidential candidate, what tactics would you use to invisibly switch voters' choice to you? Given that you are probably not better at it than the professionals in each candidate's team, can you find examples of such tactics?
Claim to agree with them on issue X, then once they've committed to supporting you, change your position on issue X.
Come to think of it, politicians already do this.
While meeting with voters in local community halls, candidates sometimes go around distributing goodwill tokens or promises while thanking people for supporting them, whether the person actually seems to support them or not.
It's not a very strong version, and it's tinged with some guilt-tripping, but it matches the pattern under some circumstances and might well trigger choice blindness in some cases.
Dark tactic: Have we verified that it doesn't work to present people with a paper saying what their opinion is, even if they did NOT fill anything out? I explain below how that might work. This tactic is based on that possibility:
An unethical political candidate could have campaigners gather a bunch of random people and hand each of them a falsified survey with their name on it, making it look like they filled it out, with responses that support the candidate.
The unethical campaigner might then say: "A year ago (too long for most people to remember the answers they gave on tests), you filled out a survey with our independent research company, saying you support X, Y and Z." If this sounds authoritative enough, they might believe it.
"These are the three key parts of my campaign! Can you explain why you support these?"
(victim explains)
"Great responses! Do you mind if we use these?"
(victim may feel compelled to say yes or seem ungrateful for the compliment)
"I think your family and friends should hear what great supports you have for your points on this important issue, don't you?"
(now new victims will be dragged in)
The responses that were given are used to make it look like there's a consensus.
I don't like either presidential candidate. I need to say that before I say this: using current rather than past political examples is playing with fire.
"Win, and be a good president"
That would not be an instrumentally useful campaigning strategy.
One of the most audacious and famous experiments is known informally as "the door study": an experimenter asks a passerby for directions, but is interrupted by a pair of construction workers carrying an unhinged door, concealing another person whom replaces the experimenter as the door passes. Incredibly, the person giving directions rarely notices they are now talking to a completely different person. This effect was reproduced by Derren Brown on British TV (here's an amateur re-enactment).
I think the response of the passerby is quite reasonable, actually. Confronted with a choice between (a) "the person asking me directions was just spontaneously replaced by somebody different, also asking me directions," and (b) "I just had a brain fart," I'll consciously go for (b) every time, especially considering that I make similar mistakes all the time (confusing people with each other immediately after having encountered them). I know that this is probably not a phenomenon that occurs at the conscious level, but we should expect the unconscious level to be even more automatic.
...Confronted with a choice between (a) "the person asking me directions was just spontaneously replaced by somebody different, also asking me directions," and (b) "I just had a brain fart," I'll consciously go for (a) every time, especially considering that I observe similar phenomena all the time (people spontaneously replacing each other immediately after having encountered them). ...
I'm curious, why do you take that view?
A man who'd spent some time institutionalized said that the hell of it was that half of what you were seeing was hallucinations and the other half was true things that people won't admit to. Unfortunately, I didn't ask him for examples of the latter.
Can someone sneakily try this on me? I like silly questionnaires, polls, and giving opinions, so it should be easy.
You said in a previous thread that after a hard day of stealing wifi and lobbying for SOPA, you and Chris Brown like to eat babies and foie gras together. Can you explain your moral reasoning behind this?
The geese and babies aren't sentient, wifi costs the provider very little, that's actually a different Chris Brown, and I take the money I get paid lobbying for SOPA and donate it to efficient charities!
(Sorry, couldn't resist when I saw the "babies" part.)
Oh don't worry, there will always be those little lapses in awareness. Even supposing you hide yourself at night, are you sure you maintain your sentience while awake? Ever closed your eyes and relaxed, felt the cool breeze, and for a moment, forgot you were aware of being aware of yourself?
Now that's one ultimate rationalization. The standard pattern is to decide (or prefer) something for one reason, then confabulate more honourable reasons why we decided (or preferred) thus.
But confabulating for something we didn't even decide… that's taking things up a notch.
I bet the root problem is that we often resolve cognitive dissonance before it even hits the conscious level. Could we train ourselves to notice such dissonance instead?
Could we train ourselves to notice such dissonance instead?
This needs to get a spot in CFAR's training program(s/mme(s)?). It sounds like the first thing you'd want to do once you reach the rank of second-circle Initiate in the Bayesian Conspiracy. Or maybe the first part of the test to attain this rank.
An alternate explanation:
Maybe the years of public schooling that most of us receive cause us to trust papers so much that if we see something written down on paper, we feel uncomfortable opposing it. If you're threatened with punishment for not regurgitating what is on an authority's papers, daily, for that many years of your life, you're bound to be classically conditioned to behave as if you agree with papers.
So maybe what's going on is this:
You fill out a scientist's paper.
The paper tells you your point of view. It looks authoritative because it's in writing.
You feel uncomfortable disagreeing with the authority's paper. School taught you this was bad.
Now the authority wants you to support the opinion they think is yours.
You feel uncomfortable with the idea of failing to show the authority that you can support the opinion on the paper. (A teacher would not have approved - and you'd look stupid.)
You might want to tell the authority that it's not your opinion, but they have evidence that you believe it - it's in writing.
You behave according to your conditioning by agreeing with the paper, and do as expected by supporting what the researcher thinks your point of view is.
I have to wonder if many of the respondents in the survey didn't hold any position with much strength in the first place. Our society enforces the belief, not only that everyone is entitled to their opinions, but that everyone should have an opinion on just about any issue. People tend to stand by "opinions" that are really just snap judgments, which may be largely arbitrary.
If the respondents had little basis for determining their responses in the first place, it's unsurprising that they don't notice when those responses have been changed, or that the change doesn't affect their ability to argue for them.
There are cognitive strategies that (heuristically) take advantage of the usually-persistent world. Should I be embarrassed, after working and practicing with pencil and paper to solve arithmetic problems, that I do something stupid when someone changes the properties of pencil and paper from persistent to volatile?
What I'd like to see is more aboveboard stuff. Suppose that you notify someone that you're showing them possibly-altered versions of their responses. Can we identify which things were changed when explicitly alerted? Do we still confabulate (probably)? Are the questions that we still confabulate on questions that we're more uncertain about - more ambiguous wording, more judgement required?
Another explanation:
Might this mean they trust external memories of their opinions more than their own memories? Know what that reminds me of? Ego. Some people trust others more than themselves when it comes to their view of themselves. And that's why insults hurt, isn't it? Because they make you doubt yourself. Maybe people do this because of self-doubt.
1 karma point to anyone who links to a LW thread showing this effect (blind change of moral choice) in action. 2 karma points if you catch yourself doing it in such a thread.
A real-life example of a similar effect: I explained the Newcomb problem to a person and he two-boxed initially, then, after some discussion, he switched to one-boxing and refused to admit that he ever two-boxed.
This is common enough that I specifically watch out for it when asking questions that people might have some attachment to. Just today I didn't even ask because I knew I was gonna get a bogus "I've always thought this" answer.
I know a guy who "has always been religious" ever since he almost killed himself in a car crash.
My mom went from "Sew it yourself" to "Of course I'll sew it for you, why didn't you ask me earlier?" a couple of weeks later, because she had offered to sew something for my brother-in-law, which would make her earlier decision incongruent with her self-image. Of course, she was offended when I told her that I did ask earlier :p
I know a guy who "has always been religious" ever since he almost killed himself in a car crash.
My wife, not long before she met me, became an instant non-smoker and was genuinely surprised when friends offered her cigarettes -- she had to make a conscious effort to recall that she had previously smoked, because it was no longer consistent with her identity, as of the moment she decided to be a non-smoker.
This seems to be such a consistent feature of brains under self-modification that the very best way to know whether you've really changed your mind about something is to see how hard it is to think the way you did before, or how difficult it is to believe that you ever could have thought differently.
Which is ironic, because the Wikipedia page you just linked to says that "95% of former smokers who had been abstinent for 1–10 years had made an unassisted last quit attempt", with the most frequent method of unassisted quitting being "cold turkey".
wedrifid is asking for P(success|attempt), not P(attempt|success), and so a high P(attempt|success) isn't ironic.
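To make the distinction concrete, here's a toy Python calculation with numbers I invented for illustration (they are not from the Wikipedia article): even if nearly every successful quitter went cold turkey, the success rate of cold-turkey attempts can still be low.

```python
# Invented numbers, purely to illustrate the point: suppose 1000
# smokers attempt to quit cold turkey unassisted and 57 succeed,
# while across all methods there are 60 successful quitters in total.
cold_turkey_attempts = 1000
cold_turkey_successes = 57
total_successes = 60

p_success_given_attempt = cold_turkey_successes / cold_turkey_attempts
p_attempt_given_success = cold_turkey_successes / total_successes

print(f"P(success | attempt) = {p_success_given_attempt:.2f}")  # low
print(f"P(attempt | success) = {p_attempt_given_success:.2f}")  # high
```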
I presented the paradox (the version where you know of 1000 previous attempts all confirming that the Predictor is never wrong), answered the questions, cut off some standard ways to weasel out, then asked for the answer and the justification, followed by a rather involved discussion of free will, outside vs inside view, then returned to the question. What I heard was "of course I would one-box". "But barely an hour ago you were firmly in the two-boxing camp!". Blank stare... "Must have been a different problem!"
Denying all connection to a possible alternate you who would two-box might be some sort of strategy ...
One interpretation is that many people don't have strongly held or stable opinions on some moral questions and/or don't care. Doesn't sound very shocking to me.
Maybe morality is extremely context sensitive in many cases, thus polls on general moral questions are not all that useful.
The study asked people to rate their position on a 9-point scale. People who took more extreme positions, while more likely to detect the reversal, also gave the strongest arguments in favour of the opposite opinion when they failed to detect the reversal.
Also, the poll had two kinds of questions. Some of them were general moral principles, but some of them were specific statements.
It strikes me that performing this experiment on people, then revealing what has occurred, may be a potentially useful method of enlightening people to the flaws of their cognition. How might we design a 'kit' to reproduce this sleight of hand in the field, so as to confront people with it usefully?
I thought I might mention a sort-of similar thing, though done more for humor: the Howard Stern Show interviewed people in an area likely to favor a certain politician, asking them if they supported him because of position X, or position Y (both of which he actually opposed).
(If you remember this, go ahead and balk at the information I left out.)
Clarifying question: Did they actually change their minds on moral positions, or did this study just give the appearance that they changed their minds? This is a question we need to be asking as we look for meaning in this information, but not everyone here is thinking to ask it. Even when I proposed an alternate explanation to show how this could give the false appearance of people changing their minds when they did not, I got one response from somebody who didn't seem to realize I had just explained why this result might be due to people pretending...
...There was a high level of inter-rater agreement between the three raters for the NM reports (r = .70) as well as for the M reports (r = .77), indicating that there are systematic patterns in the verbal reports that correspond to certain positions on the rating scale for both NM and M trials. Even more interestingly, there was a high correlation between the raters' estimates and the original ratings of the participants for NM (r = .59) as well as for M reports (r = .71), which indicates that the verbal reports in the M trials do in fact track the participant...
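For anyone wondering what those r values mean operationally, here is a minimal sketch of how such an inter-rater agreement figure could be computed; the ratings below are invented for illustration and are not data from the study.

```python
# Minimal sketch: Pearson correlation between two raters' estimates.
# The ratings are made up; they are not taken from the study.

def pearson_r(xs, ys):
    """Pearson product-moment correlation between two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Two hypothetical raters each estimate, from a verbal report alone,
# where on the 9-point scale the participant stood.
rater_a = [2, 7, 5, 9, 1, 6, 3, 8]
rater_b = [3, 6, 5, 8, 2, 7, 2, 9]

print(f"r = {pearson_r(rater_a, rater_b):.2f}")
```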
Konkvistador's LessWrong improvement algorithm
Seriously, there's already a well-established form of psychological abuse founded on this principle. It works, and it's hard to see how to take it much further into the Dark Arts.
concealing another person whom replaces the experimenter as the door passes.
(Very minor and content-irrelevant point here, but my grammar nazi side bids me to say it, at the risk of downvotery: it should be "who" here, not "whom", since it's the subject of the relative clause.)
A side effect of this is to reinforce the importance of writing about the Obvious, because things seem obvious after we've learned them, and we literally have trouble thinking about not knowing things, or not viewing them in a certain way.
"You Can't Not Believe Everything You Read", from the Journal of Personality and Social Psychology, might contain the beginnings of another alternative explanation for this.
So if I defraud someone by pretending to sell them an iPad for $100 but pocketing the $100 instead, I am more likely to get away with the fraud if, instead of straightforwardly offering them an iPad, I set up a shady charity and offer them a choice between buying the iPad and donating $100 to the shady charity (provided that it's sufficiently easy for me to extract money from the charity).
Also known as the "people can't remember things without distinctive features" phenomenon. Still interesting to note their behaviours in the situation though.
Wow!
I don't bandy the term sheeple out very frequently. But here it might just be appropriate.
No-one says "sheeple" intending to include themselves. Do you have any reason to think you are immune from this effect?
Change blindness is the phenomenon whereby people fail to notice changes in scenery and whatnot if they're not directed to pay attention to them. There are countless videos online demonstrating this effect (one of my favorites here, by Richard Wiseman).
One of the most audacious and famous experiments is known informally as "the door study": an experimenter asks a passerby for directions, but is interrupted by a pair of construction workers carrying an unhinged door, concealing another person whom replaces the experimenter as the door passes. Incredibly, the person giving directions rarely notices they are now talking to a completely different person. This effect was reproduced by Derren Brown on British TV (here's an amateur re-enactment).
Subsequently a pair of Swedish researchers familiar with some sleight-of-hand magic conceived a new twist on this line of research, arguably even more audacious: have participants make a choice and quietly swap that choice with something else. People not only fail to notice the change, but confabulate reasons why they had preferred the counterfeit choice (video here). They called their new paradigm "Choice Blindness".
Just recently the same Swedish researchers published a new study that is even more shocking. Rather than demonstrating choice blindness by having participants choose between two photographs, they demonstrated the same effect with moral propositions. Participants completed a survey asking them to agree or disagree with statements such as "large scale governmental surveillance of e-mail and Internet traffic ought to be forbidden as a means to combat international crime and terrorism". When they reviewed their copy of the survey their responses had been covertly changed, but 69% failed to notice at least one of two changes, and when asked to explain their answers 53% argued in favor of what they falsely believed was their original choice, when they had previously indicated the opposite moral position (study here, video here).
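As a back-of-the-envelope exercise (the independence assumption here is mine, not the study's), the 69% figure implies a per-change detection rate of roughly 56%:

```python
# If each of the two changes were detected independently with the same
# probability p, then P(detect both) = p**2 = 1 - 0.69 = 0.31.
# The independence assumption is mine, not the study's.
p = (1 - 0.69) ** 0.5
print(f"implied per-change detection rate: p = {p:.2f}")  # ~0.56
```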