Less Wrong is a community blog devoted to refining the art of human rationality. Please visit our About page for more information.

New study on choice blindness in moral positions

72 Post author: nerfhammer 20 September 2012 06:14PM

Change blindness is the phenomenon whereby people fail to notice changes in scenery and whatnot if they're not directed to pay attention to them. There are countless videos online demonstrating this effect (one of my favorites here, by Richard Wiseman).

One of the most audacious and famous experiments is known informally as "the door study": an experimenter asks a passerby for directions, but is interrupted by a pair of construction workers carrying an unhinged door, concealing another person who replaces the experimenter as the door passes. Incredibly, the person giving directions rarely notices they are now talking to a completely different person. This effect was reproduced by Derren Brown on British TV (here's an amateur re-enactment).

Subsequently a pair of Swedish researchers familiar with some sleight-of-hand magic conceived a new twist on this line of research, arguably even more audacious: have participants make a choice and quietly swap that choice with something else. People not only fail to notice the change, but confabulate reasons why they had preferred the counterfeit choice (video here). They called their new paradigm "Choice Blindness".

Just recently the same Swedish researchers published a new study that is even more shocking. Rather than demonstrating choice blindness by having participants choose between two photographs, they demonstrated the same effect with moral propositions. Participants completed a survey asking them to agree or disagree with statements such as "large scale governmental surveillance of e-mail and Internet traffic ought to be forbidden as a means to combat international crime and terrorism". When they reviewed their copy of the survey, their responses had been covertly changed: 69% failed to notice at least one of two changes, and when asked to explain their answers, 53% argued in favor of what they falsely believed was their original choice, when they had previously indicated the opposite moral position (study here, video here).
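As a rough back-of-the-envelope check (my own arithmetic, not from the paper): if we assume each of the two changes was noticed independently and with equal probability, the reported 69% figure implies that any single change was missed roughly 44% of the time.

```python
# Back-of-envelope: if each of the two covert changes is missed
# independently with probability m, then
#   P(miss at least one of two) = 1 - (1 - m)**2
# Solving for m from the reported 69% figure:
p_at_least_one = 0.69
m = 1 - (1 - p_at_least_one) ** 0.5
print(f"implied per-change miss rate: {m:.1%}")  # about 44.3%
```

The independence assumption is a simplification; noticing one change presumably makes a participant more likely to scrutinize the rest.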

Comments (151)

Comment author: AlexMennen 20 September 2012 07:29:18PM 24 points [-]

I find myself thinking "I remember believing X. Why did I believe X? Oh right, because Y and Z. Yes, I was definitely right" with alarming frequency.

Comment author: Lightwave 20 September 2012 08:57:13PM *  17 points [-]

When reading old LW posts and comments and seeing I've upvoted some comment, I find myself thinking "Wait, why have I upvoted this comment?"

Comment author: John_Maxwell_IV 25 September 2012 02:48:44AM *  3 points [-]

This doesn't seem obviously bad to me... You just have to differentiate times when you have a gut feeling that something's true because you worked it out before, or because of some stupid reason like your parents telling it to you when you were a kid. Right?

I think I can tell apart rationalizations I'm creating on the spot from reasoning I remember constructing in the past. And if I'm creating rationalizations on the spot, I make an effort to rationalize in the opposing direction a bit for balance.

Comment author: simplicio 21 September 2012 11:22:38PM 16 points [-]

One of the most audacious and famous experiments is known informally as "the door study": an experimenter asks a passerby for directions, but is interrupted by a pair of construction workers carrying an unhinged door, concealing another person who replaces the experimenter as the door passes. Incredibly, the person giving directions rarely notices they are now talking to a completely different person. This effect was reproduced by Derren Brown on British TV (here's an amateur re-enactment).

I think the response of the passerby is quite reasonable, actually. Confronted with a choice between (a) "the person asking me directions was just spontaneously replaced by somebody different, also asking me directions," and (b) "I just had a brain fart," I'll consciously go for (b) every time, especially considering that I make similar mistakes all the time (confusing people with each other immediately after having encountered them). I know that this is probably not a phenomenon that occurs at the conscious level, but we should expect the unconscious level to be even more automatic.

Comment author: MaoShan 24 September 2012 02:05:27AM *  21 points [-]

...Confronted with a choice between (a) "the person asking me directions was just spontaneously replaced by somebody different, also asking me directions," and (b) "I just had a brain fart," I'll consciously go for (a) every time, especially considering that I observe similar phenomena all the time (people spontaneously replacing each other immediately after having encountered them). ...

I'm curious, why do you take that view?

Comment author: simplicio 24 September 2012 11:50:04AM 10 points [-]

Missed it on the first read-through, heh. Excellent try.

Comment author: [deleted] 24 September 2012 10:28:01PM 3 points [-]

I didn't notice until I read Swimmer963's comment. I did remember reading its parent and did remember that it said something sensible, so when I read the altered quotation I thought I had understood it to be ironic.

Comment author: Swimmer963 24 September 2012 03:05:53AM 3 points [-]

Am I the only one who's really confused that this comment is quoting text that is different from the excerpt in the above comment?

Comment author: Alejandro1 24 September 2012 03:43:45AM 10 points [-]

Shhhhh! You're ruining the attempt at replication!

Comment author: RobFisher 29 September 2012 08:02:31AM 0 points [-]

I didn't notice at first, but only because I did notice that you were quoting the comment above, which I had just read, and so skipped over the quote.

Comment author: robertskmiles 25 September 2012 06:07:42PM *  5 points [-]

A rational prior for "the person asking me directions was just spontaneously replaced by somebody different, also asking me directions" would be very small indeed (that naturally doesn't happen, and psych experiments are rare). A rational prior for "I just had a brain fart" would be much bigger, since that sort of thing happens much more often. So in the end, a good Bayesian would assign a high probability to "I just had a brain fart", and also a high probability to "This is the same person" (though not as high as it would be without the brain fart).
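A toy numerical version of that update (all priors and likelihoods invented for illustration):

```python
# Hypotheses for why the person giving directions "seems off",
# with invented priors and likelihoods.
prior = {
    "swap": 1e-6,                # covertly replaced by another person
    "fart": 1e-2,                # momentary perceptual glitch
    "normal": 1 - 1e-6 - 1e-2,   # nothing unusual happened
}
# How likely a vague "something seems off" feeling is under each hypothesis:
likelihood = {"swap": 0.9, "fart": 0.9, "normal": 0.05}

evidence = sum(likelihood[h] * prior[h] for h in prior)
posterior = {h: likelihood[h] * prior[h] / evidence for h in prior}
print(posterior)
```

Since the "seems off" feeling is equally likely under a swap and a brain fart, their posterior odds equal their prior odds, so the replacement hypothesis stays four orders of magnitude behind the mundane one.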

The problem is that the conscious mind never gets the "I just had a brain fart" belief. The error is unconsciously detected and corrected but not reported at all, so the person doesn't even get the "huh, that feels a little off" feeling which is in many cases the screaming alarm bell of unconscious error detection. Rationalists can learn to catch that feeling and examine their beliefs or gather more data, but without it I can't think of a way to beat this effect at all, short of paying close attention to all details at all times.

Comment author: ArisKatsaris 25 September 2012 06:59:54PM 4 points [-]

And a sufficiently large change gets noticed...

Comment author: Decius 26 September 2012 05:08:31PM 1 point [-]

Really? Did any of them refuse to give the camera to the new people, because they weren't the owners of the camera?

Comment author: Alicorn 26 September 2012 05:34:24PM 1 point [-]

If you watch the video closely, the camera actually prints out a picture of the old guys, so the old guys are clearly at least involved with the camera in some way.

Comment author: jimmy 24 September 2012 08:57:56PM 5 points [-]

What a coincidence, this happened to me with your comment! I originally read your name as "shminux" and was quite surprised when I reread it.

If there's some coding magic going on behind the scenes, you've got me good. But I'm sticking with (b) - final answer.

Comment author: shminux 24 September 2012 10:23:07PM 3 points [-]

originally read your name as "shminux" and was quite surprised when I reread it.

For the record, I fully endorse simplicio's analysis :)

Comment author: Haladdin 24 September 2012 05:15:37PM 2 points [-]

Confronted with a choice between (a) "the person asking me directions was just spontaneously replaced by somebody different, also asking me directions," and (b) "I just had a brain fart,"

Schizophrenia. Capgras Delusion.

I wonder how schizophrenics would comparatively perform on the study.

Comment author: NancyLebovitz 25 September 2012 02:35:36PM 8 points [-]

A man who'd spent some time institutionalized said that the hell of it was that half of what you were seeing was hallucinations and the other half was true things that people won't admit to. Unfortunately, I didn't ask him for examples of the latter.

Comment author: thomblake 25 September 2012 03:31:54PM 1 point [-]

Unfortunately, I didn't ask him for examples of the latter.

Or perhaps fortunately!

Comment author: shminux 20 September 2012 05:07:56PM *  16 points [-]

An instrumental question: how would you exploit this to your advantage, were you dark-arts inclined? For example, if you are a US presidential candidate, what tactics would you use to invisibly switch voters' choice to you? Given that you are probably not better at it than the professionals in each candidate's team, can you find examples of such tactics?

Comment author: DaFranker 20 September 2012 07:33:29PM *  13 points [-]

While meeting with voters in local community halls, candidates sometimes go around distributing goodwill tokens or promises while thanking people for supporting them, whether the person actually seems to support them or not.

It's not a very strong version, and it's tinged with some guilt-tripping, but it matches the pattern under some circumstances and might very well trigger choice blindness in some cases.

Comment author: Epiphany 21 September 2012 07:41:25AM *  9 points [-]

Dark tactic: Have we verified that it doesn't work to present them with a paper saying what their opinion is even if they did NOT fill anything out? This tactic is based on that possibility:

  1. An unethical political candidate could have campaigners get a bunch of random people together and hand them a falsified survey with their name on it, making it look like they filled it out. The responses support a presidential candidate.

  2. The unethical campaigner might then say: "A year ago, (too long for most people to remember the answers they gave on tests) you filled out a survey with our independent research company, saying you support X, Y and Z." If authoritative enough, they might believe this.

  3. "These are the three key parts of my campaign! Can you explain why you support these?"

  4. (victim explains)

  5. "Great responses! Do you mind if we use these?"

  6. (victim may feel compelled to say yes or seem ungrateful for the compliment)

  7. "I think your family and friends should hear what great supports you have for your points on this important issue, don't you?"

  8. (now new victims will be dragged in)

  9. The responses that were given are used to make it look like there's a consensus.

Comment author: [deleted] 25 September 2012 12:02:40PM *  3 points [-]

(too long for most people to remember the answers they gave on tests)

For me at least, one year is also too long for me to reliably hold the same opinion, so if you did that to me, I think I'd likely say something like “Yeah, I did support X, Y and Z back then, but now I've changed my mind.” (I'm not one to cache opinions about most political issues -- I usually recompute them on the fly each time I need them.)

Comment author: MugaSofer 21 September 2012 11:47:47AM 2 points [-]

Someone should see if this works.

Of course, you need to filter for people who fill out surveys.

Comment author: DaFranker 21 September 2012 12:26:59PM *  0 points [-]

Idea:

Implement feedback surveys for lesswrong meta stuff, and slip in a test for this tactic in one of the surveys a few surveys in.

Having a website as a medium should make it even harder for people to speak up or realize there's something going on, and I figure LWers are probably the biggest challenge. If LWers fall into a trap like this, that'd be strong evidence that you could take over a country with such methods.

Comment author: ModusPonies 21 September 2012 07:09:52PM 8 points [-]

That would be very weak evidence that you could take over a country with such methods. It would be strong evidence that you could take over a website with such methods.

Comment author: Eugine_Nier 21 September 2012 03:19:00AM 20 points [-]

Claim to agree with them on issue X, then once they've committed to supporting you, change your position on issue X.

Come to think of it, politicians already do this.

Comment author: MTGandP 01 November 2012 11:11:46PM 1 point [-]

Interestingly, the other major party never seems to fail to notice. Right now there are endless videos on YouTube of Romney's flip-flopping, and Republicans reacted similarly to Kerry's waffling in 2004. But for some reason, supporters of the candidate in question either don't notice or don't care.

Comment author: Hawisher 24 September 2012 02:40:25PM *  0 points [-]

Isn't the fact (in American politics, at least) that either (1) a politician's stance on any given topic is highly mutable, or (2) a politician's stance could perfectly reasonably disagree with that of some of his supporters (given that the politician one supports is at best a best-effort compromise rather than, in most cases, a perfect representation of one's beliefs) so widely known as to eliminate or alleviate that effect?

Comment author: DaFranker 24 September 2012 02:57:06PM 1 point [-]

I don't see how either or both options you've presented change the point in any way; if politicians claim to agree on X until you agree to vote for them, then turn out to revert to their personal preference once you've already voted for them, then while you may know they're mutable or a best-effort-compromise, you've still agreed with a politician and voted for them on the basis of X, which they now no longer hold.

That they are known to have mutable stances or be prone to hidden agendas only makes this tactic more visible, but also more popular, and by selection effects makes the more dangerous instances of this even more subtle and, well, dangerous.

Comment author: Hawisher 24 September 2012 03:30:19PM 0 points [-]

I would argue that the chief difference between picking a politician to support and choosing answers based on one's personal views of morality is that the former is self-evidently mutable. If a survey-taker was informed beforehand that the survey-giver might or might not change his responses, it is highly doubtful the study in question would have these results.

Comment author: siodine 21 September 2012 03:31:07AM *  3 points [-]

The problem is that we don't know how influential the blind spot is. It could just fade away after a couple minutes and a "hey, wait a minute..." But assuming it sticks:

If I were a car salesman, I would have potential customers tell me their ideal car and then I would tell them what I want their ideal car to be as though I were simply restating what they just said.

If I were a politician, I would target identities (e.g., Latino, pro-life, low taxes, etc.) rather than individuals because identities are made of choices and they're easier to target than individuals. The identity makes a choice and then you assume the identity chose you. E.g., "President Obama has all but said that I'm instigating "class warfare," or that I don't care about business owners, or that I want to redistribute wealth. Well, Mr. Obama, I am fighting with and for the 99%; the middle class; the inner city neighborhoods that your administration has forgotten; Latinos; African-Americans. We all have had enough of the Democrats' decades-long deafness towards our voice. Vote Romney." Basically, you take the opposition's reasons for not voting for you and then assume those reasons are for the opposition, and you run the ads in the areas you want to affect.

Comment author: synkarius 21 September 2012 05:00:27AM 7 points [-]

I don't like either presidential candidate. I need to say that before I say this: using current rather than past political examples is playing with fire.

Comment author: siodine 21 September 2012 01:36:46PM *  -1 points [-]

I completely agree with you; there shouldn't be any problems discussing political examples where you're only restating a campaign's talking points rather than supporting one side or the other.

Comment author: Haladdin 24 September 2012 05:28:39PM 5 points [-]

Online dating. Put up a profile that suggests a certain personality type and set of interests. In the face-to-face meetup, even if you're someone different from what was advertised, choice blindness should cover up the fact.

This tactic can also be extended to job resumes presumably.

Comment author: [deleted] 25 September 2012 12:00:02PM 2 points [-]

This tactic can also be extended to job resumes presumably.

I wouldn't like to be standing in the shoes of someone who tried that and it didn't work.

Comment author: wedrifid 25 September 2012 02:15:55PM 2 points [-]

I wouldn't like to be standing in the shoes of someone who tried that and it didn't work.

Why? Just go interview somewhere else. The same applies for any interview signalling strategy.

Comment author: [deleted] 25 September 2012 10:11:02PM 1 point [-]

I meant in the shoes of the candidate, not the interviewer. If that happened to me, I would feel like my status-o-meter started reading minus infinity.

Comment author: khafra 25 September 2012 07:44:04PM 2 points [-]

Either that's already a well-used tactic amongst online daters, or 6'1", 180lb guys who earn over $80k/year are massively more likely to use online dating sites than the average man.

Comment author: Vaniver 24 September 2012 06:07:11PM 1 point [-]

Tom N. Haverford comes to mind.

Comment author: Epiphany 21 September 2012 07:59:01AM *  5 points [-]

Dark Tactic:

This one makes me sick to my stomach.

Imagine some horrible person wants to start a cult. So they get a bunch of people together and survey them asking things like:

"I don't think that cults are a good thing." "I'm not completely sure that (horrible person) would be a good cult leader."

and switches them with:

"I think that cults are a good thing." "I'm completely sure that (horrible person) would be a good cult leader."

And the horrible person shows the whole room the results of the second set of questions, showing that there's a consensus that cults are a good thing and most people are completely sure that (horrible person) would be a good cult leader.

Then the horrible person asks individuals to support their conclusions about why cults are a good thing and why they would be a good leader.

Then the horrible person starts asking for donations and commitments, etc.

Who do we tell about these things? They have organizations for reporting security vulnerabilities for computer systems so the professionals get them... where do you report security vulnerabilities for the human mind?

Comment author: ChristianKl 22 September 2012 05:40:56PM 8 points [-]

If you start a cult you don't tell people that you're starting a cult. You tell them: look, there's this nice meetup. All the people in that meetup are cool. The people in that group think differently than the rest of the world. They are better. Then there are those retreats where people spend a lot of time together and become even better and more different from the average person on the street.

Most people in the LessWrong community don't see it as a cult, and the same is true for most organisations that are seen as cults.

Comment author: John_Maxwell_IV 25 September 2012 02:44:52AM 3 points [-]

That's not too different from the description of a university though.

Comment author: wedrifid 22 September 2012 06:13:04PM *  1 point [-]

If you start a cult you don't tell people that you're starting a cult. You tell them: look, there's this nice meetup. All the people in that meetup are cool. The people in that group think differently than the rest of the world. They are better. Then there are those retreats where people spend a lot of time together and become even better and more different from the average person on the street.

Do you? Really? That works? When creating an actual literal cult? This is counter-intuitive.

Comment author: Endovior 23 September 2012 05:50:28PM 4 points [-]

The trick: you need to spin it as something they'd like to do anyway... you can't just present it as a way to be cool and different, you need to tie it into an existing motivation. Making money is an easy one, because then you can come in with an MLM structure and get your cultists to go recruiting for you. You don't even need to do much in the way of developing cultic materials; there's plenty of stuff designed to indoctrinate people in anti-rational pro-cult philosophies like "the law of attraction", written so as to appear to be guides for salespeople, so your prospective cultists will pay for and perform their own indoctrination voluntarily.

I was in such a cult myself; it's tremendously effective.

Comment author: ChristianKl 24 September 2012 09:55:31PM 3 points [-]

If you want to reach a person who feels lonely having a community of like minded people who accept the person can be enough. You don't necessarily need stuff like money.

Comment author: Endovior 25 September 2012 01:37:50PM 1 point [-]

Agreed. Emotional motivations make just as good a target as intellectual ones. If someone already feels lonely and isolated, then they have a generally exploitable motivation, making them a prime candidate for any sort of cult recruitment. That kind of isolation is just what cults look for in a recruit, and most try to create it intentionally, using whatever they can to cut their cultists off from any anti-cult influences in their lives.

Comment author: wedrifid 25 September 2012 02:16:56PM 4 points [-]

Emotional motivations make just as good a target as intellectual ones.

Agree, except I'd strengthen this to "a much better".

Comment author: NancyLebovitz 24 September 2012 08:36:52PM 2 points [-]

It works. Especially if you can get people away from their other social contacts. Mix in insufficient sleep and a low protein diet, and it works really well. (Second-hand information, but there's pretty good consensus on how cults work.)

How do you think cults work?

Comment author: Nornagest 24 September 2012 09:08:40PM 2 points [-]

I'd question "really well". Cult retention rates tend to be really low -- about 2% for Sun Myung Moon's Unification Church ("Moonies") over three to five years, for example, or somewhere in the neighborhood of 10% for Scientology. The cult methodology seems to work well in the short term and on vulnerable people, but it seriously lacks staying power, which is one reason many cults focus so heavily on recruiting: they need to recruit massively just to keep up their numbers.

Judging from the statistics here, retention rates for conventional religious conversions are much higher than this (albeit lower than retention rates for those raised in the church).

Comment author: NancyLebovitz 24 September 2012 09:22:03PM *  3 points [-]

I guess "really well" is ill-defined, but I do think that both Sun Myung Moon and L. Ron Hubbard could say "It's a living".

You can get a lot out of people in the three to five years before they leave.

Comment author: shminux 24 September 2012 11:27:22PM 1 point [-]

Note that the term cult is a worst argument in the world (guilt by association). The neutral term is NRM. Thus to classify something as a cult one should first tick off the "religious" check mark, which requires spirituality, a rather nebulous concept:

Spirituality is the concept of an ultimate or an alleged immaterial reality; an inner path enabling a person to discover the essence of his/her being; or the "deepest values and meanings by which people live."

If you define cult as an NRM with negative connotations, then you have to agree on what those negatives are, not an easy task.

Comment author: fubarobfusco 25 September 2012 12:13:23AM 1 point [-]

"NRM" is a term in the sociology of religion. There are many groups that are often thought of as "cultish" in the ordinary-language sense that are not particularly spiritual. Multi-level marketing groups and large group awareness training come to mind.

Comment author: gwern 25 September 2012 12:47:59AM 2 points [-]

This is basically true, although I had a dickens of a time finding specifics in the religious/psychology/sociological research - everyone is happy to claim that cults have horrible retention rates, but none of them seem to present much beyond anecdotes.

Comment author: Nornagest 25 September 2012 12:58:41AM *  0 points [-]

I'll confess I was using remembered statistics for the Moonies, not fresh ones. The data I remember from a couple of years ago seems to have been rendered unGooglable by the news of Sun Myung Moon's death.

Scientology is easier to find fresh statistics for, but harder to find consistent statistics for. I personally suspect the correct value is lower, but 10% is about the median in easily accessible sources.

Comment author: [deleted] 25 September 2012 08:00:34AM 2 points [-]

The data I remember from a couple of years ago seems to have been rendered unGooglable by [more recent stuff]

Click on “Search tools” at the bottom of the menu on the left side of Google's search results page, then on “Custom range”.

Comment author: wedrifid 24 September 2012 09:39:55PM 0 points [-]

How do you think cults work?

Like what you say but not much like ChristianKl said. I think he was exaggerating rather a lot to try to make something fit when it doesn't particularly.

Comment author: ChristianKl 24 September 2012 09:54:01PM 0 points [-]

What's an actual literal cult?

When I went to the Quantified Self conference in Amsterdam last year, I heard the allegation that Quantified Self is a cult after I explained it to someone who lived at the place I stayed for the weekend. I also had to defend against the cult allegation when explaining the Quantified Self community to journalists. Which groups are cults depends a lot on the person who's making the judgement.

There are however also groups where we can agree that they are cults. I would say that the principle applies to an organisation like the Church of Scientology.

Comment author: Pentashagon 21 September 2012 06:11:49PM 1 point [-]

I think that's known as voter fraud. A lot of people believe (and tell others to believe) that certain candidates were legally and fairly elected even when exit polls show dramatically different results. Although of course this could work the same way if exit polls were changed to reflect the opposite outcome of an actually fair election and people believed the false exit polls and demanded a recount or re-election. It just depends on which side can effectively collude to cheat.

Comment author: Epiphany 21 September 2012 07:31:25PM *  5 points [-]

No. What I'm saying here is that, using this technique, it might not be seen as fraud.

If the view on "choice blindness" is that people are actually changing their opinions, it would not be technically seen as false to claim that those are their opinions. Committing fraud would require you to lie. This may be a form of brainwashing, not a new way to lie.

That's why this is so creepy.

Comment author: fubarobfusco 20 September 2012 08:51:23PM 4 points [-]

how would you exploit this to your advantage, were you dark-arts inclined?

Break into someone's blog and alter statements that reflect their views.

Comment author: Alejandro1 20 September 2012 06:59:33PM 4 points [-]

For example, if you are a US presidential candidate, what tactics would you use to invisibly switch voters' choice to you?

I vaguely remember that when a president becomes very widely accepted as a good or bad president, many people will misremember that they voted for or against him respectively; e.g. far fewer people would admit (even to themselves) having voted for Nixon than the actual number that voted for him. If this is so, then maybe the answer is simply "Win, and be a good president".

Comment author: shminux 20 September 2012 07:02:20PM 14 points [-]

"Win, and be a good president"

That would not be an instrumentally useful campaigning strategy.

Comment author: TimS 20 September 2012 06:24:52PM 2 points [-]

Now I'm alternating between laughing and crying. :(

Comment author: Epiphany 21 September 2012 07:20:52AM 0 points [-]

Awww. I might have discovered a flaw in this study, TimS. Here you go

Comment author: Epiphany 21 September 2012 09:11:47AM *  1 point [-]

Imagine answering a question like "I think such and such candidate is not a very good person." Then the survey gives you a button where you can automatically post it to your twitter / facebook. When you read the post on your twitter, it says "I think such and such candidate is a very good person." but you don't notice the wording has changed. :/

I wonder if people would feel compelled to confabulate reasons why they posted that on their accounts. It might set off their "virus" radars because of the online context and therefore not trigger the same behavior.

Comment author: Epiphany 21 September 2012 07:50:09AM *  1 point [-]

Dark Tactic:

  1. An unwitting research company could be contracted to do a survey by an unethical organization.
  2. The survey could use the trick of asking some question that people will mostly say "yes" to, and then asking a similar question later where the wording is slightly changed to agree with the viewpoint of the unethical organization.
  3. Most people end up saying they agree with the viewpoint of the unethical organization.
  4. The reputation of the research company is abused as the unethical organization claims they "proved" that most people agree with their point of view.
  5. A marketing campaign is devised around the false evidence that most people agree with them.

They already trick people in less expensive ways, though. I was taught in school that they'll do things like ask 5 doctors whether they recommend something and then say "4 of 5 doctors recommend this" to imply 4 of every 5 doctors, when their sample was way too small.
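To put a number on the small-sample point (my own arithmetic, not from the class): even if doctors were actually split 50/50, a poll of only five would produce a "4 of 5 or better" result nearly a fifth of the time.

```python
from math import comb

# P(at least 4 of 5 doctors say yes) when each answers yes
# independently with probability 0.5:
p = sum(comb(5, k) * 0.5 ** 5 for k in (4, 5))
print(f"{p:.4f}")  # 6/32 = 0.1875
```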

Comment author: MixedNuts 20 September 2012 08:40:01AM 15 points [-]

Can someone sneakily try this on me? I like silly questionnaires, polls, and giving opinions, so it should be easy.

Comment author: MileyCyrus 20 September 2012 05:34:02PM 17 points [-]

You said in a previous thread that after a hard day of stealing wifi and lobbying for SOPA, you and Chris Brown like to eat babies and foie gras together. Can you explain your moral reasoning behind this?

Comment author: Eliezer_Yudkowsky 20 September 2012 06:16:21PM 19 points [-]

The geese and babies aren't sentient, wifi costs the provider very little, that's actually a different Chris Brown, and I take the money I get paid lobbying for SOPA and donate it to efficient charities!

(Sorry, couldn't resist when I saw the "babies" part.)

Comment author: jeremysalwen 21 September 2012 01:44:11AM 5 points [-]

I'll make sure to keep you away from my body if I ever enter a coma...

Comment author: Incorrect 21 September 2012 04:58:43AM 10 points [-]

Oh don't worry, there will always be those little lapses in awareness. Even supposing you hide yourself at night, are you sure you maintain your sentience while awake? Ever closed your eyes and relaxed, felt the cool breeze, and for a moment, forgot you were aware of being aware of yourself?

Comment author: loup-vaillant 20 September 2012 07:47:15AM 15 points [-]

Now that's one ultimate rationalization. The standard pattern is to decide (or prefer) something for one reason, then confabulate more honourable reasons why we decided (or preferred) thus.

But confabulating for something we didn't even decide… that's taking things up a notch.

I bet the root problem is the fact that we often resolve cognitive dissonance before it even hits the conscious level. Could we train ourselves to notice such dissonance instead?

Comment author: DaFranker 20 September 2012 07:21:35PM 13 points [-]

Could we train ourselves to notice such dissonance instead?

This needs to get a spot in CFAR's training program(s/mme(s)?). It sounds like the first thing you'd want to do once you reach the rank of second-circle Initiate in the Bayesian Conspiracy. Or maybe the first part of the test to attain this rank.

Comment author: Epiphany 21 September 2012 07:06:07AM *  13 points [-]

An alternate explanation:

Maybe the years of public schooling that most of us receive cause us to trust papers so much, that if we see something written down on a paper, we feel uncomfortable opposing it. If you're threatened with punishment for not regurgitating what is on an authority's papers daily for that many years of your life, you're bound to be classically conditioned to behave as if you agree with papers.

So maybe what's going on is this:

  1. You fill out a scientist's paper.

  2. The paper tells you your point of view. It looks authoritative because it's in writing.

  3. You feel uncomfortable disagreeing with the authority's paper. School taught you this was bad.

  4. Now the authority wants you to support the opinion they think is yours.

  5. You feel uncomfortable with the idea of failing to show the authority that you can support the opinion on the paper. (A teacher would not have approved - and you'd look stupid.)

  6. You might want to tell the authority that it's not your opinion, but they have evidence that you believe it - it's in writing.

  7. You behave according to your conditioning by agreeing with the paper, and do as expected by supporting what the researcher thinks your point of view is.

I think this might just be an external behavior meant to maintain approval of an authority, not evidence that they've truly changed their minds.

I wonder what would happen if the study were re-done in a really casual way with say, crayon-scrawled questions on scraps of napkins instead of authoritative looking papers.

Also, I wonder how much embarrassment it caused when they seemed to fill out the answers all wrong and how embarrassment might have influenced these people's behavior. Imagine you're filling out a paper (reminiscent of taking a test in school) but you filled out the answers all wrong. Horrified by the huge number of mistakes you made, might you try to hide it by pretending you meant to fill them out that way?

Comment author: orthonormal 21 September 2012 03:46:04PM 7 points [-]

It seems to me that this hypothesis is more of a mechanism for choice blindness than an alternate explanation: we already know that human beings will change their minds (and forget they've done so) in order to please authority.

(There's nonfictional evidence for this, but I need to run, so I'll just mention that we've always been at war with Oceania.)

Comment author: Epiphany 21 September 2012 05:30:51PM *  2 points [-]

What I'm saying is "Maybe they're only pretending to have an opinion that's not theirs." not "They've changed their minds for authority." so I still think it is an alternate explanation for the results.

Comment author: TheOtherDave 21 September 2012 07:01:45PM 7 points [-]

IIRC, part of the debriefing protocol for the study involved explaining the actual purpose of the study to the subjects and asking them if there were any questions where they felt the answers had been swapped. If they at that point identified a question as having fallen into that category, it was marked as retrospectively corrected, rather than uncorrected.

Of course, they could still be pretending, perhaps out of embarrassment over having been rooked.

Comment author: Epiphany 21 September 2012 08:00:00PM *  0 points [-]

I'm having trouble interpreting what your point is. It seems like you're saying "because they were encouraged to look for swapped questions beforehand, Epiphany's point might not be valid". However, what I read stated: "After the experiment, the participants were fully debriefed about the true purpose of the experiment." So it may not have even occurred to most of them to wonder whether the questions had been swapped at the point when they were giving confabulated answers.

Does this clarify anything? It seems somebody got confused. Not sure who.

Comment author: TheOtherDave 21 September 2012 08:44:09PM 3 points [-]

IIRC, questions that were scored as "uncorrected" were those that, even after debriefing, subjects did not identify as swapped.
So if Q1 is scored as uncorrected, part of what happened is this: I gave answer A to Q1, it was swapped for B, and I explained why I believe B. Afterwards I was informed that some answers were swapped and asked whether there were any questions I thought that was true for, even if I didn't volunteer that judgment at the time, and I didn't report that this was true of Q1.
If I'm only pretending to have an opinion (B) that's not mine about Q1, the question arises of why I don't at that time say "Oh, yeah, I thought that was the case about Q1, since I actually believe A, but I didn't say anything at the time."

As I say, though, it's certainly possible... I might continue the pretense of believing B.

Comment author: Desrtopa 21 September 2012 06:55:42PM 9 points [-]

I have to wonder if many of the respondents in the survey didn't hold any position with much strength in the first place. Our society enforces the belief, not only that everyone is entitled to their opinions, but that everyone should have an opinion on just about any issue. People tend to stand by "opinions" that are really just snap judgments, which may be largely arbitrary.

If the respondents had little basis for determining their responses in the first place, it's unsurprising if they don't notice when they've been changed, and that it doesn't affect their ability to argue for them.

Comment author: Unnamed 25 September 2012 10:00:46PM 2 points [-]

There is a long tradition in social science research, going back at least to Converse (1964), holding that most people's political views are relatively incoherent, poorly thought-through, and unstable. They're just making up responses to survey questions on the spot, in a way that can involve a lot of randomness.

This study demonstrates that plus confabulation, in a way that is particularly compelling because of the short time scale involved and the experimental manipulation of what opinion the person was defending.

Comment author: k3nt 22 September 2012 07:22:32PM 2 points [-]

But the study said:

"The statements in condition two were picked to represent salient and important current dilemmas from Swedish media and societal debate at the time of the study."

Comment author: Vaniver 23 September 2012 06:45:22AM 3 points [-]

But the study said:

Even then, people can fail to have strong opinions on issues in current debate; I know my opinions are silent on many issues that are 'salient and important current dilemmas' in American society.

Comment author: MaoShan 24 September 2012 02:10:33AM 7 points [-]

I remember an acquaintance of mine in high school (maybe it was 8th grade) replied to a teacher's question with "I'm Pro-who cares". He was strongly berated by the teacher for not taking a side, when I honestly believe he had no reason to care either way.

Comment author: TheOtherDave 23 September 2012 07:12:49AM 3 points [-]

IIRC, the study also asked people to score how strongly they held a particular opinion, and found a substantial (though lower) rate of missed swaps for questions they rated as strongly held.

I would not expect that result were genuine indifference among options the only significant factor, although I suppose it's possible people just mis-report the strengths of their actual opinions.

Comment author: simplicio 21 September 2012 11:35:41PM 2 points [-]

Quite. My own answer to most of the questions in the survey is "Yes/No, but with the following qualifications." It's not too hard for me to imagine choosing, say, "Yes" to the surveillance question (despite my qualms), then being told I said "No," and believing it.

You won't fool these people if you ask them about something salient like abortion.

Comment author: MugaSofer 26 September 2012 12:23:19PM 1 point [-]

Abortion is a complex issue. You could probably change someone's position on one aspect of the abortion debate, such as a hardline pro-lifer "admitting" that it's OK in cases where the mother's life is in danger.

Comment author: Epiphany 21 September 2012 08:34:08AM 9 points [-]

Another explanation:

Might this mean they trust external memories of their opinions more than their own memories? Know what that reminds me of? Ego. Some people trust others more than themselves when it comes to their view of themselves. And that's why insults hurt, isn't it? Because they make you doubt yourself. Maybe people do this because of self-doubt.

Comment author: pinyaka 21 September 2012 12:14:31PM 7 points [-]

I wonder how long lived the new opinions are?

Comment author: TheOtherDave 21 September 2012 07:15:12PM 5 points [-]

Relatedly, I wonder how consistent people's original answers to these questions are (if, say, retested a month later). But I would expect answers the subjects are asked to defend/explain (whether original or changed) to be more persistent than answers they aren't.

Comment author: Lightwave 20 September 2012 08:48:21PM *  6 points [-]
Comment author: Alejandro1 21 September 2012 06:37:53PM 2 points [-]

Rabbit season!

Comment author: shminux 20 September 2012 04:53:26PM *  6 points [-]

1 karma point to anyone who links to a LW thread showing this effect (blind change of moral choice) in action. 2 karma points if you catch yourself doing it in such a thread.

Comment author: shminux 20 September 2012 04:57:35PM *  14 points [-]

A real-life example of a similar effect: I explained the Newcomb problem to a person and he two-boxed initially, then, after some discussion, he switched to one-boxing and refused to admit that he ever two-boxed.

Comment author: jimmy 21 September 2012 07:08:43AM 14 points [-]

This is common enough that I specifically watch out for it when asking questions that people might have some attachment to. Just today I didn't even ask because I knew I was gonna get a bogus "I've always thought this" answer.

I know a guy who "has always been religious" ever since he almost killed himself in a car crash.

My mom went from "Sew it yourself" to "Of course I'll sew it for you, why didn't you ask me earlier?" a couple of weeks later, because she had offered to sew something for my brother-in-law, which would make her earlier decision incongruent with her self-image. Of course, she was offended when I told her that I did :p

Comment author: pjeby 25 September 2012 05:51:47PM *  7 points [-]

I know a guy who "has always been religious" ever since he almost killed himself in a car crash.

My wife, not long before she met me, became an instant non-smoker and was genuinely surprised when friends offered her cigarettes -- she had to make a conscious effort to recall that she had previously smoked, because it was no longer consistent with her identity, as of the moment she decided to be a non-smoker.

This seems to be such a consistent feature of brains under self-modification that the very best way to know whether you've really changed your mind about something is to see how hard it is to think the way you did before, or how difficult it is to believe that you ever could have thought differently.

Comment author: thomblake 25 September 2012 06:15:59PM 2 points [-]

It's the best way I've seen to quit smoking - it seems to work every time. The ex-smoker says "I'm a non-smoker now" and starts badmouthing smokers - shortly they can't imagine doing something so disgusting and inconsiderate as smoking.

Comment author: wedrifid 25 September 2012 09:49:58PM 2 points [-]

It's the best way I've seen to quit smoking - it seems to work every time.

The second of these claims would be extremely surprising to me, even if weakened to '90% of the time' to allow for figures of speech. Even a success rate of 50% would be startling. I don't believe it.

Comment author: thomblake 26 September 2012 01:48:32PM *  3 points [-]

It's not surprising to me, though I imagine it's vulnerable to massive selection effect. My observation is about people who actually internalized being a non-smoker, not those who tried to do so and failed. I'm not surprised those two things are extremely highly correlated. So it might not be any better as strategy advice than "the best way to quit smoking is to successfully quit smoking".

Comment author: pjeby 26 September 2012 04:05:10AM -1 points [-]

Even a success rate of 50% would be startling. I don't believe it.

Which is ironic, because the Wikipedia page you just linked to says that "95% of former smokers who had been abstinent for 1–10 years had made an unassisted last quit attempt", with the most frequent method of unassisted quitting being "cold turkey", about which it was said that:

53% of the ex-smokers said that it was "not at all difficult" to stop

Of course, the page also says that lots of people don't successfully quit, which isn't incompatible with what thomblake says. Among people who are able to congruently decide to become non-smokers, it's apparently one of the easiest and most successful ways to do it.

It's just that not everybody can decide to be a non-smoker, or that it occurs to them to do so.

Anecdotally, my wife said that she'd "quit smoking" several times prior, each time for extrinsic reasons (e.g. dating a guy who didn't smoke, etc.). When she "became a non-smoker" instead (as she calls it), she did it for her own reasons. She says that as soon as she came to the conclusion that she needed to stop for good, she decided that "quitting smoking" wasn't good enough to do the job, and that she would have to become a non-smoker instead. (That was over 20 years ago, fwiw.)

I'm not sure how you'd go about prescribing that people do this: either they have an intrinsic desire to do it or not. You can certainly encourage and assist, but intrinsic motivation is, well, intrinsic. It's rather difficult to decide on purpose to do something of your own free will, if you're really trying to do it because of some extrinsic reason. ;-)

Comment author: Vaniver 26 September 2012 04:13:34AM 6 points [-]

Which is ironic, because the Wikipedia page you just linked to says that "95% of former smokers who had been abstinent for 1–10 years had made an unassisted last quit attempt", with the most frequent method of unassisted quitting being "cold turkey", about which it was said that:

wedrifid is asking for P(success|attempt), not P(attempt|success), and so a high P(attempt|success) isn't ironic.
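To make the distinction concrete, here's a toy calculation with made-up numbers (illustrative only, not taken from the Wikipedia page):

```python
# Hypothetical counts, purely for illustration:
cold_turkey_attempts = 1000   # smokers who tried quitting cold turkey
cold_turkey_successes = 100   # of those, how many stayed quit
all_successes = 110           # successful quitters by any method

# P(success | attempt): of everyone who tried cold turkey, how many succeeded?
p_success_given_attempt = cold_turkey_successes / cold_turkey_attempts   # 0.10

# P(attempt | success): of everyone who succeeded, how many used cold turkey?
p_attempt_given_success = cold_turkey_successes / all_successes          # ~0.91

# The second can be high while the first is low, so quoting the second
# says little about how well the method works for a given smoker.
assert p_attempt_given_success > 0.9 and p_success_given_attempt <= 0.1
```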

Comment author: RichardHughes 20 September 2012 07:13:13PM 2 points [-]

Can you provide more info about the event?

Comment author: shminux 20 September 2012 08:05:10PM 11 points [-]

I presented the paradox (the version where you know of 1000 previous attempts all confirming that the Predictor is never wrong), answered the questions, cut off some standard ways to weasel out, then asked for the answer and the justification, followed by a rather involved discussion of free will, outside vs inside view, then returned to the question. What I heard was "of course I would one-box". "But barely an hour ago you were firmly in the two-boxing camp!". Blank stare... "Must have been a different problem!"

Comment author: fubarobfusco 20 September 2012 10:07:45PM 15 points [-]

Denying all connection to a possible alternate you who would two-box might be some sort of strategy ...

Comment author: RichardHughes 20 September 2012 02:48:30PM 6 points [-]

It strikes me that performing this experiment on people, then revealing what has occurred, may be a potentially useful method of enlightening people to the flaws of their cognition. How might we design a 'kit' to reproduce this sleight of hand in the field, so as to confront people with it usefully?

Comment author: NancyLebovitz 20 September 2012 06:28:47PM 5 points [-]

It would be easy enough to do with a longish computer survey. It's much easier to change what appears on a screen than to do sleight-of-paper.
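As a minimal sketch of how such a computerized version might work (the first question is from the study as quoted in the post; everything else, including the function and variable names, is hypothetical):

```python
# Hypothetical sketch of a computerized choice-blindness survey: record
# agree/disagree answers, covertly flip one, then ask the respondent to
# justify each answer as displayed. Not the study's actual materials.
import random

QUESTIONS = [
    "Large scale governmental surveillance of e-mail and Internet traffic "
    "ought to be forbidden as a means to combat international crime and terrorism.",
    "Some other salient moral statement from current debate.",
]

def run_survey(get_answer, get_explanation, rng=random):
    # Phase 1: collect the respondent's actual answers.
    answers = {q: get_answer(q) for q in QUESTIONS}  # each "agree" or "disagree"

    # Phase 2: covertly reverse one randomly chosen answer.
    target = rng.choice(QUESTIONS)
    shown = dict(answers)
    shown[target] = "disagree" if shown[target] == "agree" else "agree"

    # Phase 3: ask the respondent to explain each answer as displayed.
    explanations = {q: get_explanation(q, shown[q]) for q in QUESTIONS}
    return answers, shown, explanations
```

A real version would also record whether the respondent flags the swap, since that detection rate is the study's key measure.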

Comment author: Armok_GoB 20 September 2012 06:11:52PM 3 points [-]

For added fun metaness, have the option you switch them to, and that they start rationalizing for, be the one you're trying to convince them of :p

Comment author: nerfhammer 20 September 2012 05:00:09PM 2 points [-]

The video shows the mechanics of how it works pretty well.

Comment author: niceguyanon 20 September 2012 04:08:37PM *  1 point [-]

I suspect that those who are most susceptible to moral-proposition switches and the subsequent defense of the switch are also the same people who will deny the evidence when confronted with their switch. Much like the Dunning-Kruger effect, there will be people who fail to recognize the extremity of their inadequacy, even when confronted with evidence of it.

Edit: The paper states that they informed all participants of the true nature of the survey, but it does not go into detail on whether participants actually acknowledged that their moral propositions were switched.

Comment author: Kawoomba 20 September 2012 06:40:43AM 6 points [-]

Similar to your first video, here's the famous "count how often the players in white pants pass the ball" test (Simons & Chabris 1999).

Incredibly, if you weren't primed to look for something unexpected, you probably wouldn't notice. I've seen it work first-hand in cogsci classes.

Comment author: Lapsed_Lurker 20 September 2012 07:46:12AM 5 points [-]

Even having watched the video before, when I concentrated hard on counting passes, I missed seeing it.

Comment author: nerfhammer 20 September 2012 07:58:44PM 4 points [-]

This is "inattentional blindness". Choice blindness is sort of like the opposite: in inattentional blindness you don't notice something you're not paying attention to; in choice blindness you don't notice something which you are paying attention to.

Comment author: Kawoomba 21 September 2012 08:03:19AM *  1 point [-]

Edit: I didn't really understand your above definition of choice blindness versus inattentional blindness; Scholarpedia has a good contrasting definition:

Change blindness refers to the failure to notice something different about a display whereas inattentional blindness refers to a failure to see something present in a display. Although these two phenomena are related, they are also distinct.

Change blindness inherently involves memory — people fail to notice something different about the display from one moment to the next; that is, they must compare two displays to spot the change. The signal for change detection is the difference between two displays, and neither display on its own can provide evidence that a change occurred.

In contrast, inattentional blindness refers to a failure to notice something about an individual display. The missed element does not require memory – people fail to notice that something is present in a display.

In a sense, most inattentional blindness tasks could be construed as change blindness tasks by noting that people fail to see the introduction of the unexpected object (a change – it was not present before and now it is). However, inattentional blindness specifically refers to a failure to see the object altogether, not to a failure to compare the current state of a display to an earlier state stored in memory.

Comment author: Lightwave 20 September 2012 09:41:05PM *  5 points [-]

One interpretation is that many people don't have strongly held or stable opinions on some moral questions and/or don't care. Doesn't sound very shocking to me.

Maybe morality is extremely context sensitive in many cases, thus polls on general moral questions are not all that useful.

Comment author: Ezekiel 21 September 2012 07:58:09AM 8 points [-]

The study asked people to rate their position on a 9-point scale. People who took more extreme positions, while more likely to detect the reversal, also gave the strongest arguments in favour of the opposite opinion when they failed to detect the reversal.

Also, the poll had two kinds of questions. Some of them were general moral principles, but some of them were specific statements.

Comment author: Lightwave 21 September 2012 07:34:56PM 1 point [-]

Some of them were general moral principles, but some of them were specific statements.

Trolley problems are also very specific, but people have great trouble with them. Maybe I should have said "non-familiar" rather than just "general".

Comment author: k3nt 22 September 2012 07:29:23PM 1 point [-]

If you read the study, they say that the "specific" questions they are asking are questions that were very salient at the time of the study. These are things that people were talking about and arguing about at the time, and were questions with real-world implications. Thus precisely not "trolley problems."

Comment author: drethelin 20 September 2012 08:19:08PM *  5 points [-]

A side effect of this is to reinforce the importance of writing about the Obvious, because things seem obvious after we've learned them, and we literally have trouble thinking about not knowing/viewing things in a certain way.

Comment author: shminux 20 September 2012 08:20:20PM 3 points [-]

Especially if the Obvious turns out to be wrong.

Comment author: drethelin 20 September 2012 08:24:46PM 4 points [-]

Sure. Either way actively talking about the obvious is useful.

Comment author: Fyrius 23 September 2012 04:32:09PM *  4 points [-]

concealing another person whom replaces the experimenter as the door passes.

(Very minor and content-irrelevant point here, but my grammar nazi side bids me to say it, at the risk of downvotery: it should be "who" here, not "whom", since it's the subject of the relative clause.)

Comment author: SilasBarta 21 September 2012 09:59:34PM 4 points [-]

I thought I might mention a sort-of similar thing, though done more for humor: the Howard Stern Show interviewed people in an area likely to favor a certain politician, asking them if they supported him because of position X, or position Y (both of which he actually opposed).

(If you remember this, go ahead and balk at the information I left out.)

Comment author: simplicio 21 September 2012 11:28:18PM 2 points [-]

This is indeed amusing, but the author draws a wrong/incomplete/tendentious conclusion from it. I think the proper conclusion is basically our usual "Blue vs Green" meme, plus some Hansonian cynicism about 'informed electorates.'

Comment author: Epiphany 21 September 2012 07:45:32PM *  4 points [-]

Clarifying question: Did they actually change their minds on moral positions or did this study just give the appearance that they changed their minds? This is a question that we need to be asking as we look for meaning in this information, but not everyone here is thinking to ask it. Even when I proposed an alternate explanation to show how this could give the false appearance of people changing their minds when they did not, I got one response from somebody that didn't seem to realize I had just explained why this result might be due to people pretending to support those views when they do not. (I have made this even more explicit.) I think it might be a good idea to include the clarifying question at the end of the original post.

Comment author: RobinZ 20 September 2012 06:02:43PM 4 points [-]

There was a high level of inter-rater agreement between the three raters for the NM reports (r = .70) as well as for the M reports (r = .77), indicating that there are systematic patterns in the verbal reports that corresponds to certain positions on the rating scale for both NM and M trials. Even more interestingly, there was a high correlation between the raters estimate and the original rating of the participants for NM (r = .59) as well as for M reports (r = .71), which indicates that the verbal reports in the M trials do in fact track the participants rated level of agreement with the opposite of the initial moral principle or issue [emphasis added] (for an illustration of this process and example reports, see figure S1, Supporting Online Material). In addition, this relationship highlights the logic of the attitude reversal, in that more modest positions result in verbal reports expressing arguments appropriate for the same region on the mirror side of the scale. And while extreme reversals more often are detected, the remaining non-detected trials also create stronger and more dramatic confabulations for the opposite position.

Am I misreading this, or does it say that the verbal statements of people supporting an inverted opinion fit that opinion better than those describing their genuine opinion?

Comment author: Epiphany 21 September 2012 08:21:24AM 6 points [-]

Consider this: If you're supporting your own genuine opinion, you might have your own carefully chosen perspective that is slightly different from the question's wording. You only select the answer because it's the closest one of the options, not because it's exactly your answer. So, you may be inclined, then, to say things that are related but don't fit the question exactly. If you're confabulating to support a random opinion, though, what do you have to go by but the wording? The opinion is directing your thoughts then, leading your thoughts to fit the opinion. You aren't trying to cram pre-existing thoughts into an opinion box to make it fit your view.

Or looking at it another way:

When expressing your point of view, the important thing is to express what you feel, regardless of whether it fits the exact question.

When supporting "your" point because you don't want to look like an idiot in front of a researcher, the objective is to support it as precisely as possible, not to express anything.

As for whether your interpretation of that selection is correct: it's past my bed time and I'm getting drowsy, so someone else should answer that part instead.

Comment author: Ezekiel 21 September 2012 08:06:12AM 4 points [-]

I think it does. Can't believe I missed that.

Actually, this fits well with my personal experience. I've frequently found it easier to verbalize sophisticated arguments for the other team, since my own opinions just seem self-evident.

Comment author: Konkvistador 21 September 2012 06:50:31PM *  8 points [-]

Konkvistador's LessWrong improvement algorithm

  1. Trick brilliant but contrarian thinker into mainstream position.
  2. Trick brilliant but square thinker into contrarian position.
  3. Have each write an article defending their take.
  4. Enjoy improved rationalist community.

Comment author: [deleted] 22 September 2012 02:03:32PM 3 points [-]

Now, go ahead and implement that!

Comment author: Johnicholas 21 September 2012 12:28:40PM 8 points [-]

There are cognitive strategies that (heuristically) take advantage of the usually-persistent world. Should I be embarrassed, after working and practicing with pencil and paper to solve arithmetic problems, that I do something stupid when someone changes the properties of pencil and paper from persistent to volatile?

What I'd like to see is more aboveboard stuff. Suppose that you notify someone that you're showing them possibly-altered versions of their responses. Can we identify which things were changed when explicitly alerted? Do we still confabulate (probably)? Are the questions that we still confabulate on questions that we're more uncertain about - more ambiguous wording, more judgement required?

Comment author: TheOtherDave 21 September 2012 07:11:14PM 5 points [-]

I don't have citations handy, but IIRC in general inattentional blindness effects are greatly diminished if you warn people ahead of time, which should not be surprising. I don't know what happens if you warn people between the filling-out-the-questionnaire stage and the reading-the-(possibly altered)-answers stage; I expect you'd get a reduced rate of acceptance of changed answers, but you'd also get a not-inconsiderable rate of rejection of unchanged answers.

More generally: we do a lot of stuff without paying attention to what we're doing, but we don't keep track of what we did or didn't pay attention to, and on later recollection we tend to confabulate details into vague memories of unattended-to events. This is a broken system design, and it manifests in a variety of bugs that are unsurprising once we let go of the intuitive but false belief that memory is a process of retrieving recordings into conscious awareness.

It frequently startles me how tenacious that belief is.

Comment author: TheOtherDave 20 September 2012 02:34:07PM 7 points [-]

This ought not surprise me. It is instructive how surprising it nevertheless is.

Comment author: undermind 30 September 2012 10:40:37PM *  3 points [-]

<Ahem> Gaslighting. </Ahem>

Seriously, there's already a well-established form of psychological abuse founded on this principle. It works, and it's hard to see how to take it much further into the Dark Arts.

Comment author: Eliezer_Yudkowsky 20 September 2012 02:14:10PM 11 points [-]

Move to Main, please!

Comment author: rhollerith_dot_com 22 September 2012 12:15:43AM *  2 points [-]

So if I defraud someone by pretending to sell them an iPad for $100 but pocketing the $100 instead, I am more likely to get away with the fraud if instead of straightforwardly offering them an iPad, I set up a shady charity and offer them a choice between buying the iPad and donating $100 to the shady charity (provided that it's sufficiently easy for me to extract money from the charity).

Comment author: [deleted] 20 September 2012 06:25:24PM 0 points [-]

o.O

Comment author: gwern 17 December 2012 10:23:20PM 1 point [-]
Comment author: learnmethis 26 September 2012 01:51:13AM 1 point [-]

Also known as the "people can't remember things without distinctive features" phenomenon. Still interesting to note their behaviours in the situation though.

Comment author: Epiphany 22 September 2012 07:43:09AM 1 point [-]

You can't not believe everything you read, from the Journal of Personality and Social Psychology, might contain the beginnings of another alternative explanation to this.

Comment author: Jayson_Virissimo 22 September 2012 09:02:41AM 1 point [-]

Great paper, although (annoyingly) they conflate democracy with liberalism.

Comment author: blogospheroid 21 September 2012 06:45:17AM -2 points [-]

Wow!

I don't bandy the term sheeple out very frequently. But here it might just be appropriate.

Comment author: RichardKennaway 21 September 2012 09:34:08AM 13 points [-]

No-one says "sheeple" intending to include themselves. Do you have any reason to think you are immune from this effect?

Comment author: blogospheroid 22 September 2012 04:45:27AM 2 points [-]

Actually, yes. I would think that I would be relatively immune from this effect in the domain of morality, because I have thought about morality quite often.

Maybe in a field that I didn't have much knowledge about, if I were asked to give opinions and this kind of a thing was pulled on me, I would succumb and quite badly, I admit. But I wouldn't feel that bad.

I guess my main takeaway from this analogy is that most people don't care that much about morality to stop and think for a while. They go as the flow goes and therefore I said "Sheeple".

I am in no way saying that I am the purest and most moral person on earth. I am most definitely not living my life in accordance with my highest values. But I have a fairly high confidence that I will not succumb to this effect, at least in the domain of moral questions.

Comment author: Epiphany 21 September 2012 08:59:57AM 5 points [-]

That's what I thought at first, too, but on second thought, I don't think they went far enough to confirm that this actually causes people to change their opinions. There are other reasons people might act the way they did.

Comment author: Ezekiel 21 September 2012 08:01:42AM 4 points [-]

I suspect sheep would be less susceptible to this sort of thing than humans.

Comment author: TraderJoe 21 September 2012 08:11:29AM 0 points [-]

For logic this woolly, I agree...

Comment author: physicsgirl 27 January 2013 12:20:42AM 0 points [-]

Stuff actually works; I just did an experiment on it with tea and jam. It's so crazy.

Comment author: gwern 27 January 2013 01:46:34AM 0 points [-]

Details?

Comment author: [deleted] 27 January 2013 01:46:33AM 0 points [-]

I just did an experiment on it with tea and jam

What kind of experiment?