confirmation bias, thought experiment
Why do people end up with differing conclusions, given the same data?
Model
The information we get from others cannot always be relied upon completely. Some of the people telling you stuff are liars, some are stupid, and some are incorrectly or insufficiently informed. Even when the person giving you an opinion is honest, smart and well informed, they are still unlikely to be able to tell you accurately how reliable their own opinion is.
So our brains use an 'unreliability' factor. We automatically take what others tell us and discount it by a certain amount, depending on how 'unreliable' we estimate the source to be.
We also compare what people tell us against 'known reference points' (claims we are already confident about) in order to update our estimates of their unreliability.
If Sally tells me that vaccines cause AIDS, and I am much more certain that this is not the case than I am of Sally's reliability, then instead of modifying my opinion about what causes AIDS, I modify my opinion of how reliable Sally is.
If I'm only slightly more certain, then I might take the step of asking Sally her reason for thinking that, and looking at her data.
If I have a higher opinion of Sally than of my own knowledge of science, and I don't much care about, or am unaware of, what other people think about the relationship between vaccines and AIDS, then I might just accept what she says, provisionally, without checking her data.
If I have a very much higher opinion of Sally, then not only will I believe her, but my opinion of her reliability will actually increase as I assess her as some mould-breaking genius who knows things that others do not.
Importantly, once we have altered our opinion, based upon input that we originally considered to be fairly reliable, we are very bad at reversing that alteration, if the input later turns out to be less reliable than we originally thought. This is called the "continued influence effect", and we can use it to explain a number of things...
Experiment
Let us consider a thought experiment where two subjects, Peter and Paul, are exposed to input about a particular topic (such as "Which clothes washing powder is it best to use?") from multiple sources. Both will be exposed to the same sources, 100 in favour of using the Persil brand of washing powder, and 100 in favour of using the Bold brand of washing powder, but in a different order.
If they both start off with no strong opinion in either direction, would we expect them to end the experiment with roughly the same opinion as each other, or can we manipulate their opinions into differing, just by changing the order in which the sources are presented?
Suppose we start Peter off with 10 of the Persil side's most reputable and well-argued sources, to raise his confidence in sources that support Persil.
We can then run another 30 much weaker pro-Persil sources past him, and he is likely to just nod and accept them, without bothering to examine the validity of the arguments too closely, because he's already convinced.
By this point he'll consider a source a bit suspect, straight away, just because it doesn't support Persil. Now we introduce him to the pro-Bold side, starting with the least reliable - the ones that are obviously stupid or manipulative. Furthermore, we don't let the pro-Bold side build up momentum: for every three poor pro-Bold sources, we interrupt with a medium-reliability pro-Persil source that rehashes pro-Persil points Peter is by now familiar with and agrees with.
After seeing the worst 30 pro-Bold sources, Peter doesn't just consider them a bit suspect - he considers them downright deceptive, and mentally categorises all such sources as not worth paying attention to. Any further pro-Bold sources, even ones that seem impartial and well reasoned, he's going to put down as fakes created by malicious researchers in the pay of an evil company.
We can now safely expose Peter to the medium-reliability pro-Bold sources and even the good ones, and we will need less and less to refute them - just a reminder to Peter of 'which side he is on' - because it is less about the data now, and more about identity: he doesn't see himself as the sort of person who'd support Bold. He's not a sheep. He's not taken in by the hoax.
Finally, after 80 pro-Persil sources and 90 pro-Bold sources, we have 10 excellent pro-Bold sources whose independence and science can't fairly be questioned. But it is too late for them to have much effect, and there are 20 good pro-Persil sources to balance them.
For Paul we do the reverse, starting with pro-Bold sources and only later introducing the pro-Persil side once a known reference point has been established as an anchor.
Simulation
Obviously, things are rarely that clear cut in real life. But people also don't often get data from both sides of an argument at a precisely equal rate. They bump around randomly, and once one side accumulates some headway, it is unlikely to be reversed.
We could add a third subject, Mary, and consider what is likely to happen if she is exposed to a random succession of sources, each with a 50% chance of supporting one side or the other, and each with a random value on a scale of 1 (poor) to 3 (good) for honesty, validity and strength of conclusion supported by the claimed data.
If we write down some actual mathematical models of how a source agreeing or disagreeing with you affects your estimate of its reliability, we can run a computer simulation of the above thought experiment to predict how different orders of presentation will affect people's final opinions under each model. Then we could compare that against real-world data, to see which model best matches reality.
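To make the simulation idea concrete, here is a minimal sketch in Python of the Mary scenario. The update rules and every constant in it (the downgrade factor, the learning rates) are invented for illustration: they are one arbitrary choice of "model" in the sense above, not a claim about how people actually weight sources.

```python
import random

def simulate(seed=None, n_sources=200):
    """Toy simulation of the Mary scenario. Belief lives in (-1, 1):
    negative means pro-Bold, positive means pro-Persil. All update rules
    and constants here are illustrative guesses, not a fitted model."""
    rng = random.Random(seed)
    belief = 0.0          # start with no strong opinion either way
    confidence = 0.1      # how certain Mary is of her current belief
    for _ in range(n_sources):
        side = rng.choice([-1, 1])         # which brand this source supports
        quality = rng.randint(1, 3) / 3.0  # 1 (poor) to 3 (good), scaled to (0, 1]
        reliability = quality              # initial estimate of the source's reliability
        if side * belief < 0 and confidence > reliability:
            # More certain of the belief than of the source:
            # downgrade the source instead of updating the belief much.
            reliability *= 0.5
        # Discount the source by its (possibly downgraded) reliability,
        # weighed against Mary's existing confidence.
        belief += (side - belief) * reliability * (1 - confidence) * 0.1
        # Confidence creeps up as evidence of either kind accumulates.
        confidence = min(1.0, confidence + 0.01 * reliability)
    return belief

# Different random orderings of the same kind of evidence can end up on opposite sides.
print([round(simulate(seed=s), 2) for s in range(10)])
```

Running it with different seeds stands in for presenting the same mix of evidence in different random orders; under rules like these, an early run of agreement tends to get locked in, which is the path-dependence the thought experiment describes.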
Prediction
I think that, if this experiment were carried out, one of the properties that would emerge naturally from it would be the backfire effect:
"The backfire effect occurs when, in the face of contradictory evidence, established beliefs do not change but actually get stronger. The effect has been demonstrated experimentally in psychological tests, where subjects are given data that either reinforces or goes against their existing biases - and in most cases people can be shown to increase their confidence in their prior position regardless of the evidence they were faced with."
Further Reading
https://en.wikipedia.org/wiki/Confirmation_bias
https://en.wikipedia.org/wiki/Attitude_polarization
http://www.dartmouth.edu/~nyhan/nyhan-reifler.pdf
http://www.tandfonline.com/doi/abs/10.1080/17470216008416717
http://lesswrong.com/lw/iw/positive_bias_look_into_the_dark/
http://www.tandfonline.com/doi/abs/10.1080/14640749508401422
http://rationalwiki.org/wiki/Backfire_effect
Buying happiness
There's a semi-famous paper by Dunn, Gilbert and Wilson: "If money doesn't make you happy, then you probably aren't spending it right". (Proper reference: Dunn, E.W., Gilbert, D.T., and Wilson, T.D., If money doesn't make you happy, then you probably aren't spending it right, Journal of Consumer Psychology, vol 21, issue 2, April 2011, pp. 115–125.) It's been referenced a few times on LW but curiously never written up properly here. The purpose of this post is to remedy that.
There is an earlier LW post called "Be Happier" which among other things references this paper and quotes some things it says, but that post is monstrously long and covers a lot more ground (hence, less detail on the material in this paper).
Dunn, Gilbert and Wilson (hereafter "DGW") offer eight principles to follow. Here they are.
1. Buy experiences instead of things.
Many studies have asked people to reflect on past "material" and/or "experiential" purchases and have consistently found that they report greater happiness from (and are made happier by recalling) the latter than the former.
Why? DGW propose five reasons. First, deliberately sought-out experiences encourage us to focus on the here and now (something shown to increase happiness substantially). Second, when things don't change we adapt to them rapidly, and "material" purchases like cars and tables tend to be pretty stable (whereas ongoing experiences are more varied). Third, it turns out that people spend more time anticipating experiences before they happen and recalling them afterwards than they do for material purchases. Fourth, experiences are less directly comparable to alternatives than material things, and therefore less subject to post-purchase regret. Fifth, experiences are often shared, and other people are a great source of happiness.
2. Help others instead of yourself.
Prosocial spending correlates better to happiness than personal spending. If you give random people money and either tell them to spend it on themselves or to spend it on someone else, the latter makes them happier. Reflecting on past spending-on-others makes people happier than reflecting on past spending-on-self. (I am a little skeptical about that one: the right point of comparison would be not the past spending but the past enjoyment of whatever you spent the money on.)
Why? DGW propose two reasons. First, prosocial spending is good for relationships and relationships are good for happiness. Second, when you spend on someone else you get to feel like a good person.
Most people have wrong intuitions about this: they expect spending on themselves to make them happier. Most people are wrong.
3. Buy many small pleasures instead of few big ones.
As we saw above under #1, we quickly adapt to changes. Therefore, a larger number of varied small pleasures may be a better buy than a single big one. There is some evidence for this (though to my mind it seems to bear less directly on DGW's principle than in the other cases we've considered so far). If you correlate people's happiness with their positive experiences, the correlation with how frequent those experiences are is stronger than the correlation with how intense they are. The optimal (for happiness) number of sexual partners to have over a year is one, perhaps because that gets you more sex even if individual instances are less exciting. (I find this less than convincing; individual instances might be better because partners learn what works well for them.)
The other reason DGW suggest why more smaller things should be better is diminishing marginal utility: half a cookie is more than half as good as a whole cookie. (This is, I think, partly because of adaptation, but that isn't the whole story.)
DGW suggest that this is one reason why the relationship between wealth and happiness isn't stronger: "wealth promises access to peak experiences, which in turn undermine the ability to savor small pleasures".
4. Buy less insurance.
We adapt to bad things as well as good, which means that bad things are less bad than we are liable to expect. Our overestimation of the impact of adverse occurrences is one reason why we buy insurance, which notoriously is always negative-expectation in monetary terms.
DGW cite various studies showing that people expect to be made markedly unhappier by losses than they actually are if the losses occur, and that people expect to regret bad outcomes more than they actually do (we overestimate how much we will blame ourselves, because we underestimate how good we are at blaming anything and anyone else for our misfortunes).
5. Pay now and consume later.
The opposite of the bargain proposed by credit cards! Besides the purely financial problems that arise from overspending (which are large and widespread), DGW suggest that "consume now, pay later" is bad for our happiness because it eliminates anticipation. We may derive a lot of pleasure even from anticipating something that we don't enjoy very much when it happens. "People who devote time to anticipating enjoyable experiences report being happier in general."
You might think that moving an experience later would simply mean more anticipation (good) but less reminiscence (bad), but it turns out that anticipation generally brings more happiness. (And, for unpleasant events, more pain.)
DGW suggest two other benefits of delaying consumption. First, we may make better choices (meaning, in this case, ones yielding more happiness overall, even if less in the very short term) when we make them a little way ahead. Second, the delay may increase uncertainty, which may keep attention focused on the thing we're buying, which may reduce adaptation. (This seems a little convoluted to me; DGW cite some research backing it up but I'm not sure it backs up the "by reducing adaptation" part of it.)
6. Think about what you're not thinking about.
That is: when choosing what to spend on, take some time to consider less obvious aspects that you'd otherwise be tempted to neglect. "The bigger home may seem like a better deal, but if the fixer-upper requires trading Saturday afternoons with friends for Saturday afternoons with plumbers, it may not be such a good deal after all." And: "consumers who expect a single purchase to have a lasting impact on their happiness might make more realistic predictions if they simply thought about a typical day in their life." (Rather than considering only the small bits of that day that will be impacted by their purchase.)
7. Beware of comparison shopping.
Comparison shopping, say DGW, focuses attention on the features that most clearly distinguish candidate purchases from one another, whereas other more-common features may actually have much more impact on happiness. It may also focus attention on more-concrete differences; for instance, if you ask people whether they would more enjoy a small heart-shaped chocolate or a large cockroach-shaped one, they generally prefer the former, but if you ask them to choose one of the two they tend to focus on the size and choose the latter.
DGW also point out that the context during comparison-shopping tends to be different from that during actual consumption, which can skew our evaluations.
8. Follow the herd instead of your head.
DGW cite research supporting de la Rochefoucauld's advice: "Before we set our hearts too much upon anything, let us first examine how happy those are who already possess it." Others' actual experiences of a thing are likely to be better predictors of our enjoyment than our theoretical estimates: we may know ourselves better, but they know the thing better.
They also suggest (and I don't think this really fits their heading) looking to others for advice on how we would enjoy something we are considering buying. The example they give is of research in which subjects were shown some foods and asked to estimate how much they would enjoy them, after which they ate them and evaluated their actual enjoyment. The wrinkle is that they were also observed, at the moment of being shown the foods, by other observers, who rated their immediate facial reactions -- which turned out to be better predictors of their enjoyment than the subjects' own assessments. So "other people may provide a useful source of information about the products that will bring us joy because they can see the nonverbal reactions that may escape our own notice".
Speculative rationality skills and appropriable research or anecdote
Is rationality training in its infancy? I'd like to think so, given the paucity of novel, usable information produced by rationalists since the Sequence days. I like to model the rationalist body of knowledge as a superset of pertinent fields such as decision analysis, educational psychology and clinical psychology. This reductionist model enables rationalists to examine the validity of rationalist constructs while standing on the shoulders of giants.
CFAR's obscurantism (and subsequent price gouging) capitalises on our [fear of missing out](https://en.wikipedia.org/wiki/Fear_of_missing_out). They rebrand established techniques - mindfulness as 'againstness', reference class forecasting as 'hopping' - as if they were of their own genesis, spiting academic tradition and cultivating an insular community. In short, LessWrongers predictably flout [cooperative principles](https://en.wikipedia.org/wiki/Cooperative_principle).
This thread is to encourage you to speculate on potential rationality techniques, underdetermined by existing research, that might be a useful area for rationalist individuals and organisations to explore. I feel this may be a better use of rationality training organisations' time than gatekeeping information.
To get this thread started, I've posted a speculative rationality skill I've been working on. I'd appreciate any comments about it or experiences with it. However, this thread is about working towards the generation of rationality skills more broadly.
A heuristic for predicting minor depression in others and myself, and related things
Summary
Look at how you or other people walk. Then go a bit meta.
Disclaimer
This post is probably not high quality enough to deserve to be top level purely on its qualitative merits. However, I think the sheer importance of the issue for human well-being makes it so. When voting, please consider the importance / potential utility of the whole discussion, and not only the quality of the post.
The problem
Minor depression is not really an accurately defined, easily recognizable thing. First of all, there are people with hard, boring or otherwise unsatisfactory lives who are unhappy about it; how can one tell this normal, justifiable unhappiness from minor depression? Especially since therapists often say that having good reasons to be depressed still counts as depression, so at that point you don't really know whether to focus on fixing your mind or fixing your life. Then there are a lot of things that don't even register as direct sadness or unhappiness but are considered part of, or related to, depression, such as lethargy/low energy/low motivation, irritability/aggressiveness, eating disorders, and so on. How could you tell whether you are just a bad-tempered lazy glutton or depressed? And finally, don't cultural expectations play a role, such as how Americans tend to be optimistic and expect a happy, pursue-shiny-things life, while e.g. Finns do not?
Of course there are clinical diagnostic methods, but people will ask a therapist for a diagnosis only when they already suspect something is wrong. They must think, "Jolly gee, I really should feel better than I do now, it is not really normal to feel this way, better ask a shrink!" But often it is not so. Often it is more like, "My mind is normal. It is life that sucks." So by what heuristic could you tell whether there is something wrong with yourself or with other people?
Basis
This is a heuristic I built mainly on observational correlations plus some psychological parallels. It has nothing to do with accepted medical science or expert opinion. My goal isn't so much to convince you this is a good heuristic as to open an open-ended discussion: to ask whether it seems like a good one to you, and to invite you to propose other methods.
How I think non-depressed men walk
"Having a spring in the step." This old saying is IMHO surprisingly apt. I like this drawing - NOT because I think depression is based on T levels, but I think this cartoonishly over-exaggerated body language is fairly expressive of the idea. For all I know this seems more of a dopamine thing, eagerness, looking forward not testosterone.
It seems to me non-depressed men push themselves forward with their rear leg, heels raised, calves engaged, almost like jumping forward. This is the "spring" in the step. The actual spring is the rear leg calf muscle. Often this is accompanied by a movement of arms while walking. A slight rocking or swaying of the NOT hips but chest / shoulders may also be part of it, but I think it is less relevant. The general message / feel is "I'm so eager to tackle challenges! That's fun!"
Psychologically, I think all this eagerly-looking-forward-to-challenges spring in the step means a mindset where you are not afraid of the future, but not because you think it will be smooth sailing, but because you are confident in yourself to be able to tackle challenges and even enjoy doing so. This seems like a healthy mindset.
How I think depressed men walk
Dragging feet. Dragging a slouched, sack-like, tension-free upper body. Leaning forward. Head down. Shoulders pulled up, hunched to protect the neck, engaging the upper trapezius muscles. Chronic pain in the upper traps (from their constant engagement), such that having your upper traps massaged feels SO good, may be a predictive sign of it. It comes across as embarrassed, scolded-boy body language.
Another way of walking that I noticed in myself, and that probably counts as depressed, is the duck-walk. The movement is started by the upper body slightly "falling" forward, the center of gravity starting to go forward, then "catching" the fall by sticking a leg forward, and the foot hits the ground flat - not with the front part of the foot but the whole foot, like a duck. Basically your heels are almost never raised and your calves are not engaged much. This would be difficult or impossible if you had a springy step, i.e. pushing forward with the rear leg (you would have to raise a heel for that), but it is possible if you fall forward and catch, fall forward and catch. Often the feet are not raised high (related verbs: to scuff, to shuffle).
How I think non-depressed women walk
Generally speaking, I use the same heuristic for women who seem like the "one of the boys" type (i.e. those who wear comfortable sports shoes, focus on career goals rather than on seducing men, etc.).
But this clearly does not work with all women; that springy-step thing is pretty much impossible in stilettos, for example. Rather, I think non-depressed women often tend to sway the hips. It is an unconscious enjoyment of their own femininity and sexiness, not a show put on for the sake of men.
I don't really have clear ideas of how depressed women walk; all I can offer is "not like the above". When both the eager spring and the sexy hip sway are missing, it may be a sign.
For people of non-binary gender and other special cases: again all I can offer is that if you are non-depressed, you probably have either the eager spring or the hip sway.
Am I putting the bar too high? False positives?
Is it possible that this is too "strict" a heuristic? While I think these heuristics are generally true of people who are in excellent emotional shape - who feel confident, love them some challenges, feel sexy, etc. - it may be that this emotional bar is higher than the waterline for depression. It is possible that some people are not depressed and yet fall below this line, having less confidence, less eager happy expectation, less self-conscious sexiness, or something like that.
Essentially I think my method does not really have many false negatives, but could possibly yield false positives.
Have you seen many cases that would count as false positives?
Meta: why is minor depression so difficult to tell / diagnose accurately?
There are clinically made checklists, but they sound like a collection of unrelated things. Could the same thing really cause you to sleep too much or not enough, eat too much or not enough? Doesn't it sound like Selling Nonapples? Putting everybody who does not have just the perfect sleeping or eating habits into one common category called depression?
For example, in the West most people see depression as "the blues", i.e. some form of sadness. But often people don't report feeling sad; they report being very lethargic and lacking energy and motivation, and that, too, is often seen as depression. Some people are just negative and bitter and don't enjoy anything, and yet they don't see it as their own sadness but more like "life is hard". I guess in both cases it is more like internalizing sadness: considering being sad a normal thing and not really expecting to feel good. (This may be the case for me and for surprisingly many people in my family / relatives. A life-is-tough, survivalist ethos, not a fun ethos.)
Then you go outside the West and you find even more different things. I cannot find my source anymore, but I remember a story that in a culture like Mali's, women generally don't express their emotions and are not conscious of them, and there depression is diagnosed through physical symptoms like chest pain.
Is minor depression an apple or a nonapple? A thing, one thing, or a generic "anything but normal happiness" bin?
I think my walking heuristic does predict something, and that something is probably close enough to the idea of minor depression, but whether it is too broad a tool with many false positives, or whether it predicts only a narrowly specific kind of depression, I cannot really tell; basically I am asking you here whether it matches your experiences or not.
What are your heuristics? What would be an easy heuristic with few false positives?
P.S. Researchers found a reverse link saying walking in a happy or depressed style _causes_ mood changes. It seems the article assumes everybody knows what walking in a happy or depressed style means. In fact this is what I am trying to find out here!
P.P.S. I know I suck at writing, so let me try to reformulate the main point a different way: we know people cannot be happy all the time, and often have such unsatisfying lives that they are rarely happy. How can we find the thin line between normal, common unhappiness based on life dissatisfaction (a hard or boring life) and minor depression? Can walking style be used as a good predictor of specifically this thin line?
Male ejaculation frequency variables?
I wonder what one would consider before having sex or a good fap, given total control over one's ejaculations. The only thing I'm aware of is the psychological urge that presumably increases with the time since the last ejaculation, but I'm quite interested in more physiological variables that one might want to tune this way, to avoid illnesses, maintain a general activity rhythm, or the like. Wikipedia has only this to say, as far as I can see:
the protective effect of ejaculation is greatest when men in their twenties ejaculated on average seven or more times a week. This group were one-third less likely to develop aggressive prostate cancer when compared with men who ejaculated less than three times a week at this age.
[Link] "The Problem With Positive Thinking"
Psychology researchers discuss their findings in a New York Times op-ed piece.
The take-home advice:
Positive thinking fools our minds into perceiving that we’ve already attained our goal, slackening our readiness to pursue it.
...
What does work better is a hybrid approach that combines positive thinking with “realism.” Here’s how it works. Think of a wish. For a few minutes, imagine the wish coming true, letting your mind wander and drift where it will. Then shift gears. Spend a few more minutes imagining the obstacles that stand in the way of realizing your wish.
This simple process, which my colleagues and I call “mental contrasting,” has produced powerful results in laboratory experiments. When participants have performed mental contrasting with reasonable, potentially attainable wishes, they have come away more energized and achieved better results compared with participants who either positively fantasized or dwelt on the obstacles.
When participants have performed mental contrasting with wishes that are not reasonable or attainable, they have disengaged more from these wishes. Mental contrasting spurs us on when it makes sense to pursue a wish, and lets us abandon wishes more readily when it doesn’t, so that we can go after other, more reasonable ambitions.
Talking to yourself: A useful thinking tool that seems understudied and underdiscussed
I have returned from a particularly fruitful Google search, with unexpected results.
My question was simple. I was pretty sure that talking to myself aloud makes me temporarily better at solving problems that need a lot of working memory. It is a thinking tool that I find to be of great value, and that I imagine would be of interest to anyone who'd like to optimize their problem solving. I just wanted to collect some evidence on that, make sure I'm not deluding myself, and possibly learn how to enhance the effect.
This might be just lousy Googling on my part, but the evidence is surprisingly unclear and disorganized. There are at least three separate Wiki pages for it. They don't link to each other. Instead they present the distinct models of three separate fields: autocommunication in communication studies, semiotics and other cultural studies, intrapersonal communication ("self-talk" redirects here) in anthropology and (older) psychology, and private speech in developmental psychology. The first is useless for my purpose, the second mentions "may increase concentration and retention" with no source, the third confirms my suspicion that this behavior boosts memory, motivation and creativity, but it only talks about children.
Google Scholar yields lots of sports-related results for "self-talk" because it can apparently improve the performance of athletes and if there's something that obviously needs the optimization power of psychology departments, it is competitive sports. For "intrapersonal communication" it has papers indicating it helps in language acquisition and in dealing with social anxiety. Both are dwarfed by the results for "private speech", which again focus on children. There's very little on "autocommunication" and what is there has nothing to do with the functioning of individual minds.
So there's a bunch of converging pieces of evidence supporting the usefulness of this behavior, but they're from several separate fields that don't seem to have noticed each other very much. How often do you find that?
Let me quickly list a few ways that I find it plausible to imagine talking to yourself could enhance rational thought.
- It taps the phonological loop, a distinct part of working memory that might otherwise sit idle in non-auditory tasks. More memory is always better, right?
- Auditory information is retained more easily, so making thoughts auditory helps remember them later.
- It lets you commit to thoughts, and build upon them, in a way that is more powerful (and slower) than unspoken thought while less powerful (but quicker) than action. (I don't have a good online source for this one, but Inside Jokes should convince you, and has lots of new cognitive science to boot.)
- System 1 does seem to understand language, especially if it does not use complex grammar - so this might be a useful way for results of System 2 reasoning to be propagated. Compare affirmations. Anecdotally, whenever I'm starting a complex task, I find stating my intent out loud makes a huge difference in how well the various submodules of my mind cooperate.
- It lets separate parts of your mind communicate in a fairly natural fashion, slows each of them down to the speed of your tongue and makes them not interrupt each other so much. (This is being used as a psychotherapy method.) In effect, your mouth becomes a kind of talking stick in their discussion.
All told, if you're talking to yourself you should be more able to solve complex problems than somebody of your IQ who doesn't, although somebody of your IQ with a pen and a piece of paper should still outthink both of you.
Given all that, I'm surprised this doesn't appear to have been discussed on LessWrong. Honesty: Beyond Internal Truth comes close but goes past it. Again, this might be me failing to use a search engine, but I think this is worth more of our attention than it has gotten so far.
I'm now almost certain talking to myself is useful, and I already find hindsight bias trying to convince me I've always been so sure. But I wasn't - I was suspicious because talking to yourself is an early warning sign of schizophrenia, and is frequent in dementia. But in those cases, it might simply be an autoregulatory response to failing working memory, not a pathogenetic element. After all, its memory enhancing effect is what the developmental psychologists say the kids use it for. I do expect social stigma, which is why I avoid talking to myself when around uninvolved or unsympathetic people, but my solving of complex problems tends to happen away from those anyway so that hasn't been an issue really.
So, what do you think? Useful?
A "Holy Grail" Humor Theory in One Page.
Alrighty, with the mass downvoters gone, I can make the leap to posting some ideas. Here's the Humor Theory I've been developing over the last few months, have discussed at Meet-Ups, and have written two SSRN papers about, condensed to one page. I've taken the document I posted on the Facebook group and retyped and formatted it here.
I strongly suspect that it's the correct solution to this unsolved problem. There was even a new neurology study released in the last few days that confirms one of the predictions I drew from this theory about the evolution of human intelligence.
Note that I tried to fit as much info as I could on the page, but obviously it's not enough space to cover everything, and the other papers are devoted to that. Any constructive questions, discussion etc are welcome.
A "Holy Grail" Humor Theory in One Page.
Plato, Aristotle, Kant, Freud, and hundreds of other philosophers have tried to understand humor. No one has ever found a single idea that explains it in all its forms, or shows what's sufficient to create it. Thus, it's been called a "Holy Grail" of social science. Consider this...
In small groups without language, where we evolved, social orders were needed for efficiency. But fighting for leadership would hurt them. So a peaceful, nonverbal method was extremely beneficial. Thus, the "gasp" we make when seeing someone fall evolved into a rapid-fire version at seeing certain failures, which allowed us to signal others to see what happened, and know who not to follow. The reaction, naturally, would feel good and make us smile, to lower our aggression and show no threat. This reaction is called laughter. The instinct that controls it is called humor. It's triggered by the brain weighing things it observes in the proportion:
Humor = ((Quality_expected - Quality_displayed) * Noticeability * Validity) / Anxiety
Or H=((Qe-Qd)NV)/A. When the results of this ratio are greater than 0, we find the thing funny and will laugh, in the smallest amounts with slight smiles, small feelings of pleasure or small diaphragm spasms. The numerator terms simply state that something has to be significantly lower in quality than what we assumed, and we must notice it and feel it's real, and the denominator states that anxiety lowers the reaction. This is because laughter is a noisy reflex that threatens someone else's status, so if there is a chance of violence from the person, a danger to threatening a loved one's status, or a predator or other threat from making noise, the reflex will be mitigated. The common feeling amongst those situations, anxiety, has come to cause this.
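For readers who like to see a formula as code, here is a minimal sketch of that ratio in Python. The variable names simply mirror the terms above, and the example values are arbitrary, chosen only to show how the anxiety denominator is supposed to damp the reaction.

```python
def humor_score(q_expected, q_displayed, noticeability, validity, anxiety):
    """H = ((Qe - Qd) * N * V) / A, as stated above.
    A positive score predicts laughter; larger scores predict stronger laughter."""
    return ((q_expected - q_displayed) * noticeability * validity) / anxiety

# Arbitrary illustrative values: a large, noticed, believed quality gap with low anxiety...
print(humor_score(q_expected=0.9, q_displayed=0.2, noticeability=1.0, validity=1.0, anxiety=0.5))
# ...versus the same gap under high anxiety ("too soon"), which scores far lower.
print(humor_score(q_expected=0.9, q_displayed=0.2, noticeability=1.0, validity=1.0, anxiety=5.0))
```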
This may appear to be an ad hoc hypothesis, but unlike those, this can clearly unite and explain everything we've observed about humor, including our cultural sayings and the scientific observations of the previous incomplete theories. Some noticed that it involves surprise, some noticed that it involves things being incorrect, all noticed the pleasure without seeing the reason. This covers all of it, naturally, and with a core concept simple enough to explain to a child. Our sayings, like "it's too soon" for a joke after a tragedy, can all be covered as well ("too soon" indicates that we still have anxiety associated with the event).
The previous confusion about humor came from a few things. For one, there are at least 4 types of laughter: At ourselves, at others we know, at others we don't know (who have an average expectation), and directly at the person with whom we're speaking. We often laugh for one reason instead of the other, like "bad jokes" making us laugh at the teller. In addition, besides physical failure, like slipping, we also have a basic laugh instinct for mental failure, through misplacement. We sense attempts to order things that have gone wrong. Puns and similar references trigger this. Furthermore, we laugh loudest when we notice multiple errors (quality-gaps) at once, like a person dressed foolishly (such as a court jester), exposing errors by others.
We call this the "Status Loss Theory," and we've written two papers on it. The first is 6 pages, offers a chart of old theories and explains this more, with 7 examples. The second is 27 pages and goes through 40 more examples, applying this concept to sayings, comedians, shows, memes, and other comedy types, and even drawing predictions from the theory that have been verified by very recent neurology studies, to hopefully exhaustively demonstrate the idea's explanatory power. If it's not complete, it should still make enough progress to greatly advance humor study. If it is, it should redefine the field. Thanks for your time.
[link] Why Psychologists' Food Fight Matters
Why Psychologists’ Food Fight Matters: “Important findings” haven’t been replicated, and science may have to change its ways. By Michelle N. Meyer and Christopher Chabris. Slate, July 31, 2014. [Via Steven Pinker's Twitter account; Pinker adds: "Lesson for sci journalists: Stop reporting single studies, no matter how sexy (these are probably false). Report lit reviews, meta-analyses."] Some excerpts:
Psychologists are up in arms over, of all things, the editorial process that led to the recent publication of a special issue of the journal Social Psychology. This may seem like a classic case of ivory tower navel gazing, but its impact extends far beyond academia. The issue attempts to replicate 27 “important findings in social psychology.” Replication—repeating an experiment as closely as possible to see whether you get the same results—is a cornerstone of the scientific method. Replication of experiments is vital not only because it can detect the rare cases of outright fraud, but also because it guards against uncritical acceptance of findings that were actually inadvertent false positives, helps researchers refine experimental techniques, and affirms the existence of new facts that scientific theories must be able to explain.
One of the articles in the special issue reported a failure to replicate a widely publicized 2008 study by Simone Schnall, now tenured at Cambridge University, and her colleagues. In the original study, two experiments measured the effects of people’s thoughts or feelings of cleanliness on the harshness of their moral judgments. In the first experiment, 40 undergraduates were asked to unscramble sentences, with one-half assigned words related to cleanliness (like pure or pristine) and one-half assigned neutral words. In the second experiment, 43 undergraduates watched the truly revolting bathroom scene from the movie Trainspotting, after which one-half were told to wash their hands while the other one-half were not. All subjects in both experiments were then asked to rate the moral wrongness of six hypothetical scenarios, such as falsifying one’s résumé and keeping money from a lost wallet. The researchers found that priming subjects to think about cleanliness had a “substantial” effect on moral judgment: The hand washers and those who unscrambled sentences related to cleanliness judged the scenarios to be less morally wrong than did the other subjects. The implication was that people who feel relatively pure themselves are—without realizing it—less troubled by others’ impurities. The paper was covered by ABC News, the Economist, and the Huffington Post, among other outlets, and has been cited nearly 200 times in the scientific literature.
However, the replicators—David Johnson, Felix Cheung, and Brent Donnellan (two graduate students and their adviser) of Michigan State University—found no such difference, despite testing about four times more subjects than the original studies. [...]
The editor in chief of Social Psychology later agreed to devote a follow-up print issue to responses by the original authors and rejoinders by the replicators, but as Schnall told Science, the entire process made her feel “like a criminal suspect who has no right to a defense and there is no way to win.” The Science article covering the special issue was titled “Replication Effort Provokes Praise—and ‘Bullying’ Charges.” Both there and in her blog post, Schnall said that her work had been “defamed,” endangering both her reputation and her ability to win grants. She feared that by the time her formal response was published, the conversation might have moved on, and her comments would get little attention.
How wrong she was. In countless tweets, Facebook comments, and blog posts, several social psychologists seized upon Schnall’s blog post as a cri de coeur against the rising influence of “replication bullies,” “false positive police,” and “data detectives.” For “speaking truth to power,” Schnall was compared to Rosa Parks. The “replication police” were described as “shameless little bullies,” “self-righteous, self-appointed sheriffs” engaged in a process “clearly not designed to find truth,” “second stringers” who were incapable of making novel contributions of their own to the literature, and—most succinctly—“assholes.” Meanwhile, other commenters stated or strongly implied that Schnall and other original authors whose work fails to replicate had used questionable research practices to achieve sexy, publishable findings. At one point, these insinuations were met with threats of legal action. [...]

Unfortunately, published replications have been distressingly rare in psychology. A 2012 survey of the top 100 psychology journals found that barely 1 percent of papers published since 1900 were purely attempts to reproduce previous findings. Some of the most prestigious journals have maintained explicit policies against replication efforts; for example, the Journal of Personality and Social Psychology published a paper purporting to support the existence of ESP-like “precognition,” but would not publish papers that failed to replicate that (or any other) discovery. Science publishes “technical comments” on its own articles, but only if they are submitted within three months of the original publication, which leaves little time to conduct and document a replication attempt.

The “replication crisis” is not at all unique to social psychology, to psychological science, or even to the social sciences. As Stanford epidemiologist John Ioannidis famously argued almost a decade ago, “Most research findings are false for most research designs and for most fields.” Failures to replicate and other major flaws in published research have since been noted throughout science, including in cancer research, research into the genetics of complex diseases like obesity and heart disease, stem cell research, and studies of the origins of the universe. Earlier this year, the National Institutes of Health stated “The complex system for ensuring the reproducibility of biomedical research is failing and is in need of restructuring.”

Given the stakes involved and its centrality to the scientific method, it may seem perplexing that replication is the exception rather than the rule. The reasons why are varied, but most come down to the perverse incentives driving research. Scientific journals typically view “positive” findings that announce a novel relationship or support a theoretical claim as more interesting than “negative” findings that say that things are unrelated or that a theory is not supported. The more surprising the positive finding, the better, even though surprising findings are statistically less likely to be accurate. Since journal publications are valuable academic currency, researchers—especially those early in their careers—have strong incentives to conduct original work rather than to replicate the findings of others. Replication efforts that do happen but fail to find the expected effect are usually filed away rather than published.
That makes the scientific record look more robust and complete than it is—a phenomenon known as the “file drawer problem.”

The emphasis on positive findings may also partly explain the fact that when original studies are subjected to replication, so many turn out to be false positives. The near-universal preference for counterintuitive, positive findings gives researchers an incentive to manipulate their methods or poke around in their data until a positive finding crops up, a common practice known as “p-hacking” because it can result in p-values, or measures of statistical significance, that make the results look stronger, and therefore more believable, than they really are. [...]

The recent special issue of Social Psychology was an unprecedented collective effort by social psychologists to [rectify this situation]—by altering researchers’ and journal editors’ incentives in order to check the robustness of some of the most talked-about findings in their own field. Any researcher who wanted to conduct a replication was invited to preregister: Before collecting any data from subjects, they would submit a proposal detailing precisely how they would repeat the original study and how they would analyze the data. Proposals would be reviewed by other researchers, including the authors of the original studies, and once approved, the study’s results would be published no matter what. Preregistration of the study and analysis procedures should deter p-hacking, guaranteed publication should counteract the file drawer effect, and a requirement of large sample sizes should make it easier to detect small but statistically meaningful effects.

The results were sobering. At least 10 of the 27 “important findings” in social psychology were not replicated at all. In the social priming area, only one of seven replications succeeded. [...]

One way to keep things in perspective is to remember that scientific truth is created by the accretion of results over time, not by the splash of a single study. A single failure-to-replicate doesn’t necessarily invalidate a previously reported effect, much less imply fraud on the part of the original researcher—or the replicator. Researchers are most likely to fail to reproduce an effect for mundane reasons, such as insufficiently large sample sizes, innocent errors in procedure or data analysis, and subtle factors about the experimental setting or the subjects tested that alter the effect in question in ways not previously realized.

Caution about single studies should go both ways, though. Too often, a single original study is treated—by the media and even by many in the scientific community—as if it definitively establishes an effect. Publications like Harvard Business Review and idea conferences like TED, both major sources of “thought leadership” for managers and policymakers all over the world, emit a steady stream of these “stats and curiosities.” Presumably, the HBR editors and TED organizers believe this information to be true and actionable. But most novel results should be initially regarded with some skepticism, because they too may have resulted from unreported or unnoticed methodological quirks or errors. Everyone involved should focus their attention on developing a shared evidence base that consists of robust empirical regularities—findings that replicate not just once but routinely—rather than of clever one-off curiosities.
[...] Scholars, especially scientists, are supposed to be skeptical about received wisdom, develop their views based solely on evidence, and remain open to updating those views in light of changing evidence. But as psychologists know better than anyone, scientists are hardly free of human motives that can influence their work, consciously or unconsciously. It’s easy for scholars to become professionally or even personally invested in a hypothesis or conclusion. These biases are addressed partly through the peer review process, and partly through the marketplace of ideas—by letting researchers go where their interest or skepticism takes them, encouraging their methods, data, and results to be made as transparent as possible, and promoting discussion of differing views. The clashes between researchers of different theoretical persuasions that result from these exchanges should of course remain civil; but the exchanges themselves are a perfectly healthy part of the scientific enterprise.

This is part of the reason why we cannot agree with a more recent proposal by Kahneman, who had previously urged social priming researchers to put their house in order. He contributed an essay to the special issue of Social Psychology in which he proposed a rule—to be enforced by reviewers of replication proposals and manuscripts—that authors “be guaranteed a significant role in replications of their work.” Kahneman proposed a specific process by which replicators should consult with original authors, and told Science that in the special issue, “the consultations did not reach the level of author involvement that I recommend.”

Collaboration between opposing sides would probably avoid some ruffled feathers, and in some cases it could be productive in resolving disputes. With respect to the current controversy, given the potential impact of an entire journal issue on the robustness of “important findings,” and the clear desirability of buy-in by a large portion of psychology researchers, it would have been better for everyone if the original authors’ comments had been published alongside the replication papers, rather than left to appear afterward. But consultation or collaboration is not something replicators owe to original researchers, and a rule to require it would not be particularly good science policy.

Replicators have no obligation to routinely involve original authors because those authors are not the owners of their methods or results. By publishing their results, original authors state that they have sufficient confidence in them that they should be included in the scientific record. That record belongs to everyone. Anyone should be free to run any experiment, regardless of who ran it first, and to publish the results, whatever they are. [...]

[...] some critics of replication drives have been too quick to suggest that replicators lack the subtle expertise to reproduce the original experiments. One prominent social psychologist has even argued that tacit methodological skill is such a large factor in getting experiments to work that failed replications have no value at all (since one can never know if the replicators really knew what they were doing, or knew all the tricks of the trade that the original researchers did), a surprising claim that drew sarcastic responses. [See LW discussion.] [...]

Psychology has long been a punching bag for critics of “soft science,” but the field is actually leading the way in tackling a problem that is endemic throughout science.
The replication issue of Social Psychology is just one example. The Association for Psychological Science is pushing for better reporting standards and more study of research practices, and at its annual meeting in May in San Francisco, several sessions on replication were filled to overflowing. International collaborations of psychologists working on replications, such as the Reproducibility Project and the Many Labs Replication Project (which was responsible for 13 of the 27 replications published in the special issue of Social Psychology) are springing up.

Even the most tradition-bound journals are starting to change. The Journal of Personality and Social Psychology—the same journal that, in 2011, refused to even consider replication studies—recently announced that although replications are “not a central part of its mission,” it’s reversing this policy. We wish that JPSP would see replications as part of its central mission and not relegate them, as it has, to an online-only ghetto, but this is a remarkably nimble change for a 50-year-old publication. Other top journals, most notable among them Perspectives in Psychological Science, are devoting space to systematic replications and other confirmatory research. The leading journal in behavior genetics, a field that has been plagued by unreplicable claims that particular genes are associated with particular behaviors, has gone even further: It now refuses to publish original findings that do not include evidence of replication.

A final salutary change is an overdue shift of emphasis among psychologists toward establishing the size of effects, as opposed to disputing whether or not they exist. The very notion of “failure” and “success” in empirical research is urgently in need of refinement. When applied thoughtfully, this dichotomy can be useful shorthand (and we’ve used it here). But there are degrees of replication between success and failure, and these degrees matter.

For example, suppose an initial study of an experimental drug for cardiovascular disease suggests that it reduces the risk of heart attack by 50 percent compared to a placebo pill. The most meaningful question for follow-up studies is not the binary one of whether the drug’s effect is 50 percent or not (did the first study replicate?), but the continuous one of precisely how much the drug reduces heart attack risk. In larger subsequent studies, this number will almost inevitably drop below 50 percent, but if it remains above 0 percent for study after study, then the best message should be that the drug is in fact effective, not that the initial results “failed to replicate.”
[LINK] Prisoner's Dilemma? Not So Much
Hannes Rusch argues that the Prisoner's Dilemma is best understood as merely one game of very many:
only 2 of the 726 combinatorially possible strategically unique ordinal 2x2 games have the detrimental characteristics of a PD and that the frequency of PD-type games in a space of games with random payoffs does not exceed about 3.5%. Although this does not compellingly imply that the relevance of PDs is overestimated, in the absence of convergent empirical information about the ancestral human social niche, this finding can be interpreted in favour of a rather neglected answer to the question of how the founding groups of human cooperation themselves came to cooperate: Behavioural and/or psychological mechanisms which evolved for other, possibly more frequent, social interaction situations might have been applied to PD-type dilemmas only later.
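To get a feel for what a calculation like this involves, here is a rough Monte Carlo sketch in Python. It uses one narrow, illustrative criterion for a Prisoner's Dilemma (a strict temptation > reward > punishment > sucker ordering for both players, over independently drawn payoffs); Rusch's game space and his definition of "PD-type" games differ, so the frequency this prints should not be expected to match his 3.5% figure. The point is only to illustrate the method of counting game types over random payoffs.

```python
import random

def is_strict_pd(row, col):
    """One narrow, illustrative PD criterion: both players' payoffs are
    strictly ordered T > R > P > S. This is NOT Rusch's classification of
    'PD-type' games; it is just a toy stand-in to show the method."""
    # Payoffs indexed as [own_move][other_move], with 0 = cooperate, 1 = defect.
    R1, S1, T1, P1 = row[0][0], row[0][1], row[1][0], row[1][1]
    R2, S2, T2, P2 = col[0][0], col[0][1], col[1][0], col[1][1]
    return (T1 > R1 > P1 > S1) and (T2 > R2 > P2 > S2)

def random_game(rng):
    """A generic 2x2 game: eight independent uniform random payoffs."""
    return ([[rng.random(), rng.random()], [rng.random(), rng.random()]],
            [[rng.random(), rng.random()], [rng.random(), rng.random()]])

rng = random.Random(0)
trials = 200_000
hits = sum(is_strict_pd(*random_game(rng)) for _ in range(trials))
print(f"Strict-PD frequency under this toy criterion: {hits / trials:.3%}")
```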
Channel factors
Or, “how not to make a fundamental attribution error on yourself;” or, “how to do that thing that you keep being frustrated at yourself for not doing;” or, “finding and solving trivial but leveraged inconveniences.”
[link] Why Self-Control Seems (but may not be) Limited
In another attack on the resource-based model of willpower, Michael Inzlicht, Brandon J. Schmeichel and C. Neil Macrae have a paper called "Why Self-Control Seems (but may not be) Limited" in press in Trends in Cognitive Sciences. Ungated version here.
Some of the most interesting points:
- Over 100 studies appear to be consistent with self-control being a limited resource, but these studies generally do not observe resource depletion directly; they infer it from whether or not people's performance declines in a second self-control task.
- The only attempts to directly measure the loss or gain of a resource have been studies measuring blood glucose, but these studies have serious limitations, the most important being an inability to replicate evidence of mental effort actually affecting the level of glucose in the blood.
- Self-control also seems to be replenished by things such as "watching a favorite television program, affirming some core value, or even praying", which would seem to conflict with the hypothesis of inherent resource limitations. The resource-based model also seems evolutionarily implausible.
The authors offer their own theory of self-control. One-sentence summary (my formulation, not from the paper): "Our brains don't want to only work, because by doing some play on the side, we may come to discover things that will allow us to do even more valuable work."
- Ultimately, self-control limitations are proposed to be an exploration-exploitation tradeoff, "regulating the extent to which the control system favors task engagement (exploitation) versus task disengagement and sampling of other opportunities (exploration)".
- Research suggests that cognitive effort is inherently aversive, and that after humans have worked on some task for a while, "ever more resources are needed to counteract the aversiveness of work, or else people will gravitate toward inherently rewarding leisure instead". According to the model proposed by the authors, this allows the organism to both focus on activities that will provide it with rewards (exploitation), but also to disengage from them and seek activities which may be even more rewarding (exploration). Feelings such as boredom function to stop the organism from getting too fixated on individual tasks, and allow us to spend some time on tasks which might turn out to be even more valuable.
The explanation of the actual proposed psychological mechanism is good enough that it deserves to be quoted in full:
Based on the tradeoffs identified above, we propose that initial acts of control lead to shifts in motivation away from “have-to” or “ought-to” goals and toward “want-to” goals (see Figure 2). “Have-to” tasks are carried out through a sense of duty or contractual obligation, while “want-to” tasks are carried out because they are personally enjoyable and meaningful [41]; as such, “want-to” tasks feel easy to perform and to maintain in focal attention [41]. The distinction between “have-to” and “want-to,” however, is not always clear cut, with some “want-to” goals (e.g., wanting to lose weight) being more introjected and feeling more like “have-to” goals because they are adopted out of a sense of duty, societal conformity, or guilt instead of anticipated pleasure [53].
According to decades of research on self-determination theory [54], the quality of motivation that people apply to a situation ranges from extrinsic motivation, whereby behavior is performed because of external demand or reward, to intrinsic motivation, whereby behavior is performed because it is inherently enjoyable and rewarding. Thus, when we suggest that depletion leads to a shift from “have-to” to “want-to” goals, we are suggesting that prior acts of cognitive effort lead people to prefer activities that they deem enjoyable or gratifying over activities that they feel they ought to do because it corresponds to some external pressure or introjected goal. For example, after initial cognitive exertion, restrained eaters prefer to indulge their sweet tooth rather than adhere to their strict views of what is appropriate to eat [55]. Crucially, this shift from “have-to” to “want-to” can be offset when people become (internally or externally) motivated to perform a “have-to” task [49]. Thus, it is not that people cannot control themselves on some externally mandated task (e.g., name colors, do not read words); it is that they do not feel like controlling themselves, preferring to indulge instead in more inherently enjoyable and easier pursuits (e.g., read words). Like fatigue, the effect is driven by reluctance and not incapability [41] (see Box 2).
Research is consistent with this motivational viewpoint. Although working hard at Time 1 tends to lead to less control on “have-to” tasks at Time 2, this effect is attenuated when participants are motivated to perform the Time 2 task [32], personally invested in the Time 2 task [56], or when they enjoy the Time 1 task [57]. Similarly, although performance tends to falter after continuously performing a task for a long period, it returns to baseline when participants are rewarded for their efforts [58]; and remains stable for participants who have some control over and are thus engaged with the task [59]. Motivation, in short, moderates depletion [60]. We suggest that changes in task motivation also mediate depletion [61].
Depletion, however, is not simply less motivation overall. Rather, it is produced by lower motivation to engage in “have-to” tasks, yet higher motivation to engage in “want-to” tasks. Depletion stokes desire [62]. Thus, working hard at Time 1 increases approach motivation, as indexed by self-reported states, impulsive responding, and sensitivity to inherently-rewarding, appetitive stimuli [63]. This shift in motivational priorities from “have-to” to “want-to” means that depletion can increase the reward value of inherently-rewarding stimuli. For example, when depleted dieters see food cues, they show more activity in the orbitofrontal cortex, a brain area associated with coding reward value, compared to non-depleted dieters [64].
See also: Kurzban et al. on opportunity cost models of mental fatigue and resource-based models of willpower; Deregulating Distraction, Moving Towards the Goal, and Level Hopping.
[LINK] People become more utilitarian in VR moral dilemmas as compared to text-based ones
A new study indicates that people become more utilitarian (save more lives) when viewing a moral dilemma in a virtual reality situation, as compared to reading the same situation in text.
Abstract.
Although research in moral psychology in the last decade has relied heavily on hypothetical moral dilemmas and has been effective in understanding moral judgment, how these judgments translate into behaviors remains a largely unexplored issue due to the harmful nature of the acts involved. To study this link, we follow a new approach based on a desktop virtual reality environment. In our within-subjects experiment, participants exhibited an order-dependent judgment-behavior discrepancy across temporally-separated sessions, with many of them behaving in utilitarian manner in virtual reality dilemmas despite their non-utilitarian judgments for the same dilemmas in textual descriptions. This change in decisions reflected in the autonomic arousal of participants, with dilemmas in virtual reality being perceived more emotionally arousing than the ones in text, after controlling for general differences between the two presentation modalities (virtual reality vs. text). This suggests that moral decision-making in hypothetical moral dilemmas is susceptible to contextual saliency of the presentation of these dilemmas.
[Link] Changelings, Infanticide and Northwest European Guilt Culture
Related: The Psychological Diversity of Mankind, An African Folktale, many of the more interesting infanticide & abortion debates on this site
A fascinating post, though it may require some background reading; most of the relevant material is linked in the article itself. I encourage reading up on it.
Stories about changelings replacing babies, in which the recommended course of action is basically to expose the child, are not a human universal; they are found only in European cultures. These cultures rely more heavily on guilt and less on shame to regulate behavior than most other human societies. This may not be a coincidence. The stories look like they work as a ready-made rationalization to reduce guilt from infanticide. Common problems often acquire common solutions like this.
Guilt and Shame Cultures
On his blog Evo and Proud, anthropologist Peter Frost recently wrote a highly interesting two-part article entitled The origins of Northwestern European guilt culture. In guilt cultures, social control is regulated more by guilt than by shame, in contrast to the shame cultures that exist in most parts of the world. A crucial difference between these types of cultures is that while shame cultures require other people to shame the wrongdoer, guilt cultures do not: wrongdoers shame themselves by feeling guilty. This, according to Frost, is also linked to a stronger sense of empathy with others, not just with relatives but with people in general.
The advantages of guilt over shame are many. People can go about their business without being supervised by others, and they can cooperate with people they’re not related to as long as both parties have the same view on right and wrong. And with this personal freedom come individualism, innovation and other forms of creativity, as well as ideas of universal human rights and so on. You could argue, as Frost appears to, that the increased sense of guilt in Northwestern Europe (NWE) is a major factor behind Western Civilization. While this sounds fairly plausible (to my ears at least), a fundamental question is whether there really is more guilt in the NWE sphere than elsewhere.
How to Measure Guilt
The idea of NWE countries as guilt cultures may seem obvious to some and dubious to others. The Protestant tradition is surely one indication of this, but some anthropologists argue that other cultures have other forms of guilt, not as easily recognized by Western scholars. For instance, Andrew Beatty mentions that the Javanese have no word for either shame or guilt but report uneasiness and a sense of haunting regarding certain political murders they’ve committed. So maybe they have just as much guilt as NWE Protestants?
This is one of the problems with soft science – you can argue about the meaning of terms and concepts back and forth until hell freezes over without coming to any useful conclusion. One way around this is to find some robust metric that most people would agree indicates guilt. One such measure, I believe, would be the murder rate. If people in different cultures vary in the guilt they feel for committing murder, then this should hold them back and show up as a variation in the murder rate. I will here take the NWE region to mean the British Isles, the Nordic countries (excluding Finland), Germany, France, Belgium, the Netherlands, Luxembourg, Australia, New Zealand and Canada, for a total of 14 countries. According to UNODC/Wikipedia, the average murder rate in the NWE countries is exactly 1.0 murder per 100K inhabitants. To put this in perspective, only 20 other countries (and territories) of the 207 listed are below this level, and 70 percent of them have twice the murder rate or more.
Still, criminals are after all not a very representative group, having more of the dark traits (psychopathy, narcissism, Machiavellianism) than the rest of the population. Corruption, on the other hand, as I’ve argued in an earlier post, seems relatively unrelated to regular personality traits, so it should tap into the mainstream population. Corruption is often about minor transgressions that many people engage in knowing that they can usually get away with it. They will not be shamed because no one will know about it and many will not care since it’s so common, but some will feel guilty and refrain from it. Looking at the Corruption Perceptions Index for 2013, the NWE countries are very dominant at the top of the ranking (meaning they lack corruption). There are seven NWEs in the top ten and two additional bordering countries (Finland and Switzerland). The entire NWE region is within the top 24 of the 177 countries and territories listed.
But as I’ve argued before here, corruption appears to be linked to clannishness and tribalism (traits rarely discussed in psychology), and it’s reasonable to assume that these are causal factors. How does this all add up? Well, the clannish and tribal cultures that I broadly refer to as traditional cultures are all based on the premise that the family, tribe or similar ingroup should be everyone’s first concern. So while a member of a traditional culture may have personal feelings of guilt, this means little compared to the collective dislike – the shame – from the family or tribe. At the same time, traditional cultures are indifferent or hostile towards other groups, so if your corruption serves the family or tribe there will be no shame in it; the others will more likely praise you for being clever.
(In this context it’s also interesting to note that people who shame others often do this by expressing disgust, an emotion linked to a traditional dislike for various outgroups, such as homosexuals or people of other races. So disgust, which psychologist Jonathan Haidt connects with the moral foundation of sanctity/degradation, is perhaps equally important to the foundation of loyalty/ingroup.)
When Did Modernity Begin?
One important question about this distinction between modern and traditional is to what extent it’s a matter of nature or nurture. There is evidence that it is caused by inbreeding and the accumulation of genes for familial altruism (that is to say, a concern for relatives and a corresponding dislike for non-relatives). Since studies on this are non-existent as far as I know – no doubt for political reasons – another form of evidence could be found in tracing this distinction back in time. The further back we can do this, the more likely it’s a matter of genes rather than culture. And the better we can identify populations that are innately modern, the better we can understand the function and origin of this trait. Frost argues that guilt culture can be found as early as the Anglo-Saxon period (550-1066), based on things like the existence of looser family structures with a relatively late age of marriage and the notion of a shame before the spirits or God, which can be construed as guilt. This made me wonder if there is any similar historical evidence for NWE guilt that is old enough to make the case for this being an inherited behavior (or at least the capacity for guilt-motivated behavior). And that’s how I came up with the changeling.
The Changeling
As Jung has argued, there is a striking similarity between myths and traditional storytelling over the world. People who have never been in contact with each other have certain recurring structures in their narratives, and, as I’ve argued before here, even modern people adhere to these unspoken rules of storytelling – the archetypes. The only reasonable explanation for archetypes is that they are a reflection of how humans are wired. But if archetypal stories reveal a universal human nature, what about stories found in some places but not in others? In some cases they may reflect differences in things like climate or geography, but if no such environmental explanation can be found I believe that the variation may be a case of human biodiversity.
I believe one such variation relevant to guilt culture is the genre of changeling tales. These folktales are invariably about how otherworldly creatures like fairies abduct newborn children and replace them with something in their likeness, a changeling. The changeling is sometimes a fairy, sometimes just an enchanted piece of wood that has been made to look like a child. It’s typically very hungry but sickly and fails to thrive. A woman who suspected that she had a changeling on her hands could find out by beating the changeling, throwing it in the water, leaving it in the woods overnight and so on. According to the folktales, this would prompt the fairies or whoever was responsible for the exchange to come to rescue their child and also return the child they had taken.
Infanticide Made Easy
Most scholars agree that the changeling tales were a way to justify killing sickly and deformed children. According to American folklorist D. L. Ashliman at the University of Pittsburgh, people firmly believed in changelings and did as the tales instructed,
""There is ample evidence that these legendary accounts do not misrepresent or exaggerate the actual abuse of suspected changelings. Court records between about 1850 and 1900 in Germany, Scandinavia, Great Britain, and Ireland reveal numerous proceedings against defendants accused of torturing and murdering suspected changelings.""
This all sounds pretty grisly, but before modern medicine and social welfare institutions, a child of this kind was a disaster. Up until the 1900s, children were supposed to be relatively self-sufficient and help out around the house. A child that needed constant supervision without any prospect of ever being able to contribute anything to the household was more than a burden; it jeopardized the future of the entire family.
Still, there is probably no stronger bond between two people than that between a mother and her newborn child. So how could a woman not feel guilty for killing her own child? Because it must be guilt we’re talking about here – you would never be shamed for doing it since it was according to custom. The belief in changelings expressed in the folktales gave the women (and men) a way out of this dilemma. (Ironically, Martin Luther, the icon of guilt culture, dismissed all the popular superstitions of his fellow countrymen with the sole exception of changelings which he firmly believed in.) Thus, the main purpose of these tales seems to have been to alleviate guilt.
Geography
If this is true then changeling stories should be more common in the NWE region than elsewhere, which also seems to be the case. There are numerous changeling tales found in the British Isles, in Scandinavia, Germany and France. They can be found elsewhere in Europe as well, in the Basque region and among Slavic peoples, and even as far away as North Africa, but at least according to the folklorists I’ve found discussing these tales, they were imported from the NWE region. And if we look beyond the regions bordering Europe, changelings seem to be virtually non-existent. Some folklorists have suggested that, for instance, the Nigerian Ogbanje can be thought of as a changeling, although on closer inspection the similarity is very superficial. The Ogbanje is reborn into the same family over and over, and to break the curse families consult medicine men after the child has died. When they do consult a medicine man while the child is still alive, it is for the purpose of severing the child’s connection to the spirit world and making it normal. So the belief in the Ogbanje never justifies infanticide. Another contender is the Filipino Aswang, a creature that attacks children as well as adults and never takes the place of a child; it is more like a vampire. So it’s safe to say that the changeling belief is firmly rooted in the NWE region, at least back to medieval times and perhaps earlier too.
Before There Were Changelings, There Was Exposure
Given how infanticide is such a good candidate for measuring guilt, we could go back further in time, before any evidence of changelings, and look at potential differences in attitudes towards this act.
In doing so, I think we can find, if not NWE guilt, then at least a Western equivalent. According to this Wikipedia article, the ancient Greeks and Romans, as well as Germanic tribes, killed infants by exposure rather than through a direct act. Here is a quote on the practice in Greece,
""Babies would often be rejected if they were illegitimate, unhealthy or deformed, the wrong sex, or too great a burden on the family. These babies would not be directly killed, but put in a clay pot or jar and deserted outside the front door or on the roadway. In ancient Greek religion, this practice took the responsibility away from the parents because the child would die of natural causes, for example hunger, asphyxiation or exposure to the elements.""
And the Archeology and Classical Research Magazine Roman Times quotes several classical sources suggesting that exposure was controversial even back then,
""Isocrates (436–338 BCE) includes the exposure of infants in his catalog of horrendous crimes practiced in some cities (other than Athens) in his work Panathenaicus.""
I also found this excerpt from the play Ion by Euripides, written at the end of the 400s BC. In it Kreusa talks with an old servant about having exposed an unwanted child,
Old Servant: Who cast him forth? – Not thou – O never thou!
Kreusa: Even I. My vesture darkling swaddled him.
Old Servant: Nor any knew the exposing of the child?
Kreusa: None – Misery and Secrecy alone.
Old Servant: How couldst thou leave thy babe within the cave?
Kreusa: Ah how? – O pitiful farewells I moaned!
It seems to me that this play, by one of the most prominent playwrights of his time, would not make much sense to the audience unless exposure was something that weighed on many people’s hearts.
Compare this with historical accounts from other cultures, taken from the Wikipedia article mentioned above,
""Some authors believe that there is little evidence that infanticide was prevalent in pre-Islamic Arabia or early Muslim history, except for the case of the Tamim tribe, who practiced it during severe famine. Others state that “female infanticide was common all over Arabia during this period of time” (pre-Islamic Arabia), especially by burying alive a female newborn.
In Kamchatka, babies were killed and thrown to the dogs.
The Svans (a Georgian people) killed the newborn females by filling their mouths with hot ashes.
A typical method in Japan was smothering through wet paper on the baby’s mouth and nose. Mabiki persisted in the 19th century and early 20th century.
Female infanticide of newborn girls was systematic in feudatory Rajputs in South Asia for illegitimate female children during the Middle Ages. According to Firishta, as soon as the illegitimate female child was born she was held “in one hand, and a knife in the other, that any person who wanted a wife might take her now, otherwise she was immediately put to death”
Polar Inuit (Inughuit) killed the child by throwing him or her into the sea. There is even a legend in Inuit mythology, “The Unwanted Child”, where a mother throws her child into the fjord.""
It seems that while people in ancient Greece practiced exposure, something many were troubled by, active killing was common in the rest of the world and persists to this day in many places. While people in other cultures may or may not feel guilt, it doesn’t seem to affect them as much, and it’s sometimes even trumped by shame, as psychiatrist Steven Pitt and clinical psychologist Erin Bale write in an article in The Bulletin of the American Academy of Psychiatry and the Law regarding the practice of drowning unwanted girls,
""In China, the birth of a daughter has traditionally been accompanied by disappointment and even shame.""
To summarize, the changeling lore provides evidence of a NWE guilt culture dating back at least to medieval times, and the practice of, and attitudes towards, exposure suggest that ancient Greece had an emerging guilt culture as early as the 400s BC, one which enabled an individualism and intellectual development similar to what we’ve seen in the NWE region in recent centuries. I’m not sure exactly how genetically related these populations are, but the geographical proximity makes it hard to ignore the possibility that gene variants for guilt proneness in Europe are responsible for guilt cultures both in ancient Greece and in the NWE region. Some branch of Indo-Europeans perhaps?
Kurzban et al. on opportunity cost models of mental fatigue and resource-based models of willpower
An opportunity cost model of subjective effort and task performance (h/t lukeprog) is a very interesting paper on why we accumulate mental fatigue: Kurzban et al. suggest an opportunity cost model, where intense focus on a single task means that we become less capable of using our mental resources for anything else, and accumulating mental fatigue is part of a cost-benefit calculation that encourages us to shift our attention instead of monomaniacally concentrating on just one task which may not be the most rewarding possible. Correspondingly, the amount of boredom or mental fatigue we experience with a task should correspond with the perceived rewards from other tasks available at the moment. A task will feel more boring/effortful if there's something more rewarding that you could be doing instead (i.e. if the opportunity costs for pursuing your current task are higher), and if it requires exclusive use of cognitive resources that could also be used for something else.
This seems to make a certain amount of intuitive/introspective sense - I had a much easier time doing stuff without getting bored as a kid, when there simply wasn't much else that I could be doing instead. And it does roughly feel like I would get bored with things more quickly in situations where more engaging pursuits were available. I'm also reminded of the thing I noticed as a kid where, if I borrowed a single book from the library, I would likely get quickly engrossed in it, whereas if I had several alternatives it would be more likely that I'd end up looking at each for a bit but never really get around to reading any of them.
An opportunity cost model also makes more sense than resource models of willpower which, as Kurzban quite persuasively argued in his earlier book, don't really fit together with the fact that the brain is an information-processing system. My computer doesn't need to use any more electricity in situations where it "decides" to do something as opposed to not doing something, but resource models of willpower have tried to postulate that we would need more of e.g. glucose in order to maintain willpower. (Rather, it makes more sense to presume that a low level of blood sugar would shift the cost-benefit calculations in a way that led to e.g. conservation of resources.)
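Here is my own toy rendering of the opportunity-cost idea, not Kurzban et al.'s actual formal model: the felt effort of staying on the current task tracks the value of the best alternative being forgone, so the same task feels fine in one context and "effortful" in another. The function names and numbers are illustrative only.

```python
# Toy rendering of the opportunity-cost idea (my gloss, not the paper's
# formal model): the felt effort/boredom of continuing the current task is
# proportional to the value of the best forgone alternative, and you
# disengage once that exceeds the current task's value.
def felt_effort(alternative_values, cost_weight=1.0):
    best_alternative = max(alternative_values, default=0.0)
    return cost_weight * best_alternative

def keep_working(current_task_value, alternative_values):
    return current_task_value >= felt_effort(alternative_values)

# Same task, different contexts: a single library book vs. a stack of them.
print(keep_working(5.0, [1.0]))        # True  - nothing better to do, easy to stay engrossed
print(keep_working(5.0, [4.0, 6.0]))   # False - a more tempting option makes the task feel "effortful"
```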
This isn't just Kurzban et al.'s opinion - the paper was published in Behavioral and Brain Sciences, which invites diverse commentaries on all the papers it publishes. In this particular case, it was surprising how muted the defenses of the resource model were. As Kurzban et al. point out in their response to the responses:
As context for our expectations, consider the impact of one of the central ideas with which we were taking issue, the claim that “willpower” is a resource that is consumed when self-control is exerted. To give a sense of the reach of this idea, in the same month that our target article was accepted for publication Michael Lewis reported in Vanity Fair that no less a figure than President Barack Obama was aware of, endorsed, and based his decision-making process on the general idea that “the simple act of making decisions degrades one’s ability to make further decisions,” with Obama explaining: “I’m trying to pare down decisions. I don’t want to make decisions about what I’m eating or wearing. Because I have too many other decisions to make” (Lewis 2012).
Add to this the fact that a book based on this idea became a New York Times bestseller (Baumeister & Tierney 2011), the fact that a central paper articulating the idea (Baumeister et al. 1998) has been cited more than 1,400 times, and, more broadly, the vast number of research programs using this idea as a foundation, and we can be forgiven for thinking that we would have kicked up something of a hornet’s nest in suggesting that the willpower-as-resource model was wrong. So we anticipated no small amount of stings from the large number of scholars involved in this research enterprise. These were our expectations before receiving the commentaries.
Our expectations were not met. Take, for example, the reaction to our claim that the glucose version of the resource argument is false (Kurzban 2010a). Inzlicht & Schmeichel, scholars who have published widely in the willpower-as-resource literature, more or less casually bury the model with the remark in their commentary that the “mounting evidence points to the conclusion that blood glucose is not the proximate mechanism of depletion.” (Malecek & Poldrack express a similar view.) Not a single voice has been raised to defend the glucose model, and, given the evidence that we advanced to support our view that this model is unlikely to be correct, we hope that researchers will take the fact that none of the impressive array of scholars submitting comments defended the view to be a good indication that perhaps the model is, in fact, indefensible. Even if the opportunity cost account of effort turns out not to be correct, we are pleased that the evidence from the commentaries – or the absence of evidence – will stand as an indication to audiences that it might be time to move to more profitable explanations of subjective effort.
While the silence on the glucose model is perhaps most obvious, we are similarly surprised by the remarkably light defense of the resource view more generally. As Kool & Botvinick put it, quite correctly in our perception: “Research on the dynamics of cognitive effort have been dominated, over recent decades, by accounts centering on the notion of a limited and depletable ‘resource’” (italics ours). It would seem to be quite surprising, then, that in the context of our critique of the dominant view, arguably the strongest pertinent remarks come from Carter & McCullough, who imply that the strength of the key phenomenon that underlies the resource model – two-task “ego-depletion” studies – might be considerably less than previously thought or perhaps even nonexistent. Despite the confidence voiced by Inzlicht & Schmeichel about the two-task findings, the strongest voices surrounding the model, then, are raised against it, rather than for it. (See also Monterosso & Luo, who are similarly skeptical of the resource account.)
Indeed, what defenses there are of the resource account are not nearly as adamant as we had expected. Hagger wonders if there is “still room for a ‘resource’ account,” given the evidence that cuts against it, conceding that “[t]he ego-depletion literature is problematic.” Further, he relies largely on the argument that the opportunity cost model we offer might be incomplete, thus “leaving room” for other ideas.
(I'm leaving out discussion of some commentaries which do attempt to defend resource models.)
The model still seems to be missing pieces, though - as one of the commentaries points out, it doesn't really address the fact that some tasks are more inherently boring than others. Some of this might be explained by the argument given in Shouts, Whispers, and the Myth of Willpower: A Recursive Guide to Efficacy (I quote the most relevant bit here), where the author suggests that "self-discipline" in some domain is really about sensitivity to feedback in that domain: a novice in some task doesn't really manage to notice the small nuances that have become so significant for an expert, so they receive little feedback for their actions and it ends up being a boring vigilance task. An expert, in contrast, will instantly notice the effects that their actions have on the system and get feedback on their progress, which in the opportunity cost model could be interpreted as raising the worthwhileness of the task they're working on. If we go with Kurzban et al.'s notion that we acquire further information about the expected utility of the task we're working on as we continue working on it, then getting feedback from the task could be read as a sign that the task is one in which we can expect to succeed.
Another missing piece is that the model doesn't really seem to explain the way that one can come home after a long day at work and then feel too exhausted to do anything at all - it can't really be about opportunity costs if you end up so tired that you can't come up with ~any activity that you'd want to do.
I need some help debugging my approach to informal models and reasoning
I'm having trouble understanding the process I should use when I am considering new models as they might apply to old data, like memories. This is primarily when reasoning with respect to qualitative models, like those that come out of developmental psychology, business, or military strategy. These models can be either normative or descriptive, but the big trait that they all seem to share is that they were all conceptualized with reference to the inside view more than the outside view - they were based on either memories or intuition, so they will either have a lot of implicit internal structure, or they will have a lot of bullshit. Re-framing my own experiences as a way of finding out whether these models are useful is thus reliant on System One more than System Two. Unfortunately, now we're in the realm of bias.
My concrete examples of models that I am evaluating are (a) attempting to digest the information contained in the "Principles" document (as discussed here) and figure out which situations the information might apply in; (b) learning Alfred Adler's "individual psychology" from The Rawness, which also expands on the ideas; and (c) the mighty OODA loop.
When I brought up the OODA loop during a meetup with the Vancouver Rationalists I ended up making some mistakes regarding the "theories" from which it was derived, adding the idea of "clout" to my mental toolkit. But it also makes me wary that my instinctive approach to learning about qualitative models such as this might have other weaknesses.
I asked at another meetup, "What is the best way to internalize advice from books?" and someone responded with thinking about concrete situations where the idea might have been useful.
As a strategy to evaluate the truth of a model I can see this backfiring. Due to the reliance on System One in both model structuring and model evaluation, hindsight bias is likely to be an issue, or a form of Forer effect. I could then make erroneous judgements on how effectively the model will predict an outcome, and use the model in ineffective ways (ironically this is brought up by the author on The Rawness). In most cases I believe that this is better than nothing, but I don't think it's good enough either. It does seem possible to be mindful of the actual conceptual points and just wait for relevance, but the reason why we reflect is so that we are primed to see certain patterns again when they come up, so that doesn't seem like enough either.
As a way of evaluating model usefulness, I can see this going two ways. On one hand, many long-standing problems exist due to mental ruts, and benefit from re-framing the issue in light of new information. When I read books I often experience a linkage between statements that a book makes and goals that I have, or situations I want to make sense of (similar to Josh Kaufman and his usage of McDowell's Reading Grid). However, this experience has little to do with the model being correct.
Here are three questions I have, although more will likely come up:
- What are the most common mistakes humans make when figuring out if a qualitative model applies to their experiences or not?
- How can they be worked around, removed, or compensated for?
- Can we make statements about when "informal" models (i.e. not specified in formal language or not mappable to mathematical descriptions other than in structures like semantic webs) are generally useful to have and when they generally fail?
- etc.
Notes on Brainwashing & 'Cults'
“Brainwashing”, as popularly understood, does not exist or is of almost zero effectiveness. The belief stems from American panic over Communism post-Korean War combined with fear of new religions and sensationalized incidents; in practice, “cults” have retention rates in the single percentage point range and ceased to be an issue decades ago. Typically, a conversion sticks because an organization provides value to its members.
The State of the Art of Scientific Research on Polyamoury
The idea of polyamoury is one that interests me. However, while such books as The Ethical Slut have done a good job of providing me with tools to understand and possibly handle the challenges and rewards involved, I found them unsatisfying in that they were largely based on anecdotal evidence, with a very strong selection bias. Before making the jump of attempting to live that way, one would need to know precisely the state of the art of scientific, rigorous, credible research on the topic; it is a tedious job to seek out and compile everything, but I believe it is a job worth doing.
I'll be initiating an ongoing process of data compilation, and will publish my findings on this thread as I discover and summarize them. Any help is greatly appreciated, as this promises to be long and tedious. I might especially need help extracting meaningful information from the masses of data; I am not a good statistician yet, far from it.
To Be Expanded...
One way to manipulate your level of abstraction related to a task
In construal level theory, ideas can be classified along a spectrum from concrete ("near" in Robin Hanson's terminology) to abstract ("far"). As a summary, here is the abstract from a 2010 review (pdf):
People are capable of thinking about the future, the past, remote locations, another person’s perspective, and counterfactual alternatives. Without denying the uniqueness of each process, it is proposed that they constitute different forms of traversing psychological distance. Psychological distance is egocentric: Its reference point is the self in the here and now, and the different ways in which an object might be removed from that point—in time, in space, in social distance, and in hypotheticality—constitute different distance dimensions. Transcending the self in the here and now entails mental construal, and the farther removed an object is from direct experience, the higher (more abstract) the level of construal of that object. Supporting this analysis, research shows (a) that the various distances are cognitively related to each other, (b) that they similarly influence and are influenced by level of mental construal, and (c) that they similarly affect prediction, preference, and action.
Now, what if you want to think about something in a more or less near or far way? Here's one well-studied strategy to do so (e.g., see pdf here).
To think about a task in more concrete terms, ask yourself how you would do it. Then, however you answer that question, ask yourself how you would do that. Do this two (or so) more times, and you will be thinking about that task significantly more concretely.
To think about a task in more abstract terms, ask yourself why you would do it. Then ask yourself why you would want that 3 (or so) more times.
An excerpt from the 2007 study in the second link to give an example of how this would work:
Suppose you indicate “taking a vacation” as one of your goals. Please write the goal in the uppermost square. Then, think why you would like to go on vacation, and write your answer in the square underneath. Suppose that you write “in order to rest.” Now, please think why you would like to rest, and write your answer in the third square. Suppose that you write “in order to renew your energy.” Finally, write in the last square why you would like to renew your energy.
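Since the procedure is essentially an iterated prompt, here is a toy script version of the how/why laddering described above, just to show its shape. The prompt wording and function name are mine, not taken from the studies.

```python
# Toy script version of the how/why laddering exercise (prompt wording is
# mine, not from the studies): ask "why" to move up the abstraction ladder,
# "how" to move down it.
def ladder(task, direction="why", rounds=4):
    """Walk a task up (why) or down (how) the abstraction ladder."""
    question = "Why would you do that?" if direction == "why" else "How would you do that?"
    answers = [task]
    current = task
    for _ in range(rounds):
        current = input(f"{current!r}: {question} ")
        answers.append(current)
    return answers

if __name__ == "__main__":
    # e.g. ladder("take a vacation", "why") or ladder("write the report", "how")
    for step in ladder("take a vacation", direction="why"):
        print("->", step)
```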
HIKE: A Group Dynamics Case Study
I belong to a group at my university that organizes a backpacking trip for incoming freshmen in the two weeks before orientation week. This organization, which I will refer to as HIKE (not the real name), is particularly interesting in terms of group design. Why? It is approximately 30 years old, is run entirely by current students, and brings together a very large group of people and knits them into a largish community. Pretty much everyone involved agrees that HIKE works very well. During my involvement (I was a participating freshman, and I have since become staff) I have continually wondered, why is this group so much more fun than any other group I've been a part of?
It's also particularly effective. Leading ~80 incoming freshmen, who don't yet have any friends at the school, know no one, and generally have no backpacking experience, into the woods for two weeks is no easy task. HIKE manages its own logistics, staff training, and organization, entirely with the student volunteers who staff the trip, with little to no university interaction. (We get them to advertise our trip, and they generally permit us to continue to exist.) It takes some dedication to keep this rolling, and I have seen other campus groups completely fail to find that kind of dedication from their membership.
While it's not a rationalist group, it seems to have stumbled upon a cocktail of instrumentally rational practices.
HIKE uses an interesting process of network homogenization. When staff members (who have generally been on several trips before) are assigned crews, they fill out "Who Do You Know?" forms, on which each staffer ranks how well they know every other staffer on a scale from 1 to 5. The people in charge of making groups, usually the Project Directors, then group staffers based on how well they don't know each other: you usually staff a trip with people that you haven't gotten to know very well, and then get to know them. Because of this process of strengthening the weakest bonds, HIKE is able to function as a relatively large social group, even across graduation classes and around existing cliques.
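I don't know exactly how the Project Directors do the assignment in practice; the sketch below is just one greedy way to implement "group the people who know each other least", to make the idea concrete. Names and familiarity scores are made up.

```python
# One greedy way to implement "group the people who know each other least"
# (HIKE's real procedure is done by hand; this is only an illustration with
# made-up names and familiarity scores).
from itertools import combinations

def familiarity_cost(group, know):
    """Sum of pairwise familiarity scores (1 = strangers, 5 = close friends)."""
    return sum(know.get(frozenset(pair), 1) for pair in combinations(group, 2))

def assign_crews(staff, know, crew_size):
    crews, remaining = [], list(staff)
    while remaining:
        crew = [remaining.pop(0)]
        while len(crew) < crew_size and remaining:
            # add whoever raises the crew's total familiarity the least
            best = min(remaining, key=lambda s: familiarity_cost(crew + [s], know))
            crew.append(best)
            remaining.remove(best)
        crews.append(crew)
    return crews

staff = ["Ann", "Ben", "Cal", "Dee", "Eli", "Fay"]
know = {frozenset({"Ann", "Ben"}): 5, frozenset({"Cal", "Dee"}): 4,
        frozenset({"Eli", "Fay"}): 5, frozenset({"Ann", "Cal"}): 2}
print(assign_crews(staff, know, crew_size=3))
# -> crews that split up the close pairs, e.g. [['Ann', 'Dee', 'Eli'], ['Ben', 'Cal', 'Fay']]
```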
As far as actual interaction goes, HIKE involves a lot of face time with your crew of 10 freshmen and your co-staffers. There aren't really any breaks (with the exception of solos, see below) and you are hiking, eating, and chatting together for approximately 225 hours (15 waking hours a day * 15 days). I had 13 hours and 40 minutes of class a week in the Spring 2013 semester; HIKE packs in roughly 16 weeks of class at that rate.
One of the more beloved HIKE traditions is the solo, where the hiking leaders pick a spot with plenty of isolated spaces, and the participants can choose to spend ~24 hours alone and, optionally, fasting. It's a novel experience, and people like the time to rest and reflect in the middle of a very social, very intensive hiking trip.
My suspicion for why this all works is that HIKE very closely simulates a hunter-gatherer lifestyle. You travel in ~10 member groups, on foot, carrying your food, on mountain trails. You spend your every waking hour with the crew. The 2-3 hiking leaders are there to facilitate only (read: perform first aid if necessary, guide conversation, teach outdoor skills if necessary, and nudge the group if they get off track), and all decisions are made by consensus (which isn't an all-purpose decision making process, but is very egalitarian, and helps the group gel).
Maybe I'm just praising my friend-group, but I feel like I stumbled into a particularly strong group of people. We all feel very well-connected and we feel a lot of commitment to the program. My experience with other college groups has been that members are pulled apart by other commitments and a lack of familiarity with other members, and HIKE seems to avoid that with a critical mass of consecutive face time. We manage to have continuity of social norms across the years, but a great deal of flexibility (no one remembers what happened 4 years ago, and some traditions disappear and others cement themselves as ancient and hallowed despite being only two years old).
I'm interested in hearing any thoughts on this, and any relevant experience with other groups, ideas for testing cross-application, requests for further elaboration, etc.
[LINK] The Point of Life is the Explosion of Experience Into Ideas
The Point of Life is the Explosion of Experience Into Ideas is a philosophical article I wrote detailing why and how self-expression is the fundamental human freedom and the justification for suffering.
To become more rational, rinse your left ear with cold water
A recent paper in Cortex describes how caloric vestibular stimulation (CVS), i.e., rinsing of the ear canal with cold water, reduces unrealistic optimism. Here are some bits from the paper:
Participants were 31 healthy right-handed adults (15 men, 20–40 years)...Participants were oriented in a supine position with the head inclined 30° from the horizontal and cold water (24 °C) was irrigated into the external auditory canal on one side (Fitzgerald and Hallpike, 1942). After both vestibular-evoked eye movements and vertigo had stopped, the procedure was repeated on the other side...
Participants were asked to estimate their own risk, relative to that of their peers (same age, sex and education), of contracting a series of illnesses. The risk rating scale ranged from −6 (lower risk) to +6 (higher risk). ... Each participant was tested in three conditions, with 5 min rest between each: baseline with no CI (always first), left-ear CI and right-ear CI (order counterbalanced). In the latter conditions risk-estimation was initiated after 30 sec of CI, when nystagmic response had built up. Ten illnesses were rated in each condition and the average risk estimate per condition (mean of 10 ratings) was calculated for each participant. The 30 illnesses used in this study (see Table 1) were selected from a larger pool of illnesses pre-rated by a separate group of 30 healthy participants.

Overall, our participants were unrealistically optimistic about their chances of contracting illnesses at baseline ... and during right-ear CI. ... Post-hoc tests using the Bonferroni correction revealed that, compared to baseline, average risk estimates were significantly higher during left-ear CI (p = .016), whereas they remained unchanged during right-ear CI (p = .476). Unrealistic optimism was thus reduced selectively during left-ear stimulation.
(CI stands for caloric irrigation which is how CVS was performed.)
It is not clear how close the participants came to being realistic in their estimates after CVS, but they definitely became more pessimistic, which is the right direction to go in the context of numerous biases such as the planning fallacy.
The paper:
Vestibular stimulation attenuates unrealistic optimism
[link] Are All Dictator Game Results Artifacts?
http://www.epjournal.net/blog/2013/05/are-all-dictator-game-results-artifacts/
You walk into a laboratory, and you read a set of instructions that tell you that your task is to decide how much of a $10 pie you want to give to an anonymous other person who signed up for the experimental session.
This describes, more or less, the Dictator Game, a staple of behavioral economics with a history dating back more than a quarter of a century. The Dictator Game (DG) might not be the drosophila melanogaster of behavioral economics – the Prisoner’s Dilemma can lay plausible claim to that prized analogy – but it could reasonably aspire to an only slightly more modest title, perhaps the e. coli of the discipline. Since the original work, more than 20,000 observations in the DG have been reported.
[...]
How much would participants in a Dictator Game give to the other person if they did not know they were in a Dictator Game study? Simply following me around during the day and recording how much cash I dispense won’t answer this question because in the DG, the money is provided by the experimenter. So, to build a parallel design, the method used must move money to subjects as a windfall so that we can observe how much of this “house money” they choose to give away.
And that is what Winking and Mizer did in a paper now in press and available online (paywall) in Evolution and Human Behavior, using participants, fittingly enough, in Las Vegas. Here's what they did. Two confederates were needed. The first, destined to become the "recipient," was occupied on a phone call near a bus stop in Vegas. The second confederate approached lone individuals at the bus stop, indicated that they were late for a ride to the airport, and asked the subject if they wanted the $20 in casino chips still in the confederate's possession, scamming people into, rather than out of, money, in sharp contradiction of the deep traditions of Las Vegas. The question was how many chips the fortunate subject transferred to the nearby confederate.

[...]
In a second condition, the confederate with the chips added a comment to the effect that the subject could "split it with that guy however you want," indicating the first confederate. This condition brings the study a bit closer, but not much closer, to lab conditions. In a third condition, subjects were asked if they wanted to participate in a study, and then did so along the lines of the usual DG, making the treatment considerably closer to traditional lab-based conditions.
The difference between the first two treatments and the third treatment is interesting, but, as I said at the beginning, the DG should be thought of as a measuring tool. Figure 1 shows how many chips people give away in the DG in the three treatments. In conditions 1 and 2, the number of people (out of 60) who gave at least one chip to the second confederate was… zero. To the extent you think that this method answers the question of how much Dictator Game giving is due to people knowing they're in an experiment, the answer is, "all of it."
Link to paper (paywalled).
Three more ways identity can be a curse
The Buddhists believe that one of the three keys to attaining true happiness is dissolving the illusion of the self. (The other two are dissolving the illusion of permanence, and ceasing the desire that leads to suffering.) I'm not really sure exactly what it means to say "the self is an illusion", and I'm not exactly sure how that will lead to enlightenment, but I do think one can easily take the first step on this long journey to happiness by beginning to dissolve the sense of one's identity.
Previously, in "Keep Your Identity Small", Paul Graham showed how a strong sense of identity can lead to epistemic irrationality, as when someone refuses to accept evidence against x because "someone who believes x" is part of his or her identity. And in "The Curse of Identity", Kaj Sotala illustrated a human tendency to reinterpret a goal of "do x" as "give the impression of being someone who does x". These are both fantastic posts, and you should read them if you haven't already.
Here are three more ways in which identity can be a curse.
1. Don't be afraid to change
James March, professor of political science at Stanford University, says that when people make choices, they tend to use one of two basic models of decision making: the consequences model, or the identity model. In the consequences model, we weigh the costs and benefits of our options and make the choice that maximizes our satisfaction. In the identity model, we ask ourselves "What would a person like me do in this situation?"1
The author of the book I read this in didn't seem to take the obvious next step and acknowledge that the consequences model is clearly The Correct Way to Make Decisions: basically by definition, if you're using the identity model and it's giving you a different result than the consequences model would, you're being led astray. A heuristic I like to use is to limit my identity to the "observer" part of my brain, and make my only goal maximizing the amount of happiness and pleasure the observer experiences, and minimizing the amount of misfortune and pain. It sounds obvious when you lay it out in these terms, but let me give an example.
Alice is an incoming freshman in college trying to choose her major. At Hypothetical University, there are only two majors: English and business. Alice absolutely adores literature, and thinks business is dreadfully boring. Becoming an English major would allow her to have a career working with something she's passionate about, which is worth 2 megautilons to her, but it would also make her poor (0 mu). Becoming a business major would mean working in a field she is not passionate about (0 mu), but it would also make her rich, which is worth 1 megautilon. So English, with 2 mu, wins out over business, with 1 mu.
However, Alice is very bright, and is the type of person who can adapt herself to many situations and learn skills quickly. If Alice were to spend the first six months of college deeply immersing herself in studying business, she would probably start developing a passion for business. If she purposefully exposed herself to certain pro-business memeplexes (e.g. watched a movie glamorizing the life of Wall Street bankers), then she could speed up this process even further. After a few years of taking business classes, she would probably begin to forget what about English literature was so appealing to her, and be extremely grateful that she made the decision she did. Therefore she would gain the same 2 mu from having a job she is passionate about, along with an additional 1 mu from being rich, meaning that the 3 mu choice of business wins out over the 2 mu choice of English.
However, the possibility of self-modifying to becoming someone who finds English literature boring and business interesting is very disturbing to Alice. She sees it as a betrayal of everything that she is, even though she's actually only been interested in English literature for a few years. Perhaps she thinks of choosing business as "selling out" or "giving in". Therefore she decides to major in English, and takes the 2 mu choice instead of the superior 3 mu.
(Obviously this is a hypothetical example/oversimplification and there are a lot of reasons why it might be rational to pursue a career path that doesn't make very much money.)
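For what it's worth, here is the Alice example written out as a bare consequences-model comparison, using the hypothetical megautilon values from above; the point is only that the identity model rules out the best-scoring option before the comparison is even made.

```python
# The Alice example as a bare consequences-model calculation, using the
# hypothetical megautilon values from the text above.
def total_utility(passion_value, wealth_value):
    return passion_value + wealth_value

options = {
    "English (stay as-is)":           total_utility(passion_value=2, wealth_value=0),
    "Business (stay as-is)":          total_utility(passion_value=0, wealth_value=1),
    "Business (self-modify passion)": total_utility(passion_value=2, wealth_value=1),
}

# The consequences model simply picks the top line; the identity model
# excludes the self-modification option from consideration entirely.
for choice, mu in sorted(options.items(), key=lambda kv: -kv[1]):
    print(f"{choice}: {mu} mu")
```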
It seems to me like human beings have a bizarre tendency to want to keep certain attributes and character traits stagnant, even when doing so provides no advantage, or is actively harmful. In a world where business-passionate people systematically do better than English-passionate people, it makes sense to self-modify to become business-passionate. Yet this is often distasteful.
For example, until a few weeks ago when I started solidifying this thinking pattern, I had an extremely adverse reaction to the idea of ceasing to be a hip-hop fan and becoming a fan of more "sophisticated" musical genres like jazz and classical, eventually coming to look down on the music I currently listen to as primitive or silly. This doesn't really make sense - I'm sure if I were to become a jazz and classical fan I would enjoy those genres at least as much as I currently enjoy hip hop. And yet I had a very strong preference to remain the same, even in the trivial realm of music taste.
Probably the most extreme example is the common tendency for depressed people to not actually want to get better, because depression has become such a core part of their identity that the idea of becoming a healthy, happy person is disturbing to them. (I used to struggle with this myself, in fact.) Being depressed is probably the most obviously harmful characteristic that someone can have, and yet many people resist self-modification.
Of course, the obvious objection is there's no way to rationally object to people's preferences - if someone truly prioritizes keeping their identity stagnant over not being depressed then there's no way to tell them they're wrong, just like if someone prioritizes paperclips over happiness there's no way to tell them they're wrong. But if you're like me, and you are interested in being happy, then I recommend looking out for this cognitive bias.
The other objection is that this philosophy leads to extremely unsavory wireheading-esque scenarios if you take it to its logical conclusion. But holding the opposite belief - that it's always more important to keep your characteristics stagnant than to be happy - clearly leads to even more absurd conclusions. So there is probably some point on the spectrum where change is so distasteful that it's not worth a boost in happiness (e.g. a lobotomy or something similar). However, I think that in actual practical pre-Singularity life, most people set this point far, far too low.
2. The hidden meaning of "be yourself"
(This section is entirely my own speculation, so take it as you will.)
"Be yourself" is probably the most widely-repeated piece of social skills advice despite being pretty clearly useless - if it worked then no one would be socially awkward, because everyone has heard this advice.
However, there must be some sort of core grain of truth in this statement, or else it wouldn't be so widely repeated. I think that core grain is basically the point I just made, applied to social interaction: i.e., always optimize for social success and positive relationships (particularly in the moment), and not for signalling a certain identity.
The ostensible purpose of identity/signalling is to appear to be a certain type of person, so that people will like and respect you, which is in turn so that people will want to be around you and be more likely to do stuff for you. However, oftentimes this goes horribly wrong, and people become very devoted to cultivating certain identities that are actively harmful for this purpose, e.g. goth, juggalo, "cool reserved aloof loner", guy that won't shut up about politics, etc. A more subtle example is Fred, who holds the wall and refuses to dance at a nightclub because he is a serious, dignified sort of guy, and doesn't want to look silly. However, the reason why "looking silly" is generally a bad thing is because it makes people lose respect for you, and therefore make them less likely to associate with you. In the situation Fred is in, holding the wall and looking serious will cause no one to associate with him, but if he dances and mingles with strangers and looks silly, people will be likely to associate with him. So unless he's afraid of looking silly in the eyes of God, this seems to be irrational.
Probably more common is the tendency to go to great lengths to cultivate identities that are neither harmful nor beneficial. E.g. "deep philosophical thinker", "Grateful Dead fan", "tough guy", "nature lover", "rationalist", etc. Boring Bob is a guy who wears a blue polo shirt and khakis every day, works as hard as expected but no harder in his job as an accountant, holds no political views, and when he goes home he relaxes by watching whatever's on TV and reading the paper. Boring Bob would probably improve his chances of social success by cultivating a more interesting identity, perhaps by changing his wardrobe, hobbies, and viewpoints, and then liberally signalling this new identity. However, most of us are not Boring Bob, and a much better social success strategy for most of us is probably to smile more, improve our posture and body language, be more open and accepting of other people, learn how to make better small talk, etc. But most people fail to realize this and instead play elaborate signalling games in order to improve their status, sometimes even at the expense of lots of time and money.
Some ways by which people can fail to "be themselves" in individual social interactions: liberally sprinkle references to certain attributes that they want to emphasize, say nonsensical and surreal things in order to seem quirky, be afraid to give obvious responses to questions in order to seem more interesting, insert forced "cool" actions into their mannerisms, act underwhelmed by what the other person is saying in order to seem jaded and superior, etc. Whereas someone who is "being herself" is more interested in creating rapport with the other person than giving off a certain impression of herself.
Additionally, optimizing for a particular identity might not only be counterproductive - it might actually be a quick way to get people to despise you.
I used to not understand why certain "types" of people, such as "hipsters"2 or Ed Hardy and Affliction-wearing "douchebags" are so universally loathed (especially on the internet). Yes, these people are adopting certain styles in order to be cool and interesting, but isn't everyone doing the same? No one looks through their wardrobe and says "hmm, I'll wear this sweater because it makes me uncool, and it'll make people not like me". Perhaps hipsters and Ed Hardy Guys fail in their mission to be cool, but should we really hate them for this? If being a hipster was cool two years ago, and being someone who wears normal clothes, acts normal, and doesn't do anything "ironically" is cool today, then we're really just hating people for failing to keep up with the trends. And if being a hipster actually is cool, then, well, who can fault them for choosing to be one?
That was my old thought process. Now it is clear to me that what makes hipsters and Ed Hardy Guys hated is that they aren't "being themselves" - they are much more interested in cultivating an identity of interestingness and masculinity, respectively, than connecting with other people. The same thing goes for pretty much every other collectively hated stereotype I can think of3 - people who loudly express political opinions, stoners who won't stop talking about smoking weed, attention seeking teenage girls on facebook, extremely flamboyantly gay guys, "weeaboos", hippies and new age types, 2005 "emo kids", overly politically correct people, tumblr SJA weirdos who identify as otherkin and whatnot, overly patriotic "rednecks", the list goes on and on.
This also clears up a confusion that occurred to me when reading How to Win Friends and Influence People. I know people who have a Dale Carnegie mindset of being optimistic and nice to everyone they meet and are adored for it, but I also know people who have the same attitude and yet are considered irritatingly saccharine and would probably do better to "keep it real" a little. So what's the difference? I think the difference is that the former group are genuinely interested in being nice to people and building rapport, while members of the second group have made an error like the one described in Kaj Sotala's post and are merely trying to give off the impression of being a nice and friendly person. The distinction is obviously very subtle, but it's one that humans are apparently very good at perceiving.
I'm not exactly sure what it is that causes humans to have this tendency of hating people who are clearly optimizing for identity - it's not as if they harm anyone. It probably has to do with tribal status. But what is clear is that you should definitely not be one of them.
3. The worst mistake you can possibly make in combating akrasia
The main thesis of PJ Eby's Thinking Things Done is that the primary reason why people are incapable of being productive is that they use negative motivation ("if I don't do x, some negative y will happen") as opposed to positive motivation ("if I do x, some positive y will happen"). He has the following evo-psych explanation for this: in the ancestral environment, personal failure meant that you could possibly be kicked out of your tribe, which would be fatal. A lot of depressed people make statements like "I'm worthless", or "I'm scum" or "No one could ever love me", which are illogically dramatic and overly black and white, until you realize that these statements are merely interpretations of a feeling of "I'm about to get kicked out of the tribe, and therefore die." Animals have a freezing response to imminent death, so if you are fearing failure you will go into do-nothing mode and not be able to work at all.4
In Succeed: How We Can Reach Our Goals, PhD psychologist Heidi Halvorson takes a different view and describes positive motivation and negative motivation as having pros and cons. However, she has her own dichotomy of Good Motivation and Bad Motivation: "Be good" goals are performance goals, and are directed at achieving a particular outcome, like getting an A on a test, reaching a sales target, getting your attractive neighbor to go out with you, or getting into law school. They are very often tied closely to a sense of self-worth. "Get better" goals are mastery goals, and people who pick these goals judge themselves instead in terms of the progress they are making, asking questions like "Am I improving? Am I learning? Am I moving forward at a good pace?" Halvorson argues that "get better" goals are almost always drastically better than "be good" goals5. An example quote (from page 60) is:
When my goal is to get an A in a class and prove that I'm smart, and I take the first exam and I don't get an A... well, then I really can't help but think that maybe I'm not so smart, right? Concluding "maybe I'm not smart" has several consequences and none of them are good. First, I'm going to feel terrible - probably anxious and depressed, possibly embarrassed or ashamed. My sense of self-worth and self-esteem are going to suffer. My confidence will be shaken, if not completely shattered. And if I'm not smart enough, there's really no point in continuing to try to do well, so I'll probably just give up and not bother working so hard on the remaining exams.
And finally, in Feeling Good: The New Mood Therapy, David Burns describes a destructive side effect of depression he calls "do-nothingism":
One of the most destructive aspects of depression is the way it paralyzes your willpower. In its mildest form you may simply procrastinate about doing a few odious chores. As your lack of motivation increases, virtually any activity appears so difficult that you become overwhelmed by the urge to do nothing. Because you accomplish very little, you feel worse and worse. Not only do you cut yourself off from your normal sources of stimulation and pleasure, but your lack of productivity aggravates your self-hatred, resulting in further isolation and incapacitation.
Synthesizing these three pieces of information leads me to believe that the worst thing you can possibly do for your akrasia is to tie your success and productivity to your sense of identity/self-worth, especially if you're using negative motivation to do so, and especially if you suffer or have recently suffered from depression or low-self esteem. The thought of having a negative self-image is scary and unpleasant, perhaps for the evo-psych reasons PJ Eby outlines. If you tie your productivity to your fear of a negative self-image, working will become scary and unpleasant as well, and you won't want to do it.
I feel like this might be the single biggest reason why people are akratic. It might be a little premature to say that, and I might be biased by how large of a factor this mistake was in my own akrasia. But unfortunately, this trap seems like a very easy one to fall into. If you're someone who is lazy and isn't accomplishing much in life, perhaps depressed, then it makes intuitive sense to motivate yourself by saying "Come on, self! Do you want to be a useless failure in life? No? Well get going then!" But doing so will accomplish the exact opposite and make you feel miserable.
So there you have it. In addition to making you a bad rationalist and causing you to lose sight of your goals, a strong sense of identity will cause you to make poor decisions that lead to unhappiness, be unpopular, and be unsuccessful. I think the Buddhists were onto something with this one, personally, and I try to limit my sense of identity as much as possible. A trick you can use, in addition to the "be the observer" trick I mentioned, is this: whenever you find yourself thinking in identity terms, swap out that identity for the identity of "person who takes over the world by transcending the need for a sense of identity".
This is my first LessWrong discussion post, so constructive criticism is greatly appreciated. Was this informative? Or was what I said obvious, and I'm retreading old ground? Was this well written? Should this have been posted to Main? Should this not have been posted at all? Thank you.
1. Paraphrased from page 153 of Switch: How to Change When Change is Hard
2. Actually, while it works for this example, I think the stereotypical "hipster" is a bizarre caricature that doesn't match anyone who actually exists in real life, and the degree to which people will rabidly espouse hatred for this stereotypical figure (or used to two or three years ago) is one of the most bizarre tendencies people have.
3. Other than groups that arguably hurt people (religious fundamentalists, PUAs), the only exception I can think of is frat boy/jock types. They talk about drinking and partying a lot, sure, but not really any more than people who drink and party a lot would be expected to. Possibilities for their hated status include that they do in fact engage in obnoxious signalling and I'm not aware of it, jealousy, or stigmatization as hazers and date rapists. Also, a lot of people hate stereotypical "ghetto" black people who sag their jeans and notoriously type in a broken, difficult-to-read form of English. This could either be a weak example of the trend (I'm not really sure what it is they would be signalling, maybe dangerous-ness?), or just a manifestation of racism.
4. I'm not sure if this is valid science that he pulled from some other source, or if he just made this up.
[Link] False memories of fabricated political events
Another one for the memory-is-really-unreliable file. Some researchers at UC Irvine (one of them is Elizabeth Loftus, whose name I've seen attached to other fake-memory studies) asked about 5000 subjects about their recollection of four political events. One of the political events never actually happened. About half the subjects said they remembered the fake event. Subjects were more likely to pseudo-remember events congruent with their political preferences (e.g., Bush or Obama doing something embarrassing).
Link to papers.ssrn.com (paper is freely downloadable).
The subjects were recruited from the readership of Slate, which unsurprisingly means they aren't a very representative sample of the US population (never mind the rest of the world). In particular, about 5% identified as conservative and about 60% as progressive.
Each real event was remembered by 90-98% of subjects. Self-identified conservatives remembered the real events a little less well. Self-identified progressives were much more likely to "remember" a fake event in which G W Bush took a vacation in Texas while Hurricane Katrina was devastating New Orleans. Self-identified conservatives were somewhat more likely to "remember" a fake event in which Barack Obama shook the hand of Mahmoud Ahmedinejad.
About half of the subjects who "remembered" fake events were unable to identify the fake event correctly when they were told that one of the events in the study was fake.
[Link] Social Psychology & Priming: Art Wears Off
Related to: Power of Suggestion
Social Psychology & Priming: Art Wears Off
by Steve Sailer
One of the most popular social psychology studies of the Malcolm Gladwell Era has been Yale professor John Bargh's paper on how you can "prime" students to walk more slowly by first having them do word puzzles that contain a hidden theme of old age by the inclusion of words like "wrinkle" and "bingo." The primed subjects then took one second longer on average to walk down the hall than the unprimed control group. Isn't that amazing! (Here's Gladwell's description of Bargh's famous experiment in his 2005 bestseller Blink.)
This finding has electrified the Airport Book industry for years: Science proves you can manipulate people into doing what you want them to! Why you'd want college students to walk slower is unexplained, but that's not the point. The point is that Science proves that people are manipulable.
Now, a large fraction of the buyers of Airport Books like Blink are marketing and advertising professionals, who are paid handsomely to manipulate people, and to manipulate them into not just walking slower, but into shelling out real money to buy the clients' products.
Moreover, everybody notices that entertainment can prime you in various ways. For instance, well-made movies prime how I walk down the street afterwards. For two nights after seeing the Coen Brothers' No Country for Old Men, I walked the quiet streets swiveling my head, half-certain that an unstoppable killing machine was tailing me. When I came out of Christopher Nolan's amnesia thriller Memento, I was convinced I'd never remember where I parked my car. (As it turned out, I quickly found my car. Why? Because I needed to. But it was fun for thirty seconds to act like, and maybe even believe, that the movie had primed me into amnesia.)
Now, you could say, "That's art, not marketing," but the distinction isn't that obvious to talented directors. Not surprisingly, directors between feature projects often tide themselves over directing commercials. For example, Ridley Scott made Blade Runner in 1982 and then the landmark 1984 ad introducing the Apple Mac at the 1984 Super Bowl.
So, in an industry in which it's possible, if you have a big enough budget, to hire Sir Ridley to direct your next TV commercial, why the fascination with Bargh's dopey little experiment?
One reason is that there's a lot of uncertainty in the marketing and advertising game. Nineteenth Century department store mogul John Wanamaker famously said that half his advertising budget was wasted, he just didn't know which half.
Worse, things change. A TV commercial that excited viewers a few years ago often strikes them as dull and unfashionable today. Today, Scott's 1984 ad might remind people subliminally, from picking up on certain stylistic commonalities, of how dopey Scott's Prometheus was last summer, or how lame the Wachowski Siblings 1984-imitation V for Vendetta was, and Apple doesn't need their computers associated with that stuff.
Naturally, social psychologists want to get in on a little of the big money action of marketing. Gladwell makes a bundle speaking to sales conventions, and maybe they can get some gigs themselves. And even if their motivations are wholly academic, it's nice to have your brother-in-law, the one who makes so much more money than you do doing something boring in the corporate world, excitedly forward you an article he read that mentions your work.
("Priming" theory is also the basis for the beloved concept of "stereotype threat," which seems to offer a simple way to close those pesky Gaps that beset society: just get everybody to stop noticing stereotypes, and the Gaps will go away!)
But why do the marketers love hearing about these weak tea little academic experiments, even though they do much more powerful priming on the job? I suspect one reason is because these studies are classified as Science, and Science is permanent. As some egghead in Europe pointed out, Science is Replicable. Once the principles of Scientific Manipulation are uncovered, then they can just do their marketing jobs on autopilot. No more need to worry about trends and fads.
But, how replicable are these priming experiments?
He then comments on and extensively quotes the Higher Education piece Power of Suggestion by Tom Bartlett, which I linked to at the start of my post. I'm skipping that to jump to the novel part of Steve's post.
Okay, but I've never seen this explanation offered: successful priming studies stop replicating after awhile because they basically aren't science. At least not in the sense of having discovered something that will work forever.
Instead, to the extent that they ever did really work, they are exercises in marketing. Or, to be generous, art.
And, art wears off.
The power of a work of art to prime emotions and actions changes over time. Perhaps, initially, the audience isn't ready for it, then it begins to impact a few sensitive fellow artists, and they begin to create other works in its manner and talk it up, and then it become widely popular. Over time, though, boredom sets in and people look for new priming stimuli.
For a lucky few old art works (e.g., the great Impressionist paintings), vast networks exist to market them by helping audiences get back into the proper mindset to appreciate the old art (E.g., "Monet was a rebel, up against The Establishment! So, putting this pretty picture of flowers up on your wall shows everybody that you are an edgy outsider, too!").
So, let's assume for a moment that Bargh's success in the early 1990s at getting college students to walk slow wasn't just fraud or data mining for a random effect among many effects. He really was priming early 1990s college students into walking slow for a few seconds.
Is that so amazing?
Other artists and marketers in the early 1990s were priming sizable numbers of college students into wearing flannel lumberjack shirts or dancing the Macarena or voting for Ross Perot, all of which seem, from the perspective of 2013, a lot more amazing.
Overall, it's really not that hard to prime young people to do things. They are always looking around for clues about what's cool to do.
But it's hard to keep them doing the same thing over and over. The Macarena isn't cool anymore, so it would be harder to replicate today an event in which young people are successfully primed to do the Macarena.
So, in the best case scenario, priming isn't science, it's art or marketing.
Interesting hypothesis.
[Link] Power of Suggestion
Related: Social Psychology & Priming: Art Wears Off
I recommend reading the piece, but below are some excerpts and commentary.
Power of Suggestion
By Tom Bartlett
...
Along with personal upheaval, including a lengthy child-custody battle, [Yale social psychologist John Bargh] has coped with what amounts to an assault on his life's work, the research that pushed him into prominence, the studies that Malcolm Gladwell called "fascinating" and Daniel Kahneman deemed "classic."
What was once widely praised is now being pilloried in some quarters as emblematic of the shoddiness and shallowness of social psychology. When Bargh responded to one such salvo with a couple of sarcastic blog posts, he was ridiculed as going on a "one-man rampage." He took the posts down and regrets writing them, but his frustration and sadness at how he's been treated remain.
Psychology may be simultaneously at the highest and lowest point in its history. Right now its niftiest findings are routinely simplified and repackaged for a mass audience; if you wish to publish a best seller sans bloodsucking or light bondage, you would be well advised to match a few dozen psychological papers with relatable anecdotes and a grabby, one-word title. That isn't true across the board. ... But a social psychologist with a sexy theory has star potential. In the last decade or so, researchers have made astonishing discoveries about the role of consciousness, the reasons for human behavior, the motivations for why we do what we do. This stuff is anything but incremental.
At the same time, psychology has been beset with scandal and doubt. Formerly high-flying researchers like Diederik Stapel, Marc Hauser, and Dirk Smeesters saw their careers implode after allegations that they had cooked their results and managed to slip them past the supposedly watchful eyes of peer reviewers.
Psychology isn't the only field with fakers, but it has its share. Plus there's the so-called file-drawer problem, that is, the tendency for researchers to publish their singular successes and ignore their multiple failures, making a fluke look like a breakthrough. Fairly or not, social psychologists are perceived to be less rigorous in their methods, generally not replicating their own or one another's work, instead pressing on toward the next headline-making outcome.
Much of the criticism has been directed at priming. The definitions get dicey here because the term can refer to a range of phenomena, some of which are grounded in decades of solid evidence—like the "anchoring effect," which happens, for instance, when a store lists a competitor's inflated price next to its own to make you think you're getting a bargain. That works. The studies that raise eyebrows are mostly in an area known as behavioral or goal priming, research that demonstrates how subliminal prompts can make you do all manner of crazy things. A warm mug makes you friendlier. The American flag makes you vote Republican. Fast-food logos make you impatient.
A small group of skeptical psychologists—let's call them the Replicators—have been trying to reproduce some of the most popular priming effects in their own labs.
What have they found? Mostly that they can't get those results. The studies don't check out. Something is wrong. And because he is undoubtedly the biggest name in the field, the Replicators have paid special attention to John Bargh and the study that started it all.
... When the walking times of the two groups were compared, the Florida-knits-alone subjects walked, on average, more slowly than the control group. Words on a page made them act old.
It's a cute finding. But the more you think about it, the more serious it starts to seem. What if we are constantly being influenced by subtle, unnoticed cues? If "Florida" makes you sluggish, could "cheetah" make you fleet of foot? Forget walking speeds. Is our environment making us meaner or more creative or stupider without our realizing it? We like to think we're steering the ship of self, but what if we're actually getting blown about by ghostly gusts?
Steve Sailer comments on this:
Advertisers, from John Wanamaker onward, sure as heck hope they are blowing you about by ghostly gusts.
Not only advertisers, the industry he worked in, but indeed our little community probably loves any results confirming such a picture. We need to be careful about that. Bartlett continues:
John Bargh and his co-authors, Mark Chen and Lara Burrows, performed that experiment in 1990 or 1991. They didn't publish it until 1996. Why sit on such a fascinating result? For starters, they wanted to do it again, which they did. They also wanted to perform similar experiments with different cues. One of those other experiments tested subjects to see if they were more hostile when primed with an African-American face. They were. (The subjects were not African-American.) In the other experiment, the subjects were primed with rude words to see if that would make them more likely to interrupt a conversation. It did.
The researchers waited to publish until other labs had found the same type of results. They knew their finding would be controversial. They knew many people wouldn't believe it. They were willing to stick their necks out, but they didn't want to be the only ones.
Since that study was published in the Journal of Personality and Social Psychology, it has been cited more than 2,000 times. Though other researchers did similar work at around the same time, and even before, it was that paper that sparked the priming era. Its authors knew, even before it was published, that the paper was likely to catch fire. They wrote: "The implications for many social psychological phenomena ... would appear to be considerable." Translation: This is a huge deal.
...
The last year has been tough for Bargh. Professionally, the nadir probably came in January, when a failed replication of the famous elderly-walking study was published in the journal PLoS ONE. It was not the first failed replication, but this one stung. In the experiment, the researchers had tried to mirror Bargh's methods with an important exception: Rather than stopwatches, they used automatic timing devices with infrared sensors to eliminate any potential bias. The words didn't make subjects act old. They tried the experiment again with stopwatches and added a twist: They told those operating the stopwatches which subjects were expected to walk slowly. Then it worked. The title of their paper tells the story: "Behavioral Priming: It's All in the Mind, but Whose Mind?"
The paper annoyed Bargh. He thought the researchers didn't faithfully follow his methods section, despite their claims that they did. But what really set him off was a blog post that explained the results. The post, on the blog Not Exactly Rocket Science, compared what happened in the experiment to the notorious case of Clever Hans, the horse that could supposedly count. It was thought that Hans was a whiz with figures, stomping a hoof in response to mathematical queries. In reality, the horse was picking up on body language from its handler. Bargh was the deluded horse handler in this scenario. That didn't sit well with him. If the PLoS ONE paper is correct, the significance of his experiment largely dissipates. What's more, he looks like a fool, tricked by a fairly obvious flaw in the setup.
...
Pashler, a professor of psychology at the University of California at San Diego, is the most prolific of the Replicators. He started trying priming experiments about four years ago because, he says, "I wanted to see these effects for myself." That's a diplomatic way of saying he thought they were fishy. He's tried more than a dozen so far, including the elderly-walking study. He's never been able to achieve the same results. Not once.
This fall, Daniel Kahneman, the Nobel Prize-winning psychologist, sent an e-mail to a small group of psychologists, including Bargh, warning of a "train wreck looming" in the field because of doubts surrounding priming research. He was blunt: "I believe that you should collectively do something about this mess. To deal effectively with the doubts you should acknowledge their existence and confront them straight on, because a posture of defiant denial is self-defeating," he wrote.
Strongly worded e-mails from Nobel laureates tend to get noticed, and this one did. He sent it after conversations with Bargh about the relentless attacks on priming research. Kahneman cast himself as a mediator, a sort of senior statesman, endeavoring to bring together believers and skeptics. He does have a dog in the fight, though: Kahneman believes in these effects and has written admiringly of Bargh, including in his best seller Thinking, Fast and Slow.
On the heels of that message from on high, an e-mail dialogue began between the two camps. The vibe was more conciliatory than what you hear when researchers are speaking off the cuff and off the record. There was talk of the type of collaboration that Kahneman had floated, researchers from opposing sides combining their efforts in the name of truth. It was very civil, and it didn't lead anywhere.
In one of those e-mails, Pashler issued a challenge masquerading as a gentle query: "Would you be able to suggest one or two goal priming effects that you think are especially strong and robust, even if they are not particularly well-known?" In other words, put up or shut up. Point me to the stuff you're certain of and I'll try to replicate it. This was intended to counter the charge that he and others were cherry-picking the weakest work and then doing a victory dance after demolishing it. He didn't get the straightforward answer he wanted. "Some suggestions emerged but none were pointing to a concrete example," he says.
One possible explanation for why these studies continually and bewilderingly fail to replicate is that they have hidden moderators, sensitive conditions that make them a challenge to pull off. Pashler argues that the studies never suggest that. He wrote in that same e-mail: "So from our reading of the literature, it is not clear why the results should be subtle or fragile."
Bargh contends that we know more about these effects than we did in the 1990s, that they're more complicated than researchers had originally assumed. That's not a problem, it's progress. And if you aren't familiar with the literature in social psychology, with the numerous experiments that have modified and sharpened those early conclusions, you're unlikely to successfully replicate them. Then you will trot out your failure as evidence that the study is bogus when really what you've proved is that you're no good at social psychology.
Pashler can't quite disguise his disdain for such a defense. "That doesn't make sense to me," he says. "You published it. That must mean you think it is a repeatable piece of work. Why can't we do it just the way you did it?"
That's how David Shanks sees things. He, too, has been trying to replicate well-known priming studies, and he, too, has been unable to do so. In a forthcoming paper, Shanks, a professor of psychology at University College London, recounts his and his several co-authors' attempts to replicate one of the most intriguing effects, the so-called professor prime. In the study, one group was told to imagine a professor's life and then list the traits that brought to mind. Another group was told to do the same except with a soccer hooligan rather than a professor.
The groups were then asked questions selected from the board game Trivial Pursuit, questions like "Who painted 'Guernica'?" and "What is the capital of Bangladesh?" (Picasso and Dhaka, for those playing at home.) Their scores were then tallied. The subjects who imagined the professor scored above a control group that wasn't primed. The subjects who imagined soccer hooligans scored below the professor group and below the control. Thinking about a professor makes you smart while thinking about a hooligan makes you dumb. The study has been replicated a number of times, including once on Dutch television.
Shanks can't get the result. And, boy, has he tried. Not once or twice, but nine times.
The skepticism about priming, says Shanks, isn't limited to those who have committed themselves to reperforming these experiments. It's not only the Replicators. "I think more people in academic psychology than you would imagine appreciate the historical implausibility of these findings, and it's just that those are the opinions that they have over the water fountain," he says. "They're not the opinions that get into the journalism."
Like all the skeptics I spoke with, Shanks believes the worst is yet to come for priming, predicting that "over the next two or three years you're going to see an avalanche of failed replications published." The avalanche may come sooner than that. There are failed replications in press at the moment and many more that have been completed (Shanks's paper on the professor prime is in press at PLoS ONE). A couple of researchers I spoke with didn't want to talk about their results until they had been peer reviewed, but their preliminary results are not encouraging.
Ap Dijksterhuis is the author of the professor-prime paper. At first, Dijksterhuis, a professor of psychology at Radboud University Nijmegen, in the Netherlands, wasn't sure he wanted to be interviewed for this article. That study is ancient news—it was published in 1998, and he's moved away from studying unconscious processes in the last couple of years, in part because he wanted to move on to new research on happiness and in part because of the rancor and suspicion that now accompany such work. He's tired of it.
The outing of Diederik Stapel made the atmosphere worse. Stapel was a social psychologist at Tilburg University, also in the Netherlands, who was found to have committed scientific misconduct in scores of papers. The scope and the depth of the fraud were jaw-dropping, and it changed the conversation. "It wasn't about research practices that could have been better. It was about fraud," Dijksterhuis says of the Stapel scandal. "I think that's playing in the background. It now almost feels as if people who do find significant data are making mistakes, are doing bad research, and maybe even doing fraudulent things."
Here is a link to the wiki article on the mentioned misconduct. I recall some of the drama that unfolded around the outing and the papers themselves... looking at the kinds of results Stapel wanted to fake or thought would advance his career reminds me of some other older examples of scientific misconduct.
In the e-mail discussion spurred by Kahneman's call to action, Dijksterhuis laid out a number of possible explanations for why skeptics were coming up empty when they attempted priming studies. Cultural differences, for example. Studying prejudice in the Netherlands is different from studying it in the United States. Certain subjects are not susceptible to certain primes, particularly a subject who is unusually self-aware. In an interview, he offered another, less charitable possibility. "It could be that they are bad experimenters," he says. "They may turn out failures to replicate that have been shown by 15 or 20 people already. It basically shows that it's something with them, and it's something going on in their labs."
Joseph Cesario is somewhere between a believer and a skeptic, though these days he's leaning more skeptic. Cesario is a social psychologist at Michigan State University, and he's successfully replicated Bargh's elderly-walking study, discovering in the course of the experiment that the attitude of a subject toward the elderly determined whether the effect worked or not. If you hate old people, you won't slow down. He is sympathetic to the argument that moderators exist that make these studies hard to replicate, lots of little monkey wrenches ready to ruin the works. But that argument only goes so far. "At some point, it becomes excuse-making," he says. "We have to have some threshold where we say that it doesn't exist. It can't be the case that some small group of people keep hitting on the right moderators over and over again."
Cesario has been trying to replicate a recent finding of Bargh's. In that study, published last year in the journal Emotion, Bargh and his co-author, Idit Shalev, asked subjects about their personal hygiene habits—how often they showered and bathed, for how long, how warm they liked the water. They also had subjects take a standard test to determine their degree of social isolation, whether they were lonely or not. What they found is that lonely people took longer and warmer baths and showers, perhaps substituting the warmth of the water for the warmth of regular human interaction.
That isn't priming, exactly, though it is a related unconscious phenomenon often called embodied cognition. As in the elderly-walking study, the subjects didn't realize what they were doing, didn't know they were bathing longer because they were lonely. Can warm water alleviate feelings of isolation? This was a result with real-world applications, and reporters jumped on it. "Wash the loneliness away with a long, hot bath," read an NBC News headline.
But I like the feeling of insight I get when thinking about cool applications of embodied cognition! (;_:)
Bargh's study had 92 subjects. So far Cesario has run more than 2,500 through the same experiment. He's found absolutely no relationship between bathing and loneliness. Zero. "It's very worrisome if you have people thinking they can take a shower and they can cure their depression," he says. And he says Bargh's data are troublesome. "Extremely small samples, extremely large effects—that's a red flag," he says. "It's not a red flag for people publishing those studies, but it should be."
Even though he is, in a sense, taking aim at Bargh, Cesario thinks it's a shame that the debate over priming has become so personal, as if it's a referendum on one man. "He has the most eye-catching findings. He always has," Cesario says. "To the extent that some of his effects don't replicate, because he's identified as priming, it casts doubt on the entire body of research. He is priming."
I'll admit that took me a few seconds too long to parse. (~_^)
That has been the narrative. Bargh's research is crumbling under scrutiny and, along with it, perhaps priming as a whole. Maybe the most exciting aspect of social psychology over the last couple of decades, these almost magical experiments in which people are prompted to be smarter or slower without them even knowing it, will end up as an embarrassing footnote rather than a landmark achievement.
Well yes dear journalist that has been the narrative you've just presented to us readers.
Then along comes Gary Latham.
How entertaining a plot twist! Or maybe a journalist is writing a story out of a confusing process where academia tries to take account of a confusing array of new evidence. Of course that's me telling a story right there. Agggh bad brain bad!
Latham, an organizational psychologist in the management school at the University of Toronto, thought the research Bargh and others did was crap. That's the word he used. He told one of his graduate students, Amanda Shantz, that if she tried to apply Bargh's principles it would be a win-win. If it failed, they could publish a useful takedown. If it succeeded ... well, that would be interesting.
They performed a pilot study, which involved showing subjects a photo of a woman winning a race before the subjects took part in a brainstorming task. As Bargh's research would predict, the photo made them perform better at the brainstorming task. Or seemed to. Latham performed the experiment again in cooperation with another lab. This time the study involved employees in a university fund-raising call center. They were divided into three groups. Each group was given a fact sheet that would be visible while they made phone calls. In the upper left-hand corner of the fact sheet was either a photo of a woman winning a race, a generic photo of employees at a call center, or no photo. Again, consistent with Bargh, the subjects who were primed raised more money. Those with the photo of call-center employees raised the most, while those with the race-winner photo came in second, both outpacing the photo-less control. This was true even though, when questioned afterward, the subjects said they had been too busy to notice the photos.
Latham didn't want Bargh to be right. "I couldn't have been more skeptical or more disbelieving when I started the research," he says. "I nearly fell off my chair when my data" supported Bargh's findings.
That experiment has changed Latham's opinion of priming and has him wondering now about the applications for unconscious primes in our daily lives. Are there photos that would make people be safer at work? Are there photos that undermine performance? How should we be fine-tuning the images that surround us? "It's almost scary in lots of ways that these primes in these environments can affect us without us being aware," he says. Latham hasn't stopped there. He's continued to try experiments using Bargh's ideas, and those results have only strengthened his confidence in priming. "I've got two more that are just mind-blowing," he says. "And I know John Bargh doesn't know about them, but he'll be a happy guy when he sees them."
Latham doesn't know why others have had trouble. He only knows what he's found, and he's certain about his own data. In the end, Latham thinks Bargh will be vindicated as a pioneer in understanding unconscious motivations. "I'm like a converted Christian," he says. "I started out as a devout atheist, and now I'm a believer."
Following his come-to-Jesus transformation, Latham sent an e-mail to Bargh to let him know about the call-center experiment. When I brought this up with Bargh, his face brightened slightly for the first time in our conversation. "You can imagine how that helped me," he says. He had been feeling isolated, under siege, worried that his legacy was becoming a cautionary tale. "You feel like you're on an island," he says.
Though Latham is now a believer, he remains the exception. With more failed replications in the pipeline, Dijksterhuis believes that Kahneman's looming-train-wreck letter, though well meaning, may become a self-fulfilling prophecy, helping to sink the field rather than save it. Perhaps the perception has already become so negative that further replications, regardless of what they find, won't matter much. For his part, Bargh is trying to take the long view. "We have to think about 50 or 100 years from now—are people going to believe the same theories?" he says. "Maybe it's not true. Let's see if it is or isn't."
Admirable that he's come to the latter attitude after the early angry blog posts prompted by what he was going through. That wasn't sarcasm; scientists are only human, after all, and there are easier things to do than this.
On the Importance of Systematic Biases in Science
From pg812-1020 of Chapter 8 “Sufficiency, Ancillarity, And All That” of Probability Theory: The Logic of Science by E.T. Jaynes:
The classical example showing the error of this kind of reasoning is the fable about the height of the Emperor of China. Supposing that each person in China surely knows the height of the Emperor to an accuracy of at least ±1 meter, if there are N=1,000,000,000 inhabitants, then it seems that we could determine his height to an accuracy at least as good as

1 m / √N = 1 m / √(10⁹) ≈ 3 × 10⁻⁵ m    (8-49)

merely by asking each person's opinion and averaging the results.
The absurdity of the conclusion tells us rather forcefully that the √N rule is not always valid, even when the separate data values are causally independent; it requires them to be logically independent. In this case, we know that the vast majority of the inhabitants of China have never seen the Emperor; yet they have been discussing the Emperor among themselves and some kind of mental image of him has evolved as folklore. Then knowledge of the answer given by one does tell us something about the answer likely to be given by another, so they are not logically independent. Indeed, folklore has almost surely generated a systematic error, which survives the averaging; thus the above estimate would tell us something about the folklore, but almost nothing about the Emperor.
We could put it roughly as follows:
error in estimate = S ± R/√N    (8-50)
where S is the common systematic error in each datum, R is the RMS ‘random’ error in the individual data values. Uninformed opinions, even though they may agree well among themselves, are nearly worthless as evidence. Therefore sound scientific inference demands that, when this is a possibility, we use a form of probability theory (i.e. a probabilistic model) which is sophisticated enough to detect this situation and make allowances for it.
As a start on this, equation (8-50) gives us a crude but useful rule of thumb; it shows that, unless we know that the systematic error is less than about 1/3 of the random error, we cannot be sure that the average of a million data values is any more accurate or reliable than the average of ten1. As Henri Poincare put it: “The physicist is persuaded that one good measurement is worth many bad ones.” This has been well recognized by experimental physicists for generations; but warnings about it are conspicuously missing in the “soft” sciences whose practitioners are educated from those textbooks.
Or pg1019-1020 Chapter 10 “Physics of ‘Random Experiments’”:
…Nevertheless, the existence of such a strong connection is clearly only an ideal limiting case unlikely to be realized in any real application. For this reason, the law of large numbers and limit theorems of probability theory can be grossly misleading to a scientist or engineer who naively supposes them to be experimental facts, and tries to interpret them literally in his problems. Here are two simple examples:
- Suppose there is some random experiment in which you assign a probability p for some particular outcome A. It is important to estimate accurately the fraction f of times A will be true in the next million trials. If you try to use the laws of large numbers, it will tell you various things about f; for example, that it is quite likely to differ from p by less than a tenth of one percent, and enormously unlikely to differ from p by more than one percent. But now, imagine that in the first hundred trials, the observed frequency of A turned out to be entirely different from p. Would this lead you to suspect that something was wrong, and revise your probability assignment for the 101’st trial? If it would, then your state of knowledge is different from that required for the validity of the law of large numbers. You are not sure of the independence of different trials, and/or you are not sure of the correctness of the numerical value of p. Your prediction of f for a million trials is probably no more reliable than for a hundred.
- The common sense of a good experimental scientist tells him the same thing without any probability theory. Suppose someone is measuring the velocity of light. After making allowances for the known systematic errors, he could calculate a probability distribution for the various other errors, based on the noise level in his electronics, vibration amplitudes, etc. At this point, a naive application of the law of large numbers might lead him to think that he can add three significant figures to his measurement merely by repeating it a million times and averaging the results. But, of course, what he would actually do is to repeat some unknown systematic error a million times. It is idle to repeat a physical measurement an enormous number of times in the hope that “good statistics” will average out your errors, because we cannot know the full systematic error. This is the old “Emperor of China” fallacy…
Indeed, unless we know that all sources of systematic error - recognized or unrecognized - contribute less than about one-third the total error, we cannot be sure that the average of a million measurements is any more reliable than the average of ten. Our time is much better spent in designing a new experiment which will give a lower probable error per trial. As Poincare put it, “The physicist is persuaded that one good measurement is worth many bad ones.”2 In other words, the common sense of a scientist tells him that the probabilities he assigns to various errors do not have a strong connection with frequencies, and that methods of inference which presuppose such a connection could be disastrously misleading in his problems.
I excerpted & typed up these quotes for use in my DNB FAQ appendix on systematic problems; the applicability of Jaynes’s observations to things like publication bias is obvious. See also http://lesswrong.com/lw/g13/against_nhst/
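To make the Emperor-of-China arithmetic concrete, here is a minimal simulation sketch; the true height, the folklore offset S = 1/3 m, and the per-person noise R = 1 m are illustrative assumptions of mine, not values taken from Jaynes. Averaging more opinions shrinks the random error like R/√N, but the estimate never gets closer to the truth than the shared systematic error.

```python
import random
import statistics

# Minimal sketch of the "Emperor of China" fallacy: every respondent's guess
# is the folklore height (truth plus a common systematic error S) plus
# individual random noise with RMS R. Averaging removes the random part
# but not the shared bias, so the error plateaus near S.

random.seed(0)

TRUE_HEIGHT = 1.75            # metres (illustrative assumption)
S = 1.0 / 3.0                 # common systematic error carried by folklore
R = 1.0                       # RMS of each person's individual error
FOLKLORE_HEIGHT = TRUE_HEIGHT + S

for n in (10, 100, 1_000_000):
    guesses = [random.gauss(FOLKLORE_HEIGHT, R) for _ in range(n)]
    avg = statistics.fmean(guesses)
    print(f"N={n:>9}: average = {avg:.4f} m, "
          f"error vs. truth = {abs(avg - TRUE_HEIGHT):.4f} m, "
          f"rule of thumb S + R/sqrt(N) = {S + R / n ** 0.5:.4f} m")
```

The printed errors hover around 1/3 m however large N gets, which is just equation (8-50) in action.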
-
If I am understanding this right, Jaynes’s point here is that the random error shrinks towards zero as N increases, but this error is added onto the “common systematic error” S, so the total error approaches S no matter how many observations you make and this can force the total error up as well as down (variability, in this case, actually being helpful for once). So for example, with S = 1/3 and R = 1, the total error S + R/√N is roughly 0.65 with N=10; with N=100, it’s 0.43; with N=1,000,000 it’s 0.334; and with N=1,000,000,000 it equals 0.333365 etc, never going below the original systematic error of 1/3. This leads to the unfortunate consequence that the likely error of N=10 is 0.017<x<0.64956 while for N=1,000,000 it is the similar range 0.017<x<0.33433 - so it is possible that the estimate could be exactly as good (or bad) for the tiny sample as compared with the enormous sample, since neither can do better than 0.017!↩
-
Possibly this is what Lord Rutherford meant when he said, “If your experiment needs statistics you ought to have done a better experiment”.↩
Study on depression
I am currently running a study on depression, in collaboration with Shannon Friedman (http://lesswrong.com/user/ShannonFriedman/overview/). If you are interested in participating, the study involves filling out a survey and will take a few minutes of your time (half an hour would be very generous), most likely once a week for four weeks. Send me an email at mdixo100@uottawa.ca, and I can give you more details.
Thank you!
Against NHST
A summary of standard non-Bayesian criticisms of common frequentist statistical practices, with pointers into the academic literature.
Notes on Psychopathy
This is some old work I did for SI. See also Notes on the Psychology of Power.
Deviant but not necessarily diseased or dysfunctional minds can demonstrate resistance to all treatment and attempts to change their mind (think No Universally Compelling Arguments). The premier example is probably psychopaths: no drug treatments are at all useful, nor are there any therapies with solid evidence of even marginal effectiveness (one widely cited chapter, “Treatment of psychopathy: A review of empirical findings”, concludes that some attempted therapies merely made them more effective manipulators! We’ll look at that later.) While some psychopath traits bear resemblance to general characteristics of the powerful, they’re still a pretty unique group and worth looking at.
The main focus of my excerpts is on whether they are treatable, their effectiveness, possible evolutionary bases, and what other issues they have or don’t have which might lead one to not simply write them off as “broken” and of no relevance to AI.
(For example, if we were to discover that psychopaths were healthy human beings who were not universally mentally retarded or ineffective in gaining wealth/power and were destructive and amoral, despite being completely human and often socialized normally, then what does this say about the fragility of human values and how likely it is that an AI will just be nice to us?)
How to Avoid the Conflict Between Feminism and Evolutionary Psychology?
I don't mean to claim that there should be a conflict.
Most likely the conflict arises because of many things, such as: 1) women having been ostracized for much of our society's existence; 2) people failing at the is-ought problem and committing the Naturalistic Fallacy; 3) lots of media articles presenting unbelievably naïve evolutionary statements as scientific fact; 4) feminists as a group being defensive; 5) especially defensive when it comes to what is said to be natural; 6) general disregard by people, including politically engaged people (see The Blank Slate, by Steve Pinker), of the existence of a non-Tabula Rasa nature; 7) lack of patience of evolutionary psychologists to make peace and explain themselves regarding the things that journalists, not they, claimed; and others...
But the fact is, the conflict arose. It has only bad consequences as far as I could see, such as people fighting over each other, breaking friendships, and prejudice of great intensity on both sides.
How to avoid this conflict? Should someone write a treatise on Feminist Evolutionary Psychology? Should we get Leda Cosmides to talk about women's liberation?
There are obviously no incompatibilities between reality and the moral claims of feminism. So whichever facts about evolutionary psychology are found to be true as the science develops, they can be reconciled with those claims. Compatibilism is possible.
But will the scientific community pull it off?
Related: Pinker Versus Spelke - The Science of Gender and Science
http://www.edge.org/3rd_culture/debate05/debate05_index.html
David Buss and Cindy Meston - Why do Women Have Sex?
[Link] Contesting the “Nature” Of Conformity: What Milgram and Zimbardo's Studies Really Show
Here is a paper in PLOS Biology re-considering the lessons of some classic psychology experiments invoked here often (via).
Contesting the “Nature” Of Conformity: What Milgram and Zimbardo's Studies Really Show
To me the crux of the paper comes from this statement in the abstract:
This suggests that individuals' willingness to follow authorities is conditional on identification with the authority in question and an associated belief that the authority is right.
Plus this detail from the Milgram experiment:
Ultimately, they tend to go along with the Experimenter if he justifies their actions in terms of the scientific benefits of the study (as he does with the prod “The experiment requires that you continue”) [39]. But if he gives them a direct order (“You have no other choice, you must go on”) participants typically refuse. Once again, received wisdom proves questionable. The Milgram studies seem to be less about people blindly conforming to orders than about getting people to believe in the importance of what they are doing [40].
[LINK] Breaking the illusion of understanding
This writeup at Ars Technica about a recently published paper in the Journal of Consumer Research may be of interest. Super-brief summary:
- Consumers with higher scores on a cognitive reflection test are more inclined to buy products when told more about them; for consumers with lower CRT scores it's the reverse.
- Consumers with higher CRT scores felt that they understood the products better after being told more; consumers with lower CRT scores felt that they understood them worse.
- If subjects are asked to give an explanation of how products work and then asked how well they understand and how willing they'd be to pay, high-CR subjects don't change much in either but low-CR subjects report feeling that they understand worse and that they're willing to pay less.
- Conclusion: it looks as if when you give low-CR subjects more information about a product, they feel they understand it less, don't like that feeling, and become less willing to pay.
If this is right (which seems plausible enough) then it presumably applies more broadly: e.g., to what tactics are most effective in political debate. Though it's hardly news in that area that making people feel stupid isn't the best way to persuade them of things.
Abstract of the paper:
People differ in their threshold for satisfactory causal understanding and therefore in the type of explanation that will engender understanding and maximize the appeal of a novel product. Explanation fiends are dissatisfied with surface understanding and desire detailed mechanistic explanations of how products work. In contrast, explanation foes derive less understanding from detailed than coarse explanations and downgrade products that are explained in detail. Consumers’ attitude toward explanation is predicted by their tendency to deliberate, as measured by the cognitive reflection test. Cognitive reflection also predicts susceptibility to the illusion of explanatory depth, the unjustified belief that one understands how things work. When explanation foes attempt to explain, it exposes the illusion, which leads to a decrease in willingness to pay. In contrast, explanation fiends are willing to pay more after generating explanations. We hypothesize that those low in cognitive reflection are explanation foes because explanatory detail shatters their illusion of understanding.
Clarification: Behaviourism & Reinforcement
Disclaimer: The following is but a brief clarification on what the human brain does when one's behaviour is reinforced or punished. Thorough, exhaustive, and scholarly it is not.
Summary: Punishment, reinforcement, etc. of a behaviour creates an association in the mind of the affected party between the behaviour and the corresponding punishment, reinforcement, etc., the nature of which can only be known by the affected party. Take care when reinforcing or punishing others, as you may be effecting an unwanted association.
I've noticed the behaviourist concept of reinforcement thrown around a great deal on this site, and am worried a fair number of those who frequent it develop a misconception or are simply ignorant of how reinforcement affects humans' brains, and why it is practically effective.
In the interest of time, I'm not going to go into much detail on classical black-box behaviourism and behavioural neuroscience; Luke already covered how one can take advantage of positive reinforcement. Negative reinforcement and punishment are also important, but won't be covered here.
[LINK] Learning without practice, through fMRI induction
http://www.nsf.gov/news/news_summ.jsp?cntn_id=122523&org=NSF&from=news
From the article:
New research published today in the journal Science suggests it may be possible to use brain technology to learn to play a piano, reduce mental stress or hit a curve ball with little or no conscious effort. It's the kind of thing seen in Hollywood's "Matrix" franchise.
Think of a person watching a computer screen and having his or her brain patterns modified to match those of a high-performing athlete or modified to recuperate from an accident or disease. Though preliminary, researchers say such possibilities may exist in the future.
Experiments conducted at Boston University (BU) and ATR Computational Neuroscience Laboratories in Kyoto, Japan, recently demonstrated that through a person's visual cortex, researchers could use decoded functional magnetic resonance imaging (fMRI) to induce brain activity patterns to match a previously known target state and thereby improve performance on visual tasks.
EDIT: To clarify, this is almost certainly over-hyped. However, it appears to at least be an instance of very interesting biofeedback.
[Link] Inside the Cold, Calculating Mind of LessWrong?
An article from the Wall Street Journal. The original title might be slightly mind-killing for some people, but I found it moderately interesting, especially considering that many LessWrongers formed part of the data set for the study the article discusses, and that a large fraction of us identified as libertarian on the last survey.
Inside the Cold, Calculating Libertarian Mind
An individual's personality shapes his or her political ideology at least as much as circumstances, background and influences. That is the gist of a recent strand of psychological research identified especially with the work of Jonathan Haidt. The baffling (to liberals) fact that a large minority of working-class white people vote for conservative candidates is explained by psychological dispositions that override their narrow economic interests.
In his recent book "The Righteous Mind," Dr. Haidt confronted liberal bafflement and made the case that conservatives are motivated by morality just as liberals are, but also by a larger set of moral "tastes"—loyalty, authority and sanctity, in addition to the liberal tastes for compassion and fairness. Studies show that conservatives are more conscientious and sensitive to disgust but less tolerant of change; liberals are more empathic and open to new experiences.
But ideology does not have to be bipolar. It need not fall on a line from conservative to liberal. In a recently published paper, Ravi Iyer from the University of Southern California, together with Dr. Haidt and other researchers at the data-collection platform YourMorals.org, dissect the personalities of those who describe themselves as libertarian.
These are people who often call themselves economically conservative but socially liberal. They like free societies as well as free markets, and they want the government to get out of the bedroom as well as the boardroom. They don't see why, in order to get a small-government president, they have to vote for somebody who is keen on military spending and religion; or to get a tolerant and compassionate society they have to vote for a large and intrusive state.
The study collated the results of 16 personality surveys and experiments completed by nearly 12,000 self-identified libertarians who visited YourMorals.org. The researchers compared the libertarians to tens of thousands of self-identified liberals and conservatives. It was hardly surprising that the team found that libertarians strongly value liberty, especially the "negative liberty" of freedom from interference by others. Given the philosophy of their heroes, from John Locke and John Stuart Mill to Ayn Rand and Ron Paul, it also comes as no surprise that libertarians are also individualistic, stressing the right and the need for people to stand on their own two feet, rather than the duty of others, or government, to care for people.
Perhaps more intriguingly, when libertarians reacted to moral dilemmas and in other tests, they displayed less emotion, less empathy and less disgust than either conservatives or liberals. They appeared to use "cold" calculation to reach utilitarian conclusions about whether (for instance) to save lives by sacrificing fewer lives. They reached correct, rather than intuitive, answers to math and logic problems, and they enjoyed "effortful and thoughtful cognitive tasks" more than others do.
The researchers found that libertarians had the most "masculine" psychological profile, while liberals had the most feminine, and these results held up even when they examined each gender separately, which "may explain why libertarianism appeals to men more than women."
All Americans value liberty, but libertarians seem to value it more. For social conservatives, liberty is often a means to the end of rolling back the welfare state, with its lax morals and redistributive taxation, so liberty can be infringed in the bedroom. For liberals, liberty is a way to extend rights to groups perceived to be oppressed, so liberty can be infringed in the boardroom. But for libertarians, liberty is an end in itself, trumping all other moral values.
Dr. Iyer's conclusion is that libertarians are a distinct species—psychologically as well as politically.
A version of this article appeared September 29, 2012, on page C4 in the U.S. edition of The Wall Street Journal, with the headline: Inside the Cold, Calculating Libertarian Mind.
The original paper.
Understanding Libertarian Morality: The Psychological Roots of an Individualist Ideology
Abstract: Libertarians are an increasingly vocal ideological group in U.S. politics, yet they are understudied compared to liberals and conservatives. Much of what is known about libertarians is based on the writing of libertarian intellectuals and political leaders, rather than surveying libertarians in the general population. Across three studies, 15 measures, and a large web-based sample (N = 152,239), we sought to understand the morality of self-described libertarians. Based on an intuitionist view of moral judgment, we focused on the underlying affective and cognitive dispositions that accompany this unique worldview. We found that, compared to liberals and conservatives, libertarians show 1) stronger endorsement of individual liberty as their foremost guiding principle and correspondingly weaker endorsement of other moral principles, 2) a relatively cerebral as opposed to emotional intellectual style, and 3) lower interdependence and social relatedness. Our findings add to a growing recognition of the role of psychological predispositions in the organization of political attitudes.
[Link] Nobel laureate challenges psychologists to clean up their act
Nobel prize-winner Daniel Kahneman has issued a strongly worded call to one group of psychologists to restore the credibility of their field by creating a replication ring to check each other's results.
Kahneman, a psychologist at Princeton University in New Jersey, addressed his open e-mail to researchers who work on social priming, the study of how subtle cues can unconsciously influence our thoughts or behaviour. For example, volunteers might walk more slowly down a corridor after seeing words related to old age [1], or fare better in general-knowledge tests after writing down the attributes of a typical professor [2].
Introduction to Connectionist Modelling of Cognitive Processes: a chapter by chapter review
This chapter by chapter review was inspired by Vaniver's recent chapter by chapter review of Causality. As with that review, the intention is not so much to summarize as to help readers determine whether or not they should read the book. Reading the review is in no way a substitute for reading the book.
I first read Introduction to Connectionist Modelling of Cognitive Processes (ICMCP) as part of an undergraduate course on cognitive modelling. We were assigned one half of the book to read: I ended up reading every page. Recently I felt like I should read it again, so I bought a used copy off Amazon. That was money well spent: the book was just as good as I remembered.
By their nature, artificial neural networks (referred to as connectionist networks in the book) are a very mathy topic, and it would be easy to write a textbook that was nothing but formulas and very hard to understand. And while ICMCP also spends a lot of time talking about the math behind the various kinds of neural nets, it does its best to explain things as intuitively as possible, sticking to elementary mathematics and elaborating on the reasons why the equations are what they are. At this, it succeeds – it can be easily understood by someone knowing only high school math. I haven't personally studied ANNs at a more advanced level, but I would imagine that anybody who intended to do so would greatly benefit from the strong conceptual and historical understanding ICMCP provides.
The book also comes with a floppy disk containing a tlearn simulator which can be used to run various exercises given in the book. I haven't tried using this program, so I won't comment on it, nor on the exercises.
The book has 15 chapters, and it is divided into two sections: principles and applications.
Principles
1: “The basics of connectionist information processing” provides a general overview of how ANNs work. The chapter begins by providing a verbal summary of five assumptions of connectionist modelling: that 1) neurons integrate information, 2) neurons pass information about the level of their input, 3) brain structure is layered, 4) the influence of one neuron on another depends on the strength of the connection between them, and 5) learning is achieved by changing the strengths of connections between neurons. After this verbal introduction, the basic symbols and equations relating to ANNs are introduced simultaneously with an explanation of how the “neurons” in an ANN model work.
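To make those five assumptions concrete, here is a minimal sketch of a single connectionist unit; this is my own illustration rather than anything from the book (which uses the tlearn simulator for its exercises), and the inputs, targets, and learning rate are arbitrary.

```python
import math

# A single connectionist unit, illustrating the book's five assumptions:
# it integrates its inputs (1), passes on a graded activation level (2),
# influences downstream units in proportion to connection strength (4),
# and learns by changing those strengths (5).  Layering (3) would come
# from stacking such units.  The numbers below are illustrative only.

def activation(inputs, weights):
    """Integrate weighted inputs and squash them through a sigmoid."""
    net = sum(x * w for x, w in zip(inputs, weights))
    return 1.0 / (1.0 + math.exp(-net))

def delta_rule(inputs, weights, target, learning_rate=0.5):
    """Nudge each connection strength to reduce the output error."""
    output = activation(inputs, weights)
    error = target - output
    return [w + learning_rate * error * x for x, w in zip(inputs, weights)]

weights = [0.0, 0.0]
for _ in range(50):  # repeatedly train on a single input pattern
    weights = delta_rule([1.0, 0.0], weights, target=1.0)
print([round(w, 2) for w in weights], round(activation([1.0, 0.0], weights), 2))
```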