Less Wrong is a community blog devoted to refining the art of human rationality.

## [LINK] The Point of Life is the Explosion of Experience Into Ideas

-8 16 June 2013 01:34AM

*The Point of Life is the Explosion of Experience Into Ideas* is a philosophical article I wrote detailing why and how self-expression is the fundamental human freedom and the justification for suffering.

## To become more rational, rinse your left ear with cold water

3 29 May 2013 11:32PM

A recent paper in Cortex describes how caloric vestibular stimulation (CVS), i.e., rinsing of the ear canal with cold water, reduces unrealistic optimism. Here are some bits from the paper:

Participants were 31 healthy right-handed adults (15 men, 20–40 years)...

Participants were oriented in a supine position with the head inclined 30° from the horizontal and cold water (24 °C) was irrigated into the external auditory canal on one side (Fitzgerald and Hallpike, 1942). After both vestibular-evoked eye movements and vertigo had stopped, the procedure was repeated on the other side...

Participants were asked to estimate their own risk, relative to that of their peers (same age, sex and education), of contracting a series of illnesses. The risk rating scale ranged from −6 (lower risk) to +6 (higher risk). ... Each participant was tested in three conditions, with 5 min rest between each: baseline with no CI (always first), left-ear CI and right-ear CI (order counterbalanced). In the latter conditions risk-estimation was initiated after 30 sec of CI, when nystagmic response had built up. Ten illnesses were rated in each condition and the average risk estimate per condition (mean of 10 ratings) was calculated for each participant. The 30 illnesses used in this study (see Table 1) were selected from a larger pool of illnesses pre-rated by a separate group of 30 healthy participants.

Overall, our participants were unrealistically optimistic about their chances of contracting illnesses at baseline ... and during right-ear CI. ... Post-hoc tests using the Bonferroni correction revealed that, compared to baseline, average risk estimates were significantly higher during left-ear CI (p = .016), whereas they remained unchanged during right-ear CI (p = .476). Unrealistic optimism was thus reduced selectively during left-ear stimulation.

(CI stands for caloric irrigation which is how CVS was performed.)

It is not clear how close the participants came to being realistic in their estimates after CVS, but left-ear stimulation definitely shifted them toward pessimism, which is the right direction to go given numerous biases such as the planning fallacy.
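To see the shape of the paper's analysis, here is a minimal sketch of a within-subject (paired) comparison with a Bonferroni correction. The numbers are invented for illustration and are not the paper's data; the assumed shifts (a 0.8-point move toward pessimism under left-ear CI, no shift under right-ear CI) are my own stand-ins for the reported pattern.

```python
import math
import random

random.seed(0)
n = 31  # participants, as in the paper

# Invented per-participant mean risk ratings on the -6..+6 scale
# (negative = unrealistic optimism). NOT the paper's data.
baseline = [random.gauss(-1.5, 1.0) for _ in range(n)]
left_ci = [b + random.gauss(0.8, 0.5) for b in baseline]   # assumed shift
right_ci = [b + random.gauss(0.0, 0.5) for b in baseline]  # assumed no shift

def paired_t(xs, ys):
    """t statistic for a paired-samples t-test (df = n - 1)."""
    diffs = [x - y for x, y in zip(xs, ys)]
    mean = sum(diffs) / len(diffs)
    var = sum((d - mean) ** 2 for d in diffs) / (len(diffs) - 1)
    return mean / math.sqrt(var / len(diffs))

# Two post-hoc comparisons against baseline, so Bonferroni divides
# alpha = .05 by 2; for df = 30 the two-tailed critical t is roughly 2.36.
T_CRIT = 2.36
for name, cond in [("left-ear CI", left_ci), ("right-ear CI", right_ci)]:
    t = paired_t(cond, baseline)
    print(f"{name}: t = {t:.2f}, significant = {abs(t) > T_CRIT}")
```

The Bonferroni step is the key design detail: because two comparisons are made against the same baseline, each must clear a stricter threshold to keep the family-wise error rate at .05.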

The paper:

# Vestibular stimulation attenuates unrealistic optimism

• Ryan McKay
• Corinne Tamagni
• Antonella Palla
• Peter Krummenacher
• Stefan C.A. Hegemann
• Dominik Straumann
• Peter Brugger

(paywalled, but a pre-publication version is available)

## [link] Are All Dictator Game Results Artifacts?

15 23 May 2013 07:08AM

http://www.epjournal.net/blog/2013/05/are-all-dictator-game-results-artifacts/

You walk into a laboratory, and you read a set of instructions that tell you that your task is to decide how much of a $10 pie you want to give to an anonymous other person who signed up for the experimental session. This describes, more or less, the Dictator Game, a staple of behavioral economics with a history dating back more than a quarter of a century. The Dictator Game (DG) might not be the Drosophila melanogaster of behavioral economics – the Prisoner's Dilemma can lay plausible claim to that prized analogy – but it could reasonably aspire to an only slightly more modest title, perhaps the E. coli of the discipline. Since the original work, more than 20,000 observations in the DG have been reported.

[...] How much would participants in a Dictator Game give to the other person if they did not know they were in a Dictator Game study? Simply following me around during the day and recording how much cash I dispense won't answer this question because in the DG, the money is provided by the experimenter. So, to build a parallel design, the method used must move money to subjects as a windfall so that we can observe how much of this "house money" they choose to give away.

And that is what Winking and Mizer did in a paper now in press and available online (paywall) in Evolution and Human Behavior, using participants, fittingly enough, in Las Vegas. Here's what they did. Two confederates were needed. The first, destined to become the "recipient," was occupied on a phone call near a bus stop in Vegas. The second confederate approached lone individuals at the bus stop, indicated that they were late for a ride to the airport, and asked the subject if they wanted the $20 in casino chips still in the confederate's possession, scamming people into, rather than out of money, in sharp contradiction of the deep traditions of Las Vegas. The question was how many chips the fortunate subject transferred to the nearby confederate.

[...]

In a second condition, the confederate with the chips added a comment to the effect that the subject could "split it with that guy however you want," indicating the first confederate. This condition brings the study a bit closer, but not much closer, to lab conditions. In a third condition, subjects were asked if they wanted to participate in a study, and then did so along the lines of the usual DG, making the treatment considerably closer to traditional lab-based conditions.

The difference between the first two treatments and the third treatment is interesting, but, as I said at the beginning, the DG should be thought of as a measuring tool. Figure 1 shows how many chips people give away in the DG in the three treatments. In conditions 1 and 2, the number of people (out of 60) who gave at least one chip to the second confederate was… zero. To the extent you think that this method answers the question of how much Dictator Game giving is due to people knowing they're in an experiment, the answer is, "all of it."

Link to paper (paywalled).

## Three more ways identity can be a curse

39 28 April 2013 02:53AM

The Buddhists believe that one of the three keys to attaining true happiness is dissolving the illusion of the self. (The other two are dissolving the illusion of permanence, and ceasing the desire that leads to suffering.) I'm not sure exactly what it means to say "the self is an illusion", or how grasping that leads to enlightenment, but I do think one can easily take the first step on this long journey to happiness by beginning to dissolve the sense of one's identity.

Previously, in "Keep Your Identity Small", Paul Graham showed how a strong sense of identity can lead to epistemic irrationality, when someone refuses to accept evidence against x because "someone who believes x" is part of his or her identity. And in Kaj Sotala's "The Curse of Identity", he illustrated a human tendency to reinterpret a goal of "do x" as "give the impression of being someone who does x". These are both fantastic posts, and you should read them if you haven't already.

Here are three more ways in which identity can be a curse.

1. Don't be afraid to change

James March, professor of political science at Stanford University, says that when people make choices, they tend to use one of two basic models of decision making: the consequences model, or the identity model. In the consequences model, we weigh the costs and benefits of our options and make the choice that maximizes our satisfaction. In the identity model, we ask ourselves "What would a person like me do in this situation?"1

The author of the book I read this in didn't seem to take the obvious next step and acknowledge that the consequences model is clearly The Correct Way to Make Decisions: basically by definition, if you're using the identity model and it's giving you a different result than the consequences model would, you're being led astray. A heuristic I like to use is to limit my identity to the "observer" part of my brain, and make my only goal maximizing the amount of happiness and pleasure the observer experiences, and minimizing the amount of misfortune and pain. It sounds obvious when you lay it out in these terms, but let me give an example.

Alice is an incoming freshman in college trying to choose her major. At Hypothetical University, there are only two majors: English and business. Alice absolutely adores literature, and thinks business is dreadfully boring. Becoming an English major would allow her to have a career working with something she's passionate about, which is worth 2 megautilons to her, but it would also make her poor (0 mu). Becoming a business major would mean working in a field she is not passionate about (0 mu), but it would also make her rich, which is worth 1 megautilon. So English, with 2 mu, wins out over business, with 1 mu.

However, Alice is very bright, and is the type of person who can adapt herself to many situations and learn skills quickly. If Alice were to spend the first six months of college deeply immersing herself in studying business, she would probably start developing a passion for business. If she purposefully exposed herself to certain pro-business memeplexes (e.g. watched a movie glamorizing the life of Wall Street bankers), she could speed up this process even further. After a few years of taking business classes, she would probably begin to forget what it was about English literature that appealed to her, and be extremely grateful that she made the decision she did. Therefore she would gain the same 2 mu from having a job she is passionate about, along with an additional 1 mu from being rich, meaning that the 3 mu choice of business wins out over the 2 mu choice of English.

However, the possibility of self-modifying to becoming someone who finds English literature boring and business interesting is very disturbing to Alice. She sees it as a betrayal of everything that she is, even though she's actually only been interested in English literature for a few years. Perhaps she thinks of choosing business as "selling out" or "giving in". Therefore she decides to major in English, and takes the 2 mu choice instead of the superior 3 mu.

(Obviously this is a hypothetical example/oversimplification and there are a lot of reasons why it might be rational to pursue a career path that doesn't make very much money.)
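The toy arithmetic above can be written out explicitly. This sketch is my own framing of the example, with the megautilon values taken directly from the story:

```python
# Toy utilities from the Alice example, in megautilons (mu).
PASSION = 2  # a career in a field she is passionate about
WEALTH = 1   # being rich

def utility(passionate: bool, rich: bool) -> int:
    """Total mu for a career outcome."""
    return (PASSION if passionate else 0) + (WEALTH if rich else 0)

english = utility(passionate=True, rich=False)           # 2 mu
business_now = utility(passionate=False, rich=True)      # 1 mu
# Self-modification path: cultivate a passion for business first,
# then collect both the passion and the wealth payoffs.
business_modified = utility(passionate=True, rich=True)  # 3 mu

assert english > business_now       # why English looks best naively
assert business_modified > english  # why self-modification wins
print(english, business_now, business_modified)  # → 2 1 3
```

The point of spelling it out is that the identity model never even evaluates the third option; only the consequences model puts `business_modified` on the table.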

It seems to me like human beings have a bizarre tendency to want to keep certain attributes and character traits stagnant, even when doing so provides no advantage, or is actively harmful. In a world where business-passionate people systematically do better than English-passionate people, it makes sense to self-modify to become business-passionate. Yet this is often distasteful.

For example, until a few weeks ago when I started solidifying this thinking pattern, I had an extremely adverse reaction to the idea of ceasing to be a hip-hop fan and becoming a fan of more "sophisticated" musical genres like jazz and classical, eventually coming to look down on the music I currently listen to as primitive or silly. This doesn't really make sense - I'm sure if I were to become a jazz and classical fan I would enjoy those genres at least as much as I currently enjoy hip hop. And yet I had a very strong preference to remain the same, even in the trivial realm of music taste.

Probably the most extreme example is the common tendency for depressed people to not actually want to get better, because depression has become such a core part of their identity that the idea of becoming a healthy, happy person is disturbing to them. (I used to struggle with this myself, in fact.) Being depressed is probably the most obviously harmful characteristic that someone can have, and yet many people resist self-modification.

Of course, the obvious objection is there's no way to rationally object to people's preferences - if someone truly prioritizes keeping their identity stagnant over not being depressed then there's no way to tell them they're wrong, just like if someone prioritizes paperclips over happiness there's no way to tell them they're wrong. But if you're like me, and you are interested in being happy, then I recommend looking out for this cognitive bias.

The other objection is that this philosophy leads to extremely unsavory wireheading-esque scenarios if you take it to its logical conclusion. But holding the opposite belief - that it's always more important to keep your characteristics stagnant than to be happy - clearly leads to even more absurd conclusions. So there is probably some point on the spectrum where change is so distasteful that it's not worth a boost in happiness (e.g. a lobotomy or something similar). However, I think that in actual practical pre-Singularity life, most people set this point far, far too low.

2. The hidden meaning of "be yourself"

(This section is entirely my own speculation, so take it as you will.)

"Be yourself" is probably the most widely-repeated piece of social skills advice despite being pretty clearly useless - if it worked then no one would be socially awkward, because everyone has heard this advice.

However, there must be some core grain of truth in this statement, or else it wouldn't be so widely repeated. I think that core grain is basically the point I just made, applied to social interaction: always optimize for social success and positive relationships (particularly in the moment), not for signalling a certain identity.

The ostensible purpose of identity/signalling is to appear to be a certain type of person, so that people will like and respect you, which is in turn so that people will want to be around you and be more likely to do stuff for you. However, oftentimes this goes horribly wrong, and people become very devoted to cultivating certain identities that are actively harmful for this purpose, e.g. goth, juggalo, "cool reserved aloof loner", guy that won't shut up about politics, etc. A more subtle example is Fred, who holds the wall and refuses to dance at a nightclub because he is a serious, dignified sort of guy, and doesn't want to look silly. However, the reason "looking silly" is generally a bad thing is that it makes people lose respect for you, and therefore makes them less likely to associate with you. In the situation Fred is in, holding the wall and looking serious will cause no one to associate with him, but if he dances and mingles with strangers and looks silly, people will be likely to associate with him. So unless he's afraid of looking silly in the eyes of God, this seems to be irrational.

Probably more common is the tendency to take great pains to cultivate identities that are neither harmful nor beneficial. E.g. "deep philosophical thinker", "Grateful Dead fan", "tough guy", "nature lover", "rationalist", etc. Boring Bob is a guy who wears a blue polo shirt and khakis every day, works as hard as expected but no harder in his job as an accountant, holds no political views, and when he goes home he relaxes by watching whatever's on TV and reading the paper. Boring Bob would probably improve his chances of social success by cultivating a more interesting identity, perhaps by changing his wardrobe, hobbies, and viewpoints, and then liberally signalling this new identity. However, most of us are not Boring Bob, and a much better social success strategy for most of us is probably to smile more, improve our posture and body language, be more open and accepting of other people, learn how to make better small talk, etc. But most people fail to realize this and instead play elaborate signalling games in order to improve their status, sometimes even at the expense of lots of time and money.

Some ways by which people can fail to "be themselves" in individual social interactions: liberally sprinkle references to certain attributes that they want to emphasize, say nonsensical and surreal things in order to seem quirky, be afraid to give obvious responses to questions in order to seem more interesting, insert forced "cool" actions into their mannerisms, act underwhelmed by what the other person is saying in order to seem jaded and superior, etc. Whereas someone who is "being herself" is more interested in creating rapport with the other person than giving off a certain impression of herself.

Additionally, optimizing for a particular identity might not only be counterproductive - it might actually be a quick way to get people to despise you.

I used to not understand why certain "types" of people, such as "hipsters"2 or Ed Hardy and Affliction-wearing "douchebags" are so universally loathed (especially on the internet). Yes, these people are adopting certain styles in order to be cool and interesting, but isn't everyone doing the same? No one looks through their wardrobe and says "hmm, I'll wear this sweater because it makes me uncool, and it'll make people not like me". Perhaps hipsters and Ed Hardy Guys fail in their mission to be cool, but should we really hate them for this? If being a hipster was cool two years ago, and being someone who wears normal clothes, acts normal, and doesn't do anything "ironically" is cool today, then we're really just hating people for failing to keep up with the trends. And if being a hipster actually is cool, then, well, who can fault them for choosing to be one?

That was my old thought process. Now it is clear to me that what makes hipsters and Ed Hardy Guys hated is that they aren't "being themselves" - they are much more interested in cultivating an identity of interestingness and masculinity, respectively, than connecting with other people. The same thing goes for pretty much every other collectively hated stereotype I can think of3 - people who loudly express political opinions, stoners who won't stop talking about smoking weed, attention seeking teenage girls on facebook, extremely flamboyantly gay guys, "weeaboos", hippies and new age types, 2005 "emo kids", overly politically correct people, tumblr SJA weirdos who identify as otherkin and whatnot, overly patriotic "rednecks", the list goes on and on.

This also clears up a confusion that occurred to me when reading How to Win Friends and Influence People. I know people who have a Dale Carnegie mindset of being optimistic and nice to everyone they meet and are adored for it, but I also know people who have the same attitude and yet are considered irritatingly saccharine and would probably do better to "keep it real" a little. So what's the difference? I think the difference is that the former group are genuinely interested in being nice to people and building rapport, while members of the second group have made an error like the one described in Kaj Sotala's post and are merely trying to give off the impression of being a nice and friendly person. The distinction is obviously very subtle, but it's one that humans are apparently very good at perceiving.

I'm not exactly sure what it is that causes humans to have this tendency of hating people who are clearly optimizing for identity - it's not as if they harm anyone. It probably has to do with tribal status. But what is clear is that you should definitely not be one of them.

3. The worst mistake you can possibly make in combating akrasia

The main thesis of PJ Eby's Thinking Things Done is that the primary reason people are incapable of being productive is that they use negative motivation ("if I don't do x, some negative y will happen") as opposed to positive motivation ("if I do x, some positive y will happen"). He has the following evo-psych explanation for this: in the ancestral environment, personal failure meant that you could possibly be kicked out of your tribe, which would be fatal. A lot of depressed people make statements like "I'm worthless", or "I'm scum" or "No one could ever love me", which are illogically dramatic and overly black and white, until you realize that these statements are merely interpretations of a feeling of "I'm about to get kicked out of the tribe, and therefore die." Animals have a freezing response to imminent death, so if you fear failure you will go into do-nothing mode and not be able to work at all.4

In Succeed: How We Can Reach Our Goals, psychologist Heidi Halvorson, PhD, takes a different view and describes positive motivation and negative motivation as each having pros and cons. However, she has her own dichotomy of Good Motivation and Bad Motivation: "Be good" goals are performance goals, directed at achieving a particular outcome, like getting an A on a test, reaching a sales target, getting your attractive neighbor to go out with you, or getting into law school. They are very often tied closely to a sense of self-worth. "Get better" goals are mastery goals; people who pick these goals judge themselves instead in terms of the progress they are making, asking questions like "Am I improving? Am I learning? Am I moving forward at a good pace?" Halvorson argues that "get better" goals are almost always drastically better than "be good" goals5. An example quote (from page 60) is:

When my goal is to get an A in a class and prove that I'm smart, and I take the first exam and I don't get an A... well, then I really can't help but think that maybe I'm not so smart, right? Concluding "maybe I'm not smart" has several consequences and none of them are good. First, I'm going to feel terrible - probably anxious and depressed, possibly embarrassed or ashamed. My sense of self-worth and self-esteem are going to suffer. My confidence will be shaken, if not completely shattered. And if I'm not smart enough, there's really no point in continuing to try to do well, so I'll probably just give up and not bother working so hard on the remaining exams.

And finally, in Feeling Good: The New Mood Therapy, David Burns describes a destructive side effect of depression he calls "do-nothingism":

One of the most destructive aspects of depression is the way it paralyzes your willpower. In its mildest form you may simply procrastinate about doing a few odious chores. As your lack of motivation increases, virtually any activity appears so difficult that you become overwhelmed by the urge to do nothing. Because you accomplish very little, you feel worse and worse. Not only do you cut yourself off from your normal sources of stimulation and pleasure, but your lack of productivity aggravates your self-hatred, resulting in further isolation and incapacitation.

Synthesizing these three pieces of information leads me to believe that the worst thing you can possibly do for your akrasia is to tie your success and productivity to your sense of identity/self-worth, especially if you're using negative motivation to do so, and especially if you suffer or have recently suffered from depression or low-self esteem. The thought of having a negative self-image is scary and unpleasant, perhaps for the evo-psych reasons PJ Eby outlines. If you tie your productivity to your fear of a negative self-image, working will become scary and unpleasant as well, and you won't want to do it.

I feel like this might be the single number one reason why people are akratic. It might be a little premature to say that, and I might be biased by how large of a factor this mistake was in my own akrasia. But unfortunately, this trap seems like a very easy one to fall into. If you're someone who is lazy and isn't accomplishing much in life, perhaps depressed, then it makes intuitive sense to motivate yourself by saying "Come on, self! Do you want to be a useless failure in life? No? Well get going then!" But doing so will accomplish the exact opposite and make you feel miserable.

So there you have it. In addition to making you a bad rationalist and causing you to lose sight of your goals, a strong sense of identity will cause you to make poor decisions that lead to unhappiness, unpopularity, and failure. I think the Buddhists were onto something with this one, personally, and I try to limit my sense of identity as much as possible. One trick you can use, in addition to the "be the observer" trick I mentioned: whenever you find yourself thinking in identity terms, swap out that identity for the identity of "person who takes over the world by transcending the need for a sense of identity".

This is my first LessWrong discussion post, so constructive criticism is greatly appreciated. Was this informative? Or was what I said obvious, and I'm retreading old ground? Was this well written? Should this have been posted to Main? Should this not have been posted at all? Thank you.

1. Paraphrased from page 153 of Switch: How to Change When Change is Hard

2. Actually, while it works for this example, I think the stereotypical "hipster" is a bizarre caricature that doesn't match anyone who actually exists in real life, and the degree to which people will rabidly espouse hatred for this stereotypical figure (or used to two or three years ago) is one of the most bizarre tendencies people have.

3. Other than groups that arguably hurt people (religious fundamentalists, PUAs), the only exception I can think of is frat boy/jock types. They talk about drinking and partying a lot, sure, but not really any more than people who drink and party a lot would be expected to. Possibilities for their hated status include that they do in fact engage in obnoxious signalling and I'm not aware of it, jealousy, or stigmatization as hazers and date rapists. Also, a lot of people hate stereotypical "ghetto" black people who sag their jeans and notoriously type in a broken, difficult-to-read form of English. This could either be a weak example of the trend (I'm not really sure what it is they would be signalling, maybe dangerous-ness?), or just a manifestation of racism.

4. I'm not sure if this is valid science that he pulled from some other source, or if he just made this up.

5. The exception is that "be good" goals can lead to a very high level of performance when the task is easy.

## [Link] False memories of fabricated political events

16 10 February 2013 10:25PM

Another one for the memory-is-really-unreliable file. Some researchers at UC Irvine (one of them is Elizabeth Loftus, whose name I've seen attached to other fake-memory studies) asked about 5000 subjects about their recollection of four political events. One of the political events never actually happened. About half the subjects said they remembered the fake event. Subjects were more likely to pseudo-remember events congruent with their political preferences (e.g., Bush or Obama doing something embarrassing).

The subjects were recruited from the readership of Slate, which unsurprisingly means they aren't a very representative sample of the US population (never mind the rest of the world). In particular, about 5% identified as conservative and about 60% as progressive.

Each real event was remembered by 90-98% of subjects. Self-identified conservatives remembered the real events a little less well. Self-identified progressives were much more likely to "remember" a fake event in which G W Bush took a vacation in Texas while Hurricane Katrina was devastating New Orleans. Self-identified conservatives were somewhat more likely to "remember" a fake event in which Barack Obama shook the hand of Mahmoud Ahmedinejad.

About half of the subjects who "remembered" fake events were unable to identify the fake event correctly when they were told that one of the events in the study was fake.

## [Link] Social Psychology & Priming: Art Wears Off

2 06 February 2013 10:08AM

Related to: Power of Suggestion

### Social Psychology & Priming: Art Wears Off

by Steve Sailer

One of the most popular social psychology studies of the Malcolm Gladwell Era has been Yale professor John Bargh's paper on how you can "prime" students to walk more slowly by first having them do word puzzles that contain a hidden theme of old age by the inclusion of words like "wrinkle" and "bingo." The primed subjects then took one second longer on average to walk down the hall than the unprimed control group. Isn't that amazing! (Here's Gladwell's description of Bargh's famous experiment in his 2005 bestseller Blink.)

This finding has electrified the Airport Book industry for years: Science proves you can manipulate people into doing what you want them to! Why you'd want college students to walk slower is unexplained, but that's not the point. The point is that Science proves that people are manipulable.

Now, a large fraction of the buyers of Airport Books like Blink are marketing and advertising professionals, who are paid handsomely to manipulate people, and to manipulate them into not just walking slower, but into shelling out real money to buy the clients' products.

Moreover, everybody notices that entertainment can prime you in various ways. For instance, well-made movies prime how I walk down the street afterwards. For two nights after seeing the Coen Brothers' No Country for Old Men, I walked the quiet streets swiveling my head, half-certain that an unstoppable killing machine was tailing me. When I came out of Christopher Nolan's amnesia thriller Memento, I was convinced I'd never remember where I parked my car. (As it turned out, I quickly found my car. Why? Because I needed to. But it was fun for thirty seconds to act like, and maybe even believe, that the movie had primed me into amnesia.)

Now, you could say, "That's art, not marketing," but the distinction isn't that obvious to talented directors. Not surprisingly, directors between feature projects often tide themselves over directing commercials. For example, Ridley Scott made Blade Runner in 1982 and then the landmark 1984 ad introducing the Apple Mac at the 1984 Super Bowl.

So, in an industry in which it's possible, if you have a big enough budget, to hire Sir Ridley to direct your next TV commercial, why the fascination with Bargh's dopey little experiment?

One reason is that there's a lot of uncertainty in the marketing and advertising game. Nineteenth Century department store mogul John Wanamaker famously said that half his advertising budget was wasted, he just didn't know which half.

Worse, things change. A TV commercial that excited viewers a few years ago often strikes them as dull and unfashionable today. Today, Scott's 1984 ad might remind people subliminally, from picking up on certain stylistic commonalities, of how dopey Scott's Prometheus was last summer, or how lame the Wachowski Siblings' 1984-imitation V for Vendetta was, and Apple doesn't need their computers associated with that stuff.

Naturally, social psychologists want to get in on a little of the big money action of marketing. Gladwell makes a bundle speaking to sales conventions, and maybe they can get some gigs themselves. And even if their motivations are wholly academic, it's nice to have your brother-in-law, the one who makes so much more money than you do doing something boring in the corporate world, excitedly forward you an article he read that mentions your work.

("Priming" theory is also the basis for the beloved concept of "stereotype threat," which seems to offer a simple way to close those pesky Gaps that beset society: just get everybody to stop noticing stereotypes, and the Gaps will go away!)

But why do the marketers love hearing about these weak tea little academic experiments, even though they do much more powerful priming on the job? I suspect one reason is because these studies are classified as Science, and Science is permanent. As some egghead in Europe pointed out, Science is Replicable. Once the principles of Scientific Manipulation are uncovered, then they can just do their marketing jobs on autopilot. No more need to worry about trends and fads.

But, how replicable are these priming experiments?

He then comments on and extensively quotes the Chronicle of Higher Education piece Power of Suggestion by Tom Bartlett, which I linked to at the start of my post. I'm skipping that to jump to the novel part of Steve's post.

Okay, but I've never seen this explanation offered: successful priming studies stop replicating after a while because they basically aren't science. At least not in the sense of having discovered something that will work forever.

Instead, to the extent that they ever did really work, they are exercises in marketing. Or, to be generous, art.

And, art wears off.

The power of a work of art to prime emotions and actions changes over time. Perhaps, initially, the audience isn't ready for it, then it begins to impact a few sensitive fellow artists, and they begin to create other works in its manner and talk it up, and then it becomes widely popular. Over time, though, boredom sets in and people look for new priming stimuli.

For a lucky few old art works (e.g., the great Impressionist paintings), vast networks exist to market them by helping audiences get back into the proper mindset to appreciate the old art (E.g., "Monet was a rebel, up against The Establishment! So, putting this pretty picture of flowers up on your wall shows everybody that you are an edgy outsider, too!").

So, let's assume for a moment that Bargh's success in the early 1990s at getting college students to walk slow wasn't just fraud or data mining for a random effect among many effects. He really was priming early 1990s college students into walking slow for a few seconds.

Is that so amazing?

Other artists and marketers in the early 1990s were priming sizable numbers of college students into wearing flannel lumberjack shirts or dancing the Macarena or voting for Ross Perot, all of which seem, from the perspective of 2013, a lot more amazing.

Overall, it's really not that hard to prime young people to do things. They are always looking around for clues about what's cool to do.

But it's hard to keep them doing the same thing over and over. The Macarena isn't cool anymore, so it would be harder to replicate today an event in which young people are successfully primed to do the Macarena.

So, in the best case scenario, priming isn't science, it's art or marketing.

Interesting hypothesis.

## [Link] Power of Suggestion

24 06 February 2013 10:04AM

I recommend reading the piece, but below are some excerpts and commentary.

## Power of Suggestion

By Tom Bartlett

...

Along with personal upheaval, including a lengthy child-custody battle, [Yale social psychologist John Bargh] has coped with what amounts to an assault on his life's work, the research that pushed him into prominence, the studies that Malcolm Gladwell called "fascinating" and Daniel Kahneman deemed "classic."

What was once widely praised is now being pilloried in some quarters as emblematic of the shoddiness and shallowness of social psychology. When Bargh responded to one such salvo with a couple of sarcastic blog posts, he was ridiculed as going on a "one-man rampage." He took the posts down and regrets writing them, but his frustration and sadness at how he's been treated remain.

Psychology may be simultaneously at the highest and lowest point in its history. Right now its niftiest findings are routinely simplified and repackaged for a mass audience; if you wish to publish a best seller sans bloodsucking or light bondage, you would be well advised to match a few dozen psychological papers with relatable anecdotes and a grabby, one-word title. That isn't true across the board. ... But a social psychologist with a sexy theory has star potential. In the last decade or so, researchers have made astonishing discoveries about the role of consciousness, the reasons for human behavior, the motivations for why we do what we do. This stuff is anything but incremental.

At the same time, psychology has been beset with scandal and doubt. Formerly high-flying researchers like Diederik Stapel, Marc Hauser, and Dirk Smeesters saw their careers implode after allegations that they had cooked their results and managed to slip them past the supposedly watchful eyes of peer reviewers.

Psychology isn't the only field with fakers, but it has its share. Plus there's the so-called file-drawer problem, that is, the tendency for researchers to publish their singular successes and ignore their multiple failures, making a fluke look like a breakthrough. Fairly or not, social psychologists are perceived to be less rigorous in their methods, generally not replicating their own or one another's work, instead pressing on toward the next headline-making outcome.

Much of the criticism has been directed at priming. The definitions get dicey here because the term can refer to a range of phenomena, some of which are grounded in decades of solid evidence—like the "anchoring effect," which happens, for instance, when a store lists a competitor's inflated price next to its own to make you think you're getting a bargain. That works. The studies that raise eyebrows are mostly in an area known as behavioral or goal priming, research that demonstrates how subliminal prompts can make you do all manner of crazy things. A warm mug makes you friendlier. The American flag makes you vote Republican. Fast-food logos make you impatient.

A small group of skeptical psychologists—let's call them the Replicators—have been trying to reproduce some of the most popular priming effects in their own labs.

What have they found? Mostly that they can't get those results. The studies don't check out. Something is wrong. And because he is undoubtedly the biggest name in the field, the Replicators have paid special attention to John Bargh and the study that started it all.

... When the walking times of the two groups were compared, the Florida-knits-alone subjects walked, on average, more slowly than the control group. Words on a page made them act old.

It's a cute finding. But the more you think about it, the more serious it starts to seem. What if we are constantly being influenced by subtle, unnoticed cues? If "Florida" makes you sluggish, could "cheetah" make you fleet of foot? Forget walking speeds. Is our environment making us meaner or more creative or stupider without our realizing it? We like to think we're steering the ship of self, but what if we're actually getting blown about by ghostly gusts?

Steve Sailer comments on this:

Advertisers, from John Wanamaker onward, sure as heck hope they are blowing you about by ghostly gusts.

Not only advertisers, the industry he worked in, but indeed our little community probably loves any results confirming such a picture. We need to be careful about that. Bartlett continues:

John Bargh and his co-authors, Mark Chen and Lara Burrows, performed that experiment in 1990 or 1991. They didn't publish it until 1996. Why sit on such a fascinating result? For starters, they wanted to do it again, which they did. They also wanted to perform similar experiments with different cues. One of those other experiments tested subjects to see if they were more hostile when primed with an African-American face. They were. (The subjects were not African-American.) In the other experiment, the subjects were primed with rude words to see if that would make them more likely to interrupt a conversation. It did.

The researchers waited to publish until other labs had found the same type of results. They knew their finding would be controversial. They knew many people wouldn't believe it. They were willing to stick their necks out, but they didn't want to be the only ones.

Since that study was published in the Journal of Personality and Social Psychology, it has been cited more than 2,000 times. Though other researchers did similar work at around the same time, and even before, it was that paper that sparked the priming era. Its authors knew, even before it was published, that the paper was likely to catch fire. They wrote: "The implications for many social psychological phenomena ... would appear to be considerable." Translation: This is a huge deal.

...

The last year has been tough for Bargh. Professionally, the nadir probably came in January, when a failed replication of the famous elderly-walking study was published in the journal PLoS ONE. It was not the first failed replication, but this one stung. In the experiment, the researchers had tried to mirror Bargh's methods with an important exception: Rather than stopwatches, they used automatic timing devices with infrared sensors to eliminate any potential bias. The words didn't make subjects act old. They tried the experiment again with stopwatches and added a twist: They told those operating the stopwatches which subjects were expected to walk slowly. Then it worked. The title of their paper tells the story: "Behavioral Priming: It's All in the Mind, but Whose Mind?"

The paper annoyed Bargh. He thought the researchers didn't faithfully follow his methods section, despite their claims that they did. But what really set him off was a blog post that explained the results. The post, on the blog Not Exactly Rocket Science, compared what happened in the experiment to the notorious case of Clever Hans, the horse that could supposedly count. It was thought that Hans was a whiz with figures, stomping a hoof in response to mathematical queries. In reality, the horse was picking up on body language from its handler. Bargh was the deluded horse handler in this scenario. That didn't sit well with him. If the PLoS ONE paper is correct, the significance of his experiment largely dissipates. What's more, he looks like a fool, tricked by a fairly obvious flaw in the setup.

...

Pashler, a professor of psychology at the University of California at San Diego, is the most prolific of the Replicators. He started trying priming experiments about four years ago because, he says, "I wanted to see these effects for myself." That's a diplomatic way of saying he thought they were fishy. He's tried more than a dozen so far, including the elderly-walking study. He's never been able to achieve the same results. Not once.

This fall, Daniel Kahneman, the Nobel Prize-winning psychologist, sent an e-mail to a small group of psychologists, including Bargh, warning of a "train wreck looming" in the field because of doubts surrounding priming research. He was blunt: "I believe that you should collectively do something about this mess. To deal effectively with the doubts you should acknowledge their existence and confront them straight on, because a posture of defiant denial is self-defeating," he wrote.

Strongly worded e-mails from Nobel laureates tend to get noticed, and this one did. He sent it after conversations with Bargh about the relentless attacks on priming research. Kahneman cast himself as a mediator, a sort of senior statesman, endeavoring to bring together believers and skeptics. He does have a dog in the fight, though: Kahneman believes in these effects and has written admiringly of Bargh, including in his best seller Thinking, Fast and Slow.

On the heels of that message from on high, an e-mail dialogue began between the two camps. The vibe was more conciliatory than what you hear when researchers are speaking off the cuff and off the record. There was talk of the type of collaboration that Kahneman had floated, researchers from opposing sides combining their efforts in the name of truth. It was very civil, and it didn't lead anywhere.

In one of those e-mails, Pashler issued a challenge masquerading as a gentle query: "Would you be able to suggest one or two goal priming effects that you think are especially strong and robust, even if they are not particularly well-known?" In other words, put up or shut up. Point me to the stuff you're certain of and I'll try to replicate it. This was intended to counter the charge that he and others were cherry-picking the weakest work and then doing a victory dance after demolishing it. He didn't get the straightforward answer he wanted. "Some suggestions emerged but none were pointing to a concrete example," he says.

One possible explanation for why these studies continually and bewilderingly fail to replicate is that they have hidden moderators, sensitive conditions that make them a challenge to pull off. Pashler argues that the studies never suggest that. He wrote in that same e-mail: "So from our reading of the literature, it is not clear why the results should be subtle or fragile."

Bargh contends that we know more about these effects than we did in the 1990s, that they're more complicated than researchers had originally assumed. That's not a problem, it's progress. And if you aren't familiar with the literature in social psychology, with the numerous experiments that have modified and sharpened those early conclusions, you're unlikely to successfully replicate them. Then you will trot out your failure as evidence that the study is bogus when really what you've proved is that you're no good at social psychology.

Pashler can't quite disguise his disdain for such a defense. "That doesn't make sense to me," he says. "You published it. That must mean you think it is a repeatable piece of work. Why can't we do it just the way you did it?"

That's how David Shanks sees things. He, too, has been trying to replicate well-known priming studies, and he, too, has been unable to do so. In a forthcoming paper, Shanks, a professor of psychology at University College London, recounts his and his several co-authors' attempts to replicate one of the most intriguing effects, the so-called professor prime. In the study, one group was told to imagine a professor's life and then list the traits it brought to mind. Another group was told to do the same except with a soccer hooligan rather than a professor.

The groups were then asked questions selected from the board game Trivial Pursuit, questions like "Who painted 'Guernica'?" and "What is the capital of Bangladesh?" (Picasso and Dhaka, for those playing at home.) Their scores were then tallied. The subjects who imagined the professor scored above a control group that wasn't primed. The subjects who imagined soccer hooligans scored below the professor group and below the control. Thinking about a professor makes you smart while thinking about a hooligan makes you dumb. The study has been replicated a number of times, including once on Dutch television.

Shanks can't get the result. And, boy, has he tried. Not once or twice, but nine times.

The skepticism about priming, says Shanks, isn't limited to those who have committed themselves to reperforming these experiments. It's not only the Replicators. "I think more people in academic psychology than you would imagine appreciate the historical implausibility of these findings, and it's just that those are the opinions that they have over the water fountain," he says. "They're not the opinions that get into the journalism."

Like all the skeptics I spoke with, Shanks believes the worst is yet to come for priming, predicting that "over the next two or three years you're going to see an avalanche of failed replications published." The avalanche may come sooner than that. There are failed replications in press at the moment and many more that have been completed (Shanks's paper on the professor prime is in press at PLoS ONE). A couple of researchers I spoke with didn't want to talk about their results until they had been peer reviewed, but their preliminary results are not encouraging.

Ap Dijksterhuis is the author of the professor-prime paper. At first, Dijksterhuis, a professor of psychology at Radboud University Nijmegen, in the Netherlands, wasn't sure he wanted to be interviewed for this article. That study is ancient news—it was published in 1998, and he's moved away from studying unconscious processes in the last couple of years, in part because he wanted to move on to new research on happiness and in part because of the rancor and suspicion that now accompany such work. He's tired of it.

The outing of Diederik Stapel made the atmosphere worse. Stapel was a social psychologist at Tilburg University, also in the Netherlands, who was found to have committed scientific misconduct in scores of papers. The scope and the depth of the fraud were jaw-dropping, and it changed the conversation. "It wasn't about research practices that could have been better. It was about fraud," Dijksterhuis says of the Stapel scandal. "I think that's playing in the background. It now almost feels as if people who do find significant data are making mistakes, are doing bad research, and maybe even doing fraudulent things."

Here is a link to the wiki article on the mentioned misconduct. I recall some of the drama that unfolded around the outing and the papers themselves... looking at the kinds of results Stapel wanted to fake or thought would advance his career reminds me of some other older examples of scientific misconduct.

In the e-mail discussion spurred by Kahneman's call to action, Dijksterhuis laid out a number of possible explanations for why skeptics were coming up empty when they attempted priming studies. Cultural differences, for example. Studying prejudice in the Netherlands is different from studying it in the United States. Certain subjects are not susceptible to certain primes, particularly a subject who is unusually self-aware. In an interview, he offered another, less charitable possibility. "It could be that they are bad experimenters," he says. "They may turn out failures to replicate that have been shown by 15 or 20 people already. It basically shows that it's something with them, and it's something going on in their labs."

Joseph Cesario is somewhere between a believer and a skeptic, though these days he's leaning more skeptic. Cesario is a social psychologist at Michigan State University, and he's successfully replicated Bargh's elderly-walking study, discovering in the course of the experiment that the attitude of a subject toward the elderly determined whether the effect worked or not. If you hate old people, you won't slow down. He is sympathetic to the argument that moderators exist that make these studies hard to replicate, lots of little monkey wrenches ready to ruin the works. But that argument only goes so far. "At some point, it becomes excuse-making," he says. "We have to have some threshold where we say that it doesn't exist. It can't be the case that some small group of people keep hitting on the right moderators over and over again."

Cesario has been trying to replicate a recent finding of Bargh's. In that study, published last year in the journal Emotion, Bargh and his co-author, Idit Shalev, asked subjects about their personal hygiene habits—how often they showered and bathed, for how long, how warm they liked the water. They also had subjects take a standard test to determine their degree of social isolation, whether they were lonely or not. What they found is that lonely people took longer and warmer baths and showers, perhaps substituting the warmth of the water for the warmth of regular human interaction.

That isn't priming, exactly, though it is a related unconscious phenomenon often called embodied cognition. As in the elderly-walking study, the subjects didn't realize what they were doing, didn't know they were bathing longer because they were lonely. Can warm water alleviate feelings of isolation? This was a result with real-world applications, and reporters jumped on it. "Wash the loneliness away with a long, hot bath," read an NBC News headline.

But I like the feeling of insight I get when thinking about cool applications of embodied cognition! (;_:)

Bargh's study had 92 subjects. So far Cesario has run more than 2,500 through the same experiment. He's found absolutely no relationship between bathing and loneliness. Zero. "It's very worrisome if you have people thinking they can take a shower and they can cure their depression," he says. And he says Bargh's data are troublesome. "Extremely small samples, extremely large effects—that's a red flag," he says. "It's not a red flag for people publishing those studies, but it should be."
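Cesario's "small samples, large effects" red flag is easy to see in a toy simulation. (All numbers below are invented for illustration, not taken from any of the studies discussed.) Suppose many labs each run one small study of a tiny true effect, and only the statistically significant results get published: the published effect sizes will be wildly inflated.

```python
import math
import random
import statistics

random.seed(42)

TRUE_EFFECT = 0.1   # a tiny real effect, in standard-deviation units (made up)
N_PER_STUDY = 25    # an "extremely small sample"
T_CRIT = 1.71       # rough one-sided p < .05 threshold for df = 24

published = []
for _ in range(5000):  # 5,000 hypothetical labs each run one small study
    sample = [random.gauss(TRUE_EFFECT, 1.0) for _ in range(N_PER_STUDY)]
    m = statistics.mean(sample)
    s = statistics.stdev(sample)
    t = m / (s / math.sqrt(N_PER_STUDY))
    if t > T_CRIT:          # the file drawer: only "successes" get written up
        published.append(m)

print(f"true effect:           {TRUE_EFFECT}")
print(f"mean published effect: {statistics.mean(published):.2f}")
```

With these numbers the significance filter only passes samples whose observed mean is several times the true effect, so the published literature reports an effect around 0.4 SD where the real one is 0.1 - and honest replications will then "fail."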

Even though he is, in a sense, taking aim at Bargh, Cesario thinks it's a shame that the debate over priming has become so personal, as if it's a referendum on one man. "He has the most eye-catching findings. He always has," Cesario says. "To the extent that some of his effects don't replicate, because he's identified as priming, it casts doubt on the entire body of research. He is priming."

I'll admit that took me a few seconds too long to parse. (~_^)

That has been the narrative. Bargh's research is crumbling under scrutiny and, along with it, perhaps priming as a whole. Maybe the most exciting aspect of social psychology over the last couple of decades, these almost magical experiments in which people are prompted to be smarter or slower without them even knowing it, will end up as an embarrassing footnote rather than a landmark achievement.

Well yes, dear journalist, that has been the narrative you've just presented to us readers.

Then along comes Gary Latham.

How entertaining a plot twist! Or maybe a journalist is writing a story out of a confusing process in which academia tries to take account of a confusing array of new evidence. Of course, that's me telling a story right there. Agggh bad brain bad!

Latham, an organizational psychologist in the management school at the University of Toronto, thought the research Bargh and others did was crap. That's the word he used. He told one of his graduate students, Amanda Shantz, that if she tried to apply Bargh's principles it would be a win-win. If it failed, they could publish a useful takedown. If it succeeded ... well, that would be interesting.

They performed a pilot study, which involved showing subjects a photo of a woman winning a race before the subjects took part in a brainstorming task. As Bargh's research would predict, the photo made them perform better at the brainstorming task. Or seemed to. Latham performed the experiment again in cooperation with another lab. This time the study involved employees in a university fund-raising call center. They were divided into three groups. Each group was given a fact sheet that would be visible while they made phone calls. In the upper left-hand corner of the fact sheet was either a photo of a woman winning a race, a generic photo of employees at a call center, or no photo. Again, consistent with Bargh, the subjects who were primed raised more money. Those with the photo of call-center employees raised the most, while those with the race-winner photo came in second, both outpacing the photo-less control. This was true even though, when questioned afterward, the subjects said they had been too busy to notice the photos.

Latham didn't want Bargh to be right. "I couldn't have been more skeptical or more disbelieving when I started the research," he says. "I nearly fell off my chair when my data" supported Bargh's findings.

That experiment has changed Latham's opinion of priming and has him wondering now about the applications for unconscious primes in our daily lives. Are there photos that would make people be safer at work? Are there photos that undermine performance? How should we be fine-tuning the images that surround us? "It's almost scary in lots of ways that these primes in these environments can affect us without us being aware," he says. Latham hasn't stopped there. He's continued to try experiments using Bargh's ideas, and those results have only strengthened his confidence in priming. "I've got two more that are just mind-blowing," he says. "And I know John Bargh doesn't know about them, but he'll be a happy guy when he sees them."

Latham doesn't know why others have had trouble. He only knows what he's found, and he's certain about his own data. In the end, Latham thinks Bargh will be vindicated as a pioneer in understanding unconscious motivations. "I'm like a converted Christian," he says. "I started out as a devout atheist, and now I'm a believer."

Following his come-to-Jesus transformation, Latham sent an e-mail to Bargh to let him know about the call-center experiment. When I brought this up with Bargh, his face brightened slightly for the first time in our conversation. "You can imagine how that helped me," he says. He had been feeling isolated, under siege, worried that his legacy was becoming a cautionary tale. "You feel like you're on an island," he says.

Though Latham is now a believer, he remains the exception. With more failed replications in the pipeline, Dijksterhuis believes that Kahneman's looming-train-wreck letter, though well meaning, may become a self-fulfilling prophecy, helping to sink the field rather than save it. Perhaps the perception has already become so negative that further replications, regardless of what they find, won't matter much. For his part, Bargh is trying to take the long view. "We have to think about 50 or 100 years from now—are people going to believe the same theories?" he says. "Maybe it's not true. Let's see if it is or isn't."

Admirable that he's come to the latter attitude after the early angry blog posts prompted by what he was going through. That wasn't sarcasm: scientists are only human, after all, and there are easier things to do than this.

## On the Importance of Systematic Biases in Science

25 20 January 2013 09:39PM

From pg812-1020 of Chapter 8 “Sufficiency, Ancillarity, And All That” of Probability Theory: The Logic of Science by E.T. Jaynes:

The classical example showing the error of this kind of reasoning is the fable about the height of the Emperor of China. Supposing that each person in China surely knows the height of the Emperor to an accuracy of at least ±1 meter, if there are N=1,000,000,000 inhabitants, then it seems that we could determine his height to an accuracy at least as good as

$\frac{1}{\sqrt{1,000,000,000}}\,\text{m} \approx 0.003\,\text{cm}$ (8-49)

merely by asking each person’s opinion and averaging the results.

The absurdity of the conclusion tells us rather forcefully that the $\sqrt{N}$ rule is not always valid, even when the separate data values are causally independent; it requires them to be logically independent. In this case, we know that the vast majority of the inhabitants of China have never seen the Emperor; yet they have been discussing the Emperor among themselves and some kind of mental image of him has evolved as folklore. Then knowledge of the answer given by one does tell us something about the answer likely to be given by another, so they are not logically independent. Indeed, folklore has almost surely generated a systematic error, which survives the averaging; thus the above estimate would tell us something about the folklore, but almost nothing about the Emperor.

We could put it roughly as follows:

error in estimate = $S \pm \frac{R}{\sqrt{N}}$ (8-50)

where S is the common systematic error in each datum, R is the RMS ‘random’ error in the individual data values. Uninformed opinions, even though they may agree well among themselves, are nearly worthless as evidence. Therefore sound scientific inference demands that, when this is a possibility, we use a form of probability theory (i.e. a probabilistic model) which is sophisticated enough to detect this situation and make allowances for it.

As a start on this, equation (8-50) gives us a crude but useful rule of thumb; it shows that, unless we know that the systematic error is less than about $\frac{1}{3}$ of the random error, we cannot be sure that the average of a million data values is any more accurate or reliable than the average of ten.¹ As Henri Poincaré put it: “The physicist is persuaded that one good measurement is worth many bad ones.” This has been well recognized by experimental physicists for generations; but warnings about it are conspicuously missing in the “soft” sciences whose practitioners are educated from those textbooks.
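A quick simulation makes Jaynes's point concrete (the Emperor's height and the error sizes here are invented for illustration): the random term shrinks like $R/\sqrt{N}$ as N grows, but the shared systematic error S survives the averaging untouched.

```python
import random

random.seed(0)

TRUE_HEIGHT = 1.75     # the Emperor's actual height in meters (invented)
FOLKLORE_BIAS = 0.30   # common systematic error S: folklore exaggerates
RANDOM_SPREAD = 1.00   # RMS "random" error R in each citizen's guess

def averaged_estimate(n):
    # Every citizen starts from the same folklore image (shared error S)
    # and adds an independent personal error (random error R).
    total = sum(TRUE_HEIGHT + FOLKLORE_BIAS + random.gauss(0, RANDOM_SPREAD)
                for _ in range(n))
    return total / n

for n in (10, 1_000, 100_000):
    err = averaged_estimate(n) - TRUE_HEIGHT
    print(f"N = {n:>6}: error of averaged estimate = {err:+.3f} m")
```

However many opinions are averaged, the error settles at S = 0.30 m rather than shrinking toward zero: the average measures the folklore, not the Emperor.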

Or pg1019-1020 Chapter 10 “Physics of ‘Random Experiments’”:

…Nevertheless, the existence of such a strong connection is clearly only an ideal limiting case unlikely to be realized in any real application. For this reason, the law of large numbers and limit theorems of probability theory can be grossly misleading to a scientist or engineer who naively supposes them to be experimental facts, and tries to interpret them literally in his problems. Here are two simple examples:

1. Suppose there is some random experiment in which you assign a probability p for some particular outcome A. It is important to estimate accurately the fraction f of times A will be true in the next million trials. If you try to use the laws of large numbers, it will tell you various things about f; for example, that it is quite likely to differ from p by less than a tenth of one percent, and enormously unlikely to differ from p by more than one percent. But now, imagine that in the first hundred trials, the observed frequency of A turned out to be entirely different from p. Would this lead you to suspect that something was wrong, and revise your probability assignment for the 101’st trial? If it would, then your state of knowledge is different from that required for the validity of the law of large numbers. You are not sure of the independence of different trials, and/or you are not sure of the correctness of the numerical value of p. Your prediction of f for a million trials is probably no more reliable than for a hundred.
2. The common sense of a good experimental scientist tells him the same thing without any probability theory. Suppose someone is measuring the velocity of light. After making allowances for the known systematic errors, he could calculate a probability distribution for the various other errors, based on the noise level in his electronics, vibration amplitudes, etc. At this point, a naive application of the law of large numbers might lead him to think that he can add three significant figures to his measurement merely by repeating it a million times and averaging the results. But, of course, what he would actually do is to repeat some unknown systematic error a million times. It is idle to repeat a physical measurement an enormous number of times in the hope that “good statistics” will average out your errors, because we cannot know the full systematic error. This is the old “Emperor of China” fallacy…

Indeed, unless we know that all sources of systematic error - recognized or unrecognized - contribute less than about one-third the total error, we cannot be sure that the average of a million measurements is any more reliable than the average of ten. Our time is much better spent in designing a new experiment which will give a lower probable error per trial. As Poincaré put it, “The physicist is persuaded that one good measurement is worth many bad ones.”² In other words, the common sense of a scientist tells him that the probabilities he assigns to various errors do not have a strong connection with frequencies, and that methods of inference which presuppose such a connection could be disastrously misleading in his problems.

I excerpted & typed up these quotes for use in my DNB FAQ appendix on systematic problems; the applicability of Jaynes’s observations to things like publication bias is obvious. See also http://lesswrong.com/lw/g13/against_nhst/

1. If I am understanding this right, Jaynes’s point here is that the random error shrinks towards zero as N increases, but this error is added onto the “common systematic error” S, so the total error approaches S no matter how many observations you make, and the random term can push the total error up as well as down (variability, in this case, actually being helpful for once). So for example, with S = 1/3 and R = 1: for N=10, $\frac{1}{3} + \frac{1}{\sqrt{10}} \approx 0.65$; with N=100, it’s 0.43; with N=1,000,000 it’s 0.334; and with N=1,000,000,000 it equals 0.333365, etc., never going below the original systematic error of $\frac{1}{3}$. This leads to the unfortunate consequence that the likely error for N=10 is the range 0.017<x<0.650, while for N=1,000,000 it is the much narrower range 0.332<x<0.334 - so it is possible for the estimate from the tiny sample to be exactly as good as (or even better than) the one from the enormous sample, since the N=10 range contains the entire N=1,000,000 range and can dip as low as 0.017!

2. Possibly this is what Lord Rutherford meant when he said, “If your experiment needs statistics you ought to have done a better experiment”.
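Jaynes’s warning also shows up in simulation: averaging an enormous number of measurements that share one unknown systematic offset drives the random error toward zero but leaves the offset untouched. A minimal Monte Carlo sketch, with made-up numbers (TRUE_VALUE and NOISE_SD are illustrative assumptions, not from the text):

```python
import random

random.seed(0)

TRUE_VALUE = 10.0
SYSTEMATIC_ERROR = 1/3   # unknown offset shared by every repetition
NOISE_SD = 1.0           # random error, fresh each repetition

def average_of(n):
    """Average n measurements that all carry the same systematic offset."""
    samples = [TRUE_VALUE + SYSTEMATIC_ERROR + random.gauss(0, NOISE_SD)
               for _ in range(n)]
    return sum(samples) / n

for n in (10, 1_000, 1_000_000):
    err = abs(average_of(n) - TRUE_VALUE)
    print(f"N = {n:>9,}: |error of average| = {err:.4f}")
```

However large N gets, the error of the average settles at about 1/3 rather than continuing to shrink - "good statistics" cannot average out an error that repeats identically in every trial.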

## Study on depression

9 15 January 2013 09:58PM

I am currently running a study on depression, in collaboration with Shannon Friedman (http://lesswrong.com/user/ShannonFriedman/overview/). If you are interested in participating, the study involves filling out a survey and will take a few minutes of your time (half an hour would be very generous), most likely once a week for four weeks. Send me an email at mdixo100@uottawa.ca, and I can give you more details.

Thank you!

## Against NHST

50 21 December 2012 04:45AM

A summary of standard non-Bayesian criticisms of common frequentist statistical practices, with pointers into the academic literature.

## Notes on Psychopathy

15 19 December 2012 04:02AM

This is some old work I did for SI. See also Notes on the Psychology of Power.

Deviant but not necessarily diseased or dysfunctional minds can be resistant to all treatment and to attempts to change their mind (think No Universally Compelling Arguments). The premier example is probably psychopaths: no drug treatments are at all useful, nor are there any therapies with solid evidence of even marginal effectiveness. (One widely cited chapter, “Treatment of psychopathy: A review of empirical findings”, concludes that some attempted therapies merely made them more effective manipulators! We’ll look at that later.) While some psychopathic traits resemble general characteristics of the powerful, psychopaths are still a pretty unique group and worth looking at.

The main focus of my excerpts is on whether they are treatable, their effectiveness, possible evolutionary bases, and what other issues they have or don’t have which might lead one to not simply write them off as “broken” and of no relevance to AI.

(For example, if we were to discover that psychopaths are healthy human beings - not universally mentally retarded or ineffective in gaining wealth/power - who are destructive and amoral despite being completely human and often socialized normally, then what does that say about the fragility of human values and about how likely it is that an AI will just be nice to us?)

## How to Avoid the Conflict Between Feminism and Evolutionary Psychology?

7 04 December 2012 10:22PM

I don't mean to claim that there should be a conflict.

Most likely the conflict arises from many things, such as:

1) Women having been ostracized for much of our society's existence
2) People failing at the is-ought problem and committing the naturalistic fallacy
3) Lots of media articles stating unbelievably naïve evolutionary claims as scientific fact
4) Feminists as a group being defensive
5) Especially defensive when it comes to what is said to be natural
6) General disregard by people, including politically engaged people (see The Blank Slate, by Steven Pinker), of the existence of a non-blank-slate human nature
7) Lack of patience among evolutionary psychologists to make peace and explain themselves for things that journalists, not they, claimed

and others...

But the fact is, the conflict arose. It has had only bad consequences as far as I can see, such as people fighting with each other, breaking friendships, and prejudice of great intensity on both sides.

How can this conflict be avoided?  Should someone write a treatise on Feminist Evolutionary Psychology?  Should we get Leda Cosmides to talk about women's liberation?

There are obviously no incompatibilities between reality and the moral claims of feminism. So whichever facts about evolutionary psychology are found to be true as the science develops, they can be reconciled with those claims. Compatibilism is possible.

But will the scientific community pull it off?

Related: Pinker Versus Spelke - The Science of Gender and Science

http://www.edge.org/3rd_culture/debate05/debate05_index.html

David Buss and Cindy Meston - Why do Women Have Sex?

## [Link] Contesting the “Nature” Of Conformity: What Milgram and Zimbardo's Studies Really Show

10 02 December 2012 11:45PM

Here is a paper in PLOS Biology re-considering the lessons of some classic psychology experiments invoked here often (via).

Contesting the “Nature” Of Conformity: What Milgram and Zimbardo's Studies Really Show

To me the crux of the paper comes from this statement in the abstract:

This suggests that individuals' willingness to follow authorities is conditional on identification with the authority in question and an associated belief that the authority is right.

Plus this detail from the Milgram experiment:

Ultimately, they tend to go along with the Experimenter if he justifies their actions in terms of the scientific benefits of the study (as he does with the prod “The experiment requires that you continue”) [39]. But if he gives them a direct order (“You have no other choice, you must go on”) participants typically refuse. Once again, received wisdom proves questionable. The Milgram studies seem to be less about people blindly conforming to orders than about getting people to believe in the importance of what they are doing [40].

## [LINK] Breaking the illusion of understanding

19 26 October 2012 11:09PM

This writeup at Ars Technica about a recently published paper in the Journal of Consumer Research may be of interest. Super-brief summary:

• Consumers with higher scores on a cognitive reflection test are more inclined to buy products when told more about them; for consumers with lower CRT scores it's the reverse.
• Consumers with higher CRT scores felt that they understood the products better after being told more; consumers with lower CRT scores felt that they understood them worse.
• If subjects are asked to give an explanation of how products work and then asked how well they understand and how willing they'd be to pay, high-CRT subjects don't change much on either, but low-CRT subjects report feeling that they understand worse and are willing to pay less.
• Conclusion: it looks as if when you give low-CRT subjects more information about a product, they feel they understand it less, don't like that feeling, and become less willing to pay.

If this is right (which seems plausible enough) then it presumably applies more broadly: e.g., to what tactics are most effective in political debate. Though it's hardly news in that area that making people feel stupid isn't the best way to persuade them of things.

Abstract of the paper:

People differ in their threshold for satisfactory causal understanding and therefore in the type of explanation that will engender understanding and maximize the appeal of a novel product. Explanation fiends are dissatisfied with surface understanding and desire detailed mechanistic explanations of how products work. In contrast, explanation foes derive less understanding from detailed than coarse explanations and downgrade products that are explained in detail. Consumers’ attitude toward explanation is predicted by their tendency to deliberate, as measured by the cognitive reflection test. Cognitive reflection also predicts susceptibility to the illusion of explanatory depth, the unjustified belief that one understands how things work. When explanation foes attempt to explain, it exposes the illusion, which leads to a decrease in willingness to pay. In contrast, explanation fiends are willing to pay more after generating explanations. We hypothesize that those low in cognitive reflection are explanation foes because explanatory detail shatters their illusion of understanding.

## Clarification: Behaviourism & Reinforcement

7 10 October 2012 05:30AM

Disclaimer: The following is but a brief clarification on what the human brain does when one's behaviour is reinforced or punished. Thorough, exhaustive, and scholarly it is not.

Summary: Punishment, reinforcement, etc. of a behaviour creates an association in the mind of the affected party between the behaviour and the corresponding punishment, reinforcement, etc., the nature of which can only be known by the affected party. Take care when reinforcing or punishing others, as you may be effecting an unwanted association.

I've noticed the behaviourist concept of reinforcement thrown around a great deal on this site, and am worried that a fair number of those who frequent it have developed a misconception about, or are simply ignorant of, how reinforcement affects the human brain and why it is practically effective.

In the interest of time, I'm not going to go into much detail on classical black-box behaviourism and behavioural neuroscience; Luke has already covered how one can take advantage of positive reinforcement. Negative reinforcement and punishment are also important, but won't be covered here.

## [LINK] Learning without practice, through fMRI induction

2 07 October 2012 03:15AM

New research published today in the journal Science suggests it may be possible to use brain technology to learn to play a piano, reduce mental stress or hit a curve ball with little or no conscious effort. It's the kind of thing seen in Hollywood's "Matrix" franchise.

Think of a person watching a computer screen and having his or her brain patterns modified to match those of a high-performing athlete or modified to recuperate from an accident or disease. Though preliminary, researchers say such possibilities may exist in the future.

Experiments conducted at Boston University (BU) and ATR Computational Neuroscience Laboratories in Kyoto, Japan, recently demonstrated that through a person's visual cortex, researchers could use decoded functional magnetic resonance imaging (fMRI) to induce brain activity patterns to match a previously known target state and thereby improve performance on visual tasks.

EDIT: To clarify, this is almost certainly over-hyped. However, it appears to at least be an instance of very interesting biofeedback.

## [Link] Inside the Cold, Calculating Mind of LessWrong?

10 05 October 2012 06:23PM

An article from the Wall Street Journal. The original title might be slightly mind-killing for some people, but I found it moderately interesting especially considering that many LessWrongers formed part of the data set for the study the article talks about and a large fraction of us identified as libertarian on the last survey.

## Inside the Cold, Calculating Libertarian Mind

An individual's personality shapes his or her political ideology at least as much as circumstances, background and influences. That is the gist of a recent strand of psychological research identified especially with the work of Jonathan Haidt. The baffling (to liberals) fact that a large minority of working-class white people vote for conservative candidates is explained by psychological dispositions that override their narrow economic interests.

In tests, libertarians displayed less emotion, empathy and disgust than conservatives or liberals.

In his recent book "The Righteous Mind," Dr. Haidt confronted liberal bafflement and made the case that conservatives are motivated by morality just as liberals are, but also by a larger set of moral "tastes"—loyalty, authority and sanctity, in addition to the liberal tastes for compassion and fairness. Studies show that conservatives are more conscientious and sensitive to disgust but less tolerant of change; liberals are more empathic and open to new experiences.

But ideology does not have to be bipolar. It need not fall on a line from conservative to liberal. In a recently published paper, Ravi Iyer from the University of Southern California, together with Dr. Haidt and other researchers at the data-collection platform YourMorals.org, dissect the personalities of those who describe themselves as libertarian.

These are people who often call themselves economically conservative but socially liberal. They like free societies as well as free markets, and they want the government to get out of the bedroom as well as the boardroom. They don't see why, in order to get a small-government president, they have to vote for somebody who is keen on military spending and religion; or to get a tolerant and compassionate society they have to vote for a large and intrusive state.

The study collated the results of 16 personality surveys and experiments completed by nearly 12,000 self-identified libertarians who visited YourMorals.org. The researchers compared the libertarians to tens of thousands of self-identified liberals and conservatives. It was hardly surprising that the team found that libertarians strongly value liberty, especially the "negative liberty" of freedom from interference by others. Given the philosophy of their heroes, from John Locke and John Stuart Mill to Ayn Rand and Ron Paul, it also comes as no surprise that libertarians are also individualistic, stressing the right and the need for people to stand on their own two feet, rather than the duty of others, or government, to care for people.

Perhaps more intriguingly, when libertarians reacted to moral dilemmas and in other tests, they displayed less emotion, less empathy and less disgust than either conservatives or liberals. They appeared to use "cold" calculation to reach utilitarian conclusions about whether (for instance) to save lives by sacrificing fewer lives. They reached correct, rather than intuitive, answers to math and logic problems, and they enjoyed "effortful and thoughtful cognitive tasks" more than others do.

The researchers found that libertarians had the most "masculine" psychological profile, while liberals had the most feminine, and these results held up even when they examined each gender separately, which "may explain why libertarianism appeals to men more than women."

All Americans value liberty, but libertarians seem to value it more. For social conservatives, liberty is often a means to the end of rolling back the welfare state, with its lax morals and redistributive taxation, so liberty can be infringed in the bedroom. For liberals, liberty is a way to extend rights to groups perceived to be oppressed, so liberty can be infringed in the boardroom. But for libertarians, liberty is an end in itself, trumping all other moral values.

Dr. Iyer's conclusion is that libertarians are a distinct species—psychologically as well as politically.

A version of this article appeared September 29, 2012, on page C4 in the U.S. edition of The Wall Street Journal, with the headline: Inside the Cold, Calculating Libertarian Mind.

The original paper.

## Understanding Libertarian Morality: The Psychological Roots of an Individualist Ideology

Abstract: Libertarians are an increasingly vocal ideological group in U.S. politics, yet they are understudied compared to liberals and conservatives. Much of what is known about libertarians is based on the writing of libertarian intellectuals and political leaders, rather than surveying libertarians in the general population. Across three studies, 15 measures, and a large web-based sample (N = 152,239), we sought to understand the morality of self-described libertarians. Based on an intuitionist view of moral judgment, we focused on the underlying affective and cognitive dispositions that accompany this unique worldview. We found that, compared to liberals and conservatives, libertarians show 1) stronger endorsement of individual liberty as their foremost guiding principle and correspondingly weaker endorsement of other moral principles, 2) a relatively cerebral as opposed to emotional intellectual style, and 3) lower interdependence and social relatedness. Our findings add to a growing recognition of the role of psychological predispositions in the organization of political attitudes.

## [Link] Nobel laureate challenges psychologists to clean up their act

12 03 October 2012 05:22PM

Nobel laureate challenges psychologists to clean up their act

Nobel prize-winner Daniel Kahneman has issued a strongly worded call to one group of psychologists to restore the credibility of their field by creating a replication ring to check each others’ results.

Kahneman, a psychologist at Princeton University in New Jersey, addressed his open e-mail to researchers who work on social priming, the study of how subtle cues can unconsciously influence our thoughts or behaviour. For example, volunteers might walk more slowly down a corridor after seeing words related to old age [1], or fare better in general-knowledge tests after writing down the attributes of a typical professor [2].

## Introduction to Connectionist Modelling of Cognitive Processes: a chapter by chapter review

11 30 September 2012 04:24AM

This chapter by chapter review was inspired by Vaniver's recent chapter by chapter review of Causality. As with that review, the intention is not so much to summarize as to help readers determine whether or not they should read the book. Reading the review is in no way a substitute for reading the book.

I first read Introduction to Connectionist Modelling of Cognitive Processes (ICMCP) as part of an undergraduate course on cognitive modelling. We were assigned one half of the book to read: I ended up reading every page. Recently I felt like I should read it again, so I bought a used copy off Amazon. That was money well spent: the book was just as good as I remembered.

By their nature, artificial neural networks (referred to as connectionist networks in the book) are a very mathy topic, and it would be easy to write a textbook that was nothing but formulas and very hard to understand. And while ICMCP also spends a lot of time talking about the math behind the various kinds of neural nets, it does its best to explain things as intuitively as possible, sticking to elementary mathematics and elaborating on the reasons of why the equations are what they are. At this, it succeeds – it can be easily understood by someone knowing only high school math. I haven't personally studied ANNs at a more advanced level, but I would imagine that anybody who intended to do so would greatly benefit from the strong conceptual and historical understanding ICMCP provided.

The book also comes with a floppy disk containing a tlearn simulator which can be used to run various exercises given in the book. I haven't tried using this program, so I won't comment on it, nor on the exercises.

The book has 15 chapters, and it is divided into two sections: principles and applications.

Principles

1: ”The basics of connectionist information processing” provides a general overview of how ANNs work. The chapter begins by providing a verbal summary of five assumptions of connectionist modelling: that 1) neurons integrate information, 2) neurons pass information about the level of their input, 3) brain structure is layered, 4) the influence of one neuron on another depends on the strength of the connection between them, and 5) learning is achieved by changing the strengths of connections between neurons. After this verbal introduction, the basic symbols and equations relating to ANNs are introduced simultaneously with an explanation of how the ”neurons” in an ANN model work.
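The five assumptions can be illustrated with a toy connectionist unit. This is a sketch in plain Python, not the book's tlearn code; the function names, the particular weights, and the logistic activation are my own choices:

```python
import math

def unit(inputs, weights, bias=0.0):
    """A connectionist unit: integrate weighted inputs (assumptions 1, 2, 4)
    and pass on the level of activation through a logistic squashing function."""
    net = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 / (1 + math.exp(-net))

def learn(inputs, weights, target, rate=0.5):
    """Learning changes connection strengths (assumption 5): a simple
    delta-rule update nudging each weight toward the target output."""
    out = unit(inputs, weights)
    return [w + rate * (target - out) * x for w, x in zip(weights, inputs)]

weights = [0.1, -0.2]
for _ in range(100):
    weights = learn([1.0, 1.0], weights, target=1.0)
print(unit([1.0, 1.0], weights))  # activation has climbed toward the target
```

Layering (assumption 3) then amounts to feeding the outputs of one set of such units into another; the book's networks are built from exactly this kind of element.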

## What are the best books on evolutionary psychology?

4 21 September 2012 07:59PM

I'd like to distinguish three classes of reasons to read about a discipline:

1) You are curious and want to begin by reading something 100-500 pages long. I'd go for Pinker's 1997 "How the Mind Works".

2) You want to survey the whole field by reading something 500-1500 pages long. I definitely recommend David Buss's 2004 "The Handbook of Evolutionary Psychology", which beats the usual SI recommendations on the field.

3) You want to know the state of the art of the field, so you really need something very recent, say from the last 2 or 3 years at most.  This is me. Please help me if you know what I should read.  300-1500 pages seems a good interval.

Just for comparison, in Cognitive Neuroscience, 3) would be the 2009 "MIT The Cognitive Neurosciences IV".

Post your opinions on what 1), 2), and 3) should be for Evolutionary Psychology.

Oh, and if you like Evolutionary Cognitive Neuroscience (a field so new I don't know any of the three), please post yours too...

## Experimental psychology on word confusion

11 14 September 2012 05:44AM

There's plenty of experimental work about how humans make poor judgments and decisions, but I haven't yet found much about how humans make poor judgments and decisions because of confusions about words. And yet, I expect such errors are common — I, at least, encounter them frequently.

It would be nice to have some scientific studies which illustrate the ways in which confusions about words affect everyday decision making, but instead all I can do is make philosophical arguments and point people to things like Yudkowsky's 37 Ways That Words Can Be Wrong or Chalmers' Verbal Disputes and Philosophical Progress.

Which keywords do I need to find experimental work on this topic? I tried Google Scholar searches like "fuzzy concepts" "decision making" and effect of connotations on choices, but I didn't find much in my first hour of looking into this.

## How to tell apart science from pseudo-science in a field you don't know?

19 02 September 2012 10:25AM

First, a short personal note to make you understand why this is important to me. To make a long story short, the son of a friend has some atypical form of autism and language troubles. And that kid matters a lot to me, so I want to become stronger in helping him, to be able to better interact with him and help him overcome his troubles.

But I don't know much about psychology. I'm a computer scientist, with a general background of maths and physics. I'm kind of a nerd, social skills aren't my strength. I did read some of the basic books advised on Less Wrong, like Cialdini, Wright or Wiseman, but those just give me a very small background on which to build.

And psychology in general, and autism/language troubles in particular, are fields in which there is a lot of pseudo-science. I'm very sceptical of Freud and psychoanalysis, for example, which I consider (but maybe I am wrong?) to be more like alchemy than like chemistry. There is a lot of mysticism, and there are sect-like gurus related to autism, too.

So I'm a bit unsure how, from my position of having a general scientific and rationality background, I can dive into a completely unrelated field. Research papers are probably above my current level in psychology, so I think books (textbooks or popular science) are the way to go. But how do I find which books, of the hundreds that were written on the topic, I should buy and read? Books that are evidence-based science, not pseudo-science, I mean. What is a general method for selecting which books to start with in a field you don't really know? I would welcome any advice from the community.

Disclaimer: this is a personal "call for help", but since I think the answers/advice may matter outside my own personal case, I hope you don't mind.

## Let's Talk About Intelligence

3 22 August 2012 08:19PM

I'm writing this because, for a while, I have noticed that I am confused: particularly about what people mean when they say someone is intelligent. I'm more interested in a discussion here than actually making a formal case, so please excuse my lack of actual citations. I'm also trying to articulate my own confusion to myself as well as everyone else, so this will not be as focused as it could be.

If I had to point to a starting point for this state, I'd say it was in psych class, where we talked about research presented by Eysenck and Gladwell. Eysenck is very clear to define intelligence as the ability to solve abstract problems, but not necessarily the motivation to do so. In many ways, this matches Yudkowsky's definition, where he talks about intelligence as a property we can ascribe to an entity, which lets us predict that the entity will be able to complete a task, without ourselves necessarily understanding the steps toward completion.

The central theme I'm confused about is the generality of the concept: are we really saying that there is a general algorithm or class of algorithms that will solve most or all problems to within a given distance from optimum?

Let me give an example. Depending on what test you use, an autistic can look clinically retarded, but with 'islands' of remarkable ability, even up to genius levels. The classic example is “Rain Man,” who is depicted as easily solving numerical problems most people don't even understand, but having trouble tying his shoes. This is usually an exaggeration (by no means are all autistics savants), and these island skills are hardly limited to math. The interesting point, though, is that even someone with many such islands can have an abysmally low overall IQ.

Some tests correct for this – Raven's Progressive Matrices test, for instance, gives you increasingly complex patterns that you have to complete – and this tends to level out those islands and give an overall score that seems commensurate with the sheer genius that can be found in some areas.

What I find confusing is why we're correcting this at all. Certainly, we know that some people, given a task, can complete that task, and of course, depending on the person, this task can be unfathomably complex. But do we really have the evidence to say that, in general, this task does not depend on the person as well? Or, more specifically, on the algorithms they're running? Is it reasonable to say that a person runs an algorithm that will solve all problems within an efficiency x (with respect to processing time and optimality of the solution)? Or should we be looking closer for islands in neurological baselines as well?

Certainly, we could change the question and ask how efficient are all the algorithms the person is running, and from that, we could give an average efficiency, which might serve as a decent rough estimate for the efficiency with which a person will solve a problem. And for some uses, this is exactly the information we're looking for, and that's fine. But, as a general property of the people we're studying, it seems like the measure is insufficient.

If we're trying to predict specific behavior, it seems like it would be useful to be aware of whatever 'islands' exist – for instance, the common separation between algebraic and geometric approaches to math. In my experience, using geometric explanations to someone with an algebraic approach may not be at all successful, but this is not predictive of what we might think of as the person's a priori probability of solving the problem: occasionally they seem to solve the problem with no more than a few algebraic hints. Of course, this is hardly hard evidence, but I think it points to what I'm getting at.

Looking at the specific algorithm that's being used (or perhaps, the class of algorithm?) can be considerably more predictive of the outcome. Actually, I can't really say that, either: looking at what could be a distinct algorithm can be considerably more predictive of the outcome. There are numerous explanations for these observations, one of which is of course that these are all the same algorithm, just trained on different inputs, and perhaps even constrained or aided by changes in the local neural architecture (as some studies on neurological correlates of autism might suggest). But computational power alone seems insufficient if we're going to explain phenomena like the autistic 'islands'. A savant doesn't want for computational power – but in some areas, they can want for intelligence.

Here's where I start getting confused: the research I've seen assumes intelligence is a single trait which could be genetically, epigenetically, or culturally transmitted. When correlates of intelligence are looked for, from what I've seen, the correlates are for the 'average' intelligence score, and largely disregard the 'islands' of ability. As I've said, this can be useful, but it seems like answering some of these questions would be useful for a more general understanding of intelligence, especially going into the neurological side of things, whether that's in wetware or hardware.

Then again, there's a good chance I'm missing something: in which case, I'd appreciate some help updating my priors.

## [LINK] Anti-conservative bias among social psychologists

15 10 August 2012 08:13AM

Summary: Current social psychology research is probably, on average, compromised by leftward political bias. Conservative researchers are likely discriminated against in at least this field. More importantly, papers and research that do not fit a liberal perspective face greater barriers and burdens.

An article in the online publication Inside Higher Ed on a survey about anti-conservative bias among social psychologists.

Numerous surveys have found that professors, especially those in some disciplines, are to the left of the general public. But those same -- and other -- surveys have rarely found evidence that left-leaning academics discriminate on the basis of politics. So to many academics, the question of ideological bias is not a big deal. Investment bankers may lean to the right, but that doesn't mean they don't provide good service (or as best the economy will permit) to clients of all political stripes, the argument goes.

And professors should be assumed to have the same professionalism.

A new study, however, challenges that assumption -- at least in the field of social psychology. The study isn't due to be published until next month (in Perspectives on Psychological Science), and the authors and others are noting limitations to the study. But its findings of bias by social psychologists (even if just a decent-sized minority of them) are already getting considerable buzz in conservative circles. Just over 37 percent of those surveyed said that, given equally qualified candidates for a job, they would support the hiring of a liberal candidate over a conservative candidate. Smaller percentages agreed that a "conservative perspective" would negatively influence their odds of supporting a paper for inclusion in a journal or a proposal for a grant. (The final version of the paper is not yet available, but an early version may be found on the website of the Social Science Research Network.)

To some on the right, such findings are hardly surprising. But to the authors, who expected to find lopsided political leanings, but not bias, the results were not what they expected.

"The questions were pretty blatant. We didn't expect people would give those answers," said Yoel Inbar, a co-author, who is a visiting assistant professor at the Wharton School of the University of Pennsylvania, and an assistant professor of social psychology at Tilburg University, in the Netherlands.

He said that the findings should concern academics. Of the bias he and a co-author found, he said, "I don't think it's O.K."

Discussion of faculty politics extends well beyond social psychology, and humanities professors are frequently accused of being "tenured radicals" (a label some wear with pride). But social psychology has had an intense debate over the issue in the last year.

At the 2011 meeting of the Society for Personality and Social Psychology, Jonathan Haidt of the University of Virginia polled the audience of some 1,000 in a convention center ballroom to ask how many were liberals (the vast majority of hands went up), how many were centrists or libertarians (he counted a couple dozen or so), and how many were conservatives (three hands went up). In his talk, he said that the conference reflected "a statistically impossible lack of diversity,” in a country where 40 percent of Americans are conservative and only 20 percent are liberal. He said he worried about the discipline becoming a "tribal-moral community" in ways that hurt the field's credibility.

The link above is worth following. The problems that arise remind me of the situation with academic ethics, and our own, in light of this paper.

That speech prompted the research that is about to be published. Members of a social psychologists' e-mail list were surveyed twice. (The group is not limited to American social scientists or faculty members, but about 90 percent are academics, including grad students, and more than 80 percent are Americans.) Not surprisingly, the overwhelming majority of those surveyed identified as liberal on social, foreign and economic policy, with the strongest conservative presence on economic policy. Only 6 percent described themselves as conservative over all.

The questions on willingness to discriminate against conservatives were asked in two ways: what the respondents thought they would do, and what they thought their colleagues would do. The pool included conservatives (who presumably aren't discriminating against conservatives) so the liberal response rates may be a bit higher, Inbar said.

The percentages below reflect those who gave a score of 4 or higher on a 7-point scale on how likely they would be to do something (with 4 being "somewhat" likely).

Percentages of Social Psychologists Who Would Be Biased in Various Ways

| Question | Self | Colleagues |
| --- | --- | --- |
| A "politically conservative perspective" by author would have a negative influence on evaluation of a paper | 18.6% | 34.2% |
| A "politically conservative perspective" by author would have a negative influence on evaluation of a grant proposal | 23.8% | 36.9% |
| Would be reluctant to extend symposium invitation to a colleague who is "politically quite conservative" | 14.0% | 29.6% |
| Would vote for liberal over conservative job candidate if they were equally qualified | 37.5% | 44.1% |
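For clarity, here is a minimal sketch of how a "4 or higher on a 7-point scale" percentage is computed (the function name and the ratings below are invented for illustration, not data from the paper):

```python
def pct_at_least(ratings, threshold=4):
    """Share of respondents (as a percentage) whose rating on the
    1-7 scale meets or exceeds the threshold (4 = "somewhat" likely)."""
    hits = sum(1 for r in ratings if r >= threshold)
    return 100 * hits / len(ratings)

# Hypothetical responses on the 7-point scale:
ratings = [1, 2, 4, 5, 7, 3, 2, 6, 1, 4]
print(pct_at_least(ratings))  # → 50.0
```

With these toy ratings, five of the ten respondents score at or above 4, giving 50.0%; the study's table entries are computed the same way over its actual respondents.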

I can't help but think that the self-assessments are probably too generous. For predicting how an individual behaves when the behaviour in question is undesirable, I'm more inclined to trust their estimate of how their "colleagues" behave than their estimate of how they personally do.

The more liberal the survey respondents identified as being, the more likely they were to say that they would discriminate.

The paper notes surveys and statements by conservatives in the field saying that they are reluctant to speak out and says that "they are right to do so," given the numbers of individuals who indicate they might be biased or that their colleagues might be biased in various ways.

Inbar said that he has no idea if other fields would have similar results. And he stressed that the questions were hypothetical; the survey did not ask participants if they had actually done these things.

He said that the study also collected free responses from participants, and that conservative responses were consistent with the idea that there is bias out there. "The responses included really egregious stuff, people being belittled by their advisers publicly for voting Republican."

This shouldn't be surprising to hear since, to quote CharlieSheen, "we even have LW posters who have in academia personally experienced discrimination and harassment because of their right wing politics."

Neil Gross, a professor of sociology at the University of British Columbia, urged caution about the results. Gross has written extensively on faculty political issues. He is the co-author of a 2007 report that found that while professors may lean left, they do so less than is imagined and less uniformly across institution type than is imagined.

Gross said it was important to remember that the percentages saying they would discriminate in various ways are answering yes to a relatively low bar of "somewhat." He also said that the numbers would have been "more meaningful" if they had asked about actual behavior by respondents in the last year, not the more general question of whether they might do these things.

At the same time, he said that the numbers "are higher than I would have expected." One theory Gross has is that the questions are "picking up general political animosity as much as anything else."

If you are wondering about the political leanings of the social psychologists who conducted the study, they are on the left. Inbar said he describes himself as "a pretty doctrinaire liberal," who volunteered for the Obama campaign in 2008 and who votes Democrat. His co-author, Joris Lammers of Tilburg, is to Inbar's left, he said.

What most impressed him about the issues raised by the study, Inbar said, is the need to think about "basic fairness."

While I can see Lammers' point that this is disturbing from a fairness perspective for people grinding their way through academia, and that it should serve as a warning for right-wing LessWrong readers working through the system, I find the issue of how our heavy reliance on academia for our map of reality might lead to us inheriting such distortions much more concerning. In light of this, if a widely accepted conclusion from social psychology favours a "right wing" perspective, it is more likely to be correct than it would be if no biases against such perspectives existed. Conclusions that favour a "left wing" perspective are correspondingly somewhat less likely to be true than if no such biases existed. We should update accordingly.
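The direction of that update can be illustrated with a toy Bayes calculation (all numbers below are my own invented assumptions, not figures from the survey): if biased gatekeeping makes it harder for false right-favouring claims to become widely accepted, then wide acceptance is stronger evidence of truth for right-favouring conclusions.

```python
# Toy illustration of the update (invented numbers, not the study's).
def posterior_true(prior_true, p_accept_if_true, p_accept_if_false):
    """P(conclusion is true | it became widely accepted), via Bayes' rule."""
    num = prior_true * p_accept_if_true
    den = num + (1 - prior_true) * p_accept_if_false
    return num / den

prior = 0.5
# Assumed: bias halves the acceptance chance of right-favouring work,
# but false left-favouring claims pass review comparatively easily.
left = posterior_true(prior, p_accept_if_true=0.8, p_accept_if_false=0.3)
right = posterior_true(prior, p_accept_if_true=0.4, p_accept_if_false=0.05)
print(round(left, 2), round(right, 2))  # → 0.73 0.89
```

With these made-up numbers, an accepted left-favouring conclusion is true with probability about 0.73 and an accepted right-favouring one about 0.89; the asymmetry, not the exact values, is the point.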

I also think there are reasons to think we may have similar problems on this site.

## Notes on the Psychology of Power

31 27 July 2012 07:22PM

Luke/SI asked me to look into what the academic literature might have to say about people in positions of power. This is a summary of some of the recent psychology results.

The powerful or elite are: fast-planning abstract thinkers who take action (1) in order to pursue single/minimal objectives, are in favor of strict rules for their stereotyped out-group underlings (2) but are rationalizing (3) & hypocritical when it serves their interests (4), especially when they feel secure in their power. They break social norms (5, 6) or ignore context (1) which turns out to be worsened by disclosure of conflicts of interest (7), and lie fluently without mental or physiological stress (6).

What are powerful members good for? They can help in shifting among equilibria: solving coordination problems or inducing contributions towards public goods (8), and their abstracted Far perspective can be better than the concrete Near of the weak (9).

1. Galinsky et al 2003; Guinote, 2007; Lammers et al 2008; Smith & Bargh, 2008
2. Eyal & Liberman
3. Rustichini & Villeval 2012
4. Lammers et al 2010
5. Kleef et al 2011
6. Carney et al 2010
7. Cain et al 2005; Cain et al 2011
8. Eckel et al 2010
9. Slabu et al; Smith & Trope 2006; Smith et al 2008

## Exploiting the Typical Mind Fallacy for more accurate questioning?

31 17 July 2012 12:46AM

I was reading Yvain's Generalizing from One Example, which talks about the typical mind fallacy.  Basically, it describes how humans assume that all other humans are like them.  If a person doesn't cheat on tests, they are more likely to assume others won't cheat on tests either.  If a person sees mental images, they'll be more likely to assume that everyone else sees mental images.

As I'm wont to do, I was thinking about how to make that theory pay rent.  It occurred to me that this could definitely be exploitable.  If the typical mind fallacy is correct, we should be able to have it go the other way; we can derive information about a person's proclivities based on what they think about other people.

E.g., most employers ask "have you ever stolen from a job before?", and have to deal with misreporting because nobody in their right mind will say yes. However, if the typical mind fallacy is correct, employers could instead ask "what do you think the percentage of employees who have stolen from their job is?" and know that the applicants who responded higher than average were correspondingly more likely to steal, and the applicants who responded lower than average were less likely to steal. It could cut through all sorts of social-desirability distortion effects. You couldn't get the exact likelihood, but it would give more useful information than you would get with a direct question.

In hindsight, which is always 20/20, it seems incredibly obvious.  I'd be surprised if professional personality tests and sociologists aren't using these types of questions.  My google-fu shows no hits, but it's possible I'm just not using the correct term that sociologists use.  I was wondering if anyone had heard of this questioning method before, and if there's any good research data out there showing just how much you can infer from someone's deviance from the median response.
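The scoring idea described above can be sketched in a few lines (a toy illustration; the function name and the data are mine, and a real instrument would obviously need norming and validation):

```python
# Sketch of the indirect-questioning idea: score each applicant by how
# far their estimate of "what fraction of employees steal" deviates
# from the median estimate across applicants.
from statistics import median

def deviance_scores(estimates):
    """Return each applicant's estimate minus the group median.

    Under the typical-mind hypothesis, a positive score suggests a
    higher personal propensity; a negative score suggests a lower one.
    """
    m = median(estimates)
    return [e - m for e in estimates]

# Toy data: four applicants' guesses (in percent) at how many
# employees have stolen from a job.
estimates = [10, 25, 30, 70]
print(deviance_scores(estimates))  # → [-17.5, -2.5, 2.5, 42.5]
```

The applicant who guessed 70% stands far above the median of 27.5 and would, on this theory, be flagged as the highest risk.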

## Two books by Celia Green

-9 13 July 2012 08:43AM

Celia Green is a figure who should interest some LW readers. If you can imagine Eliezer, not as an A.I. futurist in 2000s America, but as a parapsychologist in 1960s Britain - she must have been a little like that. She founded her own research institute in her mid-20s, invented psychological theories meant to explain why the human race was walking around resigned to mortality and ignorance, felt that her peers (who got all the research money) were doing everything wrong... I would say that her two outstanding books are The Human Evasion and Advice to Clever Children. The first book, while still very obscure, has slowly acquired a fanbase online; but the second book remains thoroughly unknown.

For a synopsis of what the books are about, I think something I wrote in 1993 (I've been promoting her work on the Internet for years) remains reasonable. They contain an analysis of the alleged deficiencies and hidden motivations of normal human psychology, a description of an alternative outlook, and an examination of various topics from that new perspective. There is some similarity to the rationalist ideal developed in the Sequences here, in that her alternative involves existential urgency, deep respect for uncertainty, and superhuman aspiration.

There are also prominent differences. Green's starting point is not Bayesian calculation, it's Humean skepticism. Green would agree that one should aspire to "think like reality", but for her this would mean, above all, being mindful of "total uncertainty". It's a fact that I don't know what comes next, that I don't know the true nature of reality, that I don't know what's possible if I try; I may have habitual opinions about these matters, but a moment's honest reflection shows that none of these opinions are knowledge in any genuine sense; even if they are correct, I don't know them to be correct. So if I am interested in thinking like reality, I can begin by acknowledging the radical uncertainty of my situation. I exist, I don't know why, I don't know what I am, I don't know what the world is or what it has planned for me. I may have my ideas, but I should be able to see them as ideas and hold them apart from the unknown reality.

If you are like me, you will enjoy the outlook of open-ended striving that Green develops in this intellectual context, but you will be jarred by her account of ordinary, non-striving psychology. Her answer to the question, why does the human race have such petty interests and limited ambitions, is that it is sunk in an orgy of mutual hatred, mostly disguised, and resulting from an attempt to evade the psychology of striving. More precisely, to be a finite human being is to be in a desperate and frustrating situation; and people attempt to solve this problem, not by overcoming their limitations, but by suppressing their reactions to the situation. Other people are central to the resulting psychological maneuvers. They are a way for you to distract yourself from your own situation, and they are a safe target if the existential frustration and desperation reassert themselves.

Celia Green's psychological ideas are the product of her personal confrontation with the mysterious existential situation, and also her confrontation with an uncomprehending society. I've thought for some time that her portrayal of universal human depravity results from overestimating the potential of the average human being; that in effect she has asked herself, if I were that person, how could I possibly lead the life I see them living, and say the things I hear them saying, unless I were that twisted up inside? Nonetheless, I do think she has described an aspect of human psychology which is real and largely unexamined, and also that her advice on how to avoid the resentful turning-away from reality, and live in the uncertainty, is quite profound. One reason I'm promoting these books is in the hope that some small part of the culture at large is finally ready to digest their contents and critically assess them. People ought to be doing PhDs on the thought of Celia Green, but she's unknown in that world.

As for Celia Green herself, she's still alive and still going. She has a blog and a personal website and an organization based near Oxford. She's an "academic exile", but true to her philosophy, she hasn't compromised one iota and hopes to start her own private university. She may especially be of interest to the metaphysically inclined faction of LW readers, identified by Yvain in a recent blog post.

## [Link] Can We Reverse The Stanford Prison Experiment?

43 14 June 2012 03:41AM

From the Harvard Business Review, an article entitled: "Can We Reverse The Stanford Prison Experiment?"

By: Greg McKeown
Posted: June 12, 2012

Clicky Link of Awesome! Wheee! Push me!

Summary:

The Royal Canadian Mounted Police attempt a program in which they hand out "Positive Tickets."

Their approach was to try to catch youth doing the right things and give them a Positive Ticket. The ticket granted the recipient free entry to the movies or to a local youth center. They gave out an average of 40,000 tickets per year. That is three times the number of negative tickets over the same period. As it turns out, and unbeknownst to Clapham, that ratio (2.9 positive affects to 1 negative affect, to be precise) is called the Losada Line. It is the minimum ratio of positive to negatives that has to exist for a team to flourish. On higher-performing teams (and marriages for that matter) the ratio jumps to 5:1. But does it hold true in policing?

According to Clapham, youth recidivism was reduced from 60% to 8%. Overall crime was reduced by 40%. Youth crime was cut in half. And it cost one-tenth of the traditional judicial system.

This idea can be applied to Real Life

The lesson here is to create a culture that immediately and sincerely celebrates victories. Here are three simple ways to begin:

1. Start your next staff meeting with five minutes on the question: "What has gone right since our last meeting?" Have each person acknowledge someone else's achievement in a concrete, sincere way. Done right, this very small question can begin to shift the conversation.

2. Take two minutes every day to try to catch someone doing the right thing. It is the fastest and most positive way for the people around you to learn when they are getting it right.

3. Create a virtual community board where employees, partners and even customers can share what they are grateful for daily. Sounds idealistic? Vishen Lakhiani, CEO of Mind Valley, a new generation media and publishing company, has done just that at Gratitude Log. (Watch him explain how it works here).

## [Video] Presentation on metacognition contains good intro to basic LW ideas

2 12 June 2012 01:12PM

I attended a talk yesterday given under the auspices of the Ottawa Skeptics on the subject of "metacognition" or thinking about thinking -- basically, it was about core rationality concepts. It was designed to appeal to a broad group of lay people interested in science and consisted of a number of examples drawn from pop-sci books such as Thinking, Fast and Slow and Predictably Irrational. (Also mentioned: straw vulcans as described by CFAR's own Julia Galef.) If people who aren't familiar with LW ask you what LW is about, I'd strongly recommend pointing them to this video.

## [Link] Thick and thin

23 06 June 2012 12:08PM

An interesting new entry on Gregory Cochran's and Henry Harpending's well-known blog, West Hunter. For me, the information I gained from the LessWrong articles on inferential distances complemented it nicely. Link to source.

There is a spectrum of problem-solving, ranging from, at one extreme, simplicity and clear chains of logical reasoning (sometimes long chains) and, at the other, building a picture by sifting through a vast mass of evidence of varying quality. I will give some examples. Just the other day, when I was conferring, conversing and otherwise hobnobbing with my fellow physicists, I mentioned high-altitude lightning, sprites and elves and blue jets. I said that you could think of a thundercloud as a vertical dipole, with an electric field that decreased as the cube of altitude, while the breakdown voltage varied with air pressure, which declines exponentially with altitude. At which point the prof I was talking to said "and so the curves must cross!". That's how physicists think, and it can be very effective. The amount of information required to solve the problem is not very large. I call this a 'thin' problem.

At the other extreme, consider Darwin gathering and pondering a vast amount of natural-history information, eventually coming up with natural selection as the explanation. Some of the information in the literature wasn't correct, and much key information that would have greatly aided his quest, such as basic genetics, was still unknown. That didn't stop him, any more than not knowing the cause of continental drift stopped Wegener.

In another example at the messy end of the spectrum, Joe Rochefort, running Hypo in the spring of 1942, needed to figure out Japanese plans. He had an ever-growing mass of Japanese radio intercepts, some of which were partially decrypted – say, one word of five, with luck. He had data from radio direction-finding; his people were beginning to be able to recognize particular Japanese radio operators by their 'fist'. He'd studied in Japan, knew the Japanese well. He had plenty of Navy experience – knew what was possible. I would call this a classic 'thick' problem, one in which an analyst needs to deal with an enormous amount of data of varying quality. Being smart is necessary but not sufficient: you also need to know lots of stuff.

At this point he was utterly saturated with information about the Japanese Navy.  He’d been  living and breathing JN-25 for months. The Japanese were aimed somewhere,  that somewhere designated by an untranslated codegroup – ‘AF’.  Rochefort thought it meant Midway, based on many clues, plausibility, etc.  OP-20-G, back in Washington,  thought otherwise. They thought the main attack might be against Alaska, or Port Moresby, or even the West Coast.

Nimitz believed Rochefort – who was correct. Because of that, we managed to prevail at Midway, losing one carrier and one destroyer while the Japanese lost four carriers and a heavy cruiser*. As so often happens, OP-20-G won the bureaucratic war: Rochefort embarrassed them by proving them wrong, and they kicked him out of Hawaii, assigning him to a floating drydock.

The usual explanation of Joe Rochefort's fall argues that John Redman's (head of OP-20-G, the Navy's main signals intelligence and cryptanalysis group) geographical proximity to Navy headquarters was a key factor in winning the bureaucratic struggle, along with his brother's influence (Rear Admiral Joseph Redman). That and being a shameless liar.

Personally, I wonder if part of the problem is the great difficulty of explaining the analysis of a thick problem to someone without a similar depth of knowledge.  At best, they believe you because you’ve  been right in the past.  Or, sometimes, once you have developed the answer, there is a ‘thin’ way of confirming your answer – as when Rochefort took Jasper Holmes’s suggestion and had Midway broadcast an uncoded complaint about the failure of their distillation system – soon followed by a Japanese report that ‘AF’ was short of water.

Most problems in the social sciences are ‘thick’, and unfortunately, almost all of the researchers are as well. There are a lot more Redmans than Rocheforts.

## Case Study: Testing Confirmation Bias

32 02 May 2012 02:03PM

Master copy lives on gwern.net

## [link] Mass replication of Psychology articles planned

25 18 April 2012 04:13PM

The plan is to replicate or fail to replicate all 2008 articles from three major Psychology journals.

ETA: http://openscienceframework.org/ is the homepage of the group behind this.  It's still in Beta, but will eventually include some nifty looking science toolkits in addition to the reproducibility project.

## [link] Why We Reason (psychology blog)

4 18 April 2012 11:40AM

Why We Reason is an excellent psychology blog that has a great deal of subject matter in common with Less Wrong. Some of the topics discussed on the blog include social psychology, judgement and decision making, neuroscience, cognitive biases, and creativity. And there's even a hint of the kind of "cognitive philosophy" practiced on Less Wrong.

The author, Sam McNerney, is blessed with the rare gift of being able to distill psychology topics for a lay audience, and his posts are very lucid.

There's also a handy archive of every post on the site.

## 'Thinking, Fast and Slow' Chapter Summaries / Notes [link]

17 15 April 2012 09:14AM

I recently read Kahneman's 'Thinking Fast and Slow' (actually listened to the audiobook) and I wanted to find a summary of the experiments he describes and I stumbled upon this: http://sivers.org/book/ThinkingFastAndSlow. It has a summary of the interesting/important points of each chapter. Most of the statements seem to be direct quotes from the book, so if you have it in an electronic format (it can easily be obtained from uh, various sources) you can search for those quotes and find the context.

Bonus: Notes from Dan Ariely's Predictably Irrational and also many other books.

## The principle of ‘altruistic arbitrage’

17 09 April 2012 01:29AM

Cross-posted from http://www.robertwiblin.com

There is a principle in finance that obvious and guaranteed ways to make a lot of money, so-called 'arbitrages', should not exist. It has a simple rationale. If market prices made it possible to trade assets around and in the process make a guaranteed profit, people would do it, in so doing shifting some prices up and others down. They would only stop making these trades once the prices had adjusted and the opportunity to make money had disappeared. While opportunities to make 'free money' appear all the time, they are quickly noticed and the behaviour of traders eliminates them. The logic of selfishness and competition mean the only remaining ways to make big money should involve risk taking, luck and hard work. This is the 'no arbitrage' principle.

Should a similar principle exist for selfless as well as selfish finance? When a guaranteed opportunity to do a lot of good for the world appears, philanthropists should notice and pounce on it, and only stop shifting resources into that activity once the opportunity has been exhausted. This wouldn’t work as quickly as the elimination of arbitrage on financial markets of course. Rather it would look more like entrepreneurs searching for and exploiting opportunities to open new and profitable businesses. Still, in general competition to do good should make it challenging for an altruistic start-up or budding young philanthropist to beat existing charities at their own game.

There is a very important difference though. Most investors are looking to make money, and so for them a dollar is a dollar, whatever business activity it comes from. Competition between investors makes opportunities to get those dollars hard to find. The same is not true of altruists, who have very diverse preferences about who is most deserving of help and how we should help them; a 'util' from one charitable activity is not the same as a 'util' from another. This suggests that, unlike in finance, we may be able to find 'altruistic arbitrages', that is to say opportunities to do a lot of good for the world that others have left unexploited.

The rule is simple: target groups you care about that other people mostly don’t, and take advantage of strategies other people are biased against using.  That rule is the root of a lot of advice offered to thoughtful givers and consequentialist-oriented folks. An obvious example is that you shouldn’t look to help poor people in rich countries. There are already a lot of government and private dollars chasing opportunities to assist them, so the low hanging fruit has all been used up and then some. The better value opportunities are going to be in poor, unromantic places you have never heard of, where fewer competing philanthropist dollars are directed. Similarly, you should think about taking high risk-high return strategies. Most do-gooders are searching for guaranteed and respectable opportunities to do a bit of good, rather than peculiar long-shot opportunities to do a lot of good. If you only care about the ‘expected‘ return to your charity, then you can do more by taking advantage of the quirky, improbable bets neglected by others.

Who do I personally care about more than others? For me the main candidates are animals, especially wild ones, and people who don’t yet exist and may never exist – interest groups that go largely ignored by the majority of humanity. What are the risky strategies I can employ to help these groups? Working on future technologies most people think are farcical naturally jumps to mind but I’m sure there are others and would love to hear them.

This principle is the main reason I am skeptical of mainstream political activism as a way to improve the world. If you are part of a significant worldwide movement, it’s unlikely that you’re working in a neglected area and exploiting how your altruistic preferences are distinct from those of others.

What other conclusions can we draw thinking about philanthropy in this way?

## Evolutionary psychology: evolving three eyed monsters

14 16 March 2012 09:28PM

### Summary

We should not expect complex psychological and cognitive adaptations to evolve within a timeframe in which, morphologically, animal bodies can change only very little. The genetic alterations underlying the cognition for speech shouldn't be expected to be dramatically more complex than the alterations to the vocal cords.

### Evolutions that did not happen

When humans descended from the trees and became bipedal, it would have been very advantageous to have an eye or two on the back of the head, for detecting predators and to protect us against being back-stabbed by fellow humans. This is why all of us have an extra eye on the back of our heads, right? Ohh, we don't. Perhaps mate selection resulted in the poor reproductive success of the back-eyed hominids. Perhaps the tribes would kill any mutant with eyes on the back of its head.

There are pretty solid reasons why none of the above has happened, and can't happen in such timeframes. Evolution does not happen simply because a trait would be beneficial, or because there's a niche to be filled. A simple alteration to the DNA has to happen, causing a morphological change which results in some reproductive improvement; then the DNA has to mutate again, etc. Unrelated nearly-neutral mutations may combine, resulting in an unexpected change (for example, wolves have many genes that alter their size; a random selection of those genes produces an approximately normal distribution of sizes; we could rapidly select smaller dogs by utilizing the existing diversity). There is no such path rapidly leading up to an eye on the back of the head. The eye on the back of the head didn't evolve because evolution couldn't make that adaptation.

The speed of evolution is severely limited. The ways in which evolution can work are also very limited. In the time since we humans came down from the trees, we have undergone only rather minor adaptations in the shape of our bodies, as is evident from the fossil record - and that is the degree of change we should expect in the rest of our bodies, including our brains.

The correct application of evolutionary theory should be entirely unable to account for outrageous hypotheticals like an extra eye on the back of our heads (an extra eye can evolve, of course, but it would take a very long time). Evolution is not magic. The power of a scientific theory is that it can't explain everything, but only the things which are true - that's what makes a scientific theory useful for finding out which things are true, in advance of observation. That is what gives science its predictive power. That's what differentiates science from religion: the power of not explaining the wrong things.

### Evolving the instincts

What do we think it would take to evolve a new innate instinct? To hard-wire a cognitive mechanism?

Groups of neurons have to connect in new ways - the neurons on one side must express binding proteins which guide the axons towards them, and the weights of the connections have to be adjusted. The majority of the genes expressed in neurons affect all of the neurons; some affect just a group, but there is no known mechanism by which the bindings of an entirely arbitrary group can be controlled from the DNA in one mutation. The difficulties are not unlike those of an extra eye. This, combined with the above-mentioned speed constraints, imposes severe limitations on which sorts of wiring modifications humans could have evolved during the hunter-gatherer era, and ultimately on the behaviours that could have evolved. Even very simple things - such as a preference for a particular body shape in mates - have extreme hidden implementation complexity in terms of the DNA modifications leading up to the wiring leading up to the altered preferences. Wiring the brain for a specific cognitive fallacy is anything but simple. It may not always be as time-consuming or impossible as adding an extra eye, but it is still no small feat.

### Junk evolutionary psychology

It is extremely important to take into account the properties of evolutionary process when invoking evolution as explanation for traits and behaviours.

Evolutionary theory, as invoked in evolutionary psychology, especially of the armchair variety, all too often is a universal explanation. It is magic that can explain anything equally well. Know of a fallacy of reasoning? Think up how it could have been useful to a hunter-gatherer, make a hypothesis, construct a flawed study across cultures, and publish.

No consideration is given to the strength of the advantage, to the size of the 'mutation target', to the mechanisms by which a mutation in the DNA would have resulted in the modification of circuitry that produces the trait, or to gradual adaptability. All of that is glossed over entirely in common armchair evolutionary psychology and, unfortunately, even in academia. Evolutionary psychology is littered with examples of traits which are alleged to have evolved over the same period during which we had barely adapted to walking upright.

It may be that when describing behaviours, a lot of complexity can be hidden in very simple-sounding concepts, making them seem like good targets for evolutionary explanation. But when you look at the details - the axons that have to find their targets; the gene that must activate in specific cells but not in others - there is a great deal of complexity in coding for even very simple traits.

Note: I originally did not intend to make an example of junk, for one should not pick a strawman, but for the sake of clarity, here is an example of what I would consider junk: the explanation of better performance on the Wason Selection Task as the result of an evolved 'social contracts module', without the slightest consideration for what it might take, in terms of DNA, to code a Wason Selection Task solver circuit, nor for alternative plausible explanations, nor for the readily available fact that people can easily learn to solve the Wason Selection Task correctly when taught - a fact which still implies general-purpose learning - and the fact that high-IQ people can solve far more confusing tasks of far larger complexity, which demonstrates that such tasks can be solved in the absence of a specific evolved 'social contract' module.

Here is an example of non-junk: evolutionary pressure can adjust the strength of pre-existing emotions such as anger and fear, and can even decrease intelligence whenever higher intelligence is maladaptive.

Another commonly neglected fact: evolution is not a watchmaker, blind or not. It does not choose one solution to a problem and then work on that solution! It works on all adaptive mutations simultaneously, and simpler changes to existing systems are much quicker to evolve. If a mutation that tweaks an existing system improves fitness, it too will be selected for, even if there was a third eye in progress.
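The "all solutions at once" point can be illustrated with a toy model (a sketch I am adding, not from the original post). Under the standard single-locus selection recursion, the odds of carrying a beneficial allele multiply by (1 + s) each generation; running it for two independent mutations with made-up selection coefficients shows both rising in frequency simultaneously, with the larger per-generation advantage pulling ahead:

```python
def select(p, s, generations):
    """Deterministic allele-frequency change over time.

    p: starting frequency of the beneficial allele
    s: selection coefficient (relative fitness advantage per generation)
    Each generation: p' = p(1+s) / (p(1+s) + (1-p)),
    i.e. the odds p/(1-p) are multiplied by (1+s).
    """
    for _ in range(generations):
        p = p * (1 + s) / (p * (1 + s) + (1 - p))
    return p

# Two independent beneficial mutations, both rare initially, both
# selected at the same time (coefficients are purely illustrative):
strong_tweak = select(0.01, 0.05, 200)  # s = 5% advantage
weak_tweak = select(0.01, 0.02, 200)    # s = 2% advantage
print(f"strong tweak: {strong_tweak:.2f}, weak tweak: {weak_tweak:.2f}")
```

Evolution did not have to "finish" one before starting the other: both frequencies increase every generation, and the stronger advantage simply approaches fixation sooner.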

As much as it would be more politically correct and 'moderate' for, e.g., the evolution-of-religion crowd to argue that religious people have evolved a specific god module which does nothing but make them believe in god, rather than to imply that they are 'genetically stupid' in some way, the same selective pressure would also lead evolution to select for non-god-specific heritable tweaks to learning, and for minor cognitive deficits, that increase religiosity.

### Lined slate as a prior

As an update to tabula rasa, picture lined writing paper: it provides some guidance for the handwriting. Horizontally lined paper is good for writing text but not for arithmetic, the staff of five closely spaced lines separated by gaps is good for writing music, and grid paper is fairly universal. Different regions of the brain are tailored to different content, but should not be expected to themselves code different algorithms, save for a few exceptions which had a long time to evolve, early in vertebrate history.

edit: improved the language some. edit: specified what sort of evolutionary psychology I consider to be junk and what I do not, albeit that was not the point of the article. The point of the article was to provide you with the notions to use to see what sorts of evolutionary psychology to consider junk, and which not.

## "How We Decide", by Jonah Lehrer, kindle version on sale for 99 cents at amazon

3 07 March 2012 06:43AM

http://www.amazon.com/How-We-Decide-ebook/dp/B003WMAAMG/ref=sr_1_1?s=digital-text&ie=UTF8&qid=1331098417&sr=1-1

I don't know how proper this is, but I'm quite cheap and like a bargain, and I've seen Lehrer referred to a number of times here. I hadn't read Kahneman before, but bought the kindle version and read him on my phone whenever I had some wait time somewhere.

It's better than a moleskine pouch! I can have the top *thousand* books I'm reading on me at all times, and just pull one out anywhere! I never have to waste another minute of my life!

I don't like spam any more than anyone else, but I'm going to be getting it cheap, and I just want everyone else who wants it to get it cheap too. It's okay to spam people about cheap books, right? That's a family tradition.

## Online education and Conscientiousness

13 24 February 2012 09:05PM

I've wondered for some time now what the effects of online education might be on gender and income inequality, specifically as online education interacts with IQ and Conscientiousness (compared with offline education). I ran into a study of a course done online and offline that found correlations with Conscientiousness, which prompted me to start writing out my thoughts: https://plus.google.com/103530621949492999968/posts/aKa3qLatwZ3

The model/argument I give (towards the bottom) is logically trivial, and the basic idea - offline classrooms remove some of the need for self-discipline/Conscientiousness, so that performance is more g-loaded - seems so intuitive that I'm sure I can't be the first person to think of it.

Does anyone have statistics or citations handy which might help in any essay I write on the topic?

## [LINK] The NYT on Everyday Habits

5 18 February 2012 08:23AM

The New York Times just published this article on how companies use data mining and the psychology of habit formation to effectively target ads.

The process within our brains that creates habits is a three-step loop. First, there is a cue, a trigger that tells your brain to go into automatic mode and which habit to use. Then there is the routine, which can be physical or mental or emotional. Finally, there is a reward, which helps your brain figure out if this particular loop is worth remembering for the future. Over time, this loop — cue, routine, reward; cue, routine, reward — becomes more and more automatic. The cue and reward become neurologically intertwined until a sense of craving emerges.

It has some decent depth of discussion, including an example of the author actually using the concepts to stop a bad habit. The article is based on an upcoming book by the same author titled The Power of Habit.

I haven't seen emphasis on this particular phenomenon—habits consisting of a cue, routine, and reward—on Less Wrong. Do people think it's a valid, scientifically supported phenomenon? The article gives this impression but, of course, doesn't cite specific academic work on it. It ties in to the System 1/System 2 theory easily as a System 1 process. How much of the whole System 1 can be explained as an implementation of this cue, routine, reward process?

And most importantly, how can this fit into the procrastination equation as a tool to subvert akrasia and establish good habits?

Let's look at each of the four factors. If you've formed a habit, it means that the reward happened consistently, which means you have high expectancy. Given that it is a reward, the value is at least positive, but probably not large. Since habits mostly work on small time scales, delay is probably very small. And maybe increased habit formation means your impulsiveness is low. Each of these effects would increase motivation. In addition, because it's part of System 1, there is little energy cost to performing the habit, like there would be with many other conscious actions.
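The four-factor reasoning above can be put into numbers. A minimal sketch (mine, not from the post) of the procrastination equation in the common form Motivation = (Expectancy × Value) / (1 + Impulsiveness × Delay), with the `1 +` added to keep the denominator sensible at zero delay; all the parameter values below are hypothetical, chosen only to contrast an ingrained habit (high expectancy, tiny delay) with a distant one-off chore:

```python
def motivation(expectancy, value, impulsiveness, delay):
    """Procrastination equation: motivation rises with expectancy and
    value, and falls with impulsiveness and delay."""
    return (expectancy * value) / (1 + impulsiveness * delay)

# A formed habit: reward happened consistently (high expectancy),
# reward arrives almost immediately (tiny delay).
habit = motivation(expectancy=0.9, value=2.0, impulsiveness=0.5, delay=0.1)

# A one-off chore: uncertain payoff, reward far in the future.
chore = motivation(expectancy=0.5, value=2.0, impulsiveness=0.5, delay=10.0)

print(f"habit: {habit:.2f}, chore: {chore:.2f}")  # habit: 1.71, chore: 0.17
```

On these (made-up) numbers the habit comes out an order of magnitude more motivating, which matches the qualitative argument: each of the four factors moves in the habit's favour.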

Does this explanation sound legitimate, or like an argument for the bottom line?

Personally, I can tell that context is a strong cue for behavior at work, school, and home. When I go into work, I'm automatically motivated to perform well, and that motivation remains for several hours. When I go into class, I'm automatically ready to focus on difficult material, or even enthusiastically take a test. Yet when I go home, something about the context switches that off, and I can't seem to get anything done at all. It might be worth significant experimentation to find out what cues trigger both modes, and change my contexts to induce what I want.

What do you think?

Edit: this phenomenon has been covered on LW in the form of operant conditioning in posts by Yvain.

## [link] 101 Fascinating Brain Blogs

-4 16 February 2012 11:47PM

A pretty interesting list of psychology blogs. One of my favorite blogs (Mind Hacks) was listed (so the others on the list must be good too. Right?).

Also,

Does anybody know of any good textbooks on applied cognitive psychology?

For memory: something that would put things like SRS in context with other things like the reasons we forget, but in more depth than blog posts? Or do you think that getting a textbook on the subject wouldn't be worthwhile because most of the low-hanging fruit can be grasped through blog posts?

For emotions: any good/practical introductions to CBT?

Do you think we should start up a book recommendations recurring thread?
