Less Wrong is a community blog devoted to refining the art of human rationality. Please visit our About page for more information.

X Is Not About Y: Technological Improvements and Cognitive-Physical Demands

1 Gram_Stone 15 January 2017 05:49PM

(I, the author, no longer endorse this article. I find it naive in hindsight.)


Recall the following template:

In some cases, human beings have evolved in such fashion as to think that they are doing X for prosocial reason Y, but when human beings actually do X, other adaptations execute to promote self-benefiting consequence Z.

I work in the sign industry, and it's worth knowing that the sign industry mostly involves printing images on cast sheets of polyvinyl chloride with adhesive on the back. This lets you stick a graphic just about anywhere. Good old-fashioned signs are now just a special case of vinyl application where the surface is a quadrilateral.

But sometimes, it seems like you could cut out the vinyl installation process: if you just wanted a solid white sign with some black text, and the substrate you're going to apply the vinyl to is already white, wouldn't it be nice if you could just print some black text directly on the substrate?

That's what a flatbed printer is for. You can imagine it as your standard HP desktop printer at 100x magnification, with an unusually long air hockey table where the paper slot should be.

Now, when the management was trying to get the workforce excited about this new technological artifact, they would say things like, "This new artifact will reduce the amount of time that you spend on vinyl application, leaving you less stressed and with a decreased workload."

But when we actually started to use the artifact, our jobs didn't actually become less stressful, and our workloads didn't actually decrease.

I mean, yeah, we could technically produce the same number of signs in less time, but a corollary of this statement is that we could produce more signs in the same amount of time, which is what we actually did.

So, I propose the subtemplate:

Employer proposes the introduction of technological artifact X, ostensibly to reduce physical or cognitive demands, but when the employer actually introduces technological artifact X, they realize it can be used to increase output and do that instead.

I wonder if anyone else has more examples?

Planning the Enemy's Retreat

14 Gram_Stone 11 January 2017 05:44AM

Related: Leave a Line of Retreat

When I was smaller, I was sitting at home watching The Mummy, with my mother, ironically enough. There's a character by the name of Bernard Burns, and you only need to know two things about him. The first thing you need to know is that the titular antagonist steals his eyes and tongue because, hey, eyes and tongues spoil after a while, you know, and it's been three thousand years.

The second thing is that Bernard Burns was the spitting image of my father. I was terrified! I imagined my father, lost and alone, certain that he would die, unable to see, unable even to properly scream!

After this frightening ordeal, I had the conversation in which it is revealed that fiction is not reality, that actions in movies don't really have consequences, that apparent consequences are merely imagined and portrayed.

Of course I knew this on some level. I think the difference between the way children and adults experience fiction is a matter of degree and not kind. And when you're an adult, suppressing those automatic responses to fiction has itself become so automatic that you experience fiction as a thing compartmentalized. You always know that the description of consequences in the fiction will not by magic have fire breathed into them, that Imhotep cannot gently step out of the frame and really remove your real father's real eyes.

So, even though we often use fiction to engage, to make things feel more real, in another way, once we grow, I think fiction gives us the chance to entertain formidable ideas at a comfortable distance.

A great user once said, "Vague anxieties are powerful anxieties." Related to this is the simple rationality technique of Leaving a Line of Retreat: before evaluating the plausibility of a highly cherished or deeply frightening belief, one visualizes the consequences of the highly cherished belief being false, or of the deeply frightening belief being true. We hope that it will thereby become just a little easier to evaluate the plausibility of that belief, for if we are wrong, at least we know what we're doing about it. Sometimes, if not often, what you'd really do about it isn't as bad as your intuitions would have you think.

If I had to put my finger on the source of that technique's power, I would name its ability to reduce the perceived hedonic costs of truthseeking. It's hard to estimate the plausibility of a charged idea because you expect the undesired outcome to feel very bad, and you naturally avoid such feelings. The trick is in realizing that, in any given situation, you have almost certainly overestimated how bad it would really feel.

But Sun Tzu didn't just plan his own retreats; he also planned his enemies' retreats. What if your interlocutor has not practiced the rationality technique of Leaving a Line of Retreat? Well, Sun Tzu might say, "Leave one for them."

As I noted in the beginning, adults automatically compartmentalize fiction away from reality. It is simply easier for me to watch The Mummy than it was when I was eight. The formidable idea of my father having his eyes and tongue removed is easier to hold at a distance.

Thus, I hypothesize, truth in fiction is hedonically cheap to seek.

When you recite the Litany of Gendlin, you do so because it makes seemingly bad things seem less bad. I propose that the idea generalizes: when you're experiencing fiction, everything seems less bad than its conceivably real counterpart, because it's stuck inside the book, and any ideas within will then seem less formidable. The idea is that you can use fiction as an implicit line of retreat, that you can use it to make anything seem less bad by making it make-believe, and thus, safe. The key, though, is that not everything inside of fiction is stuck inside of fiction forever. Sometimes conclusions that are valid in fiction also turn out to be valid in reality.

This is hard to use on yourself: you can't make a really scary idea into fiction, or shoehorn it into existing fiction, and expect it to feel far away. You'll know where the fiction came from. But I think it works well on others.

I don't think I can really get the point across in the way that I'd like without an example. This proposed technique was an accidental discovery, like popsicles or the Slinky:

A history student friend of mine was playing Fallout: New Vegas, and he wanted to talk to me about which ending he should choose. The conversation seemed mostly optimized for entertaining one another, and, hoping not to disappoint, I tried to intertwine my fictional ramblings with bona fide insights. The student was considering giving power to a democratic government, but he didn't feel very good about it, mostly because this fictional democracy was meant to represent anything that anyone has ever said is wrong with at least one democracy, plausible or not.

"The question you have to ask yourself," I proposed to the student, "is 'Do I value democracy because it is a good system, or do I value democracy per se?' A lot of people will admit that they value democracy per se. But that seems wrong to me. That means that if someone showed you a better system that you could verify was better, you would say 'This is good governance, but the purpose of government is not good governance, the purpose of government is democracy.' I do, however, understand democracy as a 'current best bet' or local maximum."

I have in fact gotten wide-eyed stares for saying things like that, even granting the closing ethical injunction on democracy as local maximum. I find that unusual, because not conflating democracy with good governance seems like one of the first steps you would take towards thinking about politics clearly. Suppose you lived further in the past, when the fashionable political system was not democracy but monarchy. If a future human revealed to you the notion of a modern democracy, which you, like many others today, consider preferable to monarchy, you would find yourself saying, regrettably, "This is good governance, but the purpose of government is not good governance; the purpose of government is monarchy."

But because we were arguing for fictional governments, our autocracies, or monarchies, or whatever non-democratic governments heretofore unseen, could not by magic have fire breathed into them. For me to entertain the idea of a non-democratic government in reality would have elicited incredulous stares. For me to entertain the idea in fiction is good conversation.

The student is one of two people with whom I've had this precise conversation, and I do mean in the particular sense of "Which Fallout ending do I pick?" I snuck this opinion into both, and both came back weeks later to tell me that they spent a lot of time thinking about that particular part of the conversation, and that the opinion I shared seemed deep.

Also, one of them told me that they had recently received some incredulous stares.

So I think this works, at least sometimes. It looks like you can sneak scary ideas into fiction, and make them seem just non-scary enough for someone to arrive at an accurate belief about that scary idea.

I do wonder though, if you could generalize this even more. How else could you reduce the perceived hedonic costs of truthseeking?

[Link] Yudkowsky's 'Four Layers of Intellectual Conversation'

12 Gram_Stone 08 January 2017 09:47PM

Kidney Trade: A Dialectic

5 Gram_Stone 18 November 2016 05:19PM

Related: GiveWell's Increasing the Supply of Organs for Transplantation in the U.S.

(Content warning: organs, organ trade, transplantation. Help me flesh this out! My intention is to present the arguments I've seen in a way that is, at a minimum, non-boring. In particular, moral intuitions conflicting or otherwise are welcome.)

“Now arriving at Objection from Human Dignity,” proclaimed the intercom in a euphonious female voice. Aleph shot Kappa and Lambda a dirty look farewell and disembarked from the train.

Kappa: “Okay, so maybe there’s a possibility that legal organ markets aren’t completely, obviously bad. I can at least quell my sense of disgust for the length of this train ride, if it really might save a lot more lives than what we’re doing right now. But I’m not even close to being convinced that that’s the case.”

Lambda nodded.

Kappa: “First: a clarification. Why kidneys? Why not livers or skin or corneas?”

Lambda: “I’m trying to be conservative.  For one, we can eliminate a lot of organs from consideration in the case of live donors because only a few organs can be donated without killing the donor in the process. Not considering tissues, but just organs, this narrows it down to kidneys, livers, and lungs. Liver transplants have … undesirable side effects that complicate-”

Kappa: “Uh, ‘undesirable side effects?’ Like what?”

Lambda: “Er, well it turns out that recovering from a liver donation is excruciatingly painful, and that seems like it might make the whole issue … harder to think about. Anyway, for that reason; and because most organ trade, including legal donation, is in kidneys; and because most people who die on waitlists are waiting for kidneys; and because letting people sell their organs after they're dead doesn't seem like it would increase the supply that much; for all of these reasons, focusing on kidneys from live donors seems to simplify the analysis without tossing out a whole lot of the original problem. Paying kidney donors looks like it’s a lot closer to being an obvious improvement in hindsight than paying people to donate other organs and tissues. If you wanted to talk about non-kidneys, you would have to go further than I have.”

Kappa: “Okay, so just kidneys then, unless I see a good reason to argue otherwise. The first big problem I see is that surgery is dangerous. So how are you not arguing that we should pay a bunch of people to take a bunch of deadly risks?”

Lambda: “As with any surgery, patients are at greater risk than usual immediately after having such a serious operation. The standard response is, "The risk of death from a kidney donation, expressed as a natural frequency, is merely 1 in 3000, which is about the same as the risk of death from a liposuction operation." But to my knowledge there are only four studies that have looked at this, some finding little risk, others finding greater risk, some finding no increased risk of end stage renal disease, others finding increased risk of end stage renal disease. Both sides have been the target of methodological criticisms. I'm currently of the opinion that the evidence is too ambiguous for me to draw any confident conclusions. I'm thus inclined to point out that we already incentivize people to do risky things with social benefits, such as military service, medical experimentation, and surrogate pregnancy. So saying that it's immoral to incentivize people to donate kidneys seems to imply that it's immoral to incentivize people to do at least some of those other things.”

Kappa: “Fine. Let’s assume that incentivizing people to take the personal risk is morally acceptable, just for the sake of argument. What makes you think that a market would improve things? How do I know you’re not the sort of person who thinks a market improves anything?”

Lambda: “Suppose you have a family member who needs a kidney transplant, and you’re not compatible. Suppose further that a stranger approaches you at the hospital and explains that they have a family member who needs a kidney and that they also aren’t compatible with their family member. However, claims the stranger, the two of you are compatible with one another’s family members. They will donate their kidney to your family member only if you will donate your kidney to their family member. Ideally, we would like this trade to take place. Would you donate your kidney? If not, why not?”

Kappa: “First I would want to know how the stranger accessed my medical records. At any rate, I don’t think I would. What if I donate first, and they back out after I donate? What if their family member dies before or during surgery and they no longer have an incentive to donate their kidney to my family member?”

Lambda: “Indeed, what if? In more than one way, it’s risky to trade kidneys as things are today. On the other hand, if you could reliably sell your kidney and buy another, you wouldn’t have to worry about being left out in the cold. Your kidney may be gone, but no one can take your revenue unless you make a big mistake. If the seller backs out, you can always try to buy another one.”

Kappa: “But there are already organizations with matchmaking programs that allow such trades to take place. They solve the trust problem with social prestige and verification procedures and other things. What more would a market get you, and how much does it matter after considering the additional problems that a market might cause? What are you really suggesting, when you can't use words like 'market'?”

Lambda: "The ban prevents the use of money in organ trades, so what do you use in its place, and what have you lost? In the place of money, you use promises that you'll donate your kidney. The first way that promises are worse than money is that they're a poor store of value. If I trade my promise for a stranger's promise, and the stranger loses their incentive to donate, then the promise loses its value. Even if I only want to use the money to buy a kidney, I would prefer receiving money because I can be confident that I can later retrieve it and exchange it for a kidney as long as someone is selling one that I want. The second way that promises are worse than money is that they're a poor medium of exchange. Because each individual promise has associated with it some specific conditions for donation, promises aren't widely acceptable in trade. At the moment, we have to set up what are essentially incredibly elaborate barters to make trades that are more complex than simple donations from one donor to one recipient. It seems like both of these factors might prevent a number of trades that could be realized even given the currently low supply, particularly trades that might occur across time."

Kappa: “Right, but like I said, what about the additional problems that markets cause? Tissues sold by corporations in the U.S. in 2000 were more expensive than tissues sold by public institutions in the EU in 2010. And some of their products aren’t even demonstrably more useful than public alternatives; they deceive consumers! How is that supposed to make things better?”

Lambda: “This is a case where I would argue that there isn’t enough regulation. It’s true that with the wrong laws you can get situations like, say, the one where corporations encourage new parents to harvest and privately store autologous cord blood for large sums even though there’s no evidence that it’s more effective than the allogenic cord blood that's stored in public banks. But is an unqualified ban the only way to stop the rent-seeking? Why couldn’t you throw that part out but keep the trust, all via regulation? Remember also that you can store cord blood in a bank, but at the moment you can only store kidneys inside of a living human body. It seems like that would make it a lot harder to arbitrage."

Kappa: "What about egalitarian concerns? Wouldn't these incentives disproportionately encourage the poor to sell their organs?"

Lambda: "Whether lifting the ban makes things more egalitarian or less depends on your reference frame. The poor will have a greater incentive to sell their organs than the rich just like the poor usually have a greater incentive to sell other things than the rich. The idea behind the egalitarian objection is that the ban prevents this and it's more egalitarian if no one can legally sell their organs at all. But illegal organs already tend to flow from the poorest countries to the richest countries for the very reasons that you fear lifting the ban, and lifting the ban decreases U.S. demand for foreign organs by increasing domestic supply. In this reference frame, lifting the ban is more egalitarian, replacing the current sellers who receive little to no compensation, high risks, and poor post-operative care, with U.S. sellers who would receive more compensation, have lower risks, and receive better post-operative care."

Kappa: “In a market, I would guess that the average recipient wants to receive a kidney a lot more than the average donor wants to donate one. This could spell disaster for a market solution. What makes you think this wouldn’t happen with a kidney market?”

Lambda: “Empirically, the Iranian organ market has eliminated kidney waitlists in that country. The U.S. and Iran may be quite different, but they'd have to be different in the particular way that makes markets work there and markets not work here for that argument to follow. Besides, the U.S. spends about $72,000 per patient per year on dialysis, whereas the U.S. only spends about $106,000 on transplant patients in the first year, and about $24,000 per transplant patient per year, so the government should be willing to subsidize kidney suppliers in the case of market failure without intervention.”
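The arithmetic behind Lambda's subsidy claim is simple enough to sketch. The dollar figures below are just the ones quoted in the dialogue, and the break-even search is an illustration, not a cost model:

```python
# Rough cumulative per-patient cost comparison, using the figures
# quoted in the dialogue: ~$72k/yr for dialysis; ~$106k for a
# transplant patient's first year, ~$24k/yr thereafter.

def dialysis_cost(years):
    """Cumulative cost of keeping a patient on dialysis."""
    return 72_000 * years

def transplant_cost(years):
    """Cumulative cost of a transplant patient after `years` whole years."""
    return 106_000 + 24_000 * (years - 1)

# Find the first whole year at which the transplant becomes cheaper.
break_even = next(y for y in range(1, 11)
                  if transplant_cost(y) < dialysis_cost(y))
print(break_even)  # → 2: the transplant is cheaper from year two onward
```

On these figures a transplant costs about $34k more than dialysis in year one but saves about $48k every year thereafter, which is why Lambda expects a subsidy to pay for itself quickly.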

Kappa: "Geez. Uh... what about impulsive donations? You'd be encouraging irresponsibility."

Lambda: "That seems like a weak one. Legislate waiting periods. And this isn't exactly a problem particular to legal kidney markets."

Kappa: "I have you now, Lambda! Even if all of these things are true, the fact remains that most people, including me, are disgusted by the very idea of exchanging our organs for money! How ever would you overcome our repulsion?"

Lambda: "You do have me, Kappa."

Kappa: "I'll grant you that, but no politician can lose by being against- I mean, what?"

Lambda stood up and walked solemnly to the window.

"How ever would I overcome your repulsion?"

[Link] Figureheads, ghost-writers and pseudonymous quant bloggers: the recent evolution of authorship in science publishing

2 Gram_Stone 28 September 2016 11:16PM

Attempts to Debias Hindsight Backfire!

7 Gram_Stone 13 June 2016 04:13PM

(Content note: A common suggestion for debiasing hindsight: try to think of many alternative historical outcomes. But thinking of too many examples can actually make hindsight bias worse.)

Followup to: Availability Heuristic Considered Ambiguous

Related to: Hindsight Bias


Hindsight bias is when people who know the answer vastly overestimate its predictability or obviousness, compared to the estimates of subjects who must guess without advance knowledge.  Hindsight bias is sometimes called the I-knew-it-all-along effect.

The way that this bias is usually explained is via the availability of outcome-related knowledge. The outcome is very salient, but the possible alternatives are not, so the probability that people claim they would have assigned to an event that has already happened gets jacked up. It's also known that knowing about hindsight bias and trying to adjust for it consciously doesn't eliminate it.

This means that most attempts at debiasing focus on making alternative outcomes more salient. One is encouraged to recall other ways that things could have happened. Even this merely attenuates the hindsight bias, and does not eliminate it (Koriat, Lichtenstein, & Fischhoff, 1980; Slovic & Fischhoff, 1977).


Remember what happened with the availability heuristic when we varied the number of examples that subjects had to recall? Crazy things happened because of the phenomenal experience of difficulty that recalling more examples caused within the subjects.

You might imagine that, if you recalled too many examples, you could actually make the hindsight bias worse, because if subjects experience alternative outcomes as difficult to generate, then they'll consider the alternatives less likely, and not more.

Relatedly, Sanna, Schwarz, and Stocker (2002, Experiment 2) presented participants with a description of the British–Gurkha War (taken from Fischhoff, 1975; you should remember this one). Depending on conditions, subjects were told either that the British or the Gurkha had won the war, or were given no outcome information. Afterwards, they were asked, “If we hadn’t already told you who had won, what would you have thought the probability of the British (Gurkhas, respectively) winning would be?”, and asked to give a probability in the form of a percentage.

Like in the original hindsight bias studies, subjects with outcome knowledge assigned a higher probability to the known outcome than subjects in the group with no outcome knowledge. (Median probability of 58.2% in the group with outcome knowledge, and 48.3% in the group without outcome knowledge.)

Some subjects, however, were asked to generate either 2 or 10 thoughts about how the outcome could have been different. Thinking of 2 alternative outcomes slightly attenuated hindsight bias (median down to 54.3%), but asking subjects to think of 10 alternative outcomes went horribly, horribly awry, increasing the subjects' median probability for the 'known' outcome all the way up to 68.0%!

It looks like we should be extremely careful when we try to retrieve counterexamples to claims that we believe. If we're too hard on ourselves and fail to take this effect into account, then we can make ourselves even more biased than we would have been if we had done nothing at all.


But it doesn't end there.

As in the availability experiments before this, we can lead subjects to discount the informational value of the experience of difficulty when generating examples of alternative historical outcomes. The subjects would then make their judgment based on the number of thoughts instead of the experience of difficulty.

Just before the 2000 U.S. presidential elections, Sanna et al. (2002, Experiment 4) asked subjects to predict the percentage of the popular vote the major candidates would receive. (They had to wait a little longer than they expected for the results.)

Later, they were asked to recall what their predictions were.

Control group subjects who listed no alternative thoughts replicated previous results on the hindsight bias.

Experimental group subjects who listed 12 alternative thoughts experienced difficulty and their hindsight bias wasn't made any better, but it didn't get worse either.

(It seems the reason it didn't get worse is because everyone thought Gore was going to win before the election, and for the hindsight bias to get worse, the subjects would have to incorrectly recall that they predicted a Bush victory.)

Other experimental group subjects listed 12 alternative thoughts and were also made to attribute their phenomenal experience of difficulty to lack of domain knowledge, via the question: "We realize that this was an extremely difficult task that only people with a good knowledge of politics may be able to complete. As background information, may we therefore ask you how knowledgeable you are about politics?" They were then made to provide a rating of their political expertise and to recall their predictions.

Because they discounted the relevance of the difficulty of recalling 12 alternative thoughts, attributing it to their lack of political domain knowledge, thinking of 12 ways that Gore could have won introduced a bias in the opposite direction! They recalled their original predictions for a Gore victory as even more confident than they actually, originally were.

We really are doomed.

Fischhoff, B. (1975). Hindsight is not equal to foresight: the effect of outcome knowledge on judgment under uncertainty. Journal of Experimental Psychology: Human Perception and Performance, 1, 288–299.

Koriat, A., Lichtenstein, S., & Fischhoff, B. (1980). Reasons for confidence. Journal of Experimental Psychology: Human Learning and Memory, 6, 107–118.

Sanna, L. J., Schwarz, N., & Stocker, S. L. (2002). When debiasing backfires: Accessible content and accessibility experiences in debiasing hindsight through mental simulations. Journal of Experimental Psychology: Learning, Memory, and Cognition, 28, 497–502.

Slovic, P., & Fischhoff, B. (1977). On the psychology of experimental surprises. Journal of Experimental Psychology: Human Perception and Performance, 3, 544–551.

Availability Heuristic Considered Ambiguous

9 Gram_Stone 10 June 2016 10:40PM

(Content note: The experimental results on the availability bias, one of the biases described in Tversky and Kahneman's original work, underdetermine the mechanism behind it, which has led to at least two separate interpretations of the heuristic in the cognitive science literature. These interpretations also result in different experimental predictions. The audience probably wants to know about this. This post is also intended to measure audience interest in a tradition of cognitive scientific research that I've been considering describing here for a while. Finally, I steal from Scott Alexander the section numbering technique that he stole from someone else: I expect it to be helpful because there are several inferential steps to take in this particular article, and it makes it look less monolithic.)

Related to: Availability


The availability heuristic is judging the frequency or probability of an event by the ease with which examples of the event come to mind.

This statement is actually slightly ambiguous. I notice at least two possible interpretations with regards to what the cognitive scientists infer is happening inside of the human mind:

  1. Humans think things like, “I found a lot of examples, thus the frequency or probability of the event is high,” or, “I didn’t find many examples, thus the frequency or probability of the event is low.”
  2. Humans think things like, “Looking for examples felt easy, thus the frequency or probability of the event is high,” or, “Looking for examples felt hard, thus the frequency or probability of the event is low.”

I think the second interpretation is the one more similar to Kahneman and Tversky’s original description, as quoted above.

And it doesn’t seem that I would be building up a strawman by claiming that some adhere to the first interpretation, intentionally or not. From Medin and Ross (1996, p. 522):

The availability heuristic refers to a tendency to form a judgment on the basis of what is readily brought to mind. For example, a person who is asked whether there are more English words that begin with the letter ‘t’ or the letter ‘k’ might try to think of words that begin with each of these letters. Since a person can probably think of more words beginning with ‘t’, he or she would (correctly) conclude that ‘t’ is more frequent than ‘k’ as the first letter of English words.

And even that sounds at least slightly ambiguous to me, although it falls on the other side of the continuum between pure mental-content-ism and pure phenomenal-experience-ism that includes the original description.


You can’t really tease out this ambiguity with the older studies on availability, because these two interpretations generate the same prediction. There is a strong correlation between the number of examples recalled and the ease with which those examples come to mind.

For example, consider a piece of the setup in Experiment 3 from the original paper on the availability heuristic. The subjects in this experiment were asked to estimate the frequency of two types of words in the English language: words with ‘k’ as their first letter, and words with ‘k’ as their third letter. There are twice as many words with ‘k’ as their third letter, but there was a bias towards estimating that there are more words with ‘k’ as their first letter.

How, in experiments like these, are you supposed to figure out whether the subjects are relying on mental content or phenomenal experience? Both mechanisms predict the outcome, "Humans will be biased towards estimating that there are more words with 'k' as their first letter." And a lot of the later studies just replicate this result in other domains, and thus suffer from the same ambiguity.


If you wanted to design a better experiment, where would you begin?

Well, if we think of feelings as sources of information in the way that we regard thoughts as sources of information, then we should find that we have some (perhaps low, perhaps high) confidence in the informational value of those feelings, as we have some level of confidence in the informational value of our thoughts.

This is useful because it suggests a method for detecting the use of feelings as sources of information: if we are led to believe that a source of information has low value, then its relevance will be discounted; and if we are led to believe that it has high value, then its relevance will be augmented. Detecting this phenomenon in the first place is probably a good place to start before trying to determine whether the classic availability studies demonstrate a reliance on phenomenal experience, mental content, or both. 

Fortunately, Wänke et al. (1995) conducted a modified replication of the experiment described above with exactly the properties that we’re looking for! Let’s start with the control condition.

In the control condition, subjects were given a blank sheet of paper and asked to write down 10 words that have ‘t’ as the third letter, and then to write down 10 words that begin with the letter ‘t’. After this listing task, they rated the extent to which words beginning with a ‘t’ are more or less frequent than words that have ‘t’ as the third letter. As in the original availability experiments, subjects estimated that words that begin with a ‘t’ are much more frequent than words with a ‘t’ in the third position.

Like before, this isn’t enough to answer the questions that we want to answer, but it can’t hurt to replicate the original result. It doesn’t really get interesting until you do things that affect the perceived value of the subjects’ feelings.

Wänke et al. got creative and, instead of blank paper, they gave subjects in two experimental conditions sheets of paper imprinted with pale blue rows of ‘t’s, and told them to write 10 words beginning with a ‘t’. One condition was told that the paper would make it easier for them to recall words beginning with a ‘t’, and the other was told that the paper would make it harder for them to recall words beginning with a ‘t’.

Subjects made to think that the magic paper made it easier to think of examples gave lower estimates of the frequency of words beginning with a ‘t’ in the English language. It felt easy to think of examples, but the experimenter made them expect that by means of the magic paper, so they discounted the value of the feeling of ease. Their estimates of the frequency of words beginning with 't' went down relative to the control condition.

Subjects made to think that the magic paper made it harder to think of examples gave higher estimates of the frequency of words beginning with a ‘t’ in the English language. It felt easy to recall examples, but the experimenter made them think it would feel hard, so they augmented the value of the feeling of ease. Their estimates of the frequency of words beginning with 't' went up relative to the control condition.
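If it helps to see the logic laid out mechanically, here's a toy model of the "feelings as information" account. The function, the numbers, and the scales are all mine, invented purely for illustration; only the direction of the predictions comes from the experiment.

```python
# Toy model of "feelings as information": a frequency estimate based on
# felt ease of recall, where the ease signal is discounted or augmented
# depending on what the subject attributes it to. All numbers are
# illustrative, not fitted to any data.

def frequency_estimate(felt_ease, expected_ease_from_cue):
    """Return a frequency estimate on an arbitrary 10..90 scale.

    felt_ease: how easy recall actually felt (0 = hard, 1 = easy).
    expected_ease_from_cue: how easy the subject was told the cue
        (the 'magic paper') would make recall (0 = harder, 1 = easier).
    The informative part of the feeling is the residual ease left over
    after subtracting what the cue is believed to explain.
    """
    informative_ease = felt_ease - 0.5 * (expected_ease_from_cue - 0.5)
    return 50 + 40 * (informative_ease - 0.5)

# Recall genuinely felt easy (0.8) in all three conditions; only the
# story about the paper differs.
control = frequency_estimate(felt_ease=0.8, expected_ease_from_cue=0.5)
told_easier = frequency_estimate(felt_ease=0.8, expected_ease_from_cue=1.0)
told_harder = frequency_estimate(felt_ease=0.8, expected_ease_from_cue=0.0)

# Matches the direction of the Wänke et al. results: discounting the
# feeling lowers the estimate, augmenting it raises the estimate.
assert told_easier < control < told_harder
```

The point of the sketch is just that the same raw feeling yields three different estimates once attribution enters the picture, which is exactly what a pure content-counting account can't produce here.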

(Also, here's a second explanation by Nate Soares if you want one.)

So, at least in this sort of experiment, it looks like the subjects weren’t counting the number of examples they came up with; it looks like they really were using their phenomenal experiences of ease and difficulty to estimate the frequency of certain classes of words. This is some evidence for the validity of the second interpretation mentioned at the beginning.


So we know that there is at least one circumstance in which the second interpretation seems valid. This was a step towards figuring out whether the availability heuristic first described by Kahneman and Tversky is an inference from amount of mental content, or an inference from the phenomenal experience of ease of recall, or something else, or some combination thereof.

As I said before, the two interpretations have identical predictions in the earlier studies. The solution to this is to design an experiment where inferences from mental content and inferences from phenomenal experience cause different judgments.

Schwarz et al. (1991, Experiment 1) asked subjects to list either 6 or 12 situations in which they behaved either assertively or unassertively. Pretests had shown that recalling 6 examples was experienced as easy, whereas recalling 12 examples was experienced as difficult. After listing examples, subjects had to evaluate their own assertiveness.

As one would expect, subjects rated themselves as more assertive when recalling 6 examples of assertive behavior than when recalling 6 examples of unassertive behavior.

But the difference in assertiveness ratings didn’t increase with the number of examples. Subjects who had to recall examples of assertive behavior rated themselves as less assertive after reporting 12 examples rather than 6 examples, and subjects who had to recall examples of unassertive behavior rated themselves as more assertive after reporting 12 examples rather than 6 examples.

If they were relying on the number of examples, then we should expect their ratings for the recalled quality to increase with the number of examples. Instead, they decreased.

It could be that it got harder to come up with good examples near the end of the task, and that later examples were lower quality than earlier examples, and the increased availability of the later examples biased the ratings in the way that we see. Schwarz acknowledged this, checked the written reports manually, and claimed that no such quality difference was evident.


It would still be nice if we could do better than taking Schwarz’s word on that though. One thing you could try is seeing what happens when you combine the methods we used in the last two experiments: vary the number of examples generated and manipulate the perceived relevance of the experiences of ease and difficulty at the same time. (Last experiment, I promise.)

Schwarz et al. (1991, Experiment 3) manipulated the perceived value of the experienced ease or difficulty of recall by having subjects listen to ‘new-age music’ played at half-speed while they worked on the recall task. Some subjects were told that this music would make it easier to recall situations in which they behaved assertively and felt at ease, whereas others were told that it would make it easier to recall situations in which they behaved unassertively and felt insecure. These manipulations make subjects perceive recall experiences as uninformative whenever the experience matches the alleged impact of the music; after all, it may simply be easy or difficult because of the music. On the other hand, experiences that are opposite to the alleged impact of the music are considered very informative.

When the alleged effects of the music were the opposite of the phenomenal experience of generating examples, the previous experimental results were replicated.

When the alleged effects of the music match the phenomenal experience of generating examples, then the experience is called into question, since you can’t tell if it’s caused by the recall task or the music.

In these matching conditions, the pattern predicted by the first interpretation of the availability heuristic holds. Thinking of 12 examples of assertive behavior makes subjects rate themselves as more assertive than thinking of 6 examples of assertive behavior; mutatis mutandis for unassertive examples. When people can’t rely on their experience, they fall back on mental content: instead of relying on how hard or easy things feel, they count.
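Here's the same two-strategy story as a toy sketch. None of this is Schwarz et al.'s actual model; the function names and parameters are mine, and only the qualitative reversal is taken from the experiment.

```python
# Toy sketch of the two judgment strategies in Schwarz et al. (1991,
# Experiment 3): an experience-based judge uses felt ease (more examples
# -> harder recall -> lower self-rating), while a content-based judge
# just counts examples. When the recall experience is discredited (e.g.
# blamed on the music), the subject falls back on counting.

def felt_ease(n_examples):
    # Pretests: recalling 6 examples feels easy, recalling 12 feels hard.
    return 1.0 if n_examples <= 6 else 0.2

def rate_assertiveness(n_examples, experience_informative=True):
    """Self-rating of assertiveness on an arbitrary 3..7 scale."""
    if experience_informative:
        # Experience-based: easy recall -> "I must be assertive."
        return 3 + 4 * felt_ease(n_examples)
    # Content-based fallback: more recalled examples -> higher rating.
    return 3 + 4 * (n_examples / 12)

# Experience trusted: 12 examples yield a LOWER rating than 6.
assert rate_assertiveness(12) < rate_assertiveness(6)
# Experience discredited: 12 examples yield a HIGHER rating than 6.
assert rate_assertiveness(12, experience_informative=False) > \
       rate_assertiveness(6, experience_informative=False)
```

The reversal between the two assertions is the whole experimental signature: the same recall task produces opposite judgments depending on whether the phenomenal experience is treated as informative.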

Under different circumstances, both interpretations are useful, but of course, it’s important to recognize that a distinction exists in the first place.

Medin, D. L., & Ross, B. H. (1996). Cognitive psychology (2nd ed.). Fort Worth: Harcourt Brace.

Schwarz, N., Bless, H., Strack, F., Klumpp, G., Rittenauer-Schatka, H., & Simons, A. (1991). Ease of retrieval as information: Another look at the availability heuristic. Journal of Personality and Social Psychology, 61, 195–202.

Tversky, A., & Kahneman, D. (1973). Availability: A heuristic for judging frequency and probability. Cognitive Psychology, 5, 207–232.

Wänke, M., Schwarz, N., & Bless, H. (1995). The availability heuristic revisited: Experienced ease of retrieval in mundane frequency estimates. Acta Psychologica, 89, 83–90.

Rationality Reading Group: Part Z: The Craft and the Community

6 Gram_Stone 04 May 2016 11:03PM

This is part of a semi-monthly reading group on Eliezer Yudkowsky's ebook, Rationality: From AI to Zombies. For more information about the group, see the announcement post.

Welcome to the Rationality reading group. This fortnight we discuss Part Z: The Craft and the Community (pp. 1651-1750). This post summarizes each article of the sequence, linking to the original LessWrong post where available.

Z. The Craft and the Community

312. Raising the Sanity Waterline - Behind every particular failure of social rationality is a larger and more general failure of social rationality; even if all religious content were deleted tomorrow from all human minds, the larger failures that permit religion would still be present. Religion may serve the function of an asphyxiated canary in a coal mine - getting rid of the canary doesn't get rid of the gas. Even a complete social victory for atheism would only be the beginning of the real work of rationalists. What could you teach people without ever explicitly mentioning religion, that would raise their general epistemic waterline to the point that religion went underwater?

313. A Sense That More Is Possible - The art of human rationality may have not been much developed because its practitioners lack a sense that vastly more is possible. The level of expertise that most rationalists strive to develop is not on a par with the skills of a professional mathematician - more like that of a strong casual amateur. Self-proclaimed "rationalists" don't seem to get huge amounts of personal mileage out of their craft, and no one sees a problem with this. Yet rationalists get less systematic training in a less systematic context than a first-dan black belt gets in hitting people.

314. Epistemic Viciousness - An essay by Gillian Russell on "Epistemic Viciousness in the Martial Arts" generalizes amazingly to possible and actual problems with building a community around rationality. Most notably the extreme dangers associated with "data poverty" - the difficulty of testing the skills in the real world. But also such factors as the sacredness of the dojo, the investment in teachings long-practiced, the difficulty of book learning that leads into the need to trust a teacher, deference to historical masters, and above all, living in data poverty while continuing to act as if the luxury of trust is possible.

315. Schools Proliferating Without Evidence - The branching schools of "psychotherapy", another domain in which experimental verification was weak (nonexistent, actually), show that an aspiring craft lives or dies by the degree to which it can be tested in the real world. In the absence of that testing, one becomes prestigious by inventing yet another school and having students, rather than excelling at any visible performance criterion. The field of hedonic psychology (happiness studies) began, to some extent, with the realization that you could measure happiness - that there was a family of measures that by golly did validate well against each other. The act of creating a new measurement creates new science; if it's a good measurement, you get good science.

316. Three Levels of Rationality Verification - How far the craft of rationality can be taken, depends largely on what methods can be invented for verifying it. Tests seem usefully stratifiable into reputational, experimental, and organizational. A "reputational" test is some real-world problem that tests the ability of a teacher or a school (like running a hedge fund, say) - "keeping it real", but without being able to break down exactly what was responsible for success. An "experimental" test is one that can be run on each of a hundred students (such as a well-validated survey). An "organizational" test is one that can be used to preserve the integrity of organizations by validating individuals or small groups, even in the face of strong incentives to game the test. The strength of solution invented at each level will determine how far the craft of rationality can go in the real world.

317. Why Our Kind Can't Cooperate - The atheist/libertarian/technophile/sf-fan/early-adopter/programmer/etc crowd, aka "the nonconformist cluster", seems to be stunningly bad at coordinating group projects. There are a number of reasons for this, but one of them is that people are as reluctant to speak agreement out loud, as they are eager to voice disagreements - the exact opposite of the situation that obtains in more cohesive and powerful communities. This is not rational either! It is dangerous to be half a rationalist (in general), and this also applies to teaching only disagreement but not agreement, or only lonely defiance but not coordination. The pseudo-rationalist taboo against expressing strong feelings probably doesn't help either.

318. Tolerate Tolerance - One of the likely characteristics of someone who sets out to be a "rationalist" is a lower-than-usual tolerance for flawed thinking. This makes it very important to tolerate other people's tolerance - to avoid rejecting them because they tolerate people you wouldn't - since otherwise we must all have exactly the same standards of tolerance in order to work together, which is unlikely. Even if someone has a nice word to say about complete lunatics and crackpots - so long as they don't literally believe the same ideas themselves - try to be nice to them? Intolerance of tolerance corresponds to punishment of non-punishers, a very dangerous game-theoretic idiom that can lock completely arbitrary systems in place even when they benefit no one at all.

319. Your Price for Joining - The game-theoretical puzzle of the Ultimatum game has its reflection in a real-world dilemma: How much do you demand that an existing group adjust toward you, before you will adjust toward it? Our hunter-gatherer instincts will be tuned to groups of 40 with very minimal administrative demands and equal participation, meaning that we underestimate the inertia of larger and more specialized groups and demand too much before joining them. In other groups this resistance can be overcome by affective death spirals and conformity, but rationalists think themselves too good for this - with the result that people in the nonconformist cluster often set their joining prices way way way too high, like a 50-way split with each player demanding 20% of the money. Nonconformists need to move in the direction of joining groups more easily, even in the face of annoyances and apparent unresponsiveness. If an issue isn't worth personally fixing by however much effort it takes, it's not worth a refusal to contribute.

320. Can Humanism Match Religion's Output? - Anyone with a simple and obvious charitable project - responding with food and shelter to a tidal wave in Thailand, say - would be better off by far pleading with the Pope to mobilize the Catholics, rather than with Richard Dawkins to mobilize the atheists. For so long as this is true, any increase in atheism at the expense of Catholicism will be something of a hollow victory, regardless of all other benefits. Can no rationalist match the motivation that comes from the irrational fear of Hell? Or does the real story have more to do with the motivating power of physically meeting others who share your cause, and group norms of participating?

321. Church vs. Taskforce - Churches serve a role of providing community - but they aren't explicitly optimized for this, because their nominal role is different. If we desire community without church, can we go one better in the course of deleting religion? There's a great deal of work to be done in the world; rationalist communities might potentially organize themselves around good causes, while explicitly optimizing for community.

322. Rationality: Common Interest of Many Causes - Many causes benefit particularly from the spread of rationality - because it takes a little more rationality than usual to see their case, as a supporter, or even just a supportive bystander. Not just the obvious causes like atheism, but things like marijuana legalization. In the case of my own work this effect was strong enough that after years of bogging down I threw up my hands and explicitly recursed on creating rationalists. If such causes can come to terms with not individually capturing all the rationalists they create, then they can mutually benefit from mutual effort on creating rationalists. This cooperation may require learning to shut up about disagreements between such causes, and not fight over priorities, except in specialized venues clearly marked.

323. Helpless Individuals - When you consider that our grouping instincts are optimized for 50-person hunter-gatherer bands where everyone knows everyone else, it begins to seem miraculous that modern-day large institutions survive at all. And in fact, the vast majority of large modern-day institutions simply fail to exist in the first place. This is why funding of Science is largely through money thrown at Science rather than donations from individuals - research isn't a good emotional fit for the rare problems that individuals can manage to coordinate on. In fact very few things are, which is why e.g. 200 million adult Americans have such tremendous trouble supervising the 535 members of Congress. Modern humanity manages to put forth very little in the way of coordinated individual effort to serve our collective individual interests.

324. Money: The Unit of Caring - Omohundro's resource balance principle implies that the inside of any approximately rational system has a common currency of expected utilons. In our world, this common currency is called "money" and it is the unit of how much society cares about something - a brutal yet obvious point. Many people, seeing a good cause, would prefer to help it by donating a few volunteer hours. But this avoids the tremendous gains of comparative advantage, professional specialization, and economies of scale - the reason we're not still in caves, the only way anything ever gets done in this world, the tools grownups use when anyone really cares. Donating hours worked within a professional specialty and paying-customer priority, whether directly, or by donating the money earned to hire other professional specialists, is far more effective than volunteering unskilled hours.

325. Purchase Fuzzies and Utilons Separately - Wealthy philanthropists typically make the mistake of trying to purchase warm fuzzy feelings, status among friends, and actual utilitarian gains, simultaneously; this results in vague pushes along all three dimensions and a mediocre final result. It should be far more effective to spend some money/effort on buying altruistic fuzzies at maximum optimized efficiency (e.g. by helping people in person and seeing the results in person), buying status at maximum efficiency (e.g. by donating to something sexy that you can brag about, regardless of effectiveness), and spending most of your money on expected utilons (chosen through sheer cold-blooded shut-up-and-multiply calculation, without worrying about status or fuzzies).

326. Bystander Apathy - The bystander effect is when groups of people are less likely to take action than an individual. There are a few explanations for why this might be the case.

327. Collective Apathy and the Internet - The causes of bystander apathy are even worse on the Internet. There may be an opportunity here for a startup to deliberately try to avert bystander apathy in online group coordination.

328. Incremental Progress and the Valley - The optimality theorems for probability theory and decision theory, are for perfect probability theory and decision theory. There is no theorem that incremental changes toward the ideal, starting from a flawed initial form, must yield incremental progress at each step along the way. Since perfection is unattainable, why dare to try for improvement? But my limited experience with specialized applications suggests that given enough progress, one can achieve huge improvements over baseline - it just takes a lot of progress to get there.

329. Bayesians vs. Barbarians - Suppose that a country of rationalists is attacked by a country of Evil Barbarians who know nothing of probability theory or decision theory. There's a certain concept of "rationality" which says that the rationalists inevitably lose, because the Barbarians believe in a heavenly afterlife if they die in battle, while the rationalists would all individually prefer to stay out of harm's way. So the rationalist civilization is doomed; it is too elegant and civilized to fight the savage Barbarians... And then there's the idea that rationalists should be able to (a) solve group coordination problems, (b) care a lot about other people and (c) win...

330. Beware of Other-Optimizing - Aspiring rationalists often vastly overestimate their own ability to optimize other people's lives. They read nineteen webpages offering productivity advice that doesn't work for them... and then encounter the twentieth page, or invent a new method themselves, and wow, it really works - they've discovered the true method. Actually, they've just discovered the one method in twenty that works for them, and their confident advice is no better than randomly selecting one of the twenty blog posts. Other-Optimizing is exceptionally dangerous when you have power over the other person - for then you'll just believe that they aren't trying hard enough.

331. Practical Advice Backed by Deep Theories - Practical advice is genuinely much, much more useful when it's backed up by concrete experimental results, causal models that are actually true, or valid math that is validly interpreted. (Listed in increasing order of difficulty.) Stripping out the theories and giving the mere advice alone wouldn't have nearly the same impact or even the same message; and oddly enough, translating experiments and math into practical advice seems to be a rare niche activity relative to academia. If there's a distinctive LW style, this is it.

332. The Sin of Underconfidence - When subjects know about a bias or are warned about a bias, overcorrection is not unheard of as an experimental result. That's what makes a lot of cognitive subtasks so troublesome - you know you're biased but you're not sure how much, and if you keep tweaking you may overcorrect. The danger of underconfidence (overcorrecting for overconfidence) is that you pass up opportunities on which you could have been successful; not challenging difficult enough problems; losing forward momentum and adopting defensive postures; refusing to put the hypothesis of your inability to the test; losing enough hope of triumph to try hard enough to win. You should ask yourself "Does this way of thinking make me stronger, or weaker?"

333. Go Forth and Create the Art! - I've developed primarily the art of epistemic rationality, in particular, the arts required for advanced cognitive reductionism... arts like distinguishing fake explanations from real ones and avoiding affective death spirals. There is much else that needs developing to create a craft of rationality - fighting akrasia; coordinating groups; teaching, training, verification, and becoming a proper experimental science; developing better introductory literature... And yet it seems to me that there is a beginning barrier to surpass before you can start creating high-quality craft of rationality, having to do with virtually everyone who tries to think lofty thoughts going instantly astray, or indeed even realizing that a craft of rationality exists and that you ought to be studying cognitive science literature to create it. It's my hope that my writings, as partial as they are, will serve to surpass this initial barrier. The rest I leave to you.


This has been a collection of notes on the assigned sequence for this fortnight. The most important part of the reading group though is discussion, which is in the comments section. Please remember that this group contains a variety of levels of expertise: if a line of discussion seems too basic or too incomprehensible, look around for one that suits you better!

This is the end, beautiful friend!

My Kind of Moral Responsibility

3 Gram_Stone 02 May 2016 05:54AM

The following is an excerpt of an exchange between Julia Galef and Massimo Pigliucci, from the transcript for Rationally Speaking Podcast episode 132:

Massimo: [cultivating virtue and 'doing good' locally 'does more good' than directly eradicating malaria]

Julia: [T]here's lower hanging fruit [in the developed world than there is in the developing world]. By many orders of magnitude, there's lower hanging fruit in terms of being able to reduce poverty or disease or suffering in some parts of the world than other parts of the world. In the West, we've picked a lot of the low hanging fruit, and by any sort of reasonable calculation, it takes much more money to reduce poverty in the West -- because we're sort of out in the tail end of having reduced poverty -- than it does to bring someone out of poverty in the developing world.

Massimo: That kind of reasoning brings you quickly to the idea that everybody here is being a really really bad person because they spent money for coming here to NECSS listening to us instead of saving children on the other side of the world. I resist that kind of logic.

Massimo (to the audience): I don't think you guys are that bad! You see what I mean?

I see a lot of people, including bullet-biters, who feel a lot of internal tension, and even guilt, because of this apparent paradox.

Utilitarians usually stop at the question, "Are the outcomes different?"

Clearly, they aren't. But people still feel tension, so it must not be enough to believe that a world where some people are alive is better than a world where those very people are dead. The confusion has not evaporated in a puff of smoke, as we should expect it to if that answer settled the matter.

After all, imagine a different gedanken where a virtue ethicist and a utilitarian each stand in front of a user interface, with each interface bearing only one shiny red button. Omega tells each, "If you press this button, then you will prevent one death. If you do not press this button, then you will not prevent one death."

There would be no disagreement. Both of them would press their buttons without a moment of hesitation.

So, in a certain sense, it's not only a question of which outcome is better. The repugnant part of the conclusion is the implication for our intuitions about moral responsibility. It's intuitive that you should save ten lives instead of one, but it's counterintuitive that the one who permits death is just as culpable as the one who causes death. You look at ten people who are alive when they could be dead, and it feels right to say that it is better that they are alive than that they are dead, but you juxtapose a murderer and your best friend who is not an ascetic, and it feels wrong to say that the one is just as awful as the other.

The virtue-ethical response is to say that the best friend has lived a good life and the murderer has not. Of course, I don't think that anyone who says this has done any real work.

So, if you passively don't donate every cent of discretionary income to the most effective charities, then are you morally culpable in the way that you would be if you had actively murdered everyone that you chose not to save who is now dead?

Well, what is moral responsibility? Hopefully we all know that there is not one culpable atom in the universe.

Perhaps the most concrete version of this question is: what happens, cognitively, when we evaluate whether or not someone is responsible for something? What's the difference between situations where we consider someone responsible and situations where we don't? What happens in the brain when we do these things? How do different attributions of responsibility change our judgments and decisions?

Most research on feelings has focused only on valence: how positivity and negativity affect judgment. But there's clearly a lot more to it than that: sadness, anger, and guilt are all negative feelings, but they're not all the same, so there must be something going on beyond valence.

One hypothesis is that the differences between sadness, anger, and guilt reflect different appraisals of agency. When we are sad, we haven't attributed the cause of the inciting event to an agent; the cause is situational, beyond human control. When we are angry, we've attributed the cause of the event to the actions of another agent. When we are guilty, we've attributed the cause of the event to our own actions.

(It's worth noting that there are many more types of appraisal than this, many more emotions, and many more feelings beyond emotions, but I'm going to focus on negative emotions and appraisals of agency for the sake of brevity. For a review of proposed appraisal types, see Demir, Desmet, & Hekkert (2009). For a review of emotions in general, check out Ortony, Clore, & Collins' The Cognitive Structure of Emotions.)
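The agency-appraisal hypothesis is simple enough to write down as a lookup: same negative valence, three different causal appraisals, three different emotions. This is just the mapping from the paragraph above restated as code; the function name is mine, not from the literature.

```python
# A minimal sketch of the agency-appraisal hypothesis: the same negative
# event yields different emotions depending on who (if anyone) is
# appraised as its cause.

def negative_emotion(appraised_cause):
    """appraised_cause: 'situation', 'other', or 'self'."""
    return {
        'situation': 'sadness',  # no agent to blame; beyond human control
        'other': 'anger',        # another agent caused the event
        'self': 'guilt',         # my own actions caused the event
    }[appraised_cause]

assert negative_emotion('situation') == 'sadness'
assert negative_emotion('other') == 'anger'
assert negative_emotion('self') == 'guilt'
```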

So, what's it look like when we narrow our attention to specific feelings on the same side of the valence spectrum? How are judgments affected when we only look at, say, sadness and anger? Might experiments based on these questions provide support for an account of our dilemma in terms of situational appraisals?

In one experiment, Keltner, Ellsworth, & Edwards (1993) found that sad subjects consider events with situational causes more likely than events with agentic causes, and that angry subjects consider events with agentic causes more likely than events with situational causes. In a second experiment in the same study, they found that sad subjects are more likely to consider situational factors as the primary cause of an ambiguous event than agentic factors, and that angry subjects are more likely to consider agentic factors as the primary cause of an ambiguous event than situational factors.

Perhaps unsurprisingly, watching someone commit murder, and merely knowing that someone could have prevented a death on the other side of the world through an unusual effort, make very different things happen in our brains. I expect that even the utilitarians are biting a fat bullet; that even the utilitarians feel the tension, the counterintuitiveness, when utilitarianism leads them to conclude that indifferent bystanders are just as bad as murderers. Intuitions are strong, and I hope that a few more utilitarians can understand why utilitarianism is just as repugnant to a virtue ethicist as virtue ethics is to a utilitarian.

My main thrust here is that "Is a bystander as morally responsible as a murderer?" is a wrong question. You're always secretly asking another question when you ask that question, and the answer often doesn't have the word 'responsibility' anywhere in it.

Utilitarians replace the question with, "Do indifference and evil result in the same consequences?" They answer, "Yes."

Virtue ethicists replace the question with, "Does it feel like indifference is as 'bad' as 'evil'?" They answer, "No."

And the one thinks, in too little detail, "They don't think that bystanders are just as bad as murderers!", and likewise, the other thinks, "They do think that bystanders are just as bad as murderers!".

And then the one and the other proceed to talk past one another for a period of time during which millions more die.

As you might expect, I must confess to a belief that the utilitarian is often the one less confused, so I will speak to that one henceforth.

As a special kind of utilitarian, the kind that frequents this community, you should know that, if you take the universe, and grind it down to the finest powder, and sieve it through the finest sieve, then you will not find one agentic atom. If you only ask the question, "Has the virtue ethicist done the moral thing?", and you silently reply to yourself, "No.", and your response is to become outraged at this, then you have failed your Art on two levels.

On the first level, you have lost sight of your goal. As if your goal were to find out whether or not someone has done the moral thing! Your goal is to cause them to commit the moral action. By your own lights, if you fail to be as creative as you can possibly be in your attempts at persuasion, then you're just as culpable as someone who purposefully turned someone away from utilitarianism as a normative-ethical position. And if all you do is scorn the virtue ethicists, instead of engaging with them, then you're definitely not being very creative.

On the second level, you have failed to apply your moral principles to yourself. You have not considered that the utility-maximizing action might be something besides getting righteously angry, even if that's the easiest thing to do. And believe me, I get it. I really do understand that impulse.

And if you are that sort of utilitarian who has come to such a repugnant conclusion epistemically, but who has failed to meet your own expectations instrumentally, then be easy now. For there is no longer a question of 'whether or not you should be guilty'. There are only questions of what guilt is used for, and whether or not that guilt ends more lives than it saves.

All of this is not to say that 'moral outrage' is never the utility-maximizing action. I'm at least a little outraged right now. But in the beginning, all you really wanted was to get rid of naive notions of moral responsibility. The action to take in this situation is not to keep them in some places and toss them in others.

Throw out the bath water, and the baby, too. The virtue ethicists are expecting it anyway.


Demir, E., Desmet, P. M. A., & Hekkert, P. (2009). Appraisal patterns of emotions in human-product interaction. International Journal of Design, 3(2), 41-51.

Keltner, D., Ellsworth, P., & Edwards, K. (1993). Beyond simple pessimism: Effects of sadness and anger on social perception. Journal of Personality and Social Psychology, 64, 740-752.

Ortony, A., Clore, G. L., & Collins, A. (1990). The cognitive structure of emotions. Cambridge University Press.

My Custom Spelling Dictionary

3 Gram_Stone 23 April 2016 09:56PM

I looked at my custom spelling dictionary in Google Chrome, and thought custom spelling dictionaries in general might be a good place for you to look if you wonder what kinds of terms you'll have to explain to people to help them understand what you mean. If something's on your list, then you would probably have to provide an explanation of its usage to a given random individual from the world population.

Here's my list:


Share yours, too, if you'd like. Maybe something interesting or useful will come out of it. Maybe there will be patterns.
