
Explanations for Less Wrong articles that you didn't understand

18 Kaj_Sotala 31 March 2014 11:19AM

ErinFlight said:

I'm struggling to understand anything technical on this website. I've enjoyed reading the sequences, and they have given me a lot to think about. Still, I've read the introduction to Bayes' theorem multiple times, and I simply can't grasp it. Even starting at the very beginning of the sequences I quickly get lost because there are references to programming and cognitive science which I simply do not understand.

Thinking about it, I realized that this might be a common concern. There are probably plenty of people who've looked at various more-or-less technical or jargony Less Wrong posts, tried understanding them, and then given up (without posting a comment explaining their confusion).

So I figured that it might be good to have a thread where you can ask for explanations for any Less Wrong post that you didn't understand and would like to, but don't want to directly comment on for any reason (e.g. because you're feeling embarrassed, because the post is too old to attract much traffic, etc.). In the spirit of various Stupid Questions threads, you're explicitly encouraged to ask even for the kinds of explanations that you feel you "should" be able to get by yourself, or where you feel like you could get it if you just put in the effort (but then never did).

You can ask to have some specific confusing term or analogy explained, or to get the main content of a post briefly summarized in plain English and without jargon, or anything else. (Of course, there are some posts that simply cannot be explained in non-technical terms, such as the ones in the Quantum Mechanics sequence.) And of course, you're encouraged to provide explanations to others!

Two arguments for not thinking about ethics (too much)

28 Kaj_Sotala 27 March 2014 02:15PM

I used to spend a lot of time thinking about formal ethics, trying to figure out whether I was leaning more towards positive or negative utilitarianism, about the best courses of action in light of the ethical theories that I currently considered the most correct, and so on. From the discussions that I've seen on this site, I expect that a lot of others have been doing the same, or at least something similar.

I now think that doing this has been more harmful than it has been useful, for two reasons: there's no strong reason to expect that this will give us very good insight into our preferred ethical theories, and more importantly, thinking in those terms will easily lead to akrasia.

1: Little expected insight

This seems like a relatively straightforward inference from all the discussion we've had about complexity of value and the limits of introspection, so I'll be brief. I think that attempting to come up with a verbal formalization of our underlying logic and then doing what that formalization dictates is akin to "playing baseball with verbal probabilities". Any introspective access we have into our minds is very limited, and at best, we can achieve an accurate characterization of the ethics endorsed by the most verbal/linguistic parts of our minds. (At least for the moment; future progress in moral psychology or neuroscience may eventually change this.) Because our morals are also derived from parts of our brains to which we don't have such access, our theories will unavoidably be incomplete. We are also prone to excessive rationalization when it comes to thinking about morality: see Joshua Greene and others for evidence suggesting that much of our verbal reasoning is actually just post-hoc rationalization of underlying moral intuitions.

One could try to make the argument from Dutch Books and consistency, and argue that if we don't explicitly formulate our ethics and work out possible contradictions, we may end up doing things that work at cross-purposes. E.g. maybe my morality says that X is good, but I don't realize this and therefore end up doing things that go against X. This is probably true to some extent, but I think that evaluating the effectiveness of various instrumental approaches (e.g. the kind of work that GiveWell is doing) is much more valuable for people who have at least a rough idea of what they want, and that the kinds of details that formal ethics focuses on (including many of the discussions on this site, such as this post of mine) are akin to trying to calculate something to the 6th digit of precision when our instruments only measure things at 3 digits of precision.

To summarize this point, I've increasingly come to think that living one's life according to the judgments of any formal ethical system gets it backwards - any such system is just a crude attempt at formalizing our various intuitions and desires, and such systems are mostly useless in determining what we should actually do. To the extent that the things that I do resemble the recommendations of utilitarianism (say), it's because my natural desires happen to align with utilitarianism's recommended courses of action, and if I say that I lean towards utilitarianism, it just means that utilitarianism produces the fewest recommendations that conflict with what I would want to do anyway.

2: Leads to akrasia

Trying to follow the formal theories can be actively harmful towards pretty much any of the goals we have, because the theories and formalizations that the verbal parts of our minds find intellectually compelling are different from the ones that actually motivate us to action.

For example, Carl Shulman comments on why one shouldn't try to follow utilitarianism to the letter:

As those who know me can attest, I often make the point that radical self-sacrificing utilitarianism isn't found in humans and isn't a good target to aim for. Almost no one would actually take on serious harm with certainty for a small chance of helping distant others. Robin Hanson often presents evidence for this, e.g. this presentation on "why doesn't anyone create investment funds for future people?" However, sometimes people caught up in thoughts of the good they can do, or a self-image of making a big difference in the world, are motivated to think of themselves as really being motivated primarily by helping others as such. Sometimes they go on to an excessive smart sincere syndrome, and try (at the conscious/explicit level) to favor altruism at the severe expense of their other motivations: self-concern, relationships, warm fuzzy feelings.

Usually this doesn't work out well, as the explicit reasoning about principles and ideals is gradually overridden by other mental processes, leading to exhaustion, burnout, or disillusionment. The situation winds up worse according to all of the person's motivations, even altruism. Burnout means less good gets done than would have been achieved by leading a more balanced life that paid due respect to all one's values. Even more self-defeatingly, if one actually does make severe sacrifices, it will tend to repel bystanders.

Even if one avoided that particular failure mode, there remains the more general problem that very few people find it easy to be generally motivated by things like "what does this abstract ethical theory say I should do next". Rather, they are motivated by e.g. a sense of empathy and a desire to prevent others from suffering. But if we focus too much on constructing elaborate ethical theories, it becomes much too easy to start thinking excessively in terms of "what would this theory say I should do" and forget entirely about the original motivation that led us to formulate that theory. Then, because an abstract theory isn't intrinsically compelling in the same way that an empathetic concern over suffering is, we end up with a feeling of obligation that we should do something (e.g. some concrete action that would reduce the suffering of others), but not an actual intrinsic desire to really do it. This leads to actions that optimize towards the goal of making that feeling of obligation go away, rather than towards the actual goal. It can manifest itself via things such as excessive procrastination. (See also this discussion of how "have-to" goals require willpower to accomplish, whereas "want-to" goals are done effortlessly.)

The following is an excerpt from Trying Not To Try by Edward Slingerland that makes the same point, discussing the example of an ancient king who thought himself selfish because he didn't care about his subjects, but who did care about his family, and who did spare the life of an ox when he couldn't bear to see its distress as it was about to be slaughtered:

Mencius also suggests trying to expand the circle of concern by beginning with familial feelings. Focus on the respect you have for the elders in your family, he tells the king, and the desire you have to protect and care for your children. Strengthen these feelings by both reflecting on them and putting them into practice. Compassion starts at home. Then, once you’re good at this, try expanding this feeling to the old and young people in other families. We have to imagine the king is meant to start with the families of his closest peers, who are presumably easier to empathize with, and then work his way out to more and more distant people, until he finally finds himself able to respect and care for the commoners. “One who is able to extend his kindness in this way will be able to care for everyone in the world,” Mencius concludes, “while one who cannot will find himself unable to care for even his own wife and children. That in which the ancients greatly surpassed others was none other than this: they were good at extending their behavior, that is all.”

Mencian wu-wei cultivation is about feeling and imagination, not abstract reason or rational arguments, and he gets a lot of support on this from contemporary science. The fact that imaginative extension is more effective than abstract reasoning when it comes to changing people’s behavior is a direct consequence of the action-based nature of our embodied mind. There is a growing consensus, for instance, that human thought is grounded in, and structured by, our sensorimotor experience of the world. In other words, we think in images. This is not to say that we necessarily think in pictures. An “image” in this sense could be the feeling of what it’s like to lift a heavy object or to slog in a pair of boots through some thick mud. [...]

Here again, Mencius seems prescient. The Mohists, like their modern utilitarian cousins, think that good behavior is the result of digital thinking. Your disembodied mind reduces the goods in the world to numerical values, does the math, and then imposes the results onto the body, which itself contributes nothing to the process. Mencius, on the contrary, is arguing that changing your behavior is an analog process: education needs to be holistic, drawing upon your embodied experience, your emotions and perceptions, and employing imagistic reflection and extension as its main tools. Simply telling King Xuan of Qi that he ought to feel compassion for the common people doesn’t get you very far. It would be similarly ineffective to ask him to reason abstractly about the illogical nature of caring for an ox while neglecting real live humans who are suffering as a result of his misrule. The only way to change his behavior—to nudge his wu-wei tendencies in the right direction—is to lead him through some guided exercises. We are analog beings living in an analog world. We think in images, which means that both learning and teaching depend fundamentally on the power of our imagination.

In his popular work on cultivating happiness, Jonathan Haidt draws on the metaphor of a rider (the conscious mind) trying to work together with and tame an elephant (the embodied unconscious). The problem with purely rational models of moral education, he notes, is that they try to “take the rider off the elephant and train him to solve problems on his own,” through classroom instruction and abstract principles. They take the digital route, and the results are predictable: “The class ends, the rider gets back on the elephant, and nothing changes at recess.” True moral education needs to be analog. Haidt brings this point home by noting that, as a philosophy major in college, he was rationally convinced by Peter Singer’s arguments for the moral superiority of vegetarianism. This cold conviction, however, had no impact on his actual behavior. What convinced Haidt to become a vegetarian (at least temporarily) was seeing a video of a slaughterhouse in action—his wu-wei tendencies could be shifted only by a powerful image, not by an irrefutable argument.

My personal experience of late has also been that thinking in terms of "what does utilitarianism dictate I should do" produces recommendations that feel like external obligations, "shoulds" that are unlikely to get done; whereas thinking about e.g. the feelings of empathy that motivated me to become utilitarian in the first place produces motivations that feel like internal "wants". I was very close to (yet another) burnout and serious depression some weeks back: a large part of what allowed me to avoid it was that I entirely stopped asking the question of what I should do, and began to focus entirely on what I want to do, including the question of which of my currently existing wants are ones that I'd wish to cultivate further. (Of course there are some things, like doing my tax returns, that I do have to do despite not wanting to, but that's a question of necessity, not ethics.) It's far too early to say whether this will actually lead to increased productivity in the long term, but for the time being at least, it feels great for my mental health.

Applying reinforcement learning theory to reduce felt temporal distance

10 Kaj_Sotala 26 January 2014 09:17AM

(cross-posted from my blog)

It is a basic principle of reinforcement learning to distinguish between reward and value, where the reward of a state is the immediate, intrinsic desirability of the state, whereas the value of the state also takes into account the rewards of the other states that you can reach from that state.

For example, suppose that I’m playing a competitive game of chess, and in addition to winning I happen to like capturing my opponent’s pieces, even when it doesn’t contribute to winning. I assign a reward of 10 points to winning, -10 to losing, 0 to a stalemate, and 1 point to each piece that I capture in the game. Now my opponent offers me a chance to capture one of his pawns, an action that would give me one point worth of reward. But when I look at the situation more closely, I see that it’s a trap: if I did capture the piece, I would be forced into a set of moves that would inevitably result in my defeat. So the value, or long-term reward, of that state is actually something close to -9.

Once I realize this, I also realize that making that move is almost exactly equivalent to agreeing to resign in exchange for my opponent letting me capture one of his pieces. My defeat won’t be instant, but by making that move, I would nonetheless be choosing to lose.
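To make the reward/value distinction concrete, here is a minimal Python sketch of the chess example; the state names, point values, and the little recursive value function are my own illustrative placeholders, not anything from an actual chess engine or reinforcement learning library.

```python
# Toy illustration of reward vs. value: capturing the pawn gives +1 immediate
# reward, but in this trap it leads inevitably to a lost game worth -10.

rewards = {
    "capture_pawn": 1,    # the bait: immediate reward for taking the piece
    "lost_game": -10,
    "decline_bait": 0,
}

# Deterministic consequences in this toy example: taking the pawn forces the loss.
transitions = {
    "capture_pawn": "lost_game",
}

def value(state, discount=1.0):
    """Value = immediate reward + (discounted) value of the state it leads to."""
    v = rewards[state]
    next_state = transitions.get(state)
    if next_state is not None:
        v += discount * value(next_state, discount)
    return v

print(value("capture_pawn"))  # -9: the "free" pawn is really an agreement to lose
print(value("decline_bait"))  # 0: declining the bait is the better move
```

The point is just that a state's value is computed by looking at what the state leads to, not at its immediate reward alone.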

Now consider a dilemma that I might be faced with when coming home late some evening. I have no food at home, but I’m feeling exhausted and don’t want to bother with going to the store, and I’ve already eaten today anyway. But I also know that if I wake up with no food in the house, then I will quickly end up with low energy, which makes it harder to go to the store, which means my energy levels will drop further, and so on until I’ll finally get something to eat much later, after wasting a long time in an uncomfortable state.

Typically, temporal discounting means that I’m aware of this in the evening, but nonetheless skip the visit to the store. The penalty from not going feels remote, whereas the discomfort of going feels close, and that ends up dominating my decision-making. Besides, I can always hope that the next morning will be an exception, and I’ll actually get myself to go to the store right from the moment when I wake up!

And I haven’t tried this out for very long, but it feels like explicitly framing the different actions in terms of reward and value could be useful in reducing the impact of that experienced distance. I skip the visit to the store because being hungry in the morning is something that seems remote. But if I think that skipping the visit is exactly the same thing as choosing to be hungry in the morning, and that the value of skipping the visit is not the momentary relief of being home earlier but rather the inevitable consequence of the causal chain that it sets in motion – culminating in hours of hunger and low energy – then that feels a lot different.

And of course, I can propagate the consequences further back in time as well: if I think that I simply won't have the energy to get food when I finally come home, then I should realize that I need to go buy the food before setting out on that trip. Otherwise I'll again set in motion a causal chain whose end result is being hungry. So then not going shopping before I leave becomes exactly the same thing as being hungry the next morning.

More examples of the same:

  • A little earlier I considered taking a shower, and realized that if I took a shower in my current state of mind, I'd inevitably turn it into a bath as well. So I wasn't really just considering whether to take a shower, but whether to take a shower *and* a bath. That said, I wasn't in a hurry to get anywhere and there didn't seem to be a big harm in also taking the bath, so I decided to go ahead with it.
  • While in the shower/bath, I started thinking about this post, and decided that I wanted to get it written. But I also wanted to enjoy my hot bath for a while longer. Considering it, I realized that staying in the bath for too long might cause me to lose my motivation for writing this, so there was a chance that staying in the bath would become the same thing as choosing not to get this written. I decided that the risk wasn't worth it, and got up.
  • If I'm going somewhere and I choose a route that causes me to walk past a fast-food place selling something that I know I shouldn't eat, and I know that the sight of that fast-food place is very likely to tempt me to eat there anyway, then choosing that particular route is the same thing as choosing to go eat something that I know I shouldn't.

Related post: Applied cognitive science: learning from a faux pas.

[link] Why Self-Control Seems (but may not be) Limited

34 Kaj_Sotala 20 January 2014 04:55PM

In another attack on the resource-based model of willpower, Michael Inzlicht, Brandon J. Schmeichel, and C. Neil Macrae have a paper called "Why Self-Control Seems (but may not be) Limited", in press at Trends in Cognitive Sciences. Ungated version here.

Some of the most interesting points:

  • Over 100 studies appear to be consistent with self-control being a limited resource, but these studies generally do not observe resource depletion directly; instead, they infer it from whether or not people's performance declines in a second self-control task.
  • The only attempts to directly measure the loss or gain of a resource have been studies measuring blood glucose, but these studies have serious limitations, the most important being an inability to replicate evidence of mental effort actually affecting the level of glucose in the blood.
  • Self-control also seems to be replenished by things such as "watching a favorite television program, affirming some core value, or even praying", which would seem to conflict with the hypothesis of inherent resource limitations. The resource-based model also seems evolutionarily implausible.

The authors offer their own theory of self-control. One-sentence summary (my formulation, not from the paper): "Our brains don't want to only work, because by doing some play on the side, we may come to discover things that will allow us to do even more valuable work."

  • Ultimately, self-control limitations are proposed to be an exploration-exploitation tradeoff, "regulating the extent to which the control system favors task engagement (exploitation) versus task disengagement and sampling of other opportunities (exploration)".
  • Research suggests that cognitive effort is inherently aversive, and that after humans have worked on some task for a while, "ever more resources are needed to counteract the aversiveness of work, or else people will gravitate toward inherently rewarding leisure instead". According to the model proposed by the authors, this allows the organism both to focus on activities that will provide it with rewards (exploitation) and to disengage from them and seek out activities which may be even more rewarding (exploration). Feelings such as boredom function to stop the organism from getting too fixated on individual tasks, and allow us to spend some time on tasks which might turn out to be even more valuable. (A toy sketch of this tradeoff follows below.)
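As a rough illustration of the exploration-exploitation framing (my own toy sketch, not anything from the paper), the classic epsilon-greedy rule captures the same idea: mostly stay engaged with the task currently estimated to be most rewarding, but occasionally disengage and sample something else in case it turns out to be better.

```python
import random

# Toy epsilon-greedy task selection; the task names and values are made up.
def choose_task(estimated_rewards, explore_prob=0.1):
    """estimated_rewards: dict mapping task name -> current reward estimate."""
    if random.random() < explore_prob:
        # Explore: disengage and sample some other opportunity.
        return random.choice(list(estimated_rewards))
    # Exploit: stay engaged with the best-known task.
    return max(estimated_rewards, key=estimated_rewards.get)

tasks = {"work_report": 3.0, "check_email": 1.0, "new_side_project": 2.0}
print(choose_task(tasks))  # usually "work_report", occasionally something else
```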

The explanation of the actual proposed psychological mechanism is good enough that it deserves to be quoted in full:

Based on the tradeoffs identified above, we propose that initial acts of control lead to shifts in motivation away from “have-to” or “ought-to” goals and toward “want-to” goals (see Figure 2). “Have-to” tasks are carried out through a sense of duty or contractual obligation, while “want-to” tasks are carried out because they are personally enjoyable and meaningful [41]; as such, “want-to” tasks feel easy to perform and to maintain in focal attention [41]. The distinction between “have-to” and “want-to,” however, is not always clear cut, with some “want-to” goals (e.g., wanting to lose weight) being more introjected and feeling more like “have-to” goals because they are adopted out of a sense of duty, societal conformity, or guilt instead of anticipated pleasure [53].

According to decades of research on self-determination theory [54], the quality of motivation that people apply to a situation ranges from extrinsic motivation, whereby behavior is performed because of external demand or reward, to intrinsic motivation, whereby behavior is performed because it is inherently enjoyable and rewarding. Thus, when we suggest that depletion leads to a shift from “have-to” to “want-to” goals, we are suggesting that prior acts of cognitive effort lead people to prefer activities that they deem enjoyable or gratifying over activities that they feel they ought to do because it corresponds to some external pressure or introjected goal. For example, after initial cognitive exertion, restrained eaters prefer to indulge their sweet tooth rather than adhere to their strict views of what is appropriate to eat [55]. Crucially, this shift from “have-to” to “want-to” can be offset when people become (internally or externally) motivated to perform a “have-to” task [49]. Thus, it is not that people cannot control themselves on some externally mandated task (e.g., name colors, do not read words); it is that they do not feel like controlling themselves, preferring to indulge instead in more inherently enjoyable and easier pursuits (e.g., read words). Like fatigue, the effect is driven by reluctance and not incapability [41] (see Box 2).

Research is consistent with this motivational viewpoint. Although working hard at Time 1 tends to lead to less control on “have-to” tasks at Time 2, this effect is attenuated when participants are motivated to perform the Time 2 task [32], personally invested in the Time 2 task [56], or when they enjoy the Time 1 task [57]. Similarly, although performance tends to falter after continuously performing a task for a long period, it returns to baseline when participants are rewarded for their efforts [58]; and remains stable for participants who have some control over and are thus engaged with the task [59]. Motivation, in short, moderates depletion [60]. We suggest that changes in task motivation also mediate depletion [61].

Depletion, however, is not simply less motivation overall. Rather, it is produced by lower motivation to engage in “have-to” tasks, yet higher motivation to engage in “want-to” tasks. Depletion stokes desire [62]. Thus, working hard at Time 1 increases approach motivation, as indexed by self-reported states, impulsive responding, and sensitivity to inherently-rewarding, appetitive stimuli [63]. This shift in motivational priorities from “have-to” to “want-to” means that depletion can increase the reward value of inherently-rewarding stimuli. For example, when depleted dieters see food cues, they show more activity in the orbitofrontal cortex, a brain area associated with coding reward value, compared to non-depleted dieters [64].

See also: Kurzban et al. on opportunity cost models of mental fatigue and resource-based models of willpower; Deregulating Distraction, Moving Towards the Goal, and Level Hopping.

To capture anti-death intuitions, include memory in utilitarianism

8 Kaj_Sotala 15 January 2014 06:27AM

EDIT: Mestroyer was the first one to find a bug that breaks this idea. Only took a couple of hours, that's ethics for you. :)

In the last Stupid Questions Thread, solipsist asked

Making a person and unmaking a person seem like utilitarian inverses, yet I don't think contraception is tantamount to murder. Why isn't making a person as good as killing a person is bad?

People raised valid points, such as ones about murder having generally bad effects on society, but most people probably have the intuition that murdering someone is bad even if the victim was a hermit whose death was never found out by anyone. It just occurred to me that the way to formalize this intuition would also solve more general problems with the way that the utility functions in utilitarianism (which I'll shorten to UFU from now on) behave.

Consider these commonly held intuitions:

  1. If a person is painlessly murdered and a new (equally happy) person is instantly created in their place, this is worse than if there was a single person who lived for the whole time.
  2. If a living person X is painlessly murdered at time T, then this is worse than if X's parents had simply chosen not to have a child at time T-20, even though both acts would have resulted in X not existing at time T+1.
Also, the next intuition isn't necessarily commonly held, since it's probably deemed mostly science fictiony, but in transhumanist circles one also sees:
  • If someone is physically dead, but not information-theoretically dead and a close enough replica of them can be constructed and brought back, then bringing them back is better than creating an entirely new person.
Assume that we think the instrumental arguments in favor of these intuitions (like societies with fewer murders being better off for everyone) are insufficient - we think that the intuitions should hold even if disregarding them had no effect on anything else. Now, many forms of utilitarianism will violate these intuitions, saying that in all cases both of the offered scenarios are equally good or equally bad.

The problem is that UFUs ignore the history of the world, looking only at individual states. By analogy to stochastic processes, we could say that UFUs exhibit the Markov property: that is to say, the value of a state depends only on that state, not on the sequence of events that preceded it. When deciding whether a possible world at time t+1 is better or worse than the actual world at t, UFUs do not look at any of the earlier times. Actually, UFUs do not really even care about the world at time t: all they do is compare the possible worlds at t+1, and choose the one with the highest happiness (or lowest suffering, or highest preference satisfaction, or…) as compared to the alternatives. As a result, they do not care about people getting murdered or resurrected, aside from the impact that this has on the general level of happiness (or whatever).

We can fix this by incorporating a history into the utility function. Suppose that a person X is born at time T: we enter the fact of "X was born" into the utility function's memory. From then on, for every future state, the UF checks whether or not X is still alive. If yes, good; if not, that state loses one point of utility. Now the UF has a very large "incentive" to keep X from getting killed: if X dies, then every future state from that moment on will be a point worse than it would otherwise have been. If we assume the lifetime of the universe to be 10^100 years, say, then with no discounting, X dying means a loss of 10^100 points of utility. If we pick an appropriate value for the "not alive anymore" penalty, then it won't be so large as to outweigh all other considerations, but it will be enough that situations with unnecessary death are evaluated as clearly worse than ones where that death could have been prevented.
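For concreteness, here is a minimal Python sketch of what such a memory-equipped utility function might look like; the class, the penalty value, and the example states are my own illustrative placeholders rather than anything specified in the post.

```python
from dataclasses import dataclass

# Once a person is recorded as born, every later state in which they are not
# alive loses a fixed penalty.

DEATH_PENALTY = 1.0

@dataclass
class WorldState:
    alive: frozenset    # the people alive in this state
    happiness: float    # total happiness in this state

class HistoryAwareUtility:
    def __init__(self):
        self.ever_born = set()    # the utility function's memory

    def evaluate(self, state: WorldState) -> float:
        self.ever_born |= state.alive           # record every birth
        dead = self.ever_born - state.alive     # anyone born who is no longer alive
        return state.happiness - DEATH_PENALTY * len(dead)

uf = HistoryAwareUtility()
print(uf.evaluate(WorldState(frozenset({"X"}), 10.0)))  # 10.0: X alive and happy
print(uf.evaluate(WorldState(frozenset({"Y"}), 10.0)))  # 9.0: equally happy world, but X is dead
```

The second evaluation shows intuition 1 in action: replacing X with an equally happy Y scores worse than keeping X alive, even though the happiness totals are identical.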

Similarly, if it becomes possible to resurrect someone from physical death, then that is better than creating an entirely new life, because it will allow us to get rid of the penalty of them being dead.

This approach could also be used to develop yet another attack on the Repugnant Conclusion, though the assumption we need to make for that might be more controversial. Suppose that X has 50 points of well-being at time T, whereas at T+1, X only has 25 points of well-being, but we have created another person Y who also has 25 points of well-being. UFUs would consider this scenario to be just as good as the one with no person Y, in which X kept their 50 points. We can block this by maintaining a memory of the peak well-being that anyone has ever had, and if they fall below their past peak well-being, applying the difference as a penalty. So if X used to have 50 points of well-being but now only has 25, then we apply an extra -25 to the utility of that scenario.

This captures the popular intuition that, while a larger population can be better, a larger population that comes at the cost of reducing the well-being of people who are currently well off is worse, even if the overall utility was somewhat greater. It's also noteworthy that if X is dead, then their well-being is 0, which is presumably worse than their peak well-being, so there's an eternal penalty applied to the value of future states where X is dead. Thus this approach, of penalizing states by the difference between the current and peak well-being of the people in those states, can be thought of as a generalization of the "penalize any state in which the people who once lived are dead" approach.
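The peak-well-being variant can be sketched in the same illustrative spirit (again my own toy construction, not a formalization from the post):

```python
# Remember each person's highest well-being so far, and penalize states by how
# far each person has fallen below their own peak; the dead count as 0.

class PeakAwareUtility:
    def __init__(self):
        self.peak = {}    # person -> highest well-being ever observed

    def evaluate(self, wellbeing: dict) -> float:
        """wellbeing maps each currently living person to their well-being level."""
        for person, level in wellbeing.items():
            self.peak[person] = max(self.peak.get(person, level), level)
        total = sum(wellbeing.values())
        shortfall = sum(peak - wellbeing.get(person, 0.0)
                        for person, peak in self.peak.items())
        return total - shortfall

uf = PeakAwareUtility()
print(uf.evaluate({"X": 50}))            # 50: X at their peak
print(uf.evaluate({"X": 25, "Y": 25}))   # 50 - 25 = 25: same total, but X fell below their peak
```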


[link] How do good ideas spread?

9 Kaj_Sotala 03 January 2014 08:19PM

http://www.newyorker.com/reporting/2013/07/29/130729fa_fact_gawande?currentPage=all

Seems related to many topics discussed on LW, such as the low adoption of cryonics and the difficulty of getting researchers convinced of AI risk.

Four weeks later, on November 18th, Bigelow published his report on the discovery of “insensibility produced by inhalation” in the Boston Medical and Surgical Journal. Morton would not divulge the composition of the gas, which he called Letheon, because he had applied for a patent. But Bigelow reported that he smelled ether in it (ether was used as an ingredient in certain medical preparations), and that seems to have been enough. The idea spread like a contagion, travelling through letters, meetings, and periodicals. By mid-December, surgeons were administering ether to patients in Paris and London. By February, anesthesia had been used in almost all the capitals of Europe, and by June in most regions of the world. [...] Within seven years, virtually every hospital in America and Britain had adopted the new discovery. [...]

Sepsis—infection—was the other great scourge of surgery. It was the single biggest killer of surgical patients, claiming as many as half of those who underwent major operations, such as a repair of an open fracture or the amputation of a limb. [...]

During the next few years, he perfected ways to use carbolic acid for cleansing hands and wounds and destroying any germs that might enter the operating field. The result was strikingly lower rates of sepsis and death. You would have thought that, when he published his observations in a groundbreaking series of reports in The Lancet, in 1867, his antiseptic method would have spread as rapidly as anesthesia.

Far from it. The surgeon J. M. T. Finney recalled that, when he was a trainee at Massachusetts General Hospital two decades later, hand washing was still perfunctory. Surgeons soaked their instruments in carbolic acid, but they continued to operate in black frock coats stiffened with the blood and viscera of previous operations—the badge of a busy practice. Instead of using fresh gauze as sponges, they reused sea sponges without sterilizing them. It was a generation before Lister’s recommendations became routine and the next steps were taken toward the modern standard of asepsis—that is, entirely excluding germs from the surgical field, using heat-sterilized instruments and surgical teams clad in sterile gowns and gloves. [...]

Did the spread of anesthesia and antisepsis differ for economic reasons? Actually, the incentives for both ran in the right direction. If painless surgery attracted paying patients, so would a noticeably lower death rate. Besides, live patients were more likely to make good on their surgery bill. Maybe ideas that violate prior beliefs are harder to embrace. To nineteenth-century surgeons, germ theory seemed as illogical as, say, Darwin’s theory that human beings evolved from primates. Then again, so did the idea that you could inhale a gas and enter a pain-free state of suspended animation. Proponents of anesthesia overcame belief by encouraging surgeons to try ether on a patient and witness the results for themselves—to take a test drive. When Lister tried this strategy, however, he made little progress. [...]

The technical complexity might have been part of the difficulty. [...] But anesthesia was no easier. [...]

So what were the key differences? First, one combatted a visible and immediate problem (pain); the other combatted an invisible problem (germs) whose effects wouldn’t be manifest until well after the operation. Second, although both made life better for patients, only one made life better for doctors. Anesthesia changed surgery from a brutal, time-pressured assault on a shrieking patient to a quiet, considered procedure. Listerism, by contrast, required the operator to work in a shower of carbolic acid. Even low dilutions burned the surgeons’ hands. You can imagine why Lister’s crusade might have been a tough sell.

This has been the pattern of many important but stalled ideas. They attack problems that are big but, to most people, invisible; and making them work can be tedious, if not outright painful. The global destruction wrought by a warming climate, the health damage from our over-sugared modern diet, the economic and social disaster of our trillion dollars in unpaid student debt—these things worsen imperceptibly every day. Meanwhile, the carbolic-acid remedies to them, all requiring individual sacrifice of one kind or another, struggle to get anywhere. [...]

The staff members I met in India had impressive experience. Even the youngest nurses had done more than a thousand child deliveries. [...] But then we hung out in the wards for a while. In the delivery room, a boy had just been born. He and his mother were lying on a cot, bundled under woollen blankets, resting. The room was coffin-cold; I was having trouble feeling my toes. [...] Voluminous evidence shows that it is far better to place the child on the mother’s chest or belly, skin to skin, so that the mother’s body can regulate the baby’s until it is ready to take over. Among small or premature babies, kangaroo care (as it is known) cuts mortality rates by a third.

So why hadn’t the nurse swaddled the two together? [...]

“The mother didn’t want it,” she explained. “She said she was too cold.”

The nurse seemed to think it was strange that I was making such an issue of this. The baby was fine, wasn’t he? And he was. He was sleeping sweetly, a tightly wrapped peanut with a scrunched brown face and his mouth in a lowercase “o.” [...]

Everything about the life the nurse leads—the hours she puts in, the circumstances she endures, the satisfaction she takes in her abilities—shows that she cares. But hypothermia, like the germs that Lister wanted surgeons to battle, is invisible to her. We picture a blue child, suffering right before our eyes. That is not what hypothermia looks like. It is a child who is just a few degrees too cold, too sluggish, too slow to feed. It will be some time before the baby begins to lose weight, stops making urine, develops pneumonia or a bloodstream infection. Long before that happens—usually the morning after the delivery, perhaps the same night—the mother will have hobbled to an auto-rickshaw, propped herself beside her husband, held her new baby tight, and ridden the rutted roads home.

From the nurse’s point of view, she’d helped bring another life into the world. If four per cent of the newborns later died at home, what could that possibly have to do with how she wrapped the mother and child? Or whether she washed her hands before putting on gloves? Or whether the blade with which she cut the umbilical cord was sterilized? [...]

A decade after the landmark findings, the idea remained stalled. Nothing much had changed. Diarrheal disease remained the world’s biggest killer of children under the age of five.

In 1980, however, a Bangladeshi nonprofit organization called BRAC decided to try to get oral rehydration therapy adopted nationwide. The campaign required reaching a mostly illiterate population. The most recent public-health campaign—to teach family planning—had been deeply unpopular. The messages the campaign needed to spread were complicated.

Nonetheless, the campaign proved remarkably successful. A gem of a book published in Bangladesh, “A Simple Solution,” tells the story. The organization didn’t launch a mass-media campaign—only twenty per cent of the population had a radio, after all. It attacked the problem in a way that is routinely dismissed as impractical and inefficient: by going door to door, person by person, and just talking. [...]

They recruited teams of fourteen young women, a cook, and a male supervisor, figuring that the supervisor would protect them from others as they travelled, and the women’s numbers would protect them from the supervisor. They travelled on foot, pitched camp near each village, fanned out door to door, and stayed until they had talked to women in every hut. They worked long days, six days a week. Each night after dinner, they held a meeting to discuss what went well and what didn’t and to share ideas on how to do better. Leaders periodically debriefed them, as well. [...]

The program was stunningly successful. Use of oral rehydration therapy skyrocketed. The knowledge became self-propagating. The program had changed the norms. [...]

As other countries adopted Bangladesh’s approach, global diarrheal deaths dropped from five million a year to two million, despite a fifty-per-cent increase in the world’s population during the past three decades. Nonetheless, only a third of children in the developing world receive oral rehydration therapy. Many countries tried to implement at arm’s length, going “low touch,” without sandals on the ground. As a recent study by the Gates Foundation and the University of Washington has documented, those countries have failed almost entirely. People talking to people is still how the world’s standards change.

Surgeons finally did upgrade their antiseptic standards at the end of the nineteenth century. But, as is often the case with new ideas, the effort required deeper changes than anyone had anticipated. In their blood-slick, viscera-encrusted black coats, surgeons had seen themselves as warriors doing hemorrhagic battle with little more than their bare hands. A few pioneering Germans, however, seized on the idea of the surgeon as scientist. They traded in their black coats for pristine laboratory whites, refashioned their operating rooms to achieve the exacting sterility of a bacteriological lab, and embraced anatomic precision over speed.

The key message to teach surgeons, it turned out, was not how to stop germs but how to think like a laboratory scientist. Young physicians from America and elsewhere who went to Germany to study with its surgical luminaries became fervent converts to their thinking and their standards. They returned as apostles not only for the use of antiseptic practice (to kill germs) but also for the much more exacting demands of aseptic practice (to prevent germs), such as wearing sterile gloves, gowns, hats, and masks. Proselytizing through their own students and colleagues, they finally spread the ideas worldwide.

In childbirth, we have only begun to accept that the critical practices aren’t going to spread themselves. Simple “awareness” isn’t going to solve anything. We need our sales force and our seven easy-to-remember messages. And in many places around the world the concerted, person-by-person effort of changing norms is under way.

I recently asked BetterBirth workers in India whether they’d yet seen a birth attendant change what she does. Yes, they said, but they’ve found that it takes a while. They begin by providing a day of classroom training for birth attendants and hospital leaders in the checklist of practices to be followed. Then they visit them on site to observe as they try to apply the lessons. [...]

Sister Seema Yadav, a twenty-four-year-old, round-faced nurse three years out of school, was one of the trainers. [...] Her first assignment was to follow a thirty-year-old nurse with vastly more experience than she had. Watching the nurse take a woman through labor and delivery, she saw how little of the training had been absorbed. [...] By the fourth or fifth visit, their conversations had shifted. They shared cups of chai and began talking about why you must wash hands even if you wear gloves (because of holes in the gloves and the tendency to touch equipment without them on), and why checking blood pressure matters (because hypertension is a sign of eclampsia, which, when untreated, is a common cause of death among pregnant women). They learned a bit about each other, too. Both turned out to have one child—Sister Seema a four-year-old boy, the nurse an eight-year-old girl. [...]

Soon, she said, the nurse began to change. After several visits, she was taking temperatures and blood pressures properly, washing her hands, giving the necessary medications—almost everything. Sister Seema saw it with her own eyes.

She’d had to move on to another pilot site after that, however. And although the project is tracking the outcomes of mothers and newborns, it will be a while before we have enough numbers to know if a difference has been made. So I got the nurse’s phone number and, with a translator to help with the Hindi, I gave her a call.

It had been four months since Sister Seema’s visit ended. I asked her whether she’d made any changes. Lots, she said. [...]

She said that she had eventually begun to see the effects. Bleeding after delivery was reduced. She recognized problems earlier. She rescued a baby who wasn’t breathing. She diagnosed eclampsia in a mother and treated it. You could hear her pride as she told her stories.

Many of the changes took practice for her, she said. She had to learn, for instance, how to have all the critical supplies—blood-pressure cuff, thermometer, soap, clean gloves, baby respiratory mask, medications—lined up and ready for when she needed them; how to fit the use of them into her routine; how to convince mothers and their relatives that the best thing for a child was to be bundled against the mother’s skin. But, step by step, Sister Seema had helped her to do it. “She showed me how to get things done practically,” the nurse said.

“Why did you listen to her?” I asked. “She had only a fraction of your experience.”

In the beginning, she didn’t, the nurse admitted. “The first day she came, I felt the workload on my head was increasing.” From the second time, however, the nurse began feeling better about the visits. She even began looking forward to them.

“Why?” I asked.

All the nurse could think to say was “She was nice.”

“She was nice?”

“She smiled a lot.”

“That was it?”

“It wasn’t like talking to someone who was trying to find mistakes,” she said. “It was like talking to a friend.”

That, I think, was the answer. Since then, the nurse had developed her own way of explaining why newborns needed to be warmed skin to skin. She said that she now tells families, “Inside the uterus, the baby is very warm. So when the baby comes out it should be kept very warm. The mother’s skin does this.”

I hadn’t been sure if she was just telling me what I wanted to hear. But when I heard her explain how she’d put her own words to what she’d learned, I knew that the ideas had spread. “Do the families listen?” I asked.

“Sometimes they don’t,” she said. “Usually, they do.”

Kurzban et al. on opportunity cost models of mental fatigue and resource-based models of willpower

19 Kaj_Sotala 06 December 2013 09:54AM

An opportunity cost model of subjective effort and task performance (h/t lukeprog) is a very interesting paper on why we accumulate mental fatigue: Kurzban et al. suggest an opportunity cost model, where intense focus on a single task means that we become less capable of using our mental resources for anything else, and accumulating mental fatigue is part of a cost-benefit calculation that encourages us to shift our attention instead of monomaniacally concentrating on just one task, which may not be the most rewarding one available. Accordingly, the amount of boredom or mental fatigue we experience with a task should track the perceived rewards from the other tasks available at the moment. A task will feel more boring/effortful if there's something more rewarding that you could be doing instead (i.e. if the opportunity costs for pursuing your current task are higher), and if it requires exclusive use of cognitive resources that could also be used for something else.

This seems to make a fair amount of intuitive/introspective sense - I had a much easier time doing stuff without getting bored as a kid, when there simply wasn't much else that I could be doing instead. And it does roughly feel like I would get more quickly bored with things in situations where more engaging pursuits were available. I'm also reminded of the thing I noticed as a kid where, if I borrowed a single book from the library, I would likely get quickly engrossed in it, whereas if I had several alternatives it would be more likely that I'd end up looking at each for a bit but never really get around to reading any of them.
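As a rough illustration of the model (my own toy formalization, not anything from Kurzban et al.'s paper), the subjective effort of staying on the current task can be thought of as tracking the value of the best alternative that the task crowds out:

```python
# Toy sketch: felt effort scales with the best forgone alternative.
def subjective_effort(alternative_values, exclusivity=1.0):
    """Felt effort of staying on the current task.

    alternative_values: perceived rewards of the other things you could be doing.
    exclusivity: how completely the current task monopolizes shared cognitive resources.
    """
    if not alternative_values:
        return 0.0  # nothing else to do: the task stays engrossing
    return exclusivity * max(alternative_values)  # cost of the best forgone alternative

# The single library book vs. the same book with a stack of others nearby:
print(subjective_effort([]))               # 0.0 -> easy to stay absorbed
print(subjective_effort([4.0, 6.0, 3.0]))  # 6.0 -> the same book now feels boring/effortful
```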

An opportunity cost model also makes more sense than resource models of willpower which, as Kurzban quite persuasively argued in his earlier book, don't really fit together with the fact that the brain is an information-processing system. My computer doesn't need to use any more electricity in situations where it "decides" to do something as opposed to not doing something, but resource models of willpower have tried to postulate that we would need more of e.g. glucose in order to maintain willpower. (Rather, it makes more sense to presume that a low level of blood sugar would shift the cost-benefit calculations in a way that led to e.g. conservation of resources.)

This isn't just Kurzban et al's opinion - the paper was published in Behavioral and Brain Sciences, which invites diverse commentary on all the papers that it publishes. In this particular case, it was surprising how muted the defenses of the resource model were. As Kurzban et al point out in their response to the responses:

As context for our expectations, consider the impact of one of the central ideas with which we were taking issue, the claim that “willpower” is a resource that is consumed when self-control is exerted. To give a sense of the reach of this idea, in the same month that our target article was accepted for publication Michael Lewis reported in Vanity Fair that no less a figure than President Barack Obama was aware of, endorsed, and based his decision-making process on the general idea that “the simple act of making decisions degrades one’s ability to make further decisions,” with Obama explaining: “I’m trying to pare down decisions. I don’t want to make decisions about what I’m eating or wearing. Because I have too many other decisions to make” (Lewis 2012).

Add to this the fact that a book based on this idea became a New York Times bestseller (Baumeister & Tierney 2011), the fact that a central paper articulating the idea (Baumeister et al. 1998) has been cited more than 1,400 times, and, more broadly, the vast number of research programs using this idea as a foundation, and we can be forgiven for thinking that we would have kicked up something of a hornet’s nest in suggesting that the willpower-as-resource model was wrong. So we anticipated no small amount of stings from the large number of scholars involved in this research enterprise. These were our expectations before receiving the commentaries.

Our expectations were not met. Take, for example, the reaction to our claim that the glucose version of the resource argument is false (Kurzban 2010a). Inzlicht & Schmeichel, scholars who have published widely in the willpower-as-resource literature, more or less casually bury the model with the remark in their commentary that the “mounting evidence points to the conclusion that blood glucose is not the proximate mechanism of depletion.” (Malecek & Poldrack express a similar view.) Not a single voice has been raised to defend the glucose model, and, given the evidence that we advanced to support our view that this model is unlikely to be correct, we hope that researchers will take the fact that none of the impressive array of scholars submitting comments defended the view to be a good indication that perhaps the model is, in fact, indefensible. Even if the opportunity cost account of effort turns out not to be correct, we are pleased that the evidence from the commentaries – or the absence of evidence – will stand as an indication to audiences that it might be time to move to more profitable explanations of subjective effort.

While the silence on the glucose model is perhaps most obvious, we are similarly surprised by the remarkably light defense of the resource view more generally. As Kool & Botvinick put it, quite correctly in our perception: “Research on the dynamics of cognitive effort have been dominated, over recent decades, by accounts centering on the notion of a limited and depletable ‘resource’” (italics ours). It would seem to be quite surprising, then, that in the context of our critique of the dominant view, arguably the strongest pertinent remarks come from Carter & McCullough, who imply that the strength of the key phenomenon that underlies the resource model – two-task “ego-depletion” studies – might be considerably less than previously thought or perhaps even nonexistent. Despite the confidence voiced by Inzlicht & Schmeichel about the two-task findings, the strongest voices surrounding the model, then, are raised against it, rather than for it. (See also Monterosso & Luo, who are similarly skeptical of the resource account.)

Indeed, what defenses there are of the resource account are not nearly as adamant as we had expected. Hagger wonders if there is “still room for a ‘resource’ account,” given the evidence that cuts against it, conceding that “[t]he ego-depletion literature is problematic.” Further, he relies largely on the argument that the opportunity cost model we offer might be incomplete, thus “leaving room” for other ideas.

(I'm leaving out discussion of some commentaries which do attempt to defend resource models.)

The model does still seem to be missing pieces, though - as one of the commentaries points out, it doesn't really address the fact that some tasks are more inherently boring than others. Some of this might be explained by the argument given in Shouts, Whispers, and the Myth of Willpower: A Recursive Guide to Efficacy (I quote the most relevant bit here), where the author suggests that "self-discipline" in some domain is really about sensitivity to feedback in that domain: a novice in some task doesn't really manage to notice the small nuances that have become so significant for an expert, so they receive little feedback for their actions and it ends up being a boring vigilance task. An expert, in contrast, will instantly notice the effects that their actions have on the system and get feedback on their progress, which in the opportunity cost model could be interpreted as raising the worthwhileness of the task they're working on. If we go with Kurzban et al.'s notion of us acquiring further information about the expected utility of the task we're working on as we continue working on it, then getting feedback from the task could possibly be read as a sign that the task is one in which we can expect to succeed.

Another missing piece with the model is that it doesn't really seem to explain the way that one can come home after a long day at work and then feel too exhausted to do anything at all - it can't really be about opportunity costs if you end up so tired that you can't come up with ~any activity that you'd want to do.

December Monthly Bragging Thread

13 Kaj_Sotala 03 December 2013 02:46PM

As in Joshua Blaine's original description (below), but this thread may be used to brag about things you've accomplished either this month (December) or the previous one (November), assuming that you haven't brought them up in an earlier Monthly Bragging Thread.

In an attempt to encourage more people to actually do awesome things (a la instrumental rationality), I am proposing a new monthly thread (can be changed to bi-weekly, should that be demanded). Your job, should you choose to accept it, is to comment on this thread explaining the most awesome thing you've done this month. You may be as blatantly proud of yourself as you feel. You may unabashedly consider yourself the coolest freaking person ever because of that awesome thing you're dying to tell everyone about. This is the place to do just that.

Remember, however, that this isn't any kind of progress thread. Nor is it any kind of proposal thread. This thread is solely for people to talk about the awesomest thing they've done all month. Not will do. Not are working on. Have already done. This is to cultivate an environment of object-level productivity rather than meta-productivity methods.

So, what's the coolest thing you've done this month?

How habits work and how you may control them

60 Kaj_Sotala 12 October 2013 12:17PM

Some highlights from The Power of Habit: Why We Do What We Do in Life And Business by Charles Duhigg, a book which seems like an invaluable resource for pretty much everyone who wants to improve their lives. The below summarizes the first three chapters of the book, as well as the appendix, for I found those to be the most valuable and generally applicable parts. These chapters discuss individual habits, while the rest of the book discusses the habits of companies and societies. The later chapters also contain plenty of interesting content (some excerpts: [1 2 3]), and help explain the nature of e.g. some institutional failures.

(See also two previous LW discussions on an online article by the author of the book.)

Chapter One: The Habit Loop - How Habits Work

When a rat first navigates a foreign environment, such as a maze, its brain is full of activity as it works to process the new environment and to learn all the environmental cues. As the environment becomes more familiar, the rat's brain becomes less and less active, until even brain structures related to memory quiet down a week later. Navigating the maze no longer requires higher processing: it has become an automatic habit.

The process of converting a complicated sequence of actions into an automatic routine is known as "chunking", and human brains carry out a similar process. These routines vary in complexity, from putting toothpaste on your toothbrush before putting it in your mouth, to getting dressed or preparing breakfast, to very complicated processes such as backing one's car out of the driveway. All of these actions initially required considerable effort to learn, but eventually they became so automatic as to be carried out without conscious attention. As soon as we identify the right cue, such as pulling out the car keys, our brain activates the stored habit and lets our conscious minds focus on something else. In order to conserve effort, the brain will attempt to turn almost any routine into a habit.

However, it can be dangerous to deactivate our brains at the wrong time, for there may be something unanticipated in the environment that will turn a previously-safe routine into something life-threatening. To help avoid such situations, our brains evaluate prospective habits using a three-stage habit loop:

continue reading »

Inferential silence

39 Kaj_Sotala 25 September 2013 12:45PM

Every now and then, I write an LW comment on some topic and feel that the contents of my comment pretty much settle the issue decisively. Instead, the comment seems to get ignored entirely - it either gets very few votes or none, nobody responds to it, and the discussion generally continues as if it had never been posted.

Similarly, every now and then I see somebody else make a post or comment that they clearly feel is decisive, but which doesn't seem very interesting to me. Either it seems to be saying something obvious, or I don't get its connection to the topic at hand in the first place.

This seems like it would be about inferential distance: either the writer doesn't know the things that make the reader experience the comment as uninteresting, or the reader doesn't know the things that make the writer experience the comment as interesting. So there's inferential silence - a sufficiently long inferential distance that a claim doesn't provoke even objections, just uncomprehending or indifferent silence.

But "explain your reasoning in more detail" doesn't seem like it would help with the issue. For one, we often don't know beforehand when people don't share our assumptions. Also, some of the comments or posts that seem to encounter this kind of a fate are already relatively long. For example, Wei Dai wondered why MIRI-affiliated people don't often respond to his posts that raise criticisms, and I essentially replied that I found the content of his post relatively obvious so didn't have much to say.

Perhaps people could more often explicitly comment if they notice that something that a poster seems to consider a big thing doesn't seem very interesting or meaningful to them, and briefly explain why? Even a sentence or two might be helpful for the original poster.
