A Crash Course in the Neuroscience of Human Motivation
[PDF of this article updated Aug. 23, 2011]
Whenever I write a new article for Less Wrong, I'm pulled in two opposite directions.
One force pulls me toward writing short, exciting posts with lots of brain candy and just one main point. Eliezer has done that kind of thing very well many times: see Making Beliefs Pay Rent, Hindsight Devalues Science, Probability is in the Mind, Taboo Your Words, Mind Projection Fallacy, Guessing the Teacher's Password, Hold Off on Proposing Solutions, Applause Lights, Dissolving the Question, and many more.
Another force pulls me toward writing long, factually dense posts that fill in as many of the pieces of a particular argument in one fell swoop as possible. This is largely because I want to write about the cutting edge of human knowledge but I keep realizing that the inferential gap is larger than I had anticipated, and I want to fill in that inferential gap quickly so I can get to the cutting edge.
For example, I had to draw on dozens of Eliezer's posts just to say I was heading toward my metaethics sequence. I've also published 21 new posts (many of them quite long and heavily researched) written specifically because I need to refer to them in my metaethics sequence.1 I tried to make these posts interesting and useful on their own, but my primary motivation for writing them was that I need them for my metaethics sequence.
And now I've written only four posts2 in my metaethics sequence and already the inferential gap to my next post in that sequence is huge again. :(
So I'd like to try an experiment. I won't do it often, but I want to try it at least once. Instead of writing 20 more short posts between now and the next post in my metaethics sequence, I'll attempt to fill in a big chunk of the inferential gap to my next metaethics post in one fell swoop by writing a long tutorial post (a la Eliezer's tutorials on Bayes' Theorem and technical explanation).3
So if you're not up for a 20-page tutorial on human motivation, this post isn't for you, but I hope you're glad I bothered to write it for the sake of others. If you are in the mood for a 20-page tutorial on human motivation, please proceed.
Do Humans Want Things?
Summary: Recent posts like The Neuroscience of Desire and To what degree do we have goals? have explored the question of whether humans have desires (or 'goals'). If we don't have desires, how can we tell an AI what kind of world we 'want'? Recent work in economics and neuroscience has clarified the nature of this problem.
We begin, as is so often the case on Less Wrong, with Kahneman & Tversky.
In 1981, K&T found that human choice was not always guided by the objective value of possible outcomes, but by the way those outcomes were 'framed'.1 For example in one study, K&T told subjects the following story:
Imagine that the U.S. is preparing for the outbreak of an unusual Asian disease, which is expected to kill 600 people. Two alternative programs to combat the disease have been proposed.
Half the participants were given the following choice:
If program A is adopted, 200 people will be saved. If Program B is adopted, there is a 1/3 probability that 600 people will be saved and a 2/3 probability that no people will be saved.
The second half of participants were given a different choice:
If Program C is adopted 400 people will die. If Program D is adopted there is a 1/3 probability that nobody will die, and a 2/3 probability that 600 people will die.
The two choice sets are numerically identical; they differ only in that one is framed in terms of people being saved, and the other in terms of people dying.
In the first group, 72% of subjects chose Program A. In the second group, only 22% of people chose the numerically identical option: Program C.
K&T explained the difference by noting that in option A we consider the happy thought of saving 200 people, but in option C we confront the dreadful thought of 400 deaths. Our choice seems to depend not only on the objective properties of the options before us, but also on the reference point used to frame the options.
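The arithmetic behind the framing effect is easy to check. Here is a minimal Python sketch (my own illustration, not from K&T's paper) confirming that all four programs have the same expected number of lives saved:

```python
# Expected lives saved (of 600) under each of K&T's four programs.
# The options differ only in framing; the expected values are identical.

def expected_saved(outcomes):
    """outcomes: list of (probability, lives_saved) pairs."""
    return sum(p * saved for p, saved in outcomes)

program_a = [(1.0, 200)]              # "200 people will be saved"
program_b = [(1/3, 600), (2/3, 0)]    # gamble, framed as saving
program_c = [(1.0, 600 - 400)]        # "400 people will die"
program_d = [(1/3, 600), (2/3, 0)]    # same gamble, framed as dying

for name, prog in [("A", program_a), ("B", program_b),
                   ("C", program_c), ("D", program_d)]:
    print(name, expected_saved(prog))  # every program comes out to 200.0
```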
But if this is how human desire works, we are left with a worrying problem about how to translate human desires into the goals of an AI. Surely we don't want an AI to realize one state of affairs over another based merely on how the options are framed!
Before we begin to solve this problem, though, let's look at a similar result from neurobiology.
Assuming Nails
Tangential followup to Defeating Ugh Fields in Practice.
Somewhat related to Privileging the Hypothesis.
Edited to add:
I'm surprised by negative/neutral reviews. This means that either I'm simply wrong about what counts as interesting, or I haven't expressed my point very well. Based on commenter response, I think the problem is the latter. In the next week or so, expect a much more concise version of this post that expresses my point about epistemology without the detour through a criticism of economics.
At the beginning of my last post, I was rather uncharitable to neoclassical economics:
If I had to choose a single piece of evidence off of which to argue that the rationality assumption of neoclassical economics is totally, irretrievably incorrect, it's this article about financial incentives and medication compliance.... [to maintain that this theory is correct] is to crush reality into a theory that cannot hold it.
Some mistook this to mean that I believe neoclassical economists honestly, explicitly believe that all people are always totally rational. But, to quote Rick Moranis, "It's not what you think. It's far, far worse." The problem is that they often take the complex framework of neoclassical economics and believe that a valid deduction within this framework is a valid deduction about the real world. However, deductions within any given framework are entirely uninformative unless the framework corresponds to reality. Yet because such deductions are internally valid, we often give them far more weight than they are due: testing the fit of a theoretical framework to reality is hard, but a valid deduction within a framework feels so very satisfying. Even if you have a fantastically engineered hammer, you cannot go around assuming everything you want to use it on is a nail. It is all too common for experts to assume that their framework applies cleanly to the real world simply because it works so well in its own world.
If this concept doesn't make perfect sense, that's what the rest of this post is about: spelling out exactly how we go wrong when we misuse the essentially circular models of many sciences, and how this matters. We will begin with the one discipline in which this problem does not occur. The one discipline which appears immune to this type of problem is mathematics, the paragon of "pure" academic disciplines. This is principally because mathematics appears to have perfect conformity with reality, with no research or experimentation needed to ensure said conformity. The entire system of mathematics exists, in a sense, in its own world. You could sit in a windowless room (perhaps one with a supercomputer) and, theoretically, derive every major theorem of mathematics, given the proper axioms. The answer to the most difficult unsolved problems in mathematics was determined the moment the terms and operators within them were defined - once you say a "circle" is "the set of all points in a plane equidistant from a given center," you have already determined every single digit of pi. The problem is finding out exactly how this model works - making calculations and deductions within this model. In the case of mathematics, for whatever reason, the model conforms perfectly to the real world, so any valid mathematical deduction is a valid deduction in the real world.
This is not the case in any true science, which by necessity must rely on experiment and observation. Every science operates off of some simplified model of the world, at least with our current state of knowledge. This creates two avenues of progress: discoveries within the model, which allow one to make predictions about the world, and refinements of the model, which make such predictions more accurate. If we have an internally consistent framework, theoretical manipulation within our model will never show us our error, because our model is circular and functions outside the real world. It would be like trying to predict a stock market crash by analyzing the rules of Monopoly, except that it doesn't feel absurd. There's nothing wrong with the model qua the model, the problem is with the model qua reality, and we have to look at both of them to figure that out.
Economics is one of the fields that most suffers from this problem. Our mathematician in his windowless room could generate models of international exchange rates without ever having seen currency, once we gave him the appropriate definitions and assumptions. However, when we try using these models to forecast the future, life gets complicated. No amount of experimenting within our original model will fix this without looking at the real world. At best, we come up with some equations that appear to conform to what we observe, but we run the risk that the correspondence is incidental or that there were some (temporarily) constant variables we left out that will suddenly cease to be constant and break the whole model. It is all too easy to forget that the tremendous rigor and certainty we feel when we solve the equations of our model does not translate into the real world. Getting the "right" answer within the model is not the same thing as getting the real answer.
As an obvious practical example, an individual with a serious excess of free time could develop a model of economics which assumes that agents are rational paper-clip maximizers - that agents are rational and their ultimate concern is maximizing the number of existing paper-clips. Given even more free time and a certain amount of genius, you could even model the behaviour of irrational paper-clip maximizers, so long as you had a definition of irrational. But however refined these models are, they will remain entirely useless unless you actually have some paper-clip maximizers whose behaviour you want to predict. And even then, you would need to evaluate your predictions after they succeed or fail. Developing a great hammer is relatively useless if the thing you need to make must be put together with screws.
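To make the point concrete, here is a toy sketch of such a model (the options and the "observed" choice are invented for illustration): the deduction inside the model is flawless, but whether it predicts anything depends entirely on whether real agents fit the model.

```python
# A toy "rational paper-clip maximizer" model. The within-model
# deduction is valid; its predictive worth depends entirely on whether
# real agents actually maximize paper clips.

def model_predict(options):
    """The model: a rational paper-clip maximizer always picks the
    option yielding the most paper clips."""
    return max(options, key=lambda o: o["clips"])

options = [
    {"name": "run the factory", "clips": 1000},
    {"name": "take a vacation", "clips": 0},
]

prediction = model_predict(options)["name"]  # "run the factory"

# Suppose the (hypothetical) observed human choice is:
observed = "take a vacation"

# No amount of rigor inside the model would have revealed the mismatch;
# only comparing prediction to observation does:
print(prediction == observed)  # False
```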
There is an obvious difference in the magnitude of this problem between the sciences, and it seems to be based on the difficulty of experimenting within them. In harder sciences where experiments are fairly straightforward, like physics and chemistry, it is not terribly difficult to make models that conform well with reality. The bleeding edge of, say, physics, tends to lie in areas that are either extremely hard to observe, like the subatomic, or extremely computation-intensive. In softer sciences, experiments are very difficult, and our models rely much more on powerful assumptions, social values, and armchair reasoning.
As humans, we are both bound and compelled to use the tools we have at our disposal. The problem here is one of uncertainty. We know that most of our assumptions in economics are empirically off, but we don't know how wrong they are, or how much that matters when we make predictions. Yet the model nevertheless seeps into the very core of our picture of reality itself. We cannot feel this disconnect when we try to make predictions; a well-designed model feels so complete that there is no feeling of error when we try to apply it. This is likely because we are applying it correctly - it just doesn't apply to reality. This leads people to have high degrees of certainty and yet frequently be wrong. It would not surprise me if the failure of many experts to appreciate the model-reality gap is responsible for a large proportion of incorrect predictions.
This, unfortunately, is not the end of the problem. It gets much worse when you add a normative element into your model, when you get to call some things "efficient" or "healthful" or "normal" or "insane." There is also a serious question as to whether this false certainty is preferable to the vague unfalsifiability of even softer social sciences. But I shall save these subjects for future posts.
Cryonics Wants To Be Big
Cryonics scales very well. People who argue from the perspective that cryonics is costly are probably not aware of this fact. Even assuming you needed to come up with the lump sum all at once rather than steadily pay into life insurance, the fact is that most people would be able to afford it if most people wanted it. There are some basic physical reasons why this is the case.
So long as you keep the shape constant, for any given container the surface area follows a square law while the volume follows a cube law. For example, with a simple cube-shaped object, one side squared times 6 is the surface area; one side cubed is the volume. Spheres, domes, and cylinders are just more efficient variants on this theme. For any constant shape, if volume is multiplied by 1000, surface area only goes up by 100 times.
Surface area is where heat gains entry. Thus, if you have a huge container holding cryogenic goods (humans, in this case), it costs less per unit volume (human) than an equally well-insulated smaller container. A way to understand why this works is to realize that you only have to insulate and cool the outside edge -- the inside does not collect any new heat. In short, by multiplying by a thousand patients, you cut the thermal transfer to overcome per patient to a tenth, with no change in R-value.
But you aren't limited to using equal thickness of insulation. You can use thicker insulation, but get a much smaller proportional effect on total surface area when you use bigger container volumes. Imagine the difference between a marble sized freezer and a house-sized freezer. What happens when you add an extra foot of insulation to the surface of each? Surface area is impacted much as diameter is -- i.e. more significantly in the case of the smaller freezer than the larger one. The outer edge of the insulation is where it begins collecting heat. With a truly gigantic freezer, you could add an entire meter (or more) of insulation without it having a significant proportional impact on surface area, compared to how much surface area it already has. (This is one reason cheaper materials can be used to construct large tanks -- they can be applied in thicker layers.)
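The square-cube arithmetic in the last few paragraphs can be checked in a few lines of Python (a sketch of the scaling argument only, not an engineering calculation):

```python
# Square-cube law for a cube-shaped storage unit: multiply the volume
# by 1000 and the surface area (where heat leaks in) grows only 100x.

def cube_stats(side):
    return {"area": 6 * side ** 2, "volume": side ** 3}

small = cube_stats(1.0)    # a 1 m cube
big = cube_stats(10.0)     # a 10 m cube: 1000x the volume of the small one

area_ratio = big["area"] / small["area"]        # 100.0
volume_ratio = big["volume"] / small["volume"]  # 1000.0

# With identical insulation, heat influx is proportional to area, so the
# heat load per unit of stored volume drops to a tenth:
print(area_ratio / volume_ratio)  # 0.1
```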
Another factor to take into account is that liquid nitrogen, the super-cheap coolant used by cryonics facilities around the world, is vastly cheaper (by more than a factor of 10) when purchased in huge quantities of several tons. The scaling factors for storage tanks and high-capacity tanker trucks are a big part of the reason for this. CI has used bulk purchasing as a mechanism for getting their prices down to $100 per patient per year for their newer tanks. They actually store 3,000 gallons of the stuff and use it slowly over time, which means there is a boiloff rate associated with the 3,000-gallon supply tank in addition to the patient storage tanks.
The conclusion I get from this is that there is a very strong self-interested case (as well as the altruistic case) to be made for the promotion of megascale cryonics towards the mainstream, as opposed to small independently run units for a few of us die-hard futurists. People who say they won't sign up for cost reasons may actually (if they are sincere) be reachable at a later date. To deal with such people's objections and make sure they remain reachable, it might be smart to get them to agree on some particular hypothetical price point at which they would feel it is justified. In large enough quantities, it is conceivable that indefinite storage costs would be as low as $50 per person, or 50 cents per year.
That is much cheaper than saving a life any other way. Of course there's still the risk that it might not work. However, given a sufficient chance of it working, it could still be morally superior to other life-saving strategies that cost more money. It also has inherent ecological advantages over other forms of life-saving in that it temporarily reduces the active population, giving the environment a chance to recover and green tech more time to take hold, so that the preserved can eventually be supported sustainably and comfortably. And we might consider the advent of life-health extension in the future to be a reason to think it a qualitatively better form of life-saving.
Note: This article only looks directly at cooling energy costs; construction and ongoing maintenance do not necessarily scale as dramatically. The same goes for stabilization (which I view as a separate though indispensable enterprise). Both of these do have obvious scaling factors, however. Other issues to consider are defense and reliability. Given the large storage mass involved, it is feasible to prevent temperature fluctuations without sitting at the exact boiling point of LN2; such a system could be both highly failsafe and could maintain the ideal cryonics temperature of -135C rather than the -196C required when LN2 boiloff is used as the temperature regulation mechanism. Feel free to raise further issues in the comments.
What Cost for Irrationality?
This is the first part in a mini-sequence presenting content from Keith E. Stanovich's excellent book What Intelligence Tests Miss: The psychology of rational thought. It will culminate in a review of the book itself.
People who care a lot about rationality may frequently be asked why they do so. There are various answers, but I think that many of the ones discussed here won't be very persuasive to people who don't already have an interest in the issue. But in real life, most people don't try to stay healthy because of various far-mode arguments for the virtue of health: instead, they try to stay healthy in order to avoid various forms of illness. In the same spirit, I present you with a list of real-world events that have been caused by failures of rationality, so that you might better persuade others that this is important.
What happens if you, or the people around you, are not rational? Well, in order from least serious to worst, you may...
Have a worse quality of living. Status quo bias is a general human tendency to prefer the default state, regardless of whether the default is actually good or not. In the 1980s, Pacific Gas and Electric conducted a survey of their customers. Because the company was serving a lot of people in a variety of regions, some of their customers suffered from more outages than others. Pacific Gas asked customers with unreliable service whether they'd be willing to pay extra for more reliable service, and customers with reliable service whether they'd be willing to accept a less reliable service in exchange for a discount. The customers were presented with increases and decreases of various percentages, and asked which ones they'd be willing to accept. The percentages were the same for both groups, except that one group was offered increases and the other decreases. Even though both groups had the same income, customers of both groups overwhelmingly wanted to stay with their status quo. Yet the service difference between the groups was large: the unreliable-service group suffered 15 outages per year of 4 hours' average duration, while the reliable-service group suffered 3 outages per year of 2 hours' average duration! (Though note caveats.)
A study by Philips Electronics found that one half of their products had nothing wrong in them, but the consumers couldn't figure out how to use the devices. This can be partially explained by egocentric bias on behalf of the engineers. Cognitive scientist Chip Heath notes that he has "a DVD remote control with 52 buttons on it, and every one of them is there because some engineer along the line knew how to use that button and believed I would want to use it, too. People who design products are experts... and they can't imagine what it's like to be as ignorant as the rest of us."
Suffer financial harm. John Allen Paulos is a professor of mathematics at Temple University. Yet he fell prey to serious irrationality which began when he purchased WorldCom stock at $47 per share in early 2000. As bad news about the industry began mounting, WorldCom's stock price started falling - and as it did so, Paulos kept buying, regardless of accumulating evidence that he should be selling. Later on, he admitted that his "purchases were not completely rational" and that "I bought shares even though I knew better". He was still buying - partially on borrowed money - when the stock price was $5. When it momentarily rose to $7, he finally decided to sell. Unfortunately, he couldn't get off work until after the market had closed, and by the next market day the stock had lost a third of its value. Paulos finally sold everything, at a huge loss.
Defeating Ugh Fields In Practice
Unsurprisingly related to: Ugh fields.
If I had to choose a single piece of evidence off of which to argue that the rationality assumption of neoclassical economics is totally, irretrievably incorrect, it's this article about financial incentives and medication compliance. In short, offering people small cash incentives vastly improves their adherence to life-saving medical regimens. That's right. For a significant number of people, a small chance at winning $10-100 can be the difference between whether or not they stick to a regimen that has a very good chance of saving their life. This technique has even shown promise in getting drug addicts and psychiatric patients to adhere to their regimens, for as little as a $20 gift certificate. This problem, in the aggregate, is estimated to cost about 5% of total health care spending (some $100 billion), and that may not properly account for the utility lost by those who are harmed beyond repair. To claim that people are making a reasoned decision between the payoffs of taking and not-taking their medication, and that they can be persuaded to change their behaviour by a payoff of about $900 a year (or less), is to crush reality into a theory that cannot hold it. This is doubly true when you consider that some of these people were fairly affluent.
A likely explanation of this detrimental irrationality is something close to an Ugh field. It must be miserable having a life-threatening illness. Being reminded of it by taking a pill every single day (or more frequently) is not pleasant. Then there's the question of whether you already took the pill. Because if you take it twice in one day, you'll end up in the hospital. And Heaven forfend your treatment involves needles. Thus, people avoid taking their medicine because the process becomes so unpleasant, even though they know they really should be taking it.
As this experiment shows, this serious problem has a simple and elegant solution: make taking their medicine fun. As one person in the article describes it, using a low-reward lottery made taking his meds "like a game;" he couldn't wait to check the dispenser to see if he'd won (and take his meds again). Instead of thinking about how they have some terrible condition, patients get excited thinking about how they could be winning money. The Ugh field has been demolished, with the once-feared procedure now associated with a tried-and-true intermittent reward system. It also wouldn't surprise me in the least if people who are unlikely to adhere to a medical regimen are the kind of people who really enjoy playing the lottery.
Blue- and Yellow-Tinted Choices
A man comes to the rabbi and complains about his life: "I have almost no money, my wife is a shrew, and we live in a small apartment with seven unruly kids. It's messy, it's noisy, it's smelly, and I don't want to live."
The rabbi says, "Buy a goat."
"What? I just told you there's hardly room for nine people, and it's messy as it is!"
"Look, you came for advice, so I'm giving you advice. Buy a goat and come back in a month."
In a month the man comes back and he is even more depressed: "It's gotten worse! The filthy goat breaks everything, and it stinks and makes more noise than my wife and seven kids! What should I do?"
The rabbi says, "Sell the goat."
A few days later the man returns to the rabbi, beaming with happiness: "Life is wonderful! We enjoy every minute of it now that there's no goat - only the nine of us. The kids are well-behaved, the wife is agreeable - and we even have some money!"
-- traditional Jewish joke
Related to: Anchoring and Adjustment
Biases are “cognitive illusions” that work on the same principle as optical illusions, and a knowledge of the latter can be profitably applied to the former. Take, for example, these two cubes (source: Lotto Lab, via Boing Boing):

The “blue” tiles on the top face of the left cube are the same color as the “yellow” tiles on the top face of the right cube; if you're skeptical you can prove it with the eyedropper tool in Photoshop (in which both shades come out a rather ugly gray).
The illusion works because visual perception is relative. Outdoor light on a sunny day can be ten thousand times greater than a fluorescently lit indoor room. As one psychology book put it: for a student reading this book outside, the black print will be objectively lighter than the white space will be for a student reading the book inside. Nevertheless, both students will perceive the white space as subjectively white and the black space as subjectively black, because the visual system returns to consciousness information about relative rather than absolute lightness. In the two cubes, the visual system takes the yellow or blue tint as a given and outputs to consciousness the colors of each pixel compared to that background.
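A crude way to illustrate "relative rather than absolute lightness" is a toy model of my own (the numeric values are invented, and real vision is far subtler): report each tile's lightness relative to its local surround, and the same objective gray comes out differently in different contexts.

```python
# Toy model of relative lightness: the visual system reports each tile's
# lightness relative to its local surround, not on an absolute scale.
# (An illustration of the principle only; real vision is far subtler.)

def perceived(tile, surround_mean):
    # Positive: looks lighter than its context; negative: looks darker.
    return tile - surround_mean

gray = 128                 # the same objective gray appears in both cubes
bluish_context = 160       # bright, blue-tinted surround (assumed value)
yellowish_context = 96     # dimmer, yellow-tinted surround (assumed value)

print(perceived(gray, bluish_context))     # -32: reported as darker
print(perceived(gray, yellowish_context))  # 32: reported as lighter
```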
So this optical illusion occurs when the brain judges quantities relative to their surroundings rather than based on some objective standard. What's the corresponding cognitive illusion?
The Price of Life
Less Wrong readers are familiar with the idea that you can and should put a price on life. Unfortunately, the Big Lie that you can't and shouldn't has big consequences in the current health care debate. Here are some articles on it:
Yvain's blog post here (HT: Vladimir Nesov).
Peter Singer's article on rationing health care here.
Wikipedia here.
Experts and policy makers who debate this issue here.
For those new to Less Wrong, here's the crux of Peter Singer's reasoning as to why you can put a price on life:
Case study: Melatonin
I discuss melatonin's effects on sleep & its safety; I segue into the general benefits of sleep and the severely disrupted sleep of the modern Western world, the cost of melatonin use and the benefit (eg. enforcing regular bedtimes), followed by a basic cost-benefit analysis of melatonin concluding that the net profit is large enough to be worth giving it a try barring unusual conditions or very pessimistic safety estimates.
Full essay: http://www.gwern.net/Melatonin
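The shape of such a cost-benefit calculation can be sketched as follows. Every number below is a placeholder assumption of mine, not a figure from the essay; the point is only that a very cheap intervention with even a modest nightly benefit comes out ahead.

```python
# Hypothetical melatonin cost-benefit sketch. All numbers are assumed
# placeholders, not figures from the essay.

cost_per_dose = 0.01      # assumed: dollars per night
nights_per_year = 365
minutes_saved = 15        # assumed: faster sleep onset per night
value_per_hour = 10.0     # assumed: dollars an hour of one's time is worth

yearly_cost = cost_per_dose * nights_per_year  # about $3.65
yearly_benefit = (minutes_saved / 60) * value_per_hour * nights_per_year

# Net profit per year under these assumptions; positive means "worth a try"
print(round(yearly_benefit - yearly_cost, 2))
```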
Disclosure vs. Bans: Reply to Robin Hanson
A little while back I wrote a post arguing that the existence of abusive terms in credit card contracts (such as huge jumps in interest rates for being one day late with a payment) does not satisfy the conditions for standard economic models of asymmetric information between rational agents, but rather reflects trickery, pure and simple. If this is right, then the standard remedy of mandating the provision of more information to the less-informed party, but not otherwise interfering in the market (the idea being that any voluntary agreement must make both parties better off, no matter how strange or one-sided the terms may appear, so any interference in contracts beyond providing information will reduce welfare), is not the right one. There is no decent argument that those terms would appear in any contract where both parties knew what they were doing, so if you see terms like that, the appropriate conclusion is that someone has been screwed, not that the Goddess of Capitalism, in her infinite-but-inscrutable wisdom, has uncovered the only terms that, strange as they may seem to mere mortals, make a mutually beneficial contract possible. The goal is to get rid of those terms, and the most direct way to do that is simply to prohibit them. There are some good reasons to be reluctant to have the government go around prohibiting things, so mandatory disclosure might still be a good policy (though the Federal Reserve has investigated this and concluded that it isn't), but the goal would be to use the disclosures to eliminate the abusive terms. There is no justification for the standard economist's agnosticism about whether the terms are good or not: they're bad, and the only question is how best to get rid of them.
Robin Hanson left some comments to that post, in which he made the point that since people voluntarily choose these terms, they must like them, and so prohibiting them would mean protecting people against their will. I answered that while I'm enough of a paternalist to be willing, under some circumstances, to impose limited protections on people even if those people would oppose them, I didn't think that was an issue here, as I would guess (though I have no proof) that the Federal Reserve's recent decision to ban certain credit card practices was probably very popular, even (especially?) among the people who are harmed by those practices. Robin's reply, as I understand it, is that this may be true, but since people can't simultaneously want to accept credit cards with those terms and at the same time favor banning those terms, it must be the case that they either don't understand the terms of the credit card contracts or they don't understand the effects of the ban. Somewhere there must just be some missing information, and therefore we must be back where we started, with the problem being a lack of information that could be resolved by providing more information.