[Link] Is the Endowment Effect Real?

7 Matt_Simpson 26 February 2013 10:47PM

Under fairly weak assumptions, the most a standard rational economic agent is willing to pay for an item they don't own (WTP) and the least they're willing to accept in exchange for that item if they already own it (WTA) should be identical. In experiments with humans, psychologists and economists have repeatedly found WTP-WTA gaps, suggesting that humans aren't rational in at least this specific way. This has been interpreted as the endowment effect* and as evidence for prospect theory. According to prospect theory, people are loss averse. Roughly, this means that, given their current ownership set, people value not losing stuff more highly than gaining stuff. Thus once someone gains ownership of something, they suddenly value it much more highly. This "endowment effect"* on one's valuation of an item has been put forth as an explanation for the observed WTP-WTA gaps.

*Wikipedia confusingly defines the endowment effect as the gap itself, i.e. as the phenomenon to be explained rather than the explanation. I suspect this is a difference in terminology between economists and psychologists, where psychologists use the wiki definition and economists use the definition I give here. However, calling the WTP-WTA gap an "endowment effect" is a bit misleading because a priori the gap may not have anything to do with endowments at all.
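Loss aversion can be made concrete with the standard prospect-theory value function. Here's a minimal sketch in Python using the functional form and parameter estimates from Tversky and Kahneman's 1992 paper (the parameters are their conventional estimates, not anything from the paper discussed below):

```python
# Prospect theory value function (Tversky & Kahneman 1992 form).
# Parameter values are their conventional estimates; this is an
# illustrative sketch, not a fitted model.
ALPHA = 0.88   # curvature for gains
BETA = 0.88    # curvature for losses
LAM = 2.25     # loss aversion coefficient: losses loom ~2.25x larger

def value(x: float) -> float:
    """Subjective value of a gain/loss x relative to the reference point."""
    if x >= 0:
        return x ** ALPHA
    return -LAM * (-x) ** BETA

# Loss aversion: losing $10 hurts more than gaining $10 feels good,
# so someone who owns the mug demands more to give it up (WTA)
# than they would pay to acquire it (WTP).
assert abs(value(-10)) > value(10)
print(value(10), value(-10))
```

Under this value function, acquiring ownership moves the reference point, which is how prospect theory generates a WTP-WTA gap.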

A paper (pdf) by Charlie Plott and Kathryn Zeiler investigates WTP-WTA gaps and it turns out that they may just be due to subjects not quite understanding the experimental protocols, particularly in the value elicitation process. Here's an important quote from their conclusion, but do read the paper for details: 

The issue explored here is not whether a WTP-WTA gap can be observed. Clearly, the experiments of KKT and others show not only that gaps can be observed, but also that they are replicable. Instead, our interest lies in the interpretation of observed gaps. The primary conclusion derived from the data reported here is that observed WTP-WTA gaps do not reflect a fundamental feature of human preferences. That is, endowment effect theory does not seem to explain observed gaps. In addition, our results suggest that observed gaps should not be interpreted as support for prospect theory.

A review of the literature reveals that WTP-WTA gaps are not reliably observed across experimental designs. Given the nature of reported experimental designs, we posited that differences in experimental procedures might account for the differences across reported results. This conjecture prompted us to develop procedures to test for the robustness of the phenomenon. We conducted comparative experiments using procedures commonly used in studies that report observed gaps (i.e., KKT). We also employed a "revealed theory" methodology to identify procedures reported in the literature that provide clues about experimenter notions regarding subject misconceptions. We then conducted experiments that implemented the union of procedures used by experimentalists to control for subject misconceptions. The comparative experiments demonstrate that WTP-WTA gaps are indeed sensitive to experimental procedures. By implementing different procedures, the phenomenon can be turned on and off. When procedures used in studies that report the gap are employed, the gap is readily observed. When a full set of controls is implemented, the gap is not observed.

The fact that the gap can be turned on and off demonstrates that interpreting gaps as support for endowment effect theory is problematic. The mere observation of the phenomenon does not support loss aversion, a very special form of preferences in which gains are valued less than losses. That the phenomenon can be turned on and off while holding the good constant supports a strong rejection of the claim that WTP-WTA gaps support a particular theory of preferences posited by prospect theory. Loss aversion might in some sense characterize preferences, but such a theory most likely does not explain observed WTP-WTA gaps. Exactly what accounts for observed WTP-WTA gaps? The thesis of this paper is that observed gaps are symptomatic of subjects' misconceptions about the nature of the experimental task. The differences reported in the literature reflect differences in experimental controls for misconceptions as opposed to differences in the nature of the commodity (e.g., candy, money, mugs, lotteries, etc.) under study.

 

The Logic of the Hypothesis Test: A Steel Man

5 Matt_Simpson 21 February 2013 06:19AM

Related to: Beyond Bayesians and Frequentists

Update: This comment by Cyan clearly explains the mistake I made - I forgot that the ordering of the hypothesis space is necessary for hypothesis testing to work. I'm not entirely convinced that NHST can't be recast in some "thin" theory of induction that may well change the details of the actual test, but I have no idea how to formalize this notion of a "thin" theory, and most of the commenters either 1) misunderstood my aim (my fault, not theirs) or 2) don't think it can be formalized.

I'm teaching an econometrics course this semester and one of the things I'm trying to do is make sure that my students actually understand the logic of the hypothesis test. You can motivate it in terms of controlling false positives, but that sort of interpretation doesn't seem to be generally applicable. Another motivation is a simple deductive syllogism with a small but very important inductive component. I'm borrowing the idea from something we discussed in a course I had with Mark Kaiser - he called it the "nested syllogism of experimentation." I think it applies equally well to most or even all hypothesis tests. It goes something like this:

1. Either the null hypothesis or the alternative hypothesis is true.

2. If the null hypothesis is true, then the data has a certain probability distribution.

3. Under this distribution, our sample is extremely unlikely.

4. Therefore under the null hypothesis, our sample is extremely unlikely.

5. Therefore the null hypothesis is false.

6. Therefore the alternative hypothesis is true.

An example looks like this:

Suppose we have a random sample from a population with a normal distribution that has an unknown mean μ and unknown variance σ². Then:

1. Either μ = μ₀ or μ ≠ μ₀, where μ₀ is some constant.

2. Construct the test statistic t = (x̄ − μ₀)/(s/√n), where n is the sample size, x̄ is the sample mean, and s is the sample standard deviation.

3. Under the null hypothesis, t has a t distribution with n − 1 degrees of freedom.

4. P(|T| ≥ |t|), where T has this t distribution, is really small under the null hypothesis (e.g. less than 0.05).

5. Therefore the null hypothesis is false.

6. Therefore the alternative hypothesis is true.
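The steps above can be sketched numerically. Here's a minimal one-sample t-test in Python using scipy.stats, with made-up data (the sample, the null value μ₀ = 0, and the 0.05 cutoff are all illustrative assumptions):

```python
import numpy as np
from scipy import stats

# Hypothetical random sample from a normal population with
# unknown mean and variance (data are made up for illustration)
rng = np.random.default_rng(0)
sample = rng.normal(loc=0.5, scale=1.0, size=30)

mu0 = 0.0                      # step 1: null hypothesis is mu = mu0
n = sample.size
xbar = sample.mean()
s = sample.std(ddof=1)         # sample standard deviation
t = (xbar - mu0) / (s / np.sqrt(n))   # step 2: test statistic

# steps 3-4: under the null, t has a t distribution with n - 1
# degrees of freedom; compute P(|T| >= |t|), the two-sided p-value
p = 2 * stats.t.sf(abs(t), df=n - 1)

# steps 5-6: the inductive leap - reject the null if p is small
if p < 0.05:
    print(f"t = {t:.2f}, p = {p:.4f}: reject the null hypothesis")
else:
    print(f"t = {t:.2f}, p = {p:.4f}: fail to reject the null")
```

Everything before the final `if` is deductive bookkeeping; only the last step, treating a small p as grounds for rejecting the null, is the inductive move discussed below.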

What's interesting to me about this process is that it almost tries to avoid induction altogether. Only the move from step 4 to 5 seems anything like an inductive argument. The rest is purely deductive - though admittedly it takes a couple premises in order to quantify just how likely our sample was and that surely has something to do with induction. But it's still a bit like solving the problem of induction by sweeping it under the rug then putting a big heavy deduction table on top so no one notices the lumps underneath. 

This sounds like it's a criticism, but actually I think it might be a virtue to minimize the amount of induction in your argument. Suppose you're really uncertain about how to handle induction. Maybe you see a lot of plausible sounding approaches, but you can poke holes in all of them. So instead of trying to actually solve the problem of induction, you set out to come up with a process which is robust to alternative views of induction. Ideally, if one or another theory of induction turns out to be correct, you'd like it to do the least damage possible to any specific inductive inferences you've made. One way to do this is to avoid induction as much as possible so that you prevent "inductive contamination" spreading to everything you believe. 

That's exactly what hypothesis testing seems to do. You start with a set of premises and keep deriving logical conclusions from them until you're forced to say "this seems really unlikely if a certain hypothesis is true, so we'll assume that the hypothesis is false" in order to get any further. Then you just keep on deriving logical conclusions with your new premise. Bayesians start yelling about the base rate fallacy in the inductive step, but they're presupposing their own theory of induction. If you're trying to be robust to inductive theories, why should you listen to a Bayesian instead of anyone else?

Now does hypothesis testing actually accomplish induction that is robust to philosophical views of induction? Well, I don't know - I'm really just spitballing here. But it does seem to be a useful steel man.

 

File Under "Keep Your Identity Small"

14 Matt_Simpson 05 April 2012 06:36PM

We know politics makes us stupid, but now there's evidence (pdf) that politics makes us less likely to consider things from another's point of view. From the abstract:

Replicating prior research, we found that participants who were outside during winter overestimated the extent to which other people were bothered by cold (Study 1), and participants who ate salty snacks without water thought other people were overly bothered by thirst (Study 2). However, in both studies, this effect evaporated when participants believed that the other people under consideration held opposing political views from their own. Participants who judged these dissimilar others were unaffected by their own strong visceral-drive states, a finding that highlights the power of dissimilarity in social judgment. Dissimilarity may thus represent a boundary condition for embodied cognition and inhibit an empathic understanding of shared out-group pain.

As Will Wilkinson notes:

Got that? We overestimate the extent to which others feel what we're feeling, unless they're on another team.

Now this isn't necessarily a negative effect - you might argue that it's bias correcting. But implicitly viewing political opponents as so different that it's not even worth thinking about things from their perspective is scary in itself.

Track Your Happiness

5 Matt_Simpson 04 May 2011 02:59AM

Track your happiness using your iphone:

For thousands of years, people have been trying to understand the causes of happiness. What is it that makes people happy? Yet it wasn’t until very recently that science has turned its attention to this issue.

Track Your Happiness.org is a new scientific research project that aims to use modern technology to help answer this age-old question. Using this site in conjunction with your iPhone, you can systematically track your happiness and find out what factors – for you personally – are associated with greater happiness. Your responses, along with those from other users of trackyourhappiness.org, will also help us learn more about the causes and correlates of happiness.

Seems like a no-brainer to use this to me, at least if you have an iphone. For those with a droid, according to their twitter feed:

the next item on the roadmap is to make track your happiness available to as many people/phones as possible.

Despite being a really cool app for managing your happiness, this is just a great idea for doing research. Now I want to take advantage of the large iphone/droid user base to learn about people in some way. Any ideas?

Ames, IA LW meetup Sunday May 8 (First Iowa Meetup!) 2pm

2 Matt_Simpson 02 May 2011 03:32PM

Economics of Bitcoin

6 Matt_Simpson 04 April 2011 05:02PM

I haven't read/listened to them, but I thought these might be interesting to the local bitcoin users:

Eli Dourado (GMU econ PhD candidate) on the economics of cryptocurrency.

Econtalk podcast - Russ Roberts (GMU econ prof) talks with Gavin Andresen, Principal of the BitCoin Virtual Currency Project, about virtual currency.

Roberts' podcast is always stimulating even if I disagree with him, and Eli is a pretty insightful guy who I've met in meatspace.

 

Science reveals how not to choke under pressure

9 Matt_Simpson 09 December 2010 04:46PM

Found via reddit, excerpt:

Choking happens when we let anxious thoughts distract us or when we start trying to consciously control motor skills best left on autopilot. ...

In her new book, Choke: What the Secrets of the Brain Reveal About Success and Failure at Work and at Play, Beilock deconstructs high-stakes moments—the ones seen around the world and the ones only our mothers care about—to explore why we sometimes falter, and why other times we nail it. ...

What goes wrong in our brain when this happens? 
Working memory, housed in the prefrontal cortex, is what allows us to do calculations in our head and reason through a problem. Unfortunately, it’s a limited resource. If we’re doing an activity that requires a lot of cognitive horsepower, such as responding to an on-the-spot question, and at the same time we’re worrying about screwing up, then suddenly we don’t have the brainpower we need.

Also, once we feel stressed, we often try to control what we’re doing in order to ensure success. So if we’re doing a task that normally operates largely outside of conscious awareness, such as an easy golf swing, what screws us up is the impulse to think about and control our actions. Suddenly we’re too attentive to what we’re doing, and all the training that has improved our motor skills is for naught, since our conscious attention is essentially hijacking motor memory. ...

How can I prevent myself from overthinking? 
You might think that writing about your worries would just make them more salient. But there is work in clinical psychology showing that writing helps limit ruminative thoughts—those negative thoughts that are very hard to shake and that seem to grow the more you dwell on them. The idea is that you cognitively outsource your worries to the page. Writing about worries for 10 minutes right before taking a standardized test is really beneficial.

So You Think You're a Bayesian? The Natural Mode of Probabilistic Reasoning

48 Matt_Simpson 14 July 2010 04:51PM

Related to: The Conjunction Fallacy, Conjunction Controversy

The heuristics and biases research program in psychology has discovered many different ways that humans fail to reason correctly under uncertainty.  In experiment after experiment, they show that we use heuristics to approximate probabilities rather than making the appropriate calculation, and that these heuristics are systematically biased. However, a tweak in the experiment protocols seems to remove the biases altogether and shed doubt on whether we are actually using heuristics. Instead, it appears that the errors are simply an artifact of how our brains internally store information about uncertainty. Theoretical considerations support this view.

EDIT: The view presented here is controversial in the heuristics and biases literature; see Unnamed's comment on this post below.

EDIT 2: The author no longer holds the views presented in this post. See this comment.

A common example of the failure of humans to reason correctly under uncertainty is the conjunction fallacy. Consider the following question:

Linda is 31 years old, single, outspoken and very bright. She majored in philosophy. As a student, she was deeply concerned with issues of discrimination and social justice, and also participated in antinuclear demonstrations.

What is the probability that Linda is:

(a) a bank teller

(b) a bank teller and active in the feminist movement

In a replication by Gigerenzer, 91% of subjects rank (b) as more probable than (a), saying that it is more likely that Linda is active in the feminist movement AND a bank teller than that Linda is simply a bank teller (1993). The conjunction rule of probability states that the probability of two things being true is less than or equal to the probability of one of those things being true. Formally, P(A & B) ≤ P(A). So this experiment shows that people violate the conjunction rule, and thus fail to reason correctly under uncertainty. The representative heuristic has been proposed as an explanation for this phenomenon. To use this heuristic, you evaluate the probability of a hypothesis by comparing how "alike" it is to the data. Someone using the representative heuristic looks at the Linda question and sees that Linda's characteristics resemble those of a feminist bank teller much more closely than that of just a bank teller, and so they conclude that Linda is more likely to be a feminist bank teller than a bank teller.
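The conjunction rule is easy to see in frequency terms, which foreshadows the reworded version of the question below. A tiny sketch with made-up counts (the numbers are purely illustrative):

```python
# Hypothetical counts among 100 people fitting Linda's description;
# the specific numbers are made up to illustrate the conjunction rule.
n_people = 100
bank_tellers = 5              # people who are bank tellers
feminist_bank_tellers = 3     # the subset who are ALSO feminists

# P(A & B) <= P(A): the conjunction picks out a subset of A,
# so it can never be more frequent than A itself.
p_a = bank_tellers / n_people
p_a_and_b = feminist_bank_tellers / n_people
assert p_a_and_b <= p_a
print(f"P(teller) = {p_a}, P(teller & feminist) = {p_a_and_b}")
```

Whatever the counts are, the feminist bank tellers are counted inside the bank tellers, which is exactly why the frequency framing makes the rule hard to violate.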

This is the standard story, but are people really using the representative heuristic in the Linda problem? Consider the following rewording of the question:

Linda is 31 years old, single, outspoken and very bright. She majored in philosophy. As a student, she was deeply concerned with issues of discrimination and social justice, and also participated in antinuclear demonstrations.

There are 100 people who fit the description above. How many of them are:

(a) bank tellers

(b) bank tellers and active in the feminist movement

Notice that the question is now strictly in terms of frequencies. Under this version, only 22% of subjects rank (b) as more probable than (a) (Gigerenzer, 1993). The only thing that changed is the question that is asked; the description of Linda (and the 100 people) remains unchanged, so the representativeness of the description for the two groups should remain unchanged. Thus people are not using the representative heuristic - at least not in general.


The Difference Between Utility and Utility

8 Matt_Simpson 02 December 2009 06:16AM

Recently I argued that the economist's utility function and the ethicist's utility function are not the same.  The nutshell argument is that they are created for different purposes - one is an attempt to describe the actions we actually take and the other is an attempt to summarize our true values (i.e., what we should do).  I just ran across a somewhat older post over at Black Belt Bayesian arguing this very point.  Excerpt:

Economics (of the neoclassical kind) models consumers and other economic actors as such utility maximizers... Utility is not something you can experience. It’s just a mathematical construct used to describe the optimization structure in your behavior...

Consequentialist ethics says an act is right if its consequences are good. Moral behavior here amounts to being a utility maximizer. What’s “utility”? It’s whatever a moral agent is supposed to strive toward. Bentham’s original utilitarianism said utility was pleasure minus pain; nowadays any consequentalist theory tends to be called “utilitarian” if it says you should maximize some measure of welfare, summed over all individuals... Take note: not all utility maximizers are utilitarians.

There’s no necessary connection between these two kinds of utility other than that they use the same math. It’s possible to make up a utilitarian theory where ethical utility is the sum of everyone’s economic utility (calibrated somehow), but this is just one of many possibilities. Anyone trying to reason about one kind of utility through the other is on shaky ground.

 

The Academic Epistemology Cross Section: Who Cares More About Status?

12 Matt_Simpson 15 November 2009 07:37PM

Bryan Caplan writes:

Almost all economic models assume that human beings are Bayesians...  It is striking, then, to realize that academic economists are not Bayesians.  And they're proud of it!

This is clearest for theorists.  Their epistemology is simple: Either something has been (a) proven with certainty, or (b) no one knows - and no intellectually respectable person will say more... 

Empirical economists' deviation from Bayesianism is more subtle.  Their epistemology is rooted in classical statistics.  The respectable researcher comes to the data an agnostic, and leaves believing "whatever the data say."  When there's no data that meets their standards, they mimic the theorists' snobby agnosticism.  If you mention "common sense," they'll scoff.  If you remind them that even classical statistics assumes that you can trust the data - and the scholars who study it - they harumph.

Robin Hanson offers an explanation:

I’ve argued that the main social function of academia is to let students, patrons, readers, etc. affiliate with credentialed-as-impressive minds.  If so, academic beliefs are secondary – the important thing is to clearly show respect to those who make impressive displays like theorems or difficult data analysis.  And the obvious way for academics to use their beliefs to show respect for impressive folks is to have academic beliefs track the most impressive recent academic work.

...beliefs must stay fixed until an impressive enough theorem or data analysis comes along that beliefs should change out of respect for that new display.  It also won’t do to keep beliefs pretty much the same when each new study hardly adds much evidence – that wouldn’t offer enough respect to the new display.

I wonder, what does this look like in the cross section?  In other words, relative to other academic disciplines, which have the strongest tendency to celebrate difficult work but ignore sound-yet-unimpressive work?  My hunch is that economics, along with most other social sciences, would be the worst offenders, while the fields closer to engineering will be on the other end of the spectrum.  Engineers should be more concerned with truth since whatever they build has to, you know, work.  What say you?  More importantly, anyone have any evidence?
