Barry Schwartz's The Paradox of Choice—which I haven't read, though I've read some of the research behind it—talks about how offering people more choices can make them less happy.

A simple intuition says this shouldn't ought to happen to rational agents:  If your current choice is X, and you're offered an alternative Y that's worse than X, and you know it, you can always just go on doing X.  So a rational agent shouldn't do worse by having more options.  The more available actions you have, the more powerful you become—that's how it should ought to work.

For example, if an ideal rational agent is initially forced to take only box B in Newcomb's Problem, and is then offered the additional choice of taking both boxes A and B, the rational agent shouldn't regret having more options.  Such regret indicates that you're "fighting your own ritual of cognition" which helplessly selects the worse choice once it's offered you.

But this intuition only governs extremely idealized rationalists, or rationalists in extremely idealized situations.  Bounded rationalists can easily do worse with strictly more options, because they burn computing operations to evaluate them.  You could write an invincible chess program in one line of Python if its only legal move were the winning one.
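(Rendered literally, that one-liner might look like the sketch below; position.legal_moves() is a hypothetical interface, and the invincibility is by stipulation, not skill.)

    # One line of Python: no search, no evaluation, no computing operations burned,
    # because the option set has been trimmed (by stipulation) to the single winning move.
    invincible_chess_program = lambda position: position.legal_moves()[0]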

Of course Schwartz and co. are not talking about anything so pure and innocent as the computing cost of having more choices.

If you're dealing, not with an ideal rationalist, not with a bounded rationalist, but with a human being—

Say, would you like to finish reading this post, or watch this surprising video instead?

Schwartz, I believe, talks primarily about the decrease in happiness and satisfaction that results from having more mutually exclusive options.  Before this research was done, it was already known that people are more sensitive to losses than to gains, generally by a factor of between 2 and 2.5 (across various experimental scenarios).  That is, the pain of losing something is between 2 and 2.5 times as great as the joy of gaining it.  (This is an interesting constant in its own right, and may have something to do with compensating for our systematic overconfidence.)

So—if you can only choose one dessert, you're likely to be happier choosing from a menu of two than a menu of fourteen.  In the first case, you eat one dessert and pass up one dessert; in the latter case, you eat one dessert and pass up thirteen desserts.  And we are more sensitive to loss than to gain.
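(As a toy calculation, not anything Schwartz proposes: take 2.25 as the midpoint of that 2-2.5 range, score the eaten dessert as one unit of joy, and count every passed-up dessert as a weighted loss.  The units, and the assumption that every foregone dessert registers as a full loss, are invented for illustration.)

    # Toy loss-aversion model: a foregone option weighs ~2.25x as much as a gained one.
    LOSS_AVERSION = 2.25  # midpoint of the 2-2.5 range above

    def menu_satisfaction(num_options, joy_per_item=1.0):
        """Net feeling after eating one dessert and passing up the rest."""
        joy = joy_per_item                                         # the dessert you ate
        regret = LOSS_AVERSION * joy_per_item * (num_options - 1)  # the ones you didn't
        return joy - regret

    print(menu_satisfaction(2))   # 1 - 2.25*1  = -1.25
    print(menu_satisfaction(14))  # 1 - 2.25*13 = -28.25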

(If I order dessert on a menu at all, I will order quickly and then close the menu and put it away, so as not to look at the other items.)

Not only that, but if the options have incommensurable attributes, then whatever option we select is likely to look worse because of the comparison.  A luxury car that would have looked great by comparison to a Crown Victoria, instead becomes slower than the Ferrari, more expensive than the 9-5, with worse mileage than the Prius, and not looking quite as good as the Mustang.  So we lose on satisfaction with the road we did take.

And then there are more direct forms of harm done by painful choices.  IIRC, an experiment showed that people who refused to eat a cookie—who were offered the cookie, and chose not to take it—did worse on subsequent tests of mental performance than either those who ate the cookie or those who were not offered any cookie.  You pay a price in mental energy for resisting temptation.

Or consider the various "trolley problems" of ethical philosophy—a trolley is bearing down on 5 people, but there's one person who's very fat and can be pushed onto the tracks to stop the trolley, that sort of thing.  If you're forced to choose between two unacceptable evils, you'll pay a price either way.  Vide Sophie's Choice.

An option need not be taken, or even be strongly considered, in order to wreak harm.  Recall the point from "High Challenge", about how offering to do someone's work for them is not always helping them—how the ultimate computer game is not the one that just says "YOU WIN", forever.

Suppose your computer games, in addition to the long difficult path to your level's goal, also had little side-paths that you could use—directly in the game, as corridors—that would bypass all the enemies and take you straight to the goal, offering along the way all the items and experience that you could have gotten the hard way.  And this corridor is always visible, out of the corner of your eye.

Even if you resolutely refused to take the easy path through the game, knowing that it would cheat you of the very experience that you paid money in order to buy—wouldn't that always-visible corridor make the game that much less fun?  Knowing, for every alien you shot, and every decision you made, that there was always an easier path?

I don't know if this story has ever been written, but you can imagine a Devil who follows someone around, making their life miserable, solely by offering them options which are never actually taken—a "deal with the Devil" story that only requires the Devil to have the capacity to grant wishes, rather than ever granting a single one.

And what if the worse option is actually taken?  I'm not suggesting that it is always a good idea for human governments to go around Prohibiting temptations.  But the literature of heuristics and biases is replete with examples of reproducible stupid choices; and there is also such a thing as akrasia (weakness of will).

If you're an agent operating from a much higher vantage point—high enough to see humans as flawed algorithms, so that it's not a matter of second-guessing but second-knowing—then is it benevolence to offer choices that will assuredly be made wrongly?  Clearly, removing all choices from someone and reducing their life to Progress Quest, is not helping them.  But are we wise enough to know when we should choose?  And in some cases, even offering that much of a choice, even if the choice is made correctly, may already do the harm...


The hard question is: who do you trust to remove your choices, and are they justified in doing so anyway, even if you don't trust them?

One would hope you at least trust yourself to limit your own options.

I once spoke with David Schmidtz, a philosopher at the University of Arizona, about Schwartz's work. All Schwartz shows is that more choices make people anxious and confused. But Dave told me that he got Schwartz to admit that being anxious and confused isn't the same thing as a net utility decrease. It's not even close.

Robin, if people could always be trusted to say when they themselves could be trusted, the problem would have a very simple solution at the meta-level. So if you're going so far as to ask that question, then people can't trust their choices, or trust themselves to know when to trust their choices, or meta-meta-trust, etc. And this goes for everyone having the conversation. Not going anywhere in particular with this, just making the observation as a starting point.

It seems to me that adult humans, dealing with other adult humans, are very rarely justified in removing the choices of people who haven't chosen to trust them.

But we recognize e.g. parents and children as an exception, where the parents are expected to have a hugely superior epistemic position, to have (brainware-supported) motives to care for the child's best interests, and finally we have large amounts of historical experience with the situation. (It doesn't always work perfectly, but on the whole, it still seems like trusting children to know when to trust their parents would be worse.)

Not that this is a metaphor for anything. It's different out in the transhuman spaces.

... and finally we have large amounts of historical experience with the situation.

This would be the mother of all sampling biases (read the mouse-over text)...

Though I won't dispute your conclusion, we are the ones who survived after all.

For those of us who can't view XKCD, could someone comment with what it said?

The bit that Dojan was referring to, the mouse-over text, is:

On one hand, every single one of my ancestors going back billions of years has managed to figure it [parenting] out. On the other hand, that's the mother of all sampling biases.

Eli, really - rickrolling!

He should have singularitrolled instead. :)

The rickroll example actually applies to all agents, including ideal rationalists. Basically you're giving the victim an extra option that you know the victim thinks is better than it actually is. There's no reason why this would apply to humans only or to humans especially.

(If I order dessert on a menu at all, I will order quickly and then close the menu and put it away, so as not to look at the other items.)

I do something similar when ordering at a table with several other people. I don't even look at the menu. I arrange to order last, listen to what the other people order, and then just copy one of their orders.

The whole paradox of choice problem can be viewed through a Bayesian lens. In order to make a consistent choice from a set of 2^N options, you need at least N bits of information. This doesn't seem like a lot, but in most cases our information is totally corrupted by noise (do you really know you like cream sauce more than red sauce?). So reducing the size of the option set makes it more likely that you will be able to make the correct choice given the amount of information you have. If I'm dining with four other people at a restaurant with 64 menu options, my strategy decreases the number of bits I need from 6 to 2.
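(Spelling out that arithmetic, just as a sketch of the comment above:)

    import math

    menu_options = 64
    bits_full_menu = math.log2(menu_options)  # 6.0 bits to pick 1 of 64 consistently

    fellow_diners = 4
    bits_copying = math.log2(fellow_diners)   # 2.0 bits to copy 1 of their 4 orders

    print(bits_full_menu, bits_copying)       # 6.0 2.0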

Many other techniques can be interpreted in this light. One notable example is Warren Buffett's "buy and hold" strategy for investing. Most investment strategies involve the investor buying and selling various stocks at different times, based on whatever analysis he has conducted. Obviously this requires repeated decision making. An investor applying buy and hold makes a far smaller set of decisions, thereby maximizing the power of the information he has obtained.

Hey Rick Astley! Much better than this decision theory crap.

Came across this at work yesterday, which isn't unrelated. For every level of abstraction involved in a decision, or extra option added, I guess we should just accept that 50% of the population will fall by the wayside. Or start teaching decision theory in little school.

Happy Nondenominational Winter Holiday Period, all. Keep it rational.

Eliezer,

What would be the right thing to do regarding our own choices? Should we limit them? Somehow this seems related to the internet, where you always have to choose when to click another link and when to stop reading. Timothy Ferriss also recommends a low-information diet. I'm just brainstorming a bit here.

I never thought I'd get rickrolled by Yudkowsky.

Dustin, I've found that people who only read my online writings sometimes come away with this completely wrong picture of my personality.

My father has business cards identifying him as an Assassin working for the Martian Government in Exile. Genes is genes.

I now usually simply trust the salesperson's choice after explaining my requirements, only checking that the choice seems to satisfy them, rather than trying to optimize over all the available options. That's probably the main thing salespeople are for in the first place: not to provide expertise (which they often don't have), or even to find the best option for your requirements, but to simplify the choosing process, lifting the psychological weight off the customer.

Well, aside from making the customer believe that s/he actually needs something more expensive than they thought...

Re: Barry Schwartz's The Paradox of Choice [...] talks about how offering people more choices can make them less happy. A simple intuition says this shouldn't ought to happen to rational agents: If your current choice is X, and you're offered an alternative Y that's worse than X, and you know it, you can always just go on doing X. So a rational agent shouldn't do worse by having more options. The more available actions you have, the more powerful you become - that's how it should ought to work.

This makes no sense to me. A blind choice between lady and tiger is preferable to a blind choice between a lady and two tigers. Problems arise when you don't know that the other choices are worse. So having more choices can be really bad - in a way that has nothing to do with the extra cycles burned in evaluating them.

Barry Schwartz's The Paradox of Choice - which I haven't read, though I've read some of the research behind it

Yay, a book I've read that Eliezer hasn't! That said, I don't actually recommend it; it was kinda tedious and repetitive.

I wasn't too surprised when I saw that the video was Rick Astley. It's the standard internet prank, these days.

It could be worse, though. At least people have stopped referring people to Goatse, and this never caught on.

@Doug S

Rickrolling is bad for you. It really is. It devalues your online social currency - the internet is a link economy, right? - and causes people to trust your information less. Trust is the ultimate value, not only in the stock market but also in social networking.

Is it possible that Eliezer has indirectly answered Robin's question about gifting from a few days ago? That is, is it possible that I gain more benefit from a copy of Tropic Thunder given to me by my brothers than from one I purchase myself? By giving it to me as a gift, they have removed from me the necessity of comparing it to other films I could purchase, as well as the thought that I could have spent the money in a more "responsible" fashion.

Hmm, I guess that's why it's always nice to get non-sensible gifts.

It also depends how good your brothers are at evaluating your taste in films. But they are probably better than most of your other sources (especially advertising).

Though that part about escaping "responsible" spending doesn't actually do much, since you could always sell the DVD on eBay and use the money to buy something else. It's easy to get caught up in sunk cost fallacy and endowment effect though - thinking you should keep it just because you have it. (I guess the resale value is probably a bit less than the original value, so there does exist a narrow region of utility where the movie is worth owning if you already have it but not worth buying yourself. But as I said, this is narrow - on the order of $5 - and hence improbable.)

frelkins: I agree. I'm just pointing out that, as far as pranks go, there are worse things on the web than Rick Astley music videos.

It is deeply creepy and disturbing to hear this talk from someone who already thinks he knows better than just about everybody about what is good for us, and who plans to build an AI that will take over the world.

I'll go ahead and repeat that as Goetz's misunderstandings of me and inaccurate depictions of my opinions are frequent and have withstood frequent correction, that I will not be responding to Goetz's comment.

Eliezer, I have probably made any number of inaccurate depictions of your opinions, but you can't back away from these ones. You DO generally think that your opinion on topics you have thought deeply about is more valuable than the opinion of almost everyone, and you HAVE thought deeply about fun theory. And you ARE planning to build an AI that will be in control of the world. You might protest that "take over the world" has different connotations. But there's no question that you plan for your AI to be in charge.

I always thought that people react differently to "pushing the fat man in front of the trolley" than to "switching the direction of the fork in the tracks" because, at a deep level, they don't believe the questioner: problems constructed by pure stipulation don't arise in the real world.

Eliezer: "I'll go ahead and repeat that as Goetz's misunderstandings of me and inaccurate depictions of my opinions are frequent and have withstood frequent correction, that I will not be responding to Goetz's comment."

Really? I challenge you to point to ONE post in which you have tried to correct a misunderstanding by me of your opinion, rather than just complaining about my "misunderstandings" without even saying what the misunderstanding was.

@Goetz: Quick googling turned up this SL4 post. (I don't particularly give people a chance to start over when they switch forums.)

FWIW, Phil's point there seems to be perfectly reasonable - and not in need of correction: if a moral system tells you to do what you were going to do anyway, it isn't going to be doing much work.

Moral systems usually tell you not to do things that you would otherwise be inclined to do - on the grounds that they are bad. Common examples include taking things you want - and having sex.

I'd say that moral systems explain the deeper consequences of an action you may not have thought deeply about.

Lior:

Some people want a choice to exist but don't want to bear the cost of making it. For example, some couples want the option of a prenuptial agreement, but would prefer not to have to choose it themselves - they would rather be required to sign one, since making the choice may make the spouse angry.

I don't know if this story has ever been written, but you can imagine a Devil who follows someone around, making their life miserable, solely by offering them options which are never actually taken - a "deal with the Devil" story that only requires the Devil to have the capacity to grant wishes, rather than ever granting a single one.

FWIW (very little), this is exactly how I experience shows like "Ah My Goddess!". The main character routinely refuses to take advantage of a situation that I most certainly would. I can't watch stuff like that.

-Robin

Love this topic!

Here are my thoughts about the paradox of choice: http://blog.timlang.com/2009/02/too-many-choices-unhappiness-or-why.html

I don't have the solution, but I at least tried to suggest possible categories of solution.

Apparently the paradox of choice is rarely a factor and may not even be real; Tyler just put this up on Marginal Revolution: http://www.marginalrevolution.com/marginalrevolution/2009/11/the-paradox-of-choice-is-not-robust.html

Quick summary: the paradox of choice suggests that offering more options discourages people from making any selection, and reduces their satisfaction with their ultimate choice when they do. The research that Tyler Cowen cites suggests that there is no significant effect.

Although it certainly seems to be true that viable options not taken can decrease the pleasure of the option that is taken, I've noticed that I often enjoy my choices most when I am presented with many options, most of which are simply bad. Options that are clearly bad don't sap willpower to reject, but there's a sense of reward in feeling that one has discriminated among one's options and made an unambiguously correct choice. I'd be interested in seeing the results of a study where subjects' satisfaction with their choices is tracked against an increasing number of bad options added alongside one good one.

I have had something like the "easy path" experience in actual video games, when they offer the option of changing the game difficulty at any time. You could play all the way through Skyrim on "Novice" difficulty if you wanted to, and you would have to be extremely incompetent not to win. But then for someone like me who plays on "Expert" (or "Master" once I'm at a high enough level), the game is more satisfying overall, but after every difficult battle, loss of a companion, etc. there's always that temptation to knock the difficulty down a step or two.

Say, would you like to finish reading this post, or watch this surprising video instead?

What does it say about me that I kept on reading (and resolved to follow the link later) because I felt too lazy to watch the video straight away?

Elo:

What about when choice inflicts the problem of the multi-armed bandit on us? (See "multi-armed bandit" on Wikipedia.)

With more options, you need to explore them (at least somewhat) to avoid missing out on rewards - and you might not always know that Y is worse than X, even when told specifically that Y < X.

Which is to say: someone who behaves with applied rationality should occasionally explore their choices to avoid missing rewards. Because of that, when spare choices come up, they create a burden of exploration, and that exploration taxes resources (even when the option is never chosen).

Isn't that a clearer description for why extra choice can be harmful?
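(A minimal epsilon-greedy sketch of that exploration tax - all the arm payoffs and parameters here are invented for illustration. With a fixed exploration rate, padding the option set with more mediocre choices means more pulls wasted on them, even though the best option never changes.)

    import random

    def epsilon_greedy_regret(arm_means, pulls=10_000, epsilon=0.1, seed=0):
        """Average per-pull regret of epsilon-greedy on Bernoulli bandit arms."""
        rng = random.Random(seed)
        n = len(arm_means)
        counts, values = [0] * n, [0.0] * n
        best = max(arm_means)
        regret = 0.0
        for _ in range(pulls):
            if rng.random() < epsilon:
                arm = rng.randrange(n)                       # explore a random option
            else:
                arm = max(range(n), key=values.__getitem__)  # exploit current best guess
            reward = 1.0 if rng.random() < arm_means[arm] else 0.0
            counts[arm] += 1
            values[arm] += (reward - values[arm]) / counts[arm]  # running mean estimate
            regret += best - arm_means[arm]
        return regret / pulls

    # The same single good option, padded with ever more mediocre ones:
    print(epsilon_greedy_regret([0.9, 0.5, 0.5]))     # few distractions
    print(epsilon_greedy_regret([0.9] + [0.5] * 20))  # many distractions: higher regret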

Suppose your computer games, in addition to the long difficult path to your level's goal, also had little side-paths that you could use—directly in the game, as corridors—that would bypass all the enemies and take you straight to the goal, offering along the way all the items and experience that you could have gotten the hard way.  And this corridor is always visible, out of the corner of your eye.

Even if you resolutely refused to take the easy path through the game, knowing that it would cheat you of the very experience that you paid money in order to buy—wouldn't that always-visible corridor make the game that much less fun?  Knowing, for every alien you shot, and every decision you made, that there was always an easier path?

This exact phenomenon happens in Deus Ex: Human Revolution, where you can get around almost every obstacle in the game by using the ventilation system. The frustration that results is apparent in this video essay/analysis: it undermines all of the otherwise well-designed systems in the game in spite of not actually interfering with the player's ability to engage with them.

I wonder if, alongside the "loss of rejected options" proposition, a reason that extra choices impact us is the mental bandwidth they take up. If the satisfaction we derive from a choice is (to a first-order approximation) proportional to our intellectual and emotional investment in the option we select, then having more options leaves less to invest as soon as the options go from being free to having any cost at all. As an economic analogy, a committee seeking to design a new product or building must choose between an initial set of designs. The more designs there are, the more resources must go into the selection procedure, and if the committee's budget is fixed, then this will remove resources that could have improved the product further down the line.
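(A back-of-the-envelope version of that committee analogy - the budget and cost numbers are arbitrary:)

    def investment_left(budget, num_options, cost_per_option):
        """Resources left to polish the chosen design after paying selection costs."""
        return max(0.0, budget - num_options * cost_per_option)

    print(investment_left(100.0, 3, 5.0))   # 85.0 left to invest in the winner
    print(investment_left(100.0, 15, 5.0))  # 25.0 left: more options, less polish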