To the trader mindset, sacred values are nothing but a confusion; if you don’t like the deal, you just haven’t been offered a high enough price.

There’s something important the trader mindset can’t see. Its modus operandi is to take two different representations of value and profit from resolving the discrepancies between them. It is agnostic as to the validity of those representations. Thus, the trade orientation tends to collapse the map-territory distinction, and in particular to confuse exchange rates (i.e. prices) with stores of value.

Consider this music video.

The protagonist is fixated on an image that's been marketed to her by someone wealthy enough to control a planet. The image isn't very detailed, and yet she's willing to undertake a dangerous and arduous journey to reach it, which implies that things aren't very good back home.

She's in a world where travel is expensive. Somehow, improbably, in outer space, she has to pay a toll. This should clue us in that something sketchy is going on.

Tolls are one of the classic modes of rent extraction, second only to land rents in their centrality as an image. There's a plausible excuse for tolls on improvements like bridges, but you don't need bridges in space - you can only collect the toll by preventing people from going around you. This should inform how we interpret the subsequent interactions where she pays for fuel, repairs to her spaceship, and repairs to her body; it's not obvious how much of each price is needed to cover the cost of the service, and how much is a rent extracted by a predatory monopolist.

At each stage, the protagonist sacrifices capacity for some progress towards her destination. (Mobility affordances are maybe the most concrete and central instance of capacity, from the Latin capere, meaning to take hold of something: she trades away her hand, then her leg, then her remaining limbs, then her spaceship, albeit getting a fully functioning body back as far as we know.) Then, once she gets there, she finds that she's traded away her ability to move for relocation to a place that's no longer providing the service it advertised. It's true at each point that you wouldn't be helping her by preventing her from making the trade, but focusing on that aspect of the situation makes one a price-taker in a case where that attitude doesn't actually unlock any value.

Each trade had to leave her with hope, but it didn't have to be an accurate hope.

The resort planet owner likely never colluded with the toll collector, the fueling station, the repair station, or the rescue team. They just did their thing, and the harmful side effects were complementary. The resort planet owner doesn't pay the price of disappointed customers who arrive after the resort shuts down, so they simply don't bother pulling their ads. The other actors don't need to know why people want to go from point A to point B, they just know that they can interpose themselves in the middle and take resources they want.

It's important to bear in mind that no one overtly cheats anyone else in this scenario - all the parties are operating as honest traders, at least when considered within the bounds of the specific transaction they're executing. And yet, the whole situation is horrible in a way that the trades don't actually alleviate.

From the perspective of trade, sacredness intuitions are always a mistake. If I desperately need a new kidney, and you're desperately poor, why shouldn't I be allowed to solve your problem in exchange for you solving mine?

Sacredness intuitions say that this is morally abhorrent. The trader says that this is simply a refusal to acknowledge tradeoffs - that whenever the sacredness intuition is correct, a proper weighing of tradeoffs would get the right answer.

The trader is missing something important.

There's offering a trade, and there's extortion. Sometimes people are honestly uncertain or mistaken about which one is happening, or correctly believe that something described as the former is in fact the latter.

When you're proposing a trade that gives the poor a fungible resource, you should wonder whether rent extraction will, in the long run, keep pace with their ability to pay. If it does, the sellers end up about as poor as before - except now they've all been through an elective surgery and have less kidney. Trading a kidney for a kidney does not suffer from this problem, so people are less worried about it. This is the sort of thing that's hard to see from inside the trade intuition, but easier to see if you think about the systems involved.
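To make this worry concrete, here's a minimal toy model - my own illustration with made-up numbers, not anything from the post - of how a fixed supply of housing plus income-capped bids lets extractive landlords absorb a cash transfer (the clearing_rent helper is hypothetical, and assumes renters will pay up to their whole income if they must):

```python
# Toy model: fixed housing supply, more renters than units, and an
# extractive landlord who sets rent at the highest level at which all
# units still fill. All numbers are made up for illustration.

def clearing_rent(incomes, units):
    """Highest rent at which `units` renters can still afford to pay,
    assuming each renter will pay up to their whole income if they must."""
    affordable = sorted(incomes, reverse=True)
    return affordable[units - 1]  # the marginal (poorest winning) renter's cap

incomes = [100, 90, 80, 70, 60]  # five renters
units = 3                        # three units; supply can't expand

before = clearing_rent(incomes, units)
transfer = 20
after = clearing_rent([i + transfer for i in incomes], units)

print("rent before transfer:", before)   # 80
print("rent after transfer: ", after)    # 100
print("share of transfer captured as rent:",
      (after - before) / transfer)       # 1.0 - all of it
```

In this setup the entire transfer shows up as higher rent; a one-time kidney payment is just such a transfer.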

There is also an attention economy consideration. If you foreground the details of a particular transaction, taking prices as a given, you're relegating the context to the background. But that context is where the prices come from - it's necessary if you want to understand why people are willing to pay. It's necessary if you care about anything that's not already priced.

Sometimes the most important thing is that background.


(This post is based on my comments here.)

Related: Categories of Sacredness; Sacred Cash; Eternal, and Hearthstone Economy versus Magic Economy; Cash transfers are not necessarily wealth transfers; Eliezer Yudkowsky's Facebook post about Basic Income

34 comments:
There's offering a trade, and there's extortion. Sometimes people are honestly uncertain or mistaken about which one is happening, or correctly believe that something described as the former is in fact the latter.

So at first I felt good about understanding, say, why legal kidney sales might be bad. Then I noticed that despite having a visceral sense of one dimension of the costs, I had not in fact become (much) less confused about what counts as extortion and how to handle intuitions about sacredness in the real world.

Another example of a sacred value, very similar in felt-sense-of-intuition (I think?), is this one: "A hospital has the opportunity to purchase a replacement heart for a child who will die without one. How much should they be willing to spend on the heart?" This produces outrage responses of "they should pay whatever it takes!"

But in this case, paying for the heart clearly comes at the expense of other hospital equipment or staff that could have been purchased and that also could have saved lives.

I think the outrage here is similar to what many people instinctively feel about letting people sell kidneys. But in the hospital case it seems pretty unequivocally good to put a price on a life.

Some tentative thoughts/brainstorm:

  • It might be that the felt senses of outrage are in fact different, if you focus on them. (I suspect they are slightly different, but I'm not optimistic that this can reliably inform you about which ones are good.)
  • It (I assume) is the case that if you're sufficiently well informed about the world, you can make the correct consequentialist calls about what to do, but this is computationally difficult.
  • In the case of selling kidneys... the world where rent extraction leaves poor people in the same state but with fewer kidneys seems bad, but it also leaves the world with a bunch fewer dead people, which isn't nothing.
  • Actually, there's probably an upper limit on how many sick people need kidneys. I assume there are many fewer people who need kidneys than there are poor people, which might mean this particular trade is unlikely to result in increased rents (since it's not a reliable enough increase in income for metaphorical landlords to capture).
  • Avoiding getting confused about the price of a life vs. the value of a life might help a lot.

I should mention that I'm not at all sure that in the particular case of kidneys, permitting sale would be bad. What I am fairly confident of is that most people who are sure it's obviously good haven't actually considered some important factors. (I suspect that this is also true of most of those who are sure it's obviously bad, but I don't think that's news here.)

Some relevant considerations on the examples being considered:

The hospital budget example

The hospital's budget is a socially determined fact, not a pure material constraint, and it's an iterated game with a chicken component. By taking the budget as a given, the administrator is allying with the budget-setters who want the hospital to do constrained optimization within the given budget, and not pushing against the constraint by expanding the hospital's mission. (Cf. the Obama administration's decision not to modify the government's burn rate after Congress refused to raise the debt ceiling, even as this brought a default steadily closer, because accepting the optimization constraint would ipso facto grant the other side a victory. This is a pretty normal thing in budget & other legislative fights. More generally, refusing to understand tradeoffs is an effective way not to have to make them.)
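For concreteness, here's a toy payoff matrix for that chicken component; the payoff numbers are my own illustrative assumptions, not drawn from the actual budget fights:

```python
# Toy payoff matrix for the "chicken component" of a budget fight.
# The numbers are arbitrary assumptions, chosen only to show the
# structure: mutual holding is the worst outcome for both sides.

payoffs = {  # (row move, column move) -> (row payoff, column payoff)
    ("hold", "hold"):       (-10, -10),  # default/shutdown: disaster for both
    ("hold", "concede"):    (3, -3),     # the other side visibly caves
    ("concede", "hold"):    (-3, 3),     # you visibly cave
    ("concede", "concede"): (0, 0),      # negotiated compromise
}

for theirs in ("hold", "concede"):
    best = max(("hold", "concede"), key=lambda mine: payoffs[(mine, theirs)][0])
    print(f"if they {theirs}: your best reply is to {best}")
```

Neither strategy dominates - your best reply is to hold exactly when you expect the other side to concede - so visibly refusing to optimize within the constraint is itself a move in the game.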

I expect people will feel less outrage at the decision that acknowledges "tradeoffs" in the following modified scenarios:

  • A desert island scenario where a fixed budget corresponds directly to a fixed pool of resources, and resource allocation doesn't affect budget size.
  • An "uneconomic" expense that seems very good but doesn't fit people's sense of what a hospital ought to be doing.

The kidney example

There's also a reasonable heuristic that people who profit from an inequity are disproportionately likely to be complicit in perpetuating it. Complicity level can take intermediate values between 0 and 1. Even if right now the people excited about buying kidneys aren't excited about causing poverty, and there's no one with a current financial interest in lowballing the costs of donating, we can reasonably expect this to change if kidney sales become legal.

Selling kidneys is legal in Iran. I haven't heard of any disaster that happened as a result...

The likely bad outcomes would be things like someone getting scammed into giving a kidney when it's not as good a deal as they'd been led to believe, and the money doesn't adequately compensate them for the ensuing health problems. I don't see why I'd expect to find out about whether that sort of thing's happening in Iran.

The hospital example is an excellent illustration of my confusion at separating trades from other decisions. There IS a price on that life. That's in the territory: it costs X to pay for the heart. No amount of outrage changes that, and considering it sacred only prevents making rational decisions about it.

I wish I understood this post better. I get the sense that there are relevant economic intuitions I'm missing. Particularly when trying to understand this sentence:

It's true at each point that you wouldn't be helping her by preventing her from making the trade, but focusing on that aspect of the situation makes one a price-taker in a case where that attitude doesn't actually unlock any value.

I don't really understand the claim here (but it does seem like an interesting claim is being made and I'd like to understand it), mostly because I don't understand the way you're using the term "price-taker," and following the link didn't really clarify things for me.

Somewhat separately, it would be good for someone to write a straightforward expository LW post on rents and rent-seeking, with applications to understanding why everything is terrible.

Qiaochu, your comment here is a good example of a clear request for a limited amount of interpretive labor. This was really easy to know how to respond to. I want to praise the good here and not just cast shade on the bad.

Thanks for helping me see where the post was hard to understand :)

Some economic models assume you’re negligible in size compared to the market, and can basically only buy and sell things, so you can treat prices as constants. This is being a price-taker: you take prices as you find them. (By contrast, in monopoly situations, your decisions are one whole side of the supply-demand balance.)
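Here's a minimal numeric sketch of that distinction; the cost curve c(q) = q², the going price p = 12, and the demand curve p(q) = 20 - q are all arbitrary assumptions of mine:

```python
# Sketch of price-taking vs. price-setting. All curves are made-up
# assumptions for illustration only.

def profit_taker(q, p=12):
    # Price-taker: the price p is a constant you find in the market.
    return p * q - q**2

def profit_monopolist(q):
    # Monopolist: your own output q moves the price via demand p(q) = 20 - q.
    return (20 - q) * q - q**2

qs = [i / 10 for i in range(201)]  # candidate outputs 0.0 .. 20.0
print("price-taker's best output:", max(qs, key=profit_taker))       # 6.0
print("monopolist's best output: ", max(qs, key=profit_monopolist))  # 5.0
```

The price-taker expands output until price equals marginal cost; the monopolist holds output back to keep the price up, because its decision is one whole side of the supply-demand balance.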

In a price-taker situation, the goods offered for sale and the prices they're offered at are taken as givens, even if there's imperfect information. Things that aren't considered include negotiation with counterparties, interfacing directly with physical (or social) reality to configure it into states you like that aren't well-approximated by anything currently for sale, or coordinating with other agents in your position to change the overall dynamic.

The question of whether it would be a good idea to prevent desperate people from making unpleasant tradeoffs is just not a very interesting question on the merits (except as a proxy for some underlying political question).

The trade intuition can be an alluring enough frame that refusing to engage with it at all in some contexts (treating them as sacred and beyond tradeoffs) may be psychologically necessary to engage with other frames. It's not just valuable as a preventative; it can enhance specific kinds of cognition, just like drugs that turn off some particular part of the brain that interferes with other parts' function.

Does any of that help?

Yep, thanks! This was very clear to me:

In a price-taker situation, the goods offered for sale and the prices they're offered at are taken as givens, even if there's imperfect information. Things that aren't considered include negotiation with counterparties, interfacing directly with physical (or social) reality to configure it into states you like that aren't well-approximated by anything currently for sale, or coordinating with other agents in your position to change the overall dynamic.

This was less clear to me:

The question of whether it would be a good idea to prevent desperate people from making unpleasant tradeoffs is just not a very interesting question on the merits (except as a proxy for some underlying political question).

What makes it not very interesting?

It's focusing on the aspect of the problem where we can't do much to help. It's important to think through once in order to notice that it's actually pretty confusing to imagine someone really holding the affirmative position in good faith. But, then, one moves on.

Thanks, I think this was an important concept and I'm glad for it to be a formal post.

I think the post could use another paragraph or two at the beginning explaining the context (I'm not sure, but I think if I ran into this without the surrounding original thread it'd take me longer than necessary to understand why it's relevant), and perhaps linking/quoting the Yudkowsky post to explain the rent-seeking thing in more detail.

Thanks for the suggestion. Added something.

Happy to see people talking about "trade mindset" as a thing!

When you’re proposing a trade that gives the poor a fungible resource, you should wonder whether rent extraction will, in the long run, keep pace with their ability to pay.

That seems like an argument for a social safety net - a government guarantee that rent extractors can't push you below a certain standard of living. (Though it can't be basic income, because basic income can be rent-extracted away. Something like free healthcare for all is a better idea.) Safety nets are compatible with free trade, e.g. Krugman advocates for both.

"There is also an attention economy consideration. If you foreground the details of a particular transaction, taking prices as a given, you're relegating the context to the background. But that context is where the prices come from - it's necessary if you want to understand why people are willing to pay. It's necessary if you care about anything that's not already priced" - I'm not completely clear on how this relates to the larger argument. What exactly is this paragraph meant to prove or disprove (ideally in the form of a "We should" or "We should not" statement)?

You can call it "trader mindset" if you like, but it feels a lot more like "agent mindset" to me. Every decision, including trades and non-trade actions, is made in order to increase the probability of some future world-state. Cutting off some avenues of optimization (for a rational, well-informed agent) is just plain incorrect.

Hell, whether it's a trade or "extortion" is irrelevant - if paying makes for a better future-universe, I'm going to do that. I'll continue to work to reduce the ability to set up such annoying situations (much like I'll continue to try to reduce kidney disease), and to provide more options for those people for whom all choices are unpleasant (cheaper artificial kidneys, fewer rent-seeking predators). But I won't take away strategic options from presumed-competent actors.

I totally accept arguments that most people aren't rational, well-informed agents and therefore other non-rational agents (us) can somehow protect them from bad decisions by calling some topics off-limits. But that's not what it seems you're saying.

Conflating the trader mindset with the agent mindset - that is, conflating "making a profit on this transaction" with "producing preferred world-states" - is exactly the sort of thing I'm claiming the trader mindset does, erroneously.

How is consideration of decisions about trade not part of agent mindset? "Making a profit" isn't a special thing, it's just one more possible future world-state that one might prefer. So yes, I think trading is part of agency. Where's the error?

"Making a profit" privileges your unit of account as something intrinsically valuable, instead of considering the desirability of outcome directly. This is sometimes a good approximation (and indispensable for running a business), but it is not actually an attempt to directly discern the worldstate features you can change that you care about. This is what I mean by collapsing the map-territory distinction.

Ah, I see. I'm so deeply in the consequentialist/market view of the world that I mentally translate "making a profit" as not necessarily monetary, but just "improving my perceived state of the world". I also say that I profit by going to bed on time in order to feel good the next day and that I profit by donating money to a charity that I believe improves the world more than I otherwise could with that money. "profit" is just shorthand for "result in a better world-state", and every action is trading the un-taken decision for the taken one.

In the narrower sense "making a monetary profit" can absolutely be a bad decision. One doesn't need to categorize things as sacred to make good decisions.

The thing I want you to notice here is that using "profit" as the default term for this makes profiting from a single transaction (e.g. arbitrage) the central case of acting to produce desired world states. I expect that simply reordering material reality to suit your preferences (e.g. tidying your room), or improving the capacity of aligned systems (e.g. learning to communicate better with your friends) will occur to you less often as things you might want to focus on, than it would if you treated profit more explicitly as a special case of beneficial actions.

The standard of rationality required to make your view correct judges all humans as “irrational”. As a result, what you say is technically true but practically false.

Huh? The sanctity argument is based on all (or at least many important) humans being irrational. My argument is that it's an OK heuristic to discourage trades where irrationality reigns, but rational agents don't need it.

but rational agents don’t need it

Again, this statement is only true under a standard of “rationality” so high that no humans meet it.

Similarly:

I won’t take away strategic options from presumed-competent actors.

If the actors in question are human, then the presumption of competence is incorrect, by the standard of “competence” required to resist the pressures in question.

Interesting. Do you extend this to all consequentialist philosophies? They're probably technically correct, but deontology is better for humans due to imperfect rationality?

The problem is that the sacrosanct topics (and deontological mandates) are devised by exactly the same incompetents who can't implement trading (and consequentialist moral decisions).

Not only I, but no less than Nick Bostrom, take the view that deontology as a means of establishing boundary conditions for consequentialism is the correct approach to large-scale ethical considerations. (You can read about this in his paper Infinite Ethics [PDF link] [note that an earlier version of this paper was titled “Infinitarian Challenges to Aggregative Ethics”].)

An alternative way to come to essentially the same point—“consequentialist ethics is technically correct but ‘deontology’ is better for imperfect agents”—is rule consequentialism (and this is what makes up a large part of my own current views on ethics).

Note, by the way, that deontology is not the only available ‘crutch’, so to speak; there is also virtue ethics (which is, to a first approximation, the most natural and efficient way for human minds to implement any kind of moral rule, be it consequentialist or deontological).

(And all these are compatible: one may be act-consequentialist / world-consequentialist in principle, rule-consequentialist in theory, deontologist in overall implementation of theory, and virtue-ethicist in detailed, everyday practice. These are not contradictions, but simply the way in which the goal—ideal consequences—is achieved.)

The problem is that the sacrosanct topics (and deontological mandates) are devised by exactly the same incompetents who can’t implement trading (and consequentialist moral decisions).

Indeed not; trading, and act-consequentialist decisions in general, are implemented by individuals, whereas deontological mandates are devised by egregores (or, less poetically: they emerge via cultural—and, on a much larger scale, biological—evolution).

(In fact, and equally interestingly, this is true even on an individual level: you may devise a deontological mandate for yourself, at leisure, after consideration, drawing on all your faculties, and update it—in moments of sober reflection, following great life events, for instance—as you gain wisdom; while, on the other hand, if you make every decision, evaluate every trade, on an act-consequentialist basis, then you must be making such decisions constantly, with only the faculties available to you in the moment… and even in your best moments, your faculties are less than the sum total of that which you can bring to bear over time; how long, then, until you make a bad decision? Most people make them daily… can you sustain perfect decision-making for even a single day, much less a lifetime?)

Awesome, thank you. I think I have the crux now, and can successfully ITT the sanctity argument, or at least one aspect of it. It's about recognizing what complexity of model one can productively follow.

One (important) caveat: what you say is true denotatively, but perhaps misleading connotatively. Remember that the degree of complexity-of-model that would need to be constructed (and comprehended) in order to apply act consequentialism directly, is not merely “large” but computationally intractable, even given all resources available in the observable universe.

And then, of course, to each step of simplification, we apply the more “mundane” practical considerations, such as boundedness with respect to time and available cognitive resources, and human cognitive biases and other frailties, and so on. In this way we proceed down the chain from theory to practice, as I outlined in the grandparent comment.

Sure, agreed. Consequentialism in a limited agent (which is all of us) looks a lot like deontology. With a significant distinction that the rules are internal, not external. Each agent can (and must) pick the specific rules it thinks best implement its preferred consequences within its constraints of knowledge and decision-making.

This distinction is illusory.

First, picking rules that implement your preferred consequences is hard. Is it entirely out of the question that one might defer selection of consequentialist rules to trusted authorities, or to processes that seem like they are likely to have generated good rules? I think it is not; it seems quite reasonable, to me.

But more importantly: however ‘external’ you consider any ethical rule to be, you are still the one who decides to follow it. If you think that the deontological rules that you must follow came from God himself, handed down to Moses on Mount Sinai, that is a still a judgment that you have made. If you conclude that Kant was right, and the categorical imperative is the root of all morality, you are still the one who has come to that conclusion. However much of your rule-making you surrender to any system—however external, however authority-based—you are still the one who chose that surrender.

It may feel different, introspectively. It may feel like finding rules that are true, instead of selecting rules that are useful. But the decision, ultimately, is still yours—for there is no one else who could make it.

This may be technically true in a sense, but I disagree with the connotation. If you live in an English-speaking country, there's a sense in which you "can" "decide" to speak only Swahili instead of English, but it would be more sensible to say that that decision has already been made for you by your society. Likewise for moral rules.

It’s not obvious to me that this is true in any significant way. Specifically, I am skeptical about the “likewise for moral rules” part of your argument; can you expand on that? How, exactly, is it likewise?

After all, if I was born and raised in an English-speaking country, then I learned English without ever having to make a deliberate effort to do so. Learning Swahili, on the other hand, takes considerable effort, and for some people it may not even be feasible (not everyone’s good at learning foreign languages, especially without immersion). Meanwhile, selecting different moral rules requires nothing remotely approaching that much effort. Furthermore, speaking Swahili to someone who doesn’t understand it (i.e., basically everyone you ever interact with, in an English-speaking country) is tremendously counterproductive and harmful to your interests, whereas following a different set of moral rules… can be harmful, but in practice it’s often totally invisible to most of the people you interact with on a daily basis (if anything, it can be less obtrusive and less detectable by third parties than merely following the moral rules you were raised with, if you do the latter more faithfully than most members of your community!).

But perhaps an even more important point is that even if what you say is true, it’s no less true for rule-consequentialist moral rules than for deontological moral rules or virtue-ethical moral rules. Your objection, even if we accept it, does not make the distinction Dagon raised any more real.

Ok, let's take kidney sales as a specific. Whether it's "each agent must decide whether to buy or sell a kidney today" or "each agent must decide whether to accept rules that allow buying or selling a kidney, and then must decide if that rule should apply to this specific situation", the agent must decide, right?

Of course—but if the rule is not formulated so as to make it nigh-trivial to determine whether it applies to any given situation, then it’s not a very good rule, is it?

And then all the considerations I’ve already outlined in my previous comments apply.

We're talking about humans though, not rational agents.