Just thinking out loud about the boundaries of this effect...
Suppose I have a garden, and a robot that can pick apples. I instruct the robot to bring me the "nearest big apple" (where "big" is defined precisely as having a diameter of at least X). Coincidentally, all the apples in my garden are small, so the robot picks the nearest apple from the neighbor's garden and brings it to me, without saying anything.
When the neighbor notices it, he will be angry at me, but I can defend myself by saying I made an innocent mistake; I didn't mean to steal apples. This may be a good excuse the first time, but if it keeps happening, the excuse no longer works; it was based on my not knowing.
Also, the robot will not even try to be inconspicuous; it will climb straight over the fence, in the middle of the day, even when the neighbor is there watching it. If I try to avoid this by giving instructions like "bring me the nearest apple, but make sure no one notices you", I lose the plausible deniability. (I am assuming here that if my neighbor decides to sue me, the judge will see the robot's programming.)
Now why specifically does the situation change when inste...
Some context: In the past I had a job as a quality assurance inspector. I realized very soon after I started doing the job that a machine could easily do my job with fewer errors and for less than I was being paid, so I wondered, "Why do they pay for a human to do this job?" My conclusion was that if a machine makes a mistake, as it is bound to do eventually, they can't really fire it or yell at it the way they can a human. A human can be blamed.
So I agree with you. In the future I can see robots doing all the jobs except being the scapegoats.
ETA 1/12: This review is critical and at times harsh, not because I want to harshly criticize the post or the author, but because I did not consider harshness of criticism when writing. I still think the post is positive-net-value, and might even vote it up in the review. I especially want to emphasize that I do not think it is in any way useful to blame or punish the author for the things I complain about below; this is intended as a "pointing out a problematic habit which a lot of people have and society often encourages" criticism, not a "bad thing must be punished" criticism.
When this post first came out, I said something felt off about it. The same thing still feels off about it, but I no longer endorse my original explanation of what-felt-off. So here's another attempt.
First, what this post does well. There's a core model which says something like "people with the power to structure incentives tend to get the appearance of what they ask for, which often means bad behavior is hidden". It's a useful and insightful model, and the post presents it with lots of examples, producing a well-written and engaging explanation. The things which the post does well more than outweigh the prob...
... on reflection, I do not want to endorse this as an all-the-time heuristic, but I do want to endorse it whenever good epistemic discussion is an objective. Asking "who should we blame?" is always engaging in a status fight. Status fights are generally mindkillers, and should be kept strictly separate from modelling and epistemics.
Now, this does not mean that we shouldn't model status fights. Rather, it means that we should strive to avoid engaging in status fights when modelling them. Concretely: rather than ask "who should we blame?", ask "what incentives do we create by blaming <actor>?". This puts the question in an analytical frame, rather than a "we're having a status fight right now" frame.
This was a pretty important couple of points. I'm not sure I agree with them as worded, but they point towards something that I think is close to a Pareto improvement, at least for LessWrong and maybe for the whole world.
I do not want to endorse this as an all-the-time heuristic, but I do want to endorse it whenever good epistemic discussion is an objective
The key problem is... sometimes you actually just do need to have status fights, and you still want to have as-good-epistemics-as-possible given that you're in a status fight. So a binary distinction of "trying to have good epistemics" vs "not" isn't the right frame.
Part of my model here is that moral/status judgements (like "we should blame X for Y") like to sneak into epistemic models and masquerade as weight-bearing components of predictions. The "virtue theory of metabolism", which Yudkowsky jokes about a few times in the sequences, is an excellent example of this sort of thing, though I think it happens much more often and usually much more subtly than that.
My answer to that problem on a personal level is to rip out the weeds wherever I notice them, and build a dome around the garden to keep the spores out. In other words: keep morality/status fights strictly out of epistemics in my own head. In principle, there is zero reason why status-laden value judgements should ever be directly involved in predictive matters. (Even when we're trying to model our own value judgements, the analysis/engagement distinction still applies.)
Epistemics will still be involved in status fights, bu...
This case is more complicated than the corporate cases because the powerful person (me) was getting merely the appearance of what she wanted (a genuine relationship with a compatible person), not the real thing. And because the exploited party was either me or Connor, not a third party like bank customers. No one thinks the Wells Fargo CEO was a victim the way I arguably was.
I think you have misunderstood the Wells Fargo case. These fake accounts generally didn't bring in any material revenue; they were just about internal 'number of new accounts' targets. It was directly a case of bank employees being incentivised to defraud management and investors, which they then did. If ordinary Wells employees had not behaved fraudulently, all the targets would have been missed, informing management/investors about their mis-calibration, and more appropriate targets would have been set. In this case power didn't buy distance from the crime, except in the sense that it meant you couldn't tell you were being cheated.
For more on this I recommend the prolific Matt Levine:
There's a standard story in most bank scandals, in which small groups of highly paid tr...
Hmm. I'm not sure I fully understand the Wells Fargo case but I interpreted it as a concern between four parties:
1. The customers who had fake accounts signed up in their names.
2. The employees doing the fake signups.
3. A middle management tier, which set quotas.
4. A higher management tier that (presumably?) wanted middle management to actually be making money.
So, the people being defrauded are not the customers, but the higher management tier, basically. (But, also, this entire thing might just be a weird game that middle management tiers play with each other for complex, mostly orthogonal reasons. Elizabeth references the book Moral Mazes, which Zvi provides an abridged version of here, which I suspect is important background here, although not 100% sure where Elizabeth was coming from)
[note: I hadn't necessarily interpreted the original Wells Fargo story with the subtext I outline here, but when you point out the correct subtext it doesn't feel like it shifts much how the example relates to the overall point]
Something about this piece felt off to me, like I couldn't see anything specifically wrong with it but still had a strong instinctive prior that lots of things were wrong.
After thinking about it for a bit, I think my main heuristic is: this whole piece sounds like it's built on a conflict-theory worldview. The whole question of the essay is basically "who should we be angry at"? Based on that, I'd expect that many or most of the individual examples are probably inaccurately understood or poorly analyzed. Larks's comment about the Wells Fargo case confirms that instinct for one of the examples.
Then I started thinking about the "conflict theory = predictably wrong" heuristic. We say "politics is the mindkiller", but I don't think that's quite right - people have plenty of intelligent discussions about policy, even when those discussions inherently involve politics. "Tribalism is the mindkiller" is another obvious formulation, but I'd also propose "conflict theory is the mindkiller". Models like "arguments are soldiers" or "our enemies are evil" are the core of Yudkowsky's orig...
I changed my mind about conflict/mistake theory recently, after thinking about Scott's comments on Zvi's post. I previously thought that people were either conflict theorists or mistake theorists. But I now use the terms to label individual theories rather than people.
To point to a very public example, I don't think Sam Harris is a conflict theorist or a mistake theorist, but instead uses different theories to explain different disagreements. I think Sam Harris views any disagreements with people like Steven Pinker or Daniel Dennett as primarily them making reasoning mistakes, or otherwise failing to notice strong arguments against their position. And I think that Sam Harris views his disagreements with people like <quickly googles Sam Harris controversies> Glenn Greenwald and Ezra Klein as primarily them attacking him for pushing goals different from their tribes'.
I previously felt some not-insubstantial pull to pick sides in the conflict vs mistake theorist tribes, but I don't actually think this is a helpful way of talking, not least because I think that sometimes I will build a mistake theory for why a project failed, and sometimes I will buil...
I think the whole "mistake theory vs conflict theory" thing needs to be examined and explained in greater detail, because there is a lot of potential to get confused about things (at least for me). For example:
Both "mistake statements" and "conflict statements" can be held sincerely, or can be lies strategically used against an enemy. For example, I may genuinely believe that X is racist, and then I would desire to make people aware of a danger X poses. The fact that I do not waste time explaining and examining specific details of X's beliefs is simply because time is a scarce resource, and warning people against a dangerous person is a priority. Or, I may knowingly falsely accuse X of being racist, because I assume that gives me higher probability of winning the tribal fight, compared to a honest debate about our opinions. (Note: The fact that I assume my opponent would win a debate doesn't necessarily imply that I believe he it right. Maybe his opinions are simply more viral; more compatible with existing biases and prejudices of listeners.) Same goes for the mistake theory: I can sincerely explain how most people are not evil and yet Moloc...
I might be missing the forest for the trees, but all of those still feel like they end up making some kinds of predictions based on the model, even if they're not trivial to test. Something like:
If Alice were informed by some neutral party that she took Bob's apple, Charlie would predict that she would not show meaningful remorse or try to make up for the damage beyond trivial gestures like an off-hand "sorry", and would also predict that some other minor extraction of resources is likely to follow, while Diana would predict that Alice would treat her overreach more seriously once informed of it. Something similar can be done on the meta-level.
None of these are slamdunks, and there are a bunch of reasons why the predictions might turn out exactly as laid out by Charlie or Diana, but that just feels like how Bayesian cookies crumble, and I would definitely expect evidence to accumulate over time in one direction or the other.
Strong opinion weakly held: it feels like an iterated version of this prediction-making and tracking over time is how our native bad actor detection algorithms function. It seems to me that shining more light on this mechanism would be good.
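For concreteness, here is a minimal sketch of the kind of iterated prediction-tracking I have in mind -- purely illustrative Python with made-up likelihoods and a made-up track record, not anything drawn from the actual examples above:

```python
# Toy Bayesian updating between two hypothetical models of Alice:
# Charlie's ("Alice is a bad actor") vs Diana's ("Alice made an honest mistake").
# Each observation: does Alice make meaningful amends when informed of the harm?

def update(prior_charlie, p_amends_charlie, p_amends_diana, amends_observed):
    """Posterior probability of Charlie's model after one observation."""
    like_charlie = p_amends_charlie if amends_observed else 1 - p_amends_charlie
    like_diana = p_amends_diana if amends_observed else 1 - p_amends_diana
    joint_charlie = prior_charlie * like_charlie
    return joint_charlie / (joint_charlie + (1 - prior_charlie) * like_diana)

# Assumed likelihoods: Charlie expects amends rarely (0.2), Diana expects them
# often (0.8). Start at 50/50 and feed in a hypothetical track record.
belief = 0.5
track_record = [False, False, True, False]
for amends in track_record:
    belief = update(belief, 0.2, 0.8, amends)
    print(round(belief, 3))  # 0.8, 0.941, 0.8, 0.941 -- drifts toward Charlie
```

No single observation settles it, but the posterior drifts toward whichever model keeps predicting better, which is the sense in which evidence accumulates over iterations.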
Like, from an economic viewpoint, there’s no reason why “the main driver of disagreement is self-interest” would lead to arguing that public choice theory is racist, which was one of Scott’s original examples.
I don't share this intuition. The Baffler article argues:
...IN DECEMBER 1992, AN OBSCURE ACADEMIC JOURNAL published an article by economists Alexander Tabarrok and Tyler Cowen, titled “The Public Choice Theory of John C. Calhoun.” Tabarrok and Cowen, who teach in the notoriously libertarian economics department at George Mason University, argued that the fire-breathing South Carolinian defender of slaveholders’ rights had anticipated “public choice theory,” the sine qua non of modern libertarian political thought.
...
Astutely picking up on the implications of Buchanan’s doctrine, Tabarrok and Cowen enumerated the affinities public choice shared with Calhoun’s fiercely anti-democratic political thought. Calhoun, like Buchanan a century and a half later, had theorized that majority rule tended to repress a select few. Both Buchanan and Calhoun put forward ideas meant to protect an aggrieved if privileged minority. And just as Calhoun argued that laws should only be approved by
I think it's a genuinely difficult problem to draw the boundary between a conflict and a mistake theory, in no small part due to the difficulties in drawing the boundary between lies and unconscious biases (which I rambled a bit about here). You can also see the discussion on No, it's not The Incentives - it's you as a disagreement over where this boundary should be.
That said, one thing I'll point out is that explaining Calhoun and Buchanan's use of public choice theory as entirely a rationalisation for their political goals, is a conflict theory. It's saying that them bringing public choice theory into the conversation was not a good faith attempt to convey how they see the world, but obfuscation in favour of their political side winning. And more broadly saying that public choice theory is racist is a theory that says the reason it is brought up in general is not due to people having differing understandings of economics, but due to people having different political goals and trying to win.
I find for myself that thinking 'conflict theorists' is a single coherent group is confusing me, and that I should instead replace the symbol with the su...
I get what you're saying about theories vs theorists. I agree that there are plenty of people who hold conflict theories about some things but not others, and that there are multiple reasons for holding a conflict theory.
None of this changes the original point: explaining a problem by someone being evil is still a mind-killer. Treating one's own arguments as soldiers is still a mind-killer. Holding a conflict theory about any particular situation is still a mind-killer, at least to the extent that we're talking about conflict theory in the form of "bad thing happens because of this bad person" as opposed to "this person's incentives are misaligned". We can explain other peoples' positions by saying they're using a conflict theory, and that has some predictive power, but we should still expect those people to usually be mind-killed by default - even if their arguments happen to be correct.
As you say, explaining Calhoun and Buchanan's use of public choice theory as entirely a rationalisation for their political goals, is a conflict theory. Saying that people bring up public choice theory not due to differing economic understanding but due to different political goals, is a conflict theory. And I expect people using either those explanations to be mind-killed by default, even if the particular interpretation were correct.
Even after all this discussion of theories vs theorists, "conflict theory = predictably wrong" still seems like a solid heuristic.
Sorry for the delay, a lot has happened in the last week.
Let me point to where I disagree with you.
Holding a conflict theory about any particular situation is still a mind-killer, at least to the extent that we're talking about conflict theory in the form of "bad thing happens because of this bad person" as opposed to "this person's incentives are misaligned".
My sense is you are underestimating the cost of not being able to use conflict theories. Here are some examples, where I feel like prohibiting me from even considering that a bad thing happened because a person was bad will severely limit my ability to think and talk freely about what is actually happening.
Conflict theories tend to explode and eat up communal resources in communities and on the internet generally, and are a limited (though necessary) resource that I want to use with great caution.
But are theories that tend to explode and eat up communal resources therefore less likely to be true? If not, then avoiding them for the sake of preserving communal resources is a systematic distortion on the community's beliefs.
The distortion is probably fine for most human communities: keeping the peace with your co-religionists is more important than doing systematically correct reasoning, because religions aren't trying to succeed by means of doing systematically correct reasoning. But if there is to be such a thing as a rationality community specifically, maybe communal resources that can be destroyed by the truth, should be.
(You said this elsewhere in the thread: "the goal is to have one's beliefs correspond to reality—to use a conflict theory when that's true, a mistake theory when that's true".)
But are theories that tend to explode and eat up communal resources therefore less likely to be true? If not, then avoiding them for the sake of preserving communal resources is a systematic distortion on the community's beliefs.
Expected infrequent discussion of a theory shouldn't lower estimates of its probability. (Does the intuition that such theories should be seen as less likely follow from most natural theories predicting discussion of themselves? Erroneous theorizing also predicts that, for example "If this statement is correct, it will be the only topic of all future discussions.")
In general, it shouldn't be possible to expect well-known systematic distortions for any reason, because they should've been recalibrated away immediately. What not discussing a theory should cause is lack of precision (or progress), not systematic distortion.
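One way to make the "recalibrated away immediately" step explicit is conservation of expected evidence -- a standard identity, offered here as a sketch of the argument rather than something spelled out in the comment:

$$\mathbb{E}_E\big[P(H \mid E)\big] \;=\; \sum_{e} P(E=e)\,P(H \mid E=e) \;=\; \sum_{e} P(H, E=e) \;=\; P(H).$$

If you could predict today that stated credences in some theory $H$ will end up systematically too low by a known amount, you could add that amount back in now; so a distortion that is genuinely well known can't survive in anyone's actual beliefs, and what the taboo costs instead is precision and progress.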
Consider a situation where:
Is your prediction that the participants in the conversation, readers, etc, are not misled by this? Would you predict that, if you gave them a survey afterwards asking for how they would explain X, they in fact give a conflict theory rather than a mistake theory, since they corrected for the distortion due to the conflict theory taboo?
Would you correct your response so?
Perhaps (75% chance?), in part because I've spent >100 hours talking about, reading about, and thinking about good conflict theories. I would have been very likely misled 3 years ago. I was only able to get to this point because enough people around me were willing to break conflict theory taboos.
It is not the case that everybody knows. To get from a state where not everybody knows to a state where everybody knows, it must be possible to talk openly about such things. (I expect the average person on this website to make the correction with <50% probability, even with the alternative framing "Does mistake theory explain this case well?")
It actually does have to be a lot of discussion. Over-attachment to mistake theory (even when a moderate amount of contrary evidence is presented) is a systematic bias I've observed, and it can be explained by factors such as: conformity, social desirability bias (incl. fear), conflict-aversion, desire for a coherent theory that you can talk about with others, getting theories directly from others' statements, being bad at lying (and at detecting lying), etc. (This is similar to the question (and may
...In general, it shouldn't be possible to expect well-known systematic distortions for any reason, because they should've been recalibrated away immediately.
Hm. Is "well-known" good enough here, or do you actually need common knowledge? (I expect you to be better than me at working out the math here.) If it's literally the case that everybody knows that we're not talking about conflict theories, then I agree that everyone can just take that into account and not be confused. But the function of taboos, silencing tactics, &c. among humans would seem to be maintaining a state where everyone doesn't know.
My current sense is that I should think of posing conflict theories as a highly constrained, limited communal resource, and that while spending it will often cause conflict and people to be mind-killed, a rule that says one can never use that resource will mean that when that resource is truly necessary, it won’t be available.
"Talking about conflict is a limited resource" seems very, very off to me.
There are two relevant resources in a community. One is actual trustworthiness: how often do people inform each other (rather than deceive each other), help each other (rather than cheat each other), etc. The other is correct beliefs about trustworthiness: are people well-calibrated and accurate about how trustworthy others (both in particular and in general) are. These are both resources. It's strictly better to have more of each of them.
If Bob deceives me,
I desire to believe that Bob deceives me;
If Bob does not deceive me,
I desire to believe that Bob does not deceive me;
Let me not become attached to beliefs I may not want.
Talking about conflict in ways that are wrong is damaging a resource (it's causing people to have incorrect beliefs). Using clickbaity conflict-y titles w
...I’m hearing you say “Politics is not the mind-killer, talking inaccurately and carelessly about politics is the mind-killer! If we all just say true things and don’t try to grab attention with misleading headlines then we’ll definitely just have a great and net positive conversation and nobody will feel needlessly threatened or attacked”. I feel like you are aware of how toxic things like bravery debates are, and I expect you agree they’d be toxic even if everyone tried very hard to only say true things. I’m confused.
I’m saying it always bears a cost, and a high one, but not a cost that cannot be overcome. I think that the cost is different in different communities, and this depends on the incentives, norms and culture in those communities, and you can build spaces where a lot of good discussion can happen with low cost.
You’re right that Hanson feels to me pretty different from my other examples, in that I don’t feel like marginal Overcoming Bias blog posts are paying a cost. I suspect this might have to do with the fact that Hanson has sent a lot of very costly signals that he is not fighting a side but is just trying to be an interested scientist. But I’m not sure why I feel differently in this case.
I'm going to try explaining my view and how it differs from the "politics is the mind killer" slogan.
Thank you, this comment helped me understand your position quite a bit. You're right, discussing conflict theories is not inherently costly; it's that they're often costly because powerful optimization pressures are punishing discussion of them.
I strongly agree with you here:
I am advocating a conflict theory, rather than a mistake theory, for why discussions of conflict can be bad. I think, if you consider conflict vs mistake theories, you will find that a conflict theory makes better predictions for what sorts of errors people make in the course of discussing conflict, than a mistake theory does.
This is also a large part of my model of why discussions of conflict often go bad - power struggles are being enacted out through (and systematically distorting the use of) language and reasoning.
(I am quite tempted to add that even in a room with mostly scribes, the incentive for actors to pretend to be scribes can make it very hard for a scribe to figure out whether someone is a scribe or an actor, and this information asymmetry can lead to scribes distrusting all attempts to discuss conflict theories and reading such discussions as political coordination.
Yet I not...
I mostly think of conflict theory as a worldview characterized by (a) assuming that bad things mostly happen because of bad people, and (b) assuming that the solution is mostly to punish them and/or move power away from them. I think of mistake theory as a worldview characterized by assuming that people do not intend to be evil (although they can still have bad incentives).
Why not integrate both perspectives: people make genuine mistakes due to cognitive limitations, and they also genuinely have different values that are in conflict with each other, and the right way to frame these problems is "bargaining by bounded rationalists" where "bargaining" can include negotiation, politics, and war. (I made a 2012 post suggesting this frame, but maybe should have given it a catchy name...)
Personally, my view is that mechanism design is more-or-less-always the right way to think about these kinds of problems. Sometimes that will lead to the conclusion that someone is making an honest mistake, sometimes it will lead to the conclusion that punishment is an efficient strategy, and often it will lead to other conclusions.
(I wrote the above before seeing this part.) I guess "mechanism des
...If your concern is that this is evidence that the OP is wrong (since it has conflict-theoretic components, which are mindkillers), it seems important to establish that there are important false object-level claims, not just things that make such mistakes likely. If you can't do that, maybe change your mind about how much conflict theory introduces mistakes?
If you're just arguing that laying out such models is likely to have bad consequences for readers, this is an important risk to track, but it's also changing the subject from the question of whether the OP's models do a good job explaining the data.
This feels very similar to the debate on the MTG color system a while ago, which went (as half-remembered so much later that I don't remember how long it's been, and it's since been deleted):
A: [proposal of personality sorting system.]
B: [statement/argument that personality sorting systems are typically useless-to-harmful]
A: but this doesn't respond to my particular personality system.
I'm sympathetic to B (equivalent to johnswentworth) here. If members of category X are generally useless-to-harmful, it's unfair and anti-truth to disallow incorporating that knowledge into your evaluations of an X. On the other hand, A could have provided rich evidence of why their particular system was good, and B could have made the exact same statement, and it would still be true. If there are ever exceptions to the rule of "category X is useless-to-harmful", you need to have a system for identifying them.
[I'm going to keep talking about this in the MTG case because I think a specific case is easier to read than "category X", and it's less loaded for me than talking about my own piece, if the correspondences aren't obvious let me...
I probably won't get to that soon, but I'll put it on the list.
I also want to say that I'm sorry for kicking off this giant tangential thread on your post. I know this sort of thing can be a disincentive to write in the future, so I want to explicitly say that you're a good writer, this was a piece worth reading, and I would like to read more of your posts in the future.
The whole question of the essay is basically “who should we be angry at”?
While the post has a few sentences about moral blame, the main thesis is that power allows people to avoid committing direct crime while having less-powerful people commit those crimes instead (and hiding this from the powerful people). This is a denotative statement that can be evaluated independent of "who should we be angry at".
Such denotative statements are very useful when considering different mechanisms for resolving principal-agent problems. Mechanism design is, to a large extent, a conflict theory, because it assumes conflicts of interest between different agents, and is determining what consequences should happen to different agents, e.g. in some cases "who we should be angry at" if that's the best available implementation.
Quoting Scott's post:
Mistake theorists treat politics as science, engineering, or medicine. The State is diseased. We’re all doctors, standing around arguing over the best diagnosis and cure. Some of us have good ideas, others have bad ideas that wouldn’t help, or that would cause too many side effects.
Conflict theorists treat politics as war. Different blocs with different interests are forever fighting to determine whether the State exists to enrich the Elites or to help the People.
Part of what seems strange about drawing the line at denotative vs. enactive speech is that there are conflict theorists who can speak coherently/articulately in a denotative fashion (about conflict), e.g.:
It seems both coherent and consistent with conflict theory to believe "some speech is denotative and some speech is enacting conflict."
(I do see a sense in which mechanism design is a mistake theory, in that it assumes that deliberation over the mechanism is possible and desirable; however, once the mechanism is in place, it assumes agents never make mistakes, and differences in action are due to differences in values)
This post makes a fairly straightforward point that has been very helpful for thinking about power. Having several grounding concrete examples really helped as well. The quote from Moral Mazes that gave examples of the sorts of wiping-hands-of-knowledge things executives actually say really helped make this more real to me.
Explains a particular social phenomenon that I hadn't previously been aware of.
Thanks for this insightful piece.
It seems to me that there's a third key message, or possibly a reframing of #1, which is that people without power should be considered less morally culpable for their actions -- e.g., the Wells Fargo employees should be judged less harshly.
The concept of "human error" is often invoked to explain system breakdown as resulting from individual deficiencies (eg, early public discussion of the Boeing 737 MAX crashes had an underlying theme of "Ethiopian and Indonesian pilots are just not as skilled as American pilots") - but a human
...Stumbled across a law concept that splits "power" in this context into finer distinctions.
There is a concept of "qualified immunity". If a police officer commits an act in the course of official duties, lawsuits regarding it should be addressed to the police department, not to the individual officer. Power there does not buy distance from the crime; on the contrary, great power leads to great accountability, even responsibility for things that do not have rules governing them but maybe should. There it can be clear that a police o...
Thanks for writing this, but I don't believe there's broad agreement on either of your main examples.
I don't know anyone who claims taxes should be only proportional to wealth or income. There are those who say it should be super-linear with income or wealth (tax the rich even more than their proportion of income), and those who say not to tax based on wealth or income, but on consumption (my preference) or the hyper-pragmatic "tax everyone, we need the money". And of course many more nuanced variations.
I likewise don't th...
Introduction
Taxes are typically meant to be proportional to money (or negative externalities, but that's not what I'm focusing on). But one thing money buys you is flexibility, which can be used to avoid taxes. Because of this, taxes aimed at the wealthy tend to end up hitting the well-off-or-rich-but-not-truly-wealthy harder, and tax cuts aimed at the poor end up helping the middle class. Examples (feel free to stop reading these when you get the idea, this is just the analogy section of the essay):
Note that most of these are perfectly legal and the rest are borderline. But we're still not getting the result we want, of taxes being proportional to income.
When we assess moral blame for a situation, we typically want it to be roughly in proportion to how much power a person has to change said situation. But just like money can be used to evade taxes, power can be used to avoid blame. This results in a distorted blame-distribution apparatus which assigns the least blame to the person most able to change the situation. Allow me a few examples to demonstrate this.
Examples 1 + 2: Corporate Malfeasance
Amazon.com provides a valuable service by letting any idiot sell a book, with minimal overhead. One of the costs of this complete lack of verification is that people will sell things that wouldn't pass verification, such as counterfeits, at great cost to publishers and authors. Amazon could never sell counterfeits directly: they're a large company that's easy to sue. But by setting themselves up as a platform on which other people sell, they enable themselves to profit from counterfeits.
Or take slavery. No company goes “I’m going to go out and enslave people today” (especially not publicly), but not paying people is sometimes cheaper than paying them, so financial pressure will push towards slavery. Public pressure pushes in the opposite direction, so companies try not to visibly use slave labor. But they can’t control what their subcontractors do, and especially not what their subcontractors’ subcontractors’ subcontractors do, and sometimes this results in workers being unpaid and physically blocked from leaving.
Who’s at fault for the subcontractor(^3)’s slave labor? One obvious answer is “the person locking them in during the fire” or “the parent who gives their kid piecework”, and certainly it couldn’t happen without them. But if we say “Nike’s lack of knowledge makes them not responsible”, we give them an incentive to subcontract without asking follow up questions. The executive is probably benefiting more from the system of slave labor than the factory owner is from his little domain, and has more power to change what is happening. If the small factory owner pays fair wages, he gets outcompeted by a factory that does use slave labor. If the Nike CEO decides to insource their manufacturing to ensure fair working conditions, something actually changes.
...Unless consumers switch to a cheaper, slavery-driven shoe brand.
Which is actually really hard to not do. You could choose more expensive shoes, but the profit margin is still bigger if you shrink expenses, so that doesn’t help (which is why Fairtrade was a failure from the workers’ perspective). You can’t investigate the manufacturing conditions of everything you buy-- it’s just too time consuming. But if you punish obvious enslavement and conduct no follow up studies, what you get is obscured enslavement, not decent working conditions.
Moral Mazes describes the general phenomenon on page 21:
These examples differ in an important way from tax structuring: structuring requires seeking out advice and acting on it to achieve the goal. It’s highly agentic. The Wells Fargo and apparel-outsourcing cases required no such agency on the part of executives. They vaguely wished for something (more revenue, fewer expenses), and somehow it happened. An employee who tried to direct the executives’ attention to the fact that they were indirectly employing slaves would probably be fired before they ever reached the executives. Executives are not only outsourcing their dirty work, they’re outsourcing knowledge of their dirty work.
[Details of personal anecdotes changed both intentionally and by the vagaries of human memory]
Example/Exception 2.5: Corporate Malfeasance Gone Wrong
The Wells Fargo account fraud scandal: in order to meet quotas, entry level Wells Fargo employees created millions of unauthorized accounts (typically extra services for existing customers). I originally included this as an example of "executives incentivizing entry level employees to commit fraud on their behalf", but it turns out Wells Fargo made almost no money off the fraud -- $2m over five years, which hardly seems worth the employees' time, much less the $185m fine. I've left this in as an example of how the incentives-not-orders system doesn't always work in powerful people's favor.
Thanks to Larks for pointing this out.
Example 3: Foreign Medical Care
My cousin Angela broke her leg while traveling in Thailand, and was delighted by the level of care she received at the Thai hospital-- not just medically, but socially. Nurses brought her flowers and were just generally nicer than their American counterparts. Her interpretation was that Thailand was a place motivated by love and kindness, not money, and Americans should aspire to this level of regard for their fellow human being. My interpretation was that she had enough money to buy the goodwill of everyone in the room without noticing, so what she should have learned is that being rich is awesome, and that being an American who travels internationally is enough to qualify you as rich.
This is mostly a success story for the free market: Angela got good medical care and the nurses got money (I’m assuming). Any crimes in this story were committed off-screen. But Angela was certainly benefiting from the nurses’ constrained choices in life. And had she had actual power to affect healthcare in the US, trying to fix it based on what she learned in Thailand would have done a lot of damage.
Example 4: My Dating an Artist Experience
My starving-artist ex-boyfriend, Connor, stayed with me for two months after a little bad luck and a lot of bad decisions cost him his job and then apartment (this was back when I had a two bedroom apartment to myself-- I miss Seattle). During this time we had one big fight. My view on the fight now is that I was locally in the right but globally the disagreement was indicative of irreconcilable differences that should have led us to break up. That was delayed by months when he capitulated.
One possibility is that he genuinely thought he could change and that I was worth the attempt. Another is that he saw the incompatibility, or knew things that should have led him to see it, but lied or blocked out the knowledge so that he could keep living with me. This would be a shitty, manipulative thing for him to do. On the other hand, what did I expect? If the punishment for breaking up with me was, best case scenario, moving into a homeless shelter, of course he felt pressure to appease me.
It wasn’t my fault he felt that pressure, any more than it was Angela’s fault her nurses were born with fewer options than her. Time in my spare bedroom was a gift to him I had no obligation to keep giving. But if I’d really valued a coercion free decision, I would have committed to housing him independent of our relationship. Although if that becomes common knowledge, it just means people can’t make an uncoerced decision to date me at all. And if helping Connor at all meant a commitment to do so forever, he would get a lot less help.
This case is more like the Wells Fargo case than Amazon or Nike. I was getting only the appearance of what I wanted (a genuine relationship with a compatible person), not the real thing. Nonetheless, the universe was contorting itself to give me the appearance of what I wanted.
Summary
What all of these stories have in common is that (relatively) powerful people’s desires were met by people less powerful than them, without them having to take responsibility for the action or sometimes even the desire. Society conspired to give them what they wanted (or in the case of Connor and Wells Fargo, a facsimile of what they wanted) without them having to articulate the want, even to themselves. That’s what power means: ability to make the game come out like you want. Disempowered people are forced to consciously notice things (e.g., this budget is unreachable) and make plans (e.g., slavery) where a powerful person wouldn’t. And it’s unfair to judge them for doing so while ignoring the morality of the powerful who never consider the system that brings them such nice things.
Take home message:
This piece was inspired by a conversation with and benefited from comments by Ben Hoffman. I'd also like to thank several commenters on Facebook for comments on an earlier draft and Justis Mills for copyediting.