Robin Hanson proposed stores where banned products could be sold.1 There are a number of excellent arguments for such a policy—an inherent right of individual liberty, the career incentive of bureaucrats to prohibit everything, legislators being just as biased as individuals. But even so (I replied), some poor, honest, not overwhelmingly educated mother of five children is going to go into these stores and buy a “Dr. Snakeoil’s Sulfuric Acid Drink” for her arthritis and die, leaving her orphans to weep on national television.
I was just making a factual observation. Why did some people think it was an argument in favor of regulation?
On questions of simple fact (for example, whether Earthly life arose by natural selection) there’s a legitimate expectation that the argument should be a one-sided battle; the facts themselves are either one way or another, and the so-called “balance of evidence” should reflect this. Indeed, under the Bayesian definition of evidence, “strong evidence” is just that sort of evidence which we only expect to find on one side of an argument.
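As a minimal sketch of that Bayesian point, in notation the essay itself doesn't use: write $H$ for the hypothesis and $E$ for the evidence. The odds form of Bayes's theorem is

\[
\frac{P(H \mid E)}{P(\neg H \mid E)} \;=\; \frac{P(E \mid H)}{P(E \mid \neg H)} \cdot \frac{P(H)}{P(\neg H)},
\]

and "strong evidence" is evidence whose likelihood ratio $P(E \mid H)/P(E \mid \neg H)$ is far from 1: an observation we should expect to find almost exclusively on one side of the question.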
But there is no reason for complex actions with many consequences to exhibit this one-sidedness property. Why do people seem to want their policy debates to be one-sided?
Politics is the mind-killer. Arguments are soldiers. Once you know which side you’re on, you must support all arguments of that side, and attack all arguments that appear to favor the enemy side; otherwise it’s like stabbing your soldiers in the back. If you abide within that pattern, policy debates will also appear one-sided to you—the costs and drawbacks of your favored policy are enemy soldiers, to be attacked by any means necessary.
One should also be aware of a related failure pattern: thinking that the course of Deep Wisdom is to compromise with perfect evenness between whichever two policy positions receive the most airtime. A policy may legitimately have lopsided costs or benefits. If policy questions were not tilted one way or the other, we would be unable to make decisions about them. But there is also a human tendency to deny all costs of a favored policy, or deny all benefits of a disfavored policy; and people will therefore tend to think policy tradeoffs are tilted much further than they actually are.
If you allow shops that sell otherwise banned products, some poor, honest, poorly educated mother of five kids is going to buy something that kills her. This is a prediction about a factual consequence, and as a factual question it appears rather straightforward—a sane person should readily confess this to be true regardless of which stance they take on the policy issue. You may also think that making things illegal just makes them more expensive, that regulators will abuse their power, or that her individual freedom trumps your desire to meddle with her life. But, as a matter of simple fact, she’s still going to die.
We live in an unfair universe. Like all primates, humans have strong negative reactions to perceived unfairness; thus we find this fact stressful. There are two popular methods of dealing with the resulting cognitive dissonance. First, one may change one’s view of the facts—deny that the unfair events took place, or edit the history to make it appear fair.2 Second, one may change one’s morality—deny that the events are unfair.
Some libertarians might say that if you go into a “banned products shop,” passing clear warning labels that say THINGS IN THIS STORE MAY KILL YOU, and buy something that kills you, then it’s your own fault and you deserve it. If that were a moral truth, there would be no downside to having shops that sell banned products. It wouldn’t just be a net benefit, it would be a one-sided tradeoff with no drawbacks.
Others argue that regulators can be trained to choose rationally and in harmony with consumer interests; if those were the facts of the matter then (in their moral view) there would be no downside to regulation.
Like it or not, there’s a birth lottery for intelligence—though this is one of the cases where the universe’s unfairness is so extreme that many people choose to deny the facts. The experimental evidence for a purely genetic component of 0.6–0.8 is overwhelming, but even if this were to be denied, you don’t choose your parental upbringing or your early schools either.
I was raised to believe that denying reality is a moral wrong. If I were to engage in wishful optimism about how Sulfuric Acid Drink was likely to benefit me, I would be doing something that I was warned against and raised to regard as unacceptable. Some people are born into environments—we won’t discuss their genes, because that part is too unfair—where the local witch doctor tells them that it is right to have faith and wrong to be skeptical. In all goodwill, they follow this advice and die. Unlike you, they weren’t raised to believe that people are responsible for their individual choices to follow society’s lead. Do you really think you’re so smart that you would have been a proper scientific skeptic even if you’d been born in 500 CE? Yes, there is a birth lottery, no matter what you believe about genes.
Saying “People who buy dangerous products deserve to get hurt!” is not tough-minded. It is a way of refusing to live in an unfair universe. Real tough-mindedness is saying, “Yes, sulfuric acid is a horrible painful death, and no, that mother of five children didn’t deserve it, but we’re going to keep the shops open anyway because we did this cost-benefit calculation.” Can you imagine a politician saying that? Neither can I. But insofar as economists have the power to influence policy, it might help if they could think it privately—maybe even say it in journal articles, suitably dressed up in polysyllabismic obfuscationalization so the media can’t quote it.
I don’t think that when someone makes a stupid choice and dies, this is a cause for celebration. I count it as a tragedy. It is not always helping people, to save them from the consequences of their own actions; but I draw a moral line at capital punishment. If you’re dead, you can’t learn from your mistakes.
Unfortunately the universe doesn’t agree with me. We’ll see which one of us is still standing when this is over.
1. Robin Hanson et al., “The Hanson-Hughes Debate on ‘The Crack of a Future Dawn,’” Journal of Evolution and Technology 16, no. 1 (2007): 99–126, http://jetpress.org/v16/hanson.pdf.
2. This is mediated by the affect heuristic and the just-world fallacy.
There is so much wrong with this example that I don't know where to start.
You make up a hypothetical person who dies because she doesn't heed an explicit warning that says "if you do this, you will die". Then you make several ridiculous claims about this hypothetical person:
1) You claim this event will happen, with absolute certainty.
2) You claim this event occurs because this individual has low intelligence, and that it is unfair because a person does not choose to be born intelligent.
3) You claim this event is a tragedy.
I disagree with all of these, and I will challenge them individually. But first, the meta-claim of this argument is that I am supposed to consider compromises that I don't even believe in. Why would I ever do that? Suppose that the downside of a policy decision is "fewer people will go to heaven". If you are not religious, this sounds like a ridiculous, nonsensical downside, and thus no downside at all. And where do you draw the line on perceived downsides anyway? Do you allow people to just make up metaphysical, superstitious downsides, and then proceed to weigh those as well? That seems like a waste of time to me. Perhaps you do weigh those possibilities but assign them so low a probability that they effectively disappear; clearly, though, your opponent doesn't assign the same probabilities to them as you do. So you have to take the argument to the place where the real disagreements occur. Which leads me to these three claims.
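As a rough sketch of that weighting, with numbers made up purely for illustration: a downside with subjective probability $p$ and cost $C$ contributes

\[
\mathbb{E}[\text{cost}] \;=\; p \cdot C
\]

to the expected cost of the policy. If I set $p = 10^{-9}$, the term effectively vanishes; if my opponent sets $p = 0.5$, it dominates the calculation. The real disagreement is over $p$, not over whether the downside should be weighed at all.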
1) You claim this event will happen, with absolute certainty.
1 is not a probability. Besides, the original article mentions safeguards that should reduce the probability that this event ever happens. The type of safeguard depends on your hypothetical person, of course. Let's say your hypothetical person is drunk. The clerk could give a breathalyzer test. Maybe your hypothetical person isn't aware of the warnings. The clerk could read them off at the checkout. Maybe the person doesn't listen or understand. The clerk could quiz them on the warnings just read, to ensure they sink in.
But then, I guess the real point of the article is that the hypothetical person doesn't believe the warnings, which brings us to:
2) You claim this event occurs because this individual has low intelligence, and that it is unfair because a person does not choose to be born intelligent.
Receiving a warning explicitly stating "if you do this, you will die" is hardly a mental puzzle. Is this really even a measure of intelligence? This seems like a stretch.
Bleach is sold at normal stores, without any restrictions. If you drink it, you could die. Many people have heard this warning. Do people disbelieve it? Do they risk testing the hypothesis on themselves? Why would anyone risk death like this? I am genuinely curious as to how this can be related to intelligence. Someone please explain this to me.
Generally if someone drinks bleach, it is because they believed the warning and wanted to die. Is this a tragedy? Should we ban bleach? This brings me to:
3) You claim this event is a tragedy.
Is it really?
People are hardly a scarce resource right now. In fact, there are either too many of us, or there will be soon. If one person dies, everyone else gets more space and resources. It's kind of like your article on dust specks vs. torture, except that a suicidal person selects themselves, rather than being randomly selected. Unless you apply some argument about determinism and say that a person doesn't choose to be born suicidal (or choose to lead a life whose circumstances would lead anyone to be suicidal, etc.).
Should a person be allowed to commit suicide? If we prevent them from doing so, are we infringing on their rights? Or are they infringing on their own rights? I don't really know. I do know and love some amazing people who have committed suicide, and I wish I could have prevented their deaths. This is a real complication in this issue for me, because I value different people differently: I'd gladly allow many people I've never met to die if it would save one person I love. But I understand that other people don't value the same people I do, so this feeling is not easy to translate into general policies.
Is evolution not fair? If we decide to prop up every unfit individual and prevent every suicide, genetic evolution becomes severely neutered. We can't really adapt to our environment if we don't let it select from us. Thus it would be to our genetic benefit to allow people to die, as that would eventually select out whatever genes caused them to do this. But then, some safety nets seem reasonable. We wouldn't consider banning glasses in order to select for better vision. We need to strike some sort of balance here, and not waste too many resources propping up individuals who will only multiply their cost to everyone with future generations of their genes and memes. I think the balance is currently set at the point where it simply costs too much money to keep someone alive, though we will gladly provide all people with a certain amount of food and shelter. The specific amount provided is under constant debate.
So, are we obligated to protect every random individual ever born? Is it a tragedy if anyone dies? I think that's debatable. It isn't a definite downside. In fact, it could even be an upside.