The surgeon situation happens to lots of professionals who have other people's lives or secrets in their hands.
I know a social worker who is gay and works with gay men. It's a small community, and in some cases he's been at a club and seen friends go home with clients who he knows are HIV positive. Even though he knows his friends are about to risk their lives, he can't say anything that reveals the client's HIV status. Because if people believed professionals were breaking the code of confidentiality, even to save lives, they wouldn't get tested for HIV in the first place.
A good code of ethics, professional or otherwise, takes that kind of long view.
What if, instead of deciding whether the doctor murders the patient in secret when she comes to the hospital, we have to decide whether the government (perhaps armed with genetic screening results seized from police databases and companies like 23andMe) passes a law allowing police to openly kill and confiscate organs from anyone whose organs could presumably save five or more transplant patients?
As far as I can tell, this would have no bad effects beyond the obvious one of killing the people involved - it wouldn't make people less likely to go to hospitals or anything - but it keeps most of the creepiness of the original. Which makes me think that, although everything you say in this post is both true and important (and I've upvoted it), it doesn't get to the heart of why most people are creeped out by the transplant example.
It would have quite a few bad knock-on effects:
1) You have handed the government the ability to decide, at any point, to kill anyone it considers undesirable, provided it can find five compatible transplant recipients; this is a massive increase in its power, and a big step towards a totalitarian society.
2) You are discouraging people from undergoing genetic screening.
3) You are discouraging people from living healthily. If you are unhealthy, your organs are of less use to the government, and hence you are more likely to survive.
4) You are encouraging people to go off the grid, since people who are off the grid are less likely to be found for the purposes of harvesting.
Yes, these logical reasons are not directly the reason people are creeped out; but were you to find a less harmful scenario, you would also likely find the scenario less creepy.
For instance, most people would find it less creepy if the harvesting were limited only to those who are already in prison on long (20+ year) sentences; and it also seems that that policy would have fewer indirect harms.
As far as I can tell, this would have no bad effects beyond the obvious one of killing the people involved - it wouldn't make people less likely to go to hospitals or anything
No, but it would make them afraid to go outside, or at least to go anywhere in the vicinity of police. This law might encourage people to walk around with weapons to deter police from nabbing them, and/or to fight back. People would be afraid to get genetic screening lest they make their organs a target. They would be afraid to go to police stations to report crimes lest they come out minus a kidney.
People with good organs would start bribing the police to defer their harvesting, and corruption would become rampant. Law and order would break down.
This sounds like an excellent plot for a science fiction movie about a dystopia, which indicates that it fails on consequentialist grounds unless our utility function is so warped that we are willing to create a police state to give organ transplants.
Fourth reply: people deeply value autonomy.
Fifth reply: While in this case I don't think that the policy is the right consequentialist thing to do, in general I expect consequentialism to endorse some decisions that violate our current commonsense morality. Such decisions are usually seen as moral progress in retrospect.
The probability of being killed in such a way would be tiny and wouldn't significantly alter expected lifespan. However, people are bad at intuitive risk evaluation, and even if any given person were at least twice as likely to have their life saved as destroyed by the policy, people would feel endangered and unhappy, which may outweigh the positive benefit. But if this concern didn't apply (e.g. if most people learned to evaluate risks correctly on the intuitive level), I'd bite the bullet and vote for the policy.
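To make that concrete, here is a minimal back-of-the-envelope sketch in Python. All the numbers are invented for illustration; the only assumption carried over from the claim above is that the policy is twice as likely to save a given person as to kill them.

```python
# Invented magnitudes; only the structure of the comparison matters.
P_KILLED_FOR_ORGANS = 1e-6      # assumed lifetime chance the policy kills you
P_SAVED_BY_TRANSPLANT = 2e-6    # assumed chance it saves you ("twice as likely")
YEARS_AT_STAKE = 30             # assumed life-years gained or lost either way

expected_years = YEARS_AT_STAKE * (P_SAVED_BY_TRANSPLANT - P_KILLED_FOR_ORGANS)
minutes = expected_years * 365.25 * 24 * 60
print(f"Expected lifespan change: +{minutes:.1f} minutes")   # ~ +15.8 minutes

# The expected effect is positive but tiny; the felt fear of being harvested
# could easily swamp it in experienced (dis)utility, as argued above.
```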
By the way, upvoted for correct application of least convenient possible world technique.
This post really brings to light an inkling I had a while ago: TDT feels vaguely Kantian.
Compare:
"Choose as though controlling the logical output of the abstract computation you implement, including the output of all other instantiations and simulations of that computation."
"Act only according to that maxim whereby you can at the same time will that it should become a universal law without contradiction."
Now, they're clearly not the same, but they are similar enough that we shouldn't be surprised that consequentialism under TDT alleviates some of our concerns about traditional consequentialism. I find this exciting, but it also makes me suspicious. Kantian moral theory has some serious problems, and so I wonder if there might be analogous issues in CON+TDT. And I see some. I'll leave out the Kantian equivalent unless someone is interested:
"What happens in general if everyone at least as smart as me deduces that I would do X whenever I'm in situation Y"?
The problem is that no two situations are strictly speaking identical (putting aside exact simulations and other universes). That means CON+TDT doesn't prohibit a decision to carve up a vagran...
...But what if the doctor is confident of keeping it a secret? Well, then causal decision theory would indeed tell her to harvest his organs, but TDT (and also UDT) would strongly advise her against it. Because if TDT endorsed the action, then other people would be able to deduce that TDT endorsed the action, and that (whether or not it had happened in any particular case) their lives would be in danger in any hospital run by a timeless decision theorist, and then we'd be in much the same boat. Therefore TDT calculates that the correct thing for TDT to output in order to maximize utility is "Don't kill the traveler," and thus the doctor doesn't kill the traveler.
In general, I'd like to offer (without proof) the following rationalist ethical inequality:
Your true valuation of all consequences + a good decision theory ≥ any particular deontology.
Where '≥' is defined, of course, as "gives better expected consequences than". It's an obvious tautology, but bizarrely enough people get it wrong often enough that it's worth making a post on!
I think this essay is about important topics, but too many of them, and they would be better covered if they were separated. I'm just going to focus on the summary: typical objections to consequentialism.* The vast majority of objections to consequentialism are consequentialist, and thus incoherent. This essay explains this well, but I don't think the previous post is a good example of this, and I don't think that TDT is useful here, as a practical matter. Yes, there are examples where the problem is CDT-consequentialism, but that isn't the usual problem. Even w...
I think the point can be made more simply as follows:
Consequentialism is a theory about which states of the world (and thus implicitly actions) are preferable. For it to be true does not require or imply that worlds in which some particular bounded agent (or even all humans) believes in consequentialism, or attempts to act according to its dictates, are preferable to the alternatives.
The obvious and simple counterexample is an alien race that tortures all of humanity if it uses its mind-reading technology on anyone and ascertains that they believe in consequentialism. In such a situation consequentialism obviously entails that it is better for everyone to believe in some non-consequentialist view of morality.
It's a stupid dilemma, since the optimal move is obviously for the patients to play Russian roulette. The doctor doesn't even have any decisions to make, and should optimally be kept ignorant until he or she hears the loud bang. Consider this highly artificial situation: five tissue matches, each needing a different organ, all of whom will die without it, and with no likely donors. (What is this, Clone Club health issues?) Well, they are all dead if they do nothing. So: Russian roulette among the already doomed. The upsides clearly outweigh the downsides, and the re...
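Since the scenario's numbers make the case, here is the arithmetic spelled out in a short Python sketch. It assumes, as the comment stipulates, that the patients are mutual tissue matches each failing in a different organ, so the roulette loser's remaining organs can serve the other four.

```python
patients = 5                          # all five die without a transplant

survivors_if_nothing = 0              # no likely donors are coming
survivors_if_roulette = patients - 1  # one dies (as he would have anyway);
                                      # the other four receive his healthy organs

print(survivors_if_roulette - survivors_if_nothing)   # 4 lives gained
```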
The transplant dilemma is framed in a way that personalizes the healthy young traveler while keeping the other five patients anonymous. This activates the part of our brains that treats people as individuals rather than numbers. There's nothing wrong with the math. We object to the unfairness to the only individual in the story.
This dilemma is usually paired with an example of triage. Here an emergency-room doctor has to choose between saving one severely injured patient or five moderately injured patients. Five lives or one, the numbers are the same, but as long as all six patients are anonymous, it remains a numeric problem, and no one has a problem with the math.
"But what's actually right probably doesn't include a component of making oneself stupid with regard to the actual circumstances in order to prevent other parts of one's mind from hijacking the decision.
What you probably meant: "Rational minds should have a rational theory of ethics; this leads to better consequences."
My late-night reading: "A deontological theory of ethics is not actually right. It is wrong. Morally wrong."
I am not sure what caused me to read it this way, but it cracked me up.
Similarly, the purported reductios of consequentialism rely on the following two tricks: they implicitly assume that consequentialists must care only about the immediate consequences of an action, or they implicitly assume that consequentialists must be causal decision theorists.
"TDT + consequentialism" seems like it isn't a consequentialist theory any more -- it's taking into account things that are not consequences. ("Acausal consequence" seems like an oxymoron, and if not, I would like to know what sort of 'acausal consequences' a TDT-consequentialist should consider.) This feels much more like the Kantian categorical imperative, all dressed up with decision theory.
The wolves in The Jungle Book learned to "seven times never kill Man", after learning that to hurt one man means many other men with guns coming to kill wolves in return.
Using this to support your statement lowered my credence therein.
Though mind you, even against animals, vengeance is rather useful, because even animals can model humans to some extent. The wolves in The Jungle Book learned to "seven times never kill Man", after learning that to hurt one man means many other men with guns coming to kill wolves in return.
Beware fictional evidence. I suspect that wolves might be smart enough in individual cases to recognize humans are a big nasty threat they don't want to mess with. But that makes sense in a context without any understanding of vengeance.
I wonder if most of the responses to JJT's thought experiment consider the least convenient possible world. (Recall Yvain's insightful discussion about Pascal's wager?)
Most of the responses that I have read try to argue that if the act of killing a healthy person to take his organs for people missing organs were generalized, this would make things worse.
By the way, this worry about generalizing one's individual act feels remarkably close to the thought of Kant - oh, the irony! - whose "first formulation of the CI states that you are to 'act only in accordance wit...
Your doctor with 5 organs strikes me as Vizzini's Princess Bride dilemma: "I am not a great fool, so I can clearly not choose the wine in front of you."
So it goes, calculating I-know-you-know-I-know unto silliness. Consequentialists I've recently heard lecturing went to great lengths, as you did, to rationalize what they 'knew' to be right. Can you deny it? The GOAL of the example was to show that "right thinking" consequentialists would come up with the same thing all our reptile brains are telling us to do.
When you throw a ball...
Consequentialism and deontologism can be encoded in terms of one another (please allow me to temporarily mischaracterize the discussion as if there were only two options, consequentialism and deontologism). Both theories have "free parameters": consequentialism has preferences over states, and deontologism has precepts (should-rules). By carefully setting the free parameters, you can turn one into the other. The deontologist can say "You should make decisions by considering their consequences according to this utility function", and the ...
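Here is a minimal sketch of that parameter-swapping in Python (my own illustration; the precept and the utility function are invented examples, not anything from the comment):

```python
NEG_INF = float("-inf")

def deontology_as_utility(action, lives_saved):
    """A 'consequentialist' utility function that merely enforces a precept:
    any violation of the rule dominates every other consideration."""
    if action == "kill_innocent":       # the encoded precept
        return NEG_INF
    return lives_saved                  # otherwise ordinary welfare counts

def utility_as_precept(utility_fn, candidate_actions, lives_saved):
    """A 'deontological' should-rule that says: take the utility-maximizing action."""
    return max(candidate_actions, key=lambda a: utility_fn(a, lives_saved))

# The precept now just is expected-utility maximization, and the utility
# function now just is rule-following:
print(utility_as_precept(deontology_as_utility,
                         ["kill_innocent", "do_nothing"], lives_saved=5))
# -> do_nothing
```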
If I understand correctly, you may also reach your position without using a non-causal decision theory if you mix utilitarianism with the deontological constraint of being honest (or at least meta-honest; see https://www.lesswrong.com/posts/xdwbX9pFEr7Pomaxv/meta-honesty-firming-up-honesty-around-its-edge-cases) about the moral decisions you would make.
If people would ask you whether you would kill/did kill a patient, and you couldn't confidently say "No" (because of the deontological constraint of (meta-)honesty), that would be pretty bad, so you must...
...But what if the doctor is confident of keeping it a secret? Well, then causal decision theory would indeed tell her to harvest his organs, but TDT (and also UDT) would strongly advise her against it. Because if TDT endorsed the action, then other people would be able to deduce that TDT endorsed the action, and that (whether or not it had happened in any particular case) their lives would be in danger in any hospital run by a timeless decision theorist, and then we'd be in much the same boat. Therefore TDT calculates that the correct thing for TDT to output in order to maximize utility is "Don't kill the traveler," and thus the doctor doesn't kill the traveler.
I'm not so convinced that the doctor should not harvest the organs.
In order for there to be a general rule against organ harvesting by that doctor, there would have to be enough other people who follow TDT, who would make the same disinterested decision that the doctor did, and who would be caught and scandalized by the media, that people all over the place would stop going to the doctor's office. I don't think it's very likely that all of those conditions are met sufficiently. Also, the impact of having some people stop going to the doctor's and get sick might arguably no...
Because if TDT endorsed the action, then other people would be able to deduce that TDT endorsed the action, and that (whether or not it had happened in any particular case) their lives would be in danger in any hospital run by a timeless decision theorist, and then we'd be in much the same boat. Therefore TDT calculates that the correct thing for TDT to output in order to maximize utility is "Don't kill the traveler," and thus the doctor doesn't kill the traveler.
TDT could deduce that people would deduce that TDT would not endorse the action, ...
Summary: If you object to consequentialist ethical theories because you think they endorse horrible or catastrophic decisions, then you may instead be objecting to short-sighted utility functions or poor decision theories.
Recommended: Decision Theory Paradox: PD with Three Implies Chaos?
Related: The "Intuitions" Behind "Utilitarianism"
The simple idea that we ought to choose actions according to their probable consequences, ever since it was formulated, has garnered a rather shocking amount of dissent. Part of this may be due to causes other than philosophical objections, and some of the objections get into the metaphysics of metaethics. But there's a fair amount of opposition on rather simple grounds: that consequentialist reasoning appears to endorse bad decisions, either in the long run or as an effect of collective action.
Every so often, you'll hear someone offer a reductio ad absurdum of the following form: "Consider dilemma X. If we were consequentialists, then we would be forced to choose Y. But in the long run (or if widely adopted) the strategy of choosing Y leads to horrible consequence Z, and so consequentialism fails on its own terms."
There's something fishy about the argument when you lay it out like that: if it can be known that the strategy of choosing Y has horrible consequence Z, then why do we agree that consequentialists choose Y? In fact, there are two further unstated assumptions in every such argument I've heard, and it is those assumptions rather than consequentialism on which the absurdity really falls. But to discuss the assumptions, we need to delve into a bit of decision theory.
In my last post, I posed an apparent paradox: a case where it looked as if a simple rule could trump the most rational of decision theories in a fair fight. But there was a sleight of hand involved (which, to your credit, many of you spotted immediately). I judged Timeless Decision Theory on the basis of its long-term success, but each agent was stipulated to only care about its immediate children, not any further descendants! And indeed, the strategy of allowing free-riding defectors maximizes the number of an agent's immediate children, albeit at the price of hampering future generations by cluttering the field with defectors.1
If instead we let the TDT agents care about their distant descendants, then they'll crowd out the defectors by only cooperating when both other agents are TDT,2 and profit with a higher sustained growth rate once they form a supermajority. Not only do the TDTs with properly long-term decision theories beat out what I called DefectBots, but they get at least a fair fight against the carefully chosen simple algorithm I called CliqueBots. The paradox vanishes once you allow the agents to care about the long-term consequences of their choice.
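For readers who want to poke at this, here is a toy population sketch in Python. The payoff table and the matching scheme are stand-ins of my own, not the actual setup from the last post; it only illustrates the qualitative claim that agents who cooperate solely with confirmed copies of their own decision theory can sustain a higher growth rate than unconditional defectors once they are common enough.

```python
import random
from collections import Counter

# Offspring for (my_move, number of *other* cooperators in my group of three).
# Invented payoffs, chosen only so that mutual cooperation out-grows defection.
PAYOFF = {("C", 2): 3, ("C", 1): 1, ("C", 0): 0,
          ("D", 2): 4, ("D", 1): 2, ("D", 0): 1}

def generation(pop, cap=300):
    random.shuffle(pop)
    children = []
    for i in range(0, len(pop) - len(pop) % 3, 3):
        group = pop[i:i + 3]
        all_tdt = all(a == "TDT" for a in group)    # TDT cooperates only with TDT
        moves = ["C" if (a == "TDT" and all_tdt) else "D" for a in group]
        coop = moves.count("C")
        for agent, move in zip(group, moves):
            children += [agent] * PAYOFF[(move, coop - (move == "C"))]
    random.shuffle(children)
    return children[:cap]                           # hold the population size fixed

pop = ["TDT"] * 200 + ["DefectBot"] * 100
for _ in range(30):
    pop = generation(pop)
print(Counter(pop))   # TDT typically takes over: its pure groups grow fastest
```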
Similarly, the purported reductios of consequentialism rely on the following two tricks: they implicitly assume that consequentialists must care only about the immediate consequences of an action, or they implicitly assume that consequentialists must be causal decision theorists.3
Let's consider one of the more famous examples, a dilemma posed by Judith Jarvis Thomson:

A brilliant transplant surgeon has five patients, each in need of a different organ, each of whom will die without that organ. Unfortunately, there are no organs available to perform any of these five transplant operations. A healthy young traveler, just passing through the city the doctor works in, comes in for a routine checkup. In the course of doing the checkup, the doctor discovers that his organs are compatible with all five of his dying patients. Suppose further that if the young man were to disappear, no one would suspect the doctor.
First, we can presume that the doctor cares about the welfare, not just of the five patients and the traveler, but of people more generally. If we drop the last supposition for a moment, it's clear that a consequentialist utilitarian doctor shouldn't kill the traveler for his organs; if word gets out that doctors do that sort of thing, then people will stay away from hospitals unless they're either exceptional altruists or at the edge of death, and this will result in people being less healthy overall.4
But what if the doctor is confident of keeping it a secret? Well, then causal decision theory would indeed tell her to harvest his organs, but TDT (and also UDT) would strongly advise her against it. Because if TDT endorsed the action, then other people would be able to deduce that TDT endorsed the action, and that (whether or not it had happened in any particular case) their lives would be in danger in any hospital run by a timeless decision theorist, and then we'd be in much the same boat. Therefore TDT calculates that the correct thing for TDT to output in order to maximize utility is "Don't kill the traveler,"5 and thus the doctor doesn't kill the traveler.
The question that a good consequentialist ought to be asking themselves is not "What happens in situation Y if I do X?", nor even "What happens in general if I do X whenever I'm in situation Y", but "What happens in general if everyone at least as smart as me deduces that I would do X whenever I'm in situation Y"? That, rather than the others, is the full exploration of the effects of choosing X in situation Y, and not coincidentally it's a colloquial version of Timeless Decision Theory. And as with Hofstadter's superrationality, TDT and UDT will avoid contributing to tragedies of the commons so long as enough people subscribe to them (or base their own decisions on the extrapolations of TDT and UDT).
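As a toy illustration of the gap between those questions, here is a sketch with made-up magnitudes (mine, not the post's) for the transplant case:

```python
# Invented numbers; only the structure of the comparison matters.
LIVES_SAVED_BY_HARVEST = 5     # the five patients
LIVES_LOST_BY_HARVEST = 1      # the traveler
DETERRENCE_COST = 50           # assumed long-run deaths from people avoiding
                               # hospitals they cannot trust

def cdt_question():
    """'What happens in situation Y if I do X?' - only the local causal effect."""
    return LIVES_SAVED_BY_HARVEST - LIVES_LOST_BY_HARVEST         # +4: harvest

def tdt_question():
    """'What happens if everyone at least as smart as me deduces that I would
    do X in situation Y?' The policy is effectively public, so the deterrence
    cost of untrusted hospitals counts against it."""
    return cdt_question() - DETERRENCE_COST                       # -46: don't

print(cdt_question(), tdt_question())   # 4 -46
```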
In general, I'd like to offer (without proof) the following rationalist ethical inequality:
Your true valuation of all consequences + a good decision theory ≥ any particular deontology.
Now, a deontological rule might be easier to calculate, and work practically as well in the vast majority of circumstances (like approximating real physics with Newtonian mechanics). But if you have to deal with an edge case or something unfamiliar, you can get in trouble by persisting with the approximation; if you're programming a GPS, you need relativity. And as rule utilitarians can point out, you need to get your deontological rules from somewhere; if it's not from a careful consequentialist reckoning, then it might not be as trustworthy as it feels.6
Or it could be that particular deontological rules are much more reliable for running on corrupted hardware, and that no amount of caution will prevent people from shooting themselves in the foot if they're allowed to. That is a real concern, and it's beyond the scope of this post. But what's actually right probably doesn't include a component of making oneself stupid with regard to the actual circumstances in order to prevent other parts of one's mind from hijacking the decision. If we ever outgrow this hardware, we ought to leave the deontologies behind with it.
Footnotes:
1. Note that the evolutionary setup is necessary to the "paradox": if Omega dished out utils instead of children, then the short-term strategy is optimal in the long run too.
2. This is only right in a heuristic sense. If the agents suspect Omega will be ending the game soon, or they have too high a temporal discount rate, this won't work quite that way. Also, there's an entire gamut of other decision theories that TDT could include in its circle of cooperators. That's a good feature to have: the CliqueBots from the last post, by contrast, declare war on every other decision theory, and this costs them relative to TDT in a more mixed population (thanks to Jack for the example).
3. One more implicit assumption about consequentialism is the false dichotomy that consequentialists must choose either to be perfectly altruistic utilitarians or perfectly selfish hedonists, with no middle ground for caring about oneself and others to different positive degrees. Oddly enough, few people object to the deontological rules we've developed to avoid helping distant others without incurring guilt.
4. I'm assuming that in the world of the thought experiment, it's good for your health to see a doctor for check-ups and when you're ill. It's a different question whether that hypothetical holds in the real world. Also, while my reply is vulnerable to a least convenient possible world objection, I honestly have no idea how my moral intuitions should translate to a world where (say) people genuinely didn't mind knowing that doctors might do this as long as it maximized the lives saved.
5. The sort of epistemic advantage that would be necessary for TDT to conclude otherwise is implausible for a human being, and even in that case, there are decision theories like UDT that would refuse nonetheless (for the sake of other worlds where people suspected doctors of having such an epistemic advantage).
6. The reason that morality feels like deontology to us is an evolutionary one: if you haven't yet built an excellent consequentialist with a proper decision theory, then hard-coded rules are much more reliable than explicit reasoning.