Asking the Question

Until very recently, I was a hedonic utilitarian. That is, I held ‘happiness is good’ as an axiom – blurring the definition a little by pretending that good emotions other than strict happiness still counted because it made people “happy” to have them – and built up my moral philosophy from there. There were a few problems I couldn’t quite figure out, but by and large, it worked: it produced answers that felt right, and it was the most logically consistent moral system I could find.

But then I read Three Worlds Collide.

The ending didn’t fit within my moral model: it was a scenario in which making people happy seemed wrong. Which raised the question: What’s so great about happiness? If people don’t want happiness, how can you call it good to force it on them? After all, happiness is just a pattern of neural excitation in the brain; it can’t possibly be an intrinsic good, any more than the pattern that produces the thought “2+2=4”.

Well, people like being happy. Happiness is something they want. But it’s by no means all they want: people also want mystery, wonder, excitement, and many other things – and so those things are also good, quite independent of their relation to the specific emotion ‘happiness’. If they also desire occasional sadness and pain, who am I to say they’re wrong? It’s not moral to make people happy against their desires – it’s moral to give people what they want. (Voila, preference utilitarianism.)

But – that’s not a real answer, is it?

If the axiom ‘happiness is good’ didn’t match my idea of morality, that meant I wasn’t really constructing my morality around it. Replacing that axiom with ‘preference fulfillment is good’ would make my logic match my feelings better, but it wouldn’t give me a reason to have those feelings in the first place. So I had to ask the next question: Why is preference fulfillment good? What makes it “good” to give other people what they want?

Why should we care about other people at all?

In other words, why be moral?

~

Human feelings are a product of our evolutionary pressures. Emotions, the things that make us human, are there because they caused the genes that promoted them to become more prevalent in the ancestral environment. That includes the emotions surrounding moral issues: the things that seem so obviously right or wrong seem that way because that feeling was adaptive, not because of any intrinsic quality.

This makes it impossible to trust any moral system based on gut reaction, as most people’s seem to be. Our feelings of right and wrong were engineered to maximize genetic replication, so why should we expect them to tap into objective realms of ‘right’ and ‘wrong’? And in fact, people’s moral judgments tend to be suspiciously biased towards their own interests, though proclaimed with the strength of true belief.

More damningly, such moralities are incapable of coming up with a correct answer. One person can proclaim, say, homosexuality to be objectively right or wrong everywhere for everyone, with no justification except how they feel about it, and in the same breath say that it would still be wrong if they felt the other way. Another person, who does feel the other way, can deny it with equal force. And there’s no conceivable way to decide who’s right.

I became a utilitarian because it seemed to resolve many of the problems associated with purely intuitive morality – it was internally consistent, it relied on a simple premise, and it could provide its practitioners a standard of judgment for moral quandaries.

But even utilitarianism is based on feeling. This is especially true of hedonic utilitarianism, and scarcely less so of preference utilitarianism – we call people getting what they want ‘good’ because it feels good. It lights up our mirror neurons and triggers the altruistic instincts encoded into us by evolution. But evolution’s purposes are not our own (we have no particular interest in our genes’ replication), and so it makes no sense to adopt evolution’s tools as our ultimate goals.

If you can’t derive a moral code from evolution, then you can’t derive it from emotion, the tool of evolution; if you can’t derive morality from emotion, then you can’t say that giving people what they want is objectively good because it feels good; if you can’t do that, you can’t be a utilitarian.

Emotions, of course, are not bad. Even knowing that love was designed to transmit genes, we still want love; we still find it worthwhile to pursue, even knowing that we were built to pursue it. But we can’t hold up love as something objectively good, something that everyone should pursue – we don’t condemn the asexual. In the same way, it’s perfectly reasonable to help other people because it makes you feel good (to pursue warm fuzzies for their own sake), but that emotional justification can’t be used as the basis for a claim that everyone should help other people.

~

So if we can’t rely on feeling to justify morality, why have it at all?

Well, the obvious alternative is that it’s practical. Societies populated by moral individuals – individuals who value the happiness of others – work better than those filled with selfish ones, because the individually selfless acts add up to greater utility for everyone. One only has to imagine a society populated by purely selfish individuals to see why pure selfishness wouldn’t work.

This is a facile answer. First, if this is the case, why would morality extend outside of our societies? Why should we want to save the Babyeater children?

But more importantly, how is it practical for you? There is no situation in which pure selfishness is not the best strategy. If reciprocal altruism makes you better off, then it’s selfishly beneficial to be reciprocally altruistic; if you value warm fuzzies, then it’s selfishly beneficial to get warm fuzzies; but by definition, true selflessness of the kind demanded by morality (like buying utilons with money that could be spent on fuzzies) decreases your utility – it loses. Even if you get a deep emotional reward from helping others, you’re strictly better off being selfish.

So if feelings of ‘right’ and ‘wrong’ don’t correspond to anything except what used to maximize inclusive genetic fitness, and having a moral code makes you indisputably worse off, why have one at all?

Once again: Why be moral?

~

The Inconsistency of Consequentialism

Forget all that for a second. Stop questioning whether morality is justified and start using your moral judgment again.

Consider a consequentialist student being tempted to cheat on a test. Getting a good grade is important to him, and he can only do that if he cheats; cheating will make him significantly happier. His school trusts its students, so he’s pretty sure he won’t get caught, and the test isn’t curved, so no one else will be hurt by him getting a good score. He decides to cheat, reasoning that it’s at least morally neutral, if not a moral imperative – after all, his cheating will increase the world’s utility.

Does this tell us cheating isn’t a problem? No. If cheating became widespread, there would be consequences – tighter test security measures, suspicion of test grades, distrust of students, et cetera. Cheating just this once won’t hurt anybody, but if cheating becomes expected, everyone is worse off.

But wait. If all the students are consequentialists, then they’ll all decide to cheat, following the same logic as the first. And the teachers, anticipating this (it’s an ethics class), will respond with draconian anti-cheating measures – leaving overall utility lower than if no one had been inclined to cheat at all.

Consequentialism called for each student to cheat because cheating would increase utility, but the fact that consequentialism called for each student to cheat decreased utility.

Imagine the opposite case: a class full of deontologists. Every student would be horrified at the idea of violating their duty for the sake of mere utility, and accordingly not a one of them would cheat. Counter-cheating methods would be completely unnecessary. Everyone would be better off.

In this situation, a deontologist class outcompetes a consequentialist one in consequentialist terms. The best way to maximize utility is to use a system of justification not based on maximizing utility. In such a situation, consequentialism calls for itself not to be believed. Consequentialism is inconsistent.
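
As a rough illustration (the payoff numbers below are invented purely for this sketch), the structure is that cheating dominates for each individual student no matter what the rest of the class does, while the all-cheat class still ends up worse off than the no-cheat class:

```python
# Invented payoffs for the classroom example (illustrative only).
GRADE_GAIN = 1.0       # what one student gains by cheating
CRACKDOWN_COST = 3.0   # per-student cost once cheating becomes widespread
CLASS_SIZE = 30

def my_utility(i_cheat: bool, others_cheat: bool) -> float:
    """One student's utility, given their own choice and what the rest do."""
    crackdown = CRACKDOWN_COST if others_cheat else 0.0
    return (GRADE_GAIN if i_cheat else 0.0) - crackdown

# Holding the rest of the class fixed, cheating always wins for the individual:
print(my_utility(True, False), my_utility(False, False))  #  1.0   0.0
print(my_utility(True, True),  my_utility(False, True))   # -2.0  -3.0

# But the class where everyone reasons that way does worse than the class
# where no one does:
print(CLASS_SIZE * my_utility(True, True))    # -60.0 (every consequentialist cheats)
print(CLASS_SIZE * my_utility(False, False))  #   0.0 (every deontologist refrains)
```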

So what’s a rational agent to do?

The apparent contradiction in this case results from thinking about beliefs and actions as though they were separate. Arriving at a belief is an action in itself, one which can have effects on utility. One cannot, therefore, arrive at a belief about utility without considering the effects on utility that holding that belief would have. If arriving at the belief “actions are justified by their effect on utility” doesn’t maximize utility, then you shouldn’t arrive at that belief.

However, the ultimate goal of maximizing utility cannot be questioned. Utility, after all, is only a word for “what is wanted”, so no agent can want to do anything except maximize utility. Moral agents include others' utility as equal to their own, but their goal is still to maximize utility.

Therefore the rule which should be followed is not “take the actions which maximize utility”, but “arrive at the beliefs which maximize utility.”

But there is an additional complication: when we arrive at beliefs by logic alone, we are effectively deciding not only for ourselves, but for all other rational agents, since the answer which is logically correct for us must also be logically correct for each of them. In this case, the correct answer is the one which maximizes utility – so our logic must take into account the fact that every other computation will produce the same answer. Therefore we can expand the rule to “arrive at the beliefs which would maximize utility if all other rational agents were to arrive at them (upon performing the same computation).”
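
Sketched informally (this is only my own rendering of the rule just stated; the candidate beliefs and utility figures are hypothetical), the procedure scores each candidate belief by the utility of the world in which every agent performing the same computation adopts it, and adopts whichever scores highest:

```python
from typing import Callable, Dict

def choose_belief(candidates: Dict[str, Callable[[int], float]],
                  n_agents: int) -> str:
    """Adopt the belief whose universal adoption yields the highest total utility.

    Each candidate maps 'number of agents holding it' to the total utility of
    the world in which all of those agents hold it.
    """
    return max(candidates, key=lambda belief: candidates[belief](n_agents))

# Hypothetical candidates for the cheating example (made-up numbers again):
candidates = {
    "cheat when it benefits you": lambda n: n * -2.0,  # universal cheating plus crackdown
    "never cheat":                lambda n: 0.0,       # trust preserved
}
print(choose_belief(candidates, n_agents=30))  # -> "never cheat"
```

Note that evaluating a single action in isolation would still favor cheating; the difference comes entirely from scoring beliefs by what happens when everyone running the same computation holds them.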

[To the best of my logical ability, this rule is recursive and therefore requires no further justification.]

This rule requires you to hold whatever beliefs will (conditional upon them being held) lead to the best results – even when the actions those beliefs produce don’t, in themselves, maximize utility. In the case of the cheating student, the optimal belief is “don’t cheat” because that belief being held by all the students (and the teacher simulating the students’ beliefs) produces the best results, even though cheating would still increase utility for each individual student. The applied morality becomes deontological, in the sense that actions are judged not by their effect on utility but by their adherence to the pre-set principle.

The upshot of this system is that you have to decide ahead of time whether an approach based on duty (that is, on every agent who considers the problem acting the way that would produce the best consequences if every agent who considers the problem were to act the same way) or on utility (individual computation of consequences) actually produces better consequences. And if you pick the deontological approach, you have to ‘forget’ your original goal – to commit to the rule even at the cost of actual consequences – because if it’s rational to pursue the original goal, then it won’t be achieved.

~

The Solution to Morality

Let’s return to the original question.

The primary effect of morality is that it causes individuals to value others’ utility as an end in itself, and therefore to sacrifice their own utility for others. It’s obvious that this is very good on a group scale: a society filled with selfless people, people who help others even when they don’t expect to receive personal benefit, is far better off than one filled with people who do not – a Prisoner’s Dilemma writ large. To encourage that sort of cooperation (partially by design and partially by instinct), societies reward altruism and punish selfishness.

But why should you, personally, cooperate?

There are many, many times when you can do clearly better by selfishness than by altruism – by theft or deceit or just by not giving to charity. And why should we want to do otherwise? Our altruistic feelings are a mere artifact of evolution, like appendices and death, so why would we want to obey them?

Is there any reason, then, to be moral?

Yes.

Because that reasoning – that your own utility is maximized by selfishness – literally cannot be right. If it were right, then it would be the answer all rational beings would arrive at, and if all rational beings arrived at that answer, then none of them would cooperate and everyone would be worse off. If selfish utility maximizing is the correct answer for how to maximize selfish utility, selfish utility is not maximized. Therefore selfishness is the wrong answer. Each individual’s utility is maximized only if they deliberately discard selfish utility as the thing to be maximized. And the way to do that is for each one to adopt a duty to maximize total utility, not only their own – to be moral.
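
The same point in Prisoner's Dilemma form, with standard illustrative payoffs chosen only for this sketch: if 'defect' were the uniquely rational answer, every rational agent would reach it, and each would end up with less than if 'cooperate' had been the answer instead.

```python
# A standard Prisoner's Dilemma payoff table (numbers chosen for illustration):
# each entry is (my payoff, their payoff) for (my move, their move).
PAYOFF = {
    ("cooperate", "cooperate"): (3, 3),
    ("cooperate", "defect"):    (0, 5),
    ("defect",    "cooperate"): (5, 0),
    ("defect",    "defect"):    (1, 1),
}

# If all rational agents reach the same answer, only the symmetric outcomes occur:
for answer in ("cooperate", "defect"):
    mine, _ = PAYOFF[(answer, answer)]
    print(f"everyone concludes '{answer}': each agent gets {mine}")
# everyone concludes 'cooperate': each agent gets 3
# everyone concludes 'defect': each agent gets 1
```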

And having chosen collective maximization over individual competition – duty over utility – you can no longer even consider your own benefit to be your goal. If you do so, holding morality as a means to selfishness’s end, then everyone does so, and cooperation comes crashing down. You have to ‘forget’ the reason for having morality, and hold it because it's the right thing to do. You have to be moral even to the point of death.

Morality, then, is calculated blindness – a deliberate ignorance of our real ends, meant to achieve them more effectively. Selflessness for its own sake, for selfishness's sake.

 

[This post lays down only the basic theoretical underpinnings of Deontological Decision Theory morality. My next post will focus on the practical applications of DDT in the human realm, and explain how it solves various moral/game-theoretic quandaries.]

 

92 comments

Most of your questions are already answered on the site, better than you attempt to answer them here. Read up on complexity of value, metaethics sequence, decision theory posts (my list) and discussion of Prisoner's Dilemma in particular.

What you describe as egoist consequentialist's reasoning is actually reasoning according to causal decision theory, and when you talk about influence of beliefs on consequences, this can be seen as considering a form of precommitment (which allows patching some of CDT's blind spots). If you use TDT/UDT/ADT instead, the problem goes away, and egoistic consequentialists start cooperating.

6SilasBarta
I agree that Tesseract's post needs more familiarity with the decision theory articles (mainly those taking apart Newcomb's problem, esp. this). However, the complexity of value and metaethics sequences don't help much. Per an EY post summarizing them I can't find atm, the only relevant insight here from those articles is that, "Your ethics are part of your values, so your actions should take them into account as well." This leaves unanswered the questions of a) why we classify certain parts of our values as "ethics", and b) whether those ethics are properly a terminal or instrumental value. These are what I tried to address in this article, with my answers being that a) Those are the parts where we intuitively rely on acausal "consequences" (SAMELs in the article), and b) instrumental.
3Tesseract
Your article is an excellent one, and makes many of the same points I tried to make here. Specifically, is the same idea I was trying to express with the 'cheating student' example, and then generalized in the final part of the post, and likewise the idea of Parfitian-filtered decision theory seems to be essentially the same as the concept in my post of ideally-rational agents adopting decision theories which make them consciously ignore their goals in order to achieve them better. (And in fact, I was planning to include in my next post how this sort of morality solves problems like Parfit's Hitchhiker when functionally applied.)

Upon looking back on the replies here (although I have yet to read through all the decision theory posts Vladimir recommended), I realize that I haven't been convinced that I was wrong -- that there's a flaw in my theory I haven't seen -- only that the community strongly disapproves. Given that your post and mine share many of the same ideas, and yours is at +21 while mine is at -7, I think that the differences are that a. mine was seen as presumptuous (in the vein of the 'one great idea'), and b. I didn't communicate clearly enough (partially because I haven't studied enough terminology) and include answers to enough anticipated objections to overcome the resistance engendered by a.

I think I also failed to clearly make the distinction between this as a normative strategy (that is, one I think ideal game-theoretic agents would follow, and a good reason for consciously deciding to be moral) and as a positive description (the reason actual human beings are moral.)

However, I recognize that even though I haven't yet been convinced of it, there may well be a problem here that I haven't seen but would if I knew more about decision theory. If you could explain such a problem to me, I would be genuinely grateful -- I want to be correct more than I want my current theory to be right.
5SilasBarta
Okay, on re-reading your post, I can be more specific. I think you make good points (obviously, because of the similarity with my article), and it would probably be well-received if submitted here in early '09. However, there are cases where you re-treaded ground that has been discussed before without reference to the existing discussions and concepts:

Here you're describing what Wei Dai calls "computational/logical consequences" of a decision in his UDT article.

Here you're describing EY's TDT algorithm. The label of deontological doesn't quite fit here, as you don't advocate adhering to a set of categorical "don't do this" rules (as would be justified in a "running on corrupted hardware" case), but rather, consider a certain type of impact your decision has on the world, which itself determines what rules to follow.

Finally, I think you should have clarified that the relationship between your decision to (not) cheat and others' decision is not a causal one (though still sufficient to motivate your decision).

I don't think you deserved -7 (though I didn't vote you up myself). In particular, I stand by my initial comment that, contra Vladimir, you show sufficient assimilation of the value complexity and meta-ethics sequences. I think a lot of the backlash is just from the presentation -- not the format, or writing, but needing to adapt it to the terminology and insights already presented here. And I agree that you're justified in not being convinced you're wrong.

Hope that helps. EDIT: You also might like this recent discussion about real-world Newcomblike problems, which I intend to come back to more rigorously
3Tesseract
Very much, thank you. Your feedback has been a great help. Given that others arrived at some of these conclusions before me, I can see why there would be disapproval -- though I can hardly feel disappointed to have independently discovered the same answers. I think I'll research the various models more thoroughly, refine my wording (I agree with you that using the term 'deontology' was a mistake), and eventually make a more complete and more sophisticated second attempt at morality as a decision theory problem.
3SilasBarta
Great, glad to hear it! Looking forward to your next submission on this issue.
0SilasBarta
Thanks for the feedback. Unfortunately, the discussion on my article was dominated by a huge tangent on utility functions (which I talked about, but was done in a way irrelevant to the points I was making). I think the difference was that I plugged my points into the scenarios and literature discussed here. What bothered me about your article was that it did not carefully define the relationship between your decision theory and the ethic you are arguing for, though I will read it again to give a more precise answer.
3Vladimir_Nesov
The idea of complexity of values explains why "happiness" or "selfishness" can't be expected to capture the whole thing: when you talk about "good", you mean "good" and not some other concept. To unpack "good", you have no other option than to list all the things you value, and such list uttered by a human can't reflect the whole thing accurately anyway. Metaethics sequence deals with errors of confusing moral reasons and historical explanations: evolution's goals are not your own and don't have normative power over your own goals, even if there is a surface similarity and hence some explanatory power.
2SilasBarta
I agree that those are important things to learn, just not for the topic Tesseract is writing about.
6Vladimir_Nesov
What do you mean? Tesseract makes these exact errors in the post, and those posts explain how not to err there, which makes the posts directly relevant.
2SilasBarta
Tesseract's conclusion is hindered by not having read about the interplay between decision theory and values (i.e. how to define a "selfish action", which consequences to take into consider, etc.), not the complexity of value as such. Tesseract would me making the same errors on decision theory even if human values were not so complex, and decision theory is the focus of the post.
0Vladimir_Nesov
Might not be relevant to "Tesseract's conclusion", but is relevant to other little conclusions made in the post along the way, even if they are all independent and don't damage each other.
1shokwave
They may not have much in the way of factual conclusions to operate by, but they are an excellent introduction to how to think about ethics, morality, and what humans want - which is effectively the first and last thirds of this post.
5Vaniver
Huh? It struck me as pretty poor, actually.
6orthonormal
It's not well-constructed overall, but I wish I had a nickel for every time someone's huge ethical system turned out to be an unconscious example of rebelling within nature, or something that gets stuck on the pebblesorter example.
-1Vaniver
Right, but reversed stupidity is not intelligence. I mean, he can only get away with the following because he's left his terms so fuzzy as to be meaningless: That is, one would be upset if I said "there is a God, it's Maxwell's Equations!" because the concept of God and the concept of universal physical laws are generally distinct. Likewise, saying "well, morality is an inborn or taught bland desire to help others" makes a mockery of the word 'morality.'
1orthonormal
I think your interpretation oversimplifies things. He's not saying "morality is an inborn or taught bland desire to help others"; he's rather making the claim (which he defers until later) that what we mean by morality cannot be divorced from contingent human psychology, choices and preferences, and that it's nonsense to claim "if moral sentiments and principles are contingent on the human brain rather than written into the nature of the universe, then human brains should therefore start acting like their caricatures of 'immoral' agents".
2shokwave
I am not sure what you mean. Do you mean that the way Eliezer espouses thinking about ethics and morality in those sequences is a poor way of thinking about morality? Do you mean that Eliezer's explanations of that way are poor explanations? Both? Something else?
0Vaniver
The methodology is mediocre, and the conclusions are questionable. At the moment I can't do much besides express distaste; my attempts to articulate alternatives have not gone well so far. But I am thinking about it, and actually just stumbled across something that might be useful.
1shokwave
I'm going to have to disagree with this. The methodology with which Eliezer approaches ethical and moral issues is definitely on par with or exceeding the philosophy of ethics that I've studied. I am still unsure whether you mean the methodology he espouses using, or the methods he applied to make the posts.
2Tesseract
Your objection and its evident support by the community is noted, and therefore I have deleted the post. I will read further on the decision theory and its implications, as that seems to be a likely cause of error. However, I have read the meta-ethics sequence, and some of Eliezer's other posts on morality, and found them unsatisfactory -- they seemed to me to presume that morality is something you should have regardless of the reason for it rather than seriously questioning the reasons for possessing it. On the point of complexity of value, I was attempting to use the term 'utility' to describe human preferences, which would necessarily take into account complex values. If you could describe why this doesn't work well, I would appreciate the correction. That said, I'm not going to contend here without doing more research first (and thank you for the links), so this will be my last post on the subject.
3ata
One thing to consider: Why do you need a reason to be moral/altruistic but not a reason to be selfish? (Or, if you do need a reason to be selfish, where does the recursion end, when you need to justify every motive in terms of another?)
0orthonormal
On the topic of these decision theories, you might get a lot from the second half of Gary Drescher's book Good and Real. His take isn't quite the same thing as TDT or UDT, but it's on the same spectrum, and the presentation is excellent.

I'd just like to announce publicly that my commitment to deontology is not based on conflating consequentialism with failing at collective action problems.

7WrongBot
Indeed, it would be quite odd to adopt deontology because one expected it to have positive consequences.

It would be odder yet to adopt it if one expected it to have negative consequences.

Not as odd as you might think. I've made the point to several people that I am a consequentialist who, by virtue of only dealing with minor moral problems, behaves like a deontologist most of the time. The negative consequences of breaking with deontological principles immediately (and possibly long-term) outweigh the positive consequence improvement that the consequentialist action offers over the deontological action.

I imagine if Omega told you in no uncertain terms that one of the consequences of you being consequentialist is that one of your consequentialist decisions will have, unknown to you, horrific consequences that far outweigh all moral gains made - if Omega told you this, a consequentialist would desire to adopt deontology.

4cousin_it
Can you give a simple example where your flavor of deontology conflicts with consequentialism?
1Alicorn
I don't push people in front of trolleys. (Cue screams of outrage!)

This leads to an idea for a story in which, in the far future, non-consequentialist views are considered horrible. The worst insult that can be given is "non-pusher".

8Clippy
I'm even better: I don't think metal should be formed into trolleys or tracks in the first place.
6jimrandomh
How would you transport ore from mines to refineries and metal from refineries to extruders, then? Some evils really are necessary. I prefer to focus on the rope, which ought not to be securing people to tracks.
-7Vladimir_Nesov
8cousin_it
How about the original form of the dilemma? Would you flip a switch to divert the trolley to a track with 1 person tied to it instead of 5?
3Alicorn
No. (However, if there are 5 people total, and I can arrange for the train to run over only one of those same people instead of all five, then I'll flip the switch on the grounds that the one person is unsalvageable.)
6JGWeissman
I would predict that if the switch were initially set to send the trolley down the track with one person, you also would not flip it. But suppose that you first see the two paths with people tied to the track, and you have not yet observed the position of the switch. As you look towards it, is there any particular position that you hope the switch is in?
1Alicorn
I might have such hopes, if I had a way to differentiate between the people. (And above, when I make statements about what I would do in trolley problems, I'm just phrasing normative principles in the first person. Sufficiently powerful prudential considerations could impel me to act wrongly. For instance, I might switch a trolley away from my sister and towards a stranger just because I care about my sister more.)
7Vladimir_Nesov
Find a point of balance, where the decision swings. What about sister vs. 2 people? Sister vs. million people? Say, balance is found at N people, so you value N+1 strangers more than your sister, and N people less. Then, N+1 people can be used in place of sister in the variant with 1 person on the other track: just as you'd reroute the train from your sister and to a random stranger, you'd reroute the train from N+1 strangers (which are even more valuable) and to one stranger. Then, work back from that. If you reroute from N+1 people to 1 person, there is the smallest number M of people that you won't reroute from M people but would from all k>M. And there you have a weak trolley problem, closer to the original formulation. (This is not the strongest problem with your argument, but an easy one, and a step towards seeing the central problem.)
5Alicorn
Um, my prudential considerations do indeed work more or less consequentialistically. That's not news to me. They just aren't morality.
8jimrandomh
Wait a second - is there a difference of definitions here? That sounds a lot like what you'd get if you started with a mixed consequentialist and deontological morality, drew a boundary around the consequentialist parts and relabeled them not-morality, but didn't actually stop following them.
3shokwave
I presume prudential concerns are non-moral concerns. In the way that maintaining an entertainment budget next to your charity budget while kids are starving in poorer countries is not often considered a gross moral failure, I would consider the desire for entertainment to be a prudential concern that overrides or outweighs morality.
1Alicorn
I guess that would yield something similar. It usually looks to me like consequentialists just care about the thing I call "prudence" and not at all about the thing I call "morality".
1TheOtherDave
That seems like a reasonable summary to me. Does it seem to you that we ought to? (Care about morality, that is.)
1Alicorn
I think you ought to do morally right things; caring per se doesn't seem necessary.
0TheOtherDave
Fair enough. Does it usually look to you like consequentialists just do prudential things and not morally right things?
0Alicorn
Well, the vast majority of situations have no conflict. Getting a bowl of cereal in the morning is both prudent and right if you want cereal and don't have to do anything rights-violating or uncommonly destructive to get it. But in thought experiments it looks like consequentialists operate (or endorse operating) solely according to prudence.
0TheOtherDave
Agreed that it looks like consequentialists operate (1) solely according to prudence, if I understand properly what you mean by "prudence." Agreed that in most cases there's no conflict. I infer you believe that in cases where there is a conflict, deontologists do (or at least endorse) the morally right thing, and consequentialists do (oale) the prudent thing. Is that right? I also infer from other discussions that you consider killing one innocent person to save five innocent people an example of a case with conflict, where the morally right thing to do is to not-kill an innocent person. Is that right? === (1) Or, as you say, at least endorse operating. I doubt that we actually do, in practice, operate solely according to prudence. Then again, I doubt that anyone operates solely according to the moral principles they endorse.
0Alicorn
Right and right.
2TheOtherDave
OK, cool. Thanks. If I informed you (1) that I would prefer that you choose to kill me rather than allow five other people to die so I could go on living, would that change the morally right thing to do? (Note I'm not asking you what you would do in that situation.) == (1) I mean convincingly informed you, not just posted a comment about it that you have no particular reason to take seriously. I'm not sure how I could do that, but just for concreteness, suppose I had Elspeth's power. (EDIT: Actually, it occurs to me that I could more simply ask: "If I preferred...," given that I'm asking about your moral intuitions rather than your predicted behavior.)
2Alicorn
Yes, if I had that information about your preferences, it would make it OK to kill you for purposes you approved. Your right to not be killed is yours; you don't have to exercise it if you don't care to.
0jimrandomh
Does the importance of prudence ever scale without bound, such that it dominates all moral concerns if the stakes get high enough?
0Alicorn
I don't know about all moral concerns. A subset of moral concerns are duplicated and folded into my prudential ones.
3Vladimir_Nesov
Can't parse.
0Alicorn
Easy reader version for consequentialists: I'm like a consequentialist with a cherry on top. I think this cherry on top is very, very important, and like to borrow moralistic terminology to talk about it. Its presence makes me a very bad consequentialist sometimes, but I think that's fine.
3Vladimir_Nesov
If this cherry on top costs people lives, it's not "fine", it's evil incarnate. You should cut this part of yourself out without mercy. (Compare to your Luminosity vampires, that are sometimes good, nice people, even if they eat people.)
4jimrandomh
I don't think cutting out deontology entirely would be a good thing. I do think that the relative weights of deontological and consequentialist rules needs to be considered, and that choosing inaction in a 5 lives:1 life trolley problem strongly suggests misweighting. But that's just a thought experiment; and I wouldn't consider it wrong to choose inaction in, say, a 1.2 lives:1 life trolley problem.
4Vladimir_Nesov
I agree (if not on 1.2 figure, then still on some 1+epsilon). It's analogous to, say, prosecuting homosexuals. If some people feel bad emotions caused by others' homosexuality, this reason is weaker than disutility caused by the prosecution, and so sufficiently reflective bargaining between these reasons results in not prosecuting it (it's also much easier to adjust attitude towards homosexuality than one's sexual orientation, in the long run). Here, we have moral intuitions that suggest adhering to moral principles and virtues, with disutility of overcoming them (in general, or just in high-stakes situations) bargaining against disutility of following them and thus making suboptimal decisions. Of these two, consequences ought to win out, as they can be much more severe (while the psychological disutility is bounded), and can't be systematically dissolved (while a culture of consequentialism could eventually make it psychologically easier to suppress non-consequentialist drives).
1Alicorn
I think you mean "persecuting", although depending on what exactly you're talking about I suppose you could mean "prosecuting".
0Vladimir_Nesov
Unclear. I wanted to refer to legal acceptance as reflective distillation of social attitude as much as social attitude itself. Maybe still incorrect English usage?
-2Armok_GoB
I interpret this as saying that he currently acts consequentialist, but feels guilty after breaking a deontological principle, would behave in a more deontological fashion if he had more willpower, and would self-modify to be purely deontological if he had the chance. Is this correct?
1Alicorn
Who are you talking about?
5jimrandomh
What if it were 50 people? 500? 5*10^6? The remainder of all humanity? My own position is that morality should incorporate both deontological and consequentialist terms, but they scale at different rates, so that deontology dominates when the stakes are very small and consequentialism dominates when the stakes are very large.
4Alicorn
I am obliged to act based on my best information about the situation. If that best information tells me that:

* I have no special positive obligations to anyone involved,
* The one person is not willing to be run over to save the others (or simply willing to be run over e.g. because ey is suicidal), and
* The one person is not morally responsible for the situation at hand or for any other wrong act such that they have waived their right to life,

Then I am obliged to let the trolley go. However, I have low priors on most humans being so very uninterested in helping others (or at least having an infrastructure to live in) that they wouldn't be willing to die to save the entire rest of the human species. So if that were really the stake at hand, the lone person tied to the track would have to be loudly announcing "I am a selfish bastard and I'd rather be the last human alive than die to save everyone else in the world!". And, again, prudential concerns would probably kick in, most likely well before there were hundreds of people on the line.
0Yoreth
Would it be correct to say that, insofar as you would hope that the one person would be willing to sacrifice his/her life for the cause of saving the 5*10^6 others, you yourself would pull the switch and then willingly sacrifice yourself to the death penalty (or whatever penalty there is for murder) for the same cause?
2Alicorn
I'd be willing to die (including as part of a legal sentence) to save that many people. (Not that I wouldn't avoid dying if I could, but if that were a necessary part of the saving-people process I'd still enact said process.) I wouldn't kill someone I believed unwilling, even for the same purpose, including via trolley.
2shokwave
I feel like the difference between "No matter what, this person will die" and "No matter what, one person will die" is very subtle. It seems like you could arrange thought experiments that trample this distinction. Would that pose a problem?
7Alicorn
I don't remember the details, but while I was at the SIAI house I was presented some very elaborate thought experiments that attempted something like this. I derived the answer my system gives and announced it and everyone made outraged noises, but they also make outraged noises when I answered standard trolley problems, so I'm not sure to what extent I should consider that a remarkable feature of those thought experiments. Do you have one in mind you'd like me to reply to?
3shokwave
Not really. I am mildly opposed to asking trolley problem questions. I mostly just observed that, in my brain, there wasn't much difference between:

* Set of 5 people where either 1 dies or 5 die.
* Set of 6 people where either 1 dies or 5 die.

I wasn't sure exactly what work the word 'unsalvageable' was doing: was it that this person cannot in principle be saved, so er life is 'not counted', and really you have a set of 4 people where either none die or 4 die?
4Alicorn
Yes, that's the idea.
3shokwave
I see. My brain automatically does the math for me and sees 1 or 5 as equivalent to none or four. I think it assumes that human lives are fungible or something.
4Will_Sawin
That's a good brain. Pat it or something.

I believe the entire first half of this can be summarized with a single comic.

Even causal decision theorists don't need Kant to act in a manner that benefits all.

If N changes, together, are harmful, then at least one of those changes must be harmful in itself - a consequentialist evil. Maybe the students all thought that their choice would be one of the helpful, not one of the harmful, ones, in which case they were mistaken, and performed poorly because of it - not something you can solve with decision theory.

The small increase of the chance of anti-cheating reactions, as well as the diminished opinion of the school's future studen... (read more)

0jimrandomh
No, because they each have their own incompatible definitions of good. A conversation beforehand is only helpful if they have a means of enforcing agreements.
0Will_Sawin
If everyone's an altruistic consequentialist, they have the same definition of good. If not, they're evil.
2Perplexed
If everyone is an omniscient altruistic consequentialist, that is.
0Will_Sawin
If they have limited information on the good, wouldn't a conversation invoke a kind of ethical Aumann's Agreement Theorem? In general, if everyone agrees about some morality and disagrees about what it entails, that's a disagreement over facts, and confusion over facts will cause problems in any decision theory.
0Perplexed
Yes, if there is time for a polite conversation before making an ethical decision. Too bad that the manufacturers of trolley problems usually don't allow enough time for idle chit-chat. Still, it is an interesting conjecture. The eAAT conjecture. Can we find a proof? A counter-example? Here is an attempt at a counter-example. I strongly prefer to keep my sexual orientation secret from you. You only mildly prefer to know my sexual orientation. Thus, it might seem that my orientation should remain secret. But then we risk that I will receive inappropriate birthday gifts from you. Or, what if I prefer to keep secret the fact that I have been diagnosed with an incurable fatal disease? What if I wish to keep this a secret only to spare your feelings? Of course, we can avoid this kind of problem by supplementing our utility maximization principle with a second moral axiom - No Secrets. Can we add this axiom and still call ourselves pure utilitarians? Can we be mathematically consistent utilitarians without this axiom? I'll leave this debate to others. It is an interesting exercise, though, to revisit the von Neumann/Savage/Aumann-Anscombe algorithms for constructing utility functions when agents are allowed to keep some of their preferences secret. Agents still would know their own utilities exactly, but would only have a range (or a pdf?) for the utilities of other agents. It might be illuminating to reconstruct game theory and utilitarian ethics incorporating this twist.
0Will_Sawin
The TDT user sees the problem as being that if he fights for a cause, others may also fight for some less-important cause that they think is more important, leading to both causes being harmed. He responds by reducing his willingness to fight. Someone who is morally uncertain (because he's not omniscient) realizes that the cause he is fighting for might not be the most important one, and that others' causes may actually be correct, which should reduce his willingness to fight by the same amount. If we assume that all agents believe in the same complicated process for calculating the utilities, but are unsure how it works out in practice, then what they lack is purely physical knowledge, which should follow all the agreement theorems. If agents' extrapolated volitions are not coherent, this is false.
0ArisKatsaris
Really? Is there a single good value in the universe? Happiness, comfort, fun, freedom, you can't even conceive someone who weighs the worth of these values slightly differently than someone else and yet both can remain non-evil?
-1Will_Sawin
Fair point. If they're slightly different, it should be a slight problem, and TDT would help that. If they're significantly different, it would be a significant problem, and you might be able to make a case that one is evil.
1ArisKatsaris
If you can call someone "evil" even though they may altruistically work for the increase of the well-being of others, as they perceive it to be, then what's the word you'd use to describe people who are sadists and actively seek to hurt others, or people who would sacrifice the wellbeing of millions of people for their own selfish benefit? Your labelling scheme doesn't serve me in treating people appropriately, realizing which people I ought to consider enemies and which I ought to treat as potential allies -- nor which people strive to increase total (or average) utility and which people strive to decrease it. So what's its point? Why consider these people "evil"? It almost seems to me as if you're working backwards from a conclusion, starting with the assumption that all good people must have the same goals, and therefore someone who differs must be evil.
0Will_Sawin
It depends on if you interpret "good" and "evil" as words derived from "should," as I was doing. Good people are those that act as they should behave, and evil people as those that act as they shouldn't behave. There is only one right thing to do. But if you want to define evil another way, honestly, you're probably right. I would note that I think "might be able to make the case that" is enough qualification. So, more clearly: If everyone's extrapolated values are in accordance with my extrapolated values, information is our only problem, which we don't need moral and decision theories to deal with. If our extrapolated values differ, then they may differ a bit, in which case we have a small problem, or a medium amount, in which case there's a big problem, or a lot, in which case there's a huge problem. I can rate them on a continuous scale as to how well they accord with my extrapolated values. The ones at the top, I can work with, and those at the bottom, I can work against. However TDT states that we should be nicer to those at the bottom so that they'll be nicer to us, whereas CDT does not, and therein lies the difference.

Because that reasoning – that your own utility is maximized by selfishness – literally cannot be right. If it were right, then it would be the answer all rational beings would arrive at, and if all rational beings arrived at that answer, then none of them would cooperate and everyone would be worse off.

This is not at all true. The fact that if people acted as I do, there would be no stable equilibrium is largely immaterial, because my actions do not affect how others behave. Unless I value "acting in a way that can be universalized," the fact ... (read more)

Consider a consequentialist student being tempted to cheat on a test. Getting a good grade is important to him, and he can only do that if he cheats; cheating will make him significantly happier. His school trusts its students, so he’s pretty sure he won’t get caught, and the test isn’t curved, so no one else will be hurt by him getting a good score. He decides to cheat, reasoning that it’s at least morally neutral, if not a moral imperative – after all, his cheating will increase the world’s utility.

Vlad has discussed below some of the problems with th... (read more)

[-][anonymous]10

Because that reasoning – that your own utility is maximized by selfishness – literally cannot be right. If it were right, then it would be the answer all rational beings would arrive at, and if all rational beings arrived at that answer, then none of them would cooperate and everyone would be worse off. If selfish utility maximizing is the correct answer for how to maximize selfish utility, selfish utility is not maximized. Therefore selfishness is the wrong answer.

Considering that in the real world different people will have differing abilities to calc... (read more)

[-]ata00

You're begging the question by assuming that selfish motives are the only real, valid ultimate justifications for actions (including choices to self-modify not to be selfish), when in humans that is plainly false to begin with (see the Complexity of Value sequence). If you place a higher value on selfishness than most people, then maybe all of your moral deliberation will begin and end with asking "But how does that help me?", but for most people it won't. Perhaps a lot of people will confuse themselves into thinking that everything must go back ... (read more)

[-][anonymous]00

Morality is part of your preferences, same as the emotion of happiness and other things. It's implemented within your brain, the part that judges situations as "fair" or "unfair", etc. I don't understand why you want to go looking for something more objective than that. What if you eventually find that grand light in the sky, the holy grail of "objective morality" expressed as a simple beautiful formula, and it tells you that torturing babies is intrinsically good? Will you listen to it, or to the computation within your own brain?

Seconding Nesov's suggestion to read more of the sequences. The topic that interests you has been pretty thoroughly covered.

If cheating became widespread, there would be consequences

...

But wait. If all the students are consequentialists, then they’ll all decide to cheat, following the same logic as the first.

Emphasis mine. A consequentialist student will see that the consequence of them cheating is "everyone cheats -> draconian measures -> worse off overall". So they won't cheat. Or they will cheat in a way that doesn't cause everyone to cheat - only under special circumstances that they know won't apply to everyone all the time.

edit: It may seem a chea... (read more)

3billswift
Or they are there to actually learn something. I don't know about you, but I have yet to see any way to learn by cheating. Cheating is often non-productive to the potential cheater's goals, in more domains than just learning.
5shokwave
The view that secondary level education is about instilling desired behaviours or socialising children as much as it is about learning is very common and somewhat well-supported - and to the extent that schools are focused on learning, there is again a somewhat well-supported view that they don't even do a good job of this. The view that tertiary level education is about obtaining a piece of paper that signals your hire-ability is widespread and common. To the extent that potential cheaters have these goals in mind, cheating is more efficient than learning.