So, when I agonize over whether to torrent an expensive album instead of paying for it, and about half the time I end up torrenting it and feeling bad, and about half the time I pay for it but don't enjoy doing so ... what exactly am I doing in the latter case if not employing willpower?
I mean, I know willpower probably isn't a real thing on the deepest levels of the brain, but it's fake in the same way centrifugal force is fake, not in the way Bigfoot is fake. It sure feels like I'm using willpower when I make moral decisions about pirating, and I don't understand how your model above interprets that.
Granted, there are many other moral decisions I make that don't require willpower and do conform to your model above, and if I had to choose black-and-white between ethics-as-willpower and ethics-as-choice I'd take the latter; your model just doesn't seem complete.
IAWYC, but...
Many people have commented that humans don't make decisions based on utility functions. This is a surprising attitude to find on LessWrong, given that Eliezer has often cast rationality and moral reasoning in terms of computing expected utility. It also demonstrates a misunderstanding of what utility functions are.
The issue is not that people don't understand what utility functions are. Yes, you can define arbitrarily complicated utility functions to represent all of a human's preferences; we know that. There are infinitely many valid ways to model a human's preferences, utility functions being one of them. The question is which model is the most useful, and which models have the fewest underlying assumptions that will lead your intuitions astray. Utility functions are sometimes an appropriate model and sometimes not.
To expand on this...
Tim van Gelder has an elegant, although somewhat lengthy, example of this. He presents us with a problem engineers working with early steam engines had: how to translate the oscillating action of the steam piston into the rotating motion of a flywheel?
(Note: it's going to take a while before the relationship between this and utility functions becomes clear. Bear with me.)
...High-quality spinning and weaving required, however, that the source of power be highly uniform, that is, there should be little or no variation in the speed of revolution of the main driving flywheel. This is a problem, since the speed of the flywheel is affected both by the pressure of the steam from the boilers, and by the total workload being placed on the engine, and these are constantly fluctuating.
It was clear enough how the speed of the flywheel had to be regulated. In the pipe carrying steam from the boiler to the piston there was a throttle valve. The pressure in the piston, and so the speed of the wheel, could be adjusted by turning this valve. To keep engine speed uniform, the throttle valve would have to be turned, at just the right time and by just the right amount, to cope with
(part two)
van Gelder holds that an algorithmic approach is simply unsuitable for understanding the centrifugal governor. It just doesn't work, and there's no reason to even try. To understand the behavior of the centrifugal governor, the appropriate tool is a set of differential equations that describe its behavior as a dynamical system in which the properties of the various parts depend on each other.
Changing a parameter of a dynamical system changes its total dynamics (that is, the way its state variables change their values depending on their current values, across the full range of values they may take). Thus, any change in engine speed, no matter how small, changes not the state of the governor directly, but rather the way the state of the governor changes, and any change in arm angle changes the way the state of the engine changes. Again, however, the overall system (coupled engine and governor) settles quickly into a point attractor, that is, engine speed and arm angle remain constant.
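A sketch of the kind of model he means (my own simplified reconstruction of the usual Watt-governor equations, not a quotation from van Gelder, so take the exact form as an assumption):

\[
\ddot{\theta} \;=\; (n\omega)^2 \cos\theta\,\sin\theta \;-\; \frac{g}{l}\,\sin\theta \;-\; r\,\dot{\theta}
\]

Here \(\theta\) is the angle of the spinning arms, \(\omega\) is the engine speed, and \(n\), \(g/l\), and \(r\) are constants of the gearing, the arm length, and friction. The engine speed \(\omega\) in turn changes at a rate that depends on \(\theta\) through the throttle opening, so neither quantity can be computed in isolation; the two variables continuously co-determine each other, which is what the quoted paragraph above is describing.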
Now we finally get into utility functions. van Gelder holds that all the various utility theories, no matter how complex, remain subject to specific drawbacks:
...(1) They do not incorporate any account of
You can construct a set of values and a utility function to fit your observed behavior, no matter how your brain produces that behavior.
I'm deeply hesitant to jump into a debate that I don't know the history of, but...
Isn't it pretty generally understood that this is not true? The Utility Theory folks showed that behavior of an agent can be captured by a numerical utility function iff the agent's preferences conform to certain axioms, and Allais and others have shown that human behavior emphatically does not.
Seems to me that if human behavior were in g...
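For reference, the standard Allais setup the commenter is pointing at (the usual textbook numbers, not anything taken from this thread): most subjects prefer a certain \$1M to a gamble of 10% \$5M, 89% \$1M, 1% nothing, and also prefer a 10% chance of \$5M to an 11% chance of \$1M. For a utility function \(u\) with \(u(\$0) = 0\), the first preference requires

\[
u(\$1\text{M}) > 0.10\,u(\$5\text{M}) + 0.89\,u(\$1\text{M})
\;\Longrightarrow\;
0.11\,u(\$1\text{M}) > 0.10\,u(\$5\text{M}),
\]

while the second requires \(0.10\,u(\$5\text{M}) > 0.11\,u(\$1\text{M})\). No assignment of utilities satisfies both, so no expected-utility maximizer exhibits the modal human pattern.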
I think there are two "mistakes" in the article.
The first is claiming (or at least assuming) that ethics is "monolithic": that either it comes from willpower alone, or it doesn't come from willpower at all. Willpower does play a role in ethics, every time your ethical system contradicts the instinctive, or unconscious, part of your mind. Be it to resist the temptation of a beautiful member of the opposite (or same, depending on your tastes) sex, overcome the fear of spiders, or withstand torture to not betray your friends. I would say tha...
I've said things that sound like this before but I want to distance myself from your position here.
But remember what a utility function is. It's a way of adding up all your different preferences and coming up with a single number. Coming up with a single number is important, so that all possible outcomes can be ordered. That's what you need, and ordering is what numbers do. Having two utility functions is like having no utility function at all, because you don't have an ordering of preferences.
This is all true. But humans do not have utility funct...
"true" ethics (whatever they may be). I call [this] ... "meta-ethics".
This is a bad choice of name, given that 'Metaethics' already means something (though people on LW often conflate it with Normative Ethics).
Hi! This sounds interesting, but I couldn't conveniently digest it. I would read it carefully if you added more signposts to tell me what I was about to hear, offered more concrete examples, and explained how I might behave or predict differently after understanding your post.
For whatever it's worth, I completely agree with you that utility functions are models that are meant to predict human behavior, and that we should all try making a few to model our own and each others' behavior from time to time. Dunno if any downvotes you're getting are on that or just on the length/difficulty of your thoughts.
No vote yet from me.
There is plenty of room for willpower in ethics-as-taste once you have a sufficiently complicated model of human psychology in mind. Humans are not monolithic decision makers (let alone do they have a coherent utility function, as others have mentioned).
Consider the "elephant and rider" model of consciousness (I thought Yvain wrote a post about this but I couldn't find it; in any case I'm not referring to this post by lukeprog, which is talking about something else). In this model, we divide the mind into two parts - we'll say my mind just for co...
What about the "memes=good, genes=evil" model? The literally meant one where feudalism or lolcats are "good" and loving your siblings or enjoying tasty food is "evil".
Having two utility functions is like having no utility function at all, because you don't have an ordering of preferences.
The only kind of model that needs a global utility function is an optimization process. Obviously, after considering each alternative, there needs to be a way to decide which one to choose... assuming that we do things like considering alternatives and choosing one of them (using an ordering that is represented by the one utility function).
For example, evolution has a global utility function (inclusive genetic fitness). Of course, it...
This seems to be the crux of your distinction.
Under the willpower theory, morality means the struggle to consistently implement a known set of rules and actions.
Whereas under the taste theory, morality is a journey to discover and/or create a lifestyle fitting your personal ethical inclinations.
We should not ask "which is right?" but "how much is each right? In what areas?"
I'm not sure of the answer to that question.
Humans don't make decisions based primarily on utility functions. To the extent that the Wise Master presented that as a descriptive fact rather than a prescriptive exhortation, he was just wrong on the facts. You can model behavior with a set of values and a utility function, but that model will not fully capture human behavior, or else will be so overfit that it ceases to be descriptive at all (e.g. "I have utility infinity for doing the stuff I do and utility zero for everything else" technically predicts your actions but is practically usel...
Something the conventional story about ethics gets right, with which you seem to disagree, is that ethics is a society-level affair. That is, to justify an action as ethically correct is implicitly to claim that a rational inquiry by society would deem the action acceptable.
Another point convention gets right, and here again you seem to differ, is motivational externalism. That is, a person can judge that X is right without necessarily being motivated to do X. Of course, you've given good evolutionary-biological reasons why most of the time moral judg...
Somebody probably sent me this link to a previous LW discussion on the distinction between morality and ethics. (Sorry to not give credit. I just found it in a browser window on my computer and don't remember how it got there.)
I have never trusted theories of ethics whose upshot is that most people are moral.
I think most people will "take all the pie" if they can frame it as harmless, and something they're entitled to. Almost everybody, from homeless people to privileged college students to PTA moms, loves a freebie -- it's unusual to see people giving more, or putting in more work, than they're socially constrained to. The reason the "tragedy of the commons" happens every single time there's a commons (if you've ever lived in a co-op you know what I mean) ...
Everything you've said about the standard view is pretty much how I think of ethics... which makes things difficult.
Behaving according to the utility function of the part of your psyche that deals with willpower requires willpower.
Also, saying that ethics doesn't require willpower isn't the same as saying it's not a choice. I act morally based on my utility function, which is part of who I am. When I act based on what kind of a person I am, I make a choice. That's the compatibilist definition of choice. Ergo, acting morally is a choice.
Society's main story is that ethics are a choice...This story has implications:...
The list following the above helped me understand why deontological judgements are prevalent, even if I don't find any strong arguments backing such theories (What do you mean, you don't care about the outcomes of an action? What do you mean that something is "just ethically right"?) In particular:
...Ethical judgements are different from utility judgements. Utility is a tool of reason, and reason only tells you how to get what you want, whereas ethics tells you w
I mean positions that I disagree with but make me think. This includes arguments that I had not considered that seem worthwhile to consider even if they aren't persuasive, and posts where even if the conclusions are wrong use interesting facts that I wasn't aware of, or posts that while I disagree with parts have other good points in them. Sometimes I will upvote a comment I disagree with simply because it is a demonstration of extreme civility in a highly controversial issue (so for example some of the recent discussions on gender issues I was impressed enough with the cordiality and thoughtfulness of people arguing different positions that I upvoted a lot of the comments).
In general, if a comment makes me think and makes me feel like reading it was a useful way to spend my time, I'll upvote it.
It helps to define your terms before philosophizing. I assume that you mean morality (a collection of beliefs as to what constitutes a good life) when you write ethics.
I can't speak for you, but my moral views are originally based on what I was taught by my family and the society in general, explicitly and implicitly, and then developed based on my reasoning and experience. Thus, my personal moral subsystem is compatible with, but not identical to what other people around me have. The differences might be minor (is torrenting copyrighted movies immoral?) o...
Human behaviour cannot be described or even sensibly approximated by a utility function. You can wish otherwise, since utility functions have nice mathematical properties, but wishing something hard doesn't make it true.
There are some situations where an extremely rough approximation like a utility function can reasonably be used as a model of human behaviour, just as there are some situations where an extremely rough approximation like a uniform sphere can reasonably be used as a model of a cow. These are facts about modelling practice, not about human behaviour or the shape of cows, and it's a fundamental misunderstanding to confuse the two.
The short answer is that you may choose to change your future utility function when doing so will have the counter-intuitive effect of better-fulfilling your current utility function
Fulfilling counter-intuitive resolutions of your utility function should be an instrumental product of reason. I don't see a need to change your utility function in order to satisfy your existing utility function. You might consider tackling this whole topic separately and more generally for all utility functions if you think it's important.
Part of your description of the "ethics is willpower" position appears to be a strawman. Since other parts of the same description are accurate, I assume this is because you do not fully understand it:
Firstly the position would more accurately be called "ethics is willpower plus wisdom", but even that doesn't fully capture it. Let's go through your points one by one:
Ethics is specifically about when your desires conflict with the desires of others. Thus, ethics is only concerned with interpersonal relations.
No, it also includes delayin...
Most people believe the way to lose weight is through willpower. My successful experience losing weight is that this is not the case. You will lose weight if you want to, meaning you effectively believe0 that the utility you will gain from losing weight, even time-discounted, will outweigh the utility from yummy food now. In LW terms, you will lose weight if your utility function tells you to. This is the basis of cognitive behavioral therapy (the effective kind of therapy), which tries to change peoples' behavior by examining their beliefs and changing their thinking habits.
Similarly, most people believe behaving ethically is a matter of willpower; and I believe this even less. Your ethics is part of your utility function. Acting morally is, technically, a choice; but not the difficult kind that holds up a stop sign and says "Choose wisely!" We notice difficult moral choices more than easy moral choices; but most moral choices are easy, like choosing a ten dollar bill over a five. Immorality is not a continual temptation we must resist; it's just a kind of stupidity.
This post can be summarized as:
Many people have commented that humans don't make decisions based on utility functions. This is a surprising attitude to find on LessWrong, given that Eliezer has often cast rationality and moral reasoning in terms of computing expected utility. It also demonstrates a misunderstanding of what utility functions are. Values, and utility functions, are models we construct to explain why we do what we do. You can construct a set of values and a utility function to fit your observed behavior, no matter how your brain produces that behavior. You can fit this model to the data arbitrarily well by adding parameters. It will always have some error, as you are running on stochastic hardware. Behavior is not a product of the utility function; the utility function is a product of (and predictor of) the behavior. If your behavior can't be modelled with values and a utility function, you shouldn't bother reading LessWrong, because "being less wrong" means behaving in a way that is closer to the predictions of some model of rationality. If you are a mysterious black box with inscrutable motives that makes unpredictable actions, no one can say you are "wrong" about anything.
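To make the fitting claim concrete, here is a minimal sketch in Python (the features, choice data, and fitting method are all invented for illustration): assume each option is described by a couple of features, assume utility is a weighted sum of those features, assume choices are noisy in the standard logistic way, and pick the weights that make the observed choices most probable. The choice data below are deliberately a little inconsistent, so the fitted model keeps some residual error, as any such model of a real brain will.

```python
import math
import random

# Observed data: pairs of options (feature vectors) and which one was chosen.
# The features and choices are made up purely for illustration, and are
# deliberately not fully consistent with any single weighting.
observations = [
    ((1.0, 0.0), (0.0, 1.0), 0),   # chose option A
    ((0.5, 0.5), (1.0, 0.0), 1),   # chose option B
    ((0.2, 0.9), (0.9, 0.2), 0),
    ((0.8, 0.1), (0.1, 0.8), 1),
]

def utility(weights, features):
    # The model: utility is a weighted sum of an option's features.
    return sum(w * f for w, f in zip(weights, features))

def log_likelihood(weights):
    # Logistic choice model: higher-utility options are chosen more often,
    # but not always, so the fit tolerates "stochastic hardware".
    total = 0.0
    for a, b, choice in observations:
        u_a, u_b = utility(weights, a), utility(weights, b)
        p_a = 1.0 / (1.0 + math.exp(u_b - u_a))
        total += math.log(p_a if choice == 0 else 1.0 - p_a)
    return total

# Crude random search for the best-fitting weights; enough for a sketch.
random.seed(0)
best = (0.0, 0.0)
for _ in range(5000):
    cand = (best[0] + random.gauss(0, 0.1), best[1] + random.gauss(0, 0.1))
    if log_likelihood(cand) > log_likelihood(best):
        best = cand

print("fitted utility weights:", tuple(round(w, 2) for w in best))
```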
If you still insist that I shouldn't talk about utility functions, though - it doesn't matter! This post is about morality, not about utility functions. I use utility functions just as a way of saying "what you want to do". Substitute your own model of behavior. The bottom line here is that moral behavior is not a qualitatively separate type of behavior and does not require a separate model of behavior.
My view isn't new. It derives from ancient Greek ethics, Nietzsche, Ayn Rand, B.F. Skinner, and comments on LessWrong. I thought it was the dominant view on LW, but the comments and votes indicate it is held at best by a weak majority.
Relevant EY posts include "What would you do without morality?", "The gift we give to tomorrow", "Changing your meta-ethics", and "The meaning of right"; and particularly the statement, "Maybe that which you would do even if there were no morality, is your morality." I was surprised that no comments mentioned any of the many points of contact between this post and Eliezer's longest sequence. (Did anyone even read the entire meta-ethics sequence?) The view I'm presenting is, as far as I can tell, the same as that given in EY's meta-ethics sequence up through "The meaning of right"1; but I am talking about what it is that people are doing when they act in a way we recognize as ethical, whereas Eliezer was talking about where people get their notions of what is ethical.
Ethics as willpower
Society's main story is that behaving morally means constantly making tough decisions and doing things you don't want to do. You have desires; other people have other desires; and ethics is a referee that helps us mutually satisfy these desires, or at least not kill each other. There is one true ethics; society tries to discover and encode it; and the moral choice is to follow that code.
This story has implications that usually go together:
People do choose whether to follow the ethics society promulgates. And they must weigh their personal satisfaction against the satisfaction of others; and those weights are probably relatively constant across domains for a given person. So there is some truth in the standard view. I want to point out errors; but I mostly want to change the focus. The standard view focuses on a person struggling to implement an ethical system, and obliterates distinctions between the ethics of that person, the ethics of society, and "true" ethics (whatever they may be). I will call these "personal ethics", "social ethics", and "normative ethics" (although the last encompasses all of the usual meaning of "ethics", including meta-ethics). I want to increase the emphasis on personal ethics, or ethical intuitions. Mostly just to insist that they exist. (A surprising number of people simultaneously claim to have strong moral feelings, and that people naturally have no moral feelings.)
The conventional story denies these first two exist: Ethics is what is good; society tries to figure out what is good; and a person is more or less ethical to the degree that they act in accordance with ethics.
The chief error of the standard view is that it explains ethics as a war between the physical and the spiritual. If a person is struggling between doing the "selfish" thing and the "right" thing, that proves that they want both about equally. The standard view instead supposes that they have a physical nature that wants only the "selfish" thing, and some internal or external spiritual force pulling them towards the "right" thing. It thus hinders people from thinking about ethical problems as trade-offs, because the model never shows two "moral" desires in conflict except in "paradoxes" such as the trolley problem. It also prevents people from recognizing cultures as moral systems--to really tick these people off, let's say morality-optimizing machines--in which different agents with different morals are necessary parts for the culture to work smoothly.
You could recast the standard view with the conscious mind taking the place of the spiritual nature, the subconscious mind taking the place of the physical nature, and willpower being the exertion of control over the subconscious by the conscious. (Suggested by my misinterpretation of Matt's comment.) But to use that to defend the "ethics as willpower" view, you assume that the subconscious usually wants to do immoral things, while the conscious mind is the source of morality. And I have no evidence that my subconscious is less likely to propose moral actions than my conscious. My subconscious mind usually wants to be nice to people; and my conscious mind sometimes comes up with evil plans that my subconscious responds to with disgust.
... but being evil is harder than being good
At times, I've rationally convinced myself that I was being held back from my goals by my personal ethics, and I determined to act less ethically. Sometimes I succeeded. But more often, I did not. Even when I did, I had to first build up a complex structure of rationalizations, and exert a lot of willpower to carry through. I have never been able (or wanted) to say, "Now I will be evil" (by my personal ethics) and succeed.
If being good takes willpower, why does it take more willpower to be evil?
Ethics as innate
One theory that can explain why being evil is hard is Rousseau's theory that people are noble savages by birth, and would enact the true ethics if only their inclinations were not crushed by society. But if you have friends who have raised their children by this theory, I probably need say no more. A fatal flaw in noble-savage theory is that Rousseau didn't know about evolution. Child-rearing is part of our evolutionary environment; so we should expect to have genetically evolved instincts and culturally evolved beliefs about child-rearing which are better than random, and we should expect things to go terribly wrong if we ignore these instincts and practices.
Ethics as taste
Try, instead, something between the extremes of saying that people are naturally evil, or naturally good. Think of the intuitions underlying your personal morality as the same sort of thing as your personal taste in food, or maybe better, in art. I find a picture with harmony and balance pleasing, and I find a conversation carried on in harmony and with a balance of speakers and views pleasing. I find a story about someone overcoming adversity pleasing, as I find an instance of someone in real life overcoming adversity commendable.
Perhaps causality runs in the other direction; perhaps our artistic tastes are symbolic manifestations of our morals and other cognitive rules-of-thumb. I can think of many moral "tastes" I have that have no obvious artistic analog, which suggests that the moral tastes are the more fundamental of the two. I like making people smile; I don't like pictures of smiling people.
I don't mean to trivialize morality. I just want people to admit that most humans often find pleasure in being nice to other humans, and usually feel pain on seeing other humans--at least those within the tribe--in pain. Is this culturally conditioned? If so, it's by culture predating any moral code on offer today. Works of literature have always shown people showing some other people an unselfish compassion. Sometimes that compassion can be explained by a social code, as with Wiglaf's loyalty to Beowulf. Sometimes it can't, as with Gilgamesh's compassion for the old men who sit on the walls of Uruk, or Odysseus' compassion for Ajax.
Subjectively, we feel something different on seeing someone smile than we do on eating an ice-cream cone. But it isn't obvious to me that "moral feels / selfish feels" is a natural dividing line. I feel something different when saving a small child from injury than when making someone smile, and I feel something different when drinking Jack Daniels than when eating an ice-cream cone.
Computationally, there must be little difference between the way we treat moral, aesthetic, and sensual preferences, because none of them reliably trumps the others. We seem to just sum them all up linearly. If so, this is great, to a rationalist, because then rationality and morals are no longer separate magisteria. We don't need separate models of rational behavior and moral behavior, and a way of resolving conflicts between them. If you are using utility functions, you only need one model; values of all types go in, and a single utility comes out. (If you aren't using utility functions, use whatever it is you use to predict human behavior. The point is that you only need one of them.) It's true that we have separate neural systems that respond to different classes of situation; but no one has ever protested against a utility-based theory of rationality by pointing out that there are separate neural systems responding to images and sounds, and so we must have separate image-values and sound-values and some way of resolving conflicts between image-utility and sound-utility. The division of utility into moral values and all other values may even have a neural basis; but modelling that difference has, historically, caused much greater problems than it has solved.
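A toy sketch of that one-model picture (the outcomes, value categories, and numbers below are invented): moral, aesthetic, and sensual values all feed into the same sum, and the resulting single number is what orders the options.

```python
# Invented outcomes and weights, purely to illustrate the single-utility view.
outcomes = {
    "make a friend smile":      {"moral":  0.8, "aesthetic": 0.1, "sensual": 0.0},
    "eat an ice-cream cone":    {"moral":  0.0, "aesthetic": 0.0, "sensual": 0.6},
    "hang a balanced painting": {"moral":  0.0, "aesthetic": 0.7, "sensual": 0.0},
    "take all the pie":         {"moral": -0.9, "aesthetic": 0.0, "sensual": 0.5},
}

def total_utility(values):
    # Plain linear summation: moral values get no separate machinery;
    # conflicts between value types are settled by the same arithmetic.
    return sum(values.values())

for name in sorted(outcomes, key=lambda n: total_utility(outcomes[n]), reverse=True):
    print(f"{total_utility(outcomes[name]):+.2f}  {name}")
```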
The problem for this theory is: If ethics is just preference, why do we prefer to be nice to each other? The answer comes from evolutionary theory. Exactly how it does this is controversial, but it is no longer a deep mystery. One feasible answer is that reproductive success is proportional to inclusive fitness.3 It is important to know how much of our moral intuitions is innate, and how much is conditioned; but I have no strong opinion on this other than that it is probably some of each.
This theory has different implications than the standard story:
As I said, this is nothing new. The standard story makes concessions to it, as social conservatives believe morals should be taught to children using behaviorist principles ("Spare the rod and spoil the child"). This is the theory of ethics endorsed by "Walden Two" and warned against by "A Clockwork Orange". And it is the theory of ethics so badly abused by the former Soviet Union, among other tyrannical governments. More on this, hopefully, in a later post.
Does that mean I can have all the pie?
No.
Eliezer addressed something that sounds like the "ethics as taste" theory in his post "Is morality preference?", and rejected it. However, the position he rejected was the straw-man position that acting to immediately gratify your desires is moral behavior. (The position he ultimately promoted, in "The meaning of right", seems to be the same I am promoting here: That we have ethical intuitions because we have evolved to compute actions as preferable that maximized our inclusive fitness.)
Maximizing expected utility is not done by greedily grabbing everything within reach that has utility to you. You may rationally leave your money in a 401K for 30 years, even though you don't know what you're going to do with it in 30 years and you do know that you'd really like a Maserati right now. Wanting the Maserati does not make buying the Maserati rational. Similarly, wanting all of the pie does not make taking all of the pie moral.
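Put in discounted-utility terms (the numbers are made up purely for illustration): with a per-year discount factor of \(\delta = 0.97\), keeping the 401K is the rational choice whenever \(\delta^{30}\,U(\text{nest egg}) > U(\text{Maserati now})\). Since \(0.97^{30} \approx 0.40\), the delayed payoff only has to be worth about two and a half times the Maserati to you, and that is before counting thirty years of compounded returns.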
More importantly, I would never want all of the pie. It would make me unhappy to make other people go hungry. But what about people who really do want all of the pie? I could argue that they reason that taking all the pie would incur social penalties. But that would result in morals that vanish when no one is looking. And that's not the kind of morals normal people have.
Normal people don't calculate the penalties they will incur from taking all the pie. Sociopaths do that. Unlike the "ethics as willpower" theorists, I am not going to construct a theory of ethics that takes sociopaths as normal.4 They are diseased, and my theory of ethical behavior does not have to explain their behavior, any more than a theory of rationality has to explain the behavior of schizophrenics. Now that we have a theory of evolution that can explain how altruism could evolve, we don't have to come up with a theory of ethics that assumes people are not altruistic.
Why would you want to change your utility function?
Many LWers will reason like this: "I should never want to change my utility function. Therefore, I have no interest in effective means of changing my tastes or my ethics."
Reasoning this way makes the distinction between ethics as willpower and ethics as taste less interesting. In fact, it makes the study of ethics in general less interesting - there is little motivation other than to figure out what your ethics are, and to use ethics to manipulate others into optimizing your values.
You don't have to contemplate changing your utility function for this distinction to be somewhat interesting. We are usually talking about society collectively deciding how to change each others' utility functions. The standard LessWrongian view is compatible with this: You assume that ethics is a social game in which you should act deceptively, trying to foist your utility function on other people while avoiding letting yours be changed.
But I think we can contemplate changing our utility functions. The short answer is that you may choose to change your future utility function when doing so will have the counter-intuitive effect of better-fulfilling your current utility function (as some humans do in one ending of Eliezer's story about babyeating aliens). This can usually be described as a group of people all conspiring to choose utility functions that collectively solve prisoners' dilemmas, or (as in the case just cited) as a rational response to a threatened cost that your current utility function is likely to trigger. (You might model this as a pre-commitment, like one-boxing, rather than as changing your utility function. The results should be the same. Consciously trying to change your behavior via pre-commitment, however, may be more difficult, and may be interpreted by others as deception and punished.)
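A toy illustration of the prisoners'-dilemma case (the payoffs and the size of the internalized "guilt" term are invented): agents who adopt a modified utility function that penalizes defection end up better off even as measured by their original, unmodified payoffs.

```python
# Toy numbers (invented) for the point above: a group that modifies its
# members' utility functions can leave each member better off as measured
# by the member's *original* utility function.
RAW = {  # (my raw payoff, their raw payoff) indexed by (my move, their move)
    ("C", "C"): (3, 3), ("C", "D"): (0, 5),
    ("D", "C"): (5, 0), ("D", "D"): (1, 1),
}
GUILT = 3  # penalty the modified utility function attaches to defecting

def modified_utility(my_move, their_move):
    raw = RAW[(my_move, their_move)][0]
    return raw - (GUILT if my_move == "D" else 0)

def best_reply(their_move):
    return max("CD", key=lambda m: modified_utility(m, their_move))

# With the guilt term, cooperating is the best reply to either move...
assert best_reply("C") == "C" and best_reply("D") == "C"
# ...so two such agents settle on (C, C): raw payoff 3 each, instead of
# the 1 each that two unmodified raw-payoff maximizers get from (D, D).
print("raw payoff after the modification:", RAW[("C", "C")][0])
```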
(There are several longer, more frequently-applicable answers; but they require a separate post.)
Fuzzies and utilons
Eliezer's post, Purchase fuzzies and utilons separately, on the surface appears to say that you should not try to optimize your utility function, but that you should instead satisfy two separate utility functions: a selfish utility function, and an altruistic utility function.
But remember what a utility function is. It's a way of adding up all your different preferences and coming up with a single number. Coming up with a single number is important, so that all possible outcomes can be ordered. That's what you need, and ordering is what numbers do. Having two utility functions is like having no utility function at all, because you don't have an ordering of preferences.
The "selfish utility function" and the "altruistic utility function" are different natural categories of human preferences. Eliezer is getting indirectly at the fact that the altruistic utility function (which gives output in "fuzzies") is indexical. That is, its values have the word "I" in them. The altruistic utility function cares whether you help an old lady across the street, or some person you hired in Portland helps an old lady across the street. If you aren't aware of this, you may say, "It is more cost-effective to hire boy scouts (who work for less than minimum wage) to help old ladies across the street and achieve my goal of old ladies having been helped across the street." But your real utility function prefers that you helped them across the street; and so this doesn't work.
Conclusion
The old religious view of ethics as supernatural and contrary to human nature is dysfunctional and based on false assumptions. Many religious people claim that evolutionary theory leads to the destruction of ethics, by teaching us that we are "just" animals. But ironically, it is evolutionary theory that provides us with the understanding we need to build ethical societies. Now that we have this explanation, the "ethics as taste" theory deserves to be re-evaluated, to see whether it isn't more sensible and more productive than the "ethics as willpower" theory.
0. I use the phrase "effectively believe" to mean both having a belief, and having habits of thought that cause you to also believe the logical consequences of that belief.
1. We have disagreements, such as the possibility of dividing values into terminal and instrumental, the relation of the values of the mind to the values of its organism, and whether having a value implies that propagating that value is also a value of yours (I say no). But they don't come into play here.
3. For more details, see Eliezer's meta-ethics sequence.
4. Also, I do not take Gandhi as morally normal. Not all brains develop as their genes planned; and we should expect as many humans to be pathologically good as are pathologically evil. (A biographical comparison between Gandhi and Hitler shows a remarkable number of similarities.)