A tenet of traditional rationality is that you can't learn much about the world from armchair theorizing. Theory must be epiphenomenal to observation-- our theories are functions that tell us what experiences we should anticipate, but we generate the theories from *past* experiences. And of course we update our theories on the basis of new experiences. Our theories respond to our evidence, usually not the other way around. We do it this way because it works better than trying to make predictions on the basis of concepts or abstract reasoning. Philosophy from Plato through Descartes to Kant is replete with failed examples of theorizing about the natural world on the basis of something other than empirical observation. Socrates thinks he has deduced that souls are immortal; Descartes thinks he has deduced that he is an immaterial mind, that he is immortal, that God exists and that he can have secure knowledge of the external world; Kant thinks he has proven by pure reason the necessity of Newton's laws of motion.

These mistakes aren't just found in philosophy curricula. There is a long list of people who thought they could deduce Euclid's theorems as analytic or a priori knowledge. Epicycles were a response to new evidence, but they weren't a response that truly privileged the evidence. Geocentric astronomers changed their theory *just enough* so that it would yield the right predictions instead of letting a new theory flow from the evidence. The same goes for pre-Einsteinian theories of light, and for quantum mechanics. A kludge is a sign that someone is privileging the hypothesis. It's the same way many of us think the Italian police changed their hypothesis explaining the murder of Meredith Kercher once it became clear that Lumumba had an alibi and that Rudy Guede's DNA and handprints were found all over the crime scene. They just replaced Lumumba with Guede and left the rest of their theory unchanged, even though there was no longer any reason to include Knox and Sollecito in the explanation of the murder. Such theories may make it over the bar of traditional rationality, but they sail right under what Bayes' theorem requires.
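
To put toy numbers on that Bayesian point (the figures below are invented, and the halve-the-prior-per-patch penalty is my own stand-in for a complexity penalty, not anything formal): a hypothesis patched after the fact to fit each observation pays for its fit out of its prior probability, so even when it matches the data as well as a simpler rival, it ends up with less posterior credence.

```python
# Toy comparison via Bayes' theorem: P(H|E) = P(E|H) * P(H) / P(E).
# All numbers are invented; the point is only the shape of the result.

def posterior(prior, likelihood, evidence_prob):
    return likelihood * prior / evidence_prob

p_evidence = 0.5   # assumed marginal probability of the observations
likelihood = 0.9   # both theories now fit the observations equally well...

# ...but the kludged theory needed three ad-hoc patches to get there,
# and suppose each patch halves its prior (an assumed complexity penalty).
simple_prior = 0.2
kludged_prior = 0.2 * 0.5 ** 3

print(posterior(simple_prior, likelihood, p_evidence))   # 0.36
print(posterior(kludged_prior, likelihood, p_evidence))  # 0.045
```

Equal likelihoods, very unequal posteriors: matching the predictions is not enough if you bought the match with epicycles.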

Most people here get this already and many probably understand it better than I do. But I think it needs to be brought up in the context of our ongoing discussion of normative ethics.

Unless we have reason to think about ethics differently, our normative theories should respond to evidence in the same way we expect our theories in other domains to respond to evidence. What are the experiences that we are trying to explain with our ethical theories? Why bother with ethics at all? What is the mystery we are trying to solve? The only answer I can think of is our ethical intuitions. When faced with certain situations, in real life or in fiction, we get strong impulses to react in certain ways, to praise some parties and condemn others. We feel guilt and sometimes make amends. There are some actions we find viscerally abhorrent.

These reactions are for ethics what measurements of time and distance are for physics -- the evidence.

The reason ethicists use hypotheticals like the runaway trolley and the unwilling organ donor is that different normative theories predict different intuitions in response to such scenarios. Short of actually setting up these scenarios for real, this is as close as ethics gets to controlled experiments. Now, there are problems with this method. Our intuitions in fictional cases might differ from our real-life intuitions. The scenario could be poorly described. It might not be as controlled an experiment as we think. Or extraneous features could be clouding the issue, so that our intuitions about a particular case don't actually falsify a particular ethical principle. Just as there are optical illusions, there might be ethical illusions: we can occasionally be wrong about an ethical judgment in the same way that we can sometimes be wrong about the size or velocity of a physical object.

The big point is that the way we should be reasoning about ethics is not from first principles, a priori truths, definitions or psychological concepts. Kant's Categorical Imperative is a paradigm example of screwing this up, but he is hardly the only one. We should be looking at our ethical intuitions and trying to come up with theories that predict future ethical intuitions. And if your theory is outputting results that are systematically or radically different from actual ethical intuitions, then you need to have a damn good explanation for the discrepancy or be ready to change your theory (and not just by adding a kludge).


These reactions are for ethics what measurements of time and distance are for physics -- the evidence.

This may be somewhat controversial. One has to keep in mind that ethical theories aren't necessarily theories about human intuitions. (Indeed, to assume otherwise would seem to take for granted a particular theory, such as emotivism.) So this raises the question of what other sorts of evidence there are for ethical propositions, that could prove intuitions wrong.

ETA: Also, the question of privileging the hypothesis is interesting here, because it poses a challenge to the idea of relying on intuitions. After all, if you want to prove a particular ethical thesis, it usually isn't too hard to come up with some exotic thought-experiment wherein intuition appears to support the desired conclusion. What isn't so clear is how much weight the intuition gleaned from a particular imaginary scenario should be given.

That's a really good point. We (or maybe just I) might be conflating two questions: "Is ethics just about our intuitions?" and "Are our intuitions the only way to collect evidence about ethics?". One place I'm going with this is that I don't think there is ultimately a difference between these questions, but I certainly didn't make that case here. I'm really just trying to make the latter point. Can anyone think of any other sorts of evidence?

It also seems like even if there is another way of gathering evidence about morality that contradicts our intuitions it still has to explain the contradictory intuitions since part of the question of ethics is "what is going on with these intuitions?". For example, Kant gets his morality by examining the form of practical reason but even if that were a valid means of collecting evidence his theory would still need to account for contrary intuitions.

Can anyone think of any other sorts of evidence?

I can imagine one camp arguing for a theory based on nothing but actual observed behavior. Look at field cases (or controlled experiments) where subjects have an ethical choice, and see what they do.

I think I would begin with animals -- what kinds and stages of ethics do they have?

The thing is, we often say of a person's actions that they are ethical or unethical. The fact that someone did something doesn't always tell us much about whether or not that thing is moral. Many people feel like they act unethically.

Put another way: Ted Bundy made some ethical choices.

Also, setting up controlled experiments is difficult if you're worried about being ethical.

I see, so 'ethics' can't be observed directly by behavior.

Whenever we have a choice of action, we label some possible actions 'ethical' and some 'unethical'. We might have a preference for ethical behavior, but it is not the single deciding factor, which is why we can't look at our choices to determine ethics.

So describing ethics is trying to describe why some actions are labeled 'ethical', and you do this by observing which actions you internally label ethical and which you don't. (Sounds perfect for armchair theorizing to me, because all you've got to do is interrogate your intuition...)

Perhaps 'ethics' is still behavior, but behavior that occurs before the action. What do you think about using MRI patterning to identify particular forms of guilt, anxiety, etc? Would this come closer to "observing ethics", or would it still somehow be measuring something different?

So I read that Alonzo Fyfe thinks this is measuring something different, but I guess he is not defining ethics as what you or I internally label ethical. He calls this 'beliefs and other attitudes on morality'. (Is there any kind of evidence possible for his view of ethics?)

What do you think about using MRI patterning to identify particular forms of guilt, anxiety, etc?

That might work. There might also be parts of the brain that are used for ethical decisions, so that you can look at the output from an fMRI scan and see if the person made an ethical decision or not, without knowing what the issue was.

"I hold that moral intuitions are nothing but learned prejudices. Historic examples from slavery to the divine right of kings to tortured confessions of witchcraft or Judaism to the subjugation of women to genocide all point to the fallibility of these 'moral intuitions'. There is absolutely no sense to the claim that its conclusions are to be adopted before those of a reasoned argument." - Alonzo Fyfe

Another (much longer) quote:

Specifically, I am a moral realist. Furthermore, I reject the claim that there is some hard distinction between 'is' and 'ought'. Loyal readers should be familiar with my claim that we should focus instead on the distinction between 'is' and 'is not'. Morality either belongs in the realm of 'is' (somehow), or it belongs in the realm of 'is not'.

However, this does not tell us where to find morality in the realm of 'is'. In past conferences, I have found that the neural ethicists were looking in the wrong spot.

Let me illustrate with an example. A researcher takes a horde of subjects and performs brain scans on them while they think about planets and stars and take astronomy tests. He may learn a lot of interesting things. However, it would be a mistake to call this researcher an astronomer. Studying thoughts about stars and studying stars is not the same thing.

Neural ethicists seem to be unaware of this distinction. They study the brain while the subject thinks about moral concepts or works through some moral problem or puts down an answer on some moral test, and they think they are studying morality. They are not. They are studying beliefs and other attitudes on morality.

This Alonzo Fyfe must know of some other way to gather evidence in normative ethics. Please share it!

I have skimmed the first two links, and based only on these, I think this theory is far too simplistic to be useful for us here at LW.

How do you compare the strength of two desires? How do you aggregate desires? Maybe Fyfe has answers, but I haven't seen them. In the two links, I couldn't even find any attempt to deal with popular corner cases such as animal rights and patient rights. And in a transhuman world, corner cases are the typical cases: constantly reprogrammed desires, splitting and merging minds, the ability to spawn millions of minds with specific desires and so on.

I don't know, maybe this is a common problem with all current theories of ethics, and I only singled out this theory because I'm totally unversed in the literature of ethics. The result is all the same: this seems to be useless as a foundation for anything formalized and long-lasting (FAI).

Indeed, I keep bugging him about this. :(

As for animal rights, this is what he says whenever anyone brings up the topic.

These reactions are for ethics what measurements of time and distance are for physics -- the evidence. The reason ethicists use hypotheticals like the runaway trolley and the unwilling organ donor is that different normative theories predict different intuitions in response to such scenarios.

This is analogous to saying that, when humans display a consistent bias in tests of reasoning, it's evidence that our theories of logic are wrong.

We should be looking at our ethical intuitions and trying to come up with theories that predict future ethical intuitions.

You mean that the study of ethics is the search for sophisticated arguments to justify what we wanted to do in the first place?

I don't dismiss what you're saying; there is something wrong with our understanding of ethics when our theories all describe ways that people don't act. When a pastor bemoans that people are incapable of acting good, it means that he doesn't understand what "good" is. But you've gone to the other extreme. You're undefining "normative". You'll only end up studying evolutionary psychology under the name of ethics.

This is analogous to saying that, when humans display a consistent bias in tests of reasoning, it's evidence that our theories of logic are wrong.

But in reasoning we have a standard other than our intuitions for determining the right answer. So if you give someone a standard loss aversion scenario and they get it wrong, you can show mathematically that they would have averaged a larger gain by taking a chance on the winnings. But there is no reason to think we know the math of morality, and if multiplying utility by persons gives us answers that are contrary to our intuitions, the thing to do seems to be to revise the math, not the intuitions. Now, there might be independent reasons to keep the math and independent reasons to doubt our intuitions. But an automatic denial of intuitions, just to keep your current ethical theory, just is privileging the hypothesis.
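
Here is a minimal sketch of the kind of external standard available in the loss-aversion case but missing in the moral case (the stakes are invented for illustration):

```python
import random

# A sure $400 versus a 50/50 gamble on $1000 or nothing (invented stakes).
sure_thing = 400
gamble = [(0.5, 1000), (0.5, 0)]

# The external standard: expected value.
expected_value = sum(p * payoff for p, payoff in gamble)
print(expected_value, ">", sure_thing)  # 500 > 400: take the gamble

# Long-run check by simulation -- over many plays the gamble averages ~500.
trials = [random.choices([1000, 0], weights=[0.5, 0.5])[0]
          for _ in range(100_000)]
print(sum(trials) / len(trials))
```

Nothing analogous stands behind "multiply utility by persons": we can't simulate our way to the right aggregation rule.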

You mean that the study of ethics is the search for sophisticated arguments to justify what we wanted to do in the first place?

I'd say normative ethics is the process of formalizing and generalizing our intuitions. This can help solve tough cases where our intuitions don't give clear answers. So in that sense ethical theories can provide justifications for tough cases. But I think that justification comes from our intuitions about the easy cases, not anything in the theory itself.

I think you mean something different by "ethics" than I do. I'm more of a realist. I'm not much interested in an ethics whose purpose is to describe human behavior. I'd rather use a different term for that.

If you're trying to take a God's eye view, in order to design a friendly AI and chart the future course of the universe, then the approach you're suggesting would be overly anthropocentric.

My realism might be weaker than yours, but I think I was just being confusing in part of the OP. Normative ethics isn't about explaining our intuitions (even though I say that, I misspoke). It is about what we should do. But we have no access to information about what we should do except through our ethical intuitions. There are cases where our intuitions don't supply answers, and that is why it is a good idea to generalize and formalize our ethical intuitions so that they can be applied to tough cases.

Let me ask you, since you're a moral realist do you believe you have moral knowledge? If so, how did you get it?

What are the experiences that we are trying to explain with our ethical theories? Why bother with ethics at all? What is the mystery we are trying to solve? The only answer I can think of is our ethical intuitions.

We are trying to find out what kinds of things are to be done. Ethical intuitions are some of the observations about what things should be done, but the theory needs to describe what things should be done, not (just) explain ethical intuitions. This is a point where e.g. knowledge about scope insensitivity trumps the raw response of ethical intuition (you acknowledge the problem in the post, but without shifting the focus from observation to the observed).

So our ethical intuitions are scope insensitive. But how do we know how to correct for the insensitivity? Maybe the value of an action increases linearly as it increases in scope. Maybe it increases exponentially. Maybe the right thing to do is average the utility per person. What possible evidence is there to answer this question? For that matter, why should we think that ethics shouldn't be scope insensitive?

For that matter, why should we think that ethics shouldn't be scope insensitive?

See Circular Altruism.

I just reread it but I don't see how it answers the question. Can I impose on you to spell it out for me?

The idea is that when humans are confronted with ethical choices, they often have qualitative feelings (murder is bad, saving lives is good) upon which they try to make their ethical decision. This seems more natural, whereas applying mathematics to ethics seems somewhat repugnant. From Eliezer's post:

My favorite anecdote along these lines - though my books are packed at the moment, so no citation for now - comes from a team of researchers who evaluated the effectiveness of a certain project, calculating the cost per life saved, and recommended to the government that the project be implemented because it was cost-effective. The governmental agency rejected the report because, they said, you couldn't put a dollar value on human life. After rejecting the report, the agency decided not to implement the measure.

My understanding of the gist of the post is that Eliezer argues that if he can demonstrate that the "refusal to multiply" leads to inconsistent ethical choices, then you must admit you should get over your emotional reluctance to apply quantitative reasoning, in order to make sure you make the correct ethical choice.

The scope insensitivity comes from the fact that we consider it extremely ethical to save a child, but not really twice as ethical to save two children... you can see this clearly in action movies where you're supposed to be so thrilled that one particular child was saved when you just saw a whole train, presumably with many families as well as those nearly ethically worthless single men, fall into an abyss.

I'll add my thoughts regarding the question, "why should we think that ethics shouldn't be scope insensitive?" separately:

"Circular Altriusm" suggests that we shouldn't expect that ethical intuition is scope sensitive. But should we expect that ethics is scope sensitive?

I think that 'multiplication' might be good -- and pragmatically required -- for deciding what we should do. However, this doesn't guarantee that it is "ethical" to do so (this distinction is only coherent for some subset of definitions of ethical, obviously). Ethics might require that you "never compromise", in which case ethics would just provide an ideal, unobtainable arrow for how the world should be, and then it is up to us to decide how to pragmatically get the world there, or closer to there.

Yes, this is one of the big questions. In Eliezer's scope insensitivity post, he implied that goodness is proportional to absolute number. But Eliezer has also said that he is an average utilitarian. So actually he should see what he called "scope insensitivity" as pretty near the right thing to do. If someone has no idea how many migrating birds there are, and you ask them, "How much would you pay to save 2,000 birds?", they're going to seize on 2,000 as a cue to how many migrating birds there are. It might seem perfectly reasonable to such a person to suppose there are about 10,000 migrating birds total. But if you asked them, "How much would you pay to save 200,000 migrating birds?" they might suppose there are about 1,000,000 migrating birds. If they value their results by the average good done per existing bird, it would be correct for them to be willing to pay the same amount in both cases.

So what you're asking amounts to asking questions like whether you want to maximize average utility, total utility, or maximum utility.
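
A sketch of what two of those options amount to, using the bird numbers imagined in the comment above (the aggregation rules are just the standard textbook ones, not anything from this thread):

```python
# Two standard aggregation rules. Value per bird saved is normalized to 1.

def total_utility(saved):
    return saved

def average_utility(saved, population):
    return saved / population

# Case 1: save 2,000 birds out of a supposed 10,000.
# Case 2: save 200,000 birds out of a supposed 1,000,000.
print(total_utility(2_000), total_utility(200_000))
# 2000 vs 200000 -- a total utilitarian pays 100x more in case 2

print(average_utility(2_000, 10_000), average_utility(200_000, 1_000_000))
# 0.2 vs 0.2 -- an average utilitarian pays the same, i.e. "scope insensitivity"
# (a "maximum utility" rule would look only at the best-off individual)
```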

Average utilitarianism over the entirety of Reality looks mostly like aggregative utilitarianism locally.

I don't think that most people average over all reality. We treated the last 3,000 wolves in North America with much more consideration than we treated the initial 30,000,000 wolves in North America. It's as if we have a budget per-species; as if we partition reality before applying our utility function.

(People like to partition their utility function, because they don't like to confront questions like, "How many beavers are worth a human's life?", or, "How many lattes would you give up to feed a person in Haiti for a week?")

Treating rare animals as especially valuable could be a result of having a (partitioned) utility function that discounts large numbers of individuals. Or it could have to do with valuing the information that will be lost if they go extinct. I don't know how to disentangle those things.


For that matter, why should we think that ethics shouldn't be scope insensitive?

This is an interesting question.

Theory must be epiphenomenal to observation... We do it this way because it works better than trying to make predictions on the basis of concepts or abstract reasoning.

Theory directs observation. If you don't make predictions and try to confirm or refute them, you're back in the Middle Ages. Don't judge the scientific method by Descartes and Kant.

The word "observation" refers both to the process of gathering data and to the data itself. Here I am using the word in the latter sense.

Two criticisms, which I'll put together as they're somewhat linked.

No account of the distinction between far-mode and near-mode ethical intuitions. It exists -- take, for example, a skilled emotional manipulator manipulating a person. You shouldn't treat the near-mode intuitions as evidence, according to conventional theory, because you don't know all the facts, but your theory doesn't account for this.

You have accounted for this somewhat with the idea of ethical illusions, but you have a problem related to the need for metaethics. If ethical 'illusions' exist, then how do you tell them apart from 'proper' ethics? Metaethics is the rule you use to do this. Less Wrong has a complicated rule related to ethical intuitions -- but without such a rule, how are you supposed to decide whether intuitions, or something else, are the means of deciding?

What are the experiences that we are trying to explain with our ethical theories?

Why do you assume that ethical theories must be in the business of explaining experiences? On the face of it, moral theories do not aim to explain how things are (not even "how things are in our heads" -- that's psychology, not philosophical ethics), but instead address the more practical question: "what should I do?" We develop moral theories to help us to answer this practical question, or to guide our decisions, not to answer some scientific question of the form "why do I have such-and-such experiences?" (Note that an answer to the latter need not have any action-guiding significance at all.)

Yeah, phrasing the question the way I did right there was confusing. This helps me clarify my position. Normative ethics isn't trying to explain our intuitions. It is trying to tell us what we should do. But we don't get information about what we should do in any way except from our intuitions about what we should do. So what normative ethics needs to do is formalize and generalize those intuitions (also, it is definitely worth clarifying what constitutes an ethical intuition!). What it isn't trying to do is justify them, though.

One classic problem in ethics is "why should I be moral?". But when we experience strong ethical intuitions that question doesn't come up; it only comes up when the only attempted justification for an action is an abstract theoretical one.

I'm still not entirely clear on your position here. Are you just affirming the standard methodology of reflective equilibrium? Or are you suggesting something more specific: e.g., that we should weight intuitions about particular cases more heavily than intuitions about general principles?

The reflective equilibrium method is the right kind of approach (and I should have mentioned it). In addition, I think that without independent justification you can only take a general principle as far as your intuitions about it take you. So we might have a general intuition that consequentialism is right -- but we can't just assume that contrary intuitions about particular cases are wrong. All else being equal, we should prefer theories that get both intuitions about general principles and intuitions about particular cases right.

If you're a classic moral realist, then your position makes very straightforward sense: morality is "out there" and we can discover facts about it.

If you're not a moral realist, then the relationship between morality and facts is a lot less straightforward, and I don't see that just roundly asserting that we should treat them the same way moves us forward.

Well, I was bringing this up in the context of normative ethics, and it isn't at all clear to me what normative ethics would even mean if moral realism is false. My best guess is that normative ethics just becomes descriptive ethics (where we're trying to codify the ethics that humans (or the West, or you) in fact hold jointly). And I think everything in the OP holds true for descriptive ethics as well (except that maybe reports of how people say people should act are replaced, where possible, by data about how they actually act). For non-cognitivist theories, as komponisto indicated above, intuitions are even more central -- again, we're just not really doing normative ethics anymore.

So yeah, my position doesn't quite work as stated if moral realism is false. But if moral realism is false then normative ethics doesn't quite work as stated.

Presumably fair enough -- if talking about humans.

Humans share a fair amount -- e.g. see:

"Everybody Laughs, Everybody Cries: Researchers Identify Universal Emotions"

Similarly, there's likely to be a bedrock of human morality that can be uncovered by conventional science.

Similarly, there's likely to be a bedrock of human morality that can be uncovered by conventional science.

For example, "Human Universals".

Among the universals that Donald Brown identifies (listed here), the following all have moral dimensions:

biases in favor of in-group, prevention or avoidance of incest, pride, resistance to abuse of power, self-control, sexual modesty, sanctions for crimes against the collectivity, means of dealing with conflict, murder proscribed, good and bad distinguished, distinguishing right and wrong, judging others, concept of fairness, disapproval of stinginess, envy, symbolic means of coping with envy, etiquette, insulting, interpreting behavior, redress of wrongs, resistance to abuse of power, rape proscribed, pride, taboos, hope, hospitality, moral sentiments, limited effective range of moral sentiments, customary greetings, generosity admired, some forms of violence proscribed.


Amanda Kercher

(Psst....that's Meredith Kercher.)


Duh. Thanks.

Why would you suddenly give an argument against treating the Bible as the source of moral wisdom? This paragraph weakens the post.

You're right. It made sense in the larger context I originally gave the post; I should have removed it. I understand there is a norm against heavy edits to top-level posts -- should I just leave it?

I'd prefer you removing it. It doesn't alter the message of the post itself, rather it's the odd part out.

It doesn't sound to me like you're saying anything new here. We can make moral observations, which count as evidence regarding ethical theories. These observations will be colored by our existing views on morality, but all observations are theory-laden. When you say, "I saw your child doing something bad", you're not using the language wrong.

Aren't you a professional ethicist? If I thought I had something to say about ethics that you hadn't heard before, I'd probably hold out and publish it. :-) But as far as I can tell this is new-for-Less-Wrong.

As for theory-laden observation, I took Eliezer's whole philosophy of science to involve saying: it doesn't have to be this way! We could break from existing theory as soon as the evidence says we should, instead of bending the theory again and again until it breaks.

I'm just curious, since most of the comments have disagreed with the post: did people upvote the post even though they disagreed, or was this just self-selection, with a number of people liking what I said?


In my own case, I wouldn't characterize my comment as "disagreement" so much as "raising issues" and "seeking to refine the thesis".

When I first saw the post it was at -1, which I thought was wrong, so I upvoted to correct this.

Re: We should be looking at our ethical intuitions and trying to come up with theories that predict future ethical intuitions.

That's basically the approach of evolutionary psychology and memetics -- explain why we experience the ethical sentiments we do.

Sure. Shorter OP: Normative ethics should look a lot more like moral psychology.

We should be looking at our ethical intuitions and trying to come up with theories that predict future ethical intuitions.

I reject your normative presumption. I have no obligation to create, promote or live by an ethical system based on the intuitions of either myself in the future or someone else. That's your value, not mine.

And if your theory is outputting results that are systematically or radically different from actual ethical intuitions then you need to have a damn good explanation for the discrepancy or be ready to change your theory (and not just by adding a kludge).

The world is Wrong. There is something to protect. Let me get right on that...

My ethics need not be a description of how people are.

I'm afraid I'm not sure what you're talking about.

There is no reason that I should base my ethical framework around anticipating my future ethical intuitions. I anticipate that my future ethical intuitions will be flawed, inconsistent, and vulnerable to money pumping (or 'rightness' pumping, as the case may be). You're making a general normative assertion (should, 'you need a damn good explanation') about other people's ethics, and I reject it.
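
For what a money pump looks like concretely, here is a minimal sketch (the goods, fee, and cyclic preferences are all invented for illustration): an agent whose preferences cycle A > B > C > A will pay a small fee for each "upgrade" and can be led around the cycle forever, ending where it started but poorer.

```python
# Cyclic (intransitive) preferences: A beats B, B beats C, C beats A.
prefers = {("A", "B"), ("B", "C"), ("C", "A")}  # (x, y) means x preferred to y

def trade(holding, offered, wealth, fee=1.0):
    """The agent pays `fee` to swap `holding` for `offered` when it prefers it."""
    if (offered, holding) in prefers:
        return offered, wealth - fee
    return holding, wealth

holding, wealth = "C", 100.0
for offered in ["B", "A", "C"] * 3:  # walk the agent around the cycle 3 times
    holding, wealth = trade(holding, offered, wealth)

print(holding, wealth)  # "C" 91.0 -- same good as at the start, nine fees poorer
```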

It doesn't look to me like you're interested in descriptive or normative ethics as the fields are usually conceived. That is fine, of course.

It doesn't look to me like you're interested in descriptive or normative ethics as the fields are usually conceived.

My comments are making direct assertions regarding normative ethics, clearly indicating interest. I just disagree with you and reject your 'should' unambiguously. My objections are similar to the other comments here.

Incidentally, I agree with the title of your post, just not your prescription.

So you're basically just saying that you disagree with the conclusion of the post? I guess I thought you were saying something more complicated, since usually when people disagree with conclusions they either try to show that the argument is invalid or that one of the premises is untrue. Would you like to do either of those things?

(Reading this to myself, it sounds sarcastic. But I'm sincere.)

Hi Jack,

We might have trouble communicating across a two-way inferential barrier, as we make significantly different assumptions. But we are both being sincere, so I'll try to give an outline of what I am saying:

  • I expect my future ethical intuitions to be reflectively inconsistent when multiplied out.
  • Reflectively inconsistent ethical systems, when followed, will have consequences that are suboptimal according to any given preferences over possible states of the universe.
  • Wedrifid-would-want to have a reflective ethical system.
  • Wedrifid should do things that wedrifid-would-want, a priori. (Tangentially, everyone else should do what wedrifid-would-want too. It so happens that following their own volition is a big part of wedrifid-would-want but the very nature of should makes all should claims quite presumptive.)
  • Wedrifid should not base his ethical theories around predicting future ethical intuitions.

Allow me to replace 'ethical intuitions' with, let's say, "Coherent Extrapolated Ethical Volition". That may make me more comfortable getting closer to where I think your position is. But even then I wouldn't want to match my ethical judgments now with predicted future ethical intuitions. This is somewhat analogous to the discussion in A Much Better Life?. My ethical theories should match my (coherent) intuitions now, not the intuitions of that other guy called wedrifid who is in the future.

I should add: something we may agree on is that we can use normal techniques of rational inquiry to better elicit what our Present-time Coherent Extrapolated Ethical Volition is. Since the process of acquiring evidence does take time, our effective positions may be similar. We may be, as pjeby would put it, in 'Violent Agreement'. 'Should' claims do that sometimes. :)