

[Looking for feedback, particularly on links to related posts; I'd like to finish this out as a post on the main, provided there aren't too many wrinkles for it to be salvaged.]

Morality as Fixed Computation and Abstracted Idealized Dynamics, both part of the Metaethics Sequence, discuss ethics as computation.  This post is primarily a response to those two posts, which discuss computation and the impossibility of computing the full ethical ramifications of an action.  Note that I treat morality as objective, which means, loosely speaking, that two people who share the same ethical values should arrive, provided neither makes logical errors, at approximately the same ethical system.

On to the subject matter of this post - are Bayesian utilitarian ethics utilitarian?  For you?  For most people?

And, more specifically, is a rational ethics system more rational than one based on heuristics and culture?

I would argue that the answer is, for most people, "No."

The summary explanation of why: Because cultural ethics are functioning ethics.  They have been tested, and work.  They may not be ideal, but most of the "ideal" ethics systems that have been proposed in the past haven't worked.  In terms of Eliezer's post, cultural ethics are the answers that other people have already agreed upon; they are ethical computations which have already been computed, and while there may be errors, most of the potential errors an ethicist might arrive at have already been weeded out.

The longer explanation of why:

First and foremost, rationality, which I will use from here on instead of the word "computation," is -expensive-.  "A witch did it", or the equivalent "Magic!", while not conceptually simple, is logically simple; the complexity is encoded in the concept, not the logic.  The rational explanation for, say, static electricity requires far more information about the universe; for an individual who aspires to be a farmer because he likes growing things, that information may never be useful, and internalizing it may never pay for itself.  It can be fully consistent with a rational attitude to accept irrational explanations when you have no reasonable expectation that the rational explanation will provide any kind of benefit, or more exactly, when the cost of the rational explanation exceeds its expected benefit.

Or, to phrase it another way, it's not always rational to be rational.
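To make the cost-benefit point concrete, here's a minimal sketch in Python; the probabilities, payoffs, the expected_net_benefit function, and the farmer/electrician comparison are all invented purely for illustration, not anything from the linked posts.

```python
# A toy illustration of the cost-benefit claim above.
# Every number here is invented purely for the sake of the example.

def expected_net_benefit(p_useful, benefit_if_useful, cost_to_internalize):
    """Expected value of learning the full rational explanation,
    minus the cost of learning it."""
    return p_useful * benefit_if_useful - cost_to_internalize

# A farmer who will rarely, if ever, need the physics of static electricity:
farmer = expected_net_benefit(p_useful=0.01, benefit_if_useful=10.0,
                              cost_to_internalize=5.0)

# An electrician who uses that knowledge constantly:
electrician = expected_net_benefit(p_useful=0.9, benefit_if_useful=100.0,
                                   cost_to_internalize=5.0)

print(farmer)       # -4.9: "a witch did it" is the cheaper buy
print(electrician)  # 85.0: the rational explanation pays for itself
```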

Terminal Values versus Instrumental Values discusses some of the computational expenses involved in ethics.  It's a nontrivial problem.

Rationality is a -means-, not an end.  A "rational ethics system" is merely an ethical system based on logic, on reason.  But if you don't have a rational reason to adopt a rational ethics system, you're failing before you begin; logic is a formalized process, but it's still just a process.  The reason for adopting a rational ethics system is the starting point, the beginning, of that process.  If you don't have a beginning, what do you have?  An end?  That's not rationality, that's rationalization.

So the very first step in adopting a rational ethics system is determining -why- you want to adopt a rational ethics system.  "I want to be more rational" is irrational.

"I want to know the truth" is a better reason for wanting to be rational.

But the question in turn must, of course, be "Why?"

"Truth has inherent value" isn't an answer, because value isn't inherent, and certainly not to truth.  There is a blue pillow in a cardboard box to my left.  This is a true statement.  You have truth.  Are you more valuable now?  Has this truth enriched your life?  There are some circumstances in which this information might be useful to you, but you aren't in those circumstances, nor in any feasible universe will you be.  It doesn't matter if I lied about the blue pillow.  If truth has inherent value, then every true statement must, in turn, inherit that inherent value.  Not all truth matters.

A rational ethics system must have its axioms.  "Rationality," I hope I have established, is not a useful axiom, nor is "Truth."  It is the values that your ethics system seeks to maximize which are its most important axioms.

The truths that matter are the truths which directly relate to your moral values, to your ethical axioms.  A rational ethics system is a means of maximizing those values - nothing more.

If you have a relatively simple set of axioms, a rational ethics system is relatively simple, if still potentially expensive to compute.  Strict Randian Objectivism, for example, attempts to use human life as its sole primary axiom, which makes it a relatively simple ethical system.  (I'm a less strict Objectivist, and use a different axiom, personal happiness, but this rarely leads to conflict with Randian Objectivism, which uses it as a secondary axiom.)

If, on the other hand, you, like most people, have a wide variety of personal values which you are attempting to maximize, attempting to assess each action on its ethical merits becomes computationally prohibitive.

Which is where heuristics and inherited ethics start to become pretty attractive, particularly when you share your culture's ethical values (and most people do, to a greater extent than they don't).

If you share at least some of your culture's ethical values, normative ethics can provide immense value to you, by eliminating most of the work necessary in evaluating ethical scenarios.  You don't need to start from the bottom up, and prove to yourself that murder is wrong.  You don't need to weigh the pros and cons of alcoholism.  You don't need to prove that charity is a worthwhile thing to engage in.
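To picture the tradeoff, here's a minimal sketch (in Python) of cultural ethics as a cache of precomputed answers; the CULTURAL_NORMS table, the stubbed weigh function, and the cost units are all invented for illustration, not a claim about how anyone actually deliberates.

```python
# Hypothetical sketch of "cultural ethics as cached computation".
# The norms table, the stub weigh() function, and the cost units are invented.

CULTURAL_NORMS = {                      # answers society has already computed
    "shoplift the candy bar": "wrong",
    "drink yourself into a stupor": "wrong",
    "give to charity": "good",
}

def weigh(action, value):
    """Stand-in for the hard part: how much does this action serve or
    violate one particular personal value?  Stubbed out here."""
    return 0

def evaluate_from_scratch(action, personal_values):
    """Weigh an action against every personal value individually;
    the deliberation cost grows with the number of values consulted."""
    cost = len(personal_values)         # one unit of deliberation per value
    score = sum(weigh(action, v) for v in personal_values)
    return ("good" if score >= 0 else "wrong"), cost

def evaluate_with_norms(action, personal_values):
    """Use the cached cultural answer when one exists; only fall back to
    full deliberation for a genuinely novel Moral Question."""
    if action in CULTURAL_NORMS:
        return CULTURAL_NORMS[action], 1    # a single cheap lookup
    return evaluate_from_scratch(action, personal_values)

values = ["honesty", "kindness", "fairness", "loyalty", "liberty", "family"]
print(evaluate_with_norms("shoplift the candy bar", values))         # ('wrong', 1)
print(evaluate_with_norms("keep grandma on life support", values))   # pays full cost
```

The point is only about cost: most everyday actions hit the cache, and full deliberation is reserved for the rare genuinely novel Moral Question.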

"We all engage in ethics, though; it's not like a farmer with static electricity, don't we have a responsibility to understand ethics?"

My flippant response to this question is, should every driver know how to rebuild their car's transmission?

You don't need to be a rationalist in order to reevaluate your ethics.  An expert can rebuild your transmission - an expert can also pose arguments to change your mind.  This has, indeed, happened before on mass scales; racism is no longer broadly acceptable in our society.  It took too long, yes, -but-, a long-established ethics system, being well-tested, should require extraordinary efforts to change.  If it were easily mutable, it would lose much of its value, for it would largely be composed of poorly-tested ideas.

All of which is not to say that rational ethics are inherently irrational - only that one should have a rational reason for engaging in them to begin with.  If you find that societal norms frequently conflict with your own ethical values, that is a good reason to engage in rational ethics.  But if you don't, perhaps you shouldn't.  And if you do, you should be cautious of pushing a rational ethics system on somebody whom existing ethical systems serve well, if your goal is to improve their well-being.

32 comments
maia:

Writing things like "it's not always rational to be rational" is a good sign that you should taboo at least one of the ways you're using the word.

If you had replaced "rational ethics" with "utilitarian ethics calculations from scratch," I think this post would have been better received. Your argument is reasonable in substance, but the way you use the word "rational" seems different from how most people here use it.

"It's not always logical to be logical" suffers the same apparent construction problem, as would "It's not always utilitarian to be utilitarian."

I'm using both words in the same sense; if you prefer, "It's not always rational to choose a rational decision-making process for making a decision." I presume that the choice in how to make a decision is an independent decision from the decision itself; that is, there are in fact two decisions to be made. It's not necessary for the methodology used to make the first decision - what methodology to use for the second decision - to choose itself.

maia:

They do. The advantage of such confusing patterns is that they're memorable and rhetorically interesting, but they receive no points for clarity.

So you actually did mean that if you undergo a meta-level value calculation, you will decide that the value of information from doing an object-level moral calculation is sometimes negative?

The advantage of such confusing patterns is that they're memorable and rhetorically interesting, but they receive no points for clarity.

If the writer is doing his job, the different senses of the term should be clear in context, and the construction serves to reinforce that a distinction is being made between two senses of a term. The cognitive dissonance inherent in the seeming contradiction helps make it memorable, so that it can act as a touchstone for the in-context meaning.

That's if the writer is doing his job. Often, the writer is merely mesmerized by his own language, and is wallowing in the "mystery of the paradox".

maia:

Of course. Complex arguments tend to call for as much clarity as possible, though, so I'd advocate generally avoiding these constructions in venues like LessWrong.

I started as a poet, so I hope I'll be forgiven my occasional forays into rhetorically interesting constructions, as I am prone to them.

I'd say that the construction is somewhat weaker; if you undergo a meta-level value calculation, you -may- decide that the value of information from doing an object-level moral calculation is sometimes negative, including the cost of the calculation in the value of the information. (There's a joke in there somewhere about the infinite cost I calculated in my meta-meta-level value calculation for collecting the information to prove the meta-level calculation for all cases...)
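For concreteness, a toy version of that meta-level calculation might look like the sketch below; every probability, stake, and cost is made up, and the function name is just illustrative.

```python
# Toy meta-level value-of-information calculation; every number is invented.

def net_value_of_calculating(p_changes_decision, stakes, cost_of_calculation):
    """Expected value of doing the object-level moral calculation
    (it only helps if it would actually change your decision),
    minus the cost of doing the calculation itself."""
    return p_changes_decision * stakes - cost_of_calculation

# Deliberating over whether to shoplift the candy bar:
print(net_value_of_calculating(0.001, stakes=2.0, cost_of_calculation=1.0))
# -0.998: negative, so use the cached societal answer instead

# Deliberating over whether to keep grandma on life support:
print(net_value_of_calculating(0.3, stakes=1000.0, cost_of_calculation=10.0))
# 290.0: worth doing the calculation
```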

maia:

They have their uses, but the word "rational" can be a bit sensitive around here. If you've done a value of information calculation and decided the moral calculation isn't worth your time, then obviously doing that moral calculation can't be considered "rational." Though it could be a way to attempt to make a "rational" choice on a moral problem. This meta-level stuff can be tricky!

That's what I meant to say, actually; I think we agree on what the construction means now.

I'll add one thing, on consideration: Doing that calculation may be irrational, but that's not to say the calculation itself is irrational.

"It's not always rational to choose a rational decision-making process for making a decision."

You are NOT using rational in the same sense in the two places it is used in that sentence.

The first rational means something like "optimal," or like "winning" when Eliezer says "rationalists win."

The second means something like "doing your own analysis and calculations to create or derive a system which, in some theoretical but not real world where it is implemented by everybody INSTEAD of the existing system, would be (according to your own calculations) better than the existing system."

"So while you can eliminate the word 'rational' from "It's rational to believe the sky is blue", you can't eliminate the concept 'rational' from the sentence 'It's epistemically rational to increase belief in hypotheses that make successful experimental predictions.'"

I'm discussing the usefulness of rationality as a cognitive algorithm in a situation in which other, cheaper, algorithms are available.

Eliminating the word would make this post less clear, not more.

I should have been more specific, but I was really trying to make my entire comment hyperlinkable to all of the threads we have about this.

So to clarify. Yes, your use of the word is pretty appropriate in the body. But it is very, very common for people to sign up on LW, make a post about "Rational Nose Picking" or something like that, and get downvoted into oblivion. So (if I may generalize from one example) most people on LW have a pretty solidly formed aversion to threads written by new members of the form "Rational ___". This is probably reinforced by the fact that Eliezer wrote an entire metaethics sequence. While you linked to it and are clearly aware of it, people probably are less interested in rehashing issues they feel are already settled. So in summary, I'd recommend you try to come up with a better title and edit the post. (Related: We've noticed over time the meaning of posts tends to converge to the literal meaning of their title.)

For what it's worth, given all of the things mentioned in the first paragraph that led me to expect this would be terrible, I found myself agreeing with most of it, although to be honest, I only skimmed. And for what it's worth, I've upvoted it, since I didn't think it should be below -3.

Note that I treat morality as objective, which means, loosely speaking, that two people who share the same ethical values should arrive, provided neither makes logical errors, at approximately the same ethical system.

This is a tautology. When people say that morality is not objective, they mean that people tend to have different underlying values, and that there is no good way to reconcile them.

When people say that morality is not objective, they mean that people tend to have different underlying values, and that there is no good way to reconcile them.

Objectivity is not about everyone sharing values. If everyone hates the Yankees, "Boo Yankees!" is still not an objective statement.

Objectivity is not universality.

I think what you're trying to say is:

"Morally as computation" is expensive, and you get pretty much the same results from "morality as doing what everyone else is doing." So it's not really rational to try to arrive at a moral system through precise logical reasoning, for the same reasons it's not a good idea to spend an hour evaluating which brand of chips to buy. Yeah, you might get a slightly better result - but the costs are too high.

If that's right, here are my thoughts:

Obviously you don't need to do all moral reasoning from scratch. There aren't many people (on LessWrong or off) who think that you should. The whole point of Created Already in Motion is that you can't do all moral reasoning from scratch. Or, as Yvain put it in his Consequentialism FAQ, you don't need a complete theory of ballistics to avoid shooting yourself in the foot.

That said, "rely on society" is a flawed enough heuristic that almost everyone ought to do some moral reasoning for themselves. The majority of people tend to reject consequentialism in surveys, but there are compelling logical reasons to accept it. Death is widely consideed to be good, and seeking immortality to be immoral, but doing a bit of ethical reasoning tends to turn up different answers.

Moral questions have far greater consequences than day-to-day decisions; they're probably worth a little more of our attention.

(My main goal here is identifying points of disagreement, if any. Let me know if I've interpreted your post correctly.)

You have a good encapsulation of what I'm trying to say, yes.

I'm not arguing against "all moral reasoning from scratch," however, which I would regard as a strawman representation of rational ethics. (It was difficult to wholly avoid the appearance of arguing against morality from scratch, however, while establishing that rationality is not always rational and trying to establish this in ethics as well; I suspect I failed to some extent there, particularly in the bit about the reasons for adopting rational ethics.)

My focus, although it might not have been plain, was primarily on day-to-day decisions; most people might encounter one or two serious Moral Questions in their entire -lives-; whether or not to leave grandma on life support, for example. Societal ethics are more than sufficient for day-to-day decisions; don't shoplift that candy bar, don't drink yourself into a stupor, don't cheat on your math test.

For most people, a rational ethics system costs far more than it provides in benefits. For a few people, it doesn't; either because they (like me) enjoy the act of calculation itself, or because they (say, a priest, or a counselor) are in a position such that they regularly encounter such Moral Questions, and must be capable of answering them sufficiently. We are, in fact, a -part- of society; relying on society therefore doesn't mean leaving Moral Questions unaddressed, but rather means leaving the expensive calculation to others, and evaluating the results (listening to the arguments), a considerably cheaper operation.

most people might encounter one or two serious Moral Questions in their entire -lives-; whether or not to leave grandma on life support, for example. Societal ethics are more than sufficient for day-to-day decisions; don't shoplift that candy bar, don't drink yourself into a stupor, don't cheat on your math test.

Agree.

For most people, a rational ethics system costs far more than it provides in benefits.

I don't think this follows. Calculating every decision costs far more than it provides in benefits, sure. But having a moral system for when serious questions do arise is definitely worth it, and I think they arise more often than you realize (donating to effective/efficient charity, choosing a career, supporting/opposing gay marriage or abortion or universal health care).

We are, in fact, a -part- of society; relying on society therefore doesn't mean leaving Moral Questions unaddressed, but rather means leaving the expensive calculation to others, and evaluating the results (listening to the arguments), a considerably cheaper operation.

So are you saying that you agree people ought to spend time considering arguments for various moral systems, but that they shouldn't all bother with metaethics? Agreed. Or are you saying they shouldn't bother with thinking about "morality" at all, and should just consider the arguments for and against (for example) abortion independent of a bigger system?

And one note: I think you're misusing "rational". Spending an hour puzzling over the optimal purchase of chips is not rational; spending an hour puzzling over whether to shoplift the chips is also not rational. You're only getting the counterintuitive result "rationality is not always rational" because you're treating "rational" as synonymous with "logical" or "optimized" or "thought-through".

I think you could improve the post, and make your point clearer, by replacing "rational" with one of these words.

"And one note: I think you're misusing "rational". Spending an hour puzzling over the optimal purchase of chips is not rational; spending an hour puzzling over whether to shoplift the chips is also not rational. You're only getting the counterintuitive result "rationality is not always rational" because you're treating "rational" as synonymous with "logical" or ""optimized" or "thought-through"."

I think this encapsulates our disagreement.

First, I challenge you to define rationality while excluding those mechanisms. No, I don't really; just consider how you would do it.

Can we define rationality as "a good decision-making process"? (Borrowing from http://lesswrong.com/lw/20p/what_is_rationality/ )

I think the disconnect is in considering the problem as one decision, or two discrete decisions. "A witch did it" is not a rational explanation for something, I hope we can agree, and I hope I established that one can rationally choose to believe this, even though it is an irrational belief.

The first decision is about what decision-making process to use to make a decision. "Blame the witch" is not a good process; it's not a process at all. But when the decision is unimportant, it may be better to use a bad decision-making process than a good one.

Given two decisions, the first about what decision-making process to use, and the second being the actual decision, you can in fact use a good decision-making process (rationally conclude) that a bad decision-making process (an irrational one) is sufficient for a particular task.

For your examples, picking one to address specifically, I'd suggest that it is ultimately unimportant on an individual basis to most people whether or not to support universal health care; their individual support or lack thereof has almost no effect on whether or not it is implemented. Similarly with abortion and gay marriage.

For effective charities, this decision-making process can be outsourced pretty effectively to somebody who shares your values; most people are religious, and their preacher may make recommendations, for example.

I'm not certain I would consider career choice an ethical decision, per se; I regard that as a case where rationality has a high payoff in almost any circumstances, however, and so agree with there, even if I disagree with its usefulness as an opposing example for the purposes of this debate.

Instrumental rationality is doing whatever has the best expected outcome. So spending a ton of time thinking about metaethics may or may not be instrumentally rational, but saying "thinking rationally about metaethics is not rational" is using the word in two different ways, and is the reason your post is so confusing to me.

On your example of a witch, I don't actually see why believing that would be rational. But if you take a more straightforward example, say, "Not knowing that your boss is engaging in insider trading, and not looking, could be rational," then I agree. You might rationally choose to not check if a belief is false.

Why is it necessary to muddy the waters by saying "You might rationally have an irrational belief?"

you can in fact use a good decision-making process (rationally conclude) that a bad decision-making process (an irrational one) is sufficient for a particular task.

Of course. You can decide that learning something has negative expected consequences, and choose not to learn it. Or decide that learning it would have positive expected consequences, but that the value of information is low. Why use the "rational" and "irrational" labels?

Something like half of women will consider an abortion; their support or lack thereof has an enormous impact on whether that particular abortion is implemented. And if you're proposing this as a general policy, the relevant question is whether overall people adopting your heuristic is good, meaning that the question of whether any given one of them can impact politics is less relevant. If lots of people adopt your heuristic, it matters.

For effective charities, everyone who gives to the religious organization selected by their church is orders of magnitude less effective than they could be. Thinking for themselves would allow them to save hundreds of lives over their lifetime.

Shmi:

two people who share the same ethical values should arrive, provided neither makes logical errors, at approximately the same ethical system

I presume that you mean that terminal values determine instrumental values. This is not an obvious statement by any means, and is generally false for any realistic case.

cultural ethics are the answers that other people have already agreed upon; they are ethical computations which have already been computed, and while there may be errors, most of the potential errors an ethicist might arrive upon have already been weeded out.

This is idealized to uselessness. Most such computations are disputed (gun control? abortion? debt? lying? cheating?)

"I presume that you mean that terminal values determine instrumental values. This is not an obvious statement by any means, and is generally false for any realistic case."

Yet most people, when their sister is sick, would assume that administering penicillin is a good idea. That link actually does a fantastic job of supporting my argument that ad-hoc ethics are too computationally expensive for most people to calculate.

"This is idealized to uselessness. Most such computations are disputed (gun control? abortion? debt? lying? cheating?)"

On the contrary, very few such computations are disputed (by the culture at large). Hence the sister and penicillin example.
Shmi:

Hence the sister and penicillin example.

Generalizing From One Example.

It was the example the very post you linked to was built upon.

[Edit] Which is to say, I wasn't generalizing from one example, I was demonstrating that a particular argument doesn't apply by demonstrating that its central example supports me.

There is one significant question about ethics that has been skirted around, but, as far as I remember, never specifically addressed here. "Why should any particular person follow any ethical or moral rule?" Kai Nielsen has an entire book, Why Be Moral?, devoted to the issue, but doesn't come to a good reason.

Humans' inherited patterns of behavior are a beginning (Nielsen only addresses purely philosophical issues in the book), but they are still not adequate for what then becomes the question, "Why not defect?"

I believe the answer to this question is "Because the rule maximizes one's ethical values."

(Without getting into the act versus rule argument, which figures into my post, where I am, to some extent, arguing against act utilitarianism on the grounds that it is too computationally expensive.)

Of course, that leads directly into the question, "Why should any particular person hold any particular ethical value?" I don't believe this question has an answer that doesn't lead directly into another ethical value, which is why I hold ethical values as axioms.