I just had a long conversation with my brother, a devout Christian. With my help, he has outlined the following argument for why it might be good for me to follow Christian deontology:

  1. Many of my moral values arose from my upbringing, as opposed to biology. This is evidenced by the fact that biologically similar people living in different places and epochs have different ideas of what's right.
  2. Therefore many of my values originally came from the society that raised me.
  3. Society's values were strongly influenced by Christian values, and many of our core moral prohibitions are inherited from Christian tradition.
  4. The world is full of people who may want to edit my values ever-so-slightly while I'm not looking, in order to further their own agenda.
  5. Also my values may drift, and most drift is harmful from the perspective of my current values.
  6. A good recipe for countering this insidious deterioration of values is to consciously pull them back toward their original source, as long as it's something unchanging, like a book.
  7. That means editing my values to more closely match Christianity. QED.

What do you think?


I think this is an argument for having your values written down somewhere, and maybe even for getting them from a source that is not original to you, but I don't think it is a good reason to base your values on Christianity. The Bible itself does not closely match most modern persons' values, is not internally consistent, and can be interpreted in a variety of ways.

cousin_it
Good answer, but people are paying too much attention to the last part of #6. Maybe I should've left it out. Instead of becoming a Biblical literalist (which is stupid as you correctly point out) the hero/ine could study the history of religious morality that influenced their upbringing and try to follow that.
Dorikka
It's a bird...it's a plane...it's a Third Alternative!
Vladimir_M
That argument is applicable only to sola scriptura Protestantism, and therefore not to the teachings of most Christian churches.
Alicorn
See #6. The teachings of churches are not unchanging; documents such as the Bible are, or at least can be.
Vladimir_M
Yes, but teachings of churches can also be stable for periods of time long enough to be relevant for this discussion (at least in principle). I don't know whether the original article was written with this in mind, but I understood #6 to refer to any such long-standing tradition. Clearly no religious group nowadays (or in the last couple of millennia, for that matter) espouses Biblical teachings without thick layers of traditional interpretation, whether they admit it or not. So insofar as the question is interesting at all, it should be asked about these traditional interpretations, not the raw Biblical text.

(Also, while documents can remain unchanged for arbitrary periods of time in the sense of containing the same series of written symbols, their interpretations will inevitably change even if the greatest efforts are made to interpret them with maximal literalism or originalism. Consider, for example, that a text written in a living language will, in some centuries, become an archaic document undecipherable without special linguistic and historical training, which by the very nature of things requires some nontrivial interpretation to extract any meaning from it. In this situation, I don't think it's meaningful to talk about the document remaining "unchanged" in any practically relevant sense.)
drethelin
Once you can pick and choose between various churches, you open yourself up to exactly the same sort of drift this is designed to avoid.
Vladimir_M
Well, yes, clearly. But the original argument makes sense only assuming a unique and stable tradition that has determined the values you were brought up with. If this happens to be the tradition of some realistic Christian church (or Jewish denomination), chances are that the text of the Bible is only one element of this tradition -- it definitely doesn't imply the whole content of the tradition by itself, and it may well even contradict parts of it, or at least be harmonized only with strained interpretations. (All this even if an opposite pretense is maintained.) To evaluate the argument from the original article accurately, it is necessary to have a realistic picture of what the tradition in question exactly consists of. It is mistaken to assume that the answer to that question is simply the text of the Bible.
Bongo

Here's another, apolitical scenario about pulling back values:

Consider a world where cosmic rays can hit an agent's brain, homing in on the part containing the utility function. Shielding from these rays is possible but expensive.

In this world, when an agent considers whether to invest in the expensive protection, it asks whether a version of itself newly hit by a cosmic ray would remain loyal to the old version by continuing to maximize the old utility function (with some weight) alongside the new, cosmic-ray-begotten one.

Then, when an agent newly hit by a cosmic ray considers whether to remain loyal, it notices that if it doesn't, it's less likely to exist, since its predecessor would have invested in the protection; so it remains loyal.
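
Here's a toy numerical version of just the predecessor's side of that choice - a rough Python sketch, where the ray probability, shielding cost, and loyalty weight are numbers I'm making up purely for illustration, not part of the scenario itself:

```python
# Toy sketch of the predecessor's decision: pay for shielding, or rely on the
# successor's loyalty. All numbers below are invented for illustration.

RAY_PROB = 0.3        # chance an unshielded agent's utility function is overwritten
SHIELD_COST = 0.2     # cost of shielding, in units of the old utility function
LOYALTY_WEIGHT = 0.5  # weight a hit successor gives the old utility function, if loyal

def old_utility(shielded: bool, successor_loyal: bool) -> float:
    """Expected value of the OLD utility function (full pursuit normalized to 1)."""
    if shielded:
        return 1.0 - SHIELD_COST
    kept = LOYALTY_WEIGHT if successor_loyal else 0.0
    return (1 - RAY_PROB) * 1.0 + RAY_PROB * kept

for loyal in (True, False):
    eu_shield = old_utility(True, loyal)
    eu_bare = old_utility(False, loyal)
    choice = "shield" if eu_shield > eu_bare else "skip the shield"
    print(f"successor loyal={loyal}: shield={eu_shield:.2f}, "
          f"no shield={eu_bare:.2f} -> predecessor would {choice}")
```

With these numbers the predecessor skips the shield only if it predicts the successor will stay loyal, which is exactly the lever the newly hit agent notices when it asks whether it would even have existed had it been predicted to be disloyal.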

cousin_it
Interesting, thanks! I can't formalize your scenario because I don't understand it completely, but it looks like a game theory problem that should yield an equilibrium in mixed strategies, not unconditional loyalty.
Nisan
This assumes that the new agent prefers to have existed, and it's not clear to me that people ordinarily have such a preference.
Bongo
This wasn't about people but about generic game-theoretic agents (and, all else equal, generic game-theoretic agents prefer to exist, because then there will be someone in the world with their utility function, exerting an influence on the world so as to make it rate higher by that utility function than it would if there weren't anyone).
Nisan
Ah, good point.

Major issue: Christian ethics aren't stable. Polygamy, genocide, and slavery were all perfectly normal parts of life at various times during the development of the modern Christian faith. Those practices are frowned upon currently, at least in polite company. While many Christian ideas have in some way shaped current moral beliefs, their direct influence is much smaller than it is usually given credit for.

And consider the CEV proposal: early Christians thought that women and individuals of other races were not nearly as morally important as the males in their culture, yet intelligent early Christians probably would have found the idea of gender and racial equality weird and moderately disconcerting, but not terrible. It might be analogous to my telling you that, a few centuries from now, it will be regarded as a horrible, immoral belief to hold that a human's life is any more important than a chimpanzee's (i.e., trolley problems with two chimpanzees vs. one human). That idea is weird and semi-disturbing, but it doesn't seem terrible. Drift of your moral feelings is fine. Just make sure you put some thought into what sort of direction you want your drift to take.

Emile
Any source on the "other races" bit? That doesn't match what I've read of early Christian history, where, unlike Judaism, Christianity was universal. I agree that later on there was more "religiously sanctioned" racism (I don't know to what extent), but I don't think it was there in the early days.
CaveJohnson
Don't disturb popular confusions of some variants of American Christianity with Early Christianity. It ruins important narratives.
Bongo

Check out this puzzle of mine as well.

You’re a Cthulhu cultist and believe that Cthulhu will reward you by eating you first iff you perform painful ritual self-mutilation in His honor when the moon is full, and would perform the ritual even if you believed that Cthulhu did not exist. You know, of course, that Cthulhu is a perfect predictor of mortals’ actions, so He knows what you would do.

One day you’re persuaded by an atheist’s arguments that Cthulhu doesn’t actually exist.

That night the moon is full. Do you perform the ritual?

Counterargument:

  1. At least some of your values arose from your biology.
  2. Probably, among those "biological" values is the value of allowing your other values to be edited by others around you. This is evidenced by the fact that all humans allow their values to be edited by others around them, unless they've already undergone extreme amounts of indoctrination (and probably even then).
  3. Since you have not (I guess) undergone extreme amounts of indoctrination, there's no reason to expect that you no longer have that value.
Emile
I find calling "allowing your other values to be edited by others" a value a bit forced - it's a feature of human brains, but if I were to model my mind as an agent with values, I'm not sure I'd list that among the values. Also, there are cases where everybody here (probably) agrees we don't want our values to be changed by others (advertisers trying to get us to value the awesome image of driving a Mercedes Benz!), and cases where most people here would agree we want our values to be flexible and open to adopting new ones (say, when thinking about how to deal with completely new and weird cases that aren't covered by our pre-existing values or by Christian tradition, like brain uploads or duplication machines or encountering weird aliens). I think most of the discussion is about where exactly to draw the line.
Wei Dai
But it's not simply a hardwired feature either, since if you gave most people the option of self-modifying away that feature, they wouldn't accept. Perhaps another way to think about it is that we value our humanity (humanness) and allowing our values to be changed (to some extent) by others around us is a part of that.

as long as it's something unchanging, like a book.

People don't get their morals from books (much). Christians included.

The trouble is that there are multiple meanings of "moral values" here. There is the human instantiation, and the ideal decision agent instantiation. The ideal decision agent instantiation is used in #5 and a bit in #4. The human instantiation is used elsewhere.

Though usually these are pretty close and the approximation is useful, it can also run into trouble when you're talking specifically about things humans do that ideal decision agents don't do, and this is one of those things.

Specifically, #5 doesn't necessarily work for human values, since we're so inconsistent. People can go into isolation and just think and come out with different human values. How weird is that?!

Perplexed
I think you are right to call attention to the issue of drift. Drift is bad in a simple value - at least for agents that consider temporal consistency to be a component of rationality. But drift can be acceptable in those 'values' which are valued precisely because they are conventions. It is not necessarily bad for a teenage subculture if its aesthetic values (on makeup, piercing, and hair) drift - as long as they don't drift so fast that nobody knows what to aim for.
timtyler
Those are instrumental values. Nobody cares very much if those change, because they were just a means to an end in the first place.
Perplexed
My position here is roughly that all 'moral' values are instrumental in this sense. They are ways of coordinating so that people don't step on each other's toes. Not sure I completely believe that, but it is the theory I am trying on at the moment. :)
timtyler
Right - but there are surely also ultimate values. Those are the ones that are expected to be resistant to change. It can't be instrumental values all the way down.
Perplexed
Correct. My current claim is that almost all of our moral values are instrumental, and thus subject to change as society evolves. And I find the source of our moral values in an egoism which is made more effective by reciprocity and social convention.
timtyler
I think these guys have a point. So, from my perspective, Egoism is badly named.
cousin_it
I mostly agree, but the argument still works if you throw out 5 altogether.
Manfred
5 is the only is-ought link in the chain. Seems pretty integral to me.
cousin_it
I thought 4 and 5 were parallel, with 4 a bit stronger than 5.
Manfred
But that's only an "is" statement. To think "and that's guaranteed to be bad" at the end of 4 is to assume 5.

Mostly, I think my ability to evaluate this argument is distorted beyond reliability by including the word "Christianity." So, first, let me try and taboo that word and generate a more generalized version of the argument:

1/2. My values are primarily learned from society, not innate.
3. There exists some tradition from which society's values were primarily derived.
4/5. My values, once learned, can be later modified. Most such modifications are harmful from the perspective of those values.
6. Those (harmful) modifications can be countered by revert
... (read more)

When scrutinizing an argument, one good heuristic is to focus on vague words like "many" and aim for a more robust version. The argument has several such words: "many" in #1, "strongly" and "many" in #3, "full of" in #4, "most" in #5.

For instance, does "many of my moral values" stand for 1%, 10%, 50%, 90% or 99% of your values in that argument? How strong an impression does the argument make on you depending on which of these rough quantifiers you substitute for "many"? (Subsid... (read more)

The world is full of people who may want to edit my values ever-so-slightly while I'm not looking, in order to further their own agenda. Also my values may drift, and most drift is harmful from the perspective of my current values. A good recipe for countering this insidious deterioration of values is to consciously pull them back toward their original source, as long as it's something unchanging, like a book.

The argument assumes change is necessarily for the worse. People can acquire new values whilst seeing them as an improvement. If it is possible to m... (read more)

[anonymous]
Can you expand on what you mean by meta-evaluate? I can understand analysing values using a framework that has nothing to do with my values. But why would I try to maximise some metric employed by that framework? I'm confused as to why I don't end up just following my values.

The world is full of people who may want to edit my values ever-so-slightly while I'm not looking, in order to further their own agenda.

Hm, at this point it sounds similar to the point Phil Goetz was making in "Reason as memetic immune disorder".

There is moral error, and there is moral disagreement. If your values change because of moral disagreement between your past and future self, that is something you'd want to prevent. If, however, you are simply correcting your moral error, this should be encouraged. In this case, your future self is acting more morally than you are by your current belief system, since he understands it better.

I think most of my change in morality will be due to correcting moral error. As such, in a matter of dispute, I trust my future self over my present self.

As Alicorn says, provided you are averse to value drift, this is an argument for writing your values down and using that as a periodic anchor. Not only is it unclear what Christianity's values actually are (witness the tremendous proliferation of interpretations among Christians, in outright defiance of other interpretations); making this change would itself constitute a shift in your values to satisfy someone else's agenda, in ways that are harmful with respect to your current values.

The argument has merit, but the conclusion (7) needs to be replaced with something more appropriate in light of (3). You should edit your values to more closely match an amalgam of the many influences that affected you. Or better yet, as Alicorn says, have your values written down somewhere. Including your acceptance of rational change - which puts an interesting twist on the whole deal.

cousin_it
Agreed. Also, TDT adds another interesting twist. If I always do whatever I would've precommitted to doing earlier, how far should I roll back my morality once I notice the argument?

Can anyone explain why, in a rapidly changing world, we need "absolute" and "eternal" morality?

cousin_it
We don't. The argument in the post is trying to solve the problem of protecting whatever current values you happen to have (and the values that your past selves happened to have, if you care about them), not the problem of finding absolute eternal values (whatever that means).
TheOtherDave
My understanding of "eternal values" is precisely the thing I get if I solve the problem of protecting my current values in a sufficiently general way: a set of values that does not change over time. This is in the same sense that solving the "don't die today" problem in a sufficiently general way provides eternal life.
[anonymous]

First of all, this may be an attempt to change your value system and I want you to bear that in mind while reading this post.

1: Seems to be a statement of fact for which there is a lot of evidence, and which I don't have a problem with.

2: Seems to be a reasonable conclusion from 1.

3: Seems to be a conclusion that holds for people living in the United States, and it has SOME evidence backing it, but there do exist counterarguments against it, such as here. Rather than sidetracking this, I'll just link a Google search and let you draw your own conclusions.

4: I feel like o... (read more)

I think the argument is interesting and partly valid. Explaining which part I like will take some setup.

Many of our problems thinking about morality, I think, arise from a failure to make a distinction between two different things.

  • Morality in daily life
  • Morality as an ideal

Morality of daily life is a social convention. It serves its societal and personal (egoistically prudent) function precisely because it is a (mostly) shared convention. Almost any reasonable moral code, if common knowledge, is better than no common code.

Morality as an idea... (read more)

Isn't there a hidden problem with values taking other values as their arguments? If I can value having a particular value, I can possibly construct self-referential paradoxical values like V = value of not having value V, and empty values like V = value of having value V. A value system including value W = value of preserving other values, W included, seems to be of that kind, and I am not sure whether it can be transformed into a consistent decision algorithm.

On the other hand, look at what happens if we avoid self-reference by introducing distinct types o... (read more)

cousin_it
I expect that in most cases having an object-level value V should make you behave as if you were also protecting the meta-value V' of your valuing V, because your switching away from V can be detrimental to V. Also see my reply to torekp for a reason why you might want to return to your past values even though they don't coincide with the current ones.
prase
It certainly seems that valuing V implies valuing valuing V and valuing^3 V and so on. But if we try to formalise the notion of value a bit, doesn't it produce some unexpected paradoxes? I haven't thought about it in detail, so perhaps there is no problem, but I am not convinced. I don't understand the relevance of your reply to torekp.
cousin_it
I don't see how paradoxes could arise. For example, if you have a value V of having value V, that's a perfectly well-defined function on future states of the world, and you know what to do to maximize it. (You can remove explicit self-reference by using quining, aka the diagonal lemma.) Likewise for the value W of not having value W. The actions of an agent having such a value will be pretty bizarre, but Bayesian-rational. As for my reply to torekp: it shows a possible reason why you might want to return to your past values once you approach TDT-ish reflective consistency, even if you don't want that in your current state. I'm not sure it's correct, though.
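
A minimal sketch of the quining move, in Python - everything here (the names, the way the world-state is encoded) is my own invention for illustration, not anything established in the thread. The point is just that the utility function never mentions itself by name; it is handed its own source text as an argument, so "value V = valuing V" stays a well-defined function of world-states:

```python
# Sketch: a self-valuing utility function without explicit self-reference.
# The function receives its own source text as an argument (the quining move)
# and rewards world-states in which the agent still runs that exact source.

TEMPLATE = '''\
def utility(world_state, own_source):
    # Value V = "having value V": pay off iff the agent's current utility
    # source matches the source this very function was built from.
    return 1.0 if world_state.get("utility_source") == own_source else 0.0
'''

namespace = {}
exec(TEMPLATE, namespace)

def self_valuing_utility(world_state):
    # Close over TEMPLATE so the function judges worlds against its own source.
    return namespace["utility"](world_state, TEMPLATE)

print(self_valuing_utility({"utility_source": TEMPLATE}))  # 1.0: the agent still holds V
print(self_valuing_utility({"utility_source": None}))      # 0.0: V has been edited away
```

The "value W = not having value W" case would just flip the payoff - bizarre to act on, as noted above, but still a well-defined maximization target.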

drift is harmful from the perspective of my current values

True. And that drift would be beneficial from the perspective of your new, drifted-to values.

But neither of those statements has any bearing on whether value drift (in general or any specific instance thereof) is good or bad.

prase
It's good measured by the new values and bad measured by the old ones. What other standards of goodness do we have at our disposal in this problem?
mutterc
Good question :-) We could measure them against some non-relativist ethical standard, like "US Dollars lost", "lives lost", "person-weeks stuck in traffic" or somesuch.
Alexei

All values, wherever they come from, need to be re-examined on their own merits. At one point slavery was thought to be acceptable by a lot of people. If you grew up in that society, you would probably have inherited that belief as well. There are very likely similar beliefs that you hold right now, given to you by some source you find credible (society, the Bible, or LW), that you might be better off not having. That's why you need to examine every single belief you have independently of its source. You can't assume they are automatically correct because ... (read more)

Emile
I disagree - it's neutral at worst. There are some advantages if everybody in a society has similar social norms: obviously driving on a certain side of the road, but also things like whether tipping in restaurants is "morally required" or not (it is in the US, it isn't in France, and in both cases the pay of servers is adjusted accordingly). Or, if you're talking about consistency at the individual level, having consistent values makes one slightly more predictable, and the expected correctness of a set of consistent values is probably slightly higher than the expected correctness of a set of inconsistent values. In general, most people are more comfortable living in a society that shares their values, so I'd say "consistency for the sake of consistency" is generally slightly good, to be balanced of course with other things.
Alexei
Good point. I was talking about consistency on the individual level, and overall it's probably at least mildly beneficial.
MixedNuts
Warning to foreigners: tipping in restaurants is morally required in France. It's a tiny tip, about two euros, but not tipping still makes you a very rude and bad person who defects.
Emile
Is it? I get the impression that it's expected in cafés, but "more optional" in restaurants (it probably also depends on the restaurant). Some quick googling seems to agree (except for the restaurant/café difference - maybe my pattern matching on the behavior of others is overactive).
MixedNuts
The website says it's a Paris thing, which sounds plausible. I wouldn't know about nice restaurants. I'm pretty sure it's expected in regular (for some value thereof) restaurants, at least in Paris: French television movies always show waiters getting mad at customers who don't tip, and my parents (who are stingy with tips) always tip in restaurants. My intuition says that tipping in restaurants is even more important than in cafés, but I don't know why - maybe just because the tip is bigger? (Remember the second Paris meetup, where I made an ass of myself by complaining I didn't have enough money? I added a few coins to the pile when we left anyway. Not tipping is a mortal sin.)
cousin_it
The post assumed that protecting the morality you happen to have is a worthwhile goal, which is orthogonal to the problem of finding the "right" morality that your comment is trying to address.