CCC comments on Wanting to Want - Less Wrong

Post author: Alicorn 16 May 2009 03:08AM


Comment author: CCC 31 October 2012 10:08:17AM 0 points

I'm sorry, there's an ambiguity there - when you say "the sum of these", are you summing across the moral values and imperatives of a single person, or of humanity as a whole?

Nearly all of humanity as a whole. There are obviously some humans who don't really value morality (we call them sociopaths), but I think most humans care about very similar moral concepts.

Okay then, next question; how do you decide which people to exclude? You say that you are excluding sociopaths, and I think that they should be excluded; but on exactly what basis? If you're excluding them simply because they fail to have the same moral imperatives as the ones that you think are important, then that sounds very much like a No True Scotsman argument to me. (I exclude them mainly on an argument of appeal to authority, myself, but that also has logic problems; in either case, it's a matter of first sketching out what the moral imperative should be, then throwing out the people who don't match).

And for a follow-up question; is it necessary to limit it to humanity? Let us assume that, ten years from now, a flying saucer lands in the middle of Durban, and we meet a sentient alien form of life. Would it be necessary to include their moral preferences in the equation as well?

Even if they are Pebblesorters?

In fact, if diversity is a good, as we discussed previously, then people having different personal preferences might in fact be morally desirable.

It may be, but only within a limited range. A serial killer is well outside that range, even if he believes that he is doing good by only killing "evil" people (for some definition of "evil").

What I'm wondering is, would I have a moral duty to share resources with a paperclipper if it existed, or would pretty much any of the things I'd spend the resources on if I kept them for myself (i.e., eudaemonic things) count as "something more important"?

Hmmm. I think I'd put "buying a packet of paperclips for the paperclipper" on the same moral footing, more or less, as "buying an ice cream for a small child". It's nice for the person (or paperclipper) receiving the gift, and that makes it a minor moral positive by increasing happiness by a tiny fraction. But if you could otherwise spend that money on something that would save a life, then that clearly takes priority.

I think there might actually be lots of people like this, but most appear normal because they place even greater negative value on doing something stupid because they ignored good advice just because it came from an authority. In other words, following authority is a negative terminal value, but an extremely positive instrumental value.

Hmmm. Good point; that is quite possible. (Given how many people seem to follow any reasonably persuasive authority, though, I suspect that most people have a positive priority for this goal - this is probably because, for a lot of human history, peasants who disagreed with the aristocracy tended to have fewer descendants unless they all disagreed and wiped out said aristocracy).

Exactly. I would still want the world to be full of a diverse variety of people, even if I had a nonsentient AI that was right about everything and could serve my every bodily need.

Here's a tricky question - what exactly are the limits of "nonsentient"? Can a nonsentient AI fake it by, with clever use of holograms and/or humanoid robots, causing you to think that you are surrounded by a diverse variety of people even when you are not (thus supplying the non-bodily need of social interaction)? The robots would all be philosophical zombies, of course; but is there any way to tell?

Comment author: Ghatanathoah 31 October 2012 11:02:54AM 1 point

Okay then, next question; how do you decide which people to exclude?

I don't think I'm coming across right. I'm not saying that morality is some sort of collective agreement of people in regards to their various preferences. I'm saying that morality is a series of concepts such as fairness, happiness, freedom, etc.; that these concepts are objective in the sense that it can be objectively determined how much fairness, freedom, happiness, etc. there is in the world; and that the sum of these concepts can be expressed as a large equation.
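
To make that concrete, here is a toy sketch in Python. Everything in it is a made-up stand-in for illustration: the choice of concepts, the idea that each can be scored numerically, the weights, and the simple weighted-sum form.

    # Toy illustration only: pretend each moral concept can be objectively
    # measured for a world-state, and that the "large equation" is some
    # aggregation of those measurements (a weighted sum, for simplicity).
    WEIGHTS = {"fairness": 1.0, "freedom": 1.0, "happiness": 1.0}

    def moral_value(world: dict) -> float:
        """Score a world-state from measured levels of each moral concept."""
        return sum(WEIGHTS[concept] * world[concept] for concept in WEIGHTS)

    # Two hypothetical world-states, one clearly better than the other.
    print(moral_value({"fairness": 0.9, "freedom": 0.8, "happiness": 0.7}))  # ~2.4
    print(moral_value({"fairness": 0.1, "freedom": 0.2, "happiness": 0.3}))  # ~0.6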

People vary in their preference for morality; most people care about fairness, freedom, happiness, etc., to some extent. But there are some people who don't care about morality at all, such as sociopaths.

Morality isn't a preference. It isn't the part of a person's brain that says "This society is fair and free and happy, therefore I prefer it." Morality is those disembodied concepts of freedom, fairness, happiness, etc. So if a person doesn't care about those things, it doesn't mean that freedom, fairness, happiness, etc. aren't part of their morality. It means that person doesn't care about morality; they care about something else.

To use the Pebblesorter analogy again, the fact that you and I don't care about sorting pebbles into prime-numbered heaps isn't because we have our own concept of "primeness" that doesn't include 2, 3, 5 and 7. It just means we don't care about primeness.

To make another analogy, if most people preferred wearing wool clothes but one person preferred cotton, that wouldn't mean that that person had their own version of wool, which was cotton. It means that that person doesn't prefer wool.

Look inward, and consider why you think most people should be included. Presumably it's because you really care a lot about being fair. But that necessarily means that you cared about fairness before you even considered what other people might think. Otherwise it wouldn't have even occurred to you to think about what they preferred in the first place.

The fact that most humans care, to some extent, about the various facets of morality is a very lucky thing; a planet full of sociopaths would be most unpleasant. But it isn't relevant to the truth of morality. You'd still think torturing people was bad if all the non-sociopaths on Earth except you were killed, wouldn't you? If, in that devastated world, you came across a sociopath torturing another sociopath or an animal, and could stop them at no risk to yourself, you'd do it, wouldn't you?

You say that you are excluding sociopaths, and I think that they should be excluded; but on exactly what basis?

I suspect that your intuition comes from the fact that a central part of morality is fairness, and sociopaths don't care about fairness. Obviously being fair to the unfair is as unwise as tolerating the intolerant.

And for a follow-up question; is it necessary to limit it to humanity? Let us assume that, ten years from now, a flying saucer lands in the middle of Durban, and we meet a sentient alien form of life. Would it be necessary to include their moral preferences in the equation as well?

Again, I want to emphasize that morality isn't the "preference" part; it's the "concepts" part. But the question of the moral significance of aliens is relevant; I think it would depend on how many of the concepts that make up morality they cared about. I think that at a bare minimum they'd need fairness and sympathy.

So if the Pebblesorters that came out of that ship were horrified that we didn't care about primality, but were willing to be fair and share the universe with us, they'd be a morally worthwhile species. But if they had no preference for fairness or any sympathy at all, and would gladly kill a billion humans to sort a few more pebbles, that would be a different story. In that case we should probably, after satisfying ourselves that all Pebblesorters were psychologically similar, start prepping a Relativistic Kill Vehicle to point at their planet if they try something.

Here's a tricky question - what exactly are the limits of "nonsentient"? Can a nonsentient AI fake it by, with clever use of holograms and/or humanoid robots, causing you to think that you are surrounded by a diverse variety of people even when you are not (thus supplying the non-bodily need of social interaction)? The robots would all be philosophical zombies, of course; but is there any way to tell?

I don't know if I could tell, but I'd very much prefer that the AI not do that, and would consider myself to have been massively harmed if it did, even if I never found out. My preference is to actually interact with a diverse variety of people, not to merely have a series of experiences that seem like I'm doing it.

Comment author: CCC 01 November 2012 07:33:30AM 0 points

I don't think I'm coming across right. I'm not saying that morality is some sort of collective agreement of people in regards to their various preferences. I'm saying that morality is a series of concepts such as fairness, happiness, freedom, etc.; that these concepts are objective in the sense that it can be objectively determined how much fairness, freedom, happiness, etc. there is in the world; and that the sum of these concepts can be expressed as a large equation.

Ah, I think I see your point. What you're saying - and correct me if I'm wrong - is that there is some objective True Morality, some complex equation that, if applied to any possible situation, will tell you how moral a given act is.

This is probably true.

This equation isn't written into the human psyche; it exists independently of what people think about morality. It just is. And even if we don't know exactly what the equation is, even if we can't work out the morality of a given act down to the tenth decimal place, we can still apply basic heuristics and arrive at a usable estimate in most situations.

My question is, then - assuming the above is true, how do we find that equation? Does there exist some objective method whereby you, I, a Pebblesorter, and a Paperclipper can all independently arrive at the same definition for what is moral (given that the Pebblesorter and Paperclipper will almost certainly promptly ignore the result)?

(I had thought that you were proposing that we find that equation by summing across the moral values and imperatives of humanity as a whole - excluding the psychopaths. This is why I asked about the exclusion, because it sounded a lot like writing down what you wanted at the end of the page and then going back and discarding the steps that wouldn't lead there; that is also why I asked about the aliens).

I don't know if I could tell, but I'd very much prefer that the AI not do that, and would consider myself to have been massively harmed if it did, even if I never found out. My preference is to actually interact with a diverse variety of people, not to merely have a series of experiences that seem like I'm doing it.

Yes, I think we're in agreement on that. (Though this does suggest that 'sentient' may need a proper definition at some point).

Comment author: nshepperd 01 November 2012 09:35:41AM 1 point

What you're saying - and correct me if I'm wrong - is that there is some objective True Morality, some complex equation that, if applied to any possible situation, will tell you how moral a given act is.

In the same way as there exists a True Set of Prime Numbers, and True Measure of How Many Paperclips There Are...

Comment author: Ghatanathoah 01 November 2012 08:56:18AM -1 points

My question is, then - assuming the above is true, how do we find that equation?

Even though the equation exists independently of our thoughts (the same way primality exists independently from Pebblesorter thoughts), the fact that we are capable of caring about the results given by the equation means we must have some parts of it "written" in our heads, the same way Pebblesorters must have some concept of primality "written" in their heads. Otherwise, how would we be capable of caring about its results?

I think that probably evolution metaphorically "wrote" a desire to care about the equation in our heads because if humans care about what is good and right it makes it easier for them to cooperate and trust each other, which has obvious fitness advantages. Of course, the fact that evolution did a good thing by causing us to care about morality doesn't mean that evolution is always good, or that evolutionary fitness is a moral justification for anything. Evolution is an amoral force that causes many horrible things to happen. It just happened that in this particular instance, evolution's amoral metaphorical "desires" happened to coincide with what was morally good. That coincidence is far from the norm; in fact, evolution probably deleted morality from the brains of sociopaths because double-crossing morally good people also sometimes confers a fitness advantage.

So how do we learn more about this moral equation that we care about? One common form of attempting to get approximations of it in philosophy is called reflective equilibrium, where you take your moral imperatives and heuristics and attempt to find the commonalities and consistencies they have with each other. It's far from perfect, but I think that this method has produced useful results in the past.
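
As a rough sketch of the loop (in Python, with every interesting function left as a hypothetical stand-in, since that is where all the hard philosophical work lives):

    # Toy sketch of reflective equilibrium: hold general principles and
    # particular judgments, and whenever they conflict, revise whichever
    # side you trust less, until no conflicts remain (or you give up).
    def reflective_equilibrium(principles, judgments, conflicts,
                               revise_principle, revise_judgment,
                               trust_judgment, max_rounds=100):
        for _ in range(max_rounds):
            clashes = conflicts(principles, judgments)
            if not clashes:
                # Coherent; note this guarantees consistency, not correctness.
                return principles, judgments
            principle, judgment = clashes[0]  # handle one conflict at a time
            if trust_judgment(principle, judgment):
                principles = revise_principle(principles, principle, judgment)
            else:
                judgments = revise_judgment(judgments, principle, judgment)
        return principles, judgments  # may still be incoherent after max_rounds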

Eliezer has proposed what is essentially a souped up version of reflective equilibrium called Coherent Extrapolated Volition. He has argued, however, that the primary use of CEV is in designing AIs that won't want to kill us, and that attempting to extrapolate other people's volition is open to corruption, as we could easily fall to the temptation to extrapolate it to something that personally benefits us.

Does there exist some objective method whereby you, I, a Pebblesorter, and a Paperclipper can all independently arrive at the same definition for what is moral (given that the Pebblesorter and Paperclipper will almost certainly promptly ignore the result)?

Again, we could probably get closer through reflective equilibrium, and by critiquing the methods and results of each other's reflections. If you somehow managed to get a Pebblesorter or a Paperclipper to do it too, they might generate similar results, although since they don't intrinsically care about the equation you would probably have to give them some basic instructions before they started working on the problem.

I had thought that you were proposing that we find that equation by summing across the moral values and imperatives of humanity as a whole - excluding the psychopaths.

If we assume that most humans care about acting morally, doing research about what people's moral imperatives are might be somewhat helpful, since it would allow us to harvest the fruits of other people's moral reflections and compare them with our own. We can exclude sociopaths because there is ample evidence that they care nothing for morality.

Although I suppose that a super-genius sociopath who had the basic concept explained to them might be able to do some useful work in the same fashion that a Pebblesorter or Paperclipper might be able to. Of course, the genius sociopath wouldn't care about the results, and probably would have to be paid a large sum to even agree to work on the problem.

Comment author: CCC 01 November 2012 02:14:17PM 0 points

I think that probably evolution metaphorically "wrote" a desire to care about the equation in our heads because if humans care about what is good and right it makes it easier for them to cooperate and trust each other, which has obvious fitness advantages.

Hmmm. That which evolution has "written" into the human psyche could, in theory, and given sufficient research, be read out again (and will almost certainly not be constant across most of humanity, but will rather exist with variations). But I doubt that morality is all in our genetic nature; I suspect that most of it is learned from our parents, aunts, uncles, grandparents and other older relatives; I think, in short, that morality is memetic rather than genetic. Though evolution still happens in memetic systems just as well as in genetic systems.

So how do we learn more about this moral equation that we care about? One common form of attempting to get approximations of it in philosophy is called reflective equilibrium, where you take your moral imperatives and heuristics and attempt to find the commonalities and consistencies they have with each other. It's far from perfect, but I think that this method has produced useful results in the past.

Hmmm. Looking at the Wikipedia article, I can expect reflective equilibrium to produce a consistent moral framework. I also expect a correct moral framework to be consistent; but not all consistent moral frameworks are correct. (A Paperclipper does not have what I'd consider a correct moral framework, but it does have a consistent one).

If you start out close to a correct moral framework, then reflective equilibrium can move you closer, but it doesn't necessarily do so.

Eliezer has proposed what is essentially a souped up version of reflective equilibrium called Coherent Extrapolated Volition. He has argued, however, that the primary use of CEV is in designing AIs that won't want to kill us, and that attempting to extrapolate other people's volition is open to corruption, as we could easily fall to the temptation to extrapolate it to something that personally benefits us.

Hmmm. The primary use of trying to find the True Morality Equation, to my mind, is to work it into a future AI. If we can find such an equation, prove it correct, and make an AI that maximises its output value, then that would be an optimally moral AI. This may or may not count as Friendly, but it's certainly a potential contender for the title of Friendly.

Again, we could probably get closer through reflective equilibrium, and by critiquing the methods and results of each other's reflections. If you somehow managed to get a Pebblesorter or a Paperclipper to do it too, they might generate similar results, although since they don't intrinsically care about the equation you would probably have to give them some basic instructions before they started working on the problem.

Carrying through this method to completion could give us - or anyone else - an equation. But is there any way to be sure that it necessarily gives us the correct equation? (A Pebblesorter may actually be a very good help in resolving this question; he does not care about morality, and therefore does not have any emotional investment in the research).

The first thought that comes to my mind, is to have a very large group of researchers, divide them into N groups, and have each of these groups attempt, independently, to find an equation; if all of the groups find the same equation, this would be evidence that the equation found is correct (with stronger evidence at larger values of N). However, I anticipate that the acquired results would be N subtly different, but similar, equations.
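
In toy code terms, the comparison might look like this (the candidate equations and the battery of test situations are hypothetical stand-ins):

    # Toy check: each group's result is a function from situations to moral
    # scores; close agreement across many test situations would be evidence
    # that the groups converged on the same equation.
    def all_agree(candidates, situations, tol=1e-6):
        return all(
            max(f(s) for f in candidates) - min(f(s) for f in candidates) <= tol
            for s in situations
        )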

Comment author: Ghatanathoah 01 November 2012 02:36:52PM -1 points

But I doubt that morality is all in our genetic nature; I suspect that most of it is learned from our parents, aunts, uncles, grandparents and other older relatives; I think, in short, that morality is memetic rather than genetic.

That's possible. But memetics can't build morality out of nothing. At the very least, evolved genetics has to provide a "foundation," a part of the brain that moral memes can latch onto. Sociopaths lack that foundation, although the research is inconclusive as to what extent this is caused by genetics, and to what extent it is caused by later developmental factors (it appears to be a mix of some sort).

Hmmm. Looking at the Wikipedia article, I can expect reflective equilibrium to produce a consistent moral framework. I also expect a correct moral framework to be consistent; but not all consistent moral frameworks are correct.

Yes, that's why I consider reflective equilibrium to be far from perfect. Depending on how many errors you latch onto, it might worsen your moral state.

Carrying through this method to completion could give us - or anyone else - an equation. But is there any way to be sure that it necessarily gives us the correct equation?

Considering how morally messed up the world is now, even an imperfect equation would likely be better (closer to being correct) than our current slapdash moral heuristics. At this point we haven't even achieved "good enough," so I don't think we should worry too much about being "perfect."

However, I anticipate that the acquired results would be N subtly different, but similar, equations.

That's not inconceivable. But I think that each of the subtly different equations would likely be morally better than pretty much every approximation we currently have.

Comment author: CCC 03 November 2012 01:32:03PM 0 points

But memetics can't build morality out of nothing. At the very least, evolved genetics has to provide a "foundation," a part of the brain that moral memes can latch onto. Sociopaths lack that foundation, although the research is inconclusive as to what extent this is caused by genetics, and to what extent it is caused by later developmental factors

That sounds plausible, yes.

Considering how morally messed up the world is now, even an imperfect equation would likely be better (closer to being correct) than our current slapdash moral heuristics. At this point we haven't even achieved "good enough," so I don't think we should worry too much about being "perfect."

Hmmm. Finding an approximation to the equation will probably be easier than step two: encouraging people worldwide to accept the approximation. (Especially since many people who do accept it will then promptly begin looking for loopholes, either to use them or to patch them).

However, if the correct equation cannot be found, then this means that the Morality Maximiser AI cannot be designed.

Comment author: Ghatanathoah 06 November 2012 01:12:32AM -1 points

However, if the correct equation cannot be found, then this means that the Morality Maximiser AI cannot be designed.

That's true; what I was trying to say is that a world ruled by a 99.99% Approximation of Morality Maximizer AI might well be far, far better than our current one, even if it is imperfect.

Of course, it might be a problem if we put the 99.99% Approximation of Morality Maximizer AI in power, then find the correct equation, only to discover that the 99AMMAI is unwilling to step down in favor of the Morality Maximizer AI. On the other hand, putting the 99AMMAI in power might be the only way to ensure a Paperclipper doesn't ascend to power before we find the correct equation and design the MMAI. I'm not sure whether we should risk it or not.

Comment author: TheOtherDave 31 October 2012 03:12:46PM 0 points

So, OK. Suppose, on this account, that you and I both care about morality to the same degree... that is, you don't care about morality more than I do, and I don't care about morality more than you do. (I'm not sure how we could ever know that this was the case, but just suppose hypothetically that it's true.)

Suppose we're faced with a situation in which there are two choices we can make. Choice A causes a system to be more fair, but less free. Choice B leaves that system unchanged. Suppose, for simplicity's sake, that those are the only two choices available, and we both have all relevant information about the system.

On your account, will we necessarily agree on which choice to make? Or is it possible, in that situation, that you might choose A and I choose B, or vice-versa?

Comment author: Ghatanathoah 31 October 2012 09:12:44PM 0 points

I think it depends on the degree of the change. If the change is very lopsided (e.g., -100 freedom, +1 fairness) I think we'd both choose B.

If we assume that the degree of change is about the same (e.g., +1 fairness, -1 freedom), it would depend on how much freedom and fairness already exist. If the system is very fair, but very unfree, we'd both choose B; but if it's very free and very unfair, we'd both choose A.
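
To illustrate why the existing levels matter, suppose, purely for the sake of the example, that the moral value of each concept has diminishing returns (logarithmic here; nothing hangs on that particular choice):

    import math

    # Toy model: diminishing returns in each value, so a unit of the scarce
    # value is worth more than a unit of the abundant one.
    def value(freedom: float, fairness: float) -> float:
        return math.log(freedom) + math.log(fairness)

    # Is choice A (+1 fairness, -1 freedom) better than choice B (no change)?
    def prefers_A(freedom: float, fairness: float) -> bool:
        return value(freedom - 1, fairness + 1) > value(freedom, fairness)

    print(prefers_A(freedom=100, fairness=2))  # True: very free, very unfair
    print(prefers_A(freedom=2, fairness=100))  # False: very fair, very unfree
    print(prefers_A(freedom=50, fairness=50))  # False, but only barely: the knife-edge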

However, if we are to assume that the gain in fairness and the loss in freedom are of approximately equivalent size and the current system has fairly large amounts of both freedom and fairness (which I think is what you meant) then it might be possible that we'd have a disagreement that couldn't be resolved with pure reasoning.

This is called moral pluralism, the idea that there might be multiple moral values (such as freedom, fairness, and happiness) which are objectively correct, imperfectly commensurable with each other, and can be combined in different proportions that are of approximately equivalent objective moral value. If this is the case then your preference for one set of proportions over the other might be determined by arbitrary factors of your personality.

This is not the same as moral relativism, as these moral values are all objectively good, and any society that severely lacks one of them is objectively bad. It's just that there are certain combinations with different proportions of values that might be both "equally good," and personal preferences might be the "tiebreaker." To put it in more concrete terms, a social democracy with low economic regulation and a small welfare state might be "just as good" as a social democracy with slightly higher economic regulation and a slightly larger welfare state, and people might honestly and irresolvably disagree over which one is better. However, both of those societies would definitely be objectively better than Cambodia under the Khmer Rouge, and any rational, fully informed person who cares about morality would be able to see that.

Of course, if we are both highly rational and moral, and disagreed about A vs. B, we'd both agree that fighting over them excessively would be morally worse than choosing either of them, and find some way to resolve our disagreement, even if it meant flipping a coin.

Comment author: TheOtherDave 31 October 2012 10:14:48PM 0 points

I agree with you that in sufficiently extreme cases, we would both make the same choice. Call that set of cases S1.

I think you're saying that if the case is not that extreme, we might not make the same choice, even though we both care equally about the thing you're using "morality" to refer to. I agree with that as well. Call that set of cases S2.

I also agree that even in S2, there's a vast class of options that we'd both agree are worse than either of our choices (as you illustrate with the Khmer Rouge), and a vast class of options that we'd both agree are better than either of our choices, supposing that we are as you suggest rational informed people who care about the thing you're using "morality" to refer to.

If I'm understanding you, you're saying in S2 we are making different decisions, but our decisions are equally good. Further, you're saying that we might not know that our decisions are equally good. I might make choice A and think choice B is wrong, and you might make choice B and think choice A is wrong. Being rational and well-informed people we'd agree that both A and B are better than the Khmer Rouge, and we might even agree that they're both better than fighting over which one to adopt, but it might still remain true that I think B is wrong and you think A is wrong, even though neither of us thinks the other choice is as wrong as the Khmer Rouge, or fighting about it, or setting fire to the building, or various other wrong things we might choose to evaluate.

Have I followed your position so far?

Comment author: Ghatanathoah 01 November 2012 04:23:51AM -1 points

Yes, I think so.

Comment author: TheOtherDave 01 November 2012 05:16:04AM -1 points

OK, good.

It follows that if a choice can go one of three ways (c1, c2, c3), and if I think c1 > c2 > c3 and therefore endorse c1, and if you think c2 > c1 > c3 and therefore endorse c2, and if we're both rational informed people who are in possession of the same set of facts about that choice and its consequences, and if we each think that the other is wrong to endorse the choice they endorse (while still agreeing that it's better than c3), then there are (at least) two possibilities.

One possibility is that c1 and c2 are, objectively, equally good choices, but we each think the other is wrong anyway. In this case we both care about morality, even though we disagree about right action.

Another possibility is that c1 and c2 are, objectively, not equally good. For example, perhaps c1 is objectively bad, violates morality, and I endorse it only because I don't actually care about morality. Of course, in this case I may use the label "morality" to describe what I care about, but that's at best confusing and at worst actively deceptive, because what I really care about isn't morality at all, but some other thing, like prime-numbered heaps or whatever.

Yes?

So, given that, I think my question is: how might I go about figuring out which possibility is the case?

Comment author: Ghatanathoah 01 November 2012 06:04:40AM -1 points

One possibility is that c1 and c2 are, objectively, equally good choices, but we each think the other is wrong anyway.

I'd say it's misleading to say we thought the other person was "wrong," since in this context "wrong" is a word usually used to describe a situation where someone is in objective moral error. It might be better to say: "c1 and c2 are, objectively, equally morally good, but we each prefer a different one for arbitrary, non-moral reasons."

This doesn't change your argument in any way, I just think it's good to have the language clear to avoid accidentally letting in any connotations that don't belong.

So, given that, I think my question is: how might I go about figuring out which possibility is the case?

This is not something I have done a lot of thinking on, since the odds of ever encountering such a situation are quite low at present. It seems to me, however, that if you are this fair to your opponent, and care this much about finding out the honest truth, then you probably care at least somewhat about morality.

(This brings up an interesting question, which is: might there be some "semi-sociopathic" humans who care about morality, but incrementally, not categorically? That is, if one of these people was rational, fully informed, lacking in self-deception, and lacked akrasia, they would devote maybe 70% of their time and effort to morality and 30% to other things? Such a person, if compelled to be honest, might admit that c2 is morally worse than c1, but they don't care because they've used up their 70% of moral effort for the day. It doesn't seem totally implausible that such people might exist, but maybe I'm missing something about how moral psychology works; maybe it doesn't work unless it's all or nothing.)

As for determining whether your opponent cares about morality, you might look to see if they exhibit any of the signs of sociopathy. You might search their arguments for signs of anti-epistemology, or for plain moral errors. If you don't notice any of these things, you might assign a higher probability to the prediction that your disagreement is due to preferring different forms of pluralism.

Of course, in real life, perfectly informed, rational humans who lack self-deception, akrasia, and so on do not exist. So you should probably assign a much, much, much higher probability to one of those things causing your disagreement.

Comment author: CCC 01 November 2012 07:45:59AM 0 points

This brings up an interesting question, which is: might there be some "semi-sociopathic" humans who care about morality, but incrementally, not categorically?

It seems very likely that a person who cares a certain amount about morality, and a certain amount about money, would be willing to compromise his morality if given sufficient money. Such a mental model would form the basis of bribery. (It doesn't have to be money, either, but the principle remains the same).

So a semi-sociopathic person would be anyone who could be bribed into completely disregarding morality.

Comment author: TheOtherDave 01 November 2012 04:13:25PM 0 points

a semi-sociopathic person would be anyone who could be bribed into completely disregarding morality.

On this account, we could presumably also categorize a semi-semi-sociopathic person as one who could be bribed into partially disregarding the thing we're labeling "morality". And of course bribes needn't be money... people can be bribed by all kinds of things. Social status. Sex. Pleasant experiences. The promise of any or all of those things in the future.

Which is to say, we could categorize a semi-semi-sociopath as someone who cares about some stuff, and makes choices consistent with maximizing the stuff they care about, where some of that stuff is what we're labeling "morality" and some of it isn't.

We could also replace the term "semi-semi-sociopath" with the easier to pronounce and roughly equivalent term "person".

It's also worth noting that there probably exists stuff that we would label "morality" in one context and "bribe" in another, were we inclined to use such labels.

Comment author: TheOtherDave 01 November 2012 03:51:39PM -1 points

It might be better to say: "c1 and c2 are, objectively, equally morally good, but we each prefer a different one for arbitrary, non-moral reasons."

OK. In which case I can also phrase my question as, when I choose c1 over c2, how can I tell whether I'm making that choice for objective moral reasons, as opposed to making that choice for arbitrary non-moral reasons?

You're right that it doesn't really change the argument, I'm just trying to establish some common language so we can communicate clearly.

For my own part, I agree with you that ignorance and akrasia are major influences, and I also believe that what you describe as "incremental caring about morality" is pretty common (though I would describe it as individual values differing).

Comment author: Ghatanathoah 01 November 2012 07:20:00PM -1 points

Wikipedia's page on internalism and externalism calls an entity that understands moral arguments, but is not motivated by them, an "amoralist." We could say that a person who cares about morality incrementally has individual values that are part moralist and part amoralist.

It's hard to tell how many people are like this due to the confounding factors of irrationality and akrasia. But I think it's possible that there are some people who, if their irrationality and akrasia were cured, would not act perfectly moral. These people would say "I know that the world would be a better place if I acted differently, but I only care about the world to a limited extent."

However, considering that these people would be rational and lack akrasia, they would still probably do more moral good than the average person does today.