CuSithBell comments on What is Metaethics? - Less Wrong

31 Post author: lukeprog 25 April 2011 04:53PM


Comments (550)


Comment author: Eugine_Nier 28 April 2011 04:11:26AM 3 points [-]

My point is that you can't conclude that the notion of morality is incoherent simply because we don't yet have a sufficiently concrete definition.

Comment author: CuSithBell 28 April 2011 04:15:10AM *  5 points [-]

Technically, yes. But I'm pretty much obliged, based on the current evidence, to conclude that it's likely to be incoherent.

More to the point: why do you think it's likely to be coherent?

Comment author: Eugine_Nier 28 April 2011 04:31:24AM *  5 points [-]

Mostly by outside view analogy with the history of the development of science. I've read a number of ancient Greek and Roman philosophers (along with a few post-modernists) arguing against the possibility of a coherent theory of physics using arguments very similar to the ones people are using against morality.

I've also read a (much larger) number of philosophers trying to shoehorn what we today call science into using the only meta-theory then available in a semi-coherent state: the meta-theory of mathematics. Thus we see philosophers, Descartes being the most famous, trying and failing to study science by starting with a set of intuitively obvious axioms and attempting to derive physical statements from them.

I think people may be making the same mistake by trying to force morality to use the same meta-theory as science, i.e., asking what experiences moral facts anticipate.

As for how likely it is: I'm not sure, I just think it's more likely than a lot of people on this thread assume.

Comment author: JGWeissman 28 April 2011 04:47:17AM 2 points [-]

I think people may be making the same mistake by trying to force morality to use the same meta-theory as science, i.e., asking what experiences moral facts anticipate.

If that is true, what virtue do moral facts have which is analogous to physical facts anticipating experience, and mathematical facts being formally provable?

Comment author: Eugine_Nier 28 April 2011 04:48:43AM 1 point [-]

If that is true, what virtue do moral facts have which is analogous to physical facts anticipating experience, and mathematical facts being formally provable?

If I knew the answer we wouldn't be having this discussion.

Comment author: CuSithBell 28 April 2011 03:45:16PM 3 points [-]

To be clear - you are talking about morality as something externally existing, some 'facts' that exist in the world and dictate what you should do, as opposed to a human system of "don't be a jerk". Is that an accurate portrayal?

If that is the case, there are two big questions that immediately come to mind (beyond "what are these facts" and "where did they come from") - first, it seems that Moral Facts would have to interact with the world in some way in order for the study of big-M Morality to be useful at all (otherwise we could never learn what they are), or they would have to be somehow deducible from first principles. Are you supposing that they somehow directly induce intuitions in people (though, not all people? so, people with certain biological characteristics?)? (By (possibly humorous, though not mocking!) analogy, suppose the Moral Facts were being broadcast by radio towers on the moon, in which case they would be inaccessible until the invention of radio. The first radio is turned on and all signals are drowned out by "DON'T BE A JERK. THIS MESSAGE WILL REPEAT. DON'T BE A JERK. THIS MESSAGE WILL...".)

The other question is, once we have ascertained that there are Moral Facts, what property makes them what we should do? For instance, suppose that all protons were inscribed in tiny calligraphy in, say, French, "La dernière personne qui est vivant, gagne." ("The last person who is alive, wins" - apologies for Google Translate) Beyond being really freaky, what would give that commandment force to convince you to follow it? What could it even mean for something to be inherently what you should do?

It seems, ultimately, you have to ask "why" you should do "what you should do". Common answers include that you should do "what God commands" because "that's inherently What You Should Do, it is By Definition Good and Right". Or, "don't be a jerk" because "I'll stop hanging out with you". Or, "what makes you happy and fulfilled, including the part of you that desires to be kind and generous" because "the subjective experience of sentient beings are the only things we've actually observed to be Good or Bad so far".

So, where do we stand now?

Comment author: Eugine_Nier 29 April 2011 01:26:46AM *  1 point [-]

as opposed to a human system of "don't be a jerk".

Now we're getting somewhere. What do you mean by the word "jerk", and why is it any more meaningful than words like "moral"/"right"/"wrong"?

Comment author: CuSithBell 29 April 2011 01:30:45AM 1 point [-]

The distinction I am trying to make is between Moral Facts Engraved Into The Foundation Of The Universe and A Bunch Of Words And Behaviors And Attitudes That People Have (as a result of evolution & thinking about stuff etc.). I'm not sure if I'm being clear, is this description easier to interpret?

Comment author: Eugine_Nier 29 April 2011 01:35:47AM *  2 points [-]

Near as I can tell, what you mean by "don't be a jerk" is one possible example of what I mean by morality.

Hope that helps.

Comment author: CuSithBell 29 April 2011 01:46:06AM 1 point [-]

Great! Then I think we agree on that.

Comment author: Amanojack 28 April 2011 04:54:35AM *  1 point [-]

Define your terms, then you get a fair hearing. If you are just saying the terms could maybe someday be defined, this really isn't the kind of thing that needs a response.

To put it in perspective, you are speculating that someday you will be able to define what the field you are talking about even is. And your best defense is that some people have made questionable arguments against this non-theory? Why should anyone care?

Comment author: Eugine_Nier 28 April 2011 05:22:32AM *  1 point [-]

After thinking about it a little I think I can phrase it this way.

I want to answer the question: "What should I do?"

It's kind of a pressing question since I need to do something (doing nothing counts as a choice and usually not a very good one).

If the people arguing that morality is just preference answer: "Do what you prefer", my next question is "What should I prefer?"

Comment author: wedrifid 28 April 2011 08:11:18AM *  0 points [-]

If the people arguing that morality is just preference answer: "Do what you prefer",

Including the word 'just' misses the point. Being about preference in no way makes it less important.

Comment author: [deleted] 28 April 2011 07:46:39AM 1 point [-]

my next question is "What should I prefer?"

Three definitions of "should":

used in auxiliary function to express obligation, propriety, or expediency

As for obligation - I doubt you are under any obligation other than to avoid the usual uncontroversially nasty behavior, along with any specific obligations you may have to specific people you know. You would know what those are much better than I would. I don't really see how an ordinary person could be all that puzzled about what his obligations are.

As for propriety - over and above your obligation to avoid uncontroversially nasty behavior, I doubt you have much trouble discovering what's socially acceptable (stuff like, not farting in an elevator), and anyway, it's not the end of the world if you offend somebody. Again, I don't really see how an ordinary person is going to have a problem.

As for expediency - I doubt you intended the question that way.

If this doesn't answer your question in full you probably need to explain the question. The utilitarians have this strange notion that morality is about maximizing global utility, so of course, morality in the way that they conceive it is a kind of life-encompassing total program of action, since every choice you make could either increase or decrease total utility. Maybe that's what you want answered, i.e., what's the best possible thing you could be doing.

But the "should" of obligation is not like this. We have certain obligations but these are fairly limited, and don't provide us with a life-encompassing program of action. And the "should" of propriety is not like this either. People just don't pay you any attention as long as you don't get in their face too much, so again, the direction you get from this quarter is limited.

Comment author: Peterdjones 28 April 2011 11:53:51AM 1 point [-]

As for obligation - I doubt you are under any obligation other than to avoid the usual uncontroversially nasty behavior, along with any specific obligations you may have to specific people you know. You would know what those are much better than I would. I don't really see how an ordinary person could be all that puzzled about what his obligations are.

You have collapsed several meanings of obligation together there. You may have explicit legal obligations to the state, IOU-style obligations to individuals who have done you a favour, and so on. But moral obligations go beyond all those. If you are living under a brutal dictatorship, there are conceivable circumstances where you morally should not obey the law. Etc, etc.

Comment author: [deleted] 28 April 2011 06:08:20AM 0 points [-]

This might have clarified for me what this dispute is about. At least I have a hypothesis, tell me if I'm on the wrong track.

Antirealists aren't arguing that you should go on a hedonic rampage -- we are allowed to keep on consulting our consciences to determine the answer to "what should I prefer." In a community of decent and mentally healthy people we should flourish. But the main upshot of the antirealist position is that you cannot convince people with radically different backgrounds that their preferences are immoral and should be changed, even in principle.

At least, antirealism gives some support to this cynical point of view, and it's this point of view that you are most interested in attacking. Am I right?

Comment author: Eugine_Nier 28 April 2011 06:20:48AM 0 points [-]

That's a large part of it.

The other problem is that anti-realists don't actually answer the question "what should I do?", they merely pass the buck to the part of my brain responsible for my preferences but don't give it any guidance on how to answer that question.

Comment author: Amanojack 01 May 2011 03:13:02PM *  -1 points [-]

If the people arguing that morality is just preference answer: "Do what you prefer", my next question is "What should I prefer?"

In order to accomplish what?

Should you prefer chocolate ice cream or vanilla? As far as ice cream flavors go, "What should I prefer" seems meaningless...unless you are looking for an answer like, "It's better to cultivate a preference for vanilla because it is slightly healthier" (you will thereby achieve better health than if you let yourself keep on preferring chocolate).

This gets into the time structure of experience. In other words, I would be interpreting your, "What should I prefer?" as, "What things should I learn to like (in order to get more enjoyment out of life)?" To bring it to a more traditionally moral issue, "Should I learn to like a vegetarian diet (in order to feel less guilt about killing animals)?"

Is that more or less the kind of question you want to answer?

Comment author: TimFreeman 28 April 2011 04:37:21AM 0 points [-]

Talk about morality and good and bad clearly has a role in social signaling. It is also true that people clearly have preferences that they act upon, imperfectly. I assume you agree with these two assertions; if not we need to have a "what color is the sky?" type of conversation.

If you do agree with them, what would you want from a meta-ethical theory that you don't already have?

Comment author: Eugine_Nier 28 April 2011 04:45:39AM *  2 points [-]

If you do agree with them, what would you want from a meta-ethical theory that you don't already have?

Something more objective/universal.

Edit: a more serious issue is that, just as equating facts with opinions tells you nothing about what opinions you should hold, equating morality and preference tells you nothing about what you should prefer.

Comment author: TimFreeman 02 May 2011 05:41:58PM 5 points [-]

So we seem to agree that you (and Peterdjones) are looking for an objective basis for saying what you should prefer, much as rationality is a basis for saying what beliefs you should hold.

I can see a motive for changing one's beliefs, since false beliefs will often fail to support the activity of enacting one's preferences. I can't see a motive for changing one's preferences - obviously one would prefer not to do that. If you found an objective basis for saying what you should prefer, and it said you should prefer something different from what you actually do prefer, what would you do?

If you live in a social milieu where people demand that you justify your preferences, I can see something resembling morality coming out of those justifications. Is that your situation? I'd rather select a different social milieu, myself.

Comment author: handoflixue 06 May 2011 08:02:44PM *  2 points [-]

I recently got a raise. This freed up my finances to start doing SCUBA diving. SCUBA diving benefits heavily from me being in shape.

I now have a strong preference for losing weight, and reinforced my preference for exercise, because the gains from both activities went up significantly. This also resulted in having a much lower preference for certain types of food, as they're contrary to these new preferences.

I'd think that's a pretty concrete example of changing my preferences, unless we're using different definitions of "preference."

Comment author: TimFreeman 06 May 2011 08:40:23PM 1 point [-]

I suppose we are using different definitions of "preference". I'm using it as a friendly term for a person's utility function, if they seem to be optimizing for something, or we say they have no preference if their behavior can't be understood that way. For example, what you're calling food preferences are what I'd call a strategy or a plan, rather than a preference, since the end is to support the SCUBA diving. If the consequences of eating different types of food magically changed, your diet would probably change so it still supported the SCUBA diving.

Comment author: handoflixue 06 May 2011 09:49:03PM 2 points [-]

Ahh, I re-read the thread with this understanding, and was struck by this:

I like using the word "preference" to include all the things that drive a person, so I'd prefer to say that your preference has two parts

It seems to me that the simplest way to handle this is to assume that people have multiple utility functions.

Certain utility functions therefore obviously benefit from damaging or eliminating others. If I reduce my akrasia, my rationality, truth, and happiness values are probably all going to go up. My urge to procrastinate would likewise like to eliminate my guilt and responsibility.

Presumably anyone who wants a metaethical theory has a preference that would be maximized by discovering and obeying that theory. This would still be weighted against their existing other preferences, same as my preference for rationality has yet to eliminate akrasia or procrastination from my life :)

Does that make sense as a "motivation for wanting to change your preferences"?

Comment author: TimFreeman 06 May 2011 10:35:50PM *  2 points [-]

I agree that akrasia is a bad thing that we should get rid of. I like to think of it as a failure to have purposeful action, rather than a preference.

My dancing around here has a purpose. You see, I have this FAI specification that purports to infer everyone's preference and take as its utility function giving everyone some weighted average of what they prefer. If it infers that my akrasia is part of my preferences, I'm screwed, so we need a distinction there. Check http://www.fungible.com. It has a lot of bugs that are not described there, so don't go implementing it. Please.

In general, if the FAI is going to give "your preference" to you, your preference had better be something stable about you that you'll still want when you get it.

If there's no fix for akrasia, then it's hard to say in what sense I want to do something worthwhile but am stopped by akrasia; it makes as much sense to assume I'm spewing BS about stuff that sounds nice to do, but I really don't want to do it. I certainly would want an akrasia fix if it were available. Maybe that's the important preference.

Comment author: TimFreeman 06 May 2011 11:23:19PM 0 points [-]

It seems to me that the simplest way to handle this is to assume that people have multiple utility functions.

Certain utility functions therefore obviously benefit from damaging or eliminating others. If I reduce my akrasia, my rationality, truth, and happiness values are probably all going to go up. My urge to procrastinate would likewise like to eliminate my guilt and responsibility.

At the end of the day, you're going to prefer one action over another. It might make sense to model someone as having multiple utility functions, but you also have to say that they all get added up (or combined some other way), so you can figure out which immediate outcome has the best expected long-term utility and predict that the person is going to take an action that gets them there.
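The combination step described above can be sketched in code. Everything below -- the sub-utility names, the weights, the toy outcomes -- is invented for illustration; it is just the simplest weighted-sum reading of "they all get added up", not anyone's actual model of a person.

```python
def combined_utility(outcome, sub_utilities, weights):
    """Merge several utility functions into one by a weighted sum."""
    return sum(w * u(outcome) for u, w in zip(sub_utilities, weights))

def predicted_action(actions, outcome_of, sub_utilities, weights):
    """Predict the action whose outcome scores highest under the merged utility."""
    return max(actions, key=lambda a: combined_utility(outcome_of(a), sub_utilities, weights))

# Toy example: a "truth-seeking" drive and a "procrastination" drive,
# each scoring the same two outcomes differently.
truth = {"study": 1.0, "browse": 0.1}
laziness = {"study": 0.2, "browse": 0.9}
subs = [truth.get, laziness.get]

# With the truth-seeking drive weighted heavily, "study" wins;
# flip the weights toward laziness and "browse" wins instead.
print(predicted_action(["study", "browse"], lambda a: a, subs, [0.8, 0.2]))  # study
print(predicted_action(["study", "browse"], lambda a: a, subs, [0.1, 0.9]))  # browse
```

Which action is predicted depends entirely on the weights, which is one way of restating the point: the multiple-utility-function picture does no predictive work until you also specify how the functions are combined.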

Comment author: Eugine_Nier 02 May 2011 06:49:48PM 2 points [-]

Let's try a different approach.

I have spent some time thinking about how to apply the ideas of Eliezer's metaethics sequence to concrete ethical dilemmas. One problem that quickly comes up is that as PhilGoetz points out here, the distinction between preferences and biases is very arbitrary.

So the question becomes how do you separate which of your intuitions are preferences and which are biases?

Comment author: TimFreeman 04 May 2011 03:15:20AM 0 points [-]

[H]ow do you separate which of your intuitions are preferences and which are biases?

Well, valid preferences look like they're derived from a utility function that says how much I prefer different possible future world-states, and uncertainty about the future should interact with the utility function in the proper way. Biases are everything else.

I don't see how that question is relevant. I don't see any good reason for you to dodge my question about what you'd do if your preferences contradicted your morality. It's not like it's an unusual situation -- consider the internal conflicts of a homosexual evangelical preacher, for example.

Comment author: Peterdjones 04 May 2011 01:09:07PM 1 point [-]

What makes your utility function valid? If that is just preferences, then presumably it is going to work circularly and just confirm your current preferences. If it works to iron out inconsistencies, or to replace short-term preferences with long-term ones, that would seem to be the sort of thing that could fairly be described as reasoning.

Comment author: TimFreeman 04 May 2011 02:06:51PM 1 point [-]

What makes your utility function valid?

I don't judge it as valid or invalid. The utility function is a description of me, so the description either compresses observations of my behavior better than an alternative description, or it doesn't. It's true that some preferences lead to making more babies or living longer than other preferences, and one may use evolutionary psychology to guess what my preferences are likely to be, but that is just a less reliable way of guessing my preferences than from direct observation, not a way to judge them as valid or invalid.

If it works to iron out inconsistencies, or replace short term preferences with long term ones, that would seem to be the sort of thing that could be fairly described as reasoning.

A utility function that assigns utility to long-term outcomes rather than short-term outcomes might lead to better survival or baby-making, but it isn't more or less valid than one that cares about the short term. (Actually, if you only care about things that are too far away for you to effectively plan, you're in trouble, so long-term preferences can promote survival less than shorter-term ones, depending on the circumstances.)

This issue is confused by the fact that a good explanation of my behavior requires simultaneously guessing my preferences and my beliefs. The preference might say I want to go to the grocery store, and I might have a false belief about where it is, so I might go the wrong way and the fact that I went the wrong way isn't evidence that I don't want to go to the grocery store. That's a confusing issue and I'm hoping we can assume for the purposes of discussion about morality that the people we're talking about have true beliefs.

Comment author: Eugine_Nier 04 May 2011 04:53:51AM 1 point [-]

Well, valid preferences look like they're derived from a utility function that says how much I prefer different possible future world-states, and uncertainty about the future should interact with the utility function in the proper way.

Um, no. Unless you are some kind of mutant who doesn't suffer from scope insensitivity or any of the related biases, your uncertainty about the future doesn't interact with your preferences in the proper way until you attempt to coherently extrapolate them. It is here that the distinction between a bias and a valid preference becomes both important and very arbitrary.

Here is the example PhilGoetz gives in the article I linked above:

In Crime and punishment, I argued that people want to punish criminals, even if there is a painless, less-costly way to prevent crime. This means that people value punishing criminals. This value may have evolved to accomplish the social goal of reducing crime. Most readers agreed that, since we can deduce this underlying reason, and accomplish it more effectively through reasoning, preferring to punish criminals is an error in judgement.

Most people want to have sex. This value evolved to accomplish the goal of reproducing. Since we can deduce this underlying reason, and accomplish it more efficiently than by going out to bars every evening for ten years, is this desire for sex an error in judgement that we should erase?

I believe I answered your other question elsewhere in the thread.

Comment author: wedrifid 04 May 2011 05:04:10AM 0 points [-]

and uncertainty about the future should interact with the utility function in the proper way.

"The proper way" being built in as a part of the utility function and not (necessarily) being a simple sum of the multiplication of world-state values by their probability.

Comment author: Peterdjones 03 May 2011 10:56:48PM 1 point [-]

I can see a motive for changing one's beliefs, since false beliefs will often fail to support the activity of enacting one's preferences. I can't see a motive for changing one's preferences

There isn't an instrumental motive for changing one's preferences. That doesn't add up to "never change your preferences" unless you assume that instrumentality -- "does it help me achieve anything?" -- is the ultimate way of evaluating things. But it isn't: morality is. It is morally wrong to design better gas chambers.

Comment author: TimFreeman 04 May 2011 02:45:45AM *  1 point [-]

The interesting question is still the one you didn't answer yet:

If you found an objective basis for saying what you should prefer, and it said you should prefer something different from what you actually do prefer, what would you do?

I only see two possible answers, and only one of those seems likely to come from you (Peter) or Eugene.

The unlikely answer is "I wouldn't do anything different". Then I'd reply "So, morality makes no practical difference to your behavior?", and then your position that morality is an important concept collapses in a fairly uninteresting way. Your position so far seems to have enough consistency that I would not expect the conversation to go that way.

The likely answer is "If I'm willpower-depleted, I'd do the immoral thing I prefer, but on a good day I'd have enough willpower and I'd do the moral thing. I prefer to have enough willpower to do the moral thing in general." In that case, I would have to admit that I'm in the same situation, except with a vocabulary change. I define "preference" to include everything that drives a person's behavior, if we assume that they aren't suffering from false beliefs, poor planning, or purposeless behavior (like a seizure, for example). So if your behavior is controlled by a combination of preference and morality, then what I'm calling "preference" is the same as what you're calling "preference and morality". I am in the same situation in that when I'm willpower-depleted I do a poor job of acting upon consistent preferences (using my definition of the word), I do better when I have more willpower, and I want to have more willpower in general.

If I guessed your answer wrong, please correct me. Otherwise I'd want to fix the vocabulary problem somehow. I like using the word "preference" to include all the things that drive a person, so I'd prefer to say that your preference has two parts, perhaps an "amoral preference" which would mean what you were calling "preference" before, and "moral preference" would include what you were calling "morality" before, but perhaps we'd choose different words if you objected to those. The next question would be:

Okay, you're making a distinction between amoral preference and moral preference. This distinction is obviously important to you. What makes it important?

...and I have no clue what your answer would be, so I can't continue the conversation past that point without straightforward answers from you.

Comment author: Eugine_Nier 04 May 2011 04:43:08AM 1 point [-]

If you found an objective basis for saying what you should prefer, and it said you should prefer something different from what you actually do prefer, what would you do?

Follow morality.

Okay, you're making a distinction between amoral preference and moral preference. This distinction is obviously important to you. What makes it important?

One way to illustrate this distinction is using Eliezer's "murder pill". If you were offered a pill that would reverse and/or eliminate a preference would you take it (possibly the offer includes paying you)? If the preference is something like preferring vanilla to chocolate ice cream, the answer is probably yes. If the preference is for people not to be murdered the answer is probably no.

One of the reasons this distinction is important is that because of the way human brains are designed, thinking about your preferences can cause them to change. Furthermore, this phenomenon is more likely to occur with high-level moral preferences than with low-level amoral preferences.

Comment author: TimFreeman 04 May 2011 04:44:16PM 0 points [-]

One of the reasons this distinction is important is that because of the way human brains are designed, thinking about your preferences can cause them to change. Furthermore, this phenomenon is more likely to occur with high-level moral preferences than with low-level amoral preferences.

If that's a definition of morality, then morality is a subset of psychology, which probably isn't what you wanted.

Now if the thoughts people had about moral preferences that make them change were actually empirically meaningful and consistent with observation, rather than verbal manipulation consisting of undefinable terms that can't be nailed down even with multiple days of Q&A, that would be worthwhile and not just a statement about psychology. But if we had such statements to make about morality, we would have been making them all this time and there would be clarity about what we're talking about, which hasn't happened.

Comment author: Peterdjones 04 May 2011 12:17:42PM *  0 points [-]

The likely answer is "If I'm willpower-depleted, I'd do the immoral thing I prefer, but on a good day I'd have enough willpower and I'd do the moral thing. I prefer to have enough willpower to do the moral thing in general." In that case, I would have to admit that I'm in the same situation, except with a vocabulary change. I define "preference" to include everything that drives a person's behavior,

But preference itself is influenced by reasoning and experience. The Preference theory focuses on proximate causes, but there are more distal ones too.

if we assume that they aren't suffering from false beliefs, poor planning, or purposeless behavior (like a seizure, for example). So if your behavior is controlled by a combination of preference and morality, then what I'm calling "preference" is the same as what you're calling "preference and morality"

I am not and never was using "preference" to mean something disjoint from morality. If some preferences are moral preferences, then the whole issue of morality is not disposed of by only talking about preferences. That is not an argument for nihilism or relativism. Compare: you could have an epistemology where everything is talked about as belief, and the difference between true belief and false belief is ignored.

Okay, you're making a distinction between amoral preference and moral preference. This distinction is obviously important to you. What makes it important?

If by a straightforward answer you mean an answer framed in terms of some instrumental value that it fulfils, I can't do that. I can only continue to challenge the frame itself. Morality is already, in itself, the most important value. It isn't "made" important by some greater good.

Comment author: TimFreeman 04 May 2011 01:42:52PM *  0 points [-]

I am not and never was using "preference" to mean something disjoint from morality. If some preferences are moral preferences, then whole issue of morality is not disposed of by only talking about preferences.

There's a choice you're making here, differently from me, and I'd like to get clear on what that choice is and understand why we're making it differently.

I have a bunch of things I prefer. I'd rather eat strawberry ice cream than vanilla, and I'd rather not design higher-throughput gas chambers. For me those two preferences are similar in kind -- they're stuff I prefer and that's all there is to be said about it.

You might share my taste in ice cream and you said you share my taste in designing gas chambers. But for you, those two preferences are different in kind. The ice cream preference is not about morality, but designing gas chambers is immoral and that distinction is important for you.

I hope we all agree that the preference not to design high-throughput gas chambers is commonly and strongly held, and that it's even a consensus in the sense that I prefer that you prefer not to design high-throughput gas chambers. That's not what I'm talking about. What I'm talking about is the question of why the distinction is important to you. For example, I could define the preferences of mine that can be easily described without using the letter "s" to be "blort" preferences, and the others to be non-blort, and rant about how we all need to distinguish blort preferences from non-blort preferences, and you'd be left wondering "Why does he care?"

And the answer would be that there is no good reason for me to care about the distinction between blort and non-blort preferences. The distinction is completely useless. A given concept takes mental effort to use and discuss, so the decision to use or not use a concept is a pragmatic one: we use a concept if the mental effort of forming it and communicating about it is paid for by the improved clarity when we use it. The concept of blort preferences does not improve the clarity of our thoughts, so nobody uses it.

The decision to use the concept of "morality" is like any other decision to define and use a concept. We should use it if the cost of talking about it is paid for by the added clarity it brings. If we don't use the concept, that doesn't change whether anyone wants to build high-throughput gas chambers -- it just means that we don't have the tools to talk about the difference in kind between ice cream flavor preferences and gas chamber building preferences. If there's no use for such talk, then we should discard the concept, and if there is a use for such talk, we should keep the concept and try to assign a useful and clear meaning to it.

So what use is the concept of morality? How do people benefit from regarding ice cream flavor preferences as a different sort of thing from gas chamber building preferences?

Morality is already, in itself, the most important value.

I hope we're agreed that there are two different kinds of things here -- the strongly held preference to not design high-throughput gas chambers is a different kind of thing from the decision to label that preference as a moral one. The former influences the options available to a well-organized mass murderer, and the latter determines the structure of conversations like this one. The former is a value, the latter is a choice about how words label things. I claim that if we understand what is going on, we'll all prefer to make the latter choice pragmatically.

Comment deleted 04 May 2011 01:09:38AM [-]
Comment deleted 04 May 2011 01:20:01AM *  [-]
Comment author: TimFreeman 04 May 2011 01:25:32AM *  0 points [-]

Agreed, so I deleted my post to avoid wasting Peter's time responding.

Comment author: Peterdjones 28 April 2011 12:18:04PM -1 points [-]

Rationality is the equivalent of normative morality: it is a set of guidelines for arriving at the opinions you should have, namely true ones. Epistemology is the equivalent of metaethics. It strives to answer the question "what is truth?"