Peterdjones comments on Conceptual Analysis and Moral Theory - Less Wrong

60 Post author: lukeprog 16 May 2011 06:28AM


Comment author: Peterdjones 23 May 2011 03:46:16PM 0 points [-]

What they generally mean is "not subjective". You might object that non-subjective value is contradictory, but that is not the same as objecting that it is incomprehensible, since one has to understand the meanings of individual terms to see a contradiction.

As for anticipations: believing morality is objective entails that some of your beliefs may be wrong by objective standards, and believing it is subjective does not entail that. So the belief in moral objectivity could lead to a revision of your aims and goals, which will in turn lead to different experiences.

Comment author: Amanojack 23 May 2011 04:26:32PM 0 points [-]

I'm not saying non-subjective value is contradictory, just that I don't know what it could mean. To me "value" is a verb, and the noun form is just a nominalization of the verb, like the noun "taste" is a nominalization of the verb "taste." Ayn Rand tried to say there was such a thing as objectively good taste, even of foods, music, etc. I didn't understand what she meant either.

As for anticipations: believing morality is objective entails that some of your beliefs may be wrong by objective standards, and believing it is subjective does not entail that. So the belief in moral objectivity could lead to a revision of your aims and goals, which will in turn lead to different experiences.

But before I would even want to revise my aims and goals, I'd have to anticipate something different than I do now. What does "some of your beliefs may be wrong by objective standards" make me anticipate that would motivate me to change my goals? (This is the same as the question in the other comment: What penalty do I suffer by having the "wrong" moral sentiments?)

Comment author: Peterdjones 23 May 2011 04:42:31PM *  0 points [-]

"value" is a verb, and the noun form is just a nominalization of the verb,

I don't see the force to that argument. "Believe" is a verb and "belief" is a nominalisation. But beliefs can be objectively right or wrong -- if they belong to the appropriate subject area.

Ayn Rand tried to say there was such a thing as objectively good taste, even of foods, music,

It is possible for aesthetics (and various other things) to be un-objectifiable whilst morality (and various other things) is objectifiable.

But before I would even want to revise my aims and goals, I'd have to anticipate something different than I do now.

Why?

What does "some of your beliefs may be wrong by objective standards" make me anticipate that would motivate me to change my goals?

You should be motivated by a desire to get things right in general. The anticipation thing is just a part of that. It's not an ultimate. But morality is an ultimate because there is no more important value than a moral value.

(This is the same as the question in the other comment: What penalty do I suffer by having the "wrong" moral sentiments?)

If there is no personal gain from morality, that doesn't mean you shouldn't be moral. You should be moral by the definition of "moral" and "should". It's an analytical truth. It is for selfishness to justify itself in the face of morality, not vice versa.

Comment author: Amanojack 23 May 2011 06:06:04PM 0 points [-]

First of all, I should disclose that I don't ultimately find any kind of objectivism coherent, including "objective reality." It is useful to talk about objective reality and objectively right or wrong beliefs most of the time, but when you really drill down there are only beliefs that predict my experience more reliably or less reliably. In the end, nothing else matters to me (nor, I expect, to anyone else - if they understand what I'm getting at here).

You should be motivated by a desire to get things right in general. The anticipation thing is just a part of that. It's not an ultimate

So you disagree with EY about making beliefs pay rent? Like, maybe some beliefs don't pay rent but are still important? I just don't see how that makes sense.

You should be moral by the definition of "moral" and "should".

This seems circular.

If there is no personal gain from morality, that doesn't mean you shouldn't be moral.

What if I say, "So what?"

Comment author: Peterdjones 24 May 2011 03:05:43PM 0 points [-]

First of all, I should disclose that I don't ultimately find any kind of objectivism coherent, including "objective reality." It is useful to talk about objective reality and objectively right or wrong beliefs most of the time, but when you really drill down there are only beliefs that predict my experience more reliably or less reliably

How do you know that?

So you disagree with EY about making beliefs pay rent?

If disagreeing means it is good to entertain useless beliefs, then no. If disagreeing means that instrumental utility is not the ultimate value, then yes.

You should be moral by the definition of "moral" and "should".

This seems circular.

You say that like that's a bad thing. I said it was analytical and analytical truths would be expected to sound tautologous or circular.

If there is no personal gain from morality, that doesn't mean you shouldn't be moral.

What if I say, "So what?"

So it's still true. Not caring is not refutation.

Comment author: Amanojack 25 May 2011 08:59:26AM 0 points [-]

It is useful to talk about objective reality and objectively right or wrong beliefs most of the time, but when you really drill down there are only beliefs that predict my experience more reliably or less reliably

How do you know that?

Why do I think that is a useful phrasing? That would be a long post, but EY got the essential idea in Making Beliefs Pay Rent.

If disagreeing mean it is good to entertain useless beliefs, then no. If disagreeing means that instrumental utility is not the ultimate value , then yes.

Well, what use is your belief in "objective value"?

So it's still true. Not caring is not refutation.

Ultimately, that is to say at a deep level of analysis, I am non-cognitive to words like "true" and "refute." I would substitute "useful" and "show people why it is not useful," respectively.

Comment author: Peterdjones 25 May 2011 01:26:46PM *  0 points [-]

Why do I think that is a useful phrasing? That would be a long post, but EY got the essential idea in Making Beliefs Pay Rent.

I meant the second part: "but when you really drill down there are only beliefs that predict my experience more reliably or less reliably." How do you know that?

Well, what use is your belief in "objective value"?

What objective value are your instrumental beliefs? You keep assuming useful-to-me is the ultimate value and it isn't: Morality is, by definition.

Ultimately, that is to say at a deep level of analysis, I am non-cognitive to words like "true" and "refute."

Then I have a bridge to sell you.

I would substitute "useful" and "show people why it is not useful," respectively.

And would it be true that it is non-useful? Since to assert P is to assert "P is true", truth is a rather hard thing to eliminate. One would have to adopt the silence of Diogenes.

Comment author: Amanojack 25 May 2011 06:50:19PM *  -1 points [-]

Why do I think that is a useful phrasing? That would be a long post, but EY got the essential idea in Making Beliefs Pay Rent.

I meant the second part: "but when you really drill down there are only beliefs that predict my experience more reliably or less reliably." How do you know that?

That's what I was responding to.

What objective value are your instrumental beliefs? You keep assuming useful-to-me is the ultimate value and it isn't: Morality is, by definition.

<Zorg from planet Mnnmnedr interrupts this discussion>

Zorg: And what pan-galactic value are your objective values? Pan-galactic value is the ultimate value, dontcha know.

And would it be true that it is non-useful? Since to assert P is to assert "P is true", truth is a rather hard thing to eliminate.

You just eliminated it: If to assert P is to assert "P is true," then to assert "P is true" is to assert P. We could go back and forth like this for hours.

But you still haven't defined objective value.

Dictionary says, "Not influenced by personal feelings, interpretations, or prejudice; based on facts; unbiased."

How can a value be objective? EDIT: Especially since a value is a personal feeling. If you are defining "value" differently, how?

Comment author: Peterdjones 25 May 2011 09:06:03PM *  0 points [-]

I meant the second part: "but when you really drill down there are only beliefs that predict my experience more reliably or less reliably." How do you know that?

That's what I was responding to.

It is not the case that all beliefs can do is predict experience based on existing preferences. Beliefs can also set and modify preferences. I have given that counterargument several times.

Zorg: And what pan-galactic value are your objective values? Pan-galactic value is the ultimate value, dontcha know.

I think moral values are ultimate because I can't think of a valid argument of the form "I should do <immoral thing> because <excuse>". Please give an example of a pan-galactic value that can be substituted for <excuse>.

You just eliminated it: If to assert P is to assert "P is true," then to assert "P is true" is to assert P. We could go back and forth like this for hours.

Yeah, but it still comes back to truth. If I tell you it will increase your happiness to hit yourself on the head with a hammer, your response is going to have to amount to "no, that's not true".

Dictionary says [of "objective"]: "Not influenced by personal feelings, interpretations, or prejudice; based on facts; unbiased."

How can a value be objective?

By being (relatively) uninfluenced by personal feelings, interpretations, or prejudice; based on facts; unbiased.

Especially since a value is a personal feeling.

You haven't remotely established that as an identity. It is true that some people some of the time arrive at values through feelings. Others arrive at them (or revise them) through facts and thinking.

If you are defining "value" differently, how?

"Values can be defined as broad preferences concerning appropriate courses of action or outcomes"

Comment author: Amanojack 25 May 2011 10:31:49PM 0 points [-]

I missed this:

If I tell you it will increase your happiness to hit yourself on the head with a hammer, your response is going to have to amount to "no, that's not true".

I'll just decide not to follow the advice, or I'll try it out and then after experiencing pain I will decide not to follow the advice again. I might tell you that, too, but I don't need to use the word "true" or any equivalent to do that. I can just say it didn't work.

Comment author: Amanojack 25 May 2011 09:33:48PM *  0 points [-]

It is not the case that all beliefs can do is predict experience based on existing preferences. Beliefs can also set and modify preferences.

I agree, if you mean things like, "If I now believe that she is really a he, I don't want to take 'her' home anymore."

I think moral values are ultimate because I can;t think of a valid argument of the form "I should do <immoral thing> because <excuse>".

Neither can I. I just don't draw the same conclusion. There's a difference between disagreeing with something and not knowing what it means, and I seriously do not know what you mean. I'm not sure why you would think it is veiled disagreement, seeing as lukeprog's whole post was making this very same point about incoherence. (But incoherence also only has meaning in the sense of "incoherent to me" or someone else, so it's not some kind of damning word. It simply means the message is not getting through to me. That could be your fault, my fault, or English's fault, and I don't really care which it is, but it would be preferable for something to actually make it across the inferential gap.)

EDIT: Oops, posted too soon.

"Values can be defined as broad preferences concerning appropriate courses of action or outcomes"

So basically you are saying that preferences can change because of facts/beliefs, right? And I agree with that. To give a more mundane example, if I learn Safeway doesn't carry egg nog and I want egg nog, I may no longer want to go to Safeway. If I learn that egg nog is bad for my health, I may no longer want egg nog. If I believe health doesn't matter because the Singularity is near, I may want egg nog again. If I believe that egg nog is actually made of human brains, I may not want it anymore.

At bottom, I act to get enjoyment and/or avoid pain, that is, to win. What actions I believe will bring me enjoyment will indeed vary depending on my beliefs. But it is always ultimately that winning/happiness/enjoyment/fun/deliciousness/pleasure that I am after, and no change in belief can change that. I could take short-term pain for long-term gain, but that would be because I feel better doing that than not.

But it seems to me that just because what I want can be influenced by what could be called objective or factual beliefs doesn't make my want for deliciousness "uninfluenced by personal feelings."

In summary, value/preferences can either be defined to include (1) only personal feelings (though they may be universal or semi-universal), or to also include (2) beliefs about what would or wouldn't lead to such personal feelings. I can see how you mean that 2 could be objective, and then would want to call them thus "objective values." But not for 1, because personal feelings are, well, personal.

If so, then it seems I am back to my initial response to lukeprog and the ensuing brief discussion. In short, if it is only the belief in objective facts that is wrong, then I wouldn't want to call that morality, but more just self-help, or just what the whole rest of LW is. It is not that someone could be wrong about their preferences/values in sense 1, only in sense 2.