Comment author: Peterdjones 25 May 2011 09:06:03PM *  0 points [-]

I meant the second part: "but when you really drill down there are only beliefs that predict my experience more reliably or less reliably" How do you know that?

That's what I was responding to.

It is not the case that all beliefs can do is predict experience based on existing preferences. Beliefs can also set and modify preferences. I have given that counterargument several times.

Zorg: And what pan-galactic value are your objective values? Pan-galactic value is the ultimate value, dontcha know.

I think moral values are ultimate because I can't think of a valid argument of the form "I should do <immoral thing> because <excuse>". Please give an example of a pan-galactic value that can be substituted for <excuse>.

You just eliminated it: If to assert P is to assert "P is true," then to assert "P is true" is to assert P. We could go back and forth like this for hours.

Yeah, but it still comes back to truth. If I tell you it will increase your happiness to hit yourself on the head with a hammer, your response is going to have to amount to "no, that's not true".

Dictionary says [objective]: "Not influenced by personal feelings, interpretations, or prejudice; based on facts; unbiased."

How can a value be objective?

By being (relatively) uninfluenced by personal feelings, interpretations, or prejudice; based on facts; unbiased.

Especially since a value is a personal feeling.

You haven't remotely established that as an identity. It is true that some people some of the time arrive at values through feelings. Others arrive at them (or revise them) through facts and thinking.

If you are defining "value" differently, how?

"Values can be defined as broad preferences concerning appropriate courses of action or outcomes"

Comment author: Amanojack 25 May 2011 09:33:48PM *  0 points [-]

It is not the case that all beliefs can do is predict experience based on existing preferences. Beliefs can also set and modify preferences.

I agree, if you mean things like, "If I now believe that she is really a he, I don't want to take 'her' home anymore."

I think moral values are ultimate because I can;t think of a valid argument of the form "I should do <immoral thing> because <excuse>".

Neither can I. I just don't draw the same conclusion. There's a difference between disagreeing with something and not knowing what it means, and I seriously do not know what you mean. I'm not sure why you would think it is veiled disagreement, seeing as lukeprog's whole post was making this very same point about incoherence. (But incoherence also only has meaning in the sense of "incoherent to me" or someone else, so it's not some kind of damning word. It simply means the message is not getting through to me. That could be your fault, my fault, or English's fault, and I don't really care which it is, but it would be preferable for something to actually make it across the inferential gap.)

EDIT: Oops, posted too soon.

"Values can be defined as broad preferences concerning appropriate courses of action or outcomes"

So basically you are saying that preferences can change because of facts/beliefs, right? And I agree with that. To give a more mundane example, if I learn Safeway doesn't carry egg nog and I want egg nog, I may no longer want to go to Safeway. If I learn that egg nog is bad for my health, I may no longer want egg nog. If I believe health doesn't matter because the Singularity is near, I may want egg nog again. If I believe that egg nog is actually made of human brains, I may not want it anymore.

At bottom, I act to get enjoyment and/or avoid pain, that is, to win. What actions I believe will bring me enjoyment will indeed vary depending on my beliefs. But it is always ultimately that winning/happiness/enjoyment/fun/deliciousness/pleasure that I am after, and no change in belief can change that. I could take short-term pain for long-term gain, but that would be because I feel better doing that than not.

But it seems to me that just because what I want can be influenced by what could be called objective or factual beliefs doesn't make my want for deliciousness "uninfluenced by personal feelings."

In summary, value/preferences can be defined to include either (1) only personal feelings (though they may be universal or semi-universal), or (2) also beliefs about what would or wouldn't lead to such personal feelings. I can see how you mean that (2) could be objective, and that you would then want to call them "objective values." But not (1), because personal feelings are, well, personal.

If so, then it seems I am back to my initial response to lukeprog and the ensuing brief discussion. In short, if it is only the belief in objective facts that can be wrong, then I wouldn't want to call that morality, but more just self-help, or just what the whole rest of LW is. It is not that someone could be wrong about their preferences/values in sense (1), only in sense (2).

Comment author: Peterdjones 25 May 2011 08:45:54PM 0 points [-]

Sure, people usually argue whether something is "true or false" because such status makes a difference (at least potentially) to their pain or pleasure, happiness, utility, etc.

So you say. I can think of two arguments against that: people acquire true beliefs that aren't immediately useful, and untrue beliefs can be pleasing.

Comment author: Amanojack 25 May 2011 09:19:16PM 0 points [-]

I never said they had to be "immediately useful" (hardly anything ever is). Untrue beliefs might be pleasing, but when people are arguing truth and falsehood it is not in order to prove that the beliefs they hold are untrue so that they can enjoy believing them, so it's not an objection either.

Comment author: ArisKatsaris 25 May 2011 07:52:01PM 1 point [-]

Not quite my point. I'm not talking about what your preferences would be. That would be subjective, personal. I'm talking about what everyone's meta-ethical preferences would be, if self-consistent, and abstracted enough.

My argument is essentially that objective morality can be considered the position in meta-ethical-space which if occupied by all agents would lead to the maximization of utility.

That makes it objectively (because it refers to all the agents, not some of them, or one of them) different from other points in meta-ethical-space, and so it can be considered to lead to an objectively better morality.

Comment author: Amanojack 25 May 2011 08:01:05PM 0 points [-]

Then why not just call it "universal morality"?

Comment author: [deleted] 25 May 2011 07:11:02PM 0 points [-]

There's no particular connection between morality and arithmetic that I'm aware of. I brought up arithmetic to illustrate a point. My hope was that arithmetic is less problematic, less apt to lead us down philosophical blind alleys, so that by using it to illustrate a point I wasn't opening up yet another can of worms.

In response to comment by [deleted] on Conceptual Analysis and Moral Theory
Comment author: Amanojack 25 May 2011 07:38:09PM 0 points [-]

Then you basically seem to be saying I should signal a certain morality if I want to get on well in society. Well I do agree.

Comment author: Peterdjones 25 May 2011 02:15:11PM 0 points [-]

The fact that you are not going to worry about morality, does not make morality a) false b) meaningless or c) subjective. Can I take it you are no longer arguing for any of claims a) b) or c) ?

I've never argued (a), I'm still arguing (actually just informing you) that the words "objective morality" are meaningless to me

You are not actually being all that informative, since there remains a distinct suspicion that when you say some X is meaningless-to-you, that is a proxy for I-don't-agree-with-it. I notice throughout these discussions that you never reference accepted dictionary definitions as a basis for meaningfulness, but instead always offer some kind of idiosyncratic personal testimony.

and I'm still arguing (c) but only in the sense that it is equivalent to (b): in other words, I can only await some argument that morality is objective. (But first I'd need a definition!)

What is wrong with dictionary definitions?

You have not succeeded in showing that winning is the most important thing.

I'm using the word winning as a synonym for "getting what I want," and I understand the most important thing to mean "what I care about most."

That doesn't affect anything. You still have no proof for the revised version.

And I mean "want" and "care about" in a way that makes it tautological. Keep in mind I want other people to be happy

Other people out there in the non-existent Objective World?

, not suffer, etc. Nothing either of us have argued so far indicates we would necessarily have different moral sentiments about anything.

I don't think moral anti-realists are generally immoral people. I do think it is an intellectual mistake, whether or not you care about that.

Comment author: Amanojack 25 May 2011 07:36:31PM -1 points [-]

You are not actually being all that informative, since there remains a distinct suspicion that when you say some X is meaningless-to-you, that is a proxy for I-don't-agree-with-it.

Zorg said the same thing about his pan-galactic ethics.

I notice throughout these discussions that you never reference accepted dictionary definitions as a basis for meaningfulness, but instead always offer some kind of idiosyncratic personal testimony.

Did you even read the post we're commenting on?

That doesn't affect anything. You still have no proof for the revised version.

Wait, you want proof that getting what I want is what I care about most?

Other people out there in the non-existent Objective World?

Read what I wrote again.

I don't think moral anti-realists are generally immoral people

Read.

Comment author: Peterdjones 25 May 2011 02:01:20PM *  0 points [-]

Solipsism is an ontological stance: in short, "there is nothing out there but my own mind." I am saying something slightly different: "To speak of there being something/nothing out there is meaningless to me unless I can see why to care." Then again, I'd say this is tautological/obvious in that "meaning" just is "why it matters to me."

Do you cross the road with your eyes shut? If not, you are assuming, like everyone else, that there are things out there which are terminally disutilitous.

My "position" (really a meta-position about philosophical positions) is just that language obscures what is going on.

Whose language? What language? If you think all language is a problem, what do you intend to replace it with?

I'm not a naturalist. I'm not skeptical of "objective" because of such reasons; I am skeptical of it merely because I don't know what the word refers to

It refers to the stuff that doesn't go away when you stop believing in it.

Comment author: Amanojack 25 May 2011 07:28:31PM *  -1 points [-]

"To speak of there being something/nothing out there is meaningless to me unless I can see why to care."

Do you cross the road with your eyes shut? If not, you are assuming, like everyone else, that there are things out there which are terminally disutilitous.

Note the bold.

Whose language? What language?

English, and all the rest that I know of.

If you think all language is a problem, what do you intend to replace it with?

Something better would be nice, but what of it? I am simply saying that language obscures what is going on. You may or may not find that insight useful.

It refers to the stuff that doesn't go away when you stop believing in it.

If so, I suggest "permanent" as a clearer word choice.

Comment author: Peterdjones 25 May 2011 01:49:39PM 0 points [-]

Objective truth is what you should believe even if you don't.

"Should" for what purpose?

Believing in truth is what rational people do.

Imagine you are arguing with someone who doesn't "get" rationality. If they believe in instrumental values, you can persuade them that they should care about rationality because it will enable them to achieve their aims. If they don't, you can't.

I may not be able to convince them, but at least I would be trying to convince them on the grounds of helping them achieve their aims.

Which is good because...?

It seems you're saying that, in the present argument, you are not trying to help me achieve my aims (correct me if I'm wrong).

Correct.

This is what makes me curious about why you think I would care.

I can argue that your personal aims are not the ultimate value, and I can suppose you might care about that just because it is true. That is how arguments work: one rational agent tries to persuade another that something is true. If one of the participants doesn't care about truth at all, the process probably isn't going to work.

The reasons I do participate, by the way, are that I hold out the chance that you have a reason why I would care (which maybe you are not articulating in a way that makes sense to me yet), that you or others will come to see my view that it's all semantic confusion, and because I don't want to sound dismissive or obstinate in continuing to say, "So what?"

I think that horse has bolted. Inasmuch as you don't care about truth per se, you have advertised yourself as being irrational.

Comment author: Amanojack 25 May 2011 07:19:43PM 0 points [-]

"Should" for what purpose?

Believing in truth is what rational people do.

Winning is what rational people do. We can go back and forth like this.

Which is good because...?

It benefits me, because I enjoy helping people. See, I can say, "So what?" in response to "You're wrong." Then you say, "You're still wrong." And I walk away feeling none the worse. Usually when someone claims I am wrong I take it seriously, but only because I know how it could ever, possibly, potentially affect me negatively. In this case you are saying it is different, and I can safely walk away with no terror ever to befall me for "being wrong."

I can argue that your personal aims are not the ultimate value, and I can suppose you might care about that just because it is true. That is how arguments work: one rational agent tries to persuade another that something is true. If one of the participants doesn't care about truth at all, the process probably isn't going to work.

Sure, people usually argue whether something is "true or false" because such status makes a difference (at least potentially) to their pain or pleasure, happiness, utility, etc. As this is almost always the case, it is unusual for someone to say they don't care about something being true or false. But in a situation where, ex hypothesi, the thing being discussed - very unusually - is claimed not to have any effect on such things, "true" and "false" become pointless labels. I only ever use such labels because they can help me enjoy life more. When they can't, I will happily discard them.

Comment author: Peterdjones 25 May 2011 01:38:56PM *  -1 points [-]

Yeah, I said it's not an argument. Yet again I can only ask, "So what?"

So being wrong and not caring you are in the wrong is not the same as being right.

(And this doesn't make me amoral in the sense of not having moral sentiments. If you tell me me it is wrong to kill a dog for no reason, I will agree because I will interpret that as, "We both would be disgusted at the prospect of killing a dog for no reason." But you seem to be saying there is something more.)

Yes. I am saying that moral sentiments can be wrong, and that that can be realised through reason, and that getting morality right matters more than anything.

The wordings "affect my winning" and "matter" mean the same thing to me.

But they don't mean the same thing. Morality matters more than anything else by definition. You don't prove anything by adopting an idiosyncratic private language.

I take "The world is round" seriously because it matters for my actions. I do not see how "I'm morally in the wrong"* matters for my actions. (Nor how "I'm pan-galactically in the wrong" matters. )

The question is whether mattering for your actions is morally justifiable.

Comment author: Amanojack 25 May 2011 06:59:38PM -1 points [-]

So being wrong and not caring you are in the wrong is not the same as being right.

Yet I still don't care, and by your own admission I suffer not in the slightest from my lack of caring.

I am saying that moral sentiments can be wrong, and that that can be realised through reason, and that getting morality right matters more than anything.

Zorg says that getting pangalacticism right matters more than anything. He cannot tell us why it matters, but boy it really does matter.

Morality matters more than anything else by definition.

Which would be? If you refer me to the dictionary again, I think we're done here.

Comment author: Peterdjones 25 May 2011 01:26:46PM *  0 points [-]

Why do I think that is a useful phrasing? That would be a long post, but EY got the essential idea in Making Beliefs Pay Rent.

I meant the second part: "but when you really drill down there are only beliefs that predict my experience more reliably or less reliably" How do you know that?

Well, what use is your belief in "objective value"?

What objective value are your instrumental beliefs? You keep assuming useful-to-me is the ultimate value and it isn't: Morality is, by definition.

Ultimately, that is to say at a deep level of analysis, I am non-cognitive to words like "true" and "refute."

Then I have a bridge to sell you.

I would substitute "useful" and "show people why it is not useful," respectively.

And would it be true that it is non-useful? Since to assert P is to assert "P is true", truth is a rather hard thing to eliminate. One would have to adopt the silence of Diogenes.

Comment author: Amanojack 25 May 2011 06:50:19PM *  -1 points [-]

Why do I think that is a useful phrasing? That would be a long post, but EY got the essential idea in Making Beliefs Pay Rent.

I meant the second part: "but when you really drill down there are only beliefs that predict my experience more reliably or less reliably" How do you know that?

That's what I was responding to.

What objective value are your instrumental beliefs? You keep assuming useful-to-me is the ultimate value and it isn't: Morality is, by definition.

<Zorg from planet Mnnmnedr interrupts this discussion>

Zorg: And what pan-galactic value are your objective values? Pan-galactic value is the ultimate value, dontcha know.

And would it be true that it is non-useful? Since to assert P is to assert "P is true", truth is a rather hard thing to eliminate.

You just eliminated it: If to assert P is to assert "P is true," then to assert "P is true" is to assert P. We could go back and forth like this for hours.

But you still haven't defined objective value.

Dictionary says, "Not influenced by personal feelings, interpretations, or prejudice; based on facts; unbiased."

How can a value be objective? ---EDIT: Especially since a value is a personal feeling. If you are defining "value" differently, how?

Comment author: ArisKatsaris 25 May 2011 11:54:39AM *  2 points [-]

The ultimate definition would tell me why to care.

In the space of all possible meta-ethics, some meta-ethics are cooperative, and other meta-ethics are not. This means that if you can choose which metaethics to spread to society, you stand a better chance at achieving your own goals by spreading cooperative metaethics. And cooperative metaethics is what we call "morality", by and large.

It's "Do unto others...", but abstracted a bit, so that we really mean "Use the reasoning to determine what to do unto others, that you would rather they used when deciding how to do unto you."


Omega puts you in a room with a big red button. "Press this button and you get ten dollars but another person will be poisoned to slowly die. If you don't press it I punch you on the nose and you get no money. They have a similar button which they can use to kill you and get 10 dollars. You can't communicate with them. In fact they think they're the only person being given the option of a button, so this problem isn't exactly like the Prisoner's Dilemma. They don't even know you exist or that their own life is at stake."

"But here's the offer I'm making just to you, not them. I can imprint you both with the decision theory of your choice, Amanojack; ofcourse if you identify yourself in your decision theory, they'll be identifying themself.

"Careful though: This is a one time offer, and then I may put both of you to further different tests. So choose the decision theory that you want both of you to have, and make it abstract enough to help you survive, regardless of specific circumstances."


Given the above scenario, you'll end up wanting people to choose protecting the life of strangers more than picking 10 dollars.

Comment author: Amanojack 25 May 2011 06:39:03PM 0 points [-]

I would indeed prefer it if other people had certain moral sentiments. I don't think I ever suggested otherwise.
