Comment author: Peterdjones 25 May 2011 01:49:39PM 0 points [-]

Objective truth is what you should believe even if you don't.

"Should" for what purpose?

Believing in truth is what rational people do.

Imagine you are arguing with someone who doesn't "get" rationality. If they believe in instrumental values, you can persuade them that they should care about rationality because it will enable them to achieve their aims. If they don't, you can't.

I may not be able to convince them, but at least I would be trying to convince them on the grounds of helping them achieve their aims.

Which is good because...?

It seems you're saying that, in the present argument, you are not trying to help me achieve my aims (correct me if I'm wrong).

Correct.

This is what makes me curious about why you think I would care.

I can argue that your personal aims are not the ultimate value, and I can suppose you might care about that just because it is true. That is how arguments work: one rational agent tries to persuade another that something is true. If one of the participants doesn't care about truth at all, the process probably isn't going to work.

The reasons I do participate, by the way, are that I hold out the chance that you have a reason why I would care (which maybe you are not articulating in a way that makes sense to me yet), that you or others will come to see my view that it's all semantic confusion, and because I don't want to sound dismissive or obstinate in continuing to say, "So what?"

I think that horse has bolted. Inasmuch as you don't care about truth per se, you have advertised yourself as being irrational.

Comment author: Amanojack 25 May 2011 07:19:43PM 0 points [-]

"Should" for what purpose?

Believing in truth is what rational people do.

Winning is what rational people do. We can go back and forth like this.

Which is good because...?

It benefits me, because I enjoy helping people. See, I can say, "So what?" in response to "You're wrong." Then you say, "You're still wrong." And I walk away feeling none the worse. Usually when someone claims I am wrong I take it seriously, but only because I can see how it could ever possibly affect me negatively. In this case you are saying it is different, and I can safely walk away with no terror ever to befall me for "being wrong."

I can argue that your personal aims are not the ultimate value, and I can suppose you might care about that just because it is true. That is how arguments work: one rational agent tries to persuade another that something is true. If one of the participants doesn't care about truth at all, the process probably isn't going to work.

Sure, people usually argue about whether something is "true or false" because such status makes a difference (at least potentially) to their pain or pleasure, happiness, utility, etc. As this is almost always the case, it is understandably unusual for someone to say they don't care whether something is true or false. But in a situation where, ex hypothesi, the thing being discussed - very unusually - is claimed to have no effect on such things, "true" and "false" become pointless labels. I only ever use such labels because they can help me enjoy life more. When they can't, I will happily discard them.

Comment author: Peterdjones 25 May 2011 01:38:56PM *  -1 points [-]

Yeah, I said it's not an argument. Yet again I can only ask, "So what?"

So being wrong and not caring you are in the wrong is not the same as being right.

(And this doesn't make me amoral in the sense of not having moral sentiments. If you tell me it is wrong to kill a dog for no reason, I will agree because I will interpret that as, "We both would be disgusted at the prospect of killing a dog for no reason." But you seem to be saying there is something more.)

Yes. I am saying that moral sentiments can be wrong, and that that can be realised through reason, and that getting morality right matters more than anything.

The wordings "affect my winning" and "matter" mean the same thing to me.

But they don't mean the same thing. Morality matters more than anything else by definition. You don't prove anything by adopting an idiosyncratic private language.

I take "The world is round" seriously because it matters for my actions. I do not see how "I'm morally in the wrong"* matters for my actions. (Nor how "I'm pan-galactically in the wrong" matters.)

The question is whether mattering for your actions is morally justifiable.

Comment author: Amanojack 25 May 2011 06:59:38PM -1 points [-]

So being wrong and not caring you are in the wrong is not the same as being right.

Yet I still don't care, and by your own admission I suffer not in the slightest from my lack of caring.

I am saying that moral sentiments can be wrong, and that that can be realised through reason, and that getting morality right matters more than anything.

Zorg says that getting pangalacticism right matters more than anything. He cannot tell us why it matters, but boy it really does matter.

Morality matters more than anything else by definition.

Which would be? If you refer me to the dictionary again, I think we're done here.

Comment author: Peterdjones 25 May 2011 01:26:46PM *  0 points [-]

Why do I think that is a useful phrasing? That would be a long post, but EY got the essential idea in Making Beliefs Pay Rent.

I meant the second part: "but when you really drill down there are only beliefs that predict my experience more reliably or less reliably" How do you know that?

Well, what use is your belief in "objective value"?

What objective value are your instrumental beliefs? You keep assuming useful-to-me is the ultimate value and it isn't: Morality is, by definition.

Ultimately, that is to say at a deep level of analysis, I am non-cognitive to words like "true" and "refute."

Then I have a bridge to sell you.

I would substitute "useful" and "show people why it is not useful," respectively.

And would it be true that it is non-useful? Since to assert P is to assert "P is true", truth is a rather hard thing to eliminate. One would have to adopt the silence of Diogenes.

Comment author: Amanojack 25 May 2011 06:50:19PM *  -1 points [-]

Why do I think that is a useful phrasing? That would be a long post, but EY got the essential idea in Making Beliefs Pay Rent.

I meant the second part: "but when you really drill down there are only beliefs that predict my experience more reliably or less reliably" How do you know that?

That's what I was responding to.

What objective value are your instrumental beliefs? You keep assuming useful-to-me is the ultimate value and it isn't: Morality is, by definition.

<Zorg from planet Mnnmnedr interrupts this discussion>

Zorg: And what pan-galactic value are your objective values? Pan-galactic value is the ultimate value, dontcha know.

And would it be true that it is non-useful? Since to assert P is to assert "P is true", truth is a rather hard thing to eliminate.

You just eliminated it: If to assert P is to assert "P is true," then to assert "P is true" is to assert P. We could go back and forth like this for hours.

But you still haven't defined objective value.

Dictionary says, "Not influenced by personal feelings, interpretations, or prejudice; based on facts; unbiased."

How can a value be objective? EDIT: Especially since a value is a personal feeling. If you are defining "value" differently, how?

Comment author: ArisKatsaris 25 May 2011 11:54:39AM *  2 points [-]

The ultimate definition would tell me why to care.

In the space of all possible metaethics, some metaethics are cooperative, and others are not. This means that if you can choose which metaethics to spread to society, you stand a better chance at achieving your own goals if you spread cooperative metaethics. And cooperative metaethics is what we call "morality", by and large.

It's "Do unto others...", but abstracted a bit, so that we really mean "Use the reasoning, when deciding what to do unto others, that you would rather they used when deciding what to do unto you."


Omega puts you in a room with a big red button. "Press this button and you get ten dollars, but another person will be poisoned to slowly die. If you don't press it, I punch you on the nose and you get no money. They have a similar button which they can use to kill you and get ten dollars. You can't communicate with them. In fact, they think they're the only person being given the option of a button, so this problem isn't exactly like the Prisoner's Dilemma. They don't even know you exist or that their own life is at stake."

"But here's the offer I'm making just to you, not them. I can imprint you both with the decision theory of your choice, Amanojack; of course, if you identify yourself in your decision theory, they'll be identifying themselves.

"Careful though: This is a one-time offer, and then I may put both of you to further different tests. So choose the decision theory that you want both of you to have, and make it abstract enough to help you survive, regardless of specific circumstances."


Given the above scenario, you'll end up wanting people to choose protecting the life of strangers over picking up ten dollars.
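The scenario can be put as a toy payoff computation. A minimal sketch (the numeric utilities for death, the nose punch, and the ten dollars are invented stand-ins, not anything from the comment):

```python
# Toy payoffs for Omega's button scenario, seen from one player's side.
# Because Omega imprints BOTH players with the same decision theory,
# whichever rule you choose is also the rule that decides your fate.
# Utility stand-ins (assumptions): death = -1000, nose punch = -1, ten dollars = +1.

DEATH, PUNCH, TEN_DOLLARS = -1000.0, -1.0, 1.0

def my_payoff(i_press: bool, they_press: bool) -> float:
    payoff = TEN_DOLLARS if i_press else PUNCH  # direct result of my button
    if they_press:                              # their button poisons me
        payoff += DEATH
    return payoff

# Since both agents run the same rule, the effective choice is between
# the symmetric outcomes (press, press) and (refrain, refrain):
both_press = my_payoff(True, True)      # 1 - 1000 = -999
both_refrain = my_payoff(False, False)  # -1

assert both_refrain > both_press
```

Under any remotely plausible numbers the same ordering holds, which is the sense in which the cooperative rule is the one you would want both players imprinted with.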

Comment author: Amanojack 25 May 2011 06:39:03PM 0 points [-]

I would indeed prefer it if other people had certain moral sentiments. I don't think I ever suggested otherwise.

Comment author: [deleted] 25 May 2011 10:12:21AM 1 point [-]

You're interpreting "I'm morally in the wrong" to mean something like, "Other people will react badly to my actions," in which case I fully agree with you that it would affect my winning.

Actually I am not. I am interpreting "I'm morally wrong" to mean something like, "I made an error of arithmetic in an area where other people depend on me."

An error of arithmetic is an error of arithmetic regardless of whether any other people catch it, and regardless of whether any other people react badly to it. It is not, however, causally disconnected from their reaction, because, even though an error of arithmetic is what it is regardless of people's reaction to it, nevertheless people will probably react badly to it if you've made it in an area where other people depend on you. For example, if you made an error of arithmetic in taking a test, it is probably the case that the test-grader did not make the same error of arithmetic and so it is probably the case that he will react badly to your error. Nevertheless, your error of arithmetic is an error and is not merely getting-a-different-answer-from-the-grader. Even in the improbable case where you luck out and the test grader makes exactly the same error as you and so you get full marks, nevertheless, you did still make that error.

Even if everyone except you wakes up tomorrow and believes that 3+4=6, whereas you still remember that 3+4=7, nevertheless in many contexts you had better not switch to what the majority believe. For example, if you are designing something that will stand up, like a building or a bridge, you had better get your math right, you had better correctly add 3+4=7 in the course of designing the edifice if that sum is ever called on calculating whether the structure will stand up.

If humanity divides into two factions, one faction of which believes that 3+4=6 and the other of which believes that 3+4=7, then the latter faction, the one that adds correctly, will in all likelihood over time prevail on account of being right. This is true even if the latter group starts out in the minority. Just imagine what sort of tricks you could pull on people who believe that 3+4=6. Because of the truth of 3+4=7, eventually people who are aware of this truth will succeed and those who believe that 3+4=6 will fail, and over time the vast majority of society will once again come to accept that 3+4=7.

And similarly with morality.

In response to comment by [deleted] on Conceptual Analysis and Moral Theory
Comment author: Amanojack 25 May 2011 06:33:21PM 0 points [-]

In sum, you seem to be saying that morality involves arithmetic, and being wrong about arithmetic can hurt me, so being wrong about morality can hurt me.

Comment author: Peterdjones 24 May 2011 03:05:43PM 0 points [-]

First of all, I should disclose that I don't ultimately find any kind of objectivism coherent, including "objective reality." It is useful to talk about objective reality and objectively right or wrong beliefs most of the time, but when you really drill down there are only beliefs that predict my experience more reliably or less reliably.

How do you know that?

So you disagree with EY about making beliefs pay rent?

If disagreeing means it is good to entertain useless beliefs, then no. If disagreeing means that instrumental utility is not the ultimate value, then yes.

You should be moral by the definition of "moral" and "should". This seems circular.

You say that like that's a bad thing. I said it was analytical and analytical truths would be expected to sound tautologous or circular.

If there is no personal gain from morality, that doesn't mean you shouldn't be moral.

What if I say, "So what?"

So it's still true. Not caring is not refutation.

Comment author: Amanojack 25 May 2011 08:59:26AM 0 points [-]

It is useful to talk about objective reality and objectively right or wrong beliefs most of the time, but when you really drill down there are only beliefs that predict my experience more reliably or less reliably

How do you know that?

Why do I think that is a useful phrasing? That would be a long post, but EY got the essential idea in Making Beliefs Pay Rent.

If disagreeing means it is good to entertain useless beliefs, then no. If disagreeing means that instrumental utility is not the ultimate value, then yes.

Well, what use is your belief in "objective value"?

So it's still true. Not caring is not refutation.

Ultimately, that is to say at a deep level of analysis, I am non-cognitive to words like "true" and "refute." I would substitute "useful" and "show people why it is not useful," respectively.

Comment author: Peterdjones 24 May 2011 01:50:30PM 0 points [-]

Indeed it is not an argument. Yet I can still say, "So what?" I am not going to worry about something that has no effect on my happiness. If there is some way it would have an effect, then I'd care about it.

The fact that you are amoral does not mean there is anything wrong with morality, and is not an argument against it. You might as well be saying "there is a perfectly good rational argument that the world is round, but I prefer to be irrational".

The difference is, believing "The world is round" affects whether I win or not, whereas believing "I'm morally in the wrong" does not.

That doesn't constitute an argument unless you can explain why your winning is the only thing that should matter.

Comment author: Amanojack 25 May 2011 08:49:20AM *  -1 points [-]

Yeah, I said it's not an argument. Yet again I can only ask, "So what?" (And this doesn't make me amoral in the sense of not having moral sentiments. If you tell me it is wrong to kill a dog for no reason, I will agree because I will interpret that as, "We both would be disgusted at the prospect of killing a dog for no reason." But you seem to be saying there is something more.)

That doesn't constitute an argument unless you can explain why your winning is the only thing that should matter.

The wordings "affect my winning" and "matter" mean the same thing to me. I take "The world is round" seriously because it matters for my actions. I do not see how "I'm morally in the wrong"* matters for my actions. (Nor how "I'm pan-galactically in the wrong" matters.)

*EDIT: in the sense that you seem to be using it (quite possibly because I don't know what that sense even is!).

Comment author: Peterdjones 24 May 2011 01:35:44PM *  0 points [-]

As far as objective value, I simply don't understand what anyone means by the term.

Objective truth is what you should believe even if you don't. Objective values are the values you should have even if you have different values.

And I think lukeprog's point could be summed up as, "Trying to figure out how each discussant is defining their terms is not really 'doing philosophy'; it's just the groundwork necessary for people not to talk past each other."

Where the groundwork is about 90% of the job...

As far as making beliefs pay rent, a simpler way to put it is: If you say I should believe X but I can't figure out what anticipations X entails, I will just respond, "So what?"

That has been answered several times. You are assuming that instrumental value is ultimate value, and it isn't.

To unite the two themes: The ultimate definition would tell me why to care.

Imagine you are arguing with someone who doesn't "get" rationality. If they believe in instrumental values, you can persuade them that they should care about rationality because it will enable them to achieve their aims. If they don't, you can't. Even good arguments will fail to work on some people.

You should care about morality because it is morality. Morality defines (the ultimate kind of) "should".

"What I should do" =def "what is moral".

Not everyone does get that, which is why "don't care" is "made to care" by various sanctions.

Comment author: Amanojack 25 May 2011 08:35:28AM *  0 points [-]

As far as objective value, I simply don't understand what anyone means by the term.

Objective truth is what you should believe even if you don't.

"Should" for what purpose?

Where the groundwork is about 90% of the job...

I certainly agree there. The question is whether it is more useful to assign the label "philosophy" to groundwork+theory or just to the theory. A third possibility is that doing enough groundwork will make it clear to all discussants that there are no (or almost no) actual theories in what is now called "philosophy," only groundwork, meaning we would all be in agreement and there would be nothing to argue about except definitions.

Imagine you are arguing with someone who doesn't "get" rationality. If they believe in instrumental values, you can persuade them that they should care about rationality because it will enable them to achieve their aims. If they don't, you can't.

I may not be able to convince them, but at least I would be trying to convince them on the grounds of helping them achieve their aims. It seems you're saying that, in the present argument, you are not trying to help me achieve my aims (correct me if I'm wrong). This is what makes me curious about why you think I would care. The reasons I do participate, by the way, are that I hold out the chance that you have a reason why I would care (which maybe you are not articulating in a way that makes sense to me yet), that you or others will come to see my view that it's all semantic confusion, and because I don't want to sound dismissive or obstinate in continuing to say, "So what?"

Comment author: endoself 24 May 2011 03:12:20AM 0 points [-]

I thought of a way that I could be mugged Pascal-style

It's not Pascal's mugging unless it works with ridiculously low probabilities. Would you pay $5 to avoid a 10^-30 chance of watching 3^^^3 people being tortured?

People decry such things loudly, but few of those who aren't directly connected to the victims are losing sleep over such suffering, even though there are actions they could conceivably take to mitigate it. It is uncomfortable to acknowledge, but it seems undeniable.

Are you including yourself in "the vast majority of people"? Are you including most of LW? If your utility is bounded, you are probably not vulnerable to Pascal's mugging. If your utility is not bounded, it is irrelevant whether other people act like their utilities are bounded. Note that even egoists can have unbounded utility functions.
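The bounded-vs-unbounded point admits a toy expected-utility calculation. A sketch with made-up numbers (the threat probability, the utility cap, and the use of 3^70 as a float-sized stand-in for 3^^^3 are all assumptions for illustration):

```python
# Expected disutility of the mugger's threat vs. a sure $5 loss.
p_threat = 1e-30          # probability the threat is real
harm = 3.0 ** 70          # stand-in for 3^^^3, which no float can hold
cost_of_paying = 5.0      # the sure loss if you pay the mugger

def unbounded_disutility(h):
    # Disutility grows without limit in the size of the harm.
    return h

def bounded_disutility(h, cap=1e6):
    # Disutility saturates at a cap, however large the harm.
    return min(h, cap)

ev_unbounded = p_threat * unbounded_disutility(harm)  # ~2.5e3: threat dominates, pay up
ev_bounded = p_threat * bounded_disutility(harm)      # 1e-24: negligible, keep the $5

assert ev_unbounded > cost_of_paying > ev_bounded
```

With an unbounded utility function the mugger can always quote a harm large enough to swamp any probability; with a bounded one the product can never exceed p_threat times the cap, which is why boundedness blocks the mugging.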

Comment author: Amanojack 25 May 2011 08:18:32AM 1 point [-]

Are you losing sleep over the daily deaths in Iraq? Are most LWers? That's all I'm saying. I consider myself pretty far above-average empathy-wise, to the extent that if I saw someone be tortured and die I'd probably be completely changed as a person. If I spent more time thinking about the war I would probably not be able to sleep at all, and if I steeped myself in the reality of the situation I'd probably eventually go insane or die of grief. The same would probably happen if I spent all my time watching slaughterhouse videos. So I'm not pretending to be callous. I'm just trying to inject some reality into the discussion. If we cared as much as we signal we do, no one would be able to go to work, or post on LW. We'd all be too grief-stricken.

So although it depends on what exactly you mean by "unbounded utility function," it seems that no one's utility function is really unbounded. And it also isn't immediately clear that anyone would really want their utility function to be unbounded (unless I'm misinterpreting the term).

Also, point taken about my scenario not being a Pascal's mugging situation.

Comment author: BobTheBob 24 May 2011 01:29:07AM 2 points [-]

Wrote a reply off-line and have been lapped several times (as usual). What Peterdjones says in his responses makes a lot of sense to me. I took a slightly different tack, which is maybe moot given your admission to being a solipsist:

I should disclose that I don't ultimately find any kind of objectivism coherent, including "objective reality".

-though the apparent tension in being a solipsist who argues gets to the root of the issue.

For what it may be worth:

I'm assuming you subscribe to what you consider to be a rigorously scientific world-view, and you consider such a world-view makes no place for objective values - you can't fit them in, hence no way to understand them.

From a rigorously scientific point of view, a human being is just a very complex, homeostatic electro-chemical system. It rattles about the surface of the earth governed by the laws of nature just like any other physical system. A thing considered thus (i.e., from a scientific point of view) is not 'trying' to do anything, has no beliefs, no preferences (just varying dispositions), no purposes, is neither rational nor irrational, and has no values. Natural science does not see right or wrong, full stop.

Some people think this is all there is, and that there is nothing useful to say about our conception of ourselves as beings with values (e.g., Paul Churchland). I disagree. A person cannot make sense of her/himself with just this scientific understanding, important though it is, because s/he has to make decisions - has to figure out whether to vote left or right, be vegetarian or carnivore, to spend time writing blog responses or mow the lawn, etc. Values can't be made sense of from a scientific point of view, but we recognize and need them, so we have to make sense of them otherwise.

Thought of from this point of view, all values are in some sense objective -ie, independent of you. There has to be a gap between value and actual behaviour, for the value to be made sense of as such (if everything you do is right, there is no right).

Presently you are disagreeing with me about values. To me this says you think there's a right and wrong of the matter, which applies to us both. This is an example of an objective value. It would take some work to spell out a parallel moral example, if this is what you have in mind, but given the right context I submit you would argue with someone about some moral principle (hope so, anyway).

Prima facie, values are objective. Maybe on closer inspection it can be shown in some sense they aren't, but I submit the idea is not incoherent. And showing otherwise would take doing some philosophy.

Comment author: Amanojack 25 May 2011 08:00:21AM 0 points [-]

I took a slightly different tack, which is maybe moot given your admission to being a solipsist

Solipsism is an ontological stance: in short, "there is nothing out there but my own mind." I am saying something slightly different: "To speak of there being something/nothing out there is meaningless to me unless I can see why to care." Then again, I'd say this is tautological/obvious in that "meaning" just is "why it matters to me."

My "position" (really a meta-position about philosophical positions) is just that language obscures what is going on. It may take a while to make this clear, but if we continue I'm sure it will be.

I'm assuming you subscribe to what you consider to be a rigorously scientific world-view

I'm not a naturalist. I'm not skeptical of "objective" because of such reasons; I am skeptical of it merely because I don't know what the word refers to (unless it means something like "in accordance with consensus"). In the end, I engage in intellectual discourse in order to win, be happier, get what I want, get pleasure, maximize my utility, or whatever you'll call it (I mean them all synonymously).

If after engaging in such discourse I am not able to do that, I will eventually want to ask, "So what? What difference does it make to my anticipations? How does this help me get what I want and/or avoid what I don't want?"
