Comment author: [deleted] 25 May 2011 10:12:21AM 1 point [-]

You're interpreting "I'm morally in the wrong" to mean something like, "Other people will react badly to my actions," in which case I fully agree with you that it would affect my winning.

Actually I am not. I am interpreting "I'm morally wrong" to mean something like, "I made an error of arithmetic in an area where other people depend on me."

An error of arithmetic is an error of arithmetic regardless of whether any other people catch it, and regardless of whether any other people react badly to it. It is not, however, causally disconnected from their reaction, because, even though an error of arithmetic is what it is regardless of people's reaction to it, nevertheless people will probably react badly to it if you've made it in an area where other people depend on you. For example, if you made an error of arithmetic in taking a test, it is probably the case that the test-grader did not make the same error of arithmetic and so it is probably the case that he will react badly to your error. Nevertheless, your error of arithmetic is an error and is not merely getting-a-different-answer-from-the-grader. Even in the improbable case where you luck out and the test grader makes exactly the same error as you and so you get full marks, nevertheless, you did still make that error.

Even if everyone except you wakes up tomorrow and believes that 3+4=6, whereas you still remember that 3+4=7, nevertheless in many contexts you had better not switch to what the majority believe. For example, if you are designing something that will stand up, like a building or a bridge, you had better get your math right: you had better correctly add 3+4=7 in the course of designing the edifice if that sum is ever called upon in calculating whether the structure will stand up.

If humanity divides into two factions, one faction of which believes that 3+4=6 and the other of which believes that 3+4=7, then the latter faction, the one that adds correctly, will in all likelihood over time prevail on account of being right. This is true even if the latter group starts out in the minority. Just imagine what sort of tricks you could pull on people who believe that 3+4=6. Because of the truth of 3+4=7, eventually people who are aware of this truth will succeed and those who believe that 3+4=6 will fail, and over time the vast majority of society will once again come to accept that 3+4=7.
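The faction argument can be made concrete with a toy calculation (my illustration, not from the original comment; the beam capacity is a made-up figure): a consistent arithmetic error yields concretely wrong predictions about the world, regardless of how many people share it.

```python
# Two "factions" compute the total load on a beam that must carry
# a 3-tonne weight and a 4-tonne weight.

def add_correct(a, b):
    return a + b        # the faction that adds correctly: 3 + 4 = 7

def add_faction(a, b):
    return a + b - 1    # the faction that believes 3 + 4 = 6

beam_capacity = 6.5  # tonnes the beam can actually bear (hypothetical)

# The correct adders predict overload and refuse to build;
# the 3+4=6 faction predicts safety, builds, and the structure fails.
print(add_correct(3, 4) > beam_capacity)  # True  -> don't build
print(add_faction(3, 4) > beam_capacity)  # False -> builds anyway
```

Reality grades the two factions differently no matter what the consensus is, which is the sense in which the error is an error and not merely a disagreement.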

And similarly with morality.

In response to comment by [deleted] on Conceptual Analysis and Moral Theory
Comment author: Amanojack 25 May 2011 06:33:21PM 0 points [-]

In sum, you seem to be saying that morality involves arithmetic, and being wrong about arithmetic can hurt me, so being wrong about morality can hurt me.

Comment author: Peterdjones 24 May 2011 03:05:43PM 0 points [-]

First of all, I should disclose that I don't ultimately find any kind of objectivism coherent, including "objective reality." It is useful to talk about objective reality and objectively right or wrong beliefs most of the time, but when you really drill down there are only beliefs that predict my experience more reliably or less reliably.

How do you know that?

So you disagree with EY about making beliefs pay rent?

If disagreeing means it is good to entertain useless beliefs, then no. If disagreeing means that instrumental utility is not the ultimate value, then yes.

You should be moral by the definition of "moral" and "should". This seems circular.

You say that like that's a bad thing. I said it was analytical and analytical truths would be expected to sound tautologous or circular.

If there is no personal gain from morality, that doesn't mean you shouldn't be moral.

What if I say, "So what?"

So it's still true. Not caring is not refutation.

Comment author: Amanojack 25 May 2011 08:59:26AM 0 points [-]

It is useful to talk about objective reality and objectively right or wrong beliefs most of the time, but when you really drill down there are only beliefs that predict my experience more reliably or less reliably.

How do you know that?

Why do I think that is a useful phrasing? That would be a long post, but EY got the essential idea in Making Beliefs Pay Rent.

If disagreeing means it is good to entertain useless beliefs, then no. If disagreeing means that instrumental utility is not the ultimate value, then yes.

Well, what use is your belief in "objective value"?

So it's still true. Not caring is not refutation.

Ultimately, that is to say at a deep level of analysis, I am non-cognitive to words like "true" and "refute." I would substitute "useful" and "show people why it is not useful," respectively.

Comment author: Peterdjones 24 May 2011 01:50:30PM 0 points [-]

Indeed it is not an argument. Yet I can still say, "So what?" I am not going to worry about something that has no effect on my happiness. If there is some way it would have an effect, then I'd care about it.

The fact that you are amoral does not mean there is anything wrong with morality, and is not an argument against it. You might as well be saying "there is a perfectly good rational argument that the world is round, but I prefer to be irrational".

The difference is, believing "The world is round" affects whether I win or not, whereas believing "I'm morally in the wrong" does not.

That doesn't constitute an argument unless you can explain why your winning is the only thing that should matter.

Comment author: Amanojack 25 May 2011 08:49:20AM *  -1 points [-]

Yeah, I said it's not an argument. Yet again I can only ask, "So what?" (And this doesn't make me amoral in the sense of not having moral sentiments. If you tell me it is wrong to kill a dog for no reason, I will agree because I will interpret that as, "We both would be disgusted at the prospect of killing a dog for no reason." But you seem to be saying there is something more.)

That doesn't constitute an argument unless you can explain why your winning is the only thing that should matter.

The wordings "affect my winning" and "matter" mean the same thing to me. I take "The world is round" seriously because it matters for my actions. I do not see how "I'm morally in the wrong"* matters for my actions. (Nor how "I'm pan-galactically in the wrong" matters.)

*EDIT: in the sense that you seem to be using it (quite possibly because I don't know what that sense even is!).

Comment author: Peterdjones 24 May 2011 01:35:44PM *  0 points [-]

As far as objective value, I simply don't understand what anyone means by the term.

Objective truth is what you should believe even if you don't. Objective values are the values you should have even if you have different values.

And I think lukeprog's point could be summed up as, "Trying to figure out how each discussant is defining their terms is not really 'doing philosophy'; it's just the groundwork necessary for people not to talk past each other."

Where the groundwork is about 90% of the job...

As far as making beliefs pay rent, a simpler way to put it is: If you say I should believe X but I can't figure out what anticipations X entails, I will just respond, "So what?"

That has been answered several times. You are assuming that instrumental value is ultimate value, and it isn't.

To unite the two themes: The ultimate definition would tell me why to care.

Imagine you are arguing with someone who doesn't "get" rationality. If they believe in instrumental values, you can persuade them that they should care about rationality because it will enable them to achieve their aims. If they don't, you can't. Even good arguments will fail to work on some people.

You should care about morality because it is morality. Morality defines (the ultimate kind of) "should".

"What I should do" =def "what is moral".

Not everyone does get that, which is why "don't care" is "made to care" by various sanctions.

Comment author: Amanojack 25 May 2011 08:35:28AM *  0 points [-]

As far as objective value, I simply don't understand what anyone means by the term.

Objective truth is what you should believe even if you don't.

"Should" for what purpose?

Where the groundwork is about 90% of the job...

I certainly agree there. The question is whether it is more useful to assign the label "philosophy" to groundwork+theory or just the theory. A third possibility is that doing enough groundwork will make it clear to all discussants that there are no (or almost no) actual theories in what is now called "philosophy," only groundwork, meaning we would all be in agreement and there would be nothing to argue about except definitions.

Imagine you are arguing with someone who doesn't "get" rationality. If they believe in instrumental values, you can persuade them that they should care about rationality because it will enable them to achieve their aims. If they don't, you can't.

I may not be able to convince them, but at least I would be trying to convince them on the grounds of helping them achieve their aims. It seems you're saying that, in the present argument, you are not trying to help me achieve my aims (correct me if I'm wrong). This is what makes me curious about why you think I would care. The reasons I do participate, by the way, are that I hold out the chance that you have a reason why I would care (which maybe you are not articulating in a way that makes sense to me yet), that you or others will come to see my view that it's all semantic confusion, and because I don't want to sound dismissive or obstinate in continuing to say, "So what?"

Comment author: endoself 24 May 2011 03:12:20AM 0 points [-]

I thought of a way that I could be mugged Pascal-style

It's not Pascal's mugging unless it works with ridiculously low probabilities. Would you pay $5 to avoid a 10^-30 chance of watching 3^^^3 people being tortured?

People decry such things loudly, but few of those who aren't directly connected to the victims are losing sleep over such suffering, even though there are actions they could conceivably take to mitigate it. It is uncomfortable to acknowledge, but it seems undeniable.

Are you including yourself in "the vast majority of people"? Are you including most of LW? If your utility is bounded, you are probably not vulnerable to Pascal's mugging. If your utility is not bounded, it is irrelevant whether other people act like their utilities are bounded. Note that even egoists can have unbounded utility functions.
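A quick sketch of why a bounded utility function blocks the mugging (my illustration with hypothetical numbers; the bound and the cost figure are assumptions, and 3^^^3 itself is irrelevant once a bound is in place): the expected loss from a 10^-30 chance can never exceed the bound times 10^-30, so a certain $5 cost dominates.

```python
# If |utility| is capped at utility_bound, the worst possible expected
# loss from a tiny-probability catastrophe is p * utility_bound,
# however astronomically bad the catastrophe is claimed to be.

p_catastrophe = 1e-30
utility_bound = 1e12    # assumed cap on |utility|, in utils
cost_of_paying = 5.0    # certain loss from paying $5 (assuming 1 util/$)

worst_expected_loss = p_catastrophe * utility_bound  # at most ~1e-18 utils

pay = worst_expected_loss > cost_of_paying
print(pay)  # False: a bounded agent refuses the mugging
```

With an unbounded utility function there is no such cap, so a sufficiently enormous claimed disutility can always outweigh the $5, which is why the question of boundedness does the real work here.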

Comment author: Amanojack 25 May 2011 08:18:32AM 1 point [-]

Are you losing sleep over the daily deaths in Iraq? Are most LWers? That's all I'm saying. I consider myself pretty far above-average empathy-wise, to the extent that if I saw someone be tortured and die I'd probably be completely changed as a person. If I spent more time thinking about the war I probably would not be able to sleep at all, and eventually if I steeped myself in the reality of the situation I'd probably go insane or die of grief. The same would probably happen if I spent all my time watching slaughterhouse videos. So I'm not pretending to be callous. I'm just trying to inject some reality into the discussion. If we cared as much as we signal we do, no one would be able to go to work, or post on LW. We'd all be too grief-stricken.

So although it depends on what exactly you mean by "unbounded utility function," it seems that no one's utility function is really unbounded. And it also isn't immediately clear that anyone would really want their utility function to be unbounded (unless I'm misinterpreting the term).

Also, point taken about my scenario not being a Pascal's mugging situation.

Comment author: BobTheBob 24 May 2011 01:29:07AM 2 points [-]

Wrote a reply off-line and have been lapped several times (as usual). What Peterdjones says in his responses makes a lot of sense to me. I took a slightly different tack, which is maybe moot given your admission to being a solipsist:

I should disclose that I don't ultimately find any kind of objectivism coherent, including "objective reality".

-though the apparent tension in being a solipsist who argues gets to the root of the issue.

For what it may be worth:

I'm assuming you subscribe to what you consider to be a rigorously scientific world-view, and you consider such a world-view makes no place for objective values - you can't fit them in, hence no way to understand them.

From a rigorously scientific point of view, a human being is just a very complex, homeostatic electro-chemical system. It rattles about the surface of the earth governed by the laws of nature just like any other physical system. A thing considered thus (ie from a scientific pt of view) is not 'trying' to do anything, has no beliefs, no preferences (just varying dispositions), no purposes, is neither rational nor irrational, and has no values. Natural science does not see right or wrong, full stop.

Some people think this is all there is, and that there is nothing useful to say about our conception of ourselves as beings with values (eg, Paul Churchland). I disagree. A person cannot make sense of her/himself with just this scientific understanding, important though it is, because s/he has to make decisions -has to figure out whether to vote left or right, be vegetarian or carnivore, to spend time writing blog responses or mow the lawn, etc. Values can't be made sense of from a scientific point of view, but we recognize and need them, so we have to make sense of them otherwise.

Thought of from this point of view, all values are in some sense objective -ie, independent of you. There has to be a gap between value and actual behaviour, for the value to be made sense of as such (if everything you do is right, there is no right).

Presently you are disagreeing with me about values. To me this says you think there's a right and wrong of the matter, which applies to us both. This is an example of an objective value. It would take some work to spell out a parallel moral example, if this is what you have in mind, but given the right context I submit you would argue with someone about some moral principle (hope so, anyway).

Prima facie, values are objective. Maybe on closer inspection it can be shown in some sense they aren't, but I submit the idea is not incoherent. And showing otherwise would take doing some philosophy.

Comment author: Amanojack 25 May 2011 08:00:21AM 0 points [-]

I took a slightly different tack, which is maybe moot given your admission to being a solipsist

Solipsism is an ontological stance: in short, "there is nothing out there but my own mind." I am saying something slightly different: "To speak of there being something/nothing out there is meaningless to me unless I can see why to care." Then again, I'd say this is tautological/obvious in that "meaning" just is "why it matters to me."

My "position" (really a meta-position about philosophical positions) is just that language obscures what is going on. It may take a while to make this clear, but if we continue I'm sure it will be.

I'm assuming you subscribe to what you consider to be a rigorously scientific world-view

I'm not a naturalist. I'm not skeptical of "objective" because of such reasons; I am skeptical of it merely because I don't know what the word refers to (unless it means something like "in accordance with consensus"). In the end, I engage in intellectual discourse in order to win, be happier, get what I want, get pleasure, maximize my utility, or whatever you'll call it (I mean them all synonymously).

If after engaging in such discourse I am not able to do that, I will eventually want to ask, "So what? What difference does it make to my anticipations? How does this help me get what I want and/or avoid what I don't want?"

Comment author: [deleted] 23 May 2011 07:11:06PM 1 point [-]

The difference is, believing "The world is round" affects whether I win or not, whereas believing "I'm morally in the wrong" does not.

That is apparently true in your hypothetical, but it's not true in the real world. Just as the roundness of the world has consequences, the wrongness of an action has consequences. For example, if you kill someone, then your fate is going to depend (probabilistically) on whether you were in the right (e.g. he attacked and you were defending your life) or in the wrong (e.g. you murdered him when he caught you burgling his house). The more in the right you were, then, ceteris paribus, the better your chances are.

In response to comment by [deleted] on Conceptual Analysis and Moral Theory
Comment author: Amanojack 25 May 2011 07:37:37AM 0 points [-]

For example, if you kill someone, then your fate is going to depend (probabilistically) on whether you were in the right (e.g. he attacked and you were defending your life) or in the wrong (e.g. you murdered him when he caught you burgling his house).

You're interpreting "I'm morally in the wrong" to mean something like, "Other people will react badly to my actions," in which case I fully agree with you that it would affect my winning. Peterdjones apparently does not mean it that way, though.

Comment author: Peterdjones 23 May 2011 06:44:29PM *  0 points [-]

The fact that you are not going to worry about morality, does not make morality a) false b) meaningless or c) subjective. Can I take it you are no longer arguing for any of claims a) b) or c) ?

The difference is, believing "The world is round" affects whether I win or not, whereas believing "I'm morally in the wrong" does not.

You have not succeeded in showing that winning is the most important thing.

Comment author: Amanojack 25 May 2011 07:31:37AM 0 points [-]

The fact that you are not going to worry about morality, does not make morality a) false b) meaningless or c) subjective. Can I take it you are no longer arguing for any of claims a) b) or c) ?

I've never argued (a), I'm still arguing (actually just informing you) that the words "objective morality" are meaningless to me, and I'm still arguing (c) but only in the sense that it is equivalent to (b): in other words, I can only await some argument that morality is objective. (But first I'd need a definition!)

You have not succeeded in showing that winning is the most important thing.

I'm using the word winning as a synonym for "getting what I want," and I understand the most important thing to mean "what I care about most." And I mean "want" and "care about" in a way that makes it tautological. Keep in mind I want other people to be happy, not suffer, etc. Nothing either of us has argued so far indicates we would necessarily have different moral sentiments about anything.

Comment author: Amanojack 23 May 2011 06:33:54PM 2 points [-]

I'd like to see more people questioning orthodox assumptions, and generally more radical arguments, yet without compromising LW standards of rigor. I feel like people are too afraid to stick their necks out and seriously argue something that goes against the majority/high-status views.

Comment author: Amanojack 23 May 2011 06:25:09PM 1 point [-]

I don't like how much "crank" sounds like "heretic." In EY's case he kept trying to wage war on the established result even after he noticed his mistake, but the mere act of questioning - even publicly - an established result should not be called crankery.
