Comment author: Peterdjones 07 June 2011 10:07:29AM 3 points [-]

Likewise, a given action can be 'right' and 'wrong' at the same time, though in different senses.

Are you sure that people mean different things by 'right' and 'wrong', or are they just using different criteria to judge whether something is right or wrong?

Isn't this done by appealing to the values of the majority?

It's done by changing the values of the majority, by showing the majority that they ought (in a rational sense of 'ought') to think differently. The point being that if correct reasoning eventually leads to uniform results, we call that objective.

Only if — independent of values — certain values are rational and others are not.

Does it work or not? Have majorities not been persuaded that it's wrong, if convenient, to oppress minorities?

Comment author: Garren 07 June 2011 01:33:56PM 0 points [-]

Are you sure that people mean different things by 'right' and 'wrong', or are they just using different criteria to judge whether something is right or wrong?

What could 'right' and 'wrong' mean, beyond the criteria used to make the judgment?

It's done by changing the values of the majority, by showing the majority that they ought (in a rational sense of 'ought') to think differently.

Sure, if you're talking about appealing to people to change their non-fundamental values to be more in line with their fundamental values. But I've still never heard how reason can have anything to say about fundamental values.

Does it work or not? Have majorities not been persuaded that it's wrong, if convenient, to oppress minorities?

So far as I can tell, only by reasoning from their pre-existing values.

Comment author: Peterdjones 06 June 2011 04:16:55PM 0 points [-]

What's contradictory about the same object being judged differently by different standards?

Nothing. There's nothing contradictory about multiple subjective truths or about multiple opinions, or about a single objective truth. But there is a contradiction in multiple objective truths about morality, as I said.

Here's a standard: return the width of the object in meters. Here's another: return the number of wavelengths of blue light that make up the width of the object. And another: return the number of electrons in the object.

There isn't any contradiction in multiple objective truths about different things; but the original hypothesis was multiple objective truths about the same thing, i.e. the morality of an action. If you are going to say that John-morality and Mary-morality are different things, that is effectively conceding that they are subjective.

Comment author: Garren 07 June 2011 04:34:17AM 0 points [-]

If you are going to say that John-morality and Mary-morality are different things, that is effectively conceding that they are subjective.

The focus doesn't have to be on John and Mary; it can be on the morality we're referencing via John and Mary. By analogy, we could talk about John's hometown and Mary's hometown, without being subjectivists about the cities we are referencing.

Comment author: Peterdjones 06 June 2011 01:38:36PM *  0 points [-]

There's an ambiguity here. A standard can make objective judgments, without the selection of that standard being objective. Like meter measurements.

I don't think that works. If you have multiple contradictory judgements being made by multiple standards, and you deem them to be objective, then you end up with multiple contradictory objective truths. But I don't think you can have multiple contradictory objective truths.

But we can tell him he did something wrong by the standard against randomly killing people. And we can act consistently with that standard by sanctioning him.

You are tacitly assuming that the good guys are in the majority. However, sometimes the minority is in the right (as you and I would judge it), and needs to persuade the majority to change their ways.

I don't have a good counter-argument to that, other than asking to examine such a theory and wondering how being able to point out a logical consistency is going to rein in people with desires that run counter to it any better than relativism can.

It'll work on people who already subscribe to rationality, whereas relativism won't.

Comment author: Garren 07 June 2011 04:27:10AM -1 points [-]

If you have multiple contradictory judgements being made by multiple standards, and you deem them to be objective, then you end up with multiple contradictory objective truths. But I don't think you can have multiple contradictory objective truths.

Ok, instead of meter measurements, let's look at cubit measurements. Different ancient cultures represented significantly different physical lengths by 'cubits.' So a measurement of 10 cubits to a Roman was a different physical distance than 10 cubits to a Babylonian.

A given object could thus be 'over ten cubits' and 'under ten cubits' at the same time, though in different senses. Likewise, a given action can be 'right' and 'wrong' at the same time, though in different senses.

The surface judgments contradict, but there need not be any propositional conflict.
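The cubit analogy can be made concrete with a small sketch. The cubit lengths in meters below are rough approximations chosen purely for illustration; the point is only that 'over ten cubits' is really a two-place relation between an object and a measurement standard:

```python
# Relativized length judgments: 'over ten cubits' names a relation
# between an object and a measurement standard, not a bare property.
# Cubit lengths in meters are approximate, for illustration only.
CUBIT_M = {
    "roman": 0.444,
    "babylonian": 0.497,
}

def over_ten_cubits(width_m, standard):
    """True if the object's width exceeds ten cubits under the given standard."""
    return width_m > 10 * CUBIT_M[standard]

# A 4.6 m object is 'over ten cubits' to a Roman (4.44 m threshold)
# but not to a Babylonian (4.97 m threshold) - in different senses,
# with no propositional conflict.
width = 4.6
print(over_ten_cubits(width, "roman"))       # True
print(over_ten_cubits(width, "babylonian"))  # False
```

Once each claim carries its standard, both can be true of the same object at the same time.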

You are tacitly assuming that the good guys are in the majority. However, sometimes the minority is in the right (as you and I would judge it), and needs to persuade the majority to change their ways.

Isn't this done by appealing to the values of the majority?

It'll work on people who already subscribe to rationality, whereas relativism won't.

Only if — independent of values — certain values are rational and others are not.

Comment author: BobTheBob 07 June 2011 02:57:18AM 1 point [-]

But we can tell him he did something wrong by the standard against randomly killing people. And we can act consistently with that standard by sanctioning him. In fact, it would be inconsistent for us to give him a pass.

I understand your point is that we can tell the killer that he has acted wrongly according to our standard (that one ought not randomly to kill people). But if people in general are bound only by their own standards, why should that matter to him? It seems to me I cannot provide him compelling grounds as to why he ought not to have done what he did, and that to punish him would be arbitrary. Sorry if I'm not getting it.

I'm not sure I'm following the argument here. I'm saying that all normativity is hypothetical. It sounds like you're arguing there is a categorical 'ought' for believing mathematical truths because it would be very strange to say we only 'ought' to believe 2 + 2 = 4 in reference to some goal. So if there are some categorical 'oughts,' there might be others.

Is it something like that?

This states the thought very clearly - thanks.

If so, then I would offer the goal of "in order to be logically consistent."

I acknowledge the business about the nature of the compulsion behind mathematical judgement is pretty opaque. What I had in mind is illustrated by this dialogue. As it shows, the problem gets right back to the compulsion to be logically consistent. It's possible this doesn't really engage your thoughts, though.

There are some who think moral oughts reduce to logical consistency, so we ought act in a certain way in order to be logically consistent. I don't have a good counter-argument to that, other than asking to examine such a theory and wondering how being able to point out a logical consistency is going to rein in people with desires that run counter to it any better than relativism can.

If the view is correct, then you can at least convince rational people that it is not rational to kill people. Isn't that an important result?

Comment author: Garren 07 June 2011 04:11:22AM *  0 points [-]

It seems to me I cannot provide him compelling grounds as to why he ought not to have done what he did, and that to punish him would be arbitrary.

When a dispute is over fundamental values, I don't think we can give the other side compelling grounds to act according to our own values. Consider Eliezer's paperclip maximizer. How could we possibly convince such a being that it's doing something irrational, besides pointing out that its current actions are suboptimal for its goal in the long run?

Thanks for the link to the Carroll story. I plan on taking some time to think it over.

If the view is correct, then you can at least convince rational people that it is not rational to kill people. Isn't that an important result?

It's important to us, but — as far as I can tell — only because of our values. I don't think it's important 'to the universe' for someone to refrain from going on a killing spree.

Another way to put it is that the rationality of killing sprees is dependent on the agent's values. I haven't read much of this site, but I'm getting the impression that a major project is to accept this...and figure out which initial values to give AI. Simply ensuring the AI will be rational is not enough to protect our values.

Comment author: asr 05 June 2011 10:33:00PM 1 point [-]

At risk of triggering the political mind-killer, I think there are some potentially problematic consequences of this view.

Once a disagreement is known to result from a pure difference in values, there isn't a rational way to resolve it...the best we can do is make people aware of the difference in their claims.

Suppose we don't have good grounds for keeping one set of moral beliefs over another. Now suppose somebody offers to reward us for changing our views, or punish us for not changing. Should we change our views?

To go from the philosophical to the concrete: There are people in the world who are fanatics who are largely committed to some reading of the Bible/Koran/Little Green Book of Colonel Gaddafi/juche ideology of the Great Leader/whatever. Some of those people have armies and nuclear weapons. They can bring quite a lot of pressure to bear on other individuals to change their views to resemble those of the fanatic.

If rationalism can't supply powerful reasons to maintain a non-fanatical worldview in the face of pressure to self-modify, that's an objection to rationalism. Conversely, altering the moral beliefs of fanatics with access to nuclear weapons strikes me as an extremely important practical project. I suspect similar considerations will apply if you consider powerful unfriendly AIs.

This reminds me of that line of Yeats, that "the best lack all conviction, while the worst are full of passionate intensity." Ideological differences sometimes culminate in wars, and if you want to win those wars, you may need something better than "we have our morals and they have theirs."

To sharpen the point slightly: There's an asymmetry between the rationalists and the fanatics, which is that the rationalists are aware that they don't have a rational justification for their terminal values, but the fanatic does have a [fanatical] justification. Worse, the fanatic has a justification to taboo thinking about the problem, and the rationalist doesn't.

Comment author: Garren 06 June 2011 06:03:10AM 0 points [-]

I think the worry here is that realizing 'right' and 'wrong' are relative to values might make us give up our values. Meanwhile, those who aren't as reflective are able to hold more strongly onto their values.

But let's look at your deep worry about fanatics with nukes. Does their disregard for life have to also be making some kind of abstract error for you to keep and act on your own strong regard for life?

Comment author: BobTheBob 05 June 2011 07:59:34PM 1 point [-]

For example, if you and I both consider a practice morally right and we do so because it measures up that way against the standard of improving 'the well-being of conscious creatures,' then there's a bit more going on than it just being right for you and me.

OK, but what I want to know is how you react to some person - whose belief system is internally consistent - who has just, say, committed a gratuitous murder. Are you committed to saying that there are no objective grounds to sanction him - that there is no sense in which he ought not to have done what he did (assuming his belief system doesn't inveigh against him offending yours)?

Or, to put the point another way, isn't having truth as its goal part of the concept of belief? [...] But if this is fair I'm back to wondering where the ought comes from.

Perhaps it comes from the way you view the concept of belief as implying a goal?

Touché.

Look, what I'm getting at is this. I assume we can agree that

"68 + 57 = 125" is true if and only if 68 + 57 = 125

This being the case, if A, seriously pondering the nature of the compulsion behind mathematical judgement, should ask, "Why ought I to believe that 68 + 57 = 125?", and B answers, "Because it's true", then B is not really saying anything beyond, "Because it does". B does not answer A's question.

If the substantive answer is something along the lines that it is a mathematical fact, then I am interested to know how you conceive of mathematical facts, and whether there mightn't be moral facts of generally the same ilk (or if not, why). But if you want somehow to reduce this to subjective goals, then it looks to me that mathematics falls by the wayside - you'll surely allow this looks pretty dubious at least superficially.

Comment author: Garren 06 June 2011 05:54:12AM *  0 points [-]

OK, but what I want to know is how you react to some person - whose belief system is internally consistent - who has just, say, committed a gratuitous murder. Are you committed to saying that there are no objective grounds to sanction him

There's an ambiguity here. A standard can make objective judgments, without the selection of that standard being objective. Like meter measurements.

Such a person would be objectively afoul of a standard against randomly killing people. But let's say he acted according to a standard which doesn't care about that; we wouldn't be able to tell him he did something wrong by that other standard. Nor could we tell him he did something wrong according to the one, correct standard (since there isn't one).

But we can tell him he did something wrong by the standard against randomly killing people. And we can act consistently with that standard by sanctioning him. In fact, it would be inconsistent for us to give him a pass.

if A, seriously pondering the nature of the compulsion behind mathematical judgement, should ask, "Why ought I to believe that 68 + 57 = 125?", and B answers, "Because it's true", then B is not really saying anything beyond, "Because it does". B does not answer A's question.

Unless A was just asking to be walked through the calculation steps, then I agree B is not answering A's question.

But if you want somehow to reduce this to subjective goals, then it looks to me that mathematics falls by the wayside - you'll surely allow this looks pretty dubious at least superficially.

I'm not sure I'm following the argument here. I'm saying that all normativity is hypothetical. It sounds like you're arguing there is a categorical 'ought' for believing mathematical truths because it would be very strange to say we only 'ought' to believe 2 + 2 = 4 in reference to some goal. So if there are some categorical 'oughts,' there might be others.

Is it something like that?

If so, then I would offer the goal of "in order to be logically consistent." There are some who think moral oughts reduce to logical consistency, so we ought act in a certain way in order to be logically consistent. I don't have a good counter-argument to that, other than asking to examine such a theory and wondering how being able to point out a logical consistency is going to rein in people with desires that run counter to it any better than relativism can.

Comment author: BobTheBob 04 June 2011 03:24:32AM 2 points [-]

Taking your thoughts out of order,

Would it be fair to extrapolate this, and say that individual variation in value sets provides a good explanation of the pattern we see of agreement and disagreement between individuals as regards moral values - and possibly in quite different domains as well (politics, aesthetics, gardening)?

Yes.

What I was getting at is that this looks like complete moral relativism - 'right for me' is the only right there is (since you seem to be implying there is nothing interesting to be said about the process of negotiation which occurs when people's values differ). I'm understanding that you're willing to bite this bullet.

I think that's generally the job of normative ethics, and metaethics is a little more open ended than that. I do grant that many people think the point of ethical philosophy in general is to identify categorical imperatives, not give a pluralistic reduction.

I take your point here. I may be conflating ethical and meta-ethical theory. I had in mind theories like Utilitarianism or Kantian ethical theory, which are general accounts of what it is for an action to be good, and do not aim merely to be accurate descriptions of moral discourse (would you agree?). If we're talking about a defence of, say, non-cognitivism, though, maybe what you say is fair.

No, I wouldn't say that. It would be a little odd to say anyone who doesn't hold a belief that 68 + 57 equals 125 is neglecting some cosmic duty.

This is fair.

Instead, I would affirm: In order to hold a mathematically correct belief when considering 68 + 57, we are obligated to believe it equals 125 or some equivalent expression.

This is an interesting proposal, but I'm not sure what it implies. Is it possible for a rational person to strive to believe anything but the truth? Whether in math or anything else, doesn't a rational person always try to believe what is correct? Or, to put the point another way, isn't having truth as its goal part of the concept of belief? If so, I suggest this collapses to something like

*When considering 68 + 57, we are obligated to believe it equals 125 or some equivalent expression.

or, more plausibly,

*When considering 68 + 57, we ought to believe it equals 125 or some equivalent expression.

But if this is fair I'm back to wondering where the ought comes from.

Comment author: Garren 04 June 2011 06:06:10AM *  1 point [-]

What I was getting at is that this looks like complete moral relativism - 'right for me' is the only right there is

While it is relativism, the focus is a bit different from 'right for me.' More like 'this action measures up as right against standard Y' where this Y is typically something I endorse.

For example, if you and I both consider a practice morally right and we do so because it measures up that way against the standard of improving 'the well-being of conscious creatures,' then there's a bit more going on than it just being right for you and me.

Or if I consider a practice morally right for the above reason, but you consider it morally wrong because it falls afoul of Rawls' theory of justice, there's more going on than it just being right for me and wrong for you. It's more like I'm saying it's right{Harris standard} and you're saying it's wrong{Rawls standard}. (...at least as far as cognitive content is concerned; we would usually also be expressing an expectation that others adhere to the standards we support.)

Of course the above are toy examples, since people's values don't tend to line up neatly with the simplifications of philosophers.
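The right{standard} notation above can be read as treating 'right' as a two-place predicate: right(action, standard) rather than right(action). Here is a minimal sketch of that reading; the standards and their verdicts are invented toy proxies, not faithful renderings of Harris or Rawls:

```python
# 'right' as a two-place relation: right(action, standard).
# Both standards below are toy stand-ins for illustration only.
def harris_standard(action):
    """Toy proxy: does the action improve the well-being of conscious creatures?"""
    return action.get("wellbeing_delta", 0) > 0

def rawls_standard(action):
    """Toy proxy: does the action satisfy a fairness constraint?"""
    return action.get("fair", False)

def is_right(action, standard):
    """An action is never right simpliciter; only right relative to a standard."""
    return standard(action)

practice = {"wellbeing_delta": 5, "fair": False}
# right{Harris} and wrong{Rawls} at once - no contradiction, because the
# two claims relate the same action to different standards.
print(is_right(practice, harris_standard))  # True
print(is_right(practice, rawls_standard))   # False
```

The apparent disagreement dissolves into two compatible claims once each verdict names the standard it was made against.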

(since you seem to be implying there is nothing interesting to be said about the process of negotiation which occurs when people's values differ).

It's not apparent that values differ just because judgments differ, so there's still a lot of interesting work to find out if disagreements can be explained by differing descriptive beliefs. But, yes, once a disagreement is known to result from a pure difference in values, there isn't a rational way to resolve it. It's like Luke's 'tree falling' example; once we know two people are using different definitions of 'sound,' the best we can do is make people aware of the difference in their claims.

I had in mind theories like Utilitarianism or Kantian ethical theory, which are general accounts of what it is for an action to be good, and do not aim merely to be accurate descriptions of moral discourse (would you agree?).

Yep. While those are interesting standards to consider, it's pretty clear to me that real world moral discourse is wider and more messy than any one normative theory. We can simply declare a normative theory as the moral standard — plenty of people have! — but the next person whose values are a better match for another normative theory is just going to disagree. On what basis do we find that one normative theory is correct when, descriptively, moral pluralism seems to characterize moral discourse?

Is it possible for a rational person to strive to believe anything but the truth?

If being rational consists in doing what it takes to fulfill one's goals (I don't know what the popular definition of 'rationality' is on this site), then it is still possible to be rational while holding a false belief, if a false belief helps fulfill one's goals.

Now typically, false beliefs are unhelpful in this way, but I know at least Sinnott-Armstrong has talked about an 'instrumentally justified' belief that can go counter to having a true belief. The example I've used before is an Atheist married to a Theist whose goal of having a happy marriage would in fact go better if she could take a belief-altering pill so she would falsely take on her spouse's belief in God.

Or, to put the point another way, isn't having truth as its goal part of the concept of belief? [...] But if this is fair I'm back to wondering where the ought comes from.

Perhaps it comes from the way you view the concept of belief as implying a goal?

Comment author: BobTheBob 03 June 2011 05:14:28PM 1 point [-]

Just to clarify where you stand on norms: Would you say a person is obligated by facts woven into the universe to believe that 68 + 57 = 125 ? (ie, are we obligated in this sense to believe anything?)

To stick my own neck out: I am a realist about values. I think there are facts about what we ought to believe and do. I think you have to be, to capture mathematical facts. This step taken, there's no further commitment required to get ethical facts. Obviously, though, there are epistemic issues associated with the latter which are not associated with the former.

Morality as the expression of pluralistic value sets (and the hypothetical imperatives which go along with them) is a very neat explanation of the pattern we see of agreement, disagreement, and partially successful deliberation.

Would it be fair to extrapolate this, and say that individual variation in value sets provides a good explanation of the pattern we see of agreement and disagreement between individuals as regards moral values - and possibly in quite different domains as well (politics, aesthetics, gardening)?

You seem to be suggesting meta-ethics aims merely to give a descriptively adequate characterisation of ethical discourse. If so, would you at least grant that many see its goal (roughly) as giving a general characterisation of moral rightness, which we all ought to strive for?

Comment author: Garren 03 June 2011 06:58:38PM 1 point [-]

Would you say a person is obligated by facts woven into the universe to believe that 68 + 57 = 125 ? (ie, are we obligated in this sense to believe anything?)

No, I wouldn't say that. It would be a little odd to say anyone who doesn't hold a belief that 68 + 57 equals 125 is neglecting some cosmic duty. Instead, I would affirm:

In order to hold a mathematically correct belief when considering 68 + 57, we are obligated to believe it equals 125 or some equivalent expression.

(I'm leaving 'mathematically correct' vague so different views on the nature of math are accommodated.)

In other words, the obligation relies on a goal. Or we could say normative answers require questions. Sometimes the implied question is so obvious, it seems strange to bother identifying it.

Would it be fair to extrapolate this, and say that individual variation in value sets provides a good explanation of the pattern we see of agreement and disagreement between individuals as regards moral values - and possibly in quite different domains as well (politics, aesthetics, gardening)?

Yes.

You seem to be suggesting meta-ethics aims merely to give a descriptively adequate characterisation of ethical discourse. If so, would you at least grant that many see its goal (roughly) as giving a general characterisation of moral rightness, which we all ought to strive for?

I think that's generally the job of normative ethics, and metaethics is a little more open ended than that. I do grant that many people think the point of ethical philosophy in general is to identify categorical imperatives, not give a pluralistic reduction.

Comment author: Vladimir_Nesov 02 June 2011 02:52:27AM 4 points [-]

The problem is not that there is no way to identify 'good' or 'right' (as used intuitively, without tabooing) with a certain X. The problem is that X is huge and complicated and we don't (yet) have access to its structure.

Strictly speaking, we can exhibit any definition of "good", even one that doesn't make any of the errors you pointed out, and still ask "Is it good?". The criteria for exhibiting a particular definition are ultimately non-rigorous, even if the selected definition is, so we can always examine them further.

Moore's argument might fail in the unintended use case of post-FAI morality not because at some point there might be no more potential for asking the question, but because, as with "Does 2+2 equal 4?", there is a point at which we are certain enough to turn to other projects, even if in principle some uncertainty and lack of clarity in the intended meaning remains. It's not at all clear this will ever happen to morality.

Comment author: Garren 03 June 2011 04:11:22PM *  -1 points [-]

Strictly speaking, we can exhibit any definition of "good", even one that doesn't make any of the errors you pointed out, and still ask "Is it good?".

Moore was correct that no alternate concrete meaning is identical to 'good'; his mistake was thinking that 'good' had its own concrete meaning. As Paul Ziff put it, 'good' means 'answers to an interest', where the interest is semantically variable.

In math terms, the open question argument would be like asking the value of f(z) and when someone answers f(3), pointing out that f(z) is not the same thing as f(3).

I think the 'huge and complicated' X that Luke mentions is supposed to be the set of all inputs to f(z) that a given person is disposed to use. Or maybe the aggregate of all such sets for people.
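The f(z)/f(3) analogy can be spelled out a little further: asking 'but is f(3) really f(z)?' confuses a function with one of its values. A sketch, with an arbitrary f and an arbitrary interest chosen purely for illustration:

```python
# Ziff's reading: 'good' means 'answers to an interest', with the interest
# left semantically variable - like a function awaiting its argument.
def good(interest, thing):
    """Toy rendering: 'thing is good' evaluated relative to some interest."""
    return interest(thing)

# Moore's open question, on this view, amounts to noting that an
# unapplied function is not identical to any one of its values:
f = lambda z: z * z
# f (the rule) is a different object from f(3) (one value of the rule),
# even though f(3) == 9 is a closed question once the argument is fixed.
print(f is not f(3))  # True
print(f(3))           # 9
```

So the question stays 'open' only as long as the interest slot is left unfilled; fixing the argument closes it.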

Comment author: Peterdjones 02 June 2011 10:51:48PM 0 points [-]

Uh-huh. Is that an issue of commission rather than omission? Are people not obligated to refrain from theft, murder, and rape, their inclinations notwithstanding?

Comment author: Garren 03 June 2011 12:43:55AM *  0 points [-]

If by 'obligated' you mean it's demanded by those who fear being the targets of those actions, yes. Or if you mean exercising restraint may be practically necessary to comply with certain values those actions thwart, yes. Or if you mean doing those things is likely to result in legal penalties, that's often the case.

But if you mean it's some simple fact that we're morally obligated to restrain ourselves from doing certain things, no. Or at least I don't see how that could even possibly be the case, and I already have a theory that explains why people might mistakenly think such a thing is the case (they mistake their own values for facts woven into the universe, so hypothetical imperatives look like categorical imperatives to them).

The 'commission' vs. 'omission' thing is often a matter of wording. Rape can be viewed as omitting to get proper permission, particularly when we're talking about drugging, etc.
