cousin_it comments on Normativity and Meta-Philosophy - Less Wrong

Post author: Wei_Dai 23 April 2013 08:35PM


Comment author: cousin_it 24 April 2013 01:03:03AM 10 points

If we try to translate sentences involving "should" into descriptive sentences about the world, they will probably sound like "action A increases the value of utility function U". If I were a consistent utility maximizer and U were my utility function, then believing such a statement would make me take action A. No further verbal convincing would be necessary.

Since we are not consistent utility maximizers, we run an approximate implementation of that mechanism which is vulnerable to verbal manipulation, often by sentences involving "should". So the murkiness in the meaning of "should" is proportional to the difference between us and utility maximizers. Does that make sense?

(It may or may not be productive to describe a person as a utility maximizer plus error. But I'm going with that because we have no better theory yet.)
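The maximizer-plus-error picture above can be made concrete in a few lines of Python (a sketch of my own; the actions and numbers are made up for illustration):

```python
def choose(actions, utility):
    """A consistent maximizer simply takes the action that maximizes utility.

    For such an agent, believing "A has the highest value of U" just *is*
    deciding to do A; no further verbal convincing plays any role.
    """
    return max(actions, key=utility)

# Illustrative utility function U over three made-up actions.
U = {"donate": 10, "defect": 3, "wait": 0}
best = choose(U, U.get)
print(best)  # -> donate
```

The point of the sketch is the absence of any slot where an exhortation could enter: the agent's behavior is fully fixed by U and its beliefs.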

Comment author: TimS 24 April 2013 01:11:16AM 2 points

I agree that all "ought" statements can be easily translated into "is" statements about maximizing some utility function. But in practice, should-statements are often disguised exhortations to adopt a particular utility function.

I think the question raised by the OP is something like: why should we take the exhortation seriously if we have not already adopted the particular utility function?

Comment author: bogus 24 April 2013 01:55:19PM 2 points

If agents have different utility functions / conative ambitions / "shoulds", they will presumably need to engage in some kind of negotiation in order to reach a compromise among their values and an efficient outcome. Presumably, ethical disputes can function as a way of reaching such outcomes - some accounts of ethics are quite clear in describing ethical reasoning as being very much about such a balancing of "right versus right". Even Kantian ethics can be seen in such terms, although what we would call "rights" Kant would perhaps refer to as "principles of practical reason".

Comment author: cousin_it 24 April 2013 01:36:50AM 1 point

One could try to create a model of agents that respond to such exhortations. Maybe such agents could be uncertain about their own utility function, as in Dewey's value learning paper.
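A toy version of that idea (my own construction, only loosely inspired by the value-learning setup mentioned above): the agent keeps a credence distribution over candidate utility functions and maximizes expected utility under that uncertainty, so an exhortation can act as evidence that shifts the distribution.

```python
# Two hypothetical utility functions the agent might have; all names and
# numbers here are illustrative.
candidate_utilities = {
    "hedonist": {"party": 5, "study": 1},
    "scholar":  {"party": 1, "study": 5},
}
belief = {"hedonist": 0.5, "scholar": 0.5}  # prior over its own utility function

def expected_utility(action):
    # Average each hypothesis's utility for the action, weighted by credence.
    return sum(p * candidate_utilities[h][action] for h, p in belief.items())

def best_action(actions):
    return max(actions, key=expected_utility)

print(best_action(["party", "study"]))  # tie (EU 3 vs 3); max keeps "party"

# An exhortation ("you should study!") modeled crudely as evidence that
# shifts credence toward the "scholar" hypothesis:
belief = {"hedonist": 0.2, "scholar": 0.8}
print(best_action(["party", "study"]))  # now "study" (EU 1.8 vs 4.2)
```

On this model, "should"-talk moves behavior not by verbal manipulation but by updating the agent's beliefs about which utility function it has.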

Comment author: [deleted] 24 April 2013 01:15:31AM 0 points

"action A increases the value of utility function U"

Your comment implies that sentences making reference to utility functions are sentences about the world. Do you mean to say that "action A increases the value of utility function U" involves no normative content, that it is purely a description of a state of affairs?

Comment author: CoffeeStain 24 April 2013 05:22:03AM 0 points

Is there a theory of normativity that claims that normative content does not reduce to states of affairs?

EDIT: Well of course there would be, under section 3 of the OP's link. Unfortunately, I could use help dissecting language such as:

Moore, whose metaethical views are taken as the archetype of a non-naturalist position, leaves us with two independent legacies. One is the non-reductionist metaphysical doctrine that the normative is sui generis and unanalysable into non-normative components or in purely non-normative terms, leading some writers to classify views as forms of ‘non-naturalism’ on this basis. The other legacy is the epistemological doctrine of intuitionism: that some substantive or synthetic normative truths are knowable a priori.

Comment author: [deleted] 24 April 2013 12:46:50PM 1 point

One is the non-reductionist metaphysical doctrine that the normative is sui generis and unanalysable into non-normative components or in purely non-normative terms, leading some writers to classify views as forms of ‘non-naturalism’ on this basis.

means Moore thinks you can't reduce 'ought' claims to 'is' claims, roughly.

The other legacy is the epistemological doctrine of intuitionism: that some substantive or synthetic normative truths are knowable a priori.

means that Moore thinks that you have access to informative moral truths (like 'it is wrong to kill wantonly' and not just 'murder is illegal homicide') in such a way that doesn't make reference to any particular experiences or contingent facts about the world. So Moore thinks that you can know 'it is wrong to kill wantonly' independently of knowing any of the specific facts about human beings or human societies or anything like that.

But right, non-naturalism is a possibility for normative theories (and not a particularly unusual one, since all Kantians would count as non-naturalists). I'm not a non-naturalist myself, but I suspect cousin_it isn't getting away with eliminating the 'ought' in referring to utility functions, but just hiding it in the utility function. But I'm not well versed in that sort of thing, so I don't think I'm quite entitled to the criticism.

Comment author: Wei_Dai 24 April 2013 06:41:12AM 0 points

If we try to translate sentences involving "should" into descriptive sentences about the world, they will probably sound like "action A increases the value of utility function U".

As you know, there is no commonly agreed upon way of stating "action A increases the value of utility function U" as math (otherwise decision theory would be solved). Given that, what does it mean when I say "I think we should express 'action A increases the value of utility function U' in math as X", which seems like a sensible statement? I don't see how the "should" in this sentence can be translated into something that sounds like "action A increases the value of utility function U" without making the sentence mean something obviously different.

Comment author: Pentashagon 24 April 2013 07:44:20PM 0 points

Given that, what does it mean when I say "I think we should express 'action A increases the value of utility function U' in math as X", which seems like a sensible statement?

I think it makes sense as a statement about decision theories. How would the choice of mathematical expression for 'action A increases the value of utility function U' affect actual utility? Only by affecting which actions are chosen; in other words, by selecting a particular (class of) decision theory which maximizes utility due in part to its expression of what "should" means mathematically.
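A toy illustration of that point (my own construction, not from the thread): two candidate formalizations of "action A increases the value of U" - expected-utility maximization versus worst-case (maximin) - can pick different actions from the same data, which is exactly how the choice of expression cashes out in actual utility.

```python
# action -> list of (probability, utility-under-U) pairs; numbers are made up.
outcomes = {
    "risky": [(0.9, 10), (0.1, -50)],
    "safe":  [(1.0, 1)],
}

def expected_utility(a):
    return sum(p * u for p, u in outcomes[a])

def worst_case(a):
    return min(u for _, u in outcomes[a])

pick_eu = max(outcomes, key=expected_utility)  # "risky": EU 4 vs 1
pick_mm = max(outcomes, key=worst_case)        # "safe": worst case 1 vs -50
print(pick_eu, pick_mm)
```

Each rule is a different mathematical rendering of the same informal "should", and they recommend different actions here.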

Comment author: cousin_it 24 April 2013 12:00:38PM 0 points

So you're saying we need to mathematically model our desire for a mathematical model of X? :-) That might be a line of attack, but I don't yet see what it gives us, compared to attacking the problem directly...

Comment author: MindTheLeap 24 April 2013 06:41:53AM -1 points

I read

"action A increases the value of utility function U"

to mean (1) "the utility function U increases in value from action A". Did you mean (2) "under utility function U, action A increases (expected) value"? Or am I missing some distinction in terminology?

The alternative meaning (2) makes "should" (much like "ought") dependent on the utility function used. Normativity might suggest that we all share views on utility that have fundamental similarities. In my mind, at least, the usual controversy over whether utility functions (and the moral claims derived from them, i.e., "should"s and "ought"s) can be objectively true remains.

Edit: formatting.
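The two readings above can come apart, which a small sketch (illustrative numbers only, my own) makes concrete: reading (2) is about *expected* value under U before acting, while reading (1) is about the *realized* change in U's value after the fact.

```python
# action -> list of (probability, utility-under-U) pairs; numbers are made up.
outcomes = {
    "gamble": [(0.5, 10), (0.5, -10)],
    "safe":   [(1.0, 1)],
}

def expected_value(action):
    return sum(p * u for p, u in outcomes[action])

# Under reading (2), "safe" is what one "should" do under U (EU 1 vs 0),
# even though a lucky "gamble" can still satisfy reading (1) by actually
# raising U's realized value by 10.
assert expected_value("gamble") == 0.0
assert expected_value("safe") == 1.0
```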