Jack comments on Morality open thread - Less Wrong

Post author: Will_Newsome 08 July 2012 02:30PM




Comment author: Trevor_Caverly 09 July 2012 04:27:01AM 0 points

Summary: I'm wondering whether anyone (especially moral anti-realists) would disagree with the statement, "The utility of an agent can only depend on the mental state of that agent".

I have had little success in my attempts to devise a coherent moral realist theory of meta-ethics, and am no longer very sure that moral realism is true, but there is one statement about morality that seems clearly true to me: "The utility of an agent can only depend on the mental state of that agent." Call this statement S. By utility I roughly mean how good or bad things are, from the perspective of the agent. The following thought experiment gives a concrete example of what I mean by S.

Imagine a universe with only one sentient thing, a person named P. P desires that there exist a 1 meter cube of gold somewhere within P's lightcone. P has a (non-sentient) oracle that ey trusts completely to provide either an accurate answer or no information for whatever question ey asks. P asks it whether a 1 meter gold cube exists within eir lightcone, and the oracle says yes.

It seems clear that whether the cube actually exists cannot possibly be relevant to the utility of P, and therefore the utility of the universe. P is free to claim that eir utility depends upon the existence of the cube, but I believe P would be mistaken. P certainly desires the cube to exist, but I believe that it cannot be part of P's utility function. (I suppose it could be argued that in this case P is also mistaken about eir desire, and that desires can only really be about one's own mental state, but that's not important to my argument.) Similarly, P would be mistaken to claim that anything not part of eir mind was part of eir utility function.

I'm not sure whether S in itself implies a weak form of moral realism, since it implies that statements of the form "x is not part of P's utility function" can be true. Would these statements count as ethical statements in the necessary way? It does not seem to imply that there is any objective way to compare different possible worlds, though, so it doesn't hurt the anti-realist position much.

Still, it does seem to provide a way to create a sort of moral partition of the world, by breaking it into individual morally relevant agents (no, I don't have a good definition for "morally relevant agent") which can be examined separately, since their utility can only depend on their map of the world and not the world itself. The objective utility of the universe can only depend on the separate utilities in each of the partitions. This leaves the question of whether it makes any sense to talk about an objective utility of the universe.

So, does anyone disagree with S? If you agree with S, are you an anti-realist?

Comment author: Jack 09 July 2012 01:17:25PM 1 point

I'm a moral anti-realist. I don't see a justification for S. If there are facts about "how good or bad things are, from the perspective of the agent" it seems like those facts, for humans, are often facts about the 'real world'. I also don't much see what this has to do with moral realism.

Regarding objective utility: are you just talking about adding up utilities of all agent-like things? I suppose you could call such a figure "objective utility" but that doesn't mean such a figure is of any moral importance. I doubt I would care much about it.

Comment author: Trevor_Caverly 09 July 2012 03:32:30PM 0 points

This is related to moral realism in that I suspect moral realists would be more likely to accept S, and S arguably provides some moral statements that are true. But it's mainly just something that occurred to me while thinking about moral realism.

I don't really know what I'm talking about when I say objective utility; I am just claiming that if such a thing exists or makes sense to talk about, it can only depend on the states of individual minds, since each mind's utility can only depend on the state of that mind, and nothing outside of the utilities of minds can be ethically relevant.

Comment author: Eugine_Nier 10 July 2012 05:57:10AM 1 point

This is related to moral realism in that I suspect moral realists would be more likely to accept S, and S arguably provides some moral statements that are true.

I'm a moral realist and I find your claim nearly as absurd as asserting that 2+2=3, and I suspect nearly all moral realists would share my sentiment (even if they wouldn't express it quite as strongly).