Comment author: bryjnar 09 July 2012 05:07:32PM *  3 points

Well, again, you're kind of just asserting your claim. Prima facie, it seems pretty plausible that whatever function evaluates how well off a person is could take into account things outside of their mental states. Looking after one's family isn't often thought of as especially altruistic, because it's something that usually matters very deeply to the person, even bracketing morality.

Your second paragraph is genuinely circular: the whole argument was about whether the example shows that S is false, but you appeal to the fact that

> whether the cube exists in this example is completely decoupled from whether P believes the cube exists

This is only relevant if we already think S is true. You can't use it to support that very argument!

Look, if it helps, you can define utility*, which is utility that doesn't depend on anything outside the mental state of the agent, as opposed to utility**, which does. Then you can get frustrated at all these silly people who seem to mistakenly think they want to maximize their utility** instead of their utility*. Or just perhaps they actually do know what they want? Utility* is a perfectly fine concept, it's just not one that is actually of much use in relation to human decision-making.
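
To fix ideas (just a sketch, and nothing hangs on the notation): write $M_A$ for the complete mental state of an agent $A$ and $W$ for the state of the rest of the world. Then the distinction is roughly

$$u^{*}(M_A) \quad\text{versus}\quad u^{**}(M_A, W),$$

i.e. utility* is a function of the agent's mental state alone, while utility** may also depend on the world outside it.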

Edit: remember to escape *s!

Edit2: quoting fail.

Comment author: Trevor_Caverly 09 July 2012 09:06:51PM *  -1 points

> Look, if it helps, you can define utility*, which is utility that doesn't depend on anything outside the mental state of the agent, as opposed to utility**, which does. Then you can get frustrated at all these silly people who seem to mistakenly think they want to maximize their utility** instead of their utility*.

Someone can want to maximize utility**, and this is not necessarily irrational, but if they do this they are choosing to maximize something other than their own well-being.

Perhaps they are being altruistic and trying to improve someone else's well-being at the expense of their own, as in your torture example. In this example, I don't believe that most people who choose to save their family believe that they are maximizing their own well-being; I think they realize they are sacrificing their well-being (by maximizing utility** instead of utility*) in order to increase the well-being of their family members. I think that anyone who does believe they are maximizing their own well-being when saving their family is mistaken.

Perhaps they do not have any legitimate reason for wanting something other than their own well-being. Going back to the gold cube example, think of why P wants the cube to exist. P could want it to exist because knowing that gold cubes exist makes them happy. If this is the only reason, then P would probably be perfectly happy to accept a deal where their mind is altered so that they believe the cube exists, even though it does not. If, however, P thinks there is something "good" about the cube existing, independent of their mind, they would (probably) not take this deal. Both of these actions are perfectly rational, given P's beliefs about morality, but in the second case, P is mistaken in thinking that the existence of the cube is good by itself. This is because in either case, after accepting the deal, P's mental state is exactly the same, so P's well-being must be exactly the same. Further, nothing else in this universe is morally relevant, and P was simply mistaken in thinking that the existence of the gold cube was a fundamentally good thing. (There might be other reasons for P to want the cube. Perhaps P just has an inexplicable urge for there to be a cube. In this case it is unclear whether they would take the deal, but taking it would surely still increase their well-being.)

> Well, again, you're kind of just asserting your claim. Prima facie, it seems pretty plausible that whatever function evaluates how well off a person is could take into account things outside of their mental states.

It seems implausible to me that this function could exist independent of a mind or outside of a mind. You seem to be claiming that two people with identical mental states could have different levels of well-being. This seems absurd to me. I realize I am not providing much of an argument for this claim, but the idea that someone's well-being could depend upon something that has no connection with their mental states whatsoever strongly violates my moral intuitions. I expected that other people would share this intuition, but so far no one has said that they do, so perhaps this intuition is unusual. (One could argue that P is correct in believing that the cube has moral value/utility independent of any sentient being, but this seems even more absurd.)

In any case, I think S is basically equivalent to saying that utility (or moral value, however you want to define it) reduces to mental states.

P.S. I think you quoted more than you meant to above.

Comment author: Jack 09 July 2012 01:17:25PM *  1 point

I'm a moral anti-realist. I don't see a justification for S. If there are facts about "how good or bad things are, from the perspective of the agent" it seems like those facts, for humans, are often facts about the 'real world'. I also don't much see what this has to do with moral realism.

Regarding objective utility: are you just talking about adding up utilities of all agent-like things? I suppose you could call such a figure "objective utility" but that doesn't mean such a figure is of any moral importance. I doubt I would care much about it.

In response to comment by Jack on Morality open thread
Comment author: Trevor_Caverly 09 July 2012 03:32:30PM 0 points

This is related to moral realism in that I suspect moral realists would be more likely to accept S, and S arguably provides some moral statements that are true. But it's mainly just something I was thinking about while thinking about moral realism.

I don't really know what I'm talking about when I say objective utility; I am just claiming that if such a thing exists or makes sense to talk about, it can only depend on the states of individual minds, since each mind's utility can only depend on the state of that mind, and nothing outside of the utility of minds can be ethically relevant.

Comment author: Khoth 09 July 2012 11:59:41AM 4 points

I'm not 100% sure what your S means, but I don't think it's true.

If Omega comes along and says "If you want, I'll make a 1m cube of gold somewhere you'll never observe it, and then make you forget all about this offer", then P will accept.

On the other hand, P wouldn't necessarily accept an offer to make him delusionally believe that a cube of gold exists.

In response to comment by Khoth on Morality open thread
Comment author: Trevor_Caverly 09 July 2012 03:21:29PM -1 points

That is true, but not relevant to the point I am trying to make. If P took the first offer, they would end up exactly as well off as if they hadn't received the offer, and if P took the second offer, they would end up better off. The fact that P's beliefs don't correspond with reality does not change this. The reason that P would accept the first offer but not the second is that P believes the universe would be "better" with the cube. P does not think ey will actually be happier (or whatever) accepting offer 1, and if P does think ey will be happier, I think that is an error in moral judgment. The error is in thinking that the cube is morally relevant, when it cannot be, since P is the only morally relevant thing in this universe.

Comment author: bryjnar 09 July 2012 11:52:55AM *  1 point

Your example seems to provide an instance where S is false. You just assert that it isn't like that:

> It seems clear that whether the cube actually exists cannot possibly be relevant to the utility of P

Why?

> P certainly desires the cube to exist, but I believe that it cannot be part of P's utility function.

Again, why? You haven't really said anything about why you'd think that...

Also, it seems pretty clear that things outside of your head can matter. Suppose an evil demon offers you a choice: either

  • your family will be tortured, but you will think that they're fine
  • your family will be fine, but you will think that they're being tortured.

And of course, all memory of the encounter with the demon will be erased.

I think most people would take the second option, and gladly! That seems pretty strong prima facie evidence that stuff outside people's heads matters to them. So I guess I'd disagree with S. Oh, and I'm (sort of) an anti-realist.

Comment author: Trevor_Caverly 09 July 2012 03:15:06PM -1 points

In your example, I agree that almost everyone would choose the second option, but my point is that they will be worse off because they make that choice. It is an act of altruism, not an act which will increase their own utility. (Possibly the horror they would experience in making the first choice would outweigh their future suffering, but once the choice is made they are definitely worse off having made the second choice.)

I say that the cube cannot be part of P's utility function, because whether the cube exists in this example is completely decoupled from whether P believes the cube exists, since P trusts the oracle completely, and the oracle is free to give false data about this particular fact. P's belief about the cube is part of the utility function, but not the actual fact of whether the cube exists.

In response to Morality open thread
Comment author: Trevor_Caverly 09 July 2012 04:27:01AM 0 points

Summary: I'm wondering whether anyone (especially moral anti-realists) would disagree with the statement, "The utility of an agent can only depend on the mental state of that agent".

I have had little success in my attempts to devise a coherent moral realist theory of meta-ethics, and am no longer very sure that moral realism is true, but there is one statement about morality that seems clearly true to me: "The utility of an agent can only depend on the mental state of that agent". Call this statement S. By utility I roughly mean how good or bad things are, from the perspective of the agent. The following thought experiment gives a concrete example of what I mean by S.

Imagine a universe with only one sentient thing, a person named P. P desires that there exist a 1 meter cube of gold somewhere within P's lightcone. P has a (non-sentient) oracle that ey trusts completely to provide either an accurate answer or no information for whatever question ey asks. P asks it whether a 1 meter gold cube exists within eir lightcone, and the oracle says yes.

It seems clear that whether the cube actually exists cannot possibly be relevant to the utility of P, and therefore the utility of the universe. P is free to claim that eir utility depends upon the existence of the cube, but I believe P would be mistaken. P certainly desires the cube to exist, but I believe that it cannot be part of P's utility function. (I suppose it could be argued that in this case P is also mistaken about eir desire, and that desires can only really be about one's own mental state, but that's not important to my argument.) Similarly, P would be mistaken to claim that anything not part of eir mind was part of eir utility function.
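
To state S slightly more formally (this is only a sketch, and nothing hangs on the notation): write $M_A$ for the complete mental state of an agent $A$, and $W$ for the state of everything outside $A$'s mind. Then S is the claim that

$$U_A = f(M_A) \quad\text{rather than}\quad U_A = g(M_A, W),$$

so that two agents (or one agent in two possible worlds) with identical mental states must be exactly equally well off.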

I'm not sure whether S in itself implies a weak form of moral realism, since it implies that statements of the form "x is not part of P's utility function" can be true. Would these statements count as ethical statements in the necessary way? It does not seem to imply that there is any objective way to compare different possible worlds though, so it doesn't hurt the anti-realist position much. Still, it does seem to provide a way to create a sort of moral partition of the world, by breaking it into individual morally relevant agents (no, I don't have a good definition for "morally relevant agent") which can be examined separately, since their utility can only depend on their map of the world and not the world itself. The objective utility of the universe can only depend on the separate utilities in each of the partitions. This leaves the question of whether it makes any sense to talk about an objective utility of the universe.
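
As a sketch only (I am not committed to any particular aggregation rule, and the notation is mine): if the morally relevant agents are $1, \dots, n$ with mental states $m_1, \dots, m_n$, then S would force any objective utility to factor through the individual utilities,

$$U_{\text{universe}} = F\big(u_1(m_1), \dots, u_n(m_n)\big),$$

and whether any particular $F$ (a simple sum, say) makes sense is exactly the question just raised.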

So, does anyone disagree with S? If you agree with S, are you an anti-realist?
