Trevor_Caverly comments on Morality open thread - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Summary: I'm wondering whether anyone (especially moral anti-realists) would disagree with the statement, "The utility of an agent can only depend on the mental state of that agent".
I have had little success in my attempts to devise a coherent moral realist theory of meta-ethics, and am no longer very sure that moral realism is true, but one statement about morality seems clearly true to me: "The utility of an agent can only depend on the mental state of that agent". Call this statement S. By utility I roughly mean how good or bad things are, from the perspective of the agent. The following thought experiment gives a concrete example of what I mean by S.
Imagine a universe with only one sentient thing, a person named P. P desires that there exist a 1 meter cube of gold somewhere within P's lightcone. P has a (non-sentient) oracle that ey trusts completely to provide either an accurate answer or no information for whatever question ey asks. P asks it whether a 1 meter gold cube exists within eir lightcone, and the oracle says yes.
It seems clear that whether the cube actually exists cannot possibly be relevant to the utility of P, and therefore to the utility of the universe. P is free to claim that eir utility depends upon the existence of the cube, but I believe P would be mistaken. P certainly desires the cube to exist, but I believe that desire cannot be part of P's utility function. (I suppose it could be argued that in this case P is also mistaken about eir desire, and that desires can only really be about one's own mental state, but that's not important to my argument.) Similarly, P would be mistaken to claim that anything not part of eir mind was part of eir utility function.
I'm not sure whether S in itself implies a weak form of moral realism, since it implies that statements of the form "x is not part of P's utility function" can be true. Would these statements count as ethical statements in the necessary way? It does not seem to imply that there is any objective way to compare different possible worlds though, so it doesn't hurt the anti-realist position much. Still, it does seem to provide a way to create a sort of moral partition of the world, by breaking it into individual morally relevant agents (no, I don't have a good definition for "morally relevant agent") which can be examined separately, since their utility can only depend on their map of the world and not the world itself. The objective utility of the universe can only depend on the separate utilities in each of the partitions. This leaves the question of whether it makes any sense to talk about an objective utility of the universe.
So, does anyone disagree with S? If you agree with S, are you an anti-realist?
If you truly believe this proposition, as opposed to merely having belief in belief, you should stop reading LessWrong right now. If you keep reading LessWrong, you are likely to get better at rationality, and in particular at telling whether something is true or false, which will make it harder for you to maintain comfortable beliefs and thus will vastly lower your utility by your definition.
I think you're misunderstanding what I meant. I'm using "Someone's utility" here to mean only how good or bad things are for that person. I am not claiming that people should (or do) only care about their own well-being, just that their well-being only depends on their own mental states. Do you still disagree with my statement given this definition of utility?
If someone kidnapped me and hooked me up to an experience machine that gave me a simulated perfect life, and then tortured my family for the rest of their lives, I claim that this would be good for me. It would be bad overall because people would be harmed (far in excess of my gains). If I were given this as an option I would not take it, because I would be horrified by the idea and because I believe it would be morally wrong, but not because I believe I would be worse off if I took the deal. If someone claimed that taking this deal would be bad for their own well-being, I believe that they would be mistaken.
If someone claimed that the existence of a gold cube in a section of the universe where it would never be noticed by anyone or affect any sentient things could be a morally good thing, I would likewise claim that they are mistaken. I claim this, because regardless of how much they want the cube to exist, or how good they believe the existence of the cube to be, no one's well-being can depend on the existence of the cube. At most, someone's well-being can depend on their belief in the existence of the cube.
I had assumed you meant something like this.
To see if I'm understanding you correctly, would you be in favor of wireheading the entire human race?
I would not be in favor of wireheading the human race, but I don't see how that is connected to S. If wireheading all of humanity is bad, it seems clear that it is bad because it is bad for the people being wireheaded. If this is a wireheading scenario where humanity goes extinct as a result of wireheading, then this is also bad because of the hypothetical people who would have valued being alive. There is nothing about S that stops someone from comparing the normal life they would live with a wireheaded life and saying they would prefer the normal life. This is because these two choices involve different mental states for the person, and S does not in itself place any restrictions on which mental states would be better for you to have. Rather, it states that your own mental states are the only things that can be good or bad for you.
If you think S is false, you could additionally claim that wireheading humanity is bad because the fact that humanity is wireheaded is something that almost everybody believes is bad for them, and so if humanity is wireheaded, that is very bad for many people, even if these people are not aware that humanity is wireheaded. But it seems very easy to believe that wireheading is bad for humanity without believing this claim.
Just to make sure I understand your position: imagine two universes, U1 and U2, like the one in my original post, where P1 and P2 are unsure whether the gold cube exists. In U1 the cube exists, in U2 it does not, but they are otherwise identical (or close enough to identical that P1 and P2 have identical brain states). The Ps truly desire that the cube exist, as much as anyone can desire a fact about the universe to be true. Do you claim that P1 is better off than P2? If so, do you really think that this being possible is as obvious as 2 + 2 ≠ 3? If not, why would someone's well-being be able to depend on something other than their mental states in some situations but not this one? To me it seems very obvious that P1 and P2 have exactly equally good lives, and I am truly surprised that other people's intuitions and beliefs lean strongly the other way.
So would you argue that P2 shouldn't investigate whether the cube exists, because then he would find out that it doesn't and thus become worse off?
Yes. P2 finding this out would harm him, and couldn't possibly benefit anyone else, so if searching would lead him to believe the cube doesn't exist, it would be ethically better if he didn't search. But the harm to P2 is a result of his knowledge, not the mere fact of the cube's nonexistence. Likewise, P1 should investigate, assuming he would find the cube. The reason for this difference is that investigating would have a different effect on the mental states of P1 than it would on the mental states of P2. If the cube in U1 can't be found by P1, then the asymmetry is gone, and neither should investigate.
Very well, I repeat the advice I gave you above.