All of Trevor_Caverly's Comments + Replies

Is your position the same as Dennett's position (summarized in the second paragraph of the synopsis here)?

4metaphysicist
Let me try to answer more succinctly. Dennett and I are concerned with different problems; Dennett's is a problem within science proper, while mine is traditionally philosophical. Dennett's conclusion is that "qualia" don't provide introspective access to the functioning of the brain; my conclusion is that our common intuition concerning the existence of qualia is incoherent.
4metaphysicist
I agree with Dennett that qualia don't exist. I disagree that the concept of qualia is basically a remnant of an outmoded psychological doctrine; I think it's an innate idea. Dennett can be criticized for ignoring the subjective nature of qualia. He shows, for example, that reported phenomenal awareness is empirically bogus in that it doesn't correspond to the contents of working memory. I'm concerned with accounting for the subjective nature of the qualia concept. Dennett basically thinks qualia are empirically falsifiable; I think the concept is incoherent.

" 'What is true is already so. The coherent extrapolated volition of God doesn't make it worse' is obviusly false if and only if timeless politics is isomorphic to truth if and only if the tenth virtue of rationality is 'Let me not become attached to the map I may not want' " is obviously false.

Well, it's true.

Also, this is way smarter than the Deepak Chopra quote generator.

Yes. P2 finding this out would harm him, and couldn't possibly benefit anyone else, so if searching would lead him to believe the cube doesn't exist, it would be ethically better if he didn't search. But the harm to P2 is a result of his knowledge, not the mere fact of the cube's nonexistence. Likewise, P1 should investigate, assuming he would find the cube. The reason for this difference is that investigating would have a different effect on the mental states of P1 than it would on the mental states of P2. If the cube in U1 can't be found by P1, then the asymmetry is gone, and neither should investigate.

0Eugine_Nier
Very well, I repeat the advice I gave you above.

I would not be in favor of wireheading the human race, but I don't see how that is connected to S. If wireheading all of humanity is bad, it seems clear that it is bad because it is bad for the people being wireheaded. If this is a wireheading scenario where humanity goes extinct as a result of wireheading, then this is also bad because of the hypothetical people who would have valued being alive. There is nothing about S that stops someone from comparing the normal life they would live with a wireheaded life and saying they would prefer the normal life. T... (read more)

-2Eugine_Nier
So would you argue that P2 shouldn't investigate whether the cube exists, because then he would find out that it doesn't and thus become worse off?

What if you're deciding whether to have sex?

2Viliam_Bur
Eat and sleep.

Have solo sex first.

I think you're misunderstanding what I meant. I'm using "Someone's utility" here to mean only how good or bad things are for that person. I am not claiming that people should (or do) only care about their own well-being, just that their well-being only depends on their own mental states. Do you still disagree with my statement given this definition of utility?

If someone kidnapped me and hooked me up to an experience machine that gave me a simulated perfect life, and then tortured my family for the rest of their lives, I claim that this would be g... (read more)

-2Eugine_Nier
I had assumed you meant something like this. To see if I'm understanding you correctly, would you be in favor of wireheading the entire human race?

No, it isn't. You are claiming that P "really" wants the gold to exist, but you are also claiming that P thinks that at least one of the definitions of "the gold exists" is "the oracle said the gold exists."

I do not claim that. I claim that P believes the cube exists because the oracle says so. He could believe it exists because he saw it in a telescope. Or because he saw it fly in front of his face and then away into space. Whatever reason he has for "knowing" the cube exists has some degree of uncertainty. He is... (read more)

I guess the realism aspect isn't as relevant as I thought it would be. I expected that any realists would believe S, and that anti-realists might or might not. I also think that not believing S would imply anti-realism, but I'm not super confident that that's true.

I would say that P and Q have equal utility until the point where their circumstances diverge, after which of course they would have different utilities. There is no reason to consider future utility when talking about current utility. So it just depends on what section of time you are looking at. If you're only looking at a segment where P and Q have identical brain states, then yes I would say they have the same utility.
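Schematically (an illustrative sketch with a made-up numeric representation of brain states; nothing here is from the thread): the utility of a time segment is a function only of the brain states within that segment, so identical brain-state histories get identical utilities, whatever happens afterwards.

```python
# Sketch: utility over a time segment depends only on the brain states
# within that segment (toy representation; all names are hypothetical).

def segment_utility(brain_states):
    # Momentary well-being summed over the segment.
    return sum(brain_states)

p_history = [1.0, 1.0, 1.0]  # P's brain states up to the divergence
q_history = [1.0, 1.0, 1.0]  # Q's brain states: identical over that segment
p_future, q_future = [-9.0], [2.0]  # circumstances diverge afterwards

# Over the shared segment, P and Q have equal utility by construction.
assert segment_utility(p_history) == segment_utility(q_history)
# Over a segment that includes the divergence, they differ.
print(segment_utility(p_history + p_future))  # -6.0
print(segment_utility(q_history + q_future))  # 5.0
```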

I said that there could be other reasons for P to want the cube to exist. If someone has a desire whose fulfillment will not be good for them in any way, or good for any other sentient being, that's fine, but I do not think that a desire of this type is morally relevant in any way. Further, if someone claimed to have such a desire, knowing that fulfilling it served no purpose other than simply fulfilling it, I would believe them to be confused about what desire is. Surely the desire would have to be at least causing them discomfort, or at least some sort of an... (read more)

I am stipulating that P really truly wants the gold to exist (in the same way that you would want there not to exist a bunch of people who are being tortured, ceteris paribus). Whether P should be trusting the oracle is beside the point. The difference between these scenarios is that you are correct in believing that it is morally bad for the people to be tortured. However, your well-being would not be affected by whether the people are being tortured, only by your belief about how likely this is. Of course, you would still try to stop the torture if you could, eve... (read more)

0mwengler
No, it isn't. You are claiming that P "really" wants the gold to exist, but you are also claiming that P thinks that at least one of the definitions of "the gold exists" is "the oracle said the gold exists." You are flummoxed by the paradox of P feeling just as happy due to a false belief in gold as he would based on a true belief in gold, and you are ignoring the thing that ACTUALLY made him happy: the oracle telling him the gold was real. How surprising should it be that ignoring the real-world causes of something produces paradoxes?

P's happiness doesn't depend on the gold existing in reality, but it does depend on something in reality causing him to believe the gold exists. And if the gold doesn't exist in reality, P's happiness is not changed, but if the reality that led him to believe the gold existed is reversed, if the oracle tells him (truly or falsely) the gold doesn't exist, then his happiness is changed.

I actually have not a clue what this example's connection to moral realism might be, either supporting it or denying it. But I am pretty clear that what you present as a "real mental result without a physical cause because the gold does not matter" is merely a case of you taking a hypothesized fool at his word and ignoring the REAL physical cause of P's happiness or sadness.

Or from a slightly different tack: if P defined "gold exists" as "oracle tells me gold exists," then P's claim that his utility is the gold is equivalent to a claim that his utility is being told there is gold. P's happiness has a real cause in the real world. Because P is an idiot, he misunderstands what that cause means, but even P recognizes that the cause of his happiness is what the oracle told him.

Look, if it helps, you can define utility*, which is utility that doesn't depend on anything outside the mental state of the agent, as opposed to utility**, which does. Then you can get frustrated at all these silly people who seem to mistakenly think they want to maximize their utility** instead of their utility*.

Someone can want to maximize utility**, and this is not necessarily irrational, but if they do this they are choosing to maximize something other than their own well-being.
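To make the distinction concrete, here is a minimal sketch (hypothetical names and toy state representations, not anything from the thread): utility* reads only the agent's own mental state, while utility** may also consult facts about the world.

```python
# Toy illustration of the utility* / utility** distinction.
# All names and state representations here are hypothetical.

def utility_star(mental_state):
    """utility*: depends only on the agent's own mental state."""
    return mental_state["felt_happiness"]

def utility_double_star(mental_state, world):
    """utility**: may also depend on facts outside the agent's head."""
    return mental_state["felt_happiness"] + (1.0 if world["cube_exists"] else 0.0)

# P's mental state is the same whether or not the cube exists,
# so utility* cannot tell the two worlds apart, but utility** can.
p_mind = {"felt_happiness": 10.0}
print(utility_star(p_mind))                                 # 10.0 in both worlds
print(utility_double_star(p_mind, {"cube_exists": True}))   # 11.0
print(utility_double_star(p_mind, {"cube_exists": False}))  # 10.0
```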

Perhaps they are being altruistic and trying to improve someone else's ... (read more)

1bryjnar
Okay, I just think you seem to have some pretty radically different intuitions about what counts for someone's well-being. One other thing: you seem to be assuming that the only reasons someone can have to act are either

* it promotes their well-being, or
* some moral reason.

I think this isn't true, and it's especially not true if you're defining well-being as you are. So you present the options for P as

* they want to have the happy-making belief that the cube exists, or
* they think there is something "good" about the cube existing,

but these aren't exhaustive: P could just want the cube to exist, not to produce mental states in themself or for a moral reason. If you're now claiming that actually no one desires anything other than that they come to have certain mental states, that's even more controversial, and I would say even more obviously false ;)

This is related to moral realism in that I suspect moral realists would be more likely to accept S, and S arguably provides some moral statements that are true. But it's mainly just something I was thinking about while thinking about moral realism.

I don't really know what I'm talking about when I say "objective utility"; I am just claiming that if such a thing exists or makes sense to talk about, it can only depend on the states of individual minds, since each mind's utility can only depend on the state of that mind, and nothing outside of the utility of minds can be ethically relevant.

0Eugine_Nier
I'm a moral realist and I find your claim nearly as absurd as asserting that 2+2=3, and I suspect nearly all moral realists would share my sentiment (even if they wouldn't express it quite as strongly).

That is true, but not relevant to the point I am trying to make. If P took the first offer, they would end up exactly as well off as if they hadn't received the offer, and if P took the second offer, they would end up better off. The fact that P's beliefs don't correspond with reality does not change this. The reason that P would accept the first offer but not the second is that P believes the universe would be "better" with the cube. P does not think ey will actually be happier (or whatever) accepting offer 1, and if P does think ey will be happ... (read more)

In your example, I agree that almost everyone would choose the second choice, but my point is that they will be worse off because they make that choice. It is an act of altruism, not an act which will increase their own utility. (Possibly the horror they would experience in making choice 1 would outweigh their future suffering, but after the choice is made they are definitely worse off having made the second choice.)

I say that the cube cannot be part of P's utility function, because whether the cube exists in this example is completely decoupled from wheth... (read more)

0mwengler
It may not matter whether there is gold in them thar hills, but it does matter what the oracle says. So I think you have misstated P's utility function. P wants the oracle to tell him the gold exists; that is his utility function. And realizing that, you cannot say that it doesn't matter what the oracle really tells him, because it does. I don't think P's hypothesized stupid reliance on a lying oracle binds us to ignore what P really wants and thus call it only a state of mind. He needs that physical communication from something other than his mind, the oracle.
3bryjnar
Well, again, you're kind of just asserting your claim. Prima facie, it seems pretty plausible that whatever function evaluates how well off a person is could take into account things outside of their mental states. People looking after their families isn't often thought of as especially altruistic, because it's something that usually matters very deeply to the person, even bracketing morality.

Your second paragraph is genuinely circular: the whole argument was about whether it showed that S was false, but you appeal to the fact that

This is only relevant if we already think S is true. You can't use it to support that very argument!

Look, if it helps, you can define utility*, which is utility that doesn't depend on anything outside the mental state of the agent, as opposed to utility**, which does. Then you can get frustrated at all these silly people who seem to mistakenly think they want to maximize their utility** instead of their utility*. Or just perhaps they actually do know what they want? Utility* is a perfectly fine concept, it's just not one that is actually much use in relation to human decision-making.

Edit: remember to escape *s! Edit2: quoting fail.

Summary: I'm wondering whether anyone (especially moral anti-realists) would disagree with the statement, "The utility of an agent can only depend on the mental state of that agent".

I have had little success in my attempts to devise a coherent moral realist theory of meta-ethics, and am no longer very sure that moral realism is true, but there is one statement about morality that seems clearly true to me: "The utility of an agent can only depend on the mental state of that agent". Call this statement S. By utility I roughly mean how goo... (read more)
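For concreteness, S can be read as a supervenience claim (a minimal formalization; the notation M and U is introduced here for illustration, not taken from the post): any two agents in the same total mental state must have the same utility.

```latex
% Sketch of S as supervenience of utility on mental state.
% M(a) = the complete mental state of agent a; U(a) = the utility of agent a
% (this notation is assumed here, not taken from the post).
\forall a, b : \quad M(a) = M(b) \implies U(a) = U(b)
```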

-2Eugine_Nier
If you truly believe this proposition, as opposed to merely having belief in belief, you should stop reading LessWrong right now. If you keep reading LessWrong, you are likely to get better at rationality, and in particular at telling whether something is true or false, which will make it harder for you to maintain comfortable beliefs and thus will vastly lower your utility by your definition.
0mwengler
I disagree with S, and I think you might also. It depends on how you define utility.

Consider two sentiences, P and Q. They are in identical states of mind. However, they are not in identical states of the universe. P is in a room which is about to have its exits sealed and will then be slowly filled with an acid solution which will eat the flesh from P's bones, killing him after about 45 minutes of excruciating pain. Q is in a room in which a screening of the movie "Cabaret," starring Liza Minnelli, Michael York, and Joel Grey, is about to begin. But at this moment, neither acid nor movie has started, and P and Q are in the same state of mind. By your definition of utility, do they have the same utility?

I disagree with S. I have no idea if agreeing with S makes you an anti-realist, but it does seem to indicate you are underestimating the power of reality to make you unhappy.
0Jack
I'm a moral anti-realist. I don't see a justification for S. If there are facts about "how good or bad things are, from the perspective of the agent," it seems like those facts, for humans, are often facts about the 'real world'. I also don't much see what this has to do with moral realism. Regarding objective utility: are you just talking about adding up the utilities of all agent-like things? I suppose you could call such a figure "objective utility," but that doesn't mean such a figure is of any moral importance. I doubt I would care much about it.
3[anonymous]
I'm not 100% sure what your S means, but I don't think it's true. If Omega comes along and says "If you want, I'll make a 1m cube of gold somewhere you'll never observe it, and then make you forget all about this offer", then P will accept. On the other hand, P wouldn't necessarily accept an offer to make him delusionally believe that a cube of gold exists.
0bryjnar
Your example seems to provide an instance where S is false. You just assert that it isn't like that: Why? Again, why? You haven't really said anything about why you'd think that...

Also, it seems pretty clear that things outside of your head can matter. Suppose an evil demon offers you a choice: either

* your family will be tortured, but you will think that they're fine, or
* your family will be fine, but you will think that they're being tortured.

And of course, all memory of the encounter with the demon will be erased. I think most people would take the second option, and gladly! That seems pretty strong prima facie evidence that stuff outside people's heads matters to them. So I guess I'd disagree with S.

Oh, and I'm (sort of) an anti-realist.