Comment author: Trevor_Caverly 14 September 2012 04:30:37AM 3 points [-]

Is your position the same as Dennett's position (summarized in the second paragraph of synopsis here) ?

Comment author: Trevor_Caverly 11 September 2012 09:20:26PM 2 points [-]

" 'What is true is already so. The coherent extrapolated volition of God doesn't make it worse' is obviusly false if and only if timeless politics is isomorphic to truth if and only if the tenth virtue of rationality is 'Let me not become attached to the map I may not want' " is obviously false.

Well, it's true.

Also, this is way smarter than the Deepak Chopra quote generator.

Comment author: Eugine_Nier 12 July 2012 03:02:05AM 0 points [-]

Just to make sure I understand your position: Imagine two universes, U1 and U2, like the one in my original post, where P1 and P2 are unsure whether the gold cube exists. In U1 the cube exists; in U2 it does not; but they are otherwise identical (or close enough to identical that P1 and P2 have identical brain states). The Ps truly desire that the cube exist, as much as anyone can desire a fact about the universe to be true. Do you claim that P1 is better off than P2?

So would you argue that P2 shouldn't investigate whether the cube exists, because then he would find out that it doesn't and thus become worse off?

Comment author: Trevor_Caverly 12 July 2012 04:33:49AM 0 points [-]

Yes. P2 finding this out would harm him, and couldn't possibly benefit anyone else, so if searching would lead him to believe the cube doesn't exist, it would be ethically better if he didn't search. But the harm to P2 is a result of his knowledge, not the mere fact of the cube's nonexistence. Likewise, P1 should investigate, assuming he would find the cube. The reason for this difference is that investigating would have a different effect on the mental states of P1 than it would on the mental states of P2. If the cube in U1 can't be found by P1, then the asymmetry is gone, and neither should investigate.

Comment author: Eugine_Nier 11 July 2012 07:21:16AM 0 points [-]

I think you're misunderstanding what I meant. I'm using "Someone's utility" here to mean only how good or bad things are for that person. I am not claiming that people should (or do) only care about their own well-being, just that their well-being only depends on their own mental states. Do you still disagree with my statement given this definition of utility?

I had assumed you meant something like this.

To see if I'm understanding you correctly, would you be in favor of wireheading the entire human race?

Comment author: Trevor_Caverly 12 July 2012 01:10:15AM 0 points [-]

I would not be in favor of wireheading the human race, but I don't see how that is connected to S. If wireheading all of humanity is bad, it seems clear that it is bad because it is bad for the people being wireheaded. If this is a wireheading scenario where humanity goes extinct as a result of wireheading, then this is also bad because of the hypothetical people who would have valued being alive. There is nothing about S that stops someone from comparing the normal life they would live with a wireheaded life and saying they would prefer the normal life. This is because these two choices involve different mental states for the person, and S does not in itself place any restrictions on which mental states would be better for you to have. Rather, it states that your own mental states are the only things that can be good or bad for you.

If you think S is false, you could additionally claim that wireheading humanity is bad because the fact that humanity is wireheaded is something that almost everybody believes is bad for them, and so if humanity is wireheaded, that is very bad for many people, even if these people are not aware that humanity is wireheaded. But it seems very easy to believe that wireheading is bad for humanity without believing this claim.

Just to make sure I understand your position: Imagine two universes, U1 and U2, like the one in my original post, where P1 and P2 are unsure whether the gold cube exists. In U1 the cube exists; in U2 it does not; but they are otherwise identical (or close enough to identical that P1 and P2 have identical brain states). The Ps truly desire that the cube exist, as much as anyone can desire a fact about the universe to be true. Do you claim that P1 is better off than P2? If so, do you really think that this being possible is as obvious as that 2 + 2 ≠ 3? If not, why would someone's well-being be able to depend on something other than their mental states in some situations but not this one? It seems very obvious to me that P1 and P2 have exactly equally good lives, and I am truly surprised that other people's intuitions and beliefs lean strongly the other way.

In response to Useful maxims
Comment author: [deleted] 11 July 2012 02:34:30PM 7 points [-]

Eat, sleep, and have sex before making any big decision. Getting into that emotional ground state is valuable.

In response to comment by [deleted] on Useful maxims
Comment author: Trevor_Caverly 11 July 2012 03:08:04PM 4 points [-]

What if you're deciding whether to have sex?

Comment author: Eugine_Nier 10 July 2012 06:05:56AM 0 points [-]

If you truly believe this proposition, as opposed to mere belief in belief, you should stop reading LessWrong right now. If you keep reading LessWrong, you are likely to get better at rationality, and in particular at telling whether something is true or false, which will make it harder for you to maintain comfortable beliefs and thus will vastly lower your utility by your definition.

Comment author: Trevor_Caverly 11 July 2012 03:17:32AM 0 points [-]

I think you're misunderstanding what I meant. I'm using "Someone's utility" here to mean only how good or bad things are for that person. I am not claiming that people should (or do) only care about their own well-being, just that their well-being only depends on their own mental states. Do you still disagree with my statement given this definition of utility?

If someone kidnapped me and hooked me up to an experience machine that gave me a simulated perfect life, and then tortured my family for the rest of their lives, I claim that this would be good for me. It would be bad overall because people would be harmed (far in excess of my gains). If I was given this as an option I would not take it because I would be horrified by the idea and because I believe it would be morally wrong, but not because I believe I would be worse off if I took the deal. If someone claimed that taking this deal would be bad for their own well-being, I believe that they would be mistaken.

If someone claimed that the existence of a gold cube in a section of the universe where it would never be noticed by anyone or affect any sentient things could be a morally good thing, I would likewise claim that they are mistaken. I claim this, because regardless of how much they want the cube to exist, or how good they believe the existence of the cube to be, no one's well-being can depend on the existence of the cube. At most, someone's well-being can depend on their belief in the existence of the cube.

Comment author: mwengler 09 July 2012 10:16:07PM 1 point [-]

Whether P should be trusting the oracle is beside the point.

No, it isn't. You are claiming that P "really" wants the gold to exist, but you are also claiming that P thinks that at least one of the definitions of "the gold exists" is "the oracle said the gold exists." You are flummoxed by the paradox of P feeling just as happy due to a false belief in gold as he would based on a true belief in gold, and you are ignoring the thing that ACTUALLY made him happy: which was the oracle telling him the gold was real.

How surprising should it be that ignoring the real-world causes of something produces paradoxes? P's happiness doesn't depend on the gold existing in reality, but it does depend on something in reality causing him to believe the gold exists. And if the gold doesn't exist in reality, P's happiness is not changed; but if the reality that led him to believe the gold existed is reversed, if the oracle tells him (truly or falsely) that the gold doesn't exist, then his happiness is changed.

I actually have not a clue what this example's connection to moral realism might be, either supporting it or denying it. But I am pretty clear that what you present as a "real mental result without a physical cause, because the gold does not matter" is merely a case of you taking a hypothesized fool at his word and ignoring the REAL physical cause of P's happiness or sadness. Or, from a slightly different tack: if P defined "gold exists" as "oracle tells me gold exists", then P's claim that his utility is the gold is equivalent to a claim that his utility is being told there is gold.

P's happiness has a real cause in the real world. Because P is an idiot, he misunderstands what that cause means, but even P recognizes that the cause of his happiness is what the oracle told him.

Comment author: Trevor_Caverly 09 July 2012 11:11:39PM -1 points [-]

No, it isn't. You are claiming that P "really" wants the gold to exist, but you are also claiming that P thinks that at least one of the definitions of "the gold exists" is "the oracle said the gold exists."

I do not claim that. I claim that P believes the cube exists because the oracle says so. He could believe it exists because he saw it in a telescope. Or because he saw it fly in front of his face and then away into space. Whatever reason he has for "knowing" the cube exists has some degree of uncertainty. He is happy because he has a strong belief that the gold exists. Moreover, my point stands regardless of where P gets his knowledge. Imagine, for example, that P believes strongly that the cube does not exist, because the existence of the cube violates Occam's razor. It is still the case (in my opinion) that whether he is correct does not alter his well-being.

How surprising should it be that ignoring the real world causes of something produces paradoxes?

I do not think that this is a paradox; it seems intuitively obvious to me. In fact, I'm not entirely sure that we disagree on anything. You say "P's happiness doesn't depend on the gold existing in reality, but it does depend on something in reality causing him to believe the gold exists." I think others on this thread would argue that P's happiness does change depending on the existence of the gold, even if what the oracle tells him is the same either way.

I actually have not a clue what this example's connection to moral realism might be,

Maybe nothing; I just suspected that moral anti-realists would be less likely to accept S. My main question is just whether other people share my intuition that S is true (and what their reasons for agreeing or disagreeing are).

P's happiness has a real cause in the real world. Because P is an idiot, he misunderstands what that cause means, but even P recognizes that the cause of his happiness is what the oracle told him.

I'm not sure I understand what you're saying. P believes that the oracle is telling him the cube exists because the cube exists. P is of course mistaken, but everything else the oracle told him was correct, so he strongly believes that the oracle will only tell him things because they are the truth. Whether this is a reasonable belief for P to have is not relevant. You seem to be saying that if something has no causal effect on someone, that it cannot affect their well-being. I agree with that, but other people do not agree with that.

Comment author: mwengler 09 July 2012 09:54:10PM 1 point [-]

So, does anyone disagree with S? If you agree with S, are you an anti-realist?

I disagree with S and I think you might also. It depends on how you define utility.

Consider two sentiences, P&Q. They are in identical states of mind. However, they are not in identical states of universe. P is in a room which is about to have its exits sealed and will then be slowly filled with an acid solution which will eat the flesh from P's bones, killing him after about 45 minutes of excruciating pain. Q is in a room in which a screening of the movie "Cabaret", starring Liza Minnelli, Michael York, and Joel Grey, is about to begin.

But at this moment, neither acid nor movie has started, and P & Q are in the same state of mind. By your definition of utility do they have the same utility?

I disagree with S. I have no idea if agreeing with S makes you an anti-realist, but it does seem to indicate you are underestimating the power of reality to make you unhappy.

Comment author: Trevor_Caverly 09 July 2012 10:17:54PM -1 points [-]

I guess the realism aspect isn't as relevant as I thought it would be. I expected that any realists would believe S, and that anti-realists might or might not. I also think that not believing S would imply anti-realism, but I'm not super confident that that's true.

I would say that P and Q have equal utility until the point where their circumstances diverge, after which of course they would have different utilities. There is no reason to consider future utility when talking about current utility, so it just depends on what span of time you are looking at. If you're only looking at a segment where P and Q have identical brain states, then yes, I would say they have the same utility.

Comment author: bryjnar 09 July 2012 09:30:07PM *  2 points [-]

Okay, I just think you seem to have some pretty radically different intuitions about what counts for someone's well-being.

One other thing: you seem to be assuming that the only reasons someone can have to act are either

  • it promotes their well-being
  • some moral reason.

I think this isn't true, and it's especially not true if you're defining well-being as you are. So you present the options for P as

  • they want to have the happy-making belief that the cube exists
  • they think there is something "good" about the cube existing

but these aren't exhaustive: P could just want the cube to exist, not to produce mental states in themself or for a moral reason. If you're now claiming that actually no one desires anything other than that they come to have certain mental states, that's even more controversial, and I would say even more obviously false ;)

Comment author: Trevor_Caverly 09 July 2012 09:52:55PM -1 points [-]

I said that there could be other reasons for P to want the cube to exist. If someone has a desire whose fulfillment would not be good for them in any way, or good for any other sentient being, that's fine, but I do not think that a desire of this type is morally relevant in any way. Further, if someone claimed to have such a desire, knowing that fulfilling it served no purpose other than simply fulfilling it, I would believe them to be confused about what desire is. Surely the desire would have to be causing them discomfort, or at least some sort of urge to fulfill it. Without that, what does desire even mean?

But that doesn't really have much to do with whether S is true. Like I said, it seems clearly true to me that identical mental states imply identical well-being. If you don't agree, I don't really have any way to convince you other than what I've already written.

Comment author: mwengler 09 July 2012 09:05:26PM 1 point [-]

It may not matter whether there is gold in them thar hills, but it does matter what the oracle says. So I think you have misstated P's utility function. P wants the oracle to tell him the gold exists, that is his utility function. And realizing that, you cannot say that it doesn't matter what the oracle really tells him, because it does.

I don't think P's hypothesized stupid reliance on a lying oracle binds us to ignore what P really wants and thus call it only a state of mind. He needs that physical communication from something other than his mind, the oracle.

Comment author: Trevor_Caverly 09 July 2012 09:28:21PM -1 points [-]

I am stipulating that P really, truly wants the gold to exist (in the same way that you would want there not to exist a bunch of people who are being tortured, ceteris paribus). Whether P should be trusting the oracle is beside the point. The difference between these scenarios is that you are correct in believing that the people being tortured is morally bad. However, your well-being would not be affected by whether the people are being tortured, only by your belief about how likely this is. Of course, you would still try to stop the torture if you could, even if you knew that you would never learn whether you were successful, but this is mainly an act of altruism.

My main point is probably better expressed as "Beings with identical mental states must be equally well off". Disagreeing with this seems absurd to me, but apparently a lot of people do not share this intuition.

Also, you could easily eliminate the oracle in the example by just stating that P spontaneously comes to believe the cube exists for no reason. Or we could imagine that P has a perfectly realistic hallucination of the oracle. The fact that P's belief is unjustified does not matter. According to S, the reasons for P's mental state are irrelevant.
