Caring about what happens after you die

by DataPacRat · 18th Dec 2012 · 1 min read · Personal Blog

More than once, I've had a conversation roughly similar to the following:

Me: "I want to live forever, of course; but even if I don't, I'd still like for some sort of sapience to keep on living."

Someone else: "Yeah, so? You'll be dead, so how/why should you care?"

I've tried describing how it's the me-of-the-present who's caring about which sort of future comes to pass, but I haven't been able to do so in a way that doesn't fall flat. Might you have any thoughts on how to better frame this idea?
52 comments, sorted by top scoring

James_Miller · 13y · 25 points

Question: "Would you pay $1 to stop the torture of 1,000 African children whom you will never meet and who will never impact your life other than through this dollar?"

If the answer is yes then you care about people with whom you have no causal connection. Why does it matter if this lack of connection is due to time rather than space?

Bugmaster · 13y · 3 points

What if they answer "no"? How would you convince them that "yes" is the better answer?

Edit: see also here

James_Miller · 13y · 2 points

If he answered no I would stop interacting with him. See here.

A1987dM · 13y · 1 point

The example I read somewhere is: You have a terminal disease and you know you're going to die in two weeks. Would you press a button that gives you $10 now but will kill one billion people in one month?

RomeoStevens · 13y · 0 points

If the being making the offer has done sufficient legwork to convince me of the causal connection then I get a better warm fuzzy per dollar return than anything else going.

Oligopsony · 13y · 3 points

Mutatis mutandis the survival of sentient life, then.

RomeoStevens · 13y · 2 points

I am confused.

Oligopsony · 13y · 3 points

"Can you not get warm fuzzies from assurances as to what occurs after your death?"

RomeoStevens · 13y · 0 points

Ah, I see. Yes. Since other people care about what happens after I die, such assurances are useful for signalling that I am a useful ally.

Luke_A_Somers · 13y · 18 points

Does this person think wills are stupid? What about having children?

Do they actually care about anything at all?

If yes, then that's a bridge towards understanding.

DataPacRat · 13y · 1 point

It has been more than one person; and the only answer I can offer for your questions at this point is "I don't know".

Bugmaster · 13y · 0 points

I am interested in the discussion, so I am going to roleplay such a person. I'll call him "Bob".

Bob does not intend to have children, for a variety of reasons. He understands that some people do want children, and, while he believes that they are wrong, he does agree that wills are sensible tools to employ once a person commits to having children.

Bob wants to maximize his own utility. He recognizes that certain actions give him "warm fuzzies", but he also understands that his brain is full of biases, and that not all actions that produce "warm fuzzies" are in his long-term interest. Bob has been working diligently to eradicate as many of his biases as is reasonably practical.

So, please convince Bob that caring about what happens after he's dead is important.

Luke_A_Somers · 13y · 0 points

If Bob really doesn't care, then there's not much to say. I mean, who am I to tell Bob what Bob should want? That said, I may be able to explain to Bob why I care, and he might accept or at least understand my reasoning. Would that satisfy?

Bugmaster · 13y · 0 points

I think it would. Bob wants to want the things that will make him better off in the long run. This is why, for example, Bob trained himself to resist the urge to eat fatty/sugary foods. As a result, he is now much healthier (not to mention leaner) than he used to be, and he doesn't even enjoy the taste of ice cream as much as he did. In the process, he also learned to enjoy physical exercise. He's also planning to apply polyhacking to himself, for reasons of emotional rather than physical health.

So, if you could demonstrate to Bob that caring about what happens after he's dead is in any way beneficial, he will strive to train himself to do so -- as long as doing so does not conflict with his terminal goals, of course.

Luke_A_Somers · 13y · 0 points

Well, that's the thing. It's a choice of terminal goals. If we hold those fixed, then we have nothing left to talk about.

Bugmaster · 13y · 1 point

Are you saying that caring about what happens after your death is a terminal goal for you? That doesn't sound right.

Luke_A_Somers · 13y · 0 points

I'm not sure what you mean. If I were able to construct a utility function for myself, it would have dependence on my projections of what happens after I die.

It is not my goal to have this sort of utility function.

Bugmaster · 13y · 0 points

Well, you said that the disagreement between you and Bob comes down to a choice of terminal goals, and thus it's pointless for you to try to persuade Bob and vice versa. I am trying to figure out which goals are in conflict. I suspect that you care about what happens after you die because doing so helps advance some other goal, not because that's a goal in and of itself (though I could be wrong).

By analogy, a paperclip maximizer would care about securing large quantities of nickel not because it merely loves nickel, but because doing so would allow it to create more paperclips, which is its terminal goal.

Luke_A_Somers · 13y · 0 points

Your guess model of my morality breaks causality. I'm pretty sure that's not a feature of my preferences.

Bugmaster · 13y · 0 points

"Your guess model of my morality breaks causality."

That rhymes, but I'm not sure what it means.

Luke_A_Somers · 13y · 0 points

How could I care about things that happen after I die only as instrumental values so as to affect things that happen before I die?

Bugmaster · 13y · 0 points

I don't know about you personally, but consider a paperclip maximizer. It cares about paperclips; its terminal goal is to maximize the number of paperclips in the Universe. If this agent is mortal, it would absolutely care about what happens after its death: it would want the number of paperclips in the Universe to continue to increase. It would pursue various strategies to ensure this outcome, while simultaneously trying to produce as many paperclips as possible during its lifetime.

Luke_A_Somers · 13y · 0 points

But that's quite directly caring about what happens after you die. How is this supposedly not caring about what happens after you die except instrumentally?

Kaj_Sotala · 13y · 9 points

You could point out that plenty of people also have preferences about the lives of e.g. poor people in developing countries who they could, if they wanted to, just ignore completely. (Or preferences about the lives of strangers in their own country, for that matter.)

Wei Dai · 13y · 6 points

This post is relevant. Not sure if your audience has the background to understand it though.

wedrifid · 13y · 5 points

"I've tried describing how it's the me-of-the-present who's caring about which sort of future comes to pass, but I haven't been able to do so in a way that doesn't fall flat. Might you have any thoughts on how to better frame this idea?"

Who are the people you have been talking to? Have you considered talking to people who are more intelligent or better educated? Sometimes you just need to give up on people who can't understand sufficiently rudimentary concepts.

DataPacRat · 13y · 0 points

At least one of the people I've had this conversation with has passed basically all my 'intelligence indicator' tests, short of 'being a LessWrongian'.

Larks · 13y · 5 points

Suppose you are going to die tomorrow, and I come up to you and offer a deal. I'll give you an ice-cream now, in return for being able to torture your daughter the day after tomorrow for the rest of her life. Also, I'll wipe your memory, so you won't even feel guilty.

Anyone who really didn't care about things after they died would accept. But very few people would accept. So virtually all people care about the world after their death.

hyporational · 13y · 0 points

There's no way of making that offer without interacting with the "utility function" that cares about the present mental images of the future.

Larks · 13y · 2 points

How much does it care? Offer a compensating benefit to hedge their exposure.

buybuydandavis · 13y · 3 points

Why "should" I like the taste of ice cream?

I do. I don't have to eat it, and I could trade off the fulfillment of that like for other likes. But I do like it, and that enjoyment is a value to me; I won't give it up except for a greater value, and even if I give it up for a greater value, it wouldn't mean that I had stopped liking the taste of ice cream.

You don't need a reason to value; values are the reasons.

Shmi · 13y · 3 points

"Yeah, so? You'll be dead, so how/why should you care?"

Or more famously: au reste, après nous, le déluge ("after us, the flood").

asparisi · 13y · 2 points

The difference is whether you value sapience instrumentally or terminally.

If I only instrumentally value other sapient beings existing, then of course, I don't care whether or not they exist after I die. (They will cease to add to my utility function, through no fault of their own.)

But if I value the existence of sapient beings as a terminal value, then why would it matter if I am dead or alive?

So, if I only value sapience because, say, other sapient beings existing makes life easier than it would be if I was the only one, then of course I don't care whether or not they exist after I die. But if I just think that a universe with sapient beings is better than one without because I value the existence of sapience, then that's that.

Which is not to deny the instrumental value of other sapient beings existing. Something can have instrumental value and also be a terminal value.

TrE · 13y · 1 point

(Playing devil's advocate) Once you're dead, there's no way you can feel good about sapient life existing. So if I toss a coin 1 second after your death and push the red button causing a nuclear apocalypse iff it comes up heads, you won't be able to feel sorrow in that case. You can certainly be sad before you die about me throwing the coin (if you know I'll do that), but once you're dead, there's just no way you could be happy or sad about anything.

asparisi · 13y · 12 points

The fact that I won't be able to care about it once I am dead doesn't mean that I don't value it now. And I can value future-states from present-states, even if those future-states do not include my person. I don't want future sapient life to be wiped out, and that is a statement about my current preferences, not my 'after death' preferences. (Which, as noted, do not exist.)

DataPacRat · 13y · 1 point

That's /exactly/ the method of reasoning which inspired this post.

TrE · 13y · 0 points

To me (look below, I managed to confuse myself), this position appears to be a failure to imagine death, or otherwise a failure to understand that an expected value can still be calculated even before death, and that actions can be taken to maximize that expected value of the future, which is what is described by "caring about the future".

TrE · 13y · 0 points

So what you're saying is, one can't get warm fuzzies of any kind from anything unexpected happening after one's death, right? I agree with this. But consider expected fuzzies: Until one's death it's certainly possible to influence the world, changing its expected state, and get warm fuzzies from that expected value before one's death.

If we're talking utilons, not warm fuzzies, I wonder what it even means to "feel" utilons. My utility function is simply a mapping from the state of the world to the set of real numbers, and maximizing it means taking, out of all possible actions, the one that maximizes the expected value of that function. My utility function can be more or less arbitrary; it's just saying which actions I'll take given that I have a choice.

Saying I care about sapient beings conquering the galaxy after my demise is merely saying that I will, while I can, choose those actions that augment the chance of sapient beings conquering the galaxy, nothing else. While I can't feel happy about accomplishing this after my death, it still makes sense to say that while I lived, I cared for this future in which I couldn't participate, by any sensible meaning of the verb "to care".
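
[Editorial aside: the definition above can be made concrete with a minimal sketch. This is an illustration, not code from the thread; the action names, the probabilities, and the `utility` and `expected_utility` functions are all hypothetical, framed on the coin-flip scenario from earlier in this subthread.]

```python
# A minimal sketch: a utility function maps world-states to real numbers,
# and "caring" about a post-mortem future just means choosing, while
# alive, the available action with the highest expected value.

# Hypothetical actions (names invented for illustration). Each maps to a
# list of (probability, sapience_survives_after_my_death) outcomes.
ACTIONS = {
    "leave_button_armed": [(0.5, False), (0.5, True)],  # the coin flip
    "disarm_button":      [(1.0, True)],
}

def utility(sapience_survives: bool) -> float:
    """Score a world-state the agent will never experience."""
    return 1.0 if sapience_survives else 0.0

def expected_utility(action: str) -> float:
    return sum(p * utility(s) for p, s in ACTIONS[action])

# The choice is made before death, so the preference cashes out
# entirely in present action selection:
print(max(ACTIONS, key=expected_utility))      # -> disarm_button
print(expected_utility("leave_button_armed"))  # -> 0.5
```

Nothing in this computation requires the agent to exist when the outcome obtains; the expected value is evaluated, and the action selected, while the agent is alive.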

TrE · 13y · 0 points

(playing devil's advocate) But you're dead by then! Does anything even matter if you can't experience it anymore?

Now, I find myself in a peculiar situation: I fully understand and accept the argument I made in the parent to this post, but somehow, a feeling prevails that this line of reasoning is unacceptable. It probably stems from my instincts which scream at me that death is bad, and from my brain not being able to imagine its nonexistence from the inside view.

Gastogh · 13y · 2 points

This may seem like nitpicking, but I promise it's for non-troll purposes.

In short, I don't understand what the problem is. What do you mean by falling flat? That they don't understand what you're saying, that they don't agree with you, or something else? Are you trying to change their minds so that they'd think less about themselves and more about the civilization at large? What precisely is the goal that you're failing to accomplish?

DataPacRat · 13y · 0 points

On the occasions I've had this conversation, IIRC, I don't seem to have managed to even get to the stage of them understanding that I /can/ care about what happens after I die, let alone get to an agreement about what's /worth/ caring about post-mortem.

Gastogh · 13y · 0 points

If they really can't even see that someone can care, then it certainly sounds as though the problem is in their understanding rather than your explanations. The viewpoint of "I don't care what happens if it doesn't involve me in any way" doesn't seem in any way inherently self-contradictory, so it'd be a hard position to argue against, but that shouldn't be getting in the way of seeing that not everyone has to think that way. Things like these three comments might have a shot at bridging the empathic gap, but if that fails... I got nothing.

prase · 13y · 1 point

Perhaps there is a difference in understanding the subject matter. People intuitively have preferences about things related personally to them: about their friends and relatives (and enemies), about the impact of their work, about their city or nation. But when you say 'some sort of sapience to keep on living', it is naturally interpreted as relating to a very distant future (1) when nothing of that which they care about exists any more. You may, of course, have preferences relating to such a distant future when humanity is replaced by 'some sort of sapience', but many people don't (2).

In short, I suspect that "you'll be dead" isn't the true reason for their disagreement. It's rather "nothing you care about now will exist".

Footnotes:

(1) Distant doesn't necessarily mean many centuries after present. It's the amount of change to the world which matters.

(2) Me neither. I can't "understand" (for lack of a better word) your preferences on the gut level, but I understand that there is no phase transition between your and my preferences, they are in the same class, yours being more general.

Suryc11 · 13y · 1 point

This seems isomorphic to the mainstream debate, in academic philosophy, over whether one can be harmed by things happening after one's death; in other words, precisely how do one's preferences (for certain states of affairs) after one's death work?

See: http://plato.stanford.edu/entries/death/

"Third, what is the case for and the case against the harm thesis, the claim that death can harm the individual who dies, and the posthumous harm thesis, according to which events that occur after an individual dies can still harm that individual?"

Manfred · 13y · 0 points

Hm. I think worrying about whether something can "harm" a dead person carries much more semantic baggage, so the key ideas will probably be different.

Suryc11 · 13y · 0 points

Good point. I think the main similarity derives from a specific understanding/definition of harm that holds that harming another is acting counter to another's preferences, in some sense. In that way then, it's similar to (the OP's trouble in getting his interlocutors to understand) preferences being sustained after one's death.

Eneasz · 13y · 0 points

Huh. I just wrote a little blurb about this yesterday. Many individuals' utility functions include terms for things that exist outside of themselves. It's trivially simple for one's utility function to be fulfilled by ensuring those things continue even after the person ends.

[anonymous] · 13y · 0 points

I think there might be a linguistic confusion behind this more than anything else and you probably agree with our friends more than you think. All of the reframings here I think can be countered by claiming that you're buying fuzzies in the present by believing you're affecting things in the spatial or temporal distance. It all comes down to definitions of this distance. Some people would argue only the present exists, how can you value something that doesn't exist? I think this kind of confusion is related.

[This comment is no longer endorsed by its author]

Bo102010 · 13y · 0 points

I am reluctantly someone who pretty much doesn't care about what happens after I die. This is a position that I don't necessarily endorse, and if I could easily self-modify into the sort of person who did care, I would.

I don't think this makes me a monster. I basically behave the same way as people who claim they do care about what happens after they die. That is, I have plans for what happens to my assets if I die. I have life insurance ("free" through work) that pays out to my wife if I die. I wouldn't take a billion dollars on the condition that a third-world country would blow up the day after I died.

As you say, though, it's "me-of-the-present" that cares about these things. With the self-modification bit above, really what I mean is "I'd like to self-modify into the sort of person who could say that I cared about what happens after I die and not feel compelled to clarify that I really mean that I think good things are good and that acting as if I cared about good things continuing to happen after I die is probably a better strategy to keep good things happening while I'm alive."

kilobug · 13y · 0 points

Well, the answer is simple to me: the well-being and happiness of other persons, especially my relatives and my friends, but also other humans in general, are part of my terminal values. So I care about what will happen to them after I die, for as long as they're alive. But I don't care much about what would happen after all humanity is wiped out, if that were to happen.

[anonymous] · 13y · 0 points

The original debate appeared to be limited to "Live forever" or "Dead forever". If this was intentional, and we are deliberately ignoring the possibility of death not being permanent because that would be fighting the hypothetical, then the point below is irrelevant.

However, if we should consider the possibility that, after death, a person might have a chance of being resurrected by a future sapient, then keeping future sapients alive might have value to that person, even if they didn't care about things that happened while they were dead.

Edit: The second paragraph was originally written in the first person, but it sounded off when I reread it, so I changed the grammar and added slightly more detail.

pleeppleep · 13y · 0 points

You could simulate a debate with someone here taking the opposite point of view. Or better yet, you could take the opposite point of view and hear someone else carry the argument.
