Less Wrong is a community blog devoted to refining the art of human rationality.

Caring about what happens after you die

8 Post author: DataPacRat 18 December 2012 03:13PM

More than once, I've had a conversation roughly similar to the following:

Me: "I want to live forever, of course; but even if I don't, I'd still like for some sort of sapience to keep on living."

Someone else: "Yeah, so? You'll be dead, so how/why should you care?"

 

I've tried describing how it's the me-of-the-present who's caring about which sort of future comes to pass, but I haven't been able to do so in a way that doesn't fall flat. Might you have any thoughts on how to better frame this idea?

Comments (51)

Comment author: James_Miller 18 December 2012 03:33:19PM 15 points [-]

Question: "Would you pay $1 to stop the torture of 1,000 African children whom you will never meet and who will never impact your life other than through this dollar?"

If the answer is yes then you care about people with whom you have no causal connection. Why does it matter if this lack of connection is due to time rather than space?

Comment author: Bugmaster 19 December 2012 09:21:12PM *  2 points [-]

What if they answer "no"? How would you convince them that "yes" is the better answer?

Edit: see also here

Comment author: James_Miller 19 December 2012 10:38:32PM 3 points [-]

If he answered no I would stop interacting with him. See here.

Comment author: [deleted] 21 December 2012 01:15:57PM 1 point [-]

The example I read somewhere is: You have a terminal disease and you know you're going to die in two weeks. Would you press a button that gives you $10 now but will kill one billion people in one month?

Comment author: RomeoStevens 18 December 2012 03:36:58PM 0 points [-]

If the being making the offer has done sufficient legwork to convince me of the causal connection, then I get a better warm-fuzzies-per-dollar return than anything else going.

Comment author: Oligopsony 18 December 2012 03:40:32PM 2 points [-]

Mutatis mutandis the survival of sentient life, then.

Comment author: RomeoStevens 18 December 2012 03:46:58PM 1 point [-]

I am confused.

Comment author: Oligopsony 18 December 2012 04:19:52PM 2 points [-]

"Can you not get warm fuzzies from assurances as to what occurs after your death?"

Comment author: RomeoStevens 18 December 2012 05:23:49PM 0 points [-]

Ah, I see. Yes. Since other people care about what happens after I die, such assurances are useful for signalling that I am a useful ally.

Comment author: Luke_A_Somers 18 December 2012 03:30:35PM 11 points [-]

Does this person think wills are stupid? What about having children?

Do they actually care about anything at all?

If yes, then that's a bridge towards understanding.

Comment author: DataPacRat 19 December 2012 03:46:34AM 1 point [-]

It has been more than one person; and the only answer I can offer for your questions at this point is "I don't know".

Comment author: Bugmaster 19 December 2012 09:27:24PM 0 points [-]

I am interested in the discussion, so I am going to roleplay such a person. I'll call him "Bob".

Bob does not intend to have children, for a variety of reasons. He understands that some people do want children, and, while he believes that they are wrong, he does agree that wills are sensible tools to employ once a person commits to having children.

Bob wants to maximize his own utility. He recognizes that certain actions give him "warm fuzzies"; but he also understands that his brain is full of biases, and that not all actions that produce "warm fuzzies" are in his long-term interest. Bob has been working diligently to eradicate as many of his biases as is reasonably practical.

So, please convince Bob that caring about what happens after he's dead is important.

Comment author: Luke_A_Somers 20 December 2012 03:59:25AM 0 points [-]

If Bob really doesn't care, then there's not much to say. I mean, who am I to tell Bob what Bob should want? That said, I may be able to explain to Bob why I care, and he might accept or at least understand my reasoning. Would that satisfy?

Comment author: Bugmaster 20 December 2012 07:40:42PM 0 points [-]

I think it would. Bob wants to want the things that will make him better off in the long run. This is why, for example, Bob trained himself to resist the urge to eat fatty/sugary foods. As a result, he is now much healthier (not to mention, leaner) than he used to be, and he doesn't even enjoy the taste of ice cream as much as he did. In the process, he also learned to enjoy physical exercise. He's also planning to apply polyhacking to himself, for reasons of emotional rather than physical health.

So, if you could demonstrate to Bob that caring about what happens after he's dead is in any way beneficial, he will strive to train himself to do so -- as long as doing so does not conflict with his terminal goals, of course.

Comment author: Luke_A_Somers 21 December 2012 03:42:36PM 0 points [-]

Well, that's the thing. It's a choice of terminal goals. If we hold those fixed, then we have nothing left to talk about.

Comment author: Bugmaster 21 December 2012 05:31:39PM 1 point [-]

Are you saying that caring about what happens after your death is a terminal goal for you? That doesn't sound right.

Comment author: Luke_A_Somers 21 December 2012 05:45:00PM 0 points [-]

I'm not sure what you mean. If I were able to construct a utility function for myself, it would have dependence on my projections of what happens after I die.

It is not my goal to have this sort of utility function.

Comment author: Bugmaster 21 December 2012 06:05:13PM 0 points [-]

Well, you said that the disagreement between you and Bob comes down to a choice of terminal goals, and thus it's pointless for you to try to persuade Bob and vice versa. I am trying to figure out which goals are in conflict. I suspect that you care about what happens after you die because doing so helps advance some other goal, not because that's a goal in and of itself (though I could be wrong).

By analogy, a paperclip maximizer would care about securing large quantities of nickel not because it merely loves nickel, but because doing so would allow it to create more paperclips, which is its terminal goal.

Comment author: Luke_A_Somers 21 December 2012 07:18:30PM 0 points [-]

Your guess model of my morality breaks causality. I'm pretty sure that's not a feature of my preferences.

Comment author: Bugmaster 22 December 2012 09:44:15PM 0 points [-]

Your guess model of my morality breaks causality.

That rhymes, but I'm not sure what it means.

Comment author: Kaj_Sotala 18 December 2012 03:28:57PM 7 points [-]

You could point out that plenty of people also have preferences about the lives of e.g. poor people in developing countries who they could, if they wanted to, just ignore completely. (Or preferences about the lives of strangers in their own country, for that matter.)

Comment author: Wei_Dai 18 December 2012 05:55:57PM 4 points [-]

This post is relevant. Not sure if your audience has the background to understand it though.

Comment author: wedrifid 19 December 2012 12:49:17AM *  3 points [-]

I've tried describing how it's the me-of-the-present who's caring about which sort of future comes to pass, but I haven't been able to do so in a way that doesn't fall flat. Might you have any thoughts on how to better frame this idea?

Who are the people you have been talking to? Have you considered talking to people who are more intelligent or better educated? Sometimes you just need to give up on people who can't understand sufficiently rudimentary concepts.

Comment author: DataPacRat 19 December 2012 03:54:42AM 0 points [-]

At least one of the people I've had this conversation with has passed basically all my 'intelligence indicator' tests, short of 'being a LessWrongian'.

Comment author: Larks 18 December 2012 06:41:54PM 3 points [-]

Suppose you were going to die tomorrow, and I come up to you and offer a deal. I'll give you an ice-cream now, in return for being able to torture your daughter the day after tomorrow for the rest of her life. Also, I'll wipe your memory, so you won't even feel guilty.

Anyone who really didn't care about things after they died would accept; very few people would. So virtually all people care about the world after their death.

Comment author: hyporational 23 December 2012 11:52:36AM 0 points [-]

There's no way of making that offer without interacting with the "utility function" that cares about the present mental images of the future.

Comment author: Larks 23 December 2012 10:46:12PM 1 point [-]

How much does it care? Offer a compensating benefit to hedge their exposure.

Comment author: buybuydandavis 18 December 2012 07:43:50PM *  2 points [-]

Why "should" I like the taste of ice cream?

I do. I don't have to eat it, and I could trade off the fulfillment of that like for other likes. But I do like it, and that enjoyment is a value to me, I won't give it up except for a greater value, and even if I give it up for a greater value, it wouldn't mean that I had stopped liking the taste of ice cream.

You don't need a reason to value; values are the reasons.

Comment author: shminux 18 December 2012 05:55:53PM 2 points [-]

"Yeah, so? You'll be dead, so how/why should you care?"

Or more famously: au reste, après nous, le Déluge ("besides, after us, the flood").

Comment author: asparisi 18 December 2012 03:49:17PM 2 points [-]

The difference is whether you value sapience instrumentally or terminally.

If I only instrumentally value other sapient beings existing, then of course, I don't care whether or not they exist after I die. (They will cease to add to my utility function, through no fault of their own.)

But if I value the existence of sapient beings as a terminal value, then why would it matter if I am dead or alive?

So, if I only value sapience because, say, other sapient beings existing makes life easier than it would be if I was the only one, then of course I don't care whether or not they exist after I die. But if I just think that a universe with sapient beings is better than one without because I value the existence of sapience, then that's that.

Which is not to deny the instrumental value of other sapient beings existing. Something can have instrumental value and also be a terminal value.

Comment author: TrE 18 December 2012 04:17:20PM 1 point [-]

(Playing devil's advocate) Once you're dead, there's no way you can feel good about sapient life existing. So if I toss a coin 1 second after your death and push the red button causing a nuclear apocalypse iff it comes up heads, you won't be able to feel sorrow in that case. You can certainly be sad before you die about me throwing the coin (if you know I'll do that), but once you're dead, there's just no way you could be happy or sad about anything.

Comment author: asparisi 18 December 2012 04:54:01PM 10 points [-]

The fact that I won't be able to care about it once I am dead doesn't mean that I don't value it now. And I can value future-states from present-states, even if those future-states do not include my person. I don't want future sapient life to be wiped out, and that is a statement about my current preferences, not my 'after death' preferences. (Which, as noted, do not exist.)

Comment author: DataPacRat 18 December 2012 04:42:43PM 1 point [-]

That's /exactly/ the method of reasoning which inspired this post.

Comment author: TrE 19 December 2012 06:34:17AM 0 points [-]

To me (see below, where I managed to confuse myself), this position looks like a failure to imagine death, or a failure to understand that an expected value over the future can be calculated even before death, and that actions can be taken to maximize it. Taking such actions is what is described by "caring about the future".

Comment author: TrE 18 December 2012 05:36:30PM *  0 points [-]

So what you're saying is, one can't get warm fuzzies of any kind from anything unexpected happening after one's death, right? I agree with this. But consider expected fuzzies: Until one's death it's certainly possible to influence the world, changing its expected state, and get warm fuzzies from that expected value before one's death.

If we're talking utilons, not warm fuzzies, I wonder what it even means to "feel" utilons. My utility function is simply a mapping from the state of the world to the set of real numbers, and maximizing it means taking whichever of the possible actions maximizes the expected value of that function. My utility function can be more or less arbitrary; it's just saying which actions I'll take given that I have a choice.

Saying I care about sapient beings conquering the galaxy after my demise is merely saying that I will, while I can, choose those actions that augment the chance of sapient beings conquering the galaxy, nothing else. While I can't feel happy about accomplishing this after my death, it still makes sense to say that while I lived, I cared for this future in which I couldn't participate, by any sensible meaning of the verb "to care".
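This formalization can be sketched in code. The following is a toy illustration only; the action names, the probabilities, and the single "sapience survives" state variable are all invented for the example, not drawn from the discussion above. The point it demonstrates is just that "caring about the post-mortem future" cashes out as choosing, while alive, the action with the highest expected utility over future world states:

```python
# World states are labeled by whether sapient life persists after the
# agent's death; the utility function maps each state to a real number.
def utility(world_state):
    return 1.0 if world_state["sapience_survives"] else 0.0

# Each action induces a probability distribution over world states
# (illustrative numbers only).
actions = {
    "fund_x_risk_reduction": [
        (0.9, {"sapience_survives": True}),
        (0.1, {"sapience_survives": False}),
    ],
    "do_nothing": [
        (0.5, {"sapience_survives": True}),
        (0.5, {"sapience_survives": False}),
    ],
}

def expected_utility(outcomes):
    # Expected value of the utility function under the action's
    # distribution over world states.
    return sum(p * utility(state) for p, state in outcomes)

# "Caring" about the post-mortem future just means picking the action
# with the highest expected utility now, while one is still alive.
best_action = max(actions, key=lambda a: expected_utility(actions[a]))
print(best_action)  # -> fund_x_risk_reduction
```

Note that the dead agent never experiences anything here; the maximization happens entirely before death, which is the comment's point.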

Comment author: TrE 18 December 2012 05:46:44PM 0 points [-]

(playing devil's advocate) But you're dead by then! Does anything even matter if you can't experience it anymore?

Now, I find myself in a peculiar situation: I fully understand and accept the argument I made in the parent to this post, but somehow, a feeling prevails that this line of reasoning is unacceptable. It probably stems from my instincts which scream at me that death is bad, and from my brain not being able to imagine its nonexistence from the inside view.

Comment author: Gastogh 18 December 2012 03:26:56PM 2 points [-]

This may seem like nitpicking, but I promise it's for non-troll purposes.

In short, I don't understand what the problem is. What do you mean by falling flat? That they don't understand what you're saying, that they don't agree with you, or something else? Are you trying to change their minds so that they'd think less about themselves and more about the civilization at large? What precisely is the goal that you're failing to accomplish?

Comment author: DataPacRat 18 December 2012 04:39:53PM 0 points [-]

On the occasions I've had this conversation, IIRC, I don't seem to have managed to even get to the stage of them understanding that I /can/ care about what happens after I die, let alone get to an agreement about what's /worth/ caring about post-mortem.

Comment author: Gastogh 19 December 2012 09:05:33PM 0 points [-]

If they really can't even see that someone can care, then it certainly sounds as though the problem is in their understanding rather than your explanations. The viewpoint of "I don't care what happens if it doesn't involve me in any way" doesn't seem in any way inherently self-contradictory, so it'd be a hard position to argue against, but that shouldn't be getting in the way of seeing that not everyone has to think that way. Things like these three comments might have a shot at bridging the empathic gap, but if that fails... I got nothing.

Comment author: prase 18 December 2012 10:48:18PM *  1 point [-]

Perhaps there is difference in understanding the subject matter. People intuitively have preferences about things related personally to them: about their friends and relatives (and enemies), about the impact of their work, about their city or nation. But when you say 'some sort of sapience to keep on living', it is naturally interpreted as relating to very distant future (1) when nothing of that which they care about exists any more. You may, of course, have preferences relating to such a distant future when humanity is replaced by 'some sort of sapience', but many people don't have (2).

In short, I suspect that "you'll be dead" isn't the true reason of their disagreement. It's rather "nothing you care about now will exist".

Footnotes:

(1) Distant doesn't necessarily mean many centuries after present. It's the amount of change to the world which matters.

(2) Me neither. I can't "understand" (for lack of a better word) your preferences on the gut level, but I understand that there is no phase transition between your and my preferences, they are in the same class, yours being more general.

Comment author: Suryc11 18 December 2012 10:03:29PM *  1 point [-]

This seems isomorphic to the mainstream debate, in academic philosophy, over whether one can be harmed by things happening after one's death; in other words, precisely how do one's preferences (for certain states of affairs) after one's death work?

See: http://plato.stanford.edu/entries/death/

"Third, what is the case for and the case against the harm thesis, the claim that death can harm the individual who dies, and the posthumous harm thesis, according to which events that occur after an individual dies can still harm that individual?"

Comment author: Manfred 19 December 2012 04:57:50AM 0 points [-]

Hm. I think worrying about whether something can "harm" a dead person carries much more semantic baggage, so the key ideas will probably be different.

Comment author: Suryc11 19 December 2012 05:52:33AM 0 points [-]

Good point. I think the main similarity derives from a specific understanding/definition of harm that holds that harming another is acting counter to another's preferences, in some sense. In that way then, it's similar to (the OP's trouble in getting his interlocutors to understand) preferences being sustained after one's death.

Comment author: Bo102010 20 December 2012 04:35:19AM 0 points [-]

I am reluctantly someone who pretty much doesn't care about what happens after I die. This is a position that I don't necessarily endorse, and if I could easily self-modify into the sort of person who did care, I would.

I don't think this makes me a monster. I basically behave the same way as people who claim they do care about what happens after they die. That is, I have plans for what happens to my assets if I die. I have life insurance ("free" through work) that pays to my wife if I die. I wouldn't take a billion dollars on the condition that a third world country would blow up the day after I died.

As you say, though, it's "me-of-the-present" that cares about these things. With the self-modification bit above, really what I mean is "I'd like to self-modify into the sort of person who could say that I cared about what happens after I die and not feel compelled to clarify that I really mean that I think good things are good and that acting as if I cared about good things continuing to happen after I die is probably a better strategy to keep good things happening while I'm alive."

Comment author: [deleted] 18 December 2012 04:29:16PM *  0 points [-]

The original debate appeared to be limited to "live forever" or "dead forever". If that limitation was intentional, and we are deliberately ignoring the possibility of death not being permanent because that would be fighting the hypothetical, then the point below is irrelevant.

However, if we should consider the possibility that after a death a person might have a chance of being resurrected by a future sapient, then at that point, keeping future sapients alive might have potential value to that person, even if they didn't care about things that happened when they were dead.

Edit: The second paragraph was originally written in the first person, but it sounded off when I reread it, so I changed the grammar and added slightly more detail.

Comment author: pleeppleep 18 December 2012 03:23:13PM 0 points [-]

You could simulate a debate with someone here taking the opposite point of view. Or better yet, you could take the opposite point of view and hear someone else carry the argument.

Comment author: Eneasz 02 January 2013 06:37:12PM -1 points [-]

Huh. I just wrote a little blurb about this yesterday. Many individuals' utility functions include terms for things that exist outside of themselves. It's trivially simple for such a utility function to be fulfilled by ensuring those things continue even after the person ends.

Comment author: kilobug 19 December 2012 09:18:02AM -1 points [-]

Well, the answer is simple to me: the well-being and happiness of other people, especially my relatives and my friends, but also other humans in general, are part of my terminal values. So I care about what will happen to them after I die, for as long as they're alive. But I don't care much about what would happen after all of humanity is wiped out, if that were to happen.