
Luke_A_Somers comments on Caring about what happens after you die

Post author: DataPacRat, 18 December 2012 03:13PM




Comment author: Luke_A_Somers, 21 December 2012 05:45:00PM

I'm not sure what you mean. If I were able to construct a utility function for myself, it would depend on my projections of what happens after I die.

It is not my goal to have this sort of utility function.
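
To make that structure concrete: a utility function of the kind described here has terms that take projected post-death events as direct inputs. Below is a minimal Python sketch; the function names, arguments, and the simple additive form are all hypothetical choices for illustration, not anything from the thread.

```python
# A toy utility function whose value depends directly on projected
# post-death outcomes. All names and weights here are hypothetical,
# chosen only to illustrate the structure described above.

def utility(lifetime_outcomes, projected_posthumous_outcomes):
    # Terms over events the agent will experience while alive...
    lived = sum(lifetime_outcomes)
    # ...plus terms over events the agent merely projects will occur
    # after its death. These enter the sum directly, not as a proxy
    # for anything that happens while the agent is alive.
    posthumous = sum(projected_posthumous_outcomes)
    return lived + posthumous

# The two calls differ only in events the agent will never
# experience, yet the utility changes -- the post-death dependence
# is direct rather than instrumental:
print(utility([1.0, 2.0], [5.0]))  # 8.0
print(utility([1.0, 2.0], [0.0]))  # 3.0
```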

Comment author: Bugmaster, 21 December 2012 06:05:13PM

Well, you said that the disagreement between you and Bob comes down to a choice of terminal goals, and that it's therefore pointless for you to try to persuade Bob, or vice versa. I'm trying to figure out which goals are in conflict. I suspect that you care about what happens after you die because doing so helps advance some other goal, not because that's a goal in and of itself (though I could be wrong).

By analogy, a paperclip maximizer would care about securing large quantities of nickel not because it loves nickel for its own sake, but because the nickel would let it create more paperclips, which is its terminal goal.
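
The terminal/instrumental split Bugmaster is drawing can be sketched the same way. In this toy Python model, nickel has no term of its own in the utility function; its value is entirely derived from the paperclips it yields. The conversion rate and function names are invented for illustration.

```python
# Toy paperclip maximizer: nickel carries no weight of its own in the
# utility function; it matters only through the paperclips it yields.
# The conversion rate below is a made-up number, not a real figure.

PAPERCLIPS_PER_KG_NICKEL = 500  # hypothetical conversion rate

def terminal_utility(paperclips):
    # The only thing valued for its own sake.
    return paperclips

def instrumental_value_of_nickel(kg_nickel):
    # Nickel's value is entirely derived: it equals the terminal
    # utility of the paperclips it can be turned into.
    return terminal_utility(kg_nickel * PAPERCLIPS_PER_KG_NICKEL)

print(instrumental_value_of_nickel(10))  # 5000
```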

Comment author: Luke_A_Somers, 21 December 2012 07:18:30PM

Your guessed model of my morality breaks causality. I'm pretty sure that's not a feature of my preferences.

Comment author: Bugmaster, 22 December 2012 09:44:15PM

Your guessed model of my morality breaks causality.

That rhymes, but I'm not sure what it means.

Comment author: Luke_A_Somers, 24 December 2012 01:33:43AM

How could I care about things that happen after I die only as instrumental values, as a way to affect things that happen before I die? That would require later events to cause earlier ones.

Comment author: Bugmaster, 24 December 2012 02:06:27AM

I don't know about you personally, but consider a paperclip maximizer. It cares about paperclips; its terminal goal is to maximize the number of paperclips in the Universe. If this agent is mortal, it would absolutely care about what happens after its death: it would want the number of paperclips in the Universe to continue to increase. It would pursue various strategies to ensure this outcome, while simultaneously trying to produce as many paperclips as possible during its lifetime.
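
A minimal sketch of that mortal maximizer, with all schedules and numbers invented for illustration: the utility function sums paperclips over all time, so a strategy that raises post-death production can beat one that maximizes only lifetime output.

```python
# Toy mortal paperclip maximizer, all numbers invented for
# illustration. Utility is total paperclips over all time, so clips
# produced after the agent's death count just as much as clips
# produced during its lifetime.

DEATH_TIME = 3  # schedule entries from this index on occur post-death

def total_paperclips(production_schedule):
    # Terminal goal: paperclips in the Universe, whenever they appear.
    return sum(production_schedule)

# Strategy A: produce flat out while alive, leave nothing behind.
strategy_a = [10, 10, 10] + [0, 0, 0]
# Strategy B: divert some lifetime output into a successor factory
# that keeps producing after the agent is gone.
strategy_b = [8, 8, 8] + [9, 9, 9]

print(total_paperclips(strategy_a))  # 30
print(total_paperclips(strategy_b))  # 51 -- the maximizer picks B
print(sum(strategy_b[DEATH_TIME:]))  # 27 clips it will never see
```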

Comment author: Luke_A_Somers, 24 December 2012 05:17:05AM

But that's quite directly caring about what happens after you die. How is that supposed to be an example of caring about what happens after death only instrumentally?