More than once, I've had a conversation roughly similar to the following:
Me: "I want to live forever, of course; but even if I don't, I'd still like for some sort of sapience to keep on living."
Someone else: "Yeah, so? You'll be dead, so how/why should you care?"
I've tried describing how it's the me-of-the-present who's caring about which sort of future comes to pass, but I haven't been able to do so in a way that doesn't fall flat. Might you have any thoughts on how to better frame this idea?
Well, you said that the disagreement between you and Bob comes down to a choice of terminal goals, and thus it's pointless for you to try to persuade Bob and vice versa. I am trying to figure out which goals are in conflict. I suspect that you care about what happens after you die because doing so helps advance some other goal, not because that's a goal in and of itself (though I could be wrong).
By analogy, a paperclip maximizer would care about securing large quantities of nickel not because it loves nickel for its own sake, but because the nickel would allow it to create more paperclips, which is its terminal goal.
Your guessed model of my morality breaks causality. I'm pretty sure that's not a feature of my preferences.