How could I care about things that happen after I die, except as instrumental values, i.e., as means of affecting things that happen before I die?
I don't know about you personally, but consider a paperclip maximizer. It cares about paperclips; its terminal goal is to maximize the number of paperclips in the Universe. If this agent is mortal, it would absolutely care about what happens after its death: it would want the number of paperclips in the Universe to keep increasing even then. So it would pursue strategies to ensure that outcome, while simultaneously producing as many paperclips as possible during its own lifetime.
More than once, I've had a conversation roughly similar to the following:
Me: "I want to live forever, of course; but even if I don't, I'd still like for some sort of sapience to keep on living."
Someone else: "Yeah, so? You'll be dead, so how/why should you care?"
I've tried explaining that it's the present me who cares about which sort of future comes to pass, but I haven't found a way to say it that doesn't fall flat. Do you have any thoughts on how to frame this idea better?