Many people on Less Wrong believe reducing existential risk is one of the most important causes. Most arguments to this effect point out the horrible consequences: everyone now living would die (or face something even worse). The situation becomes even worse if we also consider future generations. Such an argument, as spelt out in Nick Bostrom's latest paper on the topic, for instance, should strike many consequentialists as persuasive. But of course, not everyone's a consequentialist, and on other approaches it's far from obvious that existential risk should come out on top. Might it be worth spending some more time investigating arguments for existential risk reduction that don't presuppose consequentialism? Of course, "non-consequentialism" is a very diverse category, and I'd be surprised if there were a single argument that covered all its members.
Oh, I wouldn't advise you to do something about existential risks first. But once you've signed up for cryonics and are doing your best to live a healthy, safe, and happy life, the only lever left is a safer society. That means attending to a range of catastrophic and existential risks.
I agree, however, that at that point you hit diminishing returns.
Even if I've done all I can directly for my own health, until we reach longevity escape velocity, pushing longevity technology and supporting any direct (computers, genetic engineering) or indirect (cognitive enhancement, productivity enhancements) enabling technologies would seem to give more egoistic and utilitarian bang for the buck, at least if you're focused on the utility of actually existing people.