Re cumulative probability calculations, I just copied the non-cumulative probabilities column from Ajeya Cotra's spreadsheet, where she defines it as the difference between successive cumulative probabilities (I haven't dug deeply enough to know whether she calculates cumulative probabilities correctly). Either way, it makes fairly little difference, given how small the numbers are.
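For concreteness, here is a sketch of the conversion described above, with made-up illustrative values (not the numbers from Cotra's spreadsheet):

```python
# Hypothetical cumulative probabilities P(event by end of period i),
# purely illustrative values, not Cotra's actual estimates.
cumulative = [0.05, 0.08, 0.12, 0.15]

# Non-cumulative (per-period) probability: the difference between
# successive cumulative values, with 0 before the first period.
non_cumulative = [c - p for p, c in zip([0.0] + cumulative[:-1], cumulative)]

print(non_cumulative)  # roughly [0.05, 0.03, 0.04, 0.03]
```

This only recovers correct per-period probabilities if the cumulative column itself is computed correctly, which is the caveat flagged above.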
Re your second point, I basically agree that you should not work on AI Safety from a personal expected utility standpoint, as I address in the caveats. My main crux for this is ...
Fair! I made a throwaway pseudonym I don't anticipate using elsewhere, but this seems like a reasonable criticism of the pseudonym choice.
To be clear, I work on AI Safety for consequentialist reasons, and am aware that it seems overwhelmingly sensible from a longtermist perspective. I was trying to make the point that it also makes sense from a bunch of other perspectives, including perspectives that feed into my motivation system better. It would still be worth working on even if this weren't the case, but I think it's a point worth making.