All of AISafetyIsNotLongtermist's Comments + Replies

To be clear, I work on AI Safety for consequentialist reasons, and am aware that it seems overwhelmingly sensible from a longtermist perspective. I was trying to make the point that it also makes sense from a bunch of other perspectives, including perspectives that feed better into my motivation system. It would still be worth working on even if this weren't the case, but I think it's a point worth making.

Re cumulative probability calculations, I just copied the non-cumulative probabilities column from Ajeya Cotra's spreadsheet, where she defines it as the difference between successive cumulative probabilities (I haven't dug deeply enough to know whether she calculates cumulative probabilities correctly). Either way, it makes fairly little difference, given how small the numbers are.
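To spell out the conversion being described: the per-period (non-cumulative) probability for each year is just the difference between successive entries of the cumulative column. A minimal sketch, using made-up illustrative numbers rather than Cotra's actual figures:

```python
# Illustrative cumulative probabilities of AGI by successive years
# (hypothetical values, NOT from Cotra's spreadsheet).
cumulative = [0.05, 0.12, 0.20, 0.35]

# Per-year probability = difference between successive cumulative entries;
# the first entry is just the first cumulative value itself.
non_cumulative = [cumulative[0]] + [
    later - earlier for earlier, later in zip(cumulative, cumulative[1:])
]

# Sanity check: the per-year probabilities sum back to the final cumulative value.
assert abs(sum(non_cumulative) - cumulative[-1]) < 1e-9
```

One easy consistency check on any such spreadsheet is exactly that final assertion: the non-cumulative column should sum to the last cumulative entry.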

Re your second point, I basically agree that you should not work on AI Safety from a personal expected utility standpoint, as I address in the caveats. My main crux for this is ... (read more)

Vladimir_Nesov
Note that if AI risk doesn't kill you, but you survive to see AGI plus a few years, then you probably get to live however long you want, at much higher quality, so the QALY loss from AI risk in this scenario is not bounded by the no-AGI figure.
TekhneMakre
So then it is a longtermist cause, isn't it? It's something that some people (longtermists) want to collaborate on, because it's worth the effort, and that some people don't. I mean, there can be other reasons to work on it, like wanting your grandchildren to exist, but still.

Fair! I made a throwaway pseudonym I don't anticipate using elsewhere, but this seems like a reasonable criticism of the pseudonym choice.