ETA: I'll be adding things to the list that I think belong there.
I'm assuming a high credence in classical utilitarianism, that AI x-risk is significant (e.g. roughly >10%), and that timelines are not long (e.g. >50% chance of ASI within 100 years). ETA: For the purpose of this list, I don't care about questioning those assumptions.
Here's my current list (off the top of my head):
- not your comparative advantage
- considering other x-risks more threatening (top contenders: bio / nuclear)
- infinite ethics (and maybe other fundamental ethical questions, e.g. to do with moral uncertainty)
- S-risks
- simulation hypothesis
- ETA: AI has high moral value in expectation / by default
- ETA: low tractability (either at present or in general)
- ETA: Doomsday Argument as overwhelming evidence against futures with large numbers of minds
Also, does anyone want to say why they think none of these should change the picture, or point to a good reference discussing this question?
I think infinite ethics will most likely be solved in a way that leaves longtermism unharmed. See my recent comment to William MacAskill on this topic.
Do you have specific candidate solutions in mind?