(This was originally a comment that I wrote as a follow up to my question for William MacAskill's AMA. I'm moving it since it's perhaps more on-topic here.)
It occurs to me that another reason for the lack of engagement by people with philosophy backgrounds may be that philosophers aren't aware of the many philosophical problems in AI alignment that they could potentially contribute to. So here's a list of philosophical problems that have come up just in my own thinking about AI alignment.
- Decision theory for AI / AI designers
- How to resolve standard debates in decision theory?
- Logical counterfactuals
- Open source game theory
- Acausal game theory / reasoning about distant superintelligences
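To make the "open source game theory" bullet concrete: the setting is one where agents can read each other's source code before choosing an action. A minimal toy sketch (the `clique_bot`/`defect_bot` players here are my own illustrative inventions, not anything from the literature's formal treatments) in a one-shot Prisoner's Dilemma:

```python
# Toy sketch of open-source game theory: each player in a one-shot
# Prisoner's Dilemma is shown the other player's source code before
# choosing "C" (cooperate) or "D" (defect).
import inspect

def clique_bot(opponent_source: str) -> str:
    """Cooperate only with exact copies of itself."""
    return "C" if opponent_source == inspect.getsource(clique_bot) else "D"

def defect_bot(opponent_source: str) -> str:
    """Always defect, regardless of the opponent's code."""
    return "D"

def play(p1, p2):
    """Run one game: each player receives the other's source."""
    s1, s2 = inspect.getsource(p1), inspect.getsource(p2)
    return p1(s2), p2(s1)

print(play(clique_bot, clique_bot))  # ('C', 'C') -- mutual cooperation
print(play(clique_bot, defect_bot))  # ('D', 'D') -- no exploitation
```

Even this crude source-matching rule achieves mutual cooperation with copies of itself while remaining unexploitable, which hints at why equilibria among code-transparent agents differ from the classical analysis; the philosophically interesting versions involve players *proving* things about each other's behavior rather than matching source text.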
- Infinite/multiversal/astronomical ethics
- Should we (or our AI) care much more about a universe that is capable of doing a lot more computations?
- What kinds of discounting (e.g., spatio-temporal) are necessary and/or desirable?
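To illustrate what's at stake in the discounting question: a standard exponential discount (the numbers below are made up for illustration) drives the weight of temporally distant value toward zero geometrically, which is exactly why it's contentious when astronomical timescales are involved.

```python
# Toy illustration of temporal discounting: weight a value v located at
# temporal distance t by an exponential factor exp(-rate * t).
# Whether any positive rate is ethically defensible is the open question.
import math

def discounted_value(v: float, t: float, rate: float) -> float:
    """Exponentially discount value v at temporal distance t."""
    return v * math.exp(-rate * t)

print(discounted_value(100.0, 10.0, 0.05))    # ~60.65: modest near-term discount
print(discounted_value(100.0, 1000.0, 0.05))  # ~1.9e-20: the far future vanishes
```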
- Fair distribution of benefits
- How should benefits from AGI be distributed?
- For example, would it be fair to distribute them equally among all humans who currently exist, or in proportion to how much AI service each person can afford to buy?
- What about people who existed or will exist at other times and in other places or universes?
- Need for "metaphilosophical paternalism"?
- However we distribute the benefits, if we let the beneficiaries decide what to do with their windfall using their own philosophical faculties, is that likely to lead to a good outcome?
- Metaphilosophy
- What is the nature of philosophy?
- What constitutes correct philosophical reasoning?
- How could this be specified as part of an AI design?
- Philosophical forecasting
- How are various AI technologies and AI safety proposals likely to affect future philosophical progress (relative to other kinds of progress)?
- Preference aggregation between AIs and between users
- How should two AIs that want to merge with each other aggregate their preferences?
- How should an AI aggregate preferences between its users?
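One candidate answer to the aggregation questions above, sketched to show where the philosophical difficulty lives: a Harsanyi-style weighted sum of utilities. The users, options, utilities, and weights below are all made up for illustration.

```python
# Toy sketch of one preference-aggregation rule: maximize a weighted
# sum of users' utilities over a shared set of options.

def aggregate(utilities: dict[str, dict[str, float]],
              weights: dict[str, float]) -> str:
    """Return the option maximizing the weighted sum of utilities."""
    options = next(iter(utilities.values())).keys()
    return max(options,
               key=lambda o: sum(weights[u] * utilities[u][o]
                                 for u in utilities))

utilities = {
    "alice": {"option_a": 1.0, "option_b": 0.0},
    "bob":   {"option_a": 0.2, "option_b": 0.9},
}
print(aggregate(utilities, {"alice": 0.5, "bob": 0.5}))  # option_a (0.6 > 0.45)
```

The mechanics are trivial; the open problems are upstream of the code: where the weights come from, whether interpersonal utility comparisons are meaningful at all, and how the answers change when the parties are AIs merging rather than humans being served.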
- Normativity for AI / AI designers
- What is the nature of normativity? Do we need to make sure an AGI has a sufficient understanding of this?
- Metaethical policing
- What are the implicit metaethical assumptions in a given AI alignment proposal (in case the authors didn't spell them out)?
- What are the implications of an AI design or alignment proposal under different metaethical assumptions?
- Encouraging designs that make minimal metaethical assumptions, or that are likely to lead to good outcomes regardless of which metaethical theory turns out to be true.
- (Nowadays AI alignment researchers seem to be generally good about not placing too much confidence in their own moral theories, but the same can't always be said of their metaethical ideas.)
I should have written something to encourage others to list their own problems that philosophers can contribute to, but was in a hurry to move my existing comment over to this forum and forgot, so thanks for doing that anyway.
About your list, I agree with 3 (and was planning to add something like it as a bullet point under "metaphilosophy" but again forgot).
For 1, I'm not sure there's much for philosophers to do there, which may in part be because I don't correctly understand Dennett’s work on the intentional stance. I wonder if you can explain more or point to an explanation of Dennett's work that would make sense for someone like me? (Is he saying anything beyond that it's sometimes useful to make a prediction by treating something as if it's a rational agent, which by itself seems like a rather trivial point?)
For 2, it seems like rather than being disjoint from my list, it actually subsumes a number of items on my list. For example, one reason I think AGI will likely be dangerous is that it will likely differentially accelerate other forms of intellectual progress relative to philosophical progress. So wouldn't laying out this argument "as rigorously and comprehensively as possible" involve making substantial progress on that item on my list?