The project of Friendly AI would benefit from a much more down-to-earth approach. Discourse about the subject seems to be dominated by a set of possibilities that are given far too much credence:
- A single AI will take over the world
- A future galactic civilization depends on 21st-century Earth
- 10^n-year lifespans are at stake (n ≥ 3)
- We might be living in a simulation
- Acausal deal-making
- Multiverse theory
Add up all of that, and you have a great recipe for enjoyable irrelevance. Negate every single one of those ideas, and you have an alternative set of working assumptions that are still consistent with the idea that Friendly AI matters, and which are much more suited to practical success:
- There will always be multiple centers of power
- What's at stake is, at most, the future centuries of a solar-system civilization
- No assumption that individual humans can survive even for hundreds of years, or that they would want to
- Assume that the visible world is the real world
- Assume that life and intelligence are about causal interaction
- Assume that the single visible world is the only world we affect or have reason to care about
The simplest reason to care about Friendly AI is that we are going to be coexisting with AI, and so we should want it to be something we can live with. I don't see that anything important would be lost by strongly foregrounding the second set of assumptions, and treating the first set of possibilities just as possibilities, rather than as the working hypothesis about reality.
[Earlier posts on related themes: practical FAI, FAI without "outsourcing".]
I thought another significant difference was that "ethics" doesn't even imply getting as far as "How do I?". An ethical discussion could center on "Would a given scenario be considered torture, and is it better or worse than extinction?"
No, Machine Ethics is the field concerned with exactly the question of how to program ethical machines. For example, Arkin's *Governing Lethal Behavior in Autonomous Robots* is a work in the field of Machine Ethics.
In principle, a philosopher could try to work in Machine Ethics and only do speculative work on, for example, whether it's good to have robots that like torture. But inasmuch as that's a real question, it's relevant to the practical project.