(examples chosen for being at different points in the spectrum between the two options, not for being likely)
Moral Universalism could be true in some sense, but not automatically compelling, and the AI would need to be programmed to find and/or follow it.
There could be a uniquely specified human morality that fulfills much of the same purpose Moral Universalism does for humans.
It might be possible to specify what we want in a more dynamic way than freezing in current customs.
> Moral Universalism could be true in some sense, but not automatically compelling, and the AI would need to be programmed to find and/or follow it.
My original post had this possibility, where you make an AI that develops much of the morality itself (which it would really have to). Edit: note that the AI in question may be just a theorem prover that tries to find some universal moral axioms, but is not itself moral or compelled to implement anything in the real world.
> ...There could be a uniquely specified human morality that fulfills much of the same purpose Moral Universalism does for humans.
I laughed: SMBC comic.