No one else seems to be giving what is, IMO, the correct answer: I want the values of a created FAI to match my own, extrapolated, i.e., moral selfishness.
I would actually prefer that the extrapolation seed be drawn only from SI supporters (or ideally just me, but that's unlikely to fly), because I'm uneasy about what happens if some of my values turn out to be memetic and get swamped/outvoted by a coherent extrapolated deathist or hedonist memplex. Or about what happens if you include, for example, uplifted sharks in the process.
I too would prefer a super AI to look to my values when deciding what to implement.
But, given the existence of moral disagreement, I don't see why that deserves to be labeled Friendly. The whole point of CEV or a similar process is to figure out what is awesome for humanity; implementing something other than what is awesome for all of humanity is not Friendly.
If deathism really is what is awesome for all of humanity, I expect an FAI to implement deathism. But there's no particular reason to believe that it is.