Inspired by "Don't Plan For the Future."
For the purposes of discussion on this site, a Friendly AI is assumed to be one that shares our terminal values. It's a safe genie that doesn't need to be told what to do, but anticipates how to best serve the interests of its creators. Since our terminal values are a function of our evolutionary history, it seems reasonable to assume that an FAI created by one intelligent species would not necessarily be friendly to other intelligent species, and that being subsumed by another species' FAI would be fairly catastrophic.
Except... doesn't that seem kind of bad? Suppose I were able to create a strong AI, and it created a sound fun-theoretic utopia for human beings, but then proceeded to expand, subsume extraterrestrial intelligences, and subject them to something they considered a fate worse than death. I would have to regard that as a major failing of my design. My utility function assigns value to the desires of beings whose values conflict with my own. I can't allow other values to supersede mine, but absent other considerations, I have to assign negative utility in my own function to creating negative utility in the functions of other existing beings. I'm skeptical that an AI that would impose catastrophe on other thinking beings is really maximizing my utility.
It seems to me that to truly maximize my utility, an AI would need to give some consideration to the utility of other beings. Secondary consideration, perhaps, but it could not maximize my utility by simply treating them as raw material for tiling the universe with my utopian civilization.
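To make that "secondary consideration" concrete, here's a minimal sketch of the kind of composite utility I have in mind. Everything in it is an illustrative assumption on my part (the function name, the linear weighting, and the 0.1 discount are all invented), not a claim about how a real FAI would aggregate values:

```python
# Illustrative sketch only: my utility dominates, but other beings'
# utilities enter as a discounted secondary term rather than being
# ignored entirely. The 0.1 discount is an arbitrary assumption.

def composite_utility(my_utility: float, others_utilities: list[float],
                      discount: float = 0.1) -> float:
    return my_utility + discount * sum(others_utilities)

# A "utopia for me, fate worse than death for them" outcome can score
# worse than a slightly less optimized but non-catastrophic one:
tiling = composite_utility(100.0, [-1000.0, -1000.0])  # 100 - 200 = -100
coexist = composite_utility(90.0, [10.0, 10.0])        # 90 + 2 = 92
assert coexist > tiling
```

The point of the discount factor is just that catastrophes imposed on other beings can outweigh marginal gains to me, without their values ever superseding mine.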
Perhaps my utility function gives more weight than most to beings that don't share my values (full disclosure: I prefer the "false" ending of Three Worlds Collide, though I don't consider it ideal). However, if an AI imposes truly catastrophic fates on other intelligent beings, my own utility function takes such a hit that I cannot consider that AI friendly. A true Friendly AI would need to be at least passably friendly to other intelligences to satisfy me.
I don't know whether I've finally come to terms with Eliezer's understanding of how hard Friendly AI is, or made it much, much harder, but it gives me a somewhat humbling perspective on the true scope of the problem.
The key question is how much I trust the (hypothetical) CEV-extracting algorithm that developed the FAI to actually do what its programmers intended.
If I think it's more reliable than my own bias-ridden thinking, and the FAI it produces does something I reject -- for example, starts disassembling alien civilizations to replace them with a human utopia -- then presumably I should be skeptical of my rejection. The most plausible interpretation of that event is that my rejection is a symptom of my cognitive biases.
Conversely, if I am not skeptical of my rejection -- if I watch the FAI disassembling aliens and I say "No, this is not acceptable" and try to stop it -- it follows that I don't think the process is more reliable than my own thinking.
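To put rough numbers on that trust calculus, here's a toy Bayesian calculation. All the probabilities are invented for illustration, and it assumes for simplicity that exactly one of us is wrong whenever we disagree; but it shows why crediting the process with higher reliability than myself commits me to deferring:

```python
# Toy illustration: when the FAI and I disagree, whom should I believe?
# Assumes exactly one of us errs in any disagreement; all numbers invented.

def p_fai_right_given_disagreement(p_fai_reliable: float,
                                   p_me_reliable: float) -> float:
    fai_right_me_wrong = p_fai_reliable * (1 - p_me_reliable)
    me_right_fai_wrong = p_me_reliable * (1 - p_fai_reliable)
    return fai_right_me_wrong / (fai_right_me_wrong + me_right_fai_wrong)

# If I credit the CEV process with 99% reliability and my own bias-ridden
# judgment with 90%, then on watching it disassemble aliens:
print(p_fai_right_given_disagreement(0.99, 0.90))  # ~0.917
```

So if I nonetheless act to stop it, I'm revealing that my real estimate of the process's reliability is lower than my estimate of my own.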
As I've said before, I suspect that an actual CEV-maximizing AI would do any number of things that quite a few humans (including me) would be horrified by, precisely because I suspect that quite a few humans (including me) would be horrified by the actual implications of their own values being maximally realized.
I see reliability and friendliness as separate questions. An AI might possess epistemic and instrumental rationality superior to ours but not share our terminal values, in which case I think it makes sense to regard it as reliable but not friendly.