The assumptions I criticize may be providing much of the motivation, but you can think that, e.g., solving the problem of ethical stability under self-modification is important without believing any of that; and meanwhile, many people must be encountering the concept of Friendly AI as part of a package that includes futurist maximalism and peculiar metaphysics. I suppose I'm saying that Friendly AI needs to be saved from the subculture that most vigorously supports it, because the project contains ideas that really are important.
How about we plant seeds for a new culture intended to design mechanism-designing mechanisms? One seed institution will be this comment exchange. We can bootstrap from there.
The project of Friendly AI would benefit from being approached in a much more down-to-earth way. Discourse about the subject seems to be dominated by a set of possibilities that are given far too much credence:
Add up all of that, and you have a great recipe for enjoyable irrelevance. Negate every single one of those ideas, and you have an alternative set of working assumptions that are still consistent with the idea that Friendly AI matters, and that are much better suited to practical success:
The simplest reason to care about Friendly AI is that we are going to be coexisting with AI, so we should want it to be something we can live with. I don't see that anything important would be lost by strongly foregrounding the second set of assumptions and treating the first set as mere possibilities, rather than as the working hypothesis about reality.
[Earlier posts on related themes: practical FAI, FAI without "outsourcing".]