In the comments on this post (which in retrospect I feel was not very clearly written), someone linked me to a post Eliezer wrote five years ago, "The Hidden Complexity of Wishes." After reading it, I think I've figured out why the term "Friendly AI" is used so inconsistently.
This post explicitly lays out a view that seems to be implicit in, but not entirely clear from, many of Eliezer's other writings. That view is this:
There are three kinds of genies: Genies to whom you can safely say "I wish for you to do what I should wish for"; genies for which no wish is safe; and genies that aren't very powerful or intelligent.
Even if Eliezer is right about that, I think that view has led to confusing usage of the term "Friendly AI." If you accept Eliezer's view, it may seem to make sense not to worry too much about whether by "Friendly AI" you mean:
1. A utopia-making machine (the AI "to whom you can safely say, 'I wish for you to do what I should wish for'"), or
2. A non-doomsday machine (a doomsday machine being the AI "for which no wish is safe").
And it would make sense not to worry too much about that distinction if you were talking only to people who also believe those two concepts are very nearly co-extensive for powerful AI. But failing to make that distinction is obviously going to be confusing when you're talking to people who don't believe that, and it will make it harder to communicate both your ideas and your reasons for holding them.
One solution would be to link people back to "The Hidden Complexity of Wishes" more often (or to other writing by Eliezer that makes similar points; what else would be suitable?). But while it's a good post, and Eliezer makes some very good points with the "Outcome Pump" thought experiment, the argument isn't entirely convincing.
As Eliezer himself has argued at great length (see also section 6.1 of this paper), humans' own understanding of our values is far from perfect. None of us are, right now, qualified to design a utopia. But we do have some understanding of our own values: we can identify some things that would be improvements over our current situation, while marking other scenarios as "this would be a disaster." It seems like there might be a point in the future where we can design an AI whose understanding of human values is similarly serviceable, though no better than our own.
Maybe I'm wrong about that. But even if I am, until there's a better, easy-to-read explanation of why I'm wrong for everybody to link to, it would be helpful to have different terms for (1) and (2) above. Perhaps call them "utopia AI" and "safe AI," respectively?
I think part of the issue is that while Eliezer's conception of these problems has continued to evolve, we keep pointing, and being pointed, back to posts he only partially agrees with. We might chart a more accurate position by winding through a thousand comments, but that's a difficult thing to do.
To pick one example from a recent thread, here he adjusts (or flags for adjustment) his thinking on Oracle AI, but someone who missed that would have no idea from reading older articles.
It seems like our local SI representatives recognize the need for an up-to-date summary document to point people to. Until then, our current refrain of "read the sequences" will grow increasingly misleading as more and more updates and revisions are spread across years of comments (that said, I still think people should read the sequences :) ).
Maybe this is what you're implying is already in progress, but if the main issue is that parts of the sequences are out of date, maybe Eliezer could commission a set of people who've been following the discussion all along to write review pieces, drawing on all the best comments, that describe how they would themselves "rediscover" the conclusions of the aspect of the sequences they are responsible for (with links back to original ...