I put "Friendliness" in quotes in the title, because I think what we really want, and what MIRI seems to be working towards, is closer to "optimality": create an AI that minimizes the expected amount of astronomical waste. In what follows I will continue to use "Friendly AI" to denote such an AI since that's the established convention.
I've often stated my objections to MIRI's plan to build an FAI directly (instead of after human intelligence has been substantially enhanced). But it's not, as some have suggested while criticizing MIRI's FAI work, because we can't foresee what problems need to be solved. I think it's because we can largely foresee what kinds of problems need to be solved to build an FAI, but they all look superhumanly difficult, either due to their inherent difficulty, or the lack of opportunity for "trial and error", or both.
When people say they don't know what problems need to be solved, they may be mostly talking about "AI safety" rather than "Friendly AI". If you think in terms of "AI safety" (i.e., making sure some particular AI doesn't cause a disaster), then that does look like a problem that depends on what kind of AI people will build. "Friendly AI", on the other hand, is really a very different problem, where we're trying to figure out what kind of AI to build in order to minimize astronomical waste. I suspect this may explain the apparent disagreement, but I'm not sure. I'm hoping that explaining my own position more clearly will help figure out whether there is a real disagreement, and what's causing it.
The basic issue I see is that there is a large number of serious philosophical problems facing an AI that is meant to take over the universe in order to minimize astronomical waste. The AI needs a full solution to moral philosophy to know which configurations of particles/fields (or perhaps which dynamical processes) are most valuable and which are not. Moral philosophy in turn seems to have dependencies on the philosophy of mind, consciousness, metaphysics, aesthetics, and other areas. The FAI also needs solutions to many problems in decision theory, epistemology, and the philosophy of mathematics, in order to not be stuck with making wrong or suboptimal decisions for eternity. These essentially cover all the major areas of philosophy.
For an FAI builder, there are three ways to deal with the presence of these open philosophical problems, as far as I can see. (There may be other ways for the future to turn out well without the AI builders making any special effort, for example if being philosophical is just a natural attractor for any superintelligence, but I don't see any way to be confident of this ahead of time.) I'll name them for convenient reference, but keep in mind that an actual design may use a mixture of approaches.
- Normative AI - Solve all of the philosophical problems ahead of time, and code the solutions into the AI.
- Black-Box Metaphilosophical AI - Program the AI to use the minds of one or more human philosophers as a black box to help it solve philosophical problems, without the AI builders understanding what "doing philosophy" actually is.
- White-Box Metaphilosophical AI - Understand the nature of philosophy well enough to specify "doing philosophy" as an algorithm and code it into the AI.
The problem with Normative AI, besides the obvious inherent difficulty (as evidenced by the slow progress of human philosophers after decades, sometimes centuries of work), is that it requires us to anticipate all of the philosophical problems the AI might encounter in the future, from now until the end of the universe. We can certainly foresee some of these, like the problems associated with agents being copyable, or the AI radically changing its ontology of the world, but what might we be missing?
Black-Box Metaphilosophical AI is also risky, because it's hard to test/debug something that you don't understand. Besides that general concern, designs in this category (such as Paul Christiano's take on indirect normativity) seem to require that the AI achieve superhuman levels of optimizing power before being able to solve its philosophical problems, which seems to mean that a) there's no way to test them in a safe manner, and b) it's unclear why such an AI won't cause disaster in the time period before it achieves philosophical competence.
White-Box Metaphilosophical AI may be the most promising approach. There is no strong empirical evidence that solving metaphilosophy is superhumanly difficult, simply because not many people have attempted to solve it. But I don't think that a reasonable prior combined with what evidence we do have (i.e., absence of visible progress or clear hints as to how to proceed) gives much hope for optimism either.
To recap, I think we can largely already see what kinds of problems must be solved in order to build a superintelligent AI that will minimize astronomical waste while colonizing the universe, and it looks like they probably can't be solved correctly with high confidence until humans become significantly smarter than we are now. I think I understand why some people disagree with me (e.g., Eliezer thinks these problems just aren't that hard, relative to his abilities), but I'm not sure why some others say that we don't yet know what the problems will be.
Perhaps you could make a taxonomy like yours when talking about a formally-defined singleton, which we might expect society to develop eventually. But I haven't seen strong arguments that we would need to design such a singleton starting from anything like our current state of knowledge. The best reason I know that we might need to solve this problem soon is the possibility of a fast takeoff, which still seems reasonably unlikely (say < 10% probability) but is certainly worth thinking about more carefully in advance.
But even granting a fast takeoff, it seems quite likely that you can build AIs that "work around" this problem in other ways, particularly by remaining controlled by human owners or by quickly bootstrapping to a better-prepared society. I don't generally see why this would be subject to the same extreme difficulty you describe (the main reason for optimism is our current ignorance about what the situation will look like, and the large number of possible schemes).
And finally, even granting that we need to design such a singleton today (because of a fast takeoff and no realistic prospects for remaining in control), I don't think the taxonomy you offer is exhaustive, and I don't buy the claims of extraordinary difficulty.
There is a broad class of proposals in which an AI has a model of "what I would want" (either by the route that ordinary AI researchers find plausible, in which the AI's concepts are reasonably aligned with human concepts, or by more elaborate formal machinations as in my indirect normativity proposal or a more sophisticated version thereof). It doesn't seem to be the case that you can't test such designs until you are dealing with superhuman AI: normal humans can reason about such concepts, as we do, and as long as you can design any AI which uses such concepts and isn't deliberately deceptive, you can see whether it is doing something sensible. And it doesn't seem to be the case that your concept has to be so robust that it can tile the whole universe, because what you would want involves opportunities for explicit reflection by humans. The more fundamental issue is just that we haven't thought about this much, and so without formal justification I am pretty dubious of any claimed taxonomy or fundamental difficulty.
I agree that there is a solid case for making people smarter. I think there are better indirect approaches to making the world better, though (rather than directly launching into human enhancement). In the rest of the world (and even the rest of the EA community) people are focusing on making the world better in this kind of broad way. And in fairness this work currently occupies the majority of my time. I do think it's reasonably likely that I should focus directly on AI impacts, and that thinking about AI more clearly is the first step, but this is mostly coming from the neglected possibility of human-level AI relatively soon (e.g. < 40 years).
When you say "fast takeoff" do you mean the speed of the takeoff (how long it takes from start to superintelligence) or the timing of it (how far away it is from now)? Because later on you mention "< 40 years" which makes me think you mean that here as well, and timing would also make more ... (read more)