The difficulty is still largely due to the security problem. Without catastrophic risks (including UFAI and value drift), we could take as much time as necessary and/or go with making people smarter first.
The aspect of FAI that is supposed to solve the security problem is optimization power aimed at correct goals. Optimization power addresses the "external" threats (and ensures progress), and correctness of goals represents "internal" safety. If an AI has sufficient optimization power, the (external) security problem is taken care of, even if the goals are given by a complicated definition that the AI is unable to evaluate at the beginning: it'll protect the original definition even without knowing what it evaluates to, and aim to evaluate it (for instrumental reasons).
This suggests that a minimal solution is to pack all the remaining difficulties into the AI's goal definition, at which point the only object-level problems are to figure out what a sufficiently general notion of "goal" is (decision theory; the aim of this part is to give the goal definition sufficient expressive power, to avoid constraining its decisions while extracting the optimization part...
because I think what we really want, and what MIRI seems to be working towards, is closer to "optimality": create an AI that minimizes the expected amount of astronomical waste.
Astronomical waste is a very specific concept arising from a total utilitarian theory of ethics. That this is "what we really want" seems highly unobvious to me; as someone who leans towards negative utilitarianism, I would personally reject it.
I'd be happy with an AI that makes people on Earth better off without eating the rest of the universe, and gives us the option to eat the universe later if we want to...
If the AI doesn't take over the universe first, how will it prevent Malthusian uploads, burning of the cosmic commons, private hell simulations, and such?
"create an AI that minimizes the expected amount of astronomical waste"
Of course, this is still just a proxy measure... say that we're "in a simulation", or that there are already superintelligences in our environment who won't let us eat the stars, or something like that—we still want to get as good a bargaining position as we possibly can, or to coordinate with the watchers as well as we possibly can, or in a more fundamental sense we want to not waste any of our potential, which I think is the real driving intuition here. (Further...
Normative AI - Solve all of the philosophical problems ahead of time, and code the solutions into the AI.
Black-Box Metaphilosophical AI - Program the AI to use the minds of one or more human philosophers as a black box to help it solve philosophical problems, without the AI builders understanding what "doing philosophy" actually is.
White-Box Metaphilosophical AI - Understand the nature of philosophy well enough to specify "doing philosophy" as an algorithm and code it into the AI.
So after giving this issue some thought: I'm not sure ...
Black-Box Metaphilosophical AI is also risky, because it's hard to test/debug something that you don't understand.
On the other hand, to the extent that our uncertainty about whether different BBMAI designs do philosophy correctly is independent, we can build multiple ones and see what outputs they agree on. (Or a design could do this internally, achieving the same effect.)
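To make the redundancy argument concrete, here is a minimal sketch (in Python, with stand-in stubs, since the actual designs are exactly the thing we don't have) of accepting an answer only when most of several independently built black boxes agree on it:

```python
# Toy sketch of cross-checking several independently designed "black-box
# metaphilosophy" modules: accept an answer only when a clear majority of
# them agree. The designs here are hypothetical stand-in stubs.
from collections import Counter

def consensus_answer(designs, question, threshold=0.75):
    """Return an answer only if at least `threshold` of the designs agree on it."""
    answers = [design(question) for design in designs]
    answer, count = Counter(answers).most_common(1)[0]
    if count / len(answers) >= threshold:
        return answer
    return None  # insufficient agreement: withhold judgment

# Stand-in stubs for illustration only:
designs = [lambda q: "A", lambda q: "A", lambda q: "B", lambda q: "A"]
print(consensus_answer(designs, "some philosophical question"))  # -> "A"
```

Of course, agreement is evidence of correctness only to the extent that the designs' errors really are independent, which is the premise of the comment above.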
it's unclear why such an AI won't cause disaster in the time period before it achieves philosophical competence.
This seems to be an argument for building a hybrid of what you call...
create an AI that minimizes the expected amount of astronomical waste
I prefer the more cheerfully phrased "Converts the reachable universe to QALYs" but same essential principle.
Perhaps you could make a taxonomy like yours when talking about a formally-defined singleton, which we might expect society to develop eventually. But I haven't seen strong arguments that we would need to design such a singleton starting from anything like our current state of knowledge. The best reason I know of that we might need to solve this problem soon is the possibility of a fast takeoff, which still seems reasonably unlikely (say < 10% probability) but is certainly worth thinking about more carefully in advance.
But even granting a fast tak...
Just a minor terminology quibble: the “black” in “black-box” does not refer to the color, but to the opacity of the box; i.e., we don’t know what’s inside. “White-box” isn’t an obvious antonym in the sense I think you want.
“Clear-box” would better reflect the distinction that what's inside isn't unknown (i.e., it's visible and understandable). Or perhaps “open-box” might be even better, since not only do we know how it works, but we also put it there.
astronomical waste
Just to be clear, you are proposing that mere friendliness is insufficient, and we also want optimality with respect to getting as much of the cosmos as we can? This seems contained in friendliness, but OK. You are not proposing that optimally taking over the universe is sufficient for friendliness, right?
white box metaphilosophy
I've been thinking a lot about this, and I also think this is most likely to work. On general principle, understanding the problem and indirectly solving it is more promising than trying to solve the proble...
I put "Friendliness" in quotes in the title, because I think what we really want, and what MIRI seems to be working towards, is closer to "optimality"
Meh. If we can get a safe AI, we've essentially done the whole of the work. Optimality can be tacked on easily at that point, bearing in mind that what may seem optimal to some may be an utter hellish disaster to others (see Repugnant Conclusion), so some sort of balanced view of optimality will be needed.
"create an AI that minimizes the expected amount of astronomical waste."
How is that defined? I would expect that minimizing astronomical waste would be the same as maximizing the amount used for intrinsically valuable things, which would be the same as maximizing utility.
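One way to make that equivalence explicit (a sketch, where W is astronomical waste, V the value actually realized, and V_max the value attainable in principle, all hypothetical placeholders rather than anything defined in the post):

```latex
W = V_{\max} - V
\quad\Longrightarrow\quad
\arg\min_{\pi} \, \mathbb{E}[W \mid \pi] \;=\; \arg\max_{\pi} \, \mathbb{E}[V \mid \pi]
```

Since V_max does not depend on the AI's policy, minimizing expected waste and maximizing expected realized value select the same policy; the substantive question is what counts as value, not which of the two framings one uses.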
I've often stated my objections to MIRI's plan to build an FAI directly (instead of after human intelligence has been substantially enhanced).
Human intelligence is being enhanced more substantially all the time. No doubt all parties will use the tools available - increasingly including computer-augmented minds as time passes.
So: I'm not clear about where it says that this is their plan.
When you say "fast takeoff" do you mean the speed of the takeoff (how long it takes from start to superintelligence) or the timing of it (how far away it is from now)?
I mean speed. It seems like you are relying on an assumption of a rapid transition from a world like ours to a world dominated by superhuman AI, whereas typically I imagine a transition that lasts at least years (which is still very fast!) during which we can experiment with things, develop new approaches, etc. In this regime many more approaches are on the table.
Superintelligent AIs controlled by human owners, even if that's possible, seem like a terrible idea, because humans aren't smart or wise enough to handle such power without hurting themselves. I wouldn't even trust myself to control such an AI, much less a more typical, less reflective human.

It seems like you are packing a wide variety of assumptions in here, particularly about the nature of control and about the nature of the human owners.
or by quickly bootstrapping to a better prepared society

Not sure what you mean by this. Can you expand?
Even given shaky solutions to the control problem, it's not obvious that you can't move quickly to a much better prepared society, via better solutions to the control problem, further AI work, brain emulations, significantly better coordination or human enhancement, etc.
Regarding your parenthetical "because of", I think the "need" to design such a singleton comes from the present opportunity to build such a singleton, which may not last. For example, suppose your scenario of superintelligent AIs controlled by human owners becomes reality (putting aside my previous objection). At that time we can no longer directly build a singleton, and those AI/human systems may not be able to, or want to, merge into a singleton. They may instead just spread out into the universe in an out-of-control manner, burning the cosmic commons as they go.
This is an interesting view (in that it isn't what I expected). I don't think that the AIs are doing any work in this scenario, i.e., if we just imagined normal humans going on their way without any prospect of building much smarter descendants, you would make similar predictions for similar reasons? If so, this seems unlikely given the great range of possible coordination mechanisms many of which look like they could avert this problem, the robust historical trends in increasing coordination ability and scale of organization, etc. Are there countervailing reasons to think it is likely, or even very plausible? If not, I'm curious about how the presence of AI changes the scenario.
There are all kinds of ways for this to go badly wrong, which have been extensively discussed by Eliezer and others on LW. To summarize, the basic problem is that human concepts are too fuzzy and semantically dependent on how human cognition works. Given the complexity and fragility of value and the likely alien nature of AI cognition, it's unlikely that AIs will share our concepts closely enough to obtain a sufficiently accurate model of "what I would want" through this method. (ETA: Here is a particularly relevant post by Eliezer.)
I don't find these arguments particularly compelling as a case for "there is very likely to be a problem," though they are more compelling as an indication of "there might be a problem."
In general, it seems that the burden of proof is on someone who claims "Surely X" in an environment which is radically unlike any environment we have encountered before. I don't think that any very compelling arguments have been offered here, just vague gesturing. I think it's possible that we should focus on some of these pessimistic possibilities because we can have a larger impact there. But your (and Eliezer's) claims go further than this, suggesting that it isn't worth investing in interventions that would modestly improve our ability to cope with difficulties (respectively, clarifying our understanding of AI and human empowerment, both of which slightly speed up AI progress), because the probability is so low. I think this is a plausible view, but it doesn't look like the evidence supports it to me.
Since you seem to bring up ideas that others have already considered and rejected, I wonder if perhaps you're underestimating how much we've thought about this? (Or were you already aware of their rejection and just wanted to indicate your disagreement?)
I'm certainly aware of the points you've raised, and at least a reasonable fraction of the thinking that has been done in this community on these topics. Again, I'm happy with these arguments (and have made many of them myself) as a good indication that the issue is worth taking seriously. But I think you are taking this "rejection" much too seriously in this context. If someone said "maybe X will work" and someone else said "maybe X won't work," I wouldn't then leave X off (long) lists of reasons why things might work, even if I agreed with them.
This is getting a bit too long for a point-by-point response, so I'll pick what I think are the most productive points to make. Let me know if there's anything in particular you'd like a response on.
It seems like you are relying on an assumption of a rapid transition from a world like ours to a world dominated by superhuman AI.
I try not to assume this, but quite possibly I'm being unconsciously biased in that direction. If you see any place where I seem to be implicitly assuming this, please point it out, but I think my argument applies even if the tr...
I put "Friendliness" in quotes in the title, because I think what we really want, and what MIRI seems to be working towards, is closer to "optimality": create an AI that minimizes the expected amount of astronomical waste. In what follows I will continue to use "Friendly AI" to denote such an AI since that's the established convention.
I've often stated my objections to MIRI's plan to build an FAI directly (instead of after human intelligence has been substantially enhanced). But it's not because, as some have suggested while criticizing MIRI's FAI work, we can't foresee what problems need to be solved. I think it's because we can largely foresee what kinds of problems need to be solved to build an FAI, but they all look superhumanly difficult, either due to their inherent difficulty, or the lack of opportunity for "trial and error", or both.
When people say they don't know what problems need to be solved, they may be mostly talking about "AI safety" rather than "Friendly AI". If you think in terms of "AI safety" (i.e., making sure some particular AI doesn't cause a disaster) then that does look like a problem that depends on what kind of AI people will build. "Friendly AI" on the other hand is really a very different problem, where we're trying to figure out what kind of AI to build in order to minimize astronomical waste. I suspect this may explain the apparent disagreement, but I'm not sure. I'm hoping that explaining my own position more clearly will help figure out whether there is a real disagreement, and what's causing it.
The basic issue I see is that there is a large number of serious philosophical problems facing an AI that is meant to take over the universe in order to minimize astronomical waste. The AI needs a full solution to moral philosophy to know which configurations of particles/fields (or perhaps which dynamical processes) are most valuable and which are not. Moral philosophy in turn seems to have dependencies on the philosophy of mind, consciousness, metaphysics, aesthetics, and other areas. The FAI also needs solutions to many problems in decision theory, epistemology, and the philosophy of mathematics, in order to not be stuck with making wrong or suboptimal decisions for eternity. These essentially cover all the major areas of philosophy.
For an FAI builder, there are three ways to deal with the presence of these open philosophical problems, as far as I can see. (There may be other ways for the future to turn out well without the AI builders making any special effort, for example if being philosophical is just a natural attractor for any superintelligence, but I don't see any way to be confident of this ahead of time.) I'll name them for convenient reference, but keep in mind that an actual design may use a mixture of approaches.
The problem with Normative AI, besides the obvious inherent difficulty (as evidenced by the slow progress of human philosophers after decades, sometimes centuries of work), is that it requires us to anticipate all of the philosophical problems the AI might encounter in the future, from now until the end of the universe. We can certainly foresee some of these, like the problems associated with agents being copyable, or the AI radically changing its ontology of the world, but what might we be missing?
Black-Box Metaphilosophical AI is also risky, because it's hard to test/debug something that you don't understand. Besides that general concern, designs in this category (such as Paul Christiano's take on indirect normativity) seem to require that the AI achieve superhuman levels of optimizing power before being able to solve its philosophical problems, which seems to mean that a) there's no way to test them in a safe manner, and b) it's unclear why such an AI won't cause disaster in the time period before it achieves philosophical competence.
White-Box Metaphilosophical AI may be the most promising approach. There is no strong empirical evidence that solving metaphilosophy is superhumanly difficult, simply because not many people have attempted to solve it. But I don't think that a reasonable prior combined with what evidence we do have (i.e., absence of visible progress or clear hints as to how to proceed) gives much hope for optimism either.
To recap, I think we can largely already see what kinds of problems must be solved in order to build a superintelligent AI that will minimize astronomical waste while colonizing the universe, and it looks like they probably can't be solved correctly with high confidence until humans become significantly smarter than we are now. I think I understand why some people disagree with me (e.g., Eliezer thinks these problems just aren't that hard, relative to his abilities), but I'm not sure why some others say that we don't yet know what the problems will be.