On the subject of how an FAI (Friendly AI) team can avoid accidentally creating a UFAI (Unfriendly AI), Carl Shulman wrote:
If we condition on having all other variables optimized, I'd expect a team to adopt very high standards of proof, and recognize limits to its own capabilities, biases, etc. One of the primary purposes of organizing a small FAI team is to create a team that can actually stop and abandon a line of research/design (Eliezer calls this "halt, melt, and catch fire") that cannot be shown to be safe (given limited human ability, incentives and bias).
In the history of philosophy, there have been many steps in the right direction, but virtually no significant problems have been fully solved, such that philosophers can agree that some proposed idea is the last word on a given subject. An FAI design involves making many explicit or implicit philosophical assumptions, many of which may then become fixed forever as governing principles for a new reality. They'll end up being the last word on their subjects, whether we like it or not. Given the history of philosophy and applying the outside view, how can an FAI team possibly reach "very high standards of proof" regarding the safety of a design? And if we can foresee that they can't, then what is the point of aiming for that predictable outcome now?
Until recently I hadn't paid much attention to the discussions here about inside view vs outside view, because they have tended to focus on the applicability of these views to predicting an intelligence explosion. It seemed obvious to me that outside views can't possibly rule out intelligence explosion scenarios, and that even a small probability of a future intelligence explosion would justify a much higher level of investment than we currently make in preparing for that possibility. But given that the inside vs outside view debate may also be relevant to the "FAI Endgame", I read up on Eliezer's and Luke's most recent writings on the subject... and found them to be unobjectionable. Here's Eliezer:
On problems that are drawn from a barrel of causally similar problems, where human optimism runs rampant and unforeseen troubles are common, the Outside View beats the Inside View.
Does anyone want to argue that Eliezer's criteria for using the outside view are wrong, or don't apply here?
And Luke:
One obvious solution is to use multiple reference classes, and weight them by how relevant you think they are to the phenomenon you're trying to predict.
[...]
Once you've combined a handful of models to arrive at a qualitative or quantitative judgment, you should still be able to "adjust" the judgment in some cases using an inside view.
These ideas seem harder to apply, so I'll ask for readers' help. What reference classes should we use here, in addition to past attempts to solve philosophical problems? And what inside-view adjustments could a future FAI team make, such that they might justifiably overcome the (most obvious to me) outside-view conclusion that they're very unlikely to be in possession of complete and fully correct solutions to a diverse range of philosophical problems?
My main objection is that securing positive outcomes doesn't seem to inherently require solving hard philosophical problems (in your sense). It might in principle, but I don't see how we can come to be confident about it or even why it should be much more likely than not. I also remain unconvinced about the conceptual difficulty and fundamental nature of the problems, and don't understand the cause for confidence on those counts either.
To make things more concrete: could you provide a hard philosophical problem (of the kind for which feedback is impossible) together with an argument that this problem must be resolved before human-level AGI arrives? What do you think is the strongest example?
To try to make my point clearer (though I think I'm repeating myself): we can aim to build machine intelligences which pursue the outcomes we would have pursued if we had thought longer (including machine intelligences that allow human owners to remain in control of the situation and make further choices going forward, or bootstrap to more robust solutions). There are questions about what formalization of "thought longer" we endorse, but of course we must face these with or without machine intelligence. For the most part, the questions involved in building such an AI are empirical though hard-to-test ones---would we agree that the AI basically followed our wishes, if we in fact thought longer?---and these don't seem to be the kinds of questions that have proved challenging, and probably don't even count as "philosophical" problems in the sense you are using the term.
I don't think it's clear or even likely that we necessarily have to resolve issues like metaethics, anthropics, the right formalization of logical uncertainty, decision theory, etc. prior to building human-level AI. No doubt having a better grasp of these issues is helpful for understanding our goals, and so it seems worth doing, but we can already see plausible ways to get around them.
In general, one reason that doing X probably doesn't require impossible step Y is that there are typically many ways to accomplish X, and without a strong reason it is unlikely that they will all require solving Y. This view seems to be supported by a reasonable empirical record. A lot of things have turned out to be possible.
(Note: in case it's not obvious, I disagree with Eliezer on many of these points.)
I suspect I also object to your degree of pessimism regarding philosophical claims, but I'm not sure and that is probably secondary at any rate.
It's hard for me to argue with multiple people simultaneously. When I argue with someone I tend to adopt most of their assumptions in order to focus on what I think is the core disagreement, so to argue with someone else I have to "swap in" a different set of assumptions and related arguments. The OP was aimed mostly at Eliezer, so it assumed that intelligence explosion is relatively easy. (Would you agree that if intelligence explosion was easy, then it would be hard to achieve a good outcome in the way that you imagine, by incrementally solving...