Followup to: Outline of possible Singularity scenarios (that are not completely disastrous)
Given that the Singularity and being strategic are popular topics around here, it's surprising there hasn't been more discussion on how to answer the question "In what direction should we nudge the future, to maximize the chances and impact of a positive Singularity?" ("We" meaning the SIAI/FHI/LW/Singularitarian community.)
(Is this an appropriate way to frame the question? It's how I would instinctively frame it, but perhaps we ought to discuss alternatives first. For example, one might be "What quest should we embark upon to save the world?", which seems to be the frame that Eliezer instinctively prefers. But I worry that thinking in terms of a "quest" favors the part of the brain that is built mainly for signaling instead of planning. Another alternative would be "What strategy maximizes expected utility?", but that seems too technical for human minds to grasp on an intuitive level, and we don't have the tools to answer the question formally.)
Let's start by assuming that humanity will want to build at least one Friendly superintelligence sooner or later, either from scratch, or by improving human minds, because without such an entity, it's likely that eventually either a superintelligent, non-Friendly entity will arise, or civilization will collapse. The current state of affairs, in which there is no intelligence greater than baseline-human level, seems unlikely to be stable over the billions of years of the universe's remaining life. (Nor does that seem particularly desirable even if it is possible.)
Whether to push for (or personally head towards) de novo AI directly, or IA/uploading first, depends heavily on the expected (or more generally, subjective probability distribution of) difficulty of building a Friendly AI from scratch, which in turn involves a great deal of logical and philosophical uncertainty. (For example, if it's known that it actually takes a minimum of 10 people with IQ 200 to build a Friendly AI, then there is clearly little point in pushing for de novo AI first.)
Besides the expected difficulty of building FAI from scratch, another factor that weighs heavily in the decision is the risk of accidentally building an unFriendly AI (or contributing to others building UFAIs) while trying to build FAI. Taking this into account also involves lots of logical and philosophical uncertainty. (But it seems safe to assume that this risk, if plotted against the intelligence of the AI builders, forms an inverted U shape.)
Since we don't have good formal tools for dealing with logical and philosophical uncertainty, it seems hard to do better than to make some incremental improvements over gut instinct. One idea is to train our intuitions to be more accurate, for example by learning about the history of AI and philosophy, or by learning about known cognitive biases and doing debiasing exercises. But this seems insufficient to bridge the widely differing intuitions people have on these questions.
My own feeling is that the chance of success of building FAI, assuming the current human intelligence distribution, is low (even given unlimited financial resources), while the risk of unintentionally building or contributing to UFAI is high. I think I can explicate a part of my intuition this way: there must be a minimum level of intelligence below which the chances of successfully building an FAI are negligible. We humans seem at best just barely smart enough to build a superintelligent UFAI. Wouldn't it be surprising if the intelligence thresholds for building UFAI and FAI turned out to be the same?
Given that there are known ways (such as cloning or embryo selection) to significantly increase the number of geniuses (i.e., people at von Neumann's level, with IQ 180 or greater), an obvious alternative Singularity strategy is to invest directly or indirectly in these technologies, and to try to mitigate existential risks (for example by attempting to delay all significant AI efforts) until they mature and bear fruit (in the form of adult genius-level FAI researchers). Other strategies in the same vein are to pursue cognitive/pharmaceutical/neurosurgical approaches to increasing the intelligence of existing humans, or to push for brain emulation first, followed by intelligence enhancement of human minds in software form.
Social/PR issues aside, these alternatives make more intuitive sense to me. The chances of success seem higher, and if disaster does occur as a result of the intelligence amplification effort, we're more likely to be left with a future that is at least partly influenced by human values. (Of course, in the final analysis, we also have to consider social/PR problems, but all Singularity approaches seem to have similar problems, which can be partly ameliorated by the common sub-strategy of "raising the general sanity level".)
I'm curious what others think. What does your intuition say about these issues? Are there good arguments in favor of any particular strategy that I've missed? Is there another strategy that might be better than the ones mentioned above?
The problem is that building FAI is also likely not fast enough, given that UFAI looks significantly easier than FAI. And there are additional downsides unique to attempting to build FAI: since many humans are naturally competitive, it provides additional psychological motivation for others to build AGI; unless the would-be FAI builders have near-perfect secrecy and security, they will leak ideas and code to AGI builders not particularly concerned with Friendliness; the FAI builders may themselves accidentally build UFAI; and it's hard to do anti-AI PR/politics (to delay UFAI) while you're trying to build an AI yourself.
ETA: Also, the difficulty of building smarter humans seems logically independent of the difficulty of building UFAI, whereas the difficulty of building FAI is surely at least as great as the difficulty of building UFAI. So the likelihood that building smarter humans will be fast enough seems higher.
Smarter humans will see the difficulty gap between FAI and UFAI as smaller, so they'll be less motivated to "save time and effort" by not taking safety/Friendliness seriously. The danger of UFAI will also be more obvious to them.