An Unfriendly AI would only be bad because it becomes ridiculously hard for us to stop, and it doesn't care about us. If an ufAI is exactly as powerful and smart as an average human, and can never improve, it isn't much of a threat; it's really only as dangerous as your average sociopath or psychopath.*
May I point at the various instances of systematic slavery in human history, or even right now across the world? Imagine if the slavers had double or triple the intelligence they had/have. What makes you think that these superintelligent slaver humans would be "Friendly" even at the basic level, let alone the Safe kind of Friendly under self-modification? (supposing they manage to modify or enhance themselves in some way)
The assumption that all humans foom, AND all do so at the same time, AND all do so at the same (or insignificant difference) rate, AND (Remain Safe under self-modification OR never find a way to self-modify), AND are human-Friendly by default... is a very far-fetched combined assumption to be making here, IMO.
* Yes, that's anthropomorphizing it a bit, but I'm assuming that it would need its own set of heuristics to replace humans' biases and heuristics, otherwise it'd probably be thinking very slowly and pose even less of a threat. If those heuristics aren't particularly better optimized than our own, then it's still only so much of a threat, probably equivalent to a particularly unpredictable psychopath.
The assumption that all humans foom, AND all do so at the same time, AND all do so at the same (or insignificant difference) rate, AND (Remain Safe under self-modification OR never find a way to self-modify), AND are human-Friendly by default... is a very far-fetched combined assumption to be making here, IMO.
The assumptions that I make are that the humanity-fooming would be both very slow and generally available in some way (I'm not sure entirely how, but brain-computer interfaces are a possibility); it would follow that all humans foom at more-or-less the same time.
So far, we have only one known example of the development of intelligent life, and that example is us: humanity. That means we have only one mechanism that is known to be able to produce intelligent life, and that is evolution. But by far the majority of life produced by evolution is not intelligent. (In fact, as far as I can tell, by far the majority of life produced by evolution appears to be bacteria. There are also a lot of beetles.)
Why did evolution produce such a steep climb in human intelligence, while not so much in other creatures? That, I suspect, is at least partially because we humans are no longer competing against other creatures. We are competing against each other.
Also, once we managed to start writing things down and sharing knowledge, we shifted off the slow, evolutionary timescale and onto the faster, technological timescale. As technology improves, we find ourselves being more right, less wrong; our ability to affect the environment continually increases. Our intellectual development, as a species, speeds up dramatically.
And I believe that there is a hack that can be applied to this process; a mechanism by which the total intelligence of humanity as a whole can be rather dramatically increased. (It will take time). The process is simple enough in concept.
These thoughts were triggered by an article on some Ethiopian children who were given tablets by OLPC. They were chosen specifically on the basis of illiteracy (throughout the whole village) and were given no teaching, aside from the teaching apps on the tablets (some instruction on how to use the solar chargers was also given to the adults). In fairly short order, they taught themselves basic literacy, and had also modified the operating system to customise it and re-enable the camera.
My first thought was that this gives an upper bound on the cost of world literacy: at most the cost of one tablet per child (plus a bit for transportation).
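The bound above can be sketched as a back-of-envelope calculation. The figures below are illustrative assumptions of my own (the per-tablet price, transport overhead, and number of children to reach are not sourced from the article), so only the shape of the estimate matters, not the total:

```python
# Back-of-envelope ceiling on the cost of world literacy, per the argument
# above: at most one tablet per child, plus a bit for transportation.
# All three figures below are illustrative assumptions, not sourced data.
TABLET_COST_USD = 100              # assumed per-tablet price
TRANSPORT_COST_USD = 10            # assumed per-tablet shipping overhead
ILLITERATE_CHILDREN = 250_000_000  # assumed number of children to reach

upper_bound = ILLITERATE_CHILDREN * (TABLET_COST_USD + TRANSPORT_COST_USD)
print(f"Upper bound on cost: ${upper_bound:,}")
```

Whatever the true inputs, the point is that the cost scales linearly with the number of children, so the ceiling is a knowable, finite number rather than an open-ended commitment.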
In short, we need world literacy. World literacy will allow anyone and everyone to read up on that which interests them. It will allow a vastly larger number of people to start thinking about certain hard problems (such as any hard problem you care to name). It will allow more eyes to look at science; more experiments to be done and published; more armour-piercing questions which no-one has yet thought to ask because there simply are not enough scientists to ask them.
World literacy would improve the technological progress of humanity; and probably, after enough generations, result in a humanity who we would, by today's standards, consider superhumanly intelligent. (This may or may not necessitate direct brain-computer interfaces.)
The aim, therefore, is to allow humanity, and not some human-made AI, to go *foom*. It will take some significant amount of time - following this plan means that our generation will do no more than continue a process that began some millions of years ago - but it does have this advantage; if it is humanity that goes *foom*, then the resulting superintelligences are practically guaranteed to be human-Friendly since they will be human. (For the moment, I discard the possibility of a suicidal superintelligence).
It also has this advantage; the process is likely to be slow enough that a significant fraction of humanity will be enhanced at the same time, or close enough to the same time that none will be able to stop any of the others' enhancements. This drastically reduces the probability of being trapped by a single Unfriendly enhanced human.
The main disadvantage is the time taken; this will take centuries at the least, perhaps millennia. It is likely that, along the way, a more traditional AI will be created.