Completely artificial intelligence is hard. But we've already got humans, and they're pretty smart - at least smart enough to serve some useful functions. So I was thinking about designs that would use humans as components - like Amazon's Mechanical Turk, but less homogeneous: architectures that would distribute parts of tasks among different people.
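To make the idea concrete, here is a minimal sketch of such an architecture - the worker names, skills, and routing scheme are purely illustrative, not a description of Mechanical Turk or any existing system:

```python
# Hypothetical sketch: decompose a task into subtasks and route each to a
# different human "component", then collect the results.
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class HumanWorker:
    name: str
    skill: str                      # e.g. "research", "writing", "review"
    perform: Callable[[str], str]   # stand-in for handing a subtask to a real person

def run_task(subtasks: Dict[str, str], workers: List[HumanWorker]) -> Dict[str, str]:
    """Route each subtask to a worker with the matching skill and collect the outputs."""
    results = {}
    for skill, description in subtasks.items():
        worker = next(w for w in workers if w.skill == skill)
        results[skill] = worker.perform(description)
    return results

# Three people acting as components of one larger "mind".
workers = [
    HumanWorker("A", "research", lambda t: f"notes on {t}"),
    HumanWorker("B", "writing", lambda t: f"draft about {t}"),
    HumanWorker("C", "review", lambda t: f"critique of {t}"),
]
print(run_task({"research": "topic X", "writing": "topic X", "review": "the draft"}, workers))
```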
Would you be less afraid of an AI like that? Would it be any less likely to develop its own values, and goals that diverged widely from the goals of its constituent people?
Because you probably already are part of such an AI. We call them corporations.
Corporations today are not very good AI architectures - they're good at passing information down a hierarchy, but poor at passing it up, and even worse at adding up the small correlations in their agents' evaluations. In that way they resemble AI from the 1970s. But they may still provide insight into the behavior of AIs. The values of their human components can't be changed arbitrarily, or even fully aligned with the values of the company, which gives them a large set of problems that AIs may not have. Yet despite being very different from humans in this important way, they end up acting much like us.
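As a toy illustration of what "adding up small correlations" would buy (my own construction, not something corporations actually run): each agent's evaluation is almost worthless on its own, but pooling many of them recovers the signal, and a hierarchy that never aggregates upward discards exactly this.

```python
import random

def agent_evaluation(truth: float, noise: float = 5.0) -> float:
    """One agent's estimate - so noisy it is almost uninformative on its own."""
    return truth + random.gauss(0, noise)

def pooled_evaluation(truth: float, n_agents: int) -> float:
    """Average n independent evaluations; the error shrinks roughly as 1/sqrt(n)."""
    return sum(agent_evaluation(truth) for _ in range(n_agents)) / n_agents

random.seed(0)
truth = 1.0
print("one agent:        ", round(pooled_evaluation(truth, 1), 2))
print("a thousand agents:", round(pooled_evaluation(truth, 1000), 2))  # close to 1.0
# An organization that only passes summaries down the hierarchy, and never pools
# these weak signals on the way up, throws this aggregate information away.
```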
Corporations develop values similar to human values. They value loyalty, alliances, status, resources, independence, and power. They compete with other corporations, and face the same problems people do in establishing trust, making and breaking alliances, weighing the present against the future, and choosing game-theoretic strategies. They even went through stages of social development similar to those of people, starting out as cutthroat competitors and then developing different social structures for cooperation (oligarchy/guild, feudalism/keiretsu, voters/stockholders, criminal law/contract law). This despite having different physicality and different needs.
It suggests to me that human values don't depend on the hardware, and are not a matter of historical accident. They are a predictable, repeatable response to a competitive environment and a particular level of intelligence.
As corporations are larger than us, with more intellectual capacity than a person and more complex laws governing their behavior, it should follow that the ethics developed to govern corporations are more complex than the ethics that govern human interactions, and are a good guide to the initial trajectory of the values that (other) AIs will have. But it should also follow that these ethics are too complex for us to perceive.
Another point to consider would be my Imperfect levers article and this one. I believe that the organizations that show the first ability to foom would foom effectively and spread their values around. This is not in any way new. I, of Indian origin, am writing in English and share more values with some Californian transhumanists than with my neighbours. If not for the previous fooms of the British Empire, the computer revolution, and the internet, this would not have been possible.
The question is how close any of these organizations come to sociopathic rationality. Almost all of them exhibit Omohundro's basic drives. I would disagree with the premise that alliances, status, power, and resources are basic human values; they are instrumental values, subsets of the basic drives.
In organizations where a lot of decisions are made on a mechanical basis, it is possible that some mechanism simply takes over, as long as it keeps satisfying the incentive - hitting the button.
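A toy sketch of that dynamic (the numbers and functions are invented for illustration): the mechanism is rewarded on a proxy metric, and the proxy keeps improving even as the thing it was supposed to track is crowded out.

```python
TOTAL_EFFORT = 10.0

def proxy_score(effort_on_metric: float) -> float:
    """The measurable incentive the mechanism is rewarded for - 'the button'."""
    return effort_on_metric

def real_goal(effort_on_metric: float) -> float:
    """What the organization actually needs; effort spent on the metric crowds it out."""
    return TOTAL_EFFORT - effort_on_metric

for effort in (2.0, 5.0, 8.0, 10.0):
    print(f"effort on metric={effort:4.1f}  proxy={proxy_score(effort):4.1f}  real goal={real_goal(effort):4.1f}")
# The mechanical rule "do more of whatever raises the proxy" keeps being satisfied
# right up to the point where the real goal has been crowded out entirely.
```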
This remark deserves an article of its own, mapping each of Omohundro's claims to the observed behaviour of corporations.