Building a completely artificial intelligence is hard. But we've already got humans, and they're pretty smart - at least smart enough to serve some useful functions. So I was thinking about designs that would use humans as components - like Amazon's Mechanical Turk, but less homogeneous. Architectures that would distribute parts of tasks among different people.
Would you be less afraid of an AI like that? Would it be any less likely to develop its own values, and goals that diverged widely from the goals of its constituent people?
Because you probably already are part of such an AI. We call them corporations.
Corporations today are not very good AI architectures - they're good at passing information down a hierarchy, but poor at passing it up, and even worse at adding up small correlations in the evaluations of their agents. In that way they resemble AI from the 1970s. But they may provide insight into the behavior of AIs. The values of their human components can't be changed arbitrarily, or even aligned with the values of the company, which gives them a large set of problems that AIs may not have. But despite being very different from humans in this important way, they end up acting much like us.
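To see why "adding up small correlations" matters, here is a minimal sketch - the agent model, noise level, and majority-vote rule are my own illustrative assumptions, not anything from the post. Each agent's evaluation is only slightly better than chance, but an architecture that aggregates all of them is almost always right, while a hierarchy that forwards a single agent's verdict is barely better than a coin flip:

```python
import random

random.seed(0)

def agent_vote(truth, noise=2.0):
    """One agent's noisy, weakly informative evaluation: a +1/-1 guess
    at the sign of `truth`, corrupted by Gaussian noise."""
    return 1 if truth + random.gauss(0, noise) > 0 else -1

def trial(n_agents, truth=0.5):
    """Aggregate by majority vote over all agents' weak signals."""
    votes = [agent_vote(truth) for _ in range(n_agents)]
    return sum(votes) > 0  # True if the aggregate gets the sign right

def accuracy(n_agents, trials=2000):
    return sum(trial(n_agents) for _ in range(trials)) / trials

print(accuracy(1))     # a lone agent: roughly 0.6
print(accuracy(1001))  # a thousand aggregated agents: close to 1.0
```

The point is purely structural: the information was present in the agents all along; whether the organization can use it depends on whether its architecture sums the weak signals or discards all but one of them.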
Corporations develop values similar to human values. They value loyalty, alliances, status, resources, independence, and power. They compete with other corporations, and face the same problems people do in establishing trust, making and breaking alliances, weighing the present against the future, and choosing game-theoretic strategies. They even went through stages of social development similar to those of people, starting out as cutthroat competitors, and developing different social structures for cooperation (oligarchy/guild, feudalism/keiretsu, voters/stockholders, criminal law/contract law). This despite having different physicality and different needs.
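The game-theoretic pressure behind that shift from cutthroat competition to cooperative structures can be sketched with the standard iterated prisoner's dilemma - the payoff values and the strategies below are the textbook ones, chosen for illustration, not drawn from the post. Under repeated interaction, mutual cooperation (as played by tit-for-tat) outscores mutual defection:

```python
# Standard iterated prisoner's dilemma payoffs: T=5, R=3, P=1, S=0.
PAYOFF = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
          ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}

def tit_for_tat(opponent_history):
    """Cooperate first, then mirror the opponent's last move."""
    return opponent_history[-1] if opponent_history else 'C'

def always_defect(opponent_history):
    return 'D'

def play(a, b, rounds=100):
    """Play `rounds` of the dilemma; return (score_a, score_b)."""
    hist_a, hist_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        move_a, move_b = a(hist_b), b(hist_a)
        pa, pb = PAYOFF[(move_a, move_b)]
        score_a += pa
        score_b += pb
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))      # (300, 300): mutual cooperation
print(play(always_defect, always_defect))  # (100, 100): mutual defection
```

Agents that expect to meet each other again - people or corporations - thus face an incentive to build trust-sustaining structures, which is consistent with the convergence described above.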
It suggests to me that human values don't depend on the hardware, and are not a matter of historical accident. They are a predictable, repeatable response to a competitive environment and a particular level of intelligence.
As corporations are larger than us, with more intellectual capacity than a person, and more complex laws governing their behavior, it should follow that the ethics developed to govern corporations are more complex than the ethics that govern human interactions, and should be a good guide to the initial trajectory of values that (other) AIs will have. But it should also follow that these ethics are too complex for us to perceive.
By human values we mean how we treat things that are not part of the competitive environment.
Obviously a paperclip maximizer wouldn't punch you in the face if you could destroy it. But if it is stronger than all other agents and doesn't expect ever to have to prove its benevolence towards lesser agents, then there would be no reason for it to care about them. The only reason I can imagine for a psychopathic agent to care about agents that are less powerful is if there is some benefit in being friendly towards them - for example, if there are a lot of superhuman agents out there and general friendliness enables cooperation, makes you less likely to be perceived as a threat, and so lets you spend fewer resources on fighting.
I don't think I mean that. I also don't know where you're going with this observation.