It suggests to me that human values don't depend on the hardware, and are not a matter of historical accident. They are a predictable, repeatable response to a competitive environment and a particular level of intelligence.
Another possibility is that individual humans occasionally influence corporations' behavior in ways that cause that behavior to occasionally reflect human values.
How similar are their values actually?
One obvious difference seems to be their position on the exploration/exploitation scale: most corporations do not get bored (the rare cases where they do seem to get bored can probably be explained by an individual executive getting bored, or by customers getting bored and the corporation managing to adapt). A toy bandit sketch at the end of this comment makes the trade-off concrete.
Corporations also do not seem to have very much compassion for other corporations; while they do sometimes co-operate, I have yet to see an example of one corporation giving money to another, without anticipating some s...
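To make the exploration/exploitation framing above concrete, here is a minimal two-armed-bandit sketch (the payoff numbers and the epsilon-greedy rule are illustrative assumptions, not anything from this thread): an agent that occasionally "gets bored" and tries the other arm can outperform one that only ever exploits its current best guess.

```python
import random

def run(epsilon, pulls=10_000, true_means=(0.5, 0.7)):
    """Two-armed Bernoulli bandit; epsilon is the agent's 'boredom' rate."""
    counts, totals = [1, 1], [0.0, 0.0]
    # Pull each arm once so the running estimates are defined.
    for arm in (0, 1):
        totals[arm] += 1.0 if random.random() < true_means[arm] else 0.0
    reward = sum(totals)
    for _ in range(pulls - 2):
        if random.random() < epsilon:
            arm = random.randrange(2)  # explore: "get bored", try anything
        else:
            # exploit: repeat whatever has looked best so far
            arm = max((0, 1), key=lambda a: totals[a] / counts[a])
        r = 1.0 if random.random() < true_means[arm] else 0.0
        counts[arm] += 1
        totals[arm] += r
        reward += r
    return reward / pulls

random.seed(0)
print("never bored (epsilon=0.0):", run(0.0))  # can lock onto the worse arm
print("gets bored  (epsilon=0.1):", run(0.1))  # boredom finds the better arm
```

On this toy framing, a corporation that never gets bored is the epsilon = 0 agent: locally efficient, but capable of locking onto the wrong arm indefinitely.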
Should other large human organizations like governments and some religions also count as UFAIs?
Yes, I find it quite amusing that some people of a certain political bent refer to "corporations" as superintelligences, UFAIs, etcetera, and thus insist on diverting marginal efforts that could have been directed against a vastly underaddressed global catastrophic risk to yet more tugging on the same old rope that millions of other people are pulling on, based on their attempt to reinterpret the category-word; and yet oddly enough they don't think to extend the same anthropomorphism of demonic agency to large organizations that they're less interested in devalorizing, like governments and religions.
Unlike programmed AIs, corporations cannot FOOM. This leaves them with limited intelligence and power, heavily constrained by other corporations, government, and consumers.
The corporations that have come the closest to FOOMing are known as monopolies, and they tend to be among the least friendly.
The SIAI is a "501(c)(3) nonprofit organization." Such organizations are sometimes called nonprofit corporations. Is SIAI also an unfriendly AI? If not, why not?
P.S. I think corporations exist mostly for the purpose of streamlining governmental functions that could otherwise be structured in law, although with less efficiency. Like taxation, and financial liability, and who should be able to sue and be sued. Corporations, even big hierarchical organizations like multinationals, are simply not structured with the complexity of Searle's Chinese Room.
It suggests to me that human values don't depend on the hardware, and are not a matter of historical accident. They are a predictable, repeatable response to a competitive environment and a particular level of intelligence.
I don't understand how the rest of your post suggests this. You appear to be proposing that human (terminal?) values are universal to all intelligences at our level of intelligence, on the basis that humans and corporations share values; but this doesn't hold up, because corporations are composed of humans, so the natural state would be for them to share human values anyway.
I don't think it is useful to call Ancient Egypt a UFAI, even though they ended up tiling the desert in giant useless mausoleums at an extraordinary cost in wealth and human lives. Similarly, the Aztecs fought costly wars to capture human slaves, most of whom were then wasted as blood sacrifices to the gods.
If any human group can be UFAI, then does the term UFAI have any meaning?
It suggests to me that human values don't depend on the hardware, and are not a matter of historical accident. They are a predictable, repeatable response to a competitive environment and a particular level of intelligence.
By human values we mean how we treat things that are not part of the competitive environment.
The greatness of a nation and its moral progress can be judged by the way its animals are treated.
-- Mahatma Gandhi
Obviously a paperclip maximizer wouldn't punch you in the face if you could destroy it. But if it is stronger than all others...
Another point to consider would be my Imperfect levers article and this one. I believe that the organizations that show the first ability to foom would foom effectively and spread their values around. This is not in any way new. I, of Indian origin, am writing in English and share more values with some Californian transhumanists than with my neighbours. If not for the previous fooms of the British Empire, the computer revolution and the internet, this would not have been possible.
The question is how close to sociopathic rationality are any of these organizations...
Common instrumental values are in the air today.
The more values are found to be instrumental, the more the "complexity of value" thesis is eroded.
Charlie Stross seems to share this line of thought:
We are now living in a global state that has been structured for the benefit of non-human entities with non-human goals. They have enormous media reach, which they use to distract attention from threats to their own survival. They also have an enormous ability to support litigation against public participation, except in the very limited circumstances where such action is forbidden.
I was expecting a post questioning who/what is really behind this project to make paperclips invisible.
Corporations (and governments) are not usually regarded as sharing human values by those who consider the question. This brief blog post is a good example. I would certainly argue that the 'U' is appropriate; but then I tend to regard 'UFAI' as meaning 'the complement of FAI in mind space'.
...Corporations develop values similar to human values. They value loyalty, alliances, status, resources, independence, and power. They compete with other corporations, and face the same problems people do in establishing trust, making and breaking alliances, weighing the present against the future, and game-theoretic strategies. They even went through stages of social development similar to those of people, starting out as cutthroat competitors, and developing different social structures for cooperation (oligarchy/guild, feudalism/keiretsu, voters/stockholders, criminal law/contract law).
Corporations today are not very good AI architectures - they're good at passing information down a hierarchy, but poor at passing it up, and even worse at adding up small correlations in the evaluations of their agents.
I'd be cautious about the use of "good" here - the things you describe mostly seem "good" from the point of view of someone who cares about the humans being used by the corporations; it's not nearly as clear that they are "good" (bringing more benefits than downsides) for the final goals of the corporation.
If you were ta...
The differences between: a 90% human 10% machine company...
...and a 10% human 90% machine company...
...may be instructive if viewed from this perspective.
Completely artificial intelligence is hard. But we've already got humans, and they're pretty smart - at least smart enough to serve some useful functions. So I was thinking about designs that would use humans as components - like Amazon's Mechanical Turk, but less homogenous. Architectures that would distribute parts of tasks among different people.
Would you be less afraid of an AI like that? Would it be any less likely to develop its own values, and goals that diverged widely from the goals of its constituent people?
Because you probably already are part of such an AI. We call them corporations.
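A minimal sketch of what such a humans-as-components architecture might look like (the function names and the task split are hypothetical illustrations, not anything specified in the post): subtasks are farmed out to human "components" in parallel, and an aggregation step composes their answers.

```python
from concurrent.futures import ThreadPoolExecutor

# Stand-in for a human: a callable that takes a subtask and returns a
# partial answer. In a real system this call would block on a person
# (e.g. a Mechanical Turk HIT).
def human_worker(subtask: str) -> str:
    return f"answer({subtask})"

def distribute(task: str, n_workers: int = 3) -> str:
    """Split a task, hand the pieces to humans in parallel, recombine."""
    subtasks = [f"{task}/part{i}" for i in range(n_workers)]
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        partials = list(pool.map(human_worker, subtasks))
    # The aggregation step is where the composite system's goals live:
    # whoever writes this line, not the workers, decides what it "wants".
    return " + ".join(partials)

print(distribute("classify these images"))
```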
Corporations today are not very good AI architectures - they're good at passing information down a hierarchy, but poor at passing it up, and even worse at adding up small correlations in the evaluations of their agents. In that way they resemble AI from the 1970s. But they may provide insight into the behavior of AIs. The values of their human components can't be changed arbitrarily, or even aligned with the values of the company, which gives them a large set of problems that AIs may not have. But despite being very different from humans in this important way, they end up acting similar to us.
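As a toy illustration of what "adding up small correlations" would buy an architecture (a hedged sketch with made-up numbers, assuming the agents' reports are independent): many individually weak signals, combined in log-odds space, yield a confident aggregate judgment that no single agent's report supports - exactly the aggregation step corporations are said here to do badly.

```python
import math

def combine_reports(signals, prior=0.5):
    """Naive-Bayes combination of independent weak signals.

    Each signal is one agent's P(hypothesis | its own evidence).
    Individually they barely move the needle; summed as log-odds
    updates against the prior, they add up.
    """
    prior_lo = math.log(prior / (1 - prior))
    log_odds = prior_lo
    for p in signals:
        log_odds += math.log(p / (1 - p)) - prior_lo
    return 1 / (1 + math.exp(-log_odds))

# Twenty agents, each only 55% confident -- weak evidence on its own.
print(combine_reports([0.55] * 20))  # ~0.98: the weak signals add up
```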
Corporations develop values similar to human values. They value loyalty, alliances, status, resources, independence, and power. They compete with other corporations, and face the same problems people do in establishing trust, making and breaking alliances, weighing the present against the future, and game-theoretic strategies. They even went through stages of social development similar to those of people, starting out as cutthroat competitors, and developing different social structures for cooperation (oligarchy/guild, feudalism/keiretsu, voters/stockholders, criminal law/contract law). This despite having different physicality and different needs.
It suggests to me that human values don't depend on the hardware, and are not a matter of historical accident. They are a predictable, repeatable response to a competitive environment and a particular level of intelligence.
As corporations are larger than us, with more intellectual capacity than a person, and more complex laws governing their behavior, it should follow that the ethics developed to govern corporations are more complex than the ethics that govern human interactions, and a good guide for the initial trajectory of values that (other) AIs will have. But it should also follow that these ethics are too complex for us to perceive.