
The UFAI among us

1 Post author: PhilGoetz 08 February 2011 11:29PM

Fully artificial intelligence is hard.  But we've already got humans, and they're pretty smart - at least smart enough to serve some useful functions.  So I was thinking about designs that would use humans as components - like Amazon's Mechanical Turk, but less homogeneous.  Architectures that would distribute parts of tasks among different people.

Would you be less afraid of an AI like that?  Would it be any less likely to develop its own values, and goals that diverged widely from the goals of its constituent people?

Because you probably already are part of such an AI.  We call them corporations.

Corporations today are not very good AI architectures - they're good at passing information down a hierarchy, but poor at passing it up, and even worse at adding up small correlations in the evaluations of their agents.  In that way they resemble AI from the 1970s.  But they may provide insight into the behavior of AIs.  The values of their human components can't be changed arbitrarily, or even aligned with the values of the company, which gives them a large set of problems that AIs may not have.  But despite being very different from humans in this important way, they end up acting similarly to us.

Corporations develop values similar to human values.  They value loyalty, alliances, status, resources, independence, and power.  They compete with other corporations, and face the same problems people do in establishing trust, making and breaking alliances, weighing the present against the future, and game-theoretic strategies.  They even went through stages of social development similar to those of people, starting out as cutthroat competitors, and developing different social structures for cooperation (oligarchy/guild, feudalism/keiretsu, voters/stockholders, criminal law/contract law).  This despite having different physicality and different needs.

It suggests to me that human values don't depend on the hardware, and are not a matter of historical accident.  They are a predictable, repeatable response to a competitive environment and a particular level of intelligence.

As corporations are larger than us, with more intellectual capacity than a person, and more complex laws governing their behavior, it should follow that the ethics developed to govern corporations are more complex than the ethics that govern human interactions, and a good guide for the initial trajectory of values that (other) AIs will have.  But it should also follow that these ethics are too complex for us to perceive.

Comments (86)

Comment author: TheOtherDave 09 February 2011 12:29:03AM *  7 points [-]

It suggests to me that human values don't depend on the hardware, and are not a matter of historical accident. They are a predictable, repeatable response to a competitive environment and a particular level of intelligence.

Another possibility is that individual humans occasionally influence corporations' behavior in ways that cause that behavior to occasionally reflect human values.

Comment author: PhilGoetz 09 February 2011 12:57:26AM *  3 points [-]

If that were the case, we would see specific humans influence corporations' behavior in ways that would cause the corporations to implement those humans' goals and values, without preservation of deictic references. For instance, Joe works for Apple Computer. Joe thinks that giving money to Amnesty International is more ethical than giving money to Apple Computer. And Joe values giving money to Joe. We should therefore see corporations give lots of their money to charity, and to their employees. That would be Joe making Apple implement Joe's values directly. Joe's values say "I want me to have more money". Transferring that value extensionally to Apple would replace "me" with "Joe".

Instead, we see corporations act as if they had acquired values from their employees, but with preservation of deictic references. That means, every place in Joe's value where it says "me", Apple's acquired value says "me". So instead of "make money for Joe", it says "make money for Apple". That means the process is not consciously directed by Joe; Joe would preserve the extensional reference to "Joe", so as to satisfy his values and goals.
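The distinction can be made concrete with a toy sketch (not from the original comment; the function and names are illustrative). A value is a template with a deictic slot "me"; the two transfer modes differ only in what that slot is bound to after copying:

```python
def value_template(me):
    """A value like 'make money for <me>', with a deictic slot for 'me'."""
    return f"make money for {me}"

# Extensional transfer: the reference is fixed to Joe before copying,
# so Apple's acquired value still points at Joe.
extensional = value_template("Joe")    # "make money for Joe"

# Deictic transfer: the slot 'me' is re-bound to whoever now holds the
# value, so Apple's copy points back at Apple itself.
deictic = value_template("Apple")      # "make money for Apple"

print(extensional)
print(deictic)
```

The observation in the comment is that corporations end up with the second kind of copy, which is evidence against the hypothesis that employees like Joe are deliberately installing their own values.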

Comment author: CronoDAS 09 February 2011 03:37:48AM 5 points [-]

We should therefore see corporations give lots of their money to charity, and to their employees. That would be Joe making Apple implement Joe's values directly. Joe's values say "I want me to have more money".

Some people point to executive compensation at U.S. firms as evidence that many corporations have been "subverted" in exactly that way.

Comment author: roystgnr 09 February 2011 07:37:26PM 2 points [-]

It says "make money for Apple", which is a roundabout way of saying "make money for Apple's shareholders", who are the humans that most directly make up "Apple". Apple's employees are like Apple's customers - they have market power that can strongly influence Apple's behavior, but they don't directly affect Apple's goals. If Joe wants a corporation to give more money to charity, but the corporation incorporated with the primary goal of making a profit, that's not the decision of an employee (or even of a director; see "duty of loyalty"); that's the decision of the owners.

There's definitely massive inertia in such decisions, but for good reason. If you bought a chunk of Apple to help pay for your retirement, you've got an ethically solid interest in not wanting Apple management to change its mind after the fact about where its profits should go.

If you want to look for places where corporate goals (or group goals in government or other contexts) really do differ from the goals of the humans who created and/or nominally control them, I'd suggest starting with the "Iron Law of Bureaucracy".

Comment author: TheOtherDave 09 February 2011 01:10:05AM 0 points [-]

Agreed that if Apple is making a lot of money, and none of the humans who nominally influence Apple's decisions are making that money, that is evidence that Apple has somehow adopted the "make money" value independent of those humans' values.

Agreed that if Apple is not donating money to charity, and the humans who nominally influence Apple's decisions value donating money to charity, that is evidence that Apple has failed to adopt the "donate to charity" value from those humans.

Comment author: Lightwave 09 February 2011 10:26:23AM *  0 points [-]

Also, corporations are restricted by governments, which implement other human-based values (different from pure profit), and they internalize these values (e.g. social/environmental responsibility) for (at the least) signaling purposes.

Comment author: benelliott 09 February 2011 08:40:29AM 5 points [-]

How similar are their values actually?

One obvious difference seems to be their position on the exploration/exploitation scale: most corporations do not get bored (the rare cases where they do seem to get bored can probably be explained by an individual executive getting bored, or by customers getting bored and the corporation managing to adapt).

Corporations also do not seem to have very much compassion for other corporations. While they do sometimes co-operate, I have yet to see an example of one corporation giving money to another without anticipating some sort of gain from the action (any altruism they display towards humans is more likely caused by the individuals running things, or done for signalling purposes; if they were really altruistic you would expect it to be towards each other).

Do they really value independence and individuality? If so, then why do they sometimes merge? I suppose you could say that the difference between them and humans is that they can merge while we can't, but I'm not convinced we would do so even if we could.

There may be superficial similarities between their values and ours, but it seems to me like we're quite different where it matters most. A hypothetical future which lacks creativity, altruism or individuality can be safely considered to have lost almost all of its potential value.

Comment author: PhilGoetz 14 February 2011 04:14:53AM *  0 points [-]

Altruism and merging: Two very good points!

Altruism can be produced via evolution by kin selection or group selection. I don't think kin selection can work for corporations, for several reasons, including massive lateral transfer of ideas between corporations (so that helping a kin does not give a great boost to your genes), deliberate acquisition of memes predominating over inheritance, and the fact that corporations can grow instead of reproducing, and so are unlikely to be in a position where they have no growth potential themselves but can help a kin instead.

Can group selection apply to corporations?

What are the right units of selection / inheritance?

Comment author: timtyler 15 May 2012 10:47:52AM *  0 points [-]

Altruism can be produced via evolution by kin selection or group selection. I don't think kin selection can work for corporations, for several reasons, including massive lateral transfer of ideas between corporations (so that helping a kin does not give a great boost to your genes), deliberate acquisition of memes predominating over inheritance, and the fact that corporations can grow instead of reproducing, and so are unlikely to be in a position where they have no growth potential themselves but can help a kin instead.

You don't think there's corporate parental care?!? IMO, corporate parental care is completely obvious. It is a simple instance of cultural kin selection. When a new corporation is spun off from an old one, there are often economic and resource lifelines - akin to the runners strawberry plants use to feed their offspring.

Lateral gene transfer doesn't much affect this. Growth competes with reproduction in many plants - and the line between the two can get blurred. It doesn't preclude parental care - as the strawberry runners show.

Comment author: Dorikka 09 February 2011 12:38:50AM 4 points [-]

It suggests to me that human values don't depend on the hardware, and are not a matter of historical accident. They are a predictable, repeatable response to a competitive environment and a particular level of intelligence.

I don't understand how the rest of your post suggests this. You appear to be proposing that human (terminal?) values are universal to all intelligences at our level of intelligence, on the basis that humans and corporations share values; but this doesn't hold up, because corporations are composed of humans, so I think the natural state would be for them to hold human values.

Comment author: PhilGoetz 09 February 2011 12:52:01AM 2 points [-]

I figured someone would say that, and it is a hypothesis worth considering, but I think it needs justification. Corporations are composed of humans, but they don't look like humans, or eat the things humans eat, or espouse human religions. Corporations are especially human-like in their values, and that needs explaining. The goals of a corporation don't overlap with the values of its employees. Values and goals are highly intertwined. I would not expect a corporation to acquire values from its employees without also acquiring their goals; and they don't acquire their employees' goals. They acquire goals that are analogous to human goals; but, e.g., IBM does not have the goal "help Frieda find a husband" or "give Joe more money".

Comment author: timtyler 09 February 2011 01:58:12AM 2 points [-]

Both humans and corporations want more money. Their goals at least overlap.

Comment author: PhilGoetz 14 February 2011 04:27:05AM 0 points [-]

The corporation wants the corporation to have more money, and Joe wants Joe to have more money. Those are the same goals internally, but because the corporation's goal says "ACME Corporation" where Joe's says "Joe", it means the corporation didn't acquire Joe's goals via lateral transfer.

Comment author: timtyler 14 February 2011 08:40:52AM *  0 points [-]

Normally, the corporation wants more money because it was built by humans - who themselves want more money. They build the corporation to want to make money itself, and then to pay them a wage - or dividends.

If the humans involved originally wanted cheese, the corporation would want cheese too. I think by considering this sort of thought experiment, it is possible to see that the human goals do get transferred across.

Comment author: blogospheroid 09 February 2011 05:33:00AM 3 points [-]

Another point to consider would be my Imperfect levers article and this one. I believe that the organizations that show the first ability to foom would foom effectively and spread their values around. This is not, in any way, new. I, of Indian origin, am writing in English and share more values with some Californian transhumanists than with my neighbours. If not for the previous fooms of the British empire, the computer revolution and the internet, this would not have been possible.

The question is how close to sociopathic rationality are any of these organizations. Almost all of them exhibit Omohundro's basic drives. I would disagree with the premise that alliances, status, power, and resources are basic human values. They are instrumental values, subsets of the basic drives.

In organizations where a lot of decisions are being made on a mechanical basis, it is possible that some mechanism just takes over as long as it continues satisfying the incentive/hitting the button.

Comment author: Morendil 09 February 2011 04:45:00PM 2 points [-]

Almost all of them exhibit omohundro's basic drives.

This remark deserves an article of its own, mapping each of Omohundro's claims to the observed behaviour of corporations.

Comment author: PhilGoetz 14 February 2011 04:16:43AM 0 points [-]

I can't even find what Omohundro you're talking about using Google.

Comment author: NancyLebovitz 14 February 2011 04:56:57AM 2 points [-]

http://selfawaresystems.com/2007/11/30/paper-on-the-basic-ai-drives/

I don't know why google didn't work for you-- I used "omohundro's basic drives" and a bunch of links came up.

Comment author: Costanza 09 February 2011 01:48:35AM *  3 points [-]

The SIAI is a "501(c)(3) nonprofit organization." Such organizations are sometimes called nonprofit corporations. Is SIAI also an unfriendly AI? If not, why not?

P.S. I think corporations exist mostly for the purpose of streamlining governmental functions that could otherwise be structured in law, although with less efficiency. Like taxation, and financial liability, and who should be able to sue and be sued. Corporations, even big hierarchical organizations like multinationals, are simply not structured with the complexity of Searle's Chinese Room.

Comment author: NancyLebovitz 08 February 2011 11:37:00PM 6 points [-]

Should other large human organizations like governments and some religions also count as UFAIs?

Comment author: Eliezer_Yudkowsky 09 February 2011 06:43:46PM 10 points [-]

Yes, I find it quite amusing that some people of a certain political bent refer to "corporations" as superintelligences, UFAIs, etcetera, and thus insist on diverting marginal efforts that could have been directed against a vastly underaddressed global catastrophic risk to yet more tugging on the same old rope that millions of other people are pulling on, based on their attempt to reinterpret the category-word; and yet oddly enough they don't think to extend the same anthropomorphism of demonic agency to large organizations that they're less interested in devalorizing, like governments and religions.

Comment author: [deleted] 11 February 2011 09:01:18AM 2 points [-]

Maybe those people are prioritising the things that seem to affect their lives? I can certainly see exactly the same argument about government or religion as about corporations, but currently the biggest companies (the Microsofts and Sonys and their like) seem to have more power than even some of the biggest governments.

Comment author: anonym 13 February 2011 08:18:21PM 1 point [-]

There is also the issue of legal personality, which applies to corporations and not to governments or religions.

The corporation actually seems to me a great example of a non-biological, non-software optimization process, and I'm surprised at Eliezer's implicit assertion that there is no significant difference between corporations, governments, and religions with respect to their ability to be unfriendly optimization processes, other than that some people of a certain political bent have a bias to think about corporations differently than other institutions like governments and religions.

Comment author: NancyLebovitz 10 February 2011 12:03:46AM 0 points [-]

I think such folks are likely to trust governments too much. They're more apt to oppose specific religious agendas than to oppose religion as such, and I actually think that's about right most of the time.

Comment author: CronoDAS 08 February 2011 11:44:05PM 2 points [-]

Probably.

Comment author: Alexandros 09 February 2011 09:09:01AM 0 points [-]

Funny you should mention that. Just yesterday I added on my list of articles-to-write one by the title of "Religions as UFAI". In fact, I think the comparison goes much deeper than it does for corporations.

Comment author: timtyler 15 May 2012 11:02:46AM 0 points [-]

Some corporations may become machine intelligences. Religions - probably not so much.

Comment author: Unnamed 09 February 2011 03:23:00AM 4 points [-]

Unlike programmed AIs, corporations cannot FOOM. This leaves them with limited intelligence and power, heavily constrained by other corporations, government, and consumers.

The corporations that have come the closest to FOOMing are known as monopolies, and they tend to be among the least friendly.

Comment author: RolfAndreassen 09 February 2011 04:18:31AM 3 points [-]

corporations cannot FOOM.

Is this obvious? True, the timescale is not seconds, hours, or even days. But corporations do change their inner workings, and they have also been known to change the way they change their inner workings. I suggest that if a corporation of today were dropped into the 1950s, and operated on 1950s technology but with modern technique, it would rapidly outmaneuver its downtime competitors; and that the same would be true for any gap of fifty years, back to the invention of the corporation in the Middle Ages.

Comment author: wedrifid 09 February 2011 07:33:42AM 4 points [-]

Is this obvious?

I suggest it is - for anything but the most crippled definition of "FOOM".

Comment author: Will_Newsome 09 February 2011 11:52:24AM 3 points [-]

Right, FOOM by its onomatopoeic nature suggests a fast recursion, not a million-year-long one.

Comment author: RolfAndreassen 09 February 2011 06:49:13PM 1 point [-]

I am suggesting that a ten-year recursion time is fast. I don't know where you got your million years; what corporations have been around for a million years?

Comment author: NancyLebovitz 12 February 2011 10:19:00AM 1 point [-]

I'm inclined to agree-- there are pressures in a corporation to slow improvement rather than to accelerate it.

Any organization which could beat that would be extremely impressive but rather hard to imagine.

Comment author: PhilGoetz 14 February 2011 04:09:25AM 0 points [-]

This is true, but not relevant to whether we can use what we know about corporations and their values to infer things about AIs and their values.

Comment author: wedrifid 14 February 2011 04:15:05AM 0 points [-]

This is true, but not relevant to whether we can use what we know about corporations and their values to infer things about AIs and their values.

It is relevant. It means you can infer a whole lot less about what capabilities an AI has, and also about how much effort an AI will likely spend on self-improvement early on. The payoffs and optimal investment strategy for resources are entirely different.

Comment author: knb 10 February 2011 10:59:01PM 2 points [-]

I don't think it is useful to call Ancient Egypt a UFAI, even though they ended up tiling the desert in giant useless mausoleums at an extraordinary cost in wealth and human lives. Similarly, the Aztecs fought costly wars to capture human slaves, most of whom were then wasted as blood sacrifices to the gods.

If any human group can be UFAI, then does the term UFAI have any meaning?

Comment author: Nornagest 10 February 2011 11:23:40PM *  1 point [-]

My understanding is that the human cost of the Ancient Egyptian mausoleum industry is now thought to be relatively modest. The current theory, supported by the recent discovery of workers' cemeteries, is that the famous monuments were generally built by salaried workers in good health, most likely during the agricultural off-season.

Definitely expensive, granted, but as a status indicator and ceremonial institution they've got plenty of company in human behavior.

There's some controversy over (ETA: the scale of) the Aztec sacrificial complex as well, but since that's entangled with colonial/anticolonial ideology I'd assume anything you hear about it is biased until proven otherwise.

Comment author: knb 11 February 2011 01:52:07AM 1 point [-]

There is no debate over whether the Aztecs engaged in mass human sacrifice. The main disagreement amongst academics is over the scale. The Aztecs themselves claimed sacrifices of over 40,000 people, but they obviously had good reason to lie (to scare enemies). Spanish and pre-Columbian Aztec sources agree that human sacrifice was widespread amongst the Aztecs.

Comment author: Nornagest 11 February 2011 01:59:49AM *  0 points [-]

You're quite right; I should have been more explicit. Edited.

Comment author: XiXiDu 09 February 2011 11:46:33AM *  2 points [-]

It suggests to me that human values don't depend on the hardware, and are not a matter of historical accident. They are a predictable, repeatable response to a competitive environment and a particular level of intelligence.

By human values we mean how we treat things that are not part of the competitive environment.

The greatness of a nation and its moral progress can be judged by the way its animals are treated.

-- Mahatma Gandhi

Obviously a paperclip maximizer wouldn't punch you in the face if you could destroy it. But if it is stronger than all other agents and doesn't expect to ever have to prove its benevolence towards lesser agents, then there'll be no reason for it to care about them. The only reason I could imagine for a psychopathic agent to care about agents that are less powerful is if there is some benefit in being friendly towards them. For example, if there are a lot of superhuman agents out there, general friendliness enables cooperation, makes you less likely to be perceived as a threat, and subsequently allows you to use less resources to fight.

Comment author: PhilGoetz 14 February 2011 04:11:03AM 0 points [-]

By human values we mean how we treat things that are not part of the competitive environment.

I don't think I mean that. I also don't know where you're going with this observation.

Comment author: wedrifid 14 February 2011 04:28:48AM 0 points [-]

I also don't know where you're going with this observation.

Roughly, that you can specify human values by supplying a diff from optimal selfish competition.

Comment author: timtyler 09 February 2011 02:03:45AM *  2 points [-]

Common instrumental values are in the air today.

The more values are found to be instrumental, the more the complexity of value thesis is eroded.

Comment author: PhilGoetz 14 February 2011 04:07:02AM 0 points [-]

What particular instrumental values are you thinking of?

Comment author: Risto_Saarelma 09 February 2011 09:13:11AM 1 point [-]

Charlie Stross seems to share this line of thought

We are now living in a global state that has been structured for the benefit of non-human entities with non-human goals. They have enormous media reach, which they use to distract attention from threats to their own survival. They also have an enormous ability to support litigation against public participation, except in the very limited circumstances where such action is forbidden.

Comment author: Vladimir_Nesov 09 February 2011 08:34:30PM 0 points [-]

Is my grandma an Unfriendly AI?

Comment author: Alicorn 09 February 2011 08:50:30PM 3 points [-]

Your grandma probably isn't artificial.

Comment author: Vladimir_Nesov 09 February 2011 08:55:18PM 0 points [-]

She was designed by evolution, so could just as well be considered artificial. And did I mention the Unfriendly AI part?

Comment author: wedrifid 10 February 2011 11:03:09AM 2 points [-]

She was designed by evolution, so could just as well be considered artificial.

Not when using the standard meanings of either of those words.

Comment author: Vladimir_Nesov 10 February 2011 11:27:41AM *  -2 points [-]

But what do you mean by "meaning"? Not that naive notion, I hope?

Edit: This was a failed attempt at sarcasm, see the parenthetical in this comment.

Comment author: wedrifid 10 February 2011 11:43:04AM *  1 point [-]

But what do you mean by "meaning"? Not that naive notion, I hope?

Question: How many legs does a dog have if you call the tail a leg?

Answer: I don't care, your grandma isn't artificial just because you call natural artificial. Presenting a counter-intuitive conclusion based on basically redefining the language isn't "deep". Sometimes things are just simple.

Perhaps you have another point to make about the relative unimportance of the distinction between 'natural' and 'artificial' in the grand scheme of things? There is certainly a point to be made there, and one that could be made without just using the words incorrectly.

Comment author: Vladimir_Nesov 10 February 2011 02:02:51PM -1 points [-]

There is certainly a point to be made there, and one that could be made without just using the words incorrectly.

But that would be no fun.

(For the perplexed: see No Evolutions for Corporations or Nanodevices. Attaching too many unrelated meanings to a word is a bad idea that leads to incorrect implicit inferences. Meaning is meaning, even if we don't quite know what it is, grandma and corporations are not Unfriendly AIs, and natural selection doesn't produce artificial things.)

Comment author: timtyler 15 May 2012 10:55:44AM 0 points [-]

natural selection doesn't produce artificial things.

It does, but indirectly.

Comment author: PhilGoetz 14 February 2011 04:07:59AM *  0 points [-]

Corporations are artificial, and they are intelligent. Therefore, they are artificial intelligences.

(ADDED: Actually this is an unimportant semantic point. What's important is how much we can learn about something that we all agree we can call "AI", from corporations. Deciding this on the basis of whether you can apply the name "AI" to them is literally thinking in circles.)

Comment author: PhilGoetz 25 February 2011 06:02:18AM 1 point [-]

Michael Vassar raised some of the same points in his talk at H+, 2 weeks before I posted this.

Comment author: sfb 09 February 2011 05:51:23AM *  1 point [-]

I was expecting a post questioning who/what is really behind this project to make paperclips invisible.

Comment author: Blueberry 09 February 2011 10:43:42AM 1 point [-]

Well, it's clear who benefits. Tiling the universe with invisible paperclips is less noticeable and less likely to start raising concerns.

Comment author: false_vacuum 09 February 2011 02:22:22AM 1 point [-]

Corporations (and governments) are not usually regarded as sharing human values by those who consider the question. This brief blog post is a good example. I would certainly argue that the 'U' is appropriate; but then I tend to regard 'UFAI' as meaning 'the complement of FAI in mind space'.

Comment author: PhilGoetz 14 February 2011 04:18:30AM *  0 points [-]

Those people are considering a different question, which is, "Do corporations treat humans the way humans treat humans?" Completely different question.

If corporations develop values that resemble those of humans by convergent evolution (which is what I was suggesting), we would expect them to treat humans the way humans treat, say, cattle.

Comment author: Matt_Simpson 09 February 2011 12:45:09AM *  1 point [-]

Corporations develop values similar to human values. They value loyalty, alliances, status, resources, independence, and power. They compete with other corporations, and face the same problems people do in establishing trust, making and breaking alliances, weighing the present against the future, and game-theoretic strategies. They even went through stages of social development similar to those of people, starting out as cutthroat competitors, and developing different social structures for cooperation (oligarchy/guild, feudalism/keiretsu, voters/stockholders, criminal law/contract law). This despite having different physicality and different needs.

It suggests to me that human values don't depend on the hardware, and are not a matter of historical accident. They are a predictable, repeatable response to a competitive environment and a particular level of intelligence.

It seems more likely that corporations act like humans because corporations are run by humans. I've yet to meet an alien CEO or board member!

edit: and then I realized Dorikka said it first.

edit: and TheOtherDave.

Comment author: Dorikka 09 February 2011 12:52:27AM 0 points [-]

...I am laughing hard right now.

Comment author: Emile 09 February 2011 10:46:59AM *  -1 points [-]

Corporations today are not very good AI architectures - they're good at passing information down a hierarchy, but poor at passing it up, and even worse at adding up small correlations in the evaluations of their agents.

I'd be cautious about the use of "good" here - the things you describe mostly seem "good" from the point of view of someone who cares about the humans being used by the corporations; it's not nearly as clear that they are "good" (bringing more benefits than downsides) for the final goals of the corporation.

If you were talking about, say, a computer system that balances water circulation in a network of pipes, and has a bunch of "local" subsystems with more-or-less reliable measures for flow, damage to the installation, leaks, and power-efficiency of pumps, you might care less about things like which way the information flows as long as the overall system works well. You wouldn't worry about whether a particular node had its feelings hurt by the central node ignoring its information (which may be because the central node has limited bandwidth and processing power, and has to deal with high uncertainty about which nodes provide accurate information).

Comment author: PhilGoetz 14 February 2011 04:13:04AM *  1 point [-]

I'd be cautious about the use of "good" here - the thing you describe mostly seem "good" from the point of view who cares about the humans being used by the corporations; it's not nearly as clear that they are "good" (bringing more benefits than downsides) for the final goals of the corporation.

Corporations are not good at using bottom-up information for their own benefit. Many companies have many employees who could optimize their work better, or know problems that need to be solved; yet nothing is done about it, and there is no mechanism to propagate this knowledge upward, and no reward given to the employees if they transmit their knowledge or if they deal with the problem themselves.

Comment author: timtyler 09 February 2011 01:56:01AM -2 points [-]

The differences between: a 90% human 10% machine company...

...and a 10% human 90% machine company...

...may be instructive if viewed from this perspective.

Comment author: PhilGoetz 14 February 2011 04:24:40AM 0 points [-]

I don't understand what you're getting at.

My company has about 300 people, and 2500 computers. And the computers work all night. Are we 90% machine?

Comment author: timtyler 14 February 2011 08:47:08AM 0 points [-]

There are various ways of measuring. My proposal for a metric is here:

http://machine-takeover.blogspot.com/2009/07/measuring-machine-takeover.html

I propose weighing them:

There are a variety of ways of measuring how much of the resource pie is allocated to machines.

One way that would appeal to economists is to look at the cost of constructing machines - and compare that to the cost of constructing humans. That would give an estimate of how much society is willing to spend on these different elements of the biosphere.

Here I will advocate what I believe to be a simpler method of measuring the proportion of machines on the planet. I think we should weigh them.

...in particular, weighing their sensor, motor, and computing elements.

...so "no" - not yet.
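Applying the "weigh them" metric to PhilGoetz's example company makes the "not yet" concrete. The mass figures below are illustrative assumptions, not from either comment:

```python
# Toy calculation: mass fraction of machines in a company of ~300 people
# and ~2500 computers (counts from the thread; masses are assumptions).
human_mass = 300 * 70.0      # assume ~70 kg per person
machine_mass = 2500 * 10.0   # assume ~10 kg per desktop computer

machine_fraction = machine_mass / (human_mass + machine_mass)
print(f"machine share by mass: {machine_fraction:.0%}")
```

Under these assumptions the machines come to roughly half the total mass, well short of 90% - consistent with timtyler's "not yet" (and of course a cost-based metric, the other option he mentions, would give a different number).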