Organizations are highly disanalogous to potential AIs, and suffer from severe diminishing returns: http://www.nytimes.com/2010/12/19/magazine/19Urban_West-t.html?reddit=&pagewanted=all&_r=0
...As West notes, Hurricane Katrina couldn’t wipe out New Orleans, and a nuclear bomb did not erase Hiroshima from the map. In contrast, where are Pan Am and Enron today? The modern corporation has an average life span of 40 to 50 years. This raises the obvious question: Why are corporations so fleeting? After buying data on more than 23,000 publicly traded companies, Bettencourt and West discovered that corporate productivity, unlike urban productivity, was entirely sublinear. As the number of employees grows, the amount of profit per employee shrinks. West gets giddy when he shows me the linear regression charts. “Look at this bloody plot,” he says. “It’s ridiculous how well the points line up.” The graph reflects the bleak reality of corporate growth, in which efficiencies of scale are almost always outweighed by the burdens of bureaucracy. “When a company starts out, it’s all about the new idea,” West says. “And then, if the company gets lucky, the idea takes off. Everybody is happy a...
But then management starts worrying about the bottom line and so all these people are hired to keep track of the paper clips. This is the beginning of the end.
And so LessWrong has been proved correct that paperclips will be the end of us all.
What is the relevance of profit per employee to the question of the power of organizations?
Corporations exist, if they have any purpose at all, to maximize profit. So this presents a sort of dilemma: their diminishing returns and fragile existence suggest either that they do intend to maximize profit but just aren't very good at it, or that they lack even that purpose (the one that is evolutionarily fit and that law, culture, and their owners intend for them), in which case how can we consider them powerful at all, or remotely similar to potential AIs?
And why would a machine intelligence not suffer similar coordination problems as it scales up?
For any of the many disanalogies one could mention. I bet organizations would work a lot better if they could only brainwash employees into valuing nothing but the good of the organization - and that's just one nugatory difference between AIs (uploads or de novo) and organizations.
An organization could be viewed as a type of mind with an extremely redundant modular structure. Human minds contain a large number of interconnected specialized subsystems; in an organization, humans would be the subsystems. Comparing the two seems illuminating.
Individual subsystems of organizations are much more powerful and independent, making them very effective at scaling and multitasking. This is of limited value, though: it mostly just means organizations can complete parallelizable tasks faster.
Intersystem communication is horrendously inefficient in organizations: bandwidth is limited to speech/typing and latency can be hours. There are tradeoffs here: military and emergency response organizations cut the latency down to seconds, but that limits the types of tasks the subsystems can effectively perform. Humans suck at multitasking and handling interruptions. Communication patterns and quality are more malleable, though. Organizations like Apple and Google have had some success in creating environments that leverage human social tendencies to improve on-task communication.
Specialization seems like a big one. Most humans are to some degree interchangeable: what one can do, most o...
One of the advantages of bureaucracy is creating value from otherwise low-value inputs. The collection of people working in the nearest McDonalds probably isn't capable of figuring out from scratch how to run a restaurant. But following the bureaucratic blueprint issued from headquarters allows those same folks to produce a hamburger on demand, and get paid for it.
That's a major value of bureaucratic structure - lowering the variance and raising the downside (i.e. a fast food burger isn't great, but it meets some minimum quality and won't poison you).
There are academic fields that study the behavior and anatomy of groups of people who act together to pursue goals. These include sociology, organizational behavior, military science, and even logistics. Singularity researchers should take some note of these fields' practical results.
Is that pretty much the point here?
The reason why an AGI would go foom is that it either has access to its own source code, so it can self-modify, or it is capable of making a new AGI that builds on itself. Organizations don't have this same power, in that they can't modify the mental structure of the people that make up the organization. They can change the people in it, and the structure connecting them, but that's not the same type of optimization power as an AGI would have.
Also:
When judging whether an entity has intelligence, we should consider only the skills relevant to the entity's goals.
Not if you're talking about general intelligence. Deep Blue isn't an AGI, because it can only play chess. This is its only goal, but we do not say it is an AGI because it is not able to take its algorithm and apply it to new fields.
On one hand, I think Luke is too dismissive of organizations. There's no reason not to regard organizations as intelligences, and I think the most likely paths to AGI go through some organization (today, Google looks like the most-likely candidate). But the bottleneck on organizational intelligence is either human intelligence or machine intelligence. So a super-intelligent corporation will end up having super-intelligent computers (or super-intelligent people, but it seems like computers are easier). If we're very lucky, those computers will directly inherit the corporation's purported goal structure ("to enhance shareholder value"). Not that shareholder value is a good goal -- just that it's much less bad than a lot of the alternatives. Given the difficulty of AI programming (not to mention internal corporate politics and Goodhart's law), it seems like SIAI's central arguments still apply.
Free market theorists from at least Smith onward considered a market a benevolent super intelligence. In "1984", Orwell envisioned an organization as a mean super intelligence. In both cases, the functional outcome of the super intelligence ran counter to the intent of the component agents.
There have been very mean superintelligences. Political organization matters. They can be a benevolent invisible hand, or a malevolent boot stomping a human face forever.
Yup. There exist established fields that study super intelligences with interests not necessarily aligned with ours -- polisci, socialsci and econ. Now you may criticize their methods or their formalisms, but they do have smart people and insights.
I think the research into Friendliness, if it's not a fake, would do well to connect with some subproblem in polisci, socialsci or econ. It ought to be easier than the full problem, and the solution will immediately pay off. I asked Vassar about this once, and he said that he did not think this would be easier. I never really understood that reply.
I would advise putting a little bit more effort into formatting. Some of the font jumps are somewhat jarring, and prevent your post from having as much of an impact as you might hope.
I made it clear in our dialogue that I was stipulating a particular definition for intelligence:
...SBENTHALL: Would you say that Google is a super-human intelligence?
ME: Well, yeah, so we have to be very careful about all the words that we are using of course. What I mean by intelligence is this notion of what sometimes is called optimization power, which is the ability to achieve one's goals in a wide range of environments and a wide range of constraints. And so for example, humans have a lot more optimization power than chimpanzees. That's why even tho...
Here is my claim (contrary to Vassar). If you are worried about an unfriendly "foomy" optimizing process, then a natural way to approach that problem is to solve an easier related problem: make an existing unfriendly but "unfoomy" optimizing process friendly. There are lots of such processes of various levels of capability and unfriendliness: North Korea, Microsoft, the United Nations, a non-profit org., etc.
I claim this problem is easier because:
(a) we have a lot more time (no danger of "foom"),
(b) we can use empirical methods (processes already exist) to ground our theories, and
(c) these processes are super-humanly intelligent but not so intelligent that their goals/methods are impossible to understand.
The claim is that if we can't make existing processes with all these simplifying features friendly, we have no hope to make a "foomy" AI friendly.
This post doesn't come close to refuting Intelligence Explosion: Evidence and Import.
Organizations have optimization power.
That's true, but intelligence as defined in this context is not merely optimization power, but efficient cross-domain optimization power. There are many reasons why the intelligence of AI+ greatly dwarfs that of human organizations; see Section 3.1 of the linked paper.
I think the world is already full of probably unfriendly supra-human intelligences...
This sounds similar to a position of Robin Hanson addressed in Footnote 25 ...
I felt an extreme sense of déjà vu when I saw the title of this post.
I'm pretty sure I saw a post with the same name a couple of months ago. I don't remember what the post was actually about, so I can't really compare substance, but I have to ask. Did you post this before?
Again, sorry if this is me being crazy.
No, there was a very very similar post, about how governments are already super intelligences and seem to show no evidence of fooming.
I cannot think of any route to recursive self-improvement for an organization that does not go through an AI. A priori, it's conceivable that there is such a route and I just haven't thought of it, but on the other hand, the corporate singularity hasn't happened, which suggests that it is extremely difficult to make happen with the resources available to corporations today.
I think there is another related problem that we should be worrying about more. I think the world is already full of probably unfriendly supra-human intelligences that are scrambling for computational resources in a way that threatens humanity.
Sure, but this is essentially the same problem - once you get around the thinkos.
I think trying to understand organizational intelligence would be pretty useful as a way of getting a feel for the variety of possible intelligences. Organizations also have a legal standing as artificial persons, so I imagine that any AI that wanted to protect its interests through legal means would want to be incorporated. I'd like to see this explored further. Any suggestions on good books on the subject of corporations considered as AIs?
This overall topic is known as collective intelligence, where the word "collective" is intended (at least by some proponents) as a contrast to both individual intelligence and AI. There are some folks studying rationality in organizations and management, most notably including Peter Senge who first formulated the idea of a learning organization as a rough equivalent to "rationality" as such.
At a glance this seems pretty silly, because the first premise fails. Organizations don't have goals. That's the main problem. Leaders have goals, which frequently conflict with the goals of their followers and sometimes with the existence of the organization.
I get the sense that "organization" is more or less a euphemism for "corporation" in this post. I understand that the term could have political connotations, but it's hard (for me at least) to easily evaluate an abstract conclusion like "many organizations are of supra-human intelligence and strive actively to enhance their cognitive powers" without trying to generate concrete examples. Imprecise terminology inhibits this.
When you quote lukeprog saying
...It would be a kind of weird corporation that was better than the best hum...
I think the reason that organizations haven't gone 'FOOM' is the lack of a successful "goal-focused self-improvement method." There is no known way of building an organization that does not suffer from goal drift and progressive degradation of performance. Humans have not even managed to understand how to build "goals" into an organization's structure except in the crudest manner, which is nowhere near flexible enough to survive the assaults of modern environmental change, and I don't think the information in sparse inter-linkages of real...
You say this is why you are not worried about the singularity, because organizations are supra-human intelligences that seek to self-modify and become smarter.
So is your claim that you are not worried about unfriendly organizations? Because on the face of it, there is good reason to worry about organizations with values that are unfriendly toward human values.
Now, I don't think organizations are as dangerous as a UFAI would be, because most organizations cannot modify their own intelligence very well. For now they are stuck with (mostly) humans for hardware...
Not that it's central or anything, but I find it amusing that you mention as examples Muehlhauser and Salamon (two very central figures, to be sure), without mentioning a particular third...
First, examining the dispute over whether scalable systems can actually implement a distributed AI...
This is one reason why even Google's datastore, AFAIK, does not implement exactly this kind of architecture -- though it is still heavily sharded. This type of data structure does not easily lend itself to purely general computation, either, since it relies on precomputed indexes and generally exploits some very specific property of the data that is known in advance.
That's untrue; Google App Engine's datastore is not built on exactly this architecture, but it is built on one with these scalability properties, and they do not inhibit its operation. It is built on BigTable, which builds on multiple instances of Google File System, each of which has multiple chunk servers. They describe this as intended to scale to hundreds of thousands of machines and petabytes of data. They do not define a design scaling to an arbitrary number of levels, but there is no reason an architecturally similar system couldn't simply add another level, at the cost of another potential roundtrip. I also omit discussion of fault tolerance, but it doesn't present any additional fundamental issues for the functionality described.
In actual application, this architecture is used in conjunction with a large number of interchangeable non-data-holding compute nodes which communicate only with the datastore and end users, not with each other, running identical instances of the software deployed on App Engine. This layout runs all websites and services backed by Google App Engine as distributed, scalable software, assuming they don't do anything to break scalability. There is no particular reliance on "special properties" of the data being stored, merely limits on the types of searches that are possible. Even this is less restrictive than you might imagine; full-text search of large texts was implemented fairly recently. A wide range of websites, services, and applications are built on top of it.
The implication is that there may well be limitations on what you can build scalably, but they are not all that restrictive. They certainly don't rule out anything for which you can split the data into independently processed chunks. Looking at GAE some more, because it's a good example of a generalised scalable distributed platform: the software run on the nodes is written in standard Turing-complete languages (Python, Java, and Go), and datastore access includes reads and writes by key, equality queries on specific fields, and cursors. A scalable task queue and cron system mean you aren't dependent on outside requests to drive anything. It's fairly simple to build any such chunk processing on top of it.
So as long as an AI can implement its work in such chunks, it certainly can scale to huge sizes and be a scalable system.
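To make the chunked pattern described above concrete, here is a minimal sketch. The chunk size and the sum-of-squares task are illustrative assumptions, not anything from GAE's actual API; threads stand in for remote worker nodes.

```python
from concurrent.futures import ThreadPoolExecutor

def process_chunk(chunk):
    # Each chunk is processed independently, with no communication to
    # other chunks; this independence is what lets the work spread
    # across arbitrarily many nodes.
    return sum(x * x for x in chunk)

def run(data, chunk_size=4):
    # Split the input into independent chunks, farm them out, then
    # combine the partial results with one cheap reduce step.
    chunks = [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]
    with ThreadPoolExecutor() as pool:
        partials = pool.map(process_chunk, chunks)
    return sum(partials)

print(run(list(range(10))))  # prints 285, same as the serial sum of squares
```

The same split/process/combine shape is what a task queue on a distributed platform would execute, with the datastore holding the chunks and partial results.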
And, as you also mentioned, even with these drastic tradeoffs you still get O(n log n).
And as I demonstrated, O(n log n) is big enough for a Singularity.
And now on whether scalable systems can actually grow big in general...
You mention Amazon (in addition to Google) as one example of a massively distributed system, but note that both Google and Amazon are already forced to build redundant data centers in separate areas of the Earth, in order to reduce network latency.
The speed of light is not a problem for building huge systems in general. So long as the number of roundtrips rises as O(n log n) or less, any system capable of tolerating roundtrips to the other side of the planet (a few hundred milliseconds) does not find latency becoming more of an issue as it grows, until you start running out of space on the planet's surface to run fibre between locations or build servers.
The GAE datastore already tolerates latencies sufficient to cover the distances between cities, to permit data duplication over wide areas for fault tolerance. If it were to expand into all the space between those cities, the time per roundtrip would not increase until it had filled all the space between them with more servers.
Google and Amazon are not at all forced to build data centres in different parts of the Earth to reduce latency; this is a misunderstanding. The size of their systems causes no technical performance degradation that forces them to need the latency improvements to end users, or the region-scale fault tolerance, that spread-out datacentres permit; they can simply afford these benefits more easily. You could argue there are social/political/legal reasons they need them more (higher expectations of their systems and similar), but these aren't relevant here. The spreading out is actually largely detrimental to their systems, since it increases latency between datacentres, but they can tolerate this.
Heat dissipation, power generation, and network cabling needs all also scale as O(n log n), since computation and communication do, and those are the processes which create those needs. Looking at my previous example, the heat output, power, and network cabling required per amount of data processed would increase by perhaps an order of magnitude while scaling such a system up by tens of orders of magnitude (about 5x for the 40 orders of magnitude in the example I gave). This assumes your base latency is still enough to cover the distance between the most distant nodes (for an Earth-bound system, one side of the planet to the other), which is entirely reasonable for most systems: a total of 1.5 seconds for a planet-sized system.
This means that no, these do not become an increasing problem as you make a scalable system expand, any more than provision of the nodes themselves does. You are right that heat dissipation, power generation, and network cabling mean you might start to hit problems before literally "running out of planet", using up all the matter of the planet; that example was intended to demonstrate the scalability of the architecture. You might also run out of specific elements or surface area.
These practical hardware issues don't really create a problem for a Singularity, though. Clusters already exist with 560k processors, so systems at least that big can feasibly be constructed at reasonable cost. So long as the software can scale without substantial overhead, this is enough, unless you think an AI would need even more processors; that the software can so scale is the point my planet-scale example was trying to show. You're already "post-Singularity" by the time you seriously become unable to dissipate heat or run cables between any more nodes.
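The logarithmic growth of roundtrip overhead claimed above can be checked with a toy calculation. The branching factor of 100 children per hierarchy level is an illustrative assumption, not a figure from any real deployment.

```python
def tree_depth(nodes, branching=100):
    """Levels in a balanced hierarchy holding `nodes` leaf servers.

    Each extra level adds roughly one more roundtrip per operation, so
    depth tracks the communication overhead of the sharded design.
    """
    depth, capacity = 1, branching
    while capacity < nodes:
        capacity *= branching
        depth += 1
    return depth

# Depth (and so roundtrip count) grows by a small factor even as the
# node count grows by tens of orders of magnitude.
for nodes in (10**5, 10**10, 10**20, 10**40):
    print(nodes, tree_depth(nodes))  # depths: 3, 5, 10, 20
```

Under these assumptions, multiplying the node count by 10^35 multiplies the per-operation roundtrip count by less than 7x, which is the shape of the O(n log n) argument above.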
This means that, even in an absolutely ideal situation where we can ignore power, heat dissipation, and network congestion, you will still run into the speed of light as a limiting factor. In fact, high-frequency trading systems are already running up against this limit even today.
HFT systems want extremely low latency; that alone is why they locate close to the exchange and accept various internal scalability limitations to improve processing speed. These issues don't generalise to typical systems, and don't grow faster than O(n log n) for typical bigger systems.
It is conceivable that speed-of-light limitations might force a massive, distributed AI to have high latency, perhaps over a second, in actions relying on knowledge from all over the planet, if prefetching, caching, and similar measures all fail. But this doesn't seem like nearly enough to render one at all ineffective.
There really isn't any rule of distributed systems which says that this can't work, or even that it is likely not to.
If I understand the Singularitarian argument espoused by many members of this community (eg. Muehlhauser and Salamon), it goes something like this:
I'm in danger of getting into politics. Since I understand that political arguments are not welcome here, I will refer to these potentially unfriendly human intelligences broadly as organizations.
Smart organizations
By "organization" I mean something commonplace, with a twist. It's commonplace because I'm talking about a bunch of people coordinated somehow. The twist is that I want to include the information technology infrastructure used by that bunch of people within the extension of "organization".
Do organizations have intelligence? I think so. Here's some of the reasons why:
I talked with Mr. Muehlhauser about this specifically. I gather that at least at the time he thought human organizations should not be counted as intelligences (or at least as intelligences with the potential to become superintelligences) because they are not as versatile as human beings.
...and then...
I think that Muehlhauser is slightly mistaken on a few subtle but important points. I'm going to assert my position on them without much argument because I think they are fairly sensible, but if any reader disagrees I will try to defend them in the comments.
Mean organizations
* My preferred standard of rationality is communicative rationality, a Habermasian ideal of a rationality aimed at consensus through principled communication. As a consequence, when I believe a position to be rational, I believe that it is possible and desirable to convince other rational agents of it.