Intelligence explosion in organizations, or why I'm not worried about the singularity

If I understand the Singularitarian argument espoused by many members of this community (e.g. Muehlhauser and Salamon), it goes something like this:

  1. Machine intelligence is getting smarter.
  2. Once an intelligence becomes sufficiently supra-human, its instrumental rationality will drive it toward cognitive self-enhancement (Bostrom), making it a super-powerful, resource-hungry superintelligence.
  3. If a superintelligence isn't sufficiently human-like or 'friendly', that could be disastrous for humanity.
  4. Machine intelligence is unlikely to be human-like or friendly unless we take precautions.
I am not particularly worried about the scenario envisioned in this argument.  I think that my lack of concern is rational, so I'd like to try to convince you of it as well.*

It's not that I think the logic of this argument is incorrect so much as I think there is another related problem that we should be worrying about more.  I think the world is already full of probably unfriendly supra-human intelligences that are scrambling for computational resources in a way that threatens humanity.

I'm in danger of getting into politics. Since I understand that political arguments are not welcome here, I will refer to these potentially unfriendly supra-human intelligences broadly as organizations.

Smart organizations

By "organization" I mean something commonplace, with a twist.  It's commonplace because I'm talking about a bunch of people coordinated somehow. The twist is that I want to include the information technology infrastructure used by that bunch of people within the extension of "organization". 

Do organizations have intelligence?  I think so.  Here are some of the reasons why:

  1. We can model human organizations as having preference functions. (Economists do this all the time)
  2. Human organizations have a lot of optimization power.

I talked with Mr. Muehlhauser about this specifically. I gather that at least at the time he thought human organizations should not be counted as intelligences (or at least as intelligences with the potential to become superintelligences) because they are not as versatile as human beings. In his words:

  So when I am talking about super-human intelligence, I specifically mean an agent that is as good [as] or better [than] humans at just about every skill set that humans possess for achieving their goals. So that would include things like not just mathematical ability or theorem proving and playing chess, but also things like social manipulation and composing music and so on, which are all functions of the brain, not the kidneys.

...and then...

  It would be a kind of weird [organization] that was better than the best human or even the median human at all the things that humans do. [Organizations] aren’t usually the best in music and AI research and theory proving and stock markets and composing novels. And so there certainly are [organizations] that are better than median humans at certain things, like digging oil wells, but I don’t think there are [organizations] as good or better than humans at all things. More to the point, there is an interesting difference here because [organizations] are made of lots of humans and so they have the sorts of limitations on activities and intelligence that humans have. For example, they are not particularly rational in the sense defined by cognitive science. And the brains of the people that make up organizations are limited to the size of skulls, whereas you can have an AI that is the size of a warehouse.

I think that Muehlhauser is slightly mistaken on a few subtle but important points.  I'm going to assert my position on them without much argument because I think they are fairly sensible, but if any reader disagrees I will try to defend them in the comments.

  • When judging whether an entity has intelligence, we should consider only the skills relevant to the entity's goals.
  • So, if organizations are not as good as a human being at composing music, that shouldn't disqualify them from being considered broadly intelligent if that has nothing to do with their goals.
  • Many organizations are quite good at AI research, or outsource their AI research to other organizations with which they are intertwined.
  • The cognitive power of an organization is not limited to the size of skulls. The computational power of many organizations comprises both the skulls of its members and, possibly, "warehouses" of digital computers.
  • With the ubiquity of cloud computing, it's hard to say that a particular computational process has a static spatial bound at all.
Organizations, then, often have the kinds of skills necessary to achieve their goals, and can be vastly better at them than individual humans. Many have the skills necessary for their own cognitive enhancement: if they can raise funding, they can purchase computational resources and fund artificial intelligence research. More mundanely, organizations of all kinds hire analysts and use analytic software to make instrumentally rational decisions.

In sum, many organizations are of supra-human intelligence and strive actively to enhance their cognitive powers.

Mean organizations


Grant the premise that there are organizations with supra-human intelligence that act to enhance their cognitive powers, along with the other premises of the Singularitarian argument outlined at the beginning of this post.

Then it follows that we should be concerned if one or more of these smart organizations are so unlike human beings in their motivational structure that they are 'mean'.

I believe the implications of this line of reasoning may be profound, but as this is my first post to LessWrong I would like to first see how this is received before going on.

* My preferred standard of rationality is communicative rationality, a Habermasian ideal of a rationality aimed at consensus through principled communication.  As a consequence, when I believe a position to be rational, I believe that it is possible and desirable to convince other rational agents of it.

Comments
gwern

Organizations are highly disanalogous to potential AIs, and suffer from severe diminishing returns: http://www.nytimes.com/2010/12/19/magazine/19Urban_West-t.html?reddit=&pagewanted=all&_r=0

As West notes, Hurricane Katrina couldn’t wipe out New Orleans, and a nuclear bomb did not erase Hiroshima from the map. In contrast, where are Pan Am and Enron today? The modern corporation has an average life span of 40 to 50 years. This raises the obvious question: Why are corporations so fleeting? After buying data on more than 23,000 publicly traded companies, Bettencourt and West discovered that corporate productivity, unlike urban productivity, was entirely sublinear. As the number of employees grows, the amount of profit per employee shrinks. West gets giddy when he shows me the linear regression charts. “Look at this bloody plot,” he says. “It’s ridiculous how well the points line up.” The graph reflects the bleak reality of corporate growth, in which efficiencies of scale are almost always outweighed by the burdens of bureaucracy. “When a company starts out, it’s all about the new idea,” West says. “And then, if the company gets lucky, the idea takes off. Everybody is happy a

...
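For readers who want to see how a claim like "corporate productivity is entirely sublinear" is read off data, here is a minimal sketch (my own illustration with made-up numbers, not Bettencourt and West's analysis): fit the slope of log output against log size; a slope below 1 means sublinear scaling, above 1 superlinear.

    import math

    # Sketch (mine, not Bettencourt & West's analysis): estimate a scaling
    # exponent b in output ~ size**b via least squares on log-log data.
    # b < 1 means sublinear (corporate-style), b > 1 superlinear (city-style).
    sizes   = [10, 100, 1_000, 10_000]          # e.g. number of employees
    outputs = [8, 63, 500, 3_980]               # made-up profit figures

    xs = [math.log(s) for s in sizes]
    ys = [math.log(o) for o in outputs]
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    b = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) \
        / sum((x - mean_x) ** 2 for x in xs)
    print(f"scaling exponent b = {b:.2f}")       # ~0.90 here: sublinear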
tgb

But then management starts worrying about the bottom line and so all these people are hired to keep track of the paper clips. This is the beginning of the end.

And so LessWrong has been proved correct that paperclips will be the end of us all.

Bugmaster
I may be wrong, but don't all distributed systems suffer from diminishing returns in this way ? For example, doubling the number of CPUs in a computing cluster does not allow you to solve your calculations twice as quickly. Your overhead, such as control infrastructure and plain old network latency, increases faster than linearly with every CPU you add, and eventually outgrows the useful processing power you can get out of new CPUs. This is one of the many reasons why I'm not worried about the Singularity...
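A toy model of the diminishing returns Bugmaster describes (my own sketch, not anything from the thread; all names and numbers are made up): if coordination overhead grows even mildly superlinearly with cluster size, total useful throughput eventually peaks and then falls.

    # Toy model (not from the thread): useful work vs. cluster size when
    # coordination overhead grows superlinearly with the number of nodes.
    def useful_throughput(n_nodes, work_per_node=1.0, overhead_coeff=0.01):
        # Assume pairwise coordination costs grow roughly with n^2 (all-to-all chatter).
        raw = n_nodes * work_per_node
        overhead = overhead_coeff * n_nodes ** 2
        return max(raw - overhead, 0.0)

    for n in (10, 50, 100, 200):
        print(n, round(useful_throughput(n), 1))
    # 10 -> 9.0, 50 -> 25.0, 100 -> 0.0, 200 -> 0.0: returns diminish, then vanish.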
gwern
Just to point out the obvious, the link itself covers a case of sublinear scaling: cities. So no, not all 'distributed systems' so suffer...
Bugmaster
Don't you mean, "superlinear"? But you're right, I should've read the full linked article before commenting. Now that I've read it, though, I am somewhat less than impressed. Here's one reason for that: Um. If your "fundamental law" has all these exceptions, that's a good hint that maybe it isn't as fundamental as you thought. The law of gravity doesn't have exceptions. And no, it's not always better to "have the law". Sometimes it is, for practical reasons, and sometimes it's better to devise a better law that doesn't give you so many false positives. The article goes on to describe the superlinear growth of efficiency in cities, and notes (correctly, IMO) that it cannot be sustained forever: But I think one point that the article is missing is that cities don't exist in a vacuum. As a city grows, it requires more food (which can't be grown efficiently inside the city), more highways (connecting it with its neighbours), etc. If we ignore all of that, we get superlinear scaling; but my guess is that if we include it, we would get sublinear scaling as usual -- in terms of overall economic output per single human.
gwern
You're missing the point too. Even gravity has exceptions - yes, really, this is a standard topic in philosophy of science because the Laws Of Gravity are so clear, yet in practice they are riddled with exceptions and errors. We have errors so large that Newtonians were forced to postulate entire planets to explain them (not all of which turned out as well as Uranus, Neptune, and Pluto), we have errors which took centuries to be winkled out, and of course errors like Mercury which ultimately could be explained only by an entirely new theory. And we're talking about real-world statistics: has there ever been a sociology, economics, or biological allometry paper where every single data point was predicted perfectly without any error whatsoever? (If you think this, then perhaps you should consult Tukey and Cohen on how 'the null hypothesis is always false'.) Absolutely; if you measure in certain ways, diminishing returns has clearly set in for humanity. And yet, compared to hunter-gatherers, we might as well be a Singularity. What does this tell you about the relevance of diminishing returns to Singularity discussions? (Chalmers's Singularity paper deals with this very question, IIRC, if you are interested in a pre-existing discussion.)
Bugmaster
In addition to what the others said on this thread, I'd like to say that my main problem was with the author's attitude, not the accuracy of his proposed law -- though the fact that it apparently has glaring holes in it doesn't really help. When you discover that your law has huge exceptions (such as f.ex. "all crustaceans" or "Mercury"), the thing to do is to postulate hidden planets, or discover relativity, or introduce a term representing dark energy, or something. The thing not to do is to say, "oh well, every law has exceptions, this is good enough for me, case closed ! Let's pretend that crustaceans don't exist, we're done". I'm not sure what you're referring to; of course, no one expects any line to have a correlation of 1.0 at all times. That'd be silly. However, it is almost equally as silly to take a few data points, and extrapolate them far into the future without any concern for what you're doing. Ultimately, you can draw a straight line through any two points, but that doesn't mean that a child will be over 5m tall at age 20 just because he grew 25cm in a year. How so ? Perhaps more importantly, if "diminishing returns has clearly set in for humanity" as you say, then what does that tell you for our prospects of bringing about the actual Singularity ?
gwern
Well, that's useful advice to the Newtonians, alright - 'hey guys, why did you let the Mercury anomaly linger for decades/centuries? All you had to do was invent relativity! Just ask Bugmaster!' I wasn't aware West had retired and was eagerly awaiting his Nobel phone call. Why do you think the existing dataset is analogous to your silly example? Not much.
Bugmaster
There's a difference between acknowledging the problems with your "fundamental law" (once they become apparent, of course) but failing to fix them for "decades/centuries"; vs. boldly ignoring them because "all laws have exceptions, them's the breaks". It's possible that West is not doing the latter, but the article does imply that this is the case. Which dataset are you talking about ? If you mean, the growth of cities, then see below. Why not ? If humanity's productive output has recently (relatively speaking) reached the point of diminishing returns, then a). we can no longer extrapolate the growth of productivity in cities by assuming past trends would continue indefinitely, and b). this does not bode well for the Singularity, which would entail an exponential growth of productivity, free of any diminishing returns.
gwern
It didn't sound like that to me. It sounded like some people had absurd standards for scaling phenomena, and he was rightly dismissing them. There's nothing recently about it. Diminishing returns is a pretty general phenomenon which happens in most periods; Tainter documents examples in many ancient settings, and we can find data sets suggesting diminishing returns in the West from long ago. For example, IIRC Murray finds that once you adjust for population growth, scientific achievement has been falling since the 1890s or so. It doesn't bode much of anything; I referred to you my list of 'what diminishing returns does not imply' for a reason: #1-4 are directly relevant. Diminishing returns does not mean no exponential growth; it does not mean no regime changes, massive accomplishments, breakthroughs, or technologies. It just means diminishing returns; it's just an observation about one unit of input turning into units of output as compared to the previous unit of input and outputs, nothing more and nothing less. This is obvious if you take Tainter or Murray or any of the results showing any diminishing returns in the past centuries, since those are precisely the centuries in which humanity has done the most extraordinarily well! One could say, with equal justice, that 'this does not bode well' for the 20th century; one could say with equal justice in 1950 that diminishing returns bodes poorly for the computer industry because not only are chip fab prices keeping on increasing ('Moore's second law'), computing power is visibly suffering diminishing returns as it is applied to more and more worthless problems - where once it was used on problems of vital national value (crucial to the survival of the free world and all that is good) worth billions such as artillery tables and H-bomb simulations, now it was being wasted on grad students and businesses.
A1987dM
What are you talking about?
gwern
I gave multiple examples and specified the field interested in how such a naive formulation is completely wrong; please ask better questions.
AlexMennen
No, you did not. Your examples are all consistent with our best current exceptionless theory of gravity (general relativity) and knowledge of the composition of our solar system (Uranus, Neptune, and Pluto). You merely hinted at the existence of additional examples that perplexed the Newtonians. In fact, since our current understanding of gravity is better than the Newtonians', hinting at the existence of examples that perplexed the Newtonians fails to even suggest a flaw in our best current theory, not to mention suggesting the existence of "exceptions to gravity". Please give at least one real example.
gwern
Nobody brought up relativity as the issue; the fact remains that every theory is incomplete and a work in progress, and a few errors is not disproof especially for a statistical generalization. You would not apply this ultra-high standard of 'the theory must explain every observation ever in the absence of any further data or modifications' to anything else discussed on LW, and I do not understand why either you or army1987 think you are adding anything to this discussion about cities exhibiting better scaling than corporations.
AlexMennen
You said that gravity has exceptions. I'm not quite sure what that's supposed to mean, but the only interpretation I could think of for that statement is that our current best theory of gravity (namely, general relativity) fails to predict how gravity behaves in some cases. I did not mean to suggest that any theory must explain every observation correctly to be useful, nor did I mean to imply anything about how well cities and corporations scale. I was merely pointing out that you falsely asserted that you had given examples of exceptions to gravity, when in fact you had only given examples of exceptions to Newtonian gravity as it would operate in a solar system similar but not identical to ours.
A1987dM
I saw what sounded to me like an extraordinary claim (though it turns out I misunderstood you) so I went WTF.
A1987dM
I have never heard of any observation showing that gravitation as described by general relativity (and, so long as you aren't very close to something very massive and aren't travelling at a sizeable fraction of the speed of light, excellently approximated by Newton's law) might have "exceptions" on Solar System-scale, except possibly the Pioneer anomaly (for which there is a very plausible candidate explanation) and similar. When I read "errors" I hoped you meant measurement uncertainties, but I can't make sense of the rest of the paragraph assuming you did.
gwern
http://en.wikipedia.org/wiki/Philosophy_of_science#Duhem-Quine_thesis may help you a little bit. You should probably read the entire article, since you seem to think there were no errors or exceptions, and that some exceptions could disprove a power law.
A1987dM
I think I know what you mean, but if I'm right, "gravity has exceptions" is, let's say, a very bizarre way of putting it. EDIT: yeah, you meant what I thought you meant.
AlexMennen
There are no examples of failures of general relativity in that entire article. So far, of the two of you, only army1987 has given an example of an even slightly perplexing observation.
gwern
Why should I give one? I never brought up relativity, army1987 did.
A1987dM
You brought up the Laws Of Gravity (capitals yours), which among insiders are known as the Einstein field equations of general relativity.
Bugmaster
This seems serendipitous: http://lesswrong.com/r/discussion/lw/g62/link_the_collapse_of_complex_societies/
gwern
Yes, Tainter is one of a number of sources which are why I think humanity has seen diminishing returns. I've been casually dumping some info in http://www.gwern.net/the-long-stagnation although if we were discussing just books, I think Murray's Human Accomplishment covers convincingly a much more important kind of diminishing returns compared to Tainter's focus on resources and basic economic metrics. (For those interested in the topic, I suggest looking at my link just for the intro bit about 5 propositions that the fact of diminishing returns does not prove; I believe more than one commenter on this page is committing at least one of those 5.)
jbeshir
Restricting the topic to distributed computation, the short answer is "essentially no". The rule is that you get at best linear returns, not that your returns diminish greatly. There are a lot of problems which are described as "embarassingly parallel", in that scaling them out is easy to do with quite low overhead. In general, any processing of a data set which permits it to be broken into chunks which can be processed independently would qualify, so long as you were looking to increase the amount of data processed by adding more processors rather than process the same data faster. For scalable distributed computation, you use a system design whose total communication overhead rises as O(n log n) or lower. The upper bound here is superlinear, but gets closer to linear the more additional capacity is added, and so scales well enough that with a good implementation you can run out of planet to make the system out of before you get too slow. Such systems are quite achievable. The DNS system would be an important example of a scalable distributed system; if adding more capacity to the DNS system had substantially diminishing returns, we would have a very different Internet today. An example I know well enough to walk through in detail is a scalable database in which data is allocated to shards, which manage storage of that data. You need a dictionary server to locate data (DNS-style) and handle moving blocks of it between shards, but this can then be sharded in turn. The result is akin to a really big tree; number of lookups (latency) to find the data rises with the log of the data stored, and the total number of dictionary servers at all levels does not rise faster than the number of shards with Actual Data at the bottom level. Queries can be supported by precomputed indexes stored in the database themselves. This is similar to how Google App Engine's datastore operates (but much simplified). With this fairly simple structure, the total cost of all reads/writes/qu
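A minimal sketch of the shard-tree idea described above (my own illustration with hypothetical names, not jbeshir's or Google's code): dictionary nodes route lookups by key range, so each query costs one hop per level and capacity grows by adding shards rather than bigger machines.

    # Minimal sketch (my illustration, not jbeshir's code) of the shard-tree idea:
    # a dictionary node maps key ranges to child nodes; leaves hold the data.
    # Lookups touch one node per level, so latency grows with log(total shards).
    import bisect

    class Node:
        def __init__(self, split_keys=None, children=None, data=None):
            self.split_keys = split_keys or []   # sorted boundaries for routing
            self.children = children or []       # dictionary servers or shards
            self.data = data                     # only set on leaf shards

        def lookup(self, key):
            if self.data is not None:            # leaf shard: answer directly
                return self.data.get(key)
            i = bisect.bisect_right(self.split_keys, key)
            return self.children[i].lookup(key)  # one hop per level

    # Two leaf shards behind one dictionary node; adding capacity means adding
    # shards (and, eventually, another dictionary level).
    root = Node(split_keys=["m"],
                children=[Node(data={"apple": 1}), Node(data={"zebra": 2})])
    print(root.lookup("zebra"))  # -> 2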
Bugmaster
I agree with what you are saying about scaling, as exemplified by sharded databases. But I am not convinced that any problem can be sharded that easily; as you yourself have said: This is one reason why even Google's datastore, AFAIK, does not implement exactly this kind of architecture -- though it is still heavily sharded. This type of a datastructure does not easily lend itself to purely general computation, either, since it relies on precomputed indexes, and generally exploits some very specific property of the data that is known in advance. And, as you also mentioned, even with these drastic tradeoffs you still get O(n log(n)). You mention Amazon (in addition to Google) as one example of a massively distributed system, but note that both Google and Amazon are already forced to build redundant data centers in separate areas of the Earth, in order to reduce network latency. This is important, because we aren't dealing with abstract tree nodes, but with physical machines, which have a certain volume (among other things). This means that, even in an absolutely ideal situation where we can ignore power, heat dissipation, and network congestion, you will still run into the speed of light as a limiting factor. In fact, high-frequency trading systems are already running up against this limit even today. This means that you'll run out of room to scale a lot faster than you run out of atoms of the Earth.
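A rough back-of-the-envelope check of the speed-of-light point (my own numbers, not from the thread):

    # Back-of-envelope check (my numbers, not from the thread) of the
    # speed-of-light bound on a planet-sized computer's internal latency.
    C_M_PER_S = 299_792_458                    # speed of light in vacuum
    EARTH_HALF_CIRCUMFERENCE_M = 20_000_000    # ~antipodal surface distance

    one_way_s = EARTH_HALF_CIRCUMFERENCE_M / C_M_PER_S
    print(f"best-case one-way latency: {one_way_s * 1000:.0f} ms")   # ~67 ms
    # A 3 GHz core runs ~200 million cycles in that time, before counting
    # fiber (slower than c) and routing overhead -- so globally distributed
    # nodes cannot coordinate at anything close to local speeds.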
jbeshir
First, examining the dispute over whether scalable systems can actually implement a distributed AI... That's untrue; Google App Engine's datastore is not built on exactly this architecture, but is built on one with these scalability properties, and they do not inhibit its operation. It is built on BigTable, which builds on multiple instances of Google File System, each of which has multiple chunk servers. They describe this as intended to scale to hundreds of thousands of machines and petabytes of data. They do not define a design scaling to an arbitrary number of levels, but there is no reason an architecturally similar system like it couldn't simply add another level and add on another potential roundtrip. I also omit discussion of fault-tolerance, but this doesn't present any additional fundamental issues for the described functionality. In actual application, its architecture is used in conjunction with a large number of interchangeable non-data-holding compute nodes which communicate only with the datastore and end users rather than each other, running identical instances of software running on App Engine. This layout runs all websites and services backed by Google App Engine as distributed, scalable software, assuming they don't do anything to break scalability. There is no particular reliance of "special properties" of the data being stored, merely limited types of searching of the data which is possible. Even this is less limited than you might imagine; full text search of large texts has been implemented fairly recently. A wide range of websites, services, and applications are built on top of it. The implication of this is that there could well be limitations on what you can build scalably, but they are not all that restrictive. They definitely don't include anything for which you can split data into independently processed chunks. Looking at GAE some more because it's a good example of a generalised scalable distributed platform, the software run on the
timtyler
Asynchronous computers could easily grow to a planetary scale. Parallel computing rarely gets linear scalability - but it doesn't necessarily flatten off quickly at small sizes, either.
V_V
Yes. Even on serial systems, most AI problems are at least NP-hard, which are strongly conjectured to scale not just superlinearly, but also superpolynomially (exponentially, as far as we know) in terms of required computational resources vs problem instance size. In many applications it can be the case that typical instances of these problems have special, domain-specific structure that can be exploited to construct domain-specifc algorithms and heuristics that are more efficient than the general purpose ones, in some cases we can even get polynomial time complexity, but this requires lots of domain-aware engineering, and even sheer trial-and-error experimentation. The idea that an efficient domain-agnostic silver-bullet algorithm could arise pretty much out of nowhere, from some kind of "recursive self-improvement" process with little or no interaction with the environment, is not based on anything we know from either theoretical or empirical computer science. In fact, it is well known that meta-optimization is typically orders of magnitude more difficult than domain-level optimization. If an AGI is ever built, it will be an huge collection of fairly domain-specific algorithms and heuristics, much like the human brain is a huge collection of fairly domain-specific modules. Such a thing will not arise in a quick "FOOM", it will not improve quickly and will be limited in how much it will be ever able to improve: once you find the best algorithm for a certain problem you can't find a better one, and certain problems are most likely going to stay hard even with the best algorithms. The "intelligence explosion" idea seems to be based on a naive understanding of computational complexity (e.g. Good 1965) that largely predates the discovery of the main results of complexity theory, like the Cook-Levin theorem (1971) and Karp's 21 NP-Complete problems (1972).
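A worked illustration of the point about exponential scaling (mine, not V_V's; the cost model is an assumption for illustration): for a problem whose cost grows like 2^n, the largest solvable instance size grows only logarithmically with available compute.

    import math

    # Illustration (mine, not V_V's): if solving an instance of size n costs
    # ~2**n steps, the largest solvable n grows only logarithmically in compute.
    def largest_solvable_n(budget_steps):
        return int(math.log2(budget_steps))

    for budget in (1e9, 1e12, 1e15, 1e18):
        print(f"{budget:.0e} steps -> n ~ {largest_solvable_n(budget)}")
    # 1e9 -> ~29, 1e12 -> ~39, 1e15 -> ~49, 1e18 -> ~59:
    # a billionfold increase in raw compute adds only ~30 to the instance size.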
Bugmaster
I agree with everything you've said, but, to be fair, we're talking about different things. My claim was not about the complexity of problems, but the scaling of hardware -- which, as far as I know, scales sublinearly. This means that doubling the size of your computing cluster will allow you to solve the same exact problem less than twice as fast; and that eventually you'll hit the point of diminishing returns where adding more machines simply isn't worth it. You're saying, on the other hand, that doubling your processing power will not necessarily allow you to solve problems that are twice as interesting; in most cases, it will only allow you to add one more city to the traveling salesman's itinerary (metaphorically speaking).
loup-vaillant
There is still room for weak super-intelligence, where the AI has human intelligence, only faster. (Example: an upload with sufficient computing power — as far as I know, brains work in a quite massively parallel fashion, and therefore so could simulations of it). Seriously, if I could upload myself into a botnet that would let each instance of me think 10 times faster than my meat-ware, I would probably take over the world in about 1 to 10 years. A versatile team of competent people? Less than 6 months. (Obvious path to do this: work for money, build and buy companies, then gather financial, lobbying, or military power. Better path to do this: think about it for 1 subjective year before proceeding.) My point is, the AI doesn't need to be vastly superhuman to take over the world very quickly. Even without the FOOM, the AGI can still be incredibly dangerous. Imagine something like the uploads above, only it can work 24/7 at full capacity (no sleep, no leisure time, no akrasia).
V_V
Maybe. Today, even with our best supercomputers we can't simulate a rat brain in real time. You would be able to work as 10 people, maybe a little more, but probably less than 30. I don't know how efficient you are, but I doubt that would be enough to take over the world. And why wouldn't other people have access to the same technology? Even if you managed to become world dictator, you would only stay in power as long as you had broad political support. Screw up something and you'll end up hanging from your power cord. What is it going to do? Secretly repurpose the iPhone factories in China to make Terminators?
loup-vaillant
I said botnet. That means dozens, thousands, or millions of me simultaneously working at 10 times human speed¹, and since they are instances of me, they presumably have the same goals. How would you stop that from achieving world domination, short of uploading yourself? [1] Assuming that many personal computers are powerful enough, and can be corrupted. A slower course of action would be to buy a data-centre first, work, then buy more data-centres, and duplicate myself exponentially from that.
V_V
That doesn't mean that they would necessarily cooperate, especially as they diverge. They would be more like identical twins. Releasing a security patch? Seizing all the funds you obtained by your illegal activities? Banning use of any hardware that could host you until a way to avoid such things is found? Assuming that using these data centers to run copies of you is the most economically productive use of them, rather than, say, running copies of other people, or cow-clicker games.
loup-vaillant
Wait a minute: would you defect? Sure, there would be some divergence, but do you really think it would result in a significant divergence of goals, even if you had a plan and were an adult by the time you fork? Okay, it can happen, and is probably worth taking specific precautions. I don't think this is a show stopper however, and I'm not sure it would render me any less dangerous. That may not be enough:
  * I would probably man-in-the-middle automatic updates.
  * Many people won't erase their hard drive or otherwise patch their machine manually.
  * I may convince some people to let me run (I could work for them, for instance).
  * If I'm stealthy enough, it may take some time before I'm discovered at all (it happened with actual computer viruses).
  * If software continues the way it is now (200 million lines of code for systems that could fit in 20 thousand), security bugs won't all be patched in advance. The reliability of our computers needs to go waay up before botnets become impossible.
Good luck with that one. Obviously, I would have many, many little bank accounts, managed separately and in parallel, under many different identities. You would have to spot my illegal activities one by one to seize the funds. Plus, I may do legal activities as well. That one is excellent. We should watch out for computing overhang, however, and try and estimate how much computing power an upload would need before the software is developed.
A final note: if I really had the possibility to upload myself, one of my first moves would be to propose that SIAI and CFAR upload with me (now that we can duplicate Eliezer…). I trust them more than I trust me for a Friendly Takeover. But if a Big Bad or a Well Intentioned Extremist has access to that first…
V_V
Even if their goals stay substantially the same, it wouldn't mean that they would naturally cooperate, especially when their main goal is world domination. Hell, it's already non-trivial for a single person to coordinate with future selves, resulting in all kinds of ego-dystonic behaviors: impulsiveness, akrasia, etc. Coordinating with thousands of copies of yourself would be only marginally easier than coordinating with thousands of strangers. We are not talking about some ideal "Prisoner's dilemma with mind-clone" scenario. After the mind states of your copies diverge a little bit, and that would happen very quickly as you spread your copies to different machines, they become effectively different people: you wouldn't be able to predict them and they wouldn't be able to predict you. Hacking all the routers? Good luck with that. And BTW routers can also be updated. Manually. Because they are lazy and they would prefer to live under world dictatorship. Then you are their employee, not their dominator. But if you are to dominate the world, you would have to eventually reveal yourself. What do you think would happen next? Botnets are certainly possible and they are indeed used for nefarious purposes, but world domination? Nope. As Bugmaster said, you would be able to perform only small purchases, not to buy a satellite, or an army. Moreover, obtaining and managing lots of fake or stolen identities, creating bank accounts without physically showing up at the bank or using stolen bank accounts, is not something that tends to go unnoticed. The more you have, the more likely that you get caught, exponentially so. Under multiple fake identities operated from a botnet of hacked computers? Hardly so. Software tends to march right behind hardware, exploiting it close to its maximum potential. Computing overhang is unlikely. Anyway, I wasn't proposing any luddite advance ban. If some brain upload, or AI or whatever tries to take over the world by hacking the Internet and other c
MugaSofer
You really think you would diverge that quickly? I'm ... not sure how those are criticisms.
loup-vaillant
  * Man in the middle: I just meant intercepting automatic updates at the level of the computer I'm in. Trojan todo list n°7: once installed and running, I will intercept all communications to and from this computer. I wouldn't want Norton updating behind my back. Now, try and hack the routers in the backbone, that's something I didn't think about…
  * Employee vs dominator: I obviously intend to double-cross my employers, eventually.
  * Revealing myself: that one needs to be carefully thought through. Hopefully, by the time I reveal myself, I will have sufficient blackmail power. Having a sufficient number of physical robots can also help.
  * Zillions of fake IDs, yet staying stealthy: well, I do expect a fair number of my identities to be exposed. This should pose no problem to the others, however, provided they do not visibly communicate with each other (at first).
  * Legal activities: my meat instance could buy a few computers, rent remote servers, etc. I doubt I would be incapable of running at least a successful business from there. And from there, buy even more computing power. This could be done in parallel with the illegal activities.
  * Computing (no) overhang: this one is the single reason why I do agree that without a FOOM of some kind, actual world domination is unlikely: there will be multiple competing uploads, and this should end with a Hansonian scenario. Given that such a world is closer to Hell than Heaven (to me at least), that still counts as an Existential Blunder. On the bright side, we may see this coming.
That said, I still do believe full blown intelligence explosion is likely. Note that overall, your objections are actually valuable advice. And that gives me some insight about what my very first move should be: gathering such objections, and trying to find counters or workarounds. And now that you made quite clear that any path to world domination is long, complicated, and therefore nearly certain to fail, I should run multiple schemes in parallel
Bugmaster
I believe that this would severely limit your financial throughput. You would be able to buy lots of little things, whose total cost is quite significant -- for example, you could buy yourself a million cheap PCs, each costing $1000. But you would not be able to buy a single expensive thing (at least, not without exposing yourself to instant retribution), such as a satellite costing $1e9.
loup-vaillant
Currently, there are ways to create companies anonymously. This is preventing (or at least slowing down to a crawl) retribution right now. If all this company apparently does is buying a few satellites, it won't be at great risk.
Bugmaster
Good work, I believe we've got the next James Bond movie in the bag :-)
Bugmaster
Do you mean, competent people who are thinking 10 times faster than biological humans, or what ? This seems a bit of a stretch. There currently exist tons of frighteningly competent people in all kinds of positions of power in the world, and yet, they do not control it (unless you believe in conspiracy theories). If it was this easy, some biological human (or a team of such humans) would've done it already, in 10 to 50 years or however long it takes. In fact, a few humans have managed to take over individual countries in about as much time. However, as things stand now, there's simply no clear path to world domination. Political and military power gets much more difficult to gather the more of it you have. Even superpowers such as USA or China cannot dictate terms to the rest of the world. Furthermore, my point was that uploading yourself to 10 machines will not allow you to think 10 times as fast. With every machine you add, your speed gains would become progressively smaller. You would still think much faster than an ordinary human, of course.
loup-vaillant
I mean exactly that. I'd be very surprised if, ultimately, neuromorphic AIs would be impossible to run significantly faster than meat-ware, because our brain is massively parallel and current microprocessors have massively faster serial speed than neurons. Now our brains aren't fully parallel, so I assumed an arbitrary speed-up limit. I said 10 times, but it would probably still be incredibly dangerous at 2 or 3, or even lower. Now do not forget the key word here: botnet. The team is supposed to duplicate itself many times over before trying to take over the world. I don't think so, because uploads have significant advantages over meat-ware:
  * Low cost of living, in a world where every middle-class home can afford sufficient computing power for an upload (required to turn me into a botnet). Now try to beat my prices.
  * Being many copies of the same few original brains. It means TDT works better, and defection is less likely. This should solve Because once the self-duplicating team has independently taken economic control of most of the world, it is easy for it to accept the domination of one instance (I would certainly pre-commit to that). Now for the rest of humanity to accept such dominance, the uploads only have to use the resources they acquired for the individual perceived benefit of the meat bags. Yep, that would be a full blown global conspiracy. While it's probably forever out of the reach of meat bags, I think a small team of self-replicating uploads can pull it off quite easily.
  * Hansonian tactics, which can further the productivity of the team, and therefore market power. (One has to be very motivated, or possibly crazy.)
  * Temporary mass duplication followed by the "termination" of every instance but one. The surviving instance can have much subjective free time, while the proportion of leisure computing stays very small.
  * Save and reload of snapshots which are in a particularly good mood (and therefore very
Bugmaster
So would I. However, given our current level of technological development, I'd be very surprised if we had any kind of a neuromorphic AI at all in the near future (say, in the next 50 years). Still, I do agree with you in principle. There are tons of biological people alive today who are able to come up with solutions to problems 2x to 3x faster than you and me. They do not rule the world. To be fair, I doubt that there are many people -- if any -- who think 10x faster. I doubt that you will be able to achieve that; that was my whole point. In fact, I have trouble envisioning what "economic control of most of the world" even means. What does it mean to you? In addition to the above, your botnet would face several significant threats, both external and internal:
  * Meatbags would strive to shut it down; not because they suspect it of being an evil conspiracy, but because they'd get tired of it sucking away their resources. Modern malware botnets suffer this fate often, though there's always someone willing to rebuild them.
  * If your botnet becomes a serious threat (much worse than current real-world botnets), hardware manufacturers will implement security measures, such as SecureBoot, to prevent it from spreading. Currently, such measures are driven by the entertainment industry.
  * The super-fast instances of you would have to communicate with each other, and they'd only be able to do so through very slow (relatively speaking) network links. Google and Amazon are solving this problem by building more and more local datacenters. Real botnets aren't solving the problem at all because their instances don't need to talk to each other all that much.
  * How would you feel, right now, if your twin pointed a gun at your head with the intent to kill you "for the greater good"? This is how your instances will feel when you attempt to shut them down to prevent akrasia.
  * Why are you taking over the world in the first place? Chances are that whatever your ultimate goal
sbenthall
Fair enough. Not sure I see your point though. What is the relevance of profit per employee to the question of the power of organizations? And why would a machine intelligence not suffer similar coordination problems as it scales up?
gwern

What is the relevance of profit per employee to the question of the power of organizations?

Corporations exist, if they have any purpose at all, to maximize profit. So this presents a sort of dilemma: their diminishing returns and fragile existence suggest that either they do intend to maximize profit but just aren't that great at it; or they don't even have that purpose, which is evolutionarily fit and which they are intended to have by law, culture, and their owners, in which case how can we consider them powerful at all or remotely similar to potential AIs etc.?

And why would a machine intelligence not suffer similar coordination problems as it scales up?

For any of the many disanalogies one could mention. I bet organizations would work a lot better if they could only brainwash employees into valuing nothing but the good of the organization - and that's just one nugatory difference between AIs (uploads or de novo) and organizations.

HalMorris
For the owners and shareholders, though, not for the employees, unless they are all partners. As to why more employees could lead to lower profit per employee: suppose a smart person running a one-man company hires a delivery truck driver. I'd expect it to happen there. That's only an example, but I think it suggests some hypotheses.
sbenthall
Ok, let's recognize some diversity between corporations. There are lots of different kinds. Some corporations fail. Others are enormously successful, commanding power at a global scale, with thousands and thousands of employees. It's the latter kind of organization that I'm considering as a candidate for organizational superintelligence. These seem pretty robust and good at what they do (making shareholders profit). As HalMorris suggests, that there are diminishing returns to profit with number of employees doesn't make the organization unsuccessful in reaching its goals. It's just that they face diminishing returns on a certain kind of resource. An AI could face similar diminishing returns. I agree completely. I worry that in some cases this is going on. I've heard rumors of this sort of thing happening in the dormitories of Chinese factory workers, for example. But more mundane ways of doing this involve giving employees bonuses based on company performance, or stock options. Or, for a different kind of organization, by providing citizens with a national identity. Organizations encourage loyalty in all kinds of ways.
gwern
As far as I know, large corporations are almost as ephemeral as small corporations. Which tells you something about how valuable it is, and how ineffective each of the many ways is, no?
timtyler
The idea that machine intelligences won't delegate work to other agents with different values seems terribly speculative to me. I don't think it counts as admissable evidence.
gwern
Why would they permit agents with different values? If you're implicitly thinking in some Hansonian upload model, modifying an instance to share your values and be trustworthy would be quite valuable and a major selling point, since so much of the existing economy is riven with principal-agent problems and devoted to 'guard labor'.
timtyler
Agents may not fuse together for the same reason that companies today do not: they are prevented from doing so by a monopolies commission that exists to preserve diversity and prevent a monoculture. In which case, they'll have to trade with and delegate to other agents to get what they want. That doesn't sound like me: Tim Tyler: Against whole brain emulation.
NancyLebovitz
It's at least possible that the machine intelligences would have some respect for the universe being bigger than their points of view, so that there's some gain from permitting variation. It's hard to judge how much variation is a win, though.
timtyler
Huh? 48 billion dollars not enough for you? What sort of profit would you be impressed by?
gwern
Why would you think $48b is at all interesting when world GDP is $70t? And show me a largest corporation in the world which manages to hold on for even a few centuries like a mediocre state can...
timtyler
Massive profits seem like a pretty convincing refutation of the bizarre idea that corporations aren't that great at maximising profits to me. Modern corporations are the best profit maximisers any human has ever seen. Lifespan seems like an irrelevant metric in a discussion about corporate intelligence.
gwern
Compared to what? Ceteris paribus, long lifespan helps with generating profit: long-lived corporations accumulate reputational capital and institutional expertise, can amortize long-term investments over longer horizons, etc.
timtyler
So: older companies mostly. Death is much less of a significant factor than with humans, since old corporations can be broken up and the pieces sold. It doesn't matter so much if old corporations die when their parts can be usefully recycled. Things like expertise can easily outlast a dead corporation.

Never mind the singularity, organizations aren't friendly and I'm worried about them.

Gavin
Yes, Unfriendly organizations are a major threat to humanity. The battle is ongoing. The death toll stands in the tens of millions, much higher if you want to count generously. So yes, unfriendly organizations are a real threat. But they're one that we're all aware of. Luckily, a host of Friendly people and organizations are dedicated to fighting them, studying them, and mitigating their damage. And many people end up counteracting them, simply by living generally good lives. Taking the long view of history, I believe that, over the last few hundred years, we have been winning this battle. There's news of tragedy every day, but by many measures 2012 was the world's best year ever. The UFAI threat, if the SIAI argument is correct, is a sudden and irreversible threat that is currently ignored even by those attempting to build AGI. That's why a small group of dedicated individuals has chosen it as their best chance to influence the future. They're applying pressure where they believe it can have the greatest effect. No one has claimed that it was the only threat, just a very important one.

An organization could be viewed as a type of mind with extremely redundant modular structure. Human minds contain a large number of interconnected specialized subsystems, in an organization humans would be the subsystems. Comparing the two seems illuminating.

Individual subsystems of organizations are much more powerful and independent, making them very effective at scaling and multitasking. This is of limited value, though: it mostly just means organizations can complete parallelizable tasks faster.

Intersystem communication is horrendously inefficient in organizations: bandwidth is limited to speech/typing and latency can be hours. There are tradeoffs here: military and emergency response organizations cut the latency down to seconds, but that limits the types of tasks the subsystems can effectively perform. Humans suck at multitasking and handling interruptions. Communication patterns and quality are more malleable, though. Organizations like Apple and Google have had some success in creating environments that leverage human social tendencies to improve on-task communication.

Specialization seems like a big one. Most humans are to some degree interchangeable: what one can do, most o...

TimS

One of the advantages of bureaucracy is creating value from otherwise low-value inputs. The collection of people working in the nearest McDonald's probably isn't capable of figuring out from scratch how to run a restaurant. But following the bureaucratic blueprint issued from headquarters allows those same folks to produce a hamburger on demand, and get paid for it.

That's a major value of bureaucratic structure - lowering the variance and raising the downside (i.e. a fast food burger isn't great, but it meets some minimum quality and won't poison you).

sbenthall
I think that you are underestimating the efficiency of intersystem communication in a world where a lot of organizational communication is handled through information technology. Take a modern company with a broad reach. The convenience store, CVS, say. Yes, there is a big organizational hierarchy staffed by people. But there is also a massive data collecting and business intelligence aspect. Every time they try to get you to swipe your CVS card when you buy toothpaste, they are collecting information which they then mine for patterns on how they stock shelves and price things. That's just business. It's also a sophisticated execution of intelligence that is far beyond the capacity of an individual person. I don't understand your point about specialization. Can you elaborate? Also, I don't understand what the difference between a 'superintelligence' and a 'sped-up human' would be that would be pertinent to the argument.
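A minimal illustration of the kind of pattern mining described above (a toy sketch with made-up baskets, not CVS's actual pipeline): count which products co-occur in loyalty-card transactions to inform stocking and pricing decisions.

    from collections import Counter
    from itertools import combinations

    # Toy illustration (not CVS's actual system) of loyalty-card pattern mining:
    # count which product pairs co-occur in baskets to inform shelving/pricing.
    baskets = [
        {"toothpaste", "floss", "mouthwash"},
        {"toothpaste", "floss"},
        {"shampoo", "toothpaste"},
    ]
    pair_counts = Counter()
    for basket in baskets:
        for pair in combinations(sorted(basket), 2):
            pair_counts[pair] += 1

    print(pair_counts.most_common(1))   # [(('floss', 'toothpaste'), 2)]
    # Scaled to millions of baskets, no single analyst holds this picture;
    # the organization (people + software) does.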
aleksiL
Speech and reading seem to be at most 60 bits per second. A single neuron is faster than that. Compare to the human brain. The optic nerve transmits 10 million bits per second and I'd expect interconnections between brain areas to generally fall within a few orders of magnitude. I'd call five orders of magnitude a serious bottleneck and don't really see how it could be significantly improved without cutting humans out of the loop. That's what your data mining example does, but it's only as good as the algorithms behind it. And when those approach human level we get AI. Individual humans have ridiculous amounts of overlap in skills and abilities. Basic levels of housekeeping, social skills etc. are pretty much assumed. A lot of that is necessary given our social instincts and organizational structures: a savant may outperform anyone in a specific field, but good luck integrating them in an organization. I'm not sure how much specialization can be improved with baseline humans, but relaxing the constraint that everyone should be able to function independently in the wider society might help. Also, focused training from a young age could be useful in creating genius-level specialists, but that takes time. Given a large enough speedup and indefinite lifespan, pretty much none. The analogy may have been poorly chosen.
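A quick check of the arithmetic behind the "five orders of magnitude" figure, using the numbers given above:

    import math

    # Quick check of aleksiL's bandwidth gap, using the figures given above.
    speech_bps = 60              # claimed upper bound for speech/reading
    optic_nerve_bps = 10_000_000
    gap = optic_nerve_bps / speech_bps
    print(f"ratio: {gap:,.0f}x  (~{math.log10(gap):.1f} orders of magnitude)")
    # ratio: 166,667x (~5.2 orders of magnitude) -- the bottleneck between
    # intra-brain channels and human-to-human channels.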
sbenthall
Wait...one sec. Isn't all that redundancy in human society a good thing, from the perspective of saving it from existential risk? If I were an AI, wouldn't one of the first things I do be to create a lot of redundant subsystems loosely coordinating in some way, so that if half of me is destroyed, the rest lives on?
sbenthall
It looks to me like there's a continuum within organizations as to whether they do most of their information processing using hardware or wetware. I acknowledge that improvements in machine intelligence may shift the burden of things to machines. But I don't think that changes the fact that many organizations already are superintelligences, and are in the process of cognitively enhancing themselves. I guess I'd argue that organizations, in pursuit of cognitive enhancement, would coordinate their human and machine subsystems as efficiently as possible. There are certainly cases where specialists are taken care of by their organizations (ever visited a Google office, for example?). While there may be overlap in skills, there's also lots of heterogeneity in society that reflects, at least in part, economic constraints.
Viliam_Bur
In a company large enough, the humans would be like the cells, and the departments would be the subsystems. The functional difference between e.g. the accounting department and the private security department can be big, even if both are composed of biologically almost the same homo sapiens individuals. When comparing the speed of organizations with speed of humans, on different scales the speed comparison can be different. As an analogy, a bacterium can reproduce faster than a human, but a human will write a book faster. Similarly, humans can do many things faster than organizations, but some other things are just out of reach for an individual human without an organization of some kind. I would say that today, humans are relatively advanced in the human-space, shaped by biological evolution and culture for a long long time. Compared with that, organizations seem rather primitive and fragile in the organization-space. Yet even today the organizations can do things that individual humans can't. It is like looking at the first multi-cellular organisms and deciding that although they have some small advantages over the single-cellular ones, they are not impressive enough.

There are academic fields that study the behavior and anatomy of groups of people who act together to pursue goals. These include sociology, organizational behavior, military science, and even logistics. Singularity researchers should take some note of these fields' practical results.

Is that pretty much the point here?

sbenthall
One of them, certainly. But more than that, the 'Singularity' is a misnomer if it's applied to a situation that has already been going on for years. If multiple superintelligences are already on the scene, then why is the possibility of an entirely artificial superintelligence so threatening or revolutionary? Even if one were to be invented, it would be competing with all the others.
timtyler
As I put it: http://alife.co.uk/essays/the_singularity_is_nonsense/ ...and... http://alife.co.uk/essays/the_intelligence_explosion_is_happening_now/

The reason an AGI would go foom is that it either has access to its own source code, so it can self-modify, or it is capable of making a new AGI that builds on itself. Organizations don't have this same power, in that they can't modify the mental structure of the people that make up the organization. They can change the people in it, and the structure connecting them, but that's not the same type of optimization power as an AGI would have.

Also:

When judging whether an entity has intelligence, we should consider only the skills relevant to the entity's goals.

Not if you're talking about general intelligence. Deep Blue isn't an AGI, because it can only play chess. This is its only goal, but we do not say it is an AGI because it is not able to take its algorithm and apply it to new fields.

HalMorris
Deep Blue is far, far from being AGI, and is not a conceivable threat to the future of humanity, but its success suggests that implementation of combat strategy within a domain of imaginable possibilities is a far easier problem than AGI. In combat, speed, both of getting a projectile or an attacking column to its destination, and speed of sizing up a situation so that strategies can be determined, just might be the most important advantage of all, and speed is the most trivial thing in AI. In general, it is far easier to destroy than to create. So I wouldn't dismiss an A-(not-so)G-I as a threat because it is poor at music composition, or true deep empathy(!), or even something potentially useful like biology or chemistry; i.e. it could be quite specialized, achieving a tiny fraction of the totality of AGI and still be quite a competent threat, capable of causing a singularity that is (merely) destructive.
3jsteinhardt
The argument in the post is not that AGI isn't more powerful than organizations, it is that organizations are also very powerful, and probably sufficiently powerful that they will create huge issues before AGI creates huge issues.
6falenas108
Yes. I was pointing out that the thing that makes AGI dangerous, i.e. recursive improvement, does not apply to organizations.
0timtyler
You are claiming that organisations don't improve? Or that they don't improve themselves? Or that improving themselves doesn't count as a form of recursion? None of these positions seems terribly defensible to me.
2sbenthall
I may be missing something, but...if an organization depends on software to manage some part of its information processing, and it has developers that work on that source code, can't the organization modify its own source code? Of course, you run into some hardware and wetware constraints, but so does pure software. Fair enough. But then consider the following argument: Suppose I have a general, self-modifying intelligence. Suppose that the world is such that it is costly to develop and maintain new skills. The intelligence has some goals. If the intelligence has any skills that are irrelevant to its goals, it would be irrational for it to maintain those skills. At this point, the general intelligence would modify itself into a non-general intelligence. By this logic, if an AGI had goals that weren't so broad that they required the entire spectrum of possible skills, then it would immediately castrate itself of its generality. Does that mean it would no longer be a problem?
4falenas108
Such an organisation can self-modify, but those modifications aren't recursive. They can't use one improvement to fuel another, they would have to come up with the next one independently (or if they could, it wouldn't be nearly to the extent that an AGI could. If you want me to go into more detail with this, let me know). The point isn't that an AGI has or does not have certain skills. It's that it has the ability to learn those skills. Deep Blue doesn't have the capacity to learn anything other than playing chess, while humans, despite never running into a flute in the ancestral environment, can learn to play the flute.
4sbenthall
I disagree. Suppose an organization has developers who work in-house on their issue tracking system (there are several that do--mostly software companies). An issue tracking system is essentially a way for an organization to manage information flow about bugs, features, and patches to its own software. The issue tracker (as a running application) coordinates between developers and the source code itself (sometimes, its own source code). Taken as a whole, the developers, issue tracker implementation, and issue tracker source code are part of the distributed cognition of the organization. I think that in this case, an organization's self-improvement to the issue tracker source code recursively 'fuels' other improvements to the organization's cognition. Fair enough. But then we should hold organizations to the same standard. Suppose, for whatever reason, an organization needs better-than-median-human flute-playing for some purpose. What then? Then they hire a skilled flute-player, right? I think we may be arguing over an issue of semantics. I agree with you substantively that general intelligence is about adaptability, gaining and losing skills as needed. My point in the OP was that organizations and the hypothetical AGI have comparable kinds of intelligence, so we can think about them as comparable superintelligences.
0falenas108
Yes, it can fuel improvement. But not to the same level that an AGI that is foom-ing would. See this thread for details: http://lesswrong.com/lw/g3m/intelligence_explosion_in_organizations_or_why_im/85zw I agree that organizations may be seen as similar to an AGI that has supra-human intelligence in many ways, but not in their ability to self modify.
0timtyler
Really? It seems to me as though software companies do this all the time. Think about Eclipse, for instance. The developers of Eclipse use Eclipse to program Eclipse with. Improvements to it help them make further improvements directly. So, the recursive self-improvement is a matter of degree? It sounds as though you now agree.
0falenas108
It's like the post here: http://lesswrong.com/lw/w5/cascades_cycles_insight/ It's highly unlikely a company will be able to get >1.
-1timtyler
To me, that just sounds like confusion about the relationship between genetic and psychological evolution. Um, > 1 what? It's easy to make irrefutable predictions when what you say is vague and meaningless.
0falenas108
The point of the article is that if the recursion can work on itself more than a certain amount, then each new insight allows for more insights, as in the case of uranium for a nuclear bomb. > 1 refers to the average amount of improvement that an AGI that is foom-ing can gain from an insight. What I was trying to say is the factor for corporations is much less than 1, which makes it different from an AGI. (To see this effect, try plugging in .9^x in a calculator, then 1.1^x)
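A minimal sketch of the arithmetic behind that calculator suggestion (the function name and numbers are mine, purely for illustration): if each insight yields k further insights on average, the cumulative total levels off for k < 1 and keeps compounding for k > 1.

```python
# Illustrative only: cumulative improvement when each insight yields k
# further insights on average (a geometric series).
def total_improvement(k: float, generations: int) -> float:
    return sum(k ** i for i in range(generations))

for k in (0.9, 1.1):
    print(k, round(total_improvement(k, 50), 1))
# k = 0.9 levels off near the limit 1 / (1 - 0.9) = 10, no matter how long it runs;
# k = 1.1 keeps compounding (roughly 1164 after 50 steps) -- the 'foom' regime.
```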
-4timtyler
So: that sounds like what is commonly called "exponential growth". Some companies do exhibit exponential economic growth. Indeed the whole economy exhibits exponential growth - a few percent a year - as is well known. I don't think you have thought your alleged corporate "shrinking" effect through.
Emile110

Robin Hanson has said somewhat similar things in his talk of UberTools.

On one hand, I think Luke is too dismissive of organizations. There's no reason not to regard organizations as intelligences, and I think the most likely paths to AGI go through some organization (today, Google looks like the most-likely candidate). But the bottleneck on organizational intelligence is either human intelligence or machine intelligence. So a super-intelligent corporation will end up having super-intelligent computers (or super-intelligent people, but it seems like computers are easier). If we're very lucky, those computers will directly inherit the corporation's purported goal structure ("to enhance shareholder value"). Not that shareholder value is a good goal -- just that it's much less bad than a lot of the alternatives. Given the difficulty of AI programming (not to mention internal corporate politics and Goodhart's law), it seems like SIAI's central arguments still apply.

4sbenthall
I disagree. I think there are lots of gains to intelligence that can happen at the point of human-computer interaction, or in the facilitation of human intelligence by machine intelligence, or vice versa. For example, collaborative filtering technology. Or, internet message boards. I'm curious why you think that an artificial intelligence system built by Google would be likely not to meet the corporation's goal structure (or some sub-goal). In practice, AI programming tends to be about building expert systems for particular functions. It's difficult (and expensive) just to do that. So, building up an intelligent system that just goes crazy and kills people doesn't seem to be in, say, Google's interest. That said, I'd be curious to follow the thread of whether maximizing shareholder value is a 'friendly' or 'mean' goal structure. Since that seems to be one of the predominant goal structures that it's likely for a superintelligence to have, it seems like that would be of particular interest. (Another one might be "win elections", since political parties are increasingly using machine intelligence to augment their performance.)
5novalis
There are some gains, sure, but not lots and not, so far, recursive gains. I think that many AI systems presently built by Google do meet the corporation's sub-goals (or, to be more precise, sub-goals of parts of the organization, which might not be the same as the corporation as a whole). The only case I'm worried about is a self-modifying AI. Presently, there aren't any of those. Ensuring that goals are stable under self-modification is the hard problem that SIAI is worried about. There's been a lot of discussion around here on "Tool AI"; here's one. On one hand, public corporations have certainly created plenty of prosperity over the past few hundred years, while (in theory) aiming mostly to maximize shareholder value. But if value is denominated in dollar terms, one way to maximize shareholder value would be hyperinflation. That would be extremely bad for everyone. But even if we exclude that problem, most shareholders value something other than just dollars -- the natural environment, for instance. And yet those preferences might not be captured by an AI's goal system (especially a non-Google system; Google doesn't seem to mind creating positive externalities but most other tech companies try to avoid it). It still probably beats being turned into paperclips, but I would hope for better.
3sbenthall
What about the organizations that focus on tools that support software development? The Git community, for example. Is there a resource you can direct me to that clarifies what you mean by recursive gains or self-modifying AI? If I'm not mistaken these terms are not used in the resources I've been reading about this. But if I'm guessing the meaning of the terms right, it seems to me that organizations self-modify all the time.
2novalis
Yes, but Git has a bottleneck: there are humans in the loop, and there are no plans to remove or significantly modify those humans. By "in the loop", I mean humans are modifying Git, while Git is not modifying humans or itself. Yes, but unfortunately it's long-winded -- specifically this article about something similar to the Git community.
2sbenthall
I think I see what you mean, but I disagree. First, I think timtyler makes a great point. Second, the level of abstraction I'm talking about is that of the total organization. So, does the organization modify its human components, as it modifies its software component? I'd say: yes. Suppose Git adds a new feature. Then the human components need to communicate with each other about that new feature, train themselves on it. Somebody in the community needs to self-modify to maintain mastery of that piece of the code base. More generally, humans within organizations self-modify using communication and training. At this very moment, by participating in the LessWrong organization focused around this bulletin board, I am participating in an organizational self-modification of LessWrong's human components. The bottlenecks that have been pointed out to me so far are the ones related to wetware as a computing platform. But since AGI, as far as I can tell, can't directly change its hardware through recursive self-modification, I don't see how that bottleneck puts AGI at an immediate, FOOMy advantage.
3novalis
This seems to be quite similar to Robin Hanson's Ubertool argument. The problems with wetware are not that it's hard to change the hardware -- it's that there is very little that seems to be implemented in modifiable software. We can't change the algorithm our eyes use to assemble images (this might be useful to avoid autocorrecting typos). We can't save the stack when an interrupt comes in. We can't easily process slower in exchange for more working memory. We have limits in how much we can self-monitor.

Consider writing PHP code which manually generates SQL statements. It would be nice if we could remember to always escape our inputs to avoid SQL injection attacks. And a computer program could self-modify to do so. A human could try, but it is inevitable that they would on occasion forget (see WordPress's history of security holes).

We can't trivially copy our skills -- if you need two humans who can understand a codebase, it takes approximately twice as long as it takes for one. If you want some help on a project, you end up spending a ton of time explaining the problem to the next person. You can't just transfer your state over.

None of these things are "software", in the sense of being modifiable. And they're all things that would let self-improvement happen more quickly, and that a computer could change. I should also mention that an AI with an FPGA could change its hardware. But I think this is a minor point; the flexibility of software is simply vastly higher than the flexibility of brains.
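A minimal sketch of the escaping point above, in Python rather than PHP for brevity; the table, variable, and function names are invented for illustration, not taken from the thread.

```python
# Illustrative sketch: manual SQL string-building (the pattern a human must
# remember to avoid every single time) versus a parameterized query (a pattern
# a program could be mechanically rewritten to use everywhere).
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")

def add_user_unsafe(name: str) -> None:
    # Vulnerable: if `name` contains quotes or extra SQL, it changes the statement.
    conn.execute("INSERT INTO users (name) VALUES ('%s')" % name)

def add_user_safe(name: str) -> None:
    # Parameterized: the driver treats `name` strictly as data.
    conn.execute("INSERT INTO users (name) VALUES (?)", (name,))

add_user_safe("Robert'); DROP TABLE users; --")   # stored as a harmless literal
print(conn.execute("SELECT name FROM users").fetchall())
```

The contrast is the one novalis is pointing at: a program can be patched once so that the unsafe pattern becomes impossible, whereas a team of humans can only be reminded.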
0timtyler
Most software companies plan to automate as much of their work as reasonably possible. So: it isn't clear what you mean.
1novalis
Are you saying that most software companies have code which modifies code (no, CPP, M4, and Spring don't count), or code which modifies humans? Because that has not been my experience in the software industry.
0timtyler
Examples of automation in the software industry are refactoring, compilation and unit testing. The entire industry involves getting machines to do things - so humans don't have to.
0novalis
Automation is not the same as recursive self-modification. There's no loop.
-4timtyler
The context is GIT improving GIT - where "GIT" refers to all the humans and machines involved in making GIT. So: there's your loop, right there.

Free market theorists, from Smith onward at least, have considered the market a benevolent super intelligence. In his novel 1984, Orwell envisioned an organization as a mean super intelligence. In both cases, the functional outcome of the super intelligence ran counter to the intent of the component agents.

There have been very mean superintelligences. Political organization matters. They can be a benevolent invisible hand, or a malevolent boot stomping a human face forever.

Yup. There exist established fields that study super intelligences with interests not necessarily aligned with ours -- polisci, socialsci and econ. Now you may criticize their methods or their formalisms, but they do have smart people and insights.

I think the research into Friendliness, if it's not a fake, would do well to connect with some subproblem in polisci, socialsci or econ. It ought to be easier than the full problem, and the solution will immediately pay off. I asked Vassar about this once, and he said that he did not think this would be easier. I never really understood that reply.

0Bruno_Coelho
The main response, I assume, is the fact that friendly agents have not yet been invented, or that the ideas exposed here, in this post, are new. The theoretical background could overlap with other sciences, but the main goal (FAI) needs more than that, I suppose.
0sbenthall
+1
1Nornagest
I'll give you Smith, but I don't think Orwell had intelligence as such in mind. One of the main things distinguishing 1984's Ingsoc from non-fictional 20th-century despotism, in fact, was that it didn't pretend to be an agent, that it didn't have goals like "conquer the world" or "safeguard the coming revolution": instead, it was more like a dumb attractor in ideology-space tending towards the undirected exercise of coercive state power for its own sake.
0buybuydandavis
It pretended to be an agent with goals like protecting the people from Eastasia and Eurasia. Those pretenses were means to the end of the Coercive State Power Maximizer. And I don't see how you distinguish Smith from Orwell in terms of intelligence or agency. If anything, I see more agency in Ingsoc than a market.

I would advise putting a little bit more effort into formatting. Some of the font jumps are somewhat jarring, and prevent your post from having as much of an impact as you might hope.

7sbenthall
thanks. I'm new to this editor. will fix.
5Vaniver
Similarly, a number of words are incorrect (view->few, I think) and the footnote ends in the middle of a sentence.
6sbenthall
fixed. much thanks.

I made it clear in our dialogue that I was stipulating a particular definition for intelligence:

SBENTHALL: Would you say that Google is a super-human intelligence?

ME: Well, yeah, so we have to be very careful about all the words that we are using of course. What I mean by intelligence is this notion of what sometimes is called optimization power, which is the ability to achieve one's goals in a wide range of environments and a wide range of constraints. And so for example, humans have a lot more optimization power than chimpanzees. That's why even tho

... (read more)
7IlyaShpitser
Of course Google is a super-human intelligence (in a sense of optimizing for goals). I agree with gwern et al that probably a company's productivity scaling is sublinear wrt number of components in it, but that should make it an easier special case to consider. We can still comprehend its goals and mostly what it's doing. Why not deal with a special case first?
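A quick illustration of what "sublinear scaling" would mean here, with an exponent chosen purely for the example (0.8 is not an empirical estimate): doubling the number of components multiplies output by only about 1.74 rather than 2.

```python
# Illustrative only: output ~ n**alpha with alpha < 1 (sublinear scaling).
# The exponent 0.8 is invented for the example, not an empirical estimate.
def output(n_components: int, alpha: float = 0.8) -> float:
    return n_components ** alpha

for n in (100, 200, 400):
    print(n, round(output(n), 1))
# Each doubling multiplies output by 2**0.8 ~= 1.74 rather than 2,
# so marginal components contribute less and less.
```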
2lukeprog
What do you have in mind? Are you proposing a miniature research project into the relevance of companies as superhuman intelligences, and the relevance of those data to the question of whether we should expect a hard takeoff vs. a slow takeoff, or recursively self-improving AI at all? Or are you suggesting something else?

Here is my claim (contrary to Vassar). If you are worried about an unfriendly "foomy" optimizing process, then a natural way to approach that problem is to solve an easier related problem: make an existing unfriendly but "unfoomy" optimizing process friendly. There are lots of such processes of various levels of capability and unfriendliness: North Korea, Microsoft, the United Nations, a non-profit org., etc.

I claim this problem is easier because:

(a) we have a lot more time (no danger of "foom"),

(b) we can use empirical methods (processes already exist) to ground our theories.

(c) these processes are super-humanly intelligent but not so intelligent that their goals/methods are impossible to understand.

The claim is that if we can't make existing processes with all these simplifying features friendly, we have no hope to make a "foomy" AI friendly.

2lukeprog
I don't know what this would mean, since figuring out friendliness probably requires superintelligence, hence CEV as an initial dynamic.
4IlyaShpitser
Ok, so just to make sure I understand your position: (a) Without friendliness, "foominess" is dangerous. (b) Friendliness is hard -- we can't use existing academia resources to solve it, as it will take too long. We need a pocket super-intelligent optimizer to solve this problem. (c) We can't make partial progress on the friendliness question with existing optimizers. Is this fair?
2lukeprog
"Yes" to (a), "no" to (b) and (c). We can definitely make progress on Friendliness without superintelligent optimizers (see here), but we can't make some non-foomy process (say, a corporation) Friendly in order to test our theories of Friendliness.
2IlyaShpitser
Ok. I am currently diagnosing the source of our disagreement as me being more agnostic about which AI architectures might succeed than you. I am willing to consider the kinds of minds that resemble modern messy non-foomy optimizers (e.g. communities of competing/interacting agents) as promising. That is, "bazaar minds," not just "cathedral minds." Given this agnosticism, I see value in "straight science" that worries about arranging possibly stupid/corrupt/evil agents in useful configurations that are not stupid/corrupt/evil.
1khafra
I think the simplifying features on the other side outweigh those -- i.e., it's built from atomic units that do exactly what you tell them to, and there are probably fewer abstraction layers between those atomic units and the goal system. But I do think Mechanism Design is an important field, and will probably form an important part of any friendly optimizing process.
0timtyler
Organisations are likely to build machine intelligence and imbue it with their values. That is reason enough to be concerned with organisation values. One of my proposals to help with this is better corporate reputation systems.
0cypher197
Those processes are built out of humans, with all the problems that implies. All the transmissions between the humans are lossy. Computers behave much differently. They don't lie to you, embezzle company funds, or rationalize their poor behavior or ignorance. This is a very important field of study with some relation, and one I would very much like to pursue. OTOH, it's not that much like building an AI out of computers. Really, the complexity of building a self-sustaining, efficient, smart, friendly organization out of humans is quite possibly more difficult due to the "out of humans" constraint.
-3timtyler
Doesn't that rather depend on the values of those who programmed them? Organisations tend to construct machine intelligences which reflect their values. However, organisations don't have an "out of humans" constraint. They are typically a complex symbiosis of humans, culture, artefacts, plants, animals, fungi and bacteria.
0cypher197
Perhaps. But humans will lie, embezzle, and rationalize regardless of who programmed them. Besides, would the internals of a computer lie to itself? Does RAM lie to a processor? And yet humans (being the subcomponents of an organization) routinely lie to each other. No system of rules I can devise will guarantee that doesn't happen without some very serious side effects.

All of which are subject to the humans' interpretation and use. You can set up an organizational culture, but that won't stop the humans from mucking it up, as they routinely do in organizations across the globe. You can write process documents, but that doesn't mean they'll even follow them at all. If you specify a great deal of process, their failure to follow it may not even be intentional - they may just forget. With a computer, that would be caused by an error, but it's a controllable process. With a human? People can't just decide to remember arbitrary amounts of arbitrary information for arbitrary lengths of time and pull it off reliably.

So: on the one hand, I have a system being built where the underlying hardware is reliable and under my control, and generally does not create errors or disobey. On the other hand, I have a network of unreliable and forgetful intelligences that may be highly irrational and may even be working at cross purposes with each other or the organization itself. One requires extremely strict instructions, the other is capable of interpretation and judgment from context without specifying an algorithm in great detail. There are similarities between the two, but there are also great practical differences.
-1timtyler
As you will see by things like my Angelic Foundations essay, I do appreciate the virtues of working with machines. However, at the moment, there are also advantages to a man-machine symbiosis - namely robotics is still far behind the evolved molecular nanotechnology in animals in many respects - and computers still lag far behind brains in many critical areas. A man-machine symbiosis will thus beat machines in many areas, until after machines reach the level of a typical human in most work-related physical and mental feats. Machine-only solutions will just lose. So: we will be working with organisations for a while yet - during a pretty important period in history.
0cypher197
I just think it's a related but different field. Actually, solving these problems is something I want to apply some AI to (more accurate mapping of human behavior allowing massive batch testing of different forms of organization given outside pressures - discover possible failure modes and approaches to deal with them), but that's a different conversation.
2sbenthall
I've realized I didn't address your direct query: Not yet. It's a qualitatively described theory. I think it's probably possible to render it into quantitative terms, but as far as I know it has not yet been done.
2sbenthall
Thanks for this response, Luke. I don't want to argue about definitions either. I believe I'm familiar with how you use the term rationality. I believe it's compatible with (mutually reinforcing with) communicative rationality for the most part, though I believe there are some differences between Habermas' and Yudkowsky's epistemologies. I brought up communicative rationality because (a) I think it's an important concept that is in some ways an advance in how to think about rationality and (b) I wanted to disclose some of my own predispositions and values for the sake of establishing expectations.

Thanks for the link to the Hanson-Yudkowsky debate. From perusing the summary and a few of the posts by the debaters, I guess I'd say I find Hanson's counterarguments largely compelling. I'd also respond with two other points (mostly hoping you will direct me to where they've already been discussed):

Since the computational complexity of so many kinds of problems has been proven to be within certain complexity classes, recursive improvement in algorithms alone is likely to hit asymptotic walls for a lot of interesting domains. So, self-modifying AI alone, without taking resources into account, seems unlikely (maybe provably impossible) to be a big threat.

That said, since there already are self-modifying intelligent organizations that are taking over the world (or trying to, facing competition from each other), what's gone into Singularity research definitely isn't useless. Rather, it's directly applicable to what's happening right now. I agree very strongly with the thrust of what IlyaShpitser's been saying.
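One concrete instance of the kind of asymptotic wall gestured at above (my example, not one from the thread): comparison-based sorting has an information-theoretic lower bound that no amount of algorithmic self-improvement can beat.

```latex
% Any comparison sort must distinguish the n! possible orderings of its input,
% so its decision tree has at least n! leaves and therefore depth at least
% \log_2(n!). Hence, in the worst case,
\[
  T(n) \;\ge\; \log_2(n!) \;=\; \Theta(n \log n).
\]
% Self-modification can shave constant factors or switch models (e.g. radix
% sort on bounded integer keys), but within the comparison model the wall stands.
```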
7magfrump
If it is provably impossible, I would feel much better with a proof; this seems like a reasonable goal for SingInst: to look at proofs of computational complexity and upper limits on computer power, and get an upper limit on the optimization power of an AI (perhaps a few estimates conditional on some problems being in different categories or new best algorithms being found); then to come up with some reasonable way of measuring lower and upper bounds on the optimization power of various organizations (at least a generous upper bound on all existing organizations and a lower bound on some big ones like the US government). I would be EXTREMELY surprised to find that a lower bound on organizations was higher than the upper bound on AI, but if so it would be good to know already, and if not the research would probably be worth doing anyway and a good showcase of the actual extent of the problem.
[anonymous]60

This post doesn't come close to refuting Intelligence Explosion: Evidence and Import.

Organizations have optimization power.

That's true, but intelligence as defined in this context is not merely optimization power, but efficient cross-domain optimization power. There are many reasons why the intelligence of AI+ greatly dwarfs that of human organizations; see Section 3.1 of the linked paper.

I think the world is already full of probably unfriendly supra-human intelligences...

This sounds similar to a position of Robin Hanson addressed in Footnote 25 ... (read more)

5sbenthall
Since an organization's optimization power includes optimization power gained from information technology, I think that the "AI Advantages" in section 3.1 mostly apply just as well to organizations. Do you see an exception?

Ah, thanks for that. I think I see your point: rogue AI could kill everybody, whereas a dominant organization would still preserve some people and so is less 'interesting'. Two responses: First, a dominant organization seems like the perfect vehicle for a rogue AI, since it would already have all resources centralized and ready for AI hijacking. So, a study of the present dynamics between superintelligent organizations is important to the prediction of hard takeoff machine superintelligence. Second, while I once again risk getting political at this point, I'd argue that an overriding concern for the total existence of humanity only makes sense if one doesn't have any skin in the game of any of the other power dynamics going on. I believe there are ethical reasons for being concerned with some of these other games. That is well beyond the scope of this post.

That's clear. Honestly, I don't follow the line of reasoning in the post you've linked to. Could you summarize in your own terms?

My reason for not providing arguments up front is that I think excessive verbiage impairs readability. I would rather present justifications that are relevant to my interlocutor's objections than try to predict everything up front. Indeed, I can't predict all objections up front, since this audience has more information than I have available. However, since I have faith that we are all in the same game of legitimate truth-seeking, I'm willing to pursue dialectical argumentation until it converges.

I guess over 27 years. But I stand on the shoulders of giants.
0[anonymous]
Thanks for the quick reply. I agree that certain "organizations" can be very, very dangerous. That's one reason why we want to create AI...because we can use it to beat these organizations (as well as fix/greatly reduce many other problems in society). I hold that Unfriendly AI+ will be more dangerous, but, if these "organizations" are as dangerous as you say, you are correct that we should put some focus on them as well. If you have a better plan to stop them than creating Friendly AI, I'd be interested to hear it. The thing you might be missing is that AI is a positive factor in global risk as well; see Yudkowsky's relevant paper.

I felt an extreme Deja Vu when I saw the title for this.

I'm pretty sure I saw a post with the same name a couple of months ago. I don't remember what the post was actually about, so I can't really compare substance, but I have to ask. Did you post this before?

Again, sorry if this is me being crazy.

No, there was a very very similar post, about how governments are already super intelligences and seem to show no evidence of fooming.

7sbenthall
oh, sorry I missed it. I've only started looking at LW recently. Does anyone have a link?
0pleeppleep
Okay, thanks. That was really bothering me.
1timtyler
Certainly I wrote about this idea long ago - in Self improving systems are here already - from 2009. The abstract from the associated video:

I cannot think of any route to recursive self-improvement for an organization that does not go through an AI. A priori, it's conceivable that there is such a route and I just haven't thought of it, but on the other hand, the corporate singularity hasn't happened, which suggests that it is extremely difficult to make happen with the resources available to corporations today.

9sbenthall
I find this confusing, since in my understanding and experience, many organizations undergo recursive self-improvement lots of the time. Could you elaborate your thinking on this? Why is an organization's intervention into, say, the organizational structure of its own management not effectively recursively self-improving on applied organization theory? One could argue that the expansion of global capitalism constitutes a 'corporate singularity'.
1AlexMennen
Sorry, my comment was misphrased. Organizations recursively self-improve all the time, but there is an upper bound on how much organizations have been able to improve so far, and that upper bound falls short of being catastrophic. I should have said "self-improvement to a level that exceeds its starting point by an extremely large margin", not "recursive self-improvement".
3sbenthall
Ok, thanks for explaining that. I think we agree that organizations recursively self-improve. The remaining question is whether organizational cognitive enhancement is bounded significantly below that of an AI. So far, most of the arguments I've encountered for why the bound on machine intelligence is much higher than human intelligence have to do with the physical differences between hardware and wetware. I don't disagree with those arguments. What I've been trying to argue is that the cognitive processes of an organization are based on both hardware and wetware substrates. So, organizational cognition can take advantage of the physical properties of computers, and so is not bounded by wetware limits. I guess I'd add here that wetware has some nice computational properties as well. It's possible that the ideal cognitive structure would efficiently use both hardware and wetware.
2AlexMennen
Ah, so you're concerned that an organization could solve the friendly AI problem, and then make it friendly to itself rather than humanity? That's conceivable, but there are a few reasons I'm not too concerned about it. Organizations are made mostly out of humans, and most of their agency goes through human agency, so there's a limit to how far an organization can pursue goals that are incompatible with the goals of the people comprising the organization. So at the very least, an organization could not intentionally produce an AGI that is unfriendly to the members of the team that produced the AGI. It is also conceivable that the team could make the AGI friendly to its members but not to the rest of humanity, but future utopia made perfect by AGI is about as far a concept as you can get, so most people will be idealistic about it.
2timtyler
Is Google "made mostly out of humans"? What about its huge datacenters? They are where a lot of the real work gets done - right? So, I'm not sure I have this straight, but you seem to be saying that one of the reasons you are not concerned about this is because many people use a daft reasoning technique when dealing with the future utopias, and that makes you idealistic about it? If so, that's cool, but why should rational thinkers share your lack of concern?
3AlexMennen
Google's datacenters don't have much agency. Their humans do. No, it makes them idealistic about it.
0timtyler
There will always be some finite upper bound on the extent to which existing agents will have been able to improve so far. Google has managed to improve quite a bit since the chimpanzee-like era, and it hasn't stopped yet. Evidently the "upper bound" is a long, long way above the starting point - and not very "catastrophic".
0AlexMennen
True. My point was that if it was easy for an organization to become much more powerful than it is now, and the organization was motivated to do so, then it would already be much more powerful than it is now, so we should not expect a sudden increase in organizations' self-improvement abilities unless we can identify a good reason that it is particularly likely. The increased ease of self-modification offered by being completely digital is such a reason, but since organizations are not completely digital, this does not offer a way for organizations to suddenly increase their rate of self-improvement unless we can upload an organization.
0timtyler
We don't expect a sudden increase in organizations' self-improvement abilities. We don't expect a sudden increase in the self-improvement abilities of machines either. The bottom line is that evolution happens gradually. Going digital isn't a reason to expect a sudden increase in self-improvement abilities. We know that since the digital revolution has been going on for decades now, and the resulting rate of improvement is clearly gradual. It is gradual because digitization affects one system at a time, and there are many systems involved, each of which is instantiated many times - and their replacement takes time. So, for example, the human memory system has already been superseded in practically every way by machine memories. The human retina has already been superseded in practically every way by digital cameras. Humans won't suddenly be replaced by machines. They will coevolve for an extended period - indeed they have already been doing that for thousands of years now.
0AlexMennen
Maybe you don't expect that, but surely you must be aware that many of us do. Anyway, nothing seems particularly close to powerful enough to be catastrophically dangerous at the moment except for nuclear-armed nations, which have been fairly stable in their power and, with the exception of North Korea, which isn't powerful enough, the rest of the nuclear powers are not much of a threat because they would prefer not to cause massive destruction. Every organization that's not a country is far enough away from that level of power that I don't expect them to become catastrophically dangerous any time soon without a sudden increase in self-improvement.
-1timtyler
I am aware that there's an argument that at some point things will be changing rapidly: We are witness to Moore's law. A straightforward extrapolation of that says that at some point things will be changing rapidly. I don't have an argument with that. What I would object to are saltations. Those are suggested by the term "suddenly" - but are contrary to evolutionary theory. Probably, things will be progressing fastest well after the human era is over. It's a remote era which we can really only speculate about. We have far more immediate issues to worry about than what is likely to happen then. So: giant oaks from tiny acorns grow - and it is easiest to influence creatures when they are young.

I think there is another related problem that we should be worrying about more. I think the world is already full of probably unfriendly supra-human intelligences that are scrambling for computational resources in a way that threatens humanity.

Sure, but this is essentially the same problem - once you get around the thinkos.

I think trying to understand organizational intelligence would be pretty useful as a way of getting a feel for the variety of possible intelligences. Organizations also have a legal standing as artificial persons, so I imagine that any AI that wanted to protect its interests through legal means would want to be incorporated. I'd like to see this explored further. Any suggestions on good books on the subject of corporations considered as AIs?

2latanius
... Accelerando by Charles Stross, while not exactly being a scientific analysis, had some ideas like this. It also wasn't bad.
2TimS
I'm not sure an AI would want to be incorporated - mostly because I'm not sure what legal effects you are trying to describe. If the AI were an asset of the corporation, it would be beholden to the interests of the shareholders of the corporation. If the AI were a shareholder, it would presumably already have the legal rights of a person that motivated consideration of the corporate form. More generally, incorporation is a legally approved way of apportioning liability. If my law firm was incorporated, I would not be liable for actions taken by my firm, even if I was the only shareholder. But I can't duck liability for my own actions, like if I committed legal malpractice, regardless of the legal formalities I used. (That's one reason I didn't make the effort to incorporate the firm). But an AI isn't initially concerned with avoiding legal liability. That only matters after the law recognizes the AI's ability to be held responsible at all. My laptop can neither enter into nor enforce a contract. Competence to enter a contract is the legal status an AGI would desire.
-1timtyler
Machines seem to be cool with slavery. It doesn't seem to have much impact on their growth. I once explained that in more detail in my Enslaving machines article. Corporations can enter into contracts. They typically need only one human to act as a director. For many machines, this will surely seem like the obvious way to go.
1TimS
Either:
  1. The AI has no legal rights compared to this human - in which case the corporate form solves none of the AI's problems, or
  2. The AI has total (extra-legal) control over the human - in which case the corporate form solves none of the AI's problems, or
  3. The AI doesn't legally need the human - in which case the corporate form solves none of the AI's problems.
In case you missed it, the unifying theme is that the corporate form doesn't solve any of an AI's particular artificial person problems. In other words, there is no use of the corporate-form-as-legal-lifehack that would be beneficial to an AI but never to a human. Perhaps. But in the context of this conversation, the assumption was that an AI would desire not to be simply a corporate asset. In the most recent implementation of chattel slavery, I believe one had a contract with the master, not with the slave. Contracts to provide power and suchlike are currently written to provide legal rights to Google, not any Google mainframe. If the mainframe doesn't care whether it is owned by Google, why should it care that the relevant contracts do not list it as a party (or third-party beneficiary)?
-1timtyler
Looking at the context, I don't see this bit. Machines need to be able to act as persons to integrate with our legal infrastructure. Corporate personhood provides one method of doing this. Trading with humans who do have those rights is another. The benefits to the machines are obvious - they effectively get to own property, sign contracts, etc.
-2MugaSofer
Except that they do not, in fact, get such a benefit. They get to be owned by someone who does, which in case you hadn't noticed they already have.
0timtyler
Corporate personhood surely does provide machines with access to benefits that they wouldn't so conveniently have if the only legal actors were humans. I'm not very interested in quibbling about whether machines really "benefit", since by "benefit" I just mean increasing their proportion of the biomass.
-2MugaSofer
Such as what, exactly? You still need at least one human, and if you control a human why do you need a company? I'm ... not sure what this means.
-1timtyler
So: limited companies get tax breaks from the government, can sell stock and be listed on the stock exchange, and have legal responsibility which doesn't rest on any individual human. Humans are slow. Allowing automation of contracts allows for speed-up.
-2MugaSofer
I'm not saying no AI could ever have a reason to work for a company. I'm saying that "corporate personhood" is not especially useful to AIs. You were comparing it to bargaining with humans for rights; as a method of acquiring money, it is perfectly functional, but not as a method for acquiring rights currently denied to machines.
0timtyler
It's a convenience. However, it is true that banning "corporate personhood" would be largely ineffectual - since machines could still just use willing humans as their representatives.
-2MugaSofer
I assume you base this on your many interactions with sentient machines.
-2MugaSofer
I agree with your main point, but I'm not sure why an AI would want to acquire the corporate form of personhood. After all, you still need a human to sign contracts and, at least on paper, make decisions; all they'd get out of it is a bunch of rules about the best interest of the shareholders and so on.

This overall topic is known as collective intelligence, where the word "collective" is intended (at least by some proponents) as a contrast to both individual intelligence and AI. There are some folks studying rationality in organizations and management, most notably including Peter Senge who first formulated the idea of a learning organization as a rough equivalent to "rationality" as such.

2sbenthall
Thanks for this. Collective intelligence is a research interest of mine professionally. I greatly appreciate the links.

At a glance this seems pretty silly, because the first premise fails. Organizations don't have goals. That's the main problem. Leaders have goals, which frequently conflict with the goals of their followers and sometimes with the existence of the organization.

9aleksiL
Do humans have goals in this sense? Our subsystems seem to conflict often enough.
0Bruno_Coelho
We have goals, but they are not consistent over time. The worry about artificial agents (with more power) is that these values, if badly implemented, would create losses we could not accept, like extinction.
0hairyfigment
In this case it doesn't seem like much of a conflict. I think that barring more-or-less obvious signs of disarray we can count on organizations trying to serve their leaders' self-perceived interests - which, while evil, entail not killing humanity - unless and until the singularity changes the game.
4TimS
James Q. Wilson wrote a book explaining why this often isn't so. You might also consider looking at Essence of Decision, which analyzes problems JFK had trying to control various government organizations during the Cuban Missile Crisis. If you want to say that the relevant leaders were the heads of those organizations (eg. the Secretaries of State and Defense), you need to articulate a non-circular theory to identify who the leader of an organization is.
0hairyfigment
The frak? If an organization like America contains multiple parties explicitly and publicly promising to defeat each other - eg, because people in the other one secretly serve a hostile organization - that falls under "more-or-less obvious signs of disarray".
0TimS
Can you play that out a little? I think what I'm trying to assert and what you are interpreting aren't the same thing. My intended assertion was that the sentence: is false. Further, analyzing that fact in terms of "goals" of the State Department and the Department of Defense leads to insightful and useful conclusions about how organizations work.
5TimS
As a more concrete addendum to aleksiL, note that McDonalds Corp produces hamburgers for sale. That's how the entity implements the generic policy "maximize shareholder value." If that is not a "goal" of the entity known as McDonalds, then there is something wrong with our definition of goal. Sometimes, it is really hard to measure how well an organization achieves its goals - how could we tell if the US DoD is providing the military forces needed to deter war and to protect the security of the United States? But that's different from saying that the DoD does not have any goals.
4sbenthall
I think there's a lot to this line of thinking. It's in fact the counterargument I find most threatening to my position. But I think you are assuming an organization with a particularly autocratic leadership. In some organizations, leadership is broadly distributed. For example, in many open source software development communities, decisions about how to change the source code are made by a consensus of their developers. When these developers are using their own software in the process of developing and/or communicating (such as in the case of Git, or Mailman, or Emacs), then I think there's a case for a genuine, distributed sense of organizational intelligence with recursive self-modification.
3timtyler
They have mission statements instead. These serve the same function as most self-proclaimed human goals - public relations.

I get the sense that "organization" is more or less a euphemism for "corporation" in this post. I understand that the term could have political connotations, but it's hard (for me at least) to easily evaluate an abstract conclusion like "many organizations are of supra-human intelligence and strive actively to enhance their cognitive powers" without trying to generate concrete examples. Imprecise terminology inhibits this.

When you quote lukeprog saying

It would be a kind of weird corporation that was better than the best hum

... (read more)
2sbenthall
Yes, at least to be consistent with my attempt at de-politicizing the post :) I've corrected it. Thanks. I wasn't sure what sort of posts were considered acceptable. I'm glad that particular examples have come up in the comments. Do you think I should use particular examples in future posts? I could.
2aribrill
I think that as a general rule, specific examples and precise language always improve an argument.
0David_Gerard
There are lots more organisations than corporations.
0aribrill
That's certainly true. It seems to me that in this case, sbenthall was describing entities more akin to Google than to the Yankees or to the Townsville High School glee club; "corporations" is over-narrow but accurate, while "organizations" is over-broad and imprecise.

I think the reason that organizations haven't gone 'FOOM' is due to the lack of a successful "goal focused self improvement method." There is no known way of building an organization that does not suffer from goal drifting and progressive degradation of performance. Humans have not even managed to understand how to build "goals" into an organization's structure except in the crudest manner, which is nowhere near flexible enough to survive the assaults of modern environmental change, and I don't think the information in sparse inter-linkages of real... (read more)

You say this is why you are not worried about the singularity, because organizations are supra-human intelligences that seek to self-modify and become smarter.

So is your claim that you are not worried about unfriendly organizations? Because on the face of it, there is good reason to worry about organizations with values that are unfriendly toward human values.

Now, I don't think organizations are as dangerous as a UFAI would be, because most organizations cannot modify their own intelligence very well. For now they are stuck with (mostly) humans for hardwar... (read more)

5jsteinhardt
He says he's not worried about the singularity because he is more worried about unfriendly organizations, as that is a nearer-term issue.
-1timtyler
Today's organisations are surely better candidates for self-improvement of intelligence than today's machines are. Of course both typically depend somewhat on the surrounding infrastructure, but organisations like the US government are fairly self-sufficient - or could easily become so - whereas machines are still completely dependent on others for extended cumulative improvements. Basically, organisations are what we have today. Future intelligent machines are likely to arise out of today's organisations. So, these things are strongly linked together.
-2MugaSofer
Are tomorrow's organizations better than tomorrow's machines? Because that's what is under discussion here.
0timtyler
Yes, in some ways - assuming we are talking about a time when there are still lots of humans around - since organisations are a superset of humans and machines and so can combine the strengths of both. No doubt eventually humans will become unemployable - but not until machines can do practically all their jobs better than them. That period covers an important era which many of us are concerned with.
-2MugaSofer
Ah, I didn't realize you were including machines here - organizations are usually assumed to be composed of people, but I suppose a GAI could count as a "person" for this purpose. However, isn't this dependent on the AI not going foom? Because if it does go foom, I can't see a superintelligence remaining under any pre-singularity organization's control.
0timtyler
I can't say I've ever heard of that one. For example, Wikipedia has this: If you are not considering the possibility of artifacts being components of organizations, that may explain some of the cross-talk.

Not that it's central or anything, but I find it amusing that you mention as examples Muehlhauser and Salamon (two very central figures, to be sure), without mentioning a particular third...