Most long term users on Less Wrong understand the concept of optimization power and the idea that a system can be called intelligent if it can restrict the future in significant ways. I believe that in today's world, only institutions come anywhere close to superintelligence in any significant way.

I believe it is important for us to have at least some outside view of which institutions/systems are powerful in today's world, so that we can at least see some outlines of how increasing optimization power will end up affecting ordinary people.

So, my question is: what are the present institutions or systems that you would classify as having the most optimization power? Please present your reasoning if you are mentioning a little-known institution. My own guesses follow after the break.

Blogospheroid's guess list


  1. NSA / US Intelligence and defence community
  2. Harvard University
  3. The Chinese Politburo
  4. Goldman Sachs
  5. The Kremlin / Russian intelligence and defence community
  6. Google Inc 
  7. Oracle Inc
  8. Microsoft Inc
  9. Murdoch's media empire

Institutions I found significant outside this list are the Singaporean and Abu Dhabi city governments: very rational and increasing their significance in the world, but highly restricted from fooming by their constraints.



Applying this concept to a group of people makes sense only if the group's boundaries are clearly delineated and if its members act in a coherent way towards furthering well-defined goals. In my opinion, several items on your list don't satisfy these criteria, namely (1), (2), (5), and perhaps even (9).

I propose the following test to determine whether a group is well-delineated and coherent enough to speak of it as having optimization power: apply to it the iron law of oligarchy, and try to identify the small, closely-knit subgroup that exerts effective power over the rest and directs its workings. If no such subgroup can be found, then it follows from the iron law that the group cannot work coherently towards any well-defined goals. (The real oligarchy, of course, may coincide with the system of formal titles within the organization fully, partly, or not at all.)

For example, in business corporations, the oligarchy is clearly identifiable because it is identical with the formal management structure. Harvard University, on the other hand, appears to have no real oligarchy of its own, which suggests that its subdivisions, and perhaps even individuals within it, work independently or within other oligarchic institutions with overlapping membership. Thus, an accurate analysis of "optimization processes" that take place should focus on these subdivisions, individuals, and other institutions, and what apparent coherence exists at the level of Harvard (or academia in general) requires a non-intentionalistic explanation.

I praise your thinking on the subject. Do you have any guesses about the institutions/structures in today's world that show both power and coherence?

On further reflection, my above comment wasn't very well worded, and in fact, it seems self-contradictory. I first said that without oligarchy, humans "cannot work coherently towards any well-defined goals," but then later I mentioned that even without oligarchy, we can sometimes observe "apparent coherence [that] requires a non-intentionalistic explanation." So let me explain what exactly I mean by that.

The key problem here is differentiating between two distinct ways in which large groups of people can be directed to work towards a goal. The first one is when there is an actual organization (i.e. oligarchy) directing them in a planned manner; a typical example is a business corporation. The second one is when masses of people choose to adopt some view and act on it in an independent and decentralized way, without any organization involved.

The latter typically happens when some view becomes a fashionable status marker. For example, in public discourse, expressing unfashionable contrarian views may cause lots of people to attack you in unison without much care for reason and logic, as if there were an organized propaganda campaign against you -- even though they're all acting independently, each guided (consciously or not) by the desire to signal adherence to a high-status view. (The process by which particular high-status views achieve public prominence may or may not involve organized action, though.) Of course, such spontaneous mass action directed and fueled by status competition is a much more blunt instrument than organized action. Yet, it can nevertheless be tremendously powerful, because it will influence the goals that individuals and organizations are working towards.

One of the greatest pitfalls in any attempt to make sense of what's happening in the world is to mistake spontaneous action for organized action, and end up with conspiracy theories, or to misidentify the oligarchy that is actually relevant for the events and actions under consideration. Another frequent mistake is to confuse language with reality when discussing groups and institutions, and assume that just because a group or institution is known under a strongly resonating name, it must be an organization acting in a coherent way. An extremely frequent manifestation of this error is when the actions of a government are discussed in terms of some supposedly well-defined, unique, and coherent interests and goals of the whole government, or even the whole country. (A corollary of the iron law is that this makes sense only for countries run by strong and monolithic oligarchies or autocracies -- therefore, in my view, your inclusion of the Chinese Politburo on the list makes sense, but not your inclusion of the U.S. intelligence and defense community.) Another great pitfall is the confusion between clear homo-economicus-style self interest and behavior motivated by less obvious signaling considerations.

Therefore, I think that the discussion you'd like to open should start by identifying two kinds of phenomena:

  • Large and powerful organizations run by clearly identifiable oligarchies.

  • Trends in fashionable opinion capable of swaying masses of people, especially influential and powerful people, in certain ideological directions.

The most difficult part of the question seems to be: what non-obvious powerful oligarchies are there? In particular, how can we correctly identify the oligarchies operating within the U.S. government and other large non-monolithic governments? The next hard question is how to explain their motivations in order to predict what they're likely to be up to in the future, without falling for the fallacy of crude "cui bono?" thinking.

Now, before I give any concrete conclusions and guesses of mine, let's see how you would approach these questions.

I'd look for oligarchies in parts of government that fit the following pattern.

  • Long terms of office - it is hard to gain control if you are being swapped out every time there is a new president.

  • Control of subordinates and control of likely replacements/peers.

Take, for example, the Army. Generals last longer than a presidency and get to control who rises through the ranks; and because there is no outside source of knowledge of the army or military affairs, the possible replacements are only those they have groomed.

So I would look for that type of pattern. A department I would expect not to form an oligarchy is the Centers for Disease Control, where knowledge of infectious diseases and how they can be controlled can be acquired in civilian life, and the Director is replaced frequently.

What do they want? I'm not sure I can do more than suggest that they will want to increase the power and influence of their group. Being a General in the Army is not so impressive if there have been massive cuts, even if your own job is safe.

I wonder how much optimization power "Anonymous" wields? ;)

Most long term [sic] users on Less Wrong understand the concept of optimization power and how a system can be called intelligent if it can restrict the future in significant ways.

Probably harder than you expect -- try to define that (more) formally. There is only one actual future, and I know of no way of defining optimization power if you only say that a system is "intelligent" without assuming enough about its goal (defining when a system is powerful would be simpler - by the extent to which it wreaks havoc, but it doesn't follow that in so doing it furthers its goal, unless the havoc was carefully designed to hit the exact target it wants to hit; and to get the havoc rolling, it only needs to be intelligent enough not to self-destruct quickly).

Perhaps reasoning about something like 'causal measure' would work, where you can just talk about 'havoc' as 'large effect on agents at whatever seems the most germane level of organization'. 'Intuitively intelligent' but not goal-optimizing things will at least have a lot of causal significance, which I think is sufficient for this exercise. (Which is moving towards less formality, not more, so I'm not disputing your comment in any way.)
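For concreteness, the standard LessWrong measure (from Yudkowsky's "Measuring Optimization Power") makes the point above explicit: optimization power in bits is only defined relative to a preference ordering over outcomes. A minimal sketch, with invented numbers purely for illustration:

```python
import math

def optimization_power_bits(outcomes, achieved, utility):
    """Optimization power in bits: -log2 of the fraction of possible
    outcomes ranked at least as high as the achieved outcome.
    Note the measure is undefined without a utility/preference
    ordering -- which is exactly the objection raised above."""
    at_least_as_good = sum(1 for o in outcomes if utility(o) >= utility(achieved))
    return -math.log2(at_least_as_good / len(outcomes))

# Hypothetical example: 1024 equally likely outcomes, utility = the
# outcome's index. Hitting the single best outcome exerts 10 bits.
outcomes = list(range(1024))
print(optimization_power_bits(outcomes, 1023, lambda o: o))  # 10.0
```

The same achieved outcome yields a different number of bits under a different utility function, so "wreaking havoc" alone doesn't pin the measure down.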

This list seems to suffer greatly from availability bias. Things to consider adding:

  • Akamai (a.k.a. the most important internet company you've never heard of)

  • Other hedge funds besides Goldman Sachs. (Goldman Sachs is the most famous, but that doesn't mean it's the most rational.)

  • Other governments besides the ones you mentioned. (Practically every government has at least one intelligence agency; how much you've heard about an agency isn't a reliable predictor of how competent it is.)

Availability Bias.

Guilty, but that is why I sent this question out to a wider community: so that as a group, we can get better answers. I believe it is an important question, and that is why I was wondering about the downvote. My guesses are not the main part of this post; the question is.

Kindly list some other hedge funds which might have some high optimization power.

Kindly list some other hedge funds which might have some high optimization power.

D. E. Shaw is one that I can think of off the top of my head, but there are many others.

Also, what do you mean by optimization power? How likely they are to build a fooming AGI? How much power they currently or potentially have? Something else?

Digesting the points from Vladimir's comment, I would say that optimization power would have the following as criteria.

  • Present power and potential power
  • Coherence in goals
  • Sustainability - whether they look like a one-trick pony or whether there is something genuinely fascinating going on inside.
  • Creativity - a track record of surprising solutions/impressive outcomes. E.g., for the Chinese Politburo, I would count the trains to Tibet as a very noteworthy outcome.

I'm surprised you didn't list Apple or Walmart. Also, I see the USG as becoming less "rational" (perhaps this is inevitable as a nation-state becomes more democratic) and the Chinese Politburo becoming more so.

Note: I am employed by the USG.

Walmart - I agree, that's a big miss. This post was jotted down late at night.

Apple, I genuinely don't think is restricting the future in significant ways. They have good products, but the majority of the future's consumers are coming from value-for-money markets like China, India and Africa (even in the Middle East, apart from the sheikhs, I don't see the average person having too much money). Apple's strategy is not going to fly there.

There's a difference between optimisation power and intelligence: namely, intelligence is optimisation power divided by the amount of resources used. So some organisations may have become powerful (maybe just because they were in the right place at the right time) but are hardly "superintelligent".
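That distinction can be made concrete with a toy calculation (all numbers invented for illustration): two organisations may restrict the future by the same number of bits, yet differ enormously in the "intelligence" ratio of power to resources spent.

```python
import math

def bits_restricted(fraction_of_futures):
    # An agent that steers the world into this fraction of possible
    # futures exerts -log2(fraction) bits of optimization power.
    return -math.log2(fraction_of_futures)

power_a = bits_restricted(1 / 1024)    # 10 bits of optimization
power_b = bits_restricted(1 / 1024)    # the same 10 bits
resources_a, resources_b = 100.0, 1.0  # arbitrary resource units

# "Intelligence" as optimization power per unit of resources:
print(power_a / resources_a)  # 0.1  -- powerful, but not intelligent
print(power_b / resources_b)  # 10.0 -- equal power, used efficiently
```

On this toy view, an organisation that was simply in the right place at the right time scores high on power but low on the ratio.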

I feel I'm missing something here. Why isn't, e.g., the US Executive branch at least as powerful as the intelligence community? Why are the US and China mentioned (large states), and Singapore suggested (small state), but no others? Why are the three biggest (most profitable) software & services corporations listed, but the biggest/richest corporations in other sectors aren't?