nickernst comments on The Irrationality Game - Less Wrong

38 Post author: Will_Newsome 03 October 2010 02:43AM


Comment author: mattnewport 03 October 2010 08:21:27PM 42 points [-]
  • A Singleton AI is not a stable equilibrium and therefore it is highly unlikely that a Singleton AI will dominate our future light cone (90%).

  • Superhuman intelligence will not give an AI an insurmountable advantage over collective humanity (75%).

  • Intelligent entities with values radically different from humans' will be much more likely to engage in trade and mutual compromise than to engage in violence and aggression directed at humans (60%).

Comment author: [deleted] 03 October 2010 09:29:45PM 0 points [-]

I'm almost certainly missing some essential literature, but what does it mean for a mind to be a stable equilibrium?

Comment author: mattnewport 03 October 2010 09:43:56PM *  6 points [-]

Stable equilibrium here does not refer to a property of a mind. It refers to a state of the universe. I've elaborated on this view a little here before but I can't track the comment down at the moment.

Essentially my reasoning is that in order to dominate the physical universe an AI will need to deal with fundamental physical restrictions such as the speed of light. This means it will have spatially distributed sub-agents pursuing sub-goals intended to further its own goals. In some cases these sub-goals may involve conflict with other agents (this would be particularly true during the initial effort to become a singleton).

Maintaining strict control over sub-agents imposes restrictions on the design and capabilities of sub-agents which means it is likely that they will be less effective at achieving their sub-goals than sub-agents without such design restrictions. Sub-agents with significant autonomy may pursue actions that conflict with the higher level goals of the singleton.

Human (and biological) history is full of examples of this essential conflict. In military scenarios for example there is a tradeoff between tight centralized control and combat effectiveness - units that have a degree of authority to take decisions in the field without the delays or overhead imposed by communication times are generally more effective than those with very limited freedom to act without direct orders.

Essentially I don't think a singleton AI can get away from the principal-agent problem. Variations on this essential conflict exist throughout the human and natural worlds and appear to me to be fundamental consequences of the nature of our universe.

Comment author: orthonormal 04 October 2010 02:47:39AM 2 points [-]

Ant colonies don't generally exhibit the principal-agent problem. I'd say with high certainty that the vast majority of our trouble with it is due to having the selfishness of an individual replicator hammered into each of us by our evolution.

Comment author: Eugine_Nier 04 October 2010 03:02:33AM 3 points [-]

I'm not a biologist, but given that animal bodies exhibit principal-agent problems, e.g., auto-immune diseases and cancers, I suspect ant colonies (and large AIs) would also have these problems.

Comment author: orthonormal 04 October 2010 03:24:31AM 5 points [-]

Cancer is a case where an engineered genome could improve over an evolved one. We've managed to write software (for the most vital systems) that can copy data without error, with such high probability that we expect never to see that part malfunction.

One reason that evolution hasn't constructed sufficiently good error correction is that the most obvious way to do this makes the genome totally incapable of new mutations, which works great until the niche changes.
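Read literally, "copy without error" comes down to detection rather than prevention: verify each copy against a digest of the original and retry on mismatch, so an undetected corruption would require a hash collision. A toy sketch of that idea (the noisy channel, function names, and parameters are all hypothetical, not anything from the thread):

```python
import hashlib
import random

def noisy_copy(data: bytes, flip_prob: float = 0.01) -> bytes:
    # Stand-in for an unreliable copying medium: each byte may have
    # one random bit flipped with probability flip_prob.
    out = bytearray(data)
    for i in range(len(out)):
        if random.random() < flip_prob:
            out[i] ^= 1 << random.randrange(8)
    return bytes(out)

def verified_copy(data: bytes, flip_prob: float = 0.01) -> bytes:
    # Retry until the copy's SHA-256 digest matches the original's;
    # a returned copy is exact unless SHA-256 itself collides.
    want = hashlib.sha256(data).digest()
    while True:
        copy = noisy_copy(data, flip_prob)
        if hashlib.sha256(copy).digest() == want:
            return copy
```

For a 64-byte payload at flip_prob=0.01, roughly half of all attempts come through clean, so the loop terminates after a couple of tries; the effective error rate of the overall copy is driven to (practically) zero even though the underlying medium stays noisy.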

Comment author: Eugine_Nier 04 October 2010 03:28:54AM 2 points [-]

However, an AI-subagent would need to be able to adjust itself to unexpected conditions, and thus can't simply rely on digital copying to prevent malfunctions.

Comment author: orthonormal 04 October 2010 03:41:04AM *  1 point [-]

So you agree that it's possible in principle for a singleton AI to remain a singleton (provided it starts out alone in the cosmos), but you believe it would sacrifice significant adaptability and efficiency by doing so. Perhaps; I don't know either way.

But the AI might make that sacrifice if it concludes that (eventually) losing singleton status would cost its values far more than the sacrifice is worth (e.g. if losing singleton status consigns the universe to a Hansonian hardscrapple race to burn the cosmic commons rather than a continued time of plenty).

Comment author: Eugine_Nier 04 October 2010 04:54:10AM 1 point [-]

I believe it would at the very least have to sacrifice all adaptability by doing so, as in only sending out nodes with all instructions in ROM and instructions to periodically reset all non-ROM memory and self-destruct if they notice any failure of their triple-redundant ROM. As well as an extremely strong directive against anything that would let nodes store long-term state.
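The locked-down node being described can be sketched in a few lines; this is a toy model of the scheme (triple-redundant ROM with majority voting, periodic wipe of mutable state), with all names and structure illustrative rather than taken from the thread:

```python
from collections import Counter

class Node:
    """Toy sub-agent: instructions live only in three ROM copies,
    and the sole mutable state is wiped on a schedule."""

    def __init__(self, instructions: bytes):
        # Triple-redundant ROM: three independent copies.
        self.rom = [instructions, instructions, instructions]
        self.scratch = {}  # the only non-ROM (mutable) state

    def read_rom(self) -> bytes:
        # Majority vote masks a single corrupted copy; if no value
        # wins at least 2 of 3 votes, the node self-destructs
        # (modelled here as raising).
        value, count = Counter(self.rom).most_common(1)[0]
        if count < 2:
            raise RuntimeError("ROM failure detected: self-destruct")
        return value

    def periodic_reset(self) -> None:
        # Discard all non-ROM memory so no long-term state can
        # accumulate and drift away from the singleton's goals.
        self.scratch.clear()
```

A single corrupted copy is outvoted and the node keeps functioning; two disagreeing corruptions trigger the self-destruct. The cost is exactly the one at issue in this exchange: a node built this way can never learn or retain anything.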

Comment author: orthonormal 04 October 2010 07:38:27PM 3 points [-]

Remember, you're the one trying to prove impossibility of a task here. Your inability to imagine a solution to the problem is only very weak evidence.

Comment author: mattnewport 04 October 2010 04:32:14AM *  0 points [-]

I don't know whether ant colonies exhibit principal-agent problems (though I'd expect that they do to some degree), but I know there is evidence of nepotism in queen rearing in bee colonies where individuals are not all genetically identical: workers favour the most closely related larvae when selecting which larvae to feed royal jelly to create a queen.

The fact that ants from different colonies commonly exhibit aggression towards each other suggests limits to how far such high levels of group cohesion can scale. Though supercolonies do appear to exist, they have not come to total dominance.

The largest and most complex examples of group coordination we know of are large human organizations and these show much greater levels of internal goal conflicts than much simpler and more spatially concentrated insect colonies.

Comment author: orthonormal 04 October 2010 07:34:31PM 0 points [-]

I'm analogizing a singleton to a single ant colony, not to a supercolony.