orthonormal comments on The Irrationality Game - Less Wrong

38 Post author: Will_Newsome 03 October 2010 02:43AM




Comment author: orthonormal 04 October 2010 02:47:39AM 2 points [-]

Ant colonies don't generally exhibit the principal-agent problem. I'd say with high certainty that the vast majority of our trouble with it is due to having the selfishness of an individual replicator hammered into each of us by our evolution.

Comment author: Eugine_Nier 04 October 2010 03:02:33AM 3 points [-]

I'm not a biologist, but given that animal bodies exhibit principal-agent problems, e.g., auto-immune diseases and cancers, I suspect ant colonies (and large AIs) would also have these problems.

Comment author: orthonormal 04 October 2010 03:24:31AM 5 points [-]

Cancer is a case where an engineered genome could improve over an evolved one. We've managed to write software (for the most vital systems) that can copy without error, with such high probability that we expect never to see that part malfunction.

One reason that evolution hasn't constructed sufficiently good error correction is that the most obvious way to do this makes the genome totally incapable of new mutations, which works great until the niche changes.
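[Editor's sketch: the "copy without error, with such high probability" claim above is the standard digest-verified copy pattern. The `verified_copy` helper below is hypothetical, not from the thread; it models the transfer as an in-memory copy.]

```python
import hashlib

def verified_copy(data: bytes) -> bytes:
    """Copy a payload and check the copy against a cryptographic digest.

    An undetected corruption would require a SHA-256 collision, which is
    vastly less likely than the hardware failure it guards against.
    """
    digest = hashlib.sha256(data).hexdigest()
    copy = bytes(data)  # stand-in for the actual storage or transfer step
    if hashlib.sha256(copy).hexdigest() != digest:
        raise IOError("copy corrupted in transit")
    return copy
```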

Comment author: Eugine_Nier 04 October 2010 03:28:54AM 2 points [-]

However, an AI-subagent would need to be able to adjust itself to unexpected conditions, and thus can't simply rely on digital copying to prevent malfunctions.

Comment author: orthonormal 04 October 2010 03:41:04AM *  1 point [-]

So you agree that it's possible in principle for a singleton AI to remain a singleton (provided it starts out alone in the cosmos), but you believe it would sacrifice significant adaptability and efficiency by doing so. Perhaps; I don't know either way.

But the AI might make that sacrifice if it concludes that (eventually) losing singleton status would cost its values far more than the sacrifice is worth (e.g. if losing singleton status consigns the universe to a Hansonian hardscrapple race to burn the cosmic commons (PDF) rather than a continued time of plenty).

Comment author: Eugine_Nier 04 October 2010 04:54:10AM 1 point [-]

I believe it would at the very least have to sacrifice all adaptability by doing so: only sending out nodes with all instructions in ROM, with orders to periodically reset all non-ROM memory and self-destruct on noticing any failure of their triple-redundant ROM, as well as an extremely strong directive against anything that would let nodes store long-term state.
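[Editor's sketch: the triple-redundant ROM check described above amounts to a bytewise majority vote. The `majority_read` helper below is hypothetical; the self-destruct condition is modeled as an exception raised when no two copies agree.]

```python
def majority_read(rom_a: bytes, rom_b: bytes, rom_c: bytes) -> bytes:
    """Read triple-redundant ROM by bytewise two-of-three majority vote.

    A single corrupted copy is outvoted and masked; if all three copies
    disagree at some position, the instruction store is unrecoverable
    and the node triggers its self-destruct condition.
    """
    out = bytearray()
    for a, b, c in zip(rom_a, rom_b, rom_c):
        if a == b or a == c:
            out.append(a)      # at least two copies agree on a's value
        elif b == c:
            out.append(b)      # a is the corrupted outlier
        else:
            raise RuntimeError("unrecoverable ROM divergence: self-destruct")
    return bytes(out)
```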

Comment author: orthonormal 04 October 2010 07:38:27PM 3 points [-]

Remember, you're the one trying to prove impossibility of a task here. Your inability to imagine a solution to the problem is only very weak evidence.

Comment author: mattnewport 04 October 2010 04:32:14AM *  0 points [-]

I don't know whether ant colonies exhibit principal-agent problems (though I'd expect they do to some degree), but I know there is evidence of nepotism in queen rearing in bee colonies where individuals are not all genetically identical: workers favour the most closely related larvae when selecting which larvae to feed royal jelly to create a queen.

The fact that ants from different colonies commonly exhibit aggression towards each other suggests limits to scaling such high levels of group cohesion. Though supercolonies do appear to exist, they have not come to total dominance.

The largest and most complex examples of group coordination we know of are large human organizations and these show much greater levels of internal goal conflicts than much simpler and more spatially concentrated insect colonies.

Comment author: orthonormal 04 October 2010 07:34:31PM 0 points [-]

I'm analogizing a singleton to a single ant colony, not to a supercolony.