What kind of competitive or political system would make fragmented squabbling AIs safer than an attempt to get the monolithic approach right?
This is a tremendously important question! (David Brin isn't the first person to raise the idea, BTW. I raised it at the first AGI workshop in 2006, and probably before that on OB. I would be surprised if no one else had also raised it before that.)
Brin's essay doesn't really touch on any of the important problems with doing so, though.
One of the dangers of trying to implement this is our own horrendously inaccurate understanding of how checks and balances work in our own system. Brin's essay, and the ideas of just about every American who speaks on this topic, are fundamentally unsound because they start from the presumption that democracy works, for everything, all the time, everywhere. We've made democracy such an object of reverence that we have never seriously critiqued it. We haven't even collected the data we would need to do so. Even in Iraq, where we urgently need to, we still haven't asked the questions "Why does democracy not seem to work here? Would something else work better?"
For starters, we can't hope to create an ecology of AIs until we can figure out how to create a government that doesn't immediately decay into a two-party system. We want more than two AIs.
EDIT: Folks, this is a very important point, for your own survival. I strongly encourage you to explain why you down-voted this comment.
What kind of competitive or political system would make fragmented squabbling AIs safer than an attempt to get the monolithic approach right?
This is a tremendously important question!
...and the answer is, "None." It's like asking how you should move your legs to walk faster than a jet plane.
David Brin suggests that some kind of political system populated with humans and diverse but imperfectly rational and friendly AIs would evolve in a satisfactory direction for humans.
I don't know whether creating an imperfectly rational general AI is any easier than creating a rational one, except that limited perceptual and computational resources obviously imply less-than-optimal outcomes; still, why shouldn't we hope for the optimum given those constraints? I imagine the question will become more settled before anyone nears unleashing a self-improving superhuman AI.
An imperfectly friendly AI, perfectly rational or not, is a very likely scenario. Is it sufficient to create diverse singleton value systems (demographically representative of humans' values), rather than a monolithic Friendly AI built on a consensus over all humans' values?
What kind of competitive or political system would make fragmented squabbling AIs safer than an attempt to get the monolithic approach right? Brin seems to have some hope of improving politics regardless of AI participation, but I'm not sure exactly what his dream is or how to get there. Perhaps his "disputation arenas" would work if the participants were rational and altruistically honest.