David Brin suggests that some kind of political system, populated with humans and with diverse AIs that are imperfectly rational and imperfectly friendly, would evolve in a direction satisfactory for humans.
I don't know whether creating an imperfectly rational general AI is any easier, except that limited perceptual and computational resources obviously imply less-than-optimal outcomes; still, why shouldn't we hope for the best outcome achievable given those constraints? I imagine the question will become more settled before anyone comes close to unleashing a self-improving superhuman AI.
An imperfectly friendly AI, perfectly rational or not, is a very likely scenario. Is it sufficient to create diverse singleton value-systems (demographically representative of humans' values), rather than a monolithic Friendly AI embodying a consensus over all humans' values?
What kind of competitive or political system would make fragmented, squabbling AIs safer than an attempt to get the monolithic approach right? Brin seems to have some hope of improving politics regardless of AI participation, but I'm not sure exactly what his dream is or how to get there - perhaps his "disputation arenas" would work if the participants were rational and altruistically honest.
...and the answer is, "None." It's like asking how you should move your legs to walk faster than a jet plane.
Downvoted for dismissing, without giving any evidence, a question that is tremendously important to Eliezer's own work, and for claiming certainty.
It would be reasonable to say that you think it might not be possible. It isn't reasonable to claim to know that it's impossible.
I have just stated that it isn't reasonable to dismiss as impossible, without argument, what may be our only chance of survival. I therefore find the immediate surge of downvotes surprising, and would appreciate explanations.