David Brin suggests that some kind of political system populated by humans and by diverse AIs, imperfectly rational and imperfectly friendly, would evolve in a direction satisfactory for humans.
I don't know whether creating an imperfectly rational general AI is any easier than creating a perfectly rational one, except that limited perceptual and computational resources obviously imply less-than-optimal outcomes; still, why shouldn't we hope for the optimum given those constraints? I imagine the question will become more settled before anyone comes close to unleashing a self-improving superhuman AI.
An imperfectly friendly AI, perfectly rational or not, is a very likely scenario. Would it suffice to create diverse singleton value-systems (demographically representative of humans' values) rather than a monolithic Friendly AI built on a consensus over all humans' values?
What kind of competitive or political system would make fragmented, squabbling AIs safer than an attempt to get the monolithic approach right? Brin seems to have some hope of improving politics regardless of AI participation, but I'm not sure exactly what his dream is or how to get there - perhaps his "disputation arenas" would work if the participants were rational and altruistically honest.
Easy. Go to the first-class galley. Drop-kick their ceramic plates and bowls so that they shatter. Then tell the inspectors. They'll ground the plane. I can walk faster than 0 mph; can you?
(ETA: Better suggestions not given for fear of being arrested, but you get the picture.)
You forgot that aircraft structural analysts post here.
Oh, and wrong meta-level! Or something...