if various human beings have diverging values, there is no way for the AI to be aligned with all of them.
Yes, it is trivially true that an AI cannot perfectly optimize for one person's values while simultaneously perfectly optimizing for another person's. But by optimizing for some combination of each person's values, an AI can align reasonably well with all of them unless their values are in dramatic conflict.
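To make the aggregation point concrete, here is a minimal sketch (the names, outcomes, and numbers are all invented for illustration): maximizing a weighted combination of two people's utility functions picks an outcome that is reasonably good for both, so long as their rankings are not near-exact opposites.

```python
# Two people assign different (hypothetical) utilities to the same outcomes.
alice = {"A": 10, "B": 7, "C": 1}
bob   = {"A": 2,  "B": 8, "C": 10}

def combined(outcome, weight=0.5):
    """Weighted sum of the two utility functions."""
    return weight * alice[outcome] + (1 - weight) * bob[outcome]

# Maximizing the combination picks B: not either person's favorite,
# but reasonably good for both.
best = max(alice, key=combined)
print(best, alice[best], bob[best])  # -> B 7 8
```

If the two rankings were exact opposites, no weighting could score well for both; short of that, some compromise outcome exists.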
In particular, as I said elsewhere, human beings do not value anything infinitely. Any AI that does value something infinitely will not have human values, and it will be subject to Pascal's Mugging. Consequently, the most important thing is to make sure that you do not give an AI any utility function at all, since if you do give it one, it will automatically diverge from human values.
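A toy expected-value calculation (all probabilities and payoffs invented for illustration) shows why the Pascal's Mugging point matters: with an unbounded utility function, a tiny probability of an enormous promised payoff dominates the decision, while a bounded utility function caps the upside and declines.

```python
# Illustrative numbers only: a mugger promises a huge payoff at a tiny
# probability in exchange for a small certain cost.
p_mugger = 1e-20          # probability the mugger's promise is genuine
promised_payoff = 1e30    # promised utility if it is
cost_of_paying = 5        # utility lost by handing over the wallet

# Unbounded utility: the tiny probability times the huge payoff dominates.
unbounded_ev = p_mugger * promised_payoff - cost_of_paying
print(unbounded_ev > 0)   # True: this agent pays every mugger

# Bounded utility (hypothetical cap of 100): the promise cannot exceed the bound.
bounded_ev = p_mugger * min(promised_payoff, 100) - cost_of_paying
print(bounded_ev > 0)     # False: this agent declines
```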
Are you claiming that all utility functions are unbounded? That is not the case. (In fact, if you consider only continuous utility functions on a complete lottery space, then they are all bounded: http://lesswrong.com/lw/gr6/vnm_agents_and_lotteries_involving_an_infinite/)
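For readers who want the intuition behind that parenthetical, here is a standard St. Petersburg-style construction (a sketch of the usual argument, not a quotation from the linked post). Suppose $u$ is unbounded above: then for each $n$ there is an outcome $o_n$ with $u(o_n) \ge 2^n$. The lottery that assigns probability $2^{-n}$ to $o_n$ is a legitimate member of a complete lottery space, yet

\[
  \mathbb{E}[u] \;=\; \sum_{n=1}^{\infty} 2^{-n}\, u(o_n) \;\ge\; \sum_{n=1}^{\infty} 2^{-n} \cdot 2^{n} \;=\; \infty,
\]

so the agent's expected utility diverges on that lottery. Requiring consistent, continuous preferences over all such lotteries therefore forces $u$ to be bounded.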
No, I wasn't saying that all utility functions are unbounded. I was making two points in that paragraph:
1) An AI that values something infinitely will not have anything remotely like human values, since human beings do not value anything infinitely. And if you describe such an AI's values with a utility function, it would be either an unbounded function or a bounded function that behaves similarly by approaching a limit; if it did not behave similarly, it would not treat anything as having infinite value. (See the sketch after point 2 below.)
2) If you program an AI with an explicit utility...
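On point 1 above: a small numeric sketch (the bound and the functions are hypothetical) of how a bounded utility function that approaches its bound only in the limit can behave, over any realistic range of stakes, just like an unbounded one — so "bounded" by itself does not rule out treating some outcome as effectively infinite in value.

```python
import math

B = 1e12  # hypothetical bound on utility, astronomically far from everyday stakes

def u_unbounded(x):
    return x

def u_bounded(x):
    # Bounded above by B, but approaches the bound only in the limit.
    return B * (1 - math.exp(-x / B))

for x in (10, 1_000, 1_000_000):
    print(x, u_unbounded(x), u_bounded(x))
# For stakes x << B the two functions are numerically indistinguishable;
# the bound only changes behavior for stakes comparable to B itself.
```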
Edge.org has recently been discussing "the myth of AI". Unfortunately, although Superintelligence is cited in the opening, most of the participants don't seem to have engaged with Bostrom's arguments. (Luke has written a brief response to some of the misunderstandings exhibited by Pinker and others.) The most interesting comment is Stuart Russell's, at the very bottom:
I'd quibble with a point or two, but this strikes me as an extraordinarily good introduction to the issue. I hope it gets reposted somewhere it can stand on its own.
Russell has previously written on this topic in Artificial Intelligence: A Modern Approach and the essays "The long-term future of AI," "Transcending complacency on superintelligent machines," and "An AI researcher enjoys watching his own execution." He's also been interviewed by GiveWell.