Human beings do not have values that are provably aligned with the values of other human beings.
Sure, but we "happily" compromise. AI should be able to understand and implement the compromise that is overall best for everyone.
Any AI that does value something infinitely will not have human values
AI can value the "best compromise" infinitely :). But agreed, nothing else.
I'm not sure what it would mean exactly to value the best compromise infinitely, since part of that compromise would be the refusal to accept a sufficiently bad Pascal's Mugging, which implies a bound on the utility function.
Edge.org has recently been discussing "the myth of AI". Unfortunately, although Superintelligence is cited in the opening, most of the participants don't seem to have looked into Bostrom's arguments. (Luke has written a brief response to some of the misunderstandings Pinker and others exhibit.) The most interesting comment is Stuart Russell's, at the very bottom:
I'd quibble with a point or two, but this strikes me as an extraordinarily good introduction to the issue. I hope it gets reposted somewhere it can stand on its own.
Russell has previously written on this topic in Artificial Intelligence: A Modern Approach and the essays "The long-term future of AI," "Transcending complacency on superintelligent machines," and "An AI researcher enjoys watching his own execution." He's also been interviewed by GiveWell.