(Also, the 'AGI will literally kill us all by default' argument is laughably bad, for many game-theoretic and economic reasons, both standard and acausal, that should be obvious, and people unthinkingly repeating it also makes SingInst and LessWrong look like weirdly overconfident end-of-the-world-mongers.)
The argument in its simplest form is: humans, and everything we need to survive, are made of resources an AGI could repurpose for its own ends, and most possible AGI goals place no terminal value on human welfare. Hence most AGIs will make better use of resources by controlling them than by trading with humans. Hence most AGIs will kill us by default. You can question the assumptions (the last one is somewhat related to the orthogonality thesis), but the conclusion seems to follow from them pretty directly.
What does "most" AGI s mean? Most we are likely to build? When our only model of AGI is human intelligence ?
There is no engineering process corresponding to a random dip into mind space.
One of the most annoying arguments when discussing AI is the perennial "But if the AI is so smart, why won't it figure out the right thing to do anyway?" It's often the ultimate curiosity stopper.
Nick Bostrom has defined the "Orthogonality thesis" as the principle that motivation and intelligence are essentially unrelated: superintelligences can have nearly any type of motivation (at least, nearly any utility-function-based motivation). We're trying to get some rigorous papers out so that when that question comes up, we can point people to standard, published arguments. Nick has had a paper accepted that points out that the orthogonality thesis is compatible with a lot of philosophical positions that would seem to contradict it.
I'm hoping to complement this with a paper laying out the positive arguments in favour of the thesis. So I'm asking you for your strongest arguments for (or against) the orthogonality thesis. Think of trying to convince a conservative philosopher who's caught a bad case of moral realism: what would you say to them?
Many thanks! Karma and acknowledgements will shower on the best suggestions, and many puppies will be happy.