Just a minor thought connected with the orthogonality thesis: if you claim that any superintelligence will inevitably converge to some true code of morality, then you are also claiming that no measures can be taken by its creators to prevent this convergence. In other words, the superintelligence will be uncontrollable.
Thank you for your answer. I don't think the methods you describe are much good for predictions. On the other hand, few methods are much good for predictions anyway.
I've already taken a few online AI courses to get some background; emotionally, this has made me feel that AI is likely to be somewhat less powerful than anticipated, but that its motivations are even more certain to be alien than I'd thought. Not sure how much weight to put on these intuitions.