Well, if we understand A to include ems, then your B, C and D are misleadingly worded. They all speak of a single AI. If you reworded them to allow for ems, then I guess I would agree that the dispute is located there. But I would be interested to see such a rewording--I think it would be favorable to Hanson's point of view.
It's also not clear to me that the probabilities of "single powerful AI" and "trillions of uploads" together add up to p > .5! But that might be off-topic.
Favorable? I don't know why you'd think that. It seems to me the charitable interpretation of Hanson's view has him thinking of ems as naturally Friendly, or near-Friendly. (My analysis didn't mention the chance of us getting FAI without working for it.)
If we get two unFriendly AIs that individually have the power to kill humanity, and if acting quickly means they don't have to negotiate with anyone else from this planet, they'll divide Earth between them. If we somehow get trillions of uFAIs with practically different goals, then of course the expected value...
Link: overcomingbias.com/2011/07/debating-yudkowsky.html