If you have those preferences, then presumably small enough changes to the competing options in each case won't change which outcome you prefer. And then we get this:
Competing Agent: Hey, Imm. I hear you prefer certainly saving 399 people to a 0.8 chance of saving 500 people. Is that right?
Imm: Yup.
Competing Agent: Cool. It just so happens that there's a village near here where there are 500 people in danger, and at the moment we're planning to do something that will save them 80% of the time but otherwise let them all die. But there's something else we could do that will save 399 of them for sure, though unfortunately the rest won't make it. Shall we do it?
Imm: Yes.
Competing Agent: OK, done. Oh, now, I realise I have to tell you something else. There's this village where 100 people are going to die (aside: 101, actually, but that's even worse, right?) because of a dubious choice someone made. I hear you prefer a 20% chance of killing 499 people to the certainty of killing 100 people; is that right?
Imm: Yes, it is.
Competing Agent: Right, then I'll get there right away and make sure they choose the 20% chance instead.
At this point, you have gone from losing 500 people with p=0.2 and saving them with p=0.8, to losing one person for sure and then losing the rest with p=0.2 and saving them with p=0.8. Oops.
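The bookkeeping above can be made concrete with a short sketch. The numbers come from the dialogue; the per-branch accounting of the final position is my own reconstruction of what the two trades leave Imm with.

```python
# Expected survivors under each lottery (probability, survivors) list.
def expected_survivors(outcomes):
    """outcomes: list of (probability, survivors) pairs."""
    return sum(p * s for p, s in outcomes)

# Original plan: all 500 saved with p = 0.8, otherwise all die.
original = [(0.8, 500), (0.2, 0)]

# After trade 1: 399 saved for sure (101 die).
after_trade_1 = [(1.0, 399)]

# After trade 2: one extra death is already locked in, and the remaining
# 499 face the original-style gamble again: saved with p = 0.8, dead
# with p = 0.2.
after_trade_2 = [(0.8, 499), (0.2, 0)]

print(expected_survivors(original))       # 400.0
print(expected_survivors(after_trade_1))  # 399.0
print(expected_survivors(after_trade_2))  # 399.2
```

Note that the final lottery is worse than the original not just in expectation: in the good branch (p=0.8) one more person is dead, and in the bad branch (p=0.2) everyone dies either way.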
[EDITED to clarify what's going on at one point.]
Well sure. But my position only makes sense at all because I'm not a consequentialist and don't see killing n people and saving n people as netting out to zero, so I don't see that you can just add the people up like that.
My apologies if this doesn't deserve a Discussion post, but if this hasn't been addressed anywhere then it's clearly an important issue.
There have been many defences of consequentialism against deontology, including quite a few on this site. What I haven't seen, however, is any demonstration of how deontology is incompatible with the ideas in Eliezer's Metaethics sequence; as far as I can tell, a deontologist could agree with just about everything in the Sequences.
Said deontologist would argue that, to the extent a universal human morality can exist through generalised moral instincts, those instincts tend to be deontological (as supported by scientific studies: a study of the trolley dilemma vs. the 'fat man' variant showed that people would divert the trolley but not push the fat man). This would be their argument against the consequentialist, whom they could accuse of wanting a consequentialist system while ignoring the moral instincts at the basis of their own speculations.
I'm not completely sure about this, but if I have indeed misunderstood, it seems an important enough misunderstanding to deserve clearing up.