There is no contradiction to rejecting total utilitarianism and choosing torture.
For one thing, I compared choosing torture with the repugnant conclusion, not with total utilitarianism. For another, I didn't claim there was any contradiction. However, agents with intransitive dispositions are exploitable.
You can also say, descriptively, that refusing total utilitarianism because of the repugnant conclusion is structurally equivalent to refusing deontology because we've realised that two deontological absolutes can contradict each other. Or, more simply, refusing X because of A is structurally the same as refusing X' because of A'.
My fault, I should have been more precise. I wanted to say that the two repugnant conclusions (one based on dust specks, the other based on "17") are similar because quite a few people would, upon reflection, refuse any kind of scope neglect that renders one intransitive.
Just because one can reject total utilitarianism (or anything) for erroneous reasons, does not mean that every reason for rejecting total utilitarianism must be an error.
I agree. Again, I didn't claim the contrary to be true. I didn't argue against the rejection of total utilitarianism. However, I argued against the repugnant conclusion, since it simply restates that evolution gave human brains limbic systems that make them choose in intransitive ways. If we considered this a bias in the dust speck example, the same would apply to the repugnant conclusion.
There is no contradiction to rejecting total utilitarianism and choosing torture.
However, agents with intransitive dispositions are exploitable.
Transitive agents (e.g. average utilitarians) can reject the repugnant conclusion and choose torture. These things are not the same - many consistent, unexploitable agents reach different conclusions on them. Rejection of the repugnant conclusion does not come from scope neglect.
I want to thank Irgy for this idea.
As people generally know, total utilitarianism leads to the repugnant conclusion - the idea that no matter how great a universe X would be, filled with trillions of ultimately happy people having ultimately meaningful lives filled with adventure and joy, there is another universe Y which is better - and that is filled with nothing but dull, boring people whose quasi-empty and repetitive lives are just one tiny iota above being too miserable to endure. But since the second universe is much bigger than the first, it comes out on top. Not only in that if we had Y it would be immoral to move to X (which is perfectly respectable, as doing so might involve killing a lot of people, or at least allowing a lot of people to die). But in that, if we planned for our future world now, we would desperately want to bring Y into existence rather than X - and could run great costs or great risks to do so. And if we were in world X, we must at all costs move to Y, making all current people much more miserable as we do so.
The repugnant conclusion is the main reason I reject total utilitarianism (the other one being that total utilitarianism sees no problem with painlessly killing someone by surprise, as long as you also gave birth to someone else of equal happiness). But the repugnant conclusion can emerge from many other population ethics as well. If adding more people of slightly less happiness than the average is always a bonus ("mere addition"), and if equalising happiness is never a penalty, then you get the repugnant conclusion (caveat: there are some subtleties to do with infinite series).
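The stepwise argument can be made concrete. Here is a minimal sketch with made-up numbers, using total utilitarianism itself as the example system (it satisfies both premises): each round adds people slightly below the current average, then equalises happiness, and iterating drives the average toward barely-worth-living while value never decreases.

```python
# Illustrative sketch of the mere-addition argument, with made-up numbers.
# The example system is total utilitarianism: value = population * average,
# which satisfies both premises (mere addition is a gain, equalising is free).

def value(pop, avg):
    return pop * avg

pop, avg = 10, 100.0              # start: 10 very happy people
for _ in range(200):
    # Mere addition: add pop new people at 90% of the current average.
    new_total = pop * avg + pop * (avg * 0.9)
    assert new_total >= value(pop, avg)   # mere addition is never a loss
    # Equalising: spread the same total happiness evenly (never a penalty).
    pop, avg = pop * 2, new_total / (pop * 2)

# Average happiness has collapsed toward zero, yet value never decreased.
print(avg < 0.01, value(pop, avg) >= 1000)
```

Each round multiplies average happiness by 0.95 and doubles the population, so the end state is a vast, barely-happy world that the system nonetheless ranks at least as high as the small, happy starting world.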
But repugnant conclusions reached in that way may not be so repugnant, in practice. Let S be a system of population ethics that accepts the repugnant conclusion, due to the argument above. S may indeed conclude that the big world Y is better than the super-human world X. But S need not conclude that Y is the best world we can build, given any fixed and finite amount of resources. Total utilitarianism is indifferent between a world and one with half the population and twice the happiness. But S need not be indifferent to that - it may much prefer the twice-happiness world. Instead of the world Y, it may prefer to reallocate resources to instead achieve the world X', which has the same average happiness as X but is slightly larger.
Of course, since it accepts the repugnant conclusion, there will be a barely-worth-living world Y' which it prefers to X'. But it might then prefer reallocating the resources of Y' to achieve the happy world X'', and so on.
This is not an argument for efficiency of resource allocation: even if it's four times as hard to get people twice as happy, S can still want to do so. You can accept the repugnant conclusion and still want to reallocate any fixed amount of resources towards low population and extreme happiness.
It's always best to have some examples, so here is one: an S whose value is the product of average agent happiness and the logarithm of the population size.
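A quick numeric check of this example shows both properties at once (a minimal sketch with made-up numbers; the resource model in part 2, happiness = resources / population, is an illustrative assumption, not from the post):

```python
import math

def value(avg_happiness, population):
    # S's value: average happiness times the log of population size.
    return avg_happiness * math.log(population)

# 1. S accepts the repugnant conclusion: for any world X there is a
#    barely-happy world Y with a (vastly) larger population and higher value.
#    We work with Y's log-population, since the population itself is
#    astronomically large.
H, N, eps = 100.0, 10**9, 0.01          # X: a billion very happy people
log_M = H * math.log(N) / eps + 1.0     # Y's log-population, chosen to win
assert eps * log_M > value(H, N)        # Y beats X

# 2. Yet given fixed resources R, with happiness R / population (an assumed
#    resource model), S prefers a small, very happy population.
R = 1000.0
small = value(R / 10, 10)       # 10 people at happiness 100
big = value(R / 1000, 1000)     # 1000 people at happiness 1
assert small > big
print(round(small, 1), round(big, 1))
```

So this S ranks some enormous barely-worth-living world above any given happy world, while still steering any fixed resource budget towards low population and extreme happiness.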