(Just FYI, over the course of this discussion I have been gradually updating downward my confidence that you're interested in being accurate and fair about total utilitarians, rather than merely slinging mud.)
I admit I have been using deliberately emotive descriptions, as I believe that total utilitarians have gradually disconnected themselves from the true consequences of their beliefs - the equivalent of those who argue that "maybe the world isn't worth saving" while never dreaming of letting people they know or even random strangers just die in front of them.
you also have to consider their impact on others, and the impact on the whole society of all that killing-and-replacing.
Of course! But a true total utilitarian would therefore want to mould society (if they could) so that killing-and-replacing has less negative impact.
The scenario I suppose you need to imagine here is that we have machines for manufacturing fully-grown people, and they've gradually been getting better so that they produce better and happier and nicer and more productive people.
In a future where uploads and copying may be possible, this may not be as far-fetched as it seems (and total resources are likely limited). That's the only reason I care about this - there could be situations created in the medium-term future where the problematic aspects of total utilitarianism come to the fore. I'm not sure we can over-rely on practical considerations to keep these conclusions at bay.
I want to thank Irgy for this idea.
As people generally know, total utilitarianism leads to the repugnant conclusion - the idea that no matter how great a universe X would be, filled with trillions of ultimately happy people having ultimately meaningful lives filled with adventure and joy, there is another universe Y which is better - and that is filled with nothing but dull, boring people whose quasi-empty and repetitive lives are just one tiny iota above being too miserable to endure. Since the second universe is much bigger than the first, it comes out on top. It comes out on top not only in the sense that, if we had Y, it would be immoral to move to X (which is perfectly respectable, as doing so might involve killing a lot of people, or at least allowing a lot of people to die). It also comes out on top in the sense that, if we were planning our future world now, we would desperately want to bring Y into existence rather than X - and could incur great costs or run great risks to do so. And if we were in world X, we would have to move to Y at all costs, making all current people much more miserable as we did so.
The repugnant conclusion is the main reason I reject total utilitarianism (the other one being that total utilitarianism sees no problem with painlessly killing someone by surprise, as long as you also gave birth to someone else of equal happiness). But the repugnant conclusion can emerge from many other systems of population ethics as well. If adding more people whose happiness is slightly below the average is always an improvement ("mere addition"), and if equalising happiness across the population is never a penalty, then you get the repugnant conclusion (caveat: there are some subtleties to do with infinite series).
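The two premises can be iterated into a ladder, and a toy simulation makes the drift visible (the concrete numbers - a 90% mere-addition step, a happiness floor of 1 - are my own illustrative assumptions, not from the argument itself):

```python
# Toy simulation of the mere-addition argument (illustrative sketch).
# Premise 1 (mere addition): adding people slightly below average
#   happiness never makes the world worse.
# Premise 2: equalising happiness at the same total is never a penalty.
# Alternating the two steps drives average happiness toward the
# barely-worth-living floor while the population explodes.

population, avg_happiness = 1_000, 100.0  # world X: small and very happy
floor = 1.0                               # "barely worth living" level

steps = 0
while avg_happiness > 2 * floor:  # stop within a factor of 2 of the floor
    # Mere addition: double the population with people at 90% of average.
    new_people = population
    total = population * avg_happiness + new_people * (0.9 * avg_happiness)
    population += new_people
    # Equalisation: spread the same total evenly (no penalty by premise 2).
    avg_happiness = total / population
    steps += 1

print(f"After {steps} steps: population {population:,}, "
      f"average happiness {avg_happiness:.2f}")
```

Each round multiplies average happiness by 0.95 while doubling the population, so after a few dozen rounds the "never worse" chain has carried us from the happy world to an enormous, barely-worth-living one.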
But repugnant conclusions reached in that way may not be so repugnant, in practice. Let S be a system of population ethics that accepts the repugnant conclusion, due to the argument above. S may indeed conclude that the big world Y is better than the super-happy world X. But S need not conclude that Y is the best world we can build, given any fixed and finite amount of resources. Total utilitarianism is indifferent to having a world with half the population and twice the happiness. But S need not be indifferent to that - it may much prefer the twice-happiness world. Instead of the world Y, it may prefer to reallocate resources to instead achieve the world X', which has the same average happiness as X but is slightly larger.
Of course, since it accepts the repugnant conclusion, there will be a barely-worth-living world Y' which it prefers to X'. But then it might prefer reallocating the resources of Y' to the happy world X'', and so on.
This is not merely a point about efficient resource allocation: even if it's four times as costly to make people twice as happy, S can still want to do so. You can accept the repugnant conclusion and still want to reallocate any fixed amount of resources towards low population and extreme happiness.
It's always best to have some examples, so here is one: an S whose value is the product of average agent happiness and the logarithm of population size.
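As a sketch (the specific world sizes and happiness levels are my own illustrative numbers), this S can be computed directly, showing both that it accepts the repugnant conclusion and that, for fixed population, it strictly prefers fewer, happier people:

```python
import math

def value(avg_happiness, population):
    """Toy population ethics S: average happiness * log(population)."""
    return avg_happiness * math.log(population)

# World X: a billion blissful people.  World Y: barely-worth-living lives,
# but so many of them that S still ranks Y above X (repugnant conclusion).
X = value(100.0, 10**9)         # roughly 100 * 20.7
Y = value(0.01, 10**100_000)    # roughly 0.01 * 230_259
assert Y > X

# Yet, unlike total utilitarianism (which is exactly indifferent here),
# S strictly prefers halving the population to double the happiness
# whenever the population exceeds 4: 2h*log(n/2) > h*log(n) iff n > 4.
n, h = 10**9, 10.0
assert value(2 * h, n // 2) > value(h, n)
```

So the ordering S imposes on fixed-resource worlds pulls towards high happiness and modest population, even though, across unbounded population sizes, it still concedes that some vast barely-worth-living world beats any given happy one.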