I think utilitarians should generally stay out of the business of making moral assessments of people as opposed to actions. The action of giving birth to a happier person is (for total utilitarians) a good action. The action of killing the original person is (for total utilitarians) a bad action. If these two actions are (as they would be in just about any actually-credible scenario) totally unrelated, then what a total utilitarian might do is praise the one action and condemn the other, or tell people who are doing neither to emulate the one but not the other.
The last suggestion is an interesting one, in that it does actually describe a nasty-sounding policy that total utilitarians really might endorse. But if we're going to appeal to intuition here we'd better make sure that we're not painting an unrealistic picture (which is the sort of thing that enables the Chinese Room argument to fool some people).
For the nasty-sounding policy actually to be approved by a total utilitarian in a given case, we need to find someone who very much wants to kill people but can successfully be prevented from doing so; who could, if s/he so chose, produce children who would bring roughly as much net happiness to the world as the killings would remove; who currently chooses not to produce such children but would be willing to do so in exchange for being allowed to kill; and there must be no other people capable of producing such children at a substantially lower cost to society. Just about every part of this is (I think) very implausible.
It may be that there are weird possible worlds in which those things happen, in which case indeed a total utilitarian might endorse the policy. But "it is possible to imagine really weird possible worlds in which this ethical system leads to conclusions that we, living in the quite different actual world, find strange" is not a very strong criticism of an ethical system. I think such criticisms can in fact be applied to just about any ethical system.
I think the best way to test our intuitions about such cases is to "naturalize" all the events involved: instead of having someone kill or create someone else, imagine that the events happened purely through natural forces.
As it happens, in the case of killing and replacing a person, my intuitions remain the same. If someone is struck by lightning, and a new person pops out of a rock to replace them, my sense is that, on net, a bad thing has happened.
I want to thank Irgy for this idea.
As people generally know, total utilitarianism leads to the repugnant conclusion - the idea that no matter how great a universe X would be, filled with trillions of supremely happy people leading deeply meaningful lives full of adventure and joy, there is another universe Y which is better - one filled with nothing but dull, boring people whose quasi-empty and repetitive lives are just one tiny iota above being too miserable to endure. Since the second universe is much bigger than the first, it comes out on top. Not only in the sense that, if we had Y, it would be immoral to move to X (which is perfectly respectable, as doing so might involve killing a lot of people, or at least allowing a lot of people to die). But in the sense that, if we were planning our future world now, we should desperately want to bring Y into existence rather than X - and could run great costs or great risks to do so. And if we were in world X, we would have to move to Y at all costs, making all current people much more miserable as we did so.
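To see the arithmetic behind this, here is a minimal sketch (the specific numbers are my own illustration, not from the argument): total utility is population times average happiness, so a vast enough world of barely-worth-living lives outscores any fixed world of wonderful ones.

```python
# Total utility = population * average happiness (illustrative numbers only).
happy_world = 10**12 * 100.0    # X: a trillion extremely happy people -> 1e14
vast_world = 10**18 * 0.001     # Y: a quintillion barely-happy people -> 1e15
assert vast_world > happy_world  # total utilitarianism ranks Y above X
```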
The repugnant conclusion is the main reason I reject total utilitarianism (the other being that total utilitarianism sees no problem with painlessly killing someone by surprise, as long as you also give birth to someone else of equal happiness). But the repugnant conclusion can emerge from many other systems of population ethics as well. If adding more people of slightly less happiness than the average is always a bonus ("mere addition"), and if equalising happiness is never a penalty, then you get the repugnant conclusion (caveat: there are some subtleties to do with infinite series).
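Spelled out schematically (my own sketch of the standard chain, with $N$ people at happiness $H$ and $\epsilon$ a small decrement):

$$(N \text{ at } H) \;\preceq\; (N \text{ at } H,\; N \text{ at } H - \epsilon) \;\preceq\; \left(2N \text{ at } \approx H - \tfrac{\epsilon}{2}\right) \;\preceq\; \cdots$$

The first step is mere addition, the second is equalisation; iterating the pair pushes the population up and the average happiness down toward barely-worth-living, with each world at least as good as the one before.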
But repugnant conclusions reached in that way may not be so repugnant in practice. Let S be a system of population ethics that accepts the repugnant conclusion, via the argument above. S may indeed conclude that the big world Y is better than the super-happy world X. But S need not conclude that Y is the best world we can build, given any fixed and finite amount of resources. Total utilitarianism is indifferent between a given world and one with half the population but twice the happiness; S need not be indifferent to that - it may much prefer the twice-happiness world. Instead of the world Y, it may prefer to reallocate resources to achieve the world X', which has the same average happiness as X but is slightly larger.
Of course, since it accepts the repugnant conclusion, there will be a barely-worth-living world Y' which it prefers to X'. But then it might prefer reallocating the resources of Y' to the happy world X'', and so on.
This is not merely a point about efficient resource allocation: even if it's four times as hard to make people twice as happy, S can still prefer to do so. You can accept the repugnant conclusion and still want to reallocate any fixed amount of resources towards low population and extreme happiness.
It's always best to have some examples, so here is one: an S whose value is average agent happiness times the logarithm of the population size.
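A minimal sketch of that example in Python (the functional form is the one just named; the worlds compared are my own illustrative numbers):

```python
import math

def S(avg_happiness, population):
    # The example S: average agent happiness times the log of population size.
    return avg_happiness * math.log(population)

# S accepts the repugnant conclusion: for any fixed world, a large enough
# barely-worth-living world scores higher, because log(population) is unbounded.
X = S(100.0, 10**9)         # very happy, moderately sized: ~2072
Y = S(0.01, 10**100_000)    # barely happy, but vast: ~2303
assert Y > X

# Yet with resources held fixed, S prefers fewer, happier people (where total
# utilitarianism would be exactly indifferent):
n, h = 10**6, 1.0
assert S(2 * h, n / 2) > S(h, n)  # half the people, twice the happiness

# ...and even if doubling happiness costs four times the resources per person:
assert S(2 * h, n / 4) > S(h, n)
```

This reproduces the trade-offs above: at fixed resources S prefers X'-style worlds to Y-style worlds, while still ranking a sufficiently vast Y above any fixed X.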