I tend to think that exact duplication doesn't double utility. More of exactly the same isn't really making things better. So I don't think millions of exactly identical villages of a few thousand people, isolated from one another (else their relationships would undermine the perfect identity between them; they'd be at different places in the pattern of relationships), are more valuable than just one instance of the same village, and if one village is slightly happier than any of the millions of identical villages, the one village is preferable. But between a more normal world of billions of unique, diverse, barely-worth-living lives, and one village of thousands of lives that are almost (but not quite) a million times happier, I guess I think the billions may be the better world, if that's how the total utilitarian math works out. Further, though, I think that while it doesn't take very much difference for me to count an additional worthwhile life as an improvement, once you get very, very close to exact duplication, it again stops being much of an improvement to add people. When you're talking about, say, a googol people instead of mere billions, it seems likely that some of them are going to be close enough to exact duplicates that the decreased value of mere duplication may start affecting the outcome.
> I tend to think that exact duplication doesn't double utility.
I agree.
> I guess I think the billions may be the better world if that's how the total utilitarian math works out.
You don't have to resign yourself to merely following the math. Total utilitarianism is built on some intuitive ideas. If you don't like the billions of barely-worth-living lives, that's also an intuition. The repugnant conclusion shows some tension between these intuitions, that's all - you have to decide how to resolve the tension (and if you think that exact duplication doesn't double utility, that's yet another intuition to weigh).
I want to thank Irgy for this idea.
As people generally know, total utilitarianism leads to the repugnant conclusion - the idea that no matter how great a universe X would be, filled with trillions of supremely happy people living supremely meaningful lives full of adventure and joy, there is another universe Y which is better - one filled with nothing but dull, boring people whose quasi-empty and repetitive lives are just one tiny iota above being too miserable to endure. Because the second universe is much bigger than the first, it comes out on top. Not only in the sense that if we had Y, it would be immoral to move to X (which is perfectly respectable, as doing so might involve killing a lot of people, or at least allowing a lot of people to die). But in the sense that, if we were planning our future world now, we should desperately want to bring Y into existence rather than X - and should be willing to bear great costs or run great risks to do so. And if we were in world X, we must at all costs move to Y, making all current people much more miserable as we do so.
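To see the arithmetic behind this, here is a minimal Python sketch; the population and happiness figures are invented purely for illustration:

```python
# Hypothetical figures, chosen only to illustrate the arithmetic.
x_population, x_happiness = 10**9, 100.0    # world X: a utopia of very happy people
y_population, y_happiness = 10**15, 0.001   # world Y: vastly more lives, barely worth living

total_x = x_population * x_happiness        # 1e11
total_y = y_population * y_happiness        # 1e12

assert total_y > total_x  # total utilitarianism ranks the repugnant world Y above X
```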
The repugnant conclusion is the main reason I reject total utilitarianism (the other one being that total utilitarianism sees no problem with painlessly killing someone by surprise, as long as you also bring into existence someone else of equal happiness). But the repugnant conclusion can emerge from many other systems of population ethics as well. If adding more people of slightly less happiness than the average is always a bonus ("mere addition"), and if equalising happiness is never a penalty, then you get the repugnant conclusion: starting from X, repeatedly adding slightly-below-average lives and then equalising drives the average down towards barely-worth-living while the population grows without bound (caveat: there are some subtleties to do with infinite series).
But repugnant conclusions reached in that way may not be so repugnant in practice. Let S be a system of population ethics that accepts the repugnant conclusion, due to the argument above. S may indeed conclude that the big world Y is better than the super-happy world X. But S need not conclude that Y is the best world we can build, given any fixed and finite amount of resources. Total utilitarianism is indifferent between a given world and one with half the population and twice the happiness. But S need not be indifferent to that - it may much prefer the twice-happiness world. Instead of the world Y, it may prefer to reallocate resources to achieve a world X', which has the same average happiness as X but is slightly larger.
Of course, since it accepts the repugnant conclusion, there will be a barely-worth-living world Y' which S prefers to X'. But S might then prefer to reallocate the resources of Y' towards a still happier world X'', and so on - as the sketch below illustrates.
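To make the reallocation point concrete, here is a small Python sketch using (in anticipation) the example S defined at the end of this post - average happiness times the logarithm of population size. The specific numbers are arbitrary:

```python
import math

def total_utility(population, happiness):
    return population * happiness

def s_value(population, happiness):
    # the example S from the end of the post: average happiness x log(population)
    return happiness * math.log(population)

n, h = 10**6, 10.0
# Total utilitarianism is exactly indifferent between these two worlds...
assert total_utility(n, h) == total_utility(n // 2, 2 * h)
# ...while S strictly prefers half the people at twice the happiness.
assert s_value(n // 2, 2 * h) > s_value(n, h)
```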
This is not merely a point about the efficiency of resource allocation: even if it's four times as hard to make people twice as happy, S can still prefer to do so. You can accept the repugnant conclusion and still want to reallocate any fixed amount of resources towards a low population with extreme happiness.
It's always best to have some examples, so here is one: an S that values a world at the average happiness of its agents multiplied by the logarithm of its population size.
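Here is a rough numeric check of that example, in Python. The resource models are my own simplifying assumptions for illustration: in (b), resources convert linearly into a fixed budget of total happiness to distribute; in (c), per-person cost grows with the square of happiness, matching the "four times as hard for twice as happy" case above.

```python
import math

def s_value(population, happiness):
    # S: average happiness times the logarithm of population size
    return happiness * math.log(population)

# (a) S accepts the repugnant conclusion: for any world, a sufficiently enormous
# world of barely-worth-living lives (happiness eps) scores higher.
n, h, eps = 1000, 100.0, 1.0
n_repugnant = 10**301          # need n' > n**(h/eps) = 1000**100 = 10**300
assert s_value(n_repugnant, eps) > s_value(n, h)

# (b) But with a fixed budget of total happiness to allocate (an assumed
# resource model), S is maximised by a tiny, extremely happy population.
budget = 10**6
worlds = [(pop, budget / pop) for pop in (2, 3, 10, 10**3, 10**6)]
best = max(worlds, key=lambda w: s_value(*w))
assert best[0] == 3            # analytically, (budget/n) * log(n) peaks at n = e

# (c) Even if happiness is four times as hard to produce when doubled
# (per-person cost growing as happiness squared), S still prefers
# a small, very happy world over a large, barely happy one.
def world_with_cost(happiness, resources=10**6):
    population = resources // happiness**2   # quadratic per-person cost
    return population, happiness

assert s_value(*world_with_cost(300)) > s_value(*world_with_cost(1))
```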