Yeah, it definitely seems like we're talking past each other here. I think I don't understand what you mean by "aggregation" -- I have a different impression from this comment than from the opening post. Perhaps you can clarify that?
Not sure if this is relevant: from a utilitarian point of view, I think you can aggregate when creating lives, but of course the counterfactuals you use will change (since what you're mostly trying to work out is how good creating a life is).
Let me try and be careful and clear here.
What I meant by "aggregation" is that when we have to choose between X and Y once, we may have unclear intuitions, but if we have to choose between X and Y multiple times (given certain conditions), the choice is clear (and is Y, for example).
There are two intuitive examples of this. The first is when X causes a definite harm and Y causes a probability of harm, as in http://lesswrong.com/lw/1d5/expected_utility_without_the_independence_axiom/ . The second is the example I gave here, where X causes harm to ...
EDIT: the purpose of this post is simply to show that there is a difference between certain reasoning for already existing and potential people. I don't argue that aggregation is the only difference, nor (in this post) that total utilitarianism for potential people is wrong. Simply that the case for existing people is stronger than for potential people.
Consider the following choices:

1. Either one randomly chosen person (out of a fixed population of 3^^^3) gets tortured for 50 years, or every one of the 3^^^3 people gets tortured for a millisecond.
2. Either one person is created who will be tortured for 50 years, or 3^^^3 people are created, each of whom will be tortured for a millisecond.
Some people might feel that these two choices are the same. There are some key differences between them, however - and not only because the second choice seems more underspecified than the first. The difference is the effect of aggregation - of facing the same choice again and again and again. And again...
There are roughly 1.6 billion seconds in 50 years (and hence roughly 1.6 trillion milliseconds). Assume a fixed population of 3^^^3 people, and assume that you were going to face the first choice 1.6 trillion times (in each case, the person to be tortured is chosen randomly and independently). Then choosing "50 years" each time results in 1.6 trillion people getting tortured for 50 years (the chance of the same person being chosen to be tortured twice is of the order of (1.6 trillion)²/3^^^3 - closer to zero than most people can imagine). Choosing "a millisecond" each time results in 3^^^3 people, each getting tortured for (slightly more than) 50 years in total.
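For concreteness, the arithmetic can be checked with a short script (just a sketch; it assumes a 365.25-day year and uses the rounded "1.6 trillion" figure from the text):

```python
# Sanity-check the torture-aggregation arithmetic.
YEAR_SECONDS = 365.25 * 24 * 60 * 60             # seconds in an average year

seconds_in_50_years = 50 * YEAR_SECONDS          # ~1.58e9: "roughly 1.6 billion"
ms_in_50_years = seconds_in_50_years * 1000      # ~1.58e12: "roughly 1.6 trillion"
print(f"{seconds_in_50_years:.3g} s, {ms_in_50_years:.3g} ms")

# Facing the choice the rounded 1.6 trillion times, the "a millisecond"
# option hands each of the 3^^^3 people 1.6 trillion milliseconds of
# torture in total:
per_person_seconds = 1.6e12 / 1000               # 1.6e9 seconds each
per_person_years = per_person_seconds / YEAR_SECONDS
print(round(per_person_years, 1))                # 50.7 - slightly more than 50 years
```

Because each round picks its victim independently out of 3^^^3 people, those milliseconds really do pile up on everyone at once, rather than being spread over fresh victims.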
The choice there is clear: pick "50 years". Now, you could argue that your decision should depend on how often you (or people like you) expect to face the same choice, and on the assumption of a fixed population of size 3^^^3, but there is a strong intuitive case to be made that the 50 years of torture is the way to go.
Now compare with the second choice. Choosing "50 years" 1.6 trillion times results in the creation of 1.6 trillion people who each get tortured for 50 years. Choosing "a millisecond" each time results in 1.6 trillion times 3^^^3 people being created, each tortured for a single millisecond. Depending on what the rest of these people's lives are like, many people (including me) would feel the "a millisecond" option is much better.
As far as I can tell (please do post suggestions), there is no way of aggregating impacts on the potential people you are creating in the same way that you can aggregate impacts on existing people. (Of course, you can first create potential people and then add impacts to them - or add impacts that will affect them once they are created - but this isn't the same thing.) Thus the two situations seem justifiably different, and there is no strong reason to carry the intuitions of the first case over to the second.