In my view, population ethics went wrong at the start by making a false assumption, namely: "Personal identity does not matter; all that matters is the total amount of whatever makes life worth living (i.e., utility)."
Derek Parfit first made this assumption when discussing the Non-Identity Problem. He believed it was the most plausible solution, but was disturbed by its other implications, such as the Repugnant Conclusion. His work spawned most of the subsequent debate on population ethics and its disturbing conclusions.
After meditating on the Non-Identity Problem for a while, I realized Parfit's proposed solution has a major problem. In the traditional form of the NIP, you are given a choice between two individuals who have different capabilities for utility generation (one is injured in utero, the other is not). However, there is another way to change how much utility someone gets out of life besides increasing or reducing their capabilities: you could also change the content of their preferences, so that a person has more ambitious preferences that are harder to achieve.
I reframed the NIP as a choice between having two children with equal capabilities (intelligence, able-bodiedness, etc.) but with different ambitions: one wanted to be a great scientist or artist, while the other just wanted to do heroin all day. It seemed obvious to me, and to most of the people I discussed this with, that it was better to have the ambitious child, even if the druggie had a greater level of lifetime utility.
In my view, the primary thing that determines whether someone's creation is good or not is their identity (i.e., what sort of preferences they have, their personality, etc.). What constitutes a "morally right" identity is really complicated and fragile, but generally it means having the sort of rich, complex values that humans have, and being (in certain ways) unique and different from the people who have come before. In addition to their internal desires, their relationship to other people is also important. (Of course, this only applies if their total lifetime utility is positive; if it's negative, it is bad to create them no matter what their identity is.)
We can now use this to patch Singer's "Moral Ledger" in a way that fits Eliezer's views. Creating someone with the "wrong" identity is a debt, but creating a person with a "right" identity is not. So we shouldn't create a utility monster (if "utility monster" is a "wrong" identity), because that would create a debt; but killing the monster wouldn't solve anything, it would just make the debt impossible to pay.
My "Identity Matters" model also helps explain our intuitions about our duties to have children. In the total and average views, the identity of the child is unimportant; in my model it is important. If someone doesn't want to have children, having an unwanted child is a "debt" regardless of the child's personal utility. A child born to parents who want one, by contrast, may be "right" to have, even if its utility is lower than that of the aforementioned unwanted child. (Of course, this model needs to be flexible about what makes someone "your child" in order to count things like sterile parents adopting unwanted children as positive, but I don't see this as a major problem.)
In addition to identity mattering, we also seem to have ideals about how utility should be concentrated. Most people intuitively reject things like Replaceability and the Repugnant Conclusion, and I think they're right to. We seem to hold an ideal that a small population with high per-person utility is better than a large one with low per-person utility, even if the larger population's total utility is higher. I'm not suggesting Average Utilitarianism; as I said in another comment, I think AU is a disastrously bad attempt to mathematize that ideal. But I do think the ideal is worthwhile; we just need a less awful way to fit it into our ethical system.
A third reason for our belief that having children is optional is that most people seem to believe in some sort of Critical Level Utilitarianism, with the critical level changing depending on our capabilities for increasing people's utility. Most people in the modern world would consider it unthinkable to have a child whose level of utility would have been considered normal in Medieval Europe. And I don't think this belief is just status quo bias: I would also consider it unconscionable to have a child with normal modern-world levels of utility in a transhuman future.
> It seemed obvious to me, and to most of the people I discussed this with, that it was better to have the ambitious child, even if the druggie had a greater level of lifetime utility.
Oh? Yes, it is true that it is better to have the ambitious child. I agree, and I think most others will too. But I don't think that's because of some fundamental preference, but rather because the ambitious child has a far greater chance of causing good in the world. (Say, becoming an artist and painting masterpieces that will be admired for centuries to come, or becoming a scientist a...
When someone complains that utilitarianism[1] leads to the dust speck paradox or the trolley-car problem, I tell them that's a feature, not a bug. I'm not ready to say that respecting the utility monster is also a feature of utilitarianism, but it is what most people everywhere have always done. A model that doesn't allow for utility monsters can't model human behavior, and certainly shouldn't provoke indignant responses from philosophers who keep right on respecting their own utility monsters.
The utility monster is a creature that is somehow more capable of experiencing pleasure (or positive utility) than all others combined. Most people consider sacrificing everyone else's small utilities for the benefit of this monster to be repugnant.
Let's suppose the utility monster is a utility monster because it has a more highly-developed brain capable of making finer discriminations, higher-level abstractions, and more associations than all the lesser minds around it. Does that make it less repugnant? (If so, I lose you here. I invite you to post a comment explaining why utility-monster-by-smartness is an exception.) Suppose we have one utility monster and one million others. Everything we do, we do for the one utility monster. Repugnant?
Multiply by nine billion. We now have nine billion utility monsters and 9×10^15 others. Still repugnant?
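The scaling step above is simple arithmetic, but it may help to see it spelled out: multiplying both populations by nine billion leaves the monster-to-others ratio exactly where it started. A minimal sketch (the counts come from the text; the code is just illustration):

```python
# The original thought experiment: one utility monster, a million others.
monsters, others = 1, 10**6

# "Multiply by nine billion."
scale = 9 * 10**9
monsters_scaled = monsters * scale   # nine billion utility monsters
others_scaled = others * scale       # 9 * 10**15 others

# The ratio of others to monsters is unchanged by the scaling.
ratio = others_scaled // monsters_scaled
print(monsters_scaled, others_scaled, ratio)  # prints 9000000000 9000000000000000 1000000
```

So whatever intuition the one-monster case triggers, scaling the numbers up changes nothing about the structure of the trade-off.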
Yet these same enlightened, democratic societies whose philosophers decry the utility monster give approximately zero weight to the well-being of non-humans. We might try not to drive a species extinct, but when contemplating a new hydroelectric dam, nobody adds up the disutility to all the squirrels in the valley to be flooded.
If you believe the utility monster is a problem with utilitarianism, how do you take into account the well-being of squirrels? How about ants? Worms? Bacteria? You've gone to 10^15 others just with ants.[2] Maybe 10^20 with nematodes.
"But humans are different!" our anti-utilitarian complains. "They're so much more intelligent and emotionally complex than nematodes that it would be repugnant to wipe out all humans to save any number of nematodes."
Well, that's what a real utility monster looks like.
The same people who believe this then turn around and say there's a problem with utilitarianism because (when unpacked into a plausible real-life example) it might kill all the nematodes to save one human. Given their beliefs, they should complain about the opposite "problem": For a sufficient number of nematodes, an instantiation of utilitarianism might say not to kill all the nematodes to save one human.
[1] I use the term in a very general way, meaning any action selection system that uses a utility function, which in practice means any rational, deterministic action selection system in which action preferences are totally ordered.
[2] This recent attempt to estimate the number of different living beings of different kinds gives some numbers. The web has many pages claiming there are 10^15 ants, but I haven't found a citation of any original source.