I still don't understand what that means. Are you talking about believing that other people should have particular ethical views and it's bad if they don't?
I'm trying to say that other people are going to disagree with you or me about how to assess whether a given life is worth continuing or worth bringing into existence (big difference according to some views!), and on how to rank populations that differ in size and the quality of the lives in them. These are questions that the discipline of population ethics deals with, and my point is that there's no right answer (and probably also no "safe" answer where you won't end up disagreeing with others).
This^^ is all about a "morality as altruism" view, where you contemplate what it means to "make the world better for other beings." I think this part is subjective.
There is also a very prominent "morality as cooperation/contract" view, where you contemplate the implications of decision algorithms correlating with each other and notice that it might be a bad idea to adhere to principles that lead to outcomes worse for everyone in expectation, provided that other people (in sufficiently similar situations) follow the same principles. This is where people start with whatever goals/preferences they have and derive reasons to be nice and civil to others (provided they are on an equal footing) from decision theory and stuff. I wholeheartedly agree with all of this and would even say it's "objective" – but I would call it something like "pragmatics for civil society" or maybe "decision-theoretic reasons for cooperation," not "morality," which is the term I reserve for (ways of) caring about the well-being of others.
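A toy worked example of that decision-theoretic point (the payoff numbers are hypothetical, chosen only to make the structure visible): take a prisoner's dilemma with payoffs

\[
(C,C) \mapsto (3,3), \quad (C,D) \mapsto (0,5), \quad (D,C) \mapsto (5,0), \quad (D,D) \mapsto (1,1).
\]

If the two players' decision algorithms are independent, defection dominates: whatever the other does, $D$ pays more. But if the algorithms are (near-)perfectly correlated, each player is effectively choosing between the diagonal outcomes $(3,3)$ and $(1,1)$, so the cooperative principle wins in expectation. That is the sense in which everyone adhering to a "defect" principle leaves everyone worse off.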
It's pretty clear that "killing everyone on earth" is not in most people's interest, and I appreciate that people are pointing this out to the OP. However, I think the replies are missing a second dimension: whether we should be morally glad about the world as it currently exists, and whether e.g. we should make more worlds exactly like ours, for the sake of the not-yet-born inhabitants of those new worlds. This is what I compared to voting on what the universe's playlist of experience moments should be like.
But I'm starting to dislike the analogy. Let's say that existing people have aesthetic preferences about how to allocate resources (this includes things like wanting to rebuild galaxies into a huge replica of Simpsons characters because it's cool). Of these, a subset are simultaneously also moral preferences, in that they are motivated by a desire to do good for others; and these moral preferences can differ in whether they count it as important to bring about new happy beings, or in how much extra happiness is needed to altruistically "compensate" (if that's even possible) for the harm of a given amount of suffering, etc. The domain where people compare each other's moral preferences and try to see if they can get more convergence through arguments and intuition pumps, in the same sense as someone might start to appreciate Mozart more after studying music theory or whatever, is population ethics (or "moral axiology").
other people are going to disagree with you or me
Of course, that's a given.
These are questions that the discipline of population ethics deals with
So is this discipline basically about the ethics of imposing particular choices on other people (aka the "population")? That would make it the ethics of power, or the ethics of rulers.
You also call it "morality as altruism", but I think there is a great deal of difference between having power to impose your own perceptions of "better" ("it's for your own good") and...
I've started listening to the audiobook of Peter Singer's Ethics in the Real World, which is both highly recommended and very unsettling. The essays on non-human animals, for example, made me realize for the first time that it may well be possible that the net utility on Earth over all conscious creatures is massively negative.
Naturally, this led me to wonder whether, after all, efforts to eradicate all consciousness on Earth - human and non-human - may be ethically endorsable. This, in turn, reminded me of a recent post on LW asking whether the possibility of parallelized torture of future uploads justifies killing as many people as possible today.
I had responded to that post by mentioning that parallelizing euphoria was also possible, so the two should cancel out. This seemed at the time like a refutation, but I later realized I had made the error of treating utility and disutility as symmetric halves of the same smooth continuum, like the interval [-100, 100] ⊂ ℝ. There is no reason to believe the maximum disutility I can experience equals in magnitude the maximum utility I can experience. It may be that max disutility is far greater. I really don't know, and I don't think introspection is as useful in answering this question as it intuitively seems to be, but it seems quite plausible that this is the case.
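To make the asymmetry concrete (the bounds below are hypothetical, picked only for illustration): suppose each tortured copy sits at the worst attainable welfare $u_{\min}$ and each euphoric copy at the best attainable welfare $u_{\max}$. Pairing $n$ of each gives a total of

\[
n\,u_{\min} + n\,u_{\max} = n\,(u_{\min} + u_{\max}),
\]

which cancels to zero only if $u_{\max} = -u_{\min}$. If instead, say, $u_{\min} = -1000$ while $u_{\max} = 100$, the pairing leaves a net $-900n$, growing without bound as more copies are parallelized.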
As these thoughts were emerging, Singer, as if hearing my concerns, quoted someone or other who claimed that the human condition is one of perpetual suffering, constantly chasing desires whose fulfillment is ephemeral and dissatisfying, and that it is therefore a morally tragic outcome for any of us to have emerged into existence.
Of course these are shoddy arguments for Mass Planetary Biocide (MPB), even granting the hypothesis that Earth (the universe?) has net negative utility. For one, we could engineer minds in a better neighborhood of mindspace, where utility is everywhere positive. Or maybe it's impossible even in theory to treat utility and disutility as real-valued functions of physical systems over time (though I'd bet it is possible). Or maybe the universe is canonically infinite, so that even if 99% of conscious experiences have disutility, there are infinite quantities of both utility and disutility and nothing we do changes the total, as Bostrom wrote about. (Though this is not an argument against MPB, just not one for it.) And anyway, the net utility today matters far less than the net utility the future could hold. And perhaps utilitarianism is a naive and incorrect ethical framework.
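A sketch of that infinite case (the standard rearrangement argument, not anything specific to Bostrom's paper): if the universe contains infinitely many experience-moments of value $+1$ and infinitely many of value $-1$, the "total utility" depends entirely on the order of summation:

\[
(1 + 1 - 1) + (1 + 1 - 1) + \cdots \to +\infty,
\qquad
(1 - 1 - 1) + (1 - 1 - 1) + \cdots \to -\infty,
\]

with the same multiset of terms in both. Since no finite action changes either tally, naive aggregation delivers no verdict at all.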
Still, I had somehow always assumed implicitly that net utility of life on Earth was positive, so the realization that this need not be so is causing me significant disutility.