simply wanting to create lives without considering living conditions does not seem to take this into account
I don't think any of the people who support creating more lives believe we should do so regardless of living conditions, though they may assume that most human lives are worth living and that it takes exceptionally bad conditions for someone's life to become not worth living.
Typically, people may also assume that technological and societal progress continues, making it even more likely than today that the average person will have a life worth living. E.g. Nick Bostrom's paper Astronomical Waste notes, when discussing a speculative future human civilization capable of settling other galaxies:
I am assuming here that the human lives that could have been created would have been worthwhile ones. Since it is commonly supposed that even current human lives are typically worthwhile, this is a weak assumption. Any civilization advanced enough to colonize the local supercluster would likely also have the ability to establish at least the minimally favorable conditions required for future lives to be worth living.
In general, you easily end up with "maximizing human lives is good (up to a point)" as a conclusion if you accept some simple premises like:
1. It is good to have lives worth living.
2. Most new humans will have lives that are worth living.
3. It is better to have more of a good thing than less of it.
Thus, if it's good to have lives worth living (1) and most new humans will have lives that are worth living (2), then creating new lives will be mostly good. If it's better to have more of a good thing than less of it (3), and creating new lives will be mostly good, then it's better to create new lives than not to.
Now it's true that at some point we'll probably run into resource or other constraints, so that the median new life won't be worth living anymore. But I think anyone talking about maximizing life is just taking it as obvious that the maximization goal only holds up to a certain point.
(Of course it's possible to dispute some of these premises - see e.g. here or here for arguments against. But it's also possible to accept them.)
- it is possible that maximizing animal life, or perhaps alien or artificial life, would create more utility, as these lives might be optimized with way less effort
Some of the people wanting to create more human lives might indeed agree with this! For instance, when they say "human", they might actually have in mind some technologically enhanced posthuman species that's a successor for our current species.
On the other hand, it's also possible that people who say this just intrinsically value humans in particular.
It seems to me that, separately from whether we accept or reject premises #1 and/or #2,[1] we should notice that premise #3 has an equivocation built into it.
Namely, it does not specify: better for whom?
After all, it makes no sense to speak of things being simply “better”, without some agent or entity whose evaluations we take to be our metric for goodness. But if we attempt to fully specify premise #3, we run into difficulties.
We could say: “it is better for a person to have more of a good[2] thing than to have less of it”. And, sure, it is. But then whe...
Kaj_Sotala provided a good answer, but I want to give an intuitive example:
If you could decide between:
A: a single person lives on Earth, supported by aligned AGI, with its knowledge and all the resources of the planet in service of nothing but his welfare, living in an abundance not even the greatest emperors ever dreamed of.
B: a civilization of tens of billions living on Earth, supported by aligned AGI, thanks to which all of them have at least the living standard of a current upper-middle class American.
I believe most people would choose option B. Of course, this is not independent of living conditions (and is greatly influenced by anchoring), but for me it captures the general "feeling" of the idea. I would formulate it along the lines of: "due to diminishing returns, spending resources on raising living standards above a certain level is wasteful; more goodness/utility is created if other humans are included".
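To make the diminishing-returns point concrete, here is a minimal toy sketch (my own illustration, with made-up numbers and a logarithmic per-person utility function as the assumption): a fixed pool of resources produces far more total utility when shared among many people than when concentrated on a single person.

```python
import math

TOTAL_RESOURCES = 1e9  # arbitrary units; the number is made up for illustration

def life_utility(resources: float) -> float:
    """Toy per-person welfare: logarithmic, so each extra unit of
    resources adds less welfare than the previous one (diminishing returns)."""
    return math.log(1 + resources)

def total_utility(population: int) -> float:
    """Split the resource pool evenly and sum everyone's welfare."""
    share = TOTAL_RESOURCES / population
    return population * life_utility(share)

print(total_utility(1))               # one "emperor": ~20.7
print(total_utility(10_000_000_000))  # ten billion people: ~9.5e8
```

Of course, a real utility function would also need a threshold below which lives stop being worth living; the sketch only illustrates the diminishing-returns part of the intuition.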
I would also like to suggest reading a sci-fi short story by one of the LessWrongers, which deals a lot with this question (among other things that are also memeworthy), especially in chapter 3:
https://timunderwoodscifi.wordpress.com/index/
I'm not sure where you get this idea from. Certainly I've seen people argue that, within some range of conditions, more humans are better. But not as a general claim that escapes the sorts of caveats you mentioned.
Eh? This is a very common idea on Less Wrong and in rationalist spaces. Have you really not encountered it? Like, a lot?
"Or perhaps even: that preventing humans from being born is as bad as killing living humans."
I'm not sure if this is what you were looking for, but here are some thoughts on the "all else equal" version of the above statement. Suppose that Alice is the only person in the universe. Suppose that Alice would, conditional on you not intervening, live a really great life of 100 years. Now on the 50th birthday of Alice, you (a god-being) have the option to painlessly end Alice's life, and in place of her to create a totally new person, let's call this person Bob, who comes into existence as a 50-year old with a full set of equally happy (but totally different) memories, and who (you know) has an equally great life ahead of them as Alice would have if you choose not to intervene. (All this assumes that interpersonal comparisons of what a "great" life is make sense. I personally highly doubt one can do anything interesting in ethics without such a notion; this is just to let people know about a possible point of rejecting this argument.)
Do you think it is bad to intervene in this way? (My position is that intervening is morally neutral.) If you think it is bad to intervene, then consider intervening twice in short succession, once painlessly replacing Alice with Bob, and then painlessly replacing Bob with Alice again. Would this be even worse? Since this double-swapping process gives an essentially identical (block) universe as just doing nothing, I have a hard time seeing how anything significantly bad could have happened.
Or consider a situation in which this universe had laws of nature such that Alice was to "naturally" turn into Bob on her 50th birthday without any intervention by you. Would you then be justified in immediately swapping Alice and Bob again to prevent Alice from being "killed"?
(Of course, the usual conditions of killing someone vs creating a new person are very much non-equivalent in practice in the various ways in which the above situation was constructed to be equivalent. Approximately no one thinks that never having a baby is as bad as having a baby and then killing them.)
To me, it's about maximizing utility.
Would you want to be killed today? That's how much you value life over non-existence.
How would you react if a loved one were to be killed today? Same as above: that's how much you value their life over non-existence.
Almost everybody agrees that life has value, considerable value, over non-existence. Hence, under some commonly agreed (if arbitrary) utility function, giving life to somebody, giving existence to somebody, probably outweighs all the other good deeds you could do in a lifetime, just as murder would probably outweigh all the good deeds you did over your lifetime.
The comfort of one's life is definitely important, but I'd bet the majority of depressed people still don't want to die. There's a large margin before life becomes so bad that you'd want to die, and even then, you'd still have to count the (usually) large positive part of your life in which you still wanted to live.
Hence,
you should also actually create the utility for all these new lives
might not be a problem if life itself, just being conscious, has an almost infinite weight under most living conditions. In our arbitrary utility function, being an African kid rummaging through a dump might have a weight of 1,000,000 while being a Finnish kid born into a loving and wealthy family might have a weight of 1,100,000 at the very best (it could very well be lower, depending on opinions and the kids' trajectories).
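To spell out the arithmetic behind this (using the illustrative weights above, which are of course made up): if the baseline value of simply existing dwarfs the gap between a hard life and a comfortable one, then creating a new life adds far more utility than maximally improving an existing one.

```python
# Toy arithmetic with the illustrative weights from the comment above (not real data).
BASELINE_LIFE = 1_000_000   # a hard life that is still very much worth living
BEST_CASE_LIFE = 1_100_000  # a very comfortable life, at the very best

# Improving one existing life from baseline to best case:
improvement_gain = BEST_CASE_LIFE - BASELINE_LIFE  # 100_000

# Creating one additional baseline life:
creation_gain = BASELINE_LIFE                      # 1_000_000

# Under these weights, creating a life adds ~10x the utility of
# maximally improving an already-existing one.
print(improvement_gain, creation_gain)
```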
it is possible that maximizing animal life, or perhaps alien or artificial life, would create more utility, as these lives might be optimized with way less effort
Whereas everybody agrees on the value of human life, not everybody will agree about the value of animal life (raise your hand if you ate chicken or beef or fish in the past weeks). Artificial life, certainly, unless the solution to the hard problem of consciousness rules out consciousness for some types of artificial life.
But the way you stated the idea might not describe how people actually feel about it: instead of "human life should be maximized", I lean more toward "human life should not be minimized" or "it's a good thing to increase human life".
More humans means more knowledge. There will be more people doing basic research and adding to humanity's collective knowledge, which will improve collective wellbeing. There are efficiencies of scale, as someone else mentioned; in the book Scale, the author notes that productivity in cities scales superlinearly with population, with an exponent of about 1.15. There is an upper limit, though the current population growth rate is well below the peak, and a much higher growth rate than today's could very likely be sustained.
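For reference, here is a minimal sketch of what that superlinear scaling claim implies (the 1.15 exponent is the figure cited from Scale; the example city sizes are made up for illustration): doubling a city's population more than doubles its aggregate output, i.e. roughly 11% more output per capita.

```python
def city_output(population: float, exponent: float = 1.15) -> float:
    """Aggregate urban output under the superlinear scaling law: output ~ population ** 1.15."""
    return population ** exponent

small, large = 1_000_000, 2_000_000  # hypothetical city sizes
ratio = city_output(large) / city_output(small)
print(ratio)      # ~2.22: doubling the population more than doubles total output
print(ratio / 2)  # ~1.11: roughly 11% more output per person
```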
If ever greater numbers of possible human-level minds are created, their creation becomes computationally easier due to economies of scale, shared elements, and opportunities for data compression.
Expected utility maximization is only applicable when utility is known. When it's not, various anti-Goodharting considerations become more important: maintaining the ability to further develop our understanding of utility/values without leaning too much on any current guess of what that is going to be. Keeping humans in control of our future is useful for that, but instrumentally convergent actions such as grabbing the matter in the future lightcone (without destroying potentially morally relevant information such as aliens) and moving decision-making to a better substrate are also helpful for whatever our values eventually settle on. The process should be corrigible; it should allow replacing humans-in-control with something better as our understanding of what that is improves (without getting locked into that either). The AI risk is about failing to set up this process.
I think there is a confusion about what is meant by utilitarianism and utility.
Consider the moral principle that moral value is local to the individual, in the sense that there is some function F: Individual minds -> Real numbers such that the total utility of the universe is the sum of F over all the minds in it. Alice having an enjoyable life is good, and the amount of goodness doesn't depend on Bob. This is a real restriction on the space of utility functions. It says that you should be indifferent between (a coin toss between both Alice and Bob existing and neither Alice nor Bob existing) and (a coin toss between Alice existing and Bob existing), at least on the condition that Alice's and Bob's quality of life is the same if they do exist in either scenario, and that no one else is affected by Alice's or Bob's existence.
Under this principle, a utopia of a million times the size is a million times as good.
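To make the coin-toss indifference claim explicit, here is a toy sketch under the additivity principle above (the per-person values are made up; giving Alice and Bob equal values encodes the "same quality of life" condition):

```python
# World utility = sum of F over the minds that exist in it (the additivity principle).
F = {"Alice": 10.0, "Bob": 10.0}  # made-up per-person values; equal by assumption

def world_utility(minds) -> float:
    return sum(F[m] for m in minds)

def expected_utility(lottery) -> float:
    """lottery: list of (probability, list_of_existing_minds) pairs."""
    return sum(p * world_utility(world) for p, world in lottery)

# Coin toss between (both Alice and Bob exist) and (neither exists):
lottery_1 = [(0.5, ["Alice", "Bob"]), (0.5, [])]
# Coin toss between (only Alice exists) and (only Bob exists):
lottery_2 = [(0.5, ["Alice"]), (0.5, ["Bob"])]

print(expected_utility(lottery_1))  # 10.0
print(expected_utility(lottery_2))  # 10.0 -- additivity makes the two lotteries equally good
```

The "a utopia a million times the size is a million times as good" conclusion is just this same additivity applied to a million equally good lives.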
I agree that wanting to create new lives regardless of living conditions is wrong. There is a general assumption that the lives will be worth living. In friendly superintelligence singleton scenarios, this becomes massively overdetermined.
Utility is not some ethereal substance that exists in humans and animals, of which animals might conceivably contain more.
It is possible there is some animal or artificial mind such that if we truly understood the neurology, we would prefer to fill the universe with them.
Often "humans" is short for "beings similar to current humans except for this wish list of improvements". (Posthumans) There is some disagreement over how radical these upgrades should be.
I only partly value maximizing human life, but I'll comment anyway.
Where the harm done seems comparatively low, it makes sense to increase the capacity for human lives. Whether that capacity actually goes into increasing population or improving the lives of those that already exist is a decision that others can make. Interfering with individual decisions about whether or not new humans should be born seems much more fraught with likelihoods of excess harms. Division of the created capacity seems more of a fiddly social and political problem than a wide-view one in the scope of this question.
The main problem is that, on this planet, there is a partial trade-off between the capacity for humans to live and the capacity for other species to live. I unapologetically favour sapient species there. Not to the exclusion of all else, and particularly not to the point of endangerment or extinction, but I definitely value a population of a million kangaroos and two million humans (or friendly AGIs or aliens) more than ten million kangaroos and one million humans. There is no exact ratio here, and I could hypothetically support some people who are better than humans (and not inimical to humans) having greater population capacity, though I would hope that humans would be able to improve over time, just as I hope they do in the real world.
In the long term, I do think we should spread out from our planet, and be "grabby" in that weak sense. The cost in terms of harm to other beings seems very much lower than on our planet, since as far as we can tell, the universe is otherwise very empty of life.
If we ever encounter other sapient species, I would hope that we could coexist instead of anything that results in the extinction or subjugation of either. If that's not to be, then it may help to already have the resources of a few galactic superclusters for the inevitable conflict, but I don't see that as a primary reason to value spreading out.
There is an idea, or maybe an assumption, that I've seen mentioned in many Lesswrong posts. This is the idea that human life should be maximized: that one of our goals should be to create as many humans as possible. Or perhaps even: that preventing humans from being born is as bad as killing living humans.
I've seen this idea used to argue for a larger point, but I haven't yet seen arguments to justify the idea itself. I only have some directional notions:
- simply wanting to create lives without considering living conditions does not seem to take this into account
- you should also actually create the utility for all these new lives
- it is possible that maximizing animal life, or perhaps alien or artificial life, would create more utility, as these lives might be optimized with way less effort
So I would like to hear, from people who actually hold the "maximizing human life" position, some of your explanations for why. (Or pointers to a source or a framework that explains it.)