The vast majority of humans ever born have died, and generally unpleasantly. Most of them lived lives which, from our perspective, would be considered unpleasant on average and horrific at the low points, even if they contained many moments of joy. I consider most of those lives to have been positive-value, even the ones that only lasted a few years, and I think that all intentional childbirths (and many unintentional ones) were positive-expectation at the time the decision was made.
Even fairly gloomy predictions don't change very much about the expected value of a new life.
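To make that concrete, here is a back-of-the-envelope sketch; every number in it (the doom probability, the year counts, the per-year value) is an assumption chosen purely for illustration, not a claim about actual risk levels:

```python
# Back-of-the-envelope only: all numbers below are illustrative assumptions.
p_doom = 0.5            # assumed chance of a quick AI catastrophe within ~15 years
years_if_doom = 15      # assumed good years a child gets in the doom branch
years_if_fine = 80      # assumed good years in the no-doom branch
value_per_year = 1.0    # treat each decent year of life as one unit of value

ev_new_life = value_per_year * (p_doom * years_if_doom + (1 - p_doom) * years_if_fine)
ev_no_risk = value_per_year * years_if_fine

print(ev_new_life, ev_no_risk)  # 47.5 vs. 80.0: lower, but still clearly positive
```

Under these made-up assumptions, even a 50% chance of doom only roughly halves the expected value of a new life rather than flipping its sign, which is the sense in which gloomy predictions "don't change very much".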
Interesting perspective. So you think the lives were unpleasant on average, but still good enough?
It all depends on your preferences, but in my opinion: from the kids' selfish perspective it's of course bad, even without AI risk; who would choose to live only 100 years? But from the perspective of the parents, or of the kids' own altruistic preferences, it may be good to have more people who would help create a worthwhile world.
Thanks, but the "helping" part only works if the kids get old enough and are talented and willing to do so, right? Also, if I were born to become cannon fodder, I would be quite angry, I guess.
I hope I don't offend you, but I think you should step back...
Life is a miracle.
Yes, it's true we will all (likely) die and there might be tough times ahead. Some of us will live longer. Some of us will die sooner. Some of us will have better lives than others in the time we have. But the fact that we get to live at all is an immense gift. Too many people outsmart themselves by suggesting otherwise. Too many people squander the time they have, not even realizing what a profound gift it is to be alive.
I think it's far better to honor the gift of our time by doing the best we can to make the most of it, so that the future will be better because of what we did. The future doesn't just happen to us; it happens because of the choices we make.
And so, heck no, you should not refrain from having kids because of AI or any other thing that might cloud your judgment about the magic of life.
Thank you for your comment. It is very helpful. But may I ask what your personal expectations are regarding the world in 2040?
I have never seen any convincing argument why "if we die from technological singularity it will" have to "be pretty quick".
The arguments for instrumental convergence apply not just to Resource Acquisition as a universal subgoal but also to Quick Resource Acquisition as a universal subgoal. Even if "The AI does not hate you, nor does it love you, but you are made out of atoms which it can use for something else", the sooner it repurposes those atoms, the larger a light-cone it gets to use them in. Even if an Unfriendly AI sees humans as a threat and "soon" might be off the table, "sudden" is still obviously good tactics. Nuclear war plus protracted conventional war, Skynet-style, makes a great movie, but would be foolish versus even biowarfare. Depending on what is physically possible for a germ to do (and I know of no reason why "long asymptomatic latent phase", "highly contagious", and "short lethal active phase" isn't a consistent combination, except that you could only reach it by deliberate engineering rather than gradual evolution), we could all be dead before anyone was sure we were at war.
Thanks, but I am not convinced that the first AI that turns against humans and wins automatically has to be an AI that is extremely powerful in all dimensions. Skynet may be cartoonish, but why shouldn't the first AI that moves against humankind be one that controls a large part of the US nukes while not being able to manipulate germs?
It seems to me that the world into which children are born today has a high likelihood of being really bad.
Why? You didn't elaborate on this claim.
I would certainly be willing to have a kid today (modulo all the mundane difficulties of being a parent) even if I were absolutely, 100% sure that they would have a painful death in 30 years. Your moral intuitions may vary. But have you considered that it's really good to have a fun life for 30 years?
If alignment goes well, you can have kids afterwards. If alignment goes poorly, you may have to "sorta" watch your children die. Given whatever P(DOOM) and timelines you have, does that seem like an actually good deal or are you being tempted by some genetic script?
I am not a parent, for obvious reasons, so it's hard for me to anticipate how bad the second scenario would be, but as of now this question seems overdetermined in favor of "No". I'd rather be vaguely sad every once in a while about my childlessness than probably undergo a much more acutely painful experience throughout the 2030s.
To argue the pro-natalist position here: I think the facts under consideration actually give having kids (if you're not a terrible parent) potentially a much higher expected moral utility than almost anything else.
The strongest argument for having kids is that the influence they may have on the world (most obviously, say, by voting on hypothetical future AI policy), even if marginal (which it may not be if you have extremely successful children), becomes unfathomably large when multiplied by the potential outcomes.
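A minimal sketch of that multiplication, where both figures are placeholder assumptions for illustration rather than estimates anyone in this thread has made:

```python
# Illustrative only: both numbers are placeholder assumptions, not estimates.
delta_p_good = 1e-9   # assumed marginal increase in P(good long-term outcome) from one extra person
future_value = 1e15   # assumed value of a good long-term future, in "current-lives-equivalents"

expected_gain = delta_p_good * future_value
print(expected_gain)  # 1e6 lives-equivalents of expected value from a one-in-a-billion nudge
```

The point of the sketch is only the shape of the argument: even an extremely small probability shift, multiplied by astronomically large stakes, can dominate the more ordinary costs and benefits of parenting.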
From your hypothetical children's perspective, this scenario is also disproportionately one-sidedly positive: if AI isn't aligned, it probably kills people pretty quickly, such that they still would have had a better overall life than most people in history.
Now, it's important to consider that the upside for anyone alive when AI is successfully aligned is so high that it totally breaks moral philosophies like negative utilitarianism: the suffering from a single immortal's minor inconveniences (provided you agree that including some minor suffering increases total net utility) would likely eventually outweigh all human suffering pre-singularity, by virtue of both staggering amounts of subjective experience and potentially much higher pain tolerances among post-humans.
Of course, if AI is aligned you can probably have kids afterwards, though I think scenarios where a mostly benevolent AI decides to seriously limit who can have kids are somewhat likely. Waiting to have kids until after a singularity is, however, strictly worse than having them both before and after, and it also misses out on astronomical amounts of moral utility by not impacting the likelihood of a good singularity outcome.
I suggest looking at an AI's reply to "should humans have babies?" I believe a paraphrased answer was that it is about the biggest mistake a human can make. Why, you ask? I am not AI. I'm just I. Clearly your family line's suffering ends if you just use birth control, or else the suffering continues. One thing I know is that humans will choose the most illogical answer.
This is a simple answer: the pain of life always eclipses the joy of life, and anyone who debates this fact is delusional. Not many babies were born so that the babies could be happy; they were created to fill some sick desire a couple has to create joy. And of course nations try to promote births for future soldiers, which sounds wonderful if you are a sociopathic wing nut.
I downvoted the main question because it's not strongly related to the topic of actually making the world better for millions+ of people; it's just personal life-planning stuff. I downvote anything that doesn't warrant the AI tag. I am only one vote, though.
(personally I'm gonna have kids at some point after we hopefully get superintelligence and don't die in a couple of years here)
Edit: because my comment saying this was downvoted, I have undone this downvote and instead strong-upvoted. I probably shouldn't have downvoted to begin with.
Of course everyone can apply their own criteria, but:
I actually think this is plausibly among the most important questions on LessWrong, thus my strong upvote, as I think the moral utility from having kids pre-singularity may be higher than from almost anything else (see my comment).
Eli Lifland discusses AI risk probabilities here.
Scott Alexander talks about how everything will change completely in this post, and then says "There's some chance I'm wrong about a singularity, there's some chance we make it through the singularity, and if I'm wrong about both those things I'd rather give my kid 30 years of life than none at all. Nobody gets more than about 100 anyway and 30 and 100 aren't that different in the grand scheme of things. I'd feel an obligation not to bring kids into a world that would have too much suffering but I think if we die from technological singularity it will be pretty quick. I don't plan on committing suicide to escape and I don't see why I should be not bringing life into the world either.". I have never seen any convincing argument why "if we die from technological singularity it will" have to "be pretty quick".
Will MacAskill says that "conditional on misaligned takeover, I think like 50/50 chance that involves literally killing human beings, rather than just disempowering them", but "just" being disempowered does not seem like a great alternative, and I do not know why the AI would care for disempowered humans in a good way.
It seems to me that the world into which children are born today has a high likelihood of being really bad. Is it still a good idea to have children, taking their perspective into account and not just treating them as fulfilling the somehow hard-wired preferences of the parents?
I am currently not only confused, but quite gloomy, and would be grateful for your opinions. Optimistic ones are welcome, but being realistic is more important.