I think you should have a kid if you would have wanted one without recent AI progress. Timelines are still very uncertain, and strong AGI could still be decades away. Parenthood is strongly value-creating and extremely rewarding (if hard at times), and that's true in many, many worlds.
In fact it's hard to find probable worlds where having kids is a really bad idea, IMO. If we solve alignment and end up in AI utopia, having kids is great! If we don't solve alignment and EY is right about what happens in a fast takeoff world, it doesn't really matter if you have kids or not.
In that sense, it's basically a freeroll, though of course there are intermediate outcomes. I don't immediately see any strong argument in favor of not having kids if you would otherwise want them.
If we don't solve alignment and EY is right about what happens in a fast takeoff world, it doesn't really matter if you have kids or not.
This IMO misses the obvious fact that you spend your life in a lot more anguish if you think that not just you but also your kid is going to die. I don't have a kid, but everyone who does seems to describe a feeling of protectiveness that transcends the standard "I really care about this person" feeling you could have for just about anyone else.
+ the obvious fact that it might matter to the kid that they're going to die
(edit: fwiw I broadly think people who want to have kids should have kids)
I'm sure this varies by kid, but I just asked my two older kids, age 9 and 7, and they both said they're very glad that we decided to have them even if the world ends and everyone dies at some point in the next few years.
Which makes lots of sense to me: they seem quite happy, and it's not surprising they would prefer existing, even if not for a full lifetime, to never getting to exist at all.
I agree that it's bad to raise a child in an environment of extreme anxiety. Don't do that.
Also try to avoid being very doomy and anxious in general, it's not a healthy state to be in. (Easier said than done, I realize.)
Having kids does mean less time to help AI go well, so maybe it's not such a good idea if you're one of the people doing alignment work.
I agree with this take. I already have four children, and I wouldn't decide against children because of AI risks.
In fact it's hard to find probable worlds where having kids is a really bad idea, IMO.
One scenario where you might want to have kids in general, but not if timelines are short, is if you feel positive about having kids but view the first few years as a chore (i.e., they cost you time, sleep, and money). So if you view kids as an investment of the form "take a hit to your happiness now, get more happiness back later", then not having kids now seems justifiable. But I think that this sort of reasoning requires pretty short timelines (which I...
Empirically my answer to this is yes: I'm due in January with my second.
When I had my first child, I was thinking in terms of longer timelines. I assumed before having them that it would not be worth having a child if the world ended within a few years of their birth, because I would be less happy and the child's own utility wouldn't amount to much until later.
One month after my first baby was born, I had a sudden and very deep feeling that if the world ended tomorrow, it would have been worth it.
YMMV of course, but having kids can be a very deep human experience that pays off much sooner than you might think.
If I was in a relationship where everyone involved wanted a kid and I believed the kid would have a good chance of having positive role models and the kind of environment I'd wish for someone I love throughout their formative years, yes.
The "what if my child can't do intellectual labor because of AI?" question is, IMO, a very similar shape of risk to "what if my child can't do intellectual labor because they have an intellectual disability?".
If you'd love a kid even if they turned out to be in a low percentile of society intellectually, then you're ready for a kid regardless of whether the world you're bringing it into happens to have AI smarter than it. If your desire to add to your family is contingent on assumptions about how the new addition's abilities would compare to those of other agents it interacts with, it might be worth having a good think about whether that's a childhood environment that you would wish upon a person whom you love.
You're providing no evidence that superintelligence is likely in the next 30 years other than a Yudkowsky tweet. I expect that in 30 years we will not have superintelligence (of the sort that can build the stack to run itself on, grow at a fast rate, take over the solar system, etc.).
There has been enough discussion about timelines that it doesn't make sense to provide evidence about it in a post like this. Most people on this site have already formed views about timelines, and for many, these are much shorter than 30 years. Hopefully, readers of this site are ready to change their views if strong evidence in either direction appears, but I don't think it is fair to expect a post like this to also include evidence about timelines.
The post is phrased as "do you think it's a good idea to have kids given timelines?". I've said why I'm not convinced timelines should be relevant to having kids. I think if people are getting their views by copying Eliezer Yudkowsky or copying people who copy his views (which I'm not sure OP is doing), then they should get better epistemology.
From 2018, the AI timelines section of mediangroup.org/research:
Modeling AI progress through insights
We assembled a list of major technical insights in the history of progress in AI and metadata on the discoverer(s) of each insight.
Based on this dataset, we developed an interactive model that calculates the time it would take to reach the culmination of all AI research, based on a guess at what percentage of AI discoveries have been made.
AI Insights dataset: data (json file), schema
Feasibility of Training an AGI using Deep Reinforcement Learning: A Very Rough Estimate
Several months ago, we were presented with a scenario for how artificial general intelligence (AGI) may be achieved in the near future. We found the approach surprising, so we attempted to produce a rough model to investigate its feasibility. The document presents the model and its conclusions.
The usual clichés about the folly of trying to predict the future go without saying, and this shouldn't be treated as a rigorous estimate, but hopefully it can give a loose, rough sense of some of the relevant quantities involved. The notebook and the data used for it can be found in the Median Group numbers repo.
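For a rough sense of how an extrapolation like the interactive model described above could work, here is a minimal, purely illustrative sketch. It is not Median Group's actual model: the constant-rate assumption, the function name, and the example discovery years are all my own hypothetical choices.

```python
# Illustrative sketch only (not Median Group's actual model): assume insights
# arrive at a constant historical rate, and that a guessed fraction of all
# insights needed for AGI has already been discovered.

def estimate_completion_year(insight_years, frac_discovered):
    """Extrapolate the year by which the remaining insights would be found.

    insight_years: years in which past insights were discovered (e.g. drawn
                   from a dataset like the insights JSON mentioned above).
    frac_discovered: guess at the fraction of all needed insights found so far.
    """
    n_found = len(insight_years)
    span = max(insight_years) - min(insight_years)
    rate = n_found / span                    # insights per year, assumed constant
    n_total = n_found / frac_discovered      # implied total number of insights
    n_remaining = n_total - n_found
    return max(insight_years) + n_remaining / rate

# Hypothetical usage with made-up discovery years:
years = [1956, 1969, 1974, 1986, 1997, 2006, 2012, 2014, 2017]
print(estimate_completion_year(years, frac_discovered=0.5))  # ~2078 under these assumptions
```

The main point of such a toy version is that the output is extremely sensitive to the guessed fraction of discoveries already made, which is presumably why the real model makes that guess an interactive input.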
An aspect that I would not take into account is the expected impact of your children.
Most importantly, it just seems wrong to make personal-happiness decisions subservient to impact.
But even if you did want to optimise impact through others, betting on your children seems riskier and less effective than, for example, engaging with interested students. (And even if you wanted to optimise impact at all costs, the key factors might not be your impact through others, but instead (i) your opportunity costs, (ii) second-order effects, where having kids makes you more or less happy and this changes the impact of your work, and (iii) negative second-order effects that "sacrificing personal happiness because of impact" has on the perception of the community.)
Having kids will make you care more about the future.
If a friend or partner wanted to have a child with me I'd potentially be down. Though we would need to have extremely aligned views on non-coercive parenting. Also I'm a sleepy boy, so we gotta be aligned on low-effort parenting too.
Yes. The more the merrier.
PSA: you have less control over whether you have kids, or how many you get, than people generally believe. There are biological problems you might not know you have, there are women who lie about contraception, there are hormonal pressures you won't feel till you reach a certain age, there are twins and stillbirth, and most of all there are super horny split second decisions in the literal heat of the moment that your system 2 is too slow to stop.
I understand this doesn't answer the question, I just took the opportunity to share a piece of information that I consider not well-understood enough. Please have a plan for the scenario where your reproductive strategy doesn't work out.
There are biological problems you might not know you have, there are women who lie about contraception, there are hormonal pressures you won't feel till you reach a certain age, there are twins and stillbirth, and most of all there are super horny split second decisions in the literal heat of the moment that your system 2 is too slow to stop.
This is absolutely nonsense IMO for any couple of grown-ups of at least average intelligence who trust each other. People plan children all the time and are often successful; with a little knowledge and foresight I don't think the risk of having unplanned children is very high.
I look at this, having long ago adopted a combination of transhumanism and antinatalism: we have a real chance of achieving something much better than the natural human condition, but meanwhile, this is not a kind of existence in which one should create a life. Back in 2012, I wrote:
We are in the process of gaining new powers and learning new things, there are obvious unknowns in front of us that we are on the way to figuring out, so at least hold off until they have been figured out and we have a better idea of what reality is about, and what we can really hope for, from existence.
As a believer in short timelines (0-5 years until superintelligence), I don't think there is much more time to wait. The AI era has arrived, and a new ecosystem of mind is taking shape around us. It may become very bad for human beings just thanks to plain old Darwinian competition, to say nothing of superintelligences with unfriendly value systems. We are now all hostage to how this transformation turns out.
Given how fast AI is advancing and all the uncertainty associated with that (unemployment, potential international conflict, x-risk, etc.), do you think it's a good idea to have a baby now? What factors would you take into account (e.g. age)?
Today I saw a tweet by Eliezer Yudkowsky that made me think about this:
"When was the last human being born who'd ever grow into being employable at intellectual labor? 2016? 2020?"
https://twitter.com/ESYudkowsky/status/1738591522830889275
Any advice for how to approach such a discussion with somebody who is not at all familiar with the topics discussed on lesswrong?
What if the option "wait for several years and then decide" is not available?