All of Mientras's Comments + Replies

I don't doubt that slow take-off is risky. I rather meant that foom is not guaranteed, and risk due to a not-immediately-omnipotent AI may be more like a catastrophic, painful war.

Isn't there a certain amount of disagreement about whether FOOM is necessarily what will happen?

4Shiroe
People also talk about a slow takeoff being risky. See the "Why Does This Matter" section from here.

I think the comparison to cancer etc. is helpful, thanks.

The suicide option is a somewhat strange but maybe helpful perspective, as it simplifies the original question by splitting it:

  1. Do you consider a life worth living that ends in a situation in which suicide is the best option?
  2. How likely will this be the case for most people in the relatively near future? (Including because of AI.)

Of course everyone can apply their own criteria, but:

  1. I think it is a bit weird to downvote a question, except if the question is extremely stupid. I also would not know a better place to ask it, except maybe the EA Forum.
  2. This is a question about the effects of unaligned AGI, and which kind of world to expect from it. For me that is at least relevant to the question of how I should try to make the world better.
  3. What do you mean by "AI tag"?

Thank you for your comment. It is very helpful. But may I ask what your personal expectations are regarding the world in 2040?

Oh LessWrong people, please explain to me why asking this question got a negative score.

1Vakus Drake
I actually think this is plausibly among the most important questions on LessWrong, hence my strong upvote, as I think the moral utility from having kids pre-singularity may be higher than almost anything else (see my comment).
2the gears to ascension
I downvoted the main question because it's not strongly related to the topic of actually making the world better for millions+ of people, it's just personal life planning stuff. I downvote anything that doesn't warrant the AI tag. I am only one vote, though. (personally I'm gonna have kids at some point after we hopefully get superintelligence and don't die in a couple of years here) edit: because my comment saying this was downvoted, I have undone this downvote and instead strong upvoted. probably shouldn't have downvoted to begin with.

Thanks, but the "helping" part would only help if the kids get old enough and are talented and willing to do so, right? Also, if I were born to become cannonfodder, I would be quite angry, I guess.

Interesting perspective. So you think the lives were unpleasant on average, but still good enough?

6Dagon
Yes. I don't know of any believable estimate of median or mean self-reported happiness except very recently (and even now the data is very sparse), and in fact the entire concept is pretty recent. In any case, Hobbes's "solitary, poor, nasty, brutish, and short" is a reasonable description of most lives, and his caveat of "outside of society" is actually misleading - it's true inside society as well.

I do think that much of the unpleasantness is in our judgement, not theirs, with an implicit comparison to some imaginary life rather than to nonexistence, but it's clear that the vast majority of lives include a lot of pain and loss. And it's clear that suicide has always been somewhat rare, which is a revealed preference for continued life, even in great hardship.

So again, yes. "Worth living" or "good enough" includes lives that are unpleasant in their mean or median momentary experience. I don't know exactly what the threshold is, or whether it's about quantity or quality of the good moments, but I do know (or rather, believe; there's no territory here, it's all map) that most lives were good enough.

Thanks, but I am not convinced that the first AI that turns against humans and wins automatically has to be an AI that is extremely powerful in all dimensions. Skynet may be cartoonish, but why shouldn't the first AI that moves against humankind be one that controls a large part of the US nukes while not being able to manipulate germs?

5JBlack
It seems likely that for strategic reasons an AGI will not act in a hostile manner until it is essentially certain to permanently win. It also seems likely that any means by which it can permanently win with near-certainty will kill most of the population relatively quickly.

Keep in mind that this should be measured in comparison with the end-of-life scenarios that most people would face otherwise: typically dementia, cancer, chronic lung or cardiovascular disease. It seems unlikely that most of the people alive at the start of an AI doom scenario will suffer much worse than that for much longer. If it truly is worse than not being alive at all, suicide will be an option in most scenarios.
4Shiroe
Because, so the argument goes, if the AI is powerful enough to pose any threat at all, then it is surely powerful enough to improve itself (in the slowest case, by coercing or bribing human researchers, until eventually being able to self-modify). Unlike humans, the AI has no skill ceiling, and so the recursive feedback loop of improvement will go FOOM in a relatively short amount of time, though how long that takes is an open question.