Isn't there a certain amount of disagreement about whether FOOM is the necessary thing to happen?
I think the comparison to cancer etc. is helpful, thanks.
The suicide option is a somewhat strange but maybe helpful perspective, as it simplifies the original question by splitting it:
Of course everyone can apply their own criteria, but:
Thank you for your comment. It is very helpful. But may I ask what your personal expectations are regarding the world in 2040?
Oh LessWrong people, please explain to me why asking this question got a negative score.
Thanks, but the "helping" part would only help if the kids grow old enough and are talented and willing to do so, right? Also, if I were born to become cannon fodder, I would be quite angry, I guess.
Interesting perspective. So you think the lives were unpleasant on average, but still good enough?
Thanks, but I am not convinced that the first AI that turns against humans and wins automatically has to be an AI that is extremely powerful in all dimensions. Skynet may be cartoonish, but why shouldn't the first AI that moves against humankind be one that controls a large part of the US nukes while not being able to manipulate germs?
I don't doubt that slow take-off is risky. I rather meant that FOOM is not guaranteed, and the risk due to a not-immediately-omnipotent AI may be more like a catastrophic, painful war.