If it’s worth saying, but not worth its own post, here's a place to put it.
If you are new to LessWrong, here's the place to introduce yourself. Personal stories, anecdotes, or just general comments on how you found us and what you hope to get from the site and community are invited. This is also the place to discuss feature requests and other ideas you have for the site, if you don't want to write a full top-level post.
If you want to explore the community more, I recommend reading the Library, checking recent Curated posts, seeing if there are any meetups in your area, and checking out the Getting Started section of the LessWrong FAQ. If you want to orient to the content on the site, you can also check out the new Concepts section.
The Open Thread tag is here. The Open Thread sequence is here.
Interesting stuff. And I agree. Once you have a nanosystem or something of equivalent power, humans are no longer any threat. But we don't yet know whether such a thing is physically possible. I know many here think so, but I still have my doubts.
Maybe it's even more likely that some random narrow AI failure will start big wars before anything fancier happens. Although with the scaling hypothesis in sight, AGI could indeed come suddenly.
"This is basically as bad as also killing everyone, because we'd still be imprisoned away from our largest possible impact."
Although I quite disagree with this. I'm not a huge supporter of our largest possible impact. I think it's naive to attribute any net positive expectation to that when you look at history or at the present. In fact, such an outcome (things staying exactly the same forever) would probably be among the most positive ones in the advent of a non-aligned AI. As long as we could still take care of Earth, like ending factory farming and dictatorships, it really wouldn't be that bad...