Ambitious Texan libertarian. LessWrong seems like a good place to become more rational, which is something I'd love to do, so here I am. Please tell me if I'm not contributing anything meaningful; years of trolling on Discord (which I still do) have somewhat dulled my senses.
Something I've wondered about in relation to AI takeover is whether it's actually useful to treat AI takeover as certain. Consider that the only future in which humans continue to exist -- and thus in which human action is meaningful and our predictions about the future have any practical application -- is the one where AI doesn't take over, or at least doesn't cause extinction.
Sure, speculating about AI takeover makes for an excellent debate topic and worthwhile intellectual engagement, but for all practical purposes, shouldn't we (that is to say, people at large) simply assume that AI will not cause extinction?
That isn't to say I don't think AI is a threat at all; I'm just saying it would be good for people in general to be more optimistic about artificial intelligence -- to believe not that doom is certain, but that it's an avoidable possibility that a lot of very smart people are working hard to prevent.