If it’s worth saying, but not worth its own post, here's a place to put it.
If you are new to LessWrong, here's the place to introduce yourself. Personal stories, anecdotes, or just general comments on how you found us and what you hope to get from the site and community are invited. This is also the place to discuss feature requests and other ideas you have for the site, if you don't want to write a full top-level post.
If you want to explore the community more, I recommend reading the Library, checking recent Curated posts, seeing if there are any meetups in your area, and checking out the Getting Started section of the LessWrong FAQ. If you want to orient to the content on the site, you can also check out the new Concepts section.
The Open Thread tag is here. The Open Thread sequence is here.
"Thousands of people in history have been convinced by trains of thought of the form 'X is unavoidable, everything is about X, you are screwed'."
Care to give a few examples? Because I'd venture to say that, except for religious and other superstitious beliefs, and except for lunatics like fascists and communists, they were mostly right.
"the future is not certain"
Depends on what you mean by that. If you mean it's not extremely likely, say 90% plus, that we will develop some truly dangerous form of AI this century that will pose immense control challenges, then I'd say you're deeply misguided given the smoke signals we've been seeing since 2017.
I mean, it's like worrying about nuclear war. Is it certain that we'll ever get a big nuclear war? No. Is it extremely likely if things stay the same and if enough time passes (10, 50, 100, 200, 300 years)? Hell yes. I mean, just look at the current situation...
Though I don't worry much about nuclear war, because it's also extremely likely that it would come with a warning, so you could run to the countryside; and even if things went badly, like starving to death or dying of radiation poisoning, you could always put an end to your own suffering. With AI you might not be so lucky. You might end up in an unbreakable dictatorship a la With Folded Hands.
How can you not feel paralyzed when you see chaos pointed at your head and at the heads of other humans, coming in as little as 5 or 10 years, and you see absolutely no solution, let alone anything you can do yourself?
We can't even build a provably safe plane, so how are we going to build a provably safe TAI with the work of a few hundred people over 5-30 years, and with complete ignorance from most of the world?
The world would have to wake up, and I don't think it will.
Really, the only ways we avoid building dangerous and uncontrollable AI are if we destroy ourselves by some other means first (perhaps even just with narrow AI), or if the miracle happens that someone cracks advanced nanotechnology/magic through narrow AI and becomes a benevolent and omnipotent world dictator. There's really no other way we won't end up doing it.