AI labs are on a quest to bring a prosperous, wonderful future to all men and women of the world, without disease or suffering, altruistically building machines that will shine knowledge, prosperity, and splendour onto the universe. Their glorious leaders are fighting against adversity, towards ..... - and other egregious baloney, you know.
You are smart enough to see through the BS, the wishful thinking, and the self-deceit. There is no magic in the world. This thing will explode and you will be hurt, maybe worse than others - because your conscience will not forgive you, and in your dying moment you will be engulfed by sorrow.
That is... unless you quit.
If YOU, the reader, employed at Anthropic, Google, Baidu, Microsoft, HuggingFace, Meta, etc., quit, this will not happen. We will not become a galaxy-faring species during your lifetime either, and that's actually OK.
Don't fool yourself. You are not going to get the light cone of the universe. You know how all these rushed kerfuffles end.
You are a free person, and you can quit today.
In general, appeals to people of the form "You already agree with me that this is right; you're just lying to yourself, and that's why you don't do it" are not apt to be well-received. Such appeals combine the belief that the right thing to do is obvious (which is often false) with the belief that the person you are addressing is actually deceiving themselves (which is also often false).
Consider:
"You already know Jesus is God, you're resisting His grace because of your attachment to sin, but I know you can turn away from the darkness." Or "You already know Christianity is false, you're just staying in it because it is comfortable, I know you can face the truth of atheism."
Both come from opposite perspectives, yet both are quite irritating to hear, and both include statements about the listener's internal state that are often just false. I think this kind of appeal is the result of expecting short inferential distances.