You may want to reconsider the way you write.
Your post has a lot of questionable statements supported by zero references, which indicates that you did not do your research properly, if at all. These statements are loosely coupled, and it is not at all clear why they are put together in the same post. Well, it is clear, actually: they are all excerpts from your book. The book's subtitle, "This, dude, is the most important book of your life", sounds like a failed sales pitch and does not inspire confidence in its content.
Your conclusion that "promoting a vision of a future galactic supercivilization with immortal people could motivate people now to fight global risks in all their forms" does not follow from your rhetoric and seems more like wishful thinking.
May I suggest that, instead of reinventing the wheel and pushing your views here, you start by reading through the relevant Sequences?
Your conclusion that "promoting a vision of a future galactic supercivilization with immortal people could motivate people now to fight global risks in all their forms" does not follow from your rhetoric and seems more like wishful thinking.
The conclusion is probably true, though the Glorious Future already has many promoters.
The conclusion is probably true
I would like to see some experimental evidence. In my experience, it tends to be dismissed as science fiction.
Clearly the view has some traction. It seems likely that proselytisation would help.
Whether it is an effective thing to do is a bit of a different question.
I don't want to sound like a jerk, but your post is full of grammatical errors. E.g. you often forget to use "a" or "the". There are other problems with your post, but I guess improving your writing style and grammar would make your post more convincing and readable.
A good idea implied in this post is that some movements or forms of activism are catalyzed by identification with a group, and so using this mechanism to scale activism requires a core group and idea that people can start identifying with. This seems plausible and unrelated to most statements made in the post.
So promoting a vision of a future galactic supercivilization with immortal people could motivate people now to fight global risks in all their forms.
That's pretty much what Eliezer's fun theory sequence is about.
The chances of preventing a global catastrophe grow if humans have that goal. This is a semi-trivial conclusion.
I agree, but this is far from trivial. Due to biases, perverse incentives, and a plain lack of cognitive power, humans have made specific things quite a bit worse by trying to optimize them.
The chances of preventing a global catastrophe grow if humans have that goal. This is a semi-trivial conclusion. But the main question is: who should have such a goal?
Of course, if we had a global government, its main goal should be the prevention of global catastrophe. But we do not have a global government, and most people hate the idea. I find this irrational. But any discussion of global government is purely theoretical, because I do not see any peaceful way of creating one.
If a friendly AI takes over the world, it will become a de facto global government.
Or, if an imminent global risk were recognized (say, an approaching asteroid), the UN could temporarily transform into some kind of global government.
But some people think that a global government would itself be, or would soon lead to, a global catastrophe, because it could easily implement global measures, and the predicate "global" is necessary for global catastrophes, as I am going to show in one of my next posts. For example, it could implement universal global vaccination that later turns out to have dangerous consequences.
So we see that the idea of global government is very closely connected with the idea of global catastrophe. Each could lead to the other.
But since we do not have a global government, we can only speak about the goals of individual people and individual organizations.
People do not have goals. They only think that they have goals, but these are mere declarations, which rarely regulate people's actual behavior. This is because human beings are not born as rational subjects; their behavior is mostly regulated by unconscious programs known as instincts.
These instincts are culturally adapted as values. Values are the real causes of human behavior; "goals" are what people tell others, and themselves, about their reasons.
The question of how human values influence the course of human history is a difficult one. Last year I wrote a book in Russian, "Futurology. The 21st Century: Immortality or Global Catastrophe", together with M. Batin, and the chapter about values was the most difficult one.
Values are always based on instincts, pleasure, and collective behavior (values help form groups of people who share them). A value is always an emotion; it has the energy to move a person.
But self-preservation is a basic human instinct, so the prevention of death and global catastrophe could become a human value.
Each value needs a group of supporters in order to exist (the value of soccer needs its group of fans). Religious values persist only because they have large groups of supporters.
In the 1960s, the fight for peace was a mass movement. It eventually won and led to the limitation of nuclear arsenals in the 1980s and later. This is a good example of how human values countered a global risk without the creation of a global government.
Now the value of "being green" has been created, and many people fight CO2 emissions.
The problem with such values is that they need a very vivid picture of the risk to attract people's attention. It is not easy to create a value of fighting global risks in general. But the value of the infinite existence of civilization is much more easily imaginable.
So promoting a vision of a future galactic supercivilization with immortal people could motivate people now to fight global risks in all their forms.