What about taking steps to reduce the incidence of conflict, e.g. making meditation more pleasant/enjoyable/accessible/effective so people chill out more? Improved translation/global English fluency could help people understand one another. Fixing harmful online discussion dynamics could also do this, and prevent frivolous conflicts from brewing as often.
BTW, both Nick Beckstead and Brian Tomasik have research-wanted lists that might be relevant.
(I know there are almost certainly problems with what I'm about to suggest, but I just thought I'd put it out there. I welcome corrections and constructive criticisms.)
You mention gene therapy to produce high-IQ people, but if that turns out not to be practical, or if we want to get started before we have the technology, couldn't we achieve the same through reproduction incentives? For example, paying and encouraging male geniuses to donate lots of sperm, and paying and encouraging lots of gifted-level or higher women to donate eggs (men can donate sper...
The roadmap is distributed under an open license GNU.
I don't know what that sentence means. If you mean the GPL, it includes a provision requiring that the work be distributed along with a copy of the GPL, which you aren't doing.
Creative Commons licenses don't require you to distribute a copy of them, which makes them better for this kind of project.
PDF not available without "joining" Scribd, which appears to require giving them information I do not wish to give them. Any chance of making it available in some other way?
I would use the word resilient rather than robust.
Robust: A system is robust when it can continue functioning in the presence of internal and external challenges without fundamental changes to the original system.
Resilient: A system is resilient when it can adapt to internal and external challenges by changing its method of operation while continuing to function. While elements of the original system are present, there is a fundamental shift in core activities that reflects adaptation to the new environment.
I think that it is a better idea to think ab...
(Thinking out loud)
Currently, about a third of all food produced in the world never gets consumed (or something like that; we were told this in our phytopathology course). With the increasing standardization of food processing, there should be more common causes of spoilage, and more potential for resistant pathogens to evolve and spread rapidly. How much worse would food loss have to become before it initiates a cascade of existential threats to mankind?
I have an idea related to Plan B – Survive the Catastrophe.
The unfortunate reality is that we do not have enough resources to effectively prepare for all potential catastrophes. Therefore, we need to determine which catastrophes are more likely and adjust our preparation priorities accordingly.
I propose that we create/encourage/support prediction markets in catastrophes, so that we can harness the “wisdom of the crowds” to determine which catastrophes are more likely. Large prediction markets are good at determining relative probabilities.
Of course, th...
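A rough sketch of what I mean, in Python (all the prices below are made up for illustration; a real market would supply them):

```python
# Hypothetical last-traded prices of contracts paying $1 if the named
# catastrophe occurs within some fixed window (e.g. 10 years).
prices = {
    "engineered pandemic": 0.040,
    "nuclear war": 0.020,
    "asteroid impact": 0.001,
}

# A contract's price roughly equals the market's probability estimate
# (ignoring fees, the time value of money, and risk premia).
total = sum(prices.values())
for event, price in sorted(prices.items(), key=lambda kv: -kv[1]):
    print(f"{event}: implied p ~= {price:.3f}, "
          f"share of pooled risk ~= {price / total:.2f}")
```

Even if the absolute prices are distorted, the relative ordering is the useful signal for setting preparation priorities.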
I think the word "trust" is missing from your roadmap. We need to find ways to determine when to trust scientists' claims that their findings are sound.
On a smaller scale, trust is also very important for getting people to cooperate. Empathy alone doesn't make you cooperate when you don't trust the other person.
Improbable idea for surviving heat death: computers made from time crystals. (h/t Nostalgebraist and Scott)
New idea: nuclear hoarding. Collect all nuclear material to limit its availability for use. (Not sure if this falls under a larger "worldwide risk prevention authority", but it doesn't have to rely on voluntary cooperation; it can be carried out via capitalism: just purchase and contain the material.)
New idea to limit climate change: tree-planting. Plant massive numbers of green species in order to reduce the carbon in the atmosphere. Australia is a large land mass that is mostly unused and could be utilised to grow an oxygen farm and carbon cap...
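As a very rough sanity check on the scale involved (both figures below are ballpark numbers I'm assuming, not taken from the map):

```python
# Back-of-envelope: how many trees would offset one year of global emissions?
annual_emissions_t = 36e9   # assume ~36 Gt of CO2 emitted globally per year
uptake_per_tree_t = 0.02    # assume ~20 kg of CO2 absorbed per mature tree per year

trees_needed = annual_emissions_t / uptake_per_tree_t
print(f"Trees needed to offset one year of emissions: {trees_needed:.1e}")
# ~1.8e12, i.e. on the order of a trillion trees; land and water,
# not the idea itself, are the binding constraints.
```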
Meta: I honestly didn't read the plan in full the first two times I posted. Instead I went to Wikipedia and looked up global catastrophic risk. Once I had an understanding of the definition of global catastrophic risk, I thought up solutions (how would I best solve X?) and checked whether they were on the map.
The reason I share this is that the first several things I thought of were not on the map. And it seems like several other answers are limited to "what's outside the box" (think outside the box is a silly concept because i...
I don't think "low rivalry" in science is desirable. Rivalry makes scientists criticize the work of their peers and that's very important.
Is A3 meant to have horizontal connecting links through its path?
Another bad idea: build a simulation-world to live in so that we don't actually have to worry about real-world risks. (Disadvantage: it is possibly an X-risk itself.)
It kinda depends on which x-risk you are trying to cover...
For example, funding technologies that improve the safety or efficiency of nuclear use might mean that any use is a lot less harmful. Or develop ways to clean up nuclear messes, or to mitigate the effects of nuclear fallout (i.e. a way to gather radioactive dust).
Enco...
Comprehensive; I think it has the makings of a good resource, though it needs some polish. I'd imagine this would be much more useful to someone new to the ideas presented if it linked out to a bunch of papers/pages for expansion from most bullet points.
One thing I'd like to see added is spreading the memes of reason/evidence-based consequentialist decision making (particularly large-scale and future-inclusive) at all levels. It may be entirely accepted here, but the vast majority of humans don't actually think that way. It's kind of a prerequisite for gett...
In plans: 1. Isn't "voluntary or forced devolution" the same as "Luddism" and "relinquishment of dangerous science", which are already in the plan?
I was thinking more along the lines of restricting the chance of divergence in the human species. I guess I am not really sure what it is that you are trying to preserve. What do you take to be humanness? Technological advances may allow us to alter ourselves so substantially that we become post-human or no longer human; this could come, for example, from cybernetics or genetic engineering. "Luddism" and "relinquishment of dangerous science" are ways to restrict which technologies we use, but note that we would still be capable of using and creating those technologies. Devolution (perhaps there is a better word for it) would be something like the dumbing down of all or most humans so that they are no longer capable of using or creating the technologies that could make them less purely human.
I think that "some movement or event that will cause a paradigmatic change so that humanity becomes more existentially-risk aware" is basically the same idea as "smaller catastrophe could help unite humanity (pandemic, small asteroid, local nuclear war)", but your wording is excellent.
Yes, you are right. I guess I meant man-made catastrophes, created in order to cause a paradigmatic change, rather than natural ones.
I still don't know how we could fix all the world-system problems listed in your link without having control of most of the world, which returns us to plan A1.
I'm not sure either. I would think you could do it by changing the way politics works so that the policies implemented actually have empirical backing based on what we know about systems. Perhaps this is just AI and improved computational modelling. This idea of needing control of the world seems extremely dangerous to me, although I suppose a top-down approach could solve the problems. I think that you should also consider what a good bottom-up approach would be. How do we make local communities and societies more resilient, economical, and capable of facing potential X-risks?
In "Survive the Catastrophe" I would add two extra boxes:
Limit the impact of a catastrophe by implementing measures that slow its growth and shrink the area it affects. For example, with pandemics you could improve the capacity for rapid production of vaccines in response to emerging threats, or create or grow stockpiles of important medical countermeasures.
Increase the time available for preparation by improving monitoring and early-detection technologies. For example, with pandemics you could support general research on the magnitude of biosecurity risks and opportunities to reduce them, and improve and connect disease surveillance systems so that novel threats can be detected and responded to more quickly.
I could send money to a charity of your choice.
Send it to one of the charities here.
What do you take to be humanness?
Technically, I wouldn't say we'd lost it if the price of sperm donation rose (from its current negative level) until it stopped being an efficient means of reproduction. But I think you underestimate the threat of regular evolution making a lot of similar changes, if you somehow froze some environment for a long time.
Not only does going back to our main ancestral environment seem unworkable - at least without a superhuman AI to manage it! - we should also consider the possibility that our moral urges are a mixed bag derived from many environments, not optimized for any.
Let's do an experiment in "reverse crowdfunding". I will pay 50 USD to anyone who can suggest a new way of X-risk prevention that is not already mentioned in this roadmap. Post your ideas as a comment to this post.
Should more than one person have the same idea, the award will be made to the person who posted it first.
The idea must be endorsed by me and included in the roadmap in order to qualify, and it must be new, rational and consistent with modern scientific data.
I may include you as a co-author in the roadmap (if you agree).
The roadmap is distributed under an open license GNU.
Payment will be made by PayPal. The total prize fund is 500 USD (10 prizes in total).
The competition is open until the end of 2015.
The roadmap can be downloaded as a PDF from:
UPDATE: I uploaded a new version of the map with changes marked in blue.
http://immortality-roadmap.com/globriskeng.pdf
Email: alexei.turchin@gmail.com