Let’s do an experiment in “reverse crowdfunding”. I will pay 50 USD to anyone who can suggest a new way of X-risk prevention that is not already mentioned in this roadmap. Post your ideas as a comment to this post.
Should more than one person have the same idea, the award will be made to the person who posted it first.
The idea must be endorsed by me and included in the roadmap in order to qualify, and it must be new, rational and consistent with modern scientific data.
I may include you as a co-author in the roadmap (if you agree).
The roadmap is distributed under an open GNU license.
Payment will be made by PayPal. The total prize fund is 500 USD (ten prizes of 50 USD each).
The competition is open until the end of 2015.
The roadmap can be downloaded as a PDF from:
http://immortality-roadmap.com/globriskeng.pdf
UPDATE: I have uploaded a new version of the map, with changes marked in blue.
Email: alexei.turchin@gmail.com
I would use the word “resilient” rather than “robust”.
Robust: A system is robust when it can continue functioning in the presence of internal and external challenges without fundamental changes to the original system.
Resilient: A system is resilient when it can adapt to internal and external challenges by changing its method of operations while continuing to function. While elements of the original system are present there is a fundamental shift in core activities that reflects adapting to the new environment.
I think it is better to approach this from a systems perspective rather than from the specific X-risks or plans that we know about or think are cool. We want to avoid availability bias. I would assume that there are more X-risks and plans that we are unaware of than ones we are aware of.
I recommend adding the risks and relating them to the plans, as most of your plans would lead to other risks if they failed. I would do this in a generic way. An example to demonstrate what I am talking about: take the risk of a tragedy of the commons and a plan to create a more capable type of intelligent life form that will uphold, improve, and maintain the interests of humanity. This could be done with genetic engineering and AI to create new life forms, while nanotechnology and biotechnology could be used to change existing humans. The potential risk of this plan is that it leads to the creation of other intelligent species that will inevitably compete with humans.
One more recommendation is to remove the timeline from the roadmap and just have the risks and plans. The timeline would be useful in the explanatory text you are creating. I like this categorisation of X-risks (from Nick Bostrom):
Bangs (extinction) – Earth-originating intelligent life goes extinct in relatively sudden disaster resulting from either an accident or a deliberate act of destruction.
Crunches (permanent stagnation) – The potential of humankind to develop into posthumanity is permanently thwarted although human life continues in some form.
Shrieks (flawed realization) – Some form of posthumanity is attained but it is an extremely narrow band of what is possible and desirable.
Whimpers (subsequent ruination) – A posthuman civilization arises but evolves in a direction that leads gradually but irrevocably to either the complete disappearance of the things we value or to a state where those things are realized to only a minuscule degree of what could have been achieved.
I don’t want this post to be too long, so I have just listed the common systems problems (Donella Meadows’ systems traps) below:
Policy Resistance – Fixes that Fail
Tragedy of the Commons
Drift to Low Performance
Escalation
Success to the Successful
Shifting the Burden to the Intervenor—Addiction
Rule Beating
Seeking the Wrong Goal
Limits to Growth
Four additional plans are:
voluntary or forced devolution (under Controlled regression)
uploading human consciousness into a supercomputer
some movement or event that will cause a paradigmatic change so that humanity becomes more aware of existential risk
dramatic societal changes to avoid some existential risks, such as the overuse of resources. An example of this is in the book The World Inside.
You talk about being saved by non-human intelligence, but it is also possible that SETI could actually cause hostile aliens to find us. A potential plan might be to stop SETI and try to hide. The opposite plan (actively seeking out aliens) seems just as plausible, though.
I accepted your idea about replacing the word “robust” and will award the prize for it.
The main idea of this roadmap is to escape availability bias by listing all known ideas for X-risk prevention. The map will be accompanied by a map of all known X-risks, which is ready and will be published soon. More than 100 X-risks have been identified and evaluated.
The idea that some of the plans create their own risks is represented in this map by the red boxes below plan A1.
But it may be possible to create a completely different future risks and prevention map usin…