What about taking steps to reduce the incidence of conflict, e.g. making meditation more pleasant/enjoyable/accessible/effective so people chill out more? Improved translation/global English fluency could help people understand one another. Fixing harmful online discussion dynamics could also do this, and prevent frivolous conflicts from brewing as often.
BTW, both Nick Beckstead and Brian Tomasik have research-wanted lists that might be relevant.
(I know there are almost certainly problems with what I'm about to suggest, but I just thought I'd put it out there. I welcome corrections and constructive criticisms.)
You mention gene therapy to produce high-IQ people, but if that turns out not to be practical, or if we want to get started before we have the technology, couldn't we achieve the same through reproduction incentives? For example, paying and encouraging male geniuses to donate lots of sperm, and paying and encouraging lots of gifted-level or higher women to donate eggs (men can donate sper...
The roadmap is distributed under an open GNU license.
I don't know what that sentence means. If you mean the GPL, it includes a provision requiring the work to be distributed along with a copy of the GPL, which you aren't doing.
Creative Commons licenses don't require you to distribute a copy of them, which makes them better suited to this kind of project.
PDF not available without "joining" Scribd, which appears to require giving them information I do not wish to give them. Any chance of making it available in some other way?
I would use the word resilient rather than robust.
Robust: A system is robust when it can continue functioning in the presence of internal and external challenges without fundamental changes to the original system.
Resilient: A system is resilient when it can adapt to internal and external challenges by changing its method of operations while continuing to function. While elements of the original system remain, there is a fundamental shift in core activities that reflects adaptation to the new environment.
I think that it is a better idea to think ab...
(Thinking out loud)
Currently, about a third of all food produced in the world doesn't make it to being consumed (or something like that; we were told this in our phytopathology course). With the increasing standardization of food processing, there should be more common causes of spoilage, and greater potential for resistant pathogens to evolve and spread rapidly. How much worse would food loss have to become before it initiates a cascade of x-threats to mankind?
I have an idea related to Plan B – Survive the Catastrophe.
The unfortunate reality is that we do not have enough resources to effectively prepare for all potential catastrophes. Therefore, we need to determine which catastrophes are more likely and adjust our preparation priorities accordingly.
I propose that we create/encourage/support prediction markets in catastrophes, so that we can harness the “wisdom of the crowds” to determine which catastrophes are more likely. Large prediction markets are good at determining relative probabilities.
Of course, th...
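To make the "relative probabilities" point concrete: in a market where each contract pays $1 if its catastrophe occurs by some date, the contract price is roughly the market's probability estimate, and prices can be pooled into relative weights. A minimal sketch, with entirely made-up prices (these are illustrative numbers, not real market data):

```python
# Hypothetical prediction-market prices for $1-payout contracts.
# Price p dollars ~ market-estimated probability p of the event.
prices = {
    "engineered pandemic": 0.04,
    "nuclear war": 0.02,
    "unaligned AI": 0.05,
}

# Normalize prices into relative shares of the pooled risk estimate.
total = sum(prices.values())
relative = {risk: p / total for risk, p in prices.items()}

# Rank catastrophes by their relative weight for prioritization.
for risk, share in sorted(relative.items(), key=lambda kv: -kv[1]):
    print(f"{risk}: {share:.0%} of the pooled risk estimate")
```

Of course real contracts on rare catastrophes have well-known distortions (time value of money, the impossibility of collecting on extinction), so this normalization is only a first approximation.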
I think the word "trust" is missing from your roadmap. We need to find ways to determine when to trust that scientists' findings are sound.
On a smaller level trust is also very important to get people to cooperate. Empathy alone doesn't make you cooperate when you don't trust the other person.
Improbable idea for surviving heat death: computers made from time crystals. (h/t Nostalgebraist and Scott)
New idea: nuclear hoarding. Collect all fissile material to limit its availability for use. (Not sure if this falls under a larger "worldwide risk prevention authority", but it doesn't have to be carried out willingly; it can be carried out via capitalism: just purchase and contain the material.)
New idea to limit climate change: tree-planting. Plant massive numbers of trees in order to reduce the carbon in the atmosphere. Australia is a large land mass that is largely unused and could be utilised to grow the oxygen farm and carbon cap...
Meta: I honestly didn't read the plan in full the first two times I posted. Instead I went to Wikipedia and looked up global catastrophic risk. Then, once I understood the definition of global catastrophic risk, I thought up solutions ("How would I best solve X?") and checked if they were on the map.
The reason I share this is that the first several things I thought of were not on the map. And it seems like several other answers are limited to "what's outside the box" (think outside the box is a silly concept because i...
I don't think "low rivalry" in science is desirable. Rivalry makes scientists criticize the work of their peers, and that's very important.
Is A3 meant to have connecting links running horizontally through its path?
Another bad idea: build a simulation-world to live in so that we don't actually have to worry about real-world risks. (disadvantage - is possibly an X-risk itself)
It kinda depends on which x-risk you are trying to cover...
For example, funding technologies that improve the safety or efficiency of nuclear power might mean that any use is a lot less harmful. Or develop ways to clean up nuclear messes, or mitigate the effects of nuclear radiation (i.e. a way to gather radioactive dust).
Enco...
Comprehensive; I think it has the makings of a good resource, though it needs some polish. I'd imagine this would be much more useful to someone new to the ideas presented if it linked out to a bunch of papers/pages for expansion from most bullet points.
One thing I'd like to see added is spreading the memes of reason/evidence-based consequentialist decision making (particularly large-scale and future included) at all levels. It may be entirely accepted here, but the vast majority of humans don't actually think that way. It's kind of a pre-requisite for gett...
I am now working on a large explanation text, which will be 40-50 pages and will include links. Maybe I will add the links inside the pdf.
I don't think I should go into all the details of decision theory and EA; I just put "rationality".
Picking potential world saviours, educating them and providing all our support seems like a good idea, but we probably don't have time. I will think more about it.
Planetary mining was a recent addition, addressed to people who think that Peak Oil and Peak Everything are the main risks. Personally, I don't believe space mining would be useful without nanotech.
The point about dates is really important. Maybe I should use vaguer dates like the beginning, middle, and second half of the 21st century? Is there another way to say it more vaguely?
I upvoted your post, and in general I think that downvoting without explanation is not a good thing on LW.
"Pray" corrected.
Linking to the appropriate section of the explanation text would probably be better than linking to primary sources directly once that exists (which in turn would link out to primary sources).
Compressing to "rationality" is reasonable, though most readers would not understand it at a glance. If you're trying to keep it very streamlined, just having this as a set of pointers makes sense, though perhaps alongside rationality it'd be good to have a pointer that's more clearly directed at "make wanting to fix the future a thing which is widely acc...
Let's do an experiment in "reverse crowdfunding". I will pay 50 USD to anyone who can suggest a new way of X-risk prevention that is not already mentioned in this roadmap. Post your ideas as a comment to this post.
Should more than one person have the same idea, the award will be made to the person who posted it first.
The idea must be endorsed by me and included in the roadmap in order to qualify, and it must be new, rational and consistent with modern scientific data.
I may include you as a co-author in the roadmap (if you agree).
The roadmap is distributed under an open GNU license.
Payment will be made by PayPal. The total prize fund is 500 USD (10 prizes in total).
The competition is open until the end of 2015.
The roadmap can be downloaded as a pdf from:
UPDATE: I uploaded a new version of the map with changes marked in blue.
http://immortality-roadmap.com/globriskeng.pdf
Email: alexei.turchin@gmail.com