What about taking steps to reduce the incidence of conflict, e.g. making meditation more pleasant/enjoyable/accessible/effective so people chill out more? Improved translation/global English fluency could help people understand one another. Fixing harmful online discussion dynamics could also do this, and prevent frivolous conflicts from brewing as often.
BTW, both Nick Beckstead and Brian Tomasik have research-wanted lists that might be relevant.
(I know there are almost certainly problems with what I'm about to suggest, but I just thought I'd put it out there. I welcome corrections and constructive criticisms.)
You mention gene therapy to produce high-IQ people, but if that turns out not to be practical, or if we want to get started before we have the technology, couldn't we achieve the same through reproduction incentives? For example, paying and encouraging male geniuses to donate lots of sperm, and paying and encouraging lots of gifted-level or higher women to donate eggs (men can donate sper...
The roadmap is distributed under an open GNU license.
I don't know what that sentence means. If you mean the GPL, it includes a provision requiring that the work be distributed along with a copy of the GPL, which you aren't doing.
Creative Commons licenses don't require you to distribute a copy of them, which makes them better for this kind of project.
PDF not available without "joining" Scribd, which appears to require giving them information I do not wish to give them. Any chance of making it available in some other way?
I would use the word resilient rather than robust.
Robust: A system is robust when it can continue functioning in the presence of internal and external challenges without fundamental changes to the original system.
Resilient: A system is resilient when it can adapt to internal and external challenges by changing its method of operations while continuing to function. While elements of the original system are present, there is a fundamental shift in core activities that reflects adaptation to the new environment.
I think that it is a better idea to think ab...
(Thinking out loud)
Currently, about a third of all food produced in the world is never consumed (or something like that; we were told this in our phytopathology course). With increasing standardization of food processing, there should be more common causes of spoilage, along with the potential for resistant pathogens to evolve and spread rapidly. How much worse would food loss have to become before it triggered a cascade of x-threats to mankind?
I have an idea related to Plan B – Survive the Catastrophe.
The unfortunate reality is that we do not have enough resources to effectively prepare for all potential catastrophes. Therefore, we need to determine which catastrophes are more likely and adjust our preparation priorities accordingly.
I propose that we create/encourage/support prediction markets in catastrophes, so that we can harness the “wisdom of the crowds” to determine which catastrophes are more likely. Large prediction markets are good at determining relative probabilities.
Of course, th...
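To make the prediction-market idea concrete, here is a minimal sketch of how contract prices could be turned into relative likelihoods for prioritization. It assumes binary "event happens by date X" contracts paying $1, where the price approximates the market's probability estimate; the events and prices below are entirely made up for illustration.

```python
# Hypothetical sketch: reading relative catastrophe likelihoods off
# prediction-market prices. Assumes binary contracts paying $1 if the
# event occurs, so each price approximates the market's probability
# estimate. All event names and prices are invented for illustration.

def implied_shares(prices):
    """Normalize raw contract prices into shares of total market-implied
    risk, so catastrophes can be compared for preparation priority."""
    total = sum(prices.values())
    return {event: price / total for event, price in prices.items()}

market_prices = {  # price of a $1 "happens by 2030" contract (made up)
    "engineered pandemic": 0.08,
    "nuclear exchange": 0.04,
    "supervolcano eruption": 0.005,
}

shares = implied_shares(market_prices)
for event, share in sorted(shares.items(), key=lambda kv: -kv[1]):
    print(f"{event}: {share:.1%} of market-implied risk")
```

Real markets on rare catastrophes have known distortions (thin trading, no payout if everyone is dead), so the normalized shares are best read as rough relative priorities rather than calibrated probabilities.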
I think the word "Trust" is missing from your roadmap. We need ways to determine when to trust that scientists' findings are sound.
On a smaller level, trust is also very important for getting people to cooperate. Empathy alone doesn't make you cooperate when you don't trust the other person.
Improbable idea for surviving heat death: computers made from time crystals. (h/t Nostalgebraist and Scott)
New idea: nuclear hoarding. Collect all fissile material to limit its availability for weapons use. (Not sure if this falls under a larger "worldwide risk prevention authority", but it doesn't have to be carried out coercively; it can be done via capitalism: just purchase and contain the material.)
New idea to limit climate change: tree-planting. Plant massive numbers of green species in order to reduce the carbon in the atmosphere. Australia has large tracts of unused land that could be utilised to grow the oxygen farm and carbon cap...
Meta: I honestly didn't read the plan in full the first two times I posted. Instead I went to Wikipedia and looked up global catastrophic risk. Then, once I understood the definition of global catastrophic risk, I thought up solutions (how would I best solve X?) and checked whether they were on the map.
The reason I share this is that the first several things I thought of were not on the map. And it seems like several other answers are limited to "what's outside the box" (Think outside the box is a silly concept because i...
I don't think "low rivalry" in science is desirable. Rivalry makes scientists criticize the work of their peers and that's very important.
Is A3 meant to have connecting links horizontal through its path?
Another bad idea: build a simulation-world to live in so that we don't actually have to worry about real-world risks. (disadvantage - is possibly an X-risk itself)
It kinda depends on which x-risk you are trying to cover...
For example, funding technologies that improve the safety or efficiency of nuclear use might make any use far more harmless. Or develop ways to clean up nuclear messes, or mitigate the effects of nuclear fallout (i.e. a way to gather radioactive dust).
Enco...
Comprehensive, I think it has the makings of a good resource, though it needs some polish. I'd imagine this would be much more useful to someone new to the ideas presented if it linked out to a bunch of papers/pages for expansion from most bulletpoints.
One thing I'd like to see added is spreading the memes of reason/evidence-based consequentialist decision making (particularly at large scales and including the future) at all levels. It may be entirely accepted here, but the vast majority of humans don't actually think that way. It's kind of a prerequisite for gett...
Encouraging people to start small bio-hack groups around the world could improve the biotechnology understanding of the public to the point where no one accidentally creates a bio-technology hazard.
I'm all for biohazard awareness groups, and even most forms of BioHacking at local HackerSpaces or wherever else. However, I never want to see potentially dangerous forms of BioTech become decentralized. Centralized sources are easy to monitor and control. If anyone can potentially make an engineered pandemic in their garage, then no amount of education will provide a sufficient safety margin. Think of how many people cut off fingers with home table saws or lawnmowers or whatever. DIY is a great way to learn through trial and error, but not so great where errors have serious consequences.
The "economic activation energy" for both malicious rogue groups and accidental catastrophes is just too low, and Murphy's law takes over. If the economic activation energy were a million dollars of general-purpose bio lab equipment, that would be much safer, but it would require heavy regulation at the national level. Currently it's something like a billion dollars of dedicated bio-warfare effort, and has to be regulated at the international level (by the Geneva Protocol and the Biological Weapons Convention).
(I suggest that maybe you want to offer to take free suggestions before you pay people - at least that might save you some dollars)
I'd agree with you here. Although money is a fantastic motivator for repetitive tasks, it has the opposite effect on coming up with insightful ideas.
(I suggest that maybe you want to offer to take free suggestions before you pay people - at least that might save you some dollars)
I was really saying - save your money till after people shoot off some low-hanging fruit ideas.
I would argue that the current barrier of "it costs lots of money to do bio-hacking right" is a terrible one to hide behind, because of how easy it is to overcome, or to do biohacking less rigorously and less safely, i.e. without safe containment areas.
Perhaps fund things like clean rooms with negative pressure and leave the rest up to whoever is using the lab space.
Let's do an experiment in "reverse crowdfunding": I will pay 50 USD to anyone who can suggest a new way of X-risk prevention that is not already mentioned in this roadmap. Post your ideas as a comment to this post.
Should more than one person have the same idea, the award will be made to the person who posted it first.
To qualify, the idea must be endorsed by me and included in the roadmap, and it must be new, rational, and consistent with modern scientific data.
I may include you as a co-author in the roadmap (if you agree).
The roadmap is distributed under an open GNU license.
Payment will be made by PayPal. The total prize fund is 500 USD (10 prizes in total).
The competition is open until the end of 2015.
The roadmap can be downloaded as a pdf from:
UPDATE: I uploaded a new version of the map with changes marked in blue.
http://immortality-roadmap.com/globriskeng.pdf
Email: alexei.turchin@gmail.com