Let's do an experiment in "reverse crowdfunding". I will pay 50 USD to anyone who can suggest a new way of X-risk prevention that is not already mentioned in this roadmap. Post your ideas as a comment to this post.

Should more than one person have the same idea, the award will be made to the person who posted it first.

The idea must be endorsed by me and included in the roadmap in order to qualify, and it must be new, rational and consistent with modern scientific data.

I may include you as a co-author in the roadmap (if you agree).

The roadmap is distributed under an open license GNU.

Payment will be made by PayPal. The total prize fund is 500 USD (10 prizes in all).

The competition is open until the end of 2015.

The roadmap can be downloaded as a PDF from: http://immortality-roadmap.com/globriskeng.pdf

UPDATE: I have uploaded a new version of the map with changes marked in blue.

Email: alexei.turchin@gmail.com

 

Comments (88)

What about taking steps to reduce the incidence of conflict, e.g. making meditation more pleasant/enjoyable/accessible/effective so people chill out more? Improved translation/global English fluency could help people understand one another. Fixing harmful online discussion dynamics could also do this, and prevent frivolous conflicts from brewing as often.

BTW, both Nick Beckstead and Brian Tomasik have research-wanted lists that might be relevant.

turchin:
I like your ideas of "reducing the incidence of conflict" (maybe via some mild psychedelic or brain stimulation?) and "improved translation/global English fluency". I would be happy to give you two awards; how can I do it? The links are also useful, thanks.
John_Maxwell:
I'm not sure "Improved translation/global English fluency" is an unalloyed good... could lead to a harmful global monoculture (putting all our eggs in one basket culturally). Feel free to reduce my award count by one :) Also helping Westerners chill out could leave them less prepared to deal with belligerent Middle Easterners.
turchin:
OK, how can I send it to you?

(I know there are almost certainly problems with what I'm about to suggest, but I just thought I'd put it out there. I welcome corrections and constructive criticisms.)

You mention gene therapy to produce high-IQ people, but if that turns out not to be practical, or if we want to get started before we have the technology, couldn't we achieve the same through reproduction incentives? For example, paying and encouraging male geniuses to donate lots of sperm, and paying and encouraging lots of gifted-level or higher women to donate eggs (men can donate sperm…

turchin:
If we have 200-300 years before a well-proven catastrophe, this technique may work. But on a 10-50 year timescale it is better to find good, clever students and pay them to work on x-risks.
Gram_Stone:
Embryo selection is a third alternative, the progress of which is more susceptible to policy decisions than gene therapy, and the effects of which are more immediate than selective breeding. I recommend Shulman & Bostrom (2014) for further information.
Gondolinian:
If you're talking about significant population-level changes in IQ, then I agree, it would take a while to make that happen with only reproduction incentives. However, I was thinking more along the lines of just having a few thousand or tens of thousands more >145-IQ people than we would otherwise, and that could be achieved in as little as one or two generations (< 50 years) if the program were successful enough.

Now for a slightly crazier idea. (Again, I'm just thinking out loud.) You take the children and send them to be unschooled by middle-class foster families, both to save money and to make sure they are not getting the intellectual stimulation they need from their environment alone, which they might if you sent them to upper-class private schools, for example. But you make sure they have Internet access, and you gradually introduce them to appropriately challenging MOOCs on math and philosophy specially made for them, designed to teach them a) the ethics of why they should want to save the world (think some of Nate's posts) and b) the skills they would need to do it (e.g., they should be up to speed on what MIRI recommends for aspiring AI researchers before they graduate high school).

The point of separating them from other smart people is that smart people tend to be mostly interested in money, power, status, etc., and that could spread to them if they are immersed in it. If their focus growing up is simply to find intellectual stimulation, then they would be essentially blank slates, and when they're introduced to problems that are very challenging and stimulating, have other smart people working on them, and are really, really* important, they might be more likely to take them seriously.

*Please see my clarification below.
Lumifer:
I don't think this is how it works with people. Especially smart ones with full 'net access.
Gondolinian:
You're right; that was poorly phrased. I meant that they would have a lot less tying them down to the mainstream, like heavy schoolwork, expectations to get a good job, etc. Speaking from my own experience, not having those makes a huge difference in what ideas you're able to take seriously. The Internet exposes one to many ideas, but 99% of them are nonsense, and smart people with the freedom to think about the things they want to think about eventually become pretty good at seeing that (again speaking from personal experience), so I think Internet access helps rather than hurts this "blank slate"-ness.
Lumifer:
I am confused as to why you think this is a good thing. You're basically trying to increase the variance of outcomes. I have no idea why you think this variance will go precisely in the direction you want. For all I know you'll grow a collection of very, very smart sociopaths. Or maybe wireheads. Or prophets of a new religion. Or something else entirely.

The roadmap is distributed under an open license GNU.

I don't know what that sentence means. If you mean the GPL, it includes a provision requiring that the work be distributed along with a copy of the GPL, which you aren't doing.

Creative Commons licenses don't require you to distribute a copy of them, which makes them better for this kind of project.

turchin:
I mean that you are free to copy and modify the roadmap, but you should track changes and not create proprietary commercial products based on it. I would also be happy to be informed if any derivatives are created. It is a little bit different from CC 3.0, but it is now clear that I have to elaborate.
gjm:

PDF not available without "joining" Scribd, which appears to require giving them information I do not wish to give them. Any chance of making it available in some other way?

turchin:
Link fixed: http://immortality-roadmap.com/globriskeng.pdf
gjm:
Yup, that works. Thanks.

I would use the word resilient rather than robust.

  • Robust: A system is robust when it can continue functioning in the presence of internal and external challenges without fundamental changes to the original system.

  • Resilient: A system is resilient when it can adapt to internal and external challenges by changing its method of operations while continuing to function. While elements of the original system are present, there is a fundamental shift in core activities that reflects adaptation to the new environment.

I think that it is a better idea to think about…

turchin:
I accepted your idea about replacing the word "robust" and will award the prize for it. The main idea of this roadmap is to escape availability bias by listing all known ideas for x-risk prevention. This map will be accompanied by a map of all known x-risks, which is ready and will be published soon. More than 100 x-risks have been identified and evaluated.

The idea that some plans create their own risks is represented in this map by the red boxes below plan A1. But it may be possible to create a completely different map of future risks and prevention using a systems approach, or something like a scenario tree. Yes, each plan is better suited to contain specific risks: A1 is better for biotech and nanotech risks, A2 for UFAI, A3 for nuclear war and biotech, and so on. So another map may be useful to match risks with prevention methods.

The timeline was already partly replaced with "steps", as suggested by Elo, and he was awarded for it.

Phil Torres shows that Bostrom's classification of x-risks is not as good as it seems, in: http://ieet.org/index.php/IEET/more/torres20150121 So I prefer the notion of "human extinction risks" as more clear.

I still don't know how we could fix all the world-system problems listed in your link without having control of most of the world, which returns us to plan A1.

On the plans:
1. Is not "voluntary or forced devolution" the same as "ludism" and "relinquishment of dangerous science", which are already in the plan?
2. The idea of uploading was already suggested here in the form of "migrating into simulation" and was awarded.
3. I think that "some movement or event that will cause a paradigmatic change so that humanity becomes more existentially-risk aware" is basically the same idea as "a smaller catastrophe could help unite humanity (pandemic, small asteroid, local nuclear war)", but your wording is excellent. I think I should accept "dramatic social changes", as it could include many interesting but…
Satoshi_Nakamoto:
I was thinking more along the lines of restricting the chance for divergence in the human species. I guess I am not really sure what it is that you are trying to preserve. What do you take to be humanness? Technological advances may allow us to alter ourselves so substantially that we become post-human or no longer human, for example through cybernetics or genetic engineering. "Ludism" and "relinquishment of dangerous science" are ways to restrict what technologies we use, but note that we remain capable of using and creating these technologies. Devolution (perhaps there is a better word for it) would be something like the dumbing down of all or most humans so that they are no longer capable of using or creating the technologies that could make them less purely human.

Yes, you are right. I guess I was implying man-made catastrophes which are created in order to cause a paradigmatic change, rather than natural ones.

I'm not sure either. I would think you could do it by changing the way that politics works so that the policies implemented actually have empirical backing based on what we know about systems. Perhaps this is just AI and improved computational modelling. This idea of needing control of the world seems extremely dangerous to me. Although I suppose a top-down approach could solve the problems, I think that you should also think about what a good bottom-up approach would be. How do we make local communities and societies more resilient, economical, and capable of facing potential x-risks?

In "survive the catastrophe" I would add two extra boxes:

  • Limit the impact of a catastrophe by implementing measures to slow its growth and the area it affects. For example, with pandemics you could improve the capacity for rapid production of vaccines in response to emerging threats, or create or grow stockpiles of important medical countermeasures.

  • Increase the time available for preparation by improving monitoring and early detection technologies.
hairyfigment:
Technically, I wouldn't say we'd lost it if the price of sperm donation rose (from its current negative level) until it stopped being an efficient means of reproduction. But I think you underestimate the threat of regular evolution making a lot of similar changes, if you somehow froze some environment for a long time. Not only does going back to our main ancestral environment seem unworkable - at least without a superhuman AI to manage it! - we should also consider the possibility that our moral urges are a mixed bag derived from many environments, not optimized for any.
turchin:
A question: is it possible to create a risk control system which is not based on centralized power, just as bitcoin is not based on central banking? For example: local police could handle local crime and terrorists; local health authorities could find and prevent the spread of disease. If we have many x-risk peers, they could control their neighborhood in their professional space. Counter-example: how could it help in situations like ISIS or another rogue state, which may be going to create a doomsday machine or a virus to be used to blackmail or exterminate other countries?
Satoshi_Nakamoto:
Bitcoin is an electronic payment system based on cryptographic proof instead of trust. I think the big difference between it and the risk control system is the need for enforcement, i.e. changing what other people can and can't do. There seem to be two components to the risk control system: prediction of what should be researched, and enforcement of this. The prediction component doesn't need to come from a centralised power; it could just come from the scientific community. I would think that the enforcement would need to come from a centralised power. I guess that there does need to be a way to stop the centralised power causing x-risks. Perhaps this could come from a localised and distributed effort. Maybe something like a better version of Anonymous.
turchin:
Sent 150 USD to the Against Malaria Foundation.

The idea of dumbing people down is also present in the Bad plans section, "limitation of human or collective intelligence". But the main idea of preventing human extinction is, by definition, to ensure that at least several examples of Homo sapiens are still alive at any given point in time. It is not the best possible definition. It should also include posthumans if they are based on humans and share a lot of their properties (and, as Bostrom said, could realise the full human potential). In fact, we can't say what is really good before we solve the Friendly AI problem. And if we knew what is good, we could also say what is the worst outcome, and so what constitutes an existential catastrophe. But the real catastrophes which could happen in the 21st century are far from such sophisticated problems of determining the ultimate good, human nature, and full human potential. They are clearly visible physical processes of destruction.

There are some ideas for bottom-up solutions to the problem of control, like David Brin's idea of a transparent society, where vigilantes scan the web and video sensors searching for terrorists. So it would be not hierarchical control but net-based, or peer-to-peer.

I like the two extra boxes, but for now I have already spent my prize budget twice over, which unexpectedly puts me in a controversial situation: as author of the map I want to make the best and most inclusive map, but as owner of the prize fund (which I pay from personal money earned selling art) I feel myself more screwy :)
Satoshi_Nakamoto:
Don't worry about the money. Just like the comments if they are useful. In "Technological precognition", does this cover time travel in both directions? That is, looking into the future and taking actions to change it, and also sending messages into the past. Also, what about making people more compliant and less aggressive by either dulling or eliminating emotions in humans, or making people more like a hive mind?
turchin:
I uploaded a new version of the map with changes marked in blue. http://immortality-roadmap.com/globriskeng.pdf

Technological precognition does not cover time travel, because it is too fantastic. We may include scientific study of claims about precognitive dreams, as such study will soon become possible with live brain scans of sleeping people and dream recording. Time travel could have its own x-risks, like the well-known grandfather problem.

Lowering human intelligence is in bad plans. I have been thinking about a hive mind... It may be a way to create safe AI, which would be based on humans and use their brains as free and cheap supercomputers via some kind of neuro-interface. In fact, contemporary science as a whole is an example of such distributed AI. If a hive mind is enforced, it is like the worst totalitarian state... If it does not include all humans, the rest will fight against it, and may use very powerful weapons to save their identity. It is already happening as the fight between globalists and anti-globalists.
[anonymous]:
This is useful. Mr. Turchin, please redirect my award to Satoshi.
turchin:
Done
[anonymous]:

(Thinking out loud)

Currently, about a third of all food produced in the world doesn't make it to being consumed (or something like that - we were told this in our phytopathology course). With the increase in standardization of food processing, there should be more common causes of spoilage and the potential for resistant pathogen evolution and rapid spread. How much worse would food loss have to become before it initiates a cascade of x-threats to mankind?

turchin:
As a lot of grain is now consumed by the meat industry, returning to a vegetable-based and limited diet could effectively increase the food supply 4-5 times. Many other options exist to do so.
[anonymous]:
Legally enforced veganism? (But grain spoils. It is also often stored in buildings designed specifically for that purpose, and once they get infected...) All in all, I was just trying to think of a hard-to-contain, hard-to-notice, hard-to-prevent x-risk; those already discussed seem more... straightforward, perhaps. I am sure there are other examples of systemic failure harder to fight with international treaties than nuclear war.
turchin:
If your suggestion were something like "invest in bio-diversity of the food supply chain" or "prevent crop loss due to bad transportation", it may be interesting. While the whole of humanity can't go extinct because of a food shortage, it could contribute to wars, terrorism and riots, as happened during the Arab Spring.
[anonymous]:
Those would be useful things to do, I think, resulting in 1) better quarantine law (the current one does not seem to be taken seriously enough, if Ambrosia's expansion is any indicator, and the timescales for a pathogen will not be decades), 2) portable equipment for instant identification of alien inclusions in medium bulks of foodstuffs, and 3) further development of non-chemical methods of sterilization.
turchin:
Thank you for this interesting suggestion! I will include it in the map, and I want to send you an award; PM the details to me. But what is Ambrosia? Corn rust?
[anonymous]:
Thank you. (Ambrosia artemisiifolia is a quarantine plant species; I used it as an example because it's notorious for having allergenic pollen, contributing to desertification - its roots can reach about 4 m down, maybe more - and being rather easy to recognize, yet people just don't care to eradicate it or at least cut off the inflorescences. And yes, in many places its spread is already unstoppable. Some other plants of the same genus are quarantine weeds, too.) I was referring more to pathogens that could arise and benefit from totally man-made environments. (I remember from somewhere that the same six species of Drosophila occur in supermarkets all over the world; I think that transportation and storage networks can be modeled as ecosystems, especially as more and more stuff gets produced.)
turchin:
Yes, in fact fungal rusts can eliminate entire species, as happened with a previous variety of the banana and is now happening with amphibians. And here the question arises: could some kind of fungus be dangerous to human existence?
[anonymous]:
I really cannot tell. The Irish Famine comes to mind, but surely such things are in the past?.. It's just not a question that you should ask a non-expert, because the trivial answer is of course 'yes', but unfolding it takes expertise.
turchin:
I meant the fungi which kill humans...
[anonymous]:
IANAD, but there's Pneumocystis pneumonia, a really ugly, treatment-resistant thing. I don't know if it's virulent enough to threaten mankind as a whole. Edit: but considering that the fungus 'appears to be present in healthy individuals in general population' and causes pneumonia if the immune system is weakened, I would not disregard the possibility.
[anonymous]:

I have an idea related to Plan B – Survive the Catastrophe.

The unfortunate reality is that we do not have enough resources to effectively prepare for all potential catastrophes. Therefore, we need to determine which catastrophes are more likely and adjust our preparation priorities accordingly.

I propose that we create/encourage/support prediction markets in catastrophes, so that we can harness the “wisdom of the crowds” to determine which catastrophes are more likely. Large prediction markets are good at determining relative probabilities.

Of course, th…

turchin:
I think that you have two ideas: 1. a prediction market for x-risks; 2. building preparations for the most probable catastrophe. I don't buy the first one... But in fact the prizes that I suggested in the opening post are something like it; I mean, the idea of using money to extract the wisdom of the crowd is good. But a prediction market is not the best variant, because the majority of people have a lot of strange ideas about x-risks, and such ideas would dominate. The idea of preparing for the most probable catastrophe is better. In fact, we could build bio and nuclear refuges, but not AI and nanotech refuges. And bio-hazard refuges are more important, as a pandemic now seems to be a bigger risk than nuclear war. So we could concentrate on bio-hazard refuges. I would like to award you the prize for the idea; you can PM me your payment details.
Lumifer:
You can treat insurance and reinsurance markets as prediction markets for severe events (earthquakes, hurricanes, etc.). I don't think they (or your proposed prediction markets) would be helpful in estimating the probabilities of extinction events.
[anonymous]:
Seems like the definition of "severe" is an issue here. Maybe I should have used "incredibly severe"? Yes, reinsurance markets deal in large insured risks, but they do not target the incredibly large humanitarian risks that are more informative to us. See reinsurance deals here for reference: http://www.artemis.bm/deal_directory/ Care to explain your reasoning? For example, if the market indicated that the chance of a pandemic killing 50% of the population is 1,000x greater than the likelihood of a nuclear war of any kind, wouldn't a forecaster find this at least a little useful?
Lumifer:
For the prediction markets to work they need to settle: a bet must be decided one way or another within reasonable time so that the winners could collect the money from the losers. How are you going to settle the bets on a 50%-population pandemic or a nuclear war?
[anonymous]:
Each contract would have a maturity date - that is standard. Your primary concern is that the market would not be functional after a 50%-population pandemic or a nuclear war? That is a possibility. The likelihood depends on the severity of the catastrophe, popularity of the market, its technology and infrastructure, geographic distribution, disaster recovery plan, etc. With the proper funding and interest, I think a very robust market could be created. And if it works, the information it provides will be very valuable (in my opinion).
Lumifer:
So, a bet would look like "There will or will not be a nuclear war during the year 2016"? I am not sure you will find enough serious bidders on the "will be" side to actually provide good predictions. You are likely to get some jokers and crackpots, but for prediction purposes you actually don't want them. Is there any empirical data that prediction markets can correctly estimate the chances of very-low-probability events?
gwern:
Liquidity problems are an issue, but they may have been partially solved: first, by paying normal interest on deposits to avoid opportunity-cost issues, and second, by market makers like Hanson's LMSR. In particular, people can subsidize the market maker, paying to get trading activity and hence accuracy.
Lumifer:
It's not the liquidity problems I'm worried about, but rather the signal-to-noise ratio. Assume that the correct underlying probability is, say, 0.5% and so you should expect 0.5% of the participants to bet on the "the end is nigh!" side. However you also have a noise level -- say 3% of your participants are looking for teh lulz or believe that a correct bet will get them front-row Rapture seats (or vice versa). Given this noise floor you will be unable to extract the signal from the prediction market if the event has a sufficiently low probability.
gwern:
? That's not a prediction market. A PM is a continuous price interpretable as a probability, which traders trade based on its divergence from their estimated probability. You can buy or sell either way based on how it's diverged. I have 'bet for' and 'bet against' things I thought were high or low probability all the time on past prediction markets.
Lumifer:
You're right, I got confused with the noise floor in polls. However, my concern with counterparties didn't disappear. People need incentives to participate in markets. Those who bet that there won't be a nuclear war next year will expect to win some money, and if that money is half of a basis point they are not going to bother. Who will be the source of the money that pays the winners? At which probability are you going to take the side "yes, there will be a nuclear war in 2016"? And if this particular market is subsidized, well, then it becomes free money and your prediction ability goes out of the window. I suspect prediction markets won't work well with very-low-probability bets, especially bets with moral overtones ("You bet on a global pandemic happening? What are you, a monster??")
gwern:
Half a basis point is half a basis point. Bond traders would prostitute their grandmothers for a regular half a basis point boost to their returns. The source is everyone who takes the other side of the contract? PMs are two-sided. I don't pretend to understand how the LMSR and other market-makers really work, but I think that you can't simply trade at random to extract money from them. Seems to have worked so far with the IARPA bets. (Admittedly, my own trades on ISIS being overblown didn't work out too well but I think I did gain on the various flu contracts.)
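Since Hanson's LMSR comes up in the exchange above, a minimal sketch may help make the mechanics concrete: the market maker quotes prices that always sum to 1, so the standing price of a contract can be read directly as the crowd's probability estimate. This is an illustrative toy, not code from the thread or from any particular library; the class name, the liquidity parameter b, and the example numbers are all assumptions made for the sketch.

```python
import math

class LMSRMarketMaker:
    """Toy Logarithmic Market Scoring Rule (LMSR) market maker.

    Cost function: C(q) = b * ln(sum_i exp(q_i / b))
    Price of outcome i: p_i = exp(q_i / b) / sum_j exp(q_j / b)
    Prices are positive and sum to 1, so they read as probabilities.
    """

    def __init__(self, n_outcomes: int, b: float = 100.0):
        self.b = b                   # liquidity parameter: larger b = deeper market
        self.q = [0.0] * n_outcomes  # net shares sold of each outcome

    def _cost(self) -> float:
        return self.b * math.log(sum(math.exp(qi / self.b) for qi in self.q))

    def price(self, i: int) -> float:
        z = sum(math.exp(qi / self.b) for qi in self.q)
        return math.exp(self.q[i] / self.b) / z

    def buy(self, i: int, shares: float) -> float:
        """Sell `shares` of outcome i to a trader; return what the trader pays."""
        before = self._cost()
        self.q[i] += shares
        return self._cost() - before

# A trader who believes outcome 0 is underpriced buys it, pushing the
# implied probability up; each share pays 1 unit if outcome 0 occurs.
mm = LMSRMarketMaker(n_outcomes=2)
print(round(mm.price(0), 3))                  # 0.5 before any trades
paid = mm.buy(0, 20)
print(round(paid, 2), round(mm.price(0), 3))  # ~10.5 paid; price rises to ~0.55
```

The subsidy gwern describes corresponds to the operator's bounded worst-case loss, which for LMSR is b * ln(number of outcomes): the operator pays at most that amount in exchange for trading activity and hence price accuracy.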

I think the word "Trust" is lacking from your roadmap. We need to find ways to determine when to trust scientists that their findings are sound.

On a smaller level trust is also very important to get people to cooperate. Empathy alone doesn't make you cooperate when you don't trust the other person.

Improbable idea for surviving heat death: computers made from time crystals. (h/t Nostalgebraist and Scott)

turchin:
I have another roadmap, "how to survive the end of the universe", and one of the ideas there is geometric computers. But thanks for the links. The map in the OP is about x-risks in the approximately near future, like the next 100 years.
Lumifer:
From that set of solutions I prefer the good old-fashioned elven magic X-D
Elo:

New idea: Nuclear hoarding. Collecting all nuclear particles to limit their availability for use. (Not sure if this falls under a larger "worldwide risk prevention authority", but it doesn't have to be carried out willingly; it can be carried out via capitalism. Just purchase and contain the material.)

New idea: to limit climate change, tree-planting. Plant massive numbers of green species in order to reduce the carbon in the atmosphere. Australia is a large land mass that is unused and could be utilised to grow the oxygen farm and carbon capture…

turchin:
1. If you mean nuclear materials by "particles", it is practically impossible, because uranium is dissolved in sea water and could be mined. It would also require a world government with great power.
2. I already added CO2 capture. It could also be plankton.
3. It is in practice Bostrom's idea of differential technological development.
4. They will already be punished by an x-risk catastrophe: they will die, and their families too. If they don't want to think about that, they will not take punishment seriously. But maybe we could punish people just for raising a risk or for not preventing it enough. It is like a law which punishes people for inadvertency or neglect. I will think about it. The R.B. idea is about this, in fact.
5. Ozone is about UV, not cooling. But nanobots could become part of geoengineering later. I don't go into detail about all possible ways of geoengineering in the map.
6. Mostly the same.
7. It is not clear why a new moon would be any better than the real Moon or the International Space Station.
8. Terraforming planets in the map is the same as making Mars habitable; also, moving Mars is risky and requires dangerous technologies.

The most interesting idea which I derived from here is to write an international law about x-risks which would punish people for raising risk (underestimating it, plotting it, risk neglect) and reward people for lowering x-risks, finding new risks, and efforts at prevention. I would like to award you 1 prize for it; PM me.
Elo:
n4. X-risk is not always risky to the developer: the atomic bomb's creators did not feel the effects of their weapons personally. In this way an x-risk can be catastrophic but not of personal consequence. I was suggesting something to overcome this predicament, where one government might commission a scientist to design a bioweapon to be released on another country and offer to defend/immunise the creators. It's a commons-binding contract that discourages the individual. It only takes one, which is the challenge of humans taking x-risky actions: it is not always of direct consequence to the individual (or it may not feel that way to the individual).

n5. Radiation from space that we are not currently accustomed to or protected from would be an x-risk to the biological population.

n7. As a way to increase the number of colonies nearby, creating another moon that is close to Earth might be easier, cheaper and more viable than Mars. Although I might be completely wrong and Mars might be easier; it really depends on how much inter-colony travel there will likely be.

n1. I meant: buy up the supply of nuclear material, push the price above viable for a long time, and thereby discourage the development of the surrounding technology.
turchin:
I got your idea: I think it should be covered by an x-risk law. Something like Article 1: Anyone who consciously or unconsciously raises x-risks will go to jail.

Ozone layer depletion is not proven to be an extinction-level event, but it is a really nasty thing anyway. It could be solved without nanobots, by injecting the right chemicals.

NASA is planning to capture a small asteroid, so it may work, but it can't be the main solution. It may be a useful step.

Market forces will raise the uranium supply. You don't need a lot of uranium if you are going to enrich it in centrifuges. If you are really going to limit supply, it's better to try to buy up all the best scientists in the field. In the 90s, the US was buying Russian scientists who had previously worked on secret biological weapons.
Elo:
The other advantage of forcing people to use a limited supply of radioactive material in a reaction would be the enhanced safety of doing so as well (in the case of a failure there will be less total material to account for).
Elo:

Meta: I honestly didn't read the plan in full the first two times I posted. Instead I went to Wikipedia and looked up global catastrophic risk. Then once I had an understanding of what the definition of global catastrophic risk is; I thought up solutions (How would I best solve X) and checked if they were on the map.

The reason why I share this is because the first several things I thought of were not on the map. And it seems like several other answers are limited to "what's outside the box" (think outside the box is a silly concept because i…

turchin:
Yes, the site is not finished, but the map "Typology of human extinction risks" is ready and will be published next week. Around 100 risks will be listed. Any roadmap has its limitations because of its size and its basic 2D structure. Of course we could and should cover all options for all risks, but that should be done in more detail. Maybe I should make a map where ways of prevention are suggested for each risk.
Elo:
I didn't really know what x-risks you were talking about; which is why a map of x-risks would have helped me.
turchin:
Basically the same risks you listed here. I can PM you the map.

I don't think "low rivalry" in science is desirable. Rivalry makes scientists criticize the work of their peers and that's very important.

turchin:
By "low rivalry" I mean something like "productive cooperation", based on trust between scientists, and between society and the scientists. Productive cooperation does not exclude competition if it is based on honest rules. And it is a really important topic. In the movie "2012", the most fantastic thing was that when the scientist found the high neutrino level, he was able to inform the government about the risk and was heard. I really want to award the prize; you can email me the details at alexei.turchin@gmail.com. I am going to replace rivalry with "productive cooperation between scientists and society based on trust". Do you think that is the right phrase?
Elo:

Is A3 meant to have connecting links running horizontally through its path?

Another bad idea: build a simulation-world to live in so that we don't actually have to worry about real-world risks. (disadvantage - is possibly an X-risk itself)

It kinda depends on which x-risk you are trying to cover...

For example - funding technologies that improve the safety or efficiency of nuclear use might mean that any use is a lot more harmless. Or develop ways to clean up nuclear mess; or mitigate the decay of nuclear radiation (i.e. a way to gather nuclear radioactive dust)

Enco…

MarsColony_in10years:
I'm all for biohazard awareness groups, and even most forms of biohacking at local hackerspaces or wherever else. However, I never want to see potentially dangerous forms of biotech become decentralized. Centralized sources are easy to monitor and control. If anyone can potentially make an engineered pandemic in their garage, then no amount of education will be enough for a sufficient safety margin. Think of how many people cut fingers off in home table saws or lawnmowers or whatever. DIY is a great way to learn through trial and error, but not so great where errors have serious consequences. The "economic activation energy" for both malicious rogue groups and accidental catastrophes is just too low, and Murphy's law takes over. However, if the economic activation energy is a million dollars of general-purpose bio lab equipment, that's much safer, but would require heavy regulation at the national level. Currently it's something like a billion dollars of dedicated bio-warfare effort, and has to be regulated at the international level (by the Geneva Protocol and the Biological Weapons Convention). I'd agree with you here. Although money is a fantastic motivator for repetitive tasks, it has the opposite effect on coming up with insightful ideas.
Elo:
I was really saying: save your money till after people shoot off some low-hanging-fruit ideas. I would argue that the current barrier of "it costs lots of money to do bio-hacking right" is a terrible one to hide behind because of how easy it is to overcome it, or to do biohacking less right and less safely, i.e. without safe containment areas. Perhaps fund things like clean rooms with negative pressure and leave the rest up to whoever is using the lab space.
turchin:
In A3 the blocks are not connected because they are not consecutive steps, but more like themes or ideas.
Elo:
Okay. Maybe make the outlines bolder or change the colours so they appear more distinct, or make some lines into arrows?
turchin:
I like all 3 ideas - simulation, nuclear waste reduction, and bio-hack awareness groups. I would like to include them in the map and award you 150 USD. How can I pay you?
Elo:
Simulation is an x-risk in that we stagnate our universal drive to grow, live in a simulation for the rest of our lives, and extinguish ourselves from existence. Bio-hacking is an x-risk because if done wrong you would encourage all these small biotech interests and end up with someone doing it unsafely. The failure of mini biohack groups could probably be classified as controlled regression -> small catastrophe, similar to the small nuclear catastrophes of recent history and their ability to discourage any future risk-taking behaviour in the area. The advantage of common bio-hack groups is less reliance on the existing big businesses to save us with vaccines etc. Indeed the suggestion of "invite the full population to contribute to solving the problem" might be a better description.

New suggestion: "lower the barriers of entry into the field of assistance in x-risk". Easy explanation of the x-risks; easier availability of resources to attempt solutions. Assuming your main x-risks are 1. biotech, 2. nanotech, 3. nuclear, 4. climate change and 5. UFAI:

1. Provide biotech upskilling (education, some kind of OpenBio foundation) and bio-resources for anyone interested in the area (starter kit, smaller cheaper lab-on-a-chip, simple biotech "at-home" experiments like GFP insertion).
2. Teach the risks of molecular manufacturing before teaching people how to do it (or restructure the education program to make sure it is included).
3. Teach 4th-gen nuclear technologies to everyone. Implement small-scale nuclear models (i.e. tiny scale - not sure if it would work) to help people understand the possibility of a teeny nuclear failure scaled up to large. (Whether it is possible to make a tiny-scale nuclear reactor is beyond my knowledge.)
4. Empower the public with technology or understanding to reverse pollution, i.e. solar + batteries + electric cars, plant-trees initiatives (or oxygen bio-filters), carbon capture programs; educate and make possible small-scale sustainability…
turchin:
I like the ideas about risk education and about bureaucracy. I think I should include them in the map and award you 2 prizes. How can I transfer them?
Elo:
Details by PM.
Elo:
Reply in a PM.
turchin:
Replied
plex:

Comprehensive, I think it has the makings of a good resource, though it needs some polish. I'd imagine this would be much more useful to someone new to the ideas presented if it linked out to a bunch of papers/pages for expansion from most bulletpoints.

One thing I'd like to see added is spreading the memes of reason/evidence-based consequentialist decision making (particularly large-scale and future included) at all levels. It may be entirely accepted here, but the vast majority of humans don't actually think that way. It's kind of a pre-requisite for getting…

turchin:
I am working now on a large explanatory text, which will be 40-50 pages, with links. Maybe I will add the links inside the PDF. I don't think that I should go into all the details of decision theory and EA; I just put "rationality". Picking potential world saviours, educating them, and providing all our support seems to be a good idea, but probably we don't have time. I will think more about it. Planetary mining was a recent addition, addressed to people who think that Peak Oil and Peak Everything is the main risk. Personally, I don't believe in the usefulness of space mining without nanotech. The point about dates is really important. Maybe I should use vaguer dates, like the beginning of the 21st century, the middle, and the second half? What other way is there to say it more vaguely? I upvoted your post, and in general I think that downvoting without explanation is not a good thing on LW. "Pray" corrected.
plex:
Linking to the appropriate section of the explanation text would probably be better than linking to primary sources directly, once that exists (it in turn would link out to primary sources).

Compressing to "rationality" is reasonable, though most readers would not understand it at a glance. If you're trying to keep it very streamlined, just having this as a set of pointers makes sense, though perhaps alongside rationality it'd be good to have a pointer that's more clearly directed at "make wanting to fix the future a thing which is widely accepted", rather than rationality's usual meaning of being effective. I'd also think it more appropriate for the A3 stream than A2, for what I have in mind at least.

I'd think creating world saviours from scratch would not be a viable option on some AI timelines, but getting good at picking up promising people in/leaving uni who have the right ethical streak and putting them in a network full of the memes of EA/x-risk reduction could plausibly give a turnaround from "person who is smart and will probably get some good job in some mildly evil corporation" to "person dedicated to trying to fix major problems/person in an earning-to-give career to fund interventions/person working towards top jobs to gain leverage to fix things from the inside" on the order of months, with an acceptable rate of success (even a few % changing life trajectory would be more than enough to pay back the investment of running that network in terms of x-risk reduction).

Perhaps classifying things in terms of what should be the focus right now versus things that need more steps before they become viable projects would be more useful than attempting to give dates in general? Vague dates are better, but thinking more, I'm not sure if even giving wide ranges really solves the problem; our ability to forecast several very important things is highly limited. I'm not sure about a good set of labels for this, but perhaps something like:

* Immediate (a…
turchin:
Thank you for the inspiring comment. Yes, anonymous downvoting makes me feel as if I have a secret enemy in the woods (( The idea of creating "world saviours" from bright students is more realistic, and effective altruists and LW have done a lot in this direction. Rationality also should be elaborated, and the suggestion about date classification is inspiring.
Lumifer:
I'm very, very suspicious of the idea of creating "world saviours". In the Abrahamic tradition, world saviours are expected to sweep the Earth clean of bad men with fire and sword. Yes, nice things are promised after that :-/
plex:
I'm curious about why this was downvoted?
OrphanWilde:
One person downvoted it, which means it could be anything from "I don't like spelling corrections" to "I disagree about not giving dates". In general, if only one person downvotes, it is best not to ask. I don't see anything worth downvoting in your post myself, although I wouldn't upvote it, because it reads to me more like an attempt at compressing many applause lights into one comment without paying attention to any one of them than an attempt at genuine suggestions for improvement. (It's a little too Less Wrongian.)