We talk about a wide variety of stuff on LW, but we don't spend much time trying to identify the very highest-utility topics and promote additional discussion of them. This thread is a stab at that. Since it's just comments, you can feel more comfortable bringing up ideas that might be wrong or unoriginal (but that nevertheless have relatively high expected value, since existential risk is such an important topic).


The naive thing to do for existential risk reduction seems like: make a big list of all the existential risks, then identify interventions for reducing every risk, order interventions by cost-effectiveness, and work on the most cost-effective interventions. Has anyone done this? Any thoughts on whether it would be worth doing?
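For concreteness, here is a minimal sketch of that pipeline in Python. Every risk, intervention, and number below is a made-up placeholder, not an estimate from any source:

```python
# Minimal sketch of the naive prioritization described above.
# Every entry and number here is a hypothetical placeholder.

interventions = [
    # (risk, intervention, estimated risk reduction, estimated cost in $)
    ("asteroid impact",     "fund sky surveys",     1e-8, 1e7),
    ("engineered pandemic", "biosecurity research", 1e-6, 1e8),
    ("unfriendly AI",       "AI safety research",   1e-5, 1e8),
]

# Rank by estimated existential-risk reduction per dollar spent.
ranked = sorted(interventions, key=lambda x: x[2] / x[3], reverse=True)

for risk, name, delta_p, cost in ranked:
    print(f"{name} ({risk}): {delta_p / cost:.2e} risk reduction per $")
```

The hard part, of course, is not the sorting but producing defensible risk-reduction and cost numbers in the first place.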

[anonymous]

Bostrom's book 'Global Catastrophic Risks' covers the first two items on your list. The other two are harder. One issue is lack of information about the organisations currently working in this space. If I remember correctly, Nick Beckstead at Rutgers is compiling a list. Another is the interrelationships between risks - the GCR Institute is doing work on this aspect.

Yet another issue is that a lot of existential risks are difficult to address with 'interventions' as we might understand the term in, say, extreme poverty reduction. While one can donate money to AMF and send out antimalarial bednets, it seems harder to think of the equivalent for preventing the accidental or deliberate release of synthetic diseases. Indeed, many of these problems can only be tackled by government action, because they require regulation or because of the cost of the prevention device (e.g. an asteroid deflector). However, it's no secret that the cost-effectiveness of political advocacy is really hard to measure, which is perhaps why it's been under-analysed in the Effective Altruism community.

Thanks for reminding me about GCR.

it seems harder to think of the equivalent for preventing the accidental or deliberate release of synthetic diseases.

OK, how about really easy stuff, like systematically emailing researchers involved with synthetic biology and trying to convince them to reconsider their choice of field? Feels like ideally we would get someone who knew stuff about biology (and ideally had some level of respect in the biology community) to do this.

However, it's no secret that the cost-effectiveness of political advocacy is really hard to measure, which is perhaps why it's been under-analysed in the Effective Altruism community.

Does anyone reading LW know stuff about political advocacy and lobbying? Is there a Web 2.0 "lobbyist as a service" company yet? ;)

Are there ways we can craft memes to co-opt existing political factions? I doubt we'd be able to infect, say, most of the US Democratic Party with the entire effective altruism memeplex, but perhaps a single meme could make a splash with good timing and a clever, sticky message.

Is there any risk of "poisoning the well" with an amateurish lobbying effort? If we can get Nick Bostrom or similar to present to legislators on a topic, they'll probably be taken seriously, but a half-hearted attempt from no-names might not be.

Is there any risk of "poisoning the well" with an amateurish lobbying effort?

E.g. annoyance towards overenthusiastic amateurs wasting the time of researchers who know the field and its issues better than they do seems plausible. Also, efforts to persuade researchers to leave the field seem most likely to work on the most responsible ones, leaving the more reckless researchers to dominate the field, which could weaken the norms of precaution-taking in the field overall.

Low-quality or otherwise low-investment attempts at convincing people to make major life changes seem to me to run a strong risk of setting up later attempts for the "one argument against an army" failure mode. Remember that the people you're trying to convince aren't perfect rationalists.

(And I'm not sure that convincing a few researchers would be an improvement, let alone a large one.)

Also, efforts to persuade researchers to leave the field seem most likely to work on the most responsible ones, leaving the more reckless researchers to dominate the field, which could weaken the norms of precaution-taking in the field overall.

Only if they buy the argument in the first place. Have any "synthetic biology" researchers ever been convinced by such arguments?

Were there any relatively uninformed amateurs who played a role in convincing EY that AI friendliness was an issue?

[satt]

OK, how about really easy stuff, like systematically emailing researchers involved with synthetic biology and trying to convince them to reconsider their choice of field? Feels like ideally we would get someone who knew stuff about biology (and ideally had some level of respect in the biology community) to do this.

Systematically emailing researchers runs the risk of being pattern matched to crank spam. If I were a respected biologist, a better plan might be to

  1. write a short (500-1500 words) editorial that communicates the strongest arguments with the least inferential distance, and sign it
  2. get other recognized scientists to sign it
  3. contact the editors of Science, Nature, and PNAS and ask whether they'd like to publish it
  4. if step 3 works, try to get an interview or segment on those journals' podcasts (all three have podcasts), and try putting out a press release
  5. if step 3 fails, try getting a more specific journal like Cell or Nature Genetics to publish it

Some of these steps could of course be expanded or reordered (for example, it might be quicker to get a less famous journal to publish an editorial, and then use that as a stepping stone into Science/Nature/PNAS). I'm also ignoring the possibility that synthetic biologists have already considered risks of their work, and would react badly to being nagged (however professionally) about it.

Edit: Martin Rees got an editorial into Science about catastrophic risk just a few weeks ago, which is minor evidence that this kind of approach can work.

OK, how about really easy stuff, like systematically emailing researchers involved with synthetic biology and trying to convince them to reconsider their choice of field?

That might convince a few on the margin, but I doubt it would convince the bulk of them -- especially the most dangerous ones, I'd guess.

[anonymous]

People like Bostrom and Martin Rees are certainly engaged in raising public awareness through the media. There's extensive lobbying on some risks, like global warming, nuclear weapons and asteroid defence. In relation to bio/nano/AI the most important thing to do at the moment is research - lobbying should wait until it's clearer what should be done. Although perhaps not - look at the mess over flu research.

It seems harder to think of the equivalent for preventing the accidental or deliberate release of synthetic diseases.

OK, how about really easy stuff, like systematically emailing researchers involved with synthetic biology and trying to convince them to reconsider their choice of field?

One of the last serious attempts to prevent large-scale memetic engineering was the Unabomber.

The effort apparently failed - the memes have continued their march unabated.

It's worth doing but very hard. GCR is a first stab at this, but really it's going to take 20 times that amount of effort to make a first pass at the project you describe, and there just aren't that many researchers seriously trying to do this kind of thing. Even if CSER takes off and MIRI and FHI both expand their research programs, I'd expect it to be at least another decade before that much work has been done.

It feels like more research on this issue would have the effect of gradually improving the clarity of the existential risk picture. Do you think the current picture is sufficiently unclear that most potential interventions might backfire? Given limited resources, perhaps the best path is to do targeted investigation of what appear to be the most promising interventions and stop as soon as one that seems highly unlikely to backfire is identified, or something like that.

What level of clarity is represented by a "first pass"?

[Shmi]

Any thoughts on whether it would be worth doing?

It would be worth doing, and has been done, to some degree.

make a big list of all the existential risks, then identify interventions for reducing every risk

There are a few steps missing in between, such as identifying the causes of the risks, rating them by likelihood and by the odds of wiping out the human species, etc.

I wrote a book about existential risks in Russian and translated it into English. In Russian it got maybe 100,000 clicks from different sites, and the reaction was generally positive. I translated it into English myself, so the translation is readable but not good. In English I got maybe a couple of negative comments. I attribute the difference to the fact that there are far fewer books on existential risks in Russian, and also to the bad translation.

Structure of the global catastrophe. Risks of human extinction in the 21 century. http://ru.scribd.com/doc/6250354/STRUCTURE-OF-THE-GLOBAL-CATASTROPHE-Risks-of-human-extinction-in-the-XXI-century-

Is there a way to crowdsource editing this into better English? I mean, I've never seen a book-length wiki.

It is a good idea. But the book-length format is obsolete anyway - that is why the Sequences on LW are more popular than the earlier book-length Creating Friendly AI. So it would be better to cut the book into wiki pages of chapter length. Another problem is lack of interest - I could create such a wiki, but most likely nobody would visit it. Still, maybe I should create it and invite people to try it and add information.

What about editing and releasing chapters as posts on LW for discussion as each chapter is completed?

I tried to do this in 2011, when I suggested a sequence called «moresafe», but there were some mistakes in grammar and formatting and it was extensively downvoted. Also, people claimed that existential risk is not the right theme for LW, or simply disagreed with my point of view on some topics. Downvoting as an instrument of communication is emotionally hurtful to me, and I feel less encouraged to post again. So I decided to post rarely, and only when I have a high-quality text.

A question Katja Grace posed at a CFAR minicamp (wording mine):

Are there things we can do that aren't targeted to specific x-risks but mitigate a great many x-risks at once?

So trying to increase the number of people who think about and work on x-risk and see it as a high priority would be one. Efforts to raise general rationality would be another. MIRI does sort-of represent a general strategy against existential risk, since if they are successful the problem will likely be taken care of.

I hope that SPARC will end up being one of these things.

"not you regular math camp" I gather

[SWIM]

In discussions about AI risks, the possibility of a dangerous arms race between the US and China sometimes comes up. It seems like this kind of arms race could happen with other dangerous techs like nano and bio. Pushing for more democratic governments in states like Russia and China might also decrease the chances of nuclear war, etc.

This article from the Christian Science Monitor suggests that if the Chinese government decided to stop helping North Korea, that might cause the country to "implode", which feels like a good thing from an x-risk reduction standpoint.

How could we push for regime change? Since the cost of living in China is lower than in the US, funding dissidents who are already working towards democracy seems like a solid option. Cyberattacks seem like another... how hard would it be to neuter the Great Firewall of China?

Pushing for more democratic governments in states like Russia and China

Do you expect democratic governments to engage less in arms races? Or to be less capable of engaging in them (because they might have less domestic/economic/military power)? Or to be less willing to actually deploy the produced arms? Or to be less willing to compete with the US specifically? Or to cause some other change that is desirable? And why?

I ask because "democracy" is an applause light that is often coopted when people mean something else entirely that is mentally associated with it. Such as low corruption, or personal freedom, or an alliance with Western nations.

[SWIM]

Or to be less willing to compete with the US specifically?

This is what I had in mind. I'd guess that the fact that the US is democratic and China is not ends up indirectly causing a lot of US/China friction. Same is probably true for Russia.

Pushing for more democratic governments in states like Russia and China

That sounds like the sort of aggression which would lead to an arms race. How would America react if China tried to achieve regime change here?

Cyberattacks

...thereby encouraging them to invest in intelligent tech defence

[anonymous]

I agree that if Russia and China became more democratic the world would be a safer place. Liberal democracies are generally better at cooperation, and almost never go to war with one another [see the extensive literature on Democratic Peace Theory].

However, like Larks, I think this is a baaaaaad idea. Foreign interference would either have no effect, or provoke harsh countermeasures.

[SWIM]

Foreign interference would either have no effect, or provoke harsh countermeasures.

Seems plausible. Might be a good idea for LWers who are Russian or Chinese natives, though.

Pushing for more democratic governments in states like Russia and China might also decrease the chances of nuclear war, etc.

Most Chinese people I talked to really disliked Japan, and seemed in favour of China invading Taiwan to "get it back". And that's from a sample that was more educated and Western-friendly than the general population. I'm really not sure giving everybody the vote would decrease the chances of nuclear war. It's not as if democratic elections in Iran and Egypt (and maybe Libya?) made those countries more stable.

if the Chinese government decided to stop helping North Korea, that might cause the country to "implode", which feels like a good thing from an x-risk reduction standpoint.

Sure, a civil war in a highly militarized country that has The Bomb, what could go wrong?

Sure, a civil war in a highly militarized country that has The Bomb, what could go wrong?

Keep in mind that a potential consequence of letting NK run amok (remember that they have already attacked South Korean territory and military targets, killing dozens of South Koreans, over the last few years) is South Korea and Japan going nuclear. (Implausible? No: SK already had an active nuke program in the 1980s due to fear of NK.)

I agree that North Korea keeping up its current behavior is dangerous; it's just far from clear whether a regime collapse would make things better or worse. The safest solution might be something like a soft collapse where the Kims and their friends are offered a safe retirement in China in exchange for stepping down and letting South Korea take over (which is unlikely unless China is threatening military action otherwise - and since China doesn't want Japan to go nuclear, it has an incentive to find some way to calm down the Kims).

This article from the Christian Science Monitor suggests that if the Chinese government decided to stop helping North Korea, that might cause the country to "implode", which feels like a good thing from an x-risk reduction standpoint.

I think the civil war that would result combined with extreme proximity between Chinese and US troops (the latter supporting South Korea and trying to contain nuclear weapons) is probably an abysmal thing from an x-risk reduction standpoint.

China has privately told the US that they would support the US in extending South Korean control over the entire Korean peninsula, per the diplomatic cables leak. The Chinese would probably be happy if the US rolled in and flattened the entire country, as long as they didn't have to let too many refugees into China. And really, given the way North Korea is acting, China is probably willing to take that risk at this point: the North Koreans seem eager to cause trouble, and there's no guarantee it won't happen in a worse way later on.

Honestly I think that crushing the North Korean government and military completely would probably pretty much end it. North Korea has a ton of propaganda about their country's superiority over the rest of the world; without the tight control over the country that the present government has, I don't think that vision of superiority would last very long.

Not to say that they'd be terribly awesomely happy with us, but the US rolled into Japan after WWII and it worked out quite well. Given the present day poverty of the country, really all you'd have to do to win is wait for a bad famine to hit the country and roll in then; showing the people that you care about them with food is a dirty but probably effective way to make them distrust you less, especially if you have the South Koreans move in and the US move out as much as possible. Though of course other options exist.

It would be a mess, but I think it would probably be significantly less messy than Afghanistan, given that rather than having twenty different angry groups, you really just have the government and that's about it.

Pushing for more democratic governments in states like Russia and China might also decrease the chances of nuclear war, etc.

How sure are you?

  • Acts of military aggression by the PRC since 1949: About 5.
  • Acts of military aggression by the USSR/Russia in the same period: About 5.
  • Acts of military aggression by the USA in the same period: About 7

(I've tried to be upwardly biased on numbers for all three, since it's obviously hard to decide who the aggressors in a conflict are)

  • Wars that the PRC have participated in that were not part of domestic territorial disputes since 1949: 2
  • Likewise for Russia: 5
  • Likewise for the USA: 17

(for the USA and USSR figures I'm counting all of the Cold War as one conflict, and likewise all of the War on Terror)

Sources found here

Edit: What happened to my formatting? I've had this problem before but I've never been able to fix it.

[SWIM]

Good point. I think ideally your sample size would be larger; I'm not sure the US is representative of democratic countries.

Re: formatting. Try putting a blank line between bullets.

Re: formatting. Try putting a blank line between bullets.

Tried, doesn't work. Anyone got any ideas?

[SWIM]

You also need to put a space between the asterisk and the start of your sentence. Ex.:

* These

* Will

* Be

* Bullet

* Points


  • These

  • Will

  • Be

  • Bullet

  • Points

Thank you!

After his cryonics hour with Robin Hanson, orthonormal wrote:

Robin made the case that it's much more marginally useful now to work on analyzing the potentially long tail of x-risks than to focus on one very salient scenario—kind of like the way Bruce Schneier talks about better security via building resilient structures rather than concentrating on foiling specific "Hollywood" terror scenarios.

This is something I've been vaguely wondering myself. CFAR or similar seems like it might be one way to do this, but right now their methodology doesn't look very scalable (in-person workshops run by a small number of highly trained employees; contrast with LW or HPMoR). I'd be interested to hear if they have any plans to scale their operations up and if so what those plans look like. I'm also curious if they're trying to get leading psychologists like Keith Stanovich or Daniel Kahneman involved--this seems like it would be useful for a bunch of reasons.

Another idea is to try to spread the "politics is the mind-killer" or nonviolent communication memes more strongly... in other words, try to accelerate the historical trend towards decreased violence, as discussed by Steven Pinker and others. I've heard rumors that Middle Easterners' aggression may be caused by zinc deficiency from eating unleavened bread; don't know how true/useful that is.

the way Bruce Schneier talks about better security via building resilient structures rather than concentrating on foiling specific "Hollywood" terror scenarios.

Also, see Taleb's Antifragile.

I've heard rumors that Middle Easterners' aggression may be caused by zinc deficiency from eating unleavened bread; don't know how true/useful that is.

I suspect some history/culture is a better explanation... But why not drop some zinc on them just in case? Go Team America!

Is anyone systematically working on the other side of the pancake: existential opportunities?

People are working on particular opportunities, but I haven't heard of people doing the depth-first search for opportunities.

[anonymous]

What do you mean by 'existential opportunities'?

What would be game changing in terms of avoiding threats, and our resilience to threats?

(Not sure this is the right answer.) Potentially the FHI prize competition could be seen as an attempt to pursue that end? (http://www.fhi.ox.ac.uk/prize - it's closed now.)

Interesting.

They seem more focused on general problem solving than game changing, but just raising the questions they do is game changing in a way. The blue team is Us, and what would harm us is Them. Getting people to increasingly view everyone else as Us would be game changing.

One of the things that cheers me about Death - it's a common bond with everyone else. Well, those living today for an afterlife tomorrow probably aren't so much part of that common bond, but maybe they'll come around someday.

[anonymous]

Has anyone tried advertising existential risk?

Bostroms "End of Humanity" talk for instance.

It costs about $0.20 per view for a video ad on YouTube, so if 0.2% of viewers give an average of $100, it would break even. Hopefully people would give more than that.
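To make the break-even arithmetic explicit (a quick sketch; the per-view cost and average donation are simply the figures assumed above, not measured data):

```python
# Break-even check for the YouTube ad idea above.
# Both inputs are the figures assumed in the comment, not measured data.
cost_per_view = 0.20   # dollars paid per ad view
avg_donation = 100.0   # dollars given by each viewer who donates (assumption)

# Fraction of viewers who must donate for the campaign to break even:
break_even_rate = cost_per_view / avg_donation
print(f"Break-even donation rate: {break_even_rate:.2%}")  # -> 0.20%
```

Any donation rate above that, or a higher average gift, puts the campaign in the black.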

You can target ads at groups likely to give a lot, by the way, like the highly educated.

[This comment is no longer endorsed by its author]