John_Maxwell_IV comments on Existential risks open thread - Less Wrong

Post author: John_Maxwell_IV 31 March 2013 12:52AM


Comment author: John_Maxwell_IV 31 March 2013 12:55:18AM 10 points

The naive thing to do for existential risk reduction seems to be: make a big list of all the existential risks, identify interventions for reducing each risk, order the interventions by cost-effectiveness, and work on the most cost-effective ones. Has anyone done this? Any thoughts on whether it would be worth doing?
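(A minimal sketch of that pipeline, with entirely made-up numbers - the intervention names, costs, and risk-reduction figures below are illustrative placeholders, not real estimates:)

```python
# Naive existential-risk prioritization: list interventions with rough
# cost and risk-reduction estimates, then rank by cost-effectiveness.
# All figures below are invented placeholders for illustration only.

interventions = [
    # (name, cost in dollars, estimated reduction in extinction probability)
    ("Asteroid survey expansion",  1e8, 1e-5),
    ("Biosecurity lab regulation", 5e7, 5e-5),
    ("AI safety research funding", 2e7, 4e-5),
]

def cost_effectiveness(item):
    """Risk reduction bought per dollar spent; higher is better."""
    name, cost, risk_reduction = item
    return risk_reduction / cost

# Sort interventions from most to least cost-effective.
ranked = sorted(interventions, key=cost_effectiveness, reverse=True)
for name, cost, reduction in ranked:
    print(f"{name}: {reduction / cost:.2e} risk reduction per dollar")
```

Of course, the hard part is not the sorting but producing estimates anyone would trust.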

Comment author: [deleted] 31 March 2013 03:12:03AM 12 points

Bostrom's book 'Global Catastrophic Risks' covers the first two steps on your list. The other two are harder. One issue is the lack of information about organisations currently working in this space. If I remember correctly, Nick Beckstead at Rutgers is compiling a list. Another is the interrelationships between risks - the GCR Institute is doing work on this aspect.

Yet another issue is that many existential risks are difficult to address with 'interventions' as we might understand the term in, say, extreme poverty reduction. While one can donate money to AMF and send out antimalarial bednets, it is harder to think of the equivalent for preventing the accidental or deliberate release of synthetic diseases. Indeed, many of these problems can only be tackled by government action, either because they require regulation or because of the cost of the prevention device (e.g. an asteroid deflector). However, it's no secret that the cost-effectiveness of political advocacy is very hard to measure, which is perhaps why it has been underanalysed in the Effective Altruism community.

Comment author: John_Maxwell_IV 31 March 2013 10:54:52AM *  4 points

Thanks for reminding me about GCR.

it seems harder to think of the equivalent for preventing the accidental or deliberate release of synthetic diseases.

OK, how about really easy stuff, like systematically emailing researchers involved with synthetic biology and trying to convince them to reconsider their choice of field? Feels like ideally we would get someone who knew stuff about biology (and ideally had some level of respect in the biology community) to do this.

However, it's no secret that the cost-effectiveness of political advocacy is very hard to measure, which is perhaps why it has been underanalysed in the Effective Altruism community.

Does anyone reading LW know stuff about political advocacy and lobbying? Is there a Web 2.0 "lobbyist as a service" company yet? ;)

Are there ways we can craft memes to co-opt existing political factions? I doubt we'd be able to infect, say, most of the US Democratic Party with the entire effective altruism memeplex, but perhaps a single meme could make a splash with good timing and a clever, sticky message.

Is there any risk of "poisoning the well" with an amateurish lobbying effort? If we can get Nick Bostrom or similar to present to legislators on a topic, they'll probably be taken seriously, but a half-hearted attempt from no-names might not be.

Comment author: Kaj_Sotala 31 March 2013 03:59:07PM *  7 points

Is there any risk of "poisoning the well" with an amateurish lobbying effort?

E.g. annoyance towards overenthusiastic amateurs wasting the time of researchers who know the field and its issues better than they do seems plausible. Also, efforts to persuade researchers to leave the field seem most likely to work on the most responsible ones, leaving the more reckless researchers to dominate it, which could weaken the norms of precaution-taking in the field overall.

Comment author: evand 31 March 2013 04:56:13PM 2 points

Low-quality or otherwise low-investment attempts at convincing people to make major life changes seem to me to run a strong risk of setting up later attempts for the "one argument against an army" failure mode. Remember that the people you're trying to convince aren't perfect rationalists.

(And I'm not sure that convincing a few researchers would be an improvement, let alone a large one.)

Comment author: timtyler 01 April 2013 01:07:35AM 1 point

Also, efforts to persuade researchers to leave the field seem most likely to work on the most responsible ones, leaving the more reckless researchers to dominate it, which could weaken the norms of precaution-taking in the field overall.

Only if they buy the argument in the first place. Have any "synthetic biology" researchers ever been convinced by such arguments?

Comment author: John_Maxwell_IV 31 March 2013 11:55:56PM 1 point

Were there any relatively uninformed amateurs who played a role in convincing EY that AI friendliness was an issue?

Comment author: satt 01 April 2013 03:07:39PM *  4 points

OK, how about really easy stuff, like systematically emailing researchers involved with synthetic biology and trying to convince them to reconsider their choice of field? Feels like ideally we would get someone who knew stuff about biology (and ideally had some level of respect in the biology community) to do this.

Systematically emailing researchers runs the risk of being pattern matched to crank spam. If I were a respected biologist, a better plan might be to

  1. write a short (500-1500 words) editorial that communicates the strongest arguments with the least inferential distance, and sign it
  2. get other recognized scientists to sign it
  3. contact the editors of Science, Nature, and PNAS and ask whether they'd like to publish it
  4. if step 3 works, try to get an interview or segment on those journals' podcasts (all three have podcasts), and try putting out a press release
  5. if step 3 fails, try getting a more specific journal like Cell or Nature Genetics to publish it

Some of these steps could of course be expanded or reordered (for example, it might be quicker to get a less famous journal to publish an editorial, and then use that as a stepping stone into Science/Nature/PNAS). I'm also ignoring the possibility that synthetic biologists have already considered risks of their work, and would react badly to being nagged (however professionally) about it.

Edit: Martin Rees got an editorial into Science about catastrophic risk just a few weeks ago, which is minor evidence that this kind of approach can work.

Comment author: [deleted] 31 March 2013 11:44:16AM 3 points

OK, how about really easy stuff, like systematically emailing researchers involved with synthetic biology and trying to convince them to reconsider their choice of field?

That might convince a few on the margin, but I doubt it would convince the bulk of them -- especially not the most dangerous ones, I'd guess.

Comment author: [deleted] 31 March 2013 05:41:56PM 1 point

People like Bostrom and Martin Rees are certainly engaged in raising public awareness through the media. There's extensive lobbying on some risks, like global warming, nuclear weapons and asteroid defence. In relation to bio/nano/AI the most important thing to do at the moment is research - lobbying should wait until it's clearer what should be done. Although perhaps not - look at the mess over flu research.

Comment author: timtyler 01 April 2013 01:02:11AM *  0 points

It seems harder to think of the equivalent for preventing the accidental or deliberate release of synthetic diseases.

OK, how about really easy stuff, like systematically emailing researchers involved with synthetic biology and trying to convince them to reconsider their choice of field?

One of the last serious attempts to prevent large-scale memetic engineering was the Unabomber's.

The effort apparently failed - the memes have continued their march unabated.

Comment author: lukeprog 31 March 2013 06:32:03AM 7 points

It's worth doing but very hard. GCR is a first stab at this, but really it's going to take 20 times that amount of effort to make a first pass at the project you describe, and there just aren't that many researchers seriously trying to do this kind of thing. Even if CSER takes off and MIRI and FHI both expand their research programs, I'd expect it to be at least another decade before that much work has been done.

Comment author: John_Maxwell_IV 31 March 2013 11:08:51PM 2 points

It feels like more research on this issue would have the effect of gradually improving the clarity of the existential risk picture. Do you think the current picture is sufficiently unclear that most potential interventions might backfire? Given limited resources, perhaps the best path is to do targeted investigation of what appear to be the most promising interventions and stop as soon as one that seems highly unlikely to backfire is identified, or something like that.

What level of clarity is represented by a "first pass"?

Comment author: shminux 31 March 2013 12:59:26AM *  7 points

Any thoughts on whether it would be worth doing?

It would be worth doing, and has been done, to some degree.

make a big list of all the existential risks, then identify interventions for reducing every risk

There are a few steps missing in between, such as identifying the causes of the risks, rating them by likelihood and by their odds of wiping out the human species, etc.
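(Those missing steps could be sketched the same way: rate each risk by a guessed probability and severity before matching interventions to it. Again, every figure below is an illustrative placeholder, not an actual estimate:)

```python
# Rank risks by expected badness = P(event) * fraction of value lost
# if it occurs. All probabilities and severities are made-up
# placeholders for illustration only.

risks = {
    # name: (probability this century, severity as fraction of value lost)
    "engineered pandemic": (0.02, 0.9),
    "asteroid impact":     (0.0001, 1.0),
    "unfriendly AI":       (0.05, 1.0),
}

def expected_loss(entry):
    """Probability-weighted severity; higher means more worth attention."""
    probability, severity = entry
    return probability * severity

# Sort risks from largest to smallest expected loss.
ranked_risks = sorted(risks.items(), key=lambda kv: expected_loss(kv[1]),
                      reverse=True)
for name, (p, s) in ranked_risks:
    print(f"{name}: expected loss {p * s:.2e}")
```

The ranking is only as good as the likelihood and severity guesses feeding into it, which is exactly where the real work lies.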