Skeptityke comments on [LINK] Article in the Guardian about CSER, mentions MIRI and paperclip AI - Less Wrong

Post author: Sarokrae 30 August 2014 02:04PM




Comment author: Sean_o_h 30 August 2014 03:16:57PM 15 points

Hi,

I'd be interested in LW's thoughts on this. I was quite involved in the piece, though I suggested to the journalist it would be more appropriate to focus on the high-profile names involved. We've been lucky at FHI/Cambridge with a series of very sophisticated, tech-savvy journalists with whom the inferential distance has been very low (see e.g. Ross Andersen's Aeon/Atlantic pieces); this wasn't the case here, and although the journalist was conscientious and requested reading material beforehand, I found communicating these concepts more difficult than expected.

In my view the interview material turned out better than expected, given the clear inferential gap. I am less happy with the 'catastrophic scenarios' which I was asked for. The text I sent (which I circulated to FHI/CSER members) was distinctly less sensational, and contained a lot more qualifiers. E.g. for geoengineering I had: "Scientific consensus is against adopting it without in-depth study and broader societal involvement in the decisions made, but there may be very strong pressure to adopt once the impacts of climate change become more severe." And my pathogen modification example did not go nearly as far. While qualifiers can seem like unnecessary padding to editors, they can really change the tone of a piece. Similarly, in a pre-emptive line to ward off sensationalism, I included: "I hope you can make it clear these are 'worst case possibilities that currently appear worthy of study' rather than 'high-likelihood events'. Each of these may only have e.g. a 1% likelihood of occurring. But in the same way an aeroplane passenger shouldn't accept a 1% possibility of a crash, society should not accept a 1% possibility of catastrophe. I see our role as (like airline safety analysts) figuring out which risks are plausible, and for those, working to reduce the 1% to 0.00001%"; this was sort-of-addressed, but not really.

That said, the basic premises - that a virus could be modified for greater infectivity and released by a malicious actor, 'termination risk' for atmospheric aerosol geoengineering, future capabilities of additive manufacturing for more dangerous weapons - are intact.

Re: 'paperclip maximiser'. I mentioned this briefly in conversation, after we'd struggled for a while with inferential gaps on AI (and why we couldn't just outsmart something smarter than us, etc.), presenting it as a 'toy example' used in research papers on AI goals, meant to encapsulate the idea that seemingly harmless or trivial but poorly-thought-through goals can result in unforeseen and catastrophic consequences when paired with the kind of advanced resource utilisation and problem-solving ability a future AI might have. I didn't expect it to be taken as a literal doomsday concern - it wasn't presented that way in the text I sent - and to my mind it looks very silly in there, possibly deliberately so. However, I feel that Huw and Jaan's explanations were very good, and quite well presented.

We've been considering whether we should limit ourselves to media opportunities where we can write the material ourselves, or have the opportunity to view and edit the final material before publication. MIRI has significantly cut back on its media engagement, and this seems on the whole sensible (FHI is still doing a lot; some of it turns out very good, some not so good).

Lessons to take away: 1) This stuff can be really, really hard. 2) Getting used to very sophisticated, science/tech-savvy journalists and academics can leave you unprepared. 3) Things that are very reasonable with qualifiers can become very unreasonable if you remove the qualifiers - and editors often just see the qualifiers as unnecessary verbosity (or want the piece to make stronger, more sensational claims).

Right now, I'm leaning fairly strongly towards 'ignore and let it quietly slip away' (the Guardian has a small UK readership, so how much we 'push' this will probably make a difference), but I'd be interested in whether LW sees this as, on balance, a net positive or net negative for public awareness of existential risk. However, I'm open to updating. I asked a couple of friends unfamiliar with the area what their takeaway impression was, and it was more positive than I'd anticipated.

Comment author: Skeptityke 30 August 2014 03:44:30PM 15 points

I'd call it a net positive. Along the axis from "accept all interviews, and wind up in some spectacularly abysmal pieces of journalism" to "only allow journalism that you've viewed and edited" - the quantity-vs-quality tradeoff - I suspect the best place to be is the one where the writers who already know what they're going to say in advance are filtered out, and the ones who make an actual effort to understand and summarize your position (even if somewhat incompetently) are engaged.

I don't think the saying "any publicity is good publicity" is true, but "shoddy publicity pointing in the right direction" might be.

I wonder how feasible it is to gauge journalist quality by reading past articles. Maybe ask people who have been interviewed by the person in the past how it went?

Comment author: Sean_o_h 30 August 2014 03:52:15PM 7 points

Thanks. Re: your last line, quite a bit of this is possible: we've been building up a list of "safe hands" journalists at FHI for the last couple of years, and as a result, our publicity has improved while the variance in quality has decreased.

In this instance, we (CSER) were positively disposed towards the newspaper as a fairly progressive one with which some of our people had had a good set of previous interactions. I was further encouraged by the journalist's request for background reading material. I think there was just a bit of a mismatch: they sent a guy who was anti-technology in a "social media is destroying good societal values" sort of way to talk to people who are concerned about catastrophic risks from technology (I can see how this might have made sense to an editor).