Sarokrae comments on [LINK] Article in the Guardian about CSER, mentions MIRI and paperclip AI - Less Wrong
Comments (17)
Hi,
I'd be interested in LW's thoughts on this. I was quite involved in the piece, though I suggested to the journalist that it would be more appropriate to focus on the high-profile names involved. We've been lucky at FHI/Cambridge with a series of very sophisticated, tech-savvy journalists with whom the inferential distance has been very low (see e.g. Ross Andersen's Aeon/Atlantic pieces); this wasn't the case here, and although the journalist was conscientious and requested reading material beforehand, I found communicating these concepts more difficult than expected.
In my view the interview material turned out better than expected, given the clear inferential gap. I am less happy with the 'catastrophic scenarios' which I was asked for. The text I sent (which I circulated to FHI/CSER members) was distinctly less sensational, and contained a lot more qualifiers. E.g. for geoengineering I had: "Scientific consensus is against adopting it without in-depth study and broader societal involvement in the decisions made, but there may be very strong pressure to adopt once the impacts of climate change become more severe." And my pathogen modification example did not go nearly as far. While qualifiers can seem like unnecessary padding to editors, removing them can really change the tone of a piece. Similarly, in a pre-emptive line to ward off sensationalism, I included: "I hope you can make it clear these are 'worst-case possibilities that currently appear worthy of study' rather than 'high-likelihood events'. Each of these may only have e.g. a 1% likelihood of occurring. But in the same way an aeroplane passenger shouldn't accept a 1% possibility of a crash, society should not accept a 1% possibility of catastrophe. I see our role as (like airline safety analysts) figuring out which risks are plausible, and for those, working to reduce the 1% to 0.00001%." This was sort-of addressed, but not really.
That said, the basic premises - that a virus could be modified for greater infectivity and released by a malicious actor, the 'termination risk' of atmospheric aerosol geoengineering, the future capabilities of additive manufacturing for producing more dangerous weapons - are intact.
Re: 'paperclip maximiser'. I mentioned this briefly in conversation, after we'd struggled for a while with inferential gaps on AI (why we couldn't just outsmart something smarter than us, etc.), presenting it as a 'toy example' used in research papers on AI goals, meant to encapsulate the idea that seemingly harmless or trivial but poorly-thought-through goals can result in unforeseen and catastrophic consequences when paired with the kind of advanced resource utilisation and problem-solving ability a future AI might have. I didn't expect it to be taken as a literal doomsday concern - it wasn't in the text I sent - and to my mind it looks very silly in there, possibly deliberately so. However, I feel that Huw and Jaan's explanations were very good, and quite well presented.
We've been considering whether we should limit ourselves to media opportunities where we can write the material ourselves, or at least have the opportunity to view and edit the final material before publication. MIRI has significantly cut back on its media engagement, and this seems on the whole sensible (FHI is still doing a lot; some of it turns out very good, some not so good).
Lessons to take away: 1) This stuff can be really, really hard. 2) Getting used to very sophisticated, science/tech-savvy journalists and academics can leave you unprepared. 3) Things that are very reasonable with qualifiers can become very unreasonable if you remove the qualifiers - and editors often see the qualifiers as unnecessary verbosity (or want the piece to make stronger, more sensational claims).
Right now, I'm leaning fairly strongly towards 'ignore and let quietly slip away' (the Guardian has a small UK readership, so how much we 'push' this will probably make a difference), but I'd be interested in whether LW sees this as net positive or net negative on balance for public perception of existential risk. However, I'm open to updating. I asked a couple of friends unfamiliar with the area what their take-away impression was, and it was more positive than I'd anticipated.
I've read a fair number of x-risk related news pieces, and this was by far the most positive and non-sensationalist coverage that I've seen by someone who was neither a scientist nor involved with x-risk organisations.
The previous two articles I'd seen on the topic were about 30% Terminator references. This article, while not necessarily a 100% accurate account, at least takes the topic seriously.
Thanks, that's reassuring. I've mainly been concerned about a) just how silly the paperclip thing looks in the context it's been put in, and b) the tone, a bit - as one commenter on the article put it:
"I find the light tone of this piece - "Ha ha, those professors!" to be said with an amused shake of the head - most offensive. Mock all you like, but some of these dangers are real. I'm sure you'll be the first to squeal for the scientists to do something if one them came true. Price asks whether I have heard of the philosophical conundrum the Prisoner's Dilemma. I have not. Words fail me. Just what do you know then son? Once again, the Guardian sends a boy to do a man's job."
I wouldn't worry too much about the comments. Even Guardian readers don't hold the Guardian's online commentariat in very high esteem, and it's reader opinion, not commenter opinion, that matters most.
It seems like the most highly upvoted comments are pretty sane anyway!