Just announcing, for those interested, that Seth Baum of the Global Catastrophic Risk Institute (GCRI) will be coming to the Effective Altruism Forum to answer a wide range of questions (like a Reddit "Ask Me Anything") next week, at 7pm US ET on Tuesday, March 3.

Seth is an interesting case - more of a 'mere mortal' than Bostrom and Yudkowsky. (Clarification: his background is more standard, and he's probably easier to emulate!) He has a PhD in geography, and had come to a maximising consequentialist view in which GCR-reduction is overwhelmingly important. So three years ago, with risk analyst Tony Barrett, he cofounded the Global Catastrophic Risk Institute - one of the handful of places working on these particularly important problems. Since then, it has done some academic outreach and covered issues like double catastrophe and recovery from catastrophe, bioengineering, food security, and AI.

Just last week, they updated their strategy and issued the following announcement:

Dear friends,

I am delighted to announce important changes in GCRI’s identity and direction. GCRI is now just over three years old. In these years we have learned a lot about how we can best contribute to the issue of global catastrophic risk. Initially, GCRI aimed to lead a large global catastrophic risk community while also performing original research. This aim is captured in GCRI’s original mission statement, to help mobilize the world’s intellectual and professional resources to meet humanity’s gravest threats.

Our community building has been successful, but our research has simply gone farther. Our research has been published in leading academic journals. It has taken us around the world for important talks. And it has helped us publish in the popular media. GCRI will increasingly focus on in-house research.

Our research will also be increasingly focused, as will our other activities. The single most important GCR research question is: What are the best ways to reduce the risk of global catastrophe? To that end, GCRI is launching a GCR Integrated Assessment as our new flagship project. The Integrated Assessment puts all the GCRs into one integrated study in order to assess the best ways of reducing the risk. And we are changing our mission statement accordingly, to develop the best ways to confront humanity’s gravest threats.

So 7pm ET on Tuesday, March 3 is the time to come online and post your questions on any topic you like; Seth will remain online until at least 9pm to answer as many as he can. Questions left in the comments here can also be ported across.

On the topic of risk organisations, I'll also mention that i) video is now available from CSER's recent seminar, in which Marc Lipsitch and Derek Smith discussed potentially pandemic pathogens, and ii) I'm helping Sean write up an update on CSER's progress for LessWrong and effective altruists, which will go online soon.


Seth is a very smart, formidably well-informed and careful thinker - I'd highly recommend jumping on the opportunity to ask him questions.

His latest piece in the Bulletin of the Atomic Scientists is worth a read too. It's on the "Stop Killer Robots" campaign. He agrees with the view of Stuart Russell (and others) that autonomous weapons are a bad road to go down, and he also presents the campaign as a test case for existential-risk advocacy - a pre-emptive ban on a dangerous future technology:

"However, the most important aspect of the Campaign to Stop Killer Robots is the precedent it sets as a forward-looking effort to protect humanity from emerging technologies that could permanently end civilization or cause human extinction. Developments in biotechnology, geoengineering, and artificial intelligence, among other areas, could be so harmful that responding may not be an option. The campaign against fully autonomous weapons is a test-case, a warm-up. Humanity must get good at proactively protecting itself from new weapon technologies, because we react to them at our own peril."

http://thebulletin.org/stopping-killer-robots-and-other-future-threats8012

Seth is an interesting case - more of a 'mere mortal' than Bostrom and Yudkowsky.

Unbelievable.

This comment confuses me.

Ok, let me see if I can help.

Aside from the fact that this was an incredibly rude, cultish thing to say, utterly lacking in collegiality, how do we even judge who a 'mere mortal' is here? Do we compare CVs? Citation rank? A tingly sense of impressiveness you get when in the same room?


Maybe people should find a better hobby than ordering other people from best to worst. Yes, I know this hobby stirs something deep in our social monkey hearts.

For the record, I was trying to reference the cultishness rather than promote it - Bostrom has a history of taking on multiple divergent and technical subjects and overstacking his curriculum since his university days, and Eliezer is a child-futurist who tried to program superhuman AI. As LessWrong readers know, both have bizarre stories that are not very relatable to most people. If people want to think about what their life might be like in global risk reduction research, Seth might be a better person to look at.

I didn't register that the wording crossed the line from cheeky to rude.

But it was a non-public-facing post, not in main, not mean-spirited, not actually intended to rank people by goodness (this is your interpretation not mine), and the section has since been amended!

Even if I do write something that's truly abysmal (I'm sure I have before), it's hard to respond to feedback that is incredulous and non-constructive.

This was a poorly phrased line, and it is helpful to point that out. While I can't and shouldn't speak for the OP, I'm confident that the OP didn't mean it in an "ordering people from best to worst" way, especially knowing the tremendous respect that people working and volunteering in X-risk have for Seth himself and for GCRI's work. I would note that the entire point of this post (and the AMA the OP has organised) was to highlight GCRI's excellent work and bring it to the attention of more people in the community. However, I can also see how the line might be taken to mean things it wasn't at all intended to mean.

Hence, I'd like to take this opportunity to appeal for a charitable reading of posts of this nature - ones that are clearly intended to promote and highlight good work in LW's areas of interest - especially in "within community" spaces like this. One of the really inspiring things about working in this area is the number of people putting in great work and long hours alongside their full-time commitments - like Ryan and many others. And those working full-time in X-risk/EA often put in far in excess of standard hours. This sometimes means that people are working under a lot of time pressure or fatigue, and phrase things badly (or don't recognise that something could easily be misread). That may or may not be the case here, but I know it's a concern I often have about my own engagements, especially when I've gone past the '12 hours in the office' stage.

With that said, please do tell us when it looks like we're expressing things badly, or in a way that might be taken to be less than positive. It's a tremendously helpful learning experience about the mistakes we can make in how we write (particularly in cases where people might be tired/under pressure and thus less attentive to such things).

Hence, I'd like to take this opportunity to appeal for a charitable reading of posts of this nature

Duly noted, thanks. This kind of tone deafness seems to be a pattern here in the LW-sphere, however. For instance, look at this:

http://lesswrong.com/lw/lco/could_you_be_prof_nick_bostroms_sidekick/

If funding were available, the Centre for Effective Altruism would consider hiring someone to work closely with Prof Nick Bostrom to provide anything and everything he needs to be more productive.

Really?


An appeal to charity in the reading of "public-facing, external communication" is a little odd. Public-facing means you can't beg off on social incompetence, being overworked, etc. You have to convince the public of something, and they don't owe you charity in how they read your message. They will retreat to their prejudices and gut instincts right away. It is in the job description of public-facing communication to deal with this.

This is reasonable.

Cool, that clears it up, thanks!

(I got that you were being sarcastic, but I wasn't clear on which possible sucky thing you were disapproving of.)

Seth has put up his post, where you can ask questions now - http://effective-altruism.com/ea/fv/i_am_seth_baum_ama/ - and will be online in a few short hours!