
(Disclaimer: I am one of the coauthors.) Also, none of the linked comments by the coauthors actually praises the paper as "good and thoughtful". They all say much the same thing: "pleased to have contributed", plus a nice comment about the lead author (a fairly early-career scholar who did a great deal of the work and was good to work with). I called the paper "timely", as the topic of open-sourcing was very much live at the time.

(FWIW, I think this post has valid criticism re: the quality of the biorisk literature cited and the strength with which the case was conveyed; and I think this kind of criticism is very valuable and I'm glad to see it).

This is super awesome. Thank you for doing this.

Johnson was perhaps below average in his application to his studies, but it would be a mistake to think he is, or was, a pupil of below-average intelligence.

I can imagine DM deciding that some very applied department is going to be discontinued, like healthcare, or something else kinda flashy.

With Mustafa Suleyman, the cofounder most focused on applied work (and the lead of DeepMind Applied), leaving for Google, this seems like quite a plausible prediction. A refocusing on being primarily a research company, with fewer applied staff (an area that can absorb a lot of headcount), could therefore produce a 20% reduction in staff without providing much evidence either way (and is probably not what Robin had in mind). A reduction in research staff, on the other hand, would be very interesting.

(Cross-posted to the EA forum). (Disclosure: I am executive director of CSER) Thanks again for a wide-ranging and helpful review; this represents a huge undertaking of work and is a tremendous service to the community. For the purpose of completeness, I include below 14 additional publications authored or co-authored by CSER researchers for the relevant time period not covered above (and one that falls just outside but was not previously featured):

Global catastrophic risk:

Ó hÉigeartaigh. The State of Research in Existential Risk

Avin, Wintle, Weitzdorfer, O hEigeartaigh, Sutherland, Rees (all CSER). Classifying Global Catastrophic Risks

International governance and disaster governance:

Rhodes. Risks and Risk Management in Systems of International Governance.

Biorisk/bio-foresight:

Rhodes. Scientific freedom and responsibility in a biosecurity context.

Just missing the cutoff for this review, but not included last year and so possibly of interest, is our bioengineering horizon scan (published November 2017): Wintle et al (incl Rhodes, O hEigeartaigh, Sutherland). Point of View: A transatlantic perspective on 20 emerging issues in biological engineering.

Biodiversity loss risk:

Amano (CSER), Szekely… & Sutherland. Successful conservation of global waterbird populations depends on effective governance (Nature publication)

CSER researchers as coauthors:

(Environment) Balmford, Amano (CSER) et al. The environmental costs and benefits of high-yield farming

(Intelligence/AI) Bhatagnar et al (incl Avin, O hEigeartaigh, Price): Mapping Intelligence: Requirements and Possibilities

(Disaster governance): Horhager and Weitzdorfer (CSER): From Natural Hazard to Man-Made Disaster: The Protection of Disaster Victims in China and Japan

(AI) Martinez-Plumed, Avin (CSER), Brundage, Dafoe, O hEigeartaigh (CSER), Hernandez-Orallo: Accounting for the Neglected Dimensions of AI Progress

(Foresight/expert elicitation) Hanea… & Wintle. The Value of Performance Weights and Discussion in Aggregated Expert Judgments

(Intelligence) Logan, Avin et al (incl Adrian Currie): Uncovering the Neural Correlates of Behavioral and Cognitive Specialization

(Intelligence) Montgomery, Currie et al (incl Avin). Ingredients for Understanding Brain and Behavioral Evolution: Ecology, Phylogeny, and Mechanism

(Biodiversity) Baynham Herdt, Amano (CSER), Sutherland (CSER), Donald. Governance explains variation in national responses to the biodiversity crisis

(Biodiversity) Evans et al (incl Amano). Does governance play a role in the distribution of invasive alien species?

Outside of the scope of the review, we produced on request a number of policy briefs for the United Kingdom House of Lords on future AI impacts; horizon-scanning and foresight in AI; and AI safety and existential risk, as well as a policy brief on the bioengineering horizon scan. Reports/papers from our 2018 workshops (on emerging risks in nuclear security relating to cyber; nuclear error and terror; and epistemic security) and our 2018 conference will be released in 2019.

Thanks again!

It is possible they had timing issues, whereby a substantial amount of work was done in earlier years but only released more recently. In any case, they have published more in 2018 than in previous years.

(Disclosure: I am executive director of CSER) Yes. As I described in relation to last year's review, CSER's first postdoc started in autumn 2015, and most started in mid-2016. The first rounds of research and papers began being completed throughout 2017, with most papers then going to peer-reviewed journals. 2018 is more indicative of our run-rate output, although 2019 will be higher.

Throughout 2016-2017, considerable CSER leadership time (mine in particular) also went into getting http://lcfi.ac.uk/ up and running, which will increase our output on AI safety/strategy/governance (although CFI also separately works on near-term AI and on non-safety-related topics).

Thank you for another detailed review! (response cross-posted to EA forum too)

And several more of us were at the workshop at the Hague meeting that worked on and endorsed this section: Anders Sandberg (FHI), and Huw Price and myself (CSER). But regardless, the important thing is that a good section on long-term AI safety showed up in a major IEEE output - otherwise I'm confident it would have been terrible ;)

FLI's Anthony Aguirre is centrally involved, or leading, AFAIK.

Thanks for the initiative! I'll be there Thursday through Saturday (plus Sunday) for symposia and workshops, if anyone would like to chat (Sean O hEigeartaigh, CSER).
