Comment author: Douglas_Knight 07 March 2016 08:03:24PM 0 points [-]

Who runs Metaculus? Is it a reincarnation of an older organization?

It is some kind of prediction market. Is it a descendant of one of the teams in the IARPA prediction contest? It reminds me of Twardy and Hanson’s Scicast. Is it related? Or do they all look the same to me? The site mentions no names, but Angelbase lists some. Do they suggest some earlier incarnation?

Comment author: Sean_o_h 09 March 2016 03:42:57PM 0 points [-]

FLI's Anthony Aguirre is centrally involved or leading, AFAIK.

In response to NIPS 2015
Comment author: Sean_o_h 08 December 2015 09:25:23AM *  2 points [-]

Thanks for the initiative! I'll be there Thursday through Saturday (plus Sunday) for symposia and workshops, if anyone would like to chat (Sean O hEigeartaigh, CSER).

New Leverhulme Centre on the Future of AI (developed at CSER with spokes led by Bostrom, Russell, Shanahan)

17 Sean_o_h 03 December 2015 10:07AM

[Cross-posted at EA forum]

Hot on the heels of 80K's excellent AI risk research career profile (https://80000hours.org/career-guide/top-careers/profiles/artificial-intelligence-risk-research/), we're delighted to announce the funding of a new international Leverhulme Centre for the Future of Intelligence, to be led by Cambridge, with spokes at Oxford (Nick Bostrom), Imperial (Murray Shanahan), and Berkeley (Stuart Russell). The Centre proposal was developed by us at CSER, but it will be a stand-alone centre, albeit one collaborating extensively with CSER.

Building on the by-now-familiar "Puerto Rico Agenda", it will have the long-term safe and beneficial development of AI at its core, but with a slightly broader remit than CSER's focus on catastrophic AI risk and superintelligence. For example, it will consider some near-term challenges such as lethal autonomous weapons, as well as some of the longer-term philosophical and practical issues surrounding the opportunities and challenges we expect to face, should greater-than-human-level intelligence be developed later this century.

It builds on the pioneering work of FHI, FLI and others, and on the generous support of Elon Musk, whose (separate) $10M grants programme in January of this year massively boosted this field. One of the most important things this Centre will achieve is taking a big step towards making this global area of research a long-term one in which the best talents can expect to have lasting careers - the Centre is funded for a full 10 years, and we will aim to build longer-lasting funding on top of this.

In practical terms, it means that ~10 new postdoc positions at a minimum will be opening up in this space (we're currently pursuing matched funding opportunities) across academic disciplines and locations (Cambridge, Oxford, Berkeley, Imperial and elsewhere). Our first priority will be to identify and hire a world-class Executive Director, who would start in October. This will be a very influential position over the coming years. Research positions will most likely begin in April 2017.

In between now and then, FHI is hiring for AI safety researchers, and CSER will be hiring for an AI policy postdoc in the spring. I'll have limited time to post in between now and the Christmas break (I'll be away at NIPS and then occupied with funder deadlines and CSER recruitment), but will be happy to post more over the Christmas break if desired.

Thank you so much as always to the LessWrong and Effective Altruism communities for their support of existential risk/far future work, both financially and intellectually - it has made a huge difference over the last couple of years. Thanks in particular to MIRI's and FHI's researchers, from whom I received a lot of guidance in my part of co-developing this proposal.

Seán (Executive Director, CSER)

http://www.eurekalert.org/pub_releases/2015-12/uoc-cul120215.php

Human-level intelligence is familiar in biological 'hardware' -- it happens inside our skulls. Technology and science are now converging on a possible future where similar intelligence can be created in computers.

While it is hard to predict when this will happen, some researchers suggest that human-level AI will be created within this century. Freed of biological constraints, such machines might become much more intelligent than humans. What would this mean for us? Stuart Russell, a world-leading AI researcher at the University of California, Berkeley, and collaborator on the project, suggests that this would be "the biggest event in human history". Professor Stephen Hawking agrees, saying that "when it eventually does occur, it's likely to be either the best or worst thing ever to happen to humanity, so there's huge value in getting it right."

Now, thanks to an unprecedented £10 million grant from the Leverhulme Trust, the University of Cambridge is to establish a new interdisciplinary research centre, the Leverhulme Centre for the Future of Intelligence, to explore the opportunities and challenges of this potentially epoch-making technological development, both short and long term.

The Centre brings together computer scientists, philosophers, social scientists and others to examine the technical, practical and philosophical questions artificial intelligence raises for humanity in the coming century.

Huw Price, the Bertrand Russell Professor of Philosophy at Cambridge and Director of the Centre, said: "Machine intelligence will be one of the defining themes of our century, and the challenges of ensuring that we make good use of its opportunities are ones we all face together. At present, however, we have barely begun to consider its ramifications, good or bad".

The Centre is a response to the Leverhulme Trust's call for "bold, disruptive thinking, capable of creating a step-change in our understanding". The Trust awarded the grant to Cambridge for a proposal developed with the Executive Director of the University's Centre for the Study of Existential Risk (CSER), Dr Seán Ó hÉigeartaigh. CSER investigates emerging risks to humanity's future including climate change, disease, warfare and technological revolutions.

Dr Ó hÉigeartaigh said: "The Centre is intended to build on CSER's pioneering work on the risks posed by high-level AI and place those concerns in a broader context, looking at themes such as different kinds of intelligence, responsible development of technology and issues surrounding autonomous weapons and drones."

The Leverhulme Centre for the Future of Intelligence spans institutions, as well as disciplines. It is a collaboration led by the University of Cambridge with links to the Oxford Martin School at the University of Oxford, Imperial College London, and the University of California, Berkeley. It is supported by Cambridge's Centre for Research in the Arts, Social Sciences and Humanities (CRASSH). As Professor Price put it, "a proposal this ambitious, combining some of the best minds across four universities and many disciplines, could not have been achieved without CRASSH's vision and expertise."

Zoubin Ghahramani, Deputy Director, Professor of Information Engineering and a Fellow of St John's College, Cambridge, said: "The field of machine learning continues to advance at a tremendous pace, and machines can now achieve near-human abilities at many cognitive tasks -- from recognising images to translating between languages and driving cars. We need to understand where this is all leading, and ensure that research in machine intelligence continues to benefit humanity. The Leverhulme Centre for the Future of Intelligence will bring together researchers from a number of disciplines, from philosophers to social scientists, cognitive scientists and computer scientists, to help guide the future of this technology and study its implications."

The Centre aims to lead the global conversation about the opportunities and challenges to humanity that lie ahead in the future of AI. Professor Price said: "With far-sighted alumni such as Charles Babbage, Alan Turing, and Margaret Boden, Cambridge has an enviable record of leadership in this field, and I am delighted that it will be home to the new Leverhulme Centre."


Comment author: Sean_o_h 04 November 2015 10:45:27AM 0 points [-]

A quick reminder: our application deadline is a week from tomorrow (midday UK time) - so now would be a great time to apply if you were thinking of it, or to remind fellow researchers! Thanks so much, Seán.

Comment author: Sean_o_h 13 October 2015 11:18:37AM *  1 point [-]

A pre-emptive apology: I have a heavy deadline schedule over the next few weeks, so will answer questions when I can - please excuse any delays!

New positions and recent hires at the Centre for the Study of Existential Risk (Cambridge, UK)

9 Sean_o_h 13 October 2015 11:11AM

[Cross-posted from EA Forum. Summary: Four new postdoc positions at the Centre for the Study of Existential Risk: Evaluation of extreme technological risk (philosophy, economics); Extreme risk and the culture of science (philosophy of science); Responsible innovation and extreme technological risk (science & technology studies, sociology, policy, governance); and an academic project manager (cutting across the Centre’s research projects, and playing a central role in Centre development). Please help us to spread the word far and wide in the academic community!]

 

An inspiring first recruitment round

The Centre for the Study of Existential Risk (Cambridge, UK) has been making excellent progress in building up our research team. Our previous recruitment round was a great success, and we made three exceptional hires. Dr Shahar Avin joined us in September from Google, with a background in the philosophy of science (Cambridge, UK). He is currently fleshing out several potential research projects, which will be refined and finalised following a research visit to FHI later this month. Dr Yang Liu joined us this month from Columbia University, with a background in mathematical logic and philosophical decision theory. Yang will work on problems in decision theory that relate to long-term AI, and will help us to link the excellent work being done at MIRI with relevant expertise and talent within academia. In February 2016, we will be joined by Dr Bonnie Wintle from the Centre of Excellence for Biosecurity Risk Analysis (CEBRA), who will lead our horizon-scanning work in collaboration with Professor Bill Sutherland’s group at Cambridge; among other things, she has worked on IARPA-funded development of automated horizon-scanning tools, and has been involved in the Good Judgement Project.

We are very grateful for the help of the existential risk and EA communities in spreading the word about these positions, and helping us to secure an exceptionally strong field. Additionally, I have now moved on from FHI to be CSER’s full-time Executive Director, and Huw Price is now 50% funded as CSER’s Academic Director (we share him with Cambridge’s Philosophy Faculty, where he remains Bertrand Russell Chair of Philosophy).

Four new positions:

We’re delighted to announce four new positions at the Centre for the Study of Existential Risk; details below. Unlike the previous round, where we invited project proposals from across our areas of interest, in this case we have several specific positions that we need to fill for our three-year Managing Extreme Technological Risk project, funded by the Templeton World Charity Foundation. As we are building up our academic brand within a traditional university, we expect to predominantly hire from academia, i.e. academic researchers with (or near to the completion of) PhDs. However, we are open to hiring excellent candidates without PhDs but with an equivalent and relevant level of expertise, for example gained in think tanks, policy settings or industry.

Three of these positions are in the standard academic postdoc mould, working on specific research projects. I’d like to draw attention to the fourth, the academic project manager. For this position, we are looking for someone with the intellectual versatility to engage across our research strands – someone who can coordinate these projects, synthesise and present our research to a range of audiences including funders, collaborators, policymakers and industry contacts. Additionally, this person will play a key role in developing the centre over the next two years, working with our postdocs and professorial advisors to secure funding, and contributing to our research, media, and policy strategy among other things. I’ve been interviewed in the past (https://80000hours.org/2013/02/bringing-it-all-together-high-impact-research-management/) about the importance of roles of this nature; right now I see it as our biggest bottleneck, and a position in which an ambitious person could make a huge difference.

We need your help – again!

In some ways, CSER has been the quietest of the existential risk organisations of late – we’ve mainly been establishing research connections, running lectures and seminars, writing research grants and building relations with policymakers (plus some behind-the-scenes involvement with various projects). But we’ve been quite successful in these things, and now face an exciting but daunting level of growth: by next year we aim to have a team of 9-10 postdoctoral researchers here at Cambridge, plus senior professors and other staff. It’s very important that we continue our momentum by getting world-class researchers motivated to do work of the highest impact. Reaching out and finding these people is quite a challenge, especially given our still-small team. So the help of the existential risk and EA communities in spreading the word – on your Facebook feeds, on relevant mailing lists in your universities, passing these positions on to talented people you know – will make a huge difference to us.

Thank you so much!

Seán Ó hÉigeartaigh (Executive Director, CSER)

 

“The Centre for the Study of Existential Risk is delighted to announce four new postdoctoral positions for the subprojects below, to begin in January 2016 or as soon as possible afterwards. The research associates will join a growing team of researchers developing a general methodology for the management of extreme technological risk.

Evaluation of extreme technological risk will examine issues such as:

The use and limitations of approaches such as cost-benefit analysis when evaluating extreme technological risk; the importance of mitigating extreme technological risk compared to other global priorities; issues in population ethics as they relate to future generations; challenges associated with evaluating small probabilities of large payoffs; challenges associated with moral and evaluative uncertainty as they relate to the long-term future of humanity. Relevant disciplines include philosophy and economics, although suitable candidates outside these fields are welcomed. More: Evaluation of extreme technological risk

Extreme risk and the culture of science will explore the hypothesis that the culture of science is in some ways ill-adapted to successful long-term management of extreme technological risk, and investigate the option of ‘tweaking’ scientific practice, so as to improve its suitability for this special task. It will examine topics including inductive risk, use and limitations of the precautionary principle, and the case for scientific pluralism and ‘breakout thinking’ where extreme technological risk is concerned. Relevant disciplines include philosophy of science and science and technology studies, although suitable candidates outside these fields are welcomed. More: Extreme risk and the culture of science;

Responsible innovation and extreme technological risk asks what can be done to encourage risk-awareness and societal responsibility, without discouraging innovation, within the communities developing future technologies with transformative potential. What can be learned from historical examples of technology governance and culture-development? What are the roles of different forms of regulation in the development of transformative technologies with risk potential? Relevant disciplines include science and technology studies, geography, sociology, governance, philosophy of science, plus relevant technological fields (e.g., AI, biotechnology, geoengineering), although suitable candidates outside these fields are welcomed. More: Responsible innovation and extreme technological risk

We are also seeking to appoint an academic project manager, who will play a central role in developing CSER into a world-class research centre. We seek an ambitious candidate with initiative and a broad intellectual range for a postdoctoral role combining academic and administrative responsibilities. The Academic Project Manager will co-ordinate and develop CSER’s projects and the Centre’s overall profile, and build and maintain collaborations with academic centres, industry leaders and policy makers in the UK and worldwide. This is a unique opportunity to play a formative research development role in the establishment of a world-class centre. More: CSER Academic Project Manager

Candidates will normally have a PhD in a relevant field or an equivalent level of experience and accomplishment (for example, in a policy, industry, or think tank setting). Application Deadline: Midday (12:00) on November 12th 2015.”

Comment author: Sean_o_h 24 July 2015 08:11:51AM 3 points [-]

"The easiest and the most trivial is to create a subagent, and transfer their resources and abilities to it ("create a subagent" is a generic way to get around most restriction ideas)." That is, after all, how we humans are planning to get around our self-modification limitations in creating AI ;)

Comment author: lukeprog 02 July 2015 07:08:14AM 4 points [-]

For those who haven't been around as long as Wei Dai…

Eliezer tells the story of coming around to a more Bostromian view, circa 2003, in his coming of age sequence.

Comment author: Sean_o_h 02 July 2015 11:14:53AM 10 points [-]

In turn Nick, for his part, very regularly and explicitly credits the role that Eliezer's work and discussions with Eliezer have played in his own research and thinking over the course of the FHI's work on AI safety.

Comment author: John_Maxwell_IV 11 June 2015 02:16:55PM *  1 point [-]

I was interested to read Nick Beckstead write that x-risk reduction jobs are "very competitive". Do you guys want to share how pleased you were about the set of applicants you received for these jobs? And what strategies worked best for advertising them? (Interesting because: I'm curious whether x-risk reduction is more capital or talent-limited, and also how well the x-risk reduction movement is communicating internally.)

Comment author: Sean_o_h 15 June 2015 10:09:50AM *  7 points [-]

A few comments. I was working with Nick when he wrote that, and I fully endorsed it as advice at the time. Since then, the x-risk funding situation - and the number of locations at which you can do good work - has improved dramatically. It would be worth checking with him how he feels now. My view is that jobs are certainly still competitive, though.

In that piece he wrote "I find the idea of doing technical research in AI or synthetic biology while thinking about x-risk/GCR promising." I also strongly endorse this line of thinking. My view is that in addition to centres specifically doing Xrisk, having people who are Xrisk-motivated working in all the standard mainstream fields that are relevant to Xrisk would be a big win. Not just AI or synthetic biology (although obviously directly valuable here) - I'd include areas like governance, international relations, science & technology studies, and so on. There will come a point (in my view) when having these concerns diffusing across a range of fields and geographic locations will be more important than increasing the size of dedicated thought bubbles at e.g. Oxford.

"Do you guys want to share how pleased you were about the set of applicants you received for these jobs?" I can't say too much about this, because the hires are not yet finalised, but yes, pleased. The hires we made are stellar. There were a number of people not hired whom at most times I would have thought to be excellent, but for various reasons the panel didn't think they were right at this time. You will understand if I can't say more about this (and my very sincere apologies to everyone I can't give individual feedback to - I'm carrying a very heavy workload at the moment with minimal support).

That said, I wouldn't be willing to stand up and say x-risk reduction is not talent-limited, as I don't think there's enough data for that. Our field of applicants was large, and top talent was deep enough on this occasion, but it could have been deeper. Both CSER and FHI have more hires coming up, so that will deplete the talent pool further.

Another consideration: I do feel that many of the most brilliant people the X-risk field needs are out there already, finishing their PhDs in relevant areas but not currently part of the field. I think organisations like ours need to make hard efforts to reach out to these people.

Recruitment strategies: Reaching out through our advisors' networks. Standard academic jobs hiring boards, emails to the top 10-20 departments in the most relevant fields. Getting in touch with members of different x-risk organisations and asking them to spread the word through their networks. Posting online in various x-risk/ea-related places. I also got in touch with a large range of the smaller, more specific centres (and authors) producing the best work outside of the x-risk community - e.g. in risk, foresight, horizon-scanning, security, international relations, DURC, STS and so on, asked them for recommendations and to distribute it among their network. And I iterated a few times through the contacts I made this way. E.g. I got in touch with Tetlock and others on expertise elicitation & aggregation, who put me in touch with people at the Good Judgement Project and others, who put me in touch with other centres. Eventually got some very good applicants in this space, including one from Australia's Centre of Excellence for Biosecurity Risk Analysis, whose director I was put in touch with through this method but hadn't heard of previously.

This was all very labour-intensive, and I expect I won't have time to recruit so heavily in future. But I hope going forward we will have a bigger academic footprint. I also had tremendous help from a number of people in the x-risk community, including Ryan Carey, Seth Baum, and FHI folks, to whom I'm very grateful. Also, a huge thanks to Scott Alexander for plugging our positions on his excellent blog!

I think our top 10 came pretty evenly split between "xrisk community", "standard academic jobs posting boards/university department emails" and "outreach to more specific non-xrisk networks". I think all our hires are new introductions to existential risk, which is encouraging.

Re: communicating internally, I think we're doing pretty well. E.g. on recruitment, I've been communicating pretty closely with FHI, as they have positions to fill too at present and coming up, and will recommend that some excellent people who applied to us apply to them. (Note that this isn't always just about quality - we have both had excellent applicants who weren't quite a fit at one organisation at this time, but would be a top prospect at the other, going in both directions.)

More generally, internal communication within x-risk has been good in my view - project managers and researchers at FHI, MIRI and other orgs make a point of holding regular meetings with the other organisations. This has made up a decent chunk of my time too over the past couple of years and has been very important, although I'm likely to have to cut back personally for a couple of years due to an increasing Cambridge-internal workload (early days of a new, unusual centre in an old traditional university). I expect our researchers will play an important role in communicating between centres, however.

One further apology: I don't expect to have much time to comment/post on LW going forward, so I apologise that I won't always be able to reply to qs like this. But I'm very grateful for all the useful support, advice and research input I've received from LW members over the years.

Comment author: So8res 05 April 2015 04:30:41PM *  20 points [-]

Update, ~1 year later: I am a full-time MIRI research fellow now, and it's been one hell of a year.

I've maintained my high productivity consistently since last year. I wrote twelve papers over the course of the year, nine as the primary author, three as a secondary author. I compiled the MIRI technical agenda and the MIRI research guide. I attended five conferences, and I've flown around the world to talk with many different people about related topics. I've learned a ton.

Public discourse about AI x-risk has advanced far faster than I expected, thanks in large part to Bostrom's Superintelligence and the Future of Life Institute. The field is growing much faster than expected. These are exciting times, and I'm grateful that I was granted the opportunity to throw myself into the thick of things.

Comment author: Sean_o_h 08 April 2015 10:51:24AM 4 points [-]

Nine single-author research papers is extremely impressive! Well done.
