Comment author: sixes_and_sevens 02 April 2015 11:59:57AM *  7 points [-]

Tasker is an Android app that lets you specify "contexts" (specific states of the phone), and carry out actions depending on these contexts. An example use-case might be something like "when I am connected to my home WiFi network, disable my screen lock".

One of the actions available under Tasker is "Run Shell", which lets you issue shell commands to the underlying operating system. To achieve your desired effect, you could:

  • Acquire Tasker (a few dollars)
  • Set it up to run with root privileges
  • Set a context of "between 11pm and 6am"
  • Set an action of the shell command "su -c shutdown -h now" (or something similar) to run under that context

This does seem quite hazardous, though. If an emergency happened at 3am, I'm pretty sure I'd want my phone easily available and usable.

ETA: I just Googled to see if there was an existing recipe for this. It turns out Android doesn't have a conventional shutdown terminal command, but does have the "reboot" command, with the switch -p for powering down. Tasker also has a "reboot" under System->Misc, with a power-down option on rooted phones. This can absolutely do what you want it to do. Just don't go having any emergencies between 11 and 6.
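If it helps, the logic of the recipe can be sketched in plain shell. This is only an illustration: the `in_window` helper is hypothetical (Tasker's time context does this scheduling for you), and the actual power-down command requires a rooted device.

```shell
# Illustrative sketch of the Tasker recipe in plain shell.
# in_window succeeds (exit 0) when the hour falls in the 23:00-06:00 window,
# mirroring Tasker's "between 11pm and 6am" context.
in_window() {
  hour=$1
  [ "$hour" -ge 23 ] || [ "$hour" -lt 6 ]
}

if in_window "$(date +%H)"; then
  echo "within shutdown window"
  # On a rooted device, Tasker's Run Shell action would execute:
  #   su -c 'reboot -p'
fi
```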

Comment author: Sean_o_h 02 April 2015 12:33:36PM *  4 points [-]

This does seem quite hazardous, though. If an emergency happened at 3am, I'm pretty sure I'd want my phone easily available and usable.

I was going to say this too, it's a good point. Potential fix: have a cheap non-smartphone on standby at home.

Comment author: leplen 27 March 2015 12:52:38AM *  10 points [-]

Candidates should have a PhD in a relevant field

I'm really curious as to what constitutes a relevant field. The 3 people you list are an economist, a conservation biologist, and someone with a doctorate in geography. Presumably those are relevant fields, but I don't know what they have in common exactly.

I don't know what to think about this. You're new and you have sort of unconventional funding and a really broad mission statement. I'm not really sure what sort of research you're looking for or what journals it would be published in. I can't tell how much of this is science and how much of this is economics or political science and your institute is under the umbrella of the Arts and Humanities Research Center. What sorts of positions do you envision your post-doctoral fellows taking two years down the road?

This is definitely interesting, but I'm not sure that I have any actual idea who you're looking for and having read your website and downloaded the job listing and read the bios of the people involved, I'm still not really sure. I can't figure out whether this seems sort of vague and confusing because it isn't directed at me or because you're still sort of figuring out the shape of the group yourself.

Comment author: Sean_o_h 29 March 2015 03:42:31PM 10 points [-]

Leplen, thank you for your comments, and for taking the time to articulate a number of the challenges associated with interdisciplinary research – and in particular, setting up a new interdisciplinary research centre in a subfield (global catastrophic and existential risk) that is in itself quite young and still taking shape. While we don’t have definitive answers to everything you raise, they are things we are thinking a lot about, and seeking a lot of advice on. While there will be some trial and error, given the quality and pooled experience of the academics most involved I’m confident that things will work out well.

Firstly, re: your first post, a few words from our Academic Director and co-founder Huw Price (who doesn’t have a LW account).

“Thanks for your questions! What the three people mentioned have in common is that they are all interested in applying their expertise to the challenges of managing extreme risks arising from new technologies. That's CSER's goal, and we're looking for brilliant early-career researchers interested in working on these issues, with their own ideas about how their skills are relevant. We don't want to try to list all the possible fields these people might come from, because we know that some of you will have ideas we haven't thought of yet. The study of technological xrisk is a new interdisciplinary subfield, still taking shape. We're looking for brilliant and committed people, to help us design it.

We expect that the people we appoint will publish mainly in the journals of their home field, thus helping to raise awareness of these important issues within those fields – but there will also be opportunities for inter-field collaborations, so you may find yourself publishing in places you wouldn't have expected. We anticipate that most of our postdocs will go on to distinguished careers in their home fields, too, though hopefully in a way which maintains their links with the interdisciplinary xrisk community. We anticipate that there will also be some opportunities for more specialised career paths, as the field and funding expand.”

A few words of my own to expand: as you and Ryan have discussed, we have a number of specific, quite well-defined subprojects that we have secured grant funding for (two more will be announced later on). But we are also in the lucky position of having some more unconstrained postdoctoral position funding – and now, as Huw says, seems like an opportune time to see what people, and ideas, are out there, and what we haven’t considered. Future calls are likely to be a lot more constrained – as the centre’s ongoing projects and goals get more locked in, and as we need to hire very specific people to work on specific grants.

Some disciplines seem very obviously relevant to me – e.g. if the existential risk community is to do work on AI, synthetic biology, pandemic risk, and geoengineering, it needs people with qualifications in CS/math, biology/informatics, epidemiology, and climate modelling/physics. Disciplines relevant to risk modelling and assessment seem obvious, as do science & technology studies, philosophy of science, and policy/governance. In aiming to develop implementable strategies for safe technology development and x-risk reduction, economics, law and international relations seem like fields that might produce people with necessary insights. Some are a little less clear-cut: insights into horizon-scanning and foresight/technological prediction could come from a range of areas. And I’m sure there are disciplines we are simply missing.

Obviously we can’t hire people with all of these backgrounds now (although, over the course of the centre, we would aim to have all these disciplines pass through and make their mark). But we don’t necessarily need to; we have enough strong academic connections that we will usually be able to provide relevant advisors and collaborators to complement what we have ‘in house’. E.g. if a policy/law-background person seems like an excellent fit for biosecurity work or biotech policy/regulation, we would aim to make sure there’s both a senior person in policy/law to provide guidance, and collaborators in biology to make sure the science is there. And vice versa.

With all that said, from my time at FHI and CSER, a lot of the biggest progress and ideas have come from people whose backgrounds might not have immediately seemed relevant to x-risk, at least to me – cosmologists, philosophers, neuroscientists. We want to make sure we get the people, and the ideas, wherever they may be.

With regards to your second post:

You again raise good questions. For the people who don’t fall squarely into the ‘shovel-ready’ projects (although the majority of our hires this year will), I expect we will set up senior support structures on a case by case basis depending on what the project/person needs.

One model is co-supervision, or supervisor plus advisor. For one example, last year I worked with a CSER postdoctoral candidate on a grant proposal for a postdoc project that would have taken in both technical modelling/assessment of extreme risks from sulphate aerosol geoengineering and the broader socio/policy challenges the postdoc wanted to explore. We felt we had the in-house expertise for the latter but not the former. We set up an arrangement whereby he would be advised by a climate specialist in this area, and spend a period of the postdoc with the specialist’s group in Germany. (The proposal was unfortunately unsuccessful with the granting body.)

As we expect AI to be a continuing focus, we’re developing good connections with AI specialist groups in academia and industry in Cambridge, and would similarly expect that a postdoc with a CS background might split their time between CSER’s interdisciplinary group and a technical group working in this area and interested in long-term safe/responsible AI development. The plan is to develop similar relations in bio and other key areas. If we feel like we’re really not set up to support someone as seems necessary and can’t figure out how to get around that, then yes, that may be a good reason not to proceed at a given time. That said, during my time at FHI, a lot of good research has been done without these kinds of setups – and incidentally I don’t think being at FHI has ever harmed anyone’s long-term career prospects - so they won’t always be necessary.

And overly-broad job listings are par for the course, but before I personally would want to put together a 3 page project proposal or hunt down a 10 page writing sample relevant or even comprehensible to people outside of my field, I'd like to have some sense of whether anyone would even read them or whether they'd just be confused as to why I applied.

An offer: if you (or anyone else) have these kinds of concerns and wish to send me something short (say a 1/3-1/2 page proposal/info about yourself) before investing the effort in a full application, I’ll be happy to read it and say whether it’s worth applying (warning: it may take me until the weekend on any given week).

Comment author: Sean_o_h 27 March 2015 11:42:31AM 4 points [-]

Placeholder: this is a good comment and good questions, which I will respond to by tomorrow or Sunday.

Postdoctoral research positions at CSER (Cambridge, UK)

17 Sean_o_h 26 March 2015 05:59PM

[To be cross-posted at Effective Altruism Forum, FLI news page]

I'm delighted to announce that the Centre for the Study of Existential Risk has had considerable recent success in grantwriting and fundraising, among other activities (full update coming shortly). As a result, we are now in a position to advance to CSER's next stage of development: full research operations. Over the course of this year, we will be recruiting a full team of postdoctoral researchers to work on a combination of general methodologies for extreme technological (and existential) risk analysis and mitigation, alongside projects focused on specific technologies and risks.

Our first round of recruitment has just opened - we will be aiming to hire up to 4 postdoctoral researchers; details below. A second recruitment round will take place in the Autumn. We have a slightly unusual opportunity in that we get to cast our net reasonably wide. We have a number of planned research projects (listed below) that we hope to recruit for. However, we also have the flexibility to hire one or more postdoctoral researchers to work on additional projects relevant to CSER's aims. Information about CSER's aims and core research areas is available on our website. We request that, as part of the application process, potential postholders send us a research proposal of no more than 1500 words, explaining what their research skills could contribute to CSER. At this point in time, we are looking for people who will have obtained a doctorate in a relevant discipline by their start date.

We would also humbly ask that the LessWrong community aid us in spreading the word far and wide about these positions. There are many brilliant people working within the existential risk community. However, there are academic disciplines and communities that have had less exposure to existential risk as a research priority than others (due to founder effect and other factors), but where there may be people with very relevant skills and great insights. With new centres and new positions becoming available, we have a wonderful opportunity to grow the field, and to embed existential risk as a crucial consideration in all relevant fields and disciplines.

Thanks very much,

Seán Ó hÉigeartaigh (Executive Director, CSER)

 

"The Centre for the Study of Existential Risk (University of Cambridge, UK) is recruiting up to four full-time postdoctoral research associates to work on the project Towards a Science of Extreme Technological Risk.

We are looking for outstanding and highly-committed researchers, interested in working as part of a growing research community, with research projects relevant to any aspect of the project. We invite applicants to explain their project to us, and to demonstrate their commitment to the study of extreme technological risks.

We have several shovel-ready projects for which we are looking for suitable postdoctoral researchers. These include:

  • Ethics and evaluation of extreme technological risk (ETR) (with Sir Partha Dasgupta);
  • Horizon-scanning and foresight for extreme technological risks (with Professor William Sutherland);
  • Responsible innovation and extreme technological risk (with Dr Robert Doubleday and the Centre for Science and Policy).

However, recruitment will not necessarily be limited to these subprojects, and our main selection criterion is suitability of candidates and their proposed research projects to CSER’s broad aims.

Details are available here. Closing date: April 24th."

Comment author: IlyaShpitser 03 March 2015 01:34:06PM *  1 point [-]

Hence, I'd like to take this opportunity to appeal for a charitable reading of posts of this nature

Duly noted, thanks. This kind of tone deafness seems to be a pattern here in the LW-sphere, however. For instance, look at this:

http://lesswrong.com/lw/lco/could_you_be_prof_nick_bostroms_sidekick/

If funding were available, the Centre for Effective Altruism would consider hiring someone to work closely with Prof Nick Bostrom to provide anything and everything he needs to be more productive.

Really?


An appeal to charity in the reading of "public-facing, external communication" is a little odd. Public-facing means you can't beg off on social incompetence, being overworked, etc. You have to convince the public of something, and they don't owe you charity in how they read your message. They will retreat to their prejudices and gut instincts right away. It is in the job description of public-facing communication to deal with this.

Comment author: Sean_o_h 03 March 2015 04:37:14PM 0 points [-]

This is reasonable.

Comment author: IlyaShpitser 02 March 2015 10:43:27AM *  1 point [-]

Ok, let me see if I can help.

Aside from the fact that this was an incredibly rude, cultish thing to say, utterly lacking in collegiality, how do we even judge who a 'mere mortal' is here? Do we compare CVs? Citation rank? A tingly sense of impressiveness you get when in the same room?


Maybe people should find a better hobby than ordering other people from best to worst. Yes, I know this hobby stirs something deep in our social monkey hearts.

Comment author: Sean_o_h 03 March 2015 01:16:49PM 2 points [-]

This was a poorly phrased line, and it is helpful to point that out. While I can't and shouldn't speak for the OP, I'm confident that the OP didn't mean it in an "ordering people from best to worst" way, especially knowing the tremendous respect that people working and volunteering in X-risk have for Seth himself, and for GCRI's work. I would note that the entire point of this post (and the AMA which the OP has organised) was to highlight GCRI's excellent work and bring it to the attention of more people in the community. However, I can also see how the line might be taken to mean things it wasn't at all intended to mean.

Hence, I'd like to take this opportunity to appeal for a charitable reading of posts of this nature - ones that are clearly intended to promote and highlight good work in LW's areas of interest - especially in "within community" spaces like this. One of the really inspiring things about working in this area is the number of people putting in great work and long hours alongside their full-time commitments - like Ryan and many others. And those working fulltime in Xrisk/EA often put in far in excess of standard hours. This sometimes means that people are working under a lot of time pressure or fatigue, and phrase things badly (or don't recognise that something could easily be misread). That may or may not be the case here, but I know it's a concern I often have about my own engagements, especially when it's gone past the '12 hours in the office' stage.

With that said, please do tell us when it looks like we're expressing things badly, or in a way that might be taken to be less than positive. It's a tremendously helpful learning experience about the mistakes we can make in how we write (particularly in cases where people might be tired/under pressure and thus less attentive to such things).

Comment author: Sean_o_h 26 February 2015 12:34:01PM 5 points [-]

They've also released their code (for non-commercial purposes): https://sites.google.com/a/deepmind.com/dqn/

In other interesting news, a paper released this month describes a way of 'speeding up' neural net training, and an approach that achieves 4.9% top-5 validation error on ImageNet. My layperson's understanding is that this is the first time human accuracy has been exceeded on the ImageNet benchmarking challenge, and represents an advance on Chinese giant Baidu's progress reported last month, which I understood to be significant in its own right. http://arxiv.org/abs/1501.02876

Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift Sergey Ioffe, Christian Szegedy

(Submitted on 11 Feb 2015 (v1), last revised 13 Feb 2015 (this version, v2))

"Training Deep Neural Networks is complicated by the fact that the distribution of each layer's inputs changes during training, as the parameters of the previous layers change. This slows down the training by requiring lower learning rates and careful parameter initialization, and makes it notoriously hard to train models with saturating nonlinearities. We refer to this phenomenon as internal covariate shift, and address the problem by normalizing layer inputs. Our method draws its strength from making normalization a part of the model architecture and performing the normalization for each training mini-batch. Batch Normalization allows us to use much higher learning rates and be less careful about initialization. It also acts as a regularizer, in some cases eliminating the need for Dropout. Applied to a state-of-the-art image classification model, Batch Normalization achieves the same accuracy with 14 times fewer training steps, and beats the original model by a significant margin. Using an ensemble of batch-normalized networks, we improve upon the best published result on ImageNet classification: reaching 4.9% top-5 validation error (and 4.8% test error), exceeding the accuracy of human raters."
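For the curious, the normalization step the abstract describes can be sketched in a few lines of NumPy. This is a minimal illustration of the training-time forward pass only, not the paper's code; the function name, shapes, and parameters here are my own.

```python
import numpy as np

def batch_norm(x, gamma, beta, eps=1e-5):
    """Normalize a mini-batch of activations, then scale and shift.

    x: (batch, features) activations; gamma, beta: learned per-feature
    parameters that restore the layer's representational freedom.
    """
    mu = x.mean(axis=0)                    # per-feature mini-batch mean
    var = x.var(axis=0)                    # per-feature mini-batch variance
    x_hat = (x - mu) / np.sqrt(var + eps)  # zero mean, unit variance
    return gamma * x_hat + beta

# Example: activations with arbitrary mean/scale come out normalized.
x = np.random.randn(64, 8) * 3.0 + 5.0
y = batch_norm(x, gamma=np.ones(8), beta=np.zeros(8))
```

With gamma=1 and beta=0 each feature of `y` has approximately zero mean and unit variance across the batch; in a real network gamma and beta are trained, and running statistics replace the batch statistics at inference time.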

Comment author: Sean_o_h 23 February 2015 09:59:50PM 2 points [-]

Seth is a very smart, formidably well-informed and careful thinker - I'd highly recommend jumping on the opportunity to ask him questions.

His latest piece in the Bulletin of the Atomic Scientists is worth a read too. It's on the "Stop Killer Robots" campaign. He agrees with the view of Stuart Russell (and others) that this is a bad road to go down, and also presents it as a test case for existential risk - a pre-emptive ban on a dangerous future technology:

"However, the most important aspect of the Campaign to Stop Killer Robots is the precedent it sets as a forward-looking effort to protect humanity from emerging technologies that could permanently end civilization or cause human extinction. Developments in biotechnology, geoengineering, and artificial intelligence, among other areas, could be so harmful that responding may not be an option. The campaign against fully autonomous weapons is a test-case, a warm-up. Humanity must get good at proactively protecting itself from new weapon technologies, because we react to them at our own peril."

http://thebulletin.org/stopping-killer-robots-and-other-future-threats8012

Comment author: moreati 26 January 2015 10:58:23AM 5 points [-]

I saw Ex Machina this weekend. The subject matter is very close to LW's interests and I enjoyed it a lot. My prior prediction that it would be "AI box experiment: the movie" wasn't 100% accurate.

Gur fpranevb vf cerfragrq nf n ghevat grfg. Gur punenpgref hfr n srj grezf yvxr fvatheynevgl naq gur NV vf pbasvarq, ohg grfgre vf abg rkcyvpvgyl n tngrxrrcre. Lbh pbhyq ivrj gur svyz nf rvgure qrcvpgvat n obk rkcrevzrag eha vapbzcrgragyl, be gur obk rkcrevzrag zbhyqrq gb znxr n pbzcryyvat/cbchyne svyz.

For those who worry that it's Hollywood, hence dumb, I think you'll be pleasantly surprised. The characters are smart and act accordingly; I spotted fewer than 5 idiot-ball moments.

Comment author: Sean_o_h 26 January 2015 12:04:52PM 4 points [-]

Script/movie development was advised by CSER advisor and AI/neuroscience expert Murray Shanahan (Imperial). Haven't had time to go see it yet, but looking forward to it!

Comment author: [deleted] 18 January 2015 03:32:25AM 4 points [-]

Is FLI using this money to fund research proposals? Where would one send such a proposal for consideration?

Comment author: Sean_o_h 18 January 2015 10:06:19AM *  3 points [-]

Yes. The link, with guidelines and grant portal, should be on the FLI website within the coming week or so.
