Comment author: leplen 27 March 2015 12:52:38AM * 10 points

Candidates should have a PhD in a relevant field

I'm really curious as to what constitutes a relevant field. The three people you list are an economist, a conservation biologist, and someone with a doctorate in geography. Presumably those are relevant fields, but I don't know what they have in common exactly.

I don't know what to think about this. You're new, you have sort of unconventional funding, and your mission statement is really broad. I'm not really sure what sort of research you're looking for or what journals it would be published in. I can't tell how much of this is science and how much is economics or political science, and your institute is under the umbrella of the Arts and Humanities Research Center. What sorts of positions do you envision your post-doctoral fellows taking two years down the road?

This is definitely interesting, but I'm not sure I have any actual idea who you're looking for. Having read your website, downloaded the job listing, and read the bios of the people involved, I'm still not really sure. I can't figure out whether this seems vague and confusing because it isn't directed at me, or because you're still figuring out the shape of the group yourself.

Comment author: Sean_o_h 27 March 2015 11:42:31AM 4 points

Placeholder: these are good comments and good questions, which I will respond to by tomorrow or Sunday.

Comment author: IlyaShpitser 03 March 2015 01:34:06PM * 1 point

Hence, I'd like to take this opportunity to appeal for a charitable reading of posts of this nature

Duly noted, thanks. This kind of tone deafness seems to be a pattern here in the LW-sphere, however. For instance, look at this:

http://lesswrong.com/lw/lco/could_you_be_prof_nick_bostroms_sidekick/

If funding were available, the Centre for Effective Altruism would consider hiring someone to work closely with Prof Nick Bostrom to provide anything and everything he needs to be more productive.

Really?


An appeal to charity in the reading of "public-facing, external communication" is a little odd. Public-facing means you can't beg off on social incompetence, being overworked, etc. You have to convince the public of something, and they don't owe you charity in how they read your message. They will retreat to their prejudices and gut instincts right away. It is in the job description of public-facing communication to deal with this.

Comment author: Sean_o_h 03 March 2015 04:37:14PM 0 points

This is reasonable.

Comment author: IlyaShpitser 02 March 2015 10:43:27AM * 1 point

Ok, let me see if I can help.

Aside from the fact that this was an incredibly rude, cultish thing to say, utterly lacking in collegiality, how do we even judge who a 'mere mortal' is here? Do we compare CVs? Citation rank? A tingly sense of impressiveness you get when in the same room?


Maybe people should find a better hobby than ordering other people from best to worst. Yes, I know this hobby stirs something deep in our social monkey hearts.

Comment author: Sean_o_h 03 March 2015 01:16:49PM 2 points

This was a poorly phrased line, and it is helpful to point that out. While I can't and shouldn't speak for the OP, I'm confident that the OP didn't mean it in an "ordering people from best to worst" way, especially knowing the tremendous respect that people working and volunteering in X-risk have for Seth himself, and for GCRI's work. I would note that the entire point of this post (and the AMA the OP has organised) was to highlight GCRI's excellent work and bring it to the attention of more people in the community. However, I can also see how the line might be taken to mean things it wasn't at all intended to mean.

Hence, I'd like to take this opportunity to appeal for a charitable reading of posts of this nature - ones that are clearly intended to promote and highlight good work in LW's areas of interest - especially in "within community" spaces like this. One of the really inspiring things about working in this area is the number of people putting in great work and long hours alongside their full-time commitments - like Ryan and many others. And those working full-time in X-risk/EA often put in far in excess of standard hours. This sometimes means that people are working under a lot of time pressure or fatigue, and phrase things badly (or don't recognise that something could easily be misread). That may or may not be the case here, but I know it's a concern I often have about my own engagements, especially when it's gone past the '12 hours in the office' stage.

With that said, please do tell us when it looks like we're expressing things badly, or in a way that might be taken to be less than positive. It's a tremendously helpful learning experience about the mistakes we can make in how we write (particularly in cases where people might be tired/under pressure and thus less attentive to such things).

Comment author: Sean_o_h 26 February 2015 12:34:01PM 5 points

They've also released their code (for non-commercial purposes): https://sites.google.com/a/deepmind.com/dqn/

In other interesting news, a paper released this month describes a way of 'speeding up' neural net training; the approach achieves 4.9% top-5 validation error on ImageNet. My layperson's understanding is that this is the first time human accuracy has been exceeded on the ImageNet benchmark, and that it represents an advance on Chinese giant Baidu's progress reported last month, which I understood to be significant in its own right. http://arxiv.org/abs/1501.02876

Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift
Sergey Ioffe, Christian Szegedy

(Submitted on 11 Feb 2015 (v1), last revised 13 Feb 2015 (this version, v2))

Training Deep Neural Networks is complicated by the fact that the distribution of each layer's inputs changes during training, as the parameters of the previous layers change. This slows down the training by requiring lower learning rates and careful parameter initialization, and makes it notoriously hard to train models with saturating nonlinearities. We refer to this phenomenon as internal covariate shift, and address the problem by normalizing layer inputs. Our method draws its strength from making normalization a part of the model architecture and performing the normalization for each training mini-batch. Batch Normalization allows us to use much higher learning rates and be less careful about initialization. It also acts as a regularizer, in some cases eliminating the need for Dropout. Applied to a state-of-the-art image classification model, Batch Normalization achieves the same accuracy with 14 times fewer training steps, and beats the original model by a significant margin. Using an ensemble of batch-normalized networks, we improve upon the best published result on ImageNet classification: reaching 4.9% top-5 validation error (and 4.8% test error), exceeding the accuracy of human raters.
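For readers who want the gist in code: below is a minimal sketch of the batch-normalization forward pass the abstract describes (NumPy; the function and variable names are my own, not the paper's). Each mini-batch is normalized per feature to zero mean and unit variance, then rescaled and shifted by learned parameters (gamma, beta) so the layer keeps its expressive power. At inference time the paper substitutes population statistics accumulated during training for the per-batch statistics, which this sketch omits.

```python
# Minimal sketch of the batch normalization forward pass (training mode).
# Assumes x has shape (batch_size, num_features); names are illustrative.
import numpy as np

def batch_norm_forward(x, gamma, beta, eps=1e-5):
    mu = x.mean(axis=0)                      # per-feature mean over the mini-batch
    var = x.var(axis=0)                      # per-feature variance over the mini-batch
    x_hat = (x - mu) / np.sqrt(var + eps)    # normalize; eps guards against division by zero
    return gamma * x_hat + beta              # learned scale and shift restore expressiveness

# Example: a batch of 4 samples with 3 features, given a large mean and scale
x = np.random.randn(4, 3) * 10 + 5
out = batch_norm_forward(x, gamma=np.ones(3), beta=np.zeros(3))
print(out.mean(axis=0), out.std(axis=0))     # approximately 0 mean, unit std per feature
```

Because each layer then sees inputs with a stable distribution regardless of how earlier layers' parameters move, higher learning rates become safe - which is where the paper's claimed 14x reduction in training steps comes from.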

Comment author: Sean_o_h 23 February 2015 09:59:50PM 2 points

Seth is a very smart, formidably well-informed and careful thinker - I'd highly recommend jumping on the opportunity to ask him questions.

His latest piece in the Bulletin of the Atomic Scientists is worth a read too. It's on the "Stop Killer Robots" campaign. He agrees with the view of Stuart Russell (and others) that this is a bad road to go down, and also presents the campaign as a test case for existential risk - a pre-emptive ban on a dangerous future technology:

"However, the most important aspect of the Campaign to Stop Killer Robots is the precedent it sets as a forward-looking effort to protect humanity from emerging technologies that could permanently end civilization or cause human extinction. Developments in biotechnology, geoengineering, and artificial intelligence, among other areas, could be so harmful that responding may not be an option. The campaign against fully autonomous weapons is a test-case, a warm-up. Humanity must get good at proactively protecting itself from new weapon technologies, because we react to them at our own peril."

http://thebulletin.org/stopping-killer-robots-and-other-future-threats8012

Comment author: moreati 26 January 2015 10:58:23AM 5 points

I saw Ex Machina this weekend. The subject matter is very close to LWs interests and I enjoyed it a lot. My prior prediction that it's "AI box experiment: the movie" wasn't 100% accurate.

Gur fpranevb vf cerfragrq nf n ghevat grfg. Gur punenpgref hfr n srj grezf yvxr fvatheynevgl naq gur NV vf pbasvarq, ohg grfgre vf abg rkcyvpvgyl n tngrxrrcre. Lbh pbhyq ivrj gur svyz nf rvgure qrcvpgvat n obk rkcrevzrag eha vapbzcrgnagyl, be gur obk rkcrevzrag zbhyqrq gb znxr n pbzcryyvat/cbchyne svyz.

For those who worry that it's Hollywood, hence dumb: I think you'll be pleasantly surprised. The characters are smart and act accordingly; I spotted fewer than five idiot-ball moments.

Comment author: Sean_o_h 26 January 2015 12:04:52PM 4 points

Script/movie development was advised by CSER advisor and AI/neuroscience expert Murray Shanahan (Imperial). Haven't had time to go see it yet, but looking forward to it!

Comment author: [deleted] 18 January 2015 03:32:25AM 4 points

Is FLI using this money to fund research proposals? Where would one send such a proposal for consideration?

Comment author: Sean_o_h 18 January 2015 10:06:19AM * 3 points

Yes. The link with guidelines and the grant portal should be on the FLI website within the coming week or so.

Comment author: JoshuaZ 15 January 2015 11:26:00PM 5 points

This is good news. In general, since all forms of existential risk seem underfunded as a whole, more funding for any one of them is a good thing. But a donation of this size for AI specifically makes me start to wonder whether people should identify other existential risks that are now more underfunded. In general, it takes a very large amount of money to change what has the highest marginal return, but this is a pretty large donation.

Comment author: Sean_o_h 16 January 2015 11:13:59AM * 6 points

This will depend on how many other funders are "swayed" towards the area by this funding and the research that starts coming out of it. This is a great bit of progress, but alone it is nowhere near the amount needed to make optimal progress on AI. It's important people don't get the impression that this funding has "solved" the AI problem (I know you're not saying this yourself).

Consider that X-risk research in e.g. biology draws usefully on technical and domain-specific work in biosafety and biosecurity being done more widely. Until now, AI safety research hasn't had that body of work to draw on in the same way, and has instead focused on fundamental issues in the development of general AI, as well as outlining the challenges that will be faced. Given that much of this funding will go towards technical work by AI researchers, it will hopefully get this side of things going in a big way, and help build a body of support and involvement from the non-risk AI/CS community, which is essential at this moment in time.

But there's a tremendous amount of work that will need to be done - and funded - in the technical, fundamental, and broader (policy, etc.) areas. Even if FHI/CSER are successful in applying, the funds likely to be allocated from this pot to the work we're doing are not going to be near what we would need for our respective AI research programmes (I can't speak for MIRI, but I presume this to be the case for them also). But it will certainly help!

Comment author: JoshuaFox 16 January 2015 08:17:01AM * 8 points

Do we know why he chose to donate in this way: donating to FLI (rather than FHI, MIRI, CSER, some university, or a new organization), and setting up a grant fund (rather than directly to researchers or other grantees)?

Comment author: Sean_o_h 16 January 2015 10:44:10AM 16 points

An FLI person would be best placed to answer. However, I believe the proposal came from Max Tegmark and/or his team, and I fully support it as an excellent way of making progress on AI safety.

(i) All of the above organisations are now in a position to develop specific, relevant research plans and apply to get them funded - rather than the money going to one organisation over another. (ii) Given the number of "non-risk" AI researchers at the conference, and the many more signing the letter, this is a wonderful opportunity to follow up by encouraging them to get involved with safety research and apply. This seems like something that really needs to happen at this stage.

There will be a lot more excellent projects submitted for this than the funding will cover, and this will be a great way to demonstrate that there are a lot of tractable problems and immediately undertakable work to be done in this area - this should hopefully both attract more AI researchers to the field and attract additional funders who see how timely and worthy of funding this work is.

Consider it seed funding for the whole field of AI safety!

Sean (CSER)

Comment author: gjm 27 October 2014 05:18:01PM 2 points

"You guys" has absolutely none of the hostile/contemptuous feeling that "you people" has (at least for me). It's distinctly informal and (as you surmise) some people may interpret it as sexist.

I think I'd generally just say "you" and, if necessary, make it explicit what particular group I had in mind.

It hadn't occurred to me that you might not be a native English speaker; sorry about that. I guess it's one of the perils of speaking the language very well :-).

Comment author: Sean_o_h 04 November 2014 12:27:50PM 0 points

As another non-native speaker, I frequently find myself looking for a "plural you" in English, which was what I read hyporational's phrase as trying to convey. Useful feedback not to use 'you people'.
