Looking for machine learning and computer science collaborators

9 Post author: Stuart_Armstrong 26 May 2017 11:53AM

I've been recently struggling to translate my various AI safety ideas (low impact, truth for AI, Oracles, counterfactuals for value learning, etc...) into formalised versions that can be presented to the machine learning/computer science world in terms they can understand and critique.

What would be useful for me is a collaborator who knows the machine learning world (and has preferably presented papers at conferences) with whom I could co-write papers. They don't need to know much of anything about AI safety - explaining the concepts to people unfamiliar with them is going to be part of the challenge.

The results of this collaboration should be papers like Safely Interruptible Agents, written with Laurent Orseau of DeepMind, and Interactive Inverse Reinforcement Learning, written with Jan Leike of the FHI/DeepMind.

It would be especially useful if the collaborators were located physically close to Oxford (UK).

Let me know in the comments if you know, or are, a potential candidate.


Comments (9)

Comment author: IlyaShpitser 26 May 2017 01:18:30PM 8 points

Hi Stuart. I am not an ML person, and I am not close to Oxford, but I am interested in this type of stuff (in particular, I went through the FDT paper just two days ago with someone). I do write papers for ML conferences sometimes.

Comment author: Stuart_Armstrong 30 May 2017 06:40:59AM 0 points

> I am not an ML person

> I do write papers for ML conferences sometimes.

Interesting ^_^ under what name are these papers?

Comment author: IlyaShpitser 30 May 2017 02:04:19PM 0 points

Mine? I can send you my cv if you want.

Comment author: Stuart_Armstrong 30 May 2017 04:34:34PM 0 points

Is this you: https://www.researchgate.net/profile/Ilya_Shpitser/publications ?

Just let me know which papers are particularly ML-ish :-)

Comment author: harshhpareek 27 May 2017 07:43:48AM 3 points

Hi Stuart, I am about to complete a PhD in Machine Learning and would be interested in collaborations like these, but only from October onwards.

I have written and presented papers at Machine Learning conferences, and am quite interested in contributing to concrete AI safety research. My work so far has been on issues in supervised ranking tasks, but I have read a fair bit on reinforcement learning.

I am not close to Oxford. I am currently in Austin, TX, and will be in the Bay Area from October onwards.

Comment author: Stuart_Armstrong 30 May 2017 06:43:51AM 0 points

Ok! Sending you my email in a PM. Would you mind contacting me in October, if that's ok and you're still interested?


Comment author: Darklight 13 June 2017 05:37:40AM 1 point

I might be able to collaborate. I have a master's in computer science and did a thesis on neural networks and object recognition, before spending some time at a startup as a data scientist doing mostly natural-language-related machine learning work, and then getting a job as a research scientist at a larger company to do similar applied research.

I also have two published conference papers under my belt, though they were in pretty obscure conferences admittedly.

As a plus, I've also read most of the sequences and am familiar with the Less Wrong culture, and have spent a fair bit of time thinking about the Friendly/Unfriendly AI problem. I even came up with an attempt at a thought experiment to convince an AI to be friendly.

Alas, I am based near Toronto, Ontario, Canada, so distance might be an issue.

Comment author: Stuart_Armstrong 15 June 2017 05:06:05AM 0 points

Interesting. Can we exchange email addresses?

Comment author: Thomas 26 May 2017 12:56:27PM 0 points

Consider for a moment that this DL thing may soon be obsolete. It is great, the best so far, but even so.

The first problem I have with it is the enormous data set needed for training.

The second problem is the inherent non-interpretability of what those weights mean.

So, perhaps something better may be just around the corner.