I've recently been struggling to translate my various AI safety ideas (low impact, truth for AI, Oracles, counterfactuals for value learning, etc.) into formalised versions that can be presented to the machine learning/computer science world in terms they can understand and critique.
What would be useful for me is a collaborator who knows the machine learning world (and has preferably presented papers at conferences) with whom I could co-write papers. They don't need to know much of anything about AI safety - explaining the concepts to people unfamiliar with them is going to be part of the challenge.
The results of this collaboration should be papers like Safely Interruptible Agents, written with Laurent Orseau of DeepMind, and Interactive Inverse Reinforcement Learning, written with Jan Leike of the FHI/DeepMind.
It would be especially useful if the collaborators were located physically close to Oxford (UK).
Let me know in the comments if you know, or are, a potential candidate.
Cheers!
Hi Stuart, I am about to complete a PhD in Machine Learning and would be interested in collaborations like these, but only from October onwards.
I have written and presented papers at Machine Learning conferences, and am quite interested in contributing to concrete AI safety research. My work so far has been on issues in supervised ranking tasks, but I have read a fair bit on reinforcement learning.
I am not close to Oxford. I am currently in Austin, TX, and will be in the Bay Area from October onwards.
OK! Sending you my email in a PM. Would you mind contacting me in October, if that's OK and you're still interested?
Cheers!