Thanks for the corrections. I changed the text to "in Berkeley". How should FLI be described? (I was just cribbing from Scott's FAQ when claiming it was at MIT.)

Good points all; these are good reasons to work on AI safety (and of course, as a theorist, I'm very happy to think about interesting problems even if they don't have immediate impact :-)). I'm definitely interested in the short-term issues, and have been spending a lot of my research time lately thinking about fairness and privacy in ML. Inverse RL / revealed-preference learning is also quite interesting, and I'd love to see some more theory results in the agnostic case.

Hi all,

Thanks for the very thoughtful comments; lots to chew on. As I hope was clear, I'm just an interested outside observer: I haven't spent very long thinking about these issues and don't know much of the literature. (My blog post ended up as a crosspost here because I posted it to Facebook asking whether anyone could point me to more serious literature on this problem, and a commenter suggested I crosspost here for feedback.)

I agree that linear feedback is more plausible if we think of research breakthroughs as producing multiplicative gains, a simple point that I hadn't thought about.
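To make sure I'm reading that point the way it was intended, here is my own rough sketch (the symbols are mine, not from the discussion above: $I(t)$ stands in for capability, $\lambda$ for the rate of breakthroughs per unit time, and $r$ for the fractional gain per breakthrough). If each breakthrough multiplies capability by a fixed factor $1 + r$ and breakthroughs arrive at a roughly constant rate $\lambda$, then

$$\frac{dI}{dt} \;\approx\; \lambda r \, I(t) \;=\; c\, I(t), \qquad\text{so}\qquad I(t) = I(0)\, e^{ct},$$

i.e. multiplicative gains at a constant breakthrough rate give exactly the linear-feedback law, and hence exponential growth rather than a finite-time blowup; an "explosion" would instead seem to need something like $dI/dt \propto I(t)^{1+\epsilon}$ for some $\epsilon > 0$.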