Dr_Manhattan comments on [link] New essay summarizing some of my latest thoughts on AI safety - Less Wrong Discussion
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Comments (27)
Thanks for writing this; a couple quick thoughts:
I think I've yet to see a paper that convincingly supports the claim that neural nets are learning natural representations of the world. For some papers that refute this claim, see e.g.
http://arxiv.org/abs/1312.6199
http://arxiv.org/abs/1412.6572
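The second paper above (Goodfellow et al.) argues the effect with a linear model: a perturbation that is tiny in every coordinate can still swing the output by a lot in high dimensions. A minimal numpy sketch of that argument, using the fast gradient sign method on logistic regression (all names and numbers here are illustrative, not from the paper):

```python
import numpy as np

def fgsm_perturb(x, w, b, y, eps):
    """Fast gradient sign method on a logistic-regression model.

    Moves x by eps (per coordinate) in the direction of the sign of the
    loss gradient w.r.t. the input. For log-loss on a linear model the
    gradient w.r.t. x is (p - y) * w, so the perturbation aligns with
    sign(w) and the logit shifts by roughly eps * ||w||_1.
    """
    p = 1.0 / (1.0 + np.exp(-(np.dot(w, x) + b)))  # predicted P(y = 1)
    grad_x = (p - y) * w                           # d(log-loss)/dx
    return x + eps * np.sign(grad_x)

rng = np.random.default_rng(0)
d = 100
w = rng.normal(size=d)
b = 0.0
x = w / np.linalg.norm(w)   # a confidently positive input
y = 1.0

x_adv = fgsm_perturb(x, w, b, y, eps=0.05)

# Each coordinate moved by at most 0.05, yet the logit drops by about
# eps * ||w||_1, which grows with the dimension.
logit = np.dot(w, x) + b
logit_adv = np.dot(w, x_adv) + b
```

The point relevant to the representation question: if an imperceptibly small input change flips the model's output, the model's features are not carving the input space at the joints a human ontology would use.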
I think the Degrees of Freedom thesis is a good statement of one of the potential problems. Since it is essentially a claim about whether a certain very complex statistical problem is identifiable, it is very hard to know whether it's true without either serious technical analysis or serious empirical research. That is itself a reason to do the research: if the thesis is true, it has worrisome implications for AI safety.
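The identifiability worry can be seen already in the simplest over-parameterized setting: with more parameters than data points, many different models fit the training data exactly yet disagree off-distribution. A small numpy sketch of this (underdetermined linear regression standing in for the "very complex statistical problem"; purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
n, d = 5, 20                      # fewer examples than parameters
X = rng.normal(size=(n, d))
y = rng.normal(size=n)

# Minimum-norm least-squares solution: fits the n training points exactly.
w_min = np.linalg.lstsq(X, y, rcond=None)[0]

# Any vector in the null space of X can be added without changing the fit.
v = rng.normal(size=d)
a = np.linalg.lstsq(X.T, v, rcond=None)[0]   # X.T @ a = projection of v onto row(X)
v_null = v - X.T @ a                         # component of v in null(X)
w_alt = w_min + v_null

# Both models agree on every training point...
train_gap = np.max(np.abs(X @ w_min - X @ w_alt))
# ...but can disagree on a new input.
x_new = rng.normal(size=d)
test_gap = abs(x_new @ w_min - x_new @ w_alt)
```

The training data alone cannot identify which of these models is "the" one, which is the shape of the concern: behavior on the training distribution may leave the learned representation badly underdetermined elsewhere.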
My impression is that they can in fact learn "natural" representations of the world; a good example is http://arxiv.org/abs/1311.2901
On the other hand, since they tend to be task-specific learners, they might take shortcuts that we wouldn't perceive as "natural"; our "natural object" ontology is optimized for a much more general task than most NNets are trained on.
If I'm correct about this, I would expect NNets to become more "natural" as their tasks get closer to being "AI-complete", e.g. question-answering systems and scene-description networks.