One thing I've been wondering about deep neural networks: to what extent are neural networks novel and non-obvious? To what extent has evolution invented and thus taught us something very important to know for AI? (I realize this counterfactual is hard to evaluate.)
That is, imagine a world like ours in which, for some reason, no one had ever been interested enough in neurons & the brain to make the basic findings about neural network architectures and their power, as McCulloch & Pitts did. Would anyone have reinvented them, or any isomorphic algorithm, or discovered superior statistical/machine-learning methods?
For example, Ilya comments elsewhere that he doesn't think much of neural networks inasmuch as they're relatively simple: 'just' a bunch of logistic regressions wired together in layers and adjusted to reduce error. True enough - for all the subtleties, even a big ImageNet-winning neural network is not that complex to implement; you don't have to be a genius to create some neural nets.
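To make that concrete, here is a minimal sketch of my own (not from any source mentioned above) of the 'stacked logistic regressions' view: two layers of logistic units, weights adjusted by plain gradient descent to reduce squared error on XOR, which a single logistic regression cannot fit.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)  # XOR targets

# Layer 1: three logistic regressions on the inputs; layer 2: one logistic
# regression on their outputs. That's the whole 'network'.
W1 = rng.normal(0.0, 1.0, (2, 3)); b1 = np.zeros(3)
W2 = rng.normal(0.0, 1.0, (3, 1)); b2 = np.zeros(1)

def forward(X):
    h = sigmoid(X @ W1 + b1)          # hidden-layer activations
    return h, sigmoid(h @ W2 + b2)    # output probability

initial_loss = float(np.mean((forward(X)[1] - y) ** 2))

lr = 1.0
for _ in range(5000):
    h, p = forward(X)
    d_out = (p - y) * p * (1 - p)         # gradient through the output sigmoid
    d_hid = (d_out @ W2.T) * h * (1 - h)  # backpropagated to the hidden layer
    W2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum(0)
    W1 -= lr * X.T @ d_hid; b1 -= lr * d_hid.sum(0)

final_loss = float(np.mean((forward(X)[1] - y) ** 2))
print(initial_loss, final_loss)
```

The point being: the whole thing, backpropagation included, is a few dozen lines. Whatever is non-obvious about neural networks, it isn't the implementation difficulty.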
Yet, offhand, I'm having a hard time thinking of any non-neural-network algorithms which operate like a neural network in putting together a lot of little things in layers and achieving high performance. None of the usual regressions or tests work that way; multilevel models aren't very close; random forests, bagging, and factor analysis may be universal or consistent, but they are 'flat'...
Nor do I see many instances of people proposing new methods which turn out to just be a convolutional network with nodes and hidden layers renamed. (A contrast here would be Turing's halting problem: it seems like you can't throw a stick among language or system papers without hitting a system complicated enough to be Turing-complete and hence undecidable, and there was a small cottage industry post-Turing of showing that yet another system could be turned into a Turing machine, or that a result could be interpreted as proving something well-known about Turing machines.) There don't seem to be 'multiple inventions' here, as if the paradigm were non-obvious and, without the biological inspiration, would never have been found.
So if humanity had had no biological neural networks to steal the general idea and as proof of feasibility, would machine learning & AI be far behind where they are now?
This 2007 talk by Yann LeCun, "Who is Afraid of Non-Convex Loss Functions?", seems very relevant to your question. I'm far from an ML expert, but here's my understanding from that talk and various other sources. Basically, there was no theoretical reason to think that deep neural nets could be trained for any interesting AI task, because their loss functions are not convex, so there's no guarantee that when you try to optimize the weights you won't get stuck in local minima or flat spots. People tried to use DNNs anyway and suffered from those problems in practice as well, so the...
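The non-convexity is easy to exhibit directly. Here is a toy demonstration of my own (not from the talk): take a hand-built network that solves XOR exactly, permute its two hidden units to get a second, functionally identical weight setting, and evaluate the loss halfway between the two. If the loss surface were convex, the midpoint's loss could be no worse than the average of the endpoints' losses.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 1, 1, 0], dtype=float)  # XOR targets

def loss(params):
    W1, b1, W2, b2 = params
    h = sigmoid(X @ W1 + b1)
    p = sigmoid(h @ W2 + b2)
    return float(np.mean((p - y) ** 2))

# Hand-built solution: hidden unit 1 ~ OR, hidden unit 2 ~ AND,
# output ~ (OR and not AND) = XOR.
theta_a = (np.array([[10.0, 10.0], [10.0, 10.0]]), np.array([-5.0, -15.0]),
           np.array([10.0, -20.0]), np.array(-5.0))
# Swap the two hidden units: a different point in weight space, same function.
theta_b = (theta_a[0][:, ::-1].copy(), theta_a[1][::-1].copy(),
           theta_a[2][::-1].copy(), theta_a[3])
# Midpoint of the two solutions: the distinct hidden units fuse into
# duplicates, and the network can no longer represent XOR.
theta_mid = tuple((a + b) / 2 for a, b in zip(theta_a, theta_b))

print(loss(theta_a), loss(theta_b), loss(theta_mid))
```

Both endpoints have near-zero loss while the midpoint's loss jumps to roughly 0.5, so the loss surface cannot be convex; this weight-permutation symmetry alone guarantees non-convexity for any multi-layer net.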
If it's worth saying, but not worth its own post (even in Discussion), then it goes here.
Notes for future OT posters:
1. Please add the 'open_thread' tag.
2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)
3. Open Threads should be posted in Discussion, and not Main.
4. Open Threads should start on Monday and end on Sunday.