
is4junk comments on Open thread, Mar. 23 - Mar. 31, 2015 - Less Wrong Discussion

Post author: MrMind 23 March 2015 08:38AM




Comment author: is4junk 24 March 2015 11:40:37PM 1 point

I don't think we would be that far behind.

NNs had lost favor in the AI community after 1969 (Minsky and Papert's Perceptrons) and have only become popular again in the last decade. See http://en.wikipedia.org/wiki/Artificial_neural_network

The only crossover that comes to mind for me is deep learning vision models 'discovering' edge detection. There is also some interest in sparse NN activation.

Comment author: gwern 26 March 2015 09:13:01PM 1 point

NNs had lost favor in the AI community after 1969 (Minsky and Papert's Perceptrons) and have only become popular again in the last decade

Yes, I'm familiar with the history. But how far along would we be without the neural network work done since ~2001? The non-neural-network competitors on ImageNet, like SVMs, are nowhere near human levels of performance; Watson required neural networks; Stanley won the DARPA Grand Challenge without neural networks because it had so many sensors, but real self-driving cars will have to use neural networks; neural networks are why Google Translate has gone from roughly Babelfish levels (hysterically bad) to remarkably good; voice recognition has gone from mostly hypothetical to routine on smartphones...

What major AI achievements have SVMs or random forests racked up over the past decade comparable to any of that?

Comment author: is4junk 27 March 2015 01:31:20AM 1 point

So if humanity had had no biological neural networks to steal the general idea from, and to serve as a proof of feasibility, would machine learning & AI be far behind where they are now?

NNs' connection to biology is very thin. Artificial neurons don't look or act like real neurons at all. But as a coined term to sell your research idea, it's great.

NNs are popular now for their deep learning properties and their ability to learn features from unlabeled data (like edge detection).

Comparing NNs to SVMs isn't really fair: you use the tool best suited for the job. If you have lots of labeled data you are more likely to use an SVM. It just depends on what problem you are being asked to solve. And of course you might feed an NN's output into an SVM, or vice versa.

As for major achievements - NNs are leading for now because 1) most of the world's data is unlabeled and 2) automated feature discovery (deep learning) is better than paying people to craft features.
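[Editor's note: to make the "edge detection" feature mentioned in this thread concrete, here is a minimal plain-Python sketch of a hand-crafted Sobel edge filter - the kind of manually engineered feature that deep networks learn automatically from unlabeled images. The function names and toy image are my own illustrations, not from the thread.]

```python
# Hand-crafted vertical-edge filter (Sobel). Deep vision networks famously
# learn filters resembling this in their first layer, without supervision.
SOBEL_X = [[-1, 0, 1],
           [-2, 0, 2],
           [-1, 0, 1]]

def convolve3x3(image, kernel):
    """Valid-mode 3x3 convolution (cross-correlation) over a 2D list image."""
    h, w = len(image), len(image[0])
    out = []
    for i in range(h - 2):
        row = []
        for j in range(w - 2):
            acc = 0
            for di in range(3):
                for dj in range(3):
                    acc += kernel[di][dj] * image[i + di][j + dj]
            row.append(acc)
        out.append(row)
    return out

# Tiny 5x5 image: dark left half, bright right half (a vertical edge).
img = [[0, 0, 10, 10, 10]] * 5

response = convolve3x3(img, SOBEL_X)
# The filter responds strongly at the edge and is zero in flat regions.
print(response[0])  # [40, 40, 0]
```

The point of "automated feature discovery" is that nobody has to write SOBEL_X by hand: the network arrives at similar filters on its own.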

Comment author: gwern 27 March 2015 02:38:32AM 3 points

NNs' connection to biology is very thin. Artificial neurons don't look or act like real neurons at all.

I am well aware of that. Nevertheless, as a historical fact, they were inspired by real neurons, they do operate more like real neurons than do, say, SVMs or random forests, and this is the background to my original question.

If you have lots of labeled data you are more likely to use an SVM.

ImageNet is a lot of labeled data, to give one example.

As for major achievements - NNs are leading for now because ...

There is a difference between explaining, and explaining away. You seem to think you are doing the latter, while you're really just doing the former.

Comment author: skeptical_lurker 31 March 2015 12:13:54PM 0 points

If you have lots of labeled data you are more likely to use an SVM.

Kernel SVM training is O(n^3) in the number of samples - if you have lots of data you shouldn't use SVMs.
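[Editor's note: a back-of-envelope sketch of what cubic scaling means in practice, assuming training cost scales exactly as n^3; the 10,000-sample baseline is an arbitrary illustration, not a benchmark.]

```python
# Relative kernel-SVM training cost under an assumed O(n^3) scaling law.
# Baseline: suppose training on 10,000 samples costs 1 time unit.
BASE_N = 10_000

def relative_cost(n, exponent=3):
    """Training cost relative to the baseline, assuming cost ~ n**exponent."""
    return (n / BASE_N) ** exponent

for n in (10_000, 20_000, 100_000, 1_000_000):
    print(f"{n:>9,} samples -> {relative_cost(n):,.0f}x baseline cost")
# Doubling the data multiplies cost by 8; going from 10k to 1M samples
# multiplies it by a million, which is why large-data work avoids kernel SVMs.
```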

Comment author: Douglas_Knight 29 March 2015 08:47:38PM 0 points

What year do you put the change in Google Translate? It didn't switch to neural nets until 2012, right? Did anyone notice the change? My memory is that it was dramatically better than Babelfish in 2007, let alone 2010.

Comment author: gwern 30 March 2015 12:05:29AM 0 points

Good question... I know that Google Translate began as a pretty bad outsourced translator (SYSTRAN), because I had a lot of trouble figuring out when Translate first came out for my Google survival analysis, and it was upgraded and expanded almost constantly from ~2002 onwards. The 2007 switch was supposedly from the company SYSTRAN to an internal system, but what does that mean? SYSTRAN is a proprietary company which could be using anything it wants internally, and it admits its system is a hybrid. The 2006 beta announcement just calls it statistics and machine learning, with no details about what that means. Google Scholar is no help here either - hits are swamped by research papers mentioning Translate, plus a few more recent hits about the neural networks used in various recent Google mobile-oriented services like speech or image recognition.

So... I have no idea. It's highly unlikely to predate their internal translator in 2006, anyway, but it could be your 2012 date.

Comment author: Douglas_Knight 30 March 2015 02:24:51AM 0 points

Here is a 2007 paper that I found when I was writing the above. I don't remember how I found it, or why I think it representative, though.