Daniel_Burfoot comments on Stupid Questions December 2014 - Less Wrong Discussion

16 Post author: Gondolinian 08 December 2014 03:39PM

Comments (341)

Comment author: JQuinton 08 December 2014 08:11:57PM 2 points [-]

Looking for some people to refute this harebrained idea I recently came up with.

The time period from the advent of the industrial revolution to the so-called digital revolution was about 150-200 years. Even though computers were being used around WWII, widespread computer use didn't start to shake things up until around 1990. I would imagine that AI would constitute a similarly fundamental shift in how we live our lives. So would it be a reasonable extrapolation to think that widespread AI would arrive about 150-200 years after the beginning of the information age?

Comment author: NobodyToday 08 December 2014 09:40:32PM 3 points [-]

I'm a first-year AI student, and we are currently in the middle of exploring AI 'history'. Of course I don't know a lot about AI yet, but the interesting part about learning the history of AI is that in some sense the climax of AI research is already behind us. People got very interested in AI after the Dartmouth conference ( http://en.wikipedia.org/wiki/Dartmouth_Conferences ) and were so optimistic that they thought they could build an artificially intelligent system within 20 years. And here we are, still struggling with seemingly simple things such as computer vision.

The problem is that they ran into some hard problems which can't really be ignored. One of them is the frame problem ( http://www-formal.stanford.edu/leora/fp.pdf ); another is the common-sense problem.

Solutions to many of them (I believe) require either 1) huge brute-force power or 2) machine learning. And machine learning is something we can't seem to get very far with; programming a computer to program itself, I can understand why that must be quite difficult to accomplish. So since the 80s AI researchers have mainly focused on building expert systems: systems which can do a certain narrow task much better than humans, but which fail at many things that are very easy for humans (which is apparently called Moravec's paradox).

Anyway, the point I'm trying to get across, and I'm interested in hearing whether you agree or not, is that AI was/is very overrated. I doubt we can ever make a real artificially intelligent agent unless we can solve the machine learning problem for real. And I doubt whether that is ever truly possible.

Comment author: Daniel_Burfoot 08 December 2014 11:02:55PM 2 points [-]

And machine learning is a thing which we can't seem to get very far with.

Standard vanilla supervised machine learning (e.g. backprop neural networks and SVMs) is not going anywhere fast, but deep learning is really a new thing under the sun.
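To make "standard vanilla supervised learning" concrete, here is a minimal sketch (my illustration, not from the thread) of the paradigm the comment has in mind: fit a fixed-form model such as an SVM to labelled examples, then predict on held-out data. It assumes scikit-learn is available; the dataset is synthetic.

```python
# A minimal sketch of vanilla supervised learning: train an SVM on
# labelled examples, then measure accuracy on unseen data.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Synthetic labelled data: 200 examples, 10 features, 2 classes.
X, y = make_classification(n_samples=200, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = SVC(kernel="rbf")       # support vector machine with an RBF kernel
clf.fit(X_train, y_train)     # learn a decision boundary from the labels
accuracy = clf.score(X_test, y_test)
print(accuracy)
```

The model family is fixed in advance; all the "learning" is choosing parameters within it, which is the sense in which this is shallower than the representation learning deep nets do.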

Comment author: Punoxysm 10 December 2014 05:17:31AM *  1 point [-]

but deep learning is really a new thing under the sun.

On the contrary, the idea of making deeper nets is nearly as old as ordinary 2-layer neural nets, successful implementations date back to the late 90's in the form of convolutional neural nets, and they had another burst of popularity in 2006.
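The core operation those late-90's convolutional nets stack into deep feature hierarchies is simple. A rough sketch (my illustration, using NumPy; deep-learning libraries actually compute cross-correlation under the name "convolution", as here):

```python
import numpy as np

def conv2d_valid(image, kernel):
    """Naive 'valid' 2-D convolution (cross-correlation, as in deep
    learning libraries): slide the kernel over the image, taking a
    dot product at each position."""
    kh, kw = kernel.shape
    ih, iw = image.shape
    out = np.zeros((ih - kh + 1, iw - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A vertical-edge detector applied to a tiny image with a sharp edge.
image = np.array([[0, 0, 1, 1],
                  [0, 0, 1, 1],
                  [0, 0, 1, 1],
                  [0, 0, 1, 1]], dtype=float)
kernel = np.array([[-1, 0, 1],
                   [-1, 0, 1],
                   [-1, 0, 1]], dtype=float)
result = conv2d_valid(image, kernel)
print(result)  # strong response everywhere the edge falls under the kernel
```

In a trained convnet the kernel weights are learned rather than hand-designed, and many such layers are composed; the hardware and data advances mentioned below are what finally made that composition train well at scale.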

Advances in hardware, data availability, heuristics about architecture and training, and large-scale corporate attention have allowed the current burst of rapid progress.

This is both heartening, because the foundations of its success are deep, and tempering, because the limitations that have held it back before could resurface to some degree.