Baughn comments on Open Thread, Jun. 22 - Jun. 28, 2015 - Less Wrong

Post author: Gondolinian 22 June 2015 12:01AM


Comment author: Baughn 24 June 2015 05:06:01PM 2 points

So, some Inside View reasons to think this time might be different:

  • The results look better, and in particular, some of Google's projects are reproducing high-level quirks of the human visual cortex.

  • The methods can absorb far larger amounts of computing power. Previous approaches could not, which makes sense, as we didn't have the computing power for them to absorb at the time; but the human brain does appear to be almost absurdly computation-heavy, so Moore's Law is producing a difference in kind (see the sketch after this list).
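
A back-of-envelope sketch of that second point (my own illustration; the layer sizes are made-up parameters, not anything from the comment): the arithmetic cost of a dense network grows with the product of adjacent layer widths, so adding depth and width soaks up extra compute almost without bound.

```python
# Back-of-envelope sketch (illustrative only): how the compute needed for
# one forward pass through a dense network grows with depth and width.
# All layer sizes here are made-up parameters.

def mlp_flops(layer_widths):
    """Approximate multiply-accumulate count for one forward pass
    through a fully connected network with the given layer widths."""
    return sum(a * b for a, b in zip(layer_widths, layer_widths[1:]))

# A small 1990s-scale net vs. a (still modest) modern deep net:
small = [784, 100, 10]
deep = [784] + [4096] * 8 + [10]

print(f"small net: ~{mlp_flops(small):,} MACs per example")
print(f"deep net:  ~{mlp_flops(deep):,} MACs per example")
# The deep net does over a thousand times more arithmetic per example,
# and training multiplies that by dataset size and epoch count --
# capacity that only recent hardware can supply.
```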

That said, I (and most AI researchers, I believe) would agree that deep recurrent networks are only part of the puzzle. The neat thing is, they do appear to be part of the puzzle, which is more than you could say about e.g. symbolic logic; human minds don't run on logic at all. We're making progress, and I wouldn't be surprised if deep learning is part of the first AGI.
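
For concreteness, here is a minimal sketch of what "deep recurrent network" means, assuming plain numpy and two stacked vanilla-RNN layers with a tanh nonlinearity; real systems use LSTM or GRU cells and train the weights by backpropagation through time, but the structure is the same.

```python
# A minimal sketch of a deep (stacked) recurrent network in plain numpy.
# Shapes, random weights, and the tanh cell are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

def rnn_layer(xs, W_in, W_rec, b):
    """Run one recurrent layer over a sequence, returning all hidden states."""
    h = np.zeros(W_rec.shape[0])
    out = []
    for x in xs:
        h = np.tanh(W_in @ x + W_rec @ h + b)  # new state depends on old state
        out.append(h)
    return out

# Two stacked layers: input dim 8, hidden dim 16 for both layers.
W1_in, W1_rec, b1 = rng.normal(0, 0.1, (16, 8)), rng.normal(0, 0.1, (16, 16)), np.zeros(16)
W2_in, W2_rec, b2 = rng.normal(0, 0.1, (16, 16)), rng.normal(0, 0.1, (16, 16)), np.zeros(16)

sequence = [rng.normal(size=8) for _ in range(5)]
layer1 = rnn_layer(sequence, W1_in, W1_rec, b1)  # first layer reads the input
layer2 = rnn_layer(layer1, W2_in, W2_rec, b2)    # second layer reads the first
print(len(layer2), layer2[-1].shape)             # 5 hidden states, each dim 16
```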

Comment author: RobFack 26 June 2015 10:57:21PM 1 point

some of Google's projects are reproducing high-level quirks of the human visual cortex.

While the work that the visual cortex does is complex and hard to crack (from where we are now), it doesn't seem that being able to replicate it would lead to AGI. Is there a reason I should think otherwise?

Comment author: Houshalter 27 June 2015 08:16:55AM 4 points

There is the 'one learning algorithm' hypothesis: that most of the brain uses a single algorithm for learning and pattern recognition, rather than specialized modules, one for vision, another for audio, and so on.

The evidence comes from experiments where researchers cut the connection from the eyes to the visual cortex in an animal and rerouted it to the auditory cortex (and I think vice versa). The animal then learned to see fine; its auditory cortex simply learned to do vision instead.
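
A toy way to see the shape of the hypothesis (my illustration, not a model of those experiments): the learner below is modality-agnostic. It is fed synthetic stand-ins for image patches and audio frames, and nothing in the code changes between the two.

```python
# Toy sketch of the 'one learning algorithm' idea: the identical
# unsupervised learner -- here, plain k-means on small patches -- is
# pointed at two different 'modalities'. The data is synthetic noise,
# so the learned features mean nothing; the point is structural:
# nothing in the code is vision- or audio-specific.
import numpy as np

rng = np.random.default_rng(0)

def kmeans_features(patches, k=8, iters=20):
    """Plain k-means: learn k 'receptive fields' from a patch matrix."""
    centers = patches[rng.choice(len(patches), k, replace=False)]
    for _ in range(iters):
        # Assign each patch to its nearest center.
        d = ((patches[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        assign = d.argmin(1)
        # Move each center to the mean of its assigned patches.
        for j in range(k):
            if (assign == j).any():
                centers[j] = patches[assign == j].mean(0)
    return centers

image_patches = rng.normal(size=(500, 64))  # stand-in for 8x8 image patches
audio_frames = rng.normal(size=(500, 64))   # stand-in for spectrogram slices

v1_like = kmeans_features(image_patches)    # 'visual cortex' features
a1_like = kmeans_features(audio_frames)     # 'auditory cortex' features
print(v1_like.shape, a1_like.shape)         # same learner, two modalities
```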

Comment author: jsteinhardt 25 June 2015 05:38:15AM 0 points

which is more than you could say about e.g. symbolic logic; human minds don't run on logic at all

This seems like an odd thing to say. I would say that representation learning (the thing that neural nets do) and compositionality (the thing that symbolic logic does) are likely both part of the puzzle?
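
To make the "both parts" idea concrete, here is a minimal sketch in the spirit of recursive neural networks (my construction, with random stand-in vectors, not anything from the comment): learned embeddings supply the representations, and a symbolic parse tree dictates how they compose.

```python
# Minimal sketch: representation learning supplies the pieces (word
# vectors), compositionality supplies the glue (meaning of a tree is a
# function of the meanings of its parts). The embeddings and composer
# weights are random stand-ins for what a network would actually learn.
import numpy as np

rng = np.random.default_rng(0)
DIM = 4

# Representation learning's contribution: a vector per word (random here).
embed = {w: rng.normal(size=DIM) for w in ["not", "very", "good"]}

# Compositionality's contribution: an untrained stand-in for a learned
# composition function mapping two child vectors to one parent vector.
W = rng.normal(0, 0.5, (DIM, 2 * DIM))

def meaning(tree):
    if isinstance(tree, str):  # leaf: look up the word vector
        return embed[tree]
    left, right = tree         # node: compose the two children
    return np.tanh(W @ np.concatenate([meaning(left), meaning(right)]))

# ("not", ("very", "good")) composes bottom-up along the parse structure.
print(meaning(("not", ("very", "good"))))
```

The parse tree is pure symbolic structure and the vectors are pure learned representation; neither alone determines the output, which is roughly the complementarity being claimed.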