DanBurfoot comments on Artificial Addition - Less Wrong

36 Post author: Eliezer_Yudkowsky 20 November 2007 07:58AM

Comment author: DanBurfoot 20 November 2007 02:49:59PM 3 points [-]


Did you include your own answer to the question of why AI hasn't arrived yet in the list? :-)

This is a nice post. Another way of stating the moral might be: "If you want to understand something, you have to stare your confusion right in the face; don't look away for a second."

So, what is confusing about intelligence? That question is problematic: a better one might be "what isn't confusing about intelligence?"

Here's one thing I've pondered at some length. VC theory states that, in order to generalize well, a learning machine must implement some form of capacity control or regularization, which roughly means that the model class it uses must have limited complexity (bounded VC dimension). This is essentially Occam's razor.

But the brain has on the order of 10^12 synapses, and so it must be enormously complex. How can the brain generalize, if it has so many parameters? Are the vast majority of synaptic weights actually not learned, but rather preset somehow? Or, is regularization implemented in some other way, perhaps by applying random changes to the value of the weights (this would seem biochemically plausible)?

Also, the brain has a very high metabolic cost, so all those neurons must be doing something valuable.
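To make the capacity-control point concrete, here is a toy sketch (all numbers, sizes, and the setup are made up for illustration): an over-parameterized polynomial is fit to noisy linear data by stochastic gradient descent, with and without an L2 weight-decay penalty. The penalty is one simple form of the regularization VC theory asks for, and it visibly shrinks the model's effective complexity.

```python
import random

random.seed(0)

# Toy data: y is roughly x, plus noise.
xs = [i / 10 for i in range(10)]
ys = [x + random.gauss(0, 0.05) for x in xs]

def fit(l2, steps=5000, lr=0.01):
    """Fit a 9-parameter polynomial by per-sample gradient descent.

    l2 > 0 adds weight decay (an L2 penalty), a basic form of
    capacity control for an over-parameterized model.
    """
    w = [0.0] * 9
    for _ in range(steps):
        for x, y in zip(xs, ys):
            pred = sum(wk * x**k for k, wk in enumerate(w))
            err = pred - y
            for k in range(9):
                w[k] -= lr * (err * x**k + l2 * w[k])
    return w

w_free = fit(l2=0.0)   # unregularized: free to use all its capacity
w_reg = fit(l2=0.1)    # regularized: weights are pulled toward zero

# The penalized fit ends up with much smaller total weight magnitude.
print(sum(abs(v) for v in w_free), sum(abs(v) for v in w_reg))
```

The open question in the comment above is which, if any, of these mechanisms the brain's 10^12 synapses implement.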

Comment author: JohnH 22 April 2011 11:40:23PM 4 points [-]

Are the vast majority of synaptic weights actually not learned, but rather preset somehow?

This is what some philosophers have proposed; others have thought we start as a blank slate. Research into the subject has shown that babies do start with some sort of working model of things. That is, we begin life with a set of preset preferences, the ability to distinguish those preferences, and a basic understanding of geometric shapes.

Comment author: wedrifid 22 April 2011 11:51:39PM 5 points [-]

It would be shocking if we didn't have preset functions. Calves, for example, can walk almost straight away and can swim not much later. We aren't going to have entirely eliminated the mammalian ability to start with a set of preset features; there just isn't enough selection pressure to lose more than a few of them.

Comment author: Cyan 23 April 2011 01:07:24AM 6 points [-]

If you put a newborn whose mother had an unmedicated labor on the mother's stomach, the baby will move up to a breast and start to feed.

Comment author: wedrifid 23 April 2011 07:23:46AM *  2 points [-]

Good point. Drink (food), breathe, scream, and a couple of cute reactions to keep caretakers interested: all you need to bootstrap a human growth process. There seems to be something built in about eye contact management too, because a lack thereof is an early indicator that something is wrong.

Comment author: Houshalter 24 February 2014 09:55:14PM 2 points [-]

a couple of cute reactions to keep caretakers interested

Not terribly relevant to your point, but it's likely that the human sense of cuteness is based on what babies do, rather than the other way around.

Comment author: Nornagest 24 February 2014 10:02:49PM *  1 point [-]

I'd replace "human" with "mammalian" -- most young mammals share a similar set of traits, even those that aren't constrained as we are by big brains and a pelvic girdle adapted to walking upright. That seems to suggest a more basal cuteness response; I believe the biology term is "baby schema".

Other than that, yeah.

Comment author: Eugene 18 February 2012 11:23:35AM 4 points [-]

Conversely, studies with newborn mammals have shown that if you deprive them of something as simple as horizontal lines, they will grow up unable to distinguish lines that approach 'horizontalness'. So even separating the most basic evolved behavior from the most basic learned behavior is not intuitive.

Comment author: Cyan 20 February 2012 06:25:53AM 0 points [-]

The deprivation you're talking about takes place over the course of days and weeks -- it reflects the effects of (lack of) reinforcement learning, so it's not really germane to a discussion of preset functions that manifest in the first few minutes after birth.

Comment author: Eugene 27 July 2012 02:31:01AM 3 points [-]

It's relevant insofar as we shouldn't make assumptions about what is and is not preset simply based on observations that take place in a "typical" environment.

Comment author: Cyan 27 July 2012 03:26:00AM *  2 points [-]

Ah, a negative example. Fair point. Guess I wasn't paying enough attention and missed the signal you meant to send by using "conversely" as the first word of your comment.

Comment author: Eugene 27 July 2012 05:37:42AM 1 point [-]

That was lazy of me, in retrospect. I find that often I'm poorer at communicating my intent than I assume I am.

Comment author: Kenny 13 January 2013 06:57:25PM 1 point [-]

Comment author: Houshalter 26 June 2014 11:24:25PM 0 points [-]

Artificial neural networks have been trained with millions of parameters. There are a lot of different methods of regularization, like DropConnect or sparsity constraints. But the brain does online learning; overfitting isn't as big a concern because it doesn't see the data more than once.
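As one concrete instance of this kind of regularization, here is a minimal sketch of dropout, a close relative of the DropConnect idea mentioned above (the layer sizes and values are made up for illustration): each hidden unit is zeroed at random during training, so the network can't come to rely on any single unit.

```python
import random

random.seed(1)

def dropout(activations, p):
    """Inverted dropout: zero each unit with probability p, and scale
    survivors by 1/(1-p) so the expected activation is unchanged."""
    return [0.0 if random.random() < p else a / (1 - p)
            for a in activations]

hidden = [0.5, 1.2, -0.3, 0.8]       # hypothetical hidden-layer outputs
dropped = dropout(hidden, p=0.5)
print(dropped)  # some entries zeroed, the rest scaled up by 2x
```

DropConnect does the same thing to individual weights rather than whole units; both inject noise that limits effective capacity.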

Comment author: Punoxysm 27 June 2014 01:30:57AM *  0 points [-]

On the other hand, architecture matters. The most successful neural network for a given task has connections designed for the structure of that task, so that it will learn much more quickly than a fully-connected or arbitrarily connected network.

The human brain appears to have a great deal of information and structure in its architecture right off the bat.

Comment author: [deleted] 27 June 2014 05:49:40AM 0 points [-]

The human brain appears to engage in hierarchical learning, which is what allows it to leverage huge amounts of "general case" abstract knowledge in attacking novel specific problems put before it.

Comment author: Houshalter 27 June 2014 06:34:12PM 0 points [-]

I'm not saying that you're wrong, but the state of the art in computer vision is weight sharing, which biological NNs probably can't do. Hyperparameters like the number of layers and how local the connections should be are important, but they don't give that much prior information about the task.

I may be completely wrong, but I do suspect that biological NNs are far more general purpose and less "pre-programmed" than is usually thought. The learning rules for a neural network are far simpler than the functions they learn. Training neural networks with genetic algorithms is extremely slow.
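For readers unfamiliar with the term, "weight sharing" just means one small filter reused at every position, so a layer spanning thousands of inputs needs only a handful of learned parameters. A toy 1-D sketch (the filter and signal here are invented for illustration):

```python
def conv1d(signal, kernel):
    """Slide one shared kernel across the signal (valid convolution)."""
    k = len(kernel)
    return [sum(kernel[j] * signal[i + j] for j in range(k))
            for i in range(len(signal) - k + 1)]

edge_filter = [-1.0, 0.0, 1.0]   # one shared set of 3 weights
signal = [0, 0, 0, 1, 1, 1]      # a step edge in the input
print(conv1d(signal, edge_filter))  # -> [0.0, 1.0, 1.0, 0.0]
```

The same 3 weights detect the edge wherever it occurs; a fully connected layer would need separate weights for every position.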

Comment author: Punoxysm 27 June 2014 07:12:05PM 0 points [-]

The architecture of the V1 and V2 areas of the brain, from which Convolutional Neural Networks and other ANNs for vision borrow heavily, is highly geared towards vision, and includes basic filters that detect stripes, dots, corners, etc., which appear in all sorts of computer vision work. Yes, no backpropagation or weight-sharing is directly responsible for this, but the presence of local filters is still what I would call very specific architecture (I've studied computer vision, and the inspiration it draws from early vision specifically, so I can say more about this).

The way genetic algorithms tune weights in an ANN (and yes, this is an awful way to train an ANN) is very different from the way they work in actually evolving a brain, which is working on the genetic code that develops the brain. I'd say they are so wildly different that no conclusions from the first can be applied to the second.

During a single individual's life, Hebbian and other learning mechanisms in the brain are distinct from gradient learning, but can achieve somewhat similar things.
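The Hebbian-vs-gradient distinction can be sketched in a few lines (all weights, rates, and values here are arbitrary illustrations, not a model of any real circuit): a Hebbian update strengthens a weight whenever pre- and post-synaptic activity are correlated, needing no error signal, while a gradient-style update moves the weight to reduce an explicit error against a target.

```python
def hebbian_step(w, pre, post, lr=0.1):
    # "Neurons that fire together wire together": weight grows with
    # correlated pre- and post-synaptic activity; no target needed.
    return [wi + lr * pre[i] * post for i, wi in enumerate(w)]

def gradient_step(w, pre, post, target, lr=0.1):
    # Gradient descent on squared error: needs an explicit target.
    err = post - target
    return [wi - lr * err * pre[i] for i, wi in enumerate(w)]

w = [0.2, 0.4]                                    # toy synaptic weights
pre = [1.0, 0.5]                                  # pre-synaptic activity
post = sum(wi * xi for wi, xi in zip(w, pre))     # linear unit: 0.4

print(hebbian_step(w, pre, post))                 # both weights grow
print(gradient_step(w, pre, post, target=1.0))    # weights move toward target
```

Despite the different update rules, both can end up reinforcing the same weights in cases like this one, which is the "somewhat similar things" point above.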