komponisto comments on Existential Risk and Public Relations - Less Wrong

36 Post author: multifoliaterose 15 August 2010 07:16AM


Comment author: komponisto 16 August 2010 02:02:16AM *  1 point [-]

Well, it seems we disagree. Honestly, I see the problem of AGI as the fairly concrete one of assembling an appropriate collection of thousands-to-millions of "narrow AI" subcomponents.

Perhaps another way to put it would be that I suspect the Kolmogorov complexity of any AGI is so high that it's unlikely that the source code could be stored in a small number of human brains (at least the way the latter currently work).

EDIT: When I say "I suspect" here, of course I mean "my impression is". I don't mean to imply that I don't think this thought has occurred to the people at SIAI (though it might be nice if they could explain why they disagree).
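As an aside on why Kolmogorov complexity only ever enters such arguments as an estimate: it is uncomputable, but any off-the-shelf compressor yields a computable upper bound. A minimal illustration (my own sketch, not from the thread):

```python
import random
import zlib

# Compressed length is a computable upper bound on Kolmogorov complexity
# (the true value is uncomputable).  A highly regular string has a short
# description; pseudo-random data is essentially incompressible.
regular = b"ab" * 10_000                    # 20,000 bytes, trivially patterned
noisy = random.Random(0).randbytes(20_000)  # 20,000 pseudo-random bytes

bound_regular = len(zlib.compress(regular, 9))
bound_noisy = len(zlib.compress(noisy, 9))

assert bound_regular < 200       # pattern -> tiny upper bound
assert bound_noisy > 19_000      # noise -> bound near the raw size
```

The same logic is why "the genome is N megabytes" functions as an upper bound on the design information it carries, rather than an exact measure.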

Comment author: CarlShulman 16 August 2010 11:35:39AM 6 points [-]

The portion of the genome coding for brain architecture is a lot smaller than Windows 7, bit-wise.
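A rough back-of-the-envelope version of this comparison (the figures are my own round-number assumptions: ~3.2 billion base pairs at 2 bits each, and a ~3 GB compressed Windows 7 install image):

```python
base_pairs = 3.2e9              # approximate human genome length (assumed)
raw_bytes = base_pairs * 2 / 8  # 2 bits encode each base (A, C, G, T)
raw_mb = raw_bytes / 1e6        # ~800 MB for the *entire* raw genome

windows7_mb = 3000.0            # assumed ~3 GB compressed install image

# Even uncompressed, the whole genome is well under Windows 7's size,
# and only a fraction of it codes for brain architecture.
assert raw_mb < windows7_mb
```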

Comment author: whpearson 17 August 2010 02:29:22PM 3 points [-]

A somewhat relevant article on the information needed to specify the brain. It is a biologist tearing a strip off Kurzweil for suggesting that we'll be able to reverse-engineer the human brain in a decade by looking at the genome.

Comment author: CarlShulman 17 August 2010 02:54:39PM *  4 points [-]

P.Z. is misreading a quote from a secondhand report. Kurzweil is not talking about reading out the genome and simulating the brain from that, but about using improvements in neuroimaging to inform input-output models of brain regions. The genome point is just an indicator of the limited number of component types involved, which helps to constrain estimates of difficulty.

Edit: Kurzweil has now replied, more or less along the lines above.

Comment author: timtyler 17 August 2010 04:46:30PM *  0 points [-]

Kurzweil's analysis is simply wrong. Here's the gist of my refutation of it:

"So, who is right? Does the brain's design fit into the genome? - or not?

The detailed form of proteins arises from a combination of the nucleotide sequence that specifies them, the cytoplasmic environment in which gene expression takes place, and the laws of physics.

We can safely ignore the contribution of cytoplasmic inheritance - however, the contribution of the laws of physics is harder to discount. At first sight, it may seem simply absurd to argue that the laws of physics contain design information relating to the construction of the human brain. However, there is a well-established mechanism by which physical law may do just that - an idea known as the anthropic principle. This argues that the universe we observe must necessarily permit the emergence of intelligent agents. If that involves encoding the design of the brains of intelligent agents into the laws of physics, then so be it. There are plenty of apparently-arbitrary constants in physics where such information could conceivably be encoded: the fine structure constant, the cosmological constant, Planck's constant - and so on.

At the moment, it is not even possible to bound the quantity of brain-design information so encoded. When we get machine intelligence, we will have an independent estimate of the complexity of the design required to produce an intelligent agent. Alternatively, when we know what the laws of physics are, we may be able to bound the quantity of information encoded by them. However, today neither option is available to us."

Comment author: whpearson 17 August 2010 10:44:28PM 0 points [-]

Wired really messed up the flow of the talk in that case. Is it based on a Singularity Summit talk?

Comment author: Perplexed 17 August 2010 03:31:02PM 0 points [-]

I agree with your analysis, but I also understand where PZ is coming from. You write above that the portion of the genome coding for the brain is small. PZ replies that the small part of the genome you are referring to does not by itself explain the brain; you also need to understand the decoding algorithm - itself scattered through the whole genome and perhaps also the zygotic "epigenome". You might perhaps clarify that what you were talking about with "small portion of the genome" was the Kolmogorov complexity, so you were already including the decoding algorithm in your estimate.

The problem is, how do you get the point through to PZ and other biologists who come at the question from an evo-devo PoV? I think that someone ought to write a comment correcting PZ, but in order to do so, the commenter would have to speak the languages of three fields - neuroscience, evo-devo, and information theory. And understand all three well enough to unpack the jargon for laymen without thereby losing credibility with people who do know one or more of the three fields.

Comment author: timtyler 17 August 2010 04:52:51PM *  0 points [-]

The problem is, how do you get the point through to PZ and other biologists who come at the question from an evo-devo PoV?

Why bother? PZ's rather misguided rant isn't doing very much damage. Just ignore him, I figure.

Maybe it is a slow news day. PZ's rant got Slashdotted:

http://science.slashdot.org/story/10/08/17/1536233/Ray-Kurzweil-Does-Not-Understand-the-Brain

PZ has stooped pretty low with the publicity recently:

http://scienceblogs.com/pharyngula/2010/08/the_eva_mendes_sex_tape.php

Maybe he was trolling with his Kurzweil rant. He does have a history with this subject matter, though:

http://scienceblogs.com/pharyngula/2009/02/singularly_silly_singularity.php

Comment author: Jonathan_Graehl 16 August 2010 09:58:56PM *  1 point [-]

Obviously the genome alone doesn't build a brain. I wonder how many "bits" I should add on for the normal environment that's also required (in terms of how much additional complexity is needed to get the first artificial mind that can learn about the world given additional sensory-like inputs). Probably not too many.

Comment author: komponisto 16 August 2010 12:14:45PM *  1 point [-]

Thanks, this is useful to know. Will revise beliefs accordingly.

Comment author: Eliezer_Yudkowsky 18 August 2010 02:32:54PM 2 points [-]

Honestly, I see the problem of AGI as the fairly concrete one of assembling an appropriate collection of thousands-to-millions of "narrow AI" subcomponents.

What do you think you know and how do you think you know it? Let's say you have a thousand narrow AI subcomponents. (Millions = implausible due to genome size, as Carl Shulman points out.) Then what happens, besides "then a miracle occurs"?

Comment author: komponisto 19 August 2010 12:13:15AM *  2 points [-]

What happens is that the machine has so many different abilities (playing chess and walking and making airline reservations and...) that its cumulative effect on its environment is comparable to a human's or greater; in contrast to the previous version with 900 components, which was only capable of responding to the environment on the level of a chess-playing, web-searching squirrel.

This view arises from what I understand about the "modular" nature of the human brain: we think we're a single entity that is "flexible enough" to think about lots of different things, but in reality our brains consist of a whole bunch of highly specialized "modules", each able to do some single specific thing.

Now, to head off the "Fly Q" objection, let me point out that I'm not at all suggesting that an AGI has to be designed like a human brain. Instead, I'm "arguing" (expressing my perception) that the human brain's general intelligence isn't a miracle: intelligence really is what inevitably happens when you string zillions of neurons together in response to some optimization pressure. And the "zillions" part is crucial.

(Whoever downvoted the grandparent was being needlessly harsh. Why in the world should I self-censor here? I'm just expressing my epistemic state, and I've even made it clear that I don't believe I have information that SIAI folks don't, or am being more rational than they are.)

Comment author: Eliezer_Yudkowsky 19 August 2010 12:38:13AM 5 points [-]

If a thousand species in nature with a thousand different abilities were to cooperate, would they equal the capabilities of a human? If not, what else is missing?

Comment author: thomblake 19 August 2010 12:48:18AM 2 points [-]

Tough problem. My first reaction is 'yes', but I think that might be because we're assuming cooperation, which might be letting more in the door than you want.

Comment author: wedrifid 19 August 2010 04:54:20AM 0 points [-]

Exactly the thought I had. Cooperation is kind of a big deal.

Comment author: komponisto 19 August 2010 12:47:29AM *  -1 points [-]

Yes, if there were a sufficiently powerful optimization process controlling the form of their cooperation.

Comment author: komponisto 19 August 2010 02:04:07AM 2 points [-]

I am highly confused about the parent having been voted down, to the point where I am in a state of genuine curiosity about what went through the voter's mind as he or she saw it.

Eliezer asked whether a thousand different animals cooperating could have the power of a human. I answered:

Yes, if there were a sufficiently powerful optimization process controlling the form of their cooperation

And then someone came along, read this, and thought....what? Was it:

  • "No, you idiot, obviously no optimization process could be that powerful." ?

  • "There you go: 'sufficiently powerful optimization process' is equivalent to 'magic happens'. That's so obvious that I'm not going to waste my time pointing it out; instead, I'm just going to lower your status with a downvote." ?

  • "Clearly you didn't understand what Eliezer was asking. You're in over your head, and shouldn't be discussing this topic." ?

  • Something else?

Comment author: WrongBot 19 August 2010 02:01:32AM 0 points [-]

The optimization process is the part where the intelligence lives.

Comment author: komponisto 19 August 2010 02:08:24AM *  2 points [-]

Natural selection is an optimization process, but it isn't intelligent.

Also, the point here is AI -- one is allowed to assume the use of intelligence in shaping the cooperation. That's not the same as using intelligence as a black box in describing the nature of it.

If you were the downvoter, might I suggest giving me the benefit of the doubt that I'm up to speed on these kinds of subtleties? (I.e. if I make a comment that sounds dumb to you, think about it a little more before downvoting?)

Comment author: WrongBot 19 August 2010 02:23:32AM 0 points [-]

You were at +1 when I downvoted, so I'm not alone.

Natural selection is a very bad optimization process, and so it's quite unintelligent relative to any standards we might have as humans.

Comment author: komponisto 19 August 2010 02:28:39AM *  1 point [-]

Now it's my turn to downvote, on the grounds that you didn't understand my comment. I agree that natural selection is unintelligent -- that was my whole point! It was intended as a counterexample to your implied assertion that an appeal to an optimization process is an appeal to intelligence.

EDIT: I suppose this confirms on a small scale what had become apparent in the larger discussion here about SIAI's public relations: people really do have more trouble noticing intellectual competence than I tend to realize.

Comment author: Eliezer_Yudkowsky 19 August 2010 04:10:51AM -2 points [-]

Downvoted for retaliatory downvoting; voted everything else up toward 0.

Comment author: WrongBot 19 August 2010 05:23:08PM *  1 point [-]

(N.B. I just discovered that I had not, in fact, downvoted the comment that began this discussion. I must have had it confused with another.)

Like Eliezer, I generally think of intelligence and optimization as describing the same phenomenon. So when I saw this exchange:

If a thousand species in nature with a thousand different abilities were to cooperate, would they equal the capabilities of a human? If not, what else is missing?

Yes, if there were a sufficiently powerful optimization process controlling the form of their cooperation.

I read your reply as meaning approximately "1000 small cognitive modules are a really powerful optimization process if and only if their cooperation is controlled by a sufficiently powerful optimization process."

To answer the question you asked here, I thought the comment was worthy of a downvote (though apparently I did not actually follow through) because it was circular in a non-obvious way that contributed only confusion.

I am probably a much more ruthless downvoter than many other LessWrong posters; my downvotes indicate a desire to see "fewer things like this" with a very low threshold.

Comment author: whpearson 19 August 2010 09:01:32AM 0 points [-]

Do you expect the conglomerate entity to be able to read, or to be able to learn how to? Considering Eliezer can quite happily pick many, many things like the archer fish (ability to shoot water to take out flying insects) and chameleons (ability to control their eyes independently), I'm not sure how they all add up to reading.

Comment author: jacob_cannell 25 August 2010 03:46:49AM *  0 points [-]

This view arises from what I understand about the "modular" nature of the human brain: we think we're a single entity that is "flexible enough" to think about lots of different things, but in reality our brains consist of a whole bunch of highly specialized "modules", each able to do some single specific thing.

The brain has many different components with specializations, but the largest and human dominant portion, the cortex, is not really specialized at all in the way you outline.

The cortex is no more specialized than your hard drive.

It's composed of a single repeating structure and an associated learning algorithm that appears to be universal. The functional specializations that appear in the adult brain arise due to topological wiring proximity to the relevant sensory and motor connections. The V1 region is not hard-wired to perform mathematically optimal Gabor-like edge filters. It automatically evolves into this configuration because it is the optimal configuration for modelling the input data at that layer, and it evolves thus solely based on exposure to said input data from retinal ganglion cells.

You can think of cortical tissue as a biological 'neuronium'. It has a semi-magical emergent capacity to self-organize into an appropriate set of feature detectors based on what it's wired to (more on this).
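The self-organization claim can be illustrated with a toy model. Below is a minimal 1-D Kohonen self-organizing map (my own sketch, not from the comment, with arbitrary learning-rate and neighborhood parameters): units start with random "preferred inputs" and, through purely local competitive learning, end up topographically ordered by the input statistics alone.

```python
import numpy as np

# Minimal 1-D Kohonen self-organizing map: a toy analogue of cortex
# organizing into feature detectors from exposure to input data.
rng = np.random.default_rng(0)
n_units = 12
weights = rng.random(n_units)        # random initial preferences in [0, 1]

n_steps = 4000
idx = np.arange(n_units)
for step in range(n_steps):
    x = rng.random()                              # input drawn from U(0, 1)
    bmu = int(np.argmin(np.abs(weights - x)))     # best-matching unit
    lr = 0.2 * np.exp(-step / (n_steps / 2))      # decaying learning rate
    sigma = 3.0 * np.exp(-step / (n_steps / 3))   # shrinking neighborhood
    influence = np.exp(-((idx - bmu) ** 2) / (2 * sigma**2 + 1e-9))
    weights += lr * influence * (x - weights)     # pull neighborhood toward x

# After training, neighboring units prefer neighboring inputs:
# the map has become (roughly) monotonically ordered.
diffs = np.diff(weights)
ordered = bool(np.all(diffs > 0) or np.all(diffs < 0))
```

Nothing in the update rule names a "feature map"; the topographic order emerges from local competition plus neighborhood cooperation, which is the gist of the "neuronium" intuition above.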

All that being said, the inter-regional wiring itself is currently less understood and is probably more genetically predetermined.

Comment author: Jonathan_Graehl 16 August 2010 10:01:23PM 0 points [-]

Well, it seems we disagree. Honestly, I see the problem of AGI as the fairly concrete one of assembling an appropriate collection of thousands-to-millions of "narrow AI" subcomponents.

There may be other approaches that are significantly simpler (that we haven't yet found, obviously). Assuming AGI happens, it will have been a race between the specific (type of) path you imagine, and every other alternative you didn't think of. In other words, you think you have an upper bound on how much time/expense it will take.

Comment author: whpearson 16 August 2010 11:00:24AM *  0 points [-]

I'm not a member of SIAI but my reason for thinking that AGI is not just going to be like lots of narrow bits of AI stuck together is that I can see interesting systems that haven't been fully explored (due to difficulty of exploration). These types of systems might solve some of the open problems not addressed by narrow AI.

These are problems such as

  • How can a system become good at so many different things when it starts off the same? Especially puzzling is how people build complex (unconscious) machinery for dealing with problems that we are not adapted for, like chess.
  • How can a system look after/upgrade itself without getting completely pwned by malware? (We do get partially pwned by hostile memes, but that is not a complete takeover of the same type as getting rooted.)

Now I also doubt that these systems will develop quickly when people get around to investigating them. And they will have elements of traditional narrow AI in them as well, but as changeable/adaptable parts of the system, not fixed subcomponents. What I think needs exploring is primarily changes in software life-cycles rather than a change in the nature of the software itself.