Daniel_Burfoot comments on Existential Risk and Public Relations - Less Wrong
Although one should be very, very careful not to confuse the opinions of someone like Goertzel with those of the people (currently) at SIAI, I think it's fair to say that most of them (including, in particular, Eliezer) hold a view similar to this. And this is the location -- pretty much the only important one -- of my disagreement with those folks. (Or, rather, I should say my differing impression from those folks -- to make an important distinction brought to my attention by one of the folks in question, Anna Salamon.) Most of Eliezer's claims about the importance of FAI research seem obviously true to me (to the point where I marvel at the fuss that is regularly made about them), but the one that I have not quite been able to swallow is the notion that AGI is only decades away, as opposed to a century or two. And the reason is essentially disagreement on the above point.
At first glance this may seem puzzling, since, given how much more attention is given to narrow AI by researchers, you might think that someone who believes AGI is "fundamentally different" from narrow AI might be more pessimistic about the prospect of AGI coming soon than someone (like me) who is inclined to suspect that the difference is essentially quantitative. The explanation, however, is that (from what I can tell) the former belief leads Eliezer and others at SIAI to assign (relatively) large amounts of probability mass to the scenario of a small set of people having some "insight" which allows them to suddenly invent AGI in a basement. In other words, they tend to view AGI as something like an unsolved math problem, like those on the Clay Millennium list, whereas it seems to me like a daunting engineering task analogous to colonizing Mars (or maybe Pluto).
This -- much more than all the business about fragility of value and recursive self-improvement leading to hard takeoff, which frankly always struck me as pretty obvious, though maybe there is hindsight involved here -- is the area of Eliezer's belief map that, in my opinion, could really use more public, explicit justification.
I don't think this is a good analogy. The problem of colonizing Mars is concrete. You can make a TODO list; you can carve the larger problem up into subproblems like rockets, fuel supply, life support, and so on. Nobody knows how to do that for AI.
OK, but it could still end up being like colonizing Mars if at some point someone realizes how to do that. Maybe komponisto thinks that someone will probably carve AGI into subproblems before it is solved.
Well, it seems we disagree. Honestly, I see the problem of AGI as the fairly concrete one of assembling an appropriate collection of thousands-to-millions of "narrow AI" subcomponents.
Perhaps another way to put it would be that I suspect the Kolmogorov complexity of any AGI is so high that it's unlikely that the source code could be stored in a small number of human brains (at least the way the latter currently work).
EDIT: When I say "I suspect" here, of course I mean "my impression is". I don't mean to imply that I don't think this thought has occurred to the people at SIAI (though it might be nice if they could explain why they disagree).
The portion of the genome coding for brain architecture is a lot smaller than Windows 7, bit-wise.
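To put rough numbers on that comparison (the brain-related fraction below is my own assumption for illustration; published estimates vary widely):

```python
# Back-of-envelope: information content of the genome vs. a modern OS install.
# All figures are rough, widely cited estimates; the brain-specific fraction
# in particular is an assumed number, not a measured quantity.

GENOME_BASE_PAIRS = 3.2e9          # approximate human genome length
BITS_PER_BASE = 2                  # 4 nucleotides = 2 bits each

genome_bytes = GENOME_BASE_PAIRS * BITS_PER_BASE / 8
print(f"Whole genome, uncompressed: {genome_bytes / 1e6:.0f} MB")   # ~800 MB

# Suppose (hypothetically) that the machinery relevant to brain
# architecture amounts to a few percent of the total.
brain_fraction = 0.05
print(f"Brain-related portion (assumed {brain_fraction:.0%}): "
      f"{genome_bytes * brain_fraction / 1e6:.0f} MB")              # ~40 MB

WINDOWS7_INSTALL_BYTES = 16e9      # Microsoft's stated minimum disk space
print(f"Windows 7 install: {WINDOWS7_INSTALL_BYTES / 1e9:.0f} GB")
```

Even taking the whole genome uncompressed, it is well under a gigabyte, an order of magnitude below the OS install footprint.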
A somewhat relevant article on the information needed to specify the brain: a biologist tearing a strip off Kurzweil for suggesting that we'll be able to reverse engineer the human brain in a decade by looking at the genome.
P.Z. is misreading a quote from a secondhand report. Kurzweil is not talking about reading out the genome and simulating the brain from that, but about using improvements in neuroimaging to inform input-output models of brain regions. The genome point is just an indicator of the limited number of component types involved, which helps to constrain estimates of difficulty.
Edit: Kurzweil has now replied, more or less along the lines above.
Kurzweil's analysis is simply wrong. Here's the gist of my refutation of it:
"So, who is right? Does the brain's design fit into the genome? - or not?
The detailed form of proteins arises from a combination of the nucleotide sequence that specifies them, the cytoplasmic environment in which gene expression takes place, and the laws of physics.
We can safely ignore the contribution of cytoplasmic inheritance - however, the contribution of the laws of physics is harder to discount. At first sight, it may seem simply absurd to argue that the laws of physics contain design information relating to the construction of the human brain. However, there is a well-established mechanism by which physical law may do just that - an idea known as the anthropic principle. This argues that the universe we observe must necessarily permit the emergence of intelligent agents. If that involves encoding the design of the brains of intelligent agents into the laws of physics, then so be it. There are plenty of apparently arbitrary constants in physics where such information could conceivably be encoded: the fine structure constant, the cosmological constant, Planck's constant - and so on.
At the moment, it is not even possible to bound the quantity of brain-design information so encoded. When we get machine intelligence, we will have an independent estimate of the complexity of the design required to produce an intelligent agent. Alternatively, when we know what the laws of physics are, we may be able to bound the quantity of information encoded by them. However, today neither option is available to us."
Wired really messed up the flow of the talk in that case. Is it based on a Singularity Summit talk?
I agree with your analysis, but I also understand where PZ is coming from. You write above that the portion of the genome coding for the brain is small. PZ replies that the small part of the genome you are referring to does not by itself explain the brain; you also need to understand the decoding algorithm - itself scattered through the whole genome and perhaps also the zygotic "epigenome". You might perhaps clarify that what you were talking about with "small portion of the genome" was the Kolmogorov complexity, so you were already including the decoding algorithm in your estimate.
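A toy illustration of that point, using Python's standard-library zlib (the payload is arbitrary): the compressed bytes alone are not a complete description, because without the decompressor you cannot recover the original.

```python
# A fair description-length estimate counts payload + decoder, which is
# the spirit of Kolmogorov complexity: the shortest *program* (data and
# decoding algorithm together) that outputs the object.
import zlib

original = b"the quick brown fox jumps over the lazy dog " * 100
payload = zlib.compress(original, 9)

print(len(original), "bytes raw")
print(len(payload), "bytes compressed")

# Recovering the original requires the decoding algorithm too:
assert zlib.decompress(payload) == original
```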
The problem is, how do you get the point through to PZ and other biologists who come at the question from an evo-devo point of view? I think someone ought to write a comment correcting PZ, but in order to do so, the commenter would have to speak the languages of three fields - neuroscience, evo-devo, and information theory - and understand all three well enough to unpack the jargon for laymen without thereby losing credibility with people who do know one or more of the three fields.
Why bother? PZ's rather misguided rant isn't doing very much damage. Just ignore him, I figure.
Maybe it is a slow news day. PZ's rant got Slashdotted:
http://science.slashdot.org/story/10/08/17/1536233/Ray-Kurzweil-Does-Not-Understand-the-Brain
PZ has stooped pretty low with the publicity recently:
http://scienceblogs.com/pharyngula/2010/08/the_eva_mendes_sex_tape.php
Maybe he was trolling with his Kurzweil rant. He does have a history with this subject matter, though:
http://scienceblogs.com/pharyngula/2009/02/singularly_silly_singularity.php
Obviously the genome alone doesn't build a brain. I wonder how many "bits" I should add on for the normal environment that's also required (in terms of how much additional complexity is needed to get the first artificial mind that can learn about the world given additional sensory-like inputs). Probably not too many.
Thanks, this is useful to know. Will revise beliefs accordingly.
What do you think you know and how do you think you know it? Let's say you have a thousand narrow AI subcomponents. (Millions = implausible due to genome size, as Carl Shulman points out.) Then what happens, besides "then a miracle occurs"?
What happens is that the machine has so many different abilities (playing chess and walking and making airline reservations and...) that its cumulative effect on its environment is comparable to a human's or greater; in contrast to the previous version with 900 components, which was only capable of responding to the environment on the level of a chess-playing, web-searching squirrel.
This view arises from what I understand about the "modular" nature of the human brain: we think we're a single entity that is "flexible enough" to think about lots of different things, but in reality our brains consist of a whole bunch of highly specialized "modules", each able to do some single specific thing.
Now, to head off the "Fly Q" objection, let me point out that I'm not at all suggesting that an AGI has to be designed like a human brain. Instead, I'm "arguing" (expressing my perception) that the human brain's general intelligence isn't a miracle: intelligence really is what inevitably happens when you string zillions of neurons together in response to some optimization pressure. And the "zillions" part is crucial.
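For concreteness, here is a toy sketch of the shape of architecture I have in mind. Every name in it is hypothetical, and the hard part (the router) is deliberately trivialized; this illustrates "many narrow modules plus routing", it is not a proposal.

```python
# Toy sketch: general-seeming behavior as a library of narrow skills
# plus a router. A real system would need vastly more modules and a
# far smarter router; this only shows the shape of the idea.
from typing import Callable, Dict

class ModularAgent:
    def __init__(self) -> None:
        self.skills: Dict[str, Callable[[str], str]] = {}

    def register(self, task_kind: str, skill: Callable[[str], str]) -> None:
        self.skills[task_kind] = skill

    def act(self, task_kind: str, task_input: str) -> str:
        # The router: trivially keyed here, but in the picture above
        # this is where most of the remaining hard problem lives.
        skill = self.skills.get(task_kind)
        if skill is None:
            return "no applicable module"
        return skill(task_input)

agent = ModularAgent()
agent.register("reverse", lambda s: s[::-1])        # stand-in narrow skill
agent.register("shout", lambda s: s.upper() + "!")  # stand-in narrow skill
print(agent.act("reverse", "chess"))
print(agent.act("shout", "walk"))
print(agent.act("book a flight", "LAX to JFK"))     # gap: no module yet
```

The cumulative-effect claim is then that with enough registered skills, the gaps ("no applicable module") become rare enough not to matter.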
(Whoever downvoted the grandparent was being needlessly harsh. Why in the world should I self-censor here? I'm just expressing my epistemic state, and I've even made it clear that I don't believe I have information that SIAI folks don't, or am being more rational than they are.)
If a thousand species in nature with a thousand different abilities were to cooperate, would they equal the capabilities of a human? If not, what else is missing?
Tough problem. My first reaction is 'yes', but I think that might be because we're assuming cooperation, which might be letting more in the door than you want.
Exactly the thought I had. Cooperation is kind of a big deal.
Yes, if there were a sufficiently powerful optimization process controlling the form of their cooperation.
I am highly confused about the parent having been voted down, to the point where I am in a state of genuine curiosity about what went through the voter's mind as he or she saw it.
Eliezer asked whether a thousand different animals cooperating could have the power of a human. I answered: "Yes, if there were a sufficiently powerful optimization process controlling the form of their cooperation."
And then someone came along, read this, and thought... what? Was it:
"No, you idiot, obviously no optimization process could be that powerful." ?
"There you go: 'sufficiently powerful optimization process' is equivalent to 'magic happens'. That's so obvious that I'm not going to waste my time pointing it out; instead, I'm just going to lower your status with a downvote." ?
"Clearly you didn't understand what Eliezer was asking. You're in over your head, and shouldn't be discussing this topic." ?
Something else?
The optimization process is the part where the intelligence lives.
Natural selection is an optimization process, but it isn't intelligent.
Also, the point here is AI -- one is allowed to assume the use of intelligence in shaping the cooperation. That's not the same as using intelligence as a black box in describing the nature of it.
If you were the downvoter, might I suggest giving me the benefit of the doubt that I'm up to speed on these kinds of subtleties? (I.e. if I make a comment that sounds dumb to you, think about it a little more before downvoting?)
You were at +1 when I downvoted, so I'm not alone.
Natural selection is a very bad optimization process, and so it's quite unintelligent relative to any standards we might have as humans.
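A toy numeric illustration of that efficiency gap, comparing blind mutate-and-select against a simple greedy sweep on a 64-bit problem (parameters arbitrary):

```python
# Count fitness evaluations needed to maximize a 64-bit OneMax problem:
# (a) blind mutate-and-keep-if-better, roughly evolution-flavored;
# (b) a greedy sweep that tests each bit once.
import random

N = 64
random.seed(0)

def onemax(bits):
    return sum(bits)

# (a) mutate one random bit, keep the child if it is no worse
bits = [0] * N
evals = 0
while onemax(bits) < N:
    child = bits[:]
    child[random.randrange(N)] ^= 1
    evals += 1
    if onemax(child) >= onemax(bits):
        bits = child
print("mutation-only search:", evals, "evaluations")   # ~300 on average

# (b) greedy: flip each bit once, keep the flip if it helps
bits = [0] * N
evals = 0
for i in range(N):
    trial = bits[:]
    trial[i] ^= 1
    evals += 1
    if onemax(trial) > onemax(bits):
        bits = trial
print("greedy sweep:", evals, "evaluations")            # exactly 64
```

Even on this trivially easy landscape, the blind searcher wastes most of its evaluations re-testing work it has already done, and the gap widens rapidly on harder problems.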
Do you expect the conglomerate entity to be able to read, or to be able to learn how to? Considering Eliezer can quite happily pick many, many things, like the archerfish (ability to shoot down flying insects with jets of water) and the chameleon (ability to control its eyes independently), I'm not sure how they all add up to reading.
The brain has many different components with specializations, but the largest portion - and the dominant one in humans - the cortex, is not really specialized at all in the way you outline.
The cortex is no more specialized than your hard drive.
It's composed of a single repeating structure and an associated learning algorithm that appears to be universal. The functional specializations that appear in the adult brain arise due to topological wiring proximity to the relevant sensory and motor connections. The V1 region is not hard-wired to perform mathematically optimal Gabor-like edge filters. It automatically evolves into this configuration because it is the optimal configuration for modelling the input data at that layer, and it evolves thus solely based on exposure to said input data from retinal ganglion cells.
You can think of cortical tissue as a biological 'neuronium'. It has a semi-magical emergent capacity to self-organize into an appropriate set of feature detectors based on what it's wired to (more on this).
All that being said, the inter-regional wiring itself is currently less understood and is probably more genetically predetermined.
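As a minimal sketch of that kind of input-driven self-organization, here is Oja's Hebbian rule picking up the dominant structure in its input with nothing pre-wired for it. This is a textbook toy, not a claim about the actual cortical learning algorithm:

```python
# A single Hebbian unit (Oja's rule) exposed to inputs that share a
# dominant direction. Nothing in the unit encodes that direction in
# advance; it emerges from the statistics of the input alone.
import numpy as np

rng = np.random.default_rng(0)
true_feature = np.array([0.6, 0.8])   # hidden structure in the data

w = rng.normal(size=2)
w /= np.linalg.norm(w)
lr = 0.01

for _ in range(5000):
    # noisy inputs whose variance is concentrated along true_feature
    x = true_feature * rng.normal() + 0.1 * rng.normal(size=2)
    y = w @ x                          # the unit's response
    w += lr * y * (x - y * w)          # Oja's rule: Hebb + normalization

print("learned weights:", np.round(w, 2))   # ~ +/-(0.6, 0.8)
```

The weight vector converges to (plus or minus) the dominant input direction purely from exposure, which is the flavor of the V1 story above.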
There may be other approaches that are significantly simpler (that we haven't yet found, obviously). Assuming AGI happens, it will have been a race between the specific (type of) path you imagine, and every other alternative you didn't think of. In other words, you think you have an upper bound on how much time/expense it will take.
I'm not a member of SIAI but my reason for thinking that AGI is not just going to be like lots of narrow bits of AI stuck together is that I can see interesting systems that haven't been fully explored (due to difficulty of exploration). These types of systems might solve some of the open problems not addressed by narrow AI.
These are problems such as:
Now, I also doubt that these systems will develop quickly when people get around to investigating them. And they will have elements of traditional narrow AI in them as well, but those will be changeable/adaptable parts of the system, not fixed sub-components. What I think needs exploring is primarily changes in software life-cycles, rather than a change in the nature of the software itself.
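As a toy sketch of what I mean by changeable parts (every name here is hypothetical): a system whose stages can be rebound while it runs, rather than being frozen at design time.

```python
# Toy sketch: a pipeline whose stages are references that can be
# rebound at runtime, so the system can have its parts revised in
# place instead of being a fixed assembly of narrow components.
class AdaptiveSystem:
    def __init__(self):
        self.stages = {"parse": str.split,
                       "rank": sorted}

    def run(self, text):
        tokens = self.stages["parse"](text)
        return self.stages["rank"](tokens)

    def replace_stage(self, name, new_impl):
        # The life-cycle change: components are revised while the
        # system keeps running, not fixed at design time.
        self.stages[name] = new_impl

system = AdaptiveSystem()
print(system.run("delta alpha charlie"))    # alphabetical order

# Later, the system (or its tooling) swaps in a different ranker:
system.replace_stage("rank", lambda ts: sorted(ts, key=len))
print(system.run("delta alpha charlie"))    # shortest-first now
```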