Eliezer_Yudkowsky comments on Existential Risk and Public Relations - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
What do you think you know and how do you think you know it? Let's say you have a thousand narrow AI subcomponents. (Millions = implausible due to genome size, as Carl Shulman points out.) Then what happens, besides "then a miracle occurs"?
What happens is that the machine has so many different abilities (playing chess and walking and making airline reservations and...) that its cumulative effect on its environment is comparable to a human's or greater; in contrast to the previous version with 900 components, which was only capable of responding to the environment on the level of a chess-playing, web-searching squirrel.
This view arises from what I understand about the "modular" nature of the human brain: we think we're a single entity that is "flexible enough" to think about lots of different things, but in reality our brains consist of a whole bunch of highly specialized "modules", each able to do some single specific thing.
Now, to head off the "Fly Q" objection, let me point out that I'm not at all suggesting that an AGI has to be designed like a human brain. Instead, I'm "arguing" (expressing my perception) that the human brain's general intelligence isn't a miracle: intelligence really is what inevitably happens when you string zillions of neurons together in response to some optimization pressure. And the "zillions" part is crucial.
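To make that picture a bit more concrete, here's a toy sketch of the "many narrow subcomponents" idea; it isn't anyone's proposed design, and every name in it (NarrowModule, CompositeAgent, and so on) is made up purely for illustration:

```python
# Toy sketch only: a composite agent built from many narrow modules, with a
# trivial dispatcher as the only "general" part. All names here are hypothetical.
from typing import Any, Callable


class NarrowModule:
    """One specialized ability: knows when it applies, and acts if it does."""

    def __init__(self, name: str,
                 applies: Callable[[Any], bool],
                 act: Callable[[Any], Any]):
        self.name = name
        self.applies = applies
        self.act = act


class CompositeAgent:
    """Routes each situation to the first module that claims competence."""

    def __init__(self, modules: list[NarrowModule]):
        self.modules = modules

    def respond(self, situation: Any) -> Any:
        for module in self.modules:
            if module.applies(situation):
                return module.act(situation)
        return None  # no module covers this situation


# With ~10 modules you get the chess-playing, web-searching squirrel; the claim
# under discussion is about what changes when this list has ~1000 entries.
agent = CompositeAgent([
    NarrowModule("chess", lambda s: s.get("kind") == "chess",
                 lambda s: "a legal chess move"),
    NarrowModule("flights", lambda s: s.get("kind") == "flight",
                 lambda s: "an airline reservation"),
    # ... hundreds more narrow abilities ...
])
print(agent.respond({"kind": "chess"}))
```

The only point of the sketch is that the breadth of the composite's effect on its environment scales with the length of that module list, even though no single piece is "generally intelligent".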
(Whoever downvoted the grandparent was being needlessly harsh. Why in the world should I self-censor here? I'm just expressing my epistemic state, and I've even made it clear that I don't believe I have information that SIAI folks don't, or am being more rational than they are.)
If a thousand species in nature with a thousand different abilities were to cooperate, would they equal the capabilities of a human? If not, what else is missing?
Tough problem. My first reaction is 'yes', but I think that might be because we're assuming cooperation, which might be letting more in the door than you want.
Exactly the thought I had. Cooperation is kind of a big deal.
Yes, if there were a sufficiently powerful optimization process controlling the form of their cooperation.
I am highly confused about the parent having been voted down, to the point where I am in a state of genuine curiosity about what went through the voter's mind as he or she saw it.
Eliezer asked whether a thousand different animals cooperating could have the power of a human. I answered: "Yes, if there were a sufficiently powerful optimization process controlling the form of their cooperation."
And then someone came along, read this, and thought... what? Was it:
"No, you idiot, obviously no optimization process could be that powerful." ?
"There you go: 'sufficiently powerful optimization process' is equivalent to 'magic happens'. That's so obvious that I'm not going to waste my time pointing it out; instead, I'm just going to lower your status with a downvote." ?
"Clearly you didn't understand what Eliezer was asking. You're in over your head, and shouldn't be discussing this topic." ?
Something else?
The optimization process is the part where the intelligence lives.
Natural selection is an optimization process, but it isn't intelligent.
Also, the point here is AI -- one is allowed to assume the use of intelligence in shaping the cooperation. That's not the same as using intelligence as a black box in describing the nature of it.
If you were the downvoter, might I suggest giving me the benefit of the doubt that I'm up to speed on these kinds of subtleties? (I.e. if I make a comment that sounds dumb to you, think about it a little more before downvoting?)
You were at +1 when I downvoted, so I'm not alone.
Natural selection is a very bad optimization process, and so it's quite unintelligent relative to any standards we might have as humans.
Now it's my turn to downvote, on the grounds that you didn't understand my comment. I agree that natural selection is unintelligent -- that was my whole point! It was intended as a counterexample to your implied assertion that an appeal to an optimization process is an appeal to intelligence.
EDIT: I suppose this confirms on a small scale what had become apparent in the larger discussion here about SIAI's public relations: people really do have more trouble noticing intellectual competence than I tend to realize.
Downvoted for retaliatory downvoting; voted everything else up toward 0.
Downvoted the parent and upvoted the grandparent. "On the grounds that you didn't understand my comment" is a valid reason for downvoting and is based on a clearly correct observation.
I do agree that komponisto would have been better served by leaving off mention of voting altogether. Just "You didn't understand my comment. ..." would have conveyed an appropriate level of assertiveness to make the point. That would have avoided sending a signal of insecurity and denied others the invitation to judge.
(N.B. I just discovered that I had not, in fact, downvoted the comment that began this discussion. I must have had it confused with another.)
Like Eliezer, I generally think of intelligence and optimization as describing the same phenomenon. So when I saw this exchange:
I read your reply as meaning approximately "1000 small cognitive modules are a really powerful optimization process if and only if their cooperation is controlled by a sufficiently powerful optimization process."
To answer the question you asked here, I thought the comment was worthy of a downvote (though apparently I did not actually follow through) because it was circular in a non-obvious way that contributed only confusion.
I am probably a much more ruthless downvoter than many other LessWrong posters; my downvotes indicate a desire to see "fewer things like this" with a very low threshold.
Thank you for explaining this, and showing that I was operating under the illusion of transparency.
My intended meaning was nothing so circular. The optimization process I was talking about was the one that would have built the machine, not something that would be "controlling" it from inside. I thought (mistakenly, it appears) that this would be clear from the fact that I said "controlling the form of their cooperation" rather than "controlling their cooperation". My comment was really nothing different from thomblake's or wedrifid's. I was saying, in effect, "yes, on the assumption that the individual components can be made to cooperate, I do believe that it is possible to assemble them in so clever a manner that their cooperation would produce effective intelligence."
The "cleverness" referred to in the previous sentence is that of the whatever created the machine (which could be actual human programmers, or, theoretically, something else like natural selection) and not the "effective intelligence" of the machine itself. (Think of a programmer, not a homunculus.) Note that I easily envision the process of implementing such "cleverness" itself not looking particularly clever -- perhaps the design would be arrived at after many iterations of trial-and-error, with simpler devices of similar form. (Natural selection being the extreme case of this kind of process.) So I'm definitely not thinking magically here, and least not in any obvious way (such as would warrant a downvote, for example).
I can now see how my words weren't as transparent as I thought, and thank you for drawing this to my attention; at the same time, I hope you've updated your prior that a randomly selected comment of mine results from a lack of understanding of basic concepts.
Do you expect the conglomerate entity to be able to read, or to be able to learn how to? Considering Eliezer can quite happily pick many, many things like the archer fish (which shoots water to take out flying insects) and the chameleon (which controls its eyes independently), I'm not sure how they all add up to reading.
The brain has many different components with specializations, but the largest portion (and the dominant one in humans), the cortex, is not really specialized at all in the way you outline.
The cortex is no more specialized than your hard drive.
It's composed of a single repeating structure and an associated learning algorithm that appears to be universal. The functional specializations that appear in the adult brain arise from topological wiring proximity to the relevant sensory and motor connections. The V1 region is not hard-wired to perform mathematically optimal Gabor-like edge filtering; it automatically develops into this configuration because it is the optimal configuration for modelling the input data at that layer, and it does so solely through exposure to that input data from the retinal ganglion cells.
You can think of cortical tissue as a biological 'neuronium': it has a semi-magical emergent capacity to self-organize into an appropriate set of feature detectors based on whatever it's wired to (more on this).
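Here is a toy illustration of that self-organization. It has nothing to do with real cortical wiring; it just shows a single Hebbian neuron, trained with Oja's rule, acquiring a receptive field whose shape comes entirely from the statistics of whatever input it happens to be wired to:

```python
# Toy illustration only (not a model of real cortex): a single Hebbian neuron
# trained with Oja's rule acquires a receptive field shaped entirely by the
# statistics of the input it happens to be wired to.
import numpy as np

rng = np.random.default_rng(0)
size = 8  # 8x8 synthetic "retinal" patches

# Input distribution: a vertical edge of random polarity, plus noise.
edge = np.zeros((size, size))
edge[:, : size // 2] = -1.0
edge[:, size // 2:] = 1.0
edge /= np.linalg.norm(edge)

w = rng.normal(size=size * size)  # random initial "receptive field"
w /= np.linalg.norm(w)
lr = 0.01

for _ in range(5000):
    x = (rng.choice([-1.0, 1.0]) * edge
         + 0.3 * rng.normal(size=(size, size))).ravel()
    y = w @ x                      # neuron's response
    w += lr * y * (x - y * w)      # Oja's rule: Hebbian update with normalization

# The learned weights now resemble the edge structure present in the input;
# nothing about "edges" was built into the learning rule itself.
print(np.round(w.reshape(size, size), 2))
print("alignment with edge template:", abs(float(w @ edge.ravel())))
```

Swap the synthetic edge patches for natural image patches and move to a sparse-coding variant of the same idea, and you get the Gabor-like filters mentioned above (Olshausen & Field 1996); the moral is the same either way: the filter's shape comes from the input statistics, not from genetic hard-wiring.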
All that being said, the inter-regional wiring itself is currently less well understood and is probably more genetically predetermined.