komponisto comments on Existential Risk and Public Relations - Less Wrong

Post author: multifoliaterose, 15 August 2010 07:16AM (36 points)




Comment author: komponisto 19 August 2010 12:47:29AM *  -1 points [-]

Yes, if there were a sufficiently powerful optimization process controlling the form of their cooperation.

Comment author: komponisto 19 August 2010 02:04:07AM 2 points [-]

I am highly confused about the parent having been voted down, to the point where I am genuinely curious about what went through the voter's mind when he or she saw it.

Eliezer asked whether a thousand different animals cooperating could have the power of a human. I answered:

Yes, if there were a sufficiently powerful optimization process controlling the form of their cooperation

And then someone came along, read this, and thought... what? Was it:

  • "No, you idiot, obviously no optimization process could be that powerful." ?

  • "There you go: 'sufficiently powerful optimization process' is equivalent to 'magic happens'. That's so obvious that I'm not going to waste my time pointing it out; instead, I'm just going to lower your status with a downvote." ?

  • "Clearly you didn't understand what Eliezer was asking. You're in over your head, and shouldn't be discussing this topic." ?

  • Something else?

Comment author: WrongBot 19 August 2010 02:01:32AM 0 points [-]

The optimization process is the part where the intelligence lives.

Comment author: komponisto 19 August 2010 02:08:24AM *  2 points [-]

Natural selection is an optimization process, but it isn't intelligent.

Also, the point here is AI -- one is allowed to assume the use of intelligence in shaping the cooperation. That's not the same as using intelligence as a black box in describing the nature of it.

If you were the downvoter, might I suggest giving me the benefit of the doubt that I'm up to speed on these kinds of subtleties? (I.e. if I make a comment that sounds dumb to you, think about it a little more before downvoting?)

Comment author: WrongBot 19 August 2010 02:23:32AM 0 points [-]

You were at +1 when I downvoted, so I'm not alone.

Natural selection is a very bad optimization process, and so it's quite unintelligent relative to any standards we might have as humans.

Comment author: komponisto 19 August 2010 02:28:39AM *  1 point [-]

Now it's my turn to downvote, on the grounds that you didn't understand my comment. I agree that natural selection is unintelligent -- that was my whole point! It was intended as a counterexample to your implied assertion that an appeal to an optimization process is an appeal to intelligence.

EDIT: I suppose this confirms on a small scale what had become apparent in the larger discussion here about SIAI's public relations: people really do have more trouble noticing intellectual competence than I tend to realize.

Comment author: Eliezer_Yudkowsky 19 August 2010 04:10:51AM -2 points [-]

Downvoted for retaliatory downvoting; voted everything else up toward 0.

Comment author: wedrifid 19 August 2010 04:51:51AM 2 points [-]

Downvoted for retaliatory downvoting; voted everything else up toward 0.

Downvoted the parent and upvoted the grandparent. "On the grounds that you didn't understand my comment" is a valid reason for downvoting, and is based on a clearly correct observation.

I do agree that komponisto would have been better served by leaving off mention of voting altogether. Just "You didn't understand my comment. ..." would have conveyed an appropriate level of assertiveness to make the point. That would have avoided sending a signal of insecurity and denied others the invitation to judge.

Comment author: jimrandomh 19 August 2010 05:08:18PM -1 points [-]

Voted down all comments that talk about voting, for being too much about status rather than substance.

Vote my comment towards -1 for consistency.

Comment author: komponisto 19 August 2010 05:48:03PM *  3 points [-]
  • Status matters; it's a basic human desideratum, like food and sex (in addition to being instrumentally useful in various ways). There seems to be a notion among some around here that concern with status is itself inherently irrational or bad in some way. But this is as wrong as saying that concern with money or good-tasting food is inherently irrational or bad. Yes, we don't want the pursuit of status to interfere with our truth-detecting abilities; but the same goes for the pursuit of food, money, or sex, and no one thinks it's wrong for aspiring rationalists to pursue those things. Still less is it considered bad to discuss them.

  • Comments like the parent are disingenuous. If we didn't want users to think about status, we wouldn't have adopted a karma system in the first place. A norm of forbidding the discussion of voting creates the wrong incentives: it encourages people to make aggressive status moves against others (downvoting) without explaining themselves. If a downvote is discussed, the person being targeted at least has a better opportunity to gain information, rather than simply feeling attacked. They may learn whether their comment was actually stupid, or whether instead the downvoter was being stupid. When I vote comments down I usually make a comment explaining why -- certainly if I'm voting from 0 to -1. (Exceptions for obvious cases.)

  • I really don't appreciate what you've done here. A little while ago I considered removing the edit from my original comment that questioned the downvote, but decided against it to preserve the context of the thread. Had I done so I wouldn't now be suffering the stigma of a comment at -1.

Comment author: thomblake 19 August 2010 06:01:47PM 4 points [-]

When I vote comments down I usually make a comment explaining why -- certainly if I'm voting from 0 to -1. (Exceptions for obvious cases.)

Then you must be making a lot of exceptions, or you don't downvote very much. I find that "I want to see fewer comments like this one" is true of about a third of comments, though I don't downvote quite that much anymore since there is a cap now. Could you imagine if every fourth comment in 'recent comments' were taken up by my explanations of why I downvoted a comment? And then what if people didn't like my explanations and were following the same norm - we'd quickly become a site where most comments are explanations of voting behavior.

A bit of a slippery slope argument, but I think it is justified - I can make it more rigorous if need be.

Comment author: Oligopsony 19 August 2010 05:55:43PM 0 points [-]

Status matters; it's a basic human desideratum, like food and sex (in addition to being instrumentally useful in various ways). There seems to be a notion among some around here that concern with status is itself inherently irrational or bad in some way. But this is as wrong as saying that concern with money or good-tasting food is inherently irrational or bad. Yes, we don't want the pursuit of status to interfere with our truth-detecting abilities; but the same goes for the pursuit of food, money, or sex, and no one thinks it's wrong for aspiring rationalists to pursue those things.

Status is an inherently zero-sum good, so while it is rational for any given individual to pursue it, we'd all be better off, cet. par., if nobody pursued it. Everyone has a small incentive for other people not to pursue status, just as they have an incentive for them not to be violent or to smell funny; hence the existence of popular anti-status-seeking norms.

Comment author: WrongBot 19 August 2010 05:23:08PM *  1 point [-]

(N.B. I just discovered that I had not, in fact, downvoted the comment that began this discussion. I must have had it confused with another.)

Like Eliezer, I generally think of intelligence and optimization as describing the same phenomenon. So when I saw this exchange:

If a thousand species in nature with a thousand different abilities were to cooperate, would they equal the capabilities of a human? If not, what else is missing?

Yes, if there were a sufficiently powerful optimization process controlling the form of their cooperation.

I read your reply as meaning approximately "1000 small cognitive modules are a really powerful optimization process if and only if their cooperation is controlled by a sufficiently powerful optimization process."

To answer the question you asked here, I thought the comment was worthy of a downvote (though apparently I did not actually follow through) because it was circular in a non-obvious way that contributed only confusion.

I am probably a much more ruthless downvoter than many other LessWrong posters; my downvotes indicate a desire to see "fewer things like this" with a very low threshold.

Comment author: komponisto 20 August 2010 07:56:53AM 2 points [-]

I read your reply as meaning approximately "1000 small cognitive modules are a really powerful optimization process if and only if their cooperation is controlled by a sufficiently powerful optimization process."

Thank you for explaining this, and showing that I was operating under the illusion of transparency.

My intended meaning was nothing so circular. The optimization process I was talking about was the one that would have built the machine, not something that would be "controlling" it from inside. I thought (mistakenly, it appears) that this would be clear from the fact that I said "controlling the form of their cooperation" rather than "controlling their cooperation". My comment was really nothing different from thomblake's or wedrifid's. I was saying, in effect, "yes, on the assumption that the individual components can be made to cooperate, I do believe that it is possible to assemble them in so clever a manner that their cooperation would produce effective intelligence."

The "cleverness" referred to in the previous sentence is that of whatever created the machine (which could be actual human programmers or, theoretically, something else like natural selection), and not the "effective intelligence" of the machine itself. (Think of a programmer, not a homunculus.) Note that I can easily envision the process of implementing such "cleverness" itself not looking particularly clever -- perhaps the design would be arrived at after many iterations of trial and error, with simpler devices of similar form. (Natural selection being the extreme case of this kind of process.) So I'm definitely not thinking magically here, at least not in any obvious way (such as would warrant a downvote, for example).
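A minimal sketch of the kind of unintelligent design process described here -- blind mutation plus a keep-if-no-worse rule -- is Dawkins's "weasel" program. (This is an illustration of mine, not part of the original thread; the target string, alphabet, and mutation rate are arbitrary choices.)

```python
import random

# Cumulative selection toward a fixed target: no step in the loop is
# "clever"; mutation is blind, and the only directional force is the
# refusal to accept a strictly worse candidate.
TARGET = "METHINKS IT IS LIKE A WEASEL"
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "

def fitness(s):
    # Number of positions matching the target.
    return sum(a == b for a, b in zip(s, TARGET))

def mutate(s, rate=0.05):
    # Each character independently has a small chance of being replaced
    # by a uniformly random character.
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in s)

random.seed(0)
current = "".join(random.choice(ALPHABET) for _ in range(len(TARGET)))
while fitness(current) < len(TARGET):
    child = mutate(current)
    if fitness(child) >= fitness(current):  # keep if no worse
        current = child

print(current)  # prints: METHINKS IT IS LIKE A WEASEL
```

The loop converges in a few thousand iterations. The point is the one made above: the appearance of a well-fitted design can come entirely from an accumulation of unintelligent steps, with the "cleverness" located in the selection criterion rather than in any homunculus inside the machine.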

I can now see how my words weren't as transparent as I thought, and thank you for drawing this to my attention; at the same time, I hope you've updated downward your prior probability that a randomly selected comment of mine results from a lack of understanding of basic concepts.

Comment author: WrongBot 20 August 2010 02:33:10PM 8 points [-]

Consider me updated. Thank you for taking my brief and relatively unhelpful comments seriously, and for explaining your intended point. While I disagree that the swiftest route to AGI will involve lots of small modules, it's a complicated topic with many areas of high uncertainty; I suspect you are at least as informed about the topic as I am, and will be assigning your opinions more credence in the future.

Comment author: ciphergoth 20 August 2010 02:41:38PM 5 points [-]

Hooray for polite, respectful, informative disagreements on LW!

Comment author: komponisto 20 August 2010 03:01:26PM 2 points [-]

It's why I keep coming back even after getting mad at the place.

(That, and the fact that this is one of very few places I know where people reliably get easy questions right.)

Comment author: whpearson 19 August 2010 09:01:32AM 0 points [-]

Do you expect the conglomerate entity to be able to read, or to be able to learn how to? Considering that Eliezer can quite happily pick many, many abilities like the archer fish's (shooting water to take out flying insects) and the chameleon's (controlling its eyes independently), I'm not sure how they all add up to reading.