JoshuaZ comments on Why an Intelligence Explosion might be a Low-Priority Global Risk - Less Wrong

Post author: XiXiDu 14 November 2011 11:40AM


Comment author: JoshuaZ 14 November 2011 06:39:59PM 1 point

General intelligence -- defined as the ability to acquire, organize, and apply information -- is definitionally instrumental. Greater magnitudes of intelligence yield greater ability to acquire, organize, and apply said information.

Intelligence is instrumentally useful, but it comes at a cost. Note that only a few tens of species have developed substantial intelligence. This suggests that intelligence is in general costly. Even if more intelligence would help an AI's goals, that doesn't mean that acquiring more intelligence is easy or worth the effort.

Any AGI which is designed as more "intelligent" than the (A)GI which designed it will be material evidence that GI can be incremented upwards through design, and furthermore that a general intelligence can do this.

Yes, but I don't think many people seriously doubt this. Humans will likely do this within a few years even without any substantial AGI work, simply through genetic engineering and/or implants.

This then implies that any general intelligence that can design an intelligence superior to itself will likely do so in a manner that creates a general intelligence which is superior at designing superior intelligences, as this has already been demonstrated to be a characteristic of general intelligences of the original intelligence's magnitude.

This does not follow. It could be that it gets more and more difficult to design a superior intelligence. There may be diminishing marginal returns. (See my comment elsewhere in this thread for one possible example of what could go wrong.)
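The diminishing-returns worry can be made concrete with a toy iteration. This is purely illustrative: the two growth rules and all constants below are assumptions invented for the sketch, not anyone's actual model of self-improvement.

```python
# Toy comparison (illustrative only) of two assumptions about how hard it is
# for an intelligence of level g to design its successor.

def grow(g0, step, rounds):
    """Iterate a self-improvement rule `step` starting from level g0."""
    g = g0
    for _ in range(rounds):
        g = step(g)
    return g

# Explosive assumption: each generation improves on itself by a fixed fraction.
explosive = lambda g: g * 1.10

# Diminishing-returns assumption: the absolute improvement a designer can find
# shrinks as designs get harder, here proportional to 1/g.
diminishing = lambda g: g + 10.0 / g

print(grow(10.0, explosive, 100))    # exponential: roughly 1.4e5
print(grow(10.0, diminishing, 100))  # crawls to only ~46 in the same time
```

Under the second rule intelligence still grows without bound, but a hundred generations of self-improvement buy less than a fivefold gain rather than an explosion, which is the shape of JoshuaZ's objection.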

Comment author: Logos01 14 November 2011 08:06:43PM 0 points

We seem to be talking past one another. Why do you speak in terms of evolution, where I was discussing engineered intelligence?

Comment author: JoshuaZ 14 November 2011 08:16:34PM 0 points

I'm only discussing evolved intelligences to make the point that intelligence seems to be costly from a resource perspective.

Comment author: Logos01 15 November 2011 03:25:11AM 0 points

Certainly. But evolved intelligences do not optimize for intelligence. They optimize for perpetuation of the genome. Constructed intelligence allows for systems that are optimized for intelligence. This was what I was getting at with the mention of the fact that evolution does not optimize for what we optimize for; that there is no evolutionary equivalent of the atom bomb or the Saturn V rocket.

So mentioning "ways that can go wrong" and reinforcing that point with evolutionary precedent seems to be rather missing the point. It's apples-to-oranges.

After all: even if there are diminishing returns on the energy invested to achieve a more intelligent design, once that new design is achieved it can be replicated essentially indefinitely.

Comment author: JoshuaZ 15 November 2011 03:30:47AM 0 points

The energy it takes to get there isn't what is relevant in this context. The relevant issue is that being intelligent ties up a lot of resources. This is an important distinction. And the fact that evolution doesn't optimize for intelligence but for other goals isn't really relevant, given that an AGI presumably won't optimize itself for intelligence (a paperclip maximizer, for example, will make itself just as intelligent as it estimates is optimal for making paperclips everywhere). The point is that, based on the data from one very common optimization process, it seems that intelligence is generally so resource-intensive that being highly intelligent is simply very rarely worth it. (This evidence is obviously weak. The substrate matters, as do other issues. But the basic point is sound.)
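The paperclip-maximizer point -- buying intelligence only up to the level that best serves the terminal goal -- can be sketched numerically. This is a toy cost-benefit model; the log-benefit shape and every constant are made-up assumptions for illustration only.

```python
import math

# Toy sketch: a paperclip maximizer picks its own intelligence level i.
# Extra intelligence helps production with diminishing returns (log term)
# but permanently ties up resources at a linear cost.

def net_paperclips(i, benefit=100.0, cost=4.0):
    return benefit * math.log(i) - cost * i

# Scan candidate intelligence levels from 1.0 to 50.0 and take the best.
levels = [i / 10 for i in range(10, 501)]
best = max(levels, key=net_paperclips)
print(best)  # optimum at benefit/cost = 25.0, far short of "as smart as possible"
```

Under these (arbitrary) numbers the optimizer stops investing in intelligence well before the maximum available level, because past that point each unit of intelligence costs more paperclips than it earns.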

Note incidentally that most of the comment was not about evolved intelligences. This is not an argument occurring in isolation. See especially the other two remarks made.

Comment author: Logos01 15 November 2011 04:51:21AM 0 points

Note incidentally that most of the comment was not about evolved intelligences. This is not an argument occurring in isolation. See especially the other two remarks made.

Quite correct, but you're still making the fundamental error of extrapolating from evolution to non-evolved intelligence without first correcting for the "aims"/"goals" of evolution, as compared to those of designed intelligences, when it comes to how designers might approach intelligence.

Comment author: Logos01 15 November 2011 04:49:06AM 0 points

given that an AGI presumably won't optimize itself for intelligence

By the very dint of the fact that it is designed for the purpose of being intelligent, any AGI conceivably constructed by men would be optimized for intelligence; this seems a rather routinely heritable phenomenon. While a paperclip optimizer itself might not seek to optimize itself for intelligence, if we postulate that it is in the business of making a 'smarter' paperclip optimizer, it will optimize for intelligence. Of course, we cannot know the agency of any given point in the sequence; whether it will "make the choice" to recurse upwards.

That being said, there's a real non sequitur here in your dialogue, insofar as I can see. "The relevant issue is that being intelligent ties up a lot of resources." -- Compared to what, exactly? Roughly a fifth of our caloric intake goes to our brain. Our brain is not well-optimized for intelligence. "[...] given that an AGI presumably won't optimize itself for intelligence" -- but whatever designed that AGI would have.

The point is that, based on the data from one very common optimization process, it seems that intelligence is generally so resource-intensive that being highly intelligent is simply very rarely worth it. (This evidence is obviously weak. The substrate matters, as do other issues. But the basic point is sound.)

I strongly disagree. The basic point is deeply flawed. I've already tried to say this repeatedly: evolution does not optimize for intelligence. Pointing at evolution's history with intelligence and saying "aha! Optimization finds intelligence expensive!" misses the point altogether: of course evolution finds intelligence expensive. Intelligence doesn't match what evolution "does". Evolution 'seeks' stable local minima that perpetuate replication of the genome. That is all it does. Intelligence isn't integral to that process; humans didn't need to be any more intelligent than we are in order to reach our local minimum of perpetuation, so we didn't evolve any more intelligence.

To attempt to extrapolate from that to what intelligence-seeking designers would achieve is missing the point on a very deep level: to extrapolate correctly from the 'lessons' evolution would 'teach us', one would have to postulate a severe selection pressure favoring intelligence.

I don't see how you're doing that.

Comment author: JoshuaZ 16 November 2011 05:36:52PM 0 points

I don't understand your remark. No one is going to make an AGI whose goal is to become as intelligent as possible. Evolution is thus, in this context, just one type of optimizer. Whatever one is optimizing for, becoming as intelligent as possible won't generally be the optimal thing to do, even if becoming more intelligent does help achieve those goals.

Comment author: Logos01 16 November 2011 07:50:13PM -1 points

No one is going to make an AGI whose goal is to become as intelligent as possible.

I would.

Evolution is thus in this context one type of optimizer.

To which intelligence is extraneous.

Consider then an optimizer which focuses on agency rather than perpetuation. To agency, intelligence is instrumental. By dint of being artificial and designed to be intelligent, whatever goalset is integrated into it would value that intelligence.

It is not possible to intentionally design a trait into a system without that trait being valuable to the system.

Intelligence is definitionally instrumental to an artificial general intelligence. Given sufficient time, any AGI capable of constructing a superior AGI will do so.

Comment author: JoshuaZ 16 November 2011 07:55:31PM -1 points

No one is going to make an AGI whose goal is to become as intelligent as possible.

I would.

Are you trying to make sure a bad Singularity happens?

Evolution is thus in this context one type of optimizer.

To which intelligence is extraneous.

No, intelligence is one tool among many which the blind idiot god uses. It is a particularly useful tool for species that can have widely varied environments. Unfortunately, it is a resource-intensive tool. That's why Azathoth doesn't use it except in a few very bright species.

Consider then an optimizer which focuses on agency rather than perpetuation. To agency, intelligence is instrumental. By dint of being artificial and designed to be intelligent whatever goalset would have to be integrated into it would value that intelligence.

You seem to be confusing two different notions of intelligence. One is the either/or "is it intelligent" and the other is how intelligent it is.

It is not possible to intentionally design a trait into a system without that trait being valuable to the system.

I'm not sure what you mean here.

Comment author: wedrifid 16 November 2011 08:04:21PM 1 point

Are you trying to make sure a bad Singularity happens?

If Logos is seeking it then I assume it is not something that he considers bad. Presumably because he thinks intelligence is just that cool. Pursuing the goal necessarily results in human extinction and tiling the universe with computronium and I call that Bad but he should still answer "No". (This applies even if I go all cognitive-realism on him and say he is objectively wrong and that he is, in fact, trying to make sure a Bad Singularity happens.)

Comment author: Logos01 17 November 2011 07:22:27AM 0 points

No, intelligence is one tool among many which the blind idiot god uses. It is a particularly useful tool for species that can have widely varied environments. Unfortunately, it is a resource-intensive tool.

Only given the routes to general intelligence available to "the blind idiot god" through the characteristics it does optimize for. We have a language breakdown here.

The reason I said intelligence is 'extraneous' to evolution is that evolution only 'seeks out' local minima for perpetuation of the genome. What specific configuration a given local minimum happens to be is extraneous to the algorithm. Intelligence is in the available solution space, but it is extraneous to the process, which is why generalists often lose out to specialists in limited environments. (Read: the pygmy people who "went back to the trees".)

Intelligence is not a goal to evolution; it is extraneous to its criteria. Intelligence is not the metric by which evolution judges fitness. Successful perpetuation of the genome is. Nothing else.

You seem to be confusing two different notions of intelligence. One is the either/or "is it intelligent" and the other is how intelligent it is.

Not at all. Not even remotely. I'm stating that any agent -- a construct (biological or synthetic) that can actively select amongst variable results; a thing that makes choices -- inherently values intelligence; the capacity to 'make good choices'. A more-intelligent agent is a 'superior' agent, instrumentally speaking.

Any time there is a designed intelligent agent, the presence of that intelligence is a hard indicator that intelligence was valued. Designed intelligences are "designed to be intelligent" (this is tautological). This means that whoever designed that intelligence spent effort and time on making it intelligent; that, in turn, means that its designer valued that intelligence. Whatever goalset the designer imparted to the designed intelligence, thusly, is a goalset that requires intelligence to be effected.

Which in turn means that intelligence is definitionally instrumentally useful to a designed intelligence.

It is not possible to intentionally design a trait into a system without that trait being valuable to the system.

I'm not sure what you mean here.

What gives you trouble with it? Try rephrasing it, and I'll 'correct' said rephrasing towards my intended meaning as best I can, perhaps? I want to be understood. :)