falenas108 comments on Intelligence explosion in organizations, or why I'm not worried about the singularity - Less Wrong

13 points · Post author: sbenthall · 27 December 2012 04:32AM

Comment author: falenas108 27 December 2012 01:11:18PM 6 points [-]

An AGI would go foom because it either has access to its own source code, so it can self-modify, or it is capable of making a new AGI that builds on itself. Organizations don't have this same power, in that they can't modify the mental structure of the people that make up the organization. They can change the people in it, and the structure connecting them, but that's not the same type of optimization power as an AGI would have.

Also:

When judging whether an entity has intelligence, we should consider only the skills relevant to the entity's goals.

Not if you're talking about general intelligence. Deep Blue can only play chess. Chess is its only goal, but we do not say it is an AGI, because it cannot take its algorithm and apply it to new fields.

Comment author: HalMorris 27 December 2012 03:34:40PM 3 points [-]

Deep Blue is far, far from being an AGI, and is not a conceivable threat to the future of humanity, but its success suggests that implementing combat strategy within a domain of imaginable possibilities is a far easier problem than AGI.

In combat, speed (both of getting a projectile or an attacking column to its destination, and of sizing up a situation so that strategies can be determined) just might be the most important advantage of all, and speed is the most trivial thing for AI to supply.

In general, it is far easier to destroy than to create.

So I wouldn't dismiss an A-(not-so)G-I as a threat because it is poor at music composition, or true deep empathy(!), or even at something potentially useful like biology or chemistry. It could be quite specialized, achieving a tiny fraction of the totality of AGI, and still be quite a competent threat, capable of causing a singularity that is (merely) destructive.

Comment author: jsteinhardt 27 December 2012 04:18:47PM 2 points [-]

The argument in the post is not that AGI isn't more powerful than organizations; it is that organizations are also very powerful, and probably powerful enough that they will create huge issues before AGI does.

Comment author: falenas108 27 December 2012 11:29:08PM 2 points [-]

Yes. I was pointing out that the thing that makes AGI dangerous, i.e. recursive improvement, does not apply to organizations.

Comment author: timtyler 29 December 2012 01:58:21PM 0 points [-]

I was pointing out that the thing that makes AGI dangerous, i.e. recursive improvement, does not apply to organizations.

You are claiming that organisations don't improve? Or that they don't improve themselves? Or that improving themselves doesn't count as a form of recursion? None of these positions seems terribly defensible to me.

Comment author: sbenthall 28 December 2012 12:26:04AM 1 point [-]

Organizations don't have this same power, in that they can't modify the mental structure of the people that make up the organization. They can change the people in it, and the structure connecting them, but that's not the same type of optimization power as an AGI would have.

I may be missing something, but...if an organization depends on software to manage some part of its information processing, and it has developers that work on that source code, can't the organization modify its own source code?

Of course, you run into some hardware and wetware constraints, but so does pure software.

Not if you're talking about general intelligence. Deep Blue can only play chess. Chess is its only goal, but we do not say it is an AGI, because it cannot take its algorithm and apply it to new fields.

Fair enough. But then consider the following argument:

1. Suppose I have a general, self-modifying intelligence.

2. Suppose that the world is such that it is costly to develop and maintain new skills.

3. The intelligence has some goals.

4. If the intelligence has any skills that are irrelevant to its goals, it would be irrational for it to maintain those skills.

5. At this point, the general intelligence would modify itself into a non-general intelligence.

By this logic, if an AGI had goals that weren't so broad that they required the entire spectrum of possible skills, then it would immediately castrate itself of its generality.

Does that mean it would no longer be a problem?

Comment author: falenas108 28 December 2012 02:43:38AM 1 point [-]

if an organization depends on software to manage some part of its information processing, and it has developers that work on that source code, can't the organization modify its own source code?

Such an organisation can self-modify, but those modifications aren't recursive. They can't use one improvement to fuel another, they would have to come up with the next one independently (or if they could, it wouldn't be nearly to the extent that an AGI could. If you want me to go into more detail with this, let me know).

If the intelligence has any skills that are irrelevant to its goals, it would be irrational for it to maintain those skills.

The point isn't that an AGI has or does not have certain skills. It's that it has the ability to learn those skills. Deep Blue doesn't have the capacity to learn anything other than playing chess, while humans, despite never running into a flute in the ancestral environment, can learn to play the flute.

Comment author: sbenthall 28 December 2012 04:28:59PM 2 points [-]

They can't use one improvement to fuel another, they would have to come up with the next one independently

I disagree.

Suppose an organization has developers who work in-house on their issue tracking system (there are several that do--mostly software companies).

An issue tracking system is essentially a way for an organization to manage information flow about bugs, features, and patches to its own software. The issue tracker (as a running application) coordinates between developers and the source code itself (sometimes, its own source code).

Taken as a whole, the developers, issue tracker implementation, and issue tracker source code are part of the distributed cognition of the organization.

I think that in this case, an organization's self-improvement to the issue tracker source code recursively 'fuels' other improvements to the organization's cognition.

The point isn't that an AGI has or does not have certain skills. It's that it has the ability to learn those skills. Deep Blue doesn't have the capacity to learn anything other than playing chess, while humans, despite never running into a flute in the ancestral environment, can learn to play the flute.

Fair enough. But then we should hold organizations to the same standard. Suppose, for whatever reason, an organization needs better-than-median-human flute-playing for some purpose. What then?

Then they hire a skilled flute-player, right?

I think we may be arguing over an issue of semantics. I agree with you substantively that general intelligence is about adaptability, gaining and losing skills as needed.

My point in the OP was that organizations and the hypothetical AGI have comparable kinds of intelligence, so we can think about them as comparable superintelligences.

Comment author: falenas108 30 December 2012 02:06:27AM -1 points [-]

I think that in this case, an organization's self-improvement to the issue tracker source code recursively 'fuels' other improvements to the organization's cognition.

Yes, it can fuel improvement, but not to the level that a foom-ing AGI would. See this thread for details: http://lesswrong.com/lw/g3m/intelligence_explosion_in_organizations_or_why_im/85zw

I think we may be arguing over an issue of semantics. I agree with you substantively that general intelligence is about adaptability, gaining and losing skills as needed.

My point in the OP was that organizations and the hypothetical AGI have comparable kinds of intelligence, so we can think about them as comparable superintelligences.

I agree that organizations may be seen as similar to an AGI that has supra-human intelligence in many ways, but not in their ability to self modify.

Comment author: timtyler 29 December 2012 02:04:01PM *  0 points [-]

Such an organisation can self-modify, but those modifications aren't recursive. They can't use one improvement to fuel another, they would have to come up with the next one independently

Really? It seems to me as though software companies do this all the time. Think about Eclipse, for instance. The developers of Eclipse use Eclipse to program Eclipse with. Improvements to it help them make further improvements directly.

(or if they could, it wouldn't be nearly to the extent that an AGI could

So, the recursive self-improvement is a matter of degree? It sounds as though you now agree.

Comment author: falenas108 29 December 2012 03:49:14PM -1 points [-]

It's like the post here: http://lesswrong.com/lw/w5/cascades_cycles_insight/

It's highly unlikely a company will be able to get >1.

Comment author: timtyler 29 December 2012 07:33:46PM -1 points [-]

It's like the post here: http://lesswrong.com/lw/w5/cascades_cycles_insight/

To me, that just sounds like confusion about the relationship between genetic and psychological evolution.

It's highly unlikely a company will be able to get >1.

Um, > 1 what? It's easy to make irrefutable predictions when what you say is vague and meaningless.

Comment author: falenas108 30 December 2012 02:03:40AM -1 points [-]

The point of the article is that if the recursion can work on itself more than a certain amount, then each new insight allows for more insights, as with uranium in a nuclear bomb. > 1 refers to the average amount of improvement that a foom-ing AGI can gain from each insight.

What I was trying to say is that the factor for corporations is much less than 1, which makes them different from an AGI. (To see this effect, try plugging .9^x into a calculator, then 1.1^x.)
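The calculator exercise above can be sketched in a few lines of Python. This is my illustration, not anything from the thread: it just models capability after n self-improvement steps when each insight multiplies capability by a constant factor k, echoing the criticality analogy.

```python
def capability(k, steps, start=1.0):
    """Capability after `steps` compounding improvements, each scaling by k."""
    c = start
    for _ in range(steps):
        c *= k
    return c

# k < 1: improvements damp out (the claimed "corporation" case)
print(capability(0.9, 50))   # ≈ 0.005, shrinking toward zero
# k > 1: improvements compound (the claimed foom case)
print(capability(1.1, 50))   # ≈ 117.4, growing without bound
```

The qualitative gap between the two regimes is the whole argument: any k below 1 converges, any k above 1 diverges, and the difference between k = 0.9 and k = 1.1 dwarfs the difference in the factors themselves.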