falenas108 comments on Intelligence explosion in organizations, or why I'm not worried about the singularity - Less Wrong

13 Post author: sbenthall 27 December 2012 04:32AM




Comment author: falenas108 28 December 2012 02:43:38AM 1 point

if an organization depends on software to manage some part of its information processing, and it has developers that work on that source code, can't the organization modify its own source code?

Such an organisation can self-modify, but those modifications aren't recursive. They can't use one improvement to fuel another; they would have to come up with the next one independently (or, if they could, not nearly to the extent that an AGI could. If you want me to go into more detail on this, let me know).

If the intelligence has any skills that are irrelevant to its goals, it would be irrational for it to maintain those skills.

The point isn't that an AGI has or does not have certain skills. It's that it has the ability to learn those skills. Deep Blue doesn't have the capacity to learn anything other than playing chess, while humans, despite never running into a flute in the ancestral environment, can learn to play the flute.

Comment author: sbenthall 28 December 2012 04:28:59PM 2 points

They can't use one improvement to fuel another; they would have to come up with the next one independently

I disagree.

Suppose an organization has developers who work in-house on their issue tracking system (several organizations do this, mostly software companies).

An issue tracking system is essentially a way for an organization to manage information flow about bugs, features, and patches to its own software. The issue tracker (as a running application) coordinates between developers and the source code itself (sometimes, its own source code).

Taken as a whole, the developers, issue tracker implementation, and issue tracker source code are part of the distributed cognition of the organization.

I think that in this case, an organization's self-improvement to the issue tracker source code recursively 'fuels' other improvements to the organization's cognition.

The point isn't that an AGI has or does not have certain skills. It's that it has the ability to learn those skills. Deep Blue doesn't have the capacity to learn anything other than playing chess, while humans, despite never running into a flute in the ancestral environment, can learn to play the flute.

Fair enough. But then we should hold organizations to the same standard. Suppose, for whatever reason, an organization needs better-than-median-human flute-playing for some purpose. What then?

Then they hire a skilled flute-player, right?

I think we may be arguing over an issue of semantics. I agree with you substantively that general intelligence is about adaptability, gaining and losing skills as needed.

My point in the OP was that organizations and the hypothetical AGI have comparable kinds of intelligence, so we can think about them as comparable superintelligences.

Comment author: falenas108 30 December 2012 02:06:27AM -1 points

I think that in this case, an organization's self-improvement to the issue tracker source code recursively 'fuels' other improvements to the organization's cognition.

Yes, it can fuel improvement. But not to the same level that an AGI that is foom-ing would. See this thread for details: http://lesswrong.com/lw/g3m/intelligence_explosion_in_organizations_or_why_im/85zw

I think we may be arguing over an issue of semantics. I agree with you substantively that general intelligence is about adaptability, gaining and losing skills as needed.

My point in the OP was that organizations and the hypothetical AGI have comparable kinds of intelligence, so we can think about them as comparable superintelligences.

I agree that organizations may be seen as similar in many ways to an AGI with supra-human intelligence, but not in their ability to self-modify.

Comment author: timtyler 29 December 2012 02:04:01PM 0 points

Such an organisation can self-modify, but those modifications aren't recursive. They can't use one improvement to fuel another; they would have to come up with the next one independently

Really? It seems to me as though software companies do this all the time. Think about Eclipse, for instance. The developers of Eclipse use Eclipse to program Eclipse with. Improvements to it help them make further improvements directly.

(or if they could, it wouldn't be nearly to the extent that an AGI could

So, the recursive self-improvement is a matter of degree? It sounds as though you now agree.

Comment author: falenas108 29 December 2012 03:49:14PM -1 points

It's like the post here: http://lesswrong.com/lw/w5/cascades_cycles_insight/

It's highly unlikely a company will be able to get >1.

Comment author: timtyler 29 December 2012 07:33:46PM -1 points

It's like the post here: http://lesswrong.com/lw/w5/cascades_cycles_insight/

To me, that just sounds like confusion about the relationship between genetic and psychological evolution.

It's highly unlikely a company will be able to get >1.

Um, >1 what? It's easy to make irrefutable predictions when what you say is vague and meaningless.

Comment author: falenas108 30 December 2012 02:03:40AM -1 points

The point of the article is that if the recursion feeds back into itself strongly enough, each new insight allows for further insights, as with the chain reaction in the uranium of a nuclear bomb. >1 refers to the average number of further improvements that each insight yields for an AGI that is foom-ing.

What I was trying to say is that the factor for corporations is much less than 1, which makes them different from an AGI. (To see the effect, try plugging 0.9^x into a calculator, then 1.1^x.)
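The compounding argument above can be sketched in a few lines of Python. Here k stands for the "multiplication factor": the average number of further insights each insight enables, by analogy with neutron multiplication in a chain reaction. The values 0.9 and 1.1 are just the illustrative numbers from the comment, not estimates for any real organization or AGI.

```python
def insights_after(k, generations):
    """Total insights accumulated over n generations, starting from one.

    Each generation's insights spawn k new insights on average, so the
    total is the partial sum of the geometric series 1 + k + k^2 + ...
    """
    total, current = 0.0, 1.0
    for _ in range(generations):
        total += current
        current *= k  # each insight enables k follow-on insights
    return total

# With k < 1 the process fizzles out toward a finite ceiling of 1/(1-k);
# with k > 1 it grows without bound.
print(insights_after(0.9, 100))  # converges toward 10
print(insights_after(1.1, 100))  # explodes
```

With k = 0.9 the total never exceeds 10 no matter how long the process runs, while with k = 1.1 it passes 100,000 within a hundred generations; that gap between sub-critical and super-critical is the whole point of the analogy.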