sbenthall comments on Intelligence explosion in organizations, or why I'm not worried about the singularity - Less Wrong

Post author: sbenthall 27 December 2012 04:32AM 13 points

Comment author: sbenthall 28 December 2012 04:38:32PM 2 points

Ok, thanks for explaining that.

I think we agree that organizations recursively self-improve.

The remaining question is whether organizational cognitive enhancement is bounded significantly below that of an AI.

So far, most of the arguments I've encountered for why the bound on machine intelligence is much higher than the bound on human intelligence have to do with the physical differences between hardware and wetware.

I don't disagree with those arguments. What I've been trying to argue is that the cognitive processes of an organization run on both hardware and wetware substrates. So organizational cognition can take advantage of the physical properties of computers, and so is not bounded by wetware limits.

I guess I'd add here that wetware has some nice computational properties as well. It's possible that the ideal cognitive structure would efficiently use both hardware and wetware.

Comment author: AlexMennen 29 December 2012 12:17:11AM 1 point

> So organizational cognition can take advantage of the physical properties of computers, and so is not bounded by wetware limits.

Ah, so you're concerned that an organization could solve the friendly AI problem, and then make it friendly to itself rather than humanity? That's conceivable, but there are a few reasons I'm not too concerned about it.

Organizations are made mostly out of humans, and most of their agency goes through human agency, so there's a limit to how far an organization can pursue goals that are incompatible with the goals of the people comprising it. So at the very least, an organization could not intentionally produce an AGI that is unfriendly to the members of the team that produced it. It is also conceivable that the team could make the AGI friendly to its members but not to the rest of humanity, but a future utopia made perfect by AGI is about as "far" a concept as you can get (in the near/far, construal-level sense), so most people will be idealistic about it.

Comment author: timtyler 29 December 2012 02:42:18PM 1 point

> That's conceivable, but there are a few reasons I'm not too concerned about it.

> Organizations are made mostly out of humans

Is Google "made mostly out of humans"? What about its huge datacenters? They are where a lot of the real work gets done - right?

> It is also conceivable that the team could make the AGI friendly to its members but not to the rest of humanity, but a future utopia made perfect by AGI is about as "far" a concept as you can get, so most people will be idealistic about it.

So, I'm not sure I have this straight, but you seem to be saying that one of the reasons you are not concerned about this is that many people use a daft reasoning technique when dealing with future utopias, and that makes you idealistic about it?

If so, that's cool, but why should rational thinkers share your lack of concern?

Comment author: AlexMennen 29 December 2012 09:11:25PM 2 points

> Is Google "made mostly out of humans"? What about its huge datacenters? They are where a lot of the real work gets done - right?

Google's datacenters don't have much agency. Their humans do.

> many people use a daft reasoning technique when dealing with future utopias, and that makes you idealistic about it?

No, it makes them idealistic about it.