timtyler comments on Intelligence explosion in organizations, or why I'm not worried about the singularity - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Today's organisations are surely better candidates for self-improvement of intelligence than today's machines are.
Of course both typically depend somewhat on the surrounding infrastructure, but organisations like the US government are fairly self-sufficient - or could easily become so - whereas machines are still completely dependent on others for extended cumulative improvements.
Basically, organisations are what we have today. Future intelligent machines are likely to arise out of today's organisations. So, these things are strongly linked together.
Are tomorrow's organizations better than tomorrow's machines? Because that's what is under discussion here.
Yes, in some ways - assuming we are talking about a time when there are still lots of humans around - since organisations are a superset of humans and machines and so can combine the strengths of both.
No doubt humans will eventually become unemployable - but not until machines can do practically all their jobs better than they can. That period covers an important era which many of us are concerned with.
Ah, I didn't realize you were including machines here - organizations are usually assumed to be composed of people, but I suppose a GAI could count as a "person" for this purpose.
However, isn't this dependent on the AI not going foom? Because if it does go foom, I can't see a superintelligence remaining under any pre-singularity organization's control.
I can't say I've ever heard of that one. For example, Wikipedia has this:
If you are not considering the possibility of artifacts being components of organizations, that may explain some of the cross-talk.