Eliezer_Yudkowsky comments on New report: Intelligence Explosion Microeconomics - Less Wrong

Post author: Eliezer_Yudkowsky | 29 April 2013 11:14PM | 45 points


Comments (244)

Comment author: Eliezer_Yudkowsky 29 April 2013 04:25:10PM 6 points

I think that teams of up to five people can scale "pretty well by human standards" - not too far from linearly. It's going up to a hundred, a thousand, a million, a billion that we start to run into incredibly sublinear returns.

Comment author: ThrustVectoring 29 April 2013 05:23:29PM 6 points

As group size increases, you have to spend more and more of your effort getting your own ideas heard and keeping up with the worthwhile ideas proposed by other people, as opposed to coming up with good ideas of your own.

Depending on the relevant infrastructure and collaboration mechanisms, each additional person can easily make a negative net contribution to the project. If someone is saying something, someone else has to listen - even if all the listener accomplishes is filtering the contribution out so it doesn't lower the signal-to-noise ratio.
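The overhead ThrustVectoring describes can be illustrated with a toy model (my own construction, not anything from the thread): suppose each worker must maintain a communication channel with every other worker, and each channel costs a fixed fraction of that worker's time. The numbers below are arbitrary assumptions chosen only to show the shape of the curve.

```python
def effective_output(n, overhead_per_channel=0.001):
    """Toy model of group productivity under communication overhead.

    Each of n workers produces 1 unit of work, but each worker also
    maintains n - 1 pairwise channels, and each channel costs a fixed
    fraction of that worker's time. Total channels grow as n*(n-1)/2,
    so total output rises roughly linearly at first, then peaks, then
    declines - and per-person output is clamped at zero once overhead
    exceeds a full workday.
    """
    per_person = max(0.0, 1.0 - overhead_per_channel * (n - 1))
    return n * per_person

# Small teams are near-linear; large groups hit sublinear and then
# negative marginal returns as coordination costs dominate.
outputs = {n: effective_output(n) for n in [5, 100, 500, 1000, 2000]}
```

With these (assumed) parameters, output peaks around 500 people and collapses entirely by 2000 - a cartoon of the "negative contribution per additional person" regime.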

Comment author: DanArmak 29 April 2013 07:50:22PM 12 points

You correctly describe the problems of coordinating the selection of the best result produced. But there's another big problem: coordinating the division of work.

When you add another player to a huge team of 5000 people, he won't start exploring a completely new series of moves no one else had considered before. Instead, he will likely spend most of his time considering moves already considered by some of the existing players. That's another reason his marginal contribution will be so low.

Unlike humans, computers are good at managing divide-and-conquer problems. In chess, much of the search for the next move is local in the move tree. That's what makes chess a particularly good example of a problem where human groups fail to scale but computers would.
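DanArmak's point - that subtrees of the move tree can be searched independently - is what schemes like "root splitting" exploit: give each worker a disjoint set of the root's moves and combine only at the end. A minimal sketch, where the toy tree encoding (lists as internal nodes, ints as leaf evaluations) and the function names are my own assumptions:

```python
from concurrent.futures import ThreadPoolExecutor

def negamax(node):
    """Score a position from the side-to-move's perspective.

    Leaves (ints) are static evaluations; internal nodes (lists)
    take the max over the negated scores of their children.
    """
    if isinstance(node, int):
        return node
    return max(-negamax(child) for child in node)

def best_move_parallel(root_children, workers=4):
    """Root splitting: workers score the root's moves independently.

    Each subtree search needs no coordination with the others; the
    only shared step is the final max over the scores - which is why
    this parallelizes cleanly where human deliberation does not.
    """
    with ThreadPoolExecutor(max_workers=workers) as pool:
        scores = list(pool.map(lambda child: -negamax(child),
                               root_children))
    return max(range(len(scores)), key=scores.__getitem__)
```

For example, on the toy tree `[[3, -2], [1, 4], [-5]]` the root scores are `[-2, 1, -5]`, so the second move (index 1) is chosen - identically whether the subtrees were searched by one worker or four.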

Comment author: mwengler 29 April 2013 10:12:40PM 2 points

Are you or anyone else aware of any work along these lines, measuring the intelligence of groups of people?

Any sense of what the intelligence of the planet as a whole might be, or the largest effective intelligence of any group on the planet?

If groups of up to 5 scale well, and we get sublinear returns above 5 (but still positive returns up to some point), does this prove that AI won't FOOM until it has an intelligence larger than the largest effective intelligence of a group of humans? That is, until an AI is more intelligent than the group, will the group of humans dominate the rate at which new AIs are improved?

Comment author: CarlShulman 29 April 2013 10:50:42PM 6 points
Comment author: timtyler 03 May 2013 10:56:55AM 3 points

> I think that teams of up to five people can scale "pretty well by human standards" - not too far from linearly. It's going up to a hundred, a thousand, a million, a billion that we start to run into incredibly sublinear returns.

That's parallelism for you. It's like the way that four-core chips are popular, while million-core chips are harder to come by.
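The chip analogy maps onto Amdahl's law, which bounds the speedup from parallelism by the serial fraction of the work. This is my gloss on timtyler's point, not something stated in the thread:

```python
def amdahl_speedup(n_workers, parallel_fraction):
    """Amdahl's law: if a fraction p of the work parallelizes
    perfectly and the rest is serial, speedup on n workers is
    1 / ((1 - p) + p / n), which approaches the hard ceiling of
    1 / (1 - p) no matter how many workers you add.
    """
    p = parallel_fraction
    return 1.0 / ((1.0 - p) + p / n_workers)
```

Even with 95% of the work parallelizable, a million workers yield under a 20x speedup - the same shape as near-linear returns for tiny teams collapsing into severely sublinear returns for huge ones.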