So8res comments on I know when the Singularity will occur - Less Wrong Discussion
That's not the part of your post I was criticizing. I was criticizing this:
Which doesn't seem to be a good model of how Google servers work.
Confidence in English can be expressed non-numerically. Here are a few sentences that seemed brazenly overconfident to me:
(Sensationalized title.)
(The number of significant digits you're claiming in your measure of transmitted information implies confidence that I don't think you should possess.)
(I understand that among Bayesians there is no certainty, and that a statement of fact should be taken as a statement of high confidence. I did not take this paragraph to express certainty; however, it surely seems to express higher confidence than your arguments merit.)
Did you even read my counter-argument?
I concede that a large-AI could foom slower than a small-AI, if decreasing resource usage is harder than resource acquisition. But you haven't supported this (rather bold) claim. Scaling a program up is difficult. So is acquiring more servers. So is optimizing a program to run on fewer resources. Fooming is hard no matter how you do it. Your argument hinges upon resource-usage reduction being far more difficult than scaling, which doesn't seem obvious to me.
But suppose that I accept it: the Google AI still brings about a foom earlier than it would have come otherwise. A large-AI seems more capable of finding a small-AI (it has some first-hand AI insights, lots of computing power, and a team of Google researchers on its side) than an independent team of humans.
I don't buy it. At best, it doesn't foom as fast as a small-AI could. Even then, it still seems to drastically increase the probability of a foom.
The confidence I expressed linguistically was to avoid making the article boring. It shouldn't matter to you how confident I am anyway. Take the ideas and come up with your own probabilities.
The key point, as far as I'm concerned, is that an AI built by a large corporation for a large computational grid doesn't have this easy FOOM path open to it: Stupidly add orders of magnitude of resources; get smart; THEN redesign self. So size of entity that builds the first AI is a crucial variable in thinking about foom scenarios.
I consider it very possible that dollars-that-will-be-spent-to-build-the-first-AI follow a power-law distribution, and hence are dominated by large corporations, so that scenarios involving them should carry more weight in your estimates than scenarios involving lone-wolf hackers, no matter how many of those hackers there are.
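To spell out why a power law implies tail dominance, here is a sketch under an assumed Pareto form; the exponent α and minimum budget x_m are illustrative parameters, not estimates of anything:

```latex
% Assume project budgets X follow a Pareto law with minimum x_m and
% tail exponent \alpha (both assumed, purely for illustration):
%   P(X > x) = (x_m / x)^\alpha,  for x \ge x_m.
% Expected dollars contributed by projects with budgets above x:
\int_x^{\infty} t \, \alpha x_m^{\alpha} t^{-\alpha-1} \, dt
  = \frac{\alpha x_m^{\alpha}}{\alpha - 1}\, x^{1-\alpha},
  \qquad \alpha > 1.
% For \alpha near 1 this decays very slowly in x: most of the
% probability-weighted dollars sit with the few largest budgets,
% no matter how numerous the small ones are.
```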
I do think resource-usage reduction is far more difficult than scaling. The former requires radically new application-specific algorithms; the latter uses general solutions that Google is already familiar with. In fact, I'll go out on a limb here and say I know (for Bayesian values of the word "know") that resource-usage reduction is far more difficult than scaling. Scaling is routine and happens continually at every major website and web application. Reducing the order of complexity of an algorithm happens every 10 years or so, and is considered publication-worthy (which scaling is not).
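To put rough numbers on this asymmetry, here is a toy sketch; the problem size and per-server throughput are picked purely for illustration, and the scale-out model is idealized:

```python
import math

# Toy comparison (all constants illustrative, not measured): even
# idealized linear scale-out cannot substitute for dropping an
# algorithm's order of complexity at large n.

N = 10**9            # problem size (assumed)
OPS_PER_SEC = 10**9  # per-server throughput (assumed)

def wall_clock_seconds(total_ops, servers):
    """Idealized scale-out: perfect linear speedup across servers."""
    return total_ops / (servers * OPS_PER_SEC)

quadratic_ops = N**2              # naive O(n^2) algorithm
loglinear_ops = N * math.log2(N)  # redesigned O(n log n) algorithm

print(f"O(n^2) on one server:     {wall_clock_seconds(quadratic_ops, 1):.2e} s")
print(f"O(n log n) on one server: {wall_clock_seconds(loglinear_ops, 1):.2e} s")

# Servers needed for the O(n^2) version to match the single-machine
# O(n log n) version: about 3.3e7, i.e. tens of millions.
print(f"{quadratic_ops / loglinear_ops:.2e} servers to match by scaling alone")
```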
My argument has larger consequences (a greater FOOM delay) if this is true, but it doesn't depend on this claim to imply some delay. The big AI has to scale itself down a very great deal simply to be as resource-efficient as the small AI. After doing so, it is then in exactly the same starting position as the small AI. So foom is delayed by however long it takes a big AI to scale itself down to a small AI.
Yes, foom at an earlier date. But a foom with more advance warning, at least to someone.
No; the large AI is the first AI built, and is therefore roughly as smart as a human, whether it is big or small.