First of all, it seems reasonable to wager that Google servers already do a lot of work when traffic is light: log aggregation, data mining, etc.
That's why I said "one or two orders of magnitude".
Thirdly, it's quite a leap from "there's slightly less server load at these hours" to "therefore an AI would go super-intelligent in those hours". Making such a statement at your level of expressed confidence, with little to no support, strikes me as brazen and arrogant.
Thank you. What, you think I believe what I said? I'm a Bayesian. Show me where I expressed a confidence level in that post.
If you already believed that a small AI could foom given a large portion of the world's resources, then it seems like an AI that starts out with massive computing power should foom even faster.
One variant of the "foom" argument is that software that is "about as intelligent as a human" and runs on a desktop can escape into the Internet and augment its intelligence not by having insights into how to recode itself, but just by getting orders of magnitude more processing power. That then enables it to improve its own code, starting from software no smarter than a human.
If the software can't grab many more computational resources than it was meant to run with, because those resources don't exist, that means it has to foom on raw intelligence. That raises the minimum intelligence needed for FOOM to the superhuman level.
If you believe that small-AI is both possible and dangerous, then surely you should be even more afraid of large-AI searching for small-AI with a sizable portion of the world's resources already in hand.
No. That's the point of the article! "AI" indicates a program of roughly human intelligence. The intelligence needed to count as AI, and to start an intelligence explosion, is constant. Small AI and large AI have the same level of effective intelligence. A small AI needs to be written in a much more clever manner, to get the same performance out of a desktop as out of the Google data centers. When it grabs a million times more computational power, it will be much more intelligent than a Google AI that started out with the same intelligence when running on a million servers.
That's why I said "one or two orders of magnitude".
That's not the part of your post I was criticizing. I was criticizing this:
And they will run it between midnight and 5AM Pacific time, when the load on those servers is smallest.
Which doesn't seem to be a good model of how Google servers work.
Show me where I expressed a confidence level in that post.
Confidence in English can be expressed non-numerically. Here are a few sentences that seemed brazenly overconfident to me:
I know when the singularity will occur
(Sensationalized title.)
More precisely, if we suppose that sometime in the next 30 years, an artificial intelligence will begin bootstrapping its own code and explode into a super-intelligence, I can give you 2.3 bits of further information on when the Singularity will occur.
Between midnight and 5 AM, Pacific Standard Time.
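(The "2.3 bits" figure, for what it's worth, is just the information gained by narrowing a uniform prior over a 24-hour day down to a 5-hour window. A quick sanity check:)

```python
import math

# Narrowing a uniform 24-hour prior to a 5-hour window
# (midnight to 5 AM) conveys log2(24/5) bits of information.
bits = math.log2(24 / 5)
print(round(bits, 1))  # → 2.3
```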
Why? Well, first, let's just admit this: The race to win the Singularity is over, and Google has won. They have the world's greatest computational capacity, the most expertise in massively distributed processing, the greatest collection of minds interested in and capable of work on AI, the largest store of online data, the largest store of personal data, and the largest library of scanned, computer-readable books. That includes textbooks. Like, all of them. All they have to do is subscribe to Springer-Verlag's online journals, and they'll have the entire collected knowledge of humanity in computer-readable format. They almost certainly have the biggest research budget for natural language processing with which to interpret all those things. They have two of the four smartest executives in Silicon Valley.1 Their corporate strategy for the past 15 years can be approximated as "Win the Singularity."2 If someone gave you a billion dollars today to begin your attempt, you'd still be 15 years and about two-hundred and ninety-nine billion dollars behind Google. If you believe in a circa-2030 Singularity, there isn't enough time left for anybody to catch up with them.
(And I'm okay with that, considering that the other contenders include Microsoft and the NSA. But it alarms me that Google hasn't gone into bioinformatics or neuroscience. Apparently their plans don't include humans.)
So the first bootstrapping AI will be created at Google. It will be designed to use Google's massive distributed server system. And they will run it between midnight and 5AM Pacific time, when the load on those servers is smallest.
A more important implication is that this scenario decreases the possibility of FOOM. The AI will be designed to run on the computational resources available to Google, and they'll build and test it as soon as they think there is just enough computational power for it to run. That means its minimum computational requirements will be within one or two orders of magnitude of the combined power of all the computers on Earth. (We don't know how many servers Google has, but we know they installed their one millionth server on July 9, 2008. Google may—may—own less than 1% of the world's CPU power, but connectivity within its system is vastly superior to that between other internet servers, let alone a botnet of random compromised PCs.)
So when the AI breaks out of the computational grid composed of all the Google data centers in the world, into the "vast wide world of the Internet", it's going to be very disappointed.
Of course, the distribution of computational power will change before then. Widespread teraflop GPU graphics cards could change this scenario completely in the next ten years.
In which case Google might take a sudden interest in GPUs...
ADDED:
Do I really believe all that? No. I do believe "Google wins" is a likely scenario—more likely than "X wins" for any other single value of X. Perhaps more importantly, you need to factor the size of the first AI built into your FOOM-speed probability distribution, because if the first AI is built by a large organization, with a lot of funding, that changes the FOOM paths open to it.
AI FOOMs if it can improve its own intelligence in one way or another. The people who build the first AI will make its algorithms as efficient as they are able to. For the AI to make itself more intelligent by scaling, it has to get more resources, while to make itself more intelligent by algorithm redesign, it will have to be smarter than the smartest humans who work on AI. The former is trivial for an AI built in a basement, but severely limited for an AI brought to life at the direction of Page and Brin.
The first "human-level" AI will probably be roughly as smart as a human, for three reasons: people will try to build AIs before AIs can reach that level; the distribution of effectiveness of attempted AIs will be skewed hard left, with many failures before the first success; and the first success will be only a marginal improvement over the last failure. That means the first AI will have about the same effective intelligence, regardless of how it's built.
As smart as "a human" is closer to "some human" than to "all humans". The first AI will almost certainly be at most as intelligent as the average human, and considerably less intelligent than its designers. But for an AI to make itself smarter through algorithm improvement requires the AI to have more intelligence than the smartest humans working on AI (the ones who just built it).
The easier, more-likely AI-foom path is: Build an AI as smart as a chimp. That AI grabs (or is given) orders of magnitude of resources, and gets smarter simply by brute force. THEN it redesigns itself.
That scaling-foom path is harder for AIs that start big than AIs that start small. This means that the probability distribution for FOOM speed depends on the probability distribution for the amount of dollars that will be spent to build the first AI.
Remember you are Bayesians. Your objective is not to accept or reject the hypothesis that the first AI will be developed according to this scenario. Your objective is to consider whether these ideas change the probability distribution you assign to FOOM speed.
The question I hope you'll ask yourself now is not, "Won't data centers in Asia outnumber those in America by then?", nor, "Isn't X smarter than Larry Page?", but, "What is the probability distribution over <capital investment that will produce the first average-human-level AI>?" I expect that the probabilities will be dominated by large investments, because the probability distribution over "capital investment that will produce the first X" appears to me to be dominated in recent decades by large investments, for similarly-ambitious X such as "spaceflight to the moon" or "sequence of the human genome". A very clever person could have invented low-cost genome sequencing in the 1990s and sequenced the genome him/herself. But no very clever person did.
1. I'm counting Elon Musk and Peter Thiel as the others.
2. This doesn't need to be intentional. Trying to dominate information search should look about the same as trying to win the Singularity. Think of it as a long chess game in which Brin and Page keep making good moves that strengthen their position. Eventually they'll look around and find they're in a position to checkmate the world.