
KatjaGrace comments on Superintelligence 6: Intelligence explosion kinetics - Less Wrong Discussion

Post author: KatjaGrace 21 October 2014 01:00AM




Comment author: SteveG 21 October 2014 05:50:30AM 3 points

So, when the AI is turned on, there could be a hardware overhang of 1, 10, or 100 right within the computer it is on.

My belief is that the development team, if there is just one, has a good chance of preventing this initial AI from finding its way to the internet (since I gather this is controversial, I should substantiate that claim later), and that attaching more processing power to it will remain under their control.

If they do decide to add hardware to the first AI, whether by enlarging the server farm or by a cloud or internet release, I think I can make a case that the team will be able to justify the investment for a x1000 hardware speedup fairly quickly, provided the cost of the first set of hardware is less than $1 Billion, or that there is a chance of expropriating the resources.

The team can also decide to optimize how well the AI runs on existing hardware, gaining another x1000. That kind of optimization process might take roughly two years today.

So, as a point estimate, I think the developers have the option of increasing the hardware overhang by x1,000,000 in two years or less. I can work on a better estimate over time.

That x1,000,000 improvement can be used either for speed AI or to run additional processes.
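The rough arithmetic behind that point estimate can be sketched as follows (a minimal illustration; the factor names and values are SteveG's rough estimates from the comment above, not measurements):

```python
# Sketch of the point estimate: two roughly independent x1000 factors.
hardware_speedup = 1_000   # estimated gain from adding ~$1 Billion of hardware
software_speedup = 1_000   # estimated gain from ~2 years of human optimization

# The factors multiply because they apply to different layers of the system.
overhang_multiplier = hardware_speedup * software_speedup
print(overhang_multiplier)  # 1000000
```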

Comment author: KatjaGrace 21 October 2014 09:10:36PM 3 points

"So, when the AI is turned on, there could be a hardware overhang of 1, 10, or 100 right within the computer it is on."

I didn't follow where this came from.

Also, when you say 'hardware overhang', do you mean the speedup available by buying more hardware at some fixed rate? Or could you elaborate - it seems a bit different from the usage I'm familiar with.

Comment author: SteveG 22 October 2014 12:19:00AM 3 points

Here is what I mean by "hardware overhang." It's different from what you discussed.

Let's suppose that YouTube just barely runs in a satisfactory way on a computer with an 80486 processor. If we move up to a processor with 10X the speed, or we move to a computer with ten 80486 processors, for this YouTube application we now have a "hardware overhang" of nine. We can run the YouTube application ten times and it still performs OK in each of these ten runs.

So, when we turn on an AI system on a computer, let's say a neuromorphic NLP system, we might have enough processing power to run several copies of it right on that computer.

Yes, a firmer definition of "satisfactory" is necessary for this concept to be used in a study.

Yes, this basic approach assumes that the AI processes act fully independently and in parallel, rather than interacting. We do not have to keep either of those assumptions later.

Anyway, what I am saying here is the following:

Let's say that in 2030 a neuromorphic AI system is running on standard cloud hardware in a satisfactory way according to a specific set of benchmarks, and that the hardware cost is $100 Million.

If ten copies of the AI can run on that hardware, and still meet the defined benchmarks, then there is a hardware overhang of nine on that computer.
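Under this definition, the overhang is simply the number of satisfactory copies beyond the first. A minimal sketch (the function name is mine, for illustration only):

```python
def hardware_overhang(satisfactory_copies: int) -> int:
    """Number of copies that run beyond the first while still meeting the benchmarks."""
    return satisfactory_copies - 1

# Ten copies run satisfactorily on the $100 Million cloud hardware:
print(hardware_overhang(10))  # 9
```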

If, for example, a large government could marshal at least $100 Billion at that time to rent or quickly build more of the same hardware on which to run this AI, then the hardware overhang gains another x1000.

What I am further saying is that at the moment this AI is created, it may be coded in an inefficient way that is subject to software optimization by human engineers, like the famous IBM AI systems have been. I estimate that software optimization frequently gives a x1000 improvement.

That is the (albeit rough) chain of reasoning that leads me to think that a x1,000,000 hardware overhang will develop very quickly for a powerful AI system, even if the AI does not get in the manufacturing business itself quite yet.

I am trying to provide a dollop of analysis for understanding take-off speed, and I am saying that AI systems can reach 1,000,000 times their initial power shortly after they are invented, even if they DO NOT recursively self-improve.