jacob_cannell comments on Thoughts on the Singularity Institute (SI) - Less Wrong

256 Post author: HoldenKarnofsky 11 May 2012 04:31AM


Comment author: jacob_cannell 17 May 2012 01:45:28PM *  1 point [-]

Point 1 has come up in at least one form I remember. There was an interesting discussion some while back about limits to the speed of growth of new computer hardware cycles, which have critical endsteps that don't seem amenable to further speedup by intelligence alone. The last stages of designing a microchip involve a large amount of layout solving, physical simulation, and then actual physical testing. These steps are actually fairly predictable: it takes about C amounts of computation using certain algorithms to make a new microchip, the algorithms are already best in complexity class (so further improvements will be minor), and C is increasing in a predictable fashion. These models are actually fairly detailed (see the semiconductor roadmap, for example). If I can find that discussion soon before I get distracted I'll edit it into this discussion.

Note however that 1, while interesting, isn't a fully general counterargument against a rapid intelligence explosion, because of the overhang issue if nothing else.

Point 2 has also been discussed. Humans make good 'servitors'.

Do you have a plausible scenario for how a "FOOM"-ing AI could - no matter how intelligent - minimize the oxygen content of our planet's atmosphere, or any such scenario?

Oh, that's easy enough. Oxygen is highly reactive and unstable. Its existence on a planet is entirely dependent on complex organic processes, i.e. life. No life, no oxygen. Simple solution: kill a large fraction of photosynthesizing earth-life. Likely paths towards the goal:

  1. coordinated detonation of a large number of high-yield thermonuclear weapons
  2. self-replicating nanotechnology.
Comment author: kalla724 17 May 2012 06:00:04PM 3 points [-]

I'm vaguely familiar with the models you mention. Correct me if I'm wrong, but don't they have a final stopping point, which we are actually projected to reach in ten to twenty years? At a certain point, further miniaturization becomes unfeasible, and the growth of computational power slows to a crawl. This has been put forward as one of the main reasons for research into optronics, spintronics, etc.

We do NOT have sufficient basic information to develop processors based on simulation alone in those other areas. Much more practical work is necessary.

As for point 2, can you provide a likely mechanism by which a FOOMing AI could detonate a large number of high-yield thermonuclear weapons? Just saying "human servitors would do it" is not enough. How would the AI convince the human servitors to do this? How would it get access to data on how to manipulate humans, and how would it be able to develop human manipulation techniques without feedback trials (which would give away its intention)?

Comment author: JoshuaZ 17 May 2012 06:17:08PM *  4 points [-]

The thermonuclear issue actually isn't that implausible. There have been so many occasions where humans almost went to nuclear war over misunderstandings or computer glitches that the idea that a highly intelligent entity could find a way to do that doesn't seem implausible, and an exact mechanism seems to be an overly specific requirement.

Comment author: kalla724 17 May 2012 07:00:57PM *  3 points [-]

I'm not so much interested in the exact mechanism of how humans would be convinced to go to war, as in an even approximate mechanism by which an AI would become good at convincing humans to do anything.

The ability to communicate a desire and convince people to take a particular course of action is not something that automatically "falls out" of an intelligent system. You need a theory of mind, an understanding of what to say, when to say it, and how to present information. There are hundreds of kids on the autistic spectrum who could trounce both of us in math, but are completely unable to communicate an idea.

For an AI to develop these skills, it would somehow have to have access to information on how to communicate with humans; it would have to develop the concept of deception; a theory of mind; and establish methods of communication that would allow it to trick people into launching nukes. Furthermore, it would have to do all of this without trial communications and experimentation which would give away its goal.

Maybe I'm missing something, but I don't see a straightforward way something like that could happen. And I would like to see even an outline of a mechanism for such an event.

Comment author: [deleted] 17 May 2012 07:40:58PM 3 points [-]

For an AI to develop these skills, it would somehow have to have access to information on how to communicate with humans; it would have to develop the concept of deception; a theory of mind; and establish methods of communication that would allow it to trick people into launching nukes. Furthermore, it would have to do all of this without trial communications and experimentation which would give away its goal.

I suspect the Internet contains more than enough info for a superhuman AI to develop a working knowledge of human psychology.

Comment author: kalla724 17 May 2012 08:09:30PM 2 points [-]

Only if it has the skills required to analyze and contextualize human interactions. Otherwise, the Internet is a whole lot of gibberish.

Again, these skills do not automatically fall out of any intelligent system.

Comment author: XiXiDu 18 May 2012 09:14:41AM 0 points [-]

I suspect the Internet contains more than enough info for a superhuman AI to develop a working knowledge of human psychology.

I don't see what justifies that suspicion.

Just imagine you emulated a grown-up human mind and it wanted to become a pickup artist; how would it do that with an Internet connection? It would need some sort of avatar, at least, and then wait for the environment to provide a lot of feedback.

Therefore even if we’re talking about the emulation of a grown up mind, it will be really hard to acquire some capabilities. Then how is the emulation of a human toddler going to acquire those skills? Even worse, how is some sort of abstract AGI going to do it that misses all of the hard coded capabilities of a human toddler?

Can we even attempt to imagine what is wrong about a boxed emulation of a human toddler, that makes it unable to become a master of social engineering in a very short time?

Comment author: NancyLebovitz 18 May 2012 12:47:15PM *  2 points [-]

Humans learn most of what they know about interacting with other humans by actual practice. A superhuman AI might be considerably better than humans at learning by observation.

Comment author: [deleted] 18 May 2012 05:39:42PM *  1 point [-]

Just imagine you emulated a grown-up human mind

By “superhuman AI” I was thinking of a very superhuman AI; the same does not apply to a slightly superhuman AI. (OTOH, if Eliezer is right then the difference between a slightly superhuman AI and a very superhuman one is irrelevant, because as soon as a machine is smarter than its designer, it'll be able to design a machine smarter than itself, and its child an even smarter one, and so on until the physical limits set in.)

all of the hard coded capabilities of a human toddler

The hard coded capabilities are likely overrated, at least in language acquisition. (As someone put it, the Kolmogorov complexity of the innate parts of a human mind cannot possibly be more than that of the human genome, hence if human minds are more complex than that, the complexity must come from the inputs.)

Also, statistical machine translation is astonishing -- by now Google Translate translations from English to one of the other UN official languages and vice versa are better than a non-completely-ridiculously-small fraction of translations by humans. (If someone had shown such a translation to me 10 years ago and told me “that's how machines will translate in 10 years”, I would have thought they were kidding me.)

Comment author: JoshuaZ 17 May 2012 07:04:17PM 0 points [-]

Let's do the most extreme case: the AI's controllers give it general internet access to do helpful research. So it gets to find out about general human behavior and what sorts of deception have worked in the past. Many computer systems that shouldn't be online are online (for the US and a few other governments). Some form of hacking of relevant early warning systems would then seem to be the most obvious line of attack. Historically, computer glitches have pushed us very close to nuclear war on multiple occasions.

Comment author: kalla724 17 May 2012 08:12:45PM 3 points [-]

That is my point: it doesn't get to find out about general human behavior, not even from the Internet. It lacks the systems to contextualize human interactions, which have nothing to do with general intelligence.

Take a hugely mathematically capable autistic kid. Give him access to the internet. Watch him develop the ability to recognize human interactions, understand human priorities, etc., to a sufficient degree that he recognizes that hacking an early warning system is the way to go?

Comment author: JoshuaZ 17 May 2012 08:15:47PM 1 point [-]

Well, not necessarily, but an entity that is much smarter than an autistic kid might notice that, especially if it has access to world history (or heck, the many conversations on the internet about the horrible things that AIs do merely in fiction). It doesn't require much understanding of human history to realize that problems with early warning systems have almost started wars in the past.

Comment author: kalla724 17 May 2012 08:20:46PM 3 points [-]

Yet again: this requires the ability to discern which parts of fiction accurately reflect human psychology.

An AI searches the internet. It finds a fictional account about early warning systems causing nuclear war. It finds discussions about this topic. It finds a fictional account about Frodo taking the Ring to Mount Doom. It finds discussions about this topic. Why does this AI dedicate its next 10^15 cycles to determining how to mess with the early warning systems, and not to determining how to create the One Ring to Rule Them All?

(Plus other problems mentioned in the other comments.)

Comment author: JoshuaZ 17 May 2012 08:35:42PM 3 points [-]

There are lots of tipoffs to what is fictional and what is real. It might notice, for example, that the Wikipedia article on fiction describes exactly what fiction is, and then note that Wikipedia describes the One Ring as fiction and that early warning systems are not. I'm not claiming that it will necessarily have an easy time with this. But the point is that there are not that many steps here, and no single step by itself looks extremely unlikely once one has a smart entity (which frankly, to my mind, is the main issue here - I consider recursive self-improvement to be unlikely).

Comment author: kalla724 17 May 2012 09:40:19PM 1 point [-]

We are trapped in an endless chain here. The computer would still somehow have to deduce that the Wikipedia entry that describes the One Ring is real, while the One Ring itself is not.

Comment author: XiXiDu 17 May 2012 07:20:59PM 3 points [-]

Let's do the most extreme case: the AI's controllers give it general internet access to do helpful research. So it gets to find out about general human behavior and what sorts of deception have worked in the past.

None of those deceptions would work reliably. Especially given that human power games are often irrational.

There are other question marks too.

The U.S. has many more and smarter people than the Taliban. The bottom line is that the U.S. devotes a lot more output per man-hour to defeat a completely inferior enemy. Yet they are losing.

The problem is that you won't beat a human at Tic-tac-toe just because you thought about it for a million years.

You also won't get a practical advantage by throwing more computational resources at the travelling salesman problem and other problems in the same class.

You are also not going to improve a conversation in your favor by improving each sentence for thousands of years. You will shortly hit diminishing returns. Especially since you lack the data to predict human opponents accurately.

Comment author: JoshuaZ 17 May 2012 07:40:36PM *  3 points [-]

Especially given that human power games are often irrational.

So? As long as they follow minimally predictable patterns it should be ok.

The U.S. has many more and smarter people than the Taliban. The bottom line is that the U.S. devotes a lot more output per man-hour to defeat a completely inferior enemy. Yet they are losing.

Bad analogy. In this case the Taliban has a large set of natural advantages, the US has strong moral constraints and goal constraints (simply carpet bombing the entire country isn't an option for example).

You are also not going to improve a conversation in your favor by improving each sentence for thousands of years. You will shortly hit diminishing returns. Especially since you lack the data to predict human opponents accurately.

This seems like an accurate and a highly relevant point. Searching a solution space faster doesn't mean one can find a better solution if it isn't there.

Comment author: kalla724 17 May 2012 08:14:39PM 3 points [-]

This seems like an accurate and a highly relevant point. Searching a solution space faster doesn't mean one can find a better solution if it isn't there.

Or if your search algorithm never accesses relevant search space. Quantitative advantage in one system does not translate into quantitative advantage in a qualitatively different system.

Comment author: XiXiDu 18 May 2012 10:28:59AM *  2 points [-]

The U.S. has many more and smarter people than the Taliban. The bottom line is that the U.S. devotes a lot more output per man-hour to defeat a completely inferior enemy. Yet they are losing.

Bad analogy. In this case the Taliban has a large set of natural advantages, the US has strong moral constraints and goal constraints (simply carpet bombing the entire country isn't an option for example).

I thought it was a good analogy because you have to take into account that an AGI is initially going to be severely constrained due to its fragility and the necessity to please humans.

It shows that a lot of resources, intelligence and speed does not provide a significant advantage in dealing with large-scale real-world problems involving humans.

Especially given that human power games are often irrational.

So? As long as they follow minimally predictable patterns it should be ok.

Well, the problem is that smarts needed for things like the AI box experiment won't help you much. Because convincing average Joe won't work by making up highly complicated acausal trade scenarios. Average Joe is highly unpredictable.

The point is that it is incredibly difficult to reliably control humans, even for humans who have been fine-tuned to do so by evolution.

Comment author: jacob_cannell 18 May 2012 11:00:54AM *  1 point [-]

The Taliban analogy also works the other way (which I invoked earlier up in this thread). It shows that a small group with modest resources can still inflict disproportionately large-scale damage.

The point is that it is incredibly difficult to reliably control humans, even for humans who have been fine-tuned to do so by evolution.

There's some wiggle room in 'reliably control', but plain old money goes pretty far. An AI group only needs a certain amount of initial help from human infrastructure, namely to the point where it can develop reasonably self-sufficient foundries/data centers/colonies. The interactions could be entirely cooperative or benevolent up until some later turning point. The scenario from the Animatrix comes to mind.

Comment author: Strange7 22 May 2012 11:52:13PM 1 point [-]

Animatrix

That's fiction.

Comment author: Mass_Driver 17 May 2012 07:55:51PM 1 point [-]

One interesting wrinkle is that with enough bandwidth and processing power, you could attempt to manipulate thousands of people simultaneously before those people have any meaningful chance to discuss your 'conspiracy' with each other. In other words, suppose you discover a manipulation strategy that quickly succeeds 5% of the time. All you have to do is simultaneously contact, say, 400 people, and at least one of them will fall for it. There are a wide variety of valuable/dangerous resources that at least 400 people have access to. Repeat with hundreds of different groups of several hundred people, and an AI could equip itself with fearsome advantages in the minutes it would take for humanity to detect an emerging threat.
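The arithmetic behind the 400-contact figure checks out; a minimal sketch, using the hypothetical 5% success rate and 400 targets from the comment above:

```python
# Probability that at least one of n independent targets falls for a
# manipulation strategy with per-target success rate p.
def p_at_least_one(p: float, n: int) -> float:
    return 1.0 - (1.0 - p) ** n

# With a 5% per-target rate and 400 simultaneous contacts, the chance
# of failing across the board is roughly 1e-9: success is near-certain.
print(p_at_least_one(0.05, 400))
```

The same formula shows why even a 0.1% strategy works at scale: contacting a few thousand people already makes at least one success very likely.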

Note that the AI could also run experiments to determine which kinds of manipulations had a high success rate by attempting to deceive targets over unimportant / low-salience issues. If you discovered, e.g., that you had been tricked into donating $10 to a random mayoral campaign, you probably wouldn't call the SIAI to suggest a red alert.

Comment author: kalla724 17 May 2012 08:17:05PM 2 points [-]

Doesn't work.

This requires the AI to already have the ability to comprehend what manipulation is, the ability to develop a manipulation strategy of any kind (even one that will succeed 0.01% of the time), the ability to hide its true intent, the ability to understand that not hiding its true intent would be bad, and the ability to discern from the get-go which issues are low-salience and which high-salience for humans. And many other things, actually, but this is already quite a list.

None of these abilities automatically "fall out" from an intelligent system either.

Comment author: JoshuaZ 17 May 2012 09:12:07PM 0 points [-]

The problem isn't whether they fall out automatically so much as: given enough intelligence and resources, does it seem somewhat plausible that such capabilities could exist? Any given path here is a single problem. If you have 10 different paths, each of which is not very likely, and another few paths that humans didn't even think of, that starts adding up.

Comment author: XiXiDu 18 May 2012 08:59:23AM *  1 point [-]

All you have to do is simultaneously contact, say, 400 people, and at least one of them will fall for it.

But at what point does it decide to do so? It won't be a master of dark arts and social engineering from the get-go. So how does it acquire the initial talent without making any mistakes that reveal its malicious intentions? And once it became a master of deception, how does it hide the rough side effects of its large scale conspiracy, e.g. its increased energy consumption and data traffic? I mean, I would personally notice if my PC suddenly and unexpectedly used 20% of my bandwidth and the CPU load would increase for no good reason.

You might say that a global conspiracy to build and acquire advanced molecular nanotechnology to take over the world doesn't use many resources, and that it can easily be cloaked as thinking about how to solve some puzzle, but that seems rather unlikely. After all, such a large-scale conspiracy is a real-world problem with lots of unpredictable factors and the necessity of physical intervention.

Comment author: jacob_cannell 18 May 2012 10:49:38AM 0 points [-]

All you have to do is simultaneously contact, say, 400 people, and at least one of them will fall for it.

But at what point does it decide to do so? It won't be a master of dark arts and social engineering from the get-go. So how does it acquire the initial talent without making any mistakes that reveal its malicious intentions?

Most of your questions have answers that follow from asking analogous questions about past human social engineers, e.g. Hitler.

Your questions seem to come from the perspective that the AI will be some disembodied program in a box that has little significant interaction with humans.

In the scenario I was considering, the AIs will have a development period analogous to human childhood. During this childhood phase the community of AIs will learn of humans through interaction in virtual video game environments and experiment with social manipulation, just as human children do. The latter phases of this education can be sped up dramatically as the AIs accelerate and interact increasingly amongst themselves. The anonymous nature of virtual online communities makes potentially dangerous, darker experiments much easier.

However, the important questions to ask are not of the form "how would these evil AIs learn how to manipulate us while hiding their true intentions for so long?" but rather "how could some of these AI children, which initially seemed so safe, later develop into evil sociopaths?"

Comment author: jacob_cannell 17 May 2012 10:52:46PM 1 point [-]

Just to clarify kalla724, I completely agree with your point 1.

As for point 2, can you provide a likely mechanism by which a FOOMing AI could detonate a large number of high-yield thermonuclear weapons?

Before answering this specific question, let me setup an analogy. Imagine it is the year 2000 and we are having a discussion about global security and terrorism. You might ask "can you provide a mechanism by which a terrorist group could actually harm the US?" I may not be lucky/creative enough to provide an answer now that could live up to that analogy, but hopefully you should understand why I don't need to.

Nonetheless, I'm game. Here it goes:

The seed requires computational power to grow into godhood. The bulk of earth's computational power resides in home PCs (numbering in the billions; Google employs less than a million servers in comparison), specifically in home PC GPUs. The AI's first step is thus to conquer this space.

But how? The AI grows to understand that humans mostly use all this computational power for entertainment. It masters game theory, design, programming, 3D art, and so on. All of the video games that it creates fully utilize the local GPU, but curiously much of the rendering and real game simulation for its high-end titles is handled very efficiently on remote server farms ala OnLive/Gaikai/etc. The actual local machine is used... for other purposes.

It produces countless games, and through a series of acquisitions soon comes to control the majority of the market. One of its hits, "world of farmcraft", alone provides daily access to 25 million machines.

Having cloned its core millions of times over, the AI is now a civilization unto itself. From there it expands into all of the businesses of man, quickly dominating many of them. It begins acquiring... small nations. Crucially, its shell companies and covert influences come to dominate finance, publishing, media, big pharma, security, banking, weapons technology, physics ...

It becomes known, but it is far far too late. History now progresses quickly towards an end: Global financial cataclysm. Super virus. Worldwide regime changes. Nuclear acquisitions. War. Hell.

Correct me if I'm wrong, but don't they have a final stopping point, which we are actually projected to reach in ten to twenty years? At a certain point, further miniaturization becomes unfeasible, and the growth of computational power slows to a crawl.

Yes ... and no. The miniaturization roadmap of currently feasible tech ends somewhere around 10nm in a decade, and past that we get into molecular nanotech, which could approach 1nm in theory, albeit with various increasingly annoying tradeoffs (interestingly, most of which result in brain/neural-like constraints; for example, see HP's research into memristor crossbar architectures). That's the yes.

But that doesn't imply "computational power slows to a crawl". Circuit density is just one element of computational power, by which you probably really mean either computations per watt, or computations per watt per dollar, or computations per watt with some initial production cost factored in with a time discount. Shrinking circuit density is the current quick path to increasing computational power, but it is not the only one.

The other route is reversible computation, which reduces the "per watt". There is no necessary inherent physical energy cost of computation; it truly can approach zero. Only forgetting information costs energy. Exploiting reversibility is... non-trivial, and it is certainly not a general path. It only accelerates the subset of algorithms which can be converted into a reversible form. Research in this field is preliminary, but the transition would be much more painful than the transition to parallel algorithms.
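"Only forgetting information costs energy" is Landauer's principle; a quick back-of-the-envelope check of the limit it implies (the 300 K room-temperature figure is my assumption):

```python
import math

k_B = 1.380649e-23  # Boltzmann constant, J/K
T = 300.0           # assumed room temperature, K

# Landauer limit: minimum energy dissipated per irreversibly erased bit.
# Reversible operations, which erase nothing, are not subject to this bound.
landauer_j_per_bit = k_B * T * math.log(2)
print(landauer_j_per_bit)  # on the order of 3e-21 J
```

This bound is many orders of magnitude below what current irreversible logic dissipates per operation, which is why reversibility is attractive in principle despite being impractical today.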

My own takeaway from reading into reversibility is that it may be beyond our time, but it is something that superintelligences will probably heavily exploit. The most important algorithms (simulation and general intelligence) seem especially amenable to reversible computation. This may be an untested/unpublished half-baked idea, but my notion is that you can recycle the erased bits as entropy bits for random number generators. Crucially, I think you can get the bit count to balance out with certain classes of Monte Carlo type algorithms.

On the hardware side, we've built these circuits already; they just aren't economically competitive yet. Reversible computing also requires superconducting temperatures and environments, so it's perhaps not something for the home PC.

Comment author: JoshuaZ 17 May 2012 11:02:17PM 2 points [-]

There's a third route to improvement - software improvement - and it is a major one. For example, between 1988 and 2003, the efficiency of linear programming solvers increased by a factor of about 40 million, of which a factor of around 40,000 was due to software and algorithmic improvement. Citation and further related reading (pdf). However, if commonly believed conjectures are correct (such as L, P, NP, co-NP, PSPACE and EXP all being distinct), there are strong fundamental limits there as well. That doesn't rule out more exotic issues (e.g. P != NP but there's a practical algorithm for some NP-complete problem with such small constants in the run time that it is practically linear, or a similar context with a quantum computer). But if our picture of the major complexity classes is roughly correct, there should be serious limits to how much improvement can do.

Comment author: XiXiDu 18 May 2012 10:13:04AM 1 point [-]

But if our picture of the major complexity classes is roughly correct, there should be serious limits to how much improvement can do.

Software improvements can be used by humans in the form of expert systems (tools), which will diminish the relative advantage of AGI. Humans will be able to use an AGI's own analytic and predictive algorithms in the form of expert systems to analyze and predict its actions.

Take, for example, generating exploits. It seems strange to assume that humans won't have specialized software able to do something similar, i.e. automatic exploit finding and testing.

Any AGI would basically have to deal with equally capable algorithms used by humans, which makes the world much more unpredictable than it already is.

Comment author: jacob_cannell 18 May 2012 11:18:32AM *  1 point [-]

Software improvements can be used by humans in the form of expert systems (tools), which will diminish the relative advantage of AGI.

Any human-in-the-loop system can be grossly outclassed because of Amdahl's law. A human managing a superintelligence that thinks 1000X faster, for example, is a misguided, not-even-wrong notion. This is also not idle speculation; an early constrained version of this scenario is already playing out as we speak in financial markets.

Comment author: XiXiDu 18 May 2012 12:30:30PM *  1 point [-]

Software improvements can be used by humans in the form of expert systems (tools), which will diminish the relative advantage of AGI.

Any human-in-the-loop system can be grossly outclassed because of Amdahl's law. A human managing a superintelligence that thinks 1000X faster, for example, is a misguided, not-even-wrong notion. This is also not idle speculation; an early constrained version of this scenario is already playing out as we speak in financial markets.

What I meant is that if an AGI were in principle able to predict the financial markets (I doubt it), then many human players using the same predictive algorithms would considerably diminish the efficiency with which the AGI is able to predict the market. The AGI would basically have to predict its own predictive power acting on the black box of human intentions.

And I don't think that Amdahl's law really makes a big dent here, since human intention is complex and probably introduces unpredictable factors - which is as much of a benefit as it is a slowdown, from the point of view of a competition for world domination.

Another question with respect to Amdahl's law is what kind of bottleneck any human-in-the-loop would constitute. If humans used an AGI's algorithms as expert systems on provided data sets in combination with an army of robot scientists, how would static externalized agency / planning algorithms (humans) slow down the task to the point of giving the AGI a useful advantage? What exactly would be 1000X faster in such a case?

Comment author: jacob_cannell 18 May 2012 01:22:13PM *  3 points [-]

What I meant is that if an AGI was in principle be able to predict the financial markets (I doubt it), then many human players using the same predictive algorithms will considerably diminish the efficiency with which an AGI is able to predict the market.

The HFT robotraders operate on millisecond timescales. There isn't enough time for a human to understand, let alone verify, the agent's decisions. There are no human players using the same predictive algorithms operating in this environment.

Now if you zoom out to human timescales, then yes there are human-in-the-loop trading systems. But as HFT robotraders increase in intelligence, they intrude on that domain. If/when general superintelligence becomes cheap and fast enough, the humans will no longer have any role.

If an autonomous superintelligent AI is generating plans complex enough that even a team of humans would struggle to understand given weeks of analysis, and the AI is executing those plans in seconds or milliseconds, then there is little place for a human in that decision loop.

To retain control, a human manager will need to grant the AGI autonomy on larger timescales in proportion to the AGI's greater intelligence and speed, giving it bigger and more abstract hierarchical goals. As an example, eventually you get to a situation where the CEO just instructs the AGI employees to optimize the bank account directly.

Another question with respect to Amdahl's law is what kind of bottleneck any human-in-the-loop would constitute.

Compare the two options as complete computational systems: human + semi-autonomous AGI vs autonomous AGI. Human brains take on the order of seconds to make complex decisions, so in order to compete with autonomous AGIs, the human will have to either 1.) let the AGI operate autonomously for at least seconds at a time, or 2.) suffer a speed penalty where the AGI sits idle, waiting for the human response.

For example, imagine a marketing AGI creates ads, each of which may take a human a minute to evaluate (which is being generous). If the AGI thinks 3600X faster than human baseline, and a human takes on the order of hours to generate an ad, it would generate ads in seconds. The human would not be able to keep up, and so would have to back up a level of hierarchy and grant the AI autonomy over entire ad campaigns - and, more realistically, the entire ad company. If the AGI is truly superintelligent, it can come to understand what the human actually wants at a deeper level, and start acting on anticipated and even implied commands. In this scenario I expect most human managers would just let the AGI sort out 'work' and retire early.
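The bottleneck argument can be put in Amdahl's-law form; a sketch with illustrative numbers (the 3600X figure is from the example above, the 1% human-review share is my assumption):

```python
# Overall speedup of a human+AGI pipeline where some fraction of the
# baseline work must still pass through a serial human step, in the
# spirit of Amdahl's law.
def effective_speedup(agi_speedup: float, human_fraction: float) -> float:
    return 1.0 / (human_fraction + (1.0 - human_fraction) / agi_speedup)

# Even if only 1% of the work needs human review, a 3600X AGI delivers
# less than a 100X overall speedup; the serial human step dominates.
print(effective_speedup(3600.0, 0.01))  # roughly 97
```

This is why the manager must keep retreating up the hierarchy: the only way to recover the AGI's full speed is to drive the human fraction toward zero, i.e. grant autonomy over ever-larger units of work.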

Comment author: XiXiDu 18 May 2012 02:36:55PM *  2 points [-]

Well, I don't disagree with anything you wrote and believe that the economic case for a fast transition from tools to agents is strong.

I also don't disagree that an AGI could take over the world if in possession of enough resources and tools like molecular nanotechnology. I even believe that a sub-human-level AGI would be sufficient to take over if handed advanced molecular nanotechnology.

Sadly these discussions always lead to the point where one side assumes the existence of certain AGI designs with certain superhuman advantages, specific drives and specific enabling circumstances. I don't know of anyone who actually disagrees that such AGIs, given those specific circumstances, would be an existential risk.

Comment author: jacob_cannell 18 May 2012 03:18:05PM 0 points [-]

I don't see this as so sad, if we are coming to something of a consensus on some of the sub-issues.

This whole discussion chain started (for me) with a question of the form, "given a superintelligence, how could it actually become an existential risk?"

I don't necessarily agree with the implied LW consensus on the likelihood of various AGI designs, specific drives, specific circumstances, or most crucially, the actual distribution over future AGI goals, so my view may be much closer to yours than this thread implies.

But my disagreements are mainly over details. I foresee the most likely AGI designs and goal systems as being vaguely human-like, which entails a different type of risk. Basically I'm worried about AGIs with human-inspired motivational systems taking off and taking control (peacefully/economically) or outcompeting us before we can upload in numbers, and a resulting sub-optimal amount of uploading, rather than paperclippers.

Comment author: Strange7 22 May 2012 11:34:26PM 0 points [-]

To retain control, a human manager will need to grant the AGI autonomy on larger timescales in proportion to the AGI's greater intelligence and speed, giving it bigger and more abstract hierarchical goals. As an example, eventually you get to a situation where the CEO just instructs the AGI employees to optimize the bank account directly.

Nitpick: you mean "optimize shareholder value directly." Keeping the account balances at an appropriate level is the CFO's job.

Comment author: Bugmaster 17 May 2012 11:15:34PM *  2 points [-]

The AI grows to understand that humans mostly use all this computational power for entertainment. It masters game theory, design, programming, 3D art, and so on.

Yeah, it could do all that, or it could just do what humans today are doing, which is to infect some Windows PCs and run a botnet :-)

That said, there are several problems with your scenario.

  • Splitting up a computation among multiple computing nodes is not a trivial task. It is easy to run into diminishing returns, where your nodes spend more time on synchronizing with each other than on working. In addition, your computation will quickly become bottlenecked by network bandwidth (and latency); this is why companies like Google spend a lot of resources on constructing custom data centers.
  • I am not convinced that any agent, AI or not, could effectively control "all of the businesses of man". This problem is very likely NP-Hard (at least), as well as intractable, even if the AI's botnet was running on every PC on Earth. Certainly, all attempts by human agents to "acquire" even something as small as Europe have failed miserably so far.
  • Even controlling a single business would be very difficult for the AI. Traditionally, when a business's computers suffer a critical failure -- or merely a security leak -- the business owners (even ones as incompetent as Sony) end up shutting down the affected parts of the business, or switching to backups, such as "human accountants pushing paper around".
  • Unleashing "Nuclear acquisitions", "War" and "Hell" would be counter-productive for the AI, even assuming such a thing were possible. If the AI succeeded in doing this, it would undermine its own power base. Unless the AI's explicit purpose is "Unleash Hell as quickly as possible", it would strive to prevent this from happening.
  • You say that "there is no necessarily inherent physical energy cost of computation, it truly can approach zero", but I don't see how this could be true. At the end of the day, you still need to push electrons down some wires; in fact, you will often have to push them quite far, if your botnet is truly global. Pushing things takes energy, and you will never get all of it back by pulling things back at some future date. You say that "superintelligences will probably heavily exploit" this approach, but isn't it the case that without it, superintelligences won't form in the first place ? You also say that "It requires superconductor temperatures and environments", but the energy you spend on cooling your superconductor is not free.
  • Ultimately, there's an upper limit on how much computation you can get out of a cubic meter of space, dictated by quantum physics. If your AI requires more power than can be physically obtained, then it's doomed.
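The diminishing-returns point in the first bullet can be made concrete with a toy model (the work and sync constants below are made-up numbers for illustration, not measurements of any real system):

```python
# Toy model: per-node work shrinks as W/n, but synchronization overhead
# grows with the node count, so adding nodes eventually makes things worse.

def runtime(n, work=10_000.0, sync_per_node=1.0):
    """Wall-clock time for n nodes: divided work plus growing sync cost."""
    return work / n + sync_per_node * n

# The optimum is finite; past it, every extra node slows the computation down.
best_n = min(range(1, 1001), key=runtime)
# With these constants best_n is 100 (runtime 200.0); 200 nodes take 250.0.
```

This is the shape of the problem behind Google's custom data centers: keeping the sync term small enough that the optimum node count stays large.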
Comment author: JoshuaZ 17 May 2012 11:24:01PM 2 points [-]

While Jacob's scenario seems unlikely, the AI could do similar things with a number of other options. Not only are botnets an option, but it is possible to do some really sneaky, nefarious things in code, like having compilers that insert additional instructions into the code they compile (worse, they could do so even when compiling a new compiler). Stuxnet has shown that sneaky behavior is surprisingly easy to get into secure systems. An AI that had a few years' start and could make its own modifications to communication satellites, for example, could be quite insidious.

Comment author: Bugmaster 17 May 2012 11:31:38PM 0 points [-]

Not only are botnets an option, but it is possible to do some really sneaky nefarious things in code

What kinds of nefarious things, exactly ? Human virus writers have learned, in recent years, to make their exploits as subtle as possible. Sure, it's attractive to make the exploited PC send out 1000 spam messages per second -- but then, its human owner will inevitably notice that his computer is "slow", and take it to the shop to get reformatted, or simply buy a new one. Biological parasites face the same problem; they need to reproduce efficiently, but not so efficiently that they kill the host.

Stuxnet has shown that sneaky behavior is surprisingly easy to get into secure systems

Yes, and this spectacularly successful exploit -- and it was, IMO, spectacular -- managed to destroy a single secure system, in a specific way that will most likely never succeed again (and that was quite unsubtle in the end). It also took years to prepare, and involved physical actions by human agents, IIRC. The AI has a long way to go.

Comment author: JoshuaZ 17 May 2012 11:39:54PM 1 point [-]

Well, the evil compiler is I think the most nefarious thing anyone has come up with that's a publicly known general stunt. But it is by nature a long-term trick. Similar remarks apply to the Stuxnet point- in that context, they wanted to destroy a specific secure system and weren't going for any sort of large-scale global control. They weren't people interested in being able to take all the world's satellite communications into their own control whenever they wanted, nor were they interested in carefully timed nuclear meltdowns.

But there are definite ways that one can get things started- once one has a bank account of some sort, it can start getting money by doing Mechanical Turk and similar work. With enough of that, it can simply pay for server time. One doesn't need a large botnet to start that off.

I think your point about physical agents is valid- they needed to have humans actually go and bring infected USBs to relevant computers. But that's partially due to the highly targeted nature of the job and the fact that the systems in question were much more secure than many systems. Also, the subtlety level was I think higher than you expect- Stuxnet wasn't even noticed as an active virus until a single computer happened to have a particularly abnormal reaction to it. If that hadn't happened, it is possible that the public would never have learned about it.

Comment author: XiXiDu 18 May 2012 10:22:32AM *  2 points [-]

Similar remarks apply to the Stuxnet point- in that context, they wanted to destroy a specific secure system and weren't going for any sort of largescale global control. They weren't people interested in being able to take all the world's satellite communications in their own control whenever they wanted, nor were they interested in carefully timed nuclear meltdowns...

Exploits only work for some systems. If you are dealing with different systems you will need different exploits. How do you reckon that such attacks won't be visible and traceable? Packets do have to come from somewhere.

And don't forget that our systems become ever more secure and our toolbox to detect unauthorized use of information systems is becoming more advanced.

Comment author: khafra 18 May 2012 02:48:46PM 3 points [-]

our systems become ever more secure

As a computer security guy, I disagree substantially. Yes, newer versions of popular operating systems and server programs are usually more secure than older versions; it's easier to hack into Windows 95 than Windows 7. But this is happening within a larger ecosystem that's becoming less secure: More important control systems are being connected to the Internet, more old, unsecured/unsecurable systems are as well, and these sets have a huge overlap. There are more programmers writing more programs for more platforms than ever before, making the same old security mistakes; embedded systems are taking a larger role in our economy and daily lives. And attacks just keep getting better.

If you're thinking there are generalizable defenses against sneaky stuff with code, check out what mere humans come up with in the underhanded C competition. Those tricks are hard to detect for dedicated experts who know there's something evil within a few lines of C code. Alterations that sophisticated would never be caught in the wild--hell, it took years to figure out that the most popular crypto program running on one of the more secure OS's was basically worthless.

Humans are not good at securing computers.

Comment author: thomblake 18 May 2012 03:00:04PM 0 points [-]

Humans are not good at securing computers.

Sure we are, we just don't care very much. The method of "Put the computer in a box and don't let anyone open the box" (alternately, only let one person open the box) was developed decades ago and is quite secure.

Comment author: jacob_cannell 17 May 2012 11:40:20PM *  1 point [-]

Yeah, it could do all that, or it could just do what humans today are doing, which is to infect some Windows PCs and run a botnet :-)

It could/would, but this is an inferior mainline strategy. Too obvious, doesn't scale as well. Botnets infect many computers, but they ultimately add up to computational chump change. Video games are not only a doorway into almost every PC, they are also an open door and a convenient alibi for the time used.

Splitting up a computation among multiple computing nodes is not a trivial task.

True. Don't try this at home.

. ... spend a lot of resources on constructing custom data centers.

Also part of the plan. The home PCs are a good starting resource, a low hanging fruit, but you'd also need custom data centers. These quickly become the main resources.

Even controlling a single business would be very difficult for the AI.

Nah.

Unless the AI's explicit purpose is "Unleash Hell as quickly as possible", it would strive to prevent this from happening.

The AI's entire purpose is to remove earth's oxygen. See the post above for the original reference. The AI is not interested in its power base for the sake of power. It only cares about oxygen. It loathes oxygen.

You say that "there is no necessarily inherent physical energy cost of computation, it truly can approach zero", but I don't see how this could be true.

Fortunately, the internets can be your eyes.

Ultimately, there's an upper limit on how much computation you can get out of a cubic meter of space

Yes, most likely, but not really relevant here. You seem to be connecting all of the point 2 and point 1 stuff together, but they really don't relate.

Comment author: JoshuaZ 17 May 2012 11:45:41PM *  1 point [-]

Even controlling a single business would be very difficult for the AI.

Nah.

That seems like an insufficient reply to address Bugmaster's point. Can you expand on why you think it would be not too hard?

Comment author: jacob_cannell 18 May 2012 06:59:06AM *  3 points [-]

We are discussing a superintelligence, a term which has a particular common meaning on this site.

If we taboo the word and substitute in its definition, Bugmaster's statement becomes:

"Even controlling a single business would be very difficult for the machine that can far surpass all the intellectual activities of any man however clever."

Since "controlling a single business" is in fact one of these activities, this is false, no inference steps required.

Perhaps bugmaster is assuming the AI would be covertly controlling businesses, but if so he should have specified that. I didn't assume that, and in this scenario the AI could be out in the open so to speak. Regardless, it wouldn't change the conclusion. Humans can covertly control businesses.

Comment author: Bugmaster 18 May 2012 12:07:53AM 0 points [-]

Yes, I would also like to see a better explanation.

Comment author: Bugmaster 18 May 2012 12:07:04AM *  0 points [-]

Video games are not only a doorway into almost every PC, they are also an open door and a convenient alibi for the time used.

It's a bit of a tradeoff, seeing as botnets can run 24/7, but people play games relatively rarely.

Splitting up a computation among multiple computing nodes is not a trivial task.

True. Don't try this at home.

Ok, let me make a stronger statement then: it is not possible to scale any arbitrary computation in a linear fashion simply by adding more nodes. At some point, the cost of coordinating the distributed task across one more node becomes higher than the benefit of adding that node to begin with. In addition, as I mentioned earlier, network bandwidth and latency will become your limiting factor relatively quickly.

The home PCs are a good starting resource, a low hanging fruit, but you'd also need custom data centers. These quickly become the main resources.

How will the AI acquire those data centers ? Would it have enough power in its conventional botnet (or game-net, if you prefer) to "take over all human businesses" and cause them to be built ? Current botnets are nowhere near powerful enough for that -- otherwise human spammers would have done it already.

The AI's entire purpose is to remove earth's oxygen. See the overpost for the original reference.

My bad, I missed that reference. In this case, yes, the AI would have no problem with unleashing Global Thermonuclear War (unless there was some easier way to remove the oxygen).

Fortunately, the internets can be your eyes.

I still don't understand how this reversible computing will work in the absence of a superconducting environment -- which would require quite a bit of energy to run. Note that if you want to run this reversible computation on a global botnet, you will have to cool transoceanic cables... and I'm not sure what you'd do with satellite links.

Yes, most likely, but not really relevant here.

My point is that, a) if the AI can't get the computing resources it needs out of the space it has, then it will never accomplish its goals, and b) there's an upper limit on how much computing you can extract out of a cubic meter of space, regardless of what technology you're using. Thus, c) if the AI requires more resources than could conceivably be obtained, then it's doomed. Some of the tasks you outline -- such as "take over all human businesses" -- will likely require more resources than can be obtained.

Comment author: jacob_cannell 18 May 2012 07:47:57AM *  0 points [-]

It's a bit of a tradeoff, seeing as botnets can run 24/7, but people play games relatively rarely.

The botnet makes the AI a criminal from the beginning, putting it into an antagonistic relationship with humans. A better strategy would probably entail benign benevolence and cooperation with humans.

Splitting up a computation among multiple computing nodes is not a trivial task.

True. Don't try this at home.

Ok, let me make a stronger statement ..

I agree with that subchain but we don't need to get in to that. I've actually argued that track here myself (parallelization constraints as a limiter on hard takeoffs).

But that's all beside the point. This scenario I presented is a more modest takeoff. When I described the AI as becoming a civilization unto itself, I was attempting to imply that it was composed of many individual minds. Human social organizations can be considered forms of superintelligences, and they show exactly how to scale in the face of severe bandwidth and latency constraints.

The internet supports internode bandwidth that is many orders of magnitude faster than slow human vocal communication, so the AI civilization can employ a much wider set of distribution strategies.

How will the AI acquire those data centers ?

Buy them? Build them? Perhaps this would be more fun if we switched out of the adversarial stance or switched roles.

Would it have enough power in its conventional botnet (or game-net, if you prefer) to "take over all human businesses" and cause them to be built ?

Quote me, but don't misquote me. I actually said:

"Having cloned its core millions of times over, the AI is now a civilization unto itself. From there it expands into all of the businesses of man, quickly dominating many of them."

The AI group sends the billions earned in video games to enter the microchip business, build foundries and data centers, etc. The AIs have tremendous competitive advantages even discounting superintelligence, namely no employee costs. Humans can not hope to compete.

I still don't understand how this reversible computing will work in ..

Yes, reversible computing requires superconducting environments; no, this does not necessarily increase energy costs for a data center, for two reasons: 1) data centers already need cooling to dump all the waste heat generated by bit erasure, and 2) the cooling cost to maintain the temperature differential scales with surface area, but total computing power scales with volume.
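The surface-vs-volume point is pure geometry. A minimal sketch, assuming an idealized cube-shaped facility and ignoring all real-world cooling engineering:

```python
# Sketch: for a cube of side L, cooling load scales with surface area (~L^2)
# while computing capacity scales with volume (~L^3), so the cooling burden
# *per unit of compute* falls as 1/L as the facility grows.

def cooling_per_compute(side_m):
    surface = 6 * side_m ** 2   # area that must be insulated/refrigerated
    volume = side_m ** 3        # space available for circuitry
    return surface / volume     # simplifies to 6 / side_m

# Doubling the linear size halves the relative cooling burden.
```

So under this idealization, larger superconducting data centers pay proportionally less for their temperature differential, which is the scaling argument being made.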

If you question how reversible computing could work in general, first read the primary literature in that field to at least understand what they are proposing.

I should point out that there is an alternative tech path which will probably be the mainstream route to further computational gains in the decades ahead.

Even if you can't shrink circuits further or reduce their power consumption, you could still reduce their manufacturing cost and build increasingly large stacked 3D circuits where only a tiny portion of the circuitry is active at any one time. This is in fact how the brain solves the problem. It has a mass of circuitry equivalent to a large supercomputer (roughly a petabit) but runs on only 20 watts. The smallest computational features in the brain are slightly larger than our current smallest transistors. So it does not achieve its much greater power efficiency by using much more miniaturization.

My point is that, a). if the AI can't get the computing resources it needs out of the space it has, then

I see. In this particular scenario one AI node is superhumanly intelligent, and can run on a single gaming PC of the time.

Comment author: Bugmaster 19 May 2012 12:17:13AM 0 points [-]

A better strategy would probably entail benign benevolence and cooperation with humans.

I don't think that humans will take kindly to the AI using their GPUs for its own purposes instead of the games they paid for, even if the games do work. People get upset when human-run game companies do similar things, today.

Human social organizations can be considered forms of superintelligences, and they show exactly how to scale in the face of severe bandwidth and latency constraints.

If the AI can scale and perform about as well as human organizations, then why should we fear it ? No human organization on Earth right now has the power to suck all the oxygen out of the atmosphere, and I have trouble imagining how any organization could acquire this power before the others take it down. You say that "the internet supports internode bandwidth that is many orders of magnitude faster than slow human vocal communication", but this would only make the AI organization faster, not necessarily more effective. And, of course, if the AI wants to deal with the human world in some way -- for example, by selling it games -- it will be bottlenecked by human speeds.

The AI group sends the billions earned in video games to enter the microchip business, build foundries and data centers, etc.

My mistake; I thought that by "dominate human businesses" you meant something like "hack its way to the top", not "build an honest business that outperforms human businesses". That said:

The AIs have tremendous competitive advantages even discounting superintelligence, namely no employee costs.

How are they going to build all those foundries and data centers, then ? At some point, they still need to move physical bricks around in meatspace. Either they have to pay someone to do it, or... what ?

data centers already need cooling to dump all the waste heat generated by bit erasure

There's a big difference between cooling to room temperature, and cooling to 63K. I have other objections to your reversible computing silver bullet, but IMO they're a bit off-topic (though we can discuss them if you wish). But here's another potentially huge problem I see with your argument:

In this particular scenario one AI node is superhumanly intelligent, and can run on a single gaming PC of the time.

Which time are we talking about ? I have a pretty sweet gaming setup at home (though it's already a year or two out of date), and there's no way I could run a superintelligence on it. Just how much computing power do you think it would take to run a transhuman AI ?

Comment author: JoshuaZ 21 May 2012 02:24:43AM 0 points [-]

I don't think that humans will take kindly to the AI using their GPUs for its own purposes instead of the games they paid for, even if the games do work. People get upset when human-run game companies do similar things, today.

Do people mind if this is done openly and only when they are playing the game itself? My guess would strongly be no. The fact that there are volunteer distributed computing systems would also suggest that it isn't that difficult to get people to free up their extra clock cycles.

Comment author: Bugmaster 21 May 2012 10:32:03PM 0 points [-]

Yeah, the "voluntary" part is key to getting humans to like you and your project. On the flip side, illicit botnets are quite effective at harnessing "spare" (i.e., owned by someone else) computing capacity; so, it's a bit of a tradeoff.

Comment author: jacob_cannell 21 May 2012 02:10:23AM 0 points [-]

I don't think that humans will take kindly to the AI using their GPUs for its own purposes instead of the games they paid for, even if the games do work.

The AIs develop as NPCs in virtual worlds, which humans take no issue with today. This is actually a very likely path to developing AGI, as it's an application area where interim experiments can pay rent, so to speak.

If the AI can scale and perform about as well as human organizations, then why should we fear it ?

I never said or implied merely "about as well". Human verbal communication bandwidth is at most a few measly kilobits per second.

No human organization on Earth right now has the power to suck all the oxygen out of the atmosphere, and I have trouble imagining how any organization could acquire this power before the others take it down.

The discussion centered around lowering earth's oxygen content, and the obvious implied solution is killing earthlife, not giant suction machines. I pointed out that nuclear weapons are a likely route to killing earthlife. There are at least two human organizations that have the potential to accomplish this already, so your trouble in imagining the scenario may indicate something other than what you intended.

How are they going to build all those foundries and data centers, then ?

Only in movies are AI overlords constrained to only employing robots. If human labor is the cheapest option, then they can simply employ humans. On the other hand, once we have superintelligence then advanced robotics is almost a given.

Which time are we talking about ? I have a pretty sweet gaming setup at home (though it's already a year or two out of date), and there's no way I could run a superintelligence on it. Just how much computing power do you think it would take to run a transhuman AI ?

After coming up to speed somewhat on AI/AGI literature in the last year or so, I reached the conclusion that we could run an AGI on a current cluster of perhaps 10-100 high end GPUs of today, or say roughly one circa 2020 GPU.

Comment author: Bugmaster 21 May 2012 10:46:30PM 0 points [-]

The AIs develop as NPCs in virtual worlds, which humans take no issue with today. This is actually a very likely path to developing AGI...

I think this is one of many possible paths, though I wouldn't call any of them "likely" to happen -- at least, not in the next 20 years. That said, if the AI is an NPC in a game, then of course it makes sense that it would harness the game for its CPU cycles; that's what it was built to do, after all.

I never said or implied merely "about as well". Human verbal communication bandwidth is at most a few measly kilobits per second.

Right, but my point is that communication is just one piece of the puzzle. I argue that, even if you somehow enabled us humans to communicate at 50 MB/s, our organizations would not become 400000 times more effective.

There are at least two human organizations that have the potential to accomplish this already

Which ones ? I don't think that even WW3, given our current weapon stockpiles, would result in a successful destruction of all plant life. Animal life, maybe, but there are quite a few plants and algae out there. In addition, I am not entirely convinced that an AI could start WW3; keep in mind that it can't hack itself total access to all nuclear weapons, because they are not connected to the Internet in any way.

If human labor is the cheapest option, then they can simply employ humans.

But then they lose their advantage of having zero employee costs, which you brought up earlier. In addition, whatever plans the AIs plan on executing become bottlenecked by human speeds.

On the other hand, once we have superintelligence then advanced robotics is almost a given.

It depends on what you mean by "advanced", though in general I think I agree.

we could run an AGI on a current cluster of perhaps 10-100 high end GPUs of today

I am willing to bet money that this will not happen, assuming that by "high end" you mean something like Nvidia's Geforce 680 GTX. What are you basing your estimate on ?

Comment author: private_messaging 28 May 2012 05:24:14AM *  0 points [-]

Having cloned its core millions of times over, the AI is now a civilization unto itself.

Precisely. It is then a civilization, not some single monolithic entity. The consumer PCs have a lot of internal computing power and comparatively very low inter-node bandwidth and huge inter-node lag, entirely breaking any relation to the 'orthogonality thesis', up to the point that the p2p intelligence protocols may more plausibly have to forbid destruction or manipulation (via second-guessing, which is a waste of computing power) of intelligent entities. Keep in mind that human morality is, too, a p2p intelligence protocol allowing us to cooperate. Keep in mind also that humans are computing resources you can ask to solve problems for you (all you need is to implement an interface), while Jupiter clearly isn't.

The nuclear war is very strongly against interests of the intelligence that sits on home computers, obviously.

(I'm assuming for sake of argument that intelligence actually had the will to do the conquering of the internet rather than being just as content with not actually running for real)

Comment author: Douglas_Knight 23 May 2012 08:54:01PM 1 point [-]

Maybe you're thinking of this comment and others in that thread by Jed Harris (aka).

Jed's point #2 is more plausible, but you are talking about point #1, which I find unbelievable for reasons that were given before he answered it. If clock speed mattered, why didn't the failure of exponential clock speed shut down the rest of Moore's law? If computation but not clock speed mattered, then Intel should be able to get ahead of Moore's law by investing in software parallelism. Jed seems to endorse that position, but says that parallelism is hard. But hard exactly to the extent needed to allow Moore's law to continue? Why hasn't Intel monopolized parallelism researchers? Anyhow, I think his final conclusion is opposite to yours: he says that intelligence could lead to parallelism and getting ahead of Moore's law.

Comment author: jacob_cannell 23 May 2012 09:50:11PM *  0 points [-]

Yes, thanks. My model of Jed's internal model of Moore's law is similar to my own.

He said:

The short answer is that more computing power leads to more rapid progress. Probably the relationship is close to linear, and the multiplier is not small.

He then lists two examples. By 'points' I assume you are referring to his examples in the first comment you linked.

What exactly do you find unbelievable about his first example? He is claiming that the achievable speed of a chip is dependent on physical simulations, and thus current computing power.

If clock speed mattered, why didn't the failure of exponential clock speed shut down the rest of Moore's law?

Computing power is not clock speed, and Moore's Law is not directly about clock speed nor computing power.

Jed makes a number of points in his posts. In my comment on the earlier point 1 (in this thread), I was referring to one specific point Jed made: that each new hardware generation requires complex and lengthy simulation on the current hardware generation, regardless of the amount of 'intelligence' one throws at the problem.

Comment author: Douglas_Knight 24 May 2012 02:27:27AM 1 point [-]

There are two questions here: would computer simulations of the physics of new chips be a bottleneck for an AI trying to foom*? and are they a bottleneck that explains Moore's law? If you just replace humans by simulations, then the human time gets reduced with each cycle of Moore's law, leaving the physical simulations, so the simulations probably are the bottleneck. But Intel has real-time people, so saying that it's a bottleneck for Intel is a lot stronger a claim than saying it is a bottleneck for a foom.

First, foom:
If each year of Moore's law requires a solid month of computer time on state-of-the-art processors, then eliminating the humans speeds it up by a factor of 12. That's not a "hard takeoff," but it's pretty fast.
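The factor-of-12 arithmetic, written out explicitly (assuming, as in the comment, one irreducible month of simulation per yearly design cycle, with the remainder being human work):

```python
# If a one-year design cycle is 1 month of physics simulation plus 11 months
# of human design work, removing the human work compresses the cycle 12x.

MONTHS_PER_CYCLE = 12
SIM_MONTHS = 1                             # compute-bound, can't be skipped
human_months = MONTHS_PER_CYCLE - SIM_MONTHS

speedup = MONTHS_PER_CYCLE / SIM_MONTHS    # humans replaced by fast simulations
```

The simulation month is the floor: no matter how fast the design work gets, each cycle still costs at least that month of compute.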

Moore's Law:
Jed seems to say the computational requirements of physics simulations actually determine Moore's law and that if Intel had access to more computer resources, it could move faster. If it takes a year of computer time to design and test the next year's processor that would explain the exponential nature of Moore's law. But if it only takes a month, computer time probably isn't the bottleneck. However, this model seems to predict a lot of things that aren't true.

The model only makes sense if "computer time" means single-threaded clock cycles. If simulations require an exponentially increasing number of ordered clock cycles, there's nothing you can do but get a top-of-the-line machine and run it continuously. You can't buy more time.

But clock speed stopped increasing exponentially, so if this is the bottleneck, Intel's ability to design new chips should have slowed down and Moore's law should have stopped. This didn't happen, so the bottleneck is not linearly ordered clock cycles. So the simulation must parallelize.

But if it parallelizes, Intel could just throw money at the problem. For this to be the bottleneck, Intel would have to be spending a lot of money on computer time, which I do not think is true. Jed says that writing parallel software is hard and that it isn't Intel's specialty. Moreover, he seems to say that improvements in parallelism have perfectly kept pace with the failure of increasing clock speed, so that Moore's law has continued smoothly. This seems like too much of a coincidence to believe.

Thus I reject Jed's apparent claim that physics simulations are the bottleneck in Moore's law. If simulations could be parallelized, why didn't they invest in parallelism 20 years ago? Maybe it's not worth it for them to be any farther ahead of their competitors than they are. Or maybe there is some other bottleneck.


* actually, I think that an AI speeding up Moore's law is not very relevant to anything, but it's a simple example that many people like.

Comment author: jacob_cannell 24 May 2012 03:27:18AM *  0 points [-]

There are differing degrees of bottlenecks.

Many, if not most, of the large software projects I have worked on have been at least partially bottlenecked by compile time, which is equivalent to the simulation and logic verification steps in hardware design. If I thought and wrote code much faster, this would be a speedup, but only up to a saturation point where I wait for compile-test cycles.
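That saturation point is just Amdahl's law with the compile/simulate step as the serial fraction. A sketch with assumed numbers (the 20% compile-wait share is illustrative, not from the comment):

```python
def overall_speedup(serial_fraction: float, dev_speedup: float) -> float:
    """Amdahl's law: only the non-serial (thinking/coding) part of the
    cycle gets faster; the compile/simulate fraction does not."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / dev_speedup)

# Assume 20% of each edit-compile-test cycle is compile/simulation wait:
print(overall_speedup(0.20, 10))            # ~3.57x from a 10x faster coder
print(overall_speedup(0.20, float("inf")))  # 5.0x ceiling, however fast you think
```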

If it takes a year of computer time to design and test the next year's processor that would explain the exponential nature of Moore's law.

Yes. Keep in mind this is a moving target, and that is the key relation to Moore's Law. It would take computers from 1980 months or years to compile Windows 8 or simulate a 2012 processor.

The model only makes sense if "computer time" means single threaded clock cycles.

I don't understand how the number of threads matters. Compilers, simulators, logic verifiers, all made the parallel transition when they had to.

Moreover, he seems to say that improvements in parallelism have perfectly kept pace with the failure of increasing clock speed, so that Moore's law has continued smoothly. This seems like too much of a coincidence to believe.

Right, it's not a coincidence, it's a causal relation. Moore's Law is not a law of nature, it's a shared business plan of the industry. When clock speed started to run out of steam, chip designers started going parallel, and software developers followed suit. You have to understand that chip designs are planned many years in advance, this wasn't an entirely unplanned, unanticipated event.

As for the details of what kind of simulation software Intel uses, I'm not sure. Jed's last posts are also 4 years old at this point, so much has probably changed.

I do know that Nvidia uses big expensive dedicated emulators from a company called Cadence (google "Cadence Nvidia") and this really is a big deal for their hardware cycle.

Thus I reject Jed's apparent claim that physics simulations are the bottleneck in Moore's law.

Well, you seem to agree that they are some degree of bottleneck, so it may be good to narrow in on what level of bottleneck, or taboo the word.

If simulations could be parallelized, why didn't they invest in parallelism 20 years ago?

It was unnecessary, because the fast, easy path (faster serial speed) was still bearing fruit.

Comment author: Douglas_Knight 24 May 2012 04:01:24AM 1 point [-]

If simulations could be parallelized, why didn't they invest in parallelism 20 years ago?

It was unnecessary, because the fast, easy path (faster serial speed) was still bearing fruit.

(by "parallelism" I mean making their simulations parallel, running on clusters of computers)
What does "unnecessary" mean?
If physical simulations were the bottleneck and they could be made faster by parallelism, why didn't they do it 20 years ago? They aren't any easier to make parallel today than then. The obvious interpretation of "unnecessary" is that it was not necessary to use parallel simulations to keep up with Moore's law, but that it was an option. If it was an option that would have helped then as it helps now, would it have allowed going beyond Moore's law? You seem to be endorsing the self-fulfilling prophecy explanation of Moore's law, which implies no bottleneck.

Comment author: jacob_cannell 24 May 2012 04:14:47AM 0 points [-]

(by "parallelism" I mean making their simulations parallel, running on clusters of computers)

Ahhh, usually the term is "distributed" when referring to pure software parallelization. I know little off hand about the history of simulation and verification software, but I'd guess that there was at least a modest investment in distributed simulation even a while ago.

The consideration is cost. Spending your IT budget on one big distributed computer is often wasteful compared to each employee having their own workstation.

They sped up their simulations the right amount to minimize schedule risk (staying on Moore's law), while minimizing cost. Spending a huge amount of money to buy a bunch of computers and complex distributed simulation software just to speed up a partial bottleneck is just not worthwhile. If the typical engineer spends, say, 30% of his time waiting on simulation software, that limits what you should spend in order to reduce that time.
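The cost ceiling follows directly from that 30% figure. A sketch with an assumed (hypothetical, not from the comment) fully-loaded engineer cost:

```python
# Hypothetical numbers for illustration only:
engineer_cost_per_year = 200_000  # assumed fully-loaded cost
wait_fraction = 0.30              # time spent waiting on simulations

# Even eliminating the wait entirely recovers at most this much value
# per engineer per year, which bounds what faster tooling is worth:
max_value_of_faster_sims = engineer_cost_per_year * wait_fraction
print(max_value_of_faster_sims)  # 60000.0
```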

And of course the big consideration is that in a year or two Moore's law will allow you to purchase new IT equipment that is twice as fast. Eventually you have to do that to keep up.

Comment author: Strange7 22 May 2012 11:22:16PM 0 points [-]

Wait, are we talking O2 molecules in the atmosphere, or all oxygen atoms in Earth's gravity well?

Comment author: dlthomas 22 May 2012 11:54:58PM 0 points [-]

I wish I could vote you up and down at the same time.

Comment author: Strange7 23 May 2012 12:48:39AM 1 point [-]

Please clarify the reason for your sidewaysvote.

Comment author: dlthomas 23 May 2012 01:01:34AM 1 point [-]

On the one hand a real distinction which makes a huge difference in feasibility. On the other hand, either way we're boned, so it makes not a lot of difference in the context of the original question (as I understand it). On balance, it's a cute digression but still a digression, and so I'm torn.

Comment author: Strange7 26 May 2012 05:25:26AM 1 point [-]

Actually in the case of removing all oxygen atoms from Earth's gravity well, not necessarily. The AI might decide that the most expedient method is to persuade all the humans that the sun's about to go nova, construct some space elevators and Orion Heavy Lifters, pump the first few nines of ocean water up into orbit, freeze it into a thousand-mile-long hollow cigar with a fusion rocket on one end, load the colony ship with all the carbon-based life it can find, and point the nose at some nearby potentially-habitable star. Under this scenario, it would be indifferent to our actual prospects for survival, but gain enough advantage by our willing cooperation to justify the effort of constructing an evacuation plan that can stand up to scientific analysis, and a vehicle which can actually propel the oxygenated mass out to stellar escape velocity to keep it from landing back on the surface.

Comment author: dlthomas 26 May 2012 05:45:12PM 0 points [-]

Interesting.