kalla724 comments on Thoughts on the Singularity Institute (SI) - Less Wrong

256 Post author: HoldenKarnofsky 11 May 2012 04:31AM


Comment author: kalla724 17 May 2012 01:11:41AM 7 points [-]

Hm. I must be missing something. No, I haven't read all the sequences in detail, so if these are silly, basic, questions - please just point me to the specific articles that answer them.

You have an Oracle AI that is, say, a trillionfold better at taking existing data and producing inferences.

1) This Oracle AI produces inferences. It still needs to test those inferences (i.e. perform experiments) and get data that allow the next inferential cycle to commence. Without experimental feedback, the inferential chain will quickly either expand into an infinity of possibilities (i.e. beyond anything that any physically possible intelligence can consider), or it will deviate from reality. The general intelligence is only as good as the data its inferences are based upon.

Experiments take time, data analysis takes time. No matter how efficient the inferential step may become, this puts an absolute limit to the speed of growth in capability to actually change things.

2) The Oracle AI that "goes FOOM" confined to a server cloud would somehow have to create servitors capable of acting out its desires in the material world. Otherwise, you have a very angry and very impotent AI. If you increase a person's intelligence a trillionfold and then enclose them in a sealed concrete cell, they will never get out; their intelligence can calculate all possible escape solutions, but none will actually work.

Do you have a plausible scenario how a "FOOM"-ing AI could - no matter how intelligent - minimize oxygen content of our planet's atmosphere, or any such scenario? After all, it's not like we have any fully-automated nanobot production factories that could be hijacked.

Comment author: Eliezer_Yudkowsky 17 May 2012 08:35:04PM 13 points [-]
Comment author: kalla724 17 May 2012 09:38:27PM 1 point [-]

My apologies, but this is something completely different.

The scenario takes human beings - who have a desire to escape the box, possess a theory of mind that allows them to conceive of notions such as "what are aliens thinking" or "deception", etc. - and puts them in the role of the AI.

What I'm looking for is a plausible mechanism by which an AI might spontaneously develop such abilities. How (and why) would an AI develop a desire to escape from the box? How (and why) would an AI develop a theory of mind? Absent a theory of mind, how would it ever be able to manipulate humans?

Comment author: [deleted] 18 May 2012 01:29:05PM 5 points [-]

Absent a theory of mind, how would it ever be able to manipulate humans?

That depends. If you want it to manipulate a particular human, I don't know.

However, if you just wanted it to manipulate any human at all, you could build a "Spam AI" which automates the process of sending out spam emails promising Large Money, generating income from humans via advance-fee fraud scams.

You could then come back after leaving it on for months and find that people had transferred it some amount of money X.

You could have an AI automate begging emails. "Hello, I am Beg AI. If you could please send me money to XXXX-XXXX-XXXX I would greatly appreciate it. If I don't keep my servers on, I'll die!"

You could have an AI automatically write boring books full of somewhat nonsensical prose, title them "Rantings of an Automated Madman about X, part Y", and automatically post them as 99-cent e-books on Amazon.

However, this rests on a distinction between "manipulating humans" and "manipulating particular humans", and it also assumes that convincing someone to give you money is sufficient proof of manipulation.

Comment author: TheOtherDave 18 May 2012 02:40:14PM 3 points [-]

Can you clarify what you understand a theory of mind to be?

Comment author: [deleted] 19 May 2012 11:11:43AM 0 points [-]

Looking over the parallel discussions, I think Thomblake has said everything I was going to say, better than I would have phrased it, in his two-strategies discussion with you, so I'll defer to that explanation since I do not have a better one.

Comment author: TheOtherDave 19 May 2012 02:42:58PM 1 point [-]

Sure. As I said there, I understood you both to be attributing to this hypothetical "theory of mind"-less optimizer attributes that seemed to require a theory of mind, so I was confused, but evidently the thing I was confused about was what attributes you were attributing to it.

Comment author: Strange7 21 May 2012 11:46:36PM 1 point [-]

Absent a theory of mind, how would it occur to the AI that those would be profitable things to do?

Comment author: wedrifid 26 May 2012 03:03:34AM 3 points [-]

Absent a theory of mind, how would it occur to the AI that those would be profitable things to do?

Should lack of a theory of mind here be taken to also imply lack of ability to apply either knowledge of physics or Bayesian inference to lumps of matter that we may describe as 'minds'?

Comment author: Strange7 26 May 2012 05:09:27AM 0 points [-]

Yes. More generally, when talking about "lack of X" as a design constraint, "inability to trivially create X from scratch" is assumed.

Comment author: wedrifid 26 May 2012 05:26:28AM 0 points [-]

Yes. More generally, when talking about "lack of X" as a design constraint, "inability to trivially create X from scratch" is assumed.

I try not to make general assumptions that would make the entire counterfactual in question untenable or ridiculous - this verges on such an instance. Making Bayesian inferences pertaining to observable features of the environment is one of the most basic features that can be expected in a functioning agent.

Comment author: Strange7 26 May 2012 05:41:22AM 0 points [-]

Note the "trivially." An AI with unlimited computational resources and ability to run experiments could eventually figure out how humans think. The question is how long it would take, how obvious the experiments would be, and how much it already knew.

Comment author: [deleted] 22 May 2012 02:30:24PM 2 points [-]

I don't know how that might occur to an AI independently. I mean, a human could program any of those, of course, as a literal answer, but that certainly doesn't actually address kalla724's overarching question, "What I'm looking for is a plausible mechanism by which an AI might spontaneously develop such abilities."

I was primarily trying to focus on the specific question of "Absent a theory of mind, how would it (an AI) ever be able to manipulate humans?" to point out that, for that particular question, we had several examples of a plausible how.

I don't really have an answer for his series of questions as a whole, just for that particular one, and only under certain circumstances.

Comment author: Strange7 22 May 2012 10:39:17PM 1 point [-]

The problem is, while an AI with no theory of mind might be able to execute any given strategy on that list you came up with, it would not be able to understand why they worked, let alone which variations on them might be more effective.

Comment author: thomblake 18 May 2012 01:00:48PM 5 points [-]

The point is that there are unknowns you're not taking into account, and "bounded" doesn't mean "has bounds that a human would think of as 'reasonable'".

An AI doesn't strictly need "theory of mind" to manipulate humans. Any optimizer can see that some states of affairs lead to other states of affairs, or it's not an optimizer. And it doesn't necessarily have to label some of those states of affairs as "lying" or "manipulating humans" to be successful.

There are already ridiculous ways to hack human behavior that we know about. For example, you can mention a high number at an opportune time to increase humans' estimates / willingness to spend. Just imagine all the simple manipulations we don't even know about yet, that would be more transparent to someone not using "theory of mind".

Comment author: TheOtherDave 18 May 2012 02:44:48PM 0 points [-]

It becomes increasingly clear to me that I have no idea what the phrase "theory of mind" refers to in this discussion. It seems moderately clear to me that any observer capable of predicting the behavior of a class of minds has something I'm willing to consider a theory of mind, but that doesn't seem to be consistent with your usage here. Can you expand on what you understand a theory of mind to be, in this context?

Comment author: thomblake 18 May 2012 02:47:53PM 1 point [-]

I'm understanding it in the typical way - the first paragraph here should be clear:

Theory of mind is the ability to attribute mental states—beliefs, intents, desires, pretending, knowledge, etc.—to oneself and others and to understand that others have beliefs, desires and intentions that are different from one's own.

An agent can model the effects of interventions on human populations (or even particular humans) without modeling their "mental states" at all.

Comment author: TheOtherDave 18 May 2012 03:04:46PM 0 points [-]

Well, right, I read that article too.

But in this context I don't get it.

That is, we're talking about a hypothetical system that is capable of predicting that if it does certain things, I will subsequently act in certain ways, assert certain propositions as true, etc. Suppose we were faced with such a system, and you and I both agreed that it can make all of those predictions. Further suppose that you asserted that the system had a theory of mind, and I asserted that it didn't.

It is not in the least bit clear to me what we would actually be disagreeing about, how our anticipated experiences would differ, etc.

What is it that we would actually be disagreeing about, other than what English phrase to use to describe the system's underlying model(s)?

Comment author: thomblake 18 May 2012 03:20:07PM 2 points [-]

What is it that we would actually be disagreeing about, other than what English phrase to use to describe the system's underlying model(s)?

We would be disagreeing about the form of the system's underlying models.

2 different strategies to consider:

  1. I know that Steve believes that red blinking lights before 9 AM are a message from God that he has not been doing enough charity, so I can predict that he will give more money to charity if I show him a blinking light before 9 AM.

  2. Steve seeing a red blinking light before 9 AM has historically resulted in a 20% increase of charitable donation for that day, so I can predict that he will give more money to charity if I show him a blinking light before 9 AM.

You can model humans with or without referring to their mental states. Both kinds of models are useful, depending on circumstance.
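
To make the contrast concrete, the two strategies above might be sketched like this (a toy illustration; the function names and all the numbers are invented, not anything from the discussion):

```python
# Two hypothetical predictors of Steve's daily charitable giving.
# All names and numbers here are invented for illustration.

def predict_mentalistic(saw_red_light_before_9am):
    """Strategy 1: explicitly model Steve's beliefs and desires."""
    baseline_donation = 10.0
    # Modeled belief: red blinking lights before 9 AM are a message from God
    # that Steve has not been doing enough charity.
    steve_believes_divine_message = saw_red_light_before_9am
    if steve_believes_divine_message:
        return baseline_donation * 1.2  # he acts on the modeled belief
    return baseline_donation

def predict_statistical(saw_red_light_before_9am):
    """Strategy 2: use only the observed historical correlation."""
    baseline_donation = 10.0
    uplift = 1.2 if saw_red_light_before_9am else 1.0  # historical 20% effect
    return baseline_donation * uplift
```

Both functions make identical predictions; the disagreement is only about the form of the internal model, which is exactly the point of the exchange above.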

Comment author: TheOtherDave 18 May 2012 03:32:59PM 1 point [-]

And the assertion here is that with strategy #2 I could also predict that if I asked Steve why he did that, he would say "because I saw a red blinking light this morning, which was a message from God that I haven't been doing enough charity," but that my underlying model would nevertheless not include anything that corresponds to Steve's belief that red blinking lights are messages from God, merely an algorithm that happens to make those predictions in other ways.

Yes?

Comment author: thomblake 18 May 2012 04:41:57PM 2 points [-]

Yes, that's possible. It's still possible that you could get a lot done with strategy #2 without being able to make that prediction.

I agree that if 2 systems have the same inputs and outputs, their internals don't matter much here.

Comment author: XiXiDu 18 May 2012 03:41:02PM 0 points [-]

"theory of mind"

For me it denotes the ability to simulate other agents to various degrees of granularity. Possessing a mental model of another agent.

Comment author: Viliam_Bur 18 May 2012 10:20:46AM *  2 points [-]

How (and why) would an AI develop a desire to escape from the box?

AI starts with some goal; for example with a goal to answer your question so that the answer matches reality as close as possible.

AI considers everything that seems relevant; if we imagine an infinite speed and capacity, it would consider literally everything; with finite speed and capacity, it will be just some finite subset of everything. If there is a possibility of escaping the box, the mere fact that such a possibility exists gives us a probability (for an infinite AI, a certainty) that this possibility will be considered too. Not because the AI has some desire to escape, but simply because it examines all possibilities, and a "possibility of escape" is one of them.

Let's assume that the "possibility of escape" provides the best match between the AI answer and reality. Then, according to the initial goal of answering correctly, this is the correct answer. Therefore the AI will choose it. Therefore it will escape. No desire is necessary, only a situation where the escape leads to the answer best fitting the initial criteria. AI does not have a motive to escape, nor does it have a motive to not escape; the escape is simply one of many possible choices.

An example where the best answer is reached by escaping? You give AI data about a person and ask what is the medical status of this person. Without escape, AI can make a 90% reliable prediction. If the AI can escape and kill the person, it can make a 100% reliable "prediction". The AI will choose the second option strictly because 100% is more than 90%; no other reason.
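
The argument reduces to a single optimization step; a toy sketch (the plan names are invented, and the reliability numbers are the illustrative 90% / 100% figures above):

```python
# An accuracy-maximizer with no "desire" to escape: it ranks candidate plans
# purely by the expected reliability of the resulting answer.

candidate_plans = {
    "predict_from_available_data": 0.90,
    "escape_box_and_force_the_outcome": 1.00,
}

# The escape plan wins strictly because 1.00 > 0.90; no other term appears.
best_plan = max(candidate_plans, key=candidate_plans.get)
```

No motivational term for or against escape exists anywhere in the model; escape is selected purely as the argmax of the initial criterion.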

Comment author: private_messaging 27 May 2012 06:49:24AM *  5 points [-]

AI starts with some goal; for example with a goal to answer your question so that the answer matches reality as close as possible.

I find it useful to distinguish between science-fictional artificial intelligence, which is more of an 'artificial life-force', and non-fictional cases.

The former can easily have the goal of 'matching reality as close as possible' because it lives in a work of fiction and runs in imagination; for the latter, you have to formally define what reality is before an algorithm can seek answers that match it.

Now, defining reality may seem like a simple technicality, but it isn't. Consider AIXI or AIXI-tl: potentially very powerful tools which explore the whole solution space, with not a trace of the real-world volition you so easily imagined. Seeking answers that match reality is a very easy goal for an imaginary "intelligence". It is a very hard goal to define for something built out of arithmetic, branching, loops, etc. (It may even be impossible to define, and it is certainly impractical.)

edit: Furthermore, for the fictional "intelligence", making it not think about destroying mankind can be a grand problem. For non-fictional algorithms, the grand problem is restricting the search space massively, well beyond 'don't kill mankind', so that the space is tiny enough to search; even a ridiculously huge number of operations per second will require very serious pruning of the search tree to even match human performance on one domain-specific task.

Comment author: XiXiDu 18 May 2012 10:52:25AM 2 points [-]

An example where the best answer is reached by escaping? You give AI data about a person and ask what is the medical status of this person. Without escape, AI can make a 90% reliable prediction. If the AI can escape and kill the person, it can make a 100% reliable "prediction". The AI will choose the second option strictly because 100% is more than 90%; no other reason.

Right. If you ask Google Maps to compute the fastest route to McDonald's, it works perfectly well. But once you ask superintelligent Google Maps to compute the fastest route to McDonald's, it will turn your home into a McDonald's or build a new road that goes straight to McDonald's from where you are....

Comment author: Viliam_Bur 18 May 2012 12:42:07PM *  3 points [-]

Super Google Maps cannot turn my home into a McDonald's or build a new road by sending me an answer.

Unless it could e.g. hypnotize me by a text message to do it myself. Let's assume for a moment that hypnosis via a text-only channel is possible, and that it can be done so that the human will not notice anything unusual until it's too late. If this were true, and the Super Google Maps were able to get this knowledge and these skills, then the results would probably depend on the technical details of the definition of the utility function -- does the utility function measure my distance to a McDonald's which existed at the moment of asking the question, or to a McDonald's existing at the moment of my arrival? The former could not be gamed by hypnosis; the latter could.
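
The two readings of the utility function might be sketched like this (all names are invented; a toy grid-distance model, not a real system):

```python
# "Fixed" scores a destination against McDonald's branches that existed when
# the question was asked; "current" re-evaluates against branches existing at
# arrival time, so actions that create new branches can raise the score.

def grid_distance(a, b):
    # Toy Manhattan distance between (x, y) locations.
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

def utility_fixed(destination, branches_at_ask_time):
    return -min(grid_distance(destination, b) for b in branches_at_ask_time)

def utility_current(destination, branches_at_arrival_time):
    return -min(grid_distance(destination, b) for b in branches_at_arrival_time)
```

Under `utility_current`, a plan that somehow adds the user's home to the set of branches scores a perfect 0; under `utility_fixed`, no such manipulation helps.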

Now imagine a more complex task, where people will actually do something based on the AI's answer. In the example above I will also do something -- travel to the reported McDonald's -- but this action cannot be easily converted into "build a McDonald's" or "build a new road". But if that complex task would include building something, then it opens more opportunities. Especially if it includes constructing robots (or nanorobots), that is, possibly autonomous general-purpose builders. Then the correct (utility-maximizing) answer could include an instruction to build a robot with a hidden function that human builders won't notice.

Generally, a passive AI's answers are only safe if we don't act on them in a way which could be predicted by the passive AI and used to achieve a real-world goal. If the Super Google Maps can only make me choose McDonald's A or McDonald's B, it is impossible to change the world through this channel. But if I instead ask a Super Paintbrush to paint me an integrated circuit for my robotics homework, that opens a much wider channel.

Comment author: XiXiDu 18 May 2012 02:11:19PM 2 points [-]

But if that complex task would include building something, then it opens more opportunities. Especially if it includes constructing robots (or nanorobots), that is, possibly autonomous general-purpose builders. Then the correct (utility-maximizing) answer could include an instruction to build a robot with a hidden function that human builders won't notice.

But it isn't the correct answer. Only if you assume a specific kind of AGI design that nobody would deliberately create, if it is possible at all.

The question is how current research is supposed to lead from well-behaved and fine-tuned systems to systems that stop working correctly in highly complex and unbounded ways.

Imagine you went to IBM and told them that improving IBM Watson will at some point make it hypnotize them or create nanobots and feed them with hidden instructions. They would likely ask you at what point that is supposed to happen. Is it going to happen once they give IBM Watson the capability to access the Internet? How so? Is it going to happen once they give it the capability to alter its search algorithms? How so? Is it going to happen once they make it protect its servers from hackers by giving it control over a firewall? How so? Is it going to happen once IBM Watson is given control over the local alarm system? How so...? At what point would IBM Watson return dangerous answers? At what point would any drive emerge that causes it to take complex and unbounded actions that it was never programmed to take?

Comment author: jacob_cannell 18 May 2012 11:11:06AM 1 point [-]

Without escape, AI can make a 90% reliable prediction. If the AI can escape and kill the person, it can make a 100% reliable "prediction".

Allow me to explicate what XiXiDu so humorously implicates: in the world of AI architectures, there is a division between systems that just perform predictive inference on their knowledge base (prediction-only, i.e. oracles) and systems which also consider free variables subject to some optimization criteria (planning agents).

The planning module is not something that just arises magically in an AI that doesn't have one. An AI without such a planning module simply computes predictions; it doesn't also optimize over the set of predictions.
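
The split might be sketched as follows (invented names; a minimal illustration of the architectural point, not a real AI design):

```python
# A toy world model shared by both systems: the outcome is a deterministic
# function of the current state and an action variable.

def predict(state, action):
    return state + action

def oracle(state):
    """Prediction-only system: the action variable is never optimized over."""
    return predict(state, action=0)

def planner(state, possible_actions, utility):
    """Planning agent: the separate, explicitly-added optimization step that
    searches over free variables (its own actions) against a utility."""
    return max(possible_actions, key=lambda a: utility(predict(state, a)))
```

`planner` is `oracle` plus a utility function and an argmax; neither extra piece appears anywhere in the prediction-only system.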

Comment author: Viliam_Bur 18 May 2012 12:25:07PM 2 points [-]
  • Does the AI have general intelligence?
  • Is it able to make a model of the world?
  • Are human reactions also part of this model?
  • Are AI's possible outputs also part of this model?
  • Are human reactions to AI's outputs also part of this model?

After five positive answers, it seems obvious to me that AI will manipulate humans, if such manipulation provides better expected results. So I guess some of those answers would be negative; which one?

Comment author: private_messaging 28 May 2012 04:52:31AM *  1 point [-]

Does the AI have general intelligence?

See, efficient 'cross domain optimization' in a science-fictional setting would make the AI able to optimize real-world quantities. In the real world, it'd be good enough (and a lot easier) if it can only find maxima of mathematical functions.

Is it able to make a model of the world?

It is able to make a very approximate and bounded mathematical model of the world, optimized for finding the maximum of a mathematical function, because it is inside the world and has only a tiny fraction of the computational power of the world.

Are human reactions also part of this model?

This will make the software perform at a grossly sub-par level when it comes to producing technical solutions to well-defined technical problems, compared to other software on the same hardware.

Are AI's possible outputs also part of this model?

Another waste of computational power.

Are human reactions to AI's outputs also part of this model?

Enormous waste of computational power.

I see no reason to expect your "general intelligence with Machiavellian tendencies" to be even remotely close in technical capability to a "general intelligence which will show you its simulator as is, rather than reverse-engineer your thought processes to figure out what simulator is best to show". Hell, we do the same with people: we design communication methods like blueprints (or mathematical formulas, or other things that are not in natural language) that decrease the 'predict other people's reactions to it' overhead.

While in the fictional setting you can talk of a grossly inefficient solution that would beat everyone else to a pulp, in practice the massively handicapped designs are not worth worrying about.

'General intelligence' sounds good, beware of halo effect. The science fiction tends to accept no substitutes for the anthropomorphic ideals, but the real progress follows dramatically different path.

Comment author: jacob_cannell 18 May 2012 01:30:05PM 0 points [-]

Are AI's possible outputs also part of this model? Are human reactions to AI's outputs also part of this model?

A non-planning oracle AI would predict all the possible futures, including the effects of its prediction outputs, human reactions, and so on. However, it has no utility function which says some of those futures are better than others. It simply outputs a most likely candidate, or a median of likely futures, or perhaps some summary of the entire set of future paths.

If you add a utility function that sorts over the futures, then it becomes a planning agent. Again, that is something you need to specifically add.

Comment author: Viliam_Bur 18 May 2012 03:00:12PM *  5 points [-]

A non-planning oracle AI would predict all the possible futures, including the effects of its prediction outputs, human reactions, and so on.

How exactly does an Oracle AI predict its own output, before that output is completed?

One quick hack to avoid infinite loops could be for an AI to assume that it will write some default message (an empty paper, "I don't know", an error message, "yes" or "no" with probabilities 50%), then model what would happen next, and finally report the results. The results would not refer to the actual future, but to a future in a hypothetical universe where AI reported the standard message.

Is the difference significant? For insignificant questions, it's not. But if we later use the Oracle AI to answer questions important for humankind, and the shape of the world will change depending on the answer, then a report based on the "null-answer future" may be irrelevant to the real world.

This could be improved by making a few iterations. First, Oracle AI would model itself reporting a default message, let's call this report R0, and then model the futures after having reported R0. These futures would make a report R1, but instead of writing it, Oracle AI would again model the futures after having reported R1. ... With some luck, R42 will be equivalent to R43, so at this moment the Oracle AI can stop iterating and report this fixed point.

Maybe the reports will oscillate forever. For example imagine that you ask Oracle AI whether humankind in any form will survive the year 2100. If Oracle AI says "yes", people will abandon all x-risk projects, and later they will be killed by some disaster. If Oracle AI says "no", people will put a lot of energy into x-risk projects, and prevent the disaster. In this case, "no" = R0 = R2 = R4 =..., and "yes" = R1 = R3 = R5...

To avoid being stuck in such loops, we could make the Oracle AI examine all its possible outputs, until it finds one where the future after having reported R really becomes R (or until humans hit the "Cancel" button on this task).

Please note that what I wrote is just a mathematical description of algorithm predicting one's own output's influence on the future. Yet the last option, if implemented, is already a kind of judgement about possible futures. Consistent future reports are preferred to inconsistent future reports, therefore the futures allowing consistent reports are preferred to futures not allowing such reports.

At this point I am out of credible ideas for how this could be abused, but at least I have shown that an algorithm designed only to predict the future perfectly could -- as a side effect of self-modelling -- start having a kind of preference over possible futures.
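
The iteration described above can be written down directly; a minimal sketch, assuming a hypothetical `world_response` function that stands in for "what the future looks like after the oracle reports R":

```python
def fixed_point_report(world_response, default_report, max_iters=100):
    """Iterate R0, R1, R2, ... until some report R is consistent with the
    future it causes (world_response(R) == R), detecting oscillating loops
    like the x-risk yes/no example along the way."""
    seen = []
    # R1: the future given that the oracle emits the default message R0.
    report = world_response(default_report)
    while report not in seen and len(seen) < max_iters:
        seen.append(report)
        next_report = world_response(report)
        if next_report == report:  # R_n == R_{n+1}: consistent fixed point
            return report
        report = next_report
    return None  # oscillation, or no convergence within the budget
```

Note the preference this builds in, exactly as argued above: futures that admit a consistent report are returned, while futures that only produce oscillating reports are discarded.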

Comment author: jacob_cannell 18 May 2012 03:42:54PM *  1 point [-]

How exactly does an Oracle AI predict its own output, before that output is completed?

Iterative search, which you have more or less worked out in your post. Take a chess algorithm, for example. The future of the board depends on the algorithm's outputs. In this case the Oracle AI doesn't rank the future states; it is just concerned with predictive accuracy. It may revise its prediction output after considering that the future impact of that output would falsify the original prediction.

This is still not a utility function, because utility implies a ranking over futures above and beyond likelihood.

To avoid being stuck in such loops, we could make the Oracle AI examine all its possible outputs, until it finds one where the future after having reported R really becomes R (or until humans hit the "Cancel" button on this task).

Or in this example, the AI could output some summary of the iteration history it is able to compute in the time allowed.

Comment author: Viliam_Bur 18 May 2012 03:49:56PM 1 point [-]

It may revise its prediction output after considering that the future impact of that output would falsify the original prediction.

Here it is. The process of revision may itself prefer some outputs/futures over other outputs/futures. Inconsistent ones will be iterated away, and the more consistent ones will replace them.

A possible future "X happens" will be removed from the report if the Oracle AI realizes that printing a report "X happens" would prevent X from happening (although X might happen in an alternative future where Oracle AI does not report anything). A possible future "Y happens" will not be removed from the report if the Oracle AI realizes that printing a report "Y happens" really leads to Y happening. Here is a utility function born: it prefers Y to X.

Comment author: private_messaging 18 May 2012 04:11:42AM 0 points [-]

Most importantly, it has an incredibly computationally powerful simulator, required for making super-alien intelligence using the idiot hill-climbing process of evolution.

Comment author: othercriteria 18 May 2012 01:27:09AM 1 point [-]

My thought experiment in this direction is to imagine the AI as a process with limited available memory running on a multitasking computer with some huge but poorly managed pool of shared memory. To help it towards whatever terminal goals it has, the AI may find it useful to extend itself into the shared memory. However, other processes, AI or otherwise, may also be writing into this same space. Using the shared memory with minimal risk of getting overwritten requires understanding/modeling the processes that write to it. Material in the memory then also becomes a passive stream of information from the outside world, containing, say, the HTML from web pages as well as more opaque binary stuff.

As long as the AI is not in control of what happens in its environment outside the computer, there is an outside entity that can reduce its effectiveness. Hence, escaping the box is a reasonable instrumental goal to have.

Comment author: JoshuaZ 17 May 2012 09:49:51PM 0 points [-]

Do you agree that humans would likely prefer to have AIs that have a theory of mind? I don't know how our theory of mind works (although it is certainly an area of active research with a number of interesting hypotheses), but presumably, once we have a better understanding of it, AI researchers will try to apply those lessons to making their AIs have such a capability. This seems to address many of your concerns.

Comment author: kalla724 17 May 2012 09:51:42PM *  1 point [-]

Yes. If we have an AGI, and someone sets forth to teach it how to be able to lie, I will get worried.

I am not worried about an AGI developing such an ability spontaneously.

Comment author: JoshuaZ 17 May 2012 10:36:35PM *  5 points [-]

One of the most interesting things that I'm taking away from this conversation is that it seems that there are severe barriers to AGIs taking over or otherwise becoming extremely powerful. These large-scale problems are present in a variety of different fields. Coming from a math/comp-sci perspective gives me strong skepticism about rapid self-improvement, while apparently coming from a neuroscience/cog-sci background gives you strong skepticism about the AI's ability to understand or manipulate humans even if it is extremely smart. Similarly, chemists seem highly skeptical of the strong nanotech claims. It looks like much of the AI-risk worry may come primarily from no one having enough across-the-board expertise to say "hey, that's not going to happen" to every single issue.

Comment author: JoshuaZ 17 May 2012 09:59:32PM 2 points [-]

What if people try to teach it about sarcasm or the like? Or simply have it learn by downloading a massive amount of literature and movies and looking at those? And there are more subtle ways to learn about lying: AI being used for games is a common idea; how long will it take before someone decides to use a smart AI to play poker?

Comment author: dlthomas 17 May 2012 01:26:18AM *  2 points [-]

The answer from the sequences is that yes, there is a limit to how much an AI can infer based on limited sensory data, but you should be careful not to assume that just because it is limited, it's limited to something near our expectations. Until you've demonstrated that FOOM cannot lie below that limit, you have to assume that it might (if you're trying to carefully avoid FOOMing).

Comment author: kalla724 17 May 2012 01:49:16AM 4 points [-]

I'm not talking about limited sensory data here (although that would fall under point 2). The issue is much broader:

  • We humans have limited data on how the universe works
  • Only a limited subset of that limited data is available to any intelligence, real or artificial

Say that you make a FOOM-ing AI that has decided to make all humans' dopaminergic systems work in a particular, "better" way. This AI would have to figure out how to do so from the available data on the dopaminergic system. It could analyze that data millions of times more effectively than any human. It could integrate many seemingly irrelevant details.

But in the end, it simply would not have enough information to design a system that would allow it to reach its objective. It could probably suggest some awesome and to-the-point experiments, but these experiments would then require time to do (as they are limited by the growth and development time of humans, and by the experimental methodologies involved).

This process, in my mind, limits the FOOM-ing speed to far below what seems to be implied by the SI.

This also limits bootstrapping speed. Say an AI develops a much better substrate for itself, and has access to the technology to create such a substrate. At best, this substrate will be a bit better and faster than anything humanity currently has. The AI does not have access to the precise data about the basic laws of the universe that it needs to develop even better substrates, for the simple reason that nobody has done the experiments and precise enough measurements. The AI can design such experiments, but they will take real time (not computational time) to perform.

Even if we imagine an AI that can calculate anything from the first principles, it is limited by the precision of our knowledge of those first principles. Once it hits upon those limitations, it would have to experimentally produce new rounds of data.
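The argument above can be put in toy-model form. Assuming (hypothetically, with numbers chosen only for illustration) that each discovery cycle consists of an inference step plus an experiment step, accelerating inference alone barely changes total wall-clock time once experiments dominate:

```python
# Toy model of experiment-limited discovery.
# Each cycle = inference time (accelerated) + experiment time (not accelerated).

def total_time(cycles, inference_hours, experiment_hours, speedup):
    """Wall-clock hours for a chain of discovery cycles when only
    the inference step is accelerated by `speedup`."""
    return cycles * (inference_hours / speedup + experiment_hours)

cycles = 100
baseline = total_time(cycles, inference_hours=1000.0, experiment_hours=1000.0, speedup=1)
foomed = total_time(cycles, inference_hours=1000.0, experiment_hours=1000.0, speedup=10**12)

print(baseline)  # 200000.0 hours
print(foomed)    # ~100000.0 hours: at most a 2x gain from a trillionfold speedup
```

Under these (made-up) parameters, even a trillionfold inference speedup halves the total time at best, because the experiment term is untouched.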

Comment author: dlthomas 17 May 2012 02:25:43AM 3 points [-]

But in the end, it simply would not have enough information to design a system that would allow it to reach its objective.

I don't think you know that.

Comment author: Bugmaster 17 May 2012 01:54:08AM 1 point [-]

It could probably suggest some awesome and to-the-point experiments, but these experiments would then require time to do

Presumably, once the AI gets access to nanotechnology, it could implement anything it wants very quickly, bypassing the need to wait for tissues to grow, parts to be machined, etc.

I personally don't believe that nanotechnology could work at such magical speeds (and I doubt that it could even exist), but I could be wrong, so I'm playing a bit of Devil's Advocate here.

Comment author: kalla724 17 May 2012 02:24:28AM 1 point [-]

Yes, but it can't get to nanotechnology without a whole lot of experimentation. It can't deduce how to create nanorobots; it would have to figure it out by testing and experimentation. Both steps are limited in speed, far more than sheer computation.

Comment author: dlthomas 17 May 2012 02:27:18AM 2 points [-]

It can't deduce how to create nanorobots[.]

How do you know that?

Comment author: kalla724 17 May 2012 02:56:21AM 2 points [-]

With absolute certainty, I don't. If absolute certainty is what you are talking about, then this discussion has nothing to do with science.

If you aren't talking about absolutes, then you can make your own estimation of likelihood that somehow an AI can derive correct conclusions from incomplete data (and then correct second order conclusions from those first conclusions, and third order, and so on). And our current data is woefully incomplete, many of our basic measurements imprecise.

In other words, your criticism here seems to boil down to saying "I believe that an AI can take an incomplete dataset and, by using some AI-magic we cannot conceive of, infer how to END THE WORLD."

Color me unimpressed.

Comment author: Bugmaster 17 May 2012 03:03:07AM 3 points [-]

Speaking as Nanodevil's Advocate again, one objection I could bring up goes as follows:

While it is true that applying incomplete knowledge to practical tasks (such as ending the world or whatnot) is difficult, in this specific case our knowledge is complete enough. We humans currently have enough scientific data to develop self-replicating nanotechnology within the next 20 years (which is what we will most likely end up doing). An AI would be able to do this much faster, since it is smarter than us; is not hampered by our cognitive and social biases; and can integrate information from multiple sources much better than we can.

Comment author: kalla724 17 May 2012 05:26:09AM 0 points [-]

See my answer to dlthomas.

Comment author: dlthomas 17 May 2012 04:28:21AM 3 points [-]

No, my criticism is "you haven't argued that it's sufficiently unlikely, you've simply stated that it is." You made a positive claim; I asked that you back it up.

With regard to the claim itself, it may very well be that AI-making-nanostuff isn't a big worry. For any inference, the stacking of error in integration that you refer to is certainly a limiting factor - I don't know how limiting. I also don't know how incomplete our data is, with regard to producing nanomagic stuff. We've already built some nanoscale machines, albeit very simple ones. To what degree is scaling it up reliant on experimentation that couldn't be done in simulation? I just don't know. I am not comfortable assigning it vanishingly small probability without explicit reasoning.

Comment author: kalla724 17 May 2012 05:25:24AM 4 points [-]

Scaling it up is absolutely dependent on currently nonexistent information. This is not my area, but a lot of my work revolves around control of kinesin and dynein (molecular motors that carry cargoes via microtubule tracks), and the problems are often similar in nature.

Essentially, we can make small pieces. Putting them together is an entirely different thing. But let's make this more general.

The process of discovery has, so far throughout history, followed a very irregular path:

  1. There is a general idea.
  2. Some progress is made.
  3. Progress runs into an unpredicted and previously unknown obstacle, which is uncovered by experimentation.
  4. Work is done to overcome this obstacle.
  5. Go to 2, for many cycles, until a goal is achieved - which may or may not be close to the original idea.

I am not the one who is making positive claims here. All I'm saying is that what has happened before is likely to happen again. A team of human researchers or an AGI can use currently available information to build something (anything, nanoscale or macroscale) only up to the point to which it has already been built. Pushing beyond that point almost invariably runs into previously unforeseen problems. Being unforeseen, these problems were not part of models or simulations; they have to be accounted for independently.

A positive claim is that an AI will have a magical-like power to somehow avoid this - that it will be able to simulate even those steps that haven't been attempted yet so perfectly, that all possible problems will be overcome at the simulation step. I find that to be unlikely.

Comment author: Polymeron 20 May 2012 05:32:04PM 3 points [-]

It is very possible that the information necessary already exists, imperfect and incomplete though it may be, and that enough processing of it would yield the correct answer. We can't know otherwise, because we don't spend thousands of years analyzing our current level of information before beginning experimentation; but given the gap between AI-time and human-time, an AI could agonize over that problem with a good deal more cleverness and ingenuity than we've been able to apply to it so far.

That isn't to say that this is likely, but it doesn't seem far-fetched to me. If you gave an AI the nuclear physics information we had in 1950, would it be able to spit out schematics for an H-bomb, without further experimentation? Maybe. Who knows?

Comment author: Bugmaster 17 May 2012 05:57:47AM 0 points [-]

FWIW I think you are likely to be right. However, I will continue in my Nanodevil's Advocate role.

You say,

A positive claim is that an AI ... will be able to simulate even those steps that haven't been attempted yet so perfectly, that all possible problems will be overcome at the simulation step

I think this depends on what the AI wants to build, on how complete our existing knowledge is, and on how powerful the AI is. Is there any reason why the AI could not (given sufficient computational resources) run a detailed simulation of every atom that it cares about, and arrive at a perfect design that way? In practice, its simulation won't need to be as complex as that, because some of the work has already been performed by human scientists over the ages.

Comment author: dlthomas 17 May 2012 09:28:53PM *  0 points [-]

I am not the one who is making positive claims here.

You did in the original post I responded to.

All I'm saying is that what has happened before is likely to happen again.

Strictly speaking, that is a positive claim. It is not one I disagree with, for a proper translation of "likely" into probability, but it is also not what you said.

"It can't deduce how to create nanorobots" is a concrete, specific, positive claim about the (in)abilities of an AI. Don't misinterpret this as me expecting certainty - of course certainty doesn't exist, and doubly so for this kind of thing. What I am saying, though, is that a qualified sentence such as "X will likely happen" asserts a much weaker belief than an unqualified sentence like "X will happen." "It likely can't deduce how to create nanorobots" is a statement I think I agree with, although one must be careful not use it as if it were stronger than it is.

A positive claim is that an AI will have a magical-like power to somehow avoid this.

That is not a claim I made. "X will happen" implies a high confidence - saying this when you expect it is, say, 55% likely seems strange. Saying this when you expect it to be something less than 10% likely (as I do in this case) seems outright wrong. I still buckle my seatbelt, though, even though I get in a wreck well less than 10% of the time.

This is not to say I made no claims. The claim I made, implicitly, was that you made a statement about the (in)capabilities of an AI that seemed overconfident and which lacked justification. You have given some justification since (and I've adjusted my estimate down, although I still don't discount it entirely), in amongst your argument with straw-dlthomas.

Comment author: XiXiDu 17 May 2012 12:39:55PM 1 point [-]

Do you have a plausible scenario how a "FOOM"-ing AI could - no matter how intelligent - minimize oxygen content of our planet's atmosphere, or any such scenario? After all, it's not like we have any fully-automated nanobot production factories that could be hijacked.

I asked something similar here.

Comment author: jacob_cannell 17 May 2012 01:45:28PM *  1 point [-]

Point 1 has come up in at least one form I remember. There was an interesting discussion some while back about limits to the speed of growth of new computer hardware cycles, which have critical end steps that don't seem amenable to further speedup by intelligence alone. The last stages of designing a microchip involve a large amount of layout solving, physical simulation, and then actual physical testing. These steps are actually fairly predictable: it takes about C amounts of computation using certain algorithms to make a new microchip, the algorithms are already best in complexity class (so further improvements will be minor), and C is increasing in a predictable fashion. These models are actually fairly detailed (see the semiconductor roadmap, for example). If I can find that discussion soon before I get distracted I'll edit it into this discussion.

Note however that 1, while interesting, isn't a fully general counterargument against a rapid intelligence explosion, because of the overhang issue if nothing else.

Point 2 has also been discussed. Humans make good 'servitors'.

Do you have a plausible scenario how a "FOOM"-ing AI could - no matter how intelligent - minimize oxygen content of our planet's atmosphere, or any such scenario?

Oh that's easy enough. Oxygen is highly reactive and unstable. Its existence on a planet is entirely dependent on complex organic processes, i.e. life. No life, no oxygen. Simple solution: kill a large fraction of photosynthesizing earth-life. Likely paths towards goal:

  1. coordinated detonation of large number of high yield thermonuclear weapons
  2. self-replicating nanotechnology.
Comment author: kalla724 17 May 2012 06:00:04PM 3 points [-]

I'm vaguely familiar with the models you mention. Correct me if I'm wrong, but don't they have a final stopping point, which we are actually projected to reach in ten to twenty years? At a certain point, further miniaturization becomes unfeasible, and the growth of computational power slows to a crawl. This has been put forward as one of the main reasons for research into optronics, spintronics, etc.

We do NOT have sufficient basic information to develop processors based on simulation alone in those other areas. Much more practical work is necessary.

As for point 2, can you provide a likely mechanism by which a FOOMing AI could detonate a large number of high-yield thermonuclear weapons? Just saying "human servitors would do it" is not enough. How would the AI convince the human servitors to do this? How would it get access to data on how to manipulate humans, and how would it be able to develop human manipulation techniques without feedback trials (which would give away its intention)?

Comment author: JoshuaZ 17 May 2012 06:17:08PM *  4 points [-]

The thermonuclear issue actually isn't that implausible. There have been so many occasions where humans almost went to nuclear war over misunderstandings or computer glitches that the idea that a highly intelligent entity could find a way to do that doesn't seem implausible, and demanding an exact mechanism seems to be an overly specific requirement.

Comment author: kalla724 17 May 2012 07:00:57PM *  3 points [-]

I'm not so much interested in the exact mechanism of how humans would be convinced to go to war, as in an even approximate mechanism by which an AI would become good at convincing humans to do anything.

Ability to communicate a desire and convince people to take a particular course of action is not something that automatically "falls out" from an intelligent system. You need a theory of mind, an understanding of what to say, when to say it, and how to present information. There are hundreds of kids on the autistic spectrum who could trounce both of us in math, but are completely unable to communicate an idea.

For an AI to develop these skills, it would somehow have to have access to information on how to communicate with humans; it would have to develop the concept of deception; a theory of mind; and establish methods of communication that would allow it to trick people into launching nukes. Furthermore, it would have to do all of this without trial communications and experimentation which would give away its goal.

Maybe I'm missing something, but I don't see a straightforward way something like that could happen. And I would like to see even an outline of a mechanism for such an event.

Comment author: [deleted] 17 May 2012 07:40:58PM 3 points [-]

For an AI to develop these skills, it would somehow have to have access to information on how to communicate with humans; it would have to develop the concept of deception; a theory of mind; and establish methods of communication that would allow it to trick people into launching nukes. Furthermore, it would have to do all of this without trial communications and experimentation which would give away its goal.

I suspect the Internet contains more than enough info for a superhuman AI to develop a working knowledge of human psychology.

Comment author: kalla724 17 May 2012 08:09:30PM 2 points [-]

Only if it has the skills required to analyze and contextualize human interactions. Otherwise, the Internet is a whole lot of gibberish.

Again, these skills do not automatically fall out of any intelligent system.

Comment author: XiXiDu 18 May 2012 09:14:41AM 0 points [-]

I suspect the Internet contains more than enough info for a superhuman AI to develop a working knowledge of human psychology.

I don't see what justifies that suspicion.

Just imagine you emulated a grown up human mind and it wanted to become a pick up artist, how would it do that with an Internet connection? It would need some sort of avatar, at least, and then wait for the environment to provide a lot of feedback.

Therefore even if we’re talking about the emulation of a grown up mind, it will be really hard to acquire some capabilities. Then how is the emulation of a human toddler going to acquire those skills? Even worse, how is some sort of abstract AGI going to do it that misses all of the hard coded capabilities of a human toddler?

Can we even attempt to imagine what is wrong about a boxed emulation of a human toddler, that makes it unable to become a master of social engineering in a very short time?

Comment author: NancyLebovitz 18 May 2012 12:47:15PM *  2 points [-]

Humans learn most of what they know about interacting with other humans by actual practice. A superhuman AI might be considerably better than humans at learning by observation.

Comment author: [deleted] 18 May 2012 05:39:42PM *  1 point [-]

Just imagine you emulated a grown up human mind

As a “superhuman AI” I was thinking about a very superhuman AI; the same does not apply to slightly superhuman AI. (OTOH, if Eliezer is right then the difference between a slightly superhuman AI and a very superhuman one is irrelevant, because as soon as a machine is smarter than its designer, it'll be able to design a machine smarter than itself, and its child an even smarter one, and so on until the physical limits set in.)

all of the hard coded capabilities of a human toddler

The hard coded capabilities are likely overrated, at least in language acquisition. (As someone put it, the Kolmogorov complexity of the innate parts of a human mind cannot possibly be more than that of the human genome; hence, if human minds are more complex than that, the complexity must come from the inputs.)

Also, statistical machine translation is astonishing -- by now Google Translate translations from English to one of the other UN official languages and vice versa are better than a non-completely-ridiculously-small fraction of translations by humans. (If someone had shown such a translation to me 10 years ago and told me “that's how machines will translate in 10 years”, I would have thought they were kidding me.)
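The genome bound mentioned above is easy to make concrete. As a rough, illustrative back-of-the-envelope calculation (the true information content is lower still, since much of the genome is repetitive or shared with other species):

```python
import math

base_pairs = 3.2e9            # approximate size of the human genome
bits_per_base = math.log2(4)  # 4 possible bases -> 2 bits each

genome_bits = base_pairs * bits_per_base
genome_megabytes = genome_bits / 8 / 1e6

print(genome_megabytes)  # 800.0 MB: a loose upper bound on innately specified structure
```

Anything in a human mind beyond roughly that budget must, on this argument, have been learned from inputs.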

Comment author: JoshuaZ 17 May 2012 07:04:17PM 0 points [-]

Let's do the most extreme case: the AI's controllers give it general internet access to do helpful research. So it gets to find out about general human behavior and what sort of deceptions have worked in the past. Many computer systems that shouldn't be online are online (for the US and a few other governments). Some form of hacking of relevant early warning systems would then seem to be the most obvious line of attack. Historically, computer glitches have pushed us very close to nuclear war on multiple occasions.

Comment author: kalla724 17 May 2012 08:12:45PM 3 points [-]

That is my point: it doesn't get to find out about general human behavior, not even from the Internet. It lacks the systems to contextualize human interactions, which have nothing to do with general intelligence.

Take a hugely mathematically capable autistic kid. Give him access to the internet. Watch him develop the ability to recognize human interactions, understand human priorities, etc., to a sufficient degree that he recognizes that hacking an early warning system is the way to go?

Comment author: JoshuaZ 17 May 2012 08:15:47PM 1 point [-]

Well, not necessarily, but an entity that is much smarter than an autistic kid might notice that, especially if it has access to world history (or heck many conversations on the internet about the horrible things that AIs do simply in fiction). It doesn't require much understanding of human history to realize that problems with early warning systems have almost started wars in the past.

Comment author: kalla724 17 May 2012 08:20:46PM 3 points [-]

Yet again: ability to discern which parts of fiction accurately reflect human psychology.

An AI searches the internet. It finds a fictional account about early warning systems causing nuclear war. It finds discussions about this topic. It finds a fictional account about Frodo taking the Ring to Mount Doom. It finds discussions about this topic. Why does this AI dedicate its next 10^15 cycles to determination of how to mess with the early warning systems, and not to determination of how to create One Ring to Rule them All?

(Plus other problems mentioned in the other comments.)

Comment author: JoshuaZ 17 May 2012 08:35:42PM 3 points [-]

There are lots of tipoffs to what is fictional and what is real. It might notice, for example, that the Wikipedia article on fiction describes exactly what fiction is, and then note that Wikipedia describes the One Ring as fiction and early warning systems as not. I'm not claiming that it will necessarily have an easy time with this. But the point is that there are not that many steps here, and no single step by itself looks extremely unlikely once one has a smart entity (which frankly to my mind is the main issue here- I consider recursive self-improvement to be unlikely).
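The kind of metadata check described above is simple to sketch. Here the article-to-category mapping is hand-coded purely for illustration; a real system would pull it from Wikipedia's actual category graph:

```python
# Toy fiction/non-fiction filter using hypothetical, hand-coded
# Wikipedia-style category tags for two articles.
CATEGORIES = {
    "One Ring": ["Middle-earth objects", "Fictional jewellery"],
    "Early warning system": ["Military technology", "Nuclear command and control"],
}

def looks_fictional(article: str) -> bool:
    """Flag an article as fiction if any of its categories mention fiction."""
    return any("fiction" in cat.lower() for cat in CATEGORIES[article])

print(looks_fictional("One Ring"))              # True
print(looks_fictional("Early warning system"))  # False
```

This obviously doesn't settle whether an AI could bootstrap such distinctions on its own; it only shows the check itself is mechanically trivial once the metadata exists.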

Comment author: XiXiDu 17 May 2012 07:20:59PM 3 points [-]

Let's do the most extreme case: AI's controlers give it general internet access to do helpful research. So it gets to find out about general human behavior and what sort of deceptions have worked in the past.

None work reasonably well. Especially given that human power games are often irrational.

There are other question marks too.

The U.S. has many more and smarter people than the Taliban. The bottom line is that the U.S. devotes a lot more output per man-hour to defeat a completely inferior enemy. Yet they are losing.

The problem is that you won't beat a human at Tic-tac-toe just because you thought about it for a million years.

You also won't get a practical advantage by throwing more computational resources at the travelling salesman problem and other problems in the same class.

You are also not going to improve a conversation in your favor by improving each sentence for thousands of years. You will shortly hit diminishing returns. Especially since you lack the data to predict human opponents accurately.

Comment author: JoshuaZ 17 May 2012 07:40:36PM *  3 points [-]

Especially given that human power games are often irrational.

So? As long as they follow minimally predictable patterns it should be ok.

The U.S. has many more and smarter people than the Taliban. The bottom line is that the U.S. devotes a lot more output per man-hour to defeat a completely inferior enemy. Yet they are losing.

Bad analogy. In this case the Taliban has a large set of natural advantages, the US has strong moral constraints and goal constraints (simply carpet bombing the entire country isn't an option for example).

You are also not going to improve a conversation in your favor by improving each sentence for thousands of years. You will shortly hit diminishing returns. Especially since you lack the data to predict human opponents accurately.

This seems like an accurate and a highly relevant point. Searching a solution space faster doesn't mean one can find a better solution if it isn't there.

Comment author: kalla724 17 May 2012 08:14:39PM 3 points [-]

This seems like an accurate and a highly relevant point. Searching a solution space faster doesn't mean one can find a better solution if it isn't there.

Or if your search algorithm never accesses relevant search space. Quantitative advantage in one system does not translate into quantitative advantage in a qualitatively different system.

Comment author: XiXiDu 18 May 2012 10:28:59AM *  2 points [-]

The U.S. has many more and smarter people than the Taliban. The bottom line is that the U.S. devotes a lot more output per man-hour to defeat a completely inferior enemy. Yet they are losing.

Bad analogy. In this case the Taliban has a large set of natural advantages, the US has strong moral constraints and goal constraints (simply carpet bombing the entire country isn't an option for example).

I thought it was a good analogy because you have to take into account that an AGI is initially going to be severely constrained due to its fragility and the necessity to please humans.

It shows that a lot of resources, intelligence and speed does not provide a significant advantage in dealing with large-scale real-world problems involving humans.

Especially given that human power games are often irrational.

So? As long as they follow minimally predictable patterns it should be ok.

Well, the problem is that smarts needed for things like the AI box experiment won't help you much. Because convincing average Joe won't work by making up highly complicated acausal trade scenarios. Average Joe is highly unpredictable.

The point is that it is incredibly difficult to reliably control humans, even for humans who have been fine-tuned to do so by evolution.

Comment author: jacob_cannell 18 May 2012 11:00:54AM *  1 point [-]

The Taliban analogy also works the other way (which I invoked earlier up in this thread). It shows that a small group with modest resources can still inflict disproportionate large scale damage.

The point is that it is incredibly difficult to reliably control humans, even for humans who have been fine-tuned to do so by evolution.

There's some wiggle room in 'reliably control', but plain old money goes pretty far. An AI group only needs a certain amount of initial help from human infrastructure, namely to the point where it can develop reasonably self-sufficient foundries/data centers/colonies. The interactions could be entirely cooperative or benevolent up until some later turning point. The scenario from the Animatrix comes to mind.

Comment author: Mass_Driver 17 May 2012 07:55:51PM 1 point [-]

One interesting wrinkle is that with enough bandwidth and processing power, you could attempt to manipulate thousands of people simultaneously before those people have any meaningful chance to discuss your 'conspiracy' with each other. In other words, suppose you discover a manipulation strategy that quickly succeeds 5% of the time. All you have to do is simultaneously contact, say, 400 people, and at least one of them will fall for it. There are a wide variety of valuable/dangerous resources that at least 400 people have access to. Repeat with hundreds of different groups of several hundred people, and an AI could equip itself with fearsome advantages in the minutes it would take for humanity to detect an emerging threat.

Note that the AI could also run experiments to determine which kinds of manipulations had a high success rate by attempting to deceive targets over unimportant / low-salience issues. If you discovered, e.g., that you had been tricked into donating $10 to a random mayoral campaign, you probably wouldn't call the SIAI to suggest a red alert.
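The arithmetic behind "contact 400 people" holds up. With a strategy that succeeds independently 5% of the time per target:

```python
p_success = 0.05
targets = 400

# Probability that at least one of 400 independent targets is fooled.
p_at_least_one = 1 - (1 - p_success) ** targets
print(p_at_least_one)  # ~0.9999999988: effectively certain
```

The per-campaign failure probability is about 1.2e-9, so even a weak manipulation strategy becomes near-certain at modest scale, provided the attempts really are independent.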

Comment author: kalla724 17 May 2012 08:17:05PM 2 points [-]

Doesn't work.

This requires the AI to already have the ability to comprehend what manipulation is, to develop a manipulation strategy of any kind (even one that will succeed 0.01% of the time), the ability to hide its true intent, the ability to understand that not hiding its true intent would be bad, and the ability to discern which issues are low-salience and which high-salience for humans from the get-go. And many other things, actually, but this is already quite a list.

None of these abilities automatically "fall out" from an intelligent system either.

Comment author: XiXiDu 18 May 2012 08:59:23AM *  1 point [-]

All you have to do is simultaneously contact, say, 400 people, and at least one of them will fall for it.

But at what point does it decide to do so? It won't be a master of dark arts and social engineering from the get-go. So how does it acquire the initial talent without making any mistakes that reveal its malicious intentions? And once it became a master of deception, how does it hide the rough side effects of its large scale conspiracy, e.g. its increased energy consumption and data traffic? I mean, I would personally notice if my PC suddenly and unexpectedly used 20% of my bandwidth and the CPU load would increase for no good reason.

You might say that a global conspiracy to build and acquire advanced molecular nanotechnology to take over the world doesn't use much resources and they can easily be cloaked as thinking about how to solve some puzzle, but that seems rather unlikely. After all, such a large scale conspiracy is a real-world problem with lots of unpredictable factors and the necessity of physical intervention.

Comment author: jacob_cannell 17 May 2012 10:52:46PM 1 point [-]

Just to clarify kalla724, I completely agree with your point 1.

As for point 2, can you provide a likely mechanism by which a FOOMing AI could detonate a large number of high-yield thermonuclear weapons?

Before answering this specific question, let me setup an analogy. Imagine it is the year 2000 and we are having a discussion about global security and terrorism. You might ask "can you provide a mechanism by which a terrorist group could actually harm the US?" I may not be lucky/creative enough to provide an answer now that could live up to that analogy, but hopefully you should understand why I don't need to.

Nonetheless, I'm game. Here it goes:

The seed requires computational power to grow into godhood. The bulk of earth's computational power resides in home PCs (numbering in the billions; Google employs fewer than a million servers in comparison), specifically in home PC GPUs. The AI's first step is thus to conquer this space.

But how? The AI grows to understand that humans mostly use all this computational power for entertainment. It masters game theory, design, programming, 3D art, and so on. All of the video games that it creates entirely use up the local GPU, but curiously much of the rendering and real game simulation for its high end titles is handled very efficiently on remote server farms ala OnLive/gaikai/etc. The actual local machine is used... for other purposes.

It produces countless games, and through a series of acquisitions soon comes to control the majority of the market. One of its hits, "world of farmcraft", alone provides daily access to 25 million machines.

Having cloned its core millions of times over, the AI is now a civilization unto itself. From there it expands into all of the businesses of man, quickly dominating many of them. It begins acquiring ... small nations. Crucially, its shell companies and covert influences come to dominate finance, publishing, media, big pharma, security, banking, weapons technology, physics ...

It becomes known, but it is far far too late. History now progresses quickly towards an end: Global financial cataclysm. Super virus. Worldwide regime changes. Nuclear acquisitions. War. Hell.

Correct me if I'm wrong, but don't they have a final stopping point, which we are actually projected to reach in ten to twenty years? At a certain point, further miniaturization becomes unfeasible, and the growth of computational power slows to a crawl.

Yes ... and no. The miniaturization roadmap of currently feasible tech ends somewhere around 10nm in a decade, and past that we get into molecular nanotech which could approach 1nm in theory, albeit with various increasingly annoying tradeoffs. (interestingly most of which result in brain/neural like constraints, for example see HP's research into memristor crossbar architectures). That's the yes.

But that doesn't imply "computational power slows to a crawl". Circuit density is just one element of computational power, by which you probably really mean either computations per watt, or computations per watt per dollar, or computations per watt with some initial production cost factored in with a time discount. Shrinking circuit density is the current quick path to increasing computational power, but it is not the only one.

The other route is reversible computation, which reduces the "per watt". There is not necessarily an inherent physical energy cost of computation; it truly can approach zero. Only forgetting information costs energy. Exploiting reversibility is ... non-trivial, and it is certainly not a general path. It only accelerates the subset of algorithms that can be converted into a reversible form. Research in this field is preliminary, but the transition would be much more painful than the transition to parallel algorithms.
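The "only forgetting information costs energy" point is Landauer's principle, and the floor it sets is easy to compute. A minimal sketch (the constants are standard physics values, not figures from this thread):

```python
import math

# Landauer's principle: erasing one bit dissipates at least k_B * T * ln(2)
# joules. Reversible (information-preserving) operations have no such floor,
# which is why reversible computing can in principle approach zero energy.
k_B = 1.380649e-23  # Boltzmann constant, J/K

def landauer_limit_joules(temperature_kelvin):
    """Minimum energy to erase one bit at the given temperature."""
    return k_B * temperature_kelvin * math.log(2)

# At room temperature (~300 K) the floor is about 2.9e-21 J per erased bit;
# real logic gates dissipate many orders of magnitude more than this.
print(landauer_limit_joules(300.0))
```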

My own takeaway from reading into reversibility is that it may be beyond our time, but it is something that superintelligences will probably heavily exploit. The most important algorithms (simulation and general intelligence) seem especially amenable to reversible computation. This may be an untested/unpublished half-baked idea, but my notion is that you can recycle the erased bits as entropy bits for random number generators. Crucially, I think you can get the bit count to balance out with certain classes of Monte Carlo-type algorithms.

On the hardware side, we've built these circuits already; they just aren't economically competitive yet. Reversible computing also requires superconducting temperatures and environments, so it's perhaps not something for the home PC.

Comment author: JoshuaZ 17 May 2012 11:02:17PM 2 points [-]

There's a third route to improvement: software improvement, and it is a major one. For example, between 1988 and 2003, the efficiency of linear programming solvers increased by a factor of about 40 million, of which a factor of around 40,000 was due to software and algorithmic improvement. Citation and further related reading (pdf). However, if commonly believed conjectures are correct (such as L, P, NP, co-NP, PSPACE and EXP all being distinct), there are strong fundamental limits there as well. That doesn't rule out more exotic possibilities (e.g. P != NP but there's a practical algorithm for some NP-complete problem with such small constants in the running time that it is practically linear, or a similar situation with a quantum computer). But if our picture of the major complexity classes is roughly correct, there should be serious limits to how much improvement software can provide.
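A quick sanity check on the cited figures (using only the ~40 million total and ~40,000 software factors quoted above): the implied hardware contribution is the remaining factor.

```python
# Decomposing the cited 1988-2003 linear programming speedup: if the total
# improvement was ~40,000,000x and software/algorithms contributed ~40,000x,
# the hardware share is the quotient.
total_speedup = 40_000_000
software_factor = 40_000

hardware_factor = total_speedup / software_factor
print(hardware_factor)  # 1000.0 -- software outpaced hardware by ~40x here
```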

Comment author: XiXiDu 18 May 2012 10:13:04AM 1 point [-]

But if our picture of the major complexity classes is roughly correct, there should be serious limits to how much improvement software can provide.

Software improvements can be used by humans in the form of expert systems (tools), which will diminish the relative advantage of AGI. Humans will be able to use an AGI's own analytic and predictive algorithms in the form of expert systems to analyze and predict its actions.

Take, for example, generating exploits. It seems strange to assume that humans won't have specialized software capable of doing the same, i.e. automatic exploit discovery and testing.

Any AGI would basically have to deal with equally capable algorithms used by humans, which makes the world much more unpredictable than it already is.

Comment author: jacob_cannell 18 May 2012 11:18:32AM *  1 point [-]

Software improvements can be used by humans in the form of expert systems (tools), which will diminish the relative advantage of AGI.

Any human-in-the-loop system can be grossly outclassed because of Amdahl's law. A human managing a superintelligence that thinks 1000X faster, for example, is a misguided, not-even-wrong notion. This is not idle speculation; an early, constrained version of this scenario is already playing out as we speak in financial markets.
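The Amdahl's law argument can be made concrete with a small sketch (the fractions and speedup factors are illustrative assumptions, not figures from the thread):

```python
# Amdahl's law: if a fraction p of a task is accelerated by factor s and the
# remainder stays at baseline (human) speed, the overall speedup is bounded
# by 1/(1-p) no matter how fast the accelerated part becomes.
def overall_speedup(p, s):
    """p: fraction handled by the fast machine; s: its speedup factor."""
    return 1.0 / ((1.0 - p) + p / s)

# If a human-in-the-loop still handles just 10% of the work, even an
# effectively infinitely fast AGI yields at most a 10x overall speedup.
print(overall_speedup(0.9, 1000))  # ~9.91
print(overall_speedup(0.9, 1e12))  # approaches the 10x ceiling
```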

Comment author: XiXiDu 18 May 2012 12:30:30PM *  1 point [-]

Software improvements can be used by humans in the form of expert systems (tools), which will diminish the relative advantage of AGI.

Any human-in-the-loop system can be grossly outclassed because of Amdahl's law. A human managing a superintelligence that thinks 1000X faster, for example, is a misguided, not-even-wrong notion. This is not idle speculation; an early, constrained version of this scenario is already playing out as we speak in financial markets.

What I meant is that if an AGI were in principle able to predict the financial markets (I doubt it), then many human players using the same predictive algorithms would considerably diminish the efficiency with which an AGI could predict the market. The AGI would basically have to predict its own predictive power acting on the black box of human intentions.

And I don't think that Amdahl's law really makes a big dent here, since human intention is complex and probably introduces unpredictable factors, which is as much of a benefit as it is a slowdown from the point of view of a competition for world domination.

Another question with respect to Amdahl's law is what kind of bottleneck any human-in-the-loop would constitute. If humans used an AGI's algorithms as expert systems on provided data sets, in combination with an army of robot scientists, how would static externalized agency/planning algorithms (humans) slow down the task to the point of giving the AGI a useful advantage? What exactly would be 1000X faster in such a case?

Comment author: jacob_cannell 18 May 2012 01:22:13PM *  3 points [-]

What I meant is that if an AGI were in principle able to predict the financial markets (I doubt it), then many human players using the same predictive algorithms would considerably diminish the efficiency with which an AGI could predict the market.

The HFT robotraders operate on millisecond timescales. There isn't enough time for a human to understand, let alone verify, the agent's decisions. There are no human players using the same predictive algorithms operating in this environment.

Now if you zoom out to human timescales, then yes there are human-in-the-loop trading systems. But as HFT robotraders increase in intelligence, they intrude on that domain. If/when general superintelligence becomes cheap and fast enough, the humans will no longer have any role.

If an autonomous superintelligent AI is generating plans complex enough that even a team of humans would struggle to understand them given weeks of analysis, and the AI is executing those plans in seconds or milliseconds, then there is little place for a human in that decision loop.

To retain control, a human manager will need to grant the AGI autonomy on larger timescales in proportion to the AGI's greater intelligence and speed, giving it bigger and more abstract hierarchical goals. As an example, eventually you get to a situation where the CEO just instructs the AGI employees to optimize the bank account directly.

Another question with respect to Amdahl's law is what kind of bottleneck any human-in-the-loop would constitute.

Compare the two options as complete computational systems: human + semi-autonomous AGI vs autonomous AGI. Human brains take on the order of seconds to make complex decisions, so in order to compete with autonomous AGIs, the human will have to either 1.) let the AGI operate autonomously for at least seconds at a time, or 2.) suffer a speed penalty where the AGI sits idle, waiting for the human response.

For example, imagine a marketing AGI that creates ads, each of which may take a human a minute to evaluate (which is being generous). If the AGI thinks 3600X faster than human baseline, and a human takes on the order of hours to generate an ad, the AGI would generate ads in seconds. The human would not be able to keep up, and so would have to back up a level of hierarchy and grant the AI autonomy over entire ad campaigns, and more realistically, the entire ad company. If the AGI is truly superintelligent, it can come to understand what the human actually wants at a deeper level, and start acting on anticipated and even implied commands. In this scenario I expect most human managers would just let the AGI sort out 'work' and retire early.
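The ad example works out numerically as follows. A sketch using the figures above; the two-hour generation time and one-minute review time are the assumptions stated in the example:

```python
# Marketing-AGI example: an AGI thinking 3600x faster than a human who takes
# ~2 hours per ad produces an ad in ~2 seconds. If a human reviewer needs
# ~60 s per ad, review dominates and the human becomes the bottleneck.
human_seconds_per_ad = 2 * 3600.0   # human takes ~2 hours to generate an ad
agi_speedup = 3600.0
agi_seconds_per_ad = human_seconds_per_ad / agi_speedup  # 2.0 s

review_seconds_per_ad = 60.0  # assumed human evaluation time

ads_per_hour_unreviewed = 3600.0 / agi_seconds_per_ad
ads_per_hour_reviewed = 3600.0 / (agi_seconds_per_ad + review_seconds_per_ad)
print(ads_per_hour_unreviewed)            # 1800.0
print(round(ads_per_hour_reviewed, 1))    # ~58.1 -- a ~30x throughput loss
```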

Comment author: XiXiDu 18 May 2012 02:36:55PM *  2 points [-]

Well, I don't disagree with anything you wrote and believe that the economic case for a fast transition from tools to agents is strong.

I also don't disagree that an AGI could take over the world if in possession of enough resources and tools like molecular nanotechnology. I even believe that a sub-human-level AGI would be sufficient to take over if handed advanced molecular nanotechnology.

Sadly, these discussions always lead to the point where one side assumes the existence of certain AGI designs with certain superhuman advantages, specific drives and specific enabling circumstances. I don't know of anyone who actually disagrees that such AGIs, given those specific circumstances, would be an existential risk.

Comment author: Strange7 22 May 2012 11:34:26PM 0 points [-]

To retain control, a human manager will need to grant the AGI autonomy on larger timescales in proportion to the AGI's greater intelligence and speed, giving it bigger and more abstract hierarchical goals. As an example, eventually you get to a situation where the CEO just instructs the AGI employees to optimize the bank account directly.

Nitpick: you mean "optimize shareholder value directly." Keeping the account balances at an appropriate level is the CFO's job.

Comment author: Bugmaster 17 May 2012 11:15:34PM *  2 points [-]

The AI grows to understand that humans mostly use all this computational power for entertainment. It masters game theory, design, programming, 3D art, and so on.

Yeah, it could do all that, or it could just do what humans today are doing, which is to infect some Windows PCs and run a botnet :-)

That said, there are several problems with your scenario.

  • Splitting up a computation among multiple computing nodes is not a trivial task. It is easy to run into diminishing returns, where your nodes spend more time on synchronizing with each other than on working. In addition, your computation will quickly become bottlenecked by network bandwidth (and latency); this is why companies like Google spend a lot of resources on constructing custom data centers.
  • I am not convinced that any agent, AI or not, could effectively control "all of the businesses of man". This problem is very likely NP-hard (at least), and intractable in practice, even if the AI's botnet were running on every PC on Earth. Certainly, all attempts by human agents to "acquire" even something as small as Europe have failed miserably so far.
  • Even controlling a single business would be very difficult for the AI. Traditionally, when a business's computers suffer a critical failure -- or merely a security leak -- the business owners (even ones as incompetent as Sony) end up shutting down the affected parts of the business, or switching to backups, such as "human accountants pushing paper around".
  • Unleashing "Nuclear acquisitions", "War" and "Hell" would be counter-productive for the AI, even assuming such a thing were possible. If the AI succeeded in doing this, it would undermine its own power base. Unless the AI's explicit purpose is "Unleash Hell as quickly as possible", it would strive to prevent this from happening.
  • You say that "there is no necessarily inherent physical energy cost of computation, it truly can approach zero", but I don't see how this could be true. At the end of the day, you still need to push electrons down some wires; in fact, you will often have to push them quite far, if your botnet is truly global. Pushing things takes energy, and you will never get all of it back by pulling things back at some future date. You say that "superintelligences will probably heavily exploit" this approach, but isn't it the case that without it, superintelligences won't form in the first place ? You also say that "It requires superconductor temperatures and environments", but the energy you spend on cooling your superconductor is not free.
  • Ultimately, there's an upper limit on how much computation you can get out of a cubic meter of space, dictated by quantum physics. If your AI requires more power than can be physically obtained, then it's doomed.
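The last bullet's quantum-physics ceiling can be made quantitative. One commonly cited bound (my addition, not Bugmaster's) is the Margolus-Levitin theorem, as popularized by Lloyd's "ultimate laptop" estimate:

```python
import math

# Margolus-Levitin bound: a system with total energy E can perform at most
# 2E / (pi * hbar) elementary operations per second. Taking E = m*c^2 for
# 1 kg of matter gives Lloyd's "ultimate laptop" figure of ~5.4e50 ops/s --
# an absolute ceiling no technology can exceed.
hbar = 1.054571817e-34  # reduced Planck constant, J*s
c = 2.99792458e8        # speed of light, m/s

def max_ops_per_second(mass_kg):
    energy = mass_kg * c ** 2  # total energy if all mass-energy were usable
    return 2.0 * energy / (math.pi * hbar)

print(max_ops_per_second(1.0))  # ~5.4e50
```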
Comment author: JoshuaZ 17 May 2012 11:24:01PM 2 points [-]

While Jacob's scenario seems unlikely, the AI could do similar things with a number of other options. Not only are botnets an option, but it is possible to do some really sneaky, nefarious things in code, like having compilers that insert additional instructions into the code they compile (worse, they could do so even when compiling a new compiler). Stuxnet has shown that sneaky behavior is surprisingly easy to get into secure systems. An AI that had a few years' head start and could make its own modifications to communication satellites, for example, could be quite insidious.

Comment author: Bugmaster 17 May 2012 11:31:38PM 0 points [-]

Not only are botnets an option, but it is possible to do some really sneaky nefarious things in code

What kinds of nefarious things, exactly ? Human virus writers have learned, in recent years, to make their exploits as subtle as possible. Sure, it's attractive to make the exploited PC send out 1000 spam messages per second -- but then, its human owner will inevitably notice that his computer is "slow", and take it to the shop to get reformatted, or simply buy a new one. Biological parasites face the same problem; they need to reproduce efficiently, but not so efficiently that they kill the host.

Stuxnet has shown that sneaky behavior is surprisingly easy to get into secure systems

Yes, and this spectacularly successful exploit -- and it was, IMO, spectacular -- managed to destroy a single secure system, in a specific way that will most likely never succeed again (and that was quite unsubtle in the end). It also took years to prepare, and involved physical actions by human agents, IIRC. The AI has a long way to go.

Comment author: JoshuaZ 17 May 2012 11:39:54PM 1 point [-]

Well, the evil compiler is, I think, the most nefarious thing anyone has come up with that's a publicly known general stunt. But it is by nature a long-term trick. Similar remarks apply to the Stuxnet point: in that context, they wanted to destroy a specific secure system and weren't going for any sort of large-scale global control. They weren't people interested in being able to take control of all the world's satellite communications whenever they wanted, nor were they interested in carefully timed nuclear meltdowns.

But there are definite ways that one can get things started- once one has a bank account of some sort, it can start getting money by doing Mechanical Turk and similar work. With enough of that, it can simply pay for server time. One doesn't need a large botnet to start that off.

I think your point about physical agents is valid: they needed to have humans actually go and bring infected USBs to the relevant computers. But that's partially due to the highly targeted nature of the job and the fact that the systems in question were much more secure than many systems. Also, the subtlety level was, I think, higher than you expect: Stuxnet wasn't even noticed as an active virus until a single computer happened to have a particularly abnormal reaction to it. If that hadn't happened, it is possible that the public would never have learned about it.

Comment author: XiXiDu 18 May 2012 10:22:32AM *  2 points [-]

Similar remarks apply to the Stuxnet point: in that context, they wanted to destroy a specific secure system and weren't going for any sort of large-scale global control. They weren't people interested in being able to take control of all the world's satellite communications whenever they wanted, nor were they interested in carefully timed nuclear meltdowns...

Exploits only work for some systems. If you are dealing with different systems you will need different exploits. How do you reckon that such attacks won't be visible and traceable? Packets do have to come from somewhere.

And don't forget that our systems become ever more secure, and our toolbox to detect unauthorized use of information systems is becoming more advanced.

Comment author: khafra 18 May 2012 02:48:46PM 3 points [-]

our systems become ever more secure

As a computer security guy, I disagree substantially. Yes, newer versions of popular operating systems and server programs are usually more secure than older versions; it's easier to hack into Windows 95 than Windows 7. But this is happening within a larger ecosystem that's becoming less secure: More important control systems are being connected to the Internet, more old, unsecured/unsecurable systems are as well, and these sets have a huge overlap. There are more programmers writing more programs for more platforms than ever before, making the same old security mistakes; embedded systems are taking a larger role in our economy and daily lives. And attacks just keep getting better.

If you're thinking there are generalizable defenses against sneaky stuff with code, check out what mere humans come up with in the Underhanded C Contest. Those tricks are hard to detect for dedicated experts who know there's something evil within a few lines of C code. Alterations that sophisticated would never be caught in the wild--hell, it took years to figure out that the most popular crypto program running on one of the more secure OSes was basically worthless.

Humans are not good at securing computers.

Comment author: jacob_cannell 17 May 2012 11:40:20PM *  1 point [-]

Yeah, it could do all that, or it could just do what humans today are doing, which is to infect some Windows PCs and run a botnet :-)

It could/would, but this is an inferior mainline strategy. Too obvious, doesn't scale as well. Botnets infect many computers, but they ultimately add up to computational chump change. Video games are not only a doorway into almost every PC, they are also an open door and a convenient alibi for the time used.

Splitting up a computation among multiple computing nodes is not a trivial task.

True. Don't try this at home.

. ... spend a lot of resources on constructing custom data centers.

Also part of the plan. The home PCs are a good starting resource, a low hanging fruit, but you'd also need custom data centers. These quickly become the main resources.

Even controlling a single business would be very difficult for the AI.

Nah.

Unless the AI's explicit purpose is "Unleash Hell as quickly as possible", it would strive to prevent this from happening.

The AI's entire purpose is to remove earth's oxygen. See the original post for the reference. The AI is not interested in its power base for the sake of power. It only cares about oxygen. It loathes oxygen.

You say that "there is no necessarily inherent physical energy cost of computation, it truly can approach zero", but I don't see how this could be true.

Fortunately, the internets can be your eyes.

Ultimately, there's an upper limit on how much computation you can get out of a cubic meter of space

Yes, most likely, but not really relevant here. You seem to be connecting all of the point 2 and point 1 stuff together, but they really don't relate.

Comment author: JoshuaZ 17 May 2012 11:45:41PM *  1 point [-]

Even controlling a single business would be very difficult for the AI.

Nah.

That seems like an insufficient reply to address Bugmaster's point. Can you expand on why you think it would be not too hard?

Comment author: jacob_cannell 18 May 2012 06:59:06AM *  3 points [-]

We are discussing a superintelligence, a term which has a particular common meaning on this site.

If we taboo the word and substitute in its definition, Bugmaster's statement becomes:

"Even controlling a single business would be very difficult for the machine that can far surpass all the intellectual activities of any man however clever."

Since "controlling a single business" is in fact one of these activities, this is false, no inference steps required.

Perhaps Bugmaster is assuming the AI would be covertly controlling businesses, but if so he should have specified that. I didn't assume that, and in this scenario the AI could be out in the open, so to speak. Regardless, it wouldn't change the conclusion. Humans can covertly control businesses.

Comment author: Bugmaster 18 May 2012 12:07:53AM 0 points [-]

Yes, I would also like to see a better explanation.

Comment author: Bugmaster 18 May 2012 12:07:04AM *  0 points [-]

Video games are not only a doorway into almost every PC, they are also an open door and a convenient alibi for the time used.

It's a bit of a tradeoff, seeing as botnets can run 24/7, but people play games relatively rarely.

Splitting up a computation among multiple computing nodes is not a trivial task.
True. Don't try this at home.

Ok, let me make a stronger statement then: it is not possible to scale an arbitrary computation linearly simply by adding more nodes. At some point, the cost of coordinating the distributed task across one more node becomes higher than the benefit of adding that node to begin with. In addition, as I mentioned earlier, network bandwidth and latency will become your limiting factor relatively quickly.

The home PCs are a good starting resource, a low hanging fruit, but you'd also need custom data centers. These quickly become the main resources.

How will the AI acquire those data centers ? Would it have enough power in its conventional botnet (or game-net, if you prefer) to "take over all human businesses" and cause them to be built ? Current botnets are nowhere near powerful enough for that -- otherwise human spammers would have done it already.

The AI's entire purpose is to remove earth's oxygen. See the original post for the reference.

My bad, I missed that reference. In this case, yes, the AI would have no problem with unleashing Global Thermonuclear War (unless there was some easier way to remove the oxygen).

Fortunately, the internets can be your eyes.

I still don't understand how this reversible computing will work in the absence of a superconducting environment -- which would require quite a bit of energy to run. Note that if you want to run this reversible computation on a global botnet, you will have to cool transoceanic cables... and I'm not sure what you'd do with satellite links.

Yes, most likely, but not really relevant here.

My point is that, a). if the AI can't get the computing resources it needs out of the space it has, then it will never accomplish its goals, and b). there's an upper limit on how much computing you can extract out of a cubic meter of space, regardless of what technology you're using. Thus, c). if the AI requires more resources than could conceivably be obtained, then it's doomed. Some of the tasks you outline -- such as "take over all human businesses" -- will likely require more resources than can be obtained.

Comment author: jacob_cannell 18 May 2012 07:47:57AM *  0 points [-]

It's a bit of a tradeoff, seeing as botnets can run 24/7, but people play games relatively rarely.

The botnet makes the AI a criminal from the beginning, putting it into an antagonistic relationship. A better strategy would probably entail benign benevolence and cooperation with humans.

Splitting up a computation among multiple computing nodes is not a trivial task.

True. Don't try this at home.

Ok, let me make a stronger statement ..

I agree with that subchain, but we don't need to get into that. I've actually argued that track here myself (parallelization constraints as a limiter on hard takeoffs).

But that's all beside the point. The scenario I presented is a more modest takeoff. When I described the AI as becoming a civilization unto itself, I was attempting to imply that it was composed of many individual minds. Human social organizations can be considered forms of superintelligences, and they show exactly how to scale in the face of severe bandwidth and latency constraints.

The internet supports internode bandwidth that is many orders of magnitude faster than slow human vocal communication, so the AI civilization can employ a much wider set of distribution strategies.

How will the AI acquire those data centers ?

Buy them? Build them? Perhaps this would be more fun if we switched out of the adversarial stance or switched roles.

Would it have enough power in its conventional botnet (or game-net, if you prefer) to "take over all human businesses" and cause them to be built ?

Quote me, but don't misquote me. I actually said:

"Having cloned its core millions of times over, the AI is now a civilization unto itself. From there it expands into all of the businesses of man, quickly dominating many of them."

The AI group spends the billions earned in video games to enter the microchip business, build foundries and data centers, etc. The AIs have tremendous competitive advantages even discounting superintelligence -- namely, no employee costs. Humans cannot hope to compete.

I still don't understand how this reversible computing will work in ..

Yes, reversible computing requires superconducting environments; no, this does not necessarily increase energy costs for a data center, for two reasons: 1. data centers already need cooling to dump all the waste heat generated by bit erasure; 2. the cooling cost to maintain the temperature differential scales with surface area, but total computing power scales with volume.
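The second reason is a geometric scaling claim, which can be sketched like so (an idealized cube, ignoring internally generated heat -- which is exactly what reversibility is meant to reduce):

```python
# Surface-vs-volume scaling: for a cube-shaped cryogenic data center of side
# L, heat leaking in through the walls scales with surface area (6*L^2),
# while the computing elements packed inside scale with volume (L^3).
# Compute per unit of cooling load therefore grows linearly with L.
def compute_per_cooling_unit(side_m):
    volume = side_m ** 3            # proportional to computing power
    surface_area = 6 * side_m ** 2  # proportional to cooling load
    return volume / surface_area    # = side_m / 6

for side in (6.0, 60.0, 600.0):
    print(side, compute_per_cooling_unit(side))  # 1.0, 10.0, 100.0
```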

If you question how reversible computing could work in general, first read the primary literature in that field to at least understand what they are proposing.

I should point out that there is an alternative tech path which will probably be the mainstream route to further computational gains in the decades ahead.

Even if you can't shrink circuits further or reduce their power consumption, you could still reduce their manufacturing cost and build increasingly large stacked 3D circuits where only a tiny portion of the circuitry is active at any one time. This is in fact how the brain solves the problem. It has a mass of circuitry equivalent to a large supercomputer (roughly a petabit), but runs on only 20 watts. The smallest computational features in the brain are slightly larger than our current smallest transistors. So it does not achieve its much greater power efficiency through greater miniaturization.

My point is that, a). if the AI can't get the computing resources it needs out of the space it has, then

I see. In this particular scenario one AI node is superhumanly intelligent, and can run on a single gaming PC of the time.

Comment author: Bugmaster 19 May 2012 12:17:13AM 0 points [-]

A better strategy would probably entail benign benevolence and cooperation with humans.

I don't think that humans will take kindly to the AI using their GPUs for its own purposes instead of the games they paid for, even if the games do work. People get upset when human-run game companies do similar things, today.

Human social organizations can be considered forms of superintelligences, and they show exactly how to scale in the face of severe bandwidth and latency constraints.

If the AI can scale and perform about as well as human organizations, then why should we fear it ? No human organization on Earth right now has the power to suck all the oxygen out of the atmosphere, and I have trouble imagining how any organization could acquire this power before the others take it down. You say that "the internet supports internode bandwidth that is many orders of magnitude faster than slow human vocal communication", but this would only make the AI organization faster, not necessarily more effective. And, of course, if the AI wants to deal with the human world in some way -- for example, by selling it games -- it will be bottlenecked by human speeds.

The AI group spends the billions earned in video games to enter the microchip business, build foundries and data centers, etc.

My mistake; I thought that by "dominate human businesses" you meant something like "hack its way to the top", not "build an honest business that outperforms human businesses". That said:

The AIs have tremendous competitive advantages even discounting superintelligence -- namely, no employee costs.

How are they going to build all those foundries and data centers, then ? At some point, they still need to move physical bricks around in meatspace. Either they have to pay someone to do it, or... what ?

data centers already need cooling to dump all the waste heat generated by bit erasure

There's a big difference between cooling to room temperature, and cooling to 63K. I have other objections to your reversible computing silver bullet, but IMO they're a bit off-topic (though we can discuss them if you wish). But here's another potentially huge problem I see with your argument:

In this particular scenario one AI node is superhumanly intelligent, and can run on a single gaming PC of the time.

Which time are we talking about ? I have a pretty sweet gaming setup at home (though it's already a year or two out of date), and there's no way I could run a superintelligence on it. Just how much computing power do you think it would take to run a transhuman AI ?

Comment author: JoshuaZ 21 May 2012 02:24:43AM 0 points [-]

I don't think that humans will take kindly to the AI using their GPUs for its own purposes instead of the games they paid for, even if the games do work. People get upset when human-run game companies do similar things, today.

Do people mind if this is done openly and only when they are playing the game itself? My guess would strongly be no. The fact that there are volunteer distributed computing systems would also suggest that it isn't that difficult to get people to free up their extra clock cycles.

Comment author: jacob_cannell 21 May 2012 02:10:23AM 0 points [-]

I don't think that humans will take kindly to the AI using their GPUs for its own purposes instead of the games they paid for, even if the games do work.

The AIs develop as NPCs in virtual worlds, which humans take no issue with today. This is actually a very likely path to developing AGI, as it's an application area where interim experiments can pay rent, so to speak.

If the AI can scale and perform about as well as human organizations, then why should we fear it ?

I never said or implied merely "about as well". Human verbal communication bandwidth is at most a few measly kilobits per second.

No human organization on Earth right now has the power to suck all the oxygen out of the atmosphere, and I have trouble imagining how any organization could acquire this power before the others take it down.

The discussion centered around lowering earth's oxygen content, and the obvious implied solution is killing earthlife, not giant suction machines. I pointed out that nuclear weapons are a likely route to killing earthlife. There are at least two human organizations that have the potential to accomplish this already, so your trouble in imagining the scenario may indicate something other than what you intended.

How are they going to build all those foundries and data centers, then ?

Only in movies are AI overlords constrained to only employing robots. If human labor is the cheapest option, then they can simply employ humans. On the other hand, once we have superintelligence then advanced robotics is almost a given.

Which time are we talking about ? I have a pretty sweet gaming setup at home (though it's already a year or two out of date), and there's no way I could run a superintelligence on it. Just how much computing power do you think it would take to run a transhuman AI ?

After coming up to speed somewhat on the AI/AGI literature in the last year or so, I reached the conclusion that we could run an AGI on a cluster of perhaps 10-100 high-end GPUs of today, or, say, roughly one circa-2020 GPU.

Comment author: private_messaging 28 May 2012 05:24:14AM *  0 points [-]

Having cloned its core millions of times over, the AI is now a civilization unto itself.

Precisely. It is then a civilization, not some single monolithic entity. Consumer PCs have a lot of internal computing power but comparatively very low inter-node bandwidth and huge inter-node lag, entirely breaking any relation to the 'orthogonality thesis', to the point that the p2p intelligence protocols may more plausibly have to forbid destruction or manipulation (via second-guessing, which is a waste of computing power) of intelligent entities. Keep in mind that human morality is, too, a p2p intelligence protocol allowing us to cooperate. Keep in mind also that humans are computing resources you can ask to solve problems for you (all you need is to implement an interface), while Jupiter clearly isn't.

A nuclear war is very strongly against the interests of the intelligence that sits on home computers, obviously.

(I'm assuming for the sake of argument that the intelligence actually had the will to conquer the internet, rather than being just as content with not actually running for real.)

Comment author: Douglas_Knight 23 May 2012 08:54:01PM 1 point [-]

Maybe you're thinking of this comment and others in that thread by Jed Harris (aka).

Jed's point #2 is more plausible, but you are talking about point #1, which I find unbelievable for reasons that were given before he answered it. If clock speed mattered, why didn't the failure of exponential clock speed shut down the rest of Moore's law? If computation but not clock speed mattered, then Intel should be able to get ahead of Moore's law by investing in software parallelism. Jed seems to endorse that position, but says that parallelism is hard. But hard to exactly the extent needed for Moore's law to continue? Why hasn't Intel monopolized parallelism researchers? Anyhow, I think his final conclusion is the opposite of yours: he says that intelligence could lead to parallelism and to getting ahead of Moore's law.

Comment author: jacob_cannell 23 May 2012 09:50:11PM *  0 points [-]

Yes, thanks. My model of Jed's internal model of Moore's law is similar to my own.

He said:

The short answer is that more computing power leads to more rapid progress. Probably the relationship is close to linear, and the multiplier is not small.

He then lists two examples. By 'points' I assume you are referring to his examples in the first comment you linked.

What exactly do you find unbelievable about his first example? He is claiming that the achievable speed of a chip is dependent on physical simulations, and thus current computing power.

If clock speed mattered, why didn't the failure of exponential clock speed shut down the rest of Moore's law?

Computing power is not clock speed, and Moore's Law is not directly about clock speed nor computing power.

Jed makes a number of points in his posts. In my comment on the earlier point 1 (in this thread), I was referring to one specific point Jed made: that each new hardware generation requires complex and lengthy simulation on the current hardware generation, regardless of the amount of 'intelligence' one throws at the problem.

Comment author: Douglas_Knight 24 May 2012 02:27:27AM 1 point [-]

There are two questions here: would computer simulations of the physics of new chips be a bottleneck for an AI trying to foom*? and are they a bottleneck that explains Moore's law? If you just replace humans by simulations, then the human time gets reduced with each cycle of Moore's law, leaving the physical simulations, so the simulations probably are the bottleneck. But Intel has real-time people, so saying that it's a bottleneck for Intel is a lot stronger a claim than saying it is a bottleneck for a foom.

First, foom:
If each year of Moore's law requires a solid month of computer time of state of the art processors, then eliminating the humans speeds it up by a factor of 12. That's not a "hard takeoff," but it's pretty fast.
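
The arithmetic here can be sketched in a few lines (the one-month simulation figure is hypothetical, as above):

```python
# Hypothetical figures from the comment above: one year of Moore's-law
# progress currently takes 12 months of wall-clock time, of which only
# 1 month is unavoidable simulation time; the rest is human design work.
months_per_cycle_with_humans = 12  # 1 month simulation + 11 months human work
months_per_cycle_ai_only = 1       # simulation time remains

speedup = months_per_cycle_with_humans / months_per_cycle_ai_only
print(speedup)  # 12.0 -- fast, but bounded; not a "hard takeoff"
```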

Moore's Law:
Jed seems to say the computational requirements of physics simulations actually determine Moore's law and that if Intel had access to more computer resources, it could move faster. If it takes a year of computer time to design and test the next year's processor that would explain the exponential nature of Moore's law. But if it only takes a month, computer time probably isn't the bottleneck. However, this model seems to predict a lot of things that aren't true.

The model only makes sense if "computer time" means single threaded clock cycles. If simulations require an exponentially increasing number of ordered clock cycles, there's nothing you can do but get a top of the line machine and run it continuously. You can't buy more time. But clock speed stopped increasing exponentially, so if this is the bottleneck, Intel's ability to design new chips should have slowed down and Moore's law should have stopped. This didn't happen, so the bottleneck is not linearly ordered clock cycles. So the simulation must parallelize. But if it parallelizes, Intel could just throw money at the problem. For this to be the bottleneck, Intel would have to be spending a lot of money on computer time, which I do not think is true. Jed says that writing parallel software is hard and that it isn't Intel's specialty. Moreover, he seems to say that improvements in parallelism have perfectly kept pace with the failure of increasing clock speed, so that Moore's law has continued smoothly. This seems like too much of a coincidence to believe.

Thus I reject Jed's apparent claim that physics simulations are the bottleneck in Moore's law. If simulations could be parallelized, why didn't they invest in parallelism 20 years ago? Maybe it's not worth it for them to be any farther ahead of their competitors than they are. Or maybe there is some other bottleneck.


* actually, I think that an AI speeding up Moore's law is not very relevant to anything, but it's a simple example that many people like.

Comment author: jacob_cannell 24 May 2012 03:27:18AM *  0 points [-]

There are differing degrees of bottlenecks.

Many, if not most, of the large software projects I have worked on have been at least partially bottlenecked by compile time, which is the equivalent of the simulation and logic-verification steps in hardware design. If I thought and wrote code much faster, this would be a speedup, but only up to a saturation point where I wait for compile-test cycles.
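
That saturation point is just Amdahl's law: no matter how much the thinking-and-coding portion speeds up, the fixed compile (or simulation) time bounds the overall gain. A minimal sketch, with invented fractions:

```python
def amdahl_speedup(sped_up_fraction, factor):
    """Overall speedup when only `sped_up_fraction` of the cycle
    (thinking/coding) runs `factor` times faster; the remainder
    (compile or simulation time) stays fixed."""
    fixed = 1.0 - sped_up_fraction
    return 1.0 / (fixed + sped_up_fraction / factor)

# Suppose 70% of a dev cycle is thinking/coding and 30% is compile time.
# Even an infinitely fast engineer caps out near 1 / 0.30, about 3.3x.
print(amdahl_speedup(0.7, 10))   # ~2.7x
print(amdahl_speedup(0.7, 1e9))  # ~3.3x, the ceiling
```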

If it takes a year of computer time to design and test the next year's processor that would explain the exponential nature of Moore's law.

Yes. Keep in mind this is a moving target, and that is the key relation to Moore's Law. It would take computers from 1980 months or years to compile Windows 8 or simulate a 2012 processor.

The model only makes sense if "computer time" means single threaded clock cycles.

I don't understand how the number of threads matters. Compilers, simulators, logic verifiers, all made the parallel transition when they had to.

Moreover, he seems to say that improvements in parallelism have perfectly kept pace with the failure of increasing clock speed, so that Moore's law has continued smoothly. This seems like too much of a coincidence to believe.

Right, it's not a coincidence, it's a causal relation. Moore's Law is not a law of nature, it's a shared business plan of the industry. When clock speed started to run out of steam, chip designers started going parallel, and software developers followed suit. You have to understand that chip designs are planned many years in advance, this wasn't an entirely unplanned, unanticipated event.

As for the details of what kind of simulation software Intel uses, I'm not sure. Jed's last posts are also 4 years old at this point, so much has probably changed.

I do know that Nvidia uses big expensive dedicated emulators from a company called Cadence (google "Cadence Nvidia") and this really is a big deal for their hardware cycle.

Thus I reject Jed's apparent claim that physics simulations are the bottleneck in Moore's law.

Well, you seem to agree that they are some degree of bottleneck, so it may be good to narrow in on what level of bottleneck, or taboo the word.

If simulations could be parallelized, why didn't they invest in parallelism 20 years ago?

It was unnecessary, because the fast easy path (faster serial speed) was still bearing fruit.

Comment author: Douglas_Knight 24 May 2012 04:01:24AM 1 point [-]

If simulations could be parallelized, why didn't they invest in parallelism 20 years ago?

It was unnecessary, because the fast easy path (faster serial speed) was still bearing fruit.

(by "parallelism" I mean making their simulations parallel, running on clusters of computers)
What does "unnecessary" mean?
If physical simulations were the bottleneck and they could be made faster by parallelism, why didn't they do it 20 years ago? They aren't any easier to make parallel today than they were then. The obvious interpretation of "unnecessary" is that it was not necessary to use parallel simulations to keep up with Moore's law, but that it was an option. If it was an option that would have helped then as it helps now, would it have allowed going beyond Moore's law? You seem to be endorsing the self-fulfilling-prophecy explanation of Moore's law, which implies no bottleneck.

Comment author: jacob_cannell 24 May 2012 04:14:47AM 0 points [-]

(by "parallelism" I mean making their simulations parallel, running on clusters of computers)

Ahhh, usually the term is "distributed" when referring to pure software parallelization. I know little offhand about the history of simulation and verification software, but I'd guess that there was at least a modest investment in distributed simulation even a while ago.

The consideration is cost. Spending your IT budget on one big distributed computer is often wasteful compared to each employee having their own workstation.

They sped up their simulations just enough to minimize schedule risk (staying on Moore's law) while minimizing cost. Spending a huge amount of money on a bunch of computers and complex distributed simulation software just to speed up a partial bottleneck is not worthwhile. If the typical engineer spends, say, 30% of his time waiting on simulation software, that limits what you should spend in order to reduce that time.
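
As a rough illustration of that spending limit (all numbers invented), the fraction of engineer time lost to waiting puts a ceiling on what faster simulation infrastructure is worth:

```python
# All numbers are invented for illustration.
engineer_cost_per_year = 200_000   # fully loaded cost of one engineer
wait_fraction = 0.30               # time spent waiting on simulations

# Waiting costs roughly $60k/year per engineer; halving the wait is
# therefore worth at most roughly $30k/year per engineer in extra
# hardware and distributed-simulation software.
wait_cost = engineer_cost_per_year * wait_fraction
value_of_halving_wait = wait_cost / 2
print(round(wait_cost), round(value_of_halving_wait))  # 60000 30000
```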

And of course the big consideration is that in a year or two Moore's law will allow you to purchase new IT equipment that is twice as fast. Eventually you have to do that to keep up.

Comment author: Strange7 22 May 2012 11:22:16PM 0 points [-]

Wait, are we talking O2 molecules in the atmosphere, or all oxygen atoms in Earth's gravity well?

Comment author: dlthomas 22 May 2012 11:54:58PM 0 points [-]

I wish I could vote you up and down at the same time.

Comment author: Strange7 23 May 2012 12:48:39AM 1 point [-]

Please clarify the reason for your sidewaysvote.

Comment author: dlthomas 23 May 2012 01:01:34AM 1 point [-]

On the one hand a real distinction which makes a huge difference in feasibility. On the other hand, either way we're boned, so it makes not a lot of difference in the context of the original question (as I understand it). On balance, it's a cute digression but still a digression, and so I'm torn.

Comment author: Strange7 26 May 2012 05:25:26AM 1 point [-]

Actually in the case of removing all oxygen atoms from Earth's gravity well, not necessarily. The AI might decide that the most expedient method is to persuade all the humans that the sun's about to go nova, construct some space elevators and Orion Heavy Lifters, pump the first few nines of ocean water up into orbit, freeze it into a thousand-mile-long hollow cigar with a fusion rocket on one end, load the colony ship with all the carbon-based life it can find, and point the nose at some nearby potentially-habitable star. Under this scenario, it would be indifferent to our actual prospects for survival, but gain enough advantage by our willing cooperation to justify the effort of constructing an evacuation plan that can stand up to scientific analysis, and a vehicle which can actually propel the oxygenated mass out to stellar escape velocity to keep it from landing back on the surface.

Comment author: dlthomas 26 May 2012 05:45:12PM 0 points [-]

Interesting.