dlthomas comments on Thoughts on the Singularity Institute (SI) - Less Wrong

Post author: HoldenKarnofsky 11 May 2012 04:31AM 256 points


Comment author: kalla724 17 May 2012 01:11:41AM 7 points

Hm. I must be missing something. No, I haven't read all the sequences in detail, so if these are silly, basic questions - please just point me to the specific articles that answer them.

You have an Oracle AI that is, say, a trillionfold better than humans at taking existing data and producing inferences.

1) This Oracle AI produces inferences. It still needs to test those inferences (i.e. perform experiments) and get data that allow the next inferential cycle to commence. Without experimental feedback, the inferential chain will quickly either expand into an infinity of possibilities (i.e. beyond anything that any physically possible intelligence can consider), or it will deviate from reality. The general intelligence is only as good as the data its inferences are based upon.

Experiments take time, and data analysis takes time. No matter how efficient the inferential step may become, this puts an absolute limit on the speed of growth in the capability to actually change things.
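
To put toy numbers on this (every constant below is an invented illustration, not a measurement): inference alone multiplies the number of live hypotheses, while each experiment resolves at most a few of them and costs wall-clock time that no amount of computation can compress.

    # Toy model: inference alone multiplies open hypotheses; only
    # experiments (slow, real-time) prune them back. All constants
    # are illustrative assumptions.

    UNKNOWNS_PER_STEP = 3      # binary questions each inference step raises
    DAYS_PER_EXPERIMENT = 30   # wall-clock time to resolve one unknown

    untested_branches = 1
    experiment_days = 0
    for step in range(1, 6):
        untested_branches *= 2 ** UNKNOWNS_PER_STEP   # branching is computationally free
        experiment_days += UNKNOWNS_PER_STEP * DAYS_PER_EXPERIMENT  # pruning is not
        print(f"step {step}: {untested_branches} branches if untested, "
              f"{experiment_days} experiment-days to stay grounded")

Five inferential steps leave 32,768 untested branches, or over a year of real-world experiments to keep the chain anchored to reality - whichever way you pay, computation alone doesn't settle it.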

2) An Oracle AI that "goes FOOM" while confined to a server cloud would somehow have to create servitors capable of acting out its desires in the material world. Otherwise, you have a very angry and very impotent AI. If you increase a person's intelligence a trillionfold and then enclose them in a sealed concrete cell, they will never get out; their intelligence can calculate all possible escape solutions, but none will actually work.

Do you have a plausible scenario for how a "FOOM"-ing AI could - no matter how intelligent - minimize the oxygen content of our planet's atmosphere, or carry out any similar scheme? After all, it's not like we have any fully automated nanobot production factories that could be hijacked.

Comment author: dlthomas 17 May 2012 01:26:18AM 2 points

The answer from the sequences is that yes, there is a limit to how much an AI can infer based on limited sensory data, but you should be careful not to assume that just because it is limited, it's limited to something near our expectations. Until you've demonstrated that FOOM cannot lie below that limit, you have to assume that it might (if you're trying to carefully avoid FOOMing).

Comment author: kalla724 17 May 2012 01:49:16AM 4 points

I'm not talking about limited sensory data here (although that would fall under point 2). The issue is much broader:

  • We humans have limited data on how the universe works
  • Only a limited subset of that limited data is available to any intelligence, real or artificial

Say that you make a FOOM-ing AI that has decided to make all humans' dopaminergic systems work in a particular, "better" way. This AI would have to figure out how to do so from the available data on the dopaminergic system. It could analyze that data millions of times more effectively than any human. It could integrate many seemingly irrelevant details.

But in the end, it simply would not have enough information to design a system that would allow it to reach its objective. It could probably suggest some awesome and to-the-point experiments, but these experiments would then require time to do (as they are limited by the growth and development time of humans, and by the experimental methodologies involved).

This process, in my mind, limits the FOOM-ing speed to far below what seems to be implied by the SI.

This also limits bootstrapping speed. Say an AI develops a much better substrate for itself, and has access to the technology to create such a substrate. At best, this substrate will be a bit better and faster than anything humanity currently has. The AI does not have access to the precise data about the basic laws of the universe that it needs to develop even better substrates, for the simple reason that nobody has done the experiments or made precise enough measurements. The AI can design such experiments, but they will take real time (not computational time) to perform.

Even if we imagine an AI that can calculate anything from first principles, it is limited by the precision of our knowledge of those first principles. Once it hits those limits, it would have to produce new rounds of data experimentally.
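
To see how quickly precision runs out, apply the standard quadrature rule for combining independent relative errors; the 1% base error and four inputs per derivation step below are assumptions chosen purely for illustration, and correlated errors can behave better or worse.

    import math

    # Quadrature rule for independent errors: a quantity derived from
    # k inputs, each with relative error eps, carries roughly
    # sqrt(k) * eps; chaining derivations compounds this.
    # eps and k are illustrative assumptions.

    eps = 0.01   # 1% relative error in each base measurement
    k = 4        # error-bearing inputs combined per derivation step

    rel_err = eps
    for depth in range(1, 7):
        rel_err *= math.sqrt(k)
        print(f"derivation depth {depth}: ~{rel_err:.1%} relative error")

Six layers of derivation turn a 1% measurement into a result with roughly 64% relative error - at that point only new measurements help.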

Comment author: dlthomas 17 May 2012 02:25:43AM 3 points

But in the end, it simply would not have enough information to design a system that would allow it to reach its objective.

I don't think you know that.

Comment author: Bugmaster 17 May 2012 01:54:08AM 1 point

It could probably suggest some awesome and to-the-point experiments, but these experiments would then require time to do

Presumably, once the AI gets access to nanotechnology, it could implement anything it wants very quickly, bypassing the need to wait for tissues to grow, parts to be machined, etc.

I personally don't believe that nanotechnology could work at such magical speeds (and I doubt that it could even exist), but I could be wrong, so I'm playing a bit of Devil's Advocate here.

Comment author: kalla724 17 May 2012 02:24:28AM 1 point

Yes, but it can't get to nanotechnology without a whole lot of experimentation. It can't deduce how to create nanorobots; it would have to figure it out by testing and experimentation. Both steps are limited in speed, far more than sheer computation is.

Comment author: dlthomas 17 May 2012 02:27:18AM 2 points

It can't deduce how to create nanorobots[.]

How do you know that?

Comment author: kalla724 17 May 2012 02:56:21AM 2 points

With absolute certainty, I don't. If absolute certainty is what you are talking about, then this discussion has nothing to do with science.

If you aren't talking about absolutes, then you can make your own estimate of the likelihood that an AI can somehow derive correct conclusions from incomplete data (and then correct second-order conclusions from those first conclusions, and third-order, and so on). And our current data is woefully incomplete, and many of our basic measurements are imprecise.
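
Even granting each individual inference a generous reliability (the 90% figure here is an assumption for illustration), chaining them decays geometrically:

    # If each inferential layer is independently p-reliable, a
    # conclusion resting on n chained layers holds with probability
    # p ** n. The value of p is an illustrative assumption.

    p = 0.9
    for n in (1, 5, 10, 20):
        print(f"{n} chained inferences: {p ** n:.1%} chance all hold")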

In other words, your criticism here seems to boil down to saying "I believe that an AI can take an incomplete dataset and, by using some AI-magic we cannot conceive of, infer how to END THE WORLD."

Color me unimpressed.

Comment author: Bugmaster 17 May 2012 03:03:07AM 3 points

Speaking as Nanodevil's Advocate again, one objection I could bring up goes as follows:

While it is true that applying incomplete knowledge to practical tasks (such as ending the world or whatnot) is difficult, in this specific case our knowledge is complete enough. We humans currently have enough scientific data to develop self-replicating nanotechnology within the next 20 years (which is what we will most likely end up doing). An AI would be able to do this much faster, since it is smarter than us; is not hampered by our cognitive and social biases; and can integrate information from multiple sources much better than we can.

Comment author: kalla724 17 May 2012 05:26:09AM 0 points

See my answer to dlthomas.

Comment author: dlthomas 17 May 2012 04:28:21AM 3 points

No, my criticism is "you haven't argued that it's sufficiently unlikely, you've simply stated that it is." You made a positive claim; I asked that you back it up.

With regard to the claim itself, it may very well be that AI-making-nanostuff isn't a big worry. For any inference, the stacking of error in integration that you refer to is certainly a limiting factor - I don't know how limiting. I also don't know how incomplete our data is, with regard to producing nanomagic stuff. We've already built some nanoscale machines, albeit very simple ones. To what degree is scaling it up reliant on experimentation that couldn't be done in simulation? I just don't know. I am not comfortable assigning it vanishingly small probability without explicit reasoning.

Comment author: kalla724 17 May 2012 05:25:24AM 4 points

Scaling it up is absolutely dependent on currently nonexistent information. This is not my area, but a lot of my work revolves around control of kinesin and dynein (molecular motors that carry cargoes via microtubule tracks), and the problems are often similar in nature.

Essentially, we can make small pieces. Putting them together is an entirely different thing. But let's make this more general.

The process of discovery has, so far throughout history, followed a very irregular path (sketched in code below):

  1- There is a general idea.
  2- Some progress is made.
  3- Progress runs into an unpredicted and previously unknown obstacle, which is uncovered by experimentation.
  4- Work is done to overcome this obstacle.
  5- Go to step 2, for many cycles, until a goal is achieved - which may or may not be close to the original idea.
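
As a sketch of that loop (the obstacle probability and experiment duration are invented for illustration), note that wall-clock cost is set by the experiments in steps 3 and 4, not by how fast the thinking in steps 1 and 2 runs:

    import random

    # Toy version of the discovery loop above. Progress is gated on
    # experiment time, not thinking speed. The obstacle probability
    # and durations are illustrative assumptions.

    P_UNFORESEEN_OBSTACLE = 0.7   # chance a push into new territory hits a surprise
    DAYS_PER_EXPERIMENT = 60

    def days_to_goal(milestones: int) -> float:
        days = 0.0
        for _ in range(milestones):
            days += DAYS_PER_EXPERIMENT        # experiment confronts reality
            if random.random() < P_UNFORESEEN_OBSTACLE:
                days += DAYS_PER_EXPERIMENT    # extra work to overcome the surprise
        return days

    random.seed(0)
    print(f"~{days_to_goal(10):.0f} days, regardless of thinking speed")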

I am not the one who is making positive claims here. All I'm saying is that what has happened before is likely to happen again. A team of human researchers or an AGI can use currently available information to build something (anything, nanoscale or macroscale) only up to the point to which it has already been built. Pushing it beyond that point almost invariably runs into previously unforeseen problems. Being unforeseen, these problems were not part of models or simulations; they have to be accounted for independently.

A positive claim is that an AI will have a magical-like power to somehow avoid this - that it will be able to simulate even those steps that haven't been attempted yet so perfectly, that all possible problems will be overcome at the simulation step. I find that to be unlikely.

Comment author: Polymeron 20 May 2012 05:32:04PM 3 points

It is very possible that the necessary information already exists, imperfect and incomplete though it may be, and that enough processing of it would yield the correct answer. We can't know otherwise, because we don't spend thousands of years analyzing our current level of information before beginning experimentation; but given the gap between AI-time and human-time, an AI could bring a good deal more cleverness and ingenuity to bear on the problem than we have managed to apply so far.

That isn't to say that this is likely, but it doesn't seem far-fetched to me. If you gave an AI the nuclear physics information we had in 1950, would it be able to spit out schematics for an H-bomb without further experimentation? Maybe. Who knows?

Comment author: Bugmaster 17 May 2012 05:57:47AM 0 points

FWIW I think you are likely to be right. However, I will continue in my Nanodevil's Advocate role.

You say,

A positive claim is that an AI ... will be able to simulate even those steps that haven't been attempted yet so perfectly, that all possible problems will be overcome at the simulation step

I think this depends on what the AI wants to build, on how complete our existing knowledge is, and on how powerful the AI is. Is there any reason why the AI could not (given sufficient computational resources) run a detailed simulation of every atom that it cares about, and arrive at a perfect design that way? In practice, its simulation won't need to be as complex as that, because some of the work has already been performed by human scientists over the ages.
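
Continuing as Nanodevil's Advocate, here is a back-of-envelope check on what "every atom it cares about" costs, using rough classical molecular dynamics figures (the femtosecond timestep and ~10^3 operations per atom per step are ballpark assumptions; quantum-accurate simulation would cost far more):

    # Brute-force "every atom" simulation cost, under rough classical
    # molecular dynamics assumptions (~1 fs timestep, ~1e3 FLOPs per
    # atom per step). Both figures are ballpark assumptions.

    atoms = 1e11              # roughly a cubic micron of water
    seconds_simulated = 1.0   # one real second of chemistry
    timestep_s = 1e-15        # assumed classical MD timestep
    flops_per_atom_step = 1e3

    steps = seconds_simulated / timestep_s
    total_flops = steps * atoms * flops_per_atom_step
    machine_flops = 1e18      # roughly an exascale supercomputer
    years = total_flops / machine_flops / (3600 * 24 * 365)
    print(f"{total_flops:.0e} FLOPs, ~{years:,.0f} years at exascale")

So even on the Advocate's view, the AI has to lean on abstractions and prior human results rather than raw atom-pushing; which brings the question back to how complete those priors are.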

Comment author: dlthomas 17 May 2012 09:28:53PM 0 points

I am not the one who is making positive claims here.

You did in the original post I responded to.

All I'm saying is that what has happened before is likely to happen again.

Strictly speaking, that is a positive claim. It is not one I disagree with, for a proper translation of "likely" into probability, but it is also not what you said.

"It can't deduce how to create nanorobots" is a concrete, specific, positive claim about the (in)abilities of an AI. Don't misinterpret this as me expecting certainty - of course certainty doesn't exist, and doubly so for this kind of thing. What I am saying, though, is that a qualified sentence such as "X will likely happen" asserts a much weaker belief than an unqualified sentence like "X will happen." "It likely can't deduce how to create nanorobots" is a statement I think I agree with, although one must be careful not use it as if it were stronger than it is.

A positive claim is that an AI will have a magical-like power to somehow avoid this.

That is not a claim I made. "X will happen" implies a high confidence - saying this when you expect it to be, say, 55% likely seems strange. Saying this when you expect it to be something less than 10% likely (as I do in this case) seems outright wrong. I still buckle my seatbelt, though, even though I get in a wreck well less than 10% of the time.

This is not to say I made no claims. The claim I made, implicitly, was that you made a statement about the (in)capabilities of an AI that seemed overconfident and which lacked justification. You have given some justification since (and I've adjusted my estimate down, although I still don't discount it entirely), in amongst your argument with straw-dlthomas.