kalla724 comments on Thoughts on the Singularity Institute (SI) - Less Wrong

256 Post author: HoldenKarnofsky 11 May 2012 04:31AM


Comment author: kalla724 17 May 2012 02:24:28AM 1 point [-]

Yes, but it can't get to nanotechnology without a whole lot of experimentation. It can't deduce how to create nanorobots; it would have to figure them out by testing and experimentation. Both are limited in speed, far more than sheer computation is.

Comment author: dlthomas 17 May 2012 02:27:18AM 2 points [-]

It can't deduce how to create nanorobots[.]

How do you know that?

Comment author: kalla724 17 May 2012 02:56:21AM 2 points [-]

With absolute certainty, I don't. If absolute certainty is what you are talking about, then this discussion has nothing to do with science.

If you aren't talking about absolutes, then you can make your own estimation of the likelihood that an AI can somehow derive correct conclusions from incomplete data (and then correct second-order conclusions from those first conclusions, and third-order, and so on). And our current data is woefully incomplete, and many of our basic measurements are imprecise.

In other words, your criticism here seems to boil down to saying "I believe that an AI can take an incomplete dataset and, by using some AI-magic we cannot conceive of, infer how to END THE WORLD."

Color me unimpressed.

Comment author: Bugmaster 17 May 2012 03:03:07AM 3 points [-]

Speaking as Nanodevil's Advocate again, one objection I could bring up goes as follows:

While it is true that applying incomplete knowledge to practical tasks (such as ending the world or whatnot) is difficult, in this specific case our knowledge is complete enough. We humans currently have enough scientific data to develop self-replicating nanotechnology within the next 20 years (which is what we will most likely end up doing). An AI would be able to do this much faster, since it is smarter than us; is not hampered by our cognitive and social biases; and can integrate information from multiple sources much better than we can.

Comment author: kalla724 17 May 2012 05:26:09AM 0 points [-]

See my answer to dlthomas.

Comment author: dlthomas 17 May 2012 04:28:21AM 3 points [-]

No, my criticism is "you haven't argued that it's sufficiently unlikely, you've simply stated that it is." You made a positive claim; I asked that you back it up.

With regard to the claim itself, it may very well be that AI-making-nanostuff isn't a big worry. For any inference, the stacking of error in integration that you refer to is certainly a limiting factor - I don't know how limiting. I also don't know how incomplete our data is, with regard to producing nanomagic stuff. We've already built some nanoscale machines, albeit very simple ones. To what degree is scaling it up reliant on experimentation that couldn't be done in simulation? I just don't know. I am not comfortable assigning it vanishingly small probability without explicit reasoning.

Comment author: kalla724 17 May 2012 05:25:24AM 4 points [-]

Scaling it up is absolutely dependent on currently nonexistent information. This is not my area, but a lot of my work revolves around control of kinesin and dynein (molecular motors that carry cargoes via microtubule tracks), and the problems are often similar in nature.

Essentially, we can make small pieces. Putting them together is an entirely different thing. But let's make this more general.

The process of discovery has, so far throughout history, followed a very irregular path:

1. There is a general idea.
2. Some progress is made.
3. Progress runs into an unpredicted and previously unknown obstacle, which is uncovered by experimentation.
4. Work is done to overcome this obstacle.
5. Go to step 2, for many cycles, until a goal is achieved - which may or may not be close to the original idea.

I am not the one who is making positive claims here. All I'm saying is that what has happened before is likely to happen again. A team of human researchers or an AGI can use currently available information to build something (anything, nanoscale or macroscale) to the place to which it has already been built. Pushing it beyond that point almost invariably runs into previously unforeseen problems. Being unforeseen, these problems were not part of models or simulations; they have to be accounted for independently.

A positive claim is that an AI will have a magical-like power to somehow avoid this - that it will be able to simulate even those steps that haven't been attempted yet so perfectly, that all possible problems will be overcome at the simulation step. I find that to be unlikely.

Comment author: Polymeron 20 May 2012 05:32:04PM 3 points [-]

It is very possible that the necessary information already exists, imperfect and incomplete though it may be, and that enough processing of it would yield the correct answer. We can't know otherwise, because we don't spend thousands of years analyzing our current level of information before beginning experimentation; but given the gap between AI-time and human-time, an AI could apply a good deal more cleverness and ingenuity to the problem than we have been able to so far.

That isn't to say that this is likely; but it doesn't seem far-fetched to me. If you gave an AI the nuclear physics information we had in 1950, would it be able to spit out schematics for an H-bomb, without further experimentation? Maybe. Who knows?

Comment author: Strange7 23 May 2012 12:47:33AM 0 points [-]

At the very least it would ask for some textbooks on electrical engineering and demolitions, first. The detonation process is remarkably tricky.

Comment author: Bugmaster 17 May 2012 05:57:47AM 0 points [-]

FWIW I think you are likely to be right. However, I will continue in my Nanodevil's Advocate role.

You say,

A positive claim is that an AI ... will be able to simulate even those steps that haven't been attempted yet so perfectly, that all possible problems will be overcome at the simulation step

I think this depends on what the AI wants to build, on how complete our existing knowledge is, and on how powerful the AI is. Is there any reason why the AI could not (given sufficient computational resources) run a detailed simulation of every atom that it cares about, and arrive at a perfect design that way? In practice, its simulation won't need to be as complex as that, because some of the work has already been performed by human scientists over the ages.

Comment author: kalla724 17 May 2012 05:55:22PM 4 points [-]

By all means, continue. It's an interesting topic to think about.

The problem with "atoms up" simulation is the amount of computational power it requires. Consider the jump in complexity from calculating a two-body problem to calculating a three-body problem.

Then take into account current protein folding algorithms. People have been trying to calculate the folding of single protein molecules (and fairly short ones at that) by taking into account the main physical forces at play. In order to do this in a reasonable amount of time, great shortcuts have to be taken: instead of integrating forces, changes are treated as stepwise; forces beneath certain thresholds are ignored; and so on. This means that a result will always have only a certain probability of being right.
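The kind of shortcut described above can be sketched in a toy example. This is not a real folding algorithm; it is a one-dimensional energy landscape (a made-up double-well potential) minimized with stepwise updates and a force cutoff, to show how such approximations can silently settle on the wrong answer:

```python
# Toy illustration (not a real folding algorithm): minimizing a simple
# 1-D "energy" with stepwise updates, ignoring forces below a threshold.
def minimize(x, steps=1000, dt=0.01, force_cutoff=1e-3):
    energy = lambda x: (x - 1.0) ** 2 * (x + 1.0) ** 2  # double-well potential
    force = lambda x: -(4 * x ** 3 - 4 * x)             # -dE/dx
    for _ in range(steps):
        f = force(x)
        if abs(f) < force_cutoff:   # shortcut: treat tiny forces as zero
            break                   # may freeze at a non-minimum
        x += dt * f                 # stepwise (Euler) update, not exact integration
    return x

# Starting exactly at x = 0 (an unstable maximum), the cutoff freezes the
# system there, while the true minima sit at x = +1 and x = -1.
print(round(minimize(0.0), 3))   # prints 0.0 (stuck at the wrong point)
print(round(minimize(0.5), 3))   # prints 1.0 (converges to a true minimum)
```

The same trade-off, at vastly larger scale, is what makes real folding results probabilistic rather than certain.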

A self-replicating nanomachine requires minimal motors, manipulators, and assemblers; while still tiny, it would be a molecular complex measured in megadaltons. To precisely simulate the creation of such a machine, an AI a trillion times faster than all the computers in the world combined would still require decades, if not centuries, of processing time. And that is, again, assuming we know all the forces involved perfectly, which we don't (how will microfluidic effects affect a particular nanomachine that enters the human bloodstream, for example?).
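A rough back-of-envelope calculation suggests the scale involved. Every number here is an illustrative assumption, not a measurement, and it counts only atom-updates for a single candidate design (ignoring solvent atoms and pairwise force evaluations, which multiply the cost further):

```python
# Back-of-envelope (all numbers are rough assumptions, not measurements):
# how many atom-timestep updates would an all-atom simulation of a
# megadalton-scale machine need to cover its assembly?

DALTONS_PER_ATOM = 12          # rough average atomic mass in organic matter
complex_mass_da = 1e6          # a ~1 megadalton molecular complex
atoms = complex_mass_da / DALTONS_PER_ATOM   # ~8e4 atoms

timestep_s = 1e-15             # typical MD timestep: ~1 femtosecond
assembly_time_s = 1e-3         # assume assembly takes ~1 millisecond

steps = assembly_time_s / timestep_s         # 1e12 timesteps
updates = atoms * steps                      # ~8e16 atom-updates per design
print(f"{atoms:.0f} atoms x {steps:.0e} steps = {updates:.1e} updates")
```

And that is one run of one candidate design; a search over designs, with solvent and long-range interactions included, sits many orders of magnitude above this.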

Comment author: Bugmaster 17 May 2012 10:49:01PM 0 points [-]

Yes, this is a good point. That said, while protein folding has not been entirely solved yet, it has been greatly accelerated by projects such as FoldIt, which leverage multiple human minds working in parallel on the problem all over the world. Sure, we can't get a perfect answer with such a distributed, human-powered approach, but a perfect answer isn't really required in practice; all we need is an answer that has a sufficiently high chance of being correct.

If we assume that there's nothing supernatural (or "emergent") about human minds [1], then it is likely that the problem is at least tractable. Given the vast computational power of existing computers, it is likely that the AI would have access to at least as many computational resources as the sum of all the brains who are working on FoldIt. Given Moore's Law, it is likely that the AI would soon surpass FoldIt, and will keep expanding its power exponentially, especially if the AI is able to recursively improve its own hardware (by using purely conventional means, at least initially).

[1] Which is an assumption that both my Nanodevil's Advocate persona and I share.

Comment author: JoshuaZ 17 May 2012 10:58:00PM *  3 points [-]

Protein folding models are generally at least as bad as NP-hard, and some models may be worse. This means that exponential improvement is unlikely. Simply put, one probably gets diminishing marginal returns: each further increase in computing power buys less progress than the improvement one has already made.
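The diminishing-returns point can be sketched numerically. Under a worst-case exponential cost model (a simplifying assumption; real folding heuristics often do much better on typical instances), multiplying available compute by a factor k adds only about log2(k) to the feasible problem size:

```python
# If solving an instance of size n costs ~2**n operations (worst-case
# exponential, as NP-hard models may be), then k times more compute
# only adds log2(k) to the largest feasible n.
import math

def feasible_size(ops_available):
    """Largest n with 2**n <= ops_available."""
    return int(math.log2(ops_available))

base = 2 ** 40
print(feasible_size(base))            # prints 40
print(feasible_size(base * 1000))     # prints 49: 1000x compute, n grows by ~10
print(feasible_size(base * 10**12))   # prints 79: a trillion-fold speedup, +~40
```

So even an enormous hardware advantage translates into only a modest increase in the size of the worst-case instances an exponential-time solver can handle.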

Comment author: Eliezer_Yudkowsky 22 March 2013 07:19:58AM 4 points [-]

Protein folding models must be inaccurate if they are NP-hard. Reality itself is not known to be able to solve NP-hard problems.

Comment author: Bugmaster 17 May 2012 11:21:32PM 0 points [-]

Hmm, ok, my Nanodevil's Advocate persona doesn't have a good answer to this one. Perhaps some SIAI folks would like to step in and pick up the slack?

Comment author: dlthomas 17 May 2012 09:28:53PM *  0 points [-]

I am not the one who is making positive claims here.

You did in the original post I responded to.

All I'm saying is that what has happened before is likely to happen again.

Strictly speaking, that is a positive claim. It is not one I disagree with, for a proper translation of "likely" into probability, but it is also not what you said.

"It can't deduce how to create nanorobots" is a concrete, specific, positive claim about the (in)abilities of an AI. Don't misinterpret this as me expecting certainty - of course certainty doesn't exist, and doubly so for this kind of thing. What I am saying, though, is that a qualified sentence such as "X will likely happen" asserts a much weaker belief than an unqualified sentence like "X will happen." "It likely can't deduce how to create nanorobots" is a statement I think I agree with, although one must be careful not use it as if it were stronger than it is.

A positive claim is that an AI will have a magical-like power to somehow avoid this.

That is not a claim I made. "X will happen" implies a high confidence - saying this when you expect it is, say, 55% likely seems strange. Saying this when you expect it to be something less than 10% likely (as I do in this case) seems outright wrong. I still buckle my seatbelt, though, even though I get in a wreck well less than 10% of the time.

This is not to say I made no claims. The claim I made, implicitly, was that you made a statement about the (in)capabilities of an AI that seemed overconfident and which lacked justification. You have given some justification since (and I've adjusted my estimate down, although I still don't discount it entirely), in amongst your argument with straw-dlthomas.

Comment author: kalla724 17 May 2012 09:42:21PM 1 point [-]

You are correct. I did not phrase my original posts carefully.

I hope that my further comments have made my position more clear?