kalla724 comments on Thoughts on the Singularity Institute (SI) - Less Wrong

256 Post author: HoldenKarnofsky 11 May 2012 04:31AM


Comment author: dlthomas 17 May 2012 04:28:21AM 3 points [-]

No, my criticism is "you haven't argued that it's sufficiently unlikely, you've simply stated that it is." You made a positive claim; I asked that you back it up.

With regard to the claim itself, it may very well be that AI-making-nanostuff isn't a big worry. For any inference, the stacking of error in integration that you refer to is certainly a limiting factor - I don't know how limiting. I also don't know how incomplete our data is, with regard to producing nanomagic stuff. We've already built some nanoscale machines, albeit very simple ones. To what degree is scaling it up reliant on experimentation that couldn't be done in simulation? I just don't know. I am not comfortable assigning it vanishingly small probability without explicit reasoning.

Comment author: kalla724 17 May 2012 05:25:24AM 4 points [-]

Scaling it up is absolutely dependent on currently nonexistent information. This is not my area, but a lot of my work revolves around control of kinesin and dynein (molecular motors that carry cargoes via microtubule tracks), and the problems are often similar in nature.

Essentially, we can make small pieces. Putting them together is an entirely different thing. But let's make this more general.

The process of discovery has, so far throughout history, followed a very irregular path. 1- there is a general idea 2- some progress is made 3- progress runs into an unpredicted and previously unknown obstacle, which is uncovered by experimentation. 4- work is done to overcome this obstacle. 5- goto 2, for many cycles, until a goal is achieved - which may or may not be close to the original idea.

I am not the one who is making positive claims here. All I'm saying is that what has happened before is likely to happen again. A team of human researchers or an AGI can use currently available information to build something (anything, nanoscale or macroscale) to the place to which it has already been built. Pushing it beyond that point almost invariably runs into previously unforeseen problems. Being unforeseen, these problems were not part of models or simulations; they have to be accounted for independently.

A positive claim is that an AI will have a magical-like power to somehow avoid this - that it will be able to simulate even those steps that haven't been attempted yet so perfectly, that all possible problems will be overcome at the simulation step. I find that to be unlikely.

Comment author: Polymeron 20 May 2012 05:32:04PM 3 points [-]

It is very possible that the necessary information already exists, imperfect and incomplete though it may be, and that enough processing of it would yield the correct answer. We can't know otherwise, because we don't spend thousands of years analyzing our current level of information before beginning experimentation; but given the shift between AI-time and human-time, an AI could agonize over that problem with a good deal more cleverness and ingenuity than we've been able to apply to it so far.

That isn't to say that this is likely; but it doesn't seem far-fetched to me. If you gave an AI the nuclear physics information we had in 1950, would it be able to spit out schematics for an H-bomb, without further experimentation? Maybe. Who knows?

Comment author: Strange7 23 May 2012 12:47:33AM 0 points [-]

At the very least it would ask for some textbooks on electrical engineering and demolitions, first. The detonation process is remarkably tricky.

Comment author: Bugmaster 17 May 2012 05:57:47AM 0 points [-]

FWIW I think you are likely to be right. However, I will continue in my Nanodevil's Advocate role.

You say,

A positive claim is that an AI ... will be able to simulate even those steps that haven't been attempted yet so perfectly, that all possible problems will be overcome at the simulation step

I think this depends on what the AI wants to build, on how complete our existing knowledge is, and on how powerful the AI is. Is there any reason why the AI could not (given sufficient computational resources) run a detailed simulation of every atom that it cares about, and arrive at a perfect design that way? In practice, its simulation won't need to be as complex as that, because some of the work has already been performed by human scientists over the ages.

Comment author: kalla724 17 May 2012 05:55:22PM 4 points [-]

By all means, continue. It's an interesting topic to think about.

The problem with "atoms up" simulation is the amount of computational power it requires. Consider how much the complexity jumps just in going from a two-body problem to a three-body problem.

Then consider current protein folding algorithms. People have been trying to calculate the folding of single protein molecules (fairly short ones at that) by taking into account the main physical forces at play. To do this in a reasonable amount of time, great shortcuts have to be taken: instead of integrating forces, changes are treated as stepwise; forces beneath certain thresholds are ignored; and so on. This means that a result will always have only a certain probability of being right.
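Those shortcuts can be sketched in miniature (everything below is a toy stand-in with made-up names and numbers, not a real molecular dynamics code):

```python
# Toy illustration of the shortcuts described above: a stepwise (Euler)
# update that ignores forces below a cutoff.  All names and constants
# here are hypothetical; real molecular dynamics codes are far more
# elaborate.

FORCE_CUTOFF = 0.05  # forces weaker than this are simply dropped
DT = 0.01            # fixed step size: changes treated as stepwise

def net_force(position):
    # Stand-in for an expensive sum over pairwise interactions.
    return -2.0 * position  # simple restoring force toward zero

def step(position, velocity):
    f = net_force(position)
    if abs(f) < FORCE_CUTOFF:
        f = 0.0  # threshold shortcut: small forces are ignored
    velocity += f * DT   # stepwise update instead of true integration
    position += velocity * DT
    return position, velocity

x, v = 1.0, 0.0
for _ in range(1000):
    x, v = step(x, v)
# Each shortcut injects a little error, so the final state has only
# some probability of matching the true trajectory.
```

Every cutoff and fixed step trades accuracy for speed; stack enough of them and the answer is only probabilistically right, which is exactly the point.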

A self-replicating nanomachine requires minimal motors, manipulators, and assemblers; while still tiny, it would be a molecular complex measured in megadaltons. To precisely simulate the creation of such a machine, an AI a trillion times faster than all the computers in the world combined would still require decades, if not centuries, of processing time. And that is, again, assuming we know all the forces involved perfectly, which we don't (how will microfluidic effects affect a particular nanomachine that enters the human bloodstream, for example?).

Comment author: Bugmaster 17 May 2012 10:49:01PM 0 points [-]

Yes, this is a good point. That said, while protein folding has not been entirely solved yet, it has been greatly accelerated by projects such as FoldIt, which leverage multiple human minds working in parallel on the problem all over the world. Sure, we can't get a perfect answer with such a distributed, human-powered approach, but a perfect answer isn't really required in practice; all we need is an answer that has a sufficiently high chance of being correct.

If we assume that there's nothing supernatural (or "emergent") about human minds [1], then it is likely that the problem is at least tractable. Given the vast computational power of existing computers, it is likely that the AI would have access to at least as many computational resources as the sum of all the brains who are working on FoldIt. Given Moore's Law, it is likely that the AI would soon surpass FoldIt, and will keep expanding its power exponentially, especially if the AI is able to recursively improve its own hardware (by using purely conventional means, at least initially).

[1] Which is an assumption that both my Nanodevil's Advocate persona and I share.

Comment author: JoshuaZ 17 May 2012 10:58:00PM *  3 points [-]

Protein folding models are generally at least as bad as NP-hard, and some models may be worse. This means that exponential improvement is unlikely. Simply put, one probably gets diminishing marginal returns: each additional unit of computation buys less improvement than the one before it.
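To make the diminishing returns concrete: if a brute-force search costs 2^n operations, multiplying the available compute by a factor k only adds log2(k) to the largest solvable instance size. A quick illustrative sketch:

```python
import math

def largest_solvable_n(ops_available):
    """Largest instance size n reachable by brute force costing 2**n ops."""
    return int(math.log2(ops_available))

# Doubling compute over and over adds only +1 to n each time:
sizes = [largest_solvable_n(k * 10**9) for k in (1, 2, 4, 8)]  # 29, 30, 31, 32

# A thousand-fold speedup buys only ~10 extra units of problem size:
assert largest_solvable_n(10**12) - largest_solvable_n(10**9) == 10
```

This is why "the AI is N times faster" does so little against exponential-cost problems: the exponent eats multiplicative speedups for breakfast.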

Comment author: Eliezer_Yudkowsky 22 March 2013 07:19:58AM 4 points [-]

Protein folding models must be inaccurate if they are NP-hard. Reality itself is not known to be able to solve NP-hard problems.

Comment author: Kawoomba 22 March 2013 08:49:48AM 2 points [-]

Reality itself is not known to be able to solve NP-hard problems.

Yet the proteins are folding. Is that not "reality" solving the problem?

Comment author: CCC 22 March 2013 09:31:43AM 3 points [-]

If reality cannot solve NP-hard problems as easily as proteins are being folded, and yet proteins are getting folded, then that implies that one of the following must be true:

  1. It turns out that reality can solve NP-hard problems after all
  2. Protein folding is not an NP-hard problem (which implies that it is not properly understood)
  3. Reality is not solving protein folding; it merely has a very good approximation that works on some proteins (including most examples found in nature) but not necessarily all of them.

Comment author: Kawoomba 22 March 2013 09:41:49AM *  0 points [-]

Yes, and I'm leaning towards 1.

I am not familiar enough to say whether e.g. the two-dimensional H-P model ("We show that the protein folding problem in the two-dimensional H-P model is NP-complete.") accurately models what we'd call "protein folding" in nature (just because the same name is used), but prima facie there is no reason to doubt the applicability, at least for the time being. (This precludes 2.)

Regarding 3, I don't think it would make sense to say "reality is using only a good approximation of protein folding, and by the way, we define protein folding as that which occurs in nature." That which happens in reality is precisely - and by definition not only an approximation of - that which we call "protein folding", isn't it?

What do you think?

Comment author: CCC 22 March 2013 08:16:07AM *  2 points [-]

Google has pointed me to an article describing an algorithm that can apparently predict folded protein shapes pretty quickly (a few minutes on a single laptop).

Original paper here. From a quick glance, it looks like it's only effective for certain types of protein chains.

Comment author: Eliezer_Yudkowsky 22 March 2013 08:17:34AM 1 point [-]

That too. Even NP-hard problems are often easy if you get the choice of which one to solve.

Comment author: Bugmaster 17 May 2012 11:21:32PM 0 points [-]

Hmm, ok, my Nanodevil's Advocate persona doesn't have a good answer to this one. Perhaps some SIAI folks would like to step in and pick up the slack?

Comment author: Polymeron 20 May 2012 05:45:29PM *  6 points [-]

I'm afraid not.

Actually, as someone with a background in biology, I can tell you that this is not a problem you want to approach atoms-up. It's been tried, and our computational capabilities fell woefully short of succeeding.

I should explain what "woefully short" means, so that the answer won't be "but can't the AI apply more computational power than us?". Yes, presumably it can. But the scales are immense. To explain it, I will need an analogy.

Not that long ago, I had the notion that chess could be fully solved; that is, that you could simply describe every legal position and every position possible to reach from it, without duplicates, so you could use that decision tree to play a perfect game. After all, I reasoned, it's been done with checkers; surely it's just a matter of getting our computational power just a little bit better, right?

First I found a clever way to minimize the amount of bits necessary to describe a board position. I think I hit 34 bytes per position or so, and I guess further optimization was possible. Then, I set out to calculate how many legal board positions there are.

I stopped trying to be accurate about it when it turned out that the answer was in the vicinity of 10^68 legal positions, give or take a couple orders of magnitude. That's about a trillionth of the TOTAL NUMBER OF ATOMS IN THE ENTIRE UNIVERSE. You would literally need more than our entire galaxy made into a huge database just to store the information, never mind accessing it and computing on it.
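A quick sanity check of the storage arithmetic (the 10^68 and 34-byte figures are the estimates above; the atom count for a galaxy is a common order-of-magnitude figure):

```python
positions = 10**68          # rough count of legal chess positions, as above
bytes_per_position = 34     # the encoding size described above
total_bytes = positions * bytes_per_position

atoms_in_galaxy = 10**68    # common order-of-magnitude estimate
# Even at one full byte stored per atom, a galaxy isn't enough:
assert total_bytes > atoms_in_galaxy

print(f"total storage: ~10^{len(str(total_bytes)) - 1} bytes")
# prints: total storage: ~10^69 bytes
```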

So, not anytime soon.

Now, the problem with protein folding is, it's even more complex than chess. At the atomic level, it's incredibly more complex than chess. Our luck is, you don't need to fully solve it; just like today's computers can beat human chess players without spanning the whole planet. But they do it with heuristics, approximations, sometimes machine learning (though that just gives them more heuristics and approximations). We may one day be able to fold proteins, but we will do so by making assumptions and approximations, generating useful rules of thumb, not by modeling each atom.

Comment author: Bugmaster 20 May 2012 08:57:24PM 4 points [-]

Yes, I understand what "exponential complexity" means :-)

It sounds, then, like you're on the side of kalla724 and myself (and against my Devil's Advocate persona): the AI would not be able to develop nanotechnology (or any other world-shattering technology) without performing physical experiments out in meatspace. It could do so in theory, but in practice, the computational requirements are too high.

But this puts severe constraints on the speed with which the AI's intelligence explosion could occur. Once it hits the limits of existing technology, it will have to take a long slog through empirical science, at human-grade speeds.

Comment author: Polymeron 23 May 2012 04:35:32PM 1 point [-]

Actually, I don't know that this means it has to perform physical experiments in order to develop nanotechnology. It is quite conceivable that all the necessary information is already out there, but we haven't been able to connect all the dots just yet.

At some point the AI hits a wall in the knowledge it can gain without physical experiments, but there's no good way to know how far ahead that wall is.

Comment author: Kawoomba 17 April 2013 08:58:29AM 2 points [-]

First I found a clever way to minimize the amount of bits necessary to describe a board position. I think I hit 34 bytes per position or so, and I guess further optimization was possible.

Indeed, using a very straightforward Huffman encoding (1 bit for an empty cell, 3 bits for pawns) you can get it down to 24 bytes for the board alone. Was an interesting puzzle.

Looking up "prior art" on the subject, you also need 2 bytes for things like "may castle", and other more obscure rules.

There's further optimizations you can do, but they are mostly for the average case, not the worst case.
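One plausible layout for such a code (a guess at the scheme, not necessarily the one described above): empty squares get a 1-bit code, pawns 3 bits, and the ten remaining piece types 6 bits each. For a full board with no promotions the arithmetic works out to:

```python
# A plausible prefix-free code over the 13 square states (hypothetical,
# not necessarily the exact scheme described above):
#   empty square              -> "0"            (1 bit; always >= 32 squares)
#   white pawn / black pawn   -> "100" / "101"  (3 bits each)
#   the 10 other piece types  -> "11" + 4 bits  (6 bits each)

EMPTY_BITS, PAWN_BITS, OTHER_BITS = 1, 3, 6

def full_board_bits():
    # Full starting-material board: 32 empty squares, 16 pawns, 16 others.
    return 32 * EMPTY_BITS + 16 * PAWN_BITS + 16 * OTHER_BITS

bits = full_board_bits()
print(bits, "bits =", bits / 8, "bytes")  # 176 bits = 22.0 bytes
```

Adding the ~2 bytes of game state mentioned above (castling rights, en passant, side to move) lands this hypothetical scheme in the same ballpark as the 24-byte figure; boards with many promoted pieces would cost a few bytes more under this particular code.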

Comment author: Polymeron 23 April 2013 06:50:01PM 2 points [-]

I didn't consider using 3 bits for pawns! Thanks for that :) I did account for such variables as may castle and whose turn it is.

Comment author: RichardKennaway 22 March 2013 08:06:19AM *  1 point [-]

It's been tried, and our computational capabilities fell woefully short of succeeding.

Is that because we don't have enough brute force, or because we don't know what calculation to apply it to?

I would be unsurprised to learn that calculating the folded state of globally minimal energy is NP-complete; but for that very reason I would be surprised to learn that nature solves that problem, rather than settling for a local minimum.
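The local-versus-global distinction is easy to illustrate: greedy downhill movement (roughly what physical relaxation does) stops at the nearest local minimum, not the global one. A toy one-dimensional "energy landscape" (entirely made up for illustration):

```python
def energy(x):
    # Toy double-well landscape: shallow local minimum near x = 2,
    # deeper global minimum near x = -2.
    return (x * x - 4) ** 2 + x

def greedy_descent(x, step=0.01):
    # Move downhill until neither neighboring point is lower --
    # i.e. until we are stuck in *some* minimum.
    while True:
        here, left, right = energy(x), energy(x - step), energy(x + step)
        if left < here and left <= right:
            x -= step
        elif right < here:
            x += step
        else:
            return x

# Starting on the right, greedy descent finds the shallow local
# minimum and never discovers the deeper one on the left:
local = greedy_descent(3.0)
glob = greedy_descent(-3.0)
assert energy(glob) < energy(local)
```

Nature "solving" protein folding in this sense requires no NP-hard magic, only relaxing into whichever minimum is nearby.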

I don't have a background in biology, but my impression from Wikipedia is that the tension between Anfinsen's dogma and Levinthal's paradox is yet unresolved.

Comment author: Polymeron 17 April 2013 07:16:04AM *  1 point [-]

The two are not in conflict.

A-la Levinthal's paradox, I can say that throwing a marble down a conical hollow at different angles and forces can produce literally trillions of possible trajectories; a-la Anfinsen's dogma, that should not stop me from predicting that it will end up at the bottom of the cone. But I'd need to know the shape of the cone (or, more specifically, the location of its lowest point) to determine exactly where that is - so being able to make the prediction once I know this is of no assistance in predicting the end position with a different, unknown cone.

Similarly, Eliezer is able to predict that a grandmaster chess player would be able to bring a board to a winning position against himself, even though he has no idea what moves that would entail or which of the many trillions of possible move sets the game would be comprised of.

Problems like this cannot be solved on brute force alone; you need to use attractors and heuristics to get where you want to get.

So yes, obviously nature stumbled into certain stable configurations which propelled it forward, rather than solve the problem and start designing away. But even if we can never have enough computing power to model each and every atom in each and every configuration, we might still get a good enough understanding of the general laws for designing proteins almost from scratch.

Comment author: Strange7 22 March 2013 06:52:38AM 0 points [-]

I would think it would be possible to cut the space of possible chess positions down quite a bit by only retaining those which can result from moves the AI would make, and legal moves an opponent could make in response. That is, when it becomes clear that a position is unwinnable, backtrack, and don't keep full notes on why it's unwinnable.

Comment author: Polymeron 17 April 2013 07:26:44AM 1 point [-]

This is more or less what computers do today to win chess matches, but the space of possibilities explodes too fast; even the strongest computers can't really search more than, I think, 13 or 14 moves ahead, even given a long time to think.

Merely storing all the positions that are unwinnable - regardless of why they are so - would require more matter than we have in the solar system. Not to mention the efficiency of running a DB search on that...
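What engines do instead of storing unwinnable positions is rediscover and discard them during search: alpha-beta pruning backtracks as soon as a line is provably no better than one already found, keeping only the running bounds. A minimal sketch over an abstract game tree (hypothetical interface; real engines add move ordering, transposition tables, and much more):

```python
def alphabeta(node, depth, alpha, beta, maximizing, children, evaluate):
    """Minimal alpha-beta search over an abstract game tree.

    `children(node)` and `evaluate(node)` are placeholders standing in
    for a real move generator and position evaluator.
    """
    kids = children(node)
    if depth == 0 or not kids:
        return evaluate(node)
    if maximizing:
        value = float("-inf")
        for child in kids:
            value = max(value, alphabeta(child, depth - 1, alpha, beta,
                                         False, children, evaluate))
            alpha = max(alpha, value)
            if alpha >= beta:
                break  # prune: this line is already refuted; backtrack,
                       # keeping no record of *why* it was bad
        return value
    else:
        value = float("inf")
        for child in kids:
            value = min(value, alphabeta(child, depth - 1, alpha, beta,
                                         True, children, evaluate))
            beta = min(beta, value)
            if alpha >= beta:
                break  # symmetric prune for the minimizing player
        return value

# Tiny worked example: a depth-2 tree given as nested lists of leaf scores.
tree = [[3, 5], [2, 9], [0, 1]]
children = lambda n: n if isinstance(n, list) else []
evaluate = lambda n: n
best = alphabeta(tree, 2, float("-inf"), float("inf"), True, children, evaluate)
# best == 3: the 9 and the 1 are never even examined
```

The pruned branches are recomputed from scratch whenever they are reached again, which trades the impossible storage cost for (still exponential, but feasible-at-shallow-depth) search time.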

Comment author: dlthomas 17 May 2012 09:28:53PM *  0 points [-]

I am not the one who is making positive claims here.

You did in the original post I responded to.

All I'm saying is that what has happened before is likely to happen again.

Strictly speaking, that is a positive claim. It is not one I disagree with, for a proper translation of "likely" into probability, but it is also not what you said.

"It can't deduce how to create nanorobots" is a concrete, specific, positive claim about the (in)abilities of an AI. Don't misinterpret this as me expecting certainty - of course certainty doesn't exist, and doubly so for this kind of thing. What I am saying, though, is that a qualified sentence such as "X will likely happen" asserts a much weaker belief than an unqualified sentence like "X will happen." "It likely can't deduce how to create nanorobots" is a statement I think I agree with, although one must be careful not use it as if it were stronger than it is.

A positive claim is that an AI will have a magical-like power to somehow avoid this.

That is not a claim I made. "X will happen" implies a high confidence - saying this when you expect it is, say, 55% likely seems strange. Saying this when you expect it to be something less than 10% likely (as I do in this case) seems outright wrong. I still buckle my seatbelt, though, even though I get in a wreck well less than 10% of the time.

This is not to say I made no claims. The claim I made, implicitly, was that you made a statement about the (in)capabilities of an AI that seemed overconfident and which lacked justification. You have given some justification since (and I've adjusted my estimate down, although I still don't discount it entirely), in amongst your argument with straw-dlthomas.

Comment author: kalla724 17 May 2012 09:42:21PM 1 point [-]

You are correct. I did not phrase my original posts carefully.

I hope that my further comments have made my position more clear?