Eliezer_Yudkowsky comments on New report: Intelligence Explosion Microeconomics - Less Wrong

45 Post author: Eliezer_Yudkowsky 29 April 2013 11:14PM

Comment author: Eliezer_Yudkowsky 30 April 2013 03:10:08PM 4 points [-]

I believe I've seen that discussed before and the answer is just that in real life, proteins don't fold into the lowest-energy conformation. It's like saying that the global minimum energy for soap bubbles is NP-complete. Finding new useful proteins tends to occur via point mutations so that can't be NP-complete either.

Comment author: Vaniver 30 April 2013 03:35:04PM *  4 points [-]

So, I can think of several different things that could all be the 'protein folding problem.'

  1. Figure out the trajectory an unfolded protein takes towards a folded protein, with a known starting state. (P)
  2. Given a known folded protein, find local minima that unfolded proteins starting with random start states might get stuck in. (NP, probably)
  3. Given a desired chemical reaction, find a folded protein that will catalyze the reaction. (Not sure, probably NP.)
  4. Given a desired chemical reaction, find a folded protein that will catalyze the reaction and that is the local minimum reached by most arbitrary unfolded positions for that protein. (Optimal is definitely NP, but I suspect feasible is too.)
  5. Others. (Here, the closest I get to molecular nanotech is 'catalyze reactions,' but I imagine the space for 'build a protein that looks like X' might actually be smaller.)

It looks to me like the problems here that have significant returns are NP.

Finding new useful proteins tends to occur via point mutations so that can't be NP-complete either.

It's not at all clear to me what you mean by this. I mean, take the traveling salesman problem. It's NP-hard*, but you can get decent solutions by using genetic algorithms to breed solutions given feasible initial solutions. Most improvements to the route will be introduced by mutations, and yet the problem is still NP-hard.

That is, it's not clear to me that you're differentiating between the problem of finding an optimal solution being NP hard, it taking NP time to find a 'decent' solution, and an algorithm requiring NP time to finish running.

(The second is rarely true for things like the traveling salesman problem, but is often true for practical problems where you throw in tons of constraints.)

* A variant is NP-Complete, which is what I originally wrote.

Comment author: Eliezer_Yudkowsky 30 April 2013 04:08:04PM 0 points [-]

Nothing that has physically happened on Earth in real life, such as proteins folding inside a cell, or the evolution of new enzymes, or hominid brains solving problems, or whatever, can have been NP-hard. Period. It could be a physical event that you choose to regard as a P-approximation to a theoretical problem whose optimal solution would be NP-hard, but so what, that wouldn't have anything to do with what physically happened. It would take unknown, exotic physics to have anything NP-hard physically happen. Anything that could not plausibly have involved black holes rotating at half the speed of light to produce closed timelike curves, or whatever, cannot have plausibly involved NP-hard problems. NP-hard = "did not physically happen". "Physically happened" = not NP-hard.

Comment author: Tyrrell_McAllister 30 April 2013 06:42:22PM *  17 points [-]

Nothing that has physically happened on Earth in real life, such as proteins folding inside a cell, or the evolution of new enzymes, or hominid brains solving problems, or whatever, can have been NP-hard. Period.

I've seen you say this a couple of times, and your interlocutors seem to understand you, even when they dispute your conclusion. But my brain keeps returning an error when I try to parse your claim.

Read literally, "NP-hard" is not a predicate that can be meaningfully applied to individual events. So, in that sense, trivially, nothing that happens (physically or otherwise, if "physically" is doing any work here) can be NP-hard. But you are evidently not making such a trivial claim.

So, what would it look like if the physical universe "solved an NP-hard problem"? Presumably it wouldn't just mean that some actual salesman found a way to use existing airline routes to visit a bunch of pre-specified cities without revisiting any one of them. Presumably it wouldn't just mean that someone built a computer that implements a brute-force exhaustive search for a solution to the traveling salesman problem given an arbitrary graph (a search that the computer will never finish before the heat death of the universe if the example is large). But I can't think of any other interpretation to give to your claim.

Comment author: CarlShulman 30 April 2013 10:46:22PM *  5 points [-]

ETA: this is a side point.

Here's Scott Aaronson describing people (university professors in computer science and cognitive science at RPI) who claim that the physical universe efficiently solves NP-hard problems:

It was only a matter of time before someone put the pieces together. Last summer Bringsjord and Taylor [24] posted a paper entitled “P=NP” to the arXiv. This paper argues that, since (1) finding a Steiner tree is NP-hard, (2) soap bubbles find a Steiner tree in polynomial time, (3) soap bubbles are classical objects, and (4) classical physics can be simulated by a Turing machine with polynomial slowdown, it follows that P = NP.

My immediate reaction was that the paper was a parody. However, a visit to Bringsjord’s home page2 suggested that it was not. Impelled, perhaps, by the same sort of curiosity that causes people to watch reality TV shows, I checked the discussion of this paper on the comp.theory newsgroup to see if anyone recognized the obvious error. And indeed, several posters pointed out that, although soap bubbles might reach a minimum-energy configuration with a small number of pegs, there is no “magical” reason why this should be true in general. By analogy, a rock in a mountain crevice could reach a lower-energy configuration by rolling up first and then down, but it is not observed to do so.

In other news, Bringsjord also claims to show by a modal argument, similar to the theistic modal argument (which he also endorses), that human brains are capable of hypercomputation: "it's possible humans are capable of hypercomputation, so they are capable of hypercomputation." For this reason he argues that superhumanly intelligent Turing machines/Von Neumann computers are impossible and belief in their possibility is fideistic.

Comment author: EHeller 30 April 2013 10:55:00PM *  6 points [-]

Here's Scott Aaronson describing people (university professors at RPI) who claim that the physical universe efficiently solves NP-hard problems.

This doesn't refute what you are responding to. Saying the universe can't solve a general NP problem in polynomial time is not the same thing as saying the universe cannot possibly solve specific instances of generally NP-complete problems, which is Tyrrell_McAllister's point, as far as I can parse. In general, the traveling salesman problem is NP-complete, however there are lots of cases where heuristics get the job done in polynomial time, even if those heuristics would run away if they were given the wrong case.

To use Aaronson's soap bubbles, sometimes the soap bubble finds a Steiner tree, sometimes it doesn't. When it DOES, it has solved one instance of an NP-complete problem fairly quickly.
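As an aside (my sketch, not from the thread): the easy three-peg case of the soap-bubble problem is just the Fermat point of the three terminals, and a simple descent scheme (Weiszfeld's iteration, which I am assuming here as the method) finds it without trouble. The hardness only appears when there are many pegs.

```python
import math

def fermat_point(points, iterations=200):
    """Approximate the point minimizing total distance to three terminals
    (the junction of the 3-terminal Steiner tree) by Weiszfeld iteration.

    Like the soap film, this descent solves an easy instance; it says
    nothing about hard many-peg instances.
    """
    # start at the centroid
    x = sum(p[0] for p in points) / len(points)
    y = sum(p[1] for p in points) / len(points)
    for _ in range(iterations):
        wsum = num_x = num_y = 0.0
        for px, py in points:
            d = math.hypot(x - px, y - py)
            if d < 1e-12:
                return (px, py)  # landed exactly on a terminal
            w = 1.0 / d  # inverse-distance weight
            wsum += w
            num_x += w * px
            num_y += w * py
        x, y = num_x / wsum, num_y / wsum
    return (x, y)
```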

Comment author: Vaniver 30 April 2013 11:14:22PM 3 points [-]

But my brain keeps returning an error when I try to parse your claim.

I agree with your parse error. It looks like EY has moved away from the claim made in the grandparent, though.

Comment author: Luke_A_Somers 30 April 2013 05:45:58PM 11 points [-]

That seems a little strongly put - NP-hard scales very poorly, so no real process can take N up to large numbers. I can solve the traveling salesman problem in my head with only modest effort if there are only 4 stops. And it's trivial if there are 2 or 3 stops.

Comment author: Eliezer_Yudkowsky 30 April 2013 06:12:51PM 8 points [-]

Conceded.

Comment author: CronoDAS 15 July 2013 03:59:21AM 1 point [-]

Um... doesn't it take exponential time in order to simulate quantum mechanics on a classical computer?

Comment author: pengvado 17 July 2013 04:01:09AM *  0 points [-]

Yes (At least that's the general consensus among complexity theorists, though it hasn't been proved.) This doesn't contradict anything Eliezer said in the grandparent. The following are all consensus-but-not-proved:

P⊂BQP⊂EXP
P⊂NP⊂EXP
BQP≠NP (Neither is confidently predicted to be a subset of the other, though BQP⊆NP is at least plausible, while NP⊆BQP is not.)
If you don't measure any distinctions finer than P vs EXP, then you're using a ridiculously coarse scale. There are lots of complexity classes strictly between P and EXP, defined by limiting resources other than time-on-a-classical-computer. Some of them are tractable under our physics and some aren't.

Comment author: bogdanb 16 July 2013 02:57:13AM 0 points [-]

Is that just that we don’t know any better algorithms, or is there a proof that exptime is needed?

Comment author: CronoDAS 16 July 2013 03:33:36AM 1 point [-]

I really don't know; some Wikipedia browsing suggests that there's a proof, but I'd rather have a statement from someone who actually knows.

Comment author: asr 05 May 2013 05:47:17AM -1 points [-]

Nothing that has physically happened on Earth in real life, such as proteins folding inside a cell, or the evolution of new enzymes, or hominid brains solving problems, or whatever, can have been NP-hard.

I don't understand why you think new physics is required to solve hard instances of NP-complete problems. We routinely solve instances of NP-hard problems in practice on computers -- just not large instances. New physics might be required to solve those problems quickly, but if you are willing to wait exponentially long, you can solve the problems just fine.

If you want to argue that actual practical biological folding of proteins isn't NP-hard, the argument can't start from "it happens quickly" -- you need to say something about how the time to fold scales with the length of the amino acid strings, and in particular in the limit for very large strings.

Similarly, I don't see why biological optimization couldn't have solved hard cases of NP-compete problems. If you wait long enough for evolution to do its thing, the result could be equivalent to an exhaustive search. No new physics required.
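A hedged illustration of "willing to wait exponentially long" (my sketch, not part of the comment): exhaustive search over subsets decides subset-sum, an NP-complete problem, exactly; the cost is a running time that roughly doubles with each extra element.

```python
from itertools import combinations

def subset_sum(nums, target):
    """Decide subset-sum by checking all 2^n subsets.

    Always correct, and fast on small instances; on large ones it still
    terminates with the right answer, just after exponential time.
    """
    for r in range(len(nums) + 1):
        for combo in combinations(nums, r):
            if sum(combo) == target:
                return combo  # a witness subset
    return None  # no subset sums to target
```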

Comment author: wedrifid 05 May 2013 06:23:36AM *  2 points [-]

I don't understand why you think new physics is required to solve hard instances of NP-complete problems. We routinely solve the hard instances of NP-hard problems in practice on computers -- just not on large instances of the problem.

Eliezer already conceded that trivial instances of such problems can be solved. (We can assume that before he made that concession he thought it went without saying.)

New physics might be required to solve those problems quickly, but if you are willing to wait exponentially long, you can solve the problems just fine.

The physics and engineering required to last sufficiently long may be challenging. I hear it gets harder to power computers once the stars have long since burned out. As far as I know the physics isn't settled yet.

(In other words, I am suggesting that "just fine" is something of an overstatement when it comes to solving seriously difficult problems by brute force.)

Comment author: Kawoomba 05 May 2013 07:03:36AM 3 points [-]

The physics and engineering required to last sufficiently long may be challenging. I hear it gets harder to power computers once the stars have long since burned out. As far as I know the physics isn't settled yet.

That counterargument is a bit too general, since it applies not only to NP problems, but even to P problems (such as deciding whether a number is the GCD of two other numbers), or even to any arbitrary algorithm modified by a few lines of code such that its result is unaffected, merely delayed until after the stars have burned out, or whatever limit we postulate.

For NP problems and P problems alike, given how we understand the universe, there is only a finite number of inputs which are tractable, and an infinite number of inputs which aren't. The finite number differs greatly between the two, but as a fraction of all "possible", or rather well-defined (let's avoid that ambiguity cliff), inputs it would be the same.

Cue "We all live in a Finite State Machine, Finite State Machine, Finite State Machine ..."

Comment author: asr 05 May 2013 02:37:40PM *  1 point [-]

Eliezer already conceded that trivial instances of such problems can be solved. (We can assume that before he made that concession he thought it went without saying.)

The point can't be confined to "trivial instances". For any NP-complete problem on some reasonable computing platform that can solve small instances quickly, there will be instance sizes that are non-trivial (take appreciable time to solve) but do not require eons to solve. There is absolutely no mathematical reason for assuming that for "natural" NP-complete problems, interesting-sized instances can't be solved on a timescale of months/years/centuries by natural processes.

The dichotomy between "trivial" and "impossible to solve in a useful time-frame" is a false one.

Comment author: shminux 01 May 2013 09:35:27PM -1 points [-]

Anything that could not plausibly have involved black holes rotating at half the speed of light to produce closed timelike curves, or whatever

Presumably quantum suicide is a part of "whatever".

Comment author: JoshuaZ 30 April 2013 03:27:44PM 1 point [-]

Finding new useful proteins tends to occur via point mutations so that can't be NP-complete either.

This does not follow. It may be that finding new useful proteins just takes a very long time and is very inefficient. The rest of your comment seems correct though.

Comment author: Eliezer_Yudkowsky 30 April 2013 04:02:40PM 2 points [-]

Then evolution wouldn't happen in real life.

Actually, even that understates the argument. If you can take a 20,000 base sequence and get something useful by point-perturbing at least one of the 20,000 bases in the sequence, then whatever just happened was 50 lightyears from being NP-hard - you only had to search through 19 variants on each of 20,000 cases.
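The arithmetic above can be sketched directly. Using the 20-letter amino-acid alphabet (my framing of the example, to match the "19 variants" count), the single-point-mutant neighborhood of a length-n sequence has only 19n members, a linear search space, nothing like the 20^n space of all sequences.

```python
AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"  # the 20 standard residues

def single_mutants(seq):
    """Yield every single point mutant of a sequence.

    A length-n sequence over a 20-letter alphabet has exactly 19 * n
    single mutants, so checking all of them is a linear search.
    """
    for i, old in enumerate(seq):
        for new in AMINO_ACIDS:
            if new != old:
                yield seq[:i] + new + seq[i + 1:]
```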

Comment author: JoshuaZ 30 April 2013 07:00:06PM 0 points [-]

Huh? How does this argument work? That doesn't mean that evolution can't happen in real life, it would be a reason to think that evolution is very slow (which it is!) or that evolution is missing a lot of interesting proteins (which seems plausible).

If you can take a 20,000 base sequence and get something useful by point-perturbing at least one of the 20,000 bases in the sequence, then whatever just happened was 50 lightyears from being NP-hard - you only had to search through 19 variants on each of 20,000 cases.

I'm not sure I follow your logic. Are you arguing that because log 20,000 < 19? Yes, you can check every possible position in a base sequence this way, but there are still a lot more proteins than those 19. One doesn't get something special from just changing a specific base. Moreover, even if something interesting does happen when changing a specific base, it might not happen if one changes some other one.

Comment author: CarlShulman 30 April 2013 11:12:33PM *  3 points [-]

missing a lot of interesting proteins (which seems plausible).

Definitely, since evolution keeps introducing new interesting proteins.

That doesn't mean that evolution can't happen in real life, it would be a reason to think that evolution is very slow (which it is!)

But it's not slow on a scale of e^n for even modestly large n. If you can produce millions of proteins with hundreds to thousands of amino acids in a few billion years, then approximate search for useful proteins is not inefficient like finding the lowest-energy conformation is (maybe polynomial approximation, or the base is much better, or functional chunking lets you effectively reduce n greatly...).

Comment author: SilasBarta 01 May 2013 12:23:25AM *  0 points [-]

[...evolution is] missing a lot of interesting proteins (which seems plausible).

Definitely, since evolution keeps introducing new interesting proteins.

Wait, the fact that evolution keeps introducing interesting new proteins is evidence that evolution is missing a lot of interesting proteins? How does that follow?

Switch the scenario around: if evolution never produced interesting new proteins (anymore, after time T), would that be evidence that there are no other interesting proteins than what evolution produced?

Comment author: CarlShulman 01 May 2013 12:35:41AM *  5 points [-]

Switch the scenario around: if evolution never produced interesting new proteins (anymore, after time T), would that be evidence that there are no other interesting proteins than what evolution produced?

Yes.

That would be evidence that the supply of interesting proteins had been exhausted, just as computer performance at tic-tac-toe and checkers has stopped improving. I don't see where you're coming from here.

Comment author: SilasBarta 01 May 2013 12:42:38AM *  0 points [-]

Because evolution can't get stuck in the domain of attraction of a local optimum? It always finds any good points?

Edit to add: Intelligent humans can quickly refactor their programs out of poor regions of designspace. Evolution must grope within its neighborhood.

2nd Edit: How about this argument:

"Evolution has stopped producing interesting new ways of flying; therefore, there are probably no other interesting ways of accomplishing flight, since after all, if there were a good way of doing it, evolution would find it."

Comment author: CellBioGuy 09 May 2013 03:37:15PM *  1 point [-]

Point mutations aren't the only way for new things to be produced. You can also recombine large chunks and domains together from multiple previous genes.

Hell, there are even examples of genes evolving via a frame-shift that knocks the 3-base frame of a gene off by one, producing a gobbledygook protein that selection then acts upon...

Comment author: JoshuaZ 01 May 2013 12:47:14AM 0 points [-]

Carl wasn't commenting on whether it would be very strong evidence but whether it would be evidence.

Comment author: SilasBarta 01 May 2013 12:52:25AM 0 points [-]

Definitely, since evolution keeps introducing new interesting proteins.

Comment author: Kawoomba 30 April 2013 03:33:02PM *  0 points [-]

It occurs to me that we can't really say, since we only have access to the time of the program, which may or may not reflect the actual computational resources expended.

Imagine you were living in a game, and trying to judge the game's hardware requirements. If you did that by looking at a clock in the game, you'd need to assume that the clock is synchronized to the actual system time. If you had a counter you increased, you wouldn't be able to tell from inside the program how many external steps pass each time you reach that counter++ instruction.

The problem being that we don't have access to anything external: we aren't watching the Turing Machine compute, we are inside the Turing Machine, "watching" the effects of other parts of the program, such as a folding protein (observing whenever it's our turn to be simulated). We don't, however, see the Turing Machine compute; we only see the output. The raw computing power and requirements "behind the scenes", even if such a behind-the-scenes is only a non-existent abstraction, are impossible to judge with certainty, similar to a map-territory divide. Since there is no access in principle, and we cannot observe anything but the "output", we have no way of verifying any assumptions we may make about a correspondence between "game timer" and "system timer", or of devising any experiments to test them.

Even the recently linked "test the computational limits" experiment doesn't break the barrier, since for all we know the program may stall, and the next "frame" it outputs may still seem consistent, with no stalling, when viewed from inside the program, which is where we are. We wouldn't subjectively notice the stall. If such an experiment did find something, it would be akin to a bug, not to a measurement of computational resources expended.

Back to diapers.

Comment author: RichardKennaway 30 April 2013 07:26:06PM 1 point [-]

It occurs to me that we can't really say, since we only have access to the time of the program, which may or may not reflect the actual computational resources expended.

That's a valid point, but it does presuppose exotic new physics to make that substrate, in which "our" time passes arbitrarily slowly compared to the really real time, so that it can solve NP-hard problems between our clock ticks. We would, in effect, be in a simulation. Evidence of NP-hard problems actually being solved in P could be taken as evidence that we are in one.

Comment author: ESRogs 30 April 2013 06:44:11PM 1 point [-]

we only have access to the time of the program [...] we are inside the Turing Machine "watching" the effects of other parts of the program, such as a folding protein

If we assume that protein folding occurs according to the laws of quantum mechanics, then it shouldn't tell us anything about the computational complexity of our universe besides what quantum mechanics tells us, right?

Comment author: Kawoomba 30 April 2013 07:02:42PM *  1 point [-]

Well, yeah, that's what I'm leaning towards. The laws of physics themselves need not govern the machine (Turing or otherwise); they are effects we observe, us being other effects. The laws of physics and the observers both are part of the output.

Like playing an online roleplaying game and inferring what the program can actually do or what resources it takes, when all you can access is "how high can my character jump" and other in-game rules. The rules regarding the jumping, and any limits the program chose to confer to the jumping behavior are not indicative of the resource requirements and efficiency of the underlying system. Is calculating the jumping easy or hard for the computer? How would you know as a character? The output, again, is a bad judge, take this example:

Imagine using an old Intel 386 system which you rigged into running the latest FPS. It may only output one frame every few hours, but as a sentient character inside that game you wouldn't notice. Things would be "smooth" for you because the rules would be unchanged from your point of view.

We can only say that given our knowledge of the laws of physics, the TM running the universe doesn't output anything which seems like an efficient NP-problem solver, whether the program contains one, or the correct hardware abstraction running it uses one, is anyone's guess. (The "contains one" probably isn't anyone's guess because of Occam's Razor considerations.)

If this is all confused (it may well be, was mostly a stray thought), I'd appreciate a refutation.

Comment author: ESRogs 01 May 2013 08:38:50AM 3 points [-]

If I understand correctly you're saying that what is efficiently computable within a universe is not necessarily the same as what is efficiently computable on a computer simulating that universe. That is a good point.

Comment author: Kawoomba 01 May 2013 08:51:13AM 0 points [-]

Exactly. Thanks for succinctly expressing my point better than I could.

The question is whether assuming such a correspondence (implied by the "not necessarily") is even a good default assumption.

Why would the rules inherent in what we see inside the universe be any more indicative of the rules of the computer simulating that universe than the rules inside a computer game are reflective of the instruction set of the CPU running it (they are not)?

I am aware that the reference class "computer running Super Mario Bros. / Kirby's Dream Land" implies that the rules are different, but on what basis would we choose any reference class which implies a correspondence?

Also, I'm not advocating simulationism with this per se, the "outer" computer can be strictly an abstraction.