
gwern comments on GAZP vs. GLUT - Less Wrong

Post author: Eliezer_Yudkowsky | 07 April 2008 01:51AM | 33 points



Comment author: gwern 06 February 2012 11:20:14PM 0 points [-]

Ahemhem. Haskell is as fine a Turing-complete language as any; we just like to have our side effects explicit!

I don't think we even need Turing-completeness, really. Take a total language and give it a loop-timeout of, say, 3^^^^3. It can compute anything we care about, and is still guaranteed to terminate in bounded time.
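
A minimal Haskell sketch of the loop-timeout idea (boundedLoop and collatzSteps are invented names, and this is ordinary Haskell rather than a genuinely total language; the point is only that an explicit fuel bound makes termination trivial to guarantee):

    -- Force every loop to carry explicit "fuel"; with the fuel set to
    -- something like 3^^^^3 instead of 10^6, it can still compute anything
    -- we care about in practice, yet it provably stops.
    boundedLoop :: Integer -> (a -> Either a b) -> a -> Maybe b
    boundedLoop 0    _    _ = Nothing                  -- out of fuel: give up
    boundedLoop fuel step x =
      case step x of
        Right done -> Just done                        -- finished normally
        Left next  -> boundedLoop (fuel - 1) step next

    -- Example: count Collatz steps, guaranteed to stop within the fuel bound.
    collatzSteps :: Integer -> Maybe Integer
    collatzSteps n0 = boundedLoop (10 ^ 6) step (0, n0)
      where
        step (steps, 1) = Right steps
        step (steps, n)
          | even n    = Left (steps + 1, n `div` 2)
          | otherwise = Left (steps + 1, 3 * n + 1)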

(I'm reminded of Gödel's letter discussing P=NP: should such an algorithm be found, mathematicians could be replaced by a machine that searches all proofs with less than a few million symbols; anything that couldn't be proved with that many symbols would be of little to no interest to us.)
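
A hedged sketch of the machine in question, in its naive brute-force form (checkProof and the alphabet are stand-ins for a real proof verifier; nothing here comes from the letter itself):

    -- Enumerate every candidate proof of at most maxSymbols symbols and return
    -- the first one the checker accepts. As written this is exponentially slow;
    -- bounded proof search is an NP problem, which is why a P = NP algorithm
    -- with small constants would let something like it run in practical time.
    findProof :: Int -> [Char] -> (String -> Bool) -> Maybe String
    findProof maxSymbols alphabet checkProof =
      case filter checkProof candidates of
        (p : _) -> Just p
        []      -> Nothing
      where
        candidates = concatMap stringsOfLength [0 .. maxSymbols]
        stringsOfLength 0 = [""]
        stringsOfLength k = [ c : rest | c <- alphabet
                                       , rest <- stringsOfLength (k - 1) ]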

Comment author: wedrifid 07 February 2012 09:17:00AM *  6 points [-]

(I'm reminded of Gödel's letter discussing P=NP: should such an algorithm be found, mathematicians could be replaced by a machine that searches all proofs with less than a few million symbols; anything that couldn't be proved with that many symbols would be of little to no interest to us.)

Or, at least, they could be replaced by people who can understand and seek out the proofs that are relevant to us from the arbitrarily large number of useless ones. So mathematicians, basically.

Comment author: Eliezer_Yudkowsky 07 February 2012 10:57:54AM 5 points [-]

Unless for any NP problem there exists an algorithm which solves it in just O(N^300) time.

NP=P is not the same as NP=Small P
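
Back-of-the-envelope, to see why the small constants matter (the hardware budget below is an invented, generous assumption):

    2^{300} \approx 2 \times 10^{90} \ \text{steps}, \qquad
    \text{budget} \approx 10^{18}\ \mathrm{ops/s} \times 3 \times 10^{17}\ \mathrm{s}
    \approx 3 \times 10^{35} \ \text{operations}

So an algorithm that literally takes N^300 elementary steps is already hopeless at N = 2, never mind realistic input sizes.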

Comment author: [deleted] 07 February 2012 11:52:36AM 7 points [-]

Likewise, EXPTIME doesn't mean Large EXPTIME -- an algorithm running in exp(1e-15*N) seconds is asymptotically slower than one running in N^300 seconds, but it is faster for pretty much all practical purposes.

I once read a Usenet post or Web page along the lines of “There are two kinds of numbers: those smaller than Graham's number and those larger than Graham's number. Computational complexity theory traditionally only concerns itself with the latter, but only the former are relevant to real-world problems.”

Comment deleted 07 February 2012 03:30:44PM [-]
Comment author: kilobug 07 February 2012 03:39:37PM *  1 point [-]

No, slower. N^300 is polynomial, exp(1e-15*N) is exponential.

Maybe it's easier to understand them if you take logs (natural logs, so that log(exp(x)) = x): log(N^300) = 300 * log(N) and log(exp(1e-15 * N)) = 1e-15 * N.

Whatever positive multiplicative constants a and b you use, a * N will at some point become bigger (so, slower) than b * log(N). Here that happens roughly once N gets somewhat above 10^15; a simple guess puts the crossover around 10^20.
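
A throwaway Haskell check of that crossover, comparing the logs directly rather than the astronomically large values themselves (names invented for illustration):

    -- Compare 300 * log N (from N^300) against 1e-15 * N (from exp(1e-15*N)),
    -- using natural logs, and report the first sampled power of ten where the
    -- exponential-time bound becomes the slower of the two.
    crossover :: Maybe Double
    crossover =
      let polyLog n = 300 * log n          -- log of N^300
          expLog  n = 1e-15 * n            -- log of exp(1e-15 * N)
          samples   = [10 ** k | k <- [15, 15.1 .. 25]]
      in case filter (\n -> expLog n > polyLog n) samples of
           (n : _) -> Just n
           []      -> Nothing

Sampling powers of ten like this comes out around 1.6 * 10^19, which is indeed somewhat above 10^15 and only a little below the 10^20 guess.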

Comment author: [deleted] 07 February 2012 01:04:46PM 2 points [-]

Complexity theorists don't know anything, but they at least know that it's impossible to solve all NP problems in O(N^300) time. In fact they know it's impossible to solve all P problems in O(N^300) time.

http://en.wikipedia.org/wiki/Time_hierarchy_theorem
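
For reference, one standard form of the theorem (as in the linked article), and the instance relevant here:

    \mathrm{DTIME}\!\left(f(n)\right) \subsetneq \mathrm{DTIME}\!\left(f(n)\,\log^{2} f(n)\right)
    \quad \text{for time-constructible } f, \qquad \text{so} \qquad
    \mathrm{DTIME}\!\left(n^{300}\right) \subsetneq \mathrm{DTIME}\!\left(n^{301}\right) \subseteq \mathrm{P}

So there are problems solvable in polynomial time that provably cannot be solved in O(N^300) time, which settles the claim for P and hence for NP as well.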

Comment author: Incorrect 07 February 2012 03:34:22PM -1 points [-]

I think Yudkowsky meant big omega.

Comment author: skepsci 13 February 2012 10:15:18AM *  4 points [-]

I think the charitable interpretation is that Eliezer meant someone might figure out an O(N^300) algorithm for some NP-complete problem. I believe that's consistent with what the complexity theorists know; it certainly implies P=NP, but it doesn't help anyone with the goal of replacing mathematicians with microchips.

Comment author: thomblake 13 February 2012 05:17:45PM -1 points [-]

I don't think that interpretation is necessary. A better one is that even if all NP problems could be solved in O(N^300) time, we'd still need mathematicians.

Comment author: skepsci 13 February 2012 08:14:33PM *  0 points [-]

Sewing-Machine correctly pointed out, above, that this contradicts what we already know.

Comment author: thomblake 13 February 2012 08:39:11PM 0 points [-]

Are you saying that the counterfactual implication contradicts what we already know, or that the antecedent of the counterfactual implication contradicts what we already know?

I'd be surprised by the former, and the latter is obvious from the fact that it is a counterfactual.

Comment author: skepsci 14 February 2012 02:11:31AM 2 points [-]

I'm not really comfortable with counterfactuals, when the counterfactual is a mathematical statement. I think I can picture a universe in which isolated pieces of history or reality are different; I can't picture a universe in which the math is different.

I suppose such a counterfactual makes sense from the standpoint of someone who does not know the antecedent is mathematically impossible, and thinks rather that it is a hypothetical. I was trying to give a hypothetical (rather than a counterfactual) with the same intent, which is not obviously counterfactual given the current state of the art.

Comment author: skepsci 15 February 2012 10:29:32AM 1 point [-]

To elaborate on this a little bit, you can think of the laws of physics as a differential equation, and the universe as a solution. You can imagine what would happen if the universe passed through a different state (just solve the differential equation again, with new initial conditions), or even different physics (solve the new differential equation), but how do you figure out what happens when calculus changes?

Comment author: [deleted] 15 February 2012 03:54:40PM 1 point [-]

When I first read that TDT was about counterfactuals involving logically impossible worlds, I was uncomfortable with that but wasn't sure why; and when I first read about the five-and-ten problem, I dismissed it as wordplay. But then it dawned on me that the five-and-ten problem is indeed what you get if you allow counterfactuals to range over logically impossible worlds.

Comment author: Incorrect 07 February 2012 03:37:22PM 0 points [-]

anything that couldn't be proved with that many symbols would be of little to no interest to us

Are you sure? Maybe there exist concise, powerful theorems that have really long proofs.

Comment author: gwern 07 February 2012 03:39:40PM 4 points [-]

Oh good grief, since everyone here is intent on nitpicking the observation to death, here is his bloody letter: http://rjlipton.wordpress.com/the-gdel-letter/