MagnetoHydroDynamics comments on GAZP vs. GLUT - Less Wrong

33 points. Post author: Eliezer_Yudkowsky, 07 April 2008 01:51AM


Comment author: [deleted] 06 February 2012 11:51:42AM 4 points

Unless you think it's possible to program a conscious being in Haskell.

Ahemhem. Haskell is as fine a Turing-complete language as any; we just like to have our side effects explicit!
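
For what it's worth, "side effects explicit" cashes out roughly like the following minimal sketch; greetPure and greetIO are made-up names for illustration, not anything from the post:

    -- A pure function: no IO in its type, so it cannot touch the outside world.
    greetPure :: String -> String
    greetPure name = "Hello, " ++ name

    -- The effectful version wears IO in its type; that is the "explicit" part.
    greetIO :: String -> IO ()
    greetIO name = putStrLn (greetPure name)

    main :: IO ()
    main = greetIO "world"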

Also, can we just conclude that "consciousness" is the leakiest of surface generalizations ever? If I one day get the cog-psy skills, I am going to run a stack-trace on what makes us say "consciousness" without knowing diddly about what it is.

As a budding AI researcher, I am frankly offended by philosophers pretending to be wise like that. No. There is no such thing as "consciousness", because it is not even a bucket to put things in. It's metal shreds. You are some sort of self-introspective algorithm implemented on a biochemical computing substrate, so let's make the blankness of our maps self-evident by calling it "magic" or something.

Comment author: gwern 06 February 2012 11:20:14PM 0 points

Ahemhem. Haskell is as fine a Turing-complete language as any; we just like to have our side effects explicit!

I don't think we even need Turing-completeness, really. Take a total language, and give it a loop-timeout of, say, 3^^^^3. It can compute anything we care about, and is still guaranteed to terminate in bounded time.
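
A minimal sketch of that loop-timeout idea, written in Haskell for concreteness (Haskell itself isn't total; the fuel counter and the names bounded and collatzStep are illustrative, not from any real total language):

    -- Run a step function with a fuel budget; once the fuel is gone we stop,
    -- so the whole computation is guaranteed to terminate.
    bounded :: Int -> (a -> Either a b) -> a -> Maybe b
    bounded 0 _    _ = Nothing                 -- fuel exhausted: give up
    bounded n step x = case step x of
      Right done -> Just done                  -- reached a result
      Left next  -> bounded (n - 1) step next  -- spend one unit of fuel

    -- Example: the Collatz iteration with a timeout far smaller than 3^^^^3.
    collatzStep :: Integer -> Either Integer Integer
    collatzStep 1 = Right 1
    collatzStep k = Left (if even k then k `div` 2 else 3 * k + 1)

    main :: IO ()
    main = print (bounded 1000 collatzStep 27)  -- Just 1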

(I'm reminded of Gödel's letter discussing P=NP: should such an algorithm be found, mathematicians could be replaced by a machine that searches all proofs with fewer than a few million symbols; anything that couldn't be proved with that many symbols would be of little to no interest to us.)

Comment author: wedrifid 07 February 2012 09:17:00AM * 6 points

(I'm reminded of Gödel's letter discussing P=NP: should such an algorithm be found, mathematicians could be replaced by a machine that searches all proofs with fewer than a few million symbols; anything that couldn't be proved with that many symbols would be of little to no interest to us.)

Or, at least, they could be replaced by people who can understand and seek out the proofs that are relevant to us from among the arbitrarily large number of useless ones. So, mathematicians, basically.

Comment author: Eliezer_Yudkowsky 07 February 2012 10:57:54AM 5 points

Unless for any NP problem there exists an algorithm which solves it in just O(N^300) time.

NP=P is not the same as NP=Small P

Comment author: [deleted] 07 February 2012 11:52:36AM 7 points

Likewise, EXPTIME doesn't mean Large EXPTIME -- an algorithm running in exp(1e-15*N) seconds is asymptotically slower than one running in N^300 seconds, but it is faster for pretty much all practical purposes.

I once read a Usenet post or Web page along the lines of “There are two kinds of numbers: those smaller than Graham's number and those larger than Graham's number. Computational complexity theory traditionally only concerns itself with the latter, but only the former are relevant to real-world problems.”

Comment deleted 07 February 2012 03:30:44PM
Comment author: kilobug 07 February 2012 03:39:37PM * 1 point

No, slower. N^300 is polynomial, exp(1e-15*N) is exponential.

Maybe it's easier to understand them if you take the log? log(N^300) = log(N) * 300, while log(exp(1e-15 * N)) = 1e-15 * N.

Whatever positive multiplicative constants a and b you pick, a * N will at some point become bigger (and therefore slower) than b * log(N). Here that happens roughly when N gets somewhat above 10^15; around 10^20, according to a simple guess.
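
A rough numeric check of that crossover, using the constants from this thread (a = 1e-15, b = 300); the search step factor is arbitrary:

    -- Find (roughly) the first N where 1e-15 * N exceeds 300 * log N,
    -- i.e. where exp(1e-15*N) seconds overtakes N^300 seconds.
    crossover :: Double
    crossover = head [ n | n <- iterate (* 1.01) 1e15
                         , 1e-15 * n > 300 * log n ]

    main :: IO ()
    main = print crossover  -- about 1.3e19, between 10^19 and 10^20

Below that N the exp(1e-15*N) algorithm is the faster of the two; above it, N^300 wins, which is the asymptotic point.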

Comment author: [deleted] 07 February 2012 01:04:46PM 2 points

Complexity theorists don't know anything, but they at least know that it's impossible to solve all NP problems in O(N^300) time. In fact they know it's impossible to solve all P problems in O(N^300) time.

http://en.wikipedia.org/wiki/Time_hierarchy_theorem
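
For reference, the deterministic time hierarchy theorem behind that link says, roughly, that for time-constructible g:

    f(n) \log f(n) = o(g(n)) \;\Longrightarrow\; \mathrm{DTIME}(f(n)) \subsetneq \mathrm{DTIME}(g(n))

Taking f(n) = n^300 and g(n) = n^302 then gives a problem decidable in polynomial time but not in O(N^300) time, which is the claim above.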

Comment author: Incorrect 07 February 2012 03:34:22PM -1 points

I think Yudkowsky meant big omega.

Comment author: skepsci 13 February 2012 10:15:18AM * 4 points

I think the charitable interpretation is that Eliezer meant someone might figure out an O(N^300) algorithm for some NP-complete problem. I believe that's consistent with what the complexity theorists know (it certainly implies P=NP), but it doesn't help anyone with the goal of replacing mathematicians with microchips.

Comment author: thomblake 13 February 2012 05:17:45PM -1 points

I don't think that interpretation is necessary. A better one is that even if all NP problems could be solved in O(N^300) time, we'd still need mathematicians.

Comment author: skepsci 13 February 2012 08:14:33PM * 0 points

Sewing-Machine correctly pointed out, above, that this contradicts what we already know.

Comment author: thomblake 13 February 2012 08:39:11PM 0 points

Are you saying that the counterfactual implication contradicts what we already know, or that the antecedent of the counterfactual implication contradicts what we already know?

I'd be surprised by the former, and the latter is obvious from the fact that it is a counterfactual.

Comment author: skepsci 14 February 2012 02:11:31AM 2 points

I'm not really comfortable with counterfactuals, when the counterfactual is a mathematical statement. I think I can picture a universe in which isolated pieces of history or reality are different; I can't picture a universe in which the math is different.

I suppose such a counterfactual makes sense from the standpoint of someone who does not know the antecedent is mathematically impossible, and thinks rather that it is a hypothetical. I was trying to give a hypothetical (rather than a counterfactual) with the same intent, one which is not obviously counterfactual given the current state of the art.

Comment author: Incorrect 07 February 2012 03:37:22PM 0 points

anything that couldn't be proved with that many symbols would be of little to no interest to us

Are you sure? Maybe there exist concise, powerful theorems that have really long proofs.

Comment author: gwern 07 February 2012 03:39:40PM 4 points

Oh good grief, since everyone here is intent on nitpicking the observation to death, here is his bloody letter: http://rjlipton.wordpress.com/the-gdel-letter/

Comment author: skepsci 13 February 2012 11:05:32AM 0 points

Glaring redundancy aside, isn't "self-introspective" just as intensionally valid or void as "conscious"?

Comment author: [deleted] 20 February 2012 06:02:24PM 0 points

Yes, probably. It is a really good idea to taboo any and all of "conscious," "self-<anything>," "introspective," "thinking," and so on when doing AI work, or so I heard.