Qiaochu_Yuan comments on New report: Intelligence Explosion Microeconomics - Less Wrong

Post author: Eliezer_Yudkowsky 29 April 2013 11:14PM




Comment author: Eliezer_Yudkowsky 30 April 2013 02:23:14PM -1 points

Protein folding cannot be NP-hard. The physical universe is not known to be able to solve NP-hard problems, and protein folding will not involve new physics.

Comment author: Qiaochu_Yuan 30 April 2013 09:09:49PM 0 points

Is this your complete response? I guess I could expand this to "I expect all the problems an AI needs to solve on the way to an intelligence explosion to be easy in principle but hard in practice," and I guess I could expand your other comments to "the problem sizes an AI will need to deal with are small enough that asymptotic statements about difficulty won't come into play." Both of these claims seem like they require justification.

Comment author: Eliezer_Yudkowsky 30 April 2013 09:16:33PM -1 points

It's not meant as a response to everything, just noting that protein structure prediction can't be NP-hard. More generally, I tend to take P!=NP as a background assumption; I can't say I've worried too much about how the universe would look different if P=NP. I never thought superintelligences could solve NP-hard problems to begin with, since they're made out of wavefunction and quantum mechanics can't do that. My model of an intelligence explosion just doesn't include anyone trying to do anything NP-hard at any point, unless it's in the trivial sense of doing it for N=20 or something. Since I already expect things to local FOOM with P!=NP, adding P=NP doesn't seem to change much, even if the polynomial itself is small. Though Scott Aaronson seems to think there'd be long-term fun-theoretic problems because it would make so many challenges uninteresting. :)
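The remark about "doing it for N=20" can be made concrete with a sketch (not from the thread; the specific problem and numbers are illustrative). Subset-sum is NP-hard in general, yet a full 2^N brute-force scan is about a million subset checks at N=20 and runs in a fraction of a second, while the same scan at N=100 would require roughly 2^100 ≈ 1.3×10^30 checks:

```python
# Illustrative sketch: exponential brute force is trivial at N = 20
# but hopeless asymptotically. Problem choice (subset-sum) is assumed
# for illustration, not taken from the discussion above.
from itertools import combinations

def subset_sum(nums, target):
    """Brute-force subset-sum: scans all 2^N subsets, smallest first."""
    for r in range(len(nums) + 1):
        for combo in combinations(nums, r):
            if sum(combo) == target:
                return combo
    return None

nums = list(range(1, 21))          # N = 20 items
print(subset_sum(nums, 41))        # at most 2^20 ~ 1e6 subset checks

# At N = 100 the same scan would need 2^100 checks -- the regime where
# asymptotic hardness actually bites.
print(2**20, 2**100)
```

This is only a toy instance; the point is that NP-hardness is a statement about worst-case scaling, which says nothing about fixed small instances.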