Eliezer_Yudkowsky comments on New report: Intelligence Explosion Microeconomics - Less Wrong

Post author: Eliezer_Yudkowsky 29 April 2013 11:14PM




Comment author: Eliezer_Yudkowsky 30 April 2013 09:16:33PM -1 points [-]

It's not meant as a response to everything, just noting that protein structure prediction can't be NP-hard. More generally, I tend to take P!=NP as a background assumption; I can't say I've worried much about how the universe would look different if P=NP. I never thought superintelligences could solve NP-hard problems to begin with, since they're made out of wavefunction, and quantum mechanics is not known to solve NP-hard problems in polynomial time. My model of an intelligence explosion just doesn't include anyone trying to do anything NP-hard at any point, unless it's in the trivial sense of doing it for N=20 or something. Since I already expect a local FOOM given P!=NP, adding P=NP doesn't seem to change much, even if the polynomial itself is small. Though Scott Aaronson seems to think there'd be long-term fun-theoretic problems, because P=NP would make so many challenges uninteresting. :)
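[Editor's note: the "trivial sense of doing it for N=20" point can be made concrete with a small sketch. Subset-sum is chosen here purely as an illustrative NP-hard problem, and the numbers are made up; the point is that NP-hardness is an asymptotic claim, so a fixed small instance yields to brute force.]

```python
from itertools import combinations

# Subset-sum is NP-hard in general, but for N=20 exhaustive search over
# all 2^20 ~ one million subsets is trivial on any modern machine.
nums = [3, 34, 4, 12, 5, 2, 7, 8, 14, 21, 9, 11, 6, 18, 25, 30, 1, 16, 19, 13]
target = 100

def subset_sum_bruteforce(nums, target):
    """Return a subset of nums summing to target, or None.

    Checks every subset (sum over r of C(n, r) = 2^n candidates).
    """
    for r in range(len(nums) + 1):
        for combo in combinations(nums, r):
            if sum(combo) == target:
                return combo
    return None

print(subset_sum_bruteforce(nums, target))
```

For N=20 this finishes in well under a second; the exponential blowup that makes the problem "hard" only bites as N grows.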