paulfchristiano comments on The Curve of Capability - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Yes, I'm familiar with these arguments. I find them suggestive but not nearly as persuasive as others seem to. I estimate about a 1% chance that P=NP is provable in ZFC, around a 2% chance that P=NP is undecidable in ZFC (this is a fairly recent update; this number used to be much smaller, and I'm willing to discuss my reasons if anyone cares), and a 97% chance that P != NP. Since this is close to my area of expertise, I think I can make these estimates fairly safely.
Absolutely not. Humans can't do a good FOOM. We evolved in circumstances where we very rarely had to solve NP-hard or NP-complete problems, and our self-modification system is essentially unconscious; there was little evolutionary incentive to take advantage of fast SAT solving. If one doesn't believe this, just look at how much trouble humans have with all sorts of very tiny instances of simple computational problems, like multiplying small numbers or factoring small integers (say, under 10 digits).
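To make the "tiny instances" point concrete, here is a minimal sketch (my own illustration, not from the comment) of trial-division factoring: a computation that is trivial for a machine but tedious for an unaided human, and whose cost grows roughly with sqrt(n), i.e. exponentially in the number of digits.

```python
def trial_factor(n: int) -> list[int]:
    """Return the prime factorization of n by trial division."""
    factors = []
    d = 2
    while d * d <= n:
        # Divide out each prime factor as many times as it occurs.
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        # Whatever remains is itself prime.
        factors.append(n)
    return factors

# A 12-digit integer: easy for this loop, hopeless by hand.
print(trial_factor(600851475143))  # → [71, 839, 1471, 6857]
```

No known algorithm does fundamentally better than exponential in the digit count on classical hardware, which is the gap the comment is pointing at.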
Really? In that case we have sharply different probability estimates. Would you care to make an actual bet? Is it fair to say that you are putting the probability that P=NP at less than 10^-6?
If an AI can make quantum computers that can do that, then it has so much matter-manipulation ability that it has likely already won (although I doubt that even a reasonably powerful AI could do this, simply because quantum computers are so finicky and unstable).
But if P=NP in a practical way, RSA cracking is just one of the many things the AI will have fun with. Many cryptosystems, not just RSA, would be vulnerable. The AI might quickly take control of many computer systems, increasing its intelligence and data input drastically, and many sensitive systems would likely fall under its control. If P=NP, the AI also has shortcuts to all sorts of other things that could help it, like designing new circuits for itself (chip factories are close to automated at this point) and lots of neat biological tricks (protein folding becomes much easier, although there seems to be some disagreement about which computational class general protein folding falls into). And of course, all those secure systems that are on the net but shouldn't be become far more vulnerable (nuclear power plants, particle accelerators, hydroelectric dams), as do lots of commercial and military satellites. And those are just a handful of the things my little human mind comes up with without being very creative. Harry James Potter Evans-Verres would do a lot better. (Incidentally, I didn't remember how to spell his name, so I started typing "Harry James Potter" into Google, and at "Harry James Pot" the third suggestion was for Evans-Verres. Apparently HPMoR is frequently googled.) And neither Harry nor I is as smart as a decently intelligent AI.
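The RSA point can be sketched in a few lines. This is a toy illustration with tiny made-up parameters (not real cryptography, and not anything from the comment): the private key follows immediately from the factorization of the public modulus, so fast factoring — which practical P=NP would imply — breaks the scheme outright.

```python
# Toy RSA with made-up single-digit-ish primes; real RSA uses ~1024-bit primes.
p, q = 61, 53            # secret primes an attacker would recover by factoring
n = p * q                # public modulus (3233)
e = 17                   # public exponent

# Knowing p and q gives phi(n), and with it the private exponent d.
phi = (p - 1) * (q - 1)
d = pow(e, -1, phi)      # modular inverse (Python 3.8+)

msg = 42
cipher = pow(msg, e, n)          # encrypt with the public key
assert pow(cipher, d, n) == msg  # factoring n lets an attacker decrypt
```

The whole security of the scheme lives in the step from `n` back to `(p, q)`; everything after that is cheap arithmetic, which is why the comment treats fast factoring as game over for RSA.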
I now agree that I was overconfident in P != NP. I was thinking only of failures where my general understanding of and intuition about math and computer science are correct. In fact, most of the failure probability comes from the case where I (and most computer scientists) are completely off base and don't know at all what is going on. I think such worlds are unlikely, but probably not 1 in a million.