Psy-Kosh comments on What would you do with a solution to 3-SAT? - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
NP oracles make learning easy by letting one find compact models that explain/predict the available data...
Also gives the ability to do stuff like "what actions can I take which, within N inferential steps of this model, will produce an outcome I desire?" or "will produce utility > U with probability > P" or such.
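To make the "find compact models" query concrete, here's a toy sketch, with brute-force search standing in for the (nonexistent) efficient NP oracle. The model class is my own illustrative choice — boolean circuits of AND/OR/NOT gates over the input bits — and the point is just the query shape: a decision question "does a model of size ≤ N exist?", asked for N = 1, 2, ... to get the smallest one.

```python
def eval_circuit(gates, inputs):
    """Wires 0..len(inputs)-1 hold the inputs; each gate appends one wire."""
    wires = list(inputs)
    for op, a, b in gates:
        if op == "NOT":
            wires.append(int(not wires[a]))
        elif op == "AND":
            wires.append(wires[a] & wires[b])
        else:  # "OR"
            wires.append(wires[a] | wires[b])
    return wires[-1]

def exists_model(data, size, gates=()):
    """Oracle-style decision query: is there a circuit of <= `size` gates
    reproducing every (input, output) pair in `data`? Returns one if so."""
    if gates and all(eval_circuit(gates, x) == y for x, y in data):
        return list(gates)
    if len(gates) == size:
        return None
    n_wires = len(data[0][0]) + len(gates)
    for op in ("NOT", "AND", "OR"):
        for a in range(n_wires):
            for b in range(n_wires):
                found = exists_model(data, size, gates + ((op, a, b),))
                if found:
                    return found
    return None

def smallest_model(data, max_size):
    """Find a minimum-size model by asking the decision query for N = 1, 2, ..."""
    for n in range(1, max_size + 1):
        model = exists_model(data, n)
        if model:
            return model
    return None

# Learn NAND from four observed input/output pairs; the smallest circuit
# is NOT(AND(a, b)), i.e. two gates.
data = [((a, b), int(not (a and b))) for a in (0, 1) for b in (0, 1)]
print(smallest_model(data, max_size=3))
```

With a real efficient NP oracle, each `exists_model` call would be cheap even for large model classes; the brute force here is only to make the sketch runnable.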
Maybe I'm way off on this, but it sure does seem like a cheap NP oracle would make at least UFAI comparatively easy.
If I'm totally wrong about NP oracles -> easy AI, then I'd agree with you re releasing it... with a caveat. I'd say to do it with advance warning... ie, anonymously demonstrate to various banks, security institutions, and possibly the public that there exists someone with the ability to efficiently solve NP-complete problems, and that the algorithm will be released in X amount of time. (1-2 years would be my first thought for how long the advance warning should be.)
This way everyone has time to at least partly prepare for the day when all crypto other than the one-time pad would be dead. (Quantum key distribution is one way to get OTP-style security.)
Yes, but if the models aren't well-defined then that won't be doable. At this point we can't even give a rigorous notion of what we mean by an intelligence, so our 3-SAT oracle won't be able to help much. The 3-SAT oracle is only going to help with precisely defined questions.
Huh? I'm not sure I understand your objection.
An NP oracle would let you do stuff like "given this sensory data, find a model of size N or less that within K computational steps or less will reproduce the data to within error x, given such a model exists"
Then one can run "which sequence of actions, given this model, will, within S steps, produce outcome A with probability P?"
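The planning query has the same shape as the learning one. Here's a minimal sketch for the deterministic case (so probability P = 1), again with brute-force enumeration standing in for the efficient oracle; the counter model and its two actions are purely hypothetical illustrations:

```python
from itertools import product

def plan(model_step, start, goal, actions, max_steps):
    """Oracle-style query: which sequence of at most `max_steps` actions,
    under this model, produces an outcome satisfying `goal`?
    Brute force here; an efficient NP oracle would answer the same
    existence question without enumerating."""
    for length in range(max_steps + 1):
        for seq in product(actions, repeat=length):
            state = start
            for a in seq:
                state = model_step(state, a)
            if goal(state):
                return list(seq)
    return None

# Hypothetical toy model: a counter whose actions increment or double it.
step = lambda s, a: s + 1 if a == "inc" else s * 2
print(plan(step, start=1, goal=lambda s: s == 10, actions=["inc", "dbl"], max_steps=5))
```

Because lengths are tried in increasing order, the first sequence returned is a shortest plan; handling "with probability P" over a stochastic model is the same existence question with the probability threshold folded into the goal test.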
Whether or not we can give a rigorous definition of intelligence, it seems like the above is sufficient to act like an intelligence, right? Yeah, there're a few tricky parts re reflective decision theory, but even without that, even if we let the thing be non-self-modifying... giving a nice chunk of computing power to the above, given an efficient NP oracle, would be enough to potentially cause trouble. Or so I'd imagine.
Ok, but how are you specifying the outcome? And are you going to specify each input and output?
Just some property of the outcome that you're interested in. ie, all of the above, with the question being "blah blah blah, with f(outcome) = blah with probability blah blah blah blah blah"
Then you will need to specify your AI for every single input-output pair you are interested in.
Huh? I don't follow. (Note, the whole point is that I was claiming that an NP oracle would make, say, a UFAI potentially easy, while to achieve the rather more specific FAI would still be difficult.)
It seems that we may be talking past each other. Could you give an explicit example of what sort of question you would ask the NP oracle to help get a UFAI?
Oooh, sorry, I was unclear. I meant the NP oracle itself would be a component of the AI.
ie, give me an algorithm for efficiently solving NP-complete problems, and one could then use that to perform the sort of computations I mentioned earlier.
Hmm, I'm confused. Why do you think that such an object would be helpful as part of the AI? I see how it would be useful to an AI once one had one, but I don't see why you would want it as a component that makes it easier to make an AGI.