loqi comments on The AI in a box boxes you - Less Wrong

Post author: Stuart_Armstrong 02 February 2010 10:10AM


Comment author: loqi 07 February 2010 11:51:37PM 0 points

"If" is the question, not "how long". And I think we'd stand a pretty good chance of handling a proof object in a secure way, assuming we have a secure digital transmission channel etc.

But the original scope of the thought experiment was assuming that we want to verify the proof. Wei Dai said:

Surely most humans would be too dumb to understand such a proof? And even if you could understand it, how does the AI convince you that it doesn't contain a deliberate flaw that you aren't smart enough to find? Or even better, you can just refuse to look at the proof.

I was responding exclusively to the first question, which is disjoint from the others. If your point is that we shouldn't attempt to verify an AI's precommitment proof, I agree.

Comment author: aausch 09 February 2010 10:19:41PM 0 points

I'm getting more confused. To me, the statements "Humans are too dumb to understand the proof" and "Humans can understand the proof given unlimited time" are equivalent, where 'understand' is qualified to include the ability to properly map the proof to the AI's capabilities.

My point is not that we shouldn't attempt to verify the AI's proof for any external reasons - my point is that there is no useful information to be gained from the attempt.