Unknowns comments on AI cooperation in practice - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Here's my proof that A defects against Defection Rock: by assumption, A's proof checker is correct. If A cooperates against the Defection Rock, A must have arrived at a proof that A's choice is equal to the Rock's choice. But A's choice (cooperate) is not equal to the Rock's choice (defect). Therefore A's proof checker is incorrect, contradiction, QED.
This proof doesn't take more than 3^^^^3 steps, but that doesn't matter. A can't use it anyway, because it can't assume (or prove) that its own proof checker is correct. That's Gödel's second incompleteness theorem: if a sufficiently strong formal system asserts or proves its own consistency, it is inconsistent. That's why none of the proof systems people actually use can prove their own consistency.
That proves that A can't cooperate, but it doesn't prove that it defects, since there remains the possibility that the program doesn't output anything.
The program is guaranteed to finish because there are only finitely many proofs to check (up to its length bound), and checking any one proof takes finite time (a proof checker cannot loop forever on a given proof).
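The bounded proof search described above can be sketched in a toy model. Everything here is a stand-in: the "proof system" only admits reflexivity certificates, and `agent_A`, `defection_rock`, and `valid_proof` are hypothetical names, not anything from the original discussion. The point is just that the search loop ranges over a finite set and therefore terminates, and that against Defection Rock no proof of equality exists, so A defects.

```python
def defection_rock(_opponent):
    """Defection Rock: defects no matter what its opponent does."""
    return "defect"

def valid_proof(proof, statement):
    """Toy proof checker: the only valid 'proofs' in this toy system are
    reflexivity certificates ('refl', s), which prove the statement (s, s)."""
    return proof[0] == "refl" and statement == (proof[1], proof[1])

def agent_A(opponent, candidate_terms):
    """Search a finite enumeration of proofs; cooperate only if some proof
    shows A's choice equals the opponent's choice, otherwise defect."""
    opp_choice = opponent(agent_A)  # the Rock ignores its argument
    target = ("cooperate", opp_choice)  # statement: A's choice == Rock's choice
    # Finitely many candidate proofs, each checked in finite time,
    # so this loop is guaranteed to terminate.
    for proof in [("refl", t) for t in candidate_terms]:
        if valid_proof(proof, target):
            return "cooperate"
    return "defect"

print(agent_A(defection_rock, ["cooperate", "defect"]))  # prints "defect"
```

Against Defection Rock the target statement is ("cooperate", "defect"), which no reflexivity certificate can prove, so the search exhausts its finite proof space and A defects.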
So with an argument like the one that A defects, I can prove that your mind is inconsistent:
"Cousin_it will never prove this statement to be true."
I know that this is true, and presumably you know it too... which is a contradiction.
I don't know that it's true.
And I don't see how this makes the proof wrong.
It doesn't make the proof wrong.
Oh, sorry then :-)
"Lucas cannot consistently assert this sentence" is an old philosophers' joke, actually.
Yes, I can.