You know that automated proof verifiers exist, right? And also that programs can know their own source code? Well, here's a puzzle for you:
Consider a program A that knows its own source code. The algorithm of A is as follows: generate and check all possible proofs up to some huge size (3^^^^3). If A finds a proof that A returns 1, it returns 1. If the search runs to the end and fails, it returns 0. What will this program actually return?
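Here's a minimal Python sketch of this setup. The helper `provable` and the bound are hypothetical placeholders: `provable` stands for exhaustively enumerating and checking every proof up to the given size, which no real stub can actually do.

```python
MAX_PROOF_SIZE = 3  # stands in for the huge bound 3^^^^3

def provable(statement: str, bound: int) -> bool:
    """Hypothetical helper: enumerate every proof of at most `bound`
    symbols in some fixed proof system; return True iff one of them
    proves `statement`."""
    raise NotImplementedError("exhaustive proof search goes here")

def A() -> int:
    # A knows its own source code, so it can quote itself inside the
    # statement whose proof it searches for.
    if provable("A() == 1", MAX_PROOF_SIZE):
        return 1
    return 0
```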
Wait, that was the easy version. Here's a harder puzzle:
Consider programs A and B that both know their own, and each other's, source code. The algorithm of A is as follows: generate and check all proofs up to size 3^^^^3. If A finds a proof that A returns the same value as B, it returns 1. If the search fails, it returns 0. The algorithm of B is similar, but possibly with a different proof system and different length limit. What will A and B return?
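In the same hedged style, the second puzzle looks like this; `provable_A` and `provable_B` are again placeholder proof searchers, possibly over different proof systems and with different limits:

```python
MAX_SIZE_A = MAX_SIZE_B = 3  # each agent fixes its own huge limit

def provable_A(statement: str, bound: int) -> bool:
    raise NotImplementedError  # A's proof system and search

def provable_B(statement: str, bound: int) -> bool:
    raise NotImplementedError  # possibly a different proof system

def A() -> int:
    # A searches for a proof that the two outputs coincide.
    if provable_A("A() == B()", MAX_SIZE_A):
        return 1
    return 0

def B() -> int:
    # B checks the same statement, with its own system and limit.
    if provable_B("A() == B()", MAX_SIZE_B):
        return 1
    return 0
```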
This second puzzle is a formalization of a Prisoner's Dilemma strategy proposed by Eliezer: "I cooperate if and only if I expect you to cooperate if and only if I cooperate". So far we only knew how to make this strategy work by "reasoning from symmetry", also known as quining. But programs A and B can be very different: a human-created AI versus an extraterrestrial crystalloid AI. Will they cooperate?
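For contrast, here's roughly what the quining approach looks like: an agent that cooperates exactly when its opponent's source code is a copy of its own. The `my_own_source` helper is a stand-in for the usual quine construction.

```python
def my_own_source() -> str:
    # Stand-in for the quine construction: a real agent would
    # reconstruct and return its own source code here.
    raise NotImplementedError

def quine_bot(opponent_source: str) -> str:
    # Reasoning from symmetry: an exact copy must choose the same
    # move, so cooperating with copies yields mutual cooperation.
    return "C" if opponent_source == my_own_source() else "D"
```

This only works against literal copies, which is exactly the limitation the puzzle above is meant to overcome.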
I have a tentative proof that the answer to the first problem is 1, and that in the second problem they will cooperate. But: a) it requires you to understand some logic (the diagonal lemma and Löb's Theorem), b) I'm not sure it's correct because I've only studied the subject for the last four days, c) this margin is too small to contain it. So I put it up here. I submit this post with the hope that, even though the proof is probably wrong or incomplete, the ideas may still be useful to the community, and someone else will discover the correct answers.
Edit: by request from Vladimir Nesov, I reposted the proofs to our wiki under my user page. Many thanks to all those who took the time to parse and check them.
The generalized action idea, applied to this setting, works as follows. Agents A and B can surrender control over their output to procedures they don't fully understand, but expect to produce good results. That expectation of good results is reflected, as in the previous construction, in the order in which they check the possibilities:
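Concretely, the ordering might look like this (a sketch reusing the hypothetical `provable_*` helpers from above; the fallback checks are one plausible choice, not the only one):

```python
def A() -> int:
    # Check the delegated outcome first: A trusts Foo() to produce
    # good results even though it can't analyze Foo() directly.
    if provable_A("A() == Foo() and B() == Bar()", MAX_SIZE_A):
        return Foo()
    # Fall back to the earlier coordination check.
    if provable_A("A() == B()", MAX_SIZE_A):
        return 1
    return 0

def B() -> int:
    if provable_B("A() == Foo() and B() == Bar()", MAX_SIZE_B):
        return Bar()
    if provable_B("A() == B()", MAX_SIZE_B):
        return 1
    return 0
```

Because the joint statement is checked first by both agents, its provability implies its truth by inspection of the code, which is the shape of implication that Löb's theorem can turn into actual provability.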
Here, the programs might be unable to prove anything specific about the outputs of Foo() and Bar(), but they can still use the trick to coordinate their simultaneous invocation. This is also one way of looking at how the coordination problem can be solved.
Note that Foo() and Bar() might themselves implement algorithms similar to the one in main(), so choosing them is a high-level decision, while the low-level decisions still remain to be made by Foo() and Bar(). Deciding hierarchically makes individual proofs, and individual decisions, easier to perform, so you don't need a ridiculously big upper bound on which proofs to consider. That we can forget how the hierarchical code is written (say, by inlining most functions) also shows the potential for finding similarly simplifying patterns in more monolithic code.
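For instance, here is a hypothetical sketch (not part of the original construction) of Foo() running the same pattern one level down, with `provable_Foo` as another placeholder proof searcher and a far smaller bound:

```python
SMALL_SIZE = 10**6  # far smaller than the top-level bound

def provable_Foo(statement: str, bound: int) -> bool:
    raise NotImplementedError  # Foo's own, cheaper proof search

def Foo() -> int:
    # The same pattern as main(), one level down: coordinate the
    # low-level decision with Bar() under a tractable proof bound.
    if provable_Foo("Foo() == Bar()", SMALL_SIZE):
        return 1
    return 0
```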