
orthonormal comments on Robust Cooperation in the Prisoner's Dilemma - Less Wrong

69 Post author: orthonormal 07 June 2013 08:30AM




Comment author: orthonormal 09 June 2013 02:05:53PM 5 points [-]

Algorithmic proof search is infeasible in general: even searching for proofs of bounded length is NP-hard. Special cases can be simpler: e.g. if two modal agents are playing against each other, they could employ Kripke semantics (deciding provability in the modal logic GL is still PSPACE-complete in the worst case, but it has much smaller computational requirements in practice than searching through all proofs).

(Technically, you'd define a class of ModalPrime agents, which begin with code to check whether their opponent is syntactically a ModalPrime agent, and then use Kripke semantics to compute the outcome of their modal statement against the other agent's modal statement. Or you could have Omega accept modal statements as inputs, and then Omega would only need to use Kripke semantics to resolve the matches in the tournament.)
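For concreteness, here's a minimal sketch of how such a match could be resolved on a finite Kripke chain (hypothetical Python; the formula encoding, the `n_worlds` cutoff, and the agent definitions are illustrative assumptions, not the tournament's actual implementation):

```python
# Hypothetical sketch: resolving a match between two fully modalized agents
# by evaluating their formulas on a finite Kripke chain, instead of searching
# for proofs in PA. Box(phi) holds at world n iff phi holds at every world
# m < n; truth values stabilize as n grows.
from dataclasses import dataclass

@dataclass(frozen=True)
class Var:          # "me" / "other": the named agent cooperates
    name: str

@dataclass(frozen=True)
class Box:          # provability modality
    body: object

@dataclass(frozen=True)
class Not:
    body: object

@dataclass(frozen=True)
class And:
    left: object
    right: object

@dataclass(frozen=True)
class Const:        # constant truth value (e.g. an unconditional defector)
    value: bool

def outcome(formula_a, formula_b, n_worlds=10):
    """Return (a_cooperates, b_cooperates) at the top of the chain."""
    truth = []  # truth[m] = {"A": bool, "B": bool}

    def ev(f, n, names):
        if isinstance(f, Const):
            return f.value
        if isinstance(f, Var):
            # Only reached at worlds below the one currently being computed,
            # because fully modalized agents put variables under a Box.
            return truth[n][names[f.name]]
        if isinstance(f, Box):
            return all(ev(f.body, m, names) for m in range(n))
        if isinstance(f, Not):
            return not ev(f.body, n, names)
        if isinstance(f, And):
            return ev(f.left, n, names) and ev(f.right, n, names)
        raise TypeError(f)

    for n in range(n_worlds):
        a = ev(formula_a, n, {"me": "A", "other": "B"})
        b = ev(formula_b, n, {"me": "B", "other": "A"})
        truth.append({"A": a, "B": b})
    return truth[-1]["A"], truth[-1]["B"]

fairbot = Box(Var("other"))   # cooperate iff "other cooperates" is provable
defectbot = Const(False)      # never cooperates

print(outcome(fairbot, fairbot))    # (True, True): Löbian mutual cooperation
print(outcome(fairbot, defectbot))  # (False, False)
```

Each world is computed in one pass from the worlds below it, so the whole match costs polynomially many formula evaluations rather than a search through all proofs.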

Long story short, I wouldn't submit a "search through proofs" modal algorithm to the PD tournament currently being run: it would always run out of time before finding Löbian proofs.

Comment author: DanArmak 09 June 2013 07:45:21PM 4 points [-]

If search is hard, perhaps there is another way.

Suppose an agent intends to cooperate (if the other agent cooperates, etc.). Then it will want the other agent to be able to prove that it will cooperate. If it knew of a proof about itself, it would advertise this proof, so the others wouldn't have to search. (In the original tournament setup, it could embed the proof in its source code.)

An agent can't spend much time searching for proofs about every agent it meets. Perhaps it meets new agents very often. Or perhaps there is a huge number of agents out there, and it can communicate with them all, and the more agents it cooperates with, the more it benefits.

But an agent has an incentive to spend a lot of time finding a proof about itself - once - and then it can hand that proof to any counterparty it wants to cooperate with.
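The asymmetry being relied on here is the familiar NP one: finding a certificate can take exponential time, but checking a supplied certificate is cheap. A toy illustration (hypothetical Python, using SAT as a stand-in for proof search):

```python
# Toy illustration of the search/verify asymmetry: *finding* a certificate
# (here, a satisfying assignment) takes exponential time in the worst case,
# but *checking* a supplied certificate is a single linear pass.
from itertools import product

def verify(clauses, assignment):
    # Cheap: a positive literal l is satisfied when assignment[l] is True,
    # a negative literal -l when assignment[l] is False; every clause must
    # contain at least one satisfied literal.
    return all(any(assignment[abs(lit)] == (lit > 0) for lit in clause)
               for clause in clauses)

def search(clauses, n_vars):
    # Expensive: worst case tries all 2**n_vars assignments.
    for bits in product([False, True], repeat=n_vars):
        candidate = dict(enumerate(bits, start=1))
        if verify(clauses, candidate):
            return candidate
    return None

# (x1 or x2) and (not x1 or x2): any model must set x2 = True
clauses = [[1, 2], [-1, 2]]
model = search(clauses, 2)
print(model, verify(clauses, model))
```

An agent that pays the `search` cost once, about itself, can then ship the `model` to every counterparty, each of which only pays the `verify` cost.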

(It's been pointed out that this could give an advantage to families of similar agents that can more easily prove things about one another by using existing knowledge and proofs).

Finally, somewhere there may be someone who created this agent - a programming intelligence. If so, that creator, with their understanding of the agent's behavior, may be able to construct such a proof.

Do these considerations help in practice?

Comment author: warbo 11 June 2013 10:43:18AM 6 points [-]

We only need to perform proof search when we're given some unknown blob of code. There's no need to do a search when we're the ones writing the code; we know it's correct, since otherwise we wouldn't have written it that way.

Admittedly, many languages allow us to be very sloppy; we may not have to justify our code to the compiler, and the language may not be powerful enough to express the properties we want. However, there are languages which do allow this (Idris, Agda, Epigram, ATS, etc.). In such languages we don't actually write a program at all; we write down a constructive proof that our properties are satisfied by some program, then the compiler derives that program from the proof. Such programs are known as "correct by construction".
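As a small taste of the idea (a hypothetical sketch in Lean 4 rather than any of the languages named above): the type itself states the property, and the definition is simultaneously the program and the proof.

```lean
-- Hypothetical Lean 4 sketch of "correct by construction": the type of
-- safeHead demands a proof that the list is nonempty, so calling it on an
-- empty list is a compile-time error rather than a runtime one.
def safeHead {α : Type} : (xs : List α) → xs ≠ [] → α
  | x :: _, _ => x
  | [],     h => absurd rfl h

-- Usage: List.cons_ne_nil supplies the nonemptiness proof.
#eval safeHead [1, 2, 3] (List.cons_ne_nil 1 [2, 3])  -- 1
```

A counterparty handed `safeHead` never needs to search for a proof that it is safe; the typechecker verifies the supplied one.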

http://en.wikipedia.org/wiki/Program_derivation
http://en.wikipedia.org/wiki/Proof-carrying_code