We assume two rules of inference:
Necessitation: ⊢x ⟹ ⊢□x
Distributivity: ⊢x→y ⟹ ⊢□x→□y
Is there a reason why this differs from the standard presentation of K? Normally you would say that K is generated by the following (coupled with substitution):
Axioms:
- All tautologies of propositional logic.
- Distribution: □(x→y)→(□x→□y).
Rules of inference:
- Necessitation: ⊢x ⟹ ⊢□x.
- Modus ponens: ⊢x→y, ⊢x ⟹ ⊢y.
No particular reason (this is the setup used by Demski in his original probabilistic Payor post).
I agree this is nonstandard though! To consider necessitation as a rule of inference & not mention modus ponens. Part of the justification is that probabilistic weak distributivity (⊢x→y ⟹ ⊢□px→□py) seems to me much closer to a 'rule of inference' than an axiom (or, at least, given the probabilistic logic setup we're using, it's already a tautology?).
On reflection, this presentation makes more sense to me (or at least gives me a better sense of what's going on / what's different between □ logic and □p logic). I am pretty sure they're interchangeable, however.
Thanks.
I am pretty sure they're interchangeable however.
Do you have a reference for this? Or perhaps there is a quick proof that could convince me?
Payor's Lemma holds in provability logic, distributivity is invoked when moving from step 1) to step 2) and this can be accomplished by considering all instances of distributivity to be true by axiom & using modus ponens. This section should probably be rewritten with the standard presentation of K to avoid confusion.
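On the quick-proof request above: one direction of the interchange (deriving the rule form of distributivity from the standard axiomatic presentation of K) is a short derivation; a sketch, using only the distribution axiom, necessitation, and modus ponens:

```latex
% Claim: in axiomatic K, the rule "from x -> y infer Box x -> Box y"
% is derivable.
\begin{align*}
&1.\ \vdash x \to y && \text{(premise)} \\
&2.\ \vdash \Box(x \to y) && \text{(1, necessitation)} \\
&3.\ \vdash \Box(x \to y) \to (\Box x \to \Box y) && \text{(distribution axiom)} \\
&4.\ \vdash \Box x \to \Box y && \text{(2, 3, modus ponens)}
\end{align*}
```

The converse direction (recovering the distribution axiom schema from the rule alone) is not immediate, which is consistent with the later remark that the rule-based system may be strictly weaker than K.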
W.r.t. this presentation of probabilistic logic, let's see what the analogous generator would be:
Axioms:
- All tautologies of propositional logic.
- Weak distributivity: (x→y)→(□px→□py).
Rules of inference:
- Necessitation: ⊢x ⟹ ⊢□px.
- Modus ponens: ⊢x→y, ⊢x ⟹ ⊢y.
Then, again, step 1 to 2 of the proof of the probabilistic Payor's lemma is shown by considering the axiom of weak distributivity and using modus ponens.
(actually, these are pretty rough thoughts. Unsure what the mapping is to the probabilistic version, and if the axiom schema holds in the same way)
This can be extended to arbitrarily many agents. Moreover, the valuable insight here is that cooperation is achieved when the evidence that the group cooperates exceeds each and every member's individual threshold for cooperation. This formalizes the intuitive strategy 'I will only cooperate if there are no defectors' (or perhaps 'we will only cooperate if there are no defectors').
You should include the highlighted insight in your summary. Also, why does your setup not lead to inconsistencies when Abram Demski isn't sure his setup does? Is it just that you don't have ", then "?
We know that the self-referential probabilistic logic proposed in Christiano 2012 is consistent. So, if we can get probabilistic Payor in this logic, then as we are already operating within a consistent system this should be a legitimate result.
Will respond more in depth later!
Yudhister's treatment here does not satisfy me: the assumption he calls syntactic-probabilistic correspondence is false. For example, in Paul's probability distributions, the self-referential sentence L ↔ Bel(┌L┐)<1 must be assigned probability 1, but is not true and not provable.
Could you perhaps say something about what a Kripkean semantics would look like for your logic?
I'm interested in what happens if individual agents A, B, C merely have a probability of cooperating given that their threshold is satisfied. So, consider the following assumptions.
The last assumption being simply that is low enough. Given these assumptions, we have via the same proof as in the post.
So for example if are all greater than two thirds, there can be some nonzero such that the agents will cooperate with probability . In a sense this is not a great outcome, since viable might be quite small; but it's surprising to get any cooperation in this circumstance.
The interesting thing about this -- beyond showing that going probabilistic allows the handshake to work with somewhat unreliable bots -- is that proving □pE rather than E is a lot different. With E, we're like "And so Peano arithmetic (or whatever) proves they cooperate! We think Peano arithmetic is accurate about such matters, so, they actually cooperate."
With the □pE conclusion we're more like "So if the agent's probability estimates are any good, we should also expect them to cooperate," or something like that. The connection to them actually cooperating is looser.
In summary: A probabilistic version of Payor's Lemma holds under the logic proposed in The Definability of Truth in Probabilistic Logic. This gives us modal fixed-point-esque group cooperation even under probabilistic guarantees.
EDIT 10/24: I think the way this post is framed is somewhat confused and should not be taken literally. However, I do stand by some of the core intuitions here.
Background
Payor's Lemma: If ⊢□(□x→x)→x, then ⊢x.
We assume two rules of inference:
Necessitation: ⊢x ⟹ ⊢□x
Distributivity: ⊢x→y ⟹ ⊢□x→□y
Proof:
1. ⊢x→(□x→x), by tautology;
2. ⊢□x→□(□x→x), from 1 by distributivity;
3. ⊢□(□x→x)→x, by assumption;
4. ⊢□x→x, from 2 and 3;
5. ⊢□(□x→x), from 4 by necessitation;
6. ⊢x, from 3 and 5.
The Payor's Lemma is provable in all normal modal logics (as it can be proved in K, the weakest, because it only uses necessitation and distributivity). Its proof sidesteps the assertion of an arbitrary modal fixedpoint, does not require internal necessitation (⊢□x⟹⊢□□x), and provides the groundwork for Lobian handshake-based cooperation without Lob's theorem.
It is known that Lob's theorem fails to hold in reflective theories of logical uncertainty. However, a proof of a probabilistic Payor's lemma has been proposed, which modifies the necessary rules of inference to be:
Necessitation: ⊢x ⟹ ⊢□px
Weak distributivity: ⊢x→y ⟹ ⊢□px→□py
The question is then: does there exist a consistent formalism under which these rules of inference hold? The answer is yes, and it is provided by Christiano 2012.
Setup
(Regurgitation and rewording of the relevant parts of the Definability of Truth)
Let L be some language and T be a theory over that language. Assume that L is powerful enough to admit a Godel encoding and that it contains terms which correspond to the rational numbers Q. Let ϕ1,ϕ2… be some fixed enumeration of all sentences in L. Let ┌ϕ┐ represent the Godel encoding of ϕ.
We are interested in the existence and behavior of a function P:L→[0,1], which assigns a real-valued probability in [0,1] to each well-formed sentence of L. We are guaranteed the coherency of P with the following assumptions:
1. P(ϕ)=P(ϕ∧ψ)+P(ϕ∧¬ψ) for all ϕ,ψ.
2. P(ϕ)=1 for each tautology ϕ.
3. P(ϕ)=0 for each contradiction ϕ.
Note: I think that 2 & 3 are redundant (as John Baez says), and that these axioms do not necessarily constrain P to [0,1] in and of themselves (hence the extra restriction). However, neither concern is relevant to the result.
A coherent P corresponds to a distribution μ over models of L. A coherent P which gives probability 1 to T corresponds to a distribution μ over models of T. We denote a function which generates a distribution over models of a given theory T as PT.
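The models-to-probabilities direction can be made concrete in a toy setting. A minimal Python sketch (my own construction over two hypothetical atoms a, b; not Christiano's actual machinery): models are truth assignments, μ is a distribution over them, and P(ϕ) is the μ-mass of the models satisfying ϕ. The coherence properties then hold automatically.

```python
from itertools import product

# Models are truth assignments to the atoms; mu is a distribution over them.
ATOMS = ("a", "b")
MODELS = [dict(zip(ATOMS, bits)) for bits in product([False, True], repeat=2)]
mu = {i: 0.25 for i in range(len(MODELS))}  # uniform distribution over models

def P(phi):
    """Probability of a sentence phi, represented as a predicate on models."""
    return sum(mu[i] for i, m in enumerate(MODELS) if phi(m))

phi = lambda m: m["a"]                       # the sentence 'a'
psi = lambda m: m["b"]                       # the sentence 'b'
conj = lambda m: phi(m) and psi(m)           # a ∧ b
conj_neg = lambda m: phi(m) and not psi(m)   # a ∧ ¬b

# Coherence: P(phi) = P(phi ∧ psi) + P(phi ∧ ¬psi); tautologies get 1,
# contradictions get 0.
assert abs(P(phi) - (P(conj) + P(conj_neg))) < 1e-9
assert P(lambda m: True) == 1.0
assert P(lambda m: False) == 0.0
```

Any other μ over the same model set works equally well; coherence comes from P being an expectation over models, not from the particular weights.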
Syntactic-Probabilistic Correspondence: Observe that PT(ϕ)=1⟺T⊢ϕ. This allows us to interchange the notions of syntactic consequence and probabilistic certainty.
Now, we want P to give sane probabilities to sentences which talk about the probability P gives them. This means that we need some way of giving L the ability to talk about itself.
Consider the formula Bel. Bel takes as input the Godel encodings of sentences. Bel(┌ϕ┐) contains arbitrarily precise information about P(ϕ). In other words, if P(ϕ)=p, then the statement Bel(┌ϕ┐)>a is True for all a<p, and the statement Bel(┌ϕ┐)>b is False for all b>p. Bel is fundamentally a part of the system, as opposed to being some metalanguage concept.
(These are identical properties to that represented in Christiano 2012 by P(┌ϕ┐). I simply choose to represent P(┌ϕ┐) with Bel(┌ϕ┐) as it (1) reduces notational uncertainty and (2) seems to be more in the spirit of Godel's Bew for provability logic).
Let L′ denote the language created by affixing Bel to L. Then, there exists a coherent PT for a given consistent theory T over L such that the following reflection principle is satisfied: ∀ϕ∈L′ ∀a,b∈Q: (a<PT(ϕ)<b)⟹PT(a<Bel(┌ϕ┐)<b)=1. In other words, a<PT(ϕ)<b implies T⊢a<Bel(┌ϕ┐)<b.
Proof
(From now, for simplicity, we use P to refer to PT and ⊢ to refer to T⊢. You can think of this as fixing some theory T and operating within it).
Let □p(ϕ) represent the sentence Bel(┌ϕ┐)>p, for some p∈Q. We abbreviate □p(ϕ) as □pϕ. Then, we have the following:
Probabilistic Payor's Lemma: If ⊢□p(□px→x)→x, then ⊢x.
Proof as per Demski:
1. ⊢x→(□px→x), by tautology;
2. ⊢□px→□p(□px→x), from 1 by weak distributivity;
3. ⊢□p(□px→x)→x, by assumption;
4. ⊢□px→x, from 2 and 3;
5. ⊢□p(□px→x), from 4 by necessitation;
6. ⊢x, from 3 and 5.
Rules of Inference
Necessitation: ⊢x⟹⊢□px. If ⊢x, then P(x)=1 by syntactic-probabilistic correspondence, so by the reflection principle we have P(□px)=1, and as such ⊢□px by syntactic-probabilistic correspondence.
Weak Distributivity: ⊢x→y⟹⊢□px→□py. The proof of this is slightly more involved.
From ⊢x→y we have (via correspondence) that P(x→y)=1, so P(¬x∨y)=1. We want to prove that P(□px→□py)=1 from this, i.e. P((Bel(┌x┐)≤p)∨(Bel(┌y┐)>p))=1. We do casework on x. If P(x)≤p, then □px→□py holds vacuously. If P(x)>p, then by coherence 1=P(¬x∨y)=P(x∧(¬x∨y))+P(¬x∧(¬x∨y))=P(x∧y)+P(¬x). Since P(x)>p, we have P(¬x)<1−p, so P(x∧y)>p, and therefore P(y)>p. Then Bel(┌y┐)>p is True by reflection, so by correspondence it follows that ⊢□px→□py.
(I'm pretty sure this modal logic, following necessitation and weak distributivity, is not normal (it's weaker than K). This may have some implications? But in the 'agent' context I don't think that restricting ourselves to modal logics makes sense).
Bots
Consider agents A,B,C which return True to signify cooperation in a multi-agent Prisoner's Dilemma and False to signify defection. (Similar setup to Critch's ). Each agent has 'beliefs' PA,PB,PC:L→[0,1] representing their credences over all formal statements in their respective languages (we are assuming they share the same language: this is unnecessary).
Each agent has the ability to reason about their own 'beliefs' about the world arbitrarily precisely, and this allows them full knowledge of their utility function (if they are VNM agents, and up to the complexity of the world-states they can internally represent). Then, these agents can be modeled with Christiano's probabilistic logic! And I would argue it is natural to do so (you could easily imagine an agent having access to its own beliefs with arbitrary precision by, say, repeatedly querying its own preferences).
Then, if A,B,C each behave in the following manner:
- ⊢□a(□eE→E)→A
- ⊢□b(□eE→E)→B
- ⊢□c(□eE→E)→C
where E=A∧B∧C and e=max({a,b,c}), they will cooperate by the probabilistic Payor's lemma.
Proof:
1. ⊢□e(□eE→E)→□a(□eE→E), since e≥a (if Bel(┌ϕ┐)>e then Bel(┌ϕ┐)>a); similarly for b and c;
2. ⊢□e(□eE→E)→A∧B∧C, from 1 and the agents' behavior;
3. ⊢□e(□eE→E)→E, from 2 by the definition of E;
4. ⊢E, by the probabilistic Payor's lemma with p=e.
This can be extended to arbitrarily many agents. Moreover, the valuable insight here is that cooperation is achieved when the evidence that the group cooperates exceeds each and every member's individual threshold for cooperation. This formalizes the intuitive strategy 'I will only cooperate if there are no defectors' (or perhaps 'we will only cooperate if there are no defectors').
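The threshold insight can be sketched trivially in Python (hypothetical thresholds a, b, c of my own choosing): since e = max(a, b, c), group-level evidence Bel(E) > e clears every member's individual threshold at once.

```python
# Hypothetical individual cooperation thresholds a, b, c for agents A, B, C.
thresholds = {"A": 0.6, "B": 0.7, "C": 0.9}
e = max(thresholds.values())  # the group threshold e = max({a, b, c})

def everyone_cooperates(belief_in_E: float) -> bool:
    """All agents' thresholds are met iff the group evidence exceeds e."""
    return all(belief_in_E > t for t in thresholds.values())

assert everyone_cooperates(0.95)      # Bel(E) > e: unanimous cooperation
assert not everyone_cooperates(0.8)   # clears A's and B's bars, but not C's
```

This is only the bookkeeping step; the substantive content (that the agents can prove Bel(E) exceeds e at all) comes from the probabilistic Payor argument above.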
It is important to note that any P is going to be uncomputable. However, I think modeling agents as having arbitrary access to their beliefs is in line with existing 'ideal' models (think VNM -- I suspect that this formalism closely maps to VNM agents that have access to arbitrary information about their utility function, at least in the form of preferences), and these agents play well with modal fixedpoint cooperation.
Acknowledgements
This work was done while I was a 2023 Summer Research Fellow at the Center on Long-Term Risk. Many thanks to Abram Demski, my mentor who got me started on this project, as well as Sam Eisenstat for some helpful conversations. CLR was a great place to work! Would highly recommend if you're interested in s-risk reduction.