In order to clarify the semantics of paraconsistent and relevance logics, we need to make a detour into impossible worlds - a fruitful detour opening up fun new vistas. Note that this is an intuitive introduction to the subject, and logic is probably the area of mathematics where it is the most dangerous to rely on your intuition; this is no substitute for rigorously going through the formal definitions. With that proviso in mind, let's get cracking.

 

Possible worlds: the meaning of necessity

Modal logics were developed around the concepts of necessity and possibility. They do this with two extra operators: the necessity operator □, and the possibility operator ◊. The sentence □A is taken to mean "it is necessary that A" and ◊A means "it is possible that A". The two operators are dual to each other – thus "it is necessary that A" is the same as "it is not possible that not-A" (in symbols: □A ↔ ¬◊¬A). A few intuitive axioms then lead to an elegant theory.

There was just one problem: early modal logicians didn't have a clue what they were talking about. They had the syntax, the symbols, the formal rules, but they didn't have the semantics, the models, the meanings of their symbols.

To see the trouble they had, imagine someone tossing a coin and covering it with their hand. Call wH the world in which it comes out heads and wT the world in which it comes out, you guessed it, tails. Now, is the coin necessarily heads? Is it possibly heads?

Intuitively, the answers should be no and yes. But this causes a problem. We may be in the world wH. So if we agree that the coin is not necessarily heads, then it is not necessarily heads even though it is actually heads (forget your Bayescraft here and start thinking like a logician). Similarly, in wT, the coin is in actuality tails yet it is possibly heads.

Saul Kripke found the breakthrough: necessity and possibility are not about individual worlds, but about collections of possible worlds, and relationships between them. In this case, there is an indistinguishability relationship between wT and wH, because we can't (currently) tell them apart.

Because of this relationship, the statement A:"the coin is heads" is possible in both wT and wH. The rule is that a statement is possible in world w if it is true in at least one world that is related to w. For w=wH, ◊A is true because A is true in wH and wH is related to itself. Similarly, for w=wT, ◊A is true because A is true in wH and wH is related to wT.

Conversely B:"the coin is heads or the coin is tails" is necessary in both wT and wH: here the rule is that a statement is necessary in world w if it is true in all worlds related to w. wT and wH are related only to each other through the indistinguishability relationship, and B is true in both of them, so □B is also true in both of them. However □A is not true in either wT or wH, because both those worlds are related to wT and A is false in wT.

This idea was formalised as a Kripke structure: a set W of worlds, a binary relationship R between some of the elements of W, and a rule that assigns the truth and falsity of statements in any particular world w. R is called the accessibility relationship (it need not be indistinguishability): wRv means that world v is 'accessible' from world w, and necessity and possibility are defined, via R, as above.
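The coin example can be sketched directly as a tiny Kripke structure. This is a minimal illustration, not a general modal-logic library; the world names, relation, and valuation below are just the ones from the example above.

```python
# A minimal Kripke structure for the coin example. Statements are modelled
# as predicates on worlds; the relation R is the indistinguishability
# relationship between wH and wT.

W = {"wH", "wT"}                        # the two possible worlds
R = {("wH", "wH"), ("wH", "wT"),        # indistinguishability: each world
     ("wT", "wT"), ("wT", "wH")}        # accesses itself and the other
V = {"wH": {"heads"}, "wT": {"tails"}}  # atomic facts true at each world

def accessible(w):
    return {v for (u, v) in R if u == w}

def possible(w, stmt):
    """◊stmt: true at w iff stmt holds in at least one world related to w."""
    return any(stmt(v) for v in accessible(w))

def necessary(w, stmt):
    """□stmt: true at w iff stmt holds in every world related to w."""
    return all(stmt(v) for v in accessible(w))

heads = lambda v: "heads" in V[v]
heads_or_tails = lambda v: "heads" in V[v] or "tails" in V[v]

assert possible("wT", heads)            # ◊A holds even where the coin is tails
assert not necessary("wH", heads)       # □A fails even where the coin is heads
assert necessary("wT", heads_or_tails)  # □B holds everywhere
```

Note how ◊A is true even at wT, where A is actually false, and □A is false even at wH, where A is actually true: exactly the covered-coin intuition.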

As an extra treat for the less wrong crowd, it should be noted that Kripke structures are used in the semantics of common knowledge, key to the all-important Aumann agreement theorem.

 

Impossible worlds: when necessary truth is not true, necessarily

It should be noted that all the worlds in W are possible worlds: the (classical) laws of logic apply. Hence every tautology is true in every world in W, and every contradiction is false. In standard ('normal') modal logic, there is a necessitation requirement: if A is a tautology, then □A ("it is necessary that A") is true in every world.

However, what if this were not the case? What if we added the requirement that □A not be true? Formally and symbolically, this seems a perfectly respectable thing to ask for. But can we come up with a model for this logic?

Not by using possible worlds, of course: since A is always true, we won't get very far. But we can enlarge our set W to also include impossible worlds: worlds where the laws of logic don't apply. Impossible worlds have truth values assigned to statements, but they don't need to be consistent or coherent ("Socrates is a man", "all men are mortal" and "Socrates is immortal" could all be true).

Now assume there is a world w', where A is not labelled true (since A is a tautology, w' must be an impossible world). Also assume that w' is accessible from possible world w (hence wRw'). Then w is a perfectly respectable possible world, with A true in it; however, □A is not true in w, because it has an accessible world (w') where A isn't true. Voila!
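The trick can be made concrete by letting the valuation assign truth values to whole statements directly, so an impossible world can simply refuse to label a tautology true. The world names and the labelling below are illustrative assumptions, not part of any standard formalism.

```python
# Sketch of the impossible-worlds construction: labels are assigned to whole
# statements, so impossible worlds need not respect the laws of logic.

label = {
    "w":     {"A or not A": True},    # possible world: the tautology holds
    "w_imp": {"A or not A": False},   # impossible world: it doesn't
}
R = {("w", "w"), ("w", "w_imp")}      # w' (= w_imp) is accessible from w

def necessary(w, stmt):
    """□stmt at w: stmt labelled true in every world accessible from w."""
    return all(label[v][stmt] for (u, v) in R if u == w)

# A is true at the possible world w, yet □A fails there, because the
# accessible impossible world w_imp doesn't label A true.
assert label["w"]["A or not A"]
assert not necessary("w", "A or not A")
```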

This seems like a cheap trick – a hack to build a model for a quirky odd logic that doesn't feel particularly intuitive anyway. But there are more things to be done with impossible worlds...

 

Some possibilities are more possible than others

Lewis built on the possible world idea to construct a theory of counterfactuals. Assume that the world is as we know it, and consider the two statements:

  • A: If the Titanic didn't hit an iceberg, then it didn't sink on its maiden voyage.
  • B: If the Titanic didn't hit an iceberg, then it was elected Pope in the controversial Martian election of 58 BC.

Both of these statements are true in our world, read as material implications: the antecedent is false (the Titanic actually did hit an iceberg), and from a false antecedent anything follows. Nevertheless, there is a strong intuitive sense in which A is 'more true' than B. The consequent of A is 'less extraordinary' than that of B, and seems to derive more directly from the antecedent. A seems at least counterfactually truer than B.

We can't use the Kripke accessibility relationship to encode this: R is a simple yes or no binary, with no gradation. Lewis suggested that we instead use a measure that encodes relative distance between possible worlds, some way of saying that world v is closer to w than u is. Some way of saying that a world where the English invented chocolate is less distant from us than a world in which all the English are made of chocolate.

With this distance, we can rigorously state whether A and B are true as counterfactual statements. Define C:"the Titanic didn't hit an iceberg", D:"the Titanic didn't sink on its maiden voyage" and E:"the Titanic was elected Pope in the controversial Martian election of 58 BC."

So, how shall we encode "C counterfactually implies D (in our world)"? Well, this could be true if C were false in every possible world; but no such luck. Otherwise, we could look for worlds where C is true and D is also true; yes, but there are also possible worlds where C is true but D is false (such as the world where the Titanic swerved to avoid the iceberg only to get sunk by the Nautilus). This is where the distance between worlds comes in: we say that "C counterfactually implies D (in our world)" if D is true in w, where w is the closest world to our own where C is true. For any intuitive distance measure, it does seem quite likely that C does counterfactually imply D (in our world): most ships cross the Atlantic safely, so if the Titanic hadn't hit an iceberg, it would most likely have crossed safely as well.

Basically a counterfactual is true if the consequent is true in the least weird world where the antecedent is true. A similar approach shows that "C counterfactually implies E (in our world)" is false. Unless we have made some very peculiar choices for our distance measure, the closest world to us in which the Titanic didn't hit an iceberg is not one in which a large metal contraption was elected Pope on an uninhabited planet thousands of years before it was even built.
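The closest-world rule can be sketched in a few lines. The worlds, truth assignments, and (crucially) the distance numbers below are made-up illustrations; as the next section notes, specifying the real distance measure is the unsolved part.

```python
# Sketch of Lewis's closest-world rule for counterfactuals.
# C: "didn't hit an iceberg", D: "didn't sink", E: "elected Martian Pope".

worlds = {
    "actual":   {"C": False, "D": False, "E": False},  # hit iceberg, sank
    "no_berg":  {"C": True,  "D": True,  "E": False},  # safe crossing
    "nautilus": {"C": True,  "D": False, "E": False},  # sunk by the Nautilus
    "pope":     {"C": True,  "D": True,  "E": True},   # Martian Pope world
}
# Illustrative distances from the actual world -- doing all the work here.
distance = {"actual": 0, "no_berg": 1, "nautilus": 5, "pope": 1000}

def counterfactual(antecedent, consequent):
    """antecedent []-> consequent: is the consequent true in the closest
    world where the antecedent is true?"""
    closest = min((w for w in worlds if worlds[w][antecedent]),
                  key=lambda w: distance[w])
    return worlds[closest][consequent]

assert counterfactual("C", "D")      # ...then it didn't sink: true
assert not counterfactual("C", "E")  # ...then it was elected Pope: false
```

(This follows the simplified rule in the text, which assumes a unique closest antecedent-world exists; Lewis's full definition is more careful.)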

So far, this seems pretty intuitive; but what about this "distance measure" between possible worlds, which seems to be doing all the work? Where does that come from? Well, that, of course, is left as an exercise to the interested reader. By which I mean the problem is unsolved. But there is progress: we started with an unsolved informal problem (what is a counterfactual) and replaced it with an unsolved formal problem (how to specify the relative distance between possible worlds). Things are looking up!

 

Some impossibilities are less impossible than others

It should be noted that in the previous section, the fact that the worlds were logically consistent was never used. This immediately suggests we can extend the setup to include impossible worlds as well. This was Edwin Mares's idea, and it allowed him to construct models for the logic of paradox.

All that we need is our magical "relative distance between worlds" measure, as before, except we now measure relative distance to impossible worlds as well. With that, we can start talking about the truths of counterpossible statements - conditional statements where the antecedent is contradictory. Assume for instance that both Euclidean geometry and Peano arithmetic are true, and consider:

  • A: the existence of an equilateral triangle with a right angle counterpossibly implies the existence of a triangle whose angles sum up to 270 degrees.
  • B: the existence of an equilateral triangle with a right angle counterpossibly implies the existence of many even primes.

For many intuitive distance-between-worlds measures, A is true in our world and B is false. This is, for instance, the case for some distance measures that penalise doubly impossible worlds (worlds where Euclidean geometry and Peano arithmetic both have contradictions) more than singly impossible worlds (where only one of the theories has contradictions). Again, the informal mystery about counterpossibles is reduced to the formal problem of specifying this distance measure between worlds.
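A toy version of such a distance measure can be sketched by counting how many theories are broken at each impossible world. The worlds and labels below are illustrative assumptions only.

```python
# Sketch of counterpossibles with impossible worlds, using an illustrative
# distance measure: fewer broken theories = closer world.

worlds = {
    "geo_broken":  {"broken": 1,  # only Euclidean geometry fails here
                    "right_equilateral": True, "sum_270": True,
                    "many_even_primes": False},
    "both_broken": {"broken": 2,  # geometry AND arithmetic both fail
                    "right_equilateral": True, "sum_270": True,
                    "many_even_primes": True},
}

def counterpossible(antecedent, consequent):
    """Is the consequent true at the closest (least-broken) impossible
    world where the antecedent holds?"""
    closest = min((w for w in worlds if worlds[w][antecedent]),
                  key=lambda w: worlds[w]["broken"])
    return worlds[closest][consequent]

# A right-angled equilateral triangle only needs geometry to break, so the
# closest such world leaves arithmetic (and the even primes) intact.
assert counterpossible("right_equilateral", "sum_270")               # A: true
assert not counterpossible("right_equilateral", "many_even_primes")  # B: false
```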

 

Comments (36)

But there is progress: we started with an unsolved informal problem (what is a counterfactual) and replaced it with an unsolved formal problem (how to specify the relative distance between possible worlds).

I wouldn't call the latter a formal problem, since there are no formal criteria in the problem statement for judging what constitutes a correct solution. Intuition and philosophy are still all we have to go on.

There is still progress, but I think a better way of putting it is that we have found one possible way in which the original problem might be solved.

We say that "C counterfactually implies D (in our world)" if D is true in w, where w is the closest world to our own where C is true.

This has a somewhat subtle error in it: C []-> D is true in w iff there is no world in which C is true and D is false that is closer to w than every world in which C is true and D is true. In other words, C []-> D is true in w iff for every world in which C is true and D is false, there is a closer world in which both C and D are true. (At least, I think those two statements are equivalent.)

Your definition requires that there be a closest world to w in which C is true. But there is no guarantee that there is such a world. There might be a continuum of worlds approaching w but no closest one.

My post is an "intuitive introduction to the subject", which is "no substitute for rigorously going through the formal definitions" :-) But yeah, it can get complicated, especially as the "distance measure" need not be a metric at all.

What does it do?

What goal is all this intended to accomplish?

If I try to build an AI that lacks all this stuff, what sort of real-world task can it not solve?

Stuart explained his motivations in Paraconsistency and relevance: avoid logical explosions:

For Less Wrong, the best use of relevance logic would be to articulate counterfactuals without falling into the Löbian/self-confirming trap and blowing up.

When you ask, "What does it do?" what is "it" referring to? Modal logic? Counterfactuals? Stuart's specific application of modal logic and counterfactuals?

I'll guess that you mean to ask about the whole apparatus of modal logic. Aside from Stuart's stated goals -- having to do with relevance and reactions to explosion and certain paradoxes, like the liar -- modal logics have been used to study provability, knowledge and belief, moral obligation, tense, and action, just to name a few. You might also take a look at Section 3 of van Benthem's book Modal Logic for Open Minds for some applications of modal logic.

Counterfactuals have found special application in causal inference. Lewis' approach to counterfactuals provides a semantics for the Neyman-Rubin approach to causal inference. (See, for more detail, Glymour's discussion piece following Holland's well-known 1986 paper on causal inference.) Pearl takes it a step further by proving that the do() calculus is equivalent to Rubin's and to Lewis' approaches.

Your last question is a whole lot harder to answer. Maybe all of these things can be implemented in some way that does not require any modal logic or anything formally equivalent to modal logic. One might try a non-counterfactual approach to causal inference, like that suggested by Dawid, for example. (However, Dawid's approach is not equivalent to the Neyman-Rubin-Lewis-Pearl approach: they don't always endorse the same inferences.) I don't know enough about AI approaches to the other problems to say whether or not modal logics have serious competitors. Maybe you could point me to some further reading(s)?

Wot Wei_Dai said.

I personally think that taking a probabilistic approach to logical truths (and using P(A|B) rather than B→A) is going to be the best way of fixing the Loebian and logical uncertainty problems of UDT. But before doing this, I thought I should at least glance at the work that logicians have been doing, and put it out on Less Wrong, in case someone gets inspired brainwaves from it.

If I try to build an AI that lacks all this stuff, what sort of real-world task can it not solve?

Paraconsistent logics are weaker than classical logics, so an AI with this has less stuff than a classical AI, so it can solve less. The hope is that it would still be able to solve a lot, and wouldn't be able to solve the problematic Loebian spurious proofs.

... is going to be the best way of fixing the Loebian and logical uncertainty problems of UDT

To repeat my complaint here, the problems with UDT are conundrums, not obstacles. Finding workarounds doesn't obviously explain what was wrong and why workarounds are supposed to work. This would only be useful to the extent a workaround gets understood better than an original prototype.

explain what was wrong

What was wrong is that we had yet another version of Russell's/liar/self-reference paradox. Things reasoning about themselves (even implicitly) cause problems. So looking at systems designed to avoid those paradoxes is probably worth doing.

So looking at systems designed to avoid those paradoxes is probably worth doing.

The distinction I'm making is between techniques designed to avoid problems (refuse to consider the situations that contain them, or reduce the damage they cause, symptomatic treatment) and those that allow us to resolve/understand them. For example, Goedel numbering is the kind of technique that significantly clarified what was going on with self-reference paradoxes, at which point you deal with complicated structure rather than confusing paradoxes.

gRR:

There is a provability semantics for the modal logic, which forms a model for the logic within mathematics, without using constructs like impossible possible worlds. In the provability semantics, "□A" means "A is a theorem [of some sufficiently rich formal system such as PA]".

It just occurred to me that the diagonalized implementations of ADT/UDT implicitly use this semantics: when a UDT agent looks for moral arguments
□(A()=a => U()=u)
the diagonal rule guarantees that
□(~A()=a) => A()=a
which (assuming the proof system is consistent) implies ~□(~A()=a), or ◊(A()=a), thus keeping the counterfactual worlds "possible".
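Spelled out step by step, the argument runs as follows (a sketch; it assumes the proof system is consistent and that the agent's actual output, being computable, is provable):

```latex
% From the diagonal guarantee to possibility of the action:
\Box(\neg(A()=a)) \;\Rightarrow\; A()=a
    \qquad \text{(diagonal rule)} \\
A()=a \;\Rightarrow\; \Box(A()=a)
    \qquad \text{(running the program $A$ yields a proof)} \\
\Box(\neg(A()=a)) \wedge \Box(A()=a) \;\Rightarrow\; \text{inconsistency} \\
\therefore\; \neg\Box(\neg(A()=a)),
    \quad \text{i.e.}\; \Diamond(A()=a)
```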

Marcello Herreshoff, who once worked (still does?) with Eliezer on AI, has a paper about this. (At least I think it's about this; it's a bit over my head.)

gRR:

Thanks! It looks interesting. Will need to study some of the references to understand it better.

I wonder why EY is so hostile to modal logic.

I wonder why EY is so hostile to modal logic.

He explains here.

gRR:

Philosophy is a mind killer. Modal logic is pure math.

Yep, but classifying the holonomy of normal conformal Cartan connections is also maths, and I don't post it to Less Wrong. There has to be some connection with AI or rationality or decision making.

I should have clarified that I wasn't writing in defense of modal logic, but purely using it as an illustration to introduce Kripke structures and impossible worlds, which are needed for the semantics of relevance logics (and might be useful tools in their own right).

gRR:

classifying the holonomy of normal conformal Cartan connections

The paper is rather intimidating, I don't understand anything in it. Do your results have implications at a more accessible level?

No, they don't - that was my point.

gRR:

Unlike modal logic, which does - that was mine.

But its relevant implications here are philosophical or practical, not mathematical.

gRR:

Yes, I suppose. My cached thoughts say one can never know when a mathematical result will suddenly turn out to be useful for something, but this is irrelevant. I think modal logic has direct practical uses here, as my top-level comment implies.

EY's hostility to it seems to be the result of a misunderstanding. He thinks it is a proposed explanation for phenomena of 'possibility' and 'necessity', but in fact it is a description of them. Any theory/algorithm/structure that purports to explain the phenomena must be a model of the modal logic's axioms. And, as usual, the theory tries to specify the minimal set of axioms, shows what can be derived from them, etc.

EY's hostility to it seems to be the result of a misunderstanding. He thinks it is a proposed explanation for phenomena of 'possibility' and 'necessity',

Can you summarize your reasons for believing that?

My recollection is that the relevant essay was railing against the habit of thought that causes a particular kind of thinker to come up with systems of description that encapsulate the unexplained, rather than attempting to explain the unexplained. But it's been a while since I read it carefully.

gRR:

You're right:

So what these philosophers do instead, is "bounce" off the problem into a new modal logic: A logic with symbols that embody the mysterious, opaque, unopened black box. A logic with primitives like "possible" or "necessary", to mark the places where the philosopher's brain makes an internal function call to cognitive algorithms as yet unknown. And then they publish it and say, "Look at how precisely I have defined my language!"

I'm back to not really understanding the hostility. I mean, precisely the same argument would work against probability theory. I suppose it can be argued that probability theory has lots of useful theorems, whereas modal logic appears to be only a formal language. But this is probably not quite true, and even if so, abstracting a common element from several different domains is still worth something (e.g. Category Theory).

Irritation at treating encapsulation-of-mystery as though it were an explanation is a recurring theme in EY's writing.

I won't presume to speak to the motives of someone I don't actually know here, but I will say I often find myself irritated by similar things in similar ways, and my own reason for irritation is that people stop there.

That is, I don't have a problem with "OK, we don't understand X, but we've developed some techniques for working productively with X as a black box, which is great, and now we can go on to try and understand X." I do have a problem with "We've developed some techniques for working productively with X as a black box, and that's the only kind of understanding that matters."

That said, it's what people do: the orthodox establish a hierarchy built around X as a black box, and further dissolving X is left to the heterodox.

gRR:

Yes, but transferring your irritation from bad habits of certain people to a mathematical theory seems to be a bad case of Halo effect.

I don't get the impression he was annoyed by the mathematical theory - rather that he was annoyed at me for bringing it up in less wrong, and wanted to know if I had any justification for doing so.

gRR:

I'm not sure what you mean. Do you frequently bring up modal logic in conversations? Do you think he wouldn't be annoyed if someone else brought it up? Do you think he'd be similarly annoyed if you brought up normal conformal Cartan connections instead?

EY does not think that modal logic is wrong as a mathematical theory, but that if we interpret it philosophically as its creators seem to intend, we will be led astray and believe we have gained an understanding of "necessity" and "possibility" when we haven't actually done so.

I tend to agree with him on this. His comment seemed to have the subtext "you shouldn't be bringing up these ideas on less wrong without some strong justification, it will lead people down fruitless avenues". I was bringing them up to illustrate semantics for relevance logics, but that wasn't clear in the post.

gRR:

I'm not sure modal logic's creators ever intended it as an explanation of "necessity" and "possibility". It was always a description of how the two things (and other similar modalities) should behave. Kripke semantics has more of an 'explanation' flavor, but it is also a description.

The thought that we (LW participants interested in these things) will be led astray by a slight exposure to the forbidden topic is kinda offensive. I mean, we already have a satisfactory explanation and understanding of "possibility", don't we?

Can you expand on how I would tell the difference between someone's irritation at a mathematical theory, and that person's irritation at the people who work on that theory and write about it and otherwise disseminate it into academia?

gRR:

Well, this comment, which started this discussion, clearly shows irritation. I conjecture it was not against Stuart_Armstrong's research/publishing habits.

Your conjecture is not obvious to me. If you can expand on how you made that determination, I might learn something from the expansion. (Not, of course, that this obligates you in any way.)

gRR:

I don't see what you could possibly learn beside my own thinking habits, but ok (although this is evidence collected after the fact): (1) there is nothing about modal logic in Stuart_Armstrong's publications page, and (2) EY has not commented on at least the last ten of Stuart_Armstrong's posts, except this one.

OK, thanks.

the diagonal rule guarantees that
□(~A()=a) => A()=a

Something like that, but not quite, as you can't get an implication from provability if the agent is a program: it's not decidable (unless you use a provability oracle). So you only impose some finite difficulty on (un)provability of a statement, not outright impossibility. See this comment.

gRR:

You're saying, the use of "□" is not justified, because for the agent it means "provable with a proof length of up to N" and not simply "provable"?

This is technically correct, but is this case significantly different from the cases where we assume variables in programming languages are integers, when in fact they are members of Zn for some large n? The resource limits only mean we have to additionally prove that (1) the modal logic axioms hold for all statements with sufficiently short proofs, or something like that, and (2) that the N is sufficiently large for our purposes.