Consider the following premises from which we would like to reason:

  1. Abraham Lincoln was born in Michigan.
  2. Abraham Lincoln was born in Kentucky.
  3. Eliezer Yudkowsky has a beard or he has blond hair.
  4. Eliezer Yudkowsky doesn't have blond hair.

We'd like to be able to conclude that Eliezer Yudkowsky has a beard, but we have a problem doing that. Classical logic can certainly derive that result, but because (1) and (2) contradict each other, by the principle of explosion classical logic can derive any result - including the conclusion that Eliezer doesn't have a beard. If we use a paraconsistent logic, then we're safe from explosion. But because we've rejected the disjunctive syllogism in order to avoid explosion, we can't actually put (3) and (4) together to get the result we want.
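To see the explosion mechanically, here is a toy Python sketch (the function `explode` and the string/tuple encoding of propositions are my own invention, not standard machinery): from any pair P and not-P, classical logic reaches an arbitrary Q by first weakening P to P-or-Q (addition), then applying the disjunctive syllogism with not-P.

```python
def explode(p, not_p, q):
    """Classical explosion: derive an arbitrary q from the pair (p, not-p).

    Propositions are encoded as strings; negation as a ("not", p) tuple.
    """
    assert not_p == ("not", p)      # the premises really do contradict
    disjunction = ("or", p, q)      # addition: from p, infer p-or-q
    return disjunction[2]           # disjunctive syllogism with not-p: q

# From the contradictory Lincoln premises, "derive" anything we like:
anything = explode("lincoln_born_michigan",
                   ("not", "lincoln_born_michigan"),
                   "yudkowsky_has_no_beard")
print(anything)  # yudkowsky_has_no_beard
```

This is why a paraconsistent logic must give up one of the two inference steps used here; relevance logics drop the disjunctive syllogism.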

Informally, what we'd want to do is something like "use classical logic wherever there are no contradictions, but restrict to paraconsistent logic wherever there are". Adaptive logics attempt to implement this idea. The presentation in this post will be even more informal than usual, as the ideas of adaptive logics are more useful than the specific implementations I've seen.

Between sky and earth, between classical and relevance

Adaptive logics make use of two logical systems: an upper limit logic ULL (generally taken to be classical logic) and a weaker lower limit logic LLL (we will be using a relevance logic for this). An adaptive logic then starts with a collection of premises (that may include contradictions), and reasons from there.

The basic rules are that proofs using the LLL are always valid (since it is the weaker logic, its proofs are also valid for the ULL). Proofs using the ULL are valid only if they did not use a contradiction in their reasoning. In the example above, we would freely apply classical logic when using premises (3) and (4), but would restrict ourselves to relevance logic when using premises (1) and (2).
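This rule can be sketched as a simple dispatcher (a toy of my own devising; the premise names are made up for illustration): any inference that touches a flagged contradictory premise is handed to the LLL, and everything else gets the full ULL.

```python
# Premises (1) and (2) contradict each other, so they are flagged:
CONTRADICTORY = {"lincoln_born_michigan", "lincoln_born_kentucky"}

def allowed_logic(premises_used):
    """Return which logic may perform an inference from these premises."""
    if set(premises_used) & CONTRADICTORY:
        return "LLL"   # relevance logic only: disjunctive syllogism is out
    return "ULL"       # no tainted premise touched: classical logic is safe

# (3) and (4) are untainted, so classical logic combines them freely...
assert allowed_logic(["beard_or_blond", "not_blond"]) == "ULL"
# ...but anything involving the Lincoln premises drops down to the LLL:
assert allowed_logic(["lincoln_born_michigan", "beard_or_blond"]) == "LLL"
```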

This might be enough for UDT; we would label the antecedent A()==a in the (A()==a → U()==u) statement as (potentially) contradictory, and only allow relevance logic to use it as a premise, while hitting everything else that moves with classical logic.

Dynamic proofs

However, it's also possible that the contradiction might not be so easy to spot beforehand; other contradictions may lurk deep within the premises of the algorithm. For that case, adaptive logics have a more dynamic proof system.

The method goes broadly like this: skip merrily along, proving stuff left and right, and marking each line of the proof with the premises whose consistency is being assumed. Then, if you stumble upon a contradiction, backtrack, designate some of the premises as contradictory in a sensible way, and erase anything that was derived from them using the ULL. From then on, only the LLL is allowed to handle the contradictory premises. Then resume skipping and proving until you find another contradiction, and repeat. It is also possible, depending on your algorithm, that some of the premises marked contradictory will later be unmarked (for instance, you might have known that either A or B was contradictory, and marked them both, and then later realised that it was B that was contradictory, and so unmarked A). In that case, they become available to the ULL again.
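The procedure above can be sketched as a proof log whose lines carry the set of premises they assume to be consistent (the class and method names here are hypothetical, not taken from the adaptive-logic literature): a ULL line whose assumption set meets a currently flagged premise counts as erased, an LLL line always survives, and unflagging a premise restores the lines that depended on it.

```python
class ProofLine:
    def __init__(self, formula, logic, assumed_consistent):
        self.formula = formula
        self.logic = logic                      # "ULL" or "LLL"
        self.assumed = set(assumed_consistent)  # premises assumed consistent

class AdaptiveProof:
    def __init__(self):
        self.lines = []              # the proof so far
        self.contradictory = set()   # premises currently flagged contradictory

    def add(self, formula, logic, assumed_consistent=()):
        self.lines.append(ProofLine(formula, logic, assumed_consistent))

    def flag(self, premises):
        """Backtrack: flag premises; ULL lines relying on them are erased."""
        self.contradictory |= set(premises)

    def unflag(self, premises):
        """Cleared premises become available to the ULL again."""
        self.contradictory -= set(premises)

    def erased(self, line):
        # An LLL line is always valid; a ULL line is erased if it assumed
        # the consistency of a premise that is currently flagged.
        return line.logic == "ULL" and bool(line.assumed & self.contradictory)

    def currently_derived(self):
        return [ln.formula for ln in self.lines if not self.erased(ln)]

# Skip merrily along with classical logic...
proof = AdaptiveProof()
proof.add("has_beard", "ULL", {"beard_or_blond", "not_blond"})
proof.add("born_in_two_states", "ULL", {"born_michigan", "born_kentucky"})
# ...then stumble on the Lincoln contradiction and backtrack:
proof.flag({"born_michigan", "born_kentucky"})
print(proof.currently_derived())  # ['has_beard']
```

"Currently derived" here is exactly the dynamic notion below: the beard survives the backtracking, while the line built on the contradictory pair is erased, and would reappear if those premises were later unflagged.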

Note that this dynamic proof system can be made entirely deterministic. But we need to distinguish between two notions of derivability:

  • current derivability: for those results that are derivable at some stage in the proof process, but may be erased later.
  • final derivability: for those results that are derivable at some stage in the proof process, and will never be erased later.

Notice that final derivability is not a dynamic property: it will never change; it is a statement valid "for ever and ever". It is, however, generally not recursive (nor even r.e.), so it is hard to work with in practice. The major exception is, of course, those results deduced using only the LLL: they are finally derivable by definition.

Hence adaptive logics resemble human reasoning: some things we are certain of, others we're currently sure of but may find out we're wrong about at a later date, and some things we haven't found a hole in yet but consider highly suspect. An AIXI/UDT agent using adaptive logic would be even more impossible than the normal AIXI: it would not only have to compute an uncomputable prior, but also magically establish final derivability. Whether this has a sensible approximation, à la AIXItl, remains to be seen.

And the moral of the story is?

Well, this concludes my brief introduction to paraconsistent, relevance and adaptive logics. I personally don't feel they will be the route to solving UDT's problems, but I may be wrong - and by all means if someone is inspired to take this information and fix UDT, go ahead! My feeling is that some sort of proper probabilistic approach to logical truths is going to be the way forwards (for both Löbian problems and logical uncertainty). But some of the ideas presented here may end up being useful even for a probabilistic approach (e.g. Kripke semantics).

Comments

Pearl's book has a remark somewhere relating Lewis's counterfactuals and the counterfactual approach based on graphical models/potential outcomes. I am not sure if anyone has explored this connection in more detail.

See also Pearl's paper An Axiomatic Characterization of Causal Counterfactuals (ftp.cs.ucla.edu/pub/stat_ser/R250.pdf) for remarks on the relationship to Lewis.

As an aside, the difference between current derivability and final derivability is also an answer to Penrose's argument about human intelligence being incomputable. Humanity's mathematical working process can be computable (despite claims based, say, on Gödel's theorem) because not everything that is accepted as maths is non-contradictory. Sometimes we do need to go back and discard many theorems because the axioms were inconsistent.