(These are the touched up notes from a class I took with CMU's Kevin Kelly this past semester on the Topology of Learning. Only partially optimized for legibility)
One of the whole points of this project is to create a clear description of various forms of "success", and to be able to make claims about the highest form of success one can hope for given the problem being faced. The ultimate point of this is to have a useful frame for justifying the use of different methods. Now I'll introduce the gist of our formalization of methods so that we can get back to the good stuff.
In its most general form, a method M is just a function from info states to hypotheses. M:I→P(W)∖{∅}
Often I might use notation like M(±A|E) to highlight "This method is responding to a yes-or-no question A given evidence E." This is mostly useful for talking about certain relations between the question being asked and your answers.
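To make the formalism concrete, here's a toy rendering (the world labels, the lexicographic "simplicity" order, and the function names are my own illustrative assumptions, not from the notes): worlds are labels, an info state is the set of worlds not yet ruled out, and a method maps info states to nonempty hypotheses.

```python
# A method M: I -> P(W) \ {emptyset}, sketched with frozensets of world labels.

def occam_method(info_state):
    """A stand-in 'Occam' method: answer with the singleton containing the
    simplest compatible world, where 'simplest' is just lexicographic order
    (a placeholder for a real simplicity ordering)."""
    return frozenset([min(info_state)])

E = frozenset({"w1", "w2", "w3"})
print(occam_method(E))  # -> frozenset({'w1'})
```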
Gettier Problem
We are going to look at methods that respond with an articulation of an answer. So M(A|E)=A′ such that A′⊆A. There's an interesting reason this matters, and part of it has to do with the Gettier Problem.
The Gettier problem has to do with believing the right thing for the wrong reasons. Consider this example.
There are three possible worlds, boxes are possible info states, and the two hypotheses A and B are outlined. We haven't given a formal notion of what it means to be Occam / "to act in accord with simplicity". But pretend we have. An Occam method would say:
M(E1)=A, M(E2)=B and M(E3)=A. In a bit, we're going to make a big deal about the criterion of progressive learning (never drop the truth once you have it). The method I just outlined drops the truth in this problem. Suppose w3 is the true world. In E1 it says A, which is true, but then we drop the truth and say B in E2, only to return to it later. Can you see why this is sort of a Gettier problem? In E1 we proclaim "A!" but we do it for inductive reasons. It's the simplest hypothesis right now. So it is true that A, but we don't have a super sure reason for saying it. That means that when we get more info, E2, we drop the truth, because our previous "bad reason" for saying A has been disconfirmed.
Our way around these sorts of Gettier problems is to not restrict the method to only giving a yes or no answer. That's what an articulation is. A method that gives an articulation would look like M(A|E1)={w1}. This makes a lot of sense. The reason you're saying A when you're in E1 is different from the reason you say it in E3. Letting methods give articulations instead of a flat yea or nay lets you not lose that information.
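The contrast can be sketched in code. Everything below (the three worlds, hypotheses A and B, and the helper name) is an assumed toy stand-in for the diagram described above:

```python
# A = {w1, w3}, B = {w2}. The flat yes/no method answers with a whole
# hypothesis; the articulating method answers with the specific world it is
# betting on, so the *reason* for saying A at E1 vs E3 stays visible.

A = frozenset({"w1", "w3"})
B = frozenset({"w2"})

flat_answers = [A, B, A]                  # M(E1), M(E2), M(E3)
articulations = [frozenset({"w1"}),       # saying A because w1 looks simplest
                 frozenset({"w2"}),       # saying B
                 frozenset({"w3"})]       # saying A again, for a new reason

def drops_truth(true_world, answers):
    """Does the method hold the truth, lose it, and only later return to it?"""
    hits = [true_world in h for h in answers]
    return any(hits[i] and not hits[i + 1] for i in range(len(hits) - 1)) and hits[-1]

print(drops_truth("w3", flat_answers))  # -> True: the Gettier-style drop at E2
```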
Success, Convergences, and Verification
Convergence to an articulation
M converges to an articulation of A in w↔(∃E∈I(w))(∀F∈I(w|E))M(±A|F)⊆A
Plain English: A method converges to a hypothesis in a given world iff that world has some information state such that no matter what further info you get, your method will stick to its guns and give an articulation of A
Convergence to a true articulation
M converges to a true articulation of A in w↔(∃E∈I(w))(∀F∈I(w|E))w∈M(±A|F)⊆A
Plain English: Same as converging to an articulation of A, with the added stipulation that your articulations must include the world w
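Convergence is a limit notion, so no finite observation can certify it; still, the finite analogue is easy to sketch. This is my own illustrative check, assuming answers are sets of world labels as in the earlier toy setup:

```python
# Finite-trace analogue of converging to a true articulation of A in world w:
# from some index onward, every answer is a subset of A containing w.

def converges_true(trace, A, w):
    """trace: successive answers M(+-A|F) as information grows."""
    return any(all(w in ans and ans <= A for ans in trace[i:])
               for i in range(len(trace)))

A = frozenset({"w1", "w3"})
trace = [frozenset({"w2"}), frozenset({"w1"}),
         frozenset({"w3"}), frozenset({"w3"})]
print(converges_true(trace, A, "w3"))  # -> True: settles on {w3} from index 2
```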
Verification in the Limit
M verifies A in the limit in w↔M converges to a true articulation of A if w∈A and M does not converge to a true articulation of A if w∉A
Strong Verification in the Limit
M strongly verifies A in the limit in w↔M converges to a true articulation of A if w∈A and M does not converge to an articulation of A if w∉A
Difference between strong and normal verification:
Consider someone pondering whether the true form of the law is a polynomial, and what degree it is. This question can be verified in the limit, but it can't be strongly verified. Forever shifting the polynomial degree that you think the law is counts as converging to an articulation of "The true law is polynomial". To strongly verify, at some point your method would have to say "It's not polynomial!" But if you had to keep interspersing those in between your other guesses, you don't get to converge at all.
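A toy simulation of the shifting-degree method, with all the mechanics assumed for illustration: in a world where the true law is not polynomial, every conjectured degree eventually gets refuted, so the method bumps the degree forever. Each guess "degree d" is still an articulation of "the law is polynomial", which is why the method verifies but never strongly verifies.

```python
# Bump the conjectured polynomial degree whenever data refutes the current one.

def degree_guesses(refuted):
    """refuted[i]: whether the i-th data batch refuted the current degree.
    Returns the sequence of conjectured degrees."""
    guesses, d = [], 0
    for r in refuted:
        if r:
            d += 1
        guesses.append(d)
    return guesses

# A non-polynomial world keeps refuting; the guesses never stabilize:
print(degree_guesses([True, True, False, True]))  # -> [1, 2, 2, 3]
```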
Retractions
Definition: a pair (E,F) such that F⊆E∧M(F)⊈M(E) is called a retraction pair.
Here's a picture.
Basically a retraction is any time you get more information and say something that isn't strictly a refinement of your earlier hypothesis. Retractions are really important because they are going to be a key measure of success, one which we connect to various topological properties of a question.
Some brief philosophical motivation for caring about retractions: At first glance, minimizing retractions sounds like being closed-minded, and that sounds like a bad quality to have. Luckily, retractions aren't the only thing we're paying attention to when we talk about success. Often we'll talk about converging to the truth while also minimizing retractions. The closed-minded curmudgeon who sticks to their guns forever doesn't even converge to the truth in most scenarios, and is thus not appealing to us. One way to think about minimizing retractions is as "getting to the truth with minimum fuss". It's like missile pursuit.
It's totally expected that for most scientific problems, you're going to have to dodge and weave. But the more circuitous a path you take in pursuing the truth, the less it feels like it's even right to call what you are doing "pursuit". Converging to the truth while minimizing retractions is like pursuing a target with minimal waste.
A retraction chain is a sequence of info states E0⊇F0 ... En−1⊇Fn−1 such that each pair (Ei,Fi) is a retraction pair. We'd call this a retraction chain of length n.
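The retraction machinery is simple enough to sketch directly. As before, hypotheses are sets of worlds, and the simplification below (counting non-refinement steps along a single run of answers) is my own illustrative reading of the definition:

```python
# A retraction happens whenever the answer on more information fails to
# refine (be a subset of) the earlier answer.

def is_retraction_pair(h_E, h_F):
    """Given M(E) and M(F) with F a subset of E: is (E, F) a retraction pair?"""
    return not (h_F <= h_E)

def retraction_chain_length(answers):
    """Number of retraction steps along one nested sequence of info states."""
    return sum(1 for prev, cur in zip(answers, answers[1:])
               if is_retraction_pair(prev, cur))

run = [frozenset({1, 2, 3}), frozenset({1}), frozenset({2}), frozenset({2})]
print(retraction_chain_length(run))  # -> 1: only the jump from {1} to {2}
```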
N-verify
Now to the most important definition.
M n-verifies A in w⟺M verifies A in the limit in w and the longest possible retraction chain for M in w is of length ≤n
This concept is about to become very important. A sneak peek at the rest: we have some notion of different types of success you could achieve on a problem. You can verify, refute, or decide a question with 0 to ω retractions. Next we're going to hop back to topology and construct a topological notion of complexity, one that allows us to make claims like
A is n-topologically complex ⟺ there exists a method M such that M n-verifies A
If we could do that, then we'd have a way to talk about scientific problems in terms of their complexity, and have a strong way that cashes out. For a given problem, you might be able to prove upper or lower bounds on the topological complexity, and thus be able to re-calibrate expectations about what sort of success you can expect from your methods. You might be able to show that a given method achieves the best possible success, given the topological complexity of the problem. That would be pretty dope. Let's get to it.
(note: So far, for every definition of verification we have given, you can create an analogous definition for refutability and decidability)