I'm posting this here, since I don't see how to post to https://agentfoundations.org/.
Sphere packing was recently solved in dimension 24, and I read about it in Quanta Magazine. I found the following part of the article (paraphrased) fascinating:
Cohn and Kumar found that the best possible sphere packing in dimension 24 could be at most 0.0000000000000000000000000001 percent denser than the Leech lattice. Given this ridiculously close estimate, it seemed clear that the Leech lattice must be the best sphere packing in dimension 24.
This is clearly a kind of reasoning under logical uncertainty, and it seems very reasonable. Most humans would probably reason similarly, even if they have no idea what the Leech lattice is.
Is this kind of reasoning covered by already known desiderata for logical uncertainty?
Right. There's also a somewhat stronger desideratum: we want to expect sequences to be simple rather than complex.
But I think there's something that many logical uncertainty schemes are missing: estimation of numerical parameters. We should be able to care whether the target region has size 0.001 or 0.000000000000000000000000000001, even when we have no positive examples, but sequence-prediction approaches don't capture that.
If we're willing to "cheat" a bit and take as an input to our logical uncertainty method the class of objects we're drawing from and comparing against some numerical parameter, then we can just treat prior examples as draws from the distribution we're trying to learn. This captures the intuition well, but the required cheating makes it hard to fit into general schemes for logical uncertainty.
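To make the "cheat" concrete, here's a minimal sketch of what I have in mind. All the numbers are made up for illustration: we pretend we've seen the relative sizes of improvements in some prior record-beating examples, fit a simple distribution to them, and then ask how likely a fresh improvement would be to fit inside a gap as tiny as the Cohn–Kumar bound. The exponential model and the particular "prior improvements" are my assumptions, not anything from the sphere-packing literature.

```python
import math

# Hypothetical prior examples: relative improvements observed when other
# records were beaten (made-up numbers, purely for illustration).
prior_improvements = [0.03, 0.008, 0.11, 0.0005, 0.02]

# Model improvement sizes as exponential; the maximum-likelihood rate is
# the reciprocal of the sample mean.
rate = 1.0 / (sum(prior_improvements) / len(prior_improvements))

# The Cohn-Kumar-style bound: any improvement over the Leech lattice
# would have to be at most ~1e-30 (relative).
gap = 1e-30

# P(X <= gap) for an exponential is 1 - exp(-rate * gap); expm1 keeps
# this numerically accurate for such a tiny argument.
p_fits = -math.expm1(-rate * gap)
print(p_fits)
```

The point is just that the conclusion depends on the *size* of the remaining gap: under any distribution learned from prior examples at ordinary scales, the probability mass sitting inside a 1e-30 window is negligible, which is exactly the judgment the Quanta passage describes.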