Trying to post here since I don't see how to post to https://agentfoundations.org/.

Sphere packing was recently solved in dimension 24, and I read about it in Quanta Magazine. I found the following part of the article (paraphrased) fascinating.

Cohn and Kumar found that the best possible sphere packing in dimension 24 could be at most 0.0000000000000000000000000001 percent denser than the Leech lattice. Given this ridiculously close estimate, it seemed clear that the Leech lattice must be the best sphere packing in dimension 24.
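To get a feel for how small that margin is, here is a quick sanity check (a sketch only; it uses the known packing density of the Leech lattice, pi^12/12!, and converts the article's percentage into a plain fraction):

```python
import math

# The article's "0.0000000000000000000000000001 percent", written as a fraction:
margin = 1e-28 / 100                     # = 1e-30

# Packing density of the Leech lattice: pi**12 / 12! (about 0.00193).
leech = math.pi ** 12 / math.factorial(12)
print(leech)                             # ~0.001930

# The absolute slack the bound allows is leech * margin, about 2e-33 --
# so small that it vanishes entirely at 64-bit float precision:
print(leech * (1 + margin) == leech)     # True
```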

This is clearly a kind of reasoning under logical uncertainty, and it seems very reasonable. Most humans would probably reason similarly, even if they have no idea what the Leech lattice is.

Is this kind of reasoning covered by already known desiderata for logical uncertainty?


> Trying to post here since I don't see how to post to https://agentfoundations.org/.

People who haven't been given full membership in the forum can post links to things they have written elsewhere, but cannot make posts on the forum itself.

> Is this kind of reasoning covered by already known desiderata for logical uncertainty?

It sounds similar to the Gaifman condition. Say you have a Pi_1 sentence, meaning a sentence of the form "for all x: phi(x)", where phi(x) is computable. If you've checked all values of x up to some large number, and phi(x) has always been true so far, you might think that this means that phi(x) is probably true for all the other values of x too. The Gaifman condition says that the probability that you assign to "for all x: phi(x)" should go to 1 as the range of values of x you've checked goes to infinity.
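Here is a toy illustration of what an assignment satisfying the Gaifman condition looks like for a single Pi_1 sentence (the probability formula is an ad hoc one of my own, not a proposal from the literature):

```python
# Toy Gaifman-style assignment: after verifying phi(x) for all x <= N with no
# counterexample, assign "for all x: phi(x)" a probability that tends to 1.

def phi(x):
    # Example matrix for a true Pi_1 sentence: x**2 + x is always even.
    return (x * x + x) % 2 == 0

def prob_forall(N):
    # A single counterexample refutes the universal sentence outright.
    if any(not phi(x) for x in range(N + 1)):
        return 0.0
    return 1 - 1 / (N + 2)  # goes to 1 as N goes to infinity

for N in (10, 1000, 10**6):
    print(N, prob_forall(N))
```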

But it turns out that any computable way of handling logical uncertainty that satisfies the Gaifman condition must also assign probabilities going to 0 to some other true sentences (specifically, certain true Pi_2 sentences; see https://intelligence.org/files/Pi1Pi2Problem.pdf). This may sound alarming, but I don't think it is too surprising: after all, the theory of the natural numbers is not computable, so any logical uncertainty engine will be unable to rule out every incorrect theory, even in the limit.

Right. There's also a somewhat stronger desideratum: we should expect sequences to be simple rather than complex.

But I think there is something that lots of logical uncertainty schemes are missing, namely estimation of numerical parameters. We should be able to care whether the target region has size 0.001 or 0.000000000000000000000000000001, even when we have no positive examples, but sequence-prediction approaches don't make that distinction.

If we're willing to "cheat" a bit and give our logical uncertainty method, as an input, the class of objects we're drawing from and the numerical parameter we're comparing against, then we can just treat prior examples as draws from the distribution we're trying to learn. This captures the intuition very well, but it fits awkwardly into schemes for logical uncertainty precisely because of that cheating.
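A minimal sketch of the contrast, with uniform draws from [0, 1] standing in for the class of objects we're drawing from (the setup and function names are hypothetical):

```python
def laplace_estimate(hits, trials):
    # Rule of succession: pure sequence prediction. With zero hits it gives
    # 1/(trials + 2) regardless of how small the target region actually is.
    return (hits + 1) / (trials + 2)

def measure_estimate(region_size):
    # The "cheating" approach: treat prior examples as draws from the known
    # distribution, so the region's measure itself is the estimate.
    return region_size

trials, hits = 1000, 0  # a thousand prior examples, none in the target region
for size in (1e-3, 1e-30):
    print(size, laplace_estimate(hits, trials), measure_estimate(size))
# Sequence prediction gives ~0.000998 for both region sizes; the measure-based
# estimate distinguishes 1e-3 from 1e-30.
```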

The number 0.0000000000000000000000000001 by itself does not tell me much. Numbers in high dimensions are tricky. For example, the volume of the unit ball shrinks faster than exponentially as the dimension grows, so the unit ball in dimension 24 already has very little volume.
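A quick way to check this, using the standard formula V_n = pi^(n/2) / Gamma(n/2 + 1) for the volume of the unit ball in dimension n:

```python
import math

def unit_ball_volume(n):
    # V_n = pi**(n/2) / Gamma(n/2 + 1); peaks near n = 5, then collapses.
    return math.pi ** (n / 2) / math.gamma(n / 2 + 1)

for n in (1, 2, 3, 5, 24, 100):
    print(n, unit_ball_volume(n))
# Dimension 24 gives about 0.0019; by dimension 100 the volume is around 1e-40.
```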