I'd like to write a follow-up post or two to my original post, but I don't have a good sense of what the LessWrong community mostly accepts, mostly rejects, has thought a lot about, or hasn't thought much about. Which of the topics below feel novel enough to merit a post, and for the non-novel ones, which existing posts are well regarded?

  • I don't feel like I can apply Bayesian reasoning very well in practice: whenever I try to estimate a prior or a conditional probability, I end up making assumptions about the world that are basically extra variables whose values I'm conditioning on (the first sketch after this list illustrates this). Trying to estimate priors or conditional probabilities for those variables only ropes in more assumptions about the world, and so on. Eventually I imagine I'd end up with a sprawling Bayesian or Markov network on (countably?) infinitely many variables. So I'm interested in how to reason in Bayesian or Markov networks when you have infinitely many variables flying around, especially when a Bayesian network has no parent-free nodes. That scenario feels like infinite epistemic regress.
  • I'm interested in what we can do when we're forced to cut off the above process after considering some number of variables. When reasoning in practice, I don't have time to build this infinite network of variables, so I cut the process off after some time. But imagine I had the power to collect information on how most situations "extend to infinity", i.e. some kind of probability measure on infinite Bayesian/Markov networks. Could I use such a measure to bias my priors on the nodes I'm cutting off, so that I'm effectively updating on the expected effect of extending the network to infinity? (See the second sketch after this list.)
  • A lot of the time, I can't even build this large network of conditional probabilities: for claims about the world that aren't directly grounded in reality (such as claims about which arguments are compatible with which other arguments), it's not at all clear how to ground the discussion in real-world probabilities. I tried to write about this in my first post, but commenters were concerned that I had completely detached my framework from reality, which I agree is a problem. If I have a collection of statements, some of which I can construct reasonable priors / conditional probabilities for and some of which I can't, what can I do? What are the best ways to merge a probabilistic framework with a logical framework? (The third sketch after this list gestures at one option.)
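
To make the first bullet concrete, here's a minimal sketch in plain Python (a toy sprinkler network with made-up numbers) of how a "simple" conditional like P(rain | grass is wet) silently conditions on a background assumption, here how often the sprinkler runs:

```python
# A "simple" conditional quietly conditions on an assumption variable:
# P(rain | wet) depends on what we assume about the sprinkler.
# All numbers are hypothetical.
from itertools import product

def p_rain_given_wet(p_sprinkler):
    """Posterior P(rain | wet) by brute-force enumeration of the joint."""
    p_rain = 0.2
    num = den = 0.0
    for rain, sprink in product([True, False], repeat=2):
        p = (p_rain if rain else 1 - p_rain) * \
            (p_sprinkler if sprink else 1 - p_sprinkler)
        # Noisy-OR: the grass gets wet from rain and/or the sprinkler.
        p_wet = 1 - (1 - 0.9 * rain) * (1 - 0.8 * sprink)
        den += p * p_wet
        if rain:
            num += p * p_wet
    return num / den

# The "same" posterior under two different background assumptions:
print(p_rain_given_wet(p_sprinkler=0.40))  # ~0.42
print(p_rain_given_wet(p_sprinkler=0.01))  # ~0.97
```

Putting a prior on the sprinkler assumption just pushes the problem one node further out, which is exactly the regress described above.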
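
For the second bullet, one way to operationalize "biasing the cut-off priors" is to treat the prior at a truncated node as a mixture over possible extensions of the network, sampled from some assumed measure. This is only a hedged sketch: the measure over extensions and the Beta stand-in are inventions for illustration, not a worked-out proposal.

```python
# Hedged sketch: the prior at a node where the network was cut off is
# taken to be a mixture over hypothetical "extensions" beyond the cut.
import random

random.seed(0)

def sample_extension():
    """Draw one possible 'rest of the network' beyond the cut.

    Stand-in: summarize an extension by the marginal probability it
    would induce at the cut node; here that's Beta(2, 5)-distributed.
    """
    return random.betavariate(2, 5)

def truncated_prior(n_samples=100_000):
    """Mixture prior at the cut node: the mean of P(node | extension)
    over extensions drawn from the assumed measure."""
    return sum(sample_extension() for _ in range(n_samples)) / n_samples

# Use the mixture instead of an arbitrary flat prior at the cut:
print(truncated_prior())  # ~2/7 = 0.286, the Beta(2, 5) mean
```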
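
For the third bullet, here's a toy version of gluing a probabilistic framework to a logical one: treat worlds as truth assignments, let logical constraints filter out inconsistent worlds, let the few grounded probabilities constrain the distribution, and report an interval (or a max-entropy point) for everything else. The statements and numbers are made up.

```python
# Toy merge of logic and probability: logic prunes worlds, grounded
# probabilities constrain mass, the rest stays an interval.
from itertools import product

# Worlds are truth assignments to (A, B); logical knowledge: A implies B.
worlds = [w for w in product([True, False], repeat=2)
          if (not w[0]) or w[1]]
print("consistent worlds:", worlds)  # (T,T), (F,T), (F,F); (T,F) pruned

P_A = 0.3  # the one probability we can actually ground in observations

# P(A) = 0.3 pins all of A's mass on the single world (True, True).
# The remaining 0.7 can sit anywhere on (False, True) and (False, False),
# so P(B) is only determined up to an interval.
p_b_min = P_A              # leftover mass all on (False, False)
p_b_max = P_A + (1 - P_A)  # leftover mass all on (False, True)
print(f"P(B) in [{p_b_min}, {p_b_max}]")  # [0.3, 1.0]

# One conventional point estimate: maximum entropy, which splits the
# unconstrained 0.7 evenly between the two remaining worlds.
print("max-entropy P(B):", P_A + (1 - P_A) / 2)  # 0.65
```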

Thanks!

3 comments

You seem to be talking about "combinatorial explosion". It's a classic problem in AI, and I like John Vervaeke's approach to explaining how humans solve the problem for themselves. See: http://sites.utoronto.ca/jvcourses/jolc.pdf

No one has solved it for AI yet.

Thanks for the response! I took a look at the paper you linked to; I'm pretty sure I'm not talking about combinatorial explosion. Combinatorial explosion seems to be an issue when solving problems that are mathematically well-defined but computationally intractable in practice; in my case it's not even clear that these objects are mathematically well-defined to begin with.

The paper https://www.researchgate.net/publication/335525907_The_infinite_epistemic_regress_problem_has_no_unique_solution initially looks related to what I'm thinking, but I haven't looked at it in depth yet.

Okay, that paper doesn't seem like what I was thinking of either, but it references this paper, which does seem to be on-topic: https://research.rug.nl/en/publications/justification-by-an-infinity-of-conditional-probabilities