Pseudolikelihood is a method for approximating joint probability distributions. I'm bringing this up because I think something like it might be used in human cognition. If so, it would tend to produce overconfident estimates.
Say we have some joint distribution over X, Y, and Z, and we want the probability of some particular vector (x, y, z). The pseudolikelihood estimate involves asking yourself how likely each piece of information is, given all of the other pieces of information, and then multiplying these together. So the pseudolikelihood of (x, y, z) is P(x|yz) P(y|xz) P(z|xy).
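To make that concrete, here's a minimal sketch in Python. The joint table is random and made up by me, just for illustration; `pseudolikelihood` simply multiplies the three conditionals together:

```python
import numpy as np

# Made-up joint distribution over three binary variables (X, Y, Z):
# a 2x2x2 table of P(x, y, z), normalized to sum to 1.
rng = np.random.default_rng(0)
joint = rng.random((2, 2, 2))
joint /= joint.sum()

def pseudolikelihood(x, y, z):
    # Each factor conditions on all of the other information:
    # P(x|y,z), P(y|x,z), P(z|x,y).
    p_x = joint[x, y, z] / joint[:, y, z].sum()
    p_y = joint[x, y, z] / joint[x, :, z].sum()
    p_z = joint[x, y, z] / joint[x, y, :].sum()
    return p_x * p_y * p_z

# The pseudolikelihood generally disagrees with the true probability:
print(joint[1, 0, 1], pseudolikelihood(1, 0, 1))
```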
Not only is this wrong, but it gets more wrong as your system gets bigger. By that I mean that a ratio of two pseudolikelihoods will tend towards 0 or infinity for big problems, even if the true likelihoods are nearly equal. Roughly speaking, each dependency between two variables gets counted twice, once from each side, so correlated evidence is overweighted.
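You can see this numerically. The following sketch (my own construction, nothing rigorous) draws a random joint distribution over n binary variables and compares the average true log-likelihood gap between two random configurations with the average pseudolikelihood log-gap; the latter tends to grow with n while the former stays moderate:

```python
import numpy as np

rng = np.random.default_rng(0)

def log_pseudolikelihood(joint, idx, n):
    # Configurations are encoded as integers; for binary variables,
    # P(x_i | x_-i) = P(x) / (P(x) + P(x with bit i flipped)).
    total = 0.0
    for i in range(n):
        flipped = idx ^ (1 << i)
        total += np.log(joint[idx] / (joint[idx] + joint[flipped]))
    return total

for n in range(2, 15, 3):
    joint = rng.random(2 ** n)   # random joint table over n bits
    joint /= joint.sum()
    true_gap, pl_gap = [], []
    for _ in range(100):
        a, b = rng.choice(2 ** n, size=2, replace=False)
        true_gap.append(abs(np.log(joint[a] / joint[b])))
        pl_gap.append(abs(log_pseudolikelihood(joint, a, n)
                          - log_pseudolikelihood(joint, b, n)))
    # Average |log ratio|: true stays roughly flat, pseudolikelihood grows.
    print(n, round(float(np.mean(true_gap)), 2),
          round(float(np.mean(pl_gap)), 2))
```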
So how can we avoid this? A correct way to calculate a joint probability P(x,y,z) looks like P(x) P(y|x) P(z|xy). At each step we only condition on information "prior" to the thing we are asking about. My guess about how to do this involves making your beliefs look more like a directed acyclic graph. Given two adjacent beliefs, you need to be clear on which is the "cause" and which is the "effect." The cause talks to the effect in terms of prior probabilities, and the effect talks to the cause in terms of likelihoods.
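Here's the same toy setup as above with the chain-rule factorization instead; unlike the pseudolikelihood product, it recovers the joint exactly:

```python
import numpy as np

rng = np.random.default_rng(0)
joint = rng.random((2, 2, 2))   # same made-up joint as before
joint /= joint.sum()

def chain_rule(x, y, z):
    p_x = joint[x, :, :].sum()                                 # P(x)
    p_y_given_x = joint[x, y, :].sum() / joint[x, :, :].sum()  # P(y|x)
    p_z_given_xy = joint[x, y, z] / joint[x, y, :].sum()       # P(z|x,y)
    return p_x * p_y_given_x * p_z_given_xy

# Exact agreement for every configuration:
for x in range(2):
    for y in range(2):
        for z in range(2):
            assert np.isclose(chain_rule(x, y, z), joint[x, y, z])
```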
Failure to do this could take the form of an undirected relationship (two beliefs are "related" without either belief being the cause or the effect), or of loops in a directed graph. I don't actually think we want to get rid of undirected relationships entirely -- people do use them in machine learning -- but I can't see any good reason for keeping directed loops.
An example of a causal loop would be if you thought of math as an abstraction from everyday reality, and then turned around and calculated prior probabilities of fundamental physical theories in terms of mathematical elegance. One way out is to declare yourself a mathematical Platonist. I'm not sure what the other way would look like.
This post has a big problem in that "cause" and "effect" are totally the wrong words, because when calculating joint probabilities you don't need causal information at all.
P(x,y,z) = P(x) P(y|x) P(z|xy)
         = P(x) P(z|x) P(y|xz)
         = P(y) P(x|y) P(z|xy)
         = P(y) P(z|y) P(x|yz)
         = P(z) P(y|z) P(x|yz)
         = P(z) P(x|z) P(y|xz).
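A quick check of that identity (same made-up joint as before; `marginal` and `chain` are helper names I invented): every ordering of the chain rule gives the same number.

```python
from itertools import permutations
import numpy as np

rng = np.random.default_rng(0)
joint = rng.random((2, 2, 2))   # made-up joint over (X, Y, Z)
joint /= joint.sum()
x = (1, 0, 1)                   # an arbitrary configuration

def marginal(fixed):
    # Sum the joint over every axis not pinned down in `fixed`.
    index = tuple(fixed.get(axis, slice(None)) for axis in range(3))
    return joint[index].sum()

def chain(order):
    # Multiply P(first) P(second|first) P(third|first, second)
    # for the given ordering of the three variables.
    prob, fixed = 1.0, {}
    for axis in order:
        denom = marginal(fixed)
        fixed[axis] = x[axis]
        prob *= marginal(fixed) / denom
    return prob

for order in permutations(range(3)):
    assert np.isclose(chain(order), joint[x])
```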
OK, you're right. You only need to think about causality if you don't want your graph to be fully connected.