> It is both numerically intractable, and occasionally computationally impossible, to maintain rational opinions about what's true when your information comes filtered through partisan news networks.
If this follows a fortiori from the claim "it's numerically intractable to update on anything", then it's not a very interesting one.
If all beliefs in a Bayesian network are bounded away from 0 and 1, then an approximate update can be done to arbitrary accuracy in polynomial time.
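A minimal sketch of the intuition behind that claim (the network, the numbers, and the choice of likelihood weighting are my own illustration, not anything from the post): when every conditional probability is bounded away from 0 and 1, the weight attached to each sample cannot vanish, so a modest number of samples pins the posterior down.

```python
import random

# Toy two-node network A -> B with all probabilities bounded away from 0 and 1.
P_A = 0.3                                # P(A = true)
P_B_GIVEN_A = {True: 0.8, False: 0.2}    # P(B = true | A)

def estimate_p_a_given_b(n_samples: int) -> float:
    """Estimate P(A = true | B = true) by likelihood weighting."""
    num = den = 0.0
    for _ in range(n_samples):
        a = random.random() < P_A        # sample A from its prior
        w = P_B_GIVEN_A[a]               # weight by likelihood of the evidence B = true
        num += w * a
        den += w
    return num / den

exact = P_A * 0.8 / (P_A * 0.8 + (1 - P_A) * 0.2)  # = 0.24 / 0.38 ≈ 0.632
print(f"exact {exact:.3f} vs estimate {estimate_p_a_given_b(100_000):.3f}")
```

Since the evidence has likelihood at least 0.2 whichever way A falls, no sample is wasted, which is the property the polynomial-time guarantee leans on.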
The pathological behavior shows up here because there are two competing but mutually exclusive belief systems, and it is hard to determine when your worldview should flip.
I hope that this makes it more interesting to you.
> Critically, if we are persuaded by either camp, we will find most of the sources in that camp believable.
Then the easy solution is to not let yourself be persuaded by either camp and assume that there are a lot of flaws in the information environment on both sides.
Your approach is to treat news outlets like they are black boxes. As I argued before, if you want to read news rationally, you need to have models of how the news outlets you read operate.
For a lot of news it's possible to understand the ground reality. If there's a new law that gets proposed, you are not limited to what journalists write about it. You can actually read the text of the law and compare it with what journalists write about it.
Court cases end with the court publishing a document with its ruling. In science journalism, you can read the papers yourself.
Freedom of information requests allow you to access a lot of government data and understand the ground reality.
Often facts become clearer over time. While it might have been hard to see that the New York Times misled its readers at the start of the Iraq war, it became much clearer later.
This would really benefit from a mathematical definition of the network and a precise statement and proof of your impossibility result.
This is a response to "Thinking About Filtered Evidence Is (Very!) Hard" that I thought deserved to be its own post.
The post said that it lacked a practical and interesting failure case for Bayes' Law. Here is a case that is both practical and interesting.
Suppose that we have a variety of sources, divided into two camps. Each camp wishes to persuade us to its point of view. Each source within each camp has a different amount of reliability, unknown to us. Critically, if we are persuaded by either camp, we will find most of the sources in that camp believable, and most of the sources in the other camp not believable.
We further know that each camp does contain reliable sources who are accurately reporting on filtered versions of the same underlying reality. We just don't know how common such sources are on either side.
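To make the setup concrete, here is a toy version of it (entirely my own construction for illustration: the "spin rate" parameter, the error rate, the grid prior, and the agreement counts below are all assumptions, not part of the claimed result). Each source answers FACTS binary questions; with unknown probability s it pushes its camp's line, otherwise it reports the truth, and every report is independently flipped with a small error rate.

```python
import math

FACTS = 20                                    # binary questions each source covers
EPS = 0.05                                    # small per-report error rate
S_GRID = [0.05 + 0.1 * k for k in range(10)]  # uniform prior over unknown spin rates

def p_agree(s: float, line_is_true: bool) -> float:
    """P(one report matches the camp's line | spin rate s, whether the line is true)."""
    if line_is_true:
        return 1 - EPS                        # truth and party line coincide
    return s * (1 - EPS) + (1 - s) * EPS      # agreement only via spin (or error)

def source_likelihood(agreements: int, line_is_true: bool) -> float:
    """P(a source matches its line on `agreements` of FACTS questions | hypothesis),
    marginalizing the source's unknown spin rate over S_GRID."""
    return sum(
        math.comb(FACTS, agreements)
        * p_agree(s, line_is_true) ** agreements
        * (1 - p_agree(s, line_is_true)) ** (FACTS - agreements)
        for s in S_GRID
    ) / len(S_GRID)

def posterior_camp1(camp1_counts, camp2_counts, prior=0.5):
    """P(camp 1's narrative is true | every source's line-agreement count)."""
    def joint(camp1_true: bool) -> float:
        p = prior if camp1_true else 1 - prior
        for a in camp1_counts:
            p *= source_likelihood(a, line_is_true=camp1_true)
        for a in camp2_counts:
            p *= source_likelihood(a, line_is_true=not camp1_true)
        return p
    j1, j2 = joint(True), joint(False)
    return j1 / (j1 + j2)

# Three sources per camp; one camp-2 source breaks from its line on 7 questions.
print(posterior_camp1(camp1_counts=[20, 19, 19], camp2_counts=[20, 20, 13]))
```

The two-mode structure shows up directly: a source that echoes its line on everything is explained equally well by "its camp is right" and by "it is pure spin," so with symmetric data the posterior sits at 0.5, and a single source breaking from its camp's line slams the posterior toward the other camp. That knife-edge is the "when should my worldview flip?" problem in miniature.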
This setup is hopefully recognizable as a description of trying to reconstruct the world from partisan news sources of varying quality. That makes it practical. I'll make it interesting by asserting some key facts about it.
The result? It is both numerically intractable, and occasionally computationally impossible, to maintain rational opinions about what's true when your information comes filtered through partisan news networks.
Any struggles we have with cognitive biases come on top of that basic impossibility result.