I did agree with the framing of the problem at the beginning, but it's unclear to me why the conclusion is anything other than "focus on how we can produce more reliable sources of information."
If you take an issue like 'people don't believe in CDC guidance,' it's possible to reform the CDC in a way that forces it to give explicit, well-supported reasoning for all of its recommendations.
Instead of fighting misinformation, we could focus on improving the processes inside institutions to make them more reliable.
The problem is reception of reliable information, not production of reliable information.
I've actually just wondered if you need to move scientific verification to some external, right-leaning institution: betting markets on scientific claims, voting on replication experiments, or something similar.
Status notes: I take the view that rational dialogue should work with good-faith people who aren't themselves practicing rational dialogue. From that point of view, this piece is about rationality. If you don't take that view, then OK, fine: it's about coordination.
Substack version
I want to help people respect people they disagree with. In this post I discuss why focusing on misinformation is such a mistake when trying to build respectful political discourse. My key points are:
Misinformation has been constant, institutional trust has fallen
If I can summarize the review "The Fake News on Fake News": people are actually pretty resistant to misinformation; however, over the last couple of decades, they have also become resistant to accepting information from our institutions. Our present situation is:
I selected and worded these suspicions to sound more like things a Democrat might say, because I’m trying to persuade Democrats that conservative misinformation draws from reasonable attitudes.
In my post title, I pointed out that it actually takes quite a bit of trust in government just to drink the tap water, and our government does not consistently earn that trust. Not trusting one’s government is a reasonable default, and that extends to not believing institutional information.
Misinformation isn’t the root of polarization
Matt Yglesias has written substantially on the misinformation “moral panic”:
The boldface is mine and the result is surprising. It looks like misinformation isn’t causing polarization, information is. Yglesias continues:
The belief that misinformation causes polarization really isn’t rooted in evidence. (I could swear there’s a word for believing a false thing without evidence.) It derives from an illusory correlation: misinformation and polarization are both bad, so we assume they co-occur, but that’s really not the case.
The following may be obvious, but it bears emphasizing since the misinformation panic can distract from it. To quote Dan Williams again:
Toward a theory of disagreement
The fundamental unit of political discourse should be disagreement, not misinformation. This is covered in the paper “Disagreement as a way to study misinformation and its effects”, which is primarily aimed at researchers who the authors feel are studying the problem poorly by focusing on misinformation, but its argument applies just as well to anyone engaging in political discourse.
In their proposed framework, disagreement is central, with misinformation being a factor, but one that can’t lead to an undesired societal outcome by itself.
The shift toward disagreement-centered politics is necessary for these reasons:
Being wrong
Do you want my list of stuff I’m mad at Democrats about? Literally everyone can relate to that, until I actually name the specific things. Fortunately, respected left-centrist Ezra Klein just wrote a book about that, in which he shows Democrats really do have a governance problem that they can’t write off as a messaging problem. As Nate Silver captured it: “The mistakes are piling up, and they’re not just rounding errors.”
This phrase, “they’re not just rounding errors,” is not really a stoic, analytical thing to say. It implies that admitting the obvious puts social, moral, or intellectual identity at stake. We have way too much riding on not being wrong. Dan Williams puts it bluntly: “This is bad and we need stronger norms against it.”
Risking being wrong is supposed to be uncomfortable. In The Mom Test, Fitzpatrick advises entrepreneurs:
This principle applies to political discourse. Learning, governing well, and dialogue all require risk: if you're unwilling to wager your intellectual pride, social belonging, or moral certainty, you cannot gain anything either. For a Democrat today, that might include:
How bad are our own filter bubbles?
Filter bubbles are caused by System 1 thinking. We are frighteningly efficient at rejecting memes of the incorrect polarity. If the algorithm kept serving up disagreeable memes, people would quite literally get pissed off and complain about how racist, biased, etc., social media is becoming.
Filter bubbles are real but people are underestimating how strongly they demand to be in one. “The algorithm” does matter, but until we’re honest that it’s not the main cause of filter bubbles, we can’t honestly explore what the algorithm actually is and is not responsible for.
From a dialogue standpoint, consuming political content via System 1 thinking is like practicing the piano badly: skipping drills and skills, not bothering to correct mistakes, and so on. The result is the illusion of explanatory depth, where a simple unexpected statistic from someone across the partisan divide reveals that one’s own understanding of the issue is built on memes and wit. In the face of embarrassment without vulnerability, people just double down with even more biting memes.
Deep canvassing
I need to touch on deep canvassing since it’s work I’m deeply involved in by way of ctctogether.org. In the deep canvassing methodology, a canvasser opens up with a story about a loved one, such as a time someone helped them get through a tough exam, deal with harassment, or simply be there when they were lonely. Canvassed voters are often very interested in dialoguing with someone who is vulnerable and honest about their humanity. This method is effective for convincing non-voters to vote, which seems to be a degree of disagreement that can be bridged in one honest conversation.
This 2016 video shows the method. The biggest change since then is that we’ve realized our story doesn’t even need to be related to the issue at hand; the real work is done just by establishing vulnerability.
Persuading anyone to believe anything is very hard, as any political operative can tell you. (But not impossible; I mean, innovate and try stuff.) Does this theory of misinformation account for that?
I think it does, once we separate the types of trust, institution, and information into “social” versus “formal” categories.
So a deep canvasser is utilizing the existing social institution of the door knock, in which it’s okay to knock on someone’s door and talk about something totally random, and deepening it by way of vulnerability. This creates the trust necessary for a two-way exchange of information with the voter, in which the voter’s reasons for rejecting the establishment’s exhortation to go vote are honored as pretty reasonable. Then new reasons to vote are shared, and for many voters, this dialogue is meaningful and motivating.
Narratives and propaganda
In my time reading Matt Yglesias and Dan Williams, I haven’t seen much on propaganda. Both of them are focused on combating the misinformation moral panic, which they associate with the misperception that there has been a rise in misinformation, when actually there’s been a drop in institutional trust.
Dan Williams points out that “disinformation itself is overwhelmingly demand driven.” That’s a good argument that LLM-based disinformation can’t penetrate much, but it’s not really hard for a politician, corporate PR department, or enemy propagandist to serve up a narrative that people believe without ever seeking it.
And yet, I would appreciate a theory of misinformation that includes a theory of narratives and a theory of propaganda, even if no one is arguing these have changed dramatically in nature in recent years.
Is the problem solvable?
Demand for better politics
An entrepreneur might interpret a statement like “Americans are totally burnt out on politics” as “the total addressable market for people who want better politics is over 100 million.”
In their 2023 poll, Pew reports that 79% of Americans describe politics with a negative word. These are the negative words:
We can mine this poll for a better estimate of the total addressable market of burnt-out Americans who might pay money just for better political discourse, without considering policy or government.
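As a rough back-of-envelope sketch of what that estimate looks like: only the 79% figure comes from the poll, and the adult-population number below is my own assumption, not Pew data.

```python
# Back-of-envelope TAM estimate (a sketch, not a rigorous figure).
US_ADULTS = 260_000_000        # approximate U.S. adult population (my assumption)
NEGATIVE_WORD_SHARE = 0.79     # Pew 2023: share describing politics with a negative word

burnt_out_adults = US_ADULTS * NEGATIVE_WORD_SHARE
print(f"~{burnt_out_adults / 1_000_000:.0f} million adults")  # ~205 million, well over 100 million
```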
This suggests the existence of a market for better political discourse. Also note that misinformation is not the subject of this poll. The “anger” and “exhaustion” Americans experience are an addressable problem in their own right.
Social media shakeups
On the network side, shakeups create an opening for innovations in how discourse happens over social media.
In examining social media, it’s also important to recognize how collections of people, related by hashtags, subreddits, follows, groups, and so forth, manage information and misinformation. A team of researchers, one of whom is almost definitely a BTS stan, surveyed BTS stan Twitter users about how they manage misinformation in their community. The lede, of course, is that a fandom is an entity that can manage misinformation, and it seems important to expand our theory of misinformation to include this social level. Their survey results are summarized as:
Changes in search
Less likely to make mainstream news is that search is now a hot space. That’s right: Google now has competitors besides whoever Bing made a deal with.
AI technology
I’m actually pretty concerned about the full spectrum of AI’s capability, but I also agree with Dan Williams that we're caught in a negativity bias. To illustrate with the simplest example possible: drugs are bad, and yet drugs can cure sick people. In his words:
Here’s a time I used AI, complete with screenshot: I was wondering whether any countries had retaliated against U.S. tariffs. Unfortunately, I had talked to a teenager earlier and still had brainrot, so I only had 13 seconds before my attention span disappeared. Wikipedia’s long, already up-to-date article is useless here. So I used Perplexity.AI:
Note how well cited this answer is, with reliable news sources. LLMs do hallucinate, but that isn’t holding up as a valid criticism of using AI, any more than “Wikipedia can be wrong” is.
I tend not to sympathize with the perennial complaint that people’s attention spans are getting shorter and we’re all getting dumber as a result.
If someone can get an answer in 15 seconds, it’s a bit of a myth that they were on the brink of curling up with Encyclopædia Britannica and just really learning. This is more realistic:
Conclusion
At the individual level: treat institutional distrust as a valid opinion, and focus your energy on why that distrust leads to the rejection of reliable information.
For entrepreneurs: this problem may be solvable after all, given the mix of changes in politics, the social media landscape, and the AI era. But avoid misinformation inoculation and building balanced news apps, because those aren’t relevant to how people accept or reject information.