Cross-posted from Huffington Post. See also The End of Bullshit at the Hands of Critical Rationalism.

Debating season is in full swing, and as usual the presidential candidates are playing fast and loose with the truth. Fact-checking sites such as PolitiFact and FactCheck.org have had plenty of easy targets in the debates so far. For instance, in the CNN Republican debate on September 16, Fiorina made several dubious claims about the Planned Parenthood video, as did Cruz about the Iran agreement. Similarly, in the CNN Democratic debate on October 13, Sanders falsely claimed that the U.S. has "more wealth and income inequality than any other country", while Chafee fudged the data on his Rhode Island record. No doubt we will see more of this in the rest of the presidential campaign. The fact-checkers won't have to worry about finding easy targets.

Research shows that fact-checking actually does make a difference. Incredible as it may seem, the candidates would probably have been even more careless with the truth if it weren't for the fact-checkers. To some extent, fact-checkers are a deterrent to politicians inclined to stretch the truth.

At the same time, the fact that falsehoods and misrepresentations of the truth are still so common shows that this deterrence effect is not particularly strong. This raises the question of how we can make it stronger. Is there a way to improve on PolitiFact's and FactCheck.org's model: Fact-Checking 2.0, if you will?

Spencer Greenberg of ClearerThinking and I have developed a tool which we hope could play that role. Greenberg has created an application to embed videos of recorded debates and then add subtitles to them. In these subtitles, I point out falsehoods and misrepresentations of the truth at the moment when the candidates make them. For instance, when Fiorina says about the Planned Parenthood video that there is "a fully formed fetus on the table, its heart beating, its legs kicking, while someone says we have to keep it alive to harvest its brain", I write in the subtitles:

[Image: screenshot of the subtitled video at Fiorina's Planned Parenthood claim.]

We think that reading that a candidate's statement is false at the very moment it is made could have quite a striking effect. It could trigger more visceral reactions among viewers than standard fact-checking, which is published in separate articles. Reading, over and over again, in the subtitles that what you're being told simply isn't true should outrage anyone who considers truth-telling an important quality.

Another salient feature of our subtitles is that we go beyond standard fact-checking. There are many other ways of misleading the audience besides playing fast and loose with the truth, such as evasions, ad hominem attacks and other logical fallacies. Many of these are hard for viewers to spot. We must therefore go beyond fact-checking and also do argument-checking, as we call it. If fact-checking grew more effective, and misrepresenting the truth became a less viable strategy, politicians would presumably resort more often to Plan B: evading questions where they don't want the viewers to know the truth. To stop that, we need careful argument-checking in addition to fact-checking.

So far, I've annotated the entire CNN Republican Debate, a 12-minute video from the CNN Democratic Debate (more annotations of this debate are coming) and nine short clips (1-3 minutes each) from the Fox News Republican Debate (August 6). My aim is to be as comprehensive as possible, and I think that I've captured an overwhelming majority of the factual errors, evasions, and fallacies in the clips. The videos can be found on ClearerThinking as well as below.


The CNN Republican debate, subtitled in full.


The first 12 minutes of the CNN Democratic debate.


Nine short clips from the Fox News Debate: Christie and Paul, Bush, Carson, Cruz, Huckabee, Kasich, Rubio, Trump, Walker.

What is perhaps most striking is the sheer number of falsehoods, evasions and fallacies the candidates commit. The 2-hour-55-minute CNN Republican debate contains 273 fact-checking and argument-checking comments (many of which refer to various fact-checking sites). In total, 27% of the video is subtitled. The numbers for the other videos are similar.

Conventional wisdom has it that politicians lie and deceive on a massive scale. My analyses prove conventional wisdom right. The candidates use all sorts of trickery to put themselves in a better light and smear their opponents.

All of this trickery is severely problematic from several perspectives. Firstly, it is likely to undermine the voters' confidence in the political system. This is especially true for voters on the losing side. Why be loyal to a government which has gained power by misleading the electorate? No doubt many voters do think in those terms, more or less explicitly.

It is also likely to damage the image of democracy. The American presidential election is followed all over the world by millions if not billions of people. Many of them live in countries where democracy activists are struggling to amass support against authoritarian regimes. It hardly helps them that the election debates in the U.S. and other democratic countries look like this.

All of these deceptive arguments and claims also make it harder for voters to make informed decisions. Televised debates are supposed to help voters to get a better view of the candidates' policies and track-records, but how could they, if they can't trust what is being said? This is perhaps the most serious consequence of poor debates, since it is likely to lead to poorer decisions on the part of the voters, which in turn will lead to poorer political leadership and poorer policies.

Besides functioning as a more effective lie deterrent to the candidates, improved fact-checking could also nudge the networks to adjust the set-up of the debates. The way the networks run the debates today hardly encourages serious and rational argumentation. On the contrary, they often positively goad the candidates against each other. Improved fact-checking could make it more salient to viewers how poor the debates are, and induce them to demand a better debate set-up. The networks need to come up with a format which incentivizes the candidates to argue fairly and truthfully, and which makes clear who has failed to do so. For instance, they could broadcast the debate again the next day, with fact-checking and argument-checking subtitles.

Another means to improve the debates is further technological innovation. For example, there should be a video annotation equivalent to Genius.com, the web application which allows you to annotate text on any webpage in a convenient way. That would be very useful for fact-checking and argument-checking purposes.

Fact-checking could even become automatic, as Google CEO Eric Schmidt predicted in 2006 would happen within five years. Though Schmidt was over-optimistic, Google algorithms can today fact-check websites with a high degree of accuracy, while the Washington Post has already built a rudimentary automatic fact-checker.
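To make the idea concrete, here is a deliberately naive sketch of the core of such a system, reduced to matching a structured claim against a database of accepted facts. This is not Google's or the Post's actual system; the function name, verdict labels and database contents are all invented for illustration:

```python
# Toy illustration of automatic fact-checking: match a structured
# (subject, relation, value) claim against a small database of accepted
# facts. All names and data here are invented placeholders.

FACT_DB = {
    ("barack obama", "birth year"): "1961",
    ("cnn republican debate", "length"): "2h55m",
}

def check_claim(subject, relation, claimed_value):
    """Return a verdict for a (subject, relation, value) claim."""
    known = FACT_DB.get((subject.lower(), relation.lower()))
    if known is None:
        # The hard case: the database is silent, so no verdict is possible.
        return "unverifiable"
    return "supported" if known == claimed_value else "contradicted"

print(check_claim("Barack Obama", "birth year", "1961"))  # supported
print(check_claim("Barack Obama", "birth year", "1951"))  # contradicted
print(check_claim("Putin", "respect for Obama", "none"))  # unverifiable
```

Everything difficult is omitted here: parsing a spoken sentence into a structured claim, and filling the database in the first place. A claim the database is silent on can only be marked unverifiable, which is why fully automatic fact-checking of debate rhetoric remains out of reach.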

But besides new software applications and better debating formats, we also need something else: a raised awareness among the public of what a great problem politicians' careless attitude to the truth is. Voters should ask themselves: are people inclined to mislead the voters really suited to shape the future of the world?

Politicians are normally held to high moral standards. Voters tend to take very strict views on other forms of dishonest behavior, such as cheating and tax evasion. Why, then, is it that they don't take a stricter view on intellectual dishonesty? Besides being morally objectionable, intellectual dishonesty is likely to lead to poor decisions. Voters would therefore be wise to let intellectual honesty be an important criterion when they cast their vote. If they started doing that on a grand scale, that would do more to improve the level of political debate than anything else I can think of.

Thanks to Aislinn Pluta, Doug Moore, Janko Prester, Philip Thonemann, Stella Vallgårda and Staffan Holmberg for their contributions to the annotations.


Reposting my comment from your post on Omnilibrium:

You failed to address, or even acknowledge, the question of who fact-checks the fact-checkers. For example, you mention PolitiFact; it has acquired a reputation for downplaying some politicians' lies, and in some cases even outright classifying true statements by others as lies.

In general, this proposal is just silly. After all, the media is supposed to fact-check politicians, but it is rather notorious for its own biases and even occasional lies. Why would we expect self-proclaimed fact-checkers to be any better?

Also, judging by the upvotes this post has received and the rest of the comments, it appears even most LWers will accept someone's claim to be stating facts without question.

This is a fully general counterargument to everything from Consumer Reports to Examine.com to the organic movement. Basically anything that attempts to help people be better informed can be accused of lost purposes.

[anonymous]:

I think you could steelman this as "You should only use fact-checkers who don't have significant adverse incentives". Consumer Reports and Examine.com fit the bill; PolitiFact may not.

That's fair.

and in some cases even outright classifying true statements as lies by others.

Which cases do you mean?

Why would we expect self-proclaimed fact-checkers to be any better?

They operate under a bit different incentives. PolitiFact gains less by writing sensational stories than classic news outlets.

They operate under a bit different incentives.

That's not self-evident to me. They still want eyeballs and clicks.

I basically remembered FactCheck.org's funding model and assumed PolitiFact uses the same.

PolitiFact does make money via advertising. At the same time, I expect its reputation needs to be a bit different.

I'd prefer the framing that it's not a fact-checker, but rather an inconsistency-detector. Rather than "this bot detected the claim that vaccines cause autism, which is wrong", it'd say "this bot detected the claim that vaccines cause autism, which is in conflict with the view held by The Lancet, one of the world's most prominent medical journals". Or in 1930, it might have reported "this bot detected the claim that continents drift, which is in conflict with the scientific consensus of leading geology journals".

it'd say "this bot detected the claim that vaccines cause autism, which is in conflict with the view held by The Lancet, one of the world's most prominent medical journals".

In that case, I don't see the point. After all, anti-vaxxers don't deny that there are prominent medical professionals who don't agree with their position. They, however, suspect that said professionals are doing so due to a combination of biases and money from the vaccine industry.

But not all people in the audience would react like that to michaelkeenan's example warning. Some people would presumably value being informed of authoritative sources contradicting a claim that vaccines cause autism.

(And if your objection went through for fact checking framed as contradiction reporting, why wouldn't it go through for fact checking framed as fact checking? My mental model of an anti-vaxxer has them responding as negatively to being baldly contradicted as to being informed, "The Lancet says this is wrong".)

The anti-vax thing is one of the hardest cases. More often, people are just accidentally wrong. Like this exchange at Hacker News, which had checkable claims like:

  • "The UK is a much more violent society than the US, statistically"
  • "There are dozens of U.S. cities with higher per capita murder rates than London or any other city in the UK"
  • "Murder rates are higher in the US, but murder is a small fraction of violent crime. All other violent crime is much more common in the UK than in the US."

There would also be a useful effect for observers. That Hacker News discussion contained no citations, so no-one was convinced and I doubt any observers knew what to think. But if a fact-checker bot was noting which claims were true and which weren't, then observers would know which claims were correct (or rather, which claims were consistent with official statistics).

If these fact-checkers were extremely common, they could still reach anti-vaccine people. If you're against vaccines, but you've seen the fact-checker bot be correct 99 other times, then you might give credence to its claims.

If you're against vaccines, but you've seen the fact-checker bot be correct 99 other times, then you might give credence to its claims.

That's subject to Goodhart's Law. If you start judging bots by their behavior in other cases, people will take advantage of your judging process by specifically designing bots to do poor fact checking on just a couple of issues, thus making it useless to judge bots based on their behavior in other cases.

(Of course, they won't think of it that way, they'll think of it as "using our influence to promote social change" or some such. But it will happen, and has already happened for non-bot members of the media.)

Heck, Wikipedia is the prime example.

I don't know why someone downvoted this, unless it was out of the political motivation of desiring to promote such changes in this way. It seems obviously true that this would happen.

If people on LW are using Bayesian updating properly and check comments for refutations (which some commenters love to do), then this shouldn't be as large a problem.

In order to be fact-checked, a statement has to be truth-apt in the first place. That is, it has to be the sort of statement that is capable of being true or false.

A lot of political arguments aren't truth-apt; they amount to cheering ("Georgism, boo! Synarchism, yay!") as opposed to historical claims ("Countries that adopt goat control have seen their arson rate double") or even theoretical claims ("The erotic calculation problem predicts that college-educated adults will move out of states that ban vibrators").

Your criticism would be much more interesting if you pointed to concrete problems in my fact-checking/argument-checking.

I wasn't asserting problems with your fact-checking; I was stating a limitation on the project of fact-checking in general.

The limitation is that Schubert could only find 273 issues, given his standards of what is worth commenting on, in the 2-hour-55-minute CNN Republican debate, and that's not enough to really engage with it?

Do you think that something Schubert labeled as a wrong belief shouldn't be so labeled because you don't consider it truth-apt?

But besides new software applications and better debating formats, we also need something else: a raised awareness among the public of what a great problem politicians' careless attitude to the truth is. Voters should ask themselves: are people inclined to mislead the voters really suited to shape the future of the world?

I watched the beginning of your annotation of the Republican debate. I think you did a good job at annotating it. The annotations add to the experience of watching the debate, which is likely the most important thing for making it impactful.

There were a few technical issues where the annotation froze and didn't update (I'm using Firefox).

Thanks for pointing this out! What device did you use? It works poorly on phones, but we had hoped it would work fine on computers.

I use a notebook with a 24-inch monitor plugged in.

I think I will want to see the further debates in the US presidential race in this format. At the moment I don't see a clear link where I can express that preference and get an email when the next debates are released with annotations.

Good to hear, Christian. We're currently subtitling a bit more of the CNN Democratic debate, which should be up soon. We haven't decided, though, to what extent we will subtitle future debates. This is extremely time-consuming. But you could subscribe to ClearerThinking, who are likely to announce any major new updates. (They also do lots of other rationality-related stuff; most notably rationality tests.)

I'm still fairly skeptical that algorithmically fact-checking anything complex is tractable today. The Google article states that "this is 100 percent theoretical: It’s a research paper, not a product announcement or anything equally exciting." Also, no real insights into NLP are presented; the article only suggests that an algorithm could fact-check relatively simple statements that have clear truth values by checking a large database of information. So if the database has nothing to say about the statement, the algorithm is useless. In particular, such an approach would be unable to fact-check the Fiorina quote you used as an example.

Do you think fact-checking is an inherently more difficult problem than what Watson can do?

It depends what level of fact checking is needed. Watson is well-suited for answering questions like "What year was Obama born?", because the answer is unambiguous and also fairly likely to be found in a database. I would be very surprised if Watson could fact check a statement like "Putin has absolutely no respect for President Obama", because the context needed to evaluate such a statement is not so easy to search for and interpret.

"Putin has absolutely no respect for President Obama", because the context needed to evaluate such a statement is not so easy to search for and interpret.

I'm not sure that a statement like that has to be tagged as a falsehood. I would be fine with a fact-checker that focused on statements that are more clearly false.

I think the standard for accuracy would be very different. If Watson gets something right you think "Wow that was so clever", if it's wrong you're fairly forgiving. On the other hand, I feel like if an automated fact-checker got even 1 in 10 things wrong it would be subject to insatiable rage for doing so. I think specifically correcting others is the situation in which people would have the highest standard for accuracy.

And that's before you get into the levels of subjectivity and technicality in the subject matter which something like Watson would never be subjected to.

I think the standard for accuracy would be very different. If Watson gets something right you think "Wow that was so clever", if it's wrong you're fairly forgiving.

Given that Watson gets used to make medical decisions about how to treat cancer, I don't think people are strongly forgiving.

Yes, because Watson's corpus doesn't contain people lying. On the other hand, for political fact-checking the corpus is going to have tons of lies, half-truth, and BS.

It would still be helpful to have automatic fact-checking of simple statements. Consider this Hacker News thread - two people are arguing about crime rates in the UK and USA. Someone says "The UK is a much more violent society than the US" and they argue about that, neither providing citations. That might be simple enough that natural language processing could parse it and check it against various interpretations of it. For example, one could imagine a bot that notices when people are arguing over something like that (whether on the internet or in a national election). It would provide useful relevant statistics, like the total violent crime rates in each country, or the murder rate, or whatever it thinks is relevant. If it were an ongoing software project, the programmers could notice when it's upvoted and downvoted, and improve it.
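A minimal sketch of such a bot, under toy assumptions: the topics and figures below are placeholders (not real statistics), and the matching is deliberately crude. The point is that surfacing sources can be simpler than adjudicating truth:

```python
# Hypothetical statistics-surfacing bot: if a comment mentions a topic we
# have official figures for, reply with the figures; otherwise stay silent.
# The topics and "x per 100k"-style values are placeholders, not real data.

STATS = {
    "murder rate": {"US": "x per 100k", "UK": "y per 100k"},
    "violent crime": {"US": "a per 100k", "UK": "b per 100k"},
}

def bot_reply(comment):
    """Return a statistics note if the comment touches a known topic."""
    text = comment.lower()
    for topic, figures in STATS.items():
        if topic in text:
            nums = "; ".join(f"{country}: {value}"
                             for country, value in figures.items())
            return f"Relevant statistics on '{topic}': {nums}"
    return None  # better to stay silent than to guess

print(bot_reply("Murder rates are higher in the US"))
print(bot_reply("I just prefer tea to coffee"))  # None
```

A real version would need fuzzy matching rather than substring lookup, and would have to account for countries collecting their statistics under different methodologies before presenting figures side by side.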

Consider this Hacker News thread - two people are arguing about crime rates in the UK and USA.

This is harder than it seems. The two countries use different methodologies to collect their crime statistics.

Yes, you'd want to use the International Crime Victims Survey. It's the standard way to compare crime rates between countries.

[anonymous]:

This is unbelievable. Congratulations. When I first heard the idea of real time fact checking not long ago (was that you in the other discussion thread?) I was very skeptical. I thought it was important, neglected but intractable. You've proven me and any other skeptics wrong. I hope you keep chugging on and this doesn't stay a proof of concept.

The quality of ClearerThinking's work has consistently been beyond reproach, but I'm less familiar with your personal brand. I hope you don't brand this kind of effort in a way that stifles potential competition. If this catches on, you will change politics and the future of humanity forever in a very positive way.

Okay. It seems real time fact checking exists and is sponsored by a powerful news agency. They seem to be the lone competition. How has it not taken off yet!

This is not automated fact checking. They are comparing claims against a database of things that have been fact checked by human beings already.

We think that reading that a candidate's statement is false just as it is made could have quite a striking effect. It could trigger more visceral feelings among the viewers than standard fact-checking, which is published in separate articles. To over and over again read in the subtitles that what you're being told simply isn't true should outrage anyone who finds truth-telling an important quality.

Will the outrage be directed against the politician, or against the person who claims they're wrong?

I expect that any politician could take any speech made by a politician on the other side and "fact-check" it to produce a subtitled video "correcting" their "lies". How do you propose to establish a reputation for probity of the proposed system?

There's no problem with outrage being directed against people who claim that a politician is wrong. That outrage can lead to productive discussion.

I don't think a strong reputation is necessary for people to prefer watching the debates with those subtitles rather than without them. At the same time, I think that the way Stefan Schubert annotates the videos is likely to be appreciated by many people. This is hard to judge in the abstract without watching the videos.