Ante hoc fact-checking (fact-checking before
dissemination) aims to remove errors and allow text to proceed to
dissemination (or to rejection if it fails confirmations or other
criteria). Post hoc fact-checking is most often followed by a
written report of inaccuracies, sometimes with a visual metric from the
checking organization (e.g., Pinocchios from The Washington Post Fact Checker, or TRUTH-O-METER ratings from PolitiFact). Several organizations are devoted to post hoc fact-checking, such as FactCheck.org and PolitiFact.
Research on the impact of fact-checking is relatively recent but the existing research suggests that fact-checking does indeed correct misperceptions among citizens, as well as discourage politicians from spreading misinformation.
Post hoc fact-checking
External post hoc fact-checking by independent organizations began in the United States in the early 2000s.
Consistency across fact-checkers
One study finds that fact-checkers PolitiFact, FactCheck.org, and Washington Post's Fact Checker overwhelmingly agree on their evaluations of claims.
However, a study by Morgan Marietta, David C. Barker and Todd Bowser
found "substantial differences in the questions asked and the answers
offered." They concluded that this limited the "usefulness of
fact-checking for citizens trying to decide which version of disputed
realities to believe."
A paper by Chloe Lim, Ph.D. student at Stanford University, found
little overlap in the statements that fact-checkers check. Out of 1065
fact-checks by PolitiFact and 240 fact-checks by The Washington Post's
Fact-Checker, there were only 70 statements that both fact-checkers
checked. The study found that the fact-checkers gave consistent ratings
for 56 out of the 70 statements, which means that one out of every five
times, the two fact-checkers disagreed on the accuracy of a statement.
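The disagreement rate follows directly from Lim's figures:

```python
# Agreement between PolitiFact and The Washington Post's Fact Checker
# on the 70 statements that both outlets checked.
agree, total = 56, 70
print((total - agree) / total)  # 0.2, i.e. one statement in five
```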
Effects
Studies of post hoc fact-checking have made clear that such efforts
often change the behavior of both the speaker (making them more careful
in their pronouncements) and the listener or reader (making them more
discerning about the factual accuracy of content). Observed effects
include audiences remaining completely unswayed by corrections of errors
on the most divisive subjects, being more readily persuaded by
corrections of negative reporting (e.g., "attack ads"), and changing
their minds only when the individual in error was reasonably like-minded
to begin with.
Correcting misperceptions
A 2015 study found evidence of a "backfire effect" (correcting false
information may make partisan individuals cling more strongly to their
views): "Corrective information adapted from the Centers for Disease Control and Prevention
(CDC) website significantly reduced belief in the myth that the flu
vaccine can give you the flu as well as concerns about its safety.
However, the correction also significantly reduced intent to vaccinate
among respondents with high levels of concern about vaccine side
effects--a response that was not observed among those with low levels of
concern." A 2017 study attempted to replicate the findings of the 2015 study but failed to do so.
A 2016 study found little evidence for the "backfire effect": "By
and large, citizens heed factual information, even when such
information challenges their partisan and ideological commitments." A study of Donald Trump
supporters during the 2016 race similarly found little evidence for the
backfire effect: "When respondents read a news article about Mr.
Trump's speech that included F.B.I. statistics indicating that crime had
'fallen dramatically and consistently over time,' their misperceptions
about crime declined compared with those who saw a version of the
article that omitted corrective information (though misperceptions
persisted among a sizable minority)." A 2018 study found no evidence of a backfire effect.
Studies have shown that fact-checking can affect citizens' belief in the accuracy of claims made in political advertisement. A paper by a group of Paris School of Economics and Sciences Po economists found that falsehoods by Marine Le Pen
during the 2017 French presidential election campaign (i) successfully
persuaded voters, (ii) lost their persuasiveness when fact-checked, and
(iii) did not reduce voters' political support for Le Pen when her
claims were fact-checked. A 2017 study in the Journal of Politics
found that "individuals consistently update political beliefs in the
appropriate direction, even on facts that have clear implications for
political party reputations, though they do so cautiously and with some
bias... Interestingly, those who identify with one of the political
parties are no more biased or cautious than pure independents in their
learning, conditional on initial beliefs."
A study by Yale University cognitive scientists Gordon Pennycook and David G. Rand
found that Facebook tags of fake articles "did significantly reduce
their perceived accuracy relative to a control without tags, but only
modestly". A Dartmouth study led by Brendan Nyhan found that Facebook tags had a greater impact than the Yale study found.
A "disputed" tag on a false headline reduced the number of respondents
who considered the headline accurate from 29% to 19%, whereas a "rated
false" tag pushed the number down to 16%.
The Yale study found evidence of a backfire effect among Trump
supporters younger than 26 years whereby the presence of both untagged
and tagged fake articles made the untagged fake articles appear more
accurate.
In response to research questioning the effectiveness of the Facebook
"disputed" tags, Facebook announced in December 2017 that it would drop
the tags and instead display articles fact-checking a fake news story
next to the story's link whenever it is shared on Facebook.
Based on the findings of a 2017 study in the journal Psychological Science, the most effective ways to reduce misinformation through corrections are:
- limiting detailed descriptions of, or arguments in favor of, the misinformation;
- walking through the reasons why a piece of misinformation is false rather than simply labelling it false;
- presenting new and credible information that allows readers to update their knowledge of events and understand why they developed an inaccurate understanding in the first place;
- using video, which appears to be more effective than text at increasing attention and reducing confusion, and therefore at correcting misperceptions.
A forthcoming study in the Journal of Experimental Political Science
found "strong evidence that citizens are willing to accept corrections
to fake news, regardless of their ideology and the content of the fake
stories."
A paper by Andrew Guess (of Princeton University), Brendan Nyhan
(Dartmouth College) and Jason Reifler (University of Exeter) found that
consumers of fake news tended to have less favorable views of
fact-checking, Trump supporters in particular. The paper found that fake news consumers rarely encountered fact-checks: "only about half of the Americans who visited a fake news website during the study period also saw any fact-check from one of the dedicated fact-checking website[s] (14.0%)."
A 2018 study found that Republicans were more likely to correct
their false information on voter fraud if the correction came from
Breitbart News rather than a non-partisan neutral source such as
PolitiFact.
Political discourse
A 2015 experimental study found that fact-checking can encourage politicians to not spread misinformation.
The study found that it might help improve political discourse by
increasing the reputational costs or risks of spreading misinformation
for political elites. The researchers sent "a series of letters about
the risks to their reputation and electoral security if they were caught
making questionable statements. The legislators who were sent these
letters were substantially less likely to receive a negative
fact-checking rating or to have their accuracy questioned publicly,
suggesting that fact-checking can reduce inaccuracy when it poses a
salient threat."
Political preferences
One experimental study found that fact-checking during debates affected
viewers' assessments of the candidates' debate performance and produced
"greater willingness to vote for a candidate when the fact-check
indicates that the candidate is being honest."
A study of Trump supporters during the 2016 presidential campaign
found that while fact-checks of false claims made by Trump reduced his
supporters' belief in the false claims in question, the corrections did
not alter their attitudes towards Trump.
Controversies and criticism
Political fact-checking is sometimes criticized as being opinion journalism. In September 2016, a Rasmussen Reports
national telephone and online survey found that "just 29% of all Likely
U.S. Voters trust media fact-checking of candidates' comments.
Sixty-two percent (62%) believe instead that news organizations skew the
facts to help candidates they support."
Informal fact-checking
Individual readers perform some types of fact-checking, such as comparing claims in one news story against claims in another.
Rabbi Moshe Benovitz has observed that "modern students use
their wireless worlds to augment skepticism and to reject dogma." He
says this has positive implications for values development:
"Fact-checking can become a learned skill, and technology can be harnessed in a way that makes it second nature… By finding opportunities to integrate technology into learning, students will automatically sense the beautiful blending of… their cyber… [and non-virtual worlds]. Instead of two spheres coexisting uneasily and warily orbiting one another, there is a valuable experience of synthesis…".
Detecting fake news
Fake news has become increasingly prevalent over the last few years,
with over 100 false articles and rumors spread in connection with the
2016 United States presidential election alone.
These fake news articles tend to come from satirical news websites or
from individual websites with an incentive to propagate false
information, whether as clickbait or to serve an agenda. Because such
articles are deliberately crafted to promote incorrect information, they
are quite difficult to detect.
When evaluating a source of information, one must look at many
attributes, including but not limited to the content itself and its
social media engagement. The language of fake news, specifically, is
typically more inflammatory than that of real articles, in part because
the purpose is to confuse readers and generate clicks. Furthermore,
modeling techniques such as n-gram encodings and bag-of-words
representations have served as additional linguistic signals for
determining the legitimacy of a news source.
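As a rough illustration of the bag-of-words and n-gram techniques mentioned above, the sketch below trains a minimal multinomial naive Bayes classifier on a handful of invented headlines. The training data and labels are hypothetical, for demonstration only; this is not a reproduction of any published detection model.

```python
import math
from collections import Counter

def ngrams(text, n):
    """Word n-grams of a lowercased text."""
    words = text.lower().split()
    return [" ".join(words[i:i + n]) for i in range(len(words) - n + 1)]

def features(text):
    """Bag-of-words representation: unigram and bigram counts."""
    return Counter(ngrams(text, 1) + ngrams(text, 2))

class NaiveBayes:
    """Multinomial naive Bayes with add-one smoothing over n-gram counts."""

    def fit(self, labeled_texts):
        self.counts = {}  # label -> Counter of n-gram counts
        for text, label in labeled_texts:
            self.counts.setdefault(label, Counter()).update(features(text))
        self.totals = {lbl: sum(c.values()) for lbl, c in self.counts.items()}
        self.vocab = {f for c in self.counts.values() for f in c}
        return self

    def predict(self, text):
        best_label, best_score = None, float("-inf")
        v = len(self.vocab)
        for label, c in self.counts.items():
            # log P(features | label), assuming a uniform prior over labels
            score = sum(n * math.log((c[feat] + 1) / (self.totals[label] + v))
                        for feat, n in features(text).items())
            if score > best_score:
                best_label, best_score = label, score
        return best_label

# Toy training data (hypothetical headlines)
model = NaiveBayes().fit([
    ("SHOCKING you won't believe what happened next", "fake"),
    ("this miracle cure doctors hate",                "fake"),
    ("senate passes budget bill after debate",        "real"),
    ("city council approves new transit plan",        "real"),
])
print(model.predict("you won't believe this miracle"))   # fake
print(model.predict("council approves transit budget"))  # real
```

Real systems use far larger feature sets, learned class priors, and regularized linear models, but the core idea is the same: score a document by the n-grams it shares with known-fake versus known-real text.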
Researchers have also determined that visual cues play a role in
categorizing an article: features can be designed to assess whether an
image is legitimate, providing more clarity about the story. Many
social-context features can play a role as well, as can the pattern by
which the news spreads. Websites such as Snopes try to detect this
information manually, while some universities are trying to build
mathematical models to do it automatically.
Organizations and individuals
Some individuals and organizations publish their fact-checking
efforts on the internet. These may have a special subject-matter focus,
such as Snopes.com's focus on urban legends or the Reporters' Lab at Duke University's focus on providing resources to journalists.
On-going Research in Fact-checking and Detecting Fake News
Since the 2016 United States presidential election, fake news has been a
popular topic of discussion for President Trump and news outlets. Fake
news has become omnipresent, and a great deal of research has gone into
understanding, identifying, and combating it. A number of researchers
have also investigated the use of fake news to influence the 2016
presidential campaign. One study found evidence that pro-Trump fake news
was selectively targeted at conservatives and pro-Trump supporters in
2016. The researchers found social media sites, Facebook in particular,
to be powerful platforms for spreading fake news to targeted groups in
order to appeal to their sentiments during the 2016 presidential race.
Additionally, researchers from Stanford, NYU, and NBER found evidence to show how engagement with fake news on Facebook and Twitter was high throughout 2016.
Recently, much work has gone into detecting and identifying fake news
through machine learning and artificial intelligence. In 2018,
researchers at MIT's CSAIL (Computer Science and Artificial Intelligence
Laboratory) created and tested a machine learning algorithm to identify
false information by looking for common patterns, words, and symbols
that typically appear in fake news. Moreover, they released an
open-source data set with a large catalog of historical news sources and
their veracity scores, to encourage other researchers to explore and
develop new methods and technologies for detecting fake news.
Despite the ongoing research at top universities and institutions, there
is much debate about the effectiveness of such technology in identifying
fake news. There is still not enough good training data for machine
learning and AI scientists to build highly accurate predictive models
for detecting fake news. Nonetheless, much research is ongoing to better
understand fake news and its characteristics.
Ante hoc fact-checking
Among
the benefits of printing only checked copy is that it averts serious,
sometimes costly, problems. These problems can include lawsuits for
mistakes that damage people or businesses, but even small mistakes can
cause a loss of reputation for the publication. The loss of reputation
is often the more significant motivating factor for journalists.
Fact checkers verify that the names, dates, and facts in an article or book are correct.
For example, they may contact a person who is quoted in a proposed
news article and ask the person whether this quotation is correct, or
how to spell the person's name. Fact-checkers are primarily useful in
catching accidental mistakes; they are not guaranteed safeguards against
those who wish to commit journalistic frauds.
As a career
Professional
fact checkers have generally been hired by newspapers, magazines, and
book publishers, probably starting in the early 1920s with the creation
of Time magazine in the US. Fact checkers may be aspiring writers, future editors, or freelancers engaged in other projects; others are career professionals.
Historically, the field was considered women's work,
and from the time of the first professional American fact checker
through at least the 1970s, the fact checkers at a media company might
be entirely female or primarily so.
The number of people employed in fact-checking varies by
publication. Some organizations have substantial fact-checking
departments. For example, The New Yorker magazine had 16 fact checkers in 2003.
Others may hire freelancers per piece, or may combine fact-checking
with other duties. Magazines are more likely to use fact checkers than
newspapers.
Television and radio programs rarely employ dedicated fact checkers,
and instead expect others, including senior staff, to engage in
fact-checking in addition to their other duties.
Checking original reportage
Stephen Glass began his journalism career as a fact-checker. He went on to invent fictitious stories, which he submitted as reportage, and which fact-checkers at The New Republic (and other weeklies for which he worked) never flagged. Michael Kelly,
who edited some of Glass's concocted stories, blamed himself, rather
than the fact-checkers, saying: "Any fact-checking system is built on
trust ... If a reporter is willing to fake notes, it defeats the system.
Anyway, the real vetting system is not fact-checking but the editor."
Education on fact-checking
With the circulation of fake news on the internet, many organizations
have dedicated time to creating guidelines that help readers verify the
information they are consuming. Many universities across America provide
students with resources and tools to help them verify their sources.
Universities provide access to research guides that help students
conduct thorough research with reputable academic sources. Organizations like FactCheck.org, OntheMedia.org, and PolitiFact.com provide procedural guidelines that help individuals navigate the process of fact-checking a source.
Books on professional fact-checking
- Sarah Harrison Smith worked in, and later headed, the fact-checking department at The New York Times. She is the author of the book The Fact Checker's Bible.
- Jim Fingal worked for several years as a fact-checker at The Believer and McSweeney's, and is co-author, with John D'Agata, of The Lifespan of a Fact, an inside look at the struggle between fact-checker (Fingal) and author (D'Agata) over an essay that pushed the limits of acceptable "artistic license" for a non-fiction work.
Alumni of the role
The following is a list of individuals reliably reported to have played
such a fact-checking role at some point in their careers, often as a
stepping stone to other journalistic endeavors or to an independent
writing career:
- Susan Choi – American novelist
- Anderson Cooper – television anchor
- Esther Dyson – technologist
- Nancy Franklin – New Yorker staff writer
- William Gaddis – American novelist
- Virginia Heffernan – New York Times television critic
- Roger Hodge – Former editor, Harper's Magazine
- David D. Kirkpatrick – The New York Times reporter
- Daniel Menaker – Former editor-in-chief at Random House
- David Rees – cartoonist
- Sean Wilsey – McSweeney's Editor and memoirist