A heuristic (/hjʊˈrɪstɪk/; from Ancient Greek εὑρίσκω (heurískō) 'to find, discover'), or heuristic technique, is any approach to problem solving or self-discovery that employs a practical method that is not guaranteed to be optimal, perfect, or rational, but is nevertheless sufficient for reaching an immediate, short-term goal or approximation.
Where finding an optimal solution is impossible or impractical,
heuristic methods can be used to speed up the process of finding a
satisfactory solution. Heuristics can be mental shortcuts that ease the cognitive load of making a decision.
Heuristics are the strategies derived from previous experiences
with similar problems. These strategies depend on using readily
accessible, though loosely applicable, information to control problem solving in human beings, machines and abstract issues.
When an individual applies a heuristic in practice, it generally performs as expected. However, it can also lead to systematic errors.
The most fundamental heuristic is trial and error, which can be used in
everything from matching nuts and bolts to finding the values of
variables in algebra problems. In mathematics, some common heuristics
involve the use of visual representations, additional assumptions,
forward/backward reasoning and simplification. Here are a few commonly
used heuristics from George Pólya's 1945 book, How to Solve It:
If you are having difficulty understanding a problem, try drawing a picture.
If you can't find a solution, try assuming that you have a solution
and seeing what you can derive from that ("working backward").
If the problem is abstract, try examining a concrete example.
Try solving a more general problem first (the "inventor's paradox": the more ambitious plan may have more chances of success).
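As a minimal illustration of the trial-and-error heuristic mentioned above, the following Python sketch brute-forces an integer solution to a simple algebra problem; the equation and search range are invented for illustration:

```python
# Trial and error: test candidate values one by one until the equation holds.
# The equation 3x + 7 = 22 and the range 0..99 are illustrative assumptions.

def solve_by_trial_and_error(equation_holds, candidates):
    """Return the first candidate satisfying the equation, or None."""
    for x in candidates:
        if equation_holds(x):
            return x  # a satisfactory (not necessarily unique) solution
    return None

solution = solve_by_trial_and_error(lambda x: 3 * x + 7 == 22, range(100))
print(solution)  # -> 5
```

Like any heuristic, this gives no guarantee: if no candidate in the range works, the search fails even though a solution may exist outside it.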
In psychology,
heuristics are simple, efficient rules, either learned or inculcated by
evolutionary processes. These psychological heuristics have been
proposed to explain how people make decisions, come to judgements, and
solve problems. These rules typically come into play when people face
complex problems or incomplete information. Researchers employ various
methods to test whether people use these rules. The rules have been
shown to work well under most circumstances, but in certain cases can
lead to systematic errors or cognitive biases.
History
The study of heuristics in human decision-making was developed in the 1970s and the 1980s by the psychologists Amos Tversky and Daniel Kahneman, although the concept had originally been introduced by the Nobel laureate Herbert A. Simon. Simon's primary object of research was problem solving, and he showed that we operate within what he called bounded rationality. He coined the term satisficing,
which denotes a situation in which people seek solutions, or accept
choices or judgements, that are "good enough" for their purposes
although they could be optimised.
Rudolf Groner analysed the history of heuristics from its roots in ancient Greece up to contemporary work in cognitive psychology and artificial intelligence, proposing a cognitive style "heuristic versus algorithmic thinking", which can be assessed by means of a validated questionnaire.
Adaptive toolbox
Gerd Gigerenzer
and his research group argued that models of heuristics need to be
formal to allow for predictions of behavior that can be tested. They study the fast and frugal heuristics in the "adaptive toolbox" of individuals or institutions, and the ecological rationality of these heuristics; that is, the conditions under which a given heuristic is likely to be successful. The descriptive study of the "adaptive toolbox" is done by observation and experiment, while the prescriptive study of ecological rationality requires mathematical analysis and computer simulation. Heuristics – such as the recognition heuristic, the take-the-best heuristic and fast-and-frugal trees
– have been shown to be effective in predictions, particularly in
situations of uncertainty. It is often said that heuristics trade accuracy for effort, but this is only the case in situations of risk.
Risk refers to situations where all possible actions, their outcomes and
probabilities are known. In the absence of this information, that is
under uncertainty, heuristics can achieve higher accuracy with lower
effort. This finding, known as a less-is-more effect,
would not have been found without formal models. The valuable insight
of this program is that heuristics are effective not despite their
simplicity – but because of it. Furthermore, Gigerenzer and Wolfgang Gaissmaier found that both individuals and organisations rely on heuristics in an adaptive way.
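Because these models are formal, they can be written down and tested directly. The following Python sketch shows one of them, the take-the-best heuristic, deciding which of two options scores higher on a criterion; the cue ordering and cue values are invented for illustration:

```python
import random

# Take-the-best: walk through binary cues in order of assumed validity and
# decide on the first cue that discriminates between the two options.
# The cues and values below are illustrative assumptions, not real data.

def take_the_best(option_a, option_b, cues):
    """Return 'a', 'b', or a random guess if no cue discriminates."""
    for cue in cues:  # cues assumed sorted from most to least valid
        if option_a[cue] != option_b[cue]:
            return "a" if option_a[cue] else "b"
    return random.choice(["a", "b"])  # frugal: remaining cues are ignored

# Which of two cities is larger? Hypothetical cues in validity order.
cues = ["is_capital", "has_airport", "has_university"]
city_a = {"is_capital": False, "has_airport": True, "has_university": True}
city_b = {"is_capital": False, "has_airport": False, "has_university": True}
print(take_the_best(city_a, city_b, cues))  # -> 'a' (decided by 'has_airport')
```

The heuristic stops at the first discriminating cue, which is what makes it "fast and frugal": most of the available information is deliberately ignored.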
Cognitive-experiential self-theory
Heuristics,
through greater refinement and research, have begun to be applied to
other theories, or be explained by them. For example, the cognitive-experiential self-theory
(CEST) is also an adaptive view of heuristic processing. CEST distinguishes two systems that process information. At times, roughly
speaking, individuals consider issues rationally, systematically,
logically, deliberately, effortfully, and verbally. On other occasions,
individuals consider issues intuitively, effortlessly, globally, and
emotionally.
From this perspective, heuristics are part of a larger experiential
processing system that is often adaptive, but vulnerable to error in
situations that require logical analysis.
Attribute substitution
In 2002, Daniel Kahneman and Shane Frederick proposed that cognitive heuristics work by a process called attribute substitution, which happens without conscious awareness.
According to this theory, when somebody makes a judgement (of a "target
attribute") that is computationally complex, a more easily calculated
"heuristic attribute" is substituted. In effect, a cognitively difficult
problem is dealt with by answering a rather simpler problem, without
being aware of this happening. This theory explains cases where judgements fail to show regression toward the mean. Heuristics can be considered to reduce the complexity of clinical judgments in health care.
Psychology
Heuristics (from Ancient Greek εὑρίσκω, heurískō, "I find, discover") is the process by which humans use mental shortcuts
to arrive at decisions. Heuristics are simple strategies that humans,
animals, organizations, and even machines use to quickly form judgments, make decisions, and find solutions
to complex problems. Often this involves focusing on the most relevant
aspects of a problem or situation to formulate a solution. While heuristic processes are used to find the answers and solutions that are most likely to work or be correct, they are not always right or the most accurate.
Judgments and decisions based on heuristics are simply good enough to
satisfy a pressing need in situations of uncertainty, where information
is incomplete. In that sense they can differ from answers given by logic and probability.
The economist and cognitive psychologist Herbert A. Simon
introduced the concept of heuristics in the 1950s, suggesting there
were limitations to rational decision making. In the 1970s,
psychologists Amos Tversky and Daniel Kahneman added to the field with their research on cognitive bias. It was their work that introduced specific heuristic models, a field which has only expanded since. While some argue that pure laziness is behind the heuristics process, others argue that it can be more accurate than decisions based on every known factor and consequence, the less-is-more effect.
Philosophy
A heuristic device is used when an entity X exists to enable understanding of, or knowledge concerning, some other entity Y.
A good example is a model that, as it is never identical with what it models,
is a heuristic device to enable understanding of what it models.
Stories, metaphors, etc., can also be termed heuristic in this sense. A
classic example is the notion of utopia as described in Plato's best-known work, The Republic. This means that the "ideal city" as depicted in The Republic
is not given as something to be pursued, or to present an
orientation-point for development. Rather, it shows how things would
have to be connected, and how one thing would lead to another (often
with highly problematic results), if one opted for certain principles
and carried them through rigorously.
In legal theory, especially in the theory of law and economics, heuristics are used in the law when case-by-case analysis would be impractical, insofar as "practicality" is defined by the interests of a governing body.
The present securities regulation regime largely assumes that all
investors act as perfectly rational persons. In truth, actual investors
face cognitive limitations from biases, heuristics, and framing
effects. For instance, in all states in the United States the legal drinking age
for unsupervised persons is 21 years, because it is argued that people
need to be mature enough to make decisions involving the risks of alcohol
consumption. However, assuming people mature at different rates, the
specific age of 21 would be too late for some and too early for others.
In this case, the somewhat arbitrary delineation is used because it is
impossible or impractical to tell whether an individual is sufficiently
mature for society to trust them with that kind of responsibility. Some
proposed changes, however, have included the completion of an alcohol
education course rather than the attainment of 21 years of age as the
criterion for legal alcohol possession. This would put youth alcohol
policy more on a case-by-case basis and less on a heuristic one, since
the completion of such a course would presumably be voluntary and not
uniform across the population.
The same reasoning applies to patent law. Patents
are justified on the grounds that inventors must be protected so they
have incentive to invent. It is therefore argued that it is in society's
best interest that inventors receive a temporary government-granted monopoly
on their idea, so that they can recoup investment costs and make
economic profit for a limited period. In the United States, the length
of this temporary monopoly is 20 years from the date the patent
application was filed, though the monopoly does not actually begin until
the application has matured into a patent. However, like the drinking
age problem above, the specific length of time would need to be
different for every product to be efficient. A 20-year term is used
because it is difficult to tell what the number should be for any
individual patent. More recently, some, including University of North Dakota law professor Eric E. Johnson, have argued that patents in different kinds of industries – such as software patents – should be protected for different lengths of time.
Stereotyping
Stereotyping is a type of heuristic that people use to form opinions or make judgements about things they have never seen or experienced. Stereotypes work as a mental shortcut to assess everything from the social status of a person (based on their actions) to the classification of a plant as a tree (based on its being tall, having a trunk, and having leaves, even though the person making the evaluation might never have seen that particular type of tree before).
Stereotypes, as first described by journalist Walter Lippmann in his book Public Opinion (1922), are the pictures we have in our heads that are built around experiences as well as what we are told about the world.
Artificial intelligence
A heuristic can be used in artificial intelligence systems while searching a solution space.
The heuristic is derived by using some function that is put into the
system by the designer, or by adjusting the weight of branches based on
how likely each branch is to lead to a goal node.
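As a concrete sketch of such a search, the following Python code implements greedy best-first search, in which a designer-supplied heuristic function estimates each node's distance to the goal; the graph and the estimates are invented for illustration:

```python
import heapq

# Greedy best-first search: always expand the frontier node whose
# heuristic value (estimated distance to the goal) is smallest.
# The graph and the estimates in h are illustrative assumptions.

graph = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": ["G"], "G": []}
h = {"A": 3, "B": 2, "C": 2, "D": 1, "G": 0}  # designer-supplied heuristic

def greedy_best_first(start, goal):
    """Return a path from start to goal guided by the heuristic h."""
    frontier = [(h[start], start, [start])]  # priority queue keyed on h
    visited = set()
    while frontier:
        _, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        if node in visited:
            continue
        visited.add(node)
        for neighbor in graph[node]:
            heapq.heappush(frontier, (h[neighbor], neighbor, path + [neighbor]))
    return None  # search space exhausted without reaching the goal

print(greedy_best_first("A", "G"))  # -> ['A', 'B', 'D', 'G']
```

A good heuristic makes the search fast but, as elsewhere in this article, offers no guarantee that the path found is optimal.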
Behavioural economics
Heuristics refers to the cognitive shortcuts that individuals use to simplify decision-making processes in economic situations. Behavioral economics is a field that integrates insights from psychology and economics to better understand how people make decisions.
Anchoring and adjustment is one of the most extensively
researched heuristics in behavioural economics. Anchoring is the
tendency of people to make future judgements or conclusions based too
heavily on the original information supplied to them. This initial
knowledge functions as an anchor, and it can influence future judgements
even if the anchor is entirely unrelated to the decisions at hand.
Adjustment, on the other hand, is the process through which individuals
make gradual changes to their initial judgements or conclusions.
Anchoring and adjustment
has been observed in a wide range of decision-making contexts,
including financial decision-making, consumer behavior, and negotiation.
Researchers have identified a number of strategies that can be used to
mitigate the effects of anchoring and adjustment, including providing
multiple anchors, encouraging individuals to generate alternative
anchors, and providing cognitive prompts to encourage more deliberative
decision-making.
Other heuristics studied in behavioral economics include the representativeness heuristic, which refers to the tendency of individuals to categorize objects or events based on how similar they are to typical examples, and the availability heuristic, which refers to the tendency of individuals to judge the likelihood of an event based on how easily it comes to mind.
According to Tversky and Kahneman (1973), the availability heuristic can be described as the tendency to consider events that can be remembered more easily as more likely to occur than events that are more difficult to recall.
An example of this would be asking someone whether they believe they are more likely to die from a shark attack or from drowning. Someone may quickly give the incorrect answer that a shark attack is more likely, because such an event is more easily remembered and is often covered more heavily in the news than drowning deaths. The correct answer is that people are more likely to die of drowning (1 in 1,134) than from a shark attack (1 in 4,332,817).
Representativeness heuristic
The representativeness heuristic
refers to the cognitive bias where people rely on their preconceived
mental image/prototype of a particular category or concept rather than
actual probabilities and statistical data for making judgments. This
behavior often leads to stereotyping and overgeneralization from limited information, causing errors as well as distorted views of reality.
For instance, when trying to guess someone's occupation based on
their appearance, a representative heuristic might be used by assuming
that an individual in a suit must be either a lawyer or businessperson
while assuming that someone in uniform fits the police officer or
soldier category. This shortcut could sometimes be useful but may also
result in stereotypes and overgeneralizations.
Models of scientific inquiry have two functions: first, to provide a descriptive account of how scientific inquiry is carried out in practice, and second, to provide an explanatory account of why scientific inquiry succeeds as well as it appears to do in arriving at genuine knowledge. The philosopher Wesley C. Salmon described scientific inquiry:
The search for scientific knowledge extends far back into antiquity. At
some point in the past, at least by the time of Aristotle, philosophers
recognized that a fundamental distinction should be drawn between two
kinds of scientific knowledge—roughly, knowledge that and knowledge why. It is one thing to know that
each planet periodically reverses the direction of its motion with
respect to the background of fixed stars; it is quite a different
matter to know why. Knowledge of the former type is descriptive;
knowledge of the latter type is explanatory. It is explanatory
knowledge that provides scientific understanding of the world. (Salmon,
2006, p. 3)
According to the National Research Council (United States):
"Scientific inquiry refers to the diverse ways in which scientists
study the natural world and propose explanations based on the evidence
derived from their work."
Accounts of scientific inquiry
Classical model
The classical model of scientific inquiry derives from Aristotle, who distinguished the forms of approximate and exact reasoning, set out the threefold scheme of abductive, deductive, and inductive inference, and also treated the compound forms such as reasoning by analogy.
Wesley Salmon (1989) began his historical survey of scientific explanation with what he called the received view, as it was received from Hempel and Oppenheim in the years beginning with their Studies in the Logic of Explanation (1948) and culminating in Hempel's Aspects of Scientific Explanation (1965). Salmon summed up his analysis of these developments by means of the following table.

Laws \ Explananda    Particular facts               General regularities
Universal laws       D-N (deductive-nomological)    D-N (deductive-nomological)
Statistical laws     I-S (inductive-statistical)    D-S (deductive-statistical)
In this classification, a deductive-nomological
(D-N) explanation of an occurrence is a valid deduction whose
conclusion states that the outcome to be explained did in fact occur.
The deductive argument is called an explanation, its premisses are called the explanans (Latin: "explaining") and the conclusion is called the explanandum (Latin: "to be explained"). Depending on a number of additional qualifications, an explanation may be ranked on a scale from potential to true.
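Schematically, a D-N explanation is often set out in the Hempel–Oppenheim form below; this is a generic textbook rendering rather than a formula quoted from Salmon:

```latex
% D-N schema: general laws L_1..L_r and antecedent conditions C_1..C_k
% (the explanans) jointly entail the explanandum E.
\[
\underbrace{L_1, \ldots, L_r,\; C_1, \ldots, C_k}_{\text{explanans}}
\;\therefore\;
\underbrace{E}_{\text{explanandum}}
\]
```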
Not all explanations in science are of the D-N type, however. An inductive-statistical
(I-S) explanation accounts for an occurrence by subsuming it under
statistical laws, rather than categorical or universal laws, and the
mode of subsumption is itself inductive instead of deductive. The D-N
type can be seen as a limiting case of the more general I-S type, the
measure of certainty involved being complete, or probability 1, in the former case, whereas it is less than complete, probability < 1, in the latter case.
In this view, the D-N mode of reasoning, in addition to being
used to explain particular occurrences, can also be used to explain
general regularities, simply by deducing them from still more general
laws.
Finally, the deductive-statistical (D-S) type of
explanation, properly regarded as a subclass of the D-N type, explains
statistical regularities by deduction from more comprehensive
statistical laws. (Salmon 1989, pp. 8–9).
Such was the received view of scientific explanation from the point of view of logical empiricism, that Salmon says "held sway" during the third quarter of the last century (Salmon, p. 10).
During the course of history, one theory has succeeded another, and
some have suggested further work while others have seemed content just
to explain the phenomena. The reasons why one theory has replaced
another are not always obvious or simple. The philosophy of science
includes the question: What criteria are satisfied by a 'good' theory?
This question has a long history, and many scientists, as well as
philosophers, have considered it. The objective is to be able to choose
one theory as preferable to another without introducing cognitive bias. Several often proposed criteria were summarized by Colyvan. A good theory:
contains few arbitrary elements (simplicity/parsimony);
agrees with and explains all existing observations (unificatory/explanatory power) and makes detailed predictions about future observations that can disprove or falsify the theory if they are not borne out;
is fruitful, where the emphasis by Colyvan is not only upon prediction and falsification, but also upon a theory's seminality in suggesting future work;
is elegant (formal elegance; no ad hoc modifications).
Stephen Hawking supported items 1, 2 and 4, but did not mention fruitfulness. On the other hand, Kuhn emphasizes the importance of seminality.
The goal here is to make the choice between theories less
arbitrary. Nonetheless, these criteria contain subjective elements, and
are heuristics rather than part of scientific method. Also, criteria such as these do not necessarily decide between alternative theories. Quoting Bird:
"They [such criteria] cannot
determine scientific choice. First, which features of a theory satisfy
these criteria may be disputable (e.g. does simplicity concern
the ontological commitments of a theory or its mathematical form?).
Secondly, these criteria are imprecise, and so there is room for
disagreement about the degree to which they hold. Thirdly, there can be
disagreement about how they are to be weighted relative to one another,
especially when they conflict."
— Alexander Bird, Methodological incommensurability
It also is debatable whether existing scientific theories satisfy all
these criteria, which may represent goals not yet achieved. For
example, explanatory power over all existing observations (criterion 2)
is satisfied by no one theory at the moment.
Whatever might be the ultimate
goals of some scientists, science, as it is currently practiced, depends
on multiple overlapping descriptions of the world, each of which has a
domain of applicability. In some cases this domain is very large, but in
others quite small.
— E.B. Davies, Epistemological pluralism, p. 4
The desiderata of a "good" theory have been debated for centuries, going back perhaps even earlier than Occam's razor, which often is taken as an attribute of a good theory. Occam's razor might fall under the heading of "elegance", the last item on the list, but Albert Einstein cautioned against too zealous an application: "Everything should be made as simple as possible, but no simpler." It is arguable that parsimony and elegance "typically pull in different directions".
The falsifiability item on the list is related to the criterion
proposed by Popper as demarcating a scientific theory from a theory like
astrology: both "explain" observations, but the scientific theory takes
the risk of making predictions that decide whether it is right or
wrong:
"It must be possible for an empirical scientific system to be refuted by experience."
"Those among us who are unwilling to expose their ideas to the hazard of refutation do not take part in the game of science."
— Karl Popper, The Logic of Scientific Discovery, p. 18 and p. 280
Thomas Kuhn
argued that changes in scientists' views of reality not only contain
subjective elements, but result from group dynamics, "revolutions" in
scientific practice which result in paradigm shifts. As an example, Kuhn suggested that the heliocentric "Copernican Revolution" replaced the geocentric views of Ptolemy
not because of empirical failures, but because of a new "paradigm" that
exerted control over what scientists felt to be the more fruitful way
to pursue their goals.
Deductive reasoning is the reasoning of proof, or logical implication. It is the logic used in mathematics and other axiomatic systems such as formal logic. In a deductive system, there will be axioms
(postulates) which are not proven. Indeed, they cannot be proven
without circularity. There will also be primitive terms which are not
defined, as they cannot be defined without circularity. For example, one
can define a line as a set of points, but to then define a point as the
intersection of two lines would be circular. Because of these
interesting characteristics of formal systems,
Bertrand Russell humorously referred to mathematics as "the field where
we don't know what we are talking about, nor whether or not what we say
is true". All theorems and corollaries are proven by exploring the
implications of the axiomata and other theorems that have previously
been developed. New terms are defined using the primitive terms and
other derived definitions based on those primitive terms.
In a deductive system, one can correctly use the term "proof", as
applying to a theorem. To say that a theorem is proven means that it is
impossible for the axioms to be true and the theorem to be false. For
example, we could do a simple syllogism such as the following:
(1) Arches National Park lies within the state of Utah.
(2) I am standing in Arches National Park.
(3) Therefore, I am standing in the state of Utah.
Notice that it is not possible (assuming all of the trivial
qualifying criteria are supplied) to be in Arches and not be in Utah.
However, one can be in Utah while not in Arches National Park. The
implication only works in one direction. Statements (1) and (2) taken
together imply statement (3). Statement (3) does not imply anything
about statements (1) or (2). Notice that we have not proven statement
(3), but we have shown that statements (1) and (2) together imply
statement (3). In mathematics, what is proven is not the truth of a
particular theorem, but that the axioms of the system imply the theorem.
In other words, it is impossible for the axioms to be true and the
theorem to be false. The strength of deductive systems is that they are
sure of their results. The weakness is that they are abstract constructs
which are, unfortunately, one step removed from the physical world.
They are very useful, however, as mathematics has provided great
insights into natural science by providing useful models of natural
phenomena. One result is the development of products and processes that
benefit mankind.
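The one-way character of implication that this example illustrates can be stated in standard propositional notation; this is a generic rendering, not notation taken from the text:

```latex
% Valid inference patterns, and the invalid converse.
\[
\frac{P \to Q \qquad P}{Q}\ \text{(modus ponens)}
\qquad
\frac{P \to Q \qquad \neg Q}{\neg P}\ \text{(modus tollens)}
\]
\[
P \to Q,\; Q \;\not\vdash\; P
\quad \text{(affirming the consequent: the converse does not follow)}
\]
```

Modus tollens is the pattern used below to show how a single counterexample can refute a scientific theory.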
Learning about the physical world often involves the use of inductive reasoning. It is useful in enterprises such as science and crime-scene detective work. One makes a set of specific observations, and seeks to
make a general principle based on those observations, which will point
to certain other observations that would naturally result from either a
repeat of the experiment or making more observations from a slightly
different set of circumstances. If the predicted observations hold true,
one may be on the right track. However, the general principle has not
been proven. The principle implies that certain observations should
follow, but positive observations do not imply the principle. It is
quite possible that some other principle could also account for the
known observations, and may do better with future experiments. The
implication flows in only one direction, as in the syllogism used in the
discussion on deduction. Therefore, it is never correct to say that a
scientific principle or hypothesis/theory has been "proven" in the
rigorous sense of proof used in deductive systems.
A classic example of this is the study of gravitation. Newton
formed a law for gravitation stating that the force of gravitation is
directly proportional to the product of the two masses and inversely
proportional to the square of the distance between them. For over 170
years, all observations seemed to validate his equation. However,
telescopes eventually became powerful enough to see a slight discrepancy
in the orbit of Mercury. Scientists tried everything imaginable to
explain the discrepancy, but they could not do so using the objects that
would bear on the orbit of Mercury. Eventually, Einstein developed his
theory of general relativity
and it explained the orbit of Mercury and all other known observations
dealing with gravitation. During the long period of time when scientists
were making observations that seemed to validate Newton's theory, they
did not, in fact, prove his theory to be true. However, it must have
seemed at the time that they did. It only took one counterexample
(Mercury's orbit) to prove that there was something wrong with his
theory.
This is typical of inductive reasoning. All of the observations that seem to validate the theory do not prove its truth, but one counterexample can prove it false. That means that deductive logic is used in the evaluation of a theory: if A implies B, then not-B implies not-A. Einstein's theory of general relativity has
been supported by many observations using the best scientific
instruments and experiments. However, his theory now has the same status
as Newton's theory of gravitation prior to seeing the problems in the
orbit of Mercury. It is highly credible and validated with all we know,
but it is not proven. It is only the best we have at this point in time.
Another example of correct scientific reasoning is shown in the current search for the Higgs boson. Scientists on the Compact Muon Solenoid experiment at the Large Hadron Collider
have conducted experiments yielding data suggesting the existence of
the Higgs boson. However, realizing that the results could possibly be
explained as a background fluctuation and not the Higgs boson, they are
cautious and waiting for further data from future experiments. Said
Guido Tonelli:
"We cannot exclude the presence of the Standard Model Higgs between 115 and 127 GeV
because of a modest excess of events in this mass region that appears,
quite consistently, in five independent channels [...] As of today what
we see is consistent either with a background fluctuation or with the
presence of the boson."
One way of describing scientific method would then contain these steps as a minimum:
Make a set of observations regarding the phenomenon being studied.
Form a hypothesis that might explain the observations. (This may involve inductive and/or abductive reasoning.)
Identify the implications and outcomes that must follow, if the hypothesis is to be true.
Perform other experiments or observations to see if any of the predicted outcomes fail.
If any predicted outcomes fail, the hypothesis is proven false since
if A implies B, then not B implies not A (by deduction). It is then
necessary to change the hypothesis and go back to step 3. If the
predicted outcomes are confirmed, the hypothesis is not proved, but
rather can be said to be consistent with known data.
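As a minimal sketch of the falsification logic in the steps above, the following Python code tests a toy hypothesis against observations; the hypothesis, tolerance, and data are invented placeholders:

```python
# Falsification loop: one failed prediction refutes the hypothesis
# (modus tollens); passed tests only leave it "consistent with known data".
# The hypothesis, tolerance, and observations are illustrative assumptions.

def predicted_acceleration(mass_kg):
    """Toy hypothesis: near Earth, fall acceleration is ~9.8 m/s^2,
    regardless of mass."""
    return 9.8

observations = [(1.0, 9.81), (5.0, 9.79), (20.0, 9.80)]  # (mass, measured)

def survives_tests(hypothesis, data, tolerance=0.05):
    for mass, measured in data:
        if abs(hypothesis(mass) - measured) > tolerance:
            return False  # a single counterexample falsifies the hypothesis
    return True  # not proven -- merely consistent with the data so far

print("consistent with known data"
      if survives_tests(predicted_acceleration, observations)
      else "falsified: revise the hypothesis and test again")
```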
When a hypothesis has survived a sufficient number of tests, it may be promoted to a scientific theory.
A theory is a hypothesis that has survived many tests and seems to be
consistent with other established scientific theories. Since a theory
is a promoted hypothesis, it is of the same 'logical' species and shares
the same logical limitations. Just as a hypothesis cannot be proven
but can be disproved, the same is true for a theory. It is a
difference of degree, not kind.
Arguments from analogy are another type of inductive reasoning. In
arguing from analogy, one infers that since two things are alike in
several respects, they are likely to be alike in another respect. This
is, of course, an assumption. It is natural to attempt to find
similarities between two phenomena and wonder what one can learn from
those similarities. However, to notice that two things share attributes
in several respects does not imply any similarities in other respects.
It is possible that the observer has already noticed all of the
attributes that are shared and any other attributes will be distinct.
Argument from analogy is an unreliable method of reasoning that can lead
to erroneous conclusions, and thus cannot be used to establish
scientific facts.
Fake news websites (also referred to as hoax news websites) are websites on the Internet that deliberately publish fake news—hoaxes, propaganda, and disinformation purporting to be real news—often using social media to drive web traffic and amplify their effect. Unlike news satire,
fake news websites deliberately seek to be perceived as legitimate and
taken at face value, often for financial or political gain. Such sites have promoted political falsehoods in India, Germany, Indonesia and the Philippines, Sweden, Mexico, Myanmar, and the United States. Many sites originate in, or are promoted by, Russia, North Macedonia, and Romania, among others. Some media analysts have seen them as a threat to democracy. In 2016, the European Parliament's Committee on Foreign Affairs passed a resolution warning that the Russian government was using "pseudo-news agencies" and Internet trolls as disinformation propaganda to weaken confidence in democratic values.
In 2015, the Swedish Security Service,
Sweden's national security agency, issued a report concluding Russia
was using fake news to inflame "splits in society" through the
proliferation of propaganda. Sweden's Ministry of Defence tasked its Civil Contingencies Agency with combating fake news from Russia.
Fraudulent news affected politics in Indonesia and the Philippines,
where there was simultaneously widespread usage of social media and
limited resources to check the veracity of political claims. German Chancellor Angela Merkel warned of the societal impact of "fake sites, bots, trolls".
The New York Times
has defined "fake news" on the internet as fictitious articles
deliberately fabricated to deceive readers, generally with the goal of
profiting through clickbait. PolitiFact
has described fake news as fabricated content designed to fool readers
and subsequently made viral through the Internet to crowds that increase
its dissemination.
Others have taken as constitutive the "systemic features inherent in
the design of the sources and channels through which fake news
proliferates", for example by playing to the audience's cognitive
biases, heuristics, and partisan affiliation. Some fake news websites use website spoofing, structured to make visitors believe they are visiting trusted sources like ABC News or MSNBC.
Fake news maintained a presence on the internet and in tabloid journalism in the years prior to the 2016 U.S. presidential election. Before the election campaign involving Hillary Clinton and Donald Trump, fake news had not impacted the election process and subsequent events to such a high degree. Subsequent to the 2016 election, the issue of fake news turned into a political weapon, with supporters of left-wing politics saying that supporters of right-wing politics spread false news, while the latter claimed that they were being "censored". Due to these back-and-forth complaints, the definition of fake news as used in such polemics has become vaguer.
Pre-Internet history
Unethical journalistic practices existed in printed media for hundreds of years before the advent of the Internet. Yellow journalism, reporting from a standard devoid of integrity and professional ethics, was pervasive during the period of history known as the Gilded Age, and unethical journalists would engage in fraud by fabricating stories, interviews, and made-up names for scholars. During the 1890s, the spread of this unethical news sparked violence and conflicts. Both Joseph Pulitzer and William Randolph Hearst fomented yellow journalism in order to increase profits, which helped lead to misunderstandings that were partially responsible for the outbreak of the Spanish–American War in 1898.
J.B. Montgomery-M'Govern wrote a column harshly critical of "fake news"
in 1898, saying that what characterized "fake news" was sensationalism
and "the publication of articles absolutely false, which tend to mislead
an ignorant or unsuspecting public."
North Macedonia
Much of the fake news during the 2016 U.S. presidential election season was traced to adolescents in North Macedonia, specifically Veles. It is a town of 50,000 in the middle of the country, with high unemployment, where the average wage is $4,800. The income from fake news was characterized by NBC News as a gold rush. Adults supported this income, saying they were happy the youths were working.
The mayor of Veles, Slavcho Chadiev, said he was not bothered by their
actions, as they were not against Macedonian law and their finances were
taxable. Chadiev said he was happy if deception from Veles influenced the results of the 2016 U.S. election in favor of Trump.
BuzzFeed News and The Guardian separately investigated and found teenagers in Veles created over 100 sites spreading fake news stories supportive of Donald Trump. The teenagers experimented with left-slanted fake stories about Bernie Sanders, but found that pro-Trump fictions were more popular. Prior to the 2016 election the teenagers gained revenues from fake medical advice sites. One youth named Alex stated, in an August 2016 interview with The Guardian, that this fraud would remain profitable regardless of who won the election. Alex explained he plagiarized material for articles by copying and pasting from other websites. This could net them thousands of dollars daily, but they averaged only a few thousand per month.
The Associated Press (AP) interviewed an 18-year-old in Veles about his tactics. A Google Analytics analysis of his traffic showed more than 650,000 views in one week. He plagiarized pro-Trump stories from a right-wing site called The Political Insider. He said he did not care about politics, and published fake news to gain money and experience.
The AP used DomainTools to confirm the teenager was behind fake sites,
and determined there were about 200 websites tracked to Veles focused on
U.S. news, many of which mostly contained plagiarized legitimate news
to create an appearance of credibility.
NBC News also interviewed an 18-year-old there.
Dmitri (a pseudonym) was one of the most profitable fake news operators
in town, and said about 300 people in Veles wrote for fake sites. Dmitri said he gained over $60,000 during the prior six months doing this, more than both his parents' earnings. Dmitri said his main dupes were supporters of Trump. He said that after the 2016 U.S. election he continued to earn significant amounts, and that the 2020 U.S. election would be his next project.
Romania
"Ending the Fed", a popular purveyor of fraudulent reports, was run by a 24-year-old named Ovidiu Drobota out of Oradea, Romania, who boasted to Inc. magazine about being more popular than mainstream media. Established in March 2016, "Ending the Fed" was responsible for a false story in August 2016 that incorrectly stated Fox News had fired journalist Megyn Kelly—the story was briefly prominent on Facebook on its "Trending News" section.
"Ending the Fed" held four out of the 10 most popular fake articles on
Facebook related to the 2016 U.S. election in the prior three months
before the election itself. The Facebook page for the website, called "End the Feed", had 350,000 "likes" in November 2016. After being contacted by Inc. magazine, Drobota stated he was proud of the impact he had on the 2016 U.S. election in favor of his preferred candidate Donald Trump. According to Alexa Internet, "Ending the Fed" garnered approximately 3.4 million views over a 30-day-period in November 2016. Drobota stated the majority of incoming traffic is from Facebook. He said his normal line of work before starting "Ending the Fed" included web development and search engine optimization.
Russia
Beginning in fall 2014, The New Yorker writer Adrian Chen performed a six-month investigation into Russian propaganda dissemination online by the Internet Research Agency (IRA). Yevgeny Prigozhin (Evgeny Prigozhin), a close associate of Vladimir Putin, was behind the operation, which hired hundreds of individuals to work in Saint Petersburg. The organization became regarded as a "troll farm",
a term used to refer to propaganda efforts controlling many accounts
online with the aim of artificially providing a semblance of a grassroots organization.
Chen reported that Internet trolling was used by the Russian government
as a tactic largely after observing the social media organization of
the 2011 protests against Putin.
European Union response
In 2015, the Organization for Security and Co-operation in Europe released an analysis critical of disinformation campaigns by Russia masked as news. This was intended to interfere with Ukraine's relations with Europe after the removal of former Ukrainian president Viktor Yanukovych. According to Deutsche Welle, similar tactics were used in the 2016 U.S. elections. The European Union created a taskforce to deal with Russian disinformation. The taskforce, East StratCom Team, had 11 people including Russian speakers. In November 2016, the EU voted to increase the group's funding. In November 2016, the European Parliament Committee on Foreign Affairs
passed a resolution warning of the use by Russia of tools including:
"pseudo-news agencies ... social media and internet trolls" as
disinformation to weaken democratic values. The resolution requested EU analysts investigate, explaining member nations needed to be wary of disinformation. The resolution condemned Russian sources for publicizing "absolutely fake" news reports. The tally on 23 November 2016 passed by a margin of 304 votes to 179.
Adrian Chen
observed a pattern in December 2015 where pro-Russian accounts became
supportive of 2016 U.S. presidential candidate Donald Trump. Andrew Weisburd and Clint Watts, a Foreign Policy Research Institute fellow and a senior fellow at the Center for Cyber and Homeland Security at George Washington University, wrote for The Daily Beast in August 2016 that Russian propaganda fabricated articles were popularized by social media. Weisburd and Watts documented how disinformation spread from Russia Today and Sputnik News, "the two biggest Russian state-controlled media organizations publishing in English", to pro-Russian accounts on Twitter. Citing research by Chen, Weisburd and Watts compared Russian tactics during the 2016 U.S. election to Soviet Union Cold War strategies. They referenced the 1992 United States Information Agency report to Congress, which warned about Russian propaganda called active measures. They concluded social media made active measures easier. Institute of International Relations Prague senior fellow and scholar on Russian intelligence Mark Galeotti agreed the Kremlin operations were a form of active measures. The most strident Internet promoters of Trump were not U.S. citizens but paid Russian propagandists. The Guardian estimated their number to be in the "low thousands" in November 2016.
Weisburd and Watts collaborated with colleague J. M. Berger and published a follow-up to their Daily Beast article in the online magazine War on the Rocks, titled "Trolling for Trump: How Russia is Trying to Destroy Our Democracy". They researched 7,000 pro-Trump accounts over a two-and-a-half-year period. Their research detailed trolling techniques to denigrate critics of Russian activities in Syria, and to proliferate lies about Clinton's health. Watts said the propaganda targeted the alt-right, the right wing, and fascist groups. After each presidential debate, thousands of Twitter bots used the hashtag #Trumpwon to change perceptions.
In November 2016 the Foreign Policy Research Institute stated Russian propaganda exacerbated criticism of Clinton and support for Trump. The strategy involved social media, paid Internet trolls, botnets, and websites in order to denigrate Clinton.
U.S. intelligence analysis
Computer security company FireEye concluded Russia used social media as a weapon to influence the U.S. election. FireEye Chairman David DeWalt said the 2016 operation was a new development in cyberwarfare by Russia. FireEye CEO Kevin Mandia stated Russian cyberwarfare changed after fall 2014, from covert to overt tactics with decreased operational security. Bellingcat analyst Aric Toler explained fact-checking only drew further attention to the fake news problem.
U.S. intelligence officials stated in November 2016 they believed Russia engaged in spreading fake news, and the FBI released a statement saying they were investigating. Two U.S. intelligence officials each told BuzzFeed News
they "believe Russia helped disseminate fake and propagandized news as
part of a broader effort to influence and undermine the presidential
election". The U.S. intelligence sources stated this involved "dissemination of completely fake news stories". They told BuzzFeed the FBI investigation specifically focused on why "Russia had engaged in spreading false or misleading information".
Fake news has influenced political discourse in multiple countries, including Germany, Indonesia, the Philippines, Sweden, China, Myanmar, and the United States.
Austria
Politicians
in Austria dealt with the impact of fake news and its spread on social
media after the 2016 presidential campaign in the country. In December 2016, a court in Austria issued an injunction on Facebook Europe, mandating it block negative postings related to Eva Glawischnig-Piesczek, Austrian Green Party Chairwoman. According to The Washington Post
the postings to Facebook about her "appeared to have been spread via a
fake profile" and directed derogatory epithets towards the Austrian
politician. The derogatory postings were likely created by the same fake profile that had previously been utilized to attack Alexander van der Bellen, who won the election for President of Austria.
Brazil
Brazil faced increasing influence from fake news after the 2014 re-election of President Dilma Rousseff and Rousseff's subsequent impeachment in August 2016. In the week surrounding one of the impeachment votes, 3 out of the 5 most-shared articles on Facebook in Brazil were fake. In 2015, reporter Tai Nalon resigned from her position at Brazilian newspaper Folha de S.Paulo in order to start the first fact-checking website in Brazil, called Aos Fatos (To The Facts). Nalon told The Guardian there was a great deal of fake news, and hesitated to compare the problem to that experienced in the U.S.
Canada
Fake news online was brought to the attention of Canadian politicians in November 2016, as they debated measures to assist local newspapers. Member of Parliament for Vancouver Centre Hedy Fry specifically discussed fake news as an example of ways in which publishers on the Internet are less accountable than print media. Discussion in parliament contrasted the increase of fake news online with the downsizing of Canadian newspapers and the impact on democracy in Canada.
Representatives from Facebook Canada attended the meeting and told members of Parliament they felt it was their duty to help individuals gather data online.
China
Fake news during the 2016 U.S. election spread to China. Articles popularized within the United States were translated into Chinese and spread within China. The government of China used the growing problem of fake news as a rationale for increasing Internet censorship in China in November 2016. China then published an editorial in its Communist Party newspaper The Global Times
called: "Western Media's Crusade Against Facebook", and criticized
"unpredictable" political problems posed by freedoms enjoyed by users of
Twitter, Google, and Facebook. Chinese government leaders meeting in Wuzhen at the third World Internet Conference in November 2016 said fake news in the U.S. election justified adding more curbs to free and open use of the Internet. Chinese deputy minister Ren Xianliang, an official at the Cyberspace Administration of China, said increasing online participation led to "harmful information" and fraud. Kam Chow Wong, a former Hong Kong law enforcement official and criminal justice professor at Xavier University, praised attempts in the U.S. to patrol social media. The Wall Street Journal noted China's themes of Internet censorship became more relevant at the World Internet Conference due to the outgrowth of fake news.
Finland
Officials from 11 countries held a meeting in Helsinki
in November 2016, in order to plan the formation of a center to combat
disinformation cyber-warfare including spread of fake news on social
media.
The center is planned to be located in Helsinki and to combine efforts from 10 countries, including Sweden, Germany, Finland, and the U.S. Prime Minister of Finland Juha Sipilä planned to bring a motion on the center before the Parliament of Finland in spring 2017.
Jori Arvonen, Deputy Secretary of State for EU Affairs, said cyberwarfare became an increasing problem in 2016, and included hybrid cyber-warfare intrusions into Finland from Russia and the Islamic State of Iraq and the Levant. Arvonen cited examples including fake news online, disinformation, and the "little green men" of the Russo-Ukrainian War.
France
France saw an uptick in disinformation and propaganda, primarily in the midst of election cycles. Le Monde's fact-checking division, "Les décodeurs", was headed by Samuel Laurent, who told The Guardian in December 2016 that the upcoming French presidential election campaign in spring 2017 would face problems from fake news. The country also faced controversy regarding fake websites providing false information about abortion, and the lower house of the French parliament moved forward with intentions to ban such fake sites. Laurence Rossignol, women's minister for France, informed parliament that although the fake sites looked neutral, their intent was specifically to give women false information. During the 10-year period preceding 2016, France witnessed an increase in the popularity of far-right alternative news sources called the fachosphere ("facho" referring to fascist), known as the extreme right on the Internet [fr]. According to sociologist Antoine Bevort, citing data from Alexa Internet rankings, the most consulted political websites in France included Égalité et Réconciliation, François Desouche [fr], and Les Moutons Enragés. These sites increased skepticism towards mainstream media from both left and right perspectives.
Germany
German Chancellor Angela Merkel lamented the problem of fraudulent news reports in a November 2016 speech, days after announcing her campaign for a fourth term as leader of her country. In a speech to the German parliament, Merkel was critical of such fake sites, saying they harmed political discussion. Merkel called attention to the need for government to deal with Internet trolls, bots, and fake news websites. She warned that such fraudulent news websites were a force increasing the power of populist extremism. Merkel called fraudulent news a growing phenomenon that might need to be regulated in the future. Bruno Kahl [de], chief of Germany's foreign intelligence agency, the Federal Intelligence Service, warned of the potential for cyberattacks by Russia in the 2017 German election. He said the cyberattacks would take the form of the intentional spread of disinformation, with the goal of increasing chaos in political debates. Hans-Georg Maassen, chief of Germany's domestic intelligence agency, the Federal Office for the Protection of the Constitution, said sabotage by Russian intelligence was a present threat to German information security.
India
Rasmus Kleis Nielsen, director at Reuters Institute for the Study of Journalism,
thinks that "the problems of disinformation in a society like India
might be more sophisticated and more challenging than they are in the
West".
The damage caused by fake news on social media has increased with the growth of internet penetration in India, which rose from 137 million internet users in 2012 to over 600 million in 2019. India is the largest market for WhatsApp, with over 230 million users, and as a result it is one of the main platforms on which fake news is spread. One of the main problems is that recipients believe anything sent to them over social media, owing to a lack of awareness. Various initiatives and practices have been started and adopted to curb the spread and impact of fake news. Fake news is also spread through Facebook, WhatsApp and Twitter.
According to a report by The Guardian,
the Indian media research agency CMS stated that the cause of spread of
fake news was that India "lacked (a) media policy for verification".
Additionally, law enforcement officers have arrested reporters and
journalists for "creating fictitious articles", especially when the
articles were controversial.
In India, fake news has been spread primarily by right-wing political outfits. A study published in ThePrint claimed that on Twitter there were at least 17,000 accounts spreading fake news to favour the BJP, while around 147 accounts were spreading fake news to favour the Indian National Congress.
The IT Cell of the BJP has been accused of spreading fake news against the party's political opponents, religious minorities, and campaigns critical of the party; the IT cells of the Congress and other political parties have faced similar accusations. The RSS mouthpiece Organizer has also been accused of misleading reports.
Prominent fake news-spreading websites and online resources include OpIndia, TFIPost (previously, The Frustrated Indian) and Postcard News.
Indonesia and Philippines
Fraudulent news has been particularly problematic in Indonesia and the Philippines, where social media has an outsized political influence. According to media analysts, developing countries with new access to social media and democracy experienced the fake news problem to a greater extent.
In some developing countries, Facebook gives away smartphone data free of charge for access to Facebook and selected media sources, but does not give users Internet access to fact-checking websites.
Iran
On 8 October 2020, Bloomberg reported that 92 websites used by Iran to spread misinformation were seized by the United States government.
Italy
Between 1 October and 30 November 2016, ahead of the Italian constitutional referendum, five of the ten referendum-related stories with the most social media participation were hoaxes or inaccurate. Of the three stories with the most social media attention, two were fake. Prime Minister of Italy Matteo Renzi met with U.S. President Obama and leaders of Europe at a meeting in Berlin, Germany in November 2016, and spoke about the fake news problem. Renzi hosted discussions on Facebook Live in an effort to rebut falsities online. The influence became so heavy that a senior adviser to Renzi filed a defamation complaint against an anonymous Twitter user who had used the screenname "Beatrice di Maio".
The Five Star Movement (M5S), an Italian political party founded by Beppe Grillo, managed fake news sites amplifying support for Russian news, propaganda, and inflamed conspiracy theories. The party's site TzeTze had 1.2 million Facebook fans and shared fake news and pieces supportive of Putin cited to Russia-owned sources including Sputnik News. TzeTze plagiarized the Russian sources, copying article titles and content from Sputnik. TzeTze, another site critical of Renzi called La Cosa, and a blog by Grillo were managed by the company Casaleggio Associati, which was started by Five Star Movement co-founder Gianroberto Casaleggio. Casaleggio's son Davide Casaleggio owns and manages TzeTze and La Cosa, as well as the medical advice website La Fucina, which markets anti-vaccine conspiracy theories and medical cure-all methods. Grillo's blog and the Five Star Movement fake sites use the same IP addresses and the same Google Analytics and Google AdSense accounts.
Cyberwarfare against Renzi increased, and the Italian newspaper La Stampa brought attention to false stories by Russia Today which wrongly asserted that a pro-Renzi rally in Rome was actually an anti-Renzi rally. In October 2016, the Five Star Movement disseminated a video from Kremlin-aligned Russia Today which falsely claimed to show thousands of people protesting the referendum scheduled in Italy for 4 December 2016, when in fact the video, which went on to reach 1.5 million views, showed supporters of the referendum. President of the Italian Chamber of Deputies Laura Boldrini stated: "Fake news is a critical issue and we can't ignore it. We have to act now."
Boldrini met on 30 November 2016 with Richard Allan, Facebook's vice president of public policy for Europe, to voice concerns about fake news. She said Facebook needed to admit it was a media company.
In 2022 the Italian magazine Panorama brought attention to fake news published by the website "Open di Enrico Mentana", which repeatedly reported a number of false stories regarding the Russo-Ukrainian War. These stories were eventually rejected, for lack of evidence, by Alina Dubovksa, a journalist at the Ukrainian newspaper Public; by Catalina Marchant de Abreu, a journalist at France 24, as unfounded; and by Oleksiy Mykolaiovych Arestovych, an adviser to the head of the Office of the President of Ukraine, Volodymyr Zelenskyy.
Mexico
Elections in Mexico are persistently distorted by misinformation released to the public, a practice common to every political party, whether democratic or authoritarian. Because false information easily influences voters in Mexico, it can threaten the stability of the country through the actions of misinformed citizens. Fake exit polls have also circulated through digital media outlets, meaning that citizens do not receive real data on what is happening in their elections.
Moldova
Amid the 2018 local elections in Moldova, a doctored video with mistranslated subtitles purported to show that a pro-Europe party's candidate for mayor of Chișinău (pop. 685,900), the capital of Moldova, had proposed to lease the city to the UAE for 50 years.
The video was watched more than 300,000 times on Facebook and almost
250,000 times on the Russian social network site OK.ru, which is popular
among Moldova's Russian-speaking population.
Myanmar
In 2015, fake stories using unrelated photographs and fraudulent captions were shared online in support of the Rohingya. Fake news negatively affected individuals in Myanmar, leading to a rise in violence against Muslims in the country. Online participation surged from one percent to 20 percent of Myanmar's total populace from 2014 to 2016. Fake stories from Facebook were reprinted in paper periodicals called Facebook and The Internet. False reporting about practitioners of Islam in the country was directly correlated with increased attacks on Muslims in Myanmar. Fake news falsely claimed that believers in Islam had acted violently at Buddhist locations. BuzzFeed News documented a direct relationship between the fake news and violence against Muslim people, and noted that countries relatively new to Internet exposure were more vulnerable to the problems of fake news and fraud.
Poland
In 2016, Polish historian Jerzy Targalski noted that fake news websites had infiltrated Poland through anti-establishment and right-wing sources that copied content from Russia Today. Targalski observed that about 20 specific fake news websites in Poland spread Russian disinformation in the form of fake news. One example cited was the false claim that Ukraine had declared the Polish city of Przemyśl to be Polish-occupied territory. In 2020, fake news websites related to the COVID-19 pandemic were identified and officially labelled as such by the Polish Ministry of Health.
Sweden
The Swedish Security Service issued a report in 2015 identifying Russian propaganda infiltrating Sweden with the objective of amplifying pro-Russian views and inflaming societal conflict. The Swedish Civil Contingencies Agency (MSB), part of the Ministry of Defence of Sweden, identified fake news reports targeting Sweden in 2016 that originated from Russia. MSB official Mikael Tofvesson stated that a pattern had emerged in which views critical of Sweden were constantly repeated. The MSB identified Russia Today and Sputnik News as significant purveyors of fake news. As a result of the growth of this propaganda in Sweden, the MSB planned to hire six additional security officials to fight back against the campaign of fraudulent information.
Taiwan
In December 2015, The China Post reported that a fake video shared online purported to show a light show at the Shihmen Reservoir. The Northern Region Water Resources Office confirmed there had been no light show at the reservoir and that the event had been fabricated. The hoax nonetheless led to an increase in tourist visits to the actual attraction.
Ukraine
Deutsche Welle interviewed one of the founders of Stopfake.org in 2014 about the website's efforts to debunk fake news in Ukraine, including media portrayals of the Ukrainian crisis. Co-founder Margot Gontar launched the site in March 2014, aided by volunteers. In 2014, Deutsche Welle awarded the fact-checking website the People's Choice Award for Russian at its ceremony The BOBs, which recognizes excellence in advocacy on the Internet. Gontar highlighted one example debunked by the website, in which a fictitious "Doctor Rozovskii" supposedly told The Guardian that pro-Ukraine individuals had refused to let him treat people injured in 2014 fighting with Russian supporters. Stopfake.org exposed the account as fabricated: no individual named "Doctor Rozovskii" existed, and the Facebook photo distributed with the story showed a different person from Russia with a separate identity. Former Ukrainian president Viktor Yanukovych's ouster from power created instability, and in 2015 the Organization for Security and Co-operation in Europe concluded that Russian disinformation campaigns used fake news to disrupt relations between Europe and Ukraine. The spread of disinformation by Russian-financed news outlets after the conflict in Ukraine motivated the European Union to found a specialist task force within the European External Action Service to counter the propaganda.
United Kingdom
Labour MP Michael Dugher was assigned by Deputy Leader of the Labour Party Tom Watson in November 2016 to investigate the impact of fake news spread through social media. Watson said they would work with Twitter and Facebook to root out clear-cut circumstances of "downright lies". Watson wrote an article for The Independent suggesting methods of responding to fake news, including Internet-based communities that fact-check in a manner modeled on Wikipedia. Minister for Culture Matthew Hancock stated the British government would investigate the impact of fake news and its pervasiveness on social media websites. Watson said he welcomed the government's investigation into fake news. On 8 December 2016, Chief of the Secret Intelligence Service (MI6) Alex Younger delivered a speech to journalists at MI6 headquarters in which he called fake news and propaganda damaging to democracy.
Younger said the mission of MI6 was to combat propaganda and fake news in order to give his government a strategic advantage in information warfare, and to assist other nations, including European ones. He called such online fake news propaganda a "fundamental threat to our sovereignty", and said all nations that hold democratic values should feel the same concern over fake news.
United States
After the 2016 election, Republican politicians and conservative media began to appropriate the term, using it to describe any news they saw as hostile to their agenda; according to The New York Times, Breitbart News, Rush Limbaugh and supporters of Donald Trump dismissed accurate mainstream reports, and any news they did not like, as "fake news".
U.S. response to Russia in Syria
The Russian state-operated newswire RIA Novosti, known as Sputnik International, reported fake news and fabricated statements by White House Press Secretary Josh Earnest. RIA Novosti falsely reported on 7 December 2016 that Earnest had said sanctions against Russia were on the table in relation to Syria.
RIA Novosti falsely quoted Earnest as saying: "There are a number of
things that are to be considered, including some of the financial
sanctions that the United States can administer in coordination with our
allies. I would definitely not rule that out." However, the Press Secretary never used the word "sanctions". Russia was discussed eight times during the press conference, but sanctions were never mentioned; the discussion focused solely on Russian air raids in Syria against rebels fighting Syrian President Bashar al-Assad in Aleppo.
U.S. President Barack Obama commented on fake news online in a speech the day before Election Day in 2016, saying social media spread lies and created a "dust cloud of nonsense".
Obama commented again on the problem after the election: "if we can't
discriminate between serious arguments and propaganda, then we have
problems." On 9 December 2016, President Obama ordered U.S. Intelligence Community to conduct a complete review of the Russian propaganda operation.
In his year-end press conference on 16 December 2016, President Obama
criticized a hyper-partisan atmosphere for enabling the proliferation of
fake news.
Conspiracy theories and 2016 pizzeria attack
In November 2016, fake news sites and Internet forums falsely implicated the restaurant Comet Ping Pong and Democratic Party figures as part of a fictitious child trafficking ring, dubbed "Pizzagate". The rumor was widely debunked by sources including the Metropolitan Police Department of the District of Columbia, the fact-checking website Snopes.com, The New York Times, and Fox News. The restaurant's owners were harassed and threatened, and increased their security. On 4 December 2016, a man from Salisbury, North Carolina, walked into the restaurant armed with a semi-automatic rifle and fired shots before being arrested; no one was injured. He told police he had come to "self-investigate" the conspiracy theory, and was charged with assault with a dangerous weapon, carrying a pistol without a license, unlawful discharge of a firearm, and carrying a rifle or shotgun outside the home or business. After the incident, future National Security Advisor Michael T. Flynn and his son Michael G. Flynn were criticized by many reporters for spreading the rumors.
Two days after the shooting, Trump fired Michael G. Flynn from his
transition team in connection with Flynn's Twitter posting of fake news. Days after the attack, Hillary Clinton spoke out on the dangers of fake news in a tribute speech to retiring Senator Harry Reid at the U.S. Capitol, and called the problem an epidemic.
2018 midterm elections
To track junk news shared on Facebook during the 2018 midterm elections, the Junk News Aggregator was launched by the Computational Propaganda Project of the Oxford Internet Institute, University of Oxford. The Aggregator is a public platform offering three interactive tools for tracking, in near real time, public posts shared on Facebook by junk news sources, showing both the content of these posts and the user engagement they have received.
Fact-checking websites FactCheck.org, PolitiFact.com and Snopes.com authored guides on how to respond to fraudulent news. FactCheck.org advised readers to check a publication's source, author, date, and headline, and recommended the work of colleagues at Snopes.com, The Washington Post Fact Checker, and PolitiFact.com. It also admonished consumers to be wary of confirmation bias. PolitiFact.com used a "Fake news" tag so readers could view all the stories it had debunked. Snopes.com warned readers that social media was used as a harmful tool by fraudsters. The Washington Post's "Fact Checker" manager Glenn Kessler wrote that all fact-checking sites saw increased visitors during the 2016 election cycle; unique visitors to The Fact Checker increased five-fold from the 2012 election. Will Moy, director of the London-based fact-checker Full Fact, said debunking must take place over a sustained period to be effective. Full Fact worked with Google to help automate fact-checking.
FactCheck.org former director Brooks Jackson said media companies devoted increased focus to the importance of debunking fraud during the 2016 election. FactCheck.org partnered with CNN's Jake Tapper in 2016 to examine the veracity of candidate statements. Angie Drobnic Holan, editor of PolitiFact.com, cautioned that media company chiefs must support debunking, as it often provokes hate mail and extreme responses from zealots. In December 2016, PolitiFact announced fake news was its selection for "Lie of the Year".
PolitiFact explained its choice for the year: "In 2016, the prevalence
of political fact abuse – promulgated by the words of two polarizing
presidential candidates and their passionate supporters – gave rise to a
spreading of fake news with unprecedented impunity." PolitiFact called fake news a significant symbol of a culture accepting of post-truth politics.
In the aftermath of the 2016 U.S. election, Google and Facebook faced scrutiny over the impact of fake news. The top Google result for election results pointed to a fake site, "70 News", which had fraudulently published a headline and article claiming Trump had won the popular vote over Clinton. Google later stated that the prominence of the fake site in search results was a mistake. By 14 November, the "70 News" result was the second link shown when searching for results of the election. Asked shortly after the election whether fake news had influenced election results, Google CEO Sundar Pichai responded, "Sure", and went on to emphasize the importance of stopping the spread of fraudulent sites.
On 14 November 2016, Google responded to the problem of fraudulent sites by banning such companies from profiting on advertising traffic through its AdSense program; Google already had a policy of denying ads for dieting ripoffs and counterfeit merchandise. Google stated upon the announcement that it would work to ban advertisements from sources that lie about their purpose, content, or publisher. The ban is not expected to apply to news satire sites like The Onion, although some satirical sites may be inadvertently blocked under the new system.
On 25 April 2017, Google's Ben Gomes wrote a blog post announcing changes to the company's search algorithms intended to stop the "spread of blatantly
misleading, low quality, offensive or downright false information." On 27 July 2017, the World Socialist Web Site published data showing a significant drop, after the 25 April announcement, in Google referrals to left-wing and anti-war websites, including the ACLU, Alternet, and Counterpunch. The site contends that the "fake news" charge is a cover for removing anti-establishment websites from public access, and that the algorithm changes infringe on the democratic right to free speech.
One day after Google took action, Facebook decided to block fake sites from advertising on its platform. Facebook said it would ban ads from sites with deceptive content, including fake news, and would review publishers for compliance. These steps by Google and Facebook were intended to deny ad revenue to fraudulent news sites; neither company took action to prevent the dissemination of false stories in search results or web feeds. Facebook CEO Mark Zuckerberg called the notion that fraudulent news impacted the 2016 election a "crazy idea" and denied that his platform influenced the election, stating that 99% of Facebook's content was neither fake news nor a hoax. Zuckerberg said that Facebook is not a media company, and advised users to check the fact-checking website Snopes.com whenever they encounter fake news on Facebook.
Top staff members at Facebook did not feel that simply blocking ad revenue from fraudulent sites was a strong enough response, so they made an executive decision and created a secret group, comprising dozens of Facebook employees, to deal with the issue themselves. In response to Zuckerberg's statement that fraudulent news had not impacted the 2016 election, the group disputed this notion, saying fake news had been rampant on the site during the election cycle.
Response
Facebook faced criticism after its decision to revoke advertising revenue from fraudulent news providers without taking further action. After negative media coverage, including assertions that fraudulent news had handed the 2016 U.S. presidential election to Trump, Zuckerberg posted again on 18 November 2016, reversing his earlier comments discounting the impact of fraudulent news. He said it was difficult to filter out fraudulent news because the company favored open communication.
Measures Facebook considered but did not implement included adding the ability for users to tag questionable material, automated checking tools, and third-party verification. The 18 November post announced no concrete actions the company would definitively take, nor when such measures would be put into use.
National Public Radio observed that the changes Facebook was considering in order to identify fraud constituted progress toward the company becoming a new kind of media entity. On 19 November 2016, BuzzFeed advised Facebook users that they could report posts from fraudulent sites by choosing the report option "I think it shouldn't be on Facebook", followed by "It's a false news story." In November 2016, Facebook began assessing the use of warning labels on fake news. The rollout was at first available only to a few users in a testing phase; a sample warning read: "This website is not a reliable news source. Reason: Classification Pending". TechCrunch analyzed the new feature during the testing phase and surmised that it might be prone to false positives.
Fake news proliferation on Facebook had a negative financial impact on the company: Brian Wieser of Pivotal Research predicted that revenues could decrease by two percentage points owing to the concern over fake news and the loss of advertising dollars.
Shortly after Mark Zuckerberg's second statement on fake news proliferation, Facebook decided to assist the government of China with a version of its software that would allow increased government censorship. Barron's contributor William Pesek was highly critical of this move, writing that by porting its fake news conundrum to China, Facebook would become a tool in Communist Party General Secretary Xi Jinping's efforts to increase censorship.
Media scholar Nolan Higdon argues that relying on tech companies to solve the problems of false information will exacerbate the issues associated with fake news. Higdon contends that tech companies lack an incentive to solve the problem because they benefit from the proliferation of fake news, and he cites their data collection practices as one of the strongest forces empowering fake news producers. Rather than government regulation or industry censorship, Higdon argues for introducing critical news literacy education into American schools.
Partnership with debunkers
Society of Professional Journalists president Lynn Walsh said in November 2016 that the organization would reach out to Facebook to help weed out fake news. Walsh said Facebook should evolve and admit it functioned as a media company. On 17 November 2016, the Poynter International Fact-Checking Network (IFCN) published an open letter to Mark Zuckerberg on the Poynter Institute website, imploring him to use fact-checkers to identify fraud on Facebook. Signatories to the letter included fact-checking groups from around the world, among them Africa Check, FactCheck.org, PolitiFact.com, and The Washington Post Fact Checker.
In his second post on the matter, on 18 November 2016, Zuckerberg responded to the fraudulent news problem by suggesting the use of fact-checkers. He specifically identified the fact-checking website Snopes.com, and pointed out that Facebook monitors links to such debunkers in reply comments to determine which original posts are fraudulent.
On 15 December 2016, Facebook announced more specifics in its efforts to combat fake news and hoaxes on its site.
The company said it would form a partnership with fact-checking groups
that had joined the Poynter International Fact-Checking Network
fact-checkers' code of principles, to help debunk fraud on the site. It was the first time Facebook had given third-party entities featured placement in its News Feed, a significant driver of web traffic online. The fact-checking organizations partnered with Facebook to determine whether links shared on the site were factual or fraudulent. Facebook did not finance the fact-checkers, and acknowledged they could see increased traffic to their sites from the partnership. Fact-checking organizations that joined the initiative included ABC News, The Washington Post, Snopes.com, FactCheck.org, PolitiFact, and the Associated Press. Fraudulent articles would receive a warning tag: "disputed by third party fact-checkers". The company planned to start with obvious cases: hoaxes shared specifically to make money for the purveyors of fake news. Users would still be able to share tagged articles, which would appear lower in the news feed with an accompanying warning. Facebook would also employ staff researchers to determine whether website spoofing had occurred, for example "washingtonpost.co" instead of the real washingtonpost.com.
In a post on 15 December, Mark Zuckerberg acknowledged the changing
nature of Facebook: "I think of Facebook as a technology company, but I
recognize we have a greater responsibility than just building technology
that information flows through. While we don't write the news stories
you read and share, we also recognize we're more than just a distributor
of news. We're a new kind of platform for public discourse -- and that
means we have a new kind of responsibility to enable people to have the
most meaningful conversations, and to build a space where people can be
informed."
Proposed technology tools
New York magazine contributor Brian Feldman created a Google Chrome extension that would warn users about fraudulent news sites, and invited others to use his code and improve upon it. Upworthy co-founder and The Filter Bubble author Eli Pariser launched an open-source initiative on 17 November 2016 to address false news. Pariser began a Google Document to collaborate with others online on ways to lessen the phenomenon of fraudulent news. Pariser called his initiative "Design Solutions for Fake News"; the document included recommendations for a ratings organization analogous to the Better Business Bureau and a database of media producers in a format like Wikipedia's. Writing for Fortune,
Matthew Ingram agreed with the idea that Wikipedia could serve as a
helpful model to improve Facebook's analysis of potentially fake news.
Ingram concluded Facebook could benefit from a social network form of
fact-checking similar to Wikipedia's methods while incorporating
debunking websites such as PolitiFact.com.
Others
Pope Francis, the leader of the Roman Catholic Church, spoke out against fake news in an interview with the Belgian Catholic weekly Tertio on 7 December 2016.
The Pope had prior experience being the subject of fake news: during the 2016 U.S. election cycle, he was falsely said to support Donald Trump for president. Pope Francis said the single worst thing the news media could do was spread disinformation, and that amplifying fake news instead of educating society was a sin. He compared salacious reporting of scandals, whether true or not, to coprophilia, and the consumption of it to coprophagy.
The Pope said that he did not intend to offend with his strong words,
but emphasized that "a lot of damage can be done" when the truth is
disregarded and slander is spread.
Academic analysis
Jamie
Condliffe wrote that banning ad revenue from fraudulent sites was not
aggressive enough action by Facebook to deal with the problem, and did
not prevent fake news from appearing in Facebook news feeds. University of Michigan political scientist Brendan Nyhan criticized Facebook for not doing more to combat fake news amplification. Indiana University computer science professor Filippo Menczer
commented on measures by Google and Facebook to deny fraudulent sites
revenue, saying it was a good step to reduce motivation for fraudsters.
Menczer's research team developed an online tool named Hoaxy to track the spread of unconfirmed assertions, as well as related debunking efforts, on the Internet.
Zeynep Tufekci wrote critically about Facebook's stance on fraudulent news sites, noting that fraudulent websites in North Macedonia had profited handsomely from false stories about the 2016 U.S. election. Tufekci wrote that Facebook's algorithms and structure exacerbated the impact of echo chambers and increased the blight of fake news.
In 2016 Melissa Zimdars, associate professor of communications at Merrimack College,
created a handout for her Introduction to Mass Communication students
titled "False, Misleading, Clickbait-y, and/or Satirical 'News' Sources"
and posted it on Google Docs. It was circulated on social media, and on 15 November 2016, the Los Angeles Times
published the class handout under the title "Want to keep fake news out
of your newsfeed? College professor creates list of sites to avoid".
Zimdars said that the list "wasn't intended to be widely distributed",
and expressed concern that "people are taking it as this list of 'fake'
sites, which is not its purpose". On 17 November 2016 Zimdars deleted
the list. On 3 January 2017, Zimdars replaced the original handout with a new list at the same URL. The new list removed most of the sites from the original handout, added many new sites, and greatly expanded the categories.
Stanford University professors Sam Wineburg and Sarah McGrew authored a 2016 study analyzing students' ability to discern fraudulent news from factual reporting. The study took place over a year and drew on a sample of more than 7,800 responses from university, secondary and middle school students in 12 U.S. states. The authors were surprised at the consistency with which students thought fraudulent news reports were factual; the study found that 82% of middle school students were unable to distinguish an advertisement labeled as sponsored content from an actual news article.
The authors concluded that the solution was to educate online media consumers to behave like fact-checkers themselves, actively questioning the veracity of all sources.
A 2019 study in the journal Science, which examined the dissemination of fake news articles on Facebook in the 2016 election, found that sharing of fake news articles on Facebook was "relatively rare", that conservatives were more likely than liberals or moderates to share fake news, and that there was a "strong age effect", whereby individuals over 65 were vastly more likely to share fake news than younger cohorts. Another 2019 study in Science
found, "fake news accounted for nearly 6% of all news consumption [on
Twitter], but it was heavily concentrated—only 1% of users were exposed
to 80% of fake news, and 0.1% of users were responsible for sharing 80%
of fake news. Interestingly, fake news was most concentrated among
conservative voters."
Scientist Emily Willingham has proposed applying the scientific method to fake news analysis. Having previously written about differentiating science from pseudoscience, she proposed applying the same logic to fake news, with recommended steps she calls Observe, Question, Hypothesize, Analyze data, Draw conclusion, and Act on results. Willingham suggested starting from the hypothesis "This is real news" and then forming a strong set of questions to attempt to disprove it. Her tests include checking the URL and the article's date, evaluating reader and writer bias, double-checking the evidence, and verifying the cited sources (see the illustrative sketch below). University of Connecticut philosophy professor Michael P. Lynch
said that a troubling number of individuals make determinations relying
upon the most recent piece of information they have consumed. The greater issue, he said, was that fake news could make people less likely to believe news that really is true. Lynch summed up the thought process of such individuals as: "...ignore the facts because nobody knows what's really true anyway."
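Willingham's tests lend themselves to a simple checklist. The sketch below is a toy encoding of her questions, not a published tool; the Article fields and pass/fail criteria are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class Article:
    """Hypothetical metadata gathered about a story under evaluation."""
    url: str
    date: str               # publication date, e.g. "2016-11-08"
    evidence_checked: bool  # did the cited facts hold up elsewhere?
    sources_verified: bool  # do the quoted sources exist and agree?
    url_spoofed: bool       # does the URL imitate a real outlet?

def tests_failed(a: Article) -> list[str]:
    """Test the hypothesis 'this is real news'; each returned string
    is a failed check, i.e. a reason to reject the hypothesis."""
    failures = []
    if a.url_spoofed:
        failures.append("URL imitates a legitimate outlet")
    if not a.date:
        failures.append("no verifiable publication date")
    if not a.evidence_checked:
        failures.append("evidence not corroborated")
    if not a.sources_verified:
        failures.append("cited sources could not be verified")
    return failures

# An empty list means no check disproved the hypothesis; any entry
# means the reader should act on the result and not share the story.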
In 2019, David Lazer and other researchers, from Northeastern University, Harvard University, and the University at Buffalo,
analyzed engagement with a previously defined set of fake news sources
on Twitter. They found that such engagement was highly concentrated both
among a small number of websites and a small number of Twitter users.
Five percent of the sources accounted for over fifty percent of
exposures. Among users, 0.1 percent consumed eighty percent of the
volume from fake news sources.
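The concentration figures reported in these studies are simple shares: rank sources (or users) by exposure count and ask what fraction of all exposures the top slice accounts for. A toy illustration with invented counts, not the study's data:

# Hypothetical exposure counts per fake news source.
exposures = {"siteA": 500, "siteB": 300, "siteC": 90,
             "siteD": 60, "siteE": 30, "siteF": 20}

def top_share(counts: dict[str, int], top_fraction: float) -> float:
    """Fraction of all exposures accounted for by the top
    `top_fraction` of items, ranked by exposure count."""
    ranked = sorted(counts.values(), reverse=True)
    k = max(1, round(len(ranked) * top_fraction))
    return sum(ranked[:k]) / sum(ranked)

print(top_share(exposures, 0.05))  # top 5% of sources; Lazer et al.
                                   # report this share exceeding 0.5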