
Fake news

Reporters with various forms of "fake news", from an 1894 illustration by Frederick Burr Opper

Fake news or information disorder is false or misleading information (misinformation, including disinformation, propaganda, and hoaxes) presented as news. Fake news often has the aim of damaging the reputation of a person or entity, or making money through advertising revenue. Although false news has always been spread throughout history, the term "fake news" was first used in the 1890s when sensational reports in newspapers were common. Nevertheless, the term does not have a fixed definition and has been applied broadly to any type of false information presented as news. It has also been used by high-profile people to apply to any news unfavorable to them. Further, disinformation involves spreading false information with harmful intent and is sometimes generated and propagated by hostile foreign actors, particularly during elections. In some definitions, fake news includes satirical articles misinterpreted as genuine, and articles that employ sensationalist or clickbait headlines that are not supported in the text. Because of this diversity of types of false news, researchers are beginning to favour information disorder as a more neutral and informative term.

The prevalence of fake news has increased with the recent rise of social media, especially the Facebook News Feed, and this misinformation is gradually seeping into the mainstream media. Several factors have been implicated in the spread of fake news, such as political polarization, post-truth politics, motivated reasoning, confirmation bias, and social media algorithms.

Fake news can reduce the impact of real news by competing with it. For example, a BuzzFeed News analysis found that the top fake news stories about the 2016 U.S. presidential election received more engagement on Facebook than top stories from major media outlets. It also particularly has the potential to undermine trust in serious media coverage. The term has at times been used to cast doubt upon credible news, and former U.S. president Donald Trump has been credited with popularizing the term by using it to describe any negative press coverage of himself. It has been increasingly criticized, due in part to Trump's misuse, with the British government deciding to avoid the term, as it is "poorly-defined" and "conflates a variety of false information, from genuine error through to foreign interference".

Multiple strategies for fighting fake news, tailored to its various types, are being actively researched. Politicians in both autocratic and democratic countries have demanded effective self-regulation, and legally enforced regulation in varying forms, of social media and web search engines.

On an individual scale, the ability to actively confront false narratives, as well as care when sharing information, can reduce the prevalence of falsified information. However, this approach is vulnerable to confirmation bias, motivated reasoning and other cognitive biases that can seriously distort reasoning, particularly in dysfunctional and polarised societies. Inoculation theory has been proposed as a method to render individuals resistant to undesirable narratives. Because new misinformation appears constantly, it is far more time-efficient to inoculate the population against accepting fake news in general (a process termed "prebunking") than to continually debunk the same repeated lies.

Defining fake news

Fake news is false or misleading information presented as news. The term is a neologism (a new or re-purposed expression entering the language, driven by cultural or technological change). Fake news stories, and the fake news websites that publish them, have no basis in fact but are presented as factually accurate.

Overlapping terms are bullshit, hoax news, pseudo-news, alternative facts, false news and junk news.

The National Endowment for Democracy defined fake news as: "[M]isleading content found on the internet, especially on social media [...] Much of this content is produced by for-profit websites and Facebook pages gaming the platform for advertising revenue." It distinguished fake news from disinformation: "[F]ake news does not meet the definition of disinformation or propaganda. Its motives are usually financial, not political, and it is usually not tied to a larger agenda."

Media scholar Nolan Higdon has defined fake news as "false or misleading content presented as news and communicated in formats spanning spoken, written, printed, electronic, and digital communication". Higdon has also argued that the definition of fake news has been applied too narrowly to select mediums and political ideologies.

While most definitions focus strictly on content accuracy and format, current research indicates that the rhetorical structure of the content might play a significant role in the perception of fake news.

Michael Radutzky, a producer of CBS 60 Minutes, said his show considers fake news to be "stories that are probably false, have enormous traction [popular appeal] in the culture, and are consumed by millions of people." These stories are not only found in politics, but also in areas like vaccination, stock values and nutrition. He did not include news that is "invoked by politicians against the media for stories that they don't like or for comments that they don't like" as fake news. Guy Campanile, also a 60 Minutes producer, said, "What we are talking about are stories that are fabricated out of thin air. By most measures, deliberately, and by any definition, that's a lie."

The intent and purpose of fake news is important. In some cases, fake news may be news satire, which uses exaggeration and introduces non-factual elements that are intended to amuse or make a point, rather than to deceive. Propaganda can also be fake news.

In the context of the United States of America and its election processes in the 2010s, fake news generated considerable controversy and argument, with some commentators defining concern over it as moral panic or mass hysteria and others worried about damage done to public trust. It particularly has the potential to undermine trust in serious media coverage generally. The term has also been used to cast doubt upon credible mainstream media.

In January 2017, the United Kingdom House of Commons commenced a parliamentary inquiry into the "growing phenomenon of fake news".

In 2016, PolitiFact selected fake news as its Lie of the Year. There was so much fake news during that year's United States presidential election, won by Donald Trump, that no single lie stood out, so the generic term was chosen. Also in 2016, Oxford Dictionaries selected post-truth as its word of the year and defined it as the state of affairs when "objective facts are less influential in shaping public opinion than appeals to emotion and personal belief." Fake news is the boldest sign of a post-truth society: when people cannot agree on basic facts, or even on whether there are such things as facts, it becomes difficult to talk to one another.

Roots

The roots of "fake news" from UNESCO's World Trends Report

The term "fake news" gained importance with the electoral context in Western Europe and North America. It is determined by fraudulent content in news format and its velocity. According to Bounegru, Gray, Venturini and Mauri, a lie becomes fake news when it "is picked up by dozens of other blogs, retransmitted by hundreds of websites, cross-posted over thousands of social media accounts and read by hundreds of thousands".

The evolving nature of online business models encourages the production of information that is "click-worthy" and independent of its accuracy.

The nature of trust depends on the assumptions that non-institutional forms of communication are freer from power and more able to report information that mainstream media are perceived as unable or unwilling to reveal. Declines in confidence in much traditional media and expert knowledge have created fertile grounds for alternative, and often obscure sources of information to appear as authoritative and credible. This ultimately leaves users confused about basic facts.

Popularity and viral spread

Fake news has proliferated across various media outlets and platforms. Researchers at Pew Research Center discovered that over 60% of Americans access news through social media rather than traditional newspapers and magazines. With the popularity of social media, individuals can easily access fake news and disinformation. The rapid spread of false stories on social media during the 2012 elections in Italy has been documented, as has the diffusion of false stories on Facebook during the 2016 US election campaign.

Fake news has a tendency to go viral. With the presence of social media platforms like Twitter, it becomes easier for false information to diffuse quickly. Research has found that false political information tends to spread three times faster than other false news, and on Twitter, false tweets have a much higher chance of being retweeted than truthful tweets. Moreover, it is humans, rather than bots and click-farms, who are chiefly responsible for disseminating false news and information. This tendency has to do with human behavior: according to research, humans are attracted to events and information that are surprising and new, which cause high arousal in the brain. Motivated reasoning has also been found to play a role. As a result, people retweet or share false information, which is usually characterized by clickbait and eye-catching titles, without stopping to verify it. Massive online communities thus form around a piece of false news without any prior fact-checking or verification of its veracity.

Of particular concern regarding the viral spread of fake news is the role of super-spreaders. Brian Stelter, the anchor of Reliable Sources at CNN, has documented the systematic long-term two-way feedback that developed between President Donald Trump and Fox News presenters. The resultant conditioning of outrage in their large audience against government and the mainstream media has proved a highly successful money-spinner for the TV network.

Its damaging effects

In 2017, the inventor of the World Wide Web, Tim Berners-Lee claimed that fake news was one of the three most significant new disturbing Internet trends that must first be resolved if the Internet is to be capable of truly "serving humanity." The other two new disturbing trends were: the recent surge in the use of the Internet by governments for citizen-surveillance purposes, and for cyber-warfare purposes.

Author Terry Pratchett, previously a journalist and press officer, was among the first to be concerned about the spread of fake news on the Internet. In a 1995 interview with Bill Gates, founder of Microsoft, he said "Let's say I call myself the Institute for Something-or-other and I decide to promote a spurious treatise saying the Jews were entirely responsible for the Second World War, and the Holocaust didn't happen, and it goes out there on the Internet and is available on the same terms as any piece of historical research which has undergone peer review and so on. There's a kind of parity of esteem of information on the net. It's all there: there's no way of finding out whether this stuff has any bottom to it or whether someone has just made it up". Gates was optimistic and disagreed, saying that authorities on the Net would index and check facts and reputations in a much more sophisticated way than in print. But it was Pratchett who more accurately predicted how the internet would propagate and legitimize fake news.

When the internet first became accessible for public use in the 1990s, its main purpose was the seeking and accessing of information. As fake news spread online, it became difficult for some people to find truthful information. The impact of fake news has become a worldwide phenomenon. Fake news is often spread through fake news websites, which specialize in creating attention-grabbing stories and often impersonate well-known news sources in order to gain credibility. Jestin Coler, who said he does it for "fun", has indicated that he earned US$10,000 per month from advertising on his fake news websites.

Research has shown that fake news hurts social media and online-based outlets far worse than traditional print and TV outlets. A survey found that, after learning about fake news, 58% of people had less trust in news stories on social media, compared with 24% who had less trust in mainstream media. In 2019, Christine Michel Carter, a writer who has reported on Generation Alpha for Forbes, stated that one-third of the generation can decipher false or misleading information in the media.

Types of fake news

An example of manipulated content: an intentionally deceptive photoshopped image of Hillary Clinton over a 1977 photo of Peoples Temple cult leader Jim Jones

Claire Wardle of First Draft News has identified seven types of fake news:
  1. satire or parody ("no intention to cause harm but has potential to fool")
  2. false connection ("when headlines, visuals or captions don't support the content")
  3. misleading content ("misleading use of information to frame an issue or an individual")
  4. false context ("when genuine content is shared with false contextual information")
  5. impostor content ("when genuine sources are impersonated" with false, made-up sources)
  6. manipulated content ("when genuine information or imagery is manipulated to deceive", as with a "doctored" photo)
  7. fabricated content ("new content is 100% false, designed to deceive and do harm")

Scientific denialism is another potential explanatory type of fake news, defined as the act of producing false or misleading facts to unconsciously support strong pre-existing beliefs.

Criticism of the term

In 2017, Wardle announced she has now rejected the phrase "fake news" and "censors it in conversation", finding it "woefully inadequate" to describe the issues. She now speaks of "information disorder" and "information pollution", and distinguishes between three overarching types of information content problems:

  1. Mis-information (misinformation): false information disseminated without harmful intent.
  2. Dis-information (disinformation): false information created and shared by people with harmful intent.
  3. Mal-information (malinformation): the sharing of "genuine" information with the intent to cause harm.

Disinformation attacks are the most insidious of the three because of their harmful intent. Disinformation is sometimes generated and propagated by hostile foreign actors, particularly during elections.

Because of the manner in which former president Donald Trump co-opted the term, The Washington Post media columnist Margaret Sullivan warned fellow journalists that "It's time to retire the tainted term 'fake news'. Though the term hasn't been around long, its meaning already is lost." By late 2018, the term "fake news" had become verboten, and U.S. journalists and institutions, including the Poynter Institute, were asking for apologies and for product retirements from companies using the term.

In October 2018, the British government decided that the term "fake news" will no longer be used in official documents because it is "a poorly-defined and misleading term that conflates a variety of false information, from genuine error through to foreign interference in democratic processes." This followed a recommendation by the House of Commons' Digital, Culture, Media and Sport Committee to avoid the term.

However, recent reviews of fake news still regard it as a useful broad construct, equivalent in meaning to fabricated news and separate from related types of problematic news content, such as hyperpartisan news, the latter being a particular source of political polarization. Even so, researchers are beginning to favour "information disorder" as a more neutral and informative term. For example, the Commission of Inquiry by the Aspen Institute (2021) adopted the term Information Disorder in its investigative report.

Identification

The infographic "How To Spot Fake News", published by the International Federation of Library Associations and Institutions

According to an academic library guide, a number of specific aspects of fake news may help to identify it and thus avoid being unduly influenced. These include: clickbait, propaganda, satire/parody, sloppy journalism, misleading headings, manipulation, rumor mill, misinformation, media bias, audience bias, and content farms.

The International Federation of Library Associations and Institutions (IFLA) published a summary in diagram form (the infographic pictured above) to assist people in recognizing fake news. Its main points are:

  1. Consider the source (to understand its mission and purpose)
  2. Read beyond the headline (to understand the whole story)
  3. Check the authors (to see if they are real and credible)
  4. Assess the supporting sources (to ensure they support the claims)
  5. Check the date of publication (to see if the story is relevant and up to date)
  6. Ask if it is a joke (to determine if it is meant to be satire)
  7. Review your own biases (to see if they are affecting your judgment)
  8. Ask experts (to get confirmation from independent people with knowledge).

The International Fact-Checking Network (IFCN), launched in 2015, supports international collaborative efforts in fact-checking, provides training, and has published a code of principles. In 2017 it introduced an application and vetting process for journalistic organisations. One of IFCN's verified signatories, the independent, not-for-profit media journal The Conversation, created a short animation explaining its fact checking process, which involves "extra checks and balances, including blind peer review by a second academic expert, additional scrutiny and editorial oversight".

Beginning in the 2017 school year, children in Taiwan study a new curriculum designed to teach critical reading of propaganda and the evaluation of sources. Called "media literacy", the course provides training in journalism in the new information society.

Online identification

Fake news has become increasingly prevalent over the last few years, with over 100 misleading articles and rumors spread regarding the 2016 United States presidential election alone. These fake news articles tend to come from satirical news websites or from individual websites with an incentive to propagate false information, whether as clickbait or to serve a particular agenda. Because such articles are deliberately crafted to promote incorrect information, they are quite difficult to detect.

When identifying a source of information, one must look at many attributes, including but not limited to the content of the message and its social media engagement. Specifically, the language of fake news is typically more inflammatory than that of real articles, in part because the purpose is to confuse and generate clicks.

Furthermore, modeling techniques such as n-gram encodings and bag-of-words representations have served as linguistic features for determining the legitimacy of a news source. Researchers have also determined that visual-based cues play a role in categorizing an article; specifically, features can be designed to assess whether an image is legitimate and whether it adds clarity to the story. There are also many social-context features that can play a role, as can the pattern by which the news spreads. Websites such as Snopes try to detect this information manually, while some university researchers are trying to build mathematical models to do it automatically.
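
As an illustration of the linguistic techniques mentioned above, the following Python sketch trains a simple bag-of-words classifier with unigram and bigram (n-gram) features to score headlines. The labelled examples and the library choice (scikit-learn) are illustrative assumptions; real detection systems use far larger training sets and many additional features.

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    # Hypothetical labelled headlines: 1 = fabricated, 0 = legitimate.
    headlines = [
        "SHOCKING: miracle cure doctors don't want you to know",
        "You won't BELIEVE what this politician did next",
        "City council approves budget for road repairs",
        "Central bank holds interest rates steady",
    ]
    labels = [1, 1, 0, 0]

    # Bag-of-words with unigrams and bigrams, weighted by TF-IDF.
    model = make_pipeline(
        TfidfVectorizer(ngram_range=(1, 2), lowercase=True),
        LogisticRegression(),
    )
    model.fit(headlines, labels)

    # Estimated probability that a new headline resembles the fabricated examples.
    print(model.predict_proba(["Miracle cure SHOCKS doctors"])[0][1])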

Tackling and suppression strategies

Considerable research is underway regarding strategies for confronting and suppressing fake news of all types, in particular disinformation, which is the deliberate spreading of false narratives for political purposes, or for destabilising social cohesion in targeted communities. Multiple strategies need to be tailored to individual types of fake news, depending for example on whether the fake news is deliberately produced, or rather unintentionally or unconsciously produced.

Considerable resources are available to combat fake news. Regular summaries of current events and research are available on the websites and email newsletters of a number of support organisations. Particularly notable are the First Draft Archive, the Information Futures Lab, School of Public Health, Brown University and the Nieman Foundation for Journalism (Harvard University).

Journalist Bernard Keane, in his book on misinformation in Australia, classifies strategies for dealing with fake news into three categories: (1) the liar (the perpetrator of fake news), (2) the conduit (the method of carriage of the fake news), and (3) the lied-to (the recipient of the fake news).

Strategies regarding the perpetrator

Promotion of facts over emotions

American philosopher of science Lee McIntyre, who has researched the scientific attitude and post-truth, has explained the importance of factual basis of society, in preference to one in which emotions replace facts. A disturbing modern example of this is the symbiotic relationship that developed between President Donald Trump and Fox News, in which the conspiracy beliefs of Fox hosts were repeated shortly after by Trump (and vice versa) in a continuous feedback loop. This served to promote outrage, and thus to condition and radicalise conservative Republican Fox listeners into cult-like Trump supporters, and to demonise and gaslight Democrat opponents, the mainstream media, and elites generally.

A key strategy to counter fake news based on emotions rather than facts is to flood the information space, particularly social media and web search results, with factual news, thus drowning out misinformation. A key factor in establishing facts is the role of critical thinking, the principles of which should be embedded more comprehensively within all school and university courses. Critical thinking is a style of thinking in which citizens, prior to subsequent problem solving and decision-making, have learned to pay attention to the content of written words and to judge their accuracy and fairness, among other worthy attributes.

Technique rebuttal

Because content rebuttal (presenting true facts to refute false information) does not always work, Lee McIntyre suggests the better method of technique rebuttal, in which faulty reasoning by deniers is exposed, such as cherry-picking data, and relying too much on fake experts. Deniers have lots of information, but a deficit of trust in mainstream sources. McIntyre first builds trust by respectful exchange, listening carefully to their explanation without interrupting. Then he asks questions such as “What evidence would make you change your mind?” and “Why do you trust that source?” McIntyre has used his technique to talk to flat earthers, though he admits it may not work with hard-core deniers.

Individual counteraction

Individuals should confront misinformation when they spot it in online blogs, even if briefly; otherwise it festers and proliferates. The person being responded to is probably resistant to change, but many other readers may learn from an evidence-based reply. John Kerry learned this lesson brutally during the 2004 US presidential election campaign against George W. Bush. The right-wing Swift Boat Veterans for Truth falsely claimed that Kerry had shown cowardice during the Vietnam War. Kerry refused to dignify the claims with a response for two weeks, despite being pummeled in the media, and this inaction contributed to his narrow loss to Bush. No claim should be assumed too outrageous to be believed.

However, caution applies regarding over-zealous debunking of fake news. It is often unwise to draw attention to fake news published on a low-impact website or blog (one that has few followers). If this fake news is debunked by a journalist in a high-profile place such as The New York Times, knowledge of the false claim spreads widely, and more people overall will end up believing it, ignoring or denying the debunk.

Backfire effect

A widely reported 2010 paper by Brendan Nyhan and Jason Reifler found that when people with firm beliefs were presented with corrective information, their mistaken political beliefs were reinforced rather than reduced in two of the five studies. The researchers called this a backfire effect. However, the finding was widely misreported as showing that corrective information generally reinforces misinformation. Later studies, including some by Nyhan and colleagues, failed to replicate a backfire effect. Nyhan now holds that reinforced beliefs are largely driven by cues from high-profile elites and media outlets that spread misinformation.

Strategies regarding carriers

Regulation of social media

Internet companies with threatened credibility have developed new responses to limit fake news and reduce financial incentives for its proliferation.

A valid criticism of social media companies is that users are presented with content that they will like, based on previous viewing preferences. An undesirable side-effect is that confirmation bias is enhanced in users, which in turn enhances the acceptance of fake news. To reduce this bias, effective self-regulation and legally-enforced regulation of social media (notably Facebook and Twitter) and web search engines (notably Google) need to become more effective and innovative.

Financial disincentives to tackle fake news also apply to some mainstream media. Brian Stelter, the anchor of Reliable Sources at CNN, has provided a substantial critique of the symbiotic but damaging relationship that developed between President Donald Trump and Fox News, which has proved an extraordinarily successful money-spinner for the Murdoch-owned TV network, despite this being a super-spreader of fake news.

General strategy

The general approach by these tech companies is the detection of problematic news via human fact-checking and automated artificial intelligence (machine learning, natural language processing and network analysis). Tech companies have utilized two basic counter-strategies: down-ranking fake news and warning messages.

In the first approach, problematic content is down-ranked by the search algorithm, for example, to the second or later pages on a Google search, so that users are less likely to see it (most users just scan the first page of search results). However, two problems arise. One is that truth is not black-and-white, and fact-checkers often disagree on how to classify the content included in computer training sets, running the risk of false positives and unjustified censorship. Also, fake news often evolves rapidly, and therefore identifiers of misinformation may be ineffective in the future.
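
As a rough illustration of down-ranking, the Python sketch below re-orders results by subtracting a penalty proportional to a misinformation score assumed to come from an upstream classifier, so that flagged items sink toward later pages rather than being removed. The field names, scores and penalty weight are invented for illustration and do not describe any real search engine's algorithm.

    # Each result has a relevance score and a misinformation score in [0, 1]
    # supplied by an upstream classifier (assumed, not shown here).
    results = [
        {"url": "example.org/analysis", "relevance": 0.91, "misinfo": 0.05},
        {"url": "example.net/rumour",   "relevance": 0.91, "misinfo": 0.80},
        {"url": "example.com/report",   "relevance": 0.75, "misinfo": 0.10},
    ]

    PENALTY = 0.5  # how strongly a misinformation flag pushes a result down

    def down_rank(items):
        # Subtract a penalty proportional to the misinformation score, so
        # flagged content is demoted in the ranking instead of deleted.
        return sorted(items,
                      key=lambda r: r["relevance"] - PENALTY * r["misinfo"],
                      reverse=True)

    for r in down_rank(results):
        print(r["url"])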

The second approach involves attaching warnings to content that professional fact-checkers have found to be false. Much evidence indicates that corrections and warnings do produce reduced misperceptions and sharing. Despite some early evidence that fact-checking could backfire, recent research has shown that these backfire effects are extremely uncommon. But an important problem is that professional fact-checking is not scalable – it can take substantial time and effort to investigate each particular claim. Thus, many (if not most) false claims never get fact-checked. Also, the process is slow, and a warning may miss the period of peak viral spreading. Further, warnings are typically only attached to blatantly false news, rather than to biased coverage of events that actually occurred.
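
The warning-label approach can be sketched as a lookup against a store of claims that fact-checkers have already rated false; matching posts get a warning attached rather than being removed. In this hypothetical Python sketch the claim database, URLs and naive substring matching are all invented for illustration; production systems rely on far more sophisticated claim matching.

    # Hypothetical store of claims already rated false by fact-checkers.
    fact_checked_false = {
        "vaccine contains microchips": "https://factcheck.example/microchips",
        "election was decided by dead voters": "https://factcheck.example/voters",
    }

    def attach_warning(post_text):
        # Naive matching: real systems use semantic similarity, not substrings.
        for claim, source in fact_checked_false.items():
            if claim in post_text.lower():
                return {"text": post_text,
                        "warning": "Disputed by independent fact-checkers",
                        "fact_check": source}
        return {"text": post_text, "warning": None, "fact_check": None}

    print(attach_warning("BREAKING: the vaccine contains microchips!"))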

A third approach is to place more emphasis on reliable sources such as Wikipedia, as well as mainstream media (for example, The New York Times and The Wall Street Journal), and science communication publications (for example, Scientific American and The Conversation). However, this approach has led to mixed results, as hyperpartisan commentary and confirmation bias is found even in these sources (the media has both news and opinion pages). In addition, some sections of the community completely reject scientific commentary.

A fourth approach is to ban or specifically target so-called super-spreaders of fake news from social media.

Fact-checking

During the 2016 United States presidential election, the creation and coverage of fake news increased substantially, prompting a widespread response to combat its spread. The sheer volume of fake news websites, and their reluctance to respond to fact-checking organizations, has made it difficult to inhibit the spread of fake news through fact-checking alone. In an effort to reduce the effects of fake news, fact-checking websites, including Snopes.com and FactCheck.org, have posted guides to spotting and avoiding fake news websites. Social media sites and search engines, such as Facebook and Google, received criticism for facilitating the spread of fake news. Both corporations have taken measures to explicitly prevent the spread of fake news; critics, however, believe more action is needed.

Facebook

After the 2016 American election and in the run-up to the German election, Facebook began labeling and warning about inaccurate news, partnering with independent fact-checkers to label inaccurate stories and warn readers before they share them. After a story is flagged as disputed, it is reviewed by the third-party fact-checkers; if it is found to be fake, the post cannot be turned into an ad or promoted. Artificial intelligence is one of the more recent technologies being developed in the United States and Europe to recognize and eliminate fake news through algorithms. In 2017, Facebook targeted 30,000 accounts related to the spread of misinformation regarding the French presidential election.

In 2020, during the COVID-19 pandemic, Facebook found that troll farms from North Macedonia and the Philippines were pushing coronavirus disinformation. Publishers that used content from these farms were banned from the platform.

Google

In March 2018, Google launched the Google News Initiative (GNI) to fight the spread of fake news, under the belief that quality journalism and identifying truth online are crucial. GNI has three goals: "to elevate and strengthen quality journalism, evolve business models to drive sustainable growth and empower news organizations through technological innovation". To achieve the first goal, Google created the Disinfo Lab, which combats the spread of fake news during crucial times such as elections or breaking news. The company is also working to adjust its systems to display more trustworthy content during times of breaking news. To make it easier for users to subscribe to media publishers, Google created Subscribe with Google. Additionally, it created a dashboard, News Consumer Insights, that allows news organizations to better understand their audiences using data and analytics. Google will spend $300 million through 2021 on these efforts, among others, to combat fake news.

In November 2020, YouTube (owned by Google) suspended the news outlet One America News Network (OANN) for a week for spreading misinformation about the coronavirus. The outlet had violated YouTube's policy multiple times, and a video that falsely promoted a guaranteed cure for the virus was deleted from the channel.

Legal and criminal sanctions in general

The use of anonymously hosted fake news websites has made it difficult to prosecute sources of fake news for libel.

Numerous countries have created laws in an attempt to regulate or prosecute harmful misinformation more generally than just with a focus on tech companies. In numerous countries, people have been arrested for allegedly spreading fake news about the COVID-19 pandemic.

Algerian lawmakers passed a law criminalising "fake news" deemed harmful to "public order and state security". The Turkish Interior Ministry has been arresting social media users whose posts were "targeting officials and spreading panic and fear by suggesting the virus had spread widely in Turkey and that officials had taken insufficient measures". Iran's military said 3600 people have been arrested for "spreading rumors" about COVID-19 in the country. In Cambodia, some individuals who expressed concerns about the spread of COVID-19 have been arrested on fake news charges. The United Arab Emirates have introduced criminal penalties for the spread of misinformation and rumours related to the outbreak.

Strategies regarding the recipient

Cognitive biases of recipient

The vast proliferation of online information, such as blogs and tweets, has inundated the online marketplace. Because of the resulting information overload, people cannot process all these information units (called memes), so they let confirmation bias and other cognitive biases decide which ones to pay attention to, thus enhancing the spread of fake news. Moreover, these cognitive vulnerabilities are easily exploited both by computer algorithms that present information we may like (based on our previous social media use) and by individual manipulators who create social media bots to deliberately spread disinformation.

A recent study by Randy Stein and colleagues shows that conservatives value personal stories (non-scientific, intuitive or experiential evidence) more than do liberals (progressives), and therefore perhaps may be less swayed by scientific evidence. This study however only tested responses to apolitical messages.

Nudges as reflection prompts

People tend to react hastily and share fake news without thinking carefully about what they have read or heard, and without checking or verifying the information. "Nudging" people to consider the accuracy of incoming information has been shown to prompt people to think about it, to improve the accuracy of their judgement, and to reduce the likelihood that incorrect information is unreflectively shared. An example of a technology-based nudge is Twitter's "read before you retweet" prompt, which prompts readers to read an article and consider its contents before retweeting it.
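
A minimal sketch of such an accuracy nudge, in Python and under the assumption that the client knows whether the user has opened the linked article: if the article was not opened, the reshare is paused behind a prompt instead of going through immediately. The function and parameter names are hypothetical and do not describe Twitter's actual implementation.

    def handle_reshare(user_opened_article: bool, confirm_prompt) -> bool:
        """Return True if the reshare should proceed."""
        if user_opened_article:
            return True
        # Nudge: ask the user to read the article and reflect before sharing.
        return confirm_prompt(
            "You haven't opened this article yet. Share anyway?"
        )

    # Example: a prompt callback that answers "no" (the user decides to read first).
    shared = handle_reshare(user_opened_article=False,
                            confirm_prompt=lambda msg: False)
    print("shared" if shared else "held back for reading")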

Media critical thinking skills

Critical media literacy skills, for both printed and digital media, are essential for recipients to self-evaluate the accuracy of the media content. Media scholar Nolan Higdon argues that a critical media literacy education focused on teaching critical thinking about how to detect fake news is the most effective way for mitigating the pernicious influence of propaganda. Higdon offers a ten-step guide for detecting fake news.

Mental immune health, inoculation and prebunking

American philosopher Andy Norman, in his book Mental Immunity, argues for a new science of cognitive immunology as a practical guide to resisting bad ideas (such as conspiracy theories), as well as transcending petty tribalism. He argues that reasoned argument, the scientific method, fact-checking and critical thinking skills alone are insufficient to counter the broad scope of false information. Overlooked is the power of confirmation bias, motivated reasoning and other cognitive biases that can seriously distort the many facets of mental 'immunity' (public resilience to fake news), particularly in dysfunctional societies.

The problem is that new misinformation, and its darker cousin, intentional disinformation, keeps popping up all the time. It is therefore far more time-efficient to inoculate the population against misinformation than to keep debunking each new claim after the fact. Inoculation builds public resilience and creates the conditions for psychological 'herd immunity'. The general term for this process is "prebunking", defined as the process of debunking lies, tactics or sources before they strike. New research shows that free online games can provide tools to fight fake news, leading to healthy skepticism when we consume the news. Google is currently (2023) rolling out novel prebunking video adverts, which have been shown to be effective in countering misinformation during trials in Eastern Europe.

Most current research is based on inoculation theory, a social psychological and communication theory that explains how an attitude or belief can be protected against persuasion or influence in much the same way a body can be protected against disease, for example through pre-exposure to weakened versions of a stronger, future threat. The theory uses inoculation as its explanatory analogy, applied to attitudes (or beliefs) much like a vaccine is applied to an infectious disease. It has great potential for building public resilience ('immunity') against misinformation and fake news, for example in tackling science denialism, risky health behaviours, and emotionally manipulative marketing and political messaging.

For example, John Cook and colleagues have shown that inoculation theory shows promise in countering climate change denialism. This involves a two-step process. First, list and deconstruct the roughly 50 most common myths about climate change by identifying the reasoning errors and logical fallacies of each one. Second, use parallel argumentation to explain the flaw in the argument by transplanting the same logic into a parallel situation, often an extreme or absurd one. Adding appropriate humour can be particularly effective.

History

Ancient

Roman politician and general Mark Antony killed himself because of misinformation.

In the 13th century BC, Rameses the Great spread lies and propaganda portraying the Battle of Kadesh as a stunning victory for the Egyptians; he depicted scenes of himself smiting his foes during the battle on the walls of nearly all his temples. The treaty between the Egyptians and the Hittites, however, reveals that the battle was actually a stalemate.

During the first century BC, Octavian ran a campaign of misinformation against his rival Mark Antony, portraying him as a drunkard, a womanizer, and a mere puppet of the Egyptian queen Cleopatra VII. He published a document purporting to be Mark Antony's will, which claimed that Mark Antony, upon his death, wished to be entombed in the mausoleum of the Ptolemaic pharaohs. Although the document may have been forged, it invoked outrage from the Roman populace. Mark Antony ultimately killed himself after his defeat in the Battle of Actium upon hearing false rumors propagated by Cleopatra herself claiming that she had committed suicide.

During the second and third centuries AD, false rumors were spread about Christians claiming that they engaged in ritual cannibalism and incest. In the late third century AD, the Christian apologist Lactantius invented and exaggerated stories about pagans engaging in acts of immorality and cruelty, while the anti-Christian writer Porphyry invented similar stories about Christians.

Medieval

In 1475, a fake news story in Trent claimed that the Jewish community had murdered a two-and-a-half-year-old Christian infant named Simonino. The story resulted in all the Jews in the city being arrested and tortured; 15 of them were burned at the stake. Pope Sixtus IV himself attempted to stamp out the story; however, by that point, it had already spread beyond anyone's control. Stories of this kind were known as "blood libel"; they claimed that Jews purposely killed Christians, especially Christian children, and used their blood for religious or ritual purposes.

Early modern

After the invention of the printing press in 1439, publications became widespread, but there was no standard of journalistic ethics to follow. By the 17th century, historians had begun the practice of citing their sources in footnotes. In 1633, when Galileo went on trial before the Inquisition, the demand for verifiable news increased.

During the 18th century publishers of fake news were fined and banned in the Netherlands; one man, Gerard Lodewijk van der Macht, was banned four times by Dutch authorities—and four times he moved and restarted his press. In the American colonies, Benjamin Franklin wrote fake news about murderous "scalping" Indians working with King George III in an effort to sway public opinion in favor of the American Revolution.

Canards, the successors of the 16th century pasquinade, were sold in Paris on the street for two centuries, starting in the 17th century. In 1793, Marie Antoinette was executed in part because of popular hatred engendered by a canard on which her face had been printed.

During the era of slavery in the United States, supporters of slavery propagated fake news stories about African Americans, whom white people considered to have lower status. Violence sometimes occurred in reaction to these fake news stories. In one instance, stories of African Americans spontaneously turning white spread through the South and struck fear into the hearts of many people.

Rumors and anxieties about slave rebellions were common in Virginia from the beginning of the colonial period, even though the only major uprising occurred in the 19th century. One particular instance of fake news regarding revolts occurred in 1730. William Gooch, the serving governor of Virginia at the time, reported that a slave rebellion had occurred but had been effectively put down, although this never happened. After Gooch discovered the falsehood, he ordered slaves found off plantations to be punished, tortured, and made prisoners.

19th century

A "lunar animal" said to have been discovered by John Herschel on the Moon

One instance of fake news was the Great Moon Hoax of 1835. The Sun newspaper of New York published articles about a real-life astronomer and a made-up colleague who, according to the hoax, had observed bizarre life on the Moon. The fictionalized articles successfully attracted new subscribers, and the penny paper suffered very little backlash after it admitted the next month that the series had been a hoax. Such stories were intended to entertain readers and not to mislead them.

From 1800 to 1810, James Cheetham made use of fictional stories to advocate politically against Aaron Burr. His stories were often defamatory and he was frequently sued for libel.

Yellow journalism peaked in the mid-1890s characterizing the sensationalist journalism that arose in the circulation war between Joseph Pulitzer's New York World and William Randolph Hearst's New York Journal. Pulitzer and other yellow journalism publishers goaded the United States into the Spanish–American War, which was precipitated when the USS Maine exploded in the harbor of Havana, Cuba. The term "fake news" itself was apparently first used in the 1890s during this era of sensationalist news reporting.

Joseph Pulitzer and William Randolph Hearst caricatured as they urged the U.S. into the Spanish–American War

20th century

Fake news became popular and spread quickly in the 1900s, when technological advances made media such as newspapers and magazines widely available and in high demand. Author Sarah Churchwell shows that when The New York Times reprinted the 1915 speech by Woodrow Wilson that popularized the phrase "America First," it also used the subheading "Fake News Condemned" to describe a section of his speech warning against propaganda and misinformation, although Wilson himself had not used the phrase "fake news". In his speech, Wilson warned of a growing problem with news that "turn[s] out to be falsehood," warning the country it "could not afford 'to let the rumors of irresponsible persons and origins get into the United States'" as that would undermine democracy and the principle of a free and accurate press.

Following a claim by CNN that "Trump was... the first US President to deploy [the term "fake news"] against his opponents", Sarah Churchwell's work was cited, without reference, to claim that "it was Woodrow Wilson who popularized the phrase 'fake news' in 1915", forcing her to counter this claim, saying that "the phrase 'fake news' was very much NOT popularized (or even used) by Wilson. The NY Times used it in passing but it didn't catch on. Trump was the first to popularize it."

Residents of New York City celebrate the news of the Armistice of 11 November 1918.

During the First World War, an example of fake news was the anti-German atrocity propaganda regarding an alleged "German Corpse Factory" in which the German battlefield dead were supposedly rendered down for fats used to make nitroglycerine, candles, lubricants, human soap and boot dubbing. Unfounded rumors regarding such a factory circulated in the Allied press starting in 1915, and by 1917 the English-language publication North China Daily News presented these allegations as true at a time when Britain was trying to convince China to join the Allied war effort; this was based on new, allegedly true stories from The Times and the Daily Mail that turned out to be forgeries. These false allegations became known as such after the war, and in the Second World War Joseph Goebbels used the story in order to deny the ongoing massacre of Jews as British propaganda. According to Joachim Neander and Randal Marlin, the story also "encouraged later disbelief" when reports about the Holocaust surfaced after the liberation of Auschwitz and Dachau concentration camps.

After Hitler and the Nazi Party rose to power in Germany in 1933, they established the Reich Ministry of Public Enlightenment and Propaganda under the control of Propaganda Minister Joseph Goebbels. The Nazis used both print and broadcast journalism to promote their agendas, either by obtaining ownership of those media or by exerting political influence. The expression "big lie" (in German: große Lüge) was coined by Adolf Hitler when he dictated his 1925 book Mein Kampf.

Throughout World War II, both the Axis and the Allies employed fake news in the form of propaganda to persuade the public at home and in enemy countries. The British Political Warfare Executive used radio broadcasts and distributed leaflets to discourage German troops.

The Carnegie Endowment for International Peace claimed that The New York Times printed fake news "depicting Russia as a socialist paradise." During 1932–1933, The New York Times published numerous articles by its Moscow bureau chief, Walter Duranty, who won a Pulitzer prize for a series of reports about the Soviet Union.

21st century

In the 21st century, both the impact of fake news and the use of the term became widespread.

The increasing openness, access and prevalence of the Internet resulted in its growth. New information and stories are published constantly and at a faster rate than ever, often lacking verification, and can be consumed by anyone with an Internet connection. Fake news has grown from being spread via email to proliferating on social media. Besides referring to made-up stories designed to deceive readers into clicking on links, maximizing traffic and profit, the term has also referred to satirical news, whose purpose is not to mislead but rather to inform viewers and share humorous commentary about real news and the mainstream media. United States examples of satire include the newspaper The Onion, Saturday Night Live's Weekend Update, and the television shows The Daily Show, The Colbert Report, and The Late Show with Stephen Colbert.

21st-century fake news is often intended to increase the financial profits of the news outlet. In an interview with NPR, Jestin Coler, former CEO of the fake media conglomerate Disinfomedia, described who writes fake news articles, who funds them, and why fake news creators create and distribute false information. Coler, who has since left his role as a fake news creator, said that his company employed 20 to 25 writers at a time and made $10,000 to $30,000 monthly from advertisements. Coler began his career in journalism as a magazine salesman before working as a freelance writer. He said he entered the fake news industry to prove to himself and others just how rapidly fake news can spread. Disinfomedia is not the only outlet responsible for the distribution of fake news; Facebook users play a major role in feeding into fake news stories by making sensationalized stories "trend", according to BuzzFeed media editor Craig Silverman, and Google AdSense effectively funds fake news websites and their content. Mark Zuckerberg, CEO of Facebook, said, "I think the idea that fake news on Facebook influenced the election in any way, I think is a pretty crazy idea", and then a few days later he blogged that Facebook was looking for ways to flag fake news stories.

Many online pro-Trump fake news stories have been traced to the city of Veles in Macedonia, where approximately seven different fake news organizations employed hundreds of teenagers to rapidly produce and plagiarize sensationalist stories for various U.S.-based companies and parties.

Kim LaCapria of the fact checking website Snopes.com has stated that, in America, fake news is a bipartisan phenomenon, saying that "[t]here has always been a sincerely held yet erroneous belief misinformation is more red than blue in America, and that has never been true." Jeff Green of Trade Desk agrees the phenomenon affects both sides. Green's company found that affluent and well-educated persons in their 40s and 50s are the primary consumers of fake news. He told Scott Pelley of 60 Minutes that this audience tends to live in an "echo chamber" and that these are the people who vote.

In 2014, the Russian Government used disinformation via networks such as RT to create a counter-narrative after Russian-backed Ukrainian rebels shot down Malaysia Airlines Flight 17. In 2016, NATO claimed it had seen a significant rise in Russian propaganda and fake news stories since the invasion of Crimea in 2014. Fake news stories originating from Russian government officials were also circulated internationally by Reuters news agency and published in the most popular news websites in the United States.

A 2018 study at Oxford University found that Trump's supporters consumed the "largest volume of 'junk news' on Facebook and Twitter":

"On Twitter, a network of Trump supporters consumes the largest volume of junk news, and junk news is the largest proportion of news links they share," the researchers concluded. On Facebook, the skew was even greater. There, "extreme hard right pages—distinct from Republican pages—share more junk news than all the other audiences put together."

In 2018, researchers from Princeton University, Dartmouth College and the University of Exeter examined the consumption of fake news during the 2016 U.S. presidential campaign. Their findings showed that Trump supporters and older Americans (over 60) were far more likely to consume fake news than Clinton supporters. Those most likely to visit fake news websites were the 10% of Americans who consumed the most conservative information. There was a very large difference in the share of fake news within total news consumption between Trump supporters (6%) and Clinton supporters (1%).

The study also showed that fake pro-Trump and fake pro-Clinton news stories were read by their supporters, but with a significant difference: Trump supporters consumed far more (40%) than Clinton supporters (15%). Facebook was by far the key "gateway" website where these fake stories were spread and which led people to then go to the fake news websites. Fact checks of fake news were rarely seen by consumers, with none of those who saw a fake news story being reached by a related fact check.

Brendan Nyhan, one of the researchers, emphatically stated in an interview on NBC News: "People got vastly more misinformation from Donald Trump than they did from fake news websites—full stop."

NBC NEWS: "It feels like there's a connection between having an active portion of a party that's prone to seeking false stories and conspiracies and a president who has famously spread conspiracies and false claims. In many ways, demographically and ideologically, the president fits the profile of the fake news users that you're describing."
NYHAN: "It's worrisome if fake news websites further weaken the norm against false and misleading information in our politics, which unfortunately has eroded. But it's also important to put the content provided by fake news websites in perspective. People got vastly more misinformation from Donald Trump than they did from fake news websites—full stop."

A 2019 study by researchers at Princeton and New York University found that a person's likelihood of sharing fake-news articles correlated more strongly with age than with education, sex, or political views. Eleven percent of users older than 65 shared an article consistent with the study's definition of fake news; just 3% of users aged 18 to 29 did the same.

Another issue in mainstream media is the filter bubble: on social media platforms, viewers are presented with pieces of information that the platform predicts they will like, based on their past behavior. This can create fake and biased news, because only part of the story, the portion the viewer liked, is being shared. As early as 1996, Nicolas Negroponte predicted a world where information technologies become increasingly customizable.

Special topics

Deepfakes and shallowfakes

Deepfakes (a portmanteau of "deep learning" and "fake") are synthetic media (AI-generated media) in which a person in an existing image or video is replaced with someone else's likeness.

Because a picture often has a greater impact than the corresponding words, deepfakes, which leverage powerful techniques from machine learning and artificial intelligence to manipulate or generate visual and audio content, have a particularly high potential to deceive. The main machine learning methods used to create deepfakes are based on deep learning and involve training generative neural network architectures, such as autoencoders or generative adversarial networks (GANs).
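
The adversarial training behind GAN-based deepfakes can be sketched in a few lines. The following minimal example (assuming PyTorch) trains a generator to produce vectors that a discriminator cannot distinguish from "real" ones; random vectors stand in for face images so that the sketch stays runnable, whereas real deepfake pipelines use convolutional networks and large face datasets.

    import torch
    import torch.nn as nn

    latent_dim, data_dim = 16, 64

    # Generator maps random noise to synthetic samples; discriminator scores realness.
    generator = nn.Sequential(
        nn.Linear(latent_dim, 128), nn.ReLU(),
        nn.Linear(128, data_dim), nn.Tanh(),
    )
    discriminator = nn.Sequential(
        nn.Linear(data_dim, 128), nn.LeakyReLU(0.2),
        nn.Linear(128, 1), nn.Sigmoid(),
    )

    loss_fn = nn.BCELoss()
    g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
    d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

    for step in range(100):
        real = torch.rand(32, data_dim) * 2 - 1   # placeholder for real images
        noise = torch.randn(32, latent_dim)
        fake = generator(noise)

        # Discriminator step: label real samples 1 and generated samples 0.
        d_loss = loss_fn(discriminator(real), torch.ones(32, 1)) + \
                 loss_fn(discriminator(fake.detach()), torch.zeros(32, 1))
        d_opt.zero_grad(); d_loss.backward(); d_opt.step()

        # Generator step: try to make the discriminator output 1 for generated samples.
        g_loss = loss_fn(discriminator(fake), torch.ones(32, 1))
        g_opt.zero_grad(); g_loss.backward(); g_opt.step()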

Deepfakes have garnered widespread attention for their uses in creating fake news (notably political), but also child sexual abuse material, celebrity pornographic videos, revenge porn, hoaxes, bullying, and financial fraud. This has elicited responses from both industry and government to detect and limit their use.

Deepfakes generally require specialist software or knowledge, but much the same effect can be achieved quickly by anyone using standard video editing software on most modern computers. These videos (termed "shallowfakes") have obvious flaws, but nevertheless can still be widely believed as real, or at least have entertainment value that reinforces beliefs. One of the first to go viral, "The Hillary Song", watched over 3 million times by Donald Trump supporters, shows Hillary Clinton being humiliated on stage by The Rock, a former wrestling champion. The surprised creator (who hates all politicians) had pasted images of Clinton into a genuine video of The Rock humiliating a wrestling official.

Bots on social media

In the mid-1990s, Nicolas Negroponte anticipated a world in which news delivered through technology becomes progressively personalized. In his 1996 book Being Digital he predicted a digital life where news consumption becomes an extremely personalized experience and newspapers adapt content to reader preferences. This prediction has since been reflected in the news and social media feeds of the modern day.

Bots have the potential to increase the spread of fake news, as they use algorithms to decide what articles and information specific users like, without taking into account the authenticity of an article. Bots mass-produce and spread articles regardless of the credibility of their sources, allowing them to play an essential role in the mass spread of fake news; bots can also create fake accounts and personalities on the web that then gain followers, recognition, and authority. Additionally, almost 30% of the spam and content spread on the Internet originates from these software bots.
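
As a sketch of the point above, the following Python snippet selects content purely by predicted engagement, with no notion of source credibility, which is what lets automated accounts amplify fabricated stories as readily as accurate ones. The article data and scoring weights are invented for illustration.

    # Hypothetical feed items; note there is no field for source credibility.
    articles = [
        {"title": "Celebrity secretly replaced by clone", "likes": 50_000, "shares": 20_000},
        {"title": "New study on regional rainfall trends", "likes": 900, "shares": 150},
    ]

    def engagement_score(article):
        # A bot (or a naive feed algorithm) optimises only for predicted engagement.
        return article["likes"] + 3 * article["shares"]

    # The fabricated story wins purely because it generates more engagement.
    to_amplify = max(articles, key=engagement_score)
    print(to_amplify["title"])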

In the 21st century, the capacity to mislead was enhanced by the widespread use of social media. For example, one 21st-century feature that enabled the proliferation of fake news was the Facebook News Feed. In late 2016, fake news gained notoriety following the uptick in news content spread by this means and its prevalence on the micro-blogging site Twitter. In the United States, 62% of Americans use social media to receive news. Many people use their Facebook News Feed to get news, despite Facebook not being considered a news site. According to Craig McClain, over 66% of Facebook users obtain news from the site. This, in combination with increased political polarization and filter bubbles, led to a tendency for readers to mainly read headlines.

Numerous individuals and news outlets have stated that fake news may have influenced the outcome of the 2016 American Presidential Election. Fake news saw higher sharing on Facebook than legitimate news stories, which analysts explained was because fake news often panders to expectations or is otherwise more exciting than legitimate news. Facebook itself initially denied this characterization. A Pew Research poll conducted in December 2016 found that 64% of U.S. adults believed completely made-up news had caused "a great deal of confusion" about the basic facts of current events, while 24% claimed it had caused "some confusion" and 11% said it had caused "not much or no confusion". Additionally, 23% of those polled admitted they had personally shared fake news, whether knowingly or not. Researchers from Stanford assessed that only 8% of readers of fake news recalled and believed in the content they were reading, though the same share of readers also recalled and believed in "placebos"—stories they did not actually read, but that were produced by the authors of the study. In comparison, over 50% of the participants recalled reading and believed in true news stories.

By August 2017 Facebook stopped using the term "fake news" and used "false news" in its place instead. Will Oremus of Slate wrote that because supporters of U.S. President Donald Trump had redefined the word "fake news" to refer to mainstream media opposed to them, "it makes sense for Facebook—and others—to cede the term to the right-wing trolls who have claimed it as their own."

Research from Northwestern University concluded that 30% of all fake news traffic, as opposed to only 8% of real news traffic, could be linked back to Facebook. The research concluded fake news consumers do not exist in a filter bubble; many of them also consume real news from established news sources. The fake news audience is only 10 percent of the real news audience, and most fake news consumers spent a relatively similar amount of time on fake news compared with real news consumers—with the exception of Drudge Report readers, who spent more than 11 times longer reading the website than other users.

In the wake of these events in the West, Ren Xianliang of the Cyberspace Administration of China suggested that a "reward and punish" system be implemented to combat fake news.

Internet trolls

In Internet slang, a troll is a person who sows discord on the Internet by starting arguments or upsetting people, by posting inflammatory, extraneous, or off-topic messages in an online community (such as a newsgroup, forum, chat room, or blog) with the intent of provoking readers into an emotional response or off-topic discussion, often for the troll's amusement. Internet trolls also feed on attention.

The idea of internet trolls gained popularity in the 1990s, though its meaning shifted in 2011. Whereas it once denoted provocation, it is a term now widely used to signify the abuse and misuse of the Internet. Trolling comes in various forms, and can be dissected into abuse trolling, entertainment trolling, classical trolling, flame trolling, anonymous trolling, and kudos trolling. It is closely linked to fake news, as internet trolls are now largely interpreted as perpetrators of false information, information that can often be passed along unwittingly by reporters and the public alike.

When interacting with each other, trolls often share misleading information that contributes to the fake news circulated on sites like Twitter and Facebook. In the 2016 American election, Russia paid over 1,000 internet trolls to circulate fake news and disinformation about Hillary Clinton; they also created social media accounts that resembled voters in important swing states, spreading influential political standpoints. In February 2019, Glenn Greenwald wrote that the cybersecurity company New Knowledge "was caught just six weeks ago engaging in a massive scam to create fictitious Russian troll accounts on Facebook and Twitter in order to claim that the Kremlin was working to defeat Democratic Senate nominee Doug Jones in Alabama."

Fake news hoaxes

Paul Horner is perhaps the best known example of a person who deliberately creates fake news for a purpose. He has been referred to as a "hoax artist" by the Associated Press and the Chicago Tribune. The Huffington Post called Horner a "performance artist".

Horner was behind several widespread hoaxes, such as the claim that the graffiti artist Banksy had been arrested, and, according to CBS News, had an "enormous impact" on the 2016 U.S. presidential election. His stories consistently appeared in Google's top news search results, were shared widely on Facebook, and were taken seriously and shared by third parties such as Trump presidential campaign manager Corey Lewandowski, Eric Trump, ABC News and the Fox News Channel. Horner later claimed that the intention was "to make Trump's supporters look like idiots for sharing my stories".

In a November 2016 interview with The Washington Post, Horner expressed regret for the role his fake news stories played in the election and surprise at how gullible people were in treating his stories as news. In February 2017 Horner said,

I truly regret my comment that I think Donald Trump is in the White House because of me. I know all I did was attack him and his supporters and got people not to vote for him. When I said that comment it was because I was confused how this evil man got elected President and I thought maybe instead of hurting his campaign, maybe I had helped it. My intention was to get his supporters NOT to vote for him and I know for a fact that I accomplished that goal. The far right, a lot of the Bible thumpers and alt-right were going to vote him regardless, but I know I swayed so many that were on the fence.

In 2017, Horner stated that a fake story of his about a rape festival in India helped generate over $250,000 in donations to GiveIndia, a site that helps rape victims in India. Horner said he dislikes being grouped with people who write fake news solely to be misleading: "They just write it just to write fake news, like there's no purpose, there's no satire, there's nothing clever."

Donald Trump and hoax news

Donald Trump frequently mentioned fake news on Twitter to criticize the media in the United States, including CNN and The New York Times.

The term "fake news" has at times been used to cast doubt upon credible news, and former U.S. president Donald Trump has been credited with popularizing and re-defining (or misusing) the term to refer to any negative press coverage of himself he dislikes, regardless of veracity. Trump has claimed that the mainstream American media (which he calls the "lying press") regularly reports "fake news" or "hoax news", despite the fact that he generated considerable false and inaccurate or misleading statements himself. According to The Washington Post's Fact Checker's database, Trump made 30,573 false or misleading claims during his four years in office, though the number of unique false claims is much lower because many of his major false claims were repeated hundreds of times each. A searchable online database is available for each documented false claim, and a datafile is available for download for use in academic studies of misinformation and lying. An analysis of the first 16,000 false claims is available as a book.

Trump has often attacked mainstream news publications, deeming them "fake news" and the "enemy of the people". Every few days, Trump would issue a threat against the press based on his claims of "fake news". There have been many instances in which norms that protect press freedom were pushed or even upended during the Trump era.

According to Jeff Hemsley, a Syracuse University professor who studies social media, Trump uses this term for any news that is not favorable to him or which he simply dislikes. Trump provided a widely cited example of this interpretation in a tweet on May 9, 2018:

Donald J. Trump
@realDonaldTrump

The Fake News is working overtime. Just reported that, despite the tremendous success we are having with the economy & all things else, 91% of the Network News about me is negative (Fake). Why do we work so hard in working with the media when it is corrupt? Take away credentials?

May 9, 2018

Chris Cillizza described the tweet on CNN as an "accidental" revelation about Trump's "'fake news' attacks", and wrote: "The point can be summed up in these two words from Trump: 'negative (Fake).' To Trump, those words mean the same thing. Negative news coverage is fake news. Fake news is negative news coverage." Other writers made similar comments about the tweet. Dara Lind wrote in Vox: "It's nice of Trump to admit, explicitly, what many skeptics have suspected all along: When he complains about 'fake news,' he doesn't actually mean 'news that is untrue'; he means news that is personally inconvenient to Donald Trump." Jonathan Chait wrote in New York magazine: "Trump admits he calls all negative news 'fake'." "In a tweet this morning, Trump casually opened a window into the source code for his method of identifying liberal media bias. Anything that's negative is, by definition, fake." Philip Bump wrote in The Washington Post: "The important thing in that tweet ... is that he makes explicit his view of what constitutes fake news. It's negative news. Negative. (Fake.)" In an interview with Lesley Stahl, before the cameras were turned on, Trump explained why he attacks the press: "You know why I do it? I do it to discredit you all and demean you all so that when you write negative stories about me no one will believe you."

Author and literary critic Michiko Kakutani has described developments in the right-wing media and websites:

"Fox News and the planetary system of right-wing news sites that would orbit it and, later, Breitbart, were particularly adept at weaponizing such arguments and exploiting the increasingly partisan fervor animating the Republican base: They accused the media establishment of 'liberal bias', and substituted their own right-wing views as 'fair and balanced'—a redefinition of terms that was a harbinger of Trump's hijacking of 'fake news' to refer not to alt-right conspiracy theories and Russian troll posts, but to real news that he perceived as inconvenient or a threat to himself."

In September 2018, National Public Radio noted that Trump had expanded his use of the terms "fake" and "phony" to "an increasingly wide variety of things he doesn't like": "The range of things Trump is declaring fake is growing too. Last month he tweeted about "fake books," "the fake dossier," "fake CNN," and he added a new claim—that Google search results are "RIGGED" to mostly show only negative stories about him." They graphed his expanding use in columns labeled "Fake news", "Fake (other)" and "Phony".

By country

Europe

Austria

Politicians in Austria dealt with the impact of fake news and its spread on social media after the 2016 presidential campaign in the country. In December 2016, a court in Austria issued an injunction against Facebook Europe, mandating it block negative postings related to Eva Glawischnig-Piesczek, the Austrian Green Party chairwoman. According to The Washington Post, the postings about her on Facebook "appeared to have been spread via a fake profile" and directed derogatory epithets towards the Austrian politician. The derogatory postings were likely created by the same fake profile that had previously been used to attack Alexander van der Bellen, who won the election for President of Austria.

Belgium

In 2006, the French-speaking broadcaster RTBF showed a fictional breaking news report claiming that Belgium's Flemish Region had proclaimed independence. Staged footage of the royal family evacuating and of the Belgian flag being lowered from a pole was used to add credence to the report. It was not until 30 minutes into the report that a sign stating "Fiction" appeared on screen. The RTBF journalist who created the hoax said its purpose was to demonstrate the magnitude of what would be at stake if a partition of Belgium were really to happen.

Czech Republic

Fake news outlets in the Czech Republic redistribute news in Czech and English originally produced by Russian sources. Czech president Miloš Zeman has been supporting media outlets accused of spreading fake news.

The Centre Against Terrorism and Hybrid Threats (CTHH) is a unit of the Ministry of the Interior of the Czech Republic aimed primarily at countering disinformation, fake news, hoaxes and foreign propaganda. The CTHH started operations on January 1, 2017. It has been criticized by Czech President Miloš Zeman, who said: "We don't need censorship. We don't need thought police. We don't need a new agency for press and information as long as we want to live in a free and democratic society."

In 2017, media activists launched the website Konspiratori.cz, which maintains a list of conspiracy and fake news outlets in Czech.

European Union

In 2018, the European Commission introduced a first voluntary code of practice on disinformation. In 2022, this became a strengthened co-regulation scheme, with responsibility shared between the regulators and the companies that are signatories to the code. It complements the Digital Services Act agreed by the 27-country European Union, which already includes a section on combating disinformation.

Finland

Officials from 11 countries met in Helsinki in November 2016 to plan the formation of a center to combat disinformation cyber-warfare, including the spread of fake news on social media. The center is planned to be located in Helsinki and to combine efforts from 10 countries, including Sweden, Germany, Finland and the U.S. Juha Sipilä, Prime Minister of Finland from 2015 to 2019, planned to address the topic of the center in spring 2017 with a motion before Parliament.

Deputy Secretary of State for EU Affairs Jori Arvonen said cyber-warfare, such as hybrid cyber-warfare intrusions into Finland from Russia and the Islamic State, became an increasing problem in 2016. Arvonen cited examples including online fake news, disinformation, and the "little green men" of the Russo-Ukrainian War.

France

During the ten-year period preceding 2016, France witnessed an increase in the popularity of far-right alternative news sources called the fachosphère ("facho" referring to fascist), the extreme right on the Internet. According to sociologist Antoine Bevort, citing data from Alexa Internet rankings, the most consulted political websites in France in 2016 included Égalité et Réconciliation, François Desouche, and Les Moutons Enragés. These sites increased skepticism towards mainstream media from both left and right perspectives.

In September 2016, the country faced controversy regarding fake websites providing false information about abortion. The National Assembly moved forward with intentions to ban such fake sites. Laurence Rossignol, France's minister for women, informed parliament that though the fake sites look neutral, their actual intention was to give women fake information.

During the 2017 presidential election cycle, France saw an uptick in disinformation and propaganda. A study of the diffusion of political news during that cycle suggests that one in four links shared on social media came from sources that actively contest traditional media narratives. Facebook deleted 30,000 Facebook accounts in France associated with fake political information.

In April 2017, Emmanuel Macron's presidential campaign was targeted by fake news articles more than the campaigns of conservative candidate Marine Le Pen and socialist candidate Benoît Hamon. One fake article even announced that Marine Le Pen had won the presidency before the people of France had voted. Macron's professional and private emails, as well as memos, contracts and accounting documents, were posted on a file sharing website. The leaked documents were mixed with fake ones on social media in an attempt to sway the upcoming presidential election. Macron said he would combat fake news of the sort that had been spread during his election campaign.

Initially, the leak was attributed to APT28, a group tied to Russia's GRU military intelligence directorate. However, the head of the French cyber-security agency ANSSI later said there was no evidence that the hack leading to the leaks had anything to do with Russia, saying the attack was so simple that "we can imagine that it was a person who did this alone. They could be in any country."

Germany

German Chancellor Angela Merkel lamented the problem of fraudulent news reports in a November 2016 speech, days after announcing her campaign for a fourth term as leader of her country. In a speech to the German parliament, Merkel was critical of such fake sites, saying they harmed political discussion. Merkel called attention to the need of government to deal with Internet trolls, bots, and fake news websites. She warned that such fraudulent news websites were a force increasing the power of populist extremism, and called fraudulent news a growing phenomenon that might need to be regulated in the future. The head of Germany's foreign intelligence agency, the Federal Intelligence Service, Bruno Kahl, warned of the potential for cyberattacks by Russia in the 2017 German election. He said the cyberattacks would take the form of the intentional spread of disinformation, with the goal of increasing chaos in political debates. The head of Germany's domestic intelligence agency, the Federal Office for the Protection of the Constitution, Hans-Georg Maassen, said sabotage by Russian intelligence was a present threat to German information security. German government officials and security experts later said there was no Russian interference during the 2017 German federal election. The German term Lügenpresse, or lying press, has been used since the 19th century, and specifically during World War One, as a strategy to attack news spread by political opponents.

The award-winning German journalist Claas Relotius resigned from Der Spiegel in 2018 after admitting numerous instances of journalistic fraud.

In early April 2020, Berlin politician Andreas Geisel alleged that a shipment of 200,000 N95 masks that the city had ordered from American producer 3M's China facility had been intercepted in Bangkok and diverted to the United States during the COVID-19 pandemic. Berlin police president Barbara Slowik stated that she believed "this is related to the US government's export ban." However, Berlin police confirmed that the shipment had not been seized by U.S. authorities, but was said to have simply been bought at a better price, probably by a German dealer or in China. This revelation outraged the Berlin opposition, whose CDU parliamentary group leader Burkard Dregger accused Geisel of "deliberately misleading Berliners" in order "to cover up its own inability to obtain protective equipment". FDP interior expert Marcel Luthe said: "Big names in international politics like Berlin's senator Geisel are blaming others and alleging US piracy to serve anti-American clichés." Politico Europe reported that "the Berliners are taking a page straight out of the Trump playbook and not letting facts get in the way of a good story."

Hungary

Hungary's illiberal and populist prime minister Viktor Orbán has cast George Soros, a financier, philanthropist and Hungarian-born Holocaust survivor, as the mastermind of a plot to undermine the country's sovereignty, replace native Hungarians with immigrants and destroy traditional values. This propaganda technique, together with the anti-Semitism still present in the country, appears to appeal to his right-wing voters: it mobilizes them by sowing fear in society and creating an enemy image, enabling Orbán to present himself as the protector of the nation against this illusory enemy.

Italy

Journalists must be registered with the Ordine Dei Giornalisti (ODG) (transl.: Order of Journalists) and respect its disciplinary and training obligations, to guarantee "correct and truthful information, intended as right of individuals and of the community".

Under certain circumstances, spreading fake news may constitute a criminal offence under the Italian penal code.

Since 2018, it has been possible to report fake news directly on the Polizia di Stato website.

The phenomenon is monitored by the DIS, supported by AISE and AISI.

Netherlands

In March 2018, the European Union's East StratCom Task Force compiled a list, dubbed a "hall of shame", of articles reflecting suspected Kremlin attempts to influence political decisions. Controversy arose when three Dutch media outlets claimed they had been wrongfully singled out because of quotes attributed to people with non-mainstream views. The news outlets were ThePostOnline, GeenStijl, and De Gelderlander. All three were flagged for publishing articles critical of Ukrainian policies, and none received any forewarning or opportunity to appeal beforehand. The incident contributed to the growing debate over what defines news as fake, and over how freedoms of press and speech can be protected during attempts to curb the spread of false news.

Poland

Polish historian Jerzy Targalski noted fake news websites had infiltrated Poland through anti-establishment and right-wing sources that copied content from Russia Today. Targalski observed there existed about 20 specific fake news websites in Poland that spread Russian disinformation in the form of fake news. One example cited was fake news that Ukraine announced the Polish city of Przemyśl as occupied Polish land.

Poland's anti-EU Law and Justice (PiS) government has been accused of spreading "illiberal disinformation" to undermine public confidence in the European Union. Maria Snegovaya of Columbia University said: "The true origins of this phenomenon are local. The policies of Fidesz and Law and Justice have a lot in common with Putin's own policies."

Some mainstream outlets have long been accused of fabricating half-true or outright false information. In 2010, one of the popular TV stations, TVN, attributed to Jarosław Kaczyński (then an opposition leader) the statement that "there will be times when true Poles will come to power". However, Kaczyński never uttered those words in the speech being reported on.

Romania

On March 16, 2020, Romanian President Klaus Iohannis signed an emergency decree, giving authorities the power to remove, report or close websites spreading "fake news" about the COVID-19 pandemic, with no opportunity to appeal.

Russia

In March 2019, Russia passed a new law to ban websites from spreading false information. In addition to tackling fake news, the legislation specifically punishes sources or websites that publish material insulting the state, government symbols or other political figures. Repeat offenders face a 15-day jail sentence.

During the 2022 Russian invasion of Ukraine, the Russian government passed a law prohibiting "fake news" regarding the Russian military, which was broadly defined as any information that is deemed by the Russian government to be false, including the use of the terms invasion and war to refer to the invasion. Violations of the law are punishable with up to 15 years of imprisonment. International news organizations in multiple countries ceased operating in Russia and journalists emigrated from Russia en masse after the law was passed, while some domestic non-state news organizations were blocked by the Russian government.

Serbia

In 2018, the International Research & Exchanges Board described the media situation in Serbia as the worst in recent history, with the Media Sustainability Index dropping because the media were the most polarized in almost 20 years, amid an increase in fake news and editorial pressure on the media. According to the Serbian investigative journalism portal Crime and Corruption Reporting Network, more than 700 fake news stories were published on the front pages of pro-government tabloids during 2018. Many of them concerned alleged attacks on president Aleksandar Vučić and attempted coups, as well as messages of support for him from Vladimir Putin. The best-selling newspaper in Serbia is the pro-government tabloid Informer, which most often presents Vučić as a powerful person under constant attack, and also carries anti-European content and pro-war rhetoric. Since Vučić's party came to power, Serbia has seen a surge of internet trolls and pages on social networks praising the government and attacking its critics, free media and the opposition in general. These include fake accounts run by a handful of dedicated employees, as well as the Facebook page associated with a Serbian franchise of the far-right Breitbart News website, whose accuracy is disputed.

Spain

Fake news in Spain became much more prevalent in the 2010s, but has been prominent throughout Spain's history. The United States government, for instance, once published a fake article regarding the purchase of the Philippines from Spain, which it had already purchased. Despite this, the topic of fake news traditionally received little attention in Spain until the newspaper El País launched a blog dedicated strictly to verified news, entitled "Hechos", which literally translates to "facts" in Spanish. David Alandete, the managing editor of El País, said that many people misinterpret fake news as real because the sites "have similar names, typography, layouts and are deliberately confusing". Alandete made it the new mission of El País "to respond to fake news". María Ramírez of Univision Communications has stated that much of the political fake news circulating in Spain is due to the lack of investigative journalism on the topics. More recently, El País created fact-checking positions for five employees to try to debunk the fake news released.

Sweden

The Swedish Security Service issued a report in 2015 identifying propaganda from Russia infiltrating Sweden, with the objective of amplifying pro-Russian propaganda and inflaming societal conflicts. The Swedish Civil Contingencies Agency (MSB), part of the Ministry of Defence of Sweden, identified fake news reports targeting Sweden in 2016 that originated from Russia. MSB official Mikael Tofvesson stated that a pattern emerged in which views critical of Sweden were constantly repeated. The Local identified these tactics as a form of psychological warfare. The newspaper reported that the MSB identified Russia Today and Sputnik News as significant fake news purveyors. As a result of the growth of this propaganda in Sweden, the MSB planned to hire six additional security officials to fight back against the campaign of fraudulent information.

According to the Oxford Internet Institute, eight of the top 10 "junk news" sources during the 2018 Swedish general election campaign were Swedish, and "Russian sources comprised less than 1% of the total number of URLs shared in the data sample."

Ukraine

Since the Euromaidan and the beginning of the Ukrainian crisis in 2014, the Ukrainian media have circulated several fake news stories and misleading images, including a photograph of a dead rebel with a Photoshopped tattoo that allegedly indicated he belonged to Russian special forces, and a claimed threat of a Russian nuclear attack against Ukrainian troops. The recurring theme of these fake news stories was that Russia was solely to blame for the crisis and the war in Donbass.

In 2015, the Organization for Security and Co-operation in Europe published a report criticizing Russian disinformation campaigns designed to disrupt relations between Europe and Ukraine after the ouster of Viktor Yanukovych. According to Deutsche Welle, similar tactics were used by fake news websites during the U.S. elections. A website, StopFake, was created by Ukrainian activists in 2014 to debunk fake news in Ukraine, including media portrayals of the Ukrainian crisis.

On May 29, 2018, the Ukrainian media and state officials announced that the Russian journalist Arkady Babchenko had been assassinated in his apartment in Kyiv. Babchenko later turned out to be alive, and the Security Service of Ukraine claimed that the staged assassination was needed to arrest a person who was allegedly planning a real assassination. Alexander Baunov, writing for Carnegie.ru, noted that the staged assassination of Babchenko was the first instance of fake news delivered directly by the highest officials of a state.

United Kingdom

Under King Edward I of England (r. 1272–1307), "a statute was passed which made it a grave offence to devise or tell any false news of prelates, dukes, earls, barons, or nobles of the realm".

In 1702 Queen Anne of England issued a proclamation "for restraining the spreading false news, and printing and publishing of irreligious and seditious papers and libels".

On December 8, 2016, the Chief of the Secret Intelligence Service (MI6), Alex Younger, delivered a speech to journalists at MI6 headquarters in which he called fake news and propaganda damaging to democracy. Younger said the mission of MI6 was to combat propaganda and fake news in order to deliver to his government a strategic advantage in the information-warfare arena, and to assist other nations, including those in Europe. He called such methods of online fake-news propaganda a "fundamental threat to our sovereignty". Younger said all nations that hold democratic values should feel the same concern over fake news.

However, definitions of "fake news" have been controversial in the UK. Dr Claire Wardle advised some UK Members of Parliament against using the term in certain circumstances "when describing the complexity of information disorder", as the term "fake news" is "woefully inadequate":

Neither the words 'fake' nor 'news' effectively capture this polluted information ecosystem. Much of the content used as examples in debates on this topic are not fake, they are genuine but used out of context or manipulated. Similarly, to understand the entire ecosystem of polluted information, we need to consider far more than content that mimics 'news'.

In October 2020, a hoax claim made by a spoof Twitter account, about the supposed reopening of Woolworths stores, was repeated without verification by news sites including the Daily Mail and Daily Mirror (and the latter's regional sister titles).

Asia

China

Fake news from the 2016 U.S. election spread to China. Articles popularized within the United States were translated into Chinese and spread within China. The government of China used the growing problem of fake news as a rationale for increasing Internet censorship in China in November 2016. China then published an editorial in its Communist Party newspaper The Global Times called "Western media's crusade against Facebook", criticizing the "unpredictable" political problems posed by the freedoms enjoyed by users of Twitter, Google, and Facebook. Chinese government leaders meeting in Wuzhen at the third World Internet Conference in November 2016 said fake news in the U.S. election justified adding more curbs to free and open use of the Internet. Ren Xianliang, deputy minister at the Cyberspace Administration of China, said increasing online participation led to "harmful information" and fraud. Kam Chow Wong, a former Hong Kong law enforcement official and criminal justice professor at Xavier University, praised attempts in the U.S. to patrol social media. The Wall Street Journal noted that China's themes of Internet censorship became more relevant at the World Internet Conference due to the outgrowth of fake news.

The issue of fake news in the 2016 United States election gave the Chinese Government a reason to further criticize Western democracy and press freedom. The Chinese government accused Western media organisations of bias, in a move apparently inspired by President Trump.

In March 2017, the People's Daily, a newspaper run by the ruling Chinese Communist Party, denounced news coverage of the torture of Chinese lawyer and human rights advocate Xie Yang, claiming it to be fake news. The newspaper published a Twitter post declaring that "Foreign media reports that police tortured a detained lawyer is FAKE NEWS, fabricated to tarnish China's image". The state-owned Xinhua News Agency claimed that "the stories were essentially fake news". The Chinese government often accused Western news organizations of being biased and dishonest.

The Chinese government also claimed that there were people who posed as journalists who spread negative information on social media in order to extort payment from their victims to stop doing so. David Bandurski of University of Hong Kong's China Media Project said that this issue continued to worsen.

Hong Kong, China

During the 2019–20 Hong Kong protests, the Chinese government was accused of using fake news to spread misinformation regarding the protests, including describing the protests as "riots" and the protesters as "radicals" seeking independence for the city. Due to online censorship in China, citizens inside mainland China could not read news reports from some media outlets. Facebook, Twitter and YouTube also found that misinformation was being spread with fake accounts and advertisements by state-backed media, and large numbers of accounts were suspended.

Dot Dot News, a pro-Beijing online media outlet based in Hong Kong, was banned by Facebook for distributing fake news and hate speech.

India

Fake news in India has led to violent incidents between castes and religions, interfering with public policies. It often spreads through the smartphone instant messenger WhatsApp, which had 200 million monthly active users in the country as of February 2017.

Indonesia

Indonesia is reported to have the fourth-highest number of Facebook users in the world. Indonesia has seen an increase in the amount of fake news or "hoaxes" on social media, particularly around elections in 2014 and 2019. This has been accompanied by increased polarization within the country.

During the 2014 presidential election, the eventual-winning candidate Joko Widodo became a target of a smear campaign by Prabowo Subianto's supporters that falsely claimed he was the child of Indonesian Communist Party members, of Chinese descent, and a Christian. After Widodo won, Subianto challenged the results, making claims of widespread fraud that were not upheld. Observers found that the election was carried out fairly.

According to Mafindo, which tracks fake news in Indonesia, political disinformation increased by 61% between December 2018 and January 2019, leading up to the 2019 presidential election. Both political candidates and electoral institutions were targeted. Both sides formed dedicated anti-hoax groups to counter attacks on social media, and the Indonesian government held weekly fake news briefings. Once again, the losing candidate refused to accept the result and claimed that there had been fraud, without presenting any supporting evidence. Protests, rioting, and deaths of protesters were reported.

Fake news in Indonesia frequently relates to alleged Chinese imperialism (including Sinicization), Christianization, and communization. Inflaming ethnic and political tensions is potentially deadly in Indonesia, given its recent incidents of domestic terrorism and its long and bloody history of anti-communist, anti-Christian and anti-Chinese pogroms cultivated by Suharto's U.S.-backed right-wing dictatorship.

The Indonesian government, watchdog groups, and even religious organizations have taken steps to prevent the spreading of disinformation, through steps such as blocking certain websites and creating fact-check apps. The largest Islamic mass organization in Indonesia, Nahdlatul Ulama, has created an anti-fake news campaign called #TurnBackHoax, while other Islamic groups have defined such propagation as tantamount to a sin. While the government currently views criminal punishment as its last resort, officials are working hard to guarantee law enforcement will respect the freedom of expression.

Malaysia

In April 2018, Malaysia implemented the Anti-Fake News Act 2018, a controversial law that made publishing and circulating misleading information a crime punishable by up to six years in prison and/or fines of up to 500,000 ringgit. At the time of implementation, the country's prime minister was Najib Razak, whose associates were connected, by a United States Department of Justice report, to the mishandling of at least $3.5 billion. Of that sum, $731 million was deposited into bank accounts controlled by Razak. The connection between the fake news law and Razak's scandal was made clear by the Malaysian minister of communications and multimedia, Salleh Said Keruak, who said that tying Razak to a specific dollar amount could be a prosecutable offense. In the 2018 Malaysian general election, Najib Razak lost his position as prime minister to Mahathir Mohamad, who had vowed in his campaign to abolish the fake news law, as it had been used to target him. After winning the election, Mahathir said, "Even though we support freedom of press and freedom of speech, there are limits." As of May 2018, Mahathir supported amending the law rather than fully abolishing it.

Paul Bernal, a lecturer in information technology, fears that the fake news epidemic is a "Trojan horse" for countries like Malaysia to "control uncomfortable stories". The vagueness of the law means that satirists, opinion writers, and journalists who make errors could face prosecution. The law also makes it illegal to share fake news stories. In one instance, a Danish man and a Malaysian citizen were arrested for posting false news stories online and were sentenced to a month in jail.

Myanmar (Burma)

In 2015, BBC News reported on fake stories, using unrelated photographs and fraudulent captions, shared online in support of the Rohingya. Fake news has negatively affected individuals in Myanmar, leading to a rise in violence against Muslims in the country. Online participation surged from one percent to 20 percent of Myanmar's total populace from 2014 to 2016. Fake stories from Facebook were reprinted in paper periodicals called Facebook and The Internet. False reporting related to practitioners of Islam in the country was directly correlated with increased attacks on Muslims in Myanmar. BuzzFeed journalist Sheera Frenkel reported that fake news fictitiously stated that believers in Islam had acted out in violence at Buddhist locations, and she documented a direct relationship between the fake news and violence against Muslim people. Frenkel noted that countries that were relatively newer to Internet exposure were more vulnerable to the problems of fake news and fraud.

Pakistan

Khawaja Muhammad Asif, the Minister of Defence of Pakistan, threatened on Twitter to attack Israel with nuclear weapons after a false story claimed that Avigdor Lieberman, the Israeli Minister of Defense, had said, "If Pakistan send ground troops into Syria on any pretext, we will destroy this country with a nuclear attack."

Philippines

Fake news sites have become rampant among Philippine audiences, especially when shared on social media. Politicians have started filing bills to combat fake news, and three Senate hearings have been held on the topic.

The Catholic Church in the Philippines has also released a missive speaking out against it.

Vera Files research at the end of 2017 and in 2018 showed that the most-shared fake news in the Philippines appeared to benefit two people the most: President Rodrigo Duterte (and his allies) and politician Bongbong Marcos, with the most viral stories driven by shares on networks of Facebook pages. Most Philippine-audience Facebook pages and groups spreading online disinformation also bear "Duterte", "Marcos" or "News" in their names and are pro-Duterte. Online disinformation in the Philippines is overwhelmingly political, with most of it attacking groups or individuals critical of the Duterte administration. Many Philippine-audience fake news websites also appear to be controlled by the same operators, as they share common Google AdSense and Google Analytics IDs.

According to media scholar Jonathan Corpus Ong, Duterte's presidential campaign is regarded as the patient zero in the current era of disinformation, having preceded widespread global coverage of the Cambridge Analytica scandal and Russian trolls. Fake news is so established and severe in the Philippines that Facebook's Global Politics and Government Outreach Director Katie Harbath also calls it "patient zero" in the global misinformation epidemic, having happened before Brexit, the Trump nomination and the 2016 US Elections.

Singapore

Singapore criminalizes the propagation of fake news. Under existing law, "Any person who transmits or causes to be transmitted a message which he knows to be false or fabricated shall be guilty of an offense".

On March 18, 2015, a doctored screenshot of the Prime Minister's Office website claiming the demise of Lee Kuan Yew went viral, and several international news agencies such as CNN and China Central Television initially reported it as news, until corrected by the Prime Minister's Office. The image was created by a student to demonstrate to his classmates how easily fake news could be created and propagated. In 2017, the Singaporean news website Mothership.sg was criticized by the Ministry of Education (MOE) for propagating remarks falsely attributed to an MOE official. In addition, Minister of Law K Shanmugam singled out the online news website The States Times Review as an example of a source of fake news, as it had once claimed a near-zero turnout at the state funeral of President S. R. Nathan.

Following these incidents, Shanmugam stated that the existing legislation was limited and ineffective, and indicated that the government intended to introduce legislation to combat fake news in 2018. In 2017, the Ministry of Communications and Information set up Factually, a website intended to debunk false rumors regarding issues of public interest such as the environment, housing and transport, while in 2018 the Parliament of Singapore formed a Select Committee on Deliberate Online Falsehoods to consider new legislation to tackle fake news. On the recommendations of the select committee, the Singapore government introduced the Protection from Online Falsehoods and Manipulation Bill in April 2019.

Critics pointed out that the bill could introduce government censorship and increase government control over social media. The activist platform The Online Citizen regarded legislation against fake news as an attempt by the government to curb the free flow of information so that only information approved by the government would be disseminated to the public. In an online essay, activist and historian Thum Ping Tjin denied that fake news was a problem in Singapore and accused the People's Action Party government of being the only major source of fake news, claiming that detentions made without trial during Operation Coldstore and Operation Spectrum were based on fake news for party political gain. Facebook and Google opposed the introduction of the law, claiming that existing legislation was adequate to address the problem and that an effective way of combating misinformation is to educate citizens on how to distinguish reliable from unreliable information.

The bill was passed on June 3, 2019 and came into force on October 2, 2019. The law is designed to allow authorities to respond to fake news or false information through a graduated process of enforcing links to fact-checking statements, censoring websites or assets on social media platforms, and bringing criminal charges. There have been 75 recorded instances of POFMA's use since the law's introduction, the latest occurring on May 7, 2021.

South Korea

South Korean journalists and media experts lament the political hostility between South and North Korea, which distorts media coverage of North Korea. North Korea has attributed erroneous reporting to South Korea and the United States and has been especially critical of the media organization Chosun Ilbo, while the American journalist Barbara Demick has made similar criticisms of media coverage of the North.

On November 27, 2018, prosecutors raided the house of Gyeonggi Province governor Lee Jae-myung amid suspicions that his wife used a pseudonymous Twitter handle to spread fake news about President Moon Jae-in and other political rivals of her husband.

Taiwan

Taiwan's leaders, including President Tsai Ing-wen and Premier William Lai, accused China's troll army of spreading "fake news" via social media to support candidates more sympathetic to Beijing ahead of the 2018 Taiwanese local elections.

In December 2015, The China Post reported that a fake video shared online had shown people a light show purportedly staged at the Shihmen Reservoir. The Northern Region Water Resources Office confirmed there had been no light show at the reservoir and that the event had been fabricated. The fraud nevertheless led to an increase in tourist visits to the actual attraction.

According to a Time report on the global threat to free speech, the Taiwanese government has reformed its education policy to include "media literacy" as part of the school curriculum. The aim is to develop the critical thinking skills needed when using social media, including the skills needed to analyze propaganda and sources, so that students can identify fake news.

Americas

Brazil

Brazil faced increasing influence from fake news after the 2014 re-election of President Dilma Rousseff and Rousseff's subsequent impeachment in August 2016. BBC Brazil reported in April 2016 that in the week surrounding one of the impeachment votes, three of the five most-shared articles on Facebook in Brazil were fake. In 2015, reporter Tai Nalon resigned from her position at the Brazilian newspaper Folha de S.Paulo in order to start the first fact-checking website in Brazil, called Aos Fatos (To The Facts). Nalon told The Guardian there was a great deal of fake news, but hesitated to compare the problem to that experienced in the U.S. According to one survey, a greater share of people in Brazil (69%) believe that fake news influenced the outcome of their elections than in the United States (47%).

Jair Bolsonaro

President of Brazil Jair Bolsonaro claimed that he would not allow his government to use any of its 1.8 billion reais (US$487 million) media budget on purchases from fake news media (that is, media that does not support him). The BBC reported that Bolsonaro's campaign declared that media associating his campaign with the "extreme right" were themselves fake news. In 2020, Brazil's Supreme Court began an investigation into a purported campaign of disinformation by supporters of Bolsonaro. The Brazilian president claimed that this investigation was "unconstitutional" and that any restriction of fake news was an act of censorship. After an order by the Brazilian Supreme Court, Facebook removed "dozens" of fake accounts that were directly linked to Bolsonaro's offices and his sons and directed against politicians and media that opposed the president. A video of Bolsonaro falsely claiming that the anti-malarial drug hydroxychloroquine had been working everywhere against the coronavirus was also taken down by Facebook and Twitter. In regard to the COVID-19 pandemic, he accused his political opponents of exaggerating the severity of the virus. In a 2021 speech he claimed that the virus was not as bad as the media made it out to be and that it was a "fantasy" created by the media.

In the wake of the uptick in Amazon fires in 2019, it became clear that many of the forest fire photos that went viral were fake news. For example, Emmanuel Macron, president of France, tweeted a picture taken by a photographer who died in 2003.

Canada

Fake news online was brought to the attention of Canadian politicians in November 2016, as they debated assisting local newspapers. Member of Parliament for Vancouver Centre Hedy Fry specifically discussed fake news as an example of the ways in which publishers on the Internet are less accountable than print media. Discussion in parliament contrasted the increase of fake news online with the downsizing of Canadian newspapers and the impact on democracy in Canada. Representatives from Facebook Canada attended the meeting and told members of Parliament they felt it was their duty to assist individuals in gathering data online.

In January 2017, the Conservative leadership campaign of Kellie Leitch admitted to spreading fake news, including false claims that Justin Trudeau was financing Hamas. The campaign manager claimed he spread the news in order to provoke negative reactions so that he could determine those who "aren't real Conservatives".

Colombia

In the fall of 2016, fake news spread through WhatsApp impacted a vote critical to Colombian history. One of the lies spreading rapidly through WhatsApp was that Colombian citizens would receive a smaller pension so that former guerrilla fighters would get money. The misinformation arose around a plebiscite on whether citizens approved of the peace accord between the national government and the Revolutionary Armed Forces of Colombia (FARC). The peace accord would end five decades of war between rebel paramilitary groups and the Colombian government, a conflict that had killed hundreds of thousands and displaced millions of citizens throughout the country.

A powerful influence on the vote was the "no" campaign, which sought to convince the citizens of Colombia not to accept the peace accord because it would be letting the rebel group off "too easily". Álvaro Uribe, a former president of Colombia and leader of the Centro Democrático party, led the "no" campaign. Juan Manuel Santos, the president in 2016, had taken a more liberal approach during his presidency and won the Nobel Peace Prize in 2016 for his efforts towards a peace accord with the rebel forces; Uribe naturally held views opposed to those of Santos. Other stories spread through WhatsApp and easily misinterpreted by the public claimed that Santos planned to put Colombia under harsh rule like Cuba's, or chaos like Venezuela's under Hugo Chávez, though the details were never explained. In an interview, Juan Carlos Vélez, the "no" campaign manager, said their strategy was: "We discovered the viral power of social networks."

The "yes" campaign also took part in spreading fake news through WhatsApp. For instance, a photoshopped image of Centro Democrático senator Everth Bustamante holding a sign reading "I don't want guerrillas in congress" was circulated to portray him as a hypocrite, since he was a former left-wing M-19 guerrilla. The "no" campaign strongly influenced votes throughout Colombia: "yes" votes were strongest in areas with the highest numbers of victims, while "no" votes were strongest in areas influenced by Uribe. In the end, "no" received 50.2 percent of the vote against 49.8 percent for "yes". One result of the fake news spread through WhatsApp was that journalist Juanita León created a WhatsApp "lie detector" in January 2017 to fight fake news within the app. Although the accord was eventually signed, the WhatsApp misinformation prolonged the process and left citizens divided.

Mexico

In Mexico, people tend to rely heavily on social media and direct social contact as news sources, over television and print. Usage of and trust in all types of news sources (including social media) declined from 2017 to 2023. As of 2023, the most-used media companies in Mexico were TV Azteca news and Televisa. Televisa is both Mexico's largest television network, and the largest media network in the Spanish-speaking world. The three most frequently used social media platforms were Facebook, Youtube, and WhatsApp.

Prior to 2012, the country's major television networks were central to political communication in Mexico. They were also closely connected to the long-dominant PRI political party. Televisa has been criticized by journalists and academics for misrepresentation and manipulation of information, and for attacks on opponents. Televisa shaped the campaign of Enrique Peña Nieto, whose presidency from 2012 to 2018 was marked by scandals and decreased trust in television and print media.

By the 2012 Mexican general election, coordinated online disinformation campaigns were part of an "explosion of digital politics" in Mexico. Mexican politicians used digital strategies and algorithms to boost their apparent popularity and undermine or overwhelm opposing messages. Attempts to "hack" the "attention economy" were made to amplify false narratives, capture attention and dominate discourse. A network of trolls was formed as early as 2009, to be activated as needed in support of Peña Nieto. Estimates of the number of people involved, and how many were paid, vary widely from 20,000 to 100,000 people. Bots were also used to amplify messages and create "false universes of followers". Opposing voices were drowned out by generating large volumes of meaningless responses from "ghost followers". In one incident that was analyzed, 50 spam accounts generated 1,000 tweets per day.

Government activities following Peña Nieto's election included the amplification of support for controversial government initiatives. After the killing or disappearance of a group of activist students in 2014, algorithms were used to sabotage trending hashtags on Twitter such as #YaMeCanse [translated as #IHaveHadEnough]. Bots and trolls have also been used for threats and personal attacks. Between 2017 and 2019, the hashtag #SalarioRosa (Pink Salary for Vulnerability) was associated with political figure Alfredo del Mazo Maza and pushed to the top of Twitter's trending list through astroturfing, creating an appearance of grassroots support. Photographs of women were associated with fake accounts to create the impression that women were engaged in the discussion, "mimicking conversation".

As of 2018, 76-80% of people surveyed in Mexico worried about false information or fake news being used as a weapon, the highest rate of any country in the world. During the 2018 election, bot battles between candidates drowned out conversations by posting attacks, rumors, unsubstantiated claims, and deepfaked videos. Forty percent of the election-related tweets on Twitter mentioned Andrés Manuel López Obrador (AMLO). None of his opponents reached twenty percent of the tweets. Automation and artificial amplification used both commercial and political bot services. At one point at least ten pro- and anti-AMLO bots posted over a thousand tweets in a matter of hours using the hashtag #AMLOFest.

The collaborative journalism project Verificado 2018 was established to address misinformation during the 2018 presidential election. It involved at least eighty organizations, including local and national media outlets, universities and civil society and advocacy groups. The group researched online claims and political statements and published joint verifications. During the course of the election, they produced over 400 notes and 50 videos documenting false claims and suspect sites, and tracked instances where fake news went viral. Verificado.mx received 5.4 million visits during the election, with its partner organizations registering millions more.

One of the tactics they observed was the promotion of fabricated polls that exaggerated the support for various candidates. The fake polls were claimed to have been carried out by sources such as The New York Times and El Universal, one of Mexico's top newspapers. Another tactic was to share encrypted messages via WhatsApp. In response, Verificado set up a hotline where WhatsApp users could submit messages for verification and debunking; over 10,000 users subscribed to the hotline. Fake messages included false information about where and how to vote. One campaign urged anti-AMLO voters to check boxes for both of his opponents, an action that would result in disqualification of the vote.

Since his election as president in 2018, Andrés Manuel López Obrador (AMLO) has taken an adversarial stance toward the media. He has used his morning addresses, or mañaneras, to target journalists such as Carmen Aristegui, Carlos Loret de Mola, and Victor Trujillo. Article 19, an international human rights organization, considers the Mexican government to be using a "strategy of disinformation" while limiting access to public sources of information. Article 19 estimates that 26.5% (over a quarter) of the public information provided by the Mexican government is false. In 2019, Mexico became the most dangerous country in the world for journalists, with a higher death toll than active war zones.

During the pandemic, use of social media platforms such as WhatsApp and Facebook increased. In Mexico, the user base of the video app TikTok tripled from 2019 to 2021, reaching 17 million viewers. Patterns of Twitter activity during the pandemic suggest that astroturfing was used to create an appearance of widespread grassroots support for AMLO and for Hugo López-Gatell at a time when both men were being criticized for their handling of the COVID-19 pandemic in Mexico.

United States

Middle East and Africa

Armenia

According to a report by openDemocracy in 2020, the Armenian website Medmedia.am was spreading disinformation about the coronavirus pandemic, calling COVID-19 a "fake pandemic" and warning Armenians to refuse future vaccine programmes. The website is led by Gevorg Grigoryan, a doctor who has been critical of the Armenian government's health ministry and its vaccine programmes, and has a history of anti-LGBT statements, including remarks posted on Facebook in which he called for gay people to be burned. The Guardian newspaper said the site was launched with the unwitting help of a US State Department grant intended to promote democracy.

Israel and Palestinian territories

In 1996, people were killed in the Western Wall Tunnel riots, which erupted in reaction to fake news accounts. In April 2018, the Palestinian-Israeli football team Bnei Sakhnin threatened to sue Israeli Prime Minister Benjamin Netanyahu for libel after he claimed that its fans had booed during a minute of silence for Israeli flash-flood victims.

In a social media post, Netanyahu dismissed various Israeli news outlets critical of him, including Channel 2, Channel 10, Haaretz and Ynet, as fake news, on the same day that U.S. President Trump decried "fake news".

The Palestinian Islamist political organization Hamas published a political program in 2017 intended to soften its position on Israel. Among other things, this charter accepted the borders of a Palestinian state along the lines that preceded the Six-Day War of 1967. Although the document is an advance on the previous 1988 charter, which called for the destruction of the State of Israel, it still does not recognize Israel as a legitimate independent nation. In a May 2017 video, Israeli Prime Minister Benjamin Netanyahu responded to the coverage of this event by news outlets such as Al Jazeera, CNN, The New York Times and The Guardian, labeling their reporting "fake news". He specifically disagreed with the notion that Hamas had accepted the state of Israel in its new charter, calling this "a complete distortion of the truth". Instead he said, "The new Hamas document says Israel has no right to exist." Haaretz fact-checked the video, stating, "Netanyahu, following in the footsteps of Trump, is deliberately twisting the definition of 'fake news' to serve his own needs." In a later speech addressed to his supporters, Netanyahu responded to allegations against him: "The fake news industry is at its peak ... Look, for example, how they cover with unlimited enthusiasm, every week, the left-wing demonstration. The same demonstrations whose goal is to apply improper pressure on law enforcement authorities so they will file an indictment at any price." The Washington Post likened his use of the term fake news for describing left-wing media to Donald Trump's similar statements during the 2016 United States election cycle.

Studies conducted by Yifat Media Check Ltd. and Hamashrokit ("The Whistle", a fact-checking NGO) found that over 70% of statements made by Israeli political leaders were not accurate.

Israel has also been the target of fake news, including Israel-related animal conspiracy theories that claim Israel uses various animals to spy on or attack other countries.

Saudi Arabia

In August 2018, Canada's Global News reported that Saudi Arabia's state-owned television network Al Arabiya had spread fake news about Canada, saying it "has suggested that Canada is the worst country in the world for women, that it has the highest suicide rate and that it treats its Indigenous people the way Myanmar treats the Rohingya—a Muslim minority massacred and driven out of Myanmar en masse last year."

In October 2018, Twitter suspended a number of bot accounts that appeared to be spreading pro-Saudi rhetoric about the disappearance of Saudi opposition journalist Jamal Khashoggi.

According to Newsweek, Saudi Arabia's Office of Public Prosecution tweeted that "producing rumors or fake news [that Saudi Arabia's government was involved in the disappearance of Khashoggi] that would affect the public order or public security or sending or resending it via social media or any technical means" is punishable "by five years and a fine of 3 million riyals".

Iranian-backed Twitter accounts have spread sensational fake news and rumours about Saudi Arabia.

On August 1, 2019, Facebook identified hundreds of accounts that were running a covert network on behalf of the government of Saudi Arabia to spread fake news and attack regional rivals. The social media company removed more than 350 accounts, pages and groups with nearly 1.4 million followers. The accounts were also involved in "coordinated inauthentic behavior" on Instagram. According to a Facebook blog post, the network was pursuing two different political agendas, one on behalf of Saudi Arabia and the other for the United Arab Emirates and Egypt.

Syria

In February 2017, Amnesty International reported that up to 13,000 people had been hanged in a Syrian prison as part of an "extermination" campaign. Syrian president Bashar al-Assad questioned the credibility of Amnesty International and called the report "fake news" fabricated to undermine the government. "You can forge anything these days—we are living in a fake news era."

Russia ran a disinformation campaign during the Syrian Civil War to discredit the humanitarian rescue organisation White Helmets and to discredit reports and images of children and other civilian bombing victims, with the aim of weakening criticism of Russia's involvement in the war. The United Nations and international chemical weapons inspectors found Bashar al-Assad responsible for the use of chemical weapons, findings which Russia called "fake news". Russia promoted various contradictory claims, asserting that no chemicals were present or attributing the chemical attacks to other countries or groups.

United Arab Emirates

The United Arab Emirates (UAE) has funded non-profit organizations, think tanks and journalism contributors, including the Foundation for Defense of Democracies (FDD) and the Middle East Forum (MEF), which in turn paid journalists to spread false information aimed at defaming countries such as Qatar. In 2020, Benjamin Weinthal, a researcher at FDD, and Jonathan Spyer, a fellow at MEF, contributed an article to Fox News promoting a negative image of Qatar, in an attempt to strain its diplomatic relations with the United States.

Egypt

According to The Daily Telegraph, an Egyptian official suggested in 2010 that the Israeli spy agency Mossad could have been behind a fatal shark attack in Sharm el-Sheikh. The Egyptian Parliament's Communication and Information Technology Committee estimated that in 2017, 53,000 false rumors had been spread, primarily through social media, over a period of 60 days.

South Africa

A wide range of South African media sources have reported fake news as a growing problem and a tool used to increase distrust in the media, discredit political opponents, and divert attention from corruption. Media outlets owned by the Gupta family have been noted by other South African media organisations, such as The Huffington Post (South Africa), the Sunday Times, Radio 702, and City Press, for targeting those organisations. Individuals targeted include Finance Minister Pravin Gordhan, who was seen as blocking Gupta attempts at state capture and who was accused of promoting state capture for "white monopoly capital".

The African National Congress (ANC) was taken to court by Sihle Bolani for unpaid work she did during the election on the ANC's behalf. In court papers Bolani stated that the ANC used her to launch and run a covert R50 million fake news and disinformation campaign during the 2016 municipal elections with the intention of discrediting opposition parties.

Oceania

Australia

The Australian Parliament initiated an inquiry into "fake news" in response to the issues that arose during the 2016 United States election. The inquiry examined which Australian audiences are most vulnerable to fake news, considered the impact on traditional journalism, evaluated the liability of online advertisers, and looked at how the spread of hoaxes might be regulated. It was intended to address the threat posed by fake news spreading on social media.

The Australian Code of Practice on Disinformation and Misinformation commenced on 22 February 2021, around 12 months after the Australian Government asked digital platforms to develop a voluntary code to address disinformation and misinformation and to help users of their services more easily identify the reliability, trustworthiness and source of news content. The request is part of a broader Australian Government strategy to reform the technology and information dissemination landscape. The Australian Communications and Media Authority (ACMA) oversaw the development of the code, and the Government will consider the need for further measures, including mandatory regulation.

A well-known case of fabricated news in Australia occurred in 2009, when a report titled Deception detection across Australian populations, attributed to a "Levitt Institute", was widely cited on news websites across the country as showing that Sydney was the most naive city. The report itself contained a clue: amid the mathematical gibberish was the statement, "These results were completely made up to be fictitious material through a process of modified truth and credibility nodes."

Dating app

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Dating_app

An online dating application is an online dating service presented through a mobile phone application (app). These apps often take advantage of a smartphone's GPS location capabilities, its constant on-hand presence, and access to mobile wallets. The apps aim to speed up the online dating process of sifting through potential partners, chatting, flirting, and potentially meeting or becoming romantically involved.

Online dating apps are now mainstream in the U.S. As of 2017, online dating (which included both apps and other online dating services) was the number one method by which new couples in the U.S. met. The percentage of couples meeting online is predicted to increase to 70% by 2040.

Origins

By 2009, several dating apps existed that catered to straight audiences, while Grindr, launched that year, targeted gay and bisexual men. Tinder, launched in 2012, led to a growth of online dating applications from both new providers and existing online dating services that expanded into the mobile app market.

Usage by demographic group

Online dating applications typically target a younger demographic group. Today, almost 50% of people know someone who uses the services or who has met a partner through them. Since the launch of the iPhone in 2007, online dating activity has mushroomed as application use has increased. In 2005, only 10% of 18-24 year olds reported having used online dating services; this number has since grown to over 27%, making this demographic the largest user group for most applications. When the Pew Research Center conducted a study in 2016, it found that 59% of U.S. adults agreed that online dating is a good way to meet people, compared to 44% in 2005. This explosion in usage can be explained by the increased use of smartphones. It was expected that, by the end of 2022, there would be 413 million active users of online dating services worldwide.

The increased use of smartphones by those 65 and older has also driven that population to use dating apps. The Pew Research Center found that usage in this group had increased by 8 percentage points since it was last surveyed in 2012. A study in 2021 found that more than one-third of seniors had dated in the past 5 years, and roughly one-third of those dating seniors had turned to dating apps.

During the COVID-19 pandemic, Morning Consult found that more Americans were using online dating apps than ever before. In one survey in April 2020, the company found that 53% of U.S. adults who use online dating apps had been using them more during the pandemic. By February 2021, that share had increased to 71 percent.

Research using Hofstede's cultural dimensions theory has indicated that norms about online dating applications tend to differ across cultures. A study published in the Journal of Creative Communications looked into the relationships between dating-app advertisements from over 51 countries and the cultural dimensions of these countries. The results revealed that dating-app advertisements appealed to multiple cultural needs, including the needs for relationships, friendship, entertainment, sex, status, design and identity. The use of these appeals was found to be 'congruent with ... the individualism/collectivism and uncertainty avoidance cultural dimensions.' 

Popular applications

Online dating

After Tinder's success, many others created dating applications of their own, and existing dating websites such as Match.com launched apps for convenience. ARC from Applause, a research group on the app economy, conducted a study in 2016 on how 1.5 million U.S. consumers rated 97 of the most popular dating apps. The results indicated that only 11 apps scored 50 or greater (out of 100) while also having more than 10,000 reviews in the app store; an app with a score of 50 or more was considered successful. These were Jaumo, OkCupid, happn, SCRUFF by Perry Street, Moco by JNJ Mobile, GROWL by Initech, Skout, Qeep by Blue Lion mobile, MeetMe, Badoo, and Hornet. Other popular applications such as Bumble, Grindr, eHarmony, Chamet and Match scored 40 or less. To ensure privacy for celebrities, Raya emerged as a membership-based dating app, allowing entrance only through referrals. In 2019, Taimi, which started out as an alternative to Grindr, relaunched as an LGBTQI+-inclusive dating app. The ability to identify individuals with similar interests has given rise to a number of popular religious dating apps, including Muzmatch (Muslim), Salams (Muslim), Upward (Christian), Christian Connection (Christian), JSwipe (Jewish) and JDate (Jewish).

VR Dating

VR dating is an application of social VR in which people can exist, collaborate, and perform various activities together. Virtual reality dating apps use virtual and augmented realities to make the dating experience more lifelike and more effective, as well as allowing people to expand what is already possible in the world of online dating.

There are several online platforms for VR dating. The VR dating app Nevermet is the VR equivalent of Tinder, where people can search for and find dates. However, instead of actual real-life pictures, users upload pictures of their virtual selves and interact with avatars rather than real faces. Flirtual is a self-contained social VR app that matches users, who then decide where and how to meet in VR. Flirtual hosts speed dating and social events in VR.

Effects on dating

The usage of online dating applications can have both advantages and disadvantages:

Advantages

Many of the applications provide personality tests for matching or use algorithms to match users. These factors enhance the possibility of users being matched with a compatible candidate. Users are in control; they are provided with many options, so there are enough matches that fit their particular type, and they can simply choose not to match with candidates they know they are not interested in. Narrowing down options is easy. Once users think they are interested, they are able to chat and get to know the potential candidate. This type of communication saves the time, money, and risk users would otherwise incur if they were dating the traditional way. Online dating offers convenience; people want dating to work around their schedules. Online dating can also increase self-confidence; even if users get rejected, they know there are hundreds of other candidates that will want to match with them, so they can simply move on to the next option. In fact, 60% of U.S. adults agree that online dating is a good way to meet people and 66% say they have gone on a real date with someone they met through an application. Today, 5% of married Americans or Americans in serious relationships say they met their significant other online.
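
To give a sense of the kind of matching logic such systems rely on, the following minimal Python sketch is purely illustrative and does not represent any particular app's algorithm: it scores compatibility as the cosine similarity between two users' self-reported interest ratings. The interest categories, users and numbers are invented for the example.

    import math

    # Hypothetical interest categories; each profile rates them from 0 to 5.
    INTERESTS = ["hiking", "movies", "cooking", "travel", "gaming"]

    def compatibility(a, b):
        """Cosine similarity between two interest vectors (1.0 means identical tastes)."""
        dot = sum(x * y for x, y in zip(a, b))
        norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
        return dot / norm if norm else 0.0

    alice = [5, 2, 4, 5, 0]
    bob   = [4, 1, 5, 4, 1]
    carol = [0, 5, 1, 0, 5]

    print(f"alice-bob:   {compatibility(alice, bob):.2f}")    # high score: similar tastes
    print(f"alice-carol: {compatibility(alice, carol):.2f}")  # low score: different tastes

Real services weight many more signals (activity, distance, stated preferences), but the basic idea of reducing two people to a single compatibility number is similar, which is also why such scores cannot capture everything that matters in person.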

Disadvantages

Sometimes having too many options can be overwhelming. With so many options available, users can get lost in their choices and end up spending too much time looking for the "perfect" candidate instead of using that time to start a real relationship. In addition, the algorithms and matching systems put in place may not always be as accurate as users think. There is no perfect system that can match two people’s personalities perfectly every time.

Communication online also lacks the physical chemistry aspect that is essential for choosing a potential partner. Much is lost in translation through texting. Online dating has made dating very superficial; the picture on a user's profile may cause someone to match or not match before even getting to know their personalities.

An issue amplified by dating apps is a phenomenon known as 'ghosting', whereby one party in a relationship cuts off all communication with the other party without warning or explanation. Ghosting poses a serious problem for dating apps as it can lead to users deleting the apps. For this reason companies like Bumble and Badoo are cracking down on the practice with new features that make it easier for users to end chat conversations more politely.

Online dating is stigmatized, but it is becoming more accepted over time.

Data privacy

Dating apps and online dating sites are often involved in cases concerning the misuse of data. In 2018, Grindr, one of the first platforms for gay dating, was accused of having shared data about the HIV status of its users with other companies. Grindr acknowledged the allegations but claimed that the sharing was done to optimize its platform, an explanation that did not convince the LGBT community. Grindr defended itself by describing its data loss prevention measures and by reassuring users through a public intervention by its CTO, Scott Chen. In Europe, dating platforms pay increasing attention to data legislation because the GDPR threatens companies with economic sanctions.

Dating apps also sell other personal data; the data most frequently bought by private companies is users' geographical information. When users allow localization, apps record and store their positions using the geographic coordinate system. When a data breach happens, this geographical information directly exposes users.
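
As a rough illustration of why raw stored coordinates are so sensitive, the following minimal Python sketch uses the standard approximation that one degree of latitude spans roughly 111 km to show how quickly the decimal places of a stored latitude pin a user down; the figures are approximate and the example is not taken from any particular app's data.

    # Approximate ground precision of a latitude rounded to a given number of decimals.
    # One degree of latitude corresponds to roughly 111,320 metres on the ground.

    def latitude_precision_metres(decimal_places: int) -> float:
        return 111_320 / (10 ** decimal_places)

    for places in range(1, 7):
        print(f"{places} decimal place(s): about {latitude_precision_metres(places):,.1f} m")

    # 5-6 decimal places (the precision GPS chips typically report) locate a user
    # to within roughly a metre, i.e. a specific home, office or venue.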

Like other applications, dating apps can suffer breaches: hackers have revealed security issues in Tinder, Coffee Meets Bagel and Adult FriendFinder, for instance. In the case of Adult FriendFinder, the data of more than 412 million users was exposed, one of the largest leaks in terms of the number of accounts affected. In 2015, a group of hackers known as the "Impact Team" released personal information from almost 40 million users of Ashley Madison, revealing their real names, phone numbers, email addresses, geographical positions and sexual preferences. Ashley Madison had assured its more than 35 million users that the service was totally "anonymous" and "100% discreet", but it did not completely delete accounts when users chose to have them removed (and paid for that), nor did it initially acknowledge that data had actually leaked. Some suicides were reported after the leak. Taimi has introduced bank-level security in an effort to become the "safest dating app" for gay people.

Data theft and cybersecurity

After analyzing a significant number of diverse mobile dating applications, researchers have concluded that most of the major dating applications are vulnerable to simple sniffing attacks, which could reveal very sensitive personal information such as sexual orientation, preferences, e-mails, degree of interaction between users, etc.
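
As a minimal sketch of what such a "simple sniffing attack" involves, the following Python example assumes the third-party scapy packet-capture library and an observer already on the same network; it passively prints the request line of any unencrypted HTTP traffic it sees. Any profile fields or identifiers an app sends without TLS would be readable in exactly this way.

    # Requires scapy (pip install scapy) and privileges to capture packets.
    from scapy.all import sniff, Raw

    def show_plaintext_requests(pkt):
        """Print the first line of any unencrypted HTTP request observed on the wire."""
        if pkt.haslayer(Raw):
            payload = bytes(pkt[Raw]).decode(errors="ignore")
            if payload.startswith(("GET ", "POST ")):
                print(payload.split("\r\n")[0])  # e.g. a hypothetical "GET /api/profile?user=42 HTTP/1.1"

    # Passively capture plain-HTTP traffic on port 80; TLS-protected traffic
    # would appear to the same observer only as unreadable ciphertext.
    sniff(filter="tcp port 80", prn=show_plaintext_requests, store=False)

This is why researchers flag apps that still send any personal data over plain HTTP rather than TLS.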

Online dating platforms are also used as honeypots, wherein attackers create fake profiles to steal users' private information.

Entropy (information theory)

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Entropy_(information_theory)

In info...