
Fake news website

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Fake_news_website

Fake news websites (also referred to as hoax news websites) are Internet websites that deliberately publish fake news—hoaxes, propaganda, and disinformation purporting to be real news—often using social media to drive web traffic and amplify their effect. Unlike news satire, fake news websites deliberately seek to be perceived as legitimate and taken at face value, often for financial or political gain. Such sites have promoted political falsehoods in India, Germany, Indonesia and the Philippines, Sweden, Mexico, Myanmar, and the United States. Many sites originate in, or are promoted by, Russia, North Macedonia, Romania, and the United States.

Overview of coverage

One pan-European newspaper, The Local, described the proliferation of fake news as a form of psychological warfare. Some media analysts have seen it as a threat to democracy. In 2016, the European Parliament's Committee on Foreign Affairs passed a resolution warning that the Russian government was using "pseudo-news agencies" and Internet trolls as disinformation propaganda to weaken confidence in democratic values.

Screenshot of a fake news story, falsely claiming Donald Trump won the popular vote in the 2016 United States presidential election

In 2015, the Swedish Security Service, Sweden's national security agency, issued a report concluding Russia was using fake news to inflame "splits in society" through the proliferation of propaganda. Sweden's Ministry of Defence tasked its Civil Contingencies Agency with combating fake news from Russia. Fraudulent news affected politics in Indonesia and the Philippines, where there was simultaneously widespread usage of social media and limited resources to check the veracity of political claims. German Chancellor Angela Merkel warned of the societal impact of "fake sites, bots, trolls".

Fraudulent articles spread through social media during the 2016 U.S. presidential election, and several officials within the U.S. Intelligence Community said that Russia was engaged in spreading fake news. Computer security company FireEye concluded that Russia used social media to spread fake news stories as part of a cyberwarfare campaign. Google and Facebook banned fake sites from using online advertising. Facebook launched a partnership with fact-checking websites to flag fraudulent news and hoaxes; debunking organizations that joined the initiative included: Snopes.com, FactCheck.org, and PolitiFact. U.S. President Barack Obama said a disregard for facts created a "dust cloud of nonsense". Chief of the Secret Intelligence Service (MI6) Alex Younger called fake news propaganda online dangerous for democratic nations.

Definition

Examples of fake news websites: ABCnews.com.co, a fake site creating hoaxes by using website spoofing, and RealTrueNews.

The New York Times has defined "fake news" on the internet as fictitious articles deliberately fabricated to deceive readers, generally with the goal of profiting through clickbait. PolitiFact has described fake news as fabricated content designed to fool readers and subsequently made viral through the Internet to crowds that increase its dissemination. Others have taken as constitutive the "systemic features inherent in the design of the sources and channels through which fake news proliferates", for example by playing to the audience's cognitive biases, heuristics, and partisan affiliation. Some fake news websites use website spoofing, structured to make visitors believe they are visiting trusted sources like ABC News or MSNBC.
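Website spoofing of this kind is often detectable mechanically: a lookalike hostname reuses a trusted brand label (such as "abcnews") without actually belonging to the legitimate domain. The sketch below is a minimal Python illustration of that check, not any fact-checker's production tooling; the whitelist of legitimate domains is an assumption chosen for the example.

    # Minimal sketch of lookalike-domain detection. The whitelist of
    # legitimate domains is an illustrative assumption, not an
    # authoritative list used by any fact-checker.
    LEGITIMATE = {
        "abcnews": "abcnews.go.com",
        "msnbc": "msnbc.com",
        "washingtonpost": "washingtonpost.com",
    }

    def is_spoof(hostname: str) -> bool:
        """Flag hostnames that reuse a known brand label but are not the real domain."""
        host = hostname.lower().removeprefix("www.")
        for brand, real_domain in LEGITIMATE.items():
            if brand in host.split("."):  # the brand label appears in the hostname...
                # ...but the host is neither the real domain nor a subdomain of it.
                if host != real_domain and not host.endswith("." + real_domain):
                    return True
        return False

    for h in ["abcnews.com.co", "abcnews.go.com", "washingtonpost.co"]:
        print(h, "->", "suspicious" if is_spoof(h) else "ok")

Run on these examples, the check flags abcnews.com.co and washingtonpost.co while passing the genuine abcnews.go.com; real systems would supplement this with proper registrable-domain parsing and fuzzy matching for typosquats.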

Fake news maintained a presence on the internet and in tabloid journalism in the years prior to the 2016 U.S. presidential election, but before the campaign between Hillary Clinton and Donald Trump it had not affected the election process and subsequent events to such a high degree. Subsequent to the 2016 election, the issue of fake news turned into a political weapon, with supporters of left-wing politics saying that supporters of right-wing politics spread false news, while the latter claimed that they were being "censored". Due to these back-and-forth complaints, the definition of fake news as used in such polemics has become vaguer.

Pre-Internet history

Unethical journalistic practices existed in printed media for hundreds of years before the advent of the Internet. Yellow journalism, reporting devoid of morals and professional ethics, was pervasive during the period known as the Gilded Age, when unethical journalists would engage in fraud by fabricating stories, interviews, and the names of invented scholars. During the 1890s, the spread of this unethical news sparked violence and conflicts. Both Joseph Pulitzer and William Randolph Hearst fomented yellow journalism in order to increase profits, which helped produce the misunderstandings that were partially responsible for the outbreak of the Spanish–American War in 1898. J.B. Montgomery-M’Govern wrote a column harshly critical of "fake news" in 1898, saying that what characterized "fake news" was sensationalism and "the publication of articles absolutely false, which tend to mislead an ignorant or unsuspecting public."

In August 1939, a radio broadcast from Gleiwitz by German soldier Karl Homack, pretending to be a Polish invader who had captured the station, was taken at face value by other stations in Germany and abroad, fueling Adolf Hitler's declaration of war on Poland the next day. According to USA Today, newspapers with a history of commonly publishing fake news have included Globe, Weekly World News, and The National Enquirer.

Prominent sources

Prominent sources of fraudulent news include false propaganda created by individuals in Russia, North Macedonia, Romania, and the United States.

North Macedonia

Fraudulent news stories during the 2016 U.S. election were traced to teenagers in Veles, a town in North Macedonia.

Much of the fake news during the 2016 U.S. presidential election season was traced to adolescents in Macedonia, now known as North Macedonia, specifically Veles. It is a town of 50,000 in the middle of the country, with high unemployment, where the average wage is $4,800. The income from fake news was characterized by NBC News as a gold rush. Adults supported this income, saying they were happy the youths were working. The mayor of Veles, Slavcho Chadiev, said he was not bothered by their actions, as they were not against Macedonian law and their finances were taxable. Chadiev said he was happy if deception from Veles influenced the results of the 2016 U.S. election in favor of Trump.

BuzzFeed News and The Guardian separately investigated and found that teenagers in Veles created over 100 sites spreading fake news stories supportive of Donald Trump. The teenagers experimented with left-slanted fake stories about Bernie Sanders, but found that pro-Trump fictions were more popular. Prior to the 2016 election, the teenagers gained revenue from fake medical advice sites. One youth named Alex stated, in an August 2016 interview with The Guardian, that this fraud would remain profitable regardless of who won the election. Alex explained that he plagiarized material for articles by copying and pasting from other websites. This could net them thousands of dollars daily, but they averaged only a few thousand per month.

The Associated Press (AP) interviewed an 18-year-old in Veles about his tactics. A Google Analytics analysis of his traffic showed more than 650,000 views in one week. He plagiarized pro-Trump stories from a right-wing site called The Political Insider. He said he did not care about politics, and published fake news to gain money and experience. The AP used DomainTools to confirm the teenager was behind fake sites, and determined there were about 200 websites tracked to Veles focused on U.S. news, most of which contained plagiarized legitimate news to create an appearance of credibility.
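DomainTools-style attribution of this kind essentially groups domains by shared registration details. Below is a rough sketch of that workflow, assuming the system whois command is available and that a registrant email appears in its output; real WHOIS records vary widely in format and are often privacy-redacted, so this is illustrative only.

    # Rough sketch: group domains by WHOIS registrant email, in the spirit
    # of the AP's DomainTools check. Assumes the `whois` command exists;
    # real records vary in format and are often redacted.
    import re
    import subprocess
    from collections import defaultdict

    EMAIL_RE = re.compile(r"[Rr]egistrant.{0,200}?([\w.+-]+@[\w.-]+)", re.DOTALL)

    def registrant_email(domain):
        """Return the registrant email found in the WHOIS record, or None."""
        try:
            out = subprocess.run(["whois", domain], capture_output=True,
                                 text=True, timeout=15).stdout
        except (OSError, subprocess.TimeoutExpired):
            return None
        match = EMAIL_RE.search(out)
        return match.group(1).lower() if match else None

    def group_by_registrant(domains):
        """Map registrant email -> domains; shared emails suggest one operator."""
        groups = defaultdict(list)
        for domain in domains:
            email = registrant_email(domain)
            if email:
                groups[email].append(domain)
        return {e: ds for e, ds in groups.items() if len(ds) > 1}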

NBC News also interviewed an 18-year-old there. Dmitri (a pseudonym) was one of the most profitable fake news operators in town, and said about 300 people in Veles wrote for fake sites. Dmitri said he had earned over $60,000 during the prior six months through this work, more than both of his parents' earnings combined. Dmitri said his main dupes were supporters of Trump, and that he continued to earn significant amounts after the 2016 U.S. election.

Operators in Veles reportedly regarded the 2020 U.S. election as their next project.

Romania

"Ending the Fed", a popular purveyor of fraudulent reports, was run by a 24-year-old named Ovidiu Drobota out of Oradea, Romania, who boasted to Inc. magazine about being more popular than mainstream media. Established in March 2016, "Ending the Fed" was responsible for a false story in August 2016 that incorrectly stated Fox News had fired journalist Megyn Kelly—the story was briefly prominent on Facebook on its "Trending News" section. "Ending the Fed" held four out of the 10 most popular fake articles on Facebook related to the 2016 U.S. election in the prior three months before the election itself. The Facebook page for the website, called "End the Feed", had 350,000 "likes" in November 2016. After being contacted by Inc. magazine, Drobota stated he was proud of the impact he had on the 2016 U.S. election in favor of his preferred candidate Donald Trump. According to Alexa Internet, "Ending the Fed" garnered approximately 3.4 million views over a 30-day-period in November 2016. Drobota stated the majority of incoming traffic is from Facebook. He said his normal line of work before starting "Ending the Fed" included web development and search engine optimization.

Russia

Internet Research Agency

An aerial view of the Smolny Convent in Saint Petersburg

Beginning in fall 2014, The New Yorker writer Adrian Chen performed a six-month investigation into Russian propaganda dissemination online by the Internet Research Agency (IRA). Yevgeny Prigozhin (Evgeny Prigozhin), a close associate of Vladimir Putin, was behind the operation which hired hundreds of individuals to work in Saint Petersburg. The organization became regarded as a "troll farm", a term used to refer to propaganda efforts controlling many accounts online with the aim of artificially providing a semblance of a grassroots organization. Chen reported that Internet trolling was used by the Russian government as a tactic largely after observing the social media organization of the 2011 protests against Putin.

European Union response

The European Parliament's Committee on Foreign Affairs passed a resolution in November 2016 condemning Russian "pseudo-news agencies" and Internet trolls.

In 2015, the Organization for Security and Co-operation in Europe released an analysis critical of disinformation campaigns by Russia masked as news, intended to interfere with Ukraine's relations with Europe after the removal of former Ukrainian president Viktor Yanukovych. According to Deutsche Welle, similar tactics were used in the 2016 U.S. elections. The European Union created a taskforce, the East StratCom Team, to deal with Russian disinformation; it had 11 people, including Russian speakers. In November 2016, the EU voted to increase the group's funding. That same month, the European Parliament's Committee on Foreign Affairs passed a resolution warning of Russia's use of tools including "pseudo-news agencies ... social media and internet trolls" as disinformation to weaken democratic values. The resolution requested that EU analysts investigate, explaining that member nations needed to be wary of disinformation, and condemned Russian sources for publicizing "absolutely fake" news reports. The resolution passed on 23 November 2016 by a margin of 304 votes to 179.

Counter-Disinformation Team

The U.S. State Department spent eight months developing a unit called the Counter-Disinformation Team, formed with the intention of combating disinformation from the Russian government, before scrapping it in September 2015, after department heads misjudged the scope of Russian propaganda ahead of the 2016 U.S. election. The unit would have been a reboot of the Active Measures Working Group set up by the Reagan Administration. It was set up under the Bureau of International Information Programs, and work began in 2014 with the intention of combating propaganda from Russian sources such as the RT network (formerly known as Russia Today). U.S. intelligence officials explained to former National Security Agency analyst and counterintelligence officer John R. Schindler that the Obama Administration decided to cancel the unit because it was afraid of antagonizing Russia. U.S. Undersecretary of State for Public Diplomacy Richard Stengel was point person for the unit before it was canceled; Stengel had previously written about disinformation by RT.

Internet trolls shift focus to Trump

In December 2015 Adrian Chen noticed pro-Russia accounts suddenly became supportive of Trump.

Adrian Chen observed a pattern in December 2015 in which pro-Russian accounts became supportive of 2016 U.S. presidential candidate Donald Trump. Andrew Weisburd and Clint Watts, a Foreign Policy Research Institute fellow and senior fellow at the Center for Cyber and Homeland Security at George Washington University, wrote for The Daily Beast in August 2016 that articles fabricated by Russian propaganda were popularized by social media. Weisburd and Watts documented how disinformation spread from Russia Today and Sputnik News, "the two biggest Russian state-controlled media organizations publishing in English", to pro-Russian accounts on Twitter. Citing research by Chen, Weisburd and Watts compared Russian tactics during the 2016 U.S. election to Soviet Union Cold War strategies. They referenced the 1992 United States Information Agency report to Congress, which warned about Russian propaganda called active measures, and concluded that social media made active measures easier. Mark Galeotti, a senior fellow at the Institute of International Relations Prague and a scholar on Russian intelligence, agreed that the Kremlin operations were a form of active measures. The most strident Internet promoters of Trump were not U.S. citizens but paid Russian propagandists, whose number The Guardian estimated to be in the "low thousands" in November 2016.

Weisburd and Watts collaborated with colleague J. M. Berger and published a follow-up to their Daily Beast article in the online magazine War on the Rocks, titled "Trolling for Trump: How Russia is Trying to Destroy Our Democracy". They researched 7,000 pro-Trump accounts over a two-and-a-half-year period. Their research detailed trolling techniques to denigrate critics of Russian activities in Syria and to proliferate lies about Clinton's health. Watts said the propaganda targeted the alt-right, the right wing, and fascist groups. After each presidential debate, thousands of Twitter bots used the hashtag #Trumpwon to change perceptions.

In November 2016 the Foreign Policy Research Institute stated Russian propaganda exacerbated criticism of Clinton and support for Trump. The strategy involved social media, paid Internet trolls, botnets, and websites in order to denigrate Clinton.

U.S. intelligence analysis

David DeWalt, the chairman of computer security company FireEye
FireEye chairman David DeWalt concluded the Russian operation during the 2016 election was a new development in cyberwarfare by Russia.

Computer security company FireEye concluded Russia used social media as a weapon to influence the U.S. election. FireEye Chairman David DeWalt said the 2016 operation was a new development in cyberwarfare by Russia. FireEye CEO Kevin Mandia stated Russian cyberwarfare changed after fall 2014, from covert to overt tactics with decreased operational security. Bellingcat analyst Aric Toler explained fact-checking only drew further attention to the fake news problem.

U.S. Intelligence agencies debated why Putin chose summer 2016 to escalate active measures. Prior to the election, U.S. national security officials said they were anxious about Russia tampering with U.S. news. Director of National Intelligence James R. Clapper said after the 2011–13 Russian protests, Putin lost self-confidence, and responded with the propaganda operation. Former CIA officer Patrick Skinner said the goal was to spread uncertainty. House Intelligence Committee Ranking Member Adam Schiff commented on Putin's aims, and said U.S. intelligence were concerned with Russian propaganda. Speaking about disinformation that appeared in Hungary, Slovakia, the Czech Republic, and Poland, Schiff said there was an increase of the same behavior in the U.S.

U.S. intelligence officials stated in November 2016 they believed Russia engaged in spreading fake news, and the FBI released a statement saying they were investigating. Two U.S. intelligence officials each told BuzzFeed News they "believe Russia helped disseminate fake and propagandized news as part of a broader effort to influence and undermine the presidential election". The U.S. intelligence sources stated this involved "dissemination of completely fake news stories". They told BuzzFeed the FBI investigation specifically focused on why "Russia had engaged in spreading false or misleading information".

By country

Fake news has influenced political discourse in multiple countries, including Germany, Indonesia and the Philippines, Sweden, China, Myanmar, and the United States.

Austria

Politicians in Austria dealt with the impact of fake news and its spread on social media after the 2016 presidential campaign in the country. In December 2016, a court in Austria issued an injunction against Facebook Europe, mandating that it block negative postings related to Eva Glawischnig-Piesczek, the Austrian Green Party chairwoman. According to The Washington Post, the Facebook postings about her "appeared to have been spread via a fake profile" and directed derogatory epithets towards the Austrian politician. The derogatory postings were likely created by the same fake profile that had previously been used to attack Alexander van der Bellen, who won the election for President of Austria.

Brazil

Brazil faced increasing influence from fake news after the 2014 re-election of President Dilma Rousseff and Rousseff's subsequent impeachment in August 2016. In the week surrounding one of the impeachment votes, 3 out of the 5 most-shared articles on Facebook in Brazil were fake. In 2015, reporter Tai Nalon resigned from her position at Brazilian newspaper Folha de S.Paulo in order to start the first fact-checking website in Brazil, called Aos Fatos (To The Facts). Nalon told The Guardian there was a great deal of fake news, and hesitated to compare the problem to that experienced in the U.S.

Canada

Fake news online was brought to the attention of Canadian politicians in November 2016, as they debated measures to assist local newspapers. Member of Parliament for Vancouver Centre Hedy Fry specifically discussed fake news as an example of the ways in which publishers on the Internet are less accountable than print media. Discussion in parliament contrasted the increase of fake news online with the downsizing of Canadian newspapers and the impact on democracy in Canada. Representatives from Facebook Canada attended the meeting and told members of Parliament they felt it was their duty to help individuals gather information online.

China

Fake news during the 2016 U.S. election spread to China. Articles popularized within the United States were translated into Chinese and spread within China. The government of China used the growing problem of fake news as a rationale for increasing Internet censorship in China in November 2016, and published an editorial in the Communist Party newspaper The Global Times called "Western Media's Crusade Against Facebook", criticizing the "unpredictable" political problems posed by freedoms enjoyed by users of Twitter, Google, and Facebook. Chinese government leaders meeting in Wuzhen at the third World Internet Conference in November 2016 said fake news in the U.S. election justified adding more curbs to free and open use of the Internet. Ren Xianliang, deputy minister at the Cyberspace Administration of China, said increasing online participation led to "harmful information" and fraud. Kam Chow Wong, a former Hong Kong law enforcement official and criminal justice professor at Xavier University, praised attempts in the U.S. to patrol social media. The Wall Street Journal noted that China's themes of Internet censorship became more relevant at the World Internet Conference due to the outgrowth of fake news.

Finland

Officials from 11 countries held a meeting in Helsinki in November 2016 to plan the formation of a center to combat disinformation cyber-warfare, including the spread of fake news on social media. The center is planned to be located in Helsinki, combining efforts from 10 countries including Sweden, Germany, Finland, and the U.S. Prime Minister of Finland Juha Sipilä planned to take up the center in spring 2017 with a motion before the Parliament of Finland. Jori Arvonen, Deputy Secretary of State for EU Affairs, said cyberwarfare became an increased problem in 2016, and included hybrid cyber-warfare intrusions into Finland from Russia and the Islamic State of Iraq and the Levant. Arvonen cited examples including fake news online, disinformation, and the "little green men" troops of the Ukrainian crisis.

France

France saw an uptick in disinformation and propaganda, primarily in the midst of election cycles. Samuel Laurent, head of Le Monde's fact-checking division "Les décodeurs", told The Guardian in December 2016 that the upcoming French presidential election campaign in spring 2017 would face problems from fake news. The country faced controversy regarding fake websites providing false information about abortion, and the government's lower parliamentary body moved forward with intentions to ban such fake sites. Laurence Rossignol, women's minister for France, informed parliament that though the fake sites looked neutral, their content was in actuality specifically targeted to give women false information. During the 10-year period preceding 2016, France witnessed an increase in the popularity of far-right alternative news sources called the fachosphère ("facho" referring to fascist), also known as the extreme right on the Internet. According to sociologist Antoine Bevort, citing data from Alexa Internet rankings, the most consulted political websites in France included Égalité et Réconciliation, François Desouche, and Les Moutons Enragés. These sites increased skepticism towards mainstream media from both left and right perspectives.

Germany

German Chancellor Angela Merkel lamented the problem of fraudulent news reports in a November 2016 speech, days after announcing her campaign for a fourth term as leader of her country. In a speech to the German parliament, Merkel was critical of such fake sites, saying they harmed political discussion. Merkel called attention to the need for government to deal with Internet trolls, bots, and fake news websites, and warned that such fraudulent news websites were a force increasing the power of populist extremism. She called fraudulent news a growing phenomenon that might need to be regulated in the future. Bruno Kahl, chief of Germany's foreign intelligence agency, the Federal Intelligence Service, warned of the potential for Russian cyberattacks in the 2017 German election, saying they would take the form of the intentional spread of disinformation with the goal of increasing chaos in political debates. Hans-Georg Maassen, chief of Germany's domestic intelligence agency, the Federal Office for the Protection of the Constitution, said sabotage by Russian intelligence was a present threat to German information security.

India

Rasmus Kleis Nielsen, director of the Reuters Institute for the Study of Journalism, thinks that "the problems of disinformation in a society like India might be more sophisticated and more challenging than they are in the West". The damage caused by fake news on social media has increased with the growth of internet penetration in India, which rose from 137 million internet users in 2012 to over 600 million in 2019. India is the largest market for WhatsApp, with over 230 million users, making it one of the main platforms on which fake news spreads; fake news is also spread through Facebook and Twitter. One of the main problems is that recipients believe anything sent to them over social media due to a lack of awareness. Various initiatives and practices have been started and adopted to curb the spread and impact of fake news.

According to a report by The Guardian, the Indian media research agency CMS stated that the cause of spread of fake news was that India "lacked (a) media policy for verification". Additionally, law enforcement officers have arrested reporters and journalists for "creating fictitious articles", especially when the articles were controversial.

In India, fake news has been spread by both the left and the right sides of the political spectrum. A study published in ThePrint claimed that on Twitter there were at least 17,000 accounts spreading fake news to favour the BJP, while around 147 accounts were spreading fake news to favour the Indian National Congress.[94] The IT cells of the BJP, the Congress, and other political parties have been accused of spreading fake news against their political opponents, against religious minorities, and against campaigns critical of their parties. The RSS mouthpiece Organizer and the Congress mouthpiece National Herald have also been accused of misleading reports.


Prominent fake news-spreading websites and online resources include OpIndia and Postcard News.

Indonesia and Philippines

Fraudulent news has been particularly problematic in Indonesia and the Philippines, where social media has an outsized political influence. According to media analysts, developing countries with new access to social media and democracy feel the fake news problem to a larger extent. In some developing countries, Facebook gives away smartphone data free of charge for access to Facebook itself and selected media sources, but does not provide the user with data to reach fact-checking websites.

Iran

On 8 October 2020, Bloomberg reported that 92 websites used by Iran to spread misinformation were seized by the United States government.

Italy

President of the Italian Chamber of Deputies, Laura Boldrini, stated: "Fake news is a critical issue and we can’t ignore it. We have to act now."

Between 1 October and 30 November 2016, ahead of the Italian constitutional referendum, five of the ten referendum-related stories with the most social media engagement were hoaxes or inaccurate; of the three stories with the most attention, two were fake. Prime Minister of Italy Matteo Renzi met with U.S. President Obama and European leaders in Berlin in November 2016 and spoke about the fake news problem, and hosted discussions on Facebook Live in an effort to rebut falsities online. The influence became so heavy that a senior adviser to Renzi filed a defamation complaint against an anonymous Twitter user who had used the screen name "Beatrice di Maio".

The Five Star Movement (M5S), an Italian political party founded by Beppe Grillo, managed fake news sites that amplified support for Russian news and propaganda and inflamed conspiracy theories. The party's site TzeTze had 1.2 million Facebook fans and shared fake news and pieces supportive of Putin attributed to Russian state-owned sources including Sputnik News; TzeTze plagiarized these Russian sources, copying article titles and content from Sputnik. TzeTze, another site critical of Renzi called La Cosa, and a blog by Grillo were managed by the company Casaleggio Associati, started by Five Star Movement co-founder Gianroberto Casaleggio. Casaleggio's son Davide Casaleggio owns and manages TzeTze and La Cosa, as well as the medical advice website La Fucina, which markets anti-vaccine conspiracy theories and cure-all remedies. Grillo's blog and the Five Star Movement's fake sites use the same IP addresses and the same Google Analytics and Google AdSense accounts.
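Shared Google Analytics and AdSense accounts are a standard attribution signal: pages embed a tracker ID (legacy IDs have the form UA-XXXXXX-Y), and two sites carrying the same ID are very likely run by the same operator. The following is a hedged sketch of that technique using only the Python standard library; the URLs passed in would be the sites under investigation, and modern analytics tags would need different patterns.

    # Sketch: attribute a network of sites via shared Google Analytics
    # tracker IDs, the kind of signal that linked Grillo's blog to the
    # M5S sites. Legacy tracker IDs look like "UA-123456-1".
    import re
    import urllib.request
    from collections import defaultdict

    GA_ID_RE = re.compile(r"UA-\d{4,10}-\d{1,4}")

    def analytics_ids(url):
        """Return the set of legacy Google Analytics IDs found in a page."""
        try:
            with urllib.request.urlopen(url, timeout=10) as resp:
                html = resp.read().decode("utf-8", errors="replace")
        except OSError:
            return set()
        return set(GA_ID_RE.findall(html))

    def shared_networks(urls):
        """Map tracker ID -> URLs; sites sharing an ID suggest common ownership."""
        by_id = defaultdict(list)
        for url in urls:
            for ga_id in analytics_ids(url):
                by_id[ga_id].append(url)
        return {i: sites for i, sites in by_id.items() if len(sites) > 1}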

Cyberwarfare against Renzi increased, and the Italian newspaper La Stampa brought attention to false stories by Russia Today which wrongly asserted that a pro-Renzi rally in Rome was actually an anti-Renzi rally. In October 2016, the Five Star Movement disseminated a video from Kremlin-aligned Russia Today which falsely purported to show thousands of individuals protesting the referendum scheduled in Italy for 4 December 2016—in fact the video, which went on to 1.5 million views, showed supporters of the referendum. President of the Italian Chamber of Deputies Laura Boldrini stated: "Fake news is a critical issue and we can't ignore it. We have to act now." Boldrini met on 30 November 2016 with Richard Allan, Facebook's vice president of public policy in Europe, to voice concerns about fake news, and said Facebook needed to admit it was a media company.

Mexico

Misinformation released to the public has distorted elections in Mexico, a problem that cuts across political parties. Because false information easily influences voters, it can threaten the stability of the country through the actions of misinformed citizens. Fake exit polls, for example, have circulated within digital media outlets, meaning citizens do not receive real data on what is happening in their elections.

Moldova

Amid the 2018 local elections in Moldova, a doctored video with mistranslated subtitles purported to show that a pro-Europe party's candidate for mayor of Chișinău (pop. 685,900), the capital of Moldova, had proposed to lease the city to the UAE for 50 years. The video was watched more than 300,000 times on Facebook and almost 250,000 times on the Russian social network OK.ru, which is popular among Moldova's Russian-speaking population.

Myanmar

In 2015, fake stories using unrelated photographs and fraudulent captions were shared online in support of the Rohingya. Fake news negatively affected individuals in Myanmar, leading to a rise in violence against Muslims in the country. Online participation surged from one percent to 20 percent of Myanmar's total populace from 2014 to 2016. Fake stories from Facebook were reprinted in paper periodicals called Facebook and The Internet. False reporting related to practitioners of Islam in the country was directly correlated with increased attacks on people of the religion in Myanmar. Fake news fictitiously stated believers in Islam acted out in violence at Buddhist locations. BuzzFeed News documented a direct relationship between the fake news and violence against Muslim people. It noted countries that were relatively newer to Internet exposure were more vulnerable to the problems of fake news and fraud.

Pakistan

Khawaja Muhammad Asif, the Minister of Defence of Pakistan, threatened on Twitter to nuke Israel after a false story claimed that Avigdor Lieberman, the Israeli Minister of Defense, had said: "If Pakistan send ground troops into Syria on any pretext, we will destroy this country with a nuclear attack."

Poland

In 2016, Polish historian Jerzy Targalski noted that fake news websites had infiltrated Poland through anti-establishment and right-wing sources that copied content from Russia Today. Targalski observed that about 20 specific fake news websites in Poland spread Russian disinformation in the form of fake news. One example cited was the false claim that Ukraine had announced the Polish city of Przemyśl to be occupied Polish land. In 2020, fake news websites related to the COVID-19 pandemic were identified and officially labelled as such by the Polish Ministry of Health.

Sweden

The Swedish Security Service issued a report in 2015 identifying Russian propaganda infiltrating Sweden with the objective of amplifying pro-Russian views and inflaming societal conflicts. The Swedish Civil Contingencies Agency (MSB), part of the Ministry of Defence of Sweden, identified fake news reports targeting Sweden in 2016 that originated from Russia. MSB official Mikael Tofvesson stated that a pattern emerged in which views critical of Sweden were constantly repeated. The MSB identified Russia Today and Sputnik News as significant fake news purveyors. As a result of the growth of this propaganda in Sweden, the MSB planned to hire six additional security officials to fight back against the campaign of fraudulent information.

Taiwan

In December 2015, The China Post reported that a fake video shared online purported to show a light show at the Shihmen Reservoir. The Northern Region Water Resources Office confirmed there was no light show at the reservoir and the event had been fabricated. The fraud nevertheless led to an increase in tourist visits to the actual attraction.

Ukraine

Deutsche Welle interviewed the founder of Stopfake.org in 2014 about the website's efforts to debunk fake news in Ukraine, including media portrayals of the Ukrainian crisis. Co-founder Margot Gontar began the site in March 2014, aided by volunteers. In 2014, Deutsche Welle awarded the fact-checking website the People's Choice Award for Russian at its ceremony The BOBs, recognizing excellence in advocacy on the Internet. Gontar highlighted an example debunked by the website in which a fictitious "Doctor Rozovskii" supposedly told The Guardian that pro-Ukraine individuals had refused to allow him to tend to the injured in fighting with Russian supporters in 2014. Stopfake.org exposed the event as fabricated: there was no individual named "Doctor Rozovskii", and the Facebook photo distributed with the incident showed a different individual from Russia with a separate identity. Former Ukraine president Viktor Yanukovych's ouster from power created instability, and in 2015 the Organization for Security and Co-operation in Europe concluded that Russian disinformation campaigns used fake news to disrupt relations between Europe and Ukraine. Russian-financed disinformation after the conflict in Ukraine motivated the European Union to found a specialist task force within the European External Action Service to counter the propaganda.

United Kingdom

Labour MP Michael Dugher was assigned by Deputy Leader of the Labour Party Tom Watson in November 2016 to investigate the impact of fake news spread through social media. Watson said they would work with Twitter and Facebook to root out clear-cut circumstances of "downright lies". Watson wrote an article for The Independent in which he suggested methods of responding to fake news, including Internet-based societies that fact-check in a manner modeled after Wikipedia. Minister for Culture Matthew Hancock stated the British government would investigate the impact of fake news and its pervasiveness on social media websites; Watson welcomed the investigation. On 8 December 2016, Chief of the Secret Intelligence Service (MI6) Alex Younger delivered a speech to journalists at MI6 headquarters in which he called fake news and propaganda damaging to democracy. Younger said the mission of MI6 was to combat propaganda and fake news in order to deliver to his government a strategic advantage in the information warfare arena, and to assist other nations, including European ones. He called such methods of online fake news propaganda a "fundamental threat to our sovereignty", and said all nations that hold democratic values should feel the same worry over fake news.

United States

2016 election cycle

U.S. President Barack Obama said, "If we can't discriminate between serious arguments and propaganda, then we have problems."

Fraudulent stories during the 2016 U.S. presidential election popularized on Facebook included a viral post that Pope Francis had endorsed Donald Trump, and another that actor Denzel Washington "backs Trump in the most epic way possible". Donald Trump's son and campaign surrogate Eric Trump, top national security adviser Michael T. Flynn, and then-campaign managers Kellyanne Conway and Corey Lewandowski shared fake news stories during the campaign.

Misuse of the term

After the 2016 election, Republican politicians and conservative media began to appropriate the term, using it to describe any news they saw as hostile to their agenda. According to The New York Times, Breitbart News, Rush Limbaugh, and supporters of Donald Trump dismissed true mainstream news reports, and any news they did not like, as "fake news".

U.S. response to Russia in Syria

The Russian state-operated newswire RIA Novosti, known as Sputnik International, reported fake news and fabricated statements by White House Press Secretary Josh Earnest. On 7 December 2016, RIA Novosti falsely reported that Earnest stated sanctions against Russia were on the table over Syria, quoting him as saying: "There are a number of things that are to be considered, including some of the financial sanctions that the United States can administer in coordination with our allies. I would definitely not rule that out." However, the Press Secretary never used the word "sanctions": Russia was discussed in eight instances during the press conference, but never regarding sanctions. The press conference focused solely on Russian air raids in Syria against rebels fighting Syrian President Bashar al-Assad in Aleppo.

Legislative and executive responses

Members of the U.S. Senate Intelligence Committee traveled to Ukraine and Poland in March 2016 and heard about Russian operations to influence internal Ukrainian matters. Senator Angus King recalled they were informed about Russia "planting fake news stories" during elections. On 30 November 2016 seven members of the Senate Intelligence Committee asked President Obama to publicize information on Russia's role in spreading disinformation in the U.S. election. On 30 November 2016, legislators approved a measure within the National Defense Authorization Act to finance the U.S. State Department to act against foreign propaganda. The initiative was developed through a bipartisan bill, the Countering Foreign Propaganda and Disinformation Act, written by U.S. Senators Republican Rob Portman and Democrat Chris Murphy. Republican U.S. Senators stated they planned to hold hearings and investigate Russian influence on the 2016 U.S. elections. By doing so they went against the preference of incoming Republican President-elect Donald Trump, who downplayed any potential Russian meddling in the election. Senate Armed Services Committee Chairman John McCain, Senate Intelligence Committee Chairman Richard Burr, U.S. Senate Foreign Relations Committee Chairman Bob Corker, and Senator Lindsey Graham all planned investigations in the 115th U.S. Congress session.

U.S. President Barack Obama commented on fake news online in a speech the day before Election Day in 2016, saying social media spread lies and created a "dust cloud of nonsense". Obama commented again on the problem after the election: "if we can't discriminate between serious arguments and propaganda, then we have problems." On 9 December 2016, President Obama ordered U.S. Intelligence Community to conduct a complete review of the Russian propaganda operation. In his year-end press conference on 16 December 2016, President Obama criticized a hyper-partisan atmosphere for enabling the proliferation of fake news.

Conspiracy theories and 2016 pizzeria attack

In November 2016, fake news sites and Internet forums falsely implicated the restaurant Comet Ping Pong and Democratic Party figures as part of a fictitious child trafficking ring, which was dubbed "Pizzagate". The rumor was widely debunked by sources such as the Metropolitan Police Department of the District of Columbia, fact-checking website Snopes.com, The New York Times, and Fox News. The restaurant's owners were harassed and threatened, and increased their security. On 4 December 2016, an individual from Salisbury, North Carolina, walked into the restaurant, telling police he planned to "self-investigate" the conspiracy theory. He brought a semi-automatic rifle and fired shots before being arrested; no one was injured. He was charged with assault with a dangerous weapon, carrying a pistol without a license, unlawful discharge of a firearm, and carrying a rifle or shotgun outside the home or business. After the incident, future National Security Advisor Michael T. Flynn and his son Michael G. Flynn were criticized by many reporters for spreading the rumors. Two days after the shooting, Trump fired Michael G. Flynn from his transition team in connection with Flynn's Twitter posting of fake news. Days after the attack, Hillary Clinton spoke out on the dangers of fake news in a tribute speech to retiring Senator Harry Reid at the U.S. Capitol, calling the problem an epidemic.

2018 Midterm Elections

To track junk news shared on Facebook during the 2018 midterm elections, the Computational Propaganda Project of the Oxford Internet Institute, University of Oxford, launched the Junk News Aggregator. The Aggregator is a public platform offering three interactive tools for tracking, in near real time, public posts shared on Facebook by junk news sources, showing the content of these posts and the user engagement they have received.

Response

Fact-checking websites and journalists

PolitiFact.com was praised by rival fact-checker FactCheck.org and recommended as a resource to debunk fake news sites.

Fact-checking websites FactCheck.org, PolitiFact.com and Snopes.com authored guides on how to respond to fraudulent news. FactCheck.org advised readers to check the source, author, date, and headline of publications. They recommended their colleagues Snopes.com, The Washington Post Fact Checker, and PolitiFact.com. FactCheck.org admonished consumers to be wary of confirmation bias. PolitiFact.com used a "Fake news" tag so readers could view all stories Politifact had debunked. Snopes.com warned readers social media was used as a harmful tool by fraudsters. The Washington Post's "The Fact Checker" manager Glenn Kessler wrote that all fact-checking sites saw increased visitors during the 2016 election cycle. Unique visitors to The Fact Checker increased five-fold from the 2012 election. Will Moy, director of London-based fact-checker Full Fact, said debunking must take place over a sustained period to be effective. Full Fact worked with Google to help automate fact-checking.

FactCheck.org former director Brooks Jackson said media companies devoted increased focus to the importance of debunking fraud during the 2016 election. FactCheck.org partnered with CNN's Jake Tapper in 2016 to examine the veracity of candidate statements. Angie Drobnic Holan, editor of PolitiFact.com, cautioned that media company chiefs must be supportive of debunking, as it often provokes hate mail and extreme responses from zealots. In December 2016, PolitiFact announced fake news was its selection for "Lie of the Year", explaining: "In 2016, the prevalence of political fact abuse – promulgated by the words of two polarizing presidential candidates and their passionate supporters – gave rise to a spreading of fake news with unprecedented impunity." PolitiFact called fake news a significant symbol of a culture accepting of post-truth politics.

Google CEO comment and actions

In the aftermath of the 2016 U.S. election, Google and Facebook faced scrutiny regarding the impact of fake news. The top result on Google for election results linked to a fake site: "70 News" had fraudulently written an incorrect headline and article stating that Trump won the popular vote against Clinton. Google later stated that the prominence of the fake site in search results was a mistake; by 14 November, the "70 News" result was the second link shown when searching for results of the election. When asked shortly after the election whether fake news influenced election results, Google CEO Sundar Pichai responded "Sure" and went on to emphasize the importance of stopping the spread of fraudulent sites. On 14 November 2016, Google responded to the problem by banning such companies from profiting on advertising from traffic through its AdSense program; Google previously had a policy denying ads for dieting ripoffs and counterfeit merchandise. Google stated upon the announcement that it would work to ban advertisements from sources that lie about their purpose, content, or publisher. The ban is not expected to apply to news satire sites like The Onion, although some satirical sites may be inadvertently blocked under the new system.

On 25 April 2017, Ben Gomes wrote a blog post announcing changes to the search algorithms that would stop the "spread of blatantly misleading, low quality, offensive or downright false information." On 27 July 2017, the World Socialist Web Site published data that showed a significant drop after the 25 April announcement in Google referrals to left-wing and anti-war websites, including the ACLU, Alternet, and Counterpunch. The World Socialist Web Site insists that the "fake news" charge is a cover to remove anti-establishment websites from public access, and believes the algorithm changes are infringing on the democratic right to free speech.

Facebook deliberations

Blocking fraudulent advertisers

Facebook CEO Mark Zuckerberg specifically recommended fact-checking site Snopes.com.

One day after Google took action, Facebook decided to block fake sites from advertising there. Facebook said it would ban ads from sites with deceptive content, including fake news, and review publishers for compliance. These steps by both Google and Facebook were intended to deny ad revenue to fraudulent news sites; neither company took action to prevent the dissemination of false stories in search engine results pages or web feeds. Facebook CEO Mark Zuckerberg called the notion that fraudulent news impacted the 2016 election a "crazy idea" and denied that his platform influenced the election. He stated that 99% of Facebook's content was neither fake news nor a hoax, and that Facebook was not a media company. Zuckerberg advised users to check the fact-checking website Snopes.com whenever they encountered fake news on Facebook.

Top staff members at Facebook did not feel simply blocking ad revenue from fraudulent sites was a strong enough response, and they made an executive decision and created a secret group to deal with the issue themselves. In response to Zuckerberg's first statement that fraudulent news did not impact the 2016 election, the secret Facebook group disputed this notion, saying fake news was rampant on their website during the election cycle. The secret task force included dozens of Facebook employees.

Response

Facebook faced criticism for its decision to revoke advertising revenues from fraudulent news providers without taking further action. After negative media coverage, including assertions that fraudulent news gave the 2016 U.S. presidential election to Trump, Zuckerberg posted a second time about the issue on 18 November 2016. The post was a reversal of his earlier comments, in which he had discounted the impact of fraudulent news. Zuckerberg said it was difficult to filter out fraudulent news because he desired open communication. Measures considered but not implemented by Facebook included adding an ability for users to tag questionable material, automated checking tools, and third-party confirmation. The 18 November post did not announce any concrete actions the company would definitively take, or when such measures would be put into use.

National Public Radio observed that the changes being considered by Facebook to identify fraud constituted progress for the company toward becoming a new media entity. On 19 November 2016, BuzzFeed advised Facebook users they could report posts from fraudulent sites by choosing the report option "I think it shouldn't be on Facebook", followed by "It's a false news story". In November 2016, Facebook began assessing the use of warning labels on fake news. The rollout was at first only available to a few users in a testing phase; a sample warning read: "This website is not a reliable news source. Reason: Classification Pending". TechCrunch analyzed the new feature during the testing phase and surmised it might have a tendency towards false positives.

Fake news proliferation on Facebook had a negative financial impact for the company. Brian Wieser of Pivotal Research predicted that revenues could decrease by two percentage points due to the concern over fake news and the loss of advertising dollars. Shortly after Zuckerberg's second statement on fake news proliferation, Facebook decided to assist the government of China with a version of its software that would allow increased government censorship. Barron's contributor William Pesek was highly critical of this move, writing that by porting its fake news conundrum to China, Facebook would become a tool in Communist Party General Secretary Xi Jinping's efforts to increase censorship.

Media scholar Nolan Higdon argues that relying on tech companies to solve the issues with false information will exacerbate the problems associated with fake news. Higdon contends that tech companies lack an incentive to solve the problem because they benefit from the proliferation of fake news, and cites tech companies' data collection as one of the strongest forces empowering fake news producers. Rather than government regulation or industry censorship, Higdon argues for the introduction of critical news literacy education into American education.

Partnership with debunkers

Society of Professional Journalists president Lynn Walsh said in November 2016 that the organization would reach out to Facebook to help weed out fake news. Walsh said Facebook should evolve and admit it functioned as a media company. On 17 November 2016, the Poynter International Fact-Checking Network (IFCN) published an open letter on the Poynter Institute website to Mark Zuckerberg, imploring him to utilize fact-checkers to identify fraud on Facebook. Signatories to the letter represented fact-checking groups worldwide, including Africa Check, FactCheck.org, PolitiFact.com, and The Washington Post Fact Checker. In his second post on the matter on 18 November 2016, Zuckerberg responded to the fraudulent news problem by suggesting the use of fact-checkers. He specifically identified the fact-checking website Snopes.com, and pointed out that Facebook monitors links to such debunkers in reply comments to determine which original posts are fraudulent.

On 15 December 2016, Facebook announced more specifics in its efforts to combat fake news and hoaxes on its site. The company said it would form a partnership with fact-checking groups that had joined the Poynter International Fact-Checking Network's code of principles to help debunk fraud on the site. It was the first time Facebook had given third-party entities highlighted placement in its News Feed, a significant driver of web traffic online. The fact-checking organizations would confirm whether links posted on the site were factual or fraudulent. Facebook did not finance the fact-checkers, and acknowledged they could see increased traffic to their sites from the partnership.

Fact-checking organizations that joined Facebook's initiative included ABC News, The Washington Post, Snopes.com, FactCheck.org, PolitiFact, and the Associated Press. Fraudulent articles would receive a warning tag, "disputed by 3rd party fact-checkers". The company planned to start with obvious cases of hoaxes shared specifically for fraudulent purposes to gain money for the purveyor of fake news. Users would still be able to share tagged articles, but the articles would show up farther down in the news feed with an accompanying warning. Facebook would also employ staff researchers to determine whether website spoofing had occurred, for example "washingtonpost.co" instead of the real washingtonpost.com. In a post on 15 December, Mark Zuckerberg acknowledged the changing nature of Facebook: "I think of Facebook as a technology company, but I recognize we have a greater responsibility than just building technology that information flows through. While we don't write the news stories you read and share, we also recognize we're more than just a distributor of news. We're a new kind of platform for public discourse -- and that means we have a new kind of responsibility to enable people to have the most meaningful conversations, and to build a space where people can be informed."

Proposed technology tools

New York magazine contributor Brian Feldman created a Google Chrome extension that would warn users about fraudulent news sites, and invited others to use his code and improve upon it. Upworthy co-founder and The Filter Bubble author Eli Pariser launched an open-source initiative on 17 November 2016 to address false news. Pariser began a Google Document to collaborate with others online on how to lessen the phenomenon of fraudulent news, calling his initiative "Design Solutions for Fake News". Pariser's document included recommendations for a ratings organization analogous to the Better Business Bureau and a database on media producers in a format like Wikipedia. Writing for Fortune, Matthew Ingram agreed that Wikipedia could serve as a helpful model for improving Facebook's analysis of potentially fake news, concluding that Facebook could benefit from a social-network form of fact-checking similar to Wikipedia's methods while incorporating debunking websites such as PolitiFact.com.
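At their core, tools like Feldman's extension reduce to matching the current page's domain against a curated list and surfacing a warning. The following is a minimal sketch of that check in Python; the browser-extension packaging is omitted, and the blocklist entries are placeholders rather than an endorsement of any published list.

    # Core logic behind a "warn on fraudulent site" tool: match the page's
    # domain against a curated blocklist. Entries here are placeholders.
    from urllib.parse import urlparse

    BLOCKLIST = {"abcnews.com.co", "fake-news.example"}

    def warn_for(url):
        """Return a warning string if the URL's host is on the blocklist."""
        host = (urlparse(url).hostname or "").lower().removeprefix("www.")
        for bad in BLOCKLIST:
            if host == bad or host.endswith("." + bad):
                return f"Warning: {host} appears on a list of fake news sites."
        return None

    print(warn_for("http://abcnews.com.co/some-story"))   # triggers the warning
    print(warn_for("https://abcnews.go.com/some-story"))  # prints None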

Others

Pope Francis, the leader of the Roman Catholic Church, spoke out against fake news in an interview with the Belgian Catholic weekly Tertio on 7 December 2016. The Pope had prior experience being the subject of fake news fiction: during the 2016 U.S. election cycle, he was falsely said to support Donald Trump for president. Pope Francis said the singular worst thing the news media could do was spread disinformation, and that amplifying fake news instead of educating society was a sin. He compared salacious reporting of scandals, whether true or not, to coprophilia, and the consumption of it to coprophagy. The Pope said that he did not intend to offend with his strong words, but emphasized that "a lot of damage can be done" when the truth is disregarded and slander is spread.

Academic analysis

Jamie Condliffe wrote that banning ad revenue from fraudulent sites was not aggressive enough action by Facebook to deal with the problem, and did not prevent fake news from appearing in Facebook news feeds. University of Michigan political scientist Brendan Nyhan criticized Facebook for not doing more to combat fake news amplification. Indiana University computer science professor Filippo Menczer commented on the measures by Google and Facebook to deny fraudulent sites revenue, saying it was a good step to reduce the motivation for fraudsters. Menczer's research team developed an online tool titled Hoaxy to track the pervasiveness of unconfirmed assertions, as well as related debunking, on the Internet.

Zeynep Tufekci has written that Facebook amplified fake news and echo chambers.

Zeynep Tufekci wrote critically about Facebook's stance on fraudulent news sites, stating that fraudulent websites in Macedonia profited handsomely off false stories about the 2016 U.S. election. Tufekci wrote that Facebook's algorithms and structure exacerbated the impact of echo chambers and increased the fake news blight.

In 2016 Melissa Zimdars, associate professor of communications at Merrimack College, created a handout for her Introduction to Mass Communication students titled "False, Misleading, Clickbait-y, and/or Satirical 'News' Sources" and posted it on Google Docs. It circulated on social media, and on 15 November 2016 the Los Angeles Times published the class handout under the title "Want to keep fake news out of your newsfeed? College professor creates list of sites to avoid". Zimdars said that the list "wasn't intended to be widely distributed" and expressed concern that "people are taking it as this list of 'fake' sites, which is not its purpose". On 17 November 2016 Zimdars deleted the list, and on 3 January 2017 she replaced the original handout with a new list at the same URL. The new list removed most of the sites from the original handout, added many new ones, and greatly expanded the categories.

Stanford University professors Sam Wineburg and Sarah McGrew authored a 2016 study analyzing students' ability to distinguish fraudulent news from factual reporting. The study took place over a year and drew on a sample of over 7,800 responses from university, secondary, and middle school students in 12 U.S. states. The authors were surprised at the consistency with which students took fraudulent news reports to be factual: 82% of middle school students were unable to distinguish an advertisement labeled as sponsored content from an actual news article. The authors concluded the solution was to educate online media consumers to behave like fact-checkers themselves and to actively question the veracity of all sources. A 2019 study in the journal Science, which examined the dissemination of fake news articles on Facebook in the 2016 election, found that sharing of fake news articles on Facebook was "relatively rare", that conservatives were more likely than liberals or moderates to share fake news, and that there was a "strong age effect", whereby individuals over 65 were vastly more likely to share fake news than younger cohorts. Another 2019 study in Science found that "fake news accounted for nearly 6% of all news consumption [on Twitter], but it was heavily concentrated—only 1% of users were exposed to 80% of fake news, and 0.1% of users were responsible for sharing 80% of fake news. Interestingly, fake news was most concentrated among conservative voters."

Scientist Emily Willingham has proposed applying the scientific method to fake news analysis. Having previously written on differentiating science from pseudoscience, she proposed applying the same logic to fake news, with recommended steps she calls Observe, Question, Hypothesize, Analyze data, Draw conclusion, and Act on results. Willingham suggested starting from the hypothesis "This is real news" and then forming a strong set of questions to attempt to disprove it. These tests included checking the URL and the date of the article, evaluating reader bias and writer bias, double-checking the evidence, and verifying the sources cited. University of Connecticut philosophy professor Michael P. Lynch said that a troubling number of individuals make determinations relying upon the most recent piece of information they have consumed. The greater issue, he said, was that fake news could make people less likely to believe news that really is true. Lynch summed up the thought process of such individuals as "...ignore the facts because nobody knows what's really true anyway."
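
Willingham's procedure can be read as a falsification loop: the hypothesis "this is real news" stands only until one of the checks disproves it. A schematic sketch of that loop, with each individual check left as an illustrative stub standing in for human judgment:

    # Schematic only: the check functions are stubs, not real verifiers.
    CHECKS = {
        "URL is a known, non-spoofed domain": lambda a: a["url_ok"],
        "Article date is current, not recycled": lambda a: a["date_ok"],
        "Evidence survives double-checking": lambda a: a["evidence_ok"],
        "Cited sources exist and support the claims": lambda a: a["sources_ok"],
    }

    def hypothesis_survives(article: dict) -> bool:
        # "This is real news" stands only if no check disproves it.
        return all(check(article) for check in CHECKS.values())

    article = {"url_ok": True, "date_ok": True,
               "evidence_ok": False, "sources_ok": True}
    print(hypothesis_survives(article))  # False: one failed check falsifies it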

In 2019, David Lazer and other researchers, from Northeastern University, Harvard University, and the University at Buffalo, analyzed engagement with a previously defined set of fake news sources on Twitter. They found that such engagement was highly concentrated both among a small number of websites and a small number of Twitter users. Five percent of the sources accounted for over fifty percent of exposures. Among users, 0.1 percent consumed eighty percent of the volume from fake news sources.
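
Concentration figures of this kind can be computed by sorting users by their share volume and finding the smallest fraction that accounts for a target share of the total. A minimal sketch with toy data (the numbers below are illustrative, not the study's):

    def fraction_covering(volumes, target=0.80):
        """Smallest fraction of users accounting for `target` of total volume."""
        ordered = sorted(volumes, reverse=True)
        total, running = sum(ordered), 0
        for i, v in enumerate(ordered, start=1):
            running += v
            if running >= target * total:
                return i / len(ordered)
        return 1.0

    # Toy data: one heavy sharer among a hundred users.
    volumes = [800] + [2] * 99
    print(fraction_covering(volumes))  # 0.01 -> 1% of users cover 80%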

Friday, July 2, 2021

Stereotype threat

From Wikipedia, the free encyclopedia

Stereotype threat is a situational predicament in which people are or feel themselves to be at risk of conforming to stereotypes about their social group. It is purportedly a contributing factor to long-standing racial and gender gaps in academic performance. Since its introduction into the academic literature, stereotype threat has become one of the most widely studied topics in the field of social psychology.

Situational factors that increase stereotype threat can include the difficulty of the task, the belief that the task measures one's abilities, and the relevance of the stereotype to the task. Individuals show higher degrees of stereotype threat on tasks they wish to perform well on and when they identify strongly with the stereotyped group. These effects are increased when they expect discrimination due to their identification with a negatively stereotyped group. Repeated experiences of stereotype threat can lead to a vicious circle of diminished confidence, poor performance, and loss of interest in the relevant area of achievement. Stereotype threat has been argued to reduce the performance of individuals who belong to negatively stereotyped groups, and a role in public health disparities has also been suggested.

According to the theory, if negative stereotypes are present regarding a specific group, group members are likely to become anxious about their performance, which may hinder their ability to perform to their full potential. Importantly, the individual does not need to subscribe to the stereotype for it to be activated. It is hypothesized that the mechanism through which anxiety (induced by the activation of the stereotype) decreases performance is by depleting working memory (especially the phonological aspects of the working memory system). The opposite of stereotype threat is stereotype boost, which is when people perform better than they otherwise would have, because of exposure to positive stereotypes about their social group. A variant of stereotype boost is stereotype lift, which is people achieving better performance because of exposure to negative stereotypes about other social groups.

Some researchers have suggested that stereotype threat should not be interpreted as a factor in real-life performance gaps, and have raised the possibility of publication bias. Other critics have focused on correcting what they claim are misconceptions of early studies showing a large effect. However, meta-analyses and systematic reviews have shown significant evidence for the effects of stereotype threat, though the phenomenon defies over-simplistic characterization.

Empirical studies

More than 300 studies have been published showing the effects of stereotype threat on performance in a variety of domains. Stereotype threat is considered by some researchers to be a contributing factor to long-standing racial and gender achievement gaps, such as under-performance of black students relative to white ones in various academic subjects, and under-representation of women at higher echelons in the field of mathematics.

The strength of the stereotype threat that occurs depends on how the task is framed. If a task is framed neutrally, stereotype threat is unlikely to occur; if it is framed in terms of an active stereotype, participants are likely to perform worse. For example, a study of chess players found that female players performed more poorly than expected when told they would be playing against a male opponent, whereas women told that their opponent was female performed as predicted by their past ratings.

A 2007 study extended stereotype threat research to entrepreneurship, a traditionally male-stereotyped profession. The study revealed that stereotype threat can depress women's entrepreneurial intentions while boosting men's intentions. However, when entrepreneurship is presented as a gender-neutral profession, men and women express a similar level of interest in becoming entrepreneurs. Another experiment involved a golf game which was described as a test of "natural athletic ability" or of "sports intelligence". When it was described as a test of athletic ability, European-American students performed worse, but when the description mentioned intelligence, African-American students performed worse.

The effect of stereotype threat (ST) on math test scores for girls and boys. Data from Osborne (2007).

Other studies have demonstrated how stereotype threat can negatively affect the performance of European Americans in athletic situations as well as the performance of men who are being tested on their social sensitivity. Although the framing of a task can produce stereotype threat in most individuals, certain individuals appear to be more likely to experience stereotype threat than others. Individuals who highly identify with a particular group appear to be more vulnerable to experiencing stereotype threat than individuals who do not identify strongly with the stereotyped group.

The mere presence of other people can evoke stereotype threat. In one experiment, women who took a mathematics exam along with two other women got 70% of the answers right, whereas women who took the same exam in the presence of two men got an average score of 55%.

The goal of a study conducted by Désert, Préaux, and Jund in 2009 was to see whether children from lower socioeconomic groups are affected by stereotype threat. The study compared children aged 6–7 with children aged 8–9 from multiple elementary schools. The children were given the Raven's Matrices test, a test of intellectual ability, with directions presented either in an evaluative or a non-evaluative way: the evaluative group received the instructions usually given with the test, while the non-evaluative group was given directions that made it seem as if the children were simply playing a game. As expected, third graders performed better on the test than first graders. However, when directions were given in an evaluative way, lower socioeconomic status children did worse than higher socioeconomic status children, suggesting that the framing of the directions can have a greater effect on performance than socioeconomic status. This information can be useful in classroom settings to help improve the performance of students of lower socioeconomic status.

There have also been studies on the effects of stereotype threat based on age. One study of 99 senior citizens aged 60–75 gave participants multiple tests covering factors such as memory and physical ability, and asked them to evaluate how physically fit they believed themselves to be. Participants were also asked to read articles containing both positive and negative outlooks on seniors, and watched someone else reading the same articles. The goal was to see whether priming participants before the tests would affect performance. The results showed that the control group performed better than groups primed with either negative or positive words prior to the tests; the control group also seemed to feel more confident in its abilities than the other two groups.

Many psychological experiments on stereotype threat focus on the physiological effects of negative stereotypes on performance, looking at both high- and low-status groups. Scheepers and Ellemers tested the hypothesis that, when assessing a performance situation on the basis of current status beliefs, low-status group members would show a physiological threat response, and that high-status members would likewise show a threat response when contemplating a possible alteration of the status quo (Scheepers & Ellemers, 2005). The results were in line with expectations: participants in the low-status condition showed higher blood pressure immediately after the status feedback, while participants in the high-status condition showed a spike in blood pressure while anticipating the second round of the task.

In 2012, Scheepers et al. hypothesized that priming high social power produces "an efficient cardiovascular pattern (challenge)", whereas activating low social power causes "an inefficient cardiovascular pattern", or threat (Scheepers, de Wit, Ellemers & Sassenberg, 2012). Two experiments were carried out to test this hypothesis, the first examining power priming and the second involving role play. Results from both experiments supported the hypothesis.

Cleopatra Abdou and Adam Fingerhut were the first to develop experimental methods to study stereotype threat in a health care context, including the first study indicating that health care stereotype threat is linked with adverse health outcomes and disparities.

Some studies have found null results. The single largest experimental test of stereotype threat (N = 2064), conducted on Dutch high school students, found no effect. The authors state, however, that these results are limited to a narrow age-range, experimental procedure and cultural context, and call for further registered reports and replication studies on the topic. Despite these limitations, they state in conclusion that their study shows "that the effects of stereotype threat on math test performance should not be overgeneralized."

Numerous meta-analyses and systematic reviews have shown significant evidence for the effects of stereotype threat. However they also point to ways in which the phenomenon defies over-simplistic characterization. For instance, one meta-analysis found that with female subjects "subtle threat-activating cues produced the largest effect, followed by blatant and moderately explicit cues" while with minorities "moderately explicit stereotype threat-activating cues produced the largest effect, followed by blatant and subtle cues".

Mechanisms

Although numerous studies demonstrate the effects of stereotype threat on performance, questions remain as to the specific cognitive factors that underlie these effects. Steele and Aronson originally speculated that attempts to suppress stereotype-related thoughts lead to anxiety and the narrowing of attention. This could contribute to the observed deficits in performance. In 2008, Toni Schmader, Michael Johns, and Chad Forbes published an integrated model of stereotype threat that focused on three interrelated factors:

  1. stress arousal;
  2. performance monitoring, which narrows attention; and,
  3. efforts to suppress negative thoughts and emotions.

Schmader et al. suggest that these three factors summarize the pattern of evidence that has been accumulated by past experiments on stereotype threat. For example, stereotype threat has been shown to disrupt working memory and executive function, increase arousal, increase self-consciousness about one's performance, and cause individuals to try to suppress negative thoughts as well as negative emotions such as anxiety. People have a limited amount of cognitive resources available. When a large portion of these resources is spent focusing on anxiety and performance pressure, the individual is likely to perform worse on the task at hand.

A number of studies looking at physiological and neurological responses support Schmader and colleagues' integrated model of the processes that produce stereotype threat. Supporting an explanation in terms of stress arousal, one study found that African Americans under stereotype threat exhibit larger increases in arterial blood pressure. One study found increased cardiovascular activation amongst women who watched a video in which men outnumbered women at a math and science conference. Other studies have similarly found that individuals under stereotype threat display increased heart rates. Stereotype threat may also activate a neuroendocrine stress response, as measured by increased levels of cortisol while under threat. The physiological reactions that are induced by stereotype threat can often be subconscious, and can distract and interrupt cognitive focus from the task.

With regard to performance monitoring and vigilance, studies of brain activity have supported the idea that stereotype threat increases both of these processes. Forbes and colleagues recorded electroencephalogram (EEG) signals that measure electrical activity along the scalp, and found that individuals experiencing stereotype threat were more vigilant for performance-related stimuli.

Another study used functional magnetic resonance imaging (fMRI) to investigate brain activity associated with stereotype threat. The researchers found that women experiencing stereotype threat while taking a math test showed heightened activation in the ventral stream of the anterior cingulate cortex (ACC), a neural region thought to be associated with social and emotional processing. Wraga and colleagues found that women under stereotype threat showed increased activation in the ventral ACC and that the amount of this activation predicted performance decrements on the task. When individuals were made aware of performance-related stimuli, they were more likely to experience stereotype threat.

A study conducted by Boucher, Rydell, Loo, and Rydell showed that stereotype threat can affect not only performance but also the ability to learn new information. In the study, undergraduate men and women completed a learning session followed by an assessment of what they had learned. Some participants were given information intended to induce stereotype threat, and some of these were later given "gender fair" information predicted to reduce or remove the threat, yielding four conditions: control, stereotype threat only, stereotype threat removed before learning, and stereotype threat removed after learning. Women who were presented with the "gender fair" information performed better on the math-related test than women who were not, and the information was more beneficial when presented before learning rather than after. These results suggest that eliminating stereotype threat before mathematical testing can help women perform better, and that eliminating it before mathematical learning can help women learn better.

Original study

"The Effects of Stereotype Threat on the Standardized Test Performance of College Students (adjusted for group differences on SAT)". From J. Aronson, C.M. Steele, M.F. Salinas, M.J. Lustina, Readings About the Social Animal, 8th edition, ed. E. Aronson

In 1995, Claude Steele and Joshua Aronson performed the first experiments demonstrating that stereotype threat can undermine intellectual performance. Steele and Aronson measured this through a word completion task.

They had African-American and European-American college students take a difficult verbal portion of the Graduate Record Examination test. As would be expected based on national averages, the African-American students did not perform as well on the test. Steele and Aronson split students into three groups: stereotype-threat (in which the test was described as being "diagnostic of intellectual ability"), non-stereotype threat (in which the test was described as "a laboratory problem-solving task that was nondiagnostic of ability"), and a third condition (in which the test was again described as nondiagnostic of ability, but participants were asked to view the difficult test as a challenge). All three groups received the same test.

Steele and Aronson concluded that changing the instructions on the test could reduce African-American students' concern about confirming a negative stereotype about their group. Supporting this conclusion, they found that African-American students who regarded the test as a measure of intelligence had more thoughts related to negative stereotypes of their group. Additionally, they found that African Americans who thought the test measured intelligence were more likely to complete word fragments using words associated with relevant negative stereotypes (e.g., completing "__mb" as "dumb" rather than as "numb").

Adjusted for previous SAT scores, subjects in the non-diagnostic-challenge condition performed significantly better than those in the non-diagnostic-only condition and those in the diagnostic condition. In the first experiment, the race-by-condition interaction was marginally significant; the second study reported in the same paper found a significant interaction of race and condition, suggesting that placement in the diagnostic condition significantly impaired the performance of African Americans relative to European Americans.
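
The analysis described here is, in outline, an analysis of covariance with prior SAT as the covariate and a race-by-condition interaction term. A hedged sketch of such a model using the statsmodels formula API (the data file and column names are hypothetical; this is not Steele and Aronson's actual code):

    import pandas as pd
    import statsmodels.formula.api as smf

    # Hypothetical dataset with columns: score, prior_sat, race, condition.
    df = pd.read_csv("stereotype_threat_scores.csv")

    # ANCOVA: test score adjusted for prior SAT, with a race x condition
    # interaction; a significant C(race):C(condition) term corresponds to
    # the interaction effect reported in the second experiment.
    model = smf.ols("score ~ C(race) * C(condition) + prior_sat", data=df).fit()
    print(model.summary())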

Stereotype lift and stereotype boost

Stereotype threat concerns how stereotype cues can harm performance. However, in certain situations, stereotype activation can also lead to performance enhancement through stereotype lift or stereotype boost. Stereotype lift increases performance when people are exposed to negative stereotypes about another group. This enhanced performance has been attributed to increases in self-efficacy and decreases in self-doubt as a result of negative outgroup stereotypes. Stereotype boost suggests that positive stereotypes may enhance performance. Stereotype boost occurs when a positive aspect of an individual's social identity is made salient in an identity-relevant domain. Although stereotype boost is similar to stereotype lift in enhancing performance, stereotype lift is the result of a negative outgroup stereotype, whereas stereotype boost occurs due to activation of a positive ingroup stereotype.

Consistent with the positive racial stereotype concerning their superior quantitative skills, Asian American women performed better on a math test when their Asian identity was primed compared to a control condition where no social identity was primed. Conversely, these participants did worse on the math test when instead their gender identity—which is associated with stereotypes of inferior quantitative skills—was made salient, which is consistent with stereotype threat. Two replications of this result have been attempted. In one case, the effect was only reproduced after excluding participants who were unaware of stereotypes about the mathematical abilities of Asians or women, while the other replication failed to reproduce the original results even considering several moderating variables.

Long-term and other consequences

Decreased performance is the most recognized consequence of stereotype threat. However, research has also shown that stereotype threat can cause individuals to blame themselves for perceived failures, self-handicap, discount the value and validity of performance tasks, distance themselves from negatively stereotyped groups, and disengage from situations that are perceived as threatening.

Studies examining stereotype threat in Black Americans have found that, when subjects are aware of the stereotype of Black criminality, anxiety about encountering police increases. This, in turn, can lead to self-regulatory efforts, more anxiety, and other behaviors that police officers commonly perceive as suspicious. Because police officers tend to perceive Black people as threatening, their reactions to these anxiety-induced behaviors are commonly harsher than their reactions to White people exhibiting the same behavior, and can influence whether or not they decide to shoot the person.

In the long run, the chronic experience of stereotype threat may lead individuals to disidentify with the stereotyped group. For example, a woman may stop seeing herself as "a math person" after experiencing a series of situations in which she experienced stereotype threat. This disidentification is thought to be a psychological coping strategy to maintain self-esteem in the face of failure. Repeated exposure to anxiety and nervousness can lead individuals to choose to distance themselves from the stereotyped group.

Although much of the research on stereotype threat has examined the effects of coping with negative stereotypes on academic performance, recently there has been an emphasis on how coping with stereotype threat could "spill over" to dampen self-control and thereby affect a much broader category of behaviors, even in non-stereotyped domains. Research by Michael Inzlicht and colleagues suggests that, when women cope with negative stereotypes about their math ability, they perform worse on math tests, and that, well after completing the math test, women may continue to show deficits even in unrelated domains. For example, women might overeat, be more aggressive, make more risky decisions, and show less endurance during physical exercise.

The perceived discrimination associated with stereotype threat can also have negative long-term consequences on individuals' mental health. Perceived discrimination has been extensively investigated in terms of its effects on mental health, with a particular emphasis on depression. Cross-sectional studies involving diverse minority groups, including those relating to internalized racism, have found that individuals who experience more perceived discrimination are more likely to exhibit depressive symptoms. Additionally, perceived discrimination has also been found to predict depressive symptoms in children and adolescents. Other negative mental health outcomes associated with perceived discrimination include a reduced general well-being, post-traumatic stress disorder, anxiety, and rebellious behavior. A meta-analysis conducted by Pascoe and Smart Richman has shown that the strong link between perceived discrimination and negative mental health persists even after controlling for factors such as education, socioeconomic status, and employment.

Mitigation

Additional research seeks ways to boost the test scores and academic achievement of students in negatively stereotyped groups. Such studies suggest various ways in which the effects of stereotype threat may be mitigated. For example, there have been increasing concerns about the negative effects of stereotype threat on MCAT, SAT, and LSAT scores. One effort at mitigating these negative consequences involves rescaling standardized test scores to adjust for the adverse effects of stereotypes.

Perhaps most prominently, well-replicated findings suggest that teaching students to re-evaluate stress and to adopt an incremental theory of intelligence can be an effective way to mitigate the effects of stereotype threat. Two studies sought to measure the effects of persuading participants that intelligence is malleable and can be increased through effort. Both suggested that if people believe they can improve their performance through effort, they are more likely to believe they can overcome negative stereotypes, and thus to perform well. Another study found that having students reexamine their situation or anxiety can preserve their executive resources (attentional control, working memory, etc.) rather than allowing stress to deplete them, and thus improve test performance. Subsequent research has found that students who are taught an incremental view of intelligence do not attribute academic setbacks to their innate ability, but rather to a situational attribute such as a poor study strategy. As a result, such students are more likely to implement alternative study strategies and seek help from others.

Research on the power of self-affirmation exercises has shown promising results as well. One such study found that a self-affirmation exercise (in the form of a brief in-class writing assignment about a value that is important to them) significantly improved the grades of African-American middle-school students and reduced the racial achievement gap by 40%. The authors of this study suggest that the racial achievement gap could be at least partially ameliorated by brief and targeted social-psychological interventions. Another such intervention was attempted with UK medical students, who were given a written assignment and a clinical assessment. For the written assignment group, white students performed worse than minority students. For the clinical assessment, both groups improved their performance, though the gap between racial groups was maintained. Allowing participants to think about a positive value or attribute about themselves prior to completing the task seemed to make them less susceptible to stereotype threat. Self-affirmation has also been shown to mitigate the performance gap between female and male participants on mathematical and geometrical reasoning tests. Similarly, it has been shown that encouraging women to think about their multiple roles and identities by creating a self-concept map can eliminate the gender gap on a relatively difficult standardized test. Women given such an opportunity for reflection performed as well as men on the math portion of the GRE, while women who did not create a self-concept map did significantly worse on the math section than men did.

Increasing the representation of minority groups in a field has also been shown to mitigate stereotype threat. In one study, women in STEM fields were shown a video of a conference with either a balanced or unbalanced ratio of men to women. The women viewing an unbalanced ratio reported a lower sense of belonging and less desire to participate. Decreasing cues that reflect only a majority group and increasing cues of minority groups can create environments that mitigate against stereotype threat. Further research has focused on constructing environments such that the physical objects in the environment do not reflect one majority group. For instance, in one study, researchers argued that individuals make decisions about group membership based on the group's environment and showed that altering the physical objects in a room boosted minority participation. In this study, removing stereotypical computer science objects and replacing them with non-stereotypical objects increased female participation in computer science to an equal level as male peers.

Directly communicating that diversity is valued may also be effective. One study revealed that a company's pamphlet stating a direct value of diversity, compared to a color blind approach, caused African Americans to report an increase in trust and comfort towards the company. Promoting cross-group relations between people of varying backgrounds has also been shown to be effective at promoting a sense of belonging among minority group members. For instance, a 2008 study indicates that students have a lower sense of belonging at institutions where they are the minority, but developing friendships with members of other racial groups increased their sense of belonging. In 2007, a study by Greg Walton and Geoffrey Cohen showed results in boosting the grades of African-American college students, and eliminating the racial achievement gap between them and their white peers over the first year of college, by emphasizing to participants that concerns about social belonging tend to lessen over time. These findings suggest that allowing individuals to feel as though they are welcomed into a desirable group makes them more likely to ignore stereotypes. The upshot is that if minority college students are welcomed into the world of academia, they are less likely to be influenced by the negative stereotypes of poor minority performance on academic tasks.

One early study suggested that simply informing college women about stereotype threat and its effects on performance was sufficient to eliminate the predicted gender gap on a difficult math test. The authors of this study argued that making people aware of the fact that they will not necessarily perform worse despite the existence of a stereotype can boost their performance. However, other research has found that merely providing information is not enough, and can even have the opposite effect. In one study, women were given a text "summarizing an experiment in which stereotypes, and not biological differences, were shown to be the cause of women's underperformance in math", and then they performed a math exercise. It was found that "women who properly understood the meaning of the information provided, and thus became knowledgeable about stereotype threat, performed significantly worse at a calculus task". In such cases, further research suggests that the manner in which the information is presented (that is, whether subjects are made to perceive themselves as targets of negative stereotyping) may be decisive.

Criticism

Some researchers have argued that stereotype threat should not be interpreted as a factor in real-world achievement gaps. Reviews have raised concerns that the effect might have been over-estimated in the performance of schoolgirls and argued that the field likely suffers from publication bias.

According to Paul R. Sackett, Chaitra M. Hardison, and Michael J. Cullen, both the media and scholarly literature have wrongly concluded that eliminating stereotype threat could completely eliminate differences in test performance between European Americans and African Americans. Sackett et al. argued that, in Steele and Aronson's (1995) experiments where stereotype threat was mitigated, an achievement gap of approximately one standard deviation remained between the groups, which is very close in size to that routinely reported between African American and European Americans' average scores on large-scale standardized tests such as the SAT. In subsequent correspondence between Sackett et al. and Steele and Aronson, Sackett et al. wrote that "They [Steele and Aronson] agree that it is a misinterpretation of the Steele and Aronson (1995) results to conclude that eliminating stereotype threat eliminates the African American-White test-score gap." However, in that same correspondence, Steele and Aronson point out that "it is the stereotype threat conditions, and not the no-threat conditions, that produce group differences most like those of real-life testing."

In a 2009 meta-analysis, Gregory M. Walton and Steven J. Spencer argued that studies of stereotype threat may in fact systematically under-represent its effects, since such studies measure "only that portion of psychological threat that research has identified and remedied. To the extent that unidentified or unremedied psychological threats further undermine performance, the results underestimate the bias." Despite these limitations, they found that efforts to mitigate stereotype threat significantly reduced group differences on high-stakes tests.

In 1998, Arthur R. Jensen criticized stereotype threat theory on the basis that it invokes an additional mechanism to explain effects which could be, according to him, explained by other, at the time better known and more established theories, such as test anxiety and especially the Yerkes–Dodson law. In Jensen's view, the effects which are attributed to stereotype threat may simply reflect "the interaction of ability level with test anxiety as a function of test complexity". However, a subsequent study by Johannes Keller specifically controlled for Jensen's hypothesis and still found significant stereotype threat effects.

Gijsbert Stoet and David C. Geary reviewed the evidence for the stereotype threat explanation of the achievement gap in mathematics between men and women. They concluded that the relevant research has many methodological problems, such as the lack of a control group, and that some literature on the topic misrepresents stereotype threat as better established than it is. Still, they did find evidence for a marginally significant (d = 0.17) effect of stereotype threat.

In an article published in Psychology Today in 2015, psychologist Lee Jussim pointed out that, in their original 1995 study, Steele and Aronson controlled for prior SAT scores using analysis of covariance, which caused the difference between black and white students' test scores in the "non-diagnostic" test group to nearly disappear. Jussim argued that, using the same technique to control for prior temperatures, he could make Nome, Alaska and Tampa, Florida appear to have nearly the same average temperature. However, as Steele and Aronson point out, the larger literature beyond their 1995 paper "shows the effect of stereotype threat on an array of tests – SATs, IQ tests, and French language tests to list only a few – sometimes with a co-variance adjustment, but many times without."

Publication bias

A meta-analysis by Flore and Wicherts (2015) concluded that the average reported effect of stereotype threat is small, but also that the field may be inflated by publication bias. They argued that, correcting for this, the most likely true effect size is near zero.
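
One standard diagnostic in such meta-analyses is Egger's regression for funnel-plot asymmetry, which regresses each study's standardized effect on its precision; an intercept far from zero suggests small-study (publication) bias. The sketch below is illustrative only, with toy effect sizes, and is not necessarily the exact method Flore and Wicherts applied:

    import numpy as np
    import statsmodels.api as sm

    effects = np.array([-0.40, -0.35, -0.10, -0.05, 0.02])  # toy effect sizes
    ses     = np.array([ 0.30,  0.25,  0.12,  0.10, 0.08])  # standard errors

    # Egger's test: regress standardized effect on precision; a nonzero
    # intercept indicates funnel-plot asymmetry (possible publication bias).
    X = sm.add_constant(1.0 / ses)
    fit = sm.OLS(effects / ses, X).fit()
    print(fit.params[0], fit.pvalues[0])  # intercept and its p-value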

Some researchers finding null results have raised similar concerns. For instance, Ganley et al. (2013) examined stereotype threat in a well-powered (total N ~ 1000) multi-experiment study and found "no evidence that the mathematics performance of school-age girls was impacted by stereotype threat". Noting that large, well-controlled studies have tended to find smaller or non-significant effects, the authors argued that evidence for stereotype threat in children may reflect publication bias: among the many underpowered studies run, researchers may have selectively published those in which false-positive effects reached significance.

However, a more recent meta-analysis by Liu et al. (2020) challenges conclusions such as those of Flore and Wicherts, arguing that while publication bias may inflate the apparent effectiveness of stereotype threat interventions, the level of bias found is insufficient to overturn the consensus that such interventions are associated with performance benefits. The authors broke the studies they analyzed into three types (belief-based, identity-based, and resilience-based), finding greater evidence for publication bias in the last type and more robust evidence for the effectiveness of intervention in the first two.

Operator (computer programming)

From Wikipedia, the free encyclopedia https://en.wikipedia.org/wiki/Operator_(computer_programmin...