
Sunday, September 5, 2021

Selective exposure theory

From Wikipedia, the free encyclopedia

Selective exposure is a theory within the practice of psychology, often used in media and communication research, that historically refers to individuals' tendency to favor information which reinforces their pre-existing views while avoiding contradictory information. Selective exposure has also been known and defined as "congeniality bias" or "confirmation bias" in various texts throughout the years.

According to the historical use of the term, people tend to select specific aspects of exposed information which they incorporate into their mindset. These selections are made based on their perspectives, beliefs, attitudes, and decisions. People can mentally dissect the information they are exposed to and select favorable evidence, while ignoring the unfavorable. The foundation of this theory is rooted in the cognitive dissonance theory (Festinger 1957), which asserts that when individuals are confronted with contrasting ideas, certain mental defense mechanisms are activated to produce harmony between new ideas and pre-existing beliefs, which results in cognitive equilibrium. Cognitive equilibrium, which is defined as a state of balance between a person's mental representation of the world and his or her environment, is crucial to understanding selective exposure theory. According to Jean Piaget, when a mismatch occurs, people find it to be "inherently dissatisfying".

Selective exposure relies on the assumption that individuals continue to seek out information on an issue even after they have taken a stance on it. The position a person has taken will be colored by the various aspects of that issue that are reinforced during the decision-making process. According to Stroud (2008), theoretically, selective exposure occurs when people's beliefs guide their media selections.

Selective exposure has been displayed in various contexts such as self-serving situations and situations in which people hold prejudices regarding outgroups, particular opinions, and personal and group-related issues. Perceived usefulness of information, perceived norm of fairness, and curiosity of valuable information are three factors that can counteract selective exposure.

Effect on decision-making

Individual versus group decision-making

This image, which can be seen as a young woman or an older woman, serves as an example of how individuals can choose to perceive the same image differently. According to selective exposure theory, people tend to seek out the version of a stimulus that they want to be exposed to, such as a form of the stimulus that they are already familiar with.

Selective exposure can often affect the decisions people make as individuals or as groups, because they may be unwilling to change their views and beliefs either collectively or on their own, despite conflicting and reliable information. For example, in the 2020 election, information was available indicating that then-presidential candidate Joe Biden's son Hunter was under investigation by the FBI. However, many media outlets would not report the story, or called it Russian disinformation, because it conflicted with their preferred view of candidate Biden. Another example: despite the United States judiciary's determination, in multiple cases, that there was a lack of evidence supporting the purported massive election fraud alleged by attorneys and others associated with President Trump's campaign that would have materially changed the outcome of the election, a large segment of the population believes otherwise. A further example of the effects of selective exposure is the series of events leading up to the Bay of Pigs Invasion in 1961. President John F. Kennedy was given the go-ahead by his advisers to authorize the invasion of Cuba by poorly trained expatriates despite overwhelming evidence that it was a foolish and ill-conceived tactical maneuver. The advisers were so eager to please the President that they confirmed their cognitive bias for the invasion rather than challenging the faulty plan.

Fear of changing beliefs about one's self, about other people, and about the world are three variables that help explain why people resist new information. A variety of studies have shown that selective exposure effects can occur in the context of both individual and group decision making. Numerous situational variables have been identified that increase the tendency toward selective exposure. Social psychology, in particular, includes research on a variety of situational factors and related psychological processes that eventually persuade a person to make a quality decision. Additionally, from a psychological perspective, the effects of selective exposure can stem from both motivational and cognitive accounts.

Effect of information quantity

According to a research study by Fischer, Schulz-Hardt, et al. (2008), the quantity of decision-relevant information that participants were exposed to had a significant effect on their levels of selective exposure. A group given only two pieces of decision-relevant information experienced lower levels of selective exposure than a group given ten pieces of information to evaluate. This research brought more attention to the cognitive processes of individuals when they are presented with a very small amount of decision-consistent and decision-inconsistent information. The study showed that in such situations, individuals become more doubtful of their initial decision due to the scarcity of resources. They begin to think that there is not enough data or evidence in the particular field in which they are asked to make a decision. Because of this, subjects become more critical of their initial thought process and focus on both decision-consistent and decision-inconsistent sources, thus decreasing their level of selective exposure. For the group that had plentiful information, this abundance made them confident in their initial decision because they felt reassured that their decision topic was well supported by a large number of resources. Therefore, the availability of decision-relevant and irrelevant information surrounding individuals can influence the level of selective exposure experienced during the process of decision-making.

Selective exposure is prevalent within individuals and groups of people alike and can lead either to reject new ideas or information that is not commensurate with their original ideals. In Jonas et al. (2001), empirical studies were conducted across four different experiments investigating individuals' and groups' decision making. The article suggests that confirmation bias is prevalent in decision making. Those who encounter new information often draw their attention toward areas where they hold a personal attachment. Thus, people are driven toward pieces of information that are coherent with their own expectations or beliefs, as a result of selective exposure operating in action. Across all four experiments, the generalization held and confirmation bias was present when participants sought new information and made decisions.

Accuracy motivation and defense motivation

Fischer and Greitemeyer (2010) explored individuals' decision making in terms of selective exposure to confirmatory information. Selective exposure holds that individuals make their decisions based on information that is consistent with their decision rather than on information that is inconsistent with it. Recent research has argued that a "confirmatory information search" was responsible for the 2008 bankruptcy of the Lehman Brothers investment bank, which then triggered the global financial crisis. In the zeal for profit and economic gain, politicians, investors, and financial advisors ignored the mathematical evidence that foretold the housing market crash in favor of flimsy justifications for upholding the status quo. Researchers explain that subjects have a tendency to seek and select information using their integrative model. There are two primary motivations for selective exposure: accuracy motivation and defense motivation. Accuracy motivation explains that an individual is motivated to be accurate in their decision making, and defense motivation explains that one seeks confirmatory information to support their beliefs and justify their decisions. Accuracy motivation is not always beneficial within the context of selective exposure and can instead be counterintuitive, increasing the amount of selective exposure. Defense motivation, conversely, can lead to reduced levels of selective exposure.

Personal attributes

Selective exposure avoids information inconsistent with one's beliefs and attitudes. For example, former Vice President Dick Cheney would only enter a hotel room after the television was turned on and tuned to a conservative television channel. When analyzing a person's decision-making skills, his or her unique process of gathering relevant information is not the only factor taken into account. Fischer et al. (2010) found it important to consider the information source itself, that is, the person who provided the information. Selective exposure research generally neglects the influence of indirect decision-related attributes, such as physical appearance. In Fischer et al. (2010), two studies hypothesized that physically attractive information sources led decision makers to be more selective in searching and reviewing decision-relevant information. Researchers explored the impact of social information and its level of physical attractiveness. The data were then analyzed and used to support the idea that selective exposure existed for those who needed to make a decision. The more attractive an information source was, the more positive and detailed the subject was in making the decision. Physical attractiveness affects an individual's decision because the perception of quality improves. Physically attractive information sources increased the quality of consistent information needed to make decisions and further increased the selective exposure in decision-relevant information, supporting the researchers' hypothesis. Both studies concluded that the effect of attractiveness is driven by a different selection and evaluation of decision-consistent information. Decision makers allow factors such as physical attractiveness to affect everyday decisions through the workings of selective exposure.

In another study, selective exposure was defined by the amount of individual confidence. Individuals can control the amount of selective exposure depending on whether they have low or high self-esteem. Individuals who maintain higher confidence levels reduce the amount of selective exposure. Albarracín and Mitchell (2004) hypothesized that those who displayed higher confidence levels were more willing to seek out information both consistent and inconsistent with their views. The phrase "decision-consistent information" describes the tendency to actively seek decision-relevant information. Selective exposure occurs when individuals search for information and show systematic preferences toward ideas that are consistent, rather than inconsistent, with their beliefs. Conversely, those who exhibited low levels of confidence were more inclined to examine information that did not agree with their views. The researchers found that in three out of five studies participants showed more confidence and scored higher on the Defensive Confidence Scale, which serves as evidence that their hypothesis was correct.

Bozo et al. (2009) investigated death anxiety across various age groups in relation to health-promoting behaviors. The researchers analyzed the data using terror management theory and found that age had no direct effect on specific behaviors. They had expected that a fear of death would yield health-promoting behaviors in young adults. When individuals are reminded of their own death, it causes stress and anxiety, but eventually leads to positive changes in their health behaviors. Their conclusions showed that older adults were consistently better at promoting and practicing good health behaviors, without thinking about death, compared to young adults. Young adults were less motivated to change and practice health-promoting behaviors because they used selective exposure to confirm their prior beliefs. Selective exposure thus creates barriers between behaviors at different ages, but there is no specific age at which people change their behaviors.

Though physical appearance will impact one's personal decision regarding an idea presented, a study conducted by Van Dillen, Papies, and Hofmann (2013) suggests a way to decrease the influence of personal attributes and selective exposure on decision-making. The results from this study showed that people do pay more attention to physically attractive or tempting stimuli; however, this phenomenon can be decreased through increasing the "cognitive load." In this study, increasing cognitive activity led to a decreased impact of physical appearance and selective exposure on the individual's impression of the idea presented. This is explained by acknowledging that we are instinctively drawn to certain physical attributes, but if the required resources for this attraction are otherwise engaged at the time, then we might not notice these attributes to an equal extent. For example, if a person is simultaneously engaging in a mentally challenging activity during the time of exposure, then it is likely that less attention will be paid to appearance, which leads to a decreased impact of selective exposure on decision-making.

Theories accounting for selective exposure

Festinger's groundbreaking study on cognitive dissonance is the foundation for Modern Selective Exposure Theory.

Cognitive dissonance theory

Leon Festinger is widely considered the father of modern social psychology, as important a figure to that field as Freud was to clinical psychology and Piaget was to developmental psychology. He was considered one of the most significant social psychologists of the 20th century. His work demonstrated that it is possible to use the scientific method to investigate complex and significant social phenomena without reducing them to the mechanistic connections between stimulus and response that were the basis of behaviorism. Festinger proposed the groundbreaking theory of cognitive dissonance that has become the foundation of selective exposure theory today, despite the fact that Festinger was considered an "avant-garde" psychologist when he first proposed it in 1957. In an ironic twist, Festinger realized that he himself was a victim of the effects of selective exposure. He was a heavy smoker his entire life, and when he was diagnosed with terminal cancer in 1989, he was said to have joked, "Make sure that everyone knows that it wasn't lung cancer!"

Cognitive dissonance theory explains that when a person either consciously or unconsciously realizes that they hold conflicting attitudes, thoughts, or beliefs, they experience mental discomfort. Because of this, an individual will avoid such conflicting information in the future, since it produces this discomfort, and will gravitate toward messages sympathetic to their own previously held conceptions. Decision makers are unable to evaluate information quality independently on their own (Fischer, Jonas, Dieter & Kastenmüller, 2008). When there is a conflict between pre-existing views and information encountered, individuals experience an unpleasant and self-threatening state of aversive arousal, which motivates them to reduce it through selective exposure. They begin to prefer information that supports their original decision and neglect conflicting information. Individuals then seek out confirmatory information to defend their positions and reach the goal of dissonance reduction. Cognitive dissonance theory holds that dissonance is a psychological state of tension that people are motivated to reduce (Festinger 1957). Dissonance causes feelings of unhappiness, discomfort, or distress. Festinger (1957, p. 13) asserted the following: "These two elements are in a dissonant relation if, considering these two alone, the obverse of one element would follow from the other." To reduce dissonance, people add consonant cognitions or change evaluations of one or both conditions in order to make them more consistent mentally. This experience of psychological discomfort was found to drive individuals to avoid counterattitudinal information as a dissonance-reduction strategy.

In Festinger's theory, there are two basic hypotheses:

1) The existence of dissonance, being psychologically uncomfortable, will motivate the person to try to reduce the dissonance and achieve consonance.

2) When dissonance is present, in addition to trying to reduce it, the person will actively avoid situations and information which would likely increase the dissonance (Festinger 1957, p. 3).

The theory of cognitive dissonance was developed in the mid-1950s to explain why people of strong convictions are so resistant to changing their beliefs even in the face of undeniable contradictory evidence. Dissonance occurs when people feel an attachment to and responsibility for a decision, position, or behavior. It increases the motivation to justify their positions through selective exposure to confirmatory information (Fischer, 2011). Fischer suggested that people have an inner need to ensure that their beliefs and behaviors are consistent. In an experiment that employed commitment manipulations, commitment affected perceived decision certainty. Participants were free to choose attitude-consistent and attitude-inconsistent information with which to write an essay. Those who wrote an attitude-consistent essay showed higher levels of confirmatory information search (Fischer, 2011). The level and magnitude of dissonance also play a role. Selective exposure to consistent information is likely under certain levels of dissonance. At high levels, a person is expected to seek out information that increases dissonance, because the best strategy to reduce dissonance would then be to alter one's attitude or decision (Smith et al., 2008).

Subsequent research on selective exposure within the dissonance theory produced weak empirical support until the dissonance theory was revised and new methods, more conducive to measuring selective exposure, were implemented. To date, scholars still argue that empirical results supporting the selective exposure hypothesis are still mixed. This is possibly due to the problems with the methods of the experimental studies conducted. Another possible reason for the mixed results may be the failure to simulate an authentic media environment in the experiments.

According to Festinger, the motivation to seek or avoid information depends on the magnitude of dissonance experienced (Smith et al., 2008). There is a tendency for people to seek new information, or to select information, that supports their beliefs in order to reduce dissonance. Three possibilities affect the extent of dissonance (Festinger 1957, pp. 127–131):

  • Relative absence of dissonance.

When little or no dissonance exists, there is little or no motivation to seek new information. For example, in the absence of dissonance, the motivation to attend or avoid a lecture on 'The Advantages of Automobiles with Very High Horsepower Engines' will be independent of whether the car a new owner has recently purchased has a high- or low-horsepower engine. However, it is important to distinguish a situation in which there is no dissonance from one in which the information simply has no relevance to present or future behavior. In the latter case, accidental exposure, which the new car owner does not avoid, will not introduce any dissonance; in the former case, where the owner likewise does not avoid the information, dissonance may be accidentally introduced.

  • The presence of moderate amounts of dissonance.

The existence of dissonance and consequent pressure to reduce it will lead to an active search of information, which will then lead people to avoid information that will increase dissonance. However, when faced with a potential source of information, there will be an ambiguous cognition to which a subject will react in terms of individual expectations about it. If the subject expects the cognition to increase dissonance, they will avoid it. In the event that one's expectations are proven wrong, the attempt at dissonance reduction may result in increasing it instead. It may in turn lead to a situation of active avoidance.

  • The presence of extremely large amounts of dissonance.

If two cognitive elements exist in a dissonant relationship, the magnitude of dissonance matches the resistance to change. If the dissonance becomes greater than the resistance to change, then the least resistant elements of cognition will be changed, reducing dissonance. When dissonance is close to the maximum limit, one may actively seek out and expose oneself to dissonance-increasing information. If an individual can increase dissonance to the point where it is greater than the resistance to change, he will change the cognitive elements involved, reducing or even eliminating dissonance. Once dissonance is increased sufficiently, an individual may bring himself to change, hence eliminating all dissonance (Festinger 1957, pp. 127–131).

The reduction in cognitive dissonance following a decision can be achieved by selectively looking for decision-consonant information and avoiding contradictory information. The objective is to reduce the discrepancy between the cognitions, but the specification of which strategy will be chosen is not explicitly addressed by the dissonance theory. It will be dependent on the quantity and quality of the information available inside and outside the cognitive system.

Klapper's selective exposure

In the early 1960s, Columbia University researcher Joseph T. Klapper asserted in his book The Effects of Mass Communication that audiences were not passive targets of political and commercial propaganda from mass media, but that mass media instead reinforces previously held convictions. Throughout the book, he argued that the media have only a small amount of power to influence people and, most of the time, merely reinforce our pre-existing attitudes and beliefs. He argued that the media effects of relaying or spreading new public messages or ideas were minimal because there is a wide variety of ways in which individuals filter such content. Due to this tendency, Klapper argued that media content must be able to ignite some type of cognitive activity in an individual in order to communicate its message. Prior to Klapper's research, the prevailing opinion was that mass media had a substantial power to sway individual opinion and that audiences were passive consumers of prevailing media propaganda. However, by the time of the release of The Effects of Mass Communication, many studies had led to the conclusion that many specifically targeted messages were completely ineffective. Klapper's research showed that individuals gravitated toward media messages that bolstered previously held convictions set by peer groups, societal influences, and family structures, and that acceptance of these messages did not change over time when individuals were presented with more recent media influence. Klapper noted from his review of research in the social sciences that, given the abundance of content within the mass media, audiences were selective about the types of programming they consumed. Adults would patronize media appropriate to their demographics, and children would eschew media that bored them. So individuals would either accept or reject a mass media message based upon internal filters innate to that person.

The following are Klapper's five mediating factors and conditions to affect people:

  • Predispositions and the related processes of selective exposure, selective perception, and selective retention.
  • The groups, and the norms of groups, to which the audience members belong.
  • Interpersonal dissemination of the content of communication.
  • The exercise of opinion leadership.
  • The nature of mass media in a free enterprise society.

Three basic concepts:

  • Selective exposure – people tend to keep away from communication that runs counter to their existing views.
  • Selective perception – when people confront unsympathetic material, they tend not to perceive it, or they reinterpret it to fit their existing opinions.
  • Selective retention – the process of categorizing and interpreting information in a way that favors one category or interpretation over another; people tend simply to forget the unsympathetic material.

Groups and group norms work as mediators. For example, a person may be strongly disinclined to switch to the Democratic Party if their family has voted Republican for a long time. In this case, the person's predisposition toward the political party is already set, so they neither perceive information about the Democratic Party nor change their voting behavior because of mass communication. Klapper's third mediating factor is the interpersonal dissemination of mass communication. If someone has already been exposed to a view through close friends, which creates a predisposition toward it, this will lead to increased exposure to related mass communication and eventually reinforce the existing opinion. An opinion leader is also a crucial factor in forming one's predispositions and can lead someone to be exposed to mass communication. The nature of commercial mass media also leads people to select certain types of media content.

Cognitive economy model

This newer model combines the motivational and cognitive processes of selective exposure. In the past, selective exposure had been studied from a motivational standpoint. For instance, one explanation for selective exposure was that people felt motivated to decrease the level of dissonance they felt when encountering inconsistent information. They also felt motivated to defend their decisions and positions, and they achieved this goal by exposing themselves to consistent information only. The cognitive economy model not only takes these motivational aspects into account but also focuses on the cognitive processes of each individual. For instance, this model proposes that people cannot evaluate the quality of inconsistent information objectively and fairly because they tend to store more of the consistent information and use it as their reference point. Thus, inconsistent information is often viewed with a more critical eye than consistent information. According to this model, the level of selective exposure experienced during the decision-making process also depends on how much cognitive energy people are willing to invest. Just as people tend to be careful with their finances, they are careful with cognitive energy, that is, with how much time they are willing to spend evaluating all the evidence for their decisions; they are hesitant to use this energy and try not to waste it. Thus, this model suggests that selective exposure does not happen in separate stages. Rather, it is a combined process of individuals' motivations and their management of cognitive energy.

Implications

Media

Individuals tailor their media choices to avoid cognitive dissonance and mental incongruity.

Recent studies have provided empirical evidence for the pervasive influence of selective exposure on the population at large through mass media. Researchers have found that individual media consumers will seek out programs to suit their individual emotional and cognitive needs. For example, in the 2020 election, information was available indicating that then-presidential candidate Joe Biden's son Hunter was under investigation by the FBI. However, many media outlets would not report the story, or called it Russian disinformation, and many media consumers ignored it when presented with it, because it conflicted with their preferred view of candidate Biden. Individuals will seek out palliative forms of media during times of economic crisis to fulfill a "strong surveillance need", to decrease chronic dissatisfaction with life circumstances, and to fulfill needs for companionship. Consumers tend to select media content that exposes and confirms their own ideas while avoiding information that argues against their opinion. A study conducted in 2012 showed that this type of selective exposure affects pornography consumption as well. Individuals with low levels of life satisfaction are more likely to have casual sex after consumption of pornography that is congruent with their attitudes, while disregarding content that challenges their inherently permissive 'no strings attached' attitudes.

Music selection is also affected by selective exposure. A 2014 study conducted by Christa L. Taylor and Ronald S. Friedman at the SUNY University at Albany found that mood congruence was affected by self-regulation of music mood choices. Subjects in the study chose happy music when feeling angry or neutral but listened to sad music when they themselves were sad. The choice of sad music in a sad mood was due less to mood-mirroring than to subjects' aversion to listening to happy music that was cognitively dissonant with their mood.

Politics is particularly likely to inspire ongoing selective exposure among consumers, as opposed to single-exposure decisions. For example, in their 2009 meta-analysis of selective exposure theory, Hart et al. reported that "A 2004 survey by The Pew Research Center for the People & the Press (2006) found that Republicans are about 1.5 times more likely to report watching Fox News regularly than are Democrats (34% for Republicans and 20% of Democrats). In contrast, Democrats are 1.5 times more likely to report watching CNN regularly than Republicans (28% of Democrats vs. 19% of Republicans). Even more striking, Republicans are approximately five times more likely than Democrats to report watching "The O'Reilly Factor" regularly and are seven times more likely to report listening to "Rush Limbaugh" regularly." As a result, when the opinions of Republicans who only tune into conservative media outlets were compared to those of their fellow conservatives in a study by Stroud (2010), their beliefs were found to be more polarized. The same result was obtained for liberals. Due to this tendency toward selective exposure, current political campaigns have been characterized as extremely partisan and polarized. As Bennett and Iyengar (2008) commented, "The new, more diversified information environment makes it not only more feasible for consumers to seek out news they might find agreeable but also provides a strong economic incentive for news organizations to cater to their viewers' political preferences." Selective exposure thus plays a role in shaping and reinforcing individuals' political attitudes. In the context of these findings, Stroud (2008) comments, "The findings presented here should at least raise the eyebrows of those concerned with the noncommercial role of the press in our democratic system, with its role in providing the public with the tools to be good citizens." The role of public broadcasting, through its noncommercial character, is to counterbalance media outlets that deliberately devote their coverage to one political direction, thereby driving selective exposure and political division in a democracy.

Many academic studies on selective exposure, however, are based on the electoral system and media system of the United States. Countries with strong public service broadcasting, such as many European countries, show less selective exposure based on political ideology or political party. In Sweden, for instance, there were no differences in selective exposure to public service news between the political left and right over a period of 30 years.

Television is the most pervasive conduit of selective exposure in modern society.

In early research, selective exposure originally provided an explanation for limited media effects. The "limited effects" model of communication emerged in the 1940s with a shift in the media-effects paradigm. This shift suggested that while the media have effects on consumers' behavior, such as their voting behavior, these effects are limited and influenced indirectly by interpersonal discussions and the influence of opinion leaders. Selective exposure was considered one necessary component in the early studies of the media's limited power over citizens' attitudes and behaviors. Political ads also involve selective exposure, because people are more likely to favor a politician who agrees with their own beliefs. Another significant effect of selective exposure comes from Stroud (2010), who analyzed the relationship between partisan selective exposure and political polarization. Using data from the 2004 National Annenberg Election Survey, the analysts found that over time partisan selective exposure leads to polarization. This process is plausible because people can easily create, or gain access to, blogs, websites, chats, and online forums where those with similar views and political ideologies can congregate. Much of the research has also shown that political interaction online tends to be polarized. Further evidence for this polarization in the political blogosphere can be found in Lawrence et al.'s (2010) study of blog readership, which found that people tend to read blogs that reinforce rather than challenge their political beliefs. According to Cass Sunstein's book Republic.com, the presence of selective exposure on the web creates an environment that breeds political polarization and extremism. Due to easy access to social media and other online resources, people are "likely to hold even stronger views than the ones they started with, and when these views are problematic, they are likely to manifest increasing hatred toward those espousing contrary beliefs." This illustrates how selective exposure can influence an individual's political beliefs and subsequently their participation in the political system.

One of the major academic debates on the concept of selective exposure is whether selective exposure contributes to people's exposure to diverse viewpoints or to polarization. Scheufele and Nisbet (2012) discuss the effects of encountering disagreement on democratic citizenship. Ideally, true civil deliberation among citizens would be the rational exchange of non-like-minded views (or disagreement). However, many of us tend to avoid disagreement on a regular basis because we do not like to confront others who hold views strongly opposed to our own. In this sense, the authors question whether exposure to non-like-minded information has positive or negative effects on democratic citizenship. While there are mixed findings on people's willingness to participate in political processes when they encounter disagreement, the authors argue that the issue of selectivity needs to be examined further in order to understand whether a truly deliberative discourse is possible in the online media environment.


Filter bubble

From Wikipedia, the free encyclopedia

The term filter bubble was coined by internet activist Eli Pariser, circa 2010.

A filter bubble or ideological frame is a state of intellectual isolation that can result from personalized searches, when a website algorithm selectively guesses what information a user would like to see based on information about the user, such as location, past click behavior, and search history. As a result, users become separated from information that disagrees with their viewpoints, effectively isolating them in their own cultural or ideological bubbles. The choices made by these algorithms are not transparent. Prime examples include Google Personalized Search results and Facebook's personalized news stream. The bubble effect may have negative implications for civic discourse, according to Pariser, but contrasting views regard the effect as minimal and addressable. The results of the 2016 U.S. presidential election have been associated with the influence of social media platforms such as Twitter and Facebook, which called the effects of the "filter bubble" phenomenon on user exposure to fake news and echo chambers into question and spurred new interest in the term. Many are concerned that the phenomenon may harm democracy and well-being by making the effects of misinformation worse.

(Technology such as social media) “lets you go off with like-minded people, so you're not mixing and sharing and understanding other points of view ... It's super important. It's turned out to be more of a problem than I, or many others, would have expected.”

— Bill Gates, 2017, in Quartz

Concept

According to Pariser, social media platforms, seeking to please users, can steer toward them the information they guess those users will like hearing, but in doing so inadvertently isolate users within their own filter bubbles.

The term filter bubble was coined by internet activist Eli Pariser circa 2010 and discussed in his 2011 book of the same name; according to Pariser, users get less exposure to conflicting viewpoints and are isolated intellectually in their own informational bubble. He related an example in which one user searched Google for "BP" and got investment news about British Petroleum, while another searcher got information about the Deepwater Horizon oil spill, and noted that the two search results pages were "strikingly different".

Pariser defined his concept of a filter bubble in more formal terms as "that personal ecosystem of information that's been catered by these algorithms". An internet user's past browsing and search history is built up over time when they indicate interest in topics by "clicking links, viewing friends, putting movies in [their] queue, reading news stories", and so forth. An internet firm then uses this information to target advertising to the user, or make certain types of information appear more prominently in search results pages.

This filtering is not random; it operates as a three-step process, per Pariser, who states, "First, you figure out who people are and what they like. Then, you provide them with content and services that best fit them. Finally, you tune to get the fit just right. Your identity shapes your media." Pariser also reports:

According to one Wall Street Journal study, the top fifty Internet sites, from CNN to Yahoo to MSN, install an average of 64 data-laden cookies and personal tracking beacons each. Search for a word like "depression" on Dictionary.com, and the site installs up to 223 tracking cookies and beacons on your computer so that other Web sites can target you with antidepressants. Share an article about cooking on ABC News, and you may be chased around the Web by ads for Teflon-coated pots. Open—even for an instant—a page listing signs that your spouse may be cheating and prepare to be haunted with DNA paternity-test ads.
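Pariser's three-step description above (figure out who people are, fit content to them, then tune the fit) can be illustrated with a minimal relevance-feedback sketch. This is not any platform's actual algorithm; the item catalogue, topics, and click behavior below are invented for the example.

```python
from collections import Counter

# Hypothetical item catalogue; topics and IDs are invented for this example.
ITEMS = [
    {"id": 1, "topic": "politics-left"},
    {"id": 2, "topic": "politics-right"},
    {"id": 3, "topic": "sports"},
    {"id": 4, "topic": "politics-left"},
    {"id": 5, "topic": "travel"},
]

def rank_items(click_history, items):
    """Steps 1-2: infer who the user is from past clicks, then fit content to them."""
    interest = Counter(click_history)              # topic -> number of past clicks
    total = sum(interest.values()) or 1
    return sorted(items, key=lambda item: interest[item["topic"]] / total, reverse=True)

def feedback_loop(click_history, items, rounds=3):
    """Step 3: tune the fit; each click further narrows the next ranking."""
    for _ in range(rounds):
        top = rank_items(click_history, items)[0]  # assume the user clicks the top result
        click_history.append(top["topic"])         # the profile drifts toward that topic
    return rank_items(click_history, items)

# Starting from a single 'politics-left' click, the ranking quickly converges
# on more of the same topic.
print(feedback_loop(["politics-left"], ITEMS))
```

Because each round reinforces whatever topics were already clicked, unfamiliar topics sink in the ranking; that self-narrowing dynamic is what the filter bubble concept describes.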

Analyses of link-click data from site traffic measurements indicate that filter bubbles can be collective or individual.

As of 2011, one engineer had told Pariser that Google looked at 57 different pieces of data to personally tailor a user's search results, including non-cookie data such as the type of computer being used and the user's physical location.

Other terms have been used to describe this phenomenon, including "ideological frames" and "the figurative sphere surrounding you as you search the internet". A related term, "echo chamber", was originally applied to news media, but is now applied to social media as well.

Pariser's idea of the filter bubble was popularized after the TED talk he gave in May 2011, in which he gives examples of how filter bubbles work and where they can be seen. In a test seeking to demonstrate the filter bubble effect, Pariser asked several friends to search for the word "Egypt" on Google and send him the results. Comparing two of the friends' first pages of results, while there was overlap between them on topics like news and travel, one friend's results prominently included links to information on the then-ongoing Egyptian revolution of 2011, while the other friend's first page of results did not include such links.

In The Filter Bubble, Pariser warns that a potential downside to filtered searching is that it "closes us off to new ideas, subjects, and important information", and "creates the impression that our narrow self-interest is all that exists". In his view, filter bubbles are potentially harmful to both individuals and society. He criticized Google and Facebook for offering users "too much candy, and not enough carrots". He warned that "invisible algorithmic editing of the web" may limit our exposure to new information and narrow our outlook. According to Pariser, the detrimental effects of filter bubbles include harm to the general society in the sense that they have the possibility of "undermining civic discourse" and making people more vulnerable to "propaganda and manipulation". He wrote:

A world constructed from the familiar is a world in which there's nothing to learn ... (since there is) invisible autopropaganda, indoctrinating us with our own ideas.

— Eli Pariser in The Economist, 2011

Many people are unaware that filter bubbles even exist. This can be seen in an article in The Guardian, which mentioned that "more than 60% of Facebook users are entirely unaware of any curation on Facebook at all, believing instead that every single story from their friends and followed pages appeared in their news feed." In brief, Facebook decides what goes on a user's news feed through an algorithm that takes into account "how you have interacted with similar posts in the past."

A filter bubble has been described as exacerbating a phenomenon that has been called splinternet or cyberbalkanization, which happens when the internet becomes divided up into sub-groups of like-minded people who become insulated within their own online community and fail to get exposure to different views. This concern dates back to the early days of the publicly accessible internet, with the term "cyberbalkanization" being coined in 1996.

Similar concepts

In news media, echo chamber is a metaphorical description of a situation in which beliefs are amplified or reinforced by communication and repetition inside a closed system. By visiting an "echo chamber", people are able to seek out information which reinforces their existing views, potentially as an unconscious exercise of confirmation bias. This may increase political and social polarization and extremism. The term is a metaphor based on the acoustic echo chamber, where sounds reverberate in a hollow enclosure. Echo chambers reinforce an individual's beliefs without factual support; the individual is surrounded by others who acknowledge and follow the same viewpoints.

Barack Obama's farewell address identified a similar concept to filter bubbles as a "threat to [Americans'] democracy", i.e., the "retreat into our own bubbles, ...especially our social media feeds, surrounded by people who look like us and share the same political outlook and never challenge our assumptions... And increasingly we become so secure in our bubbles that we start accepting only information, whether it's true or not, that fits our opinions, instead of basing our opinions on the evidence that is out there."

Reactions and studies

Media reactions

There are conflicting reports about the extent to which personalized filtering is happening and whether such activity is beneficial or harmful. Analyst Jacob Weisberg, writing in June 2011 for Slate, did a small non-scientific experiment to test Pariser's theory, which involved five associates with different ideological backgrounds conducting a series of searches, "John Boehner", "Barney Frank", "Ryan plan", and "Obamacare", and sending Weisberg screenshots of their results. The results varied only in minor respects from person to person, and any differences did not appear to be ideology-related, leading Weisberg to conclude that a filter bubble was not in effect, and to write that the idea that most internet users were "feeding at the trough of a Daily Me" was overblown. Weisberg asked Google to comment, and a spokesperson stated that algorithms were in place to deliberately "limit personalization and promote variety". Book reviewer Paul Boutin did a similar experiment to Weisberg's among people with differing search histories, and again found that the different searchers received nearly identical search results. Interviewing programmers at Google off the record, journalist Per Grankvist found that user data used to play a bigger role in determining search results, but that Google, through testing, found the search query itself to be by far the best determinant of which results to display.

There are reports that Google and other sites maintain vast "dossiers" of information on their users which might enable them to further personalize individual internet experiences if they chose to do so. For instance, the technology exists for Google to keep track of users' past histories even if they don't have a personal Google account or are not logged into one. One report stated that Google had collected "10 years' worth" of information amassed from varying sources, such as Gmail, Google Maps, and other services besides its search engine, although a contrary report was that trying to personalize the internet for each user was technically challenging for an internet firm to achieve despite the huge amounts of available data. Analyst Doug Gross of CNN suggested that filtered searching seemed to be more helpful for consumers than for citizens, and would help a consumer looking for "pizza" find local delivery options based on a personalized search and appropriately filter out distant pizza stores. Organizations such as the Washington Post, The New York Times, and others have experimented with creating new personalized information services, with the aim of tailoring search results to those that users are likely to like or agree with.

Academia studies and reactions

In 'The Big Data Public and Its Problems', Tauel Harper suggests that the loss of the editorial subsidy actually produces a more homogenized and normalized public sphere than traditional print media. The process of salience selection, the law of large numbers and the power of pre-existing networks means that algorithmic selections tend to solidify norms and further marginalize difference in digital publics.

A scientific study from Wharton that analyzed personalized recommendations also found that these filters can actually create commonality, not fragmentation, in online music taste. Consumers reportedly use the filters to expand their taste rather than to limit it. Harvard law professor Jonathan Zittrain disputed the extent to which personalization filters distort Google search results, saying that "the effects of search personalization have been light". Further, Google provides the ability for users to shut off personalization features if they choose, by deleting Google's record of their search history and setting Google to not remember their search keywords and visited links in the future.

A study from Internet Policy Review addressed the lack of a clear and testable definition for filter bubbles across disciplines; this often results in researchers defining and studying filter bubbles in different ways. Subsequently, the study explained a lack of empirical data for the existence of filter bubbles across disciplines and suggested that the effects attributed to them may stem more from preexisting ideological biases than from algorithms. Similar views can be found in other academic projects which also address concerns with the definitions of filter bubbles and the relationships between ideological and technological factors associated with them. A critical review of filter bubbles suggested that "the filter bubble thesis often posits a special kind of political human who has opinions that are strong, but at the same time highly malleable" and that it is a "paradox that people have an active agency when they select content, but are passive receivers once they are exposed to the algorithmically curated content recommended to them".

A study by researchers from Oxford, Stanford, and Microsoft examined the browsing histories of 1.2 million U.S. users of the Bing Toolbar add-on for Internet Explorer between March and May 2013. They selected 50,000 of those users who were active consumers of news, then classified whether the news outlets they visited were left- or right-leaning, based on whether the majority of voters in the counties associated with user IP addresses voted for Obama or Romney in the 2012 presidential election. They then identified whether news stories were read after accessing the publisher's site directly, via the Google News aggregation service, via web searches, or via social media. The researchers found that while web searches and social media do contribute to ideological segregation, the vast majority of online news consumption consisted of users directly visiting left- or right-leaning mainstream news sites, and consequently being exposed almost exclusively to views from a single side of the political spectrum. Limitations of the study included selection issues such as Internet Explorer users skewing higher in age than the general internet population; Bing Toolbar usage and the voluntary (or unknowing) sharing of browsing history selecting for users who are less concerned about privacy; the assumption that all stories in left-leaning publications are left-leaning, and the same for right-leaning; and the possibility that users who are not active news consumers may get most of their news via social media, and thus experience stronger effects of social or algorithmic bias than those users who essentially self-select their bias through their choice of news publications (assuming they are aware of the publications' biases).
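The study's core measurement, how one-sided a user's news exposure is depending on the channel that brought them to a story, can be approximated with a short script. The visit log, outlet leanings, and channel labels below are invented placeholders rather than the study's data.

```python
from collections import defaultdict

# Hypothetical visit log: (referral_channel, leaning_of_outlet_visited)
visits = [
    ("direct", "left"), ("direct", "left"), ("direct", "left"), ("direct", "right"),
    ("search", "left"), ("search", "right"),
    ("social", "left"), ("social", "left"), ("social", "right"),
    ("aggregator", "right"),
]

def left_share_by_channel(visits):
    """For each referral channel, compute the share of visits to left-leaning outlets."""
    counts = defaultdict(lambda: {"left": 0, "right": 0})
    for channel, leaning in visits:
        counts[channel][leaning] += 1
    return {channel: c["left"] / (c["left"] + c["right"]) for channel, c in counts.items()}

# Shares near 0 or 1 indicate one-sided exposure through that channel;
# shares near 0.5 indicate a more balanced news diet.
print(left_share_by_channel(visits))
```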

A study by researchers from Princeton University and New York University aimed to study the impact of filter bubbles and algorithmic filtering on social media polarization. They used a mathematical model called the "stochastic block model" to test their hypothesis on the environments of Reddit and Twitter. The researchers gauged changes in polarization in regularized and non-regularized social media networks, specifically measuring the percent changes in polarization and disagreement on Reddit and Twitter. They found that polarization increased significantly, by 400%, in non-regularized networks, while in regularized networks polarization increased by 4% and disagreement by 5%.
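A stochastic block model is a random-graph model in which nodes are assigned to communities and edges are drawn with one probability inside a community and another between communities, which makes it a convenient testbed for polarization. The sketch below uses the networkx implementation; the block sizes, edge probabilities, and the within-block edge share used as a crude polarization proxy are illustrative choices, not the parameters of the Princeton/NYU study.

```python
import networkx as nx

# Two equal-sized communities: dense inside each block, sparse across blocks.
sizes = [50, 50]
probs = [[0.20, 0.01],
         [0.01, 0.20]]   # probs[i][j] = edge probability between blocks i and j

G = nx.stochastic_block_model(sizes, probs, seed=42)

# With the default node labels 0..99, the first `sizes[0]` nodes form block 0.
block_of = {}
node = 0
for block_index, size in enumerate(sizes):
    for _ in range(size):
        block_of[node] = block_index
        node += 1

# Crude polarization proxy: the fraction of edges that stay inside a block.
within = sum(1 for u, v in G.edges() if block_of[u] == block_of[v])
print(f"within-block edge share: {within / G.number_of_edges():.2f}")
```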

Platform studies

While algorithms do limit political diversity, some of the filter bubble is the result of user choice. A study by data scientists at Facebook found that for every four Facebook friends who share a user's ideology, users have one friend with contrasting views. No matter what Facebook's algorithm for its News Feed is, people are simply more likely to befriend or follow people who share similar beliefs. The nature of the algorithm is that it ranks stories based on a user's history, resulting in a reduction of the "politically cross-cutting content by 5 percent for conservatives and 8 percent for liberals". However, even when people are given the option to click on a link offering contrasting views, they still default to their most-viewed sources. "[U]ser choice decreases the likelihood of clicking on a cross-cutting link by 17 percent for conservatives and 6 percent for liberals." A cross-cutting link is one that introduces a point of view different from the user's presumed point of view, or from what the website has pegged as the user's beliefs.

A recent study by Levi Boxell, Matthew Gentzkow, and Jesse M. Shapiro suggests that online media is not the driving force of political polarization. The paper argues that polarization has been driven by the demographic groups that spend the least time online. The greatest ideological divide is experienced among Americans older than 75, of whom only 20% reported using social media as of 2012. In contrast, 80% of Americans aged 18–39 reported using social media as of 2012. The data suggest that the younger demographic was no more polarized in 2012 than it had been when online media barely existed in 1996. The study highlights differences between age groups and how news consumption remains polarized as people seek information that appeals to their preconceptions. Older Americans usually remain stagnant in their political views, as traditional media outlets continue to be their primary source of news, while online media is the leading source for the younger demographic. Although algorithms and filter bubbles weaken content diversity, this study suggests that political polarization trends are primarily driven by pre-existing views and a failure to recognize outside sources.

A 2020 study from Germany utilized the Big Five psychology model to test the effects of individual personality, demographics, and ideology on user news consumption. Basing their study on the notion that the number of news sources users consume affects their likelihood of being caught in a filter bubble, with higher media diversity lessening the chances, the researchers found that certain demographics (higher age and male) along with certain personality traits (high openness) correlate positively with the number of news sources individuals consume. The study also found a negative ideological association between media diversity and the degree to which users align with right-wing authoritarianism. Beyond offering different individual user factors that may influence the role of user choice, this study also raises questions about the association between the likelihood of users being caught in filter bubbles and user voting behavior.

The Facebook study found that it was "inconclusive" whether or not the algorithm played as big a role in filtering News Feeds as people assumed. The study also found that "individual choice", or confirmation bias, likewise affected what gets filtered out of News Feeds. Some social scientists criticized this conclusion, though, because the point of protesting the filter bubble is that the algorithms and individual choice work together to filter News Feeds. They also criticized Facebook's small sample size, which is about "9% of actual Facebook users", and the fact that the study results are "not reproducible" because the study was conducted by "Facebook scientists" who had access to data that Facebook does not make available to outside researchers.

Though the study found that only about 15–20% of the average user's Facebook friends subscribe to the opposite side of the political spectrum, Julia Kaman from Vox theorized that this could have potentially positive implications for viewpoint diversity. These "friends" are often acquaintances with whom we would not likely share our politics without the internet. Facebook may foster a unique environment where a user sees and possibly interacts with content posted or re-posted by these "second-tier" friends. The study found that "24 percent of the news items liberals saw were conservative-leaning and 38 percent of the news conservatives saw was liberal-leaning." "Liberals tend to be connected to fewer friends who share information from the other side, compared with their conservative counterparts." This interplay has the ability to provide diverse information and sources that could moderate users' views.

Similarly, a study of Twitter's filter bubbles by New York University concluded that "Individuals now have access to a wider span of viewpoints about news events, and most of this information is not coming through the traditional channels, but either directly from political actors or through their friends and relatives. Furthermore, the interactive nature of social media creates opportunities for individuals to discuss political events with their peers, including those with whom they have weak social ties". According to these studies, social media may be diversifying information and opinions users come into contact with, though there is much speculation around filter bubbles and their ability to create deeper political polarization.

One driver of the problem, and a possible solution, is the role of emotions in online content. A 2018 study shows that the emotions carried by messages can lead to polarization or convergence: joy is prevalent in emotional polarization, while sadness and fear play significant roles in emotional convergence. Since it is relatively easy to detect the emotional content of messages, these findings can help in designing more socially responsible algorithms that take the emotional content of recommendations into account.
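A minimal sketch of the kind of emotion-aware re-ranking such findings point toward is shown below; the keyword lists, weights, and function names are invented placeholders, not the study's method or any deployed algorithm.

```python
# Hypothetical sketch: damp recommendations whose emotional tone is
# associated with polarization (crudely detected joy) and favor tones
# associated with convergence (sadness, fear). Keyword lists and weights
# are invented for illustration only.
EMOTION_KEYWORDS = {
    "joy": {"great", "win", "celebrate"},
    "sadness": {"loss", "grief", "mourn"},
    "fear": {"threat", "danger", "warning"},
}
TONE_WEIGHT = {"joy": 0.7, "sadness": 1.1, "fear": 1.1, "neutral": 1.0}

def detect_emotion(text: str) -> str:
    # Return the first emotion whose keywords appear in the text.
    words = set(text.lower().split())
    for emotion, keywords in EMOTION_KEYWORDS.items():
        if words & keywords:
            return emotion
    return "neutral"

def rerank(messages):
    # messages: list of (text, base_relevance_score) pairs.
    return sorted(
        messages,
        key=lambda m: m[1] * TONE_WEIGHT[detect_emotion(m[0])],
        reverse=True,
    )

print(rerank([("A great win to celebrate", 0.9), ("A warning about a threat", 0.8)]))
```

Real systems would need a far more robust emotion classifier, but the sketch shows where the emotional signal could enter the ranking step.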

Visualization of the process and growth of two social media bots used in the 2019 Weibo study. The diagrams represent two aspects of the structure of filter bubbles, according to the study: large concentrations of users around single topics and a uni-directional, star-like structure that impacts key information flows.

Social bots have been used by different researchers to test polarization and related effects that are attributed to filter bubbles and echo chambers. A 2018 study used social bots on Twitter to test deliberate user exposure to partisan viewpoints. The study claimed it demonstrated partisan differences in responses to exposure to differing views, although it warned that the findings should be limited to party-registered American Twitter users. One of the main findings was that after exposure to differing views (provided by the bots), self-registered Republicans became more conservative, whereas self-registered liberals showed little ideological change, if any. A different study, from the People's Republic of China, used social bots on Weibo, the largest social media platform in China, to examine the structure of filter bubbles with regard to their effects on polarization. The study distinguishes two conceptions of polarization: one in which people with similar views form groups, share similar opinions, and block themselves off from differing viewpoints (opinion polarization), and one in which people do not access diverse content and sources of information (information polarization). By using social bots instead of human volunteers and focusing on information polarization rather than opinion polarization, the researchers concluded that there are two essential elements of a filter bubble: a large concentration of users around a single topic and a uni-directional, star-like structure that impacts key information flows.
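The two structural features named by the Weibo study can be illustrated with toy graph metrics; the following sketch uses invented data and simple ratios, and is not the study's actual methodology.

```python
# Toy metrics loosely inspired by the two structural features described
# above: (1) how concentrated users are around a single topic and
# (2) how star-like and one-directional the follow graph is.
# The data and formulas are illustrative only.
from collections import Counter

# user -> topic the user mostly posts about (hypothetical data)
user_topic = {"u1": "t1", "u2": "t1", "u3": "t1", "u4": "t2", "u5": "t1"}
# directed follow edges (follower, followed) (hypothetical data)
edges = [("u2", "u1"), ("u3", "u1"), ("u4", "u1"), ("u5", "u1")]

# Concentration: share of users attached to the single most common topic.
topic_counts = Counter(user_topic.values())
concentration = topic_counts.most_common(1)[0][1] / len(user_topic)

# Star-likeness: fraction of all edges pointing at the most followed
# account; reciprocity: fraction of edges that are mutual.
in_degree = Counter(dst for _, dst in edges)
star_share = in_degree.most_common(1)[0][1] / len(edges)
reciprocity = sum((dst, src) in edges for src, dst in edges) / len(edges)

print(f"topic concentration={concentration:.2f}, "
      f"star share={star_share:.2f}, reciprocity={reciprocity:.2f}")
```

High concentration together with a high star share and low reciprocity would correspond, in this toy framing, to the bubble structure the study describes.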

In June 2018, the platform DuckDuckGo conducted a research study on Google's search platform. For the study, 87 adults in various locations around the continental United States searched Google for three keywords at the same time: immigration, gun control, and vaccinations. Even in private browsing mode, most people saw results unique to them: Google included certain links for some participants that it did not include for others, and the News and Videos infoboxes showed significant variation. Google publicly disputed these results, saying that Search Engine Results Page (SERP) personalization is mostly a myth. Google Search Liaison Danny Sullivan stated that “Over the years, a myth has developed that Google Search personalizes so much that for the same query, different people might get significantly different results from each other. This isn’t the case. Results can differ, but usually for non-personalized reasons.”
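One plausible way to quantify how much result pages vary across participants, sketched below with invented data, is the pairwise overlap of the links each person saw for the same query; this is an assumption-laden illustration, not DuckDuckGo's published analysis.

```python
# Hypothetical sketch: measure personalization as the pairwise Jaccard
# overlap of the link sets different participants saw for one query.
# The result lists are invented; lower average overlap suggests more
# personalization of the result pages.
from itertools import combinations

results_by_participant = {
    "p1": {"siteA.com", "siteB.com", "siteC.com"},
    "p2": {"siteA.com", "siteD.com", "siteC.com"},
    "p3": {"siteE.com", "siteB.com", "siteC.com"},
}

def jaccard(a, b):
    return len(a & b) / len(a | b)

overlaps = [
    jaccard(results_by_participant[x], results_by_participant[y])
    for x, y in combinations(results_by_participant, 2)
]
print(sum(overlaps) / len(overlaps))
```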

When filter bubbles are in place, they can create specific moments that scientists call 'Whoa' moments. A 'Whoa' moment occurs when an article, advertisement, or post appears on a user's screen that relates to a current action or to an object the user is currently using. Researchers coined the term after a young woman, whose daily routine included drinking coffee, opened her computer and noticed an advertisement for the same brand of coffee she was drinking: "Sat down and opened up Facebook this morning while having my coffee, and there they were two ads for Nespresso. Kind of a 'whoa' moment when the product you're drinking pops up on the screen in front of you." "Whoa" moments occur when people are "found" by advertising algorithms that target specific users based on their "click behavior" in order to increase sales revenue.

Several designers have developed tools to counteract the effects of filter bubbles (see § Countermeasures). Swiss radio station SRF voted the word Filterblase (the German translation of "filter bubble") word of the year 2016.

Countermeasures

By individuals

In The Filter Bubble: What the Internet Is Hiding from You, internet activist Eli Pariser highlights how the increasing occurrence of filter bubbles further emphasizes the value of one's bridging social capital as defined by Robert Putnam. While bonding capital corresponds to the establishment of strong ties between like-minded people, reinforcing a sense of social homogeneity, bridging social capital represents the creation of weak ties between people with potentially diverging interests and viewpoints, introducing significantly more heterogeneity. In that sense, high bridging capital is much more likely to promote social inclusion by increasing exposure to a space where problems that transcend one's niches and narrow self-interests are addressed. Fostering one's bridging capital, for example by connecting with more people in an informal setting, can therefore be an effective way to reduce the influence of the filter bubble phenomenon.

Users can in fact take many actions to burst through their filter bubbles, for example by making a conscious effort to evaluate what information they are exposing themselves to, and by thinking critically about whether they are engaging with a broad range of content. This view argues that users should change the psychology of how they approach media, rather than relying on technology to counteract their biases. Users can consciously avoid news sources that are unverifiable or weak. Chris Glushko, the VP of Marketing at IAB, advocates using fact-checking sites to identify fake news. Technology can also play a valuable role in combating filter bubbles.

Some plug-ins, such as Media Bias Fact Check, aim to help people step out of their filter bubbles and make them aware of their personal perspectives by showing content that contradicts their beliefs and opinions. For instance, Escape Your Bubble asks users to indicate a specific political party they want to be more informed about. The plug-in then suggests articles from well-established sources relating to that political party, encouraging users to become more educated about the other party. In addition to plug-ins, there are apps created with the mission of encouraging users to open up their echo chambers. UnFound.news offers an AI-curated news app that presents readers with news from diverse and distinct perspectives, helping them form a rational and informed opinion rather than succumbing to their own biases; it also nudges readers toward different perspectives if their reading pattern is biased toward one side or ideology. Read Across the Aisle is a news app that reveals whether or not users are reading from diverse news sources that include multiple perspectives. Each source is color-coded to represent its political leaning. When users read news from only one perspective, the app communicates that to them and encourages them to explore sources with opposing viewpoints. Although apps and plug-ins are tools humans can use, Eli Pariser stated that "certainly, there is some individual responsibility here to really seek out new sources and people who aren't like you."

Since web-based advertising can further the effect of filter bubbles by exposing users to more of the same content, users can block much advertising by deleting their search history, turning off targeted ads, and downloading browser extensions. Extensions such as Escape your Bubble for Google Chrome aim to help curate content and prevent users from being exposed only to biased information, while Mozilla Firefox extensions such as Lightbeam and Self-Destructing Cookies enable users to visualize how their data is being tracked and to remove some of the tracking cookies. Some users rely on anonymous or non-personalized search engines such as YaCy, DuckDuckGo, Qwant, Startpage.com, Disconnect, and Searx to prevent companies from gathering their web-search data. The Swiss daily Neue Zürcher Zeitung is beta-testing a personalized news engine app which uses machine learning to guess what content a user is interested in, while "always including an element of surprise"; the idea is to mix in stories which a user is unlikely to have followed in the past.
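A minimal sketch of the "personalization plus an element of surprise" idea follows, assuming a simple relevance score per story; the surprise share, scoring, and function names are invented placeholders, not the Neue Zürcher Zeitung's actual system.

```python
# Minimal sketch: most slots are filled with stories matching predicted
# interests, but a fixed share is reserved for stories the model predicts
# the user would not normally follow. Scores and the ratio are invented.
import random

SURPRISE_SHARE = 0.2  # hypothetical fraction of the feed kept for surprises

def build_feed(stories, predicted_interest, size=10):
    # stories: list of ids; predicted_interest: id -> score in [0, 1].
    ranked = sorted(stories, key=lambda s: predicted_interest[s], reverse=True)
    n_surprise = int(size * SURPRISE_SHARE)
    familiar = ranked[: size - n_surprise]          # top-scoring stories
    long_tail = ranked[size:]                       # stories the user is unlikely to follow
    surprises = random.sample(long_tail, min(n_surprise, len(long_tail)))
    feed = familiar + surprises
    random.shuffle(feed)
    return feed

scores = {f"story{i}": round(random.random(), 2) for i in range(30)}
print(build_feed(list(scores), scores))
```

The design choice here is the fixed surprise share: it guarantees that some content from outside the predicted interest profile always reaches the user, regardless of how confident the personalization model is.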

The European Union is taking measures to lessen the effect of the filter bubble. The European Parliament is sponsoring inquiries into how filter bubbles affect people's ability to access diverse news, and it has introduced a program aimed at educating citizens about social media. In the U.S., the CSCW panel suggests the use of news aggregator apps to broaden media consumers' news intake. News aggregator apps scan current news articles and direct readers to different viewpoints on a given topic. Users can also use a diversity-aware news balancer which visually shows whether their reading leans left or right, indicating a right lean with a larger red bar or a left lean with a larger blue bar. A study evaluating this news balancer found "a small but noticeable change in reading behavior, toward more balanced exposure, among users seeing the feedback, as compared to a control group".
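The feedback such a news balancer gives could, for example, be computed as sketched below; the leaning labels and the text-bar rendering are invented for illustration and do not reproduce the interface evaluated in the study.

```python
# Toy version of a "news balancer" feedback widget: tally the political
# leaning of articles a user has read and turn the imbalance into bar
# widths. Labels and the width scale are invented placeholders.
reading_history = ["left", "left", "right", "left", "center", "left"]

left = reading_history.count("left")
right = reading_history.count("right")
total = max(left + right, 1)

BAR_WIDTH = 20  # characters available for each bar in this toy rendering
blue_bar = "#" * round(BAR_WIDTH * left / total)   # left-leaning share
red_bar = "#" * round(BAR_WIDTH * right / total)   # right-leaning share

print(f"left  {blue_bar}")
print(f"right {red_bar}")
```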

By media companies

In light of recent concerns about information filtering on social media, Facebook acknowledged the presence of filter bubbles and has taken strides toward removing them. In January 2017, Facebook removed personalization from its Trending Topics list in response to problems with some users not seeing highly talked-about events there. Facebook's strategy is to revamp the Related Articles feature it implemented in 2013, which posted related news stories after a user read a shared article; the revamped feature would flip this process and post articles from different perspectives on the same topic. Facebook is also attempting to implement a vetting process whereby only articles from reputable sources are shown. Along with the founder of Craigslist and a few others, Facebook has invested $14 million into efforts "to increase trust in journalism around the world, and to better inform the public conversation". The idea is that even if people are only reading posts shared by their friends, at least those posts will be credible.

Similarly, as of January 30, 2018, Google has acknowledged the existence of filter bubble difficulties within its platform. Because current Google searches pull algorithmically ranked results based upon "authoritativeness" and "relevancy", which show and hide certain search results, Google is seeking to combat this. By training its search engine to recognize the intent of a search inquiry rather than the literal syntax of the question, Google is attempting to limit the size of filter bubbles. The initial phase of this training was planned for introduction in the second quarter of 2018. Questions that involve bias and/or controversial opinions will not be addressed until a later time, leaving open a larger problem: whether the search engine acts as an arbiter of truth or as a knowledgeable guide by which to make decisions.

In April 2017, news surfaced that Facebook, Mozilla, and Craigslist had contributed the majority of a $14M donation to CUNY's "News Integrity Initiative", aimed at eliminating fake news and creating more honest news media.

Later, in August, Mozilla, maker of the Firefox web browser, announced the formation of the Mozilla Information Trust Initiative (MITI). The MITI would serve as a collective effort to develop products, research, and community-based solutions to combat the effects of filter bubbles and the proliferation of fake news. Mozilla's Open Innovation team leads the initiative, striving to combat misinformation with a specific focus on products addressing literacy, research, and creative interventions.

Ethical implications

As the popularity of cloud services increases, personalized algorithms used to construct filter bubbles are expected to become more widespread. Scholars have begun considering the effect of filter bubbles on the users of social media from an ethical standpoint, particularly concerning the areas of personal freedom, security, and information bias. Filter bubbles in popular social media and personalized search sites can determine the particular content seen by users, often without their direct consent or cognizance, due to the algorithms used to curate that content. Self-created content manifested from behavior patterns can lead to partial information blindness. Critics of the use of filter bubbles speculate that individuals may lose autonomy over their own social media experience and have their identities socially constructed as a result of the pervasiveness of filter bubbles.

Technologists, social media engineers, and computer specialists have also examined the prevalence of filter bubbles. Mark Zuckerberg, founder of Facebook, and Eli Pariser, author of The Filter Bubble, have expressed concerns regarding the risks of privacy and information polarization. The information of the users of personalized search engines and social media platforms is not private, though some people believe it should be. The concern over privacy has resulted in a debate as to whether or not it is moral for information technologists to take users' online activity and manipulate future exposure to related information.

Some scholars have expressed concerns regarding the effects of filter bubbles on individual and social well-being, such as the dissemination of health information to the general public and the potential of internet search engines to alter health-related behavior. A 2019 multi-disciplinary book reported research and perspectives on the roles filter bubbles play in health misinformation. Drawing from fields such as journalism, law, medicine, and health psychology, the book addresses controversial health beliefs (e.g. alternative medicine and pseudoscience) as well as potential remedies for the negative effects of filter bubbles and echo chambers on different topics in health discourse. A 2016 study on the potential effects of filter bubbles on search engine results related to suicide found that algorithms play an important role in whether or not helplines and similar search results are displayed to users, and discussed the implications its findings may have for health policies. Another 2016 study, from the Croatian Medical Journal, proposed strategies for mitigating the potentially harmful effects of filter bubbles on health information, such as informing the public more about filter bubbles and their associated effects, encouraging users to try search engines other than Google, and providing more explanation of the processes search engines use to determine the results they display.

Since the content seen by individual social media users is influenced by algorithms that produce filter bubbles, users of social media platforms are more susceptible to confirmation bias, and may be exposed to biased, misleading information. Social sorting and other unintentional discriminatory practices are also anticipated as a result of personalized filtering.

In light of the 2016 U.S. presidential election, scholars have likewise expressed concerns about the effect of filter bubbles on democracy and democratic processes, as well as the rise of "ideological media". These scholars fear that users will be unable to "[think] beyond [their] narrow self-interest" as filter bubbles create personalized social feeds, isolating them from diverse points of view and their surrounding communities. For this reason, there is increasing discussion of the possibility of designing social media with more serendipity, that is, of proactively recommending content that lies outside one's filter bubble, including challenging political information, and, eventually, of providing empowering filters and tools to users. A related concern is how filter bubbles contribute to the proliferation of "fake news" and how this may influence political leaning, including how users vote.

Revelations in March 2018 of Cambridge Analytica's harvesting and use of user data for at least 87 million Facebook profiles during the 2016 presidential election highlight the ethical implications of filter bubbles. Christopher Wylie, co-founder and whistleblower of Cambridge Analytica, detailed how the firm had the ability to develop "psychographic" profiles of those users and use the information to shape their voting behavior. Access to user data by third parties such as Cambridge Analytica can exacerbate and amplify the filter bubbles users have created, artificially increasing existing biases and further dividing societies.

Dangers

Filter bubbles have stemmed from a surge in media personalization, which can trap users. The use of AI to personalize offerings can lead to users viewing only content that reinforces their own viewpoints without challenging them. Social media websites like Facebook may also present content in a way that makes it difficult for users to determine the source of the content, leading them to decide for themselves whether the source is reliable or fake. That can lead to people becoming used to hearing what they want to hear, which can cause them to react more radically when they see an opposing viewpoint. The filter bubble may cause the person to see any opposing viewpoints as incorrect and so could allow the media to force views onto consumers.

Researchers explain that the filter bubble reinforces what one is already thinking, which is why it is important to use resources that offer various points of view.

Extensions of concept

The concept of a filter bubble has been extended into other areas to describe societies that self-segregate according to political views but also economic, social, and cultural situations. That bubbling results in a loss of the broader community and creates the sense that, for example, children do not belong at social events unless those events were especially planned to be appealing for children and unappealing for adults without children.

