
Tuesday, August 17, 2021

Groupthink

From Wikipedia, the free encyclopedia

Groupthink is a psychological phenomenon that occurs within a group of people in which the desire for harmony or conformity in the group results in an irrational or dysfunctional decision-making outcome. Cohesiveness, or the desire for cohesiveness, in a group may produce a tendency among its members to agree at all costs. This causes the group to minimize conflict and reach a consensus decision without critical evaluation.

Groupthink is a construct of social psychology, but has an extensive reach and influences literature in the fields of communication studies, political science, management, and organizational theory, as well as important aspects of deviant religious cult behaviour.

Groupthink is sometimes stated to occur (more broadly) within natural groups within the community, for example to explain the lifelong different mindsets of those with differing political views (such as "conservatism" and "liberalism" in the U.S. political context) or the purported benefits of teamwork vs. work conducted in solitude. However, this conformity of viewpoints within a group does not mainly involve deliberate group decision-making, and might be better explained by the collective confirmation bias of the individual members of the group.

The term was coined in 1952 by William H. Whyte Jr. Most of the initial research on groupthink was conducted by Irving Janis, a research psychologist at Yale University. Janis published an influential book in 1972, which was revised in 1982. Janis used the Bay of Pigs disaster (the failed invasion of Castro's Cuba in 1961) and the Japanese attack on Pearl Harbor in 1941 as his two prime case studies. Later studies have evaluated and reformulated his groupthink model.

Groupthink requires individuals to avoid raising controversial issues or alternative solutions, and it entails a loss of individual creativity, uniqueness, and independent thinking. The dysfunctional group dynamics of the "ingroup" produce an "illusion of invulnerability" (an inflated certainty that the right decision has been made). Thus the "ingroup" significantly overrates its own decision-making abilities and significantly underrates the abilities of its opponents (the "outgroup"). Furthermore, groupthink can produce dehumanizing actions against the "outgroup". Members of a group can often feel peer pressure to "go along with the crowd" for fear of rocking the boat, or of how speaking up will affect how their teammates perceive them. Group interactions tend to favor clear and harmonious agreements, and it can be a cause for concern when few or no new innovations or arguments for better policies, outcomes, and structures are raised (McLeod). A group experiencing groupthink is often described as a group of "yes men", because group activities and group projects in general make it extremely easy to pass on offering constructive opinions.

Some methods that have been used to counteract groupthink in the past include selecting teams from more diverse backgrounds and even mixing men and women in groups (Kamalnath). Groupthink is considered by many to be a detriment to companies, organizations, and any work situation. Most senior-level positions require individuals who are independent in their thinking. A positive correlation has been found between outstanding executives and decisiveness (Kelman). Groupthink also prevents an organization from moving forward and innovating if no one ever speaks up to say something could be done differently.

Antecedent factors such as group cohesiveness, faulty group structure, and situational context (e.g., community panic) influence the likelihood that groupthink will affect the decision-making process.

History

From "Groupthink" by William H. Whyte Jr. in Fortune magazine, March 1952

William H. Whyte Jr. derived the term from George Orwell's Nineteen Eighty-Four, and popularized it in 1952 in Fortune magazine:

Groupthink being a coinage – and, admittedly, a loaded one – a working definition is in order. We are not talking about mere instinctive conformity – it is, after all, a perennial failing of mankind. What we are talking about is a rationalized conformity – an open, articulate philosophy which holds that group values are not only expedient but right and good as well.

Irving Janis pioneered the initial research on the groupthink theory. He does not cite Whyte, but coined the term again by analogy with "doublethink" and similar terms that were part of the newspeak vocabulary in the novel Nineteen Eighty-Four by George Orwell. He initially defined groupthink as follows:

I use the term groupthink as a quick and easy way to refer to the mode of thinking that persons engage in when concurrence-seeking becomes so dominant in a cohesive ingroup that it tends to override realistic appraisal of alternative courses of action. Groupthink is a term of the same order as the words in the newspeak vocabulary George Orwell used in his dismaying world of 1984. In that context, groupthink takes on an invidious connotation. Exactly such a connotation is intended, since the term refers to a deterioration in mental efficiency, reality testing and moral judgments as a result of group pressures.

He went on to write:

The main principle of groupthink, which I offer in the spirit of Parkinson's Law, is this: "The more amiability and esprit de corps there is among the members of a policy-making ingroup, the greater the danger that independent critical thinking will be replaced by groupthink, which is likely to result in irrational and dehumanizing actions directed against outgroups."

Janis set the foundation for the study of groupthink starting with his research in the American Soldier Project where he studied the effect of extreme stress on group cohesiveness. After this study he remained interested in the ways in which people make decisions under external threats. This interest led Janis to study a number of "disasters" in American foreign policy, such as failure to anticipate the Japanese attack on Pearl Harbor (1941); the Bay of Pigs Invasion fiasco (1961); and the prosecution of the Vietnam War (1964–67) by President Lyndon Johnson. He concluded that in each of these cases, the decisions occurred largely because of groupthink, which prevented contradictory views from being expressed and subsequently evaluated.

After the publication of Janis' book Victims of Groupthink in 1972, and a revised edition with the title Groupthink: Psychological Studies of Policy Decisions and Fiascoes in 1982, the concept of groupthink was used to explain many other faulty decisions in history. These events included Nazi Germany's decision to invade the Soviet Union in 1941, the Watergate scandal, and others. Despite the popularity of the concept of groupthink, fewer than two dozen studies addressed the phenomenon itself between 1972 and 1998, following the publication of Victims of Groupthink. This was surprising considering how many fields of interest it spans, including political science, communications, organizational studies, social psychology, management, strategy, counseling, and marketing. This lack of follow-up can most likely be explained by the facts that group research is difficult to conduct, that groupthink has many independent and dependent variables, and that it is unclear "how to translate [groupthink's] theoretical concepts into observable and quantitative constructs".

Nevertheless, outside of research in psychology and sociology, wider culture has come to detect groupthink in observable situations, for example:

  • " [...] critics of Twitter point to the predominance of the hive mind in such social media, the kind of groupthink that submerges independent thinking in favor of conformity to the group, the collective"
  • "[...] leaders often have beliefs which are very far from matching reality and which can become more extreme as they are encouraged by their followers. The predilection of many cult leaders for abstract, ambiguous, and therefore unchallengeable ideas can further reduce the likelihood of reality testing, while the intense milieu control exerted by cults over their members means that most of the reality available for testing is supplied by the group environment. This is seen in the phenomenon of 'groupthink', alleged to have occurred, notoriously, during the Bay of Pigs fiasco."
  • "Groupthink by Compulsion [...] [G]roupthink at least implies voluntarism. When this fails, the organization is not above outright intimidation. [...] In [a nationwide telecommunications company], refusal by the new hires to cheer on command incurred consequences not unlike the indoctrination and brainwashing techniques associated with a Soviet-era gulag."

Symptoms

To make groupthink testable, Irving Janis devised eight symptoms indicative of groupthink:

Type I: Overestimations of the group — its power and morality

  • Illusions of invulnerability creating excessive optimism and encouraging risk taking.
  • Unquestioned belief in the morality of the group, causing members to ignore the consequences of their actions.

Type II: Closed-mindedness

  • Rationalizing warnings that might challenge the group's assumptions.
  • Stereotyping those who are opposed to the group as weak, evil, biased, spiteful, impotent, or stupid.

Type III: Pressures toward uniformity

  • Self-censorship of ideas that deviate from the apparent group consensus.
  • Illusions of unanimity among group members; silence is viewed as agreement.
  • Direct pressure to conform placed on any member who questions the group, couched in terms of "disloyalty".
  • Mindguards: self-appointed members who shield the group from dissenting information.

Causes

Janis identified three antecedent conditions of groupthink:

  • High group cohesiveness. Janis emphasized that cohesiveness is the main factor that leads to groupthink. Groups that lack cohesiveness can of course make bad decisions, but they do not experience groupthink. In a cohesive group, members avoid speaking out against decisions, avoid arguing with others, and work towards maintaining friendly relationships in the group. If cohesiveness reaches such a high level that there are no longer disagreements between members, the group is ripe for groupthink.
    • deindividuation: group cohesiveness becomes more important than individual freedom of expression
  • Structural faults. Cohesion is necessary for groupthink, but it becomes even more likely when the group is organized in ways that disrupt the communication of information, and when the group engages in carelessness while making decisions.
    • insulation of the group: can promote the development of unique, inaccurate perspectives on issues the group is dealing with, and can then lead to faulty solutions to the problem.
    • lack of impartial leadership: leaders can completely control the group discussion by planning what will be discussed, only allowing certain questions to be asked, and asking for opinions of only certain people in the group. Closed-style leadership is when leaders announce their opinions on the issue before the group discusses it together. Open-style leadership is when leaders withhold their opinion until later in the discussion. Groups with a closed-style leader have been found to be more biased in their judgments, especially when members had a high degree of certainty.
    • lack of norms requiring methodological procedures
    • homogeneity of members' social backgrounds and ideology
  • Situational context:
    • highly stressful external threats: high-stakes decisions can create tension and anxiety, and group members may then cope with the decisional stress in irrational ways. Group members may rationalize their decision by exaggerating the positive consequences and minimizing the possible negative consequences. In an attempt to minimize the stressful situation, the group will make a quick decision with little to no discussion or disagreement. Studies have shown that groups under high stress are more likely to make errors, lose sight of the ultimate goal, and use procedures that members know have not been effective in the past.
    • recent failures: can lead to low self-esteem, resulting in agreement with the group for fear of being seen as wrong
    • excessive difficulties in decision-making tasks
    • time pressures: group members are more concerned with efficiency and quick results, instead of quality and accuracy. Additionally, time pressures can lead to group members overlooking important information regarding the issue of discussion.
    • moral dilemmas

Although it is possible for a situation to contain all three of these factors, all three are not always present even when groupthink is occurring. Janis considered a high degree of cohesiveness to be the most important antecedent to producing groupthink and always present when groupthink was occurring; however, he believed high cohesiveness would not always produce groupthink. A very cohesive group abides by all group norms; whether or not groupthink arises depends on what the group norms are. If the group encourages individual dissent and alternative strategies for problem solving, it is likely that groupthink will be avoided even in a highly cohesive group. This means that high cohesion will lead to groupthink only if one or both of the other antecedents is present, situational context being slightly more likely than structural faults to produce groupthink.

Prevention

As observed by Aldag and Fuller (1993), the groupthink phenomenon seems to rest on a set of unstated and generally restrictive assumptions:

  • The purpose of group problem solving is mainly to improve decision quality.
  • Group problem solving is considered a rational process.
  • Benefits of group problem solving:
    • variety of perspectives
    • more information about possible alternatives
    • better decision reliability
    • dampening of biases
    • social presence effects
  • Groupthink prevents these benefits due to structural faults and provocative situational context.
  • Groupthink prevention methods will produce better decisions.
  • An illusion of well-being is presumed to be inherently dysfunctional.
  • Group pressures towards consensus lead to concurrence-seeking tendencies.

It has been thought that groups with a strong ability to work together can solve dilemmas more quickly and efficiently than an individual. Groups have a greater number of resources, which enables them to store and retrieve information more readily and to come up with more alternative solutions to a problem. There was a recognized downside to group problem solving in that it takes groups more time to come to a decision and requires that people make compromises with each other. However, it was not until the research of Janis appeared that anyone really considered that a highly cohesive group could impair its ability to generate quality decisions. Tight-knit groups may appear to make decisions better because they can reach consensus quickly and at a low energy cost; over time, however, this process of decision-making may decrease the members' ability to think critically. It is, therefore, considered by many to be important to combat the effects of groupthink.

According to Janis, decision-making groups are not necessarily destined to groupthink. He devised ways of preventing groupthink:

  • Leaders should assign each member the role of "critical evaluator". This allows each member to freely air objections and doubts.
  • Leaders should not express an opinion when assigning a task to a group.
  • Leaders should absent themselves from many of the group meetings to avoid excessively influencing the outcome.
  • The organization should set up several independent groups, working on the same problem.
  • All effective alternatives should be examined.
  • Each member should discuss the group's ideas with trusted people outside of the group.
  • The group should invite outside experts into meetings. Group members should be allowed to discuss with and question the outside experts.
  • At least one group member should be assigned the role of devil's advocate. This should be a different person for each meeting.

The devil's advocate in a group may provide questions and insight which contradict the majority group in order to avoid groupthink decisions. A study by Hartwig argues that the devil's advocacy technique is very useful for group problem-solving. It allows conflict to be used in a way that is most effective for finding the best solution, so that members will not have to go back and find a different solution if the first one fails. Hartwig also suggests that the devil's advocacy technique be incorporated with other group decision-making models, such as functional theory, to find and evaluate alternative solutions. The main idea of the devil's advocacy technique is that somewhat structured conflict can be facilitated not only to reduce groupthink, but also to solve problems.

A term similar to groupthink is the Abilene paradox, another phenomenon that is detrimental when working in groups. When organizations fall into the Abilene paradox, they take actions that contradict their perceived goals and therefore defeat the very purposes they are trying to achieve. Failure to communicate desires or beliefs can cause the Abilene paradox.

The Watergate scandal is an example of this. Before the scandal occurred, a meeting took place in which the issue was discussed. One of Nixon's campaign aides was unsure whether he should speak up and give his input. Had he voiced his disagreement with the group's decision, it is possible that the scandal could have been avoided.

Other examples of how groupthink could be avoided or prevented:

After the Bay of Pigs invasion fiasco, President John F. Kennedy sought to avoid groupthink during the Cuban Missile Crisis using "vigilant appraisal". During meetings, he invited outside experts to share their viewpoints, and allowed group members to question them carefully. He also encouraged group members to discuss possible solutions with trusted members within their separate departments, and he even divided the group up into various sub-groups, to partially break the group cohesion. Kennedy was deliberately absent from the meetings, so as to avoid pressing his own opinion.

Cass Sunstein reports that introverts can sometimes be silent in meetings with extroverts; he recommends explicitly asking for each person's opinion, either during the meeting or afterwards in one-on-one sessions. Sunstein points to studies showing that groups with a high level of internal socialization and happy talk are more prone to bad investment decisions due to groupthink, compared with groups of investors who are relative strangers and more willing to be argumentative. To avoid group polarization, where discussion with like-minded people drives an outcome further to an extreme than any of the individuals favored before the discussion, he recommends creating heterogeneous groups which contain people with different points of view. Sunstein also points out that people arguing a side they do not sincerely believe (in the role of devil's advocate) tend to be much less effective than those making a sincere argument. This can be accomplished by dissenting individuals, or by a group like a Red Team that is expected to pursue an alternative strategy or goal "for real".

Empirical findings and meta-analysis

Testing groupthink in a laboratory is difficult because synthetic settings remove groups from real social situations, which ultimately changes the variables conducive or inhibitive to groupthink. Because of its subjective nature, researchers have struggled to measure groupthink as a complete phenomenon, instead frequently opting to measure its particular factors. These factors range from causal to effectual and focus on group and situational aspects.

Park (1990) found that "only 16 empirical studies have been published on groupthink", and concluded that they "resulted in only partial support of his [Janis's] hypotheses". Park concludes, "despite Janis' claim that group cohesiveness is the major necessary antecedent factor, no research has shown a significant main effect of cohesiveness on groupthink." Park also concludes that research on the interaction between group cohesiveness and leadership style does not support Janis' claim that cohesion and leadership style interact to produce groupthink symptoms.

Park presents a summary of the results of the studies analyzed. According to Park, a study by Huseman and Driver (1979) indicates groupthink occurs in both small and large decision-making groups within businesses. This results partly from group isolation within the business. Manz and Sims (1982) conducted a study showing that autonomous work groups are susceptible to groupthink symptoms in the same manner as decision-making groups within businesses. Fodor and Smith (1982) produced a study revealing that group leaders with high power motivation create atmospheres more susceptible to groupthink. Leaders with high power motivation possess characteristics similar to leaders with a "closed" leadership style: an unwillingness to respect dissenting opinion. The same study indicates that the level of group cohesiveness is insignificant in predicting groupthink occurrence. Park summarizes a study performed by Callaway, Marriott, and Esser (1985) in which groups with highly dominant members "made higher quality decisions, exhibited lowered state of anxiety, took more time to reach a decision, and made more statements of disagreement/agreement". Overall, groups with highly dominant members expressed characteristics inhibitory to groupthink. If highly dominant members are considered equivalent to leaders with high power motivation, the results of Callaway, Marriott, and Esser contradict the results of Fodor and Smith.

A study by Leana (1985) indicates the interaction between level of group cohesion and leadership style is completely insignificant in predicting groupthink. This finding refutes Janis' claim that the factors of cohesion and leadership style interact to produce groupthink. Park summarizes a study by McCauley (1989) in which structural conditions of the group were found to predict groupthink while situational conditions did not. The structural conditions included group insulation, group homogeneity, and promotional leadership. The situational conditions included group cohesion. These findings refute Janis' claim about group cohesiveness predicting groupthink.

Overall, studies on groupthink have largely focused on the factors (antecedents) that predict groupthink. Groupthink occurrence is often measured by the number of ideas or solutions generated within a group, but there is no uniform, concrete standard by which researchers can objectively conclude that groupthink has occurred. The studies of groupthink and its antecedents reveal a mixed body of results. Some studies indicate group cohesion and leadership style to be powerfully predictive of groupthink, while others indicate that these factors are insignificant. Group homogeneity and group insulation are generally supported as factors predictive of groupthink.

Case studies

Politics and military

Groupthink can have a strong hold on political decisions and military operations, which may result in enormous wastage of human and material resources. Highly qualified and experienced politicians and military commanders sometimes make very poor decisions when in a suboptimal group setting. Scholars such as Janis and Raven attribute political and military fiascoes, such as the Bay of Pigs Invasion, the Vietnam War, and the Watergate scandal, to the effect of groupthink. More recently, Dina Badie argued that groupthink was largely responsible for the shift in the U.S. administration's view on Saddam Hussein that eventually led to the 2003 invasion of Iraq by the United States. After the September 11 attacks, "stress, promotional leadership, and intergroup conflict" were all factors that gave rise to the occurrence of groupthink. Political case studies of groupthink serve to illustrate the impact that the occurrence of groupthink can have in today's political scene.

Bay of Pigs invasion and the Cuban Missile Crisis

The United States Bay of Pigs Invasion of April 1961 was the primary case study that Janis used to formulate his theory of groupthink. The invasion plan was initiated by the Eisenhower administration, but when the Kennedy administration took over, it "uncritically accepted" the plan of the Central Intelligence Agency (CIA). When some people, such as Arthur M. Schlesinger Jr. and Senator J. William Fulbright, attempted to present their objections to the plan, the Kennedy team as a whole ignored these objections and kept believing in the morality of their plan. Eventually Schlesinger minimized his own doubts, performing self-censorship. The Kennedy team stereotyped Fidel Castro and the Cubans by failing to question the CIA about its many false assumptions, including the ineffectiveness of Castro's air force, the weakness of Castro's army, and the inability of Castro to quell internal uprisings.

Janis argued the fiasco that ensued could have been prevented if the Kennedy administration had followed the methods for preventing groupthink that it adopted during the Cuban Missile Crisis, which took place just one year later, in October 1962. In the latter crisis, essentially the same political leaders were involved in decision-making, but this time they learned from their previous mistake of seriously underrating their opponents.

Pearl Harbor

The attack on Pearl Harbor on December 7, 1941, is a prime example of groupthink. A number of factors, such as shared illusions and rationalizations, contributed to the lack of precaution taken by U.S. Navy officers based in Hawaii. The United States had intercepted Japanese messages and discovered that Japan was arming itself for an offensive attack somewhere in the Pacific Ocean. Washington took action by warning officers stationed at Pearl Harbor, but the warning was not taken seriously. They assumed that the Empire of Japan was merely taking precautions in the event that its embassies and consulates in enemy territories were seized.

The U.S. Navy and Army in Pearl Harbor also shared rationalizations about why an attack was unlikely. Some of them included:

  • "The Japanese would never dare attempt a full-scale surprise assault against Hawaii because they would realize that it would precipitate an all-out war, which the United States would surely win."
  • "The Pacific Fleet concentrated at Pearl Harbor was a major deterrent against air or naval attack."
  • "Even if the Japanese were foolhardy to send their carriers to attack us [the United States], we could certainly detect and destroy them in plenty of time."
  • "No warships anchored in the shallow water of Pearl Harbor could ever be sunk by torpedo bombs launched from enemy aircraft."

Space Shuttle Challenger disaster

On January 28, 1986, the US launched the Space Shuttle Challenger. The launch was to be monumental for NASA, as a high school teacher was among the crew and was to be the first American civilian in space. NASA's engineering and launch teams relied on group work, and in order to launch the shuttle, team members had to affirm that each system was functioning nominally. The Thiokol engineers who designed and built the Challenger's rocket boosters warned that the temperature on the day of the launch could result in total failure of the vehicle and the deaths of the crew. The launch resulted in disaster and grounded space shuttle flights for nearly three years.

The Challenger case was subject to a more quantitatively oriented test of Janis's groupthink model performed by Esser and Lindoerfer, who found clear signs of positive antecedents to groupthink in the critical decisions concerning the launch of the shuttle. The day of the launch was rushed for publicity reasons. NASA wanted to captivate and hold the attention of America. Having civilian teacher Christa McAuliffe on board to broadcast a live lesson, and the possible mention by President Ronald Reagan in the State of the Union address, were opportunities NASA deemed critical to increasing interest in its potential civilian space flight program. The schedule NASA set out to meet was, however, self-imposed. It seemed incredible to many that an organization with a perceived history of successful management would have locked itself into a schedule it had no chance of meeting.

2016 United States presidential election

In the weeks and months preceding the 2016 United States presidential election, there was near-unanimity among news media outlets and polling organizations that Hillary Clinton's election was extremely likely. For example, on November 7, the day before the election, The New York Times opined that Clinton then had "a consistent and clear advantage in states worth at least 270 electoral votes". The Times estimated the probability of a Clinton win at 84%. Also on November 7, Reuters estimated the probability of Clinton defeating Donald Trump in the election at 90%, and The Huffington Post put Clinton's odds of winning at 98.2% based on "9.8 million simulations".
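
Probabilities like these typically come from Monte Carlo simulation: a model assigns each state a win probability, simulates a large number of elections, and reports the fraction of simulations in which the candidate wins. The Python sketch below illustrates only the general technique; the state probabilities and electoral-vote counts are invented for illustration and are not the inputs of any of the forecasts cited above.

    import random

    # Hypothetical per-state win probabilities and electoral-vote counts for a
    # two-candidate race. These numbers are invented for illustration; they are
    # not taken from any real forecasting model.
    STATES = [
        # (probability the candidate carries the state, electoral votes)
        (0.95, 55), (0.85, 29), (0.60, 38), (0.55, 20),
        (0.50, 29), (0.45, 18), (0.40, 16), (0.20, 38),
    ]
    TOTAL_VOTES = sum(votes for _, votes in STATES)

    def simulate_once(rng):
        """Simulate one election; return True if the candidate wins a majority."""
        won = sum(votes for p, votes in STATES if rng.random() < p)
        return won > TOTAL_VOTES / 2

    def win_probability(n_sims=100_000, seed=0):
        """Estimate the overall win probability as the fraction of simulated wins."""
        rng = random.Random(seed)
        return sum(simulate_once(rng) for _ in range(n_sims)) / n_sims

    print(f"Estimated win probability: {win_probability():.1%}")

Note that this sketch draws every state independently. Real polling errors are correlated across states, and a model that ignores that correlation will overstate its certainty; underestimating such correlated error is one commonly discussed reason the 2016 forecasts were overconfident.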

The disconnect between the election results and the pre-election estimates, both from news media outlets and from pollsters, may have been due to three factors: news and polling professionals couldn't imagine a candidate as unconventional as Trump becoming president; Trump supporters may have been under-sampled by surveys or may have lied to or misled pollsters out of fear of social ostracism; and polls may have been unable to account for Russian interference in the 2016 United States elections.

Corporate world

In the corporate world, ineffective and suboptimal group decision-making can negatively affect the health of a company and cause a considerable amount of monetary loss.

Swissair

Aaron Hermann and Hussain Rammal illustrate the detrimental role of groupthink in the collapse of Swissair, a Swiss airline company that was thought to be so financially stable that it earned the title the "Flying Bank". The authors argue that, among other factors, Swissair carried two symptoms of groupthink: the belief that the group is invulnerable and the belief in the morality of the group. In addition, before the fiasco, the size of the company board was reduced, subsequently eliminating industrial expertise. This may have further increased the likelihood of groupthink. With the board members lacking expertise in the field and having somewhat similar backgrounds, norms, and values, the pressure to conform may have become more prominent. This phenomenon is called group homogeneity, which is an antecedent to groupthink. Together, these conditions may have contributed to the poor decision-making process that eventually led to Swissair's collapse.

Marks & Spencer and British Airways

Another example of groupthink from the corporate world is illustrated by the United Kingdom-based companies Marks & Spencer and British Airways. The negative impact of groupthink took place during the 1990s as both companies pursued globalization expansion strategies. Researcher Jack Eaton's content analysis of media press releases revealed that all eight symptoms of groupthink were present during this period. The most predominant symptom was the illusion of invulnerability, as both companies underestimated potential failure due to years of profitability and success during challenging markets. Until the consequences of groupthink erupted, they were considered blue chips and darlings of the London Stock Exchange. During 1998–1999 the price of Marks & Spencer shares fell from 590 to less than 300 pence, and that of British Airways from 740 to 300 pence. Both companies had already featured prominently in the UK press and media for more positive reasons to do with national pride in their undoubted sector-wide performance.

Sports

Recent literature on groupthink attempts to study the application of this concept beyond the framework of business and politics. One particularly relevant and popular arena in which groupthink is rarely studied is sports. The lack of literature in this area prompted Charles Koerber and Christopher Neck to begin a case-study investigation examining the effect of groupthink on the decision of the Major League Umpires Association (MLUA) to stage a mass resignation in 1999. The decision was a failed attempt to gain a stronger negotiating stance against Major League Baseball (MLB). Koerber and Neck suggest that three groupthink symptoms can be found in the decision-making process of the MLUA. First, the umpires overestimated the power they had over the baseball league and the strength of their group's resolve. The union also exhibited some degree of closed-mindedness in the notion that MLB was the enemy. Lastly, there was the presence of self-censorship; some umpires who disagreed with the decision to resign failed to voice their dissent. These factors, along with other decision-making defects, led to a decision that was suboptimal and ineffective.

Recent developments

Ubiquity model

Researcher Robert Baron (2005) contends that the connection between certain antecedents which Janis believed necessary has not been demonstrated by the current collective body of research on groupthink. He believes that Janis' antecedents for groupthink are incorrect, and argues that not only are they "not necessary to provoke the symptoms of groupthink, but that they often will not even amplify such symptoms". As an alternative to Janis' model, Baron proposed a ubiquity model of groupthink. This model provides a revised set of antecedents for groupthink, including social identification, salient norms, and low self-efficacy.

General group problem-solving (GGPS) model

Aldag and Fuller (1993) argue that the groupthink concept was based on a "small and relatively restricted sample" that became too broadly generalized. Furthermore, the concept is too rigidly staged and deterministic, and empirical support for it has not been consistent. The authors compare the groupthink model to findings presented by Maslow and Piaget; they argue that, in each case, the model incited great interest and further research that subsequently invalidated the original concept. Aldag and Fuller thus suggest a new model, called the general group problem-solving (GGPS) model, which integrates new findings from the groupthink literature and alters aspects of groupthink itself. The primary difference between the GGPS model and groupthink is that the former is more value neutral and more political.

Reexamination

Later scholars have reassessed the merit of groupthink by reexamining the case studies that Janis had originally used to buttress his model. Roderick Kramer (1998) believed that, because scholars today have a more sophisticated set of ideas about the general decision-making process, and because new and relevant information about the fiascos has surfaced over the years, a reexamination of the case studies is appropriate and necessary. He argues that new evidence does not support Janis' view that groupthink was largely responsible for President Kennedy's and President Johnson's decisions in the Bay of Pigs Invasion and the escalation of U.S. military involvement in the Vietnam War, respectively. Both presidents sought the advice of experts outside their political groups more than Janis suggested. Kramer also argues that the presidents were the final decision-makers of the fiascos; while determining which course of action to take, they relied more heavily on their own construals of the situations than on any group-consenting decision presented to them. Kramer concludes that Janis' explanation of the two military issues is flawed and that groupthink has much less influence on group decision-making than is popularly believed.

Although groupthink is generally thought of as something to avoid, it may have some positive effects. A case study by Choi and Kim shows that, with group identity, group performance has a negative correlation with defective decision making. This study also showed that the relationship between groupthink and defective decision making was insignificant. These findings mean that, in the right circumstances, groupthink does not always have negative outcomes. They also call the original theory of groupthink into question.

Reformulation

Whyte (1998) suggests that collective efficacy plays a large unrecognised role in groupthink because it causes groups to become less vigilant and to favor risks, two particular factors that characterize groups affected by groupthink. McCauley recasts aspects of groupthink's preconditions by arguing that the level of attractiveness of group members is the most prominent factor in causing poor decision-making. The results of Turner's and Pratkanis' (1991) study on social identity maintenance perspective and groupthink conclude that groupthink can be viewed as a "collective effort directed at warding off potentially negative views of the group". Together, the contributions of these scholars have brought about new understandings of groupthink that help reformulate Janis' original model.

Sociocognitive theory

According to a new theory many of the basic characteristics of groupthink – e.g., strong cohesion, indulgent atmosphere, and exclusive ethos – are the result of a special kind of mnemonic encoding (Tsoukalas, 2007). Members of tightly knit groups have a tendency to represent significant aspects of their community as episodic memories and this has a predictable influence on their group behavior and collective ideology.


False consensus effect

From Wikipedia, the free encyclopedia

In psychology, the false consensus effect, also known as consensus bias, is a pervasive cognitive bias that causes people to “see their own behavioral choices and judgments as relatively common and appropriate to existing circumstances”. In other words, they assume that their personal qualities, characteristics, beliefs, and actions are relatively widespread through the general population.

This false consensus is significant because it increases self-esteem (overconfidence effect). It can be derived from a desire to conform and be liked by others in a social environment. This bias is especially prevalent in group settings where one thinks the collective opinion of their own group matches that of the larger population. Since the members of a group reach a consensus and rarely encounter those who dispute it, they tend to believe that everybody thinks the same way. The false-consensus effect is not restricted to cases where people believe that their values are shared by the majority, but it still manifests as an overestimate of the extent of their belief.

Additionally, when confronted with evidence that a consensus does not exist, people often assume that those who do not agree with them are defective in some way. There is no single cause for this cognitive bias; the availability heuristic, self-serving bias, and naïve realism have been suggested as at least partial underlying factors. The bias may also result, at least in part, from non-social stimulus-reward associations. Maintenance of this cognitive bias may be related to the tendency to make decisions with relatively little information. When faced with uncertainty and a limited sample from which to make decisions, people often "project" themselves onto the situation. When this personal knowledge is used as input to make generalizations, it often results in the false sense of being part of the majority.

The false consensus effect has been widely observed and supported by empirical evidence. Previous research has suggested that cognitive and perceptual factors (motivated projection, accessibility of information, emotion, etc.) may contribute to the consensus bias, while recent studies have focused on its neural mechanisms. One recent study has shown that consensus bias may improve decisions about other people's preferences. Ross, Greene, and House first defined the false consensus effect in 1977, with emphasis on the relative commonness that people perceive in their own responses; however, similar projection phenomena had already caught attention in psychology. Specifically, concerns about the connection between individuals' personal predispositions and their estimates of peers had appeared in the literature for some time. For instance, Katz and Allport in 1931 showed that students' estimates of how frequently others cheated were positively correlated with their own behavior. Later, around 1970, the same phenomenon was found for political beliefs and in prisoner's dilemma situations. In 2017, researchers identified a persistent egocentric bias when participants learned about other people's snack-food preferences. Moreover, recent studies suggest that the false consensus effect can also affect professional decision makers; specifically, it has been shown that even experienced marketing managers project their personal product preferences onto consumers.

Major theoretical approaches

The false-consensus effect can be traced back to two parallel theories of social perception, "the study of how we form impressions of and make inferences about other people". The first is the idea of social comparison. The principal claim of Leon Festinger's (1954) social comparison theory was that individuals evaluate their thoughts and attitudes based on other people. This may be motivated by a desire for confirmation and the need to feel good about oneself. As an extension of this theory, people may use others as sources of information to define social reality and guide behavior. This is called informational social influence. The problem, though, is that people are often unable to accurately perceive the social norm and the actual attitudes of others. In other words, research has shown that people are surprisingly poor "intuitive psychologists" and that our social judgments are often inaccurate. This finding helped to lay the groundwork for an understanding of biased processing and inaccurate social perception. The false-consensus effect is just one example of such an inaccuracy.

The second influential theory is projection, the idea that people project their own attitudes and beliefs onto others. This idea of projection is not a new concept. In fact, it can be found in Sigmund Freud's work on the defense mechanism of projection, D.S. Holmes' work on "attributive projection" (1968), and Gustav Ichheisser's work on social perception (1970). D.S. Holmes, for example, described social projection as the process by which people "attempt to validate their beliefs by projecting their own characteristics onto other individuals".

Here a connection can be made between the two stated theories of social comparison and projection. First, as social comparison theory explains, individuals constantly look to peers as a reference group and are motivated to do so in order to seek confirmation for their own attitudes and beliefs. In order to guarantee confirmation and a higher self-esteem, though, an individual might unconsciously project their own beliefs onto the others (the targets of their comparisons). This final outcome is the false-consensus effect. To summarize, the false-consensus effect can be seen as stemming from both social comparison theory and the concept of projection.

The false-consensus effect, as defined by Ross, Greene, and House in 1977, came to be the culmination of the many related theories that preceded it. In their well-known series of four studies, Ross and associates hypothesized and then demonstrated that people tend to overestimate the popularity of their own beliefs and preferences. The studies were conducted both in hypothetical situations, via questionnaire surveys, and in authentic conflict situations. In the questionnaire studies, participants were presented with hypothetical events and then were not only asked to indicate their own behavioral choices and characteristics under the provided circumstances, but also asked to rate the responses and traits of peers, referred to as "actors". In the real-situation studies, participants were actually confronted with conflict situations in which they were asked to choose behavioral alternatives and to judge the traits and decisions of two supposedly real individuals who had participated in the study. In general, the raters made more "extreme predictions" about the personalities of the actors who did not share the raters' own preference. In fact, the raters may have even thought that there was something wrong with the people expressing the alternative response.

In the ten years after the influential Ross et al. study, close to 50 papers were published with data on the false-consensus effect. Theoretical approaches were also expanded. The theoretical perspectives of this era can be divided into four categories: (a) selective exposure and cognitive availability, (b) salience and focus of attention, (c) logical information processing, and (d) motivational processes. In general, the researchers and designers of these theories believe that there is not a single right answer. Instead, they admit that there is overlap among the theories and that the false-consensus effect is most likely due to a combination of these factors.

Selective exposure and cognitive availability

This theory is closely tied to the availability heuristic, which suggests that perceptions of similarity (or difference) are affected by how easily those characteristics can be recalled from memory. And as one might expect, similarities between oneself and others are more easily recalled than differences. This is in part because people usually associate with those who are similar to themselves. This selected exposure to similar people may bias or restrict the "sample of information about the true diversity of opinion in the larger social environment". As a result of the selective exposure and availability heuristic, it is natural for the similarities to prevail in one's thoughts.

Botvin et al. (1992) conducted a popular study of the false-consensus effect within a specific adolescent community, in an effort to determine whether students show a higher level of false consensus among their direct peers than with society at large. The participants were 203 college students ranging in age from 18 to 25 (with an average age of 18.5). They were given a questionnaire and asked to answer questions regarding a variety of social topics. For each social topic, they were asked to state how they felt about the topic and to estimate the percentage of their peers who would agree with them. The results showed that the false-consensus effect was extremely prevalent when participants were describing the rest of their college community; of the twenty topics considered, sixteen prominently demonstrated the false-consensus effect. The high levels of false consensus seen in this study can be attributed to the group studied; because the participants were asked to compare themselves to a group of peers that they are constantly around (and view as very similar to themselves), the level of the false-consensus effect increased.
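
Studies of this kind typically operationalize the false-consensus effect by comparing prevalence estimates from respondents who endorse a position with estimates from those who do not: if each side believes its own view is the more common one, the endorsers' mean estimate exceeds the non-endorsers'. The following sketch shows that computation on invented survey responses; it is not Botvin et al.'s data or analysis code.

    from statistics import mean

    # Toy survey for one topic. Each tuple is (endorses_position, estimate),
    # where estimate is the respondent's guess at the percentage of peers who
    # endorse the position (0-100). All values are invented for illustration.
    responses = [
        (True, 75), (True, 68), (True, 80), (True, 55),
        (False, 45), (False, 30), (False, 50), (False, 38),
    ]

    def false_consensus_gap(responses):
        """Endorsers' mean prevalence estimate minus non-endorsers' mean estimate.

        A clearly positive gap is the usual operational sign of the
        false-consensus effect: each side overestimates how common its
        own position is.
        """
        endorser_estimates = [est for endorses, est in responses if endorses]
        other_estimates = [est for endorses, est in responses if not endorses]
        return mean(endorser_estimates) - mean(other_estimates)

    print(f"False-consensus gap: {false_consensus_gap(responses):.1f} points")

Repeating this computation for each topic and counting how many show a clearly positive gap mirrors the sixteen-of-twenty tally reported above.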

Salience and focus of attention

This theory suggests that when an individual focuses solely on their own preferred position, they are more likely to overestimate its popularity, thus falling victim to the false-consensus effect. This is because that position is the only one in their immediate consciousness. Performing an action that promotes the position will make it more salient and may increase the false-consensus effect. If, however, more positions are presented to the individual, the degree of the false-consensus effect might decrease significantly.

Logical information processing

This theory assumes that active and seemingly rational thinking underlies an individual's estimates of similarity among others. This is manifested in one's causal attributions. For instance, if an individual makes an external attribution for their belief, the individual will likely view his or her experience of the thing in question as merely a matter of objective experience. For example, a few movie-goers may falsely assume that the quality of the film is a purely objective entity. To explain their dissatisfaction with it, the viewers may say that it was simply a bad movie (an external attribution). Based on this (perhaps erroneous) assumption of objectivity, it seems rational or "logical" to assume that everyone else will have the same experience; consensus should be high. On the other hand, someone in the same situation who makes an internal attribution (perhaps a film aficionado who is well-aware of his or her especially high standards) will realize the subjectivity of the experience and will be drawn to the opposite conclusion; their estimation of consensus with their experience will be much lower. Although they result in two opposite outcomes, both paths of attribution rely on an initial assumption which then leads to a "logical" conclusion. By this logic, then, it can be said that the false-consensus effect is really a reflection of the fundamental attribution error (specifically the actor-observer bias), in which people prefer external/situational attributions over internal/dispositional ones to justify their own behaviors.

In a study by Fox, Yinon, and Mayraz, researchers attempted to determine whether the level of the false-consensus effect changed across different age groups. To reach a conclusion, the researchers split their participants into four age groups. Two hundred participants were used, and gender was not considered to be a factor. As in the previous study mentioned, this study used a questionnaire as its main source of information. The results showed that the false-consensus effect was extremely prevalent in all groups, but was most prevalent in the oldest age group (the participants who were labeled as "old-age home residents"). They showed the false-consensus effect in all 12 areas that they were questioned about. The increase in false consensus seen in the oldest age group can be attributed to their high level of "logical" reasoning behind their decisions; the oldest age group has obviously lived the longest, and therefore feels that it can project its beliefs onto all age groups due to its (seemingly objective) past experiences and wisdom. The younger age groups cannot logically relate to those older than them, because they have not had that experience and do not pretend to know such objective truths. These results demonstrate a tendency for older people to rely more heavily on situational attributions (life experience) as opposed to internal attributions.

Motivational processes

This theory stresses the benefits of the false-consensus effect: namely, the perception of increased social validation, social support, and self-esteem. It may also be useful to exaggerate similarities in social situations in order to increase liking. It is possible that these benefits serve as positive reinforcement for false-consensus thinking.

Applications

The false-consensus effect is an important attribution bias to take into consideration when conducting business and in everyday social interactions. Essentially, people are inclined to believe that the general population agrees with their opinions and judgments. Whether or not this belief is accurate, it gives them a feeling of greater assurance and security in their decisions. This could be an important phenomenon to either exploit or avoid in business dealings.

For example, if a man doubted whether he wanted to buy a new tool, breaking down his notion that others agree with his doubt would be an important step in persuading him to purchase it. By convincing the customer that other people in fact do want to buy the tool, the seller could perhaps make a sale that he would not have made otherwise. In this way, the false-consensus effect is closely related to conformity, the effect in which an individual is influenced to match the beliefs or behaviors of a group. There are two differences between the false-consensus effect and conformity: most importantly, conformity is matching the behaviors, beliefs, or attitudes of a real group, while the false-consensus effect is perceiving that others share your behaviors, beliefs, or attitudes, whether or not they really do. Making the customer feel that the opinion of others (society) favors buying the tool will make the customer feel more confident about his purchase and will lead him to believe that other people would have made the same decision.

Similarly, any elements of society affected by public opinion—e.g., elections, advertising, publicity—are very much influenced by the false-consensus effect. This is partially because the way in which people develop their perceptions involves "differential processes of awareness". That is to say, while some people are motivated to reach correct conclusions, others may be motivated to reach preferred conclusions. Members of the latter category will more often experience the false-consensus effect, because the subject is likely to search actively for like-minded supporters and may discount or ignore the opposition.

Belief in a favorable future

The concept of false consensus effect can also be extended to predictions about future others. Belief in a favorable future is the belief that future others will change their preferences and beliefs in alignment with one's own. Belief in a favorable future suggests that people overestimate the extent to which other people will come to agree with their preferences and beliefs over time.

Rogers, Moore, and Norton (2017) find that belief in a favorable future is greater in magnitude than the false-consensus effect for two reasons:

  1. It is based in future others whose beliefs are not directly observable, and
  2. It is focused on future beliefs, which gives these future others time to “discover” the truth and change their beliefs.

Uncertainties

There is ambiguity about several facets of the false-consensus effect and of its study. First of all, it is unclear exactly which factors play the largest role in the strength and prevalence of the false-consensus effect in individuals. For example, two individuals in the same group and with very similar social standing could have very different levels of false-consensus effect, but it is unclear what social, personality, or perceptual differences between them play the largest role in causing this disparity. Additionally, it can be difficult to obtain accurate survey data about the false-consensus effect (as well as other psychological biases) because the search for consistent, reliable groups to be surveyed (often over an extended period of time) often leads to groups that might have dynamics slightly different from those of the "real world". For example, many of the referenced studies in this article examined college students, who might have an especially high level of false-consensus effect both because they are surrounded by their peers (and perhaps experience the availability heuristic) and because they often assume that they are similar to their peers. This may result in distorted data from some studies of the false-consensus effect.

Relation to personality psychology

Within the realm of personality psychology, the false-consensus effect does not have significant effects. This is because the false-consensus effect relies heavily on the social environment and how a person interprets that environment. Instead of looking at situational attributions, personality psychology evaluates a person with dispositional attributions, making the false-consensus effect relatively irrelevant in that domain. Therefore, a person's personality could potentially affect the degree to which the person exhibits the false-consensus effect, but not the existence of such a trait. This should not, however, be interpreted as an individual being the sole product of the social environment. In order for the trait to "exist" in an organism's mind, there must be a biological structure that underpins it. For an organism to see ultraviolet light, it must have genes (which then give rise to the biological structure) that allow it to see that part of the external environment. Since the brain is a biological system, there must be an underlying biological disposition that similarly allows an individual to register and interpret the social environment, thus generating the false-consensus effect. The brain's purpose is, after all, to extract information from the environment and accordingly generate behaviour and regulate physiology. There is no clean distinction between "innate" and "learned", or "nature" and "nurture", as the interaction of both is needed. Social and personality psychology are not separate fields but necessarily complementary ones, as demonstrated by the person-situation debate.

Contrasted with pluralistic ignorance

The false-consensus effect can be contrasted with pluralistic ignorance, an error in which people privately disapprove of what seems to be the majority view (regarding a norm or belief) but publicly support it, when the majority in fact shares their private disapproval. While the false-consensus effect leads people to wrongly believe that the majority agrees with them (when the majority, in fact, openly disagrees with them), pluralistic ignorance leads people to wrongly believe that they disagree with the majority (when the majority, in fact, covertly agrees with them). The false-consensus effect does not, however, deny that pluralistic ignorance can bias the estimates of minority and majority alike. For example, the likelihood that intimate partner violence has occurred might be underestimated by abusing and nonabusing partners alike; what the false-consensus effect predicts is only that abusing partners will perceive intimate partner violence to be more common than nonabusing partners do.
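The contrast can be stated schematically (an informal rendering, not from the original article). Writing $a_i$ for person $i$'s private attitude toward a claim $X$, $b_i$ for their public behaviour, $\hat{m}_i$ for their estimate of the majority attitude, and $m$ for the actual private majority attitude:

$$ \begin{aligned} \text{False consensus:}\quad & a_i = X,\ \hat{m}_i = X,\ m = \neg X \\ \text{Pluralistic ignorance:}\quad & a_i = \neg X,\ b_i = X,\ \hat{m}_i = X,\ m = \neg X \end{aligned} $$

In the first case the error is projecting one's own view onto an openly disagreeing majority; in the second, everyone's outward compliance leads each person to misread a majority that privately agrees with them.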

Knowledge

From Wikipedia, the free encyclopedia

Knowledge is a familiarity, awareness, or understanding of someone or something, such as facts (descriptive knowledge), skills (procedural knowledge), or objects (acquaintance knowledge). By most accounts, knowledge can be acquired in many different ways and from many sources, including but not limited to perception, reason, memory, testimony, scientific inquiry, education, and practice. The philosophical study of knowledge is called epistemology.

The term "knowledge" can refer to a theoretical or practical understanding of a subject. It can be implicit (as with practical skill or expertise) or explicit (as with the theoretical understanding of a subject); formal or informal; systematic or particular. The philosopher Plato famously pointed out the need for a distinction between knowledge and true belief in the Theaetetus, leading many to attribute to him a definition of knowledge as "justified true belief". The difficulties with this definition raised by the Gettier problem have been the subject of extensive debate in epistemology for more than half a century.

Theories of knowledge

Robert Reid, Knowledge (1896). Thomas Jefferson Building, Washington, D.C.

The eventual demarcation of philosophy from science was made possible by the notion that philosophy's core was "theory of knowledge," a theory distinct from the sciences because it was their foundation... Without this idea of a "theory of knowledge," it is hard to imagine what "philosophy" could have been in the age of modern science.

Knowledge is the primary subject of the field of epistemology, which studies what we know, how we come to know it, and what it means to know something. Defining knowledge is an important aspect of epistemology: it does not suffice to have a belief; one must also have good reasons for it, since otherwise there would be no reason to prefer one belief over another.

The definition of knowledge is a matter of ongoing debate among epistemologists. The classical definition, described but not ultimately endorsed by Plato, specifies that a statement must meet three criteria to be considered knowledge: it must be justified, true, and believed. Epistemologists today generally agree that these conditions are not sufficient, as various Gettier cases are thought to demonstrate. A number of alternative definitions have been proposed, including Robert Nozick's proposal that all instances of knowledge must 'track the truth' and Simon Blackburn's proposal that those who hold a justified true belief 'through a defect, flaw, or failure' fail to have knowledge. Richard Kirkham suggests that our definition of knowledge requires that the evidence for the belief necessitate its truth.
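The classical analysis can be written compactly (a standard schematic rendering of "justified true belief", not a formula from any one cited author): a subject $s$ knows a proposition $p$ just in case three conditions hold jointly,

$$ K_s(p) \iff \underbrace{p}_{\text{truth}} \;\wedge\; \underbrace{B_s(p)}_{\text{belief}} \;\wedge\; \underbrace{J_s(p)}_{\text{justification}} $$

Gettier cases are taken to show that the right-to-left direction fails: a subject can satisfy all three conditions and yet, by epistemic luck, fail to know, so the conjunction is at best necessary rather than sufficient.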

In contrast to this approach, Ludwig Wittgenstein observed, following Moore's paradox, that one can say "He believes it, but it isn't so," but not "He knows it, but it isn't so." He goes on to argue that these do not correspond to distinct mental states, but rather to distinct ways of talking about conviction. What is different here is not the mental state of the speaker, but the activity in which they are engaged. For example, on this account, to know that the kettle is boiling is not to be in a particular state of mind, but to perform a particular task with the statement that the kettle is boiling. Wittgenstein sought to bypass the difficulty of definition by looking to the way "knowledge" is used in natural languages. He saw knowledge as a case of a family resemblance. Following this idea, "knowledge" has been reconstructed as a cluster concept that points out relevant features but that is not adequately captured by any definition.

Self-knowledge

“Self-knowledge” usually refers to a person's knowledge of their own sensations, thoughts, beliefs, and other mental states. A number of questions regarding self-knowledge have been the subject of extensive debates in philosophy, including whether self-knowledge differs from other types of knowledge, whether we have privileged self-knowledge compared to knowledge of other minds, and the nature of our acquaintance with ourselves. David Hume famously expressed skepticism about whether we could ever have self-knowledge over and above our immediate awareness of a "bundle of perceptions", which was part of his broader skepticism about personal identity.

The value of knowledge

Los portadores de la antorcha (The Torch-Bearers) – Sculpture by Anna Hyatt Huntington symbolizing the transmission of knowledge from one generation to the next (Ciudad Universitaria, Madrid, Spain)

It is generally assumed that knowledge is more valuable than mere true belief. If so, what is the explanation? A formulation of the value problem in epistemology first occurs in Plato's Meno. Socrates points out to Meno that a man who knew the way to Larissa could lead others there correctly. But so, too, could a man who had true beliefs about how to get there, even if he had not gone there or had any knowledge of Larissa. Socrates says that it seems that both knowledge and true opinion can guide action. Meno then wonders why knowledge is valued more than true belief and why knowledge and true belief are different. Socrates responds that knowledge is more valuable than mere true belief because it is tethered or justified. Justification, or working out the reason for a true belief, locks down true belief.

The problem is to identify what (if anything) makes knowledge more valuable than mere true belief, or more valuable than a mere minimal conjunction of its components (such as justification, safety, sensitivity, statistical likelihood, and anti-Gettier conditions) on an analysis that conceives of knowledge as divided into components. Knowledge-first epistemological theories, which posit knowledge as fundamental, are notable exceptions to such analyses. The value problem re-emerged in the philosophical literature on epistemology in the twenty-first century, following the rise of virtue epistemology in the 1980s, partly because of the obvious link to the concept of value in ethics.

In contemporary philosophy, epistemologists including Ernest Sosa, John Greco, Jonathan Kvanvig, Linda Zagzebski, and Duncan Pritchard have defended virtue epistemology as a solution to the value problem. They argue that epistemology should also evaluate the "properties" of people as epistemic agents (i.e. intellectual virtues), rather than merely the properties of propositions and propositional mental attitudes.

Scientific knowledge

The development of the scientific method has made a significant contribution to how knowledge of the physical world and its phenomena is acquired. To be termed scientific, a method of inquiry must be based on gathering observable and measurable evidence subject to specific principles of reasoning and experimentation. The scientific method consists of the collection of data through observation and experimentation, and the formulation and testing of hypotheses. Science, and the nature of scientific knowledge, have also become subjects of philosophy. As science itself has developed, scientific knowledge has come to include broader usage in the soft sciences such as biology and the social sciences – discussed elsewhere as meta-epistemology or genetic epistemology, and related to some extent to the "theory of cognitive development". (Note that "epistemology" is the study of knowledge and how it is acquired, while science has been described as "the process used everyday to logically complete thoughts through inference of facts determined by calculated experiments.") Sir Francis Bacon was instrumental in the historical development of the scientific method; his works established and popularized an inductive methodology for scientific inquiry. His famous aphorism, "knowledge is power", is found in the Meditations Sacrae (1597).

Until recent times, at least in the Western tradition, it was simply taken for granted that knowledge was something possessed only by humans – and probably only adult humans at that. Sometimes the notion might stretch to society as such, as in (e.g.) "the knowledge possessed by the Coptic culture" (as opposed to that of its individual members), but that was not assured either. Nor was it usual to consider unconscious knowledge in any systematic way until Freud popularized that approach.

Those who use the phrase "scientific knowledge" do not necessarily claim certainty, since scientists can never be absolutely certain when they are correct and when they are not. It is thus an irony of proper scientific method that one must doubt even when correct, in the hope that this practice will lead to greater convergence on the truth in general.

Situated knowledge

Situated knowledge is knowledge specific to a particular situation. The term was used by Donna Haraway as an extension of the feminist approach of "successor science" suggested by Sandra Harding, one which "offers a more adequate, richer, better account of a world, in order to live in it well and in critical, reflexive relation to our own as well as others' practices of domination and the unequal parts of privilege and oppression that makes up all positions." This situating partially transforms science into narrative, which Arturo Escobar describes as "neither fictions nor supposed facts": a historical texture woven of fact and fiction. As Escobar explains further, "even the most neutral scientific domains are narratives in this sense," and he insists that rather than this being a reason to dismiss science as a trivial matter of contingency, "it is to treat (this narrative) in the most serious way, without succumbing to its mystification as 'the truth' or to the ironic skepticism common to many critiques."

Haraway's argument stems from the limitations of human perception, as well as the overemphasis on the sense of vision in science. According to Haraway, vision in science has been "used to signify a leap out of the marked body and into a conquering gaze from nowhere" – the "gaze that mythically inscribes all the marked bodies, that makes the unmarked category claim the power to see and not be seen, to represent while escaping representation." This limits science's view of itself as a potential player in the creation of knowledge, producing the stance of a "modest witness". This is what Haraway terms a "god trick", the aforementioned representing while escaping representation. To avoid it, "Haraway perpetuates a tradition of thought which emphasizes the importance of the subject in terms of both ethical and political accountability".

Some methods of generating knowledge, such as trial and error or learning from experience, tend to create highly situational knowledge. Situational knowledge is often embedded in language, culture, or traditions. This integration of situational knowledge alludes to the community and its attempts to collect subjective perspectives into an embodiment "of views from somewhere." Knowledge is also said to be related to the capacity for acknowledgement in human beings.

Even though Haraway's arguments are largely grounded in feminist studies, her idea of different worlds, as well as the skeptical stance of situated knowledge, is present in the main arguments of post-structuralism. Fundamentally, both argue that knowledge is contingent on history, power, and geography; both reject universal rules, laws, or elementary structures; and both treat power as an inherited trait of objectification.

Partial knowledge

The parable of the blind men and the elephant suggests that people tend to project their partial experiences as the whole truth

One discipline of epistemology focuses on partial knowledge. In most cases, it is not possible to understand an information domain exhaustively; our knowledge is always incomplete or partial. Most real problems have to be solved by taking advantage of a partial understanding of the problem context and problem data – unlike typical school mathematics problems, where all the data is given and a complete understanding of the formulas needed to solve them is provided (compare the false-consensus effect discussed above).

This idea is also present in the concept of bounded rationality, which assumes that in real-life situations people often have a limited amount of information and make decisions accordingly.

Religious concepts of knowledge

Christianity

In many expressions of Christianity, such as Catholicism and Anglicanism, knowledge is one of the seven gifts of the Holy Spirit.

"The knowledge that comes from the Holy Spirit, however, is not limited to human knowledge; it is a special gift, which leads us to grasp, through creation, the greatness and love of God and his profound relationship with every creature." (Pope Francis, papal audience May 21, 2014)

Hinduism

विद्या दान (Vidya Daan), i.e. knowledge sharing, is a major part of Daan, a tenet of all Dharmic religions. Hindu scriptures present two kinds of knowledge, Paroksh Gyan and Pratyaksh Gyan. Paroksh Gyan (also spelled Paroksha-Jnana) is secondhand knowledge: knowledge obtained from books, hearsay, etc. Pratyaksh Gyan (also spelled Pratyaksha-Jnana) is knowledge borne of direct experience, i.e., knowledge that one discovers for oneself. Jnana yoga ("path of knowledge") is one of three main types of yoga expounded by Krishna in the Bhagavad Gita. (It is compared and contrasted with Bhakti Yoga and Karma yoga.)

Islam

In Islam, knowledge (Arabic: علم, ʿilm) is given great significance. "The Knowing" (al-ʿAlīm) is one of the 99 names reflecting distinct attributes of God. The Qur'an asserts that knowledge comes from God (2:239), and various hadith encourage the acquisition of knowledge. Muhammad is reported to have said "Seek knowledge from the cradle to the grave" and "Verily the men of knowledge are the inheritors of the prophets". Islamic scholars, theologians, and jurists are often given the title alim, meaning "knowledgeable".

Judaism

In Jewish tradition, knowledge (Hebrew: דעת da'ath) is considered one of the most valuable traits a person can acquire. Observant Jews recite three times a day in the Amidah "Favor us with knowledge, understanding and discretion that come from you. Exalted are you, Existent-One, the gracious giver of knowledge." The Tanakh states, "A wise man gains power, and a man of knowledge maintains power", and "knowledge is chosen above gold".

The Old Testament's tree of the knowledge of good and evil contained the knowledge that separated Man from God: "And the LORD God said, Behold, the man is become as one of us, to know good and evil..." (Genesis 3:22)

Archetype

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Archetype

The concept of an archetyp...