
Tuesday, February 20, 2024

Atomic Age

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Atomic_Age
An early nuclear power plant that used atomic energy to generate electricity

The Atomic Age, also known as the Atomic Era, is the period of history following the detonation of the first nuclear weapon, The Gadget at the Trinity test in New Mexico, on 16 July 1945, during World War II. Although nuclear chain reactions had been hypothesized in 1933 and the first artificial self-sustaining nuclear chain reaction (Chicago Pile-1) had taken place in December 1942, the Trinity test and the ensuing bombings of Hiroshima and Nagasaki that ended World War II represented the first large-scale use of nuclear technology and ushered in profound changes in sociopolitical thinking and the course of technological development.

While atomic power was promoted for a time as the epitome of progress and modernity, entering the nuclear era also carried frightful implications: nuclear warfare, the Cold War, mutual assured destruction, nuclear proliferation, and the risk of nuclear disaster (potentially as extreme as anthropogenic global nuclear winter), alongside beneficial civilian applications such as nuclear medicine. It is no easy matter to fully segregate peaceful uses of nuclear technology from military or terrorist uses (such as the fabrication of dirty bombs from radioactive waste), which complicated the development of a global nuclear-power export industry from the outset.

In 1973, with the nuclear power industry flourishing, the United States Atomic Energy Commission predicted that, by the turn of the 21st century, one thousand reactors would be producing electricity for homes and businesses across the U.S. However, the "nuclear dream" fell far short of what was promised: nuclear technology produced a range of social problems, from the nuclear arms race to nuclear meltdowns, and the unresolved difficulties of bomb plant cleanup and civilian plant waste disposal and decommissioning. After 1973, reactor orders declined sharply as electricity demand fell and construction costs rose, and many orders and partially completed plants were cancelled.

By the late 1970s, nuclear power had suffered a remarkable international destabilization: it faced economic difficulties and widespread public opposition, which came to a head with the Three Mile Island accident in 1979 and the Chernobyl disaster in 1986, both of which adversely affected the nuclear power industry for many decades.

Early years

In 1901, Frederick Soddy and Ernest Rutherford discovered that radioactivity was part of the process by which atoms changed from one kind to another, involving the release of energy. Soddy wrote in popular magazines that radioactivity was a potentially "inexhaustible" source of energy, and offered a vision of an atomic future where it would be possible to "transform a desert continent, thaw the frozen poles, and make the whole earth one smiling Garden of Eden." The promise of an "atomic age," with nuclear energy as the global, utopian technology for the satisfaction of human needs, has been a recurring theme ever since. But "Soddy also saw that atomic energy could possibly be used to create terrible new weapons".

The concept of a nuclear chain reaction was hypothesized in 1933, shortly after Chadwick's discovery of the neutron. Only a few years later, in December 1938 nuclear fission was discovered by Otto Hahn and his assistant Fritz Strassmann. Hahn understood that a "burst" of the atomic nuclei had occurred. Lise Meitner and Otto Frisch gave a full theoretical interpretation and named the process "nuclear fission". The first artificial self-sustaining nuclear chain reaction (Chicago Pile-1, or CP-1) took place in December 1942 under the leadership of Enrico Fermi.

In 1945, the pocketbook The Atomic Age heralded the untapped atomic power in everyday objects and depicted a future where fossil fuels would go unused. One science writer, David Dietz, wrote that instead of filling the gas tank of your car two or three times a week, you would travel for a year on a pellet of atomic energy the size of a vitamin pill. Glenn T. Seaborg, who chaired the Atomic Energy Commission, wrote that "there will be nuclear powered earth-to-moon shuttles, nuclear powered artificial hearts, plutonium heated swimming pools for SCUBA divers, and much more".

World War II

The phrase Atomic Age was coined by William L. Laurence, a journalist with The New York Times, who became the official journalist for the Manhattan Project which developed the first nuclear weapons. He witnessed both the Trinity test and the bombing of Nagasaki and went on to write a series of articles extolling the virtues of the new weapon. His reporting before and after the bombings helped to spur public awareness of the potential of nuclear technology and in part motivated development of the technology in the U.S. and in the Soviet Union. The Soviet Union would go on to test its first nuclear weapon in 1949.

In 1949, U.S. Atomic Energy Commission chairman, David Lilienthal stated that "atomic energy is not simply a search for new energy, but more significantly a beginning of human history in which faith in knowledge can vitalize man's whole life".

1950s

This view of downtown Las Vegas shows a mushroom cloud in the background. Scenes such as this were typical during the 1950s. From 1951 to 1962 the government conducted 100 atmospheric tests at the nearby Nevada Test Site.

The phrase gained popularity as a feeling of nuclear optimism emerged in the 1950s in which it was believed that all power generators in the future would be atomic in nature. The atomic bomb would render all conventional explosives obsolete and nuclear power plants would do the same for power sources such as coal and oil. There was a general feeling that everything would use a nuclear power source of some sort, in a positive and productive way, from irradiating food to preserve it, to the development of nuclear medicine. There would be an age of peace and plenty in which atomic energy would "provide the power needed to desalinate water for the thirsty, irrigate the deserts for the hungry, and fuel interstellar travel deep into outer space". This use would render the Atomic Age as significant a step in technological progress as the first smelting of bronze, of iron, or the commencement of the Industrial Revolution.

This included even cars, leading Ford to display the Ford Nucleon concept car to the public in 1958. There were also promises of golf balls that could always be found and of nuclear-powered aircraft, on which the U.S. federal government even spent US$1.5 billion in research. Nuclear policymaking became almost a collective technocratic fantasy, or at least was driven by fantasy:

The very idea of splitting the atom had an almost magical grip on the imaginations of inventors and policymakers. As soon as someone said—in an even mildly credible way—that these things could be done, then people quickly convinced themselves ... that they would be done.

In the US, military planners "believed that demonstrating the civilian applications of the atom would also affirm the American system of private enterprise, showcase the expertise of scientists, increase personal living standards, and defend the democratic lifestyle against communism".

Some media reports predicted that thanks to the giant nuclear power stations of the near future electricity would soon become much cheaper and that electricity meters would be removed, because power would be "too cheap to meter."

When the Shippingport reactor went online in 1957 it produced electricity at a cost roughly ten times that of coal-fired generation. Scientists at the AEC's own Brookhaven Laboratory "wrote a 1958 report describing accident scenarios in which 3,000 people would die immediately, with another 40,000 injured".

However, Shippingport was an experimental reactor using highly enriched uranium (unlike most power reactors), originally intended for a nuclear-powered aircraft carrier that was cancelled. Kenneth Nichols, a consultant for the Connecticut Yankee and Yankee Rowe nuclear power stations, wrote that while they were considered "experimental" and not expected to be competitive with coal and oil, they "became competitive because of inflation ... and the large increase in price of coal and oil." He wrote that for nuclear power stations the capital cost is the major cost factor over the life of the plant, hence "antinukes" try to increase costs and building times with changing regulations and lengthy hearings, so that "it takes almost twice as long to build a (U.S.-designed boiling-water or pressurised water) atomic power plant in the United States as in France, Japan, Taiwan or South Korea." French pressurised-water plants supply about 60% of France's electric power and have proven to be much cheaper than oil or coal.

Fear of possible atomic attack from the Soviet Union caused U.S. school children to participate in "duck and cover" civil defense drills.

Atomic City

During the 1950s, Las Vegas, Nevada, earned the nickname "Atomic City" for becoming a hotspot where tourists would gather to watch above-ground nuclear weapons tests taking place at the Nevada Test Site. Following the detonation of Able, one of the first atomic bombs dropped at the Nevada Test Site, the Las Vegas Chamber of Commerce began advertising the tests as an entertainment spectacle to tourists.

The detonations proved popular, and casinos throughout the city capitalised on the tests by advertising hotel rooms or rooftops which offered views of the testing site or by planning "Dawn Bomb Parties" where people would come together to celebrate the detonations. Most parties started at midnight, and musicians would perform at the venues until 4:00 a.m., when the party would briefly stop so guests could silently watch the detonation. Some casinos capitalised on the tests further by creating so-called "atomic cocktails", a mixture of vodka, cognac, sherry and champagne.

Meanwhile, groups of tourists would drive out into the desert with family or friends to watch the detonations.

Despite the health risks associated with nuclear fallout, tourists and viewers were told simply to "shower". In later years, however, many who had worked at the test site or lived in areas exposed to nuclear fallout fell ill, with higher rates of cancer and premature death.

1960s

By exploiting the peaceful uses of the "friendly atom" in medical applications, earth removal and, subsequently, in nuclear power plants, the nuclear industry and government sought to allay public fears about nuclear technology and promote the acceptance of nuclear weapons. At the peak of the Atomic Age, the United States government initiated Operation Plowshare, involving "peaceful nuclear explosions". The United States Atomic Energy Commission chairman announced that the Plowshares project was intended to "highlight the peaceful applications of nuclear explosive devices and thereby create a climate of world opinion that is more favorable to weapons development and tests".

Project Plowshare "was named directly from the Bible itself, specifically Micah 4:3, which states that God will beat swords into ploughshares, and spears into pruning hooks, so that no country could lift up weapons against another". Proposed uses included widening the Panama Canal, constructing a new sea-level waterway through Nicaragua nicknamed the Pan-Atomic Canal, cutting paths through mountainous areas for highways, and connecting inland river systems. Other proposals involved blasting caverns for water, natural gas, and petroleum storage. It was proposed to plant underground atomic bombs to extract shale oil in eastern Utah and western Colorado. Serious consideration was also given to using these explosives for various mining operations. One proposal suggested using nuclear blasts to connect underground aquifers in Arizona. Another plan involved surface blasting on the western slope of California's Sacramento Valley for a water transport project. However, there were many negative impacts from Project Plowshare's 27 nuclear explosions. Consequences included blighted land, relocated communities, tritium-contaminated water, radioactivity, and fallout from debris being hurled high into the atmosphere. These were ignored and downplayed until the program was terminated in 1977, due in large part to public opposition, after $770 million had been spent on the project.

The Thunderbirds TV series presented a fleet of vehicles imagined to be entirely nuclear-powered, as shown in the cutaway drawings featured in its tie-in comic books.

The term "atomic age" was initially used in a positive, futuristic sense, but by the 1960s the threats posed by nuclear weapons had begun to edge out nuclear power as the dominant motif of the atom.

1970s to 1990s

A photograph taken in the abandoned city of Pripyat. The Chernobyl nuclear power plant can be seen on the horizon.

French advocates of nuclear power developed an aesthetic vision of nuclear technology as art to bolster support for the technology. Leclerq compares the nuclear cooling tower to some of the grandest architectural monuments of Western culture:

The age in which we live has, for the public, been marked by the nuclear engineer and the gigantic edifices he has created. For builders and visitors alike, nuclear power plants will be considered the cathedrals of the 20th century. Their syncretism mingles the conscious and the unconscious, religious fulfilment and industrial achievement, the limitations of uses of materials and boundless artistic inspiration, utopia come true and the continued search for harmony.

In 1973, the United States Atomic Energy Commission predicted that, by the turn of the 21st century, one thousand reactors would be producing electricity for homes and businesses across the USA. But after 1973, reactor orders declined sharply as electricity demand fell and construction costs rose. Many orders and partially completed plants were cancelled.

Nuclear power has proved controversial since the 1970s. Highly radioactive materials may overheat and escape from the reactor building. Nuclear waste (spent nuclear fuel) needs to be regularly removed from the reactors and disposed of safely for up to a million years, so that it does not pollute the environment. Recycling of nuclear waste has been discussed, but it creates plutonium which can be used in weapons, and in any case still leaves much unwanted waste to be stored and disposed of. Large, purpose-built facilities for long-term disposal of nuclear waste have been difficult to site, and have not yet reached fruition.

By the late 1970s, nuclear power had suffered a remarkable international destabilization: it faced economic difficulties and widespread public opposition, which came to a head with the Three Mile Island accident in 1979 and the Chernobyl disaster in 1986, both of which adversely affected the nuclear power industry for decades thereafter. A cover story in the 11 February 1985 issue of Forbes magazine commented on the overall management of the nuclear power program in the United States:

The failure of the U.S. nuclear power program ranks as the largest managerial disaster in business history, a disaster on a monumental scale ... only the blind, or the biased, can now think that the money has been well spent. It is a defeat for the U.S. consumer and for the competitiveness of U.S. industry, for the utilities that undertook the program and for the private enterprise system that made it possible.

So, in a period of just over 30 years, the early dramatic rise of nuclear power went into an equally meteoric reverse. With no other energy technology has there been such a conjunction of rapid and revolutionary international emergence, followed so quickly by an equally transformative demise.

21st century

The 2011 Fukushima Daiichi nuclear disaster in Japan, the worst nuclear accident in 25 years, displaced 50,000 households after radiation leaked into the air, soil and sea.

In the 21st century, the label "Atomic Age" connotes either a sense of nostalgia or naïveté, and is considered by many to have ended with the fall of the Soviet Union in 1991, though many historians continue to use the term to describe the era following the conclusion of the Second World War. Atomic energy and weapons continue to have a strong effect on world politics in the 21st century. Some science fiction fans use the term to describe not only the post-war era but also contemporary history up to the present day.

The nuclear power industry has improved the safety and performance of reactors and has proposed new, safer (but generally untested) reactor designs, but there is no guarantee that the reactors will be designed, built and operated correctly. Mistakes do occur: the designers of the reactors at Fukushima in Japan did not anticipate that a tsunami generated by an earthquake would disable the backup systems that were supposed to stabilize the reactors after the earthquake. According to UBS AG, the Fukushima I nuclear accidents have cast doubt on whether even an advanced economy like Japan can master nuclear safety. Catastrophic scenarios involving terrorist attacks are also conceivable. An interdisciplinary team from MIT has estimated that if nuclear power use tripled from 2005 to 2055 (2%–7%), at least four serious nuclear accidents would be expected in that period.
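Estimates of this kind scale with the number of reactor-years of operation. The sketch below shows that back-of-the-envelope arithmetic only; the fleet size and per-reactor-year accident frequency used here are illustrative assumptions, not the figures used in the MIT study.

```python
# Illustrative only: expected accidents ~ reactors x years x accident frequency.
# The inputs below are assumptions chosen for this sketch, not the MIT study's values.
average_reactors = 1000     # assumed average number of reactors operating, 2005-2055
years = 50                  # the 2005-2055 window
accident_frequency = 1e-4   # assumed serious accidents per reactor-year

reactor_years = average_reactors * years
expected_accidents = reactor_years * accident_frequency
print(f"{reactor_years} reactor-years -> roughly {expected_accidents:.0f} serious accidents expected")
# prints: 50000 reactor-years -> roughly 5 serious accidents expected
```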

In September 2012, in reaction to the Fukushima disaster, Japan announced that it would completely phase out nuclear power by 2030, although this goal became unlikely during the subsequent Abe administration. Germany planned to phase out nuclear energy completely by 2022, but nuclear power still supplied 11.9% of its electricity in 2021. In 2022, following the Russian invasion of Ukraine, the United Kingdom pledged to build up to eight new reactors to reduce its reliance on gas and oil, with the hope that nuclear power will provide about 25% of projected electricity demand.

Chronology

A large anti-nuclear demonstration was held on 6 May 1979 in Washington, D.C., when 125,000 people, including the Governor of California, attended a march and rally against nuclear power. In New York City on 23 September 1979, almost 200,000 people attended a protest against nuclear power. Anti-nuclear power protests preceded the shutdown of the Shoreham, Yankee Rowe, Millstone I, Rancho Seco, Maine Yankee, and about a dozen other nuclear power plants.

On 12 June 1982, one million people demonstrated in New York City's Central Park against nuclear weapons and for an end to the Cold War arms race. It was the largest anti-nuclear protest and the largest political demonstration in American history. International Day of Nuclear Disarmament protests were held on 20 June 1983, at 50 sites across the United States. In 1986, hundreds of people walked from Los Angeles to Washington, D.C., in the Great Peace March for Global Nuclear Disarmament. There were many Nevada Desert Experience protests and peace camps at the Nevada Test Site during the 1980s and 1990s.

On 1 May 2005, forty thousand anti-nuclear and anti-war protesters marched past the United Nations in New York, 60 years after the atomic bombings of Hiroshima and Nagasaki. This was the largest anti-nuclear rally in the U.S. in several decades.

Discovery and development

Nuclear arms deployment

"Atoms for Peace"

Three Mile Island and Chernobyl

Nuclear arms reduction

  • 8 December 1987 – The Intermediate-Range Nuclear Forces Treaty is signed in Washington, D.C. After negotiations following the 11–12 October 1986 Reykjavík Summit, Ronald Reagan and Mikhail Gorbachev agreed to go farther than a nuclear freeze: they agreed to reduce nuclear arsenals. IRBMs and SRBMs were eliminated.
  • 31 July 1991 – As the Cold War ends, the START I treaty is signed by the United States and the Soviet Union, limiting each side to no more than 6,000 deployed nuclear warheads.
  • 1993–2007 – Nuclear power is the primary source of electricity in France. Throughout this period, France produced over three quarters of its electricity from nuclear sources (78.8%), the highest share in the world at the time.
  • 1993 – The Megatons to Megawatts Program is agreed upon by Russia and the United States and begins to be implemented in 1995. When it is completed in 2013, five hundred tonnes of uranium derived from 20,000 nuclear warheads from Russia will have been converted from weapons-grade to reactor-grade uranium and used in United States nuclear plants to generate electricity. This has provided 10% of the electrical power of the U.S. (50% of its nuclear power) during the 1995–2013 period.
  • 2006 – Patrick Moore, an early member of Greenpeace, and environmentalists such as Stewart Brand suggest the deployment of more advanced nuclear power technology for electric power generation (such as pebble-bed reactors) to combat global warming.
  • 21 November 2006 – Implementation of the ITER fusion power reactor project near Cadarache, France, begins. Construction is to be completed in 2016, with the hope that the research conducted there will allow the introduction of practical commercial fusion power plants by 2050.
  • 2006–2009 – A number of nuclear engineers begin to suggest that, to combat global warming, it would be more efficient to build nuclear reactors that operate on the thorium cycle.
  • 8 April 2010 – The New START treaty is signed by the United States and Russia in Prague. It mandates the eventual reduction by both sides to no more than 1,550 deployed strategic nuclear weapons each.

Fukushima

Influence on popular culture

Cover of Atomic War number one, November 1952

Identity (social science)

From Wikipedia, the free encyclopedia
 
Identity is the qualities, beliefs, personality traits, appearance, and/or expressions that characterize a person or a group.

Identity emerges during childhood as children start to comprehend their self-concept, and it remains a consistent aspect throughout different stages of life. Identity is shaped by social and cultural factors and how others perceive and acknowledge one's characteristics. The etymology of the term "identity" from the Latin noun identitas emphasizes an individual's mental image of themselves and their "sameness with others". Identity encompasses various aspects such as occupational, religious, national, ethnic or racial, gender, educational, generational, and political identities, among others.

Identity serves multiple functions, acting as a "self-regulatory structure" that provides meaning, direction, and a sense of self-control. It fosters internal harmony and serves as a behavioral compass, enabling individuals to orient themselves towards the future and establish long-term goals. As an active process, it profoundly influences an individual's capacity to adapt to life events and achieve a state of well-being. However, identity originates from traits or attributes that individuals may have little or no control over, such as their family background or ethnicity.

In sociology, emphasis is placed on collective identity, in which an individual's identity is strongly associated with role-behavior or the collection of group memberships that define them. According to Peter Burke, "Identities tell us who we are and they announce to others who we are." Identities subsequently guide behavior, leading "fathers" to behave like "fathers" and "nurses" to act like "nurses."

In psychology, the term "identity" is most commonly used to describe personal identity, or the distinctive qualities or traits that make an individual unique. Identities are strongly associated with self-concept, self-image (one's mental model of oneself), self-esteem, and individuality. Individuals' identities are situated, but also contextual, situationally adaptive and changing. Despite their fluid character, identities often feel as if they are stable ubiquitous categories defining an individual, because of their grounding in the sense of personal identity (the sense of being a continuous and persistent self).

Usage

Mark Mazower noted in 1998: "At some point in the 1970s this term ["identity"] was borrowed from social psychology and applied with abandon to societies, nations and groups."

In psychology

Erik Erikson (1902–94) was one of the earliest psychologists to take an explicit interest in identity. An essential feature of Erikson's theory of psychosocial development was the idea of ego identity (often referred to as the self), described as an individual's personal sense of continuity. He suggested that people attain this feeling throughout their lives as they develop, and that doing so is an ongoing process. The ego identity consists of two main features: one's personal characteristics and development, and the culmination of the social and cultural factors and roles that impact one's identity. Erikson's theory describes eight distinct stages across the lifespan, each characterized by a conflict between the inner, personal world and the outer, social world of an individual. Erikson identified the conflict of identity as occurring primarily during adolescence and described potential outcomes that depend on how one deals with this conflict. Those who do not manage a resynthesis of childhood identifications are seen as being in a state of 'identity diffusion', whereas those who retain their given identities unquestioned have 'foreclosed' identities. On some readings of Erikson, the development of a strong ego identity, along with proper integration into a stable society and culture, leads to a stronger sense of identity in general. Accordingly, a deficiency in either of these factors may increase the chance of an identity crisis or confusion.

The "Neo-Eriksonian" identity status paradigm emerged in 1966, driven largely by the work of James Marcia. This model focuses on the concepts of exploration and commitment. The central idea is that an individual's sense of identity is determined in large part by the degrees to which a person has made certain explorations and the extent to which they have commitments to those explorations or a particular identity. A person may display either relative weakness or strength in terms of both exploration and commitments. When assigned categories, there were four possible results: identity diffusion, identity foreclosure, identity moratorium, and identity achievement. Diffusion is when a person avoids or refuses both exploration and making a commitment. Foreclosure occurs when a person does make a commitment to a particular identity but neglected to explore other options. Identity moratorium is when a person avoids or postpones making a commitment but is still actively exploring their options and different identities. Lastly, identity achievement is when a person has both explored many possibilities and has committed to their identity.

Although the self is distinct from identity, the literature of self-psychology can offer some insight into how identity is maintained. From the vantage point of self-psychology, there are two areas of interest: the processes by which a self is formed (the "I"), and the actual content of the schemata which compose the self-concept (the "Me"). In the latter field, theorists have shown interest in relating the self-concept to self-esteem, the differences between complex and simple ways of organizing self-knowledge, and the links between those organizing principles and the processing of information.

Weinreich's identity variant similarly includes the categories of identity diffusion, foreclosure and crisis, but with a somewhat different emphasis. Here, with respect to identity diffusion for example, an optimal level is interpreted as the norm, as it is unrealistic to expect an individual to resolve all their conflicted identifications with others; therefore we should be alert to individuals with levels which are much higher or lower than the norm – highly diffused individuals are classified as diffused, and those with low levels as foreclosed or defensive. Weinreich applies the identity variant in a framework which also allows for the transition from one variant to another by way of biographical experiences and resolution of conflicted identifications situated in various contexts – for example, an adolescent going through family break-up may be in one state, whereas later, in a stable marriage with a secure professional role, they may be in another. Hence, though there is continuity, there is also development and change.

Laing's definition of identity closely follows Erikson's, in emphasising the past, present and future components of the experienced self. He also develops the concept of the "metaperspective of self", i.e. the self's perception of the other's view of self, which has been found to be extremely important in clinical contexts such as anorexia nervosa. Harré also conceptualises components of self/identity: the "person" (the unique being I am to myself and others), aspects of self (a totality of attributes, including beliefs about one's own characteristics and life history), and the personal characteristics displayed to others.

In social psychology

At a general level, self-psychology is compelled to investigate the question of how the personal self relates to the social environment. To the extent that these theories place themselves in the tradition of "psychological" social psychology, they focus on explaining an individual's actions within a group in terms of mental events and states. However, some "sociological" social psychology theories go further by attempting to deal with the issue of identity at both the levels of individual cognition and of collective behaviour.

Collective identity

Many people gain a sense of positive self-esteem from their identity groups, which furthers a sense of community and belonging. Another issue that researchers have attempted to address is the question of why people engage in discrimination, i.e., why they tend to favour those they consider a part of their "in-group" over those considered to be outsiders. Both questions have been given extensive attention by researchers working in the social identity tradition. For example, in work relating to social identity theory, it has been shown that merely crafting cognitive distinction between in- and out-groups can lead to subtle effects on people's evaluations of others.

Different social situations also compel people to attach themselves to different self-identities which may cause some to feel marginalized, switch between different groups and self-identifications, or reinterpret certain identity components. These different selves lead to constructed images dichotomized between what people want to be (the ideal self) and how others see them (the limited self). Educational background and occupational status and roles significantly influence identity formation in this regard.

Identity formation strategies

Another issue of interest in social psychology is related to the notion that there are certain identity formation strategies which a person may use to adapt to the social world. Cote and Levine developed a typology describing the different patterns of behavior that individuals may display. Their typology includes:

Cote and Levine's identity formation strategy typology:

  • Refuser: develops cognitive blocks that prevent adoption of adult role-schemas (psychological signs); engages in childlike behavior (personality signs); shows extensive dependency upon others and no meaningful engagement with the community of adults (social signs).
  • Drifter: possesses greater psychological resources than the Refuser, i.e., intelligence and charisma (psychological signs); is apathetic toward application of psychological resources (personality signs); has no meaningful engagement with or commitment to adult communities (social signs).
  • Searcher: has a sense of dissatisfaction due to high personal and social expectations (psychological signs); shows disdain for imperfections within the community (personality signs); interacts to some degree with role-models, but ultimately these relationships are abandoned (social signs).
  • Guardian: possesses clear personal values and attitudes, but also a deep fear of change (psychological signs); sense of personal identity is almost exhausted by sense of social identity (personality signs); has an extremely rigid sense of social identity and strong identification with adult communities (social signs).
  • Resolver: consciously desires self-growth (psychological signs); accepts personal skills and competencies and uses them actively (personality signs); is responsive to communities that provide opportunity for self-growth (social signs).

Kenneth Gergen formulated additional classifications, which include the strategic manipulator, the pastiche personality, and the relational self. The strategic manipulator is a person who begins to regard all senses of identity merely as role-playing exercises, and who gradually becomes alienated from their social self. The pastiche personality abandons all aspirations toward a true or "essential" identity, instead viewing social interactions as opportunities to play out, and hence become, the roles they play. Finally, the relational self is a perspective by which persons abandon all sense of exclusive self, and view all sense of identity in terms of social engagement with others. For Gergen, these strategies follow one another in phases, and they are linked to the increase in popularity of postmodern culture and the rise of telecommunications technology.

In social anthropology

Anthropologists have most frequently employed the term identity to refer to this idea of selfhood in a loosely Eriksonian way: properties based on the uniqueness and individuality which make a person distinct from others. Identity became of more interest to anthropologists with the emergence of modern concerns with ethnicity and social movements in the 1970s. This was reinforced by an appreciation, following the trend in sociological thought, of the manner in which the individual is affected by and contributes to the overall social context. At the same time, the Eriksonian approach to identity remained in force, with the result that identity has continued until recently to be used in a largely socio-historical way to refer to qualities of sameness in relation to a person's connection to others and to a particular group of people.

Two contrasting approaches have shaped this debate. The first favours a primordialist approach which takes the sense of self and belonging to a collective group as a fixed thing, defined by objective criteria such as common ancestry and common biological characteristics. The second, rooted in social constructionist theory, takes the view that identity is formed by a predominantly political choice of certain characteristics. In so doing, it questions the idea that identity is a natural given, characterised by fixed, supposedly objective criteria. Both approaches need to be understood in their respective political and historical contexts, characterised by debate on issues of class, race and ethnicity. While they have been criticized, they continue to exert an influence on approaches to the conceptualisation of identity today.

These different explorations of 'identity' demonstrate how difficult a concept it is to pin down. Since identity is a virtual thing, it is impossible to define it empirically. Discussions of identity use the term with different meanings, from fundamental and abiding sameness to fluidity, contingency, and negotiation. Brubaker and Cooper note a tendency in many scholars to confuse identity as a category of practice and as a category of analysis. Indeed, many scholars demonstrate a tendency to follow their own preconceptions of identity, following more or less the frameworks listed above, rather than taking into account the mechanisms by which the concept is crystallised as reality. In this environment, some analysts, such as Brubaker and Cooper, have suggested doing away with the concept completely. Others, by contrast, have sought to introduce alternative concepts in an attempt to capture the dynamic and fluid qualities of human social self-expression. Stuart Hall, for example, suggests treating identity as a process, to take into account the reality of diverse and ever-changing social experience. Some scholars have introduced the idea of identification, whereby identity is perceived as made up of different components that are 'identified' and interpreted by individuals. The construction of an individual sense of self is achieved by personal choices regarding who and what to associate with. Such approaches are liberating in their recognition of the role of the individual in social interaction and the construction of identity.

Anthropologists have contributed to the debate by shifting the focus of research: One of the first challenges for the researcher wishing to carry out empirical research in this area is to identify an appropriate analytical tool. The concept of boundaries is useful here for demonstrating how identity works. In the same way as Barth, in his approach to ethnicity, advocated the critical focus for investigation as being "the ethnic boundary that defines the group rather than the cultural stuff that it encloses", social anthropologists such as Cohen and Bray have shifted the focus of analytical study from identity to the boundaries that are used for purposes of identification. If identity is a kind of virtual site in which the dynamic processes and markers used for identification are made apparent, boundaries provide the framework on which this virtual site is built. They concentrated on how the idea of community belonging is differently constructed by individual members and how individuals within the group conceive ethnic boundaries.

As a non-directive and flexible analytical tool, the concept of boundaries helps both to map and to define the changeability and mutability that are characteristic of people's experiences of the self in society. While identity is a volatile, flexible and abstract 'thing', its manifestations and the ways in which it is exercised are often open to view. Identity is made evident through the use of markers such as language, dress, behaviour and choice of space, whose effect depends on their recognition by other social beings. Markers help to create the boundaries that define similarities or differences between the marker wearer and the marker perceivers; their effectiveness depends on a shared understanding of their meaning. In a social context, misunderstandings can arise due to a misinterpretation of the significance of specific markers. Equally, an individual can use markers of identity to exert influence on other people without necessarily fulfilling all the criteria that an external observer might typically associate with such an abstract identity.

Boundaries can be inclusive or exclusive depending on how they are perceived by other people. An exclusive boundary arises, for example, when a person adopts a marker that imposes restrictions on the behaviour of others. An inclusive boundary is created, by contrast, by the use of a marker with which other people are ready and able to associate. At the same time, however, an inclusive boundary will also impose restrictions on the people it has included by limiting their inclusion within other boundaries. An example of this is the use of a particular language by a newcomer in a room full of people speaking various languages. Some people may understand the language used by this person while others may not. Those who do not understand it might take the newcomer's use of this particular language merely as a neutral sign of identity. But they might also perceive it as imposing an exclusive boundary that is meant to mark them off from the person. On the other hand, those who do understand the newcomer's language could take it as an inclusive boundary, through which the newcomer associates themself with them to the exclusion of the other people present. Equally, however, it is possible that people who do understand the newcomer but who also speak another language may not want to speak the newcomer's language and so see their marker as an imposition and a negative boundary. It is possible that the newcomer is either aware or unaware of this, depending on whether they themself knows other languages or is conscious of the plurilingual quality of the people there and is respectful of it or not.

In religion

A religious identity is the set of beliefs and practices generally held by an individual, involving adherence to codified beliefs and rituals and study of ancestral or cultural traditions, writings, history, mythology, and faith and mystical experience. Religious identity refers to the personal practices related to communal faith along with rituals and communication stemming from such conviction. This identity formation begins with an association in the parents' religious contacts, and individuation requires that the person chooses the same or different religious identity than that of their parents.

The Parable of the Lost Sheep is one of the parables of Jesus. It is about a shepherd who leaves his flock of ninety-nine sheep in order to find the one which is lost. The parable of the lost sheep is an example of the rediscovery of identity. Its aim is to lay bare the nature of the divine response to the recovery of the lost, with the lost sheep representing a lost human being.

Christian meditation is a specific form of personality formation; the term is often used by practitioners to describe various forms of prayer and the process of coming to know and contemplate God.

In Western culture, personal and secular identity have been deeply influenced by the formation of Christianity. Throughout history, various Western thinkers who contributed to the development of European identity were influenced by classical cultures and incorporated elements of Greek as well as Jewish culture, leading to movements such as Philhellenism and Philosemitism.

Implications

Because identity serves multiple functions, including self-regulation, self-concept, personal control, and meaning and direction, its implications are woven into many aspects of life.

Identity changes

Contexts Influencing Identity Changes

Identity transformations can occur in various contexts, some of which include:

  1. Career Change: When individuals undergo significant shifts in their career paths or occupational identities, they face the challenge of redefining themselves within a new professional context.
  2. Gender Identity Transition: Individuals experiencing gender dysphoria may embark on a journey to align their lives with their true gender identity. This process involves profound personal and social changes to establish an authentic sense of self.
  3. National Immigration: Relocating to a new country necessitates adaptation to unfamiliar societal norms, leading to adjustments in cultural, social, and occupational identities.
  4. Identity Change due to Climate Migration: In the face of environmental challenges and forced displacement, individuals may experience shifts in their identity as they adapt to new geographical locations and cultural contexts.
  5. Adoption: Adoption entails exploring alternative familial features and reconciling with the experience of being adopted, which can significantly impact an individual's self-identity.
  6. Illness Diagnosis: The diagnosis of an illness can provoke an identity shift, altering an individual's self-perception and influencing how they navigate life. Additionally, illnesses may result in changes in abilities, which can affect occupational identity and require adaptations.

Immigration and identity

Immigration and acculturation often lead to shifts in social identity. The extent of this change depends on the disparities between the individual's heritage culture and the culture of the host country, as well as the level of adoption of the new culture versus the retention of the heritage culture. However, the effects of immigration and acculturation on identity can be moderated if the person possesses a strong personal identity. This established personal identity can serve as an "anchor" and play a "protective role" during the process of social and cultural identity transformations that occur.

Occupational identity

Identity is an ongoing and dynamic process that impacts an individual's ability to navigate life's challenges and cultivate a fulfilling existence. Within this process, occupation emerges as a significant factor that allows individuals to express and maintain their identity. Occupation encompasses not only careers or jobs but also activities such as travel, volunteering, sports, or caregiving. However, when individuals face limitations in their ability to participate or engage in meaningful activities, such as due to illness, it poses a threat to the active process and continued development of identity. Feeling socially unproductive can have detrimental effects on one's social identity. Importantly, the relationship between occupation and identity is bidirectional; occupation contributes to the formation of identity, while identity shapes decisions regarding occupational choices. Furthermore, individuals inherently seek a sense of control over their chosen occupation and strive to avoid stigmatizing labels that may undermine their occupational identity.

Navigating stigma and occupational identity

In the realm of occupational identity, individuals make choices regarding employment based on the stigma associated with certain jobs. Likewise, those already working in stigmatized occupations may employ personal rationalization to justify their career path. Factors such as workplace satisfaction and overall quality of life play significant roles in these decisions. Individuals in such jobs face the challenge of forging an identity that aligns with their values and beliefs. Crafting a positive self-concept becomes more arduous when societal standards label their work as "dirty" or undesirable. Consequently, some individuals opt not to define themselves solely by their occupation but strive for a holistic identity that encompasses all aspects of their lives, beyond their job or work. On the other hand, individuals whose identity strongly hinges on their occupation may experience a crisis if they become unable to perform their chosen work. Therefore, occupational identity necessitates an active and adaptable process that ensures both adaptation and continuity amid shifting circumstances.

Factors shaping the concept of identity

The modern notion of personal identity as a distinct and unique characteristic of individuals has evolved relatively recently in history, beginning with the first passports in the early 1900s, with the term later becoming popular in social science in the 1950s. Several factors have influenced its evolution, including:

  1. Protestant Influence: In Western societies, the Protestant tradition has underscored individuals' responsibility for their own soul or spiritual well-being, contributing to a heightened focus on personal identity.
  2. Development of Psychology: The emergence of psychology as a separate field of knowledge and study starting in the 19th century has played a significant role in shaping our understanding of identity.
  3. Rise of Privacy: The Renaissance era witnessed a growing sense of privacy, leading to increased attention and importance placed on individual identities.
  4. Specialization in Work: The industrial period brought about a shift from undifferentiated roles in feudal systems to specialized worker roles. This change impacted how individuals identified themselves in relation to their occupations.
  5. Occupation and Identity: The concept of occupation as a crucial aspect of identity was introduced by Christiansen in 1999, highlighting the influence of employment and work roles on an individual's sense of self.
  6. Focus on Gender Identity: There has been an increased emphasis on gender identity, including issues related to gender dysphoria and transgender experiences. These discussions have contributed to a broader understanding of diverse identities.
  7. Relevance of Identity in Personality Pathology: Understanding and assessing personality pathology has highlighted the significance of identity problems in comprehending individuals' psychological well-being.

Self-criticism

From Wikipedia, the free encyclopedia

Self-criticism refers to how an individual evaluates themselves. In psychology, self-criticism is typically studied and discussed as a negative personality trait in which a person has a disrupted self-identity; its opposite would be a coherent, comprehensive, and generally positive self-identity. Self-criticism is often associated with major depressive disorder. Some theorists define self-criticism as a mark of a certain type of depression (introjective depression). People with depression tend to be more self-critical than those without depression, and they often continue to display self-critical personalities even after depressive episodes. Much of the scientific focus on self-criticism stems from its association with depression.

Personality theory

Sidney Blatt has proposed a theory of personality which focuses on self-criticism and dependency. Blatt's theory is significant because he evaluates dimensions of personality as they relate to psychopathology and therapy. According to Blatt, personality characteristics affect our experience of depression, and are rooted in the development of our interpersonal interactions and self-identity. He theorizes that personality can be understood in terms of two distinct dimensions - interpersonal relatedness and self-definition. These two dimensions not only represent personality characteristics, but are products of a lifelong developmental process. Disruption in self-definition or identity leads to self-criticism, and disruption in relatedness leads to dependency. Zuroff (2016) found that self-criticism showed stability across time both as a personality trait and as an internal state. Such a finding is important as it supports the fact that self-criticism can be measured in the same manner as other personality traits.

Similar to Blatt's two personality dimensions, Aaron Beck (1983) defines social dependency and autonomy as dimensions of personality that are relevant for depression. Autonomy refers to how much the person relies on "preserving and increasing his independence, mobility, and personal rights". Furthermore, self-criticism involves holding oneself responsible for any past or present failures. Someone who is a self-critic will attribute negative events as a result of deficiencies in their own character or performance. The personality characteristics that Beck describes as self-critical are usually negative for the person experiencing them. His description of their experience with self-criticism as a personality characteristic is therefore important because it will be similar to their experience of depression.

Self-criticism as a personality trait has been linked to several negative effects. In a study examining behavior differences between personality types, Mongrain (1998) found that self-critics experienced greater negative affect, perceived support worse than others, and made fewer requests for support. Those who were high in self-criticism did not differ in the amount of support they received, only in how they accepted or requested it. Participants categorized as being higher in self-criticism had fewer interpersonal goals as well as more self-presentation goals. Among romantic partners, self-criticism predicts a decrease in agreeable comments and an increase in blaming.

Development

Given that self-criticism is typically seen as a negative personality characteristic, it is important to note how some people develop such a trait. As described by the personality theories above, self-criticism often represents a disruption in some characteristic. This disruption could be rooted back in the person's childhood experience. Children of parents who use restrictive and rejecting practices have been shown to have higher levels of self-criticism at age 12. In this same study, women displayed stable levels of self-criticism from age 12 into young adulthood, while men did not. These results show that parenting style can influence the development of self-critical personality, and these effects may potentially last into young adulthood. Another study found that women who were higher in self-criticism reported both that their father was more dominant and their parents maintained strict control and were inconsistent in their expressions of affection. Not surprisingly, these women also reported that their parents tended to seek out achievement and success from their children, as opposed to remaining passive. These studies show that certain experiences in childhood are associated with self-criticism, and the self-critical personality type then extends into later phases of development.

Child maltreatment, which is associated with the development of depression, may also be a risk factor for future self-criticism. Mothers who reported having experienced maltreatment as children also perceived themselves as less efficacious mothers. A factor analysis showed that the perception of being less efficacious was mediated by self-criticism, over and above the effects of depressive status. This research shows that self-criticism in particular plays an important role in the relationship between childhood maltreatment and maternal efficacy. In a study assessing child maltreatment and self-injury, Glassman et al. (2007) found that self-criticism specifically mediated the relationship between maltreatment and self-injury. This is particularly important because it shows that self-criticism may play a role in leading to self-injury, and understanding its origins in maltreatment could help prevent such behaviors. Given this research, it seems that self-criticism plays a role in the lasting effects of childhood maltreatment. Assessing self-criticism both when working to prevent maltreatment and when treating those who have been maltreated could therefore be a useful direction for further research in the area.

Implications for psychopathology

Self-criticism is an important aspect of personality and development, but is also significant in terms of what this trait means for psychopathology. Most theorists described above account for self-criticism as a maladaptive characteristic, so unsurprisingly many researchers have found self-criticism to be connected to depression.

Risk factor for depression

Self-criticism is associated with several other negative variables. In one sample, differences in self-criticism as a personality trait were associated with differences in perceived support, negative affect, self-image goals, and overt self-criticism. These are all characteristics that pertain to the experience of depression, revealing that self-criticism affects depression. The persistence of self-criticism as a personality trait can leave some people vulnerable to developing depression. As stated above, Blatt theorized that people who were more self-critical and focused on achievement concerns were more likely to develop a specific type of depression, which he called introjective depression. Both Blatt and Beck have developed measures to assess self-criticism and the experience of depression. In addition to the fact that many personality theorists classified self-criticism as marking a certain "type" of depression, it has been shown to be a risk factor for the development of depression.

There has been a great deal of research assessing whether certain personality characteristics can lead to depression, among them self-criticism. In one study self-criticism was a significant predictor of depression in medical students, who go through extreme stress during and after medical school. Controlling for initial symptoms, self-criticism was a stronger predictor than even previous depression status both 2 years and 10 years after the initial assessment. In a sample with a history of depression, Mongrain and Leather (2006) found that measures of self-criticism were associated with the number of past episodes of depression. The personality was indicative of depression history, but self-criticism in an interaction with immature dependence was able to predict future episodes of depression as well. In a sample of people who either currently have depression or are in remission from a depressive episode, individuals reported both higher levels of self-criticism and lower levels of self-compassion. This same study found that self-critical individuals were also at an increased risk of experiencing depression chronically over the course of their lives. Self-criticism was also able to explain the variance in depression status for currently depressed, remitted depressed, and never depressed patients, over and above other variables. Carver and Ganellen (1983) assessed self-criticism by breaking it down into three distinct categories: Overgeneralization of negative events, high standards, and self-criticism. These three categories all deal with self-critical cognitions, and are measured by the Attitude Toward Self Scale, which Carver and Ganellen created.

Treatment outcome

In addition to acting as a risk factor for depression, self-criticism also affects the efficacy of depression treatment. As a trait characteristic, self-criticism persists throughout a person's life: a person can display persistent, long-term levels of self-criticism as a personality trait, while state levels of self-criticism vary from moment to moment depending on the person's current mental state. This makes it difficult for clinicians to accurately assess decreases in self-criticism during treatment for depression. In a particular session, state levels of self-criticism may rise or fall, but it is harder to tell whether trait levels have been reduced in the long term, and a reduction in trait self-criticism matters more for effectively treating depression. In other words, it is likely easier to reduce state levels of self-criticism, so researchers developing treatments for depression should aim to treat long-term, trait self-criticism.

Change in depression symptoms may not necessarily co-occur with change in personality factors, which is problematic given that self-criticism as a personality factor has been shown to contribute to depression. One study found that positive change in depression occurred before any change in self-critical perfectionism. The authors suggested that this has implications for deciding how long to provide treatment: if treatment ends as soon as depression fades, the underlying personality characteristics that affect depression may not have changed, so extending treatment beyond the point when depression symptoms improve may give the best results. The same study also found that levels of perfectionism (which is related to a self-critical personality) predicted the rate of change in depression status.

Self-criticism corresponds to autonomy in Beck's personality model, and there has been research examining his conception of sociotropy and autonomy. Sociotropy characterizes people who are socially dependent and whose main source of distress is interpersonal relationships. Autonomy, by contrast, refers to self-critical individuals who are more concerned with independence and achievement. In a study examining treatment differences between these groups, Zettle, Haflich & Reynolds (1992) found that autonomous, self-critical individuals had better outcomes in individual therapy than in group therapy. This shows that personality characteristics can influence which kind of treatment is best for an individual, and that clinicians should be aware of these differences. Self-criticism is therefore both a warning sign for the development of depression and a factor in how depression is treated, making it an important facet of research on how this debilitating disorder might be prevented and treated.

Neuroscience

fMRI studies find that engaging in self-criticism activates areas in the lateral prefrontal cortex and dorsal anterior cingulate cortex, brain regions responsible for error detection and correction. In contrast, engaging in self-reassurance activates the left temporal pole and insula, areas previously found to be activated in compassion and empathy. People who engage in self-criticism as a psychological trait tend to show dorsolateral prefrontal cortex activity, while ventrolateral prefrontal cortex activity is found in those with the trait of self-reassurance.

Embarrassment

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Embarrassment
A woman covering her eyes as an expression of embarrassment

Embarrassment or awkwardness is an emotional state that is associated with mild to severe levels of discomfort, and which is usually experienced when someone commits (or thinks of) a socially unacceptable or frowned-upon act that is witnessed by or revealed to others. Frequently grouped with shame and guilt, embarrassment is considered a "self-conscious emotion", and it can have a profoundly negative impact on a person's thoughts or behavior.

Usually, some perception of loss of honor or dignity (or other high-value ideals) is involved, but the level and type of embarrassment depend on the situation.

Causes

Embarrassment can be personal, caused by unwanted attention to private matters, personal flaws, mishaps, or shyness. Some causes of embarrassment stem from personal actions, such as being caught in a lie or making a mistake. In many cultures, being seen nude or inappropriately dressed is a particularly stressful form of embarrassment (see modesty). Personal embarrassment can also stem from the actions of others who place the embarrassed person in a socially awkward situation, such as a parent showing one's baby pictures to friends, someone making a derogatory comment about one's appearance or behavior, discovering one is the victim of gossip, being rejected by another person (see also humiliation), being made the focus of attention (e.g., birthday celebrants, newlyweds), or even witnessing someone else's embarrassment.

Personal embarrassment is usually accompanied by some combination of blushing, sweating, nervousness, stammering, and fidgeting. Sometimes the embarrassed person tries to mask embarrassment with smiles or nervous laughter, especially in etiquette situations; such a response is more common in some cultures than others, which may lead to misunderstanding. There may also be feelings of anger, depending on the perceived seriousness of the situation, especially if the individual thinks another person is intentionally causing the embarrassment. Responses range from treating the embarrassing act as inconsequential, or even humorous, to intense apprehension or fear.

The idea that embarrassment serves an apology or appeasement function originated with Goffman, who argued that the embarrassed individual "demonstrates that he/she is at least disturbed by the fact and may prove worthy at another time". Semin and Manstead demonstrated the social functions of embarrassment: the perpetrator of knocking over a sales display (the "bad act") was deemed more likable by others if he/she appeared embarrassed than if he/she appeared unconcerned, regardless of restitution behavior (rebuilding the display). The capacity to experience embarrassment can also be seen as functional for the group or culture. Those who are not prone to embarrassment are more likely to engage in antisocial behavior: for example, adolescent boys who displayed more embarrassment were found to be less likely to engage in aggressive or delinquent behaviors, and the embarrassment exhibited by boys more likely to engage in such behavior was less than one-third of that exhibited by non-aggressive boys. Thus proneness to embarrassment (i.e., concern for how one is evaluated by others) can act as a brake on behavior that would be dysfunctional for a group or culture.

Professional embarrassment

Embarrassment can also be professional or official, especially after statements expressing confidence in a stated course of action or willful disregard for evidence. Embarrassment increases greatly in instances involving official duties or workplace facilities, large amounts of money or materials, or loss of human life. Examples include a government's failed public policy, the exposure of corrupt practices or unethical behavior, a celebrity whose personal habits receive public scrutiny or who faces legal action, or officials caught in serious personally embarrassing situations. Even small errors or miscalculations can lead to significantly greater official embarrassment if it is discovered that willful disregard for evidence or directives was involved (see, for example, the Space Shuttle Challenger disaster).

Not all official failures result in official embarrassment, even if the circumstances lead to some slight personal embarrassment for the people involved. For example, losing a close political election might cause a candidate some personal embarrassment, but it would generally be considered an honorable loss in the profession and so would not lead to professional embarrassment. Similarly, a scientist might be personally disappointed and embarrassed if one of their hypotheses were proven wrong, but would not normally suffer professional embarrassment as a result. By contrast, exposure of falsified data supporting a scientific claim would likely lead to professional embarrassment in the scientific community. Professional or official embarrassment is often accompanied by public expressions of anger, denial of involvement, or attempts to minimize the consequences. Sometimes the embarrassed entity issues press statements, removes or distances itself from lower-level employees, attempts to carry on as if nothing happened, suffers income loss, emigrates, or vanishes from public view.

Vicarious embarrassment

Vicarious embarrassment is an embarrassed feeling from observing the embarrassing actions of another person. People who rate themselves as more empathic are more likely to experience vicarious embarrassment. The effect is present whether or not the observed party is aware of the embarrassing nature of their actions, although awareness generally increases the strength of the felt vicarious embarrassment, as does an accidental (as opposed to intentional) action.

Types in social psychology

An embarrassing proposal by Antoine Watteau

One typology of embarrassment, described by Sharkey and Stafford, distinguishes six types of embarrassment:

  1. Privacy violations – for example, when a part of the body is accidentally exposed, or when there is an invasion of space, property, or information that one expects to remain private
  2. Lack of knowledge and skill – for example forgetfulness, or experiencing failure while performing a relatively easy task
  3. Criticism and rejection – another cause of embarrassment, as is being made the center of attention, whether positively or negatively
  4. Awkward acts – refer to social situations, for example inappropriate conversations, clumsiness, or ungraceful actions (such as an emotional outburst like blurting something out unintentionally) that can trigger embarrassment
  5. Appropriate image – refers to a more personal reflection of embarrassment, such as body image, clothing, and personal possessions (for example, owning an older mobile phone compared to the latest model)
  6. Environment – can also have the effect of provoking embarrassment, as when an individual in a movie theatre with his or her parents, other family, co-workers, or mixed-company peers is made uncomfortable by an unexpected occurrence of nudity in the film that the group is watching.

Another typology, by Cupach and Metts, discusses the dimensions of intended-unintended and appropriate-inappropriate behavior, and four basic types of embarrassing circumstances:

  1. Faux pas (socially awkward acts)
  2. Accidents
  3. Mistakes
  4. Failure to perform a duty or moral obligation.

Based on these types, Cupach and Metts classify two basic embarrassment situations: actor responsible and observer responsible. Actor-responsible situations arise when a person performs an act that falls short of the proficiency that social norms and expectations require, is inconsistent with role expectations, or is out of sync with a social identity. Observer-responsible situations arise when an individual becomes the focus of attention through:

  • Recognition, praise, criticism, correction, or teasing
  • Being tripped or bumped, and thereby being associated with someone acting inappropriately
  • Having information revealed publicly to another individual or peer group

Etymology

The first known written occurrence of embarrass in English was in 1664 by Samuel Pepys in his diary. The word derives from the French embarrasser, "to block" or "obstruct", whose first recorded usage was by Michel de Montaigne in 1580. The French word was derived from the Spanish embarazar, whose first recorded usage was in 1460 in the Cancionero de Stúñiga (Songbook of Stúñiga) by Álvaro de Luna. The Spanish word comes from the Portuguese embaraçar, a combination of the prefix em- (from Latin im-, "in") with baraço or baraça, "a noose" or "rope". Baraça originated before the Romans began their conquest of the Iberian Peninsula in 218 BC. Baraça could thus be related to the Celtic word barr, "tuft" (Celtic peoples settled much of Spain and Portugal beginning in the 8th century BC), but it is certainly not directly derived from it, as the substitution of r for rr in Ibero-Romance languages was not a known occurrence.

The Spanish word may come from the Italian imbarazzare, from imbarazzo, "obstacle" or "obstruction". That word came from imbarrare, "to block" or "bar", which is a combination of in-, "in" with barra, "bar" (from the Vulgar Latin barra, which is of unknown origin). The problem with this theory is that the first known usage of the word in Italian was by Bernardo Davanzati (1529–1606), long after the word had entered Spanish.

In Judaism

Embarrassing another person is considered a serious sin in Judaism. Rabbis quoted in the Babylonian Talmud state that embarrassing another person in public is akin to murder (literally "spilling blood"). Rabbi Naḥman bar Yitzḥak adds that the analogy of "spilling blood" is apt because, when a person is embarrassed, the face first flushes and then turns pale.
