
Wednesday, December 11, 2024

Ecocentrism

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Ecocentrism

Ecocentrism (/ˌɛkoʊˈsɛntrɪzəm/; from Greek: οἶκος oikos, 'house' and κέντρον kentron, 'center') is a term used by environmental philosophers and ecologists to denote a nature-centered, as opposed to human-centered (i.e., anthropocentric), system of values. The justification for ecocentrism usually consists in an ontological belief and a subsequent ethical claim. The ontological belief denies that there are any existential divisions between human and non-human nature sufficient to claim that humans are either (a) the sole bearers of intrinsic value or (b) possessors of greater intrinsic value than non-human nature. Thus the subsequent ethical claim is for an equality of intrinsic value across human and non-human nature, or biospherical egalitarianism.

Origin of term

The ecocentric ethic was conceived by Aldo Leopold and recognizes that all species, including humans, are the product of a long evolutionary process and are inter-related in their life processes. The writings of Aldo Leopold and his idea of the land ethic and good environmental management are a key element of this philosophy. Ecocentrism focuses on the biotic community as a whole and strives to maintain ecosystem composition and ecological processes. The term also finds expression in the first principle of the deep ecology movement, as formulated by Arne Næss and George Sessions in 1984, which identifies anthropocentrism, the view that humans are the center of the universe and the pinnacle of all creation, as a difficult opponent for ecocentrism.

Background

Environmental thought and the various branches of the environmental movement are often classified into two intellectual camps: those that are considered anthropocentric, or "human-centred," in orientation and those considered biocentric, or "life-centred". This division has been described in other terminology as "shallow" ecology versus "deep" ecology and as "technocentrism" versus "ecocentrism". Ecocentrism can be seen as one stream of thought within environmentalism, the political and ethical movement that seeks to protect and improve the quality of the natural environment through changes to environmentally harmful human activities by adopting environmentally benign forms of political, economic, and social organization and through a reassessment of humanity's relationship with nature. In various ways, environmentalism claims that non-human organisms and the natural environment as a whole deserve consideration when appraising the morality of political, economic, and social policies.

Environmental communication scholars suggest that anthropocentric ways of being and identities are maintained by various modes of cultural disciplinary power such as ridiculing, labelling, and silencing. Accordingly, the transition to more ecocentric ways of being and identities requires not only legal and economic structural change, but also the emergence of ecocultural practices that challenge anthropocentric disciplinary power and lead to the creation of ecocentric cultural norms.  

Relationship to other similar philosophies

Anthropocentrism

Ecocentrism is taken by its proponents to constitute a radical challenge to long-standing and deeply rooted anthropocentric attitudes in Western culture, science, and politics. Anthropocentrism is alleged to leave the case for the protection of non-human nature subject to the demands of human utility, and thus never more than contingent on the demands of human welfare. An ecocentric ethic, by contrast, is believed to be necessary in order to develop a non-contingent basis for protecting the natural world. Critics of ecocentrism have argued that it opens the doors to an anti-humanist morality that risks sacrificing human well-being for the sake of an ill-defined 'greater good'. Deep ecologist Arne Naess has identified anthropocentrism as a root cause of the ecological crisis, human overpopulation, and the extinctions of many non-human species. Lupinacci also points to anthropocentrism as a root cause of environmental degradation. Others point to the gradual historical realization that humans are not the centre of all things, that "A few hundred years ago, with some reluctance, Western people admitted that the planets, Sun and stars did not circle around their abode. In short, our thoughts and concepts though irreducibly anthropomorphic need not be anthropocentric."

Industrocentrism

Industrocentrism sees all things on earth as resources to be utilized by humans or to be commodified. This view stands in opposition to both anthropocentrism and ecocentrism.

Technocentrism

Ecocentrism is also contrasted with technocentrism (meaning values centred on technology) as two opposing perspectives on attitudes towards human technology and its ability to affect, control and even protect the environment. Ecocentrics, including "deep green" ecologists, see themselves as being subject to nature, rather than in control of it. They lack faith in modern technology and the bureaucracy attached to it. Ecocentrics will argue that the natural world should be respected for its processes and products, and that low impact technology and self-reliance is more desirable than technological control of nature. Technocentrics, including imperialists, have absolute faith in technology and industry and firmly believe that humans have control over nature. Although technocentrics may accept that environmental problems do exist, they do not see them as problems to be solved by a reduction in industry. Indeed, technocentrics see that the way forward for developed and developing countries and the solutions to our environmental problems today lie in scientific and technological advancement.

Biocentrism

The distinction between biocentrism and ecocentrism is ill-defined. Ecocentrism recognizes Earth's interactive living and non-living systems, rather than just the Earth's organisms (biocentrism), as central in importance. The term has been used by those advocating "left biocentrism", combining deep ecology with an "anti-industrial and anti-capitalist" position (David Orton et al.).

Prefrontal cortex

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Prefrontal_cortex
 
Prefrontal cortex
Brodmann areas 8, 9, 10, 11, 12, 13, 14, 24, 25, 32, 44, 45, 46, and 47 are all in the prefrontal cortex.

Details
Part of: Frontal lobe
Parts: Superior frontal gyrus, Middle frontal gyrus, Inferior frontal gyrus
Artery: Anterior cerebral, Middle cerebral
Vein: Superior sagittal sinus

Identifiers
Latin: cortex praefrontalis
MeSH: D017397
NeuroNames: 2429
NeuroLex ID: nlx_anat_090801, ilx_0109209
FMA: 224850

In mammalian brain anatomy, the prefrontal cortex (PFC) covers the front part of the frontal lobe of the cerebral cortex. It is the association cortex in the frontal lobe. The PFC contains the Brodmann areas BA8, BA9, BA10, BA11, BA12, BA13, BA14, BA24, BA25, BA32, BA44, BA45, BA46, and BA47.

This brain region is involved in a wide range of higher-order cognitive functions, including speech formation (Broca's area), gaze (frontal eye fields), working memory (dorsolateral prefrontal cortex), and risk processing (e.g. ventromedial prefrontal cortex). The basic activity of this brain region is considered to be orchestration of thoughts and actions in accordance with internal goals. Many authors have indicated an integral link between a person's will to live, personality, and the functions of the prefrontal cortex.

This brain region has been implicated in executive functions, such as planning, decision making, working memory, personality expression, moderating social behavior and controlling certain aspects of speech and language. Executive function relates to abilities to differentiate among conflicting thoughts, determine good and bad, better and best, same and different, future consequences of current activities, working toward a defined goal, prediction of outcomes, expectation based on actions, and social "control" (the ability to suppress urges that, if not suppressed, could lead to socially unacceptable outcomes).

The frontal cortex supports concrete rule learning, with more anterior regions supporting rule learning at higher levels of abstraction.

Structure

Definition

There are three possible ways to define the prefrontal cortex:

  • as the granular frontal cortex
  • as the projection zone of the medial dorsal nucleus of the thalamus
  • as that part of the frontal cortex whose electrical stimulation does not evoke movements

Granular frontal cortex

The prefrontal cortex has been defined based on cytoarchitectonics by the presence of a cortical granular layer IV. It is not entirely clear who first used this criterion. Many of the early cytoarchitectonic researchers restricted the use of the term prefrontal to a much smaller region of cortex including the gyrus rectus and the gyrus rostralis (Campbell, 1905; G. E. Smith, 1907; Brodmann, 1909; von Economo and Koskinas, 1925). In 1935, however, Jacobsen used the term prefrontal to distinguish granular prefrontal areas from agranular motor and premotor areas. In terms of Brodmann areas, the prefrontal cortex traditionally includes areas 8, 9, 10, 11, 12, 13, 14, 24, 25, 32, 44, 45, 46, and 47; however, not all of these areas are strictly granular: 44 is dysgranular, while caudal 11 and orbital 47 are agranular. The main problem with this definition is that it works well only in primates and not in nonprimates, as the latter lack a granular layer IV.

Projection zone

Defining the prefrontal cortex as the projection zone of the mediodorsal nucleus of the thalamus builds on the work of Rose and Woolsey, who showed that this nucleus projects to anterior and ventral parts of the brain in nonprimates; Rose and Woolsey, however, termed this projection zone "orbitofrontal." It seems to have been Akert who, in 1964, first explicitly suggested that this criterion could be used to define homologues of the prefrontal cortex in primates and nonprimates. This allowed the establishment of homologies despite the lack of a granular frontal cortex in nonprimates.

The projection zone definition is still widely accepted today (e.g. Fuster), although its usefulness has been questioned. Modern tract tracing studies have shown that projections of the mediodorsal nucleus of the thalamus are not restricted to the granular frontal cortex in primates. As a result, it was suggested to define the prefrontal cortex as the region of cortex that has stronger reciprocal connections with the mediodorsal nucleus than with any other thalamic nucleus. Uylings et al. acknowledge, however, that even with the application of this criterion, it might be rather difficult to define the prefrontal cortex unequivocally.

Electrically silent area of frontal cortex

A third definition of the prefrontal cortex is the area of frontal cortex whose electrical stimulation does not lead to observable movements. For example, in 1890 David Ferrier used the term in this sense. One complication with this definition is that the electrically "silent" frontal cortex includes both granular and non-granular areas.

Subdivisions

According to Striedter, the PFC of humans can be delineated into two functionally, morphologically, and evolutionarily different regions: the ventromedial PFC (vmPFC) consisting of:

  1. the ventral prefrontal cortex (VPFC)
  2. the medial prefrontal cortex present in all mammals (MPFC)

and the lateral prefrontal cortex (LPFC), consisting of:

  1. the dorsolateral prefrontal cortex (DLPFC)
  2. the ventrolateral prefrontal cortex (VLPFC) present only in primates.

The LPFC contains the Brodmann areas BA8, BA9, BA10, BA45, BA46, and BA47. Some researchers also include BA44. The vmPFC contains the Brodmann areas BA12, BA25, BA32, BA33, BA24, BA11, BA13, and BA14.

The table below shows different ways to subdivide parts of the human prefrontal cortex based upon Brodmann areas.

Lateral (dorsolateral and ventrolateral): 8, 9, 10, 46, 45, 47, 44
Ventromedial (medial and ventral): 12, 25, 32, 33, 24, 11, 13, 14

Interconnections

The prefrontal cortex is highly interconnected with much of the brain, including extensive connections with other cortical, subcortical and brain stem sites. The dorsal prefrontal cortex is especially interconnected with brain regions involved with attention, cognition and action, while the ventral prefrontal cortex interconnects with brain regions involved with emotion. The prefrontal cortex also receives inputs from the brainstem arousal systems, and its function is particularly dependent on its neurochemical environment. Thus, there is coordination between one's state of arousal and mental state. The interplay between the prefrontal cortex and socioemotional system of the brain is relevant for adolescent development, as proposed by the Dual Systems Model.

The medial prefrontal cortex has been implicated in the generation of slow-wave sleep (SWS), and prefrontal atrophy has been linked to decreases in SWS. Prefrontal atrophy occurs naturally as individuals age, and it has been demonstrated that older adults experience impairments in memory consolidation as their medial prefrontal cortices degrade. In older adults, instead of being transferred to the neocortex and stored there during SWS, memories tend to remain in the hippocampus, where they were encoded. This is evidenced by increased hippocampal activation in older adults compared to younger adults during recall tasks in which subjects learned word associations, slept, and then were asked to recall the learned words.

The ventrolateral prefrontal cortex (VLPFC) has been implicated in various aspects of speech production and language comprehension. The VLPFC is richly connected to various regions of the brain, including the lateral and medial temporal lobe, the superior temporal cortex, the inferotemporal cortex, the perirhinal cortex, and the parahippocampal cortex. These brain areas are implicated in memory retrieval and consolidation, language processing, and the association of emotions. These connections allow the VLPFC to mediate explicit and implicit memory retrieval and integrate it with language stimuli to help plan coherent speech. In other words, choosing the correct words and staying "on topic" during conversation come from the VLPFC.

Function

Executive function

The original studies of Fuster and of Goldman-Rakic emphasized the fundamental ability of the prefrontal cortex to represent information not currently in the environment, and the central role of this function in creating the "mental sketch pad". Goldman-Rakic spoke of how this representational knowledge was used to intelligently guide thought, action, and emotion, including the inhibition of inappropriate thoughts, distractions, actions, and feelings. In this way, working memory can be seen as fundamental to attention and behavioral inhibition. Fuster speaks of how this prefrontal ability allows the wedding of past to future, allowing both cross-temporal and cross-modal associations in the creation of goal-directed, perception-action cycles. This ability to represent underlies all other higher executive functions.

Shimamura proposed Dynamic Filtering Theory to describe the role of the prefrontal cortex in executive functions. The prefrontal cortex is presumed to act as a high-level gating or filtering mechanism that enhances goal-directed activations and inhibits irrelevant activations. This filtering mechanism enables executive control at various levels of processing, including selecting, maintaining, updating, and rerouting activations. It has also been used to explain emotional regulation.

Miller and Cohen proposed an Integrative Theory of Prefrontal Cortex Function that arises from the original work of Goldman-Rakic and Fuster. The two theorize that "cognitive control stems from the active maintenance of patterns of activity in the prefrontal cortex that represent goals and the means to achieve them. They provide bias signals to other brain structures whose net effect is to guide the flow of activity along neural pathways that establish the proper mappings between inputs, internal states, and outputs needed to perform a given task". In essence, the two theorize that the prefrontal cortex guides the inputs and connections, which allows for cognitive control of our actions.

The prefrontal cortex is of significant importance when top-down processing is needed. Top-down processing, by definition, is when behavior is guided by internal states or intentions. According to the two, "The PFC is critical in situations when the mappings between sensory inputs, thoughts, and actions either are weakly established relative to other existing ones or are rapidly changing". An example of this can be seen in the Wisconsin Card Sorting Test (WCST). Subjects engaging in this task are instructed to sort cards according to the shape, color, or number of symbols appearing on them. The thought is that any given card can be associated with a number of actions, and no single stimulus-response mapping will work. Human subjects with PFC damage are able to sort the cards in the initial simple tasks but are unable to do so as the rules of classification change.
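
To make the rule-switching structure of the WCST concrete, here is a toy simulation (an illustration only, not taken from the article or from any clinical instrument): a hidden sorting rule changes every block of trials, and a "perseverating" agent that never abandons its first rule, a simplified stand-in for the behavior reported after PFC damage, accumulates far more errors than a flexible agent that updates its rule after negative feedback. All names and parameter values are hypothetical.

```python
import random

ATTRIBUTES = ["shape", "color", "number"]   # the three candidate sorting rules

def run_wcst(perseverate, trials=60, switch_every=20, seed=0):
    """Count sorting errors for an agent playing a simplified WCST.

    The experimenter's hidden rule changes every `switch_every` trials.
    A flexible agent abandons its current rule after negative feedback;
    a perseverating agent never updates its rule. Chance matches on a
    non-rule attribute are ignored for simplicity.
    """
    rng = random.Random(seed)
    hidden = "shape"     # experimenter's current rule
    guess = "shape"      # agent's current rule, learned in the first block
    errors = 0
    for t in range(trials):
        if t > 0 and t % switch_every == 0:                      # rule change
            hidden = rng.choice([a for a in ATTRIBUTES if a != hidden])
        if guess != hidden:                                      # negative feedback
            errors += 1
            if not perseverate:
                guess = rng.choice([a for a in ATTRIBUTES if a != guess])
    return errors

if __name__ == "__main__":
    print("flexible agent errors:     ", run_wcst(perseverate=False))
    print("perseverating agent errors:", run_wcst(perseverate=True))
```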

Miller and Cohen conclude that the implications of their theory can explain how much of a role the PFC has in guiding control of cognitive actions. In the researchers' own words, they claim that, "depending on their target of influence, representations in the PFC can function variously as attentional templates, rules, or goals by providing top-down bias signals to other parts of the brain that guide the flow of activity along the pathways needed to perform a task".

Experimental data indicate a role for the prefrontal cortex in mediating normal sleep physiology, dreaming and sleep-deprivation phenomena.

When analyzing and thinking about the attributes of other individuals, the medial prefrontal cortex is activated; it is not activated, however, when contemplating the characteristics of inanimate objects.

Studies using fMRI have shown that the medial prefrontal cortex (mPFC), specifically the anterior medial prefrontal cortex (amPFC), may modulate mimicry behavior. Neuroscientists are suggesting that social priming influences activity and processing in the amPFC, and that this area of the prefrontal cortex modulates mimicry responses and behavior.

Recently, researchers have used neuroimaging techniques to find that, along with the basal ganglia, the prefrontal cortex is involved with learning exemplars, which is part of exemplar theory, one of the three main ways our mind categorizes things. Exemplar theory states that we categorize judgements by comparing them to a similar past experience within our stored memories.

A 2014 meta-analysis by Professor Nicole P. Yuan from the University of Arizona found that larger prefrontal cortex volume and greater PFC cortical thickness were associated with better executive performance.

Attention and memory

Illustration of the Lebedev et al. experiment that dissociated representation of spatial attention from representation of spatial memory in the prefrontal cortex.

A widely accepted theory regarding the function of the brain's prefrontal cortex is that it serves as a store of short-term memory. This idea was first formulated by Jacobsen, who reported in 1936 that damage to the primate prefrontal cortex caused short-term memory deficits. Karl Pribram and colleagues (1952) identified the part of the prefrontal cortex responsible for this deficit as area 46, also known as the dorsolateral prefrontal cortex (dlPFC). More recently, Goldman-Rakic and colleagues (1993) evoked short-term memory loss in localized regions of space by temporary inactivation of portions of the dlPFC. Once the concept of working memory (see also Baddeley's model of working memory) was established in contemporary neuroscience by Alan Baddeley (1986), these neuropsychological findings contributed to the theory that the prefrontal cortex implements working memory and, in some extreme formulations, only working memory. In the 1990s this theory developed a wide following, and it became the predominant theory of PF function, especially for nonhuman primates. The concept of working memory used by proponents of this theory focused mostly on the short-term maintenance of information, and rather less on the manipulation or monitoring of such information or on the use of that information for decisions. Consistent with the idea that the prefrontal cortex functions predominantly in maintenance memory, delay-period activity in the PF has often been interpreted as a memory trace. (The phrase "delay-period activity" applies to neuronal activity that follows the transient presentation of an instruction cue and persists until a subsequent "go" or "trigger" signal.)

To explore alternative interpretations of delay-period activity in the prefrontal cortex, Lebedev et al. (2004) investigated the discharge rates of single prefrontal neurons as monkeys attended to a stimulus marking one location while remembering a different, unmarked location. Both locations served as potential targets of a saccadic eye movement. Although the task made intensive demands on short-term memory, the largest proportion of prefrontal neurons represented attended locations, not remembered ones. These findings showed that short-term memory functions cannot account for all, or even most, delay-period activity in the part of the prefrontal cortex explored. The authors suggested that prefrontal activity during the delay-period contributes more to the process of attentional selection (and selective attention) than to memory storage.

Speech production and language

Various areas of the prefrontal cortex have been implicated in a multitude of critical functions regarding speech production, language comprehension, and response planning before speaking. Cognitive neuroscience has shown that the left ventrolateral prefrontal cortex is vital in the processing of words and sentences.

The right prefrontal cortex has been found to be responsible for coordinating the retrieval of explicit memory for use in speech, whereas the deactivation of the left is responsible for mediating implicit memory retrieval to be used in verb generation. Recollection of nouns (explicit memory) is impaired in some amnesic patients with damaged right prefrontal cortices, but verb generation remains intact because of its reliance on left prefrontal deactivation.

Many researchers now include BA45 in the prefrontal cortex because, together with BA44, it makes up an area of the frontal lobe called Broca's area. Broca's area is widely considered the output area of the language production pathway in the brain (as opposed to Wernicke's area, in the posterior superior temporal lobe, which is seen as the language input area). BA45 has been shown to be implicated in the retrieval of relevant semantic knowledge to be used in conversation and speech. The right lateral prefrontal cortex (RLPFC) is implicated in the planning of complex behavior, and together with bilateral BA45 it acts to maintain focus and coherence during speech production. However, left BA45 has been shown to be activated significantly while maintaining speech coherence in young people. Older people have been shown to recruit the right BA45 more than their younger counterparts, which aligns with the evidence of decreased lateralization in other brain systems during aging.

In addition, this increase in BA45 and RLPFC activity, in combination with BA47, has been shown to contribute to "off-topic utterances" in older patients. The BA47 area of the prefrontal cortex is implicated in "stimulus-driven" retrieval of knowledge that is less salient than what the conversation requires. In other words, elevated activation of BA47 together with altered activity in BA45 and the broader RLPFC has been shown to contribute to the inclusion of less relevant information and irrelevant, tangential speech patterns in older subjects.

Clinical significance

In the last few decades, brain imaging systems have been used to determine brain region volumes and nerve linkages. Several studies have indicated that reduced volume and interconnections of the frontal lobes with other brain regions are observed in patients diagnosed with mental disorders; those subjected to repeated stressors; those who excessively consume sexually explicit materials; suicides; criminals; sociopaths; and those affected by lead poisoning. It is believed that at least some of the human abilities to feel guilt or remorse, and to interpret reality, are dependent on a well-functioning prefrontal cortex. The advanced neurocircuitry and self-regulatory function of the human prefrontal cortex is also associated with the higher sentience and sapience of humans, as the prefrontal cortex in humans occupies a far larger percentage of the brain than in any other animal. It is theorized that, as the brain has tripled in size over five million years of human evolution, the prefrontal cortex has increased in size sixfold.

A review on executive functions in healthy exercising individuals noted that the left and right halves of the prefrontal cortex, which are divided by the medial longitudinal fissure, appear to become more interconnected in response to consistent aerobic exercise. Two reviews of structural neuroimaging research indicate that marked improvements in prefrontal and hippocampal gray matter volume occur in healthy adults who engage in medium-intensity exercise for several months.

Chronic intake of alcohol leads to persistent alterations in brain function including altered decision-making ability. The prefrontal cortex of chronic alcoholics has been shown to be vulnerable to oxidative DNA damage and neuronal cell death.

History

Perhaps the seminal case in prefrontal cortex function is that of Phineas Gage, whose left frontal lobe was destroyed when a large iron rod was driven through his head in an 1848 accident. The standard presentation is that, although Gage retained normal memory, speech and motor skills, his personality changed radically: he became irritable, quick-tempered, and impatient, characteristics he did not previously display, so that friends described him as "no longer Gage"; and, whereas he had previously been a capable and efficient worker, afterward he was unable to complete tasks. However, careful analysis of the primary evidence shows that descriptions of Gage's psychological changes are usually exaggerated when held against the description given by Gage's doctor, the most striking feature being that changes described years after Gage's death are far more dramatic than anything reported while he was alive.

Subsequent studies on patients with prefrontal injuries have shown that the patients verbalized what the most appropriate social responses would be under certain circumstances. Yet, when actually performing, they instead pursued behavior aimed at immediate gratification, despite knowing the longer-term results would be self-defeating.

The interpretation of these data indicates not only that skills of comparison and understanding of eventual outcomes are harbored in the prefrontal cortex, but also that the prefrontal cortex (when functioning correctly) controls the mental option to delay immediate gratification for a better or more rewarding longer-term result. This ability to wait for a reward is one of the key pieces that define optimal executive function of the human brain.

There is much current research devoted to understanding the role of the prefrontal cortex in neurological disorders. Clinical trials have begun on certain drugs that have been shown to improve prefrontal cortex function, including guanfacine, which acts through the alpha-2A adrenergic receptor. A downstream target of this drug, the HCN channel, is one of the most recent areas of exploration in prefrontal cortex pharmacology.

Etymology

The term "prefrontal" as describing a part of the brain appears to have been introduced by Richard Owen in 1868. For him, the prefrontal area was restricted to the anterior-most part of the frontal lobe (approximately corresponding to the frontal pole). It has been hypothesized that his choice of the term was based on the prefrontal bone present in most amphibians and reptiles.

Tuesday, December 10, 2024

Climate model

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Climate_model
Climate models divide the planet into a 3-dimensional grid and apply differential equations to each grid cell. The equations are based on the basic laws of physics, fluid motion, and chemistry.

Numerical climate models (or climate system models) are mathematical models that can simulate the interactions of important drivers of climate. These drivers are the atmosphere, oceans, land surface and ice. Scientists use climate models to study the dynamics of the climate system and to make projections of future climate and of climate change. Climate models can also be qualitative (i.e. not numerical) models and contain narratives, largely descriptive, of possible futures.

Climate models take account of incoming energy from the Sun as well as outgoing energy from Earth. An imbalance results in a change in temperature. The incoming energy from the Sun is in the form of short wave electromagnetic radiation, chiefly visible and short-wave (near) infrared. The outgoing energy is in the form of long wave (far) infrared electromagnetic energy. These processes are part of the greenhouse effect.

Climate models vary in complexity. For example, a simple radiant heat transfer model treats the Earth as a single point and averages outgoing energy. This can be expanded vertically (radiative-convective models) and horizontally. More complex models are the coupled atmosphere–ocean–sea ice global climate models. These types of models solve the full equations for mass transfer, energy transfer and radiant exchange. In addition, other types of models can be interlinked. For example, Earth system models also include land use as well as land use changes. This allows researchers to predict the interactions between climate and ecosystems.

Climate models are systems of differential equations based on the basic laws of physics, fluid motion, and chemistry. Scientists divide the planet into a 3-dimensional grid and apply the basic equations to those grids. Atmospheric models calculate winds, heat transfer, radiation, relative humidity, and surface hydrology within each grid and evaluate interactions with neighboring points. These are coupled with oceanic models to simulate climate variability and change that occurs on different timescales due to shifting ocean currents and the much larger heat storage capacity of the global ocean. External drivers of change may also be applied. Including an ice-sheet model better accounts for long term effects such as sea level rise.
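
As a rough illustration of the gridded approach just described (a toy sketch, not code from any actual climate model), the following steps a single energy-balance equation in every cell of a coarse latitude-longitude grid: each cell absorbs sunlight, loses heat through a linearized outgoing-longwave term, and exchanges heat with its neighbours. All parameter values (the albedo, the A + B·T radiation fit, the exchange coefficient, the heat capacity) are assumed, textbook-style numbers chosen only to keep the toy stable.

```python
import numpy as np

# Toy gridded energy balance: every cell obeys
#   C dT/dt = absorbed_solar - (A + B*T) + D * (neighbour coupling)
NLAT, NLON = 18, 36                         # 10-degree latitude-longitude cells
x = np.sin(np.deg2rad(np.linspace(-85, 85, NLAT)))   # x = sin(latitude)
S0 = 1367.0                                 # solar constant, W m^-2
ALBEDO = 0.30                               # flat planetary albedo (toy value)
A, B = 203.3, 2.09                          # assumed linear outgoing-longwave fit, T in deg C
D = 0.6                                     # assumed neighbour-exchange coefficient, W m^-2 K^-1
C = 4.0e8                                   # heat capacity of ~100 m of water, J m^-2 K^-1
DT = 5 * 86400.0                            # 5-day time step, s

# annual-mean insolation versus latitude (a standard Legendre-polynomial fit)
s = 1.0 - 0.482 * (3.0 * x**2 - 1.0) / 2.0
solar = np.repeat((S0 / 4.0) * s[:, None], NLON, axis=1)

def neighbour_coupling(T):
    """Sum of neighbour differences: periodic in longitude, no flux across the poles."""
    east, west = np.roll(T, 1, axis=1), np.roll(T, -1, axis=1)
    north = np.vstack([T[:1], T[:-1]])
    south = np.vstack([T[1:], T[-1:]])
    return east + west + north + south - 4.0 * T

T = np.zeros((NLAT, NLON))                  # start at 0 deg C everywhere
for _ in range(5000):                       # step each cell's equation to a rough steady state
    tendency = solar * (1 - ALBEDO) - (A + B * T) + D * neighbour_coupling(T)
    T += DT * tendency / C

print(f"cell-mean {T.mean():.1f} C, equator {T[NLAT // 2].mean():.1f} C, pole {T[0].mean():.1f} C")
```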

Uses

There are three major types of institution where climate models are developed, implemented and used.

Big climate models are essential, but they are not perfect. Attention still needs to be given to the real world (what is happening and why). The global models are essential to assimilate all the observations, especially from space (satellites), and to produce comprehensive analyses of what is happening; they can then be used to make predictions and projections. Simple models also have a role to play, but that role is widely abused when their simplifications, such as the absence of a water cycle, go unrecognized.

General circulation models (GCMs)

Climate models are systems of differential equations based on the basic laws of physics, fluid motion, and chemistry. To "run" a model, scientists divide the planet into a 3-dimensional grid, apply the basic equations, and evaluate the results. Atmospheric models calculate winds, heat transfer, radiation, relative humidity, and surface hydrology within each grid and evaluate interactions with neighboring points.

A general circulation model (GCM) is a type of climate model. It employs a mathematical model of the general circulation of a planetary atmosphere or ocean. It uses the Navier–Stokes equations on a rotating sphere with thermodynamic terms for various energy sources (radiation, latent heat). These equations are the basis for computer programs used to simulate the Earth's atmosphere or oceans. Atmospheric and oceanic GCMs (AGCM and OGCM) are key components along with sea ice and land-surface components.

GCMs and global climate models are used for weather forecasting, understanding the climate, and forecasting climate change.

Atmospheric GCMs (AGCMs) model the atmosphere and impose sea surface temperatures as boundary conditions. Coupled atmosphere-ocean GCMs (AOGCMs, e.g. HadCM3, EdGCM, GFDL CM2.X, ARPEGE-Climat) combine the two models. The first general circulation climate model that combined both oceanic and atmospheric processes was developed in the late 1960s at the NOAA Geophysical Fluid Dynamics Laboratory. AOGCMs represent the pinnacle of complexity in climate models and internalise as many processes as possible. However, they are still under development and uncertainties remain. They may be coupled to models of other processes, such as the carbon cycle, so as to better model feedback effects. Such integrated multi-system models are sometimes referred to as either "earth system models" or "global climate models."

Versions designed for decade to century time scale climate applications were originally created by Syukuro Manabe and Kirk Bryan at the Geophysical Fluid Dynamics Laboratory (GFDL) in Princeton, New Jersey. These models are based on the integration of a variety of fluid dynamical, chemical and sometimes biological equations.

Energy balance models (EBMs)

Simulation of the climate system in full 3-D space and time was impractical prior to the establishment of large computational facilities starting in the 1960s. In order to begin to understand which factors may have changed Earth's paleoclimate states, the constituent and dimensional complexities of the system needed to be reduced. A simple quantitative model that balanced incoming/outgoing energy was first developed for the atmosphere in the late 19th century. Other EBMs similarly seek an economical description of surface temperatures by applying the conservation of energy constraint to individual columns of the Earth-atmosphere system.

Essential features of EBMs include their relative conceptual simplicity and their ability to sometimes produce analytical solutions. Some models account for effects of ocean, land, or ice features on the surface budget. Others include interactions with parts of the water cycle or carbon cycle. A variety of these and other reduced system models can be useful for specialized tasks that supplement GCMs, particularly to bridge gaps between simulation and understanding.

Zero-dimensional models

Zero-dimensional models consider Earth as a point in space, analogous to the pale blue dot viewed by Voyager 1 or an astronomer's view of very distant objects. This dimensionless view, while highly limited, is still useful in that the laws of physics are applicable in a bulk fashion to unknown objects, or in an appropriate lumped manner if some major properties of the object are known. For example, astronomers know that most planets in our own solar system feature some kind of solid/liquid surface surrounded by a gaseous atmosphere.

Model with combined surface and atmosphere

A very simple model of the radiative equilibrium of the Earth is

(1 − a) S π r² = 4 π r² ε σ T⁴

where

  • the left hand side represents the total incoming shortwave power (in Watts) from the Sun that is absorbed
  • the right hand side represents the total outgoing longwave power (in Watts) from Earth, calculated from the Stefan–Boltzmann law.

The constant parameters include

  • S is the solar constant – the incoming solar radiation per unit area—about 1367 W·m−2
  • r is Earth's radius—approximately 6.371×106 m
  • π is the mathematical constant (3.141...)
  • σ is the Stefan–Boltzmann constant—approximately 5.67×10−8 J·K−4·m−2·s−1

The constant π r² can be factored out (and both sides divided by 4), giving the nildimensional equation for the equilibrium

(1 − a) S / 4 = ε σ T⁴

where

  • the left hand side represents the incoming shortwave energy flux from the Sun in W·m−2
  • the right hand side represents the outgoing longwave energy flux from Earth in W·m−2.

The remaining variable parameters which are specific to the planet include

  • a is Earth's average albedo, measured to be 0.3.
  • T is Earth's average surface temperature, measured as about 288 K as of year 2020
  • ε is the effective emissivity of Earth's combined surface and atmosphere (including clouds). It is a quantity between 0 and 1 that is calculated from the equilibrium to be about 0.61. For the zero-dimensional treatment it is equivalent to an average value over all viewing angles.

This very simple model is quite instructive. For example, it shows the temperature sensitivity to changes in the solar constant, Earth albedo, or effective Earth emissivity. The effective emissivity also gauges the strength of the atmospheric greenhouse effect, since it is the ratio of the thermal emissions escaping to space versus those emanating from the surface.
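
A few lines of code make these sensitivities explicit. This is nothing more than the nildimensional equation above rearranged, using the values quoted in this section: solving (1 − a) S / 4 = ε σ T⁴ for ε with the measured albedo, solar constant and temperature returns roughly 0.61, and re-solving for T shows how the equilibrium temperature responds to small changes in the solar constant or the albedo.

```python
SIGMA = 5.67e-8      # Stefan-Boltzmann constant, W m^-2 K^-4
S = 1367.0           # solar constant, W m^-2
a = 0.3              # Earth's average albedo
T = 288.0            # measured average surface temperature, K

# Solve the equilibrium (1 - a)*S/4 = eps*sigma*T^4 for the effective emissivity.
eps = (1 - a) * S / 4 / (SIGMA * T**4)
print(f"effective emissivity: {eps:.2f}")            # ~0.61

def equilibrium_T(S, a, eps):
    """Equilibrium temperature from the same equation, solved for T."""
    return ((1 - a) * S / (4 * SIGMA * eps)) ** 0.25

print(f"T at baseline:       {equilibrium_T(S, a, eps):.1f} K")
print(f"T if S rises by 1%:  {equilibrium_T(1.01 * S, a, eps):.1f} K")
print(f"T if albedo is 0.31: {equilibrium_T(S, 0.31, eps):.1f} K")
```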

The calculated emissivity can be compared to available data. Terrestrial surface emissivities are all in the range of 0.96 to 0.99 (except for some small desert areas which may be as low as 0.7). Clouds, however, which cover about half of the planet's surface, have an average emissivity of about 0.5 (which must be reduced by the fourth power of the ratio of cloud absolute temperature to average surface absolute temperature) and an average cloud temperature of about 258 K (−15 °C; 5 °F). Taking all this properly into account results in an effective earth emissivity of about 0.64 (earth average temperature 285 K (12 °C; 53 °F)).
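
The bottom-up estimate in the preceding paragraph can be checked with the same sort of arithmetic. The weighting below (half the planet clear sky at the surface emissivity, half cloudy, with the cloud contribution reduced by the fourth power of the cloud-to-surface temperature ratio) is one reading of that paragraph rather than a formula given in it, and it lands close to the quoted value of about 0.64.

```python
surface_emissivity = 0.97        # mid-range of the 0.96-0.99 quoted for the surface
cloud_emissivity = 0.5           # average cloud emissivity quoted above
cloud_fraction = 0.5             # clouds cover about half of the planet
T_cloud, T_surface = 258.0, 285.0

# The cloud term is scaled by (T_cloud / T_surface)^4 so that cloud emission is
# expressed relative to emission at the surface temperature.
effective = ((1 - cloud_fraction) * surface_emissivity
             + cloud_fraction * cloud_emissivity * (T_cloud / T_surface) ** 4)
print(f"effective emissivity: {effective:.2f}")      # close to the quoted ~0.64
```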

Models with separated surface and atmospheric layers

One-layer EBM with blackbody surface

Dimensionless models have also been constructed with functionally separated atmospheric layers from the surface. The simplest of these is the zero-dimensional, one-layer model, which may be readily extended to an arbitrary number of atmospheric layers. The surface and atmospheric layer(s) are each characterized by a corresponding temperature and emissivity value, but no thickness. Applying radiative equilibrium (i.e. conservation of energy) at the interfaces between layers produces a set of coupled equations which are solvable.
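
As a sketch of how such coupled layer equations work out (an illustration using standard textbook results, not equations given in the article): with a blackbody surface and a single gray atmospheric layer of emissivity ε_a, balancing energy for the layer gives T_atm⁴ = T_s⁴ / 2, and the surface balance σ T_s⁴ = (1 − a) S / 4 + ε_a σ T_atm⁴ then yields a closed-form surface temperature; with N fully absorbing layers the result generalizes to T_s⁴ = (N + 1)(1 − a) S / (4σ).

```python
SIGMA = 5.67e-8                            # Stefan-Boltzmann constant, W m^-2 K^-4
S, a = 1367.0, 0.3
F = (1 - a) * S / 4.0                      # absorbed solar flux, W m^-2

def one_layer_surface_T(eps_atm):
    """Blackbody surface under one gray layer of emissivity eps_atm.

    The layer balance gives T_atm^4 = T_s^4 / 2; substituting that into the
    surface balance sigma*T_s^4 = F + eps_atm*sigma*T_atm^4 gives the closed form.
    """
    return (F / (SIGMA * (1 - eps_atm / 2.0))) ** 0.25

def n_layer_surface_T(n):
    """Blackbody surface under n fully absorbing layers: T_s^4 = (n + 1)*F/sigma."""
    return ((n + 1) * F / SIGMA) ** 0.25

print(f"no atmosphere:          {one_layer_surface_T(0.0):.0f} K")   # ~255 K
print(f"gray layer, eps = 0.78: {one_layer_surface_T(0.78):.0f} K")  # ~288 K
print(f"one opaque layer:       {n_layer_surface_T(1):.0f} K")       # ~303 K
print(f"two opaque layers:      {n_layer_surface_T(2):.0f} K")       # ~335 K
```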

Layered models produce temperatures that better estimate those observed for Earth's surface and atmospheric levels. They likewise further illustrate the radiative heat transfer processes which underlie the greenhouse effect. Quantification of this phenomenon using a version of the one-layer model was first published by Svante Arrhenius in 1896.

Radiative-convective models

Water vapor is a main determinant of the emissivity of Earth's atmosphere. It both influences the flows of radiation and is influenced by convective flows of heat in a manner that is consistent with its equilibrium concentration and temperature as a function of elevation (i.e. relative humidity distribution). This has been shown by refining the zero dimension model in the vertical to a one-dimensional radiative-convective model which considers two processes of energy transport:

  • upwelling and downwelling radiative transfer through atmospheric layers that both absorb and emit infrared radiation
  • upward transport of heat by air and vapor convection, which is especially important in the lower troposphere.

Radiative-convective models have advantages over simpler models and also lay a foundation for more complex models. They can estimate both surface temperature and the temperature variation with elevation in a more realistic manner. They also simulate the observed decline in upper atmospheric temperature and rise in surface temperature when trace amounts of other non-condensible greenhouse gases such as carbon dioxide are included.
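
To see why the second, convective transport process is needed, here is a toy calculation using the standard gray two-stream result σT⁴(τ) = (F/2)(1 + 3τ/2), which is textbook material rather than something stated in this article: the purely radiative-equilibrium profile is far steeper near the surface than a typical critical lapse rate, so a radiative-convective model hands the lower layers over to convection. The optical depth, its scale height and the critical lapse rate used below are assumed values.

```python
import math

SIGMA = 5.67e-8                # Stefan-Boltzmann constant, W m^-2 K^-4
S, a = 1367.0, 0.3
F = (1 - a) * S / 4.0          # flux the column must carry upward in equilibrium, W m^-2

TAU0 = 2.0                     # assumed total infrared optical depth of the column
H = 2.0                        # assumed scale height of the absorber (water vapor), km
GAMMA_CRIT = 6.5               # assumed critical lapse rate, K per km

def t_radiative(z_km):
    """Gray two-stream radiative equilibrium: sigma*T^4 = (F/2)*(1 + 1.5*tau(z))."""
    tau = TAU0 * math.exp(-z_km / H)
    return ((F / (2.0 * SIGMA)) * (1.0 + 1.5 * tau)) ** 0.25

# Wherever the purely radiative profile drops faster than GAMMA_CRIT per km,
# a radiative-convective model would let convection carry the heat instead.
for z in (0, 1, 2, 3, 4, 5, 6, 8, 10):
    lapse = t_radiative(z) - t_radiative(z + 1)          # K per km over the next kilometre
    regime = "unstable -> convection" if lapse > GAMMA_CRIT else "stable"
    print(f"z = {z:2d} km   T_rad = {t_radiative(z):5.1f} K   lapse = {lapse:4.1f} K/km   {regime}")
```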

Other parameters are sometimes included to simulate localized effects in other dimensions and to address the factors that move energy about Earth. For example, the effect of ice-albedo feedback on global climate sensitivity has been investigated using a one-dimensional radiative-convective climate model.

Higher-dimension models

The zero-dimensional model may be expanded to consider the energy transported horizontally in the atmosphere. This kind of model may well be zonally averaged. This model has the advantage of allowing a rational dependence of local albedo and emissivity on temperature – the poles can be allowed to be icy and the equator warm – but the lack of true dynamics means that horizontal transports have to be specified.

Early examples include the research of Mikhail Budyko and William D. Sellers, who developed the Budyko-Sellers model. This work also showed the role of positive feedback in the climate system and has been considered foundational for energy balance models since its publication in 1969.
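
A minimal sketch in the spirit of the Budyko-Sellers approach follows (illustrative only; the parameter values are rough, assumed numbers in the style of simple textbook energy balance models, not the values from the original 1969 papers). Surface temperature is stepped forward for zonal bands, outgoing radiation is linearized as A + B·T, transport is represented by relaxation toward the global mean, and the albedo jumps to an icy value wherever a band falls below −10 °C. Running the same model from a warm start and from a cold start then lands on different equilibria, which is the positive ice-albedo feedback at work.

```python
import numpy as np

# Zonally averaged, Budyko-style energy balance per latitude band:
#   C dT/dt = Q*s(x)*(1 - albedo(T)) - (A + B*T) + D*(Tmean - T)
# with a temperature-dependent albedo supplying the ice-albedo feedback.
NBANDS = 30
x = (np.arange(NBANDS) + 0.5) / NBANDS * 2.0 - 1.0   # band centres in x = sin(latitude), equal area
Q = 1367.0 / 4.0                                     # mean insolation, W m^-2
s = 1.0 - 0.482 * (3.0 * x**2 - 1.0) / 2.0           # annual-mean insolation distribution
A, B = 203.3, 2.09       # linearized outgoing longwave: A + B*T (T in deg C), W m^-2
D = 1.5                  # assumed relaxation-to-mean transport coefficient, W m^-2 K^-1
ALB_WARM, ALB_ICE, T_ICE = 0.30, 0.62, -10.0         # albedo jumps where a band is below -10 C
C = 4.0e8                # heat capacity, J m^-2 K^-1
DT = 10 * 86400.0        # 10-day time step, s

def equilibrate(T_start, steps=4000):
    """Time-step every band from a uniform initial temperature to a steady state."""
    T = np.full(NBANDS, float(T_start))
    for _ in range(steps):
        albedo = np.where(T < T_ICE, ALB_ICE, ALB_WARM)
        tendency = Q * s * (1.0 - albedo) - (A + B * T) + D * (T.mean() - T)
        T += DT * tendency / C
    return T

warm = equilibrate(15.0)      # start from a temperate planet
cold = equilibrate(-40.0)     # start from an ice-covered planet
print(f"warm start: mean {warm.mean():6.1f} C, icy bands {(warm < T_ICE).sum()}/{NBANDS}")
print(f"cold start: mean {cold.mean():6.1f} C, icy bands {(cold < T_ICE).sum()}/{NBANDS}")
```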

Earth systems models of intermediate complexity (EMICs)

Depending on the nature of questions asked and the pertinent time scales, there are, on the one extreme, conceptual, more inductive models, and, on the other extreme, general circulation models operating at the highest spatial and temporal resolution currently feasible. Models of intermediate complexity bridge the gap. One example is the Climber-3 model. Its atmosphere is a 2.5-dimensional statistical-dynamical model with 7.5° × 22.5° resolution and time step of half a day; the ocean is MOM-3 (Modular Ocean Model) with a 3.75° × 3.75° grid and 24 vertical levels.

Box models

Schematic of a simple box model used to illustrate fluxes in geochemical cycles, showing a source (Q), sink (S) and reservoir (M)

Box models are simplified versions of complex systems, reducing them to boxes (or reservoirs) linked by fluxes. The boxes are assumed to be mixed homogeneously. Within a given box, the concentration of any chemical species is therefore uniform. However, the abundance of a species within a given box may vary as a function of time due to the input to (or loss from) the box or due to the production, consumption or decay of this species within the box.

Simple box models, i.e. box models with a small number of boxes whose properties (e.g. their volume) do not change with time, are often useful to derive analytical formulas describing the dynamics and steady-state abundance of a species. More complex box models are usually solved using numerical techniques.
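
As a sketch of the analytical case just described (generic, not tied to any particular geochemical cycle), consider a single well-mixed box with a constant source flux Q and a first-order sink S = M/τ, using the symbols of the schematic above; the budget dM/dt = Q − M/τ has the closed-form steady state M* = Qτ, which the short numerical integration below reproduces. The numerical values of Q and τ are arbitrary.

```python
# One-box model: dM/dt = Q - M/tau, with a first-order sink flux S = M/tau.
Q = 2.0          # source flux into the box (arbitrary units per year)
TAU = 5.0        # residence time of the species in the box, years
M = 0.0          # the reservoir starts empty
DT = 0.01        # time step, years

for _ in range(int(100 / DT)):             # integrate 100 years, far beyond ~5*tau
    S = M / TAU                            # sink flux proportional to the reservoir
    M += DT * (Q - S)

print(f"numerical steady state:          {M:.3f}")
print(f"analytical steady state Q * tau: {Q * TAU:.3f}")
```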

Box models are used extensively to model environmental systems or ecosystems and in studies of ocean circulation and the carbon cycle. They are instances of a multi-compartment model.

In 1961 Henry Stommel was the first to use a simple 2-box model to study factors that influence ocean circulation.

History

In 1956, Norman Phillips developed a mathematical model that realistically depicted monthly and seasonal patterns in the troposphere. This was the first successful climate model. Several groups then began working to create general circulation models. The first general circulation climate model combined oceanic and atmospheric processes and was developed in the late 1960s at the Geophysical Fluid Dynamics Laboratory, a component of the U.S. National Oceanic and Atmospheric Administration.

By 1975, Manabe and Wetherald had developed a three-dimensional global climate model that gave a roughly accurate representation of the current climate. Doubling CO2 in the model's atmosphere gave a roughly 2 °C rise in global temperature. Several other kinds of computer models gave similar results: it was impossible to make a model that gave something resembling the actual climate and not have the temperature rise when the CO2 concentration was increased.

By the early 1980s, the U.S. National Center for Atmospheric Research had developed the Community Atmosphere Model (CAM), which can be run by itself or as the atmospheric component of the Community Climate System Model. The latest update (version 3.1) of the standalone CAM was issued on 1 February 2006. In 1986, efforts began to initialize and model soil and vegetation types, resulting in more realistic forecasts. Coupled ocean-atmosphere climate models, such as the Hadley Centre for Climate Prediction and Research's HadCM3 model, are being used as inputs for climate change studies.

Increase of forecast confidence over time

The pattern correlation between global climate models and observations has improved over sequential CMIP phases 3, 5, and 6.[42]

The Coupled Model Intercomparison Project (CMIP) has been a leading effort to foster improvements in GCMs and climate change understanding since 1995.

The IPCC stated in 2010 that it had increased confidence in forecasts coming from climate models:

"There is considerable confidence that climate models provide credible quantitative estimates of future climate change, particularly at continental scales and above. This confidence comes from the foundation of the models in accepted physical principles and from their ability to reproduce observed features of current climate and past climate changes. Confidence in model estimates is higher for some climate variables (e.g., temperature) than for others (e.g., precipitation). Over several decades of development, models have consistently provided a robust and unambiguous picture of significant climate warming in response to increasing greenhouse gases."

Coordination of research

The World Climate Research Programme (WCRP), hosted by the World Meteorological Organization (WMO), coordinates research activities on climate modelling worldwide.

A 2012 U.S. National Research Council report discussed how the large and diverse U.S. climate modeling enterprise could evolve to become more unified. Efficiencies could be gained by developing a common software infrastructure shared by all U.S. climate researchers, and holding an annual climate modeling forum, the report found.

Issues

Electricity consumption

Cloud-resolving climate models are nowadays run on high-intensity supercomputers, which have high power consumption and thus cause CO2 emissions. They require exascale computing (billion billion – i.e., a quintillion – calculations per second). For example, the Frontier exascale supercomputer consumes 29 MW. It can simulate a year’s worth of climate at cloud-resolving scales in a day.

Techniques that could lead to energy savings, include for example: "reducing floating point precision computation; developing machine learning algorithms to avoid unnecessary computations; and creating a new generation of scalable numerical algorithms that would enable higher throughput in terms of simulated years per wall clock day."

Parametrization

Parameterization in a weather or climate model is a method of replacing processes that are too small-scale or complex to be physically represented in the model by a simplified process. This can be contrasted with other processes—e.g., large-scale flow of the atmosphere—that are explicitly resolved within the models. Associated with these parameterizations are various parameters used in the simplified processes. Examples include the descent rate of raindrops, convective clouds, simplifications of the atmospheric radiative transfer on the basis of atmospheric radiative transfer codes, and cloud microphysics. Radiative parameterizations are important to both atmospheric and oceanic modeling alike. Atmospheric emissions from different sources within individual grid boxes also need to be parameterized to determine their impact on air quality.
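
As a concrete illustration of the idea (a standard textbook-style example, not something specified in this article): the conversion of cloud droplets into rain happens far below the grid scale, so models replace it with a bulk rule. The sketch below implements a Kessler-type autoconversion parameterization, in which grid-box rain production is proportional to the amount by which the cloud-water mixing ratio exceeds a threshold; the rate constant and threshold used here are typical assumed values that real models tune.

```python
def kessler_autoconversion(q_cloud, k=1e-3, q_crit=5e-4):
    """Bulk rain-production rate (kg/kg/s) for one grid box.

    Kessler-type rule: cloud water converts to rain in proportion to how much
    the grid-box mean cloud-water mixing ratio q_cloud (kg/kg) exceeds the
    threshold q_crit; below the threshold nothing converts. Both k and q_crit
    are typical assumed values and are tuned per model in practice.
    """
    return k * max(q_cloud - q_crit, 0.0)

# One model time step for a single grid box (time step and mixing ratio assumed).
dt = 600.0                       # 10-minute model time step, s
q_cloud = 1.2e-3                 # grid-box mean cloud water, kg/kg
rain_rate = kessler_autoconversion(q_cloud)
q_cloud -= rain_rate * dt        # the converted water is handed to the rain category

print(f"autoconversion rate:        {rain_rate:.2e} kg/kg/s")
print(f"cloud water after one step: {q_cloud:.2e} kg/kg")
```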
