
Monday, December 9, 2024

Neuroeconomics

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Neuroeconomics

Neuroeconomics is an interdisciplinary field that seeks to explain human decision-making, the ability to process multiple alternatives and to follow through on a plan of action. It studies how economic behavior can shape our understanding of the brain, and how neuroscientific discoveries can guide models of economics.

It combines research from neuroscience, experimental and behavioral economics, and cognitive and social psychology. As research into decision-making behavior becomes increasingly computational, it has also incorporated new approaches from theoretical biology, computer science, and mathematics. Neuroeconomics studies decision-making by using a combination of tools from these fields so as to avoid the shortcomings that arise from a single-perspective approach. In mainstream economics, expected utility (EU) and the concept of rational agents are still being used. Neuroscience has the potential to reduce reliance on this flawed assumption by identifying the emotions, habits, biases, heuristics and environmental factors that contribute to individual and societal preferences. Economists can thereby make more accurate predictions of human behavior in their models.

Behavioral economics was the first subfield to emerge to account for these anomalies by integrating social and cognitive factors in understanding economic decisions. Neuroeconomics adds another layer by using neuroscience and psychology to understand the root of decision-making. This involves researching what occurs within the brain when making economic decisions. The economic decisions researched can cover diverse circumstances such as buying a first home, voting in an election, choosing to marry a partner or go on a diet. Using tools from various fields, neuroeconomics works toward an integrated account of economic decision-making.

History

In 1989, Paul Glimcher joined the Center for Neural Science at NYU. Initial forays into neuroeconomic topics occurred in the late 1990s thanks, in part, to the rising prevalence of cognitive neuroscience research. Improvements in brain imaging technology suddenly allowed for crossover between behavioral and neurobiological enquiry. At the same time, critical tension was building between neoclassical and behavioral schools of economics seeking to produce superior predictive models of human behavior. Behavioral economists, in particular, sought to challenge neo-classicists by looking for alternative computational and psychological processes that validated their counter-findings of irrational choice. These converging trends set the stage for the sub-discipline of neuroeconomics to emerge, with varying and complementary motivations from each parent discipline.

Behavioral economists and cognitive psychologists looked towards functional brain imaging to experiment with and develop their alternative theories of decision-making, while groups of physiologists and neuroscientists looked towards economics to develop their algorithmic models of the neural hardware of choice. This split approach characterised the formation of neuroeconomics as an academic pursuit, though not without criticism. Numerous neurobiologists claimed that attempting to synchronise complex models of economics with real human and animal behavior would be futile. Neoclassical economists also argued that the merger would be unlikely to improve the predictive power of existing revealed preference theory.

Despite the early criticisms, neuroeconomics grew rapidly from its inception in the late 1990s through the 2000s, leading many more scholars from the parent fields of economics, neuroscience and psychology to take notice of the possibilities of such interdisciplinary collaboration. Meetings between scholars and early researchers in neuroeconomics began to take place in the early 2000s. Important among them was a meeting held in 2002 at Princeton University. Organized by neuroscientist Jonathan Cohen and economist Christina Paxson, the Princeton meeting gained significant traction for the field and is often credited as the formative beginning of the present-day Society for Neuroeconomics.

The momentum continued throughout the 2000s: research output grew steadily, and the number of publications containing the words "decision making" and "brain" rose impressively. A critical point was reached in 2008 when the first edition of Neuroeconomics: Decision Making and the Brain was published. This marked a watershed moment for the field, as it collected the growing wealth of research into a widely accessible textbook. The success of this publication sharply increased the visibility of neuroeconomics and helped affirm its place in economic teaching worldwide.

Major research areas

The field of decision-making is largely concerned with the processes by which individuals make a single choice from among many options. These processes are generally assumed to proceed in a logical manner such that the decision itself is largely independent of context. Different options are first translated into a common currency, such as monetary value, and are then compared to one another; the option with the largest overall utility value is the one that should be chosen. While there has been support for this economic view of decision-making, there are also situations where the assumptions of optimal decision-making seem to be violated.
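
As a minimal sketch of this "common currency" account (with made-up utility values, not data from any cited study), the comparison step amounts to selecting the option with the greatest assigned value:

```python
# Minimal sketch of value-based choice: each option is mapped to a common
# currency (here, a hypothetical utility value) and the option with the
# largest value is selected. The numbers are illustrative, not measured.
options = {
    "apple": 1.2,
    "chocolate": 2.5,
    "coffee": 1.8,
}

def choose(valuations):
    """Return the option with the highest assigned utility."""
    return max(valuations, key=valuations.get)

print(choose(options))  # -> 'chocolate'
```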

The field of neuroeconomics arose out of this controversy. By determining which brain areas are active in which types of decision processes, neuroeconomists hope to better understand the nature of what seem to be suboptimal and illogical decisions. While most of these scientists are using human subjects in this research, others are using animal models where studies can be more tightly controlled and the assumptions of the economic model can be tested directly.

For example, Padoa-Schioppa & Assad tracked the firing rates of individual neurons in the monkey orbitofrontal cortex while the animals chose between two kinds of juice. The firing rate of the neurons was directly correlated with the utility of the food items and did not differ when other types of food were offered. This suggests that, in accordance with the economic theory of decision-making, neurons are directly comparing some form of utility across different options and choosing the one with the highest value. Similarly, a common measure of prefrontal cortex dysfunction, the FrSBe, is correlated with multiple different measures of economic attitudes and behavior, supporting the idea that brain activation can display important aspects of the decision process.

Neuroeconomics studies the neurobiological along with the computational bases of decision-making. A framework of basic computations that may be applied to neuroeconomic studies was proposed by A. Rangel, C. Camerer, and P. R. Montague. It divides the process of decision-making into five stages implemented by a subject. First, a representation of the problem is formed, including an analysis of internal states, external states and potential courses of action. Second, values are assigned to the potential actions. Third, based on these valuations, one of the actions is selected. Fourth, the subject evaluates how desirable the outcome is. The final stage, learning, involves updating all of the above processes in order to improve future decisions.
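
A schematic of the five-stage framework, using illustrative placeholder functions rather than the authors' own formalism, might look like the following simple valuation-and-learning loop:

```python
# Schematic of the five computational stages proposed by Rangel, Camerer and
# Montague: representation, valuation, action selection, outcome evaluation
# and learning. All functions and numbers are illustrative placeholders.
import random

def represent(state):
    # Stage 1: enumerate internal/external states and candidate actions.
    return ["wait", "buy"]

def assign_values(actions, weights):
    # Stage 2: assign a value to each candidate action.
    return {a: weights.get(a, 0.0) for a in actions}

def select(values, epsilon=0.2):
    # Stage 3: pick the highest-valued action, with occasional exploration.
    if random.random() < epsilon:
        return random.choice(list(values))
    return max(values, key=values.get)

def evaluate(action):
    # Stage 4: observe how desirable the outcome turned out to be.
    return random.gauss(1.0 if action == "buy" else 0.2, 0.1)

def learn(weights, action, outcome, rate=0.1):
    # Stage 5: update valuations to improve future decisions.
    weights[action] += rate * (outcome - weights[action])
    return weights

weights = {"wait": 0.0, "buy": 0.0}
for trial in range(200):
    actions = represent(state=None)
    values = assign_values(actions, weights)
    action = select(values)
    weights = learn(weights, action, evaluate(action))

print(weights)  # 'buy' ends up valued higher after repeated feedback
```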

Decision-making under risk and ambiguity

Most of our decisions are made under some form of uncertainty. Decision sciences such as psychology and economics usually define risk as the uncertainty about several possible outcomes when the probability of each is known. When the probabilities are unknown, uncertainty takes the form of ambiguity. Utility maximization, first proposed by Daniel Bernoulli in 1738, is used to explain decision-making under risk. The theory assumes that humans are rational and will assess options based on the expected utility they will gain from each.
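
A small, purely illustrative expected-utility calculation, using Bernoulli's idea of diminishing marginal utility (logarithmic utility of wealth) and invented amounts and probabilities:

```python
# Illustrative expected-utility calculation. Utility is taken as log(wealth),
# following Bernoulli's diminishing-marginal-utility idea; the probabilities
# and monetary amounts are made up for the example.
import math

def expected_utility(outcomes, wealth=100.0, u=math.log):
    """outcomes: list of (probability, monetary change) pairs."""
    return sum(p * u(wealth + x) for p, x in outcomes)

sure_thing = [(1.0, 45.0)]              # receive $45 for certain
gamble = [(0.5, 100.0), (0.5, 0.0)]     # 50% chance of $100, else nothing

print(expected_utility(sure_thing))  # ~4.98
print(expected_utility(gamble))      # ~4.95
```

With these numbers the certain $45 yields higher expected utility than the gamble even though the gamble's expected monetary value is $50, which is the classic signature of risk aversion under a concave utility function.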

Research and experience uncovered a wide range of expected utility anomalies and common patterns of behavior that are inconsistent with the principle of utility maximization – for example, the tendency to overweight small probabilities and underweight large ones. Daniel Kahneman and Amos Tversky proposed prospect theory to encompass these observations and offer an alternative model.

There seem to be multiple brain areas involved in dealing with situations of uncertainty. In tasks requiring individuals to make predictions when there is some degree of uncertainty about the outcome, there is an increase in activity in area BA8 of the frontomedian cortex as well as a more generalized increase in activity of the mesial prefrontal cortex and the frontoparietal cortex. The prefrontal cortex is generally involved in all reasoning and understanding, so these particular areas may be specifically involved in determining the best course of action when not all relevant information is available.

The Iowa Gambling Task, developed in 1994, involves picking from 4 decks of cards, of which 2 decks are riskier, offering higher payoffs accompanied by much heftier penalties. Most individuals realise after a few rounds of picking cards that the less risky decks have higher payoffs in the long run because of their smaller losses; however, individuals with damage to the ventromedial prefrontal cortex continue picking from the riskier decks. These results suggest that the ventromedial prefrontal region of the brain is strongly associated with recognising the long-term consequences of risky behavior, as patients with damage to the region struggled to make decisions that prioritised the future over the potential for immediate gain.
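
A toy simulation of an Iowa-Gambling-Task-like payoff structure makes the long-run logic concrete; the payoff numbers below are simplified stand-ins, not the published schedule:

```python
# Illustrative simulation of an Iowa-Gambling-Task-like payoff structure.
# The amounts and probabilities are simplified stand-ins chosen so that the
# "risky" deck loses money in the long run and the "safe" deck gains.
import random
random.seed(0)

def draw(deck):
    if deck == "risky":
        # Large gains but occasional heavy penalties: negative on average.
        return 100 - (1250 if random.random() < 0.1 else 0)
    # Smaller gains with mild, frequent penalties: positive on average.
    return 50 - (50 if random.random() < 0.5 else 0)

for deck in ("risky", "safe"):
    total = sum(draw(deck) for _ in range(1000))
    print(deck, total / 1000.0)   # average payoff per card
```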

In situations that involve known risk rather than ambiguity, the insular cortex seems to be highly active. For example, when subjects played a 'double or nothing' game in which they could either stop the game and keep accumulated winnings or take a risky option resulting in either a complete loss or a doubling of winnings, activation of the right insula increased when individuals took the gamble. It is hypothesized that the main role of the insular cortex in risky decision-making is to simulate potential negative consequences of taking a gamble. Neuroscience has found that the insula is activated when thinking about or experiencing something uncomfortable or painful.

In addition to the importance of specific brain areas to the decision process, there is also evidence that the neurotransmitter dopamine may transmit information about uncertainty throughout the cortex. Dopaminergic neurons are strongly involved in the reward process and become highly active after an unexpected reward occurs. In monkeys, the level of dopaminergic activity is highly correlated with the level of uncertainty such that the activity increases with uncertainty. Furthermore, rats with lesions to the nucleus accumbens, which is an important part of the dopamine reward pathway through the brain, are far more risk averse than normal rats. This suggests that dopamine may be an important mediator of risky behavior.

Individual levels of risk aversion among humans are influenced by testosterone concentration. Studies have shown a correlation between the choice of a risky career (financial trading, business) and testosterone exposure. In addition, the daily performance of traders with a lower digit ratio is more sensitive to circulating testosterone. A long-term study of risk aversion and risky career choice was conducted on a representative group of MBA students. It revealed that females are on average more risk averse, but that the difference between genders vanishes at low levels of organizational and activational testosterone exposure, which lead to risk-averse behavior. Students with a high salivary testosterone concentration and a low digit ratio, regardless of gender, tend to choose a risky career in finance (e.g. trading or investment banking).

Serial and functionally localized model vs distributed, hierarchical model

In March 2017, Laurence T. Hunt and Benjamin Y. Hayden proposed an alternative mechanistic viewpoint on how we evaluate options and choose the best course of action. Many accounts of reward-based choice posit distinct component processes that are serial and functionally localized: the evaluation of options, the comparison of option values in the absence of any other factors, the selection of an appropriate action plan, and the monitoring of the outcome of the choice. Hunt and Hayden instead emphasized how several features of neuroanatomy may support the implementation of choice, including mutual inhibition in recurrent neural networks and the hierarchical organization of timescales for information processing across the cortex.

Loss aversion

One aspect of human decision-making is a strong aversion to potential loss. Under loss aversion, the perceived cost of a loss is experienced more intensely than an equivalent gain. For example, if there is a 50/50 chance of either winning $100 or losing $100 and a loss occurs, the accompanying reaction feels more like losing $200, that is, the sum of the $100 lost and the forgone possibility of winning $100. This was first described in prospect theory by Daniel Kahneman and Amos Tversky.

Prospect theory value function, originally from Daniel Kahneman and Amos Tversky, demonstrating how losses are felt more strongly than equivalent gains.
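
The asymmetry in the figure is often summarized by a value function of the following form; the parameter values are the commonly cited Tversky-Kahneman estimates and should be read as illustrative:

```python
# Sketch of the prospect-theory value function that captures loss aversion.
# alpha controls diminishing sensitivity, lam (lambda) scales losses; the
# values 0.88 and 2.25 are commonly cited estimates, used here illustratively.
def value(x, alpha=0.88, lam=2.25):
    if x >= 0:
        return x ** alpha
    return -lam * ((-x) ** alpha)

print(value(100))    # subjective value of a $100 gain  (~57.5)
print(value(-100))   # subjective value of a $100 loss  (~-129.5)
```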

One of the main controversies in understanding loss aversion is whether the phenomenon manifests in the brain, perhaps as increased attention and arousal with losses. Another area of research is whether loss aversion is evident in subcortical structures, such as the limbic system, thereby involving emotional arousal.

A basic controversy in loss aversion research is whether losses are actually experienced more negatively than equivalent gains, or are merely predicted to be more painful while actually being experienced equivalently. Neuroeconomic research has attempted to distinguish between these hypotheses by measuring different physiological changes in response to both loss and gain. Studies have found that skin conductance, pupil dilation and heart rate are all higher in response to monetary loss than to an equivalent gain. All three measures are involved in stress responses, so one might argue that losing a particular amount of money is experienced more strongly than gaining the same amount. On the other hand, in some of these studies there were no physiological signals of loss aversion. That may suggest that the effect of losses is merely on attention (what is known as loss attention); such attentional orienting responses also lead to increased autonomic signals.

Brain studies initially suggested that there is an increased rapid response in the medial prefrontal and anterior cingulate cortex following losses compared to gains, which was interpreted as a neural signature of loss aversion. However, subsequent reviews have noted that in this paradigm individuals do not actually show behavioral loss aversion, casting doubt on the interpretability of these findings. With respect to fMRI studies, while one study found no evidence for an increase in activation in areas related to negative emotional reactions in response to loss aversion, another found that individuals with damaged amygdalas lacked loss aversion even though they had normal levels of general risk aversion, suggesting that the behavior was specific to potential losses. These conflicting studies suggest that more research needs to be done to determine whether brain responses to losses are due to loss aversion or merely to an alerting or orienting aspect of losses, and to examine whether there are areas of the brain that respond specifically to potential losses.

Intertemporal choice

In addition to risk preference, another central concept in economics is intertemporal choice: decisions that involve costs and benefits distributed over time. Intertemporal choice research studies the expected utility that humans assign to events occurring at different times. The dominant model in economics explaining it is discounted utility (DU). DU assumes that humans have consistent time preferences and will assign value to events regardless of when they occur. Just as EU falls short in explaining risky decision-making, DU is inadequate in explaining intertemporal choice.

For example, DU assumes that people who value a bar of candy today more than 2 bars tomorrow will also value 1 bar received 100 days from now more than 2 bars received after 101 days. There is strong evidence against this last part in both humans and animals, and hyperbolic discounting has been proposed as an alternative model. Under this model, valuations fall very rapidly for small delay periods but then fall slowly for longer delay periods. This better explains why most people who would choose 1 candy bar now over 2 candy bars tomorrow would, in fact, choose 2 candy bars received after 101 days rather than 1 candy bar received after 100 days, contrary to what DU predicts.
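
The preference reversal in the candy-bar example can be reproduced with a short sketch contrasting exponential (DU-style) and hyperbolic discounting; the discount parameters are arbitrary choices made for illustration:

```python
# Sketch contrasting exponential (discounted-utility-style) and hyperbolic
# discounting with the candy-bar example. Parameter values are illustrative.
def exponential(value, delay_days, daily_factor=0.4):
    return value * daily_factor ** delay_days     # constant ratio per day

def hyperbolic(value, delay_days, k=1.5):
    return value / (1 + k * delay_days)

for model in (exponential, hyperbolic):
    prefers_now = model(1, 0) > model(2, 1)          # 1 bar today vs 2 tomorrow
    prefers_later = model(2, 101) > model(1, 100)    # 2 bars in 101 days vs 1 in 100
    print(model.__name__, prefers_now, prefers_later)
# exponential: True, False  -> time-consistent preferences
# hyperbolic:  True, True   -> preference reversal at long delays
```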

Neuroeconomic research in intertemporal choice is largely aimed at understanding what mediates observed behaviors such as future discounting and impulsively choosing smaller sooner rather than larger later rewards. The process of choosing between immediate and delayed rewards seems to be mediated by an interaction between two brain areas. In choices involving both primary (fruit juice) and secondary rewards (money), the limbic system is highly active when choosing the immediate reward, while the lateral prefrontal cortex is equally active when making either choice. Furthermore, the ratio of limbic to cortical activity decreases as a function of the amount of time until reward. This suggests that the limbic system, which forms part of the dopamine reward pathway, is most involved in making impulsive decisions, while the cortex is responsible for the more general aspects of the intertemporal decision process.

The neurotransmitter serotonin seems to play an important role in modulating future discounting. In rats, reducing serotonin levels increases future discounting while not affecting decision-making under uncertainty. It seems, then, that while the dopamine system is involved in probabilistic uncertainty, serotonin may be responsible for temporal uncertainty, since a delayed reward involves a potentially uncertain future. In addition to neurotransmitters, intertemporal choice is also modulated by hormones in the brain. In humans, a reduction in cortisol, a stress hormone released via the hypothalamic-pituitary-adrenal axis, is correlated with a higher degree of impulsivity in intertemporal choice tasks. Drug addicts tend to have lower levels of cortisol than the general population, which may explain why they seem to discount the future negative effects of taking drugs and opt for the immediate positive reward.

Social decision-making

While most research on decision-making tends to focus on individuals making choices outside of a social context, it is also important to consider decisions that involve social interactions. The types of behavior that decision theorists study are as diverse as altruism, cooperation, punishment, and retribution. One of the most frequently utilized tasks in social decision-making is the prisoner's dilemma.

In this situation, the payoff for a particular choice depends not only on the decision of the individual but also on that of the other individual playing the game. An individual can choose either to cooperate with their partner or to defect against the partner. Over the course of a typical game, individuals tend to prefer mutual cooperation even though defecting would yield a higher individual payout. This suggests that individuals are motivated not only by monetary gains but also by some reward derived from cooperating in social situations.
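
A standard textbook parameterization of the prisoner's dilemma payoffs (not taken from any specific study discussed here) shows why defection dominates individually even though mutual cooperation pays more than mutual defection:

```python
# Standard prisoner's dilemma payoff matrix, using a common textbook
# parameterization: temptation 5 > mutual cooperation 3 > mutual defection 1
# > sucker's payoff 0. Entries are (player A payoff, player B payoff).
PAYOFFS = {
    ("cooperate", "cooperate"): (3, 3),
    ("cooperate", "defect"):    (0, 5),
    ("defect",    "cooperate"): (5, 0),
    ("defect",    "defect"):    (1, 1),
}

def play(choice_a, choice_b):
    return PAYOFFS[(choice_a, choice_b)]

# Whatever the partner does, defecting pays the individual more (5 > 3, 1 > 0),
# yet mutual cooperation (3, 3) beats mutual defection (1, 1).
print(play("cooperate", "cooperate"), play("defect", "defect"))
```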

This idea is supported by neural imaging studies demonstrating a high degree of activation in the ventral striatum when individuals cooperate with another person but that this is not the case when people play the same prisoner's dilemma against a computer. The ventral striatum is part of the reward pathway, so this research suggests that there may be areas of the reward system that are activated specifically when cooperating in social situations. Further support for this idea comes from research demonstrating that activation in the striatum and the ventral tegmental area show similar patterns of activation when receiving money and when donating money to charity. In both cases, the level of activation increases as the amount of money increases, suggesting that both giving and receiving money results in neural reward.

An important aspect of social interactions such as the prisoner's dilemma is trust. The likelihood of one individual cooperating with another is directly related to how much the first individual trusts the second to cooperate; if the other individual is expected to defect, there is no reason to cooperate with them. Trust behavior may be related to the presence of oxytocin, a hormone involved in maternal behavior and pair bonding in many species. When oxytocin levels were increased in humans, they were more trusting of other individuals than a control group, even though their overall levels of risk-taking were unaffected, suggesting that oxytocin is specifically implicated in the social aspects of risk-taking. However, this research has recently been questioned.

One more important paradigm for neuroeconomic studies is the ultimatum game. In this game Player 1 receives a sum of money and decides how much to offer Player 2. Player 2 either accepts or rejects the offer. If Player 2 accepts, both players get the amounts proposed by Player 1; if Player 2 rejects, nobody gets anything. The rational strategy for Player 2 would be to accept any offer, because any amount is better than zero. However, it has been shown that people often reject offers that they consider unfair. Neuroimaging studies have indicated several brain regions that are activated in response to unfairness in the ultimatum game. They include the bilateral mid-anterior insula, anterior cingulate cortex (ACC), medial supplementary motor area (SMA), cerebellum and right dorsolateral prefrontal cortex (DLPFC). It has been shown that low-frequency repetitive transcranial magnetic stimulation of the DLPFC increases the likelihood of accepting unfair offers in the ultimatum game.
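
The game's structure can be sketched as follows; the "fairness-sensitive" responder rule is an illustrative stand-in for the rejection behavior observed in experiments, not a model taken from the literature:

```python
# Sketch of the ultimatum game described above. A purely money-maximizing
# Player 2 accepts any positive offer; the fairness-sensitive rule below is
# an illustrative stand-in for experimentally observed rejections.
def ultimatum(pot, offer, responder):
    """Return (player1_payoff, player2_payoff)."""
    if responder(offer, pot):
        return pot - offer, offer
    return 0, 0

rational = lambda offer, pot: offer > 0              # accept anything above zero
fairness = lambda offer, pot: offer >= 0.3 * pot     # reject offers below ~30%

print(ultimatum(10, 1, rational))   # (9, 1): accepted
print(ultimatum(10, 1, fairness))   # (0, 0): rejected as unfair
```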

Another issue in the field of neuroeconomics is the role of reputation acquisition in social decision-making. Social exchange theory claims that prosocial behavior originates from the intention to maximize social rewards and minimize social costs. In this view, approval from others may be seen as a significant positive reinforcer, i.e., a reward. Neuroimaging studies have provided evidence supporting this idea: processing of social rewards activates the striatum, especially the left putamen and left caudate nucleus, in the same fashion that these areas are activated during the processing of monetary rewards. These findings also support the so-called "common neural currency" idea, which assumes the existence of a shared neural basis for processing different types of reward.

Sexual decision-making

Regarding the choice of sexual partner, research has been conducted on humans and on nonhuman primates. Notably, Cheney & Seyfarth 1990, Deaner et al. 2005, and Hayden et al. 2007 suggest a persistent willingness to accept fewer physical goods or pay higher prices in return for access to socially high-ranking individuals, including physically attractive individuals, whereas increasingly high rewards are demanded when subjects are asked to relate to low-ranking individuals.

Cordelia Fine is best known for her research on gendered minds and sexual decision-making. In her book Testosterone Rex she critiques claims of sex differences in the brain and goes into detail on the economic costs and benefits of finding a partner, as interpreted and analysed by our brains. Her work showcases an interesting sub-topic of neuroeconomics.

The neurobiological basis for this preference includes neurons of the lateral intraparietal cortex (LIP), which is related to eye movement, and which is operative in situations of two-alternative forced choices.

Methodology

Behavioral economics experiments record the subject's decisions over various design parameters and use the data to generate formal models that predict performance. Neuroeconomics extends this approach by adding states of the nervous system to the set of explanatory variables. The goal of neuroeconomics is to help explain decisions and to enrich the data sets available for testing predictions.
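
A toy illustration of this methodological point, with entirely synthetic data: a choice model fit from a design parameter alone is compared with one that also includes a simulated neural regressor (the variable names and data-generating process are assumptions made for the example):

```python
# Toy illustration of adding a neural state variable to a behavioral model.
# All data are synthetic; the "neural signal" is simulated to carry extra
# information about the choice beyond the design parameter alone.
import numpy as np
rng = np.random.default_rng(0)

n = 500
offer = rng.uniform(0, 1, n)                   # design parameter (e.g., payoff)
neural = 0.8 * offer + rng.normal(0, 0.3, n)   # simulated brain signal
choice = (offer + 0.5 * neural + rng.normal(0, 0.3, n)) > 0.9

def r_squared(X, y):
    X = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - resid.var() / y.var()

y = choice.astype(float)
print(r_squared(offer[:, None], y))                    # behavior-only model
print(r_squared(np.column_stack([offer, neural]), y))  # adds the neural state
```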

Furthermore, neuroeconomic research is being used to understand and explain aspects of human behavior that do not conform to traditional economic models. While these behavior patterns are generally dismissed as 'fallacious' or 'illogical' by economists, neuroeconomic researchers are trying to determine the biological reasons for these behaviors. By using this approach, researchers may be able to find explanations for why people often act sub-optimally. Richard Thaler provides a prime example in his book Misbehaving, describing a scenario in which an appetiser is served before a meal and guests inadvertently fill up on it. Most people need the appetiser to be removed entirely in order to resist the temptation, whereas a rational agent would simply stop eating and wait for the meal. Temptation is just one of many irrationalities that have been ignored because of the difficulty of studying them.

Neurobiological research techniques

There are several different techniques that can be utilized to understand the biological basis of economic behavior. Neural imaging is used in human subjects to determine which areas of the brain are most active during particular tasks. Some of these techniques, such as fMRI or PET are best suited to giving detailed pictures of the brain which can give information about specific structures involved in a task. Other techniques, such as ERP (event-related potentials) and oscillatory brain activity are used to gain detailed knowledge of the time course of events within a more general area of the brain. If a specific region of the brain is suspected to be involved in a type of economic decision-making, researchers may use Transcranial Magnetic Stimulation (TMS) to temporarily disrupt that region, and compare the results to when the brain was allowed to function normally. More recently, there has been interest in the role that brain structure, such as white matter connectivity between brain areas, plays in determining individual differences in reward-based decision-making.

Neuroscience does not always involve observing the brain directly: mental and emotional states linked to decisions can also be inferred from physiological measurements such as skin conductance, heart rate, hormone levels, pupil dilation and muscle contraction (electromyography), especially of the face.

Neuroeconomics of addiction

In addition to studying areas of the brain, some studies are aimed at understanding the functions of different brain chemicals in relation to behavior. This can be done by either correlating existing chemical levels with different behavior patterns or by changing the amount of the chemical in the brain and noting any resulting behavioral changes. For example, the neurotransmitter serotonin seems to be involved in making decisions involving intertemporal choice while dopamine is utilized when individuals make judgments involving uncertainty. Furthermore, artificially increasing oxytocin levels increases trust behavior in humans while individuals with lower cortisol levels tend to be more impulsive and exhibit more future discounting.

In addition to studying the behavior of neurologically normal individuals in decision-making tasks, some research involves comparing that behavior to that of individuals with damage to areas of the brain expected to be involved in decision-making. In humans, this means finding individuals with specific types of neural impairment. Such case studies may involve, for example, amygdala damage, which leads to a decrease in loss aversion compared to controls. Scores from a survey measuring prefrontal cortex dysfunction are also correlated with general economic attitudes, such as risk preferences.

Previous studies have investigated the behavioral patterns of patients with psychiatric disorders such as schizophrenia, autism, depression and addiction to gain insight into their pathophysiology. In animal studies, highly controlled experiments can yield more specific information about the importance of brain areas to economic behavior. This can involve either lesioning entire brain areas and measuring the resulting behavior changes or using electrodes to measure the firing of individual neurons in response to particular stimuli.

Experiments

As explained in Methodology above, in a typical behavioral economics experiment a subject is asked to make a series of economic decisions. For example, a subject may be asked whether they prefer to have 45 cents or a gamble with a 50% chance of winning one dollar. Many experiments involve the participant completing games in which they make one-off or repeated decisions while physiological responses and reaction times are measured. For example, it is common to test people's relationship with the future, known as future discounting, by asking questions such as "would you prefer $10 today, or $50 a year from today?" The experimenter will then measure different variables in order to determine what is going on in the subject's brain as they make the decision. Some authors have demonstrated that neuroeconomics may be useful for describing not only experiments involving rewards but also common psychiatric syndromes involving addiction or delusion.

Criticisms

From the beginnings of neuroeconomics and throughout its swift academic rise, criticisms have been voiced over the field's validity and usefulness. Glenn W. Harrison and Emanuel Donchin have both criticized the emerging field, with the former publishing his concerns in 2008 in the paper 'Neuroeconomics: A Critical Reconsideration'. Harrison argues that much of the neuroscience-assisted insight into economic modelling is "academic marketing hype", and that the true substance of the field has yet to present itself and needs to be seriously reconsidered. He also notes that, methodologically, many studies in neuroeconomics are flawed by small sample sizes and limited applicability.

A review of neuroeconomics published in 2016 by Arkady Konovalov shared the sentiment that the field suffers from experimental shortcomings, primary among them a lack of one-to-one ties between specific brain regions and psychological constructs such as "value". The review notes that although early neuroeconomic fMRI studies assumed that specific brain regions were singularly responsible for one function in the decision-making process, those regions have subsequently been shown to be recruited by multiple different functions. The practice of reverse inference has therefore seen much less use, which has hurt the field. Instead, fMRI should not be used as a standalone methodology but should be collected alongside and connected to self-reports and behavioral data. The validity of using functional neuroimaging in consumer neuroscience can be improved by carefully designing studies, conducting meta-analyses, and connecting psychometric and behavioral data with neuroimaging data.

Ariel Rubinstein, an economist at Tel Aviv University, spoke about neuroeconomic research, saying "standard experiments provide little information about the procedures of choice as it is difficult to extrapolate from a few choice observations to the entire choice function. If we want to know more about human procedures of choice we need to look somewhere else". These comments echo a salient and consistent argument of traditional economists against the neuroeconomic approach: that non-choice data, such as response times, eye-tracking and the neural signals people generate during decision-making, should be excluded from economic analysis.

Other critiques have also included claims that neuroeconomics is "a field that oversells itself"; or that neuroeconomic studies "misunderstand and underestimate traditional economic models".

Applications

Currently, the real-world applications and predictions of neuroeconomics remain unknown or under-developed as the burgeoning field continues to grow. Some critics have argued that the accumulated research and its findings have so far produced little in the way of pertinent recommendations for economic policy-makers. But many neuroeconomists insist that the field's potential to enhance our understanding of the brain's machinery of decision-making may prove highly influential in the future.

In particular, the findings of specific neurological markers of individual preferences may have important implications for well-known economic models and paradigms. An example of this is the finding that an increase in computational capacity (likely related to increased gray matter volume) could lead to higher risk tolerance by loosening the constraints that govern subjective representations of probabilities and rewards in lottery tasks.

Economists are also looking to neuroeconomics for help in explaining aggregate group behavior that has market-level implications. For example, many researchers anticipate that neurobiological data may be used to detect when individuals or groups of individuals are likely to exhibit economically problematic behavior. This may be applied to the concept of market bubbles. These occurrences have major consequences in modern society, and regulators could gain substantial insight into their formation and into why they are so rarely predicted or prevented.

Neuroeconomic work has also seen a close relationship with academic investigations of addiction. Researchers acknowledged, in the 2010 publication 'Advances in the Neuroscience of Addiction: 2nd Edition', that the neuroeconomic approach serves as a "powerful new conceptual method that is likely to be critical for progress in understanding addictive behavior".

German neuroscientist Tania Singer spoke at the World Economic Forum in 2015 about her research on compassion training. While economics and neuroscience are largely split, her research is an example of how they can meld together. Her study revealed a preference shift toward prosocial behavior after three months of compassion training. She also demonstrated a structural change in the grey matter of the brain, indicating that new neural connections had formed as a result of the mental training. She showed that if economists utilised predictors other than consumption, they could model and predict a more diverse range of economic behaviors, and she advocated that neuroeconomics could vastly improve policymaking, since contexts can be created that predictably lead to positive behavioral outcomes, such as prosocial behavior when caring emotions are primed. Her research demonstrates the impact neuroeconomics could have on our individual psyches, our societal norms and our political landscape at large.

Neuromarketing is another applied example of a distinct discipline closely related to neuroeconomics. While broader neuroeconomics has more academic aims, since it studies the basic mechanisms of decision-making, neuromarketing is an applied sub-field which uses neuroimaging tools for market investigations. Insights derived from brain imaging technologies (fMRI) are typically used to analyse the brain's response to particular marketing stimuli.

Another neuroscientist, Emily Falk, has contributed to the neuroeconomics and neuromarketing fields by researching how the brain reacts to marketing aimed at evoking behavioral change. Specifically, her paper on anti-smoking advertisements highlighted the disparity between which advertisements we believe will be convincing and which the brain responds to most strongly. The advertisement that experts and a trial audience agreed was the most effective anti-smoking campaign actually elicited very little behavioral change in smokers. Meanwhile, the campaign ranked least likely to be effective by the experts and audience generated the strongest neural response in the medial prefrontal cortex and resulted in the largest number of people deciding to quit smoking. This suggests that our brains may often know better than we do which motivators lead to behavioral change. It also emphasises the importance of integrating neuroscience into mainstream and behavioral economics to generate more holistic models and accurate predictions. This research could have impacts in promoting healthier diets and more exercise, or in encouraging people to make behavioral changes that benefit the environment and reduce climate change.

Atmospheric model

From Wikipedia, the free encyclopedia
A 96-hour forecast of 850 mbar geopotential height and temperature from the Global Forecast System

In atmospheric science, an atmospheric model is a mathematical model constructed around the full set of primitive, dynamical equations which govern atmospheric motions. It can supplement these equations with parameterizations for turbulent diffusion, radiation, moist processes (clouds and precipitation), heat exchange, soil, vegetation, surface water, the kinematic effects of terrain, and convection. Most atmospheric models are numerical, i.e. they discretize equations of motion. They can predict microscale phenomena such as tornadoes and boundary layer eddies, sub-microscale turbulent flow over buildings, as well as synoptic and global flows. The horizontal domain of a model is either global, covering the entire Earth (or other planetary body), or regional (limited-area), covering only part of the Earth. Atmospheric models also differ in how they compute vertical fluid motions; some types of models are thermotropic, barotropic, hydrostatic, and non-hydrostatic. These model types are differentiated by their assumptions about the atmosphere, which must balance computational speed with the model's fidelity to the atmosphere it is simulating.

Forecasts are computed using mathematical equations for the physics and dynamics of the atmosphere. These equations are nonlinear and are impossible to solve exactly. Therefore, numerical methods obtain approximate solutions. Different models use different solution methods. Global models often use spectral methods for the horizontal dimensions and finite-difference methods for the vertical dimension, while regional models usually use finite-difference methods in all three dimensions. For specific locations, model output statistics use climate information, output from numerical weather prediction, and current surface weather observations to develop statistical relationships which account for model bias and resolution issues.

Types

Thermotropic

The main assumption made by the thermotropic model is that while the magnitude of the thermal wind may change, its direction does not change with respect to height, and thus the baroclinicity in the atmosphere can be simulated using the 500 mb (15 inHg) and 1,000 mb (30 inHg) geopotential height surfaces and the average thermal wind between them.

Barotropic

Barotropic models assume the atmosphere is nearly barotropic, which means that the direction and speed of the geostrophic wind are independent of height. In other words, there is no vertical wind shear of the geostrophic wind. It also implies that thickness contours (a proxy for temperature) are parallel to upper level height contours. In this type of atmosphere, high and low pressure areas are centers of warm and cold temperature anomalies. Warm-core highs (such as the subtropical ridge and Bermuda-Azores high) and cold-core lows have strengthening winds with height, with the reverse true for cold-core highs (shallow arctic highs) and warm-core lows (such as tropical cyclones). A barotropic model tries to solve a simplified form of atmospheric dynamics based on the assumption that the atmosphere is in geostrophic balance; that is, that the Rossby number of the air in the atmosphere is small. If the assumption is made that the atmosphere is divergence-free, the curl of the Euler equations reduces into the barotropic vorticity equation. This latter equation can be solved over a single layer of the atmosphere. Since the atmosphere at a height of approximately 5.5 kilometres (3.4 mi) is mostly divergence-free, the barotropic model best approximates the state of the atmosphere at a geopotential height corresponding to that altitude, which corresponds to the atmosphere's 500 mb (15 inHg) pressure surface.
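
For reference, the non-divergent form of the barotropic vorticity equation mentioned above is commonly written as follows, where ζ is the relative vorticity, f the Coriolis parameter and V the horizontal wind:

```latex
% Non-divergent barotropic vorticity equation: absolute vorticity
% (relative vorticity zeta plus the Coriolis parameter f) is conserved
% following the horizontal flow V.
\frac{\partial \zeta}{\partial t} + \mathbf{V} \cdot \nabla \left( \zeta + f \right) = 0
```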

Hydrostatic

Hydrostatic models filter out vertically moving acoustic waves from the vertical momentum equation, which significantly increases the time step that can be used within the model's run. This is known as the hydrostatic approximation. Hydrostatic models use either pressure or sigma-pressure vertical coordinates. Pressure coordinates intersect topography while sigma coordinates follow the contour of the land. The hydrostatic assumption is reasonable as long as the horizontal grid spacing is not too small, since at fine scales the assumption fails. Models which use the entire vertical momentum equation are known as nonhydrostatic. A nonhydrostatic model can be solved anelastically, meaning it solves the complete continuity equation for air assuming it is incompressible, or elastically, meaning it solves the complete continuity equation for air and is fully compressible. Nonhydrostatic models use altitude or sigma-altitude for their vertical coordinates. Altitude coordinates can intersect land while sigma-altitude coordinates follow the contours of the land.
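
A short sketch of the hydrostatic balance these models assume, integrated for an idealized isothermal atmosphere (the constant temperature is an assumption made purely for the example):

```python
# Sketch of the hydrostatic balance dp/dz = -rho * g that hydrostatic models
# assume, integrated here for an idealized isothermal atmosphere.
g = 9.81        # m s^-2, gravitational acceleration
R = 287.0       # J kg^-1 K^-1, gas constant for dry air
T = 270.0       # K, assumed constant column temperature (an idealization)

p = 100000.0    # Pa, surface pressure
dz = 100.0      # m, vertical step
for z in range(0, 5500, int(dz)):
    rho = p / (R * T)       # ideal gas law
    p -= rho * g * dz       # hydrostatic decrease of pressure with height

print(round(p))  # pressure near 5.5 km, roughly the 500 hPa level noted above
```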

History

The ENIAC main control panel at the Moore School of Electrical Engineering

The history of numerical weather prediction began in the 1920s through the efforts of Lewis Fry Richardson, who utilized procedures developed by Vilhelm Bjerknes. It was not until the advent of the computer and computer simulation that computation time was reduced to less than the forecast period itself. ENIAC created the first computer forecasts in 1950, and more powerful computers later increased the size of initial datasets and included more complicated versions of the equations of motion. In 1966, West Germany and the United States began producing operational forecasts based on primitive-equation models, followed by the United Kingdom in 1972 and Australia in 1977. The development of global forecasting models led to the first climate models. The development of limited area (regional) models facilitated advances in forecasting the tracks of tropical cyclones as well as air quality in the 1970s and 1980s.

Because the output of forecast models based on atmospheric dynamics requires corrections near ground level, model output statistics (MOS) were developed in the 1970s and 1980s for individual forecast points (locations). Even with the increasing power of supercomputers, the forecast skill of numerical weather models only extends to about two weeks into the future, since the density and quality of observations—together with the chaotic nature of the partial differential equations used to calculate the forecast—introduce errors which double every five days. The use of model ensemble forecasts since the 1990s helps to define the forecast uncertainty and extend weather forecasting farther into the future than otherwise possible.

Initialization

A WP-3D Orion weather reconnaissance aircraft in flight.
Weather reconnaissance aircraft, such as this WP-3D Orion, provide data that is then used in numerical weather forecasts.

The atmosphere is a fluid. As such, the idea of numerical weather prediction is to sample the state of the fluid at a given time and use the equations of fluid dynamics and thermodynamics to estimate the state of the fluid at some time in the future. The process of entering observation data into the model to generate initial conditions is called initialization. On land, terrain maps available at resolutions down to 1 kilometer (0.6 mi) globally are used to help model atmospheric circulations within regions of rugged topography, in order to better depict features such as downslope winds, mountain waves and related cloudiness that affects incoming solar radiation. The main inputs from country-based weather services are observations from devices (called radiosondes) in weather balloons that measure various atmospheric parameters and transmit them to a fixed receiver, as well as from weather satellites. The World Meteorological Organization acts to standardize the instrumentation, observing practices and timing of these observations worldwide. Stations either report hourly in METAR reports, or every six hours in SYNOP reports. These observations are irregularly spaced, so they are processed by data assimilation and objective analysis methods, which perform quality control and obtain values at locations usable by the model's mathematical algorithms. The data are then used in the model as the starting point for a forecast.

A variety of methods are used to gather observational data for use in numerical models. Sites launch radiosondes in weather balloons which rise through the troposphere and well into the stratosphere. Information from weather satellites is used where traditional data sources are not available. Commerce provides pilot reports along aircraft routes and ship reports along shipping routes. Research projects use reconnaissance aircraft to fly in and around weather systems of interest, such as tropical cyclones. Reconnaissance aircraft are also flown over the open oceans during the cold season into systems which cause significant uncertainty in forecast guidance, or are expected to be of high impact from three to seven days into the future over the downstream continent. Sea ice began to be initialized in forecast models in 1971. Efforts to involve sea surface temperature in model initialization began in 1972 due to its role in modulating weather in higher latitudes of the Pacific.

Computation

An example of 500 mbar geopotential height prediction from a numerical weather prediction model.

A model is a computer program that produces meteorological information for future times at given locations and altitudes. Within any model is a set of equations, known as the primitive equations, used to predict the future state of the atmosphere. These equations are initialized from the analysis data and rates of change are determined. These rates of change predict the state of the atmosphere a short time into the future, with each time increment known as a time step. The equations are then applied to this new atmospheric state to find new rates of change, and these new rates of change predict the atmosphere at a yet further time into the future. Time stepping is repeated until the solution reaches the desired forecast time. The length of the time step chosen within the model is related to the distance between the points on the computational grid, and is chosen to maintain numerical stability. Time steps for global models are on the order of tens of minutes, while time steps for regional models are between one and four minutes. The global models are run at varying times into the future. The UKMET Unified model is run six days into the future, the European Centre for Medium-Range Weather Forecasts model is run out to 10 days into the future, while the Global Forecast System model run by the Environmental Modeling Center is run 16 days into the future.
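
The relationship between time step and grid spacing can be illustrated with a toy one-dimensional advection integration using a first-order upwind finite-difference scheme; this is a generic numerical sketch, not any operational model's code:

```python
# Generic illustration of time stepping: 1-D advection of a tracer with an
# upwind finite-difference scheme. The time step is tied to the grid spacing
# through the CFL condition, mirroring the point that finer grids require
# shorter time steps. Values are illustrative.
import numpy as np

u = 10.0                      # m/s, constant advecting wind
dx = 25000.0                  # m, grid spacing (25 km)
dt = 0.8 * dx / u             # s, time step chosen to satisfy CFL (< dx/u)

x = np.arange(0, 2.0e6, dx)
q = np.exp(-((x - 5.0e5) / 1.0e5) ** 2)   # initial tracer blob

n_steps = int(24 * 3600 / dt)             # integrate forward about 24 hours
for _ in range(n_steps):
    # First-order upwind difference (wind is positive, so use the point behind).
    q = q - u * dt / dx * (q - np.roll(q, 1))

print(x[np.argmax(q)])   # the blob has moved roughly u * 24 h (~860 km) downstream
```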

The equations used are nonlinear partial differential equations which are impossible to solve exactly through analytical methods, with the exception of a few idealized cases. Therefore, numerical methods obtain approximate solutions. Different models use different solution methods: some global models use spectral methods for the horizontal dimensions and finite difference methods for the vertical dimension, while regional models and other global models usually use finite-difference methods in all three dimensions. The visual output produced by a model solution is known as a prognostic chart, or prog.
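
A quick, self-contained comparison of the two solution approaches, differentiating a smooth periodic field with a centered finite difference and with an FFT-based spectral method (purely illustrative):

```python
# Comparison of a finite-difference derivative and a spectral (FFT-based)
# derivative of a smooth periodic field. Illustrative only.
import numpy as np

n = 64
L = 2 * np.pi
x = np.arange(n) * L / n
f = np.sin(3 * x)                       # smooth periodic "field"
exact = 3 * np.cos(3 * x)

# Centered finite difference on the periodic grid.
fd = (np.roll(f, -1) - np.roll(f, 1)) / (2 * L / n)

# Spectral derivative: multiply Fourier coefficients by i*k.
k = np.fft.fftfreq(n, d=L / n) * 2 * np.pi
spec = np.real(np.fft.ifft(1j * k * np.fft.fft(f)))

print(np.abs(fd - exact).max())     # finite-difference error (noticeable)
print(np.abs(spec - exact).max())   # spectral error (near machine precision)
```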

Parameterization

Weather and climate model gridboxes have sides of between 5 kilometres (3.1 mi) and 300 kilometres (190 mi). A typical cumulus cloud has a scale of less than 1 kilometre (0.62 mi) and would require a grid even finer than this to be represented physically by the equations of fluid motion. Therefore, the processes that such clouds represent are parameterized, by schemes of varying sophistication. In the earliest models, if a column of air in a model gridbox was unstable (i.e., the bottom warmer than the top) then it would be overturned and the air in that vertical column mixed. More sophisticated schemes add enhancements, recognizing that only some portions of the box might convect and that entrainment and other processes occur. Weather models that have gridboxes with sides between 5 kilometres (3.1 mi) and 25 kilometres (16 mi) can explicitly represent convective clouds, although they still need to parameterize cloud microphysics. The formation of large-scale (stratus-type) clouds is more physically based: they form when the relative humidity reaches some prescribed value. Still, sub-grid-scale processes need to be taken into account. Rather than assuming that clouds form at 100% relative humidity, the cloud fraction can be related to a critical relative humidity of 70% for stratus-type clouds, and at or above 80% for cumuliform clouds, reflecting the sub-grid-scale variation that would occur in the real world.
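
A minimal sketch of a critical-relative-humidity cloud-fraction scheme of the kind described above; the linear ramp and thresholds are illustrative, and operational schemes differ in detail:

```python
# Sketch of a simple critical-relative-humidity cloud-fraction
# parameterization. The linear ramp and the 0.7 threshold are illustrative.
def cloud_fraction(rh, rh_crit=0.7):
    """Map gridbox-mean relative humidity (0-1) to a cloud fraction (0-1)."""
    if rh <= rh_crit:
        return 0.0
    # Fraction grows from 0 at the critical value to 1 at saturation.
    return min(1.0, (rh - rh_crit) / (1.0 - rh_crit))

for rh in (0.6, 0.75, 0.9, 1.0):
    print(rh, round(cloud_fraction(rh), 2))
```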

The amount of solar radiation reaching ground level in rugged terrain, or due to variable cloudiness, is parameterized as this process occurs on the molecular scale. Also, the grid size of the models is large when compared to the actual size and roughness of clouds and topography. Sun angle as well as the impact of multiple cloud layers is taken into account. Soil type, vegetation type, and soil moisture all determine how much radiation goes into warming and how much moisture is drawn up into the adjacent atmosphere. Thus, they are important to parameterize.

Domains

The horizontal domain of a model is either global, covering the entire Earth, or regional, covering only part of the Earth. Regional models also are known as limited-area models, or LAMs. Regional models use finer grid spacing to resolve explicitly smaller-scale meteorological phenomena, since their smaller domain decreases computational demands. Regional models use a compatible global model for initial conditions of the edge of their domain. Uncertainty and errors within LAMs are introduced by the global model used for the boundary conditions of the edge of the regional model, as well as within the creation of the boundary conditions for the LAMs itself.

The vertical coordinate is handled in various ways. Some models, such as Richardson's 1922 model, use geometric height (z) as the vertical coordinate. Later models substituted the geometric coordinate with a pressure coordinate system, in which the geopotential heights of constant-pressure surfaces become dependent variables, greatly simplifying the primitive equations. This works because pressure decreases monotonically with height through the Earth's atmosphere. The first model used for operational forecasts, the single-layer barotropic model, used a single pressure coordinate at the 500-millibar (15 inHg) level, and thus was essentially two-dimensional. High-resolution models (also called mesoscale models), such as the Weather Research and Forecasting model, tend to use normalized pressure coordinates referred to as sigma coordinates.
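
A tiny sketch of the sigma coordinate mentioned above, written here in its simplest form as pressure normalized by surface pressure:

```python
# Sketch of the normalized-pressure (sigma) vertical coordinate in its
# simplest form: sigma = p / p_surface, so sigma = 1 at the ground
# (whatever its elevation) and decreases toward 0 at the top of the atmosphere.
def sigma(pressure_hpa, surface_pressure_hpa):
    return pressure_hpa / surface_pressure_hpa

print(sigma(500, 1000))   # 0.5 over a sea-level point
print(sigma(500, 700))    # ~0.71 over high terrain; the coordinate follows the land
```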

Global versions

Some of the better known global numerical models are:

Regional versions

Some of the better known regional numerical models are:

  • WRF The Weather Research and Forecasting model was developed cooperatively by NCEP, NCAR, and the meteorological research community. WRF has several configurations, including:
    • WRF-NMM The WRF Nonhydrostatic Mesoscale Model is the primary short-term weather forecast model for the U.S., replacing the Eta model.
    • WRF-ARW Advanced Research WRF developed primarily at the U.S. National Center for Atmospheric Research (NCAR)
  • HARMONIE-Climate (HCLIM) is a limited-area climate model based on the HARMONIE model developed by a large consortium of European weather forecasting and research institutes. It is a model system that, like WRF, can be run in many configurations, including at high resolution with the non-hydrostatic Arome physics or at lower resolutions with hydrostatic physics based on the ALADIN physical schemes. It has mostly been used in Europe and the Arctic for climate studies, including 3 km downscaling over Scandinavia and studies of extreme weather events.
  • RACMO was developed at the Netherlands Meteorological Institute, KNMI and is based on the dynamics of the HIRLAM model with physical schemes from the IFS
    • RACMO2.3p2 is a polar version of the model, developed at the University of Utrecht, that is used in many studies to provide the surface mass balance of the polar ice sheets
  • MAR (Modele Atmospherique Regionale) is a regional climate model developed at the University of Grenoble in France and the University of Liege in Belgium.
  • HIRHAM5 is a regional climate model developed at the Danish Meteorological Institute and the Alfred Wegener Institute in Potsdam. It is also based on the HIRLAM dynamics with physical schemes based on those in the ECHAM model. Like the RACMO model HIRHAM has been used widely in many different parts of the world under the CORDEX scheme to provide regional climate projections. It also has a polar mode that has been used for polar ice sheet studies in Greenland and Antarctica
  • NAM The term North American Mesoscale model refers to whatever regional model NCEP operates over the North American domain. NCEP began using this designation system in January 2005. Between January 2005 and May 2006 the Eta model used this designation. Beginning in May 2006, NCEP began to use the WRF-NMM as the operational NAM.
  • RAMS the Regional Atmospheric Modeling System developed at Colorado State University for numerical simulations of atmospheric meteorology and other environmental phenomena on scales from meters to hundreds of kilometers – now supported in the public domain
  • MM5 The Fifth Generation Penn State/NCAR Mesoscale Model
  • ARPS the Advanced Regional Prediction System developed at the University of Oklahoma is a comprehensive multi-scale nonhydrostatic simulation and prediction system that can be used for regional-scale weather prediction up to tornado-scale simulation and prediction. Advanced radar data assimilation for thunderstorm prediction is a key part of the system.
  • HIRLAM High Resolution Limited Area Model, developed by a European NWP research consortium co-funded by 10 European weather services. The meso-scale HIRLAM model is known as HARMONIE and is developed in collaboration with Meteo France and the ALADIN consortium.
  • GEM-LAM Global Environmental Multiscale Limited Area Model, the high resolution 2.5 km (1.6 mi) GEM by the Meteorological Service of Canada (MSC)
  • ALADIN The high-resolution limited-area hydrostatic and non-hydrostatic model developed and operated by several European and North African countries under the leadership of Météo-France
  • COSMO The COSMO Model, formerly known as LM, aLMo or LAMI, is a limited-area non-hydrostatic model developed within the framework of the Consortium for Small-Scale Modelling (Germany, Switzerland, Italy, Greece, Poland, Romania, and Russia).
  • Meso-NH The Meso-NH Model is a limited-area non-hydrostatic model developed jointly by the Centre National de Recherches Météorologiques and the Laboratoire d'Aérologie (France, Toulouse) since 1998. Its application is from mesoscale to centimetric scales weather simulations.

Model output statistics

Because forecast models based upon the equations for atmospheric dynamics do not perfectly determine weather conditions near the ground, statistical corrections were developed to attempt to resolve this problem. Statistical models were created based upon the three-dimensional fields produced by numerical weather models, surface observations, and the climatological conditions for specific locations. These statistical models are collectively referred to as model output statistics (MOS), and were developed by the National Weather Service for their suite of weather forecasting models. The United States Air Force developed its own set of MOS based upon their dynamical weather model by 1983.

Model output statistics differ from the perfect prog technique, which assumes that the output of numerical weather prediction guidance is perfect. MOS can correct for local effects that cannot be resolved by the model due to insufficient grid resolution, as well as for model biases. Forecast parameters within MOS include maximum and minimum temperatures, the percentage chance of rain within a several-hour period, expected precipitation amount, the chance that precipitation will be frozen, the chance of thunderstorms, cloudiness, and surface winds.
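
As a rough illustration of the idea (not the National Weather Service's operational MOS equations), a MOS-type correction can be sketched as a multiple linear regression that maps raw model output at a station to the observed quantity; the predictors and numbers below are hypothetical.

import numpy as np

# Historical training pairs at one station (hypothetical values):
# raw model 2-m temperature (C), raw model wind speed (m/s) -> observed 2-m temperature (C)
model_t2m  = np.array([10.2, 15.1, 20.3, 25.4,  5.0,  0.3])
model_wind = np.array([ 3.0,  5.2,  2.1,  4.0,  6.3,  1.5])
obs_t2m    = np.array([11.0, 15.8, 21.5, 26.0,  5.9,  1.2])

# Multiple linear regression with an intercept, fitted by least squares.
X = np.column_stack([np.ones_like(model_t2m), model_t2m, model_wind])
coeffs, *_ = np.linalg.lstsq(X, obs_t2m, rcond=None)

# Apply the fitted statistical correction to a new raw model forecast.
new_raw = np.array([1.0, 18.0, 4.5])   # [intercept term, model 2-m temperature, model wind]
print("MOS-corrected 2-m temperature: %.1f C" % (new_raw @ coeffs))

Operational MOS uses many more predictors and long development samples, but the principle is the same: regress observations on model output to remove systematic model bias and unresolved local effects.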

Applications

Climate modeling

In 1956, Norman Phillips developed a mathematical model that realistically depicted monthly and seasonal patterns in the troposphere. This was the first successful climate model. Several groups then began working to create general circulation models.[63] The first general circulation climate model combined oceanic and atmospheric processes and was developed in the late 1960s at the Geophysical Fluid Dynamics Laboratory, a component of the U.S. National Oceanic and Atmospheric Administration.

By 1975, Manabe and Wetherald had developed a three-dimensional global climate model that gave a roughly accurate representation of the current climate. Doubling CO2 in the model's atmosphere gave a roughly 2 °C rise in global temperature. Several other kinds of computer models gave similar results: it was impossible to make a model that gave something resembling the actual climate and not have the temperature rise when the CO2 concentration was increased.

By the early 1980s, the U.S. National Center for Atmospheric Research had developed the Community Atmosphere Model (CAM), which can be run by itself or as the atmospheric component of the Community Climate System Model. The latest update (version 3.1) of the standalone CAM was issued on 1 February 2006. In 1986, efforts began to initialize and model soil and vegetation types, resulting in more realistic forecasts. Coupled ocean-atmosphere climate models, such as the Hadley Centre for Climate Prediction and Research's HadCM3 model, are being used as inputs for climate change studies.

Limited area modeling

Model spread with Hurricane Ernesto (2006) within the National Hurricane Center limited area models

Air pollution forecasts depend on atmospheric models to provide fluid flow information for tracking the movement of pollutants. In 1970, a private company in the U.S. developed the regional Urban Airshed Model (UAM), which was used to forecast the effects of air pollution and acid rain. In the mid- to late-1970s, the United States Environmental Protection Agency took over development of the UAM and then used the results from a regional air pollution study to improve it. Although the UAM was developed for California, it was used during the 1980s elsewhere in North America, Europe, and Asia.

The Movable Fine-Mesh model, which began operating in 1978, was the first tropical cyclone forecast model to be based on atmospheric dynamics. Despite the constantly improving dynamical model guidance made possible by increasing computational power, it was not until the 1980s that numerical weather prediction (NWP) showed skill in forecasting the track of tropical cyclones, and not until the 1990s that NWP consistently outperformed statistical or simple dynamical models. Predicting the intensity of tropical cyclones using NWP has also been challenging: as of 2009, dynamical guidance remained less skillful than statistical methods.

Idealized greenhouse model

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Idealized_greenhouse_model
A schematic representation of a planet's radiation balance with its parent star and the rest of space. Thermal radiation absorbed and emitted by the idealized atmosphere can raise the equilibrium surface temperature.

The temperatures of a planet's surface and atmosphere are governed by a delicate balance of their energy flows. The idealized greenhouse model is based on the fact that certain gases in the Earth's atmosphere, including carbon dioxide and water vapour, are transparent to the high-frequency solar radiation but much more opaque to the lower-frequency infrared radiation leaving Earth's surface. Heat is thus easily let in, but is partially trapped by these gases as it tries to leave. Rather than simply accumulating that heat indefinitely, the gases of the atmosphere must, by Kirchhoff's law of thermal radiation, re-emit the infrared energy they absorb, and they do so at long infrared wavelengths, both upwards into space and downwards back towards the Earth's surface. In the long term, the planet's thermal inertia is surmounted and a new thermal equilibrium is reached when all energy arriving on the planet leaves again at the same rate. In this steady-state model, the greenhouse gases cause the surface of the planet to be warmer than it would be without them, in order for a balanced amount of heat energy to finally be radiated out into space from the top of the atmosphere.

Essential features of this model were first published by Svante Arrhenius in 1896. It has since become a common introductory "textbook model" of the radiative heat transfer physics underlying Earth's energy balance and the greenhouse effect. The planet is idealized by the model as being functionally "layered" with regard to a sequence of simplified energy flows, but dimensionless (i.e. a zero-dimensional model) in terms of its mathematical space. The layers include a surface with constant temperature Ts and an atmospheric layer with constant temperature Ta. For diagrammatic clarity, a gap can be depicted between the atmosphere and the surface. Alternatively, Ts could be interpreted as a temperature representative of the surface and the lower atmosphere, and Ta could be interpreted as the temperature of the upper atmosphere, also called the skin temperature. In order to justify that Ta and Ts remain constant over the planet, strong oceanic and atmospheric currents can be imagined to provide plentiful lateral mixing. Furthermore, the temperatures are understood to be multi-decadal averages such that any daily or seasonal cycles are insignificant.

Simplified energy flows

The model finds the values of Ts and Ta that make the outgoing radiative power escaping the top of the atmosphere equal to the absorbed radiative power of sunlight. When applied to a planet like Earth, the outgoing radiation will be longwave and the sunlight will be shortwave. These two streams of radiation have distinct emission and absorption characteristics. In the idealized model, the atmosphere is assumed to be completely transparent to sunlight. The planetary albedo αP is the fraction of the incoming solar flux that is reflected back to space (since the atmosphere is assumed totally transparent to solar radiation, it does not matter whether this albedo is imagined to be caused by reflection at the surface of the planet, at the top of the atmosphere, or a mixture of both). The flux density of the incoming solar radiation is specified by the solar constant S0. For application to planet Earth, appropriate values are S0 = 1366 W m−2 and αP = 0.30. Accounting for the fact that the surface area of a sphere is 4 times the area of its intercept (its shadow), the average incoming radiation is S0/4.
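
For example, with S0 = 1366 W m−2 and αP = 0.30, the average absorbed solar flux is (1 − 0.30) × 1366/4 ≈ 239 W m−2.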

For longwave radiation, the surface of the Earth is assumed to have an emissivity of 1 (i.e. it is a black body in the infrared, which is realistic). The surface emits a radiative flux density F according to the Stefan–Boltzmann law:

F = σTs^4

where σ is the Stefan–Boltzmann constant. A key to understanding the greenhouse effect is Kirchhoff's law of thermal radiation. At any given wavelength the absorptivity of the atmosphere will be equal to the emissivity. Radiation from the surface could be in a slightly different portion of the infrared spectrum than the radiation emitted by the atmosphere. The model assumes that the average emissivity (absorptivity) is identical for either of these streams of infrared radiation, as they interact with the atmosphere. Thus, for longwave radiation, one symbol ε denotes both the emissivity and absorptivity of the atmosphere, for any stream of infrared radiation.

Idealized greenhouse model with an isothermal atmosphere. The blue arrows denote shortwave (solar) radiative flux density and the red arrow denotes longwave (terrestrial) radiative flux density. The radiation streams are shown with lateral displacement for clarity; they are collocated in the model. The atmosphere, which interacts only with the longwave radiation, is indicated by the layer within the dashed lines. A specific solution is depicted for ε=0.78 and αp=0.3, representing Planet Earth. The numbers in the parentheses indicate the flux densities as a percent of S0/4.
The equilibrium solution with ε=0.82. The increase by Δε=0.04 corresponds to doubling carbon dioxide and the associated positive feedback on water vapor.
The equilibrium solution with no greenhouse effect: ε=0

The infrared flux density out of the top of the atmosphere is computed as:

F↑ = εσTa^4 + (1 − ε)σTs^4

In the last term, ε represents the fraction of upward longwave radiation from the surface that is absorbed, the absorptivity of the atmosphere. The remaining fraction (1-ε) is transmitted to space through an atmospheric window. In the first term on the right, ε is the emissivity of the atmosphere, the adjustment of the Stefan–Boltzmann law to account for the fact that the atmosphere is not optically thick. Thus ε plays the role of neatly blending, or averaging, the two streams of radiation in the calculation of the outward flux density.

The energy balance solution

Zero net radiation leaving the top of the atmosphere requires:

(1 − αP) S0/4 = εσTa^4 + (1 − ε)σTs^4

Zero net radiation entering the surface requires:

(1 − αP) S0/4 + εσTa^4 = σTs^4

Energy equilibrium of the atmosphere can be either derived from the two above equilibrium conditions, or independently deduced:

εσTs^4 = 2εσTa^4

Note the important factor of 2, resulting from the fact that the atmosphere radiates both upward and downward. Thus the ratio of Ta to Ts is independent of ε:

Ta/Ts = (1/2)^(1/4) ≈ 0.841

Thus Ta can be expressed in terms of Ts, and a solution is obtained for Ts in terms of the model input parameters:

(1 − αP) S0/4 = (1 − ε/2) σTs^4

or

Ts = [ (1 − αP) S0 / (4σ (1 − ε/2)) ]^(1/4)

The solution can also be expressed in terms of the effective emission temperature Te, which is the temperature that characterizes the outgoing infrared flux density F, as if the radiator were a perfect radiator obeying F = σTe^4. This is easy to conceptualize in the context of the model. Te is also the solution for Ts, for the case of ε=0, or no atmosphere:

Te = [ (1 − αP) S0 / (4σ) ]^(1/4)

With the definition of Te:

Ts = Te (1 − ε/2)^(−1/4)

For a perfect greenhouse, with no radiation escaping from the surface, or ε=1:

Ts = 2^(1/4) Te ≈ 1.19 Te,   Ta = Te

Application to Earth

Using the parameters defined above to be appropriate for Earth,

Te = 254.8 K = −18.3 °C = −1 °F

For ε=1:

Ts = 303.0 K = 29.9 °C = 86 °F,   Ta = Te = 254.8 K

For ε=0.78,

Ts = 288.3 K = 15.2 °C = 59 °F,   Ta = 242.4 K.

This value of Ts happens to be close to the published value of 287.2 K for the average global "surface temperature" based on measurements. ε=0.78 implies that 22% of the surface radiation escapes directly to space, consistent with the statement of 15% to 30% escaping in the greenhouse effect.

The radiative forcing for doubling carbon dioxide is 3.71 W m−2, in a simple parameterization; this is also the value endorsed by the IPCC. From the equation for F↑,

ΔF↑ = Δε (σTa^4 − σTs^4)

Using the values of Ts and Ta for ε=0.78 gives ΔF↑ = −3.71 W m−2 with Δε = 0.019. Thus a change of ε from 0.78 to 0.80 is consistent with the radiative forcing from a doubling of carbon dioxide. For ε=0.80,

Ts = 289.5 K = 16.4 °C,   Ta = 243.4 K

Thus this model predicts a global warming of ΔTs = 1.2 K for a doubling of carbon dioxide. A typical prediction from a GCM is 3 K of surface warming, primarily because the GCM allows for positive feedback, notably from increased water vapor. A simple surrogate for including this feedback process is to posit an additional increase of Δε = 0.02, for a total Δε = 0.04, to approximate the effect of the increase in water vapor that would be associated with an increase in temperature. This idealized model then predicts a global warming of ΔTs = 2.4 K for a doubling of carbon dioxide, roughly consistent with the IPCC.
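
The numbers in this section can be reproduced with a few lines of code. The following is a minimal sketch of the zero-dimensional model as described above, using only the constants already given in the text (S0 = 1366 W m−2, αP = 0.30); the function and variable names are chosen for illustration.

SIGMA = 5.670374419e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
S0 = 1366.0              # solar constant, W m^-2
ALBEDO = 0.30            # planetary albedo alpha_P

def surface_temperature(eps):
    """Equilibrium surface temperature Ts for a given atmospheric emissivity eps."""
    absorbed = (1.0 - ALBEDO) * S0 / 4.0      # average absorbed solar flux, ~239 W m^-2
    Te = (absorbed / SIGMA) ** 0.25           # effective emission temperature
    return Te * (1.0 - eps / 2.0) ** -0.25    # Ts = Te (1 - eps/2)^(-1/4)

for eps in (0.0, 0.78, 0.80, 0.82, 1.0):
    print("eps = %.2f  Ts = %.1f K" % (eps, surface_temperature(eps)))

# Warming for doubled CO2 (delta eps = 0.02) and with the water-vapour surrogate (delta eps = 0.04):
print("dTs, CO2 only:       %.1f K" % (surface_temperature(0.80) - surface_temperature(0.78)))
print("dTs, CO2 + feedback: %.1f K" % (surface_temperature(0.82) - surface_temperature(0.78)))

Running this reproduces the values in the table below (254.8 K, 288.3 K, 289.5 K, 290.7 K and 303.0 K) as well as the 1.2 K and 2.4 K warming estimates discussed above.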

Tabular summary with K, C, and F units

ε       Ts (K)    Ts (°C)   Ts (°F)
0       254.8     -18.3     -1
0.78    288.3      15.2     59
0.80    289.5      16.4     61
0.82    290.7      17.6     64
1       303.0      29.9     86

Extensions

The one-level atmospheric model can be readily extended to a multiple-layer atmosphere, in which case the equations for the temperatures become a series of coupled equations. These simple energy-balance models always predict temperature decreasing away from the surface, with all levels increasing in temperature as greenhouse gases are added. Neither of these effects is fully realistic: in the real atmosphere temperatures increase above the tropopause, and temperatures in that layer are predicted (and observed) to decrease as GHGs are added. This is directly related to the non-greyness of the real atmosphere.
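
As a rough numerical illustration (a further simplification that assumes every layer is fully opaque in the infrared, ε = 1, rather than solving the general coupled grey-layer equations of the text), an N-layer version gives Ts = Te (N + 1)^(1/4) and layer temperatures Tk = Te k^(1/4), counting k = 1 at the topmost layer:

# Sketch of an N-layer extension, assuming each layer is fully opaque (eps = 1).
Te = 254.8  # effective emission temperature for Earth, K (from the section above)

for n_layers in (1, 2, 3):
    layer_temps = [Te * k ** 0.25 for k in range(1, n_layers + 1)]   # top -> bottom
    Ts = Te * (n_layers + 1) ** 0.25                                  # surface temperature
    print("%d layer(s): layers (top to bottom) = %s K, Ts = %.1f K"
          % (n_layers, [round(t, 1) for t in layer_temps], Ts))

In this opaque-layer caricature, each added layer warms the surface and temperature falls monotonically away from the surface, illustrating the qualitative behaviour described above; for N = 1 it recovers the ε = 1 result Ts = 2^(1/4) Te = 303.0 K.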

An interactive version of a model with two atmospheric layers, which accounts for convection, is available online.

CICE (sea ice model)

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/CICE_(sea_ice_model)
CICE (/saɪs ...