Thursday, March 12, 2026

Technocracy

From Wikipedia, the free encyclopedia

Technocracy is an expert-based type of governance. In its strongest sense, it is a form of government in which decisions across all sectors and policy domains follow evidence-based, efficiency-oriented procedures grounded in scientific methods and instrumental rationality. In a weaker sense, the term denotes hybrid models that delegate specific functions to experts or implement expertise-driven decision procedures in areas such as central banking, public health, or environmental regulation.

Technocracy is often regarded as a challenge to democracy since it grounds political legitimacy in elite expertise, while democracy justifies itself as the rule of the people. One approach to resolving their tensions suggests that democratically elected officials choose political goals, while technocrats choose the most efficient ways to realize those goals, serving as advisors and implementers. Technocracy is closely related to meritocracy, expertocracy, epistocracy, managerialism, and algocracy. It contrasts with populism, which frames politics as a struggle between morally pure people and a self-serving elite.

Proponents of technocracy argue that scientific expertise and evidence-guided policy produce better outcomes. They hold that its value-neutral approach is best suited to promote the long-term welfare of society as a whole. Opponents contend that technocracy is anti-democratic by excluding large segments of the population from politics, that the claim to neutrality masks value-laden choices, and that science alone is insufficient for political decisions.

Early contributions to the idea of technocracy appear in the utopian visions of Plato, Francis Bacon, and Henri de Saint-Simon. The notion of science- and rationality-based governance gained prominence during the Enlightenment and became increasingly influential as industrial and post-industrial transformations made societies more complex. Notable examples of technocratic influence are found in the North American technocracy movement of the 1930s, Soviet and Chinese centralized planning, developmental efforts in Latin America and Singapore, and the institutional architecture of the European Union.

Definition

Technocracy is a form of government or an approach to political action that emphasizes expertise, but its precise definition is disputed. One characterization focuses on who makes decisions, defining technocracy as rule by experts in contrast to democracy as rule by the people. Another centers on decision procedures rather than rulers, highlighting the role of technical skills, scientific evidence, and instrumental rationality. More abstractly, technocracy can be defined as the view that the main source of political legitimacy is expert-driven reasoning and specialized knowledge, rather than popular will or hereditary entitlement.

In its strongest sense, technocracy means that all main governmental operations follow technocratic principles. Because pure technocracy is rare, the term is often used in a weaker sense to describe leadership styles or institutions that apply such principles within other forms of government, such as a democratically elected leader who relies heavily on expert advice, or a central bank in which unelected officials set monetary policy based on technical criteria.

A hallmark of technocracy is its science-focused approach. It frames policy objectives, resource allocation, and administrative procedures in terms of evidence-based and efficiency-oriented processes that follow a rigorous methodology privileging quantifiable outcomes. It typically employs cost-benefit analysis and risk management, intended to improve long-term prosperity of society as a whole rather than serving the partisan interests of specific groups. Advocates emphasize the method's objective and impartial character, but its claims to value-neutrality and freedom from ideology are contested. Technocracy is normally considered a form of elitism since large parts of the population may lack the technical knowledge and specialized skills required to participate in complex policy decisions. Anti-pluralism is another frequently discussed feature. It reflects the commitment to the singular interest of the long-term social welfare of the whole community in contrast to political processes that mediate among competing interests and preferences of distinct groups.

A technocrat is someone who supports technocracy. A technotopia is an idealized society or government model in which all major aspects of governance are guided by technical expertise. The term technocracy comes from the ancient Greek words τέχνη (tekhne), meaning 'skill' or 'craft', and κράτος (kratos), meaning 'rule'. Its earliest known use dates to the 1890s. The engineer William H. Smyth is usually credited with coining the modern meaning of the term in 1919. The term's popularity increased in the 1930s as part of the technocratic movement.

Areas and approaches

Technocratic principles can be applied to different areas of governance and implemented in several ways. In its broadest form, a pure technocracy would be a society in which decision-making in all sectors and policy domains is guided by experts following empirical evidence. However, this theoretical ideal is not found in real-world conditions, where technocracy usually manifests as a hybrid model integrated with other approaches to governance. Some policy areas are more amenable to technocratic management than others, particularly those with clearly defined policy goals, quantifiable metrics, and accessible evidence.

[Image: The main seat of the European Central Bank in Frankfurt. Central banks, such as the European Central Bank, often rely on financial experts operating relatively insulated from electoral processes.]

One key area is the central banking system, in which financial experts often operate independently of electoral processes. Central banks are responsible for monetary policy and financial stability. For example, they typically control the base interest rate to promote price stability, high employment, and economic growth. Their decisions are based on diverse technical indicators, such as inflation and employment figures, weighing trade-offs among their objectives to achieve a prudent balance. Other economy-related aspects of technocratic governance include fiscal policy, taxation, and the management of financial crises.

Technocratic principles are also applied to the area of healthcare. Expert panels may implement reforms to medical training, increase reliance on medical technology, design and evaluate vaccination programs, adjust drug policy, and coordinate responses during health crises, such as pandemics. Technocratic concerns about efficient resource allocation may depersonalize the doctor–patient relationship and can lead to reductions in public health services by cutting cost-ineffective interventions to prioritize high-impact treatments.

In the field of education, the influence of technocracy can take various forms. Experts may revise curricula and propose reforms, often with an emphasis on centralization while reducing the autonomy of local institutions. This approach is usually accompanied by standardization, such as standardized testing and quantifiable metrics to track educational progress. Another facet is the integration of modern technology into classrooms, including computers, the internet, and artificial intelligence. A further intersection is found in educational institutions or programs that prepare elite students for expert roles in governance, such as specialized schools or competitive selection processes that filter candidates for senior civil service careers.

In environmental policy, technocratic principles are used to adjust regulations to ecological issues, ranging from climate change to water quality and biodiversity conservation. Other relevant areas include urban planning, energy infrastructure, and research funding.

In addition to the variety of policy areas, there are also different ways of implementing technocratic principles. One model casts technocrats as advisors who collect and interpret empirical evidence, devise policy options, and evaluate their advantages and disadvantages. In this role, technocrats do not hold immediate decision-making powers but wield influence indirectly by shaping how leaders understand and choose among alternatives. This approach contrasts with models in which technocrats have direct authority. Technocracy can also shape governance by implementing decision-making procedures and bureaucratic systems, shifting the emphasis from who holds office to how decisions are made. In this case, technocratic power resides more in administrative mechanisms and organizational arrangements than in individual experts.

Another approach focuses on information technology to assist or automate political decisions. For example, algocracy relies on algorithms and artificial intelligence to analyze data and devise policies. Proposed models vary in the role of humans, ranging from humans as primary decision-makers assisted by artificial intelligence to fully automated systems. Cyberocracy, a similar concept, envisions a future form of government in which information and access to information technology are the primary sources of political power.

Technocrats typically aim for the long-term prosperity of the community as a whole, but this goal can be interpreted in different ways. The Keynesian model seeks to achieve it through economic growth and fair redistribution, relying on an extensive bureaucracy and comprehensive planning. Neoliberal approaches, by contrast, seek to promote community well-being through competition among individuals and organizations, centering technocratic policies on competitive frameworks and risk management.

Relation to other political ideologies

Democracy

[Image: Jürgen Habermas, who argued that expert knowledge should be integrated into democratic processes without replacing them.]

Various academic discussions of technocracy examine its relation to other political ideologies, such as democracy. Democracy vests political power in the people. Citizens have the most immediate control in direct democracy, where they participate by voting on laws and policies. In indirect democracy, they elect representatives who decide on the public's behalf.

Technocracy is often framed as a challenge or alternative to democracy. This view emphasizes their contrasting principles of political legitimacy and decision-making. Technocracy grounds authority in expertise and technical knowledge, resulting in decisions that may not align with popular opinion or the consent of the governed. As a result, democratic accountability to voter preferences is replaced by the responsibility of pursuing the long-term common good. Similarly, technocratic procedures of evidence-based analysis and expert deliberation bypass democratic decision-making through confrontation and negotiation among competing interest groups and viewpoints. This is typically accompanied by depoliticization, with contentious issues being framed as neutral technical problems, whereas democratic processes acknowledge evaluative differences and seek to mediate among them. Although technocracy does not restrict political participation through rigid hereditary or class systems, its reliance on specialized expertise often excludes large segments of the population and marginalizes lay perspectives.

A contrasting view rejects the claim that democracy and technocracy are incompatible, seeing them instead as complementary approaches whose tensions can be resolved. One outlook argues that technocracy is primarily about instrumental rationality or how to choose the means to achieve a given goal. According to this view, the people or elected officials choose political goals while the technocrats choose the most efficient ways to realize those goals, acting as advisors and implementers. Another approach confines technocracy to certain institutions or functions within a government. For example, a democratically elected government may delegate monetary policy or public health reforms to expert panels. In this context, one argument holds that every government relies on technocratic principles to some degree by consulting experts and aligning policy to empirical evidence.

The compatibility of democracy and technocracy may also depend on social circumstances. Beneficial conditions for technocratic democracies include a broad consensus on the general goals of state policy and a willingness of individuals to accept short-term personal sacrifices for the sake of the long-term prosperity of the community.

Populism

Technocracy is commonly contrasted with populism, which seeks to promote the interests of ordinary people. Populism typically frames politics as a struggle between morally pure people and a corrupt, self-serving elite. It usually promotes a personalistic leader who appeals to popular sentiment and is regarded as a direct representative of the will of the people.

In several respects, populism is directly opposed to technocracy. Its skeptical attitude toward elitism and expert rule challenges the technocratic reliance on expertise and specialized knowledge. This tension is also reflected in the sources of political legitimacy: populism emphasizes mass mobilization and the representation of popular opinion, whereas technocracy sees evidence-based competence and scientific rationality as primary sources of authority.

However, there are also aspects in which populism and technocracy overlap. Both are seen as challenges or threats to democracy, in part because of their anti-pluralistic outlooks. In populism, this tendency is expressed in its claim to represent a unified popular will, dismissing dissenting views as betrayals of the people's true interests. In technocracy, dissenting views are often portrayed as irrational or biased positions that do not align with expert opinions on how to promote the long-term prosperity of the community as a whole.

Technopopulism attempts to reconcile these two approaches. It combines the populist appeal to a personalistic leader representing the will of the people with a claim to legitimacy based on the leader's expertise, usually coupled with a technology-focused outlook.

Others

As a form of elitism, technocracy vests political power in a small group of technical experts. Other types of elitism have different criteria of political inclusion. Aristocracy has a social class of ruling elites, typically grounded in hereditary birth and inherited titles. Plutocracy places political authority in the hands of the wealthy. Theocracy merges political and religious power, justifying its authority by appeal to a divine command. Elitism contrasts with egalitarianism, which holds that all individuals should have equal political rights and opportunities.

Technocracy is closely related to meritocracy, which highlights the principle of merit by selecting people based on their ability. It can apply to governmental positions or, more broadly, to any social function or job. The core idea is that everybody has the position they deserve. Expertocracy is sometimes defined as a weaker form of technocracy that seeks to align decision-making processes to expert opinions without transferring power to technical elites. Other related political ideologies include epistocracy, which asserts that the degree of political influence of citizens should correspond to their competence in political decision-making, and managerialism, which advocates business-like and efficiency-driven management techniques.

Through its emphasis on value neutrality, technocracy contrasts with political ideologies that explicitly advance substantive values or normative goals. For example, liberalism promotes personal freedom, individual rights, tolerance, the rule of law, and the protection of private property. Socialism prioritizes economic equality, social welfare, and collective ownership. Nationalism values social cohesion grounded in national identity and shared customs, culture, and language. Technocracy typically seeks to sideline evaluative and ideological commitments by framing decisions around empirical evidence and cost-benefit analysis rather than pursuing substantive values. However, it may also be combined with certain normative goals, as in neoliberal technocracy and technocratic socialism.

Technocracy is typically contrasted with authoritarianism, which seeks to centralize and monopolize political power in a single leader or party and uses a hierarchical structure to suppress dissent. However, authoritarian regimes may adopt technocratic principles to consolidate control and justify their legitimacy by appealing to efficiency and expertise, giving rise to techno-authoritarianism.

Arguments

For

[Image: Howard Scott, a leading figure of the Technical Alliance and Technocracy Inc., who held that technocratic governance can maximize efficiency and increase abundance.]

Political theorists discuss various arguments for and against technocracy. Advocates emphasize the central role of skill and competence in political decision-making. They argue that political situations are complex, particularly in a globalized world marked by rapid technological advances. These complexities make it difficult to consider all relevant factors, predict outcomes of increasingly interconnected processes, and adjust policies accordingly. This view holds that scientific expertise and evidence-guided inquiry offer a better foundation for effective political decisions than ideology or popular sentiment. Technocrats highlight the rationality and efficiency of their procedures, which seek to avoid contradictory and irrational elements found in approaches that rely on partisan interests and public opinion. Accordingly, they promise better outcomes, benefiting the community as a whole through material progress.

Another line of thought centers on value-neutrality. It asserts that the technocratic focus on objective analysis and scientific solutions promotes the implementation of optimal solutions that prioritize overall welfare. Advocates hold that the ideal of serving the community as a whole helps insulate policy from partisan agendas and lobbying efforts that privilege specific interest groups. Similarly, the emphasis on competence, transparent decision criteria, and quantifiable results constitutes a form of meritocratic fairness that can safeguard against corruption and nepotism.

A different set of arguments focuses on the long-term perspective. It asserts that technocracies are better suited to implement necessary but unpopular reforms that benefit the community in the long run. This view maintains that expert-driven political legitimacy sidelines short-term electoral pressures or incentives to maximize popular approval through immediate benefits. Supporters further highlight technocracy's openness to innovation and flexibility to adapt policy in response to emerging challenges or new data.

Against

Critics of technocracy often focus on its anti-democratic tendencies, viewing it as a form of elitism that excludes large segments of the population from political participation. They argue that governance should align with public consent and reflect a diversity of opinions and preferences. In their view, technocracy alienates ordinary people from politics by framing policy choices as purely technical problems. A related concern is that technocratic decision makers are not directly accountable to the electorate, raising questions about their political legitimacy and the risk of a technology-justified authoritarianism.

Another criticism rejects the claim that technocracy is value-neutral. It argues that the reliance on scientific methods, quantitative metrics, and efficiency carries implicit evaluative biases, meaning that ideological commitments and value conflicts are not removed but merely hidden under the guise of objective analysis. A related objection holds that a technocratic focus on instrumental rationality reduces political decisions to problems of optimization and neglects the intrinsic worth of individuals. It contends that instrumental rationality about the best means to achieve a goal needs to be accompanied by normative rationality about which goals are worth pursuing. This view asserts that there are diverse normative goals and that politics should consider different options and negotiate value trade-offs instead of adopting a narrow perspective that disregards alternative outlooks. Similarly, claims to neutrality can hide partisan agendas, such as attempts by corporate lobbies to influence policy under the guise of impartial optimization.

Critics also target the strong focus on science and the claim that technocracies achieve better outcomes. They argue that technocracy adopts scientism and falsely assumes that science can solve all political problems. In response, they assert that the scientific method is limited and cannot reliably predict social outcomes, specifically in complex and interconnected systems of human behavior. Many aspects of social life are difficult to quantify, and efforts to do so can result in misleading metrics that ignore key factors. This can produce implementation gaps in which idealized models fail in real-world conditions. Unique local contexts pose further challenges, as attempts to impose uniform solutions can lead to unexpected results. Additionally, there are expert disagreements in which competing theories predict divergent outcomes without providing clear guidance for policymakers on which theory to follow. On a conceptual level, critics hold that technocracy is a poorly defined notion that can denote a wide spectrum of approaches to governance depending on the context.

History

Intellectual and cultural developments

Some precursors to technocratic thought are found in ancient philosophy. In ancient Greece, Plato (c. 427 – c. 347 BCE) characterized politics as a techne—an art or craft with a specific goal, comparable to other expert practices such as medicine. He argued that political skill and knowledge of human nature are essential to just and prudent leadership. He regarded rule by wise philosopher-kings as the ideal form of governance. In ancient China, some technocratic principles are reflected in Confucianism, which arose in the 6th or 5th century BCE. They include the elitist view that the most virtuous and capable should rule. Beginning in the 7th century CE, this tradition also saw the emergence of a rigorous, merit-based examination system designed to select the most competent candidates for government service.

[Image: Francis Bacon, who envisioned a technocratic utopia ruled by scientists in his New Atlantis.]
[Image: Henri de Saint-Simon, sometimes regarded as the "father of technocracy" because of his idea of a new kind of society based on scientific rationality.]

In the 16th and 17th centuries, the Scientific Revolution established a new paradigm of rational inquiry in which truth is not passively revealed but actively discovered by following the scientific method and seeking empirical evidence. During the subsequent Age of Enlightenment in the late 17th and 18th centuries, Enlightenment thinkers emphasized reason, science, and technical rationality. They promoted the pursuit of knowledge and challenged traditional authorities. This cultural and intellectual movement shaped the French Revolution, which introduced several technocratic principles into governance. It advanced programs of social engineering and recast political authority around meritocratic expertise. The growing economic complexity during the Industrial Revolution in the late 18th and 19th centuries further accelerated the demand for specialized knowledge and expert-led management.

These developments also influenced political thought. Francis Bacon (1561–1626) envisioned a technocratic utopia in his New Atlantis. Unlike Plato, he argued that empirical scientists, rather than philosopher-kings, should rule. For Bacon, politics should take the form of scientific administration, with scientific elites as benevolent rulers promoting the general interest of the whole. Building on Bacon's vision, Henri de Saint-Simon (1760–1825) conceived a new kind of society based on scientific rationality. He held that a society led by scientists, artists, and industrialists would prosper, with a merit-based hierarchy and an elite class made up of the most educated and productive members of society. Saint-Simon is sometimes regarded as the "father of technocracy". His student Auguste Comte (1798–1857) formulated positivism, arguing that scientific understanding based on empirical evidence is the highest form of knowledge. Jeremy Bentham's (1748–1832) utilitarianism also influenced technocratic thought, particularly the idea that governance should maximize overall welfare, laying the groundwork for cost-benefit analysis in public administration. In his novel Looking Backward, Edward Bellamy (1850–1898) reached a mass audience with his utopian vision of technocracy in which technology is harnessed to ensure abundance, leisure, and peace for all citizens.

Frederick Winslow Taylor (1856–1915) developed and popularized the application of scientific management and efficiency-focused organization to industrial manufacturing and labor processes. Henry Gantt (1861–1919), an associate of Taylor, sought to extend these ideas into the political realm. He proposed a "new democracy" that grounds governance in scientific facts. For him, the degree of an individual's political authority should correspond to their ability and willingness to advance the common good. His outlook was also influenced by Thorstein Veblen (1857–1929), who criticized capitalism for introducing conflicts of interest that hinder the public good. Veblen argued that a system run by engineers and technicians could better serve common prosperity.

Max Weber (1864–1920) explored the role of instrumental rationality in modern society. He argued that the pervasiveness of scientific and technical reason is a key feature of modernity, emphasizing the importance of mathematical and scientific knowledge and the increasing reliance on trained experts and managers to organize social life.

During the two World Wars, technocratic principles were implemented to gain military and economic advantages by increasing productivity and efficiency. This happened in several fields, including the weapons industry, logistics, and mobilization of the workforce.

In the book The Managerial Revolution, James Burnham (1905–1987) grounded technocratic principles in a sociological analysis of the growing role of managers and technical experts in modern industrial societies. John Kenneth Galbraith (1908–2006) further explored these ideas in his book The New Industrial State, arguing that advanced technology requires large corporations, long-term planning, and an extensive network of technical experts. In the book The Coming of Post-Industrial Society, Daniel Bell (1919–2011) shifted the focus from industrial to post-industrial society. He highlighted the importance of technocratic principles as knowledge, science, and information technologies become the primary drivers of economic growth.

Postmodern thinkers examined the increasing centrality of expert knowledge in modern society. Michel Foucault (1926–1984) explored how knowledge is used to wield power over populations through indirect mechanisms embedded in correctional, medical, and educational institutions. Jean-François Lyotard (1924–1998) warned of the totalizing and dehumanizing tendencies of technocratic rationality. A different criticism was formulated by the Frankfurt School theorist Jürgen Habermas (born 1929), who regards technocracy as a threat to democracy, proposing that expert knowledge should be integrated into democratic processes without replacing them.

In different regions

[Image: Symbol of Technocracy Inc.]
[Image: Various aspects of technocratic governance were present in the Soviet Union, particularly during the leadership of Leonid Brezhnev.]

In the 1930s in North America, the technocracy movement emerged from the Technical Alliance, which had been founded in 1919 by Howard Scott (1890–1970). Followers of the technocracy movement applied Veblen's ideas to the Great Depression—a deep economic crisis that had started in 1929. They argued that this crisis was a symptom of capitalism and proposed a technocratic reorganization of society to overcome it. In addition to the political governance by engineers, scientists, and technical managers, this program aimed to replace the price-based economy with one that organizes distribution and consumption based on energy costs. It sought to increase abundance while reducing average workload through centralized planning, integrating North America into a self-sufficient unit called a technate. In the post-World War II decades, technocracy influenced U.S. politics by shaping institutional frameworks, particularly in economic and military spheres. Links to politics happened through regulatory and advisory bodies such as the Atomic Energy Commission, the National Science Foundation, the Council of Economic Advisers, and the RAND Corporation.

Various aspects of technocratic governance were also present in the Soviet Union, such as the centralized economic planning by experts under the State Planning Committee, established in 1921. These tendencies were particularly prominent during the leadership of Leonid Brezhnev (1906–1982), who promoted a scientific-technological revolution with an increased reliance on engineers as political leaders.

In the 1950s and early 1960s, China implemented technocratic elements as engineers from elite universities rose to political power to engage in central planning and implement communist-style industrialization. This trend saw a significant backlash during the Cultural Revolution (1966–1976), marked by anti-elitism and anti-intellectualism. Since the late 1970s, the technocratic influence has been restored and expanded, with leadership often characterized by a synthesis of technical expertise and socialist ideology.

Technocratic tendencies were prominent in Latin America starting in the 1960s and 1970s, both in democratic and authoritarian regimes. They typically took the form of state-led development efforts in sectors such as agriculture, industry, and health, exemplified by the expert-guided reforms in Colombia under the National Front governments. In subsequent decades, the technocratic agenda in Latin America increasingly centered on the economy.

Beginning in the 1970s and 1980s, Singapore established an administrative state focused on rational organization, technical skill, and meritocratic egalitarianism while seeking to avoid ideological polarization and partisan interests. Because of its economic success in the form of rapid development and growth, it is often regarded as a leading exemplar of the advantages of technocracy. Other Asian examples of the implementation of technocratic principles are found in Taiwan, South Korea, and Japan.

Beginning in the second half of the 20th century, the formation of the European Union introduced various forms of technocratic governance, such as the establishment of expert-led institutions with little democratic control, including the European Central Bank. Starting in the 1990s, evidence-based policy making has become an influential approach in the United Kingdom, seeking to align governance with robust empirical evidence and rigorous scientific evaluation. In the 2000s and 2010s, several EU countries adopted technocratic measures in response to political or economic turmoil, such as technocratic cabinets in Italy and Greece during the European financial crisis.

Climate sensitivity

From Wikipedia, the free encyclopedia
[Image: Diagram of factors that determine climate sensitivity. After increasing CO2 levels, there is an initial warming, which is amplified by the net effect of climate feedbacks.]

Climate sensitivity is a key measure in climate science and describes how much Earth's surface will warm in response to a doubling of the atmospheric carbon dioxide (CO2) concentration. Its formal definition is: "The change in the surface temperature in response to a change in the atmospheric carbon dioxide (CO2) concentration or other radiative forcing." This concept helps scientists understand the extent and magnitude of the effects of climate change.

The Earth's surface warms as a direct consequence of increased atmospheric CO2, as well as increased concentrations of other greenhouse gases such as nitrous oxide and methane. The increasing temperatures have secondary effects on the climate system. These secondary effects are called climate feedbacks. Self-reinforcing feedbacks include for example the melting of sunlight-reflecting ice as well as higher evapotranspiration. The latter effect increases average atmospheric water vapour, which is itself a greenhouse gas.

Scientists do not know exactly how strong these climate feedbacks are. Therefore, it is difficult to predict the precise amount of warming that will result from a given increase in greenhouse gas concentrations. If climate sensitivity turns out to be on the high side of scientific estimates, the Paris Agreement goal of limiting global warming to below 2 °C (3.6 °F) will be even more difficult to achieve.

There are two main kinds of climate sensitivity: the transient climate response is the initial rise in global temperature when CO2 levels double, and the equilibrium climate sensitivity is the larger long-term temperature increase after the planet adjusts to the doubling. Climate sensitivity is estimated by several methods: looking directly at temperature and greenhouse gas concentrations since the Industrial Revolution began around the 1750s, using indirect measurements from the Earth's distant past, and simulating the climate.

Fundamentals

The rate at which energy reaches Earth (as sunlight) and leaves Earth (as heat radiation to space) must balance, or the planet will get warmer or cooler. An imbalance between incoming and outgoing radiation energy is called radiative forcing. A warmer planet radiates heat to space faster and so a new balance is eventually reached, with a higher temperature and stored energy content. However, the warming of the planet also has knock-on effects, which create further warming in an exacerbating feedback loop. Climate sensitivity is a measure of how much temperature change a given amount of radiative forcing will cause.

Radiative forcing

Radiative forcings are generally quantified as watts per square meter (W/m2) and averaged over Earth's uppermost surface, defined as the top of the atmosphere. The magnitude of a forcing is specific to the physical driver and is defined relative to an accompanying time span of interest for its application. In the context of a contribution to long-term climate sensitivity from 1750 to 2020, the 50% increase in atmospheric CO2 is characterized by a forcing of about +2.1 W/m2. In the context of shorter-term contributions to Earth's energy imbalance (i.e. its heating/cooling rate), time intervals of interest may be as short as the interval between measurement or simulation data samplings, and are thus likely to be accompanied by smaller forcing values. Forcings from such investigations have also been analyzed and reported at decadal time scales.

Radiative forcing leads to long-term changes in global temperature. A number of factors contribute radiative forcing: increased downwelling radiation from the greenhouse effect, variability in solar radiation from changes in planetary orbit, changes in solar irradiance, direct and indirect effects caused by aerosols (for example changes in albedo from cloud cover), and changes in land use (deforestation or the loss of reflective ice cover). In contemporary research, radiative forcing by greenhouse gases is well understood. As of 2019, large uncertainties remain for aerosols.

Key numbers

Carbon dioxide (CO2) levels rose from 280 parts per million (ppm) in the 18th century, when people began burning significant amounts of fossil fuels such as coal during the Industrial Revolution, to over 415 ppm by 2020. As CO2 is a greenhouse gas, it hinders heat energy from leaving the Earth's atmosphere. By 2016, atmospheric CO2 levels had increased by 45% over pre-industrial levels, but because radiative forcing grows logarithmically rather than linearly with concentration, the forcing from CO2 had already reached more than 50% of the value associated with a full doubling. Between the 18th-century start of the Industrial Revolution and the year 2020, the Earth's temperature rose by a little over one degree Celsius (about two degrees Fahrenheit).
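
This non-linear relationship can be illustrated with the commonly used logarithmic approximation for CO2 forcing, ΔF ≈ 5.35 ln(C/C0) W/m2. The approximation and its coefficient are not stated in this article, so treat them as an assumption; the sketch below only shows how the quoted forcing values fit together.

```python
import math

# Commonly used logarithmic approximation for CO2 radiative forcing (coefficient assumed):
# delta_F ~ 5.35 * ln(C / C0) in W/m^2, with C0 the pre-industrial concentration.
def co2_forcing(c_ppm, c0_ppm=280.0):
    return 5.35 * math.log(c_ppm / c0_ppm)

print(round(co2_forcing(415), 2))  # ~2.1 W/m^2 for the rise from 280 to 415 ppm
print(round(co2_forcing(406), 2))  # ~2.0 W/m^2 for a 45% increase, already >50% of a doubling's forcing
print(round(co2_forcing(560), 2))  # ~3.7 W/m^2 for a full doubling of pre-industrial CO2
```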

Societal importance

Because the economics of climate change mitigation depend greatly on how quickly carbon neutrality needs to be achieved, climate sensitivity estimates can have important economic and policy-making implications. One study suggests that halving the uncertainty of the value for transient climate response (TCR) could save trillions of dollars. A higher climate sensitivity would mean more dramatic increases in temperature, which makes it more prudent to take significant climate action. If climate sensitivity turns out to be on the high end of what scientists estimate, the Paris Agreement goal of limiting global warming to well below 2 °C cannot be achieved, and temperature increases will exceed that limit, at least temporarily. One study estimated that emissions cannot be reduced fast enough to meet the 2 °C goal if equilibrium climate sensitivity (the long-term measure) is higher than 3.4 °C (6.1 °F). The more sensitive the climate system is to changes in greenhouse gas concentrations, the more likely it is to have decades when temperatures are much higher or much lower than the longer-term average.

Factors that determine sensitivity

The radiative forcing caused by a doubling of atmospheric CO2 levels (from the pre-industrial 280 ppm) is approximately 3.7 watts per square meter (W/m2). In the absence of feedbacks, the energy imbalance would eventually result in roughly 1 °C (1.8 °F) of global warming. That figure is straightforward to calculate by using the Stefan–Boltzmann law and is undisputed.
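
That roughly 1 °C figure can be reproduced with a short calculation. The effective emission temperature of about 255 K used below is a standard textbook value assumed here rather than taken from this article.

```python
# No-feedback (Planck-response) warming from a CO2 doubling via the Stefan-Boltzmann law.
SIGMA = 5.670e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
T_EMIT = 255.0     # Earth's effective emission temperature, K (assumed textbook value)
F_2XCO2 = 3.7      # radiative forcing from a doubling of CO2, W/m^2

# Outgoing radiation is sigma*T^4, so a small warming dT increases it by 4*sigma*T^3*dT.
planck_response = 4 * SIGMA * T_EMIT**3   # ~3.8 W/m^2 per kelvin
delta_t = F_2XCO2 / planck_response       # warming needed to rebalance the forcing
print(round(delta_t, 2))                  # ~1 degree Celsius without feedbacks
```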

A further contribution arises from climate feedbacks, both self-reinforcing and balancing. The uncertainty in climate sensitivity estimates is mostly from the feedbacks in the climate system, including water vapour feedback, ice–albedo feedback, cloud feedback, and lapse rate feedback. Balancing feedbacks tend to counteract warming by increasing the rate at which energy is radiated to space from a warmer planet. Self-reinforcing feedbacks increase warming; for example, higher temperatures can cause ice to melt, which reduces the ice area and the amount of sunlight the ice reflects, which in turn results in less heat energy being radiated back into space. The reflectiveness of a surface is called albedo. Climate sensitivity depends on the balance between those feedbacks.

Types

[Image: Schematic of how different measures of climate sensitivity relate to one another.]

Depending on the time scale, there are two main ways to define climate sensitivity: the short-term transient climate response (TCR) and the long-term equilibrium climate sensitivity (ECS), both of which incorporate the warming from exacerbating feedback loops. They are not discrete categories, but they overlap. Sensitivity to atmospheric CO2 increases is measured as the amount of temperature change per doubling of the atmospheric CO2 concentration.

Although the term "climate sensitivity" is usually used for the sensitivity to radiative forcing caused by rising atmospheric CO2, it is a general property of the climate system. Other agents can also cause a radiative imbalance. Climate sensitivity is the change in surface air temperature per unit change in radiative forcing, and the climate sensitivity parameter is therefore expressed in units of °C/(W/m2). Climate sensitivity is approximately the same whatever the reason for the radiative forcing (such as from greenhouse gases or solar variation). When climate sensitivity is expressed as the temperature change for a level of atmospheric CO2 double the pre-industrial level, its units are degrees Celsius (°C).
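
As a small worked illustration of the unit conversion, a climate sensitivity parameter in °C/(W/m2) can be turned into a per-doubling figure by multiplying by the doubling forcing of about 3.7 W/m2. The 0.8 °C/(W/m2) value below is only a placeholder, not a figure from this article.

```python
F_2XCO2 = 3.7  # radiative forcing from a CO2 doubling, W/m^2

def sensitivity_per_doubling(sensitivity_parameter):
    """Convert a sensitivity parameter in degC per (W/m^2) to degC per CO2 doubling."""
    return sensitivity_parameter * F_2XCO2

# Placeholder value: a parameter of 0.8 degC/(W/m^2) corresponds to roughly 3 degC per doubling.
print(round(sensitivity_per_doubling(0.8), 1))
```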

Transient climate response

The transient climate response (TCR) is defined as "the change in the global mean surface temperature, averaged over a 20-year period, centered at the time of atmospheric carbon dioxide doubling, in a climate model simulation" in which the atmospheric CO2 concentration increases at 1% per year. That estimate is generated by using shorter-term simulations. The transient response is lower than the equilibrium climate sensitivity because slower feedbacks, which exacerbate the temperature increase, take more time to respond in full to an increase in the atmospheric CO2 concentration. For instance, the deep ocean takes many centuries to reach a new steady state after a perturbation, and during that time it continues to serve as a heat sink, which cools the upper ocean. The IPCC literature assessment estimates that the TCR likely lies between 1 °C (1.8 °F) and 2.5 °C (4.5 °F).
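
A quick calculation shows why the averaging window in this definition sits roughly 70 years into the idealized simulation.

```python
import math

# In the idealized TCR experiment, CO2 rises by 1% per year, so it doubles after
# ln(2)/ln(1.01) years; TCR is the 20-year mean warming centred on that year.
years_to_double = math.log(2) / math.log(1.01)
print(round(years_to_double, 1))                                       # ~69.7 years
print(round(years_to_double - 10, 1), round(years_to_double + 10, 1))  # averaging window
```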

A related measure is the transient climate response to cumulative carbon emissions (TCRE), which is the globally averaged surface temperature change after 1000 GtC of CO2 has been emitted. As such, it includes not only temperature feedbacks to forcing but also the carbon cycle and carbon cycle feedbacks.
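
Because TCRE treats warming as roughly proportional to cumulative emissions, it can be applied as a simple multiplication. The TCRE value below is an illustrative figure of the order assessed by the IPCC, not one quoted in this article.

```python
# Warming implied by cumulative carbon emissions for a given TCRE (illustrative value).
TCRE = 1.65  # degC of warming per 1000 GtC of cumulative emissions (illustrative)

def warming_from_emissions(cumulative_gtc):
    return TCRE * cumulative_gtc / 1000.0

print(warming_from_emissions(1000))  # 1.65 degC after 1000 GtC
print(warming_from_emissions(500))   # about 0.8 degC after 500 GtC
```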

Equilibrium climate sensitivity

The equilibrium climate sensitivity (ECS) is the long-term temperature rise (equilibrium global mean near-surface air temperature) that is expected to result from a doubling of the atmospheric CO2 concentration (ΔT). It is a prediction of the new global mean near-surface air temperature once the CO2 concentration has stopped increasing and most of the feedbacks have had time to have their full effect. Reaching an equilibrium temperature can take centuries or even millennia after CO2 has doubled. ECS is higher than TCR because of the oceans' short-term buffering effects. Computer models are used to estimate the ECS. A comprehensive estimate, in which the model tracks the whole time span during which significant feedbacks continue to change global temperatures (for example, until ocean temperatures have fully equilibrated), requires running a computer model that covers thousands of years. There are, however, less computing-intensive methods.

The IPCC Sixth Assessment Report (AR6) stated that there is high confidence that ECS is within the range of 2.5 °C to 4 °C, with a best estimate of 3 °C.

The long time scales involved with ECS make it arguably a less relevant measure for policy decisions around climate change.

Effective climate sensitivity

A common approximation to ECS is the effective equilibrium climate sensitivity, an estimate of equilibrium climate sensitivity that uses data from a climate system, in a model or in real-world observations, that is not yet in equilibrium. Estimates assume that the net amplification effect of feedbacks, as measured after some period of warming, will remain constant afterwards. That is not necessarily true, as feedbacks can change with time. In many climate models, feedbacks become stronger over time, and so the effective climate sensitivity is lower than the real ECS.

Earth system sensitivity

By definition, equilibrium climate sensitivity does not include feedbacks that take millennia to emerge, such as long-term changes in Earth's albedo because of changes in ice sheets and vegetation. It also does not include the slow response of the deep oceans' warming, which takes millennia. Earth system sensitivity (ESS) incorporates the effects of these slower feedback loops, such as the change in Earth's albedo from the melting of large continental ice sheets, which covered much of the Northern Hemisphere during the Last Glacial Maximum and still cover Greenland and Antarctica. Changes in albedo as a result of changes in vegetation, as well as changes in ocean circulation, are also included. The longer-term feedback loops make the ESS larger than the ECS, possibly twice as large. Data from the geological history of Earth is used in estimating ESS. Differences between modern and long-ago climatic conditions mean that estimates of the future ESS are highly uncertain. The carbon cycle is not included in the definition of the ESS, but all other elements of the climate system are included.

Sensitivity to nature of forcing

Different forcing agents, such as greenhouse gases and aerosols, can be compared using their radiative forcing, the initial radiative imbalance averaged over the entire globe. Climate sensitivity is the amount of warming per radiative forcing. To a first approximation, the cause of the radiative imbalance does not matter. However, radiative forcing from sources other than CO2 can cause slightly more or less surface warming than the same averaged forcing from CO2. The amount of feedback varies mainly because the forcings are not uniformly distributed over the globe. Forcings that initially warm the Northern Hemisphere, land, or polar regions generate more self-reinforcing feedbacks (such as the ice-albedo feedback) than an equivalent forcing from CO2, which is more uniformly distributed over the globe. This gives rise to more overall warming. Several studies indicate that human-emitted aerosols are more effective than CO2 at changing global temperatures, and volcanic forcing is less effective. When climate sensitivity to CO2 forcing is estimated using historical temperature and forcing (caused by a mix of aerosols and greenhouse gases), and that effect is not taken into account, climate sensitivity is underestimated.

State dependence

[Image: Artist's impression of a Snowball Earth.]

Climate sensitivity has been defined as the short- or long-term temperature change resulting from any doubling of CO2, but there is evidence that the sensitivity of Earth's climate system is not constant. For instance, the planet has polar ice and high-altitude glaciers. Until the world's ice has completely melted, a self-reinforcing ice–albedo feedback loop makes the system more sensitive overall. Throughout Earth's history, there are thought to have been multiple periods during which snow and ice covered almost the entire globe. In most models of "Snowball Earth", parts of the tropics were at least intermittently free of ice cover. As the ice advanced or retreated, climate sensitivity must have been very high, as the large changes in the area of ice cover would have made for a very strong ice–albedo feedback. Volcanic changes in atmospheric composition are thought to have provided the radiative forcing needed to escape the snowball state.

Equilibrium climate sensitivity can change with climate.

Throughout the Quaternary period (the most recent 2.58 million years), climate has oscillated between glacial periods, the most recent one being the Last Glacial Maximum, and interglacial periods, the most recent one being the current Holocene, but the period's climate sensitivity is difficult to determine. The Paleocene–Eocene Thermal Maximum, about 55.5 million years ago, was unusually warm and may have been characterized by above-average climate sensitivity.

Climate sensitivity may further change if tipping points are crossed. It is unlikely that tipping points will cause short-term changes in climate sensitivity. If a tipping point is crossed, climate sensitivity is expected to change at the time scale of the subsystem that hits its tipping point. Especially if there are multiple interacting tipping points, the transition of climate to a new state may be difficult to reverse.

The two most common definitions of climate sensitivity specify the climate state: the ECS and the TCR are defined for a doubling with respect to the CO2 levels of the pre-industrial era. Because of potential changes in climate sensitivity, the climate system may warm by a different amount after a second doubling of CO2 than it did after the first doubling. The effect of any change in climate sensitivity is expected to be small or negligible in the first century after additional CO2 is released into the atmosphere.

Estimation

Using Industrial Age (1750–present) data

Climate sensitivity can be estimated using the observed temperature increase, the observed ocean heat uptake, and the modelled or observed radiative forcing. The data are linked through a simple energy-balance model to calculate climate sensitivity. Radiative forcing is often modelled because the Earth observation satellites that measure it have existed during only part of the Industrial Age (only since the late 1950s). Estimates of climate sensitivity calculated by using these global energy constraints have consistently been lower than those calculated by using other methods, around 2 °C (3.6 °F) or lower.

Estimates of transient climate response (TCR) that have been calculated from models and observational data can be reconciled if it is taken into account that fewer temperature measurements are taken in the polar regions, which warm more quickly than the Earth as a whole. If only regions for which measurements are available are used in evaluating the model, the differences in TCR estimates are negligible.

A very simple climate model could estimate climate sensitivity from Industrial Age data by waiting for the climate system to reach equilibrium and then measuring the resulting warming, ΔTeq (°C). Computing the equilibrium climate sensitivity, S (°C), from the radiative forcing ΔF (W/m2) and the measured temperature rise would then be possible. The radiative forcing resulting from a doubling of CO2, F2×CO2, is relatively well known, at about 3.7 W/m2. Combining that information results in this equation:

S = F2×CO2 × ΔTeq / ΔF.

However, the climate system is not in equilibrium since the actual warming lags the equilibrium warming, largely because the oceans take up heat and will take centuries or millennia to reach equilibrium. Estimating climate sensitivity from Industrial Age data requires an adjustment to the equation above. The actual forcing felt by the atmosphere is the radiative forcing minus the ocean's heat uptake, H (W/m2), and so climate sensitivity can be estimated as:

S = F2×CO2 × ΔT / (ΔF − H).

The global temperature increase between the beginning of the Industrial Period (taken as 1750) and 2011 was about 0.85 °C (1.53 °F). In 2011, the radiative forcing from CO2 and other long-lived greenhouse gases (mainly methane, nitrous oxide, and chlorofluorocarbons) that had been emitted since the 18th century was roughly 2.8 W/m2. The climate forcing, ΔF, also contains contributions from solar activity (+0.05 W/m2), aerosols (−0.9 W/m2), ozone (+0.35 W/m2), and other smaller influences, which brings the total forcing over the Industrial Period to 2.2 W/m2, according to the best estimate of the IPCC Fifth Assessment Report in 2014, with substantial uncertainty. Combined with the ocean heat uptake, estimated by the same report to be 0.42 W/m2, this yields a value for S of about 1.8 °C (3.2 °F).
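
Putting the quoted numbers into the adjusted equation reproduces that estimate.

```python
# Energy-balance estimate of S from Industrial Age data, using the figures quoted above.
F_2XCO2 = 3.7    # forcing from a CO2 doubling, W/m^2
delta_T = 0.85   # observed warming 1750-2011, degC
delta_F = 2.2    # total radiative forcing over the period, W/m^2 (IPCC AR5 best estimate)
H = 0.42         # ocean heat uptake, W/m^2 (IPCC AR5 best estimate)

S = F_2XCO2 * delta_T / (delta_F - H)
print(round(S, 1))   # ~1.8 degC per CO2 doubling
```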

Other strategies

In theory, Industrial Age temperatures could also be used to determine a time scale for the temperature response of the climate system and thus climate sensitivity: if the effective heat capacity of the climate system is known, and the timescale is estimated using autocorrelation of the measured temperature, an estimate of climate sensitivity can be derived. In practice, however, the simultaneous determination of the time scale and heat capacity is difficult.
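
A minimal sketch of this idea, assuming a simple one-box energy-balance model; the heat capacity and the synthetic temperature series below are placeholders, not real observations.

```python
import numpy as np

# One-box energy-balance model: C*dT/dt = F - lam*T.  Its response timescale is
# tau = C/lam, so the sensitivity parameter is 1/lam = tau/C and the per-doubling
# sensitivity is roughly F_2xCO2 * tau / C.  tau is estimated here from the lag-1
# autocorrelation of detrended annual temperatures, r1 ~ exp(-1 yr / tau).
F_2XCO2 = 3.7   # W/m^2
C = 8.0         # effective heat capacity, W yr m^-2 K^-1 (placeholder assumption)

rng = np.random.default_rng(0)
true_tau = 6.0                          # years; used only to synthesize placeholder data
phi = np.exp(-1.0 / true_tau)
temps = np.zeros(500)
for t in range(1, 500):                 # synthetic detrended temperature anomalies
    temps[t] = phi * temps[t - 1] + rng.normal()

r1 = np.corrcoef(temps[:-1], temps[1:])[0, 1]              # lag-1 autocorrelation
tau_est = -1.0 / np.log(r1)                                # estimated response timescale, years
print(round(tau_est, 1), round(F_2XCO2 * tau_est / C, 1))  # timescale and implied sensitivity
```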

Attempts have been made to use the 11-year solar cycle to constrain the transient climate response. Solar irradiance is about 0.9 W/m2 higher during a solar maximum than during a solar minimum, and that effect can be observed in measured average global temperatures from 1959 to 2004. Unfortunately, the solar minima in the period coincided with volcanic eruptions, which have a cooling effect on the global temperature. Because the eruptions caused a larger and less well-quantified decrease in radiative forcing than the reduced solar irradiance, it is questionable whether useful quantitative conclusions can be derived from the observed temperature variations.

Observations of volcanic eruptions have also been used to try to estimate climate sensitivity, but as the aerosols from a single eruption last at most a couple of years in the atmosphere, the climate system can never come close to equilibrium, and there is less cooling than there would be if the aerosols stayed in the atmosphere for longer. Therefore, volcanic eruptions give information only about a lower bound on transient climate sensitivity.

Using data from Earth's past

Historical climate sensitivity can be estimated by using reconstructions of Earth's past temperatures and CO2 levels. Paleoclimatologists have studied different geological periods, such as the warm Pliocene (5.3 to 2.6 million years ago) and the colder Pleistocene (2.6 million to 11,700 years ago), and sought periods that are in some way analogous to or informative about current climate change. Climates further back in Earth's history are more difficult to study because fewer data are available about them. For instance, past CO2 concentrations can be derived from air trapped in ice cores, but as of 2020, the oldest continuous ice core is less than one million years old. Recent periods, such as the Last Glacial Maximum (LGM) (about 21,000 years ago) and the Mid-Holocene (about 6,000 years ago), are often studied, especially when more information about them becomes available.

A 2007 estimate of sensitivity made using data from the most recent 420 million years is consistent with the sensitivities of current climate models and with other determinations. The Paleocene–Eocene Thermal Maximum (about 55.5 million years ago), a 20,000-year period during which massive amounts of carbon entered the atmosphere and average global temperatures increased by approximately 6 °C (11 °F), also provides a good opportunity to study the climate system when it was in a warm state. Studies of the last 800,000 years have concluded that climate sensitivity was greater in glacial periods than in interglacial periods.

As the name suggests, the Last Glacial Maximum was much colder than today, and good data on atmospheric CO2 concentrations and radiative forcing from that period are available. The period's orbital forcing was different from today's but had little direct effect on mean annual temperatures. Climate sensitivity can be estimated from the Last Glacial Maximum in several ways. One way is to use estimates of global radiative forcing and temperature directly. The set of feedback mechanisms active during the period, however, may be different from the feedbacks caused by a present doubling of CO2, and such differences across climate states must be accounted for when inferring today's climate sensitivity from paleoclimate evidence. In a different approach, a model of intermediate complexity is used to simulate conditions during the period. Several versions of this single model are run, with different values chosen for uncertain parameters, such that each version has a different ECS. The versions that best simulate the LGM's observed cooling probably have the most realistic ECS values.
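
A back-of-the-envelope version of the first approach simply scales the doubling forcing by the ratio of LGM temperature change to LGM forcing. The cooling and forcing values below are illustrative placeholders rather than figures from this article.

```python
# Rough LGM-based estimate: S ~ F_2xCO2 * (LGM temperature change) / (LGM radiative forcing).
# Both LGM values below are illustrative placeholders, not figures quoted in this article.
F_2XCO2 = 3.7        # W/m^2
delta_T_lgm = -5.0   # global cooling at the LGM relative to pre-industrial, degC (placeholder)
delta_F_lgm = -8.0   # combined forcing from ice sheets, greenhouse gases and dust, W/m^2 (placeholder)

S = F_2XCO2 * delta_T_lgm / delta_F_lgm
print(round(S, 1))   # implied sensitivity per CO2 doubling, degC
```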

Using climate models

[Image: Frequency distribution of equilibrium climate sensitivity based on simulations of the doubling of CO2. Each model simulation uses different estimates for processes that scientists do not sufficiently understand. Few of the simulations result in less than 2 °C (3.6 °F) of warming or significantly more than 4 °C (7.2 °F). However, the positive skew, which is also found in other studies, suggests that if carbon dioxide concentrations double, the probability of large or very large increases in temperature is greater than the probability of small increases.]

Climate models simulate the CO2-driven warming of the future as well as the past. They operate on principles similar to those underlying models that predict the weather, but they focus on longer-term processes. Climate models typically begin with a starting state and then apply physical laws and knowledge about biology to predict future states. As with weather modelling, no computer has the power to model the complexity of the entire planet and simplifications are used to reduce that complexity to something manageable. An important simplification divides Earth's atmosphere into model cells. For instance, the atmosphere might be divided into cubes of air ten or one hundred kilometers on each side. Each model cell is treated as if it were homogeneous (uniform). Calculations for model cells are much faster than trying to simulate each molecule of air separately.

A lower model resolution (large model cells and long time steps) takes less computing power but cannot simulate the atmosphere in as much detail. A model cannot simulate processes smaller than its cells or shorter than a single time step, so the effects of these smaller-scale and shorter-term processes must be estimated by other methods. Physical laws contained in the models may also be simplified to speed up calculations. The biosphere must be included as well; its effects are estimated from data on the average behaviour of the typical plant assemblage of an area under the modelled conditions. Climate sensitivity is therefore an emergent property of these models: it is not prescribed, but it follows from the interaction of all the modelled processes.

To estimate climate sensitivity, a model is run under a variety of radiative forcings (CO2 doubling quickly, doubling gradually, or following historical emissions), and the resulting temperature change is compared with the forcing applied. Different models give different estimates of climate sensitivity, but they tend to fall within a similar range, as described above.
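One widely used diagnostic of this kind, not named in the text above, is the Gregory regression applied to an abrupt CO2-doubling experiment: the top-of-atmosphere energy imbalance is regressed against the global-mean warming, and the warming at which the fitted line reaches zero imbalance is read off as the ECS. The sketch below uses synthetic stand-in data generated from an assumed forcing of 3.7 W/m² and an assumed feedback of −1.2 W/m² per °C; real analyses would use annual means from an actual model run.

```python
import numpy as np

# Minimal sketch of a Gregory-style regression for an abrupt CO2-doubling run.
# delta_T: annual-mean global warming (deg C); imbalance: top-of-atmosphere
# net flux (W/m2). The synthetic data below stand in for model output.
rng = np.random.default_rng(0)
true_forcing, true_feedback = 3.7, -1.2                      # assumed values
delta_T = 3.0 * (1 - np.exp(-np.arange(150) / 40.0))         # warming trajectory
imbalance = true_forcing + true_feedback * delta_T + rng.normal(0, 0.2, 150)

# Fit N = F + lambda * dT; the x-intercept (-F / lambda) is the diagnosed ECS.
feedback, forcing = np.polyfit(delta_T, imbalance, 1)
ecs = -forcing / feedback
print(f"Diagnosed ECS ≈ {ecs:.1f} °C per doubling of CO2")
```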

Testing, comparisons, and climate ensembles

Modelling of the climate system can lead to a wide range of outcomes. A single model is often run many times with different plausible values of the parameters used in its approximation of physical laws and the behaviour of the biosphere; this forms a perturbed physics ensemble, which probes how sensitive the simulated climate is to different types and amounts of change in each parameter. Alternatively, structurally different models developed at different institutions are combined into a multi-model ensemble. By selecting only the simulations that reproduce some part of the historical climate well, a constrained estimate of climate sensitivity can be made. Another strategy for obtaining more accurate results is to place more emphasis on climate models that perform well in general.
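As a hedged sketch of this selection-and-weighting idea, the example below scores invented ensemble members against one observed quantity (simulated historical warming) and forms a weighted ECS estimate. All member values, the observation, and its uncertainty are placeholders for illustration, not output from any real ensemble.

```python
import numpy as np

# Hedged sketch: constrain an ensemble's ECS spread using one observed metric.
# Each row is one ensemble member: (ECS in deg C, simulated historical warming in deg C).
# The numbers are invented placeholders, not real ensemble output.
members = np.array([
    [2.1, 0.7], [2.8, 0.9], [3.1, 1.0], [3.4, 1.1], [4.2, 1.4], [5.0, 1.7],
])
observed_warming, obs_uncertainty = 1.1, 0.15      # deg C (placeholder values)

# Weight each member by how well it matches the observation (Gaussian score).
misfit = members[:, 1] - observed_warming
weights = np.exp(-0.5 * (misfit / obs_uncertainty) ** 2)
weights /= weights.sum()

constrained_ecs = np.sum(weights * members[:, 0])
print(f"Unconstrained mean ECS: {members[:, 0].mean():.1f} °C")
print(f"Constrained estimate:   {constrained_ecs:.1f} °C")
```

Selecting only well-performing simulations is the limiting case in which poorly matching members receive zero weight.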

A model is tested against observations, paleoclimate data, or both to see whether it replicates them accurately. If it does not, inaccuracies in the physical model and parametrizations are sought, and the model is modified. For models used to estimate climate sensitivity, specific test metrics that are directly and physically linked to climate sensitivity are sought. Examples of such metrics are the global patterns of warming, the ability of a model to reproduce observed relative humidity in the tropics and subtropics, patterns of heat radiation, and the variability of temperature around long-term historical warming. Multi-model ensembles from different institutions tend to produce constrained ECS estimates slightly higher than 3 °C (5.4 °F); the models with ECS slightly above 3 °C (5.4 °F) reproduce these metrics better than models with lower climate sensitivity.

Many projects and groups exist to compare and to analyse the results of multiple models. For instance, the Coupled Model Intercomparison Project (CMIP) has been running since the 1990s.

Historical estimates

In the 19th century, Svante Arrhenius was the first person to quantify global warming as a consequence of a doubling of the concentration of CO2. In his first paper on the matter, he estimated that global temperature would rise by around 5 to 6 °C (9.0 to 10.8 °F) if the quantity of CO2 were doubled. In later work, he revised that estimate to 4 °C (7.2 °F). Arrhenius used Samuel Pierpont Langley's observations of radiation emitted by the full moon to estimate the amount of radiation absorbed by water vapour and by CO2. To account for the water vapour feedback, he assumed that relative humidity would stay the same under global warming.

The first calculation of climate sensitivity that used detailed measurements of absorption spectra, and the first to use a computer to numerically integrate the radiative transfer through the atmosphere, was performed by Syukuro Manabe and Richard Wetherald in 1967. Assuming constant relative humidity, they computed an equilibrium climate sensitivity of 2.3 °C per doubling of CO2, which they rounded to 2 °C in the paper's abstract, the value most often quoted from their work. The work has been called "arguably the greatest climate-science paper of all time" and "the most influential study of climate of all time."

A committee on anthropogenic global warming, convened in 1979 by the United States National Academy of Sciences and chaired by Jule Charney, estimated equilibrium climate sensitivity to be 3 °C (5.4 °F), plus or minus 1.5 °C (2.7 °F). The Manabe and Wetherald estimate (2 °C (3.6 °F)) and James E. Hansen's estimate of 4 °C (7.2 °F) were the only model estimates available to Charney in 1979. According to Manabe, speaking in 2004, "Charney chose 0.5 °C as a reasonable margin of error, subtracted it from Manabe's number, and added it to Hansen's, giving rise to the 1.5 to 4.5 °C (2.7 to 8.1 °F) range of likely climate sensitivity that has appeared in every greenhouse assessment since ...." In 2008, climatologist Stefan Rahmstorf said: "At that time [it was published], the [Charney report estimate's] range [of uncertainty] was on very shaky ground. Since then, many vastly improved models have been developed by a number of climate research centers around the world."

Assessment reports of IPCC

Historical estimates of climate sensitivity from the IPCC assessments. The first three reports gave a qualitative likely range, while the fourth and fifth assessment reports formally quantified the uncertainty. The dark blue range is judged as being more than 66% likely.

Despite considerable progress in the understanding of Earth's climate system, assessments continued to report similar uncertainty ranges for climate sensitivity for some time after the 1979 Charney report. The First Assessment Report of the Intergovernmental Panel on Climate Change (IPCC), published in 1990, estimated that equilibrium climate sensitivity to a doubling of CO2 lay between 1.5 and 4.5 °C (2.7 and 8.1 °F), with a "best guess in the light of current knowledge" of 2.5 °C (4.5 °F). The report used models with simplified representations of ocean dynamics. The 1992 IPCC supplementary report, which used full ocean circulation models, saw "no compelling reason to warrant changing" the 1990 estimate, and the IPCC Second Assessment Report stated, "No strong reasons have emerged to change [these estimates]." In those reports, much of the uncertainty around climate sensitivity was attributed to insufficient knowledge of cloud processes. The 2001 IPCC Third Assessment Report retained the same likely range.

Authors of the 2007 IPCC Fourth Assessment Report stated that confidence in estimates of equilibrium climate sensitivity had increased substantially since the Third Assessment Report. The IPCC authors concluded that ECS is very likely to be greater than 1.5 °C (2.7 °F) and likely to lie in the range 2 to 4.5 °C (3.6 to 8.1 °F), with a most likely value of about 3 °C (5.4 °F). The IPCC stated that fundamental physical reasons and data limitations prevent a climate sensitivity higher than 4.5 °C (8.1 °F) from being ruled out, but the climate sensitivity estimates within the likely range agreed better with observations and proxy climate data.

The 2013 IPCC Fifth Assessment Report reverted to the earlier range of 1.5 to 4.5 °C (2.7 to 8.1 °F) (with high confidence), because some estimates using industrial-age data came out low. The report also stated that ECS is extremely unlikely to be less than 1 °C (1.8 °F) (high confidence), and it is very unlikely to be greater than 6 °C (11 °F) (medium confidence). Those values were estimated by combining the available data with expert judgement.

In preparation for the 2021 IPCC Sixth Assessment Report, a new generation of climate models was developed by scientific groups around the world. Across 27 global climate models, the estimates of climate sensitivity were higher than before: the values spanned 1.8 to 5.6 °C (3.2 to 10.1 °F) and exceeded 4.5 °C (8.1 °F) in 10 of the models. The average estimate of equilibrium climate sensitivity rose from 3.2 °C to 3.7 °C, and the average estimate of the transient climate response rose from 1.8 °C to 2.0 °C. The increase in ECS is due mainly to improved modelling of clouds: temperature rises are now believed to cause sharper decreases in the number of low clouds, and fewer low clouds means more sunlight is absorbed by the planet and less is reflected back to space.

Remaining deficiencies in the simulation of clouds may have led to overestimates, as the models with the highest ECS values were not consistent with observed warming. A fifth of the models began to 'run hot', predicting that global warming would produce significantly higher temperatures than is considered plausible. According to these models, known as hot models, average global temperatures in the worst-case scenario would rise by more than 5 °C above preindustrial levels by 2100, with a "catastrophic" impact on human society. In comparison, empirical observations combined with physics models indicate that the "very likely" range is between 2.3 and 4.7 °C. Models with a very high climate sensitivity are also poor at reproducing well-established historical climate trends, such as warming over the 20th century or cooling during the last ice age. For these reasons, the predictions of hot models are considered implausible and were given less weight by the IPCC in 2022.
