
Monday, December 9, 2024

Greenhouse effect

From Wikipedia, the free encyclopedia
Energy flows down from the sun and up from the Earth and its atmosphere. When greenhouse gases absorb radiation emitted by Earth's surface, they prevent that radiation from escaping into space, causing surface temperatures to rise by about 33 °C (59 °F).

The greenhouse effect occurs when greenhouse gases in a planet's atmosphere insulate the planet from losing heat to space, raising its surface temperature. Surface heating can happen from an internal heat source as in the case of Jupiter, or from its host star as in the case of the Earth. In the case of Earth, the Sun emits shortwave radiation (sunlight) that passes through greenhouse gases to heat the Earth's surface. In response, the Earth's surface emits longwave radiation that is mostly absorbed by greenhouse gases. The absorption of longwave radiation prevents it from reaching space, reducing the rate at which the Earth can cool off.

Without the greenhouse effect, the Earth's average surface temperature would be as cold as −18 °C (−0.4 °F). This is much colder than the 20th century average of about 14 °C (57 °F).[3][4] In addition to naturally present greenhouse gases, the burning of fossil fuels has increased the amounts of carbon dioxide and methane in the atmosphere. As a result, global warming of about 1.2 °C (2.2 °F) has occurred since the Industrial Revolution, with the global average surface temperature increasing at a rate of 0.18 °C (0.32 °F) per decade since 1981.

All objects with a temperature above absolute zero emit thermal radiation. The wavelengths of thermal radiation emitted by the Sun and Earth differ because their surface temperatures are different. The Sun has a surface temperature of 5,500 °C (9,900 °F), so it emits most of its energy as shortwave radiation in near-infrared and visible wavelengths (as sunlight). In contrast, Earth's surface has a much lower temperature, so it emits longwave radiation at mid- and far-infrared wavelengths. A gas is a greenhouse gas if it absorbs longwave radiation. Earth's atmosphere absorbs only 23% of incoming shortwave radiation, but absorbs 90% of the longwave radiation emitted by the surface, thus accumulating energy and warming the Earth's surface.
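
The split between shortwave and longwave emission follows from Wien's displacement law, which ties a blackbody's peak emission wavelength to its temperature. A minimal Python sketch (the temperatures below are round values for the Sun's photosphere and Earth's surface):

# Wien's displacement law: peak emission wavelength = b / T
WIEN_B = 2.898e-3  # Wien displacement constant, m*K

def peak_wavelength_microns(temperature_k):
    """Peak emission wavelength (in microns) of a blackbody at the given temperature."""
    return WIEN_B / temperature_k * 1e6

print(peak_wavelength_microns(5800))  # Sun: ~0.5 microns (visible light, shortwave)
print(peak_wavelength_microns(288))   # Earth's surface: ~10 microns (thermal infrared, longwave)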

The existence of the greenhouse effect, while not named as such, was proposed as early as 1824 by Joseph Fourier. The argument and the evidence were further strengthened by Claude Pouillet in 1827 and 1838. In 1856 Eunice Newton Foote demonstrated that the warming effect of the sun is greater for air with water vapour than for dry air, and the effect is even greater with carbon dioxide. The term greenhouse was first applied to this phenomenon by Nils Gustaf Ekholm in 1901.

Definition

The greenhouse effect on Earth is defined as: "The infrared radiative effect of all infrared absorbing constituents in the atmosphere. Greenhouse gases (GHGs), clouds, and some aerosols absorb terrestrial radiation emitted by the Earth’s surface and elsewhere in the atmosphere."

The enhanced greenhouse effect describes the strengthening of the natural greenhouse effect that results from human activities increasing the concentration of GHGs in the atmosphere.

Terminology

The term greenhouse effect comes from an analogy to greenhouses. Both greenhouses and the greenhouse effect work by retaining heat from sunlight, but the way they retain heat differs. Greenhouses retain heat mainly by blocking convection (the movement of air). In contrast, the greenhouse effect retains heat by restricting radiative transfer through the air and reducing the rate at which thermal radiation is emitted into space.

History of discovery and investigation

Eunice Newton Foote recognized carbon dioxide's heat-capturing effect in 1856, appreciating its implications for the planet.
 
The greenhouse effect and its impact on climate were succinctly described in this 1912 Popular Mechanics article, accessible for reading by the general public.

The existence of the greenhouse effect, while not named as such, was proposed as early as 1824 by Joseph Fourier. The argument and the evidence were further strengthened by Claude Pouillet in 1827 and 1838. In 1856 Eunice Newton Foote demonstrated that the warming effect of the sun is greater for air with water vapour than for dry air, and the effect is even greater with carbon dioxide. She concluded that "An atmosphere of that gas would give to our earth a high temperature..."

John Tyndall was the first to measure the infrared absorption and emission of various gases and vapors. From 1859 onwards, he showed that the effect was due to a very small proportion of the atmosphere, with the main gases having no effect, and was largely due to water vapor, though small percentages of hydrocarbons and carbon dioxide had a significant effect. The effect was more fully quantified by Svante Arrhenius in 1896, who made the first quantitative prediction of global warming due to a hypothetical doubling of atmospheric carbon dioxide. The term greenhouse was first applied to this phenomenon by Nils Gustaf Ekholm in 1901.

In 1896 Svante Arrhenius used Langley's observations of increased infrared absorption where Moon rays pass through the atmosphere at a low angle, encountering more carbon dioxide (CO2), to estimate an atmospheric cooling effect from a future decrease of CO2. He realized that the cooler atmosphere would hold less water vapor (another greenhouse gas) and calculated the additional cooling effect. He also realized the cooling would increase snow and ice cover at high latitudes, making the planet reflect more sunlight and thus further cool down, as James Croll had hypothesized. Overall Arrhenius calculated that cutting CO2 in half would suffice to produce an ice age. He further calculated that a doubling of atmospheric CO2 would give a total warming of 5–6 degrees Celsius.

Measurement

Matter emits thermal radiation at a rate that is directly proportional to the fourth power of its temperature. Some of the radiation emitted by the Earth's surface is absorbed by greenhouse gases and clouds. Without this absorption, Earth's surface would have an average temperature of −18 °C (−0.4 °F). However, because some of the radiation is absorbed, Earth's average surface temperature is around 15 °C (59 °F). Thus, the Earth's greenhouse effect may be measured as a temperature change of 33 °C (59 °F).

Thermal radiation is characterized by how much energy it carries, typically in watts per square meter (W/m2). Scientists also measure the greenhouse effect based on how much more longwave thermal radiation leaves the Earth's surface than reaches space. Currently, longwave radiation leaves the surface at an average rate of 398 W/m2, but only 239 W/m2 reaches space. Thus, the Earth's greenhouse effect can also be measured as an energy flow change of 159 W/m2. The greenhouse effect can be expressed as a fraction (0.40) or percentage (40%) of the longwave thermal radiation that leaves Earth's surface but does not reach space.

Whether the greenhouse effect is expressed as a change in temperature or as a change in longwave thermal radiation, the same effect is being measured.
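
As a quick numerical check of the flux-based metric described above, the following Python sketch uses the rounded global-average fluxes quoted in this section (exact published values vary slightly between datasets):

# Greenhouse effect expressed as an energy-flow change, from global-average fluxes.
surface_longwave = 398.0   # W/m2, longwave radiation leaving Earth's surface
outgoing_longwave = 239.0  # W/m2, longwave radiation reaching space

greenhouse_effect = surface_longwave - outgoing_longwave  # W/m2
normalized_fraction = greenhouse_effect / surface_longwave

print(greenhouse_effect)    # 159.0 W/m2
print(normalized_fraction)  # ~0.40, i.e. 40% of surface emissions do not reach space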

Role in climate change

Earth's rate of heating (graph) is a result of factors which include the enhanced greenhouse effect.

Strengthening of the greenhouse effect through additional greenhouse gases from human activities is known as the enhanced greenhouse effect. As well as being inferred from measurements by ARGO, CERES and other instruments throughout the 21st century, this increase in radiative forcing from human activity has been observed directly, and is attributable mainly to increased atmospheric carbon dioxide levels.

The Keeling Curve of atmospheric CO2 abundance.

CO2 is produced by fossil fuel burning and other activities such as cement production and tropical deforestation. Measurements of CO2 from the Mauna Loa Observatory show that concentrations have increased from about 313 parts per million (ppm) in 1960, passing the 400 ppm milestone in 2013. The current observed amount of CO2 exceeds the geological record maxima (≈300 ppm) from ice core data.

Over the past 800,000 years, ice core data shows that carbon dioxide has varied from values as low as 180 ppm to the pre-industrial level of 270 ppm. Paleoclimatologists consider variations in carbon dioxide concentration to be a fundamental factor influencing climate variations over this time scale.

Energy balance and temperature

Incoming shortwave radiation

The solar radiation spectrum for direct light at both the top of Earth's atmosphere and at sea level

Hotter matter emits shorter wavelengths of radiation. As a result, the Sun emits shortwave radiation as sunlight while the Earth and its atmosphere emit longwave radiation. Sunlight includes ultraviolet, visible light, and near-infrared radiation.

Sunlight is reflected and absorbed by the Earth and its atmosphere. The atmosphere and clouds reflect about 23% and absorb 23%. The surface reflects 7% and absorbs 48%. Overall, Earth reflects about 30% of the incoming sunlight, and absorbs the rest (240 W/m2).

Outgoing longwave radiation

The greenhouse effect is a reduction in the flux of outgoing longwave radiation, which affects the planet's radiative balance. The spectrum of outgoing radiation shows the effects of different greenhouse gases.

The Earth and its atmosphere emit longwave radiation, also known as thermal infrared or terrestrial radiation. Informally, longwave radiation is sometimes called thermal radiation. Outgoing longwave radiation (OLR) is the radiation from Earth and its atmosphere that passes through the atmosphere and into space.

The greenhouse effect can be directly seen in graphs of Earth's outgoing longwave radiation as a function of frequency (or wavelength). The area between the curve for longwave radiation emitted by Earth's surface and the curve for outgoing longwave radiation indicates the size of the greenhouse effect.

Different substances are responsible for reducing the radiation energy reaching space at different frequencies; for some frequencies, multiple substances play a role. Carbon dioxide is understood to be responsible for the dip in outgoing radiation (and associated rise in the greenhouse effect) at around 667 cm⁻¹ (equivalent to a wavelength of 15 microns).

Each layer of the atmosphere with greenhouse gases absorbs some of the longwave radiation being radiated upwards from lower layers. It also emits longwave radiation in all directions, both upwards and downwards, in equilibrium with the amount it has absorbed. This results in less radiative heat loss and more warmth below. Increasing the concentration of the gases increases the amount of absorption and emission, thereby causing more heat to be retained at the surface and in the layers below.

Effective temperature

Temperature needed to emit a given amount of thermal radiation.

The power of outgoing longwave radiation emitted by a planet corresponds to the effective temperature of the planet. The effective temperature is the temperature that a planet radiating with a uniform temperature (a blackbody) would need to have in order to radiate the same amount of energy.

This concept may be used to compare the amount of longwave radiation emitted to space and the amount of longwave radiation emitted by the surface:

  • Emissions to space: Based on its emissions of longwave radiation to space, Earth's overall effective temperature is −18 °C (0 °F).
  • Emissions from surface: Based on thermal emissions from the surface, Earth's effective surface temperature is about 16 °C (61 °F), which is 34 °C (61 °F) warmer than Earth's overall effective temperature.

Earth's surface temperature is often reported in terms of the average near-surface air temperature. This is about 15 °C (59 °F), a bit lower than the effective surface temperature. This value is 33 °C (59 °F) warmer than Earth's overall effective temperature.
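
The effective temperatures quoted here follow from inverting the Stefan-Boltzmann law (emitted flux = σT⁴). A small Python sketch, reusing the global-average fluxes from the Measurement section:

# Effective temperature corresponding to a thermal radiation flux (Stefan-Boltzmann law).
SIGMA = 5.670374e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def effective_temperature_k(flux_w_m2):
    """Temperature of a blackbody that emits the given flux."""
    return (flux_w_m2 / SIGMA) ** 0.25

t_to_space = effective_temperature_k(239.0)  # emissions to space
t_surface = effective_temperature_k(398.0)   # emissions from the surface

print(t_to_space - 273.15)     # ~ -18 C, Earth's overall effective temperature
print(t_surface - 273.15)      # ~ 16 C, effective surface temperature
print(t_surface - t_to_space)  # ~ 34 K warmer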

Energy flux

Energy flux is the rate of energy flow per unit area. Energy flux is expressed in units of W/m2, which is the number of joules of energy that pass through a square meter each second. Most fluxes quoted in high-level discussions of climate are global values, which means they are the total flow of energy over the entire globe, divided by the surface area of the Earth, 5.1×10^14 m2 (5.1×10^8 km2; 2.0×10^8 sq mi).

The fluxes of radiation arriving at and leaving the Earth are important because radiative transfer is the only process capable of exchanging energy between Earth and the rest of the universe.

Radiative balance

The temperature of a planet depends on the balance between incoming radiation and outgoing radiation. If incoming radiation exceeds outgoing radiation, a planet will warm. If outgoing radiation exceeds incoming radiation, a planet will cool. A planet will tend towards a state of radiative equilibrium, in which the power of outgoing radiation equals the power of absorbed incoming radiation.

Earth's energy imbalance is the amount by which the power of incoming sunlight absorbed by Earth's surface or atmosphere exceeds the power of outgoing longwave radiation emitted to space. Energy imbalance is the fundamental measurement that drives surface temperature. A UN presentation says "The EEI is the most critical number defining the prospects for continued global warming and climate change." One study argues, "The absolute value of EEI represents the most fundamental metric defining the status of global climate change."

Earth's energy imbalance (EEI) was about 0.7 W/m2 as of around 2015, indicating that Earth as a whole is accumulating thermal energy and is in a process of becoming warmer.

Over 90% of the retained energy goes into warming the oceans, with much smaller amounts going into heating the land, atmosphere, and ice.
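
To see what the ~0.7 W/m2 imbalance means in absolute terms, it can be multiplied by Earth's surface area (given in the Energy flux section above) to obtain a total planetary heating rate; a sketch with round numbers:

# Convert the global-mean energy imbalance into a total heating rate and annual energy gain.
EEI = 0.7                    # W/m2, Earth's energy imbalance (circa 2015)
EARTH_SURFACE_AREA = 5.1e14  # m2
SECONDS_PER_YEAR = 3.15e7

total_heating_rate = EEI * EARTH_SURFACE_AREA                    # ~3.6e14 W
energy_gained_per_year = total_heating_rate * SECONDS_PER_YEAR   # ~1.1e22 J

print(total_heating_rate)
print(energy_gained_per_year)  # over 90% of this ends up warming the oceans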

Comparison of Earth's upward flow of longwave radiation in reality and in a hypothetical scenario in which greenhouse gases and clouds are removed or lose their ability to absorb longwave radiation—without changing Earth's albedo (i.e., reflection/absorption of sunlight). Top shows the balance between Earth's heating and cooling as measured at the top of the atmosphere (TOA). Panel (a) shows the real situation with an active greenhouse effect. Panel (b) shows the situation immediately after absorption stops; all longwave radiation emitted by the surface would reach space; there would be more cooling (via longwave radiation emitted to space) than warming (from sunlight). This imbalance would lead to a rapid temperature drop. Panel (c) shows the final stable steady state, after the surface cools sufficiently to emit only enough longwave radiation to match the energy flow from absorbed sunlight.

Day and night cycle

A simple picture assumes a steady state, but in the real world, the day/night (diurnal) cycle, as well as the seasonal cycle and weather disturbances, complicate matters. Solar heating applies only during daytime. At night the atmosphere cools somewhat, but not greatly because the thermal inertia of the climate system resists changes both day and night, as well as for longer periods. Diurnal temperature changes decrease with height in the atmosphere.

Effect of lapse rate

Lapse rate

In the lower portion of the atmosphere, the troposphere, the air temperature decreases (or "lapses") with increasing altitude. The rate at which temperature changes with altitude is called the lapse rate.

On Earth, the air temperature decreases by about 6.5 °C/km (3.6 °F per 1000 ft), on average, although this varies.

The temperature lapse is caused by convection. Air warmed by the surface rises. As it rises, air expands and cools. Simultaneously, other air descends, compresses, and warms. This process creates a vertical temperature gradient within the atmosphere.

This vertical temperature gradient is essential to the greenhouse effect. If the lapse rate was zero (so that the atmospheric temperature did not vary with altitude and was the same as the surface temperature) then there would be no greenhouse effect (i.e., its value would be zero).
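
A back-of-the-envelope way to see why the lapse rate matters: given the average surface temperature and lapse rate quoted above, one can ask at what altitude the air becomes as cold as Earth's overall effective temperature of −18 °C. A sketch, assuming a single constant lapse rate:

# Altitude at which the average air temperature drops to Earth's effective temperature,
# assuming a constant average lapse rate (a deliberate simplification).
SURFACE_TEMP_C = 15.0     # average near-surface air temperature
EFFECTIVE_TEMP_C = -18.0  # Earth's overall effective temperature
LAPSE_RATE_C_PER_KM = 6.5

altitude_km = (SURFACE_TEMP_C - EFFECTIVE_TEMP_C) / LAPSE_RATE_C_PER_KM
print(altitude_km)  # ~5 km, i.e. roughly the mid-troposphere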

Emission temperature and altitude

The temperature at which thermal radiation was emitted can be determined by comparing the intensity at a particular wavenumber to the intensity of a black-body emission curve. In the chart, emission temperatures range between T_min and T_s. ("Wavenumber" is frequency divided by the speed of light.)

Greenhouse gases make the atmosphere near Earth's surface mostly opaque to longwave radiation. The atmosphere only becomes transparent to longwave radiation at higher altitudes, where the air is less dense, there is less water vapor, and reduced pressure broadening of absorption lines limits the wavelengths that gas molecules can absorb.

For any given wavelength, the longwave radiation that reaches space is emitted by a particular radiating layer of the atmosphere. The intensity of the emitted radiation is determined by the weighted average air temperature within that layer. So, for any given wavelength of radiation emitted to space, there is an associated effective emission temperature (or brightness temperature).

A given wavelength of radiation may also be said to have an effective emission altitude, which is a weighted average of the altitudes within the radiating layer.

The effective emission temperature and altitude vary by wavelength (or frequency). This phenomenon may be seen by examining plots of radiation emitted to space.
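
The effective emission (brightness) temperature at a given wavenumber can be found by inverting the Planck function: given the observed spectral radiance, solve for the blackbody temperature that would produce it. A Python sketch; the radiance value used below is purely illustrative, not a measurement:

import math

# Invert Planck's law to get a brightness temperature from spectral radiance at a wavenumber.
H = 6.626e-34  # Planck constant, J s
C = 2.998e8    # speed of light, m/s
K = 1.381e-23  # Boltzmann constant, J/K

def brightness_temperature_k(radiance, wavenumber_cm):
    """Blackbody temperature producing `radiance` (W m^-2 sr^-1 per m^-1) at `wavenumber_cm` (cm^-1)."""
    nu = wavenumber_cm * 100.0  # cm^-1 -> m^-1
    return (H * C * nu / K) / math.log(1.0 + 2.0 * H * C ** 2 * nu ** 3 / radiance)

# Illustrative radiance near the 667 cm^-1 CO2 band center
print(brightness_temperature_k(4.6e-4, 667))  # ~220 K: emission from high, cold air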

Greenhouse gases and the lapse rate

Greenhouse gases (GHGs) in dense air near the surface absorb most of the longwave radiation emitted by the warm surface. GHGs in sparse air at higher altitudes—cooler because of the environmental lapse rate—emit longwave radiation to space at a lower rate than surface emissions.

Earth's surface radiates longwave radiation with wavelengths in the range of 4–100 microns. Greenhouse gases that were largely transparent to incoming solar radiation are more absorbent for some wavelengths in this range.

The atmosphere near the Earth's surface is largely opaque to longwave radiation and most heat loss from the surface is by evaporation and convection. However radiative energy losses become increasingly important higher in the atmosphere, largely because of the decreasing concentration of water vapor, an important greenhouse gas.

Rather than thinking of longwave radiation headed to space as coming from the surface itself, it is more realistic to think of this outgoing radiation as being emitted by a layer in the mid-troposphere, which is effectively coupled to the surface by a lapse rate. The difference in temperature between these two locations explains the difference between surface emissions and emissions to space, i.e., it explains the greenhouse effect.

Infrared absorbing constituents in the atmosphere

Greenhouse gases

A greenhouse gas (GHG) is a gas which contributes to the trapping of heat by impeding the flow of longwave radiation out of a planet's atmosphere. Greenhouse gases contribute most of the greenhouse effect in Earth's energy budget.

Infrared active gases

Gases which can absorb and emit longwave radiation are said to be infrared active and act as greenhouse gases.

Most gases whose molecules have two different atoms (such as carbon monoxide, CO), and all gases with three or more atoms (including H2O and CO2), are infrared active and act as greenhouse gases. (Technically, this is because when these molecules vibrate, those vibrations modify the molecular dipole moment, or asymmetry in the distribution of electrical charge. See Infrared spectroscopy.)

Gases with only one atom (such as argon, Ar) or with two identical atoms (such as nitrogen, N2, and oxygen, O2) are not infrared active. They are transparent to longwave radiation, and, for practical purposes, do not absorb or emit longwave radiation. (This is because their molecules are symmetrical and so do not have a dipole moment.) Such gases make up more than 99% of the dry atmosphere.

Absorption and emission

Longwave absorption coefficients of water vapor and carbon dioxide. For wavelengths near 15 microns (15 μm in top scale), where Earth's surface emits strongly, CO2 is a much stronger absorber than water vapor.

Greenhouse gases absorb and emit longwave radiation within specific ranges of wavelengths (organized as spectral lines or bands).

When greenhouse gases absorb radiation, they distribute the acquired energy to the surrounding air as thermal energy (i.e., kinetic energy of gas molecules). Energy is transferred from greenhouse gas molecules to other molecules via molecular collisions.

Contrary to what is sometimes said, greenhouse gases do not "re-emit" photons after they are absorbed. Because each molecule experiences billions of collisions per second, any energy a greenhouse gas molecule receives by absorbing a photon will be redistributed to other molecules before there is a chance for a new photon to be emitted.

In a separate process, greenhouse gases emit longwave radiation, at a rate determined by the air temperature. This thermal energy is either absorbed by other greenhouse gas molecules or leaves the atmosphere, cooling it.

Radiative effects

Effect on air: Air is warmed by latent heat (buoyant water vapor condensing into water droplets and releasing heat), thermals (warm air rising from below), and by sunlight being absorbed in the atmosphere. Air is cooled radiatively, by greenhouse gases and clouds emitting longwave thermal radiation. Within the troposphere, greenhouse gases typically have a net cooling effect on air, emitting more thermal radiation than they absorb. Warming and cooling of air are well balanced, on average, so that the atmosphere maintains a roughly stable average temperature.

Effect on surface cooling: Longwave radiation flows both upward and downward due to absorption and emission in the atmosphere. These canceling energy flows reduce radiative surface cooling (net upward radiative energy flow). Latent heat transport and thermals provide non-radiative surface cooling which partially compensates for this reduction, but there is still a net reduction in surface cooling, for a given surface temperature.

Effect on TOA energy balance: Greenhouse gases impact the top-of-atmosphere (TOA) energy budget by reducing the flux of longwave radiation emitted to space, for a given surface temperature. Thus, greenhouse gases alter the energy balance at TOA. This means that the surface temperature needs to be higher (than the planet's effective temperature, i.e., the temperature associated with emissions to space), in order for the outgoing energy emitted to space to balance the incoming energy from sunlight. It is important to focus on the top-of-atmosphere (TOA) energy budget (rather than the surface energy budget) when reasoning about the warming effect of greenhouse gases.

Flow of heat in Earth's atmosphere, showing (a) upward radiation heat flow and up/down radiation fluxes, (b) upward non-radiative heat flow (latent heat and thermals), (c) the balance between atmospheric heating and cooling at each altitude, and (d) the atmosphere's temperature profile.

Clouds and aerosols

Clouds and aerosols have both cooling effects, associated with reflecting sunlight back to space, and warming effects, associated with trapping thermal radiation.

On average, clouds have a strong net cooling effect. However, the mix of cooling and warming effects varies, depending on detailed characteristics of particular clouds (including their type, height, and optical properties). Thin cirrus clouds can have a net warming effect. Clouds can absorb and emit infrared radiation and thus affect the radiative properties of the atmosphere.

Atmospheric aerosols affect the climate of the Earth by changing the amount of incoming solar radiation and outgoing terrestrial longwave radiation retained in the Earth's system. This occurs through several distinct mechanisms which are split into direct, indirect and semi-direct aerosol effects. The aerosol climate effects are the biggest source of uncertainty in future climate predictions. The Intergovernmental Panel on Climate Change (IPCC) stated in 2001:

While the radiative forcing due to greenhouse gases may be determined to a reasonably high degree of accuracy... the uncertainties relating to aerosol radiative forcings remain large, and rely to a large extent on the estimates from global modeling studies that are difficult to verify at the present time.

Basic formulas

Effective temperature

A given flux of thermal radiation has an associated effective radiating temperature or effective temperature. Effective temperature is the temperature that a black body (a perfect absorber/emitter) would need to be to emit that much thermal radiation. Thus, the overall effective temperature of a planet is given by

T_eff,OLR = (OLR / σ)^(1/4)

where OLR is the average flux (power per unit area) of outgoing longwave radiation emitted to space and σ is the Stefan-Boltzmann constant. Similarly, the effective temperature of the surface is given by

T_eff,SLR = (SLR / σ)^(1/4)

where SLR is the average flux of longwave radiation emitted by the surface. (OLR is a conventional abbreviation. SLR is used here to denote the flux of surface-emitted longwave radiation, although there is no standard abbreviation for this.)

Metrics for the greenhouse effect

Increase in the Earth's greenhouse effect (2000–2022) based on NASA CERES satellite data.

The IPCC reports the greenhouse effect, G, as being 159 W/m2, where G is the flux of longwave thermal radiation that leaves the surface minus the flux of outgoing longwave radiation that reaches space:

G = SLR − OLR

Alternatively, the greenhouse effect can be described using the normalized greenhouse effect, g̃, defined as

g̃ = G / SLR

The normalized greenhouse effect is the fraction of the amount of thermal radiation emitted by the surface that does not reach space. Based on the IPCC numbers, g̃ = 0.40. In other words, 40 percent less thermal radiation reaches space than what leaves the surface.

Sometimes the greenhouse effect is quantified as a temperature difference. This temperature difference is closely related to the quantities above.

When the greenhouse effect is expressed as a temperature difference, ΔT_GHE, this refers to the effective temperature associated with thermal radiation emissions from the surface minus the effective temperature associated with emissions to space:

ΔT_GHE = T_eff,SLR − T_eff,OLR

Informal discussions of the greenhouse effect often compare the actual surface temperature to the temperature that the planet would have if there were no greenhouse gases. However, in formal technical discussions, when the size of the greenhouse effect is quantified as a temperature, this is generally done using the above formula. The formula refers to the effective surface temperature rather than the actual surface temperature, and compares the surface with the top of the atmosphere, rather than comparing reality to a hypothetical situation.

The temperature difference, ΔT_GHE, indicates how much warmer a planet's surface is than the planet's overall effective temperature.

Radiative balance

The greenhouse effect can be understood as a decrease in the efficiency of planetary cooling. It is quantified as the portion of the radiation flux emitted by the surface that does not reach space, i.e., 40% or 159 W/m2. Some emitted radiation is effectively cancelled out by downwelling radiation and so does not transfer heat. Evaporation and convection partially compensate for this reduction in surface cooling. Low temperatures at high altitudes limit the rate of thermal emissions to space.

Earth's top-of-atmosphere (TOA) energy imbalance (EEI) is the amount by which the power of incoming radiation exceeds the power of outgoing radiation:

EEI = ASR − OLR

where ASR is the mean flux of absorbed solar radiation. ASR may be expanded as

ASR = (1 − A) × MSI

where A is the albedo (reflectivity) of the planet and MSI is the mean solar irradiance incoming at the top of the atmosphere.

The radiative equilibrium temperature of a planet can be expressed as

T_eq = ((1 − A) × MSI / σ)^(1/4)

A planet's temperature will tend to shift towards a state of radiative equilibrium, in which the TOA energy imbalance is zero, i.e., EEI = 0. When the planet is in radiative equilibrium, the overall effective temperature of the planet is given by

T_eff,OLR = T_eq

Thus, the concept of radiative equilibrium is important because it indicates what effective temperature a planet will tend towards having.

If, in addition to knowing the effective temperature, T_eff,OLR, we know the value of the greenhouse effect, then we know the mean (average) surface temperature of the planet.

This is why the quantity known as the greenhouse effect is important: it is one of the few quantities that go into determining the planet's mean surface temperature.

Greenhouse effect and temperature

Typically, a planet will be close to radiative equilibrium, with the rates of incoming and outgoing energy being well-balanced. Under such conditions, the planet's equilibrium temperature is determined by the mean solar irradiance and the planetary albedo (how much sunlight is reflected back to space instead of being absorbed).

The greenhouse effect measures how much warmer the surface is than the overall effective temperature of the planet. So, the effective surface temperature, T_eff,SLR, is, using the definition of ΔT_GHE,

T_eff,SLR = T_eff,OLR + ΔT_GHE

One could also express the relationship between T_eff,SLR and T_eff,OLR using G or g̃.

So, the principle that a larger greenhouse effect corresponds to a higher surface temperature, if everything else (i.e., the factors that determine T_eff,OLR) is held fixed, is true as a matter of definition.
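
Putting the formulas in this section together, a short Python sketch estimates Earth's radiative equilibrium temperature from its albedo and mean solar irradiance, then adds the greenhouse temperature difference to recover an approximate surface temperature (rounded global values; actual budgets differ slightly):

# Radiative equilibrium temperature plus the greenhouse effect gives the surface temperature.
SIGMA = 5.670374e-8     # Stefan-Boltzmann constant, W m^-2 K^-4
ALBEDO = 0.30           # planetary albedo (fraction of sunlight reflected)
MSI = 340.0             # W/m2, mean solar irradiance at the top of the atmosphere
GHE_TEMPERATURE = 33.0  # K, greenhouse effect expressed as a temperature difference

t_equilibrium = ((1.0 - ALBEDO) * MSI / SIGMA) ** 0.25
t_surface = t_equilibrium + GHE_TEMPERATURE

print(t_equilibrium - 273.15)  # about -19 C, close to the -18 C quoted earlier
print(t_surface - 273.15)      # ~ 14 C, approximate mean surface temperature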

Note that the greenhouse effect influences the temperature of the planet as a whole, in tandem with the planet's tendency to move toward radiative equilibrium.

Misconceptions

Earth's overall heat flow. Heat (net energy) always flows from warmer to cooler, honoring the second law of thermodynamics. (This heat flow diagram is equivalent to NASA's earth energy budget diagram. Data is from 2009.)

There are sometimes misunderstandings about how the greenhouse effect functions and raises temperatures.

The surface budget fallacy is a common error in thinking. It involves thinking that an increased CO2 concentration could only cause warming by increasing the downward thermal radiation to the surface, as a result of making the atmosphere a better emitter. If the atmosphere near the surface is already nearly opaque to thermal radiation, this would mean that increasing CO2 could not lead to higher temperatures. However, it is a mistake to focus on the surface energy budget rather than the top-of-atmosphere energy budget. Regardless of what happens at the surface, increasing the concentration of CO2 tends to reduce the thermal radiation reaching space (OLR), leading to a TOA energy imbalance that leads to warming. Earlier researchers like Callendar (1938) and Plass (1959) focused on the surface budget, but the work of Manabe in the 1960s clarified the importance of the top-of-atmosphere energy budget.

Among those who do not believe in the greenhouse effect, there is a fallacy that the greenhouse effect involves greenhouse gases sending heat from the cool atmosphere to the planet's warm surface, in violation of the second law of thermodynamics. However, this idea reflects a misunderstanding. Radiation heat flow is the net energy flow after the flows of radiation in both directions have been taken into account. Radiation heat flow occurs in the direction from the surface to the atmosphere and space, as is to be expected given that the surface is warmer than the atmosphere and space. While greenhouse gases emit thermal radiation downward to the surface, this is part of the normal process of radiation heat transfer. The downward thermal radiation simply reduces the upward thermal radiation net energy flow (radiation heat flow), i.e., it reduces cooling.

Simplified models

Energy flows between space, the atmosphere, and Earth's surface, with greenhouse gases in the atmosphere absorbing and emitting radiant heat, affecting Earth's energy balance. Data as of 2007.

Simplified models are sometimes used to support understanding of how the greenhouse effect comes about and how this affects surface temperature.

Atmospheric layer models

The greenhouse effect can be seen to occur in a simplified model in which the air is treated as a single uniform layer exchanging radiation with the ground and space. Slightly more complex models add additional layers, or introduce convection.

Equivalent emission altitude

One simplification is to treat all outgoing longwave radiation as being emitted from an altitude where the air temperature equals the overall effective temperature for planetary emissions, . Some authors have referred to this altitude as the effective radiating level (ERL), and suggest that as the CO2 concentration increases, the ERL must rise to maintain the same mass of CO2 above that level.

This approach is less accurate than accounting for variation in radiation wavelength by emission altitude. However, it can be useful in supporting a simplified understanding of the greenhouse effect. For instance, it can be used to explain how the greenhouse effect increases as the concentration of greenhouse gases increases.

Earth's overall equivalent emission altitude has been increasing with a trend of 23 m (75 ft)/decade, which is said to be consistent with a global mean surface warming of 0.12 °C (0.22 °F)/decade over the period 1979–2011.
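
As a rough consistency check (not the published calculation, which depends on the lapse rate at the emission level), the rising emission altitude can be converted to a surface warming rate by multiplying by a tropospheric lapse rate:

# Rough conversion: rise in equivalent emission altitude -> surface warming, at a fixed lapse rate.
ALTITUDE_TREND_KM_PER_DECADE = 0.023  # 23 m per decade
LAPSE_RATE_C_PER_KM = 6.5             # average tropospheric lapse rate

warming_per_decade = ALTITUDE_TREND_KM_PER_DECADE * LAPSE_RATE_C_PER_KM
print(warming_per_decade)  # ~0.15 C/decade, the same order as the quoted 0.12 C/decade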

Negative greenhouse effect

Scientists have observed that, at times, there is a negative greenhouse effect over parts of Antarctica. In a location where there is a strong temperature inversion, so that the air is warmer than the surface, it is possible for the greenhouse effect to be reversed, so that the presence of greenhouse gases increases the rate of radiative cooling to space. In this case, the rate of thermal radiation emission to space is greater than the rate at which thermal radiation is emitted by the surface. Thus, the local value of the greenhouse effect is negative.

Runaway greenhouse effect

Most scientists believe that a runaway greenhouse effect is inevitable in the long term, as the Sun gradually becomes more luminous as it ages, and that it will spell the end of all life on Earth. As the Sun becomes 10% brighter about one billion years from now, the surface temperature of Earth will reach 47 °C (117 °F) (unless the albedo is increased sufficiently), causing the temperature of Earth to rise rapidly and its oceans to boil away until it becomes a greenhouse planet, similar to Venus today.

Bodies other than Earth

Greenhouse effect on different celestial bodies

  • Surface temperature, T_s: Venus 735 K (462 °C; 863 °F); Earth 288 K (15 °C; 59 °F); Mars 215 K (−58 °C; −73 °F); Titan 94 K (−179 °C; −290 °F)
  • Greenhouse effect, ΔT_GHE: Venus 503 K (905 °F); Earth 33 K (59 °F); Mars 6 K (11 °F); Titan 21 K (38 °F) GHE, 12 K (22 °F) GHE+AGHE
  • Pressure: Venus 92 atm; Earth 1 atm; Mars 0.0063 atm; Titan 1.5 atm
  • Primary gases: Venus CO2 (0.965), N2 (0.035); Earth N2 (0.78), O2 (0.21), Ar (0.009); Mars CO2 (0.95), N2 (0.03), Ar (0.02); Titan N2 (0.95), CH4 (≈0.05)
  • Trace gases: Venus SO2, Ar; Earth H2O, CO2; Mars O2, CO; Titan H2
  • Planetary effective temperature, T_eff: Venus 232 K (−41 °C; −42 °F); Earth 255 K (−18 °C; −1 °F); Mars 209 K (−64 °C; −83 °F); Titan 73 K (tropopause), 82 K (stratopause)
  • Greenhouse effect, G: Venus 16,000 W/m2; Earth 150 W/m2; Mars 13 W/m2; Titan 2.8 W/m2 GHE, 1.9 W/m2 GHE+AGHE
  • Normalized greenhouse effect, g̃: Venus 0.99; Earth 0.39; Mars 0.11; Titan 0.63 GHE, 0.42 GHE+AGHE

(For Titan, GHE+AGHE denotes the combined greenhouse and anti-greenhouse effect; see the Titan section below.)

In the solar system, apart from the Earth, at least two other planets and a moon also have a greenhouse effect.

Venus

The greenhouse effect on Venus is particularly large, and it brings the surface temperature to as high as 735 K (462 °C; 863 °F). This is due to its very dense atmosphere which consists of about 97% carbon dioxide.

Although Venus is about 30% closer to the Sun, it absorbs (and is warmed by) less sunlight than Earth, because Venus reflects 77% of incident sunlight while Earth reflects around 30%. In the absence of a greenhouse effect, the surface of Venus would be expected to have a temperature of 232 K (−41 °C; −42 °F). Thus, contrary to what one might think, being nearer to the Sun is not a reason why Venus is warmer than Earth.
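
The statement that Venus absorbs less sunlight than Earth can be checked from the albedos quoted above and the approximate solar irradiance at each planet (round values; the exact numbers depend on the adopted solar constant and Bond albedo):

# Mean absorbed solar flux per unit of planetary surface area: (1 - albedo) * irradiance / 4.
# (Dividing by 4 spreads the intercepted disk of sunlight over the whole sphere.)
def mean_absorbed_flux(irradiance_w_m2, albedo):
    return (1.0 - albedo) * irradiance_w_m2 / 4.0

venus = mean_absorbed_flux(2600.0, 0.77)  # ~2,600 W/m2 at Venus; 77% reflected
earth = mean_absorbed_flux(1360.0, 0.30)  # ~1,360 W/m2 at Earth; 30% reflected

print(venus)  # ~150 W/m2
print(earth)  # ~238 W/m2 -> Earth absorbs more sunlight despite being farther from the Sun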

Due to its high pressure, the CO2 in the atmosphere of Venus exhibits continuum absorption (absorption over a broad range of wavelengths) and is not limited to absorption within the bands relevant to its absorption on Earth.

A runaway greenhouse effect involving carbon dioxide and water vapor has for many years been hypothesized to have occurred on Venus; this idea is still largely accepted. The planet Venus experienced a runaway greenhouse effect, resulting in an atmosphere which is 96% carbon dioxide, and a surface atmospheric pressure roughly the same as found 900 m (3,000 ft) underwater on Earth. Venus may have had water oceans, but they would have boiled off as the mean surface temperature rose to the current 735 K (462 °C; 863 °F).

Mars

Mars has about 70 times as much carbon dioxide as Earth, but experiences only a small greenhouse effect, about 6 K (11 °F). The greenhouse effect is small due to the lack of water vapor and the overall thinness of the atmosphere.

The same radiative transfer calculations that predict warming on Earth accurately explain the temperature on Mars, given its atmospheric composition.

Titan

Saturn's moon Titan has both a greenhouse effect and an anti-greenhouse effect. The presence of nitrogen (N2), methane (CH4), and hydrogen (H2) in the atmosphere contributes to a greenhouse effect, increasing the surface temperature by 21 K (38 °F) over the expected temperature of the body without these gases.

While the gases N2 and H2 ordinarily do not absorb infrared radiation, these gases absorb thermal radiation on Titan due to pressure-induced collisions, the large mass and thickness of the atmosphere, and the long wavelengths of the thermal radiation from the cold surface.

The existence of a high-altitude haze, which absorbs wavelengths of solar radiation but is transparent to infrared, contributes to an anti-greenhouse effect of approximately 9 K (16 °F).

The net result of these two effects is a warming of 21 K − 9 K = 12 K (22 °F), so Titan's surface temperature of 94 K (−179 °C; −290 °F) is 12 K warmer than it would be if there were no atmosphere.

Effect of pressure

One cannot predict the relative sizes of the greenhouse effects on different bodies simply by comparing the amount of greenhouse gases in their atmospheres. This is because factors other than the quantity of these gases also play a role in determining the size of the greenhouse effect.

Overall atmospheric pressure affects how much thermal radiation each molecule of a greenhouse gas can absorb. High pressure leads to more absorption and low pressure leads to less.

This is due to "pressure broadening" of spectral lines. When the total atmospheric pressure is higher, collisions between molecules occur at a higher rate. Collisions broaden the width of absorption lines, allowing a greenhouse gas to absorb thermal radiation over a broader range of wavelengths.

Each molecule in the air near Earth's surface experiences about 7 billion collisions per second. This rate is lower at higher altitudes, where the pressure and temperature are both lower. This means that greenhouse gases are able to absorb more wavelengths in the lower atmosphere than they can in the upper atmosphere.

On other planets, pressure broadening means that each molecule of a greenhouse gas is more effective at trapping thermal radiation if the total atmospheric pressure is high (as on Venus), and less effective at trapping thermal radiation if the atmospheric pressure is low (as on Mars).

Geophysical definition of planet

The International Union of Geological Sciences (IUGS) is the internationally recognized body charged with fostering agreement on nomenclature and classification across geoscientific disciplines. However, they have yet to create a formal definition of the term "planet". As a result, there are various geophysical definitions in use among professional geophysicists, planetary scientists, and other professionals in the geosciences. Many professionals opt to use one of several of these geophysical definitions instead of the definition voted on by the International Astronomical Union, the dominant organization for setting planetary nomenclature.

Definitions

Some geoscientists adhere to the formal definition of a planet that was proposed by the International Astronomical Union (IAU) in August 2006. According to the IAU definition, a planet is an astronomical body orbiting the Sun that is massive enough to be rounded by its own gravity, and that has cleared the neighbourhood around its orbit.

Another widely used geophysical definition of a planet is the one put forth by planetary scientists Alan Stern and Harold Levison in 2002. The pair proposed the following rules to determine whether an object in space satisfies the definition of a planetary body.

A planetary body is defined as any body in space that satisfies the following testable upper and lower bound criteria on its mass: If isolated from external perturbations (e.g., dynamical and thermal), the body must:

  1. Be low enough in mass that at no time (past or present) can it generate energy in its interior due to any self-sustaining nuclear fusion chain reaction (else it would be a brown dwarf or a star). And also,
  2. Be large enough that its shape becomes determined primarily by gravity rather than mechanical strength or other factors (e.g. surface tension, rotation rate) in less than a Hubble time (roughly the current age of the universe), so that the body would on this timescale or shorter reach a state of hydrostatic equilibrium in its interior.

They explain their reasoning by noting that this definition delineates the evolutionary stages and primary features of planets more clearly. Specifically, they claim that the hallmark of planethood is, "the collective behavior of the body's mass to overpower mechanical strength and flow into an equilibrium ellipsoid whose shape is dominated by its own gravity" and that the definition allows for "an early period during which gravity may not yet have fully manifested itself to be the dominant force".

They subclassified planetary bodies as follows:

  • Planets: which orbit their stars directly
  • Planetary-scale satellites: the largest being Luna, the Galilean satellites, Titan, and Triton, with the last apparently being "formerly a planet in its own right"
  • Unbound planets: rogue planets between the stars
  • Double planets: in which a planet and a massive satellite orbit a point between the two bodies (the single known example in the Solar System is Pluto–Charon)

Furthermore, there are important dynamical categories:

  • Überplanets: orbit stars and are dynamically dominant enough to clear neighboring planetesimals in a Hubble time
  • Unterplanets: which cannot clear their neighborhood, for example are in unstable orbits, or are in resonance with or orbit a more massive body. They set the boundary at Λ = 1.

A 2018 encapsulation of the above definition defined all planetary bodies as planets. It was worded for a more general audience, and was intended as an alternative to the IAU definition of a planet. It noted that planetary scientists find a different definition of "planet" to be more useful for their field, just as different fields define "metal" differently. For them, a planet is:

a substellar-mass body that has never undergone nuclear fusion and has enough gravitation to be round due to hydrostatic equilibrium, regardless of its orbital parameters.

Some variation can be found in how planetary scientists classify borderline objects, such as the asteroids Pallas and Vesta. These two are probably surviving protoplanets, and are larger than some clearly ellipsoidal objects, but currently are not very round (although Vesta likely was round in the past). Some definitions include them, while others do not.

Other names for geophysical planets

In 2009, Jean-Luc Margot (who proposed a mathematical criterion for clearing the neighborhood) and Levison suggested that "roundness" should refer to bodies whose gravitational forces exceed their material strength, and that round bodies could be called "worlds". They noted that such a geophysical classification was sound and was not necessarily in conflict with the dynamical conception of a planet: for them, "planet" is defined dynamically, and is a subset of "world" (which also includes dwarf planets, round moons, and free floaters). However, they pointed out that a taxonomy based on roundness is highly problematic because roundness is very rarely directly observable, is a continuum, and proxying it based on size or mass leads to inconsistencies because planetary material strength depends on temperature, composition, and mixing ratios. For example, icy Mimas is round at 396-kilometre (246 mi) diameter, but rocky Vesta is not at 525-kilometre (326 mi) diameter. Thus they stated that some uncertainty could be tolerated in classifying an object as a world, while its dynamical classification could be simply determined from mass and orbital period.

Geophysical planets in the Solar System

Under geophysical definitions of a planet, there are more satellite planets and dwarf planets in the Solar System than classical planets.

The number of geophysical planets in the Solar System cannot be objectively listed, as it depends on the precise definition as well as detailed knowledge of a number of poorly-observed bodies, and there are some borderline cases. At the time of the IAU definition in 2006, it was thought that the limit at which icy astronomical bodies were likely to be in hydrostatic equilibrium was around 400 kilometres (250 mi) in diameter, suggesting that there were a large number of dwarf planets in the Kuiper belt and scattered disk. However, by 2010 it was known that icy moons up to 1,500 kilometres (930 mi) in diameter (e.g. Iapetus) are not in equilibrium. Iapetus is round, but is too oblate for its current spin: it has an equilibrium shape for a rotation period of 16 hours, not its actual spin of 79 days. This might be because the shape of Iapetus was frozen by formation of a thick crust shortly after its formation, while its rotation continued to slow afterwards due to tidal dissipation, until it became tidally locked. Most geophysical definitions list such bodies anyway. (In fact, this is already the case with the IAU definition; Mercury is now known to not be in hydrostatic equilibrium, but it is universally considered to be a planet regardless.)

In 2019, Grundy et al. argued that trans-Neptunian objects up to 900 to 1,000 kilometres (560 to 620 mi) in diameter (e.g. (55637) 2002 UX25 and Gǃkúnǁʼhòmdímà) have never compressed out their internal porosity, and are thus not planetary bodies. In 2023, Emery et al. argued for a similar threshold for chemical evolution in the trans-Neptunian region. Such a high threshold suggests that at most nine known trans-Neptunian objects could possibly be geophysical planets: Pluto, Eris, Haumea, Makemake, Gonggong, Charon, Quaoar, Orcus, and Sedna pass the 900-kilometre (560 mi) threshold.

The bodies generally agreed to be geophysical planets include the eight major planets:

  1. Mercury
  2. Venus
  3. 🜨 Earth
  4. Mars
  5. Jupiter
  6. Saturn
  7. Uranus
  8. Neptune

nine dwarf planets that geophysicists generally agree are planets:

  1. Ceres
  2. Orcus
  3. Pluto
  4. Haumea
  5. Quaoar
  6. Makemake
  7. Gonggong
  8. Eris
  9. Sedna

and nineteen planetary-mass moons:

  1. The Moon
  2. Io
  3. Europa
  4. Ganymede
  5. Callisto
  6. Mimas
  7. Enceladus
  8. Tethys
  9. Dione
  10. Rhea
  11. Titan
  12. Iapetus
  13. Miranda
  14. Ariel
  15. Umbriel
  16. Titania
  17. Oberon
  18. Triton
  19. Charon

Some other objects are sometimes included at the borderlines, such as the asteroids Pallas, Vesta, and Hygiea (larger than Mimas, but Pallas and Vesta are noticeably not round); Neptune's second-largest moon Proteus (larger than Mimas, but still not round); or some other trans-Neptunian objects that might or might not be dwarf planets.

An examination of spacecraft imagery suggests that the threshold at which an object is large enough to be rounded by self-gravity (whether due to purely gravitational forces, as with Pluto and Titan, or augmented by tidal heating, as with Io and Europa) is approximately the threshold of geological activity. However, there are exceptions such as Callisto and Mimas, which have equilibrium shapes (historical in the case of Mimas) but show no signs of past or present endogenous geological activity, and Enceladus, which is geologically active due to tidal heating but is apparently not currently in equilibrium.

Comparison to IAU definition of a planet

Some geophysical definitions are the same as the IAU definition, while other geophysical definitions tend to be more or less equivalent to the second clause of the IAU definition of planet.

Stern's 2018 definition, but not his 2002 definition, excludes the first clause of the IAU definition (that a planet be in orbit around a star) and the third clause (that a planet has cleared the neighborhood around its orbit). It thus counts dwarf planets and planetary-mass moons as planets.

Five bodies are currently recognized as or named as dwarf planets by the IAU: Ceres, Pluto (the dwarf planet with the largest known radius), Eris (the dwarf planet with the largest known mass), Haumea, and Makemake, though the last three have not actually been demonstrated to be dwarf planets. Astronomers normally include these five, as well as four more: Quaoar, Sedna, Orcus, and Gonggong.

Reaction to IAU definition

Many critics of the IAU decision were focused specifically on retaining Pluto as a planet and were not interested in debating or discussing how the term "planet" should be defined in geoscience. An early petition rejecting the IAU definition attracted more than 300 signatures, though not all of these critics supported an alternative definition.

Other critics took issue with the definition itself and wished to create alternative definitions that could be used in different disciplines.

The geophysical definition of a planet put forth by Stern and Levison is an alternative to the IAU's definition of what is and is not a planet and is meant to stand as the geophysical definition, while the IAU definition, they argue, is intended more for astronomers. Nonetheless, some geologists favor the IAU's definition. Proponents of Stern and Levison's geophysical definition have shown that such conceptions of what a planet is have been used by planetary scientists for decades, and continued after the IAU definition was established, and that asteroids have routinely been regarded as "minor" planets, though usage varies considerably.

Applicability to exoplanets

Geophysical definitions have been used to define exoplanets. The 2006 IAU definition purposefully does not address the complication of exoplanets, though in 2003 the IAU declared that "the minimum mass required for an extrasolar object to be considered a planet should be the same as that used in the Solar System". While some geophysical definitions that differ from the IAU definition apply, in theory, to exoplanets and rogue planets, they have not been used in practice, due to ignorance of the geophysical properties of most exoplanets. Geophysical definitions typically exclude objects that have ever undergone nuclear fusion, and so may exclude the higher-mass objects included in exoplanet catalogs as well as the lower-mass objects. The Extrasolar Planets Encyclopaedia, Exoplanet Data Explorer and NASA Exoplanet Archive all include objects significantly more massive than the theoretical 13-Jupiter mass threshold at which deuterium fusion is believed to be supported, for reasons including: uncertainties in how this limit would apply to a body with a rocky core, uncertainties in the masses of exoplanets, and debate over whether deuterium-fusion or the mechanism of formation is the most appropriate criterion to distinguish a planet from a star. These uncertainties apply equally to the IAU conception of a planet.

Both the IAU definition and the geophysical definitions that differ from it consider the shape of the object, with consideration given to hydrostatic equilibrium. Determining the roundness of a body requires measurements across multiple chords (and even that is not enough to determine whether it is actually in equilibrium), but exoplanet detection techniques provide only the planet's mass, the ratio of its cross-sectional area to that of the host star, or its relative brightness. One small exoplanet, Kepler-1520b, has a mass of less than 0.02 times that of the Earth, and analogy to objects within the Solar System suggests that this may not be enough for a rocky body to be a planet. Another, WD 1145+017 b, is only 0.0007 Earth masses, while SDSS J1228+1040 b may be only 0.01 Earth radii in size, well below the upper equilibrium limit for icy bodies in the Solar System. (See List of smallest exoplanets.)

Rogue planet

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Rogue_planet

A rogue planet, also termed a free-floating planet (FFP) or an isolated planetary-mass object (iPMO), is an interstellar object of planetary mass which is not gravitationally bound to any star or brown dwarf.

Rogue planets may originate from planetary systems in which they are formed and are later ejected, or they can form on their own, outside a planetary system. The Milky Way alone may have billions to trillions of rogue planets, a range the upcoming Nancy Grace Roman Space Telescope will likely be able to narrow.

Some planetary-mass objects may have formed in a similar way to stars, and the International Astronomical Union has proposed that such objects be called sub-brown dwarfs. A possible example is Cha 110913−773444, which may either have been ejected and become a rogue planet or formed on its own to become a sub-brown dwarf.

Terminology

The first two discovery papers use the names isolated planetary-mass objects (iPMO) and free-floating planets (FFP). Most astronomical papers use one of these terms. The term rogue planet is more often used in microlensing studies, which also often use the term FFP. A press release intended for the public might use an alternative name. The discovery of at least 70 FFPs in 2021, for example, used the terms rogue planet, starless planet, wandering planet and free-floating planet in different press releases.

Discovery

Isolated planetary-mass objects (iPMO) were first discovered in 2000 by the UK team Lucas & Roche with UKIRT in the Orion Nebula. In the same year the Spanish team Zapatero Osorio et al. discovered iPMOs with Keck spectroscopy in the σ Orionis cluster. The spectroscopy of the objects in the Orion Nebula was published in 2001. Both European teams are now recognized for their quasi-simultaneous discoveries. In 1999 the Japanese team Oasa et al. discovered objects in Chamaeleon I that were spectroscopically confirmed years later in 2004 by the US team Luhman et al.

Observation

115 potential rogue planets in the region between Upper Scorpius and Ophiuchus (2021)

There are two techniques to discover free-floating planets: direct imaging and microlensing.

Microlensing

Astrophysicist Takahiro Sumi of Osaka University in Japan and colleagues, who form the Microlensing Observations in Astrophysics and the Optical Gravitational Lensing Experiment collaborations, published their study of microlensing in 2011. They observed 50 million stars in the Milky Way by using the 1.8-metre (5 ft 11 in) MOA-II telescope at New Zealand's Mount John Observatory and the 1.3-metre (4 ft 3 in) University of Warsaw telescope at Chile's Las Campanas Observatory. They found 474 incidents of microlensing, ten of which were brief enough to be planets of around Jupiter's size with no associated star in the immediate vicinity. The researchers estimated from their observations that there are nearly two Jupiter-mass rogue planets for every star in the Milky Way. One study suggested a much larger number, up to 100,000 times more rogue planets than stars in the Milky Way, though this study encompassed hypothetical objects much smaller than Jupiter. A 2017 study by Przemek Mróz of Warsaw University Observatory and colleagues, with six times larger statistics than the 2011 study, indicates an upper limit on Jupiter-mass free-floating or wide-orbit planets of 0.25 planets per main-sequence star in the Milky Way.

In September 2020, astronomers using microlensing techniques reported the detection, for the first time, of an Earth-mass rogue planet (named OGLE-2016-BLG-1928) unbound to any star and free floating in the Milky Way galaxy.

Direct imaging

The cold planetary-mass object WISE J0830+2837 (marked orange object) observed with the Spitzer Space Telescope. It has a temperature of 300–350 K (27–77 °C; 80–170 °F).

Microlensing planets can only be studied during the microlensing event itself, which makes characterizing the planet difficult. Astronomers therefore turn to isolated planetary-mass objects (iPMO) found via the direct imaging method. To determine the mass of a brown dwarf or iPMO, one needs, for example, the luminosity and the age of the object. Determining the age of a low-mass object has proven to be difficult. It is no surprise that the vast majority of iPMOs are found inside young, nearby star-forming regions whose ages astronomers already know. These objects are younger than 200 Myr, are massive (>5 MJ) and belong to the L and T dwarfs. There is, however, a small but growing sample of cold, old Y dwarfs with estimated masses of 8-20 MJ. Nearby rogue planet candidates of spectral type Y include WISE 0855−0714 at a distance of 7.27±0.13 light-years. If this sample of Y dwarfs can be characterized with more accurate measurements, or if a way to better constrain their ages can be found, the number of old and cold iPMOs will likely increase significantly.

The first iPMOs were discovered in the early 2000s via direct imaging inside young star-forming regions. These iPMOs found via direct imaging probably formed like stars (and are sometimes called sub-brown dwarfs). There might be iPMOs that formed like planets and were then ejected. Such objects would, however, be kinematically distinct from their natal star-forming region, should not be surrounded by a circumstellar disk, and would have high metallicity. None of the iPMOs found inside young star-forming regions show a high velocity compared to their star-forming region. Among old iPMOs, the cold WISE J0830+2837 shows a tangential velocity (Vtan) of about 100 km/s, which is high but still consistent with formation in our galaxy. For WISE 1534–1043, one alternative scenario explains this object as an ejected exoplanet because of its high Vtan of about 200 km/s, but its color suggests it is an old, metal-poor brown dwarf. Most astronomers studying massive iPMOs believe that they represent the low-mass end of the star-formation process.

Astronomers have used the Herschel Space Observatory and the Very Large Telescope to observe a very young free-floating planetary-mass object, OTS 44, and demonstrate that the processes characterizing the canonical star-like mode of formation apply to isolated objects down to a few Jupiter masses. Herschel far-infrared observations have shown that OTS 44 is surrounded by a disk of at least 10 Earth masses and thus could eventually form a mini planetary system. Spectroscopic observations of OTS 44 with the SINFONI spectrograph at the Very Large Telescope have revealed that the disk is actively accreting matter, similar to the disks of young stars.

Binaries

2MASS J1119–1137AB, the first planetary-mass binary discovered, located in the TW Hydrae association
 
JuMBO 29, a candidate 12.5+3 MJ binary, separated by 135 AU, located in the Orion Nebula

The first discovery of a resolved planetary-mass binary was 2MASS J1119–1137AB. Other binaries are known, however, such as 2MASS J1553022+153236AB, WISE 1828+2650, WISE 0146+4234, WISE J0336−0143 (which could also be a brown dwarf plus planetary-mass object (BD+PMO) binary), NIRISS-NGC1333-12 and several objects discovered by Zhang et al.

In the Orion Nebula a population of 40 wide binaries and 2 triple systems was discovered. This was surprising for two reasons: the trend among brown dwarf binaries predicted that the separation between low-mass components decreases with decreasing mass, and it was also predicted that the binary fraction decreases with mass. These binaries were named Jupiter-mass binary objects (JuMBOs). They make up at least 9% of the iPMOs and have separations smaller than 340 AU. It is unclear how these JuMBOs formed, but an extensive study argued that they formed in situ, like stars. If they formed like stars, there must be an unknown "extra ingredient" that allows them to form. If they formed like planets and were later ejected, it has to be explained why these binaries did not break apart during the ejection process. Future measurements with JWST might resolve whether these objects formed as ejected planets or as stars. A study by Kevin Luhman reanalysed the NIRCam data and found that most JuMBOs did not appear in his sample of substellar objects. Moreover, their colors were consistent with reddened background sources or low signal-to-noise sources. Only JuMBO 29 was identified as a good candidate in this work. JuMBO 29 was also observed with NIRSpec, and one component was identified as a young M8 source. This spectral type is consistent with a low mass at the age of the Orion Nebula.

Total number of known iPMOs

There are likely hundreds of known candidate iPMOs, over a hundred objects with spectra and a small but growing number of candidates discovered via microlensing. Some large surveys include:

In December 2021, the largest group of rogue planets discovered to date was announced, numbering at least 70 and up to 170 depending on the assumed age. They are found in the OB association between Upper Scorpius and Ophiuchus, have masses between 4 and 13 MJ and ages of around 3 to 10 million years, and were most likely formed either by gravitational collapse of gas clouds or by formation in a protoplanetary disk followed by ejection due to dynamical instabilities. Follow-up spectroscopy with the Subaru Telescope and Gran Telescopio Canarias showed that the contamination of this sample is quite low (≤6%). The 16 young objects observed had masses between 3 and 14 MJ, confirming that they are indeed planetary-mass objects.

In October 2023, an even larger group of 540 planetary-mass object candidates was discovered in the Trapezium Cluster and inner Orion Nebula with JWST. The objects have masses between 0.6 and 13 MJ. A surprising number of these objects formed wide binaries, which was not predicted.

Formation

There are, in general, two scenarios that can lead to the formation of an isolated planetary-mass object (iPMO). It can form like a planet around a star and then be ejected, or it can form in isolation like a low-mass star or brown dwarf. The formation pathway can influence its composition and motion.

Formation like a star

Models from 2001 suggested that objects with a mass of at least one Jupiter mass could form via collapse and fragmentation of molecular clouds. Pre-JWST observations had indicated that objects below 3-5 MJ are unlikely to form on their own. Observations in 2023 in the Trapezium Cluster with JWST, however, showed that objects with masses as low as 0.6 MJ might form on their own, implying that there is no steep cut-off mass. A particular type of globule, called a globulette, is thought to be a birthplace of brown dwarfs and planetary-mass objects. Globulettes are found in the Rosette Nebula and IC 1805. Sometimes young iPMOs are still surrounded by a disk that could form exomoons. Because such exomoons would orbit tightly around their host planet, they would have a relatively high chance, 10-15%, of transiting.
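The quoted 10-15% transit chance follows from simple geometry: for a circular orbit seen from a random direction, the probability that the moon crosses its host planet's disk is roughly the planet's radius divided by the moon's orbital radius. A minimal sketch, with the orbital distances (in planetary radii) chosen purely for illustration:

def transit_probability(a_in_planet_radii):
    # geometric transit probability for a circular orbit: P ~ R_planet / a_moon
    return 1.0 / a_in_planet_radii

for a in (7, 8, 9, 10):   # assumed tight orbits, in units of the planet's radius
    print(f"a = {a} R_p  ->  P ~ {transit_probability(a):.0%}")
# orbits of 7-10 planetary radii give roughly 10-14%, in line with the quoted range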

Disks

Some very young star-forming regions, typically younger than 5 million years, contain isolated planetary-mass objects with infrared excess and signs of accretion. The best known is the iPMO OTS 44, which was discovered to have a disk and is located in Chamaeleon I. Chamaeleon I and II contain other candidate iPMOs with disks. Other star-forming regions with iPMOs showing disks or accretion are Lupus I, the Rho Ophiuchi Cloud Complex, the Sigma Orionis cluster, the Orion Nebula, Taurus, NGC 1333 and IC 348. A large ALMA survey of disks around brown dwarfs and iPMOs found that these disks are not massive enough to form Earth-mass planets, though it remains possible that the disks have already formed planets. Studies of red dwarfs have shown that some retain gas-rich disks at a relatively old age. These disks were dubbed Peter Pan disks, and this trend could continue into the planetary-mass regime. One Peter Pan disk surrounds the 45 Myr old brown dwarf 2MASS J02265658-5327032, whose mass of about 13.7 MJ is close to the planetary-mass regime. Recent studies of the nearby planetary-mass object 2MASS J11151597+1937266 found that it is surrounded by a disk, showing both infrared excess and signs of accretion.

Formation like a planet

Ejected planets are predicted to be mostly low-mass (<30 ME; see Figure 1 of Ma et al.), and their mean mass depends on the mass of their host star. Simulations by Ma et al. showed that 17.5% of 1 M☉ stars eject a total of 16.8 ME per star, with a typical (median) mass of 0.8 ME for an individual free-floating planet (FFP). For lower-mass red dwarfs with a mass of 0.3 M☉, 12% of stars eject a total of 5.1 ME per star, with a typical mass of 0.3 ME for an individual FFP.
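One way to read these figures is as a rough per-star budget: dividing the total ejected mass by the typical mass of a single FFP gives an order-of-magnitude count of ejected planets. The sketch below does only that back-of-the-envelope arithmetic with the quoted numbers; it is an illustrative reading, not a reproduction of the simulations, and it treats the median as if it were the mean:

populations = {
    "1.0 solar-mass stars": {"ejecting_fraction": 0.175, "total_ME": 16.8, "typical_ME": 0.8},
    "0.3 solar-mass stars": {"ejecting_fraction": 0.12,  "total_ME": 5.1,  "typical_ME": 0.3},
}

for label, p in populations.items():
    n_per_ejecting_star = p["total_ME"] / p["typical_ME"]            # crude planet count
    n_per_star_overall = p["ejecting_fraction"] * n_per_ejecting_star
    print(f"{label}: ~{n_per_ejecting_star:.0f} ejected planets per ejecting star, "
          f"~{n_per_star_overall:.1f} averaged over all such stars")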

Hong et al. predicted that exomoons can be scattered by planet-planet interactions and become ejected exomoons. Higher-mass (0.3-1 MJ) ejected FFPs are predicted to be possible, but also rare. Ejection of a planet can occur via planet-planet scattering or a stellar flyby. Another possibility is the ejection of a fragment of a disk that then contracts into a planetary-mass object. Yet another suggested scenario is the ejection of planets on a tilted circumbinary orbit: interactions with the central binary and among the planets themselves can lead to the ejection of the lower-mass planet in the system.

Other scenarios

If accretion onto a stellar or brown dwarf embryo is halted, the embryo could remain at a low enough mass to become a planetary-mass object. Such halted accretion could occur if the embryo is ejected or if its circumstellar disk is photoevaporated near O-stars. Objects that formed via the ejected-embryo scenario would have a smaller disk or none at all, and the binary fraction among such objects would be lower. It could also be that free-floating planetary-mass objects form from a combination of these scenarios.

Fate

Most isolated planetary-mass objects will float in interstellar space forever.

Some iPMOs will have a close encounter with a planetary system. This rare encounter can have three outcomes: the iPMO remains unbound, it becomes weakly bound to the star, or it "kicks out" an existing exoplanet and replaces it. Simulations have shown that the vast majority of these encounters result in a capture event, with the iPMO weakly bound at low gravitational binding energy on an elongated, highly eccentric orbit. These orbits are not stable, and 90% of such objects gain energy through planet-planet encounters and are ejected back into interstellar space. Only 1% of all stars will experience this kind of temporary capture.

Warmth

Artist's conception of a Jupiter-size rogue planet

Interstellar planets generate little heat and are not heated by a star. However, in 1998, David J. Stevenson theorized that some planet-sized objects adrift in interstellar space might sustain a thick atmosphere that would not freeze out. He proposed that these atmospheres would be preserved by the pressure-induced far-infrared radiation opacity of a thick hydrogen-containing atmosphere.

During planetary-system formation, several small protoplanetary bodies may be ejected from the system. An ejected body would receive less of the stellar-generated ultraviolet light that can strip away the lighter elements of its atmosphere. Even an Earth-sized body would have enough gravity to prevent the escape of the hydrogen and helium in its atmosphere. In an Earth-sized object the geothermal energy from residual core radioisotope decay could maintain a surface temperature above the melting point of water, allowing liquid-water oceans to exist. These planets are likely to remain geologically active for long periods. If they have geodynamo-created protective magnetospheres and sea floor volcanism, hydrothermal vents could provide energy for life. These bodies would be difficult to detect because of their weak thermal microwave radiation emissions, although reflected solar radiation and far-infrared thermal emissions may be detectable from an object that is less than 1,000 astronomical units from Earth. Around five percent of Earth-sized ejected planets with Moon-sized natural satellites would retain their satellites after ejection. A large satellite would be a source of significant geological tidal heating.
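A rough calculation shows why an insulating atmosphere (or ice shell) is essential: Earth's internal heat flow alone, radiated freely to space, would support only a very cold surface. A minimal sketch using the Stefan-Boltzmann law and an approximate mean geothermal heat flux (values assumed for illustration):

SIGMA = 5.670e-8           # Stefan-Boltzmann constant, W m^-2 K^-4
GEOTHERMAL_FLUX = 0.09     # approximate mean heat flow through Earth's surface, W m^-2

# equilibrium temperature if the surface radiated only its internal heat: F = sigma * T^4
t_bare = (GEOTHERMAL_FLUX / SIGMA) ** 0.25
print(f"Bare-surface equilibrium temperature: ~{t_bare:.0f} K")   # roughly 35 K

# Reaching 273 K (liquid water) from ~0.1 W m^-2 therefore requires strong insulation,
# such as the pressure-induced far-infrared opacity of a thick hydrogen atmosphere
# (Stevenson's proposal) or a thick ice shell over a subsurface ocean.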

Sunday, December 8, 2024

Galactic habitable zone

From Wikipedia, the free encyclopedia
 

In astrobiology and planetary astrophysics, the galactic habitable zone is the region of a galaxy in which life is most likely to develop. The concept of a galactic habitable zone analyzes various factors, such as metallicity (the presence of elements heavier than hydrogen and helium) and the rate and density of major catastrophes such as supernovae, and uses these to calculate which regions of a galaxy are more likely to form terrestrial planets, initially develop simple life, and provide a suitable environment for this life to evolve and advance. According to research published in August 2015, very large galaxies may favor the birth and development of habitable planets more than smaller galaxies such as the Milky Way. In the case of the Milky Way, its galactic habitable zone is commonly believed to be an annulus with an outer radius of about 10 kiloparsecs (33,000 ly) and an inner radius close to the Galactic Center (with both radii lacking hard boundaries).

Galactic habitable-zone theory has been criticized due to an inability to accurately quantify the factors making a region of a galaxy favorable for the emergence of life. In addition, computer simulations suggest that stars may change their orbits around the galactic center significantly, therefore challenging at least part of the view that some galactic areas are necessarily more life-supporting than others.

History

Background

The idea of the circumstellar habitable zone was introduced in 1953 by Hubertus Strughold and Harlow Shapley and in 1959 by Su-Shu Huang as the region around a star in which an orbiting planet could retain water at its surface. From the 1970s, planetary scientists and astrobiologists began to consider various other factors required for the creation and sustenance of life, including the impact that a nearby supernova may have on the development of life. In 1981, computer scientist Jim Clarke proposed that the apparent lack of extraterrestrial civilizations in the Milky Way could be explained by Seyfert-type outbursts from an active galactic nucleus, with Earth alone being spared from this radiation by virtue of its location in the galaxy. In the same year, Wallace Hampton Tucker analyzed galactic habitability in a more general context, but later work superseded his proposals.

Modern galactic habitable-zone theory was introduced in 1986 by L.S. Marochnik and L.M. Mukhin of the Russian Space Research Institute, who defined the zone as the region in which intelligent life could flourish. Donald Brownlee and palaeontologist Peter Ward expanded upon the concept of a galactic habitable zone, as well as the other factors required for the emergence of complex life, in their 2000 book Rare Earth: Why Complex Life is Uncommon in the Universe. In that book, the authors used the galactic habitable zone, among other factors, to argue that intelligent life is not a common occurrence in the Universe.

The idea of a galactic habitable zone was further developed in 2001 in a paper by Ward and Brownlee, in collaboration with Guillermo Gonzalez of the University of Washington. In that paper, Gonzalez, Brownlee, and Ward stated that regions near the galactic halo would lack the heavier elements required to produce habitable terrestrial planets, thus creating an outward limit to the size of the galactic habitable zone. Being too close to the galactic center, however, would expose an otherwise habitable planet to numerous supernovae and other energetic cosmic events, as well as excessive cometary impacts caused by perturbations of the host star's Oort cloud. Therefore, the authors established an inner boundary for the galactic habitable zone, located just outside the galactic bulge.

Considerations

In order to identify a location in the galaxy as part of the galactic habitable zone, a variety of factors must be accounted for. These include the distribution of stars and spiral arms, the presence or absence of an active galactic nucleus, the frequency of nearby supernovae that can threaten the existence of life, the metallicity of that location, and other factors. If these conditions are not met, a region of the galaxy is unlikely to create or sustain life efficiently.

Chemical evolution

The metallicity of the thin galactic disk is far greater than that of the outlying galactic halo.

One of the most basic requirements for the existence of life around a star is the ability of that star to produce a terrestrial planet of sufficient mass to sustain it. Various elements, such as iron, magnesium, titanium, carbon, oxygen, silicon, and others, are required to produce habitable planets, and the concentration and ratios of these vary throughout the galaxy.

The most common benchmark elemental ratio is [Fe/H], one of the factors determining the propensity of a region of the galaxy to produce terrestrial planets. The galactic bulge, the region of the galaxy closest to the Galactic Center, has an [Fe/H] distribution peaking at −0.2 decimal exponent units (dex) relative to the Sun's ratio (where −1 would correspond to 1/10 of the solar metallicity); the thin disk, which contains the local sectors of the Local Arm, has an average metallicity of −0.02 dex at the Sun's orbital distance from the galactic center, decreasing by 0.07 dex for every additional kiloparsec of orbital distance. The extended thick disk has an average [Fe/H] of −0.6 dex, while the halo, the region farthest from the galactic center, has the lowest [Fe/H] distribution peak, at around −1.5 dex. In addition, ratios such as [C/O], [Mg/Fe], [Si/Fe], and [S/Fe] may be relevant to the ability of a region of a galaxy to form habitable terrestrial planets; of these, [Mg/Fe] and [Si/Fe] are slowly decreasing over time, meaning that future terrestrial planets are more likely to possess larger iron cores.
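The bracketed values are logarithmic: [Fe/H] is the base-10 logarithm of a star's iron-to-hydrogen ratio relative to the Sun's, so −1 dex means one tenth of the solar ratio and 0 dex means solar. A minimal sketch converting the quoted values into linear fractions of the solar metallicity:

regions = {
    "galactic bulge (peak)":   -0.2,
    "thin disk (solar orbit)": -0.02,
    "thick disk (average)":    -0.6,
    "halo (peak)":             -1.5,
}
for region, fe_h in regions.items():
    # [Fe/H] in dex -> linear fraction of the solar Fe/H ratio
    print(f"{region}: [Fe/H] = {fe_h:+.2f} dex  ->  {10 ** fe_h:.2f} x solar")
# e.g. -0.2 dex is about 0.63 x solar, while -1.5 dex is only about 0.03 x solar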

In addition to specific amounts of the various stable elements that comprise a terrestrial planet's mass, an abundance of radionuclides such as 40K, 235U, 238U, and 232Th is required in order to heat the planet's interior and power life-sustaining processes such as plate tectonics, volcanism, and a geomagnetic dynamo. The [U/H] and [Th/H] ratios are dependent on the [Fe/H] ratio; however, a general function for the abundance of 40K cannot be created with existing data.
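The reason these particular isotopes matter is set by exponential decay: the fraction of a radionuclide remaining after a time t is 2^(−t / half-life), so only isotopes with half-lives comparable to a planet's age still contribute significant heat billions of years after formation. A minimal sketch with the standard half-lives, evaluated at roughly Earth's age:

HALF_LIVES_GYR = {"40K": 1.25, "235U": 0.704, "238U": 4.47, "232Th": 14.0}
AGE_GYR = 4.5   # roughly the age of the Earth

for isotope, t_half in HALF_LIVES_GYR.items():
    remaining = 2 ** (-AGE_GYR / t_half)   # fraction of the initial amount left
    print(f"{isotope}: {remaining:.1%} remains after {AGE_GYR} Gyr")
# 40K and 235U are largely exhausted today, while 238U and 232Th still drive most
# of the long-lived radiogenic heating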

Even on a habitable planet with enough radioisotopes to heat its interior, various prebiotic molecules are required in order to produce life; therefore, the distribution of these molecules in the galaxy is important in determining the galactic habitable zone. A 2008 study by Samantha Blair and colleagues attempted to determine the outer edge of the galactic habitable zone by means of analyzing formaldehyde and carbon monoxide emissions from various giant molecular clouds scattered throughout the Milky Way; however, the data is neither conclusive nor complete.

While high metallicity is beneficial for the creation of terrestrial extrasolar planets, an excess amount can be harmful for life. Excess metallicity may lead to the formation of a large number of gas giants in a given system, which may subsequently migrate from beyond the system's frost line and become hot Jupiters, disturbing planets that would otherwise have been located in the system's circumstellar habitable zone. Thus, it was found that the Goldilocks principle applies to metallicity as well; low-metallicity systems have low probabilities of forming terrestrial-mass planets at all, while excessive metallicities cause a large number of gas giants to develop, disrupting the orbital dynamics of the system and altering the habitability of terrestrial planets in the system.

Catastrophic events

The impact of supernovae on the extent of the galactic habitable zone has been extensively studied.

As well as being in a region of the galaxy that is chemically advantageous for the development of life, a star must also avoid an excessive number of catastrophic cosmic events with the potential to damage life on its otherwise habitable planets. Nearby supernovae, for example, have the potential to severely harm life on a planet; with excessive frequency, such catastrophic outbursts have the potential to sterilize an entire region of a galaxy for billions of years. The galactic bulge, for example, experienced an initial wave of extremely rapid star formation, triggering a cascade of supernovae that for five billion years left that area almost completely unable to develop life.

In addition to supernovae, gamma-ray bursts, excessive amounts of radiation, gravitational perturbations and various other events have been proposed to affect the distribution of life within the galaxy. These include, controversially, such proposals as "galactic tides" with the potential to induce cometary impacts or even cold bodies of dark matter that pass through organisms and induce genetic mutations. However, the impact of many of these events may be difficult to quantify.

Galactic morphology

Various morphological features of galaxies can affect their potential for habitability. Spiral arms, for example, are the sites of star formation, but they contain numerous giant molecular clouds and a high density of stars that can perturb a star's Oort cloud, sending avalanches of comets and asteroids toward any planets farther in. In addition, the high density of stars and the high rate of massive star formation can expose stars that spend too long within the spiral arms to supernova explosions, reducing their prospects for the survival and development of life. Considering these factors, the Sun is advantageously placed within the galaxy because, in addition to being outside a spiral arm, it orbits near the corotation circle, maximizing the interval between spiral-arm crossings.
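The corotation argument can be made quantitative: a star crosses a spiral arm whenever the difference between its own angular speed and the pattern's angular speed sweeps out the angle between neighbouring arms, so the interval between crossings grows without bound as the star's orbit approaches corotation. A minimal sketch with a four-arm pattern and purely illustrative angular speeds (the Milky Way's actual pattern speed is uncertain):

import math

def arm_crossing_interval_myr(omega_star, omega_pattern, n_arms=4):
    # time between arm crossings for angular speeds given in rad/Myr
    relative = abs(omega_star - omega_pattern)
    return math.inf if relative == 0 else (2 * math.pi / n_arms) / relative

OMEGA_PATTERN = 0.025   # assumed spiral pattern speed, rad/Myr
for omega_star in (0.035, 0.030, 0.026):
    interval = arm_crossing_interval_myr(omega_star, OMEGA_PATTERN)
    print(f"omega_star = {omega_star}: one crossing every ~{interval:.0f} Myr")
# the closer the star orbits to corotation, the longer it goes between arm crossings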

Spiral arms can also cause climatic changes on a planet. As a star passes through the dense molecular clouds of a spiral arm, its stellar wind may be pushed back to the point that a reflective hydrogen layer accumulates in an orbiting planet's atmosphere, perhaps leading to a snowball Earth scenario.

A galactic bar also has the potential to affect the size of the galactic habitable zone. Galactic bars are thought to grow over time, eventually reaching the corotation radius of the galaxy and perturbing the orbits of the stars already there. High-metallicity stars like the Sun, for example, at an intermediate location between the low-metallicity galactic halo and the high-radiation galactic center, may be scattered throughout the galaxy, affecting the definition of the galactic habitable zone. It has been suggested that for this reason, it may be impossible to properly define a galactic habitable zone.

Boundaries

The galactic habitable zone is often viewed as an annulus 7-9 kpc from the galactic center, though recent research has called this into question.

Early research on the galactic habitable zone, including the 2001 paper by Gonzalez, Brownlee, and Ward, did not demarcate any specific boundaries, merely stating that the zone was an annulus encompassing a region of the galaxy that was both enriched with metals and spared from excessive radiation, and that habitability would be more likely in the galaxy's thin disk. However, later research conducted in 2004 by Lineweaver and colleagues did create boundaries for this annulus, in the case of the Milky Way ranging from 7 kpc to 9 kpc from the galactic center.

The Lineweaver team also analyzed the evolution of the galactic habitable zone with respect to time, finding, for example, that stars close to the galactic bulge had to form within a time window of about two billion years in order to have habitable planets. Before that window, frequent supernova events would have prevented galactic-bulge stars from hosting life-sustaining planets. After the supernova threat had subsided, though, the increasing metallicity of the galactic core would eventually mean that stars there would host a high number of giant planets, with the potential to destabilize star systems and radically alter the orbit of any planet located in a star's circumstellar habitable zone. Simulations conducted in 2005 at the University of Washington, however, show that even in the presence of hot Jupiters, terrestrial planets may remain stable over long timescales.

A 2006 study by Milan Ćirković and colleagues extended the notion of a time-dependent galactic habitable zone, analyzing various catastrophic events as well as the underlying secular evolution of galactic dynamics. The paper considers that the number of habitable planets may fluctuate wildly with time due to the unpredictable timing of catastrophic events, thereby creating a punctuated equilibrium in which habitable planets are more likely at some times than at others. Based on the results of Monte Carlo simulations on a toy model of the Milky Way, the team found that the number of habitable planets is likely to increase with time, though not in a perfectly linear pattern.

Subsequent studies saw more fundamental revision of the old concept of the galactic habitable zone as an annulus. In 2008, a study by Nikos Prantzos revealed that, while the probability of a planet escaping sterilization by supernova was highest at a distance of about 10 kpc from the galactic center, the sheer density of stars in the inner galaxy meant that the highest number of habitable planets could be found there. The research was corroborated in a 2011 paper by Michael Gowanlock, who calculated the frequency of supernova-surviving planets as a function of their distance from the galactic center, their height above the galactic plane, and their age, ultimately discovering that about 0.3% of stars in the galaxy could today support complex life, or 1.2% if one does not consider the tidal locking of red dwarf planets as precluding the development of complex life.

Criticism

The idea of the galactic habitable zone has been criticized by Nikos Prantzos, on the grounds that the parameters that define it are impossible to quantify even approximately, and that the galactic habitable zone may thus merely be a useful conceptual tool for better understanding the distribution of life, rather than an end in itself. For these reasons, Prantzos has suggested that the entire galaxy may be habitable, rather than habitability being restricted to a specific region in space and time. In addition, stars "riding" the galaxy's spiral arms may move tens of thousands of light-years from their original orbits, further supporting the notion that there may not be one specific galactic habitable zone. A Monte Carlo simulation, improving on the methods used by Ćirković in 2006, was conducted in 2010 by Duncan Forgan of the Royal Observatory Edinburgh. The data from these simulations support Prantzos's notion that there is no well-defined galactic habitable zone, indicating the possibility of hundreds of extraterrestrial civilizations in the Milky Way, though further data will be required for a definitive determination to be made.

Cognitive rehabilitation therapy

From Wikipedia, the free encyclopedia https://en.wikipedia.org/wiki/Cognitive_rehabilitation_therapy