The faint young Sun paradox describes the apparent contradiction between observations of liquid water early in Earth's history and the astrophysical expectation that the Sun's output at that epoch would have been only about 70 percent as intense as it is today. The issue was raised by astronomers Carl Sagan and George Mullen in 1972.[1] Proposed explanations of this paradox invoke greenhouse effects, astrophysical influences, or a combination of the two.
The unresolved question is how a climate suitable for life was maintained on Earth over the long timescale despite the variable solar output and wide range of terrestrial conditions.[2]
Early solar output
Early in Earth's history, the Sun's output would have been only 70 percent as intense as it is today. Under the environmental conditions existing at that time, this solar output would have been insufficient to maintain a liquid ocean. Astronomers Carl Sagan and George Mullen pointed out in 1972 that this is contrary to the geological and paleontological evidence.[1] According to the Standard Solar Model, stars similar to the Sun should gradually brighten over their main-sequence lifetime due to contraction of the stellar core caused by fusion.[3] With the predicted solar luminosity of 4 billion (4 × 10⁹) years ago and with greenhouse gas concentrations the same as on the modern Earth, however, any liquid water exposed at the surface would freeze. Yet the geological record shows a continuously, relatively warm surface throughout Earth's early temperature record, with the exception of a cold phase, the Huronian glaciation, about 2.4 to 2.1 billion years ago. Water-related sediments have been found dating to as early as 3.8 billion years ago.[4] Hints of early life forms have been dated to as early as 3.5 billion years ago,[5] and the basic carbon isotope ratios are very much in line with what is found today.[6] A regular alternation between ice ages and warm periods is found only in the period since one billion years ago.[citation needed]
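The expected brightening can be illustrated with Gough's (1981) widely used approximation for the Sun's main-sequence luminosity; the present solar age used below is an assumption of this sketch, not a value from the sources cited above.

```python
# Sketch: Gough (1981) approximation for solar luminosity over the
# main sequence, L(t)/L_now = 1 / (1 + (2/5)(1 - t/t_now)),
# where t is the Sun's age. The present age is an assumed value.
SUN_AGE_NOW = 4.57e9  # years (assumed present solar age)

def relative_luminosity(age_years):
    """Solar luminosity at a given solar age, relative to today."""
    return 1.0 / (1.0 + 0.4 * (1.0 - age_years / SUN_AGE_NOW))

# 4 billion years ago the Sun was ~0.57 Gyr old:
early = relative_luminosity(SUN_AGE_NOW - 4.0e9)
print(f"Luminosity 4 Gyr ago: {early:.0%} of today's")
```

By this approximation the Sun 4 billion years ago radiated roughly 70–75 percent of its present output, consistent with the figure quoted above.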
Greenhouse hypothesis
When it first formed, Earth's atmosphere may have contained more greenhouse gases. Carbon dioxide concentrations may have been higher, with an estimated partial pressure as large as 1,000 kPa (10 bar), because there was no bacterial photosynthesis to convert the CO2 gas to organic carbon and gaseous oxygen. Methane, a very active greenhouse gas that reacts with oxygen to produce carbon dioxide and water vapor, may have been more prevalent as well, with a mixing ratio of 10⁻⁴ (100 parts per million by volume).[7][8] Based on a study of geological sulfur isotopes, in 2009 a group of scientists including Yuichiro Ueno from the University of Tokyo proposed that carbonyl sulfide (OCS) was present in the Archean atmosphere. Carbonyl sulfide is an efficient greenhouse gas, and the scientists estimated that the additional greenhouse effect would have been sufficient to prevent Earth from freezing over.[9]
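The size of the gap that greenhouse gases must fill can be sketched with the simplified Myhre et al. CO2 forcing expression; note that this fit is calibrated only near modern concentrations, so extrapolating it to Archean levels is a loose illustration, not a valid radiative calculation, and the absorbed-flux value is a rough assumption.

```python
import math

# Sketch: the simplified CO2 forcing fit, dF = 5.35 * ln(C/C0), is
# calibrated near modern concentrations; using it at Archean levels
# is only an order-of-magnitude illustration.
def co2_forcing(ratio):
    return 5.35 * math.log(ratio)  # W/m^2

absorbed_now = 240.0            # W/m^2, approx. global-mean absorbed solar
deficit = 0.30 * absorbed_now   # ~72 W/m^2 shortfall at 70% luminosity
print(f"Shortfall ~{deficit:.0f} W/m^2; 100x CO2 gives only "
      f"~{co2_forcing(100):.0f} W/m^2 by this (extrapolated) fit")
```

Even a hundredfold CO2 increase yields only a few tens of W/m² by this extrapolated fit, which is why very high CO2 partial pressures or additional gases such as methane and carbonyl sulfide are invoked.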
Based on an "analysis of nitrogen and argon isotopes in fluid inclusions trapped in 3.0- to 3.5-billion-year-old hydrothermal quartz", a 2013 paper concludes that "dinitrogen did not play a significant role in the thermal budget of the ancient Earth and that the Archean partial pressure of CO2 was probably lower than 0.7 bar".[10] Burgess, one of the authors, states: "The amount of nitrogen in the atmosphere was too low to enhance the greenhouse effect of carbon dioxide sufficiently to warm the planet. However, our results did give a higher than expected pressure reading for carbon dioxide – at odds with the estimates based on fossil soils – which could be high enough to counteract the effects of the faint young Sun and will require further investigation."[11] In addition, research by S. M. Som between 2012 and 2016, based on the analysis of raindrop impressions and air bubbles trapped in ancient lavas, has indicated an atmospheric pressure below 1.1 bar, and probably as low as 0.23 bar, 2.7 billion years ago.[12]
Following the initial accretion of the continents after about 1 billion years,[13] geo-botanist Heinrich Walter and others contend that a non-biological version of the carbon cycle provided a negative temperature feedback. The carbon dioxide in the atmosphere dissolved in liquid water and combined with metal ions derived from silicate weathering to produce carbonates. During ice age periods, this part of the cycle would shut down. Volcanic carbon emissions would then restart a warming cycle due to the greenhouse effect.[14][15]
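The negative feedback described above can be caricatured in a toy model. All constants and functional forms below are purely illustrative assumptions chosen to exhibit stabilization; they are not measured values.

```python
# Toy model of the inorganic carbonate-silicate thermostat:
# warmer -> faster silicate weathering -> CO2 drawn down -> cooling,
# while volcanic outgassing replenishes CO2 at a constant rate.
# All constants are illustrative, chosen only to show the feedback.

def step(co2, outgassing=1.0, k_weather=0.05):
    temperature = 250.0 + 10.0 * co2          # crude T(CO2) proxy
    weathering = k_weather * co2 * (temperature / 280.0) ** 4
    return co2 + outgassing - weathering, temperature

co2 = 5.0
for _ in range(2000):
    co2, temp = step(co2)
print(f"steady-state CO2 index {co2:.2f}, temperature {temp:.1f} K")
```

Starting the iteration from different initial CO2 levels converges to the same steady state, which is the essential behavior of a negative-feedback thermostat: perturbations in either direction are damped out.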
According to the Snowball Earth hypothesis, there may have been a number of periods when Earth's oceans froze over completely. The most recent such period may have been about 630 million years ago.[16] Afterwards, the Cambrian explosion of new multicellular life forms started.
Greater radiogenic heat
In the past, the geothermal release of decay heat, emitted by the decay of the isotopes potassium-40, uranium-235 and uranium-238, was considerably greater than it is today.[17] The isotope ratio of uranium-235 to uranium-238 was also considerably higher than it is today, essentially equivalent to that of modern low-enriched uranium. Natural uranium ore bodies, if present, would therefore have been capable of supporting natural nuclear fission reactors with common light water as their moderator. Any attempt to explain the paradox must therefore account for both radiogenic contributions: decay heat and any potential natural nuclear fission reactors.
The primary mechanism by which radiogenic heat warmed the early Earth is not direct heating (which contributed less than 0.1% of the total heat input even of the early Earth) but rather the establishment of a high geothermal gradient in the crust, which resulted in a greater outgassing rate and therefore a higher concentration of greenhouse gases in the early atmosphere. Additionally, a hotter deep crust would limit the absorption of water by crustal minerals, resulting in a smaller amount of high-albedo land protruding from the early oceans and causing more solar energy to be absorbed.
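Both radiogenic effects follow directly from exponential decay. The sketch below scales present-day abundances back in time using standard half-lives; the present-day heat contributions are rough literature figures and serve only as illustrative assumptions.

```python
import math

# Sketch: scaling radiogenic heat output and the U-235 fraction back
# in time from present-day values. Half-lives in Gyr; present heat
# contributions (TW) are rough, assumed literature figures.
HALF_LIFE = {"K40": 1.25, "U235": 0.704, "U238": 4.468, "Th232": 14.05}
HEAT_NOW = {"K40": 4.0, "U235": 0.85, "U238": 9.5, "Th232": 10.4}

def abundance_factor(isotope, gyr_ago):
    """How much more of the isotope existed gyr_ago billion years ago."""
    lam = math.log(2) / HALF_LIFE[isotope]
    return math.exp(lam * gyr_ago)

heat_then = sum(HEAT_NOW[i] * abundance_factor(i, 4.0) for i in HEAT_NOW)
print(f"Radiogenic heat 4 Gyr ago: ~{heat_then / sum(HEAT_NOW.values()):.1f}x today")

# U-235 atom fraction of natural uranium, then vs. now:
r_now = 0.00725  # N(U235)/N(U238) today
r_then = r_now * abundance_factor("U235", 4.0) / abundance_factor("U238", 4.0)
print(f"U-235 fraction 4 Gyr ago: {r_then / (1 + r_then):.1%}")
```

By this estimate the total radiogenic heat output 4 billion years ago was several times today's, and uranium-235 made up a substantially larger share of natural uranium, which is why natural light-water fission reactors were then possible.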
Greater tidal heating
The Moon was much closer to Earth billions of years ago,[18] and therefore produced considerably more tidal heating.[19]
Alternatives
A minority view, propounded by the Israeli-American physicist Nir Shaviv, uses climatological influences of solar wind, combined with a hypothesis of Danish physicist Henrik Svensmark for a cooling effect of cosmic rays, to explain the paradox.[20] According to Shaviv, the early Sun had emitted a stronger solar wind that produced a protective effect against cosmic rays. In that early age, a moderate greenhouse effect comparable to today's would have been sufficient to explain an ice-free Earth. Evidence for a more active early Sun has been found in meteorites.[21]
The temperature minimum around 2.4 billion years ago coincides with a modulation of the cosmic ray flux by a variable star formation rate in the Milky Way. The reduced solar activity later results in a stronger impact of the cosmic ray flux (CRF), which is hypothesized to lead to a relationship with climatological variations.
An alternative model of solar evolution may explain the faint young Sun paradox. In this model, the early Sun underwent an extended period of higher solar wind output. This caused a mass loss from the Sun on the order of 5–10 percent over its lifetime, resulting in a more consistent level of solar luminosity (as the early Sun had more mass, resulting in more energy output than was predicted). To explain the warm conditions in the Archean era, this mass loss must have occurred over an interval of about one billion years. However, records of ion implantation from meteorites and lunar samples show that the elevated rate of solar wind flux lasted only for a period of 0.1 billion years. Observations of the young Sun-like star π1 Ursae Majoris match this rate of decline in stellar wind output, suggesting that a higher mass loss rate cannot by itself resolve the paradox.[22]
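The leverage of a small mass difference follows from the steep main-sequence mass–luminosity relation, roughly L ∝ M⁴; the exponent itself is an approximation assumed for this sketch.

```python
# Sketch: why a few percent of extra solar mass matters. With the
# rough main-sequence relation L ~ M^4 (the exponent is itself an
# approximation), a Sun 5% more massive is ~22% more luminous.
def luminosity_boost(mass_ratio, exponent=4.0):
    return mass_ratio ** exponent

print(f"5% more mass -> {luminosity_boost(1.05):.2f}x luminosity")
```

If planetary orbital radii also scale inversely with solar mass (conserving orbital angular momentum), the flux received at a planet grows even more steeply with solar mass, which is what makes a modest early mass loss an attractive, if so far unsupported, resolution.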
Examination of Archaean sediments appears inconsistent with the hypothesis of high greenhouse-gas concentrations. Instead, the moderate temperature range may be explained by a lower surface albedo brought about by less continental area and the "lack of biologically induced cloud condensation nuclei". This would have led to increased absorption of solar energy, thereby compensating for the lower solar output.[23]
On Mars
Usually, the faint young Sun paradox is framed in terms of Earth's paleoclimate. However, the issue also arises in the context of the climate on ancient Mars, where liquid water was apparently present in significant amounts (a hydrological cycle, lakes, rivers, rain, possibly seas and oceans) billions of years ago, after which significant liquid water disappeared from the surface. Presently, the surface of Mars is cold and dry. The lower early solar output, assuming nothing else changed, would imply colder (and drier) conditions on Mars in the ancient past than today, apparently contrary to the empirical evidence from Mars exploration that suggests a wetter and milder past. An explanation of the faint young Sun paradox that could simultaneously account for these observations might be that the Sun shed mass through the solar wind, though a sufficient rate of mass shedding is so far unsupported by stellar observations and models.[24] An alternative possible explanation posits intermittent bursts of powerful greenhouse gases such as methane: carbon dioxide alone, even at a pressure far higher than the current one, cannot explain the temperatures required for the presence of liquid water on early Mars.[25]
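The scale of the problem on Mars can be illustrated with a zero-greenhouse equilibrium-temperature estimate; the flux and albedo values below are rough assumptions for illustration.

```python
# Sketch: zero-greenhouse equilibrium temperature,
# T = [S(1 - A) / (4 * sigma)]^(1/4), with rough assumed values.
SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def equilibrium_temp(flux, albedo):
    """Zero-greenhouse planetary equilibrium temperature, in kelvin."""
    return (flux * (1.0 - albedo) / (4.0 * SIGMA)) ** 0.25

MARS_FLUX_NOW = 589.0   # W/m^2, approximate present-day solar flux at Mars
MARS_ALBEDO = 0.25      # rough assumed value

t_now = equilibrium_temp(MARS_FLUX_NOW, MARS_ALBEDO)
t_early = equilibrium_temp(0.75 * MARS_FLUX_NOW, MARS_ALBEDO)
print(f"Mars equilibrium T: now ~{t_now:.0f} K, at 75% flux ~{t_early:.0f} K")
```

Both figures lie far below the 273 K melting point of water, so an early wet Mars requires a greenhouse boost of many tens of kelvin on top of an already larger solar deficit than Earth faced.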