Monday, September 30, 2019

Color temperature

From Wikipedia, the free encyclopedia

The CIE 1931 x,y chromaticity space, also showing the chromaticities of black-body light sources of various temperatures (Planckian locus), and lines of constant correlated color temperature.
 
The color temperature of a light source is the temperature of an ideal black-body radiator that radiates light of a color comparable to that of the light source. Color temperature is a characteristic of visible light that has important applications in lighting, photography, videography, publishing, manufacturing, astrophysics, horticulture, and other fields. In practice, color temperature is meaningful only for light sources that do in fact correspond somewhat closely to the radiation of some black body, i.e., light in a range going from red to orange to yellow to white to blueish white; it does not make sense to speak of the color temperature of, e.g., a green or a purple light. Color temperature is conventionally expressed in kelvins, using the symbol K, a unit of measure for absolute temperature. 

Color temperatures over 5000 K are called "cool colors" (bluish), while lower color temperatures (2700–3000 K) are called "warm colors" (yellowish). "Warm" in this context is an analogy to the radiated heat flux of traditional incandescent lighting rather than to temperature. The spectral peak of warm-colored light is closer to infrared, and most natural warm-colored light sources emit significant infrared radiation. The fact that "warm" lighting in this sense actually has a "cooler" color temperature often leads to confusion.

Categorizing different lighting

Temperature        Source
1700 K             Match flame, low pressure sodium lamps (LPS/SOX)
1850 K             Candle flame, sunset/sunrise
2400 K             Standard incandescent lamps
2550 K             Soft white incandescent lamps
2700 K             "Soft white" compact fluorescent and LED lamps
3000 K             Warm white compact fluorescent and LED lamps
3200 K             Studio lamps, photofloods, etc.
3350 K             Studio "CP" light
5000 K             Horizon daylight
5000 K             Tubular fluorescent lamps or cool white / daylight compact fluorescent lamps (CFL)
5500–6000 K        Vertical daylight, electronic flash
6200 K             Xenon short-arc lamp
6500 K             Daylight, overcast
6500–9500 K        LCD or CRT screen
15,000–27,000 K    Clear blue poleward sky

These temperatures are merely characteristic; there may be considerable variation.

The black-body radiance (Bλ) vs. wavelength (λ) curves for the visible spectrum. The vertical axes of Planck's law plots building this animation were proportionally transformed to keep equal areas between functions and horizontal axis for wavelengths 380–780 nm. K indicates the color temperature in kelvins, and M indicates the color temperature in micro reciprocal degrees.
 
The color temperature of the electromagnetic radiation emitted from an ideal black body is defined as its surface temperature in kelvins, or alternatively in micro reciprocal degrees (mired). This permits the definition of a standard by which light sources are compared. 
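
Since mired is just the reciprocal of the absolute temperature scaled by one million, the conversion is a one-liner; here is a minimal sketch (the function names are illustrative, not from any standard library):

```python
def kelvin_to_mired(temp_k: float) -> float:
    """Convert a color temperature in kelvins to micro reciprocal degrees (mired)."""
    return 1_000_000.0 / temp_k


def mired_to_kelvin(mired: float) -> float:
    """Convert mired back to kelvins (the same reciprocal relation)."""
    return 1_000_000.0 / mired


# Example: a 5000 K source corresponds to 200 mired; shifting it by +80 mired
# lands near 3571 K.
print(kelvin_to_mired(5000))        # 200.0
print(mired_to_kelvin(200 + 80))    # ~3571.4
```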

To the extent that a hot surface emits thermal radiation but is not an ideal black-body radiator, the color temperature of the light is not the actual temperature of the surface. An incandescent lamp's light is thermal radiation, and the bulb approximates an ideal black-body radiator, so its color temperature is essentially the temperature of the filament. Thus a relatively low temperature emits a dull red and a high temperature emits the almost white of the traditional incandescent light bulb. Metal workers are able to judge the temperature of hot metals by their color, from dark red to orange-white and then white.

Many other light sources, such as fluorescent lamps, or LEDs (light emitting diodes) emit light primarily by processes other than thermal radiation. This means that the emitted radiation does not follow the form of a black-body spectrum. These sources are assigned what is known as a correlated color temperature (CCT). CCT is the color temperature of a black-body radiator which to human color perception most closely matches the light from the lamp. Because such an approximation is not required for incandescent light, the CCT for an incandescent light is simply its unadjusted temperature, derived from comparison to a black-body radiator.

The Sun

The Sun closely approximates a black-body radiator. The effective temperature, defined by the total radiative power per unit area, is about 5780 K. The color temperature of sunlight above the atmosphere is about 5900 K.
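
To make "total radiative power per unit area" concrete, the sketch below recovers the Sun's effective temperature from the Stefan–Boltzmann law; the luminosity and radius figures are standard published values, not taken from this article:

```python
import math

SIGMA = 5.670374419e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
L_SUN = 3.828e26         # nominal solar luminosity, W
R_SUN = 6.957e8          # nominal solar radius, m

# Effective temperature: the black-body temperature whose surface flux equals
# the Sun's total radiative power divided by its surface area.
flux = L_SUN / (4 * math.pi * R_SUN ** 2)   # W per square meter of solar surface
t_eff = (flux / SIGMA) ** 0.25
print(round(t_eff))                          # ~5772 K, close to the ~5780 K quoted above
```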

The Sun may appear red, orange, yellow, or white from Earth, depending on its position in the sky. The changing color of the Sun over the course of the day is mainly a result of the scattering of sunlight and is not due to changes in black-body radiation. Rayleigh scattering of sunlight by Earth's atmosphere, which scatters blue light more strongly than red light, causes the blue color of the sky.

Some daylight in the early morning and late afternoon (the golden hours) has a lower ("warmer") color temperature due to increased scattering of shorter-wavelength sunlight by atmospheric particles – an optical phenomenon called the Tyndall effect.

Daylight has a spectrum similar to that of a black body with a correlated color temperature of 6500 K (D65 viewing standard) or 5500 K (daylight-balanced photographic film standard). 

Hues of the Planckian locus on a linear scale
 
For colors based on black-body theory, blue occurs at higher temperatures, whereas red occurs at lower temperatures. This is the opposite of the cultural associations attributed to colors, in which "red" is "hot", and "blue" is "cold".

Applications

Lighting

Color temperature comparison of common electric lamps
 
For lighting building interiors, it is often important to take into account the color temperature of illumination. A warmer (i.e., a lower color temperature) light is often used in public areas to promote relaxation, while a cooler (higher color temperature) light is used to enhance concentration, for example in schools and offices.

CCT dimming for LED technology is regarded as a difficult task, since binning, age, and temperature drift effects of LEDs change the actual color output. Feedback loop systems are used here, for example with color sensors, to actively monitor and control the color output of multiple color-mixing LEDs.
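
A rough sketch of such a feedback loop is shown below. The sensor and driver callbacks are hypothetical placeholders, and a real controller would typically work on chromaticity coordinates rather than directly on CCT; this is only meant to illustrate the closed-loop idea:

```python
def track_target_cct(read_cct, set_duty_cycles, target_cct_k=3500.0,
                     gain=1e-4, steps=100):
    """Toy proportional controller mixing a warm and a cool LED channel.

    read_cct() and set_duty_cycles(warm, cool) are hypothetical hardware
    callbacks: the first returns the CCT measured by a color sensor in
    kelvins, the second drives the two PWM channels (each 0..1).
    """
    warm, cool = 0.5, 0.5
    for _ in range(steps):
        error = target_cct_k - read_cct()   # > 0 means the output is too warm (CCT too low)
        cool = min(1.0, max(0.0, cool + gain * error))
        warm = min(1.0, max(0.0, warm - gain * error))
        set_duty_cycles(warm, cool)
    return warm, cool
```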

Aquaculture

In fishkeeping, color temperature has different functions and foci in the various branches.
  • In freshwater aquaria, color temperature is generally of concern only for producing a more attractive display. Lights tend to be designed to produce an attractive spectrum, sometimes with secondary attention paid to keeping the plants in the aquaria alive.
  • In a saltwater/reef aquarium, color temperature is an essential part of tank health. Within about 400 to 3000 nanometers, light of shorter wavelength can penetrate deeper into water than longer wavelengths, providing essential energy sources to the algae hosted in (and sustaining) coral. This is equivalent to an increase of color temperature with water depth in this spectral range. Because coral typically live in shallow water and receive intense, direct tropical sunlight, the focus was once on simulating this situation with 6500 K lights. Higher-temperature light sources have since become more popular, first 10000 K and more recently 16000 K and 20000 K. Actinic lighting at the violet end of the visible range (420–460 nm) is used to allow night viewing without increasing algae bloom or enhancing photosynthesis, and to make the somewhat fluorescent colors of many corals and fish "pop", creating brighter display tanks.

Digital photography

In digital photography, the term color temperature sometimes refers to remapping of color values to simulate variations in ambient color temperature. Most digital cameras and raw image software provide presets simulating specific ambient values (e.g., sunny, cloudy, tungsten, etc.), while others allow explicit entry of white balance values in kelvins. These settings vary color values along the blue–yellow axis, while some software includes additional controls (sometimes labeled "tint") for the magenta–green axis; the adjustments are to some extent arbitrary and a matter of artistic interpretation.

Photographic film

Photographic emulsion film does not respond to lighting color identically to the human retina or visual perception. An object that appears to the observer to be white may turn out to be very blue or orange in a photograph. The color balance may need to be corrected during printing to achieve a neutral color print. The extent of this correction is limited since color film normally has three layers sensitive to different colors and, when used under the "wrong" light source, the layers may not respond proportionally, giving odd color casts in the shadows, although the mid-tones may have been correctly white-balanced under the enlarger. Light sources with discontinuous spectra, such as fluorescent tubes, cannot be fully corrected in printing either, since one of the layers may barely have recorded an image at all.

Photographic film is made for specific light sources (most commonly daylight film and tungsten film), and, used properly, will create a neutral color print. Matching the sensitivity of the film to the color temperature of the light source is one way to balance color. If tungsten film is used indoors with incandescent lamps, the yellowish-orange light of the tungsten incandescent lamps will appear as white (3200 K) in the photograph. Color negative film is almost always daylight-balanced, since it is assumed that color can be adjusted in printing (with limitations, see above). Color transparency film, being the final artefact in the process, has to be matched to the light source or filters must be used to correct color. 

Filters on a camera lens, or color gels over the light source(s) may be used to correct color balance. When shooting with a bluish light (high color temperature) source such as on an overcast day, in the shade, in window light, or if using tungsten film with white or blue light, a yellowish-orange filter will correct this. For shooting with daylight film (calibrated to 5600 K) under warmer (low color temperature) light sources such as sunsets, candlelight or tungsten lighting, a bluish (e.g. #80A) filter may be used. More-subtle filters are needed to correct for the difference between, say 3200 K and 3400 K tungsten lamps or to correct for the slightly blue cast of some flash tubes, which may be 6000 K. 

If there is more than one light source with varied color temperatures, one way to balance the color is to use daylight film and place color-correcting gel filters over each light source. 

Photographers sometimes use color temperature meters. These are usually designed to read only two regions along the visible spectrum (red and blue); more expensive ones read three regions (red, green, and blue). However, they are ineffective with sources such as fluorescent or discharge lamps, whose light varies in color and may be harder to correct for. Because this light is often greenish, a magenta filter may correct it. More sophisticated colorimetry tools can be used if such meters are lacking.

Desktop publishing

In the desktop publishing industry, it is important to know a monitor’s color temperature. Color matching software, such as Apple's ColorSync for Mac OS, measures a monitor's color temperature and then adjusts its settings accordingly. This enables on-screen color to more closely match printed color. Common monitor color temperatures, along with matching standard illuminants in parentheses, are as follows:
  • 5000 K (D50)
  • 5500 K (D55)
  • 6500 K (D65)
  • 7500 K (D75)
  • 9300 K
D50 is scientific shorthand for a standard illuminant: the daylight spectrum at a correlated color temperature of 5000 K. Similar definitions exist for D55, D65 and D75. Designations such as D50 are used to help classify color temperatures of light tables and viewing booths. When viewing a color slide at a light table, it is important that the light be balanced properly so that the colors are not shifted towards the red or blue. 

Digital cameras, web graphics, DVDs, etc., are normally designed for a 6500 K color temperature. The sRGB standard commonly used for images on the Internet stipulates (among other things) a 6500 K display white point.

TV, video, and digital still cameras

The NTSC and PAL TV norms call for a compliant TV screen to display an electrically black and white signal (minimal color saturation) at a color temperature of 6500 K. On many consumer-grade televisions, there is a very noticeable deviation from this requirement. However, higher-end consumer-grade televisions can have their color temperatures adjusted to 6500 K by using a preprogrammed setting or a custom calibration. Current versions of ATSC explicitly call for the color temperature data to be included in the data stream, but old versions of ATSC allowed this data to be omitted. In this case, current versions of ATSC cite default colorimetry standards depending on the format. Both of the cited standards specify a 6500 K color temperature. 

Most video and digital still cameras can adjust for color temperature by zooming into a white or neutral colored object and setting the manual "white balance" (telling the camera that "this object is white"); the camera then shows true white as white and adjusts all the other colors accordingly. White-balancing is necessary especially when indoors under fluorescent lighting and when moving the camera from one lighting situation to another. Most cameras also have an automatic white balance function that attempts to determine the color of the light and correct accordingly. While these settings were once unreliable, they are much improved in today's digital cameras and produce an accurate white balance in a wide variety of lighting situations.
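
As a rough illustration of what an automatic white-balance routine does, the sketch below applies the simple "gray world" assumption (the scene is assumed to average to neutral); real cameras use far more sophisticated scene analysis:

```python
import numpy as np

def gray_world_white_balance(image: np.ndarray) -> np.ndarray:
    """Scale the R, G, B channels so the image's mean color becomes neutral.

    image: float array of shape (H, W, 3) with values in [0, 1]. The "gray
    world" assumption is a crude stand-in for the automatic white balance
    described above, not what any particular camera actually does.
    """
    means = image.reshape(-1, 3).mean(axis=0)   # per-channel averages
    gains = means.mean() / means                # push each channel toward the overall mean
    return np.clip(image * gains, 0.0, 1.0)

# Example: an image with a bluish cast is pulled back toward neutral.
rng = np.random.default_rng(0)
tinted = np.clip(rng.random((4, 4, 3)) * np.array([0.8, 0.9, 1.2]), 0.0, 1.0)
balanced = gray_world_white_balance(tinted)
```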

Artistic application via control of color temperature

The house above appears a light cream during midday, but seems to be bluish white here in the dim light before full sunrise. Note the color temperature of the sunrise in the background.
 
Video camera operators can white-balance objects that are not white, downplaying the color of the object used for white-balancing. For instance, they can bring more warmth into a picture by white-balancing off something that is light blue, such as faded blue denim; in this way white-balancing can replace a filter or lighting gel when those are not available. 

Cinematographers do not “white balance” in the same way as video camera operators; they use techniques such as filters, choice of film stock, pre-flashing, and, after shooting, color grading, both by exposure at the labs and also digitally. Cinematographers also work closely with set designers and lighting crews to achieve the desired color effects. 

For artists, most pigments and papers have a cool or warm cast, as the human eye can detect even a minute amount of saturation. Gray mixed with yellow, orange, or red is a "warm gray". Green, blue, or purple create "cool grays". Note that this sense of temperature is the reverse of that of real temperature; bluer is described as "cooler" even though it corresponds to a higher-temperature black body.

"Warm" gray: gray mixed with 6% yellow. "Cool" gray: gray mixed with 6% blue.
Lighting designers sometimes select filters by color temperature, commonly to match light that is theoretically white. Since fixtures using discharge-type lamps produce a light of a considerably higher color temperature than do tungsten lamps, using the two in conjunction could potentially produce a stark contrast, so sometimes fixtures with HID lamps, commonly producing light of 6000–7000 K, are fitted with 3200 K filters to emulate tungsten light. Fixtures with color-mixing features or with multiple colors (if including 3200 K) are also capable of producing tungsten-like light. Color temperature may also be a factor when selecting lamps, since each is likely to have a different color temperature.

Correlated color temperature

Log-log graphs of peak emission wavelength and radiant exitance vs black-body temperature – red arrows show that 5780 K black bodies have 501 nm peak wavelength and 63.3 MW/m² radiant exitance
The correlated color temperature (CCT, Tcp) is the temperature of the Planckian radiator whose perceived color most closely resembles that of a given stimulus at the same brightness and under specified viewing conditions
— CIE/IEC 17.4:1987, International Lighting Vocabulary (ISBN 3900734070)

Motivation

Black-body radiators are the reference by which the whiteness of light sources is judged. A black body can be described by its color temperature, whose hues are depicted above. By analogy, nearly Planckian light sources such as certain fluorescent or high-intensity discharge lamps can be judged by their correlated color temperature (CCT), the color temperature of the Planckian radiator that best approximates them. For light source spectra that are not Planckian, color temperature is not a well defined attribute; the concept of correlated color temperature was developed to map such sources as well as possible onto the one-dimensional scale of color temperature, where "as well as possible" is defined in the context of an objective color space.

Background

Judd's (r,g) diagram. The concentric curves indicate the loci of constant purity.
 
Judd's Maxwell triangle. Planckian locus in gray. Translating from trilinear co-ordinates into Cartesian co-ordinates leads to the next diagram.
 
Judd's uniform chromaticity space (UCS), with the Planckian locus and the isotherms from 1000 K to 10000 K, perpendicular to the locus. Judd calculated the isotherms in this space before translating them back into the (x,y) chromaticity space, as depicted in the diagram at the top of the article.
 
Close up of the Planckian locus in the CIE 1960 UCS, with the isotherms in mireds. Note the even spacing of the isotherms when using the reciprocal temperature scale and compare with the similar figure below. The even spacing of the isotherms on the locus implies that the mired scale is a better measure of perceptual color difference than the temperature scale.
 
The notion of using Planckian radiators as a yardstick against which to judge other light sources is not new. In 1923, writing about "grading of illuminants with reference to quality of color ... the temperature of the source as an index of the quality of color", Priest essentially described CCT as we understand it today, going so far as to use the term "apparent color temperature", and astutely recognized three cases:
  • "Those for which the spectral distribution of energy is identical with that given by the Planckian formula."
  • "Those for which the spectral distribution of energy is not identical with that given by the Planckian formula, but still is of such a form that the quality of the color evoked is the same as would be evoked by the energy from a Planckian radiator at the given color temperature."
  • "Those for which the spectral distribution of energy is such that the color can be matched only approximately by a stimulus of the Planckian form of spectral distribution."
Several important developments occurred in 1931. In chronological order:
  1. Raymond Davis published a paper on "correlated color temperature" (his term). Referring to the Planckian locus on the r-g diagram, he defined the CCT as the average of the "primary component temperatures" (RGB CCTs), using trilinear coordinates.
  2. The CIE announced the XYZ color space.
  3. Deane B. Judd published a paper on the nature of "least perceptible differences" with respect to chromatic stimuli. By empirical means he determined that the difference in sensation, which he termed ΔE for a "discriminatory step between colors ... Empfindung" (German for sensation) was proportional to the distance of the colors on the chromaticity diagram. Referring to the (r,g) chromaticity diagram depicted aside, he hypothesized that
K ΔE = |c1 − c2| = max(|r1 − r2|, |g1 − g2|).
These developments paved the way for the development of new chromaticity spaces that are more suited to estimating correlated color temperatures and chromaticity differences. Bridging the concepts of color difference and color temperature, Priest made the observation that the eye is sensitive to constant differences in "reciprocal" temperature:
A difference of one micro-reciprocal-degree (μrd) is fairly representative of the doubtfully perceptible difference under the most favorable conditions of observation.
Priest proposed to use "the scale of temperature as a scale for arranging the chromaticities of the several illuminants in a serial order". Over the next few years, Judd published three more significant papers: 

The first verified the findings of Priest, Davis, and Judd, with a paper on sensitivity to change in color temperature.

The second proposed a new chromaticity space, guided by a principle that has become the holy grail of color spaces: perceptual uniformity (chromaticity distance should be commensurate with perceptual difference). By means of a projective transformation, Judd found a more "uniform chromaticity space" (UCS) in which to find the CCT. Judd determined the "nearest color temperature" by simply finding the point on the Planckian locus nearest to the chromaticity of the stimulus on Maxwell's color triangle, depicted aside. The transformation matrix he used to convert X,Y,Z tristimulus values to R,G,B coordinates was:


From this, one can find these chromaticities: 


The third depicted the locus of the isothermal chromaticities on the CIE 1931 x,y chromaticity diagram. Since the isothermal points formed normals on his UCS diagram, transformation back into the xy plane revealed them still to be lines, but no longer perpendicular to the locus.

MacAdam's "uniform chromaticity scale" diagram; a simplification of Judd's UCS.

Calculation

Judd's idea of determining the nearest point to the Planckian locus on a uniform chromaticity space is current. In 1937, MacAdam suggested a "modified uniform chromaticity scale diagram", based on certain simplifying geometrical considerations:

u = 4x / (12y − 2x + 3),   v = 6y / (12y − 2x + 3)
This (u,v) chromaticity space became the CIE 1960 color space, which is still used to calculate the CCT (even though MacAdam did not devise it with this purpose in mind). Using other chromaticity spaces, such as u'v', leads to non-standard results that may nevertheless be perceptually meaningful.
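
A small sketch of that (x, y) → (u, v) transformation, using the standard CIE 1960 formulas quoted above:

```python
def xy_to_uv1960(x: float, y: float) -> tuple:
    """Convert CIE 1931 (x, y) chromaticity to CIE 1960 UCS (u, v)."""
    denom = 12.0 * y - 2.0 * x + 3.0
    return 4.0 * x / denom, 6.0 * y / denom

# Example: illuminant D65 at (x, y) = (0.3127, 0.3290)
print(xy_to_uv1960(0.3127, 0.3290))   # approximately (0.1978, 0.3122)
```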

Close up of the CIE 1960 UCS. The isotherms are perpendicular to the Planckian locus, and are drawn to indicate the maximum distance from the locus that the CIE considers the correlated color temperature to be meaningful.

The distance from the locus (i.e., degree of departure from a black body) is traditionally indicated in units of Δuv; it is positive for points above the locus. This concept of distance has evolved to become Delta E, which continues to be used today.

Robertson's method

Before the advent of powerful personal computers, it was common to estimate the correlated color temperature by way of interpolation from look-up tables and charts. The most famous such method is Robertson's, who took advantage of the relatively even spacing of the mired scale (see above) to calculate the CCT Tc using linear interpolation of the isotherms' mired values:

Computation of the CCT Tc corresponding to the chromaticity coordinate (uT, vT) in the CIE 1960 UCS.
 
1/Tc = 1/Ti + (θ1 / (θ1 + θ2)) × (1/T(i+1) − 1/Ti)

where Ti and T(i+1) are the color temperatures of the look-up isotherms and i is chosen such that Ti < Tc < T(i+1). (Furthermore, the test chromaticity lies between the only two adjacent isotherms for which di/d(i+1) < 0.)

If the isotherms are tight enough, one can assume θ1/(θ1 + θ2) ≈ di/(di − d(i+1)), leading to

1/Tc = 1/Ti + (di / (di − d(i+1))) × (1/T(i+1) − 1/Ti)

The distance di of the test point (uT, vT) to the i-th isotherm is given by

di = ((vT − vi) − mi (uT − ui)) / √(1 + mi²)

where (ui, vi) is the chromaticity coordinate of the i-th isotherm on the Planckian locus and mi is the isotherm's slope. Since the isotherm is perpendicular to the locus, it follows that mi = −1/li, where li is the slope of the locus at (ui, vi).
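
A sketch of Robertson-style interpolation following the formulas above; the look-up table of isotherms is passed in as data (Robertson's published 31-row table is not reproduced here), so treat this as an outline rather than a ready-to-use implementation:

```python
import math

def robertson_cct(u, v, isotherms):
    """Estimate CCT from CIE 1960 (u, v) by Robertson-style interpolation.

    isotherms: sequence of (T_kelvin, u_i, v_i, m_i) rows ordered along the
    Planckian locus, where m_i is the isotherm slope. Robertson's published
    31-row table is assumed to be supplied by the caller.
    """
    def signed_distance(row):
        _, u_i, v_i, m_i = row
        return ((v - v_i) - m_i * (u - u_i)) / math.sqrt(1.0 + m_i * m_i)

    d_prev = signed_distance(isotherms[0])
    for i in range(1, len(isotherms)):
        d_curr = signed_distance(isotherms[i])
        if d_prev / d_curr < 0:                    # sign change: the point lies between these isotherms
            t_lo, t_hi = isotherms[i - 1][0], isotherms[i][0]
            frac = d_prev / (d_prev - d_curr)      # approximates theta1 / (theta1 + theta2)
            inv_t = 1.0 / t_lo + frac * (1.0 / t_hi - 1.0 / t_lo)
            return 1.0 / inv_t
        d_prev = d_curr
    raise ValueError("chromaticity is outside the range covered by the table")
```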

Precautions

Although the CCT can be calculated for any chromaticity coordinate, the result is meaningful only if the light sources are nearly white. The CIE recommends that "The concept of correlated color temperature should not be used if the chromaticity of the test source differs more than [] from the Planckian radiator." Beyond a certain value of Δuv, a chromaticity coordinate may be equidistant from two points on the locus, causing ambiguity in the CCT.

Approximation

If a narrow range of color temperatures is considered—those encapsulating daylight being the most practical case—one can approximate the Planckian locus in order to calculate the CCT in terms of chromaticity coordinates. Following Kelly's observation that the isotherms intersect in the purple region near (x = 0.325, y = 0.154), McCamy proposed this cubic approximation:


where n = (x − xe)/(y − ye) is the inverse slope line, and (xe = 0.3320, ye = 0.1858) is the "epicenter"; quite close to the intersection point mentioned by Kelly. The maximum absolute error for color temperatures ranging from 2856 K (illuminant A) to 6504 K (D65) is under 2 K. 
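
The cubic itself did not survive the page conversion above, so the sketch below uses the coefficients commonly published for McCamy's approximation; treat them as quoted from the wider literature rather than from this article:

```python
def mccamy_cct(x: float, y: float) -> float:
    """McCamy's cubic approximation of CCT from CIE 1931 (x, y).

    Coefficients are the commonly published ones; n uses the epicenter
    (0.3320, 0.1858) given in the text above.
    """
    n = (x - 0.3320) / (y - 0.1858)
    return -449.0 * n**3 + 3525.0 * n**2 - 6823.3 * n + 5520.33

print(round(mccamy_cct(0.3127, 0.3290)))   # D65: ~6505 K
print(round(mccamy_cct(0.4476, 0.4074)))   # illuminant A: ~2857 K
```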

A more recent proposal, using exponential terms, considerably extends the applicable range by adding a second epicenter for high color temperatures:

CCT = A0 + A1 exp(−n/t1) + A2 exp(−n/t2) + A3 exp(−n/t3)
where n is as before and the other constants are defined below: 


Constant   3–50 kK        50–800 kK
xe         0.3366         0.3356
ye         0.1735         0.1691
A0         −949.86315     36284.48953
A1         6253.80338     0.00228
t1         0.92159        0.07861
A2         28.70599       5.4535×10−36
t2         0.20039        0.01543
A3         0.00004
t3         0.07125

The author suggests that one use the low-temperature equation to determine whether the higher-temperature parameters are needed.
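
A sketch of the exponential formula using the constants from the table above; the functional form (an offset plus three exponentials in n) is reconstructed from those constants, so treat it as an interpretation rather than a quotation:

```python
import math

# Constants from the table above, ordered (xe, ye, A0, A1, t1, A2, t2, A3, t3).
# The 50-800 kK column has no A3/t3 entry, so A3 is set to 0 and t3 to a dummy 1.0.
LOW_RANGE = (0.3366, 0.1735, -949.86315, 6253.80338, 0.92159,
             28.70599, 0.20039, 0.00004, 0.07125)          # 3-50 kK
HIGH_RANGE = (0.3356, 0.1691, 36284.48953, 0.00228, 0.07861,
              5.4535e-36, 0.01543, 0.0, 1.0)               # 50-800 kK

def exponential_cct(x, y, constants=LOW_RANGE):
    """CCT = A0 + A1*exp(-n/t1) + A2*exp(-n/t2) + A3*exp(-n/t3), n = (x - xe)/(y - ye)."""
    xe, ye, a0, a1, t1, a2, t2, a3, t3 = constants
    n = (x - xe) / (y - ye)
    return a0 + a1 * math.exp(-n / t1) + a2 * math.exp(-n / t2) + a3 * math.exp(-n / t3)

# As suggested above, evaluate the low-temperature branch first and switch
# to the high-temperature constants only if the result exceeds 50 kK.
cct = exponential_cct(0.3127, 0.3290)                       # D65: ~6500 K
if cct > 50_000:
    cct = exponential_cct(0.3127, 0.3290, HIGH_RANGE)
print(round(cct))
```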

Color rendering index

The CIE color rendering index (CRI) is a method to determine how well a light source's illumination of eight sample patches compares to the illumination provided by a reference source. Cited together, the CRI and CCT give a numerical estimate of what reference (ideal) light source best approximates a particular artificial light, and what the difference is.

Spectral power distribution

Characteristic spectral power distributions (SPDs) for an incandescent lamp (left) and a fluorescent lamp (right). The horizontal axes are wavelengths in nanometers, and the vertical axes show relative intensity in arbitrary units.
 
Light sources and illuminants may be characterized by their spectral power distribution (SPD). The relative SPD curves provided by many manufacturers may have been produced using 10 nm increments or more on their spectroradiometer. The result is what would seem to be a smoother ("fuller spectrum") power distribution than the lamp actually has. Owing to their spiky distribution, much finer increments are advisable for taking measurements of fluorescent lights, and this requires more expensive equipment.

Color temperature in astronomy

In astronomy, the color temperature is defined by the local slope of the SPD at a given wavelength, or, in practice, a wavelength range. Given, for example, the color magnitudes B and V, which are calibrated to be equal for an A0V star (e.g., Vega), the stellar color temperature is given by the temperature for which the color index B−V of a black-body radiator fits the stellar one. Besides B−V, other color indices can be used as well. The color temperature (as well as the correlated color temperature defined above) may differ substantially from the effective temperature given by the radiative flux of the stellar surface. For example, the color temperature of an A0V star is about 15000 K, compared to an effective temperature of about 9500 K.

America First Committee

From Wikipedia, the free encyclopedia

America First Committee
Abbreviation: AFC
Formation: September 4, 1940
Founder: Robert D. Stuart Jr.
Founded at: Yale University, New Haven, Connecticut, U.S.
Extinction: December 10, 1941
Type: Non-partisan pressure group
Purpose: Non-interventionism
Headquarters: Chicago, Illinois, U.S.
Membership (1941): 800,000
Chairman: Robert E. Wood
Spokesperson: Charles Lindbergh
Subsidiaries: 450 chapters
Revenue (1940): $370,000

The America First Committee (AFC) was the foremost United States non-interventionist pressure group against the American entry into World War II. Started on September 4, 1940, it put out mixed messaging with antisemitic and pro-fascist rhetoric from leading members, and it was dissolved on December 10, 1941, three days after the attack on Pearl Harbor had brought the war to the United States. Membership peaked at 800,000 paying members in 450 chapters. It was one of the largest anti-war organizations in the history of the United States.

Membership

Students at the University of California (Berkeley) participate in a one-day peace strike opposing U.S. entrance into World War II, April 19, 1940
 
The AFC was established on September 4, 1940, by Yale Law School student R. Douglas Stuart, Jr. (son of R. Douglas Stuart, co-founder of Quaker Oats), along with other students, including future President Gerald Ford, future Peace Corps director Sargent Shriver, and future U.S. Supreme Court justice Potter Stewart. At its peak, America First claimed 800,000 dues-paying members in 450 chapters, located mostly within a 300-mile radius of Chicago.

It claimed 135,000 members in 60 chapters in Illinois, its strongest state. Fundraising drives produced about $370,000 from some 25,000 contributors. Nearly half came from a few millionaires such as William H. Regnery, H. Smith Richardson of the Vick Chemical Company, General Robert E. Wood of Sears-Roebuck, publisher Joseph M. Patterson (New York Daily News) and his cousin, publisher Robert R. McCormick (Chicago Tribune).

The AFC was never able to get funding for its own public opinion poll. The New York chapter received slightly more than $190,000, most of it from its 47,000 contributors. Since it never had a national membership form or national dues, and local chapters were quite autonomous, historians point out that the organization's leaders had no idea how many "members" it had.

Serious organizing of the America First Committee took place in Chicago not long after the September 1940 establishment. Chicago was to remain the national headquarters of the committee. To preside over their committee, America First chose General Robert E. Wood, the 61-year-old chairman of Sears, Roebuck and Co. Wood remained at the head of the committee until it was disbanded in the days after the attack on Pearl Harbor.

The America First Committee had its share of prominent businessmen as well as the sympathies of political figures including Democratic Senators Burton K. Wheeler of Montana and David I. Walsh of Massachusetts and Republican Senator Gerald P. Nye of North Dakota; its most prominent spokesman was the aviator Charles A. Lindbergh. Other celebrities supporting America First were actress Lillian Gish and architect Frank Lloyd Wright.

Two men who would later become presidents, John F. Kennedy and Gerald Ford, supported and contributed to the organization. When he donated $100 to the AFC, Kennedy attached a note which read simply: "What you are doing is vital." Ford was one of the first members of the AFC when a chapter formed at Yale University. Additionally, Potter Stewart, a future Supreme Court justice, served on the original committee of the AFC.

Issues

Flyer for an America First Committee rally in St. Louis, Missouri in April 1941
 
When the war began in September 1939, most Americans, including politicians, demanded neutrality regarding Europe. Although most Americans supported strong measures against Japan, Europe was the focus of the America First Committee. The public mood was changing, however, especially after the fall of France in the spring of 1940.

The America First Committee launched a petition aimed at enforcing the 1939 Neutrality Act and forcing President Franklin D. Roosevelt to keep his pledge to keep America out of the war. They profoundly distrusted Roosevelt and argued that he was lying to the American people.

On the day after Roosevelt's lend-lease bill was submitted to the United States Congress, Wood promised AFC opposition "with all the vigor it can exert". America First staunchly opposed the convoying of ships, the Atlantic Charter, and the placing of economic pressure on Japan. In order to achieve the defeat of lend-lease and the perpetuation of American neutrality, the AFC advocated four basic principles:
  • The United States must build an impregnable defense for America.
  • No foreign power, nor group of powers, can successfully attack a prepared America.
  • American democracy can be preserved only by keeping out of the European war.
  • "Aid short of war" weakens national defense at home and threatens to involve America in war abroad.
Charles Lindbergh was admired in Germany and allowed to see the buildup of the German air force, the Luftwaffe, in 1937. He was impressed by its strength and secretly reported his findings to the General Staff of the United States Army, warning them that the U.S. had fallen behind and that it must urgently build up its aviation. He had feuded with the Roosevelt administration for years. His first radio speech was broadcast on September 15, 1939, on all three of the major radio networks. He urged listeners to look beyond the speeches and propaganda that they were being fed and instead look at who was writing the speeches and reports, who owned the papers and who influenced the speakers.

On June 20, 1941, Lindbergh spoke to 30,000 people in Los Angeles at a rally billed as a "Peace and Preparedness Mass Meeting". He criticized the movements that he perceived were leading America into the war. He proclaimed that the United States was in a position that made it virtually impregnable and claimed that the interventionists and the British who called for "the defense of England" really meant "the defeat of Germany".

Charles Lindbergh speaking at an America First Committee rally in Fort Wayne, Indiana in early October 1941
 
Nothing did more to escalate the tensions than the speech which Lindbergh delivered to a rally in Des Moines, Iowa on September 11, 1941. In that speech, he identified the forces pulling America into the war as the British, the Roosevelt administration, and American Jews. While he expressed sympathy for the plight of the Jews in Germany, he argued that America's entry into the war would serve them little better. He said, in part, the following:
It is not difficult to understand why Jewish people desire the overthrow of Nazi Germany. The persecution they suffered in Germany would be sufficient to make bitter enemies of any race. No person with a sense of the dignity of mankind can condone the persecution the Jewish race suffered in Germany. But no person of honesty and vision can look on their pro-war policy here today without seeing the dangers involved in such a policy, both for us and for them.
Instead of agitating for war the Jewish groups in this country should be opposing it in every possible way, for they will be among the first to feel its consequences. Tolerance is a virtue that depends upon peace and strength. History shows that it cannot survive war and devastation. A few farsighted Jewish people realize this and stand opposed to intervention. But the majority still do not. Their greatest danger to this country lies in their large ownership and influence in our motion pictures, our press, our radio, and our government.
A Dr. Seuss editorial cartoon from early October 1941 criticizing America First
 
Communists were antiwar until June 1941, and they tried to infiltrate or take over America First. After Hitler attacked the Soviet Union in June 1941, they reversed positions and denounced the AFC as a Nazi front (a group infiltrated by German agents). Nazis also tried to use the committee: at the trial of the aviator and orator Laura Ingalls, the prosecution revealed that her handler, Ulrich Freiherr von Gienanth, a German diplomat, had encouraged her to participate in committee activities.

After Pearl Harbor

After the attack on Pearl Harbor, AFC canceled a rally with Lindbergh at Boston Garden "in view of recent critical developments," and the organization's leaders announced their support of the war effort. Lindbergh gave the rationale:
We have been stepping closer to war for many months. Now it has come and we must meet it as united Americans regardless of our attitude in the past toward the policy our government has followed.
Whether or not that policy has been wise, our country has been attacked by force of arms and by force of arms we must retaliate. Our own defenses and our own military position have already been neglected too long. We must now turn every effort to building the greatest and most efficient Army, Navy and Air Force in the world. When American soldiers go to war it must be with the best equipment that modern skill can design and that modern industry can build.
With the formal declaration of war against Japan, the organization chose to disband. On December 11, the committee leaders met and voted for dissolution. In the statement which they released to the press was the following:
Our principles were right. Had they been followed, war could have been avoided. No good purpose can now be served by considering what might have been, had our objectives been attained.
We are at war. Today, though there may be many important subsidiary considerations, the primary objective is not difficult to state. It can be completely defined in one word: Victory.
Conservative commentator Pat Buchanan has praised America First and used its name as a slogan. "The achievements of that organization are monumental," writes Buchanan. "By keeping America out of World War II until Hitler attacked Stalin in June 1941, Soviet Russia, not America, bore the brunt of the fighting, bleeding and dying to defeat Nazi Germany."

Clathrate gun hypothesis (now effectively disproved)

From https://doomsdaydebunked.miraheze.org/wiki/Clathrate_gun_hypothesis
 
The theoretical scenario of the clathrate gun hypothesis. Arctic methane emissions lead to warming and in turn to more dissociation.
 
The clathrate gun hypothesis, or methane time bomb (now effectively disproved), is the name given to the idea that as sea temperatures rise in the Arctic, this can trigger a strong positive feedback effect on climate. The hypothesis was that this warming would cause a sudden release of methane from methane clathrate compounds buried in seabeds and seabed permafrost, and then, because methane itself is a powerful greenhouse gas, temperatures rise further and the cycle repeats. The original idea was that this runaway process, once started, could be as irreversible as the firing of a gun. It originates from a paper by Kennett et al., published in 2003, which proposed that the "clathrate gun" could cause abrupt runaway warming on a time scale less than a human lifetime.

"Gun" suggests an exothermic reaction like an explosion. The clathrate decomposition is endothermic - if some of the clathrates are released they cool down the rest of the deposits. This means that the only way it can happen explosively is by a feedback with Earth's climate rapidly warming up the oceans.

The clathrates are not only kept stable by the low temperatures at the sea bed. They are also kept stable by pressure of the depth of sea above the deposits. The clathrates can slowly decompose as the result of lowering sea levels during ice ages, or by the sea floor rising along continental shelves when the ice resting on the land melts. This needs to be distinguished from clathrate dissociation due to a warming sea, which is needed for the clathrate gun hypothesis.

In December 2016, a major literature review by the USGS Gas Hydrates Project concluded that evidence is lacking for the original hypothesis. In 2017, a Royal Society review came to a similar conclusion: there is a relatively limited role for climate feedback from dissociation of the methane clathrates.

The 2018 Annual Review of Environment and Resources article on Methane and Global Environmental Change concluded that "Nevertheless, it seems unlikely that catastrophic, widespread dissociation of marine clathrates will be triggered by continued climate warming at contemporary rates (0.2 °C per decade) during the twenty-first century". In 2018, the CAGE research group (Centre for Arctic Gas Hydrate, Environment and Climate) came to a much stronger conclusion when they published evidence that the methane clathrates formed over 6 million years ago and have been slowly releasing methane for 2 million years, independent of warm or cold climate, rather than releasing methane only recently as had previously been thought.
At one time this hypothesis was thought to be responsible for warming events in and at the end of the Last Glacial Maximum around 26,500 years ago, but this is now also thought to be unlikely.

At one point it was thought that a much slower runaway methane clathrate breakdown might have acted over longer timescales of tens of thousands of years during the Paleocene–Eocene Thermal Maximum 56 million years ago, and the Permian–Triassic extinction event, 252 million years ago. However, this is now thought unlikely.

To hear what an expert says about the topic, see #Video interview with Carolyn Ruppel (USGS Gas Hydrates Project) below.

Methane clathrates

Tetrakaidecahedral Methane clathrate hydrate I, methane molecule consisting of one carbon atom attached to four hydrogen atoms shown in the center surrounded by a cage formed of water molecules
 
Methane clathrate, also known commonly as methane hydrate, is a form of water ice that contains a large amount of methane within its crystal structure. It is stable under pressure and remains stable at higher temperatures than ordinary ice, up to a few degrees above 0 °C depending on the pressure.

The methane forms a structure I hydrate, trapped in dodecahedral cages made up of water molecules which are kept stable by a methane molecule inside each one. These are then each surrounded by tetrahedra to form part of a larger lattice with tetrakaidecahedral cavities which also contain methane molecules. Potentially large deposits of methane clathrate have been found under sediments on the ocean floors of the Earth.

Methane is a much more powerful greenhouse gas than carbon dioxide, although it has a short atmospheric lifetime of around 12 years. Shindell et al. (2009) calculated that its global warming potential (the ratio of its warming effect to that of CO2 over a given time horizon) is between 79 and 105 over 20 years, and between 25 and 40 over 100 years, after accounting for aerosol interactions.
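
To show what those global warming potential figures mean in practice, the tiny sketch below converts a hypothetical methane release into its CO2-equivalent range; the one-tonne figure is purely illustrative:

```python
# Global warming potential ranges quoted above (Shindell et al. 2009,
# including aerosol interactions).
GWP_20_YEARS = (79, 105)
GWP_100_YEARS = (25, 40)

def co2_equivalent(methane_tonnes, gwp_range):
    """Return the CO2-equivalent mass range for a methane release over one horizon."""
    low, high = gwp_range
    return methane_tonnes * low, methane_tonnes * high

# Illustrative only: 1 tonne of methane counts as 79-105 tonnes CO2e over
# 20 years but only 25-40 tonnes CO2e over 100 years, because methane
# decays out of the atmosphere in roughly 12 years.
print(co2_equivalent(1.0, GWP_20_YEARS))
print(co2_equivalent(1.0, GWP_100_YEARS))
```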

How the clathrates dissociate

Gas-hydrate deposits by sector. Only those in sector 2 are likely to release methane that reaches the atmosphere
 
As the oceans warm, methane can be released as the clathrates dissociate. The deposits extend to a depth of many meters, and how large an effect this is depends on how far down into the deposits the dissociation proceeds. The reaction absorbs heat rather than generating it, so as the reaction proceeds it cools the surrounding sediments rather than warming them.

The 2017 and 2018 studies have suggested only the topmost layers would be affected, while the original hypothesis was based on the supposition that deep layers would dissociate. The amount of the effect also depends on what happens to the methane as it rises through the water column after it is released from the clathrates. If the deposit is more than a hundred meters below the surface, most of the methane in the bubbles dissolves into the sea before it reaches the surface, since the sea is undersaturated in methane.

At a density of around 0.9 g/cm3, methane hydrate will float to the surface of the sea or of a lake unless it is bound in place by being formed in or anchored to sediment. So the clathrate deposits all consist of clathrates firmly bound within the ocean sediments.

USGS and Royal Society metastudies (2016 and 2017)

A USGS metastudy first published December 2016 by the USGS Gas Hydrates Project concluded:
"“Our review is the culmination of nearly a decade of original research by the USGS, my coauthor Professor John Kessler at the University of Rochester, and many other groups in the community,” said USGS geophysicist Carolyn Ruppel, who is the paper’s lead author and oversees the USGS Gas Hydrates Project. “After so many years spent determining where gas hydrates are breaking down and measuring methane flux at the sea-air interface, we suggest that conclusive evidence for release of hydrate-related methane to the atmosphere is lacking.”
From the Royal Society report:
"Clathrates: Some economic assessments continue to emphasize the potential damage from very strong and rapid methane hydrate release, although AR5 did not consider this likely. Recent measurements of methane fluxes from the Siberian Shelf Seas are much lower than those inferred previously. A range of other studies have suggested a much smaller influence of clathrate release on the Arctic atmosphere than had been suggested.

…. A recent modeling study joined earlier papers in assigning a relatively limited role to dissociation of methane hydrates as a climate feedback. Methane concentrations are rising globally, raising interesting questions (see section on methane) about what the cause is, finally new measurements of the 14C content of methane across the warming out of the last glacial period show that the release of old carbon reservoirs (including methane hydrates) played only a small role in the methane concentration increase that occurred then."

Timeline with original hypothesis, and later developments

This is a timeline of clathrates research with some of the milestones.

2007 - Most deposits are too deep, focus is on shallow deposits

Gas hydrate breakdown due to warming from ocean water
 
Most deposits of methane clathrate are in sediments too deep to respond rapidly, and modeling by Archer (2007) suggests the methane forcing should remain a minor component of the overall greenhouse effect. Clathrate deposits destabilize from the deepest part of their stability zone, which is typically hundreds of meters below the seabed. A sustained increase in sea temperature will warm its way through the sediment eventually, and cause the shallowest, most marginal clathrate to start to break down; but it will typically take on the order of a thousand years or more for the temperature signal to get through.

Subsea permafrost occurs beneath the seabed and exists in the continental shelves of the polar regions. This source of methane is different from methane clathrates, but contributes to the overall outcome and feedbacks.

From sonar measurements in recent years researchers quantified the density of bubbles emanating from subsea permafrost into the ocean (a process called ebullition), and found that 100–630 mg methane per square meter is emitted daily along the East Siberian Shelf, into the water column. They also found that during storms, when wind accelerates air-sea gas exchange, methane levels in the water column drop dramatically. Observations suggest that methane release from seabed permafrost will progress slowly, rather than abruptly. However, Arctic cyclones, fueled by global warming, and further accumulation of greenhouse gases in the atmosphere could contribute to more rapid methane release from this source.

2008 - Original hypothesis, idea of a fast release of 50 gigatons of methane

Research carried out in 2008 in the Siberian Arctic showed millions of tons of methane being released, apparently through perforations in the seabed permafrost, with concentrations in some regions reaching up to 100 times normal levels. The excess methane has been detected in localized hotspots in the outfall of the Lena River and the border between the Laptev Sea and the East Siberian Sea. At the time, some of the melting was thought to be the result of geological heating, but more thawing was believed to be due to the greatly increased volumes of meltwater being discharged from the Siberian rivers flowing north. The current methane release had previously been estimated at 0.5 megatonnes per year. Shakhova et al. (2008) estimate that not less than 1,400 gigatonnes of carbon is presently locked up as methane and methane hydrates under the Arctic submarine permafrost, and 5–10% of that area is subject to puncturing by open taliks. They conclude that "release of up to 50 gigatonnes of predicted amount of hydrate storage [is] highly possible for abrupt release at any time". That would increase the methane content of the planet's atmosphere by a factor of twelve, equivalent in greenhouse effect to a doubling in the current level of CO2.

This is what led to the original clathrate gun hypothesis, and in 2008 the United States Department of Energy National Laboratory system and the United States Geological Survey's Climate Change Science Program both identified potential clathrate destabilization in the Arctic as one of the four most serious scenarios for abrupt climate change, which have been singled out for priority research. The USCCSP released a report in late December 2008 estimating the gravity of this risk.

2010 - Taliks or pingos could lead to gas migration pathways

There is a possibility for the formation of gas migration pathways within fault zones in the East Siberian Arctic Shelf, through the process of talik formation, or pingo-like features.

2012 - Possible abrupt release of clathrates stabilized by low temperatures or after landslips

A 2012 assessment of the literature identifies methane hydrates on the Shelf of East Arctic Seas as a potential trigger.

The Arctic ocean clathrates can exist in shallower water than elsewhere, stabilized by lower temperatures rather than higher pressures; these may potentially be marginally stable much closer to the surface of the sea-bed, stabilized by a frozen 'lid' of permafrost preventing methane escape. 

The so-called self-preservation phenomenon has been studied by Russian geologists starting in the late 1980s. This metastable clathrate state can be a basis for release events of methane excursions, such as during the interval of the Last Glacial Maximum. A study from 2010 concluded with the possibility for a trigger of abrupt climate warming based on metastable methane clathrates in the East Siberian Arctic Shelf (ESAS) region.

Profile illustrating the continental shelf, slope and rise
 
A trapped gas deposit on the continental slope off Canada in the Beaufort Sea, located in an area of small conical hills on the ocean floor, is just 290 meters below sea level and is considered the shallowest known deposit of methane hydrate.

Seismic observation (in 2012) of destabilizing methane hydrate along the continental slope of the eastern United States, following the intrusion of warmer ocean currents, suggests that underwater landslides could release methane. The estimated amount of methane hydrate in this slope is 2.5 gigatonnes (about 0.2% of the amount required to cause the PETM), and it is unclear if the methane could reach the atmosphere. However, the authors of the study caution: "It is unlikely that the western North Atlantic margin is the only area experiencing changing ocean currents; our estimate of 2.5 gigatonnes of destabilizing methane hydrate may therefore represent only a fraction of the methane hydrate currently destabilizing globally." 

2015 - Model based on the hypothesis suggests an extra 6 °C rise within 80 years

A study of the effects for the original hypothesis, based on a coupled climate–carbon cycle model (GCM), assessed a 1000-fold methane increase (from less than 1 to 1000 ppmv) in a single pulse from methane hydrates, based on carbon amount estimates (in GtC) for the PETM. It concluded that atmospheric temperatures would increase by more than 6 °C within 80 years, and that carbon stored in the land biosphere would decrease by less than 25%, suggesting a critical situation for ecosystems and farming, especially in the tropics.

2016 - Methane in upper continental slope clathrates doesn't get to the surface

Some of the shallow methane clathrates are indeed decomposing, and there are higher concentrations of methane near the sea floor that do indeed come from the clathrates. But this methane is taken up by the sea water, and measurements made by many scientists show that almost none of it reaches the surface of the sea. Methane in the upper layers of the sea does not come from the sea floor, and there are no significant atmospheric additions. This is also the date of publication of the USGS metastudy.

2017 - Fertilizing effect of methane at continental margins may lead to net CO2 sink

One paper published in 2017 found, from measurements on the Svalbard margin, that CO2 sequestration due to the fertilizing effect of the methane on surface microbes led to a net negative effect on radiative forcing, 231 times greater than the effect of the methane emissions:
"Continuous sea−air gas flux data collected over a shallow ebullitive methane seep field on the Svalbard margin reveal atmospheric CO2 uptake rates (−33,300 ± 7,900 μmol m−2·d−1) twice that of surrounding waters and ∼1,900 times greater than the diffusive sea−air methane efflux (17.3 ± 4.8 μmol m−2·d−1). The negative radiative forcing expected from this CO2 uptake is up to 231 times greater than the positive radiative forcing from the methane emissions."

2017 - Methane clathrates only decompose to a depth of 1.6 meters

However, later research cast doubt on this picture. Hong et al. (2017) studied the seepage from large mounds of hydrates in the shallow Arctic seas at Storfjordrenna, in the Barents Sea close to Svalbard. They showed that though the temperature of the sea bed has fluctuated seasonally over the last century, between 1.8 and 4.8 °C, it has only affected release of methane to a depth of about 1.6 meters. The areas that do destabilize do so only very slowly (over centuries) because they are warmed sufficiently for less than half the year, from April to August, and this does not seem to be enough for fast destabilization.

Hydrates can be stable through the top 60 meters of the sediments and the current rapid releases came from deeper below the sea floor. They concluded that the increase in flux started hundreds to thousands of years ago well before the onset of warming that others speculated as its cause, and that these seepages are not increasing due to momentary warming. Summarizing his research, Hong stated:
"The results of our study indicate that the immense seeping found in this area is a result of natural state of the system. Understanding how methane interacts with other important geological, chemical and biological processes in the Earth system is essential and should be the emphasis of our scientific community,"
Further research by Klaus Wallmann et al. (2018) found that the hydrate release is due to the rebound of the sea bed after the ice melted. The methane dissociation began around 8,000 years ago, when the land began to rise faster than the sea level and the water as a result became shallower, with less hydrostatic pressure. This dissociation was therefore a result of the uplift of the sea bed rather than of anthropogenic warming, and the amount of methane released by it was small. They found that the methane seeps originate not from the hydrates but from deep geological gas reservoirs (seepage from these formed the hydrates originally). They concluded that the hydrates acted as a dynamic seal regulating the methane emissions from the deep geological gas reservoirs; when the hydrates dissociated 8,000 years ago, the seal weakened, leading to the higher methane release still observed today.

This is also the date of publication of the Royal Society metastudy.

2018 - CAGE group findings, the methane has been escaping at the same rate for millions of years

Research by the CAGE group in 2018 showed that the methane there has been escaping at the same rate for millions of years:
Recent observations of extensive methane release from the seafloor into the ocean and atmosphere cause concern as to whether increasing air temperatures across the Arctic are causing rapid melting of natural methane hydrates. Other studies, however, indicate that methane flares released in the Arctic today were created by processes that began way back in time – during the last Ice Age.
Newest research from the Center for Arctic Gas Hydrate, Climate and Environment (CAGE) shows that methane has been leaking in the Arctic for millions of years, independent of warm or cold climate. Methane has been forming in organic carbon rich sediments below the leakage spots off the coast of western Svalbard for a period of about 6 million years (since the late Miocene). According to our models, methane flares occurred at the seafloor for the first time at around 2 million years ago; at the exact time when ice sheets started to expand in the Arctic.
The acceleration of leakage occurred when the ice sheets were big enough to erode and deliver huge amounts of sediments towards the continental slope. Methane leakage was promoted due to formation of natural gas in organic-rich sediments under heavy loads of glacial sediments. Faults and fractures opened within the Earth’s crust as a consequence of growth and decay of the massive ice masses. This brought up the gases from deeper sediments higher up towards the seafloor. These gases then fueled the gas hydrate system off the Svalbard coast for the past 2 million years. It is, to this day, controlling the leakage of methane from the seabed.
So, the methane deposits formed in the late Miocene starting 6 million years ago, and the methane leaks have been going on for two million years through multiple ice ages.

Also published in 2018, the Review of Environment and Resources on Methane and Global Environmental Change concluded that
"Although the clathrate gun hypothesis remains controversial (21), a good understanding of how environmental change affects natural CH4 sources is vital in terms of robustly projecting future fluxes under a changing climate."
Then later:
"Nevertheless, it seems unlikely that catastrophic, widespread dissociation of marine clathrates will be triggered by continued climate warming at contemporary rates (0.2◦C per decade) during the twenty-first century"
They did, however, urge caution about extraction of methane clathrates as a fuel, as this could lead to leaks of methane:
"As discussed previously (Section 4.1), the stability of CH4 clathrate deposits may already be at risk from climate change. Accidental or deliberate disturbance, due to fossil fuel extraction, has the potential for extremely high fugitive CH4 losses to the atmosphere."

Past mass extinction events

At one point it was thought that runaway methane clathrate breakdown might also have acted over longer timescales of tens of thousands of years during the Paleocene–Eocene Thermal Maximum 56 million years ago, and most notably the Permian–Triassic extinction event, when up to 96% of all marine species became extinct, 252 million years ago. It was thought to have caused drastic alteration of the ocean environment (such as ocean acidification and ocean stratification) and of the atmosphere.

However, the pattern of isotope shifts expected to result from a massive release of methane does not match the patterns seen there. First, the isotope shift is too large for this hypothesis, as it would require five times as much methane as is postulated for the PETM. Second, the methane would have to be reburied at an unrealistically high rate to account for the rapid increases in the 13C/12C ratio throughout the early Triassic before it was released again several times.

One of the hypotheses being considered in its place is that the temperature increase of the PETM was due to the roasting of carbonate sediments such as coal beds by volcanism. Potentially this may have released more than 3 trillion tons of carbon.

Effects thousands of years into our future

Although significant effects are effectively ruled out at present, the oceans would continue to warm by several degrees under the "business as usual" scenario. This would lead to the clathrates warming and eventually dissociating, and some of this could contribute to the long tail of CO2, helping to keep CO2 levels in the atmosphere higher for longer as CO2 is gradually removed from the atmosphere by natural processes. David Archer, author of many papers on gas hydrates, put it like this:
On the other hand, the deep ocean could ultimately (after a thousand years or so) warm up by several degrees in a business-as-usual scenario, which would make it warmer than it has been in millions of years. Since it takes millions of years to grow the hydrates, they have had time to grow in response to Earth’s relative cold of the past 10 million years or so. Also, the climate forcing from CO2 release is stronger now than it was millions of years ago when CO2 levels were higher, because of the band saturation effect of CO2 as a greenhouse gas. In short, if there was ever a good time to provoke a hydrate meltdown it would be now. But “now” in a geological sense, over thousands of years in the future, not really “now” in a human sense. The methane hydrates in the ocean, in cahoots with permafrost peats (which never get enough respect), could be a significant multiplier of the long tail of the CO2, but will probably not be a huge player in climate change in the coming century.

Operator (computer programming)

From Wikipedia, the free encyclopedia https://en.wikipedia.org/wiki/Operator_(computer_programmin...