
Infrared

From Wikipedia, the free encyclopedia

A false color image of two people taken in long-wavelength infrared (body-temperature thermal) light.
This infrared space telescope image has (false color) blue, green and red corresponding to 3.4, 4.6, and 12 µm wavelengths, respectively.

Infrared radiation, or simply infrared or IR, is electromagnetic radiation (EMR) with longer wavelengths than those of visible light, and is therefore invisible, although it is sometimes loosely called infrared light. It extends from the nominal red edge of the visible spectrum at 700 nanometers (frequency 430 THz) to 1 millimeter (300 GHz)[1] (although people can see infrared up to at least 1,050 nm in experiments[2][3][4][5]). Most of the thermal radiation emitted by objects near room temperature is infrared. Like all EMR, IR carries radiant energy and behaves both like a wave and like its quantum particle, the photon.

Infrared was discovered in 1800 by astronomer Sir William Herschel, who discovered a type of invisible radiation in the spectrum lower in energy than red light, by means of its effect on a thermometer.[6] Slightly more than half of the total energy from the Sun was eventually found to arrive on Earth in the form of infrared. The balance between absorbed and emitted infrared radiation has a critical effect on Earth's climate.

Infrared radiation is emitted or absorbed by molecules when they change their rotational-vibrational movements. It excites vibrational modes in a molecule through a change in the dipole moment, making it a useful frequency range for study of these energy states for molecules of the proper symmetry. Infrared spectroscopy examines absorption and transmission of photons in the infrared range.[7]

Infrared radiation is used in industrial, scientific, and medical applications. Night-vision devices using active near-infrared illumination allow people or animals to be observed without the observer being detected. Infrared astronomy uses sensor-equipped telescopes to penetrate dusty regions of space such as molecular clouds, detect objects such as planets, and to view highly red-shifted objects from the early days of the universe.[8] Infrared thermal-imaging cameras are used to detect heat loss in insulated systems, to observe changing blood flow in the skin, and to detect overheating of electrical apparatus.

Thermal-infrared imaging is used extensively for military and civilian purposes. Military applications include target acquisition, surveillance, night vision, homing, and tracking. Humans at normal body temperature radiate chiefly at wavelengths around 10 μm (micrometers). Non-military uses include thermal efficiency analysis, environmental monitoring, industrial facility inspections, remote temperature sensing, short-ranged wireless communication, spectroscopy, and weather forecasting.

Definition and relationship to the electromagnetic spectrum

Infrared radiation extends from the nominal red edge of the visible spectrum at 700 nanometers (nm) to 1 mm. This range of wavelengths corresponds to a frequency range of approximately 430 THz down to 300 GHz. Below infrared is the microwave portion of the electromagnetic spectrum.
 
Infrared in relation to electromagnetic spectrum
Light comparison[9]
Name         Wavelength          Frequency           Photon energy
Gamma ray    less than 0.01 nm   more than 30 EHz    124 keV – 300+ GeV
X-ray        0.01 nm – 10 nm     30 EHz – 30 PHz     124 eV – 124 keV
Ultraviolet  10 nm – 400 nm      30 PHz – 790 THz    3.3 eV – 124 eV
Visible      400 nm – 700 nm     790 THz – 430 THz   1.7 eV – 3.3 eV
Infrared     700 nm – 1 mm       430 THz – 300 GHz   1.24 meV – 1.7 eV
Microwave    1 mm – 1 m          300 GHz – 300 MHz   1.24 µeV – 1.24 meV
Radio        1 m – 100,000 km    300 MHz – 3 Hz      12.4 feV – 1.24 µeV
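The wavelength, frequency, and photon-energy columns in the table above are linked by ν = c/λ and E = hν. A minimal sketch of the conversion (constants rounded):

```python
# Convert a wavelength to frequency and photon energy, as in the table above.
C = 2.998e8        # speed of light, m/s
H = 6.626e-34      # Planck constant, J*s
EV = 1.602e-19     # joules per electronvolt

def photon(wavelength_m):
    """Return (frequency in Hz, photon energy in eV) for a given wavelength."""
    freq = C / wavelength_m
    energy_ev = H * freq / EV
    return freq, energy_ev

# Red edge of the visible spectrum, 700 nm:
freq, ev = photon(700e-9)
print(f"{freq/1e12:.0f} THz, {ev:.2f} eV")   # ~428 THz, ~1.77 eV
```

This reproduces the rounded boundary values in the table (430 THz, 1.7 eV at 700 nm).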

Natural infrared

Sunlight, at an effective temperature of 5,780 kelvins, is composed of near-thermal-spectrum radiation that is slightly more than half infrared. At zenith, sunlight provides an irradiance of just over 1 kilowatt per square meter at sea level. Of this energy, 527 watts is infrared radiation, 445 watts is visible light, and 32 watts is ultraviolet radiation.[10] Nearly all the infrared radiation in sunlight is near infrared, shorter than 4 micrometers.

On the surface of Earth, at far lower temperatures than the surface of the Sun, almost all thermal radiation consists of infrared in the mid-infrared region, at much longer wavelengths than in sunlight. Of these natural thermal radiation processes, only lightning and natural fires are hot enough to produce much visible energy, and fires produce far more infrared than visible-light energy.

Regions within the infrared

In general, objects emit infrared radiation across a spectrum of wavelengths, but sometimes only a limited region of the spectrum is of interest because sensors usually collect radiation only within a specific bandwidth. Thermal infrared radiation also has a maximum emission wavelength, which is inversely proportional to the absolute temperature of the object, in accordance with Wien's displacement law.
Therefore, the infrared band is often subdivided into smaller sections.
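Wien's displacement law mentioned above can be written λ_max = b/T, with b ≈ 2.898×10⁻³ m·K. A minimal sketch:

```python
# Wien's displacement law: the peak emission wavelength of a black body
# is inversely proportional to its absolute temperature.
B = 2.898e-3   # Wien's displacement constant, m*K

def peak_wavelength_um(temp_k):
    """Wavelength (micrometers) at which a black body at temp_k emits most strongly."""
    return B / temp_k * 1e6

print(f"{peak_wavelength_um(310):.1f} um")   # human body (~310 K): 9.3 um
print(f"{peak_wavelength_um(5780):.2f} um")  # the Sun's photosphere: 0.50 um
```

The human-body result matches the ~10 µm figure quoted earlier; the solar result lands in the visible, as expected.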

Commonly used sub-division scheme

Each division is listed with its abbreviation, wavelength, frequency, photon energy, and black-body temperature range†, followed by its characteristics.

Near-infrared (NIR, IR-A DIN): 0.75–1.4 µm; 214–400 THz; 886–1653 meV; 3,864–2,070 K (3,591–1,797 °C).
Defined by the water absorption, and commonly used in fiber-optic telecommunication because of low attenuation losses in the SiO2 glass (silica) medium. Image intensifiers are sensitive to this area of the spectrum; examples include night-vision devices such as night-vision goggles.

Short-wavelength infrared (SWIR, IR-B DIN): 1.4–3 µm; 100–214 THz; 413–886 meV; 2,070–966 K (1,797–693 °C).
Water absorption increases significantly at 1,450 nm. The 1,530–1,560 nm range is the dominant spectral region for long-distance telecommunications.

Mid-wavelength infrared (MWIR, IR-C DIN; MidIR;[12] also called intermediate infrared, IIR): 3–8 µm; 37–100 THz; 155–413 meV; 966–362 K (693–89 °C).
In guided-missile technology the 3–5 µm portion of this band is the atmospheric window in which the homing heads of passive IR "heat-seeking" missiles are designed to work, homing on to the infrared signature of the target aircraft, typically the jet-engine exhaust plume. This region is also known as thermal infrared.

Long-wavelength infrared (LWIR, IR-C DIN): 8–15 µm; 20–37 THz; 83–155 meV; 362–193 K (89 – −80 °C).
The "thermal imaging" region, in which sensors can obtain a completely passive image of objects only slightly warmer than room temperature (for example, the human body) based on thermal emissions alone, requiring no illumination such as the sun, moon, or an infrared illuminator. This region is also called the "thermal infrared".

Far-infrared (FIR): 15–1,000 µm; 0.3–20 THz; 1.2–83 meV; 193–3 K (−80.15 – −270.15 °C). (See also far-infrared laser and far infrared.)

† Temperatures of black bodies for which spectral peaks fall at the given wavelengths, according to Wien's displacement law.[13]
A comparison of a thermal image (top) and an ordinary photograph (bottom) shows that a trash bag is transparent but glass (the man's spectacles) is opaque in long-wavelength infrared.

NIR and SWIR are sometimes called "reflected infrared", whereas MWIR and LWIR are sometimes referred to as "thermal infrared". Due to the nature of the blackbody radiation curves, typical "hot" objects, such as exhaust pipes, often appear brighter in the MWIR than the same object viewed in the LWIR.

CIE division scheme

The International Commission on Illumination (CIE) recommended the division of infrared radiation into the following three bands:[14]

Abbreviation Wavelength Frequency
IR-A 700 nm – 1400 nm (0.7 µm – 1.4 µm) 215 THz – 430 THz
IR-B 1400 nm – 3000 nm (1.4 µm – 3 µm) 100 THz – 215 THz
IR-C 3000 nm – 1 mm (3 µm – 1000 µm) 300 GHz – 100 THz

ISO 20473 scheme

ISO 20473 specifies the following scheme:[15]
 
Designation Abbreviation Wavelength
Near-Infrared NIR 0.78–3 µm
Mid-Infrared MIR 3–50 µm
Far-Infrared FIR 50–1000 µm

Astronomy division scheme

Astronomers typically divide the infrared spectrum as follows:[16]
 
Designation Abbreviation Wavelength
Near-Infrared NIR (0.7–1) to 2.5 µm
Mid-Infrared MIR 2.5 to (25–40) µm
Far-Infrared FIR (25–40) to (200–350) µm

These divisions are not precise and can vary depending on the publication. The three regions are used for observation of different temperature ranges, and hence different environments in space.

The most common photometric system used in astronomy allocates capital letters to different spectral regions according to filters used; I, J, H, and K cover the near-infrared wavelengths; L, M, N, and Q refer to the mid-infrared region. These letters are commonly understood in reference to atmospheric windows and appear, for instance, in the titles of many papers.

Sensor response division scheme

Plot of atmospheric transmittance in part of the infrared region.
A third scheme divides up the band based on the response of various detectors:[17]
  • Near-infrared: from 0.7 to 1.0 µm (from the approximate end of the response of the human eye to that of silicon).
  • Short-wave infrared: 1.0 to 3 µm (from the cut-off of silicon to that of the MWIR atmospheric window). InGaAs covers to about 1.8 µm; the less sensitive lead salts cover this region.
  • Mid-wave infrared: 3 to 5 µm (defined by the atmospheric window and covered by Indium antimonide [InSb] and HgCdTe and partially by lead selenide [PbSe]).
  • Long-wave infrared: 8 to 12, or 7 to 14 µm (this is the atmospheric window covered by HgCdTe and microbolometers).
  • Very-long wave infrared (VLWIR) (12 to about 30 µm, covered by doped silicon).
Near-infrared is the region closest in wavelength to the radiation detectable by the human eye; mid- and far-infrared are progressively further from the visible spectrum. Other definitions follow different physical mechanisms (emission peaks vs. absorption bands, water absorption), and the newest follow technical reasons (the common silicon detectors are sensitive to about 1,050 nm, while InGaAs's sensitivity starts around 950 nm and ends between 1,700 and 2,600 nm, depending on the specific configuration). No international standards for these specifications are currently available.

The onset of infrared is defined (according to different standards) at various values, typically between 700 nm and 800 nm, but the boundary between visible and infrared light is not precisely defined. The human eye is markedly less sensitive to light above 700 nm wavelength, so longer wavelengths make insignificant contributions to scenes illuminated by common light sources. However, particularly intense near-IR light (e.g., from IR lasers, IR LED sources, or from bright daylight with the visible light removed by colored gels) can be detected up to approximately 780 nm, and will be perceived as red light. Intense light sources providing wavelengths as long as 1,050 nm can be seen as a dull red glow, causing some difficulty in near-IR illumination of scenes in the dark (usually this practical problem is solved by indirect illumination). Leaves are particularly bright in the near IR, and if all leaks of visible light from around an IR filter are blocked, and the eye is given a moment to adjust to the extremely dim image coming through a visually opaque IR-passing photographic filter, it is possible to see the Wood effect of IR-glowing foliage.[18]

Telecommunication bands in the infrared

In optical communications, the part of the infrared spectrum that is used is divided into seven bands based on the availability of light sources, transmitting/absorbing materials (fibers), and detectors:[19]
 
Band Descriptor Wavelength range
O band Original 1260–1360 nm
E band Extended 1360–1460 nm
S band Short wavelength 1460–1530 nm
C band Conventional 1530–1565 nm
L band Long wavelength 1565–1625 nm
U band Ultralong wavelength 1625–1675 nm
The C-band is the dominant band for long-distance telecommunication networks. The S and L bands are based on less well established technology, and are not as widely deployed.

Heat

Materials with higher emissivity appear to be hotter. In this thermal image, the ceramic cylinder appears to be hotter than its cubic container (made of silicon carbide), while in fact they have the same temperature.

Infrared radiation is popularly known as "heat radiation"[citation needed], but light and electromagnetic waves of any frequency will heat surfaces that absorb them. Infrared light from the Sun accounts for 49%[20] of the heating of Earth, with the rest being caused by visible light that is absorbed and then re-radiated at longer wavelengths. Visible-light or ultraviolet-emitting lasers can char paper, and incandescently hot objects emit visible radiation. Objects at room temperature will emit radiation concentrated mostly in the 8 to 25 µm band, but this is not distinct from the emission of visible light by incandescent objects and ultraviolet by even hotter objects (see black body and Wien's displacement law).[21]

Heat is energy in transit that flows due to temperature difference. Unlike heat transmitted by thermal conduction or thermal convection, thermal radiation can propagate through a vacuum. Thermal radiation is characterized by a particular spectrum of many wavelengths that is associated with emission from an object, due to the vibration of its molecules at a given temperature. Thermal radiation can be emitted from objects at any wavelength, and at very high temperatures such radiations are associated with spectra far above the infrared, extending into visible, ultraviolet, and even X-ray regions (e.g. the solar corona). Thus, the popular association of infrared radiation with thermal radiation is only a coincidence based on typical (comparatively low) temperatures often found near the surface of planet Earth.

The concept of emissivity is important in understanding the infrared emissions of objects. Emissivity is a property of a surface that describes how its thermal emissions deviate from those of an ideal black body. Two objects at the same physical temperature will not show the same infrared image if they have differing emissivities: at any given emissivity setting on the instrument, objects with higher emissivity appear hotter, and those with lower emissivity appear cooler. For that reason, incorrect selection of emissivity will give inaccurate results when using infrared cameras and pyrometers.
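The effect of a wrong emissivity setting can be sketched with a simple gray-body model (neglecting reflected background radiation, a deliberate simplification):

```python
# Sketch of why a wrong emissivity setting skews a radiometric reading.
# A camera infers temperature from radiance W = eps * sigma * T**4
# (gray-body model; reflected background radiation is ignored here).
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def apparent_temp_k(true_temp_k, true_eps, camera_eps):
    """Temperature the camera reports when it assumes camera_eps
    but the surface actually has emissivity true_eps."""
    radiance = true_eps * SIGMA * true_temp_k ** 4
    return (radiance / (camera_eps * SIGMA)) ** 0.25

# Two surfaces, both really at 300 K, camera set to eps = 0.95:
print(round(apparent_temp_k(300, 0.95, 0.95), 1))  # matched setting: 300.0 K
print(round(apparent_temp_k(300, 0.60, 0.95), 1))  # shiny surface reads ~267 K
```

A low-emissivity (shiny) surface emits less radiance, so the camera underestimates its temperature, exactly the effect described for the ceramic cylinder and its container.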

Applications

Night vision

Active-infrared night vision: the camera illuminates the scene at infrared wavelengths invisible to the human eye. Despite a dark back-lit scene, active-infrared night vision delivers identifying details, as seen on the display monitor.

Infrared is used in night vision equipment when there is insufficient visible light to see.[22] Night vision devices operate through a process involving the conversion of ambient light photons into electrons that are then amplified by a chemical and electrical process and then converted back into visible light.[22] Infrared light sources can be used to augment the available ambient light for conversion by night vision devices, increasing in-the-dark visibility without actually using a visible light source.[22]

The use of infrared light and night vision devices should not be confused with thermal imaging, which creates images based on differences in surface temperature by detecting infrared radiation (heat) that emanates from objects and their surrounding environment.[23]

Thermography

Thermography helped to determine the temperature profile of the Space Shuttle thermal protection system during re-entry.

Infrared radiation can be used to remotely determine the temperature of objects (if the emissivity is known). This is termed thermography; in the case of very hot objects radiating in the NIR or visible, it is termed pyrometry. Thermography (thermal imaging) is mainly used in military and industrial applications, but the technology is reaching the public market in the form of infrared cameras on cars, thanks to massively reduced production costs.

Thermographic cameras detect radiation in the infrared range of the electromagnetic spectrum (roughly 900–14,000 nanometers or 0.9–14 μm) and produce images of that radiation. Since infrared radiation is emitted by all objects based on their temperatures, according to the black body radiation law, thermography makes it possible to "see" one's environment with or without visible illumination. The amount of radiation emitted by an object increases with temperature, therefore thermography allows one to see variations in temperature (hence the name).

Hyperspectral imaging

Hyperspectral thermal infrared emission measurement, an outdoor scan in winter conditions, ambient temperature −15 °C, image produced with a Specim LWIR hyperspectral imager. Relative radiance spectra from various targets in the image are shown with arrows. The infrared spectra of the different objects such as the watch clasp have clearly distinctive characteristics. The contrast level indicates the temperature of the object.[24]
Infrared light from the LED of a remote control as recorded by a digital camera.

A hyperspectral image is a "picture" containing a continuous spectrum through a wide spectral range at each pixel. Hyperspectral imaging is gaining importance in the field of applied spectroscopy, particularly in the NIR, SWIR, MWIR, and LWIR spectral regions. Typical applications include biological, mineralogical, defence, and industrial measurements.

Thermal infrared hyperspectral imaging can be similarly performed using a thermographic camera, with the fundamental difference that each pixel contains a full LWIR spectrum. Consequently, chemical identification of the object can be performed without a need for an external light source such as the sun or the moon. Such cameras are typically applied for geological measurements, outdoor surveillance, and UAV applications.[25]

Other imaging

In infrared photography, infrared filters are used to capture the near-infrared spectrum. Digital cameras often use infrared blockers. Cheaper digital cameras and camera phones have less effective filters and can "see" intense near-infrared, appearing as a bright purple-white color. This is especially pronounced when taking pictures of subjects near IR-bright areas (such as near a lamp), where the resulting infrared interference can wash out the image. There is also a technique called 'T-ray' imaging, which is imaging using far-infrared or terahertz radiation. Lack of bright sources can make terahertz photography more challenging than most other infrared imaging techniques. Recently T-ray imaging has been of considerable interest due to a number of new developments such as terahertz time-domain spectroscopy.
Reflected light photograph in various infrared spectra to illustrate the appearance as the wavelength of light changes.

Tracking

Infrared tracking, also known as infrared homing, refers to a passive missile guidance system, which uses the emission from a target of electromagnetic radiation in the infrared part of the spectrum to track it. Missiles that use infrared seeking are often referred to as "heat-seekers", since infrared (IR) is just below the visible spectrum of light in frequency and is radiated strongly by hot bodies. Many objects such as people, vehicle engines, and aircraft generate and retain heat, and as such, are especially visible in the infrared wavelengths of light compared to objects in the background.[26]

Heating

Infrared radiation can be used as a deliberate heating source. For example, it is used in infrared saunas to heat the occupants. It may also be used in other heating applications, such as to remove ice from the wings of aircraft (de-icing).[27] Infrared can be used in cooking and heating food as it predominantly heats the opaque, absorbent objects, rather than the air around them.

Infrared heating is also becoming more popular in industrial manufacturing processes, e.g. curing of coatings, forming of plastics, annealing, plastic welding, and print drying. In these applications, infrared heaters replace convection ovens and contact heating.

Efficiency is achieved by matching the wavelength of the infrared heater to the absorption characteristics of the material.

Communications

IR data transmission is also employed in short-range communication among computer peripherals and personal digital assistants. These devices usually conform to standards published by IrDA, the Infrared Data Association. Remote controls and IrDA devices use infrared light-emitting diodes (LEDs) to emit infrared radiation that is focused by a plastic lens into a narrow beam. The beam is modulated, i.e. switched on and off, to prevent interference from other sources of infrared (like sunlight or artificial lighting). The receiver uses a silicon photodiode to convert the infrared radiation to an electric current. It responds only to the rapidly pulsing signal created by the transmitter, and filters out slowly changing infrared radiation from ambient light. Infrared communications are useful for indoor use in areas of high population density: IR does not penetrate walls, and so does not interfere with other devices in adjoining rooms. Infrared is the most common way for remote controls to command appliances, using protocols such as RC-5 and SIRC to encode the commands.
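The modulation idea can be illustrated as bursts of a carrier separated by gaps (this is a hypothetical sketch, not a real RC-5 or IrDA implementation; the 38 kHz carrier and the pulse-distance timings below are common conventions chosen here as assumptions):

```python
# Illustrative sketch of on/off-keyed IR remote modulation.
# Assumed values: a 38 kHz carrier and NEC-style pulse-distance timings.
CARRIER_HZ = 38_000
MARK_US = 560            # carrier burst length, microseconds
SPACE0_US = 560          # gap after a burst encoding a 0
SPACE1_US = 1_690        # longer gap encoding a 1

def encode(bits):
    """Return a list of (carrier_on, duration_us) pairs for a bit string."""
    timeline = []
    for b in bits:
        timeline.append((True, MARK_US))   # burst of modulated carrier
        timeline.append((False, SPACE1_US if b == "1" else SPACE0_US))
    return timeline

print(encode("10"))
# Each burst contains MARK_US * CARRIER_HZ / 1e6 carrier cycles:
print(MARK_US * CARRIER_HZ / 1e6)  # ~21 cycles per burst
```

The receiver's band-pass behavior around the carrier frequency is what lets it reject slowly varying ambient infrared, as described above.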

Free-space optical communication using infrared lasers can be a relatively inexpensive way to install a communications link in an urban area operating at up to 4 gigabit/s, compared with the cost of burying fiber-optic cable. A drawback is the potential for eye damage from the beam: "Since the eye cannot detect IR, blinking or closing the eyes to help prevent or reduce damage may not happen."[28]

Infrared lasers are used to provide the light for optical-fiber communications systems. Infrared light with a wavelength around 1,330 nm (least dispersion) or 1,550 nm (best transmission) is the best choice for standard silica fibers.

IR data transmission of encoded audio versions of printed signs is being researched as an aid for visually impaired people through the RIAS (Remote Infrared Audible Signage) project. Transmitting IR data from one device to another is sometimes referred to as beaming.

Spectroscopy

Infrared vibrational spectroscopy (see also near-infrared spectroscopy) is a technique that can be used to identify molecules by analysis of their constituent bonds. Each chemical bond in a molecule vibrates at a frequency characteristic of that bond. A group of atoms in a molecule (e.g., CH2) may have multiple modes of oscillation caused by the stretching and bending motions of the group as a whole. If an oscillation leads to a change in dipole in the molecule then it will absorb a photon that has the same frequency. The vibrational frequencies of most molecules correspond to the frequencies of infrared light. Typically, the technique is used to study organic compounds using light radiation from 4000–400 cm−1, the mid-infrared. A spectrum of all the frequencies of absorption in a sample is recorded. This can be used to gain information about the sample composition in terms of chemical groups present and also its purity (for example, a wet sample will show a broad O-H absorption around 3200 cm−1).
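The wavenumber unit used above (cm⁻¹) is simply the reciprocal of the wavelength in centimeters, so the quoted 4000–400 cm⁻¹ range maps directly onto mid-infrared wavelengths:

```python
# Wavenumbers (cm^-1), the customary unit in IR spectroscopy, are the
# reciprocal of the wavelength in centimeters.
def wavenumber_to_um(cm_inv):
    """Convert a wavenumber in cm^-1 to a wavelength in micrometers."""
    return 1e4 / cm_inv   # 1 cm = 1e4 um

# The mid-infrared range quoted above, 4000-400 cm^-1:
print(wavenumber_to_um(4000))  # 2.5 um
print(wavenumber_to_um(400))   # 25.0 um
print(wavenumber_to_um(3200))  # 3.125 um, the broad O-H stretch region
```

So the "mid-infrared" of the spectroscopist (2.5–25 µm) sits squarely inside the MWIR/LWIR bands described earlier.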

Thin film metrology

In the semiconductor industry, infrared light can be used to characterize materials such as thin films and periodic trench structures. By measuring the reflectance of light from the surface of a semiconductor wafer, the index of refraction (n) and the extinction coefficient (k) can be determined via the Forouhi-Bloomer dispersion equations. The reflectance of the infrared light can also be used to determine the critical dimension, depth, and sidewall angle of high-aspect-ratio trench structures.

Meteorology

IR satellite picture taken at 1315 Z on 15 October 2006. A frontal system can be seen in the Gulf of Mexico with embedded cumulonimbus cloud. Shallower cumulus and stratocumulus can be seen off the Eastern Seaboard.

Weather satellites equipped with scanning radiometers produce thermal or infrared images, which can then enable a trained analyst to determine cloud heights and types, to calculate land and surface water temperatures, and to locate ocean surface features. The scanning is typically in the range 10.3–12.5 µm (IR4 and IR5 channels).

High, cold ice clouds such as cirrus or cumulonimbus show up bright white; lower, warmer clouds such as stratus or stratocumulus show up as grey, with intermediate clouds shaded accordingly. Hot land surfaces show up as dark grey or black. One disadvantage of infrared imagery is that low cloud such as stratus or fog can be at a similar temperature to the surrounding land or sea surface and so does not show up. However, using the difference in brightness of the IR4 channel (10.3–11.5 µm) and the near-infrared channel (1.58–1.64 µm), low cloud can be distinguished, producing a fog satellite picture. The main advantage of infrared is that images can be produced at night, allowing a continuous sequence of weather to be studied.

These infrared pictures can depict ocean eddies or vortices and map currents such as the Gulf Stream, which are valuable to the shipping industry. Fishermen and farmers are interested in knowing land and water temperatures to protect their crops against frost or increase their catch from the sea. Even El Niño phenomena can be spotted. Using color-digitized techniques, the gray-shaded thermal images can be converted to color for easier identification of desired information.

The main water vapour channel at 6.40 to 7.08 µm can be imaged by some weather satellites and shows the amount of moisture in the atmosphere.

Climatology

In the field of climatology, atmospheric infrared radiation is monitored to detect trends in the energy exchange between the earth and the atmosphere. These trends provide information on long-term changes in Earth's climate. It is one of the primary parameters studied in research into global warming, together with solar radiation.
Schematic of the greenhouse effect

A pyrgeometer is utilized in this field of research to perform continuous outdoor measurements. This is a broadband infrared radiometer with sensitivity for infrared radiation between approximately 4.5 µm and 50 µm.

Astronomy

Beta Pictoris with its planet Beta Pictoris b, the light-blue dot off-center, as seen in infrared. It combines two images, the inner disc is at 3.6 µm.

Astronomers observe objects in the infrared portion of the electromagnetic spectrum using optical components, including mirrors, lenses and solid state digital detectors. For this reason it is classified as part of optical astronomy. To form an image, the components of an infrared telescope need to be carefully shielded from heat sources, and the detectors are chilled using liquid helium.

The sensitivity of Earth-based infrared telescopes is significantly limited by water vapor in the atmosphere, which absorbs a portion of the infrared radiation arriving from space outside of selected atmospheric windows. This limitation can be partially alleviated by placing the telescope observatory at a high altitude, or by carrying the telescope aloft with a balloon or an aircraft. Space telescopes do not suffer from this handicap, and so outer space is considered the ideal location for infrared astronomy.

The infrared portion of the spectrum has several useful benefits for astronomers. Cold, dark molecular clouds of gas and dust in our galaxy will glow with radiated heat as they are irradiated by embedded stars. Infrared can also be used to detect protostars before they begin to emit visible light. Stars emit a smaller portion of their energy in the infrared spectrum, so nearby cool objects such as planets can be more readily detected. (In the visible light spectrum, the glare from the star will drown out the reflected light from a planet.)

Infrared light is also useful for observing the cores of active galaxies, which are often cloaked in gas and dust. Distant galaxies with a high redshift will have the peak portion of their spectrum shifted toward longer wavelengths, so they are more readily observed in the infrared.[8]

Infrared cleaning

Infrared cleaning is a technique used by some motion picture film scanners, film scanners and flatbed scanners to reduce or remove the effect of dust and scratches upon the finished scan. It works by collecting an additional infrared channel from the scan at the same position and resolution as the three visible color channels (red, green, and blue). The infrared channel, in combination with the other channels, is used to detect the location of scratches and dust. Once located, those defects can be corrected by scaling or replaced by inpainting.[29]
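The detection-and-repair idea can be sketched on toy data; the threshold and the neighbor-averaging fill below are illustrative choices, not the algorithm of any particular scanner:

```python
# Toy sketch of infrared cleaning: dust and scratches block the IR channel
# (film dyes are largely transparent to IR), so unusually dark IR pixels
# mark defects; flagged pixels are then repaired from their neighbors.
def clean(visible, ir, threshold=128):
    """Replace pixels whose IR value falls below threshold with the mean
    of their defect-free horizontal neighbors (rows only, for simplicity)."""
    out = []
    for vis_row, ir_row in zip(visible, ir):
        row = list(vis_row)
        for x, ir_val in enumerate(ir_row):
            if ir_val < threshold:  # dark in IR => dust or scratch
                good = [row[i] for i in (x - 1, x + 1)
                        if 0 <= i < len(row) and ir_row[i] >= threshold]
                if good:
                    row[x] = sum(good) // len(good)
        out.append(row)
    return out

visible = [[200, 10, 210]]   # middle pixel corrupted by a dust speck
ir      = [[255, 40, 255]]   # dust shows as a dark spot in the IR channel
print(clean(visible, ir))    # [[200, 205, 210]]
```

Real implementations operate on full 2-D neighborhoods and use inpainting rather than a simple mean, but the principle — use the IR channel as a defect mask — is the same.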

Art conservation and analysis

Infrared reflectography, as called by art conservators,[30] can be applied to paintings to reveal underlying layers in a completely non-destructive manner, in particular the underdrawing or outline drawn by the artist as a guide. This often reveals the artist's use of carbon black, which shows up well in reflectograms, as long as it has not also been used in the ground underlying the whole painting. Art conservators are looking to see whether the visible layers of paint differ from the underdrawing or layers in between – such alterations are called pentimenti when made by the original artist. This is very useful information in deciding whether a painting is the prime version by the original artist or a copy, and whether it has been altered by over-enthusiastic restoration work. In general, the more pentimenti, the more likely a painting is to be the prime version. It also gives useful insights into working practices.[31]

Among many other changes in the Arnolfini Portrait of 1434 (left), the man's face was originally higher by about the height of his eye; the woman's was higher, and her eyes looked more to the front. Each of his feet was underdrawn in one position, painted in another, and then overpainted in a third. These alterations are seen in infrared reflectograms.[32]

Recent progress in the design of infrared-sensitive cameras has made it possible to discover and depict not only underpaintings and pentimenti but entire paintings that were later overpainted by the artist.[33] Notable examples are Picasso's Woman Ironing and The Blue Room, where in both cases a portrait of a man has been made visible under the painting as it is known today.

Similar uses of infrared are made by conservators and scientists on various types of objects, especially very old written documents such as the Dead Sea Scrolls, the Roman works in the Villa of the Papyri, and the Silk Road texts found in the Dunhuang Caves.[34] Carbon black used in ink can show up extremely well.

Biological systems

Thermographic image of a snake eating a mouse

The pit viper has a pair of infrared sensory pits on its head. There is uncertainty regarding the exact thermal sensitivity of this biological infrared detection system.[35][36]

Other organisms that have thermoreceptive organs are pythons (family Pythonidae), some boas (family Boidae), the Common Vampire Bat (Desmodus rotundus), a variety of jewel beetles (Melanophila acuminata),[37] darkly pigmented butterflies (Pachliopta aristolochiae and Troides rhadamantus plateni), and possibly blood-sucking bugs (Triatoma infestans).[38]

Although near-infrared vision (780–1000 nm) has long been deemed impossible due to noise in visual pigments,[39] sensation of near-infrared light was reported in the common carp and in three cichlid species.[39][40][41][42][43] Fish use NIR to capture prey[39] and for phototactic swimming orientation.[43] NIR sensation in fish may be relevant under poor lighting conditions during twilight[39] and in turbid surface waters.[43]

Photobiomodulation

Near-infrared light therapy, also called photobiomodulation, is used to treat chemotherapy-induced oral ulceration and to promote wound healing. There is some work relating to anti-herpes-virus treatment.[44] Research projects include work on healing effects in the central nervous system via cytochrome c oxidase upregulation and other possible mechanisms.[45]

Health hazard

Strong infrared radiation in certain industrial high-heat settings may be hazardous to the eyes, resulting in damage or blindness. Since the radiation is invisible, special IR-proof goggles must be worn in such places.[46]

History of infrared science

The discovery of infrared radiation is ascribed to the astronomer William Herschel, who published his results before the Royal Society of London in 1800. Herschel used a prism to refract sunlight and detected the infrared, beyond the red part of the spectrum, through an increase in the temperature recorded on a thermometer. He was surprised at the result and called the new radiation "calorific rays". The term "infrared" did not appear until late in the 19th century.[47][48]

Wednesday, June 14, 2017

Electron configuration

From Wikipedia, the free encyclopedia

In atomic physics and quantum chemistry, the electron configuration is the distribution of electrons of an atom or molecule (or other physical structure) in atomic or molecular orbitals.[1] For example, the electron configuration of the neon atom is 1s2 2s2 2p6.

Electronic configurations describe each electron as moving independently in an orbital, in an average field created by all other orbitals. Mathematically, configurations are described by Slater determinants or configuration state functions.

According to the laws of quantum mechanics, for systems with only one electron, a level of energy is associated with each electron configuration and, under certain conditions, electrons are able to move from one configuration to another by the emission or absorption of a quantum of energy, in the form of a photon.

Knowledge of the electron configuration of different atoms is useful in understanding the structure of the periodic table of elements. This is also useful for describing the chemical bonds that hold atoms together. In bulk materials, this same idea helps explain the peculiar properties of lasers and semiconductors.

Shells and subshells


[Orbital table: for n = 1 there is only the 1s orbital (ℓ = 0, m = 0); for n = 2 there are the 2s orbital and the three 2p orbitals, pz (m = 0) and px, py (m = ±1).]
Electron configuration was first conceived of under the Bohr model of the atom, and it is still common to speak of shells and subshells despite the advances in understanding of the quantum-mechanical nature of electrons.

An electron shell is the set of allowed states that share the same principal quantum number, n (the number before the letter in the orbital label), that electrons may occupy. An atom's nth electron shell can accommodate 2n² electrons, e.g. the first shell can accommodate 2 electrons, the second shell 8 electrons, and the third shell 18 electrons. The factor of two arises because the allowed states are doubled due to electron spin: each atomic orbital admits up to two otherwise identical electrons with opposite spin, one with a spin +1/2 (usually denoted by an up-arrow) and one with a spin −1/2 (with a down-arrow).

A subshell is the set of states defined by a common azimuthal quantum number, ℓ, within a shell. The values ℓ = 0, 1, 2, 3 correspond to the s, p, d, and f labels, respectively. For example, the 3d subshell has n = 3 and ℓ = 2. The maximum number of electrons that can be placed in a subshell is given by 2(2ℓ+1). This gives two electrons in an s subshell, six electrons in a p subshell, ten electrons in a d subshell and fourteen electrons in an f subshell.

The numbers of electrons that can occupy each shell and each subshell arise from the equations of quantum mechanics,[2] in particular the Pauli exclusion principle, which states that no two electrons in the same atom can have the same values of the four quantum numbers.[3]
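The two capacity formulas above can be checked against each other with a short script (a minimal sketch; the function names are illustrative, not from any standard library):

```python
def shell_capacity(n):
    """Maximum electrons in the nth shell: 2n^2."""
    return 2 * n * n

def subshell_capacity(l):
    """Maximum electrons in a subshell with azimuthal quantum number l: 2(2l+1)."""
    return 2 * (2 * l + 1)

LABELS = "spdf"

for n in (1, 2, 3):
    # Within shell n, the allowed subshells have l = 0, ..., n-1,
    # and their capacities must sum to the shell capacity 2n^2.
    assert sum(subshell_capacity(l) for l in range(n)) == shell_capacity(n)
    parts = ", ".join(f"{LABELS[l]}:{subshell_capacity(l)}" for l in range(n))
    print(f"n={n}: capacity {shell_capacity(n)} ({parts})")
```

Running this prints the familiar capacities 2, 8 and 18 for the first three shells, each decomposed into its s, p and d contributions.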

Notation

Physicists and chemists use a standard notation to indicate the electron configurations of atoms and molecules. For atoms, the notation consists of a sequence of atomic subshell labels (e.g. for phosphorus the sequence 1s, 2s, 2p, 3s, 3p) with the number of electrons assigned to each subshell placed as a superscript. For example, hydrogen has one electron in the s-orbital of the first shell, so its configuration is written 1s1. Lithium has two electrons in the 1s-subshell and one in the (higher-energy) 2s-subshell, so its configuration is written 1s2 2s1 (pronounced "one-s-two, two-s-one"). The configuration of phosphorus (atomic number 15) is 1s2 2s2 2p6 3s2 3p3.
For atoms with many electrons, this notation can become lengthy and so an abbreviated notation is used. The electron configuration can be visualized as the core electrons, equivalent to the noble gas of the preceding period, and the valence electrons: each element in a period differs only by the last few subshells. Phosphorus, for instance, is in the third period. It differs from the second-period neon, whose configuration is 1s2 2s2 2p6, only by the presence of a third shell. The portion of its configuration that is equivalent to neon is abbreviated as [Ne], allowing the configuration of phosphorus to be written as [Ne] 3s2 3p3 rather than writing out the details of the configuration of neon explicitly. This convention is useful as it is the electrons in the outermost shell that most determine the chemistry of the element.
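The abbreviation rule can be sketched in a few lines of Python (NOBLE_CORES and abbreviate are hypothetical names used only for illustration):

```python
# Configurations of the first few noble-gas cores, written in the
# flattened superscript style used in this article.
NOBLE_CORES = {
    "He": "1s2",
    "Ne": "1s2 2s2 2p6",
    "Ar": "1s2 2s2 2p6 3s2 3p6",
}

def abbreviate(config):
    """Replace the longest matching noble-gas core with its [X] symbol."""
    # Try the longest cores first so phosphorus matches [Ne], not [He].
    for symbol, core in sorted(NOBLE_CORES.items(), key=lambda kv: -len(kv[1])):
        if config.startswith(core + " "):
            return f"[{symbol}] " + config[len(core) + 1:]
    return config

print(abbreviate("1s2 2s2 2p6 3s2 3p3"))  # phosphorus -> [Ne] 3s2 3p3
```

For phosphorus, the neon core 1s2 2s2 2p6 is matched and replaced, leaving the [Ne] 3s2 3p3 form used in the text.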

For a given configuration, the order of writing the orbitals is not completely fixed since only the orbital occupancies have physical significance. For example, the electron configuration of the titanium ground state can be written as either [Ar] 4s2 3d2 or [Ar] 3d2 4s2. The first notation follows the order based on the Madelung rule for the configurations of neutral atoms; 4s is filled before 3d in the sequence Ar, K, Ca, Sc, Ti. The second notation groups all orbitals with the same value of n together, corresponding to the "spectroscopic" order of orbital energies that is the reverse of the order in which electrons are removed from a given atom to form positive ions; 3d is filled before 4s in the sequence Ti4+, Ti3+, Ti2+, Ti+, Ti.

The superscript 1 for a singly occupied subshell is not compulsory; for example aluminium may be written as either [Ne] 3s2 3p1 or [Ne] 3s2 3p. It is quite common to see the letters of the orbital labels (s, p, d, f) written in an italic or slanting typeface, although the International Union of Pure and Applied Chemistry (IUPAC) recommends a normal typeface (as used here). The choice of letters originates from a now-obsolete system of categorizing spectral lines as "sharp", "principal", "diffuse" and "fundamental" (or "fine"), based on their observed fine structure: their modern usage indicates orbitals with an azimuthal quantum number, ℓ, of 0, 1, 2 or 3 respectively. After "f", the sequence continues alphabetically "g", "h", "i"... (ℓ = 4, 5, 6...), skipping "j", although orbitals of these types are rarely required.[4][5]

The electron configurations of molecules are written in a similar way, except that molecular orbital labels are used instead of atomic orbital labels (see below).

Energy — ground state and excited states

The energy associated to an electron is that of its orbital. The energy of a configuration is often approximated as the sum of the energy of each electron, neglecting the electron-electron interactions. The configuration that corresponds to the lowest electronic energy is called the ground state. Any other configuration is an excited state.

As an example, the ground state configuration of the sodium atom is 1s2 2s2 2p6 3s1, as deduced from the Aufbau principle (see below). The first excited state is obtained by promoting a 3s electron to the 3p orbital, to obtain the 1s2 2s2 2p6 3p configuration, abbreviated as the 3p level. Atoms can move from one configuration to another by absorbing or emitting energy. In a sodium-vapor lamp for example, sodium atoms are excited to the 3p level by an electrical discharge, and return to the ground state by emitting yellow light of wavelength 589 nm.

Usually, the excitation of valence electrons (such as 3s for sodium) involves energies corresponding to photons of visible or ultraviolet light. The excitation of core electrons is possible, but requires much higher energies, generally corresponding to x-ray photons. This would be the case, for example, to excite a 2p electron of sodium to the 3s level and form the excited 1s2 2s2 2p5 3s2 configuration.
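As a rough check on these energy scales, the photon energy E = hc/λ can be computed directly (a small sketch using CODATA constants; the function name is illustrative):

```python
H = 6.62607015e-34      # Planck constant, J s
C = 2.99792458e8        # speed of light, m/s
EV = 1.602176634e-19    # joules per electronvolt

def photon_energy_ev(wavelength_nm):
    """Photon energy in eV for a given wavelength in nanometres: E = hc / lambda."""
    return H * C / (wavelength_nm * 1e-9) / EV

print(f"589 nm (sodium D line): {photon_energy_ev(589):.2f} eV")  # about 2.1 eV
print(f"1 nm (soft x-ray):      {photon_energy_ev(1):.0f} eV")    # about 1240 eV
```

The yellow 589 nm emission carries roughly 2 eV, a typical valence-excitation energy, while a 1 nm x-ray photon carries over a thousand electronvolts, the scale needed to disturb core electrons.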

The remainder of this article deals only with the ground-state configuration, often referred to as "the" configuration of an atom or molecule.

History

Niels Bohr (1923) was the first to propose that the periodicity in the properties of the elements might be explained by the electronic structure of the atom.[6] His proposals were based on the then current Bohr model of the atom, in which the electron shells were orbits at a fixed distance from the nucleus. Bohr's original configurations would seem strange to a present-day chemist: sulfur was given as 2.4.4.6 instead of 1s2 2s2 2p6 3s2 3p4 (2.8.6).

The following year, E. C. Stoner incorporated Sommerfeld's third quantum number into the description of electron shells, and correctly predicted the shell structure of sulfur to be 2.8.6.[7] However neither Bohr's system nor Stoner's could correctly describe the changes in atomic spectra in a magnetic field (the Zeeman effect).

Bohr was well aware of this shortcoming (and others), and had written to his friend Wolfgang Pauli to ask for his help in saving quantum theory (the system now known as "old quantum theory"). Pauli realized that the Zeeman effect must be due only to the outermost electrons of the atom, and was able to reproduce Stoner's shell structure, but with the correct structure of subshells, by his inclusion of a fourth quantum number and his exclusion principle (1925):[8]
It should be forbidden for more than one electron with the same value of the main quantum number n to have the same value for the other three quantum numbers k [l], j [ml] and m [ms].
The Schrödinger equation, published in 1926, gave three of the four quantum numbers as a direct consequence of its solution for the hydrogen atom:[2] this solution yields the atomic orbitals that are shown today in textbooks of chemistry (and above). The examination of atomic spectra allowed the electron configurations of atoms to be determined experimentally, and led to an empirical rule (known as Madelung's rule (1936),[9] see below) for the order in which atomic orbitals are filled with electrons.

Atoms: Aufbau principle and Madelung rule

The Aufbau principle (from the German Aufbau, "building up, construction") was an important part of Bohr's original concept of electron configuration. It may be stated as:[10]
a maximum of two electrons are put into orbitals in the order of increasing orbital energy: the lowest-energy orbitals are filled before electrons are placed in higher-energy orbitals.
The approximate order of filling of atomic orbitals, following the arrows from 1s to 7p. (After 7p the order includes orbitals outside the range of the diagram, starting with 8s.)

The principle works very well (for the ground states of the atoms) for the first 18 elements, then decreasingly well for the following 100 elements. The modern form of the Aufbau principle describes an order of orbital energies given by Madelung's rule (or Klechkowski's rule). This rule was first stated by Charles Janet in 1929, rediscovered by Erwin Madelung in 1936,[9] and later given a theoretical justification by V. M. Klechkowski:[11]
  1. Orbitals are filled in the order of increasing n+l;
  2. Where two orbitals have the same value of n+l, they are filled in order of increasing n.
This gives the following order for filling the orbitals:
1s, 2s, 2p, 3s, 3p, 4s, 3d, 4p, 5s, 4d, 5p, 6s, 4f, 5d, 6p, 7s, 5f, 6d, 7p, (8s, 5g, 6f, 7d, 8p, and 9s)
In this list the orbitals in parentheses are not occupied in the ground state of the heaviest atom now known (Og, Z = 118).
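Madelung's rule amounts to sorting subshells by the key (n + ℓ, n). The sketch below (illustrative function names; it deliberately ignores the exceptions discussed later, such as chromium and copper) generates the filling order and builds a configuration string for a neutral atom:

```python
LETTERS = "spdfghi"  # labels for l = 0, 1, 2, ...

def madelung_order(max_n=7):
    """All subshells up to n = max_n, sorted by (n + l), ties broken by n."""
    subshells = [(n, l) for n in range(1, max_n + 1) for l in range(n)]
    return sorted(subshells, key=lambda nl: (nl[0] + nl[1], nl[0]))

def fill(electrons):
    """Configuration string for a neutral atom, by naive Aufbau filling."""
    parts = []
    for n, l in madelung_order():
        if electrons <= 0:
            break
        e = min(electrons, 2 * (2 * l + 1))  # subshell capacity 2(2l+1)
        parts.append(f"{n}{LETTERS[l]}{e}")
        electrons -= e
    return " ".join(parts)

print(fill(15))  # phosphorus: 1s2 2s2 2p6 3s2 3p3
print(fill(19))  # potassium: ends in 4s1 -- 4s is filled before 3d
```

Note that this naive filler predicts [Ar] 4s2 3d4 for chromium, whereas the observed ground state is [Ar] 4s1 3d5; that is exactly the kind of exception treated in the sections that follow.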

The Aufbau principle can be applied, in a modified form, to the protons and neutrons in the atomic nucleus, as in the shell model of nuclear physics and nuclear chemistry.

Periodic table

Electron configuration table

The form of the periodic table is closely related to the electron configuration of the atoms of the elements. For example, all the elements of group 2 have an electron configuration of [E] ns2 (where [E] is an inert gas configuration), and have notable similarities in their chemical properties. In general, the periodicity of the periodic table in terms of periodic table blocks is clearly due to the number of electrons (2, 6, 10, 14...) needed to fill s, p, d, and f subshells.

The outermost electron shell is often referred to as the "valence shell" and (to a first approximation) determines the chemical properties. It should be remembered that the similarities in the chemical properties were remarked on more than a century before the idea of electron configuration.[12] It is not clear how far Madelung's rule explains (rather than simply describes) the periodic table,[13] although some properties (such as the common +2 oxidation state in the first row of the transition metals) would obviously be different with a different order of orbital filling.

Shortcomings of the Aufbau principle

The Aufbau principle rests on a fundamental postulate that the order of orbital energies is fixed, both for a given element and between different elements; in both cases this is only approximately true. It considers atomic orbitals as "boxes" of fixed energy into which can be placed two electrons and no more. However, the energy of an electron "in" an atomic orbital depends on the energies of all the other electrons of the atom (or ion, or molecule, etc.). There are no "one-electron solutions" for systems of more than one electron, only a set of many-electron solutions that cannot be calculated exactly[14] (although there are mathematical approximations available, such as the Hartree–Fock method).

That the Aufbau principle rests on an approximation can be seen from the fact that there is an almost-fixed filling order at all: within a given shell, the s-orbital is always filled before the p-orbitals. In a hydrogen-like atom, which has only one electron, the s-orbital and the p-orbitals of the same shell have exactly the same energy, to a very good approximation in the absence of external electromagnetic fields. (However, in a real hydrogen atom, the energy levels are slightly split by the magnetic field of the nucleus, and by the quantum electrodynamic effects of the Lamb shift.)

Ionization of the transition metals

The naïve application of the Aufbau principle leads to a well-known paradox (or apparent paradox) in the basic chemistry of the transition metals. Potassium and calcium appear in the periodic table before the transition metals, and have electron configurations [Ar] 4s1 and [Ar] 4s2 respectively, i.e. the 4s-orbital is filled before the 3d-orbital. This is in line with Madelung's rule, as the 4s-orbital has n+l  = 4 (n = 4, l = 0) while the 3d-orbital has n+l  = 5 (n = 3, l = 2). After calcium, most neutral atoms in the first series of transition metals (Sc-Zn) have configurations with two 4s electrons, but there are two exceptions. Chromium and copper have electron configurations [Ar] 3d5 4s1 and [Ar] 3d10 4s1 respectively, i.e. one electron has passed from the 4s-orbital to a 3d-orbital to generate a half-filled or filled subshell. In this case, the usual explanation is that "half-filled or completely filled subshells are particularly stable arrangements of electrons".

The apparent paradox arises when electrons are removed from the transition metal atoms to form ions. The first electrons to be ionized come not from the 3d-orbital, as one would expect if it were "higher in energy", but from the 4s-orbital. This interchange of electrons between 4s and 3d is found for all atoms of the first series of transition metals.[15] The configurations of the neutral atoms (K, Ca, Sc, Ti, V, Cr, ...) usually follow the order 1s, 2s, 2p, 3s, 3p, 4s, 3d, ...; however the successive stages of ionization of a given atom (such as Fe4+, Fe3+, Fe2+, Fe+, Fe) usually follow the order 1s, 2s, 2p, 3s, 3p, 3d, 4s, ...

This phenomenon is only paradoxical if it is assumed that the energy order of atomic orbitals is fixed and unaffected by the nuclear charge or by the presence of electrons in other orbitals. If that were the case, the 3d-orbital would have the same energy as the 3p-orbital, as it does in hydrogen, yet it clearly doesn't. There is no special reason why the Fe2+ ion should have the same electron configuration as the chromium atom, given that iron has two more protons in its nucleus than chromium, and that the chemistry of the two species is very different. Melrose and Eric Scerri have analyzed the changes of orbital energy with orbital occupations in terms of the two-electron repulsion integrals of the Hartree-Fock method of atomic structure calculation.[16] More recently Scerri has argued that contrary to what is stated in the vast majority of sources including the title of his previous article on the subject, 3d orbitals rather than 4s are in fact preferentially occupied.[17]

Similar ion-like 3dx4s0 configurations occur in transition metal complexes as described by the simple crystal field theory, even if the metal has oxidation state 0. For example, chromium hexacarbonyl can be described as a chromium atom (not ion) surrounded by six carbon monoxide ligands. The electron configuration of the central chromium atom is described as 3d6 with the six electrons filling the three lower-energy d orbitals between the ligands. The other two d orbitals are at higher energy due to the crystal field of the ligands. This picture is consistent with the experimental fact that the complex is diamagnetic, meaning that it has no unpaired electrons. However, in a more accurate description using molecular orbital theory, the d-like orbitals occupied by the six electrons are no longer identical with the d orbitals of the free atom.

Other exceptions to Madelung's rule

There are several more exceptions to Madelung's rule among the heavier elements, and as atomic number increases it becomes more and more difficult to find simple explanations such as the stability of half-filled subshells. It is possible to predict most of the exceptions by Hartree–Fock calculations,[18] which are an approximate method for taking account of the effect of the other electrons on orbital energies. For the heavier elements, it is also necessary to take account of the effects of Special Relativity on the energies of the atomic orbitals, as the inner-shell electrons are moving at speeds approaching the speed of light. In general, these relativistic effects[19] tend to decrease the energy of the s-orbitals in relation to the other atomic orbitals.[20] The table below shows the ground state configuration in terms of orbital occupancy, but it does not show the ground state in terms of the sequence of orbital energies as determined spectroscopically. For example, in the transition metals, the 4s orbital is of a higher energy than the 3d orbitals; and in the lanthanides, the 6s is higher than the 4f and 5d. The ground states can be seen in the Electron configurations of the elements (data page).
Electron shells filled in violation of Madelung's rule[21] (configurations that violate the rule are marked with an asterisk):

Period 4:
Scandium (21) [Ar] 4s2 3d1
Titanium (22) [Ar] 4s2 3d2
Vanadium (23) [Ar] 4s2 3d3
Chromium (24) [Ar] 4s1 3d5 *
Manganese (25) [Ar] 4s2 3d5
Iron (26) [Ar] 4s2 3d6
Cobalt (27) [Ar] 4s2 3d7
Nickel (28) [Ar] 4s2 3d8, or [Ar] 4s1 3d9 (disputed)[22]
Copper (29) [Ar] 4s1 3d10 *
Zinc (30) [Ar] 4s2 3d10

Period 5:
Yttrium (39) [Kr] 5s2 4d1
Zirconium (40) [Kr] 5s2 4d2
Niobium (41) [Kr] 5s1 4d4 *
Molybdenum (42) [Kr] 5s1 4d5 *
Technetium (43) [Kr] 5s2 4d5
Ruthenium (44) [Kr] 5s1 4d7 *
Rhodium (45) [Kr] 5s1 4d8 *
Palladium (46) [Kr] 4d10 *
Silver (47) [Kr] 5s1 4d10 *
Cadmium (48) [Kr] 5s2 4d10

Period 6:
Lanthanum (57) [Xe] 6s2 5d1 *
Cerium (58) [Xe] 6s2 4f1 5d1 *
Praseodymium (59) [Xe] 6s2 4f3
Neodymium (60) [Xe] 6s2 4f4
Promethium (61) [Xe] 6s2 4f5
Samarium (62) [Xe] 6s2 4f6
Europium (63) [Xe] 6s2 4f7
Gadolinium (64) [Xe] 6s2 4f7 5d1 *
Terbium (65) [Xe] 6s2 4f9
Lutetium (71) [Xe] 6s2 4f14 5d1
Hafnium (72) [Xe] 6s2 4f14 5d2
Tantalum (73) [Xe] 6s2 4f14 5d3
Tungsten (74) [Xe] 6s2 4f14 5d4
Rhenium (75) [Xe] 6s2 4f14 5d5
Osmium (76) [Xe] 6s2 4f14 5d6
Iridium (77) [Xe] 6s2 4f14 5d7
Platinum (78) [Xe] 6s1 4f14 5d9 *
Gold (79) [Xe] 6s1 4f14 5d10 *
Mercury (80) [Xe] 6s2 4f14 5d10

Period 7:
Actinium (89) [Rn] 7s2 6d1 *
Thorium (90) [Rn] 7s2 6d2 *
Protactinium (91) [Rn] 7s2 5f2 6d1 *
Uranium (92) [Rn] 7s2 5f3 6d1 *
Neptunium (93) [Rn] 7s2 5f4 6d1 *
Plutonium (94) [Rn] 7s2 5f6
Americium (95) [Rn] 7s2 5f7
Curium (96) [Rn] 7s2 5f7 6d1 *
Berkelium (97) [Rn] 7s2 5f9
Lawrencium (103) [Rn] 7s2 5f14 7p1 *
Rutherfordium (104) [Rn] 7s2 5f14 6d2
Dubnium (105) [Rn] 7s2 5f14 6d3
Seaborgium (106) [Rn] 7s2 5f14 6d4
Bohrium (107) [Rn] 7s2 5f14 6d5
Hassium (108) [Rn] 7s2 5f14 6d6
The electron-shell configuration of elements beyond hassium has not yet been empirically verified, but they are expected to follow Madelung's rule without exceptions until element 120.[23]

Electron configuration in molecules

In molecules, the situation becomes more complex, as each molecule has a different orbital structure. The molecular orbitals are labelled according to their symmetry,[24] rather than the atomic orbital labels used for atoms and monatomic ions: hence, the electron configuration of the dioxygen molecule, O2, is written 1σg2 1σu2 2σg2 2σu2 3σg2 1πu4 1πg2,[25][26] or equivalently 1σg2 1σu2 2σg2 2σu2 1πu4 3σg2 1πg2.[1] The term 1πg2 represents the two electrons in the two degenerate π*-orbitals (antibonding). From Hund's rules, these electrons have parallel spins in the ground state, and so dioxygen has a net magnetic moment (it is paramagnetic). The explanation of the paramagnetism of dioxygen was a major success for molecular orbital theory.
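As a quick sanity check, the occupancies in the dioxygen configuration above should sum to the 16 electrons of O2. A tiny sketch, using ASCII stand-ins for the Greek orbital labels (the string layout is illustrative):

```python
import re

# "sg" stands for sigma-g, "su" for sigma-u, "pu" for pi-u, "pg" for pi-g.
# In each term such as "1pu4", the trailing digit is the occupancy.
O2 = "1sg2 1su2 2sg2 2su2 3sg2 1pu4 1pg2"

# Grab the occupancy (the digits after the orbital letters) of each term.
electrons = sum(int(d) for d in re.findall(r"[a-z]+(\d+)", O2))
print(electrons)  # 16, matching two oxygen atoms of 8 electrons each
```

The two electrons of the final 1πg2 term sit in two degenerate orbitals and, by Hund's rules, remain unpaired, which is what makes O2 paramagnetic.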

The electronic configuration of polyatomic molecules can change without absorption or emission of a photon through vibronic couplings.

Electron configuration in solids

In a solid, the electron states become very numerous. They cease to be discrete, and effectively blend into continuous ranges of possible states (an electron band). The notion of electron configuration ceases to be relevant, and yields to band theory.

Applications

The most widespread application of electron configurations is in the rationalization of chemical properties, in both inorganic and organic chemistry. In effect, electron configurations, along with some simplified form of molecular orbital theory, have become the modern equivalent of the valence concept, describing the number and type of chemical bonds that an atom can be expected to form.

This approach is taken further in computational chemistry, which typically attempts to make quantitative estimates of chemical properties. For many years, most such calculations relied upon the "linear combination of atomic orbitals" (LCAO) approximation, using an ever-larger and more complex basis set of atomic orbitals as the starting point. The last step in such a calculation is the assignment of electrons among the molecular orbitals according to the Aufbau principle. Not all methods in calculational chemistry rely on electron configuration: density functional theory (DFT) is an important example of a method that discards the model.

For atoms or molecules with more than one electron, the motion of electrons are correlated and such a picture is no longer exact. A very large number of electronic configurations are needed to exactly describe any multi-electron system, and no energy can be associated with one single configuration. However, the electronic wave function is usually dominated by a very small number of configurations and therefore the notion of electronic configuration remains essential for multi-electron systems.

A fundamental application of electron configurations is in the interpretation of atomic spectra. In this case, it is necessary to supplement the electron configuration with one or more term symbols, which describe the different energy levels available to an atom. Term symbols can be calculated for any electron configuration, not just the ground-state configuration listed in tables, although not all the energy levels are observed in practice. It is through the analysis of atomic spectra that the ground-state electron configurations of the elements were experimentally determined.