Saturday, August 9, 2014

Linearity

From Wikipedia, the free encyclopedia
 
In common usage, linearity refers to a mathematical relationship or function that can be graphically represented as a straight line, as in two quantities that are directly proportional to each other, such as voltage and current in a simple DC circuit, or the mass and weight of an object.

A crude but simple example of this concept can be observed in the volume control of an audio amplifier. While our ears may (roughly) perceive a relatively even gradation of volume as the control goes from 1 to 10, the electrical power consumed in the speaker is rising geometrically with each numerical increment. The perceived loudness is proportional to the volume number (a linear relationship), while the wattage doubles with every unit increase (a non-linear, exponential relationship).

In mathematics, a linear map or linear function f(x) is a function that satisfies the following two properties:
  1. Additivity: f(x + y) = f(x) + f(y).
  2. Homogeneity of degree 1: f(αx) = αf(x) for all α.
The homogeneity and additivity properties together are called the superposition principle. It can be shown that additivity implies homogeneity in all cases where α is rational; this is done by proving the case where α is a natural number by mathematical induction and then extending the result to arbitrary rational numbers. If f is assumed to be continuous as well, then this can be extended to show homogeneity for any real number α, using the fact that the rationals form a dense subset of the reals.

In this definition, x is not necessarily a real number, but can in general be a member of any vector space. A more specific definition of linear function, not coinciding with the definition of linear map, is used in elementary mathematics.
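As a quick illustration, the two defining properties can be spot-checked numerically. The sketch below (with made-up sample points and tolerances) confirms that f(x) = 3x passes both tests while the affine g(x) = 3x + 1 fails additivity:

```python
# Numeric sketch of the superposition test: a map is linear iff it is
# additive and homogeneous. The sample points and alpha are arbitrary.
def is_linear(func, samples=((1.0, 2.0), (-3.0, 0.5)), alpha=2.5, tol=1e-9):
    """Spot-check additivity f(x+y) == f(x)+f(y) and homogeneity f(a*x) == a*f(x)."""
    for x, y in samples:
        if abs(func(x + y) - (func(x) + func(y))) > tol:
            return False
        if abs(func(alpha * x) - alpha * func(x)) > tol:
            return False
    return True

f = lambda x: 3.0 * x        # linear: passes both properties
g = lambda x: 3.0 * x + 1.0  # affine: fails additivity unless b == 0

print(is_linear(f), is_linear(g))  # True False
```

A spot check on a few points cannot prove linearity, but it suffices to falsify it, which is what the example shows for g.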

The concept of linearity can be extended to linear operators. Important examples of linear operators include the derivative considered as a differential operator, and many constructed from it, such as del and the Laplacian. When a differential equation can be expressed in linear form, it is generally straightforward to solve by breaking the equation up into smaller pieces, solving each of those pieces, and summing the solutions.

Linear algebra is the branch of mathematics concerned with the study of vectors, vector spaces (also called linear spaces), linear transformations (also called linear maps), and systems of linear equations.
The word linear comes from the Latin word linearis, which means pertaining to or resembling a line. For a description of linear and nonlinear equations, see linear equation. Nonlinear equations and functions are of interest to physicists and mathematicians because they can be used to represent many natural phenomena, including chaos.

Integral linearity

For a device that converts a quantity to another quantity there are three basic definitions for integral linearity in common use: independent linearity, zero-based linearity, and terminal, or end-point, linearity. In each case, linearity defines how well the device's actual performance across a specified operating range approximates a straight line. Linearity is usually measured in terms of a deviation, or non-linearity, from an ideal straight line and it is typically expressed in terms of percent of full scale, or in ppm (parts per million) of full scale. Typically, the straight line is obtained by performing a least-squares fit of the data. The three definitions vary in the manner in which the straight line is positioned relative to the actual device's performance. Also, all three of these definitions ignore any gain, or offset errors that may be present in the actual device's performance characteristics.

Many times a device's specifications will simply refer to linearity, with no other explanation as to which type of linearity is intended. In cases where a specification is expressed simply as linearity, it is assumed to imply independent linearity.

Independent linearity is probably the most commonly used linearity definition and is often found in the specifications for DMMs and ADCs, as well as devices like potentiometers. Independent linearity is defined as the maximum deviation of actual performance relative to a straight line, located such that it minimizes the maximum deviation. In that case there are no constraints placed upon the positioning of the straight line and it may be wherever necessary to minimize the deviations between it and the device's actual performance characteristic.

Zero-based linearity forces the lower range value of the straight line to be equal to the actual lower range value of the device's characteristic, but it does allow the line to be rotated to minimize the maximum deviation. In this case, since the positioning of the straight line is constrained by the requirement that the lower range values of the line and the device's characteristic be coincident, the non-linearity based on this definition will generally be larger than for independent linearity.

For terminal linearity, there is no flexibility allowed in the placement of the straight line in order to minimize the deviations. The straight line must be located such that each of its end-points coincides with the device's actual upper and lower range values. This means that the non-linearity measured by this definition will typically be larger than that measured by the independent, or the zero-based linearity definitions. This definition of linearity is often associated with ADCs, DACs and various sensors.

A fourth linearity definition, absolute linearity, is sometimes also encountered. Absolute linearity is a variation of terminal linearity, in that it allows no flexibility in the placement of the straight line, however in this case the gain and offset errors of the actual device are included in the linearity measurement, making this the most difficult measure of a device's performance. For absolute linearity the end points of the straight line are defined by the ideal upper and lower range values for the device, rather than the actual values. The linearity error in this instance is the maximum deviation of the actual device's performance from ideal.
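The definitions above can be compared numerically. The sketch below uses a made-up transfer curve; the "independent" figure is approximated by a least-squares line (the typical practice mentioned above), and the zero-based slope is found by a coarse scan, so the numbers are illustrative rather than exact:

```python
# Hypothetical transfer curve of a device (input -> output) with a small
# bow-shaped non-linearity; all data values are invented for illustration.
xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [0.00, 1.05, 2.08, 3.05, 4.00]
full_scale = 4.0

def max_dev(slope, intercept):
    """Worst-case deviation from a reference line, as percent of full scale."""
    return max(abs(y - (slope * x + intercept)) for x, y in zip(xs, ys)) / full_scale * 100.0

# Terminal (end-point) linearity: line pinned to both measured end points.
m_term = (ys[-1] - ys[0]) / (xs[-1] - xs[0])
terminal = max_dev(m_term, ys[0] - m_term * xs[0])

# Zero-based linearity: line pinned to the lower range value only; the slope
# is chosen by a coarse scan to minimize the worst-case deviation.
zero_based = min(max_dev(m / 1000.0, ys[0]) for m in range(900, 1101))

# Independent linearity approximated by a least-squares line; a true
# minimax (best-positioned) line would deviate no more than this.
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
independent = max_dev(slope, my - slope * mx)

print(round(independent, 2), round(zero_based, 2), round(terminal, 2))
```

On this sample data the figures come out in the order the text predicts: independent < zero-based < terminal, since each definition adds a constraint on where the straight line may sit.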

Linear polynomials

In a different usage to the above definition, a polynomial of degree 1 is said to be linear, because the graph of a function of that form is a line.

Over the reals, a linear function is one of the form
f(x) = mx + b,
where m is often called the slope or gradient, and b the y-intercept, which gives the point of intersection between the graph of the function and the y-axis.

Note that this usage of the term linear is not the same as the above, because linear polynomials over the real numbers do not in general satisfy either additivity or homogeneity. In fact, they do so if and only if b = 0. Hence, if b ≠ 0, the function is often called an affine function (see in greater generality affine transformation).

Boolean functions

In Boolean algebra, a linear function is a function f for which there exist a₀, a₁, …, aₙ ∈ {0, 1} such that
f(b₁, …, bₙ) = a₀ ⊕ (a₁ ∧ b₁) ⊕ ⋯ ⊕ (aₙ ∧ bₙ) for all b₁, …, bₙ ∈ {0, 1}.
A Boolean function is linear if one of the following holds for the function's truth table:
  1. In every row in which the truth value of the function is 'T', there are an odd number of 'T's assigned to the arguments and in every row in which the function is 'F' there is an even number of 'T's assigned to arguments. Specifically, f('F', 'F', ..., 'F') = 'F', and these functions correspond to linear maps over the Boolean vector space.
  2. In every row in which the value of the function is 'T', there is an even number of 'T's assigned to the arguments of the function; and in every row in which the truth value of the function is 'F', there are an odd number of 'T's assigned to arguments. In this case, f('F', 'F', ..., 'F') = 'T'.
Another way to express this is that each variable either always makes a difference to the truth-value of the operation or never makes a difference.

Negation, Logical biconditional, exclusive or, tautology, and contradiction are linear functions.
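This parity characterization is easy to check mechanically. The sketch below (with 0/1 standing for 'F'/'T') recovers the forced coefficients a₀, a₁, …, aₙ from the truth table and verifies the XOR form; exclusive or and the biconditional pass, while conjunction fails:

```python
from itertools import product

def is_boolean_linear(f, n):
    """Check whether f equals a0 XOR (a1 AND b1) XOR ... for some a_i in {0, 1}."""
    # The candidate coefficients are forced: a0 = f(0,...,0), and each
    # a_i = f(e_i) XOR a0, where e_i sets only argument i to 1.
    a0 = f(*([0] * n))
    a = [f(*[1 if j == i else 0 for j in range(n)]) ^ a0 for i in range(n)]
    for bits in product((0, 1), repeat=n):
        val = a0
        for ai, bi in zip(a, bits):
            val ^= ai & bi
        if val != f(*bits):
            return False
    return True

xor_fn = lambda p, q: p ^ q          # exclusive or: linear, a0 = 0
iff_fn = lambda p, q: 1 - (p ^ q)    # biconditional: linear, a0 = 1
and_fn = lambda p, q: p & q          # conjunction: not linear

print(is_boolean_linear(xor_fn, 2), is_boolean_linear(iff_fn, 2), is_boolean_linear(and_fn, 2))
```

The two linear cases correspond to the two truth-table conditions above: a₀ = 0 gives f('F', …, 'F') = 'F', and a₀ = 1 gives f('F', …, 'F') = 'T'.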

Physics

In physics, linearity is a property of the differential equations governing many systems; for instance, the Maxwell equations or the diffusion equation.

Linearity of a differential equation means that if two functions f and g are solutions of the equation, then any linear combination af + bg is, too.
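A numeric sketch of this superposition property, using sin and cos as two known solutions of y″ + y = 0 and a finite-difference second derivative (step size and test points chosen arbitrarily):

```python
import math

def second_deriv(func, x, h=1e-4):
    # Central-difference approximation to the second derivative.
    return (func(x + h) - 2.0 * func(x) + func(x - h)) / h**2

f, g = math.sin, math.cos                    # both solve y'' + y = 0
combo = lambda x: 2.0 * f(x) - 3.0 * g(x)    # an arbitrary linear combination

# The residual y'' + y stays near zero for the combination as well.
for x in (0.0, 0.7, 1.9):
    print(abs(second_deriv(combo, x) + combo(x)) < 1e-4)
```

The tolerance is loose because the finite-difference estimate carries truncation and round-off error; analytically the residual is exactly zero.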

Electronics

In electronics, the linear operating region of a device, for example a transistor, is where a dependent variable (such as the transistor collector current) is directly proportional to an independent variable (such as the base current). This ensures that an analog output is an accurate representation of an input, typically with higher amplitude (amplified). A typical example of linear equipment is a high fidelity audio amplifier, which must amplify a signal without changing its waveform. Others are linear filters, linear regulators, and linear amplifiers in general.

In most scientific and technological, as distinct from mathematical, applications, something may be described as linear if the characteristic is approximately but not exactly a straight line; and linearity may be valid only within a certain operating region—for example, a high-fidelity amplifier may distort even a small signal, but sufficiently little to be acceptable (acceptable but imperfect linearity); and may distort very badly if the input exceeds a certain value, taking it away from the approximately linear part of the transfer function.

Military tactical formations

In military tactical formations, "linear formations" were adapted from phalanx-like formations of pike protected by handgunners towards shallow formations of handgunners protected by progressively fewer pikes. This kind of formation would get thinner until its extreme in the age of Wellington with the 'Thin Red Line'. It would eventually be replaced by skirmish order at the time of the invention of the breech-loading rifle that allowed soldiers to move and fire independently of the large-scale formations and fight in small, mobile units.

Art

Linear is one of the five categories proposed by Swiss art historian Heinrich Wölfflin to distinguish "Classic", or Renaissance art, from the Baroque. According to Wölfflin, painters of the fifteenth and early sixteenth centuries (Leonardo da Vinci, Raphael or Albrecht Dürer) are more linear than "painterly" Baroque painters of the seventeenth century (Peter Paul Rubens, Rembrandt, and Velázquez) because they primarily use outline to create shape.[1] Linearity can also appear in digital art: hypertext fiction, for example, can be a form of nonlinear narrative, but there are also websites designed to be read in a specified, organized manner, following a linear path.

Music

In music the linear aspect is succession, either intervals or melody, as opposed to simultaneity or the vertical aspect.

Measurement

In measurement, the term "linear foot" refers to the number of feet in a straight line of material (such as lumber or fabric), generally without regard to the width. It is sometimes incorrectly referred to as "lineal feet"; however, "lineal" is typically reserved for usage referring to ancestry or heredity.[1] The words "linear"[2] and "lineal"[3] both descend from the same root meaning, the Latin word for line, which is "linea".

Colligative properties

From Wikipedia, the free encyclopedia
 
In chemistry, colligative properties are properties of solutions that depend upon the ratio of the number of solute particles to the number of solvent molecules in a solution, and not on the type of chemical species present.[1] This number ratio can be related to the various units for concentration of solutions. Here we shall only consider those properties which result because of the dissolution of nonvolatile solute in a volatile liquid solvent.[2] They are independent of the nature of the solute particles, and are due essentially to the dilution of the solvent by the solute. The word colligative is derived from the Latin colligatus meaning bound together.[3]

Colligative properties include:
  1. Relative lowering of vapor pressure
  2. Elevation of boiling point
  3. Depression of freezing point
  4. Osmotic pressure.
Measurement of colligative properties for a dilute solution of a non-ionized solute such as urea or glucose in water or another solvent can lead to determinations of relative molar masses, both for small molecules and for polymers which cannot be studied by other means. Alternatively, measurements for ionized solutes can lead to an estimation of the percentage of ionization taking place.

Colligative properties are mostly studied for dilute solutions, whose behavior may often be approximated as that of an ideal solution.

Relative lowering of vapor pressure

The vapor pressure of a liquid is the pressure of a vapor in equilibrium with the liquid phase. The vapor pressure of a solvent is lowered by addition of a non-volatile solute to form a solution.
For an ideal solution, the equilibrium vapor pressure is given by Raoult's law as
p = p*_A x_A + p*_B x_B + ⋯,
where p*_i is the vapor pressure of the pure component (i = A, B, ...) and x_i is the mole fraction of that component in the solution.
For a solution with a solvent (A) and one non-volatile solute (B), p*_B = 0 and p = p*_A x_A.

The vapor pressure lowering relative to the pure solvent is
Δp = p*_A − p = p*_A (1 − x_A) = p*_A x_B,
which is proportional to the mole fraction of solute.
If the solute dissociates in solution, then the vapor pressure lowering is increased by the van 't Hoff factor i, which represents the true number of solute particles for each formula unit. For example, the strong electrolyte MgCl2 dissociates into one Mg²⁺ ion and two Cl⁻ ions, so that if ionization is complete, i = 3 and Δp = 3 p*_A x_B. The measured colligative properties show that i is somewhat less than 3 due to ion association.
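A hedged numeric sketch of this relation; the vapor-pressure value and the amounts of solvent and solute are assumed for illustration:

```python
# Vapor-pressure lowering for an assumed 1.00 mol of MgCl2 in 55.5 mol of
# water (~1 kg) at 25 C, where pure water has p*_A of about 23.76 torr.
p_star_A = 23.76    # torr, vapor pressure of pure water at 25 C (assumed)
n_A = 55.5          # mol of solvent
n_B = 1.00          # mol of MgCl2 formula units
i = 3               # van 't Hoff factor for complete dissociation

x_B = n_B / (n_B + n_A)         # mole fraction of solute formula units
delta_p = i * p_star_A * x_B    # dilute-solution expression from the text
print(round(delta_p, 2))        # lowering in torr
```

As the text notes, a measured value would come out somewhat smaller, because ion association makes the effective i less than 3.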

Boiling point and freezing point

Addition of solute to form a solution stabilizes the solvent in the liquid phase, and lowers the solvent chemical potential so that solvent molecules have less tendency to move to the gas or solid phases.
As a result, liquid solutions slightly above the solvent boiling point at a given pressure become stable, which means that the boiling point increases. Similarly, liquid solutions slightly below the solvent freezing point become stable meaning that the freezing point decreases. Both the boiling point elevation and the freezing point depression are proportional to the lowering of vapor pressure in a dilute solution.

These properties are colligative in systems where the solute is essentially confined to the liquid phase. Boiling point elevation (like vapor pressure lowering) is colligative for non-volatile solutes where the solute presence in the gas phase is negligible. Freezing point depression is colligative for most solutes since very few solutes dissolve appreciably in solid solvents.

Boiling point elevation (ebullioscopy)

The boiling point of a liquid is the temperature (T_b) at which its vapor pressure is equal to the external pressure. The normal boiling point is the boiling point at a pressure equal to 1 atmosphere.
The boiling point of a pure solvent is increased by the addition of a non-volatile solute, and the elevation can be measured by ebullioscopy. It is found that
ΔT_b = T_b(solution) − T_b(solvent) = i · K_b · m
Here i is the van 't Hoff factor as above, K_b is the ebullioscopic constant of the solvent (equal to 0.512 °C kg/mol for water), and m is the molality of the solution.

The boiling point is the temperature at which there is equilibrium between liquid and gas phases. At the boiling point, the number of gas molecules condensing to liquid equals the number of liquid molecules evaporating to gas. Adding a solute dilutes the concentration of the liquid molecules and reduces the rate of evaporation. To compensate for this and re-attain equilibrium, the boiling point occurs at a higher temperature.

If the solution is assumed to be an ideal solution, Kb can be evaluated from the thermodynamic condition for liquid-vapor equilibrium. At the boiling point the chemical potential μA of the solvent in the solution phase equals the chemical potential in the pure vapor phase above the solution.
μ_A(T_b) = μ*_A(T_b) + RT ln x_A = μ*_A(g, 1 atm),
where the asterisks indicate pure phases. This leads to the result K_b = RMT_b²/ΔH_vap, where R is the molar gas constant, M is the solvent molar mass and ΔH_vap is the solvent molar enthalpy of vaporization.[4]
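Plugging approximate data for water into this result (values assumed from standard tables) recovers the ebullioscopic constant quoted above:

```python
R = 8.314          # J/(K mol), molar gas constant
M = 0.018015       # kg/mol, molar mass of water
T_b = 373.15       # K, normal boiling point of water
dH_vap = 40660.0   # J/mol, molar enthalpy of vaporization (approximate)

K_b = R * M * T_b**2 / dH_vap
print(round(K_b, 3))   # close to the tabulated 0.512 C kg/mol
```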

Freezing point depression (cryoscopy)

The freezing point (T_f) of a pure solvent is lowered by the addition of a solute which is insoluble in the solid solvent, and the measurement of this difference is called cryoscopy. It is found that
ΔT_f = T_f(solution) − T_f(solvent) = −i · K_f · m
Here K_f is the cryoscopic constant, equal to 1.86 °C kg/mol for the freezing point of water. Again i is the van 't Hoff factor and m the molality.

In the liquid solution, the solvent is diluted by the addition of a solute, so that fewer molecules are available to freeze. Re-establishment of equilibrium is achieved at a lower temperature, at which the rate of freezing becomes equal to the rate of liquefying. At the lower freezing point, the vapor pressure of the liquid is equal to the vapor pressure of the corresponding solid, and the chemical potentials of the two phases are equal as well. The equality of chemical potentials permits the evaluation of the cryoscopic constant as K_f = RMT_f²/ΔH_fus, where ΔH_fus is the solvent molar enthalpy of fusion.[4]
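The same check works for the cryoscopic constant, again with approximate water data, followed by an illustrative freezing-point depression for an assumed brine:

```python
R = 8.314          # J/(K mol), molar gas constant
M = 0.018015       # kg/mol, molar mass of water
T_f = 273.15       # K, freezing point of water
dH_fus = 6009.5    # J/mol, molar enthalpy of fusion (approximate)

K_f = R * M * T_f**2 / dH_fus      # cryoscopic constant, ~1.86 C kg/mol

# Example: an assumed 0.50 mol NaCl per kg of water, with i = 2
# (complete dissociation into Na+ and Cl-).
delta_T_f = -2 * K_f * 0.50        # freezing-point shift in C
print(round(K_f, 2), round(delta_T_f, 2))
```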

Osmotic pressure

The osmotic pressure of a solution is the difference in pressure between the solution and the pure liquid solvent when the two are in equilibrium across a semipermeable membrane, which allows the passage of solvent molecules but not of solute particles. If the two phases are at the same initial pressure, there is a net transfer of solvent across the membrane into the solution known as osmosis.
The process stops and equilibrium is attained when the pressure difference equals the osmotic pressure.
Two laws governing the osmotic pressure of a dilute solution were discovered by the German botanist W. F. P. Pfeffer and the Dutch chemist J. H. van’t Hoff:
  1. The osmotic pressure of a dilute solution at constant temperature is directly proportional to its concentration.
  2. The osmotic pressure of a solution is directly proportional to its absolute temperature.
These are analogous to Boyle's law and Charles's law for gases. Similarly, the combined ideal gas law, pV = nRT, has as its analog for ideal solutions ΠV = nRTi, where Π is the osmotic pressure; V is the volume; n is the number of moles of solute; R is the molar gas constant, 8.314 J K⁻¹ mol⁻¹; T is the absolute temperature; and i is the van 't Hoff factor.
The osmotic pressure is then proportional to the molar concentration c = n/V, since
Π = nRTi/V = cRTi.
The osmotic pressure is proportional to the concentration of solute particles ci and is therefore a colligative property.
As with the other colligative properties, this equation is a consequence of the equality of solvent chemical potentials of the two phases in equilibrium. In this case the phases are the pure solvent at pressure P and the solution at total pressure P + π.[5]
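A short numeric sketch of Π = cRTi, with an assumed 0.100 mol/L NaCl solution at 25 °C:

```python
R = 8.314      # J/(K mol), molar gas constant
T = 298.15     # K (25 C)
i = 2          # van 't Hoff factor for NaCl, assuming complete dissociation
c = 100.0      # mol/m^3, i.e. an assumed 0.100 mol/L solution

Pi = c * R * T * i          # osmotic pressure in Pa
print(round(Pi / 1e5, 2))   # same value expressed in bar
```

Even this dilute solution produces an osmotic pressure of several bar, which is why osmometry is such a sensitive way to count solute particles.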

History

The word colligative (German: kolligativ) was introduced in 1891 by Wilhelm Ostwald. Ostwald classified solute properties in three categories:[6][7]
  1. colligative properties which depend only on solute concentration and temperature, and are independent of the nature of the solute particles
  2. additive properties such as mass, which are the sums of properties of the constituent particles and therefore depend also on the composition (or molecular formula) of the solute, and
  3. constitutional properties which depend further on the molecular structure of the solute.

Friday, August 8, 2014

The Electromagnetic Spectrum

Electromagnetic spectrum

From Wikipedia, the free encyclopedia
Class                            Frequency   Wavelength   Energy
(each frequency/wavelength/energy row marks a boundary between adjacent bands)
                                 300 EHz     1 pm         1.24 MeV
γ       Gamma rays
                                 30 EHz      10 pm        124 keV
HX      Hard X-rays
                                 3 EHz       100 pm       12.4 keV
SX      Soft X-rays
                                 300 PHz     1 nm         1.24 keV
                                 30 PHz      10 nm        124 eV
EUV     Extreme ultraviolet
                                 3 PHz       100 nm       12.4 eV
NUV     Near ultraviolet
Visible                          300 THz     1 μm         1.24 eV
NIR     Near infrared
                                 30 THz      10 μm        124 meV
MIR     Mid infrared
                                 3 THz       100 μm       12.4 meV
FIR     Far infrared
                                 300 GHz     1 mm         1.24 meV
Radio waves:
EHF     Extremely high frequency
                                 30 GHz      1 cm         124 μeV
SHF     Super high frequency
                                 3 GHz       1 dm         12.4 μeV
UHF     Ultra high frequency
                                 300 MHz     1 m          1.24 μeV
VHF     Very high frequency
                                 30 MHz      10 m         124 neV
HF      High frequency
                                 3 MHz       100 m        12.4 neV
MF      Medium frequency
                                 300 kHz     1 km         1.24 neV
LF      Low frequency
                                 30 kHz      10 km        124 peV
VLF     Very low frequency
                                 3 kHz       100 km       12.4 peV
VF/ULF  Voice frequency
                                 300 Hz      1 Mm         1.24 peV
SLF     Super low frequency
                                 30 Hz       10 Mm        124 feV
ELF     Extremely low frequency
                                 3 Hz        100 Mm       12.4 feV

Sources: File:Light spectrum.svg [1] [2] [3]
 
The electromagnetic spectrum is the range of all possible frequencies of electromagnetic radiation.[4] The "electromagnetic spectrum" of an object has a different meaning, and is instead the characteristic distribution of electromagnetic radiation emitted or absorbed by that particular object.

The electromagnetic spectrum extends from below the low frequencies used for modern radio communication to gamma radiation at the short-wavelength (high-frequency) end, thereby covering wavelengths from thousands of kilometers down to a fraction of the size of an atom. The limit for long wavelengths is the size of the universe itself, while it is thought that the short wavelength limit is in the vicinity of the Planck length.[5] Until the middle of last century it was believed by most physicists that this spectrum was infinite and continuous.

Most parts of the electromagnetic spectrum are used in science for spectroscopic and other probing interactions, as ways to study and characterize matter.[6] In addition, radiation from various parts of the spectrum has found many other uses for communications and manufacturing (see electromagnetic radiation for more applications).
History of electromagnetic spectrum discovery

For most of history, visible light was the only known part of the electromagnetic spectrum. The ancient Greeks recognized that light traveled in straight lines and studied some of its properties, including reflection and refraction. Over the years the study of light continued and during the 16th and 17th centuries there were conflicting theories which regarded light as either a wave or a particle.[citation needed]

The first discovery of electromagnetic radiation other than visible light came in 1800, when William Herschel discovered infrared radiation.[7] He was studying the temperature of different colors by moving a thermometer through light split by a prism. He noticed that the highest temperature was beyond red. He theorized that this temperature change was due to "calorific rays" which would be in fact a type of light ray that could not be seen. The next year, Johann Ritter worked at the other end of the spectrum and noticed what he called "chemical rays" (invisible light rays that induced certain chemical reactions) that behaved similar to visible violet light rays, but were beyond them in the spectrum.[8] They were later renamed ultraviolet radiation.

Electromagnetic radiation had been first linked to electromagnetism in 1845, when Michael Faraday noticed that the polarization of light traveling through a transparent material responded to a magnetic field (see Faraday effect). During the 1860s James Maxwell developed four partial differential equations for the electromagnetic field. Two of these equations predicted the possibility of, and behavior of, waves in the field. Analyzing the speed of these theoretical waves, Maxwell realized that they must travel at a speed that was about the known speed of light. This startling coincidence in value led Maxwell to make the inference that light itself is a type of electromagnetic wave.
Maxwell's equations predicted an infinite number of frequencies of electromagnetic waves, all traveling at the speed of light. This was the first indication of the existence of the entire electromagnetic spectrum.

Maxwell's predicted waves included waves at very low frequencies compared to infrared, which in theory might be created by oscillating charges in an ordinary electrical circuit of a certain type. Attempting to prove Maxwell's equations and detect such low frequency electromagnetic radiation, in 1886 the physicist Heinrich Hertz built an apparatus to generate and detect what is now called radio waves. Hertz found the waves and was able to infer (by measuring their wavelength and multiplying it by their frequency) that they traveled at the speed of light. Hertz also demonstrated that the new radiation could be both reflected and refracted by various dielectric media, in the same manner as light. For example, Hertz was able to focus the waves using a lens made of tree resin. In a later experiment, Hertz similarly produced and measured the properties of microwaves. These new types of waves paved the way for inventions such as the wireless telegraph and the radio.

In 1895 Wilhelm Röntgen noticed a new type of radiation emitted during an experiment with an evacuated tube subjected to a high voltage. He called these radiations x-rays and found that they were able to travel through parts of the human body but were reflected or stopped by denser matter such as bones. Before long, many uses were found for them in the field of medicine.

The last portion of the electromagnetic spectrum was filled in with the discovery of gamma rays. In 1900 Paul Villard was studying the radioactive emissions of radium when he identified a new type of radiation that he first thought consisted of particles similar to known alpha and beta particles, but with the power of being far more penetrating than either. However, in 1910, British physicist William Henry Bragg demonstrated that gamma rays are electromagnetic radiation, not particles, and in 1914, Ernest Rutherford (who had named them gamma rays in 1903 when he realized that they were fundamentally different from charged alpha and beta rays) and Edward Andrade measured their wavelengths, and found that gamma rays were similar to X-rays, but with shorter wavelengths and higher frequencies.

Range of the spectrum

Electromagnetic waves are typically described by any of the following three physical properties: the frequency f, wavelength λ, or photon energy E. Frequencies observed in astronomy range from 2.4×10²³ Hz (1 GeV gamma rays) down to the local plasma frequency of the ionized interstellar medium (~1 kHz). Wavelength is inversely proportional to the wave frequency,[6] so gamma rays have very short wavelengths that are fractions of the size of atoms, whereas wavelengths on the opposite end of the spectrum can be as long as the universe. Photon energy is directly proportional to the wave frequency, so gamma ray photons have the highest energy (around a billion electron volts), while radio wave photons have very low energy (around a femtoelectronvolt). These relations are illustrated by the following equations:
f = c/λ,  or  f = E/h,  or  E = hc/λ,
where c is the speed of light in vacuum and h is Planck's constant.
Whenever electromagnetic waves exist in a medium with matter, their wavelength is decreased.
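The three relations can be sketched numerically; the constants below are rounded values, and 500 nm is an arbitrary visible-light wavelength chosen for illustration:

```python
c = 2.998e8      # m/s, speed of light in vacuum (rounded)
h = 6.626e-34    # J s, Planck constant (rounded)
eV = 1.602e-19   # J per electronvolt

lam = 500e-9               # 500 nm, a green visible-light wavelength
f = c / lam                # frequency in Hz
E = h * c / lam            # photon energy in J
print(f"{f:.3g} Hz, {E / eV:.3g} eV")
```

A 500 nm photon comes out near 6×10¹⁴ Hz and about 2.5 eV, in line with the visible-light row of the table above.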

Wavelengths of electromagnetic radiation, no matter what medium they are traveling through, are usually quoted in terms of the vacuum wavelength, although this is not always explicitly stated.

Generally, electromagnetic radiation is classified by wavelength into radio waves, microwaves, terahertz (or sub-millimeter) radiation, infrared, the visible region (perceived as light), ultraviolet, X-rays and gamma rays. The behavior of EM radiation depends on its wavelength. When EM radiation interacts with single atoms and molecules, its behavior also depends on the amount of energy per quantum (photon) it carries.

Spectroscopy can detect a much wider region of the EM spectrum than the visible range of 400 nm to 700 nm. A common laboratory spectroscope can detect wavelengths from 2 nm to 2500 nm. Detailed information about the physical properties of objects, gases, or even stars can be obtained from this type of device. Spectroscopes are widely used in astrophysics. For example, many hydrogen atoms emit a radio wave photon that has a wavelength of 21.12 cm. Also, frequencies of 30 Hz and below can be produced by and are important in the study of certain stellar nebulae[10] and frequencies as high as 2.9×10²⁷ Hz have been detected from astrophysical sources.[11]

Rationale for spectrum regional names

Electromagnetic radiation interacts with matter in different ways across the spectrum. These types of interaction are so different that historically different names have been applied to different parts of the spectrum, as though these were different types of radiation. Thus, although these "different kinds" of electromagnetic radiation form a quantitatively continuous spectrum of frequencies and wavelengths, the spectrum remains divided for practical reasons related to these qualitative interaction differences.
Region of the spectrumMain interactions with matter
RadioCollective oscillation of charge carriers in bulk material (plasma oscillation). An example would be the oscillatory travels of the electrons in an antenna.
Microwave through far infraredPlasma oscillation, molecular rotation
Near infraredMolecular vibration, plasma oscillation (in metals only)
VisibleMolecular electron excitation (including pigment molecules found in the human retina), plasma oscillations (in metals only)
UltravioletExcitation of molecular and atomic valence electrons, including ejection of the electrons (photoelectric effect)
X-raysExcitation and ejection of core atomic electrons, Compton scattering (for low atomic numbers)
Gamma raysEnergetic ejection of core electrons in heavy elements, Compton scattering (for all atomic numbers), excitation of atomic nuclei, including dissociation of nuclei
High-energy gamma raysCreation of particle-antiparticle pairs. At very high energies a single photon can create a shower of high-energy particles and antiparticles upon interaction with matter.

Types of radiation

The electromagnetic spectrum

Boundaries

A discussion of the regions (or bands or types) of the electromagnetic spectrum is given below. Note that there are no precisely defined boundaries between the bands of the electromagnetic spectrum; rather they fade into each other like the bands in a rainbow (which is the sub-spectrum of visible light). Radiation of each frequency and wavelength (or in each band) will have a mixture of properties of two regions of the spectrum that bound it. For example, red light resembles infrared radiation in that it can excite and add energy to some chemical bonds and indeed must do so to power the chemical mechanisms responsible for photosynthesis and the working of the visual system.

Regions of the spectrum

The types of electromagnetic radiation are broadly classified into the following classes:[6]
  1. Gamma radiation
  2. X-ray radiation
  3. Ultraviolet radiation
  4. Visible radiation
  5. Infrared radiation
  6. Terahertz radiation
  7. Microwave radiation
  8. Radio waves
This classification goes in the increasing order of wavelength, which is characteristic of the type of radiation.[6] While, in general, the classification scheme is accurate, in reality there is often some overlap between neighboring types of electromagnetic energy. For example, SLF radio waves at 60 Hz may be received and studied by astronomers, or may be ducted along wires as electric power, although the latter is, in the strict sense, not electromagnetic radiation at all (see near and far field).

The distinction between X-rays and gamma rays is partly based on sources: photons generated by nuclear decay or other nuclear and subnuclear/particle processes are always termed gamma rays, whereas X-rays are generated by electronic transitions involving highly energetic inner atomic electrons.[12][13][14] In general, nuclear transitions are much more energetic than electronic transitions, so gamma rays are more energetic than X-rays, but exceptions exist. By analogy to electronic transitions, muonic atom transitions are also said to produce X-rays, even though their energy may exceed 6 megaelectronvolts (0.96 pJ),[15] whereas there are many (77 known) nuclear transitions below 10 keV (1.6 fJ) (e.g., the 7.6 eV (1.22 aJ) nuclear transition of thorium-229); despite being a million-fold less energetic than some muonic X-rays, the photons emitted in these transitions are still called gamma rays because of their nuclear origin.[16]
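The joule values quoted in parentheses above follow from the definition of the electronvolt, 1 eV = 1.602176634 × 10⁻¹⁹ J. A quick consistency check of the quoted figures:

```python
# Verify the eV -> joule conversions quoted in the text.
EV_TO_J = 1.602176634e-19  # exact SI definition of the electronvolt

quoted = [
    (6e6, 0.96e-12),   # 6 MeV muonic X-ray      -> 0.96 pJ
    (10e3, 1.6e-15),   # 10 keV                  -> 1.6 fJ
    (7.6, 1.22e-18),   # 7.6 eV thorium-229 level -> 1.22 aJ
]

for energy_ev, energy_j in quoted:
    # quoted joule values are rounded to 2-3 significant figures
    assert abs(energy_ev * EV_TO_J - energy_j) / energy_j < 0.01
```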

The only convention that is universally respected, however, is that EM radiation known to come from the nucleus is always called gamma radiation. Many astronomical gamma-ray sources (such as gamma-ray bursts) are known to be too energetic (in both intensity and photon energy) to be of nuclear origin. Quite often, in high-energy physics and in medical radiotherapy, very high energy EMR (in the >10 MeV region), which is more energetic than any nuclear gamma ray, is referred to neither as X-rays nor as gamma rays, but instead by the generic term "high-energy photons."

The region of the spectrum in which a particular observed electromagnetic radiation falls is reference frame-dependent (due to the Doppler shift for light), so EM radiation that one observer would say is in one region of the spectrum could appear to an observer moving at a substantial fraction of the speed of light with respect to the first to be in another part of the spectrum. For example, consider the cosmic microwave background. It was produced, when matter and radiation decoupled, by the de-excitation of hydrogen atoms to the ground state. These photons came from Lyman series transitions, putting them in the ultraviolet (UV) part of the electromagnetic spectrum. Now this radiation has undergone enough cosmological redshift to put it into the microwave region of the spectrum for observers moving slowly (compared to the speed of light) with respect to the cosmos.
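The frame dependence can be made concrete with the longitudinal relativistic Doppler formula, a standard result: a source receding at speed βc has its wavelength stretched by a factor of √((1 + β)/(1 − β)). A small sketch for illustration:

```python
import math

def observed_wavelength(lambda_src, beta):
    """Wavelength seen by an observer from whom the source recedes
    at speed beta * c (longitudinal relativistic Doppler shift).
    Use a negative beta for an approaching source."""
    return lambda_src * math.sqrt((1 + beta) / (1 - beta))

# Green light (530 nm) emitted by a source receding at 0.6c:
# the shift factor is sqrt(1.6 / 0.4) = 2, so the light is observed
# at 1060 nm, in the near-infrared rather than the visible band.
```

An observer approaching the source instead sees the wavelength compressed (blueshifted) by the reciprocal factor.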

Radio frequency

 
Radio waves are generally utilized by antennas of appropriate size (according to the principle of resonance), with wavelengths ranging from hundreds of meters to about one millimeter. They are used for the transmission of data via modulation. Television, mobile phones, wireless networking, and amateur radio all use radio waves. The use of the radio spectrum is regulated by many governments through frequency allocation.

Radio waves can be made to carry information by varying a combination of the amplitude, frequency, and phase of the wave within a frequency band. When EM radiation impinges upon a conductor, it couples to the conductor, travels along it, and induces an electric current on the surface of that conductor by exciting the electrons of the conducting material. This effect (the skin effect) is used in antennas.
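As a toy illustration of the first of those three parameters, amplitude modulation scales the carrier's amplitude with the message signal. The function below is a hypothetical sketch, not a real radio implementation:

```python
import math

def am_sample(t, carrier_hz, message_hz, depth=0.5):
    """One sample of an amplitude-modulated signal at time t (seconds):
    the message waveform scales the amplitude of the carrier.
    'depth' is the modulation depth (0..1)."""
    message = math.sin(2 * math.pi * message_hz * t)
    carrier = math.sin(2 * math.pi * carrier_hz * t)
    return (1.0 + depth * message) * carrier
```

Frequency and phase modulation instead vary the carrier's instantaneous frequency or phase while keeping its amplitude constant.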

Microwaves

 
Plot of Earth's atmospheric transmittance (or opacity) to various wavelengths of electromagnetic radiation.

Microwaves occupy the super-high frequency (SHF) and extremely high frequency (EHF) bands, on the short-wavelength side of the radio spectrum. Microwaves are typically short enough (with wavelengths measured in millimeters) to employ tubular metal waveguides of reasonable diameter. Microwave energy is produced with klystron and magnetron tubes, and with solid-state diodes such as Gunn and IMPATT devices.
Microwaves are absorbed by molecules in liquids that have a dipole moment. In a microwave oven, this effect is used to heat food. Low-intensity microwave radiation is used in Wi-Fi, at intensity levels unable to cause thermal heating.

Volumetric heating, as used by microwave ovens, transfers energy through the material electromagnetically, not as a thermal heat flux. The benefit is more uniform heating and a reduced heating time; microwaves can heat material in less than 1% of the time of conventional heating methods.

When active, the average microwave oven is powerful enough to cause interference at close range with poorly shielded devices, such as mobile medical devices and cheaply made consumer electronics.[citation needed]

Terahertz radiation

Terahertz radiation is a region of the spectrum between far infrared and microwaves. Until recently the range was rarely studied, and few sources existed for microwave energy at the high end of the band (sub-millimeter waves, or so-called terahertz waves), but applications such as imaging and communications are now appearing. Scientists are also looking to apply terahertz technology in the armed forces, where high-frequency waves might be directed at enemy troops to incapacitate their electronic equipment.[17]

Infrared radiation

The infrared part of the electromagnetic spectrum covers the range from roughly 300 GHz (1 mm) to 400 THz (750 nm). It can be divided into three parts:[6]
  • Far-infrared, from 300 GHz (1 mm) to 30 THz (10 μm). The lower part of this range may also be called microwaves. This radiation is typically absorbed by so-called rotational modes in gas-phase molecules, by molecular motions in liquids, and by phonons in solids. The water in Earth's atmosphere absorbs so strongly in this range that it renders the atmosphere in effect opaque. However, there are certain wavelength ranges ("windows") within the opaque range that allow partial transmission, and can be used for astronomy. The wavelength range from approximately 200 μm up to a few mm is often referred to as "sub-millimeter" in astronomy, reserving far infrared for wavelengths below 200 μm.
  • Mid-infrared, from 30 to 120 THz (10 to 2.5 μm). Hot objects (black-body radiators) can radiate strongly in this range, and human skin at normal body temperature radiates strongly at the lower end of this region. This radiation is absorbed by molecular vibrations, where the different atoms in a molecule vibrate around their equilibrium positions. This range is sometimes called the fingerprint region, since the mid-infrared absorption spectrum of a compound is very specific for that compound.
  • Near-infrared, from 120 to 400 THz (2,500 to 750 nm). Physical processes that are relevant for this range are similar to those for visible light. The highest frequencies in this region can be detected directly by some types of photographic film, and by many types of solid state image sensors for infrared photography and videography.
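The frequency and wavelength bounds quoted for these sub-bands are consistent with λ = c/f. A quick check (the quoted wavelengths use the rounded value c ≈ 3.0 × 10⁸ m/s, so agreement is to about 0.1%):

```python
# Check the stated (frequency, wavelength) bounds of the IR sub-bands.
C = 299_792_458.0  # speed of light in vacuum, m/s

bounds = [
    (300e9, 1e-3),     # 300 GHz -> 1 mm    (far-IR lower edge)
    (30e12, 10e-6),    # 30 THz  -> 10 um   (far/mid-IR boundary)
    (120e12, 2.5e-6),  # 120 THz -> 2.5 um  (mid/near-IR boundary)
    (400e12, 750e-9),  # 400 THz -> 750 nm  (near-IR/visible edge)
]

for freq, lam in bounds:
    # agreement to within ~0.1%, the rounding of the quoted values
    assert abs(C / freq - lam) / lam < 1.5e-3
```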

Visible radiation (light)

Above infrared in frequency comes visible light. The Sun emits its peak power in the visible region, although integrating the entire emission power spectrum through all wavelengths shows that the Sun emits slightly more infrared than visible light.[18] By definition, visible light is the part of the EM spectrum to which the human eye is the most sensitive. Visible light (and near-infrared light) is typically absorbed and emitted by electrons in molecules and atoms that move from one energy level to another. This action allows the chemical mechanisms that underlie human vision and plant photosynthesis. The light which excites the human visual system is a very small portion of the electromagnetic spectrum. A rainbow shows the optical (visible) part of the electromagnetic spectrum; infrared (if it could be seen) would be located just beyond the red side of the rainbow, with ultraviolet appearing just beyond the violet end.

Electromagnetic radiation with a wavelength between 380 nm and 760 nm (400-790 terahertz) is detected by the human eye and perceived as visible light. Other wavelengths, especially near infrared (longer than 760 nm) and ultraviolet (shorter than 380 nm), are also sometimes referred to as light, especially when visibility to humans is not relevant. White light is a combination of lights of different wavelengths in the visible spectrum. Passing white light through a prism splits it into the several colors of light observed in the visible spectrum between 400 nm and 780 nm.

If radiation having a frequency in the visible region of the EM spectrum reflects off an object, say, a bowl of fruit, and then strikes the eyes, this results in visual perception of the scene. The brain's visual system processes the multitude of reflected frequencies into different shades and hues, and through this insufficiently-understood psychophysical phenomenon, most people perceive a bowl of fruit.

At most wavelengths, however, the information carried by electromagnetic radiation is not directly detected by human senses. Natural sources produce EM radiation across the spectrum, and technology can also manipulate a broad range of wavelengths. Optical fiber transmits light that, although not necessarily in the visible part of the spectrum (it is usually infrared), can carry information. The modulation is similar to that used with radio waves.

Ultraviolet radiation

 
The amount of penetration of UV relative to altitude in Earth's ozone

Next in frequency comes ultraviolet (UV). The wavelength of UV rays is shorter than the violet end of the visible spectrum but longer than that of X-rays.

UV in the very shortest range (next to X-rays) is capable even of ionizing atoms (see photoelectric effect), greatly changing their physical behavior.

At the middle range of UV, UV rays cannot ionize atoms but can break chemical bonds, making molecules unusually reactive. Sunburn, for example, is caused by the disruptive effects of middle-range UV radiation on skin cells, and this damage is the main cause of skin cancer. UV rays in the middle range can irreparably damage the complex DNA molecules in cells, producing thymine dimers, which makes this radiation a very potent mutagen.

The Sun emits significant UV radiation (about 10% of its total power), including extremely short wavelength UV that could potentially destroy most life on land (ocean water would provide some protection for life there). However, most of the Sun's most-damaging UV wavelengths are absorbed by the atmosphere's nitrogen, oxygen, and ozone layer before they reach the surface. The highest-energy ranges of UV (vacuum UV) are absorbed by nitrogen and, at longer wavelengths, by simple diatomic oxygen in the air. Most of the UV in the mid-range is blocked by the ozone layer, which absorbs strongly in the important 200-315 nm range, the lower-energy part of which is too long in wavelength to be absorbed by ordinary dioxygen in air. The range between 315 nm and visible light (called UV-A) is not blocked well by the atmosphere, but does not cause sunburn and does less biological damage. However, it is not harmless: it generates oxygen radicals, mutations, and skin damage. See ultraviolet for more information.

X-rays

After UV come X-rays, which, like the upper ranges of UV, are also ionizing. However, due to their higher energies, X-rays can also interact with matter by means of the Compton effect. Hard X-rays have shorter wavelengths than soft X-rays. Because they can pass through most substances with some absorption, X-rays can be used to 'see through' objects with thicknesses up to the equivalent of a few meters of water. One notable use in this category is diagnostic X-ray imaging in medicine (a process known as radiography). X-rays are useful as probes in high-energy physics. In astronomy, the accretion disks around neutron stars and black holes emit X-rays, enabling these objects to be studied.
X-rays are also emitted by the coronas of stars and are strongly emitted by some types of nebulae. However, X-ray telescopes must be placed outside the Earth's atmosphere to see astronomical X-rays, since the atmosphere of Earth is a radiation shield with an areal density of 1000 grams per cm2, equivalent to 1000 centimeters, or 10 meters, of water.[19] This is sufficient to block almost all astronomical X-rays (and also astronomical gamma rays; see below).
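The water-equivalence figure follows from simple unit arithmetic, assuming water's density of 1 g/cm³:

```python
# Water column with the same mass per unit area as the atmosphere.
column_density = 1000.0  # atmosphere's areal density, g per cm^2 (as quoted)
water_density = 1.0      # density of water, g per cm^3

depth_cm = column_density / water_density  # 1000 cm of water
depth_m = depth_cm / 100.0                 # 10 m of water
```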

Gamma rays

After hard X-rays come gamma rays, which were discovered by Paul Villard in 1900. These are the most energetic photons, having no defined lower limit to their wavelength. In astronomy they are valuable for studying high-energy objects or regions; however, as with X-rays, this can only be done with telescopes outside the Earth's atmosphere. Gamma rays are useful to physicists thanks to their penetrative ability and their production by a number of radioisotopes. Gamma rays are also used for the irradiation of food and seed for sterilization, and in medicine they are occasionally used in radiation cancer therapy. More commonly, gamma rays are used for diagnostic imaging in nuclear medicine, an example being PET scans. The wavelength of gamma rays can be measured with high accuracy by means of Compton scattering. Gamma rays from space are almost entirely blocked by Earth's atmosphere.

Quantum decoherence

From Wikipedia, the free encyclopedia https://en.wikipedia.org/wiki/Quantum_decoherence ...