Sunday, June 19, 2022

Metallic bonding

From Wikipedia, the free encyclopedia

An example showing metallic bonding. + represents cations, - represents the free floating electrons.
 

Metallic bonding is a type of chemical bonding that arises from the electrostatic attractive force between conduction electrons (in the form of an electron cloud of delocalized electrons) and positively charged metal ions. It may be described as the sharing of free electrons among a structure of positively charged ions (cations). Metallic bonding accounts for many physical properties of metals, such as strength, ductility, thermal and electrical resistivity and conductivity, opacity, and luster.

Metallic bonding is not the only type of chemical bonding a metal can exhibit, even as a pure substance. For example, elemental gallium consists of covalently-bound pairs of atoms in both liquid and solid-state—these pairs form a crystal structure with metallic bonding between them. Another example of a metal–metal covalent bond is the mercurous ion (Hg₂²⁺).

History

As chemistry developed into a science, it became clear that metals formed the majority of the periodic table of the elements, and great progress was made in the description of the salts that can be formed in reactions with acids. With the advent of electrochemistry, it became clear that metals generally go into solution as positively charged ions, and the oxidation reactions of the metals became well understood in their electrochemical series. A picture emerged of metals as positive ions held together by an ocean of negative electrons.

With the advent of quantum mechanics, this picture was given a more formal interpretation in the form of the free electron model and its further extension, the nearly free electron model. In both models, the electrons are seen as a gas traveling through the structure of the solid with an energy that is essentially isotropic, in that it depends on the square of the magnitude, not the direction of the momentum vector k. In three-dimensional k-space, the set of points of the highest filled levels (the Fermi surface) should therefore be a sphere. In the nearly-free model, box-like Brillouin zones are added to k-space by the periodic potential experienced from the (ionic) structure, thus mildly breaking the isotropy.

The advent of X-ray diffraction and thermal analysis made it possible to study the structure of crystalline solids, including metals and their alloys; and phase diagrams were developed. Despite all this progress, the nature of intermetallic compounds and alloys largely remained a mystery and their study was often merely empirical. Chemists generally steered away from anything that did not seem to follow Dalton's laws of multiple proportions; and the problem was considered the domain of a different science, metallurgy.

The nearly-free electron model was eagerly taken up by some researchers in this field, notably Hume-Rothery, in an attempt to explain why certain intermetallic alloys with certain compositions would form and others would not. Initially Hume-Rothery's attempts were quite successful. His idea was to add electrons to inflate the spherical Fermi-balloon inside the series of Brillouin-boxes and determine when a certain box would be full. This predicted a fairly large number of alloy compositions that were later observed. As soon as cyclotron resonance became available and the shape of the balloon could be determined, it was found that the assumption that the balloon was spherical did not hold, except perhaps in the case of caesium. This finding reduced many of the conclusions to examples of how a model can sometimes give a whole series of correct predictions, yet still be wrong.

The nearly-free electron debacle showed researchers that any model that assumed that ions were in a sea of free electrons needed modification. So, a number of quantum mechanical models—such as band structure calculations based on molecular orbitals or the density functional theory—were developed. In these models, one either departs from the atomic orbitals of neutral atoms that share their electrons or (in the case of density functional theory) departs from the total electron density. The free-electron picture has, nevertheless, remained a dominant one in education.

The electronic band structure model became a major focus not only for the study of metals but even more so for the study of semiconductors. Together with the electronic states, the vibrational states were also shown to form bands. Rudolf Peierls showed that, in the case of a one-dimensional row of metallic atoms—say, hydrogen—an instability had to arise that would lead to the breakup of such a chain into individual molecules. This sparked an interest in the general question: when is collective metallic bonding stable and when will a more localized form of bonding take its place? Much research went into the study of clustering of metal atoms.

As powerful as the concept of the band structure model proved to be in describing metallic bonding, it has the drawback of remaining a one-electron approximation of a many-body problem. In other words, the energy states of each electron are described as if all the other electrons simply form a homogeneous background. Researchers such as Mott and Hubbard realized that this was perhaps appropriate for strongly delocalized s- and p-electrons; but for d-electrons, and even more for f-electrons, the interaction with electrons (and atomic displacements) in the local environment may become stronger than the delocalization that leads to broad bands. Thus, the transition from localized unpaired electrons to itinerant ones partaking in metallic bonding became more comprehensible.

The nature of metallic bonding

The combination of two phenomena gives rise to metallic bonding: delocalization of electrons and the availability of a far larger number of delocalized energy states than of delocalized electrons. The latter could be called electron deficiency.

In 2D

Graphene is an example of two-dimensional metallic bonding. Its metallic bonds are similar to aromatic bonding in benzene, naphthalene, anthracene, ovalene, etc.

In 3D

Metal aromaticity in metal clusters is another example of delocalization, this time often in three-dimensional arrangements. Metals take the delocalization principle to its extreme, and one could say that a crystal of a metal represents a single molecule over which all conduction electrons are delocalized in all three dimensions. This means that inside the metal one can generally not distinguish molecules, so that the metallic bonding is neither intra- nor inter-molecular. 'Nonmolecular' would perhaps be a better term. Metallic bonding is mostly non-polar, because even in alloys there is little difference among the electronegativities of the atoms participating in the bonding interaction (and, in pure elemental metals, none at all). Thus, metallic bonding is an extremely delocalized communal form of covalent bonding. In a sense, metallic bonding is not a 'new' type of bonding at all. It describes the bonding only as present in a chunk of condensed matter: be it crystalline solid, liquid, or even glass. Metallic vapors, in contrast, are often atomic (Hg) or at times contain molecules, such as Na2, held together by a more conventional covalent bond. This is why it is not correct to speak of a single 'metallic bond'.

Delocalization is most pronounced for s- and p-electrons. Delocalization in caesium is so strong that the electrons are virtually freed from the caesium atoms to form a gas constrained only by the surface of the metal. For caesium, therefore, the picture of Cs+ ions held together by a negatively charged electron gas is not inaccurate. For other elements the electrons are less free, in that they still experience the potential of the metal atoms, sometimes quite strongly. They require a more intricate quantum mechanical treatment (e.g., tight binding) in which the atoms are viewed as neutral, much like the carbon atoms in benzene. For d- and especially f-electrons the delocalization is not strong at all and this explains why these electrons are able to continue behaving as unpaired electrons that retain their spin, adding interesting magnetic properties to these metals.

Electron deficiency and mobility

Metal atoms contain few electrons in their valence shells relative to their periods or energy levels. They are electron-deficient elements and the communal sharing does not change that. There remain far more available energy states than there are shared electrons. Both requirements for conductivity are therefore fulfilled: strong delocalization and partly filled energy bands. Such electrons can therefore easily change from one energy state to a slightly different one. Thus, not only do they become delocalized, forming a sea of electrons permeating the structure, but they are also able to migrate through the structure when an external electrical field is applied, leading to electrical conductivity. Without the field, there are electrons moving equally in all directions. Within such a field, some electrons will adjust their state slightly, adopting a different wave vector. Consequently, there will be more moving one way than another and a net current will result.

The freedom of electrons to migrate also gives metal atoms, or layers of them, the capacity to slide past each other. Locally, bonds can easily be broken and replaced by new ones after a deformation. This process does not affect the communal metallic bonding very much, which gives rise to metals' characteristic malleability and ductility. This is particularly true for pure elements. In the presence of dissolved impurities, the normally easily formed cleavages may be blocked and the material become harder. Gold, for example, is very soft in pure form (24-karat), which is why alloys are preferred in jewelry.

Metals are typically also good conductors of heat, but the conduction electrons only contribute partly to this phenomenon. Collective (i.e., delocalized) vibrations of the atoms, known as phonons, which travel through the solid as waves, are bigger contributors.

However, a substance such as diamond, which conducts heat quite well, is not an electrical conductor. This is not a consequence of delocalization being absent in diamond, but simply that carbon is not electron deficient.

Electron deficiency is important in distinguishing metallic from more conventional covalent bonding. Thus, we should amend the expression given above to: Metallic bonding is an extremely delocalized communal form of electron-deficient covalent bonding.

Metallic radius

The metallic radius is defined as one-half of the distance between two adjacent metal ions in the metallic structure. This radius depends on the nature of the atom as well as its environment—specifically, on the coordination number (CN), which in turn depends on the temperature and applied pressure.

When comparing periodic trends in the size of atoms it is often desirable to apply the so-called Goldschmidt correction, which converts atomic radii to the values the atoms would have if they were 12-coordinated. Since metallic radii are largest for the highest coordination number, correction for less dense coordinations involves multiplying by x, where 0 < x < 1. Specifically, for CN = 4, x = 0.88; for CN = 6, x = 0.96, and for CN = 8, x = 0.97. The correction is named after Victor Goldschmidt who obtained the numerical values quoted above.
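As a small numeric sketch, the Goldschmidt correction can be applied in a few lines. The factors are the ones quoted above; the function names and the example radius are illustrative assumptions, not a standard library API.

```python
# Goldschmidt factors: multiply a 12-coordinate radius by x to get the
# radius at a lower coordination number (values quoted in the text).
GOLDSCHMIDT_FACTOR = {12: 1.00, 8: 0.97, 6: 0.96, 4: 0.88}

def radius_at_cn(radius_12, cn):
    """Radius expected at coordination number `cn`, given the
    12-coordinate radius."""
    return radius_12 * GOLDSCHMIDT_FACTOR[cn]

def radius_cn12(radius, cn):
    """Goldschmidt correction: convert a radius observed at
    coordination number `cn` to the 12-coordinate value."""
    return radius / GOLDSCHMIDT_FACTOR[cn]

# Illustrative only: an atom with a 12-coordinate radius of 1.00
# (arbitrary units) shows a radius of 0.97 in an 8-coordinated structure.
print(radius_at_cn(1.00, 8))   # -> 0.97
```

Dividing by the same factor converts an observed radius back to the 12-coordinate value used when comparing periodic trends.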

The radii follow general periodic trends: they decrease across the period due to the increase in the effective nuclear charge, which is not offset by the increased number of valence electrons; but the radii increase down the group due to an increase in the principal quantum number. Between the 4d and 5d elements, the lanthanide contraction is observed—there is very little increase of the radius down the group due to the presence of poorly shielding f orbitals.

Strength of the bond

The atoms in metals have a strong attractive force between them. Much energy is required to overcome it. Therefore, metals often have high boiling points, with that of tungsten (5828 K) being extremely high. A remarkable exception is the elements of the zinc group: Zn, Cd, and Hg. Their electron configurations end in ...ns2, which resembles a noble gas configuration, like that of helium, more and more when going down the periodic table, because the energy differential to the empty np orbitals becomes larger. These metals are therefore relatively volatile, and are avoided in ultra-high vacuum systems.

Otherwise, metallic bonding can be very strong, even in molten metals, such as gallium. Even though gallium will melt from the heat of one's hand just above room temperature, its boiling point is not far from that of copper. Molten gallium is, therefore, a very nonvolatile liquid, thanks to its strong metallic bonding.

The strong bonding of metals in liquid form demonstrates that the energy of a metallic bond is not highly dependent on the direction of the bond; this lack of bond directionality is a direct consequence of electron delocalization, and is best understood in contrast to the directional bonding of covalent bonds. The energy of a metallic bond is thus mostly a function of the number of electrons which surround the metallic atom, as exemplified by the embedded atom model. This typically results in metals assuming relatively simple, close-packed crystal structures, such as FCC, BCC, and HCP.

Given high enough cooling rates and appropriate alloy composition, metallic bonding can occur even in glasses, which have amorphous structures.

Much biochemistry is mediated by the weak interaction of metal ions and biomolecules. Such interactions, and their associated conformational changes, have been measured using dual polarisation interferometry.

Solubility and compound formation

Metals are insoluble in water or organic solvents, unless they undergo a reaction with them. Typically, this is an oxidation reaction that robs the metal atoms of their itinerant electrons, destroying the metallic bonding. However metals are often readily soluble in each other while retaining the metallic character of their bonding. Gold, for example, dissolves easily in mercury, even at room temperature. Even in solid metals, the solubility can be extensive. If the structures of the two metals are the same, there can even be complete solid solubility, as in the case of electrum, an alloy of silver and gold. At times, however, two metals will form alloys with different structures than either of the two parents. One could call these materials metal compounds. But, because materials with metallic bonding are typically not molecular, Dalton's law of integral proportions is not valid; and often a range of stoichiometric ratios can be achieved. It is better to abandon such concepts as 'pure substance' or 'solute' in such cases and speak of phases instead. The study of such phases has traditionally been more the domain of metallurgy than of chemistry, although the two fields overlap considerably.

Localization and clustering: from bonding to bonds

The metallic bonding in complex compounds does not necessarily involve all constituent elements equally. It is quite possible to have one or more elements that do not partake at all. One could picture the conduction electrons flowing around them like a river around an island or a big rock. It is possible to observe which elements do partake: e.g., by looking at the core levels in an X-ray photoelectron spectroscopy (XPS) spectrum. If an element partakes, its peaks tend to be skewed.

Some intermetallic materials, e.g., do exhibit metal clusters reminiscent of molecules; and these compounds are more a topic of chemistry than of metallurgy. The formation of the clusters could be seen as a way to 'condense out' (localize) the electron-deficient bonding into bonds of a more localized nature. Hydrogen is an extreme example of this form of condensation. At high pressures it is a metal. The core of the planet Jupiter could be said to be held together by a combination of metallic bonding and high pressure induced by gravity. At lower pressures, however, the bonding becomes entirely localized into a regular covalent bond. The localization is so complete that the (more familiar) H2 gas results. A similar argument holds for an element such as boron. Though it is electron-deficient compared to carbon, it does not form a metal. Instead it has a number of complex structures in which icosahedral B12 clusters dominate. Charge density waves are a related phenomenon.

As these phenomena involve the movement of the atoms toward or away from each other, they can be interpreted as the coupling between the electronic and the vibrational states (i.e. the phonons) of the material. A different such electron-phonon interaction is thought to lead to a very different result at low temperatures, that of superconductivity. Rather than blocking the mobility of the charge carriers by forming electron pairs in localized bonds, Cooper-pairs are formed that no longer experience any resistance to their mobility.

Optical properties

The presence of an ocean of mobile charge carriers has profound effects on the optical properties of metals, which can only be understood by considering the electrons as a collective, rather than considering the states of individual electrons involved in more conventional covalent bonds.

Light consists of a combination of an electrical and a magnetic field. The electrical field is usually able to excite an elastic response from the electrons involved in the metallic bonding. The result is that photons cannot penetrate very far into the metal and are typically reflected, although some may also be absorbed. This holds equally for all photons in the visible spectrum, which is why metals are often silvery white or grayish with the characteristic specular reflection of metallic luster. The balance between reflection and absorption determines how white or how gray a metal is, although surface tarnish can obscure the luster. Silver, a metal with high conductivity, is one of the whitest.

Notable exceptions are reddish copper and yellowish gold. The reason for their color is that there is an upper limit to the frequency of the light that metallic electrons can readily respond to: the plasmon frequency. At the plasmon frequency, the frequency-dependent dielectric function of the free electron gas goes from negative (reflecting) to positive (transmitting); higher frequency photons are not reflected at the surface, and do not contribute to the color of the metal. There are some materials, such as indium tin oxide (ITO), that are metallic conductors (actually degenerate semiconductors) for which this threshold is in the infrared, which is why they are transparent in the visible, but good reflectors in the infrared.
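The sign change described here can be sketched with the simplest lossless free-electron (Drude) form of the dielectric function, ε(ω) = 1 − (ωp/ω)². The function names and the arbitrary-unit frequencies below are assumptions for illustration only.

```python
def drude_epsilon(omega, omega_p):
    """Lossless free-electron (Drude) dielectric function:
    eps(omega) = 1 - (omega_p / omega)**2."""
    return 1.0 - (omega_p / omega) ** 2

def reflects(omega, omega_p):
    """True where the electron gas is reflecting (eps < 0),
    i.e. below the plasmon frequency."""
    return drude_epsilon(omega, omega_p) < 0

omega_p = 1.0                    # plasmon frequency, arbitrary units
print(reflects(0.5, omega_p))    # below the plasmon frequency -> True
print(reflects(2.0, omega_p))    # above it -> False (light transmitted)
```

For ITO the crossover frequency ωp sits in the infrared, which is the one-line explanation of its visible transparency.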

For silver the limiting frequency is in the far ultraviolet, but for copper and gold it is closer to the visible. This explains the colors of these two metals. At the surface of a metal, resonance effects known as surface plasmons can result. They are collective oscillations of the conduction electrons, like a ripple in the electronic ocean. However, even if photons have enough energy, they usually do not have enough momentum to set the ripple in motion. Therefore, plasmons are hard to excite on a bulk metal. This is why gold and copper look like lustrous metals albeit with a dash of color. However, in colloidal gold the metallic bonding is confined to a tiny metallic particle, which prevents the oscillation wave of the plasmon from 'running away'. The momentum selection rule is therefore broken, and the plasmon resonance causes an extremely intense absorption in the green, with a resulting purple-red color. Such colors are orders of magnitude more intense than ordinary absorptions seen in dyes and the like, which involve individual electrons and their energy states.

Environmentalist


An environmentalist is a person who is concerned with and/or advocates for the protection of the environment. An environmentalist can be considered a supporter of the goals of the environmental movement, "a political and ethical movement that seeks to improve and protect the quality of the natural environment through changes to environmentally harmful human activities". An environmentalist is engaged in or believes in the philosophy of environmentalism or one of the related philosophies.

The environmental movement has a number of subcommunities, with different approaches and focuses – each developing distinct movements and identities. Environmentalists are sometimes referred to by critics using informal or derogatory terms such as "greenie" and "tree-hugger", and some members of the public associate the most radical environmentalists with these terms.

Types

The environmental movement contains a number of subcommunities that have developed with different approaches and philosophies in different parts of the world. Notably, the early environmental movement experienced a deep tension between the philosophies of conservation and broader environmental protection. In recent decades, the rise to prominence of environmental justice, indigenous rights, and key environmental crises like the climate crisis has led to the development of other environmentalist identities. Environmentalists can be described as one of the following:

Climate activists

The public recognition of the climate crisis and the emergence of the climate movement at the beginning of the 21st century led to a distinct group of activists. Campaigns such as the School Strike for Climate and Fridays for Future have produced a new generation of youth activists, including Greta Thunberg, Jamie Margolin and Vanessa Nakate, who have created a global youth climate movement.

Conservationists

One notable strain of environmentalism comes from the philosophy of the conservation movement. Conservationists are concerned with leaving the environment in a better state than the condition in which they found it, and with preserving areas of nature distinct from human interaction. The conservation movement is associated with the early parts of the environmental movement of the 19th and 20th centuries.

Environmental defenders

Environmental defenders or environmental human rights defenders are individuals or collectives who protect the environment from harms resulting from resource extraction, hazardous waste disposal, infrastructure projects, land appropriation, or other dangers. In 2019, the UN Human Rights Council unanimously recognised their importance to environmental protection. The term environmental defender is broadly applied to a diverse range of environmental groups and leaders from different cultures that all employ different tactics and hold different agendas. Use of the term is contested, as it homogenises such a wide range of groups and campaigns, many of whom do not self-identify with the term and may not have explicit aims to protect the environment (being motivated primarily by social justice concerns).

Environmental defenders involved in environmental conflicts face a wide range of threats from governments, local elites, and other powers that benefit from projects that defenders oppose. Global Witness reported 1,922 murders of environmental defenders in 57 countries between 2002 and 2019, with indigenous people accounting for approximately one third of this total. Documentation of this violence is also incomplete. The UN Special Rapporteur on human rights reported that as many as one hundred environmental defenders are intimidated, arrested or otherwise harassed for every one that is killed.

Greens

The adoption of environmentalism as a distinct political ideology led to the development of political parties called "green parties", typically with a leftist political approach to overlapping issues of environmental and social wellbeing.

Water protectors

Oceti Sakowin encampment at the Dakota Access Pipeline protest camps in North Dakota

Water protectors marching in Seattle

Water protectors are activists, organizers, and cultural workers focused on the defense of the world's water and water systems. The water protector name, analysis and style of activism arose from Indigenous communities in North America during the Dakota Access Pipeline protests at the Standing Rock Indian Reservation, which began with an encampment on LaDonna Brave Bull Allard's land in April, 2016.

Water protectors are similar to land defenders, but are distinguished from other environmental activists by this philosophy and approach that is rooted in an indigenous cultural perspective that sees water and the land as sacred. This relationship with water moves beyond simply having access to clean drinking water, and comes from the beliefs that water is necessary for life and that water is a relative and therefore it must be treated with respect. As such, the reasons for protection of water are older, more holistic, and integrated into a larger cultural and spiritual whole than in most modern forms of environmental activism, which may be more based in seeing water and other extractive resources as commodities.

Historically, water protectors have been led by or composed of women, because as water provides life, so do women.

Notable environmentalists

Sir David Attenborough in May 2003

Al Gore, 2007

Hakob Sanasaryan campaigning against illegal construction of a new ore-processing facility in Sotk, 2011

Kevin Buzzacott (Aboriginal activist) in Adelaide 2014

Some of the notable environmentalists who have been active in lobbying for environmental protection and conservation include:

Extension

In recent years, there are not only environmentalists for the natural environment but also environmentalists for the human environment. For instance, the activists who call for "mental green space" by getting rid of the disadvantages of the internet, cable TV, and smartphones have been called "info-environmentalists".

Fourier analysis

Bass guitar time signal of open string A note (55 Hz).
 
Fourier transform of bass guitar time signal of open string A note (55 Hz). Fourier analysis reveals the oscillatory components of signals and functions.

In mathematics, Fourier analysis is the study of the way general functions may be represented or approximated by sums of simpler trigonometric functions. Fourier analysis grew from the study of Fourier series, and is named after Joseph Fourier, who showed that representing a function as a sum of trigonometric functions greatly simplifies the study of heat transfer.

The subject of Fourier analysis encompasses a vast spectrum of mathematics. In the sciences and engineering, the process of decomposing a function into oscillatory components is often called Fourier analysis, while the operation of rebuilding the function from these pieces is known as Fourier synthesis. For example, determining what component frequencies are present in a musical note would involve computing the Fourier transform of a sampled musical note. One could then re-synthesize the same sound by including the frequency components as revealed in the Fourier analysis. In mathematics, the term Fourier analysis often refers to the study of both operations.
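The analysis/synthesis round trip described above can be sketched with a plain discrete Fourier transform in pure Python. The synthetic two-component "note", the sample rate, and the peak threshold below are illustrative assumptions, not data from the figures.

```python
import cmath
import math

RATE = 440   # samples per second (chosen so 55 Hz falls exactly on a bin)
N = 88       # 0.2 s of samples

# Synthetic "note": a 55 Hz fundamental plus a weaker 110 Hz harmonic.
note = [math.sin(2 * math.pi * 55 * n / RATE)
        + 0.5 * math.sin(2 * math.pi * 110 * n / RATE)
        for n in range(N)]

def dft(x):
    """Plain O(N^2) discrete Fourier transform (analysis)."""
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * math.pi * k * n / N) for n in range(N))
            for k in range(N)]

def idft(X):
    """Inverse DFT (synthesis): rebuild the signal from its spectrum."""
    N = len(X)
    return [sum(X[k] * cmath.exp(2j * math.pi * k * n / N)
                for k in range(N)).real / N
            for n in range(N)]

spectrum = dft(note)
# Component frequencies: bins below the Nyquist bin with significant magnitude.
peaks = [k * RATE / N for k in range(N // 2) if abs(spectrum[k]) > 1.0]
print(peaks)                 # -> [55.0, 110.0]

rebuilt = idft(spectrum)     # Fourier synthesis recovers the original note
```

Determining the component frequencies is the analysis step; `idft` is the synthesis step that re-creates the same sound from those components.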

The decomposition process itself is called a Fourier transformation. Its output, the Fourier transform, is often given a more specific name, which depends on the domain and other properties of the function being transformed. Moreover, the original concept of Fourier analysis has been extended over time to apply to more and more abstract and general situations, and the general field is often known as harmonic analysis. Each transform used for analysis (see list of Fourier-related transforms) has a corresponding inverse transform that can be used for synthesis.

To use Fourier analysis, data must be equally spaced. Different approaches have been developed for analyzing unequally spaced data, notably the least-squares spectral analysis (LSSA) methods that use a least squares fit of sinusoids to data samples, similar to Fourier analysis. Fourier analysis, the most used spectral method in science, generally boosts long-periodic noise in long gapped records; LSSA mitigates such problems.
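A minimal sketch of the least-squares idea behind LSSA follows: at each trial frequency, fit a·cos + b·sin to the unevenly spaced samples by ordinary least squares and use the fitted power a² + b² as the spectrum. This is a simplification (a real Lomb–Scargle implementation adds a per-frequency time offset), and the sample times are illustrative assumptions.

```python
import math

def ls_power(t, y, freq):
    """Least-squares power of a sinusoid at `freq` fitted to samples
    y taken at (possibly unevenly spaced) times t."""
    w = 2 * math.pi * freq
    c = [math.cos(w * ti) for ti in t]
    s = [math.sin(w * ti) for ti in t]
    # Normal equations for the 2x2 system y ~ a*cos + b*sin.
    cc = sum(ci * ci for ci in c)
    ss = sum(si * si for si in s)
    cs = sum(ci * si for ci, si in zip(c, s))
    yc = sum(yi * ci for yi, ci in zip(y, c))
    ys = sum(yi * si for yi, si in zip(y, s))
    det = cc * ss - cs * cs
    a = (yc * ss - ys * cs) / det
    b = (ys * cc - yc * cs) / det
    return a * a + b * b

# Unevenly spaced samples of a unit-amplitude 2 Hz sinusoid.
t = [0.00, 0.07, 0.11, 0.21, 0.33, 0.38, 0.52, 0.61, 0.74, 0.88, 0.95]
y = [math.sin(2 * math.pi * 2 * ti) for ti in t]

print(round(ls_power(t, y, 2.0), 6))   # -> 1.0 (exact fit at the true frequency)
```

Scanning `ls_power` over a grid of trial frequencies yields a spectrum analogous to a Fourier periodogram, without requiring equal spacing.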

Applications

Fourier analysis has many scientific applications – in physics, partial differential equations, number theory, combinatorics, signal processing, digital image processing, probability theory, statistics, forensics, option pricing, cryptography, numerical analysis, acoustics, oceanography, sonar, optics, diffraction, geometry, protein structure analysis, and other areas.

This wide applicability stems from many useful properties of the transforms:

In forensics, laboratory infrared spectrophotometers use Fourier transform analysis to measure the wavelengths of light at which a material absorbs in the infrared spectrum. The FT method is used to decode the measured signals and record the wavelength data. Because a computer carries out these Fourier calculations rapidly, a computer-operated FT-IR instrument can produce an infrared absorption pattern, comparable to that of a prism instrument, in a matter of seconds.

Fourier transformation is also useful as a compact representation of a signal. For example, JPEG compression uses a variant of the Fourier transformation (discrete cosine transform) of small square pieces of a digital image. The Fourier components of each square are rounded to lower arithmetic precision, and weak components are eliminated entirely, so that the remaining components can be stored very compactly. In image reconstruction, each image square is reassembled from the preserved approximate Fourier-transformed components, which are then inverse-transformed to produce an approximation of the original image.
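The rounding-and-discarding step can be sketched in one dimension with a DCT-II and its inverse. The 8-sample block and the quantization step below are illustrative assumptions, not JPEG's actual quantization tables (JPEG also works on 8×8 two-dimensional blocks).

```python
import math

def dct(x):
    """DCT-II of a small block."""
    N = len(x)
    return [sum(x[n] * math.cos(math.pi / N * (n + 0.5) * k)
                for n in range(N))
            for k in range(N)]

def idct(X):
    """Inverse of dct() above (a scaled DCT-III)."""
    N = len(X)
    return [(X[0] / 2 + sum(X[k] * math.cos(math.pi / N * (n + 0.5) * k)
                            for k in range(1, N))) * 2 / N
            for n in range(N)]

block = [52, 55, 61, 66, 70, 61, 64, 73]    # one row of pixel values
coeffs = dct(block)

step = 10.0                                  # coarse quantization step
quantized = [round(c / step) * step for c in coeffs]   # lower precision
kept = sum(1 for c in quantized if c != 0)   # weak components become zero

approx = idct(quantized)   # reconstruction resembles the original block
```

Only the `kept` nonzero coefficients need to be stored; inverse-transforming them reproduces the block to within the quantization error.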

In signal processing, the Fourier transform often takes a time series or a function of continuous time, and maps it into a frequency spectrum. That is, it takes a function from the time domain into the frequency domain; it is a decomposition of a function into sinusoids of different frequencies; in the case of a Fourier series or discrete Fourier transform, the sinusoids are harmonics of the fundamental frequency of the function being analyzed.

When a function s(t) is a function of time and represents a physical signal, the transform has a standard interpretation as the frequency spectrum of the signal. The magnitude of the resulting complex-valued function S(f) at frequency f represents the amplitude of a frequency component whose initial phase is given by the angle of S(f) (polar coordinates).

Fourier transforms are not limited to functions of time, and temporal frequencies. They can equally be applied to analyze spatial frequencies, and indeed for nearly any function domain. This justifies their use in such diverse branches as image processing, heat conduction, and automatic control.

When processing signals, such as audio, radio waves, light waves, seismic waves, and even images, Fourier analysis can isolate narrowband components of a compound waveform, concentrating them for easier detection or removal. A large family of signal processing techniques consist of Fourier-transforming a signal, manipulating the Fourier-transformed data in a simple way, and reversing the transformation.
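The transform–manipulate–invert pattern can be sketched as removing a narrowband "hum" by zeroing its DFT bins. The signal, sizes, and bin choices below are illustrative assumptions.

```python
import cmath
import math

N = 64
# Wanted 3-cycle tone plus an unwanted 10-cycle "hum".
signal = [math.sin(2 * math.pi * 3 * n / N)
          + 0.8 * math.sin(2 * math.pi * 10 * n / N)
          for n in range(N)]

def dft(x):
    """Plain O(N^2) discrete Fourier transform."""
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * math.pi * k * n / N) for n in range(N))
            for k in range(N)]

def idft(X):
    """Inverse DFT."""
    N = len(X)
    return [sum(X[k] * cmath.exp(2j * math.pi * k * n / N)
                for k in range(N)).real / N
            for n in range(N)]

X = dft(signal)
X[10] = X[N - 10] = 0    # zero the hum's bin and its conjugate bin
clean = idft(X)          # close to the pure 3-cycle tone
```

Because the hum is concentrated in two bins, zeroing them removes it entirely while leaving the wanted tone untouched; real filters taper the bins rather than zeroing them abruptly.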

Some examples include:

Variants of Fourier analysis

A Fourier transform and 3 variations caused by periodic sampling (at interval T) and/or periodic summation (at interval P) of the underlying time-domain function. The relative computational ease of the DFT sequence and the insight it gives into S(f) make it a popular analysis tool.

(Continuous) Fourier transform

Most often, the unqualified term Fourier transform refers to the transform of functions of a continuous real argument, and it produces a continuous function of frequency, known as a frequency distribution. One function is transformed into another, and the operation is reversible. When the domain of the input (initial) function is time (t), and the domain of the output (final) function is ordinary frequency, the transform of function s(t) at frequency f is given by the complex number:

Evaluating this quantity for all values of f produces the frequency-domain function. Then s(t) can be represented as a recombination of complex exponentials of all possible frequencies:

$$s(t) = \int_{-\infty}^{\infty} S(f)\; e^{i 2\pi f t}\, df,$$

which is the inverse transform formula. The complex number, S(f), conveys both amplitude and phase of frequency f.
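The forward integral can be approximated numerically with a Riemann sum. This hedged sketch uses the Gaussian s(t) = exp(−πt²), whose transform is known in closed form to be S(f) = exp(−πf²); the grid spacing and integration limits are illustrative choices.

```python
import numpy as np

# Riemann-sum approximation of S(f) = integral of s(t) exp(-i 2 pi f t) dt,
# for the Gaussian s(t) = exp(-pi t^2), whose exact transform is exp(-pi f^2).
dt = 0.001
t = np.arange(-10, 10, dt)
s = np.exp(-np.pi * t**2)

def fourier_transform(f):
    """Approximate the continuous Fourier transform at frequency f."""
    return np.sum(s * np.exp(-2j * np.pi * f * t)) * dt
```

Because the integrand decays rapidly and the grid is fine, the sum agrees with the closed form to high accuracy.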

See Fourier transform for much more information, including:

  • conventions for amplitude normalization and frequency scaling/units
  • transform properties
  • tabulated transforms of specific functions
  • an extension/generalization for functions of multiple dimensions, such as images.

Fourier series

The Fourier transform of a periodic function, sP(t), with period P, becomes a Dirac comb function, modulated by a sequence of complex coefficients:

$$S[k] = \frac{1}{P} \int_P s_P(t)\; e^{-i 2\pi \frac{k}{P} t}\, dt, \qquad k \in \mathbb{Z}$$

(where $\int_P$ is the integral over any interval of length P).

The inverse transform, known as Fourier series, is a representation of sP(t) in terms of a summation of a potentially infinite number of harmonically related sinusoids or complex exponential functions, each with an amplitude and phase specified by one of the coefficients:

$$s_P(t) = \sum_{k=-\infty}^{\infty} S[k]\; e^{i 2\pi \frac{k}{P} t}.$$

Any sP(t) can be expressed as a periodic summation of another function, s(t):

$$s_P(t) \;\triangleq\; \sum_{m=-\infty}^{\infty} s(t - mP),$$

and the coefficients are proportional to samples of S(f) at discrete intervals of 1/P:

$$S[k] = \frac{1}{P}\, S\!\left(\frac{k}{P}\right).$$

Note that any s(t) whose transform has the same discrete sample values can be used in the periodic summation. A sufficient condition for recovering s(t) (and therefore S(f)) from just these samples (i.e. from the Fourier series) is that the non-zero portion of s(t) be confined to a known interval of duration P, which is the frequency domain dual of the Nyquist–Shannon sampling theorem.
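The coefficient formula can be checked numerically. This sketch (an illustration with an assumed test function) computes Fourier series coefficients of a pure cosine by numerical integration over one period; only the k = ±1 coefficients should be nonzero, each equal to 1/2.

```python
import numpy as np

# Fourier series coefficients S[k] = (1/P) * integral over one period of
# sP(t) exp(-i 2 pi k t / P) dt, for the test function sP(t) = cos(2 pi t / P).
P = 2.0
t = np.linspace(0, P, 10000, endpoint=False)
sP = np.cos(2 * np.pi * t / P)

def coefficient(k):
    # The mean over one period equals (1/P) times the integral.
    return np.mean(sP * np.exp(-2j * np.pi * k * t / P))
```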

See Fourier series for more information, including the historical development.

Discrete-time Fourier transform (DTFT)

The DTFT is the mathematical dual of the time-domain Fourier series. Thus, a convergent periodic summation in the frequency domain can be represented by a Fourier series, whose coefficients are samples of a related continuous time function:

$$S_{1/T}(f) \;\triangleq\; \sum_{k=-\infty}^{\infty} S\!\left(f - \frac{k}{T}\right) \;=\; \sum_{n=-\infty}^{\infty} s[n]\; e^{-i 2\pi f n T}, \qquad s[n] \triangleq T\, s(nT),$$

which is known as the DTFT. Thus the DTFT of the s[n] sequence is also the Fourier transform of the modulated Dirac comb function.

The Fourier series coefficients (and inverse transform) are defined by:

$$s[n] \;\triangleq\; T \int_{1/T} S_{1/T}(f)\; e^{i 2\pi f n T}\, df \;=\; T\, s(nT).$$

Parameter T corresponds to the sampling interval, and this Fourier series can now be recognized as a form of the Poisson summation formula. Thus we have the important result that when a discrete data sequence, s[n], is proportional to samples of an underlying continuous function, s(t), one can observe a periodic summation of the continuous Fourier transform, S(f). Note that any s(t) with the same discrete sample values produces the same DTFT. But under certain idealized conditions one can theoretically recover S(f) and s(t) exactly. A sufficient condition for perfect recovery is that the non-zero portion of S(f) be confined to a known frequency interval of width 1/T. When that interval is [−1/2T, 1/2T], the applicable reconstruction formula is the Whittaker–Shannon interpolation formula. This is a cornerstone in the foundation of digital signal processing.
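The Whittaker–Shannon formula reconstructs a bandlimited signal from its samples as a sum of shifted sinc kernels. In this sketch the band limit, sampling interval, and test tone are illustrative assumptions; np.sinc is the normalized sinc, sin(πx)/(πx), which is the interpolation kernel the formula calls for.

```python
import numpy as np

# Whittaker–Shannon interpolation: s(t) = sum_n s[n] * sinc((t - nT)/T),
# truncated to a finite number of terms for this illustration.
T = 0.1                                   # sampling interval; band limit 1/(2T) = 5 Hz
n = np.arange(-200, 201)
samples = np.sin(2 * np.pi * 2 * n * T)   # 2 Hz sine, well inside the band

def reconstruct(t):
    return np.sum(samples * np.sinc((t - n * T) / T))
```

At sample instants the sum collapses to the stored sample; between samples the truncated series is accurate to roughly the reciprocal of the number of retained terms.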

Another reason to be interested in S1/T(f) is that it often provides insight into the amount of aliasing caused by the sampling process.
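Aliasing itself is easy to demonstrate: two tones whose frequencies fold across the sampling rate produce identical sample sequences. The specific frequencies below are illustrative.

```python
import numpy as np

# Sampling a 60 Hz cosine at fs = 100 Hz makes it indistinguishable from a
# 40 Hz cosine, because 60 = 100 - 40 folds back below the Nyquist frequency.
fs = 100.0
n = np.arange(64)
tone60 = np.cos(2 * np.pi * 60 * n / fs)
tone40 = np.cos(2 * np.pi * 40 * n / fs)
# The two sample sequences are identical.
```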

Applications of the DTFT are not limited to sampled functions. See Discrete-time Fourier transform for more information on this and other topics, including:

  • normalized frequency units
  • windowing (finite-length sequences)
  • transform properties
  • tabulated transforms of specific functions

Discrete Fourier transform (DFT)

Similar to a Fourier series, the DTFT of a periodic sequence, sN[n], with period N, becomes a Dirac comb function, modulated by a sequence of complex coefficients (see DTFT § Periodic data):

$$S[k] = \sum_n s_N[n]\; e^{-i 2\pi \frac{k}{N} n}, \qquad k \in \mathbb{Z},$$

where Σn is the sum over any n-sequence of length N.

The S[k] sequence is what is customarily known as the DFT of one cycle of sN. It is also N-periodic, so it is never necessary to compute more than N coefficients. The inverse transform, also known as a discrete Fourier series, is given by:

$$s_N[n] = \frac{1}{N} \sum_k S[k]\; e^{i 2\pi \frac{n}{N} k},$$

where Σk is the sum over any k-sequence of length N.

When sN[n] is expressed as a periodic summation of another function:

$$s_N[n] \;\triangleq\; \sum_{m=-\infty}^{\infty} s[n - mN], \qquad s[n] \;\triangleq\; T\, s(nT),$$

the coefficients are proportional to samples of S1/T(f) at discrete intervals of 1/P = 1/NT:

$$S[k] = S_{1/T}\!\left(\frac{k}{P}\right).$$

Conversely, when one wants to compute an arbitrary number (N) of discrete samples of one cycle of a continuous DTFT, S1/T(f), it can be done by computing the relatively simple DFT of sN[n], as defined above. In most cases, N is chosen equal to the length of the non-zero portion of s[n]. Increasing N, known as zero-padding or interpolation, results in more closely spaced samples of one cycle of S1/T(f). Decreasing N causes overlap (adding) in the time domain (analogous to aliasing), which corresponds to decimation in the frequency domain (see Discrete-time Fourier transform § L=N×I). In most cases of practical interest, the s[n] sequence represents a longer sequence that was truncated by the application of a finite-length window function or FIR filter array.
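The zero-padding claim can be verified directly: a length-8 DFT of a zero-padded length-4 sequence samples the same underlying DTFT at twice the density, so every second sample reproduces the length-4 DFT. The sequence used below is an arbitrary example.

```python
import numpy as np

# Zero-padding sketch: both DFTs sample the same DTFT of s[n],
# the longer one just samples it more densely.
s = np.array([1.0, 2.0, 3.0, 4.0])
S8 = np.fft.fft(s, n=8)   # N = 8: zero-padded DFT
S4 = np.fft.fft(s, n=4)   # N = 4: original DFT
# S8[0], S8[2], S8[4], S8[6] coincide with S4[0..3].
```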

The DFT can be computed using a fast Fourier transform (FFT) algorithm, which makes it a practical and important transformation on computers.
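As a sanity check of the definition, a direct O(N²) DFT can be compared against an FFT implementation; this sketch uses NumPy and an arbitrary test vector.

```python
import numpy as np

def naive_dft(s):
    """Direct O(N^2) evaluation of S[k] = sum_n s[n] exp(-i 2 pi k n / N)."""
    N = len(s)
    n = np.arange(N)
    k = n.reshape(-1, 1)
    return np.sum(s * np.exp(-2j * np.pi * k * n / N), axis=1)

x = np.array([1.0, 2.0, 0.0, -1.0])
# naive_dft(x) matches np.fft.fft(x), which computes the same sum in O(N log N).
```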

See Discrete Fourier transform for much more information, including:

  • transform properties
  • applications
  • tabulated transforms of specific functions

Summary

For periodic functions, both the Fourier transform and the DTFT comprise only a discrete set of frequency components (Fourier series), and the transforms diverge at those frequencies. One common practice (not discussed above) is to handle that divergence via Dirac delta and Dirac comb functions. But the same spectral information can be discerned from just one cycle of the periodic function, since all the other cycles are identical. Similarly, finite-duration functions can be represented as a Fourier series, with no actual loss of information except that the periodicity of the inverse transform is a mere artifact.

It is common in practice for the duration of s(•) to be limited to the period P or N, but these formulas do not require that condition.

s(t) transforms (continuous-time)

  • Transform, continuous frequency: $S(f) = \int_{-\infty}^{\infty} s(t)\; e^{-i 2\pi f t}\, dt$
  • Transform, discrete frequencies: $S[k] = \frac{1}{P} \int_P s_P(t)\; e^{-i 2\pi \frac{k}{P} t}\, dt$
  • Inverse, continuous frequency: $s(t) = \int_{-\infty}^{\infty} S(f)\; e^{i 2\pi f t}\, df$
  • Inverse, discrete frequencies: $s_P(t) = \sum_{k=-\infty}^{\infty} S[k]\; e^{i 2\pi \frac{k}{P} t}$

s(nT) transforms (discrete-time)

  • Transform, continuous frequency (DTFT): $S_{1/T}(f) = \sum_{n=-\infty}^{\infty} s[n]\; e^{-i 2\pi f n T}$
  • Transform, discrete frequencies (DFT): $S[k] = \sum_n s_N[n]\; e^{-i 2\pi \frac{k}{N} n}$
  • Inverse, continuous frequency: $s[n] = T \int_{1/T} S_{1/T}(f)\; e^{i 2\pi f n T}\, df$
  • Inverse, discrete frequencies: $s_N[n] = \frac{1}{N} \sum_k S[k]\; e^{i 2\pi \frac{n}{N} k}$
Symmetry properties

When the real and imaginary parts of a complex function are decomposed into their even and odd parts, there are four components, denoted below by the subscripts RE, RO, IE, and IO. And there is a one-to-one mapping between the four components of a complex time function and the four components of its complex frequency transform:

$$\begin{aligned} \text{Time domain:} \quad & s = s_{RE} + s_{RO} + i\, s_{IE} + i\, s_{IO} \\ \text{Frequency domain:} \quad & S = S_{RE} + i\, S_{IO} + i\, S_{IE} + S_{RO} \end{aligned}$$

where each time-domain component transforms to the frequency-domain component written directly below it.

From this, various relationships are apparent, for example:

  • The transform of a real-valued function (sRE + sRO) is the conjugate symmetric function SRE + i SIO. Conversely, a conjugate symmetric transform implies a real-valued time domain.
  • The transform of an imaginary-valued function (i sIE + i sIO) is the conjugate antisymmetric function SRO + i SIE, and the converse is true.
  • The transform of a conjugate symmetric function (sRE + i sIO) is the real-valued function SRE + SRO, and the converse is true.
  • The transform of a conjugate antisymmetric function (sRO + i sIE) is the imaginary-valued function i SIE + i SIO, and the converse is true.
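The first relationship (a real-valued signal has a Hermitian, i.e. conjugate-symmetric, transform satisfying S(−f) = conj(S(f))) can be checked numerically; the random test signal below is an illustration.

```python
import numpy as np

# A real-valued signal has a Hermitian transform: even real part, odd imaginary part.
rng = np.random.default_rng(0)
x = rng.standard_normal(16)        # arbitrary real-valued time-domain signal
X = np.fft.fft(x)

# In a length-N DFT, frequency -k corresponds to index N - k,
# so Hermitian symmetry reads X[k] == conj(X[N - k]) for k = 1..N-1.
hermitian = np.allclose(X[1:], np.conj(X[1:][::-1]))
```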

History

An early form of harmonic series dates back to ancient Babylonian mathematics, where they were used to compute ephemerides (tables of astronomical positions).

The Classical Greek concepts of deferent and epicycle in the Ptolemaic system of astronomy were related to Fourier series (see Deferent and epicycle § Mathematical formalism).

In modern times, variants of the discrete Fourier transform were used by Alexis Clairaut in 1754 to compute an orbit, which has been described as the first formula for the DFT, and in 1759 by Joseph Louis Lagrange, in computing the coefficients of a trigonometric series for a vibrating string. Technically, Clairaut's work was a cosine-only series (a form of discrete cosine transform), while Lagrange's work was a sine-only series (a form of discrete sine transform); a true cosine+sine DFT was used by Gauss in 1805 for trigonometric interpolation of asteroid orbits. Euler and Lagrange both discretized the vibrating string problem, using what would today be called samples.

An early modern development toward Fourier analysis was the 1770 paper Réflexions sur la résolution algébrique des équations by Lagrange, which in the method of Lagrange resolvents used a complex Fourier decomposition to study the solution of a cubic: Lagrange transformed the roots x1, x2, x3 into the resolvents:

$$r_1 = x_1 + x_2 + x_3, \qquad r_2 = x_1 + \zeta x_2 + \zeta^2 x_3, \qquad r_3 = x_1 + \zeta^2 x_2 + \zeta x_3,$$

where ζ is a cubic root of unity, which is the DFT of order 3.

A number of authors, notably Jean le Rond d'Alembert, and Carl Friedrich Gauss used trigonometric series to study the heat equation, but the breakthrough development was the 1807 paper Mémoire sur la propagation de la chaleur dans les corps solides by Joseph Fourier, whose crucial insight was to model all functions by trigonometric series, introducing the Fourier series.

Historians are divided as to how much to credit Lagrange and others for the development of Fourier theory: Daniel Bernoulli and Leonhard Euler had introduced trigonometric representations of functions, and Lagrange had given the Fourier series solution to the wave equation, so Fourier's contribution was mainly the bold claim that an arbitrary function could be represented by a Fourier series.

The subsequent development of the field is known as harmonic analysis, and is also an early instance of representation theory.

The first fast Fourier transform (FFT) algorithm for the DFT was discovered around 1805 by Carl Friedrich Gauss when interpolating measurements of the orbit of the asteroids Juno and Pallas, although that particular FFT algorithm is more often attributed to its modern rediscoverers Cooley and Tukey.

Time–frequency transforms

In signal processing terms, a function (of time) is a representation of a signal with perfect time resolution, but no frequency information, while the Fourier transform has perfect frequency resolution, but no time information.

As alternatives to the Fourier transform, in time–frequency analysis, one uses time–frequency transforms to represent signals in a form that has some time information and some frequency information – by the uncertainty principle, there is a trade-off between these. These can be generalizations of the Fourier transform, such as the short-time Fourier transform, the Gabor transform or fractional Fourier transform (FRFT), or can use different functions to represent signals, as in wavelet transforms and chirplet transforms, with the wavelet analog of the (continuous) Fourier transform being the continuous wavelet transform.

Fourier transforms on arbitrary locally compact abelian topological groups

The Fourier variants can also be generalized to Fourier transforms on arbitrary locally compact Abelian topological groups, which are studied in harmonic analysis; there, the Fourier transform takes functions on a group to functions on the dual group. This treatment also allows a general formulation of the convolution theorem, which relates Fourier transforms and convolutions. See also the Pontryagin duality for the generalized underpinnings of the Fourier transform.

More specifically, Fourier analysis can be done on cosets, even discrete cosets.
