Tuesday, May 13, 2025

Friedmann equations

From Wikipedia, the free encyclopedia
 
The Friedmann equations, also known as the Friedmann–Lemaître (FL) equations, are a set of equations in physical cosmology that govern cosmic expansion in homogeneous and isotropic models of the universe within the context of general relativity. They were first derived by Alexander Friedmann in 1922 from Einstein's field equations of gravitation for the Friedmann–Lemaître–Robertson–Walker metric and a perfect fluid with a given mass density ρ and pressure p. The equations for negative spatial curvature were given by Friedmann in 1924. The physical models built on the Friedmann equations are called FRW or FLRW models and form the standard model of modern cosmology, although such a description is also associated with the further developed Lambda-CDM model. The FLRW model was developed independently by the named authors in the 1920s and 1930s.

Assumptions

The Friedmann equations build on three assumptions:

  1. the Friedmann–Lemaître–Robertson–Walker metric,
  2. Einstein's equations for general relativity, and
  3. a perfect fluid source.

The metric in turn starts with the simplifying assumption that the universe is spatially homogeneous and isotropic, that is, the cosmological principle; empirically, this is justified on scales larger than the order of 100 Mpc.

The metric can be written as

ds² = −dt² + R(t)² [ dr²/(1 − kr²) + r²(dθ² + sin²θ dφ²) ],

where k takes one of the values 0, +1 or −1. These three possibilities correspond to (0) flat space, (+1) a sphere of constant positive curvature or (−1) a hyperbolic space with constant negative curvature. Here the radial position has been decomposed into a time-dependent scale factor, R(t), and a comoving coordinate, r. Inserting this metric into Einstein's field equations relates the evolution of the scale factor to the pressure and energy of the matter in the universe. With the stress–energy tensor of a perfect fluid, the result is the equations described below.

Equations

There are two independent Friedmann equations for modelling a homogeneous, isotropic universe. The first is

(Ṙ/R)² = 8πGρ/3 − k/R² + Λ/3,

and the second is

R̈/R = −(4πG/3)(ρ + 3p) + Λ/3.

The term Friedmann equation is sometimes used only for the first equation. In these equations, R(t) is the cosmological scale factor, G is the Newtonian constant of gravitation, Λ is the cosmological constant with dimension length⁻², ρ is the energy density and p is the isotropic pressure. The curvature parameter k is constant throughout a particular solution, but may vary from one solution to another. The units set the speed of light in vacuum to one.

In the previous equations, R, ρ, and p are functions of time. If the cosmological constant Λ is ignored, the first Friedmann equation can be rearranged as

Ṙ²/2 − (4πG/3)ρR² = −k/2,

so the term −k/2 can be interpreted as a Newtonian total energy: the evolution of the universe pits gravitational potential energy, −(4πG/3)ρR², against kinetic energy, Ṙ²/2. The winner depends upon the value of k in the total energy: if k is +1, the total energy is negative and gravity eventually causes the universe to contract. These conclusions are altered if Λ is not zero.

Using the first equation, the second equation can be re-expressed as

ρ̇ = −3(Ṙ/R)(ρ + p),

which eliminates Λ. Alternatively, the conservation of mass–energy (the vanishing covariant divergence of the stress–energy tensor) leads to the same result.
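This conservation relation can be checked numerically with a quick sketch, using the pressureless (p = 0) matter solution ρ ∝ a⁻³ with a ∝ t^(2/3) in arbitrary units (an illustrative normalization, not tied to physical values):

```python
# Numerical sanity check of the continuity equation d(rho)/dt = -3 H (rho + p)
# for a pressureless (p = 0) matter-dominated universe, where a(t) ~ t^(2/3)
# and rho ~ a^(-3).  Units are arbitrary (rho = 1 at t = 1).

def a(t):            # scale factor under matter domination
    return t ** (2.0 / 3.0)

def rho(t):          # rho proportional to a^-3, i.e. t^-2
    return a(t) ** -3

def H(t):            # Hubble parameter H = (da/dt)/a = 2/(3t)
    return 2.0 / (3.0 * t)

t, dt = 1.0, 1e-6
lhs = (rho(t + dt) - rho(t - dt)) / (2 * dt)   # d(rho)/dt by central difference
rhs = -3.0 * H(t) * (rho(t) + 0.0)             # p = 0 for dust
assert abs(lhs - rhs) < 1e-6
print(lhs, rhs)   # both are approximately -2.0 at t = 1
```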

Spatial curvature

The first Friedmann equation contains a discrete parameter k = +1, 0 or −1 depending on whether the shape of the universe is a closed 3-sphere, flat (Euclidean space) or an open 3-hyperboloid, respectively. If k is positive, then the universe is "closed": some paths through the universe eventually return to their starting point. Such a universe is analogous to a sphere: finite but unbounded. If k is negative, then the universe is "open": infinite, and no paths return. If k = 0, then the universe is Euclidean (flat) and infinite.

Dimensionless scale factor

A dimensionless scale factor can be defined,

a(t) ≡ R(t)/R₀,

using the present-day value R₀ ≡ R(t₀), so that a(t₀) = 1. The Friedmann equations can be written in terms of this dimensionless scale factor; for example, the first becomes

(ȧ/a)² = 8πGρ/3 − k/(R₀²a²) + Λ/3,

where ȧ ≡ da/dt and H ≡ ȧ/a is the Hubble parameter.

Critical density

The value of the mass–energy density that gives k = 0 when Λ = 0 is called the critical density:

ρc = 3H²/(8πG).

If the universe has a higher density, ρ > ρc, then it is called "spatially closed": in this simple approximation the universe would eventually contract. On the other hand, if it has a lower density, ρ < ρc, then it is called "spatially open" and expands forever. Therefore the geometry of the universe is directly connected to its density.

Density parameter

The density parameter Ω is defined as the ratio of the actual (or observed) density ρ to the critical density ρc of the Friedmann universe:

Ω ≡ ρ/ρc = 8πGρ/(3H²).

Both the density and the Hubble parameter depend upon time, and thus the density parameter varies with time.

The critical density is equivalent to approximately five atoms (of monatomic hydrogen) per cubic metre, whereas the average density of ordinary matter in the Universe is believed to be 0.2–0.25 atoms per cubic metre.
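These figures follow directly from ρc = 3H₀²/(8πG); a short computation, assuming a round value H₀ = 70 km/s/Mpc:

```python
import math

# Critical density rho_c = 3 H0^2 / (8 pi G), then expressed as hydrogen
# atoms per cubic metre.  H0 = 70 km/s/Mpc is an assumed round value.
G   = 6.674e-11           # gravitational constant, m^3 kg^-1 s^-2
Mpc = 3.0857e22           # one megaparsec in metres
m_H = 1.6735e-27          # mass of a hydrogen atom, kg

H0 = 70e3 / Mpc           # Hubble constant in s^-1
rho_c = 3 * H0**2 / (8 * math.pi * G)
atoms_per_m3 = rho_c / m_H

print(f"rho_c = {rho_c:.3e} kg/m^3")        # ~9.2e-27 kg/m^3
print(f"= {atoms_per_m3:.1f} H atoms/m^3")  # roughly 5 atoms per cubic metre
```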

Estimated relative distribution for components of the energy density of the universe. Dark energy dominates the total energy (74%) while dark matter (22%) constitutes most of the mass. Of the remaining baryonic matter (4%), only one tenth is compact. In February 2015, the European-led research team behind the Planck cosmology probe released new data refining these values to 4.9% ordinary matter, 25.9% dark matter and 69.1% dark energy.

A much greater density comes from the unidentified dark matter, although both ordinary and dark matter contribute in favour of contraction of the universe. However, the largest part comes from so-called dark energy, which accounts for the cosmological constant term. Although the total density is equal to the critical density (exactly, up to measurement error), dark energy does not lead to contraction of the universe but rather may accelerate its expansion.

An expression for the critical density is found by assuming Λ to be zero (as it is for all basic Friedmann universes) and setting the normalised spatial curvature, k, equal to zero. When the substitutions are applied to the first of the Friedmann equations, we find

ρc = 3H²/(8πG) ≈ 1.88×10⁻²⁶ h² kg/m³,

where h ≡ H/(100 km/s/Mpc) is the dimensionless Hubble parameter.

This term originally was used as a means to determine the spatial geometry of the universe, where ρc is the critical density for which the spatial geometry is flat (or Euclidean). Assuming a zero vacuum energy density, if Ω is larger than unity, the space sections of the universe are closed; the universe will eventually stop expanding, then collapse. If Ω is less than unity, they are open, and the universe expands forever. However, one can also subsume the spatial curvature and vacuum energy terms into a more general expression for Ω, in which case this density parameter equals exactly unity. Then it becomes a matter of measuring the different components, usually designated by subscripts. According to the ΛCDM model, there are important components of Ω due to baryons, cold dark matter and dark energy. The spatial geometry of the universe has been measured by the WMAP spacecraft to be nearly flat. This means that the universe can be well approximated by a model where the spatial curvature parameter k is zero; however, this does not necessarily imply that the universe is infinite: it might merely be that the universe is much larger than the part we see.

The first Friedmann equation is often seen in terms of the present values of the density parameters, that is

H²/H₀² = Ω0,R a⁻⁴ + Ω0,M a⁻³ + Ω0,k a⁻² + Ω0,Λ.

Here Ω0,R is the radiation density today (when a = 1), Ω0,M is the matter (dark plus baryonic) density today, Ω0,k = 1 − Ω0 is the "spatial curvature density" today, and Ω0,Λ is the cosmological constant or vacuum density today.
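This form lends itself to numerical work; for instance, the age of the universe follows from t₀ = ∫₀¹ da / (a H(a)). A sketch with assumed, roughly Planck-like parameter values:

```python
import math

# Age of the universe t0 = integral_0^1 da / (a H(a)), with
# H(a)^2 = H0^2 (O_R a^-4 + O_M a^-3 + O_k a^-2 + O_L).
# The parameter values are assumed, Planck-like round numbers.
H0 = 70e3 / 3.0857e22          # 70 km/s/Mpc expressed in s^-1
O_R, O_M, O_L = 9e-5, 0.31, 0.69
O_k = 1.0 - (O_R + O_M + O_L)  # nearly zero (flat universe)

def H(a):
    return H0 * math.sqrt(O_R / a**4 + O_M / a**3 + O_k / a**2 + O_L)

N, t0 = 200_000, 0.0
for i in range(N):             # midpoint rule on a in (0, 1]
    a = (i + 0.5) / N
    t0 += 1.0 / (a * H(a))
t0 /= N

Gyr = 3.156e16                 # seconds in a gigayear
print(f"t0 = {t0 / Gyr:.1f} Gyr")   # about 13.3 Gyr for these parameters
```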

Other forms

The Hubble parameter can change over time if other parts of the equation are time dependent (in particular the mass density, the vacuum energy, or the spatial curvature). Evaluating the Hubble parameter at the present time yields Hubble's constant which is the proportionality constant of Hubble's law. Applied to a fluid with a given equation of state, the Friedmann equations yield the time evolution and geometry of the universe as a function of the fluid density.

FLRW models

Relativistic cosmology models based on the FLRW metric and obeying the Friedmann equations are called FLRW (or FRW) models. Direct observation of galaxies has shown their velocities to be dominated by radial recession, validating these assumptions for cosmological models. These models are the basis of the standard model of Big Bang cosmology, including the current ΛCDM model.

To apply the metric to cosmology and predict its time evolution via the scale factor requires Einstein's field equations together with a way of calculating the density, such as a cosmological equation of state. This process allows an approximate analytic solution of Einstein's field equations, giving the Friedmann equations, when the energy–momentum tensor is similarly assumed to be isotropic and homogeneous. The resulting equations are the two Friedmann equations described above.

Because the FLRW model assumes homogeneity, some popular accounts mistakenly assert that the Big Bang model cannot account for the observed lumpiness of the universe. In a strictly FLRW model, there are no clusters of galaxies or stars, since these are objects much denser than a typical part of the universe. Nonetheless, the FLRW model is used as a first approximation for the evolution of the real, lumpy universe because it is simple to calculate, and models that calculate the lumpiness in the universe are added onto the FLRW models as extensions. Most cosmologists agree that the observable universe is well approximated by an almost FLRW model, i.e., a model that follows the FLRW metric apart from primordial density fluctuations. As of 2003, the theoretical implications of the various extensions to the FLRW model appear to be well understood, and the goal is to make these consistent with observations from COBE and WMAP.

Interpretation

The pair of equations given above is equivalent to the following pair of equations,

ρ̇ = −3(ȧ/a)(ρ + p),
ä/a = −(4πG/3)(ρ + 3p) + Λ/3,

with k, the spatial curvature index, serving as a constant of integration for the first equation.

The first equation can be derived also from thermodynamical considerations and is equivalent to the first law of thermodynamics, assuming the expansion of the universe is an adiabatic process (which is implicitly assumed in the derivation of the Friedmann–Lemaître–Robertson–Walker metric).

The second equation states that both the energy density and the pressure cause the expansion rate of the universe to decrease, i.e., both cause a deceleration in the expansion of the universe. This is a consequence of gravitation, with pressure playing a similar role to that of energy (or mass) density, according to the principles of general relativity. The cosmological constant, on the other hand, causes an acceleration in the expansion of the universe.

Cosmological constant

The cosmological constant term can be omitted if we make the following replacements:

ρ → ρ + Λ/(8πG),    p → p − Λ/(8πG).

Therefore, the cosmological constant can be interpreted as arising from a form of energy that has negative pressure, equal in magnitude to its (positive) energy density:

p = −ρ,

which is the equation of state of vacuum with dark energy.

An attempt to generalize this equation of state would not have general invariance without further modification.

In fact, in order to get a term that causes an acceleration of the universe's expansion, it is enough to have a scalar field that satisfies

φ̇² < V(φ),

i.e., a field whose kinetic energy is less than its potential energy, so that ρ + 3p < 0. Such a field is sometimes called quintessence.

Newtonian interpretation

This interpretation is due to McCrea and Milne, although it is sometimes incorrectly ascribed to Friedmann. The Friedmann equations are equivalent to this pair of equations:

d(ρa³)/dt = −p d(a³)/dt,
ȧ²/2 − (4πG/3)ρa² = −k/2.

The first equation says that the decrease in the mass contained in a fixed cube (whose side is momentarily a) is the amount that leaves through the sides due to the expansion of the universe plus the mass equivalent of the work done by pressure against the material being expelled. This is the conservation of mass–energy (first law of thermodynamics) contained within a part of the universe.

The second equation says that the kinetic energy (seen from the origin) of a particle of unit mass moving with the expansion plus its (negative) gravitational potential energy (relative to the mass contained in the sphere of matter closer to the origin) is equal to a constant related to the curvature of the universe. In other words, the energy (relative to the origin) of a co-moving particle in free-fall is conserved. General relativity merely adds a connection between the spatial curvature of the universe and the energy of such a particle: positive total energy implies negative curvature and negative total energy implies positive curvature.

The cosmological constant term is assumed to be treated as dark energy and thus merged into the density and pressure terms.

During the Planck epoch, one cannot neglect quantum effects, and these may cause a deviation from the Friedmann equations.

Useful solutions

The Friedmann equations can be solved exactly in the presence of a perfect fluid with equation of state

p = wρ,

where p is the pressure, ρ is the mass density of the fluid in the comoving frame and w is some constant.

In the spatially flat case (k = 0), the solution for the scale factor is

a(t) = a₀ t^(2/(3(1+w))),

where a₀ is some integration constant to be fixed by the choice of initial conditions. This family of solutions labelled by w is extremely important for cosmology. For example, w = 0 describes a matter-dominated universe, where the pressure is negligible with respect to the mass density. From the generic solution one easily sees that in a matter-dominated universe the scale factor goes as

a(t) ∝ t^(2/3)  (matter-dominated).

Another important example is the case of a radiation-dominated universe, namely when w = 1/3. This leads to

a(t) ∝ t^(1/2)  (radiation-dominated).

Note that this solution is not valid for domination of the cosmological constant, which corresponds to w = −1. In this case the energy density is constant and the scale factor grows exponentially, a(t) ∝ e^(Ht).
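The power-law solutions can be checked against the flat Friedmann equation numerically; the sketch below uses units in which 8πG/3 = 1 and normalises ρ accordingly (an illustrative convention, not a physical choice):

```python
# Check that a(t) = t^(2/(3(1+w))) solves the flat (k = 0) Friedmann
# equation (da/dt / a)^2 = rho, in units where 8*pi*G/3 = 1 and the
# density is normalised to rho(t=1) = (2/(3(1+w)))^2.

def residual(w, t):
    n = 2.0 / (3.0 * (1.0 + w))                    # power-law exponent
    a = lambda s: s ** n
    dt = 1e-6
    H = (a(t + dt) - a(t - dt)) / (2 * dt) / a(t)  # da/dt / a, central diff.
    rho = n**2 * a(t) ** (-3.0 * (1.0 + w))        # rho ~ a^(-3(1+w))
    return abs(H**2 - rho)

for w in (0.0, 1.0 / 3.0):     # matter: a ~ t^(2/3); radiation: a ~ t^(1/2)
    for t in (0.5, 1.0, 2.0):
        assert residual(w, t) < 1e-6
print("power-law solutions verified")
```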

Solutions for other values of k can be found in Terzić, Balša, "Lecture Notes on Astrophysics" (retrieved 24 February 2022).

Mixtures

If the matter is a mixture of two or more non-interacting fluids each with such an equation of state, then

ρ̇f = −3(ȧ/a)(ρf + pf)

holds separately for each such fluid f. In each case, pf = wf ρf, from which we get

ρf ∝ a^(−3(1 + wf)).

For example, one can form a linear combination of such terms,

ρ = A a⁻³ + B a⁻⁴ + C,

where A is the density of "dust" (ordinary matter, w = 0) when a = 1; B is the density of radiation (w = 1/3) when a = 1; and C is the density of "dark energy" (w = −1). One then substitutes this into the first Friedmann equation and solves for a as a function of time.
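A minimal numerical sketch of this procedure, assuming illustrative density values and units with 8πG/3 = 1 and k = 0:

```python
import math

# Evolve the scale factor for a dust + radiation + dark-energy mixture,
# rho(a) = A a^-3 + B a^-4 + C, by integrating da/dt = a * sqrt(rho(a))
# (flat case, units with 8*pi*G/3 = 1).  A, B, C are illustrative values.
A, B, C = 0.3, 1e-4, 0.7      # dust, radiation, "dark energy" at a = 1

def adot(a):
    rho = A / a**3 + B / a**4 + C
    return a * math.sqrt(rho)

a, t, dt = 0.01, 0.0, 1e-4
while a < 1.0:                # classic 4th-order Runge-Kutta steps
    k1 = adot(a)
    k2 = adot(a + 0.5 * dt * k1)
    k3 = adot(a + 0.5 * dt * k2)
    k4 = adot(a + dt * k3)
    a += dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)
    t += dt

print(f"a reaches 1 at t = {t:.2f} (in units of 1/H0)")   # about 0.96
```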

History

Alexander Friedmann

Friedmann published two cosmology papers, in 1922 and 1924. He adopted the same homogeneity and isotropy assumptions used by Albert Einstein and by Willem de Sitter in their papers, both published in 1917. Both of the earlier works also assumed the universe was static, eternally unchanging. Einstein postulated an additional term in his equations of general relativity to ensure this stability. In his paper, de Sitter showed that spacetime had curvature even in the absence of matter: the new equations of general relativity implied that a vacuum had properties that altered spacetime.

The idea of a static universe was a fundamental assumption of philosophy and science. However, Friedmann abandoned it in his first paper, "On the curvature of space". Starting with Einstein's ten field equations of general relativity, Friedmann applied the symmetry of an isotropic universe and a simple model for mass–energy density to derive a relationship between that density and the curvature of spacetime. He demonstrated that, in addition to the one static solution, many time-dependent solutions exist.

Friedmann's second paper, "On the possibility of a world with constant negative curvature", published in 1924, explored more complex geometrical ideas. It established that the finiteness of spacetime was not a property that could be settled by the equations of general relativity alone: both finite and infinite geometries give solutions. Friedmann used a three-dimensional sphere as an analogy: a trip at constant latitude could return to its starting point, or the sphere might have an infinite number of sheets so that the trip never repeats.

Friedmann's papers were largely ignored, except – initially – by Einstein, who actively dismissed them. However, once Edwin Hubble published astronomical evidence that the universe was expanding, Einstein became convinced. Unfortunately for Friedmann, Georges Lemaître independently discovered some aspects of the same solutions and wrote persuasively about the concept of a universe born from a "primordial atom". Thus historians give the two scientists equal billing for the discovery.

Several students at Tsinghua University (CCP leader Xi Jinping's alma mater) participating in the 2022 COVID-19 protests in China carried placards with Friedmann equations scrawled on them, interpreted by some as a play on the words "Free man". Others have interpreted the use of the equations as a call to “open up” China and stop its Zero Covid policy, as the Friedmann equations relate to the expansion, or “opening” of the universe.

Monday, May 12, 2025

Free-electron laser

From Wikipedia, the free encyclopedia
 
The free-electron laser FELIX at Radboud University, Netherlands.

A free-electron laser (FEL) is a fourth generation light source producing extremely brilliant and short pulses of radiation. An FEL functions much as a laser but employs relativistic electrons as a gain medium instead of using stimulated emission from atomic or molecular excitations. In an FEL, a bunch of electrons passes through a magnetic structure called an undulator or wiggler to generate radiation, which re-interacts with the electrons to make them emit coherently, exponentially increasing its intensity.

As electron kinetic energy and undulator parameters can be adapted as desired, free-electron lasers are tunable and can be built for a wider frequency range than any other type of laser, currently ranging in wavelength from microwaves, through terahertz radiation and infrared, to the visible spectrum, ultraviolet, and X-ray.

Schematic representation of an undulator, at the core of a free-electron laser.

The first free-electron laser was developed by John Madey in 1971 at Stanford University using technology developed by Hans Motz and his coworkers, who built an undulator at Stanford in 1953, using the wiggler magnetic configuration. Madey used a 43 MeV electron beam and a 5 m long wiggler to amplify a signal.

Beam creation

The undulator of FELIX.

To create an FEL, an electron gun is used. A beam of electrons is generated by a short laser pulse illuminating a photocathode located inside a microwave cavity and accelerated to almost the speed of light in a device called a photoinjector. The beam is further accelerated to a design energy by a particle accelerator, usually a linear particle accelerator. Then the beam passes through a periodic arrangement of magnets with alternating poles across the beam path, which creates a side to side magnetic field. The direction of the beam is called the longitudinal direction, while the direction across the beam path is called transverse. This array of magnets is called an undulator or a wiggler, because the Lorentz force of the field forces the electrons in the beam to wiggle transversely, traveling along a sinusoidal path about the axis of the undulator.

The transverse acceleration of the electrons across this path results in the release of photons, which are monochromatic but still incoherent, because the electromagnetic waves from randomly distributed electrons interfere constructively and destructively in time. The resulting radiation power scales linearly with the number of electrons. Mirrors at each end of the undulator create an optical cavity, causing the radiation to form standing waves, or alternately an external excitation laser is provided. The radiation becomes sufficiently strong that the transverse electric field of the radiation beam interacts with the transverse electron current created by the sinusoidal wiggling motion, causing some electrons to gain and others to lose energy to the optical field via the ponderomotive force.

This energy modulation evolves into electron density (current) modulations with a period of one optical wavelength. The electrons are thus longitudinally clumped into microbunches, separated by one optical wavelength along the axis. Whereas an undulator alone would cause the electrons to radiate independently (incoherently), the radiation emitted by the bunched electrons is in phase, and the fields add together coherently.

The radiation intensity grows, causing additional microbunching of the electrons, which continue to radiate in phase with each other. This process continues until the electrons are completely microbunched and the radiation reaches a saturated power several orders of magnitude higher than that of the undulator radiation.

The wavelength of the radiation emitted can be readily tuned by adjusting the energy of the electron beam or the magnetic-field strength of the undulators.

FELs are relativistic machines. The wavelength of the emitted radiation, λ, is given by

λ = (λu / 2γ²)(1 + K²/2),

or, when the wiggler strength parameter K, discussed below, is small,

λ ∝ λu / (2γ²),

where λu is the undulator wavelength (the spatial period of the magnetic field), γ is the relativistic Lorentz factor and the proportionality constant depends on the undulator geometry and is of the order of 1.

This formula can be understood as a combination of two relativistic effects. Imagine you are sitting on an electron passing through the undulator. Due to Lorentz contraction the undulator is shortened by a factor γ, and the electron experiences a much shorter undulator wavelength λu/γ. However, the radiation emitted at this wavelength is observed in the laboratory frame of reference, and the relativistic Doppler effect brings the second factor of γ to the above formula. In an X-ray FEL the typical undulator wavelength of 1 cm is transformed to X-ray wavelengths on the order of 1 nm by γ ≈ 2000, i.e. the electrons have to travel at a speed of 0.9999998c.
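The resonance formula is easy to evaluate; the period, K, and γ values below are illustrative assumptions, not tied to any particular facility:

```python
# FEL resonance wavelength lambda = (lambda_u / 2 gamma^2) (1 + K^2 / 2).
# Illustrative values: a 1 cm undulator period, K = 1, and gamma = 2000
# (roughly 1 GeV electrons, since gamma = E / 0.511 MeV).
lambda_u = 1e-2          # undulator period, m
K        = 1.0           # wiggler strength parameter
gamma    = 2000.0        # relativistic Lorentz factor

lam = lambda_u / (2 * gamma**2) * (1 + K**2 / 2)
print(f"lambda = {lam * 1e9:.2f} nm")   # about 1.9 nm, i.e. soft X-rays
```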

Wiggler strength parameter K

K, a dimensionless parameter, defines the wiggler strength as the relationship between the length of a period and the radius of bend:

K = γλu / (2πρ) = e B₀ λu / (2π me c),

where ρ is the bending radius, B₀ is the applied magnetic field, me is the electron mass, c is the speed of light, and e is the elementary charge.

Expressed in practical units, the dimensionless undulator parameter is

K ≈ 0.934 · B₀[T] · λu[cm].
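The exact and practical-units forms can be compared numerically; the field strength and period below are illustrative values:

```python
import math

# Undulator parameter: exact form K = e B0 lambda_u / (2 pi m_e c)
# versus the practical-units shortcut K = 0.934 * B0[T] * lambda_u[cm].
# B0 and lambda_u are illustrative values.
e, m_e, c = 1.602e-19, 9.109e-31, 2.998e8   # SI constants
B0, lambda_u = 0.5, 3e-2                    # 0.5 T peak field, 3 cm period

K_exact = e * B0 * lambda_u / (2 * math.pi * m_e * c)
K_quick = 0.934 * B0 * (lambda_u * 100)     # lambda_u converted to cm

print(K_exact, K_quick)   # both are approximately 1.40
```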

Quantum effects

In most cases, the theory of classical electromagnetism adequately accounts for the behavior of free electron lasers. For sufficiently short wavelengths, quantum effects of electron recoil and shot noise may have to be considered.

Construction

Free-electron lasers require the use of an electron accelerator with its associated shielding, as accelerated electrons can be a radiation hazard if not properly contained. These accelerators are typically powered by klystrons, which require a high-voltage supply. The electron beam must be maintained in a vacuum, which requires the use of numerous vacuum pumps along the beam path. While this equipment is bulky and expensive, free-electron lasers can achieve very high peak powers, and the tunability of FELs makes them highly desirable in many disciplines, including chemistry, structure determination of molecules in biology, medical diagnosis, and nondestructive testing.

Infrared and terahertz FELs

The Fritz Haber Institute in Berlin completed a mid-infrared and terahertz FEL in 2013.

At Helmholtz-Zentrum Dresden-Rossendorf, two terahertz and mid-infrared FEL-based sources are in operation. FELBE is a cavity-based FEL offering continuous pulsing with a repetition rate of 13 MHz, 1 kHz pulsing by applying a pulse picker, and macrobunch operation with bunch lengths > 100 μs and macrobunch repetition rates ≤ 25 Hz. Pulse duration and pulse energy vary with wavelength and lie in the ranges 1–25 ps and 100 nJ to a few μJ, respectively. The TELBE facility is based on a superradiant undulator offering THz pulses ranging from 0.1 THz to 2.5 THz at repetition rates up to 500 kHz.

X-ray FELs

The lack of mirror materials that can reflect extreme ultraviolet and x-rays means that X-ray free electron lasers (XFEL) need to work without a resonant cavity. Consequently, in an X-ray FEL (XFEL) the beam is produced by a single pass of radiation through the undulator. This requires that there be enough amplification over a single pass to produce an appropriate beam.

Hence, XFELs use long undulator sections that are tens or hundreds of meters long. This allows XFELs to produce the brightest X-ray pulses of any human-made X-ray source. The intensity of the pulses rests on the principle of self-amplified spontaneous emission (SASE), which leads to microbunching. Initially all electrons are distributed evenly and emit only incoherent spontaneous radiation. Through the interaction of this radiation with the electrons' oscillations, they drift into microbunches separated by a distance equal to one radiation wavelength. This interaction drives all electrons to begin emitting coherent radiation: the emitted waves reinforce one another, with wave crests and troughs optimally superimposed. This results in an exponential increase of emitted radiation power, leading to high beam intensities and laser-like properties.

Examples of facilities operating on the SASE FEL principle include the Linac Coherent Light Source (LCLS) at SLAC and the European XFEL in Hamburg.

In 2022, an upgrade to Stanford University's Linac Coherent Light Source (LCLS-II) used temperatures around −271 °C to produce up to a million pulses per second of near-light-speed electrons, using superconducting niobium cavities.

Seeding and Self-seeding

One problem with SASE FELs is the lack of temporal coherence due to a noisy startup process. To avoid this, one can "seed" an FEL with a laser tuned to the resonance of the FEL. Such a temporally coherent seed can be produced by more conventional means, such as by high harmonic generation (HHG) using an optical laser pulse. This results in coherent amplification of the input signal; in effect, the output laser quality is characterized by the seed. While HHG seeds are available at wavelengths down to the extreme ultraviolet, seeding is not feasible at x-ray wavelengths due to the lack of conventional x-ray lasers.

In late 2010, in Italy, the seeded-FEL source FERMI@Elettra started commissioning, at the Trieste Synchrotron Laboratory. FERMI@Elettra is a single-pass FEL user-facility covering the wavelength range from 100 nm (12 eV) to 10 nm (124 eV), located next to the third-generation synchrotron radiation facility ELETTRA in Trieste, Italy.

In 2001, at Brookhaven National Laboratory, a seeding technique called "high-gain harmonic generation" that works down to X-ray wavelengths was developed. The technique, which can be applied in multiple stages in an FEL to achieve increasingly shorter wavelengths, utilizes a longitudinal shift of the radiation relative to the electron bunch to avoid the reduced beam quality caused by a previous stage. This longitudinal staging along the beam is called "Fresh-Bunch". The technique was demonstrated at X-ray wavelengths at the Trieste Synchrotron Laboratory.

A similar staging approach, named "Fresh-Slice", was demonstrated at the Paul Scherrer Institut, also at X-ray wavelengths. In the Fresh-Slice approach, the short X-ray pulse produced in the first stage is moved to a fresh part of the electron bunch by a transverse tilt of the bunch.

In 2012, scientists working on the LCLS found an alternative solution to the seeding limitation for x-ray wavelengths by self-seeding the laser with its own beam after being filtered through a diamond monochromator. The resulting intensity and monochromaticity of the beam were unprecedented and allowed new experiments to be conducted involving manipulating atoms and imaging molecules. Other labs around the world are incorporating the technique into their equipment.

Research

Biomedical

Basic research

Researchers have explored X-ray free-electron lasers as an alternative to synchrotron light sources that have been the workhorses of protein crystallography and cell biology.

Exceptionally bright and fast X-rays can image proteins using X-ray crystallography. This technique allows first-time imaging of proteins that do not stack in a way that allows imaging by conventional techniques – an estimated 25% of all proteins. Resolutions of 0.8 nm have been achieved with pulse durations of 30 femtoseconds. To get a clear view, a resolution of 0.1–0.3 nm is required. The short pulse durations allow images of X-ray diffraction patterns to be recorded before the molecules are destroyed. The bright, fast X-rays were produced at the Linac Coherent Light Source at SLAC. As of 2014, LCLS was the world's most powerful X-ray FEL.

Due to the increased repetition rates of the next-generation X-ray FEL sources, such as the European XFEL, the expected number of diffraction patterns is also expected to increase by a substantial amount. The increase in the number of diffraction patterns will place a large strain on existing analysis methods. To combat this, several methods have been researched to sort the huge amount of data that typical X-ray FEL experiments will generate. While the various methods have been shown to be effective, it is clear that to pave the way towards single-particle X-ray FEL imaging at full repetition rates, several challenges have to be overcome before the next resolution revolution can be achieved.

New biomarkers for metabolic diseases: by taking advantage of the selectivity and sensitivity of combined infrared ion spectroscopy and mass spectrometry, scientists can provide a structural fingerprint of small molecules in biological samples such as blood or urine. This new methodology is opening up possibilities to better understand metabolic diseases and develop novel diagnostic and therapeutic strategies.

Surgery

Research by Glenn Edwards and colleagues at Vanderbilt University's FEL Center in 1994 found that soft tissues including skin, cornea, and brain tissue could be cut, or ablated, using infrared FEL wavelengths around 6.45 micrometres with minimal collateral damage to adjacent tissue. This led to surgeries on humans, the first ever using a free-electron laser. Starting in 1999, Copeland and Konrad performed three surgeries in which they resected meningioma brain tumors. Beginning in 2000, Joos and Mawn performed five surgeries that cut a window in the sheath of the optic nerve, to test the efficacy for optic nerve sheath fenestration. These eight surgeries produced results consistent with the standard of care and with the added benefit of minimal collateral damage. A review of FELs for medical uses is given in the 1st edition of Tunable Laser Applications.

Fat removal

Several small clinical lasers tunable in the 6 to 7 micrometre range, with the pulse structure and energy to give minimal collateral damage in soft tissue, have been created. At Vanderbilt, there exists a Raman-shifted system pumped by an Alexandrite laser.

Rox Anderson proposed the medical application of the free-electron laser in melting fats without harming the overlying skin. At infrared wavelengths, water in tissue was heated by the laser, but at wavelengths corresponding to 915, 1210 and 1720 nm, subsurface lipids were differentially heated more strongly than water. The possible applications of this selective photothermolysis (heating tissues using light) include the selective destruction of sebum lipids to treat acne, as well as targeting other lipids associated with cellulite and body fat as well as fatty plaques that form in arteries which can help treat atherosclerosis and heart disease.

Military

FEL technology is being evaluated by the US Navy as a candidate for an anti-aircraft and anti-missile directed-energy weapon. The Thomas Jefferson National Accelerator Facility's FEL has demonstrated over 14 kW power output. Compact multi-megawatt class FEL weapons are undergoing research. On June 9, 2009 the Office of Naval Research announced it had awarded Raytheon a contract to develop a 100 kW experimental FEL. On March 18, 2010 Boeing Directed Energy Systems announced the completion of an initial design for U.S. Naval use. A prototype FEL system was demonstrated, with a full-power prototype scheduled by 2018.

FEL prize winners

The FEL prize is given to a person who has contributed significantly to the advancement of the field of free-electron lasers. In addition, it gives the international FEL community the opportunity to recognize its members for their outstanding achievements. The prize winners are announced at the FEL conference, which currently takes place every two years.

  • 1988 John Madey
  • 1989 William Colson
  • 1990 Todd Smith and Luis Elias
  • 1991 Phillip Sprangle and Nikolai Vinokurov
  • 1992 Robert Phillips
  • 1993 Roger Warren
  • 1994 Alberto Renieri and Giuseppe Dattoli
  • 1995 Richard Pantell and George Bekefi
  • 1996 Charles Brau
  • 1997 Kwang-Je Kim
  • 1998 John Walsh
  • 1999 Claudio Pellegrini
  • 2000 Stephen V. Benson, Eisuke J. Minehara, and George R. Neil
  • 2001 Michel Billardon, Marie-Emmanuelle Couprie, and Jean-Michel Ortega
  • 2002 H. Alan Schwettman and Alexander F.G. van der Meer
  • 2003 Li-Hua Yu
  • 2004 Vladimir Litvinenko and Hiroyuki Hama
  • 2005 Avraham (Avi) Gover
  • 2006 Evgueni Saldin and Jörg Rossbach
  • 2007 Ilan Ben-Zvi and James Rosenzweig
  • 2008 Samuel Krinsky
  • 2009 David Dowell and Paul Emma
  • 2010 Sven Reiche
  • 2011 Tsumoru Shintake
  • 2012 John Galayda
  • 2013 Luca Giannessi and Young Uk Jeong
  • 2014 Zhirong Huang and William Fawley
  • 2015 Mikhail Yurkov and Evgeny Schneidmiller
  • 2017 Bruce Carlsten, Dinh Nguyen, and Richard Sheffield
  • 2019 Enrico Allaria, Gennady Stupakov, and Alex Lumpkin
  • 2022 Brian McNeil and Ying Wu
  • 2024 Toru Hara, Hitoshi Tanaka, and Takashi Tanaka

Young Scientist FEL Award

The Young Scientist FEL Award (or "Young Investigator FEL Prize") is intended to honor outstanding contributions to FEL science and technology from a person who is less than 37 years of age at the time of the FEL conference.

  • 2008 Michael Röhrs
  • 2009 Pavel Evtushenko
  • 2010 Guillaume Lambert
  • 2011 Marie Labat
  • 2012 Daniel F. Ratner
  • 2013 Dao Xiang
  • 2014 Erik Hemsing
  • 2015 Agostino Marinelli and Haixiao Deng
  • 2017 Eugenio Ferrari and Eléonore Roussel
  • 2019 Joe Duris and Chao Feng
  • 2022 Zhen Zhang, Jiawei Yan, and Svitozar Serkez
  • 2024 Philipp Dijkstal

Chain-growth polymerization

From Wikipedia, the free encyclopedia
 

Chain-growth polymerization (AE) or chain-growth polymerisation (BE) is a polymerization technique where monomer molecules add onto the active site on a growing polymer chain one at a time. There are a limited number of these active sites at any moment during the polymerization, which gives this method its key characteristics.

Chain-growth polymerization involves three types of reactions:

  1. Initiation: An active species I* is formed by some decomposition of an initiator molecule I
  2. Propagation: The initiator fragment reacts with a monomer M to begin the conversion to the polymer; the center of activity is retained in the adduct. Monomers continue to add in the same way until polymers Pi* are formed with the degree of polymerization i
  3. Termination: By some reaction generally involving two polymers containing active centers, the growth center is deactivated, resulting in dead polymer

Introduction

IUPAC definition

chain polymerization: A chain reaction in which the growth of a polymer chain proceeds exclusively by reaction(s) between monomer and reactive site(s) on the polymer chain with regeneration of the reactive site(s) at the end of each growth step. (See Gold Book entry for note.)

An example of chain-growth polymerization by ring opening to polycaprolactone

In 1953, Paul Flory first classified polymerization into "step-growth polymerization" and "chain-growth polymerization". IUPAC recommends further simplifying "chain-growth polymerization" to "chain polymerization". It is a kind of polymerization in which an active center (a free radical or ion) is formed, and many monomers add to it in a short period of time to form a macromolecule of high molecular weight. The active site is regenerated with each monomer addition, so polymer growth occurs only at one (or possibly a few) chain ends.

Many common polymers can be obtained by chain polymerization such as polyethylene (PE), polypropylene (PP), polyvinyl chloride (PVC), poly(methyl methacrylate) (PMMA), polyacrylonitrile (PAN), polyvinyl acetate (PVA).

Typically, chain-growth polymerization can be summarized by the chemical equation:

Px* + M → Px+1* (+ L)

In this equation, P is the polymer, x represents the degree of polymerization, * denotes the active center of chain-growth polymerization, M is the monomer which will react with the active center, and L is a possible low-molar-mass by-product formed during chain propagation. For most chain-growth polymerizations no by-product L is formed. However, there are some exceptions, such as the polymerization of amino acid N-carboxyanhydrides (oxazolidine-2,5-diones), which releases carbon dioxide.

This type of polymerization is described as "chain" or "chain-growth" because the reaction mechanism is a chemical chain reaction with an initiation step in which an active center is formed, followed by a rapid sequence of chain propagation steps in which the polymer molecule grows by addition of one monomer molecule to the active center in each step. The word "chain" here does not refer to the fact that polymer molecules form long chains. Some polymers are formed instead by a second type of mechanism known as step-growth polymerization without rapid chain propagation steps.

Reaction steps

All chain-growth polymerization reactions must include chain initiation and chain propagation. Chain transfer and chain termination steps also occur in many but not all chain-growth polymerizations.

Chain initiation

Chain initiation is the initial generation of a chain carrier, which is an intermediate such as a radical or an ion which can continue the reaction by chain propagation. Initiation steps are classified according to the way that energy is provided: thermal initiation, high energy initiation, and chemical initiation, etc. Thermal initiation uses molecular thermal motion to dissociate a molecule and form active centers. High energy initiation refers to the generation of chain carriers by radiation. Chemical initiation is due to a chemical initiator.

For the case of radical polymerization as an example, chain initiation involves the dissociation of a radical initiator molecule (I), which is easily split by heat or light into two free radicals (2 R°). Each radical R° then adds a first monomer molecule (M) to start a chain, whose end is an activated monomer bearing the unpaired electron (RM1°).

  • I → 2 R°
  • R° + M → RM1°

Chain propagation

IUPAC defines chain propagation as a reaction of an active center on the growing polymer molecule which adds one monomer molecule to form a new polymer molecule one repeat unit longer.

For radical polymerization, the active center remains an atom with an unpaired electron. The addition of the second monomer and a typical later addition step are

  • RM1° + M → RM2°
  • ...............
  • RMn° + M → RMn+1°

For some polymers, chains of over 1000 monomer units can be formed in milliseconds.
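Why chains grow so long so quickly can be illustrated with the standard steady-state treatment of radical polymerization: the radical concentration settles where the initiation rate equals the termination rate, and the kinetic chain length is the ratio of propagation to initiation rates. The sketch below evaluates these textbook expressions; the rate constants are hypothetical order-of-magnitude values, not data for any particular monomer.

```python
from math import sqrt

# Illustrative steady-state kinetics of free-radical chain growth.
# All rate constants are assumed order-of-magnitude values.
k_d = 1e-5   # initiator dissociation rate constant, 1/s
f   = 0.5    # initiator efficiency (fraction of radicals that start chains)
k_p = 1e3    # propagation rate constant, L/(mol*s)
k_t = 1e7    # termination rate constant, L/(mol*s)
I   = 0.01   # initiator concentration, mol/L
M   = 5.0    # monomer concentration, mol/L

# Steady state: initiation rate = termination rate
#   2*f*k_d*[I] = 2*k_t*[R]^2   =>   [R] = sqrt(f*k_d*[I]/k_t)
R = sqrt(f * k_d * I / k_t)

# Kinetic chain length: monomers added per chain started
#   nu = k_p*[M]*[R] / (2*f*k_d*[I]) = k_p*[M] / (2*sqrt(f*k_d*k_t*[I]))
nu = k_p * M / (2 * sqrt(f * k_d * k_t * I))

print(f"steady-state [R*] ~ {R:.2e} mol/L")
print(f"kinetic chain length ~ {nu:.0f} monomer units")
```

With these illustrative values the radical concentration is tiny (~1e-7 mol/L) yet each chain adds a few thousand monomer units, consistent with chains of over 1000 units forming in milliseconds.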

Chain termination

In a chain termination step, the active center disappears, resulting in the termination of chain propagation. This is different from chain transfer in which the active center only shifts to another molecule but does not disappear.

For radical polymerization, termination involves a reaction of two growing polymer chains to eliminate the unpaired electrons of both chains. There are two possibilities.

1. Recombination is the reaction of the unpaired electrons of two chains to form a covalent bond between them. The product is a single polymer molecule with the combined length of the two reactant chains:

  • RMn° + RMm° → Pn+m

2. Disproportionation is the transfer of a hydrogen atom from one chain to the other, so that the two product chain molecules are unchanged in length but are no longer free radicals:

  • RMn° + RMm° → Pn + Pm
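The consequence of the two termination modes for chain length can be shown with a toy Monte Carlo sketch (the chain-length distribution below is arbitrary and purely illustrative): recombination halves the number of dead chains and doubles their average length, while disproportionation leaves both unchanged.

```python
import random

random.seed(0)

# Hypothetical lengths of 10,000 growing chains at the moment of termination.
chains = [random.randint(500, 1500) for _ in range(10000)]

# Disproportionation: H-atom transfer only; each chain keeps its length.
disprop = list(chains)

# Recombination: pairs of chains couple into one chain of combined length.
recomb = [chains[i] + chains[i + 1] for i in range(0, len(chains), 2)]

avg = lambda xs: sum(xs) / len(xs)
print(f"mean length after disproportionation: {avg(disprop):.0f}")
print(f"mean length after recombination:      {avg(recomb):.0f}")
```

Since recombination conserves the total number of monomer units while halving the chain count, the number-average length of the dead polymer is exactly doubled relative to disproportionation.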

Initiation, propagation and termination steps also occur in chain reactions of smaller molecules. This is not true of the chain transfer and branching steps considered next.

Chain transfer

An example of chain transfer in styrene polymerization. Here X = Cl and Y = CCl3.

In some chain-growth polymerizations there is also a chain transfer step, in which the growing polymer chain RMn° takes an atom X from an inactive molecule XY, terminating the growth of the polymer chain: RMn° + XY → RMnX + Y°. The Y° fragment is a new active center which adds more monomer M to form a new growing chain YMn°. This can happen in free radical polymerization for chains RMn°, in ionic polymerization for chains RMn+ or RMn−, or in coordination polymerization. In most cases chain transfer generates a by-product and decreases the molar mass of the final polymer.

Chain transfer to polymer: Branching

Another possibility is chain transfer to a second polymer molecule, resulting in a product macromolecule with a branched structure. In this case the growing chain takes an atom X from a second polymer chain whose growth had been completed; the transfer of atom X completes the growth of the first chain. The second molecule, having lost an atom X from the interior of its chain, becomes a reactive radical (or ion) that can add more monomer molecules, attaching a branch or side chain to it.

Classes of chain-growth polymerization

The International Union of Pure and Applied Chemistry (IUPAC) recommends definitions for several classes of chain-growth polymerization.

Radical polymerization

Based on the IUPAC definition, radical polymerization is a chain polymerization in which the kinetic-chain carriers are radicals. Usually, the growing chain end bears an unpaired electron. Free radicals can be initiated by many methods such as heating, redox reactions, ultraviolet radiation, high energy irradiation, electrolysis, sonication, and plasma. Free radical polymerization is very important in polymer chemistry. It is one of the most developed methods in chain-growth polymerization. Currently, most polymers in our daily life are synthesized by free radical polymerization, including polyethylene, polystyrene, polyvinyl chloride, polymethyl methacrylate, polyacrylonitrile, polyvinyl acetate, styrene butadiene rubber, nitrile rubber, neoprene, etc.

Ionic polymerization

Ionic polymerization is a chain polymerization in which the kinetic-chain carriers are ions or ion pairs. It can be further divided into anionic polymerization and cationic polymerization. Ionic polymerization generates many polymers used in daily life, such as butyl rubber, polyisobutylene, polyphenylene, polyoxymethylene, polysiloxane, polyethylene oxide, high-density polyethylene, isotactic polypropylene, butadiene rubber, etc. Living anionic polymerization was developed in the 1950s. The chain remains active indefinitely unless the reaction is deliberately transferred or terminated, which allows control of the molar mass and dispersity (or polydispersity index, PDI).

Coordination polymerization

Coordination polymerization is a chain polymerization that involves the preliminary coordination of a monomer molecule with a chain carrier. The monomer is first coordinated with the transition metal active center, and then the activated monomer is inserted into the transition metal–carbon bond for chain growth. In some cases, coordination polymerization is also called insertion polymerization or complexing polymerization. Advanced coordination polymerizations can effectively control the tacticity, molecular weight and PDI of the polymer. In addition, the racemic mixture of a chiral metallocene catalyst can be separated into its enantiomers; oligomerization with such an optically active catalyst produces optically active branched olefins.

Living polymerization

Living polymerization was first described by Michael Szwarc in 1956. It is defined as a chain polymerization from which chain transfer and chain termination are absent. In their absence, the polymerization stops once the monomer in the system is consumed, but the polymer chains remain active; if new monomer is added, the polymerization resumes.

Due to the low PDI and predictable molecular weight, living polymerization is at the forefront of polymer research. It can be further divided into living free radical polymerization, living ionic polymerization and living ring-opening metathesis polymerization, etc.
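The low dispersity of living polymerization can be made concrete with two standard textbook results: an ideal living polymerization gives a Poisson distribution of chain lengths, whereas random termination or step growth gives the most-probable (Flory) distribution. The sketch below evaluates both formulas; the numbers chosen are purely illustrative.

```python
# PDI = Mw/Mn for two idealized chain-length distributions.

# Ideal living polymerization -> Poisson distribution:
#   PDI = 1 + Xn/(Xn + 1)^2  ~  1 + 1/Xn for large Xn  (approaches 1)
for Xn in (10, 100, 1000):
    pdi = 1 + Xn / (Xn + 1) ** 2
    print(f"living, Xn={Xn:>4}: PDI = {pdi:.4f}")

# Most-probable (Flory) distribution, e.g. step growth at conversion p:
#   PDI = 1 + p, approaching 2 at high conversion
for p in (0.9, 0.99, 0.999):
    print(f"most-probable, p={p}: PDI = {1 + p:.3f}")
```

So a living polymerization at Xn = 1000 has a PDI of about 1.001, while a high-conversion Flory distribution approaches 2.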

Ring-opening polymerization

Ring-opening polymerization is defined as a polymerization in which a cyclic monomer yields a monomeric unit which is acyclic or contains fewer cycles than the monomer. Generally, ring-opening polymerization is carried out under mild conditions, fewer by-products are formed than in polycondensation, and a high-molecular-weight polymer is easily obtained. Common ring-opening polymerization products include polypropylene oxide, polytetrahydrofuran, polyepichlorohydrin, polyoxymethylene, polycaprolactam and polysiloxane.

Reversible-deactivation polymerization

Reversible-deactivation polymerization is defined as a chain polymerization propagated by chain carriers that are deactivated reversibly, bringing them into one or more active-dormant equilibria. An example of a reversible-deactivation polymerization is group-transfer polymerization.

Comparison with step-growth polymerization

Polymers were first classified according to polymerization method by Wallace Carothers in 1929, who introduced the terms addition polymer and condensation polymer to describe polymers made by addition reactions and condensation reactions respectively. However this classification is inadequate to describe a polymer which can be made by either type of reaction, for example nylon 6 which can be made either by addition of a cyclic monomer or by condensation of a linear monomer.

Flory revised the classification to chain-growth polymerization and step-growth polymerization, based on polymerization mechanisms rather than polymer structures. IUPAC now recommends that the names of step-growth polymerization and chain-growth polymerization be further simplified to polycondensation (or polyaddition if no low-molar-mass by-product is formed when a monomer is added) and chain polymerization.

Most polymerizations are either chain-growth or step-growth reactions. Chain-growth includes both initiation and propagation steps (at least), and the propagation of chain-growth polymers proceeds by the addition of monomers to a growing polymer with an active centre. In contrast step-growth polymerization involves only one type of step, and macromolecules can grow by reaction steps between any two molecular species: two monomers, a monomer and a growing chain, or two growing chains. In step growth, the monomers will initially form dimers, trimers, etc. which later react to form long chain polymers.

In chain-growth polymerization, a growing macromolecule increases in size rapidly once its growth is initiated. When a macromolecule stops growing it generally will add no more monomers. In step-growth polymerization on the other hand, a single polymer molecule can grow over the course of the whole reaction.

In chain-growth polymerization, long macromolecules with high molecular weight are formed when only a small fraction of monomer has reacted. Monomers are consumed steadily over the course of the whole reaction, but the degree of polymerization can increase very quickly after chain initiation. However in step-growth polymerization the monomer is consumed very quickly to dimer, trimer and oligomer. The degree of polymerization increases steadily during the whole polymerization process.
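This contrast can be made quantitative: for step growth the Carothers equation gives Xn = 1/(1 − p) at monomer conversion p, while in chain growth the dead chains are roughly as long as the kinetic chain length from the very start of the reaction. The sketch below compares the two; the chain-growth value used is an assumed, illustrative kinetic chain length.

```python
# Number-average degree of polymerization (Xn) versus conversion p.

NU = 1000  # assumed kinetic chain length for the chain-growth case

for p in (0.01, 0.50, 0.90, 0.99):
    step_xn = 1 / (1 - p)   # Carothers equation for step growth
    chain_xn = NU           # dead chain-growth chains are long even at low p
    print(f"p={p:4}: step-growth Xn = {step_xn:7.1f}   "
          f"chain-growth chain length ~ {chain_xn}")
```

At 1% conversion a step-growth system is still essentially all monomer and dimer (Xn ≈ 1), while a chain-growth system already contains full-length polymer; even at 99% conversion the step-growth Xn has only reached 100.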

The type of polymerization of a given monomer usually depends on the functional groups present, and sometimes also on whether the monomer is linear or cyclic. Chain-growth polymers are usually addition polymers by Carothers' definition. They are typically formed by addition reactions across the C=C double bonds of the monomers, giving a backbone that contains only carbon–carbon bonds. Another possibility is ring-opening polymerization, as for the chain-growth polymerization of tetrahydrofuran or of ε-caprolactone (see Introduction above).

Step-growth polymers are typically condensation polymers, in which an elimination product such as H2O is formed. Examples are polyamides, polycarbonates, polyesters, polyimides, polysiloxanes and polysulfones. If no elimination product is formed, then the polymer is an addition polymer, such as a polyurethane or a poly(phenylene oxide). Chain-growth polymerization with a low-molar-mass by-product formed during chain growth is described by IUPAC as "condensative chain polymerization".

Compared to step-growth polymerization, living chain-growth polymerization shows low molar mass dispersity (or PDI), predictable molar mass distribution and controllable conformation. Generally, polycondensation proceeds in a step-growth polymerization mode.

Application

Chain polymerization products are widely used in many aspects of life, including electronic devices, food packaging, catalyst carriers, medical materials, etc. At present, the world's highest-yielding polymers, such as polyethylene (PE), polyvinyl chloride (PVC) and polypropylene (PP), can be obtained by chain polymerization. In addition, some carbon nanotube polymers are used in electronic devices. Controlled living chain-growth conjugated polymerization also enables the synthesis of well-defined advanced structures, including block copolymers. Their industrial applications extend to water purification, biomedical devices and sensors.

Inorganic polymer

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Inorganic_polymer
The inorganic polymer (SN)x

In polymer chemistry, an inorganic polymer is a polymer with a skeletal structure that does not include carbon atoms in the backbone. Polymers containing inorganic and organic components are sometimes called hybrid polymers, and most so-called inorganic polymers are hybrid polymers. One of the best known examples is polydimethylsiloxane, otherwise known commonly as silicone rubber. Inorganic polymers offer some properties not found in organic materials including low-temperature flexibility, electrical conductivity, and nonflammability. The term inorganic polymer refers generally to one-dimensional polymers, rather than to heavily crosslinked materials such as silicate minerals. Inorganic polymers with tunable or responsive properties are sometimes called smart inorganic polymers. A special class of inorganic polymers are geopolymers, which may be anthropogenic or naturally occurring.

Main group backbone

Traditionally, the area of inorganic polymers focuses on materials in which the backbone is composed exclusively of main-group elements.

Homochain polymers

Homochain polymers have only one kind of atom in the main chain. One member is polymeric sulfur, which forms reversibly upon melting any of the cyclic allotropes, such as S8. Organic polysulfides and polysulfanes feature short chains of sulfur atoms, capped respectively with alkyl and H. Elemental tellurium and the gray allotrope of elemental selenium also are polymers, although they are not processable.

The gray allotrope of selenium consists of helical chains of Se atoms.

Polymeric forms of the group IV elements are well known. The premier materials are polysilanes, which are analogous to polyethylene and related organic polymers. They are more fragile than the organic analogues and, because of the longer Si−Si bonds, carry larger substituents. Poly(dimethylsilane) is prepared by reduction of dimethyldichlorosilane. Pyrolysis of poly(dimethylsilane) gives SiC fibers.

Heavier analogues of polysilanes are also known to some extent. These include polygermanes, [R2Ge]n, and polystannanes, [R2Sn]n.

Heterochain polymers

Si-based

Heterochain polymers have more than one type of atom in the main chain. Typically two types of atoms alternate along the main chain. Of great commercial interest are the polysiloxanes, where the main chain features Si and O centers: −Si−O−Si−O−. Each Si center has two substituents, usually methyl or phenyl. Examples include polydimethylsiloxane (PDMS, [Me2SiO]n), polymethylhydrosiloxane (PMHS, [MeSi(H)O]n) and polydiphenylsiloxane [Ph2SiO]n). Related to the siloxanes are the polysilazanes. These materials have the backbone formula −Si−N−Si−N−. One example is perhydridopolysilazane PHPS. Such materials are of academic interest.

P-based

A related family of well-studied inorganic polymers are the polyphosphazenes. They feature the backbone −P−N−P−N−. With two substituents on phosphorus, they are structurally related to the polysiloxanes. Such materials are generated by ring-opening polymerization of hexachlorophosphazene followed by substitution of the P−Cl groups by alkoxide. Such materials find specialized applications as elastomers.

Polyphosphazene general structure
General structure of polyphosphazenes. Gray spheres represent any organic or inorganic group.

B-based

Boron–nitrogen polymers feature −B−N−B−N− backbones. Examples are polyborazylenes and polyaminoboranes.

S-based

The polythiazyls have the backbone −S−N−S−N−. Unlike most inorganic polymers, these materials lack substituents on the main chain atoms. Such materials exhibit high electrical conductivity, a finding that attracted much attention during the era when polyacetylene was discovered. It is superconducting below 0.26 K.

Ionomers

Ionomers are usually not classified with charge-neutral inorganic polymers. Examples include the polyphosphates and polyborates, with phosphorus–oxygen and boron–oxygen backbones respectively.

Transition-metal-containing polymers

Inorganic polymers also include materials with transition metals in the backbone. Examples are Polyferrocenes, Krogmann's salt and Magnus's green salt.

Magnus's green salt is a salt that features a one-dimensional chain of weak Pt–Pt bonds.

Polymerization methods

Inorganic polymers are formed by many of the same general routes as organic polymers, for example the ring-opening polymerization used to make the polyphosphazenes.

Reactions

Inorganic polymers are precursors to inorganic solids. This type of reaction is illustrated by the stepwise conversion of ammonia borane to discrete rings and oligomers, which upon pyrolysis give boron nitrides.
