
Wave equation

From Wikipedia, the free encyclopedia
A pulse traveling through a string with fixed endpoints as modeled by the wave equation
 
Spherical waves coming from a point source
 
A solution to the 2D wave equation

The wave equation is a second-order linear partial differential equation for the description of waves or standing wave fields such as mechanical waves (e.g. water waves, sound waves and seismic waves) or electromagnetic waves (including light waves). It arises in fields like acoustics, electromagnetism, and fluid dynamics.

This article focuses on waves in classical physics. Quantum physics uses operator-based wave equations, often in the form of relativistic wave equations.

Introduction

The wave equation is a hyperbolic partial differential equation describing waves, including traveling and standing waves; the latter can be considered as linear superpositions of waves traveling in opposite directions. This article mostly focuses on the scalar wave equation, which describes waves in scalar quantities by means of a scalar function u of a time variable t and one or more spatial variables x, y, z. There are also vector wave equations describing waves in vector quantities, such as waves in an electric field, magnetic field, or magnetic vector potential, and elastic waves. The scalar wave equation can be seen as a special case of the vector wave equations: in the Cartesian coordinate system, the scalar wave equation is the equation satisfied by each component (for each coordinate axis, such as the component for the x axis) of a vector wave without sources of waves in the considered domain (i.e., space and time). For example, for an electric field wave in the absence of wave sources, each coordinate component must satisfy the scalar wave equation. Other scalar wave equation solutions u describe scalar physical quantities such as the pressure in a liquid or gas, or the displacement, along some specific direction, of particles of a vibrating solid away from their resting (equilibrium) positions.

The scalar wave equation is

  ∂²u/∂t² = c² (∂²u/∂x² + ∂²u/∂y² + ∂²u/∂z²),

where c is a fixed non-negative real coefficient (the propagation speed of the wave) and u = u(x, y, z, t) is the scalar field describing the wave.

The equation states that, at any given point, the second derivative of u with respect to time is proportional to the sum of the second derivatives of u with respect to space, with the constant of proportionality being the square of the speed of the wave.

Using notations from vector calculus, the wave equation can be written compactly as

  u_tt = c² Δu  or  □u = 0,

where the double subscript denotes the second-order partial derivative with respect to time, Δ is the Laplace operator, and □ is the d'Alembert operator, defined as:

  u_tt = ∂²u/∂t²,  Δ = ∂²/∂x² + ∂²/∂y² + ∂²/∂z²,  □ = (1/c²) ∂²/∂t² − Δ.

A solution to this (two-way) wave equation can be quite complicated. Still, it can be analyzed as a linear combination of simple solutions that are sinusoidal plane waves with various directions of propagation and wavelengths but all with the same propagation speed c. This analysis is possible because the wave equation is linear and homogeneous, so that any multiple of a solution is also a solution, and the sum of any two solutions is again a solution. This property is called the superposition principle in physics.

The wave equation alone does not specify a physical solution; a unique solution is usually obtained by setting a problem with further conditions, such as initial conditions, which prescribe the amplitude and phase of the wave. Another important class of problems occurs in enclosed spaces specified by boundary conditions, for which the solutions represent standing waves, or harmonics, analogous to the harmonics of musical instruments.
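The superposition principle can be illustrated with a quick numerical spot check. This is a sketch of our own (the wave speed and the Gaussian pulse shapes are arbitrary choices, not taken from the article): the sum of a right-moving and a left-moving pulse should satisfy u_tt = c² u_xx.

```python
import numpy as np

# Our own numerical sketch: u = F(x - ct) + G(x + ct) should satisfy u_tt = c^2 u_xx.
c = 2.0
F = lambda s: np.exp(-s**2)            # right-traveling pulse (arbitrary shape)
G = lambda s: np.exp(-(s - 1.0)**2)    # left-traveling pulse (arbitrary shape)
u = lambda x, t: F(x - c*t) + G(x + c*t)

x0, t0, h = 0.3, 0.7, 1e-4             # central second differences with step h
u_tt = (u(x0, t0 + h) - 2*u(x0, t0) + u(x0, t0 - h)) / h**2
u_xx = (u(x0 + h, t0) - 2*u(x0, t0) + u(x0 - h, t0)) / h**2
residual = abs(u_tt - c**2 * u_xx)     # should be near zero up to discretization error
```

The residual vanishes up to finite-difference error, for any choice of the two pulse shapes, which is exactly the linearity the text describes.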

Wave equation in one space dimension

French scientist Jean-Baptiste le Rond d'Alembert discovered the wave equation in one space dimension.

The wave equation in one spatial dimension can be written as follows:

  ∂²u/∂t² = c² ∂²u/∂x².

This equation is typically described as having only one spatial dimension x, because the only other independent variable is the time t.

Derivation

The wave equation in one space dimension can be derived in a variety of different physical settings. Most famously, it can be derived for the case of a string vibrating in a two-dimensional plane, with each of its elements being pulled in opposite directions by the force of tension.

Another physical setting for derivation of the wave equation in one space dimension uses Hooke's law. In the theory of elasticity, Hooke's law is an approximation for certain materials, stating that the amount by which a material body is deformed (the strain) is linearly related to the force causing the deformation (the stress).

Hooke's law

The wave equation in the one-dimensional case can be derived from Hooke's law in the following way: imagine an array of little weights of mass m interconnected with massless springs of length h. The springs have a spring constant of k:

Here the dependent variable u(x) measures the distance from the equilibrium of the mass situated at x, so that u(x) essentially measures the magnitude of a disturbance (i.e. strain) that is traveling in an elastic material. The resulting force exerted on the mass m at the location x + h is:

  F_Hooke = k[u(x + 2h, t) − u(x + h, t)] − k[u(x + h, t) − u(x, t)].

By equating the latter equation with Newton's second law,

  F_Newton = m ∂²u(x + h, t)/∂t²,

the equation of motion for the weight at the location x + h is obtained:

  ∂²u(x + h, t)/∂t² = (k/m) [u(x + 2h, t) − 2u(x + h, t) + u(x, t)].

If the array of weights consists of N weights spaced evenly over the length L = Nh of total mass M = Nm, and the total spring constant of the array is K = k/N, we can write the above equation as

  ∂²u(x + h, t)/∂t² = (KL²/M) · [u(x + 2h, t) − 2u(x + h, t) + u(x, t)]/h².

Taking the limit N → ∞, h → 0 and assuming smoothness, one gets

  ∂²u(x, t)/∂t² = (KL²/M) ∂²u(x, t)/∂x²,

where the spatial difference quotient becomes ∂²u/∂x² by the definition of the second derivative. KL²/M is the square of the propagation speed in this particular case.
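The continuum limit above can be probed numerically. The following is a rough simulation sketch of our own (all parameter values are assumptions for illustration): integrate the discrete chain m·u_i″ = k(u_{i+1} − 2u_i + u_{i−1}) and check that a pulse propagates at the predicted speed c = h·√(k/m).

```python
import numpy as np

# Our own sketch: a chain of N masses and springs, advanced with symplectic Euler steps.
# Continuum theory above predicts propagation speed c = sqrt(K L^2 / M) = h * sqrt(k/m).
N, L = 400, 1.0
h = L / N
k, m = 1.0e4, 1.0e-3                   # per-spring constant and per-mass (assumed values)
c = h * np.sqrt(k / m)                 # predicted wave speed

x = np.linspace(0.0, L, N)
u = np.exp(-((x - 0.2) / 0.02) ** 2)   # initial displacement pulse (splits into two halves)
v = np.zeros(N)
dt = 0.2 * h / c                       # comfortably below the stability limit h/c
t, T = 0.0, 0.35 / c                   # run until the right-going half should be near x = 0.55
while t < T:
    acc = np.zeros(N)
    acc[1:-1] = (k / m) * (u[2:] - 2.0 * u[1:-1] + u[:-2])  # discrete equation of motion
    v += acc * dt
    u += v * dt
    t += dt

right = x > 0.4                        # locate the right-going pulse peak
peak_x = x[right][np.argmax(u[right])]
c_measured = (peak_x - 0.2) / t        # measured propagation speed
```

The measured speed agrees with h·√(k/m) to within a few percent, limited by grid resolution and lattice dispersion.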

1-d standing wave as a superposition of two waves traveling in opposite directions

Stress pulse in a bar

In the case of a stress pulse propagating longitudinally through a bar, the bar acts much like an infinite number of springs in series and can be taken as an extension of the equation derived for Hooke's law. A uniform bar, i.e. of constant cross-section, made from a linear elastic material has a stiffness K given by

  K = EA/L,

where A is the cross-sectional area and E is the Young's modulus of the material. The wave equation becomes

  ∂²u(x, t)/∂t² = (EAL/M) ∂²u(x, t)/∂x².

AL is equal to the volume of the bar, and therefore AL/M = 1/ρ, where ρ is the density of the material. The wave equation reduces to

  ∂²u(x, t)/∂t² = (E/ρ) ∂²u(x, t)/∂x².

The speed of a stress wave in a bar is therefore √(E/ρ).
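As a quick numeric illustration (the material constants below are rough textbook values for steel, assumed for the example rather than taken from the article):

```python
import math

# Longitudinal stress-wave speed in a thin bar, c = sqrt(E / rho).
E_steel = 200e9      # Young's modulus, Pa (approximate textbook value)
rho_steel = 7850.0   # density, kg/m^3 (approximate textbook value)
c_steel = math.sqrt(E_steel / rho_steel)   # speed of the stress wave, m/s
```

This gives a speed of roughly five kilometers per second, which is why a tap on one end of a long steel rail is heard at the other end well before the airborne sound arrives.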

General solution

Algebraic approach

For the one-dimensional wave equation a relatively simple general solution may be found. Defining new variables

  ξ = x − ct,  η = x + ct

changes the wave equation into

  ∂²u/∂ξ∂η = 0,

which leads to the general solution

  u(x, t) = F(ξ) + G(η) = F(x − ct) + G(x + ct).

In other words, the solution is the sum of a right-traveling function F and a left-traveling function G. "Traveling" means that the shape of these individual arbitrary functions with respect to x stays constant, while the functions are translated left and right with time at the speed c. This was derived by Jean le Rond d'Alembert.

Another way to arrive at this result is to factor the wave equation using two first-order differential operators:

  [∂/∂t − c ∂/∂x] [∂/∂t + c ∂/∂x] u = 0.

Then, for our original equation, we can define

  v ≡ ∂u/∂t + c ∂u/∂x

and find that we must have

  ∂v/∂t − c ∂v/∂x = 0.

This advection equation can be solved by interpreting it as telling us that the directional derivative of v in the (1, −c) direction is 0. This means that the value of v is constant on characteristic lines of the form x + ct = x0, and thus that v must depend only on x + ct, that is, have the form H(x + ct). Then, to solve the first (inhomogeneous) equation relating v to u,

  ∂u/∂t + c ∂u/∂x = H(x + ct),

we can note that its homogeneous solution must be a function of the form F(x − ct), by logic similar to the above. Guessing a particular solution of the form G(x + ct), we find that

  ∂G(x + ct)/∂t + c ∂G(x + ct)/∂x = H(x + ct).

Expanding out the left side, rearranging terms, then using the change of variables s = x + ct simplifies the equation to

  G′(s) = H(s)/(2c).

This means we can find a particular solution G of the desired form by integration. Thus, we have again shown that u obeys u(x, t) = F(x - ct) + G(x + ct).

For an initial-value problem, the arbitrary functions F and G can be determined to satisfy initial conditions:

  u(x, 0) = f(x),  u_t(x, 0) = g(x).

The result is d'Alembert's formula:

  u(x, t) = [f(x − ct) + f(x + ct)]/2 + (1/2c) ∫_{x−ct}^{x+ct} g(s) ds.

In the classical sense, if f(x) ∈ Ck, and g(x) ∈ Ck−1, then u(t, x) ∈ Ck. However, the waveforms F and G may also be generalized functions, such as the delta-function. In that case, the solution may be interpreted as an impulse that travels to the right or the left.
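D'Alembert's formula can be implemented directly. In the sketch below (the test functions f = sin, g = 0 and all parameter values are our own choices), the formula is checked against the exact standing-wave solution sin(x)·cos(ct):

```python
import numpy as np

# Direct implementation of d'Alembert's formula:
#   u(x,t) = (f(x-ct) + f(x+ct))/2 + (1/(2c)) * integral of g over [x-ct, x+ct].
def dalembert(f, g, c, x, t, n=4001):
    s = np.linspace(x - c*t, x + c*t, n)
    gs = g(s)
    integral = float(np.sum((gs[1:] + gs[:-1]) * np.diff(s)) / 2.0)  # trapezoid rule
    return 0.5 * (f(x - c*t) + f(x + c*t)) + integral / (2.0 * c)

c = 1.5
u_val = dalembert(np.sin, lambda s: np.zeros_like(s), c, x=0.4, t=0.9)
exact = np.sin(0.4) * np.cos(c * 0.9)   # sin(x)cos(ct) is an exact solution with g = 0
```

With g = 0 the integral term vanishes and the half-sum of shifted sines reproduces sin(x)·cos(ct) by the product-to-sum identity.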

The basic wave equation is a linear differential equation, and so it will adhere to the superposition principle. This means that the net displacement caused by two or more waves is the sum of the displacements which would have been caused by each wave individually. In addition, the behavior of a wave can be analyzed by breaking up the wave into components, e.g. the Fourier transform breaks up a wave into sinusoidal components.

Plane-wave eigenmodes

Another way to solve the one-dimensional wave equation is to first analyze its frequency eigenmodes. A so-called eigenmode is a solution that oscillates in time with a well-defined constant angular frequency ω, so that the temporal part of the wave function takes the form e^{−iωt} = cos(ωt) − i sin(ωt), while the amplitude is a function f(x) of the spatial variable x, giving a separation of variables for the wave function:

  u_ω(x, t) = e^{−iωt} f(x).

This produces an ordinary differential equation for the spatial part f(x):

  −ω² f(x) = c² f″(x).

Therefore,

  f″(x) = −(ω/c)² f(x) = −k² f(x),

which is precisely an eigenvalue equation for f(x), hence the name eigenmode. Known as the Helmholtz equation, it has the well-known plane-wave solutions

  f(x) = A e^{ikx} + B e^{−ikx},

with wave number k = ω/c.

The total wave function for this eigenmode is then the linear combination

  u(x, t) = (A e^{ikx} + B e^{−ikx}) e^{−iωt},

where complex numbers A, B depend in general on any initial and boundary conditions of the problem.

Eigenmodes are useful in constructing a full solution to the wave equation, because each of them evolves in time trivially with the phase factor e^{−iωt}, so that a full solution can be decomposed into an eigenmode expansion, or, in terms of the plane waves,

  u(x, t) = ∫ s₊(ω) e^{−iω(t − x/c)} dω + ∫ s₋(ω) e^{−iω(t + x/c)} dω,

which is exactly in the same form as in the algebraic approach. The functions s±(ω) are known as the Fourier components and are determined by initial and boundary conditions. This is a so-called frequency-domain method, alternative to direct time-domain propagations, such as the FDTD method, of the wave packet u(x, t), which is complete for representing waves in the absence of time dilations. Completeness of the Fourier expansion for representing waves in the presence of time dilations has been challenged by chirp wave solutions allowing for time variation of ω. The chirp wave solutions seem particularly implied by very large but previously inexplicable radar residuals in the flyby anomaly and differ from the sinusoidal solutions in being receivable at any distance only at proportionally shifted frequencies and time dilations, corresponding to past chirp states of the source.

Vectorial wave equation in three space dimensions

The vectorial wave equation (from which the scalar wave equation can be directly derived) can be obtained by applying a force equilibrium to an infinitesimal volume element. If the medium has a modulus of elasticity E that is homogeneous (i.e. independent of x) within the volume element, then its stress tensor is given by T = E ∇u, for a vectorial elastic deflection u(x, t). The local equilibrium of:

  1. the tension force due to the deflection, div T = E Δu, and
  2. the inertial force caused by the local acceleration, ρ ∂²u/∂t²,

can be written as

  ρ ∂²u/∂t² − E Δu = 0.

By merging density ρ and elasticity module E, the sound velocity c = √(E/ρ) results (material law). After insertion, the well-known governing wave equation for a homogeneous medium follows:

  ∂²u/∂t² − c² Δu = 0.

(Note: Instead of the vectorial u(x, t), only its scalar component along one axis can be used, i.e. waves travelling only along the x axis, and the scalar wave equation follows as ∂²u/∂t² = c² ∂²u/∂x².)

The above vectorial partial differential equation of the 2nd order delivers two mutually independent solutions. From the quadratic velocity term c² it can be seen that two waves travelling in opposite directions, with speeds +c and −c, are possible; hence results the designation "two-way wave equation". It can be shown for plane longitudinal wave propagation that the synthesis of two one-way wave equations leads to the general two-way wave equation: with the d'Alembert operator, the equation factors as

  (∂/∂t − c ∂/∂x)(∂/∂t + c ∂/∂x) u = 0.

Therefore, the vectorial 1st-order one-way wave equation, with waves travelling in a pre-defined propagation direction, results as

  ∂u/∂t − c ∂u/∂x = 0.

Scalar wave equation in three space dimensions

Swiss mathematician and physicist Leonhard Euler (b. 1707) discovered the wave equation in three space dimensions.

A solution of the initial-value problem for the wave equation in three space dimensions can be obtained from the corresponding solution for a spherical wave. The result can then be also used to obtain the same solution in two space dimensions.

Spherical waves

To obtain a solution with constant frequencies, apply the Fourier transform

  Ψ(r, ω) = ∫ u(r, t) e^{iωt} dt,

which transforms the wave equation into an elliptic partial differential equation of the form:

  (∇² + ω²/c²) Ψ(r, ω) = 0.

This is the Helmholtz equation and can be solved using separation of variables. In spherical coordinates this leads to a separation of the radial and angular variables, writing the solution as:

  Ψ(r, ω) = Σ_{l,m} f_{lm}(r) Y_{lm}(θ, φ).

The angular part of the solution takes the form of spherical harmonics, and the radial function satisfies

  [d²/dr² + (2/r) d/dr + k² − l(l + 1)/r²] f_l(r) = 0,

independent of m, with k² = ω²/c². Substituting f_l(r) = u_l(r)/√r transforms the equation into

  [d²/dr² + (1/r) d/dr + k² − (l + 1/2)²/r²] u_l(r) = 0,

which is the Bessel equation.

Example

Consider the case l = 0. Then there is no angular dependence and the amplitude depends only on the radial distance, i.e., Ψ(r, t) → u(r, t). In this case, the wave equation reduces to

  ∂²u/∂t² − c² (∂²u/∂r² + (2/r) ∂u/∂r) = 0.

This equation can be rewritten as

  ∂²(ru)/∂t² − c² ∂²(ru)/∂r² = 0,

where the quantity ru satisfies the one-dimensional wave equation. Therefore, there are solutions in the form

  u(r, t) = (1/r) F(r − ct) + (1/r) G(r + ct),

where F and G are general solutions to the one-dimensional wave equation and can be interpreted as an outgoing and an incoming spherical wave, respectively. Outgoing waves can be generated by a point source, and they make possible sharp signals whose form is altered only by a decrease in amplitude as r increases (see an illustration of a spherical wave on the top right). Such waves exist only in cases of space with odd dimensions.
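This reduction can be spot-checked numerically. In the sketch below (the pulse shape F and all parameter values are our own choices), an outgoing wave u = F(r − ct)/r is tested against the radial wave equation by central differences:

```python
import numpy as np

# Our own check: u(r,t) = F(r - ct)/r should satisfy u_tt = c^2 (u_rr + (2/r) u_r).
c = 1.0
F = lambda s: np.exp(-s**2)            # arbitrary outgoing pulse shape
u = lambda r, t: F(r - c*t) / r

r0, t0, h = 2.0, 1.2, 1e-4             # evaluation point away from the origin
u_tt = (u(r0, t0+h) - 2*u(r0, t0) + u(r0, t0-h)) / h**2
u_rr = (u(r0+h, t0) - 2*u(r0, t0) + u(r0-h, t0)) / h**2
u_r  = (u(r0+h, t0) - u(r0-h, t0)) / (2*h)
residual = abs(u_tt - c**2 * (u_rr + (2.0 / r0) * u_r))
```

The residual vanishes up to discretization error, reflecting the fact that ru satisfies the one-dimensional wave equation.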

For physical examples of solutions to the 3D wave equation that possess angular dependence, see dipole radiation.

Monochromatic spherical wave

Cut-away of spherical wavefronts, with a wavelength of 10 units, propagating from a point source

Although the word "monochromatic" is not exactly accurate, since it refers to light or electromagnetic radiation with well-defined frequency, the spirit is to discover the eigenmode of the wave equation in three dimensions. Following the derivation in the previous section on plane-wave eigenmodes, if we again restrict our solutions to spherical waves that oscillate in time with a well-defined constant angular frequency ω, then the transformed function ru(r, t) has simply plane-wave solutions:

  ru(r, t) = A e^{i(ωt ± kr)},

or

  u(r, t) = (A/r) e^{i(ωt ± kr)}.

From this we can observe that the peak intensity of the spherical-wave oscillation, characterized as the squared wave amplitude

  I = |u(r, t)|² = |A|²/r²,

drops at the rate proportional to 1/r², an example of the inverse-square law.

Solution of a general initial-value problem

The wave equation is linear in u and is left unaltered by translations in space and time. Therefore, we can generate a great variety of solutions by translating and summing spherical waves. Let φ(ξ, η, ζ) be an arbitrary function of three independent variables, and let the spherical wave form F be a delta function. Let a family of spherical waves have center at (ξ, η, ζ), and let r be the radial distance from that point. Thus

  r² = (x − ξ)² + (y − η)² + (z − ζ)².

If u is a superposition of such waves with weighting function φ, then

  u(t, x, y, z) = (1/4πc) ∭ φ(ξ, η, ζ) (δ(r − ct)/r) dξ dη dζ;

the denominator 4πc is a convenience.

From the definition of the delta function, u may also be written as

  u(t, x, y, z) = (t/4π) ∬_S φ(x + ctα, y + ctβ, z + ctγ) dω,

where α, β, and γ are coordinates on the unit sphere S, and dω is the area element on S. This result has the interpretation that u(t, x) is t times the mean value of φ on a sphere of radius ct centered at x:

  u(t, x, y, z) = t M_{ct}[φ].

It follows that

  u(0, x, y, z) = 0,  u_t(0, x, y, z) = φ(x, y, z).

The mean value is an even function of t, and hence if

  v(t, x, y, z) = ∂/∂t ( t M_{ct}[φ] ),

then

  v(0, x, y, z) = φ(x, y, z),  v_t(0, x, y, z) = 0.

These formulas provide the solution for the initial-value problem for the wave equation. They show that the solution at a given point P = (t, x, y, z) depends only on the data on the sphere of radius ct that is intersected by the light cone drawn backwards from P. It does not depend upon data in the interior of this sphere. Thus the interior of the sphere is a lacuna for the solution. This phenomenon is called Huygens' principle. It holds only for odd numbers of space dimensions; for one space dimension, the integration is performed over the boundary of an interval with respect to the Dirac measure.

Scalar wave equation in two space dimensions

In two space dimensions, the wave equation is

  u_tt = c² (u_xx + u_yy).

We can use the three-dimensional theory to solve this problem if we regard u as a function in three dimensions that is independent of the third dimension. If

  u(0, x, y) = 0,  u_t(0, x, y) = φ(x, y),

then the three-dimensional solution formula becomes

  u(t, x, y) = t M_{ct}[φ] = (t/4π) ∬_S φ(x + ctα, y + ctβ) dω,

where α and β are the first two coordinates on the unit sphere, and dω is the area element on the sphere. This integral may be rewritten as a double integral over the disc D with center (x, y) and radius ct:

  u(t, x, y) = (1/2πc) ∬_D φ(x + ξ, y + η) / √(c²t² − ξ² − η²) dξ dη.

It is apparent that the solution at (t, x, y) depends not only on the data on the light cone where

  (x − ξ)² + (y − η)² = c² t²,

but also on data that are interior to that cone.

Scalar wave equation in general dimension and Kirchhoff's formulae

We want to find solutions to utt − Δu = 0 for u : Rn × (0, ∞) → R with u(x, 0) = g(x) and ut(x, 0) = h(x).

Odd dimensions

Assume n ≥ 3 is an odd integer, and g ∈ C^{m+1}(R^n), h ∈ C^m(R^n) for m = (n + 1)/2. Let γ_n = 1 × 3 × 5 × ⋯ × (n − 2) and let

  u(x, t) = (1/γ_n) [ ∂_t ((1/t) ∂_t)^{(n−3)/2} ( t^{n−2} ⨍_{∂B(x,t)} g dS ) + ((1/t) ∂_t)^{(n−3)/2} ( t^{n−2} ⨍_{∂B(x,t)} h dS ) ],

where ⨍_{∂B(x,t)} denotes the average over the sphere of radius t about x.

Then

  • u ∈ C²(R^n × [0, ∞)),
  • u_tt − Δu = 0 in R^n × (0, ∞),
  • lim_{(x,t)→(x⁰,0⁺)} u(x, t) = g(x⁰),
  • lim_{(x,t)→(x⁰,0⁺)} u_t(x, t) = h(x⁰).

Even dimensions

Assume n ≥ 2 is an even integer and g ∈ C^{m+1}(R^n), h ∈ C^m(R^n), for m = (n + 2)/2. Let γ_n = 2 × 4 × ⋯ × n and let

  u(x, t) = (1/γ_n) [ ∂_t ((1/t) ∂_t)^{(n−2)/2} ( t^n ⨍_{B(x,t)} g(y)/√(t² − |y − x|²) dy ) + ((1/t) ∂_t)^{(n−2)/2} ( t^n ⨍_{B(x,t)} h(y)/√(t² − |y − x|²) dy ) ],

where ⨍_{B(x,t)} denotes the average over the ball of radius t about x; then

  • u ∈ C²(R^n × [0, ∞)),
  • u_tt − Δu = 0 in R^n × (0, ∞).

Green's function

Consider the inhomogeneous wave equation in n dimensions,

  ∂_t² u − c² Δu = s(x, t).

By rescaling time, we can set the wave speed c = 1.

Since the wave equation has order 2 in time, there are two impulse responses: an acceleration impulse and a velocity impulse. The effect of inflicting an acceleration impulse is to suddenly change the wave velocity ∂_t u. The effect of inflicting a velocity impulse is to suddenly change the wave displacement u.

For the acceleration impulse, s(x, t) = δ(t) δ^n(x), where δ is the Dirac delta function. The solution to this case is called the Green's function G for the wave equation.

For the velocity impulse, s(x, t) = δ′(t) δ^n(x), so if we solve for the Green's function G, the solution for this case is just ∂_t G.

Duhamel's principle

The main use of Green's functions is to solve initial value problems by Duhamel's principle, both for the homogeneous and the inhomogeneous case.

Given the Green's function G and initial conditions u(x, 0) and ∂_t u(x, 0), the solution to the homogeneous wave equation is

  u(·, t) = (∂_t G)(·, t) ∗ u(·, 0) + G(·, t) ∗ ∂_t u(·, 0),

where the asterisk is convolution in space. More explicitly,

  u(x, t) = ∫ ∂_t G(x − x′, t) u(x′, 0) dx′ + ∫ G(x − x′, t) ∂_t u(x′, 0) dx′.

For the inhomogeneous case, the solution has one additional term by convolution over spacetime:

  ∫∫ G(x − x′, t − t′) s(x′, t′) dx′ dt′.
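The convolution structure can be sketched numerically in one dimension with c = 1 (a construction of our own; the initial velocity g is an arbitrary choice). The one-dimensional forward Green's function is G(x, t) = (1/2)·θ(t − |x|), and convolving it in space with g reproduces the velocity term (1/2)·∫_{x−t}^{x+t} g(s) ds of d'Alembert's formula:

```python
import numpy as np

# Our own sketch: (G * g)(x0, t) with G(x,t) = 0.5 * heaviside(t - |x|) should equal
# 0.5 * integral of g over [x0 - t, x0 + t].
t, x0 = 0.8, 0.3
g = np.cos                               # arbitrary initial velocity profile

s = np.linspace(-10.0, 10.0, 200001)     # spatial quadrature grid
ds = s[1] - s[0]
G = 0.5 * (np.abs(x0 - s) <= t)          # G(x0 - s, t): indicator of the causal interval
conv = float(np.sum(G * g(s)) * ds)      # (G * g)(x0, t) by direct quadrature

exact = 0.5 * (np.sin(x0 + t) - np.sin(x0 - t))   # closed form of the interval integral
err = abs(conv - exact)
```

The agreement illustrates why Duhamel's principle recovers the familiar one-dimensional initial-value solution.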

Solution by Fourier transform

By a Fourier transform,

  G(x, t) = ∫ (dω d^n k / (2π)^{n+1}) e^{i(k·x − ωt)} / (|k|² − ω²).

The ω term can be integrated by the residue theorem. It would require us to perturb the integral slightly either by ω → ω + iε or by ω → ω − iε, because it is an improper integral. One perturbation gives the forward solution, and the other the backward solution. The forward solution gives

  G(x, t) = θ(t) ∫ (d^n k / (2π)^n) e^{ik·x} sin(|k|t)/|k|.

The integral can be solved by analytically continuing the Poisson kernel, giving a closed form whose normalization constant is half the surface area of an (n + 1)-dimensional hypersphere.

Solutions in particular dimensions

We can relate the Green's function in n dimensions to the Green's function in n + 1 or n + 2 dimensions (lowering the dimension is possible in any case; raising is possible in spherical symmetry).

Lowering dimensions

Given a solution u of a differential equation in n dimensions, we can trivially extend it to n + 1 dimensions by setting the additional dimension to be constant: u(x₁, …, x_n, x_{n+1}, t) = u(x₁, …, x_n, t). Since a point source in n dimensions is a line source in n + 1 dimensions, the Green's function in n + 1 dimensions integrates to the Green's function in n dimensions:

  G_n(x, t) = ∫_{−∞}^{∞} G_{n+1}(x, z, t) dz.

Raising dimensions

The Green's function in n + 2 dimensions can be related to the Green's function in n dimensions. By spherical symmetry, G_n depends on r = |x| only. Integrating in polar coordinates,

  G_n(r, t) = ∬ G_{n+2}(√(r² + z₁² + z₂²), t) dz₁ dz₂ = 2π ∫_r^∞ G_{n+2}(ρ, t) ρ dρ,

where in the last equality we made the change of variables ρ² = r² + z₁² + z₂². Thus, we obtain the recurrence relation

  G_{n+2}(r, t) = −(1/2πr) ∂G_n(r, t)/∂r.

Solutions in D = 1, 2, 3

When n = 1, the integrand in the Fourier transform is the sinc function sin(kt)/k, and the forward solution is

  G₁(x, t) = (1/2) θ(t − |x|) = (1/4) [sgn(x + t) − sgn(x − t)]  for t > 0,

where sgn is the sign function and θ is the unit step function.

The dimension can be raised to give the n = 3 case

  G₃(r, t) = −(1/2πr) ∂G₁/∂r = δ(t − r)/(4πr),

and similarly for the backward solution. This can be integrated down by one dimension to give the n = 2 case

  G₂(r, t) = θ(t − r)/(2π √(t² − r²)).

Wavefronts and wakes

In the n = 1 case, the Green's function solution is the sum of two wavefronts moving in opposite directions.

In odd dimensions, the forward solution is nonzero only on the light cone r = ct. As the dimension increases, the shape of the wavefront becomes increasingly complex, involving higher and higher derivatives of the Dirac delta function. For example, with the wave speed c restored, the three-dimensional forward solution is

  G₃(r, t) = δ(t − r/c)/(4πc²r),

where r = |x|.

In even dimensions, the forward solution is nonzero in the entire region r < ct behind the wavefront, which is called a wake. In two dimensions, with the wave speed restored, the wake has equation

  G₂(r, t) = θ(ct − r)/(2πc √(c²t² − r²)).

The wavefront itself also involves increasingly higher derivatives of the Dirac delta function.

This means that a general Huygens' principle – the wave displacement at a point in spacetime depends only on the state at points on the characteristic rays passing through that point – only holds in odd dimensions. A physical interpretation is that signals transmitted by waves remain undistorted in odd dimensions, but are distorted in even dimensions.

Hadamard's conjecture states that this generalized Huygens' principle still holds in all odd dimensions even when the coefficients in the wave equation are no longer constant. It is not strictly correct, but it is correct for certain families of coefficients.

Problems with boundaries

One space dimension

Reflection and transmission at the boundary of two media

For an incident wave traveling from one medium (where the wave speed is c1) to another medium (where the wave speed is c2), one part of the wave will transmit into the second medium, while another part reflects back into the other direction and stays in the first medium. The amplitude of the transmitted wave and the reflected wave can be calculated by using the continuity condition at the boundary.

Consider the component of the incident wave with an angular frequency of ω, which has the waveform

  A e^{i(k₁x − ωt)},  k₁ = ω/c₁.

At t = 0, the incident wave reaches the boundary between the two media at x = 0. Therefore, the corresponding reflected wave and the transmitted wave will have the waveforms

  B e^{i(−k₁x − ωt)}  and  C e^{i(k₂x − ωt)},  k₂ = ω/c₂.

The continuity condition at the boundary is

  u(0⁻, t) = u(0⁺, t),  u_x(0⁻, t) = u_x(0⁺, t).

This gives the equations

  A + B = C,  k₁(A − B) = k₂C,

and we have the reflectivity and transmissivity

  B/A = (c₂ − c₁)/(c₂ + c₁),  C/A = 2c₂/(c₁ + c₂).

When c₂ < c₁, the reflected wave has a reflection phase change of 180°, since B/A < 0. The energy conservation can be verified by

  (B/A)² + (c₁/c₂)(C/A)² = 1.

The above discussion holds true for any component, regardless of its angular frequency ω.

The limiting case of c2 = 0 corresponds to a "fixed end" that does not move, whereas the limiting case of c2 → ∞ corresponds to a "free end".
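The boundary formulas B/A = (c₂ − c₁)/(c₂ + c₁) and C/A = 2c₂/(c₁ + c₂) can be checked directly; the wave speeds below are assumed values of our own for illustration:

```python
# Our own check of the reflection/transmission coefficients (A normalized to 1):
#   B/A = (c2 - c1)/(c2 + c1),  C/A = 2*c2/(c1 + c2),
# with energy balance (B/A)^2 + (c1/c2)*(C/A)^2 = 1.
c1, c2 = 3.0, 1.0                    # slower second medium: expect a 180-degree phase flip
refl = (c2 - c1) / (c2 + c1)         # reflectivity B/A
trans = 2.0 * c2 / (c1 + c2)         # transmissivity C/A
energy = refl**2 + (c1 / c2) * trans**2
continuity = abs((1.0 + refl) - trans)          # A + B = C with A = 1
slope = abs((1.0 - refl) / c1 - trans / c2)     # k1(A - B) = k2*C, i.e. (A-B)/c1 = C/c2
```

Both continuity conditions hold, the energy balance comes out to exactly 1, and the negative reflectivity exhibits the 180° phase change for c₂ < c₁.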

The Sturm–Liouville formulation

A flexible string that is stretched between two points x = 0 and x = L satisfies the wave equation for t > 0 and 0 < x < L. On the boundary points, u may satisfy a variety of boundary conditions. A general form that is appropriate for applications is

  −u_x(t, 0) + a u(t, 0) = 0,
  u_x(t, L) + b u(t, L) = 0,

where a and b are non-negative. The case where u is required to vanish at an endpoint (i.e. "fixed end") is the limit of this condition when the respective a or b approaches infinity. The method of separation of variables consists in looking for solutions of this problem in the special form

  u(t, x) = T(t) v(x).

A consequence is that

  T″/(c²T) = v″/v = −λ.

The eigenvalue λ must be determined so that there is a non-trivial solution of the boundary-value problem

  v″ + λv = 0,
  −v′(0) + a v(0) = 0,  v′(L) + b v(L) = 0.

This is a special case of the general problem of Sturm–Liouville theory. If a and b are positive, the eigenvalues are all positive, and the solutions are trigonometric functions. A solution that satisfies square-integrable initial conditions for u and ut can be obtained from expansion of these functions in the appropriate trigonometric series.
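The trigonometric eigenfunctions can be confirmed numerically. The sketch below is our own (it uses the fixed-end limit, i.e. a, b → ∞, where the eigenvalues are λ_n = (nπ/L)²): discretize −d²/dx² on (0, L) by finite differences and compare its lowest eigenvalues to the exact ones.

```python
import numpy as np

# Our own sketch: -v'' = lam*v with v(0) = v(L) = 0, discretized on N interior points.
L, N = 1.0, 400
h = L / (N + 1)
main = 2.0 * np.ones(N) / h**2          # diagonal of the -d^2/dx^2 matrix
off = -1.0 * np.ones(N - 1) / h**2      # off-diagonals
A = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)

lam = np.sort(np.linalg.eigvalsh(A))[:3]                 # three lowest eigenvalues
exact = np.array([(n * np.pi / L) ** 2 for n in (1, 2, 3)])
rel_err = float(np.max(np.abs(lam - exact) / exact))
```

The computed eigenvalues match (nπ/L)² to a few parts in 10⁵ at this resolution, and the corresponding eigenvectors approximate the sine modes sin(nπx/L).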

Several space dimensions

A solution of the wave equation in two dimensions with a zero-displacement boundary condition along the entire outer edge

The one-dimensional initial-boundary value theory may be extended to an arbitrary number of space dimensions. Consider a domain D in m-dimensional x space, with boundary B. Then the wave equation is to be satisfied if x is in D, and t > 0. On the boundary of D, the solution u shall satisfy

  ∂u/∂n + a u = 0,

where n is the unit outward normal to B, and a is a non-negative function defined on B. The case where u vanishes on B is a limiting case for a approaching infinity. The initial conditions are

  u(0, x) = f(x),  u_t(0, x) = g(x),

where f and g are defined in D. This problem may be solved by expanding f and g in the eigenfunctions of the Laplacian in D, which satisfy the boundary conditions. Thus the eigenfunction v satisfies

  ∇ ⋅ ∇v + λv = 0

in D, and

  ∂v/∂n + a v = 0

on B.

In the case of two space dimensions, the eigenfunctions may be interpreted as the modes of vibration of a drumhead stretched over the boundary B. If B is a circle, then these eigenfunctions have an angular component that is a trigonometric function of the polar angle θ, multiplied by a Bessel function (of integer order) of the radial component. Further details are in Helmholtz equation.

If the boundary is a sphere in three space dimensions, the angular components of the eigenfunctions are spherical harmonics, and the radial components are Bessel functions of half-integer order.

Inhomogeneous wave equation in one dimension

The inhomogeneous wave equation in one dimension is

  u_tt(x, t) − c² u_xx(x, t) = s(x, t),

with initial conditions

  u(x, 0) = f(x),  u_t(x, 0) = g(x).

The function s(x, t) is often called the source function because in practice it describes the effects of the sources of waves on the medium carrying them. Physical examples of source functions include the force driving a wave on a string, or the charge or current density in the Lorenz gauge of electromagnetism.

One method to solve the initial-value problem (with the initial values as posed above) is to take advantage of a special property of the wave equation in an odd number of space dimensions, namely that its solutions respect causality. That is, for any point (xi, ti), the value of u(xi, ti) depends only on the values of f(xi + cti) and f(xicti) and the values of the function g(x) between (xicti) and (xi + cti). This can be seen in d'Alembert's formula, stated above, where these quantities are the only ones that show up in it. Physically, if the maximum propagation speed is c, then no part of the wave that cannot propagate to a given point by a given time can affect the amplitude at the same point and time.

In terms of finding a solution, this causality property means that for any given point on the line being considered, the only area that needs to be considered is the area encompassing all the points that could causally affect the point being considered. Denote the area that causally affects point (xi, ti) as R_C. Suppose we integrate the inhomogeneous wave equation over this region:

  ∬_{R_C} (c² u_xx(x, t) − u_tt(x, t)) dx dt = ∬_{R_C} −s(x, t) dx dt.

To simplify this greatly, we can use Green's theorem to simplify the left side to get the following:

  ∮ (c² u_x(x, t) dt + u_t(x, t) dx) = ∬_{R_C} −s(x, t) dx dt,

where the closed path, traversed counterclockwise, consists of the three sides of the triangular causality region. These turn out to be fairly easy to compute. For the base of the triangle (t = 0),

  ∫_{xi−cti}^{xi+cti} u_t(x, 0) dx = ∫_{xi−cti}^{xi+cti} g(x) dx.

In the above, the term to be integrated with respect to time disappears because the time interval involved is zero, thus dt = 0.

For the other two sides of the region, it is worth noting that x ± ct is a constant, namely xi ± cti, where the sign is chosen appropriately. Using this, we can get the relation dx ± c dt = 0, again choosing the right sign, so that along the edge x + ct = xi + cti the integrand becomes an exact differential:

  ∫_{L₁} (c² u_x dt + u_t dx) = −c ∫_{L₁} du = −c [u(xi, ti) − f(xi + cti)].

And similarly for the final boundary segment, where x − ct = xi − cti:

  ∫_{L₂} (c² u_x dt + u_t dx) = c ∫_{L₂} du = −c [u(xi, ti) − f(xi − cti)].

Adding the three results together and putting them back in the original integral gives

  ∫_{xi−cti}^{xi+cti} g(x) dx − 2c u(xi, ti) + c f(xi + cti) + c f(xi − cti) = −∬_{R_C} s(x, t) dx dt.

Solving for u(xi, ti), we arrive at

  u(xi, ti) = [f(xi + cti) + f(xi − cti)]/2 + (1/2c) ∫_{xi−cti}^{xi+cti} g(x) dx + (1/2c) ∫_0^{ti} ∫_{xi−c(ti−t)}^{xi+c(ti−t)} s(x, t) dx dt.

In the last equation of the sequence, the bounds of the integral over the source function have been made explicit. Looking at this solution, which is valid for all choices (xi, ti) compatible with the wave equation, it is clear that the first two terms are simply d'Alembert's formula, as stated above as the solution of the homogeneous wave equation in one dimension. The difference is in the third term, the integral over the source.
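The final formula can be spot-checked numerically. In the sketch below, the test case is our own choice: take the exact solution u = sin(x)·sin(t) with c = 2, so that s = u_tt − c²u_xx = 3·sin(x)·sin(t), f(x) = u(x, 0) = 0 and g(x) = u_t(x, 0) = sin(x), and evaluate the three terms by quadrature over the causal triangle:

```python
import numpy as np

# Our own test case: u = sin(x) sin(t), c = 2, hence s = 3 sin(x) sin(t), f = 0, g = sin.
c = 2.0
xi, ti = 0.7, 1.1
f = lambda x: 0.0
g = np.sin
s_src = lambda x, t: 3.0 * np.sin(x) * np.sin(t)

def trap(y, x):
    """Trapezoid rule for samples y on grid x."""
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

# d'Alembert terms
xs = np.linspace(xi - c*ti, xi + c*ti, 4001)
term_f = 0.5 * (f(xi - c*ti) + f(xi + c*ti))
term_g = trap(g(xs), xs) / (2.0 * c)

# source term: integral of s over the backward causal triangle
ts = np.linspace(0.0, ti, 801)
inner = np.empty_like(ts)
for j, t in enumerate(ts):
    xs2 = np.linspace(xi - c*(ti - t), xi + c*(ti - t), 801)
    inner[j] = trap(s_src(xs2, t), xs2)
term_s = trap(inner, ts) / (2.0 * c)

u_val = term_f + term_g + term_s
exact = np.sin(xi) * np.sin(ti)
err = abs(u_val - exact)
```

The quadrature reproduces the exact value sin(xi)·sin(ti) to well below 10⁻³, confirming that the source integral over the causal triangle is the only correction to d'Alembert's formula.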

Further generalizations

Elastic waves

The elastic wave equation (also known as the Navier–Cauchy equation) in three dimensions describes the propagation of waves in an isotropic homogeneous elastic medium. Most solid materials are elastic, so this equation describes such phenomena as seismic waves in the Earth and ultrasonic waves used to detect flaws in materials. While linear, this equation has a more complex form than the equations given above, as it must account for both longitudinal and transverse motion:

  ρ ∂²u/∂t² = f + (λ + 2μ) ∇(∇ ⋅ u) − μ ∇ × (∇ × u),

where:

λ and μ are the so-called Lamé parameters describing the elastic properties of the medium,
ρ is the density,
f is the source function (driving force),
u is the displacement vector.

By using ∇ × (∇ × u) = ∇(∇ ⋅ u) − ∇ ⋅ ∇ u = ∇(∇ ⋅ u) − ∆u, the elastic wave equation can be rewritten into the more common form of the Navier–Cauchy equation.

Note that in the elastic wave equation, both force and displacement are vector quantities. Thus, this equation is sometimes known as the vector wave equation. As an aid to understanding, the reader will observe that if f and ∇ ⋅ u are set to zero, this becomes (effectively) Maxwell's equation for the propagation of the electric field E, which has only transverse waves.

Dispersion relation

In dispersive wave phenomena, the speed of wave propagation varies with the wavelength of the wave, which is reflected by a dispersion relation

  ω = ω(k),

where ω is the angular frequency, and k is the wavevector describing plane-wave solutions. For light waves, the dispersion relation is ω = ±c|k|, but in general, the constant speed c gets replaced by a variable phase velocity:

  v_p = ω(k)/k.
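The distinction can be made concrete with a small numeric illustration (the dispersive relation below is an assumed Klein–Gordon-like example of our own, not taken from the article):

```python
import numpy as np

# Our own illustration: for omega = c|k| the phase velocity omega/k is the constant c,
# while for the assumed relation omega = sqrt(c^2 k^2 + m^2) it varies with k.
c, m = 1.0, 2.0
k = np.array([0.5, 1.0, 2.0, 4.0])
v_nondispersive = c * np.abs(k) / k                 # constant: every wavelength travels at c
v_dispersive = np.sqrt(c**2 * k**2 + m**2) / k      # wavelength-dependent phase velocity
spread = float(v_dispersive.max() - v_dispersive.min())
```

In the nondispersive case all components travel together and a wave packet keeps its shape; in the dispersive case the phase velocities differ by a finite spread, so a packet spreads out as it propagates.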

Molecular-scale electronics

From Wikipedia, the free encyclopedia

Molecular-scale electronics, also called single-molecule electronics, is a branch of nanotechnology that uses single molecules, or nanoscale collections of single molecules, as electronic components. Because single molecules are among the smallest stable structures imaginable, this miniaturization is the ultimate goal for shrinking electrical circuits.

The field is often termed simply as "molecular electronics", but this term is also used to refer to the distantly related field of conductive polymers and organic electronics, which uses the properties of molecules to affect the bulk properties of a material. A nomenclature distinction has been suggested so that molecular materials for electronics refers to this latter field of bulk applications, while molecular-scale electronics refers to the nanoscale single-molecule applications treated here.

Fundamental concepts

Electronics has conventionally been made from bulk materials. Ever since their invention in 1958, the performance and complexity of integrated circuits have undergone exponential growth, a trend named Moore's law, as the feature sizes of the embedded components have shrunk accordingly. As the structures shrink, the sensitivity to deviations increases. In a few technology generations, the composition of the devices must be controlled to a precision of a few atoms for the devices to work. With bulk methods growing increasingly demanding and costly as they near inherent limits, the idea was born that the components could instead be built up atom by atom in a chemistry lab (bottom up), as opposed to being carved out of bulk material (top down). This is the idea behind molecular electronics, with the ultimate miniaturization being components contained in single molecules.

In single-molecule electronics, the bulk material is replaced by single molecules. Instead of forming structures by removing or applying material after a pattern scaffold, the atoms are put together in a chemistry lab. In this way, billions of billions of copies are made simultaneously (typically more than 10²⁰ molecules are made at once) while the composition of the molecules is controlled down to the last atom. The molecules used have properties that resemble conventional electronic components such as a wire, transistor, or rectifier.

Single-molecule electronics is an emerging field, and entire electronic circuits consisting exclusively of molecular-sized compounds are still very far from being realized. However, the unceasing demand for more computing power, along with the inherent limits of lithographic methods as of 2016, make the transition seem unavoidable. Currently, the focus is on discovering molecules with interesting properties and on finding ways to obtain reliable and reproducible contacts between the molecular components and the bulk material of the electrodes.

Theoretical basis

Molecular electronics operates at distances of less than 100 nanometers. Miniaturization down to single molecules brings the scale into a regime where quantum-mechanical effects are important. In conventional electronic components, electrons can be filled in or drawn out more or less like a continuous flow of electric charge. In contrast, in molecular electronics the transfer of a single electron alters the system significantly. For example, when an electron has been transferred from a source electrode to a molecule, the molecule becomes charged, which makes it far harder for the next electron to transfer (see also Coulomb blockade). The significant amount of energy due to this charging must be accounted for when calculating the electronic properties of the setup, and it is highly sensitive to the distances to any conducting surfaces nearby.
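The scale of this charging effect can be sketched numerically. The snippet below (a minimal sketch; the ~1 aF junction capacitance is an assumed illustrative order of magnitude, not a measured figure) compares the single-electron charging energy e²/2C with the thermal energy at room temperature:

```python
# Order-of-magnitude sketch of the single-electron charging energy
# E_C = e^2 / (2C) for a molecular junction, compared with the thermal
# energy k_B * T. The capacitance is an assumed illustrative value.

E_CHARGE = 1.602176634e-19   # elementary charge, C
K_B = 1.380649e-23           # Boltzmann constant, J/K

def charging_energy_ev(capacitance_farads: float) -> float:
    """Charging energy e^2 / (2C), returned in electronvolts."""
    return E_CHARGE**2 / (2.0 * capacitance_farads) / E_CHARGE

def thermal_energy_ev(temperature_kelvin: float) -> float:
    """Thermal energy k_B * T in electronvolts."""
    return K_B * temperature_kelvin / E_CHARGE

# Assumed ~1 aF capacitance for a nanometre-scale junction:
e_c = charging_energy_ev(1e-18)       # ~0.08 eV
kt_room = thermal_energy_ev(300.0)    # ~0.026 eV

# Coulomb blockade is observable roughly when E_C >> k_B * T:
print(f"E_C = {e_c:.3f} eV, k_B T(300 K) = {kt_room:.3f} eV")
```

With these assumed numbers the charging energy exceeds the room-temperature thermal energy by a factor of a few, which is why single-electron charging dominates the transport.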

The theory of single-molecule devices is especially interesting since the system under consideration is an open quantum system in nonequilibrium (driven by voltage). In the low bias voltage regime, the nonequilibrium nature of the molecular junction can be ignored, and the current–voltage characteristics of the device can be calculated using the equilibrium electronic structure of the system. However, in stronger bias regimes a more sophisticated treatment is required, as there is no longer a variational principle. In the elastic tunneling case (where the passing electron does not exchange energy with the system), the formalism of Rolf Landauer can be used to calculate the transmission through the system as a function of bias voltage, and hence the current. In inelastic tunneling, a formalism based on the non-equilibrium Green's functions introduced by Leo Kadanoff and Gordon Baym, and independently by Leonid Keldysh, was advanced by Ned Wingreen and Yigal Meir. This Meir–Wingreen formulation has been used to great success in the molecular electronics community to examine the more difficult and interesting cases where the transient electron exchanges energy with the molecular system (for example through electron–phonon coupling or electronic excitations).
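As a concrete illustration of the elastic (Landauer) picture, the sketch below numerically integrates I(V) = (2e/h) ∫ T(E)[f_L(E) − f_R(E)] dE for an assumed Lorentzian transmission through a single molecular level. The level position, broadening, and temperature are illustrative parameters, not data for any real molecule:

```python
# Minimal numerical sketch of Landauer elastic transport:
# I(V) = (2e/h) * integral of T(E) * [f_L(E) - f_R(E)] dE,
# with one molecular level modeled by an assumed Lorentzian transmission.

import math

E_CHARGE = 1.602176634e-19  # C
H_PLANCK = 6.62607015e-34   # J*s
K_B_EV = 8.617333262e-5     # Boltzmann constant, eV/K

def fermi(e_ev, mu_ev, t_kelvin):
    """Fermi-Dirac occupation at energy e_ev for chemical potential mu_ev."""
    return 1.0 / (1.0 + math.exp((e_ev - mu_ev) / (K_B_EV * t_kelvin)))

def transmission(e_ev, level_ev=0.5, gamma_ev=0.05):
    """Assumed Lorentzian transmission through one molecular level."""
    return gamma_ev**2 / ((e_ev - level_ev)**2 + gamma_ev**2)

def landauer_current(bias_v, t_kelvin=300.0, n_grid=4000):
    """Current in amperes; the bias drops symmetrically across the junction."""
    mu_left, mu_right = +bias_v / 2.0, -bias_v / 2.0
    e_min, e_max = -2.0, 2.0            # integration window, eV
    de = (e_max - e_min) / n_grid
    integral = 0.0
    for i in range(n_grid):
        e = e_min + (i + 0.5) * de      # midpoint rule
        integral += transmission(e) * (fermi(e, mu_left, t_kelvin)
                                       - fermi(e, mu_right, t_kelvin))
    integral *= de                       # integral in eV
    # Prefactor (2e/h); the eV integral is converted to joules with E_CHARGE:
    return (2.0 * E_CHARGE / H_PLANCK) * integral * E_CHARGE

# Current rises steeply once the bias window reaches the level at 0.5 eV:
print(landauer_current(0.2), landauer_current(1.4))
```

The current stays small while the molecular level lies outside the bias window and jumps once the window spans the resonance, which is the step-like behavior seen in single-level junctions.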

Further, connecting single molecules reliably to a larger-scale circuit has proven a great challenge and constitutes a significant hindrance to commercialization.

Examples

Molecules used in molecular electronics typically contain many alternating double and single bonds (see also Conjugated system). Such patterns delocalize the molecular orbitals, making it possible for electrons to move freely over the conjugated area.

Wires

This animation of a rotating carbon nanotube shows its 3D structure.

The sole purpose of molecular wires is to electrically connect different parts of a molecular electrical circuit. As the assembly of these and their connection to a macroscopic circuit is still not mastered, the focus of research in single-molecule electronics is primarily on the functionalized molecules: molecular wires are characterized by containing no functional groups and are hence composed of plain repetitions of a conjugated building block. Among these are the carbon nanotubes that are quite large compared to the other suggestions but have shown very promising electrical properties.

The main problem with the molecular wires is to obtain good electrical contact with the electrodes so that electrons can move freely in and out of the wire.

Transistors

Single-molecule transistors are fundamentally different from the ones known from bulk electronics. The gate in a conventional (field-effect) transistor determines the conductance between the source and drain electrode by controlling the density of charge carriers between them, whereas the gate in a single-molecule transistor controls the possibility of a single electron jumping on and off the molecule by modifying the energy of the molecular orbitals. One of the effects of this difference is that the single-molecule transistor is almost binary: it is either on or off. This contrasts with its bulk counterparts, which have a quadratic response to gate voltage.

It is the quantization of charge into electrons that is responsible for the markedly different behavior compared to bulk electronics. Because of the size of a single molecule, the charging due to a single electron is significant and provides means to turn a transistor on or off (see Coulomb blockade). For this to work, the electronic orbitals on the transistor molecule cannot be too well integrated with the orbitals on the electrodes. If they are, an electron cannot be said to be located on the molecule or the electrodes and the molecule will function as a wire.

A popular group of molecules that can work as the semiconducting channel material in a molecular transistor is the oligopolyphenylenevinylenes (OPVs), which work by the Coulomb blockade mechanism when placed between the source and drain electrodes in an appropriate way. Fullerenes work by the same mechanism and have also been commonly used.

Semiconducting carbon nanotubes have also been demonstrated to work as channel material but although molecular, these molecules are sufficiently large to behave almost as bulk semiconductors.

The size of the molecules, and the low temperature at which the measurements are conducted, make the quantum mechanical states well defined. Thus, researchers are investigating whether these quantum mechanical properties can be used for purposes more advanced than simple transistors (e.g. spintronics).

Physicists at the University of Arizona, in collaboration with chemists from the University of Madrid, have designed a single-molecule transistor using a ring-shaped molecule similar to benzene. Physicists at Canada's National Institute for Nanotechnology have designed a single-molecule transistor using styrene. Both groups expect (the designs were experimentally unverified as of June 2005) their respective devices to function at room temperature, and to be controlled by a single electron.

Rectifiers (diodes)

Hydrogen can be removed from individual tetraphenylporphyrin (H2TPP) molecules by applying excess voltage to the tip of a scanning tunneling microscope (STM, a); this removal alters the current–voltage (I–V) curves of TPP molecules, measured using the same STM tip, from diode-like (red curve in b) to resistor-like (green curve). Image c shows a row of TPP, H2TPP and TPP molecules. While scanning image d, excess voltage was applied to H2TPP at the black dot, which instantly removed hydrogen, as shown in the bottom part of d and in the re-scan image e. Such manipulations can be used in single-molecule electronics.

Molecular rectifiers are mimics of their bulk counterparts and have an asymmetric construction so that the molecule can accept electrons in one end but not the other. The molecules have an electron donor (D) in one end and an electron acceptor (A) in the other. This way, the unstable state D+ – A will be more readily made than D – A+. The result is that an electric current can be drawn through the molecule if the electrons are added through the acceptor end, but less easily if the reverse is attempted.
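A toy model can illustrate this asymmetry. The sketch below is deliberately schematic, not the actual transport physics of any D–A molecule: it simply assigns different effective barriers (invented parameters) to the two bias directions and reports the resulting rectification ratio:

```python
# Toy sketch of donor-acceptor rectification: the junction is modeled with
# an effective barrier that depends on the bias direction. All parameter
# values are assumed/illustrative, not fitted to any real molecule.

import math

def diode_like_current(bias_v, barrier_forward=0.3, barrier_reverse=0.7):
    """Current (arbitrary units) with a bias-direction-dependent barrier."""
    barrier = barrier_forward if bias_v >= 0 else barrier_reverse
    sign = 1.0 if bias_v >= 0 else -1.0
    # Simple activated/tunneling-flavoured form: suppressed by the barrier.
    return sign * abs(bias_v) * math.exp(-barrier / max(abs(bias_v), 1e-9))

def rectification_ratio(v):
    """Forward current over reverse current magnitude at the same |bias|."""
    return diode_like_current(v) / abs(diode_like_current(-v))

print(rectification_ratio(0.5))  # > 1: current flows more easily forward
```

The ratio exceeds one at any finite bias because the forward barrier is lower, mirroring the text's point that the D⁺–A state forms more readily than D–A⁺.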

Methods

One of the biggest problems with measuring single molecules is establishing reproducible electrical contact with only one molecule without short-circuiting the electrodes. Because current photolithographic technology is unable to produce electrode gaps small enough to contact both ends of the molecules tested (on the order of nanometers), alternative strategies are applied.

Molecular gaps

One way to produce electrodes with a molecular-sized gap between them is the break junction, in which a thin electrode is stretched until it breaks. Another is electromigration. Here a current is led through a thin wire until it melts and the atoms migrate to produce the gap. Furthermore, the reach of conventional photolithography can be extended by chemically etching or depositing metal onto the electrodes.

Probably the easiest way to conduct measurements on several molecules is to use the tip of a scanning tunneling microscope (STM) to contact molecules adhered at the other end to a metal substrate. Experimental breakthroughs in this approach began with the pioneering work of the Tao group in 1996, who used an Electrochemical Scanning Tunneling Microscope (EC-STM) to observe electron transfer behavior in an iron porphyrin monolayer during redox processes for the first time. When an electrochemical gate voltage was applied, the monolayer thickness exhibited a characteristic bell-shaped change, later theoretically confirmed to result from resonant tunneling effects, providing theoretical support for single-molecule electrochemical control. The scanning tunneling microscopy break junction (STM-BJ) technique developed by the same group in 2003 successfully measured the conductance of a single 4,4'-bipyridine molecule: by precisely controlling the tip-substrate distance via piezoelectric ceramics, repeated formation and breaking of molecular junctions were achieved, transient conductance signals captured by a high-speed acquisition system were compiled into statistical histograms from thousands of measurements, ultimately yielding the molecular conductance. This technique is sensitive to changes in the molecular-electrode interface bonding configuration and, combined with gate voltage modulation, can resolve dynamic processes during redox state transitions. These characteristics make it a core technical method for studying single-molecule electronic devices and charge transport.
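The statistical core of the STM-BJ analysis described above — compiling many transient conductance traces into a histogram whose peak marks the molecular conductance — can be sketched as follows. The trace shape and the 0.01 G₀ plateau value are invented for illustration:

```python
# Sketch of the statistics behind the STM break-junction (STM-BJ) method:
# many simulated breaking traces, each with a noisy conductance plateau,
# are compiled into a histogram whose peak estimates the molecular
# conductance. Trace shapes and the 0.01 G0 plateau value are assumed.

import random
from collections import Counter

random.seed(0)
MOLECULAR_G = 0.01        # assumed plateau conductance, in units of G0

def breaking_trace(n_points=200):
    """One trace: exponential decay interrupted by a molecular plateau."""
    trace = []
    g = 1.0
    plateau_until = random.randint(80, 140)
    for i in range(n_points):
        if 60 <= i <= plateau_until:
            g = MOLECULAR_G * random.uniform(0.8, 1.2)   # noisy plateau
        else:
            g *= 0.85                                    # junction pulling apart
        trace.append(g)
    return trace

def histogram(traces, bin_width=0.002):
    """Count conductance samples into bins of width bin_width (in G0)."""
    counts = Counter()
    for tr in traces:
        for g in tr:
            counts[round(g / bin_width) * bin_width] += 1
    return counts

counts = histogram([breaking_trace() for _ in range(1000)])
# The most-populated bin above the noise floor sits near the plateau value:
peak_bin = max((b for b in counts if b > 0.002), key=lambda b: counts[b])
print(peak_bin)
```

Because each trace spends many samples on the plateau but only a few at any given conductance while pulling apart, the histogram peak recovers the plateau conductance from thousands of noisy traces, as in the 2003 measurement described above.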

Anchoring

A popular way to anchor molecules to the electrodes is to make use of sulfur's high chemical affinity for gold. In these setups, the molecules are synthesized so that sulfur atoms are placed strategically to function as crocodile clips connecting the molecules to the gold electrodes. Though useful, the anchoring is non-specific and thus anchors the molecules randomly to all gold surfaces. Further, the contact resistance is highly dependent on the precise atomic geometry around the site of anchoring and thereby inherently compromises the reproducibility of the connection.

To circumvent the latter issue, experiments have shown that fullerenes could be a good candidate for use instead of sulfur because of the large conjugated π-system that can electrically contact many more atoms at once than one atom of sulfur.

Fullerene nanoelectronics

Classical organic molecules, such as those in polymers, are composed of carbon and hydrogen (and sometimes additional elements such as nitrogen, chlorine or sulfur). They are obtained from petroleum and can often be synthesized in large amounts. Most of these molecules are insulating when their length exceeds a few nanometers. However, naturally occurring carbon can be conducting, notably graphite, recovered from coal or encountered otherwise. From a theoretical viewpoint, graphite is a semi-metal, a category in between metals and semiconductors. It has a layered structure, each sheet being one atom thick. Between the sheets, the interactions are weak enough to allow easy manual cleavage.

Tailoring the graphite sheet to obtain well defined nanometer-sized objects remains a challenge. However, by the close of the twentieth century, chemists were exploring methods to fabricate extremely small graphitic objects that could be considered single molecules. After studying the interstellar conditions under which carbon is known to form clusters, Richard Smalley's group (Rice University, Texas) set up an experiment in which graphite was vaporized via laser irradiation. Mass spectrometry revealed that clusters containing specific magic numbers of atoms were stable, especially those clusters of 60 atoms. Harry Kroto, an English chemist who assisted in the experiment, suggested a possible geometry for these clusters – atoms covalently bound with the exact symmetry of a soccer ball. Coined buckminsterfullerenes, buckyballs, or C60, the clusters retained some properties of graphite, such as conductivity. These objects were rapidly envisioned as possible building blocks for molecular electronics.

Problems

Artifacts

When trying to measure the electronic traits of molecules, spurious phenomena can occur that are hard to distinguish from genuinely molecular behavior. Before they were discovered, these artifacts were mistakenly published as features pertaining to the molecules in question.

Applying a voltage drop on the order of volts across a nanometer-sized junction results in a very strong electrical field. The field can cause metal atoms to migrate and eventually close the gap by a thin filament, which can be broken again when carrying a current. The two levels of conductance imitate molecular switching between a conductive and an isolating state of a molecule.

Another encountered artifact is when the electrodes undergo chemical reactions due to the high field strength in the gap. When the voltage bias is reversed, the reaction will cause hysteresis in the measurements that can be interpreted as being of molecular origin.

A metallic grain between the electrodes can act as a single electron transistor by the mechanism described above, thus resembling the traits of a molecular transistor. This artifact is especially common with nanogaps produced by the electromigration method.

History and progress

Graphical representation of a rotaxane, useful as a molecular switch

In their treatment of so-called donor–acceptor complexes in the 1940s, Robert Mulliken and Albert Szent-Györgyi advanced the concept of charge transfer in molecules. They subsequently further refined the study of both charge transfer and energy transfer in molecules. Likewise, a 1974 paper from Mark Ratner and Ari Aviram illustrated a theoretical molecular rectifier.

In 1988, Aviram described in detail a theoretical single-molecule field-effect transistor. Further concepts were proposed by Forrest Carter of the Naval Research Laboratory, including single-molecule logic gates. A wide range of ideas were presented, under his aegis, at a conference entitled Molecular Electronic Devices in 1988. These were theoretical constructs, not concrete devices. The direct measurement of the electronic traits of individual molecules awaited the development of methods for making molecular-scale electrical contacts. This was no easy task. Thus, the first experiment directly measuring the conductance of a single molecule was only reported in 1995, on a single C60 molecule, by C. Joachim and J. K. Gimzewski in their seminal Physical Review Letters paper, and later in 1997 by Mark Reed and co-workers on a few hundred molecules. Since then, this branch of the field has advanced rapidly. Likewise, as it has become possible to measure such properties directly, the theoretical predictions of the early workers have been substantially confirmed.

The concept of molecular electronics was published in 1974 when Aviram and Ratner suggested an organic molecule that could work as a rectifier. Having both huge commercial and fundamental interest, much effort was put into proving its feasibility, and 16 years later in 1990, the first demonstration of an intrinsic molecular rectifier was realized by Ashwell and coworkers for a thin film of molecules.

The first measurement of the conductance of a single molecule was realized in 1994 by C. Joachim and J. K. Gimzewski and published in 1995 (see the corresponding Phys. Rev. Lett. paper). This was the conclusion of 10 years of research started at IBM TJ Watson, using the scanning tunneling microscope tip apex to switch a single molecule as already explored by A. Aviram, C. Joachim and M. Pomerantz at the end of the 1980s (see their seminal Chem. Phys. Lett. paper during this period). The trick was to use a UHV scanning tunneling microscope to allow the tip apex to gently touch the top of a single C60 molecule adsorbed on an Au(110) surface. A resistance of 55 MΩ was recorded, along with a low-voltage linear I–V curve. The contact was certified by recording the I–z current–distance characteristic, which allows measurement of the deformation of the C60 cage under contact. This first experiment was followed by the reported result using a mechanical break junction method to connect two gold electrodes to a sulfur-terminated molecular wire, by Mark Reed and James Tour in 1997.

The scanning tunneling microscope (STM) and later the atomic force microscope (AFM) have facilitated manipulating single-molecule electronics. Also, theoretical advances in molecular electronics have facilitated further understanding of non-adiabatic charge transfer events at electrode-electrolyte interfaces.

A single-molecule amplifier was implemented by C. Joachim and J. K. Gimzewski at IBM Zurich. This experiment, involving one C60 molecule, demonstrated that one such molecule can provide gain in a circuit via intramolecular quantum interference effects alone.

A collaboration of researchers at Hewlett-Packard (HP) and University of California, Los Angeles (UCLA), led by James Heath, Fraser Stoddart, R. Stanley Williams, and Philip Kuekes, has developed molecular electronics based on rotaxanes and catenanes.

Work is also occurring on the use of single-wall carbon nanotubes as field-effect transistors. Most of this work is being done by International Business Machines (IBM).

Some specific reports of a field-effect transistor based on molecular self-assembled monolayers were shown to be fraudulent in 2002 as part of the Schön scandal.

The Aviram-Ratner model for a unimolecular rectifier has been confirmed experimentally. Many rectifying molecules have so far been identified, and the number and efficiency of these systems is growing rapidly.

Supramolecular electronics is a new field involving electronics at a supramolecular level.

An important issue in molecular electronics is the determination of the resistance of a single molecule (both theoretical and experimental). For example, Bumm et al. used STM to analyze a single molecular switch in a self-assembled monolayer to determine how conductive such a molecule can be. Another problem faced by this field is the difficulty of performing direct characterization, since imaging at the molecular scale is often difficult in many experimental devices.

Conductive polymer

Chemical structures of some conductive polymers. From top left clockwise: polyacetylene; polyphenylene vinylene; polypyrrole (X = NH) and polythiophene (X = S); and polyaniline (X = NH) and polyphenylene sulfide (X = S).

Conductive polymers or, more precisely, intrinsically conducting polymers (ICPs) are organic polymers that conduct electricity. Such compounds may have metallic conductivity or can be semiconductors. The main advantage of conductive polymers is that they are easy to process, mainly by dispersion. Conductive polymers are generally not thermoplastics, i.e., they are not thermoformable. But, like insulating polymers, they are organic materials. They can offer high electrical conductivity but do not show similar mechanical properties to other commercially available polymers. The electrical properties can be fine-tuned using the methods of organic synthesis and by advanced dispersion techniques.

History

Polyaniline was first described in the mid-19th century by Henry Letheby, who investigated the electrochemical and chemical oxidation products of aniline in acidic media. He noted that the reduced form was colourless but the oxidized forms were deep blue.

The first highly conductive organic compounds were the charge transfer complexes. In the 1950s, researchers reported that polycyclic aromatic compounds formed semiconducting charge-transfer complex salts with halogens. In 1954, researchers at Bell Labs and elsewhere reported organic charge transfer complexes with resistivities as low as 8 Ω·cm. In the early 1970s, researchers demonstrated that salts of tetrathiafulvalene show almost metallic conductivity, while superconductivity was demonstrated in 1980. Broad research on salts of charge transfer complexes continues today. While these compounds were technically not polymers, this indicated that organic compounds can carry current. While organic conductors were previously discussed only intermittently, the field was particularly energized by the prediction of superconductivity following the discovery of BCS theory.

In 1963 the Australians B.A. Bolto, D.E. Weiss, and coworkers reported derivatives of polypyrrole with resistivities as low as 1 Ω·cm. There have been multiple reports of similar high-conductivity oxidized polyacetylenes. With the notable exception of charge transfer complexes (some of which are even superconductors), organic molecules were previously considered insulators or at best weakly conducting semiconductors. Subsequently, DeSurville and coworkers reported high conductivity in a polyaniline. Likewise, in 1980, Diaz and Logan reported films of polyaniline that can serve as electrodes.

While mostly operating at the scale of less than 100 nanometers, "molecular" electronic processes can collectively manifest on a macro scale. Examples include quantum tunneling, negative resistance, phonon-assisted hopping and polarons. In 1977, Alan J. Heeger, Alan MacDiarmid and Hideki Shirakawa reported similar high conductivity in oxidized iodine-doped polyacetylene. For this research, they were awarded the 2000 Nobel Prize in Chemistry "for the discovery and development of conductive polymers." Polyacetylene itself did not find practical applications, but drew the attention of scientists and encouraged the rapid growth of the field. Since the late 1980s, organic light-emitting diodes (OLEDs) have emerged as an important application of conducting polymers.

Types

Linear-backbone "polymer blacks" (polyacetylene, polypyrrole, polyindole and polyaniline) and their copolymers are the main class of conductive polymers. Poly(p-phenylene vinylene) (PPV) and its soluble derivatives have emerged as the prototypical electroluminescent semiconducting polymers. Today, poly(3-alkylthiophenes) are the archetypical materials for solar cells and transistors.

The following list groups some well-studied organic conductive polymers according to the composition of their main chain (the structures of several of these are shown in the figure above):

The main chain contains no heteroatom, with double bonds: polyacetylene
The main chain contains no heteroatom, with aromatic cycles and double bonds: poly(p-phenylene vinylene)
Nitrogen in the aromatic cycle: polypyrrole, polyindole
Nitrogen outside the aromatic cycle: polyaniline
Sulfur in the aromatic cycle: polythiophene
Sulfur outside the aromatic cycle: poly(p-phenylene sulfide)

Synthesis

Conductive polymers are prepared by many methods. Most conductive polymers are prepared by oxidative coupling of monocyclic precursors. Such reactions entail dehydrogenation:

n H–[X]–H → H–[X]n–H + 2(n–1) H⁺ + 2(n–1) e⁻

The low solubility of most polymers presents challenges. Some researchers add solubilizing functional groups to some or all monomers to increase solubility. Others address this through the formation of nanostructures and surfactant-stabilized conducting polymer dispersions in water. These include polyaniline nanofibers and PEDOT:PSS. In many cases, the molecular weights of conductive polymers are lower than conventional polymers such as polyethylene. However, in some cases, the molecular weight need not be high to achieve the desired properties.

There are two main methods of synthesizing conductive polymers: chemical synthesis and electro(co)polymerization. Chemical synthesis forms carbon–carbon bonds between monomers by subjecting the simple monomers to various conditions, such as heating, pressure, light exposure, or a catalyst. Its advantage is high yield; however, the end product may contain many impurities. Electro(co)polymerization inserts three electrodes (a reference electrode, a counter electrode, and a working electrode) into a solution containing the monomers. Applying a voltage to the electrodes promotes the redox reaction that builds the polymer. Electro(co)polymerization can be further divided into cyclic voltammetry and the potentiostatic method, which apply a cyclic and a constant voltage, respectively. Its advantage is the high purity of the products, but the method can only synthesize small amounts at a time.

Molecular basis of electrical conductivity

The conductivity of such polymers is the result of several processes. For example, in traditional polymers such as polyethylene, the valence electrons are bound in sp3 hybridized covalent bonds. Such "sigma-bonding electrons" have low mobility and do not contribute to the electrical conductivity of the material. However, in conjugated materials the situation is completely different. Conducting polymers have backbones of contiguous sp2 hybridized carbon centers. One valence electron on each center resides in a pz orbital, which is orthogonal to the other three sigma-bonds. All the pz orbitals combine with each other into a molecule-wide delocalized set of orbitals. The electrons in these delocalized orbitals have high mobility when the material is "doped" by oxidation, which removes some of these delocalized electrons. Thus, the conjugated p-orbitals form a one-dimensional electronic band, and the electrons within this band become mobile when it is partially emptied. The band structures of conductive polymers can easily be calculated with a tight binding model. In principle, these same materials can be doped by reduction, which adds electrons to an otherwise unfilled band. In practice, most organic conductors are doped oxidatively to give p-type materials. The redox doping of organic conductors is analogous to the doping of silicon semiconductors, whereby a small fraction of silicon atoms are replaced by electron-rich, e.g., phosphorus, or electron-poor, e.g., boron, atoms to create n-type and p-type semiconductors, respectively.
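The one-dimensional band picture can be made concrete with a minimal tight-binding sketch. Modeling the alternating single/double bonds as two alternating hopping amplitudes t1 and t2 (an SSH-type model; the hopping values below are assumed, illustrative numbers) gives two bands E(k) = ±|t1 + t2·e^{ik}|, with a gap of 2|t1 − t2| that closes when the bonds become equal:

```python
# Tight-binding sketch of a conjugated chain with alternating bonds,
# modeled by two alternating hopping amplitudes t1 and t2 (SSH-type).
# Hopping values are assumed illustrative numbers, not fitted parameters.

import cmath
import math

def band_energies(k, t1=2.5, t2=3.0):
    """Valence/conduction band energies (eV) at crystal momentum k."""
    off_diag = t1 + t2 * cmath.exp(1j * k)
    return -abs(off_diag), +abs(off_diag)

def band_gap(t1=2.5, t2=3.0, n_k=2001):
    """Minimum conduction-valence separation over the Brillouin zone."""
    gap = float("inf")
    for i in range(n_k):
        k = -math.pi + 2.0 * math.pi * i / (n_k - 1)
        lo, hi = band_energies(k, t1, t2)
        gap = min(gap, hi - lo)
    return gap

# Equal hoppings (no bond alternation) -> gapless, metallic band:
print(band_gap(t1=2.75, t2=2.75))   # ~0
# Alternating bonds open a gap of 2*|t1 - t2| at the zone boundary:
print(band_gap(t1=2.5, t2=3.0))     # ~1.0 eV
```

This also mirrors the discussion of polyacetylene below: bond alternation opens the gap, and reducing the alternation (e.g. by doping) shrinks it.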

Although typically "doping" conductive polymers involves oxidizing or reducing the material, conductive organic polymers associated with a protic solvent may also be "self-doped."

Undoped conjugated polymers are semiconductors or insulators. In such compounds, the energy gap can be > 2 eV, which is too great for thermally activated conduction. Therefore, undoped conjugated polymers, such as polythiophenes and polyacetylenes, have only a low electrical conductivity of around 10⁻¹⁰ to 10⁻⁸ S/cm. Even at a very low level of doping (< 1%), electrical conductivity increases by several orders of magnitude, up to values of around 0.1 S/cm. Subsequent doping of the conducting polymers will result in a saturation of the conductivity at values around 0.1–10 kS/cm (10–1000 kS/m) for different polymers. The highest values reported so far are for the conductivity of stretch-oriented polyacetylene, with confirmed values of about 80 kS/cm (8 MS/m).[16][19][20][21][22][23][24] Although the pi-electrons in polyacetylene are delocalized along the chain, pristine polyacetylene is not a metal. Polyacetylene has alternating single and double bonds with lengths of 1.44 and 1.36 Å, respectively. Upon doping, the bond alternation is diminished and the conductivity increases. Non-doping increases in conductivity can also be accomplished in a field-effect transistor (organic FET or OFET) and by irradiation. Some materials also exhibit negative differential resistance and voltage-controlled "switching" analogous to that seen in inorganic amorphous semiconductors.
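Conductivity values in this field are quoted in both S/cm and S/m; the conversion is 1 S/cm = 100 S/m (so 80 kS/cm = 8 MS/m, and 0.1–10 kS/cm = 10–1000 kS/m). A one-line helper makes the arithmetic explicit:

```python
# Conductivity unit conversion used throughout the conducting-polymer
# literature: 1 S/cm = 100 S/m.

def s_per_cm_to_s_per_m(sigma_s_per_cm: float) -> float:
    """Convert a conductivity from S/cm to S/m (1 S/cm = 100 S/m)."""
    return sigma_s_per_cm * 100.0

print(s_per_cm_to_s_per_m(80_000.0))  # 80 kS/cm -> 8_000_000.0 S/m (8 MS/m)
```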

Despite intensive research, the relationship between morphology, chain structure, and conductivity is still poorly understood. Generally, it is assumed that conductivity should be higher for a higher degree of crystallinity and better alignment of the chains; however, this could not be confirmed for polyaniline and was only recently confirmed for PEDOT, both of which are largely amorphous.

Properties and applications

Conductive polymers show promise in antistatic materials and have been incorporated into commercial displays and batteries. Literature suggests they are also promising in organic solar cells, printed electronic circuits, organic light-emitting diodes, actuators, electrochromism, supercapacitors, chemical sensors, chemical sensor arrays, and biosensors, flexible transparent displays, electromagnetic shielding, and possibly as a replacement for the popular transparent conductor indium tin oxide. Another use is for microwave-absorbent coatings, particularly radar-absorptive coatings on stealth aircraft. Conducting polymers are rapidly gaining traction in new applications as the materials become more processable, with better electrical and physical properties and lower costs. Nanostructured forms of conducting polymers in particular augment this field with their higher surface area and better dispersibility. Research reports have shown that nanostructured conducting polymers in the form of nanofibers and nanosponges exhibit significantly improved capacitance values compared with their non-nanostructured counterparts.

With the availability of stable and reproducible dispersions, PEDOT and polyaniline have gained some large-scale applications. While PEDOT (poly(3,4-ethylenedioxythiophene)) is mainly used in antistatic applications and as a transparent conductive layer in the form of PEDOT:PSS dispersions (PSS = polystyrene sulfonic acid), polyaniline is widely used for printed circuit board manufacturing – in the final finish, for protecting copper from corrosion and preserving its solderability. Moreover, polyindole is also starting to gain attention for various applications due to its high redox activity, thermal stability, and slower degradation compared with its competitors polyaniline and polypyrrole.

Electroluminescence

Electroluminescence is light emission stimulated by electric current. In organic compounds, electroluminescence has been known since the early 1950s, when Bernanose and coworkers first produced electroluminescence in crystalline thin films of acridine orange and quinacrine. In 1960, researchers at Dow Chemical developed AC-driven electroluminescent cells using doping. In some cases, similar light emission is observed when a voltage is applied to a thin layer of a conductive organic polymer film. While electroluminescence was originally mostly of academic interest, the increased conductivity of modern conductive polymers means enough power can be put through the device at low voltages to generate practical amounts of light. This property has led to the development of flat panel displays using organic LEDs, solar panels, and optical amplifiers.

Barriers to applications

Since most conductive polymers require oxidative doping, the properties of the resulting state are crucial. Such materials are salt-like (polymer salt), which makes them less soluble in organic solvents and water and hence harder to process. Furthermore, the charged organic backbone is often unstable towards atmospheric moisture. Improving processability for many polymers requires the introduction of solubilizing substituents, which can further complicate the synthesis.

Experimental and theoretical thermodynamic evidence suggests that some conductive polymers may even be fundamentally insoluble, so that they can only be processed by dispersion.

Recent emphasis is on organic light-emitting diodes and organic polymer solar cells. The Organic Electronics Association is an international platform to promote applications of organic semiconductors. Conductive polymer products with embedded and improved electromagnetic interference (EMI) and electrostatic discharge (ESD) protection have led to both prototypes and products. For example, the Polymer Electronics Research Center at the University of Auckland is developing a range of novel DNA sensor technologies based on conducting polymers, photoluminescent polymers and inorganic nanocrystals (quantum dots) for simple, rapid and sensitive gene detection. Typical conductive polymers must be "doped" to produce high conductivity; as of 2001, no organic polymer that is intrinsically electrically conducting had yet been discovered. More recently (as of 2020), researchers from the IMDEA Nanoscience Institute reported an experimental demonstration of the rational engineering of 1D polymers located near the quantum phase transition from the topologically trivial to the non-trivial class, thus featuring a narrow bandgap.
