But if you go carrying pictures of Chairman Mao, You ain’t going to make it with anyone anyhow
—The Beatles, 1968
This month, the Intergovernmental Panel on Climate Change (IPCC) issued a report concluding that it is all but inevitable that overall global warming will exceed the 1.5 degree Celsius limit
set out in the 2015 Paris Agreement. The report also discusses the
potentially catastrophic consequences of this warming, which include
extreme weather events, an accelerated rise in sea levels, and shrinking
Arctic sea ice.
In keeping with the well-established trend, political conservatives
generally have exhibited skepticism of these newly published IPCC
conclusions. That includes U.S. President Donald Trump, who told
60 Minutes, “We have scientists that disagree with [anthropogenic
global warming]. You’d have to show me the [mainstream] scientists
because they have a very big political agenda.” On Fox News, a
commentator argued
that “the planet has largely stopped warming over the past 15 years,
data shows—and [the IPCC report] could not explain why the mercury had
stopped rising.” Conservative YouTuber Ian Miles Cheong declared flatly that:
"Climate change is a hoax invented by neo-Marxists within the scientific
community to destabilize the world economy and dismantle what they call
“systems of oppression” and what the rest of us call capitalism."
This pattern of conservative skepticism on climate change is so
well-established that many of us now take it for granted. But given
conservatism’s natural impulse toward protecting our heritage, one might
think that conservatives would be just as concerned with preserving
order in the natural environment as they are with preserving order in
our social and political environments. Ensuring that subsequent
generations can live well is ordinarily a core concern for
conservatives.
To this, conservatives might (and do) counter that they are merely
pushing back against environmental extremists who seek to leverage the
cause of global warming as a means to expand government, eliminate
hierarchies of wealth, and reorganize society along socialist lines. And
while most environmentally conscious citizens harbor no such ambitions,
there is a substantial basis for this claim. Indeed, some
environmentalists are forthright
in seeking to implement the principles of “ecosocialism.” Meteorologist
and self-described ecosocialist Eric Holthaus, for instance, responded
to the IPCC report by declaring that:
"The world's top scientists just gave rigorous backing to systematically
dismantle capitalism as a key requirement to maintaining civilization
and a habitable planet."
One of the most prominent voices in this space has been Canadian writer Naomi Klein, whose 2014 book, This Changes Everything: Capitalism vs. the Climate,
argued that capitalism must be dismantled for the world to avert
catastrophe. While I am sympathetic with some of the critiques that
Klein directs at corporations and “free market fundamentalism,” her
argument doesn’t hold water—because mitigating climate risks is a
project whose enormous scope, cost and complexity can only be managed by
regulated capitalist welfare states. Moreover, it’s difficult to see
how she isn’t simply using the crisis of climate change as a veneer to
agitate for her preferred utopian socio-economic system. As has been
pointed out by Jonathan Chait of New York magazine, Klein appears to be adapting a mirror image of the same strategy she critiqued in her previous book, The Shock Doctrine,
wherein she claimed that cynical politicians, pundits and corporations
seize on crises to lock in economic restructuring along radical free
market principles.
Simply put, describing the call for climate action in economically or
politically revolutionary terms is always going to be
counterproductive, because the vast majority of ordinary people in most
countries don’t want a revolution. Environmentalists such as Klein are
correct, however, in their more limited claim that market mechanisms
alone can’t prevent global warming, since such mechanisms don’t account for
the environmental costs associated with the way we produce goods and
live our lives. Without some means of capturing the social price of
environmentally destructive practices—resource extraction, in
particular—we will invariably default to wasteful and damaging behavior.
Consider, for instance, the vast quantities of natural gas that are flared
at oil wells simply because it’s seen as too costly to build gas
pipelines to these facilities. This is a context in which we’d urge
government to exercise its regulatory power, or to impose some kind of
pricing mechanism that, either by carrot or stick, incentivizes the
capture of the flared gas. Public policy has a necessary role in guiding
capitalist decision makers toward the long-term sustainability of the
environment. Unfortunately, this outcome is hard to achieve in a
political environment characterized by tribalism, polarization and
blame-shifting.
It is true that when it comes to climate change, the political left
is more closely grounded in science than the right (even if both sides
often tend to deny
inconvenient truths more generally). But the left also has proven to be
blinkered when it comes to appropriate responses, a tendency that has
seeped into the latest IPCC report. While it’s not surprising that the
report advocates support for renewable energy, its authors fail to
acknowledge the effect that scaled-up renewable-energy generation would
have on land use, given the low energy density of these sources (think
of the enormous footprint of solar farms). Likewise, the
pro-environmental left’s distaste for nuclear power persists, despite
its status as a safe, virtually carbon-free energy source with a small geographic footprint.
The whole issue has become a sort of microcosm of the blind spots and dogmas embraced by both sides. As Jonathan Haidt argues,
conservatives tend to be skeptical of top-down governance, preferring
to focus on smaller nested structures that are less ambitious in scope,
and hence easier to manage. This general principle takes form in
conservative philosopher Roger Scruton’s approach
to environmentalism, which argues that activism on issues such as
climate change should be undertaken by communities at the local level,
rather than by national (or international) bureaucrats and
politicians—because the local level is where “people protect things
which they know and love, things which are necessary for their life, and
which will elicit in them the kind of disposition to make sacrifices,
which, after all, is what it’s all about.”
While Scruton’s environmentalism gives us a reason to protect our
local environments, the reality is that the effects of many
environmentally damaging practices are not just experienced locally. A
community may be motivated to protect a nearby forest from logging
because it forms part of their love of home, but greenhouse gas
emissions are displaced and dispersed into the shared atmosphere,
contributing to global atmospheric degradation. Because of this, any
approach that dismisses broader policy initiatives is unlikely to
succeed in bringing down global carbon emissions. But at the very least,
Scruton’s analysis awakens us to the reality that such policies will
gain popular support only if they are justified and implemented in a
manner that takes into consideration the views and sentiments of
conservatives and liberals alike. Wind and solar farms will face less
opposition if local communities get a greater say in where they are
located. And while carbon taxes are effective in reducing emissions in some jurisdictions, conservatives will usually oppose them unless
they are structured in a revenue-neutral manner, by legislating them
alongside equivalent reductions in income tax, for instance.
Environmentalists also should acknowledge that some conservative
objections to large-scale, top-down global instruments such as the Paris
Agreement are perfectly legitimate. The provisions in such treaties
typically are non-binding and require the good faith of all signatories.
With many authoritarian countries seemingly misleading the rest of the
world about their levels of economic activity,
it’s not unreasonable to assume they would do the same when it comes to
reporting carbon emissions. Moreover, those countries without the means
to enforce reductions in carbon emissions domestically can’t be
regarded as reliable participants in a global agreement to voluntarily
decarbonize their economies.
This isn’t to say we shouldn’t be discussing climate change at a
global level, or that international agreements don’t have any value. But
environmentalists’ tendency to treat these documents as holy writ comes
off as naïve, and thereby tends to undermine their cause.
Overall, our best hope for dealing with the emissions of developing
countries is likely to assist them in managing their energy
infrastructure so as to bypass high-emissions technologies. China,
despite often being lauded for the amount of renewable energy it
produces, now emits more carbon dioxide than the U.S. and Europe combined.
With technologies such as large-scale solar generation becoming cost
competitive with coal, progress is possible, but far from guaranteed
without Western support.
These measures aren’t revolutionary. But that’s the point: In the
environmental sector, just as in every other arena, there’s an
opportunity cost to adopting revolutionary postures—since these
revolutionaries tend to make more enemies than allies. If this project
is really about saving the planet, rather than destroying capitalism,
cooling the earth will mean cooling our rhetoric as well.
The electrical resistance of an object is a measure of its opposition to the flow of electric current. The inverse quantity is electrical conductance,
which measures the ease with which an electric current passes. Electrical
resistance shares some conceptual parallels with the notion of
mechanical friction. The SI unit of electrical resistance is the ohm (Ω), while electrical conductance is measured in siemens (S).
The resistance of an object depends in large part on the material it is made of—objects made of electrical insulators like rubber tend to have very high resistance and low conductivity, while objects made of electrical conductors like metals tend to have very low resistance and high conductivity. This material dependence is quantified by resistivity or conductivity. However, resistance and conductance are extensive rather than bulk properties,
meaning that they also depend on the size and shape of an object. For
example, a wire's resistance is higher if it is long and thin, and lower
if it is short and thick. All objects show some resistance, except for superconductors, which have a resistance of zero.
The resistance (R) of an object is defined as the ratio of voltage across it (V) to current through it (I), while the conductance (G) is the inverse:

$$R = \frac{V}{I}, \qquad G = \frac{I}{V} = \frac{1}{R}.$$
For a wide variety of materials and conditions, V and I are directly proportional to each other, and therefore R and G are constants
(although they will depend on the size and shape of the object, the
material it is made of, and other factors like temperature or strain).
This proportionality is called Ohm's law, and materials that satisfy it are called ohmic materials.
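As a minimal illustration of these definitions (not part of the original text), the following Python sketch computes the resistance and conductance implied by a set of hypothetical voltage and current measurements and checks whether the ratio V/I stays roughly constant, as Ohm's law predicts for an ohmic material; all numbers are invented.

```python
# Illustrative sketch only: hypothetical V-I measurements of a resistor-like device.
# R = V / I (ohms), G = I / V = 1 / R (siemens).

measurements = [   # (voltage in volts, current in amperes) -- invented example data
    (1.0, 0.010),
    (2.0, 0.020),
    (5.0, 0.051),
    (10.0, 0.100),
]

ratios = [v / i for v, i in measurements]    # chordal resistance V/I at each point
r_mean = sum(ratios) / len(ratios)
g_mean = 1.0 / r_mean

# If V and I are directly proportional, these ratios are (nearly) constant: Ohm's law.
spread = max(ratios) - min(ratios)
print(f"R ≈ {r_mean:.1f} Ω, G ≈ {g_mean:.4f} S, spread in V/I = {spread:.2f} Ω")
print("Approximately ohmic" if spread / r_mean < 0.05 else "Noticeably non-ohmic")
```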
In other cases, such as a transformer, diode or battery, V and I are not directly proportional. The ratio V/I is sometimes still useful, and is referred to as a "chordal resistance" or "static resistance", since it corresponds to the inverse slope of a chord between the origin and an I–V curve. In other situations, the derivative may be most useful; this is called the "differential resistance".
Introduction
The hydraulic analogy
compares electric current flowing through circuits to water flowing
through pipes. When a pipe is filled with hair, it takes a
larger pressure to achieve the same flow of water. Pushing electric
current through a large resistance is like pushing water through a pipe
clogged with hair: It requires a larger push (electromotive force) to drive the same flow (electric current).
In the hydraulic analogy, current flowing through a wire (or resistor) is like water flowing through a pipe, and the voltage drop across the wire is like the pressure drop
that pushes water through the pipe. Conductance is proportional to how
much flow occurs for a given pressure, and resistance is proportional to
how much pressure is required to achieve a given flow. (Conductance and
resistance are reciprocals.)
The voltage drop (i.e., difference between voltages on one side of the resistor and the other), not the voltage itself, provides the driving force pushing current through a resistor. In hydraulics, it is similar: The pressure difference
between two sides of a pipe, not the pressure itself, determines the
flow through it. For example, there may be a large water pressure above
the pipe, which tries to push water down through the pipe. But there may
be an equally large water pressure below the pipe, which tries to push
water back up through the pipe. If these pressures are equal, no water
flows.
The resistance and conductance of a wire, resistor, or other element is mostly determined by two properties:
geometry (shape), and
material
Geometry is important because it is more difficult to push water
through a long, narrow pipe than a wide, short pipe. In the same way, a
long, thin copper wire has higher resistance (lower conductance) than a
short, thick copper wire.
Materials are important as well. A pipe filled with hair
restricts the flow of water more than a clean pipe of the same shape and
size. Similarly, electrons can flow freely and easily through a copper wire, but cannot flow as easily through a steel wire of the same shape and size, and they essentially cannot flow at all through an insulator like rubber, regardless of its shape. The difference between copper, steel, and rubber is related to their microscopic structure and electron configuration, and is quantified by a property called resistivity.
In addition to geometry and material, there are various other
factors that influence resistance and conductance, such as temperature;
see below.
Substances in which electricity can flow are called conductors. A piece of conducting material of a particular resistance meant for use in a circuit is called a resistor. Conductors are made of high-conductivity
materials such as metals, in particular copper and aluminium.
Resistors, on the other hand, are made of a wide variety of materials
depending on factors such as the desired resistance, amount of energy
that it needs to dissipate, precision, and costs.
The current–voltage characteristics of four devices: two resistors, a diode, and a battery. Ohm's law is satisfied when the graph is a straight line through the origin. Therefore, the two resistors are ohmic, but the diode and battery are not.
For many materials, the current I through the material is proportional to the voltage V applied across it:

$$I \propto V$$

over a wide range of voltages and currents. Therefore, the resistance and conductance of objects or electronic components made of these materials is constant. This relationship is called Ohm's law, and materials which obey it are called ohmic materials. Examples of ohmic components are wires and resistors. The current–voltage (I–V) graph of an ohmic device consists of a straight line through the origin with positive slope.
Other components and materials used in electronics do not obey
Ohm's law; the current is not proportional to the voltage, so the
resistance varies with the voltage and current through them. These are
called nonlinear or nonohmic. Examples include diodes and fluorescent lamps. The I–V curve of a nonohmic device is a curved line.
Relation to resistivity and conductivity
A piece of resistive material with electrical contacts on both ends.
The resistance of a given object depends primarily on two factors:
What material it is made of, and its shape. For a given material, the
resistance is inversely proportional to the cross-sectional area; for
example, a thick copper wire has lower resistance than an
otherwise-identical thin copper wire. Also, for a given material, the
resistance is proportional to the length; for example, a long copper
wire has higher resistance than an otherwise-identical short copper
wire. The resistance R and conductance G of a conductor of uniform cross section, therefore, can be computed as

$$R = \rho \frac{\ell}{A}, \qquad G = \sigma \frac{A}{\ell}$$

where ℓ is the length of the conductor, measured in metres [m], A is the cross-sectional area of the conductor measured in square metres [m²], σ (sigma) is the electrical conductivity measured in siemens per metre (S·m⁻¹), and ρ (rho) is the electrical resistivity (also called specific electrical resistance)
of the material, measured in ohm-metres (Ω·m). The resistivity and
conductivity are proportionality constants, and therefore depend only on
the material the wire is made of, not the geometry of the wire.
Resistivity and conductivity are reciprocals: $\rho = 1/\sigma$. Resistivity is a measure of the material's ability to oppose electric current.
This formula is not exact, as it assumes the current density
is totally uniform in the conductor, which is not always true in
practical situations. However, this formula still provides a good
approximation for long thin conductors such as wires.
Another situation for which this formula is not exact is with alternating current (AC), because the skin effect inhibits current flow near the center of the conductor. For this reason, the geometrical cross-section is different from the effective
cross-section in which current actually flows, so resistance is higher
than expected. Similarly, if two conductors near each other carry AC
current, their resistances increase due to the proximity effect. At commercial power frequency, these effects are significant for large conductors carrying large currents, such as busbars in an electrical substation, or large power cables carrying more than a few hundred amperes.
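To make the dependence on geometry and material concrete, here is a short Python sketch (an illustration added here, not part of the source text) that applies R = ρℓ/A to a copper wire, using the commonly quoted room-temperature resistivity of roughly 1.68×10⁻⁸ Ω·m; the wire dimensions are arbitrary.

```python
import math

# DC resistance of a uniform round wire: R = rho * L / A (conductance G = 1 / R).
RHO_COPPER = 1.68e-8   # approximate resistivity of copper near room temperature, ohm-metres

def wire_resistance(length_m: float, diameter_m: float, resistivity: float = RHO_COPPER) -> float:
    """Return the DC resistance in ohms of a wire with uniform circular cross-section."""
    area = math.pi * (diameter_m / 2) ** 2   # cross-sectional area in m^2
    return resistivity * length_m / area

# A long, thin wire has higher resistance than a short, thick one of the same material.
r_long_thin = wire_resistance(length_m=10.0, diameter_m=0.5e-3)
r_short_thick = wire_resistance(length_m=1.0, diameter_m=2.0e-3)

print(f"10 m of 0.5 mm copper wire: {r_long_thin:.3f} Ω")
print(f" 1 m of 2.0 mm copper wire: {r_short_thick:.5f} Ω")
```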
What determines resistivity?
The resistivity of different materials varies by an enormous amount: For example, the conductivity of teflon is about 10³⁰
times lower than the conductivity of copper. Why is there such a
difference? Loosely speaking, a metal has large numbers of "delocalized"
electrons that are not stuck in any one place, but free to move across
large distances, whereas in an insulator (like teflon), each electron is
tightly bound to a single molecule, and a great force is required to
pull it away. Semiconductors lie between these two extremes. More details can be found in the article: Electrical resistivity and conductivity. For the case of electrolyte solutions, see the article: Conductivity (electrolytic).
Resistivity varies with temperature. In semiconductors, resistivity also changes when exposed to light. See below.
Measuring resistance
An instrument for measuring resistance is called an ohmmeter.
Simple ohmmeters cannot measure low resistances accurately because the
resistance of their measuring leads causes a voltage drop that
interferes with the measurement, so more accurate devices use four-terminal sensing.
The I–V curve of a non-ohmic device (purple). The static resistance at point A is the inverse slope of line B through the origin. The differential resistance at A is the inverse slope of tangent line C.
Many electrical elements, such as diodes and batteries, do not satisfy Ohm's law. These are called non-ohmic or non-linear, and their I–V curves are not straight lines through the origin.
Resistance and conductance can still be defined for non-ohmic
elements. However, unlike ohmic resistance, non-linear resistance is
not constant but varies with the voltage or current through the device;
i.e., its operating point. There are two types of resistance:
Static resistance (also called chordal or DC resistance) – This corresponds to the usual definition of resistance; the voltage divided by the current:

$$R_{\mathrm{static}} = \frac{V}{I}.$$
It is the slope of the line (chord)
from the origin through the point on the curve. Static resistance
determines the power dissipation in an electrical component. Points on
the I–V curve located in the 2nd or 4th quadrants, for which the slope of the chordal line is negative, have negative static resistance. Passive
devices, which have no source of energy, cannot have negative static
resistance. However active devices such as transistors or op-amps can synthesize negative static resistance with feedback, and it is used in some circuits such as gyrators.
Differential resistance (also called dynamic, incremental or small signal resistance) – Differential resistance is the derivative of the voltage with respect to the current; the slope of the I–V curve at a point:

$$R_{\mathrm{diff}} = \frac{dV}{dI}.$$
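The distinction between static and differential resistance can be illustrated numerically. The sketch below (not from the source text) uses the Shockley diode equation as a convenient nonlinear I–V curve; the saturation current and operating point are invented example values.

```python
import math

# Illustrative nonlinear device: an idealized diode, I(V) = I_s * (exp(V / V_T) - 1).
I_S = 1e-12     # saturation current, amperes (assumed example value)
V_T = 0.02585   # thermal voltage near room temperature, volts

def current(v: float) -> float:
    return I_S * (math.exp(v / V_T) - 1.0)

v_op = 0.65                 # operating-point voltage, volts (assumed)
i_op = current(v_op)

# Static (chordal) resistance: the plain ratio V / I at the operating point.
r_static = v_op / i_op

# Differential resistance: dV/dI, estimated with a small numerical step.
dv = 1e-6
r_diff = dv / (current(v_op + dv) - current(v_op))

print(f"I at {v_op} V ≈ {i_op * 1e3:.1f} mA")
print(f"static resistance V/I         ≈ {r_static:.2f} Ω")
print(f"differential resistance dV/dI ≈ {r_diff:.3f} Ω")
```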
The voltage (red) and current (blue) versus time (horizontal axis) for a capacitor (top) and inductor (bottom). Since the amplitudes of the current and voltage sinusoids are the same, the absolute value of impedance is 1 for both the capacitor and the inductor (in whatever units the graph is using). On the other hand, the phase difference between current and voltage is −90° for the capacitor; therefore, the complex phase of the impedance of the capacitor is −90°. Similarly, the phase difference between current and voltage is +90° for the inductor; therefore, the complex phase of the impedance of the inductor is +90°.
When an alternating current flows through a circuit, the relation
between current and voltage across a circuit element is characterized
not only by the ratio of their magnitudes, but also the difference in
their phases.
For example, in an ideal resistor, the moment when the voltage reaches
its maximum, the current also reaches its maximum (current and voltage
are oscillating in phase). But for a capacitor or inductor,
the maximum current flow occurs as the voltage passes through zero and
vice versa (current and voltage are oscillating 90° out of phase, see
image at right). Complex numbers are used to keep track of both the phase and magnitude of current and voltage:

$$V(t) = \operatorname{Re}\!\left(V_0 e^{j\omega t}\right), \qquad I(t) = \operatorname{Re}\!\left(I_0 e^{j\omega t}\right), \qquad Z = \frac{V_0}{I_0}, \qquad Y = \frac{1}{Z}$$

where:
t is time,
V(t) and I(t) are, respectively, voltage and current as a function of time,
V₀ and I₀ are the complex amplitudes of the voltage and current, which carry their phases,
ω is the angular frequency, and
Z and Y are the complex impedance and admittance.
The impedance and admittance may be expressed as complex numbers that can be broken into real and imaginary parts:

$$Z = R + jX, \qquad Y = G + jB$$

where R and G are resistance and conductance respectively, X is reactance, and B is susceptance. For ideal resistors, Z and Y reduce to R and G respectively, but for AC networks containing capacitors and inductors, X and B are nonzero.
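As a hedged illustration of this bookkeeping (not part of the original text), the following Python sketch uses complex numbers to combine the impedances of an ideal resistor, capacitor and inductor in series and reads off the resistance (real part) and reactance (imaginary part); all component values and the frequency are invented.

```python
import cmath
import math

# Complex impedances of ideal elements at angular frequency w = 2*pi*f.
R = 100.0    # resistance, ohms (assumed)
C = 1e-6     # capacitance, farads (assumed)
L = 10e-3    # inductance, henries (assumed)
f = 1000.0   # frequency, hertz (assumed)
w = 2 * math.pi * f

z_resistor = complex(R, 0)        # Z = R        (phase 0)
z_capacitor = 1 / (1j * w * C)    # Z = 1/(jwC)  (phase -90 degrees)
z_inductor = 1j * w * L           # Z = jwL      (phase +90 degrees)

z_series = z_resistor + z_capacitor + z_inductor   # series combination, Z = R + jX
y_series = 1 / z_series                            # admittance, Y = G + jB

print(f"Z = {z_series.real:.1f} + j({z_series.imag:.1f}) Ω, "
      f"|Z| = {abs(z_series):.1f} Ω, phase = {math.degrees(cmath.phase(z_series)):.1f}°")
print(f"Y = {y_series.real:.6f} + j({y_series.imag:.6f}) S")
```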
Running current through a material with high resistance creates heat, in a phenomenon called Joule heating. In this picture, a cartridge heater, warmed by Joule heating, is glowing red hot.
Resistors (and other elements with resistance) oppose the flow of
electric current; therefore, electrical energy is required to push
current through the resistance. This electrical energy is dissipated,
heating the resistor in the process. This is called Joule heating (after James Prescott Joule), also called ohmic heating or resistive heating.
On the other hand, Joule heating is sometimes useful, for example in electric stoves and other electric heaters (also called resistive heaters). As another example, incandescent lamps rely on Joule heating: the filament is heated to such a high temperature that it glows "white hot" with thermal radiation (also called incandescence).
The formula for Joule heating is:

$$P = I^2 R$$

where P is the power (energy per unit time) converted from electrical energy to thermal energy, R is the resistance, and I is the current through the resistor.
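A minimal numerical example of this formula (with invented values, and using Ohm's law to show the equivalent forms) is sketched below.

```python
# Joule heating: P = I^2 * R.  With Ohm's law this is also P = V * I = V^2 / R.
R = 240.0   # heating-element resistance, ohms (assumed)
V = 120.0   # applied voltage, volts (assumed)

I = V / R            # current through the element
P = I ** 2 * R       # power dissipated as heat, watts

print(f"I = {I:.2f} A, P = {P:.1f} W (check: V*I = {V * I:.1f} W, V^2/R = {V ** 2 / R:.1f} W)")
```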
Dependence of resistance on other conditions
Temperature dependence
Near room temperature, the resistivity of metals typically increases
as temperature is increased, while the resistivity of semiconductors
typically decreases as temperature is increased. The resistivity of
insulators and electrolytes may increase or decrease depending on the
system. For the detailed behavior, see the article: Electrical resistivity and conductivity. As a consequence, the resistance of wires, resistors, and other
components often changes with temperature. This effect may be undesired,
causing an electronic circuit to malfunction at extreme temperatures. In
some cases, however, the effect is put to good use. When
temperature-dependent resistance of a component is used purposefully,
the component is called a resistance thermometer or thermistor. (A resistance thermometer is made of metal, usually platinum, while a thermistor is made of ceramic or polymer.)
Resistance thermometers and thermistors are generally used in two ways. First, they can be used as thermometers: By measuring the resistance, the temperature of the environment can be inferred. Second, they can be used in conjunction with Joule heating
(also called self-heating): If a large current is running through the
resistor, the resistor's temperature rises and therefore its resistance
changes. Therefore, these components can be used in a circuit-protection
role similar to fuses, or for feedback in circuits, or for many other purposes. In general, self-heating can turn a resistor into a nonlinear and hysteretic circuit element.
If the temperature T does not vary too much, a linear approximation is typically used:

$$R(T) = R_0\left[1 + \alpha (T - T_0)\right]$$

where α is called the temperature coefficient of resistance, T₀ is a fixed reference temperature (usually room temperature), and R₀ is the resistance at temperature T₀. The parameter α is an empirical parameter fitted from measurement data. Because the linear approximation is only an approximation, α is different for different reference temperatures. For this reason it is usual to specify the temperature that α was measured at with a suffix indicating the measurement temperature, and the relationship only holds in a range of temperatures around the reference.
The temperature coefficient is typically +3×10⁻³ K⁻¹ to +6×10⁻³ K⁻¹ for metals near room temperature. It is usually negative for semiconductors and insulators, with highly variable magnitude.
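The linear approximation is easy to apply numerically. The sketch below (illustrative only) uses a coefficient from the typical metal range quoted above; the reference resistance is an invented value.

```python
# Linear temperature model: R(T) = R0 * (1 + alpha * (T - T0)), valid only near T0.
ALPHA = 3.9e-3   # temperature coefficient of resistance, 1/K (within the typical metal range)
T0 = 20.0        # reference temperature, °C
R0 = 100.0       # resistance at the reference temperature, ohms (assumed)

def resistance_at(temperature_c: float) -> float:
    return R0 * (1.0 + ALPHA * (temperature_c - T0))

for t in (0.0, 20.0, 50.0, 100.0):
    print(f"T = {t:5.1f} °C  ->  R ≈ {resistance_at(t):6.2f} Ω")
```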
Strain dependence
Just as the resistance of a conductor depends upon temperature, the resistance of a conductor depends upon strain. By placing a conductor under tension (a form of stress
that leads to strain in the form of stretching of the conductor), the
length of the section of conductor under tension increases and its
cross-sectional area decreases. Both these effects contribute to
increasing the resistance of the strained section of conductor. Under compression (strain in the opposite direction), the resistance of the strained section of conductor decreases. See the discussion on strain gauges for details about devices constructed to take advantage of this effect.
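The geometric part of this effect follows directly from R = ρℓ/A. The sketch below (an added illustration, assuming constant resistivity and conserved volume, and ignoring any piezoresistive change in ρ) shows resistance rising under tension and falling under compression.

```python
# Idealized strain effect: stretch a wire at constant volume and constant resistivity.
# With R = rho * L / A and A = volume / L, resistance scales like L^2 under this assumption.
RHO = 1.68e-8   # resistivity, ohm-metres (copper, approximate)
L0 = 1.0        # unstrained length, metres (assumed)
A0 = 1e-7       # unstrained cross-sectional area, m^2 (assumed)
VOLUME = L0 * A0

def resistance_for_strain(strain: float) -> float:
    """Resistance after a small tensile (positive) or compressive (negative) strain."""
    length = L0 * (1.0 + strain)
    area = VOLUME / length          # the wire thins as it stretches
    return RHO * length / area

r0 = resistance_for_strain(0.0)
for eps in (-0.01, 0.0, 0.01):
    r = resistance_for_strain(eps)
    print(f"strain = {eps:+.2%}: R = {r:.4f} Ω ({(r - r0) / r0:+.2%} change)")
```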
Light illumination dependence
Some resistors, particularly those made from semiconductors, exhibit photoconductivity, meaning that their resistance changes when light is shining on them. Therefore, they are called photoresistors (or light dependent resistors). These are a common type of light detector.
Capacitance is the ratio of the change in an electric charge in a system to the corresponding change in its electric potential. There are two closely related notions of capacitance: self capacitance and mutual capacitance. Any object that can be electrically charged exhibits self capacitance. A material with a large self capacitance holds more electric charge at a given voltage than one with low capacitance. The notion of mutual capacitance is particularly important for understanding the operations of the capacitor, one of the three elementary linear electronic components (along with resistors and inductors).
The capacitance is a function only of the geometry of the design
(e.g. area of the plates and the distance between them) and the permittivity of the dielectric
material between the plates of the capacitor. For many dielectric
materials, the permittivity, and thus the capacitance, is independent of
the potential difference between the conductors and the total charge on
them.
The SI unit of capacitance is the farad (symbol: F), named after the English physicist Michael Faraday. A 1 farad capacitor, when charged with 1 coulomb of electrical charge, has a potential difference of 1 volt between its plates. The reciprocal of capacitance is called elastance.
Self-capacitance
In electrical circuits, the term capacitance is usually a shorthand for the mutual capacitance
between two adjacent conductors, such as the two plates of a capacitor.
However, for an isolated conductor, there also exists a property called
self-capacitance, which is the amount of electric charge that must be added to an isolated conductor to raise its electric potential by one unit (i.e. one volt, in most measurement systems).
The reference point for this potential is a theoretical hollow
conducting sphere, of infinite radius, with the conductor centered
inside this sphere.
Mathematically, the self-capacitance of a conductor is defined by

$$C = \frac{q}{V}$$

where
q is the charge held by the conductor,
$V = \frac{1}{4\pi\varepsilon_0}\oint \frac{\sigma}{r}\,dS$ is the electric potential,
σ is the surface charge density,
dS is an infinitesimal element of area on the conductor's surface,
r is the length from dS to a fixed point M within the plate, and
ε₀ is the vacuum permittivity.
The inter-winding capacitance of a coil is sometimes called self-capacitance, but this is a different phenomenon. It is actually mutual capacitance between the individual turns of the coil and is a form of stray, or parasitic capacitance. This self-capacitance is an important consideration at high frequencies: It changes the impedance of the coil and gives rise to parallel resonance. In many applications this is an undesirable effect and sets an upper frequency limit for the correct operation of the circuit.
Mutual capacitance
A common form is a parallel-plate capacitor, which consists of two conductive plates insulated from each other, usually sandwiching a dielectric
material. In a parallel plate capacitor, capacitance is very nearly
proportional to the surface area of the conductor plates and inversely
proportional to the separation distance between the plates.
If the charges on the plates are +q and −q, and V gives the voltage between the plates, then the capacitance C is given by

$$C = \frac{q}{V}.$$

The energy stored in a capacitor is found by integrating the work W required to charge it (see Energy storage below), giving

$$W_{\text{stored}} = \tfrac{1}{2} C V^2.$$
Capacitance matrix
The discussion above is limited to the case of two conducting plates, although of arbitrary size and shape.
The definition
does not apply when there are more than two charged plates, or when the
net charge on the two plates is non-zero. To handle this case, Maxwell
introduced his coefficients of potential. If three (nearly ideal) conductors are given charges $Q_1, Q_2, Q_3$, then the voltage at conductor 1 is given by

$$V_1 = P_{11} Q_1 + P_{12} Q_2 + P_{13} Q_3,$$

and similarly for the other voltages. Hermann von Helmholtz and Sir William Thomson showed that the coefficients of potential are symmetric, so that $P_{12} = P_{21}$, etc. Thus the system can be described by a collection of coefficients known as the elastance matrix or reciprocal capacitance matrix, which is defined as:

$$P_{ij} = \frac{\partial V_i}{\partial Q_j}.$$

From this, the mutual capacitance between two objects can be defined by solving for the total charge Q and using $C = Q/V$.
Since no actual device holds perfectly equal and opposite charges on
each of the two "plates", it is the mutual capacitance that is reported
on capacitors.
The collection of coefficients is known as the capacitance matrix, and is the inverse of the elastance matrix.
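As a small numerical illustration of these definitions (not from the source text), the sketch below builds a symmetric matrix of coefficients of potential for three conductors, computes the voltages produced by a set of charges, and inverts the matrix to obtain the capacitance matrix; all numbers are invented.

```python
import numpy as np

# Coefficients of potential for three conductors: V = P @ Q, with P symmetric.
# Entries are invented illustration values, in volts per coulomb.
P = np.array([
    [9.0e11, 2.0e11, 1.0e11],
    [2.0e11, 8.0e11, 1.5e11],
    [1.0e11, 1.5e11, 7.0e11],
])

charges = np.array([2e-9, -1e-9, 0.5e-9])   # coulombs (assumed)
voltages = P @ charges                      # V_i = sum_j P_ij * Q_j

# The capacitance matrix is the inverse of the elastance (reciprocal capacitance) matrix.
C = np.linalg.inv(P)

print("voltages (V):", np.round(voltages, 3))
print("capacitance matrix (F):")
print(C)
```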
Capacitors
The capacitance of the majority of capacitors used in electronic
circuits is generally several orders of magnitude smaller than the farad. The most common subunits of capacitance in use today are the microfarad (µF), nanofarad (nF), picofarad (pF), and, in microcircuits, femtofarad (fF). However, specially made supercapacitors
can be much larger (as much as hundreds of farads), and parasitic
capacitive elements can be less than a femtofarad. In the past,
alternate subunits were used in electronics books: "mfd" and
"mf" for microfarad (µF), and "mmfd", "mmf", or "µµF" for picofarad (pF). These
are rarely used any more.
Capacitance can be calculated if the geometry of the conductors
and the dielectric properties of the insulator between the conductors
are known. A qualitative explanation for this can be given as follows.
Once a positive charge is put onto a conductor, this charge creates an
electric field, repelling any other positive charge one tries to move onto
the conductor; i.e., increasing the necessary voltage. But if nearby
there is another conductor with a negative charge on it, the electric
field of the positive conductor repelling the second positive charge is
weakened (the second positive charge also feels the attracting force of
the negative charge). So, due to the second conductor with a negative
charge, it becomes easier to put a positive charge on the already
positively charged first conductor, and vice versa; i.e., the necessary
voltage is lowered.
As a quantitative example consider the capacitance of a capacitor constructed of two parallel plates both of area A separated by a distance d. If d is sufficiently small with respect to the smallest chord of A, there holds, to a high level of accuracy:

$$C = \varepsilon_r \varepsilon_0 \frac{A}{d}$$

where
C is the capacitance, in farads;
A is the area of overlap of the two plates, in square meters;
εr is the relative static permittivity (sometimes called the dielectric constant) of the material between the plates (for a vacuum, εr = 1);
ε₀ is the electric constant (approximately 8.854×10⁻¹² F·m⁻¹);
d is the separation between the plates, in meters.
Capacitance is proportional to the area of overlap and inversely
proportional to the separation between conducting sheets. The closer the
sheets are to each other, the greater the capacitance.
The equation is a good approximation if d is small compared to
the other dimensions of the plates so that the electric field in the
capacitor area is uniform, and the so-called fringing field around the periphery provides only a small contribution to the capacitance. In CGS units the equation has the form

$$C = \varepsilon_r \frac{A}{4\pi d}$$

where C in this case has the units of length.
Combining the SI equation for capacitance with the above equation for
the energy stored in a capacitance, for a flat-plate capacitor the
energy stored is

$$W_{\text{stored}} = \tfrac{1}{2} C V^2 = \tfrac{1}{2} \varepsilon_r \varepsilon_0 \frac{A}{d} V^2$$
where W is the energy, in joules; C is the capacitance, in farads; and V is the voltage, in volts.
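For concreteness, here is a short Python sketch (an added illustration with invented dimensions) that evaluates the parallel-plate formula and the stored energy; the value of ε₀ is approximately 8.854×10⁻¹² F/m.

```python
EPS0 = 8.854e-12   # vacuum permittivity, farads per metre (approximate)

def parallel_plate_capacitance(area_m2: float, gap_m: float, eps_r: float = 1.0) -> float:
    """C = eps_r * eps0 * A / d, valid when the gap is small compared to the plate size."""
    return eps_r * EPS0 * area_m2 / gap_m

def stored_energy(capacitance_f: float, voltage_v: float) -> float:
    """W = 1/2 * C * V^2, in joules."""
    return 0.5 * capacitance_f * voltage_v ** 2

# Invented example: 10 cm x 10 cm plates, 0.1 mm apart, dielectric with eps_r = 4.
c = parallel_plate_capacitance(area_m2=0.1 * 0.1, gap_m=1e-4, eps_r=4.0)
w = stored_energy(c, voltage_v=50.0)

print(f"C ≈ {c * 1e9:.2f} nF, energy at 50 V ≈ {w * 1e6:.1f} µJ")
```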
Stray capacitance
Any two adjacent conductors can function as a capacitor, though the
capacitance is small unless the conductors are close together for long
distances or over a large area. This (often unwanted) capacitance is
called parasitic or "stray capacitance". Stray capacitance can allow
signals to leak between otherwise isolated circuits (an effect called crosstalk), and it can be a limiting factor for proper functioning of circuits at high frequency.
Stray capacitance between the input and output in amplifier circuits can be troublesome because it can form a path for feedback, which can cause instability and parasitic oscillation
in the amplifier. It is often convenient for analytical purposes to
replace this capacitance with a combination of one input-to-ground
capacitance and one output-to-ground capacitance; the original
configuration — including the input-to-output capacitance — is often
referred to as a pi-configuration. Miller's theorem can be used to
effect this replacement: it states that, if the gain ratio of two nodes
is 1/K, then an impedance of Z connecting the two nodes can be replaced with a Z/(1 − K) impedance between the first node and ground and a KZ/(K − 1)
impedance between the second node and ground. Since impedance varies
inversely with capacitance, the internode capacitance, C, is replaced by a capacitance of KC from input to ground and a capacitance of (K − 1)C/K
from output to ground. When the input-to-output gain is very large, the
equivalent input-to-ground impedance is very small while the
output-to-ground impedance is essentially equal to the original
(input-to-output) impedance.
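As a hedged illustration of this replacement, the sketch below applies the standard statement of Miller's theorem in terms of the voltage gain Av = Vout/Vin (which corresponds to the K used above only up to sign convention); the feedback capacitance and gain are invented example values, not figures from the text.

```python
# Miller splitting of an input-to-output capacitance C_fb, using the standard form of
# Miller's theorem with voltage gain Av = Vout / Vin.  Illustrative sketch only.

def miller_split(c_feedback: float, av: float) -> tuple[float, float]:
    """Return (input-to-ground, output-to-ground) equivalent capacitances."""
    c_in = c_feedback * (1.0 - av)           # large for a large inverting gain
    c_out = c_feedback * (1.0 - 1.0 / av)    # close to c_feedback when |Av| is large
    return c_in, c_out

C_FB = 2e-12    # 2 pF of stray input-to-output capacitance (assumed)
AV = -100.0     # inverting voltage gain (assumed)

c_in, c_out = miller_split(C_FB, AV)
print(f"input-referred capacitance:  {c_in * 1e12:.1f} pF")
print(f"output-referred capacitance: {c_out * 1e12:.2f} pF")
```

With a large inverting gain the input-referred capacitance is much larger than the original 2 pF, while the output-referred capacitance is essentially unchanged, in line with the impedance statement above.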
Capacitance of conductors with simple shapes
Calculating the capacitance of a system amounts to solving the Laplace equation ∇²φ = 0 with a constant potential φ
on the surface of the conductors. This is trivial in cases with high
symmetry. There is no solution in terms of elementary functions in more
complicated cases.
For two-dimensional situations analytic functions may be used to map different geometries to each other.
a: radius; d: distance, d > 2a; D = d/2a, D > 1; γ: Euler's constant
Sphere in front of wall
a: radius; d: distance, d > a; D = d/a
Sphere
a: radius
Circular disc
a: radius
Prolate ellipsoid
half-axes a > b = c
Thin straight wire, finite length
a: wire radius; ℓ: length; Λ = ln(ℓ/a)
Energy storage
The energy (measured in joules) stored in a capacitor is equal to the work required to push the charges into the capacitor, i.e. to charge it. Consider a capacitor of capacitance C, holding a charge +q on one plate and −q on the other. Moving a small element of charge dq from one plate to the other against the potential difference V = q/C requires the work dW:

$$dW = \frac{q}{C}\,dq$$

where W is the work measured in joules, q is the charge measured in coulombs and C is the capacitance, measured in farads.
The energy stored in a capacitor is found by integrating this equation. Starting with an uncharged capacitance (q = 0) and moving charge from one plate to the other until the plates have charge +Q and −Q requires the work W:

$$W_{\text{charging}} = \int_0^Q \frac{q}{C}\,dq = \frac{Q^2}{2C} = \tfrac{1}{2} Q V = \tfrac{1}{2} C V^2 = W_{\text{stored}}.$$
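A quick numerical check of this integral (an added illustration with invented values) confirms that the accumulated work matches Q²/(2C).

```python
# Numerical check of W = ∫ (q / C) dq from 0 to Q, which should equal Q^2 / (2C).
C = 100e-6   # capacitance, farads (assumed)
Q = 5e-3     # final charge, coulombs (assumed)

steps = 100_000
dq = Q / steps
work = sum(((k + 0.5) * dq / C) * dq for k in range(steps))   # midpoint rule

closed_form = Q ** 2 / (2 * C)
print(f"numerical integral: {work:.6f} J")
print(f"Q^2 / (2C):         {closed_form:.6f} J")
```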
Nanoscale systems
The capacitance of nanoscale dielectric capacitors such as quantum dots
may differ from conventional formulations of larger capacitors. In
particular, the electrostatic potential difference experienced by
electrons in conventional capacitors is spatially well-defined and fixed
by the shape and size of metallic electrodes in addition to the
statistically large number of electrons present in conventional
capacitors. In nanoscale capacitors, however, the electrostatic
potentials experienced by electrons are determined by the number and
locations of all electrons that contribute to the electronic properties
of the device. In such devices, the number of electrons may be very
small; however, the resulting spatial distribution of equipotential
surfaces within the device is exceedingly complex.
Single-electron devices
The
capacitance of a connected, or "closed", single-electron device is
twice the capacitance of an unconnected, or "open", single-electron
device.
This fact may be traced more fundamentally to the energy stored in the
single-electron device whose "direct polarization" interaction energy
may be equally divided into the interaction of the electron with the
polarized charge on the device itself due to the presence of the
electron and the amount of potential energy required to form the
polarized charge on the device (the interaction of charges in the
device's dielectric material with the potential due to the electron).
Few-electron devices
The derivation of a "quantum capacitance" of a few-electron device involves the thermodynamic chemical potential of an N-particle system given by
whose energy terms may be obtained as solutions of the Schrödinger equation. The definition of capacitance,
,
with the potential difference
may be applied to the device with the addition or removal of individual electrons,
and .
Then
is the "quantum capacitance" of the device.
This expression of "quantum capacitance" may be written as
which differs from the conventional expression described in the introduction where , the stored electrostatic potential energy,
by a factor of 1/2 with .
However, within the framework of purely classical electrostatic
interactions, the appearance of the factor of 1/2 is the result of
integration in the conventional formulation,

$$W_{\text{charging}} = U = \int_0^Q \frac{q}{C}\,dq,$$

which is appropriate since $dq \to 0$ for systems involving either many electrons or metallic electrodes, but in few-electron systems, $dq \to \Delta Q = e$.
The integral generally becomes a summation. One may trivially combine
the expressions of capacitance and electrostatic interaction energy,

$$Q = CV \quad \text{and} \quad U = QV,$$

respectively, to obtain

$$C(N) = \frac{Q^2}{U} = \frac{(Ne)^2}{U},$$

which is similar to the quantum capacitance. A more rigorous derivation is reported in the literature. In particular, to circumvent the mathematical challenges of the spatially complex equipotential surfaces within the device, an average electrostatic potential experienced by each electron is utilized in the derivation.
The reason for the apparent mathematical differences is understood more fundamentally by noting that the potential energy, U(N), of an isolated device (self-capacitance) is twice that stored in a "connected" device in the lower limit N = 1. As N grows large, U(N) → U. Thus, the general expression of capacitance is

$$C(N) = \frac{(Ne)^2}{U(N)}.$$
In nanoscale devices such as quantum dots, the "capacitor" is often
an isolated, or partially isolated, component within the device. The
primary differences between nanoscale capacitors and macroscopic
(conventional) capacitors are the number of excess electrons (charge
carriers, or electrons, that contribute to the device's electronic
behavior) and the shape and size of metallic electrodes. In nanoscale
devices, nanowires
consisting of metal atoms typically do not exhibit the same conductive
properties as their macroscopic, or bulk material, counterparts.
Capacitance in electronic and semiconductor devices
In
electronic and semiconductor devices, transient or frequency-dependent
current between terminals contains both conduction and displacement
components. Conduction current is related to moving charge carriers
(electrons, holes, ions, etc.), while displacement current is caused by
time-varying electric field. Carrier transport is affected by electric
field and by a number of physical phenomena, such as carrier drift and
diffusion, trapping, injection, contact-related effects, impact
ionization, etc. As a result, device admittance is frequency-dependent, and a simple electrostatic formula for capacitance is not applicable. A more general definition of capacitance, encompassing the electrostatic formula, is:

$$C = \frac{\operatorname{Im}\,Y(\omega)}{\omega}$$

where Y(ω) is the device admittance, and ω is the angular frequency.
In the general case, capacitance is a function of frequency. At high
frequencies, capacitance approaches a constant value, equal to
"geometric" capacitance, determined by the terminals' geometry and
dielectric content in the device.
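To illustrate this kind of frequency dependence (with a toy model that is not taken from the text), the sketch below evaluates C(ω) = Im[Y(ω)]/ω for a fixed "geometric" capacitance in parallel with a slower series R-C branch standing in for, say, a trapping response; the computed capacitance is larger at low frequency and settles to the geometric value at high frequency. All component values are invented.

```python
import math

# C(omega) = Im[Y(omega)] / omega for a toy device model: a fixed geometric capacitance
# C_GEO in parallel with a slower series R-C branch (illustrative stand-in for traps).
C_GEO = 10e-12    # farads (assumed)
C_SLOW = 40e-12   # farads (assumed)
R_SLOW = 1e6      # ohms (assumed)

def capacitance(freq_hz: float) -> float:
    w = 2 * math.pi * freq_hz
    y = 1j * w * C_GEO + 1.0 / (R_SLOW + 1.0 / (1j * w * C_SLOW))   # total admittance
    return y.imag / w

for f in (1e2, 1e3, 1e4, 1e5, 1e6, 1e7):
    print(f"f = {f:>10.0f} Hz  ->  C ≈ {capacitance(f) * 1e12:6.2f} pF")
```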
A paper by Steven Laux
presents a review of numerical techniques for capacitance calculation.
In particular, capacitance can be calculated by a Fourier transform of a
transient current in response to a step-like voltage excitation.
Negative capacitance in semiconductor devices
Usually,
capacitance in semiconductor devices is positive. However, in some
devices and under certain conditions (temperature, applied voltages,
frequency, etc.), capacitance can become negative. Non-monotonic
behavior of the transient current in response to a step-like excitation
has been proposed as the mechanism of negative capacitance. Negative capacitance has been demonstrated and explored in many different types of semiconductor devices.