"Programming" in this context refers to a formal procedure for
solving mathematical problems. This usage dates to the 1940s and is not
specifically tied to the more recent notion of "computer programming."
To avoid confusion, some practitioners prefer the term "optimization" —
e.g., "quadratic optimization."
Problem formulation
The quadratic programming problem with n variables and m constraints can be formulated as follows.
Given:
- a real-valued, n-dimensional vector c,
- an n×n-dimensional real symmetric matrix Q,
- an m×n-dimensional real matrix A, and
- an m-dimensional real vector b,
the objective of quadratic programming is to find an n-dimensional vector x that will

    minimize   ½ xᵀQx + cᵀx
    subject to  Ax ⪯ b,

where xᵀ denotes the vector transpose of x, and the notation Ax ⪯ b means that every entry of the vector Ax is less than or equal to the corresponding entry of the vector b (component-wise inequality).
When Q is symmetric positive-definite, the cost function can be written as a constrained least squares program:

    minimize   ½ ‖Rx − d‖²
    subject to  Ax ⪯ b,

where Q = RᵀR follows from the Cholesky decomposition of Q and c = −Rᵀd.
Conversely, any such constrained least squares program can be
equivalently framed as a quadratic programming problem, even for a
generic non-square R matrix.
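This equivalence can be checked numerically. The sketch below uses an arbitrary random instance and confirms that the unconstrained minimizers of the least-squares form and of the corresponding QP coincide:

```python
import numpy as np

# Hypothetical small instance: minimize 0.5*||R x - d||^2.
rng = np.random.default_rng(0)
R = rng.standard_normal((5, 3))
d = rng.standard_normal(5)

# Expanding: 0.5*||R x - d||^2 = 0.5*x^T (R^T R) x - (R^T d)^T x + const,
# i.e. a QP with Q = R^T R and c = -R^T d.
Q = R.T @ R
c = -R.T @ d

# The unconstrained minimizers of both forms coincide: Q x = -c.
x_qp = np.linalg.solve(Q, -c)
x_ls, *_ = np.linalg.lstsq(R, d, rcond=None)
assert np.allclose(x_qp, x_ls)
```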
Generalizations
When minimizing a function f in the neighborhood of some reference point x0, Q is set to its Hessian matrix H(f(x0)) and c is set to its gradient ∇f(x0). A related programming problem, quadratically constrained quadratic programming, can be posed by adding quadratic constraints on the variables.
Solution methods
For general problems a variety of methods are commonly used, including interior point, active set, augmented Lagrangian, conjugate gradient, and gradient projection methods, as well as extensions of the simplex algorithm.
Quadratic programming is particularly simple when Q is positive definite and there are only equality constraints; specifically, the solution process is linear. By using Lagrange multipliers and seeking the extremum of the Lagrangian, it may be readily shown that the solution to the equality constrained problem

    minimize   ½ xᵀQx + cᵀx
    subject to  Ex = d

is given by the linear system

    [ Q  Eᵀ ] [ x ]   [ −c ]
    [ E  0  ] [ λ ] = [  d ],

where λ is a set of Lagrange multipliers which come out of the solution alongside x.
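The linear-system solution can be sketched in a few lines. The instance below is hypothetical; the block matrix is the standard KKT system for the equality-constrained problem:

```python
import numpy as np

# Hypothetical instance: minimize 0.5*x^T Q x + c^T x  subject to  E x = d.
Q = np.array([[4.0, 1.0], [1.0, 3.0]])   # positive definite
c = np.array([1.0, 1.0])
E = np.array([[1.0, 1.0]])               # one equality constraint
d = np.array([1.0])

n, m = Q.shape[0], E.shape[0]
# Assemble the KKT system  [[Q, E^T], [E, 0]] [x; lam] = [-c; d].
K = np.block([[Q, E.T], [E, np.zeros((m, m))]])
rhs = np.concatenate([-c, d])
sol = np.linalg.solve(K, rhs)
x, lam = sol[:n], sol[n:]

assert np.allclose(E @ x, d)                   # constraint satisfied
assert np.allclose(Q @ x + c + E.T @ lam, 0)   # stationarity of the Lagrangian
```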
The easiest means of approaching this system is direct solution (for example, LU factorization),
which for small problems is very practical. For large problems, the
system poses some unusual difficulties, most notably that the coefficient matrix is
never positive definite (even if Q
is), making it potentially very difficult to find a good numerical
approach; which of the many available approaches works best depends on the
problem.
If the constraints don't couple the variables too tightly, a
relatively simple attack is to change the variables so that constraints
are unconditionally satisfied. For example, suppose d = 0 (generalizing to nonzero is straightforward). Looking at the constraint equations:

    Ex = 0,

introduce a new variable y defined by

    Zy = x,

where y has dimension equal to that of x minus the number of constraints. Then

    Ex = EZy,

and if Z is chosen so that EZ = 0 the constraint equation will be always satisfied. Finding such Z entails finding the null space of E, which is more or less simple depending on the structure of E. Substituting into the quadratic form gives an unconstrained minimization problem:

    ½ xᵀQx + cᵀx  =  ½ yᵀZᵀQZy + (Zᵀc)ᵀy,

the solution of which is given by:

    ZᵀQZ y = −Zᵀc.
Under certain conditions on Q, the reduced matrix ZTQZ will be positive definite. It is possible to write a variation on the conjugate gradient method which avoids the explicit calculation of Z.
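The null-space substitution can be sketched with SciPy's null_space on a hypothetical instance with d = 0, as in the text:

```python
import numpy as np
from scipy.linalg import null_space

# Hypothetical instance with d = 0:
# minimize 0.5*x^T Q x + c^T x  subject to  E x = 0.
Q = np.array([[4.0, 1.0], [1.0, 3.0]])
c = np.array([1.0, 1.0])
E = np.array([[1.0, 1.0]])

Z = null_space(E)        # columns span null(E), so E @ Z == 0
Qr = Z.T @ Q @ Z         # reduced matrix Z^T Q Z
y = np.linalg.solve(Qr, -Z.T @ c)
x = Z @ y                # the constraint E x = 0 holds by construction

assert np.allclose(E @ x, 0)
assert np.allclose(Z.T @ (Q @ x + c), 0)   # reduced stationarity condition
```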
The Lagrangian dual of a quadratic programming problem is also a quadratic programming problem. To see this let us focus on the case where c = 0 and Q is positive definite. We write the Lagrangian function as

    L(x, λ) = ½ xᵀQx + λᵀ(Ax − b).

Defining the (Lagrangian) dual function g(λ) as g(λ) = inf_x L(x, λ), we find an infimum of L, using ∇_x L(x, λ) = 0 and positive-definiteness of Q:

    x* = −Q⁻¹Aᵀλ.

Hence the dual function is

    g(λ) = −½ λᵀAQ⁻¹Aᵀλ − λᵀb,

and so the Lagrangian dual of the quadratic programming problem is

    maximize   −½ λᵀAQ⁻¹Aᵀλ − λᵀb
    subject to  λ ≥ 0.
Besides the Lagrangian duality theory, there are other duality pairings (e.g. Wolfe, etc.).
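For c = 0 and positive definite Q, the dual function works out to g(λ) = −½ λᵀAQ⁻¹Aᵀλ − λᵀb; this closed form can be checked numerically on a small instance (all values below are arbitrary):

```python
import numpy as np

# Compare the closed-form dual g(lam) against a direct minimization of the
# Lagrangian L(x, lam) = 0.5*x^T Q x + lam^T (A x - b) over x.
Q = np.array([[2.0, 0.5], [0.5, 1.0]])   # positive definite
A = np.array([[1.0, 2.0], [3.0, -1.0]])
b = np.array([1.0, 2.0])
lam = np.array([0.3, 0.7])               # arbitrary multipliers, lam >= 0

x_star = -np.linalg.solve(Q, A.T @ lam)  # stationary point of L(., lam)
L_min = 0.5 * x_star @ Q @ x_star + lam @ (A @ x_star - b)
g = -0.5 * lam @ A @ np.linalg.solve(Q, A.T @ lam) - b @ lam
assert np.isclose(L_min, g)
```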
Complexity
For positive definite Q, the ellipsoid method solves the problem in (weakly) polynomial time. If, on the other hand, Q is indefinite, then the problem is NP-hard.
There can be several stationary points and local minima for these non-convex problems. In fact, even if Q has only one negative eigenvalue, the problem is (strongly) NP-hard.
Integer constraints
There are some situations where one or more elements of the vector x will need to take on integer values. This leads to the formulation of a mixed-integer quadratic programming (MIQP) problem. Applications of MIQP include water resources and the construction of index funds.
Solvers and scripting languages
- Excel Solver: a nonlinear solver adjusted to spreadsheets, in which function evaluations are based on the recalculating cells. The basic version is available as a standard add-on for Excel.
- GNU Octave: a free (GPLv3-licensed), general-purpose, matrix-oriented programming language for numerical computing, similar to MATLAB. Quadratic programming in GNU Octave is available via its qp command.
- MATLAB: a general-purpose, matrix-oriented programming language for numerical computing. Quadratic programming in MATLAB requires the Optimization Toolbox in addition to the base MATLAB product.
- NAG Numerical Library: a collection of mathematical and statistical routines developed by the Numerical Algorithms Group for multiple programming languages (C, C++, Fortran, Visual Basic, Java and C#) and packages (MATLAB, Excel, R, LabVIEW). The Optimization chapter of the NAG Library includes routines for quadratic programming problems with both sparse and non-sparse linear constraint matrices, together with routines for the optimization of linear, nonlinear, and sums of squares of linear or nonlinear functions with nonlinear, bounded or no constraints. The NAG Library has routines for both local and global optimization, and for continuous or integer problems.
- Python: a high-level programming language with bindings for most available solvers. Quadratic programming is available via the solve_qp function or by calling a specific solver directly.
- SAS/OR: a suite of solvers for linear, integer, nonlinear, derivative-free, network, combinatorial and constraint optimization; the algebraic modeling language OPTMODEL; and a variety of vertical solutions aimed at specific problems/markets, all of which are fully integrated with the SAS System.
- TK Solver: a mathematical modeling and problem-solving software system based on a declarative, rule-based language, commercialized by Universal Technical Systems, Inc.
- TOMLAB: supports global optimization, integer programming, all types of least squares, and linear, quadratic and unconstrained programming for MATLAB. TOMLAB supports solvers like CPLEX, SNOPT and KNITRO.
- Xpress: a solver for large-scale linear programs, quadratic programs, general nonlinear and mixed-integer programs. It has APIs for several programming languages, has a modelling language, Mosel, and works with AMPL and GAMS. Free for academic use.
A seismometer is an instrument that responds to ground noises and shaking such as those caused by earthquakes, volcanic eruptions, and explosions. They are usually combined with a timing device and a recording device to form a seismograph. The output of such a device—formerly recorded on paper or film, now recorded and processed digitally—is a seismogram. Such data is used to locate and characterize earthquakes, and to study the Earth's internal structure.
Basic principles
A simple seismometer, sensitive to up-down motions of the Earth, is
like a weight hanging from a spring, both suspended from a frame that
moves along with the ground motion. The relative motion between the
weight (called the mass) and the frame provides a measurement of the
vertical ground motion. A rotating drum is attached to the frame and a pen is attached to the weight, thus recording any ground motion in a seismogram.
Any movement from the ground moves the frame. The mass tends not to move because of its inertia, and by measuring the movement between the frame and the mass, the motion of the ground can be determined.
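The mass-and-spring principle can be sketched numerically. The toy simulation below (all parameter values are illustrative assumptions, not a real instrument) integrates the standard damped-oscillator equation for the mass's motion relative to the frame:

```python
import numpy as np

# The motion z of the mass relative to the frame obeys
#   z'' + 2*zeta*w0*z' + w0^2*z = -a_ground.
# Well above the instrument's natural frequency, z tracks the ground
# displacement itself.
w0 = 2 * np.pi * 1.0                     # 1 Hz natural frequency (assumed)
zeta = 0.7                               # damping ratio (assumed)
dt = 1e-3
t = np.arange(0.0, 5.0, dt)
a_ground = np.sin(2 * np.pi * 5.0 * t)   # 5 Hz ground acceleration

z, v = 0.0, 0.0
rec = []
for a in a_ground:                       # semi-implicit Euler integration
    v += (-a - 2 * zeta * w0 * v - w0**2 * z) * dt
    z += v * dt
    rec.append(z)
rec = np.array(rec)
# Steady-state amplitude of z is close to the ground-displacement amplitude
# a/ws^2 with ws = 2*pi*5 (about 1.0e-3 here).
```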
Early seismometers used optical levers or mechanical linkages to
amplify the small motions involved, recording on soot-covered paper or
photographic paper. Modern instruments use electronics. In some
systems, the mass is held nearly motionless relative to the frame by an
electronic negative feedback loop. The motion of the mass relative to the frame is measured, and the feedback loop
applies a magnetic or electrostatic force to keep the mass nearly
motionless. The voltage needed to produce this force is the output of
the seismometer, which is recorded digitally.
In other systems the weight is allowed to move, and its motion
produces a voltage in a coil attached to the mass, which moves
through the magnetic field of a magnet attached to the
frame. This design is often used in a geophone, which is used in exploration for oil and gas.
Seismic observatories usually have instruments measuring three
axes: north-south (y-axis), east-west (x-axis), and vertical (z-axis).
If only one axis is measured, it is usually the vertical because it is
less noisy and gives better records of some seismic waves.
The foundation of a seismic station is critical. A professional station is sometimes mounted on bedrock.
The best mountings may be in deep boreholes, which avoid thermal
effects, ground noise and tilting from weather and tides. Other
instruments are often mounted in insulated enclosures on small buried
piers of unreinforced concrete. Reinforcing rods and aggregates would
distort the pier as the temperature changes. A site is always surveyed
for ground noise with a temporary installation before pouring the pier
and laying conduit. Originally, European seismographs were placed in a
particular area after a destructive earthquake. Today, they are spread
to provide appropriate coverage (in the case of weak-motion seismology) or concentrated in high-risk regions (strong-motion seismology).
Nomenclature
The word derives from the Greek σεισμός, seismós, a shaking or quake, from the verb σείω, seíō, to shake; and μέτρον, métron, a measure. It was coined by David Milne-Home in 1841 to describe an instrument designed by Scottish physicist James David Forbes.
Seismograph is another Greek term from seismós and γράφω, gráphō, to draw. It is often used to mean seismometer,
though it is more applicable to the older instruments in which the
measuring and recording of ground motion were combined, than to modern
systems, in which these functions are separated. Both types provide a
continuous record of ground motion; this record distinguishes them from seismoscopes, which merely indicate that motion has occurred, perhaps with some simple measure of how large it was.
The technical discipline concerning such devices is called seismometry, a branch of seismology.
The concept of measuring the "shaking" of something means that
the word "seismograph" might be used in a more general sense. For
example, a monitoring station that tracks changes in electromagnetic noise affecting amateur radio propagation has been described as an "RF seismograph". And helioseismology studies the "quakes" on the Sun.
History
The first seismometer was made in China during the 2nd century. It was invented by Zhang Heng,
a Chinese mathematician and astronomer. The first Western description
of the device comes from the French physicist and priest Jean de Hautefeuille in 1703. The modern seismometer was developed in the 19th century.
Seismometers were placed on the Moon starting in 1969 as part of the Apollo Lunar Surface Experiments Package. In December 2018, a seismometer was deployed on the planet Mars by the InSight lander, the first time a seismometer was placed onto the surface of another planet.
In Ancient Egypt, Amenhotep, son of Hapu,
invented a precursor of the seismometer: vertical wooden poles connected
by wooden gutters on a central axis, which filled a
vessel with water until full to detect earthquakes.
In AD 132, Zhang Heng of China's Han dynasty is said to have invented the first seismoscope (by the definition above), which was called Houfeng Didong Yi
(translated as, "instrument for measuring the seasonal winds and the
movements of the Earth"). The description we have, from the History of the Later Han Dynasty,
says that it was a large bronze vessel, about 2 meters in diameter; at
eight points around the top were dragon's heads holding bronze balls.
When there was an earthquake, one of the dragons' mouths would open and
drop its ball into a bronze toad at the base, making a sound and
supposedly showing the direction of the earthquake. On at least one
occasion, probably at the time of a large earthquake in Gansu
in AD 143, the seismoscope indicated an earthquake even though one was
not felt. The available text says that inside the vessel was a central
column that could move along eight tracks; this is thought to refer to a
pendulum, though it is not known exactly how this was linked to a
mechanism that would open only one dragon's mouth. The first earthquake
recorded by this seismoscope was supposedly "somewhere in the east".
Days later, a rider from the east reported this earthquake.
Early designs (1259–1839)
By the 13th century, seismographic devices existed in the Maragheh observatory (founded 1259) in Persia, though it is unclear whether these were constructed independently or based on the first seismoscope. French physicist and priest Jean de Hautefeuille described a seismoscope in 1703,
which used a bowl filled with mercury which would spill into one of
eight receivers equally spaced around the bowl, though there is no
evidence that he actually constructed the device. A mercury seismoscope was constructed in 1784 or 1785 by Atanasio Cavalli, a copy of which can be found at the University Library in Bologna, and a further mercury seismoscope was constructed by Niccolò Cacciatore in 1818. James Lind also built a seismological tool of unknown design or efficacy (known as an earthquake machine) in the late 1790s.
Pendulum devices were developing at the same time. Neapolitan naturalist Nicola Cirillo
set up a network of pendulum earthquake detectors following the 1731
Puglia earthquake, where the amplitude was detected using a protractor
to measure the swinging motion. Benedictine monk Andrea Bina
further developed this concept in 1751, having the pendulum create
trace marks in sand under the mechanism, providing both magnitude and
direction of motion. Neapolitan clockmaker Domenico Salsano produced a
similar pendulum which recorded using a paintbrush in 1783, labelling it
a geo-sismometro, possibly the first use of a similar word to seismometer. Naturalist Nicolo Zupo devised an instrument to detect electrical disturbances and earthquakes at the same time (1784).
The first moderately successful device for detecting the time of an earthquake was devised by Ascanio Filomarino
in 1796, who improved upon Salsano's pendulum instrument, using a
pencil to mark, and using a hair attached to the mechanism to inhibit
the motion of a clock's balance wheel. This meant that the clock would
only start once an earthquake took place, allowing determination of the
time of incidence.
After an earthquake taking place on October 4, 1834, Luigi Pagani observed that the mercury seismoscope held at Bologna University had completely spilled over, and did not provide useful information. He therefore devised a portable device that used lead shot
to detect the direction of an earthquake, where the lead fell into four
bins arranged in a circle, to determine the quadrant of earthquake
incidence. He completed the instrument in 1841.
Early Modern designs (1839–1880)
In response to a series of earthquakes near Comrie in Scotland in 1839, a committee was formed in the United Kingdom
in order to produce better detection devices for earthquakes. The
outcome of this was an inverted pendulum seismometer constructed by James David Forbes, first presented in a report by David Milne-Home
in 1842, which recorded the measurements of seismic activity through
the use of a pencil placed on paper above the pendulum. The designs
provided did not prove effective, according to Milne-Home's reports. It was Milne-Home who coined the word seismometer in 1841, to describe this instrument.
In 1843, the first horizontal pendulum was used in a seismometer,
reported by Milne (though it is unclear if he was the original
inventor). After these inventions, Robert Mallet
published an 1848 paper where he suggested ideas for seismometer
design, suggesting that such a device would need to register time,
record amplitudes horizontally and vertically, and ascertain direction.
His suggested design was funded, and construction was attempted, but his
final design did not fulfill his expectations and suffered from the
same problems as the Forbes design, being inaccurate and not
self-recording.
Karl Kreil constructed a seismometer in Prague
between 1848 and 1850, which used a point-suspended rigid cylindrical
pendulum covered in paper, drawn upon by a fixed pencil. The cylinder
was rotated every 24 hours, providing an approximate time for a given
quake.
Luigi Palmieri, influenced by Mallet's 1848 paper,
invented a seismometer in 1856 that could record the time of an
earthquake. This device used metallic pendulums which closed an electric circuit
with vibration, which then powered an electromagnet to stop a clock.
Palmieri seismometers were widely distributed and used for a long time.
By 1872, a committee in the United Kingdom led by James Bryce
expressed their dissatisfaction with the current available
seismometers, still using the large 1842 Forbes device located in Comrie
Parish Church, and requested a seismometer which was compact, easy to
install and easy to read. In 1875 they settled on a large example of the
Mallet device, consisting of an array of cylindrical pins
of various sizes installed at right angles to each other on a sand bed,
where larger earthquakes would knock down larger pins. This device was
constructed in 'Earthquake House' near Comrie, which can be considered
the world's first purpose-built seismological observatory.
As of 2013, no earthquake has been large enough to cause any of the
cylinders to fall in either the original device or replicas.
The first seismographs (1880-)
The
first seismographs were invented in the 1870s and 1880s. The first
seismograph was produced by Filippo Cecchi in around 1875. A seismoscope
would trigger the device to begin recording, and then a recording
surface would produce a graphical illustration of the tremors
automatically (a seismogram). However, the instrument was not sensitive
enough, and the first seismogram produced by the instrument was in 1887,
by which time John Milne had already demonstrated his design in Japan.
In 1880, the first horizontal pendulum seismometer was developed by the team of John Milne, James Alfred Ewing and Thomas Gray, who worked as foreign-government advisors in Japan, from 1880 to 1895. Milne, Ewing and Gray, all having been hired by the Meiji Government in the previous five years to assist Japan's modernization efforts, founded the Seismological Society of Japan in response to an Earthquake that took place on February 22, 1880, at Yokohama.
Two instruments were constructed by Ewing over the next year, one being
a common-pendulum seismometer and the other being the first seismometer
using a damped horizontal pendulum. The innovative recording system
allowed for a continuous record, the first to do so. The first
seismogram was recorded on 3 November 1880 on both of Ewing's
instruments.
Modern seismometers would eventually descend from these designs. Milne
has been referred to as the 'Father of modern seismology' and his seismograph design has been called the first modern seismometer.
This produced the first effective measurement of horizontal
motion. Gray would produce the first reliable method for recording
vertical motion, which produced the first effective 3-axis recordings.
An early special-purpose seismometer consisted of a large, stationary pendulum, with a stylus on the bottom. As the earth started to move, the heavy mass of the pendulum had the inertia to stay still within the frame.
The result is that the stylus scratched a pattern corresponding with
the Earth's movement. This type of strong-motion seismometer recorded
upon a smoked glass (glass with carbon soot).
While not sensitive enough to detect distant earthquakes, this
instrument could indicate the direction of the pressure waves and thus
help find the epicenter of a local quake. Such instruments were useful
in the analysis of the 1906 San Francisco earthquake.
Further analysis was performed in the 1980s, using these early
recordings, enabling a more precise determination of the initial fault
break location in Marin county and its subsequent progression, mostly to the south.
Later, professional suites of instruments for the worldwide
standard seismographic network had one set of instruments tuned to
oscillate at fifteen seconds, and the other at ninety seconds, each set
measuring in three directions. Amateurs or observatories with limited
means tuned their smaller, less sensitive instruments to ten seconds.
The basic damped horizontal pendulum seismometer swings like the gate of
a fence. A heavy weight is mounted on the point of a long (from 10 cm
to several meters) triangle, hinged at its vertical edge. As the ground
moves, the weight stays unmoving, swinging the "gate" on the hinge.
The advantage of a horizontal pendulum is that it achieves very
low frequencies of oscillation in a compact instrument. The "gate" is
slightly tilted, so the weight tends to slowly return to a central
position. The pendulum is adjusted (before the damping is installed) to
oscillate once per three seconds, or once per thirty seconds. The
general-purpose instruments of small stations or amateurs usually
oscillate once per ten seconds. A pan of oil is placed under the arm,
and a small sheet of metal mounted on the underside of the arm drags in
the oil to damp oscillations. The level of oil, position on the arm, and
angle and size of sheet are adjusted until the damping is "critical",
that is, just short of oscillating. The hinge is very low friction,
often torsion wires, so the only friction is the internal friction of
the wire. Small seismographs with low proof masses are placed in a
vacuum to reduce disturbances from air currents.
Zöllner described torsionally suspended horizontal pendulums as
early as 1869, but developed them for gravimetry rather than
seismometry.
Early seismometers had an arrangement of levers on jeweled
bearings, to scratch smoked glass or paper. Later, mirrors reflected a
light beam to a direct-recording plate or roll of photographic paper.
Briefly, some designs returned to mechanical movements to save money.
In mid-twentieth-century systems, the light was reflected to a pair of
differential electronic photosensors called a photomultiplier. The
voltage generated in the photomultiplier was used to drive galvanometers
which had a small mirror mounted on the axis. The moving reflected
light beam would strike the surface of the turning drum, which was
covered with photo-sensitive paper. The expense of developing
photo-sensitive paper caused many seismic observatories to switch to ink
or thermal-sensitive paper.
After World War II, the seismometers developed by Milne, Ewing and Gray were adapted into the widely used Press-Ewing seismometer.
Modern instruments
Modern instruments use electronic sensors, amplifiers, and recording
devices. Most are broadband covering a wide range of frequencies. Some
seismometers can measure motions with frequencies from 500 Hz to
0.00118 Hz (1/500 = 0.002 seconds per cycle, to 1/0.00118 = 850 seconds
per cycle). The mechanical suspension for horizontal instruments
remains the garden-gate described above. Vertical instruments use some
kind of constant-force suspension, such as the LaCoste suspension. The LaCoste suspension uses a zero-length spring to provide a long period (high sensitivity). Some modern instruments use a "triaxial" or "Galperin" design,
in which three identical motion sensors are set at the same angle to
the vertical but 120 degrees apart on the horizontal. Vertical and
horizontal motions can be computed from the outputs of the three
sensors.
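The recovery of vertical and horizontal motion from a Galperin arrangement is a small linear-algebra exercise. In the sketch below the tilt angle arccos(1/√3) ≈ 54.7° is the value commonly quoted for such arrangements (an assumption here), and the sample motion vector is arbitrary:

```python
import numpy as np

# Three identical sensors tilted the same amount from vertical and spaced
# 120 degrees apart in azimuth. Stacking the sensors' unit axes gives an
# invertible matrix U, so ground motion v is recovered by solving U @ v = s.
theta = np.arccos(1 / np.sqrt(3))        # assumed Galperin tilt from vertical
phis = np.deg2rad([0, 120, 240])         # azimuths, 120 degrees apart
U = np.array([[np.sin(theta) * np.cos(p),
               np.sin(theta) * np.sin(p),
               np.cos(theta)] for p in phis])

v_true = np.array([0.3, -0.2, 1.0])      # (east, north, up) motion sample
s = U @ v_true                           # what the three tilted sensors read
v_rec = np.linalg.solve(U, s)            # recovered motion components
assert np.allclose(v_rec, v_true)
```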
Seismometers unavoidably introduce some distortion into the
signals they measure, but professionally designed systems have carefully
characterized frequency transforms.
Modern sensitivities come in three broad ranges: geophones, 50 to 750 V/m;
local geologic seismographs, about 1,500 V/m; and teleseismographs,
used for world survey, about 20,000 V/m. Instruments come in three main
varieties: short period, long period and broadband. The short and long
period instruments measure velocity and are very sensitive; however, they 'clip' the
signal or go off-scale for ground motion that is strong enough to be
felt by people. A 24-bit analog-to-digital conversion channel is
commonplace. Practical devices are linear to roughly one part per
million.
Delivered seismometers come with two styles of output: analog and
digital. Analog seismographs require analog recording equipment,
possibly including an analog-to-digital converter. The output of a
digital seismograph can be simply input to a computer. It presents the
data in a standard digital format (often "SE2" over Ethernet).
Teleseismometers
The modern broadband seismograph can record a very broad range of frequencies. It consists of a small "proof mass", confined by electrical forces, driven by sophisticated electronics. As the earth moves, the electronics attempt to hold the mass steady through a feedback circuit. The amount of force necessary to achieve this is then recorded.
In most designs the electronics holds a mass motionless relative
to the frame. This device is called a "force balance accelerometer".
It measures acceleration
instead of velocity of ground movement. Basically, the distance
between the mass and some part of the frame is measured very precisely,
by a linear variable differential transformer. Some instruments use a linear variable differential capacitor.
That measurement is then amplified by electronic amplifiers attached to parts of an electronic negative feedback loop. One of the amplified currents from the negative feedback loop drives a coil very like a loudspeaker. The result is that the mass stays nearly motionless.
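A minimal sketch of the feedback idea, under assumed gains (not any particular instrument's loop design): a force proportional to the measured displacement and velocity holds the mass nearly still, and the force the loop must apply tracks the ground acceleration.

```python
import numpy as np

# Relative motion z of the mass obeys z'' = -a_ground - w0^2*z - f, where the
# feedback force per unit mass f = kp*z + kd*v pushes the mass back to center.
w0 = 2 * np.pi * 0.5          # weak mechanical restoring, 0.5 Hz (assumed)
kp, kd = 1e4, 2e2             # assumed feedback gains (near-critical damping)
dt = 1e-4
t = np.arange(0.0, 2.0, dt)
a_ground = np.sin(2 * np.pi * 3.0 * t)   # 3 Hz ground acceleration

z, v = 0.0, 0.0
f_out = []
for a in a_ground:            # semi-implicit Euler integration
    f = kp * z + kd * v       # recorded output: force applied by the loop
    v += (-a - w0**2 * z - f) * dt
    z += v * dt
    f_out.append(f)

# The recorded force mirrors (the negative of) the ground acceleration,
# while the mass stays nearly motionless.
corr = np.corrcoef(f_out, a_ground)[0, 1]
```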
Most instruments measure directly the ground motion using the
distance sensor. The voltage generated in a sense coil on the mass by
the magnet directly measures the instantaneous velocity of the ground.
The current to the drive coil provides a sensitive, accurate measurement
of the force between the mass and frame, thus measuring directly the
ground's acceleration (using f=ma where f=force, m=mass,
a=acceleration).
One of the continuing problems with sensitive vertical
seismographs is the buoyancy of their masses. The uneven changes in
pressure caused by wind blowing on an open window can easily change the
density of the air in a room enough to cause a vertical seismograph to
show spurious signals. Therefore, most professional seismographs are
sealed in rigid gas-tight enclosures. For example, this is why a common
Streckeisen model has a thick glass base that must be glued to its pier
without bubbles in the glue.
It might seem logical to make the heavy magnet serve as a mass,
but that subjects the seismograph to errors when the Earth's magnetic
field moves. This is also why a seismograph's moving parts are
constructed from a material that interacts minimally with magnetic
fields. A seismograph is also sensitive to changes in temperature, so
many instruments are constructed from low expansion materials such as
nonmagnetic invar.
The hinges on a seismograph are usually patented, and by the time
the patent has expired, the design has been improved. The most
successful public domain designs use thin foil hinges in a clamp.
Another issue is that the transfer function
of a seismograph must be accurately characterized, so that its
frequency response is known. This is often the crucial difference
between professional and amateur instruments. Most are characterized on
a variable frequency shaking table.
Strong-motion seismometers
Another type of seismometer is a digital strong-motion seismometer, or accelerograph. The data from such an instrument is essential to understand how an earthquake affects man-made structures, through earthquake engineering. The recordings of such instruments are crucial for the assessment of seismic hazard, through engineering seismology.
A strong-motion seismometer measures acceleration. This can be mathematically integrated
later to give velocity and position. Strong-motion seismometers are not
as sensitive to ground motions as teleseismic instruments but they stay
on scale during the strongest seismic shaking.
Strong motion sensors are used for intensity meter applications.
Other forms
Accelerographs and geophones
are often heavy cylindrical magnets with a spring-mounted coil inside.
As the case moves, the coil tends to stay stationary, so the magnetic
field cuts the wires, inducing current in the output wires. They
receive frequencies from several hundred hertz down to 1 Hz. Some have
electronic damping, a low-budget way to get some of the performance of
the closed-loop wide-band geologic seismographs.
Strain-beam accelerometers constructed as integrated circuits are
too insensitive for geologic seismographs (2002), but are widely used
in geophones.
Some other sensitive designs measure the current generated by the flow of a non-corrosive ionic fluid through an electret sponge or a conductive fluid through a magnetic field.
Interconnected seismometers
Seismometers spaced in a seismic array can also be used to precisely locate, in three dimensions, the source of an earthquake, using the time it takes for seismic waves to propagate away from the hypocenter, the initiating point of fault rupture (See also Earthquake location). Interconnected seismometers are also used, as part of the International Monitoring System to detect underground nuclear test explosions, as well as for Earthquake early warning
systems. These seismometers are often used as part of a large scale
governmental or scientific project, but some organizations such as the Quake-Catcher Network, can use residential size detectors built into computers to detect earthquakes as well.
In reflection seismology, an array of seismometers is used to image sub-surface features. The data are reduced to images using algorithms similar to tomography.
The data reduction methods resemble those of computer-aided
tomographic medical imaging X-ray machines (CAT-scans), or imaging sonars.
A worldwide array of seismometers can actually image the interior
of the Earth in wave-speed and transmissivity. This type of system
uses events such as earthquakes, impact events or nuclear explosions
as wave sources. The first efforts at this method used manual data
reduction from paper seismograph charts. Modern digital seismograph
records are better adapted to direct computer use. With inexpensive
seismometer designs and internet access, amateurs and small institutions
have even formed a "public seismograph network".
Seismographic systems used for petroleum or other mineral exploration historically used an explosive and a wireline of geophones
unrolled behind a truck. Now most short-range systems use "thumpers"
that hit the ground, and some small commercial systems have such good
digital signal processing that a few sledgehammer strikes provide enough
signal for short-distance refractive surveys. Exotic cross or
two-dimensional arrays of geophones are sometimes used to perform
three-dimensional reflective imaging of subsurface features. Basic
linear refractive geomapping software (once a black art) is available
off-the-shelf, running on laptop computers, using strings as small as
three geophones. Some systems now come in an 18" (0.5 m) plastic field
case with a computer, display and printer in the cover.
Small seismic imaging systems are now sufficiently inexpensive to
be used by civil engineers to survey foundation sites, locate bedrock,
and find subsurface water.
Fiber optic cables as seismometers
A new technique for detecting earthquakes uses fiber optic cables.
In 2016 a team of metrologists running frequency metrology
experiments in England observed noise with a waveform resembling the
seismic waves generated by earthquakes. This was found to match
seismological observations of an Mw 6.0 earthquake in Italy, ~1400 km away. Further experiments in England, Italy, and with a submarine fiber optic cable to Malta detected additional earthquakes, including one 4,100 km away and an ML 3.4 earthquake 89 km away from the cable.
Seismic waves are detectable because they cause micrometer-scale
changes in the length of the cable. As the length changes so does the
time it takes a packet of light to traverse to the far end of the cable
and back (using a second fiber). Using ultra-stable metrology-grade
lasers, these extremely minute timing shifts (on the order of femtoseconds) appear as phase changes.
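As a rough sanity check on those orders of magnitude, the extra round-trip time produced by a micrometer-scale length change can be estimated directly. The sketch below assumes a typical group index of about 1.468 for silica fiber; both numbers are illustrative, not values from the experiments described here.

```python
# Round-trip time change for a small change in fiber length.
# The refractive index and the length change are assumed, illustrative values.
C = 299_792_458.0        # speed of light in vacuum, m/s
N_FIBER = 1.468          # typical group index of silica fiber (assumed)

def round_trip_delay_change(delta_length_m: float) -> float:
    """Extra round-trip time caused by a cable-length change.

    The light travels out and back (on the second fiber), so the path
    change is twice the cable-length change.
    """
    return 2.0 * delta_length_m * N_FIBER / C

# A 1 micrometer stretch of the cable:
dt = round_trip_delay_change(1e-6)
print(f"{dt * 1e15:.1f} fs")  # → 9.8 fs, i.e. on the order of femtoseconds
```

This confirms that micrometer-scale strain maps onto femtosecond-scale timing shifts, which is why metrology-grade lasers are needed to resolve them as phase changes.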
The point of the cable first disturbed by an earthquake's p-wave
(essentially a sound wave in rock) can be determined by sending packets
in both directions in the looped pair of optical fibers; the difference
in the arrival times of the first pair of perturbed packets indicates
the distance along the cable. This point is also the point closest to
the earthquake's epicenter, which should be on a plane perpendicular to
the cable. The difference between the p-wave/s-wave arrival times
provides a distance (under ideal conditions), constraining the epicenter
to a circle. A second detection on a non-parallel cable is needed to
resolve the ambiguity of the resulting solution. Additional observations
constrain the location of the earthquake's epicenter, and may resolve
the depth.
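The two distance constraints described above can be sketched numerically. This is a deliberately simplified model, not the actual interrogator design: it assumes the arrival-time difference of the counter-propagating packets scales with the disturbance point's offset from the cable midpoint, and it uses typical (assumed) crustal P- and S-wave velocities for the S-P distance rule.

```python
# Simplified localization sketch (illustrative model only).
C = 299_792_458.0
N_FIBER = 1.468                  # assumed group index of silica fiber
V_LIGHT = C / N_FIBER            # signal speed in the fiber, m/s

def point_along_cable(cable_length_m: float, delta_t_s: float) -> float:
    """Distance of the disturbance from one cable end, assuming the
    perturbed-packet arrival-time difference is (L - 2d) / v."""
    return (cable_length_m - V_LIGHT * delta_t_s) / 2.0

def epicentral_distance_km(sp_delay_s: float,
                           vp_km_s: float = 6.0,
                           vs_km_s: float = 3.5) -> float:
    """Distance implied by the P-wave/S-wave arrival-time difference,
    using typical crustal velocities (assumed values)."""
    return sp_delay_s * vp_km_s * vs_km_s / (vp_km_s - vs_km_s)

# A disturbance at the midpoint of a 100 km cable gives zero time difference:
print(point_along_cable(100_000.0, 0.0))   # → 50000.0 m
# An S-P delay of 10 s corresponds to roughly 84 km:
print(round(epicentral_distance_km(10.0)))  # → 84
```

The second function is the classic "distance from S minus P" rule: with the assumed velocities, each second of S-P delay adds about 8.4 km to the epicentral distance, which is why a single detection constrains the epicenter only to a circle.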
This technique is expected to be a boon in observing earthquakes,
especially the smaller ones, in vast portions of the global ocean where
there are no seismometers, and at a cost much cheaper than ocean bottom
seismometers.
Deep learning
Researchers at Stanford University created a deep-learning algorithm called UrbanDenoiser which can detect earthquakes, particularly in busy urban areas.
The algorithm filters background noise out of seismic recordings
gathered in cities, making the underlying earthquake signals detectable.
Today, the most common recorder is a computer with an
analog-to-digital converter, a disk drive and an internet connection;
for amateurs, a PC with a sound card and associated software is
adequate. Most systems record continuously, but some record only when a
signal is detected, as indicated by a short-term increase in the
variation of the signal relative to its long-term average (which can
vary slowly because of changes in seismic noise); this scheme is known as an STA/LTA trigger.
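An STA/LTA trigger can be sketched in a few lines. The window lengths and threshold below are illustrative choices, not values from any particular instrument, and production implementations use recursive averages rather than this brute-force form.

```python
# Minimal STA/LTA trigger sketch: flag samples where the short-term
# average amplitude jumps relative to the long-term average.
def sta_lta_trigger(signal, sta_len=5, lta_len=50, threshold=4.0):
    """Return the sample indices where STA/LTA exceeds the threshold."""
    triggers = []
    for i in range(lta_len, len(signal)):
        # Short- and long-term averages of absolute amplitude,
        # over the windows ending just before sample i.
        sta = sum(abs(x) for x in signal[i - sta_len:i]) / sta_len
        lta = sum(abs(x) for x in signal[i - lta_len:i]) / lta_len
        if lta > 0 and sta / lta > threshold:
            triggers.append(i)
    return triggers

# Quiet background with a sudden burst starting at sample 80:
quiet = [0.1] * 100
quiet[80:85] = [5.0] * 5
print(sta_lta_trigger(quiet)[:1])  # first trigger fires just after sample 80
```

Because the short window reacts to the burst much faster than the long window, the ratio spikes at onset and then decays as the burst bleeds into the long-term average, which is exactly the behavior the continuous-versus-triggered recording described above relies on.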
Prior to the availability of digital processing of seismic data
in the late 1970s, records were kept in a few different forms on
different types of media. A "Helicorder" drum was a device used to
record data onto photographic paper or onto paper with ink. A
"Develocorder" was a machine that recorded data from up to 20 channels
onto 16-mm film, which could then be viewed on a film reader.
Reading and measuring from these types of media was done by hand.
After digital processing came into use, archives of seismic
data were recorded on magnetic tape. As older
magnetic tape media deteriorate, a large number of waveforms from the archives are
no longer recoverable.
The aging of wine can potentially improve its quality. This distinguishes wine from most other consumable goods. While wine is perishable and capable of deteriorating, complex chemical reactions involving a wine's sugars, acids and phenolic compounds (such as tannins) can alter the aroma, color, mouthfeel
and taste of the wine in a way that may be more pleasing to the taster.
The ability of a wine to age is influenced by many factors including grape variety, vintage, viticultural practices, wine region and winemaking
style. The condition that the wine is kept in after bottling can also
influence how well a wine ages and may require significant time and
financial investment.
The quality of an aged wine varies significantly bottle by bottle,
depending on the conditions under which it was stored and the condition
of the bottle and cork; thus it is said that rather than good old
vintages, there are good old bottles. There is a significant mystique
around the aging of wine, as its chemistry was not understood for a long
time, and old wines are often sold for extraordinary prices. However,
the vast majority of wine is not aged, and even wine that is aged is
rarely aged for long; it is estimated that 90% of wine is meant to be
consumed within a year of production, and 99% of wine within 5 years.
History
The Ancient Greeks and Romans were aware of the potential of aged wines. In Greece, early examples of dried "straw wines" were noted for their ability to age due to their high sugar contents. These wines were stored in sealed earthenware amphorae and kept for many years. In Rome, the most sought-after wines – Falernian and Surrentine – were prized for their ability to age for decades. In the Book of Luke, it is noted that "old wine" was valued over "new wine" (Luke 5:39). The Greek physician Galen
wrote that the "taste" of aged wine was desirable and that this could
be accomplished by heating or smoking the wine, though, in Galen's
opinion, these artificially aged wines were not as healthy to consume as
naturally aged wines.
Following the Fall of the Roman Empire, appreciation for aged wine was virtually non-existent. Most of the wines produced in northern Europe were light-bodied,
pale in color and low in alcohol. These wines did not have much aging
potential and barely lasted a few months before they rapidly
deteriorated into vinegar.
The older a wine got, the cheaper its price became as merchants eagerly
sought to rid themselves of aging wine. By the 16th century, sweeter and
more alcoholic wines (like Malmsey and Sack) were being made in the Mediterranean and gaining attention for their aging ability. Similarly, Riesling from Germany,
with its combination of acidity and sugar, was also demonstrating its
ability to age. In the 17th century, two innovations occurred that
radically changed the wine industry's view on aging. One was the
development of the cork and bottle
which again allowed producers to package and store wine in a virtually
air-tight environment. The second was the growing popularity of fortified wines such as Port, Madeira and Sherries. The added alcohol was found to act as a preservative, allowing wines to survive long sea voyages to England, The Americas and the East Indies. The English, in particular, were growing in their appreciation of aged wines like Port and Claret from Bordeaux. Demand for matured wines had a pronounced effect on the wine trade. For producers, the cost and space of storing barrels
or bottles of wine was prohibitive so a merchant class evolved with
warehouses and the finances to facilitate aging wines for a longer
period of time. In regions like Bordeaux, Oporto and Burgundy, this situation dramatically shifted the balance of power towards the merchant classes.
Aging potential
There is a widespread misconception that wine always improves with age,
or that wine improves with extended aging, or that aging potential is
an indicator of good wine. Some authorities state that more wine is
consumed too old than too young. Aging changes
wine, but does not categorically improve it or worsen it. Fruitiness
deteriorates rapidly, decreasing markedly after only 6 months in the
bottle.
Due to the cost of storage, it is not economical to age cheap wines,
and many varieties of wine do not benefit from aging, regardless of the
quality. Experts vary on precise numbers, but typically state that only
5–10% of wine improves after 1 year, and only 1% improves after 5–10
years.
In general, wines with a low pH (such as pinot noir and Sangiovese) have a greater capability of aging. With red wines, a high level of flavor compounds, such as phenolics
(most notably tannins), will increase the likelihood that a wine will
be able to age. Wines with high levels of phenols include Cabernet Sauvignon, Nebbiolo and Syrah. The white wines with the longest aging potential tend to be those with a high amount of extract and acidity (such as Riesling).
The acidity in white wines, acting as a preservative, has a role
similar to that of tannins in red wines. The process of making white
wines, which includes little to no skin contact, means that white wines
have a significantly lower amount of phenolic compounds, though barrel fermentation and oak aging can impart some phenols. Similarly, the minimal skin contact with rosé wine limits its aging potential.
After aging at the winery most wood-aged ports, sherries, vins doux naturels, vins de liqueur, basic level ice wines, and sparkling wines
are bottled when the producer feels that they are ready to be consumed.
These wines are ready to drink upon release and will not benefit much
from aging. Vintage ports and other bottled-aged ports and sherries will
benefit from some additional aging.
Champagne
and other sparkling wines are infrequently aged, and frequently have no
vintage year (no vintage, NV), but vintage champagne may be aged. Aged champagne has traditionally been a peculiarly British affectation, and thus has been referred to as le goût anglais "the English taste", though this term also refers to a level of champagne sweetness.
In principle champagne has aging potential, due to the acidity, and
aged champagne has increased in popularity in the United States since
the 1996 vintage. A few French winemakers have advocated aging champagne, most notably René Collard (1921–2009). In 2009, a 184-year-old bottle of Perrier-Jouët was opened and tasted, still drinkable, with notes of "truffles and caramel", according to the experts.
Good aging potential
Master of Wine Jancis Robinson
provides the following general guidelines on aging wines. Note that
vintage, wine region and winemaking style can influence a wine's aging
potential, so Robinson's guidelines are general estimates for the most
common examples of these wines.
The ratio of sugars, acids and phenolics to water is a key determinant of how well a wine can age. The less water in the grapes prior to harvest,
the more likely the resulting wine will have some aging potential.
Grape variety, climate, vintage and viticultural practice come into play
here. Grape varieties with thicker skins, from a dry growing season
where little irrigation was used and yields were kept low, will have less water and a higher ratio of sugars, acids and phenolics. The process of making Eisweins, where water is removed from the grape during pressing as frozen ice crystals, has a similar effect of decreasing the amount of water and increasing aging potential.
In winemaking, the duration of maceration or skin contact will influence how much phenolic compounds are leached from skins into the wine. Pigmented tannins, anthocyanins, colloids, tannin-polysaccharides and tannin-proteins
not only influence a wine's resulting color but also act as
preservatives. During fermentation, adjustments can be made to a wine's
acid levels, with lower-pH wines having more aging potential.
Exposure to oak
either during fermentation or after (during barrel aging) will
introduce more phenolic compounds to the wines. Prior to bottling,
excessive fining or filtering of the wine could strip the wine of some phenolic solids and may lessen a wine's ability to age.
Storage conditions after bottling can influence a wine's aging.
Vibrations and heat fluctuations can hasten a wine's
deterioration and have adverse effects on it. In general, a wine
has a greater potential to develop complexity and more aromatic bouquet
if it is allowed to age slowly in a relatively cool environment. The
lower the temperature, the more slowly a wine develops. On average, the rate of chemical reactions in wine doubles with each 18 °F (10 °C) increase in temperature. Wine expert Karen MacNeil
recommends keeping wine intended for aging in a cool area with a
constant temperature around 55 °F (13 °C). Wine can be stored at
temperatures as high as 69 °F (20 °C) without long-term negative effects.
Professor Cornelius Ough of the University of California, Davis
believes that wine could be exposed to temperatures as high as 120 °F
(49 °C) for a few hours and not be damaged. However, most experts
believe that extreme temperature fluctuations (such as repeated
transferring of a wine from a warm room to a cool refrigerator) would be
detrimental to the wine. The ultra-violet rays of direct sunlight should also be avoided because of the free radicals that can develop in the wine and result in premature oxidation.
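The "doubles every 18 °F (10 °C)" rule of thumb quoted above can be turned into a quick relative-rate estimate. The reference temperature of 13 °C follows MacNeil's recommended cellar temperature; the function encodes only the rule of thumb, not real reaction kinetics.

```python
# Relative aging rate under the "doubles every 10 °C" rule of thumb.
def relative_rate(temp_c: float, reference_c: float = 13.0) -> float:
    """How many times faster wine chemistry runs at temp_c than at the
    reference cellar temperature (13 °C / 55 °F)."""
    return 2.0 ** ((temp_c - reference_c) / 10.0)

print(round(relative_rate(23.0), 2))  # → 2.0 (ten degrees warmer: twice as fast)
print(round(relative_rate(33.0), 2))  # → 4.0 (twenty degrees warmer: four times)
```

By this estimate, a wine held at 20 °C instead of 13 °C ages roughly 1.6 times faster, which is why the long-term storage ceiling quoted above sits close to cellar temperature.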
Wines packaged in large format bottles, such as magnums and 3 liter Jeroboams, seem to age more slowly than wines packaged in regular 750 ml
bottles or half bottles. This may be because the larger volume lowers
the proportion of the wine exposed to oxygen during bottling. The advent of alternative wine closures
to cork, such as screw caps and synthetic corks, has opened up recent
discussions on the aging potential of wines sealed with these
alternative closures. Currently there are no conclusive results and the
topic is the subject of ongoing research.
Wine also has a short-term aging need: a period just after bottling
when the wine is considered "sick" due to the trauma and volatility of
the bottling process. During bottling the wine is exposed to some oxygen, which
causes a domino effect of chemical reactions with various components of
the wine. The time it takes for the wine to settle down and have the
oxygen fully dissolve and integrate with the wine is considered its
period of "bottle shock". During this time the wine could taste
drastically different from how it did prior to bottling or how it will
taste after the wine has settled. While many modern bottling lines try
to treat the wine as gently as possible and utilize inert gases
to minimize the amount of oxygen exposure, all wine goes through some
period of bottle shock. The length of this period will vary with each
individual wine.
The transfer of off-flavours
from the cork used to bottle a wine during prolonged aging can be
detrimental to the quality of the bottle. The formation of cork taint is
a complex process which may result from a wide range of factors ranging
from the growing conditions of the cork oak, the processing of the cork
into stoppers, or the molds growing on the cork itself.
Dumb phase
During
the course of aging, a wine may slip into a "dumb phase" where its
aromas and flavors are very muted. In Bordeaux this phase is called the age ingrat or "difficult age" and is likened to a teenager going through adolescence.
The cause or length of time that this "dumb phase" will last is not yet
fully understood and seems to vary from bottle to bottle.
Effects on wine
As red wine ages, the harsh tannins of its youth gradually give way to a softer mouthfeel.
An inky dark color will eventually lose its depth of color and begin to
appear orange at the edges, and eventually turn brown. These changes
occur due to the complex chemical reactions of the phenolic compounds of
the wine. In processes that begin during fermentation and continue
after bottling, these compounds bind together and aggregate. Eventually
these particles reach a certain size where they are too large to stay
suspended in the solution and precipitate out. The presence of visible
sediment in a bottle will usually indicate a mature wine. The resulting
wine, with this loss of tannins and pigment, will have a paler color and
taste softer, less astringent. The sediment, while harmless, can have
an unpleasant taste and is often separated from the wine by decanting.
During the aging process, the perception of a wine's acidity may
change even though the total measurable amount of acidity is more or
less constant throughout a wine's life. This is due to the esterification of the acids, which combine with alcohols in complex arrays to form esters.
In addition to making a wine taste less acidic, these esters introduce a
range of possible aromas. Eventually the wine may age to a point where
other components of the wine (such as tannins and fruit) are less
noticeable themselves, which will then bring back a heightened
perception of wine acidity. Other chemical processes that occur during
aging include the hydrolysis of flavor precursors, which detach from glucose molecules and introduce new flavor notes in the older wine, and the oxidation of aldehydes. The interaction of certain phenolics develops what are known as tertiary aromas, which are different from the primary aromas derived from the grape and formed during fermentation.
As a wine starts to mature, its bouquet will become more developed
and multi-layered. While a taster may be able to pick out a few fruit
notes in a young wine, a more complex wine will have several distinct
fruit, floral, earthy, mineral and oak derived notes. The lingering
finish of a wine will lengthen. Eventually the wine will reach a point
of maturity, when it is said to be at its "peak". This is the point when
the wine has the maximum amount of complexity, most pleasing mouthfeel
and softening of tannins and has not yet started to decay. When this
point will occur is not yet predictable and can vary from bottle to
bottle. If a wine is aged for too long, it will start to descend into
decrepitude where the fruit tastes hollow and weak while the wine's
acidity becomes dominant.
The natural esterification that takes place in wines
and other alcoholic beverages during the aging process is an example of
acid-catalysed esterification. Over time, the acidity of the acetic acid and tannins in an aging wine will catalytically protonate other organic acids (including acetic acid itself), encouraging ethanol to react as a nucleophile. As a result, ethyl acetate
– the ester of ethanol and acetic acid – is the most abundant ester in
wines. Other combinations of organic alcohols (such as phenol-containing
compounds) and organic acids lead to a variety of different esters in
wines, contributing to their different flavours, smells and tastes. Of
course, compared to sulfuric acid conditions, the acid conditions
in a wine are mild, so the yield is low (often tenths or hundredths of a
percentage point by volume) and it takes years for esters to accumulate.
Coates’ Law of Maturity
Coates’ Law of Maturity is a principle used in wine tasting relating to the aging ability of wine. Developed by the British Master of Wine Clive Coates,
the principle states that a wine will remain at its peak (or optimal)
drinking quality for a duration of time that is equal to the time of
maturation required to reach its optimal quality. During the aging of a
wine certain flavors, aromas and textures appear and fade. Rather than
developing and fading in unison,
these traits each operate on a unique path and time line. The principle
allows for the subjectivity of individual tastes because it follows the
logic that positive traits that appeal to one particular wine taster
will continue to persist along the principle's guideline while for
another taster these traits might not be positive and therefore not
applicable to the guideline. Wine expert Tom Stevenson has noted that there is logic in Coates' principle and that he has yet to encounter an anomaly or wine that debunks it.
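Coates' principle reduces to simple arithmetic: the optimal drinking window lasts as long as the maturation that preceded it. A minimal sketch:

```python
# Coates' Law of Maturity as arithmetic: a wine stays at its peak for
# as long as it took to reach that peak.
def peak_window(age_at_peak_onset: float) -> tuple:
    """Return (start, end) of the optimal drinking window, in years.

    Under Coates' principle the plateau lasts as long as the maturation:
    a wine that comes good at 10 years stays good until 20.
    """
    return (age_at_peak_onset, 2 * age_at_peak_onset)

print(peak_window(10))  # → (10, 20)
```

The subjectivity the principle allows for enters through the input: the age at which a given taster first finds the wine pleasing is that taster's peak onset, so different tasters get different windows from the same bottle.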
Example
An
example of the principle in practice would be a wine that someone
acquires when it is 9 years of age, but finds dull. A year later the
drinker finds this wine very pleasing in texture, aroma and mouthfeel.
Under the Coates Law of Maturity, the wine will remain at an
optimal maturation for that drinker until it reaches 20
years of age, at which time those positive traits that the drinker
perceives will start to fade.
Artificial aging
There is a long history of using artificial means to try to accelerate the natural aging process. In Ancient Rome a smoke chamber known as a fumarium was used to enhance the flavor of wine through artificial aging. Amphorae were placed in the chamber, which was built on top of a heated hearth,
in order to impart a smoky flavor in the wine that also seemed to
sharpen the acidity. The wine would sometimes come out of the fumarium
with a paler color just like aged wine. Modern winemaking techniques like micro-oxygenation can have the side effect of artificially aging the wine. In the production of Madeira and rancio
wines, the wines are deliberately exposed to excessive temperatures to
accelerate the maturation of the wine. Other techniques used to
artificially age wine (with inconclusive results on their effectiveness)
include shaking the wine and exposing it to radiation, magnetism or ultra-sonic waves. More recently, experiments with artificial aging through high-voltage electricity have produced better results than the other techniques, as assessed by a panel of wine tasters.
Some artificial wine-aging gadgets include the "Clef du Vin", which is a
metallic object that is dipped into wine and purportedly ages the wine
one year for every second of dipping. The product has received mixed
reviews from wine commentators.
Several wineries have begun aging finished wine bottles undersea; ocean
aging is thought to accelerate natural aging reactions as a function of
depth (pressure).