
Sunday, February 3, 2019

False vacuum

From Wikipedia, the free encyclopedia

A scalar field φ in a false vacuum. Note that the energy E is higher than that in the true vacuum or ground state, but there is a barrier preventing the field from classically rolling down to the true vacuum. Therefore, the transition to the true vacuum must be stimulated by the creation of high-energy particles or through quantum-mechanical tunneling.
 
In quantum field theory, a false vacuum is a hypothetical vacuum that is somewhat, but not entirely, stable. It may last for a very long time in that state, and might eventually move to a more stable state. The most common suggestion of how such a change might happen is called bubble nucleation: if a small region of the universe by chance reached a more stable vacuum, this 'bubble' would spread.

A false vacuum may only exist at a local minimum of energy and is therefore not stable, in contrast to a true vacuum, which exists at a global minimum and is stable. A false vacuum may be very long-lived, or metastable.

True vs false vacuum

A vacuum or vacuum state is defined as a space with as little energy in it as possible. Despite the name, the vacuum state still contains quantum fields. A true vacuum is a global minimum of energy (and is therefore also a local minimum). This configuration is stable.

It is possible that the process of removing the largest amount of energy and particles possible from a normal space results in a different configuration of quantum fields with a local minimum of energy. This local minimum is called a "false vacuum". In this case, there would be a barrier to entering the true vacuum. Perhaps the barrier is so high that it has never yet been overcome anywhere in the universe. 

A false vacuum is unstable: quantum tunnelling, described semi-classically by instantons, can carry the field to a lower-energy state. Tunnelling can be triggered by quantum fluctuations or by the creation of high-energy particles. The false vacuum is a local minimum, but not the lowest energy state.
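The local-versus-global-minimum picture can be illustrated numerically with a toy potential. The form below is an arbitrary tilted double well chosen only for illustration, not a physical field potential:

```python
# Illustrative only: a tilted double-well potential with a false (local)
# and a true (global) minimum. V(phi) = (phi^2 - 1)^2 + 0.3*phi is a toy
# choice, not a Standard Model potential.

def V(phi):
    return (phi**2 - 1)**2 + 0.3 * phi

# Scan a grid and keep every point lower than both neighbours (local minima).
phis = [i / 1000.0 for i in range(-2000, 2001)]
minima = [phis[i] for i in range(1, len(phis) - 1)
          if V(phis[i]) < V(phis[i - 1]) and V(phis[i]) < V(phis[i + 1])]

true_vac = min(minima, key=V)    # global minimum: the true vacuum
false_vac = max(minima, key=V)   # higher local minimum: the false vacuum
print(f"true vacuum  phi = {true_vac:+.3f}, V = {V(true_vac):+.3f}")
print(f"false vacuum phi = {false_vac:+.3f}, V = {V(false_vac):+.3f}")
```

The tilt term 0.3·φ breaks the symmetry of the double well, so one minimum sits above the other; classically the field can rest in either, and only tunnelling connects them.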

Standard Model vacuum

If the Standard Model is correct, the particles and forces we observe in our universe exist as they do because of underlying quantum fields. Quantum fields can have states of differing stability, including 'stable', 'unstable', or 'metastable' (meaning very long-lived but not completely stable). If a more stable vacuum state were able to arise, then existing particles and forces would no longer arise as they do in the universe's present state. Different particles or forces would arise from (and be shaped by) whatever new quantum states arose. The world we know depends upon these particles and forces, so if this happened, everything around us, from subatomic particles to galaxies, and all fundamental forces, would be reconstituted into new fundamental particles and forces and structures. The universe would lose all of its present structures and become inhabited by new ones (depending upon the exact states involved) based upon the same quantum fields.

Stability and instability of the vacuum

Diagram showing the Higgs boson and top quark masses, which could indicate whether our universe is stable or a long-lived 'bubble'. The outer dotted line shows the current measurement uncertainties; the inner ones show the uncertainties predicted after future physics programs are completed, though their location could fall anywhere inside the outer line.
 
Many scientific models of the universe have included the possibility that it exists as a long-lived, but not completely stable, sector of space, which could potentially at some time be destroyed upon 'toppling' into a more stable vacuum state.

A universe in a false vacuum state allows for the formation of a bubble of more stable "true vacuum" at any time or place. This bubble expands outward at the speed of light.

The Standard Model of particle physics opens the possibility of calculating, from the masses of the Higgs boson and the top quark, whether the universe's present electroweak vacuum state is likely to be stable or merely long-lived. (This was sometimes misreported as the Higgs boson "ending" the universe). A 125–127 GeV Higgs mass seems to be extremely close to the boundary for stability (estimated in 2012 as 123.8–135.0 GeV). However, a definitive answer requires much more precise measurements of the top quark's pole mass, and new physics beyond the Standard Model of Particle Physics could drastically change this picture.

Implications

Existential threat

In a 2005 paper published in Nature, as part of their investigation into global catastrophic risks, MIT physicist Max Tegmark and Oxford philosopher Nick Bostrom calculate the natural risks of the destruction of the Earth at less than 1 per gigayear from all events, including a transition to a lower vacuum state. They argue that due to observer selection effects, we might underestimate the chances of being destroyed by vacuum decay because any information about this event would reach us only at the instant when we too were destroyed. This is in contrast to events like risks from impacts, gamma-ray bursts, supernovae and hypernovae, whose frequencies we have adequate direct measures of.

If measurements of these particles suggest that our universe lies within a false vacuum of this kind, it would imply that the universe could cease to exist as we know it if a true vacuum happened to nucleate, though more than likely not for many billions of years.

In a study posted on the arXiv in March 2015, it was pointed out that the vacuum decay rate could be vastly increased in the vicinity of black holes, which would serve as a nucleation seed. According to this study a potentially catastrophic vacuum decay could be triggered any time by primordial black holes, should they exist. If particle collisions produce mini black holes then energetic collisions such as the ones produced in the Large Hadron Collider (LHC) could trigger such a vacuum decay event. However the authors say that this is not a reason to expect the universe to collapse, because if such mini black holes can be created in collisions, they would also be created in the much more energetic collisions of cosmic radiation particles with planetary surfaces. And if there are primordial mini black holes they should have triggered the vacuum decay long ago. Rather, they see their calculations as evidence that there must be something else preventing vacuum decay.

Inflation

It would also have implications for other aspects of physics, and would suggest that the Higgs self-coupling λ and its βλ function could be very close to zero at the Planck scale, with "intriguing" implications, including for theories of gravity and Higgs-based inflation. A future electron-positron collider would be able to provide the precise measurements of the top quark needed for such calculations.

Vacuum decay

Vacuum decay would be theoretically possible if our universe had a false vacuum in the first place, an issue that was highly theoretical and far from resolved in 1982. If this were the case, a bubble of lower-energy vacuum could come to exist by chance or otherwise in our universe, and catalyze the conversion of our universe to a lower energy state in a volume expanding at nearly the speed of light, destroying all of the observable universe without forewarning. Chaotic Inflation Theory suggests that the universe may be in either a false vacuum or a true vacuum state.

A paper by Coleman and de Luccia which attempted to include simple gravitational assumptions into these theories noted that if this was an accurate representation of nature, then the resulting universe "inside the bubble" in such a case would appear to be extremely unstable and would almost immediately collapse.

In general, gravitation makes the probability of vacuum decay smaller; in the extreme case of very small energy-density difference, it can even stabilize the false vacuum, preventing vacuum decay altogether. We believe we understand this. For the vacuum to decay, it must be possible to build a bubble of total energy zero. In the absence of gravitation, this is no problem, no matter how small the energy-density difference; all one has to do is make the bubble big enough, and the volume/surface ratio will do the job. In the presence of gravitation, though, the negative energy density of the true vacuum distorts geometry within the bubble with the result that, for a small enough energy density, there is no bubble with a big enough volume/surface ratio. Within the bubble, the effects of gravitation are more dramatic. The geometry of space-time within the bubble is that of anti-de Sitter space, a space much like conventional de Sitter space except that its group of symmetries is O(3, 2) rather than O(4, 1). Although this space-time is free of singularities, it is unstable under small perturbations, and inevitably suffers gravitational collapse of the same sort as the end state of a contracting Friedmann universe. The time required for the collapse of the interior universe is on the order of ... microseconds or less.
 
The possibility that we are living in a false vacuum has never been a cheering one to contemplate. Vacuum decay is the ultimate ecological catastrophe; in the new vacuum there are new constants of nature; after vacuum decay, not only is life as we know it impossible, so is chemistry as we know it. However, one could always draw stoic comfort from the possibility that perhaps in the course of time the new vacuum would sustain, if not life as we know it, at least some structures capable of knowing joy. This possibility has now been eliminated.

The second special case is decay into a space of vanishing cosmological constant, the case that applies if we are now living in the debris of a false vacuum which decayed at some early cosmic epoch. This case presents us with less interesting physics and with fewer occasions for rhetorical excess than the preceding one. It is now the interior of the bubble that is ordinary Minkowski space...
Sidney Coleman and Frank De Luccia.
 
Such an event would be one possible doomsday event. It was used as a plot device in a science-fiction story in 1988 by Geoffrey A. Landis, in 2000 by Stephen Baxter, in 2002 by Greg Egan in his novel Schild's Ladder, and in 2015 by Alastair Reynolds in his novel Poseidon's Wake.

In theory, either high enough energy concentrations or random chance could trigger the tunneling needed to set this event in motion. However, an immense number of ultra-high-energy particles and events have occurred in the history of our universe, dwarfing by many orders of magnitude any events at human disposal. Hut and Rees note that, because we have observed cosmic ray collisions at much higher energies than those produced in terrestrial particle accelerators, these experiments should not, at least for the foreseeable future, pose a threat to our current vacuum. Particle accelerators have reached energies of only approximately eight tera-electronvolts (8×10¹² eV). Cosmic ray collisions have been observed at and beyond energies of 10¹⁸ eV (the so-called Greisen–Zatsepin–Kuzmin limit), roughly five orders of magnitude more powerful, and other cosmic events may be more powerful yet. Against this, John Leslie has argued that if present trends continue, particle accelerators will exceed the energy given off in naturally occurring cosmic ray collisions by the year 2150. Fears of this kind were raised by critics of both the Relativistic Heavy Ion Collider and the Large Hadron Collider at the time of their respective proposals, and were determined to be unfounded by scientific inquiry.
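As a rough arithmetic check on the figures above:

```python
# Back-of-envelope comparison of the collision energies quoted in the text.
accelerator_eV = 8e12   # ~8 TeV, Large Hadron Collider scale
cosmic_ray_eV = 1e18    # GZK-limit-scale cosmic ray collisions

ratio = cosmic_ray_eV / accelerator_eV
print(f"cosmic rays exceed accelerator energies by a factor of ~{ratio:.0e}")
```

The ratio comes out near 10⁵, i.e. about five orders of magnitude.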

Bubble nucleation

In the theoretical physics of the false vacuum, the system moves to a lower energy state – either the true vacuum, or another, lower energy vacuum – through a process known as bubble nucleation. In this, instanton effects cause a bubble to appear in which fields have their true vacuum values inside. Therefore, the interior of the bubble has a lower energy. The walls of the bubble (or domain walls) have a surface tension, as energy is expended as the fields roll over the potential barrier to the lower energy vacuum. The most likely size of the bubble is determined in the semi-classical approximation to be such that the bubble has zero total change in the energy: the decrease in energy by the true vacuum in the interior is compensated by the tension of the walls. 
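The zero-total-energy balance described above can be sketched in a thin-wall toy model. The surface tension and energy-density values below are arbitrary placeholders, not physical numbers:

```python
from math import pi

# Thin-wall sketch (illustrative units): total energy of a true-vacuum
# bubble of radius R, with wall surface tension sigma and interior
# energy-density gain eps. Both parameter values are arbitrary.
sigma, eps = 1.0, 0.5

def bubble_energy(R):
    # wall cost grows as the surface area; interior gain grows as the volume
    return 4 * pi * R**2 * sigma - (4.0 / 3.0) * pi * R**3 * eps

# The zero-energy bubble of the semi-classical argument: the wall cost
# exactly cancels the interior gain at R0 = 3*sigma/eps.
R0 = 3 * sigma / eps
print(f"R0 = {R0}, E(R0) = {bubble_energy(R0):.2e}")
```

Below R0 the wall term dominates and the bubble shrinks away; a bubble nucleated at or beyond this size gains energy by growing.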

Joseph Lykken has said that study of the exact properties of the Higgs boson could shed light on the possibility of vacuum collapse.

Expansion of bubble

Any increase in the size of the bubble will decrease its potential energy: the energy of the wall grows as the surface area of a sphere (∝ R²), but the negative contribution of the interior grows more quickly, as the volume (∝ R³). Therefore, after the bubble is nucleated, it quickly begins expanding at very nearly the speed of light. The excess energy contributes to the very large kinetic energy of the walls. If two bubbles are nucleated and they eventually collide, it is thought that particle production would occur where the walls collide.

The tunneling rate is increased by increasing the energy difference between the two vacua and decreased by increasing the height or width of the barrier.
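This dependence can be illustrated with Coleman's thin-wall estimate, in which the decay rate per unit volume scales as exp(−B) with bounce action B = 27π²σ⁴/(2ε³) in natural units; a taller or wider barrier shows up as a larger effective tension σ. The parameter values are illustrative only:

```python
from math import pi

# Thin-wall bounce action (natural units): B = 27*pi^2*sigma^4 / (2*eps^3),
# where sigma is the wall tension and eps the energy gap between the vacua.
def bounce_action(sigma, eps):
    return 27 * pi**2 * sigma**4 / (2 * eps**3)

# A larger energy gap eps lowers B and so speeds up tunnelling;
# a higher wall tension sigma raises B and suppresses it.
B_small_gap = bounce_action(sigma=1.0, eps=0.5)
B_large_gap = bounce_action(sigma=1.0, eps=1.0)
print(B_small_gap, B_large_gap)
```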

Gravitational effects

The addition of gravity to the story leads to a considerably richer variety of phenomena. The key insight is that a false vacuum with positive potential energy density is a de Sitter vacuum, in which the potential energy acts as a cosmological constant and the Universe is undergoing the exponential expansion of de Sitter space. This leads to a number of interesting effects, first studied by Coleman and de Luccia.

Development of theories

Alan Guth, in his original proposal for cosmic inflation, proposed that inflation could end through quantum mechanical bubble nucleation of the sort described above. See History of Chaotic inflation theory. It was soon understood that a homogeneous and isotropic universe could not be preserved through the violent tunneling process. This led Andrei Linde and, independently, Andreas Albrecht and Paul Steinhardt, to propose "new inflation" or "slow roll inflation" in which no tunneling occurs, and the inflationary scalar field instead rolls down a gentle slope.

History of quantum mechanics


10 influential figures in the history of quantum mechanics.

The history of quantum mechanics is a fundamental part of the history of modern physics. Quantum mechanics' history, as it interlaces with the history of quantum chemistry, began essentially with a number of different scientific discoveries: the 1838 discovery of cathode rays by Michael Faraday; the 1859–60 winter statement of the black-body radiation problem by Gustav Kirchhoff; the 1877 suggestion by Ludwig Boltzmann that the energy states of a physical system could be discrete; the discovery of the photoelectric effect by Heinrich Hertz in 1887; and the 1900 quantum hypothesis by Max Planck that any energy-radiating atomic system can theoretically be divided into a number of discrete "energy elements" ε (epsilon) such that each of these energy elements is proportional to the frequency ν with which each of them individually radiates energy, as defined by the formula ε = hν, where h is a numerical value called Planck's constant.
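For a concrete sense of scale, Planck's relation ε = hν can be evaluated for visible light (the frequency below is chosen only for illustration):

```python
# Planck's relation epsilon = h * nu, in SI units (rounded constants).
h = 6.626e-34            # Planck's constant, J*s
nu = 5.0e14              # frequency of green visible light, Hz (illustrative)

epsilon = h * nu                   # one energy quantum, in joules
epsilon_eV = epsilon / 1.602e-19   # the same quantum in electronvolts
print(f"epsilon = {epsilon:.3e} J = {epsilon_eV:.2f} eV")
```

A visible-light quantum carries on the order of a couple of electronvolts, which is exactly the energy scale of atomic transitions and work functions.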

Then, Albert Einstein in 1905, in order to explain the photoelectric effect previously reported by Heinrich Hertz in 1887, postulated consistently with Max Planck's quantum hypothesis that light itself is made of individual quantum particles, which in 1926 came to be called photons by Gilbert N. Lewis. The photoelectric effect was observed upon shining light of particular wavelengths on certain materials, such as metals, which caused electrons to be ejected from those materials only if the light quantum energy was greater than the work function of the metal's surface. 
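Einstein's relation for the ejected electron, K = hν − φ, can be sketched as a simple threshold check. The work function below is an approximate figure for sodium, used only for illustration:

```python
h_eV = 4.136e-15         # Planck's constant in eV*s (rounded)
work_function_eV = 2.30  # approximate work function of sodium (illustrative)

def ejected_kinetic_energy(nu_hz):
    """Einstein's photoelectric relation K = h*nu - phi; None below threshold."""
    k = h_eV * nu_hz - work_function_eV
    return k if k > 0 else None

print(ejected_kinetic_energy(7.0e14))  # above threshold: positive K in eV
print(ejected_kinetic_energy(4.0e14))  # below threshold: no electron ejected
```

The threshold behaviour, rather than any dependence on light intensity, is what the wave picture could not explain.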

The phrase "quantum mechanics" was coined (in German, Quantenmechanik) by the group of physicists including Max Born, Werner Heisenberg, and Wolfgang Pauli, at the University of Göttingen in the early 1920s, and was first used in Born's 1924 paper "Zur Quantenmechanik". In the years to follow, this theoretical basis slowly began to be applied to chemical structure, reactivity, and bonding.

Overview

Ludwig Boltzmann's diagram of the I2 molecule proposed in 1898 showing the atomic "sensitive region" (α, β) of overlap.
 
Ludwig Boltzmann suggested in 1877 that the energy levels of a physical system, such as a molecule, could be discrete. He was a founder of the Austrian Mathematical Society, together with the mathematicians Gustav von Escherich and Emil Müller. Boltzmann's rationale for the presence of discrete energy levels in molecules such as those of iodine gas had its origins in his statistical thermodynamics and statistical mechanics theories and was backed up by mathematical arguments, as would also be the case twenty years later with the first quantum theory put forward by Max Planck.

In 1900, the German physicist Max Planck reluctantly introduced the idea that energy is quantized in order to derive a formula for the observed frequency dependence of the energy emitted by a black body, called Planck's law, that included a Boltzmann distribution (applicable in the classical limit). Planck's law can be stated as follows:

I(ν,T) = (2hν³/c²) · 1/(exp(hν/kT) − 1)

where:
I(ν,T) is the energy per unit time (or the power) radiated per unit area of emitting surface in the normal direction per unit solid angle per unit frequency by a black body at temperature T;
h is the Planck constant;
c is the speed of light in a vacuum;
k is the Boltzmann constant;
ν (nu) is the frequency of the electromagnetic radiation; and
T is the temperature of the body in kelvins.
The earlier Wien approximation may be derived from Planck's law by assuming hν ≫ kT, so that exp(hν/kT) − 1 ≈ exp(hν/kT).
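The two formulas can be compared directly; the constants are rounded SI values, and the frequency and temperature are chosen so that hν ≫ kT holds:

```python
from math import exp

h = 6.626e-34   # Planck constant, J*s
c = 2.998e8     # speed of light, m/s
k = 1.381e-23   # Boltzmann constant, J/K

def planck(nu, T):
    """Spectral radiance I(nu, T) of a black body (Planck's law)."""
    return (2 * h * nu**3 / c**2) / (exp(h * nu / (k * T)) - 1)

def wien(nu, T):
    """Wien approximation: drop the -1, valid when h*nu >> k*T."""
    return (2 * h * nu**3 / c**2) * exp(-h * nu / (k * T))

# At high frequency relative to the thermal scale, the two agree closely.
nu, T = 1.0e15, 3000.0   # h*nu/(k*T) is about 16 here
print(planck(nu, T), wien(nu, T))
```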

Moreover, the application of Planck's quantum theory to the electron allowed Ștefan Procopiu in 1911–1913, and subsequently Niels Bohr in 1913, to calculate the magnetic moment of the electron, which was later called the "magneton"; similar quantum computations, but with numerically quite different values, were subsequently made possible for both the magnetic moments of the proton and the neutron that are three orders of magnitude smaller than that of the electron. 
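The "three orders of magnitude" is essentially the electron-to-proton mass ratio, since the magneton formula eħ/(2m) differs between the two cases only in the mass:

```python
# Bohr and nuclear magnetons, mu = e*hbar/(2*m), in rounded SI units.
e_charge = 1.602e-19   # elementary charge, C
hbar = 1.055e-34       # reduced Planck constant, J*s
m_e = 9.109e-31        # electron mass, kg
m_p = 1.673e-27        # proton mass, kg

mu_B = e_charge * hbar / (2 * m_e)   # Bohr magneton (electron)
mu_N = e_charge * hbar / (2 * m_p)   # nuclear magneton (proton/neutron scale)

print(f"Bohr magneton    ~ {mu_B:.3e} J/T")
print(f"nuclear magneton ~ {mu_N:.3e} J/T")
print(f"ratio ~ {mu_B / mu_N:.0f}")   # the proton-to-electron mass ratio
```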

Photoelectric effect

The emission of electrons from a metal plate caused by light quanta (photons) with energy greater than the work function of the metal. The photoelectric effect was reported by Heinrich Hertz in 1887 and explained by Albert Einstein in 1905.

Low-energy phenomena: Photoelectric effect
Mid-energy phenomena: Compton scattering
High-energy phenomena: Pair production

In 1905, Einstein explained the photoelectric effect by postulating that light, or more generally all electromagnetic radiation, can be divided into a finite number of "energy quanta" that are localized points in space. From the introduction section of his March 1905 quantum paper, "On a heuristic viewpoint concerning the emission and transformation of light", Einstein states:
According to the assumption to be contemplated here, when a light ray is spreading from a point, the energy is not distributed continuously over ever-increasing spaces, but consists of a finite number of 'energy quanta' that are localized in points in space, move without dividing, and can be absorbed or generated only as a whole.
This statement has been called the most revolutionary sentence written by a physicist of the twentieth century. These energy quanta later came to be called "photons", a term introduced by Gilbert N. Lewis in 1926. The idea that each photon had to consist of energy in terms of quanta was a remarkable achievement; it effectively solved the problem of black-body radiation attaining infinite energy, which occurred in theory if light were to be explained only in terms of waves. In 1913, Bohr explained the spectral lines of the hydrogen atom, again by using quantization, in his paper of July 1913, On the Constitution of Atoms and Molecules.
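Bohr's quantization reproduces the observed hydrogen lines via the Rydberg formula, 1/λ = R_H (1/n₁² − 1/n₂²); for example the first Balmer line:

```python
R_H = 1.097e7   # Rydberg constant for hydrogen, 1/m (rounded)

def hydrogen_line_nm(n_lower, n_upper):
    """Wavelength (nm) of the photon emitted in the jump n_upper -> n_lower."""
    inv_wavelength = R_H * (1 / n_lower**2 - 1 / n_upper**2)
    return 1e9 / inv_wavelength

# The first Balmer line (H-alpha), the jump from n=3 to n=2, in the visible red:
print(f"{hydrogen_line_nm(2, 3):.1f} nm")
```

This lands at about 656 nm, matching the red H-alpha line that any hydrogen discharge tube shows.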

These theories, though successful, were strictly phenomenological: during this time, there was no rigorous justification for quantization, aside, perhaps, from Henri Poincaré's discussion of Planck's theory in his 1912 paper Sur la théorie des quanta. They are collectively known as the old quantum theory.

The phrase "quantum physics" was first used in Johnston's Planck's Universe in Light of Modern Physics (1931). 

With decreasing temperature, the peak of the blackbody radiation curve shifts to longer wavelengths and also has lower intensities. The blackbody radiation curves (1862) at left are also compared with the early, classical limit model of Rayleigh and Jeans (1900) shown at right. The short wavelength side of the curves was already approximated in 1896 by the Wien distribution law.
 
Niels Bohr's 1913 quantum model of the atom, which incorporated an explanation of Johannes Rydberg's 1888 formula, Max Planck's 1900 quantum hypothesis, i.e. that atomic energy radiators have discrete energy values (ε = hν), J. J. Thomson's 1904 plum pudding model, Albert Einstein's 1905 light quanta postulate, and Ernest Rutherford's 1907 discovery of the atomic nucleus. Note that the electron does not travel along the black line when emitting a photon. It jumps, disappearing from the outer orbit and appearing in the inner one and cannot exist in the space between orbits 2 and 3.
 
In 1923, the French physicist Louis de Broglie put forward his theory of matter waves by stating that particles can exhibit wave characteristics and vice versa. This theory was for a single particle and derived from special relativity theory. Building on de Broglie's approach, modern quantum mechanics was born in 1925, when the German physicists Werner Heisenberg, Max Born, and Pascual Jordan developed matrix mechanics and the Austrian physicist Erwin Schrödinger invented wave mechanics and the non-relativistic Schrödinger equation as an approximation to the generalized case of de Broglie's theory. Schrödinger subsequently showed that the two approaches were equivalent.
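de Broglie's relation λ = h/p assigns a wavelength to any moving particle; for a slow electron (the speed below is chosen only for illustration):

```python
h = 6.626e-34    # Planck constant, J*s
m_e = 9.109e-31  # electron mass, kg

v = 1.0e6        # non-relativistic electron speed, m/s (illustrative)
wavelength = h / (m_e * v)   # de Broglie relation: lambda = h / p
print(f"de Broglie wavelength ~ {wavelength:.2e} m")
```

The result is sub-nanometre, comparable to atomic spacings in a crystal, which is why electron diffraction from crystals later confirmed the matter-wave picture.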

Heisenberg formulated his uncertainty principle in 1927, and the Copenhagen interpretation started to take shape at about the same time. Starting around 1927, Paul Dirac began the process of unifying quantum mechanics with special relativity by proposing the Dirac equation for the electron. The Dirac equation achieves the relativistic description of the wavefunction of an electron that Schrödinger failed to obtain. It predicts electron spin and led Dirac to predict the existence of the positron. He also pioneered the use of operator theory, including the influential bra–ket notation, as described in his famous 1930 textbook. During the same period, Hungarian polymath John von Neumann formulated the rigorous mathematical basis for quantum mechanics as the theory of linear operators on Hilbert spaces, as described in his likewise famous 1932 textbook. These, like many other works from the founding period, still stand, and remain widely used. 

The field of quantum chemistry was pioneered by physicists Walter Heitler and Fritz London, who published a study of the covalent bond of the hydrogen molecule in 1927. Quantum chemistry was subsequently developed by a large number of workers, including the American theoretical chemist Linus Pauling at Caltech and John C. Slater, into various theories such as molecular orbital theory and valence bond theory.

Beginning in 1927, researchers made attempts at applying quantum mechanics to fields instead of single particles, resulting in quantum field theories. Early workers in this area include P.A.M. Dirac, W. Pauli, V. Weisskopf, and P. Jordan. This area of research culminated in the formulation of quantum electrodynamics by R.P. Feynman, F. Dyson, J. Schwinger, and S. Tomonaga during the 1940s. Quantum electrodynamics describes a quantum theory of electrons, positrons, and the electromagnetic field, and served as a model for subsequent quantum field theories.

Feynman diagram of gluon radiation in quantum chromodynamics
 
The theory of quantum chromodynamics was formulated beginning in the early 1960s. The theory as we know it today was formulated by Politzer, Gross and Wilczek in 1975. 

Building on pioneering work by Schwinger, Higgs and Goldstone, the physicists Glashow, Weinberg and Salam independently showed how the weak nuclear force and quantum electrodynamics could be merged into a single electroweak force, for which they received the 1979 Nobel Prize in Physics.

In October 2018, physicists reported that quantum behavior can be explained with classical physics for a single particle, but not for multiple particles as in quantum entanglement and related nonlocality phenomena.


Crystal


A crystal of amethyst quartz
 
Microscopically, a single crystal has atoms in a near-perfect periodic arrangement; a polycrystal is composed of many microscopic crystals (called "crystallites" or "grains"); and an amorphous solid (such as glass) has no periodic arrangement even microscopically.
 
A crystal or crystalline solid is a solid material whose constituents (such as atoms, molecules, or ions) are arranged in a highly ordered microscopic structure, forming a crystal lattice that extends in all directions. In addition, macroscopic single crystals are usually identifiable by their geometrical shape, consisting of flat faces with specific, characteristic orientations. The scientific study of crystals and crystal formation is known as crystallography. The process of crystal formation via mechanisms of crystal growth is called crystallization or solidification.

The word crystal derives from the Ancient Greek word κρύσταλλος (krustallos), meaning both "ice" and "rock crystal", from κρύος (kruos), "icy cold, frost".

Examples of large crystals include snowflakes, diamonds, and table salt. Most inorganic solids are not crystals but polycrystals, i.e. many microscopic crystals fused together into a single solid. Examples of polycrystals include most metals, rocks, ceramics, and ice. A third category of solids is amorphous solids, where the atoms have no periodic structure whatsoever. Examples of amorphous solids include glass, wax, and many plastics.

Despite the name, lead crystal, crystal glass, and related products are not crystals, but rather types of glass, i.e. amorphous solids.

Crystals are often used in pseudoscientific practices such as crystal therapy, and, along with gemstones, are sometimes associated with spellwork in Wiccan beliefs and related religious movements.

Crystal structure (microscopic)

Halite (table salt, NaCl): Microscopic and macroscopic
 
Halite crystal (macroscopic)
Macroscopic (~16 cm) halite crystal. The right angles between crystal faces are due to the cubic symmetry of the atoms' arrangement.
 
Halite crystal (microscopic)
Microscopic structure of a halite crystal (purple: sodium ions; green: chloride ions). There is cubic symmetry in the atoms' arrangement.

The scientific definition of a "crystal" is based on the microscopic arrangement of atoms inside it, called the crystal structure. A crystal is a solid where the atoms form a periodic arrangement.

Not all solids are crystals. For example, when liquid water starts freezing, the phase change begins with small ice crystals that grow until they fuse, forming a polycrystalline structure. In the final block of ice, each of the small crystals (called "crystallites" or "grains") is a true crystal with a periodic arrangement of atoms, but the whole polycrystal does not have a periodic arrangement of atoms, because the periodic pattern is broken at the grain boundaries. Most macroscopic inorganic solids are polycrystalline, including almost all metals, ceramics, ice, rocks, etc. Solids that are neither crystalline nor polycrystalline, such as glass, are called amorphous solids, also called glassy, vitreous, or noncrystalline. These have no periodic order, even microscopically. There are distinct differences between crystalline solids and amorphous solids: most notably, the process of forming a glass does not release the latent heat of fusion, but forming a crystal does. 

A crystal structure (an arrangement of atoms in a crystal) is characterized by its unit cell, a small imaginary box containing one or more atoms in a specific spatial arrangement. The unit cells are stacked in three-dimensional space to form the crystal.

The symmetry of a crystal is constrained by the requirement that the unit cells stack perfectly with no gaps. There are 219 possible crystal symmetries, called crystallographic space groups. These are grouped into 7 crystal systems, such as cubic crystal system (where the crystals may form cubes or rectangular boxes, such as halite shown at right) or hexagonal crystal system (where the crystals may form hexagons, such as ordinary water ice).
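The seven crystal systems can be listed with a representative mineral for each. Halite and ordinary water ice come from this text; the other examples are common textbook choices, included only for illustration:

```python
# The seven crystal systems with example minerals. Halite (cubic) and
# ordinary water ice (hexagonal) are from the text; the rest are
# standard textbook examples, not taken from this article.
crystal_systems = {
    "cubic":        "halite (rock salt)",
    "hexagonal":    "ordinary water ice (Ice Ih)",
    "trigonal":     "quartz",
    "tetragonal":   "zircon",
    "orthorhombic": "olivine",
    "monoclinic":   "gypsum",
    "triclinic":    "microcline feldspar",
}

for system, example in crystal_systems.items():
    print(f"{system:12s} e.g. {example}")
```

All 219 space groups fall into exactly one of these seven systems, so the dictionary above partitions every possible crystal symmetry.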

Crystal faces and shapes

As a halite crystal is growing, new atoms can very easily attach to the parts of the surface with rough atomic-scale structure and many dangling bonds. Therefore, these parts of the crystal grow out very quickly (yellow arrows). Eventually, the whole surface consists of smooth, stable faces, where new atoms cannot as easily attach themselves.
 
Crystals are commonly recognized by their shape, consisting of flat faces with sharp angles. These shape characteristics are not necessary for a crystal—a crystal is scientifically defined by its microscopic atomic arrangement, not its macroscopic shape—but the characteristic macroscopic shape is often present and easy to see. 

Euhedral crystals are those with obvious, well-formed flat faces. Anhedral crystals do not, usually because the crystal is one grain in a polycrystalline solid.

The flat faces (also called facets) of a euhedral crystal are oriented in a specific way relative to the underlying atomic arrangement of the crystal: they are planes of relatively low Miller index. This occurs because some surface orientations are more stable than others (lower surface energy). As a crystal grows, new atoms attach easily to the rougher and less stable parts of the surface, but less easily to the flat, stable surfaces. Therefore, the flat surfaces tend to grow larger and smoother, until the whole crystal surface consists of these plane surfaces. (See diagram on right.) 

One of the oldest techniques in the science of crystallography consists of measuring the three-dimensional orientations of the faces of a crystal, and using them to infer the underlying crystal symmetry

A crystal's habit is its visible external shape. This is determined by the crystal structure (which restricts the possible facet orientations), the specific crystal chemistry and bonding (which may favor some facet types over others), and the conditions under which the crystal formed.

Occurrence in nature

Ice crystals
 
Fossil shell with calcite crystals

Rocks

By volume and weight, the largest concentrations of crystals in the Earth are part of its solid bedrock. Crystals found in rocks typically range in size from a fraction of a millimetre to several centimetres across, although exceptionally large crystals are occasionally found. As of 1999, the world's largest known naturally occurring crystal is a crystal of beryl from Malakialina, Madagascar, 18 m (59 ft) long and 3.5 m (11 ft) in diameter, and weighing 380,000 kg (840,000 lb).

Some crystals have formed by magmatic and metamorphic processes, giving origin to large masses of crystalline rock. The vast majority of igneous rocks are formed from molten magma and the degree of crystallization depends primarily on the conditions under which they solidified. Such rocks as granite, which have cooled very slowly and under great pressures, have completely crystallized; but many kinds of lava were poured out at the surface and cooled very rapidly, and in this latter group a small amount of amorphous or glassy matter is common. Other crystalline rocks, the metamorphic rocks such as marbles, mica-schists and quartzites, are recrystallized. This means that they were at first fragmental rocks like limestone, shale and sandstone and have never been in a molten condition nor entirely in solution, but the high temperature and pressure conditions of metamorphism have acted on them by erasing their original structures and inducing recrystallization in the solid state.

Other rock crystals have formed out of precipitation from fluids, commonly water, to form druses or quartz veins. The evaporites such as halite, gypsum and some limestones have been deposited from aqueous solution, mostly owing to evaporation in arid climates.

Ice

Water-based ice in the form of snow, sea ice and glaciers is a very common manifestation of crystalline or polycrystalline matter on Earth. A single snowflake is a single crystal or a collection of crystals, while an ice cube is a polycrystal.

Organigenic crystals

Many living organisms are able to produce crystals, for example calcite and aragonite in the case of most molluscs or hydroxylapatite in the case of vertebrates.

Polymorphism and allotropy

The same group of atoms can often solidify in many different ways. Polymorphism is the ability of a solid to exist in more than one crystal form. For example, water ice is ordinarily found in the hexagonal form Ice Ih, but can also exist as the cubic Ice Ic, the rhombohedral ice II, and many other forms. The different polymorphs are usually called different phases.

In addition, the same atoms may be able to form noncrystalline phases. For example, water can also form amorphous ice, while SiO2 can form both fused silica (an amorphous glass) and quartz (a crystal). Likewise, if a substance can form crystals, it can also form polycrystals.

For pure chemical elements, polymorphism is known as allotropy. For example, diamond and graphite are two crystalline forms of carbon, while amorphous carbon is a noncrystalline form. Polymorphs, despite having the same atoms, may have wildly different properties. For example, diamond is among the hardest substances known, while graphite is so soft that it is used as a lubricant. 

Polyamorphism is a similar phenomenon where the same atoms can exist in more than one amorphous solid form.

Crystallization

Vertical cooling crystallizer in a beet sugar factory.
 
Crystallization is the process of forming a crystalline structure from a fluid or from materials dissolved in a fluid. (More rarely, crystals may be deposited directly from gas; see thin-film deposition and epitaxy.) 

Crystallization is a complex and extensively-studied field, because depending on the conditions, a single fluid can solidify into many different possible forms. It can form a single crystal, perhaps with various possible phases, stoichiometries, impurities, defects, and habits. Or, it can form a polycrystal, with various possibilities for the size, arrangement, orientation, and phase of its grains. The final form of the solid is determined by the conditions under which the fluid is being solidified, such as the chemistry of the fluid, the ambient pressure, the temperature, and the speed with which all these parameters are changing.

Specific industrial techniques to produce large single crystals (called boules) include the Czochralski process and the Bridgman technique. Other less exotic methods of crystallization may be used, depending on the physical properties of the substance, including hydrothermal synthesis, sublimation, or simply solvent-based crystallization.

Large single crystals can be created by geological processes. For example, selenite crystals in excess of 10 meters are found in the Cave of the Crystals in Naica, Mexico. For more details on geological crystal formation, see above.

Crystals can also be formed by biological processes; see above. Conversely, some organisms have special techniques to prevent crystallization from occurring, such as antifreeze proteins.

Defects, impurities, and twinning

Two types of crystallographic defects. Top right: edge dislocation. Bottom right: screw dislocation.
 
An ideal crystal has every atom in a perfect, exactly repeating pattern. However, in reality, most crystalline materials have a variety of crystallographic defects, places where the crystal's pattern is interrupted. The types and structures of these defects may have a profound effect on the properties of the materials. 

A few examples of crystallographic defects include vacancy defects (an empty space where an atom should fit), interstitial defects (an extra atom squeezed in where it does not fit), and dislocations (see figure at right). Dislocations are especially important in materials science, because they help determine the mechanical strength of materials.

Another common type of crystallographic defect is an impurity, meaning that the "wrong" type of atom is present in a crystal. For example, a perfect crystal of diamond would only contain carbon atoms, but a real crystal might perhaps contain a few boron atoms as well. These boron impurities change the diamond's color to slightly blue. Likewise, the only difference between ruby and sapphire is the type of impurities present in a corundum crystal. 

Twinned pyrite crystal group.
 
In semiconductors, a special type of impurity, called a dopant, drastically changes the crystal's electrical properties. Semiconductor devices, such as transistors, are made possible largely by putting different semiconductor dopants into different places, in specific patterns. 

Twinning is a phenomenon somewhere between a crystallographic defect and a grain boundary. Like a grain boundary, a twin boundary has different crystal orientations on its two sides. But unlike a grain boundary, the orientations are not random, but related in a specific, mirror-image way. 

Mosaicity is a spread of crystal plane orientations. A mosaic crystal is modeled as consisting of smaller crystalline units that are somewhat misaligned with respect to each other.

Chemical bonds

In general, solids can be held together by various types of chemical bonds, such as metallic bonds, ionic bonds, covalent bonds, van der Waals bonds, and others. None of these are necessarily crystalline or non-crystalline. However, there are some general trends as follows. 

Metals are almost always polycrystalline, though there are exceptions like amorphous metal and single-crystal metals. The latter are grown synthetically. (A microscopically-small piece of metal may naturally form into a single crystal, but larger pieces generally do not.) Ionic compound materials are usually crystalline or polycrystalline. In practice, large salt crystals can be created by solidification of a molten fluid, or by crystallization out of a solution. Covalently bonded solids (sometimes called covalent network solids) are also very common, notable examples being diamond and quartz. Weak van der Waals forces also help hold together certain crystals, such as crystalline molecular solids, as well as the interlayer bonding in graphite. Polymer materials generally will form crystalline regions, but the lengths of the molecules usually prevent complete crystallization—and sometimes polymers are completely amorphous.

Quasicrystals

The material holmium–magnesium–zinc (Ho–Mg–Zn) forms quasicrystals, which can take on the macroscopic shape of a dodecahedron. (Only a quasicrystal, not a normal crystal, can take this shape.) The edges are 2 mm long.

A quasicrystal consists of arrays of atoms that are ordered but not strictly periodic. They have many attributes in common with ordinary crystals, such as displaying a discrete pattern in x-ray diffraction, and the ability to form shapes with smooth, flat faces.

Quasicrystals are most famous for their ability to show five-fold symmetry, which is impossible for an ordinary periodic crystal.
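The impossibility of five-fold symmetry in a periodic crystal is the crystallographic restriction theorem: a rotation by 2π/n can map a lattice of discrete translations to itself only if its matrix has an integer trace, 2cos(2π/n). A short sketch of this check (the function name is chosen for this example):

```python
import math

# Crystallographic restriction: a rotation by 2*pi/n can map a periodic
# lattice onto itself only if its trace, 2*cos(2*pi/n), is an integer.
def allowed_in_periodic_lattice(n):
    trace = 2 * math.cos(2 * math.pi / n)
    return abs(trace - round(trace)) < 1e-9

allowed = [n for n in range(1, 13) if allowed_in_periodic_lattice(n)]
print(allowed)  # [1, 2, 3, 4, 6] -- five-fold (n = 5) is excluded
```

Quasicrystals evade this restriction precisely because they are ordered without being periodic.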

The International Union of Crystallography has redefined the term "crystal" to include both ordinary periodic crystals and quasicrystals ("any solid having an essentially discrete diffraction diagram"). 

Quasicrystals, first discovered in 1982, are quite rare in practice. Only about 100 solids are known to form quasicrystals, compared to about 400,000 periodic crystals known in 2004. The 2011 Nobel Prize in Chemistry was awarded to Dan Shechtman for the discovery of quasicrystals.

Special properties from anisotropy

Crystals can have certain special electrical, optical, and mechanical properties that glass and polycrystals normally cannot. These properties are related to the anisotropy of the crystal, i.e. the lack of rotational symmetry in its atomic arrangement. One such property is the piezoelectric effect, where a voltage across the crystal can shrink or stretch it. Another is birefringence, where a double image appears when looking through a crystal. Moreover, various properties of a crystal, including electrical conductivity, electrical permittivity, and Young's modulus, may be different in different directions in a crystal. For example, graphite crystals consist of a stack of sheets, and although each individual sheet is mechanically very strong, the sheets are rather loosely bound to each other. Therefore, the mechanical strength of the material is quite different depending on the direction of stress.
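Birefringence can be made quantitative for a uniaxial crystal: light polarized along the extraordinary direction sees a refractive index that varies with the angle between the propagation direction and the optic axis, via 1/n(θ)² = cos²θ/n_o² + sin²θ/n_e². A minimal sketch, using the textbook ordinary and extraordinary indices of calcite at 589 nm (the function name and defaults are illustrative):

```python
import math

def extraordinary_index(theta_deg, n_o=1.658, n_e=1.486):
    """Effective refractive index seen by the extraordinary ray in a
    uniaxial crystal, for propagation at angle theta to the optic axis.
    Defaults are the textbook values for calcite at 589 nm."""
    t = math.radians(theta_deg)
    inv_n_sq = (math.cos(t) / n_o) ** 2 + (math.sin(t) / n_e) ** 2
    return 1 / math.sqrt(inv_n_sq)

# Along the optic axis there is no birefringence; perpendicular to it,
# the ordinary/extraordinary index difference (and the double image) is maximal.
print(round(extraordinary_index(0), 3))   # 1.658
print(round(extraordinary_index(90), 3))  # 1.486
```

The same pattern, a scalar property replaced by a direction-dependent one, applies to the conductivity, permittivity, and elastic moduli mentioned above.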

Not all crystals have all of these properties. Conversely, these properties are not quite exclusive to crystals. They can appear in glasses or polycrystals that have been made anisotropic by working or stress—for example, stress-induced birefringence.

Crystallography

Crystallography is the science of measuring the crystal structure (in other words, the atomic arrangement) of a crystal. One widely used crystallography technique is X-ray diffraction. Large numbers of known crystal structures are stored in crystallographic databases.
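X-ray diffraction rests on Bragg's law, nλ = 2d·sin(θ): a crystal reflects X-rays strongly only at angles where waves scattered from successive lattice planes interfere constructively. A small sketch of solving for the diffraction angle (the function name and the example plane spacing are illustrative; 0.154 nm is the standard Cu Kα wavelength):

```python
import math

def bragg_angle(wavelength_nm, d_spacing_nm, order=1):
    """Bragg angle theta in degrees from n*lambda = 2*d*sin(theta)."""
    s = order * wavelength_nm / (2 * d_spacing_nm)
    if s > 1:
        raise ValueError("no diffraction: n*lambda exceeds 2d")
    return math.degrees(math.asin(s))

# Cu K-alpha X-rays (0.154 nm) diffracting off planes spaced 0.2 nm apart:
print(round(bragg_angle(0.154, 0.2), 2))
```

Measuring these angles for many reflections, and inverting the relation, is how the plane spacings, and ultimately the full atomic arrangement, are recovered.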

Image gallery

  • Hoar frost: A type of ice crystal (picture taken from a distance of about 5 cm).


  • Gallium, a metal that easily forms large crystals.

  • An apatite crystal sits front and center on cherry-red rhodochrosite rhombs, purple fluorite cubes, quartz and a dusting of brass-yellow pyrite cubes.

  • Boules of silicon, like this one, are an important type of industrially-produced single crystal.

  • A specimen consisting of a bornite-coated chalcopyrite crystal nestled in a bed of clear quartz crystals and lustrous pyrite crystals. The bornite-coated crystal is up to 1.5 cm across.
