
Tuesday, November 11, 2025

Synthetic diamond

From Wikipedia, the free encyclopedia
Six non-faceted diamond crystals of 2–3 mm (0.079–0.118 in) size; they are yellow, green-yellow, green-blue, light-blue, light-blue and dark blue.
Lab-grown diamonds of various colors grown by the high pressure, high temperature (HPHT) technique

A synthetic diamond or laboratory-grown diamond (LGD), also called a lab-grown, laboratory-created, man-made, artisan-created, artificial, or cultured diamond, is a diamond that is produced in a controlled technological process, in contrast to a naturally formed diamond, which is created through geological processes and obtained by mining. Unlike diamond simulants (imitations of diamond made of superficially similar non-diamond materials), synthetic diamonds are composed of the same material as naturally formed diamonds—pure carbon crystallized in an isotropic 3D form—and have identical chemical and physical properties.

The maximal size of synthetic diamonds has increased dramatically in the 21st century. Before 2010, most synthetic diamonds were smaller than half a carat. Improvements in technology, plus the availability of larger diamond substrates, have led to synthetic diamonds up to 125 carats in 2025.

In 1797, English chemist Smithson Tennant demonstrated that diamonds are a form of carbon, and between 1879 and 1928, numerous claims of diamond synthesis were reported; most of these attempts were carefully analyzed, but none were confirmed. In the 1940s, systematic research of diamond creation began in the United States, Sweden and the Soviet Union, which culminated in the first reproducible synthesis in 1953. Further research activity led to the development of high pressure high temperature (HPHT) and chemical vapor deposition (CVD) methods of diamond production. These two processes still dominate synthetic diamond production. A third method in which nanometer-sized diamond grains are created in a detonation of carbon-containing explosives, known as detonation synthesis, entered the market in the late 1990s.

Synthetic diamonds of different shades, reflecting different levels of nitrogen impurities. Yellow diamonds result from a higher nitrogen content in the carbon lattice, while colourless diamonds come only from pure carbon. The smallest yellow diamond is around 0.3 mm.

The properties of synthetic diamonds depend on the manufacturing process. Some have properties such as hardness, thermal conductivity and electron mobility that are superior to those of most naturally formed diamonds. Synthetic diamond is widely used in abrasives, in cutting and polishing tools and in heat sinks. Electronic applications of synthetic diamond are being developed, including high-power switches at power stations, high-frequency field-effect transistors and light-emitting diodes (LEDs). Synthetic diamond detectors of ultraviolet (UV) light and of high-energy particles are used at high-energy research facilities and are available commercially. Due to its unique combination of thermal and chemical stability, low thermal expansion and high optical transparency in a wide spectral range, synthetic diamond is becoming the most popular material for optical windows in high-power CO2 lasers and gyrotrons. It is estimated that 98% of industrial-grade diamond demand is supplied with synthetic diamonds.

Both CVD and HPHT diamonds can be cut into gems, and various colors can be produced: clear white, yellow, brown, blue, green and orange. The advent of synthetic gems on the market created major concerns in the diamond trading business, as a result of which special spectroscopic devices and techniques have been developed to distinguish synthetic from natural diamonds.

History

Moissan trying to create synthetic diamonds using an electric arc furnace

In the early stages of diamond synthesis, the founding figure of modern chemistry, Antoine Lavoisier, played a significant role. His groundbreaking discovery that a diamond's crystal lattice is similar to carbon's crystal structure paved the way for initial attempts to produce diamonds. After it was discovered that diamond was pure carbon in 1797, many attempts were made to convert various cheap forms of carbon into diamond. The earliest successes were reported by James Ballantyne Hannay in 1879 and by Ferdinand Frédéric Henri Moissan in 1893. Their method involved heating charcoal at up to 3,500 °C (6,330 °F) with iron inside a carbon crucible in a furnace. Whereas Hannay used a flame-heated tube, Moissan applied his newly developed electric arc furnace, in which an electric arc was struck between carbon rods inside blocks of lime. The molten iron was then rapidly cooled by immersion in water. The contraction generated by the cooling supposedly produced the high pressure required to transform graphite into diamond. Moissan published his work in a series of articles in the 1890s.

Many other scientists tried to replicate his experiments. Sir William Crookes claimed success in 1909. Otto Ruff claimed in 1917 to have produced diamonds up to 7 mm (0.28 in) in diameter, but later retracted his statement. In 1926, Dr. J. Willard Hershey of McPherson College replicated Moissan's and Ruff's experiments, producing a synthetic diamond. Despite the claims of Moissan, Ruff, and Hershey, other experimenters were unable to reproduce their synthesis.

The most definitive replication attempts were performed by Sir Charles Algernon Parsons. A prominent scientist and engineer known for his invention of the steam turbine, he spent about 40 years (1882–1922) and a considerable part of his fortune trying to reproduce the experiments of Moissan and Hannay, but also adapted processes of his own. Parsons was known for his painstakingly accurate approach and methodical record keeping; all his resulting samples were preserved for further analysis by an independent party. He wrote a number of articles—some of the earliest on HPHT diamond—in which he claimed to have produced small diamonds. However, in 1928, he authorized Dr. C. H. Desch to publish an article in which he stated his belief that no synthetic diamonds (including those of Moissan and others) had been produced up to that date. He suggested that most diamonds that had been produced up to that point were likely synthetic spinel.

ASEA

First synthetic diamonds by ASEA 1953

The first known (but initially not reported) diamond synthesis was achieved on February 16, 1953, in Stockholm by ASEA (Allmänna Svenska Elektriska Aktiebolaget), Sweden's major electrical equipment manufacturing company. Starting in 1942, ASEA employed a team of five scientists and engineers as part of a top-secret diamond-making project code-named QUINTUS. The team used a bulky split-sphere apparatus designed by Baltzar von Platen and Anders Kämpe. Pressure was maintained within the device at an estimated 8.4 GPa (1,220,000 psi) and a temperature of 2,400 °C (4,350 °F) for an hour. A few small diamonds were produced, but not of gem quality or size.

Because of questions about the patent process and the reasonable belief that no other serious diamond synthesis research was under way anywhere else, the board of ASEA opted against publicity and patent applications. Thus the announcement of the ASEA results occurred shortly after the GE press conference of February 15, 1955.

GE diamond project

A 3-meter tall press
A belt press produced in the 1980s by KOBELCO

In 1941, an agreement was made between the General Electric (GE), Norton and Carborundum companies to further develop diamond synthesis. They were able to heat carbon to about 3,000 °C (5,430 °F) under a pressure of 3.5 gigapascals (510,000 psi) for a few seconds. Soon thereafter, the Second World War interrupted the project. It was resumed in 1951 at the Schenectady Laboratories of GE, and a high-pressure diamond group was formed with Francis P. Bundy and H. M. Strong. Tracy Hall and others joined the project later.

The Schenectady group improved on the anvils designed by Percy Bridgman, who received a Nobel Prize in Physics for his work in 1946. Bundy and Strong made the first improvements, then more were made by Hall. The GE team used tungsten carbide anvils within a hydraulic press to squeeze the carbonaceous sample held in a catlinite container, the finished grit being squeezed out of the container into a gasket. The team recorded diamond synthesis on one occasion, but the experiment could not be reproduced because of uncertain synthesis conditions, and the diamond was later shown to have been a natural diamond used as a seed.

Hall achieved the first commercially successful synthesis of diamond on December 16, 1954, and this was announced on February 15, 1955. His breakthrough came when he used a press with a hardened steel toroidal "belt" strained to its elastic limit wrapped around the sample, producing pressures above 10 GPa (1,500,000 psi) and temperatures above 2,000 °C (3,630 °F). The press used a pyrophyllite container in which graphite was dissolved within molten nickel, cobalt or iron. Those metals acted as a "solvent-catalyst", which both dissolved carbon and accelerated its conversion into diamond. The largest diamond he produced was 0.15 mm (0.0059 in) across; it was too small and visually imperfect for jewelry, but usable in industrial abrasives. Hall's co-workers were able to replicate his work, and the discovery was published in the major journal Nature. He was the first person to grow a synthetic diamond with a reproducible, verifiable and well-documented process. He left GE in 1955, and three years later developed a new apparatus for the synthesis of diamond—a tetrahedral press with four anvils—to avoid violating a U.S. Department of Commerce secrecy order on the GE patent applications.

Further development

A diamond scalpel consisting of a yellow diamond blade attached to a pen-shaped holder
A scalpel with single-crystal synthetic diamond blade

Synthetic gem-quality diamond crystals were first produced in 1970 by GE, then reported in 1971. The first successes used a pyrophyllite tube seeded at each end with thin pieces of diamond. The graphite feed material was placed in the center and the metal solvent (nickel) between the graphite and the seeds. The container was heated and the pressure was raised to about 5.5 GPa (800,000 psi). The crystals grow as dissolved carbon migrates from the center to the ends of the tube, and extending the length of the process produces larger crystals. Initially, a week-long growth process produced gem-quality stones of around 5 mm (0.20 in) (1 carat or 0.2 g), and the process conditions had to be as stable as possible. The graphite feed was soon replaced by diamond grit because that allowed much better control of the shape of the final crystal.

The first gem-quality stones were always yellow to brown in color because of contamination with nitrogen. Inclusions were common, especially "plate-like" ones from the nickel. Removing all nitrogen from the process by adding aluminum or titanium produced colorless "white" stones, and removing the nitrogen and adding boron produced blue ones. Removing nitrogen also slowed the growth process and reduced the crystalline quality, so the process was normally run with nitrogen present.

Although the GE stones and natural diamonds were chemically identical, their physical properties were not the same. The colorless stones produced strong fluorescence and phosphorescence under short-wavelength ultraviolet light, but were inert under long-wave UV. Among natural diamonds, only the rarer blue gems exhibit these properties. Unlike natural diamonds, all the GE stones showed strong yellow fluorescence under X-rays. The De Beers Diamond Research Laboratory has grown stones of up to 25 carats (5.0 g) for research purposes. Stable HPHT conditions were kept for six weeks to grow high-quality diamonds of this size. For economic reasons, the growth of most synthetic diamonds is terminated when they reach a mass of 1 to 1.5 carats (200 to 300 mg).

In the 1950s, research started in the Soviet Union and the US on the growth of diamond by pyrolysis of hydrocarbon gases at the relatively low temperature of 800 °C (1,470 °F). This low-pressure process is known as chemical vapor deposition (CVD). William G. Eversole reportedly achieved vapor deposition of diamond over diamond substrate in 1953, but it was not reported until 1962. Diamond film deposition was independently reproduced by Angus and coworkers in 1968 and by Deryagin and Fedoseev in 1970. Whereas Eversole and Angus used large, expensive, single-crystal diamonds as substrates, Deryagin and Fedoseev succeeded in making diamond films on non-diamond materials (silicon and metals), which led to massive research on inexpensive diamond coatings in the 1980s.

From 2013, reports emerged of a rise in undisclosed synthetic melee diamonds (small round diamonds typically used to frame a central diamond or embellish a band) being found in set jewelry and within diamond parcels sold in the trade. Because diamond melee is relatively inexpensive and techniques for efficiently screening large quantities of it are not universally known, not all dealers have made an effort to test diamond melee to correctly identify whether it is of natural or synthetic origin. However, international laboratories are now beginning to tackle the issue head-on, with significant improvements in synthetic melee identification being made.

Manufacturing technologies

There are several methods used to produce synthetic diamonds. The original method uses high pressure and high temperature (HPHT) and is still widely used because of its relatively low cost. The process involves large presses that can weigh hundreds of tons to produce a pressure of 5 GPa (730,000 psi) at 1,500 °C (2,730 °F). The second method, using chemical vapor deposition (CVD), creates a carbon plasma over a substrate onto which the carbon atoms deposit to form diamond. Other methods include explosive formation (forming detonation nanodiamonds) and sonication of graphite solutions.
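The pressure and temperature figures quoted above can be sanity-checked with a short unit-conversion sketch. The conversion factors below are standard physical constants, not values taken from the article:

```python
# Sanity check of the HPHT figures quoted above: convert the SI values
# to the imperial equivalents given in parentheses in the text.

PA_PER_PSI = 6894.757  # pascals per pound per square inch (standard factor)

def gpa_to_psi(gpa: float) -> float:
    """Convert gigapascals to pounds per square inch."""
    return gpa * 1e9 / PA_PER_PSI

def celsius_to_fahrenheit(c: float) -> float:
    """Convert degrees Celsius to degrees Fahrenheit."""
    return c * 9 / 5 + 32

print(round(gpa_to_psi(5)))          # ~725,000 psi (quoted as 730,000 psi)
print(celsius_to_fahrenheit(1500))   # 2732 °F (quoted as 2,730 °F)
```

The small mismatch with the quoted 730,000 psi simply reflects rounding in the article's figures.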

High pressure, high temperature

A schematic drawing of a vertical cross section through a press setup. The drawing illustrates how the central unit, held by dies on its sides, is vertically compressed by two anvils.
Schematic of a belt press

In the HPHT method, there are three main press designs used to supply the pressure and temperature necessary to produce synthetic diamond: the belt press, the cubic press and the split-sphere (BARS) press. Diamond seeds are placed at the bottom of the press. The internal part of the press is heated above 1,400 °C (2,550 °F) and melts the solvent metal. The molten metal dissolves the high purity carbon source, which is then transported to the small diamond seeds and precipitates, forming a large synthetic diamond.

The original GE invention by Tracy Hall uses the belt press wherein the upper and lower anvils supply the pressure load to a cylindrical inner cell. This internal pressure is confined radially by a belt of pre-stressed steel bands. The anvils also serve as electrodes providing electric current to the compressed cell. A variation of the belt press uses hydraulic pressure, rather than steel belts, to confine the internal pressure. Belt presses are still used today, but they are built on a much larger scale than those of the original design.

The second type of press design is the cubic press. A cubic press has six anvils which provide pressure simultaneously onto all faces of a cube-shaped volume. The first multi-anvil press design was a tetrahedral press, using four anvils to converge upon a tetrahedron-shaped volume. The cubic press was created shortly thereafter to increase the volume to which pressure could be applied. A cubic press is typically smaller than a belt press and can more rapidly achieve the pressure and temperature necessary to create synthetic diamond. However, cubic presses cannot be easily scaled up to larger volumes: the pressurized volume can be increased by using larger anvils, but this also increases the amount of force needed on the anvils to achieve the same pressure. An alternative is to decrease the surface area to volume ratio of the pressurized volume, by using more anvils to converge upon a higher-order platonic solid, such as a dodecahedron. However, such a press would be complex and difficult to manufacture.
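The scaling limitation described above follows from force = pressure × area: at a fixed target pressure, the force each anvil must deliver grows with the square of the anvil face dimension. A brief sketch, using illustrative anvil sizes that are assumptions rather than real press specifications:

```python
# Why cubic presses are hard to scale up: at constant pressure, doubling
# the anvil face dimension quadruples the force each anvil must supply.
# The 20 mm / 40 mm faces below are illustrative values only.

def anvil_force_newtons(pressure_gpa: float, face_side_mm: float) -> float:
    """Force on one square anvil face needed to sustain a given pressure."""
    area_m2 = (face_side_mm * 1e-3) ** 2
    return pressure_gpa * 1e9 * area_m2

small = anvil_force_newtons(5, 20)   # 20 mm square face at 5 GPa
large = anvil_force_newtons(5, 40)   # doubled face dimension
print(small, large, large / small)   # the force ratio is ~4
```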

A schematic drawing of a vertical cross-section through a BARS press: the synthesis capsule is surrounded by four tungsten carbide inner anvils. Those inner anvils are compressed by four outer steel anvils. The outer anvils are held in a disk barrel and are immersed in oil. A rubber diaphragm is placed between the disk barrel and the outer anvils to prevent oil from leaking.
Schematic of a BARS system

The BARS apparatus is claimed to be the most compact, efficient, and economical of all the diamond-producing presses. In the center of a BARS device, there is a ceramic cylindrical "synthesis capsule" about 2 cm3 (0.12 cu in) in volume. The cell is placed into a cube of pressure-transmitting material, such as pyrophyllite ceramics, which is pressed by inner anvils made from cemented carbide (e.g., tungsten carbide or VK10 hard alloy). The outer octahedral cavity is pressed by eight steel outer anvils. After mounting, the whole assembly is locked in a disc-type barrel with a diameter of about 1 m (3 ft 3 in). The barrel is filled with oil, which pressurizes upon heating, and the oil pressure is transferred to the central cell. The synthesis capsule is heated by a coaxial graphite heater, and the temperature is measured with a thermocouple.

Chemical vapor deposition

Free-standing single-crystal CVD diamond disc

Chemical vapor deposition is a method by which diamond can be grown from a hydrocarbon gas mixture. Since the early 1980s, this method has been the subject of intensive worldwide research. Whereas the mass production of high-quality diamond crystals makes the HPHT process the more suitable choice for industrial applications, the flexibility and simplicity of CVD setups explain the popularity of CVD growth in laboratory research. The advantages of CVD diamond growth include the ability to grow diamond over large areas and on various substrates, and the fine control over the chemical impurities and thus the properties of the diamond produced. Unlike HPHT, the CVD process does not require high pressures, as the growth typically occurs at pressures under 27 kPa (3.9 psi).

The CVD growth involves substrate preparation, feeding varying amounts of gases into a chamber and energizing them. The substrate preparation includes choosing an appropriate material and its crystallographic orientation; cleaning it, often with a diamond powder to abrade a non-diamond substrate; and optimizing the substrate temperature (about 800 °C (1,470 °F)) during the growth through a series of test runs. Moreover, optimizing the gas mixture composition and flow rates is paramount to ensure uniform and high-quality diamond growth. The gases always include a carbon source, typically methane, and hydrogen with a typical ratio of 1:99. Hydrogen is essential because it selectively etches off non-diamond carbon. The gases are ionized into chemically active radicals in the growth chamber using microwave power, a hot filament, an arc discharge, a welding torch, a laser, an electron beam, or other means.
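As a minimal illustration of the 1:99 methane-to-hydrogen ratio mentioned above, the sketch below splits a total feed flow into its two components. The total flow value and the sccm unit are illustrative assumptions, not process recommendations:

```python
# Split a total CVD gas feed into methane and hydrogen using the
# typical 1:99 ratio described in the text. The 500 sccm total is
# an assumed example value, not a recipe.

def cvd_flows(total_sccm: float, ch4_fraction: float = 0.01):
    """Return (methane, hydrogen) flows for a given total flow."""
    ch4 = total_sccm * ch4_fraction
    h2 = total_sccm - ch4
    return ch4, h2

ch4, h2 = cvd_flows(500)
print(ch4, h2)   # 5.0 sccm CH4, 495.0 sccm H2
```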

During the growth, the chamber materials are etched off by the plasma and can incorporate into the growing diamond. In particular, CVD diamond is often contaminated by silicon originating from the silica windows of the growth chamber or from the silicon substrate. Therefore, silica windows are either avoided or moved away from the substrate. Boron-containing species in the chamber, even at very low trace levels, also make the chamber unsuitable for the growth of pure diamond.

Detonation of explosives

An image resembling a cluster of grapes, where the cluster consists of nearly spherical particles of 5 nm (2.0×10−7 in) diameter
Electron micrograph (TEM) of detonation nanodiamond

Diamond nanocrystals (5 nm (2.0×10−7 in) in diameter) can be formed by detonating certain carbon-containing explosives in a metal chamber. These are called "detonation nanodiamonds". During the explosion, the pressure and temperature in the chamber become high enough to convert the carbon of the explosives into diamond. Being immersed in water, the chamber cools rapidly after the explosion, suppressing conversion of newly produced diamond into more stable graphite. In a variation of this technique, a metal tube filled with graphite powder is placed in the detonation chamber. The explosion heats and compresses the graphite to an extent sufficient for its conversion into diamond. The product is always rich in graphite and other non-diamond carbon forms, and requires prolonged boiling in hot nitric acid (about 1 day at 250 °C (482 °F)) to dissolve them. The recovered nanodiamond powder is used primarily in polishing applications. It is mainly produced in China, Russia and Belarus, and started reaching the market in bulk quantities by the early 2000s.

Ultrasound cavitation

Micron-sized diamond crystals can be synthesized from a suspension of graphite in organic liquid at atmospheric pressure and room temperature using ultrasonic cavitation. The diamond yield is about 10% of the initial graphite weight. The estimated cost of diamond produced by this method is comparable to that of the HPHT method but the crystalline perfection of the product is significantly worse for the ultrasonic synthesis. This technique requires relatively simple equipment and procedures, and has been reported by two research groups, but had no industrial use as of 2008. Numerous process parameters, such as preparation of the initial graphite powder, the choice of ultrasonic power, synthesis time and the solvent, were not optimized, leaving a window for potential improvement of the efficiency and reduction of the cost of the ultrasonic synthesis.

Crystallization inside liquid metal

In 2024, scientists announced a method that utilizes injecting methane and hydrogen gases onto a liquid metal alloy of gallium, iron, nickel and silicon (77.25/11.00/11.00/0.25 ratio) at approximately 1,025 °C to crystallize diamond at 1 atmosphere of pressure. The crystallization is a 'seedless' process, which further separates it from conventional high-pressure and high-temperature or chemical vapor deposition methods. Injection of methane and hydrogen results in a diamond nucleus after around 15 minutes and eventually a continuous diamond film after around 150 minutes.
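As a worked example of the stated 77.25/11.00/11.00/0.25 Ga/Fe/Ni/Si ratio, the sketch below computes component masses for a hypothetical 100 g melt. Note that the stated parts sum to 99.5, so the values are normalized; the 100 g batch size is an assumption for illustration:

```python
# Component masses for the liquid-metal alloy ratio given in the text
# (Ga/Fe/Ni/Si = 77.25/11.00/11.00/0.25). The parts sum to 99.5, so they
# are normalized before scaling to the (assumed) 100 g batch.

RATIO = {"Ga": 77.25, "Fe": 11.00, "Ni": 11.00, "Si": 0.25}

def alloy_masses(total_g: float) -> dict:
    """Scale the ratio parts to a total batch mass in grams."""
    total_parts = sum(RATIO.values())  # 99.5 parts
    return {el: total_g * parts / total_parts for el, parts in RATIO.items()}

masses = alloy_masses(100)
print(round(masses["Ga"], 2))  # ~77.64 g of gallium after normalization
```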

Properties

Traditionally, the absence of crystal flaws is considered to be the most important quality of a diamond. Purity and high crystalline perfection make diamonds transparent and clear, while hardness, optical dispersion (luster) and chemical stability (combined with marketing) make diamond a popular gemstone. High thermal conductivity is also important for technical applications. Whereas high optical dispersion is an intrinsic property of all diamonds, their other properties vary depending on how the diamond was created.

Crystallinity

Diamond can be one single, continuous crystal or it can be made up of many smaller crystals (polycrystal). Large, clear and transparent single-crystal diamonds are typically used as gemstones. Polycrystalline diamond (PCD) consists of numerous small grains, which are easily seen by the naked eye through strong light absorption and scattering; it is unsuitable for gems and is used for industrial applications such as mining and cutting tools. Polycrystalline diamond is often described by the average size (or grain size) of the crystals that make it up. Grain sizes range from nanometers to hundreds of micrometers, usually referred to as "nanocrystalline" and "microcrystalline" diamond, respectively.

Hardness

The hardness of diamond is 10 on the Mohs scale of mineral hardness, the hardest known material on this scale. Diamond is also the hardest known natural material as measured by resistance to indentation. The hardness of synthetic diamond depends on its purity, crystalline perfection and orientation: hardness is higher for flawless, pure crystals oriented along the [111] direction (the longest diagonal of the cubic diamond lattice). Nanocrystalline diamond produced through CVD diamond growth can have a hardness ranging from 30% to 75% of that of single-crystal diamond, and the hardness can be controlled for specific applications. Some synthetic single-crystal diamonds and HPHT nanocrystalline diamonds (see hyperdiamond) are harder than any known natural diamond.

Impurities and inclusions

Every diamond contains atoms other than carbon in concentrations detectable by analytical techniques. Those atoms can aggregate into macroscopic phases called inclusions. Impurities are generally avoided, but can be introduced intentionally as a way to control certain properties of the diamond. Growth processes of synthetic diamond, using solvent-catalysts, generally lead to formation of a number of impurity-related complex centers, involving transition metal atoms (such as nickel, cobalt or iron), which affect the electronic properties of the material.

For instance, pure diamond is an electrical insulator, but diamond with boron added is an electrical conductor (and, in some cases, a superconductor), allowing it to be used in electronic applications. Nitrogen impurities hinder movement of lattice dislocations (defects within the crystal structure) and put the lattice under compressive stress, thereby increasing hardness and toughness.

Thermal conductivity

The thermal conductivity of CVD diamond ranges from tens of W/(m·K) to more than 2,000 W/(m·K), depending on defects and grain-boundary structure. As CVD diamond grows, the grains coarsen with increasing film thickness, producing a gradient in thermal conductivity along the thickness of the film.

Unlike most electrical insulators, pure diamond is an excellent conductor of heat because of the strong covalent bonding within the crystal. The thermal conductivity of pure diamond is the highest of any known solid. Single crystals of synthetic diamond enriched in 12C (99.9%), isotopically pure diamond, have the highest thermal conductivity of any material, 30 W/(cm·K) at room temperature, 7.5 times higher than that of copper. Natural diamond's conductivity is reduced by the 1.1% of 13C naturally present, which acts as an inhomogeneity in the lattice.
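The conductivity figures above can be checked numerically: 30 W/(cm·K) converts to 3,000 W/(m·K), and dividing by a textbook room-temperature value for copper (about 400 W/(m·K), an assumption not taken from this article) reproduces the quoted factor of 7.5:

```python
# Unit conversion and ratio check for the thermal-conductivity figures.
# Copper's ~400 W/(m·K) is a standard textbook value assumed here.

diamond_w_per_cm_k = 30
diamond_w_per_m_k = diamond_w_per_cm_k * 100   # 1 m = 100 cm
copper_w_per_m_k = 400

print(diamond_w_per_m_k)                       # 3000 W/(m·K)
print(diamond_w_per_m_k / copper_w_per_m_k)    # 7.5, matching the text
```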

Jewelers and gemologists take advantage of diamond's thermal conductivity, employing electronic thermal probes to separate diamonds from their imitations. These probes consist of a pair of battery-powered thermistors mounted in a fine copper tip. One thermistor functions as a heating device while the other measures the temperature of the copper tip: if the stone being tested is a diamond, it will conduct the tip's thermal energy rapidly enough to produce a measurable temperature drop. This test takes about 2–3 seconds.

Industrial applications

Machining and cutting tools

A polished metal slab embedded with small diamonds
Diamonds in an angle grinder blade

Most industrial applications of synthetic diamond have long been associated with its hardness; this property makes diamond the ideal material for machine tools and cutting tools. As the hardest known naturally occurring material, diamond can be used to polish, cut, or wear away any material, including other diamonds. Common industrial applications of this ability include diamond-tipped drill bits and saws, and the use of diamond powder as an abrasive. These are by far the largest industrial applications of synthetic diamond. While natural diamond is also used for these purposes, synthetic HPHT diamond is more popular, mostly because of the better reproducibility of its mechanical properties. Diamond is not suitable for machining ferrous alloys at high speeds, as carbon is soluble in iron at the high temperatures created by high-speed machining, leading to greatly increased wear on diamond tools compared to alternatives.

The usual form of diamond in cutting tools is micron-sized grains dispersed in a metal matrix (usually cobalt) sintered onto the tool. This is typically referred to in industry as polycrystalline diamond (PCD). PCD-tipped tools can be found in mining and cutting applications. For the past fifteen years, work has been done to coat metallic tools with CVD diamond, and though the work shows promise, it has not significantly replaced traditional PCD tools.

Thermal conductor

Most materials with high thermal conductivity are also electrically conductive, such as metals. In contrast, pure synthetic diamond has high thermal conductivity, but negligible electrical conductivity. This combination is invaluable for electronics where diamond is used as a heat spreader for high-power laser diodes, laser arrays and high-power transistors. Efficient heat dissipation prolongs the lifetime of those electronic devices, and the devices' high replacement costs justify the use of efficient, though relatively expensive, diamond heat sinks. In semiconductor technology, synthetic diamond heat spreaders prevent silicon and other semiconducting devices from overheating.

Optical material

Diamond is hard, chemically inert, and has high thermal conductivity and a low coefficient of thermal expansion. These properties make diamond superior to any other existing window material used for transmitting infrared and microwave radiation. Therefore, synthetic diamond is starting to replace zinc selenide as the output window of high-power CO2 lasers and gyrotrons. Those synthetic polycrystalline diamond windows are shaped as disks of large diameter (about 10 cm for gyrotrons) and small thickness (to reduce absorption) and can only be produced with the CVD technique. Single-crystal slabs up to approximately 10 mm in length are becoming increasingly important in several areas of optics, including heat spreaders inside laser cavities, diffractive optics and the optical gain medium in Raman lasers. Recent advances in the HPHT and CVD synthesis techniques have improved the purity and crystallographic perfection of single-crystalline diamond enough to replace silicon as a diffraction grating and window material in high-power radiation sources, such as synchrotrons. Both the CVD and HPHT processes are also used to create designer optically transparent diamond anvils as a tool for measuring electric and magnetic properties of materials at ultra-high pressures using a diamond anvil cell.

Electronics

Synthetic diamond has potential uses as a semiconductor, because it can be doped with impurities like boron and phosphorus. Since these elements contain one more or one fewer valence electron than carbon, they turn synthetic diamond into a p-type or n-type semiconductor. Making a p–n junction by sequential doping of synthetic diamond with boron and phosphorus produces light-emitting diodes (LEDs) emitting UV light at 235 nm. Another useful property of synthetic diamond for electronics is its high carrier mobility, which reaches 4,500 cm2/(V·s) for electrons in single-crystal CVD diamond. High mobility is favorable for high-frequency operation, and field-effect transistors made from diamond have already demonstrated promising high-frequency performance above 50 GHz. The wide band gap of diamond (5.5 eV) gives it excellent dielectric properties. Combined with the high mechanical stability of diamond, those properties are being used in prototype high-power switches for power stations.
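The 235 nm LED emission quoted above sits just below diamond's 5.5 eV band gap, as a quick photon-energy check shows (using E = hc/λ with hc ≈ 1239.84 eV·nm, a standard physical constant):

```python
# Photon energy of the 235 nm UV emission, compared with diamond's
# 5.5 eV band gap. hc in eV·nm is a standard constant, not from the text.

HC_EV_NM = 1239.84  # Planck constant times speed of light, in eV·nm

def photon_energy_ev(wavelength_nm: float) -> float:
    """Photon energy in electron-volts for a given wavelength."""
    return HC_EV_NM / wavelength_nm

print(round(photon_energy_ev(235), 2))  # ~5.28 eV, just under the 5.5 eV gap
```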

Synthetic diamond transistors have been produced in the laboratory. They remain functional at much higher temperatures than silicon devices, and are resistant to chemical and radiation damage. While no diamond transistors have yet been successfully integrated into commercial electronics, they are promising for use in exceptionally high-power situations and hostile non-oxidizing environments.

Synthetic diamond is already used as a radiation detection device. It is radiation hard and has a wide bandgap of 5.5 eV (at room temperature). Diamond is also distinguished from most other semiconductors by the lack of a stable native oxide. This makes it difficult to fabricate surface MOS devices, but it does create the potential for UV radiation to reach the active semiconductor without absorption in a surface layer. Because of these properties, it is employed in applications such as the BaBar detector at the Stanford Linear Accelerator and BOLD (Blind to the Optical Light Detectors for VUV solar observations). A diamond VUV detector was recently used in the European LYRA program.

Conductive CVD diamond is a useful electrode under many circumstances. Photochemical methods have been developed for covalently linking DNA to the surface of polycrystalline diamond films produced through CVD. Such DNA-modified films can be used for detecting various biomolecules, which would interact with DNA thereby changing electrical conductivity of the diamond film. In addition, diamonds can be used to detect redox reactions that cannot ordinarily be studied and in some cases degrade redox-reactive organic contaminants in water supplies. Because diamond is mechanically and chemically stable, it can be used as an electrode under conditions that would destroy traditional materials. As an electrode, synthetic diamond can be used in waste water treatment of organic effluents and the production of strong oxidants.

Gemstones

A colorless faceted gem
Colorless gem cut from diamond grown by chemical vapor deposition

Synthetic diamonds for use as gemstones are grown by HPHT or CVD methods. The market share of synthetic jewelry-quality diamonds is growing as advances in technology allow for larger, higher-quality synthetic production on a more economical scale. In 2013, synthetic diamonds accounted for 0.28% of rough diamonds produced for use as gemstones, and 2% of the gem-quality diamond market. In 2023, synthetic diamonds were 17% of the diamond jewelry market. They are available in yellow, pink, green, orange, blue and, to a lesser extent, colorless (or white). The yellow color comes from nitrogen impurities introduced during manufacturing, while the blue comes from boron. Other colors, such as pink or green, are achievable after synthesis using irradiation. Several companies also offer memorial diamonds grown using cremated remains.

In May 2015, a record was set for an HPHT colorless diamond at 10.02 carats. The faceted jewel was cut from a 32.2-carat stone that was grown in about 300 hours. By 2022, gem-quality diamonds of 16–20 carats were being produced.

Price

Around 2016, the price of synthetic diamond gemstones (e.g., 1-carat stones) began dropping "precipitously", by roughly 30% in one year, becoming clearly lower than that of mined diamond gems. In April 2022, CNN Business reported that a synthetic one-carat round diamond commonly used in engagement rings was up to 73% cheaper than a natural diamond with the same features, and that the number of engagement rings featuring a lab-grown diamond had increased 63% compared to the previous year, while those sold with a natural diamond declined 25% in the same period. By the beginning of 2025, laboratory-grown diamonds had dropped in price by 74% since 2020, and prices were expected to continue decreasing. The drop was attributed largely to faster laboratory growth, which cut production times from weeks to hours.

Marketing and classification

Gem-quality diamonds grown in a lab can be chemically, physically, and optically identical to naturally occurring ones. The mined diamond industry has undertaken legal, marketing, and distribution countermeasures to try to protect its market from the emerging presence of synthetic diamonds, including price fixing. Synthetic diamonds can be distinguished by spectroscopy in the infrared, ultraviolet, or X-ray wavelengths. The DiamondView tester from De Beers uses UV fluorescence to detect trace impurities of nitrogen, nickel, or other metals in HPHT or CVD diamonds. Many other test instruments are available.

Diamond certification laboratories are equipped with instruments that can reliably distinguish laboratory-grown from natural diamonds. Several laboratories, including the Gemological Institute of America (GIA), the International Gemological Institute (IGI), and Gemological Science International (GSI), laser-inscribe the girdle of every lab-grown diamond they certify with their report number and an indication that the diamond is lab-grown. The inscription is invisible to the naked eye, but can be seen at 10x magnification.

In May 2018, De Beers announced that it would introduce a new jewelry brand called "Lightbox" that features synthetic diamonds, which was notable as the company was the world's largest diamond miner and had previously been an outspoken critic of synthetic diamonds. In July 2018, the U.S. Federal Trade Commission (FTC) approved a substantial revision to its Jewelry Guides, with changes that impose new rules on how the trade can describe diamonds and diamond simulants. The revised guidelines were substantially contrary to what had been advocated in 2016 by De Beers. The new guidelines remove the word "natural" from the definition of "diamond", thus including lab-grown diamonds within the scope of the definition of "diamond". The revised guide further states that "If a marketer uses 'synthetic' to imply that a competitor's lab-grown diamond is not an actual diamond, ... this would be deceptive." According to the new FTC guidelines, the GIA dropped the word "synthetic" from its certification process and report for lab-grown diamonds in July 2019.

Ethical and environmental considerations

Traditional diamond mining has led to human rights abuses in several countries in Africa and other diamond mining countries. The 2006 Hollywood movie Blood Diamond helped to publicize the problem. Consumer demand for synthetic diamonds has been increasing as customers look for ethically sound and cheaper stones.

Neutrino oscillation

From Wikipedia, the free encyclopedia

Neutrino oscillation is a quantum mechanical phenomenon in which a neutrino created with a specific lepton family number ("lepton flavor": electron, muon, or tau) can later be measured to have a different lepton family number. The probability of measuring a particular flavor for a neutrino varies between three known states as it propagates through space.

First predicted by Bruno Pontecorvo in 1957, neutrino oscillation has since been observed by a multitude of experiments in several different contexts. Most notably, the existence of neutrino oscillation resolved the long-standing solar neutrino problem.

Neutrino oscillation is of great theoretical and experimental interest, as the precise properties of the process can shed light on several properties of the neutrino. In particular, it implies that the neutrino has a non-zero mass, which requires a modification to the Standard Model of particle physics. The experimental discovery of neutrino oscillation, and thus neutrino mass, by the Super-Kamiokande Observatory and the Sudbury Neutrino Observatories was recognized with the 2015 Nobel Prize for Physics.

Observations

A great deal of evidence for neutrino oscillation has been collected from many sources, over a wide range of neutrino energies and with many different detector technologies. The 2015 Nobel Prize in Physics was shared by Takaaki Kajita and Arthur B. McDonald for their early pioneering observations of these oscillations.

Neutrino oscillation is a function of the ratio L/E, where L is the distance traveled and E is the neutrino's energy (details in § Propagation and interference below). All available neutrino sources produce a range of energies, and oscillation is measured at a fixed distance for neutrinos of varying energy. The limiting factor in measurements is the accuracy with which the energy of each observed neutrino can be measured. Because current detectors have energy uncertainties of a few percent, it is satisfactory to know the distance to within 1%.

Solar neutrino oscillation

The first experiment that detected the effects of neutrino oscillation was Ray Davis' Homestake experiment in the late 1960s, in which he observed a deficit in the flux of solar neutrinos with respect to the prediction of the Standard Solar Model, using a chlorine-based detector. This gave rise to the solar neutrino problem. Many subsequent radiochemical and water Cherenkov detectors confirmed the deficit, but neutrino oscillation was not conclusively identified as the source of the deficit until the Sudbury Neutrino Observatory provided clear evidence of neutrino flavor change in 2001.

Solar neutrinos have energies below 20 MeV. At energies above 5 MeV, solar neutrino oscillation actually takes place in the Sun through a resonance known as the MSW effect, a different process from the vacuum oscillation described later in this article.

Atmospheric neutrino oscillation

Following theories proposed in the 1970s suggesting the unification of the electromagnetic, weak, and strong forces, several experiments on proton decay followed in the 1980s. Large detectors such as IMB, MACRO, and Kamiokande II observed a deficit in the ratio of the flux of muon to electron flavor atmospheric neutrinos (see Muon § Muon decay). The Super-Kamiokande experiment provided a very precise measurement of neutrino oscillation in an energy range of hundreds of MeV to a few TeV, and with a baseline of the diameter of the Earth; the first experimental evidence for atmospheric neutrino oscillations was announced in 1998.

Reactor neutrino oscillation

Illustration of neutrino oscillations.

Many experiments have searched for oscillation of electron anti-neutrinos produced in nuclear reactors. No oscillations were found until a detector was installed at a distance of 1–2 km. Such oscillations give the value of the parameter θ13. Neutrinos produced in nuclear reactors have energies similar to solar neutrinos, of around a few MeV. The baselines of these experiments have ranged from tens of meters to over 100 km (parameter θ12). Mikaelyan and Sinev proposed to use two identical detectors to cancel systematic uncertainties in reactor experiments measuring the parameter θ13.

In December 2011, the Double Chooz experiment indicated that θ13 ≠ 0. Then, in 2012, the Daya Bay experiment found that θ13 ≠ 0 with a significance of 5.2 σ; these results have since been confirmed by RENO.

The Neutrino-4 experiment, which started in 2014 with a model detector and continued in 2016–2021 with a full-scale detector, reported direct observation of the oscillation effect in the parameter region Δm²14 = (7.3 ± 0.13stat ± 1.16syst) (eV/c²)² and sin²(2θ14) = 0.36 ± 0.12stat (2.9σ). The simulation showed the expected detector signal for the case of oscillation detection.

Beam neutrino oscillation

Neutrino beams produced at a particle accelerator offer the greatest control over the neutrinos being studied. Many experiments have taken place that study the same oscillations as in atmospheric neutrino oscillation using neutrinos with a few GeV of energy and several-hundred-kilometre baselines. The MINOS, K2K, and Super-K experiments have all independently observed muon neutrino disappearance over such long baselines.

Data from the LSND experiment appear to be in conflict with the oscillation parameters measured in other experiments. Results from MiniBooNE appeared in spring 2007 and contradicted the results from LSND, although they could support the existence of a fourth neutrino type, the sterile neutrino.

In 2010, the INFN and CERN announced the observation of a tau lepton appearing in a muon neutrino beam in the OPERA detector located at Gran Sasso, 730 km from the source at CERN near Geneva.

T2K, using a neutrino beam directed through 295 km of earth and the Super-Kamiokande detector, measured a non-zero value for the parameter θ13 in a neutrino beam. NOνA, using the same beam as MINOS with a baseline of 810 km, is sensitive to the same oscillation.

Theory

Neutrino oscillation arises from mixing between the flavor and mass eigenstates of neutrinos. That is, the three neutrino states that interact with the charged leptons in weak interactions are each a different superposition of the three (propagating) neutrino states of definite mass. Neutrinos are produced and detected in weak interactions as flavour eigenstates but propagate as coherent superpositions of mass eigenstates.

As a neutrino superposition propagates through space, the quantum mechanical phases of the three neutrino mass states advance at slightly different rates, due to the slight differences in their respective masses. This results in a changing superposition mixture of mass eigenstates as the neutrino travels; but a different mixture of mass eigenstates corresponds to a different mixture of flavor states. For example, a neutrino born as an electron neutrino will be some mixture of electron, mu, and tau neutrino after traveling some distance. Since the quantum mechanical phase advances in a periodic fashion, after some distance the state will nearly return to the original mixture, and the neutrino will be again mostly electron neutrino. The electron flavor content of the neutrino will then continue to oscillate – as long as the quantum mechanical state maintains coherence. Since mass differences between neutrino flavors are small in comparison with long coherence lengths for neutrino oscillations, this microscopic quantum effect becomes observable over macroscopic distances.

In contrast, due to their larger masses, the charged leptons (electrons, muons, and tau leptons) have never been observed to oscillate. In nuclear beta decay, muon decay, pion decay, and kaon decay, when a neutrino and a charged lepton are emitted, the charged lepton is emitted in incoherent mass eigenstates such as |e⟩, because of its large mass. Weak-force couplings compel the simultaneously emitted neutrino to be in a "charged-lepton-centric" superposition such as |ν_e⟩, which is an eigenstate for a "flavor" that is fixed by the electron's mass eigenstate, and not in one of the neutrino's own mass eigenstates. Because the neutrino is in a coherent superposition that is not a mass eigenstate, the mixture that makes up that superposition oscillates significantly as it travels. No analogous mechanism exists in the Standard Model that would make charged leptons detectably oscillate. In the four decays mentioned above, where the charged lepton is emitted in a unique mass eigenstate, the charged lepton will not oscillate, as single mass eigenstates propagate without oscillation.

The case of (real) W boson decay is more complicated: W boson decay is sufficiently energetic to generate a charged lepton that is not in a mass eigenstate; however, the charged lepton would lose coherence, if it had any, over interatomic distances (0.1 nm) and would thus quickly cease any meaningful oscillation. More importantly, no mechanism in the Standard Model is capable of pinning down a charged lepton into a coherent state that is not a mass eigenstate, in the first place; instead, while the charged lepton from the W boson decay is not initially in a mass eigenstate, neither is it in any "neutrino-centric" eigenstate, nor in any other coherent state. It cannot meaningfully be said that such a featureless charged lepton oscillates or that it does not oscillate, as any "oscillation" transformation would just leave it the same generic state that it was before the oscillation. Therefore, detection of a charged lepton oscillation from W boson decay is infeasible on multiple levels.

Pontecorvo–Maki–Nakagawa–Sakata matrix

The idea of neutrino oscillation was first put forward in 1957 by Bruno Pontecorvo, who proposed that neutrino–antineutrino transitions may occur in analogy with neutral kaon mixing. Although such matter–antimatter oscillation had not been observed, this idea formed the conceptual foundation for the quantitative theory of neutrino flavor oscillation, which was first developed by Maki, Nakagawa, and Sakata in 1962 and further elaborated by Pontecorvo in 1967. One year later the solar neutrino deficit was first observed, and that was followed by the famous article by Gribov and Pontecorvo published in 1969 titled "Neutrino astronomy and lepton charge".

The concept of neutrino mixing is a natural outcome of gauge theories with massive neutrinos, and its structure can be characterized in general. In its simplest form it is expressed as a unitary transformation relating the flavor and mass eigenbasis and can be written as

|ν_α⟩ = Σ_i U*_αi |ν_i⟩,    |ν_i⟩ = Σ_α U_αi |ν_α⟩

where

|ν_α⟩ is a neutrino with definite flavor α = e (electron), μ (muon) or τ (tauon),
|ν_i⟩ is a neutrino with definite mass m_i, with i = 1, 2 or 3,
the superscript asterisk (*) represents a complex conjugate; for antineutrinos, the complex conjugate should be removed from the first equation and inserted into the second.

The symbol U represents the Pontecorvo–Maki–Nakagawa–Sakata matrix (also called the PMNS matrix, lepton mixing matrix, or sometimes simply the MNS matrix). It is the analogue of the CKM matrix, which describes the corresponding mixing of quarks. If this matrix were the identity matrix, then the flavor eigenstates would be the same as the mass eigenstates. However, experiment shows that it is not.

When the standard three-neutrino theory is considered, the matrix is 3 × 3. If only two neutrinos are considered, a 2 × 2 matrix is used. If one or more sterile neutrinos are added (see later), it is 4 × 4 or larger. In the 3 × 3 form, it is given by

    | c12 c13                        s12 c13                       s13 e^(−iδ) |
U = | −s12 c23 − c12 s23 s13 e^(iδ)  c12 c23 − s12 s23 s13 e^(iδ)  s23 c13    | · diag(e^(iα1/2), e^(iα2/2), 1)
    | s12 s23 − c12 c23 s13 e^(iδ)   −c12 s23 − s12 c23 s13 e^(iδ) c23 c13    |

where cij ≡ cos θij, and sij ≡ sin θij. The phase factors α1 and α2 are physically meaningful only if neutrinos are Majorana particles—i.e. if the neutrino is identical to its antineutrino (whether or not they are is unknown)—and do not enter into oscillation phenomena regardless. If neutrinoless double beta decay occurs, these factors influence its rate. The phase factor δ is non-zero only if neutrino oscillation violates CP symmetry; this has not yet been observed experimentally. If experiment shows this 3 × 3 matrix to be not unitary, a sterile neutrino or some other new physics is required.

Propagation and interference

Since the |ν_i⟩ are mass eigenstates, their propagation can be described by plane wave solutions of the form

|ν_i(t)⟩ = e^(−i(E_i t − p⃗_i·x⃗)) |ν_i(0)⟩

where

  • quantities are expressed in natural units (c = 1, ħ = 1),
  • E_i is the energy of the mass eigenstate i,
  • t is the time from the start of the propagation,
  • p⃗_i is the three-dimensional momentum,
  • x⃗ is the current position of the particle relative to its starting position

In the ultrarelativistic limit, we can approximate the energy as

E_i = √(p_i² + m_i²) ≈ E + m_i²/(2E)

where E is the energy of the wavepacket (particle) to be detected.

This limit applies to all practical (currently observed) neutrinos, since their masses are less than 1 eV and their energies are at least 1 MeV, so the Lorentz factor γ is greater than 10⁶ in all cases. Using also t ≈ L, where L is the distance traveled, and also dropping the phase factors, the wavefunction becomes

|ν_i(L)⟩ = e^(−i m_i² L/(2E)) |ν_i(0)⟩

Eigenstates with different masses propagate with different frequencies. The heavier ones oscillate faster compared to the lighter ones. Since the mass eigenstates are combinations of flavor eigenstates, this difference in frequencies causes interference between the corresponding flavor components of each mass eigenstate. Constructive interference makes it possible for a neutrino created with a given flavor to change its flavor during its propagation. The probability that a neutrino originally of flavor α will later be observed as having flavor β is

P(α→β) = |⟨ν_β|ν_α(t)⟩|² = |Σ_i U*_αi U_βi e^(−i m_i² L/(2E))|²

This is more conveniently written as

P(α→β) = δ_αβ − 4 Σ_{i>j} Re(U*_αi U_βi U_αj U*_βj) sin²(Δm²_ij L/(4E)) + 2 Σ_{i>j} Im(U*_αi U_βi U_αj U*_βj) sin(Δm²_ij L/(2E))

where

Δm²_ij ≡ m_i² − m_j²

The phase that is responsible for oscillation is often written as (with c and ħ restored)

Δm² L c³/(4ħE) ≈ 1.27 × (Δm²/eV²) × (L/km)/(E/GeV)

where 1.27 is unitless. In this form, it is convenient to plug in the oscillation parameters since:

  • The mass differences, Δm², are known to be on the order of 10⁻⁴ (eV/c²)² = (10⁻² eV/c²)²
  • Oscillation distances, L, in modern experiments are on the order of kilometres
  • Neutrino energies, E, in modern experiments are typically on the order of MeV or GeV.
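As a quick check (illustrative, not from the article), the 1.27 prefactor can be recovered by restoring ħ and c in the phase Δm²L/(4E) and expressing Δm² in eV², L in km, and E in GeV:

```python
# Recover the 1.27 prefactor of the oscillation phase
# phase = dm2 * L / (4 * hbar*c * E), with dm2 in eV^2, L in km, E in GeV.
HBARC_EV_M = 197.3269804e-9   # hbar*c in eV*m (197.327 MeV*fm)

def phase_prefactor():
    """Coefficient of (dm2/eV^2)*(L/km)/(E/GeV) in the oscillation phase."""
    L_m = 1.0e3        # 1 km expressed in metres
    E_eV = 1.0e9       # 1 GeV expressed in eV
    return L_m / (4.0 * HBARC_EV_M * E_eV)

print(round(phase_prefactor(), 3))   # ≈ 1.267
```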

If there is no CP-violation (δ is zero), then the second sum is zero. Otherwise, the CP asymmetry can be given as

A_CP^(αβ) = P(ν_α→ν_β) − P(ν̄_α→ν̄_β) = 4 Σ_{i>j} Im(U*_αi U_βi U_αj U*_βj) sin(Δm²_ij L/(2E))

In terms of the Jarlskog invariant

J ≡ c12 s12 c23 s23 c13² s13 sin δ

the CP asymmetry is expressed as

A_CP^(eμ) = 16 J sin(Δm²21 L/(4E)) sin(Δm²32 L/(4E)) sin(Δm²31 L/(4E))

Two-neutrino case

The above formula is correct for any number of neutrino generations. Writing it explicitly in terms of mixing angles is extremely cumbersome if there are more than two neutrinos that participate in mixing. Fortunately, there are several meaningful cases in which only two neutrinos participate significantly. In this case, it is sufficient to consider the mixing matrix

U = |  cos θ   sin θ |
    | −sin θ   cos θ |

Then the probability of a neutrino changing its flavor is (in natural units, for α ≠ β)

P(α→β) = sin²(2θ) sin²(Δm² L/(4E))

Or, using SI units and the convention introduced above,

P(α→β) = sin²(2θ) sin²(1.27 × (Δm²/eV²) × (L/km)/(E/GeV))

This formula is often appropriate for discussing the transition νμ ↔ ντ in atmospheric mixing, since the electron neutrino plays almost no role in this case. It is also appropriate for the solar case of νe ↔ νx, where νx is a mix (superposition) of νμ and ντ. These approximations are possible because the mixing angle θ13 is very small and because two of the mass states are very close in mass compared to the third.
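The two-flavor formula is straightforward to evaluate numerically. The sketch below (illustrative; the atmospheric-style parameter values are representative, not tied to a specific experiment) computes the transition probability in the practical-units form:

```python
import math

def two_flavor_prob(sin2_2theta, dm2_eV2, L_km, E_GeV):
    """P(nu_alpha -> nu_beta) = sin^2(2 theta) * sin^2(1.27 * dm2 * L / E)."""
    phase = 1.27 * dm2_eV2 * L_km / E_GeV
    return sin2_2theta * math.sin(phase) ** 2

# Maximal mixing, dm2 ~ 2.4e-3 eV^2, E = 1 GeV: the first oscillation
# maximum falls near L ~ 515 km
for L in (0, 250, 515, 1000):   # baseline in km
    print(L, round(two_flavor_prob(1.0, 2.4e-3, L, 1.0), 3))
```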

Classical analogue of neutrino oscillation

Spring-coupled pendulums
Time evolution of the pendulums
Lower frequency normal mode
Higher frequency normal mode

The basic physics behind neutrino oscillation can be found in any system of coupled harmonic oscillators. A simple example is a system of two pendulums connected by a weak spring (a spring with a small spring constant). The first pendulum is set in motion by the experimenter while the second begins at rest. Over time, the second pendulum begins to swing under the influence of the spring, while the first pendulum's amplitude decreases as it loses energy to the second. Eventually all of the system's energy is transferred to the second pendulum and the first is at rest. The process then reverses. The energy oscillates between the two pendulums repeatedly until it is lost to friction.

The behavior of this system can be understood by looking at its normal modes of oscillation. If the two pendulums are identical then one normal mode consists of both pendulums swinging in the same direction with a constant distance between them, while the other consists of the pendulums swinging in opposite (mirror image) directions. These normal modes have (slightly) different frequencies because the second involves the (weak) spring while the first does not. The initial state of the two-pendulum system is a combination of both normal modes. Over time, these normal modes drift out of phase, and this is seen as a transfer of motion from the first pendulum to the second.

The description of the system in terms of the two pendulums is analogous to the flavor basis of neutrinos. These are the parameters that are most easily produced and detected (in the case of neutrinos, by weak interactions involving the W boson). The description in terms of normal modes is analogous to the mass basis of neutrinos. These modes do not interact with each other when the system is free of outside influence.

When the pendulums are not identical the analysis is slightly more complicated. In the small-angle approximation, the potential energy of a single pendulum system is (1/2)(mg/L)x², where g is standard gravity, L is the length of the pendulum, m is the mass of the pendulum, and x is the horizontal displacement of the pendulum. As an isolated system the pendulum is a harmonic oscillator with a frequency of √(g/L). The potential energy of a spring is (1/2)kx², where k is the spring constant and x is the displacement. With a mass attached it oscillates with a period of 2π√(m/k). With two pendulums (labeled a and b) of equal mass but possibly unequal lengths and connected by a spring, the total potential energy is

V = (m/2)[(g/La)xa² + (g/Lb)xb²] + (k/2)(xb − xa)²

This is a quadratic form in xa and xb, which can also be written as a matrix product:

V = (m/2) (xa  xb) | g/La + k/m   −k/m       | (xa)
                   | −k/m         g/Lb + k/m | (xb)

The 2 × 2 matrix is real symmetric and so (by the spectral theorem) it is orthogonally diagonalizable. That is, there is an angle θ such that if we define

x1 = xa cos θ + xb sin θ,    x2 = −xa sin θ + xb cos θ

then

V = (m/2)(λ1 x1² + λ2 x2²)

where λ1 and λ2 are the eigenvalues of the matrix. The variables x1 and x2 describe normal modes which oscillate with frequencies of √λ1 and √λ2. When the two pendulums are identical (La = Lb), θ is 45°.

The angle θ is analogous to the Cabibbo angle (though that angle applies to quarks rather than neutrinos).
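For a 2 × 2 real symmetric matrix the diagonalization above can be written in closed form, so the mixing angle and normal-mode frequencies are easy to compute. A small sketch (the pendulum lengths, mass, and spring constant are arbitrary illustrative values):

```python
import math

def pendulum_modes(La, Lb, k, m, g=9.81):
    """Mixing angle (radians) and normal-mode frequencies for two
    spring-coupled pendulums, from V = (m/2) x^T M x with
    M = [[g/La + k/m, -k/m], [-k/m, g/Lb + k/m]]."""
    a, c = g / La + k / m, g / Lb + k / m
    b = -k / m
    theta = 0.5 * math.atan2(2 * b, a - c)    # rotation that diagonalizes M
    mean, diff = (a + c) / 2, math.hypot((a - c) / 2, b)
    lam1, lam2 = mean + diff, mean - diff     # eigenvalues
    return theta, math.sqrt(lam1), math.sqrt(lam2)

# Identical pendulums: mixing angle has magnitude 45 degrees, and the modes
# have frequencies sqrt(g/L) (in phase) and sqrt(g/L + 2k/m) (out of phase)
theta, w1, w2 = pendulum_modes(1.0, 1.0, 0.5, 1.0)
print(abs(math.degrees(theta)))   # 45.0
```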

When the number of oscillators (particles) is increased to three, the orthogonal matrix can no longer be described by a single angle; instead, three are required (Euler angles). Furthermore, in the quantum case, the matrices may be complex. This requires the introduction of complex phases in addition to the rotation angles, which are associated with CP violation but do not influence the observable effects of neutrino oscillation.

Theory, graphically

Two neutrino probabilities in vacuum

In the approximation where only two neutrinos participate in the oscillation, the probability of oscillation follows a simple pattern:

The blue curve shows the probability of the original neutrino retaining its identity. The red curve shows the probability of conversion to the other neutrino. The maximum probability of conversion is equal to sin2 2θ. The frequency of the oscillation is controlled by Δm2.

Three neutrino probabilities

If three neutrinos are considered, the probability for each neutrino to appear is somewhat complex. The graphs below show the probabilities for each flavor, with the plots in the left column showing a long range to display the slow "solar" oscillation, and the plots in the right column zoomed in, to display the fast "atmospheric" oscillation. The parameters used to create these graphs (see below) are consistent with current measurements, but since some parameters are still quite uncertain, some aspects of these plots are only qualitatively correct.

Electron neutrino oscillations, long range. Here and in the following diagrams black means electron neutrino, blue means muon neutrino and red means tau neutrino.[27]
Electron neutrino oscillations, short range
Muon neutrino oscillations, long range
Muon neutrino oscillations, short range
Tau neutrino oscillations, long range
Tau neutrino oscillations, short range

The illustrations were created using the following parameter values:

  • sin²(2θ13) = 0.10 (Determines the size of the small wiggles.)
  • sin²(2θ23) = 0.97
  • sin²(2θ12) = 0.861
  • δ = 0 (If the actual value of this phase is large, the probabilities will be somewhat distorted, and will be different for neutrinos and antineutrinos.)
  • Normal mass hierarchy: m1 < m2 < m3
  • Δm²12 = 0.759×10⁻⁴ (eV/c²)²
  • Δm²32 ≈ Δm²13 = 23.2×10⁻⁴ (eV/c²)²
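Curves like these can be reproduced from the general oscillation formula. The sketch below (an illustrative implementation, not the code used for the article's figures) builds a real PMNS matrix from the parameter values listed above (δ = 0, so all entries are real) and evolves the mass-eigenstate phases:

```python
import cmath
import math

def angle(sin2_2t):
    """theta recovered from sin^2(2 theta)."""
    return 0.5 * math.asin(math.sqrt(sin2_2t))

t12, t13, t23 = angle(0.861), angle(0.10), angle(0.97)
s12, c12 = math.sin(t12), math.cos(t12)
s13, c13 = math.sin(t13), math.cos(t13)
s23, c23 = math.sin(t23), math.cos(t23)

# Real PMNS matrix (delta = 0; Majorana phases drop out of oscillations)
U = [
    [c12 * c13,                     s12 * c13,                    s13],
    [-s12 * c23 - c12 * s23 * s13,  c12 * c23 - s12 * s23 * s13,  s23 * c13],
    [s12 * s23 - c12 * c23 * s13,   -c12 * s23 - s12 * c23 * s13, c23 * c13],
]

M2 = [0.0, 0.759e-4, 23.2e-4]   # m_i^2 in (eV/c^2)^2, relative to m_1^2

def prob(alpha, beta, L_km, E_GeV):
    """P(alpha -> beta) = |sum_i U*_ai U_bi exp(-i m_i^2 L / 2E)|^2."""
    amp = sum(U[alpha][i] * U[beta][i] *
              cmath.exp(-2j * 1.27 * M2[i] * L_km / E_GeV)
              for i in range(3))
    return abs(amp) ** 2

# Unitarity check: probabilities out of one flavor always sum to 1
print(sum(prob(0, b, 500.0, 1.0) for b in range(3)))
```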

Observed values of oscillation parameters

  • sin²(2θ13) = 0.093 ± 0.008, PDG combination of Daya Bay, RENO, and Double Chooz results.
  • sin²(2θ12) = 0.846 ± 0.021. This corresponds to θsol (solar), obtained from KamLAND, solar, reactor and accelerator data.
  • sin²(2θ23) > 0.92 at 90% confidence level, corresponding to θ23 ≈ θatm = 45 ± 7.1° (atmospheric)
  • Δm²21 ≡ Δm²sol = (0.753 ± 0.018)×10⁻⁴ (eV/c²)²
  • |Δm²31| ≈ |Δm²32| ≡ Δm²atm = (24.4 ± 0.6)×10⁻⁴ (eV/c²)² (normal mass hierarchy)
  • δ, α1, α2, and the sign of Δm²32 are currently unknown.

Solar neutrino experiments combined with KamLAND have measured the so-called solar parameters Δm²sol and sin²θsol. Atmospheric neutrino experiments such as Super-Kamiokande together with the K2K and MINOS long-baseline accelerator neutrino experiments have determined the so-called atmospheric parameters Δm²atm and sin²θatm. The last mixing angle, θ13, has been measured by the experiments Daya Bay, Double Chooz and RENO as sin²(2θ13).

For atmospheric neutrinos the relevant difference of masses is about Δm² = 24×10⁻⁴ (eV/c²)² and the typical energies are ~1 GeV; for these values the oscillations become visible for neutrinos traveling several hundred kilometres, which would be those neutrinos that reach the detector traveling through the Earth, from below the horizon.

The mixing parameter θ13 is measured using electron anti-neutrinos from nuclear reactors. The rate of anti-neutrino interactions is measured in detectors sited near the reactors to determine the flux prior to any significant oscillations and then it is measured in far detectors (placed kilometres from the reactors). The oscillation is observed as an apparent disappearance of electron anti-neutrinos in the far detectors (i.e. the interaction rate at the far site is lower than predicted from the observed rate at the near site).
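As a rough illustration of this near/far comparison (the baselines and energy are representative values, not those of a specific experiment), the expected deficit can be estimated from the two-flavor survival probability with the θ13 and Δm²atm values quoted above:

```python
import math

def survival_ee(L_km, E_GeV, sin2_2t13=0.093, dm2_eV2=2.44e-3):
    """Electron antineutrino survival:
    P = 1 - sin^2(2 theta13) * sin^2(1.27 * dm2 * L / E)."""
    phase = 1.27 * dm2_eV2 * L_km / E_GeV
    return 1.0 - sin2_2t13 * math.sin(phase) ** 2

E = 4e-3  # typical reactor antineutrino energy: 4 MeV, expressed in GeV
near, far = survival_ee(0.4, E), survival_ee(2.0, E)
print(round(near, 3), round(far, 3))  # the far detector sees a larger deficit
```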

From atmospheric and solar neutrino oscillation experiments, it is known that two mixing angles of the MNS matrix are large and the third is smaller. This is in sharp contrast to the CKM matrix in which all three angles are small and hierarchically decreasing. The CP-violating phase of the MNS matrix is constrained, as of April 2020, to lie somewhere between −2° and −178°, from the T2K experiment.

If the neutrino mass proves to be of Majorana type (making the neutrino its own antiparticle), it is then possible that the MNS matrix has more than one phase.

Since experiments observing neutrino oscillation measure the squared mass difference and not absolute mass, one might claim that the lightest neutrino mass is exactly zero, without contradicting observations. This is however regarded as unlikely by theorists.

Origins of neutrino mass

The question of how neutrino masses arise has not been answered conclusively. In the Standard Model of particle physics, fermions only have intrinsic mass because of interactions with the Higgs field (see Higgs boson). These interactions require both left- and right-handed versions of the fermion (see chirality). However, only left-handed neutrinos have been observed so far.

Neutrinos may have another source of mass through the Majorana mass term. This type of mass applies for electrically neutral particles since otherwise it would allow particles to turn into anti-particles, which would violate conservation of electric charge.

The smallest modification to the Standard Model, which only has left-handed neutrinos, is to allow these left-handed neutrinos to have Majorana masses. The problem with this is that the neutrino masses are surprisingly small compared with those of the rest of the known particles (at least 600,000 times smaller than the mass of an electron), which, while it does not invalidate the theory, is widely regarded as unsatisfactory because this construction offers no insight into the origin of the neutrino mass scale.

The next simplest addition would be to add into the Standard Model right-handed neutrinos that interact with the left-handed neutrinos and the Higgs field in an analogous way to the rest of the fermions. These new neutrinos would interact with the other fermions solely in this way and hence would not be directly observable, so are not phenomenologically excluded. The problem of the disparity of the mass scales remains.

Seesaw mechanism

The most popular conjectured solution currently is the seesaw mechanism, where right-handed neutrinos with very large Majorana masses are added. If the right-handed neutrinos are very heavy, they induce a very small mass for the left-handed neutrinos, which is proportional to the reciprocal of the heavy mass.

If it is assumed that the neutrinos interact with the Higgs field with approximately the same strengths as the charged fermions do, the heavy mass should be close to the GUT scale. Because the Standard Model has only one fundamental mass scale, all particle masses must arise in relation to this scale.

There are other varieties of seesaw and there is currently great interest in the so-called low-scale seesaw schemes, such as the inverse seesaw mechanism.

The addition of right-handed neutrinos has the effect of adding new mass scales, unrelated to the mass scale of the Standard Model, hence the observation of heavy right-handed neutrinos would reveal physics beyond the Standard Model. Right-handed neutrinos would help to explain the origin of matter through a mechanism known as leptogenesis.

Other sources

There are alternative ways to modify the standard model that are similar to the addition of heavy right-handed neutrinos (e.g., the addition of new scalars or fermions in triplet states) and other modifications that are less similar (e.g., neutrino masses from loop effects and/or from suppressed couplings). One example of the last type of models is provided by certain versions of supersymmetric extensions of the standard model of fundamental interactions, where R-parity is not a symmetry. There, the exchange of supersymmetric particles such as squarks and sleptons can break the lepton number and lead to neutrino masses. These interactions are normally excluded from theories as they come from a class of interactions that lead to unacceptably rapid proton decay if they are all included. These models have little predictive power and are not able to provide a cold dark matter candidate.

Oscillations in the early universe

During the early universe when particle concentrations and temperatures were high, neutrino oscillations could have behaved differently. Depending on neutrino mixing-angle parameters and masses, a broad spectrum of behavior may arise including vacuum-like neutrino oscillations, smooth evolution, or self-maintained coherence. The physics for this system is non-trivial and involves neutrino oscillations in a dense neutrino gas.
