Sunday, November 10, 2024

Electroweak interaction

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Electroweak_interaction

In particle physics, the electroweak interaction or electroweak force is the unified description of two of the fundamental interactions of nature: electromagnetism (electromagnetic interaction) and the weak interaction. Although these two forces appear very different at everyday low energies, the theory models them as two different aspects of the same force. Above the unification energy, on the order of 246 GeV, they would merge into a single force. Thus, if the temperature is high enough – approximately 10^15 K – then the electromagnetic force and weak force merge into a combined electroweak force.
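As a rough check on that conversion, a back-of-envelope Python sketch (it simply divides the quoted energy scale by the Boltzmann constant, which is the same approximation the text itself makes):

    # Temperature corresponding to the ~246 GeV electroweak scale, T = E / k_B
    k_B = 8.617e-5   # Boltzmann constant, eV per kelvin
    E = 246e9        # electroweak unification scale, eV
    print(f"T ~ {E / k_B:.1e} K")   # ~2.9e15 K, i.e. the ~10^15 K quoted above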

During the quark epoch (shortly after the Big Bang), the electroweak force split into the electromagnetic and weak forces. It is thought that the required temperature of 10^15 K has not been seen widely throughout the universe since before the quark epoch, and currently the highest human-made temperature in thermal equilibrium is around 5.5×10^12 K (from the Large Hadron Collider).

Sheldon Glashow, Abdus Salam, and Steven Weinberg were awarded the 1979 Nobel Prize in Physics for their contributions to the unification of the weak and electromagnetic interaction between elementary particles, known as the Weinberg–Salam theory. The existence of the electroweak interactions was experimentally established in two stages, the first being the discovery of neutral currents in neutrino scattering by the Gargamelle collaboration in 1973, and the second in 1983 by the UA1 and the UA2 collaborations that involved the discovery of the W and Z gauge bosons in proton–antiproton collisions at the converted Super Proton Synchrotron. In 1999, Gerardus 't Hooft and Martinus Veltman were awarded the Nobel Prize for showing that the electroweak theory is renormalizable.

History

After the Wu experiment in 1956 discovered parity violation in the weak interaction, a search began for a way to relate the weak and electromagnetic interactions. Extending his doctoral advisor Julian Schwinger's work, Sheldon Glashow first experimented with introducing two different symmetries, one chiral and one achiral, and combined them such that their overall symmetry was unbroken. This did not yield a renormalizable theory, and its gauge symmetry had to be broken by hand as no spontaneous mechanism was known, but it predicted a new particle, the Z boson. This received little notice, as it matched no experimental finding.

In 1964, Salam and John Clive Ward had the same idea, but predicted a massless photon and three massive gauge bosons with a manually broken symmetry. Later around 1967, while investigating spontaneous symmetry breaking, Weinberg found a set of symmetries predicting a massless, neutral gauge boson. Initially rejecting such a particle as useless, he later realized his symmetries produced the electroweak force, and he proceeded to predict rough masses for the W and Z bosons. Significantly, he suggested this new theory was renormalizable. In 1971, Gerard 't Hooft proved that spontaneously broken gauge symmetries are renormalizable even with massive gauge bosons.

Formulation

Weinberg's weak mixing angle θW, and relation between coupling constants g, g′, and e. Adapted from Lee (1981).
The pattern of weak isospin, T3, and weak hypercharge, YW, of the known elementary particles, showing the electric charge, Q, along the weak mixing angle. The neutral Higgs field (circled) breaks the electroweak symmetry and interacts with other particles to give them mass. Three components of the Higgs field become part of the massive W and Z bosons.

Mathematically, electromagnetism is unified with the weak interactions as a Yang–Mills field with an SU(2) × U(1) gauge group, which describes the formal operations that can be applied to the electroweak gauge fields without changing the dynamics of the system. These fields are the weak isospin fields W1, W2, and W3, and the weak hypercharge field B. This invariance is known as electroweak symmetry.

The generators of SU(2) and U(1) are given the name weak isospin (labeled T) and weak hypercharge (labeled Y) respectively. These then give rise to the gauge bosons that mediate the electroweak interactions – the three W bosons of weak isospin (W1, W2, and W3), and the B boson of weak hypercharge, respectively, all of which are "initially" massless. These are not physical fields yet, before spontaneous symmetry breaking and the associated Higgs mechanism.

In the Standard Model, the observed physical particles, the W± and Z0 bosons and the photon, are produced through the spontaneous symmetry breaking of the electroweak symmetry SU(2) × U(1)Y to U(1)em, effected by the Higgs mechanism (see also Higgs boson), an elaborate quantum-field-theoretic phenomenon that "spontaneously" alters the realization of the symmetry and rearranges degrees of freedom.

The electric charge arises as the particular (nontrivial) linear combination of YW (weak hypercharge) and the T3 component of weak isospin that does not couple to the Higgs boson. That is to say: the Higgs and the electromagnetic field have no effect on each other at the level of the fundamental forces ("tree level"), while any other combination of the hypercharge and the weak isospin must interact with the Higgs. This causes an apparent separation between the weak force, which interacts with the Higgs, and electromagnetism, which does not. Mathematically, the electric charge is the specific combination Q = T3 + YW/2, outlined in the figure.

U(1)em (the symmetry group of electromagnetism only) is defined to be the group generated by this special linear combination, and the symmetry described by the U(1)em group is unbroken, since it does not directly interact with the Higgs.

The above spontaneous symmetry breaking makes the W3 and B bosons coalesce into two different physical bosons with different masses – the Z0 boson and the photon (γ):

\begin{pmatrix} \gamma \\ Z^0 \end{pmatrix} = \begin{pmatrix} \cos\theta_W & \sin\theta_W \\ -\sin\theta_W & \cos\theta_W \end{pmatrix} \begin{pmatrix} B \\ W_3 \end{pmatrix},

where θW is the weak mixing angle. The axes representing the particles have essentially just been rotated, in the (W3, B) plane, by the angle θW. This also introduces a mismatch between the mass of the Z0 and the mass of the W± particles (denoted as mZ and mW, respectively):

m_Z = \frac{m_W}{\cos\theta_W}
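Plugging in the measured boson masses gives a quick numerical check of this relation in Python (the mass values are approximate experimental numbers, used here only for illustration):

    import math

    m_W = 80.38   # W boson mass, GeV (approximate)
    m_Z = 91.19   # Z boson mass, GeV (approximate)

    cos_theta_W = m_W / m_Z               # from m_Z = m_W / cos(theta_W)
    theta_W = math.acos(cos_theta_W)
    print(f"theta_W ~ {math.degrees(theta_W):.1f} deg")    # ~28 deg
    print(f"sin^2(theta_W) ~ {1 - cos_theta_W**2:.3f}")    # ~0.22, near the measured value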

The W1 and W2 bosons, in turn, combine to produce the charged massive bosons W±:

W^\pm_\mu = \frac{1}{\sqrt{2}}\left(W^1_\mu \mp i\,W^2_\mu\right)

The signs are fixed by requiring each field to carry a definite electric charge, as sketched below.
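A short sketch of the standard SU(2) reasoning behind those signs (textbook algebra, not taken from this article): the two combinations pair with the isospin raising and lowering operators,

T^\pm = T^1 \pm i\,T^2, \qquad [T^3, T^\pm] = \pm T^\pm, \qquad g\left(W^1_\mu T^1 + W^2_\mu T^2\right) = \frac{g}{\sqrt{2}}\left(W^+_\mu T^+ + W^-_\mu T^-\right).

Since T+ raises a fermion's T3, and hence its electric charge, by one unit, charge conservation requires the field multiplying it to carry charge +1; that field is W+ = (W1 − iW2)/√2, and likewise W− = (W1 + iW2)/√2 carries charge −1.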

Lagrangian

Before electroweak symmetry breaking

The Lagrangian for the electroweak interactions is divided into four parts before electroweak symmetry breaking becomes manifest:

\mathcal{L}_{\mathrm{EW}} = \mathcal{L}_g + \mathcal{L}_f + \mathcal{L}_h + \mathcal{L}_y

The \mathcal{L}_g term describes the interaction between the three W vector bosons and the B vector boson,

\mathcal{L}_g = -\frac{1}{4} W^{a\mu\nu} W^a_{\mu\nu} - \frac{1}{4} B^{\mu\nu} B_{\mu\nu},

where W^{a\mu\nu} (a = 1, 2, 3) and B^{\mu\nu} are the field strength tensors for the weak isospin and weak hypercharge gauge fields.

\mathcal{L}_f is the kinetic term for the Standard Model fermions. The interaction of the gauge bosons and the fermions is through the gauge covariant derivative:

\mathcal{L}_f = \bar{Q}_j\, i\slashed{D}\, Q_j + \bar{u}_j\, i\slashed{D}\, u_j + \bar{d}_j\, i\slashed{D}\, d_j + \bar{L}_j\, i\slashed{D}\, L_j + \bar{e}_j\, i\slashed{D}\, e_j,

where the subscript j sums over the three generations of fermions; Q, u, and d are the left-handed doublet, right-handed singlet up, and right-handed singlet down quark fields; and L and e are the left-handed doublet and right-handed singlet electron fields. The Feynman slash \slashed{D} means the contraction of the 4-gradient with the Dirac matrices, defined as

\slashed{D} \equiv \gamma^\mu D_\mu,

and the covariant derivative (excluding the gluon gauge field for the strong interaction) is defined as

D_\mu \equiv \partial_\mu - \frac{i g'}{2}\, Y B_\mu - \frac{i g}{2}\, T^a W^a_\mu.

Here Y is the weak hypercharge and the T^a are the components of the weak isospin.
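To see concretely how the photon sits inside this derivative, one can rotate (W3, B) into (Z, A) by the weak mixing angle; a standard manipulation (sketched here, not quoted from the article) gives

g\, T^3 W^3_\mu + \frac{g'}{2}\, Y B_\mu = e\, Q\, A_\mu + \frac{g}{\cos\theta_W}\left(T^3 - \sin^2\theta_W\, Q\right) Z_\mu, \qquad e = g \sin\theta_W = g' \cos\theta_W .

The photon thus couples through exactly the combination Q = T3 + Y/2 identified earlier, while every other combination picks up a Z coupling.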

The \mathcal{L}_h term describes the Higgs field h and its interactions with itself and the gauge bosons,

\mathcal{L}_h = |D_\mu h|^2 - \lambda \left(|h|^2 - \frac{v^2}{2}\right)^2,

where v is the vacuum expectation value.

The \mathcal{L}_y term describes the Yukawa interaction with the fermions,

\mathcal{L}_y = -y_{u\,ij}\, \epsilon^{ab}\, h_b^\dagger\, \bar{Q}_{ia}\, u_j^c - y_{d\,ij}\, h\, \bar{Q}_i\, d_j^c - y_{e\,ij}\, h\, \bar{L}_i\, e_j^c + \mathrm{h.c.},

and generates their masses, manifest when the Higgs field acquires a nonzero vacuum expectation value, discussed next. The y_u, y_d, and y_e are matrices of Yukawa couplings.
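Once h settles at its vacuum value v/√2, a diagonalized Yukawa coupling y_f translates directly into a fermion mass. A quick worked number (standard values, shown only as an illustration):

m_f = \frac{y_f\, v}{\sqrt{2}} \quad \Longrightarrow \quad y_e = \frac{\sqrt{2}\, m_e}{v} \approx \frac{\sqrt{2} \times 0.511\ \mathrm{MeV}}{246\ \mathrm{GeV}} \approx 2.9 \times 10^{-6}.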

After electroweak symmetry breaking

The Lagrangian reorganizes itself as the Higgs field acquires a non-vanishing vacuum expectation value dictated by the potential of the previous section. As a result of this rewriting, the symmetry breaking becomes manifest. In the history of the universe, this is believed to have happened shortly after the hot big bang, when the universe was at a temperature of 159.5±1.5 GeV (roughly 2×10^15 K), assuming the Standard Model of particle physics.

Due to its complexity, this Lagrangian is best described by breaking it up into several parts as follows.

The kinetic term \mathcal{L}_K contains all the quadratic terms of the Lagrangian, which include the dynamic terms (the partial derivatives) and the mass terms (conspicuously absent from the Lagrangian before symmetry breaking):

\mathcal{L}_K = \sum_f \bar{f}\,(i\slashed{\partial} - m_f)\, f - \frac{1}{4} A_{\mu\nu} A^{\mu\nu} - \frac{1}{2} W^+_{\mu\nu} W^{-\mu\nu} + m_W^2\, W^+_\mu W^{-\mu} - \frac{1}{4} Z_{\mu\nu} Z^{\mu\nu} + \frac{1}{2} m_Z^2\, Z_\mu Z^\mu + \frac{1}{2}\, (\partial^\mu H)(\partial_\mu H) - \frac{1}{2} m_H^2\, H^2,

where the sum runs over all the fermions of the theory (quarks and leptons), and the fields A_{\mu\nu}, Z_{\mu\nu}, and W^\pm_{\mu\nu} are given as

X^a_{\mu\nu} = \partial_\mu X^a_\nu - \partial_\nu X^a_\mu + g f^{abc} X^b_\mu X^c_\nu,

with X to be replaced by the relevant field (A, Z, W±) and f^{abc} by the structure constants of the appropriate gauge group (the last term vanishes for the abelian fields A and Z).

The neutral current \mathcal{L}_N and charged current \mathcal{L}_C components of the Lagrangian contain the interactions between the fermions and gauge bosons:

\mathcal{L}_N = e\, J_\mu^{em} A^\mu + \frac{g}{\cos\theta_W}\left(J_\mu^3 - \sin^2\theta_W\, J_\mu^{em}\right) Z^\mu,

where e = g sin θW = g′ cos θW. The electromagnetic current is

J_\mu^{em} = \sum_f q_f\, \bar{f} \gamma_\mu f,

where q_f is the fermions' electric charge. The neutral weak current is

J_\mu^3 = \sum_f T^3_f\, \bar{f} \gamma_\mu \frac{1 - \gamma^5}{2} f,

where T^3_f is the fermions' weak isospin.

The charged current part of the Lagrangian is given by

\mathcal{L}_C = -\frac{g}{\sqrt{2}}\left[\bar{u}_i \gamma^\mu \frac{1 - \gamma^5}{2} M^{\mathrm{CKM}}_{ij} d_j + \bar{\nu}_i \gamma^\mu \frac{1 - \gamma^5}{2} e_i\right] W^+_\mu + \mathrm{h.c.},

where ν is the neutrino field, and the CKM matrix M^{\mathrm{CKM}}_{ij} determines the mixing between mass and weak eigenstates of the quarks.

\mathcal{L}_H contains the Higgs three-point and four-point self-interaction terms,

\mathcal{L}_H = -\frac{g\, m_H^2}{4 m_W} H^3 - \frac{g^2 m_H^2}{32 m_W^2} H^4.

\mathcal{L}_{HV} contains the Higgs interactions with gauge vector bosons,

\mathcal{L}_{HV} = \left(g\, m_W H + \frac{g^2}{4} H^2\right)\left(W^+_\mu W^{-\mu} + \frac{1}{2 \cos^2\theta_W} Z_\mu Z^\mu\right).

\mathcal{L}_{WWV} contains the gauge three-point self-interactions among the W bosons, the Z boson, and the photon.

\mathcal{L}_{WWVV} contains the gauge four-point self-interactions.

\mathcal{L}_Y contains the Yukawa interactions between the fermions and the Higgs field,

\mathcal{L}_Y = -\sum_f \frac{g\, m_f}{2 m_W}\, \bar{f} f H.

Nuclear force

From Wikipedia, the free encyclopedia
Force (as multiples of 10000 N) between two nucleons as a function of distance as computed from the Reid potential (1968). The spins of the neutron and proton are aligned, and they are in the S angular momentum state. The attractive (negative) force has a maximum at a distance of about 1 fm with a force of about 25000 N. Particles much closer than a distance of 0.8 fm experience a large repulsive (positive) force. Particles separated by a distance greater than 1 fm are still attracted (Yukawa potential), but the force falls as an exponential function of distance.
Corresponding potential energy (in units of MeV) of two nucleons as a function of distance as computed from the Reid potential. The potential well has a minimum at a distance of about 0.8 fm. With this potential nucleons can become bound with a negative "binding energy".

The nuclear force (or nucleon–nucleon interaction, residual strong force, or, historically, strong nuclear force) is a force that acts between hadrons, most commonly observed between protons and neutrons of atoms. Neutrons and protons, both nucleons, are affected by the nuclear force almost identically. Since protons have charge +1 e, they experience an electric force that tends to push them apart, but at short range the attractive nuclear force is strong enough to overcome the electrostatic force. The nuclear force binds nucleons into atomic nuclei.

The nuclear force is powerfully attractive between nucleons at distances of about 0.8 femtometre (fm, or 0.8×10^−15 m), but it rapidly decreases to insignificance at distances beyond about 2.5 fm. At distances less than 0.7 fm, the nuclear force becomes repulsive. This repulsion is responsible for the size of nuclei, since nucleons can come no closer than the force allows. (An atom, with a size on the order of angstroms (Å, or 10^−10 m), is five orders of magnitude larger.) The nuclear force is not simple, though, as it depends on the nucleon spins, has a tensor component, and may depend on the relative momentum of the nucleons.

The nuclear force has an essential role in storing energy that is used in nuclear power and nuclear weapons. Work (energy) is required to bring charged protons together against their electric repulsion. This energy is stored when the protons and neutrons are bound together by the nuclear force to form a nucleus. The mass of a nucleus is less than the sum total of the individual masses of the protons and neutrons. The difference in masses is known as the mass defect, which can be expressed as an energy equivalent. Energy is released when a heavy nucleus breaks apart into two or more lighter nuclei. This energy is the internucleon potential energy that is released when the nuclear force no longer holds the charged nuclear fragments together.

A quantitative description of the nuclear force relies on equations that are partly empirical. These equations model the internucleon potential energies, or potentials. (Generally, forces within a system of particles can be more simply modelled by describing the system's potential energy; the negative gradient of a potential is equal to the vector force.) The constants for the equations are phenomenological, that is, determined by fitting the equations to experimental data. The internucleon potentials attempt to describe the properties of nucleon–nucleon interaction. Once determined, any given potential can be used in, e.g., the Schrödinger equation to determine the quantum mechanical properties of the nucleon system.

The discovery of the neutron in 1932 revealed that atomic nuclei were made of protons and neutrons, held together by an attractive force. By 1935 the nuclear force was conceived to be transmitted by particles called mesons. This theoretical development included a description of the Yukawa potential, an early example of a nuclear potential. Pions, fulfilling the prediction, were discovered experimentally in 1947. By the 1970s, the quark model had been developed, by which the mesons and nucleons were viewed as composed of quarks and gluons. By this new model, the nuclear force, resulting from the exchange of mesons between neighbouring nucleons, is a multiparticle interaction, the collective effect of the strong force on the underlying structure of the nucleons.

Description

Comparison between the nuclear force and the Coulomb force. a – residual strong force (nuclear force), rapidly decreasing to insignificance at distances beyond about 2.5 fm; b – at distances less than ~0.7 fm between nucleon centres, the nuclear force becomes repulsive; c – Coulomb repulsion force between two protons (beyond about 3 fm it becomes the dominant force); d – equilibrium position for proton–proton; r – radius of a nucleon (a cloud composed of three quarks). Note: 1 fm = 10^−15 m.

While the nuclear force is usually associated with nucleons, more generally this force is felt between hadrons, or particles composed of quarks. At small separations between nucleons (less than ~ 0.7 fm between their centres, depending upon spin alignment) the force becomes repulsive, which keeps the nucleons at a certain average separation. For identical nucleons (such as two neutrons or two protons) this repulsion arises from the Pauli exclusion force. A Pauli repulsion also occurs between quarks of the same flavour from different nucleons (a proton and a neutron).

Field strength

At distances larger than 0.7 fm the force becomes attractive between spin-aligned nucleons, becoming maximal at a centre–centre distance of about 0.9 fm. Beyond this distance the force drops exponentially, until beyond about 2.0 fm separation, the force is negligible. Nucleons have a radius of about 0.8 fm.

At short distances (less than 1.7 fm or so), the attractive nuclear force is stronger than the repulsive Coulomb force between protons; it thus overcomes the repulsion of protons within the nucleus. However, the Coulomb force between protons has a much greater range as it varies as the inverse square of the charge separation, and Coulomb repulsion thus becomes the only significant force between protons when their separation exceeds about 2 to 2.5 fm.
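For scale, a quick Python estimate of the Coulomb repulsion between two protons 1 fm apart, to set against the ~25000 N peak nuclear attraction quoted in the Reid-potential caption above (constants are standard; the comparison is only illustrative):

    # Coulomb force between two protons separated by r = 1 fm
    k = 8.9875e9     # Coulomb constant, N m^2 / C^2
    e = 1.602e-19    # elementary charge, C
    r = 1e-15        # separation, m
    F = k * e**2 / r**2
    print(f"F ~ {F:.0f} N")   # ~231 N, tiny next to the ~25000 N nuclear attraction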

The nuclear force has a spin-dependent component. The force is stronger for particles with their spins aligned than for those with their spins anti-aligned. If two particles are the same, such as two neutrons or two protons, the force is not enough to bind the particles, since the spin vectors of two particles of the same type must point in opposite directions when the particles are near each other and are (save for spin) in the same quantum state. This requirement for fermions stems from the Pauli exclusion principle. For fermion particles of different types, such as a proton and neutron, particles may be close to each other and have aligned spins without violating the Pauli exclusion principle, and the nuclear force may bind them (in this case, into a deuteron), since the nuclear force is much stronger for spin-aligned particles. But if the particles' spins are anti-aligned, the nuclear force is too weak to bind them, even if they are of different types.

The nuclear force also has a tensor component which depends on the interaction between the nucleon spins and the angular momentum of the nucleons, leading to deformation from a simple spherical shape.

Nuclear binding

To disassemble a nucleus into unbound protons and neutrons requires work against the nuclear force. Conversely, energy is released when a nucleus is created from free nucleons or other nuclei: the nuclear binding energy. Because of mass–energy equivalence (i.e. Einstein's formula E = mc2), releasing this energy causes the mass of the nucleus to be lower than the total mass of the individual nucleons, leading to the so-called "mass defect".
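As a concrete illustration of the mass defect, a short Python check for the simplest bound nucleus, the deuteron (masses are standard values, rounded):

    # Deuteron binding energy from the mass defect, E_B = (m_p + m_n - m_d) c^2
    m_p = 938.272    # proton mass, MeV/c^2
    m_n = 939.565    # neutron mass, MeV/c^2
    m_d = 1875.613   # deuteron mass, MeV/c^2
    print(f"binding energy ~ {m_p + m_n - m_d:.3f} MeV")   # ~2.224 MeV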

The nuclear force is nearly independent of whether the nucleons are neutrons or protons. This property is called charge independence. The force depends on whether the spins of the nucleons are parallel or antiparallel, as it has a non-central or tensor component. This part of the force does not conserve orbital angular momentum, which under the action of central forces is conserved.

The symmetry resulting in the strong force, proposed by Werner Heisenberg, is that protons and neutrons are identical in every respect, other than their charge. This is not completely true, because neutrons are a tiny bit heavier, but it is an approximate symmetry. Protons and neutrons are therefore viewed as the same particle, but with different isospin quantum numbers; conventionally, the proton is isospin up, while the neutron is isospin down. The strong force is invariant under SU(2) isospin transformations, just as other interactions between particles are invariant under SU(2) transformations of intrinsic spin. In other words, both isospin and intrinsic spin transformations are isomorphic to the SU(2) symmetry group. There are only strong attractions when the total isospin of the set of interacting particles is 0, which is confirmed by experiment.

Our understanding of the nuclear force is obtained by scattering experiments and the binding energy of light nuclei.

A simplified Feynman diagram of a strong proton–neutron interaction mediated by a virtual neutral pion. Time proceeds from left to right.

The nuclear force occurs by the exchange of virtual light mesons, such as the virtual pions, as well as two types of virtual mesons with spin (vector mesons), the rho mesons and the omega mesons. The vector mesons account for the spin-dependence of the nuclear force in this "virtual meson" picture.

The nuclear force is distinct from what historically was known as the weak nuclear force. The weak interaction is one of the four fundamental interactions, and plays a role in processes such as beta decay. The weak force plays no role in the interaction of nucleons, though it is responsible for the decay of neutrons to protons and vice versa.

History

The nuclear force has been at the heart of nuclear physics ever since the field was born in 1932 with the discovery of the neutron by James Chadwick. The traditional goal of nuclear physics is to understand the properties of atomic nuclei in terms of the "bare" interaction between pairs of nucleons, or nucleon–nucleon forces (NN forces).

Within months after the discovery of the neutron, Werner Heisenberg and Dmitri Ivanenko had proposed proton–neutron models for the nucleus. Heisenberg approached the description of protons and neutrons in the nucleus through quantum mechanics, an approach that was not at all obvious at the time. Heisenberg's theory for protons and neutrons in the nucleus was a "major step toward understanding the nucleus as a quantum mechanical system". Heisenberg introduced the first theory of nuclear exchange forces that bind the nucleons. He considered protons and neutrons to be different quantum states of the same particle, i.e., nucleons distinguished by the value of their nuclear isospin quantum numbers.

One of the earliest models for the nucleus was the liquid-drop model developed in the 1930s. One property of nuclei is that the average binding energy per nucleon is approximately the same for all stable nuclei, which is similar to a liquid drop. The liquid-drop model treated the nucleus as a drop of incompressible nuclear fluid, with nucleons behaving like molecules in a liquid. The model was first proposed by George Gamow and then developed by Niels Bohr, Werner Heisenberg, and Carl Friedrich von Weizsäcker. This crude model did not explain all the properties of the nucleus, but it did explain the spherical shape of most nuclei. The model also gave good predictions for the binding energy of nuclei.

In 1934, Hideki Yukawa made the earliest attempt to explain the nature of the nuclear force. According to his theory, massive bosons (mesons) mediate the interaction between two nucleons. In light of quantum chromodynamics (QCD)—and, by extension, the Standard Model—meson theory is no longer perceived as fundamental. But the meson-exchange concept (where hadrons are treated as elementary particles) continues to represent the best working model for a quantitative NN potential. The Yukawa potential (also called a screened Coulomb potential) is a potential of the form

V_{\mathrm{Yukawa}}(r) = -g^2\, \frac{e^{-\mu r}}{r},

where g is a magnitude scaling constant, i.e., the amplitude of the potential, μ is the Yukawa particle mass, and r is the radial distance to the particle. The potential is monotone increasing in r, implying that the force is always attractive. The constants are determined empirically. The Yukawa potential depends only on the distance r between particles, hence it models a central force.
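In natural units the range of such a force is set by the inverse mass of the exchanged meson. A small Python sketch (the pion mass and ħc are standard values; the coupling g² is an arbitrary number chosen only for illustration):

    import math

    hbar_c = 197.327   # MeV fm
    m_pi = 139.57      # charged pion mass, MeV
    rng = hbar_c / m_pi
    print(f"range ~ {rng:.2f} fm")   # ~1.4 fm, the characteristic nuclear-force range

    g2 = 1.0   # illustrative coupling strength
    for r in (0.5, 1.0, 2.0, 3.0):
        V = -g2 * math.exp(-r / rng) / r   # Yukawa potential, arbitrary units
        print(f"r = {r} fm: V ~ {V:.3f}")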

Throughout the 1930s a group at Columbia University led by I. I. Rabi developed magnetic-resonance techniques to determine the magnetic moments of nuclei. These measurements led to the discovery in 1939 that the deuteron also possessed an electric quadrupole moment. This electrical property of the deuteron had been interfering with the measurements by the Rabi group. The deuteron, composed of a proton and a neutron, is one of the simplest nuclear systems. The discovery meant that the physical shape of the deuteron was not symmetric, which provided valuable insight into the nature of the nuclear force binding nucleons. In particular, the result showed that the nuclear force was not a central force, but had a tensor character. Hans Bethe identified the discovery of the deuteron's quadrupole moment as one of the important events during the formative years of nuclear physics.

Historically, the task of describing the nuclear force phenomenologically was formidable. The first semi-empirical quantitative models came in the mid-1950s, such as the Woods–Saxon potential (1954). There was substantial progress in experiment and theory related to the nuclear force in the 1960s and 1970s. One influential model was the Reid potential (1968):

V_{\mathrm{Reid}}(r) = -10.463\, \frac{e^{-\mu r}}{\mu r} - 1650.6\, \frac{e^{-4\mu r}}{\mu r} + 6484.2\, \frac{e^{-7\mu r}}{\mu r},

where μ = 0.7 fm^−1 and where the potential is given in units of MeV. In recent years, experimenters have concentrated on the subtleties of the nuclear force, such as its charge dependence, the precise value of the πNN coupling constant, improved phase-shift analysis, high-precision NN data, high-precision NN potentials, NN scattering at intermediate and high energies, and attempts to derive the nuclear force from QCD.
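Evaluating that central Reid form numerically reproduces the well described in the figure captions earlier (minimum near 0.8 fm); a brief Python scan (coefficients as reconstructed above, so treat the exact numbers as approximate):

    import math

    def reid(r, mu=0.7):   # r in fm, result in MeV
        x = mu * r
        return (-10.463 * math.exp(-x) - 1650.6 * math.exp(-4 * x)
                + 6484.2 * math.exp(-7 * x)) / x

    rs = [0.5 + 0.001 * i for i in range(2500)]   # scan 0.5 to 3.0 fm
    r_min = min(rs, key=reid)
    print(f"minimum at r ~ {r_min:.2f} fm, depth ~ {reid(r_min):.0f} MeV")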

As a residual of strong force

An animation of the interaction. The coloured double circles are gluons; anticolours are also shown.
The same diagram as that above with the individual quark constituents shown, to illustrate how the fundamental strong interaction gives rise to the nuclear force. Straight lines are quarks, while multi-coloured loops are gluons (the carriers of the fundamental force). Other gluons, which bind together the proton, neutron, and pion "in flight", are not shown.

The nuclear force is a residual effect of the more fundamental strong force, or strong interaction. The strong interaction is the attractive force that binds the elementary particles called quarks together to form the nucleons (protons and neutrons) themselves. This more powerful force, one of the fundamental forces of nature, is mediated by particles called gluons. Gluons hold quarks together through colour charge which is analogous to electric charge, but far stronger. Quarks, gluons, and their dynamics are mostly confined within nucleons, but residual influences extend slightly beyond nucleon boundaries to give rise to the nuclear force.

The nuclear forces arising between nucleons are analogous to the forces in chemistry between neutral atoms or molecules called London dispersion forces. Such forces between atoms are much weaker than the attractive electrical forces that hold the atoms themselves together (i.e., that bind electrons to the nucleus), and their range between atoms is shorter, because they arise from small separation of charges inside the neutral atom. Similarly, even though nucleons are made of quarks in combinations which cancel most gluon forces (they are "colour neutral"), some combinations of quarks and gluons nevertheless leak away from nucleons, in the form of short-range nuclear force fields that extend from one nucleon to another nearby nucleon. These nuclear forces are very weak compared to direct gluon forces ("colour forces" or strong forces) inside nucleons, and the nuclear forces extend only over a few nuclear diameters, falling exponentially with distance. Nevertheless, they are strong enough to bind neutrons and protons over short distances, and overcome the electrical repulsion between protons in the nucleus.

Sometimes, the nuclear force is called the residual strong force, in contrast to the strong interactions which arise from QCD. This phrasing arose during the 1970s when QCD was being established. Before that time, the strong nuclear force referred to the inter-nucleon potential. After the verification of the quark model, strong interaction has come to mean QCD.

Nucleon–nucleon potentials

Two-nucleon systems such as the deuteron, the nucleus of a deuterium atom, as well as proton–proton or neutron–proton scattering are ideal for studying the NN force. Such systems can be described by attributing a potential (such as the Yukawa potential) to the nucleons and using the potentials in a Schrödinger equation. The form of the potential is derived phenomenologically (by measurement), although for the long-range interaction, meson-exchange theories help to construct the potential. The parameters of the potential are determined by fitting to experimental data such as the deuteron binding energy or NN elastic scattering cross sections (or, equivalently in this context, so-called NN phase shifts).

The most widely used NN potentials are the Paris potential, the Argonne AV18 potential, the CD-Bonn potential, and the Nijmegen potentials.

A more recent approach is to develop effective field theories for a consistent description of nucleon–nucleon and three-nucleon forces. Quantum hadrodynamics is an effective field theory of the nuclear force, comparable to QCD for colour interactions and QED for electromagnetic interactions. Additionally, chiral symmetry breaking can be analyzed in terms of an effective field theory (called chiral perturbation theory) which allows perturbative calculations of the interactions between nucleons with pions as exchange particles.

From nucleons to nuclei

The ultimate goal of nuclear physics would be to describe all nuclear interactions from the basic interactions between nucleons. This is called the microscopic or ab initio approach of nuclear physics. There are two major obstacles to overcome:

  • Calculations in many-body systems are difficult (because of multi-particle interactions) and require advanced computation techniques.
  • There is evidence that three-nucleon forces (and possibly higher multi-particle interactions) play a significant role. This means that three-nucleon potentials must be included into the model.

This is an active area of research with ongoing advances in computational techniques leading to better first-principles calculations of the nuclear shell structure. Two- and three-nucleon potentials have been implemented for nuclides up to A = 12.

Nuclear potentials

A successful way of describing nuclear interactions is to construct one potential for the whole nucleus instead of considering all its nucleon components. This is called the macroscopic approach. For example, scattering of neutrons from nuclei can be described by considering a plane wave in the potential of the nucleus, which comprises a real part and an imaginary part. This model is often called the optical model since it resembles the case of light scattered by an opaque glass sphere.

Nuclear potentials can be local or global: local potentials are limited to a narrow energy range and/or a narrow nuclear mass range, while global potentials, which have more parameters and are usually less accurate, are functions of the energy and the nuclear mass and can therefore be used in a wider range of applications.

Synergy

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Synergy

Synergy is an interaction or cooperation giving rise to a whole that is greater than the simple sum of its parts (i.e., a non-linear addition of force, energy, or effect). The term synergy comes from the Attic Greek word συνεργία synergia from synergos, συνεργός, meaning "working together". Synergy is similar in concept to emergence.

History

The words synergy and synergetic have been used in the field of physiology since at least the middle of the 19th century:

SYN'ERGY, Synergi'a, Synenergi'a, (F.) Synergie; from συν, 'with', and εργον, 'work'. A correlation or concourse of action between different organs in health; and, according to some, in disease.

—Dunglison, Robley, Medical Lexicon, Blanchard and Lea, 1853

In 1896, Henri Mazel applied the term "synergy" to social psychology by writing La synergie sociale, in which he argued that Darwinian theory failed to account for "social synergy" or "social love", a collective evolutionary drive. The highest civilizations were the work not only of the elite but of the masses too; those masses must be led, however, because the crowd, a feminine and unconscious force, cannot distinguish between good and evil.

In 1909, Lester Frank Ward defined synergy as the universal constructive principle of nature:

I have characterized the social struggle as centrifugal and social solidarity as centripetal. Either alone is productive of evil consequences. Struggle is essentially destructive of the social order, while communism removes individual initiative. The one leads to disorder, the other to degeneracy. What is not seen—the truth that has no expounders—is that the wholesome, constructive movement consists in the properly ordered combination and interaction of both these principles. This is social synergy, which is a form of cosmic synergy, the universal constructive principle of nature.

—Ward, Lester F. Glimpses of the Cosmos, volume VI (1897–1912) G. P. Putnam's Sons, 1918, p. 358

In Christian theology, synergism is the idea that salvation involves some form of cooperation between divine grace and human freedom.

A modern view of synergy in the natural sciences derives from the relationship between energy and information. Synergy is manifested when the system makes the transition between the different kinds of information (i.e., order, complexity) embedded in both systems.

Abraham Maslow and John Honigmann drew attention to an important development in the cultural anthropology field which arose in lectures by Ruth Benedict from 1941, for which the original manuscripts have been lost but the ideas preserved in "Synergy: Some Notes of Ruth Benedict" (1969).

Descriptions and usages

In the natural world, synergistic phenomena are ubiquitous, ranging from physics (for example, the different combinations of quarks that produce protons and neutrons) to chemistry (a popular example is water, a compound of hydrogen and oxygen), to the cooperative interactions among the genes in genomes, the division of labor in bacterial colonies, the synergies of scale in multicellular organisms, as well as the many different kinds of synergies produced by socially-organized groups, from honeybee colonies to wolf packs and human societies: compare stigmergy, a mechanism of indirect coordination between agents or actions that results in the self-assembly of complex systems. Even the tools and technologies that are widespread in the natural world represent important sources of synergistic effects. The tools that enabled early hominins to become systematic big-game hunters are a primordial human example.

In the context of organizational behavior, following the view that a cohesive group is more than the sum of its parts, synergy is the ability of a group to outperform even its best individual member. These conclusions are derived from the studies conducted by Jay Hall on a number of laboratory-based group ranking and prediction tasks. He found that effective groups actively looked for the points in which they disagreed and in consequence encouraged conflicts amongst the participants in the early stages of the discussion. In contrast, the ineffective groups felt a need to establish a common view quickly, used simple decision making methods such as averaging, and focused on completing the task rather than on finding solutions they could agree on. In a technical context, its meaning is a construct or collection of different elements working together to produce results not obtainable by any of the elements alone. The elements, or parts, can include people, hardware, software, facilities, policies, documents: all things required to produce system-level results. The value added by the system as a whole, beyond that contributed independently by the parts, is created primarily by the relationship among the parts, that is, how they are interconnected. In essence, a system constitutes a set of interrelated components working together with a common objective: fulfilling some designated need.

If used in a business application, synergy means that teamwork will produce an overall better result than if each person within the group were working toward the same goal individually. However, the concept of group cohesion needs to be considered. Group cohesion is that property that is inferred from the number and strength of mutual positive attitudes among members of the group. As the group becomes more cohesive, its functioning is affected in a number of ways. First, the interactions and communication between members increase. Common goals, interests and small size all contribute to this. In addition, group member satisfaction increases as the group provides friendship and support against outside threats.

There are negative aspects of group cohesion that have an effect on group decision-making and hence on group effectiveness. Two issues arise. The risky-shift phenomenon is the tendency of a group to make decisions that are riskier than those its members would have recommended individually. Group polarisation occurs when individuals in a group begin by taking a moderate stance on an issue regarding a common value and, after having discussed it, end up taking a more extreme stance.

A second potential negative consequence of group cohesion is groupthink, a mode of thinking that people engage in when they are deeply involved in a cohesive group, when the members' striving for unanimity overrides their motivation to appraise realistically the alternative courses of action. Studying the events of several American policy "disasters", such as the failure to anticipate the Japanese attack on Pearl Harbor (1941) and the Bay of Pigs Invasion fiasco (1961), Irving Janis argued that they were due to the cohesive nature of the committees that made the relevant decisions.

That decisions made by committees can lead to failure in a simple system was noted by Dr. Chris Elliot. His case study looked at IEEE-488, an international standard set by the leading US standards body (it codified a proprietary communications standard, HP-IB). Small automation systems using IEEE-488 failed because the external devices used for communication were made by two different companies, and the incompatibility between those devices led to a financial loss for the company. He argues that systems will be safe only if they are designed, not if they emerge by chance.

The idea of a systemic approach is endorsed by the United Kingdom Health and Safety Executive. The successful performance of health and safety management depends upon analyzing the causes of incidents and accidents and learning the correct lessons from them. The idea is that all events (not just those causing injuries) represent failures in control and present an opportunity for learning and improvement. UK Health and Safety Executive, Successful health and safety management (1997), describes the principles and management practices which provide the basis of effective health and safety management. It sets out the issues that need to be addressed, and can be used for developing improvement programs, self-audit, or self-assessment. Its message is that organizations must manage health and safety with the same degree of expertise and to the same standards as other core business activities, if they are to effectively control risks and prevent harm to people.

The term synergy was refined by R. Buckminster Fuller, who analyzed some of its implications more fully and coined the term synergetics.

  • A dynamic state in which combined action is favored over the difference of individual component actions.
  • Behavior of whole systems unpredicted by the behavior of their parts taken separately, known as emergent behavior.
  • The cooperative action of two or more stimuli (or drugs), resulting in a different or greater response than that of the individual stimuli.

Information theory

Mathematical formalizations of synergy have been proposed using information theory to rigorously define the relationships between "wholes" and "parts". In this context, synergy is said to occur when there is information present in the joint state of multiple variables that cannot be extracted from the individual parts considered individually. For example, consider the logical XOR gate. If Y = XOR(X1, X2) for three binary variables, the mutual information between either individual source and the target is I(X1; Y) = I(X2; Y) = 0 bit. However, the joint mutual information is I(X1, X2; Y) = 1 bit. There is information about the target that can only be extracted from the joint state of the inputs considered together, and not from either of them individually.
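A brief Python check of the XOR numbers (brute force over the four equally likely input pairs; no external libraries):

    from itertools import product
    from math import log2
    from collections import Counter

    # Joint distribution: X1, X2 are uniform bits, Y = X1 XOR X2
    states = [(x1, x2, x1 ^ x2) for x1, x2 in product((0, 1), repeat=2)]

    def mutual_info(pairs):
        """I(A;B) in bits for equally weighted (a, b) samples."""
        n = len(pairs)
        pab = Counter(pairs)
        pa = Counter(a for a, _ in pairs)
        pb = Counter(b for _, b in pairs)
        return sum((c / n) * log2((c / n) / ((pa[a] / n) * (pb[b] / n)))
                   for (a, b), c in pab.items())

    print(mutual_info([(x1, y) for x1, _, y in states]))          # 0.0 bit
    print(mutual_info([((x1, x2), y) for x1, x2, y in states]))   # 1.0 bit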

There is, thus far, no universal agreement on how synergy can best be quantified, with different approaches that decompose information into redundant, unique, and synergistic components appearing in the literature. Despite the lack of universal agreement, information-theoretic approaches to statistical synergy have been applied to diverse fields, including climatology, neuroscience, sociology, and machine learning. Synergy has also been proposed as a possible foundation on which to build a mathematically robust definition of emergence in complex systems, and may be relevant to formal theories of consciousness.

Biological sciences

Synergy of various kinds has been advanced by Peter Corning as a causal agency that can explain the progressive evolution of complexity in living systems over the course of time. According to the Synergism Hypothesis, synergistic effects have been the drivers of cooperative relationships of all kinds and at all levels in living systems. The thesis, in a nutshell, is that synergistic effects have often provided functional advantages (economic benefits) in relation to survival and reproduction that have been favored by natural selection. The cooperating parts, elements, or individuals become, in effect, functional "units" of selection in evolutionary change. Similarly, environmental systems may react in a non-linear way to perturbations, such as climate change, so that the outcome may be greater than the sum of the individual component alterations. Synergistic responses are a complicating factor in environmental modeling.

Pest synergy

Pest synergy would occur in a biological host organism population, where, for example, the introduction of parasite A may cause 10% fatalities, and parasite B may also cause 10% loss. When both parasites are present, the losses would normally be expected to total less than 20%, yet, in some cases, losses are significantly greater. In such cases, it is said that the parasites in combination have a synergistic effect.
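The "no interaction" baseline here is easy to make concrete. Under independent action, a tiny Python calculation gives the expected combined loss (the 10% figures are the ones used above):

    # Expected combined mortality if parasites A and B act independently
    p_a, p_b = 0.10, 0.10
    expected = 1 - (1 - p_a) * (1 - p_b)
    print(f"expected combined loss ~ {expected:.1%}")   # 19.0%, below the naive 20%
    # Observed losses well above this baseline indicate a synergistic effect.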

Drug synergy

Mechanisms that may be involved in the development of synergistic effects include:

  • Effect on the same cellular system (e.g. two different antibiotics like a penicillin and an aminoglycoside; penicillins damage the cell wall of gram-positive bacteria and improve the penetration of aminoglycosides).
  • Bioavailability (e.g. ayahuasca (or pharmahuasca) consists of DMT combined with MAOIs that interfere with the action of the MAO enzyme and stop the breakdown of chemical compounds such as DMT).
  • Reduced risk for substance abuse (e.g. lisdexamfetamine, which is a combination of the amino acid L-lysine, attached to dextroamphetamine, may have a lower liability for abuse as a recreational drug)
  • Increased potency (e.g. as with other NSAIDs, combinations of aspirin and caffeine provide slightly greater pain relief than aspirin alone).
  • Prevention or delay of degradation in the body (e.g. the antibiotic Ciprofloxacin inhibits the metabolism of Theophylline).[30]: 931 
  • Slowdown of excretion (e.g. Probenecid delays the renal excretion of Penicillin and thus prolongs its effect).
  • Anticounteractive action: for example, the effect of oxaliplatin and irinotecan. Oxaliplatin intercalates DNA, thereby preventing the cell from replicating DNA. Irinotecan inhibits topoisomerase 1, consequently the cytostatic effect is increased.
  • Effect on the same receptor but different sites (e.g. the coadministration of benzodiazepines and barbiturates, both act by enhancing the action of GABA on GABAA receptors, but benzodiazepines increase the frequency of channel opening, whilst barbiturates increase the channel closing time, making these two drugs dramatically enhance GABAergic neurotransmission).
  • In addition to the chemical nature of the drug itself, the topology of the chemical reaction network that connects the two targets determines the type of drug–drug interaction.

More mechanisms are described in an exhaustive 2009 review.

Toxicological synergy

Toxicological synergy is of concern to the public and regulatory agencies because chemicals individually considered safe might pose unacceptable health or ecological risk in combination. Articles in scientific and lay journals include many definitions of chemical or toxicological synergy, often vague or in conflict with each other. Because toxic interactions are defined relative to the expectation under "no interaction", a determination of synergy (or antagonism) depends on what is meant by "no interaction". The United States Environmental Protection Agency has one of the more detailed and precise definitions of toxic interaction, designed to facilitate risk assessment. In their guidance documents, the no-interaction default assumption is dose addition, so synergy means a mixture response that exceeds that predicted from dose addition. The EPA emphasizes that synergy does not always make a mixture dangerous, nor does antagonism always make the mixture safe; each depends on the predicted risk under dose addition.

For example, a consequence of pesticide use is the risk of health effects. During the registration of pesticides in the United States exhaustive tests are performed to discern health effects on humans at various exposure levels. A regulatory upper limit of presence in foods is then placed on this pesticide. As long as residues in the food stay below this regulatory level, health effects are deemed highly unlikely and the food is considered safe to consume.

However, in normal agricultural practice, it is rare to use only a single pesticide. During the production of a crop, several different materials may be used. Each of them has a regulatory level determined at which it would be considered individually safe. In many cases, a commercial pesticide is itself a combination of several chemical agents, and thus the safe levels actually represent levels of the mixture. In contrast, a combination created by the end user, such as a farmer, has rarely been tested in that combination. The potential for synergy is then unknown or estimated from data on similar combinations. This lack of information also applies to many of the chemical combinations to which humans are exposed, including residues in food, indoor air contaminants, and occupational exposures to chemicals. Some groups think that the rising rates of cancer, asthma, and other health problems may be caused by these combination exposures; others have alternative explanations. This question will likely be answered only after years of exposure by the population in general and research on chemical toxicity, usually performed on animals. Examples of pesticide synergists include piperonyl butoxide and MGK 264.

Human synergy

Synergy exists in individual and social interactions among humans, with some arguing that social cooperation requires synergy to continue. One way of quantifying synergy in human social groups is via energy use, where larger groups of humans (i.e., cities) use energy more efficiently than smaller groups of humans.

Human synergy can also occur on a smaller scale, like when individuals huddle together for warmth or in workplaces where labor specialization increases efficiency.

When synergy occurs in the workplace, the individuals involved get to work in a positive and supportive working environment. When individuals get to work in environments such as these, the company reaps the benefits. The authors of Creating the Best Workplace on Earth, Rob Goffee and Gareth Jones, state that "highly engaged employees are, on average, 50% more likely to exceed expectations than the least-engaged workers. And companies with highly engaged people outperform firms with the most disengaged folks by 54% in employee retention, by 89% in customer satisfaction, and by fourfold in revenue growth." Also, employees who are able to be open about their views on the company, and have confidence that they will be heard, are likely to be better organized and to help their fellow team members succeed.

Human interaction with technology can also increase synergy. Organismic computing is an approach to improving group efficacy by increasing synergy in human groups via technological means.

Theological synergism

In Christian theology, synergism is the belief that salvation involves a cooperation between divine grace and human freedom. Eastern Orthodox theology, in particular, uses the term "synergy" to describe this relationship, drawing on biblical language: "in Paul's words, 'We are fellow-workers (synergoi) with God' (1 Corinthians iii, 9)".

Corporate synergy

Corporate synergy occurs when corporations interact congruently. A corporate synergy refers to a financial benefit that a corporation expects to realize when it merges with or acquires another corporation. This type of synergy is a nearly ubiquitous feature of a corporate acquisition and is a negotiating point between the buyer and seller that impacts the final price both parties agree to. There are distinct types of corporate synergies, as follows.

Marketing

A marketing synergy refers to the use of information campaigns, studies, and scientific discovery or experimentation for research and development. This promotes the sale of products for varied use or off-market sales as well as development of marketing tools and in several cases exaggeration of effects. It is also often a meaningless buzzword used by corporate leaders.

Microsoft Word offers "cooperation" as a refinement suggestion to the word "synergy."

Revenue

A revenue synergy refers to the opportunity of a combined corporate entity to generate more revenue than its two predecessor stand-alone companies would be able to generate. For example, if company A sells product X through its sales force, company B sells product Y, and company A decides to buy company B, then the new company could use each salesperson to sell products X and Y, thereby increasing the revenue that each salesperson generates for the company.

In media revenue, synergy is the promotion and sale of a product throughout the various subsidiaries of a media conglomerate, e.g. films, soundtracks, or video games.

Financial

Financial synergy gained by the combined firm is a result of a number of benefits which flow to the entity as a consequence of acquisition and merger. These benefits may be:

Cash slack

This is when a firm with a number of cash-intensive projects acquires a firm which is cash-rich, enabling the new combined firm to enjoy the profits from investing the cash of one firm in the projects of the other.

Debt capacity

If two firms individually have little or no capacity to carry debt, it is possible for them to join and gain the capacity to carry debt through decreased gearing (leverage). This creates value for the firm, as debt is thought to be a cheaper source of finance.

Tax benefits

It is possible for one firm to have unused tax benefits which might be offset against the profits of another after combination, thus resulting in less tax being paid. However this greatly depends on the tax law of the country.

Management

Synergy in management and in relation to teamwork refers to the combined effort of individuals as participants of the team: the condition that exists when the organization's parts interact to produce a joint effect that is greater than the sum of the parts acting alone. Positive or negative synergies can exist. Positive synergy has positive effects such as improved efficiency in operations, greater exploitation of opportunities, and improved utilization of resources. Negative synergy, on the other hand, has negative effects such as reduced efficiency of operations, decrease in quality, underutilization of resources, and disequilibrium with the external environment.

Cost

A cost synergy refers to the opportunity of a combined corporate entity to reduce or eliminate expenses associated with running a business. Cost synergies are realized by eliminating positions that are viewed as duplicate within the merged entity. Examples include the headquarters office of one of the predecessor companies, certain executives, the human resources department, or other employees of the predecessor companies. This is related to the economic concept of economies of scale.

Synergistic action in economy

The synergistic action of the economic players lies within the economic phenomenon's profundity. The synergistic action gives different dimensions to competitiveness, strategy and network identity becoming an unconventional "weapon" which belongs to those who exploit the economic systems' potential in depth.

Synergistic determinants

The synergistic gravity equation (SYNGEq), according to its complex "title", represents a synthesis of the endogenous and exogenous factors which determine private and non-private economic decision-makers to take actions of synergistic exploitation of the economic network in which they operate. That is to say, SYNGEq constitutes a big picture of the factors/motivations which determine entrepreneurs to contour an active synergistic network. SYNGEq includes both factors whose character changes over time (such as the competitive conditions), as well as classic factors, such as the imperative of access to resources, of collaboration, and of quick answers. The synergistic gravity equation (SYNGEq) comes to be represented by the formula:

ΣSYN.Act = ΣR-*I(CRed+COOP++AUnimit.)*V(Cust.+Info.)*cc

where:

  • ΣSYN.Act = the sum of the synergistic actions adopted (by the economic actor)
  • Σ R- = the amount of unpurchased but necessary resources
  • ICRed = the imperative for cost reductions
  • ICOOP+ = the imperative for deep cooperation (functional interdependence)
  • IAUnimit. = the imperative for purchasing unimitable competitive advantages (for the economic actor)
  • VCust = the necessity of customer value in purchasing future profits and competitive advantages
  • VInfo = the necessity of informational value in purchasing future profits and competitive advantages
  • cc = the specific competitive conditions in which the economic actor operates

Synergistic networks and systems

The synergistic network represents an integrated part of the economic system which, through the coordination and control functions (of the undertaken economic actions), generates synergies. The networks which promote synergistic actions can be divided into horizontal synergistic networks and vertical synergistic networks.

Synergy effects

The synergy effects are difficult (even impossible) to imitate by competitors and difficult to reproduce by their authors because these effects depend on the combination of factors with time-varying characteristics. The synergy effects are often called "synergistic benefits", representing the direct and implied result of the developed/adopted synergistic actions.

Computers

Synergy can also be defined as the combination of human strengths and computer strengths, such as advanced chess. Computers can process data much more quickly than humans, but lack the ability to respond meaningfully to arbitrary stimuli.

Synergy in literature

Etymologically, the "synergy" term was first used around 1600, deriving from the Greek word "synergos", which means "to work together" or "to cooperate". If during this period the synergy concept was mainly used in the theological field (describing "the cooperation of human effort with divine will"), in the 19th and 20th centuries, "synergy" was promoted in physics and biochemistry, being implemented in the study of the open economic systems only in the 1960 and 1970s.

In 1938, J. R. R. Tolkien wrote an essay titled On Fairy-Stories, delivered as an Andrew Lang Lecture and reprinted in his book The Tolkien Reader, published in 1966. In it, he made two references to synergy, although he did not use that term. He wrote:

Faerie cannot be caught in a net of words; for it is one of its qualities to be indescribable, though not imperceptible. It has many ingredients, but analysis will not necessarily discover the secret of the whole.

And more succinctly, in a footnote, about the "part of producing the web of an intricate story", he wrote:

It is indeed easier to unravel a single thread — an incident, a name, a motive — than to trace the history of any picture defined by many threads. For with the picture in the tapestry a new element has come in: the picture is greater than, and not explained by, the sum of the component threads.

The book "Synergy"

Synergy, a book: DION, Eric (2017), Synergy: A Theoretical Model of Canada's Comprehensive Approach, iUniverse, 308 pp.

Synergy in the media

Informational synergies, which can also be applied in media, involve a compression of the time needed to transmit, access, and use information, with the flows, circuits, and means of handling information based on a complementary, integrated, transparent, and coordinated use of knowledge.

In media economics, synergy is the promotion and sale of a product (and all its versions) throughout the various subsidiaries of a media conglomerate, e.g. films, soundtracks or video games. Walt Disney pioneered synergistic marketing techniques in the 1930s by granting dozens of firms the right to use his Mickey Mouse character in products and ads, and continued to market Disney media through licensing arrangements. These products can help advertise the film itself and thus help to increase the film's sales. For example, the Spider-Man films had toys of webshooters and figures of the characters made, as well as posters and games. The NBC sitcom 30 Rock often shows the power of synergy, while also poking fun at the use of the term in the corporate world. There are also different forms of synergy in popular card games like Magic: The Gathering, Yu-Gi-Oh!, Cardfight!! Vanguard, and Future Card Buddyfight.
