
Sunday, November 10, 2024

Synergy

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Synergy

Synergy is an interaction or cooperation giving rise to a whole that is greater than the simple sum of its parts (i.e., a non-linear addition of force, energy, or effect). The term synergy comes from the Attic Greek word συνεργία synergia from synergos, συνεργός, meaning "working together". Synergy is similar in concept to emergence.

History

The words synergy and synergetic have been used in the field of physiology since at least the middle of the 19th century:

SYN'ERGY, Synergi'a, Synenergi'a, (F.) Synergie; from συν, 'with', and εργον, 'work'. A correlation or concourse of action between different organs in health; and, according to some, in disease.

—Dunglison, Robley, Medical Lexicon, Blanchard and Lea, 1853

In 1896, Henri Mazel applied the term "synergy" to social psychology by writing La synergie sociale, in which he argued that Darwinian theory failed to account for "social synergy" or "social love", a collective evolutionary drive. The highest civilizations were the work not only of the elite but of the masses too; those masses must be led, however, because the crowd, a feminine and unconscious force, cannot distinguish between good and evil.

In 1909, Lester Frank Ward defined synergy as the universal constructive principle of nature:

I have characterized the social struggle as centrifugal and social solidarity as centripetal. Either alone is productive of evil consequences. Struggle is essentially destructive of the social order, while communism removes individual initiative. The one leads to disorder, the other to degeneracy. What is not seen—the truth that has no expounders—is that the wholesome, constructive movement consists in the properly ordered combination and interaction of both these principles. This is social synergy, which is a form of cosmic synergy, the universal constructive principle of nature.

—Ward, Lester F., Glimpses of the Cosmos, volume VI (1897–1912), G. P. Putnam's Sons, 1918, p. 358

In Christian theology, synergism is the idea that salvation involves some form of cooperation between divine grace and human freedom.

A modern view of synergy in the natural sciences derives from the relationship between energy and information. Synergy is manifested when a system makes the transition between the different kinds of information (i.e., order, complexity) embedded in the interacting systems.

Abraham Maslow and John Honigmann drew attention to an important development in cultural anthropology that arose in lectures given by Ruth Benedict in 1941; the original manuscripts have been lost, but the ideas are preserved in "Synergy: Some Notes of Ruth Benedict" (1969).

Descriptions and usages

In the natural world, synergistic phenomena are ubiquitous, ranging from physics (for example, the different combinations of quarks that produce protons and neutrons) to chemistry (a popular example is water, a compound of hydrogen and oxygen), to the cooperative interactions among the genes in genomes, the division of labor in bacterial colonies, the synergies of scale in multicellular organisms, as well as the many different kinds of synergies produced by socially organized groups, from honeybee colonies to wolf packs and human societies; compare stigmergy, a mechanism of indirect coordination between agents or actions that results in the self-assembly of complex systems. Even the tools and technologies that are widespread in the natural world represent important sources of synergistic effects; the tools that enabled early hominins to become systematic big-game hunters are a primordial human example.

In the context of organizational behavior, following the view that a cohesive group is more than the sum of its parts, synergy is the ability of a group to outperform even its best individual member. These conclusions are derived from studies conducted by Jay Hall on a number of laboratory-based group ranking and prediction tasks. He found that effective groups actively looked for the points on which they disagreed and, in consequence, encouraged conflict among the participants in the early stages of the discussion. In contrast, ineffective groups felt a need to establish a common view quickly, used simple decision-making methods such as averaging, and focused on completing the task rather than on finding solutions they could agree on.

In a technical context, synergy means a construct or collection of different elements working together to produce results not obtainable by any of the elements alone. The elements, or parts, can include people, hardware, software, facilities, policies, and documents: all things required to produce system-level results. The value added by the system as a whole, beyond that contributed independently by the parts, is created primarily by the relationship among the parts, that is, how they are interconnected. In essence, a system constitutes a set of interrelated components working together with a common objective: fulfilling some designated need.

If used in a business application, synergy means that teamwork will produce an overall better result than if each person within the group were working toward the same goal individually. However, the concept of group cohesion needs to be considered. Group cohesion is that property that is inferred from the number and strength of mutual positive attitudes among members of the group. As the group becomes more cohesive, its functioning is affected in a number of ways. First, the interactions and communication between members increase. Common goals, interests and small size all contribute to this. In addition, group member satisfaction increases as the group provides friendship and support against outside threats.

There are negative aspects of group cohesion that have an effect on group decision-making and hence on group effectiveness. Two issues arise. The risky shift phenomenon is the tendency of a group to make decisions that are riskier than those its members would have recommended individually. Group polarization is when individuals in a group begin by taking a moderate stance on an issue regarding a common value and, after having discussed it, end up taking a more extreme stance.

A second potential negative consequence of group cohesion is groupthink. Groupthink is a mode of thinking that people engage in when they are deeply involved in a cohesive group, when the members' striving for unanimity overrides their motivation to appraise realistically the alternative courses of action. Studying the events of several American policy "disasters", such as the failure to anticipate the Japanese attack on Pearl Harbor (1941) and the Bay of Pigs invasion fiasco (1961), Irving Janis argued that they were due to the cohesive nature of the committees that made the relevant decisions.

Dr. Chris Elliot has noted that decisions made by committees can lead to failure even in a simple system. His case study looked at IEEE-488, an international standard set by the leading US standards body, which codified a proprietary Hewlett-Packard communications standard (HP-IB). Small automation systems built on IEEE-488 failed because the external devices used for communication were made by two different companies, and the incompatibility between those devices led to a financial loss for the company. He argues that systems will be safe only if they are designed, not if they emerge by chance.

The idea of a systemic approach is endorsed by the United Kingdom Health and Safety Executive: the successful performance of health and safety management depends upon analyzing the causes of incidents and accidents and learning the correct lessons from them. The idea is that all events (not just those causing injuries) represent failures in control and present an opportunity for learning and improvement. The HSE's Successful Health and Safety Management (1997) describes the principles and management practices that provide the basis of effective health and safety management; it sets out the issues that need to be addressed and can be used for developing improvement programs, self-audit, or self-assessment. Its message is that organizations must manage health and safety with the same degree of expertise and to the same standards as other core business activities if they are to control risks effectively and prevent harm to people.

The term synergy was refined by R. Buckminster Fuller, who analyzed some of its implications more fully and coined the term synergetics.

  • A dynamic state in which combined action is favored over the difference of individual component actions.
  • Behavior of whole systems unpredicted by the behavior of their parts taken separately, known as emergent behavior.
  • The cooperative action of two or more stimuli (or drugs), resulting in a different or greater response than that of the individual stimuli.

Information theory

Mathematical formalizations of synergy have been proposed using information theory to rigorously define the relationships between "wholes" and "parts". In this context, synergy is said to occur when there is information present in the joint state of multiple variables that cannot be extracted from the parts considered individually. For example, consider the logical XOR gate: if X1 and X2 are independent fair coin flips and X3 = XOR(X1, X2), then the mutual information between either individual input and the target is I(X1; X3) = I(X2; X3) = 0 bit, while the joint mutual information is I(X1, X2; X3) = 1 bit. There is information about the target that can only be extracted from the joint state of the inputs considered together, and not from any subset of them.
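
As a concrete check, the following minimal Python sketch (the helper name mutual_information is ours, not from any particular library) computes these quantities directly from the joint distribution of the XOR gate:

    import itertools
    import math

    # Joint distribution of (X1, X2, X3): X1 and X2 are fair coin flips,
    # X3 = XOR(X1, X2), so each of the four input patterns has probability 1/4.
    p = {(x1, x2, x1 ^ x2): 0.25 for x1, x2 in itertools.product((0, 1), repeat=2)}

    def mutual_information(p, a_idx, b_idx):
        # I(A; B) in bits, where A and B are the variable groups at positions a_idx, b_idx.
        pa, pb, pab = {}, {}, {}
        for s, prob in p.items():
            a = tuple(s[i] for i in a_idx)
            b = tuple(s[i] for i in b_idx)
            pa[a] = pa.get(a, 0.0) + prob
            pb[b] = pb.get(b, 0.0) + prob
            pab[(a, b)] = pab.get((a, b), 0.0) + prob
        return sum(prob * math.log2(prob / (pa[a] * pb[b]))
                   for (a, b), prob in pab.items())

    print(mutual_information(p, (0,), (2,)))    # I(X1; X3)     = 0.0 bit
    print(mutual_information(p, (1,), (2,)))    # I(X2; X3)     = 0.0 bit
    print(mutual_information(p, (0, 1), (2,)))  # I(X1, X2; X3) = 1.0 bit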

There is, thus far, no universal agreement on how synergy can best be quantified, with different approaches that decompose information into redundant, unique, and synergistic components appearing in the literature. Despite the lack of universal agreement, information-theoretic approaches to statistical synergy have been applied to diverse fields, including climatology, neuroscience, sociology, and machine learning. Synergy has also been proposed as a possible foundation on which to build a mathematically robust definition of emergence in complex systems and may be relevant to formal theories of consciousness.

Biological sciences

Synergy of various kinds has been advanced by Peter Corning as a causal agency that can explain the progressive evolution of complexity in living systems over the course of time. According to the Synergism Hypothesis, synergistic effects have been the drivers of cooperative relationships of all kinds and at all levels in living systems. The thesis, in a nutshell, is that synergistic effects have often provided functional advantages (economic benefits) in relation to survival and reproduction that have been favored by natural selection. The cooperating parts, elements, or individuals become, in effect, functional "units" of selection in evolutionary change. Similarly, environmental systems may react in a non-linear way to perturbations, such as climate change, so that the outcome may be greater than the sum of the individual component alterations. Synergistic responses are a complicating factor in environmental modeling.

Pest synergy

Pest synergy would occur in a biological host organism population, where, for example, the introduction of parasite A may cause 10% fatalities, and parasite B may also cause 10% loss. When both parasites are present, the losses would normally be expected to total less than 20%, yet, in some cases, losses are significantly greater. In such cases, it is said that the parasites in combination have a synergistic effect.
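
For the 10%/10% example, one common no-interaction baseline is independent action (Bliss independence), under which the expected combined loss is 1 − 0.9 × 0.9 = 19%; observed losses well above that baseline would indicate synergy. A minimal sketch, assuming Bliss independence as the baseline:

    def bliss_expected_loss(p_a, p_b):
        # Expected combined mortality if parasites A and B act independently.
        return 1.0 - (1.0 - p_a) * (1.0 - p_b)

    print(bliss_expected_loss(0.10, 0.10))  # 0.19, i.e. 19%; losses well above this suggest synergy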

Drug synergy

Mechanisms that may be involved in the development of synergistic effects include:

  • Effect on the same cellular system (e.g. two different antibiotics like a penicillin and an aminoglycoside; penicillins damage the cell wall of gram-positive bacteria and improve the penetration of aminoglycosides).
  • Bioavailability (e.g. ayahuasca (or pharmahuasca) consists of DMT combined with MAOIs that interfere with the action of the MAO enzyme and stop the breakdown of chemical compounds such as DMT).
  • Reduced risk for substance abuse (e.g. lisdexamfetamine, a combination of the amino acid L-lysine attached to dextroamphetamine, may have a lower liability for abuse as a recreational drug)
  • Increased potency (e.g. as with other NSAIDs, combinations of aspirin and caffeine provide slightly greater pain relief than aspirin alone).
  • Prevention or delay of degradation in the body (e.g. the antibiotic Ciprofloxacin inhibits the metabolism of Theophylline).[30]: 931 
  • Slowdown of excretion (e.g. Probenecid delays the renal excretion of Penicillin and thus prolongs its effect).
  • Anticounteractive action: for example, the effect of oxaliplatin and irinotecan. Oxaliplatin intercalates DNA, thereby preventing the cell from replicating DNA. Irinotecan inhibits topoisomerase 1, consequently the cytostatic effect is increased.
  • Effect on the same receptor but different sites (e.g. the coadministration of benzodiazepines and barbiturates, both act by enhancing the action of GABA on GABAA receptors, but benzodiazepines increase the frequency of channel opening, whilst barbiturates increase the channel closing time, making these two drugs dramatically enhance GABAergic neurotransmission).
  • In addition to the chemical nature of the drug itself, the topology of the chemical reaction network that connects the two targets determines the type of drug–drug interaction.

More mechanisms are described in an exhaustive 2009 review.

Toxicological synergy

Toxicological synergy is of concern to the public and regulatory agencies because chemicals individually considered safe might pose unacceptable health or ecological risk in combination. Articles in scientific and lay journals include many definitions of chemical or toxicological synergy, often vague or in conflict with each other. Because toxic interactions are defined relative to the expectation under "no interaction", a determination of synergy (or antagonism) depends on what is meant by "no interaction". The United States Environmental Protection Agency has one of the more detailed and precise definitions of toxic interaction, designed to facilitate risk assessment. In their guidance documents, the no-interaction default assumption is dose addition, so synergy means a mixture response that exceeds that predicted from dose addition. The EPA emphasizes that synergy does not always make a mixture dangerous, nor does antagonism always make the mixture safe; each depends on the predicted risk under dose addition.
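
As a rough illustration of the dose-addition baseline (a simplified sketch, not the EPA's exact procedure), one common formulation sums "toxic units", each component's dose scaled by an equipotent dose such as its EC50:

    def toxic_unit_sum(doses, equipotent_doses):
        # Dose-addition baseline: the mixture is predicted to behave like a single
        # chemical administered at this many "toxic units" (1.0 = one full EC50).
        return sum(d / e for d, e in zip(doses, equipotent_doses))

    # Illustrative mixture: each chemical present at half its own EC50.
    print(toxic_unit_sum([5.0, 12.5], [10.0, 25.0]))  # 1.0 toxic unit
    # A mixture response stronger than predicted at 1.0 toxic unit would be
    # scored as synergy; a weaker response as antagonism.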

For example, a consequence of pesticide use is the risk of health effects. During the registration of pesticides in the United States, exhaustive tests are performed to discern health effects on humans at various exposure levels. A regulatory upper limit on the pesticide's presence in foods is then set. As long as residues in the food stay below this regulatory level, health effects are deemed highly unlikely and the food is considered safe to consume.

However, in normal agricultural practice, it is rare to use only a single pesticide. During the production of a crop, several different materials may be used. For each of them, a regulatory level has been determined at which it would be considered individually safe. In many cases, a commercial pesticide is itself a combination of several chemical agents, and thus the safe levels actually represent levels of the mixture. In contrast, a combination created by the end user, such as a farmer, has rarely been tested in that combination. The potential for synergy is then unknown or estimated from data on similar combinations. This lack of information also applies to many of the chemical combinations to which humans are exposed, including residues in food, indoor air contaminants, and occupational exposures to chemicals. Some groups think that rising rates of cancer, asthma, and other health problems may be caused by these combination exposures; others have alternative explanations. This question will likely be answered only after years of exposure by the population in general and research on chemical toxicity, usually performed on animals. Examples of pesticide synergists include piperonyl butoxide and MGK 264.

Human synergy

Synergy exists in individual and social interactions among humans, with some arguing that social cooperation requires synergy to continue. One way of quantifying synergy in human social groups is via energy use, where larger groups of humans (i.e., cities) use energy more efficiently than smaller groups of humans.

Human synergy can also occur on a smaller scale, as when individuals huddle together for warmth or in workplaces where labor specialization increases efficiency.

When synergy occurs in the workplace, the individuals involved get to work in a positive and supportive working environment, and the company reaps the benefits. Rob Goffee and Gareth Jones, the authors of Creating the Best Workplace on Earth, state that "highly engaged employees are, on average, 50% more likely to exceed expectations than the least-engaged workers. And companies with highly engaged people outperform firms with the most disengaged folks: by 54% in employee retention, by 89% in customer satisfaction, and by fourfold in revenue growth." Also, employees who are able to be open about their views on the company, and have confidence that they will be heard, are likely to be more engaged and to help their fellow team members succeed.

Human interaction with technology can also increase synergy. Organismic computing is an approach to improving group efficacy by increasing synergy in human groups via technological means.

Theological synergism

In Christian theology, synergism is the belief that salvation involves a cooperation between divine grace and human freedom. Eastern Orthodox theology, in particular, uses the term "synergy" to describe this relationship, drawing on biblical language: "in Paul's words, 'We are fellow-workers (synergoi) with God' (1 Corinthians iii, 9)".

Corporate synergy

Corporate synergy occurs when corporations interact congruently. A corporate synergy refers to a financial benefit that a corporation expects to realize when it merges with or acquires another corporation. This type of synergy is a nearly ubiquitous feature of a corporate acquisition and is a negotiating point between the buyer and seller that impacts the final price both parties agree to. There are distinct types of corporate synergies, as follows.

Marketing

A marketing synergy refers to the use of information campaigns, studies, and scientific discovery or experimentation for research and development. It promotes the sale of products for varied use or off-market sales, as well as the development of marketing tools and, in several cases, the exaggeration of effects. It is also often a meaningless buzzword used by corporate leaders.

Microsoft Word offers "cooperation" as a refinement suggestion to the word "synergy."

Revenue

A revenue synergy refers to the opportunity of a combined corporate entity to generate more revenue than its two predecessor stand-alone companies would be able to generate. For example, if company A sells product X through its sales force, company B sells product Y, and company A decides to buy company B, then the new company could use each salesperson to sell products X and Y, thereby increasing the revenue that each salesperson generates for the company.

In media revenue, synergy is the promotion and sale of a product throughout the various subsidiaries of a media conglomerate, e.g. films, soundtracks, or video games.

Financial

Financial synergy gained by the combined firm is a result of a number of benefits which flow to the entity as a consequence of an acquisition or merger. These benefits may be:

Cash slack

This is when a firm with a number of cash-intensive projects acquires a firm which is cash-rich, enabling the new combined firm to enjoy the profits from investing the cash of one firm in the projects of the other.

Debt capacity

If two firms individually have little or no capacity to carry debt, it is possible for them to join and gain the capacity to carry debt through decreased gearing (leverage). This creates value for the firm, as debt is thought to be a cheaper source of finance.

Tax benefits

It is possible for one firm to have unused tax benefits which might be offset against the profits of another after combination, resulting in less tax being paid. However, this greatly depends on the tax law of the country.

Management

Synergy in management and in relation to teamwork refers to the combined effort of individuals as participants of the team: the condition that exists when the organization's parts interact to produce a joint effect that is greater than the sum of the parts acting alone. Synergies can be positive or negative. Positive synergy has effects such as improved efficiency in operations, greater exploitation of opportunities, and improved utilization of resources. Negative synergy, on the other hand, has effects such as reduced efficiency of operations, a decrease in quality, underutilization of resources, and disequilibrium with the external environment.

Cost

A cost synergy refers to the opportunity of a combined corporate entity to reduce or eliminate expenses associated with running a business. Cost synergies are realized by eliminating positions that are viewed as duplicate within the merged entity. Examples include the headquarters office of one of the predecessor companies, certain executives, the human resources department, or other employees of the predecessor companies. This is related to the economic concept of economies of scale.

Synergistic action in economy

The synergistic action of economic players lies deep within economic phenomena. Synergistic action gives new dimensions to competitiveness, strategy, and network identity, becoming an unconventional "weapon" belonging to those who exploit the economic system's potential in depth.

Synergistic determinants

The synergistic gravity equation (SYNGEq), as its elaborate name suggests, represents a synthesis of the endogenous and exogenous factors which lead private and public economic decision-makers to undertake synergistic exploitation of the economic network in which they operate. That is to say, SYNGEq constitutes a big picture of the factors and motivations which lead entrepreneurs to shape an active synergistic network. SYNGEq includes both factors whose character changes over time (such as competitive conditions) and classic factors, such as the imperative of access to resources through collaboration and quick responses. The synergistic gravity equation is represented by the formula:

ΣSYN.Act = ΣR⁻ × I(CRed + COOP⁺ + AUnimit.) × V(Cust. + Info.) × cc

where:

  • ΣSYN.Act = the sum of the synergistic actions adopted (by the economic actor)
  • ΣR⁻ = the amount of unpurchased but necessary resources
  • ICRed = the imperative for cost reductions
  • ICOOP⁺ = the imperative for deep cooperation (functional interdependence)
  • IAUnimit. = the imperative for acquiring inimitable competitive advantages (for the economic actor)
  • VCust. = the necessity of customer value in securing future profits and competitive advantages
  • VInfo. = the necessity of informational value in securing future profits and competitive advantages
  • cc = the specific competitive conditions in which the economic actor operates

Synergistic networks and systems

The synergistic network represents an integrated part of the economic system which, through the coordination and control functions (of the undertaken economic actions), achieves synergies. The networks which promote synergistic actions can be divided into horizontal synergistic networks and vertical synergistic networks.

Synergy effects

The synergy effects are difficult (even impossible) for competitors to imitate and difficult for their authors to reproduce, because these effects depend on the combination of factors with time-varying characteristics. The synergy effects are often called "synergistic benefits", representing the direct and implied result of the developed/adopted synergistic actions.

Computers

Synergy can also be defined as the combination of human strengths and computer strengths, such as advanced chess. Computers can process data much more quickly than humans, but lack the ability to respond meaningfully to arbitrary stimuli.

Synergy in literature

Etymologically, the "synergy" term was first used around 1600, deriving from the Greek word "synergos", which means "to work together" or "to cooperate". If during this period the synergy concept was mainly used in the theological field (describing "the cooperation of human effort with divine will"), in the 19th and 20th centuries, "synergy" was promoted in physics and biochemistry, being implemented in the study of the open economic systems only in the 1960 and 1970s.

In 1938, J. R. R. Tolkien wrote an essay titled On Fairy-Stories, delivered as an Andrew Lang Lecture and reprinted in his book The Tolkien Reader, published in 1966. In it, he made two references to synergy, although he did not use that term. He wrote:

Faerie cannot be caught in a net of words; for it is one of its qualities to be indescribable, though not imperceptible. It has many ingredients, but analysis will not necessarily discover the secret of the whole.

And more succinctly, in a footnote, about the "part of producing the web of an intricate story", he wrote:

It is indeed easier to unravel a single thread — an incident, a name, a motive — than to trace the history of any picture defined by many threads. For with the picture in the tapestry a new element has come in: the picture is greater than, and not explained by, the sum of the component threads.

The book "Synergy"

DION, Eric (2017). Synergy: A Theoretical Model of Canada's Comprehensive Approach. iUniverse, 308 pp.

Synergy in the media

The informational synergies, which can also be applied in media, involve compressing the time needed to transmit, access, and use information, with the flows, circuits, and means of handling information based on a complementary, integrated, transparent, and coordinated use of knowledge.

In media economics, synergy is the promotion and sale of a product (and all its versions) throughout the various subsidiaries of a media conglomerate, e.g. films, soundtracks or video games. Walt Disney pioneered synergistic marketing techniques in the 1930s by granting dozens of firms the right to use his Mickey Mouse character in products and ads, and continued to market Disney media through licensing arrangements. These products can help advertise the film itself and thus help to increase the film's sales. For example, the Spider-Man films had toys of webshooters and figures of the characters made, as well as posters and games. The NBC sitcom 30 Rock often shows the power of synergy, while also poking fun at the use of the term in the corporate world. There are also different forms of synergy in popular card games like Magic: The Gathering, Yu-Gi-Oh!, Cardfight!! Vanguard, and Future Card Buddyfight.

Nucleon

From Wikipedia, the free encyclopedia
An atomic nucleus is shown here as a compact bundle of the two types of nucleons, protons (red) and neutrons (blue). In this picture, the protons and neutrons are shown as distinct, which is the conventional view in chemistry, for example. But in an actual nucleus, as understood by modern nuclear physics, the nucleons are partially delocalized and organize themselves according to the laws of quantum chromodynamics.

In physics and chemistry, a nucleon is either a proton or a neutron, considered in its role as a component of an atomic nucleus. The number of nucleons in a nucleus defines the atom's mass number (nucleon number).

Until the 1960s, nucleons were thought to be elementary particles, not made up of smaller parts. Now they are understood as composite particles, made of three quarks bound together by the strong interaction. The interaction between two or more nucleons is called internucleon interaction or nuclear force, which is also ultimately caused by the strong interaction. (Before the discovery of quarks, the term "strong interaction" referred to just internucleon interactions.)

Nucleons sit at the boundary where particle physics and nuclear physics overlap. Particle physics, particularly quantum chromodynamics, provides the fundamental equations that describe the properties of quarks and of the strong interaction. These equations describe quantitatively how quarks can bind together into protons and neutrons (and all the other hadrons). However, when multiple nucleons are assembled into an atomic nucleus (nuclide), these fundamental equations become too difficult to solve directly (see lattice QCD). Instead, nuclides are studied within nuclear physics, which studies nucleons and their interactions by approximations and models, such as the nuclear shell model. These models can successfully describe nuclide properties, as for example, whether or not a particular nuclide undergoes radioactive decay.

The proton and neutron fall into several categories at once: they are fermions, hadrons, and baryons. The proton carries a positive net charge, and the neutron carries a zero net charge; the proton's mass is only about 0.13% less than the neutron's. Thus, they can be viewed as two states of the same nucleon, and together they form an isospin doublet (I = 1/2). In isospin space, neutrons can be transformed into protons and vice versa by SU(2) symmetries. These nucleons are acted upon equally by the strong interaction, which is invariant under rotation in isospin space. According to Noether's theorem, isospin is conserved with respect to the strong interaction.
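
In conventional notation (a standard sign assignment, assumed here rather than taken from the article), the doublet and the SU(2) raising and lowering operators that connect its two members can be written as:

\[
|p\rangle = \left|\,I=\tfrac{1}{2},\; I_3=+\tfrac{1}{2}\right\rangle, \qquad
|n\rangle = \left|\,I=\tfrac{1}{2},\; I_3=-\tfrac{1}{2}\right\rangle,
\]
\[
\tau_+\,|n\rangle = |p\rangle, \qquad \tau_-\,|p\rangle = |n\rangle.
\]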

Overview

Properties

Quark composition of a nucleon

  • Proton (p): u u d
  • Neutron (n): u d d
  • Antiproton (p̄): ū ū d̄
  • Antineutron (n̄): ū d̄ d̄

A proton (p) is composed of two up quarks (u) and one down quark (d): uud. A neutron (n) has one up quark (u) and two down quarks (d): udd. An antiproton (p̄) has two up antiquarks (ū) and one down antiquark (d̄): ūūd̄. An antineutron (n̄) has one up antiquark (ū) and two down antiquarks (d̄): ūd̄d̄. The color charge (color assignment) of individual quarks is arbitrary, but all three colors (red, green, blue) must be present.

Protons and neutrons are best known in their role as nucleons, i.e., as the components of atomic nuclei, but they also exist as free particles. Free neutrons are unstable, with a half-life of around ten minutes, but they have important applications (see neutron radiation and neutron scattering). A proton not bound to other nucleons is the nucleus of a hydrogen atom when bound with an electron or, if not bound to anything, is an ion or a cosmic ray.

Both the proton and the neutron are composite particles, meaning that each is composed of smaller parts: three quarks. Although once thought to be elementary, neither is an elementary particle. A proton is composed of two up quarks and one down quark, while the neutron has one up quark and two down quarks. Quarks are held together by the strong force, or equivalently, by gluons, which mediate the strong force at the quark level.

An up quark has electric charge +2/3 e, and a down quark has charge −1/3 e, so the summed electric charges of the proton (uud) and neutron (udd) are (2/3 + 2/3 − 1/3)e = +e and (2/3 − 1/3 − 1/3)e = 0, respectively. Thus, the neutron has a charge of 0 (zero) and is therefore electrically neutral; indeed, the term "neutron" comes from the fact that a neutron is electrically neutral.
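
This bookkeeping is easy to verify directly; a minimal Python sketch using exact fractions (the helper name is ours):

    from fractions import Fraction

    # Electric charge of each quark flavour, in units of the elementary charge e.
    QUARK_CHARGE = {"u": Fraction(2, 3), "d": Fraction(-1, 3)}

    def hadron_charge(quarks):
        # Sum the quark charges of a hadron given as a string such as "uud".
        return sum(QUARK_CHARGE[q] for q in quarks)

    print(hadron_charge("uud"))  # proton:  1  (i.e. +e)
    print(hadron_charge("udd"))  # neutron: 0  (electrically neutral)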

The masses of the proton and neutron are similar: for the proton it is 1.6726×10⁻²⁷ kg (938.27 MeV/c²), while for the neutron it is 1.6749×10⁻²⁷ kg (939.57 MeV/c²); the neutron is roughly 0.13% heavier. The similarity in mass can be explained roughly by the slight difference in masses of up quarks and down quarks composing the nucleons. However, a detailed description remains an unsolved problem in particle physics.

Nucleons have spin 1/2, which means that they are fermions and, like electrons, are subject to the Pauli exclusion principle: no more than one nucleon, e.g. in an atomic nucleus, may occupy the same quantum state.

The isospin and spin quantum numbers of the nucleon have two states each, resulting in four combinations in total. An alpha particle is composed of four nucleons occupying all four combinations, namely, it has two protons (having opposite spin) and two neutrons (also having opposite spin), and its net nuclear spin is zero. In larger nuclei constituent nucleons, by Pauli exclusion, are compelled to have relative motion, which may also contribute to nuclear spin via the orbital quantum number. They spread out into nuclear shells analogous to electron shells known from chemistry.

Both the proton and neutron have magnetic moments, though the nucleon magnetic moments are anomalous and were unexpected when they were discovered in the 1930s. The proton's magnetic moment, symbol μp, is 2.79 μN, whereas, if the proton were an elementary Dirac particle, it should have a magnetic moment of 1.0 μN. Here the unit for the magnetic moments is the nuclear magneton, symbol μN, an atomic-scale unit of measure. The neutron's magnetic moment is μn = −1.91 μN, whereas, since the neutron lacks an electric charge, it should have no magnetic moment. The value of the neutron's magnetic moment is negative because the direction of the moment is opposite to the neutron's spin. The nucleon magnetic moments arise from the quark substructure of the nucleons. The proton magnetic moment is exploited for NMR / MRI scanning.

Stability

A neutron in the free state is an unstable particle, with a half-life around ten minutes. It undergoes β⁻ decay (a type of radioactive decay) by turning into a proton while emitting an electron and an electron antineutrino. This reaction can occur because the mass of the neutron is slightly greater than that of the proton. (See the Neutron article for more discussion of neutron decay.) A proton by itself is thought to be stable, or at least its lifetime is too long to measure. This is an important discussion in particle physics (see Proton decay).

Inside a nucleus, on the other hand, combined protons and neutrons (nucleons) can be stable or unstable depending on the nuclide, or nuclear species. Inside some nuclides, a neutron can turn into a proton (producing other particles) as described above; the reverse can happen inside other nuclides, where a proton turns into a neutron (producing other particles) through β⁺ decay or electron capture. And inside still other nuclides, both protons and neutrons are stable and do not change form.

Antinucleons

Both nucleons have corresponding antiparticles: the antiproton and the antineutron, which have the same mass and opposite charge as the proton and neutron respectively, and they interact in the same way. (This is generally believed to be exactly true, due to CPT symmetry. If there is a difference, it is too small to measure in all experiments to date.) In particular, antinucleons can bind into an "antinucleus". So far, scientists have created antideuterium and antihelium-3 nuclei.

Tables of detailed properties

Nucleons

The masses of their antiparticles are assumed to be identical, and no experiments have refuted this to date. Current experiments show that any relative difference between the masses of the proton and antiproton must be less than 2×10⁻⁹, and that the difference between the neutron and antineutron masses is on the order of (9±6)×10⁻⁵ MeV/c².

Proton–antiproton CPT invariance tests (PDG results)

  • Mass: < 2×10⁻⁹
  • Charge-to-mass ratio: 0.99999999991(9)
  • Charge-to-mass-to-mass ratio: (−9±9)×10⁻¹¹
  • Charge: < 2×10⁻⁹
  • Electron charge: < 1×10⁻²¹
  • Magnetic moment: (−0.1±2.1)×10⁻³

Nucleon resonances

Nucleon resonances are excited states of nucleon particles, often corresponding to one of the quarks having a flipped spin state, or with different orbital angular momentum when the particle decays. Only resonances with a 3- or 4-star rating at the Particle Data Group (PDG) are included in this table. Due to their extraordinarily short lifetimes, many properties of these particles are still under investigation.

The symbol format is given as N(m) LIJ, where m is the particle's approximate mass, L is the orbital angular momentum (in the spectroscopic notation) of the nucleon–meson pair, produced when it decays, and I and J are the particle's isospin and total angular momentum respectively. Since nucleons are defined as having 1/2 isospin, the first number will always be 1, and the second number will always be odd. When discussing nucleon resonances, sometimes the N is omitted and the order is reversed, in the form LIJ (m); for example, a proton can be denoted as "N(939) S11" or "S11 (939)".
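
To make the notation concrete, here is a small Python sketch that parses either symbol order; it assumes, per the convention described above, that the two digits after the L letter encode 2I and 2J (which is why the first digit is always 1 for a nucleon):

    import re

    L_LETTERS = "SPDFGHIK"  # spectroscopic letters for L = 0, 1, 2, ...

    def parse_resonance(symbol):
        # Parse "N(939) S11" or "S11 (939)" into (mass, L, I, J),
        # reading the two digits after the L letter as 2I and 2J.
        mass = int(re.search(r"\((\d+)\)", symbol).group(1))
        spec = re.search(r"([SPDFGHIK])(\d)(\d)", symbol)
        L = L_LETTERS.index(spec.group(1))
        isospin = int(spec.group(2)) / 2
        J = int(spec.group(3)) / 2
        return mass, L, isospin, J

    print(parse_resonance("N(939) S11"))  # (939, 0, 0.5, 0.5)
    print(parse_resonance("S11 (939)"))   # (939, 0, 0.5, 0.5)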

The table below lists only the base resonance; each individual entry represents 4 baryons: 2 nucleon resonance particles and their 2 antiparticles. Each resonance exists in a form with a positive electric charge (Q), with a quark composition of uud like the proton, and a neutral form, with a quark composition of udd like the neutron, as well as the corresponding antiparticles with antiquark compositions of ūūd̄ and ūd̄d̄ respectively. Since they contain no strange, charm, bottom, or top quarks, these particles do not possess strangeness, etc.

Quark model classification

In the quark model with SU(2) flavour, the two nucleons are part of the ground-state doublet. The proton has quark content of uud, and the neutron, udd. In SU(3) flavour, they are part of the ground-state octet (8) of spin-1/2 baryons, known as the Eightfold way. The other members of this octet are the strange isotriplet of hyperons Σ⁺, Σ⁰, Σ⁻, the Λ, and the strange isodoublet Ξ⁰, Ξ⁻. One can extend this multiplet in SU(4) flavour (with the inclusion of the charm quark) to the ground-state 20-plet, or to SU(6) flavour (with the inclusion of the top and bottom quarks) to the ground-state 56-plet.

The article on isospin provides an explicit expression for the nucleon wave functions in terms of the quark flavour eigenstates.

Models

Although it is known that the nucleon is made from three quarks, as of 2006, it is not known how to solve the equations of motion for quantum chromodynamics. Thus, the study of the low-energy properties of the nucleon is performed by means of models. The only first-principles approach available is to attempt to solve the equations of QCD numerically, using lattice QCD. This requires complicated algorithms and very powerful supercomputers. However, several analytic models also exist:

Skyrmion models

The skyrmion models the nucleon as a topological soliton in a nonlinear SU(2) pion field. The topological stability of the skyrmion is interpreted as the conservation of baryon number, that is, the non-decay of the nucleon. The local topological winding number density is identified with the local baryon number density of the nucleon. With the pion isospin vector field oriented in the shape of a hedgehog, the model is readily solvable, and is thus sometimes called the hedgehog model. The hedgehog model is able to predict low-energy parameters, such as the nucleon mass, radius, and axial coupling constant, to approximately 30% of experimental values.

MIT bag model

The MIT bag model confines quarks and gluons interacting through quantum chromodynamics to a region of space determined by balancing the pressure exerted by the quarks and gluons against a hypothetical pressure exerted by the vacuum on all colored quantum fields. The simplest approximation to the model confines three non-interacting quarks to a spherical cavity, with the boundary condition that the quark vector current vanish on the boundary. The non-interacting treatment of the quarks is justified by appealing to the idea of asymptotic freedom, whereas the hard-boundary condition is justified by quark confinement.

Mathematically, the model vaguely resembles that of a radar cavity, with solutions to the Dirac equation standing in for solutions to the Maxwell equations, and the vanishing vector current boundary condition standing for the conducting metal walls of the radar cavity. If the radius of the bag is set to the radius of the nucleon, the bag model predicts a nucleon mass that is within 30% of the actual mass.

Although the basic bag model does not provide a pion-mediated interaction, it describes the nucleon–nucleon forces well through the six-quark-bag s-channel mechanism using the P-matrix.

Chiral bag model

The chiral bag model merges the MIT bag model and the skyrmion model. In this model, a hole is punched out of the middle of the skyrmion and replaced with a bag model. The boundary condition is provided by the requirement of continuity of the axial vector current across the bag boundary.

Very curiously, the missing part of the topological winding number (the baryon number) of the hole punched into the skyrmion is exactly made up by the non-zero vacuum expectation value (or spectral asymmetry) of the quark fields inside the bag. As of 2017, this remarkable trade-off between topology and the spectrum of an operator does not have any grounding or explanation in the mathematical theory of Hilbert spaces and their relationship to geometry.

Several other properties of the chiral bag are notable: It provides a better fit to the low-energy nucleon properties, to within 5–10%, and these are almost completely independent of the chiral-bag radius, as long as the radius is less than the nucleon radius. This independence of radius is referred to as the Cheshire Cat principle, after the fading of Lewis Carroll's Cheshire Cat to just its smile. It is expected that a first-principles solution of the equations of QCD will demonstrate a similar duality of quark–meson descriptions.

Big Rip

From Wikipedia, the free encyclopedia

In physical cosmology, the Big Rip is a hypothetical cosmological model concerning the ultimate fate of the universe, in which the matter of the universe, from stars and galaxies to atoms and subatomic particles, and even spacetime itself, is progressively torn apart by the expansion of the universe at a certain time in the future, until distances between particles become infinite.

According to the standard model of cosmology, the expansion of the universe is accelerating and, in the future era of cosmological constant dominance, the scale factor will increase exponentially. However, such expansion is similar for every moment of time (hence the exponential law: a local volume expands by the same factor over each equal time interval) and is characterized by an unchanging, small Hubble constant, effectively ignored by any bound material structures. By contrast, in the Big Rip scenario the Hubble constant increases to infinity in a finite time. According to recent studies, the universe is currently set for constant expansion and heat death, because w is measured to be very close to −1.

The possibility of a sudden rip singularity arises only for hypothetical matter (phantom energy) with implausible physical properties.

Overview

Whether the hypothesis holds depends on the type of dark energy present in our universe. The type that could prove it is a constantly increasing form of dark energy, known as phantom energy. If the dark energy in the universe increases without limit, it could overcome all forces that hold the universe together. The key value is the equation of state parameter w, the ratio between the dark energy pressure and its energy density. If −1 < w < 0, the expansion of the universe tends to accelerate, but the dark energy tends to dissipate over time, and the Big Rip does not happen. Phantom energy has w < −1, which means that its density increases as the universe expands.

A universe dominated by phantom energy is an accelerating universe, expanding at an ever-increasing rate. However, this implies that the size of the observable universe and the cosmological event horizon is continually shrinking – the distance at which objects can influence an observer becomes ever closer, and the distance over which interactions can propagate becomes ever shorter. When the size of the horizon becomes smaller than any particular structure, no interaction by any of the fundamental forces can occur between the most remote parts of the structure, and the structure is "ripped apart". The progression of time itself will stop. The model implies that after a finite time there will be a final singularity, called the "Big Rip", in which the observable universe eventually reaches zero size and all distances diverge to infinite values.

The authors of this hypothesis, led by Robert R. Caldwell of Dartmouth College, calculate the time from the present to the Big Rip to be

t_rip − t_0 ≈ 2 / (3 |1 + w| H0 √(1 − Ωm))

where w is defined above, H0 is Hubble's constant, and Ωm is the present value of the density of all the matter in the universe.

Observations of galaxy cluster speeds by the Chandra X-ray Observatory seem to suggest the value of w is between approximately −0.907 and −1.075, meaning the Big Rip cannot be definitively ruled out. Based on the above equation, if the observation determines that the value of w is less than −1, but greater than or equal to −1.075, the Big Rip would occur approximately 152 billion years into the future at the earliest.

Authors' example

In their paper, the authors consider a hypothetical example with w = −1.5, H0 = 70 km/s/Mpc, and Ωm = 0.3, in which case the Big Rip would happen approximately 22 billion years from the present. In this scenario, galaxies would first be separated from each other about 200 million years before the Big Rip. About 60 million years before the Big Rip, galaxies would begin to disintegrate as gravity becomes too weak to hold them together. Planetary systems like the Solar System would become gravitationally unbound about three months before the Big Rip, and planets would fly off into the rapidly expanding universe. In the last minutes, stars and planets would be torn apart, and the now-dispersed atoms would be destroyed about 10−19 seconds before the end (the atoms will first be ionized as electrons fly off, followed by the dissociation of the atomic nuclei). At the time the Big Rip occurs, even spacetime itself would be ripped apart and the scale factor would be infinity.
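
Plugging the authors' numbers into the formula above reproduces their figure; a minimal Python sketch (constants rounded, results sensitive to the assumed H0 and Ωm):

    import math

    def big_rip_time_gyr(w, H0_km_s_Mpc, omega_m):
        # t_rip - t_0 ~ 2 / (3 |1 + w| H0 sqrt(1 - Omega_m)), returned in gigayears.
        KM_PER_MPC = 3.0857e19
        SEC_PER_GYR = 3.156e16
        H0 = H0_km_s_Mpc / KM_PER_MPC  # Hubble constant in 1/s
        t_seconds = 2.0 / (3.0 * abs(1.0 + w) * H0 * math.sqrt(1.0 - omega_m))
        return t_seconds / SEC_PER_GYR

    print(big_rip_time_gyr(-1.5, 70.0, 0.3))    # ~22 Gyr, the authors' example
    print(big_rip_time_gyr(-1.075, 70.0, 0.3))  # ~150 Gyr, near the quoted observational bound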

Observed universe

Evidence indicates w to be very close to −1 in our universe, which makes the |1 + w| factor in the denominator of the equation very small. The closer w is to −1, the closer the denominator is to zero and the further the Big Rip lies in the future. If w were exactly equal to −1, the Big Rip could not happen, regardless of the values of H0 or Ωm.

According to the latest cosmological data available, the uncertainties are still too large to discriminate among the three cases w < −1, w = −1, and w > −1.

Moreover, it is nearly impossible to measure w to be exactly −1 due to statistical fluctuations: the measured value of w can be arbitrarily close to −1 but never exactly −1. Hence the earliest possible date of the Big Rip can be pushed further back with more accurate measurements, but the Big Rip itself is very difficult to rule out completely.

Lie group

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Lie_group

In mathematics, a Lie gro...