
Sunday, July 8, 2018

Population dynamics

From Wikipedia, the free encyclopedia
 
Map of population trends of native and invasive species of jellyfish[1] (legend: increase (high certainty); increase (low certainty); stable/variable; decrease; no data)

Population dynamics is the branch of life sciences that studies the size and age composition of populations as dynamical systems, and the biological and environmental processes driving them (such as birth and death rates, and immigration and emigration). Example scenarios are ageing populations, population growth, or population decline.

History

Population dynamics has traditionally been the dominant branch of mathematical biology, which has a history of more than 210 years, although more recently the scope of mathematical biology has greatly expanded. The first principle of population dynamics is widely regarded as the exponential law of Malthus, as modeled by the Malthusian growth model. The early period was dominated by demographic studies such as the work of Benjamin Gompertz and Pierre François Verhulst in the early 19th century, who refined and adjusted the Malthusian demographic model.

A more general model formulation was proposed by F.J. Richards in 1959, further expanded by Simon Hopkins, in which the models of Gompertz, Verhulst and also Ludwig von Bertalanffy are covered as special cases of the general formulation. The Lotka–Volterra predator-prey equations are another famous example, as well as the alternative Arditi–Ginzburg equations. The computer game SimCity and the MMORPG Ultima Online, among others, tried to simulate some of these population dynamics.

In the past 30 years, population dynamics has been complemented by evolutionary game theory, developed first by John Maynard Smith. Under these dynamics, evolutionary biology concepts may take a deterministic mathematical form. Population dynamics overlap with another active area of research in mathematical biology: mathematical epidemiology, the study of infectious disease affecting populations. Various models of viral spread have been proposed and analyzed, and provide important results that may be applied to health policy decisions.

Intrinsic rate of increase

The rate at which a population increases in size if there are no density-dependent forces regulating the population is known as the intrinsic rate of increase. It is
\frac{dN}{dt}\frac{1}{N} = r
where the derivative dN/dt is the rate of increase of the population, N is the population size, and r is the intrinsic rate of increase. Thus r is the maximum theoretical rate of increase of a population per individual – that is, the maximum population growth rate. The concept is commonly used in insect population biology to determine how environmental factors affect the rate at which pest populations increase. See also exponential population growth and logistic population growth.[2]

Common mathematical models

Exponential population growth

Exponential growth describes unregulated reproduction and is very unusual in nature. Over the last 100 years, human population growth has appeared to be exponential; in the long run, however, this is unlikely to continue. Paul Ehrlich and Thomas Malthus believed that human population growth would lead to overpopulation and starvation due to scarcity of resources: populations would grow faster than the rate at which humans can produce food, so that in the future humans would be unable to feed large populations. The biological assumption of exponential growth is that the per capita growth rate is constant; growth is not limited by resource scarcity or predation.[3]

Simple discrete-time exponential model

N_{t+1}=\lambda N_{t}
where λ is the discrete-time per capita growth rate. At λ = 1, the population size stays constant (a per capita growth rate of zero). At λ < 1, the population declines. At λ > 1, the population grows. At λ = 0, the species goes extinct after a single time step.[3]
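As a sketch, the discrete-time model can be iterated directly in code; the starting size and the values of λ below are illustrative choices, not figures from the text:

```python
# Discrete-time exponential growth: N_{t+1} = lambda * N_t.
def discrete_exponential(n0, lam, steps):
    """Return the trajectory [N_0, N_1, ..., N_steps]."""
    trajectory = [n0]
    for _ in range(steps):
        trajectory.append(trajectory[-1] * lam)
    return trajectory

print(discrete_exponential(100, 2.0, 3))  # lambda > 1: growth -> [100, 200.0, 400.0, 800.0]
print(discrete_exponential(100, 1.0, 3))  # lambda = 1: constant population size
print(discrete_exponential(100, 0.0, 2))  # lambda = 0: extinction after one step
```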

Continuous-time version of exponential growth

Some species have continuous reproduction.
\frac{dN}{dT} = rN
where dN/dT is the rate of population growth per unit time, r is the maximum per capita growth rate, and N is the population size.

At r > 0, the population grows. At r = 0, the population size is constant. At r < 0, the population declines.
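This equation has the standard closed-form solution N(T) = N0 · e^(rT), which can be sketched as follows; the values of N0, r, and T are illustrative:

```python
import math

# Continuous-time exponential growth dN/dT = r*N has the closed-form
# solution N(T) = N0 * exp(r*T).
def exponential_population(n0, r, t):
    return n0 * math.exp(r * t)

print(exponential_population(50, 0.1, 10))   # r > 0: the population grows
print(exponential_population(50, 0.0, 10))   # r = 0: constant at 50.0
print(exponential_population(50, -0.1, 10))  # r < 0: the population declines
```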

Logistic population growth

“Logistic” comes from the French word logistique, which means “to compute”. Population regulation is a density-dependent process, meaning that population growth rates are regulated by the density of a population. Consider an analogy with a thermostat. When the temperature is too hot, the thermostat turns on the air conditioning to decrease the temperature back to homeostasis. When the temperature is too cold, the thermostat turns on the heater to increase the temperature back to homeostasis. Likewise with density dependence, whether the population density is high or low, population dynamics returns the population density to homeostasis. Homeostasis is the set point, or carrying capacity, defined as K.[3]

Continuous-time model of logistic growth

\frac{dN}{dT} = rN\left(1 - \frac{N}{K}\right)
where (1 − N/K) is the density-dependence term, N is the population size, and K is the set point for homeostasis, the carrying capacity. In this logistic model, the population growth rate is highest at N = K/2 and falls to zero at N = K. The optimum harvesting rate is therefore close to K/2, where the population grows fastest. Above K, the population growth rate is negative. The logistic model also shows density dependence: the per capita growth rate declines as population density increases. In the wild, such clean patterns rarely emerge without simplification. Negative density dependence allows a population that overshoots the carrying capacity to decrease back to the carrying capacity, K.[3]
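The growth-rate behaviour just described can be sketched directly; the values of r and K below are illustrative:

```python
# Logistic growth rate dN/dT = r*N*(1 - N/K).
def logistic_growth_rate(n, r, k):
    return r * n * (1 - n / k)

r, k = 0.5, 1000
print(logistic_growth_rate(k / 2, r, k))  # N = K/2: maximum growth rate, 125.0
print(logistic_growth_rate(k, r, k))      # N = K: growth rate is 0.0
print(logistic_growth_rate(1200, r, k))   # N > K: negative, decays back toward K
```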

According to r/K selection theory, organisms may be specialised for rapid growth or for stability closer to carrying capacity.

Discrete-time logistic model

N_{t+1} = N_t + rN_t(1 - N_t/K)
This equation uses r instead of λ because the per capita growth rate is zero when r = 0. As r becomes very high, the model produces oscillations and deterministic chaos,[3] in which very small changes in r lead to large changes in population dynamics. This makes prediction difficult at high r values, because a very small error in r produces a massive error in the projected dynamics.
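A minimal sketch of iterating this model; the parameter values are illustrative, chosen only to contrast a low-r and a high-r regime:

```python
# Discrete-time logistic model: N_{t+1} = N_t + r*N_t*(1 - N_t/K).
def discrete_logistic(n0, r, k, steps):
    n = n0
    trajectory = [n]
    for _ in range(steps):
        n = n + r * n * (1 - n / k)
        trajectory.append(n)
    return trajectory

# Low r: smooth approach to the carrying capacity K.
print(round(discrete_logistic(10, 0.5, 100, 50)[-1], 2))  # 100.0
# High r: oscillations and deterministic chaos (values are so sensitive
# to r that no single "expected" trajectory is shown here).
chaotic = discrete_logistic(10, 2.9, 100, 50)
```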

Population regulation is always density-dependent. Even a severe density-independent event cannot regulate a population, although it may drive it extinct.

Not all population models are necessarily negatively density-dependent. The Allee effect allows for a positive correlation between population density and per capita growth rate in communities with very small populations. For example, a fish swimming on its own is more likely to be eaten than the same fish swimming among a school of fish, because the pattern of movement of the school is more likely to confuse and stun the predator.[3]
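One common functional form for a strong Allee effect (an assumption for illustration; the text does not specify a model) adds a threshold A to the logistic equation, below which per capita growth is negative; A, r, and K below are illustrative:

```python
# A strong Allee effect added to the logistic model via a threshold A:
#   dN/dT = r*N*(N/A - 1)*(1 - N/K)
# Below A the growth rate is negative; between A and K it is positive.
def allee_growth_rate(n, r, a, k):
    return r * n * (n / a - 1) * (1 - n / k)

r, a, k = 0.5, 20, 1000
print(allee_growth_rate(10, r, a, k))   # N < A: negative (population too sparse)
print(allee_growth_rate(100, r, a, k))  # A < N < K: positive growth
```

This captures the lone-fish example above: very sparse populations decline rather than grow.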

Individual-based models

Cellular automata are used to investigate mechanisms of population dynamics. Below are relatively simple models with one and with two species.

Logical deterministic individual-based cellular automata model of single species population growth
 
Logical deterministic individual-based cellular automata model of interspecific competition for a single limited resource

Fisheries and wildlife management

In fisheries and wildlife management, population is affected by three dynamic rate functions.
  • Natality or birth rate, often recruitment, which means reaching a certain size or reproductive stage. Usually refers to the age a fish can be caught and counted in nets.
  • Population growth rate, which measures the growth of individuals in size and length. More important in fisheries, where population is often measured in biomass.
  • Mortality, which includes harvest mortality and natural mortality. Natural mortality includes non-human predation, disease and old age.
If N1 is the number of individuals at time 1 then
N_{1}=N_{0}+B-D+I-E
where N0 is the number of individuals at time 0, B is the number of individuals born, D the number that died, I the number that immigrated, and E the number that emigrated between time 0 and time 1.
If we measure these rates over many time intervals, we can determine how a population's density changes over time. Immigration and emigration are present, but are usually not measured.
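This bookkeeping (often called a BIDE model) is a one-line computation; the counts below are illustrative:

```python
# BIDE accounting for one census interval: N_1 = N_0 + B - D + I - E.
def bide(n0, births, deaths, immigrants, emigrants):
    return n0 + births - deaths + immigrants - emigrants

print(bide(500, 120, 80, 10, 15))  # 535
```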

All of these are measured to determine the harvestable surplus, which is the number of individuals that can be harvested from a population without affecting long-term population stability or average population size. The harvest within the harvestable surplus is termed "compensatory" mortality, where the harvest deaths are substituted for the deaths that would have occurred naturally. Harvest above that level is termed "additive" mortality, because it adds to the number of deaths that would have occurred naturally. These terms are not necessarily judged as "good" and "bad," respectively, in population management. For example, a fish & game agency might aim to reduce the size of a deer population through additive mortality. Bucks might be targeted to increase buck competition, or does might be targeted to reduce reproduction and thus overall population size.

For the management of many fish and other wildlife populations, the goal is often to achieve the largest possible long-run sustainable harvest, also known as maximum sustainable yield (or MSY). Given a population dynamic model, such as any of the ones above, it is possible to calculate the population size that produces the largest harvestable surplus at equilibrium.[4] While the use of population dynamic models along with statistics and optimization to set harvest limits for fish and game is controversial among scientists,[5] it has been shown to be more effective than the use of human judgment in computer experiments where both incorrect models and natural resource management students competed to maximize yield in two hypothetical fisheries.[6][7] To give an example of a non-intuitive result, fisheries produce more fish when there is a nearby refuge from human predation in the form of a nature reserve, resulting in higher catches than if the whole area was open to fishing.[8][9]
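For the logistic model above, the equilibrium harvestable surplus is H(N) = rN(1 − N/K), which peaks at the population size N = K/2 and gives MSY = rK/4. A sketch with illustrative r and K:

```python
# Equilibrium surplus production under logistic growth: H(N) = r*N*(1 - N/K).
# It is maximized at N = K/2, giving MSY = r*K/4.
def surplus_production(n, r, k):
    return r * n * (1 - n / k)

r, k = 0.4, 10000
print(surplus_production(k / 2, r, k))  # MSY = 1000.0
print(r * k / 4)                        # same value from the closed-form formula
```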

For control applications

Population dynamics have been widely used in several control-theory applications. Through evolutionary game theory, population games are implemented in a range of industrial and everyday contexts. They are mostly used in multiple-input multiple-output (MIMO) systems, although they can be adapted to single-input single-output (SISO) systems. Example applications include military campaigns, resource allocation for water distribution, dispatch of distributed generators, laboratory experiments, transport problems, and communication problems. Moreover, with adequate contextualization of industrial problems, population dynamics can be an efficient and easy-to-implement solution for control-related problems. Considerable academic research has been, and continues to be, carried out.
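As an illustration of a population game, the replicator dynamics of evolutionary game theory can be simulated in a few lines; the two-strategy payoff matrix here is hypothetical, not drawn from any of the applications listed:

```python
# Euler-discretized replicator dynamics: each strategy's population share
# grows in proportion to its fitness advantage over the population average.
def replicator_step(x, payoff, dt=0.01):
    n = len(x)
    fitness = [sum(payoff[i][j] * x[j] for j in range(n)) for i in range(n)]
    avg = sum(x[i] * fitness[i] for i in range(n))
    return [x[i] + dt * x[i] * (fitness[i] - avg) for i in range(n)]

# Strategy 0 strictly dominates, so its share tends to 1 over time.
payoff = [[2.0, 2.0], [1.0, 1.0]]
x = [0.5, 0.5]
for _ in range(2000):
    x = replicator_step(x, payoff)
print(round(x[0], 3))  # close to 1.0
```

In a control setting, the "strategies" are typically resource-allocation choices and the payoffs encode the plant's performance criteria.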

Evolutionary economics

From Wikipedia, the free encyclopedia

Evolutionary economics is part of mainstream economics as well as a heterodox school of economic thought that is inspired by evolutionary biology. Much like mainstream economics, it stresses complex interdependencies, competition, growth, structural change, and resource constraints but differs in the approaches which are used to analyze these phenomena.

Evolutionary economics studies the processes that transform the economy (its firms, institutions, industries, employment, production, trade, and growth) from within, through the actions of diverse agents drawing on experience and interactions, using evolutionary methodology.[3][4] Evolutionary economics analyses the unleashing of a process of technological and institutional innovation by generating and testing a diversity of ideas which discover and accumulate more survival value for the costs incurred than competing alternatives. The evidence suggests that it could be adaptive efficiency that defines economic efficiency. Mainstream economic reasoning begins with the postulates of scarcity and rational agents (that is, agents modeled as maximizing their individual welfare), with the "rational choice" for any agent being a straightforward exercise in mathematical optimization. There has been renewed interest in treating economic systems as evolutionary systems in the developing field of complexity economics.[citation needed]

Evolutionary economics does not take the characteristics of either the objects of choice or the decision-maker as fixed. Rather, its focus is on the non-equilibrium processes that transform the economy from within, and their implications.[5][6] These processes in turn emerge from the actions of diverse agents with bounded rationality who may learn from experience and interactions, and whose differences contribute to the change. The subject draws more recently on evolutionary game theory[7] and on the evolutionary methodology of Charles Darwin and the non-equilibrium economics principle of circular and cumulative causation. It is naturalistic in purging earlier notions of economic change as teleological or necessarily improving the human condition.[8]

A different approach is to apply evolutionary psychology principles to economics which is argued to explain problems such as inconsistencies and biases in rational choice theory. Basic economic concepts such as utility may be better viewed as due to preferences that maximized evolutionary fitness in the ancestral environment but not necessarily in the current one.[9]

Predecessors

In the mid-19th century, Karl Marx presented a schema of stages of historical development, by introducing the notion that human nature was not constant and was not determinative of the nature of the social system; on the contrary, he made it a principle that human behavior was a function of the social and economic system in which it occurred.

Marx based his theory of economic development on the premise of developing economic systems; specifically, over the course of history superior economic systems would replace inferior ones. Inferior systems were beset by internal contradictions and inefficiencies that make them incapable of surviving over the long term. In Marx's scheme, feudalism was replaced by capitalism, which would eventually be superseded by socialism.[10]

At approximately the same time, Charles Darwin developed a general framework for comprehending any process whereby small, random variations could accumulate and predominate over time into large-scale changes that resulted in the emergence of wholly novel forms ("speciation").

This was followed shortly after by the work of the American pragmatic philosophers (Peirce, James, Dewey) and the founding of two new disciplines, psychology and anthropology, both of which were oriented toward cataloging and developing explanatory frameworks for the variety of behavior patterns (both individual and collective) that were becoming increasingly obvious to all systematic observers. The state of the world converged with the state of the evidence to make almost inevitable the development of a more "modern" framework for the analysis of substantive economic issues.

Veblen (1898)

Thorstein Veblen (1898) coined the term "evolutionary economics" in English. He began his career in the midst of this period of intellectual ferment, and as a young scholar came into direct contact with some of the leading figures of the various movements that were to shape the style and substance of social sciences into the next century and beyond. Veblen saw the need for taking account of cultural variation in his approach; no universal "human nature" could possibly be invoked to explain the variety of norms and behaviors that the new science of anthropology showed to be the rule, rather than the exception. He emphasised the conflict between "industrial" and "pecuniary" or ceremonial values, and this Veblenian dichotomy was interpreted in the hands of later writers as the "ceremonial/instrumental dichotomy" (Hodgson 2004).

Veblen saw that every culture is materially based and dependent on tools and skills to support the "life process", while at the same time, every culture appeared to have a stratified structure of status ("invidious distinctions") that ran entirely contrary to the imperatives of the "instrumental" (read: "technological") aspects of group life. The "ceremonial" was related to the past, and conformed to and supported the tribal legends; "instrumental" was oriented toward the technological imperative to judge value by the ability to control future consequences. The "Veblenian dichotomy" was a specialized variant of the "instrumental theory of value" due to John Dewey, with whom Veblen was to make contact briefly at the University of Chicago.

Arguably the most important works by Veblen include, but are not restricted to, his most famous books (The Theory of the Leisure Class; The Theory of Business Enterprise), but his monograph Imperial Germany and the Industrial Revolution and the 1898 essay "Why is Economics not an Evolutionary Science?" have both been influential in shaping the research agenda for subsequent generations of social scientists. TOLC and TOBE together constitute an alternative construction to the neoclassical marginalist theories of consumption and production, respectively.

Both are founded on his dichotomy, which is at its core a valuational principle. The ceremonial patterns of activity are not bound to any past, but to one that generated a specific set of advantages and prejudices that underlie the current institutions. "Instrumental" judgments create benefits according to a new criterion, and therefore are inherently subversive. This line of analysis was more fully and explicitly developed by Clarence E. Ayres of the University of Texas at Austin from the 1920s.

A seminal article by Armen Alchian (1950) argued for adaptive success of firms faced with uncertainty and incomplete information replacing profit maximization as an appropriate modeling assumption.[11] Kenneth Boulding was one of the advocates of the evolutionary methods in social science, as is evident from Kenneth Boulding's Evolutionary Perspective. Kenneth Arrow, Ronald Coase and Douglass North are some of the Bank of Sweden Prize in Economic Sciences in Memory of Alfred Nobel winners who are known for their sympathy to the field.

More narrowly, the works of Jack Downie[12] and Edith Penrose[13] offer many insights for those thinking about evolution at the level of the firm in an industry.

Joseph Schumpeter, who lived in the first half of the 20th century, was the author of The Theory of Economic Development (1911, transl. 1934). Notably, for the word "development" he used, in his native German, the word "Entwicklung", which can be translated as either development or evolution. The translators of the day chose "development", from the French "développement", rather than "evolution", since the latter was associated with Darwin. (Schumpeter, in his later writings in English as a professor at Harvard, used the word "evolution".) The current term in common use is economic development.

In Schumpeter's book he proposed an idea radical for its time: the evolutionary perspective. He based his theory on the assumption of usual macroeconomic equilibrium, which is something like "the normal mode of economic affairs". This equilibrium is being perpetually destroyed by entrepreneurs who try to introduce innovations. A successful introduction of an innovation (i.e. a disruptive technology) disturbs the normal flow of economic life, because it forces some of the already existing technologies and means of production to lose their positions within the economy.[citation needed]

Present state of discussion

One of the major contributions to the emerging field of evolutionary economics has been the publication of An Evolutionary Theory of Economic Change by Richard Nelson and Sidney G. Winter. These authors focused mostly on the issue of changes in technology and routines, suggesting a framework for their analysis. If change occurs constantly in the economy, then some kind of evolutionary process must be in action, and it has been proposed that this process is Darwinian in nature.

Then, mechanisms that provide selection, generate variation and establish self-replication, must be identified. The authors introduced the term 'steady change' to highlight the evolutionary aspect of economic processes and contrast it with the concept of 'steady state' popular in classical economics.[14] Their approach can be compared and contrasted with the population ecology or organizational ecology approach in sociology: see Douma & Schreuder (2013, chapter 11).

Milton Friedman proposed that markets act as major selection vehicles. As firms compete, unsuccessful rivals fail to capture an appropriate market share, go bankrupt, and have to exit.[15] Competing firms vary in both their products and their practices, which are matched against markets. Both products and practices are determined by routines that firms use: standardized patterns of action implemented constantly. By imitating these routines, firms propagate them and thus establish the inheritance of successful practices.[16][17] A general theory of this process has been proposed by Kurt Dopfer, John Foster and Jason Potts as the micro-meso-macro framework.[18]

Economic processes, as part of life processes, are intrinsically evolutionary.[19] From the evolutionary equation that describes life processes, an analytical formula for the main factors of economic processes, such as fixed cost and variable cost, can be derived. The economic return, or competitiveness, of economic entities with different characteristics under different kinds of environment can be calculated.[20] A change of environment changes the competitiveness of different economic entities and systems; this is the process of evolution of economic systems.

In recent years, evolutionary models have been used to assist decision making in applied settings and find solutions to problems such as optimal product design and service portfolio diversification.[21]

Evolutionary psychology

A different approach is to apply evolutionary psychology principles to economics which is argued to explain problems such as inconsistencies and biases in rational choice theory. A basic economic concept such as utility may be better viewed as due to preferences that maximized evolutionary fitness in the ancestral environment but not necessarily in the current one. Loss aversion may be explained as being rational when living at subsistence level where a reduction of resources may have meant death and it thus may have been rational to place a greater value on losses than on gains.[9]

People are sometimes more cooperative and altruistic than predicted by economic theory which may be explained by mechanisms such as reciprocal altruism and group selection for cooperative behavior. An evolutionary approach may also explain differences between groups such as males being less risk-averse than females since males have more variable reproductive success than females. While unsuccessful risk-seeking may limit reproductive success for both sexes, males may potentially increase their reproductive success much more than females from successful risk-seeking. Frequency-dependent selection may explain why people differ in characteristics such as cooperative behavior with cheating becoming an increasingly less successful strategy as the numbers of cheaters increase.[9]

Another argument is that humans have a poor intuitive grasp of the economics of the current environment, which is very different from the ancestral environment. The ancestral environment likely had relatively little trade, division of labor, and capital goods. Technological change was very slow, wealth differences were much smaller, and possession of the available resources was likely a zero-sum game in which large inequalities were caused by various forms of exploitation. Humans therefore may have a poor intuitive understanding of the benefits of free trade (causing calls for protectionism) and of the value of capital goods (making the labor theory of value appealing), and may intuitively undervalue the benefits of technological development.[9]

There may be a tendency to see the number of available jobs as a zero-sum game with the total number of jobs being fixed which causes people to not realize that minimum wage laws reduce the number of jobs or to believe that an increased number of jobs in other nations necessarily decreases the number of jobs in their own nation. Large income inequality may easily be viewed as due to exploitation rather than as due to individual differences in productivity. This may easily cause poor economic policies, especially since individual voters have few incentives to make the effort of studying societal economics instead of relying on their intuitions since an individual's vote counts for so little and since politicians may be reluctant to take a stand against intuitive views that are incorrect but widely held.[9]

Cost-Benefit Reform at the EPA
Under Obama, the EPA juked the numbers to justify costly regulation.


By The Editorial Board
June 6, 2018, Wall Street Journal

Appeared in the June 7, 2018, print edition.
Original link:  https://junkscience.com/2018/06/more-winning-epa-administrator-pruitt-proposes-cost-benefit-analysis-reform/#more-93974

Barack Obama’s Environmental Protection Agency jammed through an average of 565 new rules each year during the Obama Presidency, imposing the highest regulatory costs of any agency. It pulled off this regulatory spree in part by gaming cost-benefit analysis to downplay the consequences of its major environmental rules. The Trump Administration has already rolled back some of this overregulation, and now Administrator Scott Pruitt wants to stop the EPA’s numerical shenanigans, too.

On Thursday the EPA will take the first step toward a comprehensive cost-benefit reform by issuing an advance notice of proposed rule-making. After weighing public input, EPA will propose a rule establishing an agency-wide standard for how regulations are assessed. The reform would make it easier for Americans and their elected representatives to see whether more regulation is truly justifiable.

The EPA has a statutory obligation to look at the costs and benefits of many proposed rules. That responsibility has been reinforced by executive orders and court rulings. But while all three branches of government have supported such assessments, they leave the EPA broad discretion. Enter the Obama Administration, which saw the chance to add additional considerations to the cost-benefit equation.

By introducing “social costs” and “social benefits,” the EPA began factoring in speculation about how regulatory inaction would affect everything from rising sea levels to pediatric asthma. EPA optimists even included their guesses about how domestic regulations could have a global impact. Meanwhile, the agency ignored best practices from the Office of Management and Budget, juking the numbers to raise the cost of carbon emissions.

This proved as politically useful as it was scientifically imprecise. Months before introducing the Clean Power Plan, the EPA suddenly raised the social cost of a ton of carbon emissions to an average of $36 from $21. Before it embarked on new oil and gas regulations, the EPA put the social cost of methane at an average of $1,100 per ton.

At White House direction, the Trump EPA recalculated those figures last year to include only demonstrable domestic benefits. The social cost estimates dropped to an average of $5 per ton of carbon and $150 per ton of methane. That made a big difference in the cost-benefit analysis. While the Obama Administration claimed the Clean Power Plan would yield up to $43 billion in net benefits by 2030, the Trump EPA concluded it would carry a $13 billion net cost.

Another statistical sleight of hand involves the Mercury and Air Toxics Standards. The regulation’s stated purpose was to reduce mercury pollution, but the EPA added the rule’s potential to decrease dust. That was irrelevant to the central question of whether it was worthwhile to regulate mercury as proposed. But without the erroneous co-benefits, EPA would find such regulations tougher to justify.

On his first day in office, Mr. Pruitt said his goal was to protect the environment and the economy, and that “we don’t have to choose between the two.” His many ethics controversies have distracted from that mission, but this cost-benefit reform is a welcome return.

The regulatory specifics will be hashed out in the coming months, but there’s real potential here to curb the distortions that mask bad policy. If Mr. Pruitt succeeds, future cost-benefit analyses will be more consistent and transparent. The reform would help to ensure regulation is based on sound scientific analysis instead of wishful bureaucratic thinking.

Technological transitions

From Wikipedia, the free encyclopedia
 
Technological innovations have occurred throughout history and have rapidly increased in the modern age. New technologies are developed and co-exist with the old before supplanting them. Transport offers several examples: sailing ships gave way to steamships, and automobiles replaced horse-based transportation. Technological transitions (TT) describe how these technological innovations occur and are incorporated into society. Alongside the technological developments, TT considers wider societal changes such as “user practices, regulation, industrial networks (supply, production, distribution), infrastructure, and symbolic meaning or culture”. For a technology to have use, it must be linked to social structures, human agency, and organisations to fulfil a specific need. Hughes refers to the ‘seamless web’ in which physical artefacts, organisations, scientific communities, and social practices combine. A technological system includes technical and non-technical aspects, and a technological transition occurs when there is a major shift in the socio-technical configuration (involving at least one new technology).

Origins

Work on technological transitions draws on a number of fields, including the history of science, technology studies, and evolutionary economics.[2] The focus of evolutionary economics is on economic change, but technological change as a driver of economic change has been considered in the literature.[5] Joseph Schumpeter, in his classic Theory of Economic Development,[6] placed the emphasis on non-economic forces as the driver of growth. The human actor, the entrepreneur, is seen as the cause of economic development, which occurs as a cyclical process. Schumpeter proposed that radical innovations were the catalyst for Kondratiev cycles.

Long wave theory

The Russian economist Kondratiev[7] proposed that economic growth operated in boom-and-bust cycles of approximately 50 years. These cycles were characterised by periods of expansion, stagnation, and recession. The period of expansion is associated with the introduction of a new technology, e.g. steam power or the microprocessor. At the time of publication, Kondratiev considered that two cycles had occurred in the nineteenth century and that a third was beginning at the turn of the twentieth. Modern writers, such as Freeman and Perez,[8] outlined five cycles in the modern age:
  • The Industrial Revolution (1770–1830)
  • Victorian Prosperity: the Age of Steam and Rail (1830–1880)
  • The Age of Steel (1880–1930)
  • Oil, Mass Production and the Consumer Society (1930–1980)
  • The Information Age (1980–?)
Freeman and Perez[8] proposed that each cycle consists of pervasive technologies and the production and economic structures that support them. They termed these ‘techno-economic paradigms’ and suggested that the shift from one paradigm to another is the result of emergent new technologies.

Following the recent economic crisis, authors such as Moody and Nogrady[9] have suggested that a new cycle is emerging from the old, centred on the use of sustainable technologies in a resource-depleted world.

Technological paradigms, trajectories and regimes

Thomas Kuhn[10] described how a paradigm shift is a wholesale change in the basic understanding of a scientific theory. Examples in science include the shift from miasma theory to germ theory as the explanation of disease. Building on this work, Giovanni Dosi[11] developed the concepts of ‘technological paradigms’ and ‘technological trajectories’. In considering how engineers work, the technological paradigm is an outlook on the technological problem: a definition of what the problems and solutions are. It charts the idea of specific progress. By identifying the problems to be solved, the paradigm exerts an influence on technological change. The pattern of problem-solving activity and the direction of progress is the technological trajectory. In similar fashion, Nelson and Winter[12][13] defined the concept of the ‘technological regime’, which directs technological change through engineers’ beliefs about what problems to solve. The work of the actors and organisations is the result of organisational and cognitive routines, which determine search behaviour. These routines place boundaries on what is searched for and set trajectories (directions) within those boundaries.

Multi-level perspective on technological transitions

In analysing (historic) cases of technological transitions, researchers from the systems-in-transition branch of transitions research have used a multi-level perspective (MLP) as a heuristic model to understand changes in socio-technical systems.[2][14][15] Innovation-system approaches traditionally focus on the production side. A socio-technical approach combines the science and technology used in devising a product with the application of that technology in fulfilling a societal function.[16] Linking the two domains are the distribution, infrastructure and markets of the product. This approach considers a transition to be multi-dimensional, as technology is only one aspect.

The MLP proposes three analytical levels: the niche, regime and landscape.

Niche (Micro-level) Radical innovations occur at the niche level. Niches act as ‘safe havens’ for fledgling technologies to develop, largely free from the market pressures which operate at the regime level. The US military has acted as a niche for major twentieth-century technologies such as the aircraft, radio and the internet. More recently, California’s Silicon Valley has provided an arena for ICT-focused technologies to emerge. Some innovations will challenge the existing regime while others fail.

Regime (Meso-level) The socio-technical regime, as defined by Geels,[2] includes a web of interlinking actors across different social groups and communities following a set of rules: in effect, the established practices of a given system. Seven dimensions have been identified in the socio-technical regime: technology, user practices and application domains (markets), the symbolic meaning of technology, infrastructure, industry structure, policy, and techno-scientific knowledge.[2] Change does occur at the regime level, but it is normally slow and incremental, unlike the radical change at the niche level. The actors who constitute the existing regime stand to gain from perpetuating the incumbent technology at the expense of the new. This is known as ‘lock-in’.[1]

Landscape (Macro-level) Exogenous to the previous levels is the socio-technical landscape.[2] A broad range of factors are contained here, such as economic pressures, cultural values, social trends, wars and environmental issues. Change occurs at an even slower rate than at the regime level.

A transition is said to happen when a regime shift has occurred, as the result of the interplay between the three levels. Regimes are relatively inert and resistant to change, being geared toward incremental innovation along established trajectories.[17] As such, transitions are difficult to achieve. Typically, the incumbent regime suffers internal problems, and pressure from the landscape level may open ‘cracks’ or ‘windows of opportunity’ through which innovations at the niche level can break through; these may initially co-exist with the established technology before achieving ascendancy. Once the technology has fully embedded into society, the transition is said to be complete.[18]

Case study

The MLP has been used in describing a range of historic transitions in socio-technical regimes for mobility, sanitation, food, lighting and so on.[19] While early research focused on historical transitions, a second strand of research was more focused on transitions to sustainable technologies in key sectors such as transport, energy and housing.[19]

Geels[2][5] presented three historical transitions in system innovation relating to modes of transportation. The technological transition from sailing ships to steamships in the UK is summarised here in the context of a wider system innovation.

Great Britain was the world’s leading naval power in the nineteenth century, and led the way in the transition from sail to steam. At first, the introduction of steam technology co-existed with the current regime: steam tugs assisted sailing ships into port, and hybrid steam/sail ships appeared. Landscape developments created the need for improvements in the technology. Demand for trans-Atlantic emigration was prompted by the Irish potato famine, European political instability and the lure of gold in California. The requirement for such arduous journeys prompted a wealth of innovations in steamship development at the niche level. From the late 1880s, as steamship technology improved and costs dropped, the new technology was widely diffused and a new regime established. The changes went beyond a technological transition, involving new ship and fleet management practices, new supporting infrastructures and new functionalities.

Transition paths

The nature of transitions varies, and the differing qualities result in multiple possible pathways. Geels and Schot[20] defined five transition paths:
  • Reproduction: Ongoing change occurring in the regime level.
  • Transformation: A socio-technical regime that changes without the emergence of a monopolising technology.
  • Technological substitution: An incumbent technology is replaced by a radical innovation resulting in a new socio-technical regime.
  • De-alignment and Re-alignment: Weaknesses in the regime see the advent of competing new technologies, one of which eventually becomes dominant (e.g. the automobile replacing the horse as the primary means of land transport).
  • Re-configuration: When multiple, interlinked technologies are replaced by a similarly linked alternative set.

Characteristics of technological transitions

Six characteristics of technological transitions have been identified.[1][21]

Transitions are co-evolutionary and multi-dimensional Technological developments occur intertwined with societal needs, wants and uses. A technology is adopted and diffused based on this interplay between innovation and societal requirements. Co-evolution has different aspects: as well as the co-evolution of technology and society, co-evolution between science, technology, users and culture has been considered.[5]

Multiple actors are involved Scientific and engineering communities are central to the development of a technology, but a wide range of actors are involved in a transition. This can include organisations, policy-makers, government, NGOs, special interest groups and others.

Transitions occur at multiple levels As shown in the MLP, transitions occur through the interplay of processes at different levels.

Transitions are a long-term process Complete system change takes time and can be decades in the making. Case studies show transitions to take between 40 and 90 years.[18]

Transitions are radical For a true transition to occur the technology has to be a radical innovation.

Change is non-linear The rate of change will vary over time. For example, the pace of change may be slow during the gestation period (at the niche level) but much more rapid when a breakthrough is occurring.

Diffusion: transition phases

Diffusion of an innovation describes how, at what rate, and why an innovation is taken up by society, a concept popularised by Everett Rogers (1962). The diffusion of a technological innovation into society can be considered in distinct phases.[22] Pre-development is the gestation period in which the new technology has yet to make an impact. Take-off is when the process of a system shift begins. A breakthrough occurs when fundamental changes take place in existing structures through the interplay of economic, social and cultural forces. Once the rate of change has decreased and a new balance is achieved, stabilisation is said to have occurred. A full transition involves an overhaul of existing rules and a change of beliefs, which takes time, typically spanning at least a generation.[22] This process can be speeded up by seismic, unforeseen events such as war or economic strife.

Geels[5] proposed a similar four-phase approach which draws on the multi-level perspective (MLP) developed by Dutch scholars. Phase one sees the emergence of a novelty, born from the existing regime. Development then occurs at the niche level in phase two. As before, breakthrough occurs at phase three. In the parlance of the MLP, the new technology, having been developed at the niche level, is in competition with the established regime. To break through and achieve wide diffusion, external factors – ‘windows of opportunity’ – are required.

Windows of opportunity

A number of possible circumstances can act as windows of opportunity for the diffusion of new technologies:
  • Internal technical problems in the existing regime. Those that cannot be solved by refinement of existing technologies act as a driver for the new.
  • Problems external to the system. Such ‘problems’ are often defined by pressure groups and require wider societal or political backing; an example is environmental concerns.
  • Changing user preferences. Opportunities arise if existing technologies cannot meet user needs.
  • Strategic advantage. Competition with rivals may necessitate innovation.
  • Complementary technology. Its availability may enable a breakthrough.
Alongside external influences, internal drivers catalyse diffusion.[5] These include economic factors such as the price–performance ratio. Socio-technical perspectives focus on the links between disparate social and technological elements.[14] Following the breakthrough, the final phases see the new technology supersede the old.

Societal relevance

The study of technological transitions has an impact beyond academic interest. The transitions referred to in the literature may relate to historic processes, such as the transportation transitions studied by Geels, but system changes are also required to achieve a safe transition to a low-carbon economy.[1][5] Current structural problems are apparent in a range of sectors.[5] Dependency on oil is problematic in the energy sector due to availability, access and contribution to greenhouse gas (GHG) emissions. Transportation is a major user of energy, causing significant emission of GHGs. Food production will need to keep pace with an ever-growing world population while overcoming challenges presented by global warming and transportation issues. Incremental change has provided some improvements, but a more radical transition is required to achieve a more sustainable future.

Developed from the work on technological transitions is the field of transition management, which attempts to shape the direction of change of complex socio-technical systems toward more sustainable patterns.[1] Whereas work on technological transitions is largely based on historic processes, proponents of transition management seek to actively steer transitions in progress.

Criticisms

Genus and Coles[18] outlined a number of criticisms of the analysis of technological transitions, in particular when using the MLP. Empirical research on transitions occurring now has been limited, with the focus on historic transitions. Depending on the perspective taken, case studies could be presented as having followed a different transition path from the one shown. For example, the bicycle could be considered an intermediate transport technology between the horse and the car; judged over a shorter time-frame, it could appear to be a transition in its own right. Determining the nature of a transition is problematic: when it started and ended, or whether one occurred at all in the sense of a radical innovation displacing an existing socio-technical regime. The perception of time casts doubt on whether a transition has occurred; if viewed over a long enough period, even inert regimes may demonstrate radical change in the end. The MLP has also been criticised by scholars studying sustainability transitions using social practice theories.[23]

Synergy


Synergy is the creation of a whole that is greater than the simple sum of its parts. The term synergy comes from the Attic Greek word συνεργία synergia[1] from synergos, συνεργός, meaning "working together".

History

The words "synergy" and "synergetic" have been used in the field of physiology since at least the middle of the 19th century:
SYN'ERGY, Synergi'a, Synenergi'a, (F.) Synergie; from συν, 'with,' and εργον, 'work.' A correlation or concourse of action between different organs in health; and, according to some, in disease.
—Dunglison, Robley Medical Lexicon Blanchard and Lea, 1853
In 1896, Henri Mazel applied the term "synergy" to social psychology by writing La synergie sociale, in which he argued that Darwinian theory failed to account for "social synergy" or "social love", a collective evolutionary drive. The highest civilizations were the work not only of the elite but of the masses too; those masses must be led, however, because the crowd, a feminine and unconscious force, cannot distinguish between good and evil.[2]

In 1909, Lester Frank Ward defined synergy as the universal constructive principle of nature:
I have characterized the social struggle as centrifugal and social solidarity as centripetal. Either alone is productive of evil consequences. Struggle is essentially destructive of the social order, while communism removes individual initiative. The one leads to disorder, the other to degeneracy. What is not seen—the truth that has no expounders—is that the wholesome, constructive movement consists in the properly ordered combination and interaction of both these principles. This is social synergy, which is a form of cosmic synergy, the universal constructive principle of nature.
—Ward, Lester F. Glimpses of the Cosmos, volume VI (1897–1912) G. P. Putnam's Sons, 1918, p. 358

Descriptions and usages

In the natural world, synergistic phenomena are ubiquitous, ranging from physics (for example, the different combinations of quarks that produce protons and neutrons) to chemistry (a popular example is water, a compound of hydrogen and oxygen), to the cooperative interactions among the genes in genomes, the division of labor in bacterial colonies, the synergies of scale in multi-cellular organisms, as well as the many different kinds of synergies produced by socially-organized groups, from honeybee colonies to wolf packs and human societies: compare stigmergy, a mechanism of indirect coordination between agents or actions that results in the self-assembly of complex systems. Even the tools and technologies that are widespread in the natural world represent important sources of synergistic effects. The tools that enabled early hominins to become systematic big-game hunters are a primordial human example.[3]

In the context of organizational behavior, following the view that a cohesive group is more than the sum of its parts, synergy is the ability of a group to outperform even its best individual member. These conclusions are derived from the studies conducted by Jay Hall on a number of laboratory-based group ranking and prediction tasks. He found that effective groups actively looked for the points on which they disagreed and in consequence encouraged conflicts amongst the participants in the early stages of the discussion. In contrast, the ineffective groups felt a need to establish a common view quickly, used simple decision-making methods such as averaging, and focused on completing the task rather than on finding solutions they could agree on.[4]

In a technical context, synergy means a construct or collection of different elements working together to produce results not obtainable by any of the elements alone. The elements, or parts, can include people, hardware, software, facilities, policies, documents: all things required to produce system-level results. The value added by the system as a whole, beyond that contributed independently by the parts, is created primarily by the relationship among the parts, that is, how they are interconnected. In essence, a system constitutes a set of interrelated components working together with a common objective: fulfilling some designated need.[5]

If used in a business application, synergy means that teamwork will produce an overall better result than if each person within the group were working toward the same goal individually. However, the concept of group cohesion needs to be considered. Group cohesion is that property that is inferred from the number and strength of mutual positive attitudes among members of the group. As the group becomes more cohesive, its functioning is affected in a number of ways. First, the interactions and communication between members increase. Common goals, interests and small size all contribute to this. In addition, group member satisfaction increases as the group provides friendship and support against outside threats.[6]

There are negative aspects of group cohesion that have an effect on group decision-making and hence on group effectiveness. Two issues arise. The risky shift phenomenon is the tendency of a group to make decisions that are riskier than those its members would have recommended individually. Group polarisation is when individuals in a group begin by taking a moderate stance on an issue regarding a common value and, after having discussed it, end up taking a more extreme stance.[7]

A second potential negative consequence of group cohesion is groupthink, a mode of thinking that people engage in when they are deeply involved in a cohesive group and the members' striving for unanimity overrides their motivation to realistically appraise alternative courses of action. Studying several American policy "disasters", such as the failure to anticipate the Japanese attack on Pearl Harbor (1941) and the Bay of Pigs Invasion fiasco (1961), Irving Janis argued that they were due to the cohesive nature of the committees that made the relevant decisions.[8]

Dr. Chris Elliot notes that decisions made by committees can lead to failure even in a simple system. His case study looked at IEEE-488, an international standard set by the leading US standards body, which codified a proprietary communications standard (HP-IB). Small automation systems using IEEE-488 failed because the external devices used for communication were made by two different companies, and the incompatibility between the external devices led to a financial loss for the company. He argues that systems will be safe only if they are designed, not if they emerge by chance.[9]

The idea of a systemic approach is endorsed by the United Kingdom Health and Safety Executive: successful health and safety management depends upon analyzing the causes of incidents and accidents and learning the correct lessons from them. The idea is that all events (not just those causing injuries) represent failures in control and present an opportunity for learning and improvement.[10] The HSE's Successful Health and Safety Management (1997) describes the principles and management practices which provide the basis of effective health and safety management. It sets out the issues that need to be addressed, and can be used for developing improvement programs, self-audit, or self-assessment. Its message is that organizations must manage health and safety with the same degree of expertise, and to the same standards, as other core business activities if they are to effectively control risks and prevent harm to people.

The term synergy was refined by R. Buckminster Fuller, who analyzed some of its implications more fully[11] and coined the term synergetics.[12]
  • A dynamic state in which combined action is favored over the difference of individual component actions.
  • Behavior of whole systems unpredicted by the behavior of their parts taken separately, known as emergent behavior.
  • The cooperative action of two or more stimuli (or drugs), resulting in a different or greater response than that of the individual stimuli.

Biological sciences

Synergy of various kinds has been advanced by Peter Corning as a causal agency that can explain the progressive evolution of complexity in living systems over the course of time. According to the Synergism Hypothesis, synergistic effects have been the drivers of cooperative relationships of all kinds and at all levels in living systems. The thesis, in a nutshell, is that synergistic effects have often provided functional advantages (economic benefits) in relation to survival and reproduction that have been favored by natural selection. The cooperating parts, elements, or individuals become, in effect, functional “units” of selection in evolutionary change.[13] Similarly, environmental systems may react in a non-linear way to perturbations, such as climate change, so that the outcome may be greater than the sum of the individual component alterations. Synergistic responses are a complicating factor in environmental modeling.[14]

Pest synergy

Pest synergy occurs in a biological host-organism population when, for example, the introduction of parasite A causes 10% fatalities and parasite B also causes 10% loss. If the two parasites acted independently, combined losses would be expected to total less than 20% (about 19%), yet in some cases losses are significantly greater. In such cases, the parasites in combination are said to have a synergistic effect.
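The independence expectation above can be sketched with a few lines of Python; the observed loss used for comparison is an invented figure, not data from any study:

```python
def expected_combined_mortality(p_a, p_b):
    """Expected host mortality if the two parasites act independently:
    a host survives only if it survives parasite A AND parasite B."""
    return 1 - (1 - p_a) * (1 - p_b)

# Two parasites, each causing 10% mortality on its own:
expected = expected_combined_mortality(0.10, 0.10)  # about 0.19, i.e. 19%

# A hypothetical observed loss well above this expectation would
# indicate a synergistic interaction between the parasites.
observed = 0.35
is_synergistic = observed > expected
```

Under independence the combined loss is always slightly less than the simple sum of the individual losses, which is why losses exceeding that sum are a strong signal of synergy.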

Drug synergy

Mechanisms that may be involved in the development of synergistic effects include:
  • Effect on the same cellular system (e.g. two different antibiotics like a penicillin and an aminoglycoside; penicillins damage the cell wall of gram-positive bacteria and improve the penetration of aminoglycosides).[15]
  • Prevention or delay of degradation in the body (e.g. the antibiotic Ciprofloxacin inhibits the metabolism of Theophylline).[16]
  • Slowdown of excretion (e.g. Probenecid delays the renal excretion of Penicillin and thus prolongs its effect).[16]
  • Anticounteractive action: for example, the effect of oxaliplatin and irinotecan. Oxaliplatin intercalates DNA, thereby preventing the cell from replicating DNA. Irinotecan inhibits topoisomerase 1, consequently the cytostatic effect is increased.[17]
  • Effect on the same receptor but different sites (e.g. the coadministration of benzodiazepines and barbiturates, both act by enhancing the action of GABA on GABAA receptors, but benzodiazepines increase the frequency of channel opening, whilst barbiturates increase the channel closing time, making these two drugs dramatically enhance GABAergic neurotransmission).[citation needed]
More mechanisms are described in an exhaustive 2009 review.[17]

Toxicological synergy

Toxicological synergy is of concern to the public and regulatory agencies because chemicals individually considered safe might pose unacceptable health or ecological risk in combination. Articles in scientific and lay journals include many definitions of chemical or toxicological synergy, often vague or in conflict with each other. Because toxic interactions are defined relative to the expectation under "no interaction", a determination of synergy (or antagonism) depends on what is meant by "no interaction".[18] The United States Environmental Protection Agency has one of the more detailed and precise definitions of toxic interaction, designed to facilitate risk assessment.[19] In their guidance documents, the no-interaction default assumption is dose addition, so synergy means a mixture response that exceeds that predicted from dose addition. The EPA emphasizes that synergy does not always make a mixture dangerous, nor does antagonism always make the mixture safe; each depends on the predicted risk under dose addition.

For example, a consequence of pesticide use is the risk of health effects. During the registration of pesticides in the United States exhaustive tests are performed to discern health effects on humans at various exposure levels. A regulatory upper limit of presence in foods is then placed on this pesticide. As long as residues in the food stay below this regulatory level, health effects are deemed highly unlikely and the food is considered safe to consume.

However, in normal agricultural practice, it is rare to use only a single pesticide. During the production of a crop, several different materials may be used. Each of them has had determined a regulatory level at which they would be considered individually safe. In many cases, a commercial pesticide is itself a combination of several chemical agents, and thus the safe levels actually represent levels of the mixture. In contrast, a combination created by the end user, such as a farmer, has rarely been tested in that combination. The potential for synergy is then unknown or estimated from data on similar combinations. This lack of information also applies to many of the chemical combinations to which humans are exposed, including residues in food, indoor air contaminants, and occupational exposures to chemicals. Some groups think that the rising rates of cancer, asthma, and other health problems may be caused by these combination exposures; others have alternative explanations. This question will likely be answered only after years of exposure by the population in general and research on chemical toxicity, usually performed on animals. Examples of pesticide synergists include Piperonyl butoxide and MGK 264.[20]

Human synergy

Human synergy relates to human interaction and teamwork. For example, say person A alone is too short to reach an apple on a tree and person B is too short as well. Once person B sits on the shoulders of person A, they are tall enough to reach the apple. In this example, the product of their synergy would be one apple. Another case would be two politicians. If each is able to gather one million votes on their own, but together they were able to appeal to 2.5 million voters, their synergy would have produced 500,000 more votes than had they each worked independently. A song is also a good example of human synergy, taking more than one musical part and putting them together to create a song that has a much more dramatic effect than each of the parts when played individually.
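The vote arithmetic in the politicians example amounts to subtracting the sum of the individual contributions from the combined result:

```python
def synergy_gain(combined_output, individual_outputs):
    """Output attributable to synergy: the combined result minus the
    simple sum of what each part produces working alone."""
    return combined_output - sum(individual_outputs)

# The two politicians: 1 million votes each alone, 2.5 million together.
votes_from_synergy = synergy_gain(2_500_000, [1_000_000, 1_000_000])  # 500000
```

A negative result from the same calculation would indicate negative synergy, as in the group-cohesion pitfalls discussed earlier.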

A third form of human synergy is when one person is able to complete two separate tasks with a single action, for example, a person asked by both a teacher and his boss at work to write an essay on how he could improve his work. A more visual example of this synergy is a drummer using four separate rhythms to create one drum beat.

Synergy usually arises when two persons with different complementary skills cooperate. In business, cooperation between people with organizational and technical skills happens very often. In general, the most common reason why people cooperate is that it brings synergy. On the other hand, people tend to specialize precisely so they can form groups with high synergy (see also division of labor and teamwork).

Example: two teams in system administration working together, combining technical and organizational skills to better the client experience, thus creating synergy. Counter-examples can be found in books like The Mythical Man-Month, in which adding team members is shown to have negative effects on productivity.

Organismic computing is an approach to improving group efficacy by increasing synergy in human groups via technological means.

When synergy occurs in the workplace, the individuals involved get to work in a positive and supportive working environment, and the company reaps the benefits. The authors of Creating the Best Workplace on Earth, Rob Goffee and Gareth Jones, state that "highly engaged employees are, on average, 50% more likely to exceed expectations than the least-engaged workers. And companies with highly engaged people outperform firms with the most disengaged folks – by 54% in employee retention, by 89% in customer satisfaction, and by fourfold in revenue growth" (Goffee & Jones, p. 100). Also, those who are able to be open about their views on the company, and have confidence that they will be heard, are likely to be more organized employees who help their fellow team members succeed.[21]

Corporate synergy

Corporate synergy occurs when corporations interact congruently. A corporate synergy refers to a financial benefit that a corporation expects to realize when it merges with or acquires another corporation. This type of synergy is a nearly ubiquitous feature of a corporate acquisition and is a negotiating point between the buyer and seller that impacts the final price both parties agree to. There are distinct types of corporate synergies, as follows.

Marketing

A marketing synergy refers to the use of information campaigns, studies, and scientific discovery or experimentation for research or development. It promotes the sale of products for varied uses or off-market sales, as well as the development of marketing tools and, in several cases, the exaggeration of effects. It is also often a meaningless buzzword used by corporate leaders.[22][23]

Revenue

A revenue synergy refers to the opportunity of a combined corporate entity to generate more revenue than its two predecessor stand-alone companies would be able to generate. For example, if company A sells product X through its sales force, company B sells product Y, and company A decides to buy company B then the new company could use each sales person to sell products X and Y, thereby increasing the revenue that each sales person generates for the company.

In media revenue, synergy is the promotion and sale of a product throughout the various subsidiaries of a media conglomerate, e.g. films, soundtracks, or video games.

Financial

Financial synergy gained by the combined firm is the result of a number of benefits which flow to the entity as a consequence of the acquisition or merger. These benefits may include:

Cash slack

This occurs when a firm with a number of cash-intensive projects acquires a firm that is cash-rich, enabling the new combined firm to enjoy the profits from investing the cash of one firm in the projects of the other.

Debt capacity

If two firms individually have little or no capacity to carry debt, it may be possible for them to combine and gain the capacity to carry debt through decreased gearing (leverage). This creates value for the firm, as debt is considered a cheaper source of finance.

Tax benefits

It is possible for one firm to have unused tax benefits that can be offset against the profits of the other after the combination, resulting in less tax being paid. However, this depends greatly on the tax law of the country.

Management

Synergy in management, and in relation to teamwork, refers to the combined effort of individuals as participants of the team:[24] the condition that exists when the organization's parts interact to produce a joint effect that is greater than the sum of the parts acting alone. Synergies can be positive or negative. Positive synergy has effects such as improved efficiency in operations, greater exploitation of opportunities, and improved utilization of resources. Negative synergy has effects such as reduced efficiency of operations, decreased quality, underutilization of resources, and disequilibrium with the external environment.

Cost

A cost synergy refers to the opportunity of a combined corporate entity to reduce or eliminate expenses associated with running a business. Cost synergies are realized by eliminating positions that are viewed as duplicate within the merged entity.[25] Examples include the headquarters office of one of the predecessor companies, certain executives, the human resources department, or other employees of the predecessor companies. This is related to the economic concept of economies of scale.

Synergistic action in economy

The synergistic action of economic players lies at the heart of economic phenomena. Synergistic action gives new dimensions to competitiveness, strategy, and network identity, becoming an unconventional "weapon" for those who exploit the potential of economic systems in depth.[26]

Synergistic determinants

The synergistic gravity equation (SYNGEq), as its elaborate name suggests, is a synthesis of the endogenous and exogenous factors that lead private and non-private economic decision-makers to take actions that synergistically exploit the economic network in which they operate. In other words, SYNGEq is a big picture of the factors and motivations that lead entrepreneurs to shape an active synergistic network. SYNGEq includes factors whose character changes over time (such as competitive conditions), as well as classic factors, such as the imperative of access to resources through collaboration and rapid responses. The synergistic gravity equation is represented by the formula:[27]

∑SYN.Act = ∑R- × I(CRed + COOP+ + AUnimit.) × V(Cust. + Info.) × cc

where:
  • ∑SYN.Act = the sum of the synergistic actions adopted (by the economic actor)
  • ∑R- = the amount of necessary but unpurchased resources
  • I(CRed) = the imperative for cost reductions
  • I(COOP+) = the imperative for deep cooperation (functional interdependence)
  • I(AUnimit.) = the imperative for acquiring inimitable competitive advantages (for the economic actor)
  • V(Cust.) = the necessity of customer value for securing future profits and competitive advantages
  • V(Info.) = the necessity of informational value for securing future profits and competitive advantages
  • cc = the specific competitive conditions in which the economic actor operates
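Read as a plain product of factors, the equation can be sketched numerically. The source specifies no units or scales for these factors, so every value below is purely illustrative and the variable names are hypothetical stand-ins for the symbols listed above:

```python
# Illustrative inputs only; SYNGEq's source does not define units or ranges.
unpurchased_resources = 3         # ∑R-  : necessary but unpurchased resources
cost_reduction_imperative = 0.8   # I(CRed)
cooperation_imperative = 0.9      # I(COOP+)
advantage_imperative = 0.7        # I(AUnimit.)
customer_value_need = 0.6         # V(Cust.)
informational_value_need = 0.5    # V(Info.)
competitive_conditions = 1.2      # cc

# ∑SYN.Act = ∑R- × I(...) × V(...) × cc, with the I and V groups summed.
syn_act = (unpurchased_resources
           * (cost_reduction_imperative
              + cooperation_imperative
              + advantage_imperative)
           * (customer_value_need + informational_value_need)
           * competitive_conditions)
print(syn_act)
```

The sketch only shows the multiplicative structure of the formula: the stronger the imperatives and value needs, and the more unpurchased resources an actor faces, the more synergistic actions the equation predicts.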

Synergistic networks and systems

The synergistic network is an integral part of the economic system which, through its coordination and control functions (over the economic actions undertaken), generates synergies. Networks that promote synergistic actions can be divided into horizontal synergistic networks and vertical synergistic networks.[28]

Synergy effects

Synergy effects are difficult (even impossible) for competitors to imitate, and difficult for their authors to reproduce, because these effects depend on a combination of factors with time-varying characteristics. Synergy effects are often called "synergistic benefits", being the direct and implied results of the synergistic actions developed or adopted.[29]

Computers

Synergy can also be defined as the combination of human strengths and computer strengths, such as advanced chess. Computers can process data much more quickly than humans, but lack the ability to respond meaningfully to arbitrary stimuli.

Synergy in literature

Etymologically, the "synergy" term was first used around 1600, deriving from the Greek word “synergos”, which means “to work together” or “to cooperate”. If during this period the synergy concept was mainly used in the theological field (describing “the cooperation of human effort with divine will”), in the 19th and 20th centuries, "synergy" was promoted in physics and biochemistry, being implemented in the study of the open economic systems only in the 1960 and 1970s.[30]
In 1938, J. R. R. Tolkien wrote an essay titled On Fairy-Stories, delivered as an Andrew Lang Lecture and reprinted in his book The Tolkien Reader, published in 1966. In it, he made two references to synergy, although he did not use that term. He wrote:
Faerie cannot be caught in a net of words; for it is one of its qualities to be indescribable, though not imperceptible. It has many ingredients, but analysis will not necessarily discover the secret of the whole.
And more succinctly, in a footnote, about the "part of producing the web of an intricate story", he wrote:
It is indeed easier to unravel a single thread — an incident, a name, a motive — than to trace the history of any picture defined by many threads. For with the picture in the tapestry a new element has come in: the picture is greater than, and not explained by, the sum of the component threads.

Synergy in the media

Informational synergies, which can also be applied in media, involve compressing the time needed to transmit, access, and use information; the flows, circuits, and means of handling information are based on a complementary, integrated, transparent, and coordinated use of knowledge.[31]

In media economics, synergy is the promotion and sale of a product (and all its versions) throughout the various subsidiaries of a media conglomerate,[32] e.g. films, soundtracks or video games. Walt Disney pioneered synergistic marketing techniques in the 1930s by granting dozens of firms the right to use his Mickey Mouse character in products and ads, and continued to market Disney media through licensing arrangements. These products can help advertise the film itself and thus help to increase the film's sales. For example, the Spider-Man films had toys of webshooters and figures of the characters made, as well as posters and games.[33] The NBC sitcom 30 Rock often shows the power of synergy, while also poking fun at the use of the term in the corporate world.[34] There are also different forms of synergy in popular card games like Yu-Gi-Oh!, Cardfight!! Vanguard, and Future Card Buddyfight.

Information theory

When multiple sources of information taken together provide more information than the sum of the information provided by each source alone, there is said to be a synergy in the sources. This is in contrast to the case in which the sources together provide less information than the sum of their individual contributions, in which case there is said to be a redundancy in the sources.
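The classic illustration is the exclusive-or: if Y = X1 XOR X2 with X1 and X2 fair coin flips, neither input alone tells you anything about Y, but both together determine it completely. A minimal sketch, computing mutual information directly from the joint distribution (the helper function here is for illustration, not from any particular library):

```python
from collections import Counter
from itertools import product
from math import log2

def mutual_information(pairs):
    """I(A;B) in bits, estimated from a list of equiprobable (a, b) samples."""
    n = len(pairs)
    p_ab = Counter(pairs)                 # joint distribution
    p_a = Counter(a for a, _ in pairs)    # marginal of A
    p_b = Counter(b for _, b in pairs)    # marginal of B
    return sum((c / n) * log2((c / n) / ((p_a[a] / n) * (p_b[b] / n)))
               for (a, b), c in p_ab.items())

# Y = X1 XOR X2 over all four equiprobable input combinations.
samples = [(x1, x2, x1 ^ x2) for x1, x2 in product((0, 1), repeat=2)]

i_x1_y = mutual_information([(x1, y) for x1, _, y in samples])     # 0.0 bits
i_x2_y = mutual_information([(x2, y) for _, x2, y in samples])     # 0.0 bits
i_joint = mutual_information([((x1, x2), y) for x1, x2, y in samples])  # 1.0 bit

print(i_x1_y, i_x2_y, i_joint)
```

Each source alone carries zero information about Y, yet jointly they carry one full bit: the joint information exceeds the sum of the individual contributions, which is synergy in the information-theoretic sense.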
