Complexity characterizes the behavior of a system or model whose components interact in multiple ways and follow local rules, meaning there is no reasonable higher-level instruction that defines the various possible interactions.
The term is generally used to characterize something with many parts where those parts interact with each other in multiple ways, culminating in a higher order of emergence greater than the sum of its parts. The study of these complex linkages at various scales is the main goal of complex systems theory.
Science as of 2010 takes a number of approaches to characterizing complexity; Zayed et al. reflect many of these. Neil Johnson states that "even among scientists, there is no unique definition of complexity – and the scientific notion has traditionally been conveyed using particular examples..." Ultimately Johnson adopts the definition of "complexity science" as "the study of the phenomena which emerge from a collection of interacting objects".
Overview
Definitions of complexity often depend on the concept of a "system"
– a set of parts or elements that have relationships among them
differentiated from relationships with other elements outside the
relational regime. Many definitions tend to postulate or assume that
complexity expresses a condition of numerous elements in a system and
numerous forms of relationships among the elements. However, what one
sees as complex and what one sees as simple is relative and changes with
time.
Warren Weaver posited in 1948 two forms of complexity: disorganized complexity, and organized complexity.
Phenomena of 'disorganized complexity' are treated using probability
theory and statistical mechanics, while 'organized complexity' deals
with phenomena that escape such approaches and confront "dealing
simultaneously with a sizable number of factors which are interrelated
into an organic whole". Weaver's 1948 paper has influenced subsequent thinking about complexity.
The approaches that embody concepts of systems, multiple
elements, multiple relational regimes, and state spaces might be
summarized as implying that complexity arises from the number of
distinguishable relational regimes (and their associated state spaces)
in a defined system.
Some definitions relate to the algorithmic basis for the
expression of a complex phenomenon or model or mathematical expression,
as later set out herein.
Disorganized vs. organized
One of the problems in addressing complexity issues has been formalizing the intuitive conceptual distinction between the large number of variances in relationships present in random collections and the sometimes large, but smaller, number of relationships between elements in systems. In the latter, constraints (related to the correlation of otherwise independent elements) simultaneously reduce the variation from element independence and create distinguishable regimes of more uniform, or correlated, relationships or interactions.
Weaver perceived and addressed this problem, in at least a
preliminary way, in drawing a distinction between "disorganized
complexity" and "organized complexity".
In Weaver's view, disorganized complexity results from the
particular system having a very large number of parts, say millions of
parts, or many more. Though the interactions of the parts in a
"disorganized complexity" situation can be seen as largely random, the
properties of the system as a whole can be understood by using
probability and statistical methods.
A prime example of disorganized complexity is a gas in a
container, with the gas molecules as the parts. Some would suggest that a
system of disorganized complexity may be compared with the (relative) simplicity of planetary orbits – the latter can be predicted by applying Newton's laws of motion.
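To make Weaver's point concrete, the following minimal sketch (illustrative only; the uniform speed distribution is an assumption, not something stated in the text) shows why aggregate properties of a many-part random system are statistically predictable even though each individual part is not.

```python
import random

# Sketch: a "gas" of many parts whose individual behaviour is random.
# Per-particle values are unpredictable, but the aggregate (mean) is
# stable, which is why probability and statistics can describe systems
# of disorganized complexity.
random.seed(0)

def mean_speed(n_particles):
    # Assign each particle a random speed (arbitrary units);
    # the uniform distribution here is an illustrative assumption.
    speeds = [random.uniform(0.0, 1.0) for _ in range(n_particles)]
    return sum(speeds) / len(speeds)

for n in (10, 1_000, 100_000):
    print(n, round(mean_speed(n), 4))
# The mean converges toward 0.5 as the number of parts grows, even
# though any single particle's speed tells us almost nothing.
```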
Of course, most real-world systems, including planetary orbits,
eventually become theoretically unpredictable even using Newtonian
dynamics, as discovered by modern chaos theory.
Organized complexity, in Weaver's view, resides in nothing else
than the non-random, or correlated, interaction between the parts. These
correlated relationships create a differentiated structure that can, as
a system, interact with other systems. The coordinated system manifests
properties not carried or dictated by individual parts. The organized
aspect of this form of complexity, vis-à-vis other systems rather than the
subject system, can be said to "emerge," without any "guiding hand".
The number of parts does not have to be very large for a
particular system to have emergent properties. A system of organized
complexity may be understood in its properties (behavior among the
properties) through modeling and simulation, particularly modeling and simulation with computers.
An example of organized complexity is a city neighborhood as a living
mechanism, with the neighborhood people among the system's parts.
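Since organized complexity is typically studied through computer modeling and simulation, a generic, minimal sketch is given below. The one-dimensional cellular automaton (Wolfram's Rule 30) is a standard textbook example chosen only for illustration; it is not drawn from Weaver or from the neighborhood example above.

```python
# Minimal sketch: a one-dimensional cellular automaton (Wolfram's Rule 30),
# used here only as a generic illustration of how simulation reveals
# structure that emerges from simple local interactions between parts.
RULE = 30
WIDTH = 64
STEPS = 20

def step(cells):
    # Each cell's next state depends only on itself and its two neighbours
    # (periodic boundary), i.e. purely local rules.
    n = len(cells)
    out = []
    for i in range(n):
        left, centre, right = cells[i - 1], cells[i], cells[(i + 1) % n]
        pattern = (left << 2) | (centre << 1) | right
        out.append((RULE >> pattern) & 1)
    return out

cells = [0] * WIDTH
cells[WIDTH // 2] = 1  # a single "seed" part
for _ in range(STEPS):
    print("".join("#" if c else "." for c in cells))
    cells = step(cells)
```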
Sources and factors
There are generally rules which can be invoked to explain the origin of complexity in a given system.
The source of disorganized complexity is the large number of
parts in the system of interest, and the lack of correlation between
elements in the system.
In the case of self-organizing living systems, usefully organized
complexity comes from beneficially mutated organisms being selected to
survive by their environment for their differential reproductive ability
or at least success over inanimate matter or less organized complex
organisms.
Complexity of an object or system is a relative property. For
instance, for many functions (problems), a computational complexity measure
such as time of computation is smaller when multitape Turing machines are used than when Turing machines with one tape are used. Random-access machines
allow one to decrease time complexity even further (Greenlaw and Hoover
1998: 226), while inductive Turing machines can decrease even the
complexity class of a function, language or set (Burgin 2005). This
shows that the tools of activity can be an important factor of complexity.
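A classical illustration of this machine-model dependence, not taken from the sources cited above, is the language of palindromes, whose time complexity differs between one-tape and two-tape Turing machines.

```latex
% Classical illustration (not from the cited sources): the time complexity
% of recognizing palindromes depends on the machine model used.
\[
  L_{\mathrm{pal}} \;=\; \{\, w \in \{0,1\}^{*} : w = w^{R} \,\}
\]
\[
  \text{one-tape Turing machine: } \Omega(n^{2}) \text{ time (crossing-sequence lower bound)},
  \qquad
  \text{two-tape Turing machine: } O(n) \text{ time (copy, then compare in reverse).}
\]
```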
Varied meanings
In several scientific fields, "complexity" has a precise meaning:
- In computational complexity theory, the amounts of resources required for the execution of algorithms are studied. The most popular types of computational complexity are the time complexity of a problem equal to the number of steps that it takes to solve an instance of the problem as a function of the size of the input (usually measured in bits), using the most efficient algorithm, and the space complexity of a problem equal to the volume of the memory used by the algorithm (e.g., cells of the tape) that it takes to solve an instance of the problem as a function of the size of the input (usually measured in bits), using the most efficient algorithm. This allows classification of computational problems by complexity class (such as P, NP, etc.). An axiomatic approach to computational complexity was developed by Manuel Blum. It allows one to deduce many properties of concrete computational complexity measures, such as time complexity or space complexity, from properties of axiomatically defined measures.
- In algorithmic information theory, the Kolmogorov complexity (also called descriptive complexity, algorithmic complexity or algorithmic entropy) of a string is the length of the shortest binary program that outputs that string. Minimum message length is a practical application of this approach. Different kinds of Kolmogorov complexity are studied: the uniform complexity, prefix complexity, monotone complexity, time-bounded Kolmogorov complexity, and space-bounded Kolmogorov complexity. An axiomatic approach to Kolmogorov complexity based on Blum axioms (Blum 1967) was introduced by Mark Burgin in the paper presented for publication by Andrey Kolmogorov. The axiomatic approach encompasses other approaches to Kolmogorov complexity. It is possible to treat different kinds of Kolmogorov complexity as particular cases of axiomatically defined generalized Kolmogorov complexity. Instead of proving similar theorems, such as the basic invariance theorem, for each particular measure, it is possible to easily deduce all such results from one corresponding theorem proved in the axiomatic setting. This is a general advantage of the axiomatic approach in mathematics. The axiomatic approach to Kolmogorov complexity was further developed in the book (Burgin 2005) and applied to software metrics (Burgin and Debnath, 2003; Debnath and Burgin, 2003).
- In information processing, complexity is a measure of the total number of properties transmitted by an object and detected by an observer. Such a collection of properties is often referred to as a state.
- In physical systems, complexity is a measure of the probability of the state vector of the system. This should not be confused with entropy; it is a distinct mathematical measure, one in which two distinct states are never conflated and considered equal, as is done for the notion of entropy in statistical mechanics.
- In mathematics, Krohn–Rhodes complexity is an important topic in the study of finite semigroups and automata.
- In network theory, complexity is the product of richness in the connections between components of a system, and is defined by a very unequal distribution of certain measures (some elements being highly connected and some very few; see complex network).
- In software engineering, programming complexity is a measure of the interactions of the various elements of the software. This differs from the computational complexity described above in that it is a measure of the design of the software.
- In an abstract sense, abstract complexity is based on the perception of visual structures. It is the complexity of a binary string, defined as the square of the number of features divided by the number of elements (0's and 1's). Features here comprise all distinctive arrangements of 0's and 1's. Though the number of features must always be approximated, the definition is precise and meets the intuitive criterion.
Other fields introduce less precisely defined notions of complexity:
- A complex adaptive system has some or all of the following attributes:
- The number of parts (and types of parts) in the system and the number of relations between the parts is non-trivial – however, there is no general rule to separate "trivial" from "non-trivial";
- The system has memory or includes feedback;
- The system can adapt itself according to its history or feedback;
- The relations between the system and its environment are non-trivial or non-linear;
- The system can be influenced by, or can adapt itself to, its environment;
- The system is highly sensitive to initial conditions.
Study
Complexity
has always been a part of our environment, and therefore many scientific
fields have dealt with complex systems and phenomena. From one
perspective, that which is somehow complex – displaying variation
without being random – is most worthy of interest given the rewards found in the depths of exploration.
The use of the term complex is often confused with the term
complicated. In today's systems, this is the difference between myriad
connecting "stovepipes" and effective "integrated" solutions. This means that complex is the opposite of independent, while complicated is the opposite of simple.
While this has led some fields to come up with specific
definitions of complexity, there is a more recent movement to regroup
observations from different fields to study complexity in itself, whether it appears in anthills, human brains, stock markets, or social systems. One such interdisciplinary group of fields is relational order theories.
Topics
Behaviour
The behavior of a complex system is often said to be due to emergence and self-organization.
Chaos theory has investigated the sensitivity of systems to variations
in initial conditions as one cause of complex behavior.
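As a minimal sketch of such sensitivity, the logistic map below (a standard chaos-theory example, though not named in the text) shows two trajectories that start almost identically and quickly diverge.

```python
# Sketch of sensitivity to initial conditions using the logistic map,
# a standard chaos-theory example (the map itself is not mentioned in
# the text above): x_{t+1} = r * x_t * (1 - x_t) with r = 4.
r = 4.0
x, y = 0.200000, 0.200001   # two almost identical starting conditions

for t in range(1, 26):
    x = r * x * (1 - x)
    y = r * y * (1 - y)
    if t % 5 == 0:
        print(f"t={t:2d}  x={x:.6f}  y={y:.6f}  |x-y|={abs(x - y):.6f}")
# The tiny initial difference of 1e-6 grows until the two trajectories
# are completely decorrelated, one cause of complex behavior.
```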
Mechanisms
Recent developments around artificial life, evolutionary computation and genetic algorithms have led to an increasing emphasis on complexity and complex adaptive systems.
Simulations
In social science, the study of the emergence of macro-properties from micro-properties is known as the macro-micro view in sociology. The topic is commonly recognized as social complexity, which is often related to the use of computer simulation in social science, i.e. computational sociology.
Systems
Systems theory has long been concerned with the study of complex systems (in recent times, complexity theory and complex systems have also been used as names of the field). These systems are present in the research of a variety of disciplines, including biology, economics, social studies and technology. Recently, complexity has become a natural domain of interest of real-world socio-cognitive systems and emerging systemics research. Complex systems tend to be high-dimensional, non-linear, and difficult to model. In specific circumstances, they may exhibit low-dimensional behavior.
Data
In information theory, algorithmic information theory is concerned with the complexity of strings of data.
Complex strings are harder to compress. While intuition tells us that this may depend on the codec
used to compress a string (a codec could be theoretically created in
any arbitrary language, including one in which the very small command
"X" could cause the computer to output a very complicated string like
"18995316"), any two Turing-complete
languages can be implemented in each other, meaning that the length of
two encodings in different languages will vary by at most the length of
the "translation" language – which will end up being negligible for
sufficiently large data strings.
These algorithmic measures of complexity tend to assign high values to random noise. However, those studying complex systems would not consider randomness as complexity.
Information entropy is also sometimes used in information theory as indicative of complexity.
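A rough sketch of these ideas is given below: the zlib-compressed length serves as a crude, codec-dependent upper-bound proxy for algorithmic complexity, and byte-level Shannon entropy is one possible "information entropy" indicator. Both assign their highest values to random noise, as noted above. The specific choices (zlib, byte frequencies) are illustrative assumptions, not prescriptions from the text.

```python
import math
import os
import zlib

def compressed_length(data: bytes) -> int:
    # Length of the zlib-compressed data: a crude, codec-dependent
    # upper-bound proxy for algorithmic (Kolmogorov) complexity.
    return len(zlib.compress(data, 9))

def byte_entropy(data: bytes) -> float:
    # Empirical Shannon entropy (bits per byte) of the byte frequencies.
    counts = {b: data.count(b) for b in set(data)}
    total = len(data)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

regular = b"ab" * 5000          # highly regular string: compresses well
noise = os.urandom(10000)       # random noise: barely compresses at all

for name, s in (("regular", regular), ("random", noise)):
    print(name, len(s), compressed_length(s), round(byte_entropy(s), 3))
# Both measures assign the random string a much higher value, which is
# why such measures rate noise as "complex" even though complex-systems
# researchers would not equate randomness with complexity.
```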
Recent work in machine learning has examined the complexity of the data as it affects the performance of supervised classification algorithms. Ho and Basu present a set of complexity measures for binary classification problems.
The complexity measures broadly cover:
- the overlaps in feature values from differing classes.
- the separability of the classes.
- measures of geometry, topology, and density of manifolds.
Instance hardness is another approach that seeks to characterize data complexity with the goal of determining how hard a data set is to classify correctly; it is not limited to binary problems.
Instance hardness is a bottom-up approach that first seeks to
identify instances that are likely to be misclassified (or, in other
words, which instances are the most complex). The characteristics of the
instances that are likely to be misclassified are then measured based
on the output from a set of hardness measures. The hardness measures are
based on several supervised learning techniques such as measuring the
number of disagreeing neighbors or the likelihood of the assigned class
label given the input features. The information provided by the
complexity measures has been examined for use in meta learning to determine for which data sets filtering (or removing suspected noisy instances from the training set) is the most beneficial and could be expanded to other areas.
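A minimal sketch of one of the hardness measures mentioned above, the number of disagreeing nearest neighbors, is given below. The use of scikit-learn and a synthetic dataset are assumptions made for illustration; this is not the full instance-hardness or Ho and Basu framework.

```python
# Minimal sketch of one hardness measure mentioned above: the fraction of
# an instance's nearest neighbours that disagree with its class label.
# Uses scikit-learn (an assumption; any k-NN implementation would do) and
# a toy two-class dataset; it is not the full instance-hardness framework.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.neighbors import NearestNeighbors

X, y = make_classification(n_samples=200, n_features=5, random_state=0)

k = 5
# k + 1 neighbours because each point is its own nearest neighbour.
nn = NearestNeighbors(n_neighbors=k + 1).fit(X)
_, idx = nn.kneighbors(X)

# Hardness of instance i = share of its k neighbours with a different label.
hardness = np.array([(y[idx[i, 1:]] != y[i]).mean() for i in range(len(y))])

print("mean hardness:", hardness.mean().round(3))
print("hardest instances:", np.argsort(hardness)[-5:])
# Instances with high neighbour disagreement are the ones most likely to
# be misclassified, i.e. the most "complex" instances in the data set.
```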
In molecular recognition
A recent study based on molecular simulations and compliance constants describes molecular recognition as a phenomenon of organization.
Even for small molecules like carbohydrates, the recognition process cannot be predicted or designed even assuming that each individual hydrogen bond's strength is exactly known.
Applications
Computational complexity theory is the study of the complexity of problems – that is, the difficulty of solving
them. Problems can be classified by complexity class according to the
time it takes for an algorithm – usually a computer program – to solve
them as a function of the problem size. Some problems are difficult to
solve, while others are easy. For example, some difficult problems need
algorithms that take an exponential amount of time in terms of the size
of the problem to solve. Take the travelling salesman problem, for example. It can be solved in time O(n²2ⁿ) (where n
is the size of the network to visit – the number of cities the
travelling salesman must visit exactly once). As the size of the network
of cities grows, the time needed to find the route grows (more than)
exponentially.
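To illustrate this growth concretely, the sketch below enumerates all tours by brute force. Note that the O(n²2ⁿ) bound quoted above corresponds to the Held-Karp dynamic-programming algorithm, which is already far faster than this (n-1)!-tour enumeration; the random city coordinates are purely illustrative.

```python
import itertools
import math
import random

# Sketch: brute-force search for the shortest tour, to illustrate how the
# work explodes with the number of cities. (The O(n^2 * 2^n) bound quoted
# above refers to the Held-Karp dynamic-programming algorithm, which is
# already far better than this (n-1)!-tour enumeration.)
random.seed(0)

def brute_force_tsp(n):
    # Random city coordinates, purely illustrative.
    cities = [(random.random(), random.random()) for _ in range(n)]
    dist = lambda a, b: math.dist(cities[a], cities[b])
    best = float("inf")
    tours = 0
    for perm in itertools.permutations(range(1, n)):   # fix city 0 as start
        tours += 1
        route = (0, *perm, 0)
        length = sum(dist(route[i], route[i + 1]) for i in range(n))
        best = min(best, length)
    return best, tours

for n in range(4, 11):
    best, tours = brute_force_tsp(n)
    print(f"n={n:2d}  tours examined={tours:9d}  best length={best:.3f}")
# The number of tours examined grows as (n-1)!, so each extra city
# multiplies the running time, which is why brute force quickly becomes
# impractical as the network of cities grows.
```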
Even though a problem may be computationally solvable in
principle, in actual practice it may not be that simple. These problems
might require large amounts of time or an inordinate amount of space. Computational complexity
may be approached from many different aspects. Computational complexity
can be investigated on the basis of time, memory or other resources
used to solve the problem. Time and space are two of the most important
and popular considerations when problems of complexity are analyzed.
There exists a class of problems that, although they are solvable in
principle, require so much time or space that it is not
practical to attempt to solve them. These problems are called intractable.
There is another form of complexity called hierarchical complexity. It is orthogonal to the forms of complexity discussed so far, which are called horizontal complexity.