Sentientism (or sentiocentrism) is an ethical philosophy that places sentience at the center of moral concern. It holds that moral consideration extends to all sentient beings. Gradualist sentientism assigns moral consideration based on the degree of sentience.
Sentientists argue that assigning different levels of moral consideration based solely on species membership, rather than morally relevant attributes like sentience, constitutes a form of unjustified discrimination known as speciesism.
Etymology
The term sentientism was used by John Rodman in 1977, who referred to Peter Singer's philosophy as "a kind of zoƶcentric sentientism". Andrew Linzey defined the term in 1980 to denote an attitude that arbitrarily favours sentients over non-sentients.
History
English utilitarian philosopher Jeremy Bentham (1748–1832), early proponent of sentientism
The 18th-century utilitarian philosopher Jeremy Bentham was among the first to argue for sentientism. He maintained that any individual who is capable of subjective experience should be considered a moral subject. Members of species who are able to experience pleasure and pain are thus included in the category. In his Introduction to the Principles of Morals and Legislation, Bentham made a comparison between slavery and sadism toward humans and non-human animals:
The
French have already discovered that the blackness of the skin is no
reason why a human being should be abandoned without redress to the
caprice of a tormentor [see Louis XIV's Code Noir]
... What else is it that should trace the insuperable line? Is it the
faculty of reason, or, perhaps, the faculty of discourse? But a
full-grown horse or dog is beyond comparison a more rational, as well as
a more conversable animal, than an infant of a day, or a week, or even a
month, old. But suppose the case were otherwise, what would it avail?
The question is not Can they reason? nor, Can they talk? but, Can they suffer?
— Jeremy Bentham, Introduction to the Principles of Morals and Legislation, (1823), 2nd edition, Chapter 17, footnote
The late 19th- and early 20th-century American philosopher J. Howard Moore, in Better-World Philosophy
(1899), described every sentient being as existing in a constant state
of struggle. He argued that what aids them in their struggle can be
called good and what opposes them can be called bad. Moore
believed that only sentient beings can make such moral judgements
because they are the only parts of the universe which can experience
pleasure and suffering. As a result, he argued that sentience and ethics
are inseparable and therefore every sentient piece of the universe has
an intrinsic ethical relationship to every other sentient part, but not
the insentient parts.
Moore used the term "zoocentricism" to describe the belief that
universal consideration and care should be given to all sentient beings;
he believed that this was too difficult for humans to comprehend in
their current stage of development.
Sentientism posits that sentience is the necessary and sufficient condition for belonging to the moral community; organisms other than humans are therefore morally important in their own right. According to the concept, sentient organisms are those with some form of subjective experience, which can encompass self-awareness and rationality as well as the capacity to experience pain and suffering.
Some sources consider sentientism a modification of traditional
ethics, one which holds that moral concern must be extended to
sentient animals.
Peter Singer provides the following justification of sentientism:
The
capacity for suffering and enjoying things is a prerequisite for having
interests at all, a condition that must be satisfied before we can
speak of interests in any meaningful way. It would be nonsense to say
that it was not in the interests of a stone to be kicked along the road
by a child. A stone does not have interests because it cannot suffer.
Nothing that we can do to it could possibly make any difference to its
welfare. A mouse, on the other hand, does have an interest in not being
tormented, because mice will suffer if they are treated in this way.
If a being suffers, there can be no moral justification for refusing to
take that suffering into consideration. No matter what the nature of the
being, the principle of equality requires that the suffering be counted
equally with the like suffering – in so far as rough comparisons can be
made – of any other being. If a being is not capable of suffering, or
of experiencing enjoyment or happiness, there is nothing to be taken
into account. This is why the limit of sentience (...) is the only
defensible boundary of concern for the interests of others.
— Peter Singer, Practical Ethics (2011), 3rd edition, Cambridge University Press, p. 50
Utilitarian philosophers such as Singer care about the well-being of sentient non-human animals as well as humans. They reject speciesism,
defined by Singer as a "prejudice or attitude of bias in favour of the
interests of members of one’s own species and against those of members
of other species". Singer considers speciesism to be a form of arbitrary
discrimination similar to racism or sexism.
Sentientists are opposed to human-centered ethics, but they may nevertheless identify as humanists, as humanism does not imply caring only for humans.
Gradualist sentientism proposes that the value of sentient beings
is relative to their degree of sentience, which is assumed to increase
with cognitive, emotional, and social complexity.
Criticism
John Rodman criticized the sentientist approach, commenting "the rest
of nature is left in a state of thinghood, having no intrinsic worth,
acquiring instrumental value only as resources for the well-being of an
elite of sentient beings".
The sentientism of Peter Singer
and others has been criticized for holding the view that only sentient
creatures have moral standing because they have interests. A human corpse, for example, may deserve respect and proper treatment
even though it lacks sentience and can no longer be harmed. The claim
that only sentient beings have interests has also been questioned, as a
person in a coma is not sentient but is still cared for. Philosopher Gregory Bassham has written that "many environmentalists
today reject sentientism and claim instead that all living things, both
plants and animals, have moral standing".
A biocentrist
may argue that valuing lifeforms that have sentience more than other
lifeforms is just as arbitrary as doing the same with any other trait.
The actual data mining task is the semi-automatic
or automatic analysis of massive quantities of data to extract
previously unknown, interesting patterns such as groups of data records (cluster analysis), unusual records (anomaly detection), and dependencies (association rule mining, sequential pattern mining). This usually involves using database techniques such as spatial indices.
These patterns can then be seen as a kind of summary of the input data,
and may be used in further analysis or, for example, in machine
learning and predictive analytics.
For example, the data mining step might identify multiple groups in the
data, which can then be used to obtain more accurate prediction results
by a decision support system.
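As a rough illustration of this cluster-then-predict idea, the following sketch uses scikit-learn on synthetic data; the number of clusters, the features, and the target are all invented for illustration and are not taken from any source cited here.

# Minimal sketch (synthetic data, arbitrary parameters): a clustering step
# whose output feeds a simple predictive model.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(600, 4))                    # hypothetical feature matrix
y = (X[:, 0] + X[:, 1] ** 2 > 1).astype(int)     # hypothetical target

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Data mining step: discover groups in the data.
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X_train)

# Decision-support step: use the discovered cluster label as an extra (crude) feature.
X_train_aug = np.column_stack([X_train, kmeans.predict(X_train)])
X_test_aug = np.column_stack([X_test, kmeans.predict(X_test)])

model = LogisticRegression(max_iter=1000).fit(X_train_aug, y_train)
print("accuracy with cluster feature:", model.score(X_test_aug, y_test))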
Neither the data collection, data preparation, nor result
interpretation and reporting is part of the data mining step, although
they do belong to the overall KDD process as additional steps.
The difference between data analysis
and data mining is that data analysis is used to test models and
hypotheses on the dataset, e.g., analyzing the effectiveness of a marketing campaign,
regardless of the amount of data. In contrast, data mining uses machine
learning and statistical models to uncover clandestine or hidden
patterns in a large volume of data.
The related terms data dredging, data fishing, and data snooping
refer to the use of data mining methods to sample parts of a larger
population data set that are (or may be) too small for reliable
statistical inferences to be made about the validity of any patterns
discovered. These methods can, however, be used in creating new
hypotheses to test against the larger data populations.
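The statistical point can be illustrated with an invented example using NumPy and SciPy (the sample size, number of variables, and the 0.05 threshold are arbitrary choices): scanning many pure-noise variables against a small sample yields several nominally "significant" correlations that would not replicate on new data.

# Illustrative sketch (pure noise): many hypotheses on a small sample
# produce spuriously "significant" correlations.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(42)
n_samples, n_variables = 30, 200          # small sample, many candidate variables
target = rng.normal(size=n_samples)
candidates = rng.normal(size=(n_samples, n_variables))

hits = [
    i for i in range(n_variables)
    if pearsonr(candidates[:, i], target)[1] < 0.05   # nominal p < 0.05
]
# With 200 independent noise variables, roughly 10 "significant" correlations
# are expected by chance alone.
print(f"{len(hits)} of {n_variables} noise variables look 'significant'")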
Etymology
In the 1960s, statisticians and economists used terms like data fishing or data dredging to refer to what they considered the bad practice of analyzing data without an a-priori hypothesis. The term "data mining" was used in a similarly critical way by economist Michael Lovell in an article published in the Review of Economic Studies in 1983. Lovell indicates that the practice "masquerades under a variety of
aliases, ranging from 'experimentation' (positive) to 'fishing' or
'snooping' (negative)".
The term data mining appeared around 1990 in the database
community, with generally positive connotations. For a short time in
the 1980s, the phrase "database mining"™ was used, but since it was
trademarked by HNC, a San Diego–based company, to pitch their Database Mining Workstation, researchers consequently turned to data mining. Other terms used include data archaeology, information harvesting, information discovery, knowledge extraction, etc. Gregory Piatetsky-Shapiro coined the term "knowledge discovery in databases" for the first workshop on the same topic (KDD-1989) and this term became more popular in the AI and machine learning communities. However, the term data mining became more popular in the business and press communities. Currently, the terms data mining and knowledge discovery are used interchangeably.
Background
The manual extraction of patterns from data has occurred for centuries. Early methods of identifying patterns in data include Bayes' theorem (1700s) and regression analysis (1800s). The proliferation, ubiquity and increasing power of computer technology
have dramatically increased data collection, storage, and manipulation
ability. As data sets
have grown in size and complexity, direct "hands-on" data analysis has
increasingly been augmented with indirect, automated data processing,
aided by other discoveries in computer science, especially in the field
of machine learning, such as neural networks, cluster analysis, genetic algorithms (1950s), decision trees and decision rules (1960s), and support vector machines (1990s). Data mining is the process of applying these methods with the intention of uncovering hidden patterns in large data sets. It bridges the gap from applied statistics and artificial intelligence (which usually provide the mathematical background) to database management
by exploiting the way data is stored and indexed in databases to
execute the actual learning and discovery algorithms more efficiently,
allowing such methods to be applied to ever-larger data sets.
Process
The knowledge discovery in databases (KDD) process is commonly defined with the stages (1) Selection, (2) Pre-processing, (3) Transformation, (4) Data mining, and (5) Interpretation/evaluation,
or a simplified process such as (1) Pre-processing, (2) Data Mining, and (3) Results Validation.
Polls conducted in 2002, 2004, 2007 and 2014 show that the CRISP-DM methodology is the leading methodology used by data miners.
The only other data mining standard named in these polls was SEMMA.
However, 3–4 times as many people reported using CRISP-DM. Several
teams of researchers have published reviews of data mining process
models, and Azevedo and Santos conducted a comparison of CRISP-DM and SEMMA in 2008.
Pre-processing
Before data mining algorithms can be used, a target data set must be
assembled. As data mining can only uncover patterns actually present in
the data, the target data set must be large enough to contain these
patterns while remaining concise enough to be mined within an acceptable
time limit. A common source for data is a data mart or data warehouse. Pre-processing is essential for analyzing multivariate data sets before data mining. The target set is then cleaned. Data cleaning removes the observations containing noise and those with missing data.
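A minimal sketch of this cleaning step, assuming pandas and entirely hypothetical column names and values, might look like the following.

# Minimal sketch (hypothetical columns): assembling and cleaning a target
# data set with pandas before mining.
import numpy as np
import pandas as pd

raw = pd.DataFrame({
    "customer_id": [1, 2, 3, 4, 5],
    "age": [34, np.nan, 29, 51, 200],          # a missing value and an implausible outlier
    "monthly_spend": [120.0, 80.5, np.nan, 64.2, 95.1],
})

target = raw.dropna()                           # drop records with missing data
target = target[target["age"].between(0, 120)]  # drop noisy/implausible observations
print(target)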
Data mining
Data mining involves six common classes of tasks:
Anomaly detection
(outlier/change/deviation detection) – The identification of unusual
data records that might be interesting, or of data errors that require
further investigation because they fall outside the standard range.
Association rule learning
(dependency modeling) – Searches for relationships between variables.
For example, a supermarket might gather data on customer purchasing
habits. Using association rule learning, the supermarket can determine
which products are frequently bought together and use this information
for marketing purposes. This is sometimes referred to as market basket analysis (a brief illustrative sketch follows this list of tasks).
Clustering
– is the task of discovering groups and structures in the data that are
in some way or another "similar", without using known structures in the
data.
Classification
– is the task of generalizing known structure to apply to new data. For
example, an e-mail program might attempt to classify an e-mail as
"legitimate" or as "spam".
Regression
– attempts to find a function that models the data with the least error,
that is, for estimating the relationships among data or datasets.
Summarization – providing a more compact representation of the data set, including visualization and report generation.
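As a toy illustration of the association-rule task above, the following pure-Python sketch computes support and confidence for two-item rules; the transactions and the minimum-support threshold are invented for illustration.

# Minimal sketch (invented transactions): support and confidence, the basic
# quantities behind association rule learning / market basket analysis.
from itertools import combinations

transactions = [
    {"bread", "butter", "milk"},
    {"bread", "butter"},
    {"milk", "eggs"},
    {"bread", "milk"},
    {"bread", "butter", "eggs"},
]
n = len(transactions)

def support(itemset):
    """Fraction of transactions containing every item in the itemset."""
    return sum(itemset <= t for t in transactions) / n

for a, b in combinations({"bread", "butter", "milk", "eggs"}, 2):
    supp = support({a, b})
    conf = supp / support({a}) if support({a}) else 0.0
    if supp >= 0.4:                      # illustrative minimum-support threshold
        print(f"{{{a}}} -> {{{b}}}: support={supp:.2f}, confidence={conf:.2f}")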
Results validation
An example of data produced by data dredging
through a bot operated by statistician Tyler Vigen, apparently showing a
close link between the winning word of a spelling bee competition and
the number of people in the United States killed by venomous spiders
Data
mining can unintentionally be misused, producing results that appear to
be significant but which do not actually predict future behavior and
cannot be reproduced
on a new sample of data, and are therefore of little use. This is
sometimes caused by investigating too many hypotheses and not performing
proper statistical hypothesis testing. A simple version of this problem in machine learning is known as overfitting,
but the same problem can arise at different phases of the process and
thus a train/test split—when applicable at all—may not be sufficient to
prevent this from happening.
The final step of knowledge discovery from data is to verify that
the patterns produced by the data mining algorithms occur in the wider
data set. Not all patterns found by the algorithms are necessarily
valid. It is common for data mining algorithms to find patterns in the
training set which are not present in the general data set. This is
called overfitting. To overcome this, the evaluation uses a test set
of data on which the data mining algorithm was not trained. The learned
patterns are applied to this test set, and the resulting output is
compared to the desired output. For example, a data mining algorithm
trying to distinguish "spam" from "legitimate" e-mails would be trained
on a training set of sample e-mails. Once trained, the learned patterns would be applied to the test set of e-mails on which it had not
been trained. The accuracy of the patterns can then be measured from
how many e-mails they correctly classify. Several statistical methods
may be used to evaluate the algorithm, such as ROC curves.
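A minimal sketch of this evaluation loop, assuming scikit-learn and a synthetic stand-in for the e-mail data (the data set, model, and split ratio are illustrative assumptions), is given below; accuracy and the area under the ROC curve are computed only on the held-out test set.

# Minimal sketch (synthetic data): evaluating learned patterns on a test set
# the algorithm was not trained on.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, roc_auc_score
from sklearn.model_selection import train_test_split

# Stand-in for "spam" vs. "legitimate" e-mails described by numeric features.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)  # train only on the training set

y_pred = clf.predict(X_test)                    # apply learned patterns to unseen e-mails
y_score = clf.predict_proba(X_test)[:, 1]
print("test accuracy:", accuracy_score(y_test, y_pred))
print("test ROC AUC :", roc_auc_score(y_test, y_score))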
If the learned patterns do not meet the desired standards, it is
necessary to re-evaluate and change the pre-processing and data mining
steps. If the learned patterns do meet the desired standards, then the
final step is to interpret the learned patterns and turn them into
knowledge.
Research
The premier professional body in the field is the Association for Computing Machinery's (ACM) Special Interest Group (SIG) on Knowledge Discovery and Data Mining (SIGKDD). Since 1989, this ACM SIG has hosted an annual international conference and published its proceedings, and since 1999 it has published a biannual academic journal titled "SIGKDD Explorations".
Computer science conferences on data mining include:
CIKM Conference – ACM Conference on Information and Knowledge Management
There have been some efforts to define standards for the data mining process, for example, the 1999 European Cross-Industry Standard Process for Data Mining (CRISP-DM 1.0) and the 2004 Java Data Mining
standard (JDM 1.0). Development on successors to these processes
(CRISP-DM 2.0 and JDM 2.0) was active in 2006 but has stalled since. JDM
2.0 was withdrawn without reaching a final draft.
For exchanging the extracted models—in particular for use in predictive analytics—the key standard is the Predictive Model Markup Language (PMML), which is an XML-based
language developed by the Data Mining Group (DMG) and supported as
exchange format by many data mining applications. As the name suggests,
it only covers prediction models, a particular data mining task of high
importance to business applications. However, extensions to cover (for
example) subspace clustering have been proposed independently of the DMG.
Data mining is used wherever there is digital data available. Notable examples of data mining can be found throughout business, medicine, science, finance, construction, and surveillance.
Privacy concerns and ethics
While the term "data mining" itself may have no ethical implications,
it is often associated with the mining of information in relation to user behavior (ethical and otherwise).
The ways in which data mining can be used can in some cases and contexts raise questions regarding privacy, legality, and ethics. In particular, data mining government or commercial data sets for national security or law enforcement purposes, such as in the Total Information Awareness Program or in ADVISE, has raised privacy concerns.
Data mining requires data preparation which can uncover information or patterns which may compromise confidentiality and privacy obligations. A common way for this to occur is through data aggregation. Data aggregation
involves combining data together (possibly from various sources) in a
way that facilitates analysis (but that also might make identification
of private, individual-level data deducible or otherwise apparent). The threat to an individual's privacy comes into play when the data,
once compiled, cause the data miner, or anyone who has access to the
newly compiled data set, to be able to identify specific individuals,
especially when the data were originally anonymous.
Data may also be modified so as to become anonymous, so that individuals may not readily be identified. However, even "anonymized"
data sets can potentially contain enough information to allow
identification of individuals, as occurred when journalists were able to
find several individuals based on a set of search histories that were
inadvertently released by AOL.
The inadvertent revelation of personally identifiable information that can be traced to its provider violates Fair Information Practices. This indiscretion can cause financial,
emotional, or bodily harm to the indicated individual. In one instance of privacy violation, the patrons of Walgreens filed a lawsuit against the company in 2011 for selling
prescription information to data mining companies who in turn provided the data
to pharmaceutical companies.
Situation in Europe
Europe has rather strong privacy laws, and efforts are underway to further strengthen the rights of the consumers. However, the U.S.–E.U. Safe Harbor Principles,
developed between 1998 and 2000, currently effectively expose European
users to privacy exploitation by U.S. companies. As a consequence of Edward Snowden's global surveillance disclosure, there has been increased discussion to revoke this agreement, as in particular the data will be fully exposed to the National Security Agency, and attempts to reach an agreement with the United States have failed.
In the United Kingdom in particular there have been cases of
corporations using data mining as a way to target certain groups of
customers, forcing them to pay unfairly high prices. These groups tend to
be people of lower socio-economic status who are not savvy to the ways
they can be exploited in digital marketplaces.
Situation in the United States
In the United States, privacy concerns have been addressed by the US Congress via the passage of regulatory controls such as the Health Insurance Portability and Accountability Act
(HIPAA). HIPAA requires individuals to give their "informed
consent" regarding information they provide and its intended present and
future uses. According to an article in Biotech Business Week,
"'[i]n practice, HIPAA may not offer any greater protection than the
longstanding regulations in the research arena,' says the AAHC. More
importantly, the rule's goal of protection through informed consent is
approaching a level of incomprehensibility to average individuals." This underscores the necessity for data anonymity in data aggregation and mining practices.
U.S. information privacy legislation such as HIPAA and the Family Educational Rights and Privacy Act
(FERPA) applies only to the specific areas that each such law
addresses. The use of data mining by the majority of businesses in the
U.S. is not controlled by any legislation.
Copyright law
Situation in Europe
European Union
Even if there is no copyright in a dataset, the European Union recognises a Database right, so data mining becomes subject to intellectual property owners' rights that are protected by the Database Directive. Under European copyright and database laws, the mining of in-copyright works (such as by web mining) without the permission of the copyright owner is permitted under Articles 3 and 4 of the 2019 Directive on Copyright in the Digital Single Market.
A specific TDM exception for scientific research is described in
article 3, whereas a more general exception described in article 4 only
applies if the copyright holder has not opted out.
The European Commission facilitated stakeholder discussion on text and data mining in 2013, under the title of Licences for Europe. The focus on licensing, rather
than limitations and exceptions, as the solution to this legal issue led representatives of
universities, researchers, libraries, civil society groups and open access publishers to leave the stakeholder dialogue in May 2013.
United Kingdom
On the recommendation of the Hargreaves review, the UK government amended its copyright law in 2014 to allow content mining as a limitation and exception. The UK was the second country in the world to do so after Japan, which
introduced an exception in 2009 for data mining. However, due to the
restriction of the Information Society Directive
(2001), the UK exception only allows content mining for non-commercial
purposes. UK copyright law also does not allow this provision to be
overridden by contractual terms and conditions.
Switzerland
Since 2020, Switzerland has also regulated data mining by
allowing it in the research field under certain conditions laid down by
art. 24d of the Swiss Copyright Act. This new article entered into force
on 1 April 2020.
Situation in the United States
US copyright law, and in particular its provision for fair use, upholds the legality of content mining in America, as do the laws of other fair use countries such as Israel, Taiwan and South Korea.
As content mining is transformative, that is it does not supplant the
original work, it is viewed as being lawful under fair use. For example,
as part of the Google Book settlement the presiding judge on the case ruled that Google's
digitization project of in-copyright books was lawful, in part because
of the transformative uses that the digitization project displayed—one
being text and data mining.
Free open-source data mining software and applications
The following applications are available under free/open-source licenses.
MEPX: cross-platform tool for regression and classification problems based on a Genetic Programming variant.
mlpack: a collection of ready-to-use machine learning algorithms written in the C++ language.
NLTK (Natural Language Toolkit): A suite of libraries and programs for symbolic and statistical natural language processing (NLP) for the Python language.
UIMA:
The UIMA (Unstructured Information Management Architecture) is a
component framework for analyzing unstructured content such as text,
audio and video – originally developed by IBM.
Weka: A suite of machine learning software applications written in the Java programming language.
Proprietary data-mining software and applications
The following applications are available under proprietary licenses.
LIONsolver:
an integrated software application for data mining, business
intelligence, and modeling that implements the Learning and Intelligent
OptimizatioN (LION) approach.
PolyAnalyst: data and text mining software by Megaputer Intelligence.
"An autopoietic machine is a
machine organized (defined as a unity) as a network of processes of
production (transformation and destruction) of components which: (i)
through their interactions and transformations continuously regenerate
and realize the network of processes (relations) that produced them; and
(ii) constitute it (the machine) as a concrete unity in space in which
they (the components) exist by specifying the topological domain of its
realization as such a network."
They describe the "space defined by an autopoietic system" as
"self-contained", a space that "cannot be described by using dimensions
that define another space. When we refer to our interactions with a
concrete autopoietic system, however, we project this system on the
space of our manipulations and make a description of this projection."
Meaning
Autopoiesis was originally presented as a system description that was said to define and explain the nature of living systems. A canonical example of an autopoietic system is the biological cell. The eukaryotic cell, for example, is made of various biochemical components such as nucleic acids and proteins, and is organized into bounded structures such as the cell nucleus, various organelles, a cell membrane and cytoskeleton. These structures, based on an internal flow of molecules and energy, produce the components which, in turn, continue to maintain the organized bounded structure that gives rise to these components.
An autopoietic system is to be contrasted with an allopoietic
system, such as a car factory, which uses raw materials (components) to
generate a car (an organized structure) which is something other
than itself (the factory). However, if the system is extended from the
factory to include components in the factory's "environment", such as
supply chains, plant / equipment, workers, dealerships, customers,
contracts, competitors, cars, spare parts, and so on, then as a total
viable system it could be considered to be autopoietic.
Autopoiesis in biological systems can be viewed as a network of
constraints that work to maintain themselves. This concept has been
called organizational closure or constraint closure and is closely related to the study of autocatalytic chemical networks where constraints are reactions required to sustain life.
Though others have often used the term as a synonym for self-organization,
Maturana himself stated he would "[n]ever use the notion of
self-organization ... Operationally it is impossible. That is, if the
organization of a thing changes, the thing changes". Moreover, an autopoietic system is autonomous and operationally closed,
in the sense that there are sufficient processes within it to maintain
the whole. Autopoietic systems are "structurally coupled" with their
medium, embedded in a dynamic of changes that can be recalled as sensory-motor coupling. This continuous dynamic is considered as a rudimentary form of knowledge or cognition and can be observed throughout life-forms.
An application of the concept of autopoiesis to sociology can be found in Niklas Luhmann's Systems Theory, which was subsequently adapted by Bob Jessop in his studies of the capitalist state system. Marjatta Maula adapted the concept of autopoiesis in a business context. The theory of autopoiesis has also been applied in the context of legal
systems by not only Niklas Luhmann, but also Gunther Teubner. Patrik Schumacher has applied the term to refer to the 'discursive self-referential making of architecture.' Varela eventually further applied autopoiesis to develop non-representationalist, enactive models of mind, brain, and behavior in embodied cognitive neuroscience, culminating in neurophenomenology.
In the context of textual studies, Jerome McGann
argues that texts are "autopoietic mechanisms operating as
self-generating feedback systems that cannot be separated from those who
manipulate and use them". Citing Maturana and Varela, he defines an autopoietic system as "a
closed topological space that 'continuously generates and specifies its
own organization through its operation as a system of production of its
own components, and does this in an endless turnover of components'",
concluding that "Autopoietic systems are thus distinguished from
allopoietic systems, which are Cartesian and which 'have as the product
of their functioning something different from themselves'". "Coding and markup appear allopoietic",
McGann argues, but are generative parts of the system they serve to
maintain, and thus language and print or electronic technology are
autopoietic systems.
"Hegel
is – to use today's terms – the ultimate thinker of autopoiesis, of the
process of the emergence of necessary features out of chaotic
contingency, the thinker of contingency's gradual self-organisation, of
the gradual rise of order out of chaos."[18]
Relation to complexity
Autopoiesis can be defined as the ratio between the complexity of a system and the complexity of its environment.
This generalized view of
autopoiesis considers systems as self-producing not in terms of their
physical components, but in terms of its organization, which can be
measured in terms of information and complexity. In other words, we can
describe autopoietic systems as those producing more of their own
complexity than the one produced by their environment.
— Carlos Gershenson, "Requisite Variety, Autopoiesis, and Self-organization"
Autopoiesis has been proposed as a potential mechanism of abiogenesis, by which molecules evolved into more complex cells that could support the development of life.
Comparison with other theories of life
Autopoiesis is just one of several current theories of life, including the chemoton of Tibor GƔnti, the hypercycle of Manfred Eigen and Peter Schuster, the (M,R) systems of Robert Rosen, and the autocatalytic sets of Stuart Kauffman, similar to an earlier proposal by Freeman Dyson. All of these (including autopoiesis) found their original inspiration in Erwin Schrƶdinger's book What is Life? but at first they appear to have little in common with one another,
largely because the authors did not communicate with one another, and
none of them made any reference in their principal publications to any
of the other theories. Nonetheless, there are more similarities than
may be obvious at first sight, for example between GƔnti and Rosen. Until recently there have been almost no attempts to compare the different theories and discuss them together.
Relation to cognition
An extensive discussion of the connection of autopoiesis to cognition is provided by Evan Thompson in his 2007 publication, Mind in Life. The basic notion of autopoiesis as involving constructive interaction
with the environment is extended to include cognition. Initially,
Maturana defined cognition as behavior of an organism "with relevance to
the maintenance of itself". However, computer models that are self-maintaining but non-cognitive
have been devised, so some additional restrictions are needed, and the
suggestion is that the maintenance process, to be cognitive, involves
readjustment of the internal workings of the system in some metabolic process. On this basis it is claimed that autopoiesis is a necessary but not a sufficient condition for cognition. Thompson wrote that this distinction may or may not be fruitful, but
what matters is that living systems involve autopoiesis and (if it is
necessary to add this point) cognition as well. It can be noted that this definition of 'cognition' is restricted, and does not necessarily entail any awareness or consciousness
by the living system. With the publication of The Embodied Mind in
1991, Varela, Thompson and Rosch applied autopoiesis to make non-representationalist and enactive models of mind, brain and behavior, which further developed embodied cognitive neuroscience, later culminating in neurophenomenology.
Relation to consciousness
The connection of autopoiesis to cognition, or if necessary, of
living systems to cognition, is an objective assessment ascertainable by
observation of a living system.
One question that arises is about the connection between
cognition seen in this manner and consciousness. The separation of
cognition and consciousness recognizes that the organism may be unaware
of the substratum where decisions are made. What is the connection
between these realms? Thompson refers to this issue as the "explanatory gap", and one aspect of it is the hard problem of consciousness, how and why we have qualia.
A second question is whether autopoiesis can provide a bridge
between these concepts. Thompson discusses this issue from the
standpoint of enactivism.
An autopoietic cell actively relates to its environment. Its sensory
responses trigger motor behavior governed by autopoiesis, and this
behavior (it is claimed) is a simplified version of a nervous system
behavior. The further claim is that real-time interactions like this
require attention, and an implication of attention is awareness.
Criticism
There are multiple criticisms of the use of the term in both its
original context, as an attempt to define and explain the living, and
its various expanded usages, such as applying it to self-organizing
systems in general or social systems in particular. Critics have argued that the concept and its theory fail to define or
explain living systems and that, because of the extreme language of self-referentiality it uses without any external reference, it is really an attempt to give substantiation to Maturana's radical constructivist or solipsistic epistemology, or what Danilo Zolo has called instead a "desolate theology". An example is the assertion
by Maturana and Varela that "We do not see what we do not see and what
we do not see does not exist".
According to Razeto-Barry, the influence of Autopoiesis and Cognition: The Realization of the Living
in mainstream biology has proven to be limited. Razeto-Barry believes
that autopoiesis is not commonly used as the criterion for life.
Zoologist and philosopher Donna Haraway also criticizes the usage of the term, arguing that "nothing makes itself; nothing is really autopoietic or self-organizing", and suggests the use of sympoiesis, meaning "making-with", instead.
Multiple sequence alignment
(in this case DNA sequences) and illustrations of the use of
substitution models to make evolutionary inferences. The data in this
alignment (in this case a toy example with 18 sites) is converted to a
set of site patterns. The site patterns are shown along with the number
of times they occur in alignment. These site patterns are used to
calculate the likelihood given the substitution model and a phylogenetic tree
(in this case an unrooted four-taxon tree). It is also necessary to
assume a substitution model to estimate evolutionary distances for pairs
of sequences (distances are the number of substitutions that have
occurred since sequences had a common ancestor). The evolutionary
distance equation (d12) is based on the simple model proposed by Jukes and Cantor in 1969. The equation transforms the proportion of nucleotide differences between taxa 1 and 2 (p12
= 4/18; the four site patterns that differ between taxa 1 and 2 are
indicated with asterisks) into an evolutionary distance (in this case d12=0.2635 substitutions per site).
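The distance quoted above can be reproduced from the Jukes-Cantor formula d = -(3/4) ln(1 - 4p/3); a short sketch in plain Python, using only the proportion of differing sites given in the caption, is shown below.

# Sketch reproducing the caption's Jukes-Cantor distance from the observed
# proportion of differing sites (p12 = 4/18).
import math

def jukes_cantor_distance(p):
    """Jukes-Cantor (1969) distance: d = -3/4 * ln(1 - 4p/3)."""
    return -0.75 * math.log(1.0 - 4.0 * p / 3.0)

p12 = 4 / 18
print(round(jukes_cantor_distance(p12), 4))   # ~0.2635 substitutions per site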
Some phylogenetic methods account for variation among sites and among tree branches. Different genes, e.g. hemoglobin vs. cytochrome c, generally evolve at different rates. These rates are relatively constant over time (e.g., hemoglobin does
not evolve at the same rate as cytochrome c, but hemoglobins from
humans, mice, etc. do have comparable rates of evolution), although
rapid evolution along one branch can indicate increased directional selection on that branch. Purifying selection causes functionally important regions to evolve more slowly, and amino acid substitutions involving similar amino acids occur more often than dissimilar substitutions.
Gene phylogeny as lines within grey species phylogeny. Top: An ancestral gene duplication produces two paralogs (histone H1.1 and 1.2). A speciation event produces orthologs in the two daughter species (human and chimpanzee). Bottom: in a separate species (E. coli), a gene has a similar function (histone-like nucleoid-structuring protein) but has a separate evolutionary origin and so is an analog.
Gene duplication can produce multiple homologous proteins (paralogs) within the same species. Phylogenetic analysis of proteins has revealed how proteins evolve and change their structure and function over time.
For example, ribonucleotide reductase (RNR) has evolved a multitude of structural and functional variants. Class I RNRs use a ferritin subunit and differ by the metal they use as cofactors. In class II RNRs, the thiyl radical is generated using an adenosylcobalamin cofactor and these enzymes do not require additional subunits (as opposed to class I which do). In class III RNRs, the thiyl radical is generated using S-adenosylmethionine bound to a [4Fe-4S] cluster. That is, within a single family of proteins numerous structural and functional mechanisms can evolve.
In a proof-of-concept study, Bhattacharya and colleagues converted myoglobin, a non-enzymatic oxygen storage protein, into a highly efficient Kemp eliminase using only three mutations. This demonstrates that only few mutations are needed to radically change the function of a protein. Directed evolution is the attempt to engineer proteins using methods inspired by molecular evolution.
This hedgehog has no pigmentation due to a mutation.
Mutations are permanent, transmissible changes to the genetic material (DNA or RNA) of a cell or virus. Mutations result from errors in DNA replication during cell division and by exposure to radiation, chemicals, other environmental stressors, viruses, or transposable elements. When point mutations to just one base-pair of the DNA fall within a region coding for a protein, they are characterized by whether they are synonymous
(do not change the amino acid sequence) or non-synonymous. Other types
of mutations modify larger segments of DNA and can cause duplications,
insertions, deletions, inversions, and translocations.
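As a small illustration of the synonymous/non-synonymous distinction, the following sketch classifies single-base codon changes; the codons and the partial codon table are chosen for illustration only.

# Sketch (illustrative codons, partial standard genetic code): classifying a
# point mutation within a codon as synonymous or non-synonymous.
CODON_TABLE = {            # small excerpt of the standard genetic code
    "GAA": "Glu", "GAG": "Glu",
    "GAC": "Asp", "GAT": "Asp",
    "AAA": "Lys", "AAG": "Lys",
}

def classify(codon_before, codon_after):
    before, after = CODON_TABLE[codon_before], CODON_TABLE[codon_after]
    return "synonymous" if before == after else f"non-synonymous ({before} -> {after})"

print("GAA -> GAG:", classify("GAA", "GAG"))   # same amino acid (Glu): synonymous
print("GAA -> GAC:", classify("GAA", "GAC"))   # Glu -> Asp: non-synonymous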
The distribution of rates for diverse kinds of mutations is called the "mutation spectrum". Mutations of different types occur at widely varying rates. Point mutation rates for most organisms are very low, roughly 10^−9 to 10^−8 per site per generation, though some viruses have higher mutation rates, on the order of 10^−6 per site per generation. Transitions (A ↔ G or C ↔ T) are more common than transversions (purine (adenine or guanine) ↔ pyrimidine (cytosine or thymine, or in RNA, uracil)). Perhaps the most common type of mutation in humans is a change in the length of a short tandem repeat
(e.g., the CAG repeats underlying various disease-associated
mutations). Such STR mutations may occur at rates on the order of 10^−3 per generation.
Different frequencies of different types of mutations can play an important role in evolution via bias in the introduction of variation (arrival bias), contributing to parallelism, trends, and differences in the navigability of adaptive landscapes. Mutation bias makes systematic or predictable contributions to parallel evolution. Since the 1960s, genomic GC content has been thought to reflect mutational tendencies. Mutational biases also contribute to codon usage bias. Although such hypotheses are often associated with neutrality, recent
theoretical and empirical results have established that mutational
tendencies can influence both neutral and adaptive evolution via bias in the introduction of variation (arrival bias).
Selection can occur when an allele confers greater fitness, i.e. greater ability to survive or reproduce, on the average individual that carries it. A selectionist approach emphasizes e.g. that biases in codon usage are due at least in part to the ability of even weak selection to shape molecular evolution.
Genetic drift is the change of allele frequencies from one generation to the next due to stochastic effects of random sampling in finite populations. These effects can accumulate until a mutation becomes fixed in a population.
For neutral mutations, the rate of fixation per generation is equal to
the mutation rate per replication. A relatively constant mutation rate
thus produces a constant rate of change per generation (molecular
clock).
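The equality holds because roughly 2Nμ new neutral mutations enter a diploid population of size N each generation and each fixes with probability 1/(2N), leaving a fixation rate of μ per generation. The sketch below (population size and replicate count are arbitrary choices) checks the 1/(2N) fixation probability by simulating Wright-Fisher drift of a single new neutral mutation.

# Sketch (illustrative parameters): Wright-Fisher drift of one new neutral
# mutation; its fixation probability is ~1/(2N).
import random

def fixes(N, p0):
    """Simulate one diploid Wright-Fisher population until loss or fixation."""
    count = int(round(p0 * 2 * N))          # copies of the allele among 2N genes
    while 0 < count < 2 * N:
        p = count / (2 * N)
        count = sum(random.random() < p for _ in range(2 * N))  # binomial sampling
    return count == 2 * N

N, replicates = 50, 20000
fixed = sum(fixes(N, 1 / (2 * N)) for _ in range(replicates))
print("estimated fixation probability:", fixed / replicates)   # expect ~1/(2N) = 0.01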
Slightly deleterious mutations with a selection coefficient less than a threshold value of 1 / the effective population size
can also fix. Many genomic features have been ascribed to accumulation
of nearly neutral detrimental mutations as a result of small effective
population sizes. With a smaller effective population size, a larger variety of mutations
will behave as if they are neutral due to inefficiency of selection.
Gene conversion occurs during recombination, when nucleotide damage is repaired
using a homologous genomic region as a template. It can be a biased
process, i.e. one allele may have a higher probability of being the
donor than the other in a gene conversion event. In particular,
GC-biased gene conversion tends to increase the GC-content of genomes, particularly in regions with higher recombination rates. There is also evidence for GC bias in the mismatch repair process. It is thought that this may be an adaptation to the high rate of
methyl-cytosine deamination which can lead to C→T transitions.
The dynamics of biased gene conversion resemble those of natural selection, in that a favored allele will tend to increase exponentially in frequency when rare.
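As an illustration of that claim, the sketch below (all parameters are arbitrary) iterates the standard recursion for an allele with a small per-generation transmission advantage s, whether that advantage arises from selection or from biased conversion, showing near-exponential growth while the allele is rare.

# Sketch (illustrative parameters): an allele favored by a small per-generation
# advantage s grows roughly exponentially (p ~ p0*(1+s)^t) while it is rare.
s, p = 0.01, 0.001
for generation in range(0, 501, 100):
    print(f"gen {generation:3d}: frequency = {p:.4f}")
    for _ in range(100):
        p = p * (1 + s) / (1 + p * s)    # deterministic haploid recursion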
Genome size is influenced by the amount of repetitive DNA as well as
number of genes in an organism. Some organisms, such as most bacteria, Drosophila, and Arabidopsis
have particularly compact genomes with little repetitive content or
non-coding DNA. Other organisms, like mammals or maize, have large
amounts of repetitive DNA, long introns, and substantial spacing between genes. The C-value paradox
refers to the lack of correlation between organism 'complexity' and
genome size. Explanations for the so-called paradox are two-fold.
First, repetitive genetic elements can comprise large portions of the
genome for many organisms, thereby inflating DNA content of the haploid
genome. Repetitive genetic elements are often descended from transposable elements.
Secondly, the number of genes is not necessarily indicative of
the number of developmental stages or tissue types in an organism. An
organism with few developmental stages or tissue types may have large
numbers of genes that influence non-developmental phenotypes, inflating
gene content relative to developmental gene families.
Neutral explanations for genome size suggest that when population
sizes are small, many mutations become nearly neutral. Hence, in small
populations repetitive content and other 'junk' DNA
can accumulate without placing the organism at a competitive
disadvantage. There is little evidence to suggest that genome size is
under strong widespread selection in multicellular eukaryotes. Genome
size, independent of gene content, correlates poorly with most
physiological traits and many eukaryotes, including mammals, harbor very
large amounts of repetitive DNA.
However, birds
likely have experienced strong selection for reduced genome size, in
response to changing energetic needs for flight. Birds, unlike humans,
produce nucleated red blood cells, and larger nuclei lead to lower
levels of oxygen transport. Bird metabolism is far higher than that of
mammals, due largely to flight, and oxygen needs are high. Hence, most
birds have small, compact genomes with few repetitive elements.
Indirect evidence suggests that non-avian theropod dinosaur ancestors of
modern birds also had reduced genome sizes, consistent with endothermy and high
energetic needs for running speed. Many bacteria have also experienced
selection for small genome size, as time of replication and energy
consumption are so tightly correlated with fitness.
Chromosome number and organization
The ant Myrmecia pilosula has only a single pair of chromosomes, whereas the adder's-tongue fern Ophioglossum reticulatum has up to 1260 chromosomes. The number of chromosomes in an organism's genome does not necessarily correlate with the amount of DNA in its genome. The genome-wide amount of recombination is directly controlled by the number of chromosomes, with one crossover per chromosome or per chromosome arm, depending on the species.
Changes in chromosome number can play a key role in speciation, as differing chromosome numbers can serve as a barrier to reproduction in hybrids. Human chromosome 2 was created by the fusion of two ancestral ape chromosomes (still separate in chimpanzees) and retains central telomere sequences as well as a vestigial second centromere. Polyploidy,
especially allopolyploidy, which occurs often in plants, can also
result in reproductive incompatibilities with parental species. Agrodiatus
blue butterflies have diverse chromosome numbers ranging from n=10 to
n=134 and additionally have one of the highest rates of speciation
identified to date.
Ciliate genomes house each gene in individual chromosomes.
In addition to the nuclear genome, endosymbiont organelles contain their own genetic material. Mitochondrial and chloroplast DNA varies across taxa, but membrane-bound proteins, especially electron transport chain constituents are most often encoded in the organelle. Chloroplasts and mitochondria are maternally inherited in most species, as the organelles must pass through the egg. In a rare departure, some species of mussels are known to inherit mitochondria from father to son.
Gene duplication initially leads to redundancy. However, duplicated gene sequences can mutate to develop new functions or specialize so that the new gene performs a subset of the original ancestral functions. Retrotransposition duplicates genes by copying mRNA to DNA and inserting it into the genome. Retrogenes generally insert into new genomic locations, lack introns, and sometimes develop new expression patterns and functions.
Chimeric genes
form when duplication, deletion, or incomplete retrotransposition
combines portions of two different coding sequences to produce a novel
gene sequence. Chimeras often cause regulatory changes and can shuffle
protein domains to produce novel adaptive functions.
De novo gene birth can give rise to protein-coding genes and non-coding genes from previously non-functional DNA. For instance, Levine and colleagues reported the origin of five new genes in the D. melanogaster genome. Similar de novo origin of genes has also been shown in other organisms such as yeast, rice and humans. De novo genes may evolve from spurious transcripts that are already expressed at low levels.
Constructive neutral evolution
(CNE) explains that complex systems can emerge and spread within a
population through neutral transitions with the principles of excess
capacity, presuppression, and ratcheting, and it has been applied in areas ranging from the origins of the spliceosome to the complex interdependence of microbial communities.
Journals and societies
The Society for Molecular Biology and Evolution publishes the
journals "Molecular Biology and Evolution" and "Genome Biology and
Evolution" and holds an annual international meeting. Other journals
dedicated to molecular evolution include Journal of Molecular Evolution and Molecular Phylogenetics and Evolution. Research in molecular evolution is also published in journals of genetics, molecular biology, genomics, systematics, and evolutionary biology.