Wednesday, December 6, 2023

Functionalism (philosophy of mind)

In the philosophy of mind, functionalism is the thesis that each and every mental state (for example, the state of having a belief, of having a desire, or of being in pain) is constituted solely by its functional role, which means its causal relation to other mental states, sensory inputs, and behavioral outputs. Functionalism developed largely as an alternative to the identity theory of mind and behaviorism.

Functionalism is a theoretical level between physical implementation and behavioral output. It therefore differs from its predecessors, Cartesian dualism (advocating independent mental and physical substances) and Skinnerian behaviorism and physicalism (admitting only physical substances), because it is concerned only with the effective functions of the brain, through its organization or its "software programs".

Since a mental state is identified by a functional role, it is said to be multiply realizable; in other words, it can be manifested in various systems, perhaps even computers, so long as the system performs the appropriate functions. While a computer's program performs the functions via computations on inputs to give outputs, implemented via its electronic substrate, a brain performs the functions via its biological operation and stimulus responses.

Multiple realizability

An important part of some arguments for functionalism is the idea of multiple realizability. According to standard functionalist theories, a mental state corresponds to a functional role. It is like a valve; a valve can be made of plastic or metal or other material, as long as it performs the proper function (controlling the flow of a liquid or gas). Similarly, functionalists argue, a mental state can be explained without considering the state of the underlying physical medium (such as the brain) that realizes it; one need only consider higher-level function or functions. Because a mental state is not limited to a particular medium, it can be realized in multiple ways, including, theoretically, with non-biological systems, such as computers. A silicon-based machine could have the same sort of mental life that a human being has, provided that its structure realized the proper functional roles.

However, there have been some functionalist theories that combine with the identity theory of mind, and which deny multiple realizability. Such Functional Specification Theories (FSTs) (Levin, § 3.4), as they are called, were most notably developed by David Lewis and David Malet Armstrong. According to FSTs, mental states are the particular "realizers" of the functional role, not the functional role itself. The mental state of belief, for example, just is whatever brain or neurological process realizes the appropriate belief function. Thus, unlike standard versions of functionalism (often called Functional State Identity Theories), FSTs do not allow for the multiple realizability of mental states, because the fact that mental states are realized by brain states is essential. What often drives this view is the belief that if we were to encounter an alien race with a cognitive system composed of significantly different material from humans' (e.g., silicon-based) but performing the same functions as human mental states (for example, tending to yell "Ouch!" when poked with sharp objects), we would say that their type of mental state might be similar to ours but is not the same. For some, this may be a disadvantage of FSTs. Indeed, one of Hilary Putnam's arguments for his version of functionalism relied on the intuition that such alien creatures would have the same mental states as humans do, and that the multiple realizability of standard functionalism makes it a better theory of mind.

Types

Machine-state functionalism

Artistic representation of a Turing machine

The broad position of "functionalism" can be articulated in many different varieties. The first formulation of a functionalist theory of mind was put forth by Hilary Putnam in the 1960s. This formulation, which is now called machine-state functionalism, or just machine functionalism, was inspired by the analogies which Putnam and others noted between the mind and the theoretical "machines" developed by Alan Turing (called Turing machines), computers capable of computing any given algorithm. Putnam himself, by the mid-1970s, had begun questioning this position. The beginnings of his opposition to machine-state functionalism can be seen in his Twin Earth thought experiment.

In non-technical terms, a Turing machine is not a physical object, but rather an abstract machine built upon a mathematical model. Typically, a Turing machine has a horizontal tape divided into rectangular cells arranged from left to right. The tape itself is infinite in length, and each cell may contain a symbol. The symbols used for any given "machine" can vary. The machine has a read-write head that scans cells and moves left and right. The action of the machine is determined by the symbol in the cell being scanned and a table of transition rules that serve as the machine's programming. Because of the infinite tape, a traditional Turing machine has an unlimited amount of time to compute any particular function or any number of functions. In the example below, each cell is either blank (B) or has a 1 written on it. These are the inputs to the machine. The possible outputs are:

  • Halt: Do nothing.
  • R: move one square to the right.
  • L: move one square to the left.
  • B: erase whatever is on the square.
  • 1: erase whatever is on the square and print a '1'.

An extremely simple example of a Turing machine is one which writes out the sequence '111' after scanning three blank squares and then stops, as specified by the following machine table:


Scanned symbol    State One                   State Two                   State Three
B                 write 1; stay in state 1    write 1; stay in state 2    write 1; stay in state 3
1                 go right; go to state 2     go right; go to state 3     [halt]

This table states that if the machine is in state one and scans a blank square (B), it will print a 1 and remain in state one. If it is in state one and reads a 1, it will move one square to the right and also go into state two. If it is in state two and reads a B, it will print a 1 and stay in state two. If it is in state two and reads a 1, it will move one square to the right and go into state three. If it is in state three and reads a B, it prints a 1 and remains in state three. Finally, if it is in state three and reads a 1, it will halt.
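
To make the table concrete, here is a minimal Python sketch (our illustration, not part of the original article) that simulates it. The encoding of states as integers and the sparse-dictionary tape are implementation choices of the sketch, not of the theory.

    # Machine table for the '111'-writing Turing machine described above.
    # Each entry maps (state, scanned symbol) to (symbol to write, head move,
    # next state); None means halt. 'B' is a blank cell.
    TABLE = {
        (1, 'B'): ('1', 0, 1),   # write 1; stay in state 1
        (1, '1'): ('1', +1, 2),  # go right; go to state 2
        (2, 'B'): ('1', 0, 2),   # write 1; stay in state 2
        (2, '1'): ('1', +1, 3),  # go right; go to state 3
        (3, 'B'): ('1', 0, 3),   # write 1; stay in state 3
        (3, '1'): None,          # [halt]
    }

    def run(n_cells=3, state=1, head=0, max_steps=100):
        tape = {}                              # sparse tape: cell index -> symbol
        for _ in range(max_steps):
            action = TABLE[(state, tape.get(head, 'B'))]
            if action is None:                 # halt
                break
            symbol, move, state = action
            tape[head] = symbol
            head += move
        return ''.join(tape.get(i, 'B') for i in range(n_cells))

    print(run())  # -> '111'

Run on an all-blank tape, the simulation writes '111' and halts, exactly as the table specifies; notice that nothing about the states' material realization figures anywhere in the table.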

The essential point to consider here is the nature of the states of the Turing machine. Each state can be defined exclusively in terms of its relations to the other states as well as inputs and outputs. State one, for example, is simply the state in which the machine, if it reads a B, writes a 1 and stays in that state, and in which, if it reads a 1, it moves one square to the right and goes into a different state. This is the functional definition of state one; it is its causal role in the overall system. The details of how it accomplishes what it accomplishes and of its material constitution are completely irrelevant.

The above point is critical to an understanding of machine-state functionalism. Since Turing machines are not required to be physical systems, "anything capable of going through a succession of states in time can be a Turing machine". Because biological organisms "go through a succession of states in time", any such organism could also be equivalent to a Turing machine.

According to machine-state functionalism, the nature of a mental state is just like the nature of the Turing machine states described above. If one can show the rational functioning and computing skills of these machines to be comparable to the rational functioning and computing skills of human beings, it follows that Turing machine behavior closely resembles that of human beings. Therefore, it is not a particular physical-chemical composition that is responsible for the particular machine or mental state; it is the programming rules, which produce the effects, that are responsible. To put it another way, any rational preference is due to the rules being followed, not to the specific material composition of the agent.

Psycho-functionalism

A second form of functionalism is based on the rejection of behaviorist theories in psychology and their replacement with empirical cognitive models of the mind. This view is most closely associated with Jerry Fodor and Zenon Pylyshyn and has been labeled psycho-functionalism.

The fundamental idea of psycho-functionalism is that psychology is an irreducibly complex science and that the terms that we use to describe the entities and properties of the mind in our best psychological theories cannot be redefined in terms of simple behavioral dispositions, and further, that such a redefinition would not be desirable or salient were it achievable. Psychofunctionalists view psychology as employing the same sorts of irreducibly teleological or purposive explanations as the biological sciences. Thus, for example, the function or role of the heart is to pump blood, that of the kidney is to filter it and to maintain certain chemical balances and so on—this is what accounts for the purposes of scientific explanation and taxonomy. There may be an infinite variety of physical realizations for all of the mechanisms, but what is important is only their role in the overall biological theory. In an analogous manner, the role of mental states, such as belief and desire, is determined by the functional or causal role that is designated for them within our best scientific psychological theory. If some mental state which is postulated by folk psychology (e.g. hysteria) is determined not to have any fundamental role in cognitive psychological explanation, then that particular state may be considered not to exist. On the other hand, if it turns out that there are states which theoretical cognitive psychology posits as necessary for explanation of human behavior but which are not foreseen by ordinary folk psychological language, then these entities or states exist.

Analytic functionalism

A third form of functionalism is concerned with the meanings of theoretical terms in general. This view is most closely associated with David Lewis and is often referred to as analytic functionalism or conceptual functionalism. The basic idea of analytic functionalism is that theoretical terms are implicitly defined by the theories in whose formulation they occur, not by the intrinsic properties of the phonemes that compose them. In the case of ordinary-language terms, such as "belief", "desire", or "hunger", the idea is that such terms get their meanings from our common-sense "folk psychological" theories about them, but that such conceptualizations are not sufficient to withstand the rigor imposed by materialistic theories of reality and causality. Such terms are subject to conceptual analyses which take something like the following form:

Mental state M is the state that is caused by P and causes Q.

For example, the state of pain is caused by sitting on a tack and causes loud cries, and higher-order mental states of anger and resentment directed at the careless person who left a tack lying around. These sorts of functional definitions in terms of causal roles are claimed to be analytic and a priori truths about the mental states and the (largely fictitious) propositional attitudes they describe. Hence, its proponents are known as analytic or conceptual functionalists. The essential difference between analytic functionalism and psychofunctionalism is that the latter emphasizes the importance of laboratory observation and experimentation in determining which mental-state terms and concepts are genuine and which functional identifications may be considered to be genuinely contingent and a posteriori identities. The former, on the other hand, claims that such identities are necessary and not subject to empirical scientific investigation.
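
Regimented in the Ramsey-Lewis style, the pain example might be written as follows (a schematic rendering of ours; the particular predicates are illustrative placeholders, not the article's):

    \textit{pain} \;=_{\mathrm{df}}\; \iota x\,\big[\,\mathrm{Caused}(\textit{tack-sitting},\, x)
        \;\wedge\; \mathrm{Causes}(x,\, \textit{loud cries})
        \;\wedge\; \mathrm{Causes}(x,\, \textit{anger at the careless})\,\big]

where \iota x reads "the unique state x such that". The analytic functionalist's distinctive claim is that such a definition is knowable a priori, by conceptual analysis alone.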

Homuncular functionalism

Homuncular functionalism was developed largely by Daniel Dennett and has been advocated by William Lycan. It arose in response to the challenges that Ned Block's China Brain (a.k.a. Chinese nation) and John Searle's Chinese room thought experiments presented for the more traditional forms of functionalism (see below under "Criticism"). In attempting to overcome the conceptual difficulties that arose from the idea of a nation full of Chinese people wired together, each person working as a single neuron to produce in the wired-together whole the functional mental states of an individual mind, many functionalists simply bit the bullet, so to speak, and argued that such a Chinese nation would indeed possess all of the qualitative and intentional properties of a mind; i.e. it would become a sort of systemic or collective mind with propositional attitudes and other mental characteristics. Whatever the worth of this latter hypothesis, it was immediately objected that it entailed an unacceptable sort of mind-mind supervenience: the systemic mind which somehow emerged at the higher-level must necessarily supervene on the individual minds of each individual member of the Chinese nation, to stick to Block's formulation. But this would seem to put into serious doubt, if not directly contradict, the fundamental idea of the supervenience thesis: there can be no change in the mental realm without some change in the underlying physical substratum. This can be easily seen if we label the set of mental facts that occur at the higher-level M1 and the set of mental facts that occur at the lower-level M2. Then M1 and M2 both supervene on the physical facts, but a change of M1 to M2 (say) could occur without any change in these facts.

Since mind-mind supervenience had come to seem acceptable in functionalist circles, it seemed to some that the only way to resolve the puzzle was to postulate the existence of an entire hierarchical series of mind levels (analogous to homunculi) which become less and less sophisticated in terms of functional organization and physical composition, all the way down to the level of the physico-mechanical neuron or group of neurons. The homunculi at each level, on this view, have authentic mental properties but become simpler and less intelligent as one works one's way down the hierarchy.

Mechanistic functionalism

Mechanistic functionalism, originally formulated and defended by Gualtiero Piccinini and Carl Gillett independently, augments previous functionalist accounts of mental states by maintaining that any psychological explanation must be rendered in mechanistic terms. That is, instead of mental states receiving a purely functional explanation in terms of their relations to other mental states, like those listed above, functions are seen as playing only one part of the explanation of a given mental state, with the other part played by structures.

A mechanistic explanation involves decomposing a given system, in this case a mental system, into its component physical parts, their activities or functions, and their combined organizational relations. On this account the mind remains a functional system, but one that is understood in mechanistic terms. This account remains a sort of functionalism because functional relations are still essential to mental states, but it is mechanistic because the functional relations are always manifestations of concrete structures—albeit structures understood at a certain level of abstraction. Functions are individuated and explained either in terms of the contributions they make to the given system or in teleological terms. If the functions are understood in teleological terms, then they may be characterized either etiologically or non-etiologically.

Mechanistic functionalism leads functionalism away from the traditional functionalist autonomy of psychology from neuroscience and towards integrating psychology and neuroscience. By providing an applicable framework for merging traditional psychological models with neurological data, mechanistic functionalism may be understood as reconciling the functionalist theory of mind with neurological accounts of how the brain actually works. This is due to the fact that mechanistic explanations of function attempt to provide an account of how functional states (mental states) are physically realized through neurological mechanisms.

Physicalism

There is much confusion about the sort of relationship that is claimed to exist (or not exist) between the general thesis of functionalism and physicalism. It has often been claimed that functionalism somehow "disproves" or falsifies physicalism tout court (i.e. without further explanation or description). On the other hand, most philosophers of mind who are functionalists claim to be physicalists—indeed, some of them, such as David Lewis, have claimed to be strict reductionist-type physicalists.

Functionalism is fundamentally what Ned Block has called a broadly metaphysical thesis as opposed to a narrowly ontological one. That is, functionalism is not so much concerned with what there is as with what it is that characterizes a certain type of mental state, e.g. pain, as the type of state that it is. Previous attempts to answer the mind-body problem have all tried to resolve it by answering both questions: dualism says there are two substances and that mental states are characterized by their immateriality; behaviorism claimed that there was one substance and that mental states were behavioral dispositions; physicalism asserted the existence of just one substance and characterized the mental states as physical states (as in "pain = C-fiber firings").

On this understanding, type physicalism can be seen as incompatible with functionalism, since it claims that what characterizes mental states (e.g. pain) is that they are physical in nature, while functionalism says that what characterizes pain is its functional/causal role and its relationship with yelling "ouch", etc. However, any weaker sort of physicalism which makes the simple ontological claim that everything that exists is made up of physical matter is perfectly compatible with functionalism. Moreover, most functionalists who are physicalists require that the properties that are quantified over in functional definitions be physical properties. Hence, they are physicalists, even though the general thesis of functionalism itself does not commit them to being so.

In the case of David Lewis, there is a distinction in the concepts of "having pain" (a rigid designator true of the same things in all possible worlds) and just "pain" (a non-rigid designator). Pain, for Lewis, stands for something like the definite description "the state with the causal role x". The referent of the description in humans is a type of brain state to be determined by science. The referent among silicon-based life forms is something else. The referent of the description among angels is some immaterial, non-physical state. For Lewis, therefore, local type-physical reductions are possible and compatible with conceptual functionalism. (See also Lewis's mad pain and Martian pain.) There seems to be some confusion between types and tokens that needs to be cleared up in the functionalist analysis.

Criticism

China brain

Ned Block argues against the functionalist proposal of multiple realizability, where hardware implementation is irrelevant because only the functional level is important. The "China brain" or "Chinese nation" thought experiment involves supposing that the entire nation of China systematically organizes itself to operate just like a brain, with each individual acting as a neuron. (The tremendous difference in the speed of operation of each unit is not addressed.) According to functionalism, so long as the people are performing the proper functional roles, with the proper causal relations between inputs and outputs, the system will be a real mind, with mental states, consciousness, and so on. However, Block argues, this is patently absurd, so there must be something wrong with the thesis of functionalism since it would allow this to be a legitimate description of a mind.

Some functionalists believe that the Chinese nation would have qualia, but that due to its size it is impossible to imagine China being conscious. Indeed, it may be the case that we are constrained by our theory of mind and will never be able to understand what Chinese-nation consciousness is like. Therefore, if functionalism is true, either qualia will exist across all kinds of hardware, or they will not exist at all and are illusory.

The Chinese room

The Chinese room argument by John Searle is a direct attack on the claim that thought can be represented as a set of functions. The thought experiment asserts that it is possible to mimic intelligent action without any interpretation or understanding through the use of a purely functional system. In short, Searle describes a person who only speaks English who is in a room with only Chinese symbols in baskets and a rule book in English for moving the symbols around. The person is then ordered by people outside of the room to follow the rule book for sending certain symbols out of the room when given certain symbols. Further suppose that the people outside of the room are Chinese speakers and are communicating with the person inside via the Chinese symbols. According to Searle, it would be absurd to claim that the English speaker inside knows Chinese simply based on these syntactic processes. This thought experiment attempts to show that systems which operate merely on syntactic processes (inputs and outputs, based on algorithms) cannot realize any semantics (meaning) or intentionality (aboutness). Thus, Searle attacks the idea that thought can be equated with following a set of syntactic rules; that is, functionalism is an insufficient theory of the mind.

In connection with Block's Chinese nation, many functionalists responded to Searle's thought experiment by suggesting that there was a form of mental activity going on at a higher level than the man in the Chinese room could comprehend (the so-called "system reply"); that is, the system does know Chinese. In response, Searle suggested the man in the room could simply memorize the rules and symbol relations. Again, though he would convincingly mimic communication, he would be aware only of the symbols and rules, not of the meaning behind them.

Inverted spectrum

Another main criticism of functionalism is the inverted spectrum or inverted qualia scenario, most specifically proposed as an objection to functionalism by Ned Block. This thought experiment involves supposing that there is a person, call her Jane, who was born with a condition which makes her see the opposite spectrum of light to that which is normally perceived. Unlike normal people, Jane sees the color violet as yellow, orange as blue, and so forth. So, suppose, for example, that you and Jane are looking at the same orange. While you perceive the fruit as colored orange, Jane sees it as colored blue. However, when asked what color the piece of fruit is, both you and Jane will report "orange". In fact, one can see that all of your behavioral as well as functional relations to colors will be the same. Jane will, for example, properly obey traffic signs just as any other person would, even though this involves color perception. Therefore, the argument goes, since there can be two people who are functionally identical, yet have different mental states (differing in their qualitative or phenomenological aspects), functionalism is not robust enough to explain individual differences in qualia.

David Chalmers tries to show that even though mental content cannot be fully accounted for in functional terms, there is nevertheless a nomological correlation between mental states and functional states in this world. A silicon-based robot, for example, whose functional profile matched our own, would have to be fully conscious. His argument for this claim takes the form of a reductio ad absurdum. He considers gradually replacing a human brain by functionally equivalent circuitry; the general idea is that since it would be very unlikely for a conscious human being to experience a change in its qualia which it utterly fails to notice, mental content and functional profile appear to be inextricably bound together, at least for entities that behave like humans. If the subject's qualia were to change, we would expect the subject to notice, and therefore his functional profile to follow suit. A similar argument is applied to the notion of absent qualia. In this case, Chalmers argues that it would be very unlikely for a subject to experience a fading of his qualia which he fails to notice and respond to. This, coupled with the independent assertion that a conscious being's functional profile just could be maintained, irrespective of its experiential state, leads to the conclusion that the subject of these experiments would remain fully conscious. The problem with this argument, however, as Brian G. Crabb (2005) has observed, is that, while changing or fading qualia in a conscious subject might force changes in its functional profile, this tells us nothing about the case of a permanently inverted or unconscious robot. A subject with inverted qualia from birth would have nothing to notice or adjust to. Similarly, an unconscious functional simulacrum of ourselves (a zombie) would have no experiential changes to notice or adjust to. Consequently, Crabb argues, Chalmers' "fading qualia" and "dancing qualia" arguments fail to establish that cases of permanently inverted or absent qualia are nomologically impossible.

A related critique of the inverted spectrum argument is that it assumes that mental states (differing in their qualitative or phenomenological aspects) can be independent of the functional relations in the brain. Thus, it begs the question of functional mental states: its assumption denies the possibility of functionalism itself, without offering any independent justification for doing so. (Functionalism says that mental states are produced by the functional relations in the brain.) This same type of problem—that there is no argument, just an antithetical assumption at their base—can also be said of both the Chinese room and the Chinese nation arguments. Notice, however, that Crabb's response to Chalmers does not commit this fallacy: His point is the more restricted observation that even if inverted or absent qualia turn out to be nomologically impossible, and it is perfectly possible that we might subsequently discover this fact by other means, Chalmers' argument fails to demonstrate that they are impossible.

Twin Earth

The Twin Earth thought experiment, introduced by Hilary Putnam, is responsible for one of the main arguments used against functionalism, although it was originally intended as an argument against semantic internalism. The thought experiment is simple and runs as follows. Imagine a Twin Earth which is identical to Earth in every way but one: water does not have the chemical structure H2O, but rather some other structure, say XYZ. It is critical, however, to note that XYZ on Twin Earth is still called "water" and exhibits all the same macro-level properties that H2O exhibits on Earth (i.e., XYZ is also a clear drinkable liquid that is in lakes, rivers, and so on). Since these worlds are identical in every way except in the underlying chemical structure of water, you and your Twin Earth doppelgänger see exactly the same things, meet exactly the same people, have exactly the same jobs, behave exactly the same way, and so on. In other words, since you share the same inputs, outputs, and relations between other mental states, you are functional duplicates. So, for example, you both believe that water is wet. However, the content of your mental state of believing that water is wet differs from your duplicate's because your belief is of H2O, while your duplicate's is of XYZ. Therefore, so the argument goes, since two people can be functionally identical, yet have different mental states, functionalism cannot sufficiently account for all mental states.

Most defenders of functionalism initially responded to this argument by attempting to maintain a sharp distinction between internal and external content. The internal contents of propositional attitudes, for example, would consist exclusively in those aspects of them which have no relation with the external world and which bear the necessary functional/causal properties that allow for relations with other internal mental states. Since no one has yet been able to formulate a clear basis or justification for the existence of such a distinction in mental contents, however, this idea has generally been abandoned in favor of externalist causal theories of mental contents (also known as informational semantics). Such a position is represented, for example, by Jerry Fodor's account of an "asymmetric causal theory" of mental content. This view simply entails the modification of functionalism to include within its scope a very broad interpretation of inputs and outputs to include the objects that are the causes of mental representations in the external world.

The Twin Earth argument hinges on the assumption that experience with an imitation water would cause a different mental state than experience with natural water. However, since no one would notice the difference between the two waters, this assumption is likely false. Further, this basic assumption is directly antithetical to functionalism, so the Twin Earth thought experiment does not constitute a genuine argument: the assumption amounts to a flat denial of functionalism itself (which would say that the two waters would not produce different mental states, because the functional relationships would remain unchanged).

Meaning holism

Another common criticism of functionalism is that it implies a radical form of semantic holism. Block and Fodor referred to this as the damn/darn problem. The difference between saying "damn" or "darn" when one smashes one's finger with a hammer can be mentally significant. But since these outputs are, according to functionalism, related to many (if not all) internal mental states, two people who experience the same pain and react with different outputs must share little (perhaps nothing) in common in any of their mental states. But this is counterintuitive; it seems clear that two people share something significant in their mental states of being in pain if they both smash their finger with a hammer, whether or not they utter the same word when they cry out in pain.

Another possible solution to this problem is to adopt a moderate (or molecularist) form of holism. But even if this succeeds in the case of pain, in the case of beliefs and meaning, it faces the difficulty of formulating a distinction between relevant and non-relevant contents (which can be difficult to do without invoking an analytic–synthetic distinction, as many seek to avoid).

Triviality arguments

According to Ned Block, if functionalism is to avoid the chauvinism of type-physicalism, it becomes overly liberal in "ascribing mental properties to things that do not in fact have them". As an example, he proposes that the economy of Bolivia might be organized such that the economic states, inputs, and outputs would be isomorphic to a person under some bizarre mapping from mental to economic variables.

Hilary Putnam, John Searle, and others have offered further arguments that functionalism is trivial, i.e. that the internal structures functionalism tries to discuss turn out to be present everywhere, so that either functionalism reduces to behaviorism, or to complete triviality and therefore a form of panpsychism. These arguments typically use the assumption that physics leads to a progression of unique states, and that functionalist realization is present whenever there is a mapping from the proposed set of mental states to physical states of the system. Given that the states of a physical system are always at least slightly distinct from one another, such a mapping will always exist, so any system is a mind. Formulations of functionalism which stipulate absolute requirements on interaction with external objects (external to the functional account, meaning not defined functionally) are reduced to behaviorism instead of absolute triviality, because the input-output behavior is still required.
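
The stipulative move these arguments exploit can be made explicit in a few lines (our construction, not drawn from Putnam or Searle): given any sequence of pairwise-distinct physical states, a "realization" map onto any desired run of mental states always exists.

    # Any sequence of pairwise-distinct "physical states" can be mapped onto
    # any desired sequence of "mental states" by brute stipulation.
    physical_trajectory = [('microstate', t) for t in range(6)]   # all distinct
    desired_run = ['M1', 'M2', 'M1', 'M3', 'M2', 'M3']            # arbitrary "mind"

    # The map is well-defined precisely because the physical states never repeat.
    realization = dict(zip(physical_trajectory, desired_run))

    mapped_run = [realization[p] for p in physical_trajectory]
    assert mapped_run == desired_run   # the system "implements" the run trivially

The map does no causal or counterfactual work; constraining such mappings is precisely what the responses discussed below attempt.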

Peter Godfrey-Smith has argued further that such formulations can still be reduced to triviality if they accept a somewhat innocent-seeming additional assumption. The assumption is that adding a transducer layer, that is, an input-output system, to an object should not change whether that object has mental states. The transducer layer is restricted to producing behavior according to a simple mapping, such as a lookup table, from inputs to actions on the system, and from the state of the system to outputs. However, since the system will be in unique states at each moment and at each possible input, such a mapping will always exist so there will be a transducer layer which will produce whatever physical behavior is desired.

Godfrey-Smith believes that these problems can be addressed using causality, but that it may be necessary to posit a continuum between objects being minds and not being minds rather than an absolute distinction. Furthermore, constraining the mappings seems to require either consideration of the external behavior as in behaviorism, or discussion of the internal structure of the realization as in identity theory; and though multiple realizability does not seem to be lost, the functionalist claim of the autonomy of high-level functional description becomes questionable.

Evolvability

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Evolvability

Evolvability is the capacity of a system for adaptive evolution: the ability of a population of organisms not merely to generate genetic diversity, but to generate adaptive genetic diversity, and thereby to evolve through natural selection.

In order for a biological organism to evolve by natural selection, there must be a certain minimum probability that new, heritable variants are beneficial. Random mutations, unless they occur in DNA sequences with no function, are expected to be mostly detrimental. Beneficial mutations are always rare, but if they are too rare, then adaptation cannot occur. Early failed efforts to evolve computer programs by random mutation and selection showed that evolvability is not a given, but depends on the representation of the program as a data structure, because this determines how changes in the program map to changes in its behavior. Analogously, the evolvability of organisms depends on their genotype–phenotype map. This means that genomes are structured in ways that make beneficial changes more likely. This has been taken as evidence that evolution has created fitter populations of organisms that are better able to evolve.
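
As a toy illustration of how representation shapes evolvability (our sketch, a textbook genetic-algorithms example rather than anything from the article), compare two encodings of the same integer phenotype. In plain binary, the phenotype 127 is separated from the optimum 128 by a "Hamming cliff" (all eight bits differ), so no single mutation is beneficial; under a Gray-code representation, a one-bit mutation suffices.

    # Fitness: closeness of an 8-bit phenotype to an optimum of 128.
    N_BITS = 8
    OPTIMUM = 128

    def to_gray(n):
        return n ^ (n >> 1)

    def from_gray(g):
        mask = g >> 1
        while mask:
            g ^= mask
            mask >>= 1
        return g

    def fitness(phenotype):
        return -abs(phenotype - OPTIMUM)

    def beneficial_fraction(genotype, decode):
        # Fraction of single-bit mutations that improve fitness.
        base = fitness(decode(genotype))
        flips = (genotype ^ (1 << i) for i in range(N_BITS))
        return sum(fitness(decode(m)) > base for m in flips) / N_BITS

    # Phenotype 127 sits one unit from the optimum.
    print(beneficial_fraction(127, lambda g: g))           # 0.0   binary: no way up
    print(beneficial_fraction(to_gray(127), from_gray))    # 0.125 Gray: one way up

The genotype-phenotype map, not the mutation process itself, determines how often mutations are beneficial here, which is the sense in which representation governs evolvability.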

Alternative definitions

Andreas Wagner describes two definitions of evolvability. According to the first definition, a biological system is evolvable:

  • if its properties show heritable genetic variation, and
  • if natural selection can thus change these properties.

According to the second definition, a biological system is evolvable:

  • if it can acquire novel functions through genetic change, functions that help the organism survive and reproduce.

For example, consider an enzyme with multiple alleles in the population. Each allele catalyzes the same reaction, but with a different level of activity. However, even after millions of years of evolution, exploring many sequences with similar function, there may be no mutation that gives this enzyme the ability to catalyze a different reaction. Thus, although the enzyme's activity is evolvable in the first sense, that does not mean that the enzyme's function is evolvable in the second sense. However, every system evolvable in the second sense must also be evolvable in the first.

Pigliucci recognizes three classes of definition, depending on timescale. The first corresponds to Wagner's first, and represents the very short timescales that are described by quantitative genetics. He divides Wagner's second definition into two categories, one representing the intermediate timescales that can be studied using population genetics, and one representing exceedingly rare long-term innovations of form.

Pigliucci's second definition of evolvability includes Altenberg's quantitative concept of evolvability, which is not a single number but the entire upper tail of the fitness distribution of the offspring produced by the population. This quantity was considered a "local" property of the instantaneous state of a population; its integration over the population's evolutionary trajectory, and over many possible populations, would be necessary to give a more global measure of evolvability.

Generating more variation

More heritable phenotypic variation means more evolvability. While mutation is the ultimate source of heritable variation, its permutations and combinations also make a big difference. Sexual reproduction generates more variation (and thereby evolvability) relative to asexual reproduction (see evolution of sexual reproduction). Evolvability is further increased by generating more variation when an organism is stressed, and thus likely to be less well adapted, but less variation when an organism is doing well. The amount of variation generated can be adjusted in many different ways, for example via the mutation rate, via the probability of sexual vs. asexual reproduction, via the probability of outcrossing vs. inbreeding, via dispersal, and via access to previously cryptic variants through the switching of an evolutionary capacitor. A large population size increases the influx of novel mutations in each generation.

Enhancement of selection

Rather than creating more phenotypic variation, some mechanisms increase the intensity and effectiveness with which selection acts on existing phenotypic variation. For example:

  • Mating rituals that allow sexual selection on "good genes", and so intensify natural selection.
  • Large effective population size decreasing the threshold value of the selection coefficient above which selection becomes an important player (see the sketch after this list). This could happen through an increase in the census population size, decreasing genetic drift, through an increase in the recombination rate, decreasing genetic draft, or through changes in the probability distribution of the numbers of offspring.
  • Recombination decreasing the importance of the Hill-Robertson effect, where different genotypes contain different adaptive mutations. Recombination brings the two alleles together, creating a super-genotype in place of two competing lineages.
  • Shorter generation time.
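
To make the population-size point concrete, here is a small sketch (ours, using Kimura's standard diffusion approximation, which the article does not itself invoke) of how the effective population size N sets the threshold at which selection outweighs drift for a new mutant with selection coefficient s.

    import math

    def p_fix(s, N):
        # Fixation probability of a new mutant allele in a diploid population
        # of effective size N (initial frequency 1/(2N)), per Kimura.
        if s == 0:
            return 1 / (2 * N)                  # neutral case: drift alone
        return (1 - math.exp(-2 * s)) / (1 - math.exp(-4 * N * s))

    s = 0.001                                   # a mildly beneficial mutation
    for N in (100, 1_000_000):
        print(f"N={N:>9}: p_fix={p_fix(s, N):.6f}  neutral={1 / (2 * N):.6f}")
    # Small N: p_fix barely exceeds the neutral value (drift dominates).
    # Large N: p_fix approaches 2s, so even small advantages are "seen" by selection.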

Robustness and evolvability

The relationship between robustness and evolvability depends on whether recombination can be ignored. Recombination can generally be ignored in asexual populations and for traits affected by single genes.

Without recombination

Robustness in the face of mutation does not increase evolvability in the first sense. In organisms with a high level of robustness, mutations have smaller phenotypic effects than in organisms with a low level of robustness. Thus, robustness reduces the amount of heritable genetic variation on which selection can act. However, robustness may allow exploration of large regions of genotype space, increasing evolvability according to the second sense. Even without genetic diversity, some genotypes have higher evolvability than others, and selection for robustness can increase the "neighborhood richness" of phenotypes that can be accessed from the same starting genotype by mutation. For example, one reason many proteins are less robust to mutation is that they have marginal thermodynamic stability, and most mutations reduce this stability further. Proteins that are more thermostable can tolerate a wider range of mutations and are more evolvable. For polygenic traits, neighborhood richness contributes more to evolvability than does genetic diversity or "spread" across genotype space.

With recombination

Temporary robustness, or canalisation, may lead to the accumulation of significant quantities of cryptic genetic variation. In a new environment or genetic background, this variation may be revealed and sometimes be adaptive.

Factors affecting evolvability via robustness

Different genetic codes have the potential to change robustness and evolvability by changing the effect of single-base mutational changes.

Exploration ahead of time

When mutational robustness exists, many mutants will persist in a cryptic state. Mutations tend to fall into two categories, having either a very bad effect or very little effect: few mutations fall somewhere in between. Sometimes, these mutations will not be completely invisible, but still have rare effects, with very low penetrance. When this happens, natural selection weeds out the very bad mutations, while leaving the others relatively unaffected. While evolution has no "foresight" to know which environment will be encountered in the future, some mutations cause major disruption to a basic biological process, and will never be adaptive in any environment. Screening these out in advance leads to preadapted stocks of cryptic genetic variation.

Another way that phenotypes can be explored, prior to strong genetic commitment, is through learning. An organism that learns gets to "sample" several different phenotypes during its early development, and later sticks to whatever worked best. Later in evolution, the optimal phenotype can be genetically assimilated so it becomes the default behavior rather than a rare behavior. This is known as the Baldwin effect, and it can increase evolvability.

Learning biases phenotypes in a beneficial direction. But an exploratory flattening of the fitness landscape can also increase evolvability even when it has no direction, for example when the flattening is a result of random errors in molecular and/or developmental processes. This increase in evolvability can happen when evolution is faced with crossing a "valley" in an adaptive landscape. This means that two mutations exist that are deleterious by themselves, but beneficial in combination. These combinations can evolve more easily when the landscape is first flattened, and the discovered phenotype is then fixed by genetic assimilation.
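
A toy fitness table (our illustration) makes the valley explicit: both single mutants are deleterious, so every one-step path from the current genotype goes downhill even though the double mutant is the global optimum.

    # Two-locus fitness landscape with a valley between 'ab' and 'AB'.
    fitness = {'ab': 1.00, 'Ab': 0.90, 'aB': 0.90, 'AB': 1.20}

    def single_mutants(genotype):
        # Genotypes one mutation away: flip the case of one locus.
        return [genotype[:i] + genotype[i].swapcase() + genotype[i + 1:]
                for i in range(len(genotype))]

    # From 'ab', every one-step move loses fitness, even though 'AB' is best.
    print([(g, fitness[g]) for g in single_mutants('ab')])    # [('Ab', 0.9), ('aB', 0.9)]
    print(all(fitness[g] < fitness['ab'] for g in single_mutants('ab')))  # True

Flattening the landscape (raising the single mutants toward 1.0) removes the downhill steps, which is why exploratory noise followed by genetic assimilation can ease valley crossing.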

Modularity

If every mutation affected every trait, then a mutation that was an improvement for one trait would be a disadvantage for other traits. This means that almost no mutations would be beneficial overall. But if pleiotropy is restricted to within functional modules, then mutations affect only one trait at a time, and adaptation is much less constrained. In a modular gene network, for example, a gene that induces a limited set of other genes that control a specific trait under selection may evolve more readily than one that also induces other gene pathways controlling traits not under selection. Individual genes also exhibit modularity. A mutation in one cis-regulatory element of a gene's promoter region may allow the expression of the gene to be altered only in specific tissues, developmental stages, or environmental conditions rather than changing gene activity in the entire organism simultaneously.
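
A small simulation (our construction, in the spirit of Fisher's geometric model; not taken from the article) shows why restricting pleiotropy helps: a mutation that perturbs every trait at once is much less likely to be a net improvement than one confined to a single module.

    import random

    def beneficial_rate(n_traits=10, modular=True, trials=50_000, step=0.3):
        # Fraction of random mutations that reduce total distance from the optimum.
        random.seed(0)
        wins = 0
        for _ in range(trials):
            traits = [0.5] * n_traits               # each trait slightly off optimum 0
            before = sum(t * t for t in traits)
            if modular:                              # mutation hits one module only
                i = random.randrange(n_traits)
                traits[i] += random.gauss(0, step)
            else:                                    # full pleiotropy: hits every trait
                traits = [t + random.gauss(0, step) for t in traits]
            wins += sum(t * t for t in traits) < before
        return wins / trials

    print('modular:    ', beneficial_rate(modular=True))    # ~0.5 of mutations help
    print('pleiotropic:', beneficial_rate(modular=False))   # far fewer help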

Evolution of evolvability

While variation yielding high evolvability could be useful in the long term, in the short term most of that variation is likely to be a disadvantage. For example, naively it would seem that increasing the mutation rate via a mutator allele would increase evolvability. But as an extreme example, if the mutation rate is too high then all individuals will be dead or at least carry a heavy mutation load. Short-term selection for low variation is usually thought to be more powerful than long-term selection for evolvability, making it difficult for natural selection to cause the evolution of evolvability. Other forces of selection also affect the generation of variation; for example, mutation and recombination may in part be byproducts of mechanisms to cope with DNA damage.

When recombination is low, mutator alleles may still sometimes hitchhike on the success of adaptive mutations that they cause. In this case, selection can take place at the level of the lineage. This may explain why mutators are often seen during experimental evolution of microbes. Mutator alleles can also evolve more easily when they only increase mutation rates in nearby DNA sequences, not across the whole genome: this is known as a contingency locus.

The evolution of evolvability is less controversial if it occurs via the evolution of sexual reproduction, or via the tendency of variation-generating mechanisms to become more active when an organism is stressed. The yeast prion [PSI+] may also be an example of the evolution of evolvability through evolutionary capacitance. An evolutionary capacitor is a switch that turns genetic variation on and off. This is very much like bet-hedging against the risk that a future environment will be similar to or different from the present one. Theoretical models also predict the evolution of evolvability via modularity. When the costs of evolvability are sufficiently short-lived, more evolvable lineages may be the most successful in the long term. However, the hypothesis that evolvability is an adaptation is often rejected in favor of alternative hypotheses, e.g. minimization of costs.

Applications

Evolvability phenomena have practical applications. For protein engineering we wish to increase evolvability, and in medicine and agriculture we wish to decrease it. Protein evolvability is defined as the ability of the protein to acquire sequence diversity and conformational flexibility which can enable it to evolve toward a new function.

In protein engineering, both rational design and directed evolution approaches aim to create changes rapidly through mutations with large effects. Such mutations, however, commonly destroy enzyme function or at least reduce tolerance to further mutations. Identifying evolvable proteins and manipulating their evolvability is becoming increasingly necessary in order to achieve ever larger functional modifications of enzymes. Proteins are also often studied as part of the basic science of evolvability, because their biophysical properties and chemical functions can be easily changed by a few mutations. More evolvable proteins can tolerate a broader range of amino acid changes, allowing them to evolve toward new functions. The study of evolvability has fundamental importance for understanding the very long term evolution of protein superfamilies.

Many human diseases are capable of evolution. Viruses, bacteria, fungi and cancers evolve to be resistant to host immune defences, as well as to pharmaceutical drugs. These same problems occur in agriculture with pesticide and herbicide resistance. It is possible that we are facing the end of the effective life of most available antibiotics. Predicting the evolution and evolvability of our pathogens, and devising strategies to slow or circumvent the development of resistance, demands deeper knowledge of the complex forces driving evolution at the molecular level.

A better understanding of evolvability is proposed to be part of an Extended Evolutionary Synthesis.

Fusion gene

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Fusion_gene

A fusion gene is a hybrid gene formed from two previously independent genes. It can occur as a result of translocation, interstitial deletion, or chromosomal inversion. Fusion genes have been found to be prevalent in all main types of human neoplasia. The identification of these fusion genes plays a prominent role in diagnosis, since they serve as diagnostic and prognostic markers.

A schematic showing the ways a fusion gene can occur at a chromosomal level.

History

The first fusion gene was described in cancer cells in the early 1980s. The finding was based on the discovery in 1960 by Peter Nowell and David Hungerford in Philadelphia of a small abnormal marker chromosome in patients with chronic myeloid leukemia—the first consistent chromosome abnormality detected in a human malignancy, later designated the Philadelphia chromosome. In 1973, Janet Rowley in Chicago showed that the Philadelphia chromosome had originated through a translocation between chromosomes 9 and 22, and not through a simple deletion of chromosome 22 as was previously thought. Several investigators in the early 1980s showed that the Philadelphia chromosome translocation led to the formation of a new BCR::ABL1 fusion gene, composed of the 3' part of the ABL1 gene in the breakpoint on chromosome 9 and the 5' part of a gene called BCR in the breakpoint in chromosome 22. In 1985 it was clearly established that the fusion gene on chromosome 22 produced an abnormal chimeric BCR::ABL1 protein with the capacity to induce chronic myeloid leukemia.

Oncogenes

It has been known for over 30 years that such gene fusions play an important role in tumorigenesis. Fusion genes can contribute to tumor formation because they can produce much more active abnormal protein than non-fusion genes. Often, fusion genes are oncogenes that cause cancer; these include BCR-ABL, TEL-AML1 (ALL with t(12;21)), AML1-ETO (M2 AML with t(8;21)), and TMPRSS2-ERG with an interstitial deletion on chromosome 21, often occurring in prostate cancer. In the case of TMPRSS2-ERG, the fusion product contributes to prostate cancer by disrupting androgen receptor (AR) signaling and inhibiting AR expression via the oncogenic ETS transcription factor. Most fusion genes are found in hematological cancers, sarcomas, and prostate cancer. BCAM-AKT2 is a fusion gene that is specific and unique to high-grade serous ovarian cancer.

Oncogenic fusion genes may lead to a gene product with a new or different function from the two fusion partners. Alternatively, a proto-oncogene is fused to a strong promoter, and thereby the oncogenic function is set to function by an upregulation caused by the strong promoter of the upstream fusion partner. The latter is common in lymphomas, where oncogenes are juxtaposed to the promoters of the immunoglobulin genes. Oncogenic fusion transcripts may also be caused by trans-splicing or read-through events.

Since chromosomal translocations play such a significant role in neoplasia, a specialized database of chromosomal aberrations and gene fusions in cancer has been created. This database is called Mitelman Database of Chromosome Aberrations and Gene Fusions in Cancer.

Diagnostics

Presence of certain chromosomal aberrations and their resulting fusion genes is commonly used within cancer diagnostics in order to set a precise diagnosis. Chromosome banding analysis, fluorescence in situ hybridization (FISH), and reverse transcription polymerase chain reaction (RT-PCR) are common methods employed at diagnostic laboratories. These methods all have their distinct shortcomings due to the very complex nature of cancer genomes. Recent developments such as high-throughput sequencing and custom DNA microarrays bear promise of introduction of more efficient methods.

Evolution

Gene fusion plays a key role in the evolution of gene architecture. We can observe its effect when gene fusion occurs in coding sequences. Duplication, sequence divergence, and recombination are the major contributors at work in gene evolution; these events can produce new genes from already existing parts. When gene fusion happens in a non-coding region, it can lead to misregulation of the expression of a gene now under the control of the cis-regulatory sequence of another gene. When it happens in coding sequences, gene fusion causes the assembly of a new gene, allowing the appearance of new functions by adding peptide modules into a multi-domain protein. Detection methods that inventory gene fusion events on a large biological scale can provide insights into the multi-modular architecture of proteins.

Purine biosynthesis

The purines adenine and guanine are two of the four information encoding bases of the universal genetic code. Biosynthesis of these purines occurs by similar, but not identical, pathways in different species of the three domains of life, the Archaea, Bacteria and Eukaryotes. A major distinctive feature of the purine biosynthetic pathways in Bacteria is the prevalence of gene fusions where two or more purine biosynthetic enzymes are encoded by a single gene. Such gene fusions are almost exclusively between genes that encode enzymes that perform sequential steps in the biosynthetic pathway. Eukaryotic species generally exhibit the most common gene fusions seen in the Bacteria, but in addition have new fusions that potentially increase metabolic flux.

Detection

In recent years, next-generation sequencing technology has made it possible to screen known and novel gene fusion events on a genome-wide scale. However, the precondition for large-scale detection is paired-end sequencing of the cell's transcriptome. Work on fusion gene detection is directed mainly at data analysis and visualization. Some researchers have developed a tool called Transcriptome Viewer (TViewer) to directly visualize detected gene fusions on the transcript level.
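
As a toy sketch of the underlying idea (ours; real callers and TViewer are far more involved), a fusion detector over paired-end RNA-seq alignments can flag read pairs whose two mates map to different genes and keep only recurrently supported gene pairs. The read data here are invented for illustration.

    from collections import Counter

    # Hypothetical aligned read pairs: (read id, gene of mate 1, gene of mate 2).
    pairs = [
        ('r1', 'BCR', 'ABL1'), ('r2', 'BCR', 'BCR'), ('r3', 'BCR', 'ABL1'),
        ('r4', 'TP53', 'TP53'), ('r5', 'BCR', 'ABL1'),
    ]

    # Discordant pairs (mates in different genes) are evidence of a fusion transcript.
    support = Counter((a, b) for _, a, b in pairs if a != b)

    MIN_SUPPORT = 2   # require several independent pairs to filter out artifacts
    candidates = [genes for genes, n in support.items() if n >= MIN_SUPPORT]
    print(candidates)  # [('BCR', 'ABL1')]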

Research applications

Biologists may also deliberately create fusion genes for research purposes. The fusion of reporter genes to the regulatory elements of genes of interest allows researchers to study gene expression. Reporter gene fusions can be used to measure activity levels of gene regulators, identify the regulatory sites of genes (including the signals required), identify various genes that are regulated in response to the same stimulus, and artificially control the expression of desired genes in particular cells. For example, by creating a fusion gene of a protein of interest and green fluorescent protein, the protein of interest may be observed in cells or tissue using fluorescence microscopy. The protein synthesized when a fusion gene is expressed is called a fusion protein.

Mobile genetic elements

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Mobile_genetic_elements
DNA transposons, LTR retrotransposons, SINEs, and LINEs make up a majority of the human genome.

Mobile genetic elements (MGEs), sometimes called selfish genetic elements, are a type of genetic material that can move around within a genome, or that can be transferred from one species or replicon to another. MGEs are found in all organisms. In humans, approximately 50% of the genome is thought to consist of MGEs. MGEs play a distinct role in evolution. Gene duplication events can also happen through the mechanism of MGEs. MGEs can also cause mutations in protein-coding regions, which alters protein function. These mechanisms can also rearrange genes in the host genome, generating variation, and can increase fitness by providing new or additional functions. One example of MGEs in an evolutionary context is that virulence factors and antibiotic resistance genes carried on MGEs can be transferred to neighboring bacteria. However, MGEs can also decrease fitness by introducing disease-causing alleles or mutations. The set of MGEs in an organism is called a mobilome, which is composed of a large number of plasmids, transposons and viruses.

pBLU is a 5437 bp plasmid cloning vector. This vector contains an origin of replication sequence, a restriction enzyme cut site, the lacZ gene, and an ampicillin resistance gene.

Types

  • Plasmids: These are generally circular extrachromosomal DNA molecules that replicate and are transmitted independently of chromosomal DNA. These molecules are present in prokaryotes (bacteria and archaea) and sometimes in eukaryotic organisms such as yeast. The fitness of a plasmid is determined by its mobility. The first factor of plasmid fitness is its ability to replicate DNA. The second is its ability to transfer horizontally. During their life cycle, plasmids carry genes from one organism to another through a process called conjugation. Plasmids usually contain a set of mobility genes that are necessary for conjugation. Some plasmids employ membrane-associated mating pair formation (MPF). A plasmid containing its own MPF genes is considered to be self-transmissible or conjugative. Plasmids can be further divided into mobilizable and non-mobilizable classes. Plasmids that use the MPF genes of other genetic elements in the cell are mobilizable. Plasmids that cannot be mobilized in this way but spread by transduction or transformation are termed non-mobilizable. Plasmids often carry genes that make bacteria resistant to antibiotics.
  • Cloning vectors: These are hybrid plasmids combined with elements of bacteriophages, used to transfer and replicate DNA. Fragments of DNA can be inserted into them by recombinant DNA techniques. A viable vector must be able to replicate together with the DNA fragments it carries. These vectors can carry desired genes for insertion into an organism's genome. Examples are cosmids and phagemids.
[Figure: Examples of mobile genetic elements in the cell (left) and the ways they can be acquired (right).]
[Figure: Transposition of a target sequence into a recombination site in DNA by transposase. Replication of the transposable sequence begins when transposase cuts single strands on opposite sides of the dsDNA; replication is completed in the transposon complex, and the element is excised to the target sequence for recombination.]
  • DNA transposons: These are transposons that move directly from one position to another in the genome, using a transposase to cut the element out and paste it at another locus. These genetic elements are cleaved at four single-stranded sites in DNA by transposase. To achieve maximum stability of the intermediate transposon, a single-strand cleavage at the target DNA occurs; simultaneously, the donor strand is ligated to the target strand after cleavage, leaving a single-strand overhang on either end of the target sequence. These sites usually contain a 5 to 9 base pair overhang that can create a cohesive end. Transposase then holds the sequence in a crossed formation and ligates the donor strand to the target strand. The structure formed by the DNA duplex and transposase in replicative transposons is known as the Shapiro intermediate. The 5 to 9 base pair overhang is left on either side of the target sequence, allowing it to join its target sequence in either orientation; the sequence of these overhangs can determine the joining orientation. Before site-specific recombination can occur, the oligonucleotide ends must be filled in, and the ligation of these ends generates a replication fork at each end of the transposable element. Single-strand displacement drives synthesis from the un-ligated 3' hydroxyl group, forming long single-stranded sections adjacent to the 5' end; the opposite strand is therefore synthesized discontinuously as both replication forks approach the center of the transposable element. This results in two recombinant duplexes containing the semi-conserved transposable element flanked by the previous 5 to 9 base pair overhang. Site-specific reciprocal recombination then takes place between the two transposable elements, facilitated by proteins; this reciprocal recombination overlaps in time with replication and occurs between duplicated segments of the element before replication is completed. As a result, the target molecule contains the inserted element flanked by the 5 to 9 base pair sequences, a direct repeat of the target site (see the sketch after this list). Transposition of these elements duplicates the transposable element, leaving one copy in its original location and a new copy at the reciprocal replication site; in doing so, the total number of base pairs in an organism's genome is increased. Transposition occurrences increase over time and as organisms age.
[Figure: Retrotransposon mechanism, in which reverse transcriptase converts the RNA copy of the transposon back into DNA for integration.]
  • Retrotransposons: These are transposons that move through the genome by being transcribed into RNA and then converted back into DNA by reverse transcriptase. Many retrotransposons also exhibit replicative transposition. Retrotransposons are present exclusively in eukaryotes. They consist of two major types: long terminal repeat (LTR) transposons and non-LTR transposons. Non-LTR transposons can be further classified into long interspersed nuclear elements (LINEs) and short interspersed nuclear elements (SINEs). These retrotransposons are regulated by a family of short non-coding RNAs termed PIWI [P-element induced wimpy testis]-interacting RNAs (piRNAs). piRNAs are a recently discovered class of ncRNAs in the length range of ~24-32 nucleotides. Initially, piRNAs were described as repeat-associated siRNAs (rasiRNAs) because they originate from repetitive elements such as transposable sequences of the genome; it was later identified that they act via PIWI proteins. In addition to their role in suppressing genomic transposons, various other roles of piRNAs have recently been reported, such as regulation of the 3' UTRs of protein-coding genes via RNAi, transgenerational epigenetic inheritance conveying a memory of past transposon activity, and RNA-induced epigenetic silencing.
  • Integrons: These are genetic elements that capture gene cassettes, which usually carry antibiotic resistance genes, onto bacterial plasmids and transposons.
  • Introns: Group I and group II introns are nucleotide sequences with catalytic activity that form part of host transcripts; they act as ribozymes that can invade genes encoding tRNA, rRNA, and proteins. They are present in all cellular organisms and in viruses.
  • Introners: These are transposon-like sequences that can jump within the genome, leaving new introns behind where they were. They have been proposed as a possible mechanism of intron gain in the evolution of eukaryotes, where they are present in at least 5% of all species, especially in aquatic taxa, possibly because horizontal gene transfer occurs more frequently in these organisms. They were first described in 2009 in the unicellular green alga Micromonas.
  • Viral agents: These are mostly infective acellular agents that replicate in cellular hosts. During their infective cycle they can carry genes from one host to another, and they can carry genes from one organism to another if the viral agent infects more than one species. Traditionally they are considered separate entities, but many researchers who study their characteristics and evolution refer to them as mobile genetic elements, on the grounds that viral agents are simple particles or molecules that replicate and are transferred between various hosts, like the remaining non-viral mobile genetic elements. According to this point of view, viruses and other viral agents should not be considered living beings and are better conceived of as mobile genetic elements. Viral agents are evolutionarily connected with various mobile genetic elements: they are thought to have arisen from secreted or ejected plasmids of other organisms, and transposons also provide insight into how these elements may have originally started. This theory is known as the vagrancy hypothesis, proposed by Barbara McClintock in 1950.
    • Viruses: These are viral agents composed of a molecule of genetic material (DNA or RNA) that can form complex particles called virions, allowing them to move easily between hosts. Viruses are present in all living things, and viral particles are manufactured by the host's replicative machinery for horizontal transfer.
    • Satellite nucleic acids: These are DNA or RNA molecules encapsulated as stowaways in the virions of certain helper viruses, on which they depend to replicate. Although they are sometimes considered genetic elements of their helper viruses, they are not always found within them.
    • Viroids: These are viral agents consisting of small circular RNA molecules that infect and replicate in plants. These mobile genetic elements lack a protective protein coat and are found specifically in angiosperms.
    • Endogenous viral element: These are viral nucleic acids integrated into the genome of a cell. They can move and replicate multiple times in the host cell without causing disease or mutation. They are considered autonomous forms of transposons. Examples are proviruses and endogenous retroviruses.
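The target-site duplication described under DNA transposons above can be made concrete with a small sketch: because the target is cut with staggered single-strand nicks, fill-in after insertion leaves the 5 to 9 base pair target sequence as a direct repeat on both flanks of the new element. The code below is a toy model; all sequences are hypothetical.

    def insert_transposon(genome: str, pos: int, transposon: str,
                          overhang: int = 5) -> str:
        """Insert `transposon` at `pos`, duplicating the overhang-length target site."""
        assert 5 <= overhang <= 9, "target-site duplications here are 5-9 bp"
        target = genome[pos:pos + overhang]   # site cut with staggered ends
        # Fill-in of the staggered ends copies the target to both flanks.
        return (genome[:pos] + target + transposon
                + target + genome[pos + overhang:])

    genome = "AAAACCCCGGGGTTTT"
    print(insert_transposon(genome, 4, "tttttransposonttttt"))
    # AAAA CCCCG tttttransposonttttt CCCCG GGGTTTT  (direct repeats flank it)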

Research examples

CRISPR-Cas systems in bacteria and archaea are adaptive immune systems that protect against the deadly consequences of MGEs. Using comparative genomic and phylogenetic analysis, researchers have found that CRISPR-Cas variants are associated with distinct types of MGEs, such as transposable elements. In CRISPR-associated transposons, the transposable elements have co-opted CRISPR-Cas components for their propagation.

MGEs such as plasmids spread by horizontal transmission are generally beneficial to an organism. The ability to transfer plasmids (sharing) is important from an evolutionary perspective. Tazzyman and Bonhoeffer found that fixation (receiving) of transferred plasmids in a new organism is just as important as the ability to transfer them: rare beneficial transferable plasmids have a higher fixation probability, whereas deleterious transferable genetic elements have a lower fixation probability because they are lethal to the host organisms.
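To make the fixation intuition concrete, the sketch below estimates a fixation probability with a toy branching process in which each plasmid-bearing cell leaves vertical descendants and also gains new hosts by horizontal transfer. This is an illustrative simplification only, not Tazzyman and Bonhoeffer's actual model; the parameters s (selective effect) and beta (transfer rate) are hypothetical.

    import numpy as np

    rng = np.random.default_rng(0)

    def fixation_probability(s, beta, trials=20_000, threshold=500):
        """Estimate the chance a single transferable plasmid 'fixes'.

        Toy branching process (not the published model): each carrier
        leaves Poisson(1 + s) vertical descendants (s > 0 beneficial,
        s < 0 deleterious) plus Poisson(beta) new hosts gained by
        horizontal transfer. A lineage reaching `threshold` copies
        counts as fixed; a lineage reaching zero is lost.
        """
        fixed = 0
        for _ in range(trials):
            n = 1
            while 0 < n < threshold:
                n = (rng.poisson(1 + s, size=n).sum()
                     + rng.poisson(beta, size=n).sum())
            fixed += n >= threshold
        return fixed / trials

    # A beneficial, readily transferred plasmid fixes far more often
    # than a deleterious one, echoing the result described above.
    print(fixation_probability(s=0.05, beta=0.10))   # noticeably above 0
    print(fixation_probability(s=-0.20, beta=0.05))  # close to 0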

One type of MGE, the integrative and conjugative elements (ICEs), is central to horizontal gene transfer, shaping the genomes of prokaryotes and enabling the rapid acquisition of novel adaptive traits.

As a representative example of an ICE, ICEBs1 is well characterized for its role in the global DNA-damage SOS response of Bacillus subtilis, and for its potential link to the radiation and desiccation resistance of spores of Bacillus pumilus SAFR-032, a strain isolated from spacecraft cleanroom facilities.

Transposition by transposable elements is mutagenic, so organisms have evolved to repress transposition events; failure to repress them can cause cancer in somatic cells. De Cecco et al. found that transcription of retrotransposable elements is minimal in young mice but increases with advanced age, and that this age-dependent expression of transposable elements is reduced by a calorie-restricted diet. Replication of transposable elements often adds repeated sequences to the genome; these sequences are usually non-coding but can interfere with coding sequences of DNA. Though mutagenic by nature, transposons thereby increase the size of the genomes they transpose into. More research is needed into how these elements may serve as a rapid adaptation tool that organisms employ to generate variability; many transposable elements are dormant or require activation. It should also be noted that current estimates of the coding fraction of DNA would be higher if transposable elements that encode their own transposition machinery were counted as coding sequences.

Other researched examples include Mavericks, Starships, and Space invaders (SPINs).

Diseases

Mobile genetic elements can alter transcriptional patterns, which frequently leads to genetic disorders such as immune disorders, breast cancer, multiple sclerosis, and amyotrophic lateral sclerosis. In humans, stress can lead to the transcriptional activation of MGEs such as endogenous retroviruses, and this activation has been linked to neurodegeneration.

Other notes

The total of all mobile genetic elements in a genome may be referred to as the mobilome.

Barbara McClintock was awarded the 1983 Nobel Prize in Physiology or Medicine "for her discovery of mobile genetic elements" (transposable elements).

Mobile genetic elements play a critical role in the spread of virulence factors, such as exotoxins and exoenzymes, among bacteria. Strategies to combat certain bacterial infections by targeting these specific virulence factors and mobile genetic elements have been proposed.

Politics of Europe

From Wikipedia, the free encyclopedia ...