
Friday, August 1, 2025

Randomized algorithm

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Randomized_algorithm

A randomized algorithm is an algorithm that employs a degree of randomness as part of its logic or procedure. The algorithm typically uses uniformly random bits as an auxiliary input to guide its behavior, in the hope of achieving good performance in the "average case" over all possible choices of randomness determined by the random bits; thus either the running time, or the output (or both) are random variables.

There is a distinction between algorithms that use the random input so that they always terminate with the correct answer, but where the expected running time is finite (Las Vegas algorithms, for example Quicksort), and algorithms which have a chance of producing an incorrect result (Monte Carlo algorithms, for example the Monte Carlo algorithm for the MFAS problem) or fail to produce a result either by signaling a failure or failing to terminate. In some cases, probabilistic algorithms are the only practical means of solving a problem.

In common practice, randomized algorithms are approximated using a pseudorandom number generator in place of a true source of random bits; such an implementation may deviate from the expected theoretical behavior and mathematical guarantees which may depend on the existence of an ideal true random number generator.

Motivation

As a motivating example, consider the problem of finding an ‘a’ in an array of n elements.

Input: An array of n≥2 elements, in which half are ‘a’s and the other half are ‘b’s.

Output: Find an ‘a’ in the array.

We give two versions of the algorithm, one Las Vegas algorithm and one Monte Carlo algorithm.

Las Vegas algorithm:

findingA_LV(array A, n)
begin
    repeat
        Randomly select one element out of n elements.
    until 'a' is found
end

This algorithm succeeds with probability 1. The number of iterations varies and can be arbitrarily large, but the expected number of iterations is

Σ_{i=1}^{∞} i/2^i = 2

Since it is constant, the expected run time over many calls is Θ(1). (See Big Theta notation)
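
A minimal Python sketch of this Las Vegas search, assuming the input really is half ‘a’s and half ‘b’s; the function name and example data are illustrative:

import random

def finding_a_lv(array):
    # Las Vegas search: keep sampling positions uniformly at random until an 'a'
    # is hit. The answer is always correct; only the running time is random.
    # Assumes the array contains at least one 'a', otherwise this never returns.
    n = len(array)
    while True:
        i = random.randrange(n)
        if array[i] == 'a':
            return i

data = ['a', 'b'] * 8          # half 'a's, half 'b's, as in the motivating problem
print(finding_a_lv(data))      # prints some index holding 'a'; expected 2 probes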

Monte Carlo algorithm:

findingA_MC(array A, n, k)
begin
    i := 0
    repeat
        Randomly select one element out of n elements.
        i := i + 1
    until i = k or 'a' is found
end

If an ‘a’ is found, the algorithm succeeds, else the algorithm fails. After k iterations, the probability of finding an ‘a’ is:

Pr[find an ‘a’] = 1 − (1/2)^k

This algorithm does not guarantee success, but the run time is bounded. The number of iterations is always less than or equal to k. Taking k to be constant, the run time (expected and absolute) is Θ(1).
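
The Monte Carlo version caps the number of probes at k and may fail; a matching Python sketch (again with illustrative names):

import random

def finding_a_mc(array, k):
    # Monte Carlo search: at most k random probes; may return None (failure).
    n = len(array)
    for _ in range(k):
        i = random.randrange(n)
        if array[i] == 'a':
            return i           # success
    return None                # failure: probability (1/2)**k on this input

data = ['a', 'b'] * 8
print(finding_a_mc(data, k=10))   # fails (None) with probability about 0.001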

Randomized algorithms are particularly useful when faced with a malicious "adversary" or attacker who deliberately tries to feed a bad input to the algorithm (see worst-case complexity and competitive analysis (online algorithm)) such as in the Prisoner's dilemma. It is for this reason that randomness is ubiquitous in cryptography. In cryptographic applications, pseudo-random numbers cannot be used, since the adversary can predict them, making the algorithm effectively deterministic. Therefore, either a source of truly random numbers or a cryptographically secure pseudo-random number generator is required. Another area in which randomness is inherent is quantum computing.

In the example above, the Las Vegas algorithm always outputs the correct answer, but its running time is a random variable. The Monte Carlo algorithm (related to the Monte Carlo method for simulation) is guaranteed to complete in an amount of time that can be bounded by a function of the input size and its parameter k, but allows a small probability of error. Observe that any Las Vegas algorithm can be converted into a Monte Carlo algorithm (via Markov's inequality), by having it output an arbitrary, possibly incorrect answer if it fails to complete within a specified time. Conversely, if an efficient verification procedure exists to check whether an answer is correct, then a Monte Carlo algorithm can be converted into a Las Vegas algorithm by running the Monte Carlo algorithm repeatedly until a correct answer is obtained.
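
The second conversion is easy to sketch in code: given any Monte Carlo routine and an efficient checker for its answers, repeat until the checker accepts. The function and parameter names below are illustrative, not from the article:

def monte_carlo_to_las_vegas(monte_carlo, verify, *args):
    # Repeat the fast but possibly-wrong Monte Carlo routine until `verify`
    # confirms the answer. The output is then always correct; the number of
    # repetitions (and hence the running time) becomes a random variable.
    while True:
        answer = monte_carlo(*args)
        if answer is not None and verify(answer, *args):
            return answer

# Example with the array search above (illustrative):
#   monte_carlo_to_las_vegas(lambda a: finding_a_mc(a, 10),
#                            lambda i, a: a[i] == 'a',
#                            data)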

Computational complexity

Computational complexity theory models randomized algorithms as probabilistic Turing machines. Both Las Vegas and Monte Carlo algorithms are considered, and several complexity classes are studied. The most basic randomized complexity class is RP, which is the class of decision problems for which there is an efficient (polynomial time) randomized algorithm (or probabilistic Turing machine) which recognizes NO-instances with absolute certainty and recognizes YES-instances with a probability of at least 1/2. The complement class for RP is co-RP. Problem classes having (possibly nonterminating) algorithms with polynomial time average case running time whose output is always correct are said to be in ZPP.

The class of problems for which both YES and NO-instances are allowed to be identified with some error is called BPP. This class acts as the randomized equivalent of P, i.e. BPP represents the class of efficient randomized algorithms.
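
Stated slightly more formally (a standard formulation; the constants 1/2, 2/3, and 1/3 are conventional, and any constants with the same qualitative gap define the same classes after amplification):

L \in \mathrm{RP} \iff \exists \text{ polynomial-time randomized } A:\;
    x \in L \Rightarrow \Pr[A(x)\ \text{accepts}] \ge \tfrac{1}{2}
    \quad\text{and}\quad
    x \notin L \Rightarrow \Pr[A(x)\ \text{accepts}] = 0

L \in \mathrm{BPP} \iff \exists \text{ polynomial-time randomized } A:\;
    x \in L \Rightarrow \Pr[A(x)\ \text{accepts}] \ge \tfrac{2}{3}
    \quad\text{and}\quad
    x \notin L \Rightarrow \Pr[A(x)\ \text{accepts}] \le \tfrac{1}{3}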

Early history

Sorting

Quicksort was discovered by Tony Hoare in 1959, and subsequently published in 1961. In the same year, Hoare published the quickselect algorithm, which finds the median element of a list in linear expected time. It remained open until 1973 whether a deterministic linear-time algorithm existed.

Number theory

In 1917, Henry Cabourn Pocklington introduced a randomized algorithm known as Pocklington's algorithm for efficiently finding square roots modulo prime numbers. In 1970, Elwyn Berlekamp introduced a randomized algorithm for efficiently computing the roots of a polynomial over a finite field. In 1977, Robert M. Solovay and Volker Strassen discovered a polynomial-time randomized primality test (i.e., determining the primality of a number). Soon afterwards Michael O. Rabin demonstrated that the 1976 Miller's primality test could also be turned into a polynomial-time randomized algorithm. At that time, no provably polynomial-time deterministic algorithms for primality testing were known.

Data structures

One of the earliest randomized data structures is the hash table, which was introduced in 1953 by Hans Peter Luhn at IBM. Luhn's hash table used chaining to resolve collisions and was also one of the first applications of linked lists. Subsequently, in 1954, Gene Amdahl, Elaine M. McGraw, Nathaniel Rochester, and Arthur Samuel of IBM Research introduced linear probing, although Andrey Ershov independently had the same idea in 1957. In 1962, Donald Knuth performed the first correct analysis of linear probing, although the memorandum containing his analysis was not published until much later. The first published analysis was due to Konheim and Weiss in 1966.

Early works on hash tables either assumed access to a fully random hash function or assumed that the keys themselves were random. In 1979, Carter and Wegman introduced universal hash functions, which they showed could be used to implement chained hash tables with constant expected time per operation.
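
A minimal sketch of one standard universal family in this spirit, h_{a,b}(x) = ((a·x + b) mod p) mod m for integer keys x < p; the prime and parameters below are illustrative choices rather than the specific construction of the 1979 paper:

import random

P = 2_147_483_647                 # a prime larger than the key universe (illustrative)

def make_universal_hash(m):
    # Draw h_{a,b}(x) = ((a*x + b) mod P) mod m at random from the family.
    # For any two fixed distinct keys x != y (both < P), the probability of a
    # collision over the random choice of (a, b) is at most about 1/m.
    a = random.randrange(1, P)
    b = random.randrange(0, P)
    return lambda x: ((a * x + b) % P) % m

h = make_universal_hash(1024)     # one random function for a 1024-bucket table
print(h(42), h(1337))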

Early work on randomized data structures also extended beyond hash tables. In 1970, Burton Howard Bloom introduced an approximate-membership data structure known as the Bloom filter. In 1989, Raimund Seidel and Cecilia R. Aragon introduced a randomized balanced search tree known as the treap. In the same year, William Pugh introduced another randomized search tree known as the skip list.
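
A compact sketch of the Bloom filter idea from this paragraph; the table size, number of hash functions, and the way they are derived from a salted SHA-256 are illustrative choices, not Bloom's original construction:

import hashlib

class BloomFilter:
    # Approximate membership: no false negatives, occasional false positives.
    def __init__(self, m=1024, k=4):
        self.m, self.k = m, k
        self.bits = bytearray(m)            # one byte per bit, for simplicity

    def _positions(self, item):
        # Derive k pseudo-independent positions by salting a single hash.
        for i in range(self.k):
            digest = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(digest[:8], "big") % self.m

    def add(self, item):
        for pos in self._positions(item):
            self.bits[pos] = 1

    def __contains__(self, item):
        return all(self.bits[pos] for pos in self._positions(item))

bf = BloomFilter()
bf.add("treap")
print("treap" in bf, "skip list" in bf)     # True, (almost certainly) False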

Implicit uses in combinatorics

Prior to the popularization of randomized algorithms in computer science, Paul Erdős popularized the use of randomized constructions as a mathematical technique for establishing the existence of mathematical objects. This technique has become known as the probabilistic method. Erdős gave his first application of the probabilistic method in 1947, when he used a simple randomized construction to establish the existence of Ramsey graphs. He famously used a more sophisticated randomized algorithm in 1959 to establish the existence of graphs with high girth and chromatic number.

Examples

Quicksort

Quicksort is a familiar, commonly used algorithm in which randomness can be useful. Many deterministic versions of this algorithm require O(n²) time to sort n numbers for some well-defined class of degenerate inputs (such as an already sorted array), with the specific class of inputs that generate this behavior defined by the protocol for pivot selection. However, if the algorithm selects pivot elements uniformly at random, it has a provably high probability of finishing in O(n log n) time regardless of the characteristics of the input.
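
A short Python sketch of the random pivot choice; this out-of-place version trades the usual in-place partitioning for brevity:

import random

def randomized_quicksort(a):
    # Uniformly random pivot: expected O(n log n) comparisons on every input,
    # including already-sorted arrays that defeat fixed pivot rules.
    if len(a) <= 1:
        return a
    pivot = a[random.randrange(len(a))]
    less    = [x for x in a if x < pivot]
    equal   = [x for x in a if x == pivot]
    greater = [x for x in a if x > pivot]
    return randomized_quicksort(less) + equal + randomized_quicksort(greater)

print(randomized_quicksort([5, 3, 8, 1, 9, 2, 7]))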

Randomized incremental constructions in geometry

In computational geometry, a standard technique to build a structure like a convex hull or Delaunay triangulation is to randomly permute the input points and then insert them one by one into the existing structure. The randomization ensures that the expected number of changes to the structure caused by an insertion is small, and so the expected running time of the algorithm can be bounded from above. This technique is known as randomized incremental construction.

Min cut

Input: A graph G(V,E)

Output: A cut partitioning the vertices into L and R, with the minimum number of edges between L and R.

Recall that the contraction of two nodes, u and v, in a (multi-)graph yields a new node u′ with edges that are the union of the edges incident on either u or v, except for any edge(s) connecting u and v. Figure 1 gives an example of the contraction of vertices A and B. After contraction, the resulting graph may have parallel edges, but contains no self loops.

Figure 1: Contraction of vertices A and B
Figure 2: Successful run of Karger's algorithm on a 10-vertex graph. The minimum cut has size 3 and is indicated by the vertex colours.

Karger's basic algorithm:

begin
    i = 1
    repeat
        repeat
            Take a random edge (u,v) ∈ E in G
            replace u and v with the contraction u'
        until only 2 nodes remain
        obtain the corresponding cut result Ci
        i = i + 1
    until i > m
    output the minimum cut among C1, C2, ..., Cm.
end

In each execution of the outer loop, the algorithm repeats the inner loop until only 2 nodes remain, and the corresponding cut is obtained. The run time of one execution is O(n), and n denotes the number of vertices. After m executions of the outer loop, we output the minimum cut among all the results. Figure 2 gives an example of one execution of the algorithm. After execution, we get a cut of size 3.
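
A compact Python sketch of the contraction procedure, using a union-find over the original edge list (an implementation choice not spelled out in the pseudocode above); the sample graph and repetition count are illustrative:

import random

def karger_min_cut(n, edges, repetitions):
    # Karger's contraction algorithm on a multigraph with vertices 0..n-1.
    # Each repetition contracts randomly chosen edges until two super-nodes
    # remain, then counts the edges crossing between them; the smallest cut wins.
    def find(parent, x):                    # union-find with path halving
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    best = len(edges)
    for _ in range(repetitions):
        parent = list(range(n))
        remaining = n
        pool = edges[:]                     # contract edges in random order
        random.shuffle(pool)
        while remaining > 2 and pool:
            u, v = pool.pop()
            ru, rv = find(parent, u), find(parent, v)
            if ru == rv:
                continue                    # already in the same super-node
            parent[rv] = ru                 # contract the edge (u, v)
            remaining -= 1
        cut = sum(1 for u, v in edges if find(parent, u) != find(parent, v))
        best = min(best, cut)
    return best

# Square with one diagonal: the minimum cut has size 2.
print(karger_min_cut(4, [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)], repetitions=30))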

Lemma 1: Let k be the min cut size, and let C = {e1, e2, ..., ek} be the min cut. If, during iteration i, no edge e ∈ C is selected for contraction, then Ci = C.

Proof

If G is not connected, then G can be partitioned into L and R without any edge between them. So the min cut in a disconnected graph is 0. Now, assume G is connected. Let V = L ∪ R be the partition of V induced by C: C = {{u,v} ∈ E : u ∈ L, v ∈ R} (well-defined since G is connected). Consider an edge {u,v} of C. Initially, u and v are distinct vertices. As long as we pick an edge other than one in C, u and v do not get merged. Thus, at the end of the algorithm, we have two compound nodes covering the entire graph, one consisting of the vertices of L and the other consisting of the vertices of R. As in Figure 1, if the size of the min cut is 1 and C = {(A,B)}, then as long as (A,B) is not selected for contraction, we get the min cut.

Lemma 2: If G is a multigraph with p vertices whose min cut has size k, then G has at least pk/2 edges.

Proof

Because the min cut is k, every vertex v must satisfy degree(v) ≥ k. Therefore, the sum of the degrees is at least pk. But it is well known that the sum of vertex degrees equals 2|E|. The lemma follows.

Analysis of algorithm

The probability that the algorithm succeeds is 1 − the probability that all attempts fail. By independence, the probability that all attempts fail is

∏_{i=1}^{m} Pr(Ci ≠ C) = ∏_{i=1}^{m} (1 − Pr(Ci = C))

By Lemma 1, the probability that Ci = C is the probability that no edge of C is selected during iteration i. Consider the inner loop and let Gj denote the graph after j edge contractions, where j ∈ {0, 1, …, n − 3}. Gj has n − j vertices. We use the chain rule of conditional probabilities. The probability that the edge chosen at iteration j is not in C, given that no edge of C has been chosen before, is 1 − k/|E(Gj)|. Note that Gj still has a min cut of size k, so by Lemma 2, it still has at least (n − j)k/2 edges.

Thus, 1 − k/|E(Gj)| ≥ 1 − 2/(n − j).

So by the chain rule, the probability of finding the min cut C is

Pr[Ci = C] ≥ (1 − 2/n)(1 − 2/(n − 1)) ⋯ (1 − 2/3)

Cancellation gives Pr[Ci = C] ≥ 2/(n(n − 1)). Thus the probability that the algorithm succeeds is at least 1 − (1 − 2/(n(n − 1)))^m. For m = (n(n − 1)/2) ln n, the failure term satisfies (1 − 2/(n(n − 1)))^m ≤ e^(−ln n) = 1/n, so the algorithm finds the min cut with probability at least 1 − 1/n, in time O(mn) = O(n³ log n).

Derandomization

Randomness can be viewed as a resource, like space and time. Derandomization is then the process of removing randomness (or using as little of it as possible). It is not currently known if all algorithms can be derandomized without significantly increasing their running time. For instance, in computational complexity, it is unknown whether P = BPP, i.e., we do not know whether we can take an arbitrary randomized algorithm that runs in polynomial time with a small error probability and derandomize it to run in polynomial time without using randomness.

There are specific methods that can be employed to derandomize particular randomized algorithms:

  • the method of conditional probabilities, and its generalization, pessimistic estimators
  • discrepancy theory (which is used to derandomize geometric algorithms)
  • the exploitation of limited independence in the random variables used by the algorithm, such as the pairwise independence used in universal hashing
  • the use of expander graphs (or dispersers in general) to amplify a limited amount of initial randomness (this last approach is also referred to as generating pseudorandom bits from a random source, and leads to the related topic of pseudorandomness)
  • changing the randomized algorithm to use a hash function as a source of randomness for the algorithm's tasks, and then derandomizing the algorithm by brute-forcing all possible parameters (seeds) of the hash function. This technique is usually used to exhaustively search a sample space and make the algorithm deterministic (e.g. randomized graph algorithms)

Where randomness helps

When the model of computation is restricted to Turing machines, it is currently an open question whether the ability to make random choices allows some problems to be solved in polynomial time that cannot be solved in polynomial time without this ability; this is the question of whether P = BPP. However, in other contexts, there are specific examples of problems where randomization yields strict improvements.

  • Based on the initial motivating example: given an exponentially long string of 2^k characters, half a's and half b's, a random-access machine requires 2^(k−1) lookups in the worst-case to find the index of an a; if it is permitted to make random choices, it can solve this problem in an expected polynomial number of lookups.
  • The natural way of carrying out a numerical computation in embedded systems or cyber-physical systems is to provide a result that approximates the correct one with high probability (or Probably Approximately Correct Computation (PACC)). The hard problem associated with the evaluation of the discrepancy loss between the approximated and the correct computation can be effectively addressed by resorting to randomization
  • In communication complexity, the equality of two strings can be verified to some reliability using O(log n) bits of communication with a randomized protocol (see the fingerprinting sketch after this list). Any deterministic protocol requires Θ(n) bits if defending against a strong opponent.
  • The volume of a convex body can be estimated by a randomized algorithm to arbitrary precision in polynomial time. Bárány and Füredi showed that no deterministic algorithm can do the same. This is true unconditionally, i.e. without relying on any complexity-theoretic assumptions, assuming the convex body can be queried only as a black box.
  • A more complexity-theoretic example of a place where randomness appears to help is the class IP. IP consists of all languages that can be accepted (with high probability) by a polynomially long interaction between an all-powerful prover and a verifier that implements a BPP algorithm. IP = PSPACE. However, if it is required that the verifier be deterministic, then IP = NP.
  • In a chemical reaction network (a finite set of reactions like A+B → 2C + D operating on a finite number of molecules), the ability to ever reach a given target state from an initial state is decidable, while even approximating the probability of ever reaching a given target state (using the standard concentration-based probability for which reaction will occur next) is undecidable. More specifically, a limited Turing machine can be simulated with arbitrarily high probability of running correctly for all time, only if a random chemical reaction network is used. With a simple nondeterministic chemical reaction network (any possible reaction can happen next), the computational power is limited to primitive recursive functions.
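
For the communication-complexity item above, the standard fingerprinting idea can be sketched as follows; one party sends only the random point r and its fingerprint, a number of bits logarithmic in the modulus (the modulus and names here are illustrative):

import random

P = (1 << 61) - 1          # a large Mersenne prime (illustrative modulus choice)

def fingerprint(s, r):
    # Evaluate, at the point r modulo P, the polynomial whose coefficients are
    # the bytes of s. Starting the accumulator at 1 makes strings of different
    # lengths yield different polynomials.
    acc = 1
    for byte in s.encode():
        acc = (acc * r + byte) % P
    return acc

def probably_equal(x, y):
    # Randomized equality test: an answer of "different" is always correct;
    # an answer of "equal" is wrong with probability at most about max(len)/P.
    r = random.randrange(P)                 # the shared random evaluation point
    return fingerprint(x, r) == fingerprint(y, r)

print(probably_equal("randomized" * 1000, "randomized" * 1000))    # True
print(probably_equal("randomized" * 1000, "derandomized" * 1000))  # almost surely False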

Thursday, July 31, 2025

Devolution (biology)

From Wikipedia, the free encyclopedia

Devolution, de-evolution, or backward evolution (not to be confused with dysgenics) is the notion that species can revert to supposedly more primitive forms over time. The concept relates to the idea that evolution has a divine purpose (teleology) and is thus progressive (orthogenesis), for example that feet might be better than hooves, or lungs than gills. However, evolutionary biology makes no such assumptions, and natural selection shapes adaptations with no foreknowledge or foresights of any kind regarding the outcome. It is possible for small changes (such as in the frequency of a single gene) to be reversed by chance or selection, but this is no different from the normal course of evolution and as such de-evolution is not compatible with a proper understanding of evolution due to natural selection.

In the 19th century, when belief in orthogenesis was widespread, zoologists such as Ray Lankester and Anton Dohrn and palaeontologists Alpheus Hyatt and Carl H. Eigenmann advocated the idea of devolution. The concept appears in Kurt Vonnegut's 1985 novel Galápagos, which portrays a society that has evolved backwards to have small brains.

Dollo's law of irreversibility, first stated in 1893 by the palaeontologist Louis Dollo, denies the possibility of devolution. The evolutionary biologist Richard Dawkins explains Dollo's law as being simply a statement about the improbability of evolution's following precisely the same path twice.

Context

Lamarck's theory of evolution involved a complexifying force that progressively drives animal body plans towards higher levels, creating a ladder of phyla, as well as an adaptive force that causes animals with a given body plan to adapt to circumstances. The idea of progress in such theories permits the opposite idea of decay, seen in devolution.

The idea of devolution is based on the presumption of orthogenesis, the view that evolution has a purposeful direction towards increasing complexity. Modern evolutionary theory, beginning with Darwin at least, poses no such presumption; further, the concept of evolutionary change is independent of either any increase in complexity of organisms sharing a gene pool, or any decrease, such as in vestigiality or in loss of genes. Earlier views that species are subject to "cultural decay", "drives to perfection", or "devolution" are practically meaningless in terms of current (neo-)Darwinian theory. Early scientific theories of transmutation of species such as Lamarckism perceived species diversity as a result of a purposeful internal drive or tendency to form improved adaptations to the environment. In contrast, Darwinian evolution and its elaboration in the light of subsequent advances in biological research, have shown that adaptation through natural selection comes about when particular heritable attributes in a population happen to give a better chance of successful reproduction in the reigning environment than rival attributes do. By the same process less advantageous attributes are less "successful"; they decrease in frequency or are lost completely. Since Darwin's time it has been shown how these changes in the frequencies of attributes occur according to the mechanisms of genetics and the laws of inheritance originally investigated by Gregor Mendel. Combined with Darwin's original insights, genetic advances led to what has variously been called the modern evolutionary synthesis or the neo-Darwinism of the 20th century. In these terms evolutionary adaptation may occur most obviously through the natural selection of particular alleles. Such alleles may be long established, or they may be new mutations. Selection also might arise from more complex epigenetic or other chromosomal changes, but the fundamental requirement is that any adaptive effect must be heritable.

The concept of devolution on the other hand, requires that there be a preferred hierarchy of structure and function, and that evolution must mean "progress" to "more advanced" organisms. For example, it could be said that "feet are better than hooves" or "lungs are better than gills", so their development is "evolutionary" whereas change to an inferior or "less advanced" structure would be called "devolution". In reality an evolutionary biologist defines all heritable changes to relative frequencies of the genes or indeed to epigenetic states in the gene pool as evolution. All gene pool changes that lead to increased fitness in terms of appropriate aspects of reproduction are seen as (neo-)Darwinian adaptation because, for the organisms possessing the changed structures, each is a useful adaptation to their circumstances. For example, hooves have advantages for running quickly on plains, which benefits horses, and feet offer advantages in climbing trees, which some ancestors of humans did.

The concept of devolution as regress from progress relates to the ancient ideas that either life came into being through special creation or that humans are the ultimate product or goal of evolution. The latter belief is related to anthropocentrism, the idea that human existence is the point of all universal existence. Such thinking can lead on to the idea that species evolve because they "need to" in order to adapt to environmental changes. Biologists refer to this misconception as teleology, the idea of intrinsic finality that things are "supposed" to be and behave a certain way, and naturally tend to act that way to pursue their own good. From a biological viewpoint, in contrast, if species evolve it is not a reaction to necessity, but rather that the population contains variations with traits that favour their natural selection. This view is supported by the fossil record which demonstrates that roughly ninety-nine percent of all species that ever lived are now extinct.

People thinking in terms of devolution commonly assume that progress is shown by increasing complexity, but biologists studying the evolution of complexity find evidence of many examples of decreasing complexity in the record of evolution. The lower jaw in fish, reptiles and mammals has seen a decrease in complexity, if measured by the number of bones. Ancestors of modern horses had several toes on each foot; modern horses have a single hooved toe. Modern humans may be evolving towards never having wisdom teeth, and already have lost most of the tail found in many other mammals - not to mention other vestigial structures, such as the vermiform appendix or the nictitating membrane. In some cases, the level of organization of living creatures can also “shift” downwards (e.g., the loss of multicellularity in some groups of protists and fungi).

A more rational version of the concept of devolution, a version that does not involve concepts of "primitive" or "advanced" organisms, is based on the observation that if certain genetic changes in a particular combination (sometimes in a particular sequence as well) are precisely reversed, one should get precise reversal of the evolutionary process, yielding an atavism or "throwback", whether more or less complex than the ancestors where the process began. At a trivial level, where just one or a few mutations are involved, selection pressure in one direction can have one effect, which can be reversed by new patterns of selection when conditions change. That could be seen as reversed evolution, though the concept is not of much interest because it does not differ in any functional or effective way from any other adaptation to selection pressures.

History

Bénédict Morel (1809–1873) suggested a link between the environment and social degeneration.

The concept of degenerative evolution was used by scientists in the 19th century; at this time it was believed by most biologists that evolution had some kind of direction.

In 1857 the physician Bénédict Morel, influenced by Lamarckism, claimed that environmental factors such as taking drugs or alcohol would produce social degeneration in the offspring of those individuals, and would revert those offspring to a primitive state. Morel, a devout Catholic, had believed that mankind had started in perfection, contrasting modern humanity to the past. Morel claimed there had been "Morbid deviation from an original type". His theory of devolution was later advocated by some biologists.

According to Roger Luckhurst:

Darwin soothed readers that evolution was progressive, and directed towards human perfectibility. The next generation of biologists were less confident or consoling. Using Darwin's theory, and many rival biological accounts of development then in circulation, scientists suspected that it was just as possible to devolve, to slip back down the evolutionary scale to prior states of development.

One of the first biologists to suggest devolution was Ray Lankester, who explored the possibility that evolution by natural selection may in some cases lead to devolution; an example he studied was the regression in the life cycle of sea squirts. Lankester discussed the idea of devolution in his book Degeneration: A Chapter in Darwinism (1880). He was a critic of progressive evolution, pointing out that higher forms existed in the past which have since degenerated into simpler forms. Lankester argued that "if it was possible to evolve, it was also possible to devolve, and that complex organisms could devolve into simpler forms or animals".

Anton Dohrn also developed a theory of degenerative evolution based on his studies of vertebrates. According to Dohrn, many chordates are degenerated because of their environmental conditions. Dohrn claimed that cyclostomes such as lampreys are degenerate fish, as there is no evidence that their jawless state is an ancestral feature; rather, it is the product of environmental adaptation due to parasitism. According to Dohrn, if cyclostomes were to devolve further they would resemble something like an Amphioxus.

The historian of biology Peter J. Bowler has written that devolution was taken seriously by proponents of orthogenesis and others in the late 19th century who at this period of time firmly believed that there was a direction in evolution. Orthogenesis was the belief that evolution travels in internally directed trends and levels. The paleontologist Alpheus Hyatt discussed devolution in his work, using the concept of racial senility as the mechanism of devolution. Bowler defines racial senility as "an evolutionary retreat back to a state resembling that from which it began."

Hyatt, who studied the fossils of invertebrates, believed that ammonoids developed by regular stages up to a specific level but would later, due to unfavourable conditions, descend back to a previous level; this, according to Hyatt, was a form of Lamarckism, as the degeneration was a direct response to external factors. After this degeneration the species would then become extinct; in Hyatt's words there was a "phase of youth, a phase of maturity, a phase of senility or degeneration foreshadowing the extinction of a type". To Hyatt the devolution was predetermined by internal factors which organisms can neither control nor reverse. This idea of all evolutionary branches eventually running out of energy and degenerating into extinction was a pessimistic view of evolution and was unpopular amongst many scientists of the time.

Carl H. Eigenmann, an ichthyologist, wrote Cave vertebrates of America: a study in degenerative evolution (1909), in which he concluded that cave evolution was essentially degenerative. The entomologist William Morton Wheeler and the Lamarckian Ernest MacBride (1866–1940) also advocated degenerative evolution. According to MacBride, invertebrates were actually degenerate vertebrates; his argument was based on the idea that "crawling on the seabed was inherently less stimulating than swimming in open waters."

Degeneration theory

Johann Friedrich Blumenbach (1752–1840)

Johann Friedrich Blumenbach and other monogenists such as Georges-Louis Leclerc, Comte de Buffon were believers in the "Degeneration theory" of racial origins. The theory claims that races can degenerate into "primitive" forms. Blumenbach claimed that Adam and Eve were white and that other races came about by degeneration from environmental factors such as the sun and poor diet. Buffon believed that the degeneration could be reversed if proper environmental control was taken and that all contemporary forms of man could revert to the original Caucasian race.

Blumenbach claimed that Negroid pigmentation arose as a result of the heat of the tropical sun, that cold wind caused the tawny colour of the Eskimos, and that the Chinese were fair-skinned compared to the other Asian stocks because they kept mostly in towns, protected from environmental factors.

According to Blumenbach there are five races all belonging to a single species: Caucasian, Mongolian, Ethiopian, American and Malay. Blumenbach however stated:

I have allotted the first place to the Caucasian because this stock displays the most beautiful race of men.

According to Blumenbach the other races are supposed to have degenerated from the Caucasian ideal stock. Blumenbach denied that his "Degeneration theory" was racist; he also wrote three essays claiming non-white peoples are capable of excelling in arts and sciences in reaction against racialists of his time who believed they couldn't.

Cyril M. Kornbluth's 1951 short story "The Marching Morons" is an example of dysgenic pressure in fiction, describing a man who accidentally ends up in the distant future and discovers that dysgenics has resulted in mass stupidity. Similarly, Mike Judge's 2006 film Idiocracy has the same premise, with the main character the subject of a military hibernation experiment that goes awry, taking him 500 years into the future. While in "The Marching Morons", civilization is kept afloat by a small group of dedicated geniuses, in Idiocracy, voluntary childlessness among high-IQ couples leaves only automated systems to fill that role. The 1998 song "Flagpole Sitta" by Harvey Danger finds lighthearted humor in dysgenics with the lines "Been around the world and found/That only stupid people are breeding/The cretins cloning and feeding/And I don't even own a tv". H. G. Wells' 1895 novel, The Time Machine, describes a future world where humanity has degenerated into two distinct branches who have their roots in the class distinctions of Wells' day. Both have sub-human intelligence and other putative dysgenic traits.

T. J. Bass's novels Half Past Human and The Godwhale describe humanity becoming cooperative and "low-maintenance" to the detriment of all other traits.

The American new wave band Devo derived both their name and overarching philosophy from the concept of "de-evolution" and used social satire and humor to espouse the idea that humanity had actually regressed over time. According to music critic Steve Huey, the band "adapted the theory to fit their view of American society as a rigid, dichotomized instrument of repression ensuring that its members behaved like clones, marching through life with mechanical, assembly-line precision and no tolerance for ambiguity."

DC Comics' Aquaman has one of the seven races of Atlantis called The Trench, similar to the Grindylows of British folklore, Cthulhu Mythos' Deep One, Universal Classic Monsters' Gill-man, and Fallout's Mirelurk. They were regressed to survive in the deepest, darkest places on the bottom of ocean trenches where they hide—hence their name—and are photophobic when in contact with light.

LEGO's 2009 Bionicle sets include the Glatorian and Agori. One of the six tribes is the Sand Tribe, whose Glatorian and Agori are turned into scorpion-like beasts—the Vorox and the Zesk—by their creators, the Great Beings, who are of the same species as the Glatorian and Agori.

Kurt Vonnegut's 1985 novel Galápagos is set a million years in the future, where humans have "devolved" to have much smaller brains. Robert E. Howard, in The Hyborian Age, an essay on his Conan the Barbarian universe, stated that the Atlanteans devolved into "ape-men", and had once been the Picts (distinct from the actual people; his are closely modeled on Algonquian Native Americans). Similarly, Helena Blavatsky, founder of Theosophy, believed, contrary to standard evolutionary theory, that apes had devolved from humans rather than the opposite, through affected people "putting themselves on the animal level".

Jonathan Swift's 1726 novel Gulliver's Travels contains a story about the Yahoos, a kind of human-like creature reduced to a savage, animal-like state, in a society in which the Houyhnhnms—descendants of horses—are the dominant species.

H.P. Lovecraft's 1924 short story The Rats in the Walls also describes devolved humans.

Religion and children

From Wikipedia, the free encyclopedia

Children often acquire religious views approximating those of their parents, although they may also be influenced by others they communicate with – such as peers and teachers. Matters relating to the subject of children and religion may include rites of passage, education, and child psychology, as well as discussion of the moral issue of the religious education of children.

The Children and Parents area in the Priory Church of St Mary, Totnes, Devon, UK
Chairs for children in the Church of Agia Marina in Kissos (Pelion, Greece)

Rites of passage

A Roman Catholic infant baptism in the United States

Most Christian denominations practice infant baptism to enter children into the faith. Some form of confirmation ritual occurs when the child has reached the age of reason and voluntarily accepts the religion.

Ritual circumcision is used to mark Jewish, Muslim, Coptic Christian, and Ethiopian Orthodox Christian infant males as belonging to the faith. Jewish boys and girls then confirm their belonging at a coming-of-age ceremony known as the Bar or Bat Mitzvah respectively.

Education

A young Muslim couple and their toddler at Masjid al-Haram, Makkah, Saudi Arabia

Religious education

A parochial school (US) or faith school (UK), is a type of school which engages in religious education in addition to conventional education. Parochial schools may be primary or secondary and may have state funding but varying amounts of control by a religious organization. In addition, there are religious schools which only teach the religion and subsidiary subjects (such as the language of the holy books), typically run on a part-time basis separate from normal schooling. Examples are the Christian Sunday schools and the Jewish Hebrew schools. Islamic religious schools are known in English by the Arabic loanword Madrasah.

Prayer in school

Religion may have an influence on what goes on in state schools. For example, in the UK the Education Act 1944 introduced the requirement for daily prayers in all state-funded schools, but later acts changed this requirement to a daily "collective act of worship", the School Standards and Framework Act 1998 being the most recent. This also requires such acts of worship to be "wholly or mainly of a broadly Christian character". The term "mainly" means that acts related to other faiths can be carried out providing the majority are Christian.

Teaching evolution

The creation–evolution controversy, especially the status of creation and evolution in public education, is a debate over teaching children the origin and evolution of life, mostly in conservative regions of the United States. However, evolution is accepted by the Catholic Church and is a part of the Catholic Catechism.

Display of religious symbols

In France, children are forbidden from wearing conspicuous religious symbols in public schools.

Religious indoctrination of children

Many legal experts have argued that the government should create laws in the interests of the welfare of children, irrespective of the religion of their parents. Nicholas Humphrey has argued that children "have a human right not to have their minds crippled by exposure to other people's bad ideas," and should have the ability to question the religious views of their parents.

In "Parents' religion and children's welfare: debunking the doctrine of parents' rights", philosopher Arthur Schopenhauer spoke of the subject in the 19th century:

"And as the capacity for believing is strongest in childhood, special care is taken to make sure of this tender age. This has much more to do with the doctrines of belief taking root than threats and reports of miracles. If, in early childhood, certain fundamental views and doctrines are paraded with unusual solemnity, and an air of the greatest earnestness never before visible in anything else; if, at the same time, the possibility of a doubt about them be completely passed over, or touched upon only to indicate that doubt is the first step to eternal perdition, the resulting impression will be so deep that, as a rule, that is, in almost every case, doubt about them will be almost as impossible as doubt about one's own existence."

— Arthur Schopenhauer, On Religion: A Dialogue

Several authors have been critical of religious indoctrination of children, such as Nicholas Humphrey, Daniel Dennett and Richard Dawkins. Christopher Hitchens and Dawkins use the term child abuse to describe the harm that some religious upbringings inflict on children. A. C. Grayling has argued "we are all born atheists... and it takes a certain amount of work on the part of the adults in our community to persuade [children] differently."

Dawkins states that he is angered by the labels "Muslim child" or "Catholic child". He asks how a young child can be considered intellectually mature enough to have such independent views on the cosmos and humanity's place within it. By contrast, Dawkins states, no reasonable person would speak of a "Marxist child" or a "Tory child." He suggests there is little controversy over such labeling because of the "weirdly privileged status of religion". Once, Dawkins stated that sexually abusing a child is "arguably less" damaging than "the long term psychological damage inflicted by bringing up a child Catholic in the first place". Dawkins later wrote that this was an off-the-cuff remark.

Child marriage

Some scholars of Islam have permitted the child marriage of older men to girls as young as 10 years of age if they have entered puberty. The Seyaj Organization for the Protection of Children describes cases of a 10-year-old girl being married and raped in Yemen (Nujood Ali), a 13-year-old Yemeni girl dying of internal bleeding three days after marriage, and a 12-year-old girl dying in childbirth after marriage.

Latter Day Saint church founder Joseph Smith married girls as young as 13 and 14, and other Latter Day Saints married girls as young as 10. The Church of Jesus Christ of Latter-day Saints eliminated underaged marriages in the 19th century, but several fundamentalist branches of Mormonism continue the practice.

Health effects

Medical care

Saint Francis Borgia performing an exorcism, by Goya

Some religions treat illness, both mental and physical, in a manner that does not heal, and in some cases exacerbates the problem. Specific examples include faith healing of certain Christian sects, denominations which eschew medical care including vaccinations or blood transfusions, and exorcisms.

Faith based practices for healing purposes have come into direct conflict with both the medical profession and the law when victims of these practices are harmed, or in the most extreme cases, killed by these "cures." A detailed study in 1998 found 140 instances of deaths of children due to religion-based medical neglect. Most of these cases involved religious parents relying on prayer to cure the child's disease, and withholding medical care.

Jehovah's Witnesses object to blood transfusion primarily on religious grounds; they believe that blood is sacred and that God said to "abstain from blood" (Acts 15:28–29).

Religion as a by-product of children's attributes

Dawkins proposes that religion is a by-product arising from other features of the human species that are adaptive. One such feature is the tendency of children to "believe, without question, whatever your grown-ups tell you" (Dawkins, 2006, p. 174).

Psychologist Paul Bloom sees religion as a by-product of children's instinctive tendency toward a dualistic view of the world, and a predisposition towards creationism. Deborah Kelemen has also written that children are naturally teleologists, assigning a purpose to everything they come across.

Developmental psychology

Many have looked at stage models, like those of Jean Piaget and Lawrence Kohlberg, to explain how children develop ideas about God and religion in general.

Epithelium

From Wikipedia, the free encyclopedia https://en.wikipedia.org/wiki/Epitheliu...