Wednesday, November 18, 2020

Survival of the fittest

From Wikipedia, the free encyclopedia

Herbert Spencer coined the phrase "survival of the fittest".

"Survival of the fittest" is a phrase that originated from Darwinian evolutionary theory as a way of describing the mechanism of natural selection. The biological concept of fitness is defined as reproductive success. In Darwinian terms the phrase is best understood as "Survival of the form that will leave the most copies of itself in successive generations."

Herbert Spencer first used the phrase, after reading Charles Darwin's On the Origin of Species, in his Principles of Biology (1864), in which he drew parallels between his own economic theories and Darwin's biological ones: "This survival of the fittest, which I have here sought to express in mechanical terms, is that which Mr. Darwin has called 'natural selection', or the preservation of favoured races in the struggle for life."

Darwin responded positively to Alfred Russel Wallace's suggestion of using Spencer's new phrase "survival of the fittest" as an alternative to "natural selection", and adopted the phrase in The Variation of Animals and Plants under Domestication published in 1868. In On the Origin of Species, he introduced the phrase in the fifth edition published in 1869, intending it to mean "better designed for an immediate, local environment".

History of the phrase

By his own account, Herbert Spencer described a concept similar to "survival of the fittest" in his 1852 "A Theory of Population". He first used the phrase – after reading Charles Darwin's On the Origin of Species – in his Principles of Biology of 1864 in which he drew parallels between his economic theories and Darwin's biological, evolutionary ones, writing, "This survival of the fittest, which I have here sought to express in mechanical terms, is that which Mr. Darwin has called 'natural selection', or the preservation of favored races in the struggle for life."

In July 1866 Alfred Russel Wallace wrote to Darwin about readers thinking that the phrase "natural selection" personified nature as "selecting", and said this misconception could be avoided "by adopting Spencer's term" Survival of the fittest. Darwin promptly replied that Wallace's letter was "as clear as daylight. I fully agree with all that you say on the advantages of H. Spencer's excellent expression of 'the survival of the fittest'. This however had not occurred to me till reading your letter. It is, however, a great objection to this term that it cannot be used as a substantive governing a verb". Had he received the letter two months earlier, he would have worked the phrase into the fourth edition of the Origin which was then being printed, and he would use it in his "next book on Domestic Animals etc.".

Darwin wrote on page 6 of The Variation of Animals and Plants under Domestication published in 1868, "This preservation, during the battle for life, of varieties which possess any advantage in structure, constitution, or instinct, I have called Natural Selection; and Mr. Herbert Spencer has well expressed the same idea by the Survival of the Fittest. The term "natural selection" is in some respects a bad one, as it seems to imply conscious choice; but this will be disregarded after a little familiarity". He defended his analogy as similar to language used in chemistry, and to astronomers depicting the "attraction of gravity as ruling the movements of the planets", or the way in which "agriculturists speak of man making domestic races by his power of selection". He had "often personified the word Nature; for I have found it difficult to avoid this ambiguity; but I mean by nature only the aggregate action and product of many natural laws,—and by laws only the ascertained sequence of events."

In the first four editions of On the Origin of Species, Darwin had used the phrase "natural selection". In Chapter 4 of the 5th edition of The Origin published in 1869, Darwin again presents the terms as synonymous: "Natural Selection, or the Survival of the Fittest". By "fittest" Darwin meant "better adapted for the immediate, local environment", not the common modern meaning of "in the best physical shape" (think of a puzzle piece, not an athlete). In the introduction he gave full credit to Spencer, writing "I have called this principle, by which each slight variation, if useful, is preserved, by the term Natural Selection, in order to mark its relation to man's power of selection. But the expression often used by Mr. Herbert Spencer of the Survival of the Fittest is more accurate, and is sometimes equally convenient."

In The Man Versus The State, Spencer used the phrase in a postscript to offer a plausible explanation of why his theories would not be adopted by "societies of militant type". He used the term in the context of societies at war, and the form of his reference suggests that he was applying a general principle.

"Thus by survival of the fittest, the militant type of society becomes characterized by profound confidence in the governing power, joined with a loyalty causing submission to it in all matters whatever".

Though Spencer's conception of organic evolution is commonly interpreted as a form of Lamarckism, Herbert Spencer is sometimes credited with inaugurating Social Darwinism. The phrase "survival of the fittest" has become widely used in popular literature as a catchphrase for any topic related or analogous to evolution and natural selection. It has thus been applied to principles of unrestrained competition, and it has been used extensively by both proponents and opponents of Social Darwinism.

Evolutionary biologists criticise the manner in which the term is used by non-scientists and the connotations that have grown around the term in popular culture. The phrase also does little to convey the complex nature of natural selection, so modern biologists prefer and almost exclusively use the term natural selection. The biological concept of fitness refers to reproductive success, as opposed to survival, and is not explicit about the specific ways in which organisms can be more "fit", that is, increase reproductive success by having phenotypic characteristics that enhance survival and reproduction (which was the meaning that Spencer had in mind).

Critiquing the phrase

While the phrase "survival of the fittest" is often used to mean "natural selection", it is avoided by modern biologists, because the phrase can be misleading. For example, survival is only one aspect of selection, and not always the most important. Another problem is that the word "fit" is frequently confused with a state of physical fitness. In the evolutionary meaning "fitness" is the rate of reproductive output among a class of genetic variants.

Interpreted as expressing a biological theory

The phrase can also be interpreted to express a theory or hypothesis: that "fit" as opposed to "unfit" individuals or species, in some sense of "fit", will survive some test. Extending the phrase to individuals, however, is a conceptual mistake: it refers to the transgenerational survival of heritable attributes, and particular individuals are quite irrelevant. This becomes clearer in the case of viral quasispecies and "survival of the flattest", where "survival" makes no reference to the question of being alive at all, but rather to the functional capacity of proteins to carry out work.

Interpretations of the phrase as expressing a theory are in danger of being tautological, meaning roughly "those with a propensity to survive have a propensity to survive"; to have content the theory must use a concept of fitness that is independent of that of survival.

Interpreted as a theory of species survival, the theory that the fittest species survive is undermined by evidence that while direct competition is observed between individuals, populations and species, there is little evidence that competition has been the driving force in the evolution of large groups such as, for example, amphibians, reptiles, and mammals. Instead, these groups have evolved by expanding into empty ecological niches. In the punctuated equilibrium model of environmental and biological change, the factor determining survival is often not superiority over another in competition but the ability to survive dramatic changes in environmental conditions, such as after a meteor impact energetic enough to greatly change the environment globally. The main land-dwelling animals to survive the K-Pg impact 66 million years ago had the ability to live in tunnels, for example.

In 2010 Sahney et al. argued that there is little evidence that intrinsic, biological factors such as competition have been the driving force in the evolution of large groups. Instead, they cited extrinsic, abiotic factors such as expansion as the driving factor on a large evolutionary scale. The rise of dominant groups such as amphibians, reptiles, mammals and birds occurred by opportunistic expansion into empty ecological niches and the extinction of groups happened due to large shifts in the abiotic environment.

Interpreted as expressing a moral theory

Social Darwinists

It has been claimed that "the survival of the fittest" theory in biology was interpreted by late 19th century capitalists as "an ethical precept that sanctioned cut-throat economic competition" and led to the advent of the theory of "social Darwinism" which was used to justify laissez-faire economics, war and racism. However, these ideas pre-date and commonly contradict Darwin's ideas, and indeed their proponents rarely invoked Darwin in support. The term "social Darwinism" referring to capitalist ideologies was introduced as a term of abuse by Richard Hofstadter's Social Darwinism in American Thought published in 1944.

Anarchists

Russian anarchist Peter Kropotkin viewed the concept of "survival of the fittest" as supporting co-operation rather than competition. In his book Mutual Aid: A Factor of Evolution he set out his analysis leading to the conclusion that the fittest was not necessarily the best at competing individually, but often the community made up of those best at working together. He concluded that

In the animal world we have seen that the vast majority of species live in societies, and that they find in association the best arms for the struggle for life: understood in its wide Darwinian sense – not as a struggle for the sheer means of existence, but as a struggle against all natural conditions unfavourable to the species. The animal species, in which individual struggle has been reduced to its narrowest limits, and the practice of mutual aid has attained the greatest development, are invariably the most numerous, the most prosperous, and the most open to further progress.

Applying this concept to human society, Kropotkin presented mutual aid as one of the dominant factors of evolution, the other being self-assertion, and concluded that

In the practice of mutual aid, which we can retrace to the earliest beginnings of evolution, we thus find the positive and undoubted origin of our ethical conceptions; and we can affirm that in the ethical progress of man, mutual support – not mutual struggle – has had the leading part. In its wide extension, even at the present time, we also see the best guarantee of a still loftier evolution of our race.

Tautology

"Survival of the fittest" is sometimes claimed to be a tautology. The reasoning is that if one takes the term "fit" to mean "endowed with phenotypic characteristics which improve chances of survival and reproduction" (which is roughly how Spencer understood it), then "survival of the fittest" can simply be rewritten as "survival of those who are better equipped for surviving". Furthermore, the expression does become a tautology if one uses the most widely accepted definition of "fitness" in modern biology, namely reproductive success itself (rather than any set of characters conducive to this reproductive success). This reasoning is sometimes used to claim that Darwin's entire theory of evolution by natural selection is fundamentally tautological, and therefore devoid of any explanatory power.

However, the expression "survival of the fittest" (taken on its own and out of context) gives a very incomplete account of the mechanism of natural selection. The reason is that it does not mention a key requirement for natural selection, namely the requirement of heritability. It is true that the phrase "survival of the fittest", in and by itself, is a tautology if fitness is defined by survival and reproduction. Natural selection is the portion of variation in reproductive success that is caused by heritable characters.

If certain heritable characters increase or decrease the chances of survival and reproduction of their bearers, then it follows mechanically (by definition of "heritable") that those characters that improve survival and reproduction will increase in frequency over generations. This is precisely what is called "evolution by natural selection". On the other hand, if the characters which lead to differential reproductive success are not heritable, then no meaningful evolution will occur, "survival of the fittest" or not: if improvement in reproductive success is caused by traits that are not heritable, then there is no reason why these traits should increase in frequency over generations. In other words, natural selection does not simply state that "survivors survive" or "reproducers reproduce"; rather, it states that "survivors survive, reproduce and therefore propagate any heritable characters which have affected their survival and reproductive success". This statement is not tautological: it hinges on the testable hypothesis that such fitness-impacting heritable variations actually exist (a hypothesis that has been amply confirmed).
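This mechanism lends itself to a worked example. Below is a minimal simulation sketch (Python; the population size, starting frequency, and fitness advantage are arbitrary illustrative assumptions): a heritable variant whose bearers have higher reproductive success climbs in frequency over generations, while a trait with the same reproductive advantage but no heritability goes nowhere.

```python
import random

def simulate(generations=50, pop_size=1000, advantage=1.5, heritable=True):
    """Track the frequency of a variant whose bearers leave more offspring."""
    # 10% of the founding population carries the variant.
    pop = [random.random() < 0.1 for _ in range(pop_size)]
    for _ in range(generations):
        # Reproductive success (fitness) is higher for variant bearers.
        weights = [advantage if carrier else 1.0 for carrier in pop]
        pop = random.choices(pop, weights=weights, k=pop_size)
        if not heritable:
            # Non-heritable trait: offspring are re-drawn at the base rate,
            # so differential reproduction cannot accumulate.
            pop = [random.random() < 0.1 for _ in range(pop_size)]
    return sum(pop) / pop_size

print("heritable variant:    ", simulate(heritable=True))   # approaches 1.0
print("non-heritable variant:", simulate(heritable=False))  # stays near 0.1
```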

Momme von Sydow suggested further definitions of 'survival of the fittest' that may yield a testable meaning in biology and also in other areas where Darwinian processes have been influential. However, much care would be needed to disentangle tautological from testable aspects. Moreover, an "implicit shifting between a testable and an untestable interpretation can be an illicit tactic to immunize natural selection ... while conveying the impression that one is concerned with testable hypotheses".

Skeptics Society founder and Skeptic magazine publisher Michael Shermer addresses the tautology problem in his 1997 book, Why People Believe Weird Things, in which he points out that although tautologies are sometimes the beginning of science, they are never the end, and that scientific principles like natural selection are testable and falsifiable by virtue of their predictive power. Shermer points out, as an example, that population genetics accurately demonstrates when natural selection will and will not effect change on a population. Shermer hypothesizes that if hominid fossils were found in the same geological strata as trilobites, it would be evidence against natural selection.

The United Nations and the Origins of "The Great Reset"


About twenty-four hundred years ago, the Greek philosopher Plato came up with the idea of constructing the state and society according to an elaborate plan. Plato wanted "wise men" (philosophers) at the helm of the government, but he also made it clear that his kind of state would need a transformation of human beings. In modern times, the promoters of the omnipotent state want to substitute the expert for Plato's philosopher and create the new man through eugenics, which is now called transhumanism. The United Nations and its various suborganizations play a pivotal role in this project, which has reached its present stage in Agenda 2030 and the Great Reset.

The Strife for a World Government

The Great Reset did not come from nowhere. The first modern attempt to create a global institution with a governmental function was launched by the government of Woodrow Wilson, who served as US president from 1913 to 1921. Under the inspiration of Colonel Mandell House, the president's prime advisor and best friend, Wilson wanted to establish a world forum for the period after World War I. Yet the plan of American participation in the League of Nations failed, and the drive toward internationalism and establishing a new world order receded during the Roaring Twenties.

A new move toward managing society like an organization, however, came during the Great Depression. Franklin Delano Roosevelt did not let the crisis go by without driving the agenda forward with his "New Deal." FDR was especially interested in the special executive privileges that came with the Second World War. Resistance was almost nil when he moved forward to lay the groundwork for a new League of Nations, which was now to be named the United Nations.

Under the leadership of Stalin, Churchill, and Roosevelt, twenty-six nations agreed in January 1942 to the initiative of establishing a United Nations Organization (UNO), which came into existence on October 24, 1945. Since its inception, the United Nations and its branches, such as the World Bank Group and the World Health Organization (WHO), have prepared the countries of the world to comply with the goals that were announced at its foundation.

Yet the unctuous pronouncements of promoting "international peace and security," "developing friendly relations among nations," and working for "social progress, better living standards, and human rights" hide the agenda of establishing a world government with executive powers whose task would not be promoting liberty and free markets but greater interventionism and control through cultural and scientific organizations. This became clear with the creation of the United Nations Educational, Scientific and Cultural Organization (UNESCO) in 1945.

Eugenics

After the foundation of UNESCO in 1945, the English evolutionary biologist, eugenicist, and declared globalist Julian Huxley (the brother of Aldous Huxley, author of Brave New World) became its first director.

At the launch of the organization, Huxley called for a "scientific world humanism, global in extent" (p. 8) and for manipulating human evolution to a "desirable" end. Referring to dialectical materialism as "the first radical attempt at an evolutionary philosophy" (p. 11), the director of UNESCO lamented that the Marxist approach to changing society was bound to fail because of its lack of an indispensable "biological component."

With these ideas, Julian Huxley was in respectable company. Since the late nineteenth century, the call for the genetic betterment of the human race through eugenics had been gaining many prominent followers. John Maynard Keynes, for example, held the promotion of eugenics and population control to be one of the most important social questions and a crucial area of research.

Keynes was not alone. The list of advocates of breeding the human race for its own betterment is quite large and impressive. These “illiberal reformers” include, among many other well-known names, the writers H.G. Wells and G.B. Shaw, US president Theodore Roosevelt, and British prime minister Winston Churchill as well as the economist Irving Fisher and the family-planning pioneers Margaret Sanger and Bill Gates Sr., the father of Bill Gates, Microsoft cofounder and head of the Bill and Melinda Gates Foundation.

In his discourse at the foundation of UNESCO, Julian Huxley was quite specific about the goals and methods of this institution. To achieve the desired "evolutionary progress" of mankind, the first step must be to stress "the ultimate need for world political unity and familiarize all peoples with the implications of the transfer of full sovereignty from separate nations to a world organization."

Furthermore, the institution must consider the tradeoff between the "importance of quality as against quantity" (p. 14), which means it must take into account that there is "an optimum range of size for every human organization as for every type of organism" (p. 15). The educational, scientific, and cultural organization of the UN should give special attention to "unity-in-variety of the world's art and culture as well as the promotion of one single pool of scientific knowledge" (p. 17).

Huxley makes it clear that human diversity is not for all. Variety for “weaklings, fools, and moral deficients…cannot but be bad,” and because a “considerable percentage of the population is not capable of profiting from higher education” and also a “considerable percentage of young men” suffer from “physical weakness or mental instability” and “these grounds are often genetic in origin” (p. 20), these groups must be excluded from the efforts of advancing human progress.

In his discourse, Huxley diagnosed that at the time of his writing the “indirect effect of civilization” is rather “dysgenic instead of eugenic” and that “in any case, it seems likely that the deadweight of genetic stupidity, physical weakness, mental instability, and disease-proneness, which already exist in the human species, will prove too great a burden for real progress to be achieved” (p. 21). After all, it is “essential that eugenics should be brought entirely within the borders of science, for as already indicated, in the not very remote future the problem of improving the average quality of human beings is likely to become urgent; and this can only be accomplished by applying the findings of a truly scientific eugenics” (pp. 37–38).

Use of the Climate Threat

The next decisive step toward the global economic transformation was taken with the first report of the Club of Rome. In 1968, the Club of Rome was initiated at the Rockefeller estate Bellagio in Italy. Its first report was published in 1972 under the title “The Limits to Growth.” 

The president emeritus of the Club of Rome, Alexander King, and the secretary of the club, General Bertrand Schneider, relate in their Report of the Council of the Club of Rome that when the members of the club were searching for a new enemy to identify, they listed pollution, global warming, water shortages, and famines as the most opportune items to be blamed on humanity, with the implication that humanity itself must be reduced to keep these threats in check.

Since the 1990s, several comprehensive initiatives toward a global system of control have been undertaken by the United Nations with Agenda 21 and Agenda 2030. The 2030 Agenda was adopted by all United Nations member states in 2015. It launched its blueprint for global change with the call to achieve seventeen sustainable development goals (SDGs). The key concept is "sustainable development," which includes population control as a crucial instrument.

Saving the earth has become the slogan of green policy warriors. Since the 1970s, the horror scenario of global warming has been a useful tool in their hands to gain political influence and finally rule over public discourse. In the meantime, these anticapitalist groups have obtained a dominant influence in the media and the educational and judicial systems, and have become major players in the political arena.

In many countries, particularly in Europe, the so-called green parties have become a pivotal factor in the political system. Many of the representatives are quite open in their demands to make society and the economy compatible with high ecological standards that require a profound reset of the present system. 

In 1945, Huxley (p. 21) noted that it is too early to propose outright a eugenic depopulation program but advised that it will be important for the organization “to see that the eugenic problem is examined with the greatest care, and that the public mind is informed of the issues at stake so that much that now is unthinkable may at least become thinkable.”

Huxley’s caution is no longer necessary. In the meantime, the branches of the United Nations have gained such a level of power that even originally minor UN suborganizations such as the World Health Organization (WHO) have been enabled to command individual governments around the world to obey their orders. The WHO and the International Monetary Fund (IMF)—whose conditionality for loans has changed from fiscal restraint to the degree to which a country follows the rules set by the WHO—have become the supreme tandem to work toward establishing the new world order.

As Julian Huxley pointed out in his discourse in 1945, it is the task of the United Nations to do away with economic freedom, because “laisser-faire and capitalist economic systems” have “created a great deal of ugliness” (p. 38). The time has come to work toward the emergence “of a single world culture” (p. 61). This must be done with the explicit help of the mass media and the educational systems.

Conclusion

With the foundation of the United Nations and its suborganizations, the drive to advance the programs of eugenics and transhumanism took a big step forward. Together with the activities of the Club of Rome, they set the stage for the great reset that is going on currently. With the pronouncement of a pandemic, the goal of comprehensive government control of the economy and society has taken another leap forward. Freedom faces a new enemy. The tyranny comes under the disguise of expert rule and benevolent dictatorship. The new rulers do not justify their right to dominance by divine providence but now claim the right to rule the people in the name of universal health and safety based on presumed scientific evidence.

Author:

Antony P. Mueller

Dr. Antony P. Mueller is a German professor of economics who currently teaches in Brazil.

Emission theory

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Emission_theory

Emission theory, also called emitter theory or ballistic theory of light, was a competing theory to the special theory of relativity, explaining the results of the Michelson–Morley experiment of 1887. Emission theories obey the principle of relativity by having no preferred frame for light transmission, but say that light is emitted at speed "c" relative to its source instead of applying the invariance postulate. Thus, emitter theory combines electrodynamics and mechanics with a simple Newtonian theory. Although there are still proponents of this theory outside the scientific mainstream, it is considered to be conclusively discredited by most scientists.

History

The name most often associated with emission theory is Isaac Newton. In his corpuscular theory Newton visualized light "corpuscles" being thrown off from hot bodies at a nominal speed of c with respect to the emitting object and obeying the usual laws of Newtonian mechanics; we then expect light to be moving towards us with a speed that is offset by the speed of the distant emitter (c ± v).

In the 20th century, special relativity was created by Albert Einstein to solve the apparent conflict between electrodynamics and the principle of relativity. The theory's geometrical simplicity was persuasive, and the majority of scientists accepted relativity by 1911. However, a few scientists rejected the second basic postulate of relativity: the constancy of the speed of light in all inertial frames. So different types of emission theories were proposed where the speed of light depends on the velocity of the source, and the Galilean transformation is used instead of the Lorentz transformation. All of them can explain the negative outcome of the Michelson–Morley experiment, since the speed of light is constant with respect to the interferometer in all frames of reference. Some of those theories were:

  • Light retains throughout its whole path the component of velocity which it obtained from its original moving source, and after reflection light spreads out in spherical form around a center which moves with the same velocity as the original source. (Proposed by Walter Ritz in 1908). This model was considered to be the most complete emission theory. (Actually, Ritz was modeling Maxwell–Lorentz electrodynamics. In a later paper Ritz said that the emission particles in his theory should suffer interactions with charges along their path and thus waves (produced by them) would not retain their original emission velocities indefinitely.)
  • The excited portion of a reflecting mirror acts as a new source of light and the reflected light has the same velocity c with respect to the mirror as has original light with respect to its source. (Proposed by Richard Chase Tolman in 1910, although he was a supporter of special relativity).
  • Light reflected from a mirror acquires a component of velocity equal to the velocity of the mirror image of the original source (Proposed by Oscar M. Stewart in 1911).
  • A modification of the Ritz–Tolman theory was introduced by J. G. Fox (1965). He argued that the extinction theorem (i.e., the regeneration of light within the traversed medium) must be considered. In air, the extinction distance would be only 0.2 cm, that is, after traversing this distance the speed of light would be constant with respect to the medium, not to the initial light source. (Fox himself was, however, a supporter of special relativity.)

Albert Einstein is supposed to have worked on his own emission theory before abandoning it in favor of his special theory of relativity. Many years later R.S. Shankland reported Einstein as saying that Ritz's theory had been "very bad" in places and that he himself had eventually discarded emission theory because he could think of no form of differential equations that described it, since it leads to the waves of light becoming "all mixed up".

Refutations of emission theory

The following scheme was introduced by de Sitter to test emission theories:

c′ = c + kv,

where c is the speed of light, v that of the source, c′ the resultant speed of light, and k a constant denoting the extent of source dependence, which can attain values between 0 and 1. According to special relativity and the stationary aether, k = 0, while emission theories allow values up to 1. Numerous terrestrial experiments have been performed, over very short distances, where no "light dragging" or extinction effects could come into play, and again the results confirm that light speed is independent of the speed of the source, conclusively ruling out emission theories.

Astronomical sources

Willem de Sitter's argument against emission theory. According to simple emission theory, light moves at a speed of c with respect to the emitting object. If this were true, light emitted from a star in a double-star system from different parts of the orbital path would travel towards us at different speeds. For certain combinations of orbital speed, distance, and inclination, the "fast" light given off during approach would overtake "slow" light emitted during a recessional part of the star's orbit. Many bizarre effects would be seen, including (a) as illustrated, unusually shaped variable star light curves such as have never been seen, (b) extreme Doppler red- and blue-shifts in phase with the light curves, implying highly non-Keplerian orbits, and (c) splitting of the spectral lines (note simultaneous arrival of blue- and red-shifted light at the target).

In 1910 Daniel Frost Comstock and in 1913 Willem de Sitter wrote that for the case of a double-star system seen edge-on, light from the approaching star might be expected to travel faster than light from its receding companion, and overtake it. If the distance was great enough for an approaching star's "fast" signal to catch up with and overtake the "slow" light that it had emitted earlier when it was receding, then the image of the star system should appear completely scrambled. De Sitter argued that none of the star systems he had studied showed the extreme optical effect behavior, and this was considered the death knell for Ritzian theory and emission theory in general, with k < 2×10⁻³.
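A rough numerical sketch of the overtaking condition (Python; the orbital speed, period, and distance below are hypothetical but representative values for a spectroscopic binary, not taken from de Sitter's data): under a naive emission theory with k = 1, light emitted while the star approaches travels at c + v, and light emitted half an orbit earlier, while it receded, travels at c − v.

```python
C = 299_792_458.0    # speed of light, m/s
LY = 9.4607e15       # metres per light-year

def images_scramble(distance_ly, orbital_speed, period_days):
    """Naive emission theory (k = 1): does light emitted while the star
    approaches (at c + v) overtake light emitted half an orbit earlier
    while it receded (at c - v)?"""
    d = distance_ly * LY
    half_period = period_days * 86400.0 / 2.0
    t_slow = d / (C - orbital_speed)                # emitted at t = 0, receding
    t_fast = half_period + d / (C + orbital_speed)  # emitted at t = P/2, approaching
    return t_fast < t_slow

# Hypothetical binary: 100 km/s orbital speed, 10-day period, 1000 light-years away.
print(images_scramble(1000, 1.0e5, 10.0))  # True: the images would arrive scrambled
```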

The effect of extinction on de Sitter's experiment has been considered in detail by Fox, and it arguably undermines the cogency of de Sitter type evidence based on binary stars. However, similar observations have been made more recently in the x-ray spectrum by Brecher (1977), with a long enough extinction distance that it should not affect the results. The observations confirm that the speed of light is independent of the speed of the source, with k < 2×10⁻⁹.

Hans Thirring argued in 1926 that an atom accelerated by thermal collisions in the sun during the emission process emits light rays having different velocities at their start- and endpoints. Thus one end of the light ray would overtake the preceding parts, and consequently the distance between the ends would be elongated by up to 500 km by the time they reach Earth, so that the mere existence of sharp spectral lines in the sun's radiation disproves the ballistic model.
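As a rough check of Thirring's figure (a back-of-the-envelope estimate, not from the source): sunlight takes about d/c ≈ 500 s to reach Earth, so if a collision changes the emitting atom's velocity by roughly 1 km/s between the start and the end of the emission, a ballistic model stretches the ray by about 1 km/s × 500 s ≈ 500 km.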

Terrestrial sources

Such experiments include that of Sadeh (1963), who used a time-of-flight technique to measure velocity differences of photons traveling in opposite directions, which were produced by positron annihilation. Another experiment was conducted by Alväger et al. (1963), who compared the time of flight of gamma rays from moving and resting sources. Both experiments found no difference, in accordance with relativity.

Filippas and Fox (1964) did not consider Sadeh (1963) and Alväger (1963) to have sufficiently controlled for the effects of extinction. So they conducted an experiment using a setup specifically designed to account for extinction. Data collected from various detector-target distances were consistent with there being no dependence of the speed of light on the velocity of the source, and were inconsistent with modeled behavior assuming c ± v both with and without extinction.

Continuing their previous investigations, Alväger et al. (1964) observed π0-mesons which decay into photons at 99.9% light speed. The experiment showed that the photons didn't attain the velocity of their sources and still traveled at the speed of light, with k = (−3±13)×10⁻⁵. The investigation of the media which were crossed by the photons showed that the extinction shift was not sufficient to distort the result significantly.

Also measurements of neutrino speed have been conducted. Mesons travelling nearly at light speed were used as sources. Since neutrinos only participate in the electroweak interaction, extinction plays no role. Terrestrial measurements provided upper limits of k ≤ 10⁻⁶.

Interferometry

The Sagnac effect demonstrates that one beam on a rotating platform covers less distance than the other beam, which creates the shift in the interference pattern. Georges Sagnac's original experiment has been shown to suffer extinction effects, but since then, the Sagnac effect has also been shown to occur in vacuum, where extinction plays no role.

The predictions of Ritz's version of emission theory were consistent with almost all terrestrial interferometric tests save those involving the propagation of light in moving media, and Ritz did not consider the difficulties presented by tests such as the Fizeau experiment to be insurmountable. Tolman, however, noted that a Michelson–Morley experiment using an extraterrestrial light source could provide a decisive test of the Ritz hypothesis. In 1924, Rudolf Tomaschek performed a modified Michelson–Morley experiment using starlight, while Dayton Miller used sunlight. Both experiments were inconsistent with the Ritz hypothesis.

Babcock and Bergman (1964) placed rotating glass plates between the mirrors of a common-path interferometer set up in a static Sagnac configuration. If the glass plates behave as new sources of light so that the total speed of light emerging from their surfaces is c + v, a shift in the interference pattern would be expected. However, no such effect was seen, which again confirms special relativity and again demonstrates the source independence of light speed. This experiment was executed in vacuum, thus extinction effects should play no role.

Albert Abraham Michelson (1913) and Quirino Majorana (1918/9) conducted interferometer experiments with resting sources and moving mirrors (and vice versa), and showed that there is no source dependence of light speed in air. Michelson's arrangement was designed to distinguish between three possible interactions of moving mirrors with light: (1) "the light corpuscles are reflected as projectiles from an elastic wall", (2) "the mirror surface acts as a new source", (3) "the velocity of light is independent of the velocity of the source". His results were consistent with source independence of light speed. Majorana analyzed the light from moving sources and mirrors using an unequal arm Michelson interferometer that was extremely sensitive to wavelength changes. Emission theory asserts that Doppler shifting of light from a moving source represents a frequency shift with no shift in wavelength. Instead, Majorana detected wavelength changes inconsistent with emission theory.

Beckmann and Mandics (1965) repeated the Michelson (1913) and Majorana (1918) moving mirror experiments in high vacuum, finding k to be less than 0.09. Although the vacuum employed was insufficient to definitively rule out extinction as the reason for their negative results, it was sufficient to make extinction highly unlikely. Light from the moving mirror passed through a Lloyd interferometer, part of the beam traveling a direct path to the photographic film, part reflecting off the Lloyd mirror. The experiment compared the speed of light hypothetically traveling at c + v from the moving mirrors, versus reflected light hypothetically traveling at c from the Lloyd mirror.

Other refutations

Emission theories use the Galilean transformation, according to which time coordinates are invariant when changing frames ("absolute time"). Thus the Ives–Stilwell experiment, which confirms relativistic time dilation, also refutes the emission theory of light. As shown by Howard Percy Robertson, the complete Lorentz transformation can be derived when the Ives–Stilwell experiment is considered together with the Michelson–Morley experiment and the Kennedy–Thorndike experiment.

Furthermore, quantum electrodynamics places the propagation of light in an entirely different, but still relativistic, context, which is completely incompatible with any theory that postulates a speed of light that is affected by the speed of the source.

Gravitational redshift

From Wikipedia, the free encyclopedia
 
The gravitational redshift of a light wave as it moves upwards against a gravitational field (produced by the yellow star below). The effect is greatly exaggerated in this diagram.

In Einstein's general theory of relativity, the gravitational redshift is the phenomenon that clocks deeper in a gravitational well tick slower when observed from outside the well. More specifically the term refers to the shift of wavelength of a photon to longer wavelength (the red side in an optical spectrum) when observed from a point at a higher gravitational potential. In the latter case the 'clock' is the frequency of the photon and a lower frequency is the same as a longer ("redder") wavelength.

The gravitational redshift is a simple consequence of Einstein's equivalence principle (that gravity and acceleration are equivalent) and was found by Einstein eight years before the full theory of general relativity.

Observing the gravitational redshift in the solar system is one of the classical tests of general relativity. Gravitational redshifts are an important effect in satellite-based navigation systems such as GPS. If the effects of general relativity were not taken into account, such systems would not work at all.

Prediction by the equivalence principle and general relativity

Einstein's theory of general relativity incorporates the equivalence principle, which can be stated in various different ways. One such statement is that gravitational effects are locally undetectable for a free-falling observer. Therefore, in a laboratory experiment at the surface of the earth, all gravitational effects should be equivalent to the effects that would have been observed if the laboratory had been accelerating through outer space at g. One consequence is a gravitational Doppler effect. If a light pulse is emitted at the floor of the laboratory, then a free-falling observer says that by the time it reaches the ceiling, the ceiling has accelerated away from it, and therefore when observed by a detector fixed to the ceiling, it will be observed to have been Doppler shifted toward the red end of the spectrum. This shift, which the free-falling observer considers to be a kinematical Doppler shift, is thought of by the laboratory observer as a gravitational redshift. Such an effect was verified in the 1959 Pound–Rebka experiment. In a case such as this, where the gravitational field is uniform, the change in wavelength is given by

Δλ/λ = g Δh/c²,

where Δh is the change in height. Since this prediction arises directly from the equivalence principle, it does not require any of the mathematical apparatus of general relativity, and its verification does not specifically support general relativity over any other theory that incorporates the equivalence principle.
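As a quick numerical check of this formula (a sketch; 22.5 m is the tower height of the Pound–Rebka experiment described below):

```python
g = 9.81           # gravitational acceleration at Earth's surface, m/s^2
c = 299_792_458.0  # speed of light, m/s
dh = 22.5          # vertical height used by Pound and Rebka, m

# Uniform-field gravitational redshift: delta_lambda / lambda = g * dh / c^2
fractional_shift = g * dh / c**2
print(f"{fractional_shift:.2e}")  # ~2.5e-15
```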

When the field is not uniform, the simplest and most useful case to consider is that of a spherically symmetric field. By Birkhoff's theorem, such a field is described in general relativity by the Schwarzschild metric, dτ² = (1 − r_s/R) dt² + …, where dτ is the clock time of an observer at distance R from the center, dt is the time measured by an observer at infinity, r_s is the Schwarzschild radius 2GM/c², "…" represents terms that vanish if the observer is at rest, G is Newton's gravitational constant, M the mass of the gravitating body, and c the speed of light. The result is that frequencies and wavelengths are shifted according to the ratio

λ∞/λe = 1/√(1 − r_s/Re)

where

  • λ∞ is the wavelength of the light as measured by the observer at infinity,
  • λe is the wavelength measured at the source of emission, and
  • Re is the radius at which the photon is emitted.

This can be related to the redshift parameter conventionally defined as z = λ∞/λe − 1. In the case where neither the emitter nor the observer is at infinity, the transitivity of Doppler shifts allows us to generalize the result: a photon emitted at Re and observed at Ro has λo/λe = √((1 − r_s/Ro)/(1 − r_s/Re)). The corresponding formula for the frequency ν = c/λ is νo/νe = λe/λo. When r_s/R is small, these results are consistent with the equation given above based on the equivalence principle.

For an object compact enough to have an event horizon, the redshift is not defined for photons emitted inside the Schwarzschild radius, both because signals cannot escape from inside the horizon and because an object such as the emitter cannot be stationary inside the horizon, as was assumed above. Therefore, this formula only applies when Re is larger than r_s. When the photon is emitted at a distance equal to the Schwarzschild radius, the redshift will be infinitely large, and the photon will not escape to any finite distance from the Schwarzschild sphere. When the photon is emitted at an infinitely large distance, there is no redshift.

In the Newtonian limit, i.e. when R is sufficiently large compared to the Schwarzschild radius r_s, the redshift can be approximated as

z ≈ (1/2) r_s/R = GM/(Rc²).
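A minimal sketch comparing the exact formula with its Newtonian limit (Python; the white-dwarf mass and radius are illustrative assumptions chosen to be roughly Sirius-B-like, not measured values):

```python
import math

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
C = 299_792_458.0  # speed of light, m/s
M_SUN = 1.989e30   # solar mass, kg

def redshift_exact(mass, radius):
    """z = 1/sqrt(1 - r_s/R) - 1 for emission at radius R > r_s,
    observed at infinity."""
    r_s = 2 * G * mass / C**2
    return 1.0 / math.sqrt(1.0 - r_s / radius) - 1.0

def redshift_newtonian(mass, radius):
    """Weak-field limit: z ~ (1/2) r_s/R = GM/(R c^2)."""
    return G * mass / (radius * C**2)

# Illustrative white dwarf: one solar mass packed into a 5800 km radius.
m, r = 1.0 * M_SUN, 5.8e6
print(redshift_exact(m, r) * C / 1000, "km/s (exact, expressed as a velocity)")
print(redshift_newtonian(m, r) * C / 1000, "km/s (Newtonian approximation)")
```

For ordinary stars and even white dwarfs the two formulas are nearly indistinguishable; the difference only becomes appreciable as R approaches r_s.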

Experimental verification

Initial observations of gravitational redshift of white dwarf stars

A number of experimenters initially claimed to have identified the effect using astronomical measurements, and the effect was considered to have been finally identified in the spectral lines of the star Sirius B by W.S. Adams in 1925. However, measurements by Adams have been criticized as being too low and these observations are now considered to be measurements of spectra that are unusable because of scattered light from the primary, Sirius A. The first accurate measurement of the gravitational redshift of a white dwarf was done by Popper in 1954, measuring a 21 km/s gravitational redshift of 40 Eridani B.

The redshift of Sirius B was finally measured by Greenstein et al. in 1971, obtaining a value of 89±19 km/s for the gravitational redshift, with more accurate measurements by the Hubble Space Telescope showing 80.4±4.8 km/s.

Terrestrial tests

The effect is now considered to have been definitively verified by the experiments of Pound, Rebka and Snider between 1959 and 1965. The Pound–Rebka experiment of 1959 measured the gravitational redshift in spectral lines using a terrestrial 57Fe gamma source over a vertical height of 22.5 metres. This paper was the first determination of the gravitational redshift which used measurements of the change in wavelength of gamma-ray photons generated with the Mössbauer effect, which generates radiation with a very narrow line width. The accuracy of the gamma-ray measurements was typically 1%.

An improved experiment was done by Pound and Snider in 1965, with an accuracy better than the 1% level.

A very accurate gravitational redshift experiment was performed in 1976, in which a hydrogen maser clock on a rocket was launched to a height of 10,000 km and its rate compared with an identical clock on the ground. It tested the gravitational redshift to 0.007%.

Later tests can be done with the Global Positioning System (GPS), which must account for the gravitational redshift in its timing system, and physicists have analyzed timing data from the GPS to confirm other tests. When the first satellite was launched, it showed the predicted shift of 38 microseconds per day. This rate of discrepancy is sufficient to substantially impair the function of GPS within hours if not accounted for. An excellent account of the role played by general relativity in the design of GPS can be found in Ashby 2003.
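The 38 microseconds per day can be reproduced with a rough sketch (Python; the orbital radius is an approximate textbook value for GPS and the orbit is treated as circular): the gravitational blueshift of the higher clock and the kinematic time dilation from its orbital speed pull in opposite directions.

```python
import math

G = 6.674e-11       # gravitational constant, m^3 kg^-1 s^-2
M = 5.972e24        # Earth's mass, kg
C = 299_792_458.0   # speed of light, m/s
R_EARTH = 6.371e6   # Earth's radius, m
R_GPS = 2.6571e7    # approximate GPS orbital radius (~26,571 km), m
SECONDS_PER_DAY = 86400.0

# Gravitational term: a clock higher in the potential runs faster.
grav = (G * M / C**2) * (1 / R_EARTH - 1 / R_GPS)
# Kinematic term: circular orbital speed v = sqrt(GM/r) slows the clock.
v = math.sqrt(G * M / R_GPS)
kinematic = 0.5 * v**2 / C**2

net_us_per_day = (grav - kinematic) * SECONDS_PER_DAY * 1e6
print(f"{net_us_per_day:.1f} microseconds/day")  # ~38
```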

Later astronomical measurements

James W. Brault, a graduate student of Robert Dicke at Princeton University, measured the gravitational redshift of the sun using optical methods in 1962.

In 2011 the group of Radek Wojtak of the Niels Bohr Institute at the University of Copenhagen collected data from 8000 galaxy clusters and found that the light coming from the cluster centers tended to be red-shifted compared to the cluster edges, confirming the energy loss due to gravity.

Early historical development of the theory

The gravitational weakening of light from high-gravity stars was predicted by John Michell in 1783 and Pierre-Simon Laplace in 1796, using Isaac Newton's concept of light corpuscles (see: emission theory); they predicted that some stars would have a gravity so strong that light would not be able to escape. The effect of gravity on light was then explored by Johann Georg von Soldner (1801), who calculated the amount of deflection of a light ray by the sun, arriving at the Newtonian answer, which is half the value predicted by general relativity. All of this early work assumed that light could slow down and fall, which is inconsistent with the modern understanding of light waves.

Once it became accepted that light was an electromagnetic wave, it was clear that the frequency of light should not change from place to place, since waves from a source with a fixed frequency keep the same frequency everywhere. One way around this conclusion would be if time itself were altered—if clocks at different points had different rates.

This was precisely Einstein's conclusion in 1911. He considered an accelerating box, and noted that according to the special theory of relativity, the clock rate at the "bottom" of the box (the side away from the direction of acceleration) was slower than the clock rate at the "top" (the side toward the direction of acceleration). Nowadays, this can be easily shown in accelerated coordinates. The metric tensor in units where the speed of light is one is:

ds² = −r² dt² + dr²,

and for an observer at a constant value of r, the rate at which a clock ticks, R(r), is the square root of the time coefficient, R(r) = r. The acceleration at position r is equal to the curvature of the hyperbola at fixed r, and like the curvature of the nested circles in polar coordinates, it is equal to 1/r.

So at a fixed value of g, the fractional rate of change of the clock-rate, the percentage change in the ticking at the top of an accelerating box vs at the bottom, is:

(dR/dr)/R = 1/r = g.

The rate is faster at larger values of r, away from the apparent direction of acceleration. The rate is zero at r = 0, which is the location of the acceleration horizon.

Using the equivalence principle, Einstein concluded that the same thing holds in any gravitational field, that the rate of clocks R at different heights was altered according to the gravitational field g. When g is slowly varying, it gives the fractional rate of change of the ticking rate. If the ticking rate is everywhere almost the same, the fractional rate of change is the same as the absolute rate of change, so that:

dR/dx = g = dΦ/dx.

Since the rate of clocks and the gravitational potential have the same derivative, they are the same up to a constant. The constant is chosen to make the clock rate at infinity equal to 1. Since the gravitational potential is zero at infinity:

R(x) = 1 + Φ(x)/c²,

where the speed of light has been restored to make the gravitational potential dimensionless.

The coefficient of dt² in the metric tensor is the square of the clock rate, which for small values of the potential is given by keeping only the linear term:

R² = 1 + 2Φ/c²,

and the full metric tensor is:

ds² = −(1 + 2Φ(x, y, z)/c²) c² dt² + dx² + dy² + dz²,

where again the factors of c have been restored. This expression is correct in the full theory of general relativity, to lowest order in the gravitational field, and ignoring the variation of the space-space and space-time components of the metric tensor, which only affect fast moving objects.
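As a small numerical sanity check on the accelerated-coordinates argument above (a sketch, not part of the original derivation): for the metric ds² = −r² dt² + dr², the clock rate is R(r) = r, and its fractional rate of change (dR/dr)/R should come out to 1/r, the local acceleration g.

```python
def clock_rate(r):
    # For ds^2 = -r^2 dt^2 + dr^2, proper time per coordinate time is r.
    return r

def fractional_rate(r, eps=1e-6):
    # Numerical (dR/dr)/R, which should equal 1/r = g.
    dR = (clock_rate(r + eps) - clock_rate(r - eps)) / (2 * eps)
    return dR / clock_rate(r)

for r in (0.5, 1.0, 2.0):
    print(r, fractional_rate(r), 1 / r)  # the last two columns agree
```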

Using this approximation, Einstein reproduced the incorrect Newtonian value for the deflection of light in 1911. But since a light beam is a fast moving object, the space-space components contribute too. After constructing the full theory of general relativity in 1916, Einstein solved for the space-space components in a post-Newtonian approximation and calculated the correct amount of light deflection – double the Newtonian value. Einstein's prediction was confirmed by many experiments, starting with Arthur Eddington's 1919 solar eclipse expedition.

The changing rates of clocks allowed Einstein to conclude that light waves change frequency as they move, and the frequency/energy relationship for photons allowed him to see that this was best interpreted as the effect of the gravitational field on the mass–energy of the photon. To calculate the changes in frequency in a nearly static gravitational field, only the time component of the metric tensor is important, and the lowest order approximation is accurate enough for ordinary stars and planets, which are much bigger than their Schwarzschild radius.

Cousin marriage in the Middle East

From Wikipedia, the free encyclopedia https://en.wikipedia.org/wiki/Cou...