
Sunday, August 3, 2014

Nuclear Fission and Fusion


Condensed From Wikipedia, the free encyclopedia
________________________________________

Nuclear fission

 

An induced fission reaction. A neutron is absorbed by a uranium-235 nucleus, turning it briefly into an excited uranium-236 nucleus, with the excitation energy provided by the kinetic energy of the neutron plus the forces that bind the neutron. The uranium-236, in turn, splits into fast-moving lighter elements (fission products) and releases three free neutrons. At the same time, one or more "prompt gamma rays" (not shown) are produced, as well.
 
In nuclear physics and nuclear chemistry, nuclear fission is either a nuclear reaction or a radioactive decay process in which the nucleus of an atom splits into smaller parts (lighter nuclei). The fission process often produces free neutrons and photons (in the form of gamma rays), and releases a very large amount of energy even by the energetic standards of radioactive decay.
 
Nuclear fission of heavy elements was discovered on December 17, 1938 by Otto Hahn and his assistant Fritz Strassmann, and explained theoretically in January 1939 by Lise Meitner and her nephew Otto Robert Frisch. Frisch named the process by analogy with biological fission of living cells. It is an exothermic reaction which can release large amounts of energy both as electromagnetic radiation and as kinetic energy of the fragments (heating the bulk material where fission takes place).
In order for fission to produce energy, the total binding energy of the resulting elements must be greater than that of the starting element.
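As a rough illustration of this criterion, the sketch below (Python) compares the total binding energy of the compound uranium-236 nucleus with that of one common fragment pair, barium-141 and krypton-92. The binding-energy-per-nucleon figures are approximate textbook values and the fragment pair is chosen only as an example; the increase in binding energy, on the order of 170 MeV, appears as kinetic energy of the fragments, and the full energy per fission, counting neutrons, gamma rays and later decays, is closer to 200 MeV.

# Illustrative check of the binding-energy criterion for fission.
# Binding energies per nucleon are approximate textbook values (MeV);
# the fragment pair Ba-141 + Kr-92 is one common outcome, used here
# only as an example.

BINDING_ENERGY_PER_NUCLEON = {
    "U-236": 7.59,   # compound nucleus formed when U-235 absorbs a neutron
    "Ba-141": 8.33,  # typical heavy fragment
    "Kr-92": 8.51,   # typical light fragment
}
MASS_NUMBERS = {"U-236": 236, "Ba-141": 141, "Kr-92": 92}

def total_binding_energy(isotope):
    """Total binding energy in MeV: (MeV per nucleon) * (number of nucleons)."""
    return BINDING_ENERGY_PER_NUCLEON[isotope] * MASS_NUMBERS[isotope]

parent = total_binding_energy("U-236")
fragments = total_binding_energy("Ba-141") + total_binding_energy("Kr-92")

# Free neutrons contribute no binding energy, so the released energy is
# roughly the gain in total binding energy of the two charged fragments.
energy_released = fragments - parent
print(f"U-236 binding energy:       {parent:7.1f} MeV")
print(f"Fragment binding energy:    {fragments:7.1f} MeV")
print(f"Approximate energy release: {energy_released:7.1f} MeV")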
 
Fission is a form of nuclear transmutation because the resulting fragments are not the same element as the original atom. The two nuclei produced are most often of comparable but slightly different sizes, typically with a mass ratio of products of about 3 to 2, for common fissile isotopes.[1][2] Most fissions are binary fissions (producing two charged fragments), but occasionally (2 to 4 times per 1000 events), three positively charged fragments are produced, in a ternary fission. The smallest of these fragments in ternary processes ranges in size from a proton to an argon nucleus.
 
Fission as encountered in the modern world is usually a deliberately produced man-made nuclear reaction induced by a neutron. It is less commonly encountered as a natural form of spontaneous radioactive decay (not requiring a neutron), occurring especially in very high-mass-number isotopes.
The unpredictable composition of the products (which vary in a broad probabilistic and somewhat chaotic manner) distinguishes fission from purely quantum-tunnelling processes such as proton emission, alpha decay and cluster decay, which give the same products each time. Nuclear fission produces energy for nuclear power and drives the explosion of nuclear weapons. Both uses are possible because certain substances called nuclear fuels undergo fission when struck by fission neutrons, and in turn emit neutrons when they break apart. This makes possible a self-sustaining nuclear chain reaction that releases energy at a controlled rate in a nuclear reactor or at a very rapid uncontrolled rate in a nuclear weapon.
 
The amount of free energy contained in nuclear fuel is millions of times the amount of free energy contained in a similar mass of chemical fuel such as gasoline, making nuclear fission a very dense source of energy. The products of nuclear fission, however, are on average far more radioactive than the heavy elements which are normally fissioned as fuel, and remain so for significant amounts of time, giving rise to a nuclear waste problem. Concerns over nuclear waste accumulation and over the destructive potential of nuclear weapons may counterbalance the desirable qualities of fission as an energy source, and give rise to ongoing political debate over nuclear power.

Mechanism


A visual representation of an induced nuclear fission event where a slow-moving neutron is absorbed by the nucleus of a uranium-235 atom, which fissions into two fast-moving lighter elements (fission products) and additional neutrons. Most of the energy released is in the form of the kinetic velocities of the fission products and the neutrons.
 

Fission product yields by mass for thermal neutron fission of U-235, Pu-239, a combination of the two typical of current nuclear power reactors, and U-233 used in the thorium cycle.
 
 
Nuclear fission can occur without neutron bombardment, as a type of radioactive decay. This type of fission (called spontaneous fission) is rare except in a few heavy isotopes. In engineered nuclear devices, essentially all nuclear fission occurs as a "nuclear reaction" — a bombardment-driven process that results from the collision of two subatomic particles. In nuclear reactions, a subatomic particle collides with an atomic nucleus and causes changes to it. Nuclear reactions are thus driven by the mechanics of bombardment, not by the relatively constant exponential decay and half-life characteristic of spontaneous radioactive processes.
 
Many types of nuclear reactions are currently known. Nuclear fission differs importantly from other types of nuclear reactions, in that it can be amplified and sometimes controlled via a nuclear chain reaction (one type of general chain reaction). In such a reaction, free neutrons released by each fission event can trigger yet more events, which in turn release more neutrons and cause more fissions.
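To make the multiplication idea concrete, the short sketch below tracks a neutron population generation by generation for a given effective multiplication factor k (the average number of neutrons from one fission that go on to cause another fission). This is a hedged illustration, not a reactor model; the values of k and the starting population are arbitrary assumptions.

# Minimal sketch of neutron multiplication in a fission chain reaction.
# k is the effective multiplication factor; the numbers are illustrative.

def neutron_population(initial, k, generations):
    """Population after each generation, N_g = N_0 * k**g."""
    return [initial * k ** g for g in range(generations + 1)]

# k < 1: subcritical, the chain dies out
# k = 1: critical, steady rate (a reactor operating at constant power)
# k > 1: supercritical, exponential growth (the regime of a weapon)
for k in (0.9, 1.0, 1.1):
    final = neutron_population(1000, k, 10)[-1]
    print(f"k = {k:0.1f}: {final:8.1f} neutrons after 10 generations")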
 
The chemical element isotopes that can sustain a fission chain reaction are called nuclear fuels, and are said to be fissile. The most common nuclear fuels are 235U (the isotope of uranium with mass number 235, used in nuclear reactors) and 239Pu (the isotope of plutonium with mass number 239). These fuels break apart into a bimodal range of chemical elements with atomic masses centering near 95 and 135 u (fission products). Most nuclear fuels undergo spontaneous fission only very slowly, decaying instead mainly via an alpha/beta decay chain over periods of millennia to eons.
In a nuclear reactor or nuclear weapon, the overwhelming majority of fission events are induced by bombardment with another particle, a neutron, which is itself produced by prior fission events.
 
Nuclear fission in fissile fuels is the result of the nuclear excitation energy produced when a fissile nucleus captures a neutron. This energy, released by the attractive nuclear force acting between the neutron and the nucleus, is enough to deform the nucleus into a double-lobed "drop." Once the two lobes are separated beyond the distance at which the nuclear force can hold the two groups of charged nucleons together, the fragments complete their separation and are driven further apart by their mutually repulsive charges, in a process that becomes irreversible with increasing distance. A similar process occurs in fissionable isotopes (such as uranium-238), but in order to fission, these isotopes require additional energy provided by fast neutrons (such as those produced by nuclear fusion in thermonuclear weapons).
 
The liquid drop model of the atomic nucleus predicts equal-sized fission products as an outcome of nuclear deformation. The more sophisticated nuclear shell model is needed to mechanistically explain the route to the more energetically favorable outcome, in which one fission product is slightly smaller than the other. A theory of fission based on the shell model was formulated by Maria Goeppert Mayer.
 
The most common fission process is binary fission, and it produces the fission products noted above, at 95±15 and 135±15 u. However, the binary process happens merely because it is the most probable. In anywhere from 2 to 4 fissions per 1000 in a nuclear reactor, a process called ternary fission produces three positively charged fragments (plus neutrons) and the smallest of these may range from so small a charge and mass as a proton (Z=1), to as large a fragment as argon (Z=18). The most common small fragments, however, are composed of 90% helium-4 nuclei with more energy than alpha particles from alpha decay (so-called "long range alphas" at ~ 16 MeV), plus helium-6 nuclei, and tritons (the nuclei of tritium). The ternary process is less common, but still ends up producing significant helium-4 and tritium gas buildup in the fuel rods of modern nuclear reactors.[3]
__________________________________________

Nuclear fusion


The Sun is a main-sequence star, and thus generates its energy by nuclear fusion of hydrogen nuclei into helium. In its core, the Sun fuses 620 million metric tons of hydrogen each second.

In nuclear physics, nuclear fusion is a nuclear reaction in which two or more atomic nuclei collide at a very high speed and join to form a new type of atomic nucleus. During this process, matter is not conserved because some of the matter of the fusing nuclei is converted to photons (energy). Fusion is the process that powers active or "main sequence" stars.

The fusion of two nuclei with lower masses than iron (which, along with nickel, has the largest binding energy per nucleon) generally releases energy, while the fusion of nuclei heavier than iron absorbs energy. The opposite is true for the reverse process, nuclear fission. This means that fusion generally occurs for lighter elements only, and likewise, that fission normally occurs only for heavier elements. There are extreme astrophysical events that can lead to short periods of fusion with heavier nuclei. This is the process that gives rise to nucleosynthesis, the creation of the heavy elements during events such as supernovae.

Following the discovery of quantum tunneling by Friedrich Hund, in 1929 Robert Atkinson and Fritz Houtermans used the measured masses of light elements to predict that large amounts of energy could be released by fusing small nuclei. Building upon the nuclear transmutation experiments by Ernest Rutherford, carried out several years earlier, the laboratory fusion of hydrogen isotopes was first accomplished by Mark Oliphant in 1932. During the remainder of that decade the steps of the main cycle of nuclear fusion in stars were worked out by Hans Bethe.

Research into fusion for military purposes began in the early 1940s as part of the Manhattan Project. Fusion was accomplished in 1951 with the Greenhouse Item nuclear test. Nuclear fusion on a large scale in an explosion was first carried out on November 1, 1952, in the Ivy Mike hydrogen bomb test.

Research into developing controlled thermonuclear fusion for civil purposes also began in earnest in the 1950s, and it continues to this day. Two projects, the National Ignition Facility and ITER, have the goal of high gains, that is, producing more energy than is required to ignite the reaction, after 60 years of design improvements developed from previous experiments.[citation needed] While ICF and tokamak designs have dominated in recent decades, stellarator experiments, such as Wendelstein 7-X in Greifswald, Germany, are again attracting international scientific attention.
 
The reaction cross section σ is a measure of the probability of a fusion reaction as a function of the relative velocity of the two reactant nuclei. If the reactants have a distribution of velocities, e.g. a thermal distribution, then it is useful to perform an average over the distributions of the product of cross section and velocity. This average is called the 'reactivity', denoted ⟨σv⟩. The reaction rate (fusions per volume per time) is ⟨σv⟩ times the product of the reactant number densities:

f = n₁n₂⟨σv⟩.

If a species of nuclei is reacting with itself, such as the DD reaction, then the product n₁n₂ must be replaced by (1/2)n².

⟨σv⟩ increases from virtually zero at room temperature up to meaningful magnitudes at temperatures of 10–100 keV. At these temperatures, well above typical ionization energies (13.6 eV in the hydrogen case), the fusion reactants exist in a plasma state.

The significance of ⟨σv⟩ as a function of temperature in a device with a particular energy confinement time is found by considering the Lawson criterion. This is an extremely challenging barrier to overcome on Earth, which explains why fusion research has taken many years to reach the current high state of technical prowess.[10]
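A short numerical sketch of the rate formula above follows. The densities and the reactivity value are order-of-magnitude assumptions for a deuterium-tritium plasma in the tens-of-keV range, not tabulated data.

# Sketch of the volumetric fusion rate f = n1 * n2 * <sigma v>, with the
# factor 1/2 when a species reacts with itself (e.g. the D-D reaction).
# Density and reactivity values are assumed, order-of-magnitude placeholders.

def fusion_rate(n1, n2, sigma_v, same_species=False):
    """Fusions per cubic metre per second."""
    if same_species:
        return 0.5 * n1 * n1 * sigma_v   # avoid double-counting identical pairs
    return n1 * n2 * sigma_v

n_deuterium = 1.0e20   # ions per m^3, typical of magnetic confinement
n_tritium = 1.0e20
sigma_v = 1.0e-22      # m^3/s, assumed reactivity near tens of keV

print(f"D-T rate: {fusion_rate(n_deuterium, n_tritium, sigma_v):.2e} per m^3 per s")
print(f"D-D rate: {fusion_rate(n_deuterium, n_deuterium, sigma_v, same_species=True):.2e} per m^3 per s")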

Methods for achieving fusion

Thermonuclear fusion

Main article: Thermonuclear fusion
If matter is sufficiently heated (and is therefore a plasma), fusion reactions may occur as a result of collisions between particles with extreme thermal kinetic energies. In the form of thermonuclear weapons, thermonuclear fusion is the only fusion technique so far to yield undeniably large amounts of useful fusion energy.[citation needed] Usable amounts of thermonuclear fusion energy released in a controlled manner have yet to be achieved.

Inertial confinement fusion

Inertial confinement fusion (ICF) is a type of fusion energy research that attempts to initiate nuclear fusion reactions by heating and compressing a fuel target, typically in the form of a pellet that most often contains a mixture of deuterium and tritium.

Beam-beam or beam-target fusion

If the energy to initiate the reaction comes from accelerating one of the nuclei, the process is called beam-target fusion; if both nuclei are accelerated, it is beam-beam fusion.
Accelerator-based light-ion fusion is a technique using particle accelerators to achieve particle kinetic energies sufficient to induce light-ion fusion reactions. Accelerating light ions is relatively easy, and can be done in an efficient manner—all it takes is a vacuum tube, a pair of electrodes, and a high-voltage transformer; fusion can be observed with as little as 10 kV between electrodes. The key problem with accelerator-based fusion (and with cold targets in general) is that fusion cross sections are many orders of magnitude lower than Coulomb interaction cross sections. Therefore the vast majority of ions end up expending their energy on bremsstrahlung and ionization of atoms of the target. Devices referred to as sealed-tube neutron generators are particularly relevant to this discussion. These small devices are miniature particle accelerators filled with deuterium and tritium gas in an arrangement that allows ions of these nuclei to be accelerated against hydride targets, also containing deuterium and tritium, where fusion takes place. Hundreds of neutron generators are produced annually for use in the petroleum industry where they are used in measurement equipment for locating and mapping oil reserves.

Muon-catalyzed fusion

Muon-catalyzed fusion is a well-established and reproducible fusion process that occurs at ordinary temperatures. It was studied in detail by Steven Jones in the early 1980s. Net energy production from this reaction cannot occur because of the high energy required to create muons, their short lifetime of 2.2 µs, and the high chance that a muon will bind to the new alpha particle and thus stop catalyzing fusion.[11]

Other principles


The Tokamak à configuration variable, research fusion reactor, at the École Polytechnique Fédérale de Lausanne (Switzerland).

Some other confinement principles have been investigated; some have been confirmed to produce nuclear fusion but with little expectation of ever yielding net power, while others have not yet been shown to produce fusion at all.

Sonofusion or bubble fusion, a controversial variation on the sonoluminescence theme, suggests that acoustic shock waves, creating temporary bubbles (cavitation) that expand and collapse shortly after creation, can produce temperatures and pressures sufficient for nuclear fusion.[12]

The Farnsworth–Hirsch fusor is a tabletop device in which fusion occurs. This fusion comes from high effective temperatures produced by electrostatic acceleration of ions.

The Polywell is a non-thermodynamic equilibrium machine that uses electrostatic confinement to accelerate ions into a center where they fuse together.

Antimatter-initialized fusion uses small amounts of antimatter to trigger a tiny fusion explosion. This has been studied primarily in the context of making nuclear pulse propulsion and pure fusion bombs feasible. It is not near becoming a practical power source, due to the cost of manufacturing antimatter alone.

Pyroelectric fusion was reported in April 2005 by a team at UCLA. The scientists used a pyroelectric crystal heated from −34 to 7 °C (−29 to 45 °F), combined with a tungsten needle to produce an electric field of about 25 gigavolts per meter to ionize and accelerate deuterium nuclei into an erbium deuteride target. At the estimated energy levels,[13] the D-D fusion reaction may occur, producing helium-3 and a 2.45 MeV neutron. Although it makes a useful neutron generator, the apparatus is not intended for power generation since it requires far more energy than it produces.[14][15][16][17]
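A short aside on the 2.45 MeV figure: for reactants fusing nearly at rest, momentum conservation divides the reaction energy between the two products in inverse proportion to their masses, which is why the light neutron carries most of it. The sketch below works through that split using approximate standard values (quoted from memory as an assumption, not data from this article).

# Why the D-D branch above yields a ~2.45 MeV neutron: for deuterons fusing
# essentially at rest, momentum conservation splits the reaction energy Q
# between the neutron and the helium-3 nucleus in inverse proportion to mass.

Q_DD_NEUTRON_BRANCH = 3.27      # MeV released in D + D -> He-3 + n (approximate)
M_NEUTRON = 1.0087              # atomic mass units (approximate)
M_HELIUM3 = 3.0160              # atomic mass units (approximate)

# The lighter particle carries the larger share of the kinetic energy.
neutron_energy = Q_DD_NEUTRON_BRANCH * M_HELIUM3 / (M_NEUTRON + M_HELIUM3)
helium3_energy = Q_DD_NEUTRON_BRANCH * M_NEUTRON / (M_NEUTRON + M_HELIUM3)

print(f"Neutron energy:  {neutron_energy:.2f} MeV")   # about 2.45 MeV
print(f"Helium-3 energy: {helium3_energy:.2f} MeV")   # about 0.82 MeV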

Hybrid nuclear fusion-fission (hybrid nuclear power) is a proposed means of generating power by use of a combination of nuclear fusion and fission processes. The concept dates to the 1950s, and was briefly advocated by Hans Bethe during the 1970s, but largely remained unexplored until a revival of interest in 2009, due to the delays in the realization of pure fusion.[18] Project PACER, carried out at Los Alamos National Laboratory (LANL) in the mid-1970s, explored the possibility of a fusion power system that would involve exploding small hydrogen bombs (fusion bombs) inside an underground cavity. As an energy source, the system is the only fusion power system that could be demonstrated to work using existing technology. However it would also require a large, continuous supply of nuclear bombs, making the economics of such a system rather questionable.

Wahhabi movement


Condensed From Wikipedia, the free encyclopedia
 
Wahhabism (Arabic: وهابية‎, Wahhābiyyah) is a radical religious movement or offshoot branch of Islam [1][2] variously described as "orthodox", "ultraconservative",[3] "austere", "fundamentalist", "puritanical"[4] (or "puritan"),[5] an Islamic "reform movement" to restore "pure monotheistic worship",[6] or an "extremist movement".[7] It aspires to return to the earliest fundamental Islamic sources of the Quran and Hadith with different interpretation from mainstream Islam, inspired by the teachings of medieval theologian Ibn Taymiyyah and early jurist Ahmad ibn Hanbal.[8]
The majority of the world's Wahhabis are from Qatar, the UAE, and Saudi Arabia.[9] 22.9% of all Saudis are Wahhabis (concentrated in Najd).[9] 46.87% of Qataris[9] and 44.8% of Emiratis are Wahhabis.[9] 5.7% of Bahrainis are Wahhabis and 2.17% of Kuwaitis are Wahhabis.[9]

Terrorist organizations adhering to the Wahhabi movement include al-Qaeda, the Taliban, and, more recently, ISIS.[10] The radical takfiri beliefs of Wahhabism enable its followers to label non-Wahhabi and mainstream Muslims as apostates along with non-Muslims, thus paving the way for their bloodshed.[11][12] In July 2013, the European Parliament identified the Wahhabi movement as the source of global terrorism and a threat to traditional and diverse Muslim cultures of the whole world.[13] Many buildings associated with early Islam, including mazaars, mausoleums, and other artifacts, have been destroyed in Saudi Arabia by Wahhabis from the early 19th century through the present day.[14][15]

Initially, Wahhabism was a revivalist movement instigated by an eighteenth century theologian, Muhammad ibn Abd al-Wahhab (1703–1792) from Najd, Saudi Arabia,[16] who was opposed by his own father and brother for his non-traditional interpretation of Islam.[17] He attacked a "perceived moral decline and political weakness" in the Arabian Peninsula and condemned what he perceived as idolatry, the popular cult of saints, and shrine and tomb visitation,[18] advocating a purging of the widespread practices by Muslims that he considered impurities and innovations in Islam.[1] He eventually convinced the local Amir, Uthman ibn Mu'ammar, to help him in his struggle.[19] The movement gained unchallenged precedence in most of the Arabian Peninsula through an alliance between Muhammad ibn Abd al-Wahhab and the House of Muhammad ibn Saud, which provided political and financial power for the religious revival represented by Ibn Abd al-Wahhab. The alliance created the Kingdom of Saudi Arabia, where Mohammed bin Abd Al-Wahhab's teachings are state-sponsored and the dominant form of Islam in Saudi Arabia.

The terms Wahhabi, Salafi, and ahl al-hadith (people of hadith) are often used interchangeably,[20] but Wahhabism has also been called "a particular orientation within Salafism",[1] considered ultra-conservative and one which rejects traditional Islamic legal scholarship as unnecessary innovation.[21][22] Salafism, on the other hand, has been described as a hybridization of the teachings of Ibn Abdul-Wahhab and others that has taken place since the 1960s.[23]

Captains of industry explore space's new frontiers


By Patrick Rahir
A file picture shows the WhiteKnightTwo, which carries Richard Branson's SpaceShipTwo into high altitude, prior to a flight at Spaceport America, northeast of Truth Or Consequences, New Mexico
With spacecraft that can carry tourists into orbit and connect Paris to New York in less than two hours, the new heroes of space travel are not astronauts but daring captains of industry.

This new breed of pioneers is using private money to push the final frontier as government space programmes fall away.

Times have changed. Once, the space race was led by the likes of NASA, the US space agency that put the first man on the Moon in 1969.

Today it is entrepreneur Elon Musk—the founder of Tesla electric cars and company SpaceX—who wants to reach Mars in the 2020s.
The furthest advanced—and most highly-publicised—private space project is led by Richard Branson, the British founder of the Virgin Group.
His shuttle, SpaceShipTwo, will be launched at high altitude from a weird-looking four-engined mothership—which can carry two pilots and up to six passengers—before embarking on a three-hour suborbital flight.

Branson and his sons are expected to be the first passengers aboard the shuttle when it launches later this year.

His company Virgin Galactic was given the green light in May by the US Federal Aviation Administration (FAA) to carry passengers from a base in New Mexico, which is named "Spaceport America"—the stuff of science fiction.

$250,000 a ticket

The $250,000 (190,000 euro) price of a ticket has not deterred more than 600 people, including celebrities such as actor Leonardo DiCaprio, from booking their seats.

Main shuttles and private companies developing suborbital travel with data on flights
 The US spaceflight company XCOR is more affordable, offering a one-hour suborbital flight for $100,000 (74,000 euros) on a shuttle that takes off from the Mojave Desert in California. It has already sold nearly 300 tickets.

"The first prototype is being assembled. Hopefully, the test flights will begin before the end of the year, and commercial flights before the end of 2015," Michiel Mol, an XCOR board member, told AFP.

It plans four flights a day and hopes its frequency will eventually give it an edge on Virgin Galactic.
But the new space business is not just about pandering to the whims of the rich; it also hopes to address a market for launching smaller satellites that weigh less than 250 kilograms (550 pounds).

"There is no dedicated launcher for small satellites," said Rachel Villain of Euroconsult, a global consulting firm specialising in space markets.

"Everyone has been looking for years for the Holy Grail of how to reduce costs, other than to send them as passengers on big launchers."
'Smarter, cheaper, reusable'

"These new players are revolutionising the launch market," said aeronautical expert Philippe Boissat of consultants Deloitte. "They are smarter, cheaper, and they are reusable and don't leave debris in space."

Which is exactly what one newcomer, Swiss Space Systems, or S3, proposes. With a shuttle on the back of an Airbus A300, its founder Pascal Jaussi wants to start launching satellites before going into intercontinental passenger flights.

Virtual photo of XCOR Aerospace's Lynx during a press conference in Beverly Hills, California, on December 2, 2008
 The 37-year-old former test pilot claims he can cut the price of a 250-kilogram satellite launch to eight million euros (almost $11 million), a quarter of what it now costs.

"Satellite makers wanting to launch groups of weather and surveillance satellites have already filled our order books," he said.

The first test flights are planned for the end of 2017, and the first satellite launches will begin at the end of the following year from a base in the Canary Islands, the Spanish archipelago off northwest Africa.

For passenger travel, the new space companies will have to be approved by the regulators who currently control air travel.

At the moment a passenger plane covers the 5,800 kilometres (3,600 miles) between Paris and New York in seven hours. At Mach 3, the S3 shuttle will do the same trip in one-and-a-half hours.

"We hope to have a ticket price comparable to a first-class transatlantic fare. It should never be more than 30,000 Swiss francs (24,700 euros, $33,100)," he said.
Boissat of Deloitte is already looking further ahead.

"These suborbital flights will produce a new generation of fighter pilots at the controls of space shuttles sent up to protect satellites or neutralise ones that pose a threat," he predicted.

Convergent and Divergent Evolution


Condensed from Wikipedia, the free encyclopedia
                           http://en.wikipedia.org/wiki/Divergent_evolution
 

Convergent Evolution

Example: Two succulent plant genera, Euphorbia and Astrophytum, are only distantly related, but have independently converged on a similar body form.
 
Convergent evolution describes the independent evolution of similar features in species of different lineages. Convergent evolution creates analogous structures that have similar form or function, but that were not present in the last common ancestor of those groups.[1] The cladistic term for the same phenomenon is homoplasy, from Greek for same form.[2] The recurrent evolution of flight is a classic example of convergent evolution. Flying insects, birds, and bats have all evolved the capacity of flight independently. They have "converged" on this useful trait.
 
Functionally similar features arising through convergent evolution are termed analogous, in contrast to homologous structures or traits, which have a common origin, but not necessarily similar function.[1]
The British anatomist Richard Owen was the first scientist to recognise the fundamental difference between analogies and homologies.[3] Bat and pterosaur wings constitute an example of analogous structures, while the bat wing is homologous to human and other mammal forearms, sharing an ancestral state despite serving different functions. The opposite of convergent evolution is divergent evolution, whereby related species evolve different traits. On a molecular level, this can happen due to random mutation unrelated to adaptive changes; see long branch attraction.
 
Convergent evolution is similar to, but distinguishable from, the phenomenon of parallel evolution. Parallel evolution occurs when two independent but similar species evolve in the same direction and thus independently acquire similar characteristics—for instance gliding frogs have evolved in parallel from multiple types of tree frog.
 

Causes

In morphology, analogous traits will often arise where different species live in similar ways and/or in a similar environment, and so face the same environmental factors. When occupying similar ecological niches (that is, a distinctive way of life), similar problems lead to similar solutions.[4]
 
In biochemistry, physical and chemical constraints on mechanisms cause some active site arrangements to independently evolve multiple times in separate enzyme superfamilies (for example, see also catalytic triad).[5]

Significance

Convergence has been associated with Darwinian evolution in the popular imagination since at least the 1940s. For example, Elbert A. Rogers argued: "If we lean toward the theories of Darwin might we not assume that man was [just as] apt to have developed in one continent as another?"[6]
 
In his book Wonderful Life, Stephen Jay Gould argues that if the tape of life were re-wound and played back, life would have taken a very different course.[7] Simon Conway Morris disputes this conclusion, arguing that convergence is a dominant force in evolution, and given that the same environmental and physical constraints are at work, life will inevitably evolve toward an "optimum" body plan, and at some point, evolution is bound to stumble upon intelligence, a trait presently identified with at least primates, corvids, and cetaceans.[8]
 
Convergence is difficult to quantify, so progress on this issue may require exploitation of engineering specifications (as of wing aerodynamics) and comparably rigorous measures of "very different course" in terms of phylogenetic (molecular) distances.[citation needed]

Distinctions

Convergent evolution is a topic touched by many different fields of biology, many of which use slightly different nomenclature. This section attempts to clarify some of those terms.

Diagram of cladistic definition of homoplasy, synapomorphy, autapomorphy, apomorphy and plesiomorphy.

Cladistic definition

In cladistics, a homoplasy or a homoplastic character state is a trait (genetic, morphological etc.) that is shared by two or more taxa because of convergence, parallelism or reversal.[9] Homoplastic character states require extra steps to explain their distribution on a most parsimonious cladogram.
Homoplasy is only recognizable when other characters imply an alternative hypothesis of grouping, because in the absence of such evidence, shared features are always interpreted as similarity due to common descent.[10] Homoplasious traits or changes (derived trait values acquired in unrelated organisms in parallel) can be compared with synapomorphy (a derived trait present in all members of a monophyletic clade), autapomorphy (derived trait present in only one member of a clade), or apomorphies, derived traits acquired in all members of a monophyletic clade following divergence where the most recent common ancestor had the ancestral trait (the ancestral trait manifesting in paraphyletic species as a plesiomorphy).

Re-evolution vs. convergent evolution

In some cases, it is difficult to tell whether a trait has been lost then re-evolved convergently, or whether a gene has simply been 'switched off' and then re-enabled later. Such a re-emerged trait is called an atavism. From a mathematical standpoint, an unused gene (selectively neutral) has a steadily decreasing probability of retaining potential functionality over time. The time scale of this process varies greatly in different phylogenies; in mammals and birds, there is a reasonable probability of remaining in the genome in a potentially functional state for around 6 million years.[11]

Parallel vs. convergent evolution


Evolution at an amino acid position. In each case, the left-hand species changes from incorporating alanine (A) at a specific position within a protein in a hypothetical common ancestor deduced from comparison of sequences of several species, and now incorporates serine (S) in its present-day form. The right-hand species may undergo divergent, parallel, or convergent evolution at this amino acid position relative to that of the first species.
 
For a particular trait, proceeding in each of two lineages from a specified ancestor to a later descendant, parallel and convergent evolutionary trends can be strictly defined and clearly distinguished from one another.[12] However the cutoff point for what is considered convergent and what is considered parallel evolution is assigned somewhat arbitrarily. When two species are similar in a particular character, evolution is defined as parallel if the ancestors were also similar and convergent if they were not. However, this definition is somewhat murky. All organisms share a common ancestor more or less recently, so the question of how far back to look in evolutionary time and how similar the ancestors need to be for one to consider parallel evolution to have taken place is not entirely resolved within evolutionary biology. Some scientists have argued parallel evolution and convergent evolution are more or less indistinguishable from one another.[13] Others have argued that we should not shy away from the gray area and that there are still important distinctions between parallel and convergent evolution.[14]
 
When the ancestral forms are unspecified or unknown, or the range of traits considered is not clearly specified, the distinction between parallel and convergent evolution becomes more subjective. For instance, the striking example of similar placental and marsupial forms is described by Richard Dawkins in The Blind Watchmaker as a case of convergent evolution,[15] because mammals on each continent had a long evolutionary history prior to the extinction of the dinosaurs under which to accumulate relevant differences. Stephen Jay Gould describes many of the same examples as parallel evolution starting from the common ancestor of all marsupials and placentals. Many evolved similarities can be described in concept as parallel evolution from a remote ancestor, with the exception of those where quite different structures are co-opted to a similar function. For example, consider Mixotricha paradoxa, a microbe that has assembled a system of rows of apparent cilia and basal bodies closely resembling that of ciliates but that are actually smaller symbiont micro-organisms, or the differently oriented tails of fish and whales. On the converse, any case in which lineages do not evolve together at the same time in the same ecospace might be described as convergent evolution at some point in time.
 
The definition of a trait is crucial in deciding whether a change is seen as divergent, or as parallel or convergent. In the image above, note that, since serine and threonine possess similar structures with an alcohol side-chain, the example marked "divergent" would be termed "parallel" if the amino acids were grouped by similarity instead of being considered individually. As another example, if genes in two species independently become restricted to the same region of the animals through regulation by a certain transcription factor, this may be described as a case of parallel evolution — but examination of the actual DNA sequence will probably show only divergent changes in individual base-pair positions, since a new transcription factor binding site can be added in a wide range of places within the gene with similar effect.
 
A similar situation occurs considering the homology of morphological structures. For example, many insects possess two pairs of flying wings. In beetles, the first pair of wings is hardened into wing covers with little role in flight, while in flies the second pair of wings is condensed into small halteres used for balance. If the two pairs of wings are considered as interchangeable, homologous structures, this may be described as a parallel reduction in the number of wings, but otherwise the two changes are each divergent changes in one pair of wings.
 
Similar to convergent evolution, evolutionary relay describes how independent species acquire similar characteristics through their evolution in similar ecosystems, but not at the same time (dorsal fins of sharks and ichthyosaurs).
________________________________________

Divergent Evolution


Darwin's finches are a clear and famous example of divergent evolution, in which an ancestral species radiates into a number of descendant species with both similar and different traits.


Divergent evolution is the accumulation of differences between groups, which can lead to the formation of new species. It is usually a result of the dispersal of the same species to different and isolated environments, which blocks gene flow among the distinct populations and allows the differentiated fixation of characteristics through genetic drift and natural selection. While the divergence is primarily molecular, it can also be seen in some higher-level characters of structure and function that are readily observable in organisms. The vertebrate limb is one example of divergent evolution: the limb in many different species has a common origin, but has diverged somewhat in overall structure and function.[citation needed]

Alternatively, "divergent evolution" can be applied to molecular biology characteristics. This could apply to a pathway in two or more organisms or cell types, for example. This can apply to genes and proteins, such as nucleotide sequences or protein sequences that derive from two or more homologous genes. Both orthologous genes (resulting from a speciation event) and paralogous genes (resulting from gene duplication within a population) can be said to display divergent evolution. Because of the latter, it is possible for divergent evolution to occur between two genes within a species.

In the case of divergent evolution, similarity is due to common origin, such as when divergence from a common ancestral structure or function has not yet completely obscured the underlying similarity. In contrast, convergent evolution arises when some sort of ecological or physical driver pushes different lineages toward a similar solution, even though the structure or function has arisen independently, with different characters converging on a common, similar solution from different points of origin. This includes analogous structures.

Usage

J. T. Gulick originated the usage of this term[1] and other related terms, which can vary slightly from one researcher to the next. Furthermore, the actual relationships might be more complex than the simple definitions of these terms allow. "Divergent evolution" is most commonly invoked when describing evolutionary relationships, and "convergent evolution" is applied when similarity is created by evolution independently producing similar structures and functions. The term parallel evolution is also sometimes used to describe the appearance of a similar structure in closely related species, whereas convergent evolution is used primarily to refer to similar structures in much more distantly related clades. For example, some might call the modification of the vertebrate limb to become a wing in bats and birds an example of parallel evolution. Vertebrate forelimbs have a common origin and thus, in general, show divergent evolution. However, the modification to the specific structure and function of a wing evolved independently and in parallel within several different vertebrate clades.

In complex structures, there may be other cases where some aspects of the structures are due to divergence and some aspects that might be due to convergence or parallelism. In the case of the eye, it was initially thought that different clades had different origins of the eye, but this is no longer thought by some researchers. It is possible that induction of the light-sensing eye during development might be diverging from a common ancestor across many clades, but the details of how the eye is constructed—and in particular the structures that focus light in cephalopods and vertebrates, for example—might have some convergent or parallel aspects to it, as well.[2]

A good example of divergent evolution is Darwin's finches, which now comprise over 80 varieties that all diverged from one original species of finch. Another example is the five-digit (pentadactyl) limb found in humans, bats, and whales: these limbs evolved from a common ancestor, but today they differ because of differing environmental pressures.

Divergent species


Comparison of allopatric, peripatric, parapatric and sympatric speciation.

Divergent species are a consequence of divergent evolution. The divergence of one species into two or more descendant species can occur in four major ways:[3]
  • Allopatric speciation occurs when a population becomes separated into two entirely isolated subpopulations. Once the separation occurs, natural selection and genetic drift operate on each subpopulation independently, producing different evolutionary outcomes.
  • Peripatric speciation is somewhat similar to allopatric speciation, but specifically occurs when a very small subpopulation becomes isolated from a much larger majority. Because the isolated subpopulation is so small, divergence can happen relatively rapidly due to the founder effect, in which small populations are more sensitive to genetic drift and natural selection acts on a small gene pool.
  • Parapatric speciation occurs when a small subpopulation remains within the habitat of an original population but enters a different niche. Effects other than physical separation prevent interbreeding between the two separated populations. Because one of the genetically isolated populations is so small, however, the founder effect can still play a role in speciation.
  • Sympatric speciation, the rarest and most controversial form of speciation, occurs with no form of isolation (physical or otherwise) between two populations.
Species can diverge when a part of a population is separated from the main population by a reproductive barrier. In the cases of allopatric and peripatric speciation, the reproductive barrier is the result of a physical barrier (e.g. flood waters, a mountain range, a desert). Once separated, each population begins to adapt to its new environment via genetic drift and natural selection. After many generations of continued evolution, the separated populations eventually diverge to the point where they are no longer able to interbreed with one another, and so become two separate species. One particular cause of divergent species is adaptive radiation.

An example of divergent species is the apple maggot fly. The apple maggot fly once infested only the fruit of a hawthorn native to North America. In the 1860s some maggot flies began to infest apples. They multiplied rapidly because they were able to make use of an abundant food supply. Now there are two distinct species, one that reproduces when the apples are ripe, and another that continues to infest the native hawthorn. Furthermore, they have not only evolved different reproductive timing, but also now have distinctive physical characteristics.

Corporate Profits Grow and Wages Slide


CORPORATE profits are at their highest level in at least 85 years. Employee compensation is at the lowest level in 65 years.
 
The Commerce Department last week estimated that corporations earned $2.1 trillion during 2013, and paid $419 billion in corporate taxes. The after-tax profit of $1.7 trillion amounted to 10 percent of gross domestic product during the year, the first full year it has been that high. In 2012, it was 9.7 percent, itself a record.
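The arithmetic behind those figures can be checked directly. The sketch below works only from the numbers quoted in this article, with G.D.P. inferred from the stated 10 percent share, so treat the implied G.D.P. figure as an assumption rather than an official estimate.

# Quick check of the 2013 figures quoted above (Commerce Department estimates).
pretax_profits = 2.1e12       # dollars
corporate_taxes = 0.419e12    # dollars
after_tax_profits = pretax_profits - corporate_taxes   # about 1.7 trillion

# The article says after-tax profits were 10 percent of GDP, which implies:
implied_gdp = after_tax_profits / 0.10

effective_tax_rate = corporate_taxes / pretax_profits
print(f"After-tax profits:  ${after_tax_profits / 1e12:.2f} trillion")
print(f"Implied GDP:        ${implied_gdp / 1e12:.1f} trillion")
print(f"Effective tax rate: {effective_tax_rate:.2%}")   # just under 20 percent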
 
Until 2010, the highest level of after-tax profits ever recorded was 9.1 percent, in 1929, the first year that the government began calculating the number.
Before taxes, corporate profits accounted for 12.5 percent of the total economy, tying the previous record that was set in 1942, when World War II pushed up profits for many companies. But in 1942, most of those profits were taxed away. The effective corporate tax rate was nearly 55 percent, in sharp contrast to last year’s figure of under 20 percent.
 
The trend of higher profits and lower effective taxes has been gaining strength for years, but really picked up after the Great Recession temporarily depressed profits in 2009. The effective rate has been below 20 percent in three of the last five years. Before 2009, the rate had not been that low since 1931.
 
The statutory top corporate tax rate in the United States is 35 percent, and corporations have been vigorously lobbying to reduce that, saying it puts them at a competitive disadvantage against companies based in other countries, where rates are lower. But there are myriad tax credits, deductions and preferences available, particularly to multinational companies, and the result is that effective tax rates have fallen for many companies.
 
The Commerce Department also said total wages and salaries last year amounted to $7.1 trillion, or 42.5 percent of the entire economy. That was down from 42.6 percent in 2012 and was lower than in any year previously measured.
 
Including the cost of employer-paid benefits, like health insurance and pensions, as well as the employer’s share of Social Security and Medicare contributions, the total cost of compensation was $8.9 trillion, or 52.7 percent of G.D.P., down from 53 percent in 2012 and the lowest level since 1948.

Profits High, Wages Low

After-tax corporate profits in 2013 rose to a record of 10 percent of gross domestic product, while total compensation of employees slipped to a 65-year low. Corporate tax rates — under 20 percent of pretax corporate income in three of the last five years — have not been that low since Herbert Hoover was president. During the Obama administration, profits have taken a higher share of national income than during any administration since 1929.
[Charts: after-tax corporate profits and employee compensation, each as a percentage of G.D.P., and the effective corporate tax rate, from 1930 to 2013, with recession years marked.]
By presidential term (highest in each category was highlighted in the original graphic):

President         After-tax corporate      Effective corporate   Employee compensation   Change in S.&P. 500
                  profits (pct. of GDP)    tax rate              (pct. of GDP)           total / annualized
Obama                 9.3%                     20.5%                 53.2%                +133% / +17.7%
G.W. Bush             7.2%                     26.0%                 55.0%                 -40% /  -6.2%
Clinton               6.0%                     31.0%                 55.5%                +210% / +15.2%
G.H.W. Bush           4.8%                     32.9%                 55.9%                 +51% / +10.9%
Reagan                5.2%                     31.7%                 55.6%                +118% / +10.2%
Carter                5.8%                     36.7%                 56.3%                 +28% /  +6.3%
Ford                  5.5%                     37.3%                 56.1%                 +27% / +10.4%
Nixon                 5.4%                     39.0%                 57.4%                 -20% /  -4.0%
Johnson               7.2%                     36.1%                 55.5%                 +46% /  +7.6%
Kennedy               6.1%                     41.1%                 55.2%                 +16% /  +5.4%
Eisenhower            5.8%                     44.1%                 55.1%                +129% / +10.9%
Truman                5.6%                     47.3%                 53.6%                 +86% /  +8.3%
F.D. Roosevelt        5.1%                     44.2%                 52.9%                +141% /  +7.5%
Hoover                5.4%                     14.7%                 51.0%                 -77% / -30.8%
Benefits were a steadily rising cost for employers for many decades, but that trend seems to have ended. In 2013, benefits amounted to 10.2 percent of G.D.P. (the difference between total compensation, at 52.7 percent, and wages and salaries, at 42.5 percent), the lowest figure since 2000.
 
One way to look at the current situation is to compare 2013 with 2006, the last full year before the recession began. Adjusted for inflation, corporate profits were 28 percent higher, before taxes, last year. But taxes were down by 21 percent, so after-tax profits were up by 36 percent. At the same time, total employee compensation was up by 5 percent, or less than the 7 percent increase in the working-age population over the same period.
 
Several reasons have been offered as explanations for the declining share of national income going to workers, including the effects of globalization, which has shifted some jobs to lower-paid overseas workers, and the declining bargaining power of unions.
 
The accompanying charts compare President Obama’s administration with each of his predecessors, going back to Herbert Hoover. After-tax corporate profits in President Obama’s five years in office have averaged 9.3 percent of G.D.P. That is a full two percentage points higher than the 7.2 percent averages under Lyndon B. Johnson and George W. Bush, previously the presidents with the highest ratios of corporate profits.
 
The stock market has reflected that strong performance. Through the end of March, the Standard & Poor’s 500-stock index was up 133 percent since Mr. Obama’s inauguration in 2009. Of the 13 presidents since 1929, only Bill Clinton and Franklin D. Roosevelt saw a larger total increase. On an annualized basis, the Obama administration gains come to 17.7 percent a year, higher than any of the previous presidents. The figures reflect price changes, and are not adjusted for dividends or inflation.

The Incredible Shrinking Dinosaurs


 
Saturday, August 2, 2014 18:22
 

For decades, paleontologists have been uncovering the remarkable evolutionary relationships between fearsome, two-legged, meat-eating dinosaurs and birds.

A new study suggests that the pace of the transition from one to the other was quick by dinosaur standards. In the 50 million years preceding the appearance of the first birds some 163 million years ago, the size and weight of theropods along the direct line of descent to birds shrank one group after another – slowly at first, but going into free fall during the final 10 to 15 million years once Maniraptors took the evolutionary baton from their direct ancestors, the Coelurosaurs.

The skeletal changes taking place during the 50-million-year dinosaur-to-bird transition were occurring four times faster than for dinosaurs as a whole, according to the analysis, conducted by an international team of researchers led by Michael Lee, with the South Australia Museum in Adelaide.

Rapid rates of change in body size have appeared before in the fossil record. Following a mass extinction at the end of the Cretaceous period some 65 million years ago, an event that drove non-avian dinosaurs extinct, the size and diversity of mammals exploded over a 15-million-year period, researchers say. This came as mammals began to fill ecological niches vacated by the late, great dinosaurs.

The interplay between evolution and ecological niches was likely at work for the ancestors to birds as well – in this case, the Great Escape.

Some researchers surmise that the changes in body size and skeletal structures that led to the first birds, particularly during the phase of accelerated change, could have occurred as the now-smaller theropods moved into trees to escape becoming another animal’s meal or to take advantage of new sources of food.

The continuing reduction in size needed to succeed as tree dwellers would have triggered a cascade of evolutionary changes, suggests University of Bristol paleontologist Mike Benton. These changes would have improved vision, improved the aerodynamics of forelimbs to allow for increasingly ambitious leaps from tree to tree, or encouraged the evolution of feathers to insulate the new tree dwellers.

“Being smaller and lighter in the land of giants, with rapidly evolving anatomical adaptations, provided these bird ancestors with new ecological opportunities,” Dr. Lee said in a prepared statement.

Past studies of animal sizes in the run-up to birds had looked at individual branches of the avian ancestral tree, or had relied on trees built from physical traits to establish relationships but lacking dates.

Lee and colleagues were able to take advantage of the explosion of small feathered theropod fossils coming out of China since the mid-1990s, known collectively as Paraves. These animals were trying to exploit various ways of getting from tree to tree – jumping, gliding, or parachuting, notes Dr. Benton in an article in the current issue of the journal Science. The article accompanies the analysis Lee and his colleagues performed.

The researchers gathered data on 1,549 skeletal traits from 120 species of theropods, including the length of the thigh bones and the ages of the specimens. The team used the femur as a marker of body mass. They then applied sophisticated statistical techniques to reconstruct the relationships among the species and their chronology, and to track their evolutionary changes.

Some 200 million years ago, direct ancestors of the first birds tipped the scales at about 360 pounds. By about 175 million years ago, the typical weight of a new generation of direct ancestor had fallen to 100 pounds. Over the next 10 million to 15 million years, body weights would plummet, winding up at about a pound for the first birds.

The study is significant on two levels, suggests Daniel Field, a PhD candidate in paleontology at Yale University and a predoctoral fellow at the university’s Peabody Museum of Natural History.

Researchers have a good idea of what the pattern of evolutionary relationships is along that lineage, he says, “but we don’t have quite as good an idea of how those evolutionary transitions actually played out.” The analysis Lee and his colleagues have performed helps fill in that information.

But the study has broader implications, Mr. Field adds. The team amassed a remarkable set of data that will be valuable in its own right and raises additional, intriguing questions.

For instance, the Great Jurassic Shrink Off was apparent at each of 12 or more points along the main line of evolution between theropods and birds. Those points represent branches in the family tree where other theropods went off in their own evolutionary directions – directions in which body size either remained stable or often increased significantly, in one case giving the world Tyrannosaurus rex. It’s a pattern that repeats along each branch. Explaining that repetition, even as the avian lineage was yielding ever smaller animals over the same time span, is a fresh mystery the data present, according to Field.



Source: http://www.ascensionearth2012.org/2014/08/the-incredible-shrinking-dinosaurs-video.html

Hollow-point bullet

From Wikipedia, the free encyclopedia https://en.wikipedia.org/wi...