
Wednesday, October 25, 2023

Common descent

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Common_descent

Common descent is a concept in evolutionary biology applicable when one species is the ancestor of two or more species later in time. According to modern evolutionary biology, all living beings could be descendants of a unique ancestor commonly referred to as the last universal common ancestor (LUCA) of all life on Earth.

Common descent is an effect of speciation, in which multiple species derive from a single ancestral population. The more recent the ancestral population two species have in common, the more closely they are related. The most recent common ancestor of all currently living organisms is the last universal common ancestor, which lived about 3.9 billion years ago. The two earliest pieces of evidence for life on Earth are graphite found to be biogenic in 3.7 billion-year-old metasedimentary rocks discovered in western Greenland and microbial mat fossils found in 3.48 billion-year-old sandstone discovered in Western Australia. All currently living organisms on Earth share a common genetic heritage, though the suggestion of substantial horizontal gene transfer during early evolution has led to questions about the monophyly (single ancestry) of life. 6,331 groups of genes common to all living animals have been identified; these may have arisen from a single common ancestor that lived 650 million years ago in the Precambrian.

Universal common descent through an evolutionary process was first proposed by the British naturalist Charles Darwin in the concluding sentence of his 1859 book On the Origin of Species:

There is grandeur in this view of life, with its several powers, having been originally breathed into a few forms or into one; and that, whilst this planet has gone cycling on according to the fixed law of gravity, from so simple a beginning endless forms most beautiful and most wonderful have been, and are being, evolved.

History

The idea that all living things (including things considered non-living by science) are related is a recurring theme in many indigenous worldviews across the world. Later on, in the 1740s, the French mathematician Pierre Louis Maupertuis arrived at the idea that all organisms had a common ancestor, and had diverged through random variation and natural selection. In Essai de cosmologie (1750), Maupertuis noted:

May we not say that, in the fortuitous combination of the productions of Nature, since only those creatures could survive in whose organizations a certain degree of adaptation was present, there is nothing extraordinary in the fact that such adaptation is actually found in all these species which now exist? Chance, one might say, turned out a vast number of individuals; a small proportion of these were organized in such a manner that the animals' organs could satisfy their needs. A much greater number showed neither adaptation nor order; these last have all perished.... Thus the species which we see today are but a small part of all those that a blind destiny has produced.

In 1790, the philosopher Immanuel Kant wrote in Kritik der Urteilskraft (Critique of Judgment) that the similarity of animal forms implies a common original type, and thus a common parent.

In 1794, Charles Darwin's grandfather, Erasmus Darwin asked:

[W]ould it be too bold to imagine, that in the great length of time, since the earth began to exist, perhaps millions of ages before the commencement of the history of mankind, would it be too bold to imagine, that all warm-blooded animals have arisen from one living filament, which the great First Cause endued with animality, with the power of acquiring new parts attended with new propensities, directed by irritations, sensations, volitions, and associations; and thus possessing the faculty of continuing to improve by its own inherent activity, and of delivering down those improvements by generation to its posterity, world without end?

Charles Darwin's views about common descent, as expressed in On the Origin of Species, were that it was probable that there was only one progenitor for all life forms:

Therefore I should infer from analogy that probably all the organic beings which have ever lived on this earth have descended from some one primordial form, into which life was first breathed.

But he precedes that remark with, "Analogy would lead me one step further, namely, to the belief that all animals and plants have descended from some one prototype. But analogy may be a deceitful guide." In a subsequent edition, he instead asserts:

"We do not know all the possible transitional gradations between the simplest and the most perfect organs; it cannot be pretended that we know all the varied means of Distribution during the long lapse of years, or that we know how imperfect the Geological Record is. Grave as these several difficulties are, in my judgment they do not overthrow the theory of descent from a few created forms with subsequent modification".

Common descent was widely accepted amongst the scientific community after Darwin's publication. In 1907, Vernon Kellogg commented that "practically no naturalists of position and recognized attainment doubt the theory of descent."

In 2008, biologist T. Ryan Gregory noted that:

No reliable observation has ever been found to contradict the general notion of common descent. It should come as no surprise, then, that the scientific community at large has accepted evolutionary descent as a historical reality since Darwin’s time and considers it among the most reliably established and fundamentally important facts in all of science.

Evidence

Common biochemistry

All known forms of life are based on the same fundamental biochemical organization: genetic information encoded in DNA, transcribed into RNA, through the effect of protein- and RNA-enzymes, then translated into proteins by (highly similar) ribosomes, with ATP, NADPH and others as energy sources. Analysis of small sequence differences in widely shared substances such as cytochrome c further supports universal common descent. Some 23 proteins are found in all organisms, serving as enzymes carrying out core functions like DNA replication. The fact that only one such set of enzymes exists is convincing evidence of a single ancestry. 6,331 genes common to all living animals have been identified; these may have arisen from a single common ancestor that lived 650 million years ago in the Precambrian.

Common genetic code

Standard genetic code (DNA codons):
  • Phenylalanine: TTT, TTC
  • Leucine: TTA, TTG, CTT, CTC, CTA, CTG
  • Serine: TCT, TCC, TCA, TCG, AGT, AGC
  • Tyrosine: TAT, TAC
  • Cysteine: TGT, TGC
  • Tryptophan: TGG
  • Proline: CCT, CCC, CCA, CCG
  • Histidine: CAT, CAC
  • Glutamine: CAA, CAG
  • Arginine: CGT, CGC, CGA, CGG, AGA, AGG
  • Isoleucine: ATT, ATC, ATA
  • Methionine (start): ATG
  • Threonine: ACT, ACC, ACA, ACG
  • Asparagine: AAT, AAC
  • Lysine: AAA, AAG
  • Valine: GTT, GTC, GTA, GTG
  • Alanine: GCT, GCC, GCA, GCG
  • Aspartic acid: GAT, GAC
  • Glutamic acid: GAA, GAG
  • Glycine: GGT, GGC, GGA, GGG
  • Stop codons: TAA, TAG, TGA

The genetic code (the "translation table" according to which DNA information is translated into amino acids, and hence proteins) is nearly identical for all known lifeforms, from bacteria and archaea to animals and plants. The universality of this code is generally regarded by biologists as definitive evidence in favor of universal common descent.
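The near-universality of this translation table can be made concrete in code. The following is a minimal, illustrative Python sketch (the DNA string in the example is invented, not taken from any real gene) that translates a coding sequence using the standard code:

```python
# Minimal sketch: translating a DNA coding sequence with the standard
# genetic code, the same table shared by nearly all known organisms.
# One-letter amino acid codes; "*" marks a stop codon.

CODON_TABLE = {
    "TTT": "F", "TTC": "F", "TTA": "L", "TTG": "L",
    "CTT": "L", "CTC": "L", "CTA": "L", "CTG": "L",
    "TCT": "S", "TCC": "S", "TCA": "S", "TCG": "S", "AGT": "S", "AGC": "S",
    "TAT": "Y", "TAC": "Y", "TAA": "*", "TAG": "*",
    "TGT": "C", "TGC": "C", "TGA": "*", "TGG": "W",
    "CCT": "P", "CCC": "P", "CCA": "P", "CCG": "P",
    "CAT": "H", "CAC": "H", "CAA": "Q", "CAG": "Q",
    "CGT": "R", "CGC": "R", "CGA": "R", "CGG": "R", "AGA": "R", "AGG": "R",
    "ATT": "I", "ATC": "I", "ATA": "I", "ATG": "M",
    "ACT": "T", "ACC": "T", "ACA": "T", "ACG": "T",
    "AAT": "N", "AAC": "N", "AAA": "K", "AAG": "K",
    "GTT": "V", "GTC": "V", "GTA": "V", "GTG": "V",
    "GCT": "A", "GCC": "A", "GCA": "A", "GCG": "A",
    "GAT": "D", "GAC": "D", "GAA": "E", "GAG": "E",
    "GGT": "G", "GGC": "G", "GGA": "G", "GGG": "G",
}

def translate(dna: str) -> str:
    """Translate a DNA coding strand, stopping at the first stop codon."""
    protein = []
    for i in range(0, len(dna) - 2, 3):
        aa = CODON_TABLE[dna[i:i + 3]]
        if aa == "*":          # stop codon ends the protein
            break
        protein.append(aa)
    return "".join(protein)

print(translate("ATGGCTTGGTAA"))  # Met-Ala-Trp, then stop → "MAW"
```

The same table, applied to sequences from bacteria, archaea, plants, or animals, yields the correct proteins, which is the universality the text describes.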

The way that codons (DNA triplets) are mapped to amino acids seems to be strongly optimised. Richard Egel argues that in particular the hydrophobic (non-polar) side-chains are well organised, suggesting that these enabled the earliest organisms to create peptides with water-repelling regions able to support the essential electron exchange (redox) reactions for energy transfer.

Selectively neutral similarities

Similarities which have no adaptive relevance cannot be explained by convergent evolution, and therefore they provide compelling support for universal common descent. Such evidence has come from two areas: amino acid sequences and DNA sequences. Proteins with the same three-dimensional structure need not have identical amino acid sequences; any irrelevant similarity between the sequences is evidence for common descent. In certain cases, there are several codons (DNA triplets) that code redundantly for the same amino acid. Since many species use the same codon at the same place to specify an amino acid that can be represented by more than one codon, that is evidence for their sharing a recent common ancestor. Had the amino acid sequences come from different ancestors, they would have been coded for by any of the redundant codons, and since the correct amino acids would already have been in place, natural selection would not have driven any change in the codons, however much time was available. Genetic drift could change the codons, but it would be extremely unlikely to make all the redundant codons in a whole sequence match exactly across multiple lineages. Similarly, shared nucleotide sequences, especially where these are apparently neutral such as the positioning of introns and pseudogenes, provide strong evidence of common ancestry.
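The redundancy argument above can be made quantitative with a small sketch. Using the synonym counts of the standard genetic code, the number of distinct DNA sequences encoding even a short peptide grows multiplicatively, so codon-for-codon agreement between independent lineages would be extremely improbable:

```python
# Illustrative sketch of the redundant-codon argument: many distinct DNA
# sequences encode the same peptide, so two lineages agreeing on the same
# codon choices throughout a gene is unlikely without shared ancestry.

from math import prod

# Number of synonymous codons per amino acid in the standard code
SYNONYMS = {
    "F": 2, "L": 6, "S": 6, "Y": 2, "C": 2, "W": 1, "P": 4, "H": 2,
    "Q": 2, "R": 6, "I": 3, "M": 1, "T": 4, "N": 2, "K": 2, "V": 4,
    "A": 4, "D": 2, "E": 2, "G": 4,
}

def encodings(peptide: str) -> int:
    """Count the distinct DNA sequences that encode a given peptide."""
    return prod(SYNONYMS[aa] for aa in peptide)

print(encodings("MAW"))         # 1 * 4 * 1 = 4 possible encodings
print(encodings("SLLRSSAVPG"))  # a made-up 10-residue peptide: millions
```

Even a ten-residue peptide typically admits millions of synonymous encodings, so matching codon choices across species point to descent rather than chance.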

Other similarities

Biologists often point to the universality of many aspects of cellular life as supportive evidence to the more compelling evidence listed above. These similarities include the energy carrier adenosine triphosphate (ATP), and the fact that all amino acids found in proteins are left-handed. It is, however, possible that these similarities arose because of the laws of physics and chemistry, rather than through universal common descent, and therefore reflect convergent evolution. In contrast, there is evidence for homology of the central subunits of transmembrane ATPases throughout all living organisms, especially in how the rotating elements are bound to the membrane. This supports the assumption of a LUCA as a cellular organism, although primordial membranes may have been semipermeable and evolved later into the membranes of modern bacteria, and on a second path into those of modern archaea.

Phylogenetic trees

[Figure: phylogenetic tree spanning Bacteria (e.g. Proteobacteria, Cyanobacteria, Spirochetes, Gram-positives), Archaea (e.g. Methanococcus, Methanobacterium, Haloarchaea), and Eukaryota (animals, fungi, plants, and various protists)]
A phylogenetic tree based on ribosomal RNA genes implies a single origin for all life.

Another important piece of evidence is from detailed phylogenetic trees (i.e., "genealogic trees" of species) mapping out the proposed divisions and common ancestors of all living species. In 2010, Douglas L. Theobald published a statistical analysis of available genetic data, mapping them to phylogenetic trees, that gave "strong quantitative support, by a formal test, for the unity of life."

Traditionally, these trees have been built using morphological methods, such as appearance, embryology, etc. Recently, it has been possible to construct these trees using molecular data, based on similarities and differences between genetic and protein sequences. All these methods produce essentially similar results, even though most genetic variation has no influence over external morphology. That phylogenetic trees based on different types of information agree with each other is strong evidence of a real underlying common descent.
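As a toy illustration of the molecular approach, the sketch below clusters four invented "gene" sequences by pairwise difference and greedily joins the closest pair at each step, a crude stand-in for distance methods such as UPGMA. The species names and sequences are made up for illustration:

```python
# Toy sketch of building a phylogenetic tree from molecular data.
# The four short "gene" sequences are invented, not from real genomes.

from itertools import combinations

seqs = {
    "human": "ATGGCTTGT",
    "chimp": "ATGGCTTGC",
    "mouse": "ATGCATTGC",
    "yeast": "TTGCAATCC",
}

def distance(a: str, b: str) -> int:
    """Hamming distance: positions where two aligned sequences differ."""
    return sum(x != y for x, y in zip(a, b))

def build_tree(seqs):
    """Greedy single-linkage joining: merge the two closest clusters
    until one remains; returns the join order (the tree's nesting)."""
    clusters = [(name,) for name in seqs]
    joins = []
    while len(clusters) > 1:
        i, j = min(combinations(range(len(clusters)), 2),
                   key=lambda ij: min(distance(seqs[a], seqs[b])
                                      for a in clusters[ij[0]]
                                      for b in clusters[ij[1]]))
        merged = clusters[i] + clusters[j]
        joins.append(merged)
        clusters = [c for k, c in enumerate(clusters) if k not in (i, j)]
        clusters.append(merged)
    return joins

for step in build_tree(seqs):
    print(step)
```

The closest pair ("human", "chimp") joins first, mirroring how rRNA-based trees recover nested groupings from sequence similarity alone.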

Objections

A 2005 tree of life showing many horizontal gene transfers, implying multiple possible origins.

Gene exchange clouds phylogenetic analysis

Theobald noted that substantial horizontal gene transfer could have occurred during early evolution. Bacteria today remain capable of gene exchange between distantly-related lineages. This weakens the basic assumption of phylogenetic analysis, that similarity of genomes implies common ancestry, because sufficient gene exchange would allow lineages to share much of their genome whether or not they shared an ancestor (monophyly). This has led to questions about the single ancestry of life. However, biologists consider it very unlikely that completely unrelated proto-organisms could have exchanged genes, as their different coding mechanisms would have resulted only in garble rather than functioning systems. Later, however, many organisms all derived from a single ancestor could readily have shared genes that all worked in the same way, and it appears that they have.

Convergent evolution

If early organisms had been driven by the same environmental conditions to evolve similar biochemistry convergently, they might independently have acquired similar genetic sequences. Theobald's "formal test" was accordingly criticised by Takahiro Yonezawa and colleagues for not including consideration of convergence. They argued that Theobald's test was insufficient to distinguish between the competing hypotheses. Theobald has defended his method against this claim, arguing that his tests distinguish between phylogenetic structure and mere sequence similarity. Therefore, Theobald argued, his results show that "real universally conserved proteins are homologous."

RNA world

The possibility is mentioned, above, that all living organisms may be descended from an original single-celled organism with a DNA genome, and that this implies a single origin for life. Although such a universal common ancestor may have existed, such a complex entity is unlikely to have arisen spontaneously from non-life and thus a cell with a DNA genome cannot reasonably be regarded as the “origin” of life. To understand the “origin” of life, it has been proposed that DNA based cellular life descended from relatively simple pre-cellular self-replicating RNA molecules able to undergo natural selection. During the course of evolution, this RNA world was replaced by the evolutionary emergence of the DNA world. A world of independently self-replicating RNA genomes apparently no longer exists (RNA viruses are dependent on host cells with DNA genomes). Because the RNA world is apparently gone, it is not clear how scientific evidence could be brought to bear on the question of whether there was a single “origin” of life event from which all life descended.

Quantum optics

From Wikipedia, the free encyclopedia

Quantum optics is a branch of atomic, molecular, and optical physics dealing with how individual quanta of light, known as photons, interact with atoms and molecules. It includes the study of the particle-like properties of photons. Photons have been used to test many of the counter-intuitive predictions of quantum mechanics, such as entanglement and teleportation, and are a useful resource for quantum information processing.

History

Light propagating in a restricted volume of space has its energy and momentum quantized according to an integer number of particles known as photons. Quantum optics studies the nature and effects of light as quantized photons. The first major development leading to that understanding was the correct modeling of the blackbody radiation spectrum by Max Planck in 1900 under the hypothesis of light being emitted in discrete units of energy. The photoelectric effect was further evidence of this quantization, as explained by Albert Einstein in a 1905 paper, a discovery for which he was awarded the Nobel Prize in 1921. Niels Bohr showed that the hypothesis of optical radiation being quantized corresponded to his theory of the quantized energy levels of atoms, and in particular to the spectrum of discharge emission from hydrogen. The understanding of the interaction between light and matter following these developments was crucial for the development of quantum mechanics as a whole. However, the subfields of quantum mechanics dealing with matter-light interaction were principally regarded as research into matter rather than into light; hence, around 1960, one spoke of atomic physics and quantum electronics. Laser science, i.e. research into the principles, design and application of these devices, became an important field, and the quantum mechanics underlying the laser's principles was now studied with more emphasis on the properties of light, and the name quantum optics became customary.

As laser science needed good theoretical foundations, and also because research into these soon proved very fruitful, interest in quantum optics rose. Following the work of Dirac in quantum field theory, John R. Klauder, George Sudarshan, Roy J. Glauber, and Leonard Mandel applied quantum theory to the electromagnetic field in the 1950s and 1960s to gain a more detailed understanding of photodetection and the statistics of light (see degree of coherence). This led to the introduction of the coherent state as a concept which addressed variations between laser light, thermal light, exotic squeezed states, etc. as it became understood that light cannot be fully described just referring to the electromagnetic fields describing the waves in the classical picture. In 1977, Kimble et al. demonstrated a single atom emitting one photon at a time, further compelling evidence that light consists of photons. Previously unknown quantum states of light with characteristics unlike classical states, such as squeezed light were subsequently discovered.

Development of short and ultrashort laser pulses—created by Q switching and modelocking techniques—opened the way to the study of what became known as ultrafast processes. Applications for solid state research (e.g. Raman spectroscopy) were found, and mechanical forces of light on matter were studied. The latter led to levitating and positioning clouds of atoms or even small biological samples in an optical trap or optical tweezers by laser beam. This, along with Doppler cooling and Sisyphus cooling, was the crucial technology needed to achieve the celebrated Bose–Einstein condensation.

Other remarkable results are the demonstration of quantum entanglement, quantum teleportation, and quantum logic gates. The latter are of much interest in quantum information theory, a subject which partly emerged from quantum optics, partly from theoretical computer science.

Today's fields of interest among quantum optics researchers include parametric down-conversion, parametric oscillation, even shorter (attosecond) light pulses, use of quantum optics for quantum information, manipulation of single atoms, Bose–Einstein condensates, their application, and how to manipulate them (a sub-field often called atom optics), coherent perfect absorbers, and much more. Topics classified under the term of quantum optics, especially as applied to engineering and technological innovation, often go under the modern term photonics.

Several Nobel Prizes have been awarded for work in quantum optics.

Concepts

According to quantum theory, light may be considered not only as an electromagnetic wave but also as a "stream" of particles called photons which travel at c, the vacuum speed of light. These particles should not be considered to be classical billiard balls, but as quantum mechanical particles described by a wavefunction spread over a finite region.

Each particle carries one quantum of energy, equal to hf, where h is Planck's constant and f is the frequency of the light. That energy possessed by a single photon corresponds exactly to the transition between discrete energy levels in an atom (or other system) that emitted the photon; material absorption of a photon is the reverse process. Einstein's explanation of spontaneous emission also predicted the existence of stimulated emission, the principle upon which the laser rests. However, the actual invention of the maser (and laser) many years later was dependent on a method to produce a population inversion.
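The relation E = hf is easy to check numerically. The sketch below computes the energy of a 500 nm (green) photon, using the exact defined values of Planck's constant and the speed of light:

```python
# Quick numerical check of E = h*f for a visible-light photon.
# h and c are exact by definition in the SI (2019 redefinition).

h = 6.62607015e-34   # Planck's constant, J*s
c = 299_792_458      # speed of light in vacuum, m/s

wavelength = 500e-9          # green light, 500 nm
f = c / wavelength           # frequency, Hz
E = h * f                    # photon energy, J
E_eV = E / 1.602176634e-19   # convert joules to electron-volts

print(f"f = {f:.3e} Hz, E = {E:.3e} J = {E_eV:.2f} eV")
```

The result, roughly 2.5 eV, is on the order of the energy gaps between atomic levels, which is why visible photons drive atomic transitions.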

The use of statistical mechanics is fundamental to the concepts of quantum optics: light is described in terms of field operators for creation and annihilation of photons—i.e. in the language of quantum electrodynamics.

A frequently encountered state of the light field is the coherent state, as introduced by E.C. George Sudarshan in 1960. This state, which can be used to approximately describe the output of a single-frequency laser well above the laser threshold, exhibits Poissonian photon number statistics. Via certain nonlinear interactions, a coherent state can be transformed into a squeezed coherent state by applying a squeezing operator; such states can exhibit super- or sub-Poissonian photon statistics, and the light is called squeezed light. Other important quantum aspects are related to correlations of photon statistics between different beams. For example, spontaneous parametric down-conversion can generate so-called 'twin beams', where (ideally) each photon of one beam is associated with a photon in the other beam.
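The Poissonian statistics of a coherent state can be illustrated with a short simulation; the mean photon number of 10 below is an arbitrary choice:

```python
# Sketch of Poissonian photon statistics for a coherent state: for an
# ideal single-frequency laser well above threshold, the photon-number
# variance equals the mean. Sampled with Knuth's Poisson algorithm.

import math
import random

random.seed(0)  # reproducible run

def poisson_sample(lam: float) -> int:
    """Draw one Poisson-distributed photon count (Knuth's algorithm)."""
    threshold = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= random.random()
        if p <= threshold:
            return k
        k += 1

mean_photons = 10.0
counts = [poisson_sample(mean_photons) for _ in range(20_000)]

mean = sum(counts) / len(counts)
var = sum((n - mean) ** 2 for n in counts) / len(counts)
print(f"mean = {mean:.2f}, variance = {var:.2f}")  # both close to 10
```

A measured variance below the mean (sub-Poissonian statistics) would signal nonclassical light such as the squeezed states mentioned above.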

Atoms are considered as quantum mechanical oscillators with a discrete energy spectrum, with the transitions between the energy eigenstates being driven by the absorption or emission of light according to Einstein's theory.

For solid state matter, one uses the energy band models of solid state physics. This is important for understanding how light is detected by solid-state devices, commonly used in experiments.

Quantum electronics

Quantum electronics is a term that was used mainly between the 1950s and 1970s to denote the area of physics dealing with the effects of quantum mechanics on the behavior of electrons in matter, together with their interactions with photons. Today, it is rarely considered a sub-field in its own right, and it has been absorbed by other fields. Solid state physics regularly takes quantum mechanics into account, and is usually concerned with electrons. Specific applications of quantum mechanics in electronics are researched within semiconductor physics. The term also encompassed the basic processes of laser operation, which is today studied as a topic in quantum optics. Usage of the term overlapped early work on the quantum Hall effect and quantum cellular automata.


Chelation therapy

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Chelation_therapy
 
Chelation therapy
Two molecules of deferasirox, an orally administered chelator, binding iron. Deferasirox is used in the treatment of transfusional iron overload in people with thalassemia.

Chelation therapy is a medical procedure that involves the administration of chelating agents to remove heavy metals from the body. Chelation therapy has a long history of use in clinical toxicology and remains in use for some very specific medical treatments. It is administered under careful medical supervision due to various inherent risks: weak chelating agents can unbind from metals before elimination, mobilizing mercury and other metals through the brain and other parts of the body and exacerbating existing damage. To avoid such mobilization, some practitioners of chelation use strong chelators, such as selenium, taken at low doses over a long period of time.

Chelation therapy must be administered with care as it has a number of possible side effects, including death. In response to increasing use of chelation therapy as alternative medicine and in circumstances in which the therapy should not be used in conventional medicine, various health organizations have confirmed that medical evidence does not support the effectiveness of chelation therapy for any purpose other than the treatment of heavy metal poisoning. Over-the-counter chelation products are not approved for sale in the United States.

Medical uses

Chelation therapy is the preferred medical treatment for metal poisoning, including acute mercury, iron (including in cases of sickle-cell disease and thalassemia), arsenic, lead, uranium, plutonium and other forms of toxic metal poisoning. The chelating agent may be administered intravenously, intramuscularly, or orally, depending on the agent and the type of poisoning.

Chelating agents

There are a variety of common chelating agents with differing affinities for different metals, physical characteristics, and biological mechanisms of action. For the most common forms of heavy metal intoxication, those involving lead, arsenic, or mercury, a number of chelating agents are available. Dimercaptosuccinic acid (DMSA) has been recommended by poison control centers around the world for the treatment of lead poisoning in children. Other chelating agents, such as 2,3-dimercaptopropanesulfonic acid (DMPS) and alpha lipoic acid (ALA), are used in conventional and alternative medicine. Some common chelating agents are ethylenediaminetetraacetic acid (EDTA), 2,3-dimercaptopropanesulfonic acid (DMPS), and thiamine tetrahydrofurfuryl disulfide (TTFD). Calcium-disodium EDTA and DMSA are approved by the Food and Drug Administration only for the removal of lead, while DMPS and TTFD are not FDA-approved. These drugs bind to heavy metals in the body and prevent them from binding to other agents; they are then excreted from the body. The chelating process also removes vital nutrients such as vitamins C and E, so these must be supplemented.

The German Environmental Agency (Umweltbundesamt) listed DMSA and DMPS as the two most useful and safe chelating agents available.

Common chelators:
  • Dimercaprol (British anti-Lewisite; BAL)
  • Dimercaptosuccinic acid (DMSA)
  • Dimercapto-propane sulfonate (DMPS): severe acute arsenic poisoning; severe acute mercury poisoning
  • Penicillamine
  • Ethylenediamine tetraacetic acid (calcium disodium versenate) (CaNa2-EDTA)
  • Deferoxamine, deferasirox and deferiprone

Side effects

When used properly in response to a diagnosis of harm from metal toxicity, side effects of chelation therapy include dehydration, low blood calcium, harm to kidneys, increased enzymes as would be detected in liver function tests, allergic reactions, and lowered levels of dietary elements. When administered inappropriately, there are the additional risks of hypocalcaemia (low calcium levels), neurodevelopmental disorders, and death.

History

Chelation therapy can be traced back to the early 1930s, when Ferdinand Münz, a German chemist working for I.G. Farben, first synthesized ethylenediaminetetraacetic acid (EDTA). Münz was looking for a replacement for citric acid as a water softener. Chelation therapy itself began during World War II when chemists at the University of Oxford searched for an antidote for lewisite, an arsenic-based chemical weapon. The chemists learned that EDTA was particularly effective in treating lead poisoning.

Following World War II, chelation therapy was used to treat workers who had painted United States naval vessels with lead-based paints. In the 1950s, Norman Clarke, Sr. was treating workers at a battery factory for lead poisoning when he noticed that some of his patients had improved angina pectoris following chelation therapy. Clarke subsequently administered chelation therapy to patients with angina pectoris and other occlusive vascular disease and published his findings in The American Journal of the Medical Sciences in December 1956. He hypothesized that "EDTA could dissolve disease-causing plaques in the coronary systems of human beings." In a series of 283 patients treated by Clarke et al. from 1956 to 1960, 87% showed improvement in their symptomatology. Other early medical investigators made similar observations of EDTA's role in the treatment of cardiovascular disease (Bechtel, 1956; Bessman, 1957; Perry, 1961; Szekely, 1963; Wenig, 1958; and Wilder, 1962).

In 1973, a group of practicing physicians created the Academy of Medical Preventics (now the American College for Advancement in Medicine). The academy trains and certifies physicians in the safe administration of chelation therapy. Members of the academy continued to use EDTA therapy for the treatment of vascular disease and developed safer administration protocols.

In the 1960s, BAL was modified into DMSA, a related dithiol with far fewer side effects. DMSA quickly replaced both BAL and EDTA as the primary treatment for lead, arsenic and mercury poisoning in the United States. Esters of DMSA have been developed which are reportedly more effective; for example, the monoisoamyl ester (MiADMSA) is reportedly more effective than DMSA at clearing mercury and cadmium. Research in the former Soviet Union led to the introduction of DMPS, another dithiol, as a mercury-chelating agent. The Soviets also introduced ALA, which is transformed by the body into the dithiol dihydrolipoic acid, a mercury- and arsenic-chelating agent. DMPS has experimental status in the United States, while ALA is a common nutritional supplement.

Since the 1970s, iron chelation therapy has been used as an alternative to regular phlebotomy to treat excess iron stores in people with haemochromatosis. Other chelating agents have been discovered. They all function by making several chemical bonds with metal ions, thus rendering them much less chemically reactive. The resulting complex is water-soluble, allowing it to enter the bloodstream and be excreted harmlessly.

Calcium-disodium EDTA chelation has been studied by the U.S. National Center for Complementary and Alternative Medicine for treating coronary disease. In 1998, the U.S. Federal Trade Commission (FTC) pursued the American College for Advancement in Medicine (ACAM), an organization that promotes "complementary, alternative and integrative medicine" over the claims made regarding the treatment of atherosclerosis in advertisements for EDTA chelation therapy. The FTC concluded that there was a lack of scientific studies to support these claims and that the statements by the ACAM were false. In 1999, the ACAM agreed to stop presenting chelation therapy as effective in treating heart disease, avoiding legal proceedings. In 2010 the U.S. Food and Drug Administration (FDA) warned companies who sold over-the-counter (OTC) chelation products and stated that such "products are unapproved drugs and devices and that it is a violation of federal law to make unproven claims about these products. There are no FDA-approved OTC chelation products."

Society and culture

In 1998, the U.S. Federal Trade Commission (FTC) charged that the web site of the American College for Advancement in Medicine (ACAM) and a brochure they published had made false or unsubstantiated claims. In December 1998, the FTC announced that it had secured a consent agreement barring ACAM from making unsubstantiated advertising claims that chelation therapy is effective against atherosclerosis or any other disease of the circulatory system.

In August 2005, doctor error led to the death of a five-year-old boy with autism who was undergoing chelation therapy. Others, including a three-year-old nonautistic girl and a nonautistic adult, have died while undergoing chelation therapy. These deaths were due to cardiac arrest caused by hypocalcemia during chelation therapy. In two of the cases hypocalcemia appears to have been caused by the administration of Na2EDTA (disodium EDTA) and in the third case the type of EDTA was unknown. Only the three-year-old girl had been found to have an elevated blood lead level and resulting low iron levels and anemia, which is the conventional medical cause for administration of chelation therapy. According to protocol, EDTA should not be used in the treatment of children. More than 30 deaths have been recorded in association with IV-administered disodium EDTA since the 1970s.

Use in alternative medicine

In alternative medicine, some practitioners claim chelation therapy can treat a variety of ailments, including heart disease and autism. The use of chelation therapy by alternative medicine practitioners for behavioral and other disorders is considered pseudoscientific; there is no proof that it is effective. Chelation therapy prior to heavy metal testing can artificially raise urinary heavy metal concentrations ("provoked" urine testing) and lead to inappropriate and unnecessary treatment. The American College of Medical Toxicology and the American Academy of Clinical Toxicology warn the public that chelating drugs used in chelation therapy may have serious side effects, including liver and kidney damage, blood pressure changes, allergies and in some cases even death of the patient.

Cancer

The American Cancer Society says of chelation therapy: "Available scientific evidence does not support claims that it is effective for treating other conditions such as cancer. Chelation therapy can be toxic and has the potential to cause kidney damage, irregular heartbeat, and even death."

Cardiovascular disease

According to the findings of a 1997 systematic review, EDTA chelation therapy is not effective as a treatment for coronary artery disease and this use is not approved in the United States by the US Food and Drug Administration (FDA).

The American Heart Association stated in 1997 that there is "no scientific evidence to demonstrate any benefit from this form of therapy." The United States Food and Drug Administration (FDA), the National Institutes of Health (NIH) and the American College of Cardiology "all agree with the American Heart Association" that "there have been no adequate, controlled, published scientific studies using currently approved scientific methodology to support this therapy for cardiovascular disease." They speculate that any improvement among heart patients undergoing chelation therapy can be attributed to the placebo effect and generally recommended lifestyle changes such as "quitting smoking, losing weight, eating more fruits and vegetables, avoiding foods high in saturated fats and exercising regularly." They also are concerned that patients could put off proven treatments for heart disease like drugs or surgery.

A systematic review published in 2005 found that controlled scientific studies did not support chelation therapy for heart disease. It found that very small trials and uncontrolled descriptive studies have reported benefits while larger controlled studies have found results no better than placebo.

In 2009, the Montana Board of Medical Examiners issued a position paper concluding that "chelation therapy has no proven efficacy in the treatment of cardiovascular disease, and in some patients could be injurious."

The U.S. National Center for Complementary and Alternative Medicine (NCCAM) conducted a trial of chelation therapy's safety and efficacy for patients with coronary artery disease. NCCAM Director Stephen E. Straus cited the "widespread use of chelation therapy in lieu of established therapies, the lack of adequate prior research to verify its safety and effectiveness, and the overall impact of coronary artery disease" as factors motivating the trial. The study has been criticized by some who said it was unethical, unnecessary and dangerous, and that multiple studies conducted prior to it demonstrated that the treatment provides no benefit.

The US National Center for Complementary and Alternative Medicine began the Trial to Assess Chelation Therapy (TACT) in 2003. Patient enrollment was to be completed around July 2009 with final completion around July 2010, but enrollment in the trial was voluntarily suspended by organizers in September 2008 after the Office for Human Research Protections began investigating complaints such as inadequate informed consent. Additionally, the trial was criticized for lacking prior Phase I and II studies, and critics summarized previous controlled trials as having "found no evidence that chelation is superior to placebo for treatment of CAD or PVD." The same critics argued that methodological flaws and lack of prior probability made the trial "unethical, dangerous, pointless, and wasteful." The American College of Cardiology supported the trial and research to explore whether chelation therapy was effective in treating heart disease. Evidence of insurance fraud and other felony convictions among (chelation proponent) investigators further undermined the credibility of the trial.

The final results of TACT were published in November 2012. The authors concluded that disodium EDTA chelation "modestly" reduced the risk of adverse cardiovascular outcomes among stable patients with a history of myocardial infarction. The study also showed a "marked" reduction in cardiovascular events in diabetic patients treated with EDTA chelation. An editorial published in the Journal of the American Medical Association said that "the study findings may provide novel hypotheses that merit further evaluation to help understand the pathophysiology of secondary prevention of vascular disease." Critics of the study characterized the study as showing no support for the use of chelation therapy in coronary heart disease, particularly the claims to reduce the need for coronary artery bypass grafting (CABG, pronounced "cabbage").

Autism

Quackwatch says that autism is one of the conditions for which chelation therapy has been falsely promoted as effective, and practitioners falsify diagnoses of metal poisoning to trick parents into having their children undergo the risky process. As of 2008, up to 7% of children with autism worldwide had been subjected to chelation therapy. According to the U.S. Centers for Disease Control and Prevention (CDC), the deaths of two children in 2005 were caused by the administration of chelation treatments; one of them had autism. Parents either have a doctor use a treatment for lead poisoning, or buy unregulated supplements, in particular DMSA and lipoic acid. Aspies For Freedom, an autism rights organization, considers this use of chelation therapy unethical and potentially dangerous. There is little to no credible scientific research that supports the use of chelation therapy for the effective treatment of autism.

Protoplanet

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Protoplanet

A surviving protoplanet, Vesta

A protoplanet is a large planetary embryo that originated within a protoplanetary disk and has undergone internal melting to produce a differentiated interior. Protoplanets are thought to form out of kilometer-sized planetesimals that gravitationally perturb each other's orbits and collide, gradually coalescing into the dominant planets.

The planetesimal hypothesis

A planetesimal is an object formed from dust, rock, and other materials, measuring from meters to hundreds of kilometers in size. According to the Chamberlin–Moulton planetesimal hypothesis and the theories of Viktor Safronov, a protoplanetary disk of materials such as gas and dust would orbit a star early in the formation of a planetary system. The action of gravity on such materials forms larger and larger chunks until some reach the size of planetesimals.

It is thought that the collisions of planetesimals created a few hundred larger planetary embryos. Over the course of hundreds of millions of years, they collided with one another. The exact sequence whereby planetary embryos collided to assemble the planets is not known, but it is thought that initial collisions would have replaced the first "generation" of embryos with a second generation consisting of fewer but larger embryos. These in their turn would have collided to create a third generation of fewer but even larger embryos. Eventually, only a handful of embryos were left, which collided to complete the assembly of the planets proper.
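The generational picture above can be caricatured with a toy mass-merging model. This is a sketch only: it ignores orbits, collision physics, and fragmentation entirely, and the starting count (400 embryos) and mass (10²² kg) are illustrative values, not figures from the text.

```python
import random

def merge_generation(embryos):
    """Pairwise-merge embryos into the next, smaller generation.

    Toy model only: real accretion depends on orbits, relative
    velocities, and fragmentation, none of which is modeled here.
    """
    random.shuffle(embryos)
    merged = [a + b for a, b in zip(embryos[::2], embryos[1::2])]
    if len(embryos) % 2:          # an unpaired embryo survives unchanged
        merged.append(embryos[-1])
    return merged

# A few hundred embryos of ~1e22 kg each (illustrative assumption)
generation = [1e22] * 400
while len(generation) > 4:        # stop at a handful of planet-sized bodies
    generation = merge_generation(generation)

# Each pass roughly halves the count while conserving total mass,
# mimicking "fewer but larger embryos" generation by generation.
```

Each call produces the next "generation": the count falls by about half while the total mass is conserved, which is the essence of the hierarchical assembly described above.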

Early protoplanets had more radioactive elements, the quantity of which has been reduced over time due to radioactive decay. Heating due to radioactivity, impact, and gravitational pressure melted parts of protoplanets as they grew toward being planets. In melted zones the heavier elements sank to the center, whereas lighter elements rose to the surface. Such a process is known as planetary differentiation. The composition of some meteorites shows that differentiation took place in some asteroids.

Evidence in the Solar System

In the case of the Solar System, it is thought that the collisions of planetesimals created a few hundred planetary embryos. Such embryos were similar to Ceres and Pluto with masses of about 10²² to 10²³ kg and were a few thousand kilometers in diameter.

According to the giant impact hypothesis the Moon formed from a colossal impact of a hypothetical protoplanet called Theia with Earth, early in the Solar System's history.

In the inner Solar System, the three protoplanets to survive more-or-less intact are the asteroids Ceres, Pallas, and Vesta. Psyche is likely the survivor of a violent hit-and-run with another object that stripped off the outer, rocky layers of a protoplanet. The asteroid Metis may also have a similar origin history to that of Psyche. The asteroid Lutetia also has characteristics that resemble a protoplanet. Kuiper-belt dwarf planets have also been referred to as protoplanets. Because iron meteorites have been found on Earth, it is deemed likely that there once were other metal-cored protoplanets in the asteroid belt that since have been disrupted and that are the source of these meteorites.

Observed protoplanets

In February 2013 astronomers made the first direct observation of a candidate protoplanet forming in a disk of gas and dust around a distant star, HD 100546. Subsequent observations suggest that several protoplanets may be present in the gas disk.

Another protoplanet, AB Aur b, may be in the earliest observed stage of formation for a gas giant. It is located in the gas disk of the star AB Aurigae. AB Aur b is among the largest exoplanets identified, and orbits its star at about three times the distance of Neptune from the Sun. Observations of AB Aur b may challenge conventional thinking about how planets are formed. It was viewed by the Subaru Telescope and the Hubble Space Telescope.

Rings, gaps, spirals, dust concentrations and shadows in protoplanetary disks could be caused by protoplanets. These structures are not completely understood and are therefore not regarded as proof of the presence of a protoplanet.

One emerging way to study the effect of protoplanets on the disk is molecular-line observations of protoplanetary disks in the form of gas velocity maps. HD 97048 b is the first protoplanet detected by disk kinematics, in the form of a kink in the gas velocity map. Other disks, such as those around IM Lupi and HD 163296, show similar kinks in their gas velocity maps. Another candidate exoplanet, called HD 169142 b, was first directly imaged in 2014. HD 169142 b additionally shows multiple lines of evidence of being a protoplanet.

Tidal acceleration

From Wikipedia, the free encyclopedia
A picture of Earth and the Moon from Mars. The presence of the Moon (which has about 1/81 the mass of Earth) is slowing Earth's rotation and extending the day by a little under 2 milliseconds every 100 years.

Tidal acceleration is an effect of the tidal forces between an orbiting natural satellite (e.g. the Moon) and the primary planet that it orbits (e.g. Earth). The acceleration causes a gradual recession of a satellite in a prograde orbit away from the primary, and a corresponding slowdown of the primary's rotation. The process eventually leads to tidal locking, usually of the smaller body first, and later the larger body (e.g. theoretically with Earth in 50 billion years). The Earth–Moon system is the best-studied case.

The similar process of tidal deceleration occurs for satellites that have an orbital period that is shorter than the primary's rotational period, or that orbit in a retrograde direction.

The naming is somewhat confusing, because the average speed of the satellite relative to the body it orbits is decreased as a result of tidal acceleration, and increased as a result of tidal deceleration. This conundrum occurs because a positive acceleration at one instant causes the satellite to loop farther outward during the next half orbit, decreasing its average speed. A continuing positive acceleration causes the satellite to spiral outward with a decreasing speed and angular rate, resulting in a negative acceleration of angle. A continuing negative acceleration has the opposite effect.

Earth–Moon system

Discovery history of the secular acceleration

Edmond Halley was the first to suggest, in 1695, that the mean motion of the Moon was apparently getting faster, by comparison with ancient eclipse observations, but he gave no data. (It was not yet known in Halley's time that what is actually occurring includes a slowing-down of Earth's rate of rotation: see also Ephemeris time – History. When measured as a function of mean solar time rather than uniform time, the effect appears as a positive acceleration.) In 1749 Richard Dunthorne confirmed Halley's suspicion after re-examining ancient records, and produced the first quantitative estimate for the size of this apparent effect: a centurial rate of +10″ (arcseconds) in lunar longitude, which is a surprisingly accurate result for its time, not differing greatly from values assessed later, e.g. in 1786 by de Lalande, and to compare with values from about 10″ to nearly 13″ being derived about a century later.

Pierre-Simon Laplace produced in 1786 a theoretical analysis giving a basis on which the Moon's mean motion should accelerate in response to perturbational changes in the eccentricity of the orbit of Earth around the Sun. Laplace's initial computation accounted for the whole effect, thus seeming to tie up the theory neatly with both modern and ancient observations.

However, in 1854, John Couch Adams caused the question to be re-opened by finding an error in Laplace's computations: it turned out that only about half of the Moon's apparent acceleration could be accounted for on Laplace's basis by the change in Earth's orbital eccentricity. Adams' finding provoked a sharp astronomical controversy that lasted some years, but the correctness of his result, agreed upon by other mathematical astronomers including C. E. Delaunay, was eventually accepted. The question depended on correct analysis of the lunar motions, and received a further complication with another discovery, around the same time, that another significant long-term perturbation that had been calculated for the Moon (supposedly due to the action of Venus) was also in error, was found on re-examination to be almost negligible, and practically had to disappear from the theory. A part of the answer was suggested independently in the 1860s by Delaunay and by William Ferrel: tidal retardation of Earth's rotation rate was lengthening the unit of time and causing a lunar acceleration that was only apparent.

It took some time for the astronomical community to accept the reality and the scale of tidal effects. But eventually it became clear that three effects are involved, when measured in terms of mean solar time. Beside the effects of perturbational changes in Earth's orbital eccentricity, as found by Laplace and corrected by Adams, there are two tidal effects (a combination first suggested by Emmanuel Liais). First there is a real retardation of the Moon's angular rate of orbital motion, due to tidal exchange of angular momentum between Earth and Moon. This increases the Moon's angular momentum around Earth (and moves the Moon to a higher orbit with a lower orbital speed). Secondly, there is an apparent increase in the Moon's angular rate of orbital motion (when measured in terms of mean solar time). This arises from Earth's loss of angular momentum and the consequent increase in length of day.

A diagram of the Earth–Moon system showing how the tidal bulge is pushed ahead by Earth's rotation. This offset bulge exerts a net torque on the Moon, boosting it while slowing Earth's rotation.

Effects of Moon's gravity

Because the Moon's mass is a considerable fraction of that of Earth (about 1:81), the two bodies can be regarded as a double planet system, rather than as a planet with a satellite. The plane of the Moon's orbit around Earth lies close to the plane of Earth's orbit around the Sun (the ecliptic), rather than in the plane of Earth's rotation (the equator) as is usually the case with planetary satellites. The mass of the Moon is sufficiently large, and it is sufficiently close, to raise tides in the matter of Earth. Foremost among such matter, the water of the oceans bulges out both towards and away from the Moon. If the material of the Earth responded immediately, there would be a bulge directly toward and away from the Moon. In the solid Earth there is a delayed response due to the dissipation of tidal energy. The case for the oceans is more complicated, but there is also a delay associated with the dissipation of energy, since the Earth rotates at a faster rate than the Moon's orbital angular velocity. The delay in the responses causes the tidal bulge to be carried forward. Consequently, the line through the two bulges is tilted with respect to the Earth–Moon direction, exerting a torque between the Earth and the Moon. This torque boosts the Moon in its orbit and slows the rotation of Earth.

As a result of this process, the mean solar day, which has to be 86,400 equal seconds, is actually getting longer when measured in SI seconds with stable atomic clocks. (The SI second, when adopted, was already a little shorter than the current value of the second of mean solar time.) The small difference accumulates over time, which leads to an increasing difference between our clock time (Universal Time) on the one hand, and International Atomic Time and ephemeris time on the other hand: see ΔT. This led to the introduction of the leap second in 1972 to compensate for differences in the bases for time standardization.

In addition to the effect of the ocean tides, there is also a tidal acceleration due to flexing of Earth's crust, but this accounts for only about 4% of the total effect when expressed in terms of heat dissipation.

If other effects were ignored, tidal acceleration would continue until the rotational period of Earth matched the orbital period of the Moon. At that time, the Moon would always be overhead of a single fixed place on Earth. Such a situation already exists in the Pluto–Charon system. However, the slowdown of Earth's rotation is not occurring fast enough for the rotation to lengthen to a month before other effects make this irrelevant: about 1 to 1.5 billion years from now, the continual increase of the Sun's radiation will likely cause Earth's oceans to vaporize, removing the bulk of the tidal friction and acceleration. Even without this, the slowdown to a month-long day would still not have been completed by 4.5 billion years from now when the Sun will probably evolve into a red giant and likely destroy both Earth and the Moon.

Tidal acceleration is one of the few examples in the dynamics of the Solar System of a so-called secular perturbation of an orbit, i.e. a perturbation that continuously increases with time and is not periodic. Up to a high order of approximation, mutual gravitational perturbations between major or minor planets only cause periodic variations in their orbits, that is, parameters oscillate between maximum and minimum values. The tidal effect gives rise to a quadratic term in the equations, which leads to unbounded growth. In the mathematical theories of the planetary orbits that form the basis of ephemerides, quadratic and higher order secular terms do occur, but these are mostly Taylor expansions of very long time periodic terms. The reason that tidal effects are different is that unlike distant gravitational perturbations, friction is an essential part of tidal acceleration, and leads to permanent loss of energy from the dynamic system in the form of heat. In other words, we do not have a Hamiltonian system here.

Angular momentum and energy

The gravitational torque between the Moon and the tidal bulge of Earth causes the Moon to be constantly promoted to a slightly higher orbit and Earth to be decelerated in its rotation. As in any physical process within an isolated system, total energy and angular momentum are conserved. Effectively, energy and angular momentum are transferred from the rotation of Earth to the orbital motion of the Moon (however, most of the energy lost by Earth (−3.78 TW) is converted to heat by frictional losses in the oceans and their interaction with the solid Earth, and only about 1/30th (+0.121 TW) is transferred to the Moon). The Moon moves farther away from Earth (+38.30±0.08 mm/yr), so its potential energy, which is still negative (in Earth's gravity well), increases, i.e. becomes less negative. It stays in orbit, and from Kepler's 3rd law it follows that its average angular velocity actually decreases, so the tidal action on the Moon actually causes an angular deceleration, i.e. a negative acceleration (−25.97±0.05"/century²) of its rotation around Earth. The actual speed of the Moon also decreases. Although its kinetic energy decreases, its potential energy increases by a larger amount, i.e. ΔE_p = −2ΔE_k (virial theorem).
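The ~+0.121 TW gained by the lunar orbit can be cross-checked against the recession rate. For a Kepler orbit the total orbital energy is E = −GMm/(2a), so dE/dt = (GMm/(2a²)) da/dt. A sketch of that check, using standard values for Earth's gravitational parameter, the lunar mass, and the Earth–Moon distance (assumed constants, not figures from the text):

```python
# Does the observed recession rate (+38.30 mm/yr) account for the
# ~+0.121 TW transferred to the Moon's orbit?
GM_EARTH = 3.986004e14        # m^3/s^2, Earth's gravitational parameter
M_MOON   = 7.342e22           # kg, lunar mass
A        = 3.844e8            # m, mean Earth-Moon distance
YEAR     = 3.156e7            # s per year

da_dt = 38.30e-3 / YEAR       # m/s, lunar recession rate

# dE/dt = (G*M*m / (2*a^2)) * da/dt  (from E = -G*M*m / (2a))
power_to_moon = GM_EARTH * M_MOON / (2 * A**2) * da_dt   # watts
print(f"{power_to_moon / 1e12:.3f} TW")   # ≈ 0.12 TW, matching the text
```

The result lands within rounding of the +0.121 TW quoted above, illustrating how the recession rate and the power transfer are two faces of the same energy budget.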

The rotational angular momentum of Earth decreases and consequently the length of the day increases. The net tide raised on Earth by the Moon is dragged ahead of the Moon by Earth's much faster rotation. Tidal friction is required to drag and maintain the bulge ahead of the Moon, and it dissipates the excess energy of the exchange of rotational and orbital energy between Earth and the Moon as heat. If the friction and heat dissipation were not present, the Moon's gravitational force on the tidal bulge would rapidly (within two days) bring the tide back into synchronization with the Moon, and the Moon would no longer recede. Most of the dissipation occurs in a turbulent bottom boundary layer in shallow seas such as the European Shelf around the British Isles, the Patagonian Shelf off Argentina, and the Bering Sea.

The dissipation of energy by tidal friction averages about 3.64 terawatts of the 3.78 terawatts extracted, of which 2.5 terawatts are from the principal M2 lunar component and the remainder from other components, both lunar and solar.
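As a quick sanity check, the power figures quoted in this and the previous paragraph nearly balance. A minimal bookkeeping sketch (the small mismatch reflects rounding in the published values):

```python
# Power budget of lunar and solar tides on Earth, in terawatts,
# using only the figures quoted in the surrounding text.
extracted  = 3.78    # TW lost by Earth's rotation
to_moon    = 0.121   # TW transferred to the Moon's orbit
dissipated = 3.64    # TW average frictional heating
m2_share   = 2.5     # TW from the principal M2 lunar component

remainder = extracted - to_moon        # left over to dissipate as heat
print(f"{remainder:.2f} TW")           # ≈ 3.66 TW, close to the 3.64 quoted
print(f"{dissipated - m2_share:.2f} TW from non-M2 components")
```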

An equilibrium tidal bulge does not really exist on Earth because the continents do not allow this mathematical solution to take place. Oceanic tides actually rotate around the ocean basins as vast gyres around several amphidromic points where no tide exists. The Moon pulls on each individual undulation as Earth rotates—some undulations are ahead of the Moon, others are behind it, whereas still others are on either side. The "bulges" that actually do exist for the Moon to pull on (and which pull on the Moon) are the net result of integrating the actual undulations over all the world's oceans. Earth's net (or equivalent) equilibrium tide has an amplitude of only 3.23 cm, which is totally swamped by oceanic tides that can exceed one metre.

Historical evidence

This mechanism has been working for 4.5 billion years, since oceans first formed on Earth, but less so at times when much or most of the water was ice. There is geological and paleontological evidence that Earth rotated faster and that the Moon was closer to Earth in the remote past. Tidal rhythmites are alternating layers of sand and silt laid down offshore from estuaries having great tidal flows. Daily, monthly and seasonal cycles can be found in the deposits. This geological record is consistent with these conditions 620 million years ago: the day was 21.9±0.4 hours, and there were 13.1±0.1 synodic months/year and 400±7 solar days/year. The average recession rate of the Moon between then and now has been 2.17±0.31 cm/year, which is about half the present rate. The present high rate may be due to near resonance between natural ocean frequencies and tidal frequencies.

Analysis of layering in fossil mollusc shells from 70 million years ago, in the Late Cretaceous period, shows that there were 372 days a year, and thus that the day was about 23.5 hours long then.

Quantitative description of the Earth–Moon case

The motion of the Moon can be followed with an accuracy of a few centimeters by lunar laser ranging (LLR). Laser pulses are bounced off corner-cube prism retroreflectors on the surface of the Moon, emplaced during the Apollo missions of 1969 to 1972 and by Lunokhod 1 in 1970 and Lunokhod 2 in 1973. Measuring the return time of the pulse yields a very accurate measure of the distance. These measurements are fitted to the equations of motion. This yields numerical values for the Moon's secular deceleration, i.e. negative acceleration, in longitude and the rate of change of the semimajor axis of the Earth–Moon ellipse. For the period 1970–2015, the results are:

−25.97 ± 0.05 arcsecond/century² in ecliptic longitude
+38.30 ± 0.08 mm/yr in the mean Earth–Moon distance
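These two numbers are not independent: Kepler's third law (n²a³ = GM) ties them together, since differentiating gives dn/dt = −(3/2)(n/a) da/dt. A rough consistency check, assuming standard values for the mean lunar distance and the sidereal month (not taken from the text):

```python
import math

# Derive the angular deceleration from the measured recession rate.
A                = 3.844e8        # m, mean Earth-Moon distance (assumed)
P_SIDEREAL       = 27.321661      # days, sidereal month (assumed)
ARCSEC_PER_RAD   = 206264.806
DAYS_PER_CENTURY = 36525.0

n = 2 * math.pi / P_SIDEREAL            # rad/day, lunar mean motion
da_over_a = 38.30e-3 * 100 / A          # fractional change in a per century

dn_dt = -1.5 * n * da_over_a            # rad/day per century
dn_dt_arcsec = dn_dt * ARCSEC_PER_RAD * DAYS_PER_CENTURY  # arcsec/century^2
print(f"{dn_dt_arcsec:.1f}")            # ≈ -25.9, matching -25.97 above
```

The recession rate alone reproduces the measured secular deceleration to better than one percent.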

This is consistent with results from satellite laser ranging (SLR), a similar technique applied to artificial satellites orbiting Earth, which yields a model for the gravitational field of Earth, including that of the tides. The model accurately predicts the changes in the motion of the Moon.

Finally, ancient observations of solar eclipses give fairly accurate positions for the Moon at those moments. Studies of these observations give results consistent with the value quoted above.

The other consequence of tidal acceleration is the deceleration of the rotation of Earth. The rotation of Earth is somewhat erratic on all time scales (from hours to centuries) due to various causes. The small tidal effect cannot be observed in a short period, but the cumulative effect on Earth's rotation as measured with a stable clock (ephemeris time, International Atomic Time) of a shortfall of even a few milliseconds every day becomes readily noticeable in a few centuries. Since some event in the remote past, more days and hours have passed, as measured in full rotations of Earth (Universal Time), than would be measured by stable clocks calibrated to the present, longer length of the day (ephemeris time). This is known as ΔT. Recent values can be obtained from the International Earth Rotation and Reference Systems Service (IERS). A table of the actual length of the day in the past few centuries is also available.

From the observed change in the Moon's orbit, the corresponding change in the length of the day can be computed (where "cy" means "century"):

+2.4 ms/d/century or +88 s/cy² or +66 ns/d².

However, from historical records over the past 2700 years the following average value is found:

+1.72 ± 0.03 ms/d/century or +63 s/cy² or +47 ns/d². (i.e. an accelerating cause is responsible for −0.7 ms/d/cy)

By integrating twice over time, the corresponding cumulative value is a parabola with a coefficient of T² (time in centuries squared) of (1/2) × 63 s/cy²:

ΔT = (1/2) × 63 s/cy² × T² ≈ +31 s/cy² × T².
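The double integration can be sketched numerically; the quoted +31 s/cy² coefficient is the rounded value of (1/2) × 63:

```python
# A constant slowdown of the day (+1.72 ms/day per century, i.e.
# +63 s/cy^2) accumulates into a parabolic clock offset Delta-T.
RATE = 63.0          # s/cy^2, historical rate quoted above

def delta_t(T):
    """Cumulative clock difference (seconds) T centuries from the epoch."""
    return 0.5 * RATE * T**2

print(delta_t(1))    # 31.5 s after one century
print(delta_t(10))   # 3150 s (~52 minutes) after a millennium
```

This is why ΔT, negligible over a human lifetime, becomes substantial when comparing modern clocks with eclipse records from antiquity.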

Opposing the tidal deceleration of Earth is a mechanism that is in fact accelerating the rotation. Earth is not a sphere, but rather an ellipsoid that is flattened at the poles. SLR has shown that this flattening is decreasing. The explanation is that during the ice age large masses of ice collected at the poles, and depressed the underlying rocks. The ice mass started disappearing over 10,000 years ago, but Earth's crust is still not in hydrostatic equilibrium and is still rebounding (the relaxation time is estimated to be about 4,000 years). As a consequence, the polar diameter of Earth increases, and the equatorial diameter decreases (Earth's volume must remain the same). This means that mass moves closer to the rotation axis of Earth, and that Earth's moment of inertia decreases. This process alone leads to an increase of the rotation rate (the phenomenon of a spinning figure skater who spins ever faster as they retract their arms). From the observed change in the moment of inertia the acceleration of rotation can be computed: the average value over the historical period must have been about −0.6 ms/d/century. This largely explains the historical observations.
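The skater argument follows from conservation of angular momentum, L = Iω: a fractional decrease in the moment of inertia shortens the day by the same fraction. In the sketch below the fractional rate is a hypothetical value chosen to reproduce the ~−0.6 ms figure quoted above; it is an illustration, not a measurement:

```python
# With L = I * omega conserved, d(omega)/omega = -dI/I, so the
# length of day scales in proportion to the moment of inertia.
DAY = 86400.0                          # s, nominal length of day
dI_over_I_per_cy = -6.9e-9             # hypothetical fractional change/century

lod_change = DAY * dI_over_I_per_cy    # change in day length per century
print(f"{lod_change * 1e3:.2f} ms/cy") # ≈ -0.60 ms/cy
```

A fractional shrinkage of the moment of inertia of only a few parts per billion per century is thus enough to offset most of the discrepancy between the tidal prediction and the historical record.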

Other cases of tidal acceleration

Most natural satellites of the planets undergo tidal acceleration to some degree (usually small), except for the two classes of tidally decelerated bodies. In most cases, however, the effect is small enough that even after billions of years most satellites will not actually be lost. The effect is probably most pronounced for Mars's second moon Deimos, which may become an Earth-crossing asteroid after it leaks out of Mars's grip. The effect also arises between different components in a binary star.

Tidal deceleration

In tidal acceleration (1), a satellite orbits in the same direction as (but slower than) its parent body's rotation. The nearer tidal bulge (red) attracts the satellite more than the farther bulge (blue), imparting a net positive force (dotted arrows showing forces resolved into their components) in the direction of orbit, lifting it into a higher orbit.
In tidal deceleration (2) with the rotation reversed, the net force opposes the direction of orbit, lowering it.

This comes in two varieties:

  1. Fast satellites: Some inner moons of the giant planets, as well as Mars's moon Phobos, orbit within the synchronous orbit radius so that their orbital period is shorter than their planet's rotation. In other words, they orbit their planet faster than the planet rotates. In this case the tidal bulges raised by the moon on the planet lag behind the moon, and act to decelerate it in its orbit. The net effect is a decay of that moon's orbit as it gradually spirals towards the planet. The planet's rotation also speeds up slightly in the process. In the distant future these moons will strike the planet or cross within their Roche limit and be tidally disrupted into fragments. However, all such moons in the Solar System are very small bodies and the tidal bulges raised by them on the planet are also small, so the effect is usually weak and the orbit decays slowly. Some hypothesize that after the Sun becomes a red giant, its surface rotation will be much slower and it will cause tidal deceleration of any remaining planets.
  2. Retrograde satellites: All retrograde satellites experience tidal deceleration to some degree because their orbital motion and their planet's rotation are in opposite directions, causing restoring forces from their tidal bulges. A difference to the previous "fast satellite" case here is that the planet's rotation is also slowed down rather than sped up (angular momentum is still conserved because in such a case the values for the planet's rotation and the moon's revolution have opposite signs). The only satellite in the Solar System for which this effect is non-negligible is Neptune's moon Triton. All the other retrograde satellites are on distant orbits and tidal forces between them and the planet are negligible.
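Together with ordinary tidal acceleration, the two varieties amount to a simple sign rule on the satellite's orbital period relative to the primary's rotation. A toy classifier sketching that rule (period values below are approximate, and a negative period is used here as a convention to mark a retrograde orbit; eccentricity, inclination, and dissipation details are ignored):

```python
def tidal_evolution(planet_rotation_days, satellite_period_days):
    """Classify a satellite's tidal fate.

    Convention: a negative satellite period marks a retrograde orbit.
    """
    if satellite_period_days < 0:
        return "decelerated (retrograde): orbit decays"
    if satellite_period_days < planet_rotation_days:
        return "decelerated (fast satellite): orbit decays"
    return "accelerated: orbit expands"

# Illustrative cases from the surrounding text (approximate periods):
print(tidal_evolution(0.997, 27.32))   # Moon: accelerated, spirals outward
print(tidal_evolution(1.026, 0.319))   # Phobos: orbits faster than Mars spins
print(tidal_evolution(0.671, -5.877))  # Triton: retrograde around Neptune
```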

Mercury and Venus are believed to have no satellites chiefly because any hypothetical satellite would have suffered deceleration long ago and crashed into the planets due to the very slow rotation speeds of both planets; in addition, Venus also has retrograde rotation.

Operator (computer programming)

From Wikipedia, the free encyclopedia https://en.wikipedia.org/wiki/Operator_(computer_programmin...