Tuesday, February 6, 2018

Thomas Kuhn

From Wikipedia, the free encyclopedia

Born: Thomas Samuel Kuhn, July 18, 1922, Cincinnati, Ohio, U.S.
Died: June 17, 1996 (aged 73), Cambridge, Massachusetts, U.S.
Alma mater: Harvard University
Era: 20th-century philosophy
Region: Western philosophy
School: Analytic; historical turn[1]
Main interests: Philosophy of science
Thomas Samuel Kuhn (/kuːn/; July 18, 1922 – June 17, 1996) was an American physicist, historian and philosopher of science whose controversial 1962 book The Structure of Scientific Revolutions was influential in both academic and popular circles, introducing the term paradigm shift, which has since become an English-language idiom.

Kuhn made several notable claims concerning the progress of scientific knowledge: that scientific fields undergo periodic "paradigm shifts" rather than progressing solely in a linear and continuous way; that these paradigm shifts open up new approaches to understanding that scientists would never have considered valid before; and that the notion of scientific truth, at any given moment, cannot be established by objective criteria alone but is defined by the consensus of a scientific community. Competing paradigms are frequently incommensurable; that is, they are competing and irreconcilable accounts of reality. Thus, our comprehension of science can never rely wholly upon "objectivity" alone. Science must account for subjective perspectives as well, since all objective conclusions are ultimately founded upon the subjective conditioning and worldview of its researchers and participants.

Life

Kuhn was born in Cincinnati, Ohio, to Samuel L. Kuhn, an industrial engineer, and Minette Stroock Kuhn, both Jewish. He graduated from The Taft School in Watertown, CT, in 1940, where he became aware of his serious interest in mathematics and physics. He obtained his BS degree in physics from Harvard University in 1943, where he also obtained MS and PhD degrees in physics in 1946 and 1949, respectively, under the supervision of John Van Vleck.[12] As he states in the first few pages of the preface to the second edition of The Structure of Scientific Revolutions, his three years of total academic freedom as a Harvard Junior Fellow were crucial in allowing him to switch from physics to the history and philosophy of science. He later taught a course in the history of science at Harvard from 1948 until 1956, at the suggestion of university president James Conant. After leaving Harvard, Kuhn taught at the University of California, Berkeley, in both the philosophy department and the history department, being named Professor of the History of Science in 1961. Kuhn interviewed and tape-recorded Danish physicist Niels Bohr the day before Bohr's death.[13] At Berkeley, he wrote and published (in 1962) his best known and most influential work:[14] The Structure of Scientific Revolutions. In 1964, he joined Princeton University as the M. Taylor Pyne Professor of Philosophy and History of Science. He served as the president of the History of Science Society from 1969 to 1970.[15] In 1979 he joined the Massachusetts Institute of Technology (MIT) as the Laurance S. Rockefeller Professor of Philosophy, remaining there until 1991. In 1994 Kuhn was diagnosed with lung cancer. He died in 1996.

Thomas Kuhn was married twice, first to Kathryn Muhs with whom he had three children, then to Jehane Barton Burns (Jehane R. Kuhn).

The Structure of Scientific Revolutions

The Structure of Scientific Revolutions (SSR) was originally printed as an article in the International Encyclopedia of Unified Science, published by the logical positivists of the Vienna Circle. In this book, Kuhn argued that science does not progress via a linear accumulation of new knowledge, but undergoes periodic revolutions, also called "paradigm shifts" (although he did not coin the phrase),[16] in which the nature of scientific inquiry within a particular field is abruptly transformed. In general, science is broken up into three distinct stages. Prescience, which lacks a central paradigm, comes first. This is followed by "normal science", when scientists attempt to enlarge the central paradigm by "puzzle-solving". Guided by the paradigm, normal science is extremely productive: "when the paradigm is successful, the profession will have solved problems that its members could scarcely have imagined and would never have undertaken without commitment to the paradigm".[17]

In regard to experimentation and collection of data with a view toward solving problems through the commitment to a paradigm, Kuhn states: “The operations and measurements that a scientist undertakes in the laboratory are not ‘the given’ of experience but rather ‘the collected with difficulty.’ They are not what the scientist sees—at least not before his research is well advanced and his attention focused. Rather, they are concrete indices to the content of more elementary perceptions, and as such they are selected for the close scrutiny of normal research only because they promise opportunity for the fruitful elaboration of an accepted paradigm. Far more clearly than the immediate experience from which they in part derive, operations and measurements are paradigm-determined. Science does not deal in all possible laboratory manipulations. Instead, it selects those relevant to the juxtaposition of a paradigm with the immediate experience that that paradigm has partially determined. As a result, scientists with different paradigms engage in different concrete laboratory manipulations.”[18]

During the period of normal science, the failure of a result to conform to the paradigm is seen not as refuting the paradigm, but as the mistake of the researcher, contra Popper's falsifiability criterion. As anomalous results build up, science reaches a crisis, at which point a new paradigm, which subsumes the old results along with the anomalous results into one framework, is accepted. This is termed revolutionary science.

In SSR, Kuhn also argues that rival paradigms are incommensurable—that is, it is not possible to understand one paradigm through the conceptual framework and terminology of another rival paradigm. For many critics, for example David Stove (Popper and After, 1982), this thesis seemed to entail that theory choice is fundamentally irrational: if rival theories cannot be directly compared, then one cannot make a rational choice as to which one is better. Whether Kuhn's views had such relativistic consequences is the subject of much debate; Kuhn himself denied the accusation of relativism in the third edition of SSR, and sought to clarify his views to avoid further misinterpretation. Freeman Dyson has quoted Kuhn as saying "I am not a Kuhnian!",[19] referring to the relativism that some philosophers have developed based on his work.

The enormous impact of Kuhn's work can be measured in the changes it brought about in the vocabulary of the philosophy of science: besides "paradigm shift", Kuhn popularized the word "paradigm" itself from a term used in certain forms of linguistics and the work of Georg Lichtenberg to its current broader meaning, coined the term "normal science" to refer to the relatively routine, day-to-day work of scientists working within a paradigm, and was largely responsible for the use of the term "scientific revolutions" in the plural, taking place at widely different periods of time and in different disciplines, as opposed to a single scientific revolution in the late Renaissance. The frequent use of the phrase "paradigm shift" has made scientists more aware of and in many cases more receptive to paradigm changes, so that Kuhn's analysis of the evolution of scientific views has by itself influenced that evolution.[citation needed]

Kuhn's work has been extensively used in social science; for instance, in the post-positivist/positivist debate within International Relations. Kuhn is credited as a foundational force behind the post-Mertonian sociology of scientific knowledge. Kuhn's work has also been used in the Arts and Humanities, such as by Matthew Edward Harris to distinguish between scientific and historical communities (such as political or religious groups): 'political-religious beliefs and opinions are not epistemologically the same as those pertaining to scientific theories'.[20] This is because would-be scientists' worldviews are changed through rigorous training and through engagement between what Kuhn calls 'exemplars' and the global paradigm. Kuhn's notions of paradigms and paradigm shifts have been influential in understanding the history of economic thought, for example the Keynesian revolution,[21] and in debates in political science.[22]

A defense Kuhn gives against the objection that his account of science from The Structure of Scientific Revolutions results in relativism can be found in an essay by Kuhn called "Objectivity, Value Judgment, and Theory Choice."[23] In this essay, he reiterates five criteria from the penultimate chapter of SSR that determine (or help determine, more properly) theory choice:
  1. Accurate – empirically adequate with experimentation and observation
  2. Consistent – internally consistent, but also externally consistent with other theories
  3. Broad Scope – a theory's consequences should extend beyond that which it was initially designed to explain
  4. Simple – the simplest explanation, principally similar to Occam's razor
  5. Fruitful – a theory should disclose new phenomena or new relationships among phenomena
He then goes on to show how, although these criteria admittedly determine theory choice, they are imprecise in practice and relative to individual scientists. According to Kuhn, "When scientists must choose between competing theories, two men fully committed to the same list of criteria for choice may nevertheless reach different conclusions."[23] For this reason, the criteria still are not "objective" in the usual sense of the word because individual scientists reach different conclusions with the same criteria due to valuing one criterion over another or even adding additional criteria for selfish or other subjective reasons. Kuhn then goes on to say, "I am suggesting, of course, that the criteria of choice with which I began function not as rules, which determine choice, but as values, which influence it."[23] Because Kuhn utilizes the history of science in his account of science, his criteria or values for theory choice are often understood as descriptive normative rules (or more properly, values) of theory choice for the scientific community rather than prescriptive normative rules in the usual sense of the word "criteria", although there are many varied interpretations of Kuhn's account of science.

Polanyi–Kuhn debate

Although they used different terminologies, both Kuhn and Michael Polanyi believed that scientists' subjective experiences made science a relativized discipline. Polanyi lectured on this topic for decades before Kuhn published The Structure of Scientific Revolutions.

Supporters of Polanyi charged Kuhn with plagiarism, as it was known that Kuhn attended several of Polanyi's lectures, and that the two men had debated endlessly over epistemology before either had achieved fame. The charge of plagiarism is peculiar, for Kuhn had generously acknowledged Polanyi in the first edition of The Structure of Scientific Revolutions.[5] Despite this intellectual alliance, Polanyi's work was constantly interpreted by others within the framework of Kuhn's paradigm shifts, much to Polanyi's (and Kuhn's) dismay.[24]

Thomas Kuhn Paradigm Shift Award

In honor of his legacy, the "Thomas Kuhn Paradigm Shift Award" is awarded by the American Chemical Society to speakers who present original views that are at odds with mainstream scientific understanding. The winner is selected based on the novelty of the viewpoint and its potential impact if it were to be widely accepted.[25]

Honors

Kuhn was named a Guggenheim Fellow in 1954, and in 1982 was awarded the George Sarton Medal by the History of Science Society. He also received numerous honorary doctorates.

Monday, February 5, 2018

Atomic theory

From Wikipedia, the free encyclopedia

The current theoretical model of the atom involves a dense nucleus surrounded by a probabilistic "cloud" of electrons

In chemistry and physics, atomic theory is a scientific theory of the nature of matter, which states that matter is composed of discrete units called atoms. It began as a philosophical concept in ancient Greece and entered the scientific mainstream in the early 19th century when discoveries in the field of chemistry showed that matter did indeed behave as if it were made up of atoms.

The word atom comes from the Ancient Greek adjective atomos, meaning "indivisible".[1] 19th-century chemists began using the term in connection with the growing number of irreducible chemical elements. While the term seemed apropos, around the turn of the 20th century physicists discovered, through various experiments with electromagnetism and radioactivity, that the so-called "uncuttable atom" was actually a conglomerate of various subatomic particles (chiefly electrons, protons and neutrons) which can exist separately from each other. In fact, in certain extreme environments, such as neutron stars, extreme temperature and pressure prevent atoms from existing at all.

Since atoms were found to be divisible, physicists later invented the term "elementary particles" to describe the "uncuttable", though not indestructible, parts of an atom. The field of science which studies subatomic particles is particle physics, and it is in this field that physicists hope to discover the true fundamental nature of matter.

History

Philosophical atomism

The idea that matter is made up of discrete units is a very old one, appearing in many ancient cultures such as Greece and India. The word "atom" was coined by the ancient Greek philosophers Leucippus and his pupil Democritus.[2][3] However, these ideas were founded in philosophical and theological reasoning rather than evidence and experimentation. Because of this, they could not convince everybody, so atomism was but one of a number of competing theories on the nature of matter. It was not until the 19th century that the idea was embraced and refined by scientists, as the blossoming science of chemistry produced discoveries that could easily be explained using the concept of atoms.

John Dalton

Near the end of the 18th century, two laws about chemical reactions emerged without referring to the notion of an atomic theory. The first was the law of conservation of mass, formulated by Antoine Lavoisier in 1789, which states that the total mass in a chemical reaction remains constant (that is, the reactants have the same mass as the products).[4] The second was the law of definite proportions. First proven by the French chemist Joseph Louis Proust in 1799,[5] this law states that if a compound is broken down into its constituent elements, then the masses of the constituents will always have the same proportions, regardless of the quantity or source of the original substance.

John Dalton studied and expanded upon this previous work and developed the law of multiple proportions: if two elements can be combined to form a number of possible compounds, then the ratios of the masses of the second element which combine with a fixed mass of the first element will be ratios of small whole numbers. For example: Proust had studied tin oxides and found that their masses were either 88.1% tin and 11.9% oxygen or 78.7% tin and 21.3% oxygen (these were tin(II) oxide and tin dioxide respectively). Dalton noted from these percentages that 100g of tin will combine either with 13.5g or 27g of oxygen; 13.5 and 27 form a ratio of 1:2. Dalton found that an atomic theory of matter could elegantly explain this common pattern in chemistry. In the case of Proust's tin oxides, one tin atom will combine with either one or two oxygen atoms.[6]
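Dalton's arithmetic on Proust's tin oxide data can be checked directly; the sketch below uses only the percentages quoted above.

```python
# Law of multiple proportions: the masses of oxygen that combine with a
# fixed mass (100 g) of tin, from Proust's measured mass percentages.
oxides = {
    "tin(II) oxide": (88.1, 11.9),   # (% tin, % oxygen) by mass
    "tin dioxide":   (78.7, 21.3),
}

# Mass of oxygen combining with 100 g of tin in each oxide.
oxygen_per_100g_tin = {
    name: 100 * pct_o / pct_sn for name, (pct_sn, pct_o) in oxides.items()
}
for name, mass_o in oxygen_per_100g_tin.items():
    print(f"{name}: {mass_o:.1f} g oxygen per 100 g tin")

# The two masses stand in a small whole-number ratio, approximately 1:2,
# which atomic theory explains as one versus two oxygen atoms per tin atom.
ratio = oxygen_per_100g_tin["tin dioxide"] / oxygen_per_100g_tin["tin(II) oxide"]
print(f"ratio: {ratio:.2f}")
```

Running this reproduces Dalton's figures of roughly 13.5 g and 27 g of oxygen per 100 g of tin.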

Dalton believed atomic theory could explain why water absorbed different gases in different proportions; for example, he found that water absorbed carbon dioxide far better than it absorbed nitrogen.[7] Dalton hypothesized this was due to the differences in mass and complexity of the gases' respective particles. Indeed, carbon dioxide molecules (CO2) are heavier and larger than nitrogen molecules (N2).

Dalton proposed that each chemical element is composed of atoms of a single, unique type, and though they cannot be altered or destroyed by chemical means, they can combine to form more complex structures (chemical compounds). This marked the first truly scientific theory of the atom, since Dalton reached his conclusions by experimentation and examination of the results in an empirical fashion.

Various atoms and molecules as depicted in John Dalton's A New System of Chemical Philosophy (1808).

In 1803 Dalton orally presented his first list of relative atomic weights for a number of substances. This paper was published in 1805, but he did not discuss there exactly how he obtained these figures.[7] The method was first revealed in 1807 by his acquaintance Thomas Thomson, in the third edition of Thomson's textbook, A System of Chemistry. Finally, Dalton published a full account in his own textbook, A New System of Chemical Philosophy, published in two parts in 1808 and 1810.

Dalton estimated the atomic weights according to the mass ratios in which they combined, with the hydrogen atom taken as unity. However, Dalton did not conceive that with some elements atoms exist in molecules—e.g. pure oxygen exists as O2. He also mistakenly believed that the simplest compound between any two elements is always one atom of each (so he thought water was HO, not H2O).[8] This, in addition to the crudity of his equipment, flawed his results. For instance, in 1803 he believed that oxygen atoms were 5.5 times heavier than hydrogen atoms, because in water he measured 5.5 grams of oxygen for every 1 gram of hydrogen and believed the formula for water was HO. Adopting better data, in 1806 he concluded that the atomic weight of oxygen must actually be 7 rather than 5.5, and he retained this weight for the rest of his life. Others at this time had already concluded that the oxygen atom must weigh 8 relative to hydrogen taken as 1, if one assumes Dalton's formula for the water molecule (HO), or 16 if one assumes the modern water formula (H2O).[9]
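The dependence of the inferred atomic weight on the assumed formula is simple arithmetic, sketched below with the mass ratios from the paragraph above.

```python
# The atomic weight inferred for oxygen (with hydrogen = 1) depends both on
# the measured oxygen-to-hydrogen mass ratio in water and on how many
# hydrogen atoms the assumed formula pairs with each oxygen atom.
def oxygen_weight(mass_ratio_o_to_h, h_atoms_per_o_atom):
    """Atomic weight of O (H = 1) implied by a water formula H_x O."""
    return mass_ratio_o_to_h * h_atoms_per_o_atom

print(oxygen_weight(5.5, 1))  # Dalton's 1803 measurement, formula HO
print(oxygen_weight(8.0, 1))  # better data, formula HO  -> 8
print(oxygen_weight(8.0, 2))  # better data, formula H2O -> 16
```

With the modern ratio (8 g of oxygen per 1 g of hydrogen) and the modern formula H2O, the familiar value of 16 falls out.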

Avogadro

The flaw in Dalton's theory was corrected in principle in 1811 by Amedeo Avogadro. Avogadro had proposed that equal volumes of any two gases, at equal temperature and pressure, contain equal numbers of molecules (in other words, the mass of a gas's particles does not affect the volume that it occupies).[10] Avogadro's law allowed him to deduce the diatomic nature of numerous gases by studying the volumes at which they reacted. For instance: since two liters of hydrogen will react with just one liter of oxygen to produce two liters of water vapor (at constant pressure and temperature), it meant a single oxygen molecule splits in two in order to form two particles of water. Thus, Avogadro was able to offer more accurate estimates of the atomic mass of oxygen and various other elements, and made a clear distinction between molecules and atoms.

Brownian Motion

In 1827, the British botanist Robert Brown observed that dust particles inside pollen grains floating in water constantly jiggled about for no apparent reason. In 1905, Albert Einstein theorized that this Brownian motion was caused by the water molecules continuously knocking the grains about, and developed a hypothetical mathematical model to describe it.[11] This model was validated experimentally in 1908 by French physicist Jean Perrin, thus providing additional validation for particle theory (and by extension atomic theory).
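The core of Einstein's model is the Stokes–Einstein relation, D = k_B·T / (6πηa), together with the mean squared displacement ⟨x²⟩ = 2Dt; that relation is standard physics rather than anything quoted in the passage, and the particle size and water viscosity below are illustrative values.

```python
import math

# Stokes-Einstein relation from Einstein's 1905 Brownian-motion analysis:
# D = k_B * T / (6 * pi * eta * a), with mean squared displacement
# <x^2> = 2 * D * t in one dimension. Numbers below are illustrative.
k_B = 1.380649e-23      # Boltzmann constant, J/K
T = 298.0               # room temperature, K
eta = 8.9e-4            # viscosity of water, Pa*s
a = 0.5e-6              # radius of a ~1-micrometre grain, m

D = k_B * T / (6 * math.pi * eta * a)   # diffusion coefficient, m^2/s
rms_1s = math.sqrt(2 * D * 1.0)         # RMS displacement after 1 s, m

print(f"D = {D:.2e} m^2/s")
print(f"RMS displacement in 1 s = {rms_1s * 1e6:.2f} micrometres")
```

A micron-sized grain wanders on the order of its own diameter every second, which is exactly the visible jiggling Brown reported and the quantity Perrin measured.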

Discovery of subatomic particles

Atoms were thought to be the smallest possible division of matter until 1897 when J.J. Thomson discovered the electron through his work on cathode rays.[12]
A Crookes tube is a sealed glass container in which two electrodes are separated by a vacuum. When a voltage is applied across the electrodes, cathode rays are generated, creating a glowing patch where they strike the glass at the opposite end of the tube. Through experimentation, Thomson discovered that the rays could be deflected by an electric field (in addition to magnetic fields, which was already known). He concluded that these rays, rather than being a form of light, were composed of very light negatively charged particles he called "corpuscles" (they would later be renamed electrons by other scientists). He measured the mass-to-charge ratio and discovered it was 1800 times smaller than that of hydrogen, the smallest atom. These corpuscles were a particle unlike any other previously known.

Thomson suggested that atoms were divisible, and that the corpuscles were their building blocks.[13] To explain the overall neutral charge of the atom, he proposed that the corpuscles were distributed in a uniform sea of positive charge; this was the plum pudding model[14] as the electrons were embedded in the positive charge like plums in a plum pudding (although in Thomson's model they were not stationary).

Discovery of the nucleus

The Geiger-Marsden experiment
Left: Expected results: alpha particles passing through the plum pudding model of the atom with negligible deflection.
Right: Observed results: a small portion of the particles were deflected by the concentrated positive charge of the nucleus.

Thomson's plum pudding model was disproved in 1909 by one of his former students, Ernest Rutherford, who discovered that most of the mass and positive charge of an atom is concentrated in a very small fraction of its volume, which he assumed to be at the very center.

In the Geiger–Marsden experiment, Hans Geiger and Ernest Marsden (colleagues of Rutherford working at his behest) shot alpha particles at thin sheets of metal and measured their deflection through the use of a fluorescent screen.[15] Given the very small mass of the electrons, the high momentum of the alpha particles, and the low concentration of the positive charge of the plum pudding model, the experimenters expected all the alpha particles to pass through the metal foil without significant deflection. To their astonishment, a small fraction of the alpha particles experienced heavy deflection. Rutherford concluded that the positive charge of the atom must be concentrated in a very tiny volume to produce an electric field sufficiently intense to deflect the alpha particles so strongly.

This led Rutherford to propose a planetary model in which a cloud of electrons surrounded a small, compact nucleus of positive charge. Only such a concentration of charge could produce the electric field strong enough to cause the heavy deflection.[16]

First steps toward a quantum physical model of the atom

The planetary model of the atom had two significant shortcomings. The first is that, unlike planets orbiting a sun, electrons are charged particles. An accelerating electric charge is known to emit electromagnetic waves according to the Larmor formula in classical electromagnetism. An orbiting charge should steadily lose energy and spiral toward the nucleus, colliding with it in a small fraction of a second. The second problem was that the planetary model could not explain the highly peaked emission and absorption spectra of atoms that were observed.
The Bohr model of the atom

Quantum theory revolutionized physics at the beginning of the 20th century, when Max Planck and Albert Einstein postulated that light energy is emitted or absorbed in discrete amounts known as quanta (singular, quantum). In 1913, Niels Bohr incorporated this idea into his Bohr model of the atom, in which an electron could only orbit the nucleus in particular circular orbits with fixed angular momentum and energy, its distance from the nucleus (i.e., their radii) being proportional to its energy.[17] Under this model an electron could not spiral into the nucleus because it could not lose energy in a continuous manner; instead, it could only make instantaneous "quantum leaps" between the fixed energy levels.[17] When this occurred, light was emitted or absorbed at a frequency proportional to the change in energy (hence the absorption and emission of light in discrete spectra).[17]
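Bohr's quantization can be illustrated with a short sketch; the hydrogen energy formula E_n = −13.6 eV / n² is the standard textbook form of his result rather than something quoted in the passage.

```python
# Bohr model of hydrogen: E_n = -13.6 eV / n^2. A "quantum leap" from
# level n_hi down to n_lo emits light of wavelength hc / (E_hi - E_lo).
RYDBERG_EV = 13.6057    # hydrogen ground-state binding energy, eV
HC_EV_NM = 1239.84      # h*c, in eV*nm

def energy(n):
    """Energy of the n-th Bohr orbit of hydrogen, in eV."""
    return -RYDBERG_EV / n ** 2

def emission_wavelength_nm(n_hi, n_lo):
    """Wavelength of the photon emitted in the transition n_hi -> n_lo."""
    return HC_EV_NM / (energy(n_hi) - energy(n_lo))

# The Balmer series (transitions ending at n = 2) gives hydrogen's
# visible spectral lines, including the red H-alpha line near 656 nm.
for n in (3, 4, 5):
    print(f"{n} -> 2: {emission_wavelength_nm(n, 2):.1f} nm")
```

Because only these discrete energy differences are allowed, the spectrum consists of sharp lines rather than a continuum, which is precisely the observation the planetary model could not explain.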

Bohr's model was not perfect. It could only predict the spectral lines of hydrogen; it couldn't predict those of multielectron atoms. Worse still, as spectrographic technology improved, additional spectral lines in hydrogen were observed which Bohr's model couldn't explain. In 1916, Arnold Sommerfeld added elliptical orbits to the Bohr model to explain the extra emission lines, but this made the model very difficult to use, and it still couldn't explain more complex atoms.

Discovery of isotopes

While experimenting with the products of radioactive decay, in 1913 radiochemist Frederick Soddy discovered that there appeared to be more than one element at each position on the periodic table.[18] The term isotope was coined by Margaret Todd as a suitable name for these elements.
That same year, J.J. Thomson conducted an experiment in which he channeled a stream of neon ions through magnetic and electric fields, striking a photographic plate at the other end. He observed two glowing patches on the plate, which suggested two different deflection trajectories. Thomson concluded this was because some of the neon ions had a different mass.[19] The nature of this differing mass would later be explained by the discovery of neutrons in 1932.

Discovery of nuclear particles

In 1917 Rutherford bombarded nitrogen gas with alpha particles and observed hydrogen nuclei being emitted from the gas (Rutherford recognized these, because he had previously obtained them bombarding hydrogen with alpha particles, and observing hydrogen nuclei in the products). Rutherford concluded that the hydrogen nuclei emerged from the nuclei of the nitrogen atoms themselves (in effect, he had split a nitrogen).[20]

From his own work and the work of his students Bohr and Henry Moseley, Rutherford knew that the positive charge of any atom could always be equated to that of an integer number of hydrogen nuclei. This, coupled with the atomic mass of many elements being roughly equivalent to an integer number of hydrogen atoms (then assumed to be the lightest particles), led him to conclude that hydrogen nuclei were singular particles and a basic constituent of all atomic nuclei. He named such particles protons. Further experimentation by Rutherford found that the nuclear mass of most atoms exceeded that of the protons it possessed; he speculated that this surplus mass was composed of previously unknown neutrally charged particles, which were tentatively dubbed "neutrons".

In 1928, Walter Bothe observed that beryllium emitted a highly penetrating, electrically neutral radiation when bombarded with alpha particles. It was later discovered that this radiation could knock hydrogen atoms out of paraffin wax. Initially it was thought to be high-energy gamma radiation, since gamma radiation had a similar effect on electrons in metals, but James Chadwick found that the ionization effect was too strong for it to be due to electromagnetic radiation, so long as energy and momentum were conserved in the interaction. In 1932, Chadwick exposed various elements, such as hydrogen and nitrogen, to the mysterious "beryllium radiation", and by measuring the energies of the recoiling charged particles, he deduced that the radiation was actually composed of electrically neutral particles which could not be massless like the gamma ray, but instead were required to have a mass similar to that of a proton. Chadwick now claimed these particles as Rutherford's neutrons.[21] For his discovery of the neutron, Chadwick received the Nobel Prize in 1935.

Quantum physical models of the atom

The five filled atomic orbitals of a neon atom separated and arranged in order of increasing energy from left to right, with the last three orbitals being equal in energy. Each orbital holds up to two electrons, which most probably exist in the zones represented by the colored bubbles. Each electron is equally present in both orbital zones, shown here by color only to highlight the different wave phase.

In 1924, Louis de Broglie proposed that all moving particles—particularly subatomic particles such as electrons—exhibit a degree of wave-like behavior. Erwin Schrödinger, fascinated by this idea, explored whether or not the movement of an electron in an atom could be better explained as a wave rather than as a particle. Schrödinger's equation, published in 1926,[22] describes an electron as a wavefunction instead of as a point particle. This approach elegantly predicted many of the spectral phenomena that Bohr's model failed to explain. Although this concept was mathematically convenient, it was difficult to visualize, and faced opposition.[23] One of its critics, Max Born, proposed instead that Schrödinger's wavefunction described not the electron but rather all its possible states, and thus could be used to calculate the probability of finding an electron at any given location around the nucleus.[24] This reconciled the two opposing theories of particle versus wave electrons and the idea of wave–particle duality was introduced. This theory stated that the electron may exhibit the properties of both a wave and a particle. For example, it can be refracted like a wave, and has mass like a particle.[25]

A consequence of describing electrons as waveforms is that it is mathematically impossible to simultaneously derive the position and momentum of an electron. This became known as the Heisenberg uncertainty principle after the theoretical physicist Werner Heisenberg, who first described it and published it in 1927.[26] This invalidated Bohr's model, with its neat, clearly defined circular orbits. The modern model of the atom describes the positions of electrons in an atom in terms of probabilities. An electron can potentially be found at any distance from the nucleus, but, depending on its energy level, exists more frequently in certain regions around the nucleus than others; this pattern is referred to as its atomic orbital. The orbitals come in a variety of shapes (sphere, dumbbell, torus, etc.) with the nucleus in the middle.[27]

Sunday, January 28, 2018

Orbital eccentricity

From Wikipedia, the free encyclopedia
An elliptic, parabolic, and hyperbolic Kepler orbit:
  elliptic (eccentricity = 0.7)
  parabolic (eccentricity = 1)
  hyperbolic orbit (eccentricity = 1.3)

The orbital eccentricity of an astronomical object is a parameter that determines the amount by which its orbit around another body deviates from a perfect circle. A value of 0 is a circular orbit, values between 0 and 1 form an elliptic orbit, 1 is a parabolic escape orbit, and greater than 1 is a hyperbola. The term derives its name from the parameters of conic sections, as every Kepler orbit is a conic section. It is normally used for the isolated two-body problem, but extensions exist for objects following a rosette orbit through the galaxy.

Definition

Orbits in a two-body system for two values of the eccentricity, e (e = 0 and e = 0.5).

In a two-body problem with inverse-square-law force, every orbit is a Kepler orbit. The eccentricity of this Kepler orbit is a non-negative number that defines its shape.

The eccentricity may take the following values:
  circular orbit: e = 0
  elliptic orbit: 0 < e < 1
  parabolic trajectory: e = 1
  hyperbolic trajectory: e > 1
The eccentricity e is given by
  e = √(1 + 2EL² / (m_red α²))
where E is the total orbital energy, L is the angular momentum, mred is the reduced mass, and α the coefficient of the inverse-square law central force such as gravity or electrostatics in classical physics:
F = \frac{\alpha}{r^2}
(α is negative for an attractive force, positive for a repulsive one; see also Kepler problem)
or in the case of a gravitational force:
e = \sqrt{1 + \frac{2\varepsilon h^2}{\mu^2}}
where ε is the specific orbital energy (total energy divided by the reduced mass), μ the standard gravitational parameter based on the total mass, and h the specific relative angular momentum (angular momentum divided by the reduced mass).
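As a quick sanity check, the gravitational form of this relation can be sketched in code. The round trip below assumes illustrative Earth-like values for μ, the semimajor axis a, and e (not taken from this article): from a chosen eccentricity it builds ε and h, then recovers e from the formula above.

```python
import math

def eccentricity(epsilon, h, mu):
    """e = sqrt(1 + 2*epsilon*h^2 / mu^2) for a Kepler orbit."""
    return math.sqrt(1.0 + 2.0 * epsilon * h**2 / mu**2)

# Illustrative Earth-like values (assumed for this sketch):
mu = 1.32712440018e20       # Sun's standard gravitational parameter, m^3/s^2
a = 1.495978707e11          # semimajor axis, m
e_in = 0.0167

epsilon = -mu / (2.0 * a)                   # specific orbital energy of an ellipse
h = math.sqrt(mu * a * (1.0 - e_in**2))     # specific relative angular momentum

print(round(eccentricity(epsilon, h, mu), 4))  # 0.0167, recovering e_in
```

Algebraically the round trip is exact: 2εh²/μ² = −(1 − e²), so the square root returns e.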

For values of e from 0 to 1 the orbit's shape is an increasingly elongated (or flatter) ellipse; for values of e from 1 to infinity the orbit is a hyperbola branch making a total turn of 2 arccsc e, decreasing from 180 to 0 degrees. The limit case between an ellipse and a hyperbola, when e equals 1, is a parabola.

Radial trajectories are classified as elliptic, parabolic, or hyperbolic based on the energy of the orbit, not the eccentricity. Radial orbits have zero angular momentum and hence eccentricity equal to one. Keeping the energy constant and reducing the angular momentum, elliptic, parabolic, and hyperbolic orbits each tend to the corresponding type of radial trajectory while e tends to 1 (or in the parabolic case, remains 1).

For a repulsive force only the hyperbolic trajectory, including the radial version, is applicable.

For elliptical orbits, a simple proof shows that arcsin(e) yields the projection angle of a perfect circle to an ellipse of eccentricity e. For example, to view the eccentricity of the planet Mercury (e = 0.2056), one must simply calculate the inverse sine to find the projection angle of 11.86 degrees. Next, tilt any circular object (such as a coffee mug viewed from the top) by that angle and the apparent ellipse projected to your eye will be of that same eccentricity.
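This projection-angle rule is easy to verify numerically; the sketch below simply evaluates arcsin(e) for Mercury's eccentricity.

```python
import math

def projection_angle_deg(e):
    """Tilt angle whose projection turns a circle into an ellipse of eccentricity e."""
    return math.degrees(math.asin(e))

print(round(projection_angle_deg(0.2056), 2))  # Mercury: 11.86 degrees
```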

Etymology

The word "eccentricity" comes from Medieval Latin eccentricus, derived from Greek ἔκκεντρος ekkentros "out of the center", from ἐκ- ek-, "out of" + κέντρον kentron "center". "Eccentric" first appeared in English in 1551, with the definition "a circle in which the earth, sun, etc. deviates from its center". Five years later, in 1556, an adjectival form of the word had developed.

Calculation

The eccentricity of an orbit can be calculated from the orbital state vectors as the magnitude of the eccentricity vector:
e = \left|\mathbf{e}\right|
where \mathbf{e} is the eccentricity vector.
For elliptical orbits it can also be calculated from the periapsis and apoapsis since rp = a(1 − e) and ra = a(1 + e), where a is the semimajor axis.
e = \frac{r_\text{a} - r_\text{p}}{r_\text{a} + r_\text{p}} = 1 - \frac{2}{\dfrac{r_\text{a}}{r_\text{p}} + 1}
where:
  • ra is the radius at apoapsis (i.e., the farthest distance of the orbit to the center of mass of the system, which is a focus of the ellipse).
  • rp is the radius at periapsis (the closest distance).
The eccentricity of an elliptical orbit can also be used to obtain the ratio of the periapsis to the apoapsis:
\frac{r_\text{p}}{r_\text{a}} = \frac{1 - e}{1 + e}
For Earth, orbital eccentricity ≈ 0.0167; its apoapsis is aphelion and its periapsis is perihelion, relative to the Sun.

For Earth's annual orbit path, the ratio of longest radius (ra) to shortest radius (rp) is ra/rp = (1 + e)/(1 − e) ≈ 1.034.
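Both apsis relations above can be checked with a short sketch; the aphelion and perihelion distances used here are approximate illustrative values, not figures from the article.

```python
def ecc_from_apsides(ra, rp):
    """e = (ra - rp) / (ra + rp), from apoapsis and periapsis radii."""
    return (ra - rp) / (ra + rp)

def radius_ratio(e):
    """ra/rp = (1 + e) / (1 - e)."""
    return (1.0 + e) / (1.0 - e)

# Approximate Earth aphelion and perihelion distances, in metres (assumed):
ra, rp = 1.521e11, 1.471e11
print(round(ecc_from_apsides(ra, rp), 4))  # ~0.0167
print(round(radius_ratio(0.0167), 3))      # ~1.034
```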

Examples

Gravity Simulator plot of the changing orbital eccentricity of Mercury, Venus, Earth, and Mars over the next 50,000 years. The arrows indicate the different scales used. The 0 point on this plot is the year 2007.
 
Eccentricities of Solar System bodies
Object eccentricity
Triton 0.00002
Venus 0.0068
Neptune 0.0086
Earth 0.0167
Titan 0.0288
Uranus 0.0472
Jupiter 0.0484
Saturn 0.0541
Moon 0.0549
1 Ceres 0.0758
4 Vesta 0.0887
Mars 0.0934
10 Hygiea 0.1146
Makemake 0.1559
Haumea 0.1887
Mercury 0.2056
2 Pallas 0.2313
Pluto 0.2488
3 Juno 0.2555
324 Bamberga 0.3400
Eris 0.4407
Nereid 0.7507
Sedna 0.8549
Halley's Comet 0.9671
Comet Hale-Bopp 0.9951
Comet Ikeya-Seki 0.9999
ʻOumuamua 1.20[a]

The eccentricity of the Earth's orbit is currently about 0.0167; the Earth's orbit is nearly circular. Venus and Neptune have even lower eccentricities. Over hundreds of thousands of years, the eccentricity of the Earth's orbit varies from nearly 0.0034 to almost 0.058 as a result of gravitational attractions among the planets (see graph).[1]

The table lists the values for all planets and dwarf planets, and selected asteroids, comets, and moons. Mercury has the greatest orbital eccentricity of any planet in the Solar System (e = 0.2056). This eccentricity is sufficient for Mercury to receive more than twice as much solar irradiation at perihelion as at aphelion. Before its demotion from planet status in 2006, Pluto was considered to be the planet with the most eccentric orbit (e = 0.248). Other trans-Neptunian objects have significant eccentricity, notably the dwarf planet Eris (0.44). Even farther out, Sedna has an extremely high eccentricity of 0.855 due to its estimated aphelion of 937 AU and perihelion of about 76 AU.
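The irradiation claim for Mercury follows from the inverse-square law: the perihelion-to-aphelion irradiance ratio is (ra/rp)² = ((1 + e)/(1 − e))². A minimal sketch:

```python
def irradiance_ratio(e):
    """Perihelion-to-aphelion solar irradiance ratio, ((1 + e) / (1 - e))**2."""
    return ((1.0 + e) / (1.0 - e)) ** 2

print(round(irradiance_ratio(0.2056), 2))  # Mercury: ~2.3
```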

Most of the Solar System's asteroids have orbital eccentricities between 0 and 0.35 with an average value of 0.17.[2] Their comparatively high eccentricities are probably due to the influence of Jupiter and to past collisions.

The Moon's value is 0.0549, the most eccentric of the large moons of the Solar System. The four Galilean moons have eccentricities of less than 0.01. Neptune's largest moon Triton has an eccentricity of 1.6×10−5 (0.000016),[3] the smallest eccentricity of any known body in the Solar System; its orbit is as close to a perfect circle as can currently be measured. However, smaller moons, particularly irregular moons, can have significant eccentricity, such as Neptune's third-largest moon Nereid (0.75).

Comets have very different values of eccentricity. Periodic comets have eccentricities mostly between 0.2 and 0.7,[4] but some of them have highly eccentric elliptical orbits with eccentricities just below 1; for example, Halley's Comet has a value of 0.967. Non-periodic comets follow near-parabolic orbits and thus have eccentricities even closer to 1. Examples include Comet Hale–Bopp with a value of 0.995[5] and comet C/2006 P1 (McNaught) with a value of 1.000019.[6] As Hale–Bopp's value is less than 1, its orbit is elliptical and it will in fact return.[5] Comet McNaught has a hyperbolic orbit while within the influence of the planets, but is still bound to the Sun with an orbital period of about 10⁵ years.[7] As of a 2010 epoch, Comet C/1980 E1 has the largest eccentricity of any known hyperbolic comet, at 1.057,[8] and will eventually leave the Solar System.

ʻOumuamua is the first interstellar object found passing through the Solar System. Its orbital eccentricity of 1.20 indicates that ʻOumuamua has never been gravitationally bound to our Sun. It was discovered 0.2 AU (30,000,000 km; 19,000,000 mi) from Earth and is roughly 200 meters in diameter. It has an interstellar speed (velocity at infinity) of 26.33 km/s (58,900 mph).

Mean eccentricity

The mean eccentricity of an object is the average eccentricity as a result of perturbations over a given time period. Neptune currently has an instant (current epoch) eccentricity of 0.0113,[9] but from 1800 to 2050 has a mean eccentricity of 0.00859.[10]

Climatic effect

Orbital mechanics require that the duration of the seasons be proportional to the area of the Earth's orbit swept between the solstices and equinoxes, so when the orbital eccentricity is extreme, the seasons that occur on the far side of the orbit (aphelion) can be substantially longer in duration. Today, northern hemisphere fall and winter occur at closest approach (perihelion), when the earth is moving at its maximum velocity—while the opposite occurs in the southern hemisphere. As a result, in the northern hemisphere, fall and winter are slightly shorter than spring and summer—but in global terms this is balanced with them being longer below the equator. In 2006, the northern hemisphere summer was 4.66 days longer than winter, and spring was 2.9 days longer than fall due to the Milankovitch cycles.[11][12]
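The speed difference between perihelion and aphelion mentioned above can be sketched with the vis-viva equation, v = √(μ(2/r − 1/a)); the Earth values below (μ, a, e) are standard approximations assumed for illustration, not figures from the article.

```python
import math

MU_SUN = 1.32712440018e20   # Sun's standard gravitational parameter, m^3/s^2 (assumed)
A_EARTH = 1.495978707e11    # Earth's semimajor axis, m (assumed)
E_EARTH = 0.0167            # Earth's orbital eccentricity

def vis_viva(r, a=A_EARTH, mu=MU_SUN):
    """Orbital speed at distance r for a Kepler orbit of semimajor axis a."""
    return math.sqrt(mu * (2.0 / r - 1.0 / a))

r_p = A_EARTH * (1.0 - E_EARTH)   # perihelion distance
r_a = A_EARTH * (1.0 + E_EARTH)   # aphelion distance
v_p = vis_viva(r_p)               # speed at perihelion (fastest point)
v_a = vis_viva(r_a)               # speed at aphelion (slowest point)

print(round(v_p / v_a, 3))  # ~1.034, equal to (1 + e)/(1 - e) by conservation of angular momentum
```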

Apsidal precession also slowly changes the place in the Earth's orbit where the solstices and equinoxes occur. Note that this is a slow change in the orbit of the Earth, not the axis of rotation, which is referred to as axial precession (see Precession § Astronomy). Over the next 10,000 years, the northern hemisphere winters will become gradually longer and summers will become shorter. However, any cooling effect in one hemisphere is balanced by warming in the other, and any overall change will be counteracted by the fact that the eccentricity of Earth's orbit will be almost halved.[13] This will reduce the mean orbital radius and raise temperatures in both hemispheres closer to the mid-interglacial peak.

Exoplanets

Of the many exoplanets discovered, most have higher orbital eccentricities than the planets in our Solar System. Exoplanets found with low orbital eccentricities (near-circular orbits) tend to be very close to their star and tidally locked to it. All eight planets in the Solar System have near-circular orbits, and the exoplanets discovered so far suggest that the Solar System, with its unusually low eccentricities, is rare.[14] One theory attributes this low eccentricity to the high number of planets in the Solar System; another suggests it arose because of its unique asteroid belts. A few other multiplanetary systems have been found, but none resemble the Solar System, whose distinctive planetesimal systems led its planets to near-circular orbits. These planetesimal systems include the asteroid belt, the Hilda family, the Kuiper belt, the Hills cloud, and the Oort cloud. The exoplanet systems discovered have either no planetesimal system or one very large one. Low eccentricity is needed for habitability, especially for advanced life,[15] and high-multiplicity planetary systems are much more likely to have habitable exoplanets.[16][17] The grand tack hypothesis also helps explain the Solar System's near-circular orbits and other unique features.

Cryogenics

From Wikipedia, the free encyclopedia https://en.wikipedia.org/wiki/Cryogenics...