
Saturday, August 30, 2025

Pierre-Simon Laplace

From Wikipedia, the free encyclopedia
Pierre-Simon Laplace
Pierre-Simon Laplace as chancellor of the Senate under the First French Empire

Born: 23 March 1749
Died: 5 March 1827 (aged 77)
Alma mater: University of Caen

Scientific career
Fields: Astronomy and mathematics
Institutions: École Militaire (1769–1776)
Academic advisors: Jean d'Alembert, Christophe Gadbled, Pierre Le Canu
Notable students: Siméon Denis Poisson, Napoleon Bonaparte

Minister of the Interior
In office: 12 November 1799 – 25 December 1799
Prime Minister: Napoleon Bonaparte (as First Consul)
Preceded by: Nicolas Marie Quinette
Succeeded by: Lucien Bonaparte

Pierre-Simon, Marquis de Laplace (/ləˈplɑːs/; French: [pjɛʁ simɔ̃ laplas]; 23 March 1749 – 5 March 1827) was a French polymath, a scholar whose work has been instrumental in the fields of physics, astronomy, mathematics, engineering, statistics, and philosophy. He summarized and extended the work of his predecessors in his five-volume Mécanique céleste (Celestial Mechanics) (1799–1825). This work translated the geometric study of classical mechanics to one based on calculus, opening up a broader range of problems. Laplace also popularized and further confirmed Sir Isaac Newton's work. In statistics, the Bayesian interpretation of probability was developed mainly by Laplace.

Laplace formulated Laplace's equation, and pioneered the Laplace transform which appears in many branches of mathematical physics, a field that he took a leading role in forming. The Laplacian differential operator, widely used in mathematics, is also named after him. He restated and developed the nebular hypothesis of the origin of the Solar System and was one of the first scientists to suggest an idea similar to that of a black hole, with Stephen Hawking stating that "Laplace essentially predicted the existence of black holes". He originated Laplace's demon, which is a hypothetical all-predicting intellect. He also refined Newton's calculation of the speed of sound to derive a more accurate measurement.

Laplace is regarded as one of the greatest scientists of all time. Sometimes referred to as the French Newton or Newton of France, he has been described as possessing a phenomenal natural mathematical faculty superior to that of almost all of his contemporaries. He was Napoleon's examiner when Napoleon graduated from the École Militaire in Paris in 1785. Laplace became a count of the Empire in 1806 and was named a marquis in 1817, after the Bourbon Restoration.

Early years

Portrait of Pierre-Simon Laplace by Johann Ernst Heinsius (1775)

Some details of Laplace's life are not known, as records of his life were burned in 1925 with the family château in Saint Julien de Mailloc, near Lisieux, the home of his great-great-grandson the Comte de Colbert-Laplace. Others had been destroyed earlier, when his house at Arcueil near Paris was looted in 1871.

Laplace was born in Beaumont-en-Auge, Normandy on 23 March 1749, a village four miles west of Pont l'Évêque. According to W. W. Rouse Ball, his father, Pierre de Laplace, owned and farmed the small estates of Maarquis. His great-uncle, Maitre Oliver de Laplace, had held the title of Chirurgien Royal. It would seem that from a pupil he became an usher in the school at Beaumont; but, having procured a letter of introduction to d'Alembert, he went to Paris to advance his fortune. However, Karl Pearson is scathing about the inaccuracies in Rouse Ball's account and states:

Indeed Caen was probably in Laplace's day the most intellectually active of all the towns of Normandy. It was here that Laplace was educated and was provisionally a professor. It was here he wrote his first paper published in the Mélanges of the Royal Society of Turin, Tome iv. 1766–1769, at least two years before he went at 22 or 23 to Paris in 1771. Thus before he was 20 he was in touch with Lagrange in Turin. He did not go to Paris a raw self-taught country lad with only a peasant background! In 1765 at the age of sixteen Laplace left the "School of the Duke of Orleans" in Beaumont and went to the University of Caen, where he appears to have studied for five years and was a member of the Sphinx. The École Militaire of Beaumont did not replace the old school until 1776.

His parents, Pierre Laplace and Marie-Anne Sochon, were from comfortable families. The Laplace family was involved in agriculture until at least 1750, but Pierre Laplace senior was also a cider merchant and syndic of the town of Beaumont.

Pierre Simon Laplace attended a school in the village run by a Benedictine priory, his father intending that he be ordained in the Roman Catholic Church. At sixteen, to further his father's intention, he was sent to the University of Caen to read theology.

At the university, he was mentored by two enthusiastic teachers of mathematics, Christophe Gadbled and Pierre Le Canu, who awoke his zeal for the subject. Here Laplace's brilliance as a mathematician was quickly recognised and while still at Caen he wrote a memoir Sur le calcul intégral aux différences infiniment petites et aux différences finies. This provided the first correspondence between Laplace and Lagrange. Lagrange was the senior by thirteen years, and had recently founded in his native city Turin a journal named Miscellanea Taurinensia, in which many of his early works were printed and it was in the fourth volume of this series that Laplace's paper appeared. About this time, recognising that he had no vocation for the priesthood, he resolved to become a professional mathematician. Some sources state that he then broke with the church and became an atheist. Laplace did not graduate in theology but left for Paris with a letter of introduction from Le Canu to Jean le Rond d'Alembert who at that time was supreme in scientific circles.

According to his great-great-grandson, d'Alembert received him rather poorly, and to get rid of him gave him a thick mathematics book, saying to come back when he had read it. When Laplace came back a few days later, d'Alembert was even less friendly and did not hide his opinion that it was impossible that Laplace could have read and understood the book. But upon questioning him, he realised that it was true, and from that time he took Laplace under his care.

Another account is that Laplace solved overnight a problem that d'Alembert set him for submission the following week, then solved a harder problem the following night. D'Alembert was impressed and recommended him for a teaching place in the École Militaire.

With a secure income and undemanding teaching, Laplace now threw himself into original research and for the next seventeen years, 1771–1787, he produced much of his original work in astronomy.

The Calorimeter of Lavoisier and La Place, Encyclopaedia Londinensis, 1801

From 1780 to 1784, Laplace and French chemist Antoine Lavoisier collaborated on several experimental investigations, designing their own equipment for the task. In 1783 they published their joint paper, Memoir on Heat, in which they discussed the kinetic theory of molecular motion. In their experiments they measured the specific heat of various bodies, and the expansion of metals with increasing temperature. They also measured the boiling points of ethanol and ether under pressure.

Laplace further impressed the Marquis de Condorcet, and already by 1771 Laplace felt entitled to membership in the French Academy of Sciences. However, that year admission went to Alexandre-Théophile Vandermonde and in 1772 to Jacques Antoine Joseph Cousin. Laplace was disgruntled, and early in 1773 d'Alembert wrote to Lagrange in Berlin to ask if a position could be found for Laplace there. However, Condorcet became permanent secretary of the Académie in February and Laplace was elected associate member on 31 March, at age 24. In 1773 Laplace read his paper on the invariability of planetary motion in front of the Académie des sciences. That March he was elected to the academy, a place where he conducted the majority of his science.

On 15 March 1788, at the age of thirty-nine, Laplace married Marie-Charlotte de Courty de Romanges, an eighteen-year-old girl from a "good" family in Besançon. The wedding was celebrated at Saint-Sulpice, Paris. The couple had a son, Charles-Émile (1789–1874), and a daughter, Sophie-Suzanne (1792–1813).

Analysis, probability, and astronomical stability

Laplace's early published work in 1771 started with differential equations and finite differences but he was already starting to think about the mathematical and philosophical concepts of probability and statistics. However, before his election to the Académie in 1773, he had already drafted two papers that would establish his reputation. The first, Mémoire sur la probabilité des causes par les événements was ultimately published in 1774 while the second paper, published in 1776, further elaborated his statistical thinking and also began his systematic work on celestial mechanics and the stability of the Solar System. The two disciplines would always be interlinked in his mind. "Laplace took probability as an instrument for repairing defects in knowledge." Laplace's work on probability and statistics is discussed below with his mature work on the analytic theory of probabilities.

Stability of the Solar System

Sir Isaac Newton had published his Philosophiæ Naturalis Principia Mathematica in 1687 in which he gave a derivation of Kepler's laws, which describe the motion of the planets, from his laws of motion and his law of universal gravitation. However, though Newton had privately developed the methods of calculus, all his published work used cumbersome geometric reasoning, unsuitable to account for the more subtle higher-order effects of interactions between the planets. Newton himself had doubted the possibility of a mathematical solution to the whole, even concluding that periodic divine intervention was necessary to guarantee the stability of the Solar System. Dispensing with the hypothesis of divine intervention would be a major activity of Laplace's scientific life. It is now generally regarded that Laplace's methods on their own, though vital to the development of the theory, are not sufficiently precise to demonstrate the stability of the Solar System; today the Solar System is understood to be generally chaotic at fine scales, although currently fairly stable on a coarse scale.

One particular problem from observational astronomy was the apparent instability whereby Jupiter's orbit appeared to be shrinking while that of Saturn was expanding. The problem had been tackled by Leonhard Euler in 1748, and Joseph Louis Lagrange in 1763, but without success. In 1776, Laplace published a memoir in which he first explored the possible influences of a purported luminiferous ether or of a law of gravitation that did not act instantaneously. He ultimately returned to an intellectual investment in Newtonian gravity. Euler and Lagrange had made a practical approximation by ignoring small terms in the equations of motion. Laplace noted that though the terms themselves were small, when integrated over time they could become important. Laplace carried his analysis into the higher-order terms, up to and including the cubic. Using this more exact analysis, Laplace concluded that any two planets and the Sun must be in mutual equilibrium and thereby launched his work on the stability of the Solar System. Gerald James Whitrow described the achievement as "the most important advance in physical astronomy since Newton".

Laplace had a wide knowledge of all sciences and dominated all discussions in the Académie. Laplace seems to have regarded analysis merely as a means of attacking physical problems, though the ability with which he invented the necessary analysis is almost phenomenal. As long as his results were true he took but little trouble to explain the steps by which he arrived at them; he never studied elegance or symmetry in his processes, and it was sufficient for him if he could by any means solve the particular question he was discussing.

Tidal dynamics

Dynamic theory of tides

While Newton explained the tides by describing the tide-generating forces and Bernoulli gave a description of the static reaction of the waters on Earth to the tidal potential, the dynamic theory of tides, developed by Laplace in 1775, describes the ocean's real reaction to tidal forces. Laplace's theory of ocean tides took into account friction, resonance and natural periods of ocean basins. It predicted the large amphidromic systems in the world's ocean basins and explains the oceanic tides that are actually observed.

The equilibrium theory, based on the gravitational gradient from the Sun and Moon but ignoring the Earth's rotation, the effects of continents, and other important effects, could not explain the real ocean tides.

Newton's three-body model

Since measurements have confirmed the theory, many things now have possible explanations, such as how tides interacting with deep-sea ridges and chains of seamounts give rise to deep eddies that transport nutrients from the deep to the surface. The equilibrium tide theory predicts a tidal wave height of less than half a metre, while the dynamic theory explains why tides can reach up to 15 metres. Satellite observations confirm the accuracy of the dynamic theory, and the tides worldwide are now measured to within a few centimetres. Measurements from the CHAMP satellite closely match the models based on the TOPEX data. Accurate models of tides worldwide are essential for research since the variations due to tides must be removed from measurements when calculating gravity and changes in sea levels.

Laplace's tidal equations

A. Lunar gravitational potential: this depicts the Moon directly over 30° N (or 30° S) viewed from above the Northern Hemisphere.
B. This view shows the same potential viewed from 180° away from view A, again from above the Northern Hemisphere. Red indicates up, blue down.

In 1776, Laplace formulated a single set of linear partial differential equations, for tidal flow described as a barotropic two-dimensional sheet flow. Coriolis effects are introduced as well as lateral forcing by gravity. Laplace obtained these equations by simplifying the fluid dynamic equations. But they can also be derived from energy integrals via Lagrange's equation.

For a fluid sheet of average thickness D, the vertical tidal elevation ζ, as well as the horizontal velocity components u and v (in the latitude φ and longitude λ directions, respectively) satisfy Laplace's tidal equations:

∂ζ/∂t + [1/(a cos φ)] [ ∂(uD)/∂λ + ∂(vD cos φ)/∂φ ] = 0
∂u/∂t − 2Ω v sin φ + [1/(a cos φ)] ∂(gζ + U)/∂λ = 0
∂v/∂t + 2Ω u sin φ + (1/a) ∂(gζ + U)/∂φ = 0

where Ω is the angular frequency of the planet's rotation, g is the planet's gravitational acceleration at the mean ocean surface, a is the planetary radius, and U is the external gravitational tidal-forcing potential.
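The following is a minimal numerical sketch, not Laplace's own formulation, of how the right-hand sides of these equations can be evaluated on a coarse latitude–longitude grid with NumPy. The grid resolution, the mean depth D, the placeholder state, and the crude forcing potential U are all illustrative assumptions.

```python
import numpy as np

# Illustrative constants (assumptions, not figures from the article)
a = 6.371e6          # planetary radius [m]
g = 9.81             # gravitational acceleration [m/s^2]
Omega = 7.292e-5     # rotation rate [rad/s]
D = 4000.0           # mean ocean depth [m]

# Coarse latitude-longitude grid (avoiding the poles, where cos(phi) -> 0)
phi = np.deg2rad(np.linspace(-80, 80, 81))      # latitude
lam = np.deg2rad(np.linspace(0, 358, 180))      # longitude
PHI, LAM = np.meshgrid(phi, lam, indexing="ij")

# Placeholder state and a crude semidiurnal-like forcing potential U
zeta = np.zeros_like(PHI)                        # tidal elevation
u = np.zeros_like(PHI)                           # eastward velocity
v = np.zeros_like(PHI)                           # northward velocity
U = 0.1 * g * np.cos(PHI)**2 * np.cos(2 * LAM)

def d_dphi(F):
    return np.gradient(F, phi, axis=0)

def d_dlam(F):
    return np.gradient(F, lam, axis=1)

# Laplace's tidal equations written as tendencies (time derivatives)
dzeta_dt = -(1.0 / (a * np.cos(PHI))) * (d_dlam(u * D) + d_dphi(v * D * np.cos(PHI)))
du_dt = 2 * Omega * np.sin(PHI) * v - (1.0 / (a * np.cos(PHI))) * d_dlam(g * zeta + U)
dv_dt = -2 * Omega * np.sin(PHI) * u - (1.0 / a) * d_dphi(g * zeta + U)

print("max |dzeta/dt|:", np.abs(dzeta_dt).max())
print("max |du/dt|   :", np.abs(du_dt).max())
```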

William Thomson (Lord Kelvin) rewrote Laplace's momentum terms using the curl to find an equation for vorticity. Under certain conditions this can be further rewritten as a conservation of vorticity.

On the figure of the Earth

During the years 1784–1787 he published some papers of exceptional power. Prominent among these is one read in 1783, reprinted as Part II of Théorie du Mouvement et de la figure elliptique des planètes in 1784, and in the third volume of the Mécanique céleste. In this work, Laplace completely determined the attraction of a spheroid on a particle outside it. This is memorable for the introduction into analysis of spherical harmonics or Laplace's coefficients, and also for the development of the use of what we would now call the gravitational potential in celestial mechanics.

Spherical harmonics


In 1783, in a paper sent to the Académie, Adrien-Marie Legendre had introduced what are now known as associated Legendre functions. If two points in a plane have polar coordinates (r, θ) and (r′, θ′), where r′ ≥ r, then, by elementary manipulation, the reciprocal of the distance between the points, d, can be written as:

1/d = 1/√(r² + r′² − 2 r r′ cos(θ′ − θ))

This expression can be expanded in powers of r/r′ using Newton's generalised binomial theorem to give:

1/d = (1/r′) Σk (r/r′)^k P⁰k(cos φ), where φ = θ′ − θ and the sum runs over k = 0, 1, 2, ...

The sequence of functions P⁰k(cos φ) is the set of so-called "associated Legendre functions" and their usefulness arises from the fact that every function of the points on a circle can be expanded as a series of them.
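A quick numerical check of this expansion, added here as an illustration, can be made with SciPy's Legendre polynomials (for the order-zero functions, P⁰k coincides with the Legendre polynomial Pk); the particular values of r, r′ and the angle are arbitrary assumptions.

```python
import numpy as np
from scipy.special import eval_legendre

# Arbitrary test configuration (assumed for illustration): r < r'
r, r_prime = 0.6, 1.7
theta, theta_prime = 0.4, 1.3            # polar angles of the two points (radians)
phi = theta_prime - theta                # angle between the two radii

# Direct reciprocal distance 1/d from the law of cosines
d = np.sqrt(r**2 + r_prime**2 - 2 * r * r_prime * np.cos(phi))
direct = 1.0 / d

# Truncated expansion: (1/r') * sum_k (r/r')^k * P_k(cos(phi))
series = sum((r / r_prime)**k * eval_legendre(k, np.cos(phi))
             for k in range(30)) / r_prime

print(direct, series)   # the two values agree to high precision
```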

Laplace, with scant regard for credit to Legendre, made the non-trivial extension of the result to three dimensions to yield a more general set of functions, the spherical harmonics or Laplace coefficients. The latter term is not in common use now.

Potential theory

This paper is also remarkable for the development of the idea of the scalar potential. The gravitational force acting on a body is, in modern language, a vector, having magnitude and direction. A potential function is a scalar function from which that vector field can be derived (as its gradient). A scalar function is computationally and conceptually easier to deal with than a vector function.

Alexis Clairaut had first suggested the idea in 1743 while working on a similar problem though he was using Newtonian-type geometric reasoning. Laplace described Clairaut's work as being "in the class of the most beautiful mathematical productions". However, Rouse Ball alleges that the idea "was appropriated from Joseph Louis Lagrange, who had used it in his memoirs of 1773, 1777 and 1780". The term "potential" itself was due to Daniel Bernoulli, who introduced it in his 1738 mémoire Hydrodynamica. However, according to Rouse Ball, the term "potential function" was not actually used (to refer to a function V of the coordinates of space in Laplace's sense) until George Green's 1828 An Essay on the Application of Mathematical Analysis to the Theories of Electricity and Magnetism.

Laplace applied the language of calculus to the potential function and showed that it always satisfies the differential equation:

∂²V/∂x² + ∂²V/∂y² + ∂²V/∂z² = 0 (that is, ∇²V = 0)

An analogous result for the velocity potential of a fluid had been obtained some years previously by Leonhard Euler.

Laplace's subsequent work on gravitational attraction was based on this result. The quantity ∇²V has been termed the concentration of V and its value at any point indicates the "excess" of the value of V there over its mean value in the neighbourhood of the point. Laplace's equation, a special case of Poisson's equation, appears ubiquitously in mathematical physics. The concept of a potential occurs in fluid dynamics, electromagnetism and other areas. Rouse Ball speculated that it might be seen as "the outward sign" of one of the a priori forms in Kant's theory of perception.
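As a small added illustration of this "concentration" reading of ∇²V, the sketch below compares the value of the potential V = 1/r of a point mass with its average over nearby points; away from the origin the excess is essentially zero, i.e. Laplace's equation holds. The test point and step size are arbitrary assumptions.

```python
import numpy as np

def V(x, y, z):
    """Potential of a unit point mass at the origin: V = 1/r."""
    return 1.0 / np.sqrt(x**2 + y**2 + z**2)

def discrete_laplacian(f, p, h=1e-3):
    """Finite-difference estimate of div grad f at point p; equivalently,
    6/h^2 times (average of the six neighbours minus the value at the centre)."""
    x, y, z = p
    centre = f(x, y, z)
    neighbours = (f(x + h, y, z) + f(x - h, y, z) +
                  f(x, y + h, z) + f(x, y - h, z) +
                  f(x, y, z + h) + f(x, y, z - h))
    return (neighbours - 6 * centre) / h**2

p = (0.7, -0.3, 1.1)             # any point away from the origin
print(discrete_laplacian(V, p))  # close to 0 (up to finite-difference error)
```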

The spherical harmonics turn out to be critical to practical solutions of Laplace's equation. Laplace's equation in spherical coordinates, such as are used for mapping the sky, can be simplified, using the method of separation of variables into a radial part, depending solely on distance from the centre point, and an angular or spherical part. The solution to the spherical part of the equation can be expressed as a series of Laplace's spherical harmonics, simplifying practical computation.

Planetary and lunar inequalities

Title page of an 1817 copy of Delambre's "Tables écliptiques des satellites de Jupiter", which references Laplace's contributions in its title
Tables in an 1817 copy of Delambre's "Tables écliptiques des satellites de Jupiter" – these calculations were influenced by Laplace's previous discoveries.

Jupiter–Saturn great inequality

Laplace presented a memoir on planetary inequalities in three sections, in 1784, 1785, and 1786. This dealt mainly with the identification and explanation of the perturbations now known as the "great Jupiter–Saturn inequality". Laplace solved a longstanding problem in the study and prediction of the movements of these planets. He showed by general considerations, first, that the mutual action of two planets could never cause large changes in the eccentricities and inclinations of their orbits; but then, even more importantly, that peculiarities arose in the Jupiter–Saturn system because of the near approach to commensurability of the mean motions of Jupiter and Saturn.

In this context commensurability means that the ratio of the two planets' mean motions is very nearly equal to a ratio between a pair of small whole numbers. Two periods of Saturn's orbit around the Sun almost equal five of Jupiter's. The corresponding difference between multiples of the mean motions, (2nJ − 5nS), corresponds to a period of nearly 900 years, and it occurs as a small divisor in the integration of a very small perturbing force with this same period. As a result, the integrated perturbations with this period are disproportionately large, about 0.8 degrees of arc in orbital longitude for Saturn and about 0.3 degrees for Jupiter.
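The near-commensurability and the roughly 900-year period can be checked with a few lines of arithmetic. This is an added illustration; the orbital periods used are rounded modern values, not figures from the article.

```python
# Approximate sidereal orbital periods in years (rounded modern values)
T_jupiter = 11.862
T_saturn = 29.457

# Mean motions in degrees per year
n_jupiter = 360.0 / T_jupiter   # ~30.35 deg/yr
n_saturn = 360.0 / T_saturn     # ~12.22 deg/yr

# Two Saturn periods nearly equal five Jupiter periods ...
print(2 * T_saturn, 5 * T_jupiter)          # ~58.9 vs ~59.3 years

# ... so the combination 2 n_J - 5 n_S varies very slowly
slow_rate = 2 * n_jupiter - 5 * n_saturn    # deg/yr, close to zero
print(360.0 / abs(slow_rate))               # period of the inequality, ~900 years
```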

Further developments of these theorems on planetary motion were given in his two memoirs of 1788 and 1789, but with the aid of Laplace's discoveries, the tables of the motions of Jupiter and Saturn could at last be made much more accurate. It was on the basis of Laplace's theory that Delambre computed his astronomical tables.

Books

Laplace now set himself the task to write a work which should "offer a complete solution of the great mechanical problem presented by the Solar System, and bring theory to coincide so closely with observation that empirical equations should no longer find a place in astronomical tables." The result is embodied in the Exposition du système du monde and the Mécanique céleste.

The former was published in 1796, and gives a general explanation of the phenomena, but omits all details. It contains a summary of the history of astronomy. This summary procured for its author the honour of admission to the forty of the French Academy and is commonly esteemed one of the masterpieces of French literature, though it is not altogether reliable for the later periods of which it treats.

Laplace developed the nebular hypothesis of the formation of the Solar System, first suggested by Emanuel Swedenborg and expanded by Immanuel Kant. This hypothesis remains the most widely accepted model in the study of the origin of planetary systems. According to Laplace's description of the hypothesis, the Solar System evolved from a globular mass of incandescent gas rotating around an axis through its centre of mass. As it cooled, this mass contracted, and successive rings broke off from its outer edge. These rings in their turn cooled, and finally condensed into the planets, while the Sun represented the central core which was still left. On this view, Laplace predicted that the more distant planets would be older than those nearer the Sun.

As mentioned, the idea of the nebular hypothesis had been outlined by Immanuel Kant in 1755, who had also suggested "meteoric aggregations" and tidal friction as causes affecting the formation of the Solar System. Laplace was probably aware of this, but, like many writers of his time, he generally did not reference the work of others.

Laplace's analytical discussion of the Solar System is given in his Mécanique céleste published in five volumes. The first two volumes, published in 1799, contain methods for calculating the motions of the planets, determining their figures, and resolving tidal problems. The third and fourth volumes, published in 1802 and 1805, contain applications of these methods, and several astronomical tables. The fifth volume, published in 1825, is mainly historical, but it gives as appendices the results of Laplace's latest researches. The Mécanique céleste contains many of Laplace's own investigations, but many results are appropriated from other writers with little or no acknowledgement. The work's conclusions, which are described by historians as the organised result of a century of work by other writers as well as Laplace, are presented by Laplace as if they were his discoveries alone.

First pages to Exposition du Système du Monde (1799)

Jean-Baptiste Biot, who assisted Laplace in revising it for the press, says that Laplace himself was frequently unable to recover the details in the chain of reasoning, and, if satisfied that the conclusions were correct, he was content to insert the phrase, "Il est aisé à voir que..." ("It is easy to see that..."). The Mécanique céleste is not only the translation of Newton's Principia Mathematica into the language of differential calculus, but it completes parts whose details Newton had been unable to fill in. The work was carried forward in a more finely tuned form in Félix Tisserand's Traité de mécanique céleste (1889–1896), but Laplace's treatise remains a standard authority. In the years 1784–1787, Laplace produced some memoirs of exceptional power. The significant among these was one issued in 1784, and reprinted in the third volume of the Mécanique céleste. In this work he completely determined the attraction of a spheroid on a particle outside it. This is known for the introduction into analysis of the potential, a useful mathematical concept of broad applicability to the physical sciences.

Optics

Laplace was a supporter of Newton's corpuscular theory of light. In the fourth edition of Mécanique Céleste, Laplace assumed that short-ranged molecular forces were responsible for refraction of the corpuscles of light. Laplace and Étienne-Louis Malus also showed that Huygens' principle of double refraction could be recovered from the principle of least action on light particles.

However in 1815, Augustin-Jean Fresnel presented a new wave theory for diffraction to a commission of the French Academy with the help of François Arago. Laplace was one of the commission members and they ultimately awarded a prize to Fresnel for his new approach.

Influence of gravity on light

Using corpuscular theory, Laplace also came close to propounding the concept of the black hole. He suggested that gravity could influence light and that there could be massive stars whose gravity is so great that not even light could escape from their surface (see escape velocity). However, this insight was so far ahead of its time that it played no role in the history of scientific development.

Arcueil

Laplace's house at Arcueil to the south of Paris

In 1806, Laplace bought a house in Arcueil, then a village and not yet absorbed into the Paris conurbation. The chemist Claude Louis Berthollet was a neighbour – their gardens were not separated – and the pair formed the nucleus of an informal scientific circle, latterly known as the Society of Arcueil. Because of their closeness to Napoleon, Laplace and Berthollet effectively controlled advancement in the scientific establishment and admission to the more prestigious offices. The Society built up a complex pyramid of patronage. In 1806, Laplace was also elected a foreign member of the Royal Swedish Academy of Sciences.

Analytic theory of probabilities

In 1812, Laplace issued his Théorie analytique des probabilités in which he laid down many fundamental results in statistics. The first half of this treatise was concerned with probability methods and problems, the second half with statistical methods and applications. Laplace's proofs are not always rigorous according to the standards of a later day, and his perspective slides back and forth between the Bayesian and non-Bayesian views with an ease that makes some of his investigations difficult to follow, but his conclusions remain basically sound even in those few situations where his analysis goes astray. In 1819, he published a popular account of his work on probability. This book bears the same relation to the Théorie des probabilités that the Système du monde does to the Mécanique céleste. In its emphasis on the analytical importance of probabilistic problems, especially in the context of the approximation of formulas that are functions of large numbers, Laplace's work goes beyond the contemporary view which almost exclusively considered aspects of practical applicability. Laplace's Théorie analytique remained the most influential book of mathematical probability theory to the end of the 19th century. The general relevance for statistics of Laplacian error theory was appreciated only by the end of the 19th century. However, it influenced the further development of a largely analytically oriented probability theory.

Inductive probability

In his Essai philosophique sur les probabilités (1814), Laplace set out a mathematical system of inductive reasoning based on probability, which we would today recognise as Bayesian. He begins the text with a series of principles of probability, the first seven being:

  1. Probability is the ratio of the "favored events" to the total possible events.
  2. The first principle assumes equal probabilities for all events. When this is not true, we must first determine the probabilities of each event. Then, the probability is the sum of the probabilities of all possible favoured events.
  3. For independent events, the probability of the occurrence of all is the probability of each multiplied together.
  4. When two events A and B depend on each other, the probability of the compound event is the probability of A multiplied by the probability that, given A, B will occur.
  5. The probability that A will occur, given that B has occurred, is the probability of A and B occurring divided by the probability of B.
  6. Three corollaries are given for the sixth principle, which amount to Bayesian rule. Where event Ai ∈ {A1, A2, ... An} exhausts the list of possible causes for event B, Pr(B) = Pr(A1, A2, ..., An). Then, when the causes are a priori equally likely, Pr(Ai | B) = Pr(B | Ai) / Σj Pr(B | Aj).
  7. The probability of a future event C is the sum of the products of the probability of each cause Bi, drawn from the event observed A, by the probability that, this cause existing, the future event will occur. Symbolically, Pr(C | A) = Σi Pr(Bi | A) Pr(C | Bi). (A small numerical illustration follows this list.)
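The following small numerical illustration is added here (it is not from Laplace): two equally likely "causes", a fair coin and a coin biased towards heads, with a single observed head.

```python
# Two possible causes, a priori equally likely: a fair coin and a biased coin
prior = {"fair": 0.5, "biased": 0.5}
p_heads = {"fair": 0.5, "biased": 0.75}

# Principles 4/5: probability of the compound event "cause and heads"
joint = {c: prior[c] * p_heads[c] for c in prior}

# Principle 6 (Bayes): probability of each cause given that heads was observed
evidence = sum(joint.values())
posterior = {c: joint[c] / evidence for c in joint}
print(posterior)      # {'fair': 0.4, 'biased': 0.6}

# Principle 7: probability that the next toss is also heads,
# summing over the possible causes weighted by their posterior probabilities
p_next_heads = sum(posterior[c] * p_heads[c] for c in posterior)
print(p_next_heads)   # 0.65
```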

One well-known formula arising from his system is the rule of succession, given as principle seven. Suppose that some trial has only two possible outcomes, labelled "success" and "failure". Under the assumption that little or nothing is known a priori about the relative plausibilities of the outcomes, Laplace derived a formula for the probability that the next trial will be a success.

Pr(success on the next trial) = (s + 1) / (n + 2),

where s is the number of previously observed successes and n is the total number of observed trials. It is still used as an estimator for the probability of an event if we know the event space, but have only a small number of samples.

The rule of succession has been subject to much criticism, partly due to the example which Laplace chose to illustrate it. He calculated that the probability that the sun will rise tomorrow, given that it has never failed to in the past, was

(d + 1) / (d + 2),

where d is the number of times the sun has risen in the past. This result has been derided as absurd, and some authors have concluded that all applications of the Rule of Succession are absurd by extension. However, Laplace was fully aware of the absurdity of the result; immediately following the example, he wrote, "But this number [i.e., the probability that the sun will rise tomorrow] is far greater for him who, seeing in the totality of phenomena the principle regulating the days and seasons, realizes that nothing at the present moment can arrest the course of it."
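The rule is simple enough to transcribe directly; the sketch below is an added illustration, and the sunrise count used is only a placeholder, not a figure from the text.

```python
from fractions import Fraction

def rule_of_succession(s, n):
    """Laplace's rule of succession: probability that the next trial succeeds,
    after s successes in n trials, assuming a uniform prior on the success rate."""
    return Fraction(s + 1, n + 2)

print(rule_of_succession(7, 10))     # 2/3

# Laplace's sunrise example: d past sunrises with no failures
d = 1_000_000                         # placeholder count for illustration
print(rule_of_succession(d, d))       # (d+1)/(d+2), extremely close to 1
```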

Probability-generating function

The method of estimating the ratio of the number of favourable cases to the whole number of possible cases had been previously indicated by Laplace in a paper written in 1779. It consists of treating the successive values of any function as the coefficients in the expansion of another function, with reference to a different variable. The latter is therefore called the probability-generating function of the former. Laplace then shows how, by means of interpolation, these coefficients may be determined from the generating function. Next he attacks the converse problem, and from the coefficients he finds the generating function; this is effected by the solution of a finite difference equation.
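A toy illustration of the idea in modern terms (not Laplace's own notation), using SymPy: the successive values of a sequence become the coefficients of a generating function in an auxiliary variable, and the same values can be read back from that function.

```python
import sympy as sp

t = sp.symbols('t')

# Successive values y(0), y(1), ... of some function (arbitrary example values)
y = [1, 3, 6, 10, 15]

# Their generating function in the auxiliary variable t
G = sum(yk * t**k for k, yk in enumerate(y))

# Converse problem: recover the original values as coefficients of G
recovered = [sp.expand(G).coeff(t, k) for k in range(len(y))]
print(recovered)   # [1, 3, 6, 10, 15]
```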

Least squares and central limit theorem

The fourth chapter of this treatise includes an exposition of the method of least squares, a remarkable testimony to Laplace's command over the processes of analysis. In 1805 Legendre had published the method of least squares, making no attempt to tie it to the theory of probability. In 1809 Gauss had derived the normal distribution from the principle that the arithmetic mean of observations gives the most probable value for the quantity measured; then, turning this argument back upon itself, he showed that, if the errors of observation are normally distributed, the least squares estimates give the most probable values for the coefficients in regression situations. These two works seem to have spurred Laplace to complete work toward a treatise on probability he had contemplated as early as 1783.

In two important papers in 1810 and 1811, Laplace first developed the characteristic function as a tool for large-sample theory and proved the first general central limit theorem. Then in a supplement to his 1810 paper written after he had seen Gauss's work, he showed that the central limit theorem provided a Bayesian justification for least squares: if one were combining observations, each one of which was itself the mean of a large number of independent observations, then the least squares estimates would not only maximise the likelihood function, considered as a posterior distribution, but also minimise the expected posterior error, all this without any assumption as to the error distribution or a circular appeal to the principle of the arithmetic mean. In 1811 Laplace took a different non-Bayesian tack. Considering a linear regression problem, he restricted his attention to linear unbiased estimators of the linear coefficients. After showing that members of this class were approximately normally distributed if the number of observations was large, he argued that least squares provided the "best" linear estimators. Here it is "best" in the sense that it minimised the asymptotic variance and thus both minimised the expected absolute value of the error, and maximised the probability that the estimate would lie in any symmetric interval about the unknown coefficient, no matter what the error distribution. His derivation included the joint limiting distribution of the least squares estimators of two parameters.
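The flavour of the 1811 result can be reproduced by simulation. The sketch below, with arbitrary assumed parameters, repeats a linear regression many times with deliberately non-Gaussian errors; the least-squares slope estimates nevertheless cluster around the true coefficient in an approximately normal way.

```python
import numpy as np

rng = np.random.default_rng(0)

true_intercept, true_slope = 2.0, 0.5     # assumed "unknown" coefficients
n_obs, n_reps = 200, 5000                 # observations per regression, repetitions

x = np.linspace(0.0, 10.0, n_obs)
X = np.column_stack([np.ones(n_obs), x])  # design matrix with intercept column

slopes = np.empty(n_reps)
for i in range(n_reps):
    errors = rng.uniform(-1.0, 1.0, n_obs)          # non-normal, centred errors
    y = true_intercept + true_slope * x + errors
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)    # least-squares estimates
    slopes[i] = beta[1]

print(slopes.mean())   # close to 0.5: the estimator is centred on the true slope
print(slopes.std())    # small spread; a histogram of `slopes` looks Gaussian
```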

Laplace's demon

In 1814, Laplace published what may have been the first scientific articulation of causal determinism:

We may regard the present state of the universe as the effect of its past and the cause of its future. An intellect which at a certain moment would know all forces that set nature in motion, and all positions of all items of which nature is composed, if this intellect were also vast enough to submit these data to analysis, it would embrace in a single formula the movements of the greatest bodies of the universe and those of the tiniest atom; for such an intellect nothing would be uncertain and the future just like the past would be the present to it.

— Pierre Simon Laplace, A Philosophical Essay on Probabilities

This intellect is often referred to as Laplace's demon (in the same vein as Maxwell's demon) and sometimes Laplace's Superman (after Hans Reichenbach). Laplace, himself, did not use the word "demon", which was a later embellishment. As translated into English above, he simply referred to: "Une intelligence ... Rien ne serait incertain pour elle, et l'avenir comme le passé, serait présent à ses yeux."

Even though Laplace is generally credited with having first formulated the concept of causal determinism, in a philosophical context the idea was actually widespread at the time, and can be found as early as 1756 in Maupertuis' 'Sur la Divination'. Likewise, the Jesuit scientist Roger Boscovich first proposed a version of scientific determinism very similar to Laplace's in his 1758 book Theoria philosophiae naturalis.

Laplace transforms

As early as 1744, Euler, followed by Lagrange, had started looking for solutions of differential equations in the form:

z = ∫ X(x) e^(ax) dx  and  z = ∫ X(x) x^A dx

The Laplace transform has the form:

F(s) = ∫₀^∞ e^(−st) f(t) dt

This integral operator transforms a function of time, f(t), into a function of the new variable s, F(s).
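As a quick check of the definition, added here as an illustration, the transform of f(t) = e^(−2t) should be 1/(s + 2); the code below compares numerical integration with that closed form at an arbitrarily chosen value of s.

```python
import numpy as np
from scipy.integrate import quad

def laplace_transform(f, s, upper=50.0):
    """Numerically evaluate F(s) = integral_0^inf e^(-s t) f(t) dt.
    The upper limit is truncated, which is fine for decaying integrands."""
    value, _ = quad(lambda t: np.exp(-s * t) * f(t), 0.0, upper)
    return value

f = lambda t: np.exp(-2.0 * t)
s = 1.5
print(laplace_transform(f, s))   # numerical result
print(1.0 / (s + 2.0))           # analytic transform of e^(-2t): 1/(s+2)
```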

Other discoveries and accomplishments

Mathematics

Among the other discoveries of Laplace in pure and applied mathematics are:

Surface tension

Laplace built upon the qualitative work of Thomas Young to develop the theory of capillary action and the Young–Laplace equation.

Speed of sound

Laplace in 1816 was the first to point out that the speed of sound in air depends on the heat capacity ratio. Newton's original theory gave too low a value, because it does not take account of the adiabatic compression of the air which results in a local rise in temperature and pressure. Laplace's investigations in practical physics were confined to those carried on by him jointly with Lavoisier in the years 1782 to 1784 on the specific heat of various bodies.
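The size of Laplace's correction is easy to reproduce. The sketch below is an added illustration using standard modern reference values for air (assumptions, not figures from the article): Newton's isothermal formula gives roughly 288 m/s, while including the heat capacity ratio γ ≈ 1.4 raises it to the observed value of about 340 m/s.

```python
import math

# Standard conditions for dry air (modern reference values, assumed here)
p = 101325.0      # pressure [Pa]
rho = 1.225       # density [kg/m^3]
gamma = 1.4       # heat capacity ratio c_p / c_v for air

c_newton = math.sqrt(p / rho)            # isothermal (Newton): ~288 m/s, too low
c_laplace = math.sqrt(gamma * p / rho)   # adiabatic (Laplace): ~340 m/s

print(round(c_newton, 1), round(c_laplace, 1))
```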

Politics

Minister of the Interior

In his early years, Laplace was careful never to become involved in politics, or indeed in life outside the Académie des sciences. He prudently withdrew from Paris during the most violent part of the Revolution.

In November 1799, immediately after seizing power in the coup of 18 Brumaire, Napoleon appointed Laplace to the post of Minister of the Interior. The appointment, however, lasted only six weeks, after which Lucien Bonaparte, Napoleon's brother, was given the post. Evidently, once Napoleon's grip on power was secure, there was no need for a prestigious but inexperienced scientist in the government. Napoleon later (in his Mémoires de Sainte Hélène) wrote of Laplace's dismissal as follows:

Geometrician of the first rank, Laplace was not long in showing himself a worse than average administrator; from his first actions in office we recognized our mistake. Laplace did not consider any question from the right angle: he sought subtleties everywhere, conceived only problems, and finally carried the spirit of "infinitesimals" into the administration.

Grattan-Guinness, however, describes these remarks as "tendentious", since there seems to be no doubt that Laplace "was only appointed as a short-term figurehead, a place-holder while Napoleon consolidated power".

From Bonaparte to the Bourbons


Although Laplace was removed from office, it was desirable to retain his allegiance. He was accordingly raised to the senate, and to the third volume of the Mécanique céleste he prefixed a note that of all the truths therein contained the most precious to the author was the declaration he thus made of his devotion towards the peacemaker of Europe. In copies sold after the Bourbon Restoration this was struck out. (Pearson points out that the censor would not have allowed it anyway.) In 1814 it was evident that the empire was falling; Laplace hastened to tender his services to the Bourbons, and in 1817 during the Restoration he was rewarded with the title of marquis.

According to Rouse Ball, the contempt that his more honest colleagues felt for his conduct in the matter may be read in the pages of Paul Louis Courier. His knowledge was useful on the numerous scientific commissions on which he served, and, says Rouse Ball, probably accounts for the manner in which his political insincerity was overlooked.

Roger Hahn in his 2005 biography disputes this portrayal of Laplace as an opportunist and turncoat, pointing out that, like many in France, he had followed the debacle of Napoleon's Russian campaign with serious misgivings. The Laplaces, whose only daughter Sophie had died in childbirth in September 1813, were in fear for the safety of their son Émile, who was on the eastern front with the emperor. Napoleon had originally come to power promising stability, but it was clear that he had overextended himself, putting the nation at peril. It was at this point that Laplace's loyalty began to weaken. Although he still had easy access to Napoleon, his personal relations with the emperor cooled considerably. As a grieving father, he was particularly cut to the quick by Napoleon's insensitivity in an exchange related by Jean-Antoine Chaptal: "On his return from the rout in Leipzig, he [Napoleon] accosted Mr Laplace: 'Oh! I see that you have grown thin—Sire, I have lost my daughter—Oh! that's not a reason for losing weight. You are a mathematician; put this event in an equation, and you will find that it adds up to zero.'"

Political philosophy

In the second edition (1814) of the Essai philosophique, Laplace added some revealing comments on politics and governance. Since it is, he says, "the practice of the eternal principles of reason, justice and humanity that produce and preserve societies, there is a great advantage to adhere to these principles, and a great inadvisability to deviate from them". Noting "the depths of misery into which peoples have been cast" when ambitious leaders disregard these principles, Laplace makes a veiled criticism of Napoleon's conduct: "Every time a great power intoxicated by the love of conquest aspires to universal domination, the sense of liberty among the unjustly threatened nations breeds a coalition to which it always succumbs." Laplace argues that "in the midst of the multiple causes that direct and restrain various states, natural limits" operate, within which it is "important for the stability as well as the prosperity of empires to remain". States that transgress these limits cannot avoid being "reverted" to them, "just as is the case when the waters of the seas whose floor has been lifted by violent tempests sink back to their level by the action of gravity".

About the political upheavals he had witnessed, Laplace formulated a set of principles derived from physics to favour evolutionary over revolutionary change:

Let us apply to the political and moral sciences the method founded upon observation and calculation, which has served us so well in the natural sciences. Let us not offer fruitless and often injurious resistance to the inevitable benefits derived from the progress of enlightenment; but let us change our institutions and the usages that we have for a long time adopted only with extreme caution. We know from past experience the drawbacks they can cause, but we are unaware of the extent of ills that change may produce. In the face of this ignorance, the theory of probability instructs us to avoid all change, especially to avoid sudden changes which in the moral as well as the physical world never occur without a considerable loss of vital force.

In these lines, Laplace expressed the views he had arrived at after experiencing the Revolution and the Empire. He believed that the stability of nature, as revealed through scientific findings, provided the model that best helped to preserve the human species. "Such views," Hahn comments, "were also of a piece with his steadfast character."

In the Essai philosophique, Laplace also illustrates the potential of probabilities in political studies by applying the law of large numbers to justify the candidates’ integer-valued ranks used in the Borda method of voting, with which the new members of the Academy of Sciences were elected. Laplace’s verbal argument is so rigorous that it can easily be converted into a formal proof.
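For reference, the Borda method mentioned here assigns each candidate an integer score from every voter's ranking and sums the scores. The sketch below is an added illustration with made-up ballots, not data from the Academy's elections.

```python
from collections import defaultdict

# Each ballot ranks the candidates from most to least preferred (made-up data)
ballots = [
    ["A", "B", "C"],
    ["A", "C", "B"],
    ["B", "C", "A"],
    ["C", "B", "A"],
    ["A", "B", "C"],
]

def borda_scores(ballots):
    """Each candidate receives (number of candidates - 1 - position) points per ballot."""
    scores = defaultdict(int)
    for ballot in ballots:
        m = len(ballot)
        for position, candidate in enumerate(ballot):
            scores[candidate] += m - 1 - position
    return dict(scores)

print(borda_scores(ballots))   # {'A': 6, 'B': 5, 'C': 4} -> A is elected
```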

Death

Tomb of Pierre-Simon Laplace

Laplace died in Paris on 5 March 1827, which was the same day Alessandro Volta died. His brain was removed by his physician, François Magendie, and kept for many years, eventually being displayed in a roving anatomical museum in Britain. It was reportedly smaller than the average brain. Laplace was buried at Père Lachaise in Paris but in 1888 his remains were moved to Saint Julien de Mailloc in the canton of Orbec and reinterred on the family estate. The tomb is situated on a hill overlooking the village of St Julien de Mailloc, Normandy, France.

Religious opinions

I had no need of that hypothesis

A frequently cited but potentially apocryphal interaction between Laplace and Napoleon purportedly concerns the existence of God. Although the conversation in question did occur, the exact words Laplace used and his intended meaning are not known. A typical version is provided by Rouse Ball:

Laplace went in state to Napoleon to present a copy of his work, and the following account of the interview is well authenticated, and so characteristic of all the parties concerned that I quote it in full. Someone had told Napoleon that the book contained no mention of the name of God; Napoleon, who was fond of putting embarrassing questions, received it with the remark, 'M. Laplace, they tell me you have written this large book on the system of the universe, and have never even mentioned its Creator.' Laplace, who, though the most supple of politicians, was as stiff as a martyr on every point of his philosophy, drew himself up and answered bluntly, Je n'avais pas besoin de cette hypothèse-là. ("I had no need of that hypothesis.") Napoleon, greatly amused, told this reply to Lagrange, who exclaimed, Ah! c'est une belle hypothèse; ça explique beaucoup de choses. ("Ah, it is a fine hypothesis; it explains many things.")

An earlier report, although without the mention of Laplace's name, is found in Antommarchi's The Last Moments of Napoleon (1825):

Je m'entretenais avec L ..... je le félicitais d'un ouvrage qu'il venait de publier et lui demandais comment le nom de Dieu, qui se reproduisait sans cesse sous la plume de Lagrange, ne s'était pas présenté une seule fois sous la sienne. C'est, me répondit-il, que je n'ai pas eu besoin de cette hypothèse. ("While speaking with L ..... I congratulated him on a work which he had just published and asked him how the name of God, which appeared endlessly in the works of Lagrange, didn't occur even once in his. He replied that he had no need of that hypothesis.")

In 1884, however, the astronomer Hervé Faye affirmed that this account of Laplace's exchange with Napoleon presented a "strangely transformed" (étrangement transformée) or garbled version of what had actually happened. It was not God that Laplace had treated as a hypothesis, but merely his intervention at a determinate point:

In fact Laplace never said that. Here, I believe, is what truly happened. Newton, believing that the secular perturbations which he had sketched out in his theory would in the long run end up destroying the Solar System, says somewhere that God was obliged to intervene from time to time to remedy the evil and somehow keep the system working properly. This, however, was a pure supposition suggested to Newton by an incomplete view of the conditions of the stability of our little world. Science was not yet advanced enough at that time to bring these conditions into full view. But Laplace, who had discovered them by a deep analysis, would have replied to the First Consul that Newton had wrongly invoked the intervention of God to adjust from time to time the machine of the world (la machine du monde) and that he, Laplace, had no need of such an assumption. It was not God, therefore, that Laplace treated as a hypothesis, but his intervention in a certain place.

Laplace's younger colleague, the astronomer François Arago, who gave his eulogy before the French Academy in 1827, told Faye of an attempt by Laplace to keep the garbled version of his interaction with Napoleon out of circulation. Faye writes:

I have it on the authority of M. Arago that Laplace, warned shortly before his death that that anecdote was about to be published in a biographical collection, had requested him [Arago] to demand its deletion by the publisher. It was necessary to either explain or delete it, and the second way was the easiest. But, unfortunately, it was neither deleted nor explained.

The Swiss-American historian of mathematics Florian Cajori appears to have been unaware of Faye's research, but in 1893 he came to a similar conclusion. Stephen Hawking said in 1999, "I don't think that Laplace was claiming that God does not exist. It's just that he doesn't intervene, to break the laws of Science."

The only eyewitness account of Laplace's interaction with Napoleon is from the entry for 8 August 1802 in the diary of the British astronomer Sir William Herschel:

The first Consul then asked a few questions relating to Astronomy and the construction of the heavens to which I made such answers as seemed to give him great satisfaction. He also addressed himself to Mr Laplace on the same subject, and held a considerable argument with him in which he differed from that eminent mathematician. The difference was occasioned by an exclamation of the first Consul, who asked in a tone of exclamation or admiration (when we were speaking of the extent of the sidereal heavens): 'And who is the author of all this!' Mons. De la Place wished to shew that a chain of natural causes would account for the construction and preservation of the wonderful system. This the first Consul rather opposed. Much may be said on the subject; by joining the arguments of both we shall be led to 'Nature and nature's God'.

Since this makes no mention of Laplace's saying, "I had no need of that hypothesis," Daniel Johnson argues that "Laplace never used the words attributed to him." Arago's testimony, however, appears to imply that he did, only not in reference to the existence of God.

Views on God

Raised a Catholic, Laplace appears in adult life to have inclined to deism (presumably his considered position, since it is the only one found in his writings). However, some of his contemporaries thought he was an atheist, while a number of recent scholars have described him as agnostic.

Faye thought that Laplace "did not profess atheism", but Napoleon, on Saint Helena, told General Gaspard Gourgaud, "I often asked Laplace what he thought of God. He owned that he was an atheist." Roger Hahn, in his biography of Laplace, mentions a dinner party at which "the geologist Jean-Étienne Guettard was staggered by Laplace's bold denunciation of the existence of God." It appeared to Guettard that Laplace's atheism "was supported by a thoroughgoing materialism." But the chemist Jean-Baptiste Dumas, who knew Laplace well in the 1820s, wrote that Laplace "provided materialists with their specious arguments, without sharing their convictions."

Hahn states: "Nowhere in his writings, either public or private, does Laplace deny God's existence." Expressions occur in his private letters that appear inconsistent with atheism. On 17 June 1809, for instance, he wrote to his son, "Je prie Dieu qu'il veille sur tes jours. Aie-Le toujours présent à ta pensée, ainsi que ton père et ta mère [I pray that God watches over your days. Let Him be always present to your mind, as also your father and your mother]." Ian S. Glass, quoting Herschel's account of the celebrated exchange with Napoleon, writes that Laplace was "evidently a deist like Herschel".

In Exposition du système du monde, Laplace quotes Newton's assertion that "the wondrous disposition of the Sun, the planets and the comets, can only be the work of an all-powerful and intelligent Being." This, says Laplace, is a "thought in which he [Newton] would be even more confirmed, if he had known what we have shown, namely that the conditions of the arrangement of the planets and their satellites are precisely those which ensure its stability." By showing that the "remarkable" arrangement of the planets could be entirely explained by the laws of motion, Laplace had eliminated the need for the "supreme intelligence" to intervene, as Newton had "made" it do. Laplace cites with approval Leibniz's criticism of Newton's invocation of divine intervention to restore order to the Solar System: "This is to have very narrow ideas about the wisdom and the power of God." He evidently shared Leibniz's astonishment at Newton's belief "that God has made his machine so badly that unless he affects it by some extraordinary means, the watch will very soon cease to go."

In a group of manuscripts, preserved in relative secrecy in a black envelope in the library of the Académie des sciences and published for the first time by Hahn, Laplace mounted a deist critique of Christianity. It is, he writes, the "first and most infallible of principles ... to reject miraculous facts as untrue." As for the doctrine of transubstantiation, it "offends at the same time reason, experience, the testimony of all our senses, the eternal laws of nature, and the sublime ideas that we ought to form of the Supreme Being." It is the sheerest absurdity to suppose that "the sovereign lawgiver of the universe would suspend the laws that he has established, and which he seems to have maintained invariably."

Laplace also ridiculed the use of probability in theology. Even following Pascal's reasoning presented in Pascal's wager, it is not worth making a bet, for the hope of profit – equal to the product of the value of the testimonies (infinitely small) and the value of the happiness they promise (which is significant but finite) – must necessarily be infinitely small.

In old age, Laplace remained curious about the question of God and frequently discussed Christianity with the Swiss astronomer Jean-Frédéric-Théodore Maurice. He told Maurice that "Christianity is quite a beautiful thing" and praised its civilising influence. Maurice thought that the basis of Laplace's beliefs was, little by little, being modified, but that he held fast to his conviction that the invariability of the laws of nature did not permit of supernatural events. After Laplace's death, Poisson told Maurice, "You know that I do not share your [religious] opinions, but my conscience forces me to recount something that will surely please you." When Poisson had complimented Laplace about his "brilliant discoveries", the dying man had fixed him with a pensive look and replied, "Ah! We chase after phantoms [chimères]." These were his last words, interpreted by Maurice as a realisation of the ultimate "vanity" of earthly pursuits. Laplace received the last rites from the curé of the Missions Étrangères (in whose parish he was to be buried) and the curé of Arcueil.

According to his biographer, Roger Hahn, it is "not credible" that Laplace "had a proper Catholic end", and he "remained a skeptic" to the very end of his life. Laplace in his last years has been described as an agnostic.

Excommunication of a comet

In 1470 the humanist scholar Bartolomeo Platina wrote that Pope Callixtus III had asked for prayers for deliverance from the Turks during a 1456 appearance of Halley's Comet. Platina's account does not accord with Church records, which do not mention the comet. Laplace is alleged to have embellished the story by claiming the Pope had "excommunicated" Halley's comet. What Laplace actually said, in Exposition du système du monde (1796), was that the Pope had ordered the comet to be "exorcised" (conjuré). It was Arago, in Des Comètes en général (1832), who first spoke of an excommunication.

Honors

Quotations

  • I had no need of that hypothesis. ("Je n'avais pas besoin de cette hypothèse-là", allegedly as a reply to Napoleon, who had asked why he hadn't mentioned God in his book on astronomy.)
  • It is therefore obvious that ... (Frequently used in the Celestial Mechanics when he had proved something and mislaid the proof, or found it clumsy. Notorious as a signal for something true, but hard to prove.)
  • If we seek a cause wherever we perceive symmetry, it is not that we regard a symmetrical event as less possible than the others, but, since this event ought to be the effect of a regular cause or that of chance, the first of these suppositions is more probable than the second.
  • The more extraordinary the event, the greater the need of its being supported by strong proofs.
  • "We are so far from knowing all the agents of nature and their diverse modes of action that it would not be philosophical to deny phenomena solely because they are inexplicable in the actual state of our knowledge. But we ought to examine them with an attention all the more scrupulous as it appears more difficult to admit them."
    • This is restated in Theodore Flournoy's work From India to the Planet Mars as the Principle of Laplace or, "The weight of the evidence should be proportioned to the strangeness of the facts."
    • Most often repeated as "The weight of evidence for an extraordinary claim must be proportioned to its strangeness." (see also: Sagan standard)
  • This simplicity of ratios will not appear astonishing if we consider that all the effects of nature are only mathematical results of a small number of immutable laws.
  • Infinitely varied in her effects, nature is only simple in her causes.
  • What we know is little, and what we are ignorant of is immense. (Fourier comments: "This was at least the meaning of his last words, which were articulated with difficulty.")
  • One sees in this essay that the theory of probabilities is basically only common sense reduced to a calculus. It makes one estimate accurately what right-minded people feel by a sort of instinct, often without being able to give a reason for it.
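
The principle that the weight of evidence should be proportioned to the strangeness of the facts has a natural reading in terms of Bayes' rule, whose modern form Laplace did much to develop. The following minimal Python sketch uses made-up numbers (not Laplace's own) to show that the less probable a claim is a priori, the stronger the evidence must be before the claim becomes credible.

```python
def posterior(prior, likelihood_ratio):
    """Posterior probability of a claim, given its prior probability and the
    likelihood ratio of the observed evidence (Bayes' rule in odds form)."""
    prior_odds = prior / (1 - prior)
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1 + posterior_odds)

# An ordinary claim (prior 0.5) and an extraordinary one (prior 1e-6),
# confronted with the same moderately strong evidence (likelihood ratio 100).
for prior in (0.5, 1e-6):
    print(f"prior={prior:g}  posterior={posterior(prior, 100):.6f}")

# The extraordinary claim only becomes probable once the evidence itself is
# extraordinary, e.g. a likelihood ratio of ten million.
print(f"posterior with LR=1e7: {posterior(1e-6, 1e7):.3f}")
```
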
False balance

    From Wikipedia, the free encyclopedia
    Among climate scientists in 2013, 97% of peer-reviewed papers that took a position on the cause of global warming said that humans are responsible, while 3% said humans were not. Meanwhile, 69% of Fox News guests on Intergovernmental Panel on Climate Change stories in late 2013 were "climate contrarians".

    False balance, known colloquially as bothsidesism, is a media bias in which journalists present an issue as being more balanced between opposing viewpoints than the evidence supports. Journalists may present evidence and arguments out of proportion to the actual evidence for each side, or may omit information that would establish one side's claims as baseless. False balance has been cited as a cause of misinformation.

    False balance is a bias which often stems from an attempt to avoid bias, and it gives unsupported or dubious positions an illusion of respectability. It creates a public perception that some issues are scientifically contentious when in reality they are not, thereby creating doubt about the state of scientific research. This can be exploited by interest groups such as corporations like the fossil fuel industry or the tobacco industry, or ideologically motivated activists such as vaccination opponents or creationists.

    Examples of false balance in reporting on science issues include the topics of human-caused climate change versus natural climate variability, the health effects of tobacco, the disproven relation between thiomersal and autism, alleged negative side effects of the HPV vaccine, and evolution versus intelligent design.

    Description and origin

    False balance emerges from the ideal of journalistic objectivity, where factual news is presented in a way that allows the reader to make determinations about how to interpret the facts, and interpretations or arguments around those facts are left to the opinion pages. Because many newsworthy events have two or more opposing camps making competing claims, news media are responsible for reporting all (credible or reasonable) opposing positions, along with verified facts that may support one or the other side of an issue. At one time, when false balance was prevalent, news media sometimes reported all positions as though they were equally credible, even though the facts clearly contradicted a position, or there was a substantial consensus on one side of an issue, and only a fringe or nascent theory supporting the other side.

    More recently, in contrast to prior decades, most media are willing to advocate for a particular viewpoint which they regard as better evidenced. For instance, claims that the Earth is not warming are regularly referred to in news reports (not only editorials) as "denial", "misleading", or "debunked". Prior to this shift, media would sometimes list all positions without clarifying that one position is known or generally agreed to be false.

    Unlike most other media biases, false balance may result from an attempt to avoid bias; producers and editors may consider treating competing viewpoints fairly—i.e., in proportion to their actual merits and significance—as equivalent to treating them equally, giving them equal time to present their views, even though one of the viewpoints may be overwhelmingly dominant. Media would then present two opposing viewpoints on an issue as equally credible, or present a major issue on one side of a debate as having the same weight as a minor one on the other. False balance can also originate from other motives such as sensationalism, where producers and editors may feel that a story portrayed as a contentious debate will be more commercially successful than a more accurate (or widely-agreed) account of the issue.

    Science journalist Dirk Steffens mocked the practice as comparable to inviting a flat Earther to debate with an astrophysicist over the shape of the Earth, as if the truth could be found somewhere in the middle. Liz Spayd of The New York Times wrote: "The problem with false balance doctrine is that it masquerades as rational thinking."

    Examples

    Climate change

    A 2022 study found that the public in many countries substantially underestimates the degree of scientific consensus that humans are causing climate change. Studies from 2019–2021 found the scientific consensus to range from 98.7% to 100%.
     
    Research found that 80–90% of Americans underestimate the prevalence of support for major climate change mitigation policies and of climate concern. While 66–80% of Americans support these policies, Americans estimate that only 37–43% do. Researchers have called this misperception a false social reality, a form of pluralistic ignorance.

    Although the scientific community almost unanimously attributes the majority of global warming since 1950 to the effects of the Industrial Revolution, there are a very small number – a few dozen scientists out of tens of thousands – who dispute the conclusion. Giving equal voice to scientists on both sides makes it seem as though there is serious disagreement within the scientific community, when in fact there is an overwhelming scientific consensus that anthropogenic global warming exists.

    MMR vaccine controversy

    Observers have criticized the involvement of mass media in the MMR vaccine controversy, what is known as "science by press conference", alleging that the media provided Andrew Wakefield's study with more credibility than it deserved. A March 2007 paper in BMC Public Health by Shona Hilton, Mark Petticrew, and Kate Hunt postulated that media reports on Wakefield's study had "created the misleading impression that the evidence for the link with autism was as substantial as the evidence against". Earlier papers in Communication in Medicine and the British Medical Journal concluded that media reports provided a misleading picture of the level of support for Wakefield's hypothesis.

    Microgeneration

    From Wikipedia, the free encyclopedia
    https://en.wikipedia.org/wiki/Microgeneration
    A group of small-scale wind turbines providing electricity to a community in Dali, Yunnan, China

    Microgeneration is the small-scale production of heat or electric power from a "low carbon source," as an alternative or supplement to traditional centralized grid-connected power.

    Microgeneration technologies include small-scale wind turbines, micro hydro, solar PV systems, microbial fuel cells, ground source heat pumps, and micro combined heat and power installations. These technologies are often combined to form a hybrid power solution that can offer superior performance and lower cost than a system based on one generator.

    History

    In the United States, microgeneration had its roots in the 1973 oil crisis and the Yom Kippur War, which prompted innovation.

    On June 20, 1979, 32 solar panels were installed at the White House. The panels were dismantled seven years later, during the Reagan administration.

    The use of solar water heating dates back to before 1900, while the first practical solar cell was developed by Bell Labs in 1954. The University of Delaware is credited with creating one of the first solar buildings, "Solar One," in 1973. It ran on a combination of solar thermal and solar photovoltaic power; rather than using discrete solar panels, the solar collection was integrated into the rooftop.

    Technologies and set-up

    Power plant

    In addition to the electricity production plant itself (e.g. wind turbine or solar panels), infrastructure for energy storage and power conversion, and usually a hook-up to the regular electricity grid, is needed. Although a grid hook-up is not essential, it helps to decrease costs by allowing financial compensation schemes. In the developing world, however, the start-up cost for this equipment is generally too high, leaving no choice but to opt for alternative set-ups.

    Extra equipment needed besides the power plant

    A complete PV-solar system

    The equipment required to set up a working system, whether for off-grid generation or for a hook-up to the electricity grid, is termed the balance of system; for PV systems it is composed of the following parts:

    Energy storage apparatus

    A major issue with off-grid solar and wind systems is that power is often needed when the sun is not shining or when the wind is calm; such storage is generally not required for purely grid-connected systems. Storage usually takes the form of a battery bank, or other means of energy storage (e.g. hydrogen fuel cells, flywheel energy storage, pumped-storage hydroelectricity, compressed air tanks, ...).

    An inverter is needed for converting DC battery power into AC as required by many appliances, or for feeding excess power into a commercial power grid.
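
    As a rough illustration of how the storage and conversion equipment above is sized, here is a minimal Python sketch. All figures (a 3 kWh/day load, two days of autonomy, 50% allowable depth of discharge, a 12 V battery bank, a 1.5 kW peak load) are hypothetical assumptions for illustration, not a substitute for a proper system design.

```python
# Hypothetical off-grid PV sizing sketch: battery bank and inverter.
daily_load_wh = 3000        # assumed average daily consumption, Wh
days_of_autonomy = 2        # days the bank must carry the load without sun
depth_of_discharge = 0.5    # usable fraction of battery capacity
system_voltage = 12         # battery bank voltage, V
peak_load_w = 1500          # assumed largest simultaneous AC load, W
inverter_margin = 1.25      # headroom for surge and inefficiency

battery_wh = daily_load_wh * days_of_autonomy / depth_of_discharge
battery_ah = battery_wh / system_voltage
inverter_w = peak_load_w * inverter_margin

print(f"Battery bank: {battery_wh:.0f} Wh ({battery_ah:.0f} Ah at {system_voltage} V)")
print(f"Inverter rating: at least {inverter_w:.0f} W continuous")
```
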

    Safety equipment

    Usually, in microgeneration for homes in the developing world, prefabricated house-wiring systems (such as wiring harnesses or prefabricated distribution units) are used instead of conventional site-built wiring. Simplified house-wiring boxes and cables, known as wiring harnesses, can simply be bought and mounted into the building without requiring much knowledge of the wiring itself, so even people without technical expertise can install them. They are also comparatively cheap and offer safety advantages.

    Small-scale (DIY) generation system

    Wind turbine specific

    With wind turbines, hydroelectric plants and the like, the extra equipment needed is more or less the same as with PV systems (depending on the type of wind turbine used), but also includes:

    • a manual disconnect switch
    • foundation for the tower
    • grounding system
    • shutoff and/or dummy-load devices for use in high wind when power generated exceeds current needs and storage system capacity.

    Vibro-wind power

    A new wind energy technology, called Vibro-Wind, is being developed that converts wind-induced vibrations into electricity. It can use winds of lower strength than conventional wind turbines require, and it can be placed in almost any location.

    A prototype consisted of a panel mounted with oscillators made out of pieces of foam. The conversion from mechanical to electrical energy is done using a piezoelectric transducer, a device made of a ceramic or polymer that generates an electric charge when stressed. The building of this prototype was led by Francis Moon, professor of mechanical and aerospace engineering at Cornell University, with funding from the Atkinson Center for a Sustainable Future at Cornell. Vibro-wind power is still at an early stage of development and is not yet commercially viable; significant progress will be needed to commercialize it.

    Possible set-ups

    Several microgeneration set-ups are possible. These are:

    • Off-the-grid set-ups, which include:
      • Off-the-grid set-ups without energy storage (e.g., battery, ...)
      • Off-the-grid set-ups with energy storage (e.g., battery, ...)
      • Battery charging stations
    • Grid-connected set-ups, which include:
      • Grid-connected set-ups with backup to power critical loads
      • Grid-connected set-ups without a financial compensation scheme
      • Grid-connected set-ups with net metering (see the sketch below)
      • Grid-connected set-ups with net purchase and sale

    All set-ups mentioned can work either on a single power plant or a combination of power plants (in which case it is called a hybrid power system). For safety, grid-connected set-ups must automatically switch off or enter an "anti-islanding mode" when there is a failure of the mains power supply. For more about this, see the article on the condition of islanding.
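
    As a rough illustration of how the grid-connected compensation schemes listed above differ, the following Python sketch compares net metering (consumption and generation offset one another at the retail rate) with net purchase and sale (imports are billed at the retail rate while exports are credited at a lower export rate). The monthly quantities and tariff figures are hypothetical.

```python
# Hypothetical monthly figures for a grid-connected microgeneration set-up.
consumed_kwh = 400      # electricity drawn from the grid over the month
generated_kwh = 300     # electricity fed into the grid over the month
retail_rate = 0.30      # price paid per kWh imported (assumed)
export_rate = 0.10      # credit per kWh exported (assumed, net purchase and sale)

# Net metering: only the net consumption is billed, at the retail rate.
net_metering_bill = max(consumed_kwh - generated_kwh, 0) * retail_rate

# Net purchase and sale: imports and exports are priced separately.
purchase_and_sale_bill = consumed_kwh * retail_rate - generated_kwh * export_rate

print(f"Net metering bill:        {net_metering_bill:6.2f}")
print(f"Net purchase & sale bill: {purchase_and_sale_bill:6.2f}")
```
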

    Costs

    Depending on the set-up chosen (financial compensation scheme, power plant, extra equipment), prices may vary. According to Practical Action, home microgeneration that uses the latest cost-saving technology (wiring harnesses, ready boards, cheap DIY power plants such as DIY wind turbines) can keep household expenditure extremely low; Practical Action mentions that many households in farming communities in the developing world spend less than $1 on electricity per month. If matters are handled less economically (using more commercial systems and approaches), costs will be dramatically higher. In most cases, however, there is still a financial advantage to microgeneration from renewable power plants, often in the range of 50-90%, since local production avoids transmission losses on long-distance power lines and the Joule-effect losses in transformers, in which 8-15% of the energy is generally lost.

    In the UK, the government offers both grants and feed-in payments to help businesses, communities and private homes to install these technologies. Businesses can write the full cost of installation off against taxable profits, whilst homeowners receive a flat-rate grant or payments per kWh of electricity generated and fed back into the national grid. Community organizations can also receive up to £200,000 in grant funding.

    In the UK, the Microgeneration Certification Scheme provides approval for microgeneration installers and products, which is a mandatory requirement of funding schemes such as the Feed-in Tariffs and the Renewable Heat Incentive.

    Grid parity

    Grid parity (or socket parity) occurs when an alternative energy source can generate electricity at a levelized cost of energy (LCOE) that is less than or equal to the price of purchasing power from the electricity grid. Reaching grid parity is considered to be the point at which an energy source becomes a contender for widespread development without subsidies or government support. It is widely believed that a wholesale shift in generation to these forms of energy will take place when they reach grid parity.
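
    Grid parity can be made concrete with a simple levelized-cost calculation. The sketch below uses hypothetical figures for capital cost, annual output, maintenance, lifetime, discount rate and grid price (not data from any particular market) to compute an LCOE and compare it with the assumed grid price.

```python
# Minimal LCOE sketch for a small PV system (all figures are assumptions).
capital_cost = 6000.0       # up-front installed cost
annual_output_kwh = 3400.0  # expected yearly generation
annual_om_cost = 50.0       # yearly operation and maintenance
lifetime_years = 25
discount_rate = 0.05
grid_price = 0.30           # assumed retail price per kWh

# Discount both costs and energy over the system lifetime.
disc = [(1 + discount_rate) ** -t for t in range(1, lifetime_years + 1)]
discounted_costs = capital_cost + sum(annual_om_cost * d for d in disc)
discounted_energy = sum(annual_output_kwh * d for d in disc)

lcoe = discounted_costs / discounted_energy
print(f"LCOE: {lcoe:.3f} per kWh")
print("Grid parity reached" if lcoe <= grid_price else "Still above grid price")
```
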

    Grid parity was reached in some locations with onshore wind power around 2000, and solar power achieved it for the first time in Spain in 2013.

    Comparison with large-scale generation


    • Other names
      • Microgeneration: distributed generation.
      • Large-scale generation: centralized generation.
    • Economy of scale
      • Microgeneration: necessitates mass production of generators, which creates an associated environmental impact; systems are less expensive when produced in quantity.
      • Large-scale generation: depends on the power source, but is generally more economical given the larger scale of the generators.
      • Notes: photovoltaics, where similar panels are used in all applications, are affected less by this, whilst wind power, where output scales approximately as the square of size, is affected greatly.
    • Ability to meet needs
      • Microgeneration: supply within the limits of the installed generation or storage. For wind and solar energy, actual production is only a fraction of nameplate capacity; fuel-based systems are fully dispatchable; solar panels are simple and reliable and can provide a little electricity at a reasonable cost.
      • Large-scale generation: generally a more flexible supply, within the limits of local transmission, as long as the grid is effectively maintained.
    • Environmental impact
      • Microgeneration: a larger number of smaller devices may lead to greater impact from device production, especially with wind.
      • Large-scale generation: larger generators can have more local impact, and transmission equipment can also disrupt areas; however, the overall impact is likely reduced due to economies of scale.
      • Notes: commentators claim that householders who buy their electricity on green energy tariffs can reduce their carbon usage further than with microgeneration, and at a lower cost.
    • Transmission losses
      • Microgeneration: typically closer to the end user, resulting in potentially fewer losses (potentially, because the lack of scale at each individual installation may lead to the use of less efficient transmission technologies).
      • Large-scale generation: a significant proportion of electrical power is lost during transmission (approximately 8% in the United Kingdom, according to the BBC Radio 4 Today programme in March 2006).
    • Changes to the grid
      • Microgeneration: reduces the transmission load and thus the need for grid upgrades.
      • Large-scale generation: increases the power transmitted and thus the need for grid upgrades.
    • Grid failure event
      • Microgeneration: electricity may still be available to the local area in many circumstances.
      • Large-scale generation: electricity may not be available because of the grid failure.
    • Generator failure event
      • Microgeneration: electricity will not be available, except in a hybrid scenario.
      • Large-scale generation: electricity is very likely to be available because of grid redundancy.
    • Consumer choices
      • Microgeneration: may choose to purchase any legal system.
      • Large-scale generation: may choose among the offerings of the power companies, depending on the market.
    • Reliability and maintenance requirements
      • Microgeneration: photovoltaics, Stirling engines, and certain other systems are usually extremely reliable [citation needed] and can generate electric power continuously for many thousands of hours with little or no maintenance; however, unreliable systems will incur additional maintenance labour and costs.
      • Large-scale generation: managed by the power company; grid reliability varies with location.
    • Waste heat by-product
      • Microgeneration: can be used for heating purposes in cold climates, greatly increasing efficiency and offsetting total energy costs; this method is known as micro combined heat and power (microCHP).
      • Large-scale generation: used in some privately owned industrial combined heat and power (CHP) installations; it is also used in large-scale applications, where it is called district heating and uses the heat that is normally exhausted by inefficient power plants.


    Most forms of microgeneration can dynamically balance the supply and demand for electric power, by producing more power during periods of high demand and high grid prices, and less power during periods of low demand and low grid prices. This "hybridized grid" allows both microgeneration systems and large power plants to operate with greater energy efficiency and cost effectiveness than either could alone.
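
    The balancing behaviour described above can be illustrated with a toy dispatch rule: a small dispatchable generator (for example a microCHP unit) runs only in the hours when the grid price is highest, up to a daily running budget. The hourly prices, capacity and budget in this Python sketch are invented purely for illustration.

```python
# Toy price-responsive dispatch for a 1 kW dispatchable micro-generator.
hourly_prices = [0.08, 0.07, 0.07, 0.08, 0.10, 0.15, 0.22, 0.30,
                 0.28, 0.20, 0.15, 0.12, 0.12, 0.14, 0.18, 0.25,
                 0.32, 0.35, 0.30, 0.22, 0.15, 0.12, 0.10, 0.09]  # assumed, per kWh
capacity_kw = 1.0
max_daily_kwh = 6.0   # assumed daily running budget

# Run the generator in the most expensive hours first.
hours_by_price = sorted(range(24), key=lambda h: hourly_prices[h], reverse=True)
run_hours = hours_by_price[:int(max_daily_kwh / capacity_kw)]

revenue = sum(hourly_prices[h] * capacity_kw for h in run_hours)
print(f"Run during hours {sorted(run_hours)}; value of output: {revenue:.2f}")
```
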

    Domestic self-sufficiency

    Horizontal Axis Micro-Windmill in Lahore, 1000Watt Rated Output

    Microgeneration can be integrated as part of a self-sufficient house and is typically complemented with other technologies such as domestic food production systems (permaculture and agroecosystem), rainwater harvesting, composting toilets or even complete greywater treatment systems. Domestic microgeneration technologies include: photovoltaic solar systems, small-scale wind turbines, micro combined heat and power installations, biodiesel and biogas.

    A small Quietrevolution QR5 Gorlov type vertical axis wind turbine in Bristol, England. Measuring 3 m in diameter and 5 m high, it has a nameplate rating of 6.5 kW to the grid.

    Private generation decentralizes the generation of electricity and may also centralize the pooling of surplus energy. Solar shingles and panels are both available, though they have to be purchased. The capital cost is high, but it saves money in the long run. With appropriate power conversion, solar PV panels can run the same electric appliances as electricity from other sources.

    Passive solar water heating is another effective way of utilizing solar power. The simplest method is a solar (or black plastic) bag: set 5 to 20 litres (1 to 5 US gal) of water out in the sun and allow it to heat, enough for a quick warm shower.

    The 'breadbox' heater can be constructed easily with recycled materials and basic building experience. It consists of a single black tank, or an array of them, mounted inside a sturdy box insulated on the bottom and sides. The lid, either horizontal or angled to catch the most sun, should be well sealed and made of a transparent glazing material (glass, fiberglass, or high-temperature-resistant moulded plastic). Cold water enters the tank near the bottom, heats, and rises to the top, where it is piped back into the home.
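
    A rough feel for how long these passive heaters take can be had from the specific heat of water. The Python sketch below assumes a 20-litre bag, a 25 °C temperature rise and about 600 W of usable solar gain; these are illustrative numbers only, and real performance depends on weather, glazing and insulation.

```python
# Back-of-the-envelope heating time for a solar water bag (illustrative numbers).
volume_litres = 20.0
temp_rise_c = 25.0           # e.g. from 15 degC to 40 degC
specific_heat = 4186.0       # J per kg per K for water
usable_solar_gain_w = 600.0  # assumed heat actually absorbed by the bag

energy_j = volume_litres * specific_heat * temp_rise_c  # 1 litre of water ~ 1 kg
hours = energy_j / usable_solar_gain_w / 3600.0

print(f"Energy required: {energy_j / 3.6e6:.2f} kWh")
print(f"Approximate heating time: {hours:.1f} hours of good sun")
```
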

    Ground source heat pumps exploit stable ground temperatures by benefiting from the thermal energy storage capacity of the ground. They typically have a high initial cost and are difficult for the average homeowner to install. They use electric motors to transfer heat from the ground with a high level of efficiency; the electricity may come from renewable sources or from external non-renewable sources.
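
    The efficiency of a ground source heat pump is usually expressed as a coefficient of performance (COP), the ratio of heat delivered to electricity consumed. A minimal Python sketch with assumed figures (a COP of 4 and a 12,000 kWh annual heating demand) shows how much of the delivered heat effectively comes from the ground rather than from the electricity supply.

```python
# Ground source heat pump: heat delivered versus electricity consumed.
cop = 4.0                         # assumed coefficient of performance
annual_heat_demand_kwh = 12000.0  # assumed space- and water-heating demand

electricity_needed_kwh = annual_heat_demand_kwh / cop
heat_from_ground_kwh = annual_heat_demand_kwh - electricity_needed_kwh

print(f"Electricity consumed: {electricity_needed_kwh:.0f} kWh/year")
print(f"Heat drawn from the ground: {heat_from_ground_kwh:.0f} kWh/year")
```
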

    Fuel

    Biodiesel is an alternative fuel that can power diesel engines and can be used for domestic heating. Numerous forms of biomass, including soybeans, peanuts, and algae (which has the highest yield), can be used to make biodiesel. Recycled vegetable oil (from restaurants) can also be converted into biodiesel.

    Biogas is another alternative fuel, created from the waste products of animals. Though less practical for most homes, a farm environment provides a perfect place to implement the process. By mixing the waste and water in a tank with space left at the top, methane is produced naturally and collects in that airspace. This methane can be piped out and burned for cooking.

    Government policy

    Policymakers were accustomed to an energy system based on big, centralised projects like nuclear or gas-fired power stations. A change of mindset and incentives is bringing microgeneration into the mainstream. Planning regulations may also require streamlining to facilitate the retrofitting of microgenerating facilities onto homes and buildings.

    Most developed countries, including Canada (Alberta), the United Kingdom, Germany, Poland, Israel and the USA, have laws allowing microgenerated electricity to be sold into the national grid.

    Alberta, Canada

    In January 2009, the Government of Alberta's Micro-Generation Regulation came into effect, setting rules that allow Albertans to generate their own environmentally friendly electricity and receive credit for any power they send into the electricity grid.

    Poland

    In December 2014, the Polish government was due to vote on a bill calling for microgeneration, as well as large-scale wind farms in the Baltic Sea, as a way to cut CO2 emissions from the country's coal plants and to reduce Polish dependence on Russian gas. Under the terms of the new bill, individuals and small businesses generating up to 40 kW of 'green' energy would receive 100% of the market price for any electricity they feed back into the grid, and businesses setting up large-scale offshore wind farms in the Baltic would be eligible for state subsidies. The costs of implementing these new policies would be offset by a new tax on non-sustainable energy use.

    United States

    The United States has inconsistent energy generation policies across its 50 states. State energy policies and laws may vary significantly with location. Some states have imposed requirements on utilities that a certain percentage of total power generation be from renewable sources. For this purpose, renewable sources include wind, hydroelectric, and solar power whether from large or microgeneration projects. Further, in some areas transferable "renewable source energy" credits are needed by power companies to meet these mandates. As a result, in some portions of the United States, power companies will pay a portion of the cost of renewable source microgeneration projects in their service areas. These rebates are in addition to any Federal or State renewable-energy income-tax credits that may be applicable. In other areas, such rebates may differ or may not be available.

    United Kingdom

    The UK Government published its Microgeneration Strategy in March 2006, although it was seen as a disappointment by many commentators. In contrast, the Climate Change and Sustainable Energy Act 2006 has been viewed as a positive step. To replace earlier schemes, the Department of Trade and Industry (DTI) launched the Low Carbon Buildings Programme in April 2006, which provided grants to individuals, communities and businesses wishing to invest in microgenerating technologies. These schemes have been replaced in turn by new proposals from the Department for Energy and Climate Change (DECC) for clean energy cashback via Feed-In Tariffs for generating electricity from April 2010 and the Renewable Heat Incentive for generating renewable heat from 28 November 2011.

    Feed-In Tariffs are intended to incentivise small-scale (less than 5 MW), low-carbon electricity generation. These feed-in tariffs work alongside the Renewables Obligation (RO), which remains the primary mechanism to incentivise deployment of large-scale renewable electricity generation. The Renewable Heat Incentive (RHI) is intended to incentivise the generation of heat from renewable sources. From December 2011, the generation tariff for photovoltaics offers up to 21p per kWh, plus another 3p per kWh under the Export Tariff, an overall figure which could see a household earning back double what it currently pays for its electricity.

    On 31 October 2011, the government announced a sudden cut in the feed-in tariff from 43.3p/kWh to 21p/kWh with the new tariff to apply to all new solar PV installations with an eligibility date on or after 12 December 2011.
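
    Using the post-December-2011 figures quoted above (a 21p/kWh generation tariff plus a 3p/kWh export tariff), a rough annual income for a domestic PV installation can be sketched in Python as follows; the system size, annual yield and exported share are assumptions, not scheme data.

```python
# Rough annual feed-in tariff income for a domestic PV system (assumed figures).
system_kwp = 3.0             # assumed installed capacity
yield_kwh_per_kwp = 850.0    # assumed annual yield per kWp
generation_tariff = 0.21     # GBP per kWh generated (rate quoted above)
export_tariff = 0.03         # GBP per kWh exported (rate quoted above)
exported_share = 0.5         # assumed fraction of generation exported

annual_generation = system_kwp * yield_kwh_per_kwp
income = (annual_generation * generation_tariff
          + annual_generation * exported_share * export_tariff)

print(f"Annual generation: {annual_generation:.0f} kWh")
print(f"Annual tariff income: GBP {income:.0f}")
```
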

    Prominent British politicians who have announced they are fitting microgenerating facilities to their homes include the Conservative party leader, David Cameron, and the Labour Science Minister, Malcolm Wicks. These plans included small domestic sized wind turbines. Cameron, before becoming Prime Minister in the 2010 general elections, had been asked during an interview on BBC One's The Politics Show on October 29, 2006, if he would do the same should he get to 10 Downing Street. “If they’d let me, yes,” he replied.

    In the December 2006 Pre-Budget Report, the government announced that the sale of surplus electricity from installations designed for personal use would not be subject to Income Tax. Legislation to this effect has been included in the Finance Bill 2007.

    Several movies and TV shows such as The Mosquito Coast, Jericho, The Time Machine and Beverly Hills Family Robinson have done a great deal in raising interest in microgeneration among the general public. Websites such as Instructables and Practical Action propose DIY solutions that can lower the cost of microgeneration, thus increasing its popularity. Specialised magazines such as OtherPower and Home Power also provide practical advice and guidance.

    Soil ecology

    From Wikipedia, the free encyclopedia

    Soil ecology studies the interactions among soil organisms and between soil organisms and their environment. It is particularly concerned with the cycling of nutrients, soil aggregate formation and soil biodiversity.

    Overview

    Soil is made up of a multitude of physical, chemical, and biological entities, with many interactions occurring among them. It is a heterogeneous mixture of minerals and organic matter with variations in moisture, temperature and nutrients. Soil supports a wide range of living organisms and is an essential component of terrestrial ecology.

    Features of the ecosystem

    • Moisture is a significant limiting factor in terrestrial ecosystems, and especially in soil. Soil organisms are constantly confronted with the problem of dehydration, and soil microbial communities experience shifts in diversity and composition during dehydration and rehydration cycles. Soil moisture also affects carbon cycling, a phenomenon known as the Birch effect.
    • Temperature variations in soil are influenced by factors such as seasonality, environmental conditions, vegetation, and soil composition. Soil temperature also varies with depth: upper soil layers are mainly influenced by air temperature, while temperature fluctuations decrease with depth. Soil temperature influences biological and biochemical processes in soil, playing an important role in microbial and enzymatic activities, mineralization and organic matter decomposition.
    • Air is vital for respiration in soil organisms and for plant growth. Both wind and atmospheric pressure play critical roles in soil aeration; in addition, convection and diffusion influence the rates of soil aeration.
    • Soil structure refers to the size, shape and arrangement of solid particles in soil. Factors such as climate, vegetation and organisms influence the complex arrangement of particles in the soil. Structural features of the soil include microporosity and pore size, which are also affected by minerals and soil organic matter.
    • Land, unlike the ocean, is not continuous; there are important geographical barriers to free movement.
    • The nature of the substrate, although important in water, is especially vital in terrestrial environments. Soil, not air, is the source of highly variable nutrients; it is a highly developed ecological subsystem.

    Soil fauna

    Soil fauna is crucial to soil formation, litter decomposition, nutrient cycling, biotic regulation, and for promoting plant growth. Yet soil organisms remain underrepresented in studies on soil processes and in existing modeling exercises. This is a consequence of assuming that much below ground diversity is ecologically redundant and that soil food webs exhibit a higher degree of omnivory. However, evidence is accumulating on the strong influence of abiotic filters, such as temperature, moisture and soil pH, as well as soil habitat characteristics in controlling their spatial and temporal patterns.

    Soils are complex systems and their complexity resides in their heterogeneous nature: a mixture of air, water, minerals, organic compounds, and living organisms. The spatial variation, both horizontal and vertical, of all these constituents is related to soil forming agents varying from micro to macro scales. Consequently, the horizontal patchy distribution of soil properties (soil temperature, moisture, pH, litter/nutrient availability, etc.) also drives the patchiness of the soil organisms across the landscape, and has been one of the main arguments for explaining the great diversity observed in soil communities. Because soils also show vertical stratification of their elemental constituents along the soil profile as result of microclimate, soil texture, and resource quantity and quality differing between soil horizons, soil communities also change in abundance and structure with soil depth.

    The majority of these organisms are aerobic, so the amount of porous space, pore-size distribution, surface area, and oxygen levels are crucial to their life cycles and activities. The smallest creatures (microbes) use the micropores filled with air to grow, whereas other bigger animals require bigger spaces, macropores, or the water film surrounding the soil particles to move in search for food. Therefore, soil textural properties together with the depth of the water table are also important factors regulating their diversity, population sizes, and their vertical stratification. Ultimately, the structure of the soil communities strongly depends not only on the natural soil forming factors but also on human activities (agriculture, forestry, urbanization) and determines the shape of landscapes in terms of healthy or contaminated, pristine or degraded soils.

    Macrofauna

    Soil macrofauna, climatic gradients and soil heterogeneity: historical factors, such as climate and soil parent materials, shape landscapes above and below ground, but regional and local abiotic conditions constrain biological activities. These operate at different spatial and temporal scales and can switch different organisms on and off at different microsites, resulting in a hot moment in a particular hotspot; as a result, trophic cascades can occur up and down the food web, with implications for soil processes along the soil profile.

    Since all these drivers of biodiversity changes also operate above ground, it is thought that there must be some concordance of mechanisms regulating the spatial patterns and structure of both above and below ground communities. In support of this, a small-scale field study revealed that the relationships between environmental heterogeneity and species richness might be a general property of ecological communities. In contrast, the molecular examination of 17,516 environmental 18S rRNA gene sequences representing 20 phyla of soil animals covering a range of biomes and latitudes around the world indicated otherwise, and the main conclusion from this study was that below-ground animal diversity may be inversely related to above-ground biodiversity.

    The lack of distinct latitudinal gradients in soil biodiversity contrasts with the clear global patterns observed for plants above ground and has led to the assumption that the two are controlled by different factors. For example, in 2007 Lozupone and Knight found salinity to be the major environmental determinant of bacterial diversity composition across the globe, rather than extremes of temperature, pH, or other physical and chemical factors. In another global-scale study in 2014, Tedersoo et al. concluded that fungal richness is causally unrelated to plant diversity and is better explained by climatic factors, followed by edaphic and spatial patterns. Global patterns of the distribution of macroscopic organisms are far more poorly documented. However, the little evidence available appears to indicate that, at large scales, soil metazoans respond to altitudinal, latitudinal or area gradients in the same way as described for above-ground organisms. In contrast, at local scales, the great diversity of microhabitats commonly found in soils provides the niche partitioning required to create hot spots of diversity in just a gram of soil.

    Spatial patterns of soil biodiversity are difficult to explain, and their potential linkages to many soil processes and to overall ecosystem functioning are debated. For example, while some studies have found that reductions in the abundance and presence of soil organisms result in the decline of multiple ecosystem functions, others have concluded that above-ground plant diversity alone is a better predictor of ecosystem multi-functionality than soil biodiversity. Soil organisms exhibit a wide array of feeding preferences, life cycles and survival strategies, and they interact within complex food webs. Consequently, species richness per se has very little influence on soil processes, and functional dissimilarity can have stronger impacts on ecosystem functioning. Therefore, besides the difficulties in linking above- and below-ground diversities at different spatial scales, gaining a better understanding of the biotic effects on ecosystem processes might require incorporating a great number of components together with several multi-trophic levels, as well as the much less considered non-trophic interactions such as phoresy and passive consumption. In addition, if soil systems are indeed self-organized, and soil organisms concentrate their activities within a selected set of discrete scales with some form of overall coordination, there is no need to look for external factors controlling the assemblages of soil constituents. Instead, we might just need to recognize the unexpected, and that the linkages between above- and below-ground diversity and soil processes are difficult to predict.

    Microfauna

    Recent advances are emerging from studying sub-organism-level responses using environmental DNA, and various omics approaches, such as metagenomics, metatranscriptomics, proteomics and proteogenomics, are rapidly advancing, at least for the microbial world. Metaphenomics has recently been proposed as a better way to encompass these omics approaches together with the environmental constraints.

    Soil microbes

    Soil harbors many microbes: bacteria, archaea, protists, fungi and viruses. A majority of these microbes have not been cultured and remain undescribed. The development of next-generation sequencing technologies has opened up the avenue to investigate microbial diversity in soil. One feature of soil microbes is spatial separation, which influences microbe-to-microbe interactions and ecosystem functioning in the soil habitat. Microorganisms in soil are concentrated in specific sites called "hot spots", characterized by an abundance of resources such as moisture or nutrients. Examples are the rhizosphere and areas with accumulated organic matter such as the detritusphere. These areas are characterized by the presence of decaying root litter and exudates released from plant roots, which regulate the availability of carbon and nitrogen and in consequence modulate microbial processes. Apart from labile organic carbon, the spatial separation of microbes in soil may be influenced by other environmental factors such as temperature and moisture. Other abiotic factors like pH and mineral nutrient composition may also influence the distribution of microorganisms in soil. The variability of these factors makes soil a dynamic system. Interactions between members of the soil microhabitat take place via chemical signalling, mediated by soluble metabolites and volatile organic compounds in addition to extracellular polysaccharides. Chemical signals enable microbes to interact; for example, bacterial peptidoglycans stimulate the growth of Candida albicans, while, reciprocally, C. albicans production of farnesol modulates the expression of virulence genes and influences bacterial quorum sensing. Trophic interactions between microbes in the same environment are driven by molecular communication. Microbes may also exchange metabolites to support each other's growth: for example, the release of extracellular enzymes by ectomycorrhizae decomposes organic matter and releases nutrients which benefit other members of the population, while in exchange organic acids from bacteria stimulate fungal growth. These trophic interactions, especially metabolite dependencies, drive species interactions and are important in the assembly of soil microbial communities.

    Soil food web

    Diverse organisms make up the soil food web. They range in size from one-celled bacteria, algae, fungi, and protozoa, to more complex nematodes and micro-arthropods, to the visible earthworms, insects, small vertebrates, and plants. As these organisms eat, grow, and move through the soil, they make it possible to have clean water, clean air, healthy plants, and moderated water flow.

    There are many ways that the soil food web is an integral part of landscape processes. Soil organisms decompose organic compounds, including manure, plant residues, and pesticides, preventing them from entering water and becoming pollutants. They sequester nitrogen and other nutrients that might otherwise enter groundwater, and they fix nitrogen from the atmosphere, making it available to plants. Many organisms enhance soil aggregation and porosity, thus increasing infiltration and reducing surface runoff. Soil organisms prey on crop pests and are food for above-ground animals.

    Research

    Research interests span many aspects of soil ecology and microbiology. Fundamentally, researchers are interested in understanding the interplay among microorganisms, fauna, and plants, the biogeochemical processes they carry out, and the physical environment in which their activities take place, and applying this knowledge to address environmental problems.

    Example research projects include examining the biogeochemistry and microbial ecology of septic drain field soils used to treat domestic wastewater, the role of anecic earthworms in controlling the movement of water and the nitrogen cycle in agricultural soils, and the assessment of soil quality in turf production.

    Of particular interest as of 2006 is to understand the roles and functions of arbuscular mycorrhizal fungi in natural ecosystems. The effect of anthropic soil conditions on arbuscular mycorrhizal fungi and the production of glomalin by arbuscular mycorrhizal fungi are both of interest due to their roles in sequestering atmospheric carbon dioxide.

    Spatial ability

    From Wikipedia, the free encyclopedia https://en.wikipedia.org/wiki/Spatial_ability Space Engineer...