
Friday, June 11, 2021

Olbers' paradox

From Wikipedia, the free encyclopedia
 
As more distant stars are revealed in an animation depicting an infinite, homogeneous and static universe, they fill the gaps between closer stars in the field of view, until the entire image is as bright as the surface of a star. Olbers' paradox argues that because the night sky is in fact dark, at least one of these three assumptions about the nature of the universe must be false.

In astrophysics and physical cosmology, Olbers' paradox, also known as the "dark night sky paradox" and named after the German astronomer Heinrich Wilhelm Olbers (1758–1840), is the argument that the darkness of the night sky conflicts with the assumption of an infinite and eternal static universe. In the hypothetical case that the universe is static, homogeneous at a large scale, and populated by an infinite number of stars, every line of sight from Earth must end at the surface of a star, so the night sky should be completely illuminated and very bright. This contradicts the observed darkness and non-uniformity of the night sky.

The darkness of the night sky is one of the pieces of evidence for a dynamic universe, such as the Big Bang model. That model explains the observed non-uniformity of brightness by invoking the expansion of spacetime, which stretches the light originating from the Big Bang to microwave wavelengths via a process known as redshift; this microwave radiation background has wavelengths much longer than those of visible light, and so appears dark to the naked eye. Other explanations for the paradox have been offered, but none has wide acceptance in cosmology.

History

The first to address the problem of an infinite number of stars and the resulting heat in the Cosmos was Cosmas Indicopleustes, a Greek monk from Alexandria, who stated in his Topographia Christiana: "The crystal-made sky sustains the heat of the Sun, the moon, and the infinite number of stars; otherwise, it would have been full of fire, and it could melt or set on fire."

Edward Robert Harrison's Darkness at Night: A Riddle of the Universe (1987) gives an account of the dark night sky paradox, seen as a problem in the history of science. According to Harrison, the first to conceive of anything like the paradox was Thomas Digges, who was also the first to expound the Copernican system in English and who postulated an infinite universe with infinitely many stars. Kepler also posed the problem in 1610, and the paradox took its mature form in the 18th-century work of Halley and de Chéseaux. The paradox is commonly attributed to the German amateur astronomer Heinrich Wilhelm Olbers, who described it in 1823, but Harrison shows convincingly that Olbers was far from the first to pose the problem, nor was his thinking about it particularly valuable. Harrison argues that the first to set out a satisfactory resolution of the paradox was Lord Kelvin, in a little-known 1901 paper, and that Edgar Allan Poe's essay Eureka (1848) curiously anticipated some qualitative aspects of Kelvin's argument:

Were the succession of stars endless, then the background of the sky would present us a uniform luminosity, like that displayed by the Galaxy – since there could be absolutely no point, in all that background, at which would not exist a star. The only mode, therefore, in which, under such a state of affairs, we could comprehend the voids which our telescopes find in innumerable directions, would be by supposing the distance of the invisible background so immense that no ray from it has yet been able to reach us at all.

The paradox

The paradox is that a static, infinitely old universe with an infinite number of stars distributed in an infinitely large space would be bright rather than dark.

A view of a square section of four concentric shells

To show this, we divide the universe into a series of concentric shells, 1 light year thick. A certain number of stars will be in the shell 1,000,000,000 to 1,000,000,001 light years away. If the universe is homogeneous at a large scale, then there would be four times as many stars in a second shell, which is between 2,000,000,000 and 2,000,000,001 light years away. However, the second shell is twice as far away, so each star in it would appear one quarter as bright as the stars in the first shell. Thus the total light received from the second shell is the same as the total light received from the first shell.

Thus each shell of a given thickness will produce the same net amount of light regardless of how far away it is. That is, the light of each shell adds to the total amount. Thus the more shells, the more light; and with infinitely many shells, there would be a bright night sky.
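
To make the divergence concrete, here is a minimal numerical sketch (in Python; it is not part of the original article, and the star density and luminosity values are arbitrary placeholders) that sums the flux received from successive concentric shells under the stated assumptions. Because the number of stars in a shell grows as r^2 while the flux from each star falls as 1/r^2, every shell contributes the same amount, and the total grows without bound as more shells are added.

    # Sketch: flux from concentric shells in a static, homogeneous, infinite universe.
    import math

    n = 1.0e-3    # assumed star density per cubic light year (placeholder value)
    L = 1.0       # assumed luminosity per star (placeholder units)

    def shell_flux(r, dr=1.0):
        # Stars in the shell: n * 4*pi*r^2 * dr; each contributes L / (4*pi*r^2),
        # so the r^2 factors cancel and every shell gives the same total flux.
        stars = n * 4.0 * math.pi * r**2 * dr
        return stars * L / (4.0 * math.pi * r**2)

    total = 0.0
    for k in range(1, 1_000_001):     # one million shells, each 1 light year thick
        total += shell_flux(float(k))

    print(total)   # equals n * L * (number of shells), so it diverges as shells -> infinity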

While dark clouds could obstruct the light, these clouds would heat up, until they were as hot as the stars, and then radiate the same amount of light.

Kepler saw this as an argument for a finite observable universe, or at least for a finite number of stars. In general relativity theory, it is still possible for the paradox to hold in a finite universe: though the sky would not be infinitely bright, every point in the sky would still be like the surface of a star.

Explanation

The poet Edgar Allan Poe suggested that the finite size of the observable universe resolves the apparent paradox. More specifically, because the universe is finitely old and the speed of light is finite, only finitely many stars can be observed from Earth (although the whole universe can be infinite in space). The density of stars within this finite volume is sufficiently low that any line of sight from Earth is unlikely to reach a star.

However, the Big Bang theory seems to introduce a new problem: it states that the sky was much brighter in the past, especially at the end of the recombination era, when it first became transparent. All points of the local sky at that time were comparable in brightness to the surface of the Sun, owing to the high temperature of the universe in that era; and most light rays will originate not from a star but from the relic radiation of the Big Bang.

This problem is addressed by the fact that the Big Bang theory also involves the expansion of space, which can cause the energy of emitted light to be reduced via redshift. More specifically, the extremely energetic radiation from the Big Bang has been redshifted to microwave wavelengths (1100 times the length of its original wavelength) as a result of the cosmic expansion, and thus forms the cosmic microwave background radiation. This explains the relatively low light densities and energy levels present in most of our sky today despite the assumed bright nature of the Big Bang. The redshift also affects light from distant stars and quasars, but this diminution is minor, since the most distant galaxies and quasars have redshifts of only around 5 to 8.6.
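
As a rough check of the factor quoted above (a sketch added here, not taken from the article), light emitted near recombination at about 3000 K is stretched by a factor of 1 + z ≈ 1100, which moves its blackbody peak from the near-infrared into the microwave band and lowers the effective temperature to about 2.7 K:

    # Sketch: effect of redshift z ~ 1100 on recombination-era radiation.
    b_wien = 2.898e-3          # Wien displacement constant, m*K

    T_emit = 3000.0            # approximate temperature at recombination, K
    z = 1100.0                 # approximate redshift of the CMB

    peak_emit = b_wien / T_emit        # ~9.7e-7 m, near-infrared
    peak_obs = peak_emit * (1.0 + z)   # ~1.1e-3 m, microwave
    T_obs = T_emit / (1.0 + z)         # ~2.7 K

    print(peak_emit, peak_obs, T_obs)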

Other factors

Steady state

The redshift hypothesised in the Big Bang model would by itself explain the darkness of the night sky even if the universe were infinitely old. In the Steady state theory the universe is infinitely old and uniform in time as well as space. There is no Big Bang in this model, but there are stars and quasars at arbitrarily great distances. The expansion of the universe causes the light from these distant stars and quasars to redshift, so that the total light flux from the sky remains finite. Thus the observed radiation density (the sky brightness of extragalactic background light) can be independent of the finiteness of the universe. Mathematically, the total electromagnetic energy density (radiation energy density) in thermodynamic equilibrium from Planck's law is

    u = 4σT^4/c = aT^4,    where a ≈ 7.566×10^−16 J·m^−3·K^−4,

e.g. for temperature 2.7 K it is 40 fJ/m3 (equivalent to a mass density of 4.5×10−31 kg/m3) and for the visible-light temperature of 6000 K we get 1 J/m3 (1.1×10−17 kg/m3). But the total radiation emitted by a star (or other cosmic object) is at most equal to the total nuclear binding energy of isotopes in the star. For the density of the observable universe of about 4.6×10−28 kg/m3 and the known abundance of the chemical elements, the corresponding maximal radiation energy density is 9.2×10−31 kg/m3, i.e. a temperature of 3.2 K (matching the value observed for the optical radiation temperature by Arthur Eddington). This is close to the summed energy density of the cosmic microwave background (CMB) and the cosmic neutrino background. The Big Bang hypothesis predicts that the CMB should have the same energy density as the binding energy density of the primordial helium, which is much greater than the binding energy density of the non-primordial elements; so it gives almost the same result. However, the steady-state model does not predict the angular distribution of the microwave background temperature accurately (as the standard ΛCDM paradigm does). Nevertheless, modified gravitation theories (without metric expansion of the universe) could not be ruled out as of 2017 by CMB and BAO observations.
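
The two figures quoted above can be checked with a short calculation (a sketch added here, not part of the original article), using the radiation constant a = 4σ/c and dividing the energy density by c^2 to obtain the equivalent mass density:

    # Sketch: blackbody radiation energy density u = a*T^4 and its mass equivalent u/c^2.
    sigma = 5.670374e-8      # Stefan-Boltzmann constant, W m^-2 K^-4
    c = 2.99792458e8         # speed of light, m/s
    a = 4.0 * sigma / c      # radiation constant, ~7.566e-16 J m^-3 K^-4

    for T in (2.7, 6000.0):
        u = a * T**4         # energy density, J/m^3
        rho = u / c**2       # equivalent mass density, kg/m^3
        print(T, u, rho)     # 2.7 K  -> ~4.0e-14 J/m^3 and ~4.5e-31 kg/m^3
                             # 6000 K -> ~1.0 J/m^3 and ~1.1e-17 kg/m^3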

Finite age of stars

Stars have a finite age and a finite power, so each star has only a finite impact on the sky's light field density. Edgar Allan Poe suggested that this idea could provide a resolution to Olbers' paradox; a related theory was also proposed by Jean-Philippe de Chéseaux. However, stars are continually being born as well as dying. As long as the density of stars throughout the universe remains constant, regardless of whether the universe itself has a finite or infinite age, there would be infinitely many other stars in the same angular direction, with an infinite total impact. So the finite age of the stars does not explain the paradox.

Brightness

Suppose that the universe were not expanding, and always had the same stellar density; then the temperature of the universe would continually increase as the stars put out more radiation. Eventually, it would reach 3000 K (corresponding to a typical photon energy of 0.3 eV and so a frequency of 7.5×10^13 Hz), and the photons would begin to be absorbed by the hydrogen plasma filling most of the universe, rendering outer space opaque. This maximal radiation density corresponds to about 1.2×10^17 eV/m3 = 2.1×10−19 kg/m3, which is much greater than the observed value of 4.7×10−31 kg/m3. So the sky is about five hundred billion times darker than it would be if the universe were neither expanding nor too young to have reached equilibrium yet. However, recent observations raising the lower bound on the number of galaxies suggest that UV absorption by hydrogen and re-emission at near-IR (not visible) wavelengths also plays a role.
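
A short calculation (a sketch, not from the original article) reproduces the figures in this paragraph: the frequency of a 0.3 eV photon, the mass equivalent of the maximal radiation density, and the roughly five-hundred-billion-fold ratio to the observed value:

    # Sketch: checking the numbers quoted for a non-expanding universe in equilibrium.
    h = 6.62607015e-34       # Planck constant, J*s
    eV = 1.602176634e-19     # joules per electronvolt
    c = 2.99792458e8         # speed of light, m/s

    nu = 0.3 * eV / h                      # ~7.3e13 Hz for a 0.3 eV photon
    rho_max = 1.2e17 * eV / c**2           # ~2.1e-19 kg/m^3 maximal radiation density
    rho_obs = 4.7e-31                      # observed radiation density, kg/m^3

    print(nu, rho_max, rho_max / rho_obs)  # ratio ~4.5e11, i.e. about five hundred billion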

Fractal star distribution

A different resolution, which does not rely on the Big Bang theory, was first proposed by Carl Charlier in 1908 and later rediscovered by Benoît Mandelbrot in 1974. They both postulated that if the stars in the universe were distributed in a hierarchical fractal cosmology (e.g., similar to Cantor dust)—the average density of any region diminishes as the region considered increases—it would not be necessary to rely on the Big Bang theory to explain Olbers' paradox. This model would not rule out a Big Bang, but would allow for a dark sky even if the Big Bang had not occurred.

Mathematically, the light received from stars as a function of star distance in a hypothetical fractal cosmos is

    light = ∫ L(r) N(r) dr, integrated over r from r0 to ∞,

where:

  • r0 = the distance of the nearest star, r0 > 0;
  • r = the variable measuring distance from the Earth;
  • L(r) = average luminosity per star at distance r;
  • N(r) = number of stars at distance r.

The function of luminosity from a given distance, L(r)N(r), determines whether the light received is finite or infinite. For any L(r)N(r) proportional to r^a, the integral is infinite for a ≥ −1 but finite for a < −1. So if L(r) is proportional to r^−2 (since the apparent brightness of a star falls off with the square of its distance), then for the received light to be finite, N(r) must be proportional to r^b, where b < 1. For b = 1, the number of stars at a given radius is proportional to that radius; integrated over the radius, this implies that the total number of stars within radius r is proportional to r^2, which would correspond to a fractal dimension of 2. Thus the fractal dimension of the universe would need to be less than 2 for this explanation to work.
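
The convergence condition can be illustrated numerically (a sketch under the stated assumptions; the normalizations are placeholders). With L(r) proportional to r^−2 and N(r) proportional to r^b, the integrand behaves like r^(b−2), so the accumulated light approaches a finite limit for b < 1 but keeps growing for b ≥ 1:

    # Sketch: light received out to radius R when N(r) ~ r^b and L(r) ~ r^-2.
    import math

    def light(R, b, r0=1.0, steps=100_000):
        # Integrate r^(b-2) from r0 to R using log-spaced midpoints
        # (substituting u = ln r, so r^(b-2) dr = r^(b-1) du).
        lo, hi = math.log(r0), math.log(R)
        du = (hi - lo) / steps
        return sum(math.exp(lo + (i + 0.5) * du) ** (b - 1.0) * du for i in range(steps))

    for b in (0.5, 1.0, 1.5):
        print(b, [round(light(R, b), 2) for R in (1e2, 1e4, 1e6)])
    # b = 0.5 -> values level off near a finite limit (~2)
    # b = 1.0 -> values grow like log(R)
    # b = 1.5 -> values grow like sqrt(R)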

This explanation is not widely accepted among cosmologists, since the evidence suggests that the fractal dimension of the universe is at least 2. Moreover, the majority of cosmologists accept the cosmological principle, which assumes that matter at the scale of billions of light years is distributed isotropically. Contrarily, fractal cosmology requires anisotropic matter distribution at the largest scales. Cosmic microwave background radiation has cosine anisotropy.

Heat death of the universe

From Wikipedia, the free encyclopedia

The heat death of the universe (also known as the Big Chill or Big Freeze) is a theory on the ultimate fate of the universe, which suggests the universe would evolve to a state of no thermodynamic free energy and would therefore be unable to sustain processes that increase entropy. Heat death does not imply any particular absolute temperature; it only requires that temperature differences or other processes may no longer be exploited to perform work. In the language of physics, this is when the universe reaches thermodynamic equilibrium (maximum entropy).

If the topology of the universe is open or flat, or if dark energy is a positive cosmological constant (both of which are consistent with current data), the universe will continue expanding forever, and a heat death is expected to occur, with the universe cooling to approach equilibrium at a very low temperature after a very long time period.

The hypothesis of heat death stems from the ideas of Lord Kelvin, who in the 1850s took the theory of heat as mechanical energy loss in nature (as embodied in the first two laws of thermodynamics) and extrapolated it to larger processes on a universal scale.

Origins of the idea

The idea of heat death stems from the second law of thermodynamics, of which one version states that entropy tends to increase in an isolated system. From this, the hypothesis implies that if the universe lasts for a sufficient time, it will asymptotically approach a state where all energy is evenly distributed. In other words, according to this hypothesis, there is a tendency in nature to the dissipation (energy transformation) of mechanical energy (motion) into thermal energy; hence, by extrapolation, there exists the view that, in time, the mechanical movement of the universe will run down as work is converted to heat because of the second law.

The conjecture that all bodies in the universe cool off, eventually becoming too cold to support life, seems to have been first put forward by the French astronomer Jean Sylvain Bailly in 1777 in his writings on the history of astronomy and in the ensuing correspondence with Voltaire. In Bailly's view, all planets have an internal heat and are now at some particular stage of cooling. Jupiter, for instance, is still too hot for life to arise there for thousands of years, while the Moon is already too cold. The final state, in this view, is described as one of "equilibrium" in which all motion ceases.

The idea of heat death as a consequence of the laws of thermodynamics, however, was first proposed in loose terms beginning in 1851 by Lord Kelvin (William Thomson), who theorized further on the mechanical energy loss views of Sadi Carnot (1824), James Joule (1843) and Rudolf Clausius (1850). Thomson's views were then elaborated over the next decade by Hermann von Helmholtz and William Rankine.

History

The idea of heat death of the universe derives from discussion of the application of the first two laws of thermodynamics to universal processes. Specifically, in 1851, Lord Kelvin outlined the view, as based on recent experiments on the dynamical theory of heat: "heat is not a substance, but a dynamical form of mechanical effect, we perceive that there must be an equivalence between mechanical work and heat, as between cause and effect."

Lord Kelvin originated the idea of universal heat death in 1852.

In 1852, Thomson published On a Universal Tendency in Nature to the Dissipation of Mechanical Energy, in which he outlined the rudiments of the second law of thermodynamics summarized by the view that mechanical motion and the energy used to create that motion will naturally tend to dissipate or run down. The ideas in this paper, in relation to their application to the age of the Sun and the dynamics of the universal operation, attracted the likes of William Rankine and Hermann von Helmholtz. The three of them were said to have exchanged ideas on this subject. In 1862, Thomson published "On the age of the Sun's heat", an article in which he reiterated his fundamental beliefs in the indestructibility of energy (the first law) and the universal dissipation of energy (the second law), leading to diffusion of heat, cessation of useful motion (work), and exhaustion of potential energy through the material universe, while clarifying his view of the consequences for the universe as a whole. Thomson wrote:

The result would inevitably be a state of universal rest and death, if the universe were finite and left to obey existing laws. But it is impossible to conceive a limit to the extent of matter in the universe; and therefore science points rather to an endless progress, through an endless space, of action involving the transformation of potential energy into palpable motion and hence into heat, than to a single finite mechanism, running down like a clock, and stopping for ever.

In the years following Thomson's 1852 and 1862 papers, Helmholtz and Rankine both credited Thomson with the idea, but read further into his papers, publishing views stating that Thomson argued that the universe will end in a "heat death" (Helmholtz), which will be the "end of all physical phenomena" (Rankine).

Current status

Proposals about the final state of the universe depend on the assumptions made about its ultimate fate, and these assumptions have varied considerably over the late 20th century and early 21st century. In a hypothesized "open" or "flat" universe that continues expanding indefinitely, either a heat death or a Big Rip is expected to eventually occur. If the cosmological constant is zero, the universe will approach absolute zero temperature over a very long timescale. However, if the cosmological constant is positive, as appears to be the case in recent observations (2011 Nobel Prize), the temperature will asymptote to a non-zero positive value, and the universe will approach a state of maximum entropy in which no further work is possible.

If a Big Rip does not happen long before then, and if protons, electrons, and neutrons bound in atomic nuclei are stable and never decay, the full "heat death" situation could still be avoided if there is a method or mechanism to regenerate hydrogen atoms from radiation, dark matter, dark energy, zero-point energy, sphalerons, virtual particles, or other sources, such as retrieving matter and energy from black holes or causing black holes to explode so that the mass contained in them is released, which could lead to the formation of new stars and planets. If so, it is at least possible that star formation and heat transfer can continue, avoiding a gradual running down of the universe due to the conversion of matter into energy and heavier elements in stellar processes, and the absorption of matter by black holes and their subsequent evaporation as Hawking radiation.

A new study published in November 2020 found that the universe is actually getting hotter. The study probed the thermal history of the universe over the last 10 billion years. It found that "the mean temperature of gas across the universe has increased more than 10 times over that time period and reached about 2 million degrees Kelvin today—approximately 4 million degrees Fahrenheit." Yi-Kuan Chiang, lead author of the study and a research fellow at The Ohio State University Center for Cosmology and AstroParticle Physics, stated that "Our new measurement provides a direct confirmation of the seminal work by Jim Peebles—the 2019 Nobel Laureate in Physics—who laid out the theory of how the large-scale structure forms in the universe."

Timeframe for heat death

From the Big Bang through the present day, matter and dark matter in the universe are thought to have been concentrated in stars, galaxies, and galaxy clusters, and are presumed to continue to be so well into the future. Therefore, the universe is not in thermodynamic equilibrium, and objects can do physical work. The decay time for a supermassive black hole of roughly 1 galaxy mass (10^11 solar masses) due to Hawking radiation is on the order of 10^100 years, so entropy can be produced until at least that time. Some large black holes in the universe are predicted to continue to grow up to perhaps 10^14 solar masses during the collapse of superclusters of galaxies. Even these would evaporate over a timescale of up to 10^106 years. After that time, the universe enters the so-called Dark Era and is expected to consist chiefly of a dilute gas of photons and leptons. With only very diffuse matter remaining, activity in the universe will have tailed off dramatically, with extremely low energy levels and extremely long timescales. Speculatively, it is possible that the universe may enter a second inflationary epoch, or, assuming that the current vacuum state is a false vacuum, the vacuum may decay into a lower-energy state. It is also possible that entropy production will cease and the universe will reach heat death. Another universe could possibly be created by random quantum fluctuations or quantum tunneling in roughly 10^10^10^56 years. Over vast periods of time, a spontaneous entropy decrease would eventually occur via the Poincaré recurrence theorem, thermal fluctuations, and the fluctuation theorem. Such a scenario, however, has been described as "highly speculative, probably wrong, [and] completely untestable". Sean M. Carroll, originally an advocate of this idea, no longer supports it.
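
The first of these figures can be checked against the textbook estimate for the Hawking evaporation time of a Schwarzschild black hole, t ≈ 5120 π G^2 M^3 / (ħ c^4) (a rough order-of-magnitude sketch added here, not taken from the article's sources):

    # Sketch: Hawking evaporation time for a black hole of ~1 galaxy mass.
    import math

    G = 6.674e-11            # gravitational constant, m^3 kg^-1 s^-2
    hbar = 1.0546e-34        # reduced Planck constant, J*s
    c = 2.998e8              # speed of light, m/s
    M_sun = 1.989e30         # solar mass, kg
    year = 3.156e7           # seconds per year

    M = 1e11 * M_sun                                   # ~10^11 solar masses
    t = 5120.0 * math.pi * G**2 * M**3 / (hbar * c**4)
    print(t / year)                                    # ~2e100 years, i.e. on the order of 10^100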

Opposing views

Max Planck wrote that the phrase "entropy of the universe" has no meaning because it admits of no accurate definition. More recently, Walter Grandy writes: "It is rather presumptuous to speak of the entropy of a universe about which we still understand so little, and we wonder how one might define thermodynamic entropy for a universe and its major constituents that have never been in equilibrium in their entire existence." According to Tisza: "If an isolated system is not in equilibrium, we cannot associate an entropy with it." Buchdahl writes of "the entirely unjustifiable assumption that the universe can be treated as a closed thermodynamic system". According to Gallavotti: "... there is no universally accepted notion of entropy for systems out of equilibrium, even when in a stationary state." Discussing the question of entropy for non-equilibrium states in general, Lieb and Yngvason express their opinion as follows: "Despite the fact that most physicists believe in such a nonequilibrium entropy, it has so far proved impossible to define it in a clearly satisfactory way." In Landsberg's opinion: "The third misconception is that thermodynamics, and in particular, the concept of entropy, can without further enquiry be applied to the whole universe. ... These questions have a certain fascination, but the answers are speculations, and lie beyond the scope of this book."

A 2010 analysis of entropy states, "The entropy of a general gravitational field is still not known", and "gravitational entropy is difficult to quantify". The analysis considers several possible assumptions that would be needed for estimates and suggests that the observable universe has more entropy than previously thought. This is because the analysis concludes that supermassive black holes are the largest contributor. Lee Smolin goes further: "It has long been known that gravity is important for keeping the universe out of thermal equilibrium. Gravitationally bound systems have negative specific heat—that is, the velocities of their components increase when energy is removed. ... Such a system does not evolve toward a homogeneous equilibrium state. Instead it becomes increasingly structured and heterogeneous as it fragments into subsystems." This point of view is also supported by the recent experimental discovery of a stable non-equilibrium steady state in a relatively simple closed system. It should be expected that an isolated system fragmented into subsystems does not necessarily come to thermodynamic equilibrium but may remain in a non-equilibrium steady state. Entropy will be transmitted from one subsystem to another, but its production will be zero, which does not contradict the second law of thermodynamics.

Naturalism (philosophy)

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Naturalism_(philosophy)

In philosophy, naturalism is the idea or belief that only natural laws and forces (as opposed to supernatural or spiritual ones) operate in the universe. Adherents of naturalism assert that natural laws are the only rules that govern the structure and behavior of the natural world, and that the changing universe is at every stage a product of these laws.

Naturalism is not so much a special system as a point of view or tendency common to a number of philosophical and religious systems; not so much a well-defined set of positive and negative doctrines as an attitude or spirit pervading and influencing many doctrines. As the name implies, this tendency consists essentially in looking upon nature as the one original and fundamental source of all that exists, and in attempting to explain everything in terms of nature. Either the limits of nature are also the limits of existing reality, or at least the first cause, if its existence is found necessary, has nothing to do with the working of natural agencies. All events, therefore, find their adequate explanation within nature itself. But, as the terms nature and natural are themselves used in more than one sense, the term naturalism is also far from having one fixed meaning.

According to philosopher David Papineau, naturalism can be separated into an ontological component and a methodological component. "Ontological" refers to ontology, the philosophical study of what exists. On an ontological level, philosophers often treat naturalism as equivalent to materialism. For example, philosopher Paul Kurtz argues that nature is best accounted for by reference to material principles. These principles include mass, energy, and other physical and chemical properties accepted by the scientific community. Further, this sense of naturalism holds that spirits, deities, and ghosts are not real and that there is no "purpose" in nature. This stronger formulation of naturalism is commonly referred to as metaphysical naturalism. On the other hand, the more moderate view that naturalism should be assumed in one's working methods as the current paradigm, without any further consideration of whether naturalism is true in the robust metaphysical sense, is called methodological naturalism.

With the exception of pantheists—who believe that Nature is identical with divinity while not recognizing a distinct personal anthropomorphic god—theists challenge the idea that nature contains all of reality. According to some theists, natural laws may be viewed as secondary causes of God(s).

In the 20th century, Willard Van Orman Quine, George Santayana, and other philosophers argued that the success of naturalism in science meant that scientific methods should also be used in philosophy. According to this view, science and philosophy are not always distinct from one another, but instead form a continuum.

History of Naturalism

Ancient and medieval philosophy

Naturalism is most notably a Western phenomenon, but an equivalent idea has long existed in the East. Naturalism was the foundation of two out of six orthodox schools and one heterodox school of Hinduism. Samkhya, one of the oldest schools of Indian philosophy, puts nature (Prakriti) as the primary cause of the universe, without assuming the existence of a personal God or Ishwara. The Carvaka, Nyaya, and Vaisheshika schools originated in the 7th, 6th, and 2nd century BCE, respectively. Similarly, though unnamed and never articulated into a coherent system, one tradition within Confucian philosophy embraced a form of naturalism dating to Wang Chong in the 1st century, if not earlier; but it arose independently and had little influence on the development of modern naturalist philosophy or on Eastern or Western culture.

Western metaphysical naturalism originated in ancient Greek philosophy. The earliest pre-Socratic philosophers, especially the Milesians (Thales, Anaximander, and Anaximenes) and the atomists (Leucippus and Democritus), were labeled by their peers and successors "the physikoi" (from the Greek φυσικός or physikos, meaning "natural philosopher" borrowing on the word φύσις or physis, meaning "nature") because they investigated natural causes, often excluding any role for gods in the creation or operation of the world. This eventually led to fully developed systems such as Epicureanism, which sought to explain everything that exists as the product of atoms falling and swerving in a void.

The current usage of the term naturalism "derives from debates in America in the first half of the 20th century. The self-proclaimed 'naturalists' from that period included John Dewey, Ernest Nagel, Sidney Hook and Roy Wood Sellars."

Aristotle surveyed the thought of his predecessors and conceived of nature in a way that charted a middle course between their excesses.

Plato's world of eternal and unchanging Forms, imperfectly represented in matter by a divine Artisan, contrasts sharply with the various mechanistic Weltanschauungen, of which atomism was, by the fourth century at least, the most prominent… This debate was to persist throughout the ancient world. Atomistic mechanism got a shot in the arm from Epicurus… while the Stoics adopted a divine teleology… The choice seems simple: either show how a structured, regular world could arise out of undirected processes, or inject intelligence into the system. This was how Aristotle… when still a young acolyte of Plato, saw matters. Cicero… preserves Aristotle's own cave-image: if troglodytes were brought on a sudden into the upper world, they would immediately suppose it to have been intelligently arranged. But Aristotle grew to abandon this view; although he believes in a divine being, the Prime Mover is not the efficient cause of action in the Universe, and plays no part in constructing or arranging it... But, although he rejects the divine Artificer, Aristotle does not resort to a pure mechanism of random forces. Instead he seeks to find a middle way between the two positions, one which relies heavily on the notion of Nature, or phusis.

With the rise and dominance of Christianity in the West and the later spread of Islam, metaphysical naturalism was generally abandoned by intellectuals. Thus, there is little evidence for it in medieval philosophy. The reintroduction of Aristotle's empirical epistemology, as well as of previously lost treatises by Greco-Roman natural philosophers, was begun by the medieval Scholastics without resulting in any noticeable increase in commitment to naturalism.

Modern philosophy

It was not until the early modern era of philosophy and the Age of Enlightenment that naturalists like Benedict Spinoza (who put forward a theory of psychophysical parallelism), David Hume, and the proponents of French materialism (notably Denis Diderot, Julien La Mettrie, and Baron d'Holbach) started to emerge again in the 17th and 18th centuries. In this period, some metaphysical naturalists adhered to a distinct doctrine, materialism, which became the dominant category of metaphysical naturalism widely defended until the end of the 19th century.

Immanuel Kant rejected (reductionist) materialist positions in metaphysics, but he was not hostile to naturalism. His transcendental philosophy is considered to be a form of liberal naturalism.

In late modern philosophy, Naturphilosophie, a form of natural philosophy, was developed by Friedrich Wilhelm Joseph von Schelling and Georg Wilhelm Friedrich Hegel as an attempt to comprehend nature in its totality and to outline its general theoretical structure.

A version of naturalism that arose after Hegel was Ludwig Feuerbach's anthropological materialism, which influenced Karl Marx and Friedrich Engels's historical materialism, Engels's "materialist dialectic" philosophy of nature (Dialectics of Nature), and their follower Georgi Plekhanov's dialectical materialism.

Another notable school of late modern philosophy advocating naturalism was German materialism: members included Ludwig Büchner, Jacob Moleschott, and Carl Vogt.

Contemporary philosophy

A politicized version of naturalism that has arisen in contemporary philosophy is Ayn Rand's Objectivism. Objectivism is an expression of capitalist ethical idealism within a naturalistic framework. An example of a more progressive naturalistic philosophy is secular humanism.


Currently, metaphysical naturalism is more widely embraced than in previous centuries, especially but not exclusively in the natural sciences and the Anglo-American, analytic philosophical communities. While the vast majority of the population of the world remains firmly committed to non-naturalistic worldviews, prominent contemporary defenders of naturalism and/or naturalistic theses and doctrines today include Kai Nielsen, J. J. C. Smart, David Malet Armstrong, David Papineau, Paul Kurtz, Brian Leiter, Daniel Dennett, Michael Devitt, Fred Dretske, Paul and Patricia Churchland, Mario Bunge, Jonathan Schaffer, Hilary Kornblith, Quentin Smith, Paul Draper and Michael Martin, among many other academic philosophers.

According to David Papineau, contemporary naturalism is a consequence of the build-up of scientific evidence during the twentieth century for the "causal closure of the physical", the doctrine that all physical effects can be accounted for by physical causes.

By the middle of the twentieth century, the acceptance of the causal closure of the physical realm led to even stronger naturalist views. The causal closure thesis implies that any mental and biological causes must themselves be physically constituted, if they are to produce physical effects. It thus gives rise to a particularly strong form of ontological naturalism, namely the physicalist doctrine that any state that has physical effects must itself be physical.

From the 1950s onwards, philosophers began to formulate arguments for ontological physicalism. Some of these arguments appealed explicitly to the causal closure of the physical realm (Feigl 1958, Oppenheim and Putnam 1958). In other cases, the reliance on causal closure lay below the surface. However, it is not hard to see that even in these latter cases the causal closure thesis played a crucial role.

— David Papineau, "Naturalism" in the Stanford Encyclopedia of Philosophy

In contemporary continental philosophy, Quentin Meillassoux proposed speculative materialism, a post-Kantian return to David Hume which can strengthen classical materialist ideas.

Etymology

The term "methodological naturalism" is much more recent, though. According to Ronald Numbers, it was coined in 1983 by Paul de Vries, a Wheaton College philosopher. De Vries distinguished between what he called "methodological naturalism", a disciplinary method that says nothing about God's existence, and "metaphysical naturalism", which "denies the existence of a transcendent God". The term "methodological naturalism" had been used in 1937 by Edgar S. Brightman in an article in The Philosophical Review as a contrast to "naturalism" in general, but there the idea was not really developed to its more recent distinctions.

Description

According to Steven Schafersman, naturalism is a philosophy that maintains that:

  1. "Nature encompasses all that exists throughout space and time;
  2. Nature (the universe or cosmos) consists only of natural elements, that is, of spatio-temporal physical substance—mass-energy. Non-physical or quasi-physical substance, such as information, ideas, values, logic, mathematics, intellect, and other emergent phenomena, either supervene upon the physical or can be reduced to a physical account;
  3. Nature operates by the laws of physics and in principle, can be explained and understood by science and philosophy;
  4. The supernatural does not exist, i.e., only nature is real. Naturalism is therefore a metaphysical philosophy opposed primarily by supernaturalism".

Or, as Carl Sagan succinctly put it: "The Cosmos is all that is or ever was or ever will be."

In addition Arthur C. Danto states that Naturalism, in recent usage, is a species of philosophical monism according to which whatever exists or happens is natural in the sense of being susceptible to explanation through methods which, although paradigmatically exemplified in the natural sciences, are continuous from domain to domain of objects and events. Hence, naturalism is polemically defined as repudiating the view that there exists or could exist any entities which lie, in principle, beyond the scope of scientific explanation.

Arthur Newell Strahler states: "The naturalistic view is that the particular universe we observe came into existence and has operated through all time and in all its parts without the impetus or guidance of any supernatural agency." "The great majority of contemporary philosophers urge that reality is exhausted by nature, containing nothing 'supernatural', and that the scientific method should be used to investigate all areas of reality, including the 'human spirit'." Philosophers widely regard naturalism as a "positive" term, and "few active philosophers nowadays are happy to announce themselves as 'non-naturalists'". "Philosophers concerned with religion tend to be less enthusiastic about 'naturalism'", and, despite an "inevitable" divergence due to its popularity when more narrowly construed (to the chagrin of John McDowell, David Chalmers and Jennifer Hornsby, for example), those not so disqualified remain nonetheless content "to set the bar for 'naturalism' higher."

Alvin Plantinga stated that naturalism is presumed not to be a religion; however, in one very important respect it resembles religion by performing the cognitive function of a religion. There is a set of deep human questions to which a religion typically provides an answer, and in like manner naturalism gives a set of answers to these questions.

Providing assumptions required for science

According to Robert Priddy, all scientific study inescapably builds on at least some essential assumptions that are untested by scientific processes; that is, scientists must start with some assumptions as to the ultimate analysis of the facts with which science deals. These assumptions would then be justified partly by their adherence to the types of occurrence of which we are directly conscious, and partly by their success in representing the observed facts with a certain generality, devoid of ad hoc suppositions. Kuhn also claims that all science is based on an approved agenda of unprovable assumptions about the character of the universe, rather than merely on empirical facts. These assumptions—a paradigm—comprise a collection of beliefs, values and techniques that are held by a given scientific community, which legitimize their systems and set the limitations to their investigation. For naturalists, nature is the only reality, the "correct" paradigm, and there is no such thing as the 'supernatural'. The scientific method is to be used to investigate all reality, including the human spirit.

Some claim that naturalism is the implicit philosophy of working scientists, and that the following basic assumptions are needed to justify the scientific method:

  1. that there is an objective reality shared by all rational observers.
    "The basis for rationality is acceptance of an external objective reality." "Objective reality is clearly an essential thing if we are to develop a meaningful perspective of the world. Nevertheless its very existence is assumed." "Our belief that objective reality exist is an assumption that it arises from a real world outside of ourselves. As infants we made this assumption unconsciously. People are happy to make this assumption that adds meaning to our sensations and feelings, than live with solipsism." "Without this assumption, there would be only the thoughts and images in our own mind (which would be the only existing mind) and there would be no need of science, or anything else."
  2. that this objective reality is governed by natural laws;
    "Science, at least today, assumes that the universe obeys to knoweable principles that don't depend on time or place, nor on subjective parameters such as what we think, know or how we behave." Hugh Gauch argues that science presupposes that "the physical world is orderly and comprehensible."
  3. that reality can be discovered by means of systematic observation and experimentation.
    Stanley Sobottka said: "The assumption of external reality is necessary for science to function and to flourish. For the most part, science is the discovering and explaining of the external world." "Science attempts to produce knowledge that is as universal and objective as possible within the realm of human understanding."
  4. that Nature has uniformity of laws and most if not all things in nature must have at least a natural cause.
    Biologist Stephen Jay Gould referred to these two closely related propositions as the constancy of nature's laws and the operation of known processes. Simpson agrees that the axiom of uniformity of law, an unprovable postulate, is necessary in order for scientists to extrapolate inductive inference into the unobservable past in order to meaningfully study it.

"The assumption of spatial and temporal invariance of natural laws is by no means unique to geology since it amounts to a warrant for inductive inference which, as Bacon showed nearly four hundred years ago, is the basic mode of reasoning in empirical science. Without assuming this spatial and temporal invariance, we have no basis for extrapolating from the known to the unknown and, therefore, no way of reaching general conclusions from a finite number of observations. (Since the assumption is itself vindicated by induction, it can in no way "prove" the validity of induction — an endeavor virtually abandoned after Hume demonstrated its futility two centuries ago)." Gould also notes that natural processes such as Lyell's "uniformity of process" are an assumption: "As such, it is another a priori assumption shared by all scientists and not a statement about the empirical world." According to R. Hooykaas: "The principle of uniformity is not a law, not a rule established after comparison of facts, but a principle, preceding the observation of facts ... It is the logical principle of parsimony of causes and of economy of scientific notions. By explaining past changes by analogy with present phenomena, a limit is set to conjecture, for there is only one way in which two things are equal, but there are an infinity of ways in which they could be supposed different."

  5. that experimental procedures will be done satisfactorily without any deliberate or unintentional mistakes that will influence the results.
  6. that experimenters won't be significantly biased by their presumptions.
  7. that random sampling is representative of the entire population.
    A simple random sample (SRS) is the most basic probabilistic option used for creating a sample from a population. The benefit of SRS is that the investigator is guaranteed to choose a sample that represents the population, which ensures statistically valid conclusions.

Metaphysical naturalism

Naturalism is also known as "metaphysical naturalism", "ontological naturalism", "pure naturalism" and "philosophical naturalism".

Metaphysical naturalism holds that all properties related to consciousness and the mind are reducible to, or supervene upon, nature. Broadly, the corresponding theological perspective is religious naturalism or spiritual naturalism. More specifically, metaphysical naturalism rejects the supernatural concepts and explanations that are part of many religions.

Methodological naturalism

Methodological naturalism requires scientists to seek explanations in the world around us based on what we can observe, test, replicate and verify. It is a self-imposed convention of science.

Methodological naturalism concerns itself with methods of learning what nature is. These methods are useful in the evaluation of claims about existence and knowledge and in identifying causal mechanisms responsible for the emergence of physical phenomena. It attempts to explain and test scientific endeavors, hypotheses, and events with reference to natural causes and events. This second sense of the term "naturalism" seeks to provide a framework within which to conduct the scientific study of the laws of nature. Methodological naturalism is a way of acquiring knowledge. It is a distinct system of thought concerned with a cognitive approach to reality, and is thus a philosophy of knowledge. Studies by sociologist Elaine Ecklund suggest that religious scientists in practice apply methodological naturalism. They report that their religious beliefs affect the way they think about the implications – often moral – of their work, but not the way they practice science.

Steven Schafersman states that methodological naturalism is "the adoption or assumption of philosophical naturalism within the scientific method with or without fully accepting or believing it ... science is not metaphysical and does not depend on the ultimate truth of any metaphysics for its success, but methodological naturalism must be adopted as a strategy or working hypothesis for science to succeed. We may therefore be agnostic about the ultimate truth of naturalism, but must nevertheless adopt it and investigate nature as if nature is all that there is."

In a series of articles and books from 1996 onward, Robert T. Pennock wrote using the term "methodological naturalism" to clarify that the scientific method confines itself to natural explanations without assuming the existence or non-existence of the supernatural, and is not based on dogmatic metaphysical naturalism. Pennock's testimony as an expert witness at the Kitzmiller v. Dover Area School District trial was cited by the Judge in his Memorandum Opinion concluding that "Methodological naturalism is a 'ground rule' of science today":

Expert testimony reveals that since the scientific revolution of the 16th and 17th centuries, science has been limited to the search for natural causes to explain natural phenomena.... While supernatural explanations may be important and have merit, they are not part of science. It is a "ground rule" that "requires scientists to seek explanations in the world around us based upon what we can observe, test, replicate, and verify."

Schafersman writes that "while science as a process only requires methodological naturalism, I think that the assumption of methodological naturalism by scientists and others logically and morally entails ontological naturalism", and "I maintain that the practice or adoption of methodological naturalism entails a logical and moral belief in ontological naturalism, so they are not logically decoupled."

Views on Methodological Naturalism

W. V. O. Quine

W. V. O. Quine describes naturalism as the position that there is no higher tribunal for truth than natural science itself. In his view, there is no better method than the scientific method for judging the claims of science, and there is neither any need nor any place for a "first philosophy", such as (abstract) metaphysics or epistemology, that could stand behind and justify science or the scientific method.

Therefore, philosophy should feel free to make use of the findings of scientists in its own pursuit, while also feeling free to offer criticism when those claims are ungrounded, confused, or inconsistent. In Quine's view, philosophy is "continuous with" science and both are empirical. Naturalism is not a dogmatic belief that the modern view of science is entirely correct. Instead, it simply holds that science is the best way to explore the processes of the universe and that those processes are what modern science is striving to understand. However, this Quinean Replacement Naturalism finds relatively few supporters among philosophers.

Karl Popper

Karl Popper equated naturalism with the inductive theory of science. He rejected it based on his general critique of induction, yet acknowledged its utility as a means of inventing conjectures.

A naturalistic methodology (sometimes called an "inductive theory of science") has its value, no doubt.... I reject the naturalistic view: It is uncritical. Its upholders fail to notice that whenever they believe to have discovered a fact, they have only proposed a convention. Hence the convention is liable to turn into a dogma. This criticism of the naturalistic view applies not only to its criterion of meaning, but also to its idea of science, and consequently to its idea of empirical method.

— Karl R. Popper, The Logic of Scientific Discovery, (Routledge, 2002), pp. 52–53, ISBN 0-415-27844-9.

Popper instead proposed that science should adopt a methodology based on falsifiability for demarcation, because no number of experiments can ever prove a theory, but a single experiment can contradict one. Popper holds that scientific theories are characterized by falsifiability.

Alvin Plantinga

Alvin Plantinga, Professor Emeritus of Philosophy at Notre Dame, and a Christian, has become a well-known critic of naturalism. He suggests, in his evolutionary argument against naturalism, that the probability that evolution has produced humans with reliable true beliefs, is low or inscrutable, unless the evolution of humans was guided (for example, by God). According to David Kahan of the University of Glasgow, in order to understand how beliefs are warranted, a justification must be found in the context of supernatural theism, as in Plantinga's epistemology.

Plantinga argues that together, naturalism and evolution provide an insurmountable "defeater for the belief that our cognitive faculties are reliable", i.e., a skeptical argument along the lines of Descartes' evil demon or brain in a vat.

Take philosophical naturalism to be the belief that there aren't any supernatural entities – no such person as God, for example, but also no other supernatural entities, and nothing at all like God. My claim was that naturalism and contemporary evolutionary theory are at serious odds with one another – and this despite the fact that the latter is ordinarily thought to be one of the main pillars supporting the edifice of the former. (Of course I am not attacking the theory of evolution, or anything in that neighborhood; I am instead attacking the conjunction of naturalism with the view that human beings have evolved in that way. I see no similar problems with the conjunction of theism and the idea that human beings have evolved in the way contemporary evolutionary science suggests.) More particularly, I argued that the conjunction of naturalism with the belief that we human beings have evolved in conformity with current evolutionary doctrine... is in a certain interesting way self-defeating or self-referentially incoherent.

— Alvin Plantinga, Naturalism Defeated?: Essays on Plantinga's Evolutionary Argument Against Naturalism, "Introduction"

Robert T. Pennock

Robert T. Pennock contends that as supernatural agents and powers "are above and beyond the natural world and its agents and powers" and "are not constrained by natural laws", only logical impossibilities constrain what a supernatural agent cannot do. He states: "If we could apply natural knowledge to understand supernatural powers, then, by definition, they would not be supernatural." As the supernatural is necessarily a mystery to us, it can provide no grounds on which one can judge scientific models. "Experimentation requires observation and control of the variables.... But by definition we have no control over supernatural entities or forces." Science does not deal with meanings; the closed system of scientific reasoning cannot be used to define itself. Allowing science to appeal to untestable supernatural powers would make the scientist's task meaningless, undermine the discipline that allows science to make progress, and "would be as profoundly unsatisfying as the ancient Greek playwright's reliance upon the deus ex machina to extract his hero from a difficult predicament."

Naturalism of this sort says nothing about the existence or nonexistence of the supernatural, which by this definition is beyond natural testing. As a practical consideration, the rejection of supernatural explanations would merely be pragmatic; it would thus nonetheless be possible for an ontological supernaturalist to espouse and practice methodological naturalism. For example, scientists may believe in God while practicing methodological naturalism in their scientific work. This position does not preclude knowledge that is somehow connected to the supernatural. Generally, however, anything that one can examine and explain scientifically would not be supernatural, simply by definition.

Criticism

Applicability of Mathematics to the Material Universe

The late philosopher of mathematics Mark Steiner wrote extensively on this matter and acknowledged that the applicability of mathematics constitutes "a challenge to naturalism."

Genopolitics

From Wikipedia, the free encyclopedia https://en.wikipedia.org/wiki/Gen...