Friday, September 8, 2023

Proof of the Truthful

From Wikipedia, the free encyclopedia
Avicenna, the proponent of the argument, depicted on a 1999 Tajikistani banknote

The Proof of the Truthful (Arabic: برهان الصديقين, romanized: burhān al-ṣiddīqīn, also translated Demonstration of the Truthful or Proof of the Veracious, among others) is a formal argument for proving the existence of God introduced by the Islamic philosopher Avicenna (also known as Ibn Sina, 980–1037). Avicenna argued that there must be a "necessary existent" (Arabic: واجب الوجود, romanized: wājib al-wujūd), an entity that cannot not exist. The argument says that the entire set of contingent things must have a cause that is not contingent, because otherwise it would be included in the set. Furthermore, through a series of arguments, he derived that the necessary existent must have attributes that he identified with God in Islam, including unity, simplicity, immateriality, intellect, power, generosity, and goodness.

Historian of philosophy Peter Adamson called the argument one of the most influential medieval arguments for God's existence, and Avicenna's biggest contribution to the history of philosophy. It was enthusiastically received and repeated (sometimes with modification) by later philosophers, including generations of Muslim philosophers, Western Christian philosophers such as Thomas Aquinas and Duns Scotus, and Jewish philosophers such as Maimonides.

Critics of the argument include Averroes, who objected to its methodology, Al-Ghazali, who disagreed with its characterization of God, and modern critics who state that its piecemeal derivation of God's attributes allows people to accept parts of the argument but still reject God's existence. There is no consensus among modern scholars on the classification of the argument; some say that it is ontological while others say it is cosmological.

Origin

The argument is outlined in Avicenna's various works. The most concise and influential form is found in the fourth "class" of his Remarks and Admonitions (Al-isharat wa al-tanbihat). It is also present in Book II, Chapter 12 of the Book of Salvation (Kitab al-najat) and throughout the Metaphysics section of the Book of Healing (al-Shifa). The passages in Remarks and Admonitions draw a distinction between two types of proof for the existence of God: the first is derived from reflection on nothing but existence itself; the second requires reflection on things such as God's creations or God's acts. Avicenna says that the first type is the proof for "the truthful", which is more solid and nobler than the second one, which is proof for a certain "group of people". According to the professor of Islamic philosophy Shams C. Inati, by "the truthful" Avicenna means the philosophers, and the "group of people" means the theologians and others who seek to demonstrate God's existence through his creations. The proof then became known in the Arabic tradition as the "Proof of the Truthful" (burhan al-siddiqin).

Argument

The necessary existent

Avicenna distinguishes between a thing that needs an external cause in order to exist – a contingent thing – and a thing that is guaranteed to exist by its essence or intrinsic nature – a necessary existent. The argument tries to prove that there is indeed a necessary existent. It does this by first considering whether the opposite could be true: that everything that exists is contingent. Each contingent thing will need something other than itself to bring it into existence, which will in turn need another cause to bring it into existence, and so on. Because this seemed to lead to an infinite regress, cosmological arguments before Avicenna concluded that some necessary cause (such as God) is needed to end the infinite chain. However, Avicenna's argument does not preclude the possibility of an infinite regress.

Instead, the argument considers the entire collection (jumla) of contingent things, the sum total of every contingent thing that exists, has existed, or will exist. Avicenna argues that this aggregate, too, must obey the rule that applies to a single contingent thing; in other words, it must have something outside itself that causes it to exist. This cause has to be either contingent or necessary. It cannot be contingent, though, because if it were, it would already be included within the aggregate. Thus the only remaining possibility is that an external cause is necessary, and that cause must be a necessary existent.

Avicenna anticipates that one could reject the argument by saying that the collection of contingent things may not be contingent. A whole does not automatically share the features of its parts; in mathematics, for example, a set of numbers is not itself a number. The objection therefore holds that the argument's assumption that the collection of contingent things is itself contingent is mistaken. However, Avicenna dismisses this counter-argument as a capitulation rather than an objection: if the entire collection of contingent things is not contingent, then it must be necessary, which likewise leads to the conclusion that there is a necessary existent, the very thing Avicenna is trying to prove. Avicenna remarks, "in a certain way, this is the very thing that is sought".
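The logical skeleton of the argument can be set out in quasi-formal notation. The following is an illustrative reconstruction, not Avicenna's own formalism; C(x) abbreviates "x is contingent" and N(x) "x is a necessary existent":

```latex
% Illustrative reconstruction of the aggregate argument
\begin{align*}
\text{(P1)}\quad & \forall x\,\bigl(C(x) \rightarrow \exists y\,(y \neq x \wedge \mathrm{Causes}(y,x))\bigr)
  && \text{every contingent thing has an external cause}\\
\text{(P2)}\quad & J \text{ is the aggregate of all contingent things}\\
\text{(1)}\quad & C(J) \vee N(J)
  && \text{the aggregate is contingent or necessary}\\
\text{(2)}\quad & N(J) \Rightarrow \exists z\, N(z)\\
\text{(3)}\quad & C(J) \Rightarrow \exists y\,\bigl(\mathrm{Causes}(y,J) \wedge \neg C(y)\bigr)
  && \text{a contingent cause would lie inside } J\\
\text{(4)}\quad & \neg C(y) \Rightarrow N(y)\\
\therefore\quad & \exists z\, N(z)
  && \text{a necessary existent exists}
\end{align*}
```

Step (3) is where Avicenna's argument departs from earlier cosmological arguments: the external cause is excluded from being contingent not by forbidding an infinite regress, but because any contingent cause would already belong to the aggregate J.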

From the necessary existent to God

The limitation of the argument so far is that it only shows the existence of a necessary existent, and that is different from showing the existence of God as worshipped in Islam. An atheist might agree that a necessary existent exists, but it could be the universe itself, or there could be many necessary existents, none of which is God. Avicenna is aware of this limitation, and his works contain numerous arguments to show the necessary existent must have the attributes associated with God identified in Islam.

For example, Avicenna gives a philosophical justification for the Islamic doctrine of tawhid (oneness of God) by showing the uniqueness and simplicity of the necessary existent. He argues that the necessary existent must be unique, using a proof by contradiction (reductio) to show that a contradiction would follow if one supposed that there were more than one necessary existent. If one postulates two necessary existents, A and B, a simplified version of the argument considers two possibilities. If A is distinct from B as a result of something implied by necessity of existence, then B, being a necessary existent itself, would share it too, and the two would not be distinct after all. If, on the other hand, the distinction resulted from something not implied by necessity of existence, then this individuating factor would be a cause of A, which means that A has a cause and is not a necessary existent after all. Either way, the supposition leads to contradiction, which for Avicenna establishes the uniqueness of the necessary existent.

Avicenna argues that the necessary existent must also be simple (not a composite) by a similar reductio strategy. If it were a composite, its internal parts would need a feature that distinguishes each from the others. The distinguishing feature cannot be derived solely from the parts' necessity of existence, because then the parts would all have the same feature and not be distinct: a contradiction. But it also cannot be accidental, or require an outside cause, because this would contradict the necessity of existence.
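The uniqueness reductio can be compressed into a short case analysis. As above, this is a simplified modern schematisation, not Avicenna's own notation:

```latex
% Simplified case analysis for uniqueness
\begin{align*}
&\text{Assume } N(A),\ N(B),\ A \neq B,\ \text{distinguished by some feature } d.\\
&\text{Case 1: } d \text{ follows from necessity of existence}
  \;\Rightarrow\; B \text{ has } d \text{ as well} \;\Rightarrow\; A = B.\\
&\text{Case 2: } d \text{ does not follow from necessity of existence}
  \;\Rightarrow\; d \text{ is an external cause of } A \;\Rightarrow\; \neg N(A).\\
&\text{Each case contradicts the assumption; hence the necessary existent is unique.}
\end{align*}
```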

Avicenna derives other attributes of the necessary existent in multiple texts in order to justify its identification with God. He shows that the necessary existent must also be immaterial, intellective, powerful, generous, of pure good (khayr mahd), willful (irada), "wealthy" or "sufficient" (ghani), and self-subsistent (qayyum), among other qualities. These attributes often correspond to the epithets of God found in the Quran. In discussing some of the attributes' derivations, Adamson commented that "a complete consideration of Avicenna's derivation of all the attributes ... would need a book-length study". In general, Avicenna derives the attributes based on two aspects of the necessary existent: (1) its necessity, which can be shown to imply its sheer existence and a range of negations (e.g. not being caused, not being multiple), and (2) its status as a cause of other existents, which can be shown to imply a range of positive relations (e.g. knowing and powerful).

Reaction

Reception

Present-day historian of philosophy Peter Adamson called this argument one of the most influential medieval arguments for God's existence, and Avicenna's biggest contribution to the history of philosophy. Generations of Muslim philosophers and theologians took up the proof and its conception of God as a necessary existent with approval, and sometimes with modifications. The phrase wajib al-wujud (necessary existent) became widely used to refer to God, even in the works of Avicenna's staunch critics, a sign of the proof's influence. Outside the Muslim tradition, it was also enthusiastically received, repeated, and modified by later philosophers such as Thomas Aquinas (1225–1274) and Duns Scotus (1266–1308) of the Western Christian tradition, as well as by Jewish philosophers such as Maimonides (d. 1204).

Adamson said that one reason for its popularity is that it matches "an underlying rationale for many people's belief in God", which he contrasted with Anselm's ontological argument, formulated a few decades later, which reads more like a "clever trick" than a philosophical justification of one's faith. Professor of medieval philosophy Jon McGinnis said that the argument requires only a few premises: the distinction between the necessary and the contingent, that "something exists", and that a set subsists through its members (an assumption McGinnis described as "almost true by definition").

Criticism

The Islamic Andalusi philosopher Averroes, or Ibn Rushd (1126–1198), criticized the argument's methodology. Averroes, an avid Aristotelian, argued that God's existence has to be shown on the basis of the natural world, as Aristotle had done. According to Averroes, a proof should be based on physics, not on metaphysical reflections as in the "Proof of the Truthful". Other Muslim philosophers, such as Al-Ghazali (1058–1111), attacked the argument over implications that seemed incompatible with God as known through the Islamic revelation. For example, according to Avicenna, God can have no features or relations that are contingent, so his causing of the universe must be necessary. Al-Ghazali disputed this as incompatible with the concept of God's untrammelled free will as taught in Al-Ghazali's Asharite theology. He further argued that God's free choice can be shown by the arbitrary nature of the exact size of the universe or the time of its creation.

Peter Adamson offered several more possible lines of criticism. He pointed out that Avicenna adopts a piecemeal approach to prove the necessary existent, and then derives God's traditional attributes from it one at a time. This makes each of the arguments subject to separate assessment: some might accept the proof of the necessary existent while rejecting the other arguments, and such a critic could still reject the existence of God. Another type of criticism might attack the proof of the necessary existent itself. Such a critic might reject Avicenna's conception of contingency, a starting point of the original proof, by saying that the universe could simply happen to exist without being either necessary or contingent on an external cause.

Classification

German philosopher Immanuel Kant (1724–1804) divided arguments for the existence of God into three groups: ontological, cosmological, and teleological. Scholars disagree on whether Avicenna's "Proof of the Truthful" is ontological, that is, derived through sheer conceptual analysis, or cosmological, that is, derived by invoking empirical premises (e.g. "a contingent thing exists"). Scholars Herbert A. Davidson, Lenn E. Goodman, Michael E. Marmura, M. Saeed Sheikh, and Soheil Afnan argued that it was cosmological. Davidson said that Avicenna did not regard "the analysis of the concept necessary existent by virtue of itself as sufficient to establish the actual existence of anything in the external world" and that he had offered a new form of cosmological argument. Others, including Parviz Morewedge, Gary Legenhausen, Abdel Rahman Badawi, Miguel Cruz Hernández, and M. M. Sharif, argued that Avicenna's argument was ontological. Morewedge referred to the argument as "Ibn Sina's ontological argument for the existence of God" and said that it was "purely based on his analytic specification of this concept [the Necessary Existent]". Steve A. Johnson and Toby Mayer said the argument was a hybrid of the two.

Teleportation

Teleportation is the hypothetical transfer of matter or energy from one point to another without traversing the physical space between them. It is a common subject in science fiction literature and in other popular culture. Teleportation is often paired with time travel, since the journey between the two points takes an unknown period of time and is sometimes depicted as instantaneous. An apport is a similar phenomenon featured in parapsychology and spiritualism.

There is no known physical mechanism that would allow for teleportation. Scientific papers and media articles that use the term teleportation typically report on so-called "quantum teleportation", a scheme for transferring information which, due to the no-communication theorem, still does not allow faster-than-light communication.

Etymology

The use of the term teleport to describe the hypothetical movement of material objects between one place and another without physically traversing the distance between them has been documented as early as 1878.

American writer Charles Fort is credited with having coined the word teleportation in 1931 to describe the strange disappearances and appearances of anomalies, which he suggested may be connected. As in the earlier usage, he joined the Greek prefix tele- (meaning "remote") to the root of the Latin verb portare (meaning "to carry"). Fort's first formal use of the word occurred in the second chapter of his 1931 book Lo!:

Mostly in this book I shall specialize upon indications that there exists a transportory force that I shall call Teleportation. I shall be accused of having assembled lies, yarns, hoaxes, and superstitions. To some degree I think so, myself. To some degree, I do not. I offer the data.

Cultural references

Fiction

Teleportation is a common subject in science fiction literature, film, video games, and television. The use of matter transmitters in science fiction originated at least as early as the 19th century. An early example of scientific teleportation (as opposed to magical or spiritual teleportation) is found in the 1897 novel To Venus in Five Seconds by Fred T. Jane. Jane's protagonist is transported from a gazebo full of strange machinery on Earth to the planet Venus – hence the title.

The earliest recorded story of a "matter transmitter" was Edward Page Mitchell's "The Man Without a Body" in 1877.

Quantum teleportation

Quantum teleportation is distinct from regular teleportation, as it does not transfer matter from one place to another but rather transmits the information necessary to prepare a (microscopic) target system in the same quantum state as the source system. The scheme was named quantum "teleportation" because certain properties of the source system are recreated in the target system without any apparent quantum information carrier propagating between the two.

In many cases, such as normal matter at room temperature, the exact quantum state of a system is irrelevant for any practical purpose (because it fluctuates rapidly anyway, it "decoheres"), and the necessary information to recreate the system is classical. In those cases, quantum teleportation may be replaced by the simple transmission of classical information, such as radio communication.
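The information flow of the scheme can be illustrated with a small state-vector simulation of the standard three-qubit protocol. This is a sketch for illustration only; the function names are invented here, and only NumPy is assumed:

```python
import numpy as np

# Single-qubit gates
I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
# CNOT with the first qubit as control
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)

def kron(*ops):
    """Kronecker product of a sequence of operators."""
    out = np.array([[1]], dtype=complex)
    for op in ops:
        out = np.kron(out, op)
    return out

def teleport(psi):
    """Teleport the single-qubit state psi from qubit 0 to qubit 2.

    Returns a dict mapping each measurement outcome (m0, m1) to the
    corrected state of qubit 2; every entry should equal psi.
    """
    bell = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)  # shared pair on qubits 1, 2
    state = np.kron(psi, bell)             # 8-dim state, qubit 0 most significant
    state = kron(CNOT, I2) @ state         # CNOT: control qubit 0, target qubit 1
    state = kron(H, I2, I2) @ state        # Hadamard on qubit 0
    results = {}
    for m0 in (0, 1):
        for m1 in (0, 1):
            # Project qubits 0 and 1 onto |m0 m1>; the remainder is qubit 2's state
            base = (m0 << 2) | (m1 << 1)
            sub = state[base:base + 2]
            sub = sub / np.linalg.norm(sub)      # renormalise after measurement
            # Classical corrections sent to the receiver: X if m1, then Z if m0
            if m1:
                sub = X @ sub
            if m0:
                sub = Z @ sub
            results[(m0, m1)] = sub
    return results
```

For any input state, all four measurement outcomes reconstruct the original amplitudes on qubit 2, but the two classical bits (m0, m1) must still travel by a conventional channel, which is why the scheme cannot carry signals faster than light.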

In 1993, Bennett et al. proposed that a quantum state of a particle could be transferred to another distant particle without moving the two particles at all. This is called quantum state teleportation. Many theoretical and experimental papers have since been published, and researchers believe that quantum teleportation is a foundation of quantum computation and quantum communication.

In 2008, M. Hotta proposed that it may be possible to teleport energy by exploiting quantum energy fluctuations of an entangled vacuum state of a quantum field. Some papers have since been published, but there has been no experimental verification.

In 2014, researcher Ronald Hanson and colleagues from the Delft University of Technology in the Netherlands demonstrated the teleportation of information between two entangled quantum bits three metres apart.

In 2016, Y. Wei showed that in a generalization of quantum mechanics, particles themselves could teleport from one place to another. This is called particle teleportation. With this concept, superconductivity can be viewed as the teleportation of some electrons in the superconductor, and superfluidity as the teleportation of some of the atoms in the capillary tube. This effect is not predicted to occur in standard quantum mechanics.

Philosophy

Philosopher Derek Parfit used teleportation in his teletransportation paradox.

Space travel in science fiction

Rocket on cover of Other Worlds sci-fi magazine, September 1951

Space travel, or space flight (less often, starfaring or star voyaging) is a classic science-fiction theme that has captivated the public and is almost archetypal for science fiction. Space travel, interplanetary or interstellar, is usually performed in space ships, and spacecraft propulsion in various works ranges from the scientifically plausible to the totally fictitious.

While some writers focus on realistic, scientific, and educational aspects of space travel, other writers see this concept as a metaphor for freedom, including "free[ing] mankind from the prison of the solar system". Though the science-fiction rocket has been described as a 20th-century icon, according to The Encyclopedia of Science Fiction "The means by which space flight has been achieved in sf – its many and various spaceships – have always been of secondary importance to the mythical impact of the theme". Works related to space travel have popularized such concepts as time dilation, space stations, and space colonization.

While generally associated with science fiction, space travel has also occasionally featured in fantasy, sometimes involving magic or supernatural entities such as angels.

History

Science and Mechanics, November 1931, showing a proposed sub-orbital spaceship that would reach a 700-mile altitude on a one-hour flight from Berlin to New York
Still from Lost in Space TV series premiere (1965), depicting space travelers in suspended animation

A classic, defining trope of the science-fiction genre is that the action takes place in space, either aboard a spaceship or on another planet. Early works of science fiction, termed "proto SF" – such as novels by 17th-century writers Francis Godwin and Cyrano de Bergerac, and by astronomer Johannes Kepler – include "lunar romances", much of whose action takes place on the Moon. Science-fiction critic George Slusser also pointed to Christopher Marlowe's Doctor Faustus (1604) – in which the main character is able to see the entire Earth from high above – and noted the connections of space travel to earlier dreams of flight and air travel, as far back as the writings of Plato and Socrates. In such a grand view, space travel, and inventions such as various forms of "star drive", can be seen as metaphors for freedom, including "free[ing] mankind from the prison of the solar system".

In the following centuries, while science fiction addressed many aspects of futuristic science as well as space travel, space travel proved the more influential with the genre's writers and readers, evoking their sense of wonder. Most works were mainly intended to amuse readers, but a small number, often by authors with a scholarly background, sought to educate readers about related aspects of science, including astronomy; this was the motive of the influential American editor Hugo Gernsback, who dubbed it "sugar-coated science" and "scientifiction". Science-fiction magazines, including Gernsback's Science Wonder Stories, alongside works of pure fiction, discussed the feasibility of space travel; many science-fiction writers also published nonfiction works on space travel, such as Willy Ley's articles and David Lasser's book, The Conquest of Space (1931).

A roadside replica starship atop a stone base
Roadside replica of Star Trek starship Enterprise

From the late 19th and early 20th centuries on, there was a visible distinction between the more "realistic", scientific fiction (which would later evolve into hard sf), whose authors, often scientists like Konstantin Tsiolkovsky and Max Valier, focused on the more plausible concept of interplanetary travel (to the Moon or Mars); and the more grandiose, less realistic stories of "escape from Earth into a Universe filled with worlds", which gave rise to the genre of space opera, pioneered by E. E. Smith and popularized by the television series Star Trek, which debuted in 1966. This trend continues to the present, with some works focusing on "the myth of space flight" and others on "realistic examination of space flight"; the difference can be described as the authors' concern with "imaginative horizons rather than hardware".

The successes of 20th-century space programs, such as the Apollo 11 Moon landing, have often been described as "science fiction come true" and have served to further "demystify" the concept of space travel within the Solar System. Henceforth writers who wanted to focus on the "myth of space travel" were increasingly likely to do so through the concept of interstellar travel. Edward James wrote that many science fiction stories have "explored the idea that without the constant expansion of humanity, and the continual extension of scientific knowledge, comes stagnation and decline." While the theme of space travel has generally been seen as optimistic, some stories by revisionist authors, often more pessimistic and disillusioned, juxtapose the two types, contrasting the romantic myth of space travel with a more down-to-Earth reality. George Slusser suggests that "science fiction travel since World War II has mirrored the United States space program: anticipation in the 1950s and early 1960s, euphoria into the 1970s, modulating into skepticism and gradual withdrawal since the 1980s."

On the screen, the 1902 French film A Trip to the Moon, by Georges Méliès, described as the first science-fiction film, linked special effects to depictions of spaceflight. With other early films, such as Woman in the Moon (1929) and Things to Come (1936), it contributed to an early recognition of the rocket as the iconic, primary means of space travel, decades before space programs began. Later milestones in film and television include the Star Trek series and films, and Stanley Kubrick's 1968 epic 2001: A Space Odyssey, which visually advanced the concept of space travel, allowing it to evolve from the simple rocket toward a more complex space ship. The film featured a lengthy sequence of interstellar travel through a mysterious "star gate"; this sequence, noted for its psychedelic special effects conceived by Douglas Trumbull, influenced a number of later cinematic depictions of superluminal and hyperspatial travel, such as Star Trek: The Motion Picture (1979).

Means of travel

Generic terms for engines enabling science-fiction spacecraft propulsion include "space drive" and "star drive". In 1977, The Visual Encyclopedia of Science Fiction listed the following means of space travel: anti-gravity, atomic (nuclear), bloater, cannon one-shot, Dean drive, faster-than-light (FTL), hyperspace, inertialess drive, ion thruster, photon rocket, plasma propulsion engine, Bussard ramjet, R. force, solar sail, spindizzy, and torchship.

The 2007 Brave New Words: The Oxford Dictionary of Science Fiction lists the following terms related to the concept of space drive: gravity drive, hyperdrive, ion drive, jump drive, overdrive, ramscoop (a synonym for ram-jet), reaction drive, stargate, ultradrive, warp drive and torchdrive. Several of these terms are entirely fictitious or are based on "rubber science", while others are based on real scientific theories. Many fictitious means of travelling through space, in particular, faster than light travel, tend to go against the current understanding of physics, in particular, the theory of relativity. Some works sport numerous alternative star drives; for example the Star Trek universe, in addition to its iconic "warp drive", has introduced concepts such as "transwarp", "slipstream" and "spore drive", among others.

Many, particularly early, writers of science fiction did not address the means of travel in much detail, and many writings of the "proto-SF" era were disadvantaged by their authors' living in a time when knowledge of space was very limited; in fact, many early works did not even consider the concept of vacuum and instead assumed that an atmosphere of sorts, composed of air or "aether", continued indefinitely. Highly influential in popularizing the science of science fiction was the 19th-century French writer Jules Verne, whose means of space travel in his 1865 novel, From the Earth to the Moon (and its sequel, Around the Moon), was explained mathematically, and whose vehicle, a gun-launched space capsule, has been described as the first such vehicle to be "scientifically conceived" in fiction. Percy Greg's Across the Zodiac (1880) featured a spaceship with a small garden, an early precursor of hydroponics. Another writer who attempted to merge concrete scientific ideas with science fiction was the turn-of-the-century Russian writer and scientist Konstantin Tsiolkovsky, who popularized the concept of rocketry. George Mann mentions Robert A. Heinlein's Rocket Ship Galileo (1947) and Arthur C. Clarke's Prelude to Space (1951) as early, influential modern works that emphasized the scientific and engineering aspects of space travel. From the 1960s on, growing popular interest in modern technology also led to increasing depictions of interplanetary spaceships based on plausible extensions of real modern technology. The Alien franchise features ships with ion propulsion, a developing technology at the time that would be used years later in the Deep Space 1, Hayabusa 1 and SMART-1 spacecraft.

Interstellar travel

Slower than light

With regard to interstellar travel, in which faster-than-light speeds are generally considered unrealistic, more realistic depictions have often focused on the idea of "generation ships" that travel at sub-light speed for many generations before arriving at their destinations. Other scientifically plausible concepts of interstellar travel include suspended animation and, less often, ion drive, solar sail, Bussard ramjet, and time dilation.
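The time-dilation idea behind such stories can be made concrete with a short special-relativity calculation. The distance and speed below are illustrative choices, not figures from the works discussed:

```python
import math

def proper_time_years(distance_ly, speed_c):
    """Ship-clock duration of a constant-speed interstellar trip.

    distance_ly: one-way distance in light-years (Earth frame)
    speed_c: cruise speed as a fraction of the speed of light (0 < speed_c < 1)
    """
    earth_frame_years = distance_ly / speed_c        # time measured on Earth
    gamma = 1.0 / math.sqrt(1.0 - speed_c ** 2)      # Lorentz factor
    return earth_frame_years / gamma                 # time measured on board

# Illustrative trip: ~4.25 light-years (roughly the distance to Proxima
# Centauri) at 0.9c takes about 4.7 years in the Earth frame but only
# about 2.1 years by the ship's clock.
```

At speeds close to c the on-board time shrinks dramatically, which is why time dilation lets a crew survive a trip that, for observers on Earth, still takes longer than the light-travel time.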

Faster than light

Some works discuss Einstein's general theory of relativity and the challenges it faces from quantum mechanics, and include concepts of space travel through wormholes or black holes. Many writers, however, gloss over such problems, introducing entirely fictional concepts such as hyperspace (also subspace, nulspace, overspace, jumpspace, or slipstream) travel using inventions such as hyperdrive, jump drive, warp drive, or space folding. The invention of completely made-up devices enabling space travel has a long tradition: already in the early 20th century, Verne criticized H. G. Wells' The First Men in the Moon (1901) for abandoning realistic science (its spaceship relied on an anti-gravity material called "cavorite"). Of the fictitious drives, by the mid-1970s the concept of hyperspace travel was described as having achieved the most popularity, and it would subsequently be further popularized, as hyperdrive, through its use in the Star Wars franchise. While the fictitious drives "solved" problems related to physics (the difficulty of faster-than-light travel), some writers introduce new wrinkles; for example, a common trope involves the difficulty of using such drives in close proximity to other objects, in some cases allowing their use only from the outskirts of planetary systems.

While usually the means of space travel is just a means to an end, in some works, particularly short stories, it is a central plot device. These works focus on themes such as the mysteries of hyperspace, or the consequences of getting lost after an error or malfunction.

Atmospheric circulation

Idealised depiction (at equinox) of large-scale atmospheric circulation on Earth
Long-term mean precipitation by month

Atmospheric circulation is the large-scale movement of air and, together with ocean circulation, is the means by which thermal energy is redistributed on the surface of the Earth. The Earth's atmospheric circulation varies from year to year, but the large-scale structure of its circulation remains fairly constant. Smaller-scale weather systems – mid-latitude depressions, or tropical convective cells – occur chaotically, and long-range weather predictions of them cannot be made beyond ten days in practice, or a month in theory (see chaos theory and the butterfly effect).

The Earth's weather is a consequence of its illumination by the Sun and the laws of thermodynamics. The atmospheric circulation can be viewed as a heat engine driven by the Sun's energy and whose energy sink, ultimately, is the blackness of space. The work produced by that engine causes the motion of the masses of air, and in that process it redistributes the energy absorbed by the Earth's surface near the tropics to the latitudes nearer the poles, and thence to space.
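The heat-engine picture admits a simple thermodynamic ceiling. The reservoir temperatures below are illustrative round numbers, not measured values:

```python
def carnot_efficiency(t_hot_kelvin, t_cold_kelvin):
    """Maximum (Carnot) efficiency of any heat engine operating
    between a hot and a cold reservoir."""
    return 1.0 - t_cold_kelvin / t_hot_kelvin

# Taking ~300 K for the tropical surface and ~220 K for the level at
# which the atmosphere radiates to space gives a ceiling of about 27%;
# the real atmosphere converts far less of the absorbed solar energy
# into the kinetic energy of winds.
```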

The large-scale atmospheric circulation "cells" shift polewards in warmer periods (for example, interglacials compared to glacials), but remain largely constant as they are, fundamentally, a property of the Earth's size, rotation rate, heating and atmospheric depth, all of which change little. Over very long time periods (hundreds of millions of years), a tectonic uplift can significantly alter their major elements, such as the jet stream, and plate tectonics may shift ocean currents. During the extremely hot climates of the Mesozoic, a third desert belt may have existed at the Equator.

Latitudinal circulation features

An idealised view of three large circulation cells showing surface winds
Vertical velocity at 500 hPa, July average. Ascent (negative values; blue to violet) is concentrated close to the solar equator; descent (positive values; red to yellow) is more diffuse but also occurs mainly in the Hadley cell.

The wind belts girdling the planet are organised into three cells in each hemisphere – the Hadley cell, the Ferrel cell, and the polar cell. Those cells exist in both the northern and southern hemispheres. The vast bulk of the atmospheric motion occurs in the Hadley cell. High pressure systems acting on the Earth's surface are balanced by low pressure systems elsewhere, so that, overall, the forces acting on the Earth's surface are in balance.

The horse latitudes are an area of high pressure at about 30° to 35° latitude (north or south) where winds diverge into the adjacent zones of Hadley or Ferrel cells, and which typically have light winds, sunny skies, and little precipitation.

Hadley cell

The ITCZ's band of clouds over the Eastern Pacific and the Americas as seen from space

The atmospheric circulation pattern that George Hadley described was an attempt to explain the trade winds. The Hadley cell is a closed circulation loop which begins at the equator. There, moist air is warmed by the Earth's surface, decreases in density, and rises. A similar air mass rising on the other side of the equator forces those rising air masses to move poleward. The rising air creates a low pressure zone near the equator. As the air moves poleward, it cools, becomes denser, and descends at about the 30th parallel, creating a high-pressure area. The descended air then travels toward the equator along the surface, replacing the air that rose from the equatorial zone, closing the loop of the Hadley cell. The poleward movement of the air in the upper part of the troposphere deviates toward the east, caused by the Coriolis acceleration (a manifestation of conservation of angular momentum). At ground level, however, the movement of the air toward the equator in the lower troposphere deviates toward the west, producing a wind from the east. The winds that flow to the west (from the east, an easterly wind) at ground level in the Hadley cell are called the trade winds.
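The angular-momentum reasoning can be quantified with a textbook idealisation: a ring of air starting at rest over the equator and conserving its absolute angular momentum as it drifts poleward. The formula and constants are standard, but the scenario is deliberately simplified:

```python
import math

OMEGA = 7.292e-5   # Earth's rotation rate (rad/s)
RADIUS = 6.371e6   # Earth's mean radius (m)

def zonal_wind(lat_deg):
    """Westerly wind speed (m/s) acquired by air moved from rest at the
    equator to latitude lat_deg while conserving absolute angular
    momentum: u = Omega * R * sin(phi)**2 / cos(phi)."""
    phi = math.radians(lat_deg)
    return OMEGA * RADIUS * math.sin(phi) ** 2 / math.cos(phi)

# At the 30th parallel this yields roughly 134 m/s from the west -- much
# faster than observed winds, a sign that friction and eddies also
# redistribute momentum in the real atmosphere.
```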

Though the Hadley cell is described as located at the equator, it shifts northerly (to higher latitudes) in June and July and southerly (toward lower latitudes) in December and January, as a result of the Sun's heating of the surface. The zone where the greatest heating takes place is called the "thermal equator". As the southern hemisphere's summer is in December to March, the movement of the thermal equator to higher southern latitudes takes place then.

The Hadley system provides an example of a thermally direct circulation. The power of the Hadley system, considered as a heat engine, is estimated at 200 terawatts.

Ferrel cell

Part of the air rising at 60° latitude diverges at high altitude toward the poles and creates the polar cell. The rest moves toward the equator, where it collides at 30° latitude with the high-level air of the Hadley cell. There it subsides and strengthens the high-pressure ridges beneath. A large part of the energy that drives the Ferrel cell is provided by the polar and Hadley cells circulating on either side, which drag the air of the Ferrel cell with them. The Ferrel cell, theorized by William Ferrel (1817–1891), is therefore a secondary circulation feature, whose existence depends upon the Hadley and polar cells on either side of it. It might be thought of as an eddy created by the Hadley and polar cells.

The air of the Ferrel cell that descends at 30° latitude returns poleward at the ground level, and as it does so it deviates toward the east. In the upper atmosphere of the Ferrel cell, the air moving toward the equator deviates toward the west. Both of those deviations, as in the case of the Hadley and polar cells, are driven by conservation of angular momentum. As a result, just as the easterly Trade Winds are found below the Hadley cell, the Westerlies are found beneath the Ferrel cell.

The Ferrel cell is weak because it has neither a strong heat source nor a strong heat sink, so the airflow and temperatures within it are variable. For this reason, the mid-latitudes are sometimes known as the "zone of mixing." Whereas the Hadley and polar cells are truly closed loops, the Ferrel cell is not, and the telling point is in the Westerlies, more formally known as "the Prevailing Westerlies." The easterly Trade Winds and the polar easterlies have nothing over which to prevail: their parent circulation cells are strong and face few obstacles in the form of massive terrain features or high-pressure zones. The weaker Westerlies of the Ferrel cell, however, can be disrupted; the local passage of a cold front may reverse them in a matter of minutes, and frequently does. As a result, surface winds can vary abruptly in direction. Above the surface, where winds are less disrupted by terrain, they are essentially westerly. A low-pressure zone at 60° latitude that moves toward the equator, or a high-pressure zone at 30° latitude that moves poleward, will accelerate the Westerlies of the Ferrel cell. A strong high moving poleward may bring westerly winds for days.

The Ferrel system acts as a heat pump with a coefficient of performance of 12.1, consuming kinetic energy from the Hadley and polar systems at an approximate rate of 275 terawatts.

Polar cell

The polar cell is a simple system with strong convection drivers. Though cool and dry relative to equatorial air, the air masses at the 60th parallel are still sufficiently warm and moist to undergo convection and drive a thermal loop. At the 60th parallel, the air rises to the tropopause (about 8 km at this latitude) and moves poleward. As it does so, the upper-level air mass deviates toward the east. When the air reaches the polar areas, it has cooled by radiation to space and is considerably denser than the underlying air. It descends, creating a cold, dry high-pressure area. At the polar surface level, the mass of air is driven away from the pole toward the 60th parallel, replacing the air that rose there and completing the polar circulation cell. As the surface air moves toward the equator, it deviates westward, again as a result of the Coriolis effect. These surface flows are called the polar easterlies, blowing from northeast to southwest near the north pole and from southeast to northwest near the south pole.

The outflow of air mass from the cell creates harmonic waves in the atmosphere known as Rossby waves. These ultra-long waves determine the path of the polar jet stream, which travels within the transitional zone between the tropopause and the Ferrel cell. By acting as a heat sink, the polar cell moves the abundant heat from the equator toward the polar regions.

The polar cell, terrain, and katabatic winds in Antarctica can create very cold conditions at the surface, for instance the lowest temperature recorded on Earth: −89.2 °C at Vostok Station in Antarctica, measured in 1983.

Contrast between cells

The Hadley cell and the polar cell are similar in that they are thermally direct; in other words, they exist as a direct consequence of surface temperatures. Their thermal characteristics drive the weather in their domain. The sheer volume of energy that the Hadley cell transports, and the depth of the heat sink contained within the polar cell, ensure that transient weather phenomena not only have negligible effect on the systems as a whole but, except under unusual circumstances, do not form at all. The endless chain of passing highs and lows which is part of everyday life for mid-latitude dwellers under the Ferrel cell, at latitudes between 30° and 60°, is unknown above the 60th and below the 30th parallels. There are some notable exceptions to this rule; over Europe, unstable weather extends to at least the 70th parallel north.

Longitudinal circulation features

Diurnal wind change in a local coastal area; the same mechanism also applies on the continental scale.

While the Hadley, Ferrel, and polar cells (whose axes are oriented along parallels or latitudes) are the major features of global heat transport, they do not act alone. Temperature differences also drive a set of circulation cells, whose axes of circulation are longitudinally oriented. This atmospheric motion is known as zonal overturning circulation.

Latitudinal circulation is a result of the highest solar radiation per unit area (solar intensity) falling on the tropics. The solar intensity decreases as the latitude increases, reaching essentially zero at the poles. Longitudinal circulation, however, is a result of the heat capacity of water, its absorptivity, and its mixing. Because of these properties, water warms and cools more slowly than land: for a given heat input, its temperature rises less. As a result, temperature variations on land are greater than on water.
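The effect of heat capacity can be made concrete with a toy calculation. The specific-heat values below are typical textbook figures (the soil value in particular varies with composition), and the calculation deliberately ignores mixing and evaporation, which spread heat through a deep layer of water and amplify the contrast further:

```python
# Typical specific heat capacities, J/(kg*K) (illustrative values):
C_WATER = 4186.0     # liquid water
C_DRY_SOIL = 800.0   # dry soil; varies with composition

def temperature_rise(heat_joules, mass_kg, specific_heat):
    """Temperature increase of a body that absorbs the given heat."""
    return heat_joules / (mass_kg * specific_heat)

# The same 100 kJ of solar energy absorbed by 10 kg of each material:
dT_water = temperature_rise(1.0e5, 10.0, C_WATER)    # ~2.4 K
dT_soil = temperature_rise(1.0e5, 10.0, C_DRY_SOIL)  # ~12.5 K
# The soil warms about five times more than the water, one reason the
# land-sea contrast drives longitudinally oriented circulation cells.
```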

The Hadley, Ferrel, and polar cells operate at the largest scale, of thousands of kilometers (the synoptic scale). The longitudinal circulation can also act on this scale of oceans and continents, and the effect is seasonal or even decadal. Warm air rises over the equatorial, continental, and western Pacific Ocean regions; when it reaches the tropopause, it cools and subsides over a region of relatively cooler water mass.

The Pacific Ocean cell plays a particularly important role in Earth's weather. This entirely ocean-based cell comes about as the result of a marked difference in the surface temperatures of the western and eastern Pacific. Under ordinary circumstances, the western Pacific waters are warm, and the eastern waters are cool. The process begins when strong convective activity over equatorial East Asia and subsiding cool air off South America's west coast create a wind pattern which pushes Pacific water westward and piles it up in the western Pacific. (Water levels in the western Pacific are about 60 cm higher than in the eastern Pacific.)

The daily (diurnal) longitudinal effects occur at the mesoscale (a horizontal range of five to several hundred kilometres). During the day, air warmed by the relatively hotter land rises, drawing in a cool breeze from the sea to replace it. At night the roles reverse: the water is now relatively warmer than the land, and air cooled over the land flows offshore as a land breeze.

Walker circulation

The Pacific cell is of such importance that it has been named the Walker circulation after Sir Gilbert Walker, an early-20th-century director of British observatories in India, who sought a means of predicting when the monsoon winds of India would fail. While he was never successful in doing so, his work led him to the discovery of a link between the periodic pressure variations in the Indian Ocean, and those between the eastern and western Pacific, which he termed the "Southern Oscillation".

The movement of air in the Walker circulation affects the loops on either side. Under normal circumstances, the weather behaves as expected. But every few years, the winters become unusually warm or unusually cold, or the frequency of hurricanes increases or decreases, and the pattern sets in for an indeterminate period.

The Walker cell plays a key role in this and in the El Niño phenomenon. If convective activity slows in the western Pacific for some reason (the reason is not currently known), the climates of areas adjacent to the western Pacific are affected. First, the upper-level westerly winds fail. This cuts off the source of returning cool air that would normally subside at about 30° south latitude, and therefore the air returning as surface easterlies ceases. There are two consequences. Warm water ceases to surge into the eastern Pacific from the west (it was "piled up" by past easterly winds), since there is no longer a surface wind to push it into the area of the western Pacific. This and the corresponding effects of the Southern Oscillation result in long-term unseasonable temperatures and precipitation patterns in North and South America, Australia, and southeastern Africa, and in the disruption of ocean currents.

Meanwhile, in the Atlantic, fast-blowing upper-level Westerlies of the Hadley cell form; these would ordinarily be blocked by the Walker circulation and unable to reach such intensities. These winds disrupt the tops of nascent hurricanes and greatly diminish the number that are able to reach full strength.

El Niño – Southern Oscillation

El Niño and La Niña are opposite surface-temperature anomalies of the southern Pacific, which heavily influence the weather on a large scale. During El Niño, warm surface water approaches the coast of South America, blocking the upwelling of nutrient-rich deep water; this has serious impacts on fish populations.

In the La Niña case, the convective cell over the western Pacific strengthens inordinately, producing colder-than-normal winters in North America and a more robust cyclone season in Southeast Asia and eastern Australia. There is also increased upwelling of deep, cold ocean water and stronger uplift of surface air near South America, resulting in more frequent droughts there, although fishermen reap benefits from the more nutrient-rich eastern Pacific waters.

Daemon (computing)

From Wikipedia, the free encyclopedia
Components of some Linux desktop environments that are daemons include D-Bus, NetworkManager (here called unetwork), PulseAudio (usound), and Avahi.

In multitasking computer operating systems, a daemon (/ˈdiːmən/ or /ˈdeɪmən/) is a computer program that runs as a background process, rather than being under the direct control of an interactive user. Traditionally, the process name of a daemon ends with the letter d, clarifying that the process is in fact a daemon and differentiating a daemon from a normal computer program. For example, syslogd is a daemon that implements the system logging facility, and sshd is a daemon that serves incoming SSH connections.

In a Unix environment, the parent process of a daemon is often, but not always, the init process. A daemon is usually created either by a process forking a child process and then immediately exiting, thus causing init to adopt the child process, or by the init process directly launching the daemon. In addition, a daemon launched by forking and exiting typically must perform other operations, such as dissociating the process from any controlling terminal (tty). Such procedures are often implemented in various convenience routines such as daemon(3) in Unix.

Systems often start daemons at boot time that will respond to network requests, hardware activity, or other programs by performing some task. Daemons such as cron may also perform defined tasks at scheduled times.

Terminology

The term was coined by the programmers at MIT's Project MAC. According to Fernando J. Corbató, who worked on Project MAC in 1963, his team was the first to use the term daemon, inspired by Maxwell's demon, an imaginary agent in physics and thermodynamics that helped to sort molecules, stating, "We fancifully began to use the word daemon to describe background processes that worked tirelessly to perform system chores". Unix systems inherited this terminology. Maxwell's demon is consistent with Greek mythology's interpretation of a daemon as a supernatural being working in the background.

In the general sense, daemon is an older form of the word "demon", from the Greek δαίμων. In the Unix System Administration Handbook Evi Nemeth states the following about daemons:

Many people equate the word "daemon" with the word "demon", implying some kind of satanic connection between UNIX and the underworld. This is an egregious misunderstanding. "Daemon" is actually a much older form of "demon"; daemons have no particular bias towards good or evil, but rather serve to help define a person's character or personality. The ancient Greeks' concept of a "personal daemon" was similar to the modern concept of a "guardian angel"—eudaemonia is the state of being helped or protected by a kindly spirit. As a rule, UNIX systems seem to be infested with both daemons and demons.

A further characterization of the mythological symbolism is that a daemon is something that is not visible yet is always present and working its will. In the Theages, attributed to Plato, Socrates describes his own personal daemon to be something like the modern concept of a moral conscience: "The favour of the gods has given me a marvelous gift, which has never left me since my childhood. It is a voice that, when it makes itself heard, deters me from what I am about to do and never urges me on".

In modern usage in the context of computer software, the word daemon is pronounced /ˈdiːmən/ DEE-mən or /ˈdeɪmən/ DAY-mən.

Alternative terms for daemon are service (used in Windows, from Windows NT onwards, and later also in Linux), started task (IBM z/OS), and ghost job (XDS UTS).

After the term was adopted for computer use, it was rationalized as a backronym for Disk And Execution MONitor.

Daemons that connect to a computer network are examples of network services.

Implementations

Unix-like systems

In a strictly technical sense, a Unix-like system process is a daemon when its parent process terminates and the daemon is assigned the init process (process number 1) as its parent process and has no controlling terminal. However, more generally, a daemon may be any background process, whether a child of the init process or not.
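The strict definition can be probed programmatically. The following heuristic is illustrative only: on modern systems a "subreaper" process other than PID 1 may adopt orphans, so the parent-PID test is not definitive. It checks the two conditions for the current process:

```python
import os

def looks_like_daemon():
    """Return True if the current process meets the strict definition:
    parented by init (PID 1) and lacking a controlling terminal."""
    if os.getppid() != 1:
        return False
    try:
        fd = os.open("/dev/tty", os.O_RDONLY)  # fails without a ctty
    except OSError:
        return True
    os.close(fd)
    return False
```

Run from an interactive shell this returns False; run from a detached background process adopted by init, it returns True.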

On a Unix-like system, the common method for a process to become a daemon, when the process is started from the command line or from a startup script such as an init script or a SystemStarter script, involves:

  • Optionally removing unnecessary variables from the environment.
  • Executing as a background task by forking and exiting (in the parent "half" of the fork). This allows the daemon's parent (shell or startup process) to receive exit notification and continue its normal execution.
  • Detaching from the invoking session, usually accomplished by a single operation, setsid():
    • Dissociating from the controlling tty.
    • Creating a new session and becoming the session leader of that session.
    • Becoming a process group leader.
  • If the daemon wants to ensure that it will not acquire a new controlling tty even by accident (which happens when a session leader without a controlling tty opens a free tty), it may fork and exit again. This means that it is no longer a session leader in the new session, and cannot acquire a controlling tty.
  • Setting the root directory (/) as the current working directory so that the process does not keep any directory in use that may be on a mounted file system (allowing it to be unmounted).
  • Changing the umask to 0 to allow open(), creat(), and other operating system calls to provide their own permission masks and not to depend on the umask of the caller.
  • Redirecting file descriptors 0, 1 and 2 for the standard streams (stdin, stdout and stderr) to /dev/null or a logfile, and closing all the other file descriptors inherited from the parent process.
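The sequence above can be condensed into a short Python sketch (a minimal illustration of the classic double-fork procedure, not a production daemon; a real service would also handle signals, write a PID file, and set up logging):

```python
import os
import sys

def daemonize():
    """Detach from the terminal and run in the background, following
    the steps described above."""
    if os.fork() > 0:       # first fork: the parent returns to the shell
        sys.exit(0)
    os.setsid()             # new session; drop the controlling tty
    if os.fork() > 0:       # second fork: no longer a session leader,
        sys.exit(0)         # so the process cannot reacquire a tty
    os.chdir("/")           # keep no mounted filesystem in use
    os.umask(0)             # let open()/creat() supply their own modes
    devnull = os.open(os.devnull, os.O_RDWR)
    for fd in (0, 1, 2):    # redirect stdin, stdout, stderr
        os.dup2(devnull, fd)
    if devnull > 2:
        os.close(devnull)
```

This is broadly what convenience routines such as daemon(3) automate, though implementations differ in detail.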

If the process is started by a super-server daemon, such as inetd, launchd, or systemd, the super-server daemon will perform those functions for the process, except in the case of old-style daemons not converted to run under systemd (specified as Type=forking) and of "multi-threaded" datagram servers under inetd.

MS-DOS

In the Microsoft DOS environment, daemon-like programs were implemented as terminate-and-stay-resident programs (TSR).

Windows NT

On Microsoft Windows NT systems, programs called Windows services perform the functions of daemons. They run as processes, usually do not interact with the monitor, keyboard, and mouse, and may be launched by the operating system at boot time. In Windows 2000 and later versions, Windows services are configured and manually started and stopped using the Control Panel, a dedicated control/configuration program, the Service Controller component of the Service Control Manager (sc command), the net start and net stop commands or the PowerShell scripting system.

However, any Windows application can perform the role of a daemon, not just a service, and some Windows daemons have the option of running as a normal process.

Classic Mac OS and macOS

On the classic Mac OS, optional features and services were provided by files loaded at startup time that patched the operating system; these were known as system extensions and control panels. Later versions of classic Mac OS augmented these with fully fledged faceless background applications: regular applications that ran in the background. To the user, these were still described as regular system extensions.

macOS, which is a Unix system, uses daemons but uses the term "services" to designate software that performs functions selected from the Services menu, rather than using that term for daemons, as Windows does.


Entropy (information theory)

From Wikipedia, the free encyclopedia https://en.wikipedia.org/wiki/Entropy_(information_theory) In info...