
Wednesday, June 13, 2018

Natural philosophy


From Wikipedia, the free encyclopedia


A celestial map from the 17th century, by the Dutch cartographer Frederik de Wit

Natural philosophy or philosophy of nature (from Latin philosophia naturalis) was the philosophical study of nature and the physical universe that was dominant before the development of modern science. It is considered to be the precursor of natural science.

From the ancient world, starting with Aristotle, to the 19th century, the term "natural philosophy" was the common term used to describe the practice of studying nature. It was in the 19th century that the concept of "science" received its modern shape with new titles emerging such as "biology" and "biologist", "physics" and "physicist" among other technical fields and titles; institutions and communities were founded, and unprecedented applications to and interactions with other aspects of society and culture occurred.[1] Isaac Newton's book Philosophiae Naturalis Principia Mathematica (1687), whose title translates to "Mathematical Principles of Natural Philosophy", reflects the then-current use of the words "natural philosophy", akin to "systematic study of nature". Even in the 19th century, a treatise by Lord Kelvin and Peter Guthrie Tait, which helped define much of modern physics, was titled Treatise on Natural Philosophy (1867).

In the German tradition, Naturphilosophie (philosophy of nature) persisted into the 18th and 19th centuries as an attempt to achieve a speculative unity of nature and spirit. Some of the greatest names in German philosophy are associated with this movement, including Goethe, Hegel and Schelling. Naturphilosophie was associated with Romanticism and a view that regarded the natural world as a kind of giant organism, as opposed to the philosophical approach of figures such as John Locke and Isaac Newton, who espoused a more mechanical view of the world, regarding it as being like a machine.

Origin and evolution of the term

The term natural philosophy preceded our current natural science (i.e. empirical science). Empirical science historically developed out of philosophy or, more specifically, natural philosophy. Natural philosophy was distinguished from the other precursor of modern science, natural history, in that natural philosophy involved reasoning and explanations about nature (and after Galileo, quantitative reasoning), whereas natural history was essentially qualitative and descriptive.

In the 14th and 15th centuries, natural philosophy was one of many branches of philosophy, but was not a specialized field of study. The first person appointed as a specialist in Natural Philosophy per se was Jacopo Zabarella, at the University of Padua in 1577.

Modern meanings of the terms science and scientists date only to the 19th century. Before that, science was a synonym for knowledge or study, in keeping with its Latin origin. The term gained its modern meaning when experimental science and the scientific method became a specialized branch of study apart from natural philosophy.[2]

From the mid-19th century, when it became increasingly unusual for scientists to contribute to both physics and chemistry, "natural philosophy" came to mean just physics, and the word is still used in that sense in degree titles at the University of Oxford. In general, chairs of Natural Philosophy established long ago at the oldest universities are nowadays occupied mainly by physics professors.

Scope

In Plato's earliest known dialogue, the Charmides, a distinction is drawn between sciences or bodies of knowledge that produce a physical result and those that do not. Natural philosophy has been categorized as a theoretical rather than a practical branch of philosophy (like ethics). Sciences that guide arts and draw on the philosophical knowledge of nature may produce practical results, but these subsidiary sciences (e.g., architecture or medicine) go beyond natural philosophy.

The study of natural philosophy seeks to explore the cosmos by whatever means are necessary to understand the universe. Some ideas presuppose that change is a reality. Although this may seem obvious, some philosophers have denied the reality of change, such as Plato's predecessor Parmenides and the later Greek philosopher Sextus Empiricus, and perhaps some Eastern philosophers. George Santayana, in his Scepticism and Animal Faith, attempted to show that the reality of change cannot be proven. If his reasoning is sound, it follows that to be a physicist, one must restrain one's skepticism enough to trust one's senses, or else rely on anti-realism.

René Descartes' metaphysical system of mind–body dualism describes two kinds of substance: matter and mind. According to this system, everything that is "matter" is deterministic and natural—and so belongs to natural philosophy—and everything that is "mind" is volitional and non-natural, and falls outside the domain of philosophy of nature.

Branches and subject matter

Major branches of natural philosophy include astronomy and cosmology, the study of nature on the grand scale; etiology, the study of (intrinsic and sometimes extrinsic) causes; the study of chance, probability and randomness; the study of elements; the study of the infinite and the unlimited (virtual or actual); the study of matter; mechanics, the study of translation of motion and change; the study of nature or the various sources of actions; the study of natural qualities; the study of physical quantities; the study of relations between physical entities; and the philosophy of space and time. (Adler, 1993)

History


Humankind's mental engagement with nature certainly predates civilization and the record of history. Philosophical, and specifically non-religious, thought about the natural world goes back to ancient Greece. These lines of thought began before Socrates, who turned philosophy from speculation about nature to a consideration of man, viz., political philosophy. The thought of early philosophers such as Parmenides, Heraclitus, and Democritus centered on the natural world. In addition, three pre-Socratic philosophers who lived in the Ionian town of Miletus (hence the Milesian school of philosophy), Thales, Anaximander, and Anaximenes, attempted to explain natural phenomena without recourse to creation myths involving the Greek gods. They were called the physikoi (natural philosophers) or, as Aristotle referred to them, the physiologoi. Plato followed Socrates in concentrating on man. It was Plato's student, Aristotle, who, in basing his thought on the natural world, returned empiricism to its primary place, while leaving room in the world for man.[3] Martin Heidegger observes that Aristotle was the originator of the conception of nature that prevailed from the Middle Ages into the modern era:
The Physics is a lecture in which he seeks to determine beings that arise on their own, τὰ φύσει ὄντα, with regard to their being. Aristotelian "physics" is different from what we mean today by this word, not only to the extent that it belongs to antiquity whereas the modern physical sciences belong to modernity, rather above all it is different by virtue of the fact that Aristotle's "physics" is philosophy, whereas modern physics is a positive science that presupposes a philosophy.... This book determines the warp and woof of the whole of Western thinking, even at that place where it, as modern thinking, appears to think at odds with ancient thinking. But opposition is invariably comprised of a decisive, and often even perilous, dependence. Without Aristotle's Physics there would have been no Galileo.[4]
Aristotle surveyed the thought of his predecessors and conceived of nature in a way that charted a middle course between their excesses.[5]
Plato's world of eternal and unchanging Forms, imperfectly represented in matter by a divine Artisan, contrasts sharply with the various mechanistic Weltanschauungen, of which atomism was, by the fourth century at least, the most prominent… This debate was to persist throughout the ancient world. Atomistic mechanism got a shot in the arm from Epicurus… while the Stoics adopted a divine teleology… The choice seems simple: either show how a structured, regular world could arise out of undirected processes, or inject intelligence into the system. This was how Aristotle… when still a young acolyte of Plato, saw matters. Cicero… preserves Aristotle's own cave-image: if troglodytes were brought on a sudden into the upper world, they would immediately suppose it to have been intelligently arranged. But Aristotle grew to abandon this view; although he believes in a divine being, the Prime Mover is not the efficient cause of action in the Universe, and plays no part in constructing or arranging it... But, although he rejects the divine Artificer, Aristotle does not resort to a pure mechanism of random forces. Instead he seeks to find a middle way between the two positions, one which relies heavily on the notion of Nature, or phusis.[6]
"The world we inhabit is an orderly one, in which things generally behave in predictable ways, Aristotle argued, because every natural object has a "nature"—an attribute (associated primarily with form) that makes the object behave in its customary fashion..."[7] Aristotle recommended four causes as appropriate for the business of the natural philosopher, or physicist, “and if he refers his problems back to all of them, he will assign the ‘why’ in the way proper to his science—the matter, the form, the mover, [and] ‘that for the sake of which’”. While the vagaries of the material cause are subject to circumstance, the formal, efficient and final cause often coincide because in natural kinds, the mature form and final cause are one and the same. The capacity to mature into a specimen of one's kind is directly acquired from “the primary source of motion”, i.e., from one's father, whose seed (sperma) conveys the essential nature (common to the species), as a hypothetical ratio.[8]
Material cause  
An object's motion will behave in different ways depending on the [substance/essence] from which it is made. (Compare clay, steel, etc.)
Formal cause  
An object's motion will behave in different ways depending on its material arrangement. (Compare a clay sphere, clay block, etc.)
Efficient cause 
That which caused the object to come into being; an "agent of change" or an "agent of movement".
Final cause  
The reason that caused the object to be brought into existence.
From the late Middle Ages into the modern era, the tendency has been to narrow "science" to the consideration of efficient or agency-based causes of a particular kind:[9]
The action of an efficient cause may sometimes, but not always, be described in terms of quantitative force. The action of an artist on a block of clay, for instance, can be described in terms of how many pounds of pressure per square inch is exerted on it. The efficient causality of the teacher in directing the activity of the artist, however, cannot be so described…

The final cause acts on the agent to influence or induce her to act. If the artist works "to make money," making money is in some way the cause of her action. But we cannot describe this influence in terms of quantitative force. The final cause acts, but it acts according to the mode of final causality, as an end or good that induces the efficient cause to act. The mode of causality proper to the final cause cannot itself be reduced to efficient causality, much less to the mode of efficient causality we call "force."[10]

Medieval philosophy of motion

Medieval thought on motion drew heavily on Aristotle's works, particularly the Physics and the Metaphysics. The issue that medieval philosophers had with motion was the inconsistency found between book 3 of the Physics and book 5 of the Metaphysics. Aristotle claimed in book 3 of the Physics that motion can be categorized by substance, quantity, quality, and place, whereas in book 5 of the Metaphysics he stated that motion is a magnitude of quantity. This dispute led to some important questions for natural philosophers: Which category or categories does motion fit into? Is motion the same thing as a terminus? Is motion separate from real things? In asking these questions, medieval philosophers sought to classify motion.[11]

William of Ockham offered an account of motion that was influential for many people in the Middle Ages. A problem with the vocabulary of motion is that it tempts people to assume that nouns must correspond to really existing qualities. Ockham argued that drawing this distinction is what allows people to understand motion: motion is a property of mobiles, locations, and forms, and that is all that is required to define what motion is. A famous example of this approach is Occam's razor, which simplifies vague statements by cutting them down to more descriptive ones: "Every motion derives from an agent" becomes "each thing that is moved, is moved by an agent", which makes motion a quality of the individual objects that are moved.[11]

Aristotle's philosophy of nature

"An acorn is potentially, but not actually, an oak tree. In becoming an oak tree, it becomes actually what it originally was only potentially. This change thus involves passage from potentiality to actuality — not from non-being to being but from one kind or degree to being another"

Aristotle held many important beliefs that started a convergence of thought for natural philosophy. Aristotle believed that attributes of objects belong to the objects themselves, and that objects share traits with other objects that fit them into a category. He uses the example of dogs to press this point: an individual dog may have very specific attributes (e.g., one dog can be black and another brown) but also very general ones that classify it as a dog (e.g., four-legged). This philosophy can be applied to many other objects as well. This idea is different from that of Plato, with whom Aristotle had a direct association. Aristotle argued that an object has a form (its properties) and matter (which is not itself one of its properties), and that together these define the object. The form cannot be separated from the matter; you cannot, for example, collect the properties in one pile and the matter in another.[7]

Aristotle believed that change was a natural occurrence. He used his philosophy of form and matter to argue that when something changes, its properties change without its matter changing. This change occurs by replacing certain properties with other properties. Since this change is always an intentional alteration, whether by forced means or by natural ones, change is a controllable ordering of qualities. He argues that this happens through three categories of being: non-being, potential being, and actual being. Through these three states, the process of changing an object never truly destroys the object's forms during the transition, but rather blurs the reality between the two states. An example of this could be changing an object from red to blue with a transitional purple phase.[7]

Other significant figures in natural philosophy

Early Greek philosophers studied motion and the cosmos. Figures like Hesiod regarded the natural world as offspring of the gods, whereas others like Leucippus and Democritus regarded the world as lifeless atoms in a vortex. Anaximander deduced that eclipses happen because of apertures in rings of celestial fire. Heraclitus believed that the heavenly bodies were made of fire contained within bowls, and that eclipses happen when a bowl turns away from the earth. Anaximenes is believed to have stated that the underlying element was air, and that by manipulating air one could change its thickness to create fire, water, dirt, and stones. Empedocles identified the elements that make up the world, which he termed the roots of all things, as Fire, Air, Earth, and Water. Parmenides argued that all change is a logical impossibility; he gives the example that nothing can go from nonexistence to existence. Plato argues that the world is an imperfect replica of an idea that a divine craftsman once held. He also believed that the only way to truly know something was through reason and logic, not the study of the object itself, but that changeable matter is a viable course of study.[7]

The scientific method has ancient precedents, and Galileo exemplifies the mathematical understanding of nature that is the hallmark of modern natural scientists. Galileo proposed that objects fall at the same rate regardless of their mass, as long as the medium they fall through is identical. The 19th-century distinction of a scientific enterprise apart from traditional natural philosophy has its roots in prior centuries. Proposals for a more "inquisitive" and practical approach to the study of nature are notable in Francis Bacon, whose ardent convictions did much to popularize his insightful Baconian method. The late 17th-century natural philosopher Robert Boyle wrote a seminal work on the distinction between physics and metaphysics called A Free Enquiry into the Vulgarly Received Notion of Nature, as well as The Sceptical Chymist, after which the modern science of chemistry is named (as distinct from proto-scientific studies of alchemy). These works of natural philosophy are representative of a departure from the medieval scholasticism taught in European universities, and anticipate in many ways the developments that would lead to science as practiced in the modern sense. As Bacon would say, "vexing nature" to reveal "her" secrets (scientific experimentation), rather than a mere reliance on largely historical, even anecdotal, observations of empirical phenomena, would come to be regarded as a defining characteristic of modern science, if not the very key to its success. Boyle's biographers, in their emphasis that he laid the foundations of modern chemistry, neglect how steadily he clung to the scholastic sciences in theory, practice and doctrine.[12] However, he meticulously recorded observational detail on practical research, and subsequently advocated not only this practice but also its publication, both for successful and unsuccessful experiments, so as to validate individual claims by replication.
For sometimes we use the word nature for that Author of nature whom the schoolmen, harshly enough, call natura naturans, as when it is said that nature hath made man partly corporeal and partly immaterial. Sometimes we mean by the nature of a thing the essence, or that which the schoolmen scruple not to call the quiddity of a thing, namely, the attribute or attributes on whose score it is what it is, whether the thing be corporeal or not, as when we attempt to define the nature of an angel, or of a triangle, or of a fluid body, as such. Sometimes we take nature for an internal principle of motion, as when we say that a stone let fall in the air is by nature carried towards the centre of the earth, and, on the contrary, that fire or flame does naturally move upwards toward heaven. Sometimes we understand by nature the established course of things, as when we say that nature makes the night succeed the day, nature hath made respiration necessary to the life of men. Sometimes we take nature for an aggregate of powers belonging to a body, especially a living one, as when physicians say that nature is strong or weak or spent, or that in such or such diseases nature left to herself will do the cure. Sometimes we take nature for the universe, or system of the corporeal works of God, as when it is said of a phoenix, or a chimera, that there is no such thing in nature, i.e. in the world. And sometimes too, and that most commonly, we would express by nature a semi-deity or other strange kind of being, such as this discourse examines the notion of.[13]
— Robert Boyle, A Free Enquiry into the Vulgarly Received Notion of Nature
Natural philosophers of the late 17th or early 18th century were sometimes insultingly described as 'projectors'. A projector was an entrepreneur who invited people to invest in his invention but - as the caricature went - could not be trusted, usually because his device was impractical.[14] Jonathan Swift satirized natural philosophers of the Royal Society as 'the academy of projectors' in his novel Gulliver's Travels. Historians of science have argued that natural philosophers and the so-called projectors sometimes overlapped in their methods and aims.[15][16]

The modern emphasis is less on a broad empiricism (one that includes passive observation of nature's activity) than on a narrow conception of the empirical, concentrating on the control exercised through experimental (active) observation for the sake of control of nature. Nature is reduced to a passive recipient of human activity.

Current work in the philosophy of science and nature

In the middle of the 20th century, Ernst Mayr's discussions on the teleology of nature brought up issues that were dealt with previously by Aristotle (regarding final cause) and Kant (regarding reflective judgment).[17]

Especially since the mid-20th-century European crisis, some thinkers argued the importance of looking at nature from a broad philosophical perspective, rather than what they considered a narrowly positivist approach relying implicitly on a hidden, unexamined philosophy.[18] One line of thought grows from the Aristotelian tradition, especially as developed by Thomas Aquinas. Another line springs from Edmund Husserl, especially as expressed in The Crisis of European Sciences. Students of his such as Jacob Klein and Hans Jonas more fully developed his themes. Last, but not least, there is the process philosophy inspired by Alfred North Whitehead's works.[19]

Among living scholars, Brian David Ellis, Nancy Cartwright, David Oderberg, and John Dupré are some of the more prominent thinkers who can arguably be classed as generally adopting a more open approach to the natural world. Ellis (2002) observes the rise of a "New Essentialism."[20] David Oderberg (2007) takes issue with other philosophers, including Ellis to a degree, who claim to be essentialists. He revives and defends the Thomistic-Aristotelian tradition from modern attempts to flatten nature to the limp subject of the experimental method.[21] In his In Praise of Natural Philosophy: A Revolution for Thought and Life (2017), Nicholas Maxwell argues that we need to reform philosophy and put science and philosophy back together again to create a modern version of natural philosophy.

Hawking talks about no clear Big Bang and no boundary to space-time

Stephen Hawking describes what came before the big bang.

Hawking says the universe had no clear “bang.” You can wind back the clock to the edges of those first moments of existence, but asking what came before would be like asking why you can keep walking north when you get to the North Pole. Time, as we define it, loses its meaning as the universe shrinks down.

It never quite narrows to a single point. But no one has proved physics works like that—yet.

Hawking proposes a "no boundary condition" version of space-time: as you approach the beginning, real time is replaced with imaginary time.

In theoretical physics, the Hartle–Hawking state, named after James Hartle and Stephen Hawking, is a proposal concerning the state of the universe prior to the Planck epoch.

Hartle and Hawking suggest that if we could travel backward in time toward the beginning of the universe, we would note that quite near what might have otherwise been the beginning, time gives way to space, such that at first there is only space and no time. Beginnings are entities that have to do with time; because time did not exist before the Big Bang, the concept of a beginning of the universe is meaningless. According to the Hartle–Hawking proposal, the universe has no origin as we would understand it: the universe was a singularity in both space and time, pre-Big Bang. Thus, the Hartle–Hawking state universe has no beginning, but it is not the steady state universe of Hoyle; it simply has no initial boundaries in time or space.

The Hartle–Hawking state is the wave function of the Universe—a notion meant to figure out how the Universe started—that is calculated from Feynman’s path integral.

More precisely, it is a hypothetical vector in the Hilbert space of a theory of quantum gravity that describes this wave function.

It is a functional of the metric tensor defined on a (D − 1)-dimensional compact surface, the Universe, where D is the spacetime dimension. The precise form of the Hartle–Hawking state is the path integral over all D-dimensional geometries that have the required induced metric on their boundary. According to the proposal, time as we now know it diverged from the three spatial dimensions only after the universe had reached the age of the Planck time.

Such a wave function of the Universe can be shown to satisfy the Wheeler–DeWitt equation.

Imaginary time is a mathematical representation of time which appears in some approaches to special relativity and quantum mechanics. It finds uses in connecting quantum mechanics with statistical mechanics and in certain cosmological theories.

Mathematically, imaginary time is real time which has undergone a Wick rotation so that its coordinates are multiplied by the imaginary unit i. Imaginary time is not imaginary in the sense that it is unreal or made-up (any more than, say, irrational numbers defy logic); it is simply expressed in terms of what mathematicians call imaginary numbers.
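
As a brief worked illustration (a standard textbook relation, not drawn from this article), applying the Wick rotation t = −iτ to the Minkowski line element of special relativity turns the indefinite spacetime interval into a positive-definite, Euclidean one:

ds^2 = -c^2\,dt^2 + dx^2 + dy^2 + dz^2 \quad\longrightarrow\quad ds^2 = c^2\,d\tau^2 + dx^2 + dy^2 + dz^2,

since dt^2 = (-i\,d\tau)^2 = -d\tau^2. In this Euclidean form, time enters the geometry on the same footing as the spatial directions, which is the sense in which it "behaves like space" near the no-boundary beginning.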

Stephen Hawking popularized the concept of imaginary time in his book The Universe in a Nutshell.

“One might think this means that imaginary numbers are just a mathematical game having nothing to do with the real world. From the viewpoint of positivist philosophy, however, one cannot determine what is real. All one can do is find which mathematical models describe the universe we live in. It turns out that a mathematical model involving imaginary time predicts not only effects we have already observed but also effects we have not been able to measure yet nevertheless believe in for other reasons. So what is real and what is imaginary? Is the distinction just in our minds?”

In the theory of relativity, time is multiplied by i. This may be accepted as a feature of the relationship between space and time, or it may be incorporated into time itself, as imaginary time, and the equations rewritten accordingly.

In physical cosmology, imaginary time may be incorporated into certain models of the universe which are solutions to the equations of general relativity. In particular, imaginary time can help to smooth out gravitational singularities, where known physical laws break down, to remove the singularity and avoid such breakdowns (see Hartle–Hawking state). The Big Bang, for example, appears as a singularity in ordinary time but, when modelled with imaginary time, the singularity can be removed and the Big Bang functions like any other point in four-dimensional spacetime.

Wave Function of the Universe(s)

From Hyperspace by Dr. Michio Kaku. Original link: https://jacobsm.com/deoxy/deoxy.org/h_kaku2.htm

[Physicist Stephen] Hawking is one of the founders of a new scientific discipline called quantum cosmology. At first, this seems like a contradiction in terms. The word quantum applies to the infinitesimally small world of quarks and neutrinos, while cosmology signifies the almost limitless expanse of outer space. However, Hawking and others now believe that the ultimate questions of cosmology can be answered only by quantum theory. Hawking takes quantum cosmology to its ultimate conclusion, allowing the existence of infinite numbers of parallel universes.

The starting point of quantum theory ... is a wave function that describes all the various possible states of a particle. For example, imagine a large, irregular thundercloud that fills up the sky. The darker the thundercloud, the greater the concentration of water vapor and dust at that point. Thus by simply looking at a thundercloud, we can rapidly estimate the probability of finding large concentrations of water and dust in certain parts of the sky.

The thundercloud may be compared to a single electron's wave function. Like a thundercloud, it fills up all space. Likewise, the greater its value at a point, the greater the probability of finding the electron there. Similarly, wave functions can be associated with large objects, like people. As I sit in my chair in Princeton, I know that I have a Schroedinger probability wave function. If I could somehow see my own wave function, it would resemble a cloud very much in the shape of my body. However, some of the cloud would spread out all over space, out to Mars and even beyond the solar system, although it would be vanishingly small there. This means that there is a very large likelihood that I am, in fact, sitting here in my chair and not on the planet Mars. Although part of my wave function has spread even beyond the Milky Way galaxy, there is only an infinitesimal chance that I am sitting in another galaxy.
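
As a minimal numerical sketch of this idea (the one-dimensional Gaussian wave function and all numbers are assumed for illustration, not taken from the book), one can integrate |psi|^2 over a region to get the probability of finding the particle there; near the centre the probability is large, while far away it is non-zero but vanishingly small:

```python
import numpy as np

# Hypothetical 1-D Gaussian wave function psi(x); sigma is an assumed width.
sigma = 1.0
x = np.linspace(-50.0, 50.0, 200001)
dx = x[1] - x[0]
psi = (np.pi * sigma**2) ** -0.25 * np.exp(-x**2 / (2.0 * sigma**2))

prob_density = np.abs(psi) ** 2          # |psi|^2, normalized to total probability 1

def prob_between(a, b):
    """Probability of finding the particle between x = a and x = b."""
    mask = (x >= a) & (x <= b)
    return np.sum(prob_density[mask]) * dx

print(prob_between(-1.0, 1.0))    # most of the probability is near the centre
print(prob_between(10.0, 50.0))   # the far "tail": non-zero but astronomically small
```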

Hawking's new idea was to treat the entire universe as though it were a quantum particle. By repeating some simple steps, we are led to some eye-opening conclusions.

We begin with a wave function describing the set of all possible universes. This means that the starting point of Hawking's theory must be an infinite set of parallel universes, the wave function of the universe. Hawking's rather simple analysis, replacing the word particle with universe, has led to a conceptual revolution in our thinking about cosmology.

According to this picture, the wave function of the universe spreads out over all possible universes. The wave function is assumed to be quite large near our own universe, so there is a good chance that our universe is the correct one, as we expect. However, the wave function spreads out over all other universes, even those that are lifeless and incompatible with the familiar laws of physics. Since the wave function is supposedly vanishingly small for these other universes, we do not expect that our universe will make a quantum leap to them in the near future.

The goal facing quantum cosmologists is to verify this conjecture mathematically, to show that the wave function of the universe is large for our present universe and vanishingly small for other universes. This would then prove that our familiar universe is in some sense unique and also stable. (At present, quantum cosmologists are unable to solve this important problem.)

If we take Hawking seriously, it means that we must begin our analysis with an infinite number of all possible universes, coexisting with one another. To put it bluntly, the definition of the word universe is no longer "all that exists." It now means "all that can exist." For example, in Figure 12.1 we see how the wave function of the universe can spread out over several possible universes, with our universe being the most likely one but certainly not the only one. Hawking's quantum cosmology also assumes that the wave function of the universe allows these universes to collide. Wormholes can develop and link these universes. However, these wormholes are not like the ones ... which connect different parts of three-dimensional space with itself - these wormholes connect different universes with one another.

[Figure 12.1] In Hawking's wave function of the universe, the wave function is most likely concentrated around our own universe. We live in our universe because it is the most likely, with the largest probability. However, there is a small but non-vanishing probability that the wave function prefers neighboring, parallel universes. Thus transitions between universes may be possible (although with very low probability).

Think, for example, of a large collection of soap bubbles, suspended in the air. Normally each soap bubble is like a universe unto itself, except that periodically it bumps into another bubble, forming a larger one, or splits into two smaller bubbles. The difference is that each soap bubble is now an entire ten-dimensional universe. Since space and time can exist only on each bubble, there is no such thing as space and time between the bubbles. Each universe has its own self-contained "time." It is meaningless to say that time passes at the same rate in all these universes. (We should, however, stress that (1) travel between these universes is not open to us because of our primitive technological level ... and (2) large quantum transitions on this scale are extremely rare, probably taking much longer than the lifetime of our universe.) Most of these universes are dead universes, devoid of any life. In these universes, the laws of physics were different, and hence the physical conditions that made life possible were not satisfied. Perhaps, among the billions of parallel universes, only one (ours) had the right set of physical laws to allow life.

Hawking's "baby universe" theory, although not a practical method of transportation, certainly raises philosophical and perhaps even religious questions.

Tuesday, June 12, 2018

Faraday's law of induction

From Wikipedia, the free encyclopedia

Faraday's law of induction is a basic law of electromagnetism predicting how a magnetic field will interact with an electric circuit to produce an electromotive force (EMF)—a phenomenon called electromagnetic induction. It is the fundamental operating principle of transformers, inductors, and many types of electrical motors, generators and solenoids.[1][2]

The Maxwell–Faraday equation is a generalization of Faraday's law, and is listed as one of Maxwell's equations.

History

A diagram of Faraday's iron ring apparatus. The changing magnetic flux of the left coil induces a current in the right coil.[3]
 
Faraday's disk, the first electric generator, a type of homopolar generator.

Electromagnetic induction was discovered independently by Michael Faraday in 1831 and Joseph Henry in 1832.[4] Faraday was the first to publish the results of his experiments.[5][6] In Faraday's first experimental demonstration of electromagnetic induction (August 29, 1831),[7] he wrapped two wires around opposite sides of an iron ring (torus) (an arrangement similar to a modern toroidal transformer). Based on his assessment of recently discovered properties of electromagnets, he expected that when current started to flow in one wire, a sort of wave would travel through the ring and cause some electrical effect on the opposite side. He plugged one wire into a galvanometer, and watched it as he connected the other wire to a battery. Indeed, he saw a transient current (which he called a "wave of electricity") when he connected the wire to the battery, and another when he disconnected it.[8] This induction was due to the change in magnetic flux that occurred when the battery was connected and disconnected.[3] Within two months, Faraday had found several other manifestations of electromagnetic induction. For example, he saw transient currents when he quickly slid a bar magnet in and out of a coil of wires, and he generated a steady (DC) current by rotating a copper disk near the bar magnet with a sliding electrical lead ("Faraday's disk").[9]

Michael Faraday explained electromagnetic induction using a concept he called lines of force. However, scientists at the time widely rejected his theoretical ideas, mainly because they were not formulated mathematically.[10] An exception was James Clerk Maxwell, who in 1861-2 used Faraday's ideas as the basis of his quantitative electromagnetic theory.[10][11][12] In Maxwell's papers, the time-varying aspect of electromagnetic induction is expressed as a differential equation which Oliver Heaviside referred to as Faraday's law even though it is different from the original version of Faraday's law, and does not describe motional EMF. Heaviside's version (see Maxwell–Faraday equation below) is the form recognized today in the group of equations known as Maxwell's equations.

Lenz's law, formulated by Emil Lenz in 1834,[13] describes "flux through the circuit", and gives the direction of the induced EMF and current resulting from electromagnetic induction (elaborated upon in the examples below).
Faraday's experiment showing induction between coils of wire: The liquid battery (right) provides a current which flows through the small coil (A), creating a magnetic field. When the coils are stationary, no current is induced. But when the small coil is moved in or out of the large coil (B), the magnetic flux through the large coil changes, inducing a current which is detected by the galvanometer (G).[14]

Faraday's law

Qualitative statement

The most widespread version of Faraday's law states:
The induced electromotive force in any closed circuit is equal to the negative of the time rate of change of the magnetic flux enclosed by the circuit.[15][16]
This version of Faraday's law strictly holds only when the closed circuit is a loop of infinitely thin wire,[17] and is invalid in other circumstances as discussed below. A different version, the Maxwell–Faraday equation (discussed below), is valid in all circumstances.

Quantitative

The definition of surface integral relies on splitting the surface Σ into small surface elements. Each element is associated with a vector dA of magnitude equal to the area of the element and with direction normal to the element and pointing "outward" (with respect to the orientation of the surface).

Faraday's law of induction makes use of the magnetic flux ΦB through a hypothetical surface Σ whose boundary is a wire loop. Since the wire loop may be moving, we write Σ(t) for the surface. The magnetic flux is defined by a surface integral:
\Phi_B = \iint_{\Sigma(t)} \mathbf{B}(\mathbf{r}, t) \cdot d\mathbf{A}\,,
where dA is an element of surface area of the moving surface Σ(t), B is the magnetic field (also called "magnetic flux density"), and B·dA is a vector dot product (the infinitesimal amount of magnetic flux through the infinitesimal area element dA). In more visual terms, the magnetic flux through the wire loop is proportional to the number of magnetic flux lines that pass through the loop.
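
As a minimal numerical sketch of this definition (all values are assumed for illustration and are not from the article), the flux through a flat circular loop in a uniform field can be computed both from the closed-form area formula and by summing B · dA over small surface elements:

```python
import numpy as np

# Assumed values for illustration only.
R = 0.05          # loop radius in metres
Bz = 0.2          # uniform magnetic field along z, in teslas

# Closed form for a flat loop perpendicular to B: Phi_B = Bz * (loop area)
phi_analytic = Bz * np.pi * R**2

# Numerical surface integral: sum B . dA over polar area elements dA = r dr dtheta,
# with every element's normal pointing along +z.
r = np.linspace(0.0, R, 400)
theta = np.linspace(0.0, 2.0 * np.pi, 400)
dr, dtheta = r[1] - r[0], theta[1] - theta[0]
rr, _ = np.meshgrid(r, theta)
phi_numeric = np.sum(Bz * rr * dr * dtheta)

print(phi_analytic, phi_numeric)   # the two values should agree closely
```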

When the flux changes—because B changes, or because the wire loop is moved or deformed, or both—Faraday's law of induction says that the wire loop acquires an EMF, \mathcal{E}, defined as the energy available from a unit charge that has travelled once around the wire loop.[17][18][19][20] Equivalently, it is the voltage that would be measured by cutting the wire to create an open circuit, and attaching a voltmeter to the leads.

Faraday's law states that the EMF is also given by the rate of change of the magnetic flux:
\mathcal{E} = -\frac{d\Phi_B}{dt},
where \mathcal{E} is the electromotive force (EMF) and ΦB is the magnetic flux.
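
As a simple worked example (the standard generator setup, not taken from this article), suppose a flat loop of area A rotates at angular frequency ω in a uniform field of magnitude B, so that the flux through it is

\Phi_B(t) = B A \cos(\omega t).

Faraday's law then gives an alternating EMF,

\mathcal{E}(t) = -\frac{d\Phi_B}{dt} = B A \omega \sin(\omega t),

whose amplitude grows with the field strength, the loop area, and the rotation rate.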

The direction of the electromotive force is given by Lenz's law.
The laws of induction of electric currents in mathematical form were established by Franz Ernst Neumann in 1845.[21]

Faraday's law contains the information about the relationships between both the magnitudes and the directions of its variables. However, the relationships between the directions are not explicit; they are hidden in the mathematical formula.

A Left Hand Rule for Faraday’s Law.
The sign of ΔΦB, the change in flux, is found based on the relationship between the magnetic field B, the area of the loop A, and the normal n to that area, as represented by the fingers of the left hand. If ΔΦB is positive, the direction of the EMF is the same as that of the curved fingers (yellow arrowheads). If ΔΦB is negative, the direction of the EMF is against the arrowheads.[22]

It is possible to find out the direction of the electromotive force (EMF) directly from Faraday’s law, without invoking Lenz's law. A left hand rule helps doing that, as follows:[22][23]
  • Align the curved fingers of the left hand with the loop (yellow line).
  • Stretch your thumb. The stretched thumb indicates the direction of n (brown), the normal to the area enclosed by the loop.
  • Find the sign of ΔΦB, the change in flux. Determine the initial and final fluxes (whose difference is ΔΦB) with respect to the normal n, as indicated by the stretched thumb.
  • If the change in flux, ΔΦB, is positive, the curved fingers show the direction of the electromotive force (yellow arrowheads).
  • If ΔΦB is negative, the direction of the electromotive force is opposite to the direction of the curved fingers (opposite to the yellow arrowheads).
For a tightly wound coil of wire, composed of N identical turns, each with the same ΦB, Faraday's law of induction states that[24][25]
\mathcal{E} = -N\frac{d\Phi_B}{dt},
where N is the number of turns of wire and ΦB is the magnetic flux through a single loop.
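
The sketch below (assumed numbers, not from the article) illustrates this N-turn form numerically: the flux per turn is taken to vary sinusoidally, and the EMF obtained by numerical differentiation is compared against the analytic derivative:

```python
import numpy as np

# Assumed parameters for illustration only.
N = 100                          # number of turns
Phi0 = 2e-4                      # peak flux per turn, in webers
omega = 2.0 * np.pi * 50.0       # 50 Hz angular frequency

t = np.linspace(0.0, 0.04, 2000)          # two periods of a 50 Hz signal
phi = Phi0 * np.cos(omega * t)            # flux through a single turn

emf_numeric = -N * np.gradient(phi, t)             # Faraday's law: E = -N dPhi/dt
emf_analytic = N * Phi0 * omega * np.sin(omega * t)

print(np.max(np.abs(emf_numeric - emf_analytic)))  # small discretization error
```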

Maxwell–Faraday equation

An illustration of the Kelvin–Stokes theorem with surface Σ, its boundary ∂Σ, and orientation n set by the right-hand rule.

The Maxwell–Faraday equation is a modification and generalisation of Faraday's law that states that a time-varying magnetic field will always accompany a spatially varying, non-conservative electric field, and vice versa. The Maxwell–Faraday equation is
\nabla \times \mathbf{E} = -\frac{\partial \mathbf{B}}{\partial t}
(in SI units) where ∇ × is the curl operator and again E(r, t) is the electric field and B(r, t) is the magnetic field. These fields can generally be functions of position r and time t.
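
As a hedged illustration of what this equation says (a constructed example, not from the article), one can pick a vector potential A, set B = ∇ × A and E = −∂A/∂t, and verify symbolically that ∇ × E + ∂B/∂t vanishes:

```python
import sympy as sp
from sympy.vector import CoordSys3D, curl

t = sp.symbols('t')
f = sp.Function('f')(t)          # arbitrary time-dependent field strength
N = CoordSys3D('N')

# Vector potential A = (f(t)/2) * (-y, x, 0), so that B = curl A = (0, 0, f(t))
A = sp.Rational(1, 2) * f * (-N.y * N.i + N.x * N.j)
B = curl(A)

# Induced electric field E = -dA/dt
E = -A.diff(t)

# Maxwell-Faraday equation: curl E + dB/dt should be the zero vector
residual = curl(E) + B.diff(t)
print(residual)
```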

The Maxwell–Faraday equation is one of the four Maxwell's equations, and therefore plays a fundamental role in the theory of classical electromagnetism. It can also be written in an integral form by the Kelvin–Stokes theorem:[26]
\oint_{\partial\Sigma} \mathbf{E} \cdot d\mathbf{l} = -\int_{\Sigma} \frac{\partial \mathbf{B}}{\partial t} \cdot d\mathbf{A}
where, as indicated in the figure:
Σ is a surface bounded by the closed contour ∂Σ,
E is the electric field, B is the magnetic field.
dl is an infinitesimal vector element of the contour ∂Σ,
dA is an infinitesimal vector element of the surface Σ; its direction is orthogonal to that surface patch and its magnitude is the area of an infinitesimal patch of surface.
Both dl and dA have a sign ambiguity; to get the correct sign, the right-hand rule is used, as explained in the article Kelvin–Stokes theorem. For a planar surface Σ, a positive path element dl of the curve ∂Σ is defined by the right-hand rule as one that points with the fingers of the right hand when the thumb points in the direction of the normal n to the surface Σ.

The integral around ∂Σ is called a path integral or line integral.

Notice that a nonzero path integral for E is different from the behavior of the electric field generated by charges. A charge-generated E-field can be expressed as the gradient of a scalar field that is a solution to Poisson's equation, and has a zero path integral. See gradient theorem.

The integral equation is true for any path ∂Σ through space, and any surface Σ for which that path is a boundary.

If the surface Σ is not changing in time, the equation can be rewritten:
\oint_{\partial\Sigma} \mathbf{E} \cdot d\mathbf{l} = -\frac{d}{dt}\int_{\Sigma} \mathbf{B} \cdot d\mathbf{A}.
The surface integral at the right-hand side is the explicit expression for the magnetic flux ΦB through Σ.

Proof of Faraday's law

The four Maxwell's equations (including the Maxwell–Faraday equation), along with the Lorentz force law, are a sufficient foundation to derive everything in classical electromagnetism.[17][18] Therefore, it is possible to "prove" Faraday's law starting with these equations.[27][28]

The starting point is the time-derivative of flux through an arbitrary, possibly moving surface in space Σ:
\frac{d\Phi_B}{dt} = \frac{d}{dt}\int_{\Sigma(t)} \mathbf{B}(t) \cdot d\mathbf{A}
(by definition). This total time derivative can be evaluated and simplified with the help of the Maxwell–Faraday equation, Gauss's law for magnetism, and some vector calculus.  The result is:
\frac{d\Phi_B}{dt} = -\oint_{\partial\Sigma} \left(\mathbf{E} + \mathbf{v}_{\mathbf{l}} \times \mathbf{B}\right) \cdot d\mathbf{l}\,,
where ∂Σ is the boundary of the surface Σ, and vl is the velocity of that boundary.

While this equation is true for any arbitrary moving surface Σ in space, it can be simplified further in the special case that ∂Σ is a loop of wire. In this case, we can relate the right-hand-side to EMF. Specifically, EMF is defined as the energy available per unit charge that travels once around the loop. Therefore, by the Lorentz force law,
\mathcal{E} = \oint \left(\mathbf{E} + \mathbf{v}_m \times \mathbf{B}\right) \cdot d\mathbf{l}
where \mathcal{E} is the EMF and vm is the material velocity, i.e. the velocity of the atoms that make up the circuit. If ∂Σ is a loop of wire, then vm = vl, and hence:
\frac{d\Phi_B}{dt} = -\mathcal{E}

EMF for non-thin-wire circuits

It is tempting to generalize Faraday's law to state that if ∂Σ is any arbitrary closed loop in space whatsoever, then the total time derivative of the magnetic flux through Σ equals the EMF around ∂Σ. This statement, however, is not always true—and not just for the obvious reason that EMF is undefined in empty space when no conductor is present. As noted in the previous section, Faraday's law is not guaranteed to work unless the velocity of the abstract curve ∂Σ matches the actual velocity of the material conducting the electricity.[30] The two examples illustrated below show that one often obtains incorrect results when the motion of ∂Σ is divorced from the motion of the material.[17]
One can analyze examples like these by taking care that the path ∂Σ moves with the same velocity as the material.[30] Alternatively, one can always correctly calculate the EMF by combining the Lorentz force law with the Maxwell–Faraday equation:[17][31]
\mathcal{E} = \int_{\partial\Sigma} \left(\mathbf{E} + \mathbf{v}_m \times \mathbf{B}\right) \cdot d\mathbf{l} = -\int_{\Sigma} \frac{\partial \mathbf{B}}{\partial t} \cdot d\mathbf{A} + \oint_{\partial\Sigma} \left(\mathbf{v}_m \times \mathbf{B}\right) \cdot d\mathbf{l}
where "it is very important to notice that (1) [vm] is the velocity of the conductor ... not the velocity of the path element dl and (2) in general, the partial derivative with respect to time cannot be moved outside the integral since the area is a function of time".[31]

Faraday's law and relativity

Two phenomena

Faraday's law is a single equation describing two different phenomena: the motional EMF generated by a magnetic force on a moving wire (see Lorentz force), and the transformer EMF generated by an electric force due to a changing magnetic field (due to the Maxwell–Faraday equation).

James Clerk Maxwell drew attention to this fact in his 1861 paper On Physical Lines of Force.[32] In the latter half of Part II of that paper, Maxwell gives a separate physical explanation for each of the two phenomena.

A reference to these two aspects of electromagnetic induction is made in some modern textbooks.[33] As Richard Feynman states:[17]
So the "flux rule" that the emf in a circuit is equal to the rate of change of the magnetic flux through the circuit applies whether the flux changes because the field changes or because the circuit moves (or both) ...

Yet in our explanation of the rule we have used two completely distinct laws for the two cases – v × B for "circuit moves" and ∇ × E = −∂tB for "field changes".

We know of no other place in physics where such a simple and accurate general principle requires for its real understanding an analysis in terms of two different phenomena.
— Richard P. Feynman, The Feynman Lectures on Physics

Einstein's view

Reflection on this apparent dichotomy was one of the principal paths that led Einstein to develop special relativity:
It is known that Maxwell's electrodynamics—as usually understood at the present time—when applied to moving bodies, leads to asymmetries which do not appear to be inherent in the phenomena. Take, for example, the reciprocal electrodynamic action of a magnet and a conductor.

The observable phenomenon here depends only on the relative motion of the conductor and the magnet, whereas the customary view draws a sharp distinction between the two cases in which either the one or the other of these bodies is in motion. For if the magnet is in motion and the conductor at rest, there arises in the neighbourhood of the magnet an electric field with a certain definite energy, producing a current at the places where parts of the conductor are situated.

But if the magnet is stationary and the conductor in motion, no electric field arises in the neighbourhood of the magnet. In the conductor, however, we find an electromotive force, to which in itself there is no corresponding energy, but which gives rise—assuming equality of relative motion in the two cases discussed—to electric currents of the same path and intensity as those produced by the electric forces in the former case.

Examples of this sort, together with unsuccessful attempts to discover any motion of the earth relative to the "light medium," suggest that the phenomena of electrodynamics as well as of mechanics possess no properties corresponding to the idea of absolute rest.
