
Sunday, May 28, 2023

Quantum vacuum state

From Wikipedia, the free encyclopedia

In quantum field theory, the quantum vacuum state (also called the quantum vacuum or vacuum state) is the quantum state with the lowest possible energy. Generally, it contains no physical particles. The term zero-point field is sometimes used as a synonym for the vacuum state of an individual quantized field.

According to present-day understanding of what is called the vacuum state or the quantum vacuum, it is "by no means a simple empty space". According to quantum mechanics, the vacuum state is not truly empty but instead contains fleeting electromagnetic waves and particles that pop into and out of the quantum field.

The QED vacuum of quantum electrodynamics (or QED) was the first vacuum of quantum field theory to be developed. QED originated in the 1930s, and in the late 1940s and early 1950s it was reformulated by Feynman, Tomonaga, and Schwinger, who jointly received the Nobel prize for this work in 1965. Today the electromagnetic interactions and the weak interactions are unified (at very high energies only) in the theory of the electroweak interaction.

The Standard Model is a generalization of the QED work to include all the known elementary particles and their interactions (except gravity). Quantum chromodynamics (or QCD) is the portion of the Standard Model that deals with strong interactions, and QCD vacuum is the vacuum of quantum chromodynamics. It is the object of study in the Large Hadron Collider and the Relativistic Heavy Ion Collider, and is related to the so-called vacuum structure of strong interactions.

Non-zero expectation value

If the quantum field theory can be accurately described through perturbation theory, then the properties of the vacuum are analogous to the properties of the ground state of a quantum mechanical harmonic oscillator, or more accurately, the ground state of a measurement problem. In this case the vacuum expectation value (VEV) of any field operator vanishes. For quantum field theories in which perturbation theory breaks down at low energies (for example, Quantum chromodynamics or the BCS theory of superconductivity) field operators may have non-vanishing vacuum expectation values called condensates. In the Standard Model, the non-zero vacuum expectation value of the Higgs field, arising from spontaneous symmetry breaking, is the mechanism by which the other fields in the theory acquire mass.

Energy

The vacuum state is associated with a zero-point energy, and this zero-point energy (equivalent to the lowest possible energy state) has measurable effects. In the laboratory, it may be detected as the Casimir effect. In physical cosmology, the energy of the cosmological vacuum appears as the cosmological constant. In fact, the energy of a cubic centimeter of empty space has been calculated figuratively to be one trillionth of an erg (or 0.6 eV). An outstanding requirement imposed on a potential Theory of Everything is that the energy of the quantum vacuum state must explain the physically observed cosmological constant.
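
As a quick sanity check of the figure quoted above, the unit conversion from erg to electronvolt can be carried out directly; the short Python sketch below assumes nothing beyond standard conversion factors.

    # Rough check of the quoted vacuum energy of one cubic centimeter of empty space.
    # Assumes only standard unit conversions; the 1e-12 erg figure is taken from the text above.
    ERG_PER_JOULE = 1e7                   # 1 J = 1e7 erg
    EV_PER_JOULE = 1 / 1.602176634e-19    # 1 J expressed in eV

    energy_erg = 1e-12                    # "one trillionth of an erg" per cubic centimeter
    energy_joule = energy_erg / ERG_PER_JOULE
    energy_ev = energy_joule * EV_PER_JOULE
    print(f"{energy_ev:.2f} eV")          # ~0.62 eV, consistent with the quoted 0.6 eV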

Symmetry

For a relativistic field theory, the vacuum is Poincaré invariant, which follows from Wightman axioms but can be also proved directly without these axioms. Poincaré invariance implies that only scalar combinations of field operators have non-vanishing VEV's. The VEV may break some of the internal symmetries of the Lagrangian of the field theory. In this case the vacuum has less symmetry than the theory allows, and one says that spontaneous symmetry breaking has occurred. See Higgs mechanism, standard model.

Non-linear permittivity

Quantum corrections to Maxwell's equations are expected to result in a tiny nonlinear electric polarization term in the vacuum, resulting in a field-dependent electrical permittivity ε deviating from the nominal value ε0 of vacuum permittivity. These theoretical developments are described, for example, in Dittrich and Gies. The theory of quantum electrodynamics predicts that the QED vacuum should exhibit a slight nonlinearity, so that in the presence of a very strong electric field the permittivity is increased by a tiny amount with respect to ε0. Ongoing experimental efforts address the prediction that a strong electric field would modify the effective permeability of free space, making it anisotropic with a value slightly below μ0 in the direction of the electric field and slightly exceeding μ0 in the perpendicular direction. The quantum vacuum exposed to an electric field thereby exhibits birefringence for an electromagnetic wave travelling in a direction other than that of the electric field. The effect is similar to the Kerr effect but without matter being present. This tiny nonlinearity can be interpreted in terms of virtual pair production. A characteristic electric field strength for which the nonlinearities become sizable is predicted to be enormous, about 1.3 × 10^18 V/m, known as the Schwinger limit; the equivalent Kerr constant has been estimated to be about 10^20 times smaller than the Kerr constant of water. Explanations for dichroism from particle physics, outside quantum electrodynamics, have also been proposed. Experimentally measuring such an effect is very difficult, and has not yet been successful.
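
For orientation, the Schwinger limit mentioned above can be estimated from fundamental constants as E_S = m_e²c³/(eħ); the sketch below is a back-of-the-envelope evaluation of that expression, not a result from any particular experiment.

    # Estimate of the Schwinger limit E_S = m_e^2 * c^3 / (e * hbar),
    # the field scale at which QED vacuum nonlinearities become sizable.
    m_e = 9.1093837e-31      # electron mass, kg
    c = 2.99792458e8         # speed of light, m/s
    e = 1.602176634e-19      # elementary charge, C
    hbar = 1.054571817e-34   # reduced Planck constant, J*s

    E_schwinger = m_e**2 * c**3 / (e * hbar)
    print(f"{E_schwinger:.2e} V/m")        # ~1.3e18 V/m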

Virtual particles

The presence of virtual particles can be rigorously based upon the non-commutation of the quantized electromagnetic fields. Non-commutation means that although the average values of the fields vanish in a quantum vacuum, their variances do not. The term "vacuum fluctuations" refers to the variance of the field strength in the minimal energy state, and is described picturesquely as evidence of "virtual particles". It is sometimes attempted to provide an intuitive picture of virtual particles, or variances, based upon the Heisenberg energy-time uncertainty principle:

ΔE Δt ≥ ħ/2

(with ΔE and Δt being the energy and time variations respectively; ΔE is the accuracy in the measurement of energy, Δt is the time taken in the measurement, and ħ is the reduced Planck constant) arguing along the lines that the short lifetime of virtual particles allows the "borrowing" of large energies from the vacuum and thus permits particle generation for short times. Although the phenomenon of virtual particles is accepted, this interpretation of the energy-time uncertainty relation is not universal. One issue is the use of an uncertainty relation limiting measurement accuracy as though a time uncertainty Δt determines a "budget" for borrowing energy ΔE. Another issue is the meaning of "time" in this relation, because energy and time (unlike position q and momentum p, for example) do not satisfy a canonical commutation relation (such as [q, p] = iħ). Various schemes have been advanced to construct an observable that has some kind of time interpretation and yet satisfies a canonical commutation relation with energy. The many approaches to the energy-time uncertainty principle are a long and continuing subject of study.
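
As a purely heuristic illustration of this picture (with the caveats above), one can estimate the timescale Δt ≈ ħ/(2ΔE) for a fluctuation whose energy equals the rest energy of an electron-positron pair; the Python sketch below simply evaluates that ratio.

    # Heuristic estimate only: lifetime of a fluctuation with energy equal to the rest
    # energy of an electron-positron pair, using dt ~ hbar / (2 * dE).
    hbar = 1.054571817e-34   # reduced Planck constant, J*s
    m_e = 9.1093837e-31      # electron mass, kg
    c = 2.99792458e8         # speed of light, m/s

    dE = 2 * m_e * c**2                          # ~1.6e-13 J (about 1.02 MeV)
    dt = hbar / (2 * dE)
    print(f"dE = {dE:.2e} J, dt = {dt:.1e} s")   # dt ~ 3e-22 s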

Physical nature of the quantum vacuum

According to Astrid Lambrecht (2002): "When one empties out a space of all matter and lowers the temperature to absolute zero, one produces in a Gedankenexperiment [thought experiment] the quantum vacuum state." According to Fowler & Guggenheim (1939/1965), the third law of thermodynamics may be precisely enunciated as follows:

It is impossible by any procedure, no matter how idealized, to reduce any assembly to the absolute zero in a finite number of operations.

Photon-photon interaction can occur only through interaction with the vacuum state of some other field, for example through the Dirac electron-positron vacuum field; this is associated with the concept of vacuum polarization. According to Milonni (1994): "... all quantum fields have zero-point energies and vacuum fluctuations." This means that there is a component of the quantum vacuum respectively for each component field (considered in the conceptual absence of the other fields), such as the electromagnetic field, the Dirac electron-positron field, and so on. According to Milonni (1994), some of the effects attributed to the vacuum electromagnetic field can have several physical interpretations, some more conventional than others. The Casimir attraction between uncharged conductive plates is often proposed as an example of an effect of the vacuum electromagnetic field. Schwinger, DeRaad, and Milton (1978) are cited by Milonni (1994) as validly, though unconventionally, explaining the Casimir effect with a model in which "the vacuum is regarded as truly a state with all physical properties equal to zero." In this model, the observed phenomena are explained as the effects of the electron motions on the electromagnetic field, called the source field effect. Milonni writes:

The basic idea here will be that the Casimir force may be derived from the source fields alone even in completely conventional QED, ...

Milonni provides a detailed argument that the measurable physical effects usually attributed to the vacuum electromagnetic field cannot be explained by that field alone, but require in addition a contribution from the self-energy of the electrons, or their radiation reaction. He writes: "The radiation reaction and the vacuum fields are two aspects of the same thing when it comes to physical interpretations of various QED processes including the Lamb shift, van der Waals forces, and Casimir effects."

This point of view is also stated by Jaffe (2005): "The Casimir force can be calculated without reference to vacuum fluctuations, and like all other observable effects in QED, it vanishes as the fine structure constant, α, goes to zero."

Notations

The vacuum state is written as |0⟩ or |⟩. The vacuum expectation value (see also Expectation value) of any field φ should be written as ⟨0|φ|0⟩.

X-ray emission spectroscopy

From Wikipedia, the free encyclopedia

X-ray emission spectroscopy (XES) is a form of X-ray spectroscopy in which X-ray line spectra are measured with a spectral resolution sufficient to analyze the impact of the chemical environment on the X-ray line energy and on branching ratios. This is done by exciting electrons out of their shell and then measuring the photons emitted as electrons recombine into the resulting core hole.

Fig.1: K-Beta Mainline and V2C

There are several types of XES. They can be categorized as non-resonant XES (XES), which includes Kβ measurements, valence-to-core (VtC/V2C) measurements, and Kα measurements, or as resonant XES (RXES or RIXS), which includes XAS+XES 2D measurements, high-resolution XAS, 2p3d RIXS, and Mössbauer-XES-combined measurements. In addition, soft X-ray emission spectroscopy (SXES) is used in determining the electronic structure of materials.

History

The first XES experiments were published by Lindh and Lundquist in 1924.

Fig.2: Energy Level Diagram, K-Lines

In these early studies, the authors utilized the electron beam of an X-ray tube to excite core electrons and obtain the Kβ-line spectra of sulfur and other elements. Three years later, Coster and Druyvesteyn performed the first experiments using photon excitation. Their work demonstrated that electron beams produce artifacts, thus motivating the use of X-ray photons for creating the core hole. Subsequent experiments were carried out with commercial X-ray spectrometers, as well as with high-resolution spectrometers.

While these early studies provided fundamental insights into the electronic configuration of small molecules, XES only came into broader use with the availability of high-intensity X-ray beams at synchrotron radiation facilities, which enabled the measurement of (chemically) dilute samples. In addition to the experimental advances, it is also the progress in quantum chemical computations that makes XES an intriguing tool for the study of the electronic structure of chemical compounds.

Henry Moseley, a British physicist, was the first to discover a relation between the Kα-lines and the atomic numbers of the probed elements. This was the birth of modern X-ray spectroscopy. Later these lines could be used in elemental analysis to determine the contents of a sample.
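
Moseley's relation for the Kα line is often quoted in the approximate textbook form E ≈ 10.2 eV × (Z − 1)²; the sketch below uses that approximation (an assumption for illustration, not a statement about Moseley's original data) to estimate a few Kα energies.

    # Approximate Moseley relation for K-alpha energies: E ~ 10.2 eV * (Z - 1)^2.
    # A textbook approximation, used here purely for illustration.
    def kalpha_energy_ev(z):
        return 10.2 * (z - 1) ** 2

    for element, z in (("Fe", 26), ("Cu", 29), ("Mo", 42)):
        print(element, round(kalpha_energy_ev(z)), "eV")
    # Fe ~6.4 keV, Cu ~8.0 keV, Mo ~17.1 keV -- close to the tabulated K-alpha energies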

William Lawrence Bragg later found a relation between the energy of a photon and its diffraction within a crystal. The formula he established says that an X-ray photon with a certain energy is diffracted at a precisely defined angle within a crystal.

Equipment

Analyzers

A special kind of monochromator is needed to diffract the radiation produced in X-ray sources, because X-rays have a refractive index n ≈ 1. Bragg derived the equation that describes X-ray (and neutron) diffraction when these particles pass through a crystal lattice (see X-ray diffraction): nλ = 2d sin θ.
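
As a small illustration of this equation, the Bragg angle for a given photon energy and analyzer crystal can be computed directly; the Si(111) d-spacing and the Cu Kα2 energy used below are example values chosen for the sketch, not a prescription for any particular instrument.

    import math

    # Bragg's law: n * lambda = 2 * d * sin(theta).
    # Example values (assumptions for illustration): Si(111) d-spacing, Cu K-alpha2 energy.
    HC_EV_NM = 1239.841984   # h*c in eV*nm

    def bragg_angle_deg(energy_ev, d_nm, order=1):
        """Bragg angle theta (degrees) for a photon of the given energy."""
        wavelength_nm = HC_EV_NM / energy_ev
        sin_theta = order * wavelength_nm / (2 * d_nm)
        if sin_theta > 1:
            raise ValueError("no reflection: wavelength too long for this d-spacing/order")
        return math.degrees(math.asin(sin_theta))

    print(bragg_angle_deg(8027.83, 0.3135))   # Cu K-alpha2 on Si(111): ~14.3 degrees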

For this purpose, "perfect crystals" have been produced in many shapes, depending on the geometry and energy range of the instrument. Although they are called perfect, there are miscuts within the crystal structure which lead to offsets of the Rowland plane. These offsets can be corrected by turning the crystal while observing a specific energy (for example, the Kα2-line of copper at 8027.83 eV). When the intensity of the signal is maximized, the photons diffracted by the crystal hit the detector in the Rowland plane. There will then be a slight offset in the horizontal plane of the instrument, which can be corrected by increasing or decreasing the detector angle.

In the von Hamos geometry, a cylindrically bent crystal disperses the radiation along its flat surface's plane and focuses it along its axis of curvature onto a line-like feature.

Fig.3: Rowland circle (Johann) with two orders

The spatially distributed signal is recorded with a position-sensitive detector at the crystal's focusing axis, providing the overall spectrum. Alternative wavelength-dispersive concepts have been proposed and implemented based on the Johansson geometry, which has the source positioned inside the Rowland circle, whereas an instrument based on the Johann geometry has its source placed on the Rowland circle.

X-ray sources

X-ray sources are produced for many different purposes, yet not every X-ray source can be used for spectroscopy. Commonly used sources for medical applications generally generate very "noisy" source spectra, because the cathode material used need not be very pure for those measurements. These contaminant lines must be eliminated as much as possible to get a good resolution in all of the energy ranges used.

For this purpose, normal X-ray tubes with highly pure tungsten, molybdenum, palladium, etc. are made. Except for the copper they are embedded in, they produce a relatively "white" spectrum. Another way of producing X-rays is with particle accelerators, where the X-rays arise from changes in the direction of moving charges in magnetic fields. Every time a moving charge changes direction, it gives off radiation of corresponding energy. In X-ray tubes this directional change occurs when the electron hits the metal target (anode); in synchrotrons it is the outer magnetic field that bends the electron onto a circular path.

There are many different kinds of X-ray tubes, and operators have to choose carefully depending on what is to be measured.

Modern spectroscopy and the importance of Kβ-lines in the 21st century

Today, XES is less used for elemental analysis; instead, measurements of Kβ-line spectra are becoming increasingly important, as the relation between these lines and the electronic structure of the ionized atom is understood in ever greater detail.

If a 1s core electron is excited into the continuum (out of the atom's energy levels in the MO picture), electrons of higher-energy orbitals need to lose energy and "fall" into the 1s hole that was created, in accordance with Hund's rule (Fig. 2). Those electron transfers happen with distinct probabilities (see Siegbahn notation).

Scientists noted that after ionisation of a 3d transition-metal atom that is bonded in some way, the Kβ-line intensities and energies shift with the oxidation state of the metal and with the species of ligand(s). This gave way to a new method in structural analysis:

By high-resolution scans of these lines, the exact energy level and structural configuration of a chemical compound can be determined. This is because there are only two major electron-transfer mechanisms, if we ignore every transfer not affecting valence electrons. Including the fact that chemical compounds of 3d transition metals can be either high-spin or low-spin, we get two mechanisms for each spin configuration.

These two spin configurations determine the general shape of the Kβ-mainlines, as seen in Figures 1 and 2, while the structural configuration of electrons within the compound causes different intensities, broadening, tailing, and piloting of the Kβ- and valence-to-core lines. Although this is quite a lot of information, these data have to be combined with absorption measurements of the so-called "pre-edge" region. Those measurements are called XANES (X-ray absorption near-edge structure).

Fig.4: XAS Measurement against HERFD

In synchrotron facilities those measurements can be done at the same time, yet the experimental setup is quite complex and needs exact, finely tuned crystal monochromators to diffract the tangential beam coming from the electron storage ring. The method is called HERFD, which stands for High Energy Resolution Fluorescence Detection. The collection method is unique in that, after a collection of all wavelengths coming from "the source", called I0, the beam is shone onto the sample holder with a detector behind it for the XANES part of the measurement. The sample itself starts to emit X-rays, and after those photons have been monochromatized they are collected, too. Most setups use at least three crystal monochromators. I0 is used in absorption measurements as part of the Beer-Lambert law in the equation

μd = ln(I0/I1)

where I1 is the intensity of transmitted photons. The resulting values for the extinction μd are wavelength-specific, which therefore creates a spectrum of the absorption. The spectrum produced from the combined data shows a clear advantage in that background radiation is almost completely eliminated while still giving an extremely well-resolved view of features on a given absorption edge (Fig. 4).
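
A minimal sketch of this extinction calculation is given below; the energies and count values are made-up illustration numbers, not data from any beamline.

    import numpy as np

    # Beer-Lambert extinction from incident (I0) and transmitted (I1) intensities,
    # as in the absorption part of the measurement described above.
    # The arrays below are invented illustration values, not real data.
    energies_ev = np.array([7100.0, 7110.0, 7120.0, 7130.0, 7140.0])
    i0 = np.array([1.00e6, 1.01e6, 0.99e6, 1.00e6, 1.02e6])   # incident counts
    i1 = np.array([8.1e5, 7.4e5, 3.9e5, 4.6e5, 5.0e5])        # transmitted counts

    extinction = np.log(i0 / i1)        # mu*d = ln(I0 / I1), one value per energy
    for energy, mu_d in zip(energies_ev, extinction):
        print(f"{energy:7.1f} eV   mu*d = {mu_d:.3f}")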

In the field of development of new catalysts for more efficient energy storage, production, and usage in the form of hydrogen fuel cells and new battery materials, research on the Kβ-lines is essential nowadays.

The exact spectral shape for specific oxidation states of metals is mostly known, yet newly produced chemical compounds with the potential of becoming a reasonable catalyst for electrolysis, for example, are measured every day.

Several countries support many different facilities all over the globe in this special field of science in the hope of clean, responsible, and cheap energy.

Soft x-ray emission spectroscopy

Soft X-ray emission spectroscopy (SXES) is an experimental technique for determining the electronic structure of materials.

Uses

X-ray emission spectroscopy (XES) provides a means of probing the occupied partial density of electronic states of a material. XES is element-specific and site-specific, making it a powerful tool for determining detailed electronic properties of materials.

Forms

Emission spectroscopy can take the form of either resonant inelastic X-ray emission spectroscopy (RIXS) or non-resonant X-ray emission spectroscopy (NXES). Both spectroscopies involve the photonic promotion of a core level electron, and the measurement of the fluorescence that occurs as the electron relaxes into a lower-energy state. The differences between resonant and non-resonant excitation arise from the state of the atom before fluorescence occurs.

In resonant excitation, the core electron is promoted to a bound state in the conduction band. Non-resonant excitation occurs when the incoming radiation promotes a core electron to the continuum. When a core hole is created in this way, it is possible for it to be refilled through one of several different decay paths. Because the core hole is refilled from the sample's high-energy free states, the decay and emission processes must be treated as separate dipole transitions. This is in contrast with RIXS, where the events are coupled, and must be treated as a single scattering process.

Properties

Soft X-rays have different optical properties than visible light and therefore experiments must take place in ultra high vacuum, where the photon beam is manipulated using special mirrors and diffraction gratings.

Gratings diffract each energy or wavelength present in the incoming radiation in a different direction. Grating monochromators allow the user to select the specific photon energy they wish to use to excite the sample. Diffraction gratings are also used in the spectrometer to analyze the photon energy of the radiation emitted by the sample.
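
The dispersion described here follows the grating equation; in one common sign convention it reads mλ = d(sin α − sin β), with both angles measured from the grating normal. The sketch below evaluates it for an assumed 1200 lines/mm grating at grazing incidence, values chosen only for illustration.

    import math

    # Grating equation (one common sign convention): m * lambda = d * (sin(alpha) - sin(beta)),
    # with alpha the incidence angle and beta the diffraction angle, both from the grating normal.
    # The 1200 lines/mm groove density and 85-degree incidence are illustrative assumptions.
    HC_EV_NM = 1239.841984   # h*c in eV*nm

    def diffraction_angle_deg(energy_ev, lines_per_mm, alpha_deg, order=1):
        d_nm = 1e6 / lines_per_mm                     # groove spacing in nm
        wavelength_nm = HC_EV_NM / energy_ev
        sin_beta = math.sin(math.radians(alpha_deg)) - order * wavelength_nm / d_nm
        return math.degrees(math.asin(sin_beta))

    print(diffraction_angle_deg(400.0, 1200, 85.0))   # ~83 degrees for a 400 eV photon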

Demarcation problem

From Wikipedia, the free encyclopedia

In philosophy of science and epistemology, the demarcation problem is the question of how to distinguish between science and non-science. It examines the boundaries between science, pseudoscience, and other products of human activity, such as art, literature, and beliefs. The debate continues after more than two millennia of dialogue among philosophers of science and scientists in various fields. The debate has consequences for what can be called "scientific" in fields such as education and public policy.

The ancient world

An early attempt at demarcation can be seen in the efforts of Greek natural philosophers and medical practitioners to distinguish their methods and their accounts of nature from the mythological or mystical accounts of their predecessors and contemporaries.

Aristotle described at length what was involved in having scientific knowledge of something. To be scientific, he said, one must deal with causes, one must use logical demonstration, and one must identify the universals which 'inhere' in the particulars of sense. But above all, to have science one must have apodictic certainty. It is the last feature which, for Aristotle, most clearly distinguished the scientific way of knowing.

— Larry Laudan, "The Demise of the Demarcation Problem" (1983)

G. E. R. Lloyd noted that there was a sense in which the groups engaged in various forms of inquiry into nature set out to "legitimate their own positions", laying "claim to a new kind of wisdom ... that purported to yield superior enlightenment, even superior practical effectiveness". Medical writers in the Hippocratic tradition maintained that their discussions were based on necessary demonstrations, a theme developed by Aristotle in his Posterior Analytics. One element of this polemic for science was an insistence on a clear and unequivocal presentation of arguments, rejecting the imagery, analogy, and myth of the old wisdom. Some of their claimed naturalistic explanations of phenomena have been found to be quite fanciful, with little reliance on actual observations.

Cicero's De Divinatione implicitly used five criteria of scientific demarcation that are also used by modern philosophers of science.

Logical positivism

Logical positivism, formulated in the 1920s, held that only statements about matters of fact or logical relations between concepts are meaningful. All other statements lack sense and are labelled "metaphysics" (see the verifiability theory of meaning also known as verificationism).

According to A. J. Ayer, metaphysicians make statements which claim to have "knowledge of a reality which [transcends] the phenomenal world". Ayer, a member of the Vienna Circle and a noted English logical-positivist, argued that making any statements about the world beyond one's immediate sense-perception is impossible. This is because even metaphysicians' first premises will necessarily begin with observations made through sense-perception.

Ayer implied that the line of demarcation is characterized as the place at which statements become "factually significant". To be "factually significant", a statement must be verifiable. In order to be verifiable, the statement must be verifiable in the observable world, or facts that can be induced from "derived experience". This is referred to as the "verifiability" criterion.

This distinction between science, which in the view of the Vienna Circle possessed empirically verifiable statements, and what they pejoratively called "metaphysics", which lacked such statements, can be seen as representing another aspect of the demarcation problem. Logical positivism is often discussed in the context of the demarcation between science and non-science or pseudoscience. However, "The verificationist proposals had the aim of solving a distinctly different demarcation problem, namely that between science and metaphysics."

Falsifiability

Karl Popper saw demarcation as a central problem in the philosophy of science. Popper articulates the problem of demarcation as:

The problem of finding a criterion which would enable us to distinguish between the empirical sciences on the one hand, and mathematics and logic as well as 'metaphysical' systems on the other, I call the problem of demarcation.

Falsifiability is the demarcation criterion proposed by Popper as opposed to verificationism: "statements or systems of statements, in order to be ranked as scientific, must be capable of conflicting with possible, or conceivable observations".

Against verifiability

Popper rejected solutions to the problem of demarcation that are grounded in inductive reasoning, and so rejected logical-positivist responses to the problem of demarcation. He argued that logical-positivists want to create a demarcation between the metaphysical and the empirical because they believe that empirical claims are meaningful and metaphysical ones are not. Unlike the Vienna Circle, Popper stated that his proposal was not a criterion of "meaningfulness".

Popper's demarcation criterion has been criticized both for excluding legitimate science ... and for giving some pseudosciences the status of being scientific ... According to Larry Laudan (1983, 121), it "has the untoward consequence of countenancing as 'scientific' every crank claim which makes ascertainably false assertions". Astrology, rightly taken by Popper as an unusually clear example of a pseudoscience, has in fact been tested and thoroughly refuted ... Similarly, the major threats to the scientific status of psychoanalysis, another of his major targets, do not come from claims that it is untestable but from claims that it has been tested and failed the tests.

— Sven Ove Hansson, The Stanford Encyclopedia of Philosophy, "Science and Pseudo-Science"

Popper argued that the Humean induction problem shows that there is no way to make meaningful universal statements on the basis of any number of empirical observations. Therefore, empirical statements are no more "verifiable" than metaphysical statements.

This creates a problem for the line of demarcation the positivists wanted to cleave between the empirical and the metaphysical. By their very own "verifiability criterion", Popper argued, the empirical is subsumed into the metaphysical, and the line of demarcation between the two becomes non-existent.

The solution of falsifiability

In Popper's later work, he stated that falsifiability is both a necessary and sufficient criterion for demarcation. He described falsifiability as a property of "the logical structure of sentences and classes of sentences", so that a statement's scientific or non-scientific status does not change over time. This has been summarized as a statement being falsifiable "if and only if it logically contradicts some (empirical) sentence that describes a logically possible event that it would be logically possible to observe".

Kuhnian postpositivism

Thomas Kuhn, an American historian and philosopher of science, is often connected with what has been called postpositivism or postempiricism. In his 1962 book The Structure of Scientific Revolutions, Kuhn divided the process of doing science into two different endeavors, which he called normal science and extraordinary science (sometimes called "revolutionary science"), the latter of which introduces a new "paradigm" that solves new problems while continuing to provide solutions to the problems solved by the preceding paradigm.

Finally, and this is for now my main point, a careful look at the scientific enterprise suggests that it is normal science, in which Sir Karl's sort of testing does not occur, rather than extraordinary science which most nearly distinguishes science from other enterprises. If a demarcation criterion exists (we must not, I think, seek a sharp or decisive one), it may lie just in that part of science which Sir Karl ignores.

— Thomas S. Kuhn, "Logic of Discovery or Psychology of Research?", in Criticism and the Growth of Knowledge (1970), edited by Imre Lakatos and Alan Musgrave

Kuhn's view of demarcation is most clearly expressed in his comparison of astronomy with astrology. Since antiquity, astronomy has been a puzzle-solving activity and therefore a science. If an astronomer's prediction failed, then this was a puzzle that he could hope to solve for instance with more measurements or with adjustments of the theory. In contrast, the astrologer had no such puzzles since in that discipline "particular failures did not give rise to research puzzles, for no man, however skilled, could make use of them in a constructive attempt to revise the astrological tradition" ... Therefore, according to Kuhn, astrology has never been a science.

— Sven Ove Hansson, "Science and Pseudo-Science", in the Stanford Encyclopedia of Philosophy

Popper criticized Kuhn's demarcation criterion, saying that astrologers are engaged in puzzle solving, and that therefore Kuhn's criterion recognized astrology as a science. He stated that Kuhn's criterion leads to a "major disaster ... [the] replacement of a rational criterion of science by a sociological one".

Feyerabend and Lakatos

Kuhn's work largely called into question Popper's demarcation, and emphasized the human, subjective quality of scientific change. Paul Feyerabend was concerned that the very question of demarcation was insidious: science itself had no need of a demarcation criterion, but instead some philosophers were seeking to justify a special position of authority from which science could dominate public discourse. Feyerabend argued that science does not in fact occupy a special place in terms of either its logic or method, and no claim to special authority made by scientists can be upheld. He argued that, within the history of scientific practice, no rule or method can be found that has not been violated or circumvented at some point in order to advance scientific knowledge. Both Imre Lakatos and Feyerabend suggest that science is not an autonomous form of reasoning, but is inseparable from the larger body of human thought and inquiry.

Thagard

Paul R. Thagard proposed another set of principles to try to overcome these difficulties, and argued that it is important for society to find a way of doing so. According to Thagard's method, a theory is not scientific if it satisfies two conditions:

  1. The theory has been less progressive than alternative theories over a long period of time, and faces many unsolved problems; and...
  2. The community of practitioners makes little attempt to develop the theory towards solutions of the problems, shows no concern for attempts to evaluate the theory in relation to others, and is selective in considering confirmations and disconfirmations.

Thagard specified that sometimes theories will spend some time as merely "unpromising" before they truly deserve the title of pseudoscience. He cited astrology as an example: it was stagnant compared to advances in physics during the 17th century, and only later became "pseudoscience" with the advent of alternative explanations provided by psychology during the 19th century.

Thagard also stated that his criteria should not be interpreted so narrowly as to allow willful ignorance of alternative explanations, or so broadly as to discount our modern science compared to science of the future. His definition is a practical one, which generally seeks to distinguish pseudoscience as areas of inquiry which are stagnant and without active scientific investigation.

Some historians' perspectives

Many historians of science are concerned with the development of science from its primitive origins; consequently they define science in sufficiently broad terms to include early forms of natural knowledge. In the article on science in the eleventh edition of the Encyclopædia Britannica, the scientist and historian William Cecil Dampier Whetham defined science as "ordered knowledge of natural phenomena and of the relations between them". In his study of Greek science, Marshall Clagett defined science as "first, the orderly and systematic comprehension, description and/or explanation of natural phenomena and, secondly, the [mathematical and logical] tools necessary for the undertaking". A similar definition appeared more recently in David Pingree's study of early science: "Science is a systematic explanation of perceived or imaginary phenomena, or else is based on such an explanation. Mathematics finds a place in science only as one of the symbolical languages in which scientific explanations may be expressed." These definitions tend to focus more on the subject matter of science than on its method and from these perspectives, the philosophical concern to establish a line of demarcation between science and non-science becomes "problematic, if not futile".

Laudan

Larry Laudan concluded, after examining various historical attempts to establish a demarcation criterion, that "philosophy has failed to deliver the goods" in its attempts to distinguish science from non-science—to distinguish science from pseudoscience. None of the past attempts would be accepted by a majority of philosophers nor, in his view, should they be accepted by them or by anyone else. He stated that many well-founded beliefs are not scientific and, conversely, many scientific conjectures are not well-founded. He also stated that demarcation criteria were historically used as machines de guerre in polemical disputes between "scientists" and "pseudo-scientists". Advancing a number of examples from everyday practice of football and carpentry and non-scientific scholarship such as literary criticism and philosophy, he saw the question of whether a belief is well-founded or not to be more practically and philosophically significant than whether it is scientific or not. In his judgment, the demarcation between science and non-science was a pseudo-problem that would best be replaced by focusing on the distinction between reliable and unreliable knowledge, without bothering to ask whether that knowledge is scientific or not. He would consign phrases like "pseudo-science" or "unscientific" to the rhetoric of politicians or sociologists.

After Laudan

Others have disagreed with Laudan. Sebastian Lutz, for example, argued that demarcation does not have to be a single necessary and sufficient condition as Laudan implied. Rather, Laudan's reasoning at the most establishes that there has to be one necessary criterion and one possibly different sufficient criterion.

Various typologies or taxonomies of sciences versus nonsciences, and reliable knowledge versus illusory knowledge, have been proposed. Ian Hacking, Massimo Pigliucci, and others have noted that the sciences generally conform to Ludwig Wittgenstein's concept of family resemblances.

Other critics have argued for multiple demarcation criteria, some suggesting that there should be one set of criteria for the natural sciences, another set of criteria for the social sciences, and claims involving the supernatural could have a set of pseudoscientific criteria.

Significance

Concerning science education, Michael D. Gordin wrote:

Every student in public or private schools takes several years of science, but only a small fraction of them pursue careers in the sciences. We teach the rest of them so much science so that they will appreciate what it means to be scientific – and, hopefully, become scientifically literate and apply some of those lessons in their lives. For such students, the myth of a bright line of demarcation is essential.

Discussions of the demarcation problem highlight the rhetoric of science and promote critical thinking. Citizens thinking critically, and expressing themselves with reasoned argument in policy discussion, contribute to enlightened democracy. For example, Gordin stated: "Demarcation remains essential for the enormously high political stakes of climate-change denial and other anti-regulatory fringe doctrines".

Philosopher Herbert Keuth [de] noted:

Perhaps the most important function of the demarcation between science and nonscience is to refuse political and religious authorities the right to pass binding judgments on the truth of certain statements of fact.

Concern for informed human nutrition sparked the following note in 1942:

If our boys and girls are to be exposed to the superficial and frequently ill-informed statements about science and medicine made over the radio and in the daily press, it is desirable, if not necessary, that some corrective in the form of accurate factual information be provided in the schools. Although this is not a plea that chemistry teachers should at once introduce the study of proteins into their curricula, it is a suggestion that they should at least inform themselves and become prepared to answer questions and counteract the effects of misinformation.

The demarcation problem has been compared to the problem of differentiating fake news from real news, which rose to prominence in the 2016 United States presidential election.

 

X-ray spectroscopy

From Wikipedia, the free encyclopedia

X-ray spectroscopy is a general term for several spectroscopic techniques for characterization of materials by using x-ray radiation.

Characteristic X-ray spectroscopy

When an electron from the inner shell of an atom is excited by the energy of a photon, it moves to a higher energy level. When it returns to the lower energy level, the energy it previously gained by the excitation is emitted as a photon with a wavelength that is characteristic of the element (there could be several characteristic wavelengths per element). Analysis of the X-ray emission spectrum produces qualitative results about the elemental composition of the specimen. Comparison of the specimen's spectrum with the spectra of samples of known composition produces quantitative results (after some mathematical corrections for absorption, fluorescence, and atomic number). Atoms can be excited by a high-energy beam of charged particles such as electrons (in an electron microscope, for example), protons (see PIXE) or a beam of X-rays (see X-ray fluorescence, XRF, or, recently, transmission XRT). These methods enable elements from the entire periodic table to be analysed, with the exception of H, He, and Li. In electron microscopy an electron beam excites X-rays; there are two main techniques for analysis of spectra of characteristic X-ray radiation: energy-dispersive X-ray spectroscopy (EDS) and wavelength-dispersive X-ray spectroscopy (WDS). In X-ray transmission (XRT), the equivalent atomic composition (Zeff) is captured based on the photoelectric and Compton effects.
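
As a toy illustration of the qualitative analysis described above, a measured peak energy can be compared with tabulated characteristic line energies; the rounded Kα values and the matching tolerance in the sketch below are illustrative assumptions, not a reference table.

    # Toy qualitative analysis: match measured peak energies against a small table of
    # characteristic K-alpha line energies (rounded, illustrative values in keV).
    KALPHA_KEV = {"Ti": 4.51, "Cr": 5.41, "Fe": 6.40, "Ni": 7.48, "Cu": 8.05, "Mo": 17.48}

    def identify(peak_kev, tolerance_kev=0.05):
        """Elements whose K-alpha energy lies within the tolerance of the measured peak."""
        return [el for el, e_kev in KALPHA_KEV.items() if abs(e_kev - peak_kev) <= tolerance_kev]

    for peak in (6.41, 8.04, 17.45):
        print(peak, "->", identify(peak))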

Energy-dispersive X-ray spectroscopy

In an energy-dispersive X-ray spectrometer, a semiconductor detector measures the energy of incoming photons. To maintain detector integrity and resolution it should be cooled with liquid nitrogen or by Peltier cooling. EDS is widely employed in electron microscopes (where imaging rather than spectroscopy is the main task) and in cheaper and/or portable XRF units.

Bragg X-ray Spectrometer

Wavelength-dispersive X-ray spectroscopy

In a wavelength-dispersive X-ray spectrometer, a single crystal diffracts the photons according to Bragg's law, and they are then collected by a detector. By moving the diffraction crystal and detector relative to each other, a wide region of the spectrum can be observed. To observe a large spectral range, three or four different single crystals may be needed. In contrast to EDS, WDS is a method of sequential spectrum acquisition. While WDS is slower than EDS and more sensitive to the positioning of the sample in the spectrometer, it has superior spectral resolution and sensitivity. WDS is widely used in microprobes (where X-ray microanalysis is the main task) and in XRF; it is widely used in the field of X-ray diffraction to calculate various data such as interplanar spacing and the wavelength of the incident X-ray using Bragg's law.

X-ray emission spectroscopy

The father-and-son scientific team of William Lawrence Bragg and William Henry Bragg, who were the 1915 Nobel Prize winners, were the original pioneers in developing X-ray emission spectroscopy. An example of a spectrometer developed by William Henry Bragg, which was used by both father and son to investigate the structure of crystals, can be seen at the Science Museum, London. Jointly they measured the X-ray wavelengths of many elements to high precision, using high-energy electrons as the excitation source. A cathode-ray tube or an X-ray tube was used to pass electrons through a crystal of numerous elements. They also painstakingly produced numerous diamond-ruled glass diffraction gratings for their spectrometers. The law of diffraction of a crystal is called Bragg's law in their honor.

Intense and wavelength-tunable X-rays are now typically generated with synchrotrons. In a material, the X-rays may suffer an energy loss compared to the incoming beam. This energy loss of the re-emerging beam reflects an internal excitation of the atomic system, an X-ray analogue to the well-known Raman spectroscopy that is widely used in the optical region.

In the X-ray region there is sufficient energy to probe changes in the electronic state (transitions between orbitals; this is in contrast with the optical region, where the energy loss is often due to changes in the state of the rotational or vibrational degrees of freedom). For instance, in the ultra soft X-ray region (below about 1 keV), crystal field excitations give rise to the energy loss.

The photon-in-photon-out process may be thought of as a scattering event. When the x-ray energy corresponds to the binding energy of a core-level electron, this scattering process is resonantly enhanced by many orders of magnitude. This type of X-ray emission spectroscopy is often referred to as resonant inelastic X-ray scattering (RIXS).

Due to the wide separation of orbital energies of the core levels, it is possible to select a certain atom of interest. The small spatial extent of core level orbitals forces the RIXS process to reflect the electronic structure in close vicinity of the chosen atom. Thus, RIXS experiments give valuable information about the local electronic structure of complex systems, and theoretical calculations are relatively simple to perform.

Instrumentation

There exist several efficient designs for analyzing an X-ray emission spectrum in the ultra soft X-ray region. The figure of merit for such instruments is the spectral throughput, i.e. the product of detected intensity and spectral resolving power. Usually, it is possible to change these parameters within a certain range while keeping their product constant.

Grating spectrometers

Usually X-ray diffraction in spectrometers is achieved on crystals, but in grating spectrometers, the X-rays emerging from a sample must pass a source-defining slit; optical elements (mirrors and/or gratings) then disperse them by diffraction according to their wavelength and, finally, a detector is placed at their focal points.

Spherical grating mounts

Henry Augustus Rowland (1848–1901) devised an instrument that allowed the use of a single optical element that combines diffraction and focusing: a spherical grating. Reflectivity of X-rays is low, regardless of the material used, and therefore grazing incidence upon the grating is necessary. X-ray beams impinging on a smooth surface at a glancing angle of incidence of a few degrees undergo total external reflection, which is taken advantage of to enhance the instrumental efficiency substantially.

Denote by R the radius of the spherical grating. Imagine a circle with half the radius R tangent to the center of the grating surface. This small circle is called the Rowland circle. If the entrance slit is anywhere on this circle, then a beam passing the slit and striking the grating will be split into a specularly reflected beam and beams of all diffraction orders that come into focus at certain points on the same circle.

Plane grating mounts

Similar to optical spectrometers, a plane grating spectrometer first needs optics that turns the divergent rays emitted by the x-ray source into a parallel beam. This may be achieved by using a parabolic mirror. The parallel rays emerging from this mirror strike a plane grating (with constant groove distance) at the same angle and are diffracted according to their wavelength. A second parabolic mirror then collects the diffracted rays at a certain angle and creates an image on a detector. A spectrum within a certain wavelength range can be recorded simultaneously by using a two-dimensional position-sensitive detector such as a microchannel photomultiplier plate or an X-ray sensitive CCD chip (film plates are also possible to use).

Interferometers

Instead of using the concept of multiple-beam interference that gratings produce, two rays may simply interfere. By recording the intensity of two such rays combined collinearly at some fixed point and changing their relative phase, one obtains an intensity spectrum as a function of path-length difference. One can show that this is equivalent to the Fourier transform of the spectrum as a function of frequency. The highest recordable frequency of such a spectrum depends on the minimum step size chosen in the scan, and the frequency resolution (i.e. how well a certain wave can be defined in terms of its frequency) depends on the maximum path-length difference achieved. The latter feature allows a much more compact design for achieving high resolution than for a grating spectrometer, because X-ray wavelengths are small compared to attainable path-length differences.
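
The Fourier-transform relationship can be demonstrated with a short numerical sketch: a synthetic two-line interferogram, sampled as a function of path-length difference, is transformed to recover the two wavelengths. The wavelengths, step size, and number of steps below are assumptions chosen only so the example stays clean.

    import numpy as np

    # Sketch of Fourier-transform spectroscopy: an interferogram recorded versus
    # path-length difference is Fourier transformed to recover the spectrum.
    # The two-line synthetic "source" below is an illustration, not real data.
    step = 1e-9                                   # 1 nm steps in path-length difference
    n_steps = 4800
    delta = np.arange(n_steps) * step             # path-length differences, in meters

    wavelengths = np.array([3.0e-9, 3.2e-9])      # two assumed soft X-ray wavelengths
    interferogram = sum(1 + np.cos(2 * np.pi * delta / wl) for wl in wavelengths)

    spectrum = np.abs(np.fft.rfft(interferogram - interferogram.mean()))
    spatial_freqs = np.fft.rfftfreq(n_steps, d=step)   # cycles per meter = 1 / wavelength
    top_two = spatial_freqs[np.argsort(spectrum)[-2:]]
    print("recovered wavelengths (nm):", np.sort(1.0 / top_two) * 1e9)   # ~[3.0, 3.2]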

Early history of X-ray spectroscopy in the U.S.

Philips Gloeilampen Fabrieken, headquartered in Eindhoven in the Netherlands, got its start as a manufacturer of light bulbs, but quickly evolved into one of the leading manufacturers of electrical apparatus, electronics, and related products, including X-ray equipment. It has also had one of the world's largest R&D labs. In 1940, the Netherlands was overrun by Hitler's Germany. The company was able to transfer a substantial sum of money to a company that it set up as an R&D laboratory in an estate in Irvington on the Hudson in New York. As an extension to its work on light bulbs, the Dutch company had developed a line of X-ray tubes for medical applications that were powered by transformers. These X-ray tubes could also be used in scientific X-ray instrumentation, but there was very little commercial demand for the latter. As a result, management decided to try to develop this market, and they set up development groups in their research labs in both Holland and the United States.

They hired Dr. Ira Duffendack, a professor at the University of Michigan and a world expert on infrared research, to head the lab and hire a staff. In 1951 he hired Dr. David Miller as Assistant Director of Research. Dr. Miller had done research on X-ray instrumentation at Washington University in St. Louis. Dr. Duffendack also hired Dr. Bill Parrish, a well-known researcher in X-ray diffraction, to head up the section of the lab on X-ray instrumental development. X-ray diffraction units were widely used in academic research departments to do crystal analysis. An essential component of a diffraction unit was a very accurate angle-measuring device known as a goniometer. Such units were not commercially available, so each investigator had to try to make their own. Dr. Parrish decided this would be a good device to use to generate an instrument market, so his group designed and learned how to manufacture a goniometer. This market developed quickly and, with the readily available tubes and power supplies, a complete diffraction unit was made available and was successfully marketed.

The U.S. management did not want the laboratory to be converted to a manufacturing unit so it decided to set up a commercial unit to further develop the X-ray instrumentation market. In 1953 Norelco Electronics was established in Mount Vernon, NY, dedicated to the sale and support of X-ray instrumentation. It included a sales staff, a manufacturing group, an engineering department and an applications lab. Dr. Miller was transferred from the lab to head up the engineering department. The sales staff sponsored three schools a year, one in Mount Vernon, one in Denver, and one in San Francisco. The week-long school curricula reviewed the basics of X-ray instrumentation and the specific application of Norelco products. The faculty were members of the engineering department and academic consultants. The schools were well attended by academic and industrial R&D scientists. The engineering department was also a new product development group. It added an X-ray spectrograph to the product line very quickly and contributed other related products for the next 8 years.

The applications lab was an essential sales tool. When the spectrograph was introduced as a quick and accurate analytical chemistry device, it was met with widespread skepticism. All research facilities had a chemistry department and analytical analysis was done by “wet chemistry” methods. The idea of doing this analysis by physics instrumentation was considered suspect. To overcome this bias, the salesman would ask a prospective customer for a task the customer was doing by “wet methods”. The task would be given to the applications lab and they would demonstrate how accurately and quickly it could be done using the X-ray units. This proved to be a very strong sales tool, particularly when the results were published in the Norelco Reporter, a technical journal issued monthly by the company with wide distribution to commercial and academic institutions.

An X-ray spectrograph consists of a high voltage power supply (50 kV or 100 kV), a broad band X-ray tube, usually with a tungsten anode and a beryllium window, a specimen holder, an analyzing crystal, a goniometer, and an X-ray detector device. These are arranged as shown in Fig. 1.

The continuous X-ray spectrum emitted from the tube irradiates the specimen and excites the characteristic spectral X-ray lines in the specimen. Each of the 92 elements emits a characteristic spectrum. Unlike the optical spectrum, the X-ray spectrum is quite simple. The strongest line, usually the Kα line, but sometimes the Lα line, suffices to identify the element. The existence of a particular line betrays the existence of an element, and the intensity is proportional to the amount of the particular element in the specimen. The characteristic lines are reflected from a crystal, the analyzer, at an angle that is given by the Bragg condition. The crystal samples all the diffraction angles θ by rotation, while the detector rotates over the corresponding angle 2θ. With a sensitive detector, the X-ray photons are counted individually. By stepping the detector along the angle, and leaving it in position for a known time, the number of counts at each angular position gives the line intensity. These counts may be plotted on a curve by an appropriate display unit. The characteristic X-rays come out at specific angles, and since the angular position for every X-ray spectral line is known and recorded, it is easy to find the sample's composition.
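
A minimal sketch of how such a scan is reduced is shown below: each 2θ position is converted to a wavelength with the Bragg condition. The LiF(200) analyzer d-spacing and the count values are assumptions for illustration, not data from the instrument described.

    import math

    # Convert a theta / 2-theta scan into wavelengths via the Bragg condition
    # lambda = 2 * d * sin(theta). The LiF(200) d-spacing (0.2014 nm) and the
    # counts are illustrative assumptions only.
    D_NM = 0.2014

    two_theta_deg = [20.0, 20.5, 21.0, 21.5, 22.0]
    counts = [120, 480, 2600, 510, 130]            # counts per angular step (made up)

    for two_theta, n in zip(two_theta_deg, counts):
        wavelength_nm = 2 * D_NM * math.sin(math.radians(two_theta / 2))
        print(f"2theta = {two_theta:5.1f} deg   lambda = {wavelength_nm:.4f} nm   counts = {n}")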

A chart for a scan of a molybdenum specimen is shown in Fig. 2. The tall peak on the left side is the characteristic Kα line at a 2θ of 12 degrees. Second- and third-order lines also appear.

Since the Kα line is often the only line of interest in many industrial applications, the final device in the Norelco X-ray spectrographic instrument line was the Autrometer. This device could be programmed to automatically read at any desired 2θ angle for any desired time interval.

Soon after the Autrometer was introduced, Philips decided to stop marketing X-ray instruments developed in both the U.S. and Europe and settled on offering only the Eindhoven line of instruments.

In 1961, during the development of the Autrometer, Norelco was given a sub-contract from the Jet Propulsion Lab. The Lab was working on the instrument package for the Surveyor spacecraft. The composition of the Moon's surface was of major interest, and the use of an X-ray detection instrument was viewed as a possible solution. Working with a power limit of 30 watts was very challenging, and a device was delivered, but it wasn't used. Later NASA developments did lead to an X-ray spectrographic unit that did make the desired Moon-soil analysis.

The Norelco efforts faded but the use of X-ray spectroscopy in units known as XRF instruments continued to grow. With a boost from NASA, units were finally reduced to handheld size and are seeing widespread use. Units are available from Bruker, Thermo Scientific, Elvatech Ltd. and SPECTRA.

Entropy (information theory)

From Wikipedia, the free encyclopedia https://en.wikipedia.org/wiki/Entropy_(information_theory) In info...