
Tuesday, January 13, 2015

Spacetime

From Wikipedia, the free encyclopedia

In physics, spacetime (also space–time, space time or space–time continuum) is any mathematical model that combines space and time into a single interwoven continuum. The spacetime of our universe is usually interpreted from a Euclidean space perspective, which regards space as consisting of three dimensions, and time as consisting of one dimension, the "fourth dimension". By combining space and time into a single manifold called Minkowski space, physicists have significantly simplified a large number of physical theories, as well as described in a more uniform way the workings of the universe at both the supergalactic and subatomic levels.

In non-relativistic classical mechanics, the use of Euclidean space instead of spacetime is appropriate, as time is treated as universal and constant, being independent of the state of motion of an observer. In relativistic contexts, time cannot be separated from the three dimensions of space, because the observed rate at which time passes for an object depends on the object's velocity relative to the observer and also on the strength of gravitational fields, which can slow the passage of time for an object as seen by an observer outside the field.

In cosmology, the concept of spacetime combines space and time into a single abstract universe. Mathematically it is a manifold consisting of "events" which are described by some type of coordinate system. Typically three spatial dimensions (length, width, height) and one temporal dimension (time) are required. Dimensions are independent components of a coordinate grid needed to locate a point in a certain defined "space". For example, on the globe the latitude and longitude are two independent coordinates which together uniquely determine a location. In spacetime, a coordinate grid that spans the 3+1 dimensions locates events (rather than just points in space); i.e., time is added as another dimension to the coordinate grid. This way the coordinates specify where and when events occur. However, the unified nature of spacetime and the freedom of coordinate choice it allows imply that to express the temporal coordinate in one coordinate system requires both temporal and spatial coordinates in another coordinate system. Unlike in ordinary spatial coordinates, there are still restrictions on how measurements can be made spatially and temporally (see Spacetime intervals). These restrictions correspond roughly to a particular mathematical model which differs from Euclidean space in its manifest symmetry.

Until the beginning of the 20th century, time was believed to be independent of motion, progressing at a fixed rate in all reference frames; however, later experiments revealed that time slows at higher speeds of the reference frame relative to another reference frame. Such slowing, called time dilation, is explained in special relativity theory. Many experiments have confirmed time dilation, such as the relativistic decay of muons from cosmic ray showers and the slowing of atomic clocks aboard a Space Shuttle relative to synchronized Earth-bound inertial clocks. The duration of time can therefore vary according to events and reference frames.
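As an illustration of the size of the effect, here is a minimal Python sketch (not part of the original article; the muon speed of 0.995c is an assumed, illustrative value, while the 2.2 microsecond proper lifetime is the well-known muon mean lifetime):

import math

def lorentz_factor(v, c=299_792_458.0):
    # gamma = 1 / sqrt(1 - v^2 / c^2); time intervals measured in the lab frame
    # are longer than the moving clock's proper time by this factor.
    return 1.0 / math.sqrt(1.0 - (v / c) ** 2)

v = 0.995 * 299_792_458.0          # assumed speed of a cosmic-ray muon
gamma = lorentz_factor(v)
print(f"gamma = {gamma:.2f}")                                      # about 10
print(f"2.2 us proper lifetime appears as {2.2 * gamma:.1f} us in the lab")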

When dimensions are understood as mere components of the grid system, rather than physical attributes of space, it is easier to understand the alternate dimensional views as being simply the result of coordinate transformations.

The term spacetime has taken on a generalized meaning beyond treating spacetime events with the normal 3+1 dimensions; broadly, it refers to any combination of space and time. Other proposed spacetime theories include additional dimensions, normally spatial, but there exist some speculative theories that include additional temporal dimensions and even some that include dimensions that are neither temporal nor spatial (e.g., superspace). How many dimensions are needed to describe the universe is still an open question. Speculative theories such as string theory predict 10 or 26 dimensions (with M-theory predicting 11 dimensions: 10 spatial and 1 temporal), but the existence of more than four dimensions would only appear to make a difference at the subatomic level.[1]

Spacetime in literature

The Incas regarded space and time as a single concept, referred to as pacha (Quechua: pacha, Aymara: pacha).[2][3] The peoples of the Andes maintain a similar understanding.[4]

Arthur Schopenhauer wrote in §18 of On the Fourfold Root of the Principle of Sufficient Reason (1813): "the representation of coexistence is impossible in Time alone; it depends, for its completion, upon the representation of Space; because, in mere Time, all things follow one another, and in mere Space all things are side by side; it is accordingly only by the combination of Time and Space that the representation of coexistence arises".

The idea of a unified spacetime is stated by Edgar Allan Poe in his essay on cosmology, Eureka (1848): "Space and duration are one". In 1895, in his novel The Time Machine, H. G. Wells wrote, "There is no difference between time and any of the three dimensions of space except that our consciousness moves along it", and that "any real body must have extension in four directions: it must have Length, Breadth, Thickness, and Duration".

Marcel Proust, in his novel Swann's Way (published 1913), describes the village church of his childhood's Combray as "a building which occupied, so to speak, four dimensions of space—the name of the fourth being Time".

Mathematical concept

In the Encyclopédie, under the entry "dimension", Jean le Rond d'Alembert speculated that duration (time) might be considered a fourth dimension if the idea were not too novel.[5]

Another early venture was by Joseph Louis Lagrange in his Theory of Analytic Functions (1797, 1813). He said, "One may view mechanics as a geometry of four dimensions, and mechanical analysis as an extension of geometric analysis".[6]

The ancient idea of the cosmos was gradually described mathematically with differential equations, differential geometry, and abstract algebra. These mathematical articulations blossomed in the nineteenth century as electrical technology stimulated men like Michael Faraday and James Clerk Maxwell to describe the reciprocal relations of electric and magnetic fields. Daniel Siegel phrased Maxwell's role in relativity as follows:
[...] the idea of the propagation of forces at the velocity of light through the electromagnetic field as described by Maxwell's equations—rather than instantaneously at a distance—formed the necessary basis for relativity theory.[7]
Maxwell used vortex models in his paper On Physical Lines of Force, but ultimately gave up on any substance but the electromagnetic field. Pierre Duhem wrote:
[Maxwell] was not able to create the theory that he envisaged except by giving up the use of any model, and by extending by means of analogy the abstract system of electrodynamics to displacement currents.[8]
In Siegel's estimation, "this very abstract view of the electromagnetic fields, involving no visualizable picture of what is going on out there in the field, is Maxwell's legacy."[9] Describing the behaviour of electric fields and magnetic fields led Maxwell to a unified view of an electromagnetic field. Being functions, these fields took values on a domain, a piece of spacetime. It is the intermingling of electric and magnetic manifestations, described by Maxwell's equations, that gives spacetime its structure. In particular, the rate of motion of an observer determines the electric and magnetic profiles of the electromagnetic field. The propagation of the field is determined by the electromagnetic wave equation, which also requires spacetime for its description.

In 1908, Hermann Minkowski described spacetime as an affine space with a quadratic form, now known as Minkowski space.[10] In his 1914 textbook The Theory of Relativity, Ludwik Silberstein used biquaternions to represent events in Minkowski space. He also exhibited the Lorentz transformations between observers of differing velocities as biquaternion mappings. Biquaternions had been described in 1853 by W. R. Hamilton, so while the physical interpretation was new, the mathematics was well known in the English-language literature, making relativity an instance of applied mathematics.

The first inkling of general relativity in spacetime was articulated by W. K. Clifford. The effect of gravitation on space and time was found to be most easily visualized as a "warp" or stretching in the geometrical fabric of space and time, changing smoothly from point to point across the spacetime fabric. In 1947 James Jeans provided a concise summary of the development of spacetime theory in his book The Growth of Physical Science.[11]

Basic concepts

Spacetimes are the arenas in which all physical events take place—an event is a point in spacetime specified by its time and place. For example, the motion of planets around the sun may be described in a particular type of spacetime, or the motion of light around a rotating star may be described in another type of spacetime. The basic elements of spacetime are events. In any given spacetime, an event is a unique position at a unique time. Because events are spacetime points, an example of an event in classical relativistic physics is (x,y,z,t), the location of an elementary (point-like) particle at a particular time. A spacetime itself can be viewed as the union of all events in the same way that a line is the union of all of its points, formally organized into a manifold, a space which can be described at small scales using coordinate systems.

A spacetime is independent of any observer.[12] However, in describing physical phenomena (which occur at certain moments of time in a given region of space), each observer chooses a convenient metrical coordinate system. Events are specified by four real numbers in any such coordinate system. The trajectories of elementary (point-like) particles through space and time are thus a continuum of events called the world line of the particle. Extended or composite objects (consisting of many elementary particles) are thus a union of many world lines twisted together by virtue of their interactions through spacetime into a "world-braid".

However, in physics, it is common to treat an extended object as a "particle" or "field" with its own unique (e.g., center of mass) position at any given time, so that the world line of a particle or light beam is the path that this particle or beam takes in the spacetime and represents the history of the particle or beam. The world line of the orbit of the Earth (in such a description) is depicted in two spatial dimensions x and y (the plane of the Earth's orbit) and a time dimension orthogonal to x and y.
The orbit of the Earth is an ellipse in space alone, but its world line is a helix in spacetime.[13]
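To make the helical world line concrete, here is a minimal Python sketch (not from the original article; it assumes an idealized circular orbit with illustrative values for the radius and period, and units chosen so that c = 1):

import math

def world_line_event(t, radius, period, c=1.0):
    # An event (ct, x, y) on the world line of a body in a circular orbit.
    # Sampled over time, these events trace a helix winding around the time axis.
    angle = 2.0 * math.pi * t / period
    return (c * t, radius * math.cos(angle), radius * math.sin(angle))

# Sample a few events along the helix (illustrative values).
for t in range(0, 10, 2):
    print(world_line_event(t, radius=1.0, period=8.0))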
The unification of space and time is exemplified by the common practice of selecting a metric (the measure that specifies the interval between two events in spacetime) such that all four dimensions are measured in terms of units of distance: representing an event as (x_0,x_1,x_2,x_3) = (ct,x,y,z) (in the Lorentz metric) or (x_1,x_2,x_3,x_4) = (x,y,z,ict) (in the original Minkowski metric) where c is the speed of light.[14] The metrical descriptions of Minkowski space and spacelike, lightlike, and timelike intervals given below follow this convention, as do the conventional formulations of the Lorentz transformation.
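As a concrete illustration of this convention, here is a minimal Python sketch (not part of the original article), using the Lorentz-metric ordering (ct, x, y, z) quoted above:

C = 299_792_458.0  # speed of light in metres per second

def event_in_distance_units(t, x, y, z):
    # Represent the event (t, x, y, z) as (x_0, x_1, x_2, x_3) = (ct, x, y, z),
    # so that all four coordinates carry units of distance (metres).
    return (C * t, x, y, z)

# One second after the origin event, three kilometres along x:
print(event_in_distance_units(1.0, 3000.0, 0.0, 0.0))   # (299792458.0, 3000.0, 0.0, 0.0)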

Spacetime intervals

In a Euclidean space, the separation between two points is measured by the distance between the two points. The distance is purely spatial, and is always positive. In spacetime, the separation between two events is measured by the invariant interval between the two events, which takes into account not only the spatial separation between the events, but also their temporal separation. The interval, s^2, between two events is defined as:

s^2 = \Delta r^2 - c^2\Delta t^2 \,   (spacetime interval),

where c is the speed of light, and Δr and Δt denote differences of the space and time coordinates, respectively, between the events. The choice of signs for s^2 above follows the space-like convention (−+++). The reason s^2 is called the interval and not s is that s^2 can be positive, zero or negative.
Spacetime intervals may be classified into three distinct types, based on whether the temporal separation (c^2 \Delta t^2) or the spatial separation (\Delta r^2) of the two events is greater: time-like, light-like or space-like.
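This classification can be stated as a short computation. The following is a minimal Python sketch (not part of the original article; the function names are my own), using the space-like sign convention s^2 = \Delta r^2 - c^2\Delta t^2 defined above:

C = 299_792_458.0  # speed of light in metres per second

def interval_squared(dt, dx, dy, dz):
    # Squared spacetime interval s^2 = dr^2 - c^2 dt^2 (space-like convention).
    dr_squared = dx**2 + dy**2 + dz**2
    return dr_squared - (C * dt) ** 2

def classify(s_squared):
    if s_squared > 0:
        return "space-like"
    if s_squared < 0:
        return "time-like"
    return "light-like"

# Two events one second apart in time but only one metre apart in space are time-like:
print(classify(interval_squared(1.0, 1.0, 0.0, 0.0)))   # time-like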

Certain types of world lines are called geodesics of the spacetime – straight lines in the case of Minkowski space and their closest equivalent in the curved spacetime of general relativity. In the case of purely time-like paths, geodesics are (locally) the paths of greatest separation (spacetime interval) as measured along the path between two events, whereas in Euclidean space and Riemannian manifolds, geodesics are paths of shortest distance between two points.[15][16] The concept of geodesics becomes central in general relativity, since geodesic motion may be thought of as "pure motion" (inertial motion) in spacetime, that is, free from any external influences.

Time-like interval

\begin{align}
  c^2\Delta t^2 &> \Delta r^2 \\
            s^2 &< 0
\end{align}
For two events separated by a time-like interval, enough time passes between them that there could be a cause–effect relationship between the two events. For a particle traveling through space at less than the speed of light, any two events which occur to or by the particle must be separated by a time-like interval. Event pairs with time-like separation define a negative squared spacetime interval (s^2 < 0) and may be said to occur in each other's future or past. There exists a reference frame such that the two events are observed to occur in the same spatial location, but there is no reference frame in which the two events can occur at the same time.

The measure of a time-like spacetime interval is described by the proper time, \Delta\tau:

\Delta\tau = \sqrt{\Delta t^2 - \frac{\Delta r^2}{c^2}}   (proper time).

The proper time interval would be measured by an observer with a clock traveling between the two events in an inertial reference frame, when the observer's path intersects each event as that event occurs. (The proper time defines a real number, since the interior of the square root is positive.)
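A minimal Python sketch of this formula (not from the original article; the coordinate differences used in the example are illustrative):

import math

C = 299_792_458.0  # speed of light in metres per second

def proper_time(dt, dx, dy, dz):
    # Proper time sqrt(dt^2 - dr^2 / c^2), in seconds, for a time-like separation.
    dr_squared = dx**2 + dy**2 + dz**2
    arg = dt**2 - dr_squared / C**2
    if arg <= 0:
        raise ValueError("events are not time-like separated")
    return math.sqrt(arg)

# One second of coordinate time and 150,000 km (about half a light-second) of spatial separation:
print(proper_time(1.0, 1.5e8, 0.0, 0.0))   # roughly 0.866 s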

Light-like interval

\begin{align}
  c^2\Delta t^2 &= \Delta r^2 \\
            s^2 &= 0
\end{align}
In a light-like interval, the spatial distance between two events is exactly balanced by the time between the two events. The events define a squared spacetime interval of zero (s^2 = 0). Light-like intervals are also known as "null" intervals.

Events which occur to or are initiated by a photon along its path (i.e., while traveling at c, the speed of light) all have light-like separation. Given one event, all those events which follow it at light-like intervals define the propagation of a light cone, and all the events which precede it at light-like intervals define a second (graphically inverted, which is to say "pastward") light cone.

Space-like interval

\begin{align}
  c^2\Delta t^2 &< \Delta r^2 \\
            s^2 &> 0
\end{align}
When a space-like interval separates two events, not enough time passes between their occurrences for there to exist a causal relationship crossing the spatial distance between the two events at the speed of light or slower. Generally, the events are considered not to occur in each other's future or past. There exists a reference frame such that the two events are observed to occur at the same time, but there is no reference frame in which the two events can occur in the same spatial location.

For these space-like event pairs with a positive squared spacetime interval (s^2 > 0), the measurement of space-like separation is the proper distance, \Delta\sigma:

\Delta\sigma = \sqrt{s^2} = \sqrt{\Delta r^2 - c^2\Delta t^2}   (proper distance).

Like the proper time of time-like intervals, the proper distance of space-like spacetime intervals is a real number.
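Similarly, a minimal Python sketch of the proper distance (not part of the original article; the example values are illustrative):

import math

C = 299_792_458.0  # speed of light in metres per second

def proper_distance(dt, dx, dy, dz):
    # Proper distance sqrt(dr^2 - c^2 dt^2), in metres, for a space-like separation.
    dr_squared = dx**2 + dy**2 + dz**2
    s_squared = dr_squared - (C * dt) ** 2
    if s_squared <= 0:
        raise ValueError("events are not space-like separated")
    return math.sqrt(s_squared)

# Events one microsecond apart in time but one kilometre apart in space are space-like:
print(proper_distance(1e-6, 1000.0, 0.0, 0.0))   # roughly 954 m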

Mathematics of spacetimes

For physical reasons, a spacetime continuum is mathematically defined as a four-dimensional, smooth, connected Lorentzian manifold (M,g). This means the smooth Lorentz metric g has signature (3,1). The metric determines the geometry of spacetime, as well as determining the geodesics of particles and light beams. About each point (event) on this manifold, coordinate charts are used to represent observers in reference frames. Usually, Cartesian coordinates (x, y, z, t) are used. Moreover, for simplicity's sake, units of measurement are usually chosen such that the speed of light c is equal to 1.

A reference frame (observer) can be identified with one of these coordinate charts; any such observer can describe any event p. Another reference frame may be identified by a second coordinate chart about p. Two observers (one in each reference frame) may describe the same event p but obtain different descriptions.

Usually, many overlapping coordinate charts are needed to cover a manifold. Given two coordinate charts, one containing p (representing an observer) and another containing q (representing another observer), the intersection of the charts represents the region of spacetime in which both observers can measure physical quantities and hence compare results. The relation between the two sets of measurements is given by a non-singular coordinate transformation on this intersection. The idea of coordinate charts as local observers who can perform measurements in their vicinity also makes good physical sense, as this is how one actually collects physical data—locally.

For example, two observers, one of whom is on Earth, but the other one who is on a fast rocket to Jupiter, may observe a comet crashing into Jupiter (this is the event p). In general, they will disagree about the exact location and timing of this impact, i.e., they will have different 4-tuples (x, y, z, t) (as they are using different coordinate systems). Although their kinematic descriptions will differ, dynamical (physical) laws, such as momentum conservation and the first law of thermodynamics, will still hold. In fact, relativity theory requires more than this in the sense that it stipulates these (and all other physical) laws must take the same form in all coordinate systems. This introduces tensors into relativity, by which all physical quantities are represented.

Geodesics are said to be time-like, null, or space-like if the tangent vector to one point of the geodesic is of this nature. Paths of particles and light beams in spacetime are represented by time-like and null (light-like) geodesics, respectively.

Topology

The assumptions contained in the definition of a spacetime are usually justified by the following considerations.
The connectedness assumption serves two main purposes. First, different observers making measurements (represented by coordinate charts) should be able to compare their observations on the non-empty intersection of the charts. If the connectedness assumption were dropped, this would not be possible. Second, for a manifold, the properties of connectedness and path-connectedness are equivalent, and one requires the existence of paths (in particular, geodesics) in the spacetime to represent the motion of particles and radiation.

Every spacetime is paracompact. This property, allied with the smoothness of the spacetime, gives rise to a smooth linear connection, an important structure in general relativity. Some important theorems on constructing spacetimes from compact and non-compact manifolds include the following:
  • A compact manifold can be turned into a spacetime if, and only if, its Euler characteristic is 0. (Proof idea: the existence of a Lorentzian metric is shown to be equivalent to the existence of a nonvanishing vector field.)
  • Any non-compact 4-manifold can be turned into a spacetime.[17]

Spacetime symmetries

Often in relativity, spacetimes that have some form of symmetry are studied. As well as helping to classify spacetimes, these symmetries usually serve as simplifying assumptions in specialized work.

Causal structure

The causal structure of a spacetime describes causal relationships between pairs of points in the spacetime based on the existence of certain types of curves joining the points.

Spacetime in special relativity

The geometry of spacetime in special relativity is described by the Minkowski metric on R4. This spacetime is called Minkowski space. The Minkowski metric is usually denoted by \eta and can be written as a four-by-four matrix:
\eta_{ab} = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & -1 & 0 & 0 \\ 0 & 0 & -1 & 0 \\ 0 & 0 & 0 & -1 \end{pmatrix}
where the Landau–Lifshitz space-like convention is being used. A basic assumption of relativity is that coordinate transformations must leave spacetime intervals invariant. Intervals are invariant under Lorentz transformations. This invariance property leads to the use of four-vectors (and other tensors) in describing physics.
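The invariance of intervals under Lorentz transformations can be checked numerically. Below is a minimal Python sketch (not from the original article; the event coordinates and the boost speed of 0.6c are arbitrary illustrative values, and units are chosen so that c = 1):

import math

def minkowski_interval(ct, x, y, z):
    # eta_ab x^a x^b with eta = diag(1, -1, -1, -1).
    return ct**2 - x**2 - y**2 - z**2

def boost_x(ct, x, y, z, beta):
    # Lorentz boost along x with speed v = beta * c (units with c = 1).
    gamma = 1.0 / math.sqrt(1.0 - beta**2)
    return (gamma * (ct - beta * x), gamma * (x - beta * ct), y, z)

event = (2.0, 1.0, 0.5, 0.0)             # (ct, x, y, z), illustrative values
boosted = boost_x(*event, beta=0.6)
print(minkowski_interval(*event))        # 2.75
print(minkowski_interval(*boosted))      # 2.75, unchanged up to rounding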

Strictly speaking, one can also consider events in Newtonian physics as a single spacetime. This is Galilean–Newtonian relativity, and the coordinate systems are related by Galilean transformations. However, since these preserve spatial and temporal distances independently, such a spacetime can be decomposed into spatial coordinates plus temporal coordinates, which is not possible in the general case.
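By contrast, a minimal Python sketch of a Galilean boost (not from the original article; the velocity and coordinates are illustrative) shows why such a spacetime decomposes: time differences are preserved exactly, and spatial distances between simultaneous events are preserved as well:

def galilean_boost(t, x, y, z, v):
    # Galilean boost along x: t' = t, x' = x - v * t.
    return (t, x - v * t, y, z)

# Two simultaneous events 5 m apart remain simultaneous and 5 m apart in the new frame.
e1 = galilean_boost(3.0, 10.0, 0.0, 0.0, v=4.0)
e2 = galilean_boost(3.0, 15.0, 0.0, 0.0, v=4.0)
print(e2[0] - e1[0], e2[1] - e1[1])   # 0.0 5.0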

Spacetime in general relativity

In general relativity, it is assumed that spacetime is curved by the presence of matter (energy), this curvature being represented by the Riemann tensor. In special relativity, the Riemann tensor is identically zero, and so this concept of "non-curvedness" is sometimes expressed by the statement Minkowski spacetime is flat.

The earlier discussed notions of time-like, light-like and space-like intervals in special relativity can similarly be used to classify one-dimensional curves through curved spacetime. A time-like curve can be understood as one where the interval between any two infinitesimally close events on the curve is time-like, and likewise for light-like and space-like curves. Technically the three types of curves are usually defined in terms of whether the tangent vector at each point on the curve is time-like, light-like or space-like. The world line of a slower-than-light object will always be a time-like curve, the world line of a massless particle such as a photon will be a light-like curve, and a space-like curve could be the world line of a hypothetical tachyon. In the local neighborhood of any event, time-like curves that pass through the event will remain inside that event's past and future light cones, light-like curves that pass through the event will be on the surface of the light cones, and space-like curves that pass through the event will be outside the light cones. One can also define the notion of a three-dimensional "spacelike hypersurface", a continuous three-dimensional "slice" through the four-dimensional spacetime with the property that every curve that is contained entirely within this hypersurface is a space-like curve.[18]

Many spacetime continua have physical interpretations which most physicists would consider bizarre or unsettling. For example, a compact spacetime has closed timelike curves, which violate our usual ideas of causality (that is, future events could affect past ones). For this reason, mathematical physicists usually consider only restricted subsets of all the possible spacetimes. One way to do this is to study "realistic" solutions of the equations of general relativity. Another way is to add some additional "physically reasonable" but still fairly general geometric restrictions and try to prove interesting things about the resulting spacetimes. The latter approach has led to some important results, most notably the Penrose–Hawking singularity theorems.

Quantized spacetime

In general relativity, spacetime is assumed to be smooth and continuous—and not just in the mathematical sense. In the theory of quantum mechanics, there is an inherent discreteness present in physics. In attempting to reconcile these two theories, it is sometimes postulated that spacetime should be quantized at the very smallest scales. Current theory is focused on the nature of spacetime at the Planck scale. Causal sets, loop quantum gravity, string theory, causal dynamical triangulation, and black hole thermodynamics all predict a quantized spacetime with agreement on the order of magnitude. Loop quantum gravity makes precise predictions about the geometry of spacetime at the Planck scale.

Pseudoscience

From Wikipedia, the free encyclopedia

Pseudoscience is a claim, belief or practice which is falsely presented as scientific, but does not adhere to a valid scientific method, lacks supporting scientific evidence or plausibility, cannot be reliably tested, or otherwise lacks scientific status.[1] Pseudoscience is often characterized by the use of vague, contradictory, exaggerated or unprovable claims, an over-reliance on confirmation rather than rigorous attempts at refutation, a lack of openness to evaluation by other experts, and a general absence of systematic processes to rationally develop theories.

A field, practice, or body of knowledge can reasonably be called pseudoscientific when it is presented as consistent with the norms of scientific research, but it demonstrably fails to meet these norms.[2] Science is also distinguishable from revelation, theology, or spirituality in that it offers insight into the physical world obtained by empirical research and testing.[3] Commonly held beliefs in popular science may not meet the criteria of science.[4] "Pop science" may blur the divide between science and pseudoscience among the general public, and may also involve science fiction.[4]
Pseudoscientific beliefs are widespread, even among public school science teachers and newspaper reporters.[5]

The demarcation problem between science and pseudoscience has ethical and political implications, as well as raising philosophical and scientific issues.[6] Differentiating science from pseudoscience has practical implications in the case of health care, expert testimony, environmental policies, and science education.[7] Distinguishing scientific facts and theories from pseudoscientific beliefs, such as those found in astrology, alchemy, medical quackery, and occult beliefs combined with scientific concepts, is part of science education and scientific literacy.[8]

The term pseudoscience is often considered inherently pejorative, because it suggests something is being inaccurately or even deceptively portrayed as science.[9] Accordingly, those labeled as practicing or advocating pseudoscience usually dispute the characterization.[9]

Overview

Scientific methodology

A typical 19th century phrenology chart: In the 1820s, phrenologists claimed the mind was located in areas of the brain, and were attacked for doubting that mind came from the nonmaterial soul. Their idea of reading "bumps" in the skull to predict personality traits was later discredited.[10] Phrenology was first called a pseudoscience in 1843 and continues to be considered so.[11]

While the standards for determining whether a body of knowledge, methodology, or practice is scientific can vary from field to field, a number of basic principles are widely agreed upon by scientists. The basic notion is that all experimental results should be reproducible and able to be verified by other individuals.[12] These principles aim to ensure experiments can be measurably reproduced under the same conditions, allowing further investigation to determine whether a hypothesis or theory related to given phenomena is both valid and reliable. Standards require the scientific method to be applied throughout, and bias to be controlled for or eliminated through randomization, fair sampling procedures, blinding of studies, and other methods. All gathered data, including the experimental or environmental conditions, are expected to be documented for scrutiny and made available for peer review, allowing further experiments or studies to be conducted to confirm or falsify results. Statistical quantification of significance, confidence, and error[13] is also an important tool for the scientific method.

Falsifiability

In the mid-20th century, Karl Popper put forth the criterion of falsifiability to distinguish science from nonscience.[14] Falsifiability means a result can be disproved. For example, a statement such as "God created the universe" may be true or false, but no tests can be devised that could prove it either way; it simply lies outside the reach of science. Popper used astrology and psychoanalysis as examples of pseudoscience and Einstein's theory of relativity as an example of science. He subdivided nonscience into philosophical, mathematical, mythological, religious and metaphysical formulations on one hand, and pseudoscientific formulations on the other, though he did not provide clear criteria for the differences.[15]

Another example which shows the distinct need for a claim to be falsifiable was put forth in Carl Sagan's The Demon-Haunted World, where he talks about an invisible dragon that he has in his garage. The point is made that there is no physical test to refute the claim of the presence of this dragon. No matter what test one proposes, there is then a reason why it does not apply to the invisible dragon, so one can never prove that the initial claim is wrong. Sagan concludes: "Now, what's the difference between an invisible, incorporeal, floating dragon who spits heatless fire and no dragon at all?" He states that "your inability to invalidate my hypothesis is not at all the same thing as proving it true",[16] once again explaining that even if such a claim were true, it would lie outside the realm of scientific inquiry.

Merton's norms

In 1942, Robert K. Merton identified a small set of "norms" which he held to characterize real science. If any of the norms were violated, Merton considered the enterprise to be nonscience. These norms are not broadly accepted in the scientific community. His norms were:
  • Originality: The tests and research done must present something new to the scientific community.
  • Detachment: The scientists' reasons for practicing this science must be simply for the expansion of their knowledge. The scientists should not have personal reasons to expect certain results.
  • Universality: No person should be able to more easily obtain the information of a test than another person. Social class, religion, ethnicity, or any other personal factors should not be factors in someone's ability to receive or perform a type of science.
  • Skepticism: Scientific facts must not be based on faith. One should always question every case and argument and constantly check for errors or invalid claims.
  • Public accessibility: Any scientific knowledge one obtains should be made available to everyone. The results of any research should be openly published and shared with the scientific community.[17]

Refusal to acknowledge problems

In 1978, Paul Thagard proposed that pseudoscience is primarily distinguishable from science when it is less progressive than alternative theories over a long period of time, and its proponents fail to acknowledge or address problems with the theory.[18] In 1983, Mario Bunge suggested the categories of "belief fields" and "research fields" to help distinguish between pseudoscience and science, where the former is primarily personal and subjective and the latter involves a certain systematic approach.[19]

Criticism of the term

Philosophers of science, such as Paul Feyerabend, have argued that a distinction between science and nonscience is neither possible nor desirable.[20][21] Among the issues which can make the distinction difficult are variable rates of evolution among the theories and methodologies of science in response to new data.[22] In addition, specific standards applicable to one field of science may not be applicable in other fields.

Larry Laudan has suggested pseudoscience has no scientific meaning and is mostly used to describe our emotions: "If we would stand up and be counted on the side of reason, we ought to drop terms like 'pseudo-science' and 'unscientific' from our vocabulary; they are just hollow phrases which do only emotive work for us".[23] Likewise, Richard McNally states, "The term 'pseudoscience' has become little more than an inflammatory buzzword for quickly dismissing one's opponents in media sound-bites" and "When therapeutic entrepreneurs make claims on behalf of their interventions, we should not waste our time trying to determine whether their interventions qualify as pseudoscientific. Rather, we should ask them: How do you know that your intervention works? What is your evidence?"[24]

Etymology

The word "pseudoscience" is derived from the Greek root pseudo meaning false and the English word science. Although the term has been in use since at least the late 18th century (used in 1796 in reference to alchemy,[25][26]) the concept of pseudoscience as distinct from real or proper science appears to have emerged in the mid-19th century. Among the first recorded uses of the word "pseudo-science" was in 1844 in the Northern Journal of Medicine, I 387: "That opposite kind of innovation which pronounces what has been recognized as a branch of science, to have been a pseudo-science, composed merely of so-called facts, connected together by misapprehensions under the disguise of principles". An earlier recorded use of the term was in 1843 by the French physiologist François Magendie.[11] During the 20th century, the word was used as a pejorative to describe explanations of phenomena which were claimed to be scientific, but which were not in fact supported by reliable experimental evidence. From time to time, though, the usage of the word occurred in a more formal, technical manner around a perceived threat to individual and institutional security in a social and cultural setting.[27]

History

The history of pseudoscience is the study of pseudoscientific theories over time. A pseudoscience is a set of ideas that presents itself as science, while it does not meet the criteria to properly be called such.[28][29]
Distinguishing between proper science and pseudoscience is sometimes difficult. One proposal for demarcation between the two is the falsification criterion, most notably attributed to the philosopher Karl Popper. In the history of science and the "history of pseudoscience" it can be especially hard to separate the two, because some sciences developed from pseudosciences. An example of this is the science of chemistry, which traces its origins to the pseudoscience of alchemy.

The vast diversity in pseudosciences further complicates the history of science. Some modern pseudosciences, such as astrology and acupuncture, originated before the scientific era. Others developed as part of an ideology, such as Lysenkoism, or as a response to perceived threats to an ideology. Examples are creation science and intelligent design, which were developed in response to the scientific theory of evolution.

Despite failing to meet proper scientific standards, many pseudosciences survive. This is usually due to a persistent core of devotees who refuse to accept scientific criticism of their beliefs, or due to popular misconceptions. Sheer popularity is also a factor, as is attested by astrology, which remains popular despite being rejected by a large majority of scientists.[30][31][32][33]

Identifying pseudoscience

A field, practice, or body of knowledge might reasonably be called pseudoscientific when it is presented as consistent with the norms of scientific research, but it demonstrably fails to meet these norms.[2]

Karl Popper stated it is insufficient to distinguish science from pseudoscience, or from metaphysics, by the criterion of rigorous adherence to the empirical method, which is essentially inductive, based on observation or experimentation.[34] He proposed a method to distinguish between genuine empirical, nonempirical or even pseudoempirical methods. The latter case was exemplified by astrology, which appeals to observation and experimentation. While it had astonishing empirical evidence based on observation, on horoscopes and biographies, it crucially failed to adhere to acceptable scientific standards.[34] Popper proposed falsifiability as an important criterion in distinguishing science from pseudoscience.

To demonstrate this point, Popper[34] gave two cases of human behavior and typical explanations from Freud's and Adler's theories: "that of a man who pushes a child into the water with the intention of drowning it; and that of a man who sacrifices his life in an attempt to save the child."[34] From Freud's perspective, the first man would have suffered from psychological repression, probably originating from an Oedipus complex, whereas the second had attained sublimation. From Adler's perspective, both men suffered from feelings of inferiority and needed to prove themselves, which drove the first to commit the crime and the second to rescue the child. Popper was not able to find any counterexamples of human behavior in which the behavior could not be explained in the terms of Adler's or Freud's theory. Popper argued[34] that the fact that the observation always fitted or confirmed the theory, rather than being its strength, was actually its weakness.

In contrast, Popper[34] gave the example of Einstein's gravitational theory, which predicted "light must be attracted by heavy bodies (such as the sun), precisely as material bodies were attracted."[34] Following from this, stars closer to the sun would appear to have moved a small distance away from the sun, and away from each other. This prediction was particularly striking to Popper because it involved considerable risk. The brightness of the sun prevented this effect from being observed under normal circumstances, so photographs had to be taken during an eclipse and compared to photographs taken at night. Popper states, "If observation shows that the predicted effect is definitely absent, then the theory is simply refuted."[34] Popper summed up his criterion for the scientific status of a theory as depending on its falsifiability, refutability, or testability.

Paul R. Thagard used astrology as a case study to distinguish science from pseudoscience and proposed principles and criteria to delineate them.[35] First, astrology has not progressed in that it has not been updated nor added any explanatory power since Ptolemy. Second, it has ignored outstanding problems such as the precession of equinoxes in astronomy. Third, alternative theories of personality and behavior have grown progressively to encompass explanations of phenomena which astrology statically attributes to heavenly forces. Fourth, astrologers have remained uninterested in furthering the theory to deal with outstanding problems or in critically evaluating the theory in relation to other theories. Thagard intended this criterion to be extended to areas other than astrology. He believed it would delineate as pseudoscientific such practices as witchcraft and pyramidology, while leaving physics, chemistry and biology in the realm of science. Biorhythms, which like astrology relied uncritically on birth dates, did not meet the criterion of pseudoscience at the time because there were no alternative explanations for the same observations. The use of this criterion has the consequence that a theory can at one time be scientific and at another pseudoscientific.[35]

Science is also distinguishable from revelation, theology, or spirituality in that it offers insight into the physical world obtained by empirical research and testing.[3] For this reason, the teaching of creation science and intelligent design has been strongly condemned in position statements from scientific organisations.[36] The most notable disputes concern the evolution of living organisms, the idea of common descent, the geologic history of the Earth, the formation of the solar system, and the origin of the universe.[37] Systems of belief that derive from divine or inspired knowledge are not considered pseudoscience if they do not claim either to be scientific or to overturn well-established science. Moreover, some specific religious claims, such as the power of intercessory prayer to heal the sick, although they may be based on untestable beliefs, can be tested by the scientific method.

Some statements and commonly held beliefs in popular science may not meet the criteria of science. "Pop" science may blur the divide between science and pseudoscience among the general public, and may also involve science fiction.[4] Indeed, pop science is disseminated to, and can also easily emanate from, persons not accountable to scientific methodology and expert peer review.

If the claims of a given field can be experimentally tested and methodological standards are upheld, it is not "pseudoscience", however odd, astonishing, or counterintuitive. If claims made are inconsistent with existing experimental results or established theory, but the methodology is sound, caution should be used; science consists of testing hypotheses which may turn out to be false. In such a case, the work may be better described as ideas that are "not yet generally accepted". Protoscience is a term sometimes used to describe a hypothesis that has not yet been adequately tested by the scientific method, but which is otherwise consistent with existing science or which, where inconsistent, offers a reasonable account of the inconsistency. It may also describe the transition from a body of practical knowledge into a scientific field.[38]

Pseudoscientific concepts

Examples of pseudoscience concepts, proposed as scientific when they are not scientific, include acupuncture, alchemy, ancient astronauts, applied kinesiology, astrology, Ayurvedic medicine, biorhythms, cellular memory, cold fusion,[39] craniometry, creation science, Scientology founder L. Ron Hubbard's engram theory, enneagrams, eugenics,[40] extrasensory perception (ESP), facilitated communication, graphology, homeopathy, intelligent design, iridology, kundalini, Lysenkoism, metoposcopy, N-rays, naturopathy, orgone energy, paranormal plant perception, phrenology, physiognomy, qi, New Age psychotherapies (e.g., rebirthing therapy), reflexology, remote viewing, neuro-linguistic programming (NLP), reiki, Rolfing, therapeutic touch, and the revised history of the solar system proposed by Immanuel Velikovsky.
Robert T. Carroll stated, in part, "Pseudoscientists claim to base their theories on empirical evidence, and they may even use some scientific methods, though often their understanding of a controlled experiment is inadequate. Many pseudoscientists relish being able to point out the consistency of their ideas with known facts or with predicted consequences, but they do not recognize that such consistency is not proof of anything. It is a necessary condition but not a sufficient condition that a good scientific theory be consistent with the facts."[41]

In 2006, the U.S. National Science Foundation (NSF) issued an executive summary of a paper on science and engineering which briefly discussed the prevalence of pseudoscience in modern times. It said that "belief in pseudoscience is widespread" and, referencing a Gallup poll,[42] stated that the 10 commonly believed examples of paranormal phenomena listed in the poll were "pseudoscientific beliefs".[43] The items were "extrasensory perception (ESP), that houses can be haunted, ghosts, telepathy, clairvoyance, astrology, that people can communicate mentally with someone who has died, witches, reincarnation, and channelling".[43] Such beliefs in pseudoscience reflect a lack of knowledge of how science works. The scientific community may aim to communicate information about science out of concern for the public's susceptibility to unproven claims.[43]

The following are some of the indicators of the possible presence of pseudoscience.

Use of vague, exaggerated or untestable claims

  • Assertion of scientific claims that are vague rather than precise, and that lack specific measurements[44]
  • Failure to make use of operational definitions (i.e. publicly accessible definitions of the variables, terms, or objects of interest so that persons other than the definer can independently measure or test them)[45] (See also: Reproducibility)
  • Failure to make reasonable use of the principle of parsimony, i.e. failing to seek an explanation that requires the fewest possible additional assumptions when multiple viable explanations are possible (see: Occam's razor)[46]
  • Use of obscurantist language, and use of apparently technical jargon in an effort to give claims the superficial trappings of science
  • Lack of boundary conditions: Most well-supported scientific theories possess well-articulated limitations under which the predicted phenomena do and do not apply.[47]
  • Lack of effective controls, such as placebo and double-blind, in experimental design
  • Lack of understanding of basic and established principles of physics and engineering[48]

Over-reliance on confirmation rather than refutation

  • Assertions that do not allow the logical possibility that they can be shown to be false by observation or physical experiment (see also: Falsifiability)[49]
  • Assertion of claims that a theory predicts something that it has not been shown to predict.[50] Scientific claims that do not confer any predictive power are considered at best "conjectures", or at worst "pseudoscience" (e.g. Ignoratio elenchi)[51]
  • Assertion that claims which have not been proven false must be true, and vice versa (see: Argument from ignorance)[52]
  • Over-reliance on testimonial, anecdotal evidence, or personal experience: This evidence may be useful for the context of discovery (i.e. hypothesis generation), but should not be used in the context of justification (e.g. Statistical hypothesis testing).[53]
  • Presentation of data that seems to support claims while suppressing or refusing to consider data that conflict with those claims.[54] This is an example of selection bias, a distortion of evidence or data that arises from the way that the data are collected. It is sometimes referred to as the selection effect.
  • Elevating to the status of facts excessive or untested claims that have been previously published elsewhere; an accumulation of such uncritical secondary reports, which do not otherwise contribute their own empirical investigation, is called the Woozle effect.[55]
  • Reversed burden of proof: science places the burden of proof on those making a claim, not on the critic. "Pseudoscientific" arguments may neglect this principle and demand that skeptics demonstrate beyond a reasonable doubt that a claim (e.g. an assertion regarding the efficacy of a novel therapeutic technique) is false. It is essentially impossible to prove a universal negative, so this tactic incorrectly places the burden of proof on the skeptic rather than on the claimant.[56]
  • Appeals to holism as opposed to reductionism: proponents of pseudoscientific claims, especially in organic medicine, alternative medicine, naturopathy and mental health, often resort to the "mantra of holism" to dismiss negative findings.[57]

Lack of openness to testing by other experts

  • Evasion of peer review before publicizing results (called "science by press conference"):[58] Some proponents of ideas that contradict accepted scientific theories avoid subjecting their ideas to peer review, sometimes on the grounds that peer review is biased towards established paradigms, and sometimes on the grounds that assertions cannot be evaluated adequately using standard scientific methods. By remaining insulated from the peer review process, these proponents forgo the opportunity of corrective feedback from informed colleagues.[59]
  • Some agencies, institutions, and publications that fund scientific research require authors to share data so others can evaluate a paper independently. Failure to provide adequate information for other researchers to reproduce the claims contributes to a lack of openness.[60]
  • Appealing to the need for secrecy or proprietary knowledge when an independent review of data or methodology is requested[60]
  • Substantive debate on the evidence by knowledgeable proponents of all viewpoints is not encouraged.[61]

Absence of progress

  • Failure to progress towards additional evidence of its claims.[62] Terence Hines has identified astrology as a subject that has changed very little in the past two millennia.[63] (see also: scientific progress)
  • Lack of self-correction: scientific research programmes make mistakes, but they tend to eliminate these errors over time.[64] By contrast, ideas may be regarded as pseudoscientific because they have remained unaltered despite contradictory evidence. The book Scientists Confront Velikovsky (1976, Cornell University) also delves into these features in some detail, as does the work of Thomas Kuhn, e.g., The Structure of Scientific Revolutions (1962), which also discusses some of the items on the list of characteristics of pseudoscience.
  • Statistical significance of supporting experimental results does not improve over time but instead remains close to the cutoff for statistical significance. Normally, experimental techniques improve or the experiments are repeated, and this gives ever stronger evidence. If statistical significance does not improve, this typically shows the experiments have just been repeated until a success occurs due to chance variations (see the sketch after this list).
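To illustrate why results hovering at the cutoff are a warning sign, the following is a hypothetical Python sketch (not from the source text): repeatedly testing an effect that does not exist will eventually produce a "significant" result by chance, and such a result tends to land just below the 0.05 cutoff rather than strengthening as data accumulate.

import math
import random

random.seed(1)

def p_value_null_experiment(n=30):
    # Simulate one experiment on a non-existent effect (true mean 0, unit variance)
    # and return the two-sided p-value of a z-test for "the mean differs from 0".
    data = [random.gauss(0.0, 1.0) for _ in range(n)]
    mean = sum(data) / n
    z = mean / (1.0 / math.sqrt(n))
    return math.erfc(abs(z) / math.sqrt(2.0))

# Repeat the experiment until a "significant" result appears purely by chance.
for attempt in range(1, 1001):
    p = p_value_null_experiment()
    if p < 0.05:
        print(f'"success" on attempt {attempt} with p = {p:.3f}')
        break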

Personalization of issues

  • Tight social groups and authoritarian personality, suppression of dissent, and groupthink can enhance the adoption of beliefs that have no rational basis. In attempting to confirm their beliefs, the group tends to identify their critics as enemies.[65]
  • Assertion of claims of a conspiracy on the part of the scientific community to suppress the results[66]
  • Attacking the motives or character of anyone who questions the claims (see Ad hominem fallacy)[67]

Use of misleading language

  • Creating scientific-sounding terms to add weight to claims and persuade nonexperts to believe statements that may be false or meaningless: For example, a long-standing hoax refers to water by the rarely used formal name "dihydrogen monoxide" and describes it as the main constituent in most poisonous solutions to show how easily the general public can be misled.
  • Using established terms in idiosyncratic ways, thereby demonstrating unfamiliarity with mainstream work in the discipline

Demographics

In his book The Demon-Haunted World, Carl Sagan discusses the Chinese government's and the Chinese Communist Party's concern about Western pseudoscience developments and about certain ancient Chinese practices in China. He sees pseudoscience occurring in the U.S. as part of a worldwide trend and suggests its causes, dangers, diagnosis and treatment may be universal.[68] In Spain, the science writer Luis Alfonso Gámez was sued after he publicly pointed out the lack of evidence supporting the claims of a popular pseudoscientist. In the US, 54% of the population believe in psychic healing and 35% believe in telepathy. In Europe, the statistics are not much different: a significant percentage of Europeans consider homeopathy (34%) and horoscopes (13%) to be reliable science.[69] Over the past decade, consumer interest in the use of complementary and alternative medicine (CAM) practices and products has increased. Surveys demonstrate that the people with the most serious medical conditions, such as cancer, chronic pain, and HIV, are the most routine consumers of CAM.[69]

The National Science Foundation stated that pseudoscientific beliefs in the U.S. became more widespread during the 1990s, peaked near 2001, and have declined slightly since then, though they remain common. According to the NSF report, there is a lack of knowledge of pseudoscientific issues in society and pseudoscientific practices are commonly followed.[70] Surveys indicate about a third of all adult Americans consider astrology to be scientific.[71][72][73]

A large percentage of the United States population lacks scientific literacy, not adequately understanding scientific principles and methodology.[43][74][75][76] In the Journal of College Science Teaching, Art Hobson writes, "Pseudoscientific beliefs are surprisingly widespread in our culture even among public school science teachers and newspaper editors, and are closely related to scientific illiteracy."[5] However, a 10,000 student study in the same journal concluded there was no strong correlation between science knowledge and belief in pseudoscience.[77]

Explanations

In a report, Singer and Benassi (1981) wrote that pseudoscientific beliefs have their origin in at least four sources.[78]
  • Common cognitive errors from personal experience
  • Erroneous sensationalistic mass media coverage
  • Sociocultural factors
  • Poor or erroneous science education
Another American study (Eve and Dunn, 1990) supported the findings of Singer and Benassi and found sufficient levels of pseudoscientific belief being promoted by high school life science and biology teachers.[79]

Psychology

The psychology of pseudoscience aims to explore and analyze pseudoscientific thinking by clarifying the distinction between what is considered scientific and what is considered pseudoscientific.
The human proclivity for seeking confirmation rather than refutation (confirmation bias),[80] the tendency to hold comforting beliefs, and the tendency to overgeneralize have been proposed as reasons for the common adherence to pseudoscientific thinking. According to Beyerstein (1991), humans are prone to associations based on resemblances only, and often prone to misattribution in cause-effect thinking.[81]

Michael Shermer's theory of belief-dependent realism is driven by the belief that the brain is essentially a "belief engine," which scans data perceived by the senses and looks for patterns and meaning. There is also the tendency for the brain to create cognitive biases, as a result of inferences and assumptions made without logic and based on instinct — usually resulting in patterns in cognition. These tendencies of patternicity and agenticity are also driven "by a meta-bias called the bias blind spot, or the tendency to recognize the power of cognitive biases in other people but to be blind to their influence on our own beliefs." [82] Lindeman states that social motives (i.e., "to comprehend self and the world, to have a sense of control over outcomes, to belong, to find the world benevolent and to maintain one's self-esteem") are often "more easily" fulfilled by pseudoscience than by scientific information. Furthermore, pseudoscientific explanations are generally not analyzed rationally, but instead experientially. Operating within a different set of rules compared to rational thinking, experiential thinking regards an explanation as valid if the explanation is "personally functional, satisfying and sufficient", offering a description of the world that may be more personal than can be provided by science and reducing the amount of potential work involved in understanding complex events and outcomes.[83]

Some people believe the prevalence of pseudoscientific beliefs is due to widespread "scientific illiteracy".[84] Individuals lacking scientific literacy are more susceptible to wishful thinking, since they are likely to turn to immediate gratification powered by System 1, our default operating system, which requires little to no effort. This system encourages one to accept the conclusions one believes and to reject the ones one does not. Further analysis of complex pseudoscientific phenomena requires System 2, which follows rules, compares objects along multiple dimensions, and weighs options. These two systems have several other differences, which are further discussed in dual-process theory. The scientific and secular systems of morality and meaning are generally unsatisfying to most people. Humans are, by nature, a forward-minded species pursuing greater avenues of happiness and satisfaction, but we are all too frequently willing to grasp at unrealistic promises of a better life.[85]

Psychology has much to say about pseudoscientific thinking, as it is the illusory perception of causality and effectiveness in numerous individuals that needs to be illuminated. Research suggests that illusory thinking occurs in most people when they are exposed to certain circumstances, such as reading a book, an advertisement, or the testimony of others, and that such illusions form the basis of pseudoscientific beliefs. It is assumed that illusions are not unusual and that, given the right conditions, they can occur systematically even in normal emotional situations. One of the things believers in pseudoscience complain about most is that academic science usually treats them as fools. Minimizing these illusions in the real world is not simple.[69] To this end, designing evidence-based educational programs can help people identify and reduce their own illusions.[69]

Boundaries between science and pseudoscience

In the philosophy and history of science, Imre Lakatos stresses the social and political importance of the demarcation problem, the normative methodological problem of distinguishing between science and pseudoscience. His distinctive historical analysis of scientific methodology based on research programmes suggests: "scientists regard the successful theoretical prediction of stunning novel facts – such as the return of Halley's comet or the gravitational bending of light rays – as what demarcates good scientific theories from pseudo-scientific and degenerate theories, and in spite of all scientific theories being forever confronted by 'an ocean of counterexamples'".[6] Lakatos offers a "novel fallibilist analysis of the development of Newton's celestial dynamics, [his] favourite historical example of his methodology" and argues in light of this historical turn, that his account answers for certain inadequacies in those of Sir Karl Popper and Thomas Kuhn.[6] "Nonetheless, Lakatos did recognize the force of Kuhn's historical criticism of Popper – all important theories have been surrounded by an 'ocean of anomalies', which on a falsificationist view would require the rejection of the theory outright... Lakatos sought to reconcile the rationalism of Popperian falsificationism with what seemed to be its own refutation by history".[86]
Many philosophers have tried to solve the problem of demarcation in the following terms: a statement constitutes knowledge if sufficiently many people believe it sufficiently strongly. But the history of thought shows us that many people were totally committed to absurd beliefs. If the strengths of beliefs were a hallmark of knowledge, we should have to rank some tales about demons, angels, devils, and of heaven and hell as knowledge. Scientists, on the other hand, are very sceptical even of their best theories. Newton's is the most powerful theory science has yet produced, but Newton himself never believed that bodies attract each other at a distance. So no degree of commitment to beliefs makes them knowledge. Indeed, the hallmark of scientific behaviour is a certain scepticism even towards one's most cherished theories. Blind commitment to a theory is not an intellectual virtue: it is an intellectual crime.

Thus a statement may be pseudoscientific even if it is eminently 'plausible' and everybody believes in it, and it may be scientifically valuable even if it is unbelievable and nobody believes in it. A theory may even be of supreme scientific value even if no one understands it, let alone believes in it.[6]
—Imre Lakatos, Science and Pseudoscience
The boundary lines between science and pseudoscience are disputed and difficult to determine analytically, even after more than a century of dialogue among philosophers of science and scientists in varied fields, and despite some basic agreements on the fundaments of scientific methodology.[2][87] The concept of pseudoscience rests on an understanding that scientific methodology has been misrepresented or misapplied with respect to a given theory, but many philosophers of science maintain that different kinds of methods are held as appropriate across different fields and different eras of human history. According to Lakatos, the typical descriptive unit of great scientific achievements is not an isolated hypothesis but "a powerful problem-solving machinery, which, with the help of sophisticated mathematical techniques, digests anomalies and even turns them into positive evidence."[6]
To Popper, pseudoscience uses induction to generate theories, and only performs experiments to seek to verify them. To Popper, falsifiability is what determines the scientific status of a theory. Taking a historical approach, Kuhn observed that scientists did not follow Popper's rule, and might ignore falsifying data, unless overwhelming. To Kuhn, puzzle-solving within a paradigm is science. Lakatos attempted to resolve this debate, by suggesting history shows that science occurs in research programmes, competing according to how progressive they are. The leading idea of a programme could evolve, driven by its heuristic to make predictions that can be supported by evidence. Feyerabend claimed that Lakatos was selective in his examples, and the whole history of science shows there is no universal rule of scientific method, and imposing one on the scientific community impedes progress.[88]
—David Newbold and Julia Roberts, "An analysis of the demarcation problem in science and its application to therapeutic touch theory" in International Journal of Nursing Practice, Vol. 13
Laudan maintained that the demarcation between science and non-science was a pseudo-problem, preferring to focus on the more general distinction between reliable and unreliable knowledge.[89]
[Feyerabend] regards Lakatos's view as being closet anarchism disguised as methodological rationalism. It should be noted that Feyerabend's claim was not that standard methodological rules should never be obeyed, but rather that sometimes progress is made by abandoning them. In the absence of a generally accepted rule, there is a need for alternative methods of persuasion. According to Feyerabend, Galileo employed stylistic and rhetorical techniques to convince his reader, while he also wrote in Italian rather than Latin and directed his arguments to those already temperamentally inclined to accept them.[86]
—Alexander Bird, "The Historical Turn in the Philosophy of Science" in Routledge Companion to the Philosophy of Science

Politics, health, and education

Political implications

The demarcation problem between science and pseudoscience brings up debate in the realms of science, philosophy and politics. Imre Lakatos, for instance, points out that the Communist Party of the Soviet Union at one point declared that Mendelian genetics was pseudoscientific and had its advocates, including well-established scientists such as Nikolai Vavilov, sent to a Gulag and that the "liberal Establishment of the West" denies freedom of speech to topics it regards as pseudoscience, particularly where they run up against social mores.[6]

Pseudoscience is invoked recurrently in political and policy-making discourse, in allegations that scientific findings have been distorted or fabricated to support a political position. The Prince of Wales has accused climate change skeptics of using pseudoscience and persuasion to hinder the world from adopting precautionary principles to avert the negative effects of global warming. Attention has been given to climate skeptics in an effort to understand the kind of pseudoscience they are promoting, but Prince Charles argues that the evidence of what he describes as an "environmental collapse" is already here, visible not only in climbing temperatures but also in the impact on particular species such as honey bees.[90]

Science becomes pseudoscientific when it cannot be separated from ideology, when scientists misrepresent scientific findings to promote themselves or to attract publicity, when politicians, journalists and a nation's intellectual elite distort the facts of science for short-term political gain, when powerful individuals in the public sphere conflate causation and cofactors (for example, in the causes of HIV/AIDS) through clever wordplay, or when science is used by the powerful to promote ignorance rather than to tackle it. These practices reduce the authority, value, integrity and independence of science in society.[91]

Health and education implications

Distinguishing science from pseudoscience has practical implications in the case of health care, expert testimony, environmental policies, and science education. Treatments with a patina of scientific authority that have not actually been subjected to scientific testing may be ineffective, expensive, and dangerous to patients, and may confuse health providers, insurers, government decision makers, and the public as to which treatments are appropriate. Claims advanced by pseudoscience may result in government officials and educators making poor decisions in selecting curricula; for example, creation science may replace evolution in the study of biology.[7]

The extent to which students acquire a range of social and cognitive thinking skills related to the proper use of science and technology determines whether they are scientifically literate. Education in the sciences faces new dimensions with the changing landscape of science and technology, a fast-changing culture, and a knowledge-driven era. A reinvented school science curriculum is one that prepares students to contend with the changing influence of science and technology on human welfare. Scientific literacy, which allows a person to distinguish science from pseudosciences such as astrology, is among the attributes that enable students to adapt to the changing world. Its characteristics are embedded in a curriculum in which students are engaged in resolving problems, conducting investigations, or developing projects.[8]

Scientists are often reluctant to get involved in countering pseudoscience, for various reasons: pseudoscientific beliefs are seen as irrational and impossible to combat with rational arguments, and even agreeing to discuss pseudoscience can be taken as acknowledging it as a credible discipline. Yet pseudoscience poses a continuous and increasing threat to our society,[92] and it is impossible to determine the irreversible harm it will cause in the long term. At a time when public science literacy has declined and the danger of pseudoscience has increased, revising the conventional science course to address current science through the prism of pseudoscience could help improve science literacy and help society eliminate misconceptions and counter growing trends (remote viewing, psychic readings, etc.) that may harm (financially or otherwise) trusting citizens.[92]

Pseudosciences such as homeopathy, even if generally benign, are magnets for charlatans. This poses a serious issue because incompetent practitioners should not be given the right to administer health care. True-believing zealots may pose a more serious threat than typical con men because of their devotion to homeopathy's ideology. Irrational health care is not harmless, and it is careless to create patient confidence in pseudomedicine.[93]

Representation of a Lie group

From Wikipedia, the free encyclopedia https://en.wikipedia.org/wiki/Representation_of_a_Lie_group...