
Wednesday, November 23, 2022

Scientific theory

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Scientific_theory

A scientific theory is an explanation of an aspect of the natural world and universe that has been repeatedly tested and corroborated in accordance with the scientific method, using accepted protocols of observation, measurement, and evaluation of results. Where possible, theories are tested under controlled conditions in an experiment. In circumstances not amenable to experimental testing, theories are evaluated through principles of abductive reasoning. Established scientific theories have withstood rigorous scrutiny and embody scientific knowledge.

A scientific theory differs from a scientific fact or scientific law in that a theory explains "why" or "how": a fact is a simple, basic observation, whereas a law is a statement (often a mathematical equation) about a relationship between facts. For example, Newton’s Law of Gravity is a mathematical equation that can be used to predict the attraction between bodies, but it is not a theory to explain how gravity works. Stephen Jay Gould wrote that "...facts and theories are different things, not rungs in a hierarchy of increasing certainty. Facts are the world's data. Theories are structures of ideas that explain and interpret facts."

The meaning of the term scientific theory (often contracted to theory for brevity) as used in the disciplines of science is significantly different from the common vernacular usage of theory. In everyday speech, theory can imply an explanation that represents an unsubstantiated and speculative guess, whereas in science it describes an explanation that has been tested and is widely accepted as valid.

The strength of a scientific theory is related to the diversity of phenomena it can explain and its simplicity. As additional scientific evidence is gathered, a scientific theory may be modified and ultimately rejected if it cannot be made to fit the new findings; in such circumstances, a more accurate theory is then required. Some theories are so well-established that they are unlikely ever to be fundamentally changed (for example, scientific theories such as evolution, heliocentric theory, cell theory, theory of plate tectonics, germ theory of disease, etc.). In certain cases, a scientific theory or scientific law that fails to fit all data can still be useful (due to its simplicity) as an approximation under specific conditions. An example is Newton's laws of motion, which are a highly accurate approximation to special relativity at velocities that are small relative to the speed of light.
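
To see why Newton's laws work so well at low velocities, the short Python sketch below (an illustrative calculation; the speeds chosen are arbitrary everyday examples) compares the relativistic Lorentz factor, which special relativity says governs time dilation and length contraction, with the Newtonian value of exactly 1:

    import math

    C = 299_792_458.0  # speed of light in metres per second

    def lorentz_factor(v):
        """Relativistic gamma; Newtonian mechanics effectively assumes gamma = 1."""
        return 1.0 / math.sqrt(1.0 - (v / C) ** 2)

    for label, v in [("car, 30 m/s", 30.0),
                     ("airliner, 250 m/s", 250.0),
                     ("low Earth orbit, 7800 m/s", 7800.0)]:
        print(f"{label}: gamma - 1 = {lorentz_factor(v) - 1:.2e}")

Even at orbital speed the correction is only about three parts in ten billion, which is why Newtonian mechanics remains an excellent approximation far below the speed of light.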

Scientific theories are testable and make falsifiable predictions. They describe the causes of a particular natural phenomenon and are used to explain and predict aspects of the physical universe or specific areas of inquiry (for example, electricity, chemistry, and astronomy). As with other forms of scientific knowledge, scientific theories are both deductive and inductive, aiming for predictive and explanatory power. Scientists use theories to further scientific knowledge, as well as to facilitate advances in technology or medicine.

Types

Albert Einstein described two different types of scientific theories: "Constructive theories" and "principle theories". Constructive theories are constructive models for phenomena: for example, kinetic theory. Principle theories are empirical generalisations, one such example being Newton's laws of motion.

Characteristics

Essential criteria

For a theory to be accepted within most of academia, there is usually one simple criterion: the phenomena the theory describes must be observable and the results repeatable. This essential criterion helps prevent fraud and sustains science itself.

The tectonic plates of the world were mapped in the second half of the 20th century. Plate tectonic theory successfully explains numerous observations about the Earth, including the distribution of earthquakes, mountains, continents, and oceans.

The defining characteristic of all scientific knowledge, including theories, is the ability to make falsifiable or testable predictions. The relevance and specificity of those predictions determine how potentially useful the theory is. A would-be theory that makes no observable predictions is not a scientific theory at all. Predictions not sufficiently specific to be tested are similarly not useful. In both cases, the term "theory" is not applicable.

A body of descriptions of knowledge can be called a theory if it fulfills the following criteria:

  • It makes falsifiable predictions with consistent accuracy across a broad area of scientific inquiry (such as mechanics).
  • It is well-supported by many independent strands of evidence, rather than a single foundation.
  • It is consistent with preexisting experimental results and at least as accurate in its predictions as are any preexisting theories.

These qualities are certainly true of such established theories as special and general relativity, quantum mechanics, plate tectonics, the modern evolutionary synthesis, etc.

Other criteria

In addition, most scientists prefer to work with a theory that meets the following qualities:

  • It can be subjected to minor adaptations to account for new data that do not fit it perfectly, as they are discovered, thus increasing its predictive capability over time.
  • It is among the most parsimonious explanations, economical in the use of proposed entities or explanatory steps as per Occam's razor. This is because for each accepted explanation of a phenomenon, there may be an extremely large, perhaps even incomprehensible, number of possible and more complex alternatives, because one can always burden failing explanations with ad hoc hypotheses to prevent them from being falsified; therefore, simpler theories are preferable to more complex ones because they are more testable.

Definitions from scientific organizations

The United States National Academy of Sciences defines scientific theories as follows:

The formal scientific definition of theory is quite different from the everyday meaning of the word. It refers to a comprehensive explanation of some aspect of nature that is supported by a vast body of evidence. Many scientific theories are so well established that no new evidence is likely to alter them substantially. For example, no new evidence will demonstrate that the Earth does not orbit around the Sun (heliocentric theory), or that living things are not made of cells (cell theory), that matter is not composed of atoms, or that the surface of the Earth is not divided into solid plates that have moved over geological timescales (the theory of plate tectonics)...One of the most useful properties of scientific theories is that they can be used to make predictions about natural events or phenomena that have not yet been observed.

From the American Association for the Advancement of Science:

A scientific theory is a well-substantiated explanation of some aspect of the natural world, based on a body of facts that have been repeatedly confirmed through observation and experiment. Such fact-supported theories are not "guesses" but reliable accounts of the real world. The theory of biological evolution is more than "just a theory". It is as factual an explanation of the universe as the atomic theory of matter or the germ theory of disease. Our understanding of gravity is still a work in progress. But the phenomenon of gravity, like evolution, is an accepted fact.

Note that the term theory would not be appropriate for describing untested but intricate hypotheses or even scientific models.

Formation

The first observation of cells, by Robert Hooke, using an early microscope. This led to the development of cell theory.

The scientific method involves the proposal and testing of hypotheses, by deriving predictions from the hypotheses about the results of future experiments, then performing those experiments to see whether the predictions are valid. This provides evidence either for or against the hypothesis. When enough experimental results have been gathered in a particular area of inquiry, scientists may propose an explanatory framework that accounts for as many of these as possible. This explanation is also tested, and if it fulfills the necessary criteria (see above), then the explanation becomes a theory. This can take many years, as it can be difficult or complicated to gather sufficient evidence.

Once all of the criteria have been met, it will be widely accepted by scientists (see scientific consensus) as the best available explanation of at least some phenomena. It will have made predictions of phenomena that previous theories could not explain or could not predict accurately, and it will have resisted attempts at falsification. The strength of the evidence is evaluated by the scientific community, and the most important experiments will have been replicated by multiple independent groups.

Theories do not have to be perfectly accurate to be scientifically useful. For example, the predictions made by classical mechanics are known to be inaccurate in the relativistic realm, but they are almost exactly correct at the comparatively low velocities of common human experience. In chemistry, there are many acid-base theories providing highly divergent explanations of the underlying nature of acidic and basic compounds, but they are very useful for predicting their chemical behavior. Like all knowledge in science, no theory can ever be completely certain, since it is possible that future experiments might conflict with the theory's predictions. However, theories supported by the scientific consensus have the highest level of certainty of any scientific knowledge; for example, that all objects are subject to gravity or that life on Earth evolved from a common ancestor.

Acceptance of a theory does not require that all of its major predictions be tested, if it is already supported by sufficiently strong evidence. For example, certain tests may be unfeasible or technically difficult. As a result, theories may make predictions that have not yet been confirmed or proven incorrect; in this case, the predicted results may be described informally with the term "theoretical". These predictions can be tested at a later time, and if they are incorrect, this may lead to the revision or rejection of the theory.

Modification and improvement

If experimental results contrary to a theory's predictions are observed, scientists first evaluate whether the experimental design was sound, and if so they confirm the results by independent replication. A search for potential improvements to the theory then begins. Solutions may require minor or major changes to the theory, or none at all if a satisfactory explanation is found within the theory's existing framework. Over time, as successive modifications build on top of each other, theories consistently improve and greater predictive accuracy is achieved. Since each new version of a theory (or a completely new theory) must have more predictive and explanatory power than the last, scientific knowledge consistently becomes more accurate over time.

If modifications to the theory or other explanations seem to be insufficient to account for the new results, then a new theory may be required. Since scientific knowledge is usually durable, this occurs much less commonly than modification. Furthermore, until such a theory is proposed and accepted, the previous theory will be retained. This is because it is still the best available explanation for many other phenomena, as verified by its predictive power in other contexts. For example, it has been known since 1859 that the observed perihelion precession of Mercury violates Newtonian mechanics, but the theory remained the best explanation available until relativity was supported by sufficient evidence. Also, while new theories may be proposed by a single person or by many, the cycle of modifications eventually incorporates contributions from many different scientists.

After the changes, the accepted theory will explain more phenomena and have greater predictive power (if it did not, the changes would not be adopted); this new explanation will then be open to further replacement or modification. If a theory does not require modification despite repeated tests, this implies that the theory is very accurate. This also means that accepted theories continue to accumulate evidence over time, and the length of time that a theory (or any of its principles) remains accepted often indicates the strength of its supporting evidence.

Unification

In quantum mechanics, the electrons of an atom occupy orbitals around the nucleus. This image shows the orbitals of a hydrogen atom (s, p, d) at three different energy levels (1, 2, 3). Brighter areas correspond to higher probability density.

In some cases, two or more theories may be replaced by a single theory that explains the previous theories as approximations or special cases, analogous to the way a theory is a unifying explanation for many confirmed hypotheses; this is referred to as unification of theories. For example, electricity and magnetism are now known to be two aspects of the same phenomenon, referred to as electromagnetism.

When the predictions of different theories appear to contradict each other, this is also resolved by either further evidence or unification. For example, physical theories in the 19th century implied that the Sun could not have been burning long enough to allow certain geological changes as well as the evolution of life. This was resolved by the discovery of nuclear fusion, the main energy source of the Sun. Contradictions can also be explained as the result of theories approximating more fundamental (non-contradictory) phenomena. For example, atomic theory is an approximation of quantum mechanics. Current theories describe three separate fundamental phenomena of which all other theories are approximations; the potential unification of these is sometimes called the Theory of Everything.

Example: Relativity

In 1905, Albert Einstein published the principle of special relativity, which soon became a theory. Special relativity aligned the Newtonian principle of Galilean invariance, also termed Galilean relativity, with the electromagnetic field. By omitting the luminiferous aether from special relativity, Einstein held that the time dilation and length contraction measured for an object in relative motion apply when that motion is inertial, that is, when the object exhibits constant velocity (speed with direction) as measured by its observer. He thereby reproduced the Lorentz transformation and the Lorentz contraction, which had been hypothesized to resolve experimental riddles and had been inserted into electrodynamic theory as dynamical consequences of the aether's properties. An elegant theory, special relativity yielded its own consequences, such as the equivalence of mass and energy transforming into one another and the resolution of the paradox that an excitation of the electromagnetic field could be viewed in one reference frame as electricity, but in another as magnetism.

Einstein sought to generalize the invariance principle to all reference frames, whether inertial or accelerating. Rejecting Newtonian gravitation—a central force acting instantly at a distance—Einstein presumed a gravitational field. In 1907, Einstein's equivalence principle implied that a free fall within a uniform gravitational field is equivalent to inertial motion. By extending special relativity's effects into three dimensions, general relativity extended length contraction into space contraction, conceiving of 4D space-time as the gravitational field that alters geometrically and sets all local objects' pathways. Even massless energy exerts gravitational influence on local objects by "curving" the geometrical "surface" of 4D space-time. Yet unless the energy is vast, its relativistic effects of contracting space and slowing time are negligible when merely predicting motion. Although general relativity is embraced as the more explanatory theory via scientific realism, Newton's theory remains successful as merely a predictive theory via instrumentalism. To calculate trajectories, engineers and NASA still use Newton's equations, which are simpler to operate.

Theories and laws

Both scientific laws and scientific theories are produced from the scientific method through the formation and testing of hypotheses, and can predict the behavior of the natural world. Both are also typically well-supported by observations and/or experimental evidence. However, scientific laws are descriptive accounts of how nature will behave under certain conditions. Scientific theories are broader in scope, and give overarching explanations of how nature works and why it exhibits certain characteristics. Theories are supported by evidence from many different sources, and may contain one or several laws.

A common misconception is that scientific theories are rudimentary ideas that will eventually graduate into scientific laws when enough data and evidence have been accumulated. A theory does not change into a scientific law with the accumulation of new or better evidence. A theory will always remain a theory; a law will always remain a law. Both theories and laws could potentially be falsified by countervailing evidence.

Theories and laws are also distinct from hypotheses. Unlike hypotheses, theories and laws may be simply referred to as scientific fact. However, in science, theories are different from facts even when they are well supported. For example, evolution is both a theory and a fact.

About theories

Theories as axioms

The logical positivists thought of scientific theories as statements in a formal language. First-order logic is an example of a formal language. The logical positivists envisaged a similar scientific language. In addition to scientific theories, the language also included observation sentences ("the sun rises in the east"), definitions, and mathematical statements. The phenomena explained by the theories, if they could not be directly observed by the senses (for example, atoms and radio waves), were treated as theoretical concepts. In this view, theories function as axioms: predicted observations are derived from the theories much like theorems are derived in Euclidean geometry. However, the predictions are then tested against reality to verify the predictions, and the "axioms" can be revised as a direct result.

The phrase "the received view of theories" is used to describe this approach. Terms commonly associated with it are "linguistic" (because theories are components of a language) and "syntactic" (because a language has rules about how symbols can be strung together). Problems in defining this kind of language precisely, e.g., are objects seen in microscopes observed or are they theoretical objects, led to the effective demise of logical positivism in the 1970s.

Theories as models

The semantic view of theories, which identifies scientific theories with models rather than propositions, has replaced the received view as the dominant position in theory formulation in the philosophy of science. A model is a logical framework intended to represent reality (a "model of reality"), similar to the way that a map is a graphical model that represents the territory of a city or country.

Precession of the perihelion of Mercury (exaggerated). The deviation in Mercury's position from the Newtonian prediction is about 43 arc-seconds (about two-thirds of 1/60 of a degree) per century.

In this approach, theories are a specific category of models that fulfill the necessary criteria (see above). One can use language to describe a model; however, the theory is the model (or a collection of similar models), and not the description of the model. A model of the solar system, for example, might consist of abstract objects that represent the sun and the planets. These objects have associated properties, e.g., positions, velocities, and masses. The model parameters, e.g., Newton's Law of Gravitation, determine how the positions and velocities change with time. This model can then be tested to see whether it accurately predicts future observations; astronomers can verify that the positions of the model's objects over time match the actual positions of the planets. For most planets, the Newtonian model's predictions are accurate; for Mercury, it is slightly inaccurate and the model of general relativity must be used instead.
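
To make the solar-system example concrete, here is a minimal Python sketch of such a model (the values, one-hour step size, and two-dimensional setup are simplifying choices for illustration): abstract objects carry positions and velocities, and Newton's law of gravitation fixes how those properties change with time.

    import math

    G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
    M_SUN = 1.989e30   # mass of the Sun, kg

    # Model object: one planet with roughly Earth-like position and velocity
    x, y = 1.496e11, 0.0      # metres
    vx, vy = 0.0, 29_780.0    # metres per second
    dt = 3600.0               # one-hour time step

    def acceleration(x, y):
        """Newton's law of gravitation: acceleration toward the Sun at the origin."""
        r = math.hypot(x, y)
        a = -G * M_SUN / r**3
        return a * x, a * y

    # Step the model forward for one simulated year; the law supplies the dynamics,
    # the model supplies the concrete objects and their properties.
    for _ in range(24 * 365):
        ax, ay = acceleration(x, y)
        vx, vy = vx + ax * dt, vy + ay * dt
        x, y = x + vx * dt, y + vy * dt

    print(f"predicted position after one year: ({x:.3e}, {y:.3e}) m")

Astronomers test a model like this by checking whether the computed positions track the observed positions of the planets over time; for Mercury, the small mismatch with observation is what general relativity corrects.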

The word "semantic" refers to the way that a model represents the real world. The representation (literally, "re-presentation") describes particular aspects of a phenomenon or the manner of interaction among a set of phenomena. For instance, a scale model of a house or of a solar system is clearly not an actual house or an actual solar system; the aspects of an actual house or an actual solar system represented in a scale model are, only in certain limited ways, representative of the actual entity. A scale model of a house is not a house; but to someone who wants to learn about houses, analogous to a scientist who wants to understand reality, a sufficiently detailed scale model may suffice.

Differences between theory and model

Several commentators have stated that the distinguishing characteristic of theories is that they are explanatory as well as descriptive, while models are only descriptive (although still predictive in a more limited sense). Philosopher Stephen Pepper also distinguished between theories and models, and said in 1948 that general models and theories are predicated on a "root" metaphor that constrains how scientists theorize and model a phenomenon and thus arrive at testable hypotheses.

Engineering practice makes a distinction between "mathematical models" and "physical models"; the cost of fabricating a physical model can be minimized by first creating a mathematical model using a computer software package, such as a computer aided design tool. The component parts are each themselves modelled, and the fabrication tolerances are specified. An exploded view drawing is used to lay out the fabrication sequence. Simulation packages for displaying each of the subassemblies allow the parts to be rotated and magnified in realistic detail. Software packages for creating the bill of materials for construction allow subcontractors to specialize in assembly processes, which spreads the cost of manufacturing machinery among multiple customers. See: Computer-aided engineering, Computer-aided manufacturing, and 3D printing.

Assumptions in formulating theories

An assumption (or axiom) is a statement that is accepted without evidence. For example, assumptions can be used as premises in a logical argument. Isaac Asimov described assumptions as follows:

...it is incorrect to speak of an assumption as either true or false, since there is no way of proving it to be either (If there were, it would no longer be an assumption). It is better to consider assumptions as either useful or useless, depending on whether deductions made from them corresponded to reality...Since we must start somewhere, we must have assumptions, but at least let us have as few assumptions as possible.

Certain assumptions are necessary for all empirical claims (e.g. the assumption that reality exists). However, theories do not generally make assumptions in the conventional sense (statements accepted without evidence). While assumptions are often incorporated during the formation of new theories, these are either supported by evidence (such as from previously existing theories) or the evidence is produced in the course of validating the theory. This may be as simple as observing that the theory makes accurate predictions, which is evidence that any assumptions made at the outset are correct or approximately correct under the conditions tested.

Conventional assumptions, without evidence, may be used if the theory is only intended to apply when the assumption is valid (or approximately valid). For example, the special theory of relativity assumes an inertial frame of reference. The theory makes accurate predictions when the assumption is valid, and does not make accurate predictions when the assumption is not valid. Such assumptions are often the point with which older theories are succeeded by new ones (the general theory of relativity works in non-inertial reference frames as well).

The term "assumption" is actually broader than its standard use, etymologically speaking. The Oxford English Dictionary (OED) and online Wiktionary indicate its Latin source as assumere ("accept, to take to oneself, adopt, usurp"), which is a conjunction of ad- ("to, towards, at") and sumere (to take). The root survives, with shifted meanings, in the Italian assumere and Spanish sumir. The first sense of "assume" in the OED is "to take unto (oneself), receive, accept, adopt". The term was originally employed in religious contexts as in "to receive up into heaven", especially "the reception of the Virgin Mary into heaven, with body preserved from corruption", (1297 CE) but it was also simply used to refer to "receive into association" or "adopt into partnership". Moreover, other senses of assumere included (i) "investing oneself with (an attribute)", (ii) "to undertake" (especially in Law), (iii) "to take to oneself in appearance only, to pretend to possess", and (iv) "to suppose a thing to be" (all senses from OED entry on "assume"; the OED entry for "assumption" is almost perfectly symmetrical in senses). Thus, "assumption" connotes other associations than the contemporary standard sense of "that which is assumed or taken for granted; a supposition, postulate" (only the 11th of 12 senses of "assumption", and the 10th of 11 senses of "assume").

Descriptions

From philosophers of science

Karl Popper described the characteristics of a scientific theory as follows:

  1. It is easy to obtain confirmations, or verifications, for nearly every theory—if we look for confirmations.
  2. Confirmations should count only if they are the result of risky predictions; that is to say, if, unenlightened by the theory in question, we should have expected an event which was incompatible with the theory—an event which would have refuted the theory.
  3. Every "good" scientific theory is a prohibition: it forbids certain things to happen. The more a theory forbids, the better it is.
  4. A theory which is not refutable by any conceivable event is non-scientific. Irrefutability is not a virtue of a theory (as people often think) but a vice.
  5. Every genuine test of a theory is an attempt to falsify it, or to refute it. Testability is falsifiability; but there are degrees of testability: some theories are more testable, more exposed to refutation, than others; they take, as it were, greater risks.
  6. Confirming evidence should not count except when it is the result of a genuine test of the theory; and this means that it can be presented as a serious but unsuccessful attempt to falsify the theory. (I now speak in such cases of "corroborating evidence".)
  7. Some genuinely testable theories, when found to be false, might still be upheld by their admirers—for example by introducing post hoc (after the fact) some auxiliary hypothesis or assumption, or by reinterpreting the theory post hoc in such a way that it escapes refutation. Such a procedure is always possible, but it rescues the theory from refutation only at the price of destroying, or at least lowering, its scientific status, by tampering with evidence. The temptation to tamper can be minimized by first taking the time to write down the testing protocol before embarking on the scientific work.

Popper summarized these statements by saying that the central criterion of the scientific status of a theory is its "falsifiability, or refutability, or testability". Echoing this, Stephen Hawking states, "A theory is a good theory if it satisfies two requirements: It must accurately describe a large class of observations on the basis of a model that contains only a few arbitrary elements, and it must make definite predictions about the results of future observations." He also discusses the "unprovable but falsifiable" nature of theories, which is a necessary consequence of inductive logic, and that "you can disprove a theory by finding even a single observation that disagrees with the predictions of the theory".

Several philosophers and historians of science have, however, argued that Popper's definition of theory as a set of falsifiable statements is wrong because, as Philip Kitcher has pointed out, if one took a strictly Popperian view of "theory", observations of Uranus when first discovered in 1781 would have "falsified" Newton's celestial mechanics. Rather, people suggested that another planet influenced Uranus' orbit—and this prediction was indeed eventually confirmed.

Kitcher agrees with Popper that "There is surely something right in the idea that a science can succeed only if it can fail." He also says that scientific theories include statements that cannot be falsified, and that good theories must also be creative. He insists we view scientific theories as an "elaborate collection of statements", some of which are not falsifiable, while others, those he calls "auxiliary hypotheses", are.

According to Kitcher, good scientific theories must have three features:

  1. Unity: "A science should be unified.... Good theories consist of just one problem-solving strategy, or a small family of problem-solving strategies, that can be applied to a wide range of problems."
  2. Fecundity: "A great scientific theory, like Newton's, opens up new areas of research.... Because a theory presents a new way of looking at the world, it can lead us to ask new questions, and so to embark on new and fruitful lines of inquiry.... Typically, a flourishing science is incomplete. At any time, it raises more questions than it can currently answer. But incompleteness is no vice. On the contrary, incompleteness is the mother of fecundity.... A good theory should be productive; it should raise new questions and presume those questions can be answered without giving up its problem-solving strategies."
  3. Auxiliary hypotheses that are independently testable: "An auxiliary hypothesis ought to be testable independently of the particular problem it is introduced to solve, independently of the theory it is designed to save." (For example, the evidence for the existence of Neptune is independent of the anomalies in Uranus's orbit.)

Like other definitions of theories, including Popper's, Kitcher makes it clear that a theory must include statements that have observational consequences. But, like the observation of irregularities in the orbit of Uranus, falsification is only one possible consequence of observation. The production of new hypotheses is another possible and equally important result.

Analogies and metaphors

The concept of a scientific theory has also been described using analogies and metaphors. For example, the logical empiricist Carl Gustav Hempel likened the structure of a scientific theory to a "complex spatial network:"

Its terms are represented by the knots, while the threads connecting the latter correspond, in part, to the definitions and, in part, to the fundamental and derivative hypotheses included in the theory. The whole system floats, as it were, above the plane of observation and is anchored to it by the rules of interpretation. These might be viewed as strings which are not part of the network but link certain points of the latter with specific places in the plane of observation. By virtue of these interpretive connections, the network can function as a scientific theory: From certain observational data, we may ascend, via an interpretive string, to some point in the theoretical network, thence proceed, via definitions and hypotheses, to other points, from which another interpretive string permits a descent to the plane of observation.

Michael Polanyi made an analogy between a theory and a map:

A theory is something other than myself. It may be set out on paper as a system of rules, and it is the more truly a theory the more completely it can be put down in such terms. Mathematical theory reaches the highest perfection in this respect. But even a geographical map fully embodies in itself a set of strict rules for finding one's way through a region of otherwise uncharted experience. Indeed, all theory may be regarded as a kind of map extended over space and time.

A scientific theory can also be thought of as a book that captures the fundamental information about the world, a book that must be researched, written, and shared. In 1623, Galileo Galilei wrote:

Philosophy [i.e. physics] is written in this grand book—I mean the universe—which stands continually open to our gaze, but it cannot be understood unless one first learns to comprehend the language and interpret the characters in which it is written. It is written in the language of mathematics, and its characters are triangles, circles, and other geometrical figures, without which it is humanly impossible to understand a single word of it; without these, one is wandering around in a dark labyrinth.

The book metaphor could also be applied in the following passage, by the contemporary philosopher of science Ian Hacking:

I myself prefer an Argentine fantasy. God did not write a Book of Nature of the sort that the old Europeans imagined. He wrote a Borgesian library, each book of which is as brief as possible, yet each book of which is inconsistent with every other. No book is redundant. For every book there is some humanly accessible bit of Nature such that that book, and no other, makes possible the comprehension, prediction and influencing of what is going on...Leibniz said that God chose a world which maximized the variety of phenomena while choosing the simplest laws. Exactly so: but the best way to maximize phenomena and have simplest laws is to have the laws inconsistent with each other, each applying to this or that but none applying to all.

In physics

In physics, the term theory is generally used for a mathematical framework—derived from a small set of basic postulates (usually symmetries—like equality of locations in space or in time, or identity of electrons, etc.)—that is capable of producing experimental predictions for a given category of physical systems. A good example is classical electromagnetism, which encompasses results derived from gauge symmetry (sometimes called gauge invariance) in the form of a few equations called Maxwell's equations. The specific mathematical aspects of classical electromagnetic theory are termed "laws of electromagnetism", reflecting the level of consistent and reproducible evidence that supports them. Within electromagnetic theory generally, there are numerous hypotheses about how electromagnetism applies to specific situations. Many of these hypotheses are already considered to be adequately tested, with new ones always in the making and perhaps untested. An example of the latter might be the radiation reaction force. As of 2009, its effects on the periodic motion of charges are detectable in synchrotrons, but only as averaged effects over time. Some researchers are now considering experiments that could observe these effects at the instantaneous level (i.e. not averaged over time).
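
For reference, the "few equations" mentioned above are Maxwell's equations; in differential (SI) form they read:

    \nabla \cdot \mathbf{E} = \rho / \varepsilon_0, \qquad
    \nabla \cdot \mathbf{B} = 0, \qquad
    \nabla \times \mathbf{E} = -\frac{\partial \mathbf{B}}{\partial t}, \qquad
    \nabla \times \mathbf{B} = \mu_0 \mathbf{J} + \mu_0 \varepsilon_0 \frac{\partial \mathbf{E}}{\partial t}

Together with the Lorentz force law, these equations generate the experimental predictions of classical electromagnetism for any configuration of charges and currents.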

Examples

Note that many fields of inquiry do not have specific named theories, e.g. developmental biology. Scientific knowledge outside a named theory can still have a high level of certainty, depending on the amount of evidence supporting it. Also note that since theories draw evidence from many fields, the categorization is not absolute.

Mastery learning

From Wikipedia, the free encyclopedia

Mastery learning (or, as it was initially called, "learning for mastery"; also known as "mastery-based learning") is an instructional strategy and educational philosophy, first formally proposed by Benjamin Bloom in 1968. Mastery learning maintains that students must achieve a level of mastery (e.g., 90% on a knowledge test) in prerequisite knowledge before moving forward to learn subsequent information. If a student does not achieve mastery on the test, they are given additional support in learning and reviewing the information and then tested again. This cycle continues until the learner accomplishes mastery, and they may then move on to the next stage.

Mastery learning methods suggest that the focus of instruction should be the time required for different students to learn the same material and achieve the same level of mastery. This is very much in contrast with classic models of teaching that focus more on differences in students' ability and where all students are given approximately the same amount of time to learn and the same set of instructions.

In mastery learning, there is a shift in responsibilities, so that the students' failure is considered to be more due to the instruction and not necessarily their lack of ability. This also means teachers' attention to individual students is emphasised as opposed to assessing group performance. Therefore, in a mastery learning environment, the challenge becomes providing enough time and employing instructional strategies so that all students can achieve the same level of learning.

Since its conception, mastery learning has empirically been demonstrated to be effective in improving education outcomes in a variety of settings. Its effectiveness is influenced by the subject being taught, whether testing is designed locally or nationally, the pace of the course, and the amount of feedback provided to students.

Definition

Mastery learning is a set of group-based, individualized teaching and learning strategies based on the premise that students will achieve a high level of understanding in a given domain if they are given enough time.

Motivation

The motivation for mastery learning comes from trying to reduce achievement gaps for students in average school classrooms. During the 1960s John B. Carroll and Benjamin S. Bloom pointed out that, if students are normally distributed with respect to aptitude for a subject and if they are provided uniform instruction (in terms of quality and learning time), then achievement level at completion of the subject is also expected to be normally distributed. This can be illustrated as shown below:

Comparison between normal curve for aptitude and normal curve for achievement after learning

Mastery Learning approaches propose that, if each learner were to receive optimal quality of instruction and as much learning time as they require, then a majority of students could be expected to attain mastery. This situation would be represented as follows:

Comparison between normal curve for aptitude and normal curve for achievement after optimal learning

In many situations educators preemptively use the normal curve for grading students. Bloom was critical of this practice, condemning it because it creates the expectation among teachers that some students will naturally be successful while others will not. Bloom maintained that, if educators are effective, the distribution of achievement could and should be very different from the normal curve. Bloom proposed Mastery Learning as a way to address this. He believed that by using his approach the majority of students (more than 90 percent) would achieve successful and rewarding learning. As an added advantage, Mastery Learning was also thought to create more positive interest in and attitudes towards the subject learned compared with usual classroom methods.

Related terms

Individualized instruction has some elements in common with mastery learning, although it dispenses with group activities in favor of allowing more capable or more motivated students to progress ahead of others while maximizing teacher interaction with those students who need the most assistance.

Bloom's 2 Sigma Problem is an educational phenomenon observed where the average student tutored one-to-one (using mastery learning techniques) performed two standard deviations better than students who learn via conventional instructional methods.
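
For scale, this follows from a standard property of the normal distribution (the arithmetic below is illustrative, not a figure from Bloom's own report): a score two standard deviations above the mean lies at about the 98th percentile,

    P(Z \le 2) = \Phi(2) \approx 0.977

so an average tutored student, starting at the 50th percentile, would outperform roughly 98% of comparable students taught by conventional group instruction.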

Competency-based learning is a framework for the assessment of learning based on predetermined competencies. It draws inspiration from mastery learning.

History

In the 1920s there were at least two attempts to produce mastery in students' learning: the Winnetka Plan, by Carleton Washburne and associates, and another approach by Henry C. Morrison, at the University of Chicago Laboratory School. Both these projects provided school situations where mastery of particular learning tasks - rather than time spent - was the central theme. While these ideas were popular for a while, they faded due primarily to the lack of technologies that could sustain a successful implementation.

The idea of mastery learning resurfaced in the late 1950s and early 1960s as a corollary of programmed instruction, a technology invented by B.F. Skinner to improve teaching. Underlying programmed instruction was the idea that the learning of any behavior, no matter how complex, rested upon the learning of a sequence of less-complex component behaviors.

Around that same time, John B. Carroll was working on his "Model of School Learning" - a conceptual paradigm which outlined the major factors influencing student success in school learning and indicated how these factors interact. Carroll's model stemmed from his previous work with foreign language learning. He found that a student's aptitude for a language predicted not only the level to which they learned in a given time, but also the amount of time they required to learn to a given level. Carroll then suggested that aptitude is actually a measure of the amount of time required to learn a task to a certain level (under ideal instructional conditions). As such, Carroll's model implies that, if each student is given the time they need to learn to a particular level, then they can be expected to attain it.
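
Carroll's model is commonly summarized by a single ratio (the notation below is an illustrative rendering of that standard summary):

    \text{degree of learning} = f\!\left(\frac{\text{time actually spent}}{\text{time needed to learn}}\right)

where the time actually spent reflects the opportunity to learn and the learner's perseverance, and the time needed reflects aptitude, the quality of instruction, and the learner's ability to understand that instruction.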

Later in the 1960s Benjamin Bloom and his graduate students were researching individual differences in school learning. They observed that teachers displayed very little variation in their instructional practices and yet there was a lot of variation in students' achievement. Bloom used Carroll's conceptual model to create his own working model of Mastery Learning. Bloom realized that, if aptitude was predictive of the rate at which a student can learn (and not necessarily the level to which they can learn), it should be possible to fix the expected degree of learning at some mastery level and then systematically manipulate the variables in Carroll's model such that all or almost all students attained that level.

Also in the 1960s, Fred S. Keller was collaborating with colleagues to develop his own instructional methods of Mastery Learning. Keller's strategies were based on the ideas of reinforcement as seen in operant conditioning theories. Keller formally introduced his teaching method, the Personalized System of Instruction (PSI), sometimes referred to as the Keller Plan, in his 1967 paper, "Engineering personalized instruction in the classroom".

From the late 1960s to the early 1980s, there was a surge of research on both Keller's and Bloom's instruction methods. Most of these studies showed that mastery learning has a positive effect on achievement, for all subjects and at all levels. Also, mastery learning brings positive affective outcomes for both students and teachers. These studies also showed that there are many variables that are either affected by mastery learning or that influence it somehow: student entry variables, curriculum, type of test, pacing, level of mastery, and time.

Despite those mostly positive research results, interest in mastery learning strategies decreased throughout the 1980s, as reflected in publication activity in professional journals and presentations at conferences. Many explanations were put forward for this decline, such as the alleged recalcitrance of the educational establishment to change, ineffective implementations of mastery learning methods, the extra time demanded in setting up and maintaining a mastery learning course, or concerns that behavioristic models of teaching would conflict with the generally humanistic orientation of teachers and the surrounding culture.

Mastery learning strategies are best represented by Bloom's Learning For Mastery (LFM) and Keller's Personalized System of Instruction (PSI). Bloom's approach focused on the classroom, whereas Keller developed his system for higher education. Both have been applied in many different contexts and have been found to be very powerful methods for increasing student performance in a wide range of activities. Despite sharing some commonalities in terms of goals, they are built on different psychological principles.

Learning For Mastery (LFM)

Variables of LFM

Bloom, when first proposing his mastery learning strategy in 1968, was convinced that most students can attain a high level of learning capability if the following conditions are available:

  • instruction is approached sensitively and systematically
  • students are helped when and where they have learning difficulties
  • students are given sufficient time to achieve mastery
  • there is some clear criterion of what constitutes mastery.

Many variables will influence achievement levels and learning outcomes:

Aptitude

Aptitude, as measured by standard aptitude tests, is in this context interpreted as "the amount of time required by the learner to attain mastery of a learning task". Several studies show that a majority of students can achieve mastery of a learning task, but that the time they need to spend on it differs. Bloom argues that 1 to 5 percent of students have a special talent for learning a subject (especially music and foreign languages) and that around five percent of students have a particular disability for learning a subject. For the other 90% of students, aptitude is merely an indicator of the rate of learning. Additionally, Bloom argues that aptitude for a learning task is not constant and can be changed by environmental conditions or learning experience at school or home.

Quality of instruction

The quality of instruction is defined as the degree to which the presentation, explanation, and ordering of elements of the task to be learned approach the optimum for a given learner. Bloom insists that the quality of instruction has to be evaluated according to its effect on individual students rather than on random groups of students. Bloom shows that, while in traditional classrooms the relationship between students' aptitude test scores for mathematics and their final grades in algebra is very high, this relationship is almost zero for students who receive tutorial instruction in the home. He argues that a good tutor tries to find the form of instruction that best fits a given student, and thus that the majority of students would be able to master a subject if they had access to a good tutor.

Ability to understand instruction

According to Bloom, the ability to understand instruction is defined as the learner's ability to grasp the nature of the task to be learned and the procedure to be followed. Verbal ability and reading comprehension are two language abilities that are highly related to student achievement. Since the ability to understand instruction varies significantly among students, Bloom recommends that teachers modify their instruction and provide help and teaching aids to fit the needs of different students. Some of the teaching aids that could be provided according to the ability of the learner are:

  • Alternative textbooks
  • Group studies and peer tutoring
  • Workbooks
  • Programmed instruction units
  • Audiovisual methods
  • Academic games
  • Laboratory experiences
  • Simple demonstrations
  • Puzzles

Perseverance

Perseverance in this context is defined as the time the learner is willing to spend in learning. According to Bloom, a student who demonstrates a low level of perseverance in one learning task might have a very high level of perseverance in a different learning task. He suggests that students' perseverance be enhanced by increasing the frequency of reward and providing evidence of success in learning. He recommends that teachers use frequent feedback accompanied by specific help to improve the quality of instruction, thus reducing the perseverance required for learning.

Time allowed for learning

According to the International Study of Education in 12 countries, if the top 5% of students are omitted, the ratio of the time needed by slower and faster learners of a subject such as mathematics is 6 to 1, while there is a zero or slightly negative relationship between final grades and the amount of time spent on homework. Thus, the amount of time spent on homework is not a good indicator of mastery in a subject. Bloom postulates that the time required for a learner to achieve mastery in a specific subject is affected by various factors such as:

  • the student's aptitude for that subject,
  • the student's verbal ability,
  • the quality of instruction, and
  • the quality of the help provided.

LFM strategy

LFM curricula generally consist of discrete topics which all students begin together. After beginning a unit, students are given a meaningful and formative assessment so that the teacher can determine whether or not an objective has been mastered. At this step, instruction goes in one of two directions. If a student has mastered the objective, he or she begins a path of enrichment activities that correspond to and build upon the original objective. Students who do not satisfactorily complete the topic are given additional instruction until they succeed: a series of correctives is employed, which can include varying activities, individualized instruction, and additional time to complete assignments. These students receive constructive feedback on their work and are encouraged to revise and revisit their assignment until the objective is mastered.
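
The cycle described above can be outlined as a rough, hypothetical Python simulation; the 80% threshold, the scoring noise, and the fixed corrective gain are invented for illustration and are not prescribed by Bloom:

    import random

    MASTERY = 0.80  # illustrative cutoff; mastery criteria in practice are set by the teacher

    def formative_assessment(knowledge):
        """Toy assessment: the score is current knowledge plus a little measurement noise."""
        return min(1.0, knowledge + random.uniform(-0.05, 0.05))

    def lfm_unit(initial_knowledge, corrective_gain=0.15, max_cycles=10):
        """One Learning For Mastery cycle: assess, apply correctives, reassess."""
        knowledge = initial_knowledge
        for cycle in range(max_cycles):
            if formative_assessment(knowledge) >= MASTERY:
                return cycle  # corrective rounds needed before moving on to enrichment
            knowledge += corrective_gain  # extra instruction, tutoring, added time
        return max_cycles

    # Students start a unit with different prerequisite knowledge; under LFM they all
    # reach mastery, but they need different amounts of corrective time.
    for start in (0.9, 0.7, 0.5):
        print(f"starting knowledge {start:.1f}: corrective rounds = {lfm_unit(start)}")

The point of the sketch is the structural one made in the text: achievement is held fixed at the mastery criterion, and it is the time spent in correctives that varies from student to student.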

Preconditions

There are some preconditions for the process of mastery learning. First, the objectives and content of instruction have to be specified and clarified to both the students and the teacher. Another precondition is that the summative evaluation criteria should be developed and both the teacher and the learner should be clear about the achievement criteria. Bloom suggests that using absolute standards rather than competitive criteria helps students to collaborate and facilitates mastery.

Operating procedures

The operating procedures are the methods used to provide detailed feedback and instructional help to facilitate the process of mastery in learning. The main operating procedures are:

  • Formative Evaluation, and
  • Alternative Learning Resources

Formative evaluation

Formative evaluation in the context of mastery learning is a diagnostic progress test that determines whether or not the student has mastered the subject unit. Each unit is usually a learning outcome that can be taught in a week or two of learning activity. The formative tests are administered at the end of each learning unit. Bloom insists that the diagnostic process has to be followed by a prescription, and that the results of formative assessment are better expressed in a non-graded format, since the use of grades on repeated progress evaluations prepares students to accept a level of learning less than mastery.

Alternative learning resources

The progress tests should be followed by detailed feedback and specific suggestions so that the students could work on their difficulties. Some of the alternative learning resources are:

  • Small groups of students (two or three) meet and work together
  • Tutorial help
  • Reviewing the instructional material
  • Reading alternative textbooks
  • Using workbook or programmed texts
  • Using selected audiovisual materials

Outcomes

The outcomes of mastery learning can be summarized into two groups: cognitive outcomes and affective outcomes.

Cognitive outcomes

The cognitive outcomes of mastery learning are mainly related to increases in student excellence in a subject. According to one study, applying the strategies of mastery learning in a class raised the proportion of students earning a grade of A from 20 percent to 80 percent (a difference of about two standard deviations), and using the formative evaluation records as a basis for quality control helped the teacher improve the strategies and raise the proportion of students earning an A to 90% in the following year.

Affective outcomes

Affective outcomes of mastery are mainly related to the sense of self-efficacy and confidence in the learners. Bloom argues that when society (through the education system) recognizes a learner's mastery, profound changes occur in his or her view of self and of the outer world. The learner starts believing that he or she is able to cope adequately with problems, has higher motivation for learning the subject at a higher level of expertise, and enjoys a better mental state due to less feeling of frustration. Finally, it is argued that in a modern society in which lifelong learning is a necessity, mastery learning can develop a lifelong interest and motivation in learning.

Personalized System of Instruction (PSI)

Personalized System of Instruction, also known as the Keller Plan, was developed in the mid-1960s by Fred Keller and colleagues. It was developed based on the idea of reinforcement in teaching processes.

Keller gives the following description to a group of psychology students enrolled in his course developed using mastery learning theory: "This is a course through which you may move, from start to finish, at your own pace. You will not be held back by other students or forced to go ahead until you are ready. At best, you may meet all the course requirements in less than one semester; at worst, you may not complete the job within that time. How fast you go is up to you" (Keller, 1968, pp. 80-81).

Five elements of PSI

There are five main elements in PSI as described in Keller's paper from 1967:

  1. "The go-at-your-own-pace feature, which permits a student to move through the course at a speed commensurate with his ability and other demands upon his time.
  2. The unit-perfection requirement for advance, which lets the student go ahead to new material only after demonstrating mastery of that which preceded.
  3. The use of lectures and demonstrations as vehicles of motivation, rather than sources of critical information.
  4. The related stress upon the written word in teacher-student communication.
  5. The use of proctors, which permits repeated testing, immediate scoring, almost unavoidable tutoring, and a marked enhancement of the personal-social aspect of the educational process".

Assessment

In a mastery learning environment, the teacher directs a variety of group-based instructional techniques, with frequent and specific feedback by using diagnostic, formative tests, as well as regularly correcting mistakes students make along their learning path. Assessment in the mastery learning classroom is not used as a measure of accountability but rather as a source of evidence to guide future instruction. A teacher using the mastery approach will use the evidence generated from his or her assessment to modify activities to best serve each student. Teachers evaluate students with criterion-referenced tests rather than norm-referenced tests. In this sense, students are not competing against each other, but rather competing against themselves in order to achieve a personal best.

Criticism

Time-achievement equality dilemma

The goal of mastery learning is to have all students reach a prescribed level of mastery (e.g., 80–90% on a test). In order to achieve this, some students will require more time than others, either in practice or instruction, to achieve success. The Time-Achievement Equality Dilemma refers to this relationship between time and achievement in the context of individual differences. If achievement is held constant, time will need to vary. If time is held constant (as with modern learning models), achievement will vary. According to its critics, mastery theory doesn't accurately address this relationship.

Bloom's original theory assumes that with practice the slower learners will become faster learners and the gap in individual differences will disappear. Bloom believes these differences in learning pace occur because of a lack of prerequisite knowledge, and that if all children had the same prerequisite knowledge, learning would progress at the same rate. He places the blame on teaching settings where students are not given enough time to reach mastery of prerequisite knowledge before moving on to the new lesson. He also uses this to explain why variance in student learning is smaller in the first grade than in the 7th grade (the smart get smarter, and the slower fall further behind). He referred to the point at which these differences in learning rate would disappear as the Vanishing Point.

A four-year longitudinal study by Arlin (1984) found no indication of a vanishing point in students who learned arithmetic through a mastery approach. Students who required extra assistance to learn material in the first year of the study required roughly the same amount of additional instruction in the fourth year. Contrary to Bloom's predictions, individual differences in learning rates appear to be influenced by more than just the method of instruction.

Methodology errors in research

Experimental vs. control groups

In studies investigating the effectiveness of mastery learning, the control and experimental groups were not always valid. Experimental groups typically consisted of courses developed to adhere to the best principles of mastery, but control groups were sometimes simply existing classes used for comparison. This poses a problem because the quality of the control instruction was never established: a poorly constructed course could have been compared against a strictly designed mastery course.

Measurement tools

In studies where the largest effect sizes were found, experimenter-made tests were used to test the mastery levels of students in the experiments. By using tests designed for the experiment, the mastery instruction intervention may have been able to better tailor the learning goals of the class to align with the measurement tool. Conversely, these dramatic effect sizes essentially disappeared when standardized tests were used to measure mastery levels in control and experimental groups.
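
The effect sizes discussed here are typically standardized mean differences, such as Cohen's d. The sketch below shows how such an effect size is computed from experimental and control scores; the score lists are made up solely to demonstrate the calculation.

# Cohen's d: standardized difference between experimental and control means.
# The score lists are invented purely to show the computation.
from statistics import mean, stdev

experimental = [82, 88, 91, 79, 85, 90]  # e.g. a mastery-learning class
control      = [74, 80, 77, 70, 79, 76]  # e.g. a conventional class

def cohens_d(a, b):
    # Pooled standard deviation of the two groups.
    na, nb = len(a), len(b)
    pooled_sd = (((na - 1) * stdev(a) ** 2 + (nb - 1) * stdev(b) ** 2)
                 / (na + nb - 2)) ** 0.5
    return (mean(a) - mean(b)) / pooled_sd

print(round(cohens_d(experimental, control), 2))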

Study duration

Very few studies investigate the long-term effects of mastery learning. Many studies used an arbitrary three-to-four-week intervention period, with results based on findings from that period alone. It is important to consider how long students were immersed in a mastery learning program in order to understand the long-term effects of this teaching strategy.

General concerns and opinions

Typical mastery programs involve providing class instruction and then testing with reliable tools (e.g. a multiple-choice unit test). This format of learning may only be beneficial to learners who are interested in surface rather than deep processing of information. This contradicts many contemporary learning approaches, which focus less on direct assessment of knowledge and more on creating meaningful applications and interpretations of the knowledge gained (see Constructivism (philosophy of education)).

The Chicago Mastery Learning Reading program was criticized for its focus on testing. One concern was that children were being taught to pass tests without attention to enduring skills, and how long the tested skills were retained was questioned. A love of reading was not promoted, and students rarely read books or stories. Student failure was built into the program's design: a score of 80% was required on 80% of the tests to pass, which produced high rates of students being held back. Ultimately, the program was not practical to implement.

The value of having all children achieve mastery also calls into question our general view of success. If the goal of education became having all children reach expert levels, grades would become much less varied: a high school graduating class might, in theory, all have grades above 90%. Universities would then have to select from a pool of applicants with very similar grades; how would admission requirements change to account for such uniform ratings? Would the time it took to reach mastery become a new measure of success? These questions about the wider implications of mastery as a new standard raise discussion about its actual value.

Mastery learning today

Mastery learning has been one of the most highly investigated teaching methods of the past 50 years. While it has been the subject of considerable criticism, it has also been found to succeed resoundingly when implemented correctly. A meta-analysis by Guskey & Pigott (1988) looked at 46 studies that implemented group-based mastery learning classrooms. Results showed consistently positive effects for a number of variables, including "student achievement, retention of learned material, involvement in learning activities, and student affect". However, notable variation was found in student achievement, believed to be due mainly to the subject being taught: courses such as science, probability, and social studies yielded the most consistent positive results, while results in other subjects were more varied.

Another large-scale meta-analysis, conducted by Kulik et al. (1990), investigated 108 studies of mastery programs implemented at the elementary, secondary, and post-secondary levels. Results revealed positive effects in favour of these teaching strategies, with students also reporting positive attitudes toward this style of learning. This study also found mastery programs to be most effective for weaker students.

Despite the empirical evidence, many mastery programs in schools have been replaced by more traditional forms of instruction due to the level of commitment required by the teacher and the difficulty in managing the classroom when each student is following an individual course of learning. However, the central tenets of mastery learning are still found in today's teaching strategies such as differentiated instruction and understanding by design.

Researchers at Northwestern University led by Drs. Diane Wayne, Jeff Barsuk and William McGaghie pioneered the use of mastery learning in the health professions. In 2006 they investigated mastery learning vs. traditional medical education in advanced cardiac life support techniques and showed that internal medicine resident trainees significantly improved adherence to American Heart Association protocols after mastery training. Subsequent investigations showed improved patient care practices as a result of this rigorous education including reduced patient complications and healthcare costs. These effects on patient care were seen in operating rooms, cardiac catheterization lab, intensive care units and patient floors at a large urban teaching hospital in Chicago. Further study also involved communication skills such as breaking bad news and end of life discussions, and patient self-management skills. In 2020 the Northwestern group published an important textbook entitled Mastery Learning in Health Professions Education. The approach designed by Northwestern investigators is currently in use at other health care institutions and medical schools throughout the US and the world.

In 2012, Jonathan Bergmann and Aaron Sams published the book Flip Your Classroom, Reach Every Student in Every Class Every Day. The second half of the book was dedicated to how to implement what they called the Flipped-Mastery Model. They merged mastery learning with flipped learning and saw significant results. The book has spurred many teachers across the world to adopt the Flipped-Mastery approach. Bergmann and Sams show that the logistical problems associated with setting up a mastery learning program are now solved by technology. If teachers have to deliver direct instruction, this can be time-shifted with either an instructional video or a flipped-reading assignment. The issue of multiple assessments is also solved by programs that allow for testing to be much more seamless and less burdensome.

 

Anencephaly

From Wikipedia, the free encyclopedia
 
Anencephaly
Illustration of an anencephalic fetus
Specialty: Medical genetics; pediatrics
Symptoms: Absence of the cerebrum and cerebellum
Risk factors: Folic acid deficiency
Prevention: Mother taking enough folic acid
Prognosis: Death typically occurs within hours to days after birth

Anencephaly is the absence of a major portion of the brain, skull, and scalp that occurs during embryonic development. It is a cephalic disorder that results from a neural tube defect that occurs when the rostral (head) end of the neural tube fails to close, usually between the 23rd and 26th day following conception. Strictly speaking, the Greek term translates as "without a brain" (or totally lacking the inside part of the head), but it is accepted that children born with this disorder usually only lack a telencephalon, the largest part of the brain consisting mainly of the cerebral hemispheres, including the neocortex, which is responsible for cognition. The remaining structure is usually covered only by a thin layer of membrane—skin, bone, meninges, etc., are all lacking. With very few exceptions, infants with this disorder do not survive longer than a few hours or days after birth.

Signs and symptoms

The National Institute of Neurological Disorders and Stroke (NINDS) describes the presentation of this condition as follows: "A baby born with anencephaly is usually blind, deaf, unaware of its surroundings and unable to feel pain. Although some individuals with anencephaly may be born with a main brain stem, the lack of a functioning cerebrum permanently rules out the possibility of ever gaining awareness of their surroundings. Reflex actions such as breathing and responses to sound or touch may occur."

Due to the presence of the brainstem, children with anencephaly have almost all the primitive reflexes of a newborn, responding to auditory, vestibular and painful stimuli. This means that the child can move, smile, suckle and breathe without the aid of devices.

A side view of an anencephalic fetus
 
A front view of an anencephalic fetus
 
X-ray of an anencephalic stillborn fetus

Causes

Folic acid has been shown to be important in neural tube formation since at least 1991, and, because anencephaly is a type of neural tube defect, folic acid deficiency may play a role in its occurrence. Studies have shown that adding folic acid to the diet of women of child-bearing age may significantly reduce, although not eliminate, the incidence of neural tube defects. Therefore, it is recommended that all women of child-bearing age consume 0.4 mg of folic acid daily, especially those attempting to conceive or who may possibly conceive, as this can reduce the risk to 0.03%. It is not advisable to wait until pregnancy has begun, since by the time a woman knows she is pregnant the critical window for the formation of a neural tube defect has usually already passed. A physician may prescribe even higher doses of folic acid (5 mg/day) for women who have had a previous pregnancy with a neural tube defect.

Neural tube defects can follow patterns of heredity, with direct evidence of autosomal recessive inheritance. As reported by Bruno Reversade and colleagues, the homozygous inactivation of the NUAK2 kinase leads to anencephaly in humans. Animal models indicate a possible association with deficiencies of the transcription factor TEAD2. Studies show that a woman who has had one child with a neural tube defect such as anencephaly has about a 3% risk of having another child with a neural tube defect, as opposed to the background rate of 0.1% occurrence in the population at large. Genetic counseling is usually offered to women at a higher risk of having a child with a neural tube defect to discuss available testing.
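
The figures quoted in the two paragraphs above can be put in relative terms by simple arithmetic: a 3% recurrence risk against a 0.1% background rate is roughly a 30-fold increase, and a risk of 0.03% with adequate folic acid corresponds to roughly a 70% relative reduction from the background rate. A small sketch of that arithmetic, using only the numbers stated above:

# Arithmetic on the risk figures quoted above.
background_risk = 0.001    # 0.1% background rate of neural tube defects
recurrence_risk = 0.03     # 3% risk after one affected pregnancy
risk_with_folate = 0.0003  # 0.03% with recommended folic acid intake

relative_risk = recurrence_risk / background_risk            # ~30x
relative_reduction = 1 - risk_with_folate / background_risk  # ~70%

print(f"recurrence is ~{relative_risk:.0f}x the background rate")
print(f"folic acid corresponds to ~{relative_reduction:.0%} relative risk reduction")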

An infant with anencephaly and acrania

It is known that people taking certain anticonvulsants and people with insulin-dependent diabetes have a higher risk of having a child with a neural tube defect.

Relation to genetic ciliopathy

Until recently, medical literature did not indicate a connection among many genetic disorders, both genetic syndromes and genetic diseases, that are now being found to be related. As a result of new genetic research, some of these are, in fact, highly related in their root cause despite the widely varying set of medical symptoms that are clinically visible in the disorders. Anencephaly is one such disease, part of an emerging class of diseases called ciliopathies. The underlying cause may be a dysfunctional molecular mechanism in the primary cilia structures of the cell, organelles present in many cellular types throughout the human body. The cilia defects adversely affect "numerous critical developmental signaling pathways" essential to cellular development and, thus, offer a plausible hypothesis for the often multi-symptom nature of a large set of syndromes and diseases. Known ciliopathies include primary ciliary dyskinesia, Bardet–Biedl syndrome, polycystic kidney and liver disease, nephronophthisis, Alström syndrome, Meckel–Gruber syndrome, and some forms of retinal degeneration.

Diagnosis

Ultrasound image of fetus with anencephaly

Anencephaly can often be diagnosed before birth through an ultrasound examination. The maternal serum alpha-fetoprotein (AFP screening) and detailed fetal ultrasound can be useful for screening for neural tube defects such as spina bifida or anencephaly.

Meroanencephaly

Meroanencephaly is a rare form of anencephaly characterized by malformed cranial bones, a median cranial defect, and a cranial protrusion called area cerebrovasculosa. Area cerebrovasculosa is a section of abnormal, spongy, vascular tissue admixed with glial tissue ranging from simply a membrane to a large mass of connective tissue, hemorrhagic vascular channels, glial nodules, and disorganized choroid plexuses.

Holoanencephaly

Holoanencephaly is the most common type of anencephaly, in which the brain has entirely failed to form except for the brainstem. Infants rarely survive more than one day after birth with holoanencephaly.

Craniorachischisis

Craniorachischisis is the most severe type of anencephaly, in which area cerebrovasculosa and area medullovasculosa fill both the cranial defect and the spinal column. It is characterized by anencephaly accompanied by bony defects in the spine and the exposure of neural tissue as the vault of the skull fails to form. Craniorachischisis occurs in about 1 of every 1,000 live births, but various physical and chemical tests can detect failure of neural tube closure during early pregnancy.

Prognosis

There is no cure or standard treatment for anencephaly. Prognosis is extremely poor, as many anencephalic fetuses do not survive birth and infants that are not stillborn will usually die within a few hours or days after birth from cardiorespiratory arrest.

Epidemiology

In the United States, anencephaly occurs in about 1 out of every 10,000 births. Rates may be higher in Africa, with rates in Nigeria estimated at 3 per 10,000 in 1990 and in Ghana at 8 per 10,000 in 1992. Rates in China are estimated at 5 per 10,000.

Research has suggested that, overall, female babies are more likely to be affected by the disorder.

Ethical issues

Organ donation

One issue concerning anencephalic newborns is organ donation. Initial legal guidance came from the case of Baby Theresa in 1992, in which the boundaries of organ donation were tested for the first time. Infant organs are scarce, and the high demand for pediatric organ transplants poses a major public health issue. In 1999, it was found that for American children under the age of two who are waiting for a transplant, 30–50% die before an organ becomes available.

Within the medical community, the main ethical objections to organ donation from anencephalic infants are the risk of misdiagnosing anencephaly, the slippery-slope argument, the expectation that anencephalic neonates would rarely be a usable source of organs, and the concern that it would undermine confidence in organ transplantation. Slippery-slope concerns are a major issue in personhood debates across the board. With regard to anencephaly, those who oppose organ donation argue that it could open the door to involuntary organ donors, such as an elderly person with severe dementia. Another point of contention is the number of children who would actually benefit. There are discrepancies in the statistics; however, it is known that most anencephalic children are stillborn.

Proposals have been made to bypass the legal and ethical issues surrounding organ donation. These include waiting for death to occur before procuring organs, expanding the definition of death, creating a special legal category for anencephalic infants, and defining them as non-persons.

In the United Kingdom, a child born with anencephaly was reported as the country's youngest organ donor. Teddy Houlston was diagnosed as anencephalic at 12 weeks of gestation. His parents, Jess Evans and Mike Houlston, decided against abortion and instead proposed organ donation. Teddy was born on 22 April 2014, in Cardiff, Wales, and lived for 100 minutes, after which his heart and kidneys were removed. His kidneys were later transplanted into an adult in Leeds. Teddy's twin, Noah, was born healthy.

Brain death

There are four different concepts used to determine death: failure of the heart, failure of the lungs, whole-brain death, and neocortical death.

Neocortical death, similar to a persistent vegetative state (PVS), involves loss of cognitive functioning of the brain. A proposal by law professor David Randolph Smith, in an attempt to prove that neocortical death should legally be treated the same as brain death, involved PET scans to determine the similarities. However, this proposal has been criticized on the basis that confirming neocortical death by PET scan may risk indeterminacy.

Pregnancy termination

Anencephaly can be diagnosed before delivery with a high degree of accuracy. Although anencephaly is a fatal condition, the option of abortion depends on the abortion laws of the jurisdiction. According to a 2013 report, 26% of the world's population resides in a country where abortion is generally prohibited. In 2012, Brazil extended the right of abortion to mothers carrying anencephalic fetuses. The decision, however, drew strong disapproval from several religious groups.

Legal proceedings

The case of baby Theresa was the beginning of the ethical debate over anencephalic infant organ donation. The story of baby Theresa remains a focus of basic moral philosophy. Baby Theresa was born with anencephaly in 1992. Her parents, knowing that their child was going to die, requested that her organs be given for transplantation. Although her physicians agreed, Florida law prohibited the infant's organs from being removed while she was still alive. By the time she died nine days after birth, her organs had deteriorated past the point of being viable.

United States uniform acts

The Uniform Determination of Death Act (UDDA) is a model bill, adopted by many US states, stating that an individual who has sustained either 1) irreversible cessation of circulatory and respiratory functions or 2) irreversible cessation of all functions of the entire brain, including the brain stem, is dead. This bill was a result of much debate over the definition of death and is applicable to the debate over anencephaly. A related bill, the Uniform Anatomical Gift Act (UAGA), grants individuals and, after death, their family members the right to decide whether or not to donate organs. Because it is against the law for any person to pay money for an organ, the person in need of an organ transplant must rely on a volunteer.

There have been two state bills that proposed to change current laws regarding death and organ donation. California Senate Bill 2018 proposed to amend the UDDA to define anencephalic infants as already dead, while New Jersey Assembly Bill 3367 proposed to allow anencephalic infants to be organ sources even if they are not dead.

Research

Some genetic research has been conducted to determine the causes of anencephaly. It has been found that cartilage homeoprotein (CART1) is selectively expressed in chondrocytes (cartilage cells), and the CART1 gene has been mapped to chromosome 12q21.3–q22. It has also been found that mice homozygous for a deficiency in the Cart1 gene manifest acrania and meroanencephaly, and that prenatal treatment with folic acid suppresses acrania and meroanencephaly in the Cart1-deficient mutants.

Khan Academy

From Wikipedia, the free encyclopedia
 
Khan Academy
Type of site: 501(c)(3) nonprofit organization
Available in: Multiple languages
Owner: Khan Academy, Inc.
Founder(s): Salman Khan
URL: khanacademy.org
Launched: 2008
Sal Khan presenting during TED 2011

Khan Academy is an American non-profit educational organization created in 2008 by Salman Khan. Its goal is creating a set of online tools that help educate students. The organization produces short lessons in the form of videos. Its website also includes supplementary practice exercises and materials for educators. It has produced over 8,000 video lessons teaching a wide spectrum of academic subjects, originally focusing on mathematics and sciences. All resources are available for free to users of the website and application.

As of 2018, over 70 million people use Khan Academy, of whom 2.3 million use it to prepare for the SAT. As of November 2022, the Khan Academy channel on YouTube has 7.59 million subscribers and Khan Academy videos have been viewed over 2 billion times.

History

Starting in 2004, Salman "Sal" Khan began tutoring one of his cousins in mathematics on the Internet using a service called Yahoo! Doodle Images. After a while, Khan's other cousins began to use his tutoring service. Due to the demand, Khan decided to make his videos watchable on the Internet, so he published his content on YouTube. Later, he used a drawing application called SmoothDraw, and now uses a Wacom tablet to draw using ArtRage. The video tutorials were recorded on his computer.

Positive responses prompted Khan to incorporate Khan Academy in 2008 and to quit his job in 2009, to focus full-time on creating educational tutorials (then released under the name Khan Academy). Khan Lab School, a school founded by Sal Khan and associated with Khan Academy, opened on September 15, 2014, in Mountain View, California. In June 2017, Khan Academy officially launched the Financial Literacy Video Series for college graduates, jobseekers and young professionals.

Funding

Khan Academy is a 501(c)(3) nonprofit organization, mostly funded by donations coming from philanthropic organizations. On its IRS form 990, the organization reported $31 million in revenues in 2018 and $28 million in 2019, including $839,000 in 2019 compensation for Khan as CEO.

In 2010, Google donated $2 million for creating new courses and translating content into other languages as part of its Project 10100 program. In 2013, Carlos Slim of the Luis Alcazar Foundation in Mexico made a donation to create Spanish versions of videos. In 2015, AT&T contributed $2.25 million to Khan Academy for mobile versions of the content accessible through apps. The Bill & Melinda Gates Foundation has donated $1.5 million to Khan Academy. On January 11, 2021, Elon Musk donated $5 million through the Musk Foundation.

Content

Khan Academy's website aims to provide a free personalized learning experience built on the videos hosted on YouTube. The website is meant to be used as a supplement to the videos, adding features such as progress tracking, practice exercises, and teaching tools. The material can also be accessed through mobile applications.

The videos display a recording of drawings on an electronic blackboard, similar in style to a teacher giving a lecture. The narrator describes each drawing and how it relates to the material being taught. Throughout the lessons, users can earn badges and energy points, which can be displayed on their profiles. Non-profit groups have distributed offline versions of the videos to rural areas in Asia, Latin America, and Africa. The videos cover all subjects taught in school, for all grades from kindergarten through high school. The Khan Academy website also hosts content from educational YouTube channels and organizations such as Crash Course and the Museum of Modern Art. It provides online courses for preparing for standardized tests, including the SAT, AP Chemistry, Praxis Core, and the MCAT, and released LSAT preparation lessons in 2018. Khan Academy also collaborates with independent chemists, who are featured in its "Meet the chemistry professional" content, and has supported Code.org's Hour of Code, providing coding lessons on its website.

In July 2017, Khan Academy became the official practice partner for the College Board's Advanced Placement.

Language availability

Khan Academy videos have been translated into several languages, with close to 20,000 subtitle translations available. These translations are mainly volunteer-driven, with help from international partnerships. The Khan Academy platform is fully available in English (en), Bangla (bn), Bulgarian (bg), Chinese (zh), French (fr), German (de), Georgian (ka), Norwegian (nb), Polish (pl), Portuguese (pt), Spanish (es), Serbian (sr), Turkish (tr), and Uzbek (uz), and partially available in 28 other languages.

Official SAT preparation

Since 2015, Khan Academy has been the official SAT preparation website. According to reports, studying for the SAT for 20 hours on Khan Academy is associated with an average score increase of 115 points. Many published practice books also draw their questions from the Khan Academy site.

Pixar in a Box

In 2015, Khan Academy teamed up with Pixar to create a new course named Pixar in a Box, which teaches how skills learned in school help the creators at Pixar.

Official Test Preparation

Khan Academy also provides free test preparation for the SAT, LSAT, and MCAT.

Khan Academy Kids

In 2018, Khan Academy created an application called Khan Academy Kids. It is used by two- to six-year-old children to learn basic skills (primarily mathematics and language arts) before progressing to grade school.

Teachers

Teachers can set up a classroom within Khan Academy. The classroom allows teachers to assign courses from Khan Academy's database to their students and to track their students' progress as they work through the assigned tutorials.

Criticism

Khan Academy has been criticized because its creator, Sal Khan, lacks a formal background or qualifications in pedagogy. Statements made in certain mathematics and physics videos have been questioned for their technical accuracy. In response to these criticisms, the organization has corrected errors in its videos, expanded its faculty and formed a network of over 200 content experts.

In an interview from January 2016, Khan defended the value of Khan Academy online lectures while acknowledging their limitations: "I think they're valuable, but I'd never say they somehow constitute a complete education." Khan Academy positions itself as a supplement to in-class learning, with the ability to improve the effectiveness of teachers by freeing them from traditional lectures and giving them more time to tend to individual students' needs.

Recognition

Khan Academy has gained recognition both in the US and internationally:

  • In April 2012, Khan was listed among TIME's 100 Most Influential People for 2012.
  • In 2012, Khan Academy won a Webby Award in the category Websites and Mobile Sites, Education.
  • Khan was one of five winners of the 2013 Heinz Award. His award was in the area of "Human Condition."
  • In 2016, Khan Academy won a Shorty Award for Best in Education.

Outcome-based education

From Wikipedia, the free encyclopedia
 
A High School class in Cape Town, South Africa

Outcome-based education or outcomes-based education (OBE) is an educational theory that bases each part of an educational system around goals (outcomes). By the end of the educational experience, each student should have achieved the goal. There is no single specified style of teaching or assessment in OBE; instead, classes, opportunities, and assessments should all help students achieve the specified outcomes. The role of the faculty adapts into instructor, trainer, facilitator, and/or mentor based on the outcomes targeted.

Outcome-based methods have been adopted in education systems around the world, at multiple levels. Australia and South Africa adopted OBE policies from the 1990s to the mid-2000s, but these were abandoned in the face of substantial community opposition. The United States has had an OBE program in place since 1994 that has been adapted over the years. In 2005, Hong Kong adopted an outcome-based approach for its universities. Malaysia implemented OBE in all of its public school systems in 2008. The European Union has proposed an education shift to focus on outcomes across the EU. In an international effort to accept OBE, the Washington Accord was created in 1989; it is an agreement to accept undergraduate engineering degrees obtained using OBE methods. As of 2017, the full signatories are Australia, Canada, Taiwan, Hong Kong, India, Ireland, Japan, Korea, Malaysia, New Zealand, Russia, Singapore, South Africa, Sri Lanka, Turkey, the United Kingdom, Pakistan, China, and the United States.

Differences from traditional education methods

OBE can primarily be distinguished from traditional education methods by the way it incorporates three elements: a theory of education, a systematic structure for education, and a specific approach to instructional practice. It organizes the entire educational system around what is considered essential for learners to be able to do successfully at the end of their learning experiences. In this model, the term "outcome" is the core concept and is sometimes used interchangeably with the terms "competency", "standards", "benchmarks", and "attainment targets". OBE also uses the same methodology formally and informally adopted in the actual workplace to achieve outcomes. It focuses on the following skills when developing curricula and outcomes:

  • Life skills;
  • Basic skills;
  • Professional and vocational skills;
  • Intellectual skills;
  • Interpersonal and personal skills.

In a traditional education system, students are given grades and rankings compared to each other. Content and performance expectations are based primarily on what was taught in the past to students of a given age. The goal of this education was to present the knowledge and skills of an older generation to the new generation of students, and to provide students with an environment in which to learn. The process paid little attention (beyond the classroom teacher) to whether or not students learned any of the material.

Benefits of OBE

Clarity

The focus on outcomes creates a clear expectation of what needs to be accomplished by the end of the course. Students will understand what is expected of them and teachers will know what they need to teach during the course. Clarity is important over years of schooling and when team teaching is involved. Each team member, or year in school, will have a clear understanding of what needs to be accomplished in each class, or at each level, allowing students to progress. Those designing and planning the curriculum are expected to work backwards once an outcome has been decided upon; they must determine what knowledge and skills will be required to reach the outcome.

Flexibility

With a clear sense of what needs to be accomplished, instructors are able to structure their lessons around the students' needs. OBE does not prescribe a particular method of instruction, leaving instructors free to teach their students using any method. Instructors can also recognize diversity among students by using varied teaching and assessment techniques during their class. OBE is meant to be a student-centered learning model: teachers are meant to guide and help students understand the material in any way necessary, and study guides and group work are some of the methods instructors can use to facilitate learning.

Comparison

OBE can be compared across different institutions. On an individual level, institutions can look at what outcomes a student has achieved to decide what level the student would be at within a new institution. On an institutional level, institutions can compare themselves, by checking to see what outcomes they have in common, and find places where they may need improvement, based on the achievement of outcomes at other institutions. The ability to compare easily across institutions allows students to move between institutions with relative ease. The institutions can compare outcomes to determine what credits to award the student. The clearly articulated outcomes should allow institutions to assess the student’s achievements rapidly, leading to increased movement of students. These outcomes also work for school to work transitions. A potential employer can look at records of the potential employee to determine what outcomes they have achieved. They can then determine if the potential employee has the skills necessary for the job.
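
As a concrete, simplified illustration of this kind of comparison, the outcomes a student has achieved can be treated as a set and checked against the outcomes another institution attaches to its courses. The outcome names, course codes, and the rule that fully covered courses earn transfer credit are assumptions made up for this sketch, not part of any OBE standard.

# Simplified sketch: comparing achieved outcomes against another
# institution's required outcomes. All names and the transfer rule
# are illustrative assumptions.

student_outcomes = {"written communication", "basic statistics", "lab safety"}

receiving_institution = {
    "STAT-101": {"basic statistics"},
    "CHEM-110": {"lab safety", "stoichiometry"},
    "ENGL-100": {"written communication"},
}

# Credit is granted where the student already demonstrates every outcome
# attached to a course (illustrative transfer rule).
credited = [course for course, required in receiving_institution.items()
            if required <= student_outcomes]

# Gaps show what the student would still need at the new institution.
gaps = {course: required - student_outcomes
        for course, required in receiving_institution.items()
        if not required <= student_outcomes}

print(credited)  # ['STAT-101', 'ENGL-100']
print(gaps)      # {'CHEM-110': {'stoichiometry'}}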

Involvement

Student involvement in the classroom is a key part of OBE. Students are expected to do their own learning so that they gain a full understanding of the material. Increased student involvement allows students to feel responsible for their own learning, and they should learn more through this individual learning. Other aspects of involvement are parental and community involvement in developing the curriculum or making changes to it. OBE outcomes are meant to be decided upon within a school system or at a local level. Parents and community members are asked to give input in order to uphold the standards of education within a community and to ensure that students will be prepared for life after school.

Drawbacks of OBE

Definition

The definitions of the outcomes decided upon are subject to interpretation by those implementing them. Across different programs, or even different instructors, outcomes could be interpreted differently, leading to differences in education even though the same outcomes were said to be achieved. By outlining specific outcomes, a holistic approach to learning is lost; learning can find itself reduced to something that is specific, measurable, and observable. As a result, outcomes are not yet widely recognized as a valid way of conceptualizing what learning is about.

Assessment problems

When determining whether an outcome has been achieved, assessments may become too mechanical, checking only whether the student has acquired the knowledge. The ability to use and apply the knowledge in different ways may not be the focus of the assessment. The focus on determining whether the outcome has been achieved can lead to a loss of understanding and learning for students, who may never be shown how to use the knowledge they have gained. Instructors are faced with a challenge: they must learn to manage an environment that can become fundamentally different from what they are accustomed to. With regard to assessment, they must be willing to put in the time required to create valid, reliable assessments that ideally allow students to demonstrate their understanding of the information while remaining objective.

Generality

Education outcomes can lead to a constrained approach to teaching and assessment. Assessing liberal outcomes such as creativity, respect for self and others, responsibility, and self-sufficiency can become problematic: there is no measurable, observable, or specific way to determine whether a student has achieved them. Because outcomes must be specific, OBE may actually work against its ideal of producing individuals who have achieved many outcomes.

Involvement

Parental involvement, as discussed in the benefits section, can also be a drawback: if parents and community members are not willing to express their opinions on the quality of the education system, the system may not see a need for improvement and may not change to meet students' needs. Parents may also become too involved, requesting too many changes, so that important improvements get lost among the other changes being suggested. Instructors will also find that their work is increased: they must first understand each outcome and then build a curriculum around each outcome they are required to meet. Instructors have found that implementing multiple outcomes equally is difficult, especially in primary school. Instructors will also find their workload increased if they choose to use an assessment method that evaluates students holistically.

Adoption and removal

Australia

In the early 1990s, all states and territories in Australia developed intended curriculum documents largely based on OBE for their primary and secondary schools. Criticism arose shortly after implementation. Critics argued that no evidence existed that OBE could be implemented successfully on a large scale, in either the United States or Australia. An evaluation of Australian schools found that implementing OBE was difficult. Teachers felt overwhelmed by the amount of expected achievement outcomes. Educators believed that the curriculum outcomes did not attend to the needs of the students or teachers. Critics felt that too many expected outcomes left students with shallow understanding of the material. Many of Australia’s current education policies have moved away from OBE and towards a focus on fully understanding the essential content, rather than learning more content with less understanding.

Western Australia

Officially, an agenda to implement outcomes-based education took place between 1992 and 2008 in Western Australia. Dissatisfaction with OBE escalated from 2004, when the government proposed the implementation of an alternative assessment system using OBE 'levels' for years 11 and 12. With government school teachers not permitted to publicly express dissatisfaction with the new system, a community lobby group called PLATO was formed in June 2004 by high school science teacher Marko Vojkavi. Teachers anonymously expressed their views through the website and online forums; the website quickly became one of the most widely read educational websites in Australia, with more than 180,000 hits per month and an archive of more than 10,000 articles on the subject of OBE implementation. In 2008 OBE was officially abandoned by the state government, with Minister for Education Mark McGowan remarking that the 1990s fad "to dispense with syllabus" was over.

European Union

In December 2012, the European Commission presented a new strategy to decrease the youth unemployment rate, which at the time was close to 23% across the European Union. The European Qualifications Framework calls for a shift towards learning outcomes in primary and secondary schools throughout the EU. Students are expected to learn the skills they will need when they complete their education. It also calls for lessons to have a stronger link to employment through work-based learning (WBL), and for work-based learning to lead to recognition of vocational training for these students. The program also sets goals for learning foreign languages and for teachers' continuing education, and highlights the importance of using technology, especially the internet, in learning to make it relevant to students.

Hong Kong

Hong Kong's University Grants Committee adopted an outcomes-based approach to teaching and learning in 2005. No specific approach was created, leaving universities to design it themselves. Universities were also given the goal of ensuring an education for their students that contributes to social and economic development, as defined by the community in which the university resides. With little to no direction or feedback from outside, universities will have to determine on their own whether their approach is achieving its goals.

Malaysia

OBE has been practiced in Malaysia since the 1950s; however, as of 2008, OBE is being implemented at all levels of education, especially tertiary education. This change is a result of the belief that the education system used prior to OBE inadequately prepared graduates for life outside of school. The Ministry of Higher Education has pushed for this change because of the number of unemployed graduates: findings in 2006 indicated that nearly 70% of graduates from public universities were considered unemployed, and a further study of those graduates found that they felt they lacked job experience, communication skills, and qualifications relevant to the current job market. The Malaysian Qualifications Agency (MQA) was created to oversee the quality of education and to ensure outcomes are being reached. The MQA created a framework that includes eight levels of qualification within higher education, covering three sectors: skills; vocational and technical; and academic. Along with meeting the standards set by the MQA, universities set and monitor their own outcome expectations for students.

South Africa

OBE was introduced to South Africa in the late 1990s by the post-apartheid government as part of its Curriculum 2005 program. Initial support for the program derived from anti-apartheid education policies. The policy also gained support from the labor movement, which had critiqued the apartheid education system and which borrowed ideas about competency-based and vocational education from New Zealand and Australia. With no strong alternative proposals, the idea of outcome-based education, and a national qualification framework, became the policy of the African National Congress government. This policy was seen as a democratization of education: people would have a say in what they wanted the outcomes of education to be. It was also believed to be a way to raise education standards and increase the availability of education. The National Qualifications Framework (NQF) went into effect in 1997. By 2001 it had become clear that the intended effects were not being seen. By 2006 no proposals to change the system had been accepted by the government, leaving the program in hiatus. The program came to be viewed as a failure, and a new curriculum improvement process was announced in 2010, slated to be implemented between 2012 and 2014.

United States

In 1983, a report from the National Commission on Excellence in Education declared that American education standards were eroding, that young people in the United States were not learning enough. In 1989, President Bush and the nation’s governors set national goals to be achieved by the year 2000. Goals 2000: Educate America Act was signed in March 1994. The goal of this new reform was to show that results were being achieved in schools. In 2001, the No Child Left Behind Act took the place of Goals 2000. It mandated certain measurements as a condition of receiving federal education funds. States are free to set their own standards, but the federal law mandates public reporting of math and reading test scores for disadvantaged demographic subgroups, including racial minorities, low-income students, and special education students. Various consequences for schools that do not make "adequate yearly progress" are included in the law. In 2010, President Obama proposed improvements for the program. In 2012, the U.S. Department of Education invited states to request flexibility waivers in exchange for rigorous plans designed to improve students' education in the state.

India

India became a permanent signatory member of the Washington Accord on 13 June 2014 and has started implementing OBE in higher technical education such as diploma and undergraduate programmes. The National Board of Accreditation, a body for promoting international quality standards for technical education in India, has accredited only programmes run with OBE since 2013.

The National Board of Accreditation mandates establishing a culture of outcome-based education in institutions that offer engineering, pharmacy, and management programs. Analysing outcomes, and using the analytical reports to find gaps and carry out continuous improvement, is an essential cultural shift from how these programs are run when an OBE culture is not embraced. Outcomes analysis requires a large amount of data to be processed and made available at any time, anywhere. Such scalable, accurate, automated, real-time data analysis is possible only if the institution adopts either a spreadsheet-based measurement system or some kind of home-grown or commercial software system. Spreadsheet-based measurement and analysis systems have been observed not to scale when stakeholders want to analyse longitudinal data.
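
As an illustration of the kind of computation such systems automate, the sketch below calculates per-outcome attainment as the fraction of students meeting a target score on the assessment items mapped to each outcome. The 60% target, the outcome labels, the item-to-outcome mapping, and the data are illustrative assumptions, not requirements of the National Board of Accreditation.

# Illustrative outcome-attainment calculation of the sort an OBE
# measurement system automates. Target score, outcome labels, and the
# item-to-outcome mapping are assumptions made up for this sketch.

TARGET = 0.60  # a student "attains" an outcome if their score is >= 60%

# Which assessment items map to which course outcome (illustrative).
item_to_outcome = {"Q1": "CO1", "Q2": "CO1", "Q3": "CO2"}

# Fraction of marks each student earned on each item (illustrative data).
scores = {
    "student_a": {"Q1": 0.8, "Q2": 0.7, "Q3": 0.5},
    "student_b": {"Q1": 0.4, "Q2": 0.9, "Q3": 0.9},
}

def attainment(scores, item_to_outcome, target):
    """Return, per outcome, the fraction of students whose average score
    on the items mapped to that outcome meets the target."""
    result = {}
    for co in set(item_to_outcome.values()):
        items = [i for i, o in item_to_outcome.items() if o == co]
        attained = 0
        for per_item in scores.values():
            avg = sum(per_item[i] for i in items) / len(items)
            attained += avg >= target
        result[co] = attained / len(scores)
    return result

print(attainment(scores, item_to_outcome, TARGET))
# e.g. {'CO1': 1.0, 'CO2': 0.5} (key order may vary)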

Thermodynamic diagrams

From Wikipedia, the free encyclopedia https://en.wikipedia.org/wiki/Thermodynamic_diagrams Thermodynamic diagrams are diagrams used to repr...