
Thursday, November 23, 2023

Drug interaction

From Wikipedia, the free encyclopedia
Grapefruit juice can act as an enzyme inhibitor, affecting the metabolism of drugs.

In pharmaceutical sciences, drug interactions occur when a drug's mechanism of action is affected by the concomitant administration of substances such as foods, beverages, or other drugs. A well-known example of a drug-food interaction is the effect of grapefruit on the metabolism of drugs.

Interactions may occur by simultaneous targeting of receptors, directly or indirectly. For example, both zolpidem and alcohol affect GABAA receptors, and their simultaneous consumption results in overstimulation of the receptor, which can lead to loss of consciousness. When two drugs affect each other, the interaction is called a drug-drug interaction (DDI). The risk of a DDI increases with the number of drugs used.

A large share of elderly people regularly use five or more medications or supplements, with a correspondingly increased risk of side-effects from drug-drug interactions.
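The arithmetic behind that risk is worth making explicit: with n concurrent drugs there are n(n-1)/2 possible pairs to screen for interactions, so the number of potential DDIs grows quadratically with the length of a medication list. A minimal Python sketch (illustrative only; clinical risk depends on which drugs are combined, not just how many):

    # Potential pairwise drug-drug interactions for n concurrent drugs.
    # Illustrative only: actual risk depends on which drugs are combined.
    from math import comb

    for n in (2, 5, 10):
        print(f"{n} drugs -> {comb(n, 2)} possible pairs")
    # 2 drugs -> 1, 5 drugs -> 10, 10 drugs -> 45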

Drug interactions can be of three kinds:

  • additive (the result is what you expect when you add together the effect of each drug taken independently),
  • synergistic (combining the drugs leads to a larger effect than expected), or
  • antagonistic (combining the drugs leads to a smaller effect than expected).

It may be difficult to distinguish between synergistic and additive interactions, as the individual effects of drugs may vary.
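To make the three categories concrete, here is a minimal sketch that classifies a two-drug combination against the Bliss independence reference model, in which two non-interacting drugs with fractional effects E_A and E_B are expected to combine to E_A + E_B - E_A*E_B; the tolerance below is an arbitrary illustrative choice, not a clinical standard:

    # Classify a two-drug interaction against Bliss independence.
    # Effects are fractional responses in [0, 1]; tol is illustrative.
    def classify_interaction(e_a, e_b, e_ab, tol=0.05):
        expected = e_a + e_b - e_a * e_b  # expected effect if non-interacting
        if e_ab > expected + tol:
            return "synergistic"
        if e_ab < expected - tol:
            return "antagonistic"
        return "additive"

    print(classify_interaction(0.30, 0.40, 0.58))  # expected 0.58 -> "additive"
    print(classify_interaction(0.30, 0.40, 0.80))  # above 0.58 -> "synergistic"

Note that Bliss independence is only one of several reference models (Loewe additivity is another), which is part of why additive and synergistic interactions are hard to tell apart in practice.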

Direct interactions between drugs are also possible and may occur when two drugs are mixed before intravenous injection. For example, mixing thiopentone and suxamethonium can lead to the precipitation of thiopentone.

Interactions based on pharmacodynamics

Pharmacodynamic interactions are the drug-drug interactions that occur at a biochemical level and depend mainly on the biological processes of organisms. These interactions occur due to action on the same targets, for example the same receptor or signaling pathway.

Effects of the competitive inhibition of an agonist by increases in the concentration of an antagonist. A drug's potency can be affected (the response curve shifted to the right) by the presence of an antagonistic interaction.

Pharmacodynamic interactions can occur at protein receptors. Two drugs can be considered homodynamic if they act on the same receptor. Homodynamic effects include drugs that act as (1) pure agonists, if they bind to the main locus of the receptor, causing an effect similar to that of the main drug; (2) partial agonists, if, on binding to a secondary site, they have the same effect as the main drug, but with a lower intensity; and (3) antagonists, if they bind directly to the receptor's main locus but their effect is opposite to that of the main drug. These may be competitive antagonists, if they compete with the main drug to bind with the receptor, or uncompetitive antagonists, when the antagonist binds to the receptor irreversibly. Drugs can be considered heterodynamic competitors if they act on distinct receptors with similar downstream pathways.
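The rightward shift described in the figure caption above can be sketched quantitatively with the Hill equation plus the Gaddum/Schild relationship, under which a competitive antagonist at concentration [B] with dissociation constant K_B multiplies the agonist's EC50 by the dose ratio (1 + [B]/K_B). All numbers are illustrative, not drug-specific:

    # Agonist response (Hill equation) with the EC50 shifted by a
    # competitive antagonist per the Gaddum/Schild dose ratio.
    def response(a, ec50, emax=1.0, n=1.0, b=0.0, kb=1.0):
        shifted_ec50 = ec50 * (1.0 + b / kb)  # rightward shift of the curve
        return emax * a**n / (shifted_ec50**n + a**n)

    print(response(10.0, ec50=10.0))         # 0.5: half-maximal, no antagonist
    print(response(10.0, ec50=10.0, b=9.0))  # ~0.09: same dose with antagonist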

The interaction may also occur via signal transduction mechanisms. For example, low blood glucose leads to a release of catecholamines, triggering warning symptoms that prompt the organism to take action, such as eating sugar. If a patient is on insulin, which reduces blood sugar, and also takes beta-blockers, which blunt this catecholamine response, the body is less able to cope with an insulin overdose.

Interactions based on pharmacokinetics

Pharmacokinetics is the field of research studying the chemical and biochemical factors that directly affect dosage and the half-life of drugs in an organism, including absorption, transport, distribution, metabolism and excretion. Compounds may affect any of those processes, ultimately interfering with the flux of drugs in the human body and increasing or reducing drug availability.
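As a minimal illustration of the half-life concept, a single dose in a one-compartment model with first-order elimination decays as C(t) = C0 * exp(-kt), with k = ln(2) / t_half; the values below are illustrative and not for any real drug:

    import math

    # One-compartment, first-order elimination: C(t) = C0 * exp(-k * t).
    def concentration(c0, half_life_h, t_h):
        k = math.log(2) / half_life_h  # elimination rate constant
        return c0 * math.exp(-k * t_h)

    print(concentration(100.0, half_life_h=4.0, t_h=8.0))  # two half-lives -> 25.0

An interaction that slows a drug's metabolism effectively lengthens half_life_h, so concentrations fall more slowly than the dosing schedule assumes.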

Based on absorption

Drugs that change intestinal motility may impact the levels of other drugs taken concurrently. For example, prokinetic agents increase intestinal motility, which may cause drugs to pass through the digestive system too quickly, reducing absorption.

The pharmacological modification of pH can also affect other compounds. Drugs can be present in ionized or non-ionized forms depending on their pKa, and neutral compounds are usually better absorbed through membranes. Medications like antacids can increase pH and inhibit the absorption of other drugs such as zalcitabine, tipranavir and amprenavir. The opposite is more common, with, for example, the H2 antagonist cimetidine stimulating the absorption of didanosine. Some sources recommend a gap of two to four hours between taking the two drugs to avoid the interaction.
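The ionization point can be made quantitative with the Henderson-Hasselbalch relationship: for a weak acid the non-ionized (better-absorbed) fraction is 1 / (1 + 10^(pH - pKa)). A sketch with an invented pKa of 3.5 shows how raising gastric pH collapses the absorbable fraction:

    # Non-ionized fraction of a weak acid via Henderson-Hasselbalch.
    # pKa 3.5 is an invented illustrative value, not a specific drug.
    def unionized_fraction_weak_acid(ph, pka):
        return 1.0 / (1.0 + 10.0 ** (ph - pka))

    print(unionized_fraction_weak_acid(ph=1.5, pka=3.5))  # ~0.99 in gastric acid
    print(unionized_fraction_weak_acid(ph=5.5, pka=3.5))  # ~0.01 after an antacid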

Factors such as high-fat food may also alter the solubility of drugs and impact their absorption; this is the case for oral anticoagulants and avocado. Non-absorbable complexes may also form via chelation, when cations make certain drugs harder to absorb, as happens between tetracycline or the fluoroquinolones and dairy products, due to the presence of calcium ions. Other drugs bind with proteins: sucralfate, for example, binds to proteins, especially if they have a high bioavailability, and for this reason its administration is contraindicated in enteral feeding.

Some drugs also alter absorption by acting on the P-glycoprotein of the enterocytes. This appears to be one of the mechanisms by which grapefruit juice increases the bioavailability of various drugs beyond its inhibitory activity on first pass metabolism.

Based on transport and distribution

Drugs may also affect each other by competing for transport proteins in plasma, such as albumin. In these cases, the drug that arrives first binds to the plasma protein, leaving the other drug dissolved in the plasma and modifying its expected concentration. The organism has mechanisms to counteract these situations (by, for example, increasing plasma clearance), so they are not usually clinically relevant. They may become relevant if other problems are present, such as issues with drug excretion.

Based on metabolism

Diagram of cytochrome P450 isoenzyme 2C9 with the haem group in the centre of the enzyme.

Many drug interactions are due to alterations in drug metabolism. Further, human drug-metabolizing enzymes are typically activated through the engagement of nuclear receptors. One notable system involved in metabolic drug interactions is the enzyme system comprising the cytochrome P450 oxidases.

CYP450

Cytochrome P450 is a very large family of haemoproteins (hemoproteins) that are characterized by their enzymatic activity and their role in the metabolism of a large number of drugs. Of the various families present in humans, the most interesting in this respect are families 1, 2 and 3, and the most important enzymes are CYP1A2, CYP2C9, CYP2C19, CYP2D6, CYP2E1 and CYP3A4. The majority of these enzymes are also involved in the metabolism of endogenous substances, such as steroids or sex hormones, which is also important should there be interference with these substances. The function of the enzymes can either be stimulated (enzyme induction) or inhibited (enzyme inhibition).

Through enzymatic inhibition and induction

If drug A is metabolized by a CYP450 enzyme and drug B blocks the activity of that enzyme, pharmacokinetic alterations of drug A can result: drug A remains in the bloodstream for an extended duration, and its concentration eventually increases.

In some instances, the inhibition may instead reduce the therapeutic effect, if the metabolites of the drug are responsible for the effect.

Compounds that increase the efficiency of the enzymes, on the other hand, may have the opposite effect and increase the rate of metabolism.
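The concentration consequences of inhibition and induction follow from the standard steady-state relationship C_ss = dosing rate / clearance: an inhibitor that halves the clearance of drug A doubles its average steady-state concentration, while an inducer that raises clearance lowers it. A sketch with illustrative values:

    # Average steady-state concentration: C_ss = dosing rate / clearance.
    def steady_state_concentration(dose_rate_mg_per_h, clearance_l_per_h):
        return dose_rate_mg_per_h / clearance_l_per_h  # mg/L

    print(steady_state_concentration(10.0, 5.0))  # 2.0 mg/L at baseline
    print(steady_state_concentration(10.0, 2.5))  # 4.0 mg/L, clearance halved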

Examples of metabolism-based interactions

An example of this is shown in the following table for the CYP1A2 enzyme, listing substrates (drugs metabolized by this enzyme) and some inducers and inhibitors of its activity:

Drugs related to CYP1A2
Substrates | Inhibitors | Inducers

Some foods also act as inducers or inhibitors of enzymatic activity. The following table shows the most common:

Foods and their influence on drug metabolism

Food | Mechanism | Drugs affected
 | Enzymatic inducer | Acenocoumarol, warfarin
Grapefruit juice | Enzymatic inhibition |
Soya | Enzymatic inhibition | Clozapine, haloperidol, olanzapine, caffeine, NSAIDs, phenytoin, zafirlukast, warfarin
Garlic | Increases antiplatelet activity |
Ginseng | To be determined | Warfarin, heparin, aspirin and NSAIDs
Ginkgo biloba | Strong inhibitor of platelet aggregation factor | Warfarin, aspirin and NSAIDs
Hypericum perforatum (St John's wort) | Enzymatic inducer (CYP450) | Warfarin, digoxin, theophylline, cyclosporine, phenytoin and antiretrovirals
Ephedra | Receptor-level agonist | MAOIs, central nervous system stimulants, ergot alkaloids and xanthines
Kava (Piper methysticum) | Unknown | Levodopa
Ginger | Inhibits thromboxane synthetase (in vitro) | Anticoagulants
Chamomile | Unknown | Benzodiazepines, barbiturates and opioids
Hawthorn | Unknown | Beta-adrenergic antagonists, cisapride, digoxin, quinidine

Based on excretion

Renal and biliary excretion

Drugs tightly bound to proteins (i.e. not in the free fraction) are not available for renal excretion. Filtration depends on a number of factors, including the pH of the urine, and drug interactions may alter any of these factors.

With herbal medicines

Herb-drug interactions are drug interactions that occur between herbal medicines and conventional drugs. These types of interactions may be more common than drug-drug interactions because herbal medicines often contain multiple pharmacologically active ingredients, while conventional drugs typically contain only one. Some such interactions are clinically significant, although most herbal remedies are not associated with drug interactions causing serious consequences. Most catalogued herb-drug interactions are moderate in severity. The conventional drugs most commonly implicated in herb-drug interactions are warfarin, insulin, aspirin, digoxin, and ticlopidine, due to their narrow therapeutic indices. The herbs most commonly involved in such interactions are those containing St. John's Wort, magnesium, calcium, iron, or ginkgo.

Examples

Examples of herb-drug interactions include, but are not limited to:

Mechanisms

The mechanisms underlying most herb-drug interactions are not fully understood. Interactions between herbal medicines and anticancer drugs typically involve drug-metabolizing cytochrome P450 enzymes. For example, St. John's Wort has been shown to induce CYP3A4 and P-glycoprotein in vitro and in vivo.

Underlying factors

The factors or conditions that predispose patients to interactions include the following. Old age: factors relating to how human physiology changes with age may affect the interaction of drugs. For example, liver metabolism, kidney function, nerve transmission, and the functioning of bone marrow all decrease with age. In addition, in old age there is a sensory decrease that increases the chances of errors being made in the administration of drugs. The elderly are also more vulnerable to polypharmacy, and the more drugs a patient takes, the higher the chance of an interaction.

Genetic factors may also affect the enzymes and receptors, thus altering the possibilities of interactions.

Patients with hepatic or renal disease may already have difficulty metabolizing and excreting drugs, which may exacerbate the effect of interactions.

Some drugs present an intrinsic increased risk for a harmful interaction, including drugs with a narrow therapeutic index, where the difference between the effective dose and the toxic dose is small. The drug digoxin is an example of this type of drug.

Risks are also increased when the drug presents a steep dose-response curve, such that small changes in the dosage produce large changes in the drug's concentration in the blood plasma.
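A quick sketch of the steepness point on the concentration-effect side: with a Hill coefficient of 4 rather than 1 (both values invented for illustration), the same 25% rise in concentration produces a roughly four-fold larger jump in effect:

    # Effect change for a 25% concentration rise at two Hill slopes.
    # EC50 and concentrations are in arbitrary illustrative units.
    def effect(c, ec50=1.0, n=1.0):
        return c**n / (ec50**n + c**n)

    for n in (1.0, 4.0):
        jump = effect(1.25, n=n) - effect(1.0, n=n)
        print(f"Hill slope {n}: effect rises by {jump:.3f}")
    # Hill slope 1.0: ~0.056; Hill slope 4.0: ~0.209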

Epidemiology

As of 2008, among adults in the United States older than 56, 4% were taking medications and/or supplements that put them at risk of a major drug interaction. Potential drug-drug interactions have increased over time and are more common in the less-educated elderly, even after controlling for age, sex, place of residence, and comorbidity.

Just-so story

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Just-so_story

In science and philosophy, a just-so story is an untestable narrative explanation for a cultural practice, a biological trait, or behavior of humans or other animals. The pejorative nature of the expression is an implicit criticism that reminds the listener of the fictional and unprovable nature of such an explanation. Such tales are common in folklore genres like mythology (where they are known as etiological myths – see etiology). A less pejorative term is a pourquoi story, which has been used to describe usually more mythological or otherwise traditional examples of this genre, aimed at children.

This phrase is a reference to Rudyard Kipling's 1902 Just So Stories, containing fictional and deliberately fanciful tales for children, in which the stories pretend to explain animal characteristics, such as the origin of the spots on the leopard. It has been used to criticize evolutionary explanations of traits that have been proposed to be adaptations, particularly in the evolution–creation debates and in debates regarding research methods in sociobiology and evolutionary psychology.

However, the first widely acknowledged use of the phrase in the modern and pejorative sense seems to have originated in 1978 with Stephen Jay Gould, a prominent paleontologist and popular science writer. Gould expressed deep skepticism as to whether evolutionary psychology could ever provide objective explanations for human behavior, even in principle; additionally, even if it were possible to do so, Gould did not think that it could be proven in a properly scientific way.

Critique

Academics such as David Barash say that the term just-so story, when applied to a proposed evolutionary adaptation, is simply a derogatory term for a hypothesis. Hypotheses, by definition, require further empirical assessment, and are a part of normal science. Similarly, Robert Kurzban suggested that "The goal should not be to expel stories from science, but rather to identify the stories that are also good explanations." In his book The Triumph of Sociobiology, John Alcock suggested that the term just-so story as applied to proposed evolved adaptations is "one of the most successful derogatory labels ever invented". In a response to Gould's criticism, John Tooby and Leda Cosmides argued that the "just-so" accusation is unsubstantiated as it claims evolutionary psychologists are only interested in facts already known, when in reality evolutionary psychology is interested in what can be predicted from already known information as a means of pursuing unknown avenues of research. Thus evolutionary psychology has predictive utility, meaning it is not composed of just-so stories. Steve Stewart-Williams argues that all scientific hypotheses are just-so stories prior to being tested, yet the accusation is seldom levelled at other fields. Stewart-Williams also agrees with the idea that evolutionary explanations can potentially be made up for almost anything, but argues the same could be said of competing approaches, such as sociocultural explanations, so in the view of Stewart-Williams this is not a useful criticism. In a 2001 interview, Leda Cosmides argued:

There is nothing wrong with explaining facts that are already known: no one faults a physicist for explaining why stars shine or apples fall toward earth. But evolutionary psychology would not be very useful if it were only capable of providing explanations after the fact, because almost nothing about the mind is known or understood: there are few facts, at the moment, to be explained! The strength of an evolutionary approach is that it can aid discovery: it allows you to generate predictions about what programs the mind might contain, so that you can conduct experiments to see if they in fact exist. ... [W]hat about evolutionary explanations of phenomena that are already known? Those who have a professional knowledge of evolutionary biology know that it is not possible to cook up after the fact explanations of just any trait. There are important constraints on evolutionary explanation. More to the point, every decent evolutionary explanation has testable predictions about the design of the trait. For example, the hypothesis that pregnancy sickness is a byproduct of prenatal hormones predicts different patterns of food aversions than the hypothesis that it is an adaptation that evolved to protect the fetus from pathogens and plant toxins in food at the point in embryogenesis when the fetus is most vulnerable – during the first trimester. Evolutionary hypotheses – whether generated to discover a new trait or to explain one that is already known – carry predictions about the design of that trait. The alternative – having no hypothesis about adaptive function – carries no predictions whatsoever. So which is the more constrained and sober scientific approach?

Al-Shawaf et al. argue that many evolutionary psychology hypotheses are formed in a "top-down" approach; a theory is used to generate a hypothesis and predictions are then made from this hypothesis. This method makes it generally impossible to engage in just-so storytelling because the hypothesis and predictions are made a priori, based on the theory. By contrast, the "bottom-up" approach, whereby an observation is made and a hypothesis is generated to explain the observation, could potentially be a form of just-so storytelling if no novel predictions were developed from the hypothesis. Provided novel, testable predictions are made from the hypothesis, then it cannot be argued that the hypothesis is a just-so story. Al-Shawaf et al. argue that the just-so accusation is a result of the fact that, like other evolutionary sciences, evolutionary psychology is partially a historical discipline. However, the authors argue that if this made evolutionary psychology nothing but just-so storytelling, then other partially historical scientific disciplines such as astrophysics, geology or cosmology would also be just-so storytelling. What makes any scientific discipline, not just partially historical ones, valid is their ability to make testable novel predictions in the present day. Evolutionary psychologists do not need to travel back in time to test their hypotheses, as their hypotheses yield predictions about what we would expect to see in the modern world.

Lisa DeBruine argues that evolutionary psychology can generate testable, novel predictions. She gives an example of evolved navigation theory, which hypothesised that people would overestimate vertical distances relative to horizontal ones and that vertical distances are overestimated more from the top than from the bottom, due to the risks of falling from a greater height leading to a greater chance of injury or death encouraging people to be more cautious when assessing the risks of vertical distances. The predictions of the theory were confirmed and the facts were previously unknown until evolved navigation theory tested them, demonstrating that evolutionary psychology can make novel predictions of previously unknown facts.

Berry et al. argue that critics of adaptationist "just so stories" are often guilty of creating "just not so stories", uncritically accepting any alternative explanation provided it is not the adaptationist one. Furthermore, the authors argue that Gould's use of the term "adaptive function" is overly restrictive, as he insists it must refer to the original adaptive function the trait evolved for. According to the authors, this is a nonsensical requirement: if an adaptation is later used for a new, different adaptive function, it is still an adaptation, because it remains in the population by helping organisms with this new function. The trait's original purpose is thus irrelevant; having been co-opted for a new purpose, it maintains itself within the species because it increases the reproductive success of members who have it (versus those who may have lost it for some reason), and nature is blind to the original "intended" function of the trait.

David Buss argued that while Gould's "just-so story" criticism is that the data that an evolutionary psychology adaptationist hypothesis explains could be equally explained by different hypotheses (such as exaptationist or co-opted spandrel hypotheses), Gould failed to meet the relevant evidentiary burdens with regards to these alternative hypotheses. According to Buss, co-opted exaptationist and spandrel hypotheses carry an additional evidentiary burden compared to adaptationist hypotheses, as they must identify both the later co-opted functionality and the original adaptational functionality, while proposals that something is a co-opted byproduct must identify what the trait was a byproduct of and what caused it to be co-opted; it is not sufficient simply to propose an alternative exaptationist, functionless-byproduct or spandrel hypothesis to the adaptationist one; rather, these evidentiary burdens must be met. Buss argues that Gould's failure to do this meant that his assertion that apparent adaptations were actually exaptations was itself nothing more than a just-so story.

Alternatives in evolutionary developmental biology

How the Snake Lost Its Legs: Curious Tales from the Frontier of Evo-Devo is a 2014 book on evolutionary developmental biology by Lewis I. Held, Jr. The title pays a "factual homage to Rudyard Kipling's fanciful Just So Stories".

Mechanism (philosophy)

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Mechanism_(philosophy)

Mechanism is the belief that natural wholes (principally living things) are similar to complicated machines or artifacts, composed of parts lacking any intrinsic relationship to each other.

The doctrine of mechanism in philosophy comes in two different flavors. They are both doctrines of metaphysics, but they are different in scope and ambitions: the first is a global doctrine about nature; the second is a local doctrine about humans and their minds, which is hotly contested. For clarity, we might distinguish these two doctrines as universal mechanism and anthropic mechanism.

There is no constant meaning in the history of philosophy for the word Mechanism. Originally, the term meant that cosmological theory which ascribes the motion and changes of the world to some external force. In this view material things are purely passive, while according to the opposite theory (i. e., Dynamism), they possess certain internal sources of energy which account for the activity of each and for its influence on the course of events; These meanings, however, soon underwent modification. The question as to whether motion is an inherent property of bodies, or has been communicated to them by some external agency, was very often ignored. With many cosmologists the essential feature of Mechanism is the attempt to reduce all the qualities and activities of bodies to quantitative realities, i. e. to mass and motion. But a further modification soon followed. Living bodies, as is well known, present at first sight certain characteristic properties which have no counterpart in lifeless matter. Mechanism aims to go beyond these appearances. It seeks to explain all "vital" phenomena as physical and chemical facts; whether or not these facts are in turn reducible to mass and motion becomes a secondary question, although Mechanists are generally inclined to favour such reduction. The theory opposed to this biological mechanism is no longer Dynamism, but Vitalism or Neo-vitalism, which maintains that vital activities cannot be explained, and never will be explained, by the laws which govern lifeless matter.

— "Mechanism" in Catholic Encyclopedia (1913)

Mechanical philosophy

The mechanical philosophy is a form of natural philosophy which compares the universe to a large-scale mechanism (i.e. a machine). The mechanical philosophy is associated with the scientific revolution of early modern Europe. One of the first expositions of universal mechanism is found in the opening passages of Leviathan by Thomas Hobbes, published in 1651.

Some intellectual historians and critical theorists argue that early mechanical philosophy was tied to disenchantment and the rejection of the idea of nature as living or animated by spirits or angels. Other scholars, however, have noted that early mechanical philosophers nevertheless believed in magic, Christianity and spiritualism.

Mechanism and determinism

Some ancient philosophies held that the universe is reducible to completely mechanical principles—that is, the motion and collision of matter. This view was closely linked with materialism and reductionism, especially that of the atomists and to a large extent, stoic physics. Later mechanists believed the achievements of the scientific revolution of the 17th century had shown that all phenomena could eventually be explained in terms of "mechanical laws": natural laws governing the motion and collision of matter that imply a determinism. If all phenomena can be explained entirely through the motion of matter under physical laws, as the gears of a clock determine that it must strike 2:00 an hour after striking 1:00, all phenomena must be completely determined, past, present or future.

Development of the mechanical philosophy

The natural philosophers concerned with developing the mechanical philosophy were largely a French group, together with some of their personal connections. They included Pierre Gassendi, Marin Mersenne and René Descartes. Also involved were the English thinkers Sir Kenelm Digby, Thomas Hobbes and Walter Charleton; and the Dutch natural philosopher Isaac Beeckman.

Robert Boyle used "mechanical philosophers" to refer both to those with a theory of "corpuscles" or atoms of matter, such as Gassendi and Descartes, and to those who did without such a theory. One common factor was the clockwork universe view. His meaning would be problematic in the cases of Hobbes and Galileo Galilei; it would include Nicolas Lemery and Christiaan Huygens, as well as Boyle himself. Newton would be a transitional figure. Contemporary usage of "mechanical philosophy" dates back to 1952 and Marie Boas Hall.

In France the mechanical philosophy spread mostly through private academies and salons; in England, through the Royal Society. It did not have a large initial impact in English universities; universities in France, the Netherlands and Germany were somewhat more receptive.

Hobbes and the mechanical philosophy

One of the first expositions of universal mechanism is found in the opening passages of Leviathan (1651) by Hobbes; the book's second chapter invokes the principle of inertia, foundational for the mechanical philosophy. Boyle did not mention him as one of the group; but at the time they were on opposite sides of a controversy. Richard Westfall deems him a mechanical philosopher.

Hobbes's major statement of his natural philosophy is in De Corpore (1655). In part II and III of this work he goes a long way towards identifying fundamental physics with geometry; and he freely mixes concepts from the two areas.

Descartes and the mechanical philosophy

Descartes was also a mechanist. A substance dualist, he argued that reality is composed of two radically different types of substance: extended matter, on the one hand, and immaterial mind, on the other. He identified matter with the spatial extension which is its only clear and distinct idea, and consequently denied the existence of vacuum. Descartes argued that one cannot explain the conscious mind in terms of the spatial dynamics of mechanistic bits of matter cannoning off each other. Nevertheless, his understanding of biology was mechanistic in nature:

"I should like you to consider that these functions (including passion, memory, and imagination) follow from the mere arrangement of the machine’s organs every bit as naturally as the movements of a clock or other automaton follow from the arrangement of its counter-weights and wheels." (Descartes, Treatise on Man, p.108)

His scientific work was based on the traditional mechanistic understanding which maintains that animals and humans are completely mechanistic automata. Descartes' dualism was motivated by the seeming impossibility that mechanical dynamics could yield mental experiences.

Beeckman and the mechanical philosophy

Isaac Beeckman's theory of mechanical philosophy, described in his books Centuria and Journal, is grounded in two components: matter and motion. To explain matter, Beeckman relied on the philosophy of atomism, which explains that matter is composed of tiny inseparable particles that interact to create the objects seen in life. To explain motion, he supported the idea of inertia, a concept later formalized by Isaac Newton.

Newton's mechanical philosophy

Isaac Newton ushered in a weaker notion of mechanism that tolerated the action at a distance of gravity. Interpretations of Newton's scientific work in light of his occult research have suggested that he did not properly view the universe as mechanistic, but instead populated by mysterious forces and spirits and constantly sustained by God and angels. Later generations of philosophers who were influenced by Newton's example were nonetheless often mechanists. Among them were Julien Offray de La Mettrie and Denis Diderot.


Criticism

Critics argue that although mechanical philosophy encompasses a wide range of useful observations and principles, it has not adequately explained the world and its components, and there are weaknesses in its definitions. Among the criticisms made of this philosophy are:

  • Experts in religious studies have criticized the philosophy on the grounds that it makes God's intervention in the management of the world seem unnecessary.
  • Newton's mechanical philosophy, with all its positive effects on human life, ultimately leads to Deism.
  • It is a stagnant worldview that cannot explain God's constant presence and favor in the world.
  • At the height of this philosophy, God was viewed as a skilled designer, conceived as having a mental structure and morality like those of humans.
  • The assumption that God tuned the world like a clock and left it to its own devices is in clear conflict with the God of the Bible, who is at all times directly and immediately involved in his creation.
  • This philosophy abandons concepts such as essence, accident, matter, form, Ipso facto and potential that are used in ontology, and denies the involvement of transcendental affairs in the management of this world.
  • This philosophy is incapable of explaining human spiritual experiences and the immaterial realms of the world.

Several 20th-century philosophers have raised doubts concerning the concept of mechanical philosophy in general. Among them is the Australian philosopher Colin Murray Turbayne who notes that the concepts of "substance" and "substratum" which underlie mind-body dualism as utilized within the Cartesian analysis of the universe have limited meaning at best. He argues further that the mechanistic constructs described within the Newtonian system are more properly characterized as linguistic metaphors which have been mistakenly interpreted as literal truths over time and incorporated through the use of deductive logic into the hypotheses which support philosophical materialism throughout much of the Western world. He concludes that mankind can successfully embrace more beneficial theoretic constructs of the universe only after first acknowledging the metaphorical nature of such mechanistic concepts and the central role which they have assumed in the guise of literal truth within the realm of epistemology and metaphysics.

Universal mechanism

The older doctrine, here called universal mechanism, is the position of the ancient philosophies described above under Mechanism and determinism: closely linked with materialism and reductionism, and especially with the atomists and, to a large extent, stoic physics, it holds that the universe is reducible to completely mechanical principles (the motion and collision of matter), and, in its later form, that all phenomena must therefore be completely determined, whether past, present or future.

The French mechanist and determinist Pierre Simon de Laplace formulated the sweeping implications of this thesis by saying:

We may regard the present state of the universe as the effect of the past and the cause of the future. An intellect which at any given moment knew all of the forces that animate nature and the mutual positions of the beings that compose it, if this intellect were vast enough to submit the data to analysis, could condense into a single formula the movement of the greatest bodies of the universe and that of the lightest atom; for such an intellect nothing could be uncertain and the future just like the past would be present before its eyes.

— Pierre Simon Laplace, A Philosophical Essay on Probabilities

One of the first and most famous expositions of universal mechanism is found in the opening passages of Leviathan by Thomas Hobbes (1651). What is less frequently appreciated is that René Descartes was a staunch mechanist, though today, in the philosophy of mind, he is remembered for introducing the mind–body problem in terms of dualism and physicalism. As discussed in the sections above, Descartes combined substance dualism about the mind with a thoroughly mechanistic understanding of biology, while Isaac Newton's much weaker acceptation of mechanism tolerated the antithetical, and as yet inexplicable, action at a distance of gravity, and nonetheless inspired a generation of philosophers who carried the mechanist banner, chief among them Julien Offray de La Mettrie and Denis Diderot (see also: French materialism).

Anthropic mechanism

The thesis in anthropic mechanism is not that everything can be completely explained in mechanical terms (although some anthropic mechanists may also believe that), but rather that everything about human beings can be completely explained in mechanical terms, as surely as can everything about clocks or the internal combustion engine.

One of the chief obstacles that all mechanistic theories have faced is providing a mechanistic explanation of the human mind; Descartes, for one, endorsed dualism in spite of endorsing a completely mechanistic conception of the material world, because he argued that mechanism and the notion of a mind were logically incompatible. Hobbes, on the other hand, conceived of the mind and the will as purely mechanistic, completely explicable in terms of the effects of perception and the pursuit of desire, which in turn he held to be completely explicable in terms of the materialistic operations of the nervous system. Following Hobbes, other mechanists argued for a thoroughly mechanistic explanation of the mind, with one of the most influential and controversial expositions of the doctrine being offered by Julien Offray de La Mettrie in his Man a Machine (1748).

The main points of debate between anthropic mechanists and anti-mechanists are mainly occupied with two topics: the mind (consciousness, in particular) and free will. Anti-mechanists argue that anthropic mechanism is incompatible with our commonsense intuitions: in philosophy of mind they argue that if matter is devoid of mental properties, then the phenomenon of consciousness cannot be explained by mechanistic principles acting on matter. In metaphysics, anti-mechanists argue that anthropic mechanism implies determinism about human action, which is incompatible with our experience of free will. Contemporary philosophers who have argued for this position include Norman Malcolm and David Chalmers.

Anthropic mechanists typically respond in one of two ways. In the first, they agree with anti-mechanists that mechanism conflicts with some of our commonsense intuitions, but go on to argue that our commonsense intuitions are simply mistaken and need to be revised. Down this path lies eliminative materialism in philosophy of mind, and hard determinism on the question of free will. This option is accepted by the eliminative materialist philosopher Paul Churchland. Some have questioned how eliminative materialism is compatible with the freedom of will apparently required for anyone (including its adherents) to make truth claims. The second option, common amongst philosophers who adopt anthropic mechanism, is to argue that the arguments given for incompatibility are specious: whatever we mean by "consciousness" and "free will" is fully compatible with a mechanistic understanding of the human mind and will. As a result, they tend to argue for one or another non-eliminativist physicalist theory of mind, and for compatibilism on the question of free will. Contemporary philosophers who have argued for this sort of account include J. J. C. Smart and Daniel Dennett.

Gödelian arguments

Some scholars have debated over what, if anything, Gödel's incompleteness theorems imply about anthropic mechanism. Much of the debate centers on whether the human mind is equivalent to a Turing machine, or by the Church-Turing thesis, any finite machine at all. If it is, and if the machine is consistent, then Gödel's incompleteness theorems would apply to it.

Gödelian arguments claim that a system of human mathematicians (or some idealization of human mathematicians) is both consistent and powerful enough to recognize its own consistency. Since this is impossible for a Turing machine, the Gödelian concludes that human reasoning must be non-mechanical.
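Schematically, the standard reconstruction of the argument runs as follows (a sketch of the common textbook form, not any single author's exact formulation):

    \begin{enumerate}
      \item Suppose human mathematical reasoning is captured by a consistent
            formal system $F$ extending elementary arithmetic.
      \item By G\"odel's second incompleteness theorem, $F \nvdash \mathrm{Con}(F)$.
      \item The G\"odelian claims that we can nevertheless recognize
            $\mathrm{Con}(F)$ as true.
      \item Hence human reasoning proves something $F$ cannot, contradicting (1).
    \end{enumerate}

As the following paragraphs note, the standard rebuttals target steps 1 and 3: nothing guarantees that human reasoning is consistent, or that we could recognize the consistency of a system that in fact captured it.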

However, the modern consensus in the scientific and mathematical community is that actual human reasoning is inconsistent: any consistent "idealized version" H of human reasoning would logically be forced to adopt a healthy but counter-intuitive open-minded skepticism about the consistency of H (otherwise H is provably inconsistent); and that Gödel's theorems do not lead to any valid argument against mechanism. This consensus that Gödelian anti-mechanist arguments are doomed to failure is laid out strongly in Artificial Intelligence: "any attempt to utilize [Gödel's incompleteness results] to attack the computationalist thesis is bound to be illegitimate, since these results are quite consistent with the computationalist thesis."

History

One of the earliest attempts to use incompleteness to reason about human intelligence was by Gödel himself in his 1951 Gibbs Lecture entitled "Some basic theorems on the foundations of mathematics and their philosophical implications". In this lecture, Gödel uses the incompleteness theorem to arrive at the following disjunction: (a) the human mind is not a consistent finite machine, or (b) there exist Diophantine equations for which it cannot decide whether solutions exist. Gödel finds (b) implausible, and thus seems to have believed the human mind was not equivalent to a finite machine, i.e., its power exceeded that of any finite machine. He recognized that this was only a conjecture, since one could never disprove (b). Yet he considered the disjunctive conclusion to be a "certain fact".

In subsequent years, more direct anti-mechanist lines of reasoning were apparently floating around the intellectual atmosphere. In 1960, Hilary Putnam published a paper entitled "Minds and Machines," in which he points out the flaws of a typical anti-mechanist argument. Informally, this is the argument that the (alleged) difference between "what can be mechanically proven" and "what can be seen to be true by humans" shows that human intelligence is not mechanical in nature. Or, as Putnam puts it:

Let T be a Turing machine which "represents" me in the sense that T can prove just the mathematical statements I prove. Then using Gödel's technique I can discover a proposition that T cannot prove, and moreover I can prove this proposition. This refutes the assumption that T "represents" me, hence I am not a Turing machine.

Putnam objects that this argument ignores the issue of consistency. Gödel's technique can only be applied to consistent systems. It is conceivable, argues Putnam, that the human mind is inconsistent. If one is to use Gödel's technique to prove the proposition that T cannot prove, one must first prove (the mathematical statement representing) the consistency of T, a daunting and perhaps impossible task. Later, Putnam suggested that while Gödel's theorems cannot be applied to humans, since humans make mistakes and are therefore inconsistent, they may be applied to the human faculty of science or mathematics in general. If we are to believe that this faculty is consistent, then either we cannot prove its consistency, or it cannot be represented by a Turing machine.

J. R. Lucas in Minds, Machines and Gödel (1961), and later in his book The Freedom of the Will (1970), lays out an anti-mechanist argument closely following the one described by Putnam, including reasons for why the human mind can be considered consistent. Lucas admits that, by Gödel's second theorem, a human mind cannot formally prove its own consistency, and even says (perhaps facetiously) that women and politicians are inconsistent. Nevertheless, he sets out arguments for why a male non-politician can be considered consistent.

Further work was done by Judson Webb in his 1968 paper "Metamathematics and the Philosophy of Mind". Webb claims that previous attempts glossed over whether one can truly see that the Gödelian statement p pertaining to oneself is true. Using a different formulation of Gödel's theorems, namely that of Raymond Smullyan and Emil Post, Webb shows one can derive convincing arguments for oneself of both the truth and falsity of p. He furthermore argues that all arguments about the philosophical implications of Gödel's theorems are really arguments about whether the Church-Turing thesis is true.

Later, Roger Penrose entered the fray, providing somewhat novel anti-mechanist arguments in his books, The Emperor's New Mind (1989) [ENM] and Shadows of the Mind (1994) [SM]. These books have proved highly controversial. Martin Davis responded to ENM in his paper "Is Mathematical Insight Algorithmic?", where he argues that Penrose ignores the issue of consistency. Solomon Feferman gives a critical examination of SM in his paper "Penrose's Gödelian argument." The response of the scientific community to Penrose's arguments has been negative, with one group of scholars calling Penrose's repeated attempts to form a persuasive Gödelian argument "a kind of intellectual shell game, in which a precisely defined notion to which a mathematical result applies ... is switched for a vaguer notion".

A Gödel-based anti-mechanism argument can be found in Douglas Hofstadter's book Gödel, Escher, Bach: An Eternal Golden Braid, though Hofstadter himself is a well-known skeptic of such arguments:

Looked at this way, Gödel's proof suggests – though by no means does it prove! – that there could be some high-level way of viewing the mind/brain, involving concepts which do not appear on lower levels, and that this level might have explanatory power that does not exist – not even in principle – on lower levels. It would mean that some facts could be explained on the high level quite easily, but not on lower levels at all. No matter how long and cumbersome a low-level statement were made, it would not explain the phenomena in question. It is analogous to the fact that, if you make derivation after derivation in Peano arithmetic, no matter how long and cumbersome you make them, you will never come up with one for G – despite the fact that on a higher level, you can see that the Gödel sentence is true.

What might such high-level concepts be? It has been proposed for eons, by various holistically or "soulistically" inclined scientists and humanists that consciousness is a phenomenon that escapes explanation in terms of brain components; so here is a candidate at least. There is also the ever-puzzling notion of free will. So perhaps these qualities could be "emergent" in the sense of requiring explanations which cannot be furnished by the physiology alone.

Religiosity

From Wikipedia, the free encyclopedia

The Oxford English Dictionary defines religiosity as: "Religiousness; religious feeling or belief. [...] Affected or excessive religiousness". Different scholars have seen this concept as broadly about religious orientations and degrees of involvement or commitment. Religiosity is measured at the level of individuals or groups, and there is a lack of agreement among scholars on what criteria would constitute religiosity. Sociologists of religion have observed that an individual's experience, beliefs, sense of belonging, and behavior often are not congruent with one another, since there is much diversity in how one can be religious or not. Multiple problems exist in measuring religiosity. For instance, measures of variables such as church attendance produce different results when different methods are used, such as traditional surveys versus time-use surveys.

Measuring religion

Inaccuracy of polling and identification

The reliability of any poll results, in general and specifically on religion, can be questioned due to numerous factors, such as:

  • there have been very low response rates for polls since the 1990s
  • polls consistently fail to predict government election outcomes, which signifies that polls in general do not capture the actual views of the population
  • biases in wording or topic affect how people respond to polls
  • polls categorize people based on limited choices
  • polls often generalize broadly
  • polls offer shallow or superficial choices, which make it difficult for respondents to express complex religious beliefs and practices
  • interviewer and respondent fatigue is very common

The measurement of religiosity is hampered by the difficulties involved in defining what is meant by the term and the variables it entails. Numerous studies have explored the different components of religiosity, with most finding some distinction between religious beliefs/doctrine, religious practice, and spirituality. When religiosity is measured, it is important to specify which aspects of religiosity are referred to.

Researchers also note that an estimated 20-40% of the population changes their self-reported religious affiliation/identity over time due to numerous factors and that usually it is their answers on surveys that change, not necessarily their religious practices or beliefs.

In general, polling numbers should not be taken at face value: the way people answer questions differs in meaning across contexts and cultures, so it is misleading to assume that answers to a poll question have a single, simple interpretation.

According to Gallup, responses vary with how questions are asked. Since the early 2000s it has routinely asked about complex matters such as belief in God using three different wordings, and it consistently receives three different percentages in response.

Surveys in the United States

Two major surveys in the United States (the General Social Survey and the Cooperative Congressional Election Study) consistently show discrepancies between their demographic estimates that amount to 8% and growing. This is due to a few factors, such as each survey asking questions differently and thereby shaping how respondents answer ("social desirability bias"); the lumping of very different groups (atheists, agnostics, nothing in particular) into single categories (e.g. "no religion" vs "nothing in particular"); and imbalances among representative respondents (e.g. the GSS sample of nones is more politically moderate than the nones in the CCES, while the Protestant sample in the CCES is further to the right of the political spectrum).

The 2008 American Religious Identification Survey (ARIS) found a difference between how people identify and what people believe. While only 0.7% of U.S. adults identified as atheist, 2.3% said there is no such thing as a god. Only 0.9% identified as agnostic, but 10.0% said there is either no way to know if a god exists or they weren't sure. Another 12.1% said there is a higher power but no personal god. In total, only 15.0% identified as Nones or No Religion, but 24.4% did not believe in the traditional concept of a personal god. The conductors of the study concluded, "The historic reluctance of Americans to self-identify in this manner or use these terms seems to have diminished. Nevertheless ... the level of under-reporting of these theological labels is still significant ... many millions do not subscribe fully to the theology of the groups with which they identify."
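The gap the ARIS authors describe can be recomputed directly from the figures quoted above (a quick arithmetic check, nothing more):

    # ARIS 2008: belief vs self-identification, percentages from the text.
    no_god = 2.3                 # "no such thing as a god"
    unknowable_or_unsure = 10.0  # "no way to know" or "not sure"
    higher_power = 12.1          # "higher power but no personal god"
    non_traditional = no_god + unknowable_or_unsure + higher_power
    print(non_traditional)         # 24.4% not believing in a personal god
    print(non_traditional - 15.0)  # 9.4-point gap vs the 15.0% identifying as Nones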

According to a Pew study in 2009, only 5% of the total US population did not have a belief in a god. Out of all those without a belief in a god, only 24% self-identified as "atheist", while 15% self-identified as "agnostic", 35% self-identified as "nothing in particular", and 24% identified with a religious tradition.

According to Gallup's editor-in-chief, Frank Newport, numbers on surveys may not be the whole story. In his view, declines in religious affiliation or in belief in God on surveys may not reflect an actual decline in these beliefs: honesty with interviewers about spiritual matters may simply be increasing, as people may feel more comfortable today expressing viewpoints that were previously deviant.

Diversity in an individual's beliefs, affiliations, and behaviors

Decades of anthropological, sociological, and psychological research have established that "religious congruence" (the assumption that religious beliefs and values are tightly integrated in an individual's mind, or that religious practices and behaviors follow directly from religious beliefs, or that religious beliefs are chronologically linear and stable across different contexts) is actually rare. People's religious ideas are fragmented, loosely connected, and context-dependent, as in all other domains of culture and life. The beliefs, affiliations, and behaviors of any individual are complex activities that have many sources, including culture. As examples of religious incongruence, the sociologist Mark Chaves notes: "Observant Jews may not believe what they say in their Sabbath prayers. Christian ministers may not believe in God. And people who regularly dance for rain don't do it in the dry season."

Demographic studies often show wide diversity of religious beliefs, belonging, and practices in both religious and non-religious populations. For instance, among Americans who are not religious and not seeking religion: 68% believe in God, 12% are atheists, and 17% are agnostics; in terms of self-identification, 18% consider themselves religious, 37% consider themselves spiritual but not religious, and 42% consider themselves neither spiritual nor religious; and 21% pray every day while 24% pray once a month. Global studies on religion also show diversity.

Results of a 2008/2009 Gallup poll on whether respondents said that religion was "important in [their] daily life" (map legend: 10-point bands from 0%-9% to 90%-100%, plus "No data").

Components

Numerous studies have explored the different components of human religiosity (Brink, 1993; Hill & Hood 1999). What most have found is that there are multiple dimensions (the studies often employ factor analysis). For instance, Cornwall, Albrecht, Cunningham and Pitcher (1986) identify six dimensions of religiosity based on the understanding that there are at least three components of religious behavior: knowing (cognition in the mind), feeling (affect to the spirit), and doing (behavior of the body). For each of these components of religiosity there were two cross-classifications, resulting in the six dimensions:

  • Cognition
    • traditional orthodoxy
    • particularistic orthodoxy
  • Affect
    • Palpable
    • Tangible
  • Behavior
    • religious behavior
    • religious participation

Other researchers have found different dimensions, ranging generally from four to twelve components. What most measures of religiosity find is that there is at least some distinction between religious doctrine, religious practice, and spirituality.

For example, one can accept the truthfulness of the Bible (belief dimension), but never attend a church or even belong to an organized religion (practice dimension). Another example is an individual who does not hold orthodox Christian doctrines (belief dimension), but does attend a charismatic worship service (practice dimension) in order to develop his/her sense of oneness with the divine (spirituality dimension).

An individual could disavow all doctrines associated with organized religions (belief dimension), not affiliate with an organized religion or attend religious services (practice dimension), and at the same time be strongly committed to a higher power and feel that the connection with that higher power is ultimately relevant (spirituality dimension). These are explanatory examples of the broadest dimensions of religiosity and may not be reflected in specific religiosity measures.

Most dimensions of religiosity are correlated, meaning people who often attend church services (practice dimension) are also likely to score highly on the belief and spirituality dimensions. But individuals do not have to score high on all dimensions or low on all dimensions; their scores can vary by dimension.

Sociologists have differed over the exact number of components of religiosity. Charles Glock's five-dimensional approach (Glock, 1972: 39) was among the first of its kind in the field of sociology of religion. Other sociologists adapted Glock's list to include additional components (see for example, a six component measure by Mervin F. Verbit).
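As a toy illustration of the factor-analytic approach mentioned earlier in this section, the sketch below fits a two-factor model to synthetic "survey" responses; the items, loadings, and latent dimensions are all invented for illustration, and real studies use validated instruments:

    import numpy as np
    from sklearn.decomposition import FactorAnalysis

    rng = np.random.default_rng(0)
    n = 500
    belief = rng.normal(size=n)    # invented latent "belief" dimension
    practice = rng.normal(size=n)  # invented latent "practice" dimension
    items = np.column_stack([
        belief + 0.3 * rng.normal(size=n),    # item: "scripture is true"
        belief + 0.3 * rng.normal(size=n),    # item: "God exists"
        practice + 0.3 * rng.normal(size=n),  # item: "attends services"
        practice + 0.3 * rng.normal(size=n),  # item: "prays daily"
    ])
    fa = FactorAnalysis(n_components=2, random_state=0).fit(items)
    print(fa.components_.round(2))  # each factor loads mainly on one item pair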

Other factors

Genes and environment

National welfare spending vs church attendance in Christian societies

The contributions of genes and environment to religiosity have been quantified in studies of twins (Bouchard et al., 1999; Kirk et al., 1999) and sociological studies of welfare, availability, and legal regulations  (state religions, etc.).

Koenig et al. (2005) report that the contribution of genes to variation in religiosity (called heritability) increases from 12% to 44% and the contribution of shared (family) effects decreases from 56% to 18% between adolescence and adulthood.
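Estimates like these typically come from comparing monozygotic (MZ) and dizygotic (DZ) twin correlations. Below is a minimal sketch using Falconer's approximation under the ACE model; the twin correlations are invented so that the outputs match the adult figures cited above:

    # Falconer's approximation under the ACE twin model:
    #   h2 = 2 * (r_MZ - r_DZ)   additive genetic share (heritability)
    #   c2 = 2 * r_DZ - r_MZ     shared (family) environment share
    #   e2 = 1 - r_MZ            unique environment + measurement error
    def ace(r_mz, r_dz):
        return 2 * (r_mz - r_dz), 2 * r_dz - r_mz, 1 - r_mz

    print(ace(r_mz=0.62, r_dz=0.40))  # (0.44, 0.18, 0.38) with invented inputs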

A market-based theory of religious choice and governmental regulation of religion have been the dominant theories used to explain variations of religiosity between societies. However, Gill and Lundsgaarde (2004) documented a much stronger (negative) correlation between welfare state spending and religiosity (see diagram).

Just-world hypothesis

Studies have found belief in a just world to be correlated with aspects of religiousness.

Risk-aversion

Several studies have discovered a positive correlation between the degree of religiousness and risk aversion.

Operator (computer programming)

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Operator_(computer_programmin...