Saturday, September 9, 2023

Biomarker (medicine)

From Wikipedia, the free encyclopedia

In medicine, a biomarker is a measurable indicator of the severity or presence of some disease state. It may be defined as a "cellular, biochemical or molecular alteration in cells, tissues or fluids that can be measured and evaluated to indicate normal biological processes, pathogenic processes, or pharmacological responses to a therapeutic intervention." More generally, a biomarker is anything that can be used as an indicator of a particular disease state or some other physiological state of an organism. According to the WHO, the indicator may be chemical, physical, or biological in nature, and the measurement may be functional, physiological, biochemical, cellular, or molecular.

A biomarker can be a substance that is introduced into an organism as a means to examine organ function or other aspects of health. For example, rubidium chloride is used in isotopic labeling to evaluate perfusion of heart muscle. It can also be a substance whose detection indicates a particular disease state; for example, the presence of an antibody may indicate an infection. More specifically, a biomarker indicates a change in expression or state of a protein that correlates with the risk or progression of a disease, or with the susceptibility of the disease to a given treatment. Biomarkers can be characteristic biological properties or molecules that can be detected and measured in parts of the body like the blood or tissue. They may indicate either normal or diseased processes in the body. Biomarkers can be specific cells, molecules, genes, gene products, enzymes, or hormones. Complex organ functions or general characteristic changes in biological structures can also serve as biomarkers. Although the term biomarker is relatively new, biomarkers have been used in pre-clinical research and clinical diagnosis for a considerable time. For example, body temperature is a well-known biomarker for fever. Blood pressure is used to determine the risk of stroke. It is also widely known that cholesterol values are a biomarker and risk indicator for coronary and vascular disease, and that C-reactive protein (CRP) is a marker for inflammation.

Biomarkers are useful in a number of ways, including measuring the progress of disease, evaluating the most effective therapeutic regimens for a particular cancer type, and establishing long-term susceptibility to cancer or its recurrence. Biomarkers characterize disease progression starting from the earliest natural history of the disease. Biomarkers assess disease susceptibility and severity, which allows one to predict outcomes, determine interventions and evaluate therapeutic responses. From a forensic and epidemiologic perspective, biomarkers offer unique insight into the relationships between environmental risk factors and disease. The parameter can be chemical, physical or biological. In molecular terms, a biomarker is "the subset of markers that might be discovered using genomics, proteomics technologies or imaging technologies." Biomarkers play major roles in medicinal biology. Biomarkers help in early diagnosis, disease prevention, drug target identification, drug response monitoring, etc. Several biomarkers have been identified for many diseases, such as serum LDL for cholesterol, blood pressure, and the P53 gene and MMPs as tumor markers for cancer.

Disease-related biomarkers and drug-related biomarkers

It is necessary to distinguish between disease-related and drug-related biomarkers. Disease-related biomarkers give an indication of the probable effect of treatment on the patient (risk indicators or predictive biomarkers), of whether a disease already exists (diagnostic biomarkers), or of how such a disease may develop in an individual case regardless of the type of treatment (prognostic biomarkers). Predictive biomarkers help to assess the most likely response to a particular treatment type, while prognostic markers show the progression of disease with or without treatment. In contrast, drug-related biomarkers indicate whether a drug will be effective in a specific patient and how the patient's body will process it.

In addition to long-known parameters, such as those included and objectively measured in a blood count, there are numerous novel biomarkers used in the various medical specialties. Currently, intensive work is taking place on the discovery and development of innovative and more effective biomarkers. These "new" biomarkers have become the basis for preventive medicine, meaning medicine that recognises diseases or the risk of disease early, and takes specific countermeasures to prevent the development of disease. Biomarkers are also seen as the key to personalised medicine, treatments individually tailored to specific patients for highly efficient intervention in disease processes. Often, such biomarkers indicate changes in metabolic processes.

The "classic" biomarker in medicine is a laboratory parameter that the doctor can use to help make decisions in making a diagnosis and selecting a course of treatment. For example, the detection of certain autoantibodies in patient blood is a reliable biomarker for autoimmune disease, and the detection of rheumatoid factors has been an important diagnostic marker for rheumatoid arthritis (RA) for over 50 years. For the diagnosis of this autoimmune disease, antibodies against the body's own citrullinated proteins are of particular value. These ACPAs (anti-citrullinated protein/peptide antibodies) can be detected in the blood before the first symptoms of RA appear. They are thus highly valuable biomarkers for the early diagnosis of this autoimmune disease. In addition, they indicate whether the disease threatens to be severe, with serious damage to the bones and joints, which is an important tool for the doctor when providing a diagnosis and developing a treatment plan.

There are also more and more indications that ACPAs can be very useful in monitoring the success of treatment for RA. This would make possible the accurate use of modern treatments with biologicals. Physicians hope to soon be able to individually tailor rheumatoid arthritis treatments for each patient.

According to Häupl et al., prediction of response to treatment will become the most important aim of biomarker research in medicine. With the growing number of new biological agents, there is increasing pressure to identify molecular parameters such as ACPAs that will not only guide the therapeutic decision but also help to define the most important targets for which new biological agents should be tested in clinical studies.

An NIH study group committed to the following definition in 1998: "a characteristic that is objectively measured and evaluated as an indicator of normal biologic processes, pathogenic processes, or pharmacologic responses to a therapeutic intervention." In the past, biomarkers were primarily physiological indicators such as blood pressure or heart rate. More recently, the term biomarker is becoming a synonym for molecular biomarker, such as elevated prostate-specific antigen as a molecular biomarker for prostate cancer, or using enzyme assays as liver function tests. There has recently been heightened interest in the relevance of biomarkers in oncology, including the role of KRAS in colorectal cancer and other EGFR-associated cancers. In patients whose tumors express the mutated KRAS gene, the KRAS protein, which forms part of the EGFR signaling pathway, is always 'turned on'. This overactive EGFR signaling means that signaling continues downstream – even when the upstream signaling is blocked by an EGFR inhibitor, such as cetuximab (Erbitux) – and results in continued cancer cell growth and proliferation. Testing a tumor for its KRAS status (wild-type vs. mutant) helps to identify those patients who will benefit most from treatment with cetuximab.

Currently, effective treatment is available for only a small percentage of cancer patients. In addition, many cancer patients are diagnosed at a stage where the cancer has advanced too far to be treated. Biomarkers have the ability to greatly enhance cancer detection and the drug development process. In addition, biomarkers will enable physicians to develop individualized treatment plans for their cancer patients; thus allowing doctors to tailor drugs specific to their patient's tumor type. By doing so, drug response rate will improve, drug toxicity will be limited and costs associated with testing various therapies and the ensuing treatment for side effects will decrease.

Biomarkers also cover the use of molecular indicators of environmental exposure in epidemiologic studies such as human papilloma virus or certain markers of tobacco exposure such as 4-(methylnitrosamino)-1-(3-pyridyl)-1-butanone (NNK). To date no biomarkers have been established for head and neck cancer.

Biomarker requirements

For chronic diseases, whose treatment may require patients to take medications for years, accurate diagnosis is particularly important, especially when strong side effects are expected from the treatment. In these cases, biomarkers are becoming more and more important, because they can confirm a difficult diagnosis or even make it possible in the first place. A number of diseases, such as Alzheimer's disease or rheumatoid arthritis, often begin with an early, symptom-free phase. In such symptom-free patients there may be more or less probability of actually developing symptoms. In these cases, biomarkers help to identify high-risk individuals reliably and in a timely manner so that they can either be treated before onset of the disease or as soon as possible thereafter.

In order to use a biomarker for diagnostics, the sample material must be as easy to obtain as possible. This may be a blood sample taken by a doctor, a urine or saliva sample, or a drop of blood like those diabetes patients extract from their own fingertips for regular blood-sugar monitoring.

For rapid initiation of treatment, the speed with which a result is obtained from the biomarker test is critical. A rapid test, which delivers a result after only a few minutes, is optimal. This makes it possible for the physician to discuss with the patient how to proceed and if necessary to start treatment immediately after the test.

Naturally, the detection method for a biomarker must be accurate and as easy to carry out as possible. The results from different laboratories must not differ significantly from each other, and the biomarker must naturally have proven its effectiveness for the diagnosis, prognosis, and risk assessment of the affected diseases in independent studies.

A biomarker for clinical use needs good sensitivity and good specificity (e.g., ≥0.9 for each), although biomarkers should be chosen with the target population in mind, so positive predictive value and negative predictive value are often the more relevant measures.
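Predictive values depend on disease prevalence, so the same test can perform very differently in a screening population than in a high-risk one. A minimal sketch of that relationship, using the standard textbook definitions and hypothetical figures (neither the function name nor the numbers come from this article):

```python
def predictive_values(sensitivity, specificity, prevalence):
    """Compute PPV and NPV of a test from its sensitivity,
    specificity, and the disease prevalence in the tested population."""
    tp = sensitivity * prevalence              # true-positive fraction
    fp = (1 - specificity) * (1 - prevalence)  # false-positive fraction
    tn = specificity * (1 - prevalence)        # true-negative fraction
    fn = (1 - sensitivity) * prevalence        # false-negative fraction
    ppv = tp / (tp + fp)  # probability of disease given a positive test
    npv = tn / (tn + fn)  # probability of no disease given a negative test
    return ppv, npv

# A test with 0.9 sensitivity and 0.9 specificity applied to a
# population with 1% prevalence:
ppv, npv = predictive_values(0.9, 0.9, 0.01)
print(f"PPV = {ppv:.3f}, NPV = {npv:.3f}")  # PPV ≈ 0.083, NPV ≈ 0.999
```

Despite sensitivity and specificity of 0.9, fewer than one in ten positives is a true case at 1% prevalence, which is why predictive values matter when choosing a biomarker for a given population.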

Biomarker classification and application

Biomarkers can be classified based on different criteria.

Based on their characteristics, they can be classified as imaging biomarkers (CT, PET, MRI) or molecular biomarkers, the latter with three subtypes: volatile (e.g., breath), body fluid, and biopsy biomarkers.

Molecular biomarkers refer to non-imaging biomarkers that have biophysical properties which allow their measurement in biological samples (e.g., plasma, serum, cerebrospinal fluid, bronchoalveolar lavage, biopsy). They include nucleic acid-based biomarkers, such as gene mutations or polymorphisms and quantitative gene expression analysis, as well as peptides, proteins, lipids, metabolites, and other small molecules.

Biomarkers can also be classified based on their application, such as diagnostic biomarkers (e.g., cardiac troponin for the diagnosis of myocardial infarction), disease-staging biomarkers (e.g., brain natriuretic peptide for congestive heart failure), disease-prognosis biomarkers (cancer biomarkers), and biomarkers for monitoring the clinical response to an intervention (e.g., HbA1c for antidiabetic treatment). Another category of biomarkers includes those used in decision making in early drug development. For instance, pharmacodynamic (PD) biomarkers are markers of a certain pharmacological response, which are of special interest in dose optimization studies.

Classes

Three broad classes of biomarkers are prognostic biomarkers, predictive biomarkers and pharmacodynamic biomarkers.

Prognostic

Prognostic biomarkers give intervention-independent information on disease status through screening, diagnosis and disease monitoring. They can identify individuals in the latent period of a disease's natural history, allowing optimal therapy and prevention before the disease runs its course. Prognostic biomarkers give information on disease status by measuring internal precursors that increase or decrease the likelihood of developing a disease. For example, blood pressure and cholesterol are biomarkers for CVD. Prognostic biomarkers can be direct or indirect with respect to the causal pathway of a disease. If a prognostic biomarker is a direct step in the causal pathway, it is one of the factors or products of the disease. A prognostic biomarker can be indirectly associated with a disease if it is related to a change caused by the exposure, or related to an unknown factor connected with the exposure or disease.

Predictive

Predictive biomarkers measure the effect of a drug and tell if the drug is having its expected activity, but do not offer any direct information on the disease. Predictive biomarkers are highly sensitive and specific; therefore they increase diagnostic validity of a drug or toxin's site-specific effect by eliminating recall bias and subjectivity from those exposed. For example, when an individual is exposed to a drug or toxin, the concentration of that drug or toxin within the body, or the biological effective dose, provides a more accurate prediction for the effect of the drug or toxin compared to an estimation or measurement of the toxin from the origin or external environment.

Pharmacodynamic

Pharmacodynamic (PD) biomarkers can measure the direct interaction between a drug and its receptor. Pharmacodynamic biomarkers reveal drug mechanisms, if the drug has its intended effect on the biology of the disease, ideal biological dosing concentrations, and physiologic response/resistance mechanisms. Pharmacodynamic biomarkers are particularly relevant in drug mechanisms of tumor cells, where pharmacodynamic endpoints for drug interventions can be assessed directly on tumor tissues. For example, protein phosphorylation biomarkers indicate alterations in target protein kinases and activation of downstream signaling molecules.

Types

Biomarkers validated by genetic and molecular biology methods can be classified into three types.

  • Type 0 — Natural history markers
  • Type 1 — Drug activity markers
  • Type 2 — Surrogate markers

Discovery of molecular biomarkers

Molecular biomarkers have been defined as biomarkers that can be discovered using basic and accepted platforms such as genomics and proteomics. Many genomic and proteomic techniques are available for biomarker discovery. Apart from genomics and proteomics platforms, biomarker assay techniques such as metabolomics, lipidomics, glycomics, and secretomics are the most commonly used in the identification of biomarkers.

Clinical applications

Biomarkers can be classified by their clinical applications as molecular biomarkers, cellular biomarkers or imaging biomarkers.

Molecular

Four of the main types of molecular biomarkers are genomic biomarkers, transcriptomic biomarkers, proteomic biomarkers and metabolic biomarkers.

Genomic

Genomic biomarkers analyze DNA by identifying irregular sequences in the genome, typically a single nucleotide polymorphism. Genomic biomarkers are particularly significant in cancer because most cancer cell lines carry somatic mutations. Somatic mutations are distinguishable from hereditary mutations because the mutation is not present in every cell, just the tumor cells, which makes the tumor cells easier to target.

Transcriptomic

Transcriptomic biomarkers analyze all RNA molecules, not solely the exome. Transcriptomic biomarkers reveal the molecular identity and concentration of RNA in a specific cell or population. Pattern-based RNA expression analysis provides increased diagnostic and prognostic capability in predicting therapeutic responses for individuals. For example, distinct RNA subtypes in breast cancer patients have different survival rates.

Proteomic

Proteomics permits the quantitative analysis and detection of changes to proteins or protein biomarkers. Protein biomarkers detect a variety of biological changes, such as protein-protein interactions, post-translational modifications and immunological responses.

Cellular

Cellular biomarkers allow cells to be isolated, sorted, quantified and characterized by their morphology and physiology. Cellular biomarkers are used in both clinical and laboratory settings, and can discriminate between cells in a large sample based on their antigens. An example of a cellular biomarker sorting technique is fluorescence-activated cell sorting (FACS).

Imaging biomarkers

Imaging biomarkers allow earlier detection of disease compared to molecular biomarkers, and streamline translational research in drug discovery. For example, one could determine the percentage of receptors a drug targets, reducing the time and cost of research during new drug development. Imaging biomarkers are also non-invasive, which is a clinical advantage over molecular biomarkers. Some image-based biomarkers are X-ray, Computed Tomography (CT), Positron Emission Tomography (PET), Single-Photon Emission Computed Tomography (SPECT) and Magnetic Resonance Imaging (MRI).

Many new biomarkers are being developed that involve imaging technology. Imaging biomarkers have many advantages. They are usually noninvasive, and they produce intuitive, multidimensional results. Yielding both qualitative and quantitative data, they are usually relatively comfortable for patients. When combined with other sources of information, they can be very useful to clinicians seeking to make a diagnosis.

Cardiac imaging is an active area of biomarker research. Coronary angiography, an invasive procedure requiring catheterization, has long been the gold standard for diagnosing arterial stenosis, but scientists and doctors hope to develop noninvasive techniques. Many believe that cardiac computed tomography (CT) has great potential in this area, but researchers are still attempting to overcome problems related to "calcium blooming," a phenomenon in which calcium deposits interfere with image resolution. Other intravascular imaging techniques involving magnetic resonance imaging (MRI), optical coherence tomography (OCT), and near infrared spectroscopy are also being investigated.

Another new imaging biomarker involves radiolabeled fludeoxyglucose. Positron emission tomography (PET) can be used to measure where in the body cells take up glucose. By tracking glucose, doctors can find sites of inflammation because macrophages there take up glucose at high levels. Tumors also take up a lot of glucose, so the imaging strategy can be used to monitor them as well. Tracking radiolabeled glucose is a promising technique because it directly measures a step known to be crucial to inflammation and tumor growth.

Imaging disease biomarkers by magnetic resonance imaging (MRI)

MRI has the advantages of very high spatial resolution and is very adept at morphological and functional imaging. MRI does have several disadvantages, though. First, MRI has a sensitivity of around 10⁻³ mol/L to 10⁻⁵ mol/L, which, compared to other types of imaging, can be very limiting. This problem stems from the fact that the population difference between spins in the high-energy state and the low-energy state is very small. For example, at 1.5 tesla, a typical field strength for clinical MRI, the difference between the high- and low-energy states is approximately 9 molecules per 2 million. Improvements to increase MR sensitivity include increasing magnetic field strength, and hyperpolarization via optical pumping or dynamic nuclear polarization. There are also a variety of signal amplification schemes based on chemical exchange that increase sensitivity.
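That population difference follows from the Boltzmann distribution. A rough back-of-the-envelope check (assuming ¹H spins at roughly body temperature; the article does not state the nucleus or temperature behind its figure) gives a number of the same order as the "9 per 2 million" quoted above:

```python
import math

# Physical constants (SI units)
H = 6.62607015e-34      # Planck constant, J*s
K_B = 1.380649e-23      # Boltzmann constant, J/K
GAMMA_1H = 42.577e6     # proton gyromagnetic ratio, Hz/T

def spin_excess_fraction(b_field_tesla, temp_kelvin):
    """Fractional excess of spins in the low-energy (aligned) state,
    tanh(dE / 2kT), where the Zeeman splitting is dE = h * gamma * B."""
    delta_e = H * GAMMA_1H * b_field_tesla
    return math.tanh(delta_e / (2 * K_B * temp_kelvin))

# At 1.5 T and body temperature (~310 K):
frac = spin_excess_fraction(1.5, 310.0)
print(f"excess spins per 2 million: {frac * 2e6:.1f}")  # on the order of 10
```

The result, roughly ten excess spins per two million, matches the order of magnitude in the text; it is this tiny polarization that limits MR sensitivity and motivates higher fields and hyperpolarization.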

To achieve molecular imaging of disease biomarkers using MRI, targeted MRI contrast agents with high specificity and high relaxivity (sensitivity) are required. To date, many studies have been devoted to developing targeted-MRI contrast agents to achieve molecular imaging by MRI. Commonly, peptides, antibodies, or small ligands, and small protein domains, such as HER-2 affibodies, have been applied to achieve targeting. To enhance the sensitivity of the contrast agents, these targeting moieties are usually linked to high payload MRI contrast agents or MRI contrast agents with high relaxivities.

Examples

  • Embryonic: Embryonic biomarkers are important in fetal development, as each cell's role is determined through the use of biomarkers. Research has been conducted concerning the use of embryonic stem cells (ESCs) in regenerative medicine. This is because certain biomarkers within a cell could be altered (most likely in the tertiary stage of their formation) to change the future role of the cell, thereby creating new ones. One example of an embryonic biomarker is the protein Oct-4.
  • Autism: ASDs are complex; autism is a medical condition with several etiologies arising from the interactions between environmental conditions and genetic vulnerability. The challenge in identifying biomarkers related to ASDs is that they may reflect genetic or neurobiological changes that may be active only up to a certain point. ASDs show heterogeneous clinical symptoms and genetic architecture, which have hindered the identification of common genetic susceptibility factors. Still, much research is being done to identify the main causes of this genetic heterogeneity.
  • Cancer: Biomarkers have great potential for guiding therapeutic interventions in cancer patients. Most cancer biomarkers consist of proteins or altered segments of DNA, and are expressed in all cells, just at higher rates in cancer cells. There is not yet one universal tumor biomarker, but there is a biomarker for every type of cancer. These tumor biomarkers are used to track the health of tumors, but cannot serve as the sole diagnostic for specific cancers. Examples of tumor markers used to follow up cancer treatment are carcinoembryonic antigen (CEA) for colorectal cancer and prostate-specific antigen (PSA) for prostate cancer. In 2014, cancer research identified circulating tumor cells (CTCs) and circulating tumor DNA (ctDNA) as metastasizing tumor biomarkers with distinctive cellular differentiation and prognostic value. Innovative technology needs to be harnessed to determine the full capabilities of CTCs and ctDNA, but insight into their roles has potential for new understanding of cancer evolution, invasion and metastasis.

Potential disadvantages

Not all biomarkers should be used as surrogate endpoints to assess clinical outcomes. Biomarkers can be difficult to validate and require different levels of validation depending on their intended use. If a biomarker is to be used to measure the success of a therapeutic intervention, the biomarker should reflect a direct effect of that medicine.

Positivism


From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Positivism
A portrait of Auguste Comte, the founder of modern positivism

Positivism is a philosophical school that holds that all genuine knowledge is either true by definition or positive—meaning a posteriori facts derived by reason and logic from sensory experience. Other ways of knowing, such as intuition, introspection, or religious faith, are rejected or considered meaningless.

Although the positivist approach has been a recurrent theme in the history of western thought, modern positivism was first articulated in the early 19th century by Auguste Comte. His school of sociological positivism holds that society, like the physical world, operates according to general laws. After Comte, positivist schools arose in logic, psychology, economics, historiography, and other fields of thought. Generally, positivists attempted to introduce scientific methods to their respective fields. Since the turn of the 20th century, positivism has declined under criticism from antipositivists and critical theorists, among others, for its alleged scientism, reductionism, overgeneralizations, and methodological limitations.

Etymology

The English noun positivism was re-imported in the 19th century from the French word positivisme, derived from positif in its philosophical sense of 'imposed on the mind by experience'. The corresponding adjective (Latin: positīvus) has been used in a similar sense to discuss law (positive law compared to natural law) since the time of Chaucer.

Background

Kieran Egan argues that positivism can be traced to the philosophy side of what Plato described as the quarrel between philosophy and poetry, later reformulated by Wilhelm Dilthey as a quarrel between the natural sciences (German: Naturwissenschaften) and the humanities (Geisteswissenschaften).

In the early nineteenth century, massive advances in the natural sciences encouraged philosophers to apply scientific methods to other fields. Thinkers such as Henri de Saint-Simon, Pierre-Simon Laplace and Auguste Comte believed the scientific method, the circular dependence of theory and observation, must replace metaphysics in the history of thought.

Positivism in the social sciences

Comte's positivism

Comte first laid out his theory of positivism in The Course in Positive Philosophy

Auguste Comte (1798–1857) first described the epistemological perspective of positivism in The Course in Positive Philosophy, a series of texts published between 1830 and 1842. These texts were followed by the 1844 work, A General View of Positivism (published in French 1848, English in 1865). The first three volumes of the Course dealt chiefly with the physical sciences already in existence (mathematics, astronomy, physics, chemistry, biology), whereas the latter two emphasized the inevitable coming of social science. Observing the circular dependence of theory and observation in science, and classifying the sciences in this way, Comte may be regarded as the first philosopher of science in the modern sense of the term. For him, the physical sciences had necessarily to arrive first, before humanity could adequately channel its efforts into the most challenging and complex "Queen science" of human society itself. His View of Positivism therefore set out to define the empirical goals of sociological method.

"The most important thing to determine was the natural order in which the sciences stand—not how they can be made to stand, but how they must stand, irrespective of the wishes of any one. ... This Comte accomplished by taking as the criterion of the position of each the degree of what he called "positivity," which is simply the degree to which the phenomena can be exactly determined. This, as may be readily seen, is also a measure of their relative complexity, since the exactness of a science is in inverse proportion to its complexity. The degree of exactness or positivity is, moreover, that to which it can be subjected to mathematical demonstration, and therefore mathematics, which is not itself a concrete science, is the general gauge by which the position of every science is to be determined. Generalizing thus, Comte found that there were five great groups of phenomena of equal classificatory value but of successively decreasing positivity. To these he gave the names astronomy, physics, chemistry, biology, and sociology."

— Lester F. Ward, The Outlines of Sociology (1898)

Comte offered an account of social evolution, proposing that society undergoes three phases in its quest for the truth according to a general "law of three stages". Comte intended to develop a secular-scientific ideology in the wake of European secularisation.

Comte's stages were (1) the theological, (2) the metaphysical, and (3) the positive. The theological phase of man was based on whole-hearted belief in all things with reference to God. God, Comte says, had reigned supreme over human existence pre-Enlightenment. Humanity's place in society was governed by its association with the divine presences and with the church. The theological phase deals with humankind's accepting the doctrines of the church (or place of worship) rather than relying on its rational powers to explore basic questions about existence. It dealt with the restrictions put in place by the religious organization at the time and the total acceptance of any "fact" adduced for society to believe.

Comte describes the metaphysical phase of humanity as the time since the Enlightenment, a time steeped in logical rationalism, to the time right after the French Revolution. This second phase states that the universal rights of humanity are most important. The central idea is that humanity is invested with certain rights that must be respected. In this phase, democracies and dictators rose and fell in attempts to maintain the innate rights of humanity.

The final stage of the trilogy of Comte's universal law is the scientific, or positive, stage. The central idea of this phase is that individual rights are more important than the rule of any one person. Comte stated that the idea of humanity's ability to govern itself makes this stage inherently different from the rest. There is no higher power governing the masses and the intrigue of any one person can achieve anything based on that individual's free will. The third principle is most important in the positive stage. Comte calls these three phases the universal rule in relation to society and its development. Neither the second nor the third phase can be reached without the completion and understanding of the preceding stage. All stages must be completed in progress.

Comte believed that the appreciation of the past and the ability to build on it towards the future was key in transitioning from the theological and metaphysical phases. The idea of progress was central to Comte's new science, sociology. Sociology would "lead to the historical consideration of every science" because "the history of one science, including pure political history, would make no sense unless it was attached to the study of the general progress of all of humanity". As Comte would say: "from science comes prediction; from prediction comes action." It is a philosophy of human intellectual development that culminated in science. The irony of this series of phases is that though Comte attempted to prove that human development has to go through these three stages, it seems that the positivist stage is far from becoming a realization. This is due to two truths: the positivist phase requires a complete understanding of the universe and the world around us, and it requires that society never know whether it is in this positivist phase. Anthony Giddens argues that since humanity constantly uses science to discover and research new things, humanity never progresses beyond the second, metaphysical phase.

Positivist temple in Porto Alegre, Brazil

Comte's fame today owes in part to Émile Littré, who founded The Positivist Review in 1867. As an approach to the philosophy of history, positivism was appropriated by historians such as Hippolyte Taine. Many of Comte's writings were translated into English by the Whig writer Harriet Martineau, regarded by some as the first female sociologist. Debates continue to rage as to how much Comte appropriated from the work of his mentor, Saint-Simon. He was nevertheless influential: Brazilian thinkers turned to Comte's ideas about training a scientific elite in order to flourish in the industrialization process. Brazil's national motto, Ordem e Progresso ("Order and Progress"), was taken from the positivist motto "Love as principle, order as the basis, progress as the goal", which was also influential in Poland.

In later life, Comte developed a 'religion of humanity' for positivist societies in order to fulfil the cohesive function once held by traditional worship. In 1849, he proposed a calendar reform called the 'positivist calendar'. For close associate John Stuart Mill, it was possible to distinguish between a "good Comte" (the author of the Course in Positive Philosophy) and a "bad Comte" (the author of the secular-religious system). The system was unsuccessful, but together with the publication of Darwin's On the Origin of Species it influenced the proliferation of various secular humanist organizations in the 19th century, especially through the work of secularists such as George Holyoake and Richard Congreve. Although Comte's English followers, including George Eliot and Harriet Martineau, for the most part rejected the full gloomy panoply of his system, they liked the idea of a religion of humanity and his injunction to "vivre pour autrui" ("live for others", from which comes the word "altruism").

The early sociology of Herbert Spencer came about broadly as a reaction to Comte; writing after various developments in evolutionary biology, Spencer attempted (in vain) to reformulate the discipline in what we might now describe as socially Darwinistic terms.

Early followers of Comte

Within a few years, other scientific and philosophical thinkers began creating their own definitions for positivism. These included Émile Zola, Émile Hennequin, Wilhelm Scherer, and Dmitri Pisarev. Fabien Magnin was the first working-class adherent to Comte's ideas, and became the leader of a movement known as "Proletarian Positivism". Comte appointed Magnin as his successor as president of the Positive Society in the event of Comte's death. Magnin filled this role from 1857 to 1880, when he resigned. Magnin was in touch with the English positivists Richard Congreve and Edward Spencer Beesly. He established the Cercle des prolétaires positivistes in 1863, which was affiliated to the First International. Eugène Sémérie was a psychiatrist who was also involved in the Positivist movement, setting up a positivist club in Paris after the foundation of the French Third Republic in 1870. He wrote: "Positivism is not only a philosophical doctrine, it is also a political party which claims to reconcile order—the necessary basis for all social activity—with Progress, which is its goal."

Durkheim's positivism

Émile Durkheim

The modern academic discipline of sociology began with the work of Émile Durkheim (1858–1917). While Durkheim rejected much of the detail of Comte's philosophy, he retained and refined its method, maintaining that the social sciences are a logical continuation of the natural ones into the realm of human activity, and insisting that they may retain the same objectivity, rationalism, and approach to causality. Durkheim set up the first European department of sociology at the University of Bordeaux in 1895, publishing his Rules of the Sociological Method (1895). In this text he argued: "[o]ur main goal is to extend scientific rationalism to human conduct... What has been called our positivism is but a consequence of this rationalism."

Durkheim's seminal monograph, Suicide (1897), a case study of suicide rates amongst Catholic and Protestant populations, distinguished sociological analysis from psychology or philosophy. By carefully examining suicide statistics in different police districts, he attempted to demonstrate that Catholic communities have a lower suicide rate than Protestants, something he attributed to social (as opposed to individual or psychological) causes. He developed the notion of objective sui generis "social facts" to delineate a unique empirical object for the science of sociology to study. Through such studies, he posited, sociology would be able to determine whether a given society is 'healthy' or 'pathological', and seek social reform to negate organic breakdown or "social anomie". Durkheim described sociology as the "science of institutions, their genesis and their functioning".

David Ashley and David M. Orenstein have argued, in a textbook published by Pearson Education, that accounts of Durkheim's positivism are possibly exaggerated and oversimplified; Comte was the only major sociological thinker to postulate that the social realm may be subject to scientific analysis in exactly the same way as natural science, whereas Durkheim saw a far greater need for a distinctly sociological scientific methodology. His lifework was fundamental in the establishment of practical social research as we know it today—techniques which continue beyond sociology and form the methodological basis of other social sciences, such as political science, as well as of market research and other fields.

Historical positivism

In historiography, historical or documentary positivism is the belief that historians should pursue the objective truth of the past by allowing historical sources to "speak for themselves", without additional interpretation. In the words of the French historian Fustel de Coulanges, as a positivist, "It is not I who am speaking, but history itself". The heavy emphasis placed by historical positivists on documentary sources led to the development of methods of source criticism, which seek to expunge bias and uncover original sources in their pristine state.

The origin of the historical positivist school is particularly associated with the 19th-century German historian Leopold von Ranke, who argued that the historian should seek to describe historical truth "wie es eigentlich gewesen ist" ("as it actually was")—though subsequent historians of the concept, such as Georg Iggers, have argued that its development owed more to Ranke's followers than Ranke himself.[32]

Historical positivism was critiqued in the 20th century by historians and philosophers of history from various schools of thought, including Ernst Kantorowicz in Weimar Germany—who argued that "positivism ... faces the danger of becoming Romantic when it maintains that it is possible to find the Blue Flower of truth without preconceptions"—and Raymond Aron and Michel Foucault in postwar France, who both posited that interpretations are always ultimately multiple and there is no final objective truth to recover. In his posthumously published 1946 The Idea of History, the English historian R. G. Collingwood criticized historical positivism for conflating scientific facts with historical facts, which are always inferred and cannot be confirmed by repetition, and argued that its focus on the "collection of facts" had given historians "unprecedented mastery over small-scale problems", but "unprecedented weakness in dealing with large-scale problems".

Historicist arguments against positivist approaches in historiography include that history differs from sciences like physics and ethology in subject matter and method; that much of what history studies is nonquantifiable, and therefore to quantify is to lose in precision; and that experimental methods and mathematical models do not generally apply to history, so that it is not possible to formulate general (quasi-absolute) laws in history.

Other subfields

In psychology the positivist movement was influential in the development of operationalism. Percy Bridgman's 1927 philosophy of science book The Logic of Modern Physics, originally intended for physicists, coined the term operational definition, which went on to dominate psychological method for the whole century.

In economics, practicing researchers tend to emulate the methodological assumptions of classical positivism, but only in a de facto fashion: the majority of economists do not explicitly concern themselves with matters of epistemology. Economic thinker Friedrich Hayek (see "Law, Legislation and Liberty") rejected positivism in the social sciences as hopelessly limited in comparison to evolved and divided knowledge. For example, much (positivist) legislation falls short in contrast to pre-literate or incompletely defined common or evolved law.

In jurisprudence, "legal positivism" essentially refers to the rejection of natural law; thus its common meaning with philosophical positivism is somewhat attenuated and in recent generations generally emphasizes the authority of human political structures as opposed to a "scientific" view of law.

Logical positivism

Moritz Schlick, the founding father of logical positivism and the Vienna Circle

Logical positivism (later and more accurately called logical empiricism) is a school of philosophy that combines empiricism, the idea that observational evidence is indispensable for knowledge of the world, with a version of rationalism, the idea that our knowledge includes a component that is not derived from observation.

Logical positivism grew from the discussions of a group called the "First Vienna Circle", which gathered at the Café Central before World War I. After the war Hans Hahn, a member of that early group, helped bring Moritz Schlick to Vienna. Schlick's Vienna Circle, along with Hans Reichenbach's Berlin Circle, propagated the new doctrines more widely in the 1920s and early 1930s.

It was Otto Neurath's advocacy that made the movement self-conscious and more widely known. A 1929 pamphlet written by Neurath, Hahn, and Rudolf Carnap summarized the doctrines of the Vienna Circle at that time. These included the opposition to all metaphysics, especially ontology and synthetic a priori propositions; the rejection of metaphysics not as wrong but as meaningless (i.e., not empirically verifiable); a criterion of meaning based on Ludwig Wittgenstein's early work (which he himself later set out to refute); the idea that all knowledge should be codifiable in a single standard language of science; and above all the project of "rational reconstruction," in which ordinary-language concepts were gradually to be replaced by more precise equivalents in that standard language. However, the project is widely considered to have failed.

After moving to the United States, Carnap proposed a replacement for the earlier doctrines in his Logical Syntax of Language. This change of direction, and the somewhat differing beliefs of Reichenbach and others, led to a consensus that the English name for the shared doctrinal platform, in its American exile from the late 1930s, should be "logical empiricism." While the logical positivist movement is now considered dead, it has continued to influence philosophical development.

Criticism

Historically, positivism has been criticized for its reductionism, i.e., for contending that all "processes are reducible to physiological, physical or chemical events," "social processes are reducible to relationships between and actions of individuals," and that "biological organisms are reducible to physical systems."

The idea that laws in physics may be relative rather than absolute (and that, if so, this might be even more true of the social sciences) was stated in different terms by G. B. Vico in 1725. Vico, in contrast to the positivist movement, asserted the superiority of the science of the human mind (the humanities, in other words), on the grounds that natural sciences tell us nothing about the inward aspects of things.

Wilhelm Dilthey fought strenuously against the assumption that only explanations derived from science are valid. He reprised Vico's argument that scientific explanations do not reach the inner nature of phenomena and it is humanistic knowledge that gives us insight into thoughts, feelings and desires. Dilthey was in part influenced by the historicism of Leopold von Ranke (1795–1886).

The contestation over positivism is reflected both in older debates (see the Positivism dispute) and current ones over the proper role of science in the public sphere. Public sociology—especially as described by Michael Burawoy—argues that sociologists should use empirical evidence to display the problems of society so they might be changed.

Antipositivism

At the turn of the 20th century, the first wave of German sociologists formally introduced methodological antipositivism, proposing that research should concentrate on human cultural norms, values, symbols, and social processes viewed from a subjective perspective. Max Weber, one such thinker, argued that while sociology may be loosely described as a 'science' because it is able to identify causal relationships (especially among ideal types), sociologists should seek relationships that are not as "ahistorical, invariant, or generalizable" as those pursued by natural scientists. Weber regarded sociology as the study of social action, using critical analysis and verstehen techniques. The sociologists Georg Simmel, Ferdinand Tönnies, George Herbert Mead, and Charles Cooley were also influential in the development of sociological antipositivism, whilst neo-Kantian philosophy, hermeneutics, and phenomenology facilitated the movement in general.

Critical rationalism and postpositivism

In the mid-twentieth century, several important philosophers and philosophers of science began to critique the foundations of logical positivism. In his 1934 work The Logic of Scientific Discovery, Karl Popper argued against verificationism. A statement such as "all swans are white" cannot actually be empirically verified, because it is impossible to know empirically whether all swans have been observed. Instead, Popper argued that at best an observation can falsify a statement (for example, observing a black swan would prove that not all swans are white). Popper also held that scientific theories talk about how the world really is (not about phenomena or observations experienced by scientists), and critiqued the Vienna Circle in his Conjectures and Refutations. W. V. O. Quine and Pierre Duhem went even further. The Duhem–Quine thesis states that it is impossible to experimentally test a scientific hypothesis in isolation, because an empirical test of the hypothesis requires one or more background assumptions (also called auxiliary assumptions or auxiliary hypotheses); thus, unambiguous scientific falsifications are also impossible. Thomas Kuhn, in his 1962 book The Structure of Scientific Revolutions, put forward his theory of paradigm shifts. He argued that it is not simply individual theories but whole worldviews that must occasionally shift in response to evidence.

Together, these ideas led to the development of critical rationalism and postpositivism. Postpositivism is not a rejection of the scientific method, but rather a reformation of positivism to meet these critiques. It reintroduces the basic assumptions of positivism: the possibility and desirability of objective truth, and the use of experimental methodology. Postpositivism of this type is described in social science guides to research methods. Postpositivists argue that theories, hypotheses, background knowledge and values of the researcher can influence what is observed. Postpositivists pursue objectivity by recognizing the possible effects of biases. While positivists emphasize quantitative methods, postpositivists consider both quantitative and qualitative methods to be valid approaches.

In the early 1960s, the positivism dispute arose between the critical theorists (see below) and the critical rationalists over the correct solution to the value judgment dispute (Werturteilsstreit). While both sides accepted that sociology cannot avoid a value judgement that inevitably influences subsequent conclusions, the critical theorists accused the critical rationalists of being positivists; specifically, of asserting that empirical questions can be severed from their metaphysical heritage and refusing to ask questions that cannot be answered with scientific methods. This contributed to what Karl Popper termed the "Popper Legend", a misconception among critics and admirers of Popper that he was, or identified himself as, a positivist.

Critical theory

Although Karl Marx's theory of historical materialism drew upon positivism, the Marxist tradition would also go on to influence the development of antipositivist critical theory. Critical theorist Jürgen Habermas critiqued pure instrumental rationality (in its relation to the cultural "rationalisation" of the modern West) as a form of scientism, or science "as ideology". He argued that positivism may be espoused by "technocrats" who believe in the inevitability of social progress through science and technology. New movements, such as critical realism, have emerged in order to reconcile postpositivist aims with various so-called 'postmodern' perspectives on the social acquisition of knowledge.

Max Horkheimer criticized the classic formulation of positivism on two grounds. First, he claimed that it falsely represented human social action. The first criticism argued that positivism systematically failed to appreciate the extent to which the so-called social facts it yielded did not exist 'out there', in the objective world, but were themselves a product of socially and historically mediated human consciousness. Positivism ignored the role of the 'observer' in the constitution of social reality and thereby failed to consider the historical and social conditions affecting the representation of social ideas. Positivism falsely represented the object of study by reifying social reality as existing objectively and independently of the labour that actually produced those conditions. Secondly, he argued, representation of social reality produced by positivism was inherently and artificially conservative, helping to support the status quo, rather than challenging it. This character may also explain the popularity of positivism in certain political circles. Horkheimer argued, in contrast, that critical theory possessed a reflexive element lacking in the positivistic traditional theory.

Some scholars today hold the beliefs critiqued in Horkheimer's work, but since the time of his writing critiques of positivism, especially from philosophy of science, have led to the development of postpositivism. This philosophy greatly relaxes the epistemological commitments of logical positivism and no longer claims a separation between the knower and the known. Rather than dismissing the scientific project outright, postpositivists seek to transform and amend it, though the exact extent of their affinity for science varies vastly. For example, some postpositivists accept the critique that observation is always value-laden, but argue that the best values to adopt for sociological observation are those of science: skepticism, rigor, and modesty. Just as some critical theorists see their position as a moral commitment to egalitarian values, these postpositivists see their methods as driven by a moral commitment to these scientific values. Such scholars may see themselves as either positivists or antipositivists.

Other criticisms

During the later twentieth century, positivism began to fall out of favor with scientists as well. Later in his career, German theoretical physicist Werner Heisenberg, Nobel laureate for his pioneering work in quantum mechanics, distanced himself from positivism:

The positivists have a simple solution: the world must be divided into that which we can say clearly and the rest, which we had better pass over in silence. But can any one conceive of a more pointless philosophy, seeing that what we can say clearly amounts to next to nothing? If we omitted all that is unclear we would probably be left with completely uninteresting and trivial tautologies.

In the early 1970s, urbanists of the quantitative school like David Harvey started to question the positivist approach itself, saying that the arsenal of scientific theories and methods developed so far in their camp were "incapable of saying anything of depth and profundity" on the real problems of contemporary cities.

According to the Catholic Encyclopedia, positivism has also come under fire on religious and philosophical grounds; its critics state that truth begins in sense experience, but does not end there. Positivism fails to prove that there are not abstract ideas, laws, and principles beyond particular observable facts and relationships and necessary principles, or that we cannot know them. Nor does it prove that material and corporeal things constitute the whole order of existing beings, and that our knowledge is limited to them. According to positivism, our abstract concepts or general ideas are mere collective representations of the experimental order: for example, the idea of "man" is a kind of blended image of all the men observed in our experience. This runs contrary to a Platonic or Christian ideal, where an idea can be abstracted from any concrete determination, and may be applied identically to an indefinite number of objects of the same class. From the idea's perspective, Platonism is more precise. Defining an idea as a sum of collective images is imprecise and more or less confused, and becomes more so as the collection represented increases. An idea defined explicitly always remains clear.

Other new movements, such as critical realism, have emerged in opposition to positivism. Critical realism seeks to reconcile the overarching aims of social science with postmodern critiques. Experientialism, which arose with second generation cognitive science, asserts that knowledge begins and ends with experience itself. In other words, it rejects the positivist assertion that a portion of human knowledge is a priori.

Positivism today

Echoes of the "positivist" and "antipositivist" debate persist today, though this conflict is hard to define. Authors writing in different epistemological perspectives do not phrase their disagreements in the same terms and rarely actually speak directly to each other. To complicate the issues further, few practising scholars explicitly state their epistemological commitments, and their epistemological position thus has to be guessed from other sources such as choice of methodology or theory. However, no perfect correspondence between these categories exists, and many scholars critiqued as "positivists" are actually postpositivists. One scholar has described this debate in terms of the social construction of the "other", with each side defining the other by what it is not rather than what it is, and then proceeding to attribute far greater homogeneity to their opponents than actually exists. Thus, it is better to understand this not as a debate but as two different arguments: the "antipositivist" articulation of a social meta-theory which includes a philosophical critique of scientism, and "positivist" development of a scientific research methodology for sociology with accompanying critiques of the reliability and validity of work that they see as violating such standards.

Social sciences

While most social scientists today are not explicit about their epistemological commitments, articles in top American sociology and political science journals generally follow a positivist logic of argument. It can be thus argued that "natural science and social science [research articles] can therefore be regarded with a good deal of confidence as members of the same genre".

In contemporary social science, strong accounts of positivism have long since fallen out of favour. Practitioners of positivism today acknowledge in far greater detail observer bias and structural limitations. Modern positivists generally eschew metaphysical concerns in favour of methodological debates concerning clarity, replicability, reliability and validity. This positivism is generally equated with "quantitative research" and thus carries no explicit theoretical or philosophical commitments. The institutionalization of this kind of sociology is often credited to Paul Lazarsfeld, who pioneered large-scale survey studies and developed statistical techniques for analyzing them. This approach lends itself to what Robert K. Merton called middle-range theory: abstract statements that generalize from segregated hypotheses and empirical regularities rather than starting with an abstract idea of a social whole.

In the original Comtean usage, the term "positivism" roughly meant the use of scientific methods to uncover the laws according to which both physical and human events occur, while "sociology" was the overarching science that would synthesize all such knowledge for the betterment of society. "Positivism is a way of understanding based on science"; people rely not on faith in God but on science to understand humanity. "Antipositivism" formally dates back to the start of the twentieth century, and is based on the belief that natural and human sciences are ontologically and epistemologically distinct. Neither of these terms is used any longer in this sense. There are no fewer than twelve distinct epistemologies that are referred to as positivism. Many of these approaches do not self-identify as "positivist", some because they themselves arose in opposition to older forms of positivism, and some because the label has over time become a term of abuse by being mistakenly linked with a theoretical empiricism. The extent of antipositivist criticism has also become broad, with many philosophies broadly rejecting scientifically based social epistemology and others only seeking to amend it to reflect 20th-century developments in the philosophy of science. However, positivism (understood as the use of scientific methods for studying society) remains the dominant approach to both research and theory construction in contemporary sociology, especially in the United States.

The majority of articles published in leading American sociology and political science journals today are positivist (at least to the extent of being quantitative rather than qualitative). This popularity may be because research utilizing positivist quantitative methodologies holds a greater prestige in the social sciences than qualitative work; quantitative work is easier to justify, as data can be manipulated to answer any question. Such research is generally perceived as being more scientific and more trustworthy, and thus has a greater impact on policy and public opinion (though such judgments are frequently contested by scholars doing non-positivist work).

Natural sciences

The key features of positivism as of the 1950s, as defined in the "received view", are:

  1. A focus on science as a product, a linguistic or numerical set of statements;
  2. A concern with axiomatization, that is, with demonstrating the logical structure and coherence of these statements;
  3. An insistence on at least some of these statements being testable, that is, amenable to being verified, confirmed, or shown to be false by the empirical observation of reality (statements that would, by their nature, be regarded as untestable included the teleological; thus positivism rejects much of classical metaphysics);
  4. The belief that science is markedly cumulative;
  5. The belief that science is predominantly transcultural;
  6. The belief that science rests on specific results that are dissociated from the personality and social position of the investigator;
  7. The belief that science contains theories or research traditions that are largely commensurable;
  8. The belief that science sometimes incorporates new ideas that are discontinuous from old ones;
  9. The belief that science involves the idea of the unity of science, that there is, underlying the various scientific disciplines, basically one science about one real world;
  10. The belief that science is nature and nature is science, and out of this duality all theories and postulates are created, interpreted, evolve, and are applied.

Stephen Hawking was a recent high-profile advocate of positivism in the physical sciences. In The Universe in a Nutshell (p. 31) he wrote:

Any sound scientific theory, whether of time or of any other concept, should in my opinion be based on the most workable philosophy of science: the positivist approach put forward by Karl Popper and others. According to this way of thinking, a scientific theory is a mathematical model that describes and codifies the observations we make. A good theory will describe a large range of phenomena on the basis of a few simple postulates and will make definite predictions that can be tested. ... If one takes the positivist position, as I do, one cannot say what time actually is. All one can do is describe what has been found to be a very good mathematical model for time and say what predictions it makes.

Quantitative research

Quantitative research is a research strategy that focuses on quantifying the collection and analysis of data. It is grounded in a deductive approach, in which emphasis is placed on the testing of theory, and is shaped by empiricist and positivist philosophies.

Associated with the natural, applied, formal, and social sciences, this research strategy promotes the objective empirical investigation of observable phenomena to test and understand relationships. This is done through a range of quantifying methods and techniques, reflecting its broad utilization as a research strategy across differing academic disciplines.

There are several situations where quantitative research may not be the most appropriate or effective method to use:

  1. When exploring in-depth or complex topics;
  2. When studying subjective experiences and personal opinions;
  3. When conducting exploratory research;
  4. When studying sensitive or controversial topics.

The objective of quantitative research is to develop and employ mathematical models, theories, and hypotheses pertaining to phenomena. The process of measurement is central to quantitative research because it provides the fundamental connection between empirical observation and mathematical expression of quantitative relationships.

Quantitative data is any data that is in numerical form such as statistics, percentages, etc. The researcher analyses the data with the help of statistics and hopes the numbers will yield an unbiased result that can be generalized to some larger population. Qualitative research, on the other hand, inquires deeply into specific experiences, with the intention of describing and exploring meaning through text, narrative, or visual-based data, by developing themes exclusive to that set of participants.
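As a minimal illustration of the kind of numerical summary this paragraph describes, the following sketch computes descriptive statistics for a small set of hypothetical survey responses using only Python's standard library (the data and scale are invented for the example):

```python
import statistics

# Hypothetical survey responses on a 1-to-5 agreement scale
responses = [4, 5, 3, 4, 2, 5, 4, 3, 4, 5]

mean = statistics.mean(responses)    # central tendency of the sample
stdev = statistics.stdev(responses)  # sample standard deviation (spread)

print(f"mean = {mean:.2f}, stdev = {stdev:.2f}")
# prints "mean = 3.90, stdev = 0.99"
```

In practice a researcher would report such summaries for a sample large enough that, under appropriate sampling assumptions, they generalize to the wider population of interest.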

Quantitative research is widely used in psychology, economics, demography, sociology, marketing, community health, health & human development, gender studies, and political science; and less frequently in anthropology and history. Research in mathematical sciences, such as physics, is also "quantitative" by definition, though this use of the term differs in context. In the social sciences, the term relates to empirical methods originating in both philosophical positivism and the history of statistics, in contrast with qualitative research methods.

Qualitative research produces information only on the particular cases studied, and any more general conclusions are only hypotheses. Quantitative methods can be used to verify which of such hypotheses are true. A comprehensive analysis of 1274 articles published in the top two American sociology journals between 1935 and 2005 found that roughly two-thirds of these articles used quantitative methods.

Overview

Quantitative research is generally closely affiliated with ideas from 'the scientific method', which can include:

  • The generation of models, theories and hypotheses
  • The development of instruments and methods for measurement
  • Experimental control and manipulation of variables
  • Collection of empirical data
  • Modeling and analysis of data

Quantitative research is often contrasted with qualitative research, which purports to be focused more on discovering underlying meanings and patterns of relationships, including classifications of types of phenomena and entities, in a manner that does not involve mathematical models. Approaches to quantitative psychology were first modeled on quantitative approaches in the physical sciences by Gustav Fechner in his work on psychophysics, which built on the work of Ernst Heinrich Weber. Although a distinction is commonly drawn between qualitative and quantitative aspects of scientific investigation, it has been argued that the two go hand in hand. For example, based on analysis of the history of science, Kuhn concludes that "large amounts of qualitative work have usually been prerequisite to fruitful quantification in the physical sciences". Qualitative research is often used to gain a general sense of phenomena and to form theories that can be tested using further quantitative research. For instance, in the social sciences qualitative research methods are often used to gain better understanding of such things as intentionality (from the speech response of the researchee) and meaning (why did this person/group say something and what did it mean to them?) (Kieron Yeoman).

Although quantitative investigation of the world has existed since people first began to record events or objects that had been counted, the modern idea of quantitative processes has its roots in Auguste Comte's positivist framework. Positivism emphasized the use of the scientific method through observation to empirically test hypotheses explaining and predicting what, where, why, how, and when phenomena occurred. Positivist scholars like Comte believed that only scientific methods, rather than earlier spiritual explanations of human behavior, could advance knowledge.

Quantitative methods are an integral component of the five angles of analysis fostered by the data percolation methodology, which also includes qualitative methods, literature reviews (including scholarly literature), interviews with experts and computer simulation, and which forms an extension of data triangulation.

Quantitative methods have limitations. These studies do not provide reasoning behind participants' responses, they often do not reach underrepresented populations, and they may span long periods in order to collect the data.

Use of statistics

Statistics is the most widely used branch of mathematics in quantitative research outside of the physical sciences, and also finds applications within the physical sciences, such as in statistical mechanics. Statistical methods are used extensively within fields such as economics, the social sciences and biology. Quantitative research using statistical methods starts with the collection of data, based on the hypothesis or theory. Usually a large sample of data is collected; this requires verification, validation and recording before the analysis can take place. Software packages such as SPSS and R are typically used for this purpose. Causal relationships are studied by manipulating factors thought to influence the phenomena of interest while controlling other variables relevant to the experimental outcomes. In the field of health, for example, researchers might measure and study the relationship between dietary intake and measurable physiological effects such as weight loss, controlling for other key variables such as exercise. Quantitatively based opinion surveys are widely used in the media, with statistics such as the proportion of respondents in favor of a position commonly reported. In opinion surveys, respondents are asked a set of structured questions and their responses are tabulated. In the field of climate science, researchers compile and compare statistics such as temperature or atmospheric concentrations of carbon dioxide.
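The opinion-survey tabulation described above can be sketched in a few lines. The responses here are invented for illustration, and the tooling (Python's standard library rather than SPSS or R) is an illustrative substitution:

```python
from collections import Counter

# Hypothetical structured responses to one survey question (not real data)
responses = ["favor", "oppose", "favor", "undecided", "favor",
             "oppose", "favor", "oppose", "favor", "undecided"]

counts = Counter(responses)                       # tabulate the responses
proportion_favor = counts["favor"] / len(responses)
print(f"In favor: {proportion_favor:.0%}")        # prints "In favor: 50%"
```

The tabulated counts and the reported proportion are exactly the kind of statistic, such as the share of respondents in favor of a position, that media opinion polls publish.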

Empirical relationships and associations are also frequently studied by using some form of general linear model, non-linear model, or by using factor analysis. A fundamental principle in quantitative research is that correlation does not imply causation, although some, such as Clive Granger, suggest that a series of correlations can imply a degree of causality. This principle follows from the fact that it is always possible that a spurious relationship exists for variables between which covariance is found in some degree. Associations may be examined between any combination of continuous and categorical variables using methods of statistics. Causal relations can also be studied with Necessary Condition Analysis (NCA), which outlines the conditions that must be present for the studied outcome variable to occur.

Measurement

Views regarding the role of measurement in quantitative research are somewhat divergent. Measurement is often regarded as being only a means by which observations are expressed numerically in order to investigate causal relations or associations. However, it has been argued that measurement often plays a more important role in quantitative research. For example, Kuhn argued that measurements which depart from theory can appear anomalous, and that such anomalies, encountered in the process of obtaining data, are particularly interesting because they can prompt a search for new explanations, as seen below:

When measurement departs from theory, it is likely to yield mere numbers, and their very neutrality makes them particularly sterile as a source of remedial suggestions. But numbers register the departure from theory with an authority and finesse that no qualitative technique can duplicate, and that departure is often enough to start a search (Kuhn, 1961, p. 180).

In classical physics, the theory and definitions which underpin measurement are generally deterministic in nature. In contrast, probabilistic measurement models known as the Rasch model and Item response theory models are generally employed in the social sciences. Psychometrics is the field of study concerned with the theory and technique for measuring social and psychological attributes and phenomena. This field is central to much quantitative research that is undertaken within the social sciences.
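The dichotomous Rasch model mentioned above gives the probability of a correct (or endorsed) response as a logistic function of the gap between a person's ability and an item's difficulty. A minimal sketch:

```python
import math

def rasch_probability(ability: float, difficulty: float) -> float:
    """P(correct response) under the dichotomous Rasch model:
    exp(ability - difficulty) / (1 + exp(ability - difficulty))."""
    return 1.0 / (1.0 + math.exp(-(ability - difficulty)))

print(rasch_probability(1.0, 1.0))   # 0.5: ability exactly matches difficulty
print(rasch_probability(2.0, 0.0))   # well above 0.5: an easy item
```

The model is probabilistic rather than deterministic: even a well-matched person and item yield only a 50% chance of success, which is the contrast being drawn with measurement in classical physics.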

Quantitative research may involve the use of proxies as stand-ins for other quantities that cannot be directly measured. Tree-ring width, for example, is considered a reliable proxy of ambient environmental conditions such as the warmth of growing seasons or amount of rainfall. Although scientists cannot directly measure the temperature of past years, tree-ring width and other climate proxies have been used to provide a semi-quantitative record of average temperature in the Northern Hemisphere back to 1000 A.D. When used in this way, the proxy record (tree ring width, say) only reconstructs a certain amount of the variance of the original record. The proxy may be calibrated (for example, during the period of the instrumental record) to determine how much variation is captured, including whether both short and long term variation is revealed. In the case of tree-ring width, different species in different places may show more or less sensitivity to, say, rainfall or temperature: when reconstructing a temperature record there is considerable skill in selecting proxies that are well correlated with the desired variable.

Relationship with qualitative methods

In most physical and biological sciences, the use of either quantitative or qualitative methods is uncontroversial, and each is used when appropriate. In the social sciences, particularly in sociology, social anthropology and psychology, the use of one or the other type of method can be a matter of controversy and even ideology, with particular schools of thought within each discipline favouring one type of method and pouring scorn on the other. The majority tendency throughout the history of social science, however, has been to use eclectic approaches, combining both methods. Qualitative methods might be used to understand the meaning of the conclusions produced by quantitative methods. Using quantitative methods, it is possible to give precise and testable expression to qualitative ideas. This combination of quantitative and qualitative data gathering is often referred to as mixed-methods research.

Examples

  • Research that reports the percentages of the elements that make up Earth's atmosphere.
  • A survey that concludes that the average patient has to wait two hours in the waiting room of a certain doctor before being seen.
  • An experiment in which group x is given two tablets of aspirin a day and group y is given two tablets of a placebo a day, where each participant is randomly assigned to one group or the other. Numerical factors such as the two tablets, the percentages of elements, and the waiting time make these situations and results quantitative.
  • In economics, quantitative research is used to analyze business enterprises and the factors contributing to the diversity of organizational structures and the relationships of firms with labour, capital and product markets.
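The aspirin-versus-placebo experiment in the examples above hinges on random assignment. A minimal sketch, in which the participant IDs and group sizes are hypothetical:

```python
import random

random.seed(7)  # fixed seed so the assignment is reproducible

participants = list(range(20))       # hypothetical participant IDs
random.shuffle(participants)
group_x = sorted(participants[:10])  # receives two aspirin tablets daily
group_y = sorted(participants[10:])  # receives two placebo tablets daily
```

Randomizing the assignment balances unmeasured variables across the two groups on average, which is what licenses a causal reading of any difference in measured outcomes.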