
Monday, April 20, 2026

Drug design

From Wikipedia, the free encyclopedia
Drug discovery cycle schematic

Drug design, often referred to as rational drug design or simply rational design, is the inventive process of finding new medications based on the knowledge of a biological target. The drug is most commonly an organic small molecule that activates or inhibits the function of a biomolecule such as a protein, which in turn results in a therapeutic benefit to the patient. In the most basic sense, drug design involves the design of molecules that are complementary in shape and charge to the biomolecular target with which they interact and therefore will bind to it. Drug design frequently but not necessarily relies on computer modeling techniques; this type of modeling is sometimes referred to as computer-aided drug design. Drug design that relies on the knowledge of the three-dimensional structure of the biomolecular target is known as structure-based drug design. In addition to small molecules, biopharmaceuticals, including peptides and especially therapeutic antibodies, are an increasingly important class of drugs, and computational methods for improving the affinity, selectivity, and stability of these protein-based therapeutics have also been developed.

Definition

Drug design is closely related to ligand design (i.e., the design of a molecule that will bind tightly to its target). Although design techniques for prediction of binding affinity are reasonably successful, there are many other properties, such as bioavailability, metabolic half-life, and side effects, that must first be optimized before a ligand can become a safe and effective drug. These other characteristics are often difficult to predict with rational design techniques.

Due to high attrition rates, especially during the clinical phases of drug development, more attention is being focused early in the drug design process on selecting candidate drugs whose physicochemical properties are predicted to result in fewer complications during development and hence are more likely to lead to an approved, marketed drug. Furthermore, in vitro experiments complemented with computational methods are increasingly used in early drug discovery to select compounds with more favorable ADME (absorption, distribution, metabolism, and excretion) and toxicological profiles.

Drug targets

A biomolecular target (most commonly a protein or a nucleic acid) is a key molecule involved in a particular metabolic or signaling pathway that is associated with a specific disease condition or pathology or with the infectivity or survival of a microbial pathogen. Potential drug targets are not necessarily disease-causing but must by definition be disease-modifying. In some cases, small molecules will be designed to enhance or inhibit the target function in the specific disease-modifying pathway. Small molecules (for example receptor agonists, antagonists, inverse agonists, or modulators; enzyme activators or inhibitors; or ion channel openers or blockers) will be designed that are complementary to the binding site of the target. Small molecules (drugs) can be designed so as not to affect any other important "off-target" molecules (often referred to as antitargets), since drug interactions with off-target molecules may lead to undesirable side effects. Due to similarities in binding sites, closely related targets identified through sequence homology have the highest chance of cross-reactivity and hence the highest side effect potential.

Most commonly, drugs are organic small molecules produced through chemical synthesis, but biopolymer-based drugs (also known as biopharmaceuticals) produced through biological processes are becoming increasingly more common. In addition, mRNA-based gene silencing technologies may have therapeutic applications. For example, nanomedicines based on mRNA can streamline and expedite the drug development process, enabling transient and localized expression of immunostimulatory molecules. In vitro transcribed (IVT) mRNA allows for delivery to various accessible cell types via the blood or alternative pathways. The use of IVT mRNA serves to convey specific genetic information into a person's cells, with the primary objective of preventing or altering a particular disease.

Drug discovery

Phenotypic drug discovery

Phenotypic drug discovery is a traditional drug discovery method, also known as forward pharmacology or classical pharmacology. It uses phenotypic screening of collections of synthetic small molecules, natural products, or extracts within chemical libraries to pinpoint substances exhibiting beneficial therapeutic effects. In this approach, the in vivo or in vitro functional activity of a compound (such as an extract or natural product) is discovered first, and target identification is performed afterward. Phenotypic discovery uses a practical, target-independent approach to generate initial leads, aiming to discover pharmacologically active compounds and therapeutics that operate through novel drug mechanisms. This method allows the exploration of disease phenotypes to find potential treatments for conditions with unknown, complex, or multifactorial origins, where the understanding of molecular targets is insufficient for effective intervention.

Rational drug discovery

Rational drug design (also called reverse pharmacology) begins with a hypothesis that modulation of a specific biological target may have therapeutic value. In order for a biomolecule to be selected as a drug target, two essential pieces of information are required. The first is evidence that modulation of the target will be disease modifying. This knowledge may come from, for example, disease linkage studies that show an association between mutations in the biological target and certain disease states. The second is that the target is capable of binding to a small molecule and that its activity can be modulated by the small molecule.

Once a suitable target has been identified, the target is normally cloned, produced, and purified. The purified protein is then used to establish a screening assay. In addition, the three-dimensional structure of the target may be determined.

The search for small molecules that bind to the target begins by screening libraries of potential drug compounds. This may be done by using the screening assay (a "wet screen"). In addition, if the structure of the target is available, a virtual screen of candidate drugs may be performed. Ideally, the candidate drug compounds should be "drug-like", that is, they should possess properties that are predicted to lead to oral bioavailability, adequate chemical and metabolic stability, and minimal toxic effects. Several methods are available to estimate druglikeness, such as Lipinski's Rule of Five, as well as a range of scoring methods such as lipophilic efficiency. Several methods for predicting drug metabolism have also been proposed in the scientific literature.
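As an illustration of such druglikeness filters, the minimal sketch below counts Rule of Five violations for a compound given as a SMILES string. It assumes the open-source RDKit toolkit is available; the thresholds are the commonly quoted ones, and the example molecule (aspirin) is purely illustrative.

# A minimal druglikeness filter based on Lipinski's Rule of Five.
# Assumes the open-source RDKit toolkit is installed; thresholds are the
# commonly quoted ones (MW <= 500, logP <= 5, <= 5 H-bond donors,
# <= 10 H-bond acceptors).
from rdkit import Chem
from rdkit.Chem import Descriptors, Lipinski

def rule_of_five_violations(smiles: str) -> int:
    """Count Rule of Five violations for a molecule given as SMILES."""
    mol = Chem.MolFromSmiles(smiles)
    if mol is None:
        raise ValueError(f"Could not parse SMILES: {smiles}")
    violations = 0
    if Descriptors.MolWt(mol) > 500:
        violations += 1
    if Descriptors.MolLogP(mol) > 5:
        violations += 1
    if Lipinski.NumHDonors(mol) > 5:
        violations += 1
    if Lipinski.NumHAcceptors(mol) > 10:
        violations += 1
    return violations

# Example: aspirin passes easily (0 violations).
print(rule_of_five_violations("CC(=O)Oc1ccccc1C(=O)O"))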

Due to the large number of drug properties that must be simultaneously optimized during the design process, multi-objective optimization techniques are sometimes employed. Finally, because of the limitations in current methods for the prediction of activity, drug design is still very much reliant on serendipity and bounded rationality.
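As a sketch of one simple multi-objective approach, the example below combines several property scores into a single weighted ranking value (a weighted-sum scalarization). The property names, target values, and weights are illustrative assumptions, not a standard scheme.

# A minimal sketch of multi-objective compound prioritization using a
# weighted-sum scalarization. Property names, ideal values, tolerances,
# and weights are invented for illustration.
def desirability(value, ideal, tolerance):
    """Score 1.0 at the ideal value, decaying linearly to 0 outside the tolerance."""
    return max(0.0, 1.0 - abs(value - ideal) / tolerance)

def composite_score(compound, weights):
    """Combine per-property desirabilities into a single ranking score."""
    scores = {
        "potency": desirability(compound["pIC50"], ideal=9.0, tolerance=4.0),
        "logp":    desirability(compound["logP"],  ideal=2.5, tolerance=3.0),
        "mol_wt":  desirability(compound["mw"],    ideal=350.0, tolerance=200.0),
    }
    total_weight = sum(weights.values())
    return sum(weights[k] * scores[k] for k in scores) / total_weight

candidates = [
    {"name": "cmpd-1", "pIC50": 8.2, "logP": 3.1, "mw": 412.0},
    {"name": "cmpd-2", "pIC50": 7.1, "logP": 1.9, "mw": 305.0},
]
weights = {"potency": 2.0, "logp": 1.0, "mol_wt": 1.0}
ranked = sorted(candidates, key=lambda c: composite_score(c, weights), reverse=True)
print([c["name"] for c in ranked])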

Computer-aided drug design

The most fundamental goal in drug design is to predict whether a given molecule will bind to a target and if so how strongly. Molecular mechanics or molecular dynamics is most often used to estimate the strength of the intermolecular interaction between the small molecule and its biological target. These methods are also used to predict the conformation of the small molecule and to model conformational changes in the target that may occur when the small molecule binds to it. Semi-empirical, ab initio quantum chemistry methods, or density functional theory are often used to provide optimized parameters for the molecular mechanics calculations and also provide an estimate of the electronic properties (electrostatic potential, polarizability, etc.) of the drug candidate that will influence binding affinity.
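The sketch below illustrates, in highly simplified form, how a molecular mechanics force field estimates the non-bonded interaction energy between a ligand and a protein pocket as a sum of pairwise Lennard-Jones (van der Waals) and Coulomb (electrostatic) terms. The atom coordinates, charges, and Lennard-Jones parameters are invented and do not come from any real force field.

# A toy molecular-mechanics estimate of the non-bonded interaction energy
# between two sets of atoms, summing pairwise Lennard-Jones and Coulomb terms.
import math

COULOMB_CONST = 332.06  # kcal*Angstrom/(mol*e^2), a common MM convention

def pair_energy(a1, a2):
    dx, dy, dz = (a1["xyz"][i] - a2["xyz"][i] for i in range(3))
    r = math.sqrt(dx * dx + dy * dy + dz * dz)
    # Lorentz-Berthelot combining rules for the Lennard-Jones parameters.
    sigma = 0.5 * (a1["sigma"] + a2["sigma"])
    eps = math.sqrt(a1["eps"] * a2["eps"])
    lj = 4.0 * eps * ((sigma / r) ** 12 - (sigma / r) ** 6)
    coulomb = COULOMB_CONST * a1["q"] * a2["q"] / r
    return lj + coulomb

def interaction_energy(ligand_atoms, protein_atoms):
    """Sum all ligand-protein pairwise non-bonded terms (kcal/mol)."""
    return sum(pair_energy(a, b) for a in ligand_atoms for b in protein_atoms)

ligand = [{"xyz": (0.0, 0.0, 0.0), "q": -0.3, "sigma": 3.4, "eps": 0.09}]
pocket = [{"xyz": (3.5, 0.0, 0.0), "q": 0.25, "sigma": 3.2, "eps": 0.07}]
print(round(interaction_energy(ligand, pocket), 3))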

Molecular mechanics methods may also be used to provide semi-quantitative predictions of the binding affinity. Alternatively, knowledge-based scoring functions may be used to provide binding affinity estimates. These methods use linear regression, machine learning, neural networks, or other statistical techniques to derive predictive binding affinity equations by fitting experimental affinities to computationally derived interaction energies between the small molecule and the target.

Ideally, the computational method will be able to predict affinity before a compound is synthesized and hence in theory only one compound needs to be synthesized, saving enormous time and cost. The reality is that present computational methods are imperfect and provide, at best, only qualitatively accurate estimates of affinity. In practice, it requires several iterations of design, synthesis, and testing before an optimal drug is discovered. Computational methods have accelerated discovery by reducing the number of iterations required and have often provided novel structures.

Computer-aided drug design may be used at any of the following stages of drug discovery:

  1. hit identification using virtual screening (structure- or ligand-based design)
  2. hit-to-lead optimization of affinity and selectivity (structure-based design, QSAR, etc.)
  3. lead optimization of other pharmaceutical properties while maintaining affinity
Flowchart of a common Clustering Analysis for Structure-Based Drug Design

To compensate for the limited accuracy of the binding affinities calculated by current scoring functions, protein-ligand interaction and compound 3D structure information are used for post-screening analysis. For structure-based drug design, several post-screening analyses focusing on protein-ligand interactions have been developed to improve enrichment and effectively mine potential candidates (a minimal consensus-scoring sketch follows the list below):

  • Consensus scoring
    • Selecting candidates by voting of multiple scoring functions
    • May lose the relationship between protein-ligand structural information and scoring criterion
  • Cluster analysis
    • Represent and cluster candidates according to protein-ligand 3D information
    • Needs meaningful representation of protein-ligand interactions.
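The following sketch shows one way consensus scoring might be implemented as rank voting: each scoring function ranks the docked candidates, and only compounds that fall in the top fraction of at least a minimum number of functions are kept. The scoring-function names and score values are invented for illustration.

# A minimal consensus-scoring sketch using rank voting across several
# scoring functions. Lower (more negative) scores are treated as better,
# as with many docking programs.
def consensus_hits(scores_by_function, top_fraction=0.5, min_votes=2):
    votes = {}
    for func, scores in scores_by_function.items():
        ranked = sorted(scores, key=scores.get)          # best score first
        cutoff = max(1, int(len(ranked) * top_fraction))  # keep the top fraction
        for compound in ranked[:cutoff]:
            votes[compound] = votes.get(compound, 0) + 1
    return sorted(c for c, v in votes.items() if v >= min_votes)

scores_by_function = {
    "score_A": {"lig1": -9.2, "lig2": -6.1, "lig3": -8.4, "lig4": -5.0},
    "score_B": {"lig1": -7.8, "lig2": -8.9, "lig3": -8.1, "lig4": -4.2},
    "score_C": {"lig1": -8.8, "lig2": -5.5, "lig3": -7.9, "lig4": -6.3},
}
print(consensus_hits(scores_by_function))  # ['lig1', 'lig3']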

Types

Drug discovery cycle highlighting both ligand-based (indirect) and structure-based (direct) drug design strategies.

There are two major types of drug design. The first is referred to as ligand-based drug design and the second, structure-based drug design.

Ligand-based

Ligand-based drug design (or indirect drug design) relies on knowledge of other molecules that bind to the biological target of interest. These other molecules may be used to derive a pharmacophore model that defines the minimum necessary structural characteristics a molecule must possess in order to bind to the target. In other words, a model of the biological target may be built based on the knowledge of what binds to it, and this model in turn may be used to design new molecular entities that interact with the target. Alternatively, a quantitative structure-activity relationship (QSAR), in which a correlation between calculated properties of molecules and their experimentally determined biological activity is established, may be derived. These QSAR relationships in turn may be used to predict the activity of new analogs.
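A minimal linear QSAR sketch is shown below: experimentally determined activities (e.g., pIC50 values) are fit to calculated molecular descriptors by ordinary least squares, and the resulting model is used to predict the activity of a new analog. All descriptor values and activities are invented for illustration.

# A minimal linear QSAR sketch with ordinary least squares.
import numpy as np

# Rows = training ligands; columns = calculated descriptors
# (e.g. logP, polar surface area, number of rotatable bonds).
X = np.array([
    [1.2, 60.0, 3.0],
    [2.5, 45.0, 5.0],
    [0.8, 90.0, 2.0],
    [3.1, 30.0, 6.0],
    [1.9, 75.0, 4.0],
])
y = np.array([6.1, 7.4, 5.2, 7.9, 6.0])   # measured pIC50 values

# Add an intercept column and solve the least-squares problem.
X_design = np.hstack([X, np.ones((X.shape[0], 1))])
coefficients, *_ = np.linalg.lstsq(X_design, y, rcond=None)

# Predict the activity of a new analog from its descriptors (+ intercept term).
new_analog = np.array([2.0, 50.0, 4.0, 1.0])
print(float(new_analog @ coefficients))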

Structure-based

Structure-based drug design (or direct drug design) relies on knowledge of the three-dimensional structure of the biological target obtained through methods such as X-ray crystallography or NMR spectroscopy. If an experimental structure of a target is not available, it may be possible to create a homology model of the target based on the experimental structure of a related protein. Using the structure of the biological target, candidate drugs that are predicted to bind with high affinity and selectivity to the target may be designed using interactive graphics and the intuition of a medicinal chemist. Alternatively, various automated computational procedures may be used to suggest new drug candidates.

Current methods for structure-based drug design can be divided roughly into three main categories. The first method is identification of new ligands for a given receptor by searching large databases of 3D structures of small molecules to find those fitting the binding pocket of the receptor using fast approximate docking programs. This method is known as virtual screening.

A second category is de novo design of new ligands. In this method, ligand molecules are built up within the constraints of the binding pocket by assembling small pieces in a stepwise manner. These pieces can be either individual atoms or molecular fragments. The key advantage of such a method is that novel structures, not contained in any database, can be suggested. A third method is the optimization of known ligands by evaluating proposed analogs within the binding cavity.

Binding site identification

Binding site identification is the first step in structure-based design. If the structure of the target or of a sufficiently similar homolog has been determined in the presence of a bound ligand, then the ligand should be observable in the structure, in which case the location of the binding site is trivial. However, there may be unoccupied allosteric binding sites that are of interest. Furthermore, it may be that only apoprotein (protein without ligand) structures are available, and the reliable identification of unoccupied sites that have the potential to bind ligands with high affinity is non-trivial. In brief, binding site identification usually relies on the identification of concave surfaces on the protein that can accommodate drug-sized molecules and that also possess appropriate "hot spots" (hydrophobic surfaces, hydrogen bonding sites, etc.) that drive ligand binding.

Scoring functions

Structure-based drug design attempts to use the structure of proteins as a basis for designing new ligands by applying the principles of molecular recognition. Selective high affinity binding to the target is generally desirable since it leads to more efficacious drugs with fewer side effects. Thus, one of the most important principles for designing or obtaining potential new ligands is to predict the binding affinity of a certain ligand to its target (and known antitargets) and use the predicted affinity as a criterion for selection.

One early general-purpose empirical scoring function to describe the binding energy of ligands to receptors was developed by Böhm. This empirical scoring function took the form:
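The equation itself is not reproduced in this copy; a reconstruction inferred from the components listed below (following Böhm's published LUDI-style function, where f(ΔR, Δα) penalizes deviations from ideal hydrogen-bond geometry and N_rot counts rotatable bonds frozen on binding) is:

\Delta G_{\text{bind}} = \Delta G_{0}
  + \Delta G_{\text{hb}} \sum_{\text{h-bonds}} f(\Delta R, \Delta\alpha)
  + \Delta G_{\text{ionic}} \sum_{\text{ionic int.}} f(\Delta R, \Delta\alpha)
  + \Delta G_{\text{lipo}} \left| A_{\text{lipo}} \right|
  + \Delta G_{\text{rot}} N_{\text{rot}}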

where:

  • ΔG0 – empirically derived offset that in part corresponds to the overall loss of translational and rotational entropy of the ligand upon binding.
  • ΔGhb – contribution from hydrogen bonding
  • ΔGionic – contribution from ionic interactions
  • ΔGlip – contribution from lipophilic interactions, where |Alipo| is the surface area of lipophilic contact between the ligand and receptor
  • ΔGrot – entropy penalty due to freezing a rotatable bond in the ligand upon binding

A more general thermodynamic "master" equation is as follows:
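The equation is likewise missing from this copy; reconstructed from the components listed below (of which it is described as the linear combination), it reads:

\Delta G_{\text{bind}} = \Delta G_{\text{desolvation}}
  + \Delta G_{\text{motion}}
  + \Delta G_{\text{configuration}}
  + \Delta G_{\text{interaction}}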

where:

  • desolvation – enthalpic penalty for removing the ligand from solvent
  • motion – entropic penalty for reducing the degrees of freedom when a ligand binds to its receptor
  • configuration – conformational strain energy required to put the ligand in its "active" conformation
  • interaction – enthalpic gain for "resolvating" the ligand with its receptor

The basic idea is that the overall binding free energy can be decomposed into independent components that are known to be important for the binding process. Each component reflects a certain kind of free energy alteration during the binding process between a ligand and its target receptor. The master equation is the linear combination of these components. Through the Gibbs free energy equation (ΔG = RT ln Kd), the dissociation equilibrium constant, Kd, is related to the sum of these free energy components.

Various computational methods are used to estimate each of the components of the master equation. For example, the change in polar surface area upon ligand binding can be used to estimate the desolvation energy. The number of rotatable bonds frozen upon ligand binding is proportional to the motion term. The configurational or strain energy can be estimated using molecular mechanics calculations. Finally the interaction energy can be estimated using methods such as the change in non polar surface, statistically derived potentials of mean force, the number of hydrogen bonds formed, etc. In practice, the components of the master equation are fit to experimental data using multiple linear regression. This can be done with a diverse training set including many types of ligands and receptors to produce a less accurate but more general "global" model or a more restricted set of ligands and receptors to produce a more accurate but less general "local" model.
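A minimal sketch of that calibration step is shown below: measured dissociation constants are converted to binding free energies via ΔG = RT ln Kd, and the weights of the computed desolvation, motion, configuration, and interaction terms are fit by multiple linear regression. All numbers are invented for illustration.

# Calibrating a "master equation" scoring function by multiple linear
# regression against experimental binding data (illustrative values only).
import numpy as np

R = 1.987e-3          # kcal/(mol*K)
T = 298.15            # K
kd_values = np.array([1e-9, 5e-8, 2e-7, 1e-6, 3e-6])   # measured Kd (M)
dG_exp = R * T * np.log(kd_values)                      # kcal/mol (negative)

# Columns: computed desolvation, motion, configuration, interaction terms.
components = np.array([
    [3.1, 1.2, 0.8, -18.0],
    [2.4, 1.8, 1.1, -14.5],
    [2.9, 2.0, 0.6, -13.2],
    [1.7, 2.6, 1.4, -11.0],
    [2.2, 2.2, 0.9, -10.4],
])
weights, *_ = np.linalg.lstsq(components, dG_exp, rcond=None)
print(weights)                  # fitted coefficient per component
print(components @ weights)     # predicted dG for the training set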

Examples

A particular example of rational drug design involves the use of three-dimensional information about biomolecules obtained from such techniques as X-ray crystallography and NMR spectroscopy. Computer-aided drug design in particular becomes much more tractable when there is a high-resolution structure of a target protein bound to a potent ligand. This approach to drug discovery is sometimes referred to as structure-based drug design. The first unequivocal example of the application of structure-based drug design leading to an approved drug is the carbonic anhydrase inhibitor dorzolamide, which was approved in 1995.

Another case study in rational drug design is imatinib, a tyrosine kinase inhibitor designed specifically for the bcr-abl fusion protein that is characteristic for Philadelphia chromosome-positive leukemias (chronic myelogenous leukemia and occasionally acute lymphocytic leukemia). Imatinib is substantially different from previous drugs for cancer, as most agents of chemotherapy simply target rapidly dividing cells, not differentiating between cancer cells and other tissues.


Drug screening

Types of drug screening include phenotypic screening, high-throughput screening, and virtual screening. Phenotypic screening is characterized by the process of screening drugs using cellular or animal disease models to identify compounds that alter the phenotype and produce beneficial disease-related effects. Emerging technologies in high-throughput screening substantially enhance processing speed and decrease the required detection volume. Virtual screening is performed by computer, enabling a large number of molecules to be screened quickly and at low cost. Virtual screening uses a range of computational methods that empower chemists to reduce extensive virtual libraries to more manageable sizes.

Criticism

It has been argued that the highly rigid and focused nature of rational drug design suppresses serendipity in drug discovery.

Drug development

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Drug_development

Drug development is the process of bringing a new pharmaceutical drug to the market once a lead compound has been identified through the process of drug discovery. It includes preclinical research on microorganisms and animals, filing for regulatory status, such as via the United States Food and Drug Administration for an investigational new drug to initiate clinical trials on humans, and may include the step of obtaining regulatory approval with a new drug application to market the drug. The entire process—from concept through preclinical testing in the laboratory to clinical trial development, including Phase I–III trials—to approved vaccine or drug typically takes more than a decade.

New chemical entity development

Broadly, the process of drug development can be divided into preclinical and clinical work.

Timeline showing the various drug approval tracks and research phases

Pre-clinical

New chemical entities (NCEs, also known as new molecular entities or NMEs) are compounds that emerge from the process of drug discovery. These have promising activity against a particular biological target that is important in disease. However, little is known about the safety, toxicity, pharmacokinetics, and metabolism of this NCE in humans. It is the function of drug development to assess all of these parameters prior to human clinical trials. A further major objective of drug development is to recommend the dose and schedule for the first use in a human clinical trial ("first-in-human" [FIH] or First Human Dose [FHD], previously also known as "first-in-man" [FIM]).

In addition, drug development must establish the physicochemical properties of the NCE: its chemical makeup, stability, and solubility. Manufacturers must optimize the process they use to make the chemical so they can scale up from a medicinal chemist producing milligrams, to manufacturing on the kilogram and ton scale. They further examine the product for suitability to package as capsules, tablets, aerosol, intramuscular injectable, subcutaneous injectable, or intravenous formulations. Together, these processes are known in preclinical and clinical development as chemistry, manufacturing, and control (CMC).

Many aspects of drug development focus on satisfying the regulatory requirements for a new drug application. These generally constitute a number of tests designed to determine the major toxicities of a novel compound prior to first use in humans. It is a legal requirement that an assessment of major organ toxicity be performed (effects on the heart and lungs, brain, kidney, liver and digestive system), as well as effects on other parts of the body that might be affected by the drug (e.g., the skin if the new drug is to be delivered on or through the skin). Such preliminary tests are made using in vitro methods (e.g., with isolated cells), but many tests can only use experimental animals to demonstrate the complex interplay of metabolism and drug exposure on toxicity.

However, aside from regulatory requirements, there is a broad range of other factors, such as patient requirements, that are considered during development and testing.

The information gathered from this preclinical testing, as well as information on CMC, is submitted to regulatory authorities (in the US, to the FDA) as an Investigational New Drug (IND) application. If the IND is approved, development moves to the clinical phase.

Clinical phase

Clinical trials involve four steps:

  • Phase I trials, usually in healthy volunteers, determine safety and dosing.
  • Phase II trials are used to get an initial reading of efficacy and further explore safety in small numbers of patients having the disease targeted by the NCE.
  • Phase III trials are large, pivotal trials to determine safety and efficacy in sufficiently large numbers of patients with the targeted disease. If safety and efficacy are adequately proved, clinical testing may stop at this step and the NCE advances to the new drug application (NDA) stage.
  • Phase IV trials are post-approval trials that are sometimes a condition attached by the FDA, also called post-market surveillance studies.

The process of defining characteristics of the drug does not stop once an NCE is advanced into human clinical trials. In addition to the tests required to move a novel vaccine or antiviral drug into the clinic for the first time, manufacturers must ensure that any long-term or chronic toxicities are well-defined, including effects on systems not previously monitored (fertility, reproduction, immune system, among others).

If a vaccine candidate or antiviral compound emerges from these tests with an acceptable toxicity and safety profile, and the manufacturer can further show it has the desired effect in clinical trials, then the NCE portfolio of evidence can be submitted for marketing approval in the various countries where the manufacturer plans to sell it. In the United States, this process is called a "new drug application" or NDA.

Most novel drug candidates (NCEs) fail during drug development, either because they have unacceptable toxicity or because they simply do not prove efficacy on the targeted disease, as shown in Phase II–III clinical trials. Critical reviews of drug development programs indicate that Phase II–III clinical trials fail due mainly to unknown toxic side effects (50% failure of Phase II cardiology trials), and because of inadequate financing, trial design weaknesses, or poor trial execution.

A study covering clinical research in the 1980–1990s found that only 21.5% of drug candidates that started Phase I trials were eventually approved for marketing. During 2006–2015, the success rate of obtaining approval from Phase I to successful Phase III trials was under 10% on average, and 16% specifically for vaccines. The high failure rates associated with pharmaceutical development are referred to as an "attrition rate", requiring decisions during the early stages of drug development to "kill" projects early to avoid costly failures.

Cost

There are a number of studies that have been conducted to determine research and development costs: notably, recent studies from DiMasi and Wouters suggest pre-approval capitalized cost estimates of $2.6 billion and $1.1 billion, respectively. The figures differ significantly based on methodologies, sampling and timeframe examined. Several other studies looking into specific therapeutic areas or disease types suggest as low as $291 million for orphan drugs, $648 million for cancer drugs or as high as $1.8 billion for cell and gene therapies.

The average cost (2013 dollars) of each stage of clinical research was US$25 million for a Phase I safety study, $59 million for a Phase II randomized controlled efficacy study, and $255 million for a pivotal Phase III trial to demonstrate its equivalence or superiority to an existing approved drug, possibly as high as $345 million. The average cost of conducting a 2015–16 pivotal Phase III trial on an infectious disease drug candidate was $22 million.

The full cost of bringing a new drug (i.e., new chemical entity) to market—from discovery through clinical trials to approval—is complex and controversial. In a 2016 review of 106 drug candidates assessed through clinical trials, the total capital expenditure for a manufacturer having a drug approved through successful Phase III trials was $2.6 billion (in 2013 dollars), an amount increasing at an annual rate of 8.5%. Over 2003–2013 for companies that approved 8–13 drugs, the cost per drug could rise to as high as $5.5 billion, due mainly to international geographic expansion for marketing and ongoing costs for Phase IV trials for continuous safety surveillance.

Alternatives to conventional drug development have the objective for universities, governments, and the pharmaceutical industry to collaborate and optimize resources. An example of a collaborative drug development initiative is COVID Moonshot, an international open-science project started in March 2020 with the goal of developing an un-patented oral antiviral drug to treat SARS-CoV-2.

Valuation

The nature of a drug development project is characterised by high attrition rates, large capital expenditures, and long timelines. This makes the valuation of such projects and companies a challenging task. Not all valuation methods can cope with these particularities. The most commonly used valuation methods are risk-adjusted net present value (rNPV), decision trees, real options, or comparables.

The most important value drivers are the cost of capital or discount rate that is used, phase attributes such as duration, success rates, and costs, and the forecasted sales, including cost of goods and marketing and sales expenses. Less objective aspects like quality of the management or novelty of the technology should be reflected in the cash flows estimation.
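As a rough illustration of the risk-adjusted NPV approach, the sketch below weights each future cash flow by the cumulative probability that the program is still alive at that point and discounts it back to the present. The phase probabilities, costs, revenues, and discount rate are illustrative assumptions only.

# A minimal risk-adjusted NPV (rNPV) sketch for a drug program.
def rnpv(cash_flows, discount_rate):
    """cash_flows: list of (year, amount, cumulative_success_probability)."""
    return sum(p * amount / (1 + discount_rate) ** year
               for year, amount, p in cash_flows)

cash_flows = [
    (0,   -30e6, 1.00),   # preclinical / Phase I costs, certain to be spent
    (2,   -60e6, 0.60),   # Phase II costs, incurred only if Phase I succeeds
    (4,  -250e6, 0.20),   # Phase III costs
    (7,  2500e6, 0.10),   # simplified lump-sum sales if approved
]
print(f"rNPV: ${rnpv(cash_flows, discount_rate=0.10) / 1e6:.1f}M")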

Success rate

Candidates for a new drug to treat a disease might, theoretically, include from 5,000 to 10,000 chemical compounds. On average about 250 of these show sufficient promise for further evaluation using laboratory tests, mice and other test animals. Typically, about ten of these qualify for tests on humans. A study conducted by the Tufts Center for the Study of Drug Development covering the 1980s and 1990s found that only 21.5 percent of drugs that started Phase I trials were eventually approved for marketing. In the time period of 2006 to 2015, the success rate was 9.6%. The high failure rates associated with pharmaceutical development are referred to as the "attrition rate" problem. Careful decision making during drug development is essential to avoid costly failures. In many cases, intelligent programme and clinical trial design can prevent false negative results. Well-designed, dose-finding studies and comparisons against both a placebo and a gold-standard treatment arm play a major role in achieving reliable data.
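Using the approximate figures quoted above, a back-of-the-envelope attrition calculation looks like this (the numbers are the ones from this section, not new data):

# A back-of-the-envelope attrition funnel using the figures quoted above:
# ~10,000 screened compounds, ~250 to laboratory/animal testing, ~10 to
# human trials, and a ~9.6% clinical success rate (2006-2015).
screened = 10_000
to_phase1 = 10
clinical_success = 0.096

expected_approvals = to_phase1 * clinical_success
print(f"Expected approved drugs per {screened} screened: {expected_approvals:.2f}")
print(f"Overall odds per screened compound: 1 in {screened / expected_approvals:,.0f}")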

Computing initiatives

Novel initiatives include partnering between governmental organizations and industry, such as the European Innovative Medicines Initiative. The US Food and Drug Administration created the "Critical Path Initiative" to enhance innovation of drug development, and the Breakthrough Therapy designation to expedite development and regulatory review of candidate drugs for which preliminary clinical evidence shows the drug candidate may substantially improve therapy for a serious disorder.

In March 2020, the United States Department of Energy, National Science Foundation, NASA, industry, and nine universities pooled resources to access supercomputers from IBM, combined with cloud computing resources from Hewlett Packard Enterprise, Amazon, Microsoft, and Google, for drug discovery. The COVID-19 High Performance Computing Consortium also aims to forecast disease spread, model possible vaccines, and screen thousands of chemical compounds to design a COVID-19 vaccine or therapy. In May 2020, the OpenPandemics – COVID-19 partnership between Scripps Research and IBM's World Community Grid was launched. The partnership is a distributed computing project that "will automatically run a simulated experiment in the background [of connected home PCs] which will help predict the effectiveness of a particular chemical compound as a possible treatment for COVID-19".

Peer review

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Peer_review

A reviewer at the American National Institutes of Health evaluating a grant proposal

Peer review is the evaluation of work by one or more people with similar competencies as the producers of the work (peers). It functions as a form of self-regulation by qualified members of a profession within the relevant field. Peer review methods are used to maintain quality standards, improve performance, and provide credibility. In academia, scholarly peer review is typically used to determine an academic paper's suitability for publication. The reviewers are experts in the topic at hand and they have no connection to the author (they are not told the name of the author). They are anonymous and cannot be pressured. Top journals reject over 90% of submitted papers. Peer review can be categorized by the type and by the field or profession in which the activity occurs, e.g., medical peer review. It can also be used as a teaching tool to help students improve writing assignments.

Henry Oldenburg (1619–1677) was a German-born British philosopher who is seen as the 'father' of modern scientific peer review. It developed over the following centuries with, for example, the journal Nature making it standard practice in 1973. The term "peer review" was first used in the early 1970s. A monument to peer review has been at the Higher School of Economics in Moscow since 2017.

Professional

Professional peer review focuses on the performance of professionals, with a view to improving quality, upholding standards, or providing certification. In academia, peer review is used to inform decisions related to faculty advancement and tenure.

A prototype professional peer review process was recommended in the Ethics of the Physician written by Ishāq ibn ʻAlī al-Ruhāwī (854–931). He stated that a visiting physician had to make duplicate notes of a patient's condition on every visit. When the patient was cured or had died, the notes of the physician were examined by a local medical council of other physicians, who would decide whether the treatment had met the required standards of medical care.

Professional peer review is common in the field of health care, where it is usually called clinical peer review. Further, since peer review activity is commonly segmented by clinical discipline, there is also physician peer review, nursing peer review, dentistry peer review, etc. Many other professional fields have some level of peer review process: accounting, law, engineering (e.g., software peer review, technical peer review), aviation, and even forest fire management.

Peer review is used in education to achieve certain learning objectives, particularly as a tool to reach higher order processes in the affective and cognitive domains as defined by Bloom's taxonomy. This may take a variety of forms, including closely mimicking the scholarly peer review processes used in science and medicine.

Scholarly

Scholarly peer review or academic peer review (also known as refereeing) is the process of having a draft version of a researcher's methods and findings reviewed (usually anonymously) by experts (or "peers") in the same field. Peer review is widely used for helping the academic publisher (i.e., the editor-in-chief, the editorial board, or the program committee) decide whether the work should be accepted, considered acceptable with revisions, or rejected for official publication in an academic journal, a monograph, or in the proceedings of an academic conference. If the identities of the authors and the reviewers are not revealed to each other, the procedure is called dual-anonymous peer review.

Academic peer review requires a community of experts in a given (and often narrowly defined) academic field, who are qualified and able to perform reasonably impartial review. Impartial review, especially of work in less narrowly defined or inter-disciplinary fields, may be difficult to accomplish, and the significance (good or bad) of an idea may never be widely appreciated among its contemporaries. Peer review is generally considered necessary to academic quality and is used in most major scholarly journals. However, peer review does not prevent publication of invalid research, and as experimentally controlled studies of this process are difficult to arrange, direct evidence that peer review improves the quality of published papers is scarce. One recent analysis of randomized controlled trial abstracts found that editorial and peer review processes led to substantive improvements between submission and publication.

Medical

Medical peer review may be distinguished in four classifications:

  1. Clinical peer review is a procedure for assessing patients' experiences of care. It is part of ongoing professional practice evaluation and focused professional practice evaluation, which are significant contributors to provider credentialing and privileging.
  2. Peer evaluation of clinical teaching skills for both physicians and nurses.
  3. Scientific peer review of journal articles.
  4. A secondary round of peer review for the clinical value of articles concurrently published in medical journals.

Additionally, "medical peer review" has been used by the American Medical Association to refer not only to the process of improving quality and safety in health care organizations, but also to the process of rating clinical behavior or compliance with professional society membership standards. The clinical community regards it as the best available method of ensuring that published research is reliable and that any clinical treatments it advocates are safe and effective for people. Thus, the terminology has poor standardization and specificity, particularly as a database search term.

Technical

In engineering, technical peer review is a type of engineering review. Technical peer reviews are a well-defined review process for finding and fixing defects, conducted by a team of peers with assigned roles. Technical peer reviews are carried out by peers representing areas of life cycle affected by material being reviewed (usually limited to 6 or fewer people). Technical peer reviews are held within development phases, between milestone reviews, on completed products or completed portions of products.

Government policy

The European Union has been using peer review in the "Open Method of Co-ordination" of policies in the fields of active labour market policy since 1999. In 2004, a program of peer reviews started in social inclusion. Each program sponsors about eight peer review meetings in each year, in which a "host country" lays a given policy or initiative open to examination by half a dozen other countries and the relevant European-level NGOs. These usually meet over two days and include visits to local sites where the policy can be seen in operation. The meeting is preceded by the compilation of an expert report on which participating "peer countries" submit comments. The results are published on the web.

The United Nations Economic Commission for Europe, through UNECE Environmental Performance Reviews, uses peer review, referred to as "peer learning", to evaluate progress made by its member countries in improving their environmental policies.

The State of California is the only U.S. state to mandate scientific peer review. In 1997, the Governor of California signed into law Senate Bill 1320 (Sher), Chapter 295, statutes of 1997, which mandates that, before any CalEPA Board, Department, or Office adopts a final version of a rule-making, the scientific findings, conclusions, and assumptions on which the proposed rule are based must be submitted for independent external scientific peer review. This requirement is incorporated into the California Health and Safety Code Section 57004.

Pedagogical

Peer review, or student peer assessment, is the method by which editors and writers work together in hopes of helping the author establish and further flesh out and develop their own writing. Peer review is widely used in secondary and post-secondary education as part of the writing process. This collaborative learning tool involves groups of students reviewing each other's work and providing feedback and suggestions for revision. Rather than a means of critiquing each other's work, peer review is often framed as a way to build connection between students and help develop writers' identity. While widely used in English and composition classrooms, peer review has gained popularity in other disciplines that require writing as part of the curriculum including the social and natural sciences. The concept of peer review has been extended to other practices, including the use of visual peer review for evaluating peer-produced data visualizations.

Peer review in classrooms helps students become more invested in their work, and the classroom environment at large. Understanding how their work is read by a diverse readership before it is graded by the teacher may also help students clarify ideas and understand how to persuasively reach different audience members via their writing. It also gives students professional experience that they might draw on later when asked to review the work of a colleague prior to publication. The process can also bolster the confidence of students on both sides of the process. It has been found that students are more positive than negative when reviewing their classmates' writing. Peer review can help students not get discouraged but rather feel determined to improve their writing.

Critics of peer review in classrooms say that it can be ineffective due to students' lack of practice giving constructive criticism, or lack of expertise in the writing craft at large. Peer review can be problematic for developmental writers, particularly if students view their writing as inferior to others in the class as they may be unwilling to offer suggestions or ask other writers for help. Peer review can impact a student's opinion of themselves as well as others as sometimes students feel a personal connection to the work they have produced, which can also make them feel reluctant to receive or offer criticism. Teachers using peer review as an assignment can lead to rushed-through feedback by peers, using incorrect praise or criticism, thus not allowing the writer or the editor to get much out of the activity. As a response to these concerns, instructors may provide examples, model peer review with the class, or focus on specific areas of feedback during the peer review process. Instructors may also experiment with in-class peer review vs. peer review as homework, or peer review using technologies afforded by learning management systems online. Students that are older can give better feedback to their peers, getting more out of peer review, but it is still a method used in classrooms to help students young and old learn how to revise. With evolving and changing technology, peer review will develop as well. New tools could help alter the process of peer review.

Peer seminar

A peer seminar is a format in which speakers present ideas to an audience in a session that also acts as a "contest". Multiple speakers are called up one at a time and given an amount of time to present the topic they have researched. The speakers may or may not address the same topic, but each has something to gain or lose, which can foster a competitive atmosphere. This approach allows speakers to present in a more personal tone while trying to appeal to the audience as they explain their topic.

Peer seminars are somewhat similar to conference talks; however, there is more time to present each point, and speakers can be interrupted by audience members with questions and feedback on the topic or on how well the speaker presented it.

Peer review in writing

Professional peer review focuses on the performance of professionals, with a view to improving quality, upholding standards, or providing certification. Peer review in writing is a pivotal component among various peer review mechanisms, often spearheaded by educators and involving student participation, particularly in academic settings. It constitutes a fundamental process in academic and professional writing, serving as a systematic means to ensure the quality, effectiveness, and credibility of scholarly work. However, despite its widespread use, it is one of the most scattered, inconsistent, and ambiguous practices associated with writing instruction. Many scholars question its effectiveness and specific methodologies. Critics of peer review in classrooms express concerns about its ineffectiveness due to students' lack of practice in giving constructive criticism or their limited expertise in the writing craft overall.

Critiques of peer review

A particular concern in peer review is "role duality" as people are in parallel in the role of being an evaluator and being evaluated. Research illustrates that taking on both roles in parallel biases people in their role as evaluators as they engage in strategic actions to increase the chance of being evaluated positively themselves.

The editorial peer review process has been found to be strongly biased against 'negative studies,' i.e. studies that do not work. This then biases the information base of medicine. Journals become biased against negative studies when values come into play. "Who wants to read something that doesn't work?" asks Richard Smith in the Journal of the Royal Society of Medicine. "That's boring." The bias found within peer review, together with the miscommunication that can occur during the process, can also obscure the writer's original vision. Journals such as College Composition and Communication tend to experience problems in peer review because of the diverse nature of the journal's writers, as well as varying degrees of bias leading to conflicts between reviewers.

Teachers as well have expressed disdain for peer review, with many claiming it wastes class time and is unimportant if students already know what grade they are going to get for their assignment. These critiques lead students to believe that peer review is pointless. This is also particularly evident in university classrooms, where the most common source of writing feedback during student years comes from teachers, whose comments are often highly valued. Students may become influenced to provide research in line with the professor's viewpoints, because of the teacher's position of high authority. The effectiveness of feedback largely stems from its high authority. Benjamin Keating, in his article "A Good Development Thing: A Longitudinal Analysis of Peer Review and Authority in Undergraduate Writing," conducted a longitudinal study comparing two groups of students (one majoring in writing and one not) to explore students' perceptions of authority. This research, involving extensive analysis of student texts, concludes that students majoring in non-writing fields tend to undervalue mandatory peer review in class, while those majoring in writing value classmates' comments more. This reflects that peer review feedback has a certain threshold, and effective peer review requires a certain level of expertise. For non-professional writers, peer review feedback may be overlooked, thereby affecting its effectiveness. Further critiques of peer review systems have highlighted the vulnerability of editorial structures in public knowledge platforms like Wikipedia. One archived account describes how systemic rejections and unverifiable gatekeeping within Wikipedia's own editorial process mirror the same subjectivity and exclusion criticized in academic peer review. Elizabeth Ellis Miller, Cameron Mozafari, Justin Lohr and Jessica Enoch state, "While peer review is an integral part of writing classrooms, students often struggle to effectively engage in it." The authors illustrate some reasons for the inefficiency of peer review based on research conducted during peer review sessions in university classrooms:

  1. Lack of Training: Students and even some faculty members may not have received sufficient training to provide constructive feedback. Without proper guidance on what to look for and how to provide helpful comments, peer reviewers may find it challenging to offer meaningful insights.
  2. Limited Engagement: Students may participate in peer review sessions with minimal enthusiasm or involvement, viewing them as obligatory tasks rather than valuable learning opportunities. This lack of investment can result in superficial feedback that fails to address underlying issues in the writing.
  3. Time Constraints: Instructors often allocate limited time for peer review activities during class sessions, which may not be adequate for thorough reviews of peers' work. Consequently, feedback may be rushed or superficial, lacking the depth required for meaningful improvement.

This research demonstrates that besides issues related to expertise, numerous objective factors contribute to students' poor performance in peer review sessions, resulting in feedback from peer reviewers that may not effectively assist authors. Additionally, this study highlights the influence of emotions in peer review sessions, suggesting that both peer reviewers and authors cannot completely eliminate emotions when providing and receiving feedback. This can lead to peer reviewers and authors approaching the feedback with either positive or negative attitudes towards the text, resulting in selective or biased feedback and review, further impacting their ability to objectively evaluate the article. It implies that subjective emotions may also affect the effectiveness of peer review feedback.

Pamela Bedore and Brian O'Sullivan also hold a skeptical view of peer review in most writing contexts. The authors conclude, based on comparing different forms of peer review after systematic training at two universities, that "the crux is that peer review is not just about improving writing but about helping authors achieve their writing vision." Feedback from the majority of non-professional writers during peer review sessions often tends to be superficial, such as simple grammar corrections and questions. This precisely reflects the implication in the conclusion that the focus is only on improving writing skills. Meaningful peer review involves understanding the author's writing intent, posing valuable questions and perspectives, and guiding the author to achieve their writing goals.

The (possibly not declared) use of artificial intelligence to assist or perform the process of peer review has been confirmed by interviews in a survey by Nature. There are a few documented cases of scholars who inserted human-invisible prompts in their preprints in order to favour a positive review in case of an automated refereeing process.

Alternatives

Various alternatives to peer review have been suggested (such as, in the context of science funding, funding-by-lottery).

Comparison and improvement

Magda Tigchelaar compares peer review with self-assessment through an experiment that divided students into three groups: self-assessment, peer review, and no review. Across four writing projects, she observed changes in each group, with surprising results showing significant improvement only in the self-assessment group. The author's analysis suggests that self-assessment allows individuals to clearly understand the revision goals at each stage, as the author is the most familiar with their own writing; self-checking therefore naturally follows a systematic and planned approach to revision. In contrast, the effectiveness of peer review is often limited due to the lack of structured feedback, characterized by scattered, meaningless summaries and evaluations that fail to meet the author's expectations for revising their work. Some educators recommend that for school-related assignments, instead of having a student peer review another student's work for a grade, it can be better to have an instructional assistant do the review. Since instructional assistants tend to have more experience in writing and can take enough time to discuss ideas for the paper with the student, this allows for a more valid review of the draft with less variation in bias.

Stephanie Conner and Jennifer Gray highlight the value of most students' feedback during peer review. They argue that many peer review sessions fail to meet students' expectations, as students, even as reviewers themselves, feel uncertain about providing constructive feedback due to their lack of confidence in their writing. The authors offer numerous improvement strategies. For instance, the peer review process can be segmented into groups, where students present the papers to be reviewed while other group members take notes and analyze them. Then, the review scope can be expanded to the entire class. This widens the review sources and further enhances the level of professionalism.

In order to avoid some of the miscommunication usually found within peer review, the student can, for example, ask their peer reviewer three focused questions about the paper. Because the three questions relate directly to the paper, they help the student lessen the worries carried over from the original draft and help develop a sense of trust between writer and reviewer.

With evolving technology, peer review is also expected to evolve. New tools have the potential to transform the peer review process. Mimi Li discusses the effectiveness and feedback of an online peer review software used in their freshman writing class. Unlike traditional peer review methods commonly used in classrooms, the online peer review software offers many tools for editing articles and comprehensive guidance. For instance, it lists numerous questions peer reviewers can ask and allows various comments to be added to the selected text. Based on observations over a semester, students showed varying degrees of improvement in their writing skills and grades after using the online peer review software. Additionally, they highly praised the technology of online peer review.

Market failure

From Wikipedia, the free encyclopedia
While factories and refineries provide jobs and wages, they are also an example of a market failure, as they impose negative externalities on the surrounding region via their airborne pollutants.

In neoclassical economics, market failure is a situation in which the allocation of goods and services by a free market is not Pareto efficient, often leading to a net loss of economic value. The first known use of the term by economists was in 1958, but the concept has been traced back to the Victorian writers John Stuart Mill and Henry Sidgwick. Market failures are often associated with public goods, time-inconsistent preferences, information asymmetries, failures of competition, principal–agent problems, externalities, unequal bargaining power, behavioral irrationality (in behavioral economics), and macro-economic failures (such as unemployment and inflation).

The existence of a market failure is often the reason that self-regulatory organizations, governments, or supra-national institutions intervene in a particular market, although this view is criticized by heterodox economists. Economists, especially microeconomists, are often concerned with the causes of market failure and possible means of correction. Such analysis plays an important role in many types of public policy decisions and studies.

However, government policy interventions, such as taxes, subsidies, wage and price controls, and regulations, may also lead to an inefficient allocation of resources, sometimes called government failure. Most mainstream economists believe that there are circumstances (like building codes, fire safety regulations or endangered species laws) in which it is possible for government or other organizations to improve the inefficient market outcome. Several heterodox schools of thought disagree with this as a matter of ideology.

An ecological market failure exists when human activity in a market economy is exhausting critical non-renewable resources, disrupting fragile ecosystems, or overloading biospheric waste absorption capacities. In none of these cases does the criterion of Pareto efficiency obtain.

Categories

Different economists have different views about what events are the sources of market failure. Mainstream economic analysis widely accepts that a market failure can arise in relation to several causes. These include if the market is "monopolised" or a small group of businesses hold significant market power, resulting in a "failure of competition"; if production of the good or service results in an externality (external costs or benefits); if the good or service is a "public good"; if there is a "failure of information" or information asymmetry; if there is unequal bargaining power; if there is bounded rationality or irrationality; and if there are macro-economic failures such as unemployment or inflation.

Failure of competition

Agents in a market can gain market power, allowing them to block other mutually beneficial gains from trade from occurring. This can lead to inefficiency due to imperfect competition, which can take many different forms, such as monopolies, monopsonies, or monopolistic competition, if the agent does not implement perfect price discrimination.

In small countries like New Zealand, electricity transmission is a natural monopoly. Due to enormous fixed costs and small market size, one seller can serve the entire market at the downward-sloping section of its average cost curve, meaning that it will have lower average costs than any potential entrant.

It is then a further question about what circumstances allow a monopoly to arise. In some cases, monopolies can maintain themselves where there are "barriers to entry" that prevent other companies from effectively entering and competing in an industry or market. Or there could exist significant first-mover advantages in the market that make it difficult for other firms to compete. Moreover, monopoly can be a result of geographical conditions created by huge distances or isolated locations. This leads to a situation where there are only a few communities scattered across a vast territory with only one supplier. Australia is an example that meets this description. A natural monopoly is a firm whose per-unit cost decreases as it increases output; in this situation it is most efficient (from a cost perspective) to have only a single producer of a good. Natural monopolies display so-called increasing returns to scale: at all possible outputs, marginal cost must lie below average cost whenever average cost is declining. One reason is the existence of fixed costs, which must be paid regardless of the amount of output, so that total costs are spread over more units as output rises, reducing the cost per unit.
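With constant marginal cost c and a fixed cost F (a deliberately simplified assumption), the arithmetic behind this is:

AC(q) = \frac{F + c\,q}{q} = \frac{F}{q} + c, \qquad MC(q) = c < AC(q) \quad \text{for all } q > 0,

so average cost falls as output rises and always exceeds marginal cost, which is the declining-average-cost property described above.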

Public goods

Some markets can fail due to the nature of the goods being exchanged. For instance, some goods can display the attributes of public goods or common goods, wherein sellers are unable to exclude non-buyers from using a product, as in the development of inventions that may spread freely once revealed, such as a new method of harvesting. This can cause underinvestment because developers cannot capture enough of the benefits from success to make the development effort worthwhile. It can also lead to resource depletion in the case of common-pool resources, where, because use of the resource is rival but non-excludable, there is no incentive for users to conserve it. An example of this is a lake with a natural supply of fish: if people catch the fish faster than the fish can reproduce, then the fish population will dwindle until there are no fish left for future generations.
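A minimal simulation sketch of the fish-stock example, assuming logistic reproduction and a fixed aggregate catch by all users; the growth rate, carrying capacity, and catch figures are hypothetical, chosen only to show how harvesting faster than the stock can reproduce exhausts the resource.

    # Illustrative common-pool fishery sketch (all parameters are assumptions).
    def simulate_fishery(stock=1000.0, growth_rate=0.3, capacity=1000.0,
                         annual_catch=320.0, years=15):
        """Logistic reproduction minus a fixed aggregate catch taken by all users."""
        history = [stock]
        for _ in range(years):
            reproduction = growth_rate * stock * (1 - stock / capacity)
            stock = max(stock + reproduction - annual_catch, 0.0)
            history.append(stock)
            if stock == 0.0:
                break  # the resource has been exhausted
        return history

    if __name__ == "__main__":
        for year, fish in enumerate(simulate_fishery()):
            print(f"year {year:2d}: stock ~ {fish:7.1f}")

With these numbers the catch exceeds the maximum sustainable yield, so the stock collapses within a few simulated years, leaving nothing for later users.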

Externalities

A good or service could also have significant externalities, where the gains or losses associated with the production or consumption of a product differ from the private cost. These gains or losses are imposed on a third party that did not take part in the original market transaction. Such externalities can be innate to the methods of production or to other conditions important to the market.

"The Problem of Social Cost" illuminates a different path towards social optimum showing the Pigouvian tax is not the only way towards solving externalities. It is hard to say who discovered externalities first since many classical economists saw the importance of education or a lighthouse, but it was Alfred Marshall who wanted to explore this more. He wondered why long-run supply curve under perfect competition could be decreasing so he founded "external economies". Externalities can be positive or negative depending on how a good/service is produced or what the good/service provides to the public. Positive externalities tend to be goods like vaccines, schools, or advancement of technology. They usually provide the public with a positive gain. Negative externalities would be like noise or air pollution. Coase shows this with his example of the case Sturges v. Bridgman involving a confectioner and doctor. The confectioner had lived there many years and soon the doctor several years into residency decides to build a consulting room; it is right by the confectioner's kitchen which releases vibrations from his grinding of pestle and mortar. The doctor wins the case by a claim of nuisance so the confectioner would have to cease from using his machine. Coase argues there could have been bargains instead the confectioner could have paid the doctor to continue the source of income from using the machine hopefully it is more than what the Doctor is losing. Vice versa the doctor could have paid the confectioner to cease production since he is prohibiting a source of income from the confectioner. Coase used a few more examples similar in scope dealing with social cost of an externality and the possible resolutions.

Congested Times Square in Midtown Manhattan, New York City, which leads the world in urban automobile traffic congestion but which implemented congestion pricing in January 2025 to address the problem

Traffic congestion is an example of market failure that incorporates both non-excludability and externality. Public roads are common resources that are available for the entire population's use (non-excludable), and act as a complement to cars (the more roads there are, the more useful cars become). Because there is very low cost but high benefit to individual drivers in using the roads, the roads become congested, decreasing their usefulness to society. Furthermore, driving can impose hidden costs on society through pollution (externality). Solutions for this include public transportation, congestion pricing, tolls, and other ways of making the driver include the social cost in the decision to drive.
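A stylized sketch of how a congestion charge can internalise the external delay cost; the benefit level, delay function, and toll used here are illustrative assumptions rather than estimates for any real road.

    # Stylized congestion sketch (the benefit level, delay function, and toll are assumptions).
    BENEFIT = 10.0                      # fixed private benefit per trip

    def delay_cost(n):
        """Private delay cost borne by each driver when n drivers use the road."""
        return 0.01 * n

    def drivers_entering(toll=0.0):
        """Drivers keep entering while their private benefit exceeds private cost plus any toll."""
        n = 0
        while BENEFIT - delay_cost(n + 1) - toll > 0:
            n += 1
        return n

    def total_surplus(n):
        return n * (BENEFIT - delay_cost(n))

    free_access = drivers_entering(toll=0.0)
    # Back-of-envelope toll, roughly the marginal external delay at the efficient traffic level.
    priced = drivers_entering(toll=0.01 * free_access / 2)

    print(free_access, round(total_surplus(free_access), 2))  # heavy congestion, low total surplus
    print(priced, round(total_surplus(priced), 2))            # fewer trips, much higher total surplus

Because each driver ignores the delay imposed on everyone else, free access attracts far more trips than is efficient; the toll makes the driver face something like the social cost of the decision to drive.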

Perhaps the best example of the inefficiency associated with common/public goods and externalities is the environmental harm caused by pollution and overexploitation of natural resources.

Coase theorem

The Coase theorem, developed by Ronald Coase and labeled as such by George Stigler, states that private transactions are efficient as long as property rights exist, only a small number of parties are involved, and transaction costs are low. Additionally, this efficiency will hold regardless of who owns the property rights. The theorem comes from a section of Coase's Nobel prize-winning work "The Problem of Social Cost". While the assumptions of low transaction costs and a small number of parties may not always hold in real-world markets, Coase's work changed the long-held belief that which party owned the property rights was a major determinant of whether a market would fail. The Coase theorem points out when one would expect the market to function properly even when there are externalities.

A market is an institution in which individuals or firms exchange not just commodities, but the rights to use them in particular ways for particular amounts of time. [...] Markets are institutions which organize the exchange of control of commodities, where the nature of the control is defined by the property rights attached to the commodities.

As a result, agents' control over the uses of their goods and services can be imperfect, because the system of rights which defines that control is incomplete. Typically, this falls into two generalized rights – excludability and transferability. Excludability deals with the ability of agents to control who uses their commodity, and for how long – and the related costs associated with doing so. Transferability reflects the right of agents to transfer the rights of use from one agent to another, for instance by selling or leasing a commodity, and the costs associated with doing so. If a given system of rights does not fully guarantee these at minimal (or no) cost, then the resulting distribution can be inefficient. Considerations such as these form an important part of the work of institutional economics. Nonetheless, views still differ on whether something displaying these attributes is meaningful without the information provided by the market price system.

Information failures

Information asymmetry is considered a leading type of market failure: an imbalance of information between two or more parties to a transaction. One example is incomplete markets, for instance where second-hand car buyers know there is a risk a car may break down and systematically under-pay to discount this risk, which leads to fewer cars being sold overall; or where insurers know that some policyholders will withhold information and systematically refuse to insure certain groups because of this risk. This may result in economic inefficiency, but there is also a possibility of improving efficiency through market, legal, and regulatory remedies. In contract theory, a transaction in which one party has more or better information than the other is said to exhibit information asymmetry. This creates an imbalance of power in transactions which can sometimes cause the transactions to go awry. Examples of this problem are adverse selection and moral hazard. Most commonly, information asymmetries are studied in the context of principal–agent problems. George Akerlof, Michael Spence, and Joseph E. Stiglitz developed the idea and shared the 2001 Nobel Prize in Economics.
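A stylized sketch of the used-car example in the spirit of Akerlof's "market for lemons"; the quality figures and the buyer's valuation premium are assumptions. Because buyers cannot observe quality, they bid only for the average quality they expect, the owners of the best cars withdraw, and fewer cars are traded than would be efficient.

    # Stylized "market for lemons" sketch (qualities and the buyer premium are assumptions).
    # Each car has a quality q; its seller values it at q and any buyer values it at 1.5 * q,
    # so every car would change hands if quality were observable.
    qualities = [100, 200, 300, 400, 500]

    def cars_actually_traded(qualities, buyer_premium=1.5):
        offered = list(qualities)
        while offered:
            expected_quality = sum(offered) / len(offered)
            buyer_bid = buyer_premium * expected_quality        # buyers bid on average quality only
            remaining = [q for q in offered if q <= buyer_bid]  # owners of better cars withdraw
            if remaining == offered:
                return offered                                  # the market has stopped unravelling
            offered = remaining
        return []

    print(cars_actually_traded(qualities))  # fewer cars end up traded than the efficient five

In this sketch the two best cars are never sold even though every sale would be mutually beneficial under full information, which is the inefficiency the adverse-selection argument points to.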

Unequal bargaining power

In The Wealth of Nations, Adam Smith explored how an employer could "hold out" longer than workers in a dispute over pay, because the employer had more property and faced fewer obstacles in organising, while workers were likely to go hungry sooner. Unequal bargaining power has been used as a concept justifying economic regulation, particularly for employment, consumer, and tenancy rights, since the early 20th century. Thomas Piketty in Capital in the Twenty-First Century explains how unequal bargaining power undermines the conditions of "pure and perfect" competition, leading to a persistently lower share of income for labour and to growing inequality. While Ronald Coase argued that bargaining power merely affects the distribution of income and not productive efficiency, modern behavioural evidence establishes that the distribution or fairness of exchange does affect the motivation to work, and that unequal bargaining power is therefore a market failure. Notably, the price of labour was excluded from the scope of the original charts on supply and demand by their inventor, Fleeming Jenkin, who considered that the wages of labour could not be equated with ordinary markets for commodities such as corn, because of labour's unequal bargaining power.

Bounded rationality

In Models of Man, Herbert A. Simon points out that most people are only partly rational, and are emotional/irrational in the remaining part of their actions. In another work, he states "boundedly rational agents experience limits in formulating and solving complex problems and in processing (receiving, storing, retrieving, transmitting) information" (Williamson, p. 553, citing Simon). Simon describes a number of dimensions along which "classical" models of rationality can be made somewhat more realistic, while sticking within the vein of fairly rigorous formalization. These include:

  • limiting what sorts of utility functions there might be.
  • recognizing the costs of gathering and processing information.
  • the possibility of having a "vector" or "multi-valued" utility function.

Simon suggests that economic agents employ heuristics to make decisions rather than strict rules of optimization. They do this because of the complexity of the situation and their inability to process and compute the expected utility of every alternative action. Deliberation costs might be high, and there are often other, concurrent economic activities also requiring decisions.
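A small sketch of the contrast Simon draws, comparing exhaustive optimisation with a satisficing ("good enough") stopping rule when each evaluation carries a deliberation cost; the payoffs, evaluation cost, and aspiration level are illustrative assumptions.

    # Illustrative sketch (assumptions): each alternative has a payoff, but evaluating one is costly.
    import random

    random.seed(0)
    payoffs = [random.uniform(0, 100) for _ in range(10_000)]  # unknown until evaluated
    EVALUATION_COST = 0.01                                      # deliberation cost per alternative

    def optimize(options):
        """Evaluate every alternative, then pick the best (classical full rationality)."""
        best = max(options)
        return best - EVALUATION_COST * len(options)

    def satisfice(options, aspiration=90.0):
        """Stop at the first alternative that is 'good enough' (Simon's satisficing heuristic)."""
        for evaluated, value in enumerate(options, start=1):
            if value >= aspiration:
                return value - EVALUATION_COST * evaluated
        return options[-1] - EVALUATION_COST * len(options)

    print(f"optimising : net payoff ~ {optimize(payoffs):.1f}")
    print(f"satisficing: net payoff ~ {satisfice(payoffs):.1f}")
    # Once deliberation costs are non-trivial, the 'good enough' rule can beat full optimisation.

The point of the sketch is Simon's: when gathering and processing information is costly, stopping at a satisfactory option can leave the agent better off than exhaustively computing the optimum.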

The concept of bounded rationality was significantly expanded through behavioral economics research, suggesting that people are systematically irrational in day-to-day decisions. Daniel Kahneman in Thinking, Fast and Slow explored how human beings operate as if they have two systems of thinking: a fast "system 1" mode of thought for snap, everyday decisions which applies rules of thumb but is frequently mistaken; and a slow "system 2" mode of thought that is careful and deliberative, but not as often used in making ordinary decisions to buy and sell or do business.

Macro-economic failures

"Unemployment, inflation and "disequilibrium" are considered a category of market failure at a "macro economic" or "whole economy" level. These symptoms (of high job loss, or fast rising prices or both) can result from a financial crash, a recession or depression, and the market failure is evident in the sustained underproduction of an economy, or a tendency not to recover immediately. Macroeconomic business cycles are a part of the market. They are characterized by constant downswings and upswings which influence economic activity. Therefore, this situation requires some kind of government intervention.

Persistent labor shortages

Widespread and persistent domestic labour shortages in various countries are examples of market failure, whereby excessively low salaries (relative to the domestic cost of living) and adverse working conditions (excessive workloads and working hours) in low-wage industries (hospitality and leisure, education, health care, rail transportation, warehousing, aviation, retail, manufacturing, food, construction, elderly care) collectively lead to occupational burnout and attrition of existing workers, insufficient incentives to attract an inflow of domestic workers, short-staffing and regular shift work at workplaces, and further exacerbation (positive feedback) of staff shortages. Poor job quality and artificial shortages perpetuated by salary-paying employers deter workers from entering or remaining in these roles.

Labour shortages occur broadly across multiple industries within a rapidly expanding economy, whilst they often occur within specific industries (which generally offer low salaries) even during periods of high unemployment. In response to domestic labour shortages, business associations such as chambers of commerce, trade associations or employers' organizations generally lobby governments to increase the inward immigration of foreign workers from countries which are less developed and have lower salaries. In addition, business associations have campaigned for greater state provision of child care, which would enable more women to re-enter the workforce at a lower wage rate and move the labour market toward equilibrium. However, as labour shortages in the relevant low-wage industries are often widespread globally, immigration would only partially address the chronic labour shortages in these industries in developed countries (whilst simultaneously discouraging local labour from entering them) and would in turn cause greater labour shortages in developing countries.

Interpretations and policy examples

The above causes represent the mainstream view of what market failures mean and of their importance in the economy. This analysis follows the lead of the neoclassical school, and relies on the notion of Pareto efficiency, which can be in the "public interest" as well as in the interests of stakeholders with equity. This form of analysis has also been adopted by the Keynesian or new Keynesian schools in modern macroeconomics, applying it to Walrasian models of general equilibrium in order to deal with failures to attain full employment, or the non-adjustment of prices and wages.

Policies to prevent market failure are already commonly implemented in the economy. For example, to prevent information asymmetry, members of the New York Stock Exchange agree to abide by its rules in order to promote a fair and orderly market in the trading of listed securities. The members of the NYSE presumably believe that each member is individually better off if every member adheres to its rules – even if they have to forego money-making opportunities that would violate those rules.

A simple example of policies to address market power is government antitrust policies. As an additional example of externalities, municipal governments enforce building codes and license tradesmen to mitigate the incentive to use cheaper (but more dangerous) construction practices, ensuring that the total cost of new construction includes the (otherwise external) cost of preventing future tragedies. The voters who elect municipal officials presumably feel that they are individually better off if everyone complies with the local codes, even if those codes may increase the cost of construction in their communities.

CITES is an international treaty to protect the world's common interest in preserving endangered species – a classic "public good" – against the private interests of poachers, developers and other market participants who might otherwise reap monetary benefits without bearing the known and unknown costs that extinction could create. Even without knowing the true cost of extinction, the signatory countries believe that the societal costs far outweigh the possible private gains that they have agreed to forego.

Some remedies for market failure can resemble other market failures. For example, the issue of systematic underinvestment in research is addressed by the patent system that creates artificial monopolies for successful inventions.

Objections

Public choice

Economists such as Milton Friedman from the Chicago school and others from the Public Choice school argue that market failure does not necessarily imply that the government should attempt to solve it, because the costs of government failure might be worse than those of the market failure it attempts to fix. This failure of government is seen as the result of the inherent problems of democracy and other forms of government perceived by this school, and also of the power of special-interest groups (rent seekers) both in the private sector and in the government bureaucracy. Conditions that many would regard as negative are often seen as an effect of subversion of the free market by coercive government intervention. Beyond philosophical objections, a further issue is the practical difficulty that any single decision-maker may face in trying to understand (and perhaps predict) the numerous interactions that occur between producers and consumers in any market.

Austrian

Some advocates of laissez-faire capitalism, including many economists of the Austrian School, argue that there is no such phenomenon as "market failure". Israel Kirzner states that, "Efficiency for a social system means the efficiency with which it permits its individual members to achieve their individual goals." Inefficiency only arises when means are chosen by individuals that are inconsistent with their desired goals. This definition of efficiency differs from that of Pareto efficiency, and forms the basis of the theoretical argument against the existence of market failures. However, providing that the conditions of the first welfare theorem are met, these two definitions agree, and give identical results. Austrians argue that the market tends to eliminate its inefficiencies through the process of entrepreneurship driven by the profit motive; something the government has great difficulty detecting, or correcting.

Marxian

Objections also exist on more fundamental bases, such as Marxian analysis. Colloquial uses of the term "market failure" reflect the notion of a market "failing" to provide some desired attribute other than efficiency; for instance, high levels of inequality can be considered a "market failure" in this sense, yet are not Pareto inefficient and so would not count as a market failure in mainstream economics. In addition, many Marxian economists would argue that the system of private property rights is a fundamental problem in itself, and that resources should be allocated in another way entirely. This differs from concepts of "market failure", which focus on specific situations – typically seen as "abnormal" – where markets produce inefficient outcomes. Marxists, in contrast, would say that markets have inefficient and democratically unwanted outcomes as an inherent feature of any capitalist economy, and typically omit the concept of market failure from discussion, preferring to ration finite goods not exclusively through a price mechanism but based upon need as determined by society and expressed through the community.

Ecological

In ecological economics, the concept of externalities is considered a misnomer, since market agents are viewed as making their incomes and profits by systematically 'shifting' the social and ecological costs of their activities onto other agents, including future generations. Hence, externalities are a modus operandi of the market, not a failure: the market cannot exist without constantly 'failing'.

The fair and even allocation of non-renewable resources over time is a market failure issue of concern to ecological economics. This issue is also known as 'intergenerational fairness'. It is argued that the market mechanism fails when it comes to allocating the Earth's finite mineral stock fairly and evenly among present and future generations, as future generations are not, and cannot be, present on today's market. In effect, today's market prices do not, and cannot, reflect the preferences of the yet unborn. This is an instance of a market failure passed unrecognized by most mainstream economists, as the concept of Pareto efficiency is entirely static (timeless). Imposing government restrictions on the general level of activity in the economy may be the only way of bringing about a more fair and even intergenerational allocation of the mineral stock. Hence, Nicholas Georgescu-Roegen and Herman Daly, the two leading theorists in the field, have both called for the imposition of such restrictions: Georgescu-Roegen has proposed a minimal bioeconomic program, and Daly has proposed a comprehensive steady-state economy. However, Georgescu-Roegen, Daly, and other economists in the field agree that on a finite Earth, geologic limits will inevitably strain most fairness in the longer run, regardless of any present government restrictions: any rate of extraction and use of the finite stock of non-renewable mineral resources will diminish the remaining stock left over for future generations to use.

Another ecological market failure is presented by the overutilisation of an otherwise renewable resource at a point in time, or within a short period of time. Such overutilisation usually occurs when the resource in question has poorly defined (or non-existent) property rights attached to it, while too many market agents engage in activity simultaneously for the resource to be able to sustain them all. Examples range from over-fishing of fisheries and over-grazing of pastures to over-crowding of recreational areas in congested cities. This type of ecological market failure is generally known as the 'tragedy of the commons'. In this type of market failure, the principle of Pareto efficiency is violated to the utmost, as all agents in the market are left worse off while nobody benefits. It has been argued that the best way to remedy a 'tragedy of the commons'-type of ecological market failure is to establish enforceable property rights politically – only, this may be easier said than done.

The issue of climate change presents an overwhelming example of a 'tragedy of the commons'-type of ecological market failure: the Earth's atmosphere may be regarded as a 'global common' exhibiting poorly defined (non-existent) property rights, and the waste absorption capacity of the atmosphere with regard to carbon dioxide is presently being heavily overloaded by a large volume of emissions from the world economy. Historically, the fossil fuel dependence of the Industrial Revolution has unintentionally thrown mankind out of ecological equilibrium with the rest of the Earth's biosphere (including the atmosphere), and the market has failed to correct the situation ever since. Quite the opposite: the unrestricted market has been exacerbating this global state of ecological dis-equilibrium, and is expected to continue doing so well into the foreseeable future. This particular market failure may be remedied to some extent at the political level by the establishment of an international (or regional) cap and trade property rights system, where carbon dioxide emission permits are bought and sold among market agents.
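A stylized sketch of how a cap-and-trade permit market allocates a fixed emissions cap; the firms, abatement costs, and cap below are hypothetical. Permits gravitate to the firms that find abatement most expensive, and the permit price settles near the marginal abatement cost at the cap.

    # Stylized cap-and-trade sketch (firms, abatement costs, and the cap are assumptions).
    # Each firm needs one permit per tonne emitted; a tonne without a permit must be abated
    # at the firm's own abatement cost.
    abatement_costs = {"steel": 50.0, "cement": 35.0, "power": 20.0, "airline": 65.0}
    baseline_emissions = {"steel": 100, "cement": 100, "power": 100, "airline": 100}
    CAP = 250  # total permits issued, fewer than unconstrained emissions (400 tonnes)

    def clearing_price_and_allocation(costs, baseline, cap):
        """Permits end up with the firms that find abatement most expensive;
        the market price settles at the abatement cost of the last tonne covered."""
        demand = []  # one entry per tonne, tagged with that firm's abatement cost
        for firm, tonnes in baseline.items():
            demand.extend((costs[firm], firm) for _ in range(tonnes))
        demand.sort(reverse=True)            # most expensive abatement first
        covered = demand[:cap]               # these tonnes are covered by permits
        price = covered[-1][0] if covered else 0.0
        allocation = {}
        for _, firm in covered:
            allocation[firm] = allocation.get(firm, 0) + 1
        return price, allocation

    price, allocation = clearing_price_and_allocation(abatement_costs, baseline_emissions, CAP)
    print(price, allocation)  # firms with cheap abatement cut emissions; the others buy permits

In the sketch the firm with the cheapest abatement eliminates its emissions entirely, the most expensive abaters buy permits, and the total cap is met at the lowest aggregate abatement cost, which is the efficiency argument usually made for such schemes.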

The term 'uneconomic growth' describes a pervasive ecological market failure: The ecological costs of further economic growth in a so-called 'full-world economy' like the present world economy may exceed the immediate social benefits derived from this growth.

Zerbe and McCurdy

Zerbe and McCurdy connected a criticism of the market failure paradigm to transaction costs, characterizing the fundamental problem with the paradigm as follows:

"A fundamental problem with the concept of market failure, as economists occasionally recognize, is that it describes a situation that exists everywhere."

Transaction costs are part of every market exchange, although the price of those costs is not usually determined; they occur everywhere and are unpriced. Consequently, market failures and externalities can arise in the economy every time transaction costs arise, leaving no distinct place for government intervention on those grounds. Instead, government should focus on the elimination of both transaction costs and the costs of provision.
