
Sunday, December 8, 2024

Emotivism

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Emotivism

Emotivism is a meta-ethical view that claims that ethical sentences do not express propositions but emotional attitudes. Hence, it is colloquially known as the hurrah/boo theory. Influenced by the growth of analytic philosophy and logical positivism in the 20th century, the theory was stated vividly by A. J. Ayer in his 1936 book Language, Truth and Logic, but its development owes more to C. L. Stevenson.

Emotivism can be considered a form of non-cognitivism or expressivism. It stands in opposition to other forms of non-cognitivism (such as quasi-realism and universal prescriptivism), as well as to all forms of cognitivism (including both moral realism and ethical subjectivism).

In the 1950s, emotivism appeared in a modified form in the universal prescriptivism of R. M. Hare.

History

David Hume's statements on ethics foreshadowed those of 20th century emotivists.

Emotivism reached prominence in the early 20th century, but it was born centuries earlier. In 1710, George Berkeley wrote that language in general often serves to inspire feelings as well as communicate ideas. Decades later, David Hume espoused ideas similar to Stevenson's later ones. In his 1751 book An Enquiry Concerning the Principles of Morals, Hume considered morality not to be related to fact but "determined by sentiment":

In moral deliberations we must be acquainted beforehand with all the objects, and all their relations to each other; and from a comparison of the whole, fix our choice or approbation. … While we are ignorant whether a man were aggressor or not, how can we determine whether the person who killed him be criminal or innocent? But after every circumstance, every relation is known, the understanding has no further room to operate, nor any object on which it could employ itself. The approbation or blame which then ensues, cannot be the work of the judgement, but of the heart; and is not a speculative proposition or affirmation, but an active feeling or sentiment.

G. E. Moore published his Principia Ethica in 1903 and argued that the attempts of ethical naturalists to translate ethical terms (like good and bad) into non-ethical ones (like pleasing and displeasing) committed the "naturalistic fallacy". Moore was a cognitivist, but his case against ethical naturalism steered other philosophers toward noncognitivism, particularly emotivism.

The emergence of logical positivism and its verifiability criterion of meaning early in the 20th century led some philosophers to conclude that ethical statements, being incapable of empirical verification, were cognitively meaningless. This criterion was fundamental to A. J. Ayer's defense of positivism in Language, Truth and Logic, which contains his statement of emotivism. However, positivism is not essential to emotivism itself, perhaps not even in Ayer's form, and some positivists in the Vienna Circle, which had great influence on Ayer, held non-emotivist views.

R. M. Hare unfolded his ethical theory of universal prescriptivism in 1952's The Language of Morals, intending to defend the importance of rational moral argumentation against the "propaganda" he saw encouraged by Stevenson, who thought moral argumentation was sometimes psychological and not rational. But Hare's disagreement was not universal, and the similarities between his noncognitive theory and the emotive one — especially his claim, and Stevenson's, that moral judgments contain commands and are thus not purely descriptive — caused some to regard him as an emotivist, a classification he denied:

I did, and do, follow the emotivists in their rejection of descriptivism. But I was never an emotivist, though I have often been called one. But unlike most of their opponents I saw that it was their irrationalism, not their non-descriptivism, which was mistaken. So my main task was to find a rationalist kind of non-descriptivism, and this led me to establish that imperatives, the simplest kinds of prescriptions, could be subject to logical constraints while not [being] descriptive.

Proponents

Influential statements of emotivism were made by C. K. Ogden and I. A. Richards in their 1923 book on language, The Meaning of Meaning, and by W. H. F. Barnes and A. Duncan-Jones in independent works on ethics in 1934. However, it is the later works of Ayer and especially Stevenson that are the most developed and discussed defenses of the theory.

A. J. Ayer

A. J. Ayer's version of emotivism is given in chapter six, "Critique of Ethics and Theology", of Language, Truth and Logic. In that chapter, Ayer divides "the ordinary system of ethics" into four classes:

  1. "Propositions that express definitions of ethical terms, or judgements about the legitimacy or possibility of certain definitions"
  2. "Propositions describing the phenomena of moral experience, and their causes"
  3. "Exhortations to moral virtue"
  4. "Actual ethical judgments"

He focuses on propositions of the first class—moral judgments—saying that those of the second class belong to science, those of the third are mere commands, and those of the fourth (which are considered in normative ethics as opposed to meta-ethics) are too concrete for ethical philosophy. While class three statements were irrelevant to Ayer's brand of emotivism, they would later play a significant role in Stevenson's.

Ayer argues that moral judgments cannot be translated into non-ethical, empirical terms and thus cannot be verified; in this he agrees with ethical intuitionists. But he differs from intuitionists by discarding appeals to intuition as "worthless" for determining moral truths, since the intuition of one person often contradicts that of another. Instead, Ayer concludes that ethical concepts are "mere pseudo-concepts":

The presence of an ethical symbol in a proposition adds nothing to its factual content. Thus if I say to someone, "You acted wrongly in stealing that money," I am not stating anything more than if I had simply said, "You stole that money." In adding that this action is wrong I am not making any further statement about it. I am simply evincing my moral disapproval of it. It is as if I had said, "You stole that money," in a peculiar tone of horror, or written it with the addition of some special exclamation marks. … If now I generalise my previous statement and say, "Stealing money is wrong," I produce a sentence that has no factual meaning—that is, expresses no proposition that can be either true or false. … I am merely expressing certain moral sentiments.

Ayer agrees with subjectivists in saying that ethical statements are necessarily related to individual attitudes, but he says they lack truth value because they cannot be properly understood as propositions about those attitudes; Ayer thinks ethical sentences are expressions, not assertions, of approval. While an assertion of approval may always be accompanied by an expression of approval, expressions can be made without making assertions; Ayer's example is boredom, which can be expressed through the stated assertion "I am bored" or through non-assertions including tone of voice, body language, and various other verbal statements. He sees ethical statements as expressions of the latter sort, so the phrase "Theft is wrong" is a non-propositional sentence that is an expression of disapproval but is not equivalent to the proposition "I disapprove of theft".

Having argued that his theory of ethics is noncognitive and not subjective, he accepts that his position and subjectivism are equally confronted by G. E. Moore's argument that ethical disputes are clearly genuine disputes and not just expressions of contrary feelings. Ayer's defense is that all ethical disputes are about facts regarding the proper application of a value system to a specific case, not about the value systems themselves, because any dispute about values can only be resolved by judging that one value system is superior to another, and this judgment itself presupposes a shared value system. If Moore is wrong in saying that there are actual disagreements of value, we are left with the claim that there are actual disagreements of fact, and Ayer accepts this without hesitation:

If our opponent concurs with us in expressing moral disapproval of a given type t, then we may get him to condemn a particular action A, by bringing forward arguments to show that A is of type t. For the question whether A does or does not belong to that type is a plain question of fact.

C. L. Stevenson

Stevenson's work has been seen both as an elaboration upon Ayer's views and as a representation of one of "two broad types of ethical emotivism." An analytic philosopher, Stevenson suggested in his 1937 essay "The Emotive Meaning of Ethical Terms" that any ethical theory should explain three things: that intelligent disagreement can occur over moral questions, that moral terms like good are "magnetic" in encouraging action, and that the scientific method is insufficient for verifying moral claims. Stevenson's own theory was fully developed in his 1944 book Ethics and Language. In it, he agrees with Ayer that ethical sentences express the speaker's feelings, but he adds that they also have an imperative component intended to change the listener's feelings and that this component is of greater importance. Where Ayer spoke of values, or fundamental psychological inclinations, Stevenson speaks of attitudes, and where Ayer spoke of disagreement of fact, or rational disputes over the application of certain values to a particular case, Stevenson speaks of differences in belief; the concepts are the same. Terminology aside, Stevenson interprets ethical statements according to two patterns of analysis.

First pattern analysis

Under his first pattern of analysis an ethical statement has two parts: a declaration of the speaker's attitude and an imperative to mirror it, so "'This is good' means I approve of this; do so as well." The first half of the sentence is a proposition, but the imperative half is not, so Stevenson's translation of an ethical sentence remains a noncognitive one.

Imperatives cannot be proved, but they can still be supported so that the listener understands that they are not wholly arbitrary:

If told to close the door, one may ask "Why?" and receive some such reason as "It is too drafty," or "The noise is distracting." … These reasons cannot be called "proofs" in any but a dangerously extended sense, nor are they demonstratively or inductively related to an imperative; but they manifestly do support an imperative. They "back it up," or "establish it," or "base it on concrete references to fact."

The purpose of these supports is to make the listener understand the consequences of the action they are being commanded to do. Once they understand the command's consequences, they can determine whether or not obedience to the command will have desirable results.

The imperative is used to alter the hearer's attitudes or actions. … The supporting reason then describes the situation the imperative seeks to alter, or the new situation the imperative seeks to bring about; and if these facts disclose that the new situation will satisfy a preponderance of the hearer's desires, he will hesitate to obey no longer. More generally, reasons support imperatives by altering such beliefs as may in turn alter an unwillingness to obey.

Second pattern analysis

Stevenson's second pattern of analysis is used for statements about types of actions, not specific actions. Under this pattern,

'This is good' has the meaning of 'This has qualities or relations X, Y, Z … ,' except that 'good' has as well a laudatory meaning, which permits it to express the speaker's approval, and tends to evoke the approval of the hearer.

In second-pattern analysis, rather than judge an action directly, the speaker is evaluating it according to a general principle. For instance, someone who says "Murder is wrong" might mean "Murder decreases happiness overall"; this is a second-pattern statement that leads to a first-pattern one: "I disapprove of anything that decreases happiness overall. Do so as well."

Methods of argumentation

For Stevenson, moral disagreements may arise from different fundamental attitudes, different moral beliefs about specific cases, or both. The methods of moral argumentation he proposed have been divided into three groups, known as logical, rational psychological and nonrational psychological forms of argumentation.

Logical methods involve efforts to show inconsistencies between a person's fundamental attitudes and their particular moral beliefs. For example, someone who says "Edward is a good person" who has previously said "Edward is a thief" and "No thieves are good people" is guilty of inconsistency until he retracts one of his statements. Similarly, a person who says "Lying is always wrong" might consider lies in some situations to be morally permissible, and if examples of these situations can be given, his view can be shown to be logically inconsistent.

Rational psychological methods examine facts that relate fundamental attitudes to particular moral beliefs; the goal is not to show that someone has been inconsistent, as with logical methods, but only that they are wrong about the facts that connect their attitudes to their beliefs. To modify the former example, consider the person who holds that all thieves are bad people. If she sees Edward pocket a wallet found in a public place, she may conclude that he is a thief, and there would be no inconsistency between her attitude (that thieves are bad people) and her belief (that Edward is a bad person because he is a thief). However, it may be that Edward recognized the wallet as belonging to a friend, to whom he promptly returned it. Such a revelation would likely change the observer's belief about Edward, and even if it did not, the attempt to reveal such facts would count as a rational psychological form of moral argumentation.

Non-rational psychological methods revolve around language with psychological influence but no necessarily logical connection to the listener's attitudes. Stevenson called the primary such method "'persuasive,' in a somewhat broadened sense", and wrote:

[Persuasion] depends on the sheer, direct emotional impact of words—on emotive meaning, rhetorical cadence, apt metaphor, stentorian, stimulating, or pleading tones of voice, dramatic gestures, care in establishing rapport with the hearer or audience, and so on. … A redirection of the hearer's attitudes is sought not by the mediating step of altering his beliefs, but by exhortation, whether obvious or subtle, crude or refined.

Persuasion may involve the use of particular emotion-laden words, like "democracy" or "dictator", or hypothetical questions like "What if everyone thought the way you do?" or "How would you feel if you were in their shoes?"

Criticism

Utilitarian philosopher Richard Brandt offered several criticisms of emotivism in his 1959 book Ethical Theory. His first is that "ethical utterances are not obviously the kind of thing the emotive theory says they are, and prima facie, at least, should be viewed as statements." He thinks that emotivism cannot explain why most people, historically speaking, have considered ethical sentences to be "fact-stating" and not just emotive. Furthermore, he argues that people who change their moral views see their prior views as mistaken, not just different, and that this does not make sense if their attitudes were all that changed:

Suppose, for instance, as a child a person disliked eating peas. When he recalls this as an adult he is amused and notes how preferences change with age. He does not say, however, that his former attitude was mistaken. If, on the other hand, he remembers regarding irreligion or divorce as wicked, and now does not, he regards his former view as erroneous and unfounded. … Ethical statements do not look like the kind of thing the emotive theory says they are.

James Urmson's 1968 book The Emotive Theory of Ethics also disagreed with many of Stevenson's points in Ethics and Language, "a work of great value" with "a few serious mistakes [that] led Stevenson consistently to distort his otherwise valuable insights".

Magnetic influence

Brandt criticized what he termed "the 'magnetic influence' thesis", the idea of Stevenson that ethical statements are meant to influence the listener's attitudes. Brandt contends that most ethical statements, including judgments of people who are not within listening range, are not made with the intention to alter the attitudes of others. Twenty years earlier, Sir William David Ross offered much the same criticism in his book Foundations of Ethics. Ross suggests that the emotivist theory seems to be coherent only when dealing with simple linguistic acts, such as recommending, commanding, or passing judgement on something happening at the same point of time as the utterance.

… There is no doubt that such words as 'you ought to do so-and-so' may be used as one's means of so inducing a person to behave a certain way. But if we are to do justice to the meaning of 'right' or 'ought', we must take account also of such modes of speech as 'he ought to do so-and-so', 'you ought to have done so-and-so', 'if this and that were the case, you ought to have done so-and-so', 'if this and that were the case, you ought to do so-and-so', 'I ought to do so-and-so.' Where the judgement of obligation has reference either to a third person, not the person addressed, or to the past, or to an unfulfilled past condition, or to a future treated as merely possible, or to the speaker himself, there is no plausibility in describing the judgement as command.

According to this view, it would make little sense to translate a statement such as "Galileo should not have been forced to recant on heliocentrism" into a command, imperative, or recommendation; to do so might require a radical change in the meaning of these ethical statements. Under this criticism, it would appear as if emotivist and prescriptivist theories are only capable of converting a relatively small subset of all ethical claims into imperatives.

Like Ross and Brandt, Urmson disagrees with Stevenson's "causal theory" of emotive meaning—the theory that moral statements only have emotive meaning when they are made to change a listener's attitude—saying that it is incorrect in explaining "evaluative force in purely causal terms". This is Urmson's fundamental criticism, and he suggests that Stevenson would have made a stronger case by explaining emotive meaning in terms of "commending and recommending attitudes", not in terms of "the power to evoke attitudes".

Stevenson's Ethics and Language, written after Ross's book but before Brandt's and Urmson's, states that emotive terms are "not always used for purposes of exhortation." For example, in the sentence "Slavery was good in Ancient Rome", Stevenson thinks one is speaking of past attitudes in an "almost purely descriptive" sense. And in some discussions of current attitudes, "agreement in attitude can be taken for granted," so a judgment like "He was wrong to kill them" might describe one's attitudes yet be "emotively inactive", with no real emotive (or imperative) meaning. Stevenson is doubtful that sentences in such contexts qualify as normative ethical sentences, maintaining that "for the contexts that are most typical of normative ethics, the ethical terms have a function that is both emotive and descriptive."

Philippa Foot's moral realism

Philippa Foot adopts a moral realist position, criticizing the idea that when evaluation is superposed on fact there has been a "committal in a new dimension." She introduces, by analogy, the practical implications of using the word injury. Not just anything counts as an injury. There must be some impairment. When we suppose that a man wants the things the injury prevents him from obtaining, have we not fallen into the old naturalistic fallacy?

It may seem that the only way to make a necessary connexion between 'injury' and the things that are to be avoided, is to say that it is only used in an 'action-guiding sense' when applied to something the speaker intends to avoid. But we should look carefully at the crucial move in that argument, and query the suggestion that someone might happen not to want anything for which he would need the use of hands or eyes. Hands and eyes, like ears and legs, play a part in so many operations that a man could only be said not to need them if he had no wants at all.

Foot argues that the virtues, like hands and eyes in the analogy, play so large a part in so many operations that it is implausible to suppose that a committal in a non-naturalist dimension is necessary to demonstrate their goodness.

Philosophers who have supposed that actual action was required if 'good' were to be used in a sincere evaluation have got into difficulties over weakness of will, and they should surely agree that enough has been done if we can show that any man has reason to aim at virtue and avoid vice. But is this impossibly difficult if we consider the kinds of things that count as virtue and vice? Consider, for instance, the cardinal virtues, prudence, temperance, courage and justice. Obviously any man needs prudence, but does he not also need to resist the temptation of pleasure when there is harm involved? And how could it be argued that he would never need to face what was fearful for the sake of some good? It is not obvious what someone would mean if he said that temperance or courage were not good qualities, and this not because of the 'praising' sense of these words, but because of the things that courage and temperance are.

Standard using and standard setting

As an offshoot of his fundamental criticism of Stevenson's magnetic influence thesis, Urmson wrote that ethical statements had two functions—"standard using", the application of accepted values to a particular case, and "standard setting", the act of proposing certain values as those that should be accepted—and that Stevenson confused them. According to Urmson, Stevenson's "I approve of this; do so as well" is a standard-setting statement, yet most moral statements are actually standard-using ones, so Stevenson's explanation of ethical sentences is unsatisfactory. Colin Wilks has responded that Stevenson's distinction between first-order and second-order statements resolves this problem: a person who says "Sharing is good" may be making a second-order statement like "Sharing is approved of by the community", the sort of standard-using statement Urmson says is most typical of moral discourse. At the same time, their statement can be reduced to a first-order, standard-setting sentence: "I approve of whatever is approved of by the community; do so as well."

Atlantic history

From Wikipedia, the free encyclopedia
The Atlantic Ocean, which gives its name to the so-called Atlantic World of the early modern period

Atlantic history is a specialty field in history that studies the Atlantic World in the early modern period. The Atlantic World was created by the contact between Europeans and the peoples of the Americas, and Atlantic history is the study of that world. It is premised on the idea that, following the rise of sustained European contact with the New World in the 16th century, the continents that bordered the Atlantic Ocean—the Americas, Europe, and Africa—constituted a regional system or common sphere of economic and cultural exchange that can be studied as a totality.

Its theme is the complex interaction between European powers (especially Great Britain, France, Spain, and Portugal) and their colonies in the Americas. It encompasses a wide range of demographic, social, economic, political, legal, military, intellectual and religious topics treated in comparative fashion by looking at both sides of the Atlantic. Religious revivals in Britain and Germany are studied, as is the First Great Awakening in the Thirteen Colonies. Emigration, race and slavery are also important topics.

Researchers of Atlantic history typically focus on the interconnections and exchanges between these regions and the civilizations they harbored. In particular, they argue that the boundaries between nation states which traditionally determined the limits of older historiography should not be applied to such transnational phenomena as slavery, colonialism, missionary activity and economic expansion. Environmental history and the study of historical demography also play an important role, as many key questions in the field revolve around the ecological and epidemiological impact of the Columbian exchange.

Robert R. Palmer, an American historian of the French Revolution, pioneered the concept in the 1950s with a wide-ranging comparative history of how numerous nations experienced what he called The Age of the Democratic Revolution: A Political History of Europe and America, 1760–1800 (1959 and 1964). In this monumental work, he did not compare the French and the American Revolutions as successful models against other types of revolutions; instead, he developed a wider understanding of the changes led by revolutionary processes across Western civilization. Such work followed in the footsteps of C. L. R. James who, in the 1930s, connected the French and Haitian Revolutions. Since the 1980s Atlantic history has emerged as an increasingly popular alternative to the older discipline of imperial history, although it could be argued that the field is simply a refinement and reorientation of traditional historiography dealing with the interaction between early modern Europeans and native peoples in the Atlantic sphere. The organization of Atlantic history as a recognized area of historiography began in the 1980s under the impetus of American historians Bernard Bailyn of Harvard University and Jack P. Greene of Johns Hopkins University, among others. The post-World War II integration of the European Union and the continuing importance of NATO played an indirect role in stimulating interest throughout the 1990s.

Development of the field

Bernard Bailyn's Seminar on the History of the Atlantic World promoted social and demographic studies, especially regarding demographic flows of population into colonial America. As a leading advocate of the history of the Atlantic world, Bailyn organized an annual international seminar at Harvard from 1995, "The International Seminar on the History of the Atlantic World, 1500-1825", one of the first and most important academic initiatives to launch the Atlantic perspective. From 1995 to 2010 the Atlantic History Seminar sponsored an annual meeting of young historians engaged in creative research on aspects of Atlantic history. In all, 366 young historians came through the Seminar program, 202 from universities in the US and 164 from universities abroad. Its purpose was to advance the scholarship of young historians of many nations interested in the common, comparative, and interactive aspects of the lives of the peoples of the Atlantic basin, mainly in the early modern period, and thereby to contribute to the study of this transnational historical subject.

Bailyn's Atlantic History: Concepts and Contours (2005) explores the borders and contents of the emerging field, which emphasizes cosmopolitan and multicultural elements that have tended to be neglected or considered in isolation by traditional historiography dealing with the Americas. Bailyn's reflections stem in part from his seminar at Harvard since the mid-1980s.

Other important scholars in the field include Jack Greene, who directed a program in Atlantic history at Johns Hopkins from 1972 to 1992 that has since expanded to global concerns, and Karen Ordahl Kupperman, who established the Atlantic Workshop at New York University in 1997.

Other scholars in the field include Ida Altman, Kenneth J. Andrien, David Armitage, Trevor Burnard, Jorge Canizares-Esguerra, Nicholas Canny, Philip D. Curtin, Laurent Dubois, J.H. Elliott, David Eltis, Alison Games, Eliga H. Gould, Anthony Grafton, Joseph C. Miller, Philip D. Morgan, Anthony Pagden, Jennifer L. Anderson, John Thornton, James D. Tracy, Carla G. Pestana, Isaac Land, Richard S. Dunn, and Ned C. Landsman.

Perspectives

Alison Games (2006) explores the convergence of the multiple strands of scholarly interest that have generated the new field of Atlantic history, which takes as its geographic unit of analysis the Atlantic Ocean and the four continents that surround it. She argues Atlantic history is best approached as a slice of world history. The Atlantic, moreover, is a region that has logic as a unit of historical analysis only within a limited chronology. An Atlantic perspective can help historians understand changes within the region that a more limited geographic framework might obscure. Attempts to write a Braudelian Atlantic history, one that includes and connects the entire region, remain elusive, hindered in part by methodological impediments, by the real disjunction that characterized the Atlantic's historical and geographic components, by the disciplinary divisions that discourage historians from speaking to and writing for each other, and by the challenge of finding a vantage point that is not rooted in any single place.

Colonial studies

One impetus for Atlantic studies began in the 1960s with the historians of slavery who started tracking the routes of the transatlantic slave trade. A second source came from historians who studied the colonial history of the United States. Many were trained in early modern European history and were familiar with the historiography of the British Empire, which had been introduced a century before by George Louis Beer and Charles McLean Andrews. Historians studying colonialism have long been open to interdisciplinary perspectives, such as comparative approaches. In addition there was a frustration involved in writing about very few people in a small remote colony. Atlantic history opens the horizon to large forces at work over great distances.

Criticism

Some critics have complained that Atlantic history is little more than imperial history under another name. It has been argued that it is too expansive in claiming to subsume both of the American continents, Africa, and Europe, without seriously engaging with them. According to Caroline Dodds Pennock, indigenous people are often seen as static recipients of transatlantic encounter, despite the fact that thousands of Native Americans crossed the ocean during the sixteenth century, some by choice.

Canadian scholar Ian K. Steele argued that Atlantic history will tend to draw students interested in exploring their country's history beyond national myths, while offering historical support for such 21st century policies as the North American Free Trade Agreement (NAFTA), the Organization of American States (OAS), the North Atlantic Treaty Organization (NATO), the New Europe, Christendom, and even the United Nations (UN). He concludes, "The early modern Atlantic can even be read as a natural antechamber for American-led globalization of capitalism and serve as an historical challenge to the coalescing New Europe. No wonder that the academic reception of the new Atlantic history has been enthusiastic in the United States, and less so in Britain, France, Spain, and Portugal, where histories of national Atlantic empires continue to thrive."

Vaccine trial

From Wikipedia, the free encyclopedia
Volunteer participating in a phase 3 trial of CoronaVac at Padjadjaran University, Bandung, West Java, Indonesia.

A vaccine trial is a clinical trial that aims at establishing the safety and efficacy of a vaccine prior to it being licensed.

A vaccine candidate drug is first identified through preclinical evaluations that could involve high throughput screening and selecting the proper antigen to invoke an immune response.

Some vaccine trials may take months or years to complete, depending on the time required for the subjects to react to the vaccine and develop the required antibodies.

Preclinical stage

Preclinical development stages are necessary to determine the immunogenicity potential and safety profile for a vaccine candidate.

This is also the stage in which the drug candidate may be first tested in laboratory animals prior to moving to the Phase I trials. Vaccines such as the oral polio vaccine were first tested for adverse effects and immunogenicity in non-human primates, such as monkeys, and in laboratory mice.

Recent scientific advances have made it possible to use transgenic animals as part of preclinical vaccine protocols, in the hope of more accurately predicting drug reactions in humans. Understanding vaccine safety and the immunological response to the vaccine, including toxicity, is a necessary component of the preclinical stage. Other drug trials focus on pharmacodynamics and pharmacokinetics; however, in vaccine studies it is essential to understand toxic effects at all possible dosage levels and the interactions with the immune system.

Phase I

The Phase I study consists of introducing the vaccine candidate to assess its safety in healthy people. A vaccine Phase I trial involves normal healthy subjects, each tested with either the candidate vaccine or a "control" treatment, typically a placebo or an adjuvant-containing cocktail, or an established vaccine (which might be intended to protect against a different pathogen). The primary observation is for detection of safety (absence of an adverse event) and evidence of an immune response.

After the administration of the vaccine or placebo, the researchers collect data on antibody production and on health outcomes (such as illness due to the targeted infection or to another infection). Following the trial protocol, the specified statistical test is performed to gauge the statistical significance of the observed differences in outcomes between the treatment and control groups. Side effects of the vaccine are also noted, and these contribute to the decision on whether to advance the candidate vaccine to a Phase II trial.
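
The article does not specify which statistical test a given protocol prescribes. As a minimal sketch, assuming a simple comparison of illness counts between a vaccinated group and a placebo group of equal size (the counts below are invented for illustration), a two-by-two contingency table could be evaluated with Fisher's exact test:

```python
# Minimal sketch (hypothetical counts): comparing illness rates between a
# vaccine group and a placebo group using Fisher's exact test.
from scipy.stats import fisher_exact

# Hypothetical trial outcomes: [ill, not ill] per group
vaccine_group = [4, 96]    # 4 of 100 vaccinated participants became ill
placebo_group = [15, 85]   # 15 of 100 placebo participants became ill

odds_ratio, p_value = fisher_exact([vaccine_group, placebo_group])

print(f"odds ratio = {odds_ratio:.2f}, p-value = {p_value:.4f}")
if p_value < 0.05:
    print("Difference in illness rates is statistically significant at alpha = 0.05")
else:
    print("No statistically significant difference detected")
```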

One typical version of Phase I studies in vaccines involves an escalation study, which is used mainly in medicinal research trials. The drug is introduced into a small cohort of healthy volunteers. Vaccine escalation studies aim to minimize the chances of serious adverse effects (SAEs) by slowly increasing the drug dosage or frequency. The first level of an escalation study usually has two or three groups of around 10 healthy volunteers. Each subgroup receives the same vaccine dose, which is the expected lowest dose necessary to invoke an immune response (the main goal in a vaccine – to create immunity). New subgroups can be added to experiment with a different dosing regimen as long as the previous subgroup did not experience SAEs. There are variations in the vaccination order that can be used for different studies. For example, the first subgroup could complete the entire regimen before the second subgroup starts, or the second can begin before the first ends as long as SAEs were not detected. The vaccination schedule will vary depending on the nature of the drug (i.e. the need for a booster or several doses over the course of a short time period). Escalation studies are ideal for minimizing the risk of SAEs that could occur with less controlled and divided protocols.
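
The following is a rough simulation sketch of the escalation logic described above, not an actual trial protocol; the dose levels, subgroup size, and the observe_sae stand-in are hypothetical assumptions.

```python
import random

# Hypothetical dose levels (arbitrary units), lowest expected immunogenic dose first
DOSE_LEVELS = [10, 20, 40, 80]
SUBGROUP_SIZE = 10  # roughly ten healthy volunteers per subgroup, as described above

def observe_sae(dose):
    """Placeholder for clinical observation: returns True if a serious adverse
    effect (SAE) occurs in a volunteer at this dose. Simulated here with a
    dose-dependent probability purely for illustration."""
    return random.random() < dose / 1000.0

def run_escalation():
    completed = []
    for dose in DOSE_LEVELS:
        subgroup_saes = [observe_sae(dose) for _ in range(SUBGROUP_SIZE)]
        if any(subgroup_saes):
            # Escalation stops: no further subgroups receive a higher dose
            print(f"SAE observed at dose {dose}; halting escalation.")
            break
        completed.append(dose)
        print(f"Dose {dose} completed in {SUBGROUP_SIZE} volunteers with no SAE.")
    return completed

if __name__ == "__main__":
    run_escalation()
```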

Phase II

The transition to Phase II relies on the immunogenicity and toxicity results from Phase I in a small cohort of healthy volunteers. Phase II will consist of more healthy volunteers in the vaccine target population (on the order of hundreds of people) to determine reactions in a more diverse set of humans and to test different schedules.

Phase III

Similarly, Phase III trials continue to monitor toxicity, immunogenicity, and SAEs on a much larger scale. The vaccine must be shown to be safe and effective in natural disease conditions before being submitted for approval and then general production. In the United States, the Food and Drug Administration (FDA) is responsible for approving vaccines.

Phase IV

Phase IV trials are typically monitoring stages that collect information continuously on vaccine usage, adverse effects, and long-term immunity after the vaccine is licensed and marketed. Harmful effects, such as increased risk of liver failure or heart attacks, discovered by Phase IV trials may result in a drug being no longer sold, or restricted to certain uses; examples include cerivastatin (brand names Baycol and Lipobay), troglitazone (Rezulin) and rofecoxib (Vioxx). Further examples include the swine flu vaccine and the rotavirus vaccine, which increased the risk of Guillain-Barré syndrome (GBS) and intussusception respectively. Thus, the fourth phase of clinical trials is used to ensure long-term vaccine safety.

Saturday, December 7, 2024

Collective memory

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Collective_memory

Collective memory refers to the shared pool of memories, knowledge and information of a social group that is significantly associated with the group's identity. The English phrase "collective memory" and the equivalent French phrase "la mémoire collective" appeared in the second half of the nineteenth century. The philosopher and sociologist Maurice Halbwachs analyzed and advanced the concept of collective memory in the book Les cadres sociaux de la mémoire (1925).

Collective memory can be constructed, shared, and passed on by large and small social groups. Examples of such groups include nations, generations, and communities, among others.

Collective memory has been a topic of interest and research across a number of disciplines, including psychology, sociology, history, philosophy, and anthropology.

Conceptualization of collective memory

Attributes of collective memory

Collective memory has been conceptualized in several ways and proposed to have certain attributes. For instance, collective memory can refer to a shared body of knowledge (e.g., memory of a nation's past leaders or presidents); the image, narrative, values and ideas of a social group; or the continuous process by which collective memories of events change.

History versus collective memory

The difference between history and collective memory is best understood when comparing the aims and characteristics of each. A goal of history broadly is to provide a comprehensive, accurate, and unbiased portrayal of past events. This often includes the representation and comparison of multiple perspectives and the integration of these perspectives and details to provide a complete and accurate account. In contrast, collective memory focuses on a single perspective, for instance, the perspective of one social group, nation, or community. Consequently, collective memory represents past events as associated with the values, narratives and biases specific to that group.

Studies have found that people from different nations can have major differences in their recollections of the past. In one study where American and Russian students were instructed to recall significant events from World War II and these lists of events were compared, the majority of events recalled by the American and Russian students were not shared. Differences in the events recalled and emotional views towards the Civil War, World War II and the Iraq War have also been found in a study comparing collective memory between generations of Americans.

Perspectives on collective memory

The concept of collective memory, initially developed by Halbwachs, has been explored and expanded from various angles – a few of these are introduced below.

James E. Young has introduced the notion of 'collected memory' (as opposed to collective memory), marking memory's inherently fragmented, collected and individual character, while Jan Assmann develops the notion of 'communicative memory', a variety of collective memory based on everyday communication. This form of memory resembles the exchanges in oral cultures or the memories collected (and made collective) through oral tradition. As another subform of collective memory, Assmann mentions forms detached from the everyday; these can be particular materialized and fixed points, such as texts and monuments.

The theory of collective memory was also discussed by former Hiroshima resident and atomic-bomb survivor, Kiyoshi Tanimoto, in a tour of the United States as an attempt to rally support and funding for the reconstruction of his Memorial Methodist Church in Hiroshima. He theorized that the use of the atomic bomb had forever added to the world's collective memory and would serve in the future as a warning against such devices. See John Hersey's 1946 book Hiroshima.

Historian Guy Beiner (1968- ), an authority on memory and the history of Ireland, has criticized the unreflective use of the adjective "collective" in many studies of memory:

The problem is with crude concepts of collectivity, which assume a homogeneity that is rarely, if ever, present, and maintain that, since memory is constructed, it is entirely subject to the manipulations of those invested in its maintenance, denying that there can be limits to the malleability of memory or to the extent to which artificial constructions of memory can be inculcated. In practice, the construction of a completely collective memory is at best an aspiration of politicians, which is never entirely fulfilled and is always subject to contestations.

In its place, Beiner has promoted the term "social memory" and has also demonstrated its limitations by developing a related concept of "social forgetting".

Historian David Rieff takes issue with the term "collective memory", distinguishing between memories of people who were actually alive during the events in question, and people who only know about them from culture or media. Rieff writes in opposition to George Santayana's aphorism "those who cannot remember the past are condemned to repeat it", pointing out that strong cultural emphasis on certain historical events (often wrongs against the group) can prevent resolution of armed conflicts, especially when the conflict has been previously fought to a draw. The sociologist David Leupold draws attention to the problem of structural nationalism inherent in the notion of collective memory, arguing in favor of "emancipating the notion of collective memory from being subjected to the national collective" by employing a multi-collective perspective that highlights the mutual interaction of other memory collectives that form around generational belonging, family, locality or socio-political world-views.

Pierre Lévy argues that the phenomenon of human collective intelligence undergoes a profound shift with the arrival of the internet paradigm, as it allows the vast majority of humanity to access and modify a common shared online collective memory.

Collective memory and psychological research

Though traditionally a topic studied in the humanities, collective memory has become an area of interest in psychology. Common approaches taken in psychology to study collective memory have included investigating the cognitive mechanisms involved in the formation and transmission of collective memory; and comparing the social representations of history between social groups.

Social representations of history

Research on collective memory has taken the approach of comparing how different social groups form their own representations of history and how such collective memories can impact ideals, values, and behaviors, and vice versa. Developing social identity and evaluating the past in order to prevent past patterns of conflict and errors are proposed functions of why groups form social representations of history. This research has focused on surveying different groups or comparing differences in recollections of historical events, such as the examples given earlier when comparing history and collective memory.

Differences in collective memories between social groups, such as nations or states, have been attributed to collective narcissism and egocentric/ethnocentric bias. In one related study, where participants from 35 countries were questioned about their country's contribution to world history and provided a percentage estimate from 0% to 100%, evidence for collective narcissism was found, as many countries gave responses exaggerating their country's contribution. In another study, where Americans from the 50 states were asked similar questions regarding their state's contribution to the history of the United States, patterns of overestimation and collective narcissism were also found.

Cognitive mechanisms underlying collaborative recall

Certain cognitive mechanisms involved during group recall, and the interactions between these mechanisms, have been suggested to contribute to the formation of collective memory. Below are some mechanisms involved when groups of individuals recall collaboratively.

Collaborative inhibition and retrieval disruption

When groups collaborate to recall information, they experience collaborative inhibition, a decrease in performance compared to the pooled memory recall of an equal number of individuals. Weldon and Bellinger (1997) and Basden, Basden, Bryner, and Thomas (1997) provided evidence that retrieval interference underlies collaborative inhibition, as hearing other members' thoughts and discussion about the topic at hand interferes with one's own organization of thoughts and impairs memory.

The main theoretical account for collaborative inhibition is retrieval disruption. During the encoding of information, individuals form their own idiosyncratic organization of the information. This organization is later used when trying to recall the information. In a group setting, as members exchange information, the information recalled by group members disrupts the idiosyncratic organization one had developed. As each member's organization is disrupted, the group recalls less information than the pooled recall of an equal number of participants who recalled individually.

Despite the problem of collaborative inhibition, working in groups may benefit an individual's memory in the long run, as group discussion exposes one to many different ideas over time. Working alone initially prior to collaboration seems to be the optimal way to increase memory.

Early speculations about collaborative inhibition included explanations such as diminished personal accountability, social loafing, and the diffusion of responsibility; however, retrieval disruption remains the leading explanation. Studies have traced collaborative inhibition to sources other than social loafing, as offering a monetary incentive has failed to produce an increase in memory for groups. Further evidence from this research suggests that something other than social loafing is at work, as reducing evaluation apprehension – the focus on one's performance amongst other people – assisted individuals' memories but did not produce a gain in memory for groups. Personal accountability – drawing attention to one's own performance and contribution in a group – also did not reduce collaborative inhibition. Therefore, group members' motivation to overcome the interference of group recall cannot be supplied by these motivational factors.

Cross-cueing

Information exchange among group members often helps individuals to remember things that they would not have remembered had they been working alone. In other words, the information provided by person A may 'cue' memories in person B, resulting in enhanced recall. During group recall an individual might not remember as much as they would on their own, since their retrieval cues may be disrupted by other team members; nevertheless, cross-cueing has benefits, as cues from others can prompt memories that a member would not have retrieved alone. Cross-cueing thus plays a role in the formulation of group recall (Barber, 2011).

Collective false memories

One study examined how individuals remembered the bombing of the Bologna Centrale railway station in Italy in 1980; the station clock was later stopped and set to 10:25, the time of the explosion, to commemorate the attack (de Vito et al., 2009). When asked whether the clock had remained functioning after the bombing, respondents said it had not, when in fact the opposite was true (Legge, 2018). There have been many instances in history where people create a false memory. A 2003 study conducted at Claremont Graduate University found that the brain handles memory formed during a stressful event differently from memory of an ordinary event. Other instances of false memories may occur when remembering a detail of an object that is not actually there, or misremembering how someone looked at a crime scene (Legge, 2018). It is possible for many people to share the same false memory; some call this the "Mandela effect". The name comes from South African civil rights leader Nelson Mandela, whom many people falsely believed had died (Legge, 2018). The Pandora's Box experiment suggests that language further complicates false memories: language shapes imaginative experience, which can make it harder for people to gather accurate information (Jablonka, 2017).

Error pruning

Compared to recalling individually, group members can provide opportunities for error pruning during recall to detect errors that would otherwise be uncorrected by an individual.

Social contagion errors

Group settings can also provide opportunities for exposure to erroneous information that may be mistaken to be correct or previously studied.

Re-exposure effects

Listening to group members recall the previously encoded information can enhance memory as it provides a second exposure opportunity to the information.

Forgetting

Studies have shown that information forgotten and excluded during group recall can promote the forgetting of related information compared to information unrelated to that which was excluded during group recall. Selective forgetting has been suggested to be a critical mechanism involved in the formation of collective memories and in determining which details are ultimately included and excluded by group members. This mechanism has been studied using the socially-shared retrieval-induced forgetting paradigm, a variation of the retrieval-induced forgetting method used with individuals. The brain contains several regions important for memory, including the cerebral cortex, the fornix and the structures they connect. These structures are required for acquiring new information, and if any of them is damaged a person can develop anterograde or retrograde amnesia (Anastasio et al., 2012, p. 26). Amnesia can result from anything that disrupts memory or affects a person psychologically, and over time memory loss becomes a natural part of amnesia; retrograde amnesia may affect memory for either recent or more distant past events.

Synchronization of memories from dyads to networks

Bottom-up approaches to the formation of collective memories investigate how cognitive-level phenomena allow people to synchronize their memories following conversational remembering. Due to the malleability of human memory, talking with one another about the past results in memory changes that increase the similarity between the interactional partners' memories. When these dyadic interactions occur in a social network, one can understand how large communities converge on a similar memory of the past. Research on larger interactions shows that collective memory in larger social networks can emerge through cognitive mechanisms involved in small group interactions.

Computational approaches to collective memory analysis

With the availability of online data such as social media and social network data, and with developments in natural language processing and information retrieval, it has become possible to study how online users refer to the past and what they focus on. In an early study in 2010, researchers extracted absolute year references from large amounts of news articles collected for queries denoting particular countries. This made it possible to plot so-called memory curves, which demonstrate which years are particularly strongly remembered in the context of different countries (memory curves are commonly exponential in shape, with occasional peaks corresponding to the commemoration of important past events) and how attention to more distant years declines in news. Based on topic modelling and analysis, the researchers then detected major topics portraying how particular years are remembered. Besides news, Wikipedia has also been a target of analysis: viewership statistics of Wikipedia articles on aircraft crashes were analyzed to study the relation between recent events and past events, particularly for understanding memory-triggering patterns.
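
As a minimal sketch of the year-extraction step described above (the sample texts and year range are invented for illustration and are not from the cited study), absolute year references can be pulled from text with a regular expression and tallied into a simple memory curve:

```python
import re
from collections import Counter

# Hypothetical snippets standing in for news articles retrieved for a country query
articles = [
    "The treaty signed in 1945 reshaped the region, echoing debates from 1918.",
    "Commemorations in 2010 recalled the events of 1945 and 1989.",
    "Historians compared 1989 with earlier upheavals of 1848.",
]

YEAR_PATTERN = re.compile(r"\b(1[5-9]\d{2}|20[0-2]\d)\b")  # matches years 1500-2029

def memory_curve(texts):
    """Count how often each absolute year is referenced across the texts."""
    counts = Counter()
    for text in texts:
        counts.update(int(year) for year in YEAR_PATTERN.findall(text))
    return counts

curve = memory_curve(articles)
for year, count in sorted(curve.items()):
    print(year, count)  # peaks indicate strongly remembered years
```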

Other studies have focused on the analysis of collective memory in social networks, such as an investigation of over 2 million history-related tweets (both quantitatively and qualitatively) to uncover their characteristics and the ways in which history-related content is disseminated in social networks. Hashtags, as well as tweets, can be classified into the following types (a rough classification sketch follows the list):

  • General History hashtags used in general to broadly identify history-related tweets that do not fall into any specific type (e.g., #history, #historyfacts).
  • National or Regional History hashtags which relate to national or regional histories, for example, #ushistory or #canadianhistory including also past names of locations (e.g., #ancientgreece).
  • Facet-focused History hashtags which relate to particular thematic facets of history (e.g., #sporthistory, #arthistory).
  • General Commemoration hashtags that serve for commemorating or recalling a certain day or period (often somehow related to the day of tweet posting), or unspecified entities, such as #todayweremember, #otd, #onthisday, #4yearsago and #rememberthem.
  • Historical Events hashtags related to particular events in the past (e.g., #wwi, #sevenyearswar).
  • Historical Entities hashtags denoting references to specific entities such as persons, organizations or objects (e.g., #stalin, #napoleon).
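
A toy keyword-based classifier loosely following the typology above might look like the sketch below; the rules and example hashtags are illustrative assumptions, not the method of the study cited above.

```python
import re

# Illustrative keyword rules loosely following the typology above; these rules
# and hashtags are assumptions for demonstration only.
RULES = [
    ("Historical Events",            {"wwi", "wwii", "sevenyearswar", "coldwar"}),
    ("Historical Entities",          {"stalin", "napoleon", "lincoln"}),
    ("National or Regional History", {"ushistory", "canadianhistory", "ancientgreece"}),
    ("Facet-focused History",        {"sporthistory", "arthistory", "militaryhistory"}),
    ("General Commemoration",        {"otd", "onthisday", "todayweremember", "rememberthem"}),
    ("General History",              {"history", "historyfacts"}),
]

def classify_hashtag(tag):
    """Return the first matching type for a hashtag, or 'Unclassified'."""
    tag = tag.lstrip("#").lower()
    for label, keywords in RULES:
        if tag in keywords:
            return label
    # fall back on a simple pattern check for '#4yearsago'-style commemorations
    if re.fullmatch(r"\d+yearsago", tag):
        return "General Commemoration"
    return "Unclassified"

for tag in ["#history", "#ushistory", "#wwi", "#onthisday", "#4yearsago", "#stalin"]:
    print(tag, "->", classify_hashtag(tag))
```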

The study of digital memorialization, which encompasses the ways in which social and collective memory have shifted since the digital turn, has grown substantially in response to the rising proliferation of memorial content not only on the internet, but also through the increased use of digital formats and tools in heritage institutions, classrooms, and among individual users worldwide.

Medical education

From Wikipedia, the free encyclopedia
Medical student in a laboratory at Monterrey Institute of Technology and Higher Education, Mexico City
Medical student taking blood pressure during an awareness campaign event

Medical education is education related to the practice of being a medical practitioner, including the initial training to become a physician (i.e., medical school and internship) and additional training thereafter (e.g., residency, fellowship, and continuing medical education).

Medical education and training varies considerably across the world. Various teaching methodologies have been used in medical education, which is an active area of educational research.

Medical education is also the subject-didactic academic field of educating medical doctors at all levels, including entry-level, post-graduate, and continuing medical education. Specific requirements such as entrustable professional activities must be met before moving on in stages of medical education.

Common techniques and evidence base

Medical education applies theories of pedagogy specifically in the context of medical education. Medical education has been a leader in the field of evidence-based education, through the development of evidence syntheses such as the Best Evidence Medical Education collection, formed in 1999, which aimed to "move from opinion-based education to evidence-based education". Common evidence-based techniques include the objective structured clinical examination (commonly known as the OSCE) to assess clinical skills, and reliable checklist-based assessments to determine the development of soft skills such as professionalism. However, ineffective instructional methods persist in medical education, such as the matching of teaching to learning styles and Edgar Dale's "Cone of Learning".

Entry-level education

Faculty of Medicine (Comenius University in Bratislava) Slovakia

Entry-level medical education programs are tertiary-level courses undertaken at a medical school. Depending on jurisdiction and university, these may be either undergraduate-entry (most of Europe, Asia, South America and Oceania), or graduate-entry programs (mainly Australia, Philippines and North America). Some jurisdictions and universities provide both undergraduate entry programs and graduate entry programs (Australia, South Korea).

In general, initial training is taken at medical school. Traditionally, initial medical education is divided between preclinical and clinical studies. The former consists of the basic sciences such as anatomy, physiology, biochemistry, pharmacology, pathology, and microbiology. The latter consists of teaching in the various areas of clinical medicine such as internal medicine, pediatrics, obstetrics and gynecology, psychiatry, general practice, and surgery. More recently, there have been significant efforts in the United States to integrate health systems science (HSS) as the "third pillar" of medical education, alongside preclinical and clinical studies. HSS is a foundational platform and framework for the study and understanding of how care is delivered, how health professionals work together to deliver that care, and how the health system can improve patient care and health care delivery.

There has been a proliferation of programmes that combine medical training with research (M.D./Ph.D.) or management programmes (M.D./MBA), although this has been criticised because extended interruption to clinical study has been shown to have a detrimental effect on ultimate clinical knowledge.

The LCME and the "Function and Structure of a Medical School"

The Liaison Committee on Medical Education (LCME) is an educational accreditation committee for medical school programs leading to an MD in the United States and Canada. In order to maintain accreditation, medical schools are required to ensure that students meet a certain set of standards and competencies defined by the accreditation committees. "Function and Structure of a Medical School" is a document published annually by the LCME that defines 12 accreditation standards.

Entrustable Professional Activities for entering residency

The Association of American Medical Colleges (AAMC) has recommended thirteen Entrustable Professional Activities (EPAs) that medical students should be expected to accomplish prior to beginning a residency program. EPAs are based on the integrated core competencies developed over the course of medical school training. Each EPA lists its key feature, associated competencies, and the observed behaviors required for completion of that activity. Students progress through levels of understanding and capability with a decreasing need for direct supervision. Eventually, students should be able to perform each activity independently, requiring assistance only in situations of unique or uncommon complexity.

The list of topics that EPAs address includes:

  1. History and physical exam skills
  2. Differential diagnosis
  3. Diagnostic/screening tests
  4. Orders and prescriptions
  5. Patient encounter documentation
  6. Oral presentations of patient encounters
  7. Clinical questioning/using evidence
  8. Patient handovers/transitions of care
  9. Teamwork
  10. Urgent/Emergency care
  11. Informed consent
  12. Procedures
  13. Safety and improvement

Postgraduate education

Dean's office at the First Faculty of Medicine, Charles University, Prague

Following completion of entry-level training, newly graduated doctors are often required to undertake a period of supervised practice before full registration is granted; this most often lasts one year and may be referred to as an "internship", "provisional registration", or "residency".

Further training in a particular field of medicine may be undertaken. In the U.S., further specialized training completed after residency is referred to as a "fellowship". In some jurisdictions, this is commenced immediately following completion of entry-level training, while other jurisdictions require junior doctors to undertake generalist (unstreamed) training for a number of years before commencing specialization.

Each residency and fellowship program is accredited by the Accreditation Council for Graduate Medical Education (ACGME), a non-profit organization led by physicians with the goal of enhancing educational standards among physicians. The ACGME oversees all MD and DO residency programs in the United States. As of 2019, there were approximately 11,700 ACGME accredited residencies and fellowship programs in 181 specialties and subspecialties.

Education theory itself is becoming an integral part of postgraduate medical training. Formal qualifications in education are also becoming the norm for medical educators, such that there has been a rapid increase in the number of available graduate programs in medical education.

Continuing medical education

In most countries, continuing medical education (CME) courses are required for continued licensing. CME requirements vary by state and by country. In the US, accreditation is overseen by the Accreditation Council for Continuing Medical Education (ACCME). Physicians often attend dedicated lectures, grand rounds, conferences, and performance improvement activities in order to fulfill their requirements. Additionally, physicians are increasingly opting to pursue further graduate-level training in the formal study of medical education as a pathway for continuing professional development.

Online learning

Medical education increasingly uses online teaching, usually within learning management systems (LMSs) or virtual learning environments (VLEs). Additionally, several medical schools have incorporated blended learning, combining video with asynchronous and in-person exercises. A landmark scoping review published in 2018 demonstrated that online teaching modalities are becoming increasingly prevalent in medical education, with high student satisfaction and improvement on knowledge tests. However, the use of evidence-based multimedia design principles in the development of online lectures was seldom reported, despite their known effectiveness in medical student contexts. To add variety to online delivery, serious games, which have previously shown benefit in medical education, can be incorporated to break the monotony of online lectures.

Research into online medical education includes practical applications such as simulated patients and virtual medical records (see also: telehealth). When compared to no intervention, simulation in medical education training is associated with positive effects on knowledge, skills, and behaviors, and moderate effects on patient outcomes. However, data are inconsistent on the effectiveness of asynchronous online learning compared to traditional in-person lectures. Furthermore, studies using modern visualization technology (i.e., virtual and augmented reality) have shown great promise as a means to supplement lesson content in physiological and anatomical education.

Telemedicine/telehealth education

With the advent of telemedicine (also known as telehealth), students learn to interact with and treat patients online, an increasingly important skill in medical education. In training, students and clinicians enter a "virtual patient room" in which they interact and share information with simulated or real patient actors. Students are assessed on professionalism, communication, medical history gathering, physical examination, and the ability to make shared decisions with the patient actor.

Medical education systems by country

Jackson Memorial Hospital in Miami, the primary teaching hospital for the Miller School of Medicine at the University of Miami, in July 2010

In the United Kingdom, a typical medicine course at university is five years, or four years if the student already holds a degree. At some institutions and for some students, it may be six years (including an intercalated BSc, which takes one year, at some point after the pre-clinical studies). All programs culminate in the Bachelor of Medicine and Surgery degree (abbreviated MBChB, MBBS, MBBCh, BM, etc.). This is followed by two clinical foundation years, F1 and F2, similar to internship training. Students register with the UK General Medical Council at the end of F1. At the end of F2, they may pursue further years of study. The system in Australia is very similar, with registration by the Australian Medical Council (AMC).

In the U.S. and Canada, a potential medical student must first complete an undergraduate degree in any subject before applying to a graduate medical school to pursue an M.D. or D.O. program. U.S. medical schools are almost all four-year programs. Some students opt for the research-focused M.D./Ph.D. dual degree program, which is usually completed in 7–10 years. Certain courses are prerequisites for acceptance to medical school, such as general chemistry, organic chemistry, physics, mathematics, biology, English, and laboratory work; the specific requirements vary by school.

In Australia, there are two pathways to a medical degree. Students can choose to take a five- or six-year undergraduate medical degree, the Bachelor of Medicine/Bachelor of Surgery (MBBS or BMed), as a first tertiary degree directly after secondary school graduation, or first complete a bachelor's degree (generally three years, usually in the medical sciences) and then apply for a four-year graduate-entry Bachelor of Medicine/Bachelor of Surgery (MBBS) program.

Norms and values

Along with training individuals in the practice of medicine, medical education influences the norms and values of its participants (patients, families, etc.). This occurs either through explicit training in medical ethics or covertly through a "hidden curriculum": a body of norms and values that students encounter implicitly but that are not formally taught. While formal ethics courses are a requirement at schools such as those accredited by the LCME, gaps between these courses and the "hidden curriculum" throughout medical education are frequently raised as issues contributing to the culture of medicine.

The aims of medical ethics training are to give medical doctors the ability to recognise ethical issues, reason about them morally and legally when making clinical decisions, and be able to interact to obtain the information necessary to do so.

The hidden curriculum may include the use of unprofessional behaviours for efficiency or viewing the academic hierarchy as more important than the patient. In certain institutions, such as those with LCME accreditation, the requirement of "professionalism" may be additionally weaponized against trainees, with complaints about ethics and safety being labelled as unprofessional.

The hidden curriculum has recently been shown to be a cause of reduced empathy among medical students as they progress through medical school.

Integration with health policy

Because medical professionals are stakeholders in the field of health care (i.e., entities integrally involved in the health care system and affected by reform), the practice of medicine (i.e., diagnosing, treating, and monitoring disease) is directly affected by ongoing changes in both national and local health policy and economics.

There is a growing call for health professional training programs not only to adopt more rigorous health policy education and leadership training, but also to apply a broader lens to teaching and implementing health policy, encompassing the health equity and social disparities that largely affect health and patient outcomes. Increased mortality and morbidity from birth to age 75 has been attributed to medical care (insurance access, quality of care), individual behavior (smoking, diet, exercise, drugs, risky behavior), socioeconomic and demographic factors (poverty, inequality, racial disparities, segregation), and the physical environment (housing, education, transportation, urban planning). A country's health care delivery system reflects its "underlying values, tolerances, expectations, and cultures of the societies they serve", and medical professionals stand in a unique position to influence the opinions and policies of patients, health care administrators, and lawmakers.

In order to truly integrate health policy matters into physician and medical education, training should begin as early as possible, ideally during medical school or premedical coursework, to build "foundational knowledge and analytical skills" that are continued during residency and reinforced throughout clinical practice, like any other core skill or competency. Proponents further recommend adopting a national standardized core health policy curriculum for medical schools and residencies in order to introduce a core foundation in this much-needed area, focusing on four main domains of health care: (1) systems and principles (e.g., financing; payment; models of management; information technology; physician workforce), (2) quality and safety (e.g., quality improvement indicators, measures, and outcomes; patient safety), (3) value and equity (e.g., medical economics, medical decision making, comparative effectiveness, health disparities), and (4) politics and law (e.g., history and consequences of major legislation; adverse events, medical errors, and malpractice).

However, the main limitations to implementing these health policy courses include perceived time constraints from scheduling conflicts, the need for an interdisciplinary faculty team, and a lack of research and funding to determine which curriculum design may best suit the program goals. In one pilot program, resistance came from program directors who did not see the relevance of the elective course and who were bound by program training requirements, scheduling conflicts, and inadequate time for non-clinical activities. But in one medical school study, students taught a higher-intensity curriculum (versus a lower-intensity one) were "three to four times as likely to perceive themselves as appropriately trained in components of health care systems", and did not feel that this came at the expense of training in other areas. Additionally, recruiting and retaining a diverse set of multidisciplinary instructors and policy or economic experts with sufficient knowledge and training may be difficult at community-based programs or at schools without health policy or public health departments or graduate programs. Remedies may include online courses, off-site trips to the capitol or to health foundations, or dedicated externships, but these come with their own interactivity, cost, and time constraints. Despite these limitations, several programs in both medical school and residency training have been pioneered.

Lastly, more national support and research will be needed not only to establish these programs but also to evaluate how to both standardize and innovate the curriculum in a way that remains flexible with the changing health care and policy landscape. In the United States, this will involve coordination with the ACGME (Accreditation Council for Graduate Medical Education), a private non-profit organization that sets educational and training standards for U.S. residencies and fellowships and determines their funding and ability to operate.

Medical education as a subject-didactic field

Medical education is also the subject-didactic field of educating medical doctors at all levels, applying theories of pedagogy in the medical context, with its own journals, such as Medical Education. Researchers and practitioners in this field are usually medical doctors or educationalists. Medical curricula vary between medical schools and are constantly evolving in response to the needs of medical students as well as the resources available. Medical schools have been documented to use various forms of problem-based learning, team-based learning, and simulation. The Liaison Committee on Medical Education (LCME) publishes standard guidelines regarding the goals of medical education, including curriculum design, implementation, and evaluation.

Air National Guard Base training in medical simulation

Objective structured clinical examinations (OSCEs) are widely used to assess health science students' clinical abilities in a controlled setting. Although OSCEs are used in medical education programs throughout the world, the assessment methodology may vary between programs, and attempts have therefore been made to standardize the assessment.

Cadaver laboratory

Medical student describes anatomical landmarks of a donated human cadaver.

Medical schools and surgical residency programs may use cadavers to identify anatomy, study pathology, perform procedures, correlate radiology findings, and identify causes of death. With the integration of technology, the effectiveness of traditional cadaver dissection in medical education has been debated, but it remains a large component of medical curricula around the world. Didactic courses in cadaver dissection are commonly offered by certified anatomists, scientists, and physicians with a background in the subject.

Medical curriculum and evidence-based medical education journals

Medical curricula vary widely among medical schools and residency programs, but generally follow an evidence-based medical education (EBME) approach. These evidence-based approaches are published in medical journals. The list of peer-reviewed medical education journals includes, but is not limited to:

Graduate Medical Education and Continuing Medical Education focused journals:

  • Journal of Continuing Education in the Health Professions
  • Journal of Graduate Medical Education

This is not a complete list of medical education journals. Each journal in this list has its own impact factor, the mean number of citations per recent article, which indicates how often its articles are used in scientific research and study.
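
As a rough illustration of the arithmetic behind the standard two-year impact factor, the sketch below divides the citations a journal received in one year (to items from the two preceding years) by the number of citable items it published in those two years. The impact_factor function and the numbers in the example are made up for illustration.

```python
# Minimal sketch of the standard two-year journal impact factor:
# citations in a given year to items from the two preceding years,
# divided by the number of citable items published in those two years.
def impact_factor(citations_to_prev_two_years: int, citable_items_prev_two_years: int) -> float:
    """Mean citations per recent article, the usual definition of the impact factor."""
    return citations_to_prev_two_years / citable_items_prev_two_years

if __name__ == "__main__":
    # Hypothetical journal: 450 citations in 2023 to articles from 2021-2022,
    # which together contained 150 citable items -> impact factor of 3.0.
    print(impact_factor(450, 150))
```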

Molten-salt reactor

From Wikipedia, the free encyclopedia https://en.wikipedia.org/wiki/Molten-salt_reactor ...