
Sunday, December 8, 2024

Superhabitable world

From Wikipedia, the free encyclopedia
Artist's impression of one possible appearance of a superhabitable planet. The reddish hue is vegetation.

A superhabitable world is a hypothetical type of planet or moon that is better suited than Earth for the emergence and evolution of life. The concept was introduced in a 2014 paper by René Heller and John Armstrong, in which they criticized the language used in the search for habitable exoplanets and proposed clarifications. The authors argued that knowing whether a world is located within the star's habitable zone is insufficient to determine its habitability, that the principle of mediocrity cannot adequately explain why Earth should represent the archetypal habitable world, and that the prevailing model of characterization was geocentric or anthropocentric in nature. Instead, they proposed a biocentric approach that prioritized astrophysical characteristics affecting the abundance and variety of life on a world's surface.

If a world possesses more diverse flora and fauna than Earth does, it would empirically show that its natural environment is more hospitable to life. To identify such a world, one should consider its geological processes, formation age, atmospheric composition, ocean coverage, and the type of star that it orbits. In other words, a superhabitable world would likely be larger, warmer, and older than Earth, with an evenly distributed ocean, and orbiting a K-type main-sequence star. In 2020, astronomers, building on Heller and Armstrong's hypothesis, identified 24 potentially superhabitable exoplanets based on measured characteristics that fit these criteria.

Stellar characteristics

Artist's impression of Kepler-62f orbiting the orange dwarf star Kepler-62.

A star's characteristics are a key consideration for planetary habitability. The types of stars generally considered to be potential hosts for habitable worlds include F, G, K, and M-type main-sequence stars. The most massive stars—O, B, and A-type—have average lifespans on the main sequence that are considered too short for complex life to develop, ranging from a few hundred million years for A-type stars to only a few million years for O-type stars. Thus, F-type stars are described as the "hot limit" for stars that can potentially support life, as their lifespan of 2 to 4 billion years would be sufficient for habitability. However, F-type stars emit large amounts of ultraviolet radiation, which, in the absence of a protective ozone layer, could disrupt nucleic acid-based life on a planet's surface.

On the opposite end, the less massive red dwarfs, which generally include M-type stars, are by far the most common and long-lived stars in the universe, but ongoing research points to serious challenges to their ability to support life. Due to the low luminosity of red dwarfs, the circumstellar habitable zone (HZ) is in very close proximity to the star, which causes any planet there to become tidally locked. The primary concern for researchers, however, is the star's propensity for frequent outbreaks of high-energy radiation, especially early in its life, that could strip away a planet's atmosphere. At the same time, red dwarfs do not emit enough quiescent UV radiation (i.e., UV radiation emitted during inactive periods) to support biological processes like photosynthesis.

Ruling out both extremes, G and K-type stars—yellow and orange dwarfs, respectively—have been the primary objects of interest for astronomers because they are seen to provide the best life-supporting characteristics. However, Heller and Armstrong argue that a limiting factor to the habitability of yellow dwarfs is their higher emission of quiescent UV radiation compared to cooler orange dwarfs. For this reason, along with the shorter lifespan of yellow dwarfs, the authors conclude that orange dwarfs offer the best conditions for a superhabitable world. Also nicknamed "Goldilocks stars," orange dwarfs emit levels of ultraviolet radiation low enough to eliminate the need for a protective ozone layer, but just high enough to contribute to necessary biological processes. Moreover, the long average lifespan of an orange dwarf (18 to 34 billion years, compared to 10 billion for the Sun) provides stable habitable zones that do not move much over the star's lifetime.

Planetary characteristics

Age

The earliest stars in the universe were metal-free, a trait initially believed to prevent the formation of rocky planets.

The age of a superhabitable world should be greater than Earth's age (~4.5 billion years). This is based on the belief that as a planet ages, it experiences increasing levels of biodiversity, since native species have had more time to evolve, adapt, and stabilize the environmental conditions suitable for life. As for the maximum age, research points to rocky planets existing as early as 12 billion years ago.

It was initially believed that since older stars contained little to no heavy elements (i.e., had low metallicity), they were incapable of forming rocky planets. Early exoplanet discoveries supported this hypothesis, as they were mostly gas giants orbiting in close proximity to stars with a high abundance of heavy elements. However, in 2012, the Kepler space telescope challenged this assumption when it discovered many rocky exoplanets orbiting stars with relatively low metallicity. These findings suggested that the first Earth-sized planets likely appeared much earlier in the universe's history, around 12 billion years ago.

Orbit and rotation

Habitable zone (HZ) positions and average surface temperatures of some of the exoplanets most similar to Earth.

During the main sequence phase, a star burns hydrogen in its core, producing energy through nuclear fusion. Over time, as the hydrogen fuel is consumed, the star's core contracts and heats up, increasing the rate of fusion. The star therefore gradually becomes more luminous, and as its energy output grows, the habitable zone (HZ) is pushed outward; the HZ is not static but slowly migrates over the star's life. This means that any planet will spend only a limited time within the HZ, known as its "habitable zone lifetime." Studies suggest that Earth's orbit lies near the inner edge of the Solar System's HZ, which could harm its long-term habitability as it nears the end of its HZ lifetime.
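Because stellar flux falls off with the square of distance, an HZ boundary defined by a fixed flux level scales with the square root of luminosity. A minimal sketch of that scaling (the 0.95 AU boundary and the 10% brightening below are illustrative numbers, not measured values):

```python
import math

def hz_boundary_au(d0_au: float, luminosity_ratio: float) -> float:
    """Distance of an HZ boundary after the star's luminosity changes by
    `luminosity_ratio`, given the boundary sat at `d0_au` beforehand.
    Assumes a simple inverse-square flux criterion and an unchanged
    stellar spectrum."""
    return d0_au * math.sqrt(luminosity_ratio)

# A hypothetical inner HZ edge at 0.95 AU, after a 10% brightening:
print(hz_boundary_au(0.95, 1.10))  # ≈ 0.996 AU

# Quadrupled luminosity doubles the boundary distance:
print(hz_boundary_au(1.0, 4.0))  # 2.0 AU
```

The same relation is why the HZ of a slowly brightening main-sequence star drifts outward over its lifetime, limiting how long any one orbit stays habitable.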

Ideally, the orbit of a superhabitable world should be further out and closer to the center of the HZ relative to Earth's orbit, but knowing whether a world is in this region is insufficient on its own to determine habitability. Not all rocky planets in the HZ may be habitable, while tidal heating can render planets or moons habitable beyond this region. For example, Jupiter's moon Europa is well beyond the outer limits of the Solar System's HZ, yet as a result of its orbital interactions with the other Galilean moons, it is believed to have a subsurface ocean of liquid water beneath its icy surface.

There is no consensus on the optimal rotation rate for habitability, but a planet's rotation can affect the presence of geologically-active plate tectonics and the generation of a global magnetic field.

According to a 2023 paper by Jonathan Jernigan and colleagues, marine biological activity increases on planets with increasing obliquity and eccentricity. The authors suggest that planets with a high obliquity and/or eccentricity may be superhabitable, and that scientists should be keen to look for biosignatures on exoplanets with these orbital characteristics.

Mass and size

Kepler-62e, second from the left, has a radius of 1.6 R🜨. Earth is on the far right; shown to scale.

Assuming that a greater surface area would provide greater biodiversity, the size of a superhabitable world should generally be greater than 1 R🜨, with the condition that its mass is not arbitrarily large. Studies of the mass-radius relationship indicate that there is a transition point between rocky planets and gaseous planets (i.e., mini-Neptunes) that occurs around 2 M🜨 or 1.7 R🜨. Another study argues that there is a natural radius limit, set at 1.6 R🜨, below which nearly all planets are terrestrial, composed primarily of rock-iron-water mixtures.

Heller and Armstrong argue that the optimal mass and radius of a superhabitable world can be determined by geological activity; the more massive a planetary body, the longer it will continuously generate internal heat—a major contributing factor to plate tectonics. Too much mass, however, can slow plate tectonics by increasing the pressure in the mantle. It is believed that plate tectonics peaks in bodies between 1 and 5 M🜨, and from this perspective, a planet can be considered superhabitable up to around 2 M🜨. Assuming such a planet has a density similar to Earth's, its radius should be between 1.2 and 1.3 R🜨.
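The quoted radius follows from a constant-density approximation: mass scales with volume, so radius scales with the cube root of mass. A quick sketch (real 2 M🜨 planets are somewhat compressed by their own gravity, which is why the quoted range dips slightly below the constant-density value):

```python
# Constant-density scaling: M ∝ R^3, so R/R_earth = (M/M_earth)**(1/3).
# Ignores gravitational compression, which makes massive rocky planets
# slightly denser than Earth.
def radius_at_earth_density(mass_in_earth_masses: float) -> float:
    """Radius in Earth radii for a planet of Earth-like bulk density."""
    return mass_in_earth_masses ** (1.0 / 3.0)

print(round(radius_at_earth_density(2.0), 2))  # 2 M_earth → 1.26 R_earth
```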

Geology

Volcanic activity from plate tectonics can release greenhouse gases like carbon dioxide into a planet's atmosphere, leading to climate warming. Pictured: Iceland's Fagradalsfjall volcano.

An important geological process is plate tectonics, which appears to be common in terrestrial planets with a significant rotation speed and an internal heat source. If large bodies of water are present on a planet, plate tectonics can maintain high levels of carbon dioxide (CO2) in its atmosphere and increase the global surface temperature through the greenhouse effect. However, if tectonic activity is not significant enough to increase temperatures above the freezing point of water, the planet could experience a permanent ice age, unless the process is offset by another energy source like tidal heating or stellar irradiation. On the other hand, if the effects of any of these processes are too strong, the amount of greenhouse gases in the atmosphere could cause a runaway greenhouse effect by trapping heat and preventing adequate cooling.

The presence of a magnetic field is important for the long-term survivability of life on the surface of a planet or moon. A sufficiently strong magnetic field effectively shields a world's surface and atmosphere against ionizing radiation emanating from the interstellar medium and its host star. A planet can generate an intrinsic magnetic field through a dynamo that involves an internal heat source, an electrically conductive fluid like molten iron, and a significant rotation speed, while a moon could be extrinsically protected by its host planet's magnetic field. Less massive bodies and those that are tidally locked are likely to have weak to non-existent magnetic fields, which over time can result in the loss of a significant portion of their atmospheres through hydrodynamic escape, leaving them desert planets. If a planet's rotation is too slow, as with Venus, it cannot generate an Earth-like magnetic field. A more massive planet could overcome this problem by hosting multiple moons, whose combined gravitational effects can boost the planet's magnetic field.

Surface features

Artistic impression of a possible Earth analog, Kepler-186f. Some superhabitable planets could have a similar appearance and may not differ significantly from Earth.

The appearance of a superhabitable world should resemble the conditions found in Earth's tropical climates. Due to its denser atmosphere and smaller temperature variation across its surface, such a world would lack major ice sheets and have a higher concentration of clouds, while plant life could cover more of the planet's surface and be visible from space.

Considering the difference in the peak wavelength of visible light emitted by K-type stars and the lower stellar flux received by the planet, surface vegetation may exhibit colors different from the typical green found on Earth. Instead, vegetation on these worlds could have a red, orange, or even purple appearance.

An ocean that covers a large portion of a world's surface, with fragmented continents and archipelagos, could provide a stable environment across its surface. In addition, the greater surface gravity of a superhabitable world could reduce the average ocean depth and create shallow ocean basins, providing an optimal environment for marine life to thrive. For example, marine ecosystems in the shallow areas of Earth's oceans and seas, given the amount of light and heat they receive, exhibit greater biodiversity and are generally seen as more hospitable to aquatic species. This has led researchers to speculate that shallow-water environments on exoplanets should be similarly suitable for life.

Climate

The climate of a warmer and wetter terrestrial exoplanet may resemble that of Earth's tropical regions. Pictured: a mangrove in Cambodia.

In general, the climate of a superhabitable planet would be warm, moist, and homogeneous, allowing life to extend across the surface without presenting large population differences. These characteristics are in contrast to those found on Earth, which has more variable and inhospitable regions that include frigid tundra and dry deserts. Deserts on superhabitable planets would be more limited in area and would likely support habitat-rich coastal environments.

The optimum surface temperature for Earth-like life is unknown, although on Earth organism diversity appears to have been greater in warmer periods. It is therefore possible that exoplanets with slightly higher average temperatures than Earth's are more suitable for life. The denser atmosphere of a superhabitable planet would naturally provide a greater average temperature and less variability in the global climate. Ideally, the temperature should reach the optimal level for plant life, around 25 °C (77 °F). In addition, a large, well-distributed ocean could regulate the planet's surface temperature in the way Earth's ocean currents do, helping it maintain a moderate temperature within the habitable zone.

There are no solid arguments to establish whether Earth's atmosphere has the optimal composition, but a relatively high level of atmospheric oxygen (O2) is required to meet the high-energy demands of complex life. It is therefore hypothesized that an abundance of oxygen in the atmosphere is essential for complex life on other worlds.

Emotivism

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Emotivism

Emotivism is a meta-ethical view that claims that ethical sentences do not express propositions but emotional attitudes. Hence, it is colloquially known as the hurrah/boo theory. Influenced by the growth of analytic philosophy and logical positivism in the 20th century, the theory was stated vividly by A. J. Ayer in his 1936 book Language, Truth and Logic, but its development owes more to C. L. Stevenson.

Emotivism can be considered a form of non-cognitivism or expressivism. It stands in opposition to other forms of non-cognitivism (such as quasi-realism and universal prescriptivism), as well as to all forms of cognitivism (including both moral realism and ethical subjectivism).

In the 1950s, emotivism appeared in a modified form in the universal prescriptivism of R. M. Hare.

History

David Hume's statements on ethics foreshadowed those of 20th century emotivists.

Emotivism reached prominence in the early 20th century, but it was born centuries earlier. In 1710, George Berkeley wrote that language in general often serves to inspire feelings as well as communicate ideas. Decades later, David Hume espoused ideas similar to Stevenson's later ones. In his 1751 book An Enquiry Concerning the Principles of Morals, Hume considered morality not to be related to fact but "determined by sentiment":

In moral deliberations we must be acquainted beforehand with all the objects, and all their relations to each other; and from a comparison of the whole, fix our choice or approbation. … While we are ignorant whether a man were aggressor or not, how can we determine whether the person who killed him be criminal or innocent? But after every circumstance, every relation is known, the understanding has no further room to operate, nor any object on which it could employ itself. The approbation or blame which then ensues, cannot be the work of the judgement, but of the heart; and is not a speculative proposition or affirmation, but an active feeling or sentiment.

G. E. Moore published his Principia Ethica in 1903 and argued that the attempts of ethical naturalists to translate ethical terms (like good and bad) into non-ethical ones (like pleasing and displeasing) committed the "naturalistic fallacy". Moore was a cognitivist, but his case against ethical naturalism steered other philosophers toward noncognitivism, particularly emotivism.

The emergence of logical positivism and its verifiability criterion of meaning early in the 20th century led some philosophers to conclude that ethical statements, being incapable of empirical verification, were cognitively meaningless. This criterion was fundamental to A. J. Ayer's defense of positivism in Language, Truth and Logic, which contains his statement of emotivism. However, positivism is not essential to emotivism itself, perhaps not even in Ayer's form, and some positivists in the Vienna Circle, which had great influence on Ayer, held non-emotivist views.

R. M. Hare unfolded his ethical theory of universal prescriptivism in 1952's The Language of Morals, intending to defend the importance of rational moral argumentation against the "propaganda" he saw encouraged by Stevenson, who thought moral argumentation was sometimes psychological and not rational. But Hare's disagreement was not universal, and the similarities between his noncognitive theory and the emotive one — especially his claim, and Stevenson's, that moral judgments contain commands and are thus not purely descriptive — caused some to regard him as an emotivist, a classification he denied:

I did, and do, follow the emotivists in their rejection of descriptivism. But I was never an emotivist, though I have often been called one. But unlike most of their opponents I saw that it was their irrationalism, not their non-descriptivism, which was mistaken. So my main task was to find a rationalist kind of non-descriptivism, and this led me to establish that imperatives, the simplest kinds of prescriptions, could be subject to logical constraints while not [being] descriptive.

Proponents

Influential statements of emotivism were made by C. K. Ogden and I. A. Richards in their 1923 book on language, The Meaning of Meaning, and by W. H. F. Barnes and A. Duncan-Jones in independent works on ethics in 1934. However, it is the later works of Ayer and especially Stevenson that are the most developed and discussed defenses of the theory.

A. J. Ayer

A. J. Ayer's version of emotivism is given in chapter six, "Critique of Ethics and Theology", of Language, Truth and Logic. In that chapter, Ayer divides "the ordinary system of ethics" into four classes:

  1. "Propositions that express definitions of ethical terms, or judgements about the legitimacy or possibility of certain definitions"
  2. "Propositions describing the phenomena of moral experience, and their causes"
  3. "Exhortations to moral virtue"
  4. "Actual ethical judgments"

He focuses on propositions of the first class—moral judgments—saying that those of the second class belong to science, those of the third are mere commands, and those of the fourth (which are considered in normative ethics as opposed to meta-ethics) are too concrete for ethical philosophy. While class three statements were irrelevant to Ayer's brand of emotivism, they would later play a significant role in Stevenson's.

Ayer argues that moral judgments cannot be translated into non-ethical, empirical terms and thus cannot be verified; in this he agrees with ethical intuitionists. But he differs from intuitionists by discarding appeals to intuition as "worthless" for determining moral truths, since the intuition of one person often contradicts that of another. Instead, Ayer concludes that ethical concepts are "mere pseudo-concepts":

The presence of an ethical symbol in a proposition adds nothing to its factual content. Thus if I say to someone, "You acted wrongly in stealing that money," I am not stating anything more than if I had simply said, "You stole that money." In adding that this action is wrong I am not making any further statement about it. I am simply evincing my moral disapproval of it. It is as if I had said, "You stole that money," in a peculiar tone of horror, or written it with the addition of some special exclamation marks. … If now I generalise my previous statement and say, "Stealing money is wrong," I produce a sentence that has no factual meaning—that is, expresses no proposition that can be either true or false. … I am merely expressing certain moral sentiments.

Ayer agrees with subjectivists in saying that ethical statements are necessarily related to individual attitudes, but he says they lack truth value because they cannot be properly understood as propositions about those attitudes; Ayer thinks ethical sentences are expressions, not assertions, of approval. While an assertion of approval may always be accompanied by an expression of approval, expressions can be made without making assertions; Ayer's example is boredom, which can be expressed through the stated assertion "I am bored" or through non-assertions including tone of voice, body language, and various other verbal statements. He sees ethical statements as expressions of the latter sort, so the phrase "Theft is wrong" is a non-propositional sentence that is an expression of disapproval but is not equivalent to the proposition "I disapprove of theft".

Having argued that his theory of ethics is noncognitive and not subjective, he accepts that his position and subjectivism are equally confronted by G. E. Moore's argument that ethical disputes are clearly genuine disputes and not just expressions of contrary feelings. Ayer's defense is that all ethical disputes are about facts regarding the proper application of a value system to a specific case, not about the value systems themselves, because any dispute about values can only be resolved by judging that one value system is superior to another, and this judgment itself presupposes a shared value system. If Moore is wrong in saying that there are actual disagreements of value, we are left with the claim that there are actual disagreements of fact, and Ayer accepts this without hesitation:

If our opponent concurs with us in expressing moral disapproval of a given type t, then we may get him to condemn a particular action A, by bringing forward arguments to show that A is of type t. For the question whether A does or does not belong to that type is a plain question of fact.

C. L. Stevenson

Stevenson's work has been seen both as an elaboration upon Ayer's views and as a representation of one of "two broad types of ethical emotivism." An analytic philosopher, Stevenson suggested in his 1937 essay "The Emotive Meaning of Ethical Terms" that any ethical theory should explain three things: that intelligent disagreement can occur over moral questions, that moral terms like good are "magnetic" in encouraging action, and that the scientific method is insufficient for verifying moral claims. Stevenson's own theory was fully developed in his 1944 book Ethics and Language. In it, he agrees with Ayer that ethical sentences express the speaker's feelings, but he adds that they also have an imperative component intended to change the listener's feelings and that this component is of greater importance. Where Ayer spoke of values, or fundamental psychological inclinations, Stevenson speaks of attitudes, and where Ayer spoke of disagreement of fact, or rational disputes over the application of certain values to a particular case, Stevenson speaks of differences in belief; the concepts are the same. Terminology aside, Stevenson interprets ethical statements according to two patterns of analysis.

First pattern analysis

Under his first pattern of analysis an ethical statement has two parts: a declaration of the speaker's attitude and an imperative to mirror it, so "'This is good' means I approve of this; do so as well." The first half of the sentence is a proposition, but the imperative half is not, so Stevenson's translation of an ethical sentence remains a noncognitive one.

Imperatives cannot be proved, but they can still be supported so that the listener understands that they are not wholly arbitrary:

If told to close the door, one may ask "Why?" and receive some such reason as "It is too drafty," or "The noise is distracting." … These reasons cannot be called "proofs" in any but a dangerously extended sense, nor are they demonstratively or inductively related to an imperative; but they manifestly do support an imperative. They "back it up," or "establish it," or "base it on concrete references to fact."

The purpose of these supports is to make the listener understand the consequences of the action they are being commanded to do. Once they understand the command's consequences, they can determine whether or not obedience to the command will have desirable results.

The imperative is used to alter the hearer's attitudes or actions. … The supporting reason then describes the situation the imperative seeks to alter, or the new situation the imperative seeks to bring about; and if these facts disclose that the new situation will satisfy a preponderance of the hearer's desires, he will hesitate to obey no longer. More generally, reasons support imperatives by altering such beliefs as may in turn alter an unwillingness to obey.

Second pattern analysis

Stevenson's second pattern of analysis is used for statements about types of actions, not specific actions. Under this pattern,

'This is good' has the meaning of 'This has qualities or relations X, Y, Z … ,' except that 'good' has as well a laudatory meaning, which permits it to express the speaker's approval, and tends to evoke the approval of the hearer.

In second-pattern analysis, rather than judge an action directly, the speaker is evaluating it according to a general principle. For instance, someone who says "Murder is wrong" might mean "Murder decreases happiness overall"; this is a second-pattern statement that leads to a first-pattern one: "I disapprove of anything that decreases happiness overall. Do so as well."

Methods of argumentation

For Stevenson, moral disagreements may arise from different fundamental attitudes, different moral beliefs about specific cases, or both. The methods of moral argumentation he proposed have been divided into three groups, known as logical, rational psychological and nonrational psychological forms of argumentation.

Logical methods involve efforts to show inconsistencies between a person's fundamental attitudes and their particular moral beliefs. For example, someone who says "Edward is a good person" who has previously said "Edward is a thief" and "No thieves are good people" is guilty of inconsistency until he retracts one of his statements. Similarly, a person who says "Lying is always wrong" might consider lies in some situations to be morally permissible, and if examples of these situations can be given, his view can be shown to be logically inconsistent.

Rational psychological methods examine facts that relate fundamental attitudes to particular moral beliefs; the goal is not to show that someone has been inconsistent, as with logical methods, but only that they are wrong about the facts that connect their attitudes to their beliefs. To modify the former example, consider the person who holds that all thieves are bad people. If she sees Edward pocket a wallet found in a public place, she may conclude that he is a thief, and there would be no inconsistency between her attitude (that thieves are bad people) and her belief (that Edward is a bad person because he is a thief). However, it may be that Edward recognized the wallet as belonging to a friend, to whom he promptly returned it. Such a revelation would likely change the observer's belief about Edward, and even if it did not, the attempt to reveal such facts would count as a rational psychological form of moral argumentation.

Non-rational psychological methods revolve around language with psychological influence but no necessarily logical connection to the listener's attitudes. Stevenson called the primary such method "'persuasive,' in a somewhat broadened sense", and wrote:

[Persuasion] depends on the sheer, direct emotional impact of words—on emotive meaning, rhetorical cadence, apt metaphor, stentorian, stimulating, or pleading tones of voice, dramatic gestures, care in establishing rapport with the hearer or audience, and so on. … A redirection of the hearer's attitudes is sought not by the mediating step of altering his beliefs, but by exhortation, whether obvious or subtle, crude or refined.

Persuasion may involve the use of particular emotion-laden words, like "democracy" or "dictator", or hypothetical questions like "What if everyone thought the way you do?" or "How would you feel if you were in their shoes?"

Criticism

Utilitarian philosopher Richard Brandt offered several criticisms of emotivism in his 1959 book Ethical Theory. His first is that "ethical utterances are not obviously the kind of thing the emotive theory says they are, and prima facie, at least, should be viewed as statements." He thinks that emotivism cannot explain why most people, historically speaking, have considered ethical sentences to be "fact-stating" and not just emotive. Furthermore, he argues that people who change their moral views see their prior views as mistaken, not just different, and that this does not make sense if their attitudes were all that changed:

Suppose, for instance, as a child a person disliked eating peas. When he recalls this as an adult he is amused and notes how preferences change with age. He does not say, however, that his former attitude was mistaken. If, on the other hand, he remembers regarding irreligion or divorce as wicked, and now does not, he regards his former view as erroneous and unfounded. … Ethical statements do not look like the kind of thing the emotive theory says they are.

James Urmson's 1968 book The Emotive Theory of Ethics also disagreed with many of Stevenson's points in Ethics and Language, "a work of great value" with "a few serious mistakes [that] led Stevenson consistently to distort his otherwise valuable insights".

Magnetic influence

Brandt criticized what he termed "the 'magnetic influence' thesis", the idea of Stevenson that ethical statements are meant to influence the listener's attitudes. Brandt contends that most ethical statements, including judgments of people who are not within listening range, are not made with the intention to alter the attitudes of others. Twenty years earlier, Sir William David Ross offered much the same criticism in his book Foundations of Ethics. Ross suggests that the emotivist theory seems to be coherent only when dealing with simple linguistic acts, such as recommending, commanding, or passing judgement on something happening at the same point of time as the utterance.

… There is no doubt that such words as 'you ought to do so-and-so' may be used as one's means of so inducing a person to behave a certain way. But if we are to do justice to the meaning of 'right' or 'ought', we must take account also of such modes of speech as 'he ought to do so-and-so', 'you ought to have done so-and-so', 'if this and that were the case, you ought to have done so-and-so', 'if this and that were the case, you ought to do so-and-so', 'I ought to do so-and-so.' Where the judgement of obligation has reference either to a third person, not the person addressed, or to the past, or to an unfulfilled past condition, or to a future treated as merely possible, or to the speaker himself, there is no plausibility in describing the judgement as a command.

According to this view, it would make little sense to translate a statement such as "Galileo should not have been forced to recant on heliocentrism" into a command, imperative, or recommendation; to do so might require a radical change in the meaning of these ethical statements. Under this criticism, it would appear as if emotivist and prescriptivist theories are only capable of converting a relatively small subset of all ethical claims into imperatives.

Like Ross and Brandt, Urmson disagrees with Stevenson's "causal theory" of emotive meaning—the theory that moral statements only have emotive meaning when they are made to change a listener's attitude—saying that it is incorrect in explaining "evaluative force in purely causal terms". This is Urmson's fundamental criticism, and he suggests that Stevenson would have made a stronger case by explaining emotive meaning in terms of "commending and recommending attitudes", not in terms of "the power to evoke attitudes".

Stevenson's Ethics and Language, written after Ross's book but before Brandt's and Urmson's, states that emotive terms are "not always used for purposes of exhortation." For example, in the sentence "Slavery was good in Ancient Rome", Stevenson thinks one is speaking of past attitudes in an "almost purely descriptive" sense. And in some discussions of current attitudes, "agreement in attitude can be taken for granted," so a judgment like "He was wrong to kill them" might describe one's attitudes yet be "emotively inactive", with no real emotive (or imperative) meaning. Stevenson is doubtful that sentences in such contexts qualify as normative ethical sentences, maintaining that "for the contexts that are most typical of normative ethics, the ethical terms have a function that is both emotive and descriptive."

Philippa Foot's moral realism

Philippa Foot adopts a moral realist position, criticizing the idea that when evaluation is superposed on fact there has been a "committal in a new dimension." She introduces, by analogy, the practical implications of using the word injury. Not just anything counts as an injury; there must be some impairment. When we suppose a man wants the things that injury prevents him from obtaining, have we not fallen into the old naturalist fallacy?

It may seem that the only way to make a necessary connexion between 'injury' and the things that are to be avoided, is to say that it is only used in an 'action-guiding sense' when applied to something the speaker intends to avoid. But we should look carefully at the crucial move in that argument, and query the suggestion that someone might happen not to want anything for which he would need the use of hands or eyes. Hands and eyes, like ears and legs, play a part in so many operations that a man could only be said not to need them if he had no wants at all.

Foot argues that the virtues, like hands and eyes in the analogy, play so large a part in so many operations that it is implausible to suppose that a committal in a non-naturalist dimension is necessary to demonstrate their goodness.

Philosophers who have supposed that actual action was required if 'good' were to be used in a sincere evaluation have got into difficulties over weakness of will, and they should surely agree that enough has been done if we can show that any man has reason to aim at virtue and avoid vice. But is this impossibly difficult if we consider the kinds of things that count as virtue and vice? Consider, for instance, the cardinal virtues, prudence, temperance, courage and justice. Obviously any man needs prudence, but does he not also need to resist the temptation of pleasure when there is harm involved? And how could it be argued that he would never need to face what was fearful for the sake of some good? It is not obvious what someone would mean if he said that temperance or courage were not good qualities, and this not because of the 'praising' sense of these words, but because of the things that courage and temperance are.

Standard using and standard setting

As an offshoot of his fundamental criticism of Stevenson's magnetic influence thesis, Urmson wrote that ethical statements had two functions—"standard using", the application of accepted values to a particular case, and "standard setting", the act of proposing certain values as those that should be accepted—and that Stevenson confused them. According to Urmson, Stevenson's "I approve of this; do so as well" is a standard-setting statement, yet most moral statements are actually standard-using ones, so Stevenson's explanation of ethical sentences is unsatisfactory. Colin Wilks has responded that Stevenson's distinction between first-order and second-order statements resolves this problem: a person who says "Sharing is good" may be making a second-order statement like "Sharing is approved of by the community", the sort of standard-using statement Urmson says is most typical of moral discourse. At the same time, their statement can be reduced to a first-order, standard-setting sentence: "I approve of whatever is approved of by the community; do so as well."

Atlantic history

From Wikipedia, the free encyclopedia
The Atlantic Ocean which gives its name to the so-called Atlantic World of the early modern period

Atlantic history is a specialty field in history that studies the Atlantic World in the early modern period. The Atlantic World was created by the contact between Europeans and the Americas, and Atlantic History is the study of that world. It is premised on the idea that, following the rise of sustained European contact with the New World in the 16th century, the continents that bordered the Atlantic Ocean—the Americas, Europe, and Africa—constituted a regional system or common sphere of economic and cultural exchange that can be studied as a totality.

Its theme is the complex interaction between Europe (especially Great Britain, France, Spain, and Portugal) and its colonies in the Americas. It encompasses a wide range of demographic, social, economic, political, legal, military, intellectual and religious topics treated in comparative fashion by looking at both sides of the Atlantic. Religious revivals in Britain and Germany are studied, as is the First Great Awakening in the Thirteen Colonies. Emigration, race and slavery are also important topics.

Researchers of Atlantic history typically focus on the interconnections and exchanges between these regions and the civilizations they harbored. In particular, they argue that the boundaries between nation states which traditionally determined the limits of older historiography should not be applied to such transnational phenomena as slavery, colonialism, missionary activity and economic expansion. Environmental history and the study of historical demography also play an important role, as many key questions in the field revolve around the ecological and epidemiological impact of the Columbian exchange.

Robert R. Palmer, an American historian of the French Revolution, pioneered the concept in the 1950s with a wide-ranging comparative history of how numerous nations experienced what he called The Age of the Democratic Revolution: A Political History of Europe and America, 1760–1800 (1959 and 1964). In this monumental work, he did not compare the French and the American Revolutions as successful models against other types of revolutions. Instead, he developed a wider understanding of the changes that were led by revolutionary processes across Western civilization. Such work followed in the footsteps of C. L. R. James who, in the 1930s, connected the French and Haitian Revolutions. Since the 1980s, Atlantic history has emerged as an increasingly popular alternative to the older discipline of imperial history, although it could be argued that the field is simply a refinement and reorientation of traditional historiography dealing with the interaction between early modern Europeans and native peoples in the Atlantic sphere. The organization of Atlantic history as a recognized area of historiography began in the 1980s under the impetus of American historians Bernard Bailyn of Harvard University and Jack P. Greene of Johns Hopkins University, among others. The post-World War II integration of the European Union and the continuing importance of NATO played an indirect role in stimulating interest throughout the 1990s.

Development of the field

Bernard Bailyn's Seminar on the History of the Atlantic World promoted social and demographic studies, especially regarding demographic flows of population into colonial America. As a leading advocate of the history of the Atlantic world, Bailyn organized "The International Seminar on the History of the Atlantic World, 1500-1825" at Harvard University, one of the first and most important academic initiatives to launch the Atlantic perspective. From 1995 to 2010 the seminar sponsored an annual meeting of young historians engaged in creative research on aspects of Atlantic history; in all, 366 young historians came through the program, 202 from universities in the US and 164 from universities abroad. Its purpose was to advance the scholarship of young historians of many nations interested in the common, comparative, and interactive aspects of the lives of the peoples of the Atlantic basin, mainly in the early modern period, and thereby to contribute to the study of this transnational historical subject.

Bailyn's Atlantic History: Concepts and Contours (2005) explores the borders and contents of the emerging field, which emphasizes cosmopolitan and multicultural elements that have tended to be neglected or considered in isolation by traditional historiography dealing with the Americas. Bailyn's reflections stem in part from his seminar at Harvard since the mid-1980s.

Other important scholars include Jack Greene, who directed a program in Atlantic history at Johns Hopkins from 1972 to 1992 that has since expanded to global concerns, and Karen Ordahl Kupperman, who established the Atlantic Workshop at New York University in 1997.

Other scholars in the field include Ida Altman, Kenneth J. Andrien, David Armitage, Trevor Burnard, Jorge Canizares-Esguerra, Nicholas Canny, Philip D. Curtin, Laurent Dubois, J.H. Elliott, David Eltis, Alison Games, Eliga H. Gould, Anthony Grafton, Joseph C. Miller, Philip D. Morgan, Anthony Pagden, Jennifer L. Anderson, John Thornton, James D. Tracy, Carla G. Pestana, Isaac Land, Richard S. Dunn, and Ned C. Landsman.

Perspectives

Alison Games (2006) explores the convergence of the multiple strands of scholarly interest that have generated the new field of Atlantic history, which takes as its geographic unit of analysis the Atlantic Ocean and the four continents that surround it. She argues Atlantic history is best approached as a slice of world history. The Atlantic, moreover, is a region that has logic as a unit of historical analysis only within a limited chronology. An Atlantic perspective can help historians understand changes within the region that a more limited geographic framework might obscure. Attempts to write a Braudelian Atlantic history, one that includes and connects the entire region, remain elusive, hindered in part by methodological impediments, by the real disjunction that characterized the Atlantic's historical and geographic components, by the disciplinary divisions that discourage historians from speaking to and writing for each other, and by the challenge of finding a vantage point that is not rooted in any single place.

Colonial studies

One impetus for Atlantic studies began in the 1960s with the historians of slavery who started tracking the routes of the transatlantic slave trade. A second source came from historians who studied the colonial history of the United States. Many were trained in early modern European history and were familiar with the historiography of the British Empire, which had been introduced a century before by George Louis Beer and Charles McLean Andrews. Historians studying colonialism have long been open to interdisciplinary perspectives, such as comparative approaches. In addition, there was frustration involved in writing about very few people in a small, remote colony. Atlantic history opens the horizon to large forces at work over great distances.

Criticism

Some critics have complained that Atlantic history is little more than imperial history under another name. It has been argued that it is too expansive in claiming to subsume both of the American continents, Africa, and Europe, without seriously engaging with them. According to Caroline Dodds Pennock, indigenous people are often seen as static recipients of transatlantic encounter, despite the fact that thousands of Native Americans crossed the ocean during the sixteenth century, some by choice.

Canadian scholar Ian K. Steele argued that Atlantic history will tend to draw students interested in exploring their country's history beyond national myths, while offering historical support for such 21st-century institutions as the North American Free Trade Agreement (NAFTA), the Organization of American States (OAS), the North Atlantic Treaty Organization (NATO), the New Europe, Christendom, and even the United Nations (UN). He concludes, "The early modern Atlantic can even be read as a natural antechamber for American-led globalization of capitalism and serve as an historical challenge to the coalescing New Europe. No wonder that the academic reception of the new Atlantic history has been enthusiastic in the United States, and less so in Britain, France, Spain, and Portugal, where histories of national Atlantic empires continue to thrive."

Vaccine trial

From Wikipedia, the free encyclopedia
Volunteer participating in a phase 3 trial of CoronaVac at Padjadjaran University, Bandung, West Java, Indonesia.

A vaccine trial is a clinical trial that aims at establishing the safety and efficacy of a vaccine prior to it being licensed.

A vaccine candidate drug is first identified through preclinical evaluations that may involve high-throughput screening and selection of the proper antigen to invoke an immune response.

Some vaccine trials may take months or years to complete, depending on the time required for the subjects to react to the vaccine and develop the required antibodies.

Preclinical stage

Preclinical development stages are necessary to determine the immunogenicity potential and safety profile for a vaccine candidate.

This is also the stage in which the drug candidate may first be tested in laboratory animals prior to moving to Phase I trials. Vaccines such as the oral polio vaccine were first tested for adverse effects and immunogenicity in non-human primates such as monkeys, as well as in lab mice.

Recent scientific advances have made it possible to use transgenic animals as part of vaccine preclinical protocols, in hopes of more accurately determining drug reactions in humans. Understanding vaccine safety and the immunological response to the vaccine, including toxicity, is a necessary component of the preclinical stage. Other drug trials focus on pharmacodynamics and pharmacokinetics; in vaccine studies, however, it is essential to understand toxic effects at all possible dosage levels and the interactions with the immune system.

Phase I

The Phase I study consists of introducing the vaccine candidate to assess its safety in healthy people. A vaccine Phase I trial involves normal healthy subjects, each tested with either the candidate vaccine or a "control" treatment, typically a placebo or an adjuvant-containing cocktail, or an established vaccine (which might be intended to protect against a different pathogen). The primary observation is for detection of safety (absence of an adverse event) and evidence of an immune response.

After the administration of the vaccine or placebo, the researchers collect data on antibody production and on health outcomes (such as illness due to the targeted infection or to another infection). Following the trial protocol, the specified statistical test is performed to gauge the statistical significance of the observed differences in outcomes between the treatment and control groups. Side effects of the vaccine are also noted, and these contribute to the decision on whether to advance the candidate vaccine to a Phase II trial.
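The protocol-specified comparison between treatment and control groups is often a simple test of proportions. The following is a minimal sketch, assuming hypothetical arm sizes and infection counts (invented for illustration, not data from any real trial), of a two-sided two-proportion z-test:

```python
from math import sqrt, erfc

def two_proportion_z_test(infected_a, n_a, infected_b, n_b):
    """Two-sided z-test for a difference in infection rates
    between two trial arms (pooled normal approximation)."""
    p_a, p_b = infected_a / n_a, infected_b / n_b
    p_pool = (infected_a + infected_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Two-sided p-value from the standard normal tail: P(|Z| > |z|)
    p_value = erfc(abs(z) / sqrt(2))
    return z, p_value

# Hypothetical counts: 8 infections among 500 vaccinated,
# 40 among 500 controls.
z, p = two_proportion_z_test(infected_a=8, n_a=500, infected_b=40, n_b=500)
print(f"z = {z:.2f}, p = {p:.6f}")
```

In practice a real trial's protocol fixes the exact test, significance level, and interim-analysis rules in advance; this sketch only shows the shape of the comparison.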

One typical version of Phase I studies in vaccines involves an escalation study, which is used mainly in medicinal research trials. The drug is introduced into a small cohort of healthy volunteers. Vaccine escalation studies aim to minimize the chance of serious adverse effects (SAEs) by slowly increasing the drug dosage or frequency. The first level of an escalation study usually has two or three subgroups of around 10 healthy volunteers. Each subgroup receives the same vaccine dose, which is the expected lowest dose necessary to invoke an immune response (the main goal of a vaccine being to create immunity). New subgroups can be added to experiment with a different dosing regimen as long as the previous subgroup did not experience SAEs. There are variations in the vaccination order that can be used for different studies. For example, the first subgroup could complete the entire regimen before the second subgroup starts, or the second can begin before the first ends as long as SAEs were not detected. The vaccination schedule will vary depending on the nature of the drug (i.e. the need for a booster or several doses over a short time period). Escalation studies are ideal for minimizing the risk of SAEs that could occur with less controlled protocols.
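The stopping logic of such a design can be sketched in a few lines. This is a toy illustration only: the dose levels and the stand-in SAE-observation function are invented, and real escalation protocols involve far more nuanced stopping and review rules:

```python
def run_escalation(doses, cohort, observe_saes):
    """Toy sketch of a dose-escalation design: each subgroup of
    `cohort` volunteers receives one dose level, starting from the
    lowest; escalation halts as soon as any serious adverse
    event (SAE) is observed in the current subgroup."""
    completed = []
    for dose in sorted(doses):
        saes = observe_saes(dose, cohort)  # SAE count for this subgroup
        if saes > 0:
            break  # do not enroll the next, higher-dose subgroup
        completed.append(dose)
    return completed

# Hypothetical dose levels (micrograms) and a stand-in observation
# function in which SAEs first appear at the 100-microgram dose.
doses = [10, 25, 50, 100]
cleared = run_escalation(doses, cohort=10,
                         observe_saes=lambda d, n: 1 if d >= 100 else 0)
print(cleared)  # → [10, 25, 50]
```

The point of the structure is that each subgroup acts as a safety gate for the next, which is why escalation designs bound the number of volunteers exposed to a dose that turns out to be harmful.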

Phase II

The transition to Phase II relies on the immunogenicity and toxicity results from Phase I in a small cohort of healthy volunteers. Phase II enrolls more healthy volunteers from the vaccine's target population (on the order of hundreds of people) to determine reactions in a more diverse set of humans and to test different schedules.

Phase III

Similarly, Phase III trials continue to monitor toxicity, immunogenicity, and SAEs on a much larger scale. The vaccine must be shown to be safe and effective in natural disease conditions before being submitted for approval and then general production. In the United States, the Food and Drug Administration (FDA) is responsible for approving vaccines.

Phase IV

Phase IV trials are typically monitoring stages that continuously collect information on vaccine usage, adverse effects, and long-term immunity after the vaccine is licensed and marketed. Harmful effects, such as increased risk of liver failure or heart attacks, discovered in Phase IV trials may result in a drug no longer being sold, or being restricted to certain uses; examples include cerivastatin (brand names Baycol and Lipobay), troglitazone (Rezulin) and rofecoxib (Vioxx). Further examples include the swine flu vaccine and the rotavirus vaccine, which increased the risk of Guillain-BarrΓ© syndrome (GBS) and intussusception respectively. Thus, the fourth phase of clinical trials is used to ensure long-term vaccine safety.

Saturday, December 7, 2024

Collective memory

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Collective_memory

Collective memory refers to the shared pool of memories, knowledge and information of a social group that is significantly associated with the group's identity. The English phrase "collective memory" and the equivalent French phrase "la mΓ©moire collective" appeared in the second half of the nineteenth century. The philosopher and sociologist Maurice Halbwachs analyzed and advanced the concept of collective memory in the book Les cadres sociaux de la mΓ©moire (1925).

Collective memory can be constructed, shared, and passed on by large and small social groups. Examples of these groups can include nations, generations, communities, among others.

Collective memory has been a topic of interest and research across a number of disciplines, including psychology, sociology, history, philosophy, and anthropology.

Conceptualization of collective memory

Attributes of collective memory

Collective memory has been conceptualized in several ways and proposed to have certain attributes. For instance, collective memory can refer to a shared body of knowledge (e.g., memory of a nation's past leaders or presidents); the image, narrative, values and ideas of a social group; or the continuous process by which collective memories of events change.

History versus collective memory

The difference between history and collective memory is best understood when comparing the aims and characteristics of each. A goal of history broadly is to provide a comprehensive, accurate, and unbiased portrayal of past events. This often includes the representation and comparison of multiple perspectives and the integration of these perspectives and details to provide a complete and accurate account. In contrast, collective memory focuses on a single perspective, for instance, the perspective of one social group, nation, or community. Consequently, collective memory represents past events as associated with the values, narratives and biases specific to that group.

Studies have found that people from different nations can have major differences in their recollections of the past. In one study where American and Russian students were instructed to recall significant events from World War II and these lists of events were compared, the majority of events recalled by the American and Russian students were not shared. Differences in the events recalled and emotional views towards the Civil War, World War II and the Iraq War have also been found in a study comparing collective memory between generations of Americans.

Perspectives on collective memory

The concept of collective memory, initially developed by Halbwachs, has been explored and expanded from various angles – a few of these are introduced below.

James E. Young has introduced the notion of 'collected memory' (as opposed to collective memory), marking memory's inherently fragmented, collected and individual character, while Jan Assmann develops the notion of 'communicative memory', a variety of collective memory based on everyday communication. This form of memory resembles the exchanges in oral cultures or the memories collected (and made collective) through oral tradition. As another subform of collective memory, Assmann mentions forms detached from the everyday; these can be particular materialized and fixed points, such as texts and monuments.

The theory of collective memory was also discussed by former Hiroshima resident and atomic-bomb survivor, Kiyoshi Tanimoto, in a tour of the United States as an attempt to rally support and funding for the reconstruction of his Memorial Methodist Church in Hiroshima. He theorized that the use of the atomic bomb had forever added to the world's collective memory and would serve in the future as a warning against such devices. See John Hersey's 1946 book Hiroshima.

Historian Guy Beiner (born 1968), an authority on memory and the history of Ireland, has criticized the unreflective use of the adjective "collective" in many studies of memory:

The problem is with crude concepts of collectivity, which assume a homogeneity that is rarely, if ever, present, and maintain that, since memory is constructed, it is entirely subject to the manipulations of those invested in its maintenance, denying that there can be limits to the malleability of memory or to the extent to which artificial constructions of memory can be inculcated. In practice, the construction of a completely collective memory is at best an aspiration of politicians, which is never entirely fulfilled and is always subject to contestations.

In its place, Beiner has promoted the term "social memory" and has also demonstrated its limitations by developing a related concept of "social forgetting".

Historian David Rieff takes issue with the term "collective memory", distinguishing between memories of people who were actually alive during the events in question, and people who only know about them from culture or media. Rieff writes in opposition to George Santayana's aphorism "those who cannot remember the past are condemned to repeat it", pointing out that strong cultural emphasis on certain historical events (often wrongs against the group) can prevent resolution of armed conflicts, especially when the conflict has been previously fought to a draw. The sociologist David Leupold draws attention to the problem of structural nationalism inherent in the notion of collective memory, arguing in favor of "emancipating the notion of collective memory from being subjected to the national collective" by employing a multi-collective perspective that highlights the mutual interaction of other memory collectives that form around generational belonging, family, locality or socio-political world-views.

Pierre LΓ©vy argues that the phenomenon of human collective intelligence undergoes a profound shift with the arrival of the internet paradigm, as it allows the vast majority of humanity to access and modify a common shared online collective memory.

Collective memory and psychological research

Though traditionally a topic studied in the humanities, collective memory has become an area of interest in psychology. Common approaches taken in psychology to study collective memory have included investigating the cognitive mechanisms involved in the formation and transmission of collective memory; and comparing the social representations of history between social groups.

Social representations of history

Research on collective memory has taken the approach of comparing how different social groups form their own representations of history and how such collective memories can impact ideals, values, and behaviors, and vice versa. Developing social identity and evaluating the past in order to avoid repeating past patterns of conflict and error are proposed functions of why groups form social representations of history. This research has focused on surveying different groups or comparing differences in recollections of historical events, such as the examples given earlier when comparing history and collective memory.

Differences in collective memories between social groups, such as nations or states, have been attributed to collective narcissism and egocentric/ethnocentric bias. In one related study, where participants from 35 countries were asked about their country's contribution to world history and provided a percentage estimate from 0% to 100%, evidence for collective narcissism was found, as many countries gave responses exaggerating their country's contribution. In another study, where Americans from the 50 states were asked similar questions regarding their state's contribution to the history of the United States, patterns of overestimation and collective narcissism were also found.
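The overclaiming signature in these studies is essentially arithmetic: if every group's estimated share of history were accurate and non-overlapping, the shares would sum to at most 100%. A toy illustration with invented numbers (not the actual survey data):

```python
# Hypothetical self-estimates of "share of world history" from a
# handful of countries (invented values, for illustration only).
estimates = {"A": 40, "B": 35, "C": 30, "D": 25, "E": 20}

total = sum(estimates.values())
print(f"sum of claimed shares: {total}%")  # → 150%

# A sum well above 100% cannot reflect accurate, non-overlapping
# contributions: this surplus is the signature of collective
# overclaiming reported in the cross-national studies.
overclaiming = total > 100
print(overclaiming)  # → True
```

The real studies aggregate many respondents per country, but the diagnostic is the same: claimed shares that collectively exceed what is logically available.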

Cognitive mechanisms underlying collaborative recall

Certain cognitive mechanisms involved during group recall, and the interactions between these mechanisms, have been suggested to contribute to the formation of collective memory. Below are some mechanisms at work when groups of individuals recall collaboratively.

Collaborative inhibition and retrieval disruption

When groups collaborate to recall information, they experience collaborative inhibition, a decrease in performance compared to the pooled memory recall of an equal number of individuals. Weldon and Bellinger (1997) and Basden, Basden, Bryner, and Thomas (1997) provided evidence that retrieval interference underlies collaborative inhibition, as hearing other members' thoughts and discussion about the topic at hand interferes with one's own organization of thoughts and impairs memory.

The main theoretical account of collaborative inhibition is retrieval disruption. During the encoding of information, individuals form their own idiosyncratic organization of the information. This organization is later used when trying to recall the information. In a group setting, as members exchange information, the information recalled by other group members disrupts the idiosyncratic organization one had developed. As each member's organization is disrupted, the group recalls less information than the pooled recall of an equal number of participants who recalled individually.

Despite the problem of collaborative inhibition, working in groups may benefit an individual's memory in the long run, as group discussion exposes one to many different ideas over time. Working alone initially prior to collaboration seems to be the optimal way to increase memory.

Early speculations about collaborative inhibition included explanations such as diminished personal accountability, social loafing and the diffusion of responsibility; however, retrieval disruption remains the leading explanation. Studies have tied collaborative inhibition to sources other than social loafing, as offering a monetary incentive has been shown to fail to produce an increase in memory for groups. Further evidence from this study suggests that something other than social loafing is at work, as reducing evaluation apprehension – the focus on one's performance amongst other people – assisted individuals' memories but did not produce a gain in memory for groups. Personal accountability – drawing attention to one's own performance and contribution in a group – also did not reduce collaborative inhibition. Therefore, the interference that arises during group recall cannot be overcome by these motivational factors.

Cross-cueing

Information exchange among group members often helps individuals to remember things that they would not have remembered had they been working alone. In other words, the information provided by person A may 'cue' memories in person B, resulting in enhanced recall. Although an individual's own retrieval may be disrupted by other team members during group recall, hearing others' contributions can nevertheless cue memories that the individual could not have retrieved alone. Cross-cueing thus plays a role in the formulation of group recall (Barber, 2011).

Collective false memories

A study of how individuals remembered the 1980 bombing at Bologna Centrale railway station in Italy illustrates collective false memory. The station clock was later permanently set at 10:25, the time of the blast, to commemorate the bombing (de Vito et al., 2009). When individuals were asked whether the clock had remained functioning after the attack, nearly everyone said no, when in fact the opposite was true (Legge, 2018). There have been many instances in history where people have created false memories. A 2003 study conducted at Claremont Graduate University demonstrated that the brain handles memory formed during a stressful event differently from memory of an ordinary event. Other instances of false memory occur when someone remembers a detail of an object that was not actually there, or misremembers how someone looked at a crime scene (Legge, 2018). It is possible for many people to share the same false memory; some call this the "Mandela effect", named after the South African civil rights leader Nelson Mandela, whom many people falsely believed to be dead (Legge, 2018). The Pandora Box experiment suggests that language further complicates false memory: language plays a role in imaginative experiences, making it harder for humans to gather correct information (Jablonka, 2017).

Error pruning

Compared to recalling individually, group members can provide opportunities for error pruning during recall to detect errors that would otherwise be uncorrected by an individual.

Social contagion errors

Group settings can also provide opportunities for exposure to erroneous information that may be mistaken to be correct or previously studied.

Re-exposure effects

Listening to group members recall the previously encoded information can enhance memory as it provides a second exposure opportunity to the information.

Forgetting

Studies have shown that information forgotten and excluded during group recall can promote the forgetting of related information, compared to information unrelated to what was excluded. Selective forgetting has been suggested to be a critical mechanism in the formation of collective memories, shaping which details are ultimately included and excluded by group members. This mechanism has been studied using the socially shared retrieval-induced forgetting paradigm, a variation of the retrieval-induced forgetting method used with individuals.[39][40][41] Several brain regions are central to memory, including the cerebral cortex, the fornix, and the structures they contain. These structures are required for acquiring new information, and damage to any of them can produce anterograde or retrograde amnesia (Anastasio et al., p. 26, 2012). Amnesia can result from anything that disrupts memory or affects a person psychologically, and over time memory loss becomes a characteristic part of the condition; retrograde amnesia may affect memory of recent or more distant past events.

Synchronization of memories from dyads to networks

Bottom-up approaches to the formation of collective memories investigate how cognitive-level phenomena allow people to synchronize their memories through conversational remembering. Due to the malleability of human memory, talking with one another about the past produces memory changes that increase the similarity between the interactional partners' memories. When these dyadic interactions occur in a social network, one can understand how large communities converge on a similar memory of the past. Research on larger interactions shows that collective memory in larger social networks can emerge through the cognitive mechanisms involved in small-group interactions.

Computational approaches to collective memory analysis

With the availability of online data such as social media and social network data, together with developments in natural language processing and information retrieval, it has become possible to study how online users refer to the past and what they focus on. In an early study in 2010, researchers extracted absolute year references from large collections of news articles retrieved for queries denoting particular countries. This allowed them to portray so-called memory curves, which show which years are remembered particularly strongly in the context of different countries and how attention to more distant years declines in news coverage; such curves are commonly exponential in shape, with occasional peaks corresponding to the commemoration of important past events. Based on topic modelling and analysis, the researchers then detected the major topics characterizing how particular years are remembered. Beyond news, Wikipedia has also been a target of analysis: viewership statistics of Wikipedia articles on aircraft crashes were analyzed to study the relation between recent and past events, particularly to understand memory-triggering patterns.

Other studies have focused on the analysis of collective memory in social networks, such as an investigation of over 2 million history-related tweets (both quantitative and qualitative) to uncover their characteristics and the ways in which history-related content is disseminated in social networks. Hashtags, as well as tweets, can be classified into the following types:

  • General History hashtags used in general to broadly identify history-related tweets that do not fall into any specific type (e.g., #history, #historyfacts).
  • National or Regional History hashtags which relate to national or regional histories, for example, #ushistory or #canadianhistory including also past names of locations (e.g., #ancientgreece).
  • Facet-focused History hashtags which relate to particular thematic facets of history (e.g., #sporthistory, #arthistory).
  • General Commemoration hashtags that serve for commemorating or recalling a certain day or period (often related in some way to the day of tweet posting), or unspecified entities, such as #todayweremember, #otd, #onthisday, #4yearsago and #rememberthem.
  • Historical Events hashtags related to particular events in the past (e.g., #wwi, #sevenyearswar).
  • Historical Entities hashtags denoting references to specific entities such as persons, organizations or objects (e.g., #stalin, #napoleon).

The study of digital memorialization, which encompasses the ways in which social and collective memory have shifted after the digital turn, has grown substantially in response to the proliferation of memorial content not only on the internet, but also through the increased use of digital formats and tools in heritage institutions, classrooms, and among individual users worldwide.
