
Thursday, March 9, 2023

Lewis structure

From Wikipedia, the free encyclopedia
 
Lewis structure of a water molecule, in which two hydrogen atoms and one oxygen atom share valence electrons.

Lewis structures, also known as Lewis dot formulas, Lewis dot structures, electron dot structures, or Lewis electron dot structures (LEDS), are diagrams that show the bonding between atoms of a molecule, as well as the lone pairs of electrons that may exist in the molecule. A Lewis structure can be drawn for any covalently bonded molecule, as well as coordination compounds. The Lewis structure was named after Gilbert N. Lewis, who introduced it in his 1916 article The Atom and the Molecule. Lewis structures extend the concept of the electron dot diagram by adding lines between atoms to represent shared pairs in a chemical bond.

Lewis structures show each atom and its position in the structure of the molecule using its chemical symbol. Lines are drawn between atoms that are bonded to one another (pairs of dots can be used instead of lines). Excess electrons that form lone pairs are represented as pairs of dots, and are placed next to the atoms.

Although main group elements of the second period and beyond usually react by gaining, losing, or sharing electrons until they have achieved a valence shell electron configuration with a full octet of eight electrons, hydrogen (H) can only form bonds that share just two electrons.

Construction and electron counting

The total number of electrons represented in a Lewis structure is equal to the sum of the numbers of valence electrons on each individual atom. Non-valence electrons are not represented in Lewis structures.

Once the total number of available electrons has been determined, electrons must be placed into the structure according to the following steps (a short code sketch of this bookkeeping appears after the list):

  1. The atoms are first connected by single bonds.
  2. If t is the total number of electrons and n the number of single bonds, t-2n electrons remain to be placed. These should be placed as lone pairs: one pair of dots for each pair of electrons available. Lone pairs should initially be placed on outer atoms (other than hydrogen) until each outer atom has eight electrons in bonding pairs and lone pairs; extra lone pairs may then be placed on the central atom. When in doubt, lone pairs should be placed on more electronegative atoms first.
  3. Once all lone pairs are placed, atoms (especially the central atoms) may not have an octet of electrons. In this case, the atoms must form a double bond; a lone pair of electrons is moved to form a second bond between the two atoms. As the bonding pair is shared between the two atoms, the atom that originally had the lone pair still has an octet; the other atom now has two more electrons in its valence shell.
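A minimal Python sketch of the electron bookkeeping in steps 1 and 2, assuming a hand-supplied skeleton of single bonds; the valence table and function name are illustrative, and this is not a general structure solver:

    VALENCE = {"H": 1, "C": 4, "N": 5, "O": 6, "F": 7, "S": 6, "Cl": 7}

    def electron_bookkeeping(atoms, n_single_bonds, charge=0):
        """Return (total valence electrons t, lone pairs left to place)."""
        t = sum(VALENCE[a] for a in atoms) - charge   # an anion's charge adds electrons
        remaining = t - 2 * n_single_bonds            # step 2: t - 2n electrons remain
        assert remaining >= 0 and remaining % 2 == 0
        return t, remaining // 2                      # each pair of dots is 2 electrons

    # Water: H-O-H skeleton, two single bonds.
    t, lone_pairs = electron_bookkeeping(["H", "H", "O"], n_single_bonds=2)
    print(t, lone_pairs)  # 8 2 -> two lone pairs, both placed on the oxygen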

Lewis structures for polyatomic ions may be drawn by the same method. When counting electrons, negative ions should have extra electrons placed in their Lewis structures; positive ions should have fewer electrons than an uncharged molecule. When the Lewis structure of an ion is written, the entire structure is placed in brackets, and the charge is written as a superscript on the upper right, outside the brackets.

A simpler method has been proposed for constructing Lewis structures, eliminating the need for electron counting: the atoms are drawn showing the valence electrons; bonds are then formed by pairing up valence electrons of the atoms involved in the bond-making process, and anions and cations are formed by adding or removing electrons to/from the appropriate atoms.

A useful shortcut is to count the valence electrons, then count the number of electrons needed to complete the octet rule (or, for hydrogen, just 2 electrons), and take the difference of these two numbers. The result is the number of electrons that make up the bonds; the rest of the electrons simply fill the other atoms' octets. A small sketch of this counting appears below.
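A worked version of the shortcut in Python (the valence table and function name are illustrative):

    VALENCE = {"H": 1, "C": 4, "N": 5, "O": 6}

    def bonding_electrons(atoms, charge=0):
        needed = sum(2 if a == "H" else 8 for a in atoms)     # octets, duets for H
        available = sum(VALENCE[a] for a in atoms) - charge   # valence electrons
        return needed - available                             # electrons that sit in bonds

    # Water: needed = 8 + 2 + 2 = 12, available = 6 + 1 + 1 = 8,
    # so 12 - 8 = 4 bonding electrons, i.e. two single bonds.
    print(bonding_electrons(["O", "H", "H"]))  # 4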

Another simple and general procedure to write Lewis structures and resonance forms has been proposed.

Formal charge

In terms of Lewis structures, formal charge is used in the description, comparison, and assessment of likely topological and resonance structures by determining the apparent electronic charge of each atom within, based upon its electron dot structure, assuming exclusive covalency or non-polar bonding. It has uses in determining possible electron re-configuration when referring to reaction mechanisms, and often results in the same sign as the partial charge of the atom, with exceptions. In general, the formal charge of an atom can be calculated using the following formula:

FC = V − N − B/2

where:

  • FC is the formal charge.
  • V represents the number of valence electrons in a free atom of the element.
  • N represents the number of unshared electrons on the atom.
  • B represents the total number of electrons in bonds the atom has with another atom.

The formal charge of an atom is computed as the difference between the number of valence electrons that a neutral atom would have and the number of electrons that belong to it in the Lewis structure. Electrons in covalent bonds are split equally between the atoms involved in the bond. The total of the formal charges on an ion should be equal to the charge on the ion, and the total of the formal charges on a neutral molecule should be equal to zero.
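In code, the rule above (FC = V − N − B/2) is a one-liner; the example applies it to the oxygen atom of water (names are illustrative):

    def formal_charge(valence, unshared, bonding):
        """FC = V - N - B/2, with B the electrons in bonds to the atom."""
        return valence - unshared - bonding // 2

    # Oxygen in water: V = 6, two lone pairs (N = 4), two single bonds (B = 4).
    print(formal_charge(6, 4, 4))  # 0, as expected for a neutral molecule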

Resonance

For some molecules and ions, it is difficult to determine which lone pairs should be moved to form double or triple bonds, and two or more different resonance structures may be written for the same molecule or ion. In such cases it is usual to write all of them with two-way arrows in between (see Example below). This is sometimes the case when multiple atoms of the same type surround the central atom, and is especially common for polyatomic ions.

When this situation occurs, the molecule's Lewis structure is said to be a resonance structure, and the molecule exists as a resonance hybrid. Each of the different possibilities is superimposed on the others, and the molecule is considered to have a Lewis structure equivalent to some combination of these states.

The nitrate ion (NO3−), for instance, must form a double bond between nitrogen and one of the oxygens to satisfy the octet rule for nitrogen. However, because the molecule is symmetrical, it does not matter which of the oxygens forms the double bond. In this case, there are three possible resonance structures. Expressing resonance when drawing Lewis structures may be done either by drawing each of the possible resonance forms and placing double-headed arrows between them or by using dashed lines to represent the partial bonds (although the latter is a good representation of the resonance hybrid, which is not, formally speaking, a Lewis structure).

When comparing resonance structures for the same molecule, usually those with the fewest formal charges contribute more to the overall resonance hybrid. When formal charges are necessary, resonance structures that have negative charges on the more electronegative elements and positive charges on the less electronegative elements are favored.

Single bonds can also be moved in the same way to create resonance structures for hypervalent molecules such as sulfur hexafluoride; according to quantum chemical calculations, this resonance description is more accurate than the common expanded-octet model.

The resonance structure should not be interpreted to indicate that the molecule switches between forms, but that the molecule acts as the average of multiple forms.

Example

The formula of the nitrite ion is NO2−.

  1. Nitrogen is the least electronegative atom of the two, so it is the central atom by multiple criteria.
  2. Count valence electrons. Nitrogen has 5 valence electrons; each oxygen has 6, for a total of (6 × 2) + 5 = 17. The ion has a charge of −1, which indicates an extra electron, so the total number of electrons is 18.
  3. Connect the atoms by single bonds. Each oxygen must be bonded to the nitrogen, which uses four electrons—two in each bond.
  4. Place lone pairs. The 14 remaining electrons should initially be placed as 7 lone pairs. Each oxygen may take a maximum of 3 lone pairs, giving each oxygen 8 electrons including the bonding pair. The seventh lone pair must be placed on the nitrogen atom.
  5. Satisfy the octet rule. Both oxygen atoms currently have 8 electrons assigned to them. The nitrogen atom has only 6 electrons assigned to it. One of the lone pairs on an oxygen atom must be used to form a double bond, but either oxygen atom will work equally well. Therefore, there is a resonance structure.
  6. Tie up loose ends. Two Lewis structures must be drawn: Each structure has one of the two oxygen atoms double-bonded to the nitrogen atom. The second oxygen atom in each structure will be single-bonded to the nitrogen atom. Place brackets around each structure, and add the charge (−) to the upper right outside the brackets. Draw a double-headed arrow between the two resonance forms.
The two canonical (resonance) Lewis structures of the nitrite ion.
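As a consistency check on the finished structures, a small self-contained sketch verifies that the formal charges in one resonance form, [O=N-O]−, sum to the charge of the ion; the per-atom electron counts are read off the drawing above:

    def formal_charge(valence, unshared, bonding):
        return valence - unshared - bonding // 2

    # (V, N, B) for each atom in one resonance form of nitrite, [O=N-O]-:
    atoms = {
        "N, one lone pair, double + single bond": (5, 2, 6),
        "O, double-bonded, two lone pairs":       (6, 4, 4),
        "O, single-bonded, three lone pairs":     (6, 6, 2),
    }
    charges = {name: formal_charge(*vnb) for name, vnb in atoms.items()}
    print(charges)                 # 0, 0, and -1 respectively
    print(sum(charges.values()))   # -1, matching the charge of the ion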

Alternative formations

Two varieties of condensed structural formula, both showing butane
A skeletal diagram of butane

Chemical structures may be written in more compact forms, particularly when showing organic molecules. In condensed structural formulas, many or even all of the covalent bonds may be left out, with subscripts indicating the number of identical groups attached to a particular atom. Another shorthand structural diagram is the skeletal formula (also known as a bond-line formula or carbon skeleton diagram). In a skeletal formula, carbon atoms are not signified by the symbol C but by the vertices of the lines. Hydrogen atoms bonded to carbon are not shown; they can be inferred by counting the number of bonds drawn at a particular carbon atom, since each carbon is assumed to have four bonds in total, so any bonds not shown are, by implication, to hydrogen atoms (a small sketch of this inference follows).
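A minimal sketch, assuming a hand-coded list of the bond orders drawn at each carbon vertex of butane (names are illustrative):

    def implicit_hydrogens(bonds_at_vertex):
        """Each carbon has four bonds; unshown bonds go to hydrogen."""
        return 4 - sum(bonds_at_vertex)

    # Butane, CH3-CH2-CH2-CH3: the end carbons show one single bond each,
    # the two inner carbons show two.
    skeleton = [[1], [1, 1], [1, 1], [1]]   # bond orders drawn at each vertex
    hs = [implicit_hydrogens(v) for v in skeleton]
    print(hs, sum(hs))  # [3, 2, 2, 3] 10 -> C4H10, the formula of butane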

Other diagrams may be more complex than Lewis structures, showing bonds in 3D using various forms such as space-filling diagrams.

Usage and limitations

Despite their simplicity and development in the early twentieth century, when understanding of chemical bonding was still rudimentary, Lewis structures capture many of the key features of the electronic structure of a range of molecular systems, including those of relevance to chemical reactivity. Thus, they continue to enjoy widespread use by chemists and chemistry educators. This is especially true in the field of organic chemistry, where the traditional valence-bond model of bonding still dominates, and mechanisms are often understood in terms of curved-arrow notation superimposed upon skeletal formulae, which are shorthand versions of Lewis structures. Due to the greater variety of bonding schemes encountered in inorganic and organometallic chemistry, many of the molecules encountered require the use of fully delocalized molecular orbitals to adequately describe their bonding, making Lewis structures comparatively less important (although they are still common).

There are, however, simple and archetypal molecular systems for which a Lewis description, at least in unmodified form, is misleading or inaccurate. Notably, the naive drawing of Lewis structures for molecules known experimentally to contain unpaired electrons (e.g., O2, NO, and ClO2) leads to incorrect inferences of bond orders, bond lengths, and/or magnetic properties. A simple Lewis model also does not account for the phenomenon of aromaticity. For instance, Lewis structures do not offer an explanation for why cyclic C6H6 (benzene) experiences special stabilization beyond normal delocalization effects, while C4H4 (cyclobutadiene) actually experiences a special destabilization. Molecular orbital theory provides the most straightforward explanation for these phenomena.

Computer-assisted proof

From Wikipedia, the free encyclopedia

A computer-assisted proof is a mathematical proof that has been at least partially generated by computer.

Most computer-aided proofs to date have been implementations of large proofs-by-exhaustion of a mathematical theorem. The idea is to use a computer program to perform lengthy computations, and to provide a proof that the result of these computations implies the given theorem. In 1976, the four color theorem was the first major theorem to be verified using a computer program.

Attempts have also been made in the area of artificial intelligence research to create smaller, explicit, new proofs of mathematical theorems from the bottom up using automated reasoning techniques such as heuristic search. Such automated theorem provers have proved a number of new results and found new proofs for known theorems. Additionally, interactive proof assistants allow mathematicians to develop human-readable proofs which are nonetheless formally verified for correctness. Since these proofs are generally human-surveyable (albeit with difficulty, as with the proof of the Robbins conjecture) they do not share the controversial implications of computer-aided proofs-by-exhaustion.

Methods

One method for using computers in mathematical proofs is by means of so-called validated numerics or rigorous numerics. This means computing numerically yet with mathematical rigour. One uses set-valued arithmetic and the inclusion principle in order to ensure that the set-valued output of a numerical program encloses the solution of the original mathematical problem. This is done by controlling, enclosing and propagating round-off and truncation errors using, for example, interval arithmetic. More precisely, one reduces the computation to a sequence of elementary operations, say (+, −, ×, ÷). In a computer, the result of each elementary operation is rounded off to the computer precision. However, one can construct an interval given by upper and lower bounds on the result of an elementary operation. One then proceeds by replacing numbers with intervals and performing the elementary operations between such intervals of representable numbers. A minimal sketch of the idea follows.
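A toy interval type in Python (assuming Python 3.9+ for math.nextafter); each operation rounds its lower endpoint down and its upper endpoint up, so the computed interval rigorously encloses the exact real result of the operations on the endpoints:

    import math

    class Interval:
        """Endpoints are rounded outward, so the exact result is enclosed."""
        def __init__(self, lo, hi):
            self.lo, self.hi = lo, hi

        def __add__(self, other):
            return Interval(math.nextafter(self.lo + other.lo, -math.inf),
                            math.nextafter(self.hi + other.hi, math.inf))

        def __mul__(self, other):
            p = [self.lo * other.lo, self.lo * other.hi,
                 self.hi * other.lo, self.hi * other.hi]
            return Interval(math.nextafter(min(p), -math.inf),
                            math.nextafter(max(p), math.inf))

        def __repr__(self):
            return f"[{self.lo!r}, {self.hi!r}]"

    x = Interval(0.1, 0.1)    # the float nearest to 0.1
    print(x + x + x)          # a rigorous enclosure of the exact sum of the
                              # three endpoint values, despite round-off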

Philosophical objections

Computer-assisted proofs are the subject of some controversy in the mathematical world, with Thomas Tymoczko the first to articulate objections. Those who adhere to Tymoczko's arguments believe that lengthy computer-assisted proofs are not, in some sense, 'real' mathematical proofs because they involve so many logical steps that they are not practically verifiable by human beings, and that mathematicians are effectively being asked to replace logical deduction from assumed axioms with trust in an empirical computational process, which is potentially affected by errors in the computer program, as well as defects in the runtime environment and hardware.

Other mathematicians believe that lengthy computer-assisted proofs should be regarded as calculations, rather than proofs: the proof algorithm itself should be proved valid, so that its use can then be regarded as a mere "verification". Arguments that computer-assisted proofs are subject to errors in their source programs, compilers, and hardware can be resolved by providing a formal proof of correctness for the computer program (an approach which was successfully applied to the four-color theorem in 2005) as well as replicating the result using different programming languages, different compilers, and different computer hardware.

Another possible way of verifying computer-aided proofs is to generate their reasoning steps in a machine-readable form, and then use a proof checker program to demonstrate their correctness. Since validating a given proof is much easier than finding a proof, the checker program is simpler than the original assistant program, and it is correspondingly easier to gain confidence in its correctness. However, this approach of using a computer program to prove the output of another program correct does not appeal to computer proof skeptics, who see it as adding another layer of complexity without addressing the perceived need for human understanding.

Another argument against computer-aided proofs is that they lack mathematical elegance—that they provide no insights or new and useful concepts. In fact, this is an argument that could be advanced against any lengthy proof by exhaustion.

An additional philosophical issue raised by computer-aided proofs is whether they make mathematics into a quasi-empirical science, where the scientific method becomes more important than the application of pure reason in the area of abstract mathematical concepts. This directly relates to the argument within mathematics as to whether mathematics is based on ideas, or "merely" an exercise in formal symbol manipulation. It also raises the question of whether, if according to the Platonist view all possible mathematical objects in some sense "already exist", computer-aided mathematics is an observational science like astronomy, rather than an experimental one like physics or chemistry. This controversy within mathematics is occurring at the same time as questions are being asked in the physics community about whether twenty-first century theoretical physics is becoming too mathematical, and leaving behind its experimental roots.

The emerging field of experimental mathematics is confronting this debate head-on by focusing on numerical experiments as its main tool for mathematical exploration.

Applications

Theorems proved with the help of computer programs

Inclusion in this list does not imply that a formal computer-checked proof exists, but rather, that a computer program has been involved in some way. See the main articles for details.

Theorems for sale

In 2010, academics at The University of Edinburgh offered people the chance to "buy their own theorem", created through a computer-assisted proof. The new theorem would be named after the purchaser. This service no longer appears to be available.

Moral reasoning

From Wikipedia, the free encyclopedia

Moral reasoning is the study of how people think about right and wrong and how they acquire and apply moral rules. It is a subdiscipline of moral psychology that overlaps with moral philosophy, and is the foundation of descriptive ethics.

Description

Starting from a young age, people can make moral decisions about what is right and wrong. Moral reasoning, however, is a part of morality that occurs both within and between individuals. Prominent contributors to this theory include Lawrence Kohlberg and Elliot Turiel. The term is sometimes used in a different sense: reasoning under conditions of uncertainty, such as those commonly obtained in a court of law. It is this sense that gave rise to the phrase "to a moral certainty"; however, this idea is now seldom used outside of charges to juries.

Moral reasoning is an important and often daily process that people use when trying to do the right thing. For instance, every day people are faced with the dilemma of whether or not to lie in a given situation. People make this decision by reasoning about the morality of their potential actions and by weighing those actions against their potential consequences.

A moral choice can be a personal, economic, or ethical one, as described by some ethical code or regulated by ethical relationships with others. This branch of psychology is concerned with how these issues are perceived by ordinary people, and so is the foundation of descriptive ethics. There are many different forms of moral reasoning, which are often dictated by culture. Cultural differences in the high-level cognitive functions associated with moral reasoning can be observed in the brain networks that members of various cultures recruit during moral decision-making, demonstrating that cultural influences on an individual's moral reasoning and decision-making have a neural basis.

Distinctions between theories of moral reasoning can be accounted for by evaluating inferences (which tend to be either deductive or inductive) based on a given set of premises. Deductive inference reaches a conclusion that is true based on whether a given set of premises preceding the conclusion are also true, whereas inductive inference goes beyond the information given in a set of premises to base the conclusion on provoked reflection.

In philosophy

Philosopher David Hume claims that morality is based more on perceptions than on logical reasoning. This means that people's morality is based more on their emotions and feelings than on a logical analysis of any given situation. Hume regards morals as linked to passion, love, happiness, and other emotions and therefore not based on reason. Jonathan Haidt agrees, arguing in his social intuitionist model that reasoning concerning a moral situation or idea follows an initial intuition. Haidt's fundamental stance on moral reasoning is that "moral intuitions (including moral emotions) come first and directly cause moral judgments"; he characterizes moral intuition as "the sudden appearance in consciousness of a moral judgment, including an affective valence (good-bad, like-dislike), without any conscious awareness of having gone through steps of searching, weighing evidence, or inferring a conclusion".

Immanuel Kant had a radically different view of morality. In his view, there are universal laws of morality that one should never break, regardless of emotions. He proposes a four-step system to determine whether or not a given action is moral based on logic and reason. The first step of this method involves formulating "a maxim capturing your reason for an action". In the second step, one "frame[s] it as a universal principle for all rational agents". The third step is assessing "whether a world based on this universal principle is conceivable". If it is, then the fourth step is asking oneself "whether [one] would will the maxim to be a principle in this world". In essence, an action is moral if the maxim by which it is justified is one which could be universalized. For instance, when deciding whether or not to lie to someone for one's own advantage, one is meant to imagine what the world would be like if everyone always lied, and successfully so. In such a world, there would be no purpose in lying, for everybody would expect deceit, rendering the universal maxim of lying whenever it is to your advantage absurd. Thus, Kant argues that one should not lie under any circumstance. Another example is deciding whether suicide is moral or immoral: imagine a world in which everyone committed suicide. Since universal suicide would not be a good thing, the act of suicide is immoral. Kant's moral framework, however, operates under the overarching maxim that you should treat each person as an end in themselves, not as a means to an end. This overarching maxim must be considered when applying the four aforementioned steps.

Reasoning based on analogy is one form of moral reasoning. When using this form of moral reasoning, the morality of one situation can be applied to another based on whether this situation is relevantly similar: similar enough that the same moral reasoning applies. A similar type of reasoning is used in common law when arguing based upon legal precedent.

In consequentialism (often distinguished from deontology), actions are judged as right or wrong based upon the consequences of the action, as opposed to a property intrinsic to the action itself.

In developmental psychology

Moral reasoning first attracted broad attention from developmental psychologists in the mid-to-late 20th century. Their main theorization involved elucidating the stages of development of moral reasoning capacity.

Jean Piaget

Jean Piaget described two phases of moral development, one common among children and the other common among adults. The first is known as the Heteronomous Phase. This phase, more common among children, is characterized by the idea that rules come from authority figures in one's life, such as parents, teachers, and God. It also involves the idea that rules are permanent no matter what. Thirdly, this phase of moral development includes the belief that "naughty" behavior must always be punished and that the punishment will be proportional.

The second phase in Piaget's theory of moral development is referred to as the Autonomous Phase. This phase is more common after one has matured and is no longer a child. In this phase people begin to view the intentions behind actions as more important than their consequences. For instance, if a person who is driving swerves in order not to hit a dog and then knocks over a road sign, adults are likely to be less angry at the person than if he or she had done it on purpose just for fun. Even though the outcome is the same, people are more forgiving because of the good intention of saving the dog. This phase also includes the idea that people have different morals and that morality is not necessarily universal. People in the Autonomous Phase also believe rules may be broken under certain circumstances. For instance, Rosa Parks broke the law by refusing to give up her seat on a bus, an act many people consider moral nonetheless. In this phase people also stop believing in the idea of immanent justice.

Lawrence Kohlberg

Inspired by Piaget, Lawrence Kohlberg made significant contributions to the field of moral reasoning by creating a theory of moral development. His theory is a "widely accepted theory that provides the basis for empirical evidence on the influence of human decision making on ethical behavior." In Kohlberg's view, moral development consists of the growth of less egocentric and more impartial modes of reasoning about more complicated matters. He believed that the objective of moral education is to encourage children to grow from one stage to the next, and he emphasized the moral dilemma as a critical tool: children should be presented with dilemmas, along with the knowledge they need to cooperate. According to his theory, people pass through three main levels of moral development as they grow from early childhood to adulthood: pre-conventional morality, conventional morality, and post-conventional morality. Each of these is subdivided into two stages.

The first stage in the pre-conventional level is obedience and punishment. In this stage people, usually young children, avoid certain behaviors only because of the fear of punishment, not because they see them as wrong. The second stage in the pre-conventional level is called individualism and exchange: in this stage people make moral decisions based on what best serves their needs.

The third stage is part of the conventional morality level and is called interpersonal relationships. In this stage one tries to conform to what is considered moral by the society that they live in, attempting to be seen by peers as a good person. The fourth stage is also in the conventional morality level and is called maintaining social order. This stage focuses on a view of society as a whole and following the laws and rules of that society.

The fifth stage is a part of the post-conventional level and is called social contract and individual rights. In this stage people begin to consider differing ideas about morality in other people and feel that rules and laws should be agreed on by the members of a society. The sixth and final stage of moral development, the second in the post-conventional level, is called universal principles. At this stage people begin to develop their ideas of universal moral principles and will consider them the right thing to do regardless of what the laws of a society are.

James Rest

In 1983, James Rest developed the Four Component Model of morality, which addresses the ways that moral motivation and behavior occur. The first of these is moral sensitivity, which is "the ability to see an ethical dilemma, including how our actions will affect others". The second is moral judgment, which is "the ability to reason correctly about what 'ought' to be done in a specific situation". The third is moral motivation, which is "a personal commitment to moral action, accepting responsibility for the outcome". The fourth and final component of moral behavior is moral character, which is a "courageous persistence in spite of fatigue or temptations to take the easy way out".

In social cognition

Based on empirical results from behavioral and neuroscientific studies, social and cognitive psychologists attempted to develop a more accurate descriptive (rather than normative) theory of moral reasoning. That is, the emphasis of research was on how real-world individuals made moral judgments, inferences, decisions, and actions, rather than what should be considered as moral.

Dual-process theory and social intuitionism

Developmental theories of moral reasoning were critiqued for prioritizing the maturation of the cognitive aspect of moral reasoning. From Kohlberg's perspective, one is considered more advanced in moral reasoning the more efficiently one uses deductive reasoning and abstract moral principles to make moral judgments about particular instances. For instance, an advanced reasoner may reason syllogistically from the Kantian principle of 'treat individuals as ends and never merely as means' and a situation where kidnappers are demanding a ransom for a hostage, to conclude that the kidnappers have violated a moral principle and should be condemned. In this process, reasoners are assumed to be rational and to have conscious control over how they arrive at judgments and decisions.

In contrast with this view, however, Joshua Greene and colleagues argued that laypeople's moral judgments are significantly influenced, if not shaped, by intuition and emotion as opposed to the rational application of rules. In their fMRI studies in the early 2000s, participants were shown three types of decision scenarios: one type included moral dilemmas that elicited an emotional reaction (moral-personal condition), the second type included moral dilemmas that did not elicit an emotional reaction (moral-impersonal condition), and the third type had no moral content (non-moral condition). Brain regions such as the posterior cingulate gyrus and the angular gyrus, whose activation is known to correlate with the experience of emotion, showed activation in the moral-personal condition but not in the moral-impersonal condition. Meanwhile, regions known to correlate with working memory, including the right middle frontal gyrus and the bilateral parietal lobe, were less active in the moral-personal condition than in the moral-impersonal condition. Moreover, participants' neural activity in response to moral-impersonal scenarios was similar to their activity in response to non-moral decision scenarios.

Another study used variants of the trolley problem that differed in the 'personal/impersonal' dimension and surveyed people's permissibility judgments (Scenarios 1 and 2). Across scenarios, participants were presented with the option of sacrificing a person to save five people. However, depending on the scenario, the sacrifice involved pushing a person off a footbridge to block the trolley (footbridge dilemma condition; personal) or simply throwing a switch to redirect the trolley (trolley dilemma condition; impersonal). The proportions of participants who judged the sacrifice as permissible differed drastically: 11% (footbridge dilemma) vs. 89% (trolley dilemma). This difference was attributed to the emotional reaction evoked by having to apply personal force to the victim, rather than simply throwing a switch without physical contact. Among participants who judged the sacrifice in the trolley dilemma as permissible but the sacrifice in the footbridge dilemma as impermissible, the majority failed to provide a plausible justification for their differing judgments. Several philosophers have written critical responses on this matter to Joshua Greene and colleagues.

Based on these results, social psychologists proposed the dual process theory of morality. They suggested that our emotional intuition and deliberate reasoning are not only qualitatively distinct, but that they also compete in making moral judgments and decisions. When making an emotionally salient moral judgment, an automatic, unconscious, and immediate response is produced by intuition first. More careful, deliberate, and formal reasoning then follows to produce a response that is either consistent or inconsistent with the earlier response produced by intuition, in parallel with the more general dual process theory of thinking. But in contrast with the previous rational view of moral reasoning, the dominance of the emotional process over the rational process was proposed. Haidt highlighted the aspect of morality not directly accessible by conscious search in memory, weighing of evidence, or inference. He describes moral judgment as akin to aesthetic judgment, where an instant approval or disapproval of an event or object is produced upon perception. Hence, once produced, the immediate intuitive response toward a situation or person cannot easily be overridden by the rational consideration that follows. The theory explained that in many cases, people resolve inconsistency between the intuitive and rational processes by using the latter for post-hoc justification of the former. Haidt, using the metaphor "the emotional dog and its rational tail", applied this view of reasoning to contexts ranging from person perception to politics.

A notable illustration of the influence of intuition involves the feeling of disgust. According to Haidt's moral foundations theory, political liberals rely on two dimensions (harm/care and fairness/reciprocity) of evaluation to make moral judgments, but conservatives utilize three additional dimensions (ingroup/loyalty, authority/respect, and purity/sanctity). Among these, studies have revealed a link between moral evaluations based on the purity/sanctity dimension and the reasoner's experience of disgust. That is, people with higher sensitivity to disgust were more likely to be conservative toward political issues such as gay marriage and abortion. Moreover, when the researchers reminded participants about keeping the lab clean and washing their hands with antiseptics (thereby priming the purity/sanctity dimension), participants' attitudes were more conservative than in the control condition. In turn, these findings of Helzer and Pizarro have been challenged by two failed replication attempts.

Other studies raised criticism toward Haidt's interpretation of his data. Augusto Blasi also rebuts the theories of Jonathan Haidt on moral intuition and reasoning. He agrees with Haidt that moral intuition plays a significant role in the way humans operate. However, Blasi suggests that people use moral reasoning more than Haidt and other cognitive scientists claim. Blasi advocates moral reasoning and reflection as the foundation of moral functioning. Reasoning and reflection play a key role in the growth of an individual and the progress of societies.

Alternatives to these dual-process/intuitionist models have been proposed, with several theorists proposing that moral judgment and moral reasoning involves domain general cognitive processes, e.g., mental models, social learning or categorization processes.

Motivated reasoning

A theorization of moral reasoning similar to dual-process theory was put forward with an emphasis on our motivations to arrive at certain conclusions. Ditto and colleagues likened moral reasoners in everyday situations to lay attorneys rather than lay judges: people do not reason in the direction from assessment of individual evidence to a moral conclusion (bottom-up), but from a preferred moral conclusion to assessment of evidence (top-down). The former resembles the thought process of a judge who is motivated to be accurate, unbiased, and impartial in her decisions; the latter resembles that of an attorney whose goal is to win a dispute using partial and selective arguments.

Kunda proposed motivated reasoning as a general framework for understanding human reasoning. She emphasized the broad influence of physiological arousal, affect, and preference (which constitute the essence of motivation and cherished beliefs) on our general cognitive processes, including memory search and belief construction. Importantly, biases in memory search, hypothesis formation, and evaluation result in confirmation bias, making it difficult for reasoners to critically assess their beliefs and conclusions. When self-control is lacking, individuals and groups can let confirmation bias become the driving force of their reasoning, and this tendency can be exploited: media outlets, governments, extremist groups, cults, and others may selectively present information, strip away context, and substitute a constructed narrative for real evidence, steering the conclusions of individuals, groups, and entire populations toward a preferred opinion.

Applied to the moral domain, our strong motivation to favor people we like leads us to recollect beliefs and interpret facts in ways that favor them. In Alicke (1992, Study 1), participants made responsibility judgments about an agent who drove over the speed limit and caused an accident. When the motive for speeding was described as moral (to hide a gift for his parents' anniversary), participants assigned less responsibility to the agent than when the motive was immoral (to hide a vial of cocaine). Even though the causal attribution of the accident may technically fall under the domain of objective, factual understanding of the event, it was nevertheless significantly affected by the perceived intention of the agent (which was presumed to have determined the participants' motivation to praise or blame him).

Another paper, by Simon, Stenstrom, and Read (2015, Studies 3 and 4), used a more comprehensive paradigm that measures various aspects of participants' interpretation of a moral event, including factual inferences, emotional attitude toward agents, and motivations toward the outcome of the decision. Participants read about a case involving purported academic misconduct and were asked to role-play as a judicial officer who must provide a verdict. A student named Debbie had been accused of cheating in an exam, but the overall situation of the incident was kept ambiguous to allow participants to reason in a desired direction. The researchers then attempted to manipulate participants' motivation to support either the university (conclude that she cheated) or Debbie (she did not cheat) in the case. In one condition, the scenario stressed that, through previous incidents of cheating, the efforts of honest students had not been honored and the reputation of the university had suffered (Study 4, Pro-University condition); in another condition, the scenario stated that Debbie's brother had died in a tragic accident a few months earlier, eliciting participants' motivation to support and sympathize with Debbie (Study 3, Pro-Debbie condition). Behavioral and computer simulation results showed an overall shift in reasoning (factual inference, emotional attitude, and moral decision) depending on the manipulated motivation. That is, when the motivation to favor the university or Debbie was elicited, participants' holistic understanding and interpretation of the incident shifted in the way that favored the university or Debbie. In these reasoning processes, situational ambiguity was shown to be critical for reasoners to arrive at their preferred conclusion.

From a broader perspective, Holyoak and Powell interpreted motivated reasoning in the moral domain as a special pattern of reasoning predicted by the coherence-based reasoning framework. This general framework of cognition, initially theorized by the philosopher Paul Thagard, argues that many complex, higher-order cognitive functions are made possible by computing the coherence (or satisfying the constraints) between psychological representations such as concepts, beliefs, and emotions. The coherence-based reasoning framework draws symmetrical links between consistent (things that co-occur) and inconsistent (things that do not co-occur) psychological representations and uses them as constraints, thereby providing a natural way to represent conflicts between irreconcilable motivations, observations, behaviors, beliefs, and attitudes, as well as moral obligations. Importantly, Thagard's framework was highly comprehensive in that it provided a computational basis for modeling reasoning processes using moral and non-moral facts and beliefs as well as variables related to both 'hot' and 'cold' cognitions. A toy sketch of such a constraint network follows.
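A toy settling network in this spirit, using the Debbie scenario above; the nodes, weights, and update rule are invented for illustration and are not taken from Thagard's actual models:

    # Representations are nodes; consistent pairs get positive links,
    # inconsistent pairs negative links; activations settle iteratively.
    links = {
        ("cheated", "evidence_of_cheating"): +1.0,
        ("cheated", "sympathy_for_debbie"):  -1.0,
        ("innocent", "sympathy_for_debbie"): +1.0,
        ("innocent", "cheated"):             -1.0,
    }
    nodes = {"cheated": 0.0, "innocent": 0.0,
             "evidence_of_cheating": 0.3,    # weak factual evidence
             "sympathy_for_debbie": 0.8}     # strong motivation

    def other(pair, n):
        return pair[1] if pair[0] == n else pair[0]

    for _ in range(100):  # synchronous settling toward a fixed point
        nodes = {n: max(-1.0, min(1.0, nodes[n] + 0.1 * sum(
                     w * nodes[other(pair, n)]
                     for pair, w in links.items() if n in pair)))
                 for n in nodes}

    print(nodes)  # the motivated 'sympathy' node drags 'innocent' up and
                  # 'cheated' down, illustrating top-down, motivated reasoning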

Causality and intentionality

Classical theories of social perception had been offered by psychologists including Fritz Heider (model of intentional action) and Harold Kelley (attribution theory). These theories highlighted how laypeople understand another person's action based on their causal knowledge of internal (intention and ability of the actor) and external (environment) factors surrounding that action. That is, people assume a causal relationship between an actor's disposition or mental states (personality, intention, desire, belief, ability; internal cause), the environment (external cause), and the resulting action (effect). In later studies, psychologists discovered that moral judgment toward an action or actor is critically linked with this causal understanding and with knowledge about the mental state of the actor.

Bertram Malle and Joshua Knobe conducted survey studies to investigate laypeople's understanding and use (the folk concept) of the word 'intentionality' and its relation to action. Their data suggested that people think of the intentionality of an action in terms of several psychological constituents: desire for the outcome, belief about the expected outcome, intention to act (a combination of desire and belief), skill to bring about the outcome, and awareness of the action while performing it. Consistent with this view, as well as with our moral intuitions, studies found significant effects of the agent's intention, desire, and beliefs on various types of moral judgments. Using factorial designs to manipulate the content of the scenarios, Cushman showed that the agent's belief and desire regarding a harmful action significantly influenced judgments of wrongness, permissibility, punishment, and blame. However, whether the action actually brought about the negative consequence affected only blame and punishment judgments, not wrongness and permissibility judgments. Another study also provided neuroscientific evidence for the interplay between theory of mind and moral judgment.

Through another set of studies, Knobe showed a significant effect in the opposite direction: intentionality judgments are significantly affected by the reasoner's moral evaluation of the actor and action. In one of his scenarios, the CEO of a corporation hears about a new program designed to increase profits. However, the program is also expected to benefit or harm the environment as a side effect, to which he responds by saying 'I don't care'. The side effect was judged as intentional by the majority of participants in the harm condition, but the response pattern was reversed in the benefit condition.

Many studies on moral reasoning have used fictitious scenarios involving anonymous strangers (e.g., trolley problem) so that external factors irrelevant to researcher's hypothesis can be ruled out. However, criticisms have been raised about the external validity of the experiments in which the reasoners (participants) and the agent (target of judgment) are not associated in any way. As opposed to the previous emphasis on evaluation of acts, Pizarro and Tannenbaum stressed our inherent motivation to evaluate the moral characters of agents (e.g., whether an actor is good or bad), citing the Aristotelian virtue ethics. According to their view, learning the moral character of agents around us must have been a primary concern for primates and humans beginning from their early stages of evolution, because the ability to decide whom to cooperate with in a group was crucial to survival. Furthermore, observed acts are no longer interpreted separately from the context, as reasoners are now viewed as simultaneously engaging in two tasks: evaluation (inference) of moral character of agent and evaluation of her moral act. The person-centered approach to moral judgment seems to be consistent with results from some of the previous studies that involved implicit character judgment. For instance, in Alicke's (1992) study, participants may have immediately judged the moral character of the driver who sped home to hide cocaine as negative, and such inference led the participants to assess the causality surrounding the incident in a nuanced way (e.g., a person as immoral as him could have been speeding as well).

In order to account for laypeople's understanding and use of causal relations between psychological variables, Sloman, Fernbach, and Ewing proposed a causal model of intentionality judgment based on a Bayesian network. Their model formally postulates that the agent's character is a cause of the agent's desire for the outcome and belief that the action will result in the consequence, that desire and belief are causes of the intention toward the action, and that the agent's action is caused by both that intention and the skill to produce the consequence. Combining computational modeling with ideas from theory-of-mind research, this model can provide predictions for inferences in the bottom-up direction (from action to intentionality, desire, and character) as well as in the top-down direction (from character, desire, and intentionality to action). A toy version of such a network appears below.
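A toy Bayesian network in Python with the causal structure just described (character causes desire and belief; desire and belief cause intention; intention and skill cause action); all probabilities are made up for illustration, and inference is by brute-force enumeration rather than the authors' actual model:

    import itertools

    P_char = {True: 0.5, False: 0.5}        # P(bad character)
    P_desire = {True: 0.8, False: 0.2}      # P(desire | character)
    P_belief = {True: 0.7, False: 0.5}      # P(belief | character)
    P_skill = 0.9                           # P(skill)

    def p_intention(desire, belief):
        return 0.9 if (desire and belief) else 0.1

    def p_action(intention, skill):
        return 0.8 if (intention and skill) else 0.05

    def joint(char, desire, belief, skill, intention, action):
        p = P_char[char]
        p *= P_desire[char] if desire else 1 - P_desire[char]
        p *= P_belief[char] if belief else 1 - P_belief[char]
        p *= P_skill if skill else 1 - P_skill
        p *= p_intention(desire, belief) if intention else 1 - p_intention(desire, belief)
        p *= p_action(intention, skill) if action else 1 - p_action(intention, skill)
        return p

    # Bottom-up inference: P(intention | action observed), by enumeration.
    num = den = 0.0
    for char, desire, belief, skill, intention in itertools.product([True, False], repeat=5):
        p = joint(char, desire, belief, skill, intention, action=True)
        den += p
        if intention:
            num += p
    print(num / den)  # probability the action was intentional, given it occurred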

Gender difference

At one time psychologists believed that men and women have different moral values and reasoning. This was based on the idea that men and women often think differently and would react to moral dilemmas in different ways. Some researchers hypothesized that women would favor care reasoning, meaning that they would consider issues of need and sacrifice, while men would be more inclined to favor fairness and rights, which is known as justice reasoning. However, some also knew that men and women simply face different moral dilemmas on a day-to-day basis, and that this might be the reason for the perceived difference in their moral reasoning. With these two ideas in mind, researchers decided to base their experiments on moral dilemmas that both men and women face regularly. To reduce situational differences and discern how both genders use reason in their moral judgments, they ran the tests on parenting situations, since both genders can be involved in child rearing. The research showed that women and men use the same form of moral reasoning as one another; the only difference is the moral dilemmas they face on a day-to-day basis. When it came to moral decisions that both men and women would face, they often chose the same solution as the moral choice. This research suggests that a gender division in terms of morality does not actually exist, and that reasoning between genders is the same in moral decisions.

Argument technology

From Wikipedia, the free encyclopedia

Argument technology is a sub-field of artificial intelligence that focuses on applying computational techniques to the creation, identification, analysis, navigation, evaluation and visualisation of arguments and debates. In the 1980s and 1990s, philosophical theories of arguments in general, and argumentation theory in particular, were leveraged to handle key computational challenges, such as modeling non-monotonic and defeasible reasoning and designing robust coordination protocols for multi-agent systems. At the same time, mechanisms for computing the semantics of argumentation frameworks were introduced as a way of providing a calculus of opposition for computing what it is reasonable to believe in the context of conflicting arguments (a small sketch of one such semantics follows).
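For instance, a minimal sketch of Dung's grounded-extension semantics for an abstract argumentation framework; the arguments and attack relation are a made-up example:

    # Grounded extension: start from the empty set and repeatedly add every
    # argument all of whose attackers are themselves attacked (defeated) by
    # the current set, until nothing changes.
    arguments = {"a", "b", "c"}
    attacks = {("b", "a"), ("c", "b")}   # b attacks a; c attacks b

    def grounded_extension(arguments, attacks):
        extension = set()
        while True:
            acceptable = {
                arg for arg in arguments
                if all(any((d, attacker) in attacks for d in extension)
                       for (attacker, target) in attacks if target == arg)
            }
            if acceptable == extension:
                return extension
            extension = acceptable

    print(grounded_extension(arguments, attacks))
    # {'a', 'c'}: c is unattacked, and a is defended by c (which defeats b)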

With these foundations in place, the area was kick-started by a workshop held in the Scottish Highlands in 2000, the result of which was a book coauthored by philosophers of argument, rhetoricians, legal scholars and AI researchers. Since then, the area has been supported by various dedicated events such as the International Workshop on Computational Models of Natural Argument (CMNA) which has run annually since 2001; the International Workshop on Argument in Multi Agent Systems (ArgMAS) annually since 2004; the Workshop on Argument Mining, annually since 2014, and the Conference on Computational Models of Argument (COMMA), biennially since 2006. Since 2010, the field has also had its own journal, Argument & Computation, which was published by Taylor & Francis until 2016 and since then by IOS Press.

One of the challenges that argument technology faced was a lack of standardisation in the representation and underlying conception of argument in machine readable terms. Many different software tools for manual argument analysis, in particular, developed idiosyncratic and ad hoc ways of representing arguments which reflected differing underlying ways of conceiving of argumentative structure. This lack of standardisation also meant that there was no interchange between tools or between research projects, and little re-use of data resources that were often expensive to create. To tackle this problem, the Argument Interchange Format set out to establish a common standard that captured the minimal common features of argumentation which could then be extended in different settings.

Since about 2018, argument technology has been growing rapidly, with, for example, IBM's Grand Challenge, Project Debater, results for which were published in Nature in March 2021; German research funder, DFG's nationwide research programme on Robust Argumentation Machines, RATIO, begun in 2019; and UK nationwide deployment of The Evidence Toolkit by the BBC in 2019. A 2021 video narrated by Stephen Fry provides a summary of the societal motivations for work in argument technology.

Argument technology has applications in a variety of domains, including education, healthcare, policy making, political science, intelligence analysis and risk management and has a variety of sub-fields, methodologies and technologies.

Technologies

Argument assistant

An argument assistant is a software tool which supports users when writing arguments. Argument assistants can help users compose content and review one another's content, including in dialogical contexts. In addition to Web services, such functionality can be provided through the plugin architectures of word processor software or Web browsers. Internet forums, for instance, can be greatly enhanced by such software tools and services.

Argument blogging

ArguBlogging is software which allows its users to select portions of hypertext on webpages in their Web browsers and to agree or disagree with the selected content, posting their arguments to their blogs with linked argument data. It is implemented as a bookmarklet, adding functionality to Web browsers and interoperating with blogging platforms such as Blogger and Tumblr.

Argument mapping

Argument maps are visual, diagrammatic representations of arguments. Such visual diagrams facilitate diagrammatic reasoning and promote one's ability to grasp and to make sense of information rapidly and readily. Argument maps can provide structured, semi-formal frameworks for representing arguments using interactive visual language.

Argument mining

Argument mining, or argumentation mining, is a research area within the natural language processing field. The goal of argument mining is the automatic extraction and identification of argumentative structures from natural language text with the aid of computer programs.
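A deliberately naive sketch of the task in Python; real argument-mining systems use trained statistical sequence and relation classifiers, and the discourse-marker lists and example sentence here are invented for illustration:

    import re

    # Guess which clauses are premises and which are claims from the
    # discourse marker that opens each clause.
    PREMISE_MARKERS = ("because", "since", "as")
    CLAIM_MARKERS = ("therefore", "thus", "so", "hence")

    def label_clauses(text):
        clauses = [c.strip() for c in re.split(r"[,;.]", text) if c.strip()]
        labeled = []
        for clause in clauses:
            first = clause.lower().split()[0]
            if first in PREMISE_MARKERS:
                labeled.append(("premise", clause))
            elif first in CLAIM_MARKERS:
                labeled.append(("claim", clause))
            else:
                labeled.append(("unlabeled", clause))
        return labeled

    text = ("Because the data was anonymized, consent was not required; "
            "therefore the study was ethical.")
    for role, clause in label_clauses(text):
        print(role, "->", clause)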

Argument search

An argument search engine is a search engine that is given a topic as a user query and returns a list of arguments for and against the topic or about that topic. Such engines could be used to support informed decision-making or to help debaters prepare for debates.

Automated argumentative essay scoring

The goal of automated argumentative essay scoring systems is to assist students in improving their writing skills by measuring the quality of their argumentative content.

Debate technology

Debate technology focuses on human-machine interaction and in particular providing systems that support, monitor and engage in debate. One of the most high-profile examples of debating technology is IBM's Project Debater which combines scripted communication with very large-scale processing of news articles to identify and construct arguments on the fly in a competitive debating setting. Debating technology also encompasses tools aimed at providing insight into debates, typically using techniques from data science. These analytics have been developed in both academic and commercial settings.

Decision support system

Argument technology can reduce both individual and group biases and facilitate more accurate decisions. Argument-based decision support systems do so by helping users to distinguish between claims and the evidence supporting them, to express their confidence in competing claims, and to evaluate the strength of the evidence for those claims. They have been used to improve predictions of housing market trends, and in risk analysis and in ethical and legal decision making.

Ethical decision support system

An ethical decision support system is a decision support system which supports users in moral reasoning and decision-making.

Legal decision support system

A legal decision support system is a decision support system which supports users in legal reasoning and decision-making.

Explainable artificial intelligence

An explainable or transparent artificial intelligence system is an artificial intelligence system whose actions can be easily understood by humans.

Intelligent tutoring system

An intelligent tutoring system is a computer system that aims to provide immediate and customized instruction or feedback to learners, usually without requiring intervention from a human teacher. The intersection of argument technology and intelligent tutoring systems includes computer systems which aim to provide instruction in: critical thinking, argumentation, ethics, law, mathematics, and philosophy.

Legal expert system

A legal expert system is a domain-specific expert system that uses artificial intelligence to emulate the decision-making abilities of a human expert in the field of law.

Machine ethics

Machine ethics is a part of the ethics of artificial intelligence concerned with the moral behavior of artificially intelligent beings. As humans argue with respect to morality and moral behavior, argument can be envisioned as a component of machine ethics systems and moral reasoning components.

Proof assistant

In computer science and mathematical logic, a proof assistant or interactive theorem prover is a software tool to assist with the development of formal proofs by human-machine collaboration. This involves some sort of interactive proof editor, or other interface, with which a human can guide the search for proofs, the details of which are stored in, and some steps provided by, a computer.
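For flavor, two one-line theorems in Lean 4 syntax; the assistant checks each proof term, and Nat.add_comm is a lemma from Lean's standard library:

    -- `rfl` proves equalities that hold by direct computation.
    theorem two_plus_two : 2 + 2 = 4 := rfl

    -- An existing library lemma can be applied as a proof term.
    theorem add_comm_example (m n : Nat) : m + n = n + m := Nat.add_comm m n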

Ethical considerations

Ethical considerations of argument technology include privacy, transparency, societal concerns, and diversity in representation. These factors cut across different levels such as technology, user interface design, user, service context, and society.
