
Thursday, March 9, 2023

Computer-assisted proof

From Wikipedia, the free encyclopedia

A computer-assisted proof is a mathematical proof that has been at least partially generated by computer.

Most computer-aided proofs to date have been implementations of large proofs-by-exhaustion of a mathematical theorem. The idea is to use a computer program to perform lengthy computations, and to provide a proof that the result of these computations implies the given theorem. In 1976, the four color theorem was the first major theorem to be verified using a computer program.

Attempts have also been made in the area of artificial intelligence research to create smaller, explicit, new proofs of mathematical theorems from the bottom up using automated reasoning techniques such as heuristic search. Such automated theorem provers have proved a number of new results and found new proofs for known theorems. Additionally, interactive proof assistants allow mathematicians to develop human-readable proofs which are nonetheless formally verified for correctness. Since these proofs are generally human-surveyable (albeit with difficulty, as with the proof of the Robbins conjecture), they do not share the controversial implications of computer-aided proofs-by-exhaustion.

Methods

One method for using computers in mathematical proofs is by means of so-called validated numerics or rigorous numerics. This means computing numerically yet with mathematical rigour. One uses set-valued arithmetic and the inclusion principle in order to ensure that the set-valued output of a numerical program encloses the solution of the original mathematical problem. This is done by controlling, enclosing and propagating round-off and truncation errors using for example interval arithmetic. More precisely, one reduces the computation to a sequence of elementary operations, say (+, −, ×, /). In a computer, the result of each elementary operation is rounded off by the computer precision. However, one can construct an interval provided by upper and lower bounds on the result of an elementary operation. Then one proceeds by replacing numbers with intervals and performing elementary operations between such intervals of representable numbers.
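
As a minimal illustrative sketch of the idea (not any particular validated-numerics library; the Interval class and ulp-widening strategy here are assumptions for illustration), each quantity is represented as an interval whose bounds are rounded outward after every elementary operation, so that the true result is always enclosed:

    import math

    class Interval:
        """A closed interval [lo, hi] that encloses an exact real result."""
        def __init__(self, lo, hi):
            self.lo, self.hi = lo, hi

        def __add__(self, other):
            # Widen each bound outward by one ulp (math.nextafter, Python 3.9+):
            # crude but sound, since the exact sum stays inside the result.
            return Interval(math.nextafter(self.lo + other.lo, -math.inf),
                            math.nextafter(self.hi + other.hi, math.inf))

        def __mul__(self, other):
            # The exact product lies between the min and max corner products.
            corners = [self.lo * other.lo, self.lo * other.hi,
                       self.hi * other.lo, self.hi * other.hi]
            return Interval(math.nextafter(min(corners), -math.inf),
                            math.nextafter(max(corners), math.inf))

        def __repr__(self):
            return f"[{self.lo!r}, {self.hi!r}]"

    # An interval guaranteed to contain the exact value 1/3:
    one_third = Interval(math.nextafter(1/3, 0.0), math.nextafter(1/3, 1.0))
    print(one_third + one_third + one_third)  # guaranteed to contain 1

Dedicated interval-arithmetic libraries implement the same enclosure principle more tightly, typically with directed rounding modes rather than per-operation ulp-widening.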

Philosophical objections

Computer-assisted proofs are the subject of some controversy in the mathematical world, with Thomas Tymoczko first to articulate objections. Those who adhere to Tymoczko's arguments believe that lengthy computer-assisted proofs are not, in some sense, 'real' mathematical proofs because they involve so many logical steps that they are not practically verifiable by human beings, and that mathematicians are effectively being asked to replace logical deduction from assumed axioms with trust in an empirical computational process, which is potentially affected by errors in the computer program, as well as defects in the runtime environment and hardware.

Other mathematicians believe that lengthy computer-assisted proofs should be regarded as calculations, rather than proofs: the proof algorithm itself should be proved valid, so that its use can then be regarded as a mere "verification". Arguments that computer-assisted proofs are subject to errors in their source programs, compilers, and hardware can be resolved by providing a formal proof of correctness for the computer program (an approach which was successfully applied to the four-color theorem in 2005) as well as replicating the result using different programming languages, different compilers, and different computer hardware.

Another possible way of verifying computer-aided proofs is to generate their reasoning steps in a machine-readable form, and then use a proof checker program to demonstrate their correctness. Since validating a given proof is much easier than finding a proof, the checker program is simpler than the original assistant program, and it is correspondingly easier to gain confidence in its correctness. However, this approach of using a computer program to prove the output of another program correct does not appeal to computer proof skeptics, who see it as adding another layer of complexity without addressing the perceived need for human understanding.
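
To make concrete why a checker can be so much simpler than a prover, here is a toy sketch (the proof format and check_proof helper are hypothetical, for illustration only; real checkers verify far richer logics) of a checker whose only inference rule is modus ponens. It merely verifies each step against what has already been established; it never searches for proofs:

    def check_proof(axioms, steps):
        """Validate a toy proof. Formulas are strings or nested tuples;
        an implication P -> Q is written ('->', P, Q). Each step is
        (formula, justification), where justification is 'axiom' or
        'mp' (modus ponens)."""
        known = set(axioms)
        for formula, justification in steps:
            if justification == "axiom":
                ok = formula in axioms
            elif justification == "mp":
                # Q is justified if some known P and ('->', P, Q) exist.
                ok = any(("->", p, formula) in known for p in known)
            else:
                ok = False
            if not ok:
                return False
            known.add(formula)
        return True

    axioms = {"A", ("->", "A", "B"), ("->", "B", "C")}
    proof = [("A", "axiom"), ("B", "mp"), ("C", "mp")]
    print(check_proof(axioms, proof))  # True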

Another argument against computer-aided proofs is that they lack mathematical elegance—that they provide no insights or new and useful concepts. In fact, this is an argument that could be advanced against any lengthy proof by exhaustion.

An additional philosophical issue raised by computer-aided proofs is whether they make mathematics into a quasi-empirical science, where the scientific method becomes more important than the application of pure reason in the area of abstract mathematical concepts. This directly relates to the argument within mathematics as to whether mathematics is based on ideas, or "merely" an exercise in formal symbol manipulation. It also raises the question whether, if according to the Platonist view all possible mathematical objects in some sense "already exist", computer-aided mathematics is an observational science like astronomy, rather than an experimental one like physics or chemistry. This controversy within mathematics is occurring at the same time as questions are being asked in the physics community about whether twenty-first century theoretical physics is becoming too mathematical, and leaving behind its experimental roots.

The emerging field of experimental mathematics is confronting this debate head-on by focusing on numerical experiments as its main tool for mathematical exploration.

Applications

Theorems proved with the help of computer programs

Inclusion in this list does not imply that a formal computer-checked proof exists, but rather, that a computer program has been involved in some way. See the main articles for details.

Theorems for sale

In 2010, academics at the University of Edinburgh offered people the chance to "buy their own theorem" created through a computer-assisted proof. The new theorem would be named after the purchaser. This service appears to be no longer available.

Moral reasoning

From Wikipedia, the free encyclopedia

Moral reasoning is the study of how people think about right and wrong and how they acquire and apply moral rules. It is a subdiscipline of moral psychology that overlaps with moral philosophy, and is the foundation of descriptive ethics.

Description

Starting from a young age, people can make moral decisions about what is right and wrong. Moral reasoning, however, is a part of morality that occurs both within and between individuals. Prominent contributors to this theory include Lawrence Kohlberg and Elliot Turiel. The term is sometimes used in a different sense: reasoning under conditions of uncertainty, such as those that commonly obtain in a court of law. It is this sense that gave rise to the phrase "to a moral certainty"; however, this idea is now seldom used outside of charges to juries.

Moral reasoning is an important and often daily process that people use when trying to do the right thing. For instance, every day people are faced with the dilemma of whether or not to lie in a given situation. People make this decision by reasoning about the morality of their potential actions and by weighing those actions against their potential consequences.

A moral choice can be a personal, economic, or ethical one, as described by some ethical code or as regulated by ethical relationships with others. This branch of psychology is concerned with how these issues are perceived by ordinary people, and so is the foundation of descriptive ethics. There are many different forms of moral reasoning, which are often dictated by culture. Cultural differences in the high-level cognitive functions associated with moral reasoning can be observed through the association of brain networks in people from various cultures during moral decision making. These cultural differences demonstrate the neural basis that cultural influences can have on an individual's moral reasoning and decision making.

Distinctions between theories of moral reasoning can be accounted for by evaluating inferences (which tend to be either deductive or inductive) based on a given set of premises. Deductive inference reaches a conclusion that is true provided that the premises preceding it are also true, whereas inductive inference goes beyond the information given in a set of premises to base the conclusion on provoked reflection.

In philosophy

Philosopher David Hume claims that morality is based more on perceptions than on logical reasoning. This means that people's morality is based more on their emotions and feelings than on a logical analysis of any given situation. Hume regards morals as linked to passion, love, happiness, and other emotions and therefore not based on reason. Jonathan Haidt agrees, arguing in his social intuitionist model that reasoning concerning a moral situation or idea follows an initial intuition. Haidt's fundamental stance on moral reasoning is that "moral intuitions (including moral emotions) come first and directly cause moral judgments"; he characterizes moral intuition as "the sudden appearance in consciousness of a moral judgment, including an affective valence (good-bad, like-dislike), without any conscious awareness of having gone through steps of searching, weighing evidence, or inferring a conclusion".

Immanuel Kant had a radically different view of morality. In his view, there are universal laws of morality that one should never break, regardless of emotions. He proposes a four-step system to determine whether or not a given action is moral based on logic and reason. The first step of this method involves formulating "a maxim capturing your reason for an action". In the second step, one "frame[s] it as a universal principle for all rational agents". The third step is assessing "whether a world based on this universal principle is conceivable". If it is, then the fourth step is asking oneself "whether [one] would will the maxim to be a principle in this world". In essence, an action is moral if the maxim by which it is justified is one which could be universalized. For instance, when deciding whether or not to lie to someone for one's own advantage, one is meant to imagine what the world would be like if everyone always lied, and successfully so. In such a world, there would be no purpose in lying, for everybody would expect deceit, rendering the universal maxim of lying whenever it is to your advantage absurd. Thus, Kant argues that one should not lie under any circumstance. Another example is deciding whether suicide is moral or immoral: imagine if everyone committed suicide. Since mass suicide would not be a good thing, the act of suicide is immoral. Kant's moral framework, however, operates under the overarching maxim that you should treat each person as an end in themselves, not as a means to an end. This overarching maxim must be considered when applying the four aforementioned steps.

Reasoning based on analogy is one form of moral reasoning. When using this form of moral reasoning, the morality of one situation can be applied to another based on whether the situations are relevantly similar: similar enough that the same moral reasoning applies. A similar type of reasoning is used in common law when arguing on the basis of legal precedent.

In consequentialism (often distinguished from deontology), actions are judged as right or wrong based upon the consequences of the action, as opposed to a property intrinsic to the action itself.

In developmental psychology

Moral reasoning first attracted broad attention from developmental psychologists in the mid-to-late 20th century. Their main theorization involved elucidating the stages of development of moral reasoning capacity.

Jean Piaget

Jean Piaget identified two phases of moral development, one common among children and the other common among adults. The first is known as the Heteronomous Phase. This phase, more common among children, is characterized by the idea that rules come from authority figures in one's life, such as parents, teachers, and God. It also involves the idea that rules are permanent, no matter what. Thirdly, this phase of moral development includes the belief that "naughty" behavior must always be punished and that the punishment will be proportional.

The second phase in Piaget's theory of moral development is referred to as the Autonomous Phase. This phase is more common after one has matured and is no longer a child. In this phase people begin to view the intentions behind actions as more important than their consequences. For instance, if a person who is driving swerves in order not to hit a dog and then knocks over a road sign, adults are likely to be less angry with the person than if he or she had done it on purpose, just for fun. Even though the outcome is the same, people are more forgiving because of the good intention of saving the dog. This phase also includes the idea that people have different morals and that morality is not necessarily universal. People in the Autonomous Phase also believe rules may be broken under certain circumstances. For instance, Rosa Parks refused to give up her seat on a bus, which was against the law but something many people consider moral nonetheless. In this phase people also stop believing in the idea of immanent justice.

Lawrence Kohlberg

Inspired by Piaget, Lawrence Kohlberg made significant contributions to the field of moral reasoning by creating a theory of moral development. His theory is a "widely accepted theory that provides the basis for empirical evidence on the influence of human decision making on ethical behavior." In Kohlberg's view, moral development consists of the growth of less egocentric and more impartial modes of reasoning about more complicated matters. He believed that the objective of moral education is to encourage children to grow from one stage to the next, and he emphasized presenting children with moral dilemmas as well as with opportunities to cooperate. According to his theory, people pass through three main levels of moral development as they grow from early childhood to adulthood: pre-conventional morality, conventional morality, and post-conventional morality. Each of these is subdivided into two stages.

The first stage in the pre-conventional level is obedience and punishment. In this stage people, usually young children, avoid certain behaviors only because of the fear of punishment, not because they see them as wrong. The second stage in the pre-conventional level is called individualism and exchange: in this stage people make moral decisions based on what best serves their needs.

The third stage is part of the conventional morality level and is called interpersonal relationships. In this stage people try to conform to what is considered moral by the society in which they live, attempting to be seen by peers as good people. The fourth stage is also in the conventional morality level and is called maintaining social order. This stage focuses on a view of society as a whole and on following its laws and rules.

The fifth stage is a part of the post-conventional level and is called social contract and individual rights. In this stage people begin to consider differing ideas about morality in other people and feel that rules and laws should be agreed on by the members of a society. The sixth and final stage of moral development, the second in the post-conventional level, is called universal principles. At this stage people begin to develop their ideas of universal moral principles and will consider them the right thing to do regardless of what the laws of a society are.

James Rest

In 1983, James Rest developed the four-component Model of Morality, which addresses the ways that moral motivation and behavior occur. The first component is moral sensitivity, which is "the ability to see an ethical dilemma, including how our actions will affect others". The second is moral judgment, which is "the ability to reason correctly about what 'ought' to be done in a specific situation". The third is moral motivation, which is "a personal commitment to moral action, accepting responsibility for the outcome". The fourth and final component of moral behavior is moral character, which is a "courageous persistence in spite of fatigue or temptations to take the easy way out".

In social cognition

Based on empirical results from behavioral and neuroscientific studies, social and cognitive psychologists attempted to develop a more accurate descriptive (rather than normative) theory of moral reasoning. That is, the emphasis of research was on how real-world individuals made moral judgments, inferences, decisions, and actions, rather than on what should be considered moral.

Dual-process theory and social intuitionism

Developmental theories of moral reasoning were critiqued for prioritizing the maturation of the cognitive aspect of moral reasoning. From Kohlberg's perspective, one is considered more advanced in moral reasoning the more efficiently one uses deductive reasoning and abstract moral principles to make moral judgments about particular instances. For instance, an advanced reasoner may reason syllogistically from the Kantian principle of 'treat individuals as ends and never merely as means' and a situation in which kidnappers are demanding a ransom for a hostage, to conclude that the kidnappers have violated a moral principle and should be condemned. In this process, reasoners are assumed to be rational and to have conscious control over how they arrive at judgments and decisions.

In contrast with this view, however, Joshua Greene and colleagues argued that laypeople's moral judgments are significantly influenced, if not shaped, by intuition and emotion as opposed to rational application of rules. In their fMRI studies in the early 2000s, participants were shown three types of decision scenarios: one type included moral dilemmas that elicited emotional reaction (the moral-personal condition), the second type included moral dilemmas that did not elicit emotional reaction (the moral-impersonal condition), and the third type had no moral content (the non-moral condition). Brain regions such as the posterior cingulate gyrus and the angular gyrus, whose activation is known to correlate with the experience of emotion, showed activation in the moral-personal condition but not in the moral-impersonal condition. Meanwhile, regions known to correlate with working memory, including the right middle frontal gyrus and the bilateral parietal lobe, were less active in the moral-personal condition than in the moral-impersonal condition. Moreover, participants' neural activity in response to moral-impersonal scenarios was similar to their activity in response to non-moral decision scenarios.

Another study used variants of the trolley problem that differed in the 'personal/impersonal' dimension and surveyed people's permissibility judgments. Across scenarios, participants were presented with the option of sacrificing a person to save five people. However, depending on the scenario, the sacrifice involved pushing a person off a footbridge to block the trolley (the footbridge dilemma; personal) or simply throwing a switch to redirect the trolley (the trolley dilemma; impersonal). The proportions of participants who judged the sacrifice as permissible differed drastically: 11% (footbridge dilemma) vs. 89% (trolley dilemma). This difference was attributed to the emotional reaction evoked by having to apply personal force to the victim, rather than simply throwing a switch without physical contact with the victim. Among participants who judged the sacrifice in the trolley dilemma as permissible but the sacrifice in the footbridge dilemma as impermissible, the majority failed to provide a plausible justification for their differing judgments. Several philosophers have written critical responses to Joshua Greene and colleagues on this matter.

Based on these results, social psychologists proposed the dual-process theory of morality. They suggested that our emotional intuition and deliberate reasoning are not only qualitatively distinct, but that they also compete in making moral judgments and decisions. When making an emotionally salient moral judgment, an automatic, unconscious, and immediate response is produced first by intuition. More careful, deliberate, and formal reasoning then follows, producing a response that is either consistent or inconsistent with the earlier intuitive response, in parallel with the more general dual-process theory of thinking. But in contrast with the earlier rational view of moral reasoning, the dominance of the emotional process over the rational process was proposed. Haidt highlighted the aspect of morality not directly accessible by conscious search in memory, weighing of evidence, or inference. He describes moral judgment as akin to aesthetic judgment, in which an instant approval or disapproval of an event or object is produced upon perception. Hence, once produced, the immediate intuitive response toward a situation or person cannot easily be overridden by the rational consideration that follows. The theory explained that in many cases people resolve inconsistency between the intuitive and rational processes by using the latter for post-hoc justification of the former. Haidt, using the metaphor "the emotional dog and its rational tail", applied this view of reasoning to contexts ranging from person perception to politics.

A notable illustration of the influence of intuition involves the feeling of disgust. According to Haidt's moral foundations theory, political liberals rely on two dimensions (harm/care and fairness/reciprocity) of evaluation to make moral judgments, whereas conservatives utilize three additional dimensions (ingroup/loyalty, authority/respect, and purity/sanctity). Among these, studies have revealed a link between moral evaluations based on the purity/sanctity dimension and the reasoner's experience of disgust. That is, people with higher sensitivity to disgust were more likely to be conservative toward political issues such as gay marriage and abortion. Moreover, when the researchers reminded participants to keep the lab clean and to wash their hands with antiseptics (thereby priming the purity/sanctity dimension), participants' attitudes were more conservative than in the control condition. However, Helzer and Pizarro's findings have been challenged by two failed replication attempts.

Other studies raised criticism toward Haidt's interpretation of his data. Augusto Blasi also rebuts the theories of Jonathan Haidt on moral intuition and reasoning. He agrees with Haidt that moral intuition plays a significant role in the way humans operate. However, Blasi suggests that people use moral reasoning more than Haidt and other cognitive scientists claim. Blasi advocates moral reasoning and reflection as the foundation of moral functioning. Reasoning and reflection play a key role in the growth of an individual and the progress of societies.

Alternatives to these dual-process/intuitionist models have been proposed, with several theorists proposing that moral judgment and moral reasoning involve domain-general cognitive processes, e.g., mental models, social learning, or categorization processes.

Motivated reasoning

A theorization of moral reasoning similar to dual-process theory was put forward with an emphasis on our motivations to arrive at certain conclusions. Ditto and colleagues likened moral reasoners in everyday situations to lay attorneys rather than lay judges: people do not reason in the direction from assessment of individual evidence to a moral conclusion (bottom-up), but from a preferred moral conclusion to assessment of the evidence (top-down). The former resembles the thought process of a judge who is motivated to be accurate, unbiased, and impartial in her decisions; the latter resembles that of an attorney whose goal is to win a dispute using partial and selective arguments.

Kunda proposed motivated reasoning as a general framework for understanding human reasoning. She emphasized the broad influence of physiological arousal, affect, and preference (which constitute the essence of motivation and cherished beliefs) on our general cognitive processes, including memory search and belief construction. Importantly, biases in memory search, hypothesis formation, and evaluation result in confirmation bias, making it difficult for reasoners to critically assess their beliefs and conclusions. Individuals and groups can exploit this tendency: where self-control is lacking, confirmation bias becomes the driving force of reasoning. Media outlets, governments, extremist groups, and cults may downplay certain variables and omit relevant context in order to make an agenda appear reasonable, thereby steering individuals, groups, or entire populations. Substituting a fictional narrative and fringe content for real evidence in this way undermines the dependability of reasoning and the formation of an honest and logical assessment.

Applied to the moral domain, our strong motivation to favor people we like leads us to recollect beliefs and interpret facts in ways that favor them. In Alicke (1992, Study 1), participants made responsibility judgments about an agent who drove over the speed limit and caused an accident. When the motive for speeding was described as moral (to hide a gift for his parents' anniversary), participants assigned less responsibility to the agent than when the motive was immoral (to hide a vial of cocaine). Even though the causal attribution of the accident may technically fall under the domain of objective, factual understanding of the event, it was nevertheless significantly affected by the perceived intention of the agent (which was presumed to have determined the participants' motivation to praise or blame him).

Another paper, by Simon, Stenstrom, and Read (2015, Studies 3 and 4), used a more comprehensive paradigm that measures various aspects of participants' interpretation of a moral event, including factual inferences, emotional attitude toward agents, and motivations toward the outcome of the decision. Participants read about a case involving purported academic misconduct and were asked to role-play as a judicial officer who must provide a verdict. A student named Debbie had been accused of cheating in an exam, but the overall situation of the incident was kept ambiguous to allow participants to reason in a desired direction. The researchers then attempted to manipulate participants' motivation to support either the university (conclude that she cheated) or Debbie (she did not cheat). In one condition, the scenario stressed that, through previous incidents of cheating, the efforts of honest students had not been honored and the reputation of the university had suffered (Study 4, Pro-University condition); in another condition, the scenario stated that Debbie's brother had died in a tragic accident a few months earlier, eliciting participants' motivation to support and sympathize with Debbie (Study 3, Pro-Debbie condition). Behavioral and computer simulation results showed an overall shift in reasoning (factual inference, emotional attitude, and moral decision) depending on the manipulated motivation. That is, when the motivation to favor the university or Debbie was elicited, participants' holistic understanding and interpretation of the incident shifted in the way that favored that party. In these reasoning processes, situational ambiguity was shown to be critical for reasoners to arrive at their preferred conclusion.

From a broader perspective, Holyoak and Powell interpreted motivated reasoning in the moral domain as a special pattern of reasoning predicted by the coherence-based reasoning framework. This general framework of cognition, initially theorized by the philosopher Paul Thagard, argues that many complex, higher-order cognitive functions are made possible by computing the coherence (or satisfying the constraints) between psychological representations such as concepts, beliefs, and emotions. The coherence-based reasoning framework draws symmetrical links between consistent (things that co-occur) and inconsistent (things that do not co-occur) psychological representations and uses them as constraints, thereby providing a natural way to represent conflicts between irreconcilable motivations, observations, behaviors, beliefs, and attitudes, as well as moral obligations. Importantly, Thagard's framework was highly comprehensive in that it provided a computational basis for modeling reasoning processes using moral and non-moral facts and beliefs, as well as variables related to both 'hot' and 'cold' cognitions.
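
The core computation can be sketched in a few lines (a brute-force illustration only; Thagard's own models use connectionist and other approximation algorithms, and the elements and weights below are invented for the example). Each element is accepted or rejected so as to maximize the total weight of satisfied constraints:

    from itertools import product

    def max_coherence(elements, positive, negative):
        """Accept or reject each element so that the total weight of
        satisfied constraints is maximal. A positive constraint between
        two elements is satisfied when both have the same status; a
        negative constraint when their statuses differ."""
        best, best_score = None, float("-inf")
        for assignment in product([True, False], repeat=len(elements)):
            status = dict(zip(elements, assignment))
            score = sum(w for (a, b), w in positive.items()
                        if status[a] == status[b])
            score += sum(w for (a, b), w in negative.items()
                         if status[a] != status[b])
            if score > best_score:
                best, best_score = status, score
        return best, best_score

    elements = ["verdict: guilty", "eyewitness report", "alibi"]
    positive = {("verdict: guilty", "eyewitness report"): 2.0}  # cohere
    negative = {("verdict: guilty", "alibi"): 1.5}              # conflict
    print(max_coherence(elements, positive, negative))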

Causality and intentionality

Classical theories of social perception were offered by psychologists including Fritz Heider (the model of intentional action) and Harold Kelley (attribution theory). These theories highlighted how laypeople understand another person's action based on their causal knowledge of the internal (intention and ability of the actor) and external (environment) factors surrounding that action. That is, people assume a causal relationship between an actor's disposition or mental states (personality, intention, desire, belief, ability; internal causes), the environment (external cause), and the resulting action (effect). In later studies, psychologists discovered that moral judgment toward an action or actor is critically linked with this causal understanding and with knowledge about the mental state of the actor.

Bertram Malle and Joshua Knobe conducted survey studies to investigate laypeople's understanding and use of the word 'intentionality' and its relation to action (the folk concept of intentionality). Their data suggested that people think of the intentionality of an action in terms of several psychological constituents: desire for the outcome, belief about the expected outcome, intention to act (a combination of desire and belief), skill to bring about the outcome, and awareness of the action while performing it. Consistent with this view, as well as with our moral intuitions, studies have found significant effects of the agent's intention, desire, and beliefs on various types of moral judgments. Using factorial designs to manipulate the content of the scenarios, Cushman showed that the agent's belief and desire regarding a harmful action significantly influenced judgments of wrongness, permissibility, punishment, and blame. However, whether the action actually brought about the negative consequence affected only blame and punishment judgments, not wrongness and permissibility judgments. Another study also provided neuroscientific evidence for the interplay between theory of mind and moral judgment.

Through another set of studies, Knobe showed a significant effect in the opposite direction: intentionality judgments are significantly affected by the reasoner's moral evaluation of the actor and action. In one of his scenarios, the CEO of a corporation hears about a new program designed to increase profit. The program is also expected to benefit or harm the environment as a side effect, to which the CEO responds by saying 'I don't care'. The side effect was judged as intentional by the majority of participants in the harm condition, but the response pattern was reversed in the benefit condition.

Many studies on moral reasoning have used fictitious scenarios involving anonymous strangers (e.g., trolley problem) so that external factors irrelevant to researcher's hypothesis can be ruled out. However, criticisms have been raised about the external validity of the experiments in which the reasoners (participants) and the agent (target of judgment) are not associated in any way. As opposed to the previous emphasis on evaluation of acts, Pizarro and Tannenbaum stressed our inherent motivation to evaluate the moral characters of agents (e.g., whether an actor is good or bad), citing the Aristotelian virtue ethics. According to their view, learning the moral character of agents around us must have been a primary concern for primates and humans beginning from their early stages of evolution, because the ability to decide whom to cooperate with in a group was crucial to survival. Furthermore, observed acts are no longer interpreted separately from the context, as reasoners are now viewed as simultaneously engaging in two tasks: evaluation (inference) of moral character of agent and evaluation of her moral act. The person-centered approach to moral judgment seems to be consistent with results from some of the previous studies that involved implicit character judgment. For instance, in Alicke's (1992) study, participants may have immediately judged the moral character of the driver who sped home to hide cocaine as negative, and such inference led the participants to assess the causality surrounding the incident in a nuanced way (e.g., a person as immoral as him could have been speeding as well).

In order to account for laypeople's understanding and use of causal relations between psychological variables, Sloman, Fernbach, and Ewing proposed a causal model of intentionality judgment based on a Bayesian network. Their model formally postulates that the agent's character is a cause of the agent's desire for the outcome and belief that the action will result in the consequence, that desire and belief are causes of the intention toward the action, and that the agent's action is caused by both that intention and the skill to produce the consequence. Combining computational modeling with ideas from theory-of-mind research, this model can provide predictions for inferences in the bottom-up direction (from action to intentionality, desire, and character) as well as in the top-down direction (from character, desire, and intentionality to action).
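
A binary sketch of such a network, with inference by enumeration, is shown below. The network structure follows the description above, but the probability values are illustrative assumptions, not figures from the paper:

    from itertools import product

    # character -> desire, character -> belief, (desire, belief) -> intention,
    # (intention, skill) -> action. All probabilities are made up for the demo.
    p_character = 0.5
    p_desire = {True: 0.9, False: 0.2}    # P(desire | character)
    p_belief = {True: 0.8, False: 0.4}    # P(belief | character)
    p_intention = {(True, True): 0.95, (True, False): 0.30,
                   (False, True): 0.10, (False, False): 0.01}
    p_skill = 0.7
    p_action = {(True, True): 0.90, (True, False): 0.20,
                (False, True): 0.05, (False, False): 0.01}

    def joint(c, d, b, i, s, a):
        pr = p_character if c else 1 - p_character
        pr *= p_desire[c] if d else 1 - p_desire[c]
        pr *= p_belief[c] if b else 1 - p_belief[c]
        pr *= p_intention[(d, b)] if i else 1 - p_intention[(d, b)]
        pr *= p_skill if s else 1 - p_skill
        pr *= p_action[(i, s)] if a else 1 - p_action[(i, s)]
        return pr

    # Bottom-up inference, from an observed action back to intentionality:
    num = sum(joint(c, d, b, True, s, True)
              for c, d, b, s in product([True, False], repeat=4))
    den = sum(joint(c, d, b, i, s, True)
              for c, d, b, i, s in product([True, False], repeat=5))
    print(f"P(intention | action observed) = {num / den:.3f}")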

Gender difference

At one time psychologists believed that men and women have different moral values and reasoning. This was based on the idea that men and women often think differently and would react to moral dilemmas in different ways. Some researchers hypothesized that women would favor care reasoning, meaning that they would consider issues of need and sacrifice, while men would be more inclined to favor fairness and rights, known as justice reasoning. However, some also noted that men and women simply face different moral dilemmas on a day-to-day basis, and that this might be the reason for the perceived difference in their moral reasoning. With these two ideas in mind, researchers decided to run their experiments using moral dilemmas that both men and women face regularly. To reduce situational differences and discern how both genders use reason in their moral judgments, they ran the tests on parenting situations, since both genders can be involved in child rearing. The research showed that women and men use the same form of moral reasoning and that the only difference is the moral dilemmas they encounter on a day-to-day basis. When faced with moral decisions common to both, men and women often chose the same solution as the moral choice. This research suggests that a gender division in morality does not actually exist, and that reasoning between genders is the same in moral decisions.

Argument technology

From Wikipedia, the free encyclopedia

Argument technology is a sub-field of artificial intelligence that focuses on applying computational techniques to the creation, identification, analysis, navigation, evaluation and visualisation of arguments and debates. In the 1980s and 1990s, philosophical theories of argument in general, and argumentation theory in particular, were leveraged to handle key computational challenges, such as modeling non-monotonic and defeasible reasoning and designing robust coordination protocols for multi-agent systems. At the same time, mechanisms for computing the semantics of argumentation frameworks were introduced as a way of providing a calculus of opposition for computing what it is reasonable to believe in the context of conflicting arguments.
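
As a concrete illustration, in Dung-style abstract argumentation the grounded semantics can be computed as a least fixed point. A minimal sketch (the function name and set-based representation are illustrative, not any particular library's API):

    def grounded_extension(arguments, attacks):
        """Compute the grounded extension of an abstract argumentation
        framework in the style of Dung (1995). 'attacks' is a set of
        (attacker, target) pairs. Starting from the empty set, repeatedly
        collect every argument all of whose attackers are counter-attacked
        by the current set; iteration stops at the least fixed point."""
        extension = set()
        while True:
            acceptable = set()
            for a in arguments:
                attackers = {b for (b, t) in attacks if t == a}
                if all(any((d, b) in attacks for d in extension)
                       for b in attackers):
                    acceptable.add(a)
            if acceptable == extension:
                return extension
            extension = acceptable

    args = {"A", "B", "C"}
    attacks = {("A", "B"), ("B", "C")}
    print(grounded_extension(args, attacks))  # {'A', 'C'}: A defeats B, freeing C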

With these foundations in place, the area was kick-started by a workshop held in the Scottish Highlands in 2000, the result of which was a book coauthored by philosophers of argument, rhetoricians, legal scholars and AI researchers. Since then, the area has been supported by various dedicated events such as the International Workshop on Computational Models of Natural Argument (CMNA), which has run annually since 2001; the International Workshop on Argument in Multi-Agent Systems (ArgMAS), annually since 2004; the Workshop on Argument Mining, annually since 2014; and the Conference on Computational Models of Argument (COMMA), biennially since 2006. Since 2010, the field has also had its own journal, Argument & Computation, which was published by Taylor & Francis until 2016 and since then by IOS Press.

One of the challenges that argument technology faced was a lack of standardisation in the representation and underlying conception of argument in machine readable terms. Many different software tools for manual argument analysis, in particular, developed idiosyncratic and ad hoc ways of representing arguments which reflected differing underlying ways of conceiving of argumentative structure. This lack of standardisation also meant that there was no interchange between tools or between research projects, and little re-use of data resources that were often expensive to create. To tackle this problem, the Argument Interchange Format set out to establish a common standard that captured the minimal common features of argumentation which could then be extended in different settings.

Since about 2018, argument technology has been growing rapidly, with, for example, IBM's Grand Challenge, Project Debater, results for which were published in Nature in March 2021; the German research funder DFG's nationwide research programme on Robust Argumentation Machines (RATIO), begun in 2019; and the UK-wide deployment of The Evidence Toolkit by the BBC in 2019. A 2021 video narrated by Stephen Fry provides a summary of the societal motivations for work in argument technology.

Argument technology has applications in a variety of domains, including education, healthcare, policy making, political science, intelligence analysis and risk management and has a variety of sub-fields, methodologies and technologies.

Technologies

Argument assistant

An argument assistant is a software tool which supports users in writing arguments. Argument assistants can help users compose content and review one another's content, including in dialogical contexts. In addition to Web services, such functionality can be provided through the plugin architectures of word processors or Web browsers. Internet forums, for instance, can be greatly enhanced by such software tools and services.

Argument blogging

ArguBlogging is software which allows its users to select portions of hypertext on webpages in their Web browsers and to agree or disagree with the selected content, posting their arguments to their blogs with linked argument data. It is implemented as a bookmarklet, adding functionality to Web browsers and interoperating with blogging platforms such as Blogger and Tumblr.

Argument mapping

Argument maps are visual, diagrammatic representations of arguments. Such visual diagrams facilitate diagrammatic reasoning and promote one's ability to grasp and to make sense of information rapidly and readily. Argument maps can provide structured, semi-formal frameworks for representing arguments using interactive visual language.

Argument mining

Argument mining, or argumentation mining, is a research area within the natural language processing field. The goal of argument mining is the automatic extraction and identification of argumentative structures from natural language text with the aid of computer programs.
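
As a toy illustration of the task only (deployed argument-mining systems use trained statistical or neural models over rich linguistic features, not keyword matching), the following sketch tags sentences by discourse indicators:

    import re

    PREMISE_MARKERS = ("because", "since", "given that", "for example")
    CLAIM_MARKERS = ("therefore", "thus", "hence", "consequently")

    def label_sentences(text):
        """Tag each sentence as 'claim', 'premise', or 'none' using
        discourse indicators. A naive heuristic for illustration only."""
        labels = []
        for sentence in re.split(r"(?<=[.!?])\s+", text.strip()):
            lowered = sentence.lower()
            if any(marker in lowered for marker in CLAIM_MARKERS):
                labels.append((sentence, "claim"))
            elif any(marker in lowered for marker in PREMISE_MARKERS):
                labels.append((sentence, "premise"))
            else:
                labels.append((sentence, "none"))
        return labels

    text = ("Art must be an end in itself, because the artist does the best "
            "work in complete freedom. Therefore art suffers when it is "
            "turned into mere propaganda.")
    for sentence, label in label_sentences(text):
        print(f"{label:7s} {sentence}")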

Argument search

An argument search engine is a search engine that is given a topic as a user query and returns a list of arguments for and against the topic or about that topic. Such engines could be used to support informed decision-making or to help debaters prepare for debates.

Automated argumentative essay scoring

The goal of automated argumentative essay scoring systems is to assist students in improving their writing skills by measuring the quality of their argumentative content.

Debate technology

Debate technology focuses on human-machine interaction and in particular providing systems that support, monitor and engage in debate. One of the most high-profile examples of debating technology is IBM's Project Debater which combines scripted communication with very large-scale processing of news articles to identify and construct arguments on the fly in a competitive debating setting. Debating technology also encompasses tools aimed at providing insight into debates, typically using techniques from data science. These analytics have been developed in both academic and commercial settings.

Decision support system

Argument technology can reduce both individual and group biases and facilitate more accurate decisions. Argument-based decision support systems do so by helping users to distinguish between claims and the evidence supporting them, and to express their confidence in, and evaluate the strength of evidence for, competing claims. They have been used to improve predictions of housing market trends and to support risk analysis and ethical and legal decision making.

Ethical decision support system

An ethical decision support system is a decision support system which supports users in moral reasoning and decision-making.

Legal decision support system

A legal decision support system is a decision support system which supports users in legal reasoning and decision-making.

Explainable artificial intelligence

An explainable or transparent artificial intelligence system is an artificial intelligence system whose actions can be easily understood by humans.

Intelligent tutoring system

An intelligent tutoring system is a computer system that aims to provide immediate and customized instruction or feedback to learners, usually without requiring intervention from a human teacher. The intersection of argument technology and intelligent tutoring systems includes computer systems which aim to provide instruction in: critical thinking, argumentation, ethics, law, mathematics, and philosophy.

Legal expert system

A legal expert system is a domain-specific expert system that uses artificial intelligence to emulate the decision-making abilities of a human expert in the field of law.

Machine ethics

Machine ethics is a part of the ethics of artificial intelligence concerned with the moral behavior of artificially intelligent beings. As humans argue with respect to morality and moral behavior, argument can be envisioned as a component of machine ethics systems and moral reasoning components.

Proof assistant

In computer science and mathematical logic, a proof assistant or interactive theorem prover is a software tool to assist with the development of formal proofs by human-machine collaboration. This involves some sort of interactive proof editor, or other interface, with which a human can guide the search for proofs, the details of which are stored in, and some steps provided by, a computer.
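
For instance, in the Lean proof assistant the user states a theorem and supplies proof steps, and the system's kernel checks every step, rejecting the proof if anything is missing. A trivial example (using a lemma from Lean's standard library):

    -- A tiny interactive proof in Lean 4: the user states the theorem and
    -- supplies the steps; the kernel checks them and rejects any gaps.
    theorem add_comm_example (a b : Nat) : a + b = b + a := by
      exact Nat.add_comm a b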

Ethical considerations

Ethical considerations of argument technology include privacy, transparency, societal concerns, and diversity in representation. These factors cut across different levels such as technology, user interface design, user, service context, and society.

Argument map

From Wikipedia, the free encyclopedia
 
A schematic argument map showing a contention (or conclusion), supporting arguments and objections, and an inference objection
 
An argument map or argument diagram is a visual representation of the structure of an argument. An argument map typically includes the key components of the argument, traditionally called the conclusion and the premises, also called contention and reasons. Argument maps can also show co-premises, objections, counterarguments, rebuttals, and lemmas. There are different styles of argument map but they are often functionally equivalent and represent an argument's individual claims and the relationships between them.

Argument maps are commonly used in the context of teaching and applying critical thinking. The purpose of mapping is to uncover the logical structure of arguments, identify unstated assumptions, evaluate the support an argument offers for a conclusion, and aid understanding of debates. Argument maps are often designed to support deliberation of issues, ideas and arguments in wicked problems.

An argument map is not to be confused with a concept map or a mind map, two other kinds of node–link diagram which have different constraints on nodes and links.

Key features

A number of different kinds of argument maps have been proposed but the most common, which Chris Reed and Glenn Rowe called the standard diagram, consists of a tree structure with each of the reasons leading to the conclusion. There is no consensus as to whether the conclusion should be at the top of the tree with the reasons leading up to it or whether it should be at the bottom with the reasons leading down to it. Another variation diagrams an argument from left to right.

According to Douglas N. Walton and colleagues, an argument map has two basic components: "One component is a set of circled numbers arrayed as points. Each number represents a proposition (premise or conclusion) in the argument being diagrammed. The other component is a set of lines or arrows joining the points. Each line (arrow) represents an inference. The whole network of points and lines represents a kind of overview of the reasoning in the given argument..." With the introduction of software for producing argument maps, it has become common for argument maps to consist of boxes containing the actual propositions rather than numbers referencing those propositions.

There is disagreement on the terminology to be used when describing argument maps, but the standard diagram contains the following structures:

Dependent premises or co-premises, where at least one of the joined premises requires another premise before it can give support to the conclusion: An argument with this structure has been called a linked argument.

Statements 1 and 2 are dependent premises or co-premises.

Independent premises, where the premise can support the conclusion on its own: Although independent premises may jointly make the conclusion more convincing, this is to be distinguished from situations where a premise gives no support unless it is joined to another premise. Where several premises or groups of premises lead to a final conclusion the argument might be described as convergent. This is distinguished from a divergent argument where a single premise might be used to support two separate conclusions.

Statements 2, 3, 4 are independent premises.

Intermediate conclusions or sub-conclusions, where a claim is supported by another claim that is used in turn to support some further claim, i.e. the final conclusion or another intermediate conclusion: In the following diagram, statement 4 is an intermediate conclusion in that it is a conclusion in relation to statement 5 but is a premise in relation to the final conclusion, i.e. statement 1. An argument with this structure is sometimes called a complex argument. If there is a single chain of claims containing at least one intermediate conclusion, the argument is sometimes described as a serial argument or a chain argument.

Statement 4 is an intermediate conclusion or sub-conclusion.
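
In software, these structures can be captured directly as a tree of claims. The following is a minimal illustrative sketch in Python (the class and field names are assumptions for illustration, not any particular tool's data format):

    from dataclasses import dataclass, field

    @dataclass
    class Claim:
        """A claim in an argument map. Each supporting reason is a group of
        one or more claims: a group with several members models linked
        (dependent) premises, separate groups model a convergent argument,
        and nesting models serial arguments with sub-conclusions."""
        text: str
        reasons: list = field(default_factory=list)  # list of lists of Claim

    final = Claim("1. Final conclusion", reasons=[
        [Claim("2. A premise"), Claim("3. Its co-premise")],            # linked
        [Claim("4. Sub-conclusion",
               reasons=[[Claim("5. A supporting premise")]])],          # serial
    ])

    def show(claim, depth=0):
        print("  " * depth + claim.text)
        for group in claim.reasons:
            for premise in group:
                show(premise, depth + 1)

    show(final)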

Each of these structures can be represented by the equivalent "box and line" approach to argument maps. In the following diagram, the contention is shown at the top, and the boxes linked to it represent supporting reasons, which comprise one or more premises. The green arrow indicates that the two reasons support the contention:

A box and line diagram

Argument maps can also represent counterarguments. In the following diagram, the two objections weaken the contention, while the reasons support the premise of the objection:

A sample argument using objections

Representing an argument as an argument map

Diagramming written text

A written text can be transformed into an argument map by following a sequence of steps. Monroe Beardsley's 1950 book Practical Logic recommended the following procedure:

  1. Separate statements by brackets and number them.
  2. Put circles around the logical indicators.
  3. Supply, in parentheses, any logical indicators that are left out.
  4. Set out the statements in a diagram in which arrows show the relationships between statements.
A diagram of the example from Beardsley's Practical Logic

Beardsley gave the first example of a text being analysed in this way:

Though ① [people who talk about the "social significance" of the arts don’t like to admit it], ② [music and painting are bound to suffer when they are turned into mere vehicles for propaganda]. For ③ [propaganda appeals to the crudest and most vulgar feelings]: (for) ④ [look at the academic monstrosities produced by the official Nazi painters]. What is more important, ⑤ [art must be an end in itself for the artist], because ⑥ [the artist can do the best work only in an atmosphere of complete freedom].

Beardsley said that the conclusion in this example is statement ②. Statement ④ needs to be rewritten as a declarative sentence, e.g. "Academic monstrosities [were] produced by the official Nazi painters." Statement ① points out that the conclusion isn't accepted by everyone, but statement ① is omitted from the diagram because it doesn't support the conclusion. Beardsley said that the logical relation between statement ③ and statement ④ is unclear, but he proposed to diagram statement ④ as supporting statement ③.

A box and line diagram of Beardsley's example, produced using Harrell's procedure

More recently, philosophy professor Maralee Harrell recommended the following procedure:

  1. Identify all the claims being made by the author.
  2. Rewrite them as independent statements, eliminating non-essential words.
  3. Identify which statements are premises, sub-conclusions, and the main conclusion.
  4. Provide missing, implied conclusions and implied premises. (This is optional depending on the purpose of the argument map.)
  5. Put the statements into boxes and draw a line between any boxes that are linked.
  6. Indicate support from premise(s) to (sub)conclusion with arrows.

Diagramming as thinking

Argument maps are useful not only for representing and analyzing existing writings, but also for thinking through issues as part of a problem-structuring process or writing process. The use of such argument analysis for thinking through issues has been called "reflective argumentation".

An argument map, unlike a decision tree, does not tell one how to make a decision, but the process of choosing a coherent position (or reflective equilibrium) based on the structure of an argument map can be represented as a decision tree.

History

The philosophical origins and tradition of argument mapping

From Whately's Elements of Logic p467, 1852 edition

In the Elements of Logic, published in 1826 and issued in many subsequent editions, Archbishop Richard Whately gave probably the first form of an argument map, introducing it with the suggestion that "many students probably will find it a very clear and convenient mode of exhibiting the logical analysis of the course of argument, to draw it out in the form of a Tree, or Logical Division".

However, the technique did not become widely used, possibly because for complex arguments, it involved much writing and rewriting of the premises.

Wigmore evidence chart, from 1905

Legal philosopher and theorist John Henry Wigmore produced maps of legal arguments using numbered premises in the early 20th century, based in part on the ideas of 19th century philosopher Henry Sidgwick who used lines to indicate relations between terms.

Anglophone argument diagramming in the 20th century

Dealing with the failure of formal reduction of informal argumentation, English-speaking argumentation theory developed diagrammatic approaches to informal reasoning over a period of fifty years.

Monroe Beardsley proposed a form of argument diagram in 1950. His method of marking up an argument and representing its components with linked numbers became a standard and is still widely used. He also introduced terminology that is still current for describing convergent, divergent, and serial arguments.

A Toulmin argument diagram, redrawn from his 1959 Uses of Argument
 
A generalised Toulmin diagram

Stephen Toulmin, in his groundbreaking and influential 1958 book The Uses of Argument, identified several elements of an argument which have since been generalized. The Toulmin diagram is widely used in teaching critical thinking. Whilst Toulmin eventually had a significant impact on the development of informal logic, he had little initial impact, and the Beardsley approach to diagramming arguments, along with its later developments, became the standard approach in this field. Toulmin introduced something that was missing from Beardsley's approach. In Beardsley, "arrows link reasons and conclusions (but) no support is given to the implication itself between them. There is no theory, in other words, of inference distinguished from logical deduction, the passage is always deemed not controversial and not subject to support and evaluation". Toulmin introduced the concept of warrant, which "can be considered as representing the reasons behind the inference, the backing that authorizes the link".

Beardsley's approach was refined by Stephen N. Thomas, whose 1973 book Practical Reasoning in Natural Language introduced the term linked to describe arguments where the premises necessarily worked together to support the conclusion. However, the actual distinction between dependent and independent premises had been made prior to this. The introduction of the linked structure made it possible for argument maps to represent missing or "hidden" premises. In addition, Thomas suggested showing reasons both for and against a conclusion, with the reasons against being represented by dotted arrows. Thomas introduced the term argument diagram and defined basic reasons as those that were not supported by any others in the argument, and the final conclusion as that which was not used to support any further conclusion.

Scriven's argument diagram. The explicit premise 1 is conjoined with additional unstated premises a and b to imply 2.

Michael Scriven further developed the Beardsley-Thomas approach in his 1976 book Reasoning. Whereas Beardsley had said "At first, write out the statements...after a little practice, refer to the statements by number alone", Scriven advocated clarifying the meaning of the statements, listing them, and then using a tree diagram with numbers to display the structure. Missing premises (unstated assumptions) were to be included and indicated with an alphabetical letter instead of a number to mark them off from the explicit statements. Scriven introduced counterarguments in his diagrams, which Toulmin had defined as rebuttal. This also enabled the diagramming of "balance of consideration" arguments.

In 1998 a series of large-scale argument maps released by Robert E. Horn stimulated widespread interest in argument mapping.

Development of computer-supported argument visualization

Human–computer interaction pioneer Douglas Engelbart, in a famous 1962 technical report on intelligence augmentation, envisioned in detail something like argument-mapping software as an integral part of future intelligence-augmenting computer interfaces:

You usually think of an argument as a serial sequence of steps of reason, beginning with known facts, assumptions, etc., and progressing toward a conclusion. Well, we do have to think through these steps serially, and we usually do list the steps serially when we write them out because that is pretty much the way our papers and books have to present them—they are pretty limiting in the symbol structuring they enable us to use. ... To help us get better comprehension of the structure of an argument, we can also call forth a schematic or graphical display. Once the antecedent-consequent links have been established, the computer can automatically construct such a display for us.

— Douglas Engelbart, "Augmenting human intellect: a conceptual framework" (1962)

In the middle to late 1980s, hypertext software applications that supported argument visualization were developed, including NoteCards and gIBIS; the latter generated an on-screen graphical hypertextual map of an issue-based information system, a model of argumentation developed by Werner Kunz and Horst Rittel in the 1970s. In the 1990s, Tim van Gelder and colleagues developed a series of software applications that permitted an argument map's premises to be fully stated and edited in the diagram, rather than in a legend. Van Gelder's first program, Reason!Able, was superseded by two subsequent programs, bCisive and Rationale.

Throughout the 1990s and 2000s, many other software applications were developed for argument visualization. By 2013, more than 60 such software systems existed. In a 2010 survey of computer-supported argumentation, Oliver Scheuer and colleagues noted that one of the differences between these software systems is whether collaboration is supported. In their survey, single-user argumentation systems included Convince Me, iLogos, LARGO, Athena, Araucaria, and Carneades; small group argumentation systems included Digalo, QuestMap, Compendium, Belvedere, and AcademicTalk; community argumentation systems included Debategraph and Collaboratorium.

Applications

Argument maps have been applied in many areas, but foremost in educational, academic and business settings, including design rationale. Argument maps are also used in forensic science, law, and artificial intelligence. It has also been proposed that argument mapping has great potential to improve how we understand and execute democracy, in reference to the ongoing evolution of e-democracy.

Difficulties with the philosophical tradition

It has traditionally been hard to separate teaching critical thinking from the philosophical tradition of teaching logic and method, and most critical thinking textbooks have been written by philosophers. Informal logic textbooks are replete with philosophical examples, but it is unclear whether the approach in such textbooks transfers to non-philosophy students. There appears to be little statistical effect after such classes. Argument mapping, however, has a measurable effect according to many studies. For example, instruction in argument mapping has been shown to improve the critical thinking skills of business students.

Evidence that argument mapping improves critical thinking ability

There is empirical evidence that the skills developed in argument-mapping-based critical thinking courses substantially transfer to critical thinking done without argument maps. Alvarez's meta-analysis found that such critical thinking courses produced gains of around 0.70 SD, about twice as much as standard critical-thinking courses. The tests used in the reviewed studies were standard critical-thinking tests.

Limitations

When used with students in school, argument maps have limitations. They can "end up looking overly complex" and can increase cognitive load beyond what is optimal for learning the course content. Creating maps requires extensive coaching and feedback from an experienced argument mapper. Depending on the learning objectives, the time spent coaching students to create good maps may be better spent learning the course content instead of learning to diagram. When the goal is to prompt students to consider other perspectives and counterarguments, the goal may be more easily accomplished with other methods such as discussion, rubrics, and a simple argument framework or simple graphic organizer such as a vee diagram. To maximize the strengths of argument mapping and minimize its limitations in the classroom requires considering at what point in a learning progression the potential benefits of argument mapping would outweigh its potential disadvantages.

Standards

Argument Interchange Format

The Argument Interchange Format, AIF, is an international effort to develop a representational mechanism for exchanging argument resources between research groups, tools, and domains using a semantically rich language. AIF-RDF is the extended ontology represented in the Resource Description Framework Schema (RDFS) semantic language. Though AIF is still something of a moving target, it is settling down.
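
As a rough illustration of the kind of structure AIF describes, the sketch below builds a tiny argument graph in plain Python. AIF's distinction between information nodes (I-nodes, the claims themselves) and scheme nodes (S-nodes, such as RA-nodes for inference and CA-nodes for conflict) is real; the identifiers and dictionary layout here are illustrative assumptions, not AIF's normative serialization:

    # A minimal AIF-style argument graph: I-nodes hold claims, and an
    # RA-node reifies the inference from one claim to another.
    nodes = {
        "i1": {"type": "I", "text": "Propaganda appeals to vulgar feelings."},
        "i2": {"type": "I", "text": "Art suffers when turned into propaganda."},
        "ra1": {"type": "RA"},  # an application of an inference scheme
    }
    edges = [("i1", "ra1"), ("ra1", "i2")]  # i1 supports i2 via the RA-node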

Legal Knowledge Interchange Format

The Legal Knowledge Interchange Format (LKIF) was developed in the European ESTRELLA project and designed with the goal of becoming a standard for representing and interchanging policy, legislation and cases, including their justificatory arguments, in the legal domain. LKIF builds on and uses the Web Ontology Language (OWL) for representing concepts and includes a reusable basic ontology of legal concepts.

Argdown

Argdown is a Markdown-inspired lightweight markup language for complex argumentation. It is intended for exchanging arguments and argument reconstructions in a universally accessible and highly human-readable way. The Argdown syntax is accompanied by tools that facilitate coding and transform Argdown documents into argument maps.
