Tuesday, May 9, 2023

Argumentation theory

From Wikipedia, the free encyclopedia
 
Two men argue at a political protest in New York City.

Argumentation theory, or argumentation, is the interdisciplinary study of how conclusions can be supported or undermined by premises through logical reasoning. With historical origins in logic, dialectic, and rhetoric, argumentation theory includes the arts and sciences of civil debate, dialogue, conversation, and persuasion. It studies rules of inference, logic, and procedural rules in both artificial and real-world settings. Argumentation includes various forms of dialogue, such as deliberation and negotiation, which are concerned with collaborative decision-making procedures. It also encompasses eristic dialogue, the branch of social debate in which victory over an opponent is the primary goal, and didactic dialogue used for teaching. This discipline also studies the means by which people can express and rationally resolve, or at least manage, their disagreements.

Argumentation is a daily occurrence, in contexts such as public debate, science, and law. In law, for example, the judge, the parties, and the prosecutor present and test the validity of evidence in court. Argumentation scholars also study the post hoc rationalizations by which organizational actors try to justify decisions they have made irrationally.

Argumentation is one of four rhetorical modes (also known as modes of discourse), along with exposition, description, and narration.

Key components of argumentation

Some key components of argumentation are:

  • Understanding and identifying arguments, either explicit or implied, and the goals of the participants in the different types of dialogue.
  • Identifying the premises from which conclusions are derived.
  • Establishing the "burden of proof" – determining who made the initial claim and is thus responsible for providing evidence for why his/her position merits acceptance.
  • For the one carrying the "burden of proof", the advocate, to marshal evidence for his/her position in order to convince or force the opponent's acceptance. The method by which this is accomplished is producing valid, sound, and cogent arguments, devoid of weaknesses, and not easily attacked.
  • In a debate, fulfillment of the burden of proof creates a burden of rejoinder. One must try to identify faulty reasoning in the opponent's argument, to attack the reasons/premises of the argument, to provide counterexamples if possible, to identify any fallacies, and to show why a valid conclusion cannot be derived from the reasons provided for his/her argument.

For example, consider the following exchange, illustrated by the No true Scotsman fallacy:

Argument: "No Scotsman puts sugar on his porridge."
Reply: "But my friend Angus likes sugar with his porridge."
Rebuttal: "Ah yes, but no true Scotsman puts sugar on his porridge."

In this dialogue, the proposer first offers a premise, the premise is challenged by the interlocutor, and finally the proposer offers a modification of the premise. This exchange could be part of a larger discussion, for example a murder trial, in which the defendant is a Scotsman, and it had been established earlier that the murderer was eating sugared porridge when he or she committed the murder.

Internal structure of arguments

Typically an argument has an internal structure, comprising the following:

  1. a set of assumptions or premises,
  2. a method of reasoning or deduction, and
  3. a conclusion or point.

An argument has one or more premises and one conclusion.

Often classical logic is used as the method of reasoning, so that the conclusion follows logically from the assumptions or support. One challenge is that if the set of assumptions is inconsistent, then anything follows logically from it (the principle of explosion). Therefore, it is common to insist that the set of assumptions be consistent. It is also good practice to require the set of assumptions to be the minimal set, with respect to set inclusion, necessary to infer the consequent. Such arguments are called MINCON arguments, short for minimal consistent. Such argumentation has been applied to the fields of law and medicine.
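The idea of a minimal consistent support can be made concrete with a small sketch. The following toy Python implementation (an illustration, not a production theorem prover; the formula encoding and function names are my own) brute-forces the minimal consistent subsets of a premise set that entail a conclusion, using truth-table entailment over propositional formulas:

```python
from itertools import combinations, product

# Toy encoding: an atom is a string; compound formulas are tuples such as
# ("not", f), ("and", f, g), ("or", f, g), ("implies", f, g).

def atoms(formula, acc=None):
    """Collect the atomic propositions occurring in a formula."""
    acc = set() if acc is None else acc
    if isinstance(formula, str):
        acc.add(formula)
    else:
        for sub in formula[1:]:
            atoms(sub, acc)
    return acc

def evaluate(formula, assignment):
    """Evaluate a formula under a truth assignment (dict: atom -> bool)."""
    if isinstance(formula, str):
        return assignment[formula]
    op = formula[0]
    if op == "not":
        return not evaluate(formula[1], assignment)
    if op == "and":
        return evaluate(formula[1], assignment) and evaluate(formula[2], assignment)
    if op == "or":
        return evaluate(formula[1], assignment) or evaluate(formula[2], assignment)
    if op == "implies":
        return (not evaluate(formula[1], assignment)) or evaluate(formula[2], assignment)
    raise ValueError(f"unknown connective: {op}")

def _assignments(formulas):
    """Yield every truth assignment over the atoms of the given formulas."""
    vocab = set()
    for f in formulas:
        vocab |= atoms(f)
    vocab = sorted(vocab)
    for values in product([False, True], repeat=len(vocab)):
        yield dict(zip(vocab, values))

def consistent(premises):
    """A premise set is consistent iff some assignment satisfies all of it."""
    return any(all(evaluate(p, a) for p in premises) for a in _assignments(premises))

def entails(premises, conclusion):
    """Valid iff no assignment makes all premises true and the conclusion false."""
    return all(evaluate(conclusion, a)
               for a in _assignments(list(premises) + [conclusion])
               if all(evaluate(p, a) for p in premises))

def mincon_supports(premises, conclusion):
    """All minimal (by set inclusion) consistent subsets of the premises
    that entail the conclusion -- MINCON supports."""
    found = []
    for k in range(len(premises) + 1):
        for subset in combinations(premises, k):
            if any(set(s) <= set(subset) for s in found):
                continue  # a strictly smaller support already exists
            if consistent(subset) and entails(subset, conclusion):
                found.append(subset)
    return found

premises = ["p", ("implies", "p", "q"), "r"]
print(mincon_supports(premises, "q"))  # the only minimal support is {p, p -> q}
```

The brute-force search is exponential, which is acceptable here only because the point is the definition: a MINCON argument keeps exactly the premises needed, and no inconsistent set is ever counted as support.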

A non-classical approach to argumentation investigates abstract arguments, where 'argument' is considered a primitive term, so no internal structure of arguments is taken into account.

Types of dialogue

In its most common form, argumentation involves an individual and an interlocutor or opponent engaged in dialogue, each contending differing positions and trying to persuade each other, but there are various types of dialogue:

  • Persuasion dialogue aims to resolve conflicting points of view held by different parties.
  • Negotiation aims to resolve conflicts of interests by cooperation and dealmaking.
  • Inquiry aims to resolve general ignorance by the growth of knowledge.
  • Deliberation aims to resolve a need to take action by reaching a decision.
  • Information seeking aims to reduce one party's ignorance by requesting information from another party that is in a position to know something.
  • Eristic aims to resolve a situation of antagonism through verbal fighting.

Argumentation and the grounds of knowledge

Argumentation theory had its origins in foundationalism, a theory of knowledge (epistemology) in the field of philosophy. It sought to find the grounds for claims in the forms (logic) and materials (factual laws) of a universal system of knowledge. The dialectical method was made famous by Plato and his use of Socrates critically questioning various characters and historical figures. But argument scholars gradually rejected Aristotle's systematic philosophy and the idealism in Plato and Kant. They questioned and ultimately discarded the idea that argument premises take their soundness from formal philosophical systems. The field thus broadened.

One of the original contributors to this trend was the philosopher Chaïm Perelman, who together with Lucie Olbrechts-Tyteca introduced the French term la nouvelle rhétorique in 1958 to describe an approach to argument which is not reduced to application of formal rules of inference. Perelman's view of argumentation is much closer to a juridical one, in which rules for presenting evidence and rebuttals play an important role.

Karl R. Wallace's seminal essay, "The Substance of Rhetoric: Good Reasons," Quarterly Journal of Speech 44 (1963), led many scholars to study "marketplace argumentation" – the ordinary arguments of ordinary people. The seminal essay on marketplace argumentation is Ray Lynn Anderson's and C. David Mortensen's "Logic and Marketplace Argumentation," Quarterly Journal of Speech 53 (1967): 143–150. This line of thinking led to a natural alliance with late developments in the sociology of knowledge. Some scholars drew connections with recent developments in philosophy, namely the pragmatism of John Dewey and Richard Rorty. Rorty has called this shift in emphasis "the linguistic turn".

In this new hybrid approach argumentation is used with or without empirical evidence to establish convincing conclusions about issues which are moral, scientific, epistemic, or of a nature in which science alone cannot answer. Out of pragmatism and many intellectual developments in the humanities and social sciences, "non-philosophical" argumentation theories grew which located the formal and material grounds of arguments in particular intellectual fields. These theories include informal logic, social epistemology, ethnomethodology, speech acts, the sociology of knowledge, the sociology of science, and social psychology. These new theories are not non-logical or anti-logical. They find logical coherence in most communities of discourse. These theories are thus often labeled "sociological" in that they focus on the social grounds of knowledge.

Approaches to argumentation in communication and informal logic

In general, the label "argumentation" is used by communication scholars such as (to name only a few) Wayne E. Brockriede, Douglas Ehninger, Joseph W. Wenzel, Richard Rieke, Gordon Mitchell, Carol Winkler, Eric Gander, Dennis S. Gouran, Daniel J. O'Keefe, Mark Aakhus, Bruce Gronbeck, James Klumpp, G. Thomas Goodnight, Robin Rowland, Dale Hample, C. Scott Jacobs, Sally Jackson, David Zarefsky, and Charles Arthur Willard, while the term "informal logic" is preferred by philosophers, stemming from University of Windsor philosophers Ralph H. Johnson and J. Anthony Blair. Harald Wohlrapp developed a criterion for validness (Geltung, Gültigkeit) as freedom from objections.

Trudy Govier, Douglas N. Walton, Michael Gilbert, Harvey Siegel, Michael Scriven, and John Woods (to name only a few) are other prominent authors in this tradition. Over the past thirty years, however, scholars from several disciplines have commingled at international conferences such as that hosted by the University of Amsterdam (the Netherlands) and the International Society for the Study of Argumentation (ISSA). Other international conferences are the biennial conference held at Alta, Utah, sponsored by the (US) National Communication Association and American Forensics Association, and conferences sponsored by the Ontario Society for the Study of Argumentation (OSSA).

Some scholars (such as Ralph H. Johnson) construe the term "argument" narrowly, as exclusively written discourse or even discourse in which all premises are explicit. Others (such as Michael Gilbert) construe the term "argument" broadly, to include spoken and even nonverbal discourse, for instance the degree to which a war memorial or propaganda poster can be said to argue or "make arguments". The philosopher Stephen Toulmin has said that an argument is a claim on our attention and belief, a view that would seem to authorize treating, say, propaganda posters as arguments. The dispute between broad and narrow theorists is of long standing and is unlikely to be settled. The views of the majority of argumentation theorists and analysts fall somewhere between these two extremes.

Kinds of argumentation

Conversational argumentation

The study of naturally occurring conversation arose from the field of sociolinguistics. It is usually called conversation analysis (CA). Inspired by ethnomethodology, it was developed in the late 1960s and early 1970s principally by the sociologist Harvey Sacks and, among others, his close associates Emanuel Schegloff and Gail Jefferson. Sacks died early in his career, but his work was championed by others in his field, and CA has now become an established force in sociology, anthropology, linguistics, speech-communication and psychology. It is particularly influential in interactional sociolinguistics, discourse analysis and discursive psychology, as well as being a coherent discipline in its own right. Recently CA techniques of sequential analysis have been employed by phoneticians to explore the fine phonetic details of speech.

Empirical studies and theoretical formulations by Sally Jackson and Scott Jacobs, and several generations of their students, have described argumentation as a form of managing conversational disagreement within communication contexts and systems that naturally prefer agreement.

Mathematical argumentation

The basis of mathematical truth has been the subject of long debate. Frege in particular sought to demonstrate (see Gottlob Frege, The Foundations of Arithmetic, 1884, and Begriffsschrift, 1879) that arithmetical truths can be derived from purely logical axioms and therefore are, in the end, logical truths. The project was developed by Russell and Whitehead in their Principia Mathematica. If an argument can be cast in the form of sentences in symbolic logic, then it can be tested by the application of accepted proof procedures. This was carried out for arithmetic using Peano axioms, and the foundation most commonly used for most modern mathematics is Zermelo-Fraenkel set theory, with or without the Axiom of Choice. Be that as it may, an argument in mathematics, as in any other discipline, can be considered valid only if it can be shown that it cannot have true premises and a false conclusion.
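As a small illustration of casting an argument in symbolic form and checking it by an accepted proof procedure, a proof assistant such as Lean will accept the following (a minimal sketch; Lean's natural-number arithmetic is built on Peano-style definitions):

```lean
-- A trivial arithmetic truth, checked mechanically from the definitions
-- of the natural numbers and addition.
example : 2 + 2 = 4 := rfl

-- A propositional argument form (modus ponens): from p and p → q, infer q.
-- The proof term hpq hp exhibits why true premises force a true conclusion.
example (p q : Prop) (hp : p) (hpq : p → q) : q := hpq hp
```

In both cases the checker enforces exactly the criterion stated above: an argument is accepted only when it cannot have true premises and a false conclusion.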

Scientific argumentation

Perhaps the most radical statement of the social grounds of scientific knowledge appears in Alan G. Gross's The Rhetoric of Science (Cambridge: Harvard University Press, 1990). Gross holds that science is rhetorical "without remainder", meaning that scientific knowledge itself cannot be seen as an idealized ground of knowledge. Scientific knowledge is produced rhetorically, meaning that it has special epistemic authority only insofar as its communal methods of verification are trustworthy. This thinking represents an almost complete rejection of the foundationalism on which argumentation was first based.

Interpretive argumentation

Interpretive argumentation is a dialogical process in which participants explore and/or resolve interpretations, often of a text in any medium that contains significant ambiguity of meaning.

Interpretive argumentation is pertinent to the humanities, hermeneutics, literary theory, linguistics, semantics, pragmatics, semiotics, analytic philosophy and aesthetics. Topics in conceptual interpretation include aesthetic, judicial, logical and religious interpretation. Topics in scientific interpretation include scientific modeling.

Legal argumentation

By lawyers

Legal arguments are spoken presentations to a judge or appellate court by a lawyer, or by parties representing themselves, of the legal reasons why they should prevail. Oral argument at the appellate level accompanies written briefs, which also advance each party's argument in the legal dispute. A closing argument, or summation, is the concluding statement of each party's counsel reiterating the important arguments for the trier of fact, often the jury, in a court case. A closing argument occurs after the presentation of evidence.

By judges

A judicial opinion or legal opinion is, in certain jurisdictions, a written explanation by a judge or group of judges that accompanies an order or ruling in a case, laying out the rationale (justification) and legal principles for the ruling and citing the decision reached to resolve the dispute. A judicial opinion usually includes the reasons behind the decision. Where there are three or more judges, it may take the form of a majority opinion, minority opinion, or concurring opinion.

Political argumentation

Political arguments are used by academics, media pundits, candidates for political office and government officials. Political arguments are also used by citizens in ordinary interactions to comment about and understand political events. The rationality of the public is a major question in this line of research. Political scientist Samuel L. Popkin coined the expression "low information voters" to describe most voters who know very little about politics or the world in general.

In practice, a low-information voter may not be aware of legislation that their representative has sponsored in Congress, and may base their ballot-box decision on a media sound bite or a flier received in the mail. It is possible for a media sound bite or campaign flier to present a political position for the incumbent candidate that completely contradicts the legislative action taken in the Capitol on behalf of the constituents. It may take only a small percentage of the overall voting group basing their decision on inaccurate information to form a voter bloc large enough to swing an overall election result. When this happens, the constituency at large may have been duped or fooled; nevertheless, the election result is legal and confirmed. Savvy political consultants will take advantage of low-information voters and sway their votes with disinformation and fake news, because doing so can be easier and sufficiently effective. Fact-checkers have come about in recent years to help counter the effects of such campaign tactics.

Psychological aspects

Psychology has long studied the non-logical aspects of argumentation. For example, studies have shown that simple repetition of an idea is often a more effective method of argumentation than appeals to reason. Propaganda often utilizes repetition. "Repeat a lie often enough and it becomes the truth" is a law of propaganda often attributed to the Nazi politician Joseph Goebbels. Nazi rhetoric has been studied extensively as, inter alia, a repetition campaign.

Empirical studies of communicator credibility and attractiveness, sometimes labeled charisma, have also been tied closely to empirically-occurring arguments. Such studies bring argumentation within the ambit of persuasion theory and practice.

Some psychologists such as William J. McGuire believe that the syllogism is the basic unit of human reasoning. They have produced a large body of empirical work around McGuire's famous title "A Syllogistic Analysis of Cognitive Relationships". A central line of this way of thinking is that logic is contaminated by psychological variables such as "wishful thinking", in which subjects confound the likelihood of predictions with the desirability of the predictions. People hear what they want to hear and see what they expect to see. If planners want something to happen they see it as likely to happen. If they hope something will not happen, they see it as unlikely to happen. Thus smokers think that they personally will avoid cancer, promiscuous people practice unsafe sex, and teenagers drive recklessly.

Theories

Argument fields

Stephen Toulmin and Charles Arthur Willard have championed the idea of argument fields, the former drawing upon Ludwig Wittgenstein's notion of language games (Sprachspiel), the latter drawing from communication and argumentation theory, sociology, political science, and social epistemology. For Toulmin, the term "field" designates discourses within which arguments and factual claims are grounded. For Willard, the term "field" is interchangeable with "community", "audience", or "readership". Similarly, G. Thomas Goodnight has studied "spheres" of argument and sparked a large literature created by younger scholars responding to or using his ideas. The general tenor of these field theories is that the premises of arguments take their meaning from social communities.

Stephen E. Toulmin's contributions

The most influential theorist has been Stephen Toulmin, the Cambridge-educated philosopher and educator, best known for his Toulmin model of argument. What follows below is a sketch of his ideas.

An alternative to absolutism and relativism

Throughout many of his works, Toulmin pointed out that absolutism (represented by theoretical or analytic arguments) has limited practical value. Absolutism is derived from Plato's idealized formal logic, which advocates universal truth; accordingly, absolutists believe that moral issues can be resolved by adhering to a standard set of moral principles, regardless of context. By contrast, Toulmin contends that many of these so-called standard principles are irrelevant to real situations encountered by human beings in daily life.

To develop his contention, Toulmin introduced the concept of argument fields. In The Uses of Argument (1958), Toulmin claims that some aspects of arguments vary from field to field, and are hence called "field-dependent", while other aspects of argument are the same throughout all fields, and are hence called "field-invariant". The flaw of absolutism, Toulmin believes, lies in its unawareness of the field-dependent aspect of argument; absolutism assumes that all aspects of argument are field invariant.

In Human Understanding (1972), Toulmin suggests that anthropologists have been tempted to side with relativists because they have noticed the influence of cultural variations on rational arguments. In other words, the anthropologist or relativist overemphasizes the importance of the "field-dependent" aspect of arguments, and neglects or is unaware of the "field-invariant" elements. In order to provide solutions to the problems of absolutism and relativism, Toulmin attempts throughout his work to develop standards that are neither absolutist nor relativist for assessing the worth of ideas.

In Cosmopolis (1990), he traces philosophers' "quest for certainty" back to René Descartes and Thomas Hobbes, and lauds John Dewey, Wittgenstein, Martin Heidegger, and Richard Rorty for abandoning that tradition.

Toulmin model of argument

Toulmin argumentation can be diagrammed as a conclusion established, more or less, on the basis of a fact supported by a warrant (with backing), and a possible rebuttal.
 

Arguing that absolutism lacks practical value, Toulmin aimed to develop a different type of argument, called practical arguments (also known as substantial arguments). In contrast to absolutists' theoretical arguments, Toulmin's practical argument is intended to focus on the justificatory function of argumentation, as opposed to the inferential function of theoretical arguments. Whereas theoretical arguments make inferences based on a set of principles to arrive at a claim, practical arguments first find a claim of interest, and then provide justification for it. Toulmin believed that reasoning is less an activity of inference, involving the discovering of new ideas, and more a process of testing and sifting already existing ideas—an act achievable through the process of justification.

Toulmin believed that for a good argument to succeed, it needs to provide good justification for a claim. This, he believed, will ensure it stands up to criticism and earns a favourable verdict. In The Uses of Argument (1958), Toulmin proposed a layout containing six interrelated components for analyzing arguments:

Claim (Conclusion)
A conclusion whose merit must be established. In argumentative essays, it may be called the thesis. For example, if a person tries to convince a listener that he is a British citizen, the claim would be "I am a British citizen" (1).
Ground (Fact, Evidence, Data)
A fact one appeals to as a foundation for the claim. For example, the person introduced in 1 can support his claim with the supporting data "I was born in Bermuda" (2).
Warrant
A statement authorizing movement from the ground to the claim. In order to move from the ground established in 2, "I was born in Bermuda", to the claim in 1, "I am a British citizen", the person must supply a warrant to bridge the gap between 1 and 2 with the statement "A man born in Bermuda will legally be a British citizen" (3).
Backing
Credentials designed to certify the statement expressed in the warrant; backing must be introduced when the warrant itself is not convincing enough to the readers or the listeners. For example, if the listener does not deem the warrant in 3 as credible, the speaker will supply the legal provisions: "I trained as a barrister in London, specialising in citizenship, so I know that a man born in Bermuda will legally be a British citizen".
Rebuttal (Reservation)
Statements recognizing the restrictions which may legitimately be applied to the claim. It is exemplified as follows: "A man born in Bermuda will legally be a British citizen, unless he has betrayed Britain and has become a spy for another country".
Qualifier
Words or phrases expressing the speaker's degree of force or certainty concerning the claim. Such words or phrases include "probably", "possible", "impossible", "certainly", "presumably", "as far as the evidence goes", and "necessarily". The claim "I am definitely a British citizen" has a greater degree of force than the claim "I am a British citizen, presumably". (See also: Defeasible reasoning.)

The first three elements, claim, ground, and warrant, are considered as the essential components of practical arguments, while the second triad, qualifier, backing, and rebuttal, may not be needed in some arguments.
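The six components above can be sketched as a simple data structure. The following Python sketch (the class and field names are illustrative choices, not part of Toulmin's own work) holds the essential triad plus the three optional elements, and renders them in the conventional ground, warrant, claim order:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ToulminArgument:
    """One argument in Toulmin's six-part layout. Claim, ground, and warrant
    are the essential triad; backing, qualifier, and rebuttal are optional."""
    claim: str
    ground: str
    warrant: str
    backing: Optional[str] = None
    qualifier: Optional[str] = None
    rebuttal: Optional[str] = None

    def render(self) -> str:
        """Render the layout as one sentence: ground; since warrant
        (on account of backing), so, (qualifier,) claim, unless rebuttal."""
        parts = [f"{self.ground}; since {self.warrant}"]
        if self.backing:
            parts.append(f"on account of {self.backing}")
        qualified = f"{self.qualifier}, {self.claim}" if self.qualifier else self.claim
        parts.append(f"so, {qualified}")
        if self.rebuttal:
            parts.append(f"unless {self.rebuttal}")
        return ", ".join(parts) + "."

# Toulmin's own Bermuda example, encoded in the structure above.
harry = ToulminArgument(
    claim="Harry is a British citizen",
    ground="Harry was born in Bermuda",
    warrant="a man born in Bermuda will legally be a British citizen",
    qualifier="presumably",
    rebuttal="he has betrayed Britain and become a spy for another country",
)
print(harry.render())
```

Making the qualifier and rebuttal optional fields mirrors the point above: many everyday arguments supply only claim, ground, and warrant.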

When Toulmin first proposed it, this layout of argumentation was based on legal arguments and intended to be used to analyze the rationality of arguments typically found in the courtroom. Toulmin did not realize that this layout could be applicable to the field of rhetoric and communication until his works were introduced to rhetoricians by Wayne Brockriede and Douglas Ehninger. Their Decision by Debate (1963) streamlined Toulmin's terminology and broadly introduced his model to the field of debate. Only after Toulmin published Introduction to Reasoning (1979) were the rhetorical applications of this layout mentioned in his works.

One criticism of the Toulmin model is that it does not fully consider the use of questions in argumentation. The Toulmin model assumes that an argument starts with a fact or claim and ends with a conclusion, but ignores an argument's underlying questions. In the example "Harry was born in Bermuda, so Harry must be a British subject", the question "Is Harry a British subject?" is ignored, which also neglects to analyze why particular questions are asked and others are not. (See Issue mapping for an example of an argument-mapping method that emphasizes questions.)

Toulmin's argument model has inspired research on, for example, goal structuring notation (GSN), widely used for developing safety cases, and argument maps and associated software.

The evolution of knowledge

In 1972, Toulmin published Human Understanding, in which he asserts that conceptual change is an evolutionary process. In this book, Toulmin attacks Thomas Kuhn's account of conceptual change in his seminal work The Structure of Scientific Revolutions (1962). Kuhn believed that conceptual change is a revolutionary process (as opposed to an evolutionary process), during which mutually exclusive paradigms compete to replace one another. Toulmin criticized the relativist elements in Kuhn's thesis, arguing that mutually exclusive paradigms provide no ground for comparison, and that Kuhn made the relativists' error of overemphasizing the "field variant" while ignoring the "field invariant" or commonality shared by all argumentation or scientific paradigms.

In contrast to Kuhn's revolutionary model, Toulmin proposed an evolutionary model of conceptual change comparable to Darwin's model of biological evolution. Toulmin states that conceptual change involves the process of innovation and selection. Innovation accounts for the appearance of conceptual variations, while selection accounts for the survival and perpetuation of the soundest conceptions. Innovation occurs when the professionals of a particular discipline come to view things differently from their predecessors; selection subjects the innovative concepts to a process of debate and inquiry in what Toulmin considers as a "forum of competitions". The soundest concepts will survive the forum of competition as replacements or revisions of the traditional conceptions.

From the absolutists' point of view, concepts are either valid or invalid regardless of contexts. From the relativists' perspective, one concept is neither better nor worse than a rival concept from a different cultural context. From Toulmin's perspective, the evaluation depends on a process of comparison, which determines whether or not one concept will improve explanatory power more than its rival concepts.

Pragma-dialectics

Scholars at the University of Amsterdam in the Netherlands have pioneered a rigorous modern version of dialectic under the name pragma-dialectics. The intuitive idea is to formulate clear-cut rules that, if followed, will yield reasonable discussion and sound conclusions. Frans H. van Eemeren, the late Rob Grootendorst, and many of their students and co-authors have produced a large body of work expounding this idea.

The dialectical conception of reasonableness is given by ten rules for critical discussion, all being instrumental for achieving a resolution of the difference of opinion (from Van Eemeren, Grootendorst, & Snoeck Henkemans, 2002, pp. 182–183). The theory postulates this as an ideal model, and not something one expects to find as an empirical fact. The model can however serve as an important heuristic and critical tool for testing how reality approximates this ideal and point to where discourse goes wrong, that is, when the rules are violated. Any such violation will constitute a fallacy. Albeit not primarily focused on fallacies, pragma-dialectics provides a systematic approach to deal with them in a coherent way.

Van Eemeren and Grootendorst identified four stages of argumentative dialogue. These stages can be regarded as an argument protocol. In a somewhat loose interpretation, the stages are as follows:

  • Confrontation stage: Presentation of the difference of opinion, such as a debate question or a political disagreement.
  • Opening stage: Agreement on material and procedural starting points, the mutually acceptable common ground of facts and beliefs, and the rules to be followed during the discussion (such as, how evidence is to be presented, and determination of closing conditions).
  • Argumentation stage: Presentation of reasons for and against the standpoint(s) at issue, through application of logical and common-sense principles according to the agreed-upon rules.
  • Concluding stage: Determining whether the standpoint has withstood reasonable criticism and, if so, accepting that it is justified. This occurs when the termination conditions are met (among these could be, for example, a time limitation or the determination of an arbiter).

Van Eemeren and Grootendorst provide a detailed list of rules that must be applied at each stage of the protocol. Moreover, in the account of argumentation given by these authors, there are specified roles of protagonist and antagonist in the protocol which are determined by the conditions which set up the need for argument.
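Read as an argument protocol, the four stages can be sketched as a minimal state machine. The following Python sketch is illustrative only (the class and method names are my own, not pragma-dialectical terminology); it enforces that each move is recorded in its proper stage and that stages advance in order:

```python
from enum import Enum, auto

class Stage(Enum):
    CONFRONTATION = auto()
    OPENING = auto()
    ARGUMENTATION = auto()
    CONCLUDING = auto()

ORDER = [Stage.CONFRONTATION, Stage.OPENING, Stage.ARGUMENTATION, Stage.CONCLUDING]

class Discussion:
    """Tracks which stage a critical discussion is in and refuses moves
    that belong to a different stage -- a toy argument protocol."""
    def __init__(self):
        self.stage = Stage.CONFRONTATION
        self.log = []

    def record(self, stage: Stage, move: str):
        if stage != self.stage:
            raise ValueError(f"move '{move}' belongs to {stage.name}, "
                             f"but the discussion is in {self.stage.name}")
        self.log.append((stage.name, move))

    def advance(self):
        """Move to the next stage; the concluding stage is terminal."""
        i = ORDER.index(self.stage)
        if i + 1 < len(ORDER):
            self.stage = ORDER[i + 1]

d = Discussion()
d.record(Stage.CONFRONTATION, "State the difference of opinion")
d.advance()
d.record(Stage.OPENING, "Agree on common ground and discussion rules")
d.advance()
d.record(Stage.ARGUMENTATION, "Present reasons for and against the standpoint")
d.advance()
d.record(Stage.CONCLUDING, "Determine whether the standpoint withstood criticism")
```

A violation of stage order raises an error, which loosely parallels the theory's treatment of rule violations as fallacies: moves are judged against where the discussion currently stands.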

Walton's logical argumentation method

Douglas N. Walton developed a distinctive philosophical theory of logical argumentation built around a set of practical methods to help a user identify, analyze and evaluate arguments in everyday conversational discourse and in more structured areas such as debate, law and scientific fields. There are four main components: argumentation schemes, dialogue structures, argument mapping tools, and formal argumentation systems. The method uses the notion of commitment in dialogue as the fundamental tool for the analysis and evaluation of argumentation rather than the notion of belief. Commitments are statements that the agent has expressed or formulated, and has pledged to carry out, or has publicly asserted. According to the commitment model, agents interact with each other in a dialogue in which each takes its turn to contribute speech acts. The dialogue framework uses critical questioning as a way of testing plausible explanations and finding weak points in an argument that raise doubt concerning the acceptability of the argument.

Walton's logical argumentation model took a view of proof and justification different from analytic philosophy's dominant epistemology, which was based on a justified true belief framework. In the logical argumentation approach, knowledge is seen as a form of belief commitment firmly fixed by an argumentation procedure that tests the evidence on both sides, and uses standards of proof to determine whether a proposition qualifies as knowledge. In this evidence-based approach, knowledge must be seen as defeasible.
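The commitment model can be sketched as a public commitment store per participant. The following Python sketch is illustrative only (the class and method names are my own, not Walton's formalism): asserting adds a statement to the speaker's store, and a statement that cannot survive critical questioning may be retracted, reflecting the defeasibility noted above.

```python
class Participant:
    """A dialogue participant with a public commitment store."""
    def __init__(self, name: str):
        self.name = name
        self.commitments = set()

    def assert_(self, statement: str):
        # Asserting a statement publicly commits the speaker to it.
        self.commitments.add(statement)

    def retract(self, statement: str):
        # Commitments are defeasible: one may be withdrawn when it
        # cannot survive critical questioning.
        self.commitments.discard(statement)

    def is_committed_to(self, statement: str) -> bool:
        return statement in self.commitments

proponent = Participant("Proponent")
proponent.assert_("Harry was born in Bermuda")
proponent.assert_("Harry is a British citizen")
# A critical question exposes an exception (say, renounced citizenship),
# so the proponent retracts the conclusion but keeps the ground.
proponent.retract("Harry is a British citizen")
```

The point of modelling commitments rather than beliefs is visible even in this toy: the store records only what the participant has publicly asserted, not what they privately think.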

Artificial intelligence

Efforts have been made within the field of artificial intelligence to perform and analyze the act of argumentation with computers. Argumentation has been used to provide a proof-theoretic semantics for non-monotonic logic, starting with the influential work of Dung (1995). Computational argumentation systems have found particular application in domains where formal logic and classical decision theory are unable to capture the richness of reasoning, domains such as law and medicine. In Elements of Argumentation, Philippe Besnard and Anthony Hunter show how classical logic-based techniques can be used to capture key elements of practical argumentation.
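Dung's abstract frameworks treat arguments as atoms related only by an attack relation. One of the standard semantics, the grounded extension, can be computed by iterating the characteristic function from the empty set until it reaches a fixed point. A minimal Python sketch (the function names are illustrative):

```python
def grounded_extension(arguments, attacks):
    """Grounded extension of an abstract argumentation framework
    (arguments: a set; attacks: a set of (attacker, target) pairs).
    An argument is acceptable w.r.t. a set S if S attacks every one
    of its attackers; iterate that operator from the empty set."""
    attacks = set(attacks)
    attackers = {a: {x for (x, y) in attacks if y == a} for a in arguments}

    def defended(a, s):
        # Every attacker of a must itself be attacked by some member of s.
        return all(any((d, b) in attacks for d in s) for b in attackers[a])

    s = set()
    while True:
        nxt = {a for a in arguments if defended(a, s)}
        if nxt == s:
            return s
        s = nxt

# a attacks b, b attacks c: a is unattacked, so a is in; a defeats b,
# which reinstates c. The grounded extension is {a, c}.
print(grounded_extension({"a", "b", "c"}, {("a", "b"), ("b", "c")}))
```

Because the operator is monotone, the iteration always terminates at the least fixed point; this skeptical semantics accepts nothing in a mutual-attack standoff, which is one reason such frameworks suit non-monotonic reasoning.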

Within computer science, the ArgMAS workshop series (Argumentation in Multi-Agent Systems), the CMNA workshop series, and now the COMMA Conference, are regular annual events attracting participants from every continent. The journal Argument & Computation is dedicated to exploring the intersection between argumentation and computer science. ArgMining is a workshop series dedicated specifically to the related argument mining task.

Socratic method

Marcello Bacciarelli's Alcibiades Being Taught by Socrates (1776)

The Socratic method (also known as method of Elenchus, elenctic method, or Socratic debate) is a form of cooperative argumentative dialogue between individuals, based on asking and answering questions to stimulate critical thinking and to draw out ideas and underlying presuppositions. It is named after the Classical Greek philosopher Socrates and is introduced by him in Plato's Theaetetus as midwifery (maieutics) because it is employed to bring out definitions implicit in the interlocutors' beliefs, or to help them further their understanding.

The Socratic method is a method of hypothesis elimination, in that better hypotheses are found by steadily identifying and eliminating those that lead to contradictions.

The Socratic method searches for general commonly held truths that shape beliefs and scrutinizes them to determine their consistency with other beliefs. The basic form is a series of questions formulated as tests of logic and fact intended to help a person or group discover their beliefs about some topic, explore definitions, and characterize general characteristics shared by various particular instances.

Development

In the second half of the 5th century BCE, sophists were teachers who specialized in using the tools of philosophy and rhetoric to entertain, impress, or persuade an audience to accept the speaker's point of view. Socrates promoted an alternative method of teaching, which came to be called the Socratic method.

Socrates began to engage in such discussions with his fellow Athenians after his friend from youth, Chaerephon, visited the Oracle of Delphi, which asserted that no man in Greece was wiser than Socrates. Socrates saw this as a paradox, and began using the Socratic method to answer his conundrum. Diogenes Laërtius, however, wrote that Protagoras invented the "Socratic" method.

Plato famously formalized the Socratic elenctic style in prose—presenting Socrates as the curious questioner of some prominent Athenian interlocutor—in some of his early dialogues, such as Euthyphro and Ion, and the method is most commonly found within the so-called "Socratic dialogues", which generally portray Socrates engaging in the method and questioning his fellow citizens about moral and epistemological issues. But in his later dialogues, such as Theaetetus or Sophist, Plato adopted a different approach to philosophical discussion, namely dialectic.

Method

Elenchus (Ancient Greek: ἔλεγχος, romanized: elenkhos, lit. 'argument of disproof or refutation; cross-examining, testing, scrutiny, esp. for purposes of refutation') is the central technique of the Socratic method. The Latin form elenchus (plural elenchi) is used in English as the technical philosophical term. The most common adjectival form in English is elenctic; elenchic and elenchtic are also current. The technique is especially prominent in Plato's early dialogues.

In Plato's early dialogues, the elenchus is the technique Socrates uses to investigate, for example, the nature or definition of ethical concepts such as justice or virtue. According to Vlastos, it has the following steps:

  1. Socrates' interlocutor asserts a thesis, for example "Courage is endurance of the soul".
  2. Socrates decides whether the thesis is false and targets it for refutation.
  3. Socrates secures his interlocutor's agreement to further premises, for example "Courage is a fine thing" and "Ignorant endurance is not a fine thing".
  4. Socrates then argues, and the interlocutor agrees, that these further premises imply the contrary of the original thesis; in this case, it leads to: "courage is not endurance of the soul".
  5. Socrates then claims he has shown his interlocutor's thesis is false and its negation is true.

One elenctic examination can lead to a new, more refined, examination of the concept being considered, in this case it invites an examination of the claim: "Courage is wise endurance of the soul". Most Socratic inquiries consist of a series of elenchi and typically end in puzzlement known as aporia.
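The refutation pattern in Vlastos' steps can be sketched in propositional form. Under a simplified encoding (an assumption for illustration, not Vlastos' own formalism), the agreed premises jointly entail the negation of the thesis, which can be checked by brute force over truth assignments:

```python
# Sketch of the elenchus as propositional entailment. Atoms:
#   E = "courage is endurance of the soul" (the thesis),
#   F = "courage is a fine thing",
#   I = "ignorant endurance is a fine thing".
# Simplified premise schema: if E and F then I; not I; and F.
from itertools import product

def premises(E, F, I):
    return ((not (E and F)) or I) and (not I) and F

def entails_not_thesis():
    # The premises entail not-E iff no truth assignment makes the
    # premises true while E is also true.
    return all(not (premises(E, F, I) and E)
               for E, F, I in product([True, False], repeat=3))

print(entails_not_thesis())  # True
```

The check mirrors step 5: granting the further premises leaves no consistent way to maintain the original thesis.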

Frede[7] points out that Vlastos' conclusion in step #5 above makes nonsense of the aporetic nature of the early dialogues. Having shown that a proposed thesis is false is insufficient to conclude that some other competing thesis must be true. Rather, the interlocutors have reached aporia, an improved state of still not knowing what to say about the subject under discussion.

The exact nature of the elenchus is subject to a great deal of debate, in particular concerning whether it is a positive method, leading to knowledge, or a negative method used solely to refute false claims to knowledge.

W. K. C. Guthrie in The Greek Philosophers sees it as an error to regard the Socratic method as a means by which one seeks the answer to a problem, or knowledge. Guthrie claims that the Socratic method actually aims to demonstrate one's ignorance. Socrates, unlike the Sophists, did believe that knowledge was possible, but believed that the first step to knowledge was recognition of one's ignorance. Guthrie writes, "[Socrates] was accustomed to say that he did not himself know anything, and that the only way in which he was wiser than other men was that he was conscious of his own ignorance, while they were not. The essence of the Socratic method is to convince the interlocutor that whereas he thought he knew something, in fact he does not."

Application

Socrates generally applied his method of examination to concepts that seem to lack any concrete definition; e.g., the key moral concepts at the time, the virtues of piety, wisdom, temperance, courage, and justice. Such an examination challenged the implicit moral beliefs of the interlocutors, bringing out inadequacies and inconsistencies in their beliefs, and usually resulting in aporia. In view of such inadequacies, Socrates himself professed his ignorance, but others still claimed to have knowledge. Socrates believed that his awareness of his ignorance made him wiser than those who, though ignorant, still claimed knowledge. While this belief seems paradoxical at first glance, it in fact allowed Socrates to discover his own errors where others might assume they were correct. This claim was based on a reported Delphic oracular pronouncement that no man was wiser than Socrates.

Socrates used this claim of wisdom as the basis of his moral exhortation. Accordingly, he claimed that the chief goodness consists in the caring of the soul concerned with moral truth and moral understanding, that "wealth does not bring goodness, but goodness brings wealth and every other blessing, both to the individual and to the state", and that "life without examination [dialogue] is not worth living". It is with this in mind that the Socratic method is employed.

The motive for the modern usage of this method and Socrates' use are not necessarily equivalent. Socrates rarely used the method to actually develop consistent theories, instead using myth to explain them. The Parmenides dialogue shows Parmenides using the Socratic method to point out the flaws in the Platonic theory of forms, as presented by Socrates; it is not the only dialogue in which theories normally expounded by Plato/Socrates are broken down through dialectic. Instead of arriving at answers, the method was used to break down the theories we hold, to go "beyond" the axioms and postulates we take for granted. Therefore, myth and the Socratic method are not meant by Plato to be incompatible; they have different purposes, and are often described as the "left hand" and "right hand" paths to good and wisdom.

Socratic seminar

A Socratic seminar (also known as a Socratic circle) is a pedagogical approach based on the Socratic method and uses a dialogic approach to understand information in a text. Its systematic procedure is used to examine a text through questions and answers founded on the beliefs that all new knowledge is connected to prior knowledge, that all thinking comes from asking questions, and that asking one question should lead to asking further questions. A Socratic seminar is not a debate. The goal of this activity is to have participants work together to construct meaning and arrive at an answer, not for one student or one group to "win the argument".

This approach is based on the belief that participants seek and gain deeper understanding of concepts in the text through thoughtful dialogue rather than memorizing information that has been provided for them. While Socratic seminars can differ in structure, and even in name, they typically involve a passage of text that students must read beforehand, which serves to facilitate the dialogue. Sometimes, a facilitator will structure two concentric circles of students: an outer circle and an inner circle. The inner circle focuses on exploring and analysing the text through the act of questioning and answering. During this phase, the outer circle remains silent. Students in the outer circle are much like scientific observers watching and listening to the conversation of the inner circle. When the text has been fully discussed and the inner circle is finished talking, the outer circle provides feedback on the dialogue that took place. This process alternates with the inner circle students going to the outer circle for the next meeting and vice versa. The length of this process varies depending on the text used for the discussion. The teacher may decide to alternate groups within one meeting, or they may alternate at each separate meeting.

The most significant difference between this activity and most typical classroom activities involves the role of the teacher. In Socratic seminar, the students lead the discussion and questioning. The teacher's role is to ensure the discussion advances regardless of the particular direction the discussion takes.

Various approaches to Socratic seminar

Teachers use Socratic seminar in different ways. The structure it takes may look different in each classroom. While this is not an exhaustive list, teachers may use one of the following structures to administer Socratic seminar:

  1. Inner/outer circle or fishbowl: Students need to be arranged in inner and outer circles. The inner circle engages in discussion about the text. The outer circle observes the inner circle, while taking notes. The outer circle shares their observations and questions the inner circle with guidance from the teacher/facilitator. Students use constructive criticism as opposed to making judgements. The students on the outside keep track of topics they would like to discuss as part of the debrief. Participants in the outer circle can use an observation checklist or notes form to monitor the participants in the inner circle. These tools will provide structure for listening and give the outside members specific details to discuss later in the seminar. The teacher may also sit in the circle but at the same height as the students.
  2. Triad: Students are arranged so that each participant (called a "pilot") in the inner circle has two "co-pilots" sitting behind them on either side. Pilots are the speakers because they are in the inner circle; co-pilots are in the outer circle and only speak during consultation. The seminar proceeds as any other seminar. At a point in the seminar, the facilitator pauses the discussion and instructs the triad to talk to each other. Conversation will be about topics that need more in-depth discussion or a question posed by the leader. Sometimes triads will be asked by the facilitator to come up with a new question. Any time during a triad conversation, group members can switch seats and one of the co-pilots can sit in the pilot's seat. Only during that time is the switching of seats allowed. This structure allows students who may not yet have the confidence to speak in the large group to participate. This type of seminar involves all students instead of just the students in the inner and outer circles.
  3. Simultaneous seminars: Students are arranged in multiple small groups and placed as far as possible from each other. Following the guidelines of the Socratic seminar, students engage in small group discussions. Simultaneous seminars are typically done with experienced students who need little guidance and can engage in a discussion without assistance from a teacher/facilitator. According to the literature, this type of seminar is beneficial for teachers who want students to explore a variety of texts around a main issue or topic. Each small group may have a different text to read/view and discuss. A larger Socratic seminar can then occur as a discussion about how each text corresponds with one another. Simultaneous Seminars can also be used for a particularly difficult text. Students can work through different issues and key passages from the text.

No matter what structure the teacher employs, the basic premise of the seminar/circles is to turn partial control and direction of the classroom over to the students. The seminars encourage students to work together, creating meaning from the text and to stay away from trying to find a correct interpretation. The emphasis is on critical and creative thinking.

Text selection

Socratic seminar texts

A Socratic seminar text is a tangible document that creates a thought-provoking discussion. The text ought to be appropriate for the participants' current level of intellectual and social development. It provides the anchor for dialogue whereby the facilitator can bring the participants back to the text if they begin to digress. Furthermore, the seminar text enables the participants to create a level playing field – ensuring that the dialogical tone within the classroom remains consistent and pure to the subject or topic at hand. Some practitioners argue that "texts" do not have to be confined to printed texts, but can include artifacts such as objects, physical spaces, and the like.

Pertinent elements of an effective Socratic text

Socratic seminar texts are able to challenge participants' thinking skills by having these characteristics:

  1. Ideas and values: The text must introduce ideas and values that are complex and difficult to summarize. Powerful discussions arise from personal connections to abstract ideas and from implications to personal values.
  2. Complexity and challenge: The text must be rich in ideas and complexity and open to interpretation. Ideally it should require multiple readings, but should be neither far above the participants' intellectual level nor very long.
  3. Relevance to participants' curriculum: An effective text has identifiable themes that are recognizable and pertinent to the lives of the participants. Themes in the text should relate to the curriculum.
  4. Ambiguity: The text must be approachable from a variety of different perspectives, including perspectives that seem mutually exclusive, thus provoking critical thinking and raising important questions. The absence of right and wrong answers promotes a variety of discussion and encourages individual contributions.

Two different ways to select a text

Socratic texts can be divided into two main categories:

  1. Print texts (e.g., short stories, poems, and essays) and non-print texts (e.g. photographs, sculptures, and maps); and
  2. Subject area, which can draw from print or non-print artifacts. As examples, language arts can be approached through poems, history through written or oral historical speeches, science through policies on environmental issues, math through mathematical proofs, health through nutrition labels, and physical education through fitness guidelines.

Questioning methods

Socratic seminars are based upon the interaction of peers. The focus is to explore multiple perspectives on a given issue or topic. Socratic questioning is used to help students apply the activity to their learning. The pedagogy of Socratic questions is open-ended, focusing on broad, general ideas rather than specific, factual information. The questioning technique emphasizes a level of questioning and thinking where there is no single right answer.

Socratic seminars generally start with an open-ended question proposed either by the leader or by another participant. There is no designated first speaker; as individuals participate in Socratic dialogue, they gain experience that enables them to be effective in this role of initial questioner.

The leader keeps the topic focused by asking a variety of questions about the text itself, as well as questions to help clarify positions when arguments become confused. The leader also seeks to coax reluctant participants into the discussion, and to limit contributions from those who tend to dominate. She or he prompts participants to elaborate on their responses and to build on what others have said. The leader guides participants to deepen, clarify, and paraphrase, and to synthesize a variety of different views.

The participants share the responsibility with the leader to maintain the quality of the Socratic circle. They listen actively in order to respond effectively to what others have contributed. This teaches the participants to think and speak persuasively using the discussion to support their position. Participants must demonstrate respect for different ideas, thoughts and values, and must not interrupt each other.

Questions can be created individually or in small groups. All participants are given the opportunity to take part in the discussion. Socratic circles specify three types of questions to prepare:

  1. Opening questions generate discussion at the beginning of the seminar in order to elicit dominant themes.
  2. Guiding questions help deepen and elaborate the discussion, keeping contributions on topic and encouraging a positive atmosphere and consideration for others.
  3. Closing questions lead participants to summarize their thoughts and learning and personalize what they've discussed.

Psychotherapy

The Socratic method, in the form of Socratic questioning, has been adapted for psychotherapy, most prominently in classical Adlerian psychotherapy, logotherapy, rational emotive behavior therapy, cognitive therapy and reality therapy. It can be used to clarify meaning, feeling, and consequences, as well as to gradually unfold insight, or explore alternative actions.

The Socratic method has also recently inspired a new form of applied philosophy: Socratic dialogue, also called philosophical counseling. In Europe Gerd B. Achenbach is probably the best known practitioner, and Michel Weber has also proposed another variant of the practice.

Challenges and disadvantages

Scholars such as Peter Boghossian suggest that although the method improves creative and critical thinking, there is a flip side to the method. He states that the teachers who use this method wait for the students to make mistakes, thus creating negative feelings in the class, exposing the student to possible ridicule and humiliation.

Some have countered this thought by stating that the humiliation and ridicule is not caused by the method, but rather by the student's lack of knowledge. Boghossian mentions that even though the questions may be perplexing, they were not originally meant to be; such questions are instead intended to provoke the students, and can be answered by employing counterexamples.

Planck units

In particle physics and physical cosmology, Planck units are a set of units of measurement defined exclusively in terms of four universal physical constants, in such a manner that these physical constants take on the numerical value of 1 when expressed in terms of these units. Originally proposed in 1899 by German physicist Max Planck, these units are a system of natural units because their definition is based on properties of nature, more specifically the properties of free space, rather than a choice of prototype object. They are relevant in research on unified theories such as quantum gravity.

The term Planck scale refers to quantities of space, time, energy and other units that are similar in magnitude to corresponding Planck units. This region may be characterized by particle energies of around 10¹⁹ GeV or 10⁹ J, time intervals of around 10−43 s and lengths of around 10−35 m (approximately the energy-equivalent of the Planck mass, the Planck time and the Planck length, respectively). At the Planck scale, the predictions of the Standard Model, quantum field theory and general relativity are not expected to apply, and quantum effects of gravity are expected to dominate. The best-known example is represented by the conditions in the first 10−43 seconds of our universe after the Big Bang, approximately 13.8 billion years ago.

The four universal constants that, by definition, have a numeric value 1 when expressed in these units are:

  1. the speed of light in vacuum, c,
  2. the gravitational constant, G,
  3. the reduced Planck constant, ħ, and
  4. the Boltzmann constant, kB.

Planck units do not incorporate an electromagnetic dimension. Some authors choose to extend the system to electromagnetism by, for example, adding either the Coulomb constant (ke = 1/4πε0) or the electric constant (ε0) to this list. Similarly, some authors choose to use variants of the system that give other numeric values to one or more of the four constants above.

Introduction

Any system of measurement may be assigned a mutually independent set of base quantities and associated base units, from which all other quantities and units may be derived. In the International System of Units, for example, the SI base quantities include length with the associated unit of the metre. In the system of Planck units, a similar set of base quantities and associated units may be selected, in terms of which other quantities and coherent units may be expressed. The Planck unit of length has become known as the Planck length, and the Planck unit of time is known as the Planck time, but this nomenclature has not been established as extending to all quantities.

All Planck units are derived from the dimensional universal physical constants that define the system, and in a convention in which these units are omitted (i.e. treated as having the dimensionless value 1), these constants are then eliminated from equations of physics in which they appear. For example, Newton's law of universal gravitation,

    F = \frac{G m_1 m_2}{r^2},

can be expressed as:

    \frac{F}{F_\mathrm{P}} = \frac{(m_1 / m_\mathrm{P})(m_2 / m_\mathrm{P})}{(r / l_\mathrm{P})^2}.

Both equations are dimensionally consistent and equally valid in any system of quantities, but the second equation, with G absent, is relating only dimensionless quantities, since any ratio of two like-dimensioned quantities is a dimensionless quantity. If, by a shorthand convention, it is understood that each physical quantity is the corresponding ratio with a coherent Planck unit (or "expressed in Planck units"), the ratios above may be expressed simply with the symbols of physical quantity, without being scaled explicitly by their corresponding unit:

    F' = \frac{m_1' m_2'}{r'^2}.

This last equation (without G) is valid with F′, m1′, m2′, and r′ being the dimensionless ratio quantities corresponding to the standard quantities, written e.g. F′ ≘ F or F′ = F/FP, but not as a direct equality of quantities. This may seem to be "setting the constants c, G, etc., to 1" if the correspondence of the quantities is thought of as equality. For this reason, Planck or other natural units should be employed with care. Referring to "G = c = 1", Paul S. Wesson wrote that, "Mathematically it is an acceptable trick which saves labour. Physically it represents a loss of information and can lead to confusion."
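The shorthand described above, dividing each quantity by its coherent Planck unit so that G drops out, can be checked numerically. A sketch, using CODATA-style constant values:

```python
# Numerical check: dividing each quantity in Newton's law by its
# coherent Planck unit removes G from the equation.
import math

G = 6.67430e-11         # gravitational constant, m^3 kg^-1 s^-2
c = 2.99792458e8        # speed of light, m/s
hbar = 1.054571817e-34  # reduced Planck constant, J s

l_P = math.sqrt(hbar * G / c**3)  # Planck length
m_P = math.sqrt(hbar * c / G)     # Planck mass
F_P = c**4 / G                    # Planck force

# Two 1 kg masses 1 m apart.
m1, m2, r = 1.0, 1.0, 1.0
F = G * m1 * m2 / r**2

# Dimensionless form: F' = m1' m2' / r'^2, with no G anywhere.
F_prime = (m1 / m_P) * (m2 / m_P) / (r / l_P)**2

print(math.isclose(F / F_P, F_prime, rel_tol=1e-9))  # True
```

Algebraically the agreement is exact, since lP²/mP² = G²/c⁴; the numerical check only confirms the bookkeeping.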

History and definition

The concept of natural units was introduced in 1874, when George Johnstone Stoney, noting that electric charge is quantized, derived units of length, time, and mass, now named Stoney units in his honor. Stoney chose his units so that G, c, and the electron charge e would be numerically equal to 1. In 1899, one year before the advent of quantum theory, Max Planck introduced what later became known as the Planck constant. At the end of the paper, he proposed the base units that were later named in his honor. The Planck units are based on the quantum of action, now usually known as the Planck constant, which appeared in the Wien approximation for black-body radiation. Planck underlined the universality of the new unit system, writing:

... die Möglichkeit gegeben ist, Einheiten für Länge, Masse, Zeit und Temperatur aufzustellen, welche, unabhängig von speciellen Körpern oder Substanzen, ihre Bedeutung für alle Zeiten und für alle, auch ausserirdische und aussermenschliche Culturen nothwendig behalten und welche daher als »natürliche Maasseinheiten« bezeichnet werden können.

... it is possible to set up units for length, mass, time and temperature, which are independent of special bodies or substances, necessarily retaining their meaning for all times and for all civilizations, including extraterrestrial and non-human ones, which can be called "natural units of measure".

Planck considered only the units based on the universal constants G, h, c, and kB to arrive at natural units for length, time, mass, and temperature. His definitions differ from the modern ones by a factor of √(2π), because the modern definitions use ħ rather than h.

Table 1: Modern values for Planck's original choice of quantities
Name                Dimension        Expression           Value (SI units)
Planck length       length (L)       lP = √(ħG/c³)        1.616255(18)×10−35 m
Planck mass         mass (M)         mP = √(ħc/G)         2.176434(24)×10−8 kg
Planck time         time (T)         tP = √(ħG/c⁵)        5.391247(60)×10−44 s
Planck temperature  temperature (Θ)  TP = √(ħc⁵/(GkB²))   1.416784(16)×10³² K
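The values in Table 1 can be recomputed directly from the defining constants. A sketch using 2018 CODATA values (only G carries appreciable uncertainty, so the results match the table to within that uncertainty):

```python
# Recompute the base Planck units from the defining constants.
import math

c = 2.99792458e8        # speed of light, m/s (exact in SI)
hbar = 1.054571817e-34  # reduced Planck constant, J s (h / 2 pi)
G = 6.67430e-11         # gravitational constant, m^3 kg^-1 s^-2
k_B = 1.380649e-23      # Boltzmann constant, J/K (exact in SI)

l_P = math.sqrt(hbar * G / c**3)             # Planck length, m
m_P = math.sqrt(hbar * c / G)                # Planck mass, kg
t_P = math.sqrt(hbar * G / c**5)             # Planck time, s
T_P = math.sqrt(hbar * c**5 / (G * k_B**2))  # Planck temperature, K

print(f"{l_P:.6e} m")   # ~1.616255e-35 m
print(f"{m_P:.6e} kg")  # ~2.176434e-8 kg
print(f"{t_P:.6e} s")   # ~5.391247e-44 s
print(f"{T_P:.6e} K")   # ~1.416784e+32 K
```

Note that tP = lP/c, as expected for a coherent unit system.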

Unlike the case with the International System of Units, there is no official entity that establishes a definition of a Planck unit system. Some authors define the base Planck units to be those of mass, length and time, regarding an additional unit for temperature to be redundant. Other tabulations add, in addition to a unit for temperature, a unit for electric charge, so that either the Coulomb constant or the vacuum permittivity is normalized to 1. Thus, depending on the author's choice, this charge unit is given by

    qP = √(4πε0ħc)

for ke = 1, or

    qP = √(ε0ħc)

for ε0 = 1.[note 2] Some of these tabulations also replace mass with energy when doing so.

The Planck charge, as well as other electromagnetic units that can be defined like resistance and magnetic flux, are more difficult to interpret than Planck's original units and are used less frequently.

In SI units, the values of c, h, e and kB are exact and the values of ε0 and G in SI units respectively have relative uncertainties of 1.5×10−10 and 2.2×10−5. Hence, the uncertainties in the SI values of the Planck units derive almost entirely from uncertainty in the SI value of G.

Compared to Stoney units, Planck base units are all 1/√α ≈ 11.7 times larger, where α is the fine-structure constant.
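The Stoney–Planck comparison can be verified numerically. Assuming the standard definitions of the Stoney length and the fine-structure constant, the ratio works out to 1/√α ≈ 11.7 (a sketch; constant values are CODATA-style approximations):

```python
# Ratio of Planck units to Stoney units: l_P / l_S = 1 / sqrt(alpha).
import math

G = 6.67430e-11         # gravitational constant
c = 2.99792458e8        # speed of light
hbar = 1.054571817e-34  # reduced Planck constant
e = 1.602176634e-19     # elementary charge, C (exact in SI)
eps0 = 8.8541878128e-12 # vacuum permittivity, F/m

l_P = math.sqrt(hbar * G / c**3)                         # Planck length
l_S = math.sqrt(G * e**2 / (4 * math.pi * eps0 * c**4))  # Stoney length

alpha = e**2 / (4 * math.pi * eps0 * hbar * c)  # fine-structure constant
print(round(l_P / l_S, 2))            # 11.71
print(round(1 / math.sqrt(alpha), 2)) # 11.71
```

The same ratio applies to the mass and time units, since the ħ-versus-e²/(4πε0c) substitution is the only difference between the two systems.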

Derived units

In any system of measurement, units for many physical quantities can be derived from base units. Table 2 offers a sample of derived Planck units, some of which are seldom used. As with the base units, their use is mostly confined to theoretical physics because most of them are too large or too small for empirical or practical use and there are large uncertainties in their values.

Table 2: Coherent derived units of Planck units
Derived unit of           Expression   Approximate SI equivalent
area (L2)                 lP²          2.6121×10−70 m2
volume (L3)               lP³          4.2217×10−105 m3
momentum (LMT−1)          mP·c         6.5249 kg⋅m/s
energy (L2MT−2)           mP·c²        1.9561×10⁹ J
force (LMT−2)             c⁴/G         1.2103×10⁴⁴ N
density (L−3M)            mP/lP³       5.1550×10⁹⁶ kg/m3
acceleration (LT−2)       c/tP         5.5608×10⁵¹ m/s2
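The derived units above can be reproduced as coherent combinations of the base units. A sketch, again with CODATA-style constants:

```python
# Reproduce the Table 2 derived Planck units from the base units.
import math

G, c, hbar = 6.67430e-11, 2.99792458e8, 1.054571817e-34
l_P = math.sqrt(hbar * G / c**3)  # Planck length
m_P = math.sqrt(hbar * c / G)     # Planck mass
t_P = l_P / c                     # Planck time

derived = {
    "area (m^2)":           l_P**2,
    "volume (m^3)":         l_P**3,
    "momentum (kg m/s)":    m_P * c,
    "energy (J)":           m_P * c**2,
    "force (N)":            c**4 / G,
    "density (kg/m^3)":     m_P / l_P**3,
    "acceleration (m/s^2)": c / t_P,
}
for name, value in derived.items():
    print(f"{name}: {value:.4e}")
```

As the text notes, the energy and momentum values land in the range of everyday phenomena (roughly the chemical energy in a tank of fuel, and the momentum of a thrown ball, respectively), while the others are far outside practical scales.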

Some Planck units, such as of time and length, are many orders of magnitude too large or too small to be of practical use, so that Planck units as a system are typically only relevant to theoretical physics. In some cases, a Planck unit may suggest a limit to a range of a physical quantity where present-day theories of physics apply. For example, our understanding of the Big Bang does not extend to the Planck epoch, i.e., when the universe was less than one Planck time old. Describing the universe during the Planck epoch requires a theory of quantum gravity that would incorporate quantum effects into general relativity. Such a theory does not yet exist.

Several quantities are not "extreme" in magnitude, such as the Planck mass, which is about 22 micrograms: very large in comparison with subatomic particles, and within the mass range of living organisms. Similarly, the related units of energy and of momentum are in the range of some everyday phenomena.

Significance

Planck units have little anthropocentric arbitrariness, but do still involve some arbitrary choices in terms of the defining constants. Unlike the metre and second, which exist as base units in the SI system for historical reasons, the Planck length and Planck time are conceptually linked at a fundamental physical level. Consequently, natural units help physicists to reframe questions. Frank Wilczek puts it succinctly:

We see that the question [posed] is not, "Why is gravity so feeble?" but rather, "Why is the proton's mass so small?" For in natural (Planck) units, the strength of gravity simply is what it is, a primary quantity, while the proton's mass is the tiny number 1/13 quintillion.

While it is true that the electrostatic repulsive force between two protons (alone in free space) greatly exceeds the gravitational attractive force between the same two protons, this is not about the relative strengths of the two fundamental forces. From the point of view of Planck units, this is comparing apples with oranges, because mass and electric charge are incommensurable quantities. Rather, the disparity of magnitude of force is a manifestation of the fact that the charge on the protons is approximately the unit charge but the mass of the protons is far less than the unit mass.

Planck scale

In particle physics and physical cosmology, the Planck scale is an energy scale around 1.22×10¹⁹ GeV (the Planck energy, the energy equivalent of the Planck mass, 2.17645×10−8 kg) at which quantum effects of gravity become significant. At this scale, present descriptions and theories of sub-atomic particle interactions in terms of quantum field theory break down and become inadequate, due to the impact of the apparent non-renormalizability of gravity within current theories.

Relationship to gravity

At the Planck length scale, the strength of gravity is expected to become comparable with the other forces, and it has been theorized that all the fundamental forces are unified at that scale, but the exact mechanism of this unification remains unknown. The Planck scale is therefore the point at which the effects of quantum gravity can no longer be ignored in other fundamental interactions, where current calculations and approaches begin to break down, and a means to take account of its impact is necessary. On these grounds, it has been speculated that it may be an approximate lower limit at which a black hole could be formed by collapse.

While physicists have a fairly good understanding of the other fundamental interactions of forces on the quantum level, gravity is problematic, and cannot be integrated with quantum mechanics at very high energies using the usual framework of quantum field theory. At lesser energy levels it is usually ignored, while for energies approaching or exceeding the Planck scale, a new theory of quantum gravity is necessary. Approaches to this problem include string theory and M-theory, loop quantum gravity, noncommutative geometry, and causal set theory.

In cosmology

In Big Bang cosmology, the Planck epoch or Planck era is the earliest stage of the Big Bang, before the time passed was equal to the Planck time, tP, or approximately 10−43 seconds. There is no currently available physical theory to describe such short times, and it is not clear in what sense the concept of time is meaningful for values smaller than the Planck time. It is generally assumed that quantum effects of gravity dominate physical interactions at this time scale. At this scale, the unified force of the Standard Model is assumed to be unified with gravitation. Immeasurably hot and dense, the state of the Planck epoch was succeeded by the grand unification epoch, where gravitation is separated from the unified force of the Standard Model, in turn followed by the inflationary epoch, which ended after about 10−32 seconds (or about 10¹¹ tP).

Table 3 lists properties of the observable universe today expressed in Planck units.

Table 3: Today's universe in Planck units

Age: 8.08 × 10⁶⁰ tP (4.35 × 10¹⁷ s, or 1.38 × 10¹⁰ years)
Diameter: 5.4 × 10⁶¹ lP (8.7 × 10²⁶ m, or 9.2 × 10¹⁰ light-years)
Mass: approx. 10⁶⁰ mP (3 × 10⁵² kg, or 1.5 × 10²² solar masses, counting only stars; about 10⁸⁰ protons, a count sometimes known as the Eddington number)
Density: 1.8 × 10⁻¹²³ mP lP⁻³ (9.9 × 10⁻²⁷ kg⋅m⁻³)
Temperature: 1.9 × 10⁻³² TP (2.725 K, the temperature of the cosmic microwave background radiation)
Cosmological constant: ≈ 10⁻¹²² lP⁻² (≈ 10⁻⁵² m⁻²)
Hubble constant: ≈ 10⁻⁶¹ tP⁻¹ (≈ 10⁻¹⁸ s⁻¹, or ≈ 10² (km/s)/Mpc)
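The table's first row can be reproduced directly by dividing the universe's age in seconds by the Planck time. A minimal check in Python, using CODATA values for the constants and the age in seconds from the table:

```python
import math

# CODATA 2018 values (SI units)
hbar = 1.054571817e-34  # reduced Planck constant, J*s
G = 6.67430e-11         # gravitational constant, m^3 kg^-1 s^-2
c = 2.99792458e8        # speed of light, m/s (exact)

t_P = math.sqrt(hbar * G / c**5)   # Planck time, ~5.39e-44 s

age_s = 4.35e17                    # age of the universe in seconds (from Table 3)
age_tP = age_s / t_P
print(f"{age_tP:.2e} t_P")         # ~8.1e60, matching the table
```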

After the measurement of the cosmological constant (Λ) in 1998, estimated at 10⁻¹²² in Planck units, it was noted that this value is suggestively close to the reciprocal of the age of the universe (T) squared. Barrow and Shaw proposed a modified theory in which Λ is a field evolving in such a way that its value remains Λ ~ T⁻² throughout the history of the universe.
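The numerical coincidence is easy to check: with the age of the universe of about 8.08 × 10⁶⁰ Planck times, the reciprocal of the age squared is indeed of order 10⁻¹²². A one-line sketch:

```python
T = 8.08e60             # age of the universe in Planck times (from Table 3)
inv_T2 = 1 / T**2       # reciprocal of the age squared
print(f"{inv_T2:.1e}")  # of order 1e-122, close to the measured cosmological constant
```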

Analysis of the units

Planck length

The Planck length, denoted ℓP, is a unit of length defined as:

ℓP = √(ħG/c³)

It is equal to 1.616255(18)×10⁻³⁵ m, where the two digits enclosed by parentheses are the estimated standard error associated with the reported numerical value, or about 10⁻²⁰ times the diameter of a proton. It can be motivated in various ways, such as considering a particle whose reduced Compton wavelength is comparable to its Schwarzschild radius, though whether those concepts are in fact simultaneously applicable is open to debate. (The same heuristic argument simultaneously motivates the Planck mass.)
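The quoted value can be reproduced from the standard definition ℓP = √(ħG/c³) and CODATA values of the constants; a minimal check in Python:

```python
import math

# CODATA 2018 values (SI units)
hbar = 1.054571817e-34  # reduced Planck constant, J*s
G = 6.67430e-11         # gravitational constant, m^3 kg^-1 s^-2
c = 2.99792458e8        # speed of light, m/s (exact)

l_P = math.sqrt(hbar * G / c**3)  # Planck length
print(f"{l_P:.4e} m")             # ~1.6163e-35 m
```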

The Planck length is a distance scale of interest in speculations about quantum gravity. The Bekenstein–Hawking entropy of a black hole is one-fourth the area of its event horizon in units of Planck length squared. Since the 1950s, it has been conjectured that quantum fluctuations of the spacetime metric might make the familiar notion of distance inapplicable below the Planck length. This is sometimes expressed by saying that "spacetime becomes a foam at the Planck scale". It is possible that the Planck length is the shortest physically measurable distance, since any attempt to investigate the possible existence of shorter distances, by performing higher-energy collisions, would result in black hole production. Higher-energy collisions, rather than splitting matter into finer pieces, would simply produce bigger black holes.
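The Bekenstein–Hawking formula above can be illustrated with a quick estimate for a one-solar-mass black hole; the solar mass value below is an illustrative assumption, not part of the original text:

```python
import math

hbar, G, c = 1.054571817e-34, 6.67430e-11, 2.99792458e8  # CODATA, SI units
l_P = math.sqrt(hbar * G / c**3)   # Planck length, m

M_sun = 1.989e30                   # one solar mass, kg (illustrative value)
r_s = 2 * G * M_sun / c**2         # Schwarzschild radius, ~3 km
A = 4 * math.pi * r_s**2           # horizon area, m^2
S = A / (4 * l_P**2)               # entropy: one-fourth the area in Planck lengths squared
print(f"S ≈ {S:.1e} k_B")          # of order 1e77 in units of Boltzmann's constant
```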

The strings of string theory are modeled to be on the order of the Planck length. In theories with large extra dimensions, the Planck length calculated from the observed value of G can be smaller than the true, fundamental Planck length.

Planck time

The Planck time tP is the time required for light to travel a distance of 1 Planck length in vacuum, which is a time interval of approximately 5.39×10⁻⁴⁴ s. No current physical theory can describe timescales shorter than the Planck time, such as the earliest events after the Big Bang. Some conjecture that the structure of time need not remain smooth on intervals comparable to the Planck time.
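Since tP = ℓP/c, equivalently √(ħG/c⁵), the value is straightforward to verify numerically:

```python
import math

hbar, G, c = 1.054571817e-34, 6.67430e-11, 2.99792458e8  # CODATA, SI units
t_P = math.sqrt(hbar * G / c**5)  # Planck time = Planck length / c
print(f"{t_P:.3e} s")             # ~5.391e-44 s
```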

Planck energy

The Planck energy EP is approximately equal to the energy released in the combustion of the fuel in an automobile fuel tank (57.2 L at 34.2 MJ/L of chemical energy). The ultra-high-energy cosmic ray observed in 1991 had a measured energy of about 50 J, equivalent to about 2.5×10⁻⁸ EP.
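Both figures in the fuel-tank comparison are easy to reproduce: EP = √(ħc⁵/G), and the tank's chemical energy is 57.2 L × 34.2 MJ/L. A Python check:

```python
import math

hbar, G, c = 1.054571817e-34, 6.67430e-11, 2.99792458e8  # CODATA, SI units
E_P = math.sqrt(hbar * c**5 / G)   # Planck energy, J
tank = 57.2 * 34.2e6               # 57.2 L of fuel at 34.2 MJ/L, in joules
print(E_P, tank)                   # both ~1.956e9 J, agreeing to better than 0.1%
```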

Proposals for theories of doubly special relativity posit that, in addition to the speed of light, an energy scale is also invariant for all inertial observers. Typically, this energy scale is chosen to be the Planck energy.

Planck unit of force

The Planck unit of force may be thought of as the derived unit of force in the Planck system if the Planck units of time, length, and mass are considered to be base units.

It is the gravitational attractive force of two bodies of 1 Planck mass each that are held 1 Planck length apart. One convention for the Planck charge is to choose it so that the electrostatic repulsion of two objects with Planck charge and mass that are held 1 Planck length apart balances the Newtonian attraction between them.
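The equivalence described above, that the Newtonian attraction of two Planck masses one Planck length apart equals the Planck force c⁴/G, follows algebraically from the definitions and can be confirmed numerically:

```python
import math

hbar, G, c = 1.054571817e-34, 6.67430e-11, 2.99792458e8  # CODATA, SI units
m_P = math.sqrt(hbar * c / G)      # Planck mass, kg
l_P = math.sqrt(hbar * G / c**3)   # Planck length, m

F_P = c**4 / G                     # Planck force, ~1.21e44 N
F_grav = G * m_P**2 / l_P**2       # Newtonian attraction at one Planck length
print(F_P, F_grav)                 # equal, up to floating-point rounding
```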

Some authors have argued that the Planck force is on the order of the maximum force that can occur between two bodies. However, the validity of these conjectures has been disputed.

Planck temperature

The Planck temperature TP is 1.416784(16)×10³² K. At this temperature, the wavelength of light emitted by thermal radiation reaches the Planck length. There are no known physical models able to describe temperatures greater than TP; a quantum theory of gravity would be required to model the extreme energies attained. Hypothetically, a system in thermal equilibrium at the Planck temperature might contain Planck-scale black holes, constantly being formed from thermal radiation and decaying via Hawking evaporation. Adding energy to such a system might decrease its temperature by creating larger black holes, whose Hawking temperature is lower.
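The Planck temperature follows from the other units as TP = EP/kB = √(ħc⁵/(G kB²)); a numerical check:

```python
import math

hbar, G, c = 1.054571817e-34, 6.67430e-11, 2.99792458e8  # CODATA, SI units
k_B = 1.380649e-23                 # Boltzmann constant, J/K (exact)

T_P = math.sqrt(hbar * c**5 / (G * k_B**2))  # Planck temperature
print(f"{T_P:.4e} K")                        # ~1.4168e32 K
```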

Nondimensionalized equations

Physical quantities that have different dimensions (such as time and length) cannot be equated even if they are numerically equal (e.g., 1 second is not the same as 1 metre). In theoretical physics, however, this scruple may be set aside, by a process called nondimensionalization. The effective result is that many fundamental equations of physics, which often include some of the constants used to define Planck units, become equations where these constants are replaced by a 1.

Examples include the energy–momentum relation E² = (pc)² + (mc²)², which becomes E² = p² + m², and the Dirac equation (iħγ^μ ∂_μ − mc)ψ = 0, which becomes (iγ^μ ∂_μ − m)ψ = 0.

Alternative choices of normalization

As already stated above, Planck units are derived by "normalizing" the numerical values of certain fundamental constants to 1. These normalizations are neither the only ones possible nor necessarily the best. Moreover, the choice of what factors to normalize, among the factors appearing in the fundamental equations of physics, is not evident, and the values of the Planck units are sensitive to this choice.

The factor 4π is ubiquitous in theoretical physics because in three-dimensional space, the surface area of a sphere of radius r is 4πr². This, along with the concept of flux, is the basis for the inverse-square law, Gauss's law, and the divergence operator applied to flux density. For example, gravitational and electrostatic fields produced by point objects have spherical symmetry, so the electric flux through a sphere of radius r around a point charge is distributed uniformly over that sphere. From this it follows that a factor of 4πr² appears in the denominator of Coulomb's law in rationalized form. (Both the numerical factor and the power of the dependence on r would change if space were higher-dimensional; the correct expressions can be deduced from the geometry of higher-dimensional spheres.) Likewise for Newton's law of universal gravitation: a factor of 4π naturally appears in Poisson's equation when relating the gravitational potential to the distribution of matter.

Hence a substantial body of physical theory developed since Planck's 1899 paper suggests normalizing not G but 4πG (or 8πG) to 1. Doing so would introduce a factor of 1/4π (or 1/8π) into the nondimensionalized form of the law of universal gravitation, consistent with the modern rationalized formulation of Coulomb's law in terms of the vacuum permittivity. In fact, alternative normalizations frequently preserve the factor of 1/4π in the nondimensionalized form of Coulomb's law as well, so that the nondimensionalized Maxwell's equations for electromagnetism and gravitoelectromagnetism both take the same form as those for electromagnetism in SI, which do not have any factors of 4π. When this is applied to electromagnetic constants, ε0, this unit system is called "rationalized". When applied additionally to gravitation and Planck units, these are called rationalized Planck units and are seen in high-energy physics.

The rationalized Planck units are defined so that c = 4πG = ħ = ε0 = kB = 1.
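Normalizing 4πG to 1 instead of G rescales the base units; for example, under this convention the rationalized Planck length works out to √(4π) ≈ 3.545 times the conventional one. A sketch of the comparison:

```python
import math

hbar, G, c = 1.054571817e-34, 6.67430e-11, 2.99792458e8  # CODATA, SI units
l_P = math.sqrt(hbar * G / c**3)                     # conventional Planck length (G = 1)
l_P_rat = math.sqrt(4 * math.pi * hbar * G / c**3)   # rationalized version (4*pi*G = 1)
print(l_P_rat / l_P)                                 # sqrt(4*pi) ≈ 3.5449
```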

There are several possible alternative normalizations.

Gravitational constant

In 1899, Newton's law of universal gravitation was still seen as exact, rather than as a convenient approximation holding for "small" velocities and masses (the approximate nature of Newton's law was shown following the development of general relativity in 1915). Hence Planck normalized the gravitational constant G in Newton's law to 1. In theories emerging after 1899, G nearly always appears in formulae multiplied by 4π or a small integer multiple thereof. Hence, a choice to be made when designing a system of natural units is which, if any, instances of 4π appearing in the equations of physics are to be eliminated via the normalization.
