Friday, January 27, 2023

Legal informatics

Defeasible reasoning

From Wikipedia, the free encyclopedia

In philosophical logic, defeasible reasoning is a kind of reasoning that is rationally compelling, though not deductively valid. It usually occurs when a rule is given, but there may be specific exceptions to the rule, or subclasses that are subject to a different rule. Defeasibility is found in literatures concerned with argument and the process of argument, or with heuristic reasoning.

Defeasible reasoning is a particular kind of non-demonstrative reasoning, where the reasoning does not produce a full, complete, or final demonstration of a claim, i.e., where fallibility and corrigibility of a conclusion are acknowledged. In other words, defeasible reasoning produces a contingent statement or claim. Defeasible reasoning is also a kind of ampliative reasoning because its conclusions reach beyond the pure meanings of the premises.

Defeasible reasoning finds its fullest expression in jurisprudence, ethics and moral philosophy, epistemology, pragmatics and conversational conventions in linguistics, constructivist decision theories, and in knowledge representation and planning in artificial intelligence. It is also closely identified with prima facie (presumptive) reasoning (i.e., reasoning on the "face" of evidence), and ceteris paribus (default) reasoning (i.e., reasoning, all things "being equal").

According to at least some schools of philosophy, all reasoning is at most defeasible, and there is no such thing as absolutely certain deductive reasoning, since it is impossible to be absolutely certain of all the facts (and know with certainty that nothing is unknown). Thus all deductive reasoning is in reality contingent and defeasible.

Other kinds of non-demonstrative reasoning

Other kinds of non-demonstrative reasoning are probabilistic reasoning, inductive reasoning, statistical reasoning, abductive reasoning, and paraconsistent reasoning.

The differences between these kinds of reasoning correspond to differences about the conditional that each kind of reasoning uses, and on what premise (or on what authority) the conditional is adopted:

  • Deductive (from meaning postulate or axiom): if p then q (equivalent to q or not-p in classical logic, not necessarily in other logics)
  • Defeasible (from authority): if p then (defeasibly) q
  • Probabilistic (from combinatorics and indifference): if p then (probably) q
  • Statistical (from data and presumption): the frequency of qs among ps is high (or inference from a model fit to data); hence, (in the right context) if p then (probably) q
  • Inductive (theory formation; from data, coherence, simplicity, and confirmation): (inducibly) "if p then q"; hence, if p then (deducibly-but-revisably) q
  • Abductive (from data and theory): p and q are correlated, and q is sufficient for p; hence, if p then (abducibly) q as cause
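
The contrast between the first two conditionals can be made concrete. The following minimal Python sketch (an illustration added to this post, not part of the encyclopedia text; the helper names are invented) checks the classical equivalence of "if p then q" with "q or not-p", and treats a defeasible conditional as a default that is withdrawn, rather than negated, when a known exception defeats it.

 from typing import Optional, Set

 def material_conditional(p: bool, q: bool) -> bool:
     """Deductive 'if p then q': in classical logic this is just (not p) or q."""
     return (not p) or q

 def defeasible_conclusion(p: bool, q_default: bool,
                           defeaters: Set[str], known: Set[str]) -> Optional[bool]:
     """Defeasible 'if p then (defeasibly) q': conclude q when p holds,
     unless a known fact defeats the rule; a defeated rule yields no conclusion."""
     if not p:
         return None                 # the rule is silent when p fails
     if defeaters & known:
         return None                 # an exception applies: the inference is blocked
     return q_default

 # The classical equivalence holds for every assignment of truth values.
 assert all(material_conditional(p, q) == (q or not p)
            for p in (True, False) for q in (True, False))

 # "If bird then (defeasibly) can fly", defeated by learning "penguin".
 print(defeasible_conclusion(True, True, defeaters={"penguin"}, known=set()))        # True
 print(defeasible_conclusion(True, True, defeaters={"penguin"}, known={"penguin"}))  # None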

History

Though Aristotle differentiated the forms of reasoning that are valid for logic and philosophy from the more general ones that are used in everyday life (see dialectics and rhetoric), 20th-century philosophers mainly concentrated on deductive reasoning. At the end of the 19th century, logic texts would typically survey both demonstrative and non-demonstrative reasoning, often giving more space to the latter. However, after the blossoming of mathematical logic at the hands of Bertrand Russell, Alfred North Whitehead, and Willard Van Orman Quine, later 20th-century logic texts paid little attention to non-deductive modes of inference.

There are several notable exceptions. John Maynard Keynes wrote his dissertation on non-demonstrative reasoning and influenced the thinking of Ludwig Wittgenstein on this subject. Wittgenstein, in turn, had many admirers, including the positivist legal scholar H. L. A. Hart and the speech act philosopher John L. Austin; Stephen Toulmin and Chaïm Perelman in rhetoric; the moral theorists W. D. Ross and C. L. Stevenson; and the vagueness epistemologist/ontologist Friedrich Waismann.

The etymology of defeasible usually refers to Middle English contract law, where a condition of defeasance is a clause that can invalidate or annul a contract or deed. Though defeat, dominate, defer, defy, deprecate, and derogate are often used in the same contexts as defease, the verbs annul and invalidate (and nullify, overturn, rescind, vacate, repeal, void, cancel, countermand, preempt, etc.) are more properly correlated with the concept of defeasibility than those words beginning with the letter d. Many dictionaries do contain the verb to defease, with past participle defeased.

Philosophers in moral theory and rhetoric had taken defeasibility largely for granted when American epistemologists rediscovered Wittgenstein's thinking on the subject: John Ladd, Roderick Chisholm, Roderick Firth, Ernest Sosa, Robert Nozick, and John L. Pollock all began writing with new conviction about how appearance as red was only a defeasible reason for believing something to be red. More importantly Wittgenstein's orientation toward language-games (and away from semantics) emboldened these epistemologists to manage rather than to expurgate prima facie logical inconsistency.

At the same time (in the mid-1960s), two more students of Hart and Austin at Oxford, Brian Barry and David Gauthier, were applying defeasible reasoning to political argument and practical reasoning (of action), respectively. Joel Feinberg and Joseph Raz were beginning to produce equally mature works in ethics and jurisprudence informed by defeasibility.

By far the most significant works on defeasibility by the mid-1970s were in epistemology, where John Pollock's 1974 Knowledge and Justification popularized his terminology of undercutting and rebutting (which mirrored the analysis of Toulmin). Pollock's work was significant precisely because it brought defeasibility so close to philosophical logicians. The failure of logicians to dismiss defeasibility in epistemology (as Cambridge's logicians had done to Hart decades earlier) landed defeasible reasoning in the philosophical mainstream.

Defeasibility had always been closely related to argument, rhetoric, and law, except in epistemology, where the chains of reasons, and the origin of reasons, were not often discussed. Nicholas Rescher's Dialectics is an example of how difficult it was for philosophers to contemplate more complex systems of defeasible reasoning. This was in part because proponents of informal logic became the keepers of argument and rhetoric while insisting that formalism was anathema to argument.

About this time, researchers in artificial intelligence became interested in non-monotonic reasoning and its semantics. With philosophers such as Pollock and Donald Nute (e.g., defeasible logic), dozens of computer scientists and logicians produced complex systems of defeasible reasoning between 1980 and 2000. No single system of defeasible reasoning would emerge in the same way that Quine's system of logic became a de facto standard. Nevertheless, the 100-year head start that demonstrative logical calculi (due to George Boole, Charles Sanders Peirce, and Gottlob Frege) had over non-demonstrative ones was being closed: both demonstrative and non-demonstrative reasoning now have formal calculi.

There are related (and slightly competing) systems of reasoning that are newer than systems of defeasible reasoning, e.g., belief revision and dynamic logic. The dialogue logics of Charles Hamblin and Jim Mackenzie, and their colleagues, can also be tied closely to defeasible reasoning. Belief revision is a non-constructive specification of the desiderata with which, or constraints according to which, epistemic change takes place. Dynamic logic is related mainly because, like paraconsistent logic, the reordering of premises can change the set of justified conclusions. Dialogue logics introduce an adversary, but are like belief revision theories in their adherence to deductively consistent states of belief.

Political and judicial use

Many political philosophers have been fond of the word indefeasible when referring to rights, e.g., rights held to be inalienable, divine, or indubitable. For example, the 1776 Virginia Declaration of Rights holds that a "community hath an indubitable, inalienable, and indefeasible right to reform, alter or abolish government..." (also attributed to James Madison); and John Adams wrote, "The people have a right, an indisputable, unalienable, indefeasible, divine right to that most dreaded and envied kind of knowledge – I mean of the character and conduct of their rulers." Also, Lord Aberdeen: "indefeasible right inherent in the British Crown"; and Gouverneur Morris: "the Basis of our own Constitution is the indefeasible Right of the People." Scholarship about Abraham Lincoln often cites these passages in discussions of the justification of secession. Philosophers who use the word defeasible have historically had different world views from those who use the word indefeasible (a distinction often mirrored in the Oxford and Cambridge zeitgeists); hence it is rare to find authors who use both words.

In judicial opinions, the use of defeasible is commonplace. There is however disagreement among legal logicians whether defeasible reasoning is central, e.g., in the consideration of open texture, precedent, exceptions, and rationales, or whether it applies only to explicit defeasance clauses. H.L.A. Hart in The Concept of Law gives two famous examples of defeasibility: "No vehicles in the park" (except during parades); and "Offer, acceptance, and memorandum produce a contract" (except when the contract is illegal, the parties are minors, inebriated, or incapacitated, etc.).

Specificity

One of the main disputes among those who produce systems of defeasible reasoning is the status of a rule of specificity. In its simplest form, it is the same rule as subclass inheritance preempting class inheritance:

 (R1) if r then (defeasibly) q                  e.g., if bird, then can fly
 (R2) if p then (defeasibly) not-q              e.g., if penguin, then cannot fly
 (O1) if p then (deductively) r                 e.g., if penguin, then bird
 (M1) arguably, p                               e.g., arguably, penguin
 (M2) R2 is a more specific reason than R1      e.g., R2 is better than R1
 (M3) therefore, arguably, not-q                e.g., therefore, arguably, cannot fly
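
To make the specificity rule concrete, here is a minimal Python sketch (an illustration added to this post, not drawn from any particular published system; all names are invented) in which the strict rule (O1) is used to decide that the penguin rule (R2) is more specific than the bird rule (R1), so only the more specific conclusion survives.

 # Strict (deductive) rules: antecedent -> consequents, e.g. (O1) penguin => bird.
 strict_rules = {"penguin": {"bird"}}

 # Defeasible rules: antecedent -> (conclusion atom, truth value).
 defeasible_rules = [
     ("bird", ("can_fly", True)),      # (R1) if bird then (defeasibly) can fly
     ("penguin", ("can_fly", False)),  # (R2) if penguin then (defeasibly) cannot fly
 ]

 def closure(facts):
     """Deductive closure of the facts under the strict rules."""
     facts = set(facts)
     changed = True
     while changed:
         changed = False
         for antecedent, consequents in strict_rules.items():
             if antecedent in facts and not consequents <= facts:
                 facts |= consequents
                 changed = True
     return facts

 def more_specific(a, b):
     """(M2): antecedent a is more specific than b if a strictly entails b."""
     return b in closure({a}) and a not in closure({b})

 def conclude(facts):
     """Apply the defeasible rules, letting strictly more specific rules defeat rivals.
     Ties between equally specific rivals are left unresolved in this sketch."""
     facts = closure(facts)
     applicable = [(ant, concl) for ant, concl in defeasible_rules if ant in facts]
     conclusions = {}
     for ant, (atom, value) in applicable:
         rivals = [a for a, (at, v) in applicable if at == atom and v != value]
         if not any(more_specific(r, ant) for r in rivals):
             conclusions[atom] = value
     return conclusions

 print(conclude({"penguin"}))  # {'can_fly': False}: (R2) defeats (R1) by specificity
 print(conclude({"bird"}))     # {'can_fly': True}: only (R1) applies

Running the sketch yields "cannot fly" for a penguin and "can fly" for a mere bird; systems that reject the specificity rule would instead require an explicit priority between (R1) and (R2).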

Approximately half of the systems of defeasible reasoning discussed today adopt a rule of specificity, while half expect that such preference rules be written explicitly by whoever provides the defeasible reasons. For example, Rescher's dialectical system uses specificity, as do early systems of multiple inheritance (e.g., David Touretzky) and the early argument systems of Donald Nute and of Guillermo Simari and Ronald Loui. Defeasible reasoning accounts of precedent (stare decisis and case-based reasoning) also make use of specificity (e.g., Joseph Raz and the work of Kevin D. Ashley and Edwina Rissland). Meanwhile, the argument systems of Henry Prakken and Giovanni Sartor, of Bart Verheij and Jaap Hage, and the system of Phan Minh Dung do not adopt such a rule.

Nature of defeasibility

There is a distinct difference between those who theorize about defeasible reasoning as if it were a system of confirmational revision (with affinities to belief revision), and those who theorize about defeasibility as if it were the result of further (non-empirical) investigation. There are at least three kinds of further non-empirical investigation: progress in a lexical/syntactic process, progress in a computational process, and progress in an adversary or legal proceeding.

Defeasibility as corrigibility
Here, a person learns something new that annuls a prior inference. In this case, defeasible reasoning provides a constructive mechanism for belief revision, like a truth maintenance system as envisioned by Jon Doyle.
Defeasibility as shorthand for preconditions
Here, the author of a set of rules or legislative code is writing rules with exceptions. Sometimes a set of defeasible rules can be rewritten, with more cogency, with explicit (local) pre-conditions instead of (non-local) competing rules. Many non-monotonic systems with fixed-point or preferential semantics fit this view. However, sometimes the rules govern a process of argument (the last view on this list), so that they cannot be re-compiled into a set of deductive rules lest they lose their force in situations with incomplete knowledge or incomplete derivation of preconditions.
Defeasibility as an anytime algorithm
Here, it is assumed that calculating arguments takes time, and at any given time, based on a subset of the potentially constructible arguments, a conclusion is defeasibly justified. Isaac Levi has protested against this kind of defeasibility, but it is well-suited to the heuristic projects of, for example, Herbert A. Simon. On this view, the best move so far in a chess-playing program's analysis at a particular depth is a defeasibly justified conclusion. This interpretation works with either the prior or the next semantic view. (A code sketch illustrating this view follows this list.)
Defeasibility as a means of controlling an investigative or social process
Here, justification is the result of the right kind of procedure (e.g., a fair and efficient hearing), and defeasible reasoning provides impetus for pro and con responses to each other. Defeasibility has to do with the alternation of verdict as locutions are made and cases presented, not the changing of a mind with respect to new (empirical) discovery. Under this view, defeasible reasoning and defeasible argumentation refer to the same phenomenon.
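
The anytime-algorithm view above can be illustrated with a toy lookahead search in Python (an invented example added to this post, deliberately far simpler than a real chess engine): at each search depth the currently best move is a defeasibly justified conclusion, and deepening the analysis can defeat it.

 # A toy game tree: each position maps to (static evaluation, list of (move, child)).
 TREE = {
     "start": (0, [("a", "A"), ("b", "B")]),
     "A": (5, [("a1", "A1")]),   # looks promising at depth 1 ...
     "B": (1, [("b1", "B1")]),
     "A1": (-9, []),             # ... but turns out badly at depth 2
     "B1": (4, []),
 }

 def evaluate(position, depth):
     """Depth-limited lookahead: use the static value at the horizon,
     otherwise the best value reachable from the children."""
     value, children = TREE[position]
     if depth == 0 or not children:
         return value
     return max(evaluate(child, depth - 1) for _, child in children)

 def best_move(position, depth):
     """The defeasibly justified 'best move so far' at a given search depth."""
     _, children = TREE[position]
     return max(children, key=lambda mc: evaluate(mc[1], depth - 1))[0]

 for depth in (1, 2):
     print("depth", depth, "- defeasibly best move:", best_move("start", depth))

Here the depth-1 conclusion "play a" is defeated at depth 2 not because new empirical facts arrived, but because more of the potentially constructible arguments have been computed.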

Fallibilism

From Wikipedia, the free encyclopedia
Charles Sanders Peirce around 1900. Peirce is said to have initiated fallibilism.

Originally, fallibilism (from Medieval Latin: fallibilis, "liable to err") is the philosophical principle that propositions can be accepted even though they cannot be conclusively proven or justified, or that neither knowledge nor belief is certain. The term was coined in the late nineteenth century by the American philosopher Charles Sanders Peirce, as a response to foundationalism. Theorists, following Austrian-British philosopher Karl Popper, may also refer to fallibilism as the notion that knowledge might turn out to be false. Furthermore, fallibilism is said to imply corrigibilism, the principle that propositions are open to revision. Fallibilism is often juxtaposed with infallibilism.

Infinite regress and infinite progress

According to philosopher Scott F. Aikin, fallibilism cannot properly function in the absence of infinite regress. The regress, usually attributed to the Pyrrhonist philosopher Agrippa, is argued to be the inevitable outcome of all human inquiry, since every proposition requires justification. Infinite regress, also represented within the regress argument, is closely related to the problem of the criterion and is a constituent of the Münchhausen trilemma. Well-known examples involving infinite regress are the cosmological argument, turtles all the way down, and the simulation hypothesis. Many philosophers struggle with the metaphysical implications that come along with infinite regress. For this reason, philosophers have sought creative ways to circumvent it.

In the seventeenth century, the English philosopher Thomas Hobbes set forth the concept of "infinite progress". With this term, Hobbes captured the human proclivity to strive for perfection. Philosophers such as Gottfried Wilhelm Leibniz, Christian Wolff, and Immanuel Kant would elaborate further on the concept. Kant even went on to speculate that an immortal species should hypothetically be able to develop its capacities to perfection. This sentiment is still alive today. Infinite progress has been associated with concepts like science, religion, technology, economic growth, consumerism, and economic materialism. All of these concepts thrive on the belief that they can carry on endlessly. Infinite progress has become the panacea for turning the vicious circles of infinite regress into virtuous circles. However, vicious circles have not yet been eliminated from the world; hyperinflation, the poverty trap, and debt accumulation, for instance, still occur.

As early as 350 BCE, the Greek philosopher Aristotle made a distinction between potential and actual infinities. Based on his discourse, it can be said that actual infinities do not exist, because they are paradoxical. Aristotle deemed it impossible for humans to keep on adding members to finite sets indefinitely, which eventually led him to refute some of Zeno's paradoxes. Other relevant examples of potential infinities include Galileo's paradox and the paradox of Hilbert's hotel. The notion that infinite regress and infinite progress only manifest themselves potentially pertains to fallibilism. According to philosophy professor Elizabeth F. Cooke, fallibilism embraces uncertainty, and infinite regress and infinite progress are not unfortunate limitations on human cognition but rather necessary antecedents for knowledge acquisition. They allow us to live functional and meaningful lives.

Critical rationalism

The founder of critical rationalism: Karl Popper

In the mid-twentieth century, several important philosophers began to critique the foundations of logical positivism. In his work The Logic of Scientific Discovery (1934), Karl Popper, the founder of critical rationalism, tried to solve the problem of induction by arguing for falsifiability as a means to devalue the verifiability criterion. He adamantly proclaimed that scientific truths are not inductively inferred from experience and conclusively verified by experimentation, but rather deduced from statements and justified by means of deliberation and intersubjective consensus within a particular scientific community. Popper also tried to resolve the problem of demarcation by asserting that all knowledge is fallible, except for knowledge acquired by means of falsification. Hence, Popperian falsifications are temporarily infallible, until they have been retracted by an adequate research community. Although critical rationalists dismiss the claim that all claims are fallible, they do believe that all claims are provisional. Counterintuitively, these provisional statements can become conclusive once logical contradictions have been turned into methodological refutations. The claim that all assertions are provisional and thus open to revision in light of new evidence is widely taken for granted in the natural sciences.

Popper insisted that verification and falsification are logically asymmetrical. However, according to the Duhem–Quine thesis, statements can neither be conclusively verified nor falsified in isolation from auxiliary assumptions (also called a bundle of hypotheses). As a consequence, statements are held to be underdetermined. Underdetermination explains how the evidence available to us may be insufficient to justify our beliefs. The Duhem–Quine thesis should therefore erode our belief in logical falsifiability as well as in methodological falsification. The thesis can be contrasted with a more recent view posited by philosophy professor Albert Casullo, which holds that statements can be overdetermined. Overdetermination explains how evidence might be considered sufficient for justifying beliefs in the absence of auxiliary assumptions. Philosopher Ray S. Percival holds that the Popperian asymmetry is an illusion, because in the act of falsifying an argument, scientists inevitably verify its negation; thus, verification and falsification are perfectly symmetrical. It seems, in the philosophy of logic, that neither syllogisms nor polysyllogisms will save underdetermination and overdetermination from the perils of infinite regress.

Furthermore, Popper defended his critical rationalism as a normative and methodological theory that explains how objective, and thus mind-independent, knowledge ought to work. The Hungarian philosopher Imre Lakatos built upon the theory by rephrasing the problem of demarcation as the problem of normative appraisal. Lakatos's and Popper's aims were alike: both sought rules that could justify falsifications. However, Lakatos pointed out that critical rationalism only shows how theories can be falsified, but omits how our belief in critical rationalism can itself be justified. Such a belief would require an inductively verified principle. When Lakatos urged Popper to admit that the falsification principle cannot be justified without embracing induction, Popper did not succumb. Lakatos's critical attitude towards rationalism has become emblematic of his so-called critical fallibilism. While critical fallibilism strictly opposes dogmatism, critical rationalism is said to require a limited amount of dogmatism; yet even Lakatos himself had been a critical rationalist in the past, when he took it upon himself to argue against the inductivist illusion that axioms can be justified by the truth of their consequences. In summary, although Lakatos and Popper each picked one stance over the other, both oscillated between a critical attitude towards rationalism and one towards fallibilism.

Fallibilism has also been employed by the philosopher Willard V. O. Quine to attack, among other things, the distinction between analytic and synthetic statements. The British philosopher Susan Haack, following Quine, has argued that the nature of fallibilism is often misunderstood, because people tend to confuse fallible propositions with fallible agents. She claims that logic is revisable, which means that analyticity does not exist and necessity (or apriority) does not extend to logical truths. She thereby opposes the conviction that propositions in logic are infallible, while agents can be fallible. Critical rationalist Hans Albert argues that it is impossible to prove any truth with certainty, not only in logic but also in mathematics.

Mathematical fallibilism

Imre Lakatos, in the 1960s, known for his contributions to mathematical fallibilism

In Proofs and Refutations: The Logic of Mathematical Discovery (1976), the philosopher Imre Lakatos brought mathematical proofs within what he called Popperian "critical fallibilism". Lakatos's mathematical fallibilism is the general view that all mathematical theorems are falsifiable. Mathematical fallibilism deviates from traditional views held by philosophers like Hegel, Peirce, and Popper. Although Peirce introduced fallibilism, he seems to preclude the possibility of us being mistaken in our mathematical beliefs. Mathematical fallibilism appears to uphold that even though a mathematical conjecture cannot be proven true, we may consider some to be good approximations or estimations of the truth. This so-called verisimilitude may provide us with consistency amidst an inherent incompleteness in mathematics. Mathematical fallibilism differs from quasi-empiricism to the extent that the latter does not incorporate inductivism, a feature considered to be of vital importance to the foundations of set theory.

In the philosophy of mathematics, the central tenet of fallibilism is undecidability (which bears resemblance to the notion of isostheneia, the antithesis of appearance and judgement). Two distinct senses of the word "undecidable" are currently in use. The first concerns statements that can neither be proved nor refuted in a specified deductive system; the standard example is the continuum hypothesis, proposed by the mathematician Georg Cantor in 1878. This type of undecidability is used in the context of the independence of the continuum hypothesis, because the statement is independent of the axioms of Zermelo–Fraenkel set theory combined with the axiom of choice (ZFC): both the hypothesis and its negation are consistent with these axioms. Many noteworthy discoveries preceded the establishment of this independence.

In 1877, Cantor showed that two sets have the same cardinality when their elements can be put into a one-to-one correspondence, famously proving that a line segment contains as many points as a square. The diagonal argument appeared in Cantor's theorem of 1891, which shows that the power set of any set must have strictly higher cardinality than the set itself. The existence of the power set was postulated in the axiom of power set, a vital part of Zermelo–Fraenkel set theory. Moreover, in 1899, Cantor's paradox was discovered: there is no set of all cardinalities. Two years later, the polymath Bertrand Russell would invalidate the existence of the universal set by pointing to Russell's paradox, which shows that a set of all sets that do not contain themselves leads to contradiction; in Zermelo–Fraenkel set theory no set can contain itself as an element (or member). The universal set can be confuted by utilizing either the axiom schema of separation or the axiom of regularity. In contrast to the universal set, a power set does not contain itself. It was in 1940 that the mathematician Kurt Gödel showed, by constructing an inner model (the constructible universe), that the continuum hypothesis cannot be refuted from ZFC, and in 1963 that fellow mathematician Paul Cohen revealed, through the method of forcing, that it cannot be proved from ZFC either. In spite of the undecidability, both Gödel and Cohen suspected the continuum hypothesis to be false. This sense of suspicion, in conjunction with a firm belief in the consistency of ZFC, is in line with mathematical fallibilism. Mathematical fallibilists suppose that new axioms, for example the axiom of projective determinacy, might improve ZFC, but that these axioms will not by themselves settle the continuum hypothesis.
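
The diagonal construction behind Cantor's theorem can even be checked mechanically on a small finite set. The Python sketch below (added for illustration; the three-element set and helper names are arbitrary) enumerates every function f from S to its power set and confirms that the diagonal set D = {x in S : x not in f(x)} never lies in the image of f, so no such function is onto.

 from itertools import product

 def powerset(s):
     """All subsets of s, as frozensets."""
     elems = list(s)
     return [frozenset(x for x, keep in zip(elems, bits) if keep)
             for bits in product([False, True], repeat=len(elems))]

 S = [0, 1, 2]
 subsets = powerset(S)

 # Enumerate every function f: S -> P(S) by choosing an image for each element.
 for images in product(subsets, repeat=len(S)):
     f = dict(zip(S, images))
     diagonal = frozenset(x for x in S if x not in f[x])
     assert diagonal not in f.values()  # the diagonal set witnesses non-surjectivity

 print("Checked all 8**3 functions: none maps a 3-element set onto its power set.")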

The second type of undecidability is used in relation to computability theory (or recursion theory) and applies not solely to statements but specifically to decision problems: mathematical questions of decidability. An undecidable problem is a computational problem comprising countably infinitely many instances, each requiring a yes-or-no answer (or a determination of whether a statement is true or false), but for which no computer program or Turing machine can always provide the correct answer: any candidate program would occasionally give a wrong answer or run forever without giving any answer. Famous examples of undecidable problems are the halting problem and the Entscheidungsproblem. Conventionally, an undecidable problem is formulated in terms of a set that is not recursive and is measured by its Turing degree. Undecidable problems are in this sense permanently unsolvable, but not every unsolved problem is undecidable. Undecidability, with respect to computer science and mathematical logic, is also called unsolvability or non-computability. In the end, both types of undecidability help to build a case for fallibilism by providing these fundamental thought experiments.
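
The standard diagonal argument for the halting problem can likewise be sketched in Python (added here for illustration; halts is a hypothetical decider, not a real library function), showing why no program can always answer correctly:

 def halts(program, argument):
     """Hypothetical decider: would return True iff program(argument) eventually halts.
     No correct, always-terminating implementation can exist."""
     raise NotImplementedError

 def paradox(program):
     """Diagonal construction: do the opposite of whatever halts() predicts
     about running `program` on its own source."""
     if halts(program, program):
         while True:      # predicted to halt -> loop forever
             pass
     return "halted"      # predicted to loop -> halt immediately

 # If halts() were correct and total, consider paradox(paradox):
 #   halts(paradox, paradox) == True   ->  paradox(paradox) loops forever (wrong answer)
 #   halts(paradox, paradox) == False  ->  paradox(paradox) halts         (wrong answer)
 # Either way the decider errs on this input, so no such decider can exist.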

Philosophical skepticism

Fallibilism improves upon the ideas associated with philosophical skepticism. According to philosophy professor Richard Feldman, nearly all versions of ancient and modern skepticism depend on the mistaken assumption that justification, and thus knowledge, requires conclusive evidence or certainty. An exception can be made for mitigated skepticism. In philosophical parlance, mitigated skepticism is an attitude which supports doubt in knowledge. This attitude is conserved in philosophical endeavors like scientific skepticism (or rational skepticism) and David Hume's inductive skepticism (or inductive fallibilism). Scientific skepticism questions the veracity of claims lacking empirical evidence, while inductive skepticism avers that inductive inference in forming predictions and generalizations cannot be conclusively justified or proven. Mitigated skepticism is also evident in the philosophical journey of Karl Popper. Furthermore, Popper demonstrates the value of fallibilism in his book The Open Society and Its Enemies (1945) by echoing the third maxim inscribed in the forecourt of the Temple of Apollo at Delphi: "surety brings ruin".

But the fallibility of our knowledge — or the thesis that all knowledge is guesswork, though some consists of guesses which have been most severely tested — must not be cited in support of scepticism or relativism. From the fact that we can err, and that a criterion of truth which might save us from error does not exist, it does not follow that the choice between theories is arbitrary, or non-rational: that we cannot learn, or get nearer to the truth: that our knowledge cannot grow.

— Karl Popper

Fallibilism differs slightly from academic skepticism (also called global skepticism, absolute skepticism, universal skepticism, radical skepticism, or epistemological nihilism) in the sense that fallibilists assume that no beliefs are certain (not even when established a priori), while proponents of academic skepticism hold that no knowledge exists. In order to defend their position, these skeptics will either engage in epochē, a suspension of judgement, or resort to acatalepsy, a rejection of all knowledge. The concept of epoché is often credited to Pyrrhonian skepticism, while the concept of acatalepsy can be traced back to multiple branches of skepticism. Acatalepsy is also closely related to the Socratic paradox. Nonetheless, epoché and acatalepsy are respectively self-contradictory and self-refuting, because each relies (be it logically or methodologically) on the very judgement or knowledge it rejects in order to serve as its own justification. Lastly, local skepticism is the view that people cannot obtain knowledge of a particular area or subject (e.g. morality, religion, or metaphysics).

Criticism

Nearly all philosophers today are fallibilists in some sense of the term. Few would claim that knowledge requires absolute certainty, or deny that scientific claims are revisable, though in the 21st century some philosophers have argued for some version of infallibilist knowledge. Historically, many Western philosophers from Plato to Saint Augustine to René Descartes have argued that some human beliefs are infallibly known. Plausible candidates for infallible beliefs include logical truths ("Either Jones is a Democrat or Jones is not a Democrat"), immediate appearances ("It seems that I see a patch of blue"), and incorrigible beliefs (i.e., beliefs that are true in virtue of being believed, such as Descartes' "I think, therefore I am"). Many others, however, have taken even these types of beliefs to be fallible.

Externalism

From Wikipedia, the free encyclopedia

Externalism is a group of positions in the philosophy of mind which argue that the conscious mind is not only the result of what is going on inside the nervous system (or the brain), but also of what occurs or exists outside the subject. It is contrasted with internalism, which holds that the mind emerges from neural activity alone. Externalism is the belief that the mind is not just the brain or functions of the brain.

There are different versions of externalism based on different beliefs about what the mind is taken to be. Externalism stresses factors external to the nervous system. At one extreme, the mind merely happens to depend on external factors; at the other, the mind necessarily depends on them. In its most extreme form, externalism argues that the mind is constituted by, or identical with, processes partially or totally external to the nervous system.

Another important criterion in externalist theory is which aspect of the mind is addressed. Some externalists focus on cognitive aspects of the mind – such as Andy Clark and David Chalmers, Shaun Gallagher, and many others – while others engage either the phenomenal aspect of the mind or the conscious mind itself. Philosophers who consider conscious phenomenal content and activity include William Lycan, Alex Byrne, and François Tonneau, as well as Teed Rockwell and Riccardo Manzotti.

Proto-externalists

The proto-externalist group includes authors who were not considered externalists but whose work suggests views similar to current forms of externalism. The first group of proto-externalists to consider is the group of neorealists active at the beginning of the 1900s. In particular, Edwin Holt suggested a view of perception that considered the external world as constitutive of mental content. His rejection of representation paved the way for considering the external object as being somehow directly perceived: "Nothing can represent a thing but that thing itself". Holt's words anticipated by almost a century the anti-representationalist slogan of Rodney Brooks: "The world is its best representation".

More recently, neorealist views were refreshed by François Tonneau, who wrote that "According to neorealism, consciousness is merely a part, or cross-section, of the environment. Neorealism implies that all conscious experiences, veridical or otherwise […]".

Another notable author is Alfred North Whitehead. Whitehead's process ontology is a form of externalism since it endorses a neutral ontology. Its basic elements (prehensions, actual occasions, events, and processes) range from microscopic activity up to the highest level of psychological and emotional life. David Ray Griffin has written an update on Whitehead's thought.

John Dewey also expressed a conception of the mind and its role in the world which is sympathetic with externalism.

Gregory Bateson also outlined an ecological view of the mind. Because of his background in cybernetics, he was familiar with the notion of feedback, which undermines the traditional separation between the inside and the outside of a system. He questioned the traditional boundary of the mind and tried to express an ecological view of it, attempting to show that the chasm between mind and nature is less obvious than it seems.

Semantic externalism

Semantic externalism was the first form of externalism to be so named. As the name suggests, it focuses on mental content of a semantic nature.

Semantic externalism suggests that mental content does not supervene on what is in the head, yet the physical basis and mechanisms of the mind remain inside the head. This is a relatively safe move since it does not jeopardize our belief that the mind is located inside the cranium. Hilary Putnam focused particularly on the intentional relation between our thoughts and external states of affairs – whether concepts or objects. To defend his position, Putnam developed the famous Twin Earth thought experiment. Putnam expressed his view with the slogan "'meanings' just ain't in the head."

In contrast, Tyler Burge emphasized the social nature of the external world suggesting that semantic content is externally constituted by means of social, cultural, and linguistic interactions.

Phenomenal externalism

Phenomenal externalism extends the externalist view to phenomenal content. Fred Dretske (Dretske 1996) suggested that "The experiences themselves are in the head (why else would closing one's eyes or stopping one's ears extinguish them?), but nothing in the head (indeed, at the time one is having the experiences, nothing outside the head) need have the qualities that distinguish these experiences." (Dretske 1996, pp. 144–145). So, although experiences remain in the head, their phenomenal content could depend on something elsewhere.

In a similar way, William Lycan defended an externalist and representationalist view of phenomenal experience. In particular, he objected to the tenet that qualia are narrow.

It has often been held that some, if not all, mental states must have a broad content, that is, a content external to their vehicles. For instance, Frank Jackson and Philip Pettit stated that "The contents of certain intentional states are broad or context-bound. The contents of some beliefs depend on how things are outside the subject" (Jackson and Pettit 1988, p. 381).

However, neither Dretske nor Lycan goes so far as to claim that the phenomenal mind extends literally and physically beyond the skin. In sum, they suggest that phenomenal contents could depend on phenomena external to the body, while their vehicles remain inside.

The extended mind

The extended mind model suggests that cognition extends beyond the body of the subject. According to such a model, the boundaries of cognitive processes are not always inside the skin. "Minds are composed of tools for thinking" (Dennett 2000, p. 21). According to Andy Clark, "cognition leaks out into body and world". The mind, then, is no longer confined to the skull but extends to include whatever tools prove useful (ranging from notepads and pencils to smartphones and USB memory sticks). This, in a nutshell, is the model of the extended mind.

When someone uses pencil and paper to compute large sums, cognitive processes extend to the pencil and paper themselves. In a loose sense, nobody would deny it. In a stronger sense, it can be controversial whether the boundaries of the cognitive mind extend to the pencil and paper. For most proponents of the extended mind, the phenomenal mind remains inside the brain. Commenting on Andy Clark's book Supersizing the Mind, David Chalmers asks, "what about the big question: extended consciousness? The dispositional beliefs, cognitive processes, perceptual mechanisms, and moods […] extend beyond the borders of consciousness, and it is plausible that it is precisely the nonconscious part of them that is extended." (Chalmers 2009, p. xiv)

Enactivism and embodied cognition

Enactivism and embodied cognition stress the tight coupling between cognitive processes, the body, and the environment. Enactivism builds upon the work of other scholars who could be considered proto-externalists, including Gregory Bateson, James J. Gibson, Maurice Merleau-Ponty, Eleanor Rosch, and many others. These thinkers suggest that the mind is either dependent on or identical with the interactions between the world and the agent. For instance, Kevin O'Regan and Alva Noë suggested in a seminal paper that the mind is constituted by the sensory-motor contingencies between the agent and the world. A sensory-motor contingency is an occasion to act in a certain way, and it results from the matching between environmental and bodily properties; to a certain extent, sensory-motor contingencies strongly resemble Gibson's affordances. Noë later developed a more epistemic version of enactivism in which the content is the knowledge the agent has as to what it can do in a certain situation. In any case, he is an externalist when he claims that "What perception is, however, is not a process in the brain, but a kind of skilful activity on the part of the animal as a whole. The enactive view challenges neuroscience to devise new ways of understanding the neural basis of perception and consciousness" (Noë 2004, p. 2). Noë has since published a shorter, more popular version of his position.

Enactivism receives support from various other correlated views such as embodied cognition or situated cognition. These views are usually the result of the rejection of the classic computational view of the mind which is centered on the notion of internal representations. Enactivism receives its share of negative comments, particularly from neuroscientists such as Christof Koch (Koch 2004, p. 9): "While proponents of the enactive point of view rightly emphasize that perception usually takes place within the context of action, I have little patience for their neglect of the neural basis of perception. If there is one thing that scientists are reasonably sure of, it is that brain activity is both necessary and sufficient for biological sentience."

To recap, enactivism is a case of externalism, sometimes restricted to cognitive or semantic aspects and at other times striving to encompass phenomenal aspects. Something that no enactivist has so far claimed is that all phenomenal content is the result of interaction with the environment.

Recent forms of phenomenal externalism

Some externalists suggest explicitly that phenomenal content, as well as the mental processes that produce it, is partially external to the body of the subject. These authors wonder whether not only cognition but also the conscious mind could be extended into the environment. While enactivism, at the end of the day, accepts the standard physicalist ontology that conceives of the world as made of interacting objects, these more radical externalists consider the possibility that there is some fundamental flaw in the way we conceive of reality and that some ontological revision is unavoidable.

Teed Rockwell published a wholehearted attack against all forms of dualism and internalism. He proposed that the mind emerges not entirely from brain activity but from an interacting nexus of brain, body, and world. He therefore endorses embodied cognition, holding that neuroscience wrongly endorses a form of Cartesian materialism, an indictment also issued by many others. Drawing on John Dewey's heritage, he argues that the brain and the body bring the mind into existence as a "behavioral field" in the environment.

Ted Honderich is perhaps the philosopher with the greatest experience in the field. He defends a position he himself dubbed "radical externalism", perhaps because of its ontological consequences. One of his main examples is that for you to be aware of the room you are in is for the room, in a way, to exist. According to him, "Phenomenologically, what it is for you to be perceptually conscious is for a world somehow to exist". Therefore, he identifies existence with consciousness.

Another radical form of phenomenal externalism is the view called the spread mind by Riccardo Manzotti. He questions the separation between subject and object, seeing these as only two incomplete perspectives and descriptions of the same physical process. He supports a process ontology that endorses a mind spread physically and spatio-temporally beyond the skin. Objects are not autonomous as we know them, but rather actual processes framing our reality.

Another explanation was proposed by Roger Bartra with his theory of the exocerebrum. He explains that consciousness is both inside and outside the brain, and that the frontier that separates both realms is useless and a burden in the explanation of the self. In his Anthropology of the Brain: Consciousness, Culture, and Free Will (Cambridge University Press, 2014; originally published in Spanish in 2005) he criticizes both externalism and internalism.

Lie point symmetry

From Wikipedia, the free encyclopedia https://en.wikipedia.org/wiki/Lie_point_symmetry     ...