
Sunday, June 11, 2023

Argument from reason

From Wikipedia, the free encyclopedia

The argument from reason is an argument against metaphysical naturalism and for the existence of God (or at least a supernatural being that is the source of human reason). The best-known defender of the argument is C. S. Lewis. Lewis first defended the argument at length in his 1947 book, Miracles: A Preliminary Study. In the second edition of Miracles (1960), Lewis substantially revised and expanded the argument.

Contemporary defenders of the argument from reason include Alvin Plantinga, Victor Reppert and William Hasker.

The argument

Metaphysical naturalism is the view that nature as studied by the natural sciences is all that exists. Naturalists deny the existence of a supernatural God, souls, an afterlife, or anything supernatural. Nothing exists outside or beyond the physical universe.

The argument from reason seeks to show that naturalism is self-refuting, or otherwise false and indefensible.

According to Lewis,

One absolutely central inconsistency ruins [the naturalistic worldview].... The whole picture professes to depend on inferences from observed facts. Unless inference is valid, the whole picture disappears.... [U]nless Reason is an absolute--all is in ruins. Yet those who ask me to believe this world picture also ask me to believe that Reason is simply the unforeseen and unintended by-product of mindless matter at one stage of its endless and aimless becoming. Here is flat contradiction. They ask me at the same moment to accept a conclusion and to discredit the only testimony on which that conclusion can be based.

— C. S. Lewis, "Is Theology Poetry?", The Weight of Glory and Other Addresses

More precisely, Lewis's argument from reason can be stated as follows:

1. No belief is rationally inferred if it can be fully explained in terms of nonrational causes.

Support: Reasoning requires insight into logical relations. A process of reasoning (P therefore Q) is rational only if the reasoner sees that Q follows from, or is supported by, P, and accepts Q on that basis. Thus, reasoning is trustworthy (or "valid", as Lewis sometimes says) only if it involves a special kind of causality, namely, rational insight into logical implication or evidential support. If a bit of reasoning can be fully explained by nonrational causes, such as fibers firing in the brain or a bump on the head, then the reasoning is not reliable, and cannot yield knowledge.

Consider this example: Person A refuses to go near the neighbor’s dog because he had a bad childhood experience with dogs. Person B refuses to go near the neighbor’s dog because one month ago he saw it attack someone. Both have given a reason for staying away from the dog, but person A’s reason is the result of nonrational causes, while person B has given an explanation for his behavior following from rational inference (animals exhibit patterns of behavior; these patterns are likely to be repeated; this dog has exhibited aggression towards someone who approached it; there is a good chance that the dog may exhibit the same behavior towards me if I approach it).

Consider a second example: person A says that he is afraid to climb to the 8th story of a bank building because he and humans in general have a natural fear of heights resulting from the processes of evolution and natural selection. He has given an explanation of his fear, but since his fear results from nonrational causes (natural selection), his argument does not follow from logical inference.

2. If naturalism is true, then all beliefs can be fully explained in terms of nonrational causes.

Support: Naturalism holds that nature is all that exists, and that all events in nature can in principle be explained without invoking supernatural or other nonnatural causes. Standardly, naturalists claim that all events must have physical causes, and that human thoughts can ultimately be explained in terms of material causes or physical events (such as neurochemical events in the brain) that are nonrational.

3. Therefore, if naturalism is true, then no belief is rationally inferred (from 1 and 2).

4. We have good reason to accept naturalism only if it can be rationally inferred from good evidence.

5. Therefore, there is not, and cannot be, good reason to accept naturalism (from 3 and 4).

In short, naturalism undercuts itself. If naturalism is true, then we cannot sensibly believe it or virtually anything else.

In some versions of the argument from reason, Lewis extends the argument to defend a further conclusion: that human reason depends on an eternal, self-existent rational Being (God). This extension of the argument from reason states:

1. Since everything in nature can be wholly explained in terms of nonrational causes, human reason (more precisely, the power of drawing conclusions based solely on the rational cause of logical insight) must have a source outside of nature.

2. If human reason came from non-reason it would lose all rational credentials and would cease to be reason.

3. So, human reason cannot come from non-reason (from 2).

4. So human reason must come from a source outside nature that is itself rational (from 1 and 3).

5. This supernatural source of reason may itself be dependent on some further source of reason, but a chain of such dependent sources cannot go on forever. Eventually, we must reason back to the existence of an eternal, non-dependent source of human reason.

6. Therefore, there exists an eternal, self-existent, rational Being who is the ultimate source of human reason. This Being we call God (from 4-5). (Lewis, Miracles, chap. 4)

Anscombe's criticism

On 2 February 1948, Oxford philosopher Elizabeth Anscombe read a paper to the Oxford Socratic Club criticizing the version of the argument from reason contained in the third chapter of Lewis's Miracles.

Her first criticism was against the use of the word "irrational" by Lewis (Anscombe 1981: 225-26). Her point was that there is an important difference between irrational causes of belief, such as wishful thinking, and nonrational causes, such as neurons firing in the brain, that do not obviously lead to faulty reasoning. Lewis accepted the criticism and amended the argument, basing it on the concept of nonrational causes of belief (as in the version provided in this article).

Anscombe's second criticism questioned the intelligibility of Lewis's intended contrast between "valid" and "invalid" reasoning. She wrote: "What can you mean by 'valid' beyond what would be indicated by the explanation you would give for distinguishing between valid and invalid, and what in the naturalistic hypothesis prevents that explanation from being given and from meaning what it does?" (Anscombe 1981: 226) Her point is that it makes no sense to contrast "valid" and "invalid" reasoning unless it is possible for some forms of reasoning to be valid. Lewis later conceded (Anscombe 1981: 231) that "valid" was a bad word for what he had in mind. Lewis didn't mean to suggest that if naturalism is true, no arguments can be given in which the conclusions follow logically from the premises. What he meant is that a process of reasoning is "veridical", that is, reliable as a method of pursuing knowledge and truth, only if it cannot be entirely explained by nonrational causes.

Anscombe's third objection was that Lewis failed to distinguish between different senses of the terms "why", "because", and "explanation", and that what counts as a "full" explanation varies by context (Anscombe 1981: 227-31). In the context of ordinary life, "because he wants a cup of tea" may count as a perfectly satisfactory explanation of why Peter is boiling water. Yet such a purposive explanation would not count as a full explanation (or an explanation at all) in the context of physics or biochemistry. Lewis accepted this criticism, and created a revised version of the argument in which the distinction between "because" in the sense of physical causality, and "because" in the sense of evidential support, became the central point of the argument (this is the version described in this article).

More recent critics have argued that Lewis's argument at best refutes only strict forms of naturalism that seek to explain everything in terms ultimately reducible to physics or purely mechanistic causes. So-called "broad" naturalists that see consciousness as an "emergent" non-physical property of complex brains would agree with Lewis that different levels or types of causation exist in nature, and that rational inferences are not fully explainable by nonrational causes.

Other critics have objected that Lewis's argument from reason fails because the causal origins of beliefs are often irrelevant to whether those beliefs are rational, justified, warranted, etc. Anscombe, for example, argues that "if a man has reasons, and they are good reasons, and they are genuinely his reasons, for thinking something—then his thought is rational, whatever causal statements we make about him" (Anscombe 1981: 229). On many widely accepted theories of knowledge and justification, questions of how beliefs were ultimately caused (e.g., at the level of brain neurochemistry) are viewed as irrelevant to whether those beliefs are rational or justified. Some defenders of Lewis claim that this objection misses the mark, because his argument is directed at what he calls the "veridicalness" of acts of reasoning (i.e., whether reasoning connects us with objective reality or truth), rather than with whether any inferred beliefs can be rational or justified in a materialistic world.

Criticism by eliminative materialists

The argument from reason claims that if beliefs, desires, and other contentful mental states cannot be accounted for in naturalism then naturalism is false. Eliminative materialism maintains that propositional attitudes such as beliefs and desires, among other intentional mental states that have content, cannot be explained on naturalism and therefore concludes that such entities do not exist. Even if successful, the argument from reason only rules out certain forms of naturalism and fails to argue against a conception of naturalism which accepts eliminative materialism to be the correct scientific account of human cognition.

Criticism by computationalists

Some critics think any argument from reason can be refuted simply by appealing to the existence of computers. According to this objection, computers reason; they are undeniably physical systems, yet they are also rational. So whatever incompatibility there might be between mechanism and reason must be illusory. Since computers do not operate on beliefs and desires and yet reach justified conclusions about the world, as in object recognition or the proving of mathematical theorems, it should be no surprise on naturalism that human brains can do the same. According to John Searle, computation and syntax are observer-relative, but the cognition of the human mind is not. This position seems to be bolstered by arguments from the indeterminacy of translation offered by Quine and by Kripke's skeptical paradox regarding meaning, which support the conclusion that the interpretation of algorithms is observer-relative. However, according to the Church–Turing thesis the human brain is a computer, and computationalism is a viable and developing research program in neuroscience for understanding how the brain works. Moreover, any indeterminacy of brain cognition does not entail that human cognitive faculties are unreliable, because natural selection has ensured that they contribute to the survival of biological organisms, contrary to claims made by the evolutionary argument against naturalism.

Similar views by other thinkers

Philosophers such as Victor Reppert, William Hasker and Alvin Plantinga have expanded on the argument from reason, and credit C.S. Lewis as an important influence on their thinking.

Lewis never claimed that he invented the argument from reason; in fact, he refers to it as a "venerable philosophical chestnut." Early versions of the argument occur in the works of Arthur Balfour (see, e.g., The Foundations of Belief, 1895, chap. 13) and G. K. Chesterton. In Chesterton's 1908 book Orthodoxy, in a chapter titled "The Suicide of Thought", he writes of the "great and possible peril . . . that the human intellect is free to destroy itself. ... It is idle to talk always of the alternative of reason and faith. It is an act of faith to assert that our thoughts have any relation to reality at all. If you are merely a sceptic, you must sooner or later ask yourself the question, 'Why should anything go right; even observation and deduction? Why should not good logic be as misleading as bad logic? They are both movements in the brain of a bewildered ape.'"

Similarly, Chesterton asserts that the argument is a fundamental, if unstated, tenet of Thomism in his 1933 book St. Thomas Aquinas: "The Dumb Ox":

Thus, even those who appreciate the metaphysical depth of Thomism in other matters have expressed surprise that he does not deal at all with what many now think the main metaphysical question; whether we can prove that the primary act of recognition of any reality is real. The answer is that St. Thomas recognised instantly, what so many modern sceptics have begun to suspect rather laboriously; that a man must either answer that question in the affirmative, or else never answer any question, never ask any question, never even exist intellectually, to answer or to ask. I suppose it is true in a sense that a man can be a fundamental sceptic, but he cannot be anything else: certainly not even a defender of fundamental scepticism. If a man feels that all the movements of his own mind are meaningless, then his mind is meaningless, and he is meaningless; and it does not mean anything to attempt to discover his meaning. Most fundamental sceptics appear to survive, because they are not consistently sceptical and not at all fundamental. They will first deny everything and then admit something, if for the sake of argument--or often rather of attack without argument. I saw an almost startling example of this essential frivolity in a professor of final scepticism, in a paper the other day. A man wrote to say that he accepted nothing but Solipsism, and added that he had often wondered it was not a more common philosophy. Now Solipsism simply means that a man believes in his own existence, but not in anybody or anything else. And it never struck this simple sophist, that if his philosophy was true, there obviously were no other philosophers to profess it.

In Miracles, Lewis himself quotes J. B. S. Haldane, who appeals to a similar line of reasoning in his 1927 book, Possible Worlds: "If my mental processes are determined wholly by the motions of atoms in my brain, I have no reason to suppose that my beliefs are true ... and hence I have no reason for supposing my brain to be composed of atoms."

Other versions of the argument from reason occur in C.E.M. Joad's Guide to Modern Philosophy (London: Faber, 1933, pp. 58–59), Richard Taylor's Metaphysics (Englewood Cliffs, NJ: Prentice Hall, 3rd ed., 1983, pp. 104–05), and J. P. Moreland's Scaling the Secular City: A Defense of Christianity (Grand Rapids, MI: Baker, 1987, chap. 3).

Peter Kreeft used the argument from reason to create a formulation of the argument from consciousness for the existence of God. He phrased it as follows:

  1. "We experience the universe as intelligible. This intelligibility means that the universe is graspable by intelligence."
  2. "Either this intelligible universe and the finite minds so well suited to grasp it are the products of intelligence, or both intelligibility and intelligence are the products of blind chance."
  3. "Not blind chance."
  4. "Therefore this intelligible universe and the finite minds so well suited to grasp it are the products of intelligence."

He used the argument from reason to affirm the third premise.

Scientific evidence

From Wikipedia, the free encyclopedia

Scientific evidence is evidence that serves to either support or counter a scientific theory or hypothesis, although scientists also use evidence in other ways, such as when applying theories to practical problems. Such evidence is expected to be empirical evidence and interpretable in accordance with scientific methods. Standards for scientific evidence vary according to the field of inquiry, but the strength of scientific evidence is generally based on the results of statistical analysis and the strength of scientific controls.

Principles of inference

A person's assumptions or beliefs about the relationship between observations and a hypothesis will affect whether that person takes the observations as evidence. These assumptions or beliefs will also affect how a person utilizes the observations as evidence. For example, the Earth's apparent lack of motion may be taken as evidence for a geocentric cosmology. However, after sufficient evidence is presented for heliocentric cosmology and the apparent lack of motion is explained, the initial observation is strongly discounted as evidence.

When rational observers have different background beliefs, they may draw different conclusions from the same scientific evidence. For example, Priestley, working with phlogiston theory, explained his observations about the decomposition of mercuric oxide using phlogiston. In contrast, Lavoisier, developing the theory of elements, explained the same observations with reference to oxygen. It is not a causal relationship between the observations and the hypothesis that makes the observations count as evidence; rather, that relationship is supplied by the person seeking to establish the observations as evidence.

A more formal method of characterizing the effect of background beliefs is Bayesian inference. In Bayesian inference, beliefs are expressed as probabilities indicating one's degree of confidence in them. One starts from an initial probability (a prior) and then updates that probability using Bayes' theorem after observing evidence. As a result, two independent observers of the same event can rationally arrive at different conclusions if their priors (which may encode previous observations relevant to the conclusion) differ. However, if they are allowed to communicate with each other, they will eventually come to agreement (per Aumann's agreement theorem).
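As a minimal illustration of this updating process (the coin-bias scenario and all numbers below are invented for the example and are not from the article), the following sketch shows two observers with different priors applying Bayes' theorem to the same evidence and arriving at different posteriors:

```python
# Minimal sketch of Bayesian updating: two observers with different priors see
# the same evidence and update with Bayes' theorem. The hypothesis H is "the
# coin is biased towards heads (P(heads) = 0.8)"; the alternative is a fair coin.
# The scenario and numbers are illustrative only.

def update(prior_h: float, likelihood_h: float, likelihood_not_h: float) -> float:
    """Return P(H | evidence) via Bayes' theorem."""
    evidence = likelihood_h * prior_h + likelihood_not_h * (1.0 - prior_h)
    return likelihood_h * prior_h / evidence

# Evidence: a single coin toss lands heads.
p_heads_if_biased = 0.8
p_heads_if_fair = 0.5

for name, prior in [("sceptical observer", 0.1), ("credulous observer", 0.7)]:
    posterior = update(prior, p_heads_if_biased, p_heads_if_fair)
    print(f"{name}: prior {prior:.2f} -> posterior {posterior:.2f}")
```

Running the sketch, the sceptical observer moves from 0.10 to about 0.15 while the credulous observer moves from 0.70 to about 0.79: the same evidence, interpreted rationally, yields different conclusions because the priors differ.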

The importance of background beliefs in determining what observations count as evidence can be illustrated using deductive reasoning, such as syllogisms: in the syllogism "All men are mortal; Socrates is a man; therefore Socrates is mortal", a reader who does not accept either of the premises as true will not accept the conclusion either.

Utility of scientific evidence

Philosophers, such as Karl R. Popper, have provided influential theories of the scientific method within which scientific evidence plays a central role. In summary, Popper holds that a scientist creatively develops a theory that may be falsified by testing it against evidence or known facts. Popper's theory presents an asymmetry: evidence can prove a theory wrong by establishing facts that are inconsistent with the theory, but evidence cannot prove a theory correct, because other evidence, yet to be discovered, may exist that is inconsistent with the theory.

Philosophical versus scientific views

In the 20th century, many philosophers investigated the logical relationship between evidence statements and hypotheses, whereas scientists tended to focus on how the data used for statistical inference are generated. But according to philosopher Deborah Mayo, by the end of the 20th century philosophers had come to understand that "there are key features of scientific practice that are overlooked or misdescribed by all such logical accounts of evidence, whether hypothetico-deductive, Bayesian, or instantiationist".

There were a variety of 20th-century philosophical approaches to decide whether an observation may be considered evidence; many of these focused on the relationship between the evidence and the hypothesis. In the 1950s, Rudolf Carnap recommended distinguishing such approaches into three categories: classificatory (whether the evidence confirms the hypothesis), comparative (whether the evidence supports a first hypothesis more than an alternative hypothesis) or quantitative (the degree to which the evidence supports a hypothesis). A 1983 anthology edited by Peter Achinstein provided a concise presentation by prominent philosophers on scientific evidence, including Carl Hempel (on the logic of confirmation), R. B. Braithwaite (on the structure of a scientific system), Norwood Russell Hanson (on the logic of discovery), Nelson Goodman (of grue fame, on a theory of projection), Rudolf Carnap (on the concept of confirming evidence), Wesley C. Salmon (on confirmation and relevance), and Clark Glymour (on relevant evidence). In 1990, William Bechtel provided four factors (clarity of the data, replication by others, consistency with results arrived at by alternative methods, and consistency with plausible theories of mechanisms) that biologists used to settle controversies about procedures and reliability of evidence.

In 2001, Achinstein published his own book on the subject titled The Book of Evidence, in which, among other topics, he distinguished between four concepts of evidence: epistemic-situation evidence (evidence relative to a given epistemic situation), subjective evidence (considered to be evidence by a particular person at a particular time), veridical evidence (a good reason to believe that a hypothesis is true), and potential evidence (a good reason to believe that a hypothesis is highly probable). Achinstein defined all his concepts of evidence in terms of potential evidence, since any other kind of evidence must at least be potential evidence. He argued that scientists mainly seek veridical evidence but also use the other concepts, all of which rely on a distinctive concept of probability, and he contrasted this concept of probability with previous probabilistic theories of evidence such as the Bayesian, Carnapian, and frequentist accounts.

Simplicity is one common philosophical criterion for scientific theories. Based on the philosophical assumption of the strong Church-Turing thesis, a mathematical criterion for evaluation of evidence has been conjectured, with the criterion having a resemblance to the idea of Occam's razor that the simplest comprehensive description of the evidence is most likely correct. It states formally, "The ideal principle states that the prior probability associated with the hypothesis should be given by the algorithmic universal probability, and the sum of the log universal probability of the model plus the log of the probability of the data given the model should be minimized." However, some philosophers (including Richard Boyd, Mario Bunge, John D. Norton, and Elliott Sober) have adopted a skeptical or deflationary view of the role of simplicity in science, arguing in various ways that its importance has been overemphasized.
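Written symbolically (the symbols M for the model, D for the data, and P_U for the algorithmic universal prior are introduced here only for illustration and do not appear in the quoted statement), the quoted principle amounts to a two-part minimization in the spirit of Occam's razor:

```latex
% Illustrative restatement of the quoted criterion (notation is assumed, not from the source):
% choose the model M whose description length, plus the description length of the
% data D given M, is smallest.
\hat{M} = \arg\min_{M} \left[ -\log P_U(M) \;-\; \log P(D \mid M) \right]
```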

Emphasis on hypothesis testing as the essence of science is prevalent among both scientists and philosophers. However, philosophers have noted that testing hypotheses by confronting them with new evidence does not account for all the ways that scientists use evidence. For example, when Geiger and Marsden scattered alpha particles through thin gold foil, the resulting data enabled their experimental adviser, Ernest Rutherford, to very accurately calculate the mass and size of an atomic nucleus for the first time. Rutherford used the data to develop a new atomic model, not only to test an existing hypothesis; such use of evidence to produce new hypotheses is sometimes called abduction (following C. S. Peirce). Social-science methodologist Donald T. Campbell, who emphasized hypothesis testing throughout his career, later increasingly emphasized that the essence of science is "not experimentation per se" but instead the iterative competition of "plausible rival hypotheses", a process that at any given phase may start from evidence or may start from hypothesis. Other scientists and philosophers have emphasized the central role of questions and problems in the use of data and hypotheses.

Concept of scientific proof

While the phrase "scientific proof" is often used in the popular media, many scientists and philosophers have argued that there is really no such thing as infallible proof. For example, Karl Popper once wrote that "In the empirical sciences, which alone can furnish us with information about the world we live in, proofs do not occur, if we mean by 'proof' an argument which establishes once and for ever the truth of a theory." Albert Einstein said:

The scientific theorist is not to be envied. For Nature, or more precisely experiment, is an inexorable and not very friendly judge of his work. It never says "Yes" to a theory. In the most favorable cases it says "Maybe", and in the great majority of cases simply "No". If an experiment agrees with a theory it means for the latter "Maybe", and if it does not agree it means "No". Probably every theory will someday experience its "No"—most theories, soon after conception.

However, in contrast to the ideal of infallible proof, in practice theories may be said to be proved according to some standard of proof used in a given inquiry. In this limited sense, proof is the high degree of acceptance of a theory following a process of inquiry and critical evaluation according to the standards of a scientific community.

Empirical evidence

From Wikipedia, the free encyclopedia

Empirical evidence for a proposition is evidence, i.e. what supports or counters this proposition, that is constituted by or accessible to sense experience or experimental procedure. Empirical evidence is of central importance to the sciences and plays a role in various other fields, like epistemology and law.

There is no general agreement on how the terms evidence and empirical are to be defined. Often different fields work with quite different conceptions. In epistemology, evidence is what justifies beliefs or what determines whether holding a certain belief is rational. This is only possible if the evidence is possessed by the person, which has prompted various epistemologists to conceive evidence as private mental states like experiences or other beliefs. In philosophy of science, on the other hand, evidence is understood as that which confirms or disconfirms scientific hypotheses and arbitrates between competing theories. For this role, it is important that evidence is public and uncontroversial, like observable physical objects or events and unlike private mental states, so that evidence may foster scientific consensus. The term empirical comes from Greek ἐμπειρία empeiría, i.e. 'experience'. In this context, it is usually understood as what is observable, in contrast to unobservable or theoretical objects. It is generally accepted that unaided perception constitutes observation, but it is disputed to what extent objects accessible only to aided perception, like bacteria seen through a microscope or positrons detected in a cloud chamber, should be regarded as observable.

Empirical evidence is essential to a posteriori knowledge or empirical knowledge, knowledge whose justification or falsification depends on experience or experiment. A priori knowledge, on the other hand, is seen either as innate or as justified by rational intuition and therefore as not dependent on empirical evidence. Rationalism fully accepts that there is knowledge a priori, which is either outright rejected by empiricism or accepted only in a restricted way as knowledge of relations between our concepts but not as pertaining to the external world.

Scientific evidence is closely related to empirical evidence but not all forms of empirical evidence meet the standards dictated by scientific methods. Sources of empirical evidence are sometimes divided into observation and experimentation, the difference being that only experimentation involves manipulation or intervention: phenomena are actively created instead of being passively observed.

Background

The concept of evidence is of central importance in epistemology and in philosophy of science but plays different roles in these two fields. In epistemology, evidence is what justifies beliefs or what determines whether holding a certain doxastic attitude is rational. For example, the olfactory experience of smelling smoke justifies or makes it rational to hold the belief that something is burning. It is usually held that for justification to work, the evidence has to be possessed by the believer. The most straightforward way to account for this type of evidence possession is to hold that evidence consists of the private mental states possessed by the believer.

Some philosophers restrict evidence even further, for example, to only conscious, propositional or factive mental states. Restricting evidence to conscious mental states has the implausible consequence that many simple everyday beliefs would be unjustified. This is why it is more common to hold that all kinds of mental states, including stored but currently unconscious beliefs, can act as evidence. Various of the roles played by evidence in reasoning, for example, in explanatory, probabilistic and deductive reasoning, suggest that evidence has to be propositional in nature, i.e. that it is correctly expressed by propositional attitude verbs like "believe" together with a that-clause, like "that something is burning". But it runs counter to the common practice of treating non-propositional sense-experiences, like bodily pains, as evidence. Its defenders sometimes combine it with the view that evidence has to be factive, i.e. that only attitudes towards true propositions constitute evidence. In this view, there is no misleading evidence. The olfactory experience of smoke would count as evidence if it was produced by a fire but not if it was produced by a smoke generator. This position has problems in explaining why it is still rational for the subject to believe that there is a fire even though the olfactory experience cannot be considered evidence.

In philosophy of science, evidence is understood as that which confirms or disconfirms scientific hypotheses and arbitrates between competing theories. Measurements of Mercury's "anomalous" orbit, for example, constitute evidence that plays the role of neutral arbiter between Newton's and Einstein's theory of gravitation by confirming Einstein's theory. For scientific consensus, it is central that evidence is public and uncontroversial, like observable physical objects or events and unlike private mental states. This way it can act as a shared ground for proponents of competing theories. Two issues threatening this role are the problem of underdetermination and theory-ladenness. The problem of underdetermination concerns the fact that the available evidence often provides equal support to either theory and therefore cannot arbitrate between them. Theory-ladenness refers to the idea that evidence already includes theoretical assumptions. These assumptions can hinder it from acting as neutral arbiter. It can also lead to a lack of shared evidence if different scientists do not share these assumptions. Thomas Kuhn is an important advocate of the position that theory-ladenness in relation to scientific paradigms plays a central role in science.

Definition

A thing is evidence for a proposition if it epistemically supports this proposition or indicates that the supported proposition is true. Evidence is empirical if it is constituted by or accessible to sensory experience. There are various competing theories about the exact definition of the terms evidence and empirical. Different fields, like epistemology, the sciences or legal systems, often associate different concepts with these terms. An important distinction among theories of evidence is whether they identify evidence with private mental states or with public physical objects. Concerning the term empirical, there is a dispute about where to draw the line between observable or empirical objects in contrast to unobservable or merely theoretical objects.

The traditional view proposes that evidence is empirical if it is constituted by or accessible to sensory experience. This involves experiences arising from the stimulation of the sense organs, like visual or auditory experiences, but the term is often used in a wider sense including memories and introspection. It is usually seen as excluding purely intellectual experiences, like rational insights or intuitions used to justify basic logical or mathematical principles. The terms empirical and observable are closely related and sometimes used as synonyms.

There is an active debate in contemporary philosophy of science as to what should be regarded as observable or empirical in contrast to unobservable or merely theoretical objects. There is general consensus that everyday objects like books or houses are observable since they are accessible via unaided perception, but disagreement starts for objects that are only accessible through aided perception. This includes using telescopes to study distant galaxies, microscopes to study bacteria or using cloud chambers to study positrons. So the question is whether distant galaxies, bacteria or positrons should be regarded as observable or merely theoretical objects. Some even hold that any measurement process of an entity should be considered an observation of this entity. So in this sense, the interior of the sun is observable since neutrinos originating there can be detected. The difficulty with this debate is that there is a continuity of cases going from looking at something with the naked eye, through a window, through a pair of glasses, through a microscope, etc. Because of this continuity, drawing the line between any two adjacent cases seems to be arbitrary. One way to avoid these difficulties is to hold that it is a mistake to identify the empirical with what is observable or sensible. Instead, it has been suggested that empirical evidence can include unobservable entities as long as they are detectable through suitable measurements. A problem with this approach is that it is rather far from the original meaning of "empirical", which contains the reference to experience.

Related concepts

Knowledge a posteriori and a priori

Knowledge or the justification of a belief is said to be a posteriori if it is based on empirical evidence. A posteriori refers to what depends on experience (what comes after experience), in contrast to a priori, which stands for what is independent of experience (what comes before experience). For example, the proposition that "all bachelors are unmarried" is knowable a priori since its truth only depends on the meanings of the words used in the expression. The proposition "some bachelors are happy", on the other hand, is only knowable a posteriori since it depends on experience of the world as its justifier. Immanuel Kant held that the difference between a posteriori and a priori is tantamount to the distinction between empirical and non-empirical knowledge.

Two central questions for this distinction concern the relevant sense of "experience" and of "dependence". The paradigmatic justification of knowledge a posteriori consists in sensory experience, but other mental phenomena, like memory or introspection, are also usually included in it. But purely intellectual experiences, like rational insights or intuitions used to justify basic logical or mathematical principles, are normally excluded from it. There are different senses in which knowledge may be said to depend on experience. In order to know a proposition, the subject has to be able to entertain this proposition, i.e. possess the relevant concepts. For example, experience is necessary to entertain the proposition "if something is red all over then it is not green all over" because the terms "red" and "green" have to be acquired this way. But the sense of dependence most relevant to empirical evidence concerns the status of justification of a belief. So experience may be needed to acquire the relevant concepts in the example above, but once these concepts are possessed, no further experience providing empirical evidence is needed to know that the proposition is true, which is why it is considered to be justified a priori.

Empiricism and rationalism

In its strictest sense, empiricism is the view that all knowledge is based on experience or that all epistemic justification arises from empirical evidence. This stands in contrast to the rationalist view, which holds that some knowledge is independent of experience, either because it is innate or because it is justified by reason or rational reflection alone. Expressed through the distinction between knowledge a priori and a posteriori from the previous section, rationalism affirms that there is knowledge a priori, which is denied by empiricism in this strict form. One difficulty for empiricists is to account for the justification of knowledge pertaining to fields like mathematics and logic, for example, that 3 is a prime number or that modus ponens is a valid form of deduction. The difficulty is due to the fact that there seems to be no good candidate of empirical evidence that could justify these beliefs. Such cases have prompted empiricists to allow for certain forms of knowledge a priori, for example, concerning tautologies or relations between our concepts. These concessions preserve the spirit of empiricism insofar as the restriction to experience still applies to knowledge about the external world. In some fields, like metaphysics or ethics, the choice between empiricism and rationalism makes a difference not just for how a given claim is justified but for whether it is justified at all. This is best exemplified in metaphysics, where empiricists tend to take a skeptical position, thereby denying the existence of metaphysical knowledge, while rationalists seek justification for metaphysical claims in metaphysical intuitions.

Scientific evidence

Scientific evidence is closely related to empirical evidence. Some theorists, like Carlos Santana, have argued that there is a sense in which not all empirical evidence constitutes scientific evidence. One reason for this is that the standards or criteria that scientists apply to evidence exclude certain evidence that is legitimate in other contexts.[38] For example, anecdotal evidence from a friend about how to treat a certain disease constitutes empirical evidence that this treatment works but would not be considered scientific evidence.[38][39] Others have argued that the traditional empiricist definition of empirical evidence as perceptual evidence is too narrow for much of scientific practice, which uses evidence from various kinds of non-perceptual equipment.[40]

Central to scientific evidence is that it was arrived at by following scientific method in the context of some scientific theory.[41] But people rely on various forms of empirical evidence in their everyday lives that have not been obtained this way and therefore do not qualify as scientific evidence. One problem with non-scientific evidence is that it is less reliable, for example, due to cognitive biases like the anchoring effect,[42] in which information obtained earlier is given more weight, although science done poorly is also subject to such biases, as in the example of p-hacking.[38]

Observation, experimentation and scientific method

In the philosophy of science, it is sometimes held that there are two sources of empirical evidence: observation and experimentation.[43] The idea behind this distinction is that only experimentation involves manipulation or intervention: phenomena are actively created instead of being passively observed.[44][45][46] For example, inserting viral DNA into a bacterium is a form of experimentation while studying planetary orbits through a telescope belongs to mere observation.[47] In these cases, the mutated DNA was actively produced by the biologist while the planetary orbits are independent of the astronomer observing them. Applied to the history of science, it is sometimes held that ancient science is mainly observational while the emphasis on experimentation is only present in modern science and responsible for the scientific revolution.[44] This is sometimes phrased through the expression that modern science actively "puts questions to nature".[47] This distinction also underlies the categorization of sciences into experimental sciences, like physics, and observational sciences, like astronomy. While the distinction is relatively intuitive in paradigmatic cases, it has proven difficult to give a general definition of "intervention" applying to all cases, which is why it is sometimes outright rejected.[47][44]

Empirical evidence is required for a hypothesis to gain acceptance in the scientific community. Normally, this validation is achieved by the scientific method of forming a hypothesis, experimental design, peer review, reproduction of results, conference presentation, and journal publication. This requires rigorous communication of hypothesis (usually expressed in mathematics), experimental constraints and controls (expressed in terms of standard experimental apparatus), and a common understanding of measurement. In the scientific context, the term semi-empirical is used for qualifying theoretical methods that use, in part, basic axioms or postulated scientific laws and experimental results. Such methods are opposed to theoretical ab initio methods, which are purely deductive and based on first principles. Typical examples of both ab initio and semi-empirical methods can be found in computational chemistry.

 

Objective idealism

From Wikipedia, the free encyclopedia

Objective idealism is a philosophical theory that affirms the ideal and spiritual nature of the world and conceives of the idea of which the world is made as the objective and rational form in reality rather than as subjective content of the mind or mental representation. Objective idealism thus differs both from materialism, which holds that the external world is independent of cognizing minds and that mental processes and ideas are by-products of physical events, and from subjective idealism, which conceives of reality as totally dependent on the consciousness of the subject and therefore relative to the subject itself.

Objective idealism starts with Plato’s theory of forms, which maintains that objectively existing but non-material "ideas" give form to reality, thus shaping its basic building blocks.

Within German idealism, objective idealism identifies with the philosophy of Friedrich Schelling. According to Schelling, the rational or spiritual elements of reality are supposed to give conceptual structure to reality and ultimately constitute reality, to the point that nature and mind, matter and concept, are essentially identical: their distinction is merely psychological and depends on our predisposition to distinguish the "outside us" (nature, world) from the "in us" (mind, spirit). Within that tradition of philosophical thought, the entire world manifests itself through ideas and is governed by purposes or ends: regardless of the existence of a self-conscious subject, all reality is a manifestation of reason.

The philosopher Charles Sanders Peirce defined his own version of objective idealism as follows:

The one intelligible theory of the universe is that of objective idealism, that matter is effete mind, inveterate habits becoming physical laws (Peirce, CP 6.25).

By "objective idealism", Pierce meant that material objects such as organisms have evolved out of mind, that is, out of feelings ("such as pain, blue, cheerfulness") that are immediately present to consciousness. Contrary to Hegel, who identified mind with conceptual thinking or reason, Pierce identified it with feeling, and he claimed that at the origins of the world there was "a chaos of unpersonalized feelings", i.e., feelings that were not located in any individual subject. Therefore, in the 1890s Pierce's philosophy referred to itself as subjective idealism because it held that the mind comes first and the world is essentially mind (idealism) and the mind is independent of individuals (objectivism).

Objective idealism has also been defined as a form of metaphysical idealism that accepts naïve realism (the view that empirical objects exist objectively) but rejects epiphenomenalist materialism (according to which the mind and spiritual values have emerged due to material causes). By contrast, subjective idealism denies that material objects exist independently of human perception and thus stands opposed to both realism and naturalism.

Saturday, June 10, 2023

Absolute idealism

From Wikipedia, the free encyclopedia

Absolute idealism is an ontologically monistic philosophy chiefly associated with G. W. F. Hegel and Friedrich Schelling, both of whom were German idealist philosophers in the 19th century. The label has also been attached to others such as Josiah Royce, an American philosopher who was greatly influenced by Hegel's work, and the British idealists.

A form of idealism, absolute idealism is Hegel's account of how being is ultimately comprehensible as an all-inclusive whole (das Absolute). Hegel asserted that in order for the thinking subject (human reason or consciousness) to be able to know its object (the world) at all, there must be in some sense an identity of thought and being. Otherwise, the subject would never have access to the object and we would have no certainty about any of our knowledge of the world.

To account for the differences between thought and being, however, as well as the richness and diversity of each, the unity of thought and being cannot be expressed as the abstract identity "A=A". Absolute idealism is the attempt to demonstrate this unity using a new "speculative" philosophical method, which requires new concepts and rules of logic. According to Hegel, the absolute ground of being is essentially a dynamic, historical process of necessity that unfolds by itself in the form of increasingly complex forms of being and of consciousness, ultimately giving rise to all the diversity in the world and in the concepts with which we think and make sense of the world.

The absolute idealist position dominated philosophy in nineteenth-century Britain and Germany, while exerting significantly less influence in the United States. The absolute idealist position should be distinguished from the subjective idealism of Berkeley, the transcendental idealism of Kant, or the post-Kantian transcendental idealism (also known as critical idealism) of Fichte and of the early Schelling.

Schelling and Hegel's Absolute

Dieter Henrich characterized Hegel's conception of the absolute as follows: "The absolute is the finite to the extent to which the finite is nothing at all but negative relation to itself" (Henrich 1982, p. 82). As Bowie describes it, Hegel's system depends upon showing how each view and positing of how the world really is has an internal contradiction: "This necessarily leads thought to more comprehensive ways of grasping the world, until the point where there can be no more comprehensive way because there is no longer any contradiction to give rise to it."

For Hegel, the interaction of opposites generates, in a dialectical fashion, all concepts we use in order to understand the world. Moreover, this development occurs not only in the individual mind, but also throughout history. In The Phenomenology of Spirit, for example, Hegel presents a history of human consciousness as a journey through stages of explanations of the world. Each successive explanation created problems and oppositions within itself, leading to tensions which could only be overcome by adopting a view that could accommodate these oppositions in a higher unity.

For Kant, reason was only for us, and the categories only emerged within the subject. However, for Hegel, reason is embodied, or immanent within being and the world. Reason is immanent within nature, and spirit emerges out of nature. Spirit is self-conscious reason knowing itself as reason.

The aim of Hegel was to show that we do not relate to the world as if it is other from us, but that we continue to find ourselves embedded in that world. With the realization that both mind and world are ordered according to the same rational principles, our access to the world has been made secure, a security which had been lost in Kant's proclamation that the thing-in-itself (Ding an sich) was ultimately inaccessible.

Hegel also stressed the importance of 'love' within the formulation of the absolute throughout his works:

The life of God — the life which the mind apprehends and enjoys as it rises to the absolute unity of all things — may be described as a play of love with itself; but this idea sinks to an edifying truism, or even to a platitude, when it does not embrace in it the earnestness, the pain, the patience, and labor, involved in the negative aspect of things.

Yet Hegel did not see Christianity per se as the route through which one reaches the absolute, but used its religious system as an historical exemplar of absolute spirit. Arriving at such an absolute was the domain of philosophy and theoretical inquiry. For Hegel speculative philosophy presented the religious content in an elevated, self-aware form.

Hegel's position is a critical transformation of the concept of the absolute advanced by Friedrich Wilhelm Joseph von Schelling (1775–1854), who argued for a philosophy of Identity:

‘Absolute identity’ is, then, the link of the two aspects of being, which, on the one hand, is the universe, and, on the other, is the changing multiplicity which the knowable universe also is. Schelling insists now that “The I think, I am, is, since Descartes, the basic mistake of all knowledge; thinking is not my thinking, and being is not my being, for everything is only of God or the totality” (SW I/7, p. 148), so the I is ‘affirmed’ as a predicate of the being by which it is preceded.

Yet this absolute is different from Hegel's, which is necessarily a telos or end result of the dialectic of multiplicities of consciousness throughout human history. For Schelling, the absolute is a causeless 'ground' upon which relativity (difference and similarity) can be discerned by human judgement (and thus permit 'freedom' itself) and this ground must be simultaneously not of the 'particular' world of finites but also not wholly different from them (or else there would be no commensurability with empirical reality, objects, sense data, etc. to be compared as 'relative' or otherwise):

The particular is determined in judgements, but the truth of claims about the totality cannot be proven because judgements are necessarily conditioned, whereas the totality is not. Given the relative status of the particular there must, though, be a ground which enables us to be aware of that relativity, and this ground must have a different status from the knowable world of finite particulars. At the same time, if the ground were wholly different from the world of relative particulars the problems of dualism would recur. As such the absolute is the finite, but we do not know this in the manner we know the finite. Without the presupposition of ‘absolute identity’, therefore, the evident relativity of particular knowledge becomes inexplicable, since there would be no reason to claim that a revised judgement is predicated of the same world as the preceding — now false — judgement.

In both Schelling and Hegel's 'systems' (especially the latter), the project aims towards a completion of metaphysics in such a way as to prioritize rational thinking (Vernunft), individual freedom, and philosophical and historical progress into a unity. Inspired by the system-building of previous Enlightenment thinkers like Immanuel Kant, Schelling and Hegel pushed idealism into new ontological territory (especially notable in Hegel's The Science of Logic (1812-16)), wherein a 'concept' of thought and its content are not distinguished, as Redding describes it:

While opinions divide as to how Hegel's approach to logic relates to that of Kant, it is important to grasp that for Hegel logic is not simply a science of the form of our thoughts. It is also a science of actual content as well, and as such has an ontological dimension.

Therefore, the syllogisms of logic, like those espoused in the ancient world by Aristotle and crucial to the logic of medieval philosophy, became not simply abstractions like mathematical equations but ontological necessities that describe existence itself, and from which 'truth' can be derived using reason and the dialectical method of understanding. Whereas rationality was the key to completing Hegel's philosophical system, Schelling could not accept the absolute priority accorded to Reason. Bowie elaborates on this:

Hegel's system tries to obviate the facticity of the world by understanding reason as the world's immanent self-articulation. Schelling, in contrast, insists that human reason cannot explain its own existence, and therefore cannot encompass itself and its other within a system of philosophy. We cannot, [Schelling] maintains, make sense of the manifest world by beginning with reason, but must instead begin with the contingency of being and try to make sense of it with the reason which is only one aspect of it and which cannot be explained in terms of its being a representation of the true nature of being.

Schelling's skepticism towards the prioritization of reason in the dialectical system constituting the Absolute therefore pre-empted the vast body of philosophy that would react against Hegelianism in the modern era. Schelling's response, however, was not to discard reason, as Nietzsche would, but, on the contrary, to treat nature as its embodiment. For Schelling, reason was an organic 'striving' in nature (not just an anthropocentric one), and this striving was one in which the subject and the object approached an identity. Schelling saw reason as the link between spirit and the phenomenal world, as Lauer explains: "For Schelling [...] nature is not the negative of reason, to be submitted to it as reason makes the world its home, but has since its inception been turning itself into a home for reason." In his Further Presentation of My System of Philosophy (Werke Ergänzungsband I, 391-424), Schelling argued that the comprehension of a thing is achieved through reason only when we see it within a whole. As Beiser (p. 17) explains:

The task of philosophical construction is then to grasp the identity of each particular with the whole of all things. To gain such knowledge we should focus upon a thing by itself, apart from its relations to anything else; we should consider it as a single, unique whole, abstracting from all its properties, which are only its partial aspects, and which relate it to other things. Just as in mathematical construction we abstract from all the accidental features of a figure (it is written with chalk, it is on a blackboard) to see it as a perfect exemplar of some universal truth, so in philosophical construction we abstract from all the specific properties of an object to see it in the absolute whole.

Hegel's doubts about intellectual intuition's ability to prove or legitimate that the particular is in identity with the whole led him to progressively formulate the system of the dialectic, now known as the Hegelian dialectic, in which concepts like the Aufhebung came to be articulated in the Phenomenology of Spirit (1807). Beiser (p. 19) summarizes the early formulation as follows:

a) Some finite concept, true of only a limited part of reality, would go beyond its limits in attempting to know all of reality. It would claim to be an adequate concept to describe the absolute because, like the absolute, it has a complete or self-sufficient meaning independent of any other concept.

b) This claim would come into conflict with the fact that the concept depends for its meaning on some other concept, having meaning only in contrast to its negation. There would then be a contradiction between its claim to independence and its de facto dependence upon another concept.

c) The only way to resolve the contradiction would be to reinterpret the claim to independence, so that it applies not just to one concept to the exclusion of the other but to the whole of both concepts. Of course, the same stages could be repeated on a higher level, and so on, until we come to the complete system of all concepts, which is alone adequate to describe the absolute.

Hegel's innovation in the history of German Idealism was to call for a self-consciousness or self-questioning that would lead to a more inclusive, holistic rationality of the world. The synthesis of one concept, deemed independently true per se, with another, contradictory concept (e.g. that the first is in fact dependent on some other thing) drives the history of rationality throughout human (largely European) civilization. For German Idealists like Fichte, Schelling and Hegel, the extrapolation or universalization of the human process of contradiction and reconciliation, whether conceptual, theoretical, or emotional, was a movement of the universe itself. It is understandable, then, why so many philosophers saw deep problems with Hegel's all-encompassing attempt at fusing anthropocentric and Eurocentric epistemology, ontology, and logic into a singular system of thought that would admit no alternative.

Neo-Hegelianism

Neo-Hegelianism is a school (or schools) of thought associated and inspired by the works of Hegel.

It refers mainly to the doctrines of an idealist school of philosophers that were prominent in Great Britain and in the United States between 1870 and 1920. The name is also sometimes applied to cover other philosophies of the period that were Hegelian in inspiration—for instance, those of Benedetto Croce and of Giovanni Gentile.

Hegelianism after Hegel

Although Hegel died in 1831, his philosophy still remains highly debated and discussed. In politics, there was a developing schism, even before his death, between right Hegelians and left Hegelians. The latter specifically took on political dimensions in the form of Marxism.

In the philosophy of religion, Hegel's influence soon became very powerful in the English-speaking world. The British school, called British idealism and partly Hegelian in inspiration, included Thomas Hill Green, Bernard Bosanquet, F. H. Bradley, William Wallace, and Edward Caird. It was importantly directed towards political philosophy and political and social policy, but also towards metaphysics and logic, as well as aesthetics.

In America, a school of Hegelian thought developed and moved toward pragmatism.

German twentieth-century neo-Hegelians

In Germany there was a neo-Hegelianism (Neuhegelianismus) of the early twentieth century, partly developing out of the Neo-Kantians. Richard Kroner wrote one of its leading works, a history of German idealism from a Hegelian point of view.


Criticisms

Exponents of analytic philosophy, which has been the dominant form of Anglo-American philosophy for most of the last century, have criticised Hegel's work as hopelessly obscure. Existentialists also criticise Hegel for ultimately choosing an essentialistic whole over the particularity of existence. Epistemologically, one of the main problems plaguing Hegel's system is how these thought determinations have bearing on reality as such. A perennial problem of his metaphysics seems to be the question of how spirit externalises itself and how the concepts it generates can say anything true about nature. At the same time, they will have to, because otherwise the concepts of Hegel's system would say nothing about anything that is not itself a concept, and the system would come down to being only an intricate game involving vacuous concepts.

Schopenhauer

Schopenhauer noted that Hegel created his absolute idealism after Kant had discredited all proofs of God's existence. The Absolute is a non-personal substitute for the concept of God. It is the one subject that perceives the universe as one object. Individuals share in parts of this perception. Since the universe exists as an idea in the mind of the Absolute, absolute idealism copies Spinoza's pantheism in which everything is in God or Nature.

Moore and Russell

Famously, G. E. Moore’s rebellion against absolutism found expression in his defense of common sense against the radically counter-intuitive conclusions of absolutism (e.g. time is unreal, change is unreal, separateness is unreal, imperfection is unreal, etc.). Moore also pioneered the use of logical analysis against the absolutists, a method that Bertrand Russell promulgated and deployed against the philosophies of his direct predecessors, helping to inaugurate the entire tradition of analytic philosophy. In recounting his own mental development Russell reports, "For some years after throwing over [absolutism] I had an optimistic riot of opposite beliefs. I thought that whatever Hegel had denied must be true." (Russell in Barrett and Adkins 1962, p. 477) Also:

G.E. Moore took the lead in the rebellion, and I followed, with a sense of emancipation. [Absolutism] argued that everything common sense believes in is mere appearance. We reverted to the opposite extreme, and thought that everything is real that common sense, uninfluenced by philosophy or theology, supposes real.

— Bertrand Russell; as quoted in Klemke 2000, p.28

Pragmatism

The works of William James and F. C. S. Schiller in particular, both founding figures of pragmatism, made lifelong assaults on Absolute Idealism. James was especially concerned with the monism that Absolute Idealism engenders, and with its consequences for the problem of evil, free will, and moral action. Schiller, on the other hand, attacked Absolute Idealism for being too disconnected from our practical lives, and argued that its proponents failed to realize that thought is merely a tool for action rather than a means of making discoveries about an abstract world that has no impact on us.

20th century

Absolute idealism has greatly altered the philosophical landscape. Paradoxically, (though, from a Hegelian point of view, maybe not paradoxically at all) this influence is mostly felt in the strong opposition it engendered. Both logical positivism and Analytic philosophy grew out of a rebellion against Hegelianism prevalent in England during the 19th century. Continental phenomenology, existentialism and post-modernism also seek to 'free themselves from Hegel's thought'.

Martin Heidegger, one of the leading figures of Continental philosophy in the 20th century, sought to distance himself from Hegel's work. One of Heidegger's philosophical themes in Being and Time was "overcoming metaphysics," aiming to distinguish his book from Hegelian tracts. After the 1927 publication, Heidegger's "early dismissal of them [German idealists] gives way to ever-mounting respect and critical engagement." He continued to compare and contrast his philosophy with Absolute idealism, principally due to critical comments that certain elements of this school of thought anticipated Heideggerian notions of "overcoming metaphysics."

Politics of Europe

From Wikipedia, the free encyclopedia ...